Terminal testing for network operators: the applications and limitations of a network simulator

July 13, 2015 | By Francois Ortolan, Anritsu (EMEA)

Mobile phone subscribers care nothing about the complex mix of factors underlying the performance of their handset in the real world. By and large, if the user experiences a dropped call, or a low data download rate, or poor voice quality, he or she will blame the network operator. The real cause of the poor performance might be the handset, or the backhaul network - factors outside the direct control of the operator. To subscribers, this is irrelevant: they pay for mobile network service, and if they do not get it, the service provider must be at fault.

This means that the operator must take responsibility for understanding the way that each terminal affects the network, and the way that the network affects the terminals' performance. Although all mobile telephone networks and handsets conform to universal industry standards, such as the 3G and LTE standards specified by the 3GPP industry consortium, no two network configurations are identical. Equally, operators cannot rely solely on the product specifications provided to them by manufacturers of handsets and other terminal equipment.

To be able to draw on specific terminal performance data, each network operator must therefore carry out its own terminal tests in a realistic network environment. This article describes the various methods of terminal testing available, and shows how one of them - network simulation - offers some valuable types of information not available from any other test method.

The drawbacks of conventional KPIs

In fact, network operators can already draw on a rich set of data about terminal performance from the Operations Support Systems (OSS) that they run. These enable the operator to compare a terminal's performance against a set of Key Performance Indicators (KPIs) reported from the live network. While KPIs are a valuable tool for uncovering and analysing user problems in real time, the test results are generally unrepeatable, and the source of errors or faults cannot always be reliably identified. Failures might be caused by the terminal, the network or the operating environment: KPIs captured from a live network might not tell the operator which it is.
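To make the limitation concrete, the following sketch derives two common KPIs from OSS-style counters. The counter names, values and formulas here are simplified illustrations (real OSS interfaces and KPI definitions vary by vendor and operator); the point is that the resulting ratios quantify failures without attributing them to the terminal, the radio network or the backhaul.

```python
# Hypothetical sketch: deriving two common KPIs from OSS-style counters.
# Counter names and values are invented for illustration only.

def call_setup_success_rate(attempts: int, successes: int) -> float:
    """Accessibility KPI: share of call attempts that were set up."""
    return successes / attempts if attempts else 0.0

def call_drop_rate(established: int, dropped: int) -> float:
    """Retainability KPI: share of established calls that dropped."""
    return dropped / established if established else 0.0

# Example counters for one cell over one hour (invented values):
counters = {"attempts": 1200, "successes": 1164,
            "established": 1164, "dropped": 23}

cssr = call_setup_success_rate(counters["attempts"], counters["successes"])
cdr = call_drop_rate(counters["established"], counters["dropped"])
print(f"CSSR: {cssr:.1%}, drop rate: {cdr:.1%}")
# Neither figure says whether the terminal, the RAN or the backhaul
# caused the failures -- that attribution needs isolated, controlled tests.
```

Note that both KPIs are pure ratios over aggregated counters: the live network discards exactly the per-call context an engineer would need to reproduce a fault.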

In order to reliably capture actionable data about the user's experience, the terminal has to be isolated, or tested in known network conditions.

Field testing is one way to do this. Typically implemented on a laptop carried in a car, field testing provides information about the performance of terminals in a given (location-specific) network infrastructure, at a given time. The results obtained, however, are hard to reproduce and, since network conditions change constantly, any analysis of terminal performance over time is difficult.

Moreover, the infrastructure within which a terminal operates is generally owned by various providers, which makes it difficult for the engineer to gain a complete view of the parameters under which a test was conducted, or access to the traces of the tests. While field testing helps the operator to verify the network's performance in terms of accessibility, retainability, mobility, throughput and latency, the results provide little information to support the resolution of any problems it might uncover.
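The reproducibility problem can be illustrated with a short sketch that summarises drive-test samples per run. All run identifiers and measurement values below are invented: the summary shows that two runs over the same route can yield very different throughput and latency figures, and the averages alone cannot localise the cause.

```python
# Hypothetical sketch: summarising field-test (drive-test) samples per run.
# Values are invented; two runs of the same route can differ widely because
# live network conditions change between runs.
from statistics import mean

samples = [
    # (run_id, throughput_mbps, latency_ms)
    ("run1", 18.2, 45), ("run1", 12.7, 61), ("run1", 15.4, 52),
    ("run2", 6.9, 88), ("run2", 9.1, 79), ("run2", 8.3, 95),
]

def summarise(samples):
    """Group samples by run and report mean throughput and latency."""
    runs = {}
    for run_id, tput, lat in samples:
        runs.setdefault(run_id, []).append((tput, lat))
    return {
        run_id: {
            "mean_throughput_mbps": round(mean(t for t, _ in vals), 1),
            "mean_latency_ms": round(mean(l for _, l in vals), 1),
        }
        for run_id, vals in runs.items()
    }

print(summarise(samples))
```

The gap between the two runs is exactly the kind of variation that makes trend analysis over time unreliable, and it is what a network simulator removes by holding conditions constant.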
