Operators can learn a certain amount from the technical specifications that handset manufacturers provide, but they can learn far more by testing handsets directly and comparing their performance against the operator's own benchmarks. These benchmark results can inform marketing decisions by showing which handsets would strengthen the operator's portfolio of devices offered to subscribers.
Different terminals can also be compared for their potential to impair the operator's network through behaviour such as excessive signalling or RF power leakage. Operators rely on conformance tests administered by certification bodies such as PTCRB (North America) and the GCF (Global Certification Forum) to verify that a handset will work properly on a mobile network. But conformance tests generally show only that a handset achieves the minimum level of performance required by the standard, and then only in certain aspects of the device's behaviour.
An operator might prefer to support terminals that perform above the minimum required by the standard. It will also be concerned with its subscribers' experience of features that the conformance tests do not cover, such as power consumption, voice quality, stability, Wi-Fi offload and multi-radio coexistence.
In fact, some operators have made such benchmark tests a formal requirement for terminal manufacturers, through a form of certification referred to as CAT (Carrier Acceptance Testing). A network simulator is essential for benchmark testing: it is the only way to test different terminals in an identical network environment and so obtain true like-for-like results (see Figure 2). CAT benchmark testing is becoming particularly important for the growing number of machine-to-machine (M2M) devices coming onto the market, since there is currently no generally accepted conformance test for their validation.
Figure 2: automation framework for CAT (battery testing).
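To illustrate the like-for-like principle behind such an automation framework, the sketch below runs every device under test through an identical list of simulated network scenarios and ranks them by mean current draw. All names here (the `Scenario` fields, the stub measurement functions and their current figures) are illustrative assumptions, not a real simulator or power-analyser API.

```python
# Hypothetical sketch of a like-for-like CAT benchmark harness.
# A real setup would drive a network simulator and a power analyser;
# here, stub lambdas stand in for instrumented measurements.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass(frozen=True)
class Scenario:
    """One reproducible network condition served by the simulator."""
    name: str
    band: str
    downlink_power_dbm: float


# The same scenario list is replayed for every device under test: this
# repeatability is what a simulator provides and a live network cannot.
SCENARIOS = [
    Scenario("idle_camp", "Band 3", -85.0),
    Scenario("data_transfer", "Band 3", -85.0),
]


def benchmark(devices: Dict[str, Callable[[Scenario], float]],
              scenarios: List[Scenario]) -> Dict[str, float]:
    """Run every device through the same scenarios; return mean current (mA)."""
    results = {}
    for name, measure in devices.items():
        draws = [measure(sc) for sc in scenarios]
        results[name] = sum(draws) / len(draws)
    return results


# Stub measurement functions with invented current-draw figures (mA).
devices = {
    "handset_a": lambda sc: 12.0 if sc.name == "idle_camp" else 210.0,
    "handset_b": lambda sc: 15.0 if sc.name == "idle_camp" else 260.0,
}

ranking = sorted(benchmark(devices, SCENARIOS).items(), key=lambda kv: kv[1])
print(ranking[0][0])  # prints "handset_a": lowest mean draw in this benchmark
```

Because every handset sees exactly the same scenario sequence, differences in the results can be attributed to the devices themselves rather than to varying network conditions.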