Virtualized testing keeps pace with NFV cost savings

January 28, 2015 | By Steve Jarman, Spirent Communications
The challenges of testing NFV

The benefits are significant and highly attractive; nevertheless, replacing thousands of specialized routers and appliances with NFV-based servers presents a real challenge. For a start, no one likes to junk costly equipment that still promises years of good service before it becomes obsolete, so migration to NFV will be paced by the depreciation of legacy equipment. In addition, staff skilled in traditional network deployments, CLIs and element management systems will need to be retrained to work with cloud-management systems.

Above all, there is the challenge of the unexpected when making any change to a complex system. However good the virtualization process looks on paper, what matters is how it works in practice. If the upgraded system fails or performs below expectation, it will mean angry customers and lost revenue, and will ultimately discourage the adoption of NFV. So everything – virtualized functions, virtual environments and end-to-end services – must be thoroughly validated by testing before deployment.

What are we testing for? The end user is mainly interested in performance and quality of experience – they want the service they have paid for to meet its SLAs. They do not really care whether the BNG, routing, CDN or mobility functions are virtualized or running on purpose-built appliances. For operators, performance and quality are also important, but there are additional concerns about control-plane and data-plane scale: whether, for example, the number of PPPoE sessions, the throughput and forwarding rates, and the number of MPLS tunnels and routes supported are broadly similar between the physical and virtual environments.
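To make "broadly similar" testable, the comparison between the two environments can be scripted. The Python sketch below is illustrative only: the KPI figures are invented, the 5% tolerance is an assumption rather than any standard threshold, and in practice the values would come from whatever test tool measures each environment.

# Compare control-plane and data-plane KPIs measured on the physical
# baseline against the same KPIs measured on the virtualized replacement.
# All figures here are illustrative.

physical_kpis = {
    "pppoe_sessions":       64000,    # concurrent PPPoE sessions
    "throughput_gbps":      40.0,     # sustained data-plane throughput
    "forwarding_rate_mpps": 59.5,     # forwarding rate, millions of packets/s
    "mpls_tunnels":         16000,    # established MPLS tunnels
    "routes":               1000000,  # routes held
}

virtual_kpis = {
    "pppoe_sessions":       61500,
    "throughput_gbps":      38.8,
    "forwarding_rate_mpps": 55.0,
    "mpls_tunnels":         15800,
    "routes":               990000,
}

TOLERANCE = 0.05  # accept up to a 5% shortfall as "broadly similar" (assumed)

def check_parity(physical, virtual, tolerance):
    """Flag every KPI where the virtual result falls more than
    `tolerance` below the physical baseline."""
    return [(kpi, baseline, virtual[kpi])
            for kpi, baseline in physical.items()
            if virtual[kpi] < baseline * (1 - tolerance)]

for kpi, baseline, measured in check_parity(physical_kpis, virtual_kpis, TOLERANCE):
    print(f"FAIL {kpi}: virtual {measured} vs physical baseline {baseline}")

With these invented numbers only the forwarding rate would be flagged; the other KPIs sit inside the tolerance band.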

The new, virtualized network may reduce costs, but it must not deliver worse service than the corresponding physical environment. Operators and users accustomed to 99.999% availability will expect the same of virtual environments, so node, link and service failures must be detected within milliseconds and corrective action taken promptly, without degradation of services.
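One way to verify the "within milliseconds" requirement is to inject a node or link failure while a steady test stream is running and measure the longest gap in received traffic. A minimal Python sketch, assuming a 50 ms recovery budget and invented timestamps from a 1 ms stream:

# Receive timestamps (seconds) captured on a test stream while a failure
# was injected; in a real test these come from the traffic analyzer.
rx_times = [0.000, 0.001, 0.002, 0.003, 0.004, 0.046, 0.047, 0.048]

RECOVERY_BUDGET_S = 0.050  # assumed fast-reroute target, not a standard value

def longest_gap(timestamps):
    """Return the longest interval between consecutive received packets."""
    return max(b - a for a, b in zip(timestamps, timestamps[1:]))

gap = longest_gap(rx_times)
print(f"longest service interruption: {gap * 1000:.1f} ms")
assert gap <= RECOVERY_BUDGET_S, "failover exceeded the recovery budget"

Here the stream stalls for 42 ms around the injected failure, which stays inside the assumed budget.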

When virtual machines are migrated between servers, any loss of packets or services must not break the relevant SLAs. Instantiating or deleting VMs can affect the performance of existing VMs as well as services on the server, so new VMs must be assigned appropriate compute cores and storage without degrading existing services. It is also critically important to test the virtual environment itself, including the orchestrator and cloud-management system.
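The migration check itself can be as simple as comparing frame counters across the migration window. A minimal sketch, assuming a hypothetical 0.1% loss ceiling in the SLA; the counters would come from the traffic generator and analyzer used in the test:

SLA_MAX_LOSS = 0.001  # assumed SLA: at most 0.1% of frames lost

frames_sent = 1_000_000    # frames offered while the VM was migrating
frames_received = 999_450  # frames that arrived at the far end

loss_ratio = (frames_sent - frames_received) / frames_sent
print(f"loss during migration: {loss_ratio:.4%}")

if loss_ratio > SLA_MAX_LOSS:
    print("FAIL: migration loss breaches the SLA")
else:
    print("PASS: migration stayed within the SLA")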

Pre-deployment and turn-up testing of the virtualized network functions means that they can be rolled out with confidence, but it is also important to monitor services and network functions, either on an ongoing, passive basis or on an as-needed, active basis, to make sure that the system can cope with evolving demand and unexpected surges. Monitoring virtual environments is more complex than monitoring their physical equivalents, because operators need to tap into either an entire service chain or just a subset of it. For active monitoring, a connection between the monitoring end-points must also be created on demand, again without degrading the performance of other functions in that environment that are not being monitored.
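An on-demand active probe amounts to injecting a short burst of test traffic between the two monitoring end-points, summarizing latency and loss, and then tearing the session down so the rest of the service chain is undisturbed. In the sketch below, send_probe and the end-point name are hypothetical stand-ins for whatever actually transmits the test traffic, stubbed with random values so the example is self-contained:

import random
import statistics

def send_probe(endpoint):
    """Stub: return a round-trip time in ms, or None if the probe was lost."""
    if random.random() < 0.01:       # simulate 1% probe loss
        return None
    return random.uniform(2.0, 8.0)  # simulate a 2-8 ms round trip

def active_probe(endpoint, count=100):
    """Send `count` probes and summarize latency and loss."""
    rtts = [r for r in (send_probe(endpoint) for _ in range(count))
            if r is not None]
    return {
        "loss_ratio": 1 - len(rtts) / count,
        "rtt_avg_ms": statistics.mean(rtts),
        "rtt_p95_ms": sorted(rtts)[int(0.95 * len(rtts)) - 1],
    }

# Probe only the chain segment under investigation, then stop.
print(active_probe("vnf-chain-segment-3"))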

