Cloud RAN and Mobile Edge Computing, a dichotomy in the making

February 24, 2016 // By Harpinder Singh Matharu, Xilinx Inc
Over the past few years, wireless infrastructure deployment has increasingly moved to a distributed base station architecture. This architecture centralizes the baseband processing pool, sometimes called a super macro, which can feed a larger number of radios and is therefore more effective for coverage and load balancing. Cloud RAN takes the centralized baseband pool all the way into the cloud, co-located with the content or data repositories in the data center. Cloud RAN has clear merits: it allows the use of lower cost compute, leveraging off-the-shelf server chassis for cost-effective RAN deployment, load balancing, and significantly easier network provisioning. In parallel, Mobile Edge Computing (MEC), under the auspices of the ETSI Mobile Edge Computing Industry Specification Group (MEC ISG), is emerging with the concept of placing compute at the edge, co-located with the baseband pool, maintaining a local content cache to deliver improved services to users.
A few hurdles on the way to Cloud RAN are slowing adoption. Low-latency, low-jitter, long-distance connectivity to remote radio heads is a major challenge, and off-the-shelf servers do not have the compute resources to run baseband processing efficiently. Telco-grade servers with accelerator cards for Layer 1 baseband are needed to host pools of baseband processing running in virtualized environments. System vendors that lag in certain geographies are championing this cause to disrupt markets and gain share, forcing incumbents to follow suit to protect their positions. Carriers welcome the trend, as they want to harmonize their cloud computing assets with network infrastructure to ease deployment and maintenance.
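To make the fronthaul latency constraint concrete, the short Python sketch below estimates how far a remote radio head can sit from a centralized baseband pool before fiber propagation alone exhausts the one-way latency budget. The figures used (a HARQ-derived budget on the order of 100-250 µs, roughly 5 µs/km fiber delay, and 20 µs of transport equipment overhead) are illustrative assumptions, not values taken from this article.

```python
# Illustrative fronthaul reach estimate for Cloud RAN.
# LTE HARQ timing forces the centralized baseband pool to respond within a fixed
# window, leaving only a small one-way latency budget for the fronthaul link itself.

FIBER_DELAY_US_PER_KM = 5.0  # assumed propagation delay in optical fiber (~5 us/km)

def max_fronthaul_reach_km(one_way_budget_us: float,
                           equipment_delay_us: float = 0.0) -> float:
    """Distance at which propagation alone consumes the one-way latency budget."""
    usable_us = one_way_budget_us - equipment_delay_us
    return max(usable_us, 0.0) / FIBER_DELAY_US_PER_KM

if __name__ == "__main__":
    # Assumed one-way budgets of 100 us and 250 us, with 20 us of switching/framing overhead.
    for budget_us in (100.0, 250.0):
        reach = max_fronthaul_reach_km(budget_us, equipment_delay_us=20.0)
        print(f"budget {budget_us:>5.0f} us -> ~{reach:.0f} km reach")
```

Under these assumptions the reach works out to a few tens of kilometres, which is why long-distance, low-jitter fronthaul transport is such a gating factor for pulling the baseband pool deep into the cloud.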

Figure 1: Cloud RAN network architecture.

Distributed base stations have their own unique benefits: they can cache content according to local users' preferences to improve service delivery, and they can process data close to the source for latency-sensitive applications. Proximity to users at the edge results in ultra-low-latency access, which opens opportunities to deploy customized services. MEC envisages the convergence of IT and communications at the network edge to enable new services and business segments. Location services, Internet-of-Things (IoT), video analytics, augmented reality, local content distribution, and data caching are some of the use cases identified by MEC. The MEC architecture proposes adding servers to macro and super macro base station sites for local compute and storage to enable new applications, and an application development stack, tools, and framework are in the making to let the ecosystem launch new applications and integrate services across multiple business verticals. Key hurdles in MEC's path are the cost of space rental for adding servers and storage at base station sites, maintenance, and charging policies. Currently, the policy and charging rules function (PCRF) is part of the core network and is controlled by the carrier; a derivative PCRF would need to be hosted locally at the base station to allow carriers and other content providers to charge end users fairly for services.
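As a deliberately simplified illustration of the local caching MEC describes, the Python sketch below models an edge content cache with least-recently-used eviction, so popular local content is served from the base station site while cold content falls back to the backhaul. The content names, sizes, and capacity are invented for the example and are not taken from the MEC specifications.

```python
# Minimal sketch of an edge content cache of the kind MEC envisages at a base station site.
from collections import OrderedDict

class EdgeCache:
    """LRU cache: popular local content stays at the edge, cold content is evicted."""

    def __init__(self, capacity_mb: int):
        self.capacity_mb = capacity_mb
        self.used_mb = 0
        self._items: "OrderedDict[str, int]" = OrderedDict()  # content_id -> size_mb

    def get(self, content_id: str) -> bool:
        """Return True on a local hit (served from the edge, no backhaul round trip)."""
        if content_id in self._items:
            self._items.move_to_end(content_id)  # mark as recently used
            return True
        return False

    def put(self, content_id: str, size_mb: int) -> None:
        """Insert content fetched over backhaul, evicting least-recently-used items if needed."""
        if content_id in self._items:
            self._items.move_to_end(content_id)
            return
        while self._items and self.used_mb + size_mb > self.capacity_mb:
            _, evicted_size = self._items.popitem(last=False)
            self.used_mb -= evicted_size
        self._items[content_id] = size_mb
        self.used_mb += size_mb

if __name__ == "__main__":
    cache = EdgeCache(capacity_mb=1000)
    cache.put("local-news-clip", 300)
    cache.put("stadium-replay", 500)
    print(cache.get("local-news-clip"))   # True: served locally, ultra-low latency
    print(cache.get("trending-global"))   # False: must be fetched over the backhaul
```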

Figure 2: Conceptual diagram of Mobile Edge Computing.
