Edge computing is needed for caching content locally based on user preferences to lower latency, and for handling ephemeral data such as location-based analytics. These two architectural concepts, centralized Cloud RAN and distributed edge computing, propose deploying compute at different nodes within the network. On the surface, the two appear to pull in opposite directions, creating a dichotomy within the network. A deeper look suggests that a balanced approach to deploying networks could leverage the merits of both, turning these competing technologies into complements that enable new services.
More than a decade ago, the concept of distributed base stations emerged from a desire to overcome the power loss incurred in sending signals over coaxial cable from traditional base stations at the foot of a tower to antennas mounted at its top. Radio heads were relocated to the tower top, in close proximity to the antennas, to eliminate these losses. The remote radio heads were connected to the baseband BTS chassis over fiber, and protocols such as the Common Public Radio Interface (CPRI) were devised to transport data and synchronize the remote radios. Where fiber was not available, microwave or millimeter wave radios were used to carry the CPRI payload. This architectural shift raised hopes that operators could mix and match radios and baseband chassis from different system vendors to lower costs, improve the supply chain, and ease inventory management. Interoperability concerns prevented this from materializing; nevertheless, the shift opened the way for tier-1 system vendors to source radios from smaller vendors to manage the rapid proliferation of radio variants for different geographies.
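To see why fronthaul demanded fiber, it helps to estimate the CPRI line rate needed for a single LTE carrier. The following back-of-the-envelope sketch uses commonly cited illustrative parameters (15-bit I/Q samples, one control word per 15 data words, 8b/10b line coding); actual deployments vary by sample width and CPRI option.

```python
# Back-of-the-envelope CPRI fronthaul rate for an LTE 20 MHz carrier.
# Parameters below are illustrative assumptions, not from any datasheet.
SAMPLE_RATE_HZ = 30.72e6    # LTE 20 MHz I/Q sample rate
BITS_PER_SAMPLE = 15        # bits per I and per Q component
IQ_COMPONENTS = 2           # I and Q
CONTROL_OVERHEAD = 16 / 15  # one control word per 15 data words
LINE_CODING = 10 / 8        # 8b/10b line-coding overhead

def cpri_rate_bps(antennas: int) -> float:
    """Estimated CPRI line rate in bits/s for `antennas` antenna-carriers."""
    payload = SAMPLE_RATE_HZ * IQ_COMPONENTS * BITS_PER_SAMPLE * antennas
    return payload * CONTROL_OVERHEAD * LINE_CODING

print(f"1 antenna : {cpri_rate_bps(1) / 1e9:.4f} Gbps")  # ~1.2288 Gbps
print(f"2x2 MIMO  : {cpri_rate_bps(2) / 1e9:.4f} Gbps")  # ~2.4576 Gbps
```

Even a single 20 MHz carrier with two antennas thus demands on the order of 2.5 Gbps of constant-rate transport, which explains both the preference for fiber and the use of high-capacity microwave or millimeter wave links as a fallback.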
Distributed base station architecture has taken root. It centralizes the baseband processing pool, sometimes called a super macro, which can feed a larger number of radios and thus deliver more effective coverage and load balancing. The success of data centers and Cloud computing led to the Cloud RAN concept, which extends the distributed base station architecture by virtualizing baseband pools running on server farms. Cloud RAN has clear merits: it permits the use of lower-cost compute, leveraging off-the-shelf server chassis for cost-effective RAN deployment, load balancing, and significantly easier network provisioning. Implemented broadly, Cloud RAN holds the promise of letting third-party providers own the network, enabling multiple virtual network providers to concentrate on content and services.
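The load-balancing benefit of a centralized baseband pool comes from statistical multiplexing: individual cells rarely peak at the same hour, so a shared pool can be dimensioned for the peak of the aggregate load rather than the sum of per-cell peaks. A minimal sketch with entirely synthetic traffic figures:

```python
import random

random.seed(42)

CELLS = 20
HOURS = 24

# Synthetic hourly load per cell (arbitrary units); random phase offsets
# model cells peaking at different times of day (office vs. residential).
loads = [
    [50 + 40 * abs(((h + offset) % 24) - 12) / 12 + random.uniform(-5, 5)
     for h in range(HOURS)]
    for offset in (random.randint(0, 23) for _ in range(CELLS))
]

# Dedicated baseband per cell: each cell sized for its own peak.
sum_of_peaks = sum(max(cell) for cell in loads)

# Centralized pool: sized for the peak of the aggregate load.
aggregate = [sum(cell[h] for cell in loads) for h in range(HOURS)]
peak_of_sum = max(aggregate)

print(f"dedicated capacity: {sum_of_peaks:.0f}")
print(f"pooled capacity:    {peak_of_sum:.0f}")
print(f"saving:             {1 - peak_of_sum / sum_of_peaks:.0%}")
```

The pooled figure can never exceed the dedicated one, and the gap widens as cell load profiles become less correlated; the saving shown here depends entirely on the synthetic traffic model.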
Cloud RAN architecture is seeing early acceptance in the Asia Pacific region, where operators have significant fiber assets with which to deploy remote radio heads. Carriers are investigating hosting layer 1-3 base station stacks and evolved packet cores on off-the-shelf servers as virtual machines. General-purpose compute cannot efficiently implement the layer 1 baseband functions, packet processing, and security at high throughput and low latency; these functions require servers equipped with specialized accelerator cards. The ability to host base stations as a set of software functions offers significant benefits. Carriers no longer need to build out network gear to peak capacity requirements; instead, base stations can be instantiated in the Cloud on demand to provide the desired coverage and capacity. Cloud RAN also allows base stations to be co-located in data centers, where most of the content resides, leading to higher efficiency and more effective dissemination of content.