Many of these new applications require an end-to-end latency below 10 milliseconds; typical public clouds, however, are unable to fulfill such requirements. While 5G holds the promise of enabling these new applications and use cases, telecom operators and cloud and data center providers will have to make significant updates to their current networks. For example, telco edge data centers face hard constraints because they must fit within an existing legacy central office and, as a result, must contend with very limited space, power, and cooling. These telco edge data centers will need to host a variety of container-based applications, including real-time interactive applications, content delivery networks, basic networking services, mobile and fixed packet core nodes, and IoT frameworks.
Innovation at the Edge
Edge computing and the hybrid cloud are necessary components to help drive consistency across all infrastructure footprints. But a mishmash of software-defined networking has also resulted in a proliferation of virtual machines (VMs), creating the dreaded server bloat. Imagine, if you will, the added layers of code and configuration as routing, switching, gateways, firewalls, and more go virtual — all of this eats up the cloud’s valuable compute and storage resources. Worse, it can also introduce an extra layer of network latency by creating circuitous pathways, redundant packets, and loads of inefficiencies that can be invisible to service providers but add milliseconds of latency to every request.
One architectural approach is to create a clear separation between the control plane and data plane — for example, leveraging containers to consolidate and offload the data plane to P4-enabled switches. “Collapsing” the data plane, not just via containerization but also by logically unifying its code into the same hardware compute and storage area, avoids “tromboning” (the meandering data path among servers) and the replication of redundant networking code. Because P4-enabled switches provide server-like compute and storage capabilities, these data plane functions can be offloaded from servers to the switches themselves. Consolidating and offloading not only leads to savings in both CapEx and OpEx; it brings other benefits as well.
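To make the offload idea concrete, here is a minimal sketch of the kind of match-action logic a P4-programmable switch can take over from a server-based router. This is an illustrative P4_16 fragment, not code from any real deployment; the table, action, and header names are hypothetical, and a complete program would also need parser and control-block definitions.

```
/* Hypothetical P4_16 fragment: an L3 forwarding table that a telco edge
 * switch could run in hardware, in place of a server-hosted routing VNF. */
action set_next_hop(bit<48> dmac, bit<9> port) {
    hdr.ethernet.dstAddr = dmac;            // rewrite destination MAC
    standard_metadata.egress_spec = port;   // select the egress port
    hdr.ipv4.ttl = hdr.ipv4.ttl - 1;        // decrement TTL, as a router would
}

table ipv4_lpm {
    key = { hdr.ipv4.dstAddr : lpm; }       // longest-prefix match on destination IP
    actions = { set_next_hop; drop; }
    size = 1024;                            // illustrative table capacity
    default_action = drop();
}
```

Because the control plane stays separate, it can populate `ipv4_lpm` with routes at runtime while the switch forwards packets at line rate — no server CPU cycles, and no tromboning through a VM.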
In the event of a VM failure, many virtual network functions (VNFs) suffer severe service disruption. It can take several minutes to restart a VM after a crash. Such behavior is unacceptable for latency-sensitive applications — consider an autonomous vehicle, where milliseconds of added latency can result in a crash.
Additionally, integrating networking functions into the switches and running them in containers can potentially double the number of available servers. By freeing up additional servers, more capacity is available to run differentiated revenue-generating apps, like connected cars, augmented reality, streaming movies, and critical infrastructure, within the same limited space, power, and compute resources.
And, finally, P4 delivers the programmability for 5G providers to offer innovative new services on different network slices that meet the SLAs their customers require while also delivering secure, end-to-end, fully isolated networks.
At the heart of the new edge architecture lies a unified solution for distributed edge computing. For the longest time, network, compute, and storage capabilities were deployed as silos, each running its own control plane or operating system (OS). As much as automation has eased the deployment, configuration, and management of these elements, the need for common services — like zero-touch provisioning (ZTP), upgrades/downgrades, scalability on demand, and more — has generated a proliferation of different OSs, including network OSs. Kubernetes, for example, has become the de facto standard for orchestrating containers on compute. By using the same orchestration layer for network, compute, and storage, managing resources at the edge becomes a single, consistent process rather than three parallel ones.
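As a sketch of what "one orchestration layer for everything" could look like in practice, the hypothetical Kubernetes manifest below schedules a containerized network function onto edge nodes alongside ordinary compute workloads. The node label, image name, and resource figures are all illustrative assumptions, not a reference configuration.

```yaml
# Hypothetical sketch: using Kubernetes to place a containerized network
# function on edge switch nodes, managed like any other workload.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-routing-agent
spec:
  selector:
    matchLabels:
      app: edge-routing-agent
  template:
    metadata:
      labels:
        app: edge-routing-agent
    spec:
      nodeSelector:
        node-role/edge-switch: "true"   # illustrative label for P4-capable nodes
      hostNetwork: true                 # the network function needs host interfaces
      containers:
      - name: routing-agent
        image: example.com/edge/routing-agent:1.0   # placeholder image
        resources:
          limits:
            cpu: "500m"                 # stay within the node's limited compute
            memory: "256Mi"
```

With this pattern, the same scheduler, upgrade machinery, and ZTP-style provisioning that manage application containers also manage the networking layer.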
Today’s edge solutions must enable service providers to create multiple virtual data centers in support of network slicing as required by the 3GPP specifications. This partitioning is key to delivering on the promise of 5G networks. True network slicing will enable multiple operators to share a common distributed cloud infrastructure with each entity enjoying full isolation down to the hardware level for better security and a better quality of experience.