New industrial-grade, data-intensive applications create the need for a new, more decentralized infrastructure. This infrastructure must cater to the evolution of NFV as it is deployed across all types of network sites, and it must also provide new value to support application use cases.

Distributed Cloud Infrastructure

We call this a Distributed Cloud Infrastructure. It provides a unified approach across both centralized and decentralized resources, and makes optimal use of network connectivity in terms of both transport and access.

Examples of current and emerging applications that benefit from a Distributed Cloud Infrastructure include: content delivery networks (CDNs), data storage with regulatory compliance, hybrid cloud platforms, IoT data processing, video processing, VR/AR, machine learning, and control/decision systems.

Entering an era of topological scaling


Just as the IT era was about vertical scaling and the Web era was about horizontal scaling, the era we are now entering will be about topological scaling.

In the IT era, the main challenge was to build increasingly larger servers to support monolithic data processing for enterprises; that is, to “scale up”.

In the web era, the main challenge was to replicate simpler (stateless) processes across many servers in clouds supporting mobile consumer content and centralized big data batch analytics; that is, to “scale out”.

In the new era, the main challenge is to distribute data processing along the flows of data supporting billions of machines, real-time data stream analytics and control systems; that is, to “scale across”.
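As a minimal sketch of what "scaling across" can mean in practice, consider an edge site that reduces a raw machine-data stream locally and forwards only compact summaries toward the central cloud. The function and data below are illustrative assumptions, not part of the paper:

```python
# Hypothetical sketch: distribute processing along the data flow by
# summarizing raw readings at an edge site, so only compact records
# travel upstream to the central cloud.
from statistics import mean

def summarize_window(readings, window=10):
    """Reduce a raw stream to per-window summary records at the edge."""
    summaries = []
    for i in range(0, len(readings), window):
        chunk = readings[i:i + window]
        summaries.append({
            "count": len(chunk),   # readings covered by this summary
            "mean": mean(chunk),   # average value in the window
            "max": max(chunk),     # peak value in the window
        })
    return summaries

# Illustrative sensor stream: 20 raw readings become 2 summary records.
raw = [20.1, 20.3, 19.8, 21.0, 20.5, 20.2, 20.4, 19.9, 20.0, 20.6,
       22.5, 22.7, 22.4, 23.0, 22.8, 22.6, 22.9, 22.3, 22.2, 22.1]
upstream = summarize_window(raw)
```

With billions of machines, this kind of local reduction is what keeps real-time stream analytics from overwhelming the transport network.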


5 benefits of a decentralized architecture

Let's consider some of the benefits of a decentralized architecture.

  1. Low latency can be achieved by avoiding unmanaged networks between machines and computing resources. In most cases, the speed of light in a well-planned transport network is not a significant blocker; overall application-level responsiveness is also determined by load distribution and bandwidth. There are further gains from placing application components topologically close to each other. An example use case would involve industrial control loops, for example wind turbines, based on learning models or multi-machine correlations.
  2. Autonomy and security are not only concerns for individual devices or machines; they can also be required at the level of a business facility, such as a factory or a power station. Owning the required compute and storage resources enables a higher level of control and security. An example use case would involve highly sensitive facilities, such as factories and power stations.
  3. Resilience means being able to handle all relevant failure cases in a cost-effective way. Having smaller, more decentralized failure domains lowers the impact of each failure and decreases the need to allocate spare capacity. An example use case would involve society-critical facilities, such as power stations.
  4. Network scalability can be improved if we avoid transporting the massive data volumes generated by machines all the way to centrally located datacenters. This means placing compute and storage resources on "the other side" of scalability bottlenecks, and managing the resulting trade-offs. An example use case would involve massive numbers of video cameras with real-time computer vision processing.
  5. Regulatory compliance may require that data be processed and stored within certain borders: nation-state, regional, or business-facility. This can be achieved with data services built on policy-controlled local storage resources. An example use case would involve the processing of personal data, as many countries have regulations about keeping local copies and storing metadata.
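Several of these benefits come together in a single placement decision: where should a workload run, given a latency budget and a data-residency constraint? The sketch below is a simplified illustration under assumed site names, fields, and thresholds; a real scheduler would weigh many more factors:

```python
# Hypothetical sketch of policy-controlled placement: select a site that
# satisfies both regulatory data residency and a latency budget.
def place(workload, sites):
    candidates = [
        s for s in sites
        if s["region"] == workload["data_residency"]       # regulatory compliance
        and s["latency_ms"] <= workload["max_latency_ms"]  # latency budget
    ]
    # Prefer the lowest-latency compliant site; None if no site qualifies.
    return min(candidates, key=lambda s: s["latency_ms"]) if candidates else None

# Illustrative site topology, from central datacenter to metro edge.
sites = [
    {"name": "central-dc", "region": "EU", "latency_ms": 40},
    {"name": "metro-edge", "region": "EU", "latency_ms": 8},
    {"name": "us-central", "region": "US", "latency_ms": 45},
]
workload = {"data_residency": "EU", "max_latency_ms": 10}
chosen = place(workload, sites)  # the metro edge site is the only fit
```

Note how the regulatory filter and the latency filter interact: a distributed topology gives the scheduler compliant sites to choose from, whereas a purely centralized one may have none.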

Unified by software-defined infrastructure

Through a Distributed Cloud Infrastructure, network functions and customer applications can share the same resources. This infrastructure is unified by a software-defined infrastructure layer, which provides optical interconnect and infrastructure automation, together with an application and data platform, which provides interfaces for deploying and maintaining applications and data. This makes it possible to offer important services with security isolation, better application performance, and higher resource utilization.

The ability to have data and applications scale across a topology of compute, data, and networking resources is a major opportunity for operators and application businesses. This is especially true when combined with the connectivity infrastructure evolution, in which 5G provides new low-latency, network-slicing, and scalability properties.

Dive deeper into this topic in our new paper, Distributed Cloud Infrastructure:

  Download the paper

And please talk to us at Mobile World Congress 2017, where we will be talking about Distributed Cloud Infrastructure and Future Digital Infrastructure.



Martin Körling

Martin Körling manages cloud product strategy at Ericsson, in Stockholm, Sweden. Previously, Martin spent several years in Silicon Valley, leading product and research efforts at the Palo Alto AT&T Foundry, among other locations. He's also an innovator in the areas of cloud platform and network slicing. Martin has a PhD in Theoretical Physics from the Royal Institute of Technology, Stockholm.

