Everything is a datacenter: your smartphone, your laptop, even the connected devices in your home, like your thermostat. Every one of these devices has compute, storage, networking, and a set of services and applications running on an operating system that connect into the broader fabric we know today as the cloud. One of the emerging challenges in today's device-dominated world is managing all of these endpoints.

To manage a device effectively, you must understand it. You must know what it is composed of, in software and, where applicable, hardware, and how it is running in terms of its utilization of CPU, memory, network, and disk, and even its temperature and power draw. You also have to understand the issues it may be experiencing, as indicated by alerts, logs, and alarms.
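To make this concrete, here is a minimal sketch of what collecting such a snapshot might look like on a single device, in Python with the third-party psutil library. The device_id field and the collect_snapshot helper are illustrative choices, not any particular product's schema:

```python
# A minimal, illustrative telemetry snapshot for one device.
# Assumes the third-party psutil library (pip install psutil).
import json
import time

import psutil


def collect_snapshot(device_id: str) -> dict:
    """Gather a point-in-time view of CPU, memory, disk and network."""
    net = psutil.net_io_counters()
    snapshot = {
        "device_id": device_id,
        "timestamp": time.time(),  # local clock; see the time problem below
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }
    # Temperature sensors are only exposed on some platforms (mostly Linux).
    if hasattr(psutil, "sensors_temperatures"):
        temps = psutil.sensors_temperatures()
        if temps:
            snapshot["temperatures"] = {
                name: [entry.current for entry in entries]
                for name, entries in temps.items()
            }
    return snapshot


if __name__ == "__main__":
    print(json.dumps(collect_snapshot("thermostat-001"), indent=2))
```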

This is fairly straightforward for a single known device: you can plan for it and build software that collects exactly the data you define. Scale this out to tens of thousands, and in some cases millions, of different devices distributed all over the world, and it becomes a massively complex problem.

Data problems

When we think about this environment, several unique challenges emerge. The data tends to be high in volume and velocity but low in variety. In other words, machines are fairly predictable in the types of data they generate, but there is simply a tremendous amount of it, in some cases more than any reasonable storage system can keep up with.
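Low variety is actually the saving grace here: because the shape of the data is predictable, a fixed schema plus aggressive aggregation can tame the volume. Below is a minimal sketch, with hypothetical field names, that rolls per-second CPU samples up into one-minute averages before they ever reach storage:

```python
# Illustrative rollup: aggregate high-velocity raw samples into
# one-minute averages so storage only sees a fraction of the volume.
from collections import defaultdict
from statistics import mean


def rollup(samples: list[dict], window_seconds: int = 60) -> list[dict]:
    """Group (device_id, timestamp, cpu_percent) samples into window averages."""
    buckets: dict[tuple, list[float]] = defaultdict(list)
    for s in samples:
        window_start = int(s["timestamp"]) // window_seconds * window_seconds
        buckets[(s["device_id"], window_start)].append(s["cpu_percent"])
    return [
        {
            "device_id": device_id,
            "window_start": window_start,
            "cpu_percent_avg": mean(values),
            "sample_count": len(values),
        }
        for (device_id, window_start), values in sorted(buckets.items())
    ]


# An hour of per-second samples (3,600 rows) collapses into 60 rows,
# trading sub-minute granularity for an order-of-magnitude volume cut.
raw = [
    {"device_id": "dev-1", "timestamp": t, "cpu_percent": 50.0}
    for t in range(3600)
]
print(len(rollup(raw)))  # -> 60
```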

Time problem

Even assuming we can collect data reliably, the data tends to be out of sync: time is relative, and clocks drift apart even with properly configured NTP. We therefore need to account for clock skew at every point in the network to derive a consistent observer view of time. Most current database platforms assume clocks are in sync in order for their complex consensus algorithms to converge.
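One common mitigation, sketched below with hypothetical names, is to never trust the device clock alone: timestamp each exchange on both ends and estimate the device's clock offset with the same four-timestamp calculation NTP uses, so readings can be mapped onto the collector's timeline:

```python
# Illustrative clock-offset estimation between a collector and a device,
# using the same four-timestamp calculation as NTP.


def estimate_offset(t0: float, t1: float, t2: float, t3: float) -> float:
    """Offset of the device clock relative to the collector clock.

    t0: collector sends request      (collector clock)
    t1: device receives request      (device clock)
    t2: device sends response        (device clock)
    t3: collector receives response  (collector clock)
    """
    return ((t1 - t0) + (t2 - t3)) / 2


def to_collector_time(device_timestamp: float, offset: float) -> float:
    """Map a device-stamped reading onto the collector's timeline."""
    return device_timestamp - offset


# Example: the device clock runs 2.5 seconds ahead of the collector.
offset = estimate_offset(t0=100.0, t1=102.6, t2=102.7, t3=100.3)
print(round(offset, 2))                  # -> 2.5
print(to_collector_time(102.7, offset))  # -> 100.2
```

Keeping both the raw device timestamp and the collector's receive timestamp also preserves the original data, so the mapping can be re-derived later if a better offset estimate becomes available.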

Machine problems

Machines are as diverse as biological life. No two machines are identical: they vary across operating systems, software packages, hardware components, and even firmware. They can be physical or virtual, and in some cases completely ephemeral. All of this variability creates a massive long-tail management problem, simply dealing with the diversity and churn as technology refresh cycles continue to accelerate. This is why we need open source, shared data, and proper standards: no single company or organization can possibly keep up with billions of ever-changing machines.
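Coping with this diversity starts with a shared, structured way of describing a machine, which is exactly where common standards and open data models help. The sketch below uses a hypothetical schema, not a proposed standard, to capture the dimensions above and derive a stable fingerprint so identical configurations can be recognized across a fleet:

```python
# Illustrative device inventory record: a structured description of a
# machine plus a stable fingerprint for spotting identical configurations.
import hashlib
import json
from dataclasses import dataclass, field


@dataclass
class DeviceInventory:
    device_id: str
    os_name: str
    os_version: str
    is_virtual: bool
    firmware: dict[str, str] = field(default_factory=dict)  # component -> version
    packages: dict[str, str] = field(default_factory=dict)  # package -> version
    hardware: dict[str, str] = field(default_factory=dict)  # component -> model

    def fingerprint(self) -> str:
        """Hash of the configuration (not the identity), stable across devices."""
        config = {
            "os": [self.os_name, self.os_version],
            "virtual": self.is_virtual,
            "firmware": sorted(self.firmware.items()),
            "packages": sorted(self.packages.items()),
            "hardware": sorted(self.hardware.items()),
        }
        blob = json.dumps(config, sort_keys=True).encode("utf-8")
        return hashlib.sha256(blob).hexdigest()


a = DeviceInventory("dev-1", "Linux", "5.4", False, packages={"openssl": "1.1.1"})
b = DeviceInventory("dev-2", "Linux", "5.4", False, packages={"openssl": "1.1.1"})
assert a.fingerprint() == b.fingerprint()  # same config, different devices
```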

Contributing to the open source community

At Ericsson, we have seen the convergence of cloud, 5G, and IoT coming for some time, and we have built a portfolio of software-defined infrastructure products to help bring the benefits of hyperscale to everyone. This week at the Open Compute Summit in Santa Clara, we will be announcing some significant contributions to open source to help address these challenges.

See you at the OCP Summit!

To further explore our ideas, please read our new paper on distributed cloud infrastructure!

Smita Deshpande

Smita is on the Ericsson Cloud Marketing team and leads Product Marketing for Developer Platforms. Prior to Ericsson, Smita worked at VMware where she led Technical Partner Product Marketing for NSX, VMware’s network virtualization platform.
