Hyperscale is commonly described as the ability to scale efficiently and, with minimal effort, provision and add compute, memory, networking, and storage resources to a given node or set of nodes that make up a larger computing environment.

In the new white paper Hyperscale cloud—reimagining data centers from hardware to applications, we describe the work being undertaken to realize hyperscale through new datacenter architectures, and its impact on the infrastructure, the composition and resource orchestration layer, workload execution, and data and applications. These new architectures typically rely on the principles of hardware disaggregation and programmable infrastructure, drawing a distinction between today's infrastructure, referred to as "hardware-defined infrastructure," and the future software-defined infrastructure (SDI).

But what about the legacy infrastructure?

We do not see a sudden shift to SDI, but rather a gradual transformation. First, the transition to a fully software-defined infrastructure is a technological journey that will take several years, starting with storage disaggregation and gradually moving on to memory and CPU (see the white paper for details). Second, from a financial perspective, it does not make sense to scrap the infrastructure you already have before the end of its financial lifetime.

These realities reflect a well-established infrastructure paradigm that cannot be neglected; the coexistence of the old and the new is therefore mandatory. But the transition to the new infrastructure paradigm needs to be as smooth as possible. Let's examine the most important points.

A common management and monitoring tool is key

How can you collect data from all systems and get a complete overview of your entire datacenter, legacy equipment and disaggregated hardware alike, while still operating it smoothly? Without the full picture, you cannot achieve the efficiency required of a modern datacenter. As a system administrator, you need an out-of-band management tool that fulfills these requirements.

From the physical infrastructure perspective, it is up to the SDI composition and resource orchestration layer to ensure that the binding with legacy servers is possible. In other words, it should allow management of legacy hardware (with its inherent limitations) and do it in a multi-vendor environment.
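One way the orchestration layer can bind legacy and modern hardware together is to hide vendor-specific protocols behind a common interface. The Python sketch below illustrates this adapter pattern; the class names are hypothetical and the network calls are stubbed out, so it shows the structure rather than a working management client.

```python
from abc import ABC, abstractmethod

class ManagedNode(ABC):
    """Common interface the orchestration layer sees, regardless of vendor."""

    @abstractmethod
    def power_state(self) -> str: ...

class LegacyIpmiNode(ManagedNode):
    """Adapter for a legacy server reached over IPMI (stubbed here)."""
    def __init__(self, host):
        self.host = host
    def power_state(self):
        # A real adapter would open an IPMI session and issue a
        # 'chassis power status' request; here we return a stub value.
        return "on"

class RedfishNode(ManagedNode):
    """Adapter for modern hardware exposing a Redfish REST API (stubbed)."""
    def __init__(self, base_url):
        self.base_url = base_url
    def power_state(self):
        # A real adapter would GET /redfish/v1/Systems/<id> over HTTPS
        # and read the 'PowerState' property; here we return a stub value.
        return "on"

# The management tool iterates over a mixed fleet through one interface.
fleet = [LegacyIpmiNode("10.0.0.11"), RedfishNode("https://10.0.0.42")]
states = {type(n).__name__: n.power_state() for n in fleet}
print(states)
```

The point of the design is that the inventory and monitoring logic above the adapters never needs to know which protocol a given node speaks.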

IPMI is getting outdated

Datacenters typically feature a range of management solutions that are used for everything from infrastructure management to inventory management. When a new tool is brought into a datacenter that has an existing installed base (including existing management tools), it is critical that the new tool can be integrated with the existing ones.

The Intelligent Platform Management Interface (IPMI) is widely used as the basis for today's management and monitoring tools, but its age brings limitations when it comes to supporting modern architectures and data formats. For example, IPMI commands are executed serially, one at a time, whereas modern management protocols can handle requests in parallel and are therefore faster. Furthermore, IPMI is not truly interoperable across server brands: HPE iLO and Dell iDRAC, for example, are vendor-specific implementations built on IPMI with limited compatibility between them.
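The cost of that serial bottleneck is easy to illustrate. The sketch below uses plain Python with simulated delays standing in for real management traffic, so the sensor names and timings are purely illustrative, not actual IPMI or Redfish calls.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def query_sensor(sensor_id):
    """Simulate one management query with a fixed round-trip delay."""
    time.sleep(0.05)  # stand-in for network and controller latency
    return {"sensor": sensor_id, "status": "ok"}

sensors = ["cpu_temp", "fan_speed", "psu_voltage", "inlet_temp"]

# Serial execution, as with an IPMI session: one request at a time.
start = time.perf_counter()
serial_results = [query_sensor(s) for s in sensors]
serial_time = time.perf_counter() - start

# Parallel execution, as a REST-based protocol permits.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(sensors)) as pool:
    parallel_results = list(pool.map(query_sensor, sensors))
parallel_time = time.perf_counter() - start

print(f"serial: {serial_time:.2f}s, parallel: {parallel_time:.2f}s")
```

With four queries of equal latency, the parallel run finishes in roughly the time of a single query, while the serial run pays the full latency four times over; the gap widens with fleet size.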

Redfish is the new base

The drawbacks of IPMI have sparked interest in the open industry standard specification Redfish from the Distributed Management Task Force (DMTF). It provides a Representational State Transfer (REST)-based application programming interface (API) over HTTPS in JSON format, a modern, easy-to-read data format. A management system based on Redfish APIs allows simpler integration with already existing management tools.
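Because Redfish returns plain JSON over HTTPS, a monitoring tool can consume it with nothing more than an HTTP client and a JSON parser. The sketch below parses an abridged, invented response for a single server; the property names follow the published DMTF ComputerSystem schema, but the resource path, IDs, and values are made up for illustration.

```python
import json

# Abridged example of what GET /redfish/v1/Systems/1 might return;
# property names follow the DMTF schema, values are invented.
payload = """{
  "@odata.id": "/redfish/v1/Systems/1",
  "Id": "1",
  "Model": "ExampleServer X1",
  "PowerState": "On",
  "Status": {"State": "Enabled", "Health": "OK"},
  "ProcessorSummary": {"Count": 2},
  "MemorySummary": {"TotalSystemMemoryGiB": 128}
}"""

system = json.loads(payload)

# Reduce the resource to the fields a dashboard might display.
summary = {
    "model": system["Model"],
    "power": system["PowerState"],
    "health": system["Status"]["Health"],
    "cpus": system["ProcessorSummary"]["Count"],
    "memory_gib": system["MemorySummary"]["TotalSystemMemoryGiB"],
}
print(summary)
```

The same few lines work against any Redfish-conformant server, which is exactly the multi-vendor interoperability that IPMI implementations lost.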

In summary, the right management system will not only be the centerpiece of a software-defined datacenter, it will also be the tool for making sure you get the most out of what legacy infrastructure you currently have as you are on the way to a software-defined infrastructure.

Explore our thoughts on hyperscale datacenters further in our white paper: Hyperscale cloud—reimagining data centers from hardware to applications 



Jesper Tunér

Jesper Tunér joined Ericsson in 2006, after working at Accenture as a management consultant and, prior to that, at a small software company as a programmer. He has held several international positions in marketing, sales and finance throughout Europe, Asia, Africa and the Americas. In 2013 he was appointed Head of the Cloud Marketing Program at Ericsson. Since the start of 2016, he has worked with Cloud Strategy & Portfolio Management for the Product Area Cloud Systems in the Cloud & IP Business Unit. Jesper has a BSc in Computer Science and an MSc in Business Administration from Lund University, Sweden, followed by postgraduate courses at IMD Business School and Kellogg School of Management. He is passionate about the digital industrialization opportunities that come with cloud technology.