Did you know that you can include certain legacy servers in the virtual performance-optimized datacenters (vPODs) configured by our hyperscale system (Ericsson Hyperscale Datacenter System 8000)? In fact, if they speak either the Intelligent Platform Management Interface (IPMI) 2.0 protocol or the Pooled Systems Management Engine (PSME) protocol, we treat them as first-class vPOD citizens, with the same capabilities as our rack- and sled-based resources and no degradation in performance.

What does this mean to me?

These capabilities mean that you can:

  1. Keep the legacy systems you've already invested in while you get used to working with our vPODs, making for an easier transition to a software-defined infrastructure.
  2. Manage both the vPODs built from your legacy systems and their underlying hardware from our management platform.
  3. In the future, manage our vPODs and your legacy systems from your management platform, provided it communicates with our RESTful API (see below).

The only requirements are that those legacy systems have been certified to operate with our hyperscale system (see "What legacy systems can I include in vPODs?" below), and that you install a minimum configuration of our hyperscale system on your 19" rack, as described below.

What are vPODs again?

vPODs are the virtual systems defined by our next-generation hyperscale software-defined infrastructure. They are day one on the journey toward Ericsson's Future Digital Infrastructure: a software-defined infrastructure built on a disaggregated hardware architecture and a transformative networking structure.

vPODs are fully functional systems aggregated from pools of compute, storage, and network resources. Thanks to our optical networking, those resources may be located as far away as 500 meters from each other and still constitute a single system. And the workloads hosted by that system can remain isolated from other workloads in the same infrastructure.

Because they are an instance of software-defined infrastructure, you can create vPODs with whatever characteristics your workloads require. If you need high-performance computing, create a high-performance vPOD. If your workload interacts with lots of data, create a storage-intensive vPOD. You can create vPODs as needed and, when you no longer need those resources, you can delete the vPOD and return those resources to the common pool so they can be used by other vPODs. This flexibility provides a big jump in hardware utilization rates.
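
To make that lifecycle concrete, here is a minimal sketch of what composing and releasing a vPOD could look like through a REST-style management interface. Everything in it is illustrative: the endpoint paths, payload fields, and credentials are hypothetical stand-ins, not the actual Ericsson Command Center API.

    import requests

    # Hypothetical endpoint and credentials (not the actual Command Center API).
    BASE = "https://command-center.example.com/api/v1"
    AUTH = ("admin", "secret")

    # Describe the vPOD the workload needs (fields are illustrative).
    spec = {
        "name": "analytics-vpod",
        "compute": {"cores": 64, "memoryGiB": 512},
        "storage": {"capacityTiB": 20},
    }

    # Compose the vPOD from the common resource pool.
    created = requests.post(f"{BASE}/vpods", json=spec, auth=AUTH)
    created.raise_for_status()
    vpod_id = created.json()["id"]

    # ... run the workload ...

    # Delete the vPOD; its resources return to the pool for other vPODs.
    requests.delete(f"{BASE}/vpods/{vpod_id}", auth=AUTH).raise_for_status()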

Furthermore, if you build vPODs with compute resources powered by the Intel® Xeon® Scalable processor family, you can get native performance from your virtual environment.

What legacy systems can I include in vPODs?

You can include any compute resources that meet these two requirements:

  1. They speak either IPMI 2.0 or the new PSME protocol.
  2. They have been tested and validated to work with our hyperscale system.

If they meet those two conditions, Ericsson Command Center will discover them and make them available to our vPODs.
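
If you want to verify ahead of time that a server speaks IPMI 2.0 over the LAN, the open-source ipmitool utility can report the management controller's IPMI version. Here is a minimal sketch in Python, assuming ipmitool is installed and the address and credentials below are replaced with your own:

    import subprocess

    # Placeholders: replace with your BMC's address and credentials.
    HOST, USER, PASSWORD = "10.0.0.42", "admin", "password"

    # "mc info" reports the management controller's IPMI version.
    result = subprocess.run(
        ["ipmitool", "-I", "lanplus", "-H", HOST,
         "-U", USER, "-P", PASSWORD, "mc", "info"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines():
        if "IPMI Version" in line:
            print(line.strip())  # e.g. "IPMI Version : 2.0"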

The servers Ericsson has validated to date are listed here [ link ]. However, the Intel® Xeon® Scalable processor is worth highlighting.

A note about Intel® Xeon® Scalable processors

If you were already planning to purchase servers built on Intel® Xeon® Scalable processors, you have an even better starting point for your migration to a software-defined infrastructure.

This processor delivers monumental leaps in I/O, memory, storage, and network technologies that, at last, provide native performance from virtualized systems. It is designed to thrive across the broadest range of datacenter workloads, from high-performance computing to virtualized infrastructures to advanced analytics and artificial intelligence. The Intel® Xeon® Scalable processor has Intel's most advanced compute core, designed to provide integrated performance across the datacenter, from on-premises to hybrid to public cloud applications. And it provides "no drag" data protection for security without compromise. Install the servers in your rack as planned, and configure their compute resources into vPODs.

For more info about how Ericsson uses the new Intel® Xeon® Scalable processor, see SDI now with Intel's latest processor.

What is the minimum configuration of our hyperscale system?

The short answer is:

  1. A control network we create with our network cabling and switches
  2. A data network we create with our network cabling and switches
  3. Compute and storage resources from either your legacy system, our rack and sled units, or both
  4. Ericsson Command Center

The longer answer is more interesting. As described on our website, Ericsson Hyperscale Datacenter System 8000 consists of a 19" rack; compute, storage, and network resources; an optical backplane; and Ericsson Command Center. As an option, you can install our rack units in your own rack system. Either way, our hyperscale system comes in two flavors:

  • Electrical system
  • Optical system

[Image: Electrical and optical system configurations]

Each is described below, but it's important to note that you can have both an electrical system and an optical system in the same 19" rack, whether you supply it or we supply it. Ericsson Command Center can manage the compute, storage, and network resources of both, including the usual hardware operations such as inventory, power management, and firmware upgrades. Plus the creation and management of vPODs, of course.
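
As one illustration of those hardware operations, consider power management. Redfish, the DMTF standard discussed at the end of this post, defines a standard reset action for exactly this purpose. Here is a minimal sketch, assuming a Redfish-compliant endpoint; the host, system ID, and credentials are placeholders:

    import requests

    # Placeholders: a Redfish-compliant endpoint and credentials.
    HOST = "https://10.0.0.42"
    AUTH = ("admin", "password")
    SYSTEM = f"{HOST}/redfish/v1/Systems/1"  # system ID is illustrative

    # ComputerSystem.Reset is the standard Redfish power action.
    resp = requests.post(
        f"{SYSTEM}/Actions/ComputerSystem.Reset",
        json={"ResetType": "GracefulRestart"},
        auth=AUTH,
        verify=False,  # many BMCs ship with self-signed certificates
    )
    resp.raise_for_status()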

Electrical system

The electrical system consists of pooled resources that are connected by traditional electrical networks. Ericsson packages them into three types of rack units:

  • Compute Rack Units (CRUs)
  • Storage Rack Units (SRUs)
  • Network Rack Units (NRUs)

These rack units can, as the name implies, be installed on any 19" rack. The minimum configuration required to create vPODs is simply our NRU. It contains the data and control networks provided by our cabling and switches. In other words, by installing the NRU in your legacy 19" rack system, you can build vPODs with your compute and storage resources.

If you decide to use any of our CRUs and SRUs as well, you can build vPODs from any combination of your legacy compute and storage resources and ours.

The tool that discovers your legacy resources and builds and manages their vPODs is Ericsson Command Center, which comes pre-installed on each of our rack and sled units.

Optical system

The optical system consists of pooled resources that are connected by our light-speed optical backplane. Ericsson packages them into three types of sled units:

  • Compute Sled Unit (CSU)
  • Storage Sled Unit (SSU)
  • Network Sled Unit (NSU)

These sled units don't slide directly into a 19" rack. They slide into a Chassis Plenary Assembly (CPA), which provides the optical connections. The CPA, in turn, slides into any 19" rack.

[Image: Sled units installed in a Chassis Plenary Assembly]

This combination of sled units inside a chassis constitutes an optical-speed hyperscale system that lets you create vPODs from resources located as far as 500 meters apart without a meaningful latency penalty: light in optical fiber travels roughly 5 microseconds per kilometer, so 500 meters adds only about 2.5 microseconds of one-way propagation delay.

All you need in order to use your legacy resources in our vPODs is the electrical system described above. But you can also install our optical system in the same rack as our electrical system and use its resources for vPODs.

When will I be able to configure vPODs from my management platform?

That depends on your management platform. Northbound of Ericsson Command Center, we communicate using the Redfish RESTful API from the Distributed Management Task Force (DMTF). If your management platform can communicate using the Redfish API, it will be able to access the functions of Ericsson Command Center.
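
As a minimal sketch of what that integration looks like, any Redfish client can walk the service root and enumerate the systems an endpoint exposes. The host and credentials below are placeholders; the /redfish/v1 paths are defined by the DMTF Redfish standard:

    import requests

    # Placeholders: a Redfish-compliant endpoint and credentials.
    HOST = "https://command-center.example.com"
    AUTH = ("admin", "password")

    def get(path):
        """Fetch a Redfish resource and return its JSON body."""
        resp = requests.get(f"{HOST}{path}", auth=AUTH, verify=False)
        resp.raise_for_status()
        return resp.json()

    # The service root is always at /redfish/v1/ (DMTF-defined).
    root = get("/redfish/v1/")

    # Follow the Systems collection and print each member's identity.
    systems = get(root["Systems"]["@odata.id"])
    for member in systems["Members"]:
        system = get(member["@odata.id"])
        print(system.get("Id"), system.get("Name"), system.get("PowerState"))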

Find out more

Download the paper that explores vPODs in greater depth.


About the photograph

I took the photograph of the back door to a church in one of the mountain towns just inland of Praiano, Italy, in July of 2016. I believe the town was Furore.

Tags: Build your cloud and NFV infrastructure, Evaluate options

Rick Ramsey

I started my high tech training as an avionics technician in the US Air Force. While studying Economics at UC Berkeley, I wrote reference manuals and developer guides for two artificial intelligence languages, ART and SYNTEL. At Sun Microsystems I wrote about hardware, software, and toolkits for developers and sysadmins, and published “All About Administering NIS+.” I served as information architect before joining BigAdmin, which morphed into the Systems Community at Oracle. I left Oracle in May of 2015, and now write for Ericsson.
