When you upgrade your datacenter, how much do you throw away? You shouldn't have to discard everything just to replace a single type of component. Intel® Rack Scale Design solves this problem by breaking components down into resource pools that can be replaced separately from each other.

Throwing good resources after bad

When your laptop gets old, you usually get rid of it - the whole thing - even though it might be only a single component that is out of date. Maybe you want a faster CPU. You don't pull out the hard drive, the NIC, and the DIMM modules for later use, even though they might all be in fine shape.

Where would you use them? Your new laptop will have all that stuff built in.

If you're replacing a laptop, this type of thinking makes sense. But if you're upgrading a datacenter, it doesn't. At scale, all those discarded components cost real money. If you have to replace a thousand CPUs, it's a shame to also have to replace the thousands of components associated with them.

Thinking outside the box

And yet, in a traditional datacenter, that's how it's done. A server comes in a box. When it's time for any part of the box to be replaced, the datacenter just gets a new box. In theory, a technician could swap out just the one outdated component for a new one. But that isn't cost effective: there are too many components to upgrade, and technicians cost too much money.

Ericsson solves this problem with the Hyperscale Datacenter System (HDS) 8000, which utilizes Intel® Rack Scale Design (RSD) and offers the first steps toward disaggregation.

Unlike a traditional rack server, the HDS 8000 is not an immutable box. It contains sleds that each hold a different type of component. Today those types are compute, storage, and networking; in the future, the breakdown will be even more granular. These groups of components, along with others in the datacenter, make up resource pools.

In the future, if one type of component, say CPUs, needs to be updated, the whole sled can be replaced at once, while the rest of the system remains intact. Nothing is thrown away that can live to compute another day.

Resource cooperation through datacenter virtualization

So if the components are separated into pools, where is the actual server? Well, in this design model, servers do not exist as physically contiguous units. Rather, using software-defined infrastructure (SDI), they are joined together as needed into virtual performance-optimized datacenters (vPODs). vPODs can be spun up and taken down on the fly, ensuring that every process has access to exactly the resources it needs; no more and no less.

Watch HDS 8000 Product Manager David Partain explain the HDS 8000 in a recent video:



To explore more, download our study with Mainstay on transforming the economics of the datacenter, showing how hyperscale dramatically cuts both opex and capex:

Download the study now

And you can subscribe to the blog or sign up for our new newsletter:

Sign up for the Hyperscale Cloud newsletter

Transcript of video

And you can explore a transcript of the video below:

When You Upgrade Your Datacenter, How Much Do You Throw Away?

David Partain, Strategic Product Manager, Software Defined Infrastructure:

Today if you buy a server, let's say, you get that server and it's got all of its components inside – a little bit of CPU, a little memory, a little hard drive space, and some networking equipment – but it's all in one box.

When I buy a new one of those, I take it out to my datacenter, put it in a rack, cable it up, and everything is cool. It's the greatest thing since sliced bread because it's the latest and the greatest. Which is great… for a year. Or two years. But not for three. Because at the end of those three years, suddenly what we have is an old piece of equipment.

So what do I do then? Today you go out and you take that piece of equipment out and you throw it in the garbage, and you stick in a new one that looks like the same thing except it's got newer components. What you've just done is throw away a bunch of useful stuff as well as the stuff that's old and worn out – because it's all been thrown into the garbage.

What we want to do with disaggregated hardware, or the Rack Scale architecture, is we've taken that piece of metal, the box, and we've broken it into bits, and we have smaller discrete units that we can magically, if you will, with software, glue together to make a new server. So you will say "I want a new server" – and today you would go into a web shop and you would order a box.

In the future, when we've reached the end of our Rack Scale Design journey, you will get to a point where you can say "I want a new server" and software will, using components that are spread out in pools inside of the datacenter, create that server for you on the fly with exactly those characteristics that you want.

The reason that's cool is because it changes the way that we manage inventory. It changes the way we handle TCO. It's a win for everybody. You get exactly what you want and you save money at the same time. [end]

Michael Bennett Cohn

Michael Bennett Cohn was head of digital product and revenue operations at Condé Nast, where he created the company's first dynamic system for digital audience cross-pollination. At a traditional boutique ad agency, he founded and ran the digital media buying team, during which time he planned and executed the digital ad campaign that launched the first Amazon Kindle. At Federated Media, where he was the first head of east coast operations, he developed and managed conversational marketing campaigns for top clients including Dell, American Express, and Kraft. He also has a master's degree in cinema-television from the University of Southern California. He lives in Brooklyn.