In 2006, people became interested in running different operating systems on top of their PC operating system. VMware provided an all-software solution using a technique called binary translation. With it, you could run Windows on a Mac and macOS on a Windows PC, and it helped people deal with operating system heterogeneity. But it consumed a lot of CPU cycles and memory.

In 2009, Intel launched the Nehalem chip, which offloaded those CPU and memory loads onto the hardware and provided extended page tables for memory. As a result, Xen and KVM joined VMware as players in hardware-assisted virtualization. But the Nehalem chip did not offload network I/O or disk I/O, so Xen, KVM, and VMware have always had network overhead and disk I/O overhead issues.

In 2017, Intel is launching Skylake (the Intel® Xeon® Scalable processor). Its programmable FPGA makes network I/O and disk I/O offloads finally possible. So what we will have is a readily available chip from Intel designed for "general-purpose infrastructure" that has all the hardware offload capabilities that any native telecom node would need.

What's happening inside datacenters with software-enabled virtualization

[Editor's note: What follows is a transcript based on an interview with Jason Hoffman.]

We were delivering systems where you had the infrastructure and the app tightly coupled together, and it's not so much that it's bad; it's a perfectly fine thing to do in any sort of compute system. If you're doing a repetitive task such as processing packets, it always makes sense to put that in a specialized piece of hardware. The question is whether it's a readily available piece of hardware or whether it's something you have to build yourself.

Ten, 15, or 20 years ago, if you wanted to build a node that was super-fast at packet processing or super-fast at any compute task, you were basically building custom ASICs or custom FPGAs to do that. Now what's happened in the component space is that you actually have programmable FPGAs, so you don't have to design the FPGA yourself. You're basically writing code that can go onto the FPGA using specific libraries. Custom ASICs have largely been replaced by ARM-based systems-on-chip. So you have to do only a subset of the design, and the rest of the design is on an ARM platform.

Also, what's occurred is that Intel is continuously putting capabilities into the chips. It was only in 2006 that Intel made a 64-bit x86 chip. Their strategy up to that point was not to have x86 be their 64-bit platform. Itanium was their 64-bit platform, and it was meant to catch up with the 64-bit SPARC and Power chips that were available. The Intel® Xeon® with EM64T had a very important extension in it called VT-x. And what VT-x did was CPU offload for any type of guest operating system emulation on top of the given operating system. Keep in mind that it wasn't until 64-bit showed up in 2006 that it made sense to try to put more than one operating system on a server. So in 2006, you had the dominant chip in the world, x86, which was dominant in the PC world but not dominant in the datacenter world. It's fair to say that Power, SPARC, MIPS, and others were out there just as much, and you had the Chip Wars.
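
For a concrete view of that capability from the host side, VT-x shows up as the vmx flag in /proc/cpuinfo on Linux. What follows is a minimal sketch, assuming a Linux server (the file path and flag name are Linux conventions, not anything specific to this discussion):

# Minimal sketch, assuming a Linux host: check whether the CPU advertises
# Intel VT-x by looking for the "vmx" flag in /proc/cpuinfo.

def has_cpu_flag(flag: str, cpuinfo_path: str = "/proc/cpuinfo") -> bool:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                # The flags line looks like "flags : fpu vme ... vmx ..."
                return flag in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    if has_cpu_flag("vmx"):
        print("vmx flag present: VT-x hardware-assisted CPU virtualization is exposed")
    else:
        print("no vmx flag: VT-x is absent or disabled in firmware")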

What Intel observed at that time was that people were interested in running different operating systems on top of their PC operating system. There was a company called VMware, which provided an all-software solution to do that using a technique called binary translation. It was the only way that you could run Windows on a Mac and macOS on a Windows PC, and it helped people deal with operating system heterogeneity.

In 2006, 2007, and 2008 there were all these great white papers from VMware explaining why binary translation was better than VT-x. They were claiming that their software-only approach of translating system calls between a guest operating system and the host operating system was superior to any CPU offload that Intel could do. But starting in 2006, you had people experimenting with guest/host operating system combinations on 64-bit x86 chips that at least didn't have CPU overhead.

A lot of those datacenter chips at Intel came after the PC chips did, and that was a convenience: it was common then for a sysadmin to need to run Windows and Linux on a Mac laptop. But it was pretty clear that virtualization wasn't going to work unless you could offload memory as well.

...With hardware-assisted virtualization

In 2009, you had extended page tables show up as an instruction-set feature in the Nehalem chips from Intel. And extended page tables allowed you to essentially offload memory performance to the hardware. What that meant in 2009 was that "guest-host virtualization" (where you have a full operating system sitting on top of another operating system, rather than just OS-level virtualization such as containers, jails, zones, LPARs, and everything else) could run an entire kernel on top of another kernel and have that offloaded by the chip. Plus, the fact that you had enough memory density on those servers in 2009 to actually do that meant hardware-assisted virtualization had finally shown up on x86.
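
Extended page tables are likewise visible from the host side: the CPU advertises an ept flag, and the kvm_intel module reports whether it is actually using EPT. Here is a minimal sketch, assuming a Linux host with the kvm_intel module loaded (the sysfs parameter only exists when that module is present):

# Minimal sketch, assuming a Linux host: report whether the CPU advertises
# EPT and whether the kvm_intel module is actually using it.
import os

def cpu_has_flag(flag: str) -> bool:
    with open("/proc/cpuinfo") as f:
        return any(line.startswith("flags") and flag in line.split() for line in f)

def kvm_uses_ept() -> str:
    path = "/sys/module/kvm_intel/parameters/ept"
    if not os.path.exists(path):
        return "kvm_intel not loaded"
    with open(path) as f:
        return "yes" if f.read().strip() in ("Y", "1") else "no"

print("CPU advertises ept flag:", cpu_has_flag("ept"))
print("kvm_intel using EPT:", kvm_uses_ept())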

So in 2009, you start this inevitable reasoning, "Why don't we use hardware-assisted virtualization to make our lives easier? It could definitely help us with the operating system heterogeneity that we have today so we'll have one host operating system everywhere, and then we'll use that additional layer of abstraction to manage all the different operating systems we have in the datacenter. And we're going to do that without CPU overhead and without memory overhead, and we now have enough RAM in these boxes to go ahead and do that."

That was only eight years ago, and like most things it was another two to three years before it went mainstream. That happened only four or five years ago, and it happened only because Intel supported it with their chip.

So it goes back to the original principle I talked about: if you're going to do any repetitive task, put it in hardware.

What Intel didn't offload were the other things: it didn't offload network I/O and it didn't offload disk I/O. It offloaded only CPU and memory. For the last eight years, hardware-assisted virtualization, whether Xen or VMware, has always had network overhead issues and disk I/O overhead issues. You always take a hit there because that's not offloaded by the chip.

Also, in 2009 with the extended page tables, a third hypervisor showed up. KVM was the only hypervisor that was designed for extended page tables. So now you have the KVM world, you have the Xen world, and you have VMware.

Skylake offloads network I/O and disk I/O

It was about 2012 when we started talking about virtualization hitting mainstream, three years after the release of Nehalem. If we fast-forward about five years to now, the Skylake release from Intel is the most significant Intel chip release since Nehalem in 2009. You still have the crypto co-processing. You have an expansion of the advanced vector extensions (AVX) so that you can do even more GPU-type workloads. You have expansions to other extensions so that, for example, new types of columnar databases that weren't fast enough before are going to be fast enough on Skylake. And it has a programmable FPGA, which makes network I/O and disk I/O offloads finally possible.
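
As a minimal sketch of how you might check which of those extensions a particular server reports, assuming a Linux host: the names below are the usual /proc/cpuinfo spellings for AES-NI, AVX2, and a few AVX-512 subsets, and which ones appear depends on the exact SKU.

# Minimal sketch, assuming a Linux host: list which Skylake-era instruction-set
# extensions this machine reports in /proc/cpuinfo.
INTERESTING = ["aes", "avx2", "avx512f", "avx512dq", "avx512bw", "avx512vl"]

flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            break

for name in INTERESTING:
    print(f"{name:10s} {'present' if name in flags else 'absent'}")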

As of 2017, we're in a situation where CPU offload is OK, memory offload is OK, network offload is OK, and disk offload is OK. So what you have is a readily available chip from Intel designed for "general-purpose infrastructure" that has all the hardware offload capabilities that any native telecom node would need.

We tested our cloud-optimized vEPG on the Intel® Xeon® Scalable processor and the result is impressive: a throughput of 40 Gbps per processor. Download this article to get the full picture of how we supercharge the Ericsson Evolved Packet Gateway.

Download now

For more info about how Ericsson solutions use the new Intel® Xeon® Scalable processor, click here.


Jason Hoffman

Jason Hoffman is the CTO, Business Area Digital Services at Ericsson. Previously he was Head of Cloud Technologies, where he was responsible for product, architecture, and engineering, and before that Head of Product Line, Ericsson Cloud System and Platforms, in the former Business Unit Cloud and IP. Prior to that he was a founder and the CTO at Joyent, a pioneering high-performance cloud IaaS and software provider, where he ran product, engineering, operations, and commercial management for nearly a decade. He is considered to be one of the pioneers of large-scale cloud computing, in particular the use of container technologies, asynchronous high-concurrency runtimes, and converged server, storage, and networking systems. Jason is also an angel investor, a strategy and execution advisor, a venture and private equity advisor, and a board member of the WordPress Foundation and New Context, a Digital Garage company. Jason has a BS and MS from UCLA and a PhD from UCSD. He is a San Francisco native who now lives in Stockholm with his wife and daughters.
