“Hardware outpaces software and we need to do a better job on the software side of working ahead of what could be possible with hardware and when it’s possible, use it.”

This is what Ericsson’s Jason Hoffman said in an interview (see below) following a recent keynote talk at Red Hat Summit. With the biggest platform advancement of the decade coming up, the call to the world’s software developers has become: have you done enough to prepare?

In light of the upcoming launch of Intel’s new processor family, Jason’s call for improved software becomes especially relevant. In his talk, Jason further reinforces his point with an example: “If all of a sudden, we have an onboard FPGA inside of the [Intel® Xeon® Scalable Processor] platform, you know we should spend the years before that to be ready for that, not to spend the years after that to adopt it. You need to be a bit more proactive around that.”

How we prepared for the Intel® Xeon® Scalable Processor launch

If you want to know more about how we have prepared our cloud-optimized applications and our software-defined infrastructure (HDS 8000) for the upcoming launch of Intel’s Xeon Processor Scalable Family, follow us on our site.

We tested our cloud-optimized vEPG on the Intel® Xeon® Scalable processor and the result is impressive: a throughput of 40 Gbps per processor. Download the article to get the full picture of how we supercharge the Ericsson Evolved Packet Gateway.

Download now

Ericsson and Intel have been working closely as leading partners to accelerate cloud computing solutions that take advantage of Intel’s latest platform capabilities. Ericsson Hyperscale Datacenter System 8000, the first complete system based on Intel® Rack Scale Design, is designed to transform companies into digital leaders.

Automation, data-centric architectures and artificial intelligence

In the interview with Jason Hoffman that I referenced above, he also talks about physical automation of the datacenter, the need for data-centric architectures, and deep-learning systems that study the future digital infrastructure. Note the following quote: “If you’re not data-centric, you’re not going to be around in the future, because the future is all about that.”

You can also view Jason’s full-length seminar from Red Hat Summit on this topic, or dive into our brochure:

Download the Future Digital Infrastructure brochure

For more of the business fundamentals behind this "zero-distance" world, read our new paper on Future Digital - Changing Designs and Minds:

Download the paper

Jason Hoffman on the future of digital infrastructure

Jason Hoffman: I spoke at the Red Hat Summit about the future of digital infrastructure. And what I wanted to do was talk about four areas that I use myself as things to think about – and what we should be doing in the future in the infrastructure space.

The first one was this idea that hardware always outpaces software, and we need to do a better job on the software side of working ahead of what could be possible with hardware and, when it's possible, use it.

If all of a sudden we have an onboard FPGA inside of the Purley platform from Intel, you know we should spend the years before that to be ready for that, not to spend the years after that to adopt it. You need to be a bit more proactive around that.

The second area is a consequence of everything we're doing in the software-defined infrastructure space that not everybody thinks about. And that is the idea that we're now separating applications from infrastructure, driving a certain approach in infrastructure and practice, and we're actually having all the appropriate sort of software abstractions be separated from the physicality of it. Not only does that free up the applications above, but it actually frees up the physicality below as well. And the consequence of everything that's happening there is going to be a tremendous amount of physical automation, from how, quote unquote, a server looks to what entire facilities look like to where they are.

So there's this sort of funny thing: as we're decoupling these layers, the applications can get a lot more flexibility, and everything else inside of that doesn't have the risk profile of the infrastructure. But then the thing underneath is that the consequence of getting those good software abstractions is that we can completely rethink the industrial designs, tackle the physicality and fully physically automate as well. And that tends to be something not a lot of people think about, so I want to point out that that's going to happen as well.

The third area was about data-centric architectures, and I shouldn't say architectures, plural. I mean it's one architecture, a data-centric architecture, in which everything you need to do from a compute and a networking standpoint is a feature of that system. It's not a separate thing. In a compute-centric architecture, storage is an add-on and the network is assumed to always work.

And in a network-centric architecture you're really optimizing it to be a network that doesn't do the other aspects of things. And so when you think of that effort of trying to take one software-defined infrastructure approach, one sort of system view, one thing that encapsulates everything you need to do – and to really simplify, to take a lot of parts out – you do have to be in a data-centric type of design.

And I think a lot of people sit in their network-centric sort of thing, and they're almost not conscious of the fact that they're biased toward that. People are compute-centric and they're not conscious of the fact that they're biased toward that, and then it's natural that people like Amazon and Google have been very data-centric, because that's the core of their business.

And the thing I was pointing out to everyone is: be mindful of what your unconscious biases are inside of these architectures, and also understand that if you're not data-centric, you're not going to be around in the future, because the future is all about that.

Then the fourth one was really a question of whether humans can even design the next platform. The deep learning systems we have today require tremendously large data sets to go into them, and it's not guaranteed that they're going to learn from them, or that the right things or the right sorts of behaviors come out.

Most training efforts of these systems are not successful, and when they are successful, we don't know why they're successful. And when they work, we actually don't know how they work. Now if you look at the infrastructure space, there's a very interesting thing we do not do in the infrastructure space – we engineer it, we throw it out there and we just leave it – but we never study it.

And so a lot of the stuff that we're doing around, say, for example, the HDS software visibility is about collecting very large data sets across infrastructure so that we can actually study it. And most importantly, we can feed those data sets into deep learning systems and start getting some insights out of that. As we run that process, the result may very well be a platform design that none of us would have come up with on our own, and I think we have to be open to that possibility.

Tags:
Build your cloud and NFV infrastructure, Inspiration & knowledge


Michael Martinsson

Michael Martinsson is Director, IT & Cloud Solutions Marketing for Ericsson. He joined Ericsson in 1997 and has held various positions in sales, marketing and business development. His recent focus has been on the converging business landscape, service provider strategies and the transformation of cloud, network and IT infrastructure. Martinsson holds an MSc in electrical engineering and a degree in marketing.
