Are you ready to apply machine learning to your datacenter and actually use that big data in an iterative process to drive change back into the environment? Imagine if you had the same level of automation, awareness, and security in your own datacenter as the Super 7 do in theirs.


Datacenters present a challenging environment: they are extremely complex and diverse, with a multitude of applications and services to support. The “Super 7” cloud giants – Facebook, Google, Amazon, Alibaba, Tencent, Baidu, and Microsoft – don't necessarily have those constraints. They have often had just one dominant application, with potentially hundreds of thousands of physical devices supporting it.

The Super 7 see the datacenter as an asset

If you’re outside of these giant companies, the demands on your datacenter are different. You might have thousands of applications, potentially millions of end users, and a diverse infrastructure. You need to be platform-agnostic and vendor-agnostic to mitigate risk to your supply chain. Moving forward from their roots, companies like Google, Amazon, and Facebook have taken the solutions to these problems to the extreme and, in the process, have turned the datacenter into an asset. One way to start on the same journey is by revisiting how you measure your datacenter: what do you have, what is it doing, and how has it changed over time?
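To make that idea concrete, here is a minimal sketch, in Python with only the standard library, of taking a point-in-time inventory snapshot and diffing it against an earlier one. The collect_inventory fields and the diff_snapshots helper are illustrative assumptions, not the schema of any particular platform; a real fleet would pull this data from agents, BMCs, or a CMDB across thousands of hosts.

```python
# Sketch: "what do you have, and how has it changed over time?"
# Fields below are illustrative assumptions, not a product schema.
import json
import platform
import socket
from datetime import datetime, timezone

def collect_inventory() -> dict:
    """Capture a simple snapshot of one host."""
    return {
        "hostname": socket.gethostname(),
        "os": platform.platform(),
        "cpu_arch": platform.machine(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

def diff_snapshots(old: dict, new: dict) -> dict:
    """Report which fields changed between two snapshots."""
    return {
        key: {"was": old.get(key), "now": new.get(key)}
        for key in new
        if key != "captured_at" and old.get(key) != new.get(key)
    }

if __name__ == "__main__":
    before = collect_inventory()
    # ... time passes; hardware, OS, or firmware changes happen ...
    after = collect_inventory()
    print(json.dumps(diff_snapshots(before, after), indent=2))
```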

Datacenters produce big data

One of the next things to think about on this journey is artificial intelligence. We have self-driving cars, and Uber recently launched a fleet of them in Pennsylvania. If they can do that, why don't we have self-driving datacenters?

Think about the different verticals of the Internet of Things (IoT) and big data: the datacenter actually sits within both of them. Open up a server, then open up a datacenter, and you find potentially millions of endpoints producing information. How do you start to capture that information and, more importantly, gain knowledge from it?
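As one hedged example of what capturing that information can look like on a single endpoint, the sketch below samples basic host health metrics with the open-source psutil library (our choice for illustration; production fleets typically run dedicated agents and stream samples like this into a time-series store).

```python
# Sketch: one telemetry sample from one endpoint.
# Assumes psutil is installed: pip install psutil
import json
import time

import psutil

def sample_host_metrics() -> dict:
    """Collect a single snapshot of basic host health metrics."""
    net = psutil.net_io_counters()
    return {
        "timestamp": time.time(),
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
        "net_bytes_sent": net.bytes_sent,
        "net_bytes_recv": net.bytes_recv,
    }

if __name__ == "__main__":
    print(json.dumps(sample_host_metrics(), indent=2))
```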

Self-driving cars are continuously learning

It starts with measurement. Think about the data you have today. How do you understand it? How do you measure it? And then how do you start to feed that data back into the system? Traditional monitoring tools fail to make the connection between the data that is collected and the knowledge that gets applied. This is not the case inside the Super 7: they leverage data, and subsequently the datacenter itself, as an asset from which to mine value.

Uber’s self-driving cars are constantly learning. There are actually two people in each car: one driving, or on standby to drive, and another taking notes and annotating all the different actions. All that data is then fed back into the system so that the next time a particular situation occurs, the car can anticipate it.

But how do you build the brains behind this kind of operation in a datacenter, spanning everything from physical assets to people to infrastructure?

Applying machine learning in the datacenter

We want to move towards a world where we can measure datacenter performance and actually use that data in an iterative process to drive change back into the environment. That requires an automation platform that can ingest all this data, apply advanced machine learning techniques, and push changes back into the environment, bringing the continuous integration and deployment methodologies of software development to your datacenter.
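As a rough sketch of that measure-learn-act loop, the example below trains scikit-learn's IsolationForest on synthetic "healthy" telemetry and flags outliers for remediation. The training data and the remediate stub are placeholder assumptions; a real platform would learn from live fleet telemetry and push changes through its deployment pipeline rather than a print statement.

```python
# Sketch: measure -> learn -> act, with unsupervised anomaly detection.
import numpy as np
from sklearn.ensemble import IsolationForest

# Assumed historical telemetry: rows of (cpu_percent, temp_celsius)
# sampled from healthy servers. Synthesized here for the demo.
rng = np.random.default_rng(seed=42)
healthy = rng.normal(loc=[40.0, 55.0], scale=[8.0, 3.0], size=(500, 2))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(healthy)

def remediate(sample: np.ndarray) -> None:
    """Placeholder action; a real platform might migrate workloads
    or open a change request through its automation pipeline."""
    print(f"anomaly detected at {sample}: scheduling remediation")

# Score new samples; IsolationForest labels outliers as -1.
new_samples = np.array([[42.0, 56.0], [97.0, 88.0]])
for sample, label in zip(new_samples, model.predict(new_samples)):
    if label == -1:
        remediate(sample)
```

An unsupervised model is a natural starting point here because labeled failure data is scarce in most datacenters; the loop closes when flagged anomalies trigger changes that are themselves measured on the next pass.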

Imagine if you had that level of automation, awareness, and security in your own datacenter. That’s the journey we want to take you on.

If you want to explore our ideas further, please read our e-book on transforming your datacenter into your competitive edge. Or check out my demo of our Datacenter Automation Platform in the video below from Intel Developer Forum.

Download the eBook

 



James Malachowski

James Malachowski is Director of Product, Software-Defined Infrastructure for Ericsson. James joined Ericsson via the acquisition of NodePrime, a hyperscale datacenter management and intelligence startup, where he was Founder & CEO. Before NodePrime, James worked at Dell in the Enterprise Solutions Group helping some of the world’s largest Cloud, SaaS, and gaming companies scale their datacenter infrastructure. Before Dell, James spent four years at Cisco working in the public sector as a systems engineer, where he was a graduate of the Cisco Sales Associate Program. James is a graduate of the University of Southern California with a BS in Electrical Engineering.

