Advances in cloud computing, plus the emergence of the Internet of Things (IoT), big data, artificial intelligence, and automation, are dramatically altering not only how IT departments are employing cloud technology, but also the fundamental way the enterprise does business.

The IoT alone will create an enormous number of connections not only between people and devices, but increasingly between the devices themselves. This growth will only add to the enormous amount of data that is already being collected from mobile devices, social media, and the web:

[Figure: Data from everything else]

To manage, process, and extract value from all that data, we need to rely on big data, artificial intelligence (AI), and automation. Because those technologies are particularly well suited to cloud, and because cloud is a more efficient way to manage and process data, more and more industries are moving their data, operations, and customers to a cloud infrastructure.

General Motors and T-Mobile are transforming

General Motors is investing USD 500 million in the ride-sharing service Lyft to develop an on-demand network of self-driving cars. Ford co-invested USD 24 million in India-based Zoomcar, and partnered with Baidu to jointly invest USD 150 million in Velodyne, a company that makes LiDAR sensors for self-driving cars.

T-Mobile can better satisfy the needs of its customers by using high-speed analytics to identify offers that are more likely to interest them, based on the attributes of each individual customer. It also makes its marketing campaign needs available to third-party agencies through a partner portal, so agencies can bid on and upload their projects directly into T-Mobile’s internal system.
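As a rough sketch of how that kind of attribute-based offer matching can work: score each offer against a customer's attributes and surface the best fit. Everything below, the customer fields, the offers, and the scoring rules, is invented for illustration; it is not T-Mobile's actual model or data.

```python
from dataclasses import dataclass

# Hypothetical sketch of attribute-based offer matching. The customer
# fields, offers, and scoring weights are illustrative only.

@dataclass
class Customer:
    monthly_data_gb: float     # average mobile data usage
    intl_calls_per_month: int  # international calling activity
    tenure_months: int         # how long they've been a customer

# Each offer scores a customer; a higher score means a better fit.
OFFERS = {
    "unlimited_data": lambda c: c.monthly_data_gb / 10.0,
    "international_pack": lambda c: c.intl_calls_per_month / 5.0,
    "loyalty_discount": lambda c: c.tenure_months / 24.0,
}

def best_offer(customer: Customer) -> str:
    """Return the offer with the highest fit score for this customer."""
    return max(OFFERS, key=lambda name: OFFERS[name](customer))

if __name__ == "__main__":
    heavy_user = Customer(monthly_data_gb=42.0, intl_calls_per_month=2, tenure_months=30)
    print(best_offer(heavy_user))  # -> unlimited_data
```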


The cloud giants were the first to hyperscale

The need to deal with all that data and all those customers for all those services led cloud giants such as Google, Apple, Amazon, and Facebook to focus on economies of scale. The result was hyperscale datacenters.

Hyperscale refers to the ability of an architecture to scale as rapidly as the demand placed on the system, in either direction. A hyperscale datacenter can quickly and seamlessly add compute, memory, storage, and networking resources to its systems as their workloads increase in number or bandwidth. It can just as quickly and seamlessly remove those resources as workloads decrease.
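Here is a minimal sketch of that elasticity loop, assuming load and per-server capacity are measured in the same units; the 65 percent target and all the numbers are illustrative, not any datacenter's actual policy:

```python
import math

# Minimal sketch of hyperscale elasticity: size the fleet to the demand,
# in either direction. The target and all numbers are illustrative.
TARGET_UTILIZATION = 0.65

def servers_needed(load: float, capacity_per_server: float) -> int:
    """Smallest fleet that carries the load at the target utilization."""
    return max(1, math.ceil(load / (capacity_per_server * TARGET_UTILIZATION)))

def rebalance(active: int, load: float, capacity_per_server: float) -> int:
    """Positive result: add servers. Negative result: remove them."""
    return servers_needed(load, capacity_per_server) - active

if __name__ == "__main__":
    print(rebalance(10, 1000.0, 100.0))  # demand spikes: +6 servers
    print(rebalance(16, 330.0, 100.0))   # demand falls: -10 servers
```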

(For more about hyperscale, see the blog post “The difference between big and hyperscale.”)

To keep costs manageable, those hyperscale datacenters are operated with revolutionary efficiency. Instead of an admin managing a few hundred systems, each admin manages a few hundred racks. Instead of using a variety of hardware, each platform optimized for a particular application or software stack, these datacenters standardize on one hardware platform and one software stack.

Hardware utilization rates as high as 65 percent

They also employ automation and predictive maintenance. The operational priority is no longer to optimize individual operations, but to optimize the operation of the entire datacenter, and, in some cases, of multiple related datacenters. These datacenters already function with exceptional asset and operational efficiency, reaching average hardware utilization rates as high as 65 percent:

[Figure: Virtualization vs. hyperscale utilization]

The estimates above are the result of meticulous research, but they are nevertheless based on a particular set of assumptions. To help you estimate potential savings based on assumptions relevant to your own operations, ask your sales contact for a quick TCO analysis using Ericsson’s online TCO tool. Deeper, more thorough analyses are also available.
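For a concrete sense of what a fleet-average figure like 65 percent means, here is a back-of-the-envelope calculation; the per-rack core counts are invented for illustration:

```python
# Fleet-average hardware utilization: total used capacity over total
# capacity. The (used, total) core counts per rack are invented.

def fleet_utilization(racks: list[tuple[float, float]]) -> float:
    """Average utilization across the fleet, weighted by rack capacity."""
    used = sum(u for u, _ in racks)
    total = sum(t for _, t in racks)
    return used / total

racks = [(41.0, 64.0), (44.8, 64.0), (39.0, 64.0)]
print(f"{fleet_utilization(racks):.0%}")  # -> 65%
```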

On-premises cloud must also transform

Public cloud adoption will continue to grow, but in parallel with on-prem cloud, not as a replacement for it. Some industries require that sensitive data be stored and processed on-site. Some governments demand that workloads be provisioned on local resources. Others have requirements for security, accessibility, or visibility into their data that public cloud providers cannot economically meet.
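As an illustration of the kind of placement rule those requirements imply, here is a hedged sketch of a data-residency check; the classifications, site names, and policy are hypothetical, not any particular provider's API:

```python
from dataclasses import dataclass

# Hedged sketch of a data-residency placement check. The classifications,
# site names, and policy table below are hypothetical.

@dataclass
class Workload:
    name: str
    data_classification: str  # e.g. "public" or "sensitive"
    target_site: str          # e.g. "on-prem-eu" or "public-cloud-us"

# Data classes that regulation confines to on-premises infrastructure.
ON_PREM_ONLY = {"sensitive"}

def placement_allowed(w: Workload) -> bool:
    """Reject public-cloud placement for data that must stay on-site."""
    if w.data_classification in ON_PREM_ONLY:
        return w.target_site.startswith("on-prem")
    return True

if __name__ == "__main__":
    print(placement_allowed(Workload("billing", "sensitive", "public-cloud-us")))  # False
    print(placement_allowed(Workload("billing", "sensitive", "on-prem-eu")))       # True
```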

However, for that partnership to be fruitful, on-prem cloud must go through the same digital transformation that public cloud has experienced. It must achieve the revolutionary efficiency and hardware utilization rates of the cloud giants, and it must support workload portability. Unfortunately, today’s datacenter infrastructure and operations don’t lend themselves to this kind of elasticity. They were designed to deliver back-office applications to internal business users, and they cannot rapidly adapt to the changing workloads of a modern digital enterprise.

In short, your traditional IT must evolve into your Future Digital Infrastructure (FDI).

Learn about Future Digital Infrastructure

To explore these themes in more depth, please download our new paper, How vPODs make you both agile and lean.

Download the paper

About the background photograph

My friend Lee Becker took that photograph of his wife Bex Becker, a motorcycling enthusiast and founder of the Colorado GS Girls, on a BMW Scrambler, which is an excellent example of motorcycling transformation.



Rick Ramsey

I started my high tech training as an avionics technician in the US Air Force. While studying Economics at UC Berkeley, I wrote reference manuals and developer guides for two artificial intelligence languages, ART and SYNTEL. At Sun Microsystems I wrote about hardware, software, and toolkits for developers and sysadmins, and published “All About Administering NIS+.” I served as information architect before joining BigAdmin, which morphed into the Systems Community at Oracle. I left Oracle in May of 2015, and now write for Ericsson.

