So does software defined infrastructure.  Disaggregation and software defined infrastructures are both critical to the operations of a hyperscale datacenter in the era of digital industrialization.

To explain why, I'll start with a dated but still relevant quote from CIO Insight.

"We're entering a period of great digital uncertainty that is trying the souls of senior IT and business leaders alike."

Digital Fear and Loathing in the CIO Ranks, CIO Insight, January 2014

True, but what else is new?  Technologies have always been risky to adopt.  Disruptive ones, more so.  Peruse sysadmin forums and you'll observe a culture wary of being "that guy or gal" who was responsible for an IT horror story.  Sysadmins have good reason to worry: a single line of code can cause major high-tech headaches and tremendous amounts of lost revenue.

In my own experience writing about new technologies, I have noticed two persistent problems that make upgrades and migrations more of a headache than they have to be:

  1. The design team didn't think about upgrades or migration until after the product was finished.
  2. Most customers waited until later releases to upgrade or migrate, giving the early adopters a chance to get the bugs out.

If technology companies didn't do the former, customers wouldn't have to do the latter.  Which is neither here nor there, because neither of those bad habits may be feasible any longer.  The disruptive nature of the Internet of Things (IoT), big data, and cloud technologies is changing early adoption from an IT adventure into a competitive necessity:

"Instead of waiting for emerging technologies to reach a certain level of maturity before adopting them, IT organizations need to start working with them early on in order to enable the business to get the most value from them."

Digital Fear and Loathing in the CIO Ranks, CIO Insight, January 2014

Virtualization Is No Longer Enough

Virtualization was supposed to help reduce the risk of upgrades.  And it did.  Thanks to virtualization, someone who wanted a new application or environment did not have to go through the trials and tribulations of provisioning a new server.  They selected what they wanted from a menu of pre-validated environments and cloned it.

That worked well enough when provisioning applications or even software stacks.  It does not, however, suit the hyperscale of modern datacenters.  Why is that?  Because even datacenters that have fully exploited the benefits of virtualization and have been experimenting with cloud technologies are still organized into silos: silos for compute, silos for storage, and silos for networking.

These silos may not have interfered with the provisioning of applications or software stacks, but they do interfere with the provisioning of hyperscale infrastructures and datacenters.  Too much coordination is required between the compute, storage, and network silos.  There are too many processes to follow, too many details to double-check, and too many dependencies to validate at hyperscale.

Upgrading with Software Defined Infrastructures Is Better

Software defined infrastructures offer two important advantages over virtualization.  The first is automation.  The second is a focus on customer needs rather than on IT limitations.

Together, these advantages simplify the provisioning of the compute, storage, and network resources used by a customer, whether they are for an application, a development platform, an infrastructure, or even a test environment.  Instead of having to manually coordinate with the people and processes of each silo in the datacenter, the sysadmin selects the product offering (for example, infrastructure as a service with a particular service level agreement) and lets the system software assign the necessary hardware and software components according to a pre-existing definition.
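
To make this concrete, here is a minimal sketch in Python of what such a pre-existing definition might look like.  Everything in it is hypothetical and invented for illustration; the ProductOffering class, the catalog entries, and the provision() function are not any real product's API.

    from dataclasses import dataclass

    # Hypothetical product-offering definition.  The class, fields, and
    # catalog entries are invented for illustration, not a real API.
    @dataclass
    class ProductOffering:
        name: str            # offering shown on the menu
        vcpus: int           # compute to allocate
        storage_gb: int      # storage to allocate
        bandwidth_gbps: int  # network to allocate
        sla: str             # service level agreement tier

    # Menu of pre-validated offerings the sysadmin selects from.
    CATALOG = {
        "iaas-gold": ProductOffering("IaaS Gold", 64, 4096, 10, "99.99%"),
        "dev-platform": ProductOffering("Dev Platform", 8, 512, 1, "99.9%"),
    }

    def provision(offering_id: str, customer: str) -> dict:
        """Resolve an offering into the compute, storage, and network
        resources the control plane would assign automatically."""
        o = CATALOG[offering_id]
        return {
            "customer": customer,
            "offering": o.name,
            "resources": {"vcpus": o.vcpus, "storage_gb": o.storage_gb,
                          "bandwidth_gbps": o.bandwidth_gbps},
            "sla": o.sla,
        }

    print(provision("iaas-gold", "acme-corp"))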

If the customer expects to need a greater amount of resources over time, the sysadmin can include that requirement in the product offering or service level agreement.  The software defined datacenter will automatically make more resources available to the customer as needed.
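
That growth rule can live in the offering definition itself.  Here is a minimal sketch, with the threshold and ceiling values invented for illustration:

    # Hypothetical scaling rule attached to an offering: when utilization
    # crosses the threshold, double the allocation up to the agreed ceiling.
    def scale_if_needed(allocated_vcpus, used_vcpus, threshold=0.8, ceiling=128):
        """Return the new vCPU allocation for a customer workload."""
        if used_vcpus / allocated_vcpus > threshold:
            return min(allocated_vcpus * 2, ceiling)
        return allocated_vcpus

    print(scale_if_needed(allocated_vcpus=64, used_vcpus=60))  # prints 128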

How would a software defined infrastructure handle upgrades?  In the same way that it handles product offerings: by putting the needs of customers ahead of the limitations of traditional IT.  

For quite a while now, IT has focused on lowering costs.  Satisfying customers, and certainly delighting them, has taken a back seat to efficiency.  As a result, existing IT processes, including upgrades, are not designed to put the needs of the customer first.

A software defined infrastructure can change all that.  Instead of designing, testing, and rolling out upgrades for a class of resources such as compute nodes, you can change your focus to the cloud services you offer customers.  First, determine the impact of an upgrade on a particular cloud service, and then come up with a roll-out plan that accommodates the needs of your customers. 

Offer your customers a choice of several upgrade windows, for instance. Or let them make upgrades on demand, when they're ready.
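
As a sketch of that idea, with hypothetical window names and dates, a per-service plan might look like this:

    from datetime import datetime

    # Hypothetical upgrade windows for one cloud service; names and dates
    # are invented for illustration.
    UPGRADE_WINDOWS = {
        "window-a": datetime(2016, 3, 5, 2, 0),   # Saturday at 02:00
        "window-b": datetime(2016, 3, 12, 2, 0),  # the following Saturday
        "on-demand": None,                        # customer upgrades when ready
    }

    def schedule_upgrade(service: str, customer: str, choice: str) -> str:
        """Record the upgrade window a customer chose for one service."""
        when = UPGRADE_WINDOWS[choice]
        if when is None:
            return f"{service} for {customer}: deferred until the customer triggers it"
        return f"{service} for {customer}: scheduled for {when:%Y-%m-%d %H:%M}"

    print(schedule_upgrade("object-storage", "acme-corp", "window-b"))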

But what about hardware upgrades? 

In Praise of Disaggregated Architecture

A disaggregated architecture is designed, from the start, to facilitate upgrades.  It will support your introduction of a software-defined infrastructure right alongside your legacy datacenter environment.

Two things make a disaggregated architecture work.  The first is a common pool of compute, storage, and network resources that you can dynamically combine to build virtual datacenters.  Because resources are allocated dynamically, there is no interruption to services while you swap out components.

More importantly, the ability to dynamically assign the right set of resources to an application allows a hyperscale cloud provider to provision a new cloud service in minutes.  And to do so with the most efficient combination of not only compute, storage, and network resources, but also of human labor and power consumption.
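
A toy model makes the idea concrete.  In this sketch, with invented names rather than a real control plane, every virtual datacenter draws from one shared pool and returns resources to it, so swapping hardware underneath only changes the pool's capacity:

    # Toy model of a disaggregated resource pool; purely illustrative.
    class ResourcePool:
        def __init__(self, vcpus, storage_gb, bandwidth_gbps):
            self.free = {"vcpus": vcpus, "storage_gb": storage_gb,
                         "bandwidth_gbps": bandwidth_gbps}

        def allocate(self, **request):
            """Carve a virtual datacenter out of the common pool."""
            if any(self.free[kind] < amount for kind, amount in request.items()):
                raise RuntimeError("insufficient pooled capacity")
            for kind, amount in request.items():
                self.free[kind] -= amount
            return dict(request)

        def release(self, allocation):
            """Return resources to the pool, e.g., before swapping out
            the hardware behind them."""
            for kind, amount in allocation.items():
                self.free[kind] += amount

    pool = ResourcePool(vcpus=1024, storage_gb=65536, bandwidth_gbps=400)
    vdc = pool.allocate(vcpus=64, storage_gb=4096, bandwidth_gbps=10)
    print(pool.free)  # capacity left in the common pool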

This dynamic, common pool of resources is a big part of the reason modern hyperscale datacenters deliver four times the CPU utilization of traditional virtualized datacenters and increase network utilization by as much as 26%:

[Figure: Hyperscale vs. Virtualization]

Image courtesy of An Economic Study of the Hyperscale Datacenter, by Mainstay, January 2016


The second technology that makes a disaggregated architecture so suitable for hyperscale datacenters is an optical backplane that can connect systems as far apart as two kilometers.  From the start, you can set up your cabling for maximum capacity, then let the other resources expand or contract their use of that capacity without the slowdown and added cost of manual labor.  The result is a modern datacenter with much greater flexibility and lower cost.

Summing It All Up

Virtualization led the last wave of datacenter modernization.  It provided a big improvement in hardware utilization and provisioning efficiency.  At first it appeared that cloud computing would be an evolution in the efficiency that virtualization started: a way to further reduce both capital and operating expenses.

That was a clever head fake.  Cloud computing is not an evolution of virtualization.  It is one of the technologies, along with the Internet of Things and big data, that are inaugurating the era of digital industrialization.  Digital industrialization will dramatically alter not only the kinds of business we do, but also the economics of doing business.  It will be implemented through hyperscale datacenters.  Software defined infrastructures and disaggregated hardware are just two of the technologies that a hyperscale datacenter will need to remain competitive and useful.

About the Photo behind the title

In March of 2015 I took the photo of the sunlit wall of one of the rare motels on Daytona Beach that had not yet gone hyperscale.

- Rick

Rick Ramsey

I started my high tech training as an avionics technician in the US Air Force. While studying Economics at UC Berkeley, I wrote reference manuals and developer guides for two artificial intelligence languages, ART and SYNTEL. At Sun Microsystems I wrote about hardware, software, and toolkits for developers and sysadmins, and published “All About Administering NIS+.” I served as information architect before joining BigAdmin, which morphed into the Systems Community at Oracle. I left Oracle in May of 2015, and now write for Ericsson.
