Do you know the server CPU utilization rates for traditional enterprise datacenters? For non-virtualized servers, the average is about 6 to 15 percent. Virtualized servers can increase that average to around 30 percent, but that’s still far below industry best-in-class rates for datacenter utilization.

Many factors drive these low utilization rates, including infrastructure silos and the typical enterprise infrastructure procurement cycle, which leads to overprovisioning of capacity. Enterprise architects we’ve interviewed said they typically design 30 percent over-capacity for a standard workload and up to 60–70 percent for mission-critical workloads. Hyperscale players such as Google and Facebook have achieved over 60 percent utilization by avoiding these pitfalls.
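
As a rough back-of-the-envelope sketch, using only the figures quoted above, designed over-capacity puts a hard ceiling on utilization before demand variation is even considered:

```python
# Illustrative arithmetic only: the over-capacity figures above cap the
# utilization a workload can ever reach, even at peak demand.

def peak_utilization(overprovision_pct: float) -> float:
    """Utilization ceiling when capacity = peak demand * (1 + over-capacity)."""
    return 1.0 / (1.0 + overprovision_pct / 100.0)

print(f"Standard workload, 30% over-capacity: {peak_utilization(30):.0%} at peak")
print(f"Mission-critical, 70% over-capacity:  {peak_utilization(70):.0%} at peak")
# Average demand sits well below peak, which is how real averages end up at 6-15%.
```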

How have they done it? They transformed their traditional datacenters into software-defined infrastructure, or “SDI.” Hyperscale datacenters use SDI to provide the flexibility to re-provision and re-allocate infrastructure based on changing workload demand.

Explore this topic further in our report with Mainstay: An Economic Study of the Hyperscale Datacenter.

We found there are three critical drivers of utilization improvements:

1. Resource pooling and datacenter utilization


Traditional datacenters typically build dedicated infrastructures for each workload. Server virtualization has helped break down some of these silos, but in practice this technique can be applied only to a portion of enterprise workloads.

Software-defined datacenters can rapidly match workloads with hardware drawn from a common pool of infrastructure components. The focus of planning cycles can move from procuring infrastructure capacity for individual workloads (a procure-to-provision cycle) to managing the disposition and deployment of the datacenter’s entire hardware pool (a pool-to-provision cycle).

This shift to more holistic planning eliminates traditional overprovisioning practices and enables rapid reallocations based on actual usage patterns.
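
A minimal sketch, with made-up hourly demand curves, of why the pool-to-provision model lifts utilization: workloads whose peaks don’t coincide can share one pool sized for the aggregate peak, while silos must each be sized (and overprovisioned) for their own individual peak.

```python
# Hourly core demand for three hypothetical workloads (peaks at different times).
day    = [300 if 8 <= h < 18 else 60 for h in range(24)]
night  = [300 if (h >= 20 or h < 6) else 60 for h in range(24)]
steady = [150] * 24

aggregate = [a + b + c for a, b, c in zip(day, night, steady)]
avg_demand = sum(aggregate) / 24

# Siloed: each workload gets capacity = its own peak * 1.3 (30% over-capacity, as above).
siloed_capacity = sum(max(w) * 1.3 for w in (day, night, steady))

# Pooled: one shared pool sized for the aggregate peak, with the same 30% headroom.
pooled_capacity = max(aggregate) * 1.3

print(f"Siloed utilization: {avg_demand / siloed_capacity:.0%}")   # ~48%
print(f"Pooled utilization: {avg_demand / pooled_capacity:.0%}")   # ~71%
```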

2. Hyperscale system management software

The right kind of system management software allows datacenter operators to manage their entire datacenter infrastructure. It gives IT administrators full visibility into all infrastructure elements (including legacy and/or non-hyperscale elements via APIs) so they can define the optimal infrastructure components for each workload. It can then compose a system from pooled compute, network, and storage resources based on workload requirements.
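
The “compose a system from pooled resources” step might look something like the sketch below; the class and method names are invented for illustration and do not reflect any particular management product’s API.

```python
from dataclasses import dataclass

@dataclass
class WorkloadSpec:
    name: str
    cpu_cores: int
    memory_gb: int
    storage_tb: int
    network_gbps: int

@dataclass
class ResourcePool:
    cpu_cores: int
    memory_gb: int
    storage_tb: int
    network_gbps: int

    def compose(self, spec: WorkloadSpec) -> dict:
        """Reserve just the resources a workload needs from the common pool."""
        for field in ("cpu_cores", "memory_gb", "storage_tb", "network_gbps"):
            if getattr(self, field) < getattr(spec, field):
                raise RuntimeError(f"Pool exhausted: not enough {field} for {spec.name}")
            setattr(self, field, getattr(self, field) - getattr(spec, field))
        return {"workload": spec.name, "cpu": spec.cpu_cores, "mem_gb": spec.memory_gb,
                "storage_tb": spec.storage_tb, "net_gbps": spec.network_gbps}

pool = ResourcePool(cpu_cores=2048, memory_gb=16384, storage_tb=500, network_gbps=400)
system = pool.compose(WorkloadSpec("analytics", cpu_cores=256, memory_gb=2048,
                                   storage_tb=80, network_gbps=40))
print(system)
```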

The right visibility and reporting capabilities also allow operators to audit all infrastructure components, enabling them to locate under-utilized “ghost servers” (those not procured or managed by IT) and bring these resources under IT management. Industry-leading management software should also provide auto-provisioning, real-time performance tracking and triggers to adapt to workload shifts. For example, infrastructure used to support Workload A, which experiences peak demand during the day, could be shifted to Workload B, which experiences peak demand during the night.
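
A toy sketch of the day/night shift described above, assuming a hypothetical per-workload utilization metric; the thresholds and server counts are invented:

```python
HIGH, LOW = 0.80, 0.30   # hypothetical utilization thresholds that trigger a shift

def rebalance(servers: dict, utilization: dict, step: int = 5) -> dict:
    """Move `step` servers from an under-utilized workload to an over-utilized one."""
    new = dict(servers)
    for hot, cold in (("A", "B"), ("B", "A")):
        if utilization[hot] > HIGH and utilization[cold] < LOW and new[cold] >= step:
            new[hot] += step
            new[cold] -= step
    return new

servers = {"A": 50, "B": 50}
print(rebalance(servers, {"A": 0.92, "B": 0.18}))   # daytime: A hot, B cold -> {'A': 55, 'B': 45}
print(rebalance(servers, {"A": 0.15, "B": 0.88}))   # nighttime: reverse -> {'A': 45, 'B': 55}
```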

3. Dynamic resource allocation

In a traditional datacenter, resources are static, with dedicated CPU resources assigned per workload. This leads to significant inefficiency in resource utilization, because an application’s CPU requirements change over time. Again, virtualization can help improve overall efficiency, but it too requires a certain amount of resources to be allocated per workload. A better way to boost utilization is to dynamically allocate CPU resources across servers and racks, allowing administrators to quickly migrate resources to address shifting demand. Studies have shown this can drive 100–300 percent greater utilization for virtualized workloads and 200–600 percent greater utilization for bare-metal workloads.
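
One way to read those ranges, taking the traditional-datacenter figures from the opening paragraph as baselines; this is interpretation rather than additional data:

```python
def improved(baseline_pct: float, gain_pct: float) -> float:
    """An X% 'greater utilization' multiplies the baseline by (1 + X/100), capped at 100%."""
    return min(baseline_pct * (1 + gain_pct / 100), 100.0)

print(f"Virtualized (~30% baseline), +100-300%: {improved(30, 100):.0f}-{improved(30, 300):.0f}%")
print(f"Bare metal (~10% baseline), +200-600%:  {improved(10, 200):.0f}-{improved(10, 600):.0f}%")
```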

These three drivers have already enabled large public cloud providers to shrink datacenter infrastructure and management costs, making it economically feasible to accommodate the massive computing and storage demands of the Networked Society.

Are you ready to join them? 

Sign up for the Hyperscale Cloud blog



Scott Walker

Scott Walker is Head of Cloud Infrastructure at Ericsson, where he leads go-to-market execution for Ericsson’s ever-evolving cloud portfolio and builds partnerships across North America. He is a cloud technology expert and pioneer, having spearheaded the launch of multiple innovative technologies, including a first-of-its-kind direct connect program with Amazon Web Services in 2011. Scott’s career includes executive leadership positions at prominent technology firms including Cisco, AT&T, Equinix, Masergy, and Neustar’s Internet Infrastructure Group. He currently resides in Dallas, TX.
