I was talking with a CIO the other day about datacenter operations, and he was lamenting the state of his budget. “I used to have to do more with less,” he said, “but now I have to do everything with nothing.” The fact is, doing more with less (or, if you’re lucky, doing more with the same) is a harsh reality for many CIOs. This is especially true in the age of fast-emerging digital business opportunities that require agile deployment of compute, storage, and networking resources, all at a lower total cost of ownership (TCO).
At the same time, demand can also shift to new projects so rapidly that enterprise datacenters need to be able to turn capacity on and off at a moment’s notice. This means datacenters need to provide business stakeholders with the same “infrastructure on tap” capabilities that modern cloud-service providers such as Amazon Web Services have pioneered.
And yet, as I said earlier, the budgets aren’t growing to match. It’s just “do more with less.” And the great majority of enterprise datacenters today are ill-equipped to meet these demands. Designed primarily to deliver back-office applications to internal business users, the traditional datacenter is burdened with an inefficient, costly architecture that prevents it from scaling and flexing rapidly to meet the changing workloads of a modern digital enterprise.
In short, traditional datacenters are not agile, and their TCO is too high. But there’s no reason to panic, because there are solutions.
Technology giants Google, Amazon, and Facebook were among the first to exploit a new architectural paradigm, developing what are called hyperscale datacenters—highly scalable platforms built to handle massively complex workloads.
In fact, our research found that these custom-built hyperscale datacenters, when compared to traditional datacenters, deliver four times the level of CPU utilization, increase network utilization by as much as 26 percent, and dramatically lift productivity by enabling a single IT administrator to manage thousands of servers versus just hundreds.
Server virtualization technology has given system administrators far more flexibility in provisioning infrastructure resources. It has also opened the door to virtualizing other datacenter components, such as the network and storage, leading to the next wave of enterprise datacenter optimization: software-defined infrastructure (SDI), which seeks to virtualize the entire datacenter infrastructure.
In new research, we found that companies implementing the right hyperscale platform rather than a traditional datacenter infrastructure can capture significant TCO savings—generating capex savings of up to 59 percent, opex savings of up to 75 percent, and a return on investment (ROI) of up to 149 percent for large enterprise datacenter operators over a five-year period.
One of the key drivers of these TCO savings is greater CPU utilization, which allows operators to deploy fewer servers and CPUs, less network hardware, and fewer storage disks. Hyperscale platforms also transform the traditional “procure to provision” process into a vastly more efficient “pool to provision” process, enabling more frequent, collaborative planning that helps avoid over-investment in infrastructure.
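To see how capex and opex savings rates translate into a five-year TCO figure, here is a minimal back-of-the-envelope sketch. The baseline cost figures and the ROI definition (net savings relative to the hyperscale investment) are my own illustrative assumptions, not numbers from the study; only the headline savings rates (59 percent capex, 75 percent opex) come from the text above.

```python
# Illustrative five-year TCO comparison. All dollar figures are
# hypothetical; only the 59%/75% savings rates come from the study.

def five_year_tco(capex, annual_opex, years=5):
    """Total cost of ownership: upfront capex plus opex over the period."""
    return capex + annual_opex * years

# Hypothetical traditional-datacenter baseline (USD millions).
trad_capex, trad_opex = 10.0, 4.0
trad_tco = five_year_tco(trad_capex, trad_opex)        # 10 + 4*5 = 30.0

# Apply the study's headline savings rates to that baseline.
hyper_capex = trad_capex * (1 - 0.59)                  # 59% capex savings
hyper_opex = trad_opex * (1 - 0.75)                    # 75% opex savings
hyper_tco = five_year_tco(hyper_capex, hyper_opex)     # 4.1 + 1*5 = 9.1

savings = trad_tco - hyper_tco
# One common (assumed) ROI definition: net savings over the investment.
roi = savings / hyper_tco * 100

print(f"Five-year savings: ${savings:.1f}M, ROI: {roi:.0f}%")
```

Under these made-up inputs the sketch yields roughly $20.9M in savings; the actual ROI an operator captures depends entirely on its baseline costs and on how ROI is defined, which is why the study quotes savings as "up to" figures.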
Interested in diving deeper into how hyperscale systems can increase agility while lowering TCO? Then check out our new study: An Economic Study of the Hyperscale Data Center.