If you are part of your company's management team, tracking the trends and fads of new technology can seem like a full-time job these days.  Exciting technologies are turning into products, or at least pseudo-products, at an accelerated pace, and the list is long: containers, Ceph, Hadoop, SDI/SDN, NVMe, and new optical fabrics, to name just a few.

Believe the hyperscale value propositions

One key area to pay attention to is the hype building around the notion of hyperscale (1), or what some call hyper-converged, which is essentially web-scale-class equipment that promises to remake the data center.  Value propositions are being thrown around: easy scaling, agility, single-pane orchestration, and capital and operational expense reduction, among many other promises.

The numbers being cited call out a 25%+ improvement in network utilization, a 4x improvement in compute resource utilization, and a 10x improvement in data center technician resource utilization (2), all key promises of this idea of industrializing the cloud.

It is easy to be skeptical but, in short, much of this is to be believed.  Hyperscale is truly a trend, not a fad, and has been tried and tested over the last seven-plus years by the top-tier web-scale companies, which are now encouraging open innovation in forums like opencompute.org and openstack.org.

Don’t ignore the economics of hyperscale

Being competitive in your IT, web, private, and hybrid cloud domains is essential for overall competitiveness. And when most data center infrastructure costs two to four times as much as a hyperscale-built data center (3), the economics cannot be ignored.  With hyperscale economics you can avoid the extra costs of excess compute and storage equipment and facility power infrastructure while keeping pace with the speed of technology advances.  It is a headache, and sometimes a heartache, when a missed technology transition causes real business impact. When your competition operates at a more efficient scale and makes more money to re-invest because of that efficiency, it's a real problem.

The state-of-the-art design of next-generation hyperscale-class systems enables low-cost methods of technology transition, like easy upgrades, new technology additions, increased performance, and more.  At the systems level, the modular design of hyperscale systems, coupled with standard software interfaces like the DMTF Redfish APIs, creates a straightforward method of updating critical elements of your systems.  Add to that essential new functionality like composable, pooled compute, storage, and network resources that are provisioned on demand per service need, and the return-on-investment equation is attractive.
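To get a concrete feel for what a standard management interface buys you, here is a minimal sketch of inventorying compute nodes over the DMTF Redfish REST API. The controller address and credentials are placeholder assumptions; the paths shown are the standard Redfish service root and Systems collection.

```python
# Minimal sketch: inventory compute nodes over the DMTF Redfish REST API.
# The management controller address and credentials below are placeholders.
import requests

BMC = "https://10.0.0.42"      # hypothetical management endpoint
AUTH = ("admin", "password")   # placeholder credentials

def get(path):
    """GET a Redfish resource and return its JSON body."""
    r = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()

# The Systems collection lists every ComputerSystem the service manages.
systems = get("/redfish/v1/Systems")
for member in systems.get("Members", []):
    node = get(member["@odata.id"])
    print(node.get("Model"),
          node.get("PowerState"),
          node.get("ProcessorSummary", {}).get("Count"),
          node.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))
```

The same uniform interface works whether the endpoint fronts a single server's baseboard controller or a pooled, rack-level management service, which is what makes scripted fleet-wide updates and inventory practical.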

You may say you don't need a massive, homogeneous data center built on hyperscale principles; the industry acknowledges this, with the economics starting at just a few racks.  It's about taking the proven economics of large deployments and making them work in smaller-scale deployments of efficiently utilized compute, storage, and network resources.

High value innovations

How do you get started?  The timing is right: both established players and new entrants are driving high-value innovations with leading-edge hyperscale solutions to capture the trend.  Ask your preferred systems suppliers, and add the new entrants to that list, for a proposal based on your business needs, services horizon, and the key economic metrics you define, such as total cost of ownership, return on investment, and opportunity cost.

A new era is dawning.

To explore further how hyperscale systems can increase agility while lowering TCO, check out this study: An Economic Study of the Hyperscale Data Center.

Download the study now

Sources:

(1) https://en.wikipedia.org/wiki/Hyperscale and Intel definitions. In computing, hyperscale is the ability of an architecture to scale appropriately as demand on the system increases. The new "hyperscale data center" systems architecture is designed to provide cohesive and scalable resources using three main system sub-components: compute, storage, and networking. These are abstracted into a sharable resource pool that can be used by different application workloads orchestrated in an SDI environment.

Software-defined infrastructure (SDI) is the extension of the principles and benefits of virtualization from individual servers to the entirety of the hyperscale data center.

Intel Rack Scale Architecture is an enabler of hyperscale and SDI. Intel's investments aim to simplify platform management through open standards and the pooling of compute, storage, and networking, with an industry-standard API and enabling software that deliver critical capabilities such as system topology discovery, disaggregated resource management, and system composition support.

(2) An Economic Study of Hyperscale Data Centers – Ericsson and Mainstay – Jan 2016

(3) Intel market-gathered estimates



Kevin D Johnson, Intel Service Provider Group

Kevin D Johnson, Managing Director of Data Center and Cloud Solutions, Service Provider Group, Intel Corporation, is responsible for Intel technology solutions in data center and cloud services for the global communications and cloud service provider sectors. Johnson brings 20+ years of skill and experience capturing new market growth using advanced technology products. He holds a Bachelor of Science in Engineering from Oregon State University and a Master of Business from the University of Portland.

