The default choice for IT departments is to stick with their existing infrastructure. They don't want to make any fundamental changes in passive infrastructure, human resources, real estate resources, and operational models. But what was great in the past is not good enough now.

7 potential faults in your default IT infrastructure model

The framework for classic IT infrastructure deployments can be articulated as follows:

1. Moore's law delivers sufficient capacity growth, with two-to-three-year technology refresh cycles.

2. Unit prices for infrastructure technology are fixed, but Moore's law generates a 2X price-performance increase every 18 months.

3. Passive infrastructure represents sunk cost. Cabinets, wiring, power, and cooling are often treated as fixed, fully depreciated costs.

4. The real-estate footprint is fixed—and occupied by fully depreciated passive infrastructure.

5. The human resources function has been refined over a long time and is considered to be at or close to fully optimized.

6. Public and private clouds are decoupled—with two completely different operational models and optimization paradigms.

7. One configuration type fits all applications—without the need for further optimizations between workloads.

If all these default principles stayed true, your ride would be a dull one:

  • Plan: Anticipate technology refresh as before.
  • Do: Buy new blades according to depreciation and utilization plans.
  • Check: Monitor utilization rates.
  • Act: This is a wait state until the technology depreciation cycle is completed.

But this default scenario suffers from a set of inefficiencies that you can attack, with a hyper-focus on taking the faults out of your default strategy.

1. Optimize across public and private cloud

The first area to attack is creating a holistic view of your infrastructure challenges. Both private and public clouds will be used, and they will no longer be divided on a per-application basis as in the past. A more powerful split is to divide workloads between the two types of cloud based on their characteristics.

As you progress with your digital transformation, you will have more and more business-critical applications that require a mix of private and public clouds to support them.

2. Design your private cloud around 3 to 4 extreme workload scenarios

The second area concerns differentiation of configurations in your private cloud. Some of your applications will be compute-intensive, others will be memory-intensive, and some will be storage-intensive. Finally, you might run into network-intensive applications.

As you look at optimizing resource utilization for all elements, you need to consider deploying infrastructure for three to four extreme characteristics, such as media-intensive applications on a storage-centric configuration and transaction-centric applications on a memory-centric configuration.
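The idea of mapping each workload to one of a few extreme configuration types can be sketched as follows. The workload names, demand scores, and configuration labels are illustrative assumptions, not a prescribed model:

```python
# Sketch: map a workload's dominant resource demand to one of a few
# "extreme" private-cloud configuration types. All names and scores
# here are illustrative assumptions.

# Relative demand scores (0-10) per resource for some example workloads.
workloads = {
    "media-streaming": {"compute": 3, "memory": 4, "storage": 9, "network": 6},
    "oltp-database":   {"compute": 5, "memory": 9, "storage": 6, "network": 4},
    "batch-analytics": {"compute": 9, "memory": 6, "storage": 5, "network": 3},
    "packet-gateway":  {"compute": 4, "memory": 3, "storage": 2, "network": 9},
}

# Each extreme configuration is centered on one dominant resource.
config_for_resource = {
    "compute": "compute-centric",
    "memory":  "memory-centric",
    "storage": "storage-centric",
    "network": "network-centric",
}

def assign_configuration(demands):
    """Pick the configuration matching the workload's dominant resource."""
    dominant = max(demands, key=demands.get)
    return config_for_resource[dominant]

for name, demands in workloads.items():
    print(f"{name}: {assign_configuration(demands)}")
```

In practice the assignment would weigh cost and utilization targets, not just the single dominant dimension, but the principle is the same: a handful of configuration types instead of one-size-fits-all.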

3. Treat power, human resources, passive infrastructure and real estate as variable costs

The third area relates to the non-technology resources used. With growing data demand and diversified performance needs, expansions are occurring across human, passive, and real-estate resources for many infrastructure deployments.

Software-defined infrastructure (SDI) allows for a higher degree of automation, enabling a more efficient use of human resources. The shift from electrical to optical wiring can increase the size of resource pools. Real estate, power, and cooling need to grow too if you cannot increase efficiency faster than capacity needs grow.

When you look at these costs as variable, you enable a scenario in which expansions can be pushed out in time.
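The deferral effect can be illustrated with a simple capacity model; the headroom, growth, and efficiency figures below are illustrative assumptions, not data from the article:

```python
import math

def years_until_expansion(headroom, demand_growth, efficiency_gain):
    """Years until demand exhausts the current footprint.

    headroom: current capacity divided by current demand (1.5 = 50% spare)
    demand_growth: annual growth of raw demand (0.30 = +30% per year)
    efficiency_gain: annual efficiency improvement (0.15 = +15% per year)
    """
    # Net annual growth of footprint-consuming demand.
    net = (1 + demand_growth) / (1 + efficiency_gain)
    if net <= 1:
        return math.inf  # efficiency outpaces demand; no expansion needed
    return math.log(headroom) / math.log(net)

# With 50% spare capacity and 30% annual demand growth:
baseline = years_until_expansion(1.5, 0.30, 0.0)   # no efficiency gains
improved = years_until_expansion(1.5, 0.30, 0.15)  # 15%/year efficiency gains
print(f"expansion needed in {baseline:.1f} vs {improved:.1f} years")
```

With these assumed numbers, a 15 percent annual efficiency gain roughly doubles the time before a power, cooling, or real-estate expansion is needed, which is the payoff of treating those costs as variable rather than sunk.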

4. You don't become fast and cost-efficient without agility and automation

The fourth and final point to address is the demand to be fast and cost efficient. The biggest lever to consider for your future infrastructure is an SDI approach, which is key to both agility and automation.

Application software developers use DevOps and generate a continuous flow of new application releases onto your infrastructure. An SDI makes both the platform and application management agile.

With a steep increase in the influx of software, process automation becomes central. Automated handling of new infrastructure and application software is needed to secure fast upgrades with predictable results.
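The software-defined principle behind this automation can be sketched as a desired-state reconciliation loop: operators declare what the infrastructure should look like, and automation computes the actions to get there. The tier names, versions, and states below are hypothetical:

```python
# Sketch of the software-defined idea: infrastructure is driven from a
# declared desired state, and an automated loop reconciles the actual
# state toward it. All names and values here are hypothetical.

desired_state = {
    "web-tier":   {"instances": 4, "version": "2.1"},
    "cache-tier": {"instances": 2, "version": "1.8"},
}

actual_state = {
    "web-tier":   {"instances": 3, "version": "2.0"},
    "cache-tier": {"instances": 2, "version": "1.8"},
}

def reconcile(desired, actual):
    """Return the actions needed to move the actual state to the desired state."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name, {})
        if have.get("version") != want["version"]:
            actions.append(f"upgrade {name} to {want['version']}")
        diff = want["instances"] - have.get("instances", 0)
        if diff > 0:
            actions.append(f"scale {name} up by {diff}")
        elif diff < 0:
            actions.append(f"scale {name} down by {-diff}")
    return actions

for action in reconcile(desired_state, actual_state):
    print(action)
```

Because the loop is deterministic, repeated software influxes produce predictable upgrade results instead of ad hoc manual change sequences.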

Predictions for the faults in default infrastructure

My predictions for the future of infrastructure strategies are:

  • The optimization of the IT infrastructure is key to supporting large-scale digital transformation initiatives with new IT needs.
  • The real benefits of the optimization will come from rethinking the IT infrastructure beyond incremental improvements of the default deployment model.
  • Holistic approaches deliver the largest benefits.
  • Focusing too much on leveraging fully depreciated passive assets leads to sub-optimized prioritizations.

To explore our ideas more, please read our e-book on how today's businesses require a datacenter that's built for the digital economy and how you need to approach IT in a fundamentally different way.




Peter Linder

Peter Linder is Head of Business Management and Sales Support for Business Unit IT & Cloud Products towards Region North America. Since 2011 Peter has been based in North America in various management roles for the development of Ericsson’s cloud and IP Business in the US and Canada. He is also a Network Society evangelist appointed in the original group in 2011 and an intrapreneur dedicated to learning and sharing insights on how the digital transformation is reshaping future networks.
