By 2020, it is forecast that tens of billions of connected devices will be in use worldwide. This exponential growth in networked devices demands more capable and sophisticated software that meets the needs of consumer use cases across industries. The challenge goes beyond sheer device count: the capabilities users expect will require further optimization, along with increases in raw processing power and in the intelligence of the devices themselves. The services these devices provide will need to be organized and delivered at greater scale and efficiency.

To create such a Networked Society, it may be necessary to rethink how future datacenters and cloud services can be delivered and integrated with connected devices and applications. The Networked Society can be viewed as an integrated system of computing devices of various kinds, along with networking and storage systems, applications, and the end users who ultimately gain new capabilities and provide feedback into the whole system. In this holistic feedback control loop, the majority of the predictive and mechanical adjustments can be performed automatically by machine learning systems, freeing users from mundane tasks and allowing them to extend their capabilities.

We may, therefore, need to reconsider what an “app” is. A typical “full stack” application is thought of as the combination of a front-end user interface and a back-end API for database or server-side logic. Future apps may be considered in the context of a much deeper stack, consisting not just of front- and back-end application logic, but also of customized operating-system primitives, container mechanisms, hypervisors, bare-metal compute resources, network connectivity, and storage topology. This entire “deep stack” offers a comprehensive way to think about composing an app that can meet the elevated demands of new use cases enabled by increasingly numerous and capable datacenter and cloud resources.
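To make the idea concrete, a deep-stack app descriptor might be sketched in code. The following is a minimal, hypothetical sketch; the `DeepStackApp` type and every field value are illustrative assumptions, not a real schema or tool:

```python
# Hypothetical sketch: an "app" described as a full deep stack rather than
# just front end + back end. All names and values here are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class DeepStackApp:
    frontend: str        # web or mobile UI framework
    backend: str         # server-side logic / API
    os_primitives: str   # customized operating-system layer
    container: str       # container mechanism
    hypervisor: str      # virtualization layer
    compute: str         # bare-metal compute resources
    network: str         # network connectivity
    storage: str         # storage topology

app = DeepStackApp(
    frontend="react-ui",
    backend="rest-api",
    os_primitives="custom-kernel-modules",
    container="oci-runtime",
    hypervisor="kvm",
    compute="rack-42-blade-7",
    network="10gbe-leaf-spine",
    storage="replicated-block-store",
)

# Every layer of the stack is now an explicit, inspectable part of the app,
# from the UI framework all the way down to a specific compute resource.
print(app.frontend, "->", app.compute)
```

The point of such a descriptor is not the particular fields but that layers usually left implicit (hypervisor, wiring, storage layout) become first-class parts of the application's definition.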

To enable this new way of composing and delivering applications, we will need higher levels of abstraction and corresponding tools that allow more malleable procedures for dealing with every element of the deep stack—from bits, bare metal, and wires, through the layers of the operating system, all the way up to microservice REST APIs accessed via the latest web or mobile UI frameworks.

These new abstraction levels need to support simulation at different layers. First, we may need tools to conduct behavioral simulation of the entire topology of a datacenter in a reproducible and quantifiable way. We can then simulate the topology more faithfully using containers, hypervisors, and software-defined networks and storage. Ultimately, the topology can be compiled to physical targets. The physical datacenter can then be treated as a substrate that runs such compiled program output; the source for the program is the application topology. Such a topology can be expressed in a domain-specific language that lays out the blueprint of every element of the deep stack: web frameworks, databases, the operating system, the root file system, the container system, the hypervisor, network ports, and routing, all the way down to instruction-level machine emulation. A single, comprehensive topology consisting of multiple such elements and their network can then be used to describe a use case: an “app” at datacenter scale.
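As a rough illustration of what such a domain-specific description might look like, here is a hypothetical Python sketch in which each deep-stack element declares what it sits on top of, and “compiling” the topology simply orders elements so that lower layers are provisioned first. All names here (`Element`, `Topology`, `compile_plan`) are assumptions for illustration, not an existing tool:

```python
# Hypothetical topology DSL sketch: elements of the deep stack form a
# dependency graph, and "compilation" produces a provisioning order.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Element:
    """One layer of the deep stack: a web framework, OS image, switch port, etc."""
    name: str
    layer: str              # e.g. "network", "os", "container", "app"
    depends_on: tuple = ()  # names of elements this one sits on top of

@dataclass
class Topology:
    elements: list = field(default_factory=list)

    def add(self, element):
        self.elements.append(element)
        return self

    def compile_plan(self):
        """Depth-first walk that emits lower layers before the layers above them."""
        ordered, seen = [], set()
        def visit(name):
            if name in seen:
                return
            seen.add(name)
            elem = next(e for e in self.elements if e.name == name)
            for dep in elem.depends_on:
                visit(dep)
            ordered.append(elem.name)
        for e in self.elements:
            visit(e.name)
        return ordered

# Describe a tiny "app" spanning the stack, from a NIC up to a web framework.
topo = (Topology()
        .add(Element("nic0", "network"))
        .add(Element("ubuntu-rootfs", "os", depends_on=("nic0",)))
        .add(Element("web-container", "container", depends_on=("ubuntu-rootfs",)))
        .add(Element("flask-app", "app", depends_on=("web-container",))))

print(topo.compile_plan())
# -> ['nic0', 'ubuntu-rootfs', 'web-container', 'flask-app']
```

A real system would emit something far richer than an ordered list (VM images, switch configurations, storage layouts), but the shape is the same: the topology is the source, and the datacenter is the compilation target.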

Automation drives new applications

VMware, Inc., founded in 1998, brought virtual machines to the general public. Before that, running a simple experiment on a Linux box meant spending days procuring, configuring, and setting up real machines. Doing so on multiple machines was often prohibitively expensive, not to mention slow. Nowadays, most developers have not just virtual machines but also container-based development and target environments. An application consisting of many instances of virtualized containers is common. The level of abstraction and tooling provided by these technologies enabled a new generation of developers to launch applications with greater capabilities and ambitions.

Similarly, the new level of abstraction and tooling over an entire datacenter's worth of devices, wires, power distribution, and software can be the catalyst for a new generation of applications with greater capabilities and ambitions.

The industry behind cloud-based infrastructure is advancing rapidly, but we are at the very beginning of this evolution. With a better set of tools covering new levels of abstraction, quantum leaps in scale are still possible. Typical datacenter design and construction strategies are largely based on ad hoc attempts and non-quantifiable processes. There is no A/B testing of different datacenter infrastructure models, partly because of the difficulty and complexity of the problem and partly because of the lack of necessary tools and methods. We need to think of a “unit of application” at different scales and levels of abstraction. We need tools that give future designers repeatable and quantifiable infrastructure support for their applications. Such tools can ultimately be used to deploy the result of those designs into physical reality: malleable, composable, flexible hardware and software layers.

Cloud Infrastructure

Bob Bae

Bob works as an Engineering Director at Ericsson in the Bay Area. He served as CTO at NodePrime, a startup focused on immutable infrastructure that was acquired by Ericsson. Before that, Bob contributed to various products at companies such as Wind River, NetApp, SGI, Aspera, Mylex, Primary Data, Accensus, and Activision. Bob has a BS from Ohio State University.
