Containers, also known as software containers, are an operating-system-level virtualization technology in which multiple isolated user-space instances share the same operating-system kernel. While the current popularity of containers is new, the concept can be traced back to as early as the 1980s. Docker brought containers into the mainstream, which has led some people to believe that Docker and containers are the same thing. But Docker is actually only one possible container implementation.

From a technical perspective, a typical container ecosystem consists of container images, the container runtime, and a container cluster manager.

Container images

Container images can be thought of as snapshots of a file system with your application installed. Technically, an image is a stack of file-system layers, much like commits in a Git repository, which lets you store and version your application efficiently. Because the format of container images is well defined, developers can easily create and deploy applications that are portable and deterministic. By this I mean a developer can build and test an application in a development environment, copy the image into a target environment, run the container using a single command, and expect that the application will work as designed.
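To make the layered-snapshot idea concrete, here is a minimal sketch of an image definition using Docker, the best-known implementation. The file names and application are hypothetical illustrations, not a specific product's layout:

```dockerfile
# Hypothetical image for a small Python application.
# Each instruction below adds one file-system layer; unchanged
# layers are cached and shared between image versions, which is
# what makes image storage and versioning efficient.
FROM python:3.11-slim
WORKDIR /app
# Installing dependencies in an early layer means this expensive
# step is re-run only when requirements.txt changes.
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
# The command the container runs when started.
CMD ["python", "app.py"]
```

After building the image in development (`docker build -t myapp .`), the very same image can be copied to a target environment and started with a single command (`docker run myapp`), which is the portable, deterministic workflow described above.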

Don't underestimate this simple but very useful capability, which enables you to run your application anywhere, including in a car or on a smartwatch! This capability has earned containers a lot of love, especially in DevOps environments.

Container runtime

The container runtime provides the execution environment for container instances. Unlike a hypervisor, it adds no emulation layer; there is only a thin layer that controls access to kernel resources. The runtime normally comes as a piece of software that can be installed on an existing operating system. However, because containers have become so popular, many minimalistic operating systems have been developed that specialize in providing a container runtime. Examples include KurmaOS, Project Atomic, RancherOS, and Ubuntu Snappy. These minimalistic OSes are designed for cloud infrastructure: they are slim and support atomic updates, which makes them easy to deploy and manage.
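That "thin layer" is built from standard Linux kernel facilities: namespaces for isolation and cgroups for resource control. A container runtime simply gives a process its own private set of these, and you can inspect them on any Linux machine without installing anything:

```shell
# Every Linux process already runs inside a set of kernel namespaces
# (pid, mnt, net, and so on); a runtime isolates a container by
# creating fresh ones for it. List this shell's namespaces:
ls /proc/self/ns

# Resource limits (CPU, memory) are applied through cgroups;
# show which cgroup hierarchy this process belongs to:
cat /proc/self/cgroup
```

Because the isolation is just kernel bookkeeping around ordinary processes, there is nothing to emulate, which is why container startup is near-instant compared with booting a virtual machine.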

I tested RancherOS recently. It’s an operating system that takes the container design philosophy to the extreme. The entire operating system is a Linux kernel, plus the Docker engine, plus a couple of system containers. The traditional init system is ripped out and all system services are packed into containers, which results in a slim operating system of 26MB that can be instantly installed, upgraded, or rebooted. This is useful for building a reliable cloud platform because simpler is likely more stable.

Container cluster manager

The container cluster manager is responsible for deploying, monitoring, managing, and orchestrating containers. If each container is like a brick, then the cluster manager is like an architect. Looking at a single brick is unlikely to impress you, but looking at a magnificent building very likely will! The cluster manager is what helps you build grand buildings. Constructing a cluster across thousands of nodes used to be a years-long project with many developers working together; now it can be done through a series of mouse clicks. Developers typically build such clusters around the microservices concept. Kubernetes (from Google), Apache Mesos, and OpenStack Magnum are the major players in this space now.
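To show what the cluster manager automates, here is a minimal sketch of a Kubernetes Deployment manifest; the names and the image reference are hypothetical. From this short declaration, the cluster manager schedules the requested number of replicas across the cluster's nodes and replaces any that fail:

```yaml
# deployment.yaml — a hypothetical three-replica web service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # the cluster manager keeps three copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myregistry/web:1.0   # hypothetical image from the previous section
        ports:
        - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` is the "series of mouse clicks" (or one command) that replaces what used to be a multi-year clustering project.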

Containers have also emerged as an alternative to hypervisors. I see them as complementary technologies rather than competing ones. I love containers because they break down complexity, isolate problems, and let you build large clusters in a modular style, while remaining manageable and easy to upgrade. I’m not surprised that people run containers over virtual machines or vice versa.

Scale in-house datacenters into public ones

Among the different container-based solutions, Apcera interests me particularly. Apcera is a hybrid cloud platform that allows you to build clusters across different datacenters seamlessly, which means you can scale out from your in-house datacenter into public ones and grow your "buildings" into a city. Here are some interesting use cases:

  • When you temporarily need computing power beyond your in-house capacity, you can rent additional nodes from a public cloud such as Amazon. Apcera can integrate those nodes into your cluster to give it more horsepower, and once your task is complete, the nodes can be released. This lowers your cost and saves you time.
  • Let's assume you have three clusters: one for collecting data from connected cars, one for machine learning based on the collected data, and one for providing the results of your analysis to a customer who has a car factory. With an Apcera deployment, the work becomes straightforward: one click to deploy the data-collecting cluster into cars, one click to deploy the machine-learning cluster in-house, and one click to deploy the cluster in the manufacturing cloud. Your business pipeline is up instantly.

The number of use cases can be as large as your imagination. Apcera, being an advanced cloud computing solution, also comes with rich toolsets to help you build your applications, a container runtime built on top of KurmaOS, and Kubernetes as the cluster manager.

For more information on Ericsson’s activities at the OpenStack Summit this week, please visit our special event page:

And we'll be giving updates on related thought-leadership topics after the summit. Please sign up for the Dare to Be Better blog to stay up to date:




Liyi Meng

Liyi Meng currently works on the Ericsson cloud system. His main focus is on cloud system design and development. Liyi joined Ericsson about 12 years ago as a software developer for mobile platforms. Since then he has focused on software development most of the time, with experience ranging from embedded systems to hyperscale server systems. Liyi has always been passionate about developing new applications for the next generation of the internet.
