Virtualisation has revolutionised the world of information technology. The method of distributing a physical computer's resources across several virtual machines (VMs) first appeared in the form of hardware virtualisation. This approach is based on emulating hardware components in order to supply different virtual servers, each with its own operating system (OS), on one shared host system. A structure like this is often used in software development, when different test environments need to run on a single computer. Virtualisation also forms the basis of various cloud-based web hosting products.

One alternative to hardware virtualisation is operating-system-level virtualisation. This is where various server applications are realised in isolated virtual environments, or containers, which all run on the same operating system. This is also called container-based virtualisation. Like virtual machines, which have their own operating systems, containers can also run different applications with varying requirements on the same physical system. Since containers don't have their own OS, this virtualisation technology is characterised by a considerably more streamlined installation process and a smaller overhead.

Server containers are nothing new, but today the technology has come to prominence through open source projects like Docker and CoreOS's rkt.

What are server containers?

Hardware virtualisation relies on a so-called hypervisor, which runs on the host system's hardware and distributes its resources proportionately between the guest operating systems. With container-based virtualisation, on the other hand, no additional operating systems are started; instead, the common OS creates isolated instances of itself. Each of these virtual containers provides applications with a complete runtime environment.

Software containers can fundamentally be regarded as server apps. To install an application, it is packaged with all its required files into a portable format (an image), which is then loaded onto a computer and started in a virtual environment. Application containers can be implemented on practically any operating system: Windows systems use Virtuozzo (software developed by Parallels), FreeBSD offers the virtualisation environment Jails, and Linux systems support OpenVZ and LXC containers. Operating system virtualisation has only become attractive for the mass market through container platforms such as Docker or rkt, which add basic features that make handling server containers a simpler task.
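As an illustration, such an image can be described by a Dockerfile along the lines of the following minimal sketch; the base image and the `./site` directory are hypothetical examples, not taken from the article:

```dockerfile
# Minimal sketch of a Dockerfile (hypothetical example)
FROM nginx:alpine                     # base image: a small Linux userland plus the nginx server
COPY ./site /usr/share/nginx/html     # bundle the application's files into the image
EXPOSE 80                             # document the port the containerised server listens on
```

Building this file with `docker build` produces a portable image that can then be started on any system running a corresponding container platform.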

Side note: Docker and the comeback of container technology

Users dealing with container-based virtualisation will invariably encounter Docker at some point. Thanks to its outstanding marketing, the open source project has quickly become synonymous with container technology. The command line tool docker is used for starting, stopping, and managing containers. It relies on Linux kernel features such as cgroups and namespaces to separate the resources of individual containers. Initially, the LXC interface of the Linux kernel was used; these days, however, Docker containers use a self-developed library called libcontainer.
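Assuming a system with the Docker daemon installed, the start/stop/manage workflow described above might look like this on the command line; the container name `web` and the `nginx:alpine` image are arbitrary examples:

```shell
# Illustrative Docker CLI session (requires a running Docker daemon)
docker run -d --name web nginx:alpine   # start a container in the background
docker ps                               # list running containers
docker stop web                         # stop the container
docker rm web                           # remove it again
```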

One central feature of the Docker platform is Docker Hub, an online service that provides a repository for Docker images so that self-created images can easily be shared with other users. For Linux users, installing a pre-built server container is as simple as using an app store: applications can be downloaded from the central Docker Hub via simple command line instructions and run on your own system.
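For example, fetching and running a pre-built image from Docker Hub takes two commands on a system with Docker installed; `nginx` here is just a sample image name:

```shell
docker pull nginx                # download the image from the central Docker Hub registry
docker run -d -p 8080:80 nginx   # run it, mapping host port 8080 to the container's port 80
```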

Docker's biggest competitor in the container market is rkt, which supports Docker images as well as its own format, the App Container Image (ACI).

Characteristics of container-based virtualisation

With application containers, all the files required for operating a server application are provided in one handy package, allowing for a more streamlined installation and simpler operation of complex server programs. However, their main selling points are the management and automation of container-based applications.

  • Easier installation process: software containers are started from images, portable packages that bundle a single server program with all its required components, such as libraries, supporting programs, and configuration files. Differences between operating system distributions can thus be compensated for, allowing installation with a single command line instruction.
  • Platform independence: images can easily be transferred from one system to another and offer a high level of platform independence. To start a software container from an image, all you need is an operating system with a corresponding container platform.
  • Minimal virtualisation overhead: a minimal Linux system with Docker takes up around 100 megabytes and can be set up in a matter of minutes. It is not only the compact size that appeals to system administrators: the container solution also keeps virtualisation overhead to a minimum. This contrasts with the significantly reduced performance of hardware virtualisation, caused by the hypervisor and the additional guest operating systems. Furthermore, booting virtual machines can take several minutes, whereas containerised server applications are available almost immediately.
  • Isolated applications: every program in a server container runs independently of other software containers on the OS. This allows even applications with contradictory requirements to run in parallel on the same system with ease.
  • Standardised administration and automation: as all server containers are managed on one container platform (e.g. Docker) with the same tools, applications in the data centre can largely be automated. Container solutions are therefore especially suited to server structures in which individual components are distributed across multiple servers, so that the load is carried by several machines. For such use cases, Docker provides automation tools that enable new instances to start automatically at peak loads. Google also offers Kubernetes, a software solution tailored especially for Docker that orchestrates large container clusters.
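As a sketch of such orchestration, a Kubernetes Deployment manifest along the following lines would run several replicas of one container image across a cluster; the names and the image used here are hypothetical examples:

```yaml
# Hypothetical Kubernetes Deployment running three instances of a containerised server
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                  # scale out across the cluster to absorb peak loads
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine  # any container image from a registry
          ports:
            - containerPort: 80
```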

How secure are container solutions?

Forgoing separate operating systems gives container-based virtualisation a performance advantage, but this is accompanied by a reduced level of security. With hardware virtualisation, a security flaw in a guest operating system normally affects only that virtual machine; with operating-system-level virtualisation, a vulnerability in the shared kernel affects all software containers. Containers are therefore not encapsulated to the same extent as virtual machines with their own OS. Admittedly, an attack on the hypervisor could cause significant damage in hardware virtualisation systems, but thanks to its low complexity a hypervisor offers attackers fewer points of entry than, for instance, a full Linux kernel. Server containers are thus a credible alternative to hardware virtualisation, although for the time being they cannot be considered a complete replacement.
