Thanks to its user friendliness, Linux Containers is very popular and has become an important component of IT security. The eponymous LXC (Linux Containers) container platform isolates several processes from one another and from the rest of the system simultaneously. Virtualisation produces an image that keeps every container portable and consistent from creation through the test phase and into operation. Individual applications thus run in their own virtual environment while still sharing the kernel of the host system.
What does LXC (Linux Containers) provide? Is simplicity always an advantage? And how many virtual machines are there in a Linux container?
What is LXC (Linux Containers)?
The term Linux Containers (LXC) refers both to the virtualisation technology in the Linux kernel and to the underlying container platform built on it. This should be kept in mind when discussing alternative container platforms that also use Linux Containers as their base technology.
LXC is an open-source container platform that, through various tools, languages, templates, and libraries, offers a user-friendly, intuitive, and modern experience, which is quite atypical for container systems. In addition, the virtualisation environment can be installed and used on all current Linux distributions.
Containers are unique tools that are helpful for managing and developing applications in a way that was previously unthinkable. They make it possible to isolate applications from the system without actually completely isolating them. They can also exchange information and communicate with the outside world. Their arrival marked a veritable revolution, and container technology is now booming with countless providers competing on the market. The largest container-as-a-service providers include Amazon, Microsoft, and Google. The most popular platform is Docker, a further development of the Linux Containers (LXC) project, which is supported by all CaaS providers.
The idea for Linux Containers technology came about in 2001, when an isolated environment was first implemented within the framework of the VServer project. That work laid the basis for establishing several controlled namespaces in Linux and for what is now called Linux Containers. Other technologies followed, such as cgroups (control groups), which control and limit the resource usage of a process or an entire group of processes. After that came systemd, an initialisation system used to manage namespaces and their processes.
In practice, LXC allows applications to be developed more quickly. Container technology is useful for porting, configuration, and isolation. Containers also show their strengths in real-time data streaming, as they provide the scalability that such applications require. Linux containers adapt to an infrastructure largely on their own, so they can be used locally, in the cloud, or in a hybrid environment.
The reason for container technology’s popularity can be explained as follows: every application in an operating system has its own tasks, which it executes in that very environment. It relies on the existing configurations set up by a programmer and is thus dependent on certain libraries, contexts, and files. Containers can be used to test applications more easily, more rapidly, and more securely. As such, newly developed applications can run in a virtual environment without any problems and without needing to be debugged or rewritten. The content of a container is based on the installation of a Linux distribution and contains all configuration files, but it is much easier to set up than an actual operating system.
What are the goals and features of LXC?
The basic idea of LXC does not appear to be very different from the idea of classic virtualisation at first glance. Only when looking at the broader context do the differences become more obvious. The most basic principle is the following: Containers work at the level of the operating system, whereas virtual machines work at the hardware level. This means that containers split an operating system and isolate application processes from the rest of the system, whereas classic virtualisation allows several operating systems to run simultaneously on one system.
For several operating systems to be able to run simultaneously in a virtual environment, a hypervisor is used to emulate the hardware system. However, this implies a high usage of resources. A more compact solution is the use of application containers which can be run natively on an operating system, meaning without emulation.
Linux containers use fewer resources than a virtual machine and provide a standard interface from which several containers can be managed simultaneously. A platform with LXC can also be spread across several clouds. This provides portability and guarantees that applications running correctly on the developer’s system will also function correctly on every other system. Larger applications can be started or stopped, and their environment variables changed, from the Linux Containers interface.
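The basic container lifecycle described above is driven by a handful of lxc-* commands. A minimal sketch of such a session follows; it assumes the LXC tools are installed and is run as root, and the container name "demo" is just an example:

```shell
# Create a container named "demo" from a downloaded Ubuntu image
# (the download template fetches a prebuilt root filesystem).
lxc-create -n demo -t download -- -d ubuntu -r jammy -a amd64

# Start the container in the background and list running containers.
lxc-start -n demo
lxc-ls --fancy

# Run a command inside the container, e.g. to inspect its hostname.
lxc-attach -n demo -- hostname

# Stop and remove the container again.
lxc-stop -n demo
lxc-destroy -n demo
```

Because these are ordinary shell commands, they slot directly into the scripts and automation workflows mentioned later in this article.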
To summarise, the goal of LXC is to create an environment that comes as close as possible to a standard Linux installation without the need for a separate kernel.
The current Linux Containers platform uses the following kernel features to “enclose” applications and processes in containers:
- Kernel namespaces (ipc, uts, mount, pid, network and user)
- AppArmor and SELinux profiles
- Seccomp policies
- Chroots (using pivot_root)
- Kernel capabilities
- cgroups (control groups)
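The namespaces listed above can be inspected directly on any Linux system, since the kernel exposes each one as a symlink under /proc. A minimal sketch, assuming a standard /proc mount:

```shell
# Each namespace a process belongs to is exposed as a symlink in
# /proc/<pid>/ns; the link target names the namespace type and its
# inode number, e.g. "pid:[4026531836]".
ls -l /proc/self/ns

# Two processes share a namespace exactly when these inode numbers
# match; LXC gives containerised processes their own set.
readlink /proc/self/ns/pid
```

Comparing these inode numbers between a process on the host and one inside a container is a quick way to see the isolation at work.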
Linux containers are meant to be compact, and so they are composed of only a few separate components.
How does LXC work?
Isolation and virtualisation are important because they help manage resources and security as efficiently as possible. For example, they make it easier to track down errors in the system, which often have nothing to do with newly developed applications. But how does LXC work? Or, to put it another way: how do Linux Containers work?
The easiest and most sensible way to use Linux Containers is to link each container to a single process, which allows for complete control. Namespaces are especially important here: they make resources available only to the processes that share the same namespaces, and in doing so they also act as access controls that secure the containers.
To use an LXC environment, you need to understand its features and how they work. cgroups (kernel control groups) limit and isolate process resources, such as CPU, I/O, memory, and network resources. In addition, the contents of a control group can be managed, monitored, prioritised, and edited.
Everything is a file in Linux. This is why every cgroup is ultimately a collection of files (/sys/fs/cgroup). There are various tools to manage these types of files, such as CGManager.
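Because each cgroup is just a collection of files, its contents can be read with ordinary tools. A minimal sketch, assuming the cgroup filesystem is mounted at /sys/fs/cgroup, as is the default on most distributions:

```shell
# Show which cgroup(s) the current shell belongs to; each line has
# the form hierarchy-id:controller-list:cgroup-path.
cat /proc/self/cgroup

# The limits and usage counters of a cgroup are all plain files
# under its directory in /sys/fs/cgroup.
ls /sys/fs/cgroup
```

Writing to these files (as root) is how limits are changed, which is exactly what tools like CGManager automate.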
The functions are easily understandable, which has the advantage of making an LXC platform very beginner friendly. However, it also has some disadvantages which will be explained in the following section.
An overview of the pros and cons of Linux Containers
The user-friendliness of Linux Containers is its greatest advantage over classic virtualisation techniques. However, the incredible spread of LXC, with a virtually all-encompassing ecosystem and innovative tools, can mostly be attributed to Docker, the platform that grew out of Linux Containers. Compared to other container systems like rkt, OpenVZ, and Cloud Foundry Garden, which are significantly more limited in their usage, LXC benefits from its close ties to the forerunner among container platforms.
A system administrator who has already worked with a hypervisor-based virtualisation method like Hyper-V will have no problems using LXC. The entire set-up remains the same, including the creation of container templates and their deployment, the configuration of the operating system and establishing connections, and the deployment of applications. All scripts and workflows that have been written for virtual machines can also be used for Linux Containers. As such, developers are not given new, customised solutions or tools but can seamlessly continue to work in a familiar environment with their own scripts and automation workflows.
One major disadvantage of LXC becomes very obvious when it comes to storage management: even though various storage backends are supported (lvm, overlayfs, zfs, and btrfs), everything is saved to rootfs by default. There is no way to register images. In this respect, other container platforms offer smarter and more flexible solutions, both for storing containers and for managing images.
When is LXC used?
LXC is an open-source project that is financially supported by Canonical, the company behind the Linux distribution Ubuntu. The greatest support, however, comes from the user community which collectively develops stable versions and security updates and also pushes the project forward. Various editions of LXC now even come with ongoing support and regular security updates. Other versions are maintained as best as possible, usually until a newer, more stable version appears.
In most cases, Linux Containers is used as a supporting, supplementary container technology, which is not unusual in this field since, in contrast to virtual machines, containers are still a very new technology. However, you should keep in mind that container technology providers are constantly growing, along with the ecosystem around the technology.
LXC is currently an entirely viable alternative to traditional virtualisation solutions and is specifically geared towards VM administrators. The transition from a virtual machine to container technology is easier with Linux Containers than with other container technologies.
What are the alternatives to Linux Containers?
The most popular LXC alternative is Docker. This platform, based on Linux Containers, has been continuously developed over the past few years and can now also be run on Windows systems. As such, the largest cloud providers, such as Google, IBM, AWS and Azure, are now able to offer native Docker support.
A well-known container alternative (Linux) for the virtualisation of a complete server is OpenVZ. Like LXC, OpenVZ uses the kernel of the host operating system and makes the virtual server available to users in an isolated environment.
KVM is an open-source virtualisation technology that is already built into Linux. KVM stands for “Kernel-based Virtual Machine.” It can be used to convert Linux into a hypervisor, allowing the host computer to run multiple isolated environments.
Kubernetes originally came from Google, which was one of the first supporters of Linux Containers technology. This open-source platform automates the operation of Linux containers: entire groups of hosts running containers are assembled into clusters, making them very easy to manage.
When working with LXC, it is impossible to avoid the closely related LXD. The two terms and technologies are difficult to tell apart: LXD is a further development of LXC that adds a system daemon.