CRI-O is an implementation of the Container Runtime Interface (CRI) for Kubernetes, using ‘Open Container Initiative’ (OCI) images and runtimes. The project was launched in 2016 by the company Red Hat and handed over to the ‘Cloud Native Computing Foundation’ (CNCF) in spring 2019.


How does CRI-O work?

To understand how CRI-O works and how it interacts with related technologies, it is worth looking at the historical development of container-based virtualisation. Its starting point was the Docker software, which brought the virtualisation of individual apps in lightweight containers into the mainstream. Before that, virtualisation was primarily understood to mean the use of virtual machines. A virtual machine contains an entire operating system, whereas several containers share a single operating system kernel.

From Docker to Kubernetes to CRI-O

A container usually contains a single app, which often provides a microservice. In practical use, several containers are usually controlled together to implement an application. The coordinated management of entire groups of containers is known as orchestration.

Even though orchestration with Docker and tools like Docker Swarm is feasible, Kubernetes has prevailed as an alternative to Docker. Kubernetes combines several containers in a so-called pod. The pods in turn run on so-called nodes – these can be both physical and virtual machines.

One of the main problems with Docker was its monolithic architecture. The Docker daemon ran with root privileges and was responsible for a multitude of different tasks: from downloading container images, to executing them in the runtime environment, to creating new images. This merging of independent responsibilities violates the software development principle of ‘separation of concerns’ and led to security issues in practice. Efforts were therefore made to decouple the individual components.

When Kubernetes was released, the Kubernetes node agent kubelet contained a hard-coded Docker runtime. However, the need to support other runtimes soon became apparent. Modularising the individual aspects promised simpler development and higher security. To make various runtimes compatible with Kubernetes, an interface was defined: the Container Runtime Interface (CRI). CRI-O is a specific implementation of this interface.
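In practice, the kubelet is told which CRI implementation to talk to via a socket endpoint. The following is a minimal sketch of how this could look for CRI-O on a recent Kubernetes version; the socket path shown is CRI-O's common default, but it may differ between distributions, and on older Kubernetes versions the endpoint is passed as the `--container-runtime-endpoint` kubelet flag instead:

```yaml
# Sketch of a kubelet configuration fragment pointing at CRI-O's CRI socket.
# Assumption: CRI-O listens on its default socket /var/run/crio/crio.sock.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
```

Because the kubelet only speaks the CRI protocol over this socket, swapping CRI-O for another CRI implementation is a matter of changing the endpoint, not the Kubernetes code.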


Architecture and function of CRI-O

The following components are part of CRI-O:

  • The software library containers/image to download container images from various online sources.
  • The software library containers/storage to manage container layers and create the file system for container pods.
  • An OCI-compatible runtime to run the containers; the default runtime is runC, but other OCI-compatible runtimes such as Kata Containers can be used.
  • The Container Network Interface (CNI) to set up the network for a pod; plugins for Flannel, Weave and OpenShift-SDN, among others, can be used.
  • The container monitoring tool conmon, which continuously monitors each container.
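These components come together in CRI-O's configuration file. As a hedged illustration, the following sketch of an `/etc/crio/crio.conf` fragment shows where the monitoring tool, the OCI runtime and the CNI directories are wired up; the paths shown are common defaults and may differ on your system:

```toml
# Sketch of /etc/crio/crio.conf (TOML); values are typical defaults, not
# a definitive configuration.

[crio.runtime]
# conmon binary that monitors each running container
conmon = "/usr/bin/conmon"
# name of the OCI runtime to launch containers with
default_runtime = "runc"

[crio.runtime.runtimes.runc]
runtime_path = "/usr/bin/runc"

[crio.network]
# where CNI network configurations and plugin binaries are found
network_dir = "/etc/cni/net.d/"
plugin_dirs = ["/opt/cni/bin/"]
```

Adding another `[crio.runtime.runtimes.<name>]` table is how an alternative runtime such as Kata Containers can be made available alongside runC.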

CRI-O is often used in conjunction with the pod management tool Podman. This works because Podman relies on the same libraries for downloading container images and managing container layers as CRI-O.
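As a brief sketch, the familiar Podman commands below exercise exactly those shared libraries (containers/image for the pull, containers/storage for the local store); note that sharing one image store with CRI-O additionally requires both tools to be configured for the same storage location:

```shell
# Pull an image via containers/image into the containers/storage store
podman pull docker.io/library/alpine:latest

# List what is in the local store
podman images

# Run a container from the stored image
podman run --rm docker.io/library/alpine:latest echo "hello"
```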

In principle, using CRI-O consists of the following steps:

  1. Download an OCI container image
  2. Extract the image into an OCI runtime filesystem bundle
  3. Run the container with an OCI-compatible runtime
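The three steps can be reproduced by hand with standalone OCI tools, which is a useful way to see what CRI-O does internally via its libraries. The following sketch assumes skopeo, umoci and runc are installed and that the commands run with root privileges:

```shell
# 1. Download an OCI container image into a local OCI layout
skopeo copy docker://docker.io/library/alpine:latest oci:alpine:latest

# 2. Extract the image into an OCI runtime filesystem bundle
#    (creates alpine-bundle/ with config.json and rootfs/)
umoci unpack --image alpine:latest alpine-bundle

# 3. Run the container with an OCI runtime
runc run --bundle alpine-bundle mycontainer
```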

When is CRI-O used?

Currently, CRI-O is primarily used as part of Red Hat’s OpenShift product line. OpenShift implementations exist for the cloud platforms of all major providers. Furthermore, the software can be operated as part of the OpenShift Container Platform in public or private data centres. Here is an overview of the various OpenShift products:

| Product | Infrastructure | Managed by | Supported by |
| --- | --- | --- | --- |
| Red Hat OpenShift Dedicated | AWS, Google Cloud | Red Hat | Red Hat |
| Microsoft Azure Red Hat OpenShift | Microsoft Azure | Red Hat and Microsoft | Red Hat and Microsoft |
| Amazon Red Hat OpenShift | AWS | Red Hat and AWS | Red Hat and AWS |
| Red Hat OpenShift on IBM Cloud | IBM Cloud | IBM | Red Hat and IBM |
| Red Hat OpenShift Online | Red Hat | Red Hat | Red Hat |
| Red Hat OpenShift Container Platform | Private cloud, public cloud, physical machine, virtual machine, edge | Customer | Red Hat, others |
| Red Hat OpenShift Kubernetes Engine | Private cloud, public cloud, physical machine, virtual machine, edge | Customer | Red Hat, others |

What differentiates CRI-O from other runtimes?

CRI-O is a relatively new development in container virtualisation, and several alternative container runtimes predate it. Perhaps CRI-O's most distinctive feature is its singular focus on Kubernetes as its environment. With CRI-O, Kubernetes can run containers directly, without additional tools or special code adjustments. CRI-O also directly supports the existing OCI-compatible runtimes. Here is an overview of actively developed and frequently used runtimes:

| Runtime | Type | Description |
| --- | --- | --- |
| runC | Low-level OCI runtime | De facto standard runtime that emerged from Docker; written in Go |
| crun | Low-level OCI runtime | High-performance runtime implemented in C instead of Go |
| Kata Containers | Virtualised OCI runtime | Runs containers in lightweight virtual machines (VMs) |
| containerd | High-level CRI runtime | Uses runC as its standard runtime |
| CRI-O | Lightweight CRI runtime | Can use runC, crun and Kata Containers, among others |