The comprehensive Docker ecosystem offers developers a number of possibilities for deploying applications, orchestrating containers and more. We'll go over the most important Docker tools and give you an overview of the most popular third-party projects that develop open-source Docker tools.

What are the essential Docker tools/components?

Today, Docker is far more than just a sophisticated platform for managing software containers. Developers have created a range of diverse Docker tools to make deploying applications via distributed infrastructure and cloud environments easier, faster and more flexible. In addition to tools for clustering and orchestration, there is also a central app marketplace and a tool for managing cloud resources.

Docker Engine

When developers say “Docker”, they are usually referring to the open-source client-server application that forms the basis of the container platform. This application is referred to as Docker Engine. The central components of Docker Engine are the Docker daemon, a REST API and a CLI (command line interface) that serves as the user interface.

With this design, you can talk to Docker Engine through command-line commands and manage Docker images, Dockerfiles and Docker containers conveniently from the terminal.
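For illustration, here is a minimal, hypothetical workflow: a Dockerfile describes an image, and the CLI builds and runs it (the image name and tag below are illustrative, not taken from the article).

```dockerfile
# Hypothetical minimal Dockerfile: builds an Alpine-based image
# whose container prints a greeting and exits.
FROM alpine:3.19
CMD ["echo", "Hello from Docker Engine"]
```

Saved as `Dockerfile`, this can be turned into an image with `docker build -t hello-demo .` and started with `docker run --rm hello-demo`; `docker ps -a` and `docker images` list the resulting containers and images. Each of these CLI commands is translated by the client into a call to the Docker daemon’s REST API.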

Image: Schematic representation of the Docker engine
The main components of the Docker engine: the Docker daemon, REST API and Docker CLI

You can find a detailed description of Docker Engine in our beginners’ guide Docker tutorial: installation and first steps.

Docker Hub

Docker Hub provides users with a cloud-based registry that allows Docker images to be downloaded, centrally managed and shared with other Docker users. Registered users can store Docker images publicly or in private repositories. Downloading a public image (known as pulling in Docker terminology) does not require a user account. An integrated tag mechanism enables the versioning of images.

In addition to the public repositories of other Docker users, there are also many resources from the Docker developer team and well-known open-source projects that can be found in the official repositories in Docker Hub. The most popular Docker images include the NGINX web server, the Redis in-memory database, the BusyBox Unix toolkit and the Ubuntu Linux distribution.

Image: Official repositories in Docker Hub
You can find more than 100,000 free images in the official Docker repositories.

Organisations are another important Docker Hub feature; they allow Docker users to create private repositories that are exclusively available to a select group of people. Access rights are managed within an organisation using teams and group memberships.

Docker Swarm

Docker Engine contains a native function that enables its users to manage Docker hosts in clusters called swarms. The cluster management and orchestration capabilities built into Docker Engine are based on the SwarmKit toolkit. In older versions of the container platform, Swarm is available as a standalone application.

Tip

Clusters are made up of any number of Docker hosts and are hosted on the infrastructure of an external IaaS provider or in your own data centre.

As a native Docker clustering tool, Swarm gathers a pool of Docker hosts into a single virtual host and serves the Docker REST API. Any Docker tool associated with the Docker daemon can access Swarm and scale across any number of Docker hosts. With the Docker Engine CLI, users can create swarms, distribute applications in the cluster, and manage the behaviour of the swarm without needing to use additional orchestration software.

Docker engines that have been combined into clusters run in swarm mode. Select this if you want to create a new cluster or add a Docker host to an existing swarm. Individual Docker hosts in a cluster are referred to as “nodes”. The nodes of a cluster can run as virtual hosts on the same local system, but more often a cloud-based design is used, where the individual nodes of the Docker swarm are distributed across different systems and infrastructures.

The software is based on a master-worker architecture. When tasks are to be distributed in the swarm, users pass a service to the manager node. The manager is then responsible for scheduling containers in the cluster and serves as the primary user interface for accessing swarm resources.

The manager node sends individual units, known as tasks, to worker nodes.

  • Services: services are central structures in Docker clusters. A service defines a task to be executed in a Docker cluster. A service pertains to a group of containers that are based on the same image. When creating a service, the user specifies which image and commands are used. In addition, services offer the possibility to scale applications. Users of the Docker platform simply define how many containers are to be started for a service.
  • Tasks: to distribute services in the cluster, they are divided into individual work units (tasks) by the manager node. Each task includes a Docker container as well as the commands that are executed in it.
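The relationship between services and tasks can be sketched with a hypothetical stack file (the service name, image and port below are illustrative assumptions):

```yaml
# Hypothetical stack file (stack.yml): one service based on the
# nginx image, with the swarm asked to run three replicas of it.
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"       # published through the swarm's routing mesh
    deploy:
      replicas: 3       # the manager schedules three tasks across the nodes
```

Passed to a manager node with `docker stack deploy -c stack.yml demo`, the manager splits the service into three tasks (one container each) and schedules them on available worker nodes.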

In addition to the management of cluster control and orchestration of containers, manager nodes by default can also carry out worker node functions – unless you restrict the tasks of these nodes strictly to management.

An agent program runs on every worker node. The agent accepts tasks and reports the status of the transferred tasks to the respective manager node. The following graphic shows a schematic representation of a Docker Swarm:

Image: Schematic representation of a Docker Swarm
The manager-worker architecture of a Docker Swarm

When implementing a Docker Swarm, users generally rely on Docker Machine.

Docker Compose

Docker Compose makes it possible to define multi-container applications and run them with a single command. The basic element of Compose is the central control file written in the data serialisation language YAML. The syntax of this compose file is similar to that of the open-source software Vagrant, which is used for creating and provisioning virtual machines.

In the docker-compose.yml file, you can define any number of software containers, including all dependencies, as well as their relationships to each other. Such multi-container applications are controlled according to the same pattern as individual software containers. Use the docker-compose command in combination with the desired subcommand to manage the entire life cycle of the application.

This Docker tool can be easily integrated into a cluster based on Swarm. This way, you can run multi-container applications created with Compose on distributed systems just as easily as you would on a single Docker host.

Another feature of Docker Compose is an integrated scaling mechanism. With the orchestration tool, you can comfortably use the command-line program to define how many containers you would like to start for a particular service.
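A hedged sketch of such a control file, assuming a hypothetical web front end with a Redis cache (service names and images are illustrative):

```yaml
# Hypothetical docker-compose.yml: a web application and the Redis
# instance it depends on, managed as one multi-container application.
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - cache         # start order: cache first, then web
  cache:
    image: redis:alpine
```

`docker-compose up -d` starts both containers; `docker-compose down` tears the application down again. The integrated scaling mechanism is exposed as `docker-compose up -d --scale web=3` (the fixed host port would have to be removed first, since three replicas cannot share it).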

What third-party Docker tools are there?

In addition to the in-house development from Docker Inc., there are various software tools and platforms from external providers that provide interfaces for the Docker Engine or have been specially developed for the popular container platform. Within the Docker ecosystem, the most popular open-source projects include the orchestration tool Kubernetes, the cluster management tool Shipyard, the multi-container shipping solution Panamax, the continuous integration platform Drone, the cloud-based operating system OpenStack and the D2iQ DC/OS data centre operating system, which is based on the cluster manager Mesos.

Kubernetes

Docker’s own orchestration tools like Swarm and Compose do not cover every use case. For this reason, various companies have been investing development work for years into creating tailor-made tools designed to facilitate the operation of the container platform in large, distributed infrastructures. Among the most popular solutions of this type is the open-source project Kubernetes.

Kubernetes is a cluster manager for container-based applications. The goal of Kubernetes is to automate the operation of applications in a cluster. To do this, the orchestration tool uses a REST API, a command line program and a graphical web interface as control interfaces. With these interfaces, automations can be initiated and status reports can be requested. You can use Kubernetes to:

  • execute container-based applications on a cluster,
  • install and manage applications in distributed systems,
  • scale applications, and
  • use hardware as best as possible.

To this end, Kubernetes combines containers into logical units referred to as pods. Pods represent the basic unit of the cluster manager and can be distributed in the cluster by scheduling.
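A minimal, hypothetical pod manifest illustrates this basic unit (the names and image are illustrative assumptions):

```yaml
# Hypothetical pod manifest (pod.yml): the smallest deployable unit
# in Kubernetes, here wrapping a single nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: nginx:alpine
      ports:
        - containerPort: 80
```

`kubectl apply -f pod.yml` submits the manifest to the API server; the scheduler then picks a node with sufficient free resources, and the agent on that node starts the container.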

Like Docker’s Swarm, Kubernetes is also based on a master-worker architecture. A cluster is composed of a Kubernetes master and a variety of workers, which are also called Kubernetes nodes (or minions). The Kubernetes master functions as a central control plane in the cluster and is made up of four basic components, allowing for direct communication in the cluster and task distribution. A Kubernetes master consists of an API server, the configuration memory etcd, a scheduler and a controller manager.

  • API server: all automations in the Kubernetes cluster are initiated via the REST API of the API server. This functions as the central administration interface in the cluster.
  • etcd: you can think of the open-source configuration memory etcd as the memory of a Kubernetes cluster. The key-value store, which CoreOS developed specifically for distributed systems, stores configuration data and makes it available to every node in the cluster. The current state of a cluster can be retrieved at any time via etcd.
  • Scheduler: the scheduler is responsible for distributing container groups (pods) in the cluster. It determines the resource requirements of a pod and then matches these with the available resources of the individual nodes in the cluster.
  • Controller manager: the controller manager is a service of the Kubernetes master and controls orchestration by regulating the state of the cluster and performing routine tasks. The main task of the controller manager is to ensure that the state of the cluster corresponds to the defined target state.

The components of the Kubernetes master can be located on the same host or distributed over several master hosts within a high-availability cluster.

While the Kubernetes master is responsible for orchestration, the pods distributed in the cluster run on hosts called Kubernetes nodes, which are subordinate to the master. To do this, a container engine needs to run on each Kubernetes node. While Docker is the de facto standard, Kubernetes is not tied to any specific container engine.

In addition to the container engine, Kubernetes nodes include the following components:

  • kubelet: kubelet is an agent that runs on each Kubernetes node and is used to control and manage the node. As the central point of contact of each node, kubelet is connected to the Kubernetes master and ensures that information is passed on to and received from the control plane.
  • kube-proxy: in addition, the proxy service kube-proxy runs on every Kubernetes node. This ensures that requests from the outside are forwarded to the respective containers and provides services to users of container-based applications. The kube-proxy also offers rudimentary load balancing.

The following graphic shows a schematic representation of the master-node architecture on which the orchestration platform Kubernetes is based:

Image: Schematic representation of the Kubernetes architecture
The master-node architecture of the orchestration platform Kubernetes

In addition to the core project Kubernetes, there are numerous tools and extensions that add more functionality to the orchestration platform. The most popular are the monitoring and error diagnosis tools Prometheus, Weave Scope and sysdig, as well as the package manager Helm. Plugins also exist for Apache Maven and Gradle, as well as a Java API for remotely controlling Kubernetes.

Shipyard

Shipyard is a community-developed management solution based on Swarm that allows users to maintain Docker resources like containers, images, hosts and private registries via a graphical user interface. It is available as a web application via the browser. In addition to the cluster management features that can be accessed via a central web interface, Shipyard also offers user authentication and role-based access control.

The software is 100% compatible with the Docker remote API and uses the open-source NoSQL database RethinkDB to store data such as user accounts, addresses and events. The software is based on the cluster management toolkit Citadel and is made up of three main components: controller, API and UI.

  • Shipyard controller: the controller is the core component of the management tool Shipyard. The Shipyard controller interacts with RethinkDB to store data and makes it possible to address individual hosts in a Docker cluster and to control events.
  • Shipyard API: the Shipyard API is based on REST. All functions of the management tool are controlled via the Shipyard API.
  • Shipyard user interface (UI): the Shipyard UI is an AngularJS app, which presents users with a graphical user interface for the management of Docker clusters in the web browser. All interactions in the user interface take place via the Shipyard API.

Further information about the open-source project can be found on the official Shipyard website.

Panamax

The developers of the open-source software project Panamax aim to simplify the deployment of multi-container apps. The free tool offers users a graphical user interface that allows complex applications based on Docker containers to be conveniently developed, deployed and distributed using drag-and-drop.

Panamax makes it possible to save complex multi-container applications as application templates and distribute them in cluster architectures with just one click. Using an integrated app marketplace hosted on GitHub, templates for self-created applications can be stored in Git repositories and made available to other users.

The basic components of the Panamax architecture can be divided into two groups: the Panamax local client and any number of remote deployment targets.

The Panamax local client is the core component of this Docker tool. It is executed on the local system and allows complex container-based applications to be created. The local client is comprised of the following components:

  • CoreOS: installation of the Panamax local client requires the Linux distribution CoreOS as its host system, which has been specifically designed for software containers. The Panamax client is then run as a Docker container in CoreOS. In addition to the Docker features, users have access to various CoreOS functions, including Fleet and Journalctl:
      • Fleet: instead of integrating directly with Docker, the Panamax client uses the cluster manager Fleet to orchestrate its containers. Fleet is a cluster manager that controls the Linux daemon systemd in computer clusters.
      • Journalctl: the Panamax client uses Journalctl to request log messages from the Linux system manager systemd from the journal.
  • Local client installer: the local client installer contains all components necessary for installing the Panamax client on a local system.
  • Panamax local agent: the central component of the local client is the local agent. This is linked to various other components and dependencies via the Panamax API. These include the local Docker host, the Panamax UI, external registries and the remote agents of the deployment targets in the cluster. The local agent interacts with the following program interfaces on the local system via the Panamax API to exchange information about running applications:
      • Docker remote API: Panamax searches for images on the local system via the Docker remote API and obtains information about running containers.
      • etcd API: files are transmitted to the CoreOS Fleet daemon via the etcd API.
      • systemd-journal-gatewayd.services: Panamax obtains the journal output of running services via systemd-journal-gatewayd.services.

In addition, the Panamax API also enables interactions with various external APIs:

  • Docker registry API: Panamax obtains image tags from the Docker registry via the Docker registry API.
  • GitHub API: Panamax loads templates from the GitHub repository using the GitHub API.
  • KissMetrics API: the KissMetrics API collects data about templates that users run.
  • Panamax UI: the Panamax UI functions as a user interface on the local system and enables users to control the Docker tool via a graphical interface. User input is forwarded directly to the local agent via the Panamax API. The Panamax UI is based on the CTL Base UI Kit, a library of UI components for web projects from CenturyLink.

In Panamax terminology, each node in a Docker cluster without management tasks is referred to as a remote deployment target. Deployment targets consist of a Docker host that is configured to deploy Panamax templates with the help of the following components:

  • Deployment target installer: the deployment target installer starts a Docker host, complete with a Panamax remote agent and orchestration adapter.
  • Panamax remote agent: if a Panamax remote agent is installed, applications can be distributed via the local Panamax client to any desired endpoint in the cluster. The Panamax remote agent runs as a Docker container on every deployment target in the cluster.
  • Panamax orchestration adapter: in the orchestration adapter, the program logic for each orchestration tool available for Panamax is provided in an independent adapter layer. Because of this, users always have the option of choosing exactly the orchestration technology supported by their target environment. Pre-configured adapters include Kubernetes and Fleet:
      • Panamax Kubernetes adapter: in combination with the Panamax remote agent, the Panamax Kubernetes adapter enables the distribution of Panamax templates in Kubernetes clusters.
      • Panamax Fleet adapter: in combination with the Panamax remote agent, the Panamax Fleet adapter enables the distribution of Panamax templates in clusters controlled with the help of the Fleet cluster manager.

The following graphic shows the interplay between the individual Panamax components in a Docker cluster:

Image: Schematic representation of the software architecture for the Panamax container management tool
The software architecture of the Panamax container management tool

The CoreOS-based Panamax container management tool provides users with a variety of standard container orchestration technologies through a graphical user interface, as well as the option to conveniently manage complex multi-container applications in cluster architectures from any system (e.g. your own laptop).

With its public template repository, Panamax gives users access to a library of templates for various resources via GitHub.

Drone

Drone is a lean continuous integration platform with minimal requirements. With this Docker tool, you can automatically load your newest build from a Git repository like GitHub and test it in isolated Docker containers. You can run any test suite and send reports and status messages via email. For every software test, a new container based on images from the public Docker registry is created. This means any publicly available Docker image can be used as the environment for testing the code.

Tip

Continuous integration (CI) refers to a process in software development in which newly developed software components (builds) are merged and run in test environments at regular intervals. CI is a strategy for efficiently recognising and resolving integration errors that can arise from collaboration between different developers.

Drone is integrated with Docker and supports various programming languages, such as PHP, Node.js, Ruby, Go and Python. The container platform is its only true dependency. You can create your own personal continuous integration platform with Drone on any system that Docker can be installed on. Drone supports various version control repositories, and you can find a guide for the standard installation with GitHub integration on the open-source project’s website under readme.drone.io.

Managing the continuous integration platform takes place via a web interface. Here you can load software builds from any Git repository, merge them into applications, and run the result in a pre-defined test environment. To do this, a .drone.yml file is defined that specifies how to create and run the application for each software test.
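Assuming a hypothetical Python project with a pytest suite, a .drone.yml in Drone's 1.x pipeline syntax might look like this (step name and image are illustrative):

```yaml
# Hypothetical .drone.yml: each step runs in its own container
# created from a publicly available Docker image.
kind: pipeline
type: docker
name: default

steps:
  - name: test
    image: python:3.12-alpine   # any public image can serve as the test environment
    commands:
      - pip install -r requirements.txt
      - python -m pytest
```

On every push, Drone clones the repository into a fresh container based on the named image and runs the listed commands; a non-zero exit code marks the build as failed.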

Drone provides users with an open-source CI solution that combines the strengths of alternative products like Travis CI and Jenkins in a user-friendly application.

OpenStack

When it comes to building and operating cloud infrastructures, the open-source cloud operating system OpenStack is the software solution of choice.

With OpenStack you can manage compute, storage and network resources from a central dashboard and make them available to end users via a web interface.

The cloud operating system is based on a modular architecture comprised of multiple components:

  • Zun (container service): Zun is OpenStack’s container service and enables the easy deployment and management of containerised applications in the OpenStack cloud. The purpose of Zun is to allow users to manage containers through a REST API without having to manage servers or clusters. To operate Zun, you’ll need three other OpenStack services: Keystone, Neutron and kuryr-libnetwork. The functionality of Zun can also be expanded through additional OpenStack services such as Cinder and Glance.
  • Neutron (network component): Neutron (formerly Quantum) is a portable, scalable, API-driven system component for network control. The module provides an interface for complex network topologies and supports various plugins through which extended network functions can be integrated.
  • kuryr-libnetwork (Docker driver): kuryr-libnetwork is a driver that acts as an interface between Docker and Neutron.
  • Cinder (block storage): Cinder is the code name of the component in the OpenStack architecture that provides persistent block storage for the operation of VMs. The module provides virtual storage via a self-service API. Through this, end users can make use of storage resources without being aware of which device is providing the storage.
  • Keystone (identity service): Keystone provides OpenStack users with a central identity service. The module functions as an authentication and permissions system between the individual OpenStack components. Access to projects in the cloud is regulated via tenants. Each tenant represents a user, and several user accesses with different rights can be defined.
  • Glance (image service): with the Glance module, OpenStack provides a service that allows images of VMs to be stored and retrieved.

You can find more information about OpenStack components and services in our article on OpenStack.

In addition to the components mentioned above, the OpenStack architecture can be extended using various modules. You can read about the different optional modules on the OpenStack website.

D2iQ DC/OS

DC/OS (Distributed Cloud Operating System) is open-source software for the operation of distributed systems, developed by D2iQ Inc. (formerly Mesosphere). The project is based on the open-source cluster manager Apache Mesos and can be thought of as an operating system for data centres. The source code is available to users under the Apache License, Version 2.0 in the DC/OS repositories on GitHub. An enterprise version of the software is also available at d2iq.com. Extensive project documentation can be found at dcos.io.

You can think of DC/OS as a Mesos distribution that provides you with all the features of the cluster manager via a central user interface and expands upon Mesos considerably.

DC/OS uses the distributed systems kernel of the Mesos platform. This makes it possible to bundle the resources of an entire data centre and manage them as a single aggregated system, like one logical server. This way, you can control entire clusters of physical or virtual machines as easily as you would operate a single computer.

The software simplifies the installation and management of distributed applications and automates tasks such as resource management, scheduling and inter-process communication. A cluster based on D2iQ DC/OS, as well as its included services, is managed via a central command line program (CLI) or web interface (GUI).

DC/OS isolates the resources of the cluster and provides shared services such as service discovery and package management. The core components of the software run in a protected area, the kernel space. This includes the master and agent programs of the Mesos platform, which are responsible for resource allocation, process isolation and security functions.

  • Mesos master: the Mesos master is a master process that runs on a master node. The purpose of the Mesos master is to control resource management and orchestrate tasks (abstract work units) that are carried out on agent nodes. To do this, the Mesos master distributes resources to registered DC/OS services and accepts resource reports from Mesos agents.

  • Mesos agents: Mesos agents are processes that run on agent nodes and are responsible for executing the tasks distributed by the master. Mesos agents deliver regular reports about the available resources in the cluster to the Mesos master, which forwards them to a scheduler (e.g. Marathon, Chronos or Cassandra). The scheduler decides which task to run on which node. The tasks are then executed in an isolated manner in a container.

All other system components, as well as applications run by the Mesos agents via executors, run in user space. The basic components of a standard DC/OS installation are the admin router, Mesos DNS, a distributed DNS proxy, the load balancer Minuteman, the scheduler Marathon, Apache ZooKeeper and Exhibitor.

  • Admin router: the admin router is a specially configured web server based on NGINX that provides DC/OS services as well as central authentication and proxy functions.
  • Mesos DNS: the system component Mesos DNS provides service discovery functions that enable individual services and applications in the cluster to identify each other through a central domain name system (DNS).
  • Distributed DNS proxy: the distributed DNS proxy is an internal DNS dispatcher.
  • Minuteman: the system component Minuteman functions as an internal load balancer that works on the transport layer (Layer 4) of the OSI reference model.
  • DC/OS Marathon: Marathon is a central component of the Mesos platform that functions in D2iQ DC/OS as an init system (similar to systemd). Marathon starts and supervises DC/OS services and applications in cluster environments. In addition, the software provides high-availability features, service discovery, load balancing, health checks and a graphical web interface.
  • Apache ZooKeeper: Apache ZooKeeper is an open-source software component that provides coordination functions for the operation and control of applications in distributed systems. ZooKeeper is used in D2iQ DC/OS for the coordination of all installed system services.
  • Exhibitor: Exhibitor is a system component that is automatically installed and configured with ZooKeeper on every master node. Exhibitor also provides a graphical user interface for ZooKeeper users.

Diverse workloads can be executed at the same time on the cluster resources aggregated via DC/OS. This enables, for example, the parallel operation of big data systems, microservices or container platforms such as Hadoop, Spark and Docker on the cluster operating system.

Within the D2iQ Universe, a public app catalogue is available for DC/OS. With it, you can install applications like Spark, Cassandra, Chronos, Jenkins or Kafka with a simple click in the graphical user interface.

What Docker tools are there for security?

Even though the encapsulated processes running in containers share the same kernel, Docker uses a number of techniques to isolate them from each other. Core functions of the Linux kernel, such as cgroups and namespaces, are usually used to do this.

Containers, however, still do not offer the same degree of isolation that can be accomplished with virtual machines. Despite the use of isolation techniques, important kernel subsystems such as cgroups, as well as kernel interfaces in the /sys and /proc directories, can still be reached from within containers.

The Docker development team has acknowledged that these safety concerns are an obstacle to the establishment of container technology on production systems. In addition to the fundamental isolation techniques of the Linux kernel, newer versions of Docker Engine also support the frameworks AppArmor, SELinux and Seccomp, which function as a type of firewall for kernel resources.

  • AppArmor: AppArmor regulates containers’ access rights to the file system.
  • SELinux: SELinux provides a complex rule system with which access control to kernel resources can be implemented.
  • Seccomp: Seccomp (secure computing mode) monitors the invocation of system calls.

In addition to these Docker tools, Docker also uses Linux capabilities to restrict the root permissions with which Docker Engine starts containers.
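In practice, these restrictions are applied per container. A hedged sketch in Compose syntax (the service name, image and profile path are illustrative assumptions, not values from the article):

```yaml
# Hypothetical hardened service definition: capabilities are dropped,
# privilege escalation is blocked, and a custom seccomp profile
# filters the system calls the container may invoke.
services:
  web:
    image: nginx:alpine
    cap_drop:
      - ALL                     # start with no Linux capabilities at all
    cap_add:
      - NET_BIND_SERVICE        # re-add only what the service actually needs
    security_opt:
      - no-new-privileges:true          # forbid gaining privileges via setuid binaries
      - seccomp:./seccomp-profile.json  # illustrative path to a custom profile
```

The same restrictions can be applied to a single container via docker run flags, e.g. `docker run --cap-drop ALL --cap-add NET_BIND_SERVICE --security-opt seccomp=seccomp-profile.json …`.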

Other security concerns also exist regarding software vulnerabilities within application components distributed via the Docker registry. Since essentially anyone can create Docker images and make them publicly accessible to the community in Docker Hub, there’s the risk of introducing malicious code to your system when downloading an image. Before deploying an application, Docker users should therefore make sure that all of the code provided in an image for the execution of containers stems from a trustworthy source.

Docker offers a verification program that software providers can use to have their Docker images checked and verified. With this verification program, Docker aims to make it easier for developers to build secure software supply chains for their projects. In addition to increasing security for users, the program aims to offer software developers a way to differentiate their projects from the multitude of other resources that are available. Verified images are marked with a Verified Publisher badge and, in addition to other benefits, given a higher ranking in Docker Hub search results.
