With the container platform Docker, you can quickly, conveniently, and efficiently distribute applications across the network as tasks. All you need is the swarm cluster manager, which has been included since version 1.12.0 as “swarm mode”, a native part of the Docker engine and thus of the container platform’s core software. Docker Swarm allows you to scale container applications by operating them in any number of instances on any number of nodes in your network. If, on the other hand, you want to run a multi-container application - called a “stack” in Docker - in a cluster, you’ll need the Docker Compose tool. Here, we explain the basic concepts of Docker orchestration with Swarm and Compose and illustrate their implementation with code examples.

Docker Swarm

Swarm is a piece of software from the makers of Docker that consolidates any number of Docker hosts into a cluster and enables central cluster management as well as the orchestration of containers. Up to Docker version 1.11, Swarm had to be implemented as a separate tool. Newer versions of the container platform, though, support a native swarm mode. The cluster manager is thus available to every Docker user with the installation of the Docker engine.

A master-slave architecture forms the basis of Docker Swarm. Each Docker cluster consists of at least one manager and any number of worker nodes. While the swarm manager is responsible for the management of the cluster and the delegation of tasks, the swarm workers handle the execution. Container applications are distributed across the worker nodes as so-called “services”.

In Docker terminology, the term “service” refers to an abstract structure for defining tasks that are to be carried out in the cluster. Each service corresponds to a set of individual tasks, each of which is processed in a separate container on one of the nodes in the cluster. When you create a service, you specify which container image it’s based on and which commands are run in the container. Docker Swarm supports two modes in which services are defined: replicated and global services.

  • Replicated services: A replicated service is a task that is run in a user-defined number of replicas. Each replica is an instance of the Docker container defined in the service. Replicated services are scaled by creating additional replicas: a web server such as NGINX, for example, can be scaled to 2, 4, or 100 instances with a single command.
  • Global services: If a service is run in global mode, every available node in the cluster starts a task for the corresponding service. If a new node is added to the cluster, the swarm manager immediately assigns it a task for the global service. Global services are suitable for monitoring applications or anti-virus programs, for example.
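The two service modes can be sketched with a few commands (a sketch: the image and service names and replica counts are illustrative, an initialised swarm is assumed, and the commands must be run on a manager node):

```shell
# Replicated service: 4 NGINX instances, published on port 8080.
docker service create --name web --replicas 4 --publish 8080:80 nginx

# Scale the replicated service to 10 instances with a single command.
docker service scale web=10

# Global service: exactly one task per node, e.g. a monitoring agent
# ("monitoring-agent" is a hypothetical image name).
docker service create --name monitor --mode global monitoring-agent
```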

A central field of application for Docker Swarm is load distribution. In swarm mode, Docker offers integrated load-balancing functions. If you run an NGINX web server with 4 instances, for example, Docker intelligently distributes the incoming requests among the available web server instances.

Docker Compose

Docker Compose allows you to define multi-container applications - or “stacks” - and run them either on a single Docker node or in a cluster. The tool provides command line commands for managing the entire lifecycle of your applications.
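The lifecycle management mentioned here boils down to a handful of subcommands (a sketch; it assumes a project directory containing a docker-compose.yml, and the service name web is illustrative):

```shell
docker-compose up -d       # create and start all services in the background
docker-compose ps          # list the containers belonging to the stack
docker-compose logs web    # show the output of a single service
docker-compose stop        # stop the containers without removing them
docker-compose down        # stop and remove the containers and networks
```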

Docker defines stacks as groups of interconnected services that share software dependencies and are orchestrated and scaled together. A Docker stack enables you to define various functions of an application in a central file - the docker-compose.yml - and to start them from there, run them together in an isolated runtime environment, and manage them centrally.

Depending on which operating system you’re using to run Docker, Compose may need to be installed separately.

If you use the Docker container platform as part of the desktop installations Docker for Mac or Docker for Windows, then Docker Compose is already included. The same goes for the Docker Toolbox, which is available for older Mac or Windows systems. If you use Docker on Linux or on Windows Server 2016, a manual installation of the tool is required.

Compose installation on Linux

Open the terminal and run the following command to download the Compose binary files from the GitHub repository:

sudo curl -L https://github.com/docker/compose/releases/download/1.18.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

Permit all users to run the binary files:

sudo chmod +x /usr/local/bin/docker-compose

To check if the tool has been installed correctly, run the following command:

docker-compose --version

If the installation was successful, you’ll receive the version number of the tool as a terminal output.

Compose installation on Windows Server 2016 (Docker EE for Windows only)

Start the PowerShell as an administrator and run the following command to start the download of the Compose binary files from the GitHub repository:

Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.18.0/docker-compose-Windows-x86_64.exe" -UseBasicParsing -OutFile "$Env:ProgramFiles\docker\docker-compose.exe"

Start the ex­ecut­able file to install Docker Compose.

Note
Further information on Docker tools like Swarm and Compose can be found in our article on the Docker ecosystem.

Tutorial: Docker Swarm and Compose in use

To operate multi-container apps in a cluster with Docker, you need a swarm - a Docker engine cluster in swarm mode - as well as the Docker Compose tool.

In the first part of our tutorial, you’ll learn how to create your own swarm in Docker in just a few steps. The creation of multi-container apps with Docker Compose and their deployment in the cluster are discussed in the second part.

Tip
An introduction to Docker as well as a step-by-step manual on installing the Docker engine on Linux can be found in our basics article on container platforms.

Part 1: Docker in swarm mode

A swarm is any number of Docker engines in swarm mode. Each Docker engine runs on a separate node and is integrated into the cluster.

The creation of a Docker cluster involves three steps:

  1. Prepare Docker hosts
  2. Initialise swarm
  3. Integrate Docker hosts in the swarm
Note
Alternatively, an individual Docker engine can be put into swarm mode in a local development environment. This is referred to as a single node swarm.

Step 1: Prepare Docker hosts

For the preparation of Docker nodes, it’s recommended to use the provisioning tool Docker Machine. It simplifies the deployment of Docker hosts (also called “Dockerised hosts”: virtual hosts including the Docker engine). With Docker Machine, you can provision hosts for your swarm on any infrastructure and manage them remotely. Various cloud platforms provide driver plugins for Docker Machine, which reduces the effort of provisioning Docker hosts with providers such as Amazon Web Services (AWS) or DigitalOcean to a single line of code. Use the following code to create a Docker host (here: docker-sandbox) in the DigitalOcean infrastructure.

$ docker-machine create --driver digitalocean --digitalocean-access-token xxxxx docker-sandbox

Create a Docker host in AWS (here: aws-sandbox) with the following command:

$ docker-machine create --driver amazonec2 --amazonec2-access-key AKI******* --amazonec2-secret-key 8T93C******* aws-sandbox
Note
The characters xxxxx and ****** function as placeholders for individual access codes or keys, which you generate with your user account for the service in question.

Step 2: Initialise swarm

If you’ve prepared the desired number of virtual hosts for your swarm, you can manage them via Docker Machine and con­sol­id­ate them into a cluster with Docker Swarm. First, access the node that you would like to use as the swarm manager. Docker Machine provides the following command for building an SSH-encrypted con­nec­tion to the Docker host.

docker-machine ssh MACHINE-NAME
Note
Replace the MACHINE-NAME placeholder with the name of the Docker host that you want to access.

Once the connection to the desired node is established, use the following command to initialise a swarm.

docker swarm init [OPTIONS]

The command docker swarm init - with options, if desired (see documentation) - defines the currently selected node as swarm manager and creates two random tokens: a manager token and a worker token.

Swarm initialized: current node (1ia0jlt0ylmnfofyfj2n71w0z) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-511cy9taxx5w47n80vopivx6ii6cjpi71vfncqhcfcawxfcb14-6cng4m8lhlrdfuq9jgzznre1p \
10.0.2.15:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

The command docker swarm init generates a terminal output that contains all of the information you need to add additional nodes to your swarm.

Note
In general, docker swarm init is used with the flag --advertise-addr, which specifies the IP address to be used for API access and overlay networking. If the IP address isn’t explicitly defined, Docker automatically checks which IP address the selected system is reachable at and selects it. If a node has more than one IP address, the flag has to be set explicitly. Unless specified otherwise, Docker uses port 2377.
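On a host with several network interfaces, the flag might be set as follows (a sketch; the IP address is illustrative and must be one under which the other nodes can reach the manager):

```shell
# Initialise the swarm and advertise a specific address to the other
# nodes; without :PORT, the default port 2377 is used.
docker swarm init --advertise-addr 10.0.2.15:2377
```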

Step 3: Integrate Docker hosts in the swarm

After you’ve ini­tial­ized your swarm with the chosen node as swarm manager, add any numbers of nodes as managers or workers. Use the command docker swarm join in com­bin­a­tion with the cor­res­pond­ing token.

3.1 Add worker nodes: If you would like to add a worker node to your swarm, access the corresponding node via docker-machine and run the following command:

docker swarm join [OPTIONS] HOST:PORT

A mandatory component of the docker swarm join command is the flag --token, which contains the token for access to the cluster.

docker swarm join \
--token SWMTKN-1-511cy9taxx5w47n80vopivx6ii6cjpi71vfncqhcfcawxfcb14-6cng4m8lhlrdfuq9jgzznre1p \
10.0.2.15:2377

In the current example, the command contains the previously generated worker token as well as the IP address under which the swarm manager is available.

If you don’t have the cor­res­pond­ing token on hand, you can identify it via docker swarm join-token worker.

3.2 Add manager nodes: If you’d like to add another manager node to your swarm, first identify the manager token. To do so, run the command docker swarm join-token manager on the manager node on which the swarm was initialised, and follow the instructions in the terminal.

Docker generates a manager token that you can run in combination with the command docker swarm join and the given IP address on any number of Docker hosts to integrate them into the swarm as managers.

$ sudo docker swarm join-token manager
To add a manager to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-511cy9taxx5w47n80vopivx6ii6cjpi71vfncqhcfcawxfcb14-ed2ct6pg5rc6vp8bj46t08d0i \
10.0.2.15:2377

3.3 Overview of all nodes in the swarm: An overview of all nodes integrated into your swarm can be obtained by running the management command docker node ls on one of your manager nodes.

ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
jhy7ur9hvzvwd4o1pl8veqms3    worker2   Ready   Active
jukrzzii3azdnub9jia04onc5    worker1   Ready   Active
1ia0jlt0ylmnfofyfj2n71w0z *  osboxes   Ready   Active        Leader

The manager node leading the swarm is labeled as Leader in the overview.

Note
If you’d like to delete a node from your swarm, log in to the cor­res­pond­ing host and run the command docker swarm leave. If the node is a swarm manager, you must force the execution of the command using the flag --force.

Part 2: Run multi-container app in cluster

In the first part of our Docker tutorial, we provisioned Docker hosts with Docker Machine and consolidated them into a cluster in swarm mode. Now we’ll show you how to define various services as a compact multi-container app with the help of Docker Compose and run them in the cluster.

The preparation of multi-container apps in a cluster involves five steps:

  1. Create local Docker registry
  2. Define multi-container app as stack
  3. Test multi-container app with Compose
  4. Load image into the registry
  5. Run stack in the cluster

Step 1: Start local Docker registry as service

Since a Docker swarm consists of any number of Docker engines, applications can only be run in a cluster if all of the Docker engines involved have access to the application’s image. For this, you need a central service that allows you to manage images and make them available in the cluster. Such a service is called a registry.

Note
An image is a compact, executable snapshot of an application. In addition to the application code, it includes all dependencies (runtime environments, libraries, environment variables, and configuration files) that Docker needs to run the corresponding application as a container. This means that each container is a runtime instance of an image.

1.1 Start registry as a service in the cluster: Use the command docker service create as follows to start a local registry server as a service in the cluster.

docker service create --name registry --publish 5000:5000 registry:2

The command tells Docker to start a service with the name registry that listens on port 5000. The first value following the --publish flag specifies the host port; the second specifies the container port. The service is based on the image registry:2, which contains an implementation of the Docker Registry HTTP API V2 and is available for free via Docker Hub.

1.2 Check the status of the registry service: Use the command docker service ls to check the status of the registry service that you just started.

$ sudo docker service ls
ID            NAME      MODE        REPLICAS  IMAGE          PORTS
K2hq2ivnwuq4  registry  replicated  1/1       registry:2     *:5000->5000/tcp

The command docker service ls outputs a list of all services running in your Docker cluster.

1.3 Check registry connection with cURL: Make sure that you can access your registry via cURL. To do this, enter the following command:

$ curl http://localhost:5000/v2/

If your registry is working as intended, then cURL should deliver the following terminal output:

{}
Note
cURL is a command line program for calling up web addresses and uploading or downloading files. Learn more about cURL on the project website of the open source software: curl.haxx.se.

Step 2: Create a multi-container app and define it as a stack

In the next step, create all files that are needed for the deployment of a stack in the Docker cluster and file them in a common project directory.

2.1 Create a project folder: Create a project directory with any name that you like - for example, stackdemo.

$ mkdir stackdemo

Navigate to the project directory.

$ cd stackdemo

Your project directory functions as a common folder for all files that are necessary for the operation of your multi-container app. This includes a file with the app’s source code, a text file in which you define the software required for the operation of your app, as well as a Dockerfile and a Compose file.

2.2 Create the app: Create a Python application with the following content and file it under the name app.py in the project directory.

from flask import Flask
from redis import Redis
app = Flask(__name__)
redis = Redis(host='redis', port=6379)
@app.route('/')
def hello():
    count = redis.incr('hits')
    return 'Hello World! I have been seen {} times.\n'.format(count)
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000, debug=True)

The example application app.py is a simple web application whose homepage displays the greeting “Hello World!” along with a counter showing how often the app has been accessed. The basis for this is the open source web framework Flask and the open source in-memory database Redis.

2.3 Define requirements: Create a text file titled requirements.txt with the following content and file it in the project directory.

flask
redis

In the requirements.txt file, you specify which software packages your application depends on.

2.4 Create Dockerfile: Create another text file with the name Dockerfile, add the following content, and file it in the project folder like the others.

FROM python:3.4-alpine
ADD . /code
WORKDIR /code
RUN pip install -r requirements.txt
CMD ["python", "app.py"]

The Dockerfile contains all instructions necessary for creating an image of an application. For example, the Dockerfile points to requirements.txt and specifies which software must be installed to run the application.

The Dockerfile in the example makes it possible to create an image of the web application app.py including all requirements (Flask and Redis).
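If you want to check the Dockerfile in isolation before moving on, the image can also be built and tagged manually (a sketch; the tag anticipates the local registry address used later in this tutorial):

```shell
# Build an image from the Dockerfile in the current project directory
# and tag it for the local registry on port 5000.
docker build -t 127.0.0.1:5000/stackdemo .

# List local images to confirm the build succeeded.
docker images 127.0.0.1:5000/stackdemo
```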

2.5 Create Compose file: Create a configuration file with the following content and save it as docker-compose.yml.

version: '3'
services:
    web:
        image: 127.0.0.1:5000/stackdemo
        build: .
        ports:
            - "8000:8000"
    redis:
        image: redis:alpine

The docker-compose.yml file allows you to link various services to one another, run them as a single entity, and manage them centrally.

Note
The Compose file is written in YAML, a simplified markup language that serves to map structured data and is primarily used in configuration files. With Docker, the docker-compose.yml file serves as the central configuration of the services of a multi-container application.

In the current example, we define two services: a web service and a Redis service.

  • Web service: The foundation of the web service is an image generated on the basis of the Dockerfile created in the stackdemo directory.

  • Redis service: For the Redis service, we don’t use our own image. Instead, we access a public Redis image (redis:alpine) available via Docker Hub.
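As a side note, when a version 3 Compose file is deployed to a swarm with docker stack deploy (see step 5 below), the number of replicas per service can be fixed in the file itself via the deploy key. A sketch based on the file above, with an illustrative replica count; docker-compose up ignores the deploy section:

```yaml
version: '3'
services:
    web:
        image: 127.0.0.1:5000/stackdemo
        build: .
        ports:
            - "8000:8000"
        deploy:
            replicas: 4
    redis:
        image: redis:alpine
```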

Step 3: Test multi-container app with Compose

Test the multi-container app locally first by running it on your manager node.

3.1 Start the app: Use the command docker-compose up in combination with the flag -d to start your stack. The flag activates “detached” mode, which runs all containers in the background and frees your terminal for additional commands.

$ sudo docker-compose up -d
WARNING: The Docker Engine you're using is running in swarm mode.
Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.
To deploy your application across the swarm, use the bundle feature of the Docker experimental build.
More info:
https://docs.docker.com/compose/bundles
Creating network "stackdemo_default" with the default driver
Creating stackdemo_web_1
Creating stackdemo_redis_1

3.2 Check stack status: Run the command docker-compose ps to check the status of your stack. You’ll receive a terminal output that looks something like the following example:

$ sudo docker-compose ps
      Name                    Command               State          Ports
--------------------------------------------------------------------------------
stackdemo_redis_1   docker-entrypoint.sh redis ...   Up     6379/tcp
stackdemo_web_1     python app.py                    Up     0.0.0.0:8000->8000/tcp

The command docker-compose ps gives you an overview of all containers that are started in the context of your multi-container application. In the current example, this list includes two containers - one each for the web and Redis services.

3.3 Test the stack with cURL: Test your stack by running the command line program cURL with the localhost address (localhost or 127.0.0.1).

$ curl http://localhost:8000
Hello World! I have been seen 1 times.
$ curl http://localhost:8000
Hello World! I have been seen 2 times.
$ curl http://localhost:8000
Hello World! I have been seen 3 times.

You can also access the web ap­plic­a­tion in a browser.

3.4 Deactivate the app: If you would like to turn the app off, run the command docker-compose down with the flag --volumes.

$ sudo docker-compose down --volumes
Stopping stackdemo_redis_1 ... done
Stopping stackdemo_web_1 ... done
Removing stackdemo_redis_1 ... done
Removing stackdemo_web_1 ... done
Removing network stackdemo_default

Step 4: Load image into the registry

Before you can run your multi-container app as a distributed application in the cluster, you need to make all of the required images available via the registry service. In the current example, this includes only the self-created web service image (the Redis image is available via a public registry on Docker Hub).

Uploading a locally created image to a central registry is called a “push” in Docker. Docker Compose provides the command docker-compose push for this. Run the command in the project directory.

All images referenced in the docker-compose.yml file that were built locally are then loaded into the registry.

$ sudo docker-compose push
Pushing web (127.0.0.1:5000/stackdemo:latest)...
The push refers to a repository [127.0.0.1:5000/stackdemo]
5b5a49501a76: Pushed
be44185ce609: Pushed
bd7330a79bcf: Pushed
c9fc143a069a: Pushed
011b303988d2: Pushed
latest: digest: sha256:a81840ebf5ac24b42c1c676cbda3b2cb144580ee347c07e1bc80e35e5ca76507 size: 1372

In the current example, docker-compose push loads the image from the stackdemo stack with the tag latest into the local registry under 127.0.0.1:5000.

Step 5: Run stack in a cluster

If your stack’s image is available via the local registry service, then the multi-container application can be run in the cluster.

5.1 Run stack in a cluster: Stacks can be run in a cluster with a single command. The container platform provides the following command for this:

docker stack deploy [OPTIONS] STACK
Note
Replace the STACK placeholder with the name of the stack that you want to run.

Run the command docker stack deploy on one of the manager nodes in your swarm.

$ sudo docker stack deploy --compose-file docker-compose.yml stackdemo
Ignoring unsupported options: build
Creating network stackdemo_default
Creating service stackdemo_web
Creating service stackdemo_redis

The flag --compose-file specifies the path to the Compose file.

5.2 Obtain the stack status: Use the following command to obtain the status of your stack:

docker stack services [OPTIONS] STACK

Docker outputs the IDs, names, modes, replicas, images, and ports of all services that are run in the context of your stack.

$ sudo docker stack services stackdemo
ID                  NAME                MODE                REPLICAS            IMAGE                             PORTS
cxyp7srukffy        stackdemo_web       replicated          1/1                 127.0.0.1:5000/stackdemo:latest   *:8000->8000/tcp
z0i2rtjbrj9s        stackdemo_redis     replicated          1/1                 redis:alpine

5.3 Test app with cURL: To test your multi-container app, call it up via the localhost address on port 8000.

$ curl http://localhost:8000
Hello World! I have been seen 1 times.
$ curl http://localhost:8000
Hello World! I have been seen 2 times.
$ curl http://localhost:8000
Hello World! I have been seen 3 times.

As an alternative to cURL, the app can also be accessed via the web browser. Use the localhost address or the address of one of the nodes. Thanks to the swarm’s internal routing mesh, you can access any node in your swarm on port 8000 and be routed to your app.

5.4 Deactivate the stack: If you would like to shut down your stack, use the command docker stack rm in combination with the name of the stack.

$ docker stack rm stackdemo
Removing service stackdemo_web
Removing service stackdemo_redis
Removing network stackdemo_default

5.5 Deactivate registry service: If you would like to shut down the registry service, use the command docker service rm with the name of the service - in this case: registry.

$ docker service rm registry

Summary

Docker Swarm and Compose extend the core functionality of the container platform with tools that enable you to run complex applications in distributed systems with minimal management effort. The market leader in the area of container virtualisation offers its users a complete solution for the orchestration of containers. Both tools are well documented and updated at regular intervals. Swarm and Compose have positioned themselves as good alternatives to established third-party tools such as Kubernetes or Panamax.
