How to Set Up a Docker Swarm Cluster on a CentOS 7 VPS or Dedicated Server

Docker Swarm, also known as Docker Engine in swarm mode, is a clustering and orchestration tool for Docker containers that is used to manage a group of Docker hosts.

Swarm mode was introduced in Docker 1.12. It allows containers to be added to or removed from the cluster as your computing needs change.

There are two main components of Docker Swarm:

Manager node: Handles cluster management tasks such as scheduling services, maintaining cluster state, and serving the swarm mode API endpoints.

Worker node: Executes the containers (tasks) assigned to it by the manager.

In this tutorial, we will go into detail on installing and configuring Docker swarm mode on CentOS 7. We will use three CentOS 7 servers to install and launch the Docker engine. Of these, two servers will act as worker nodes running the Docker engine, and the remaining one will be the manager.


For this setup, we will need the following:

  • A local machine with Docker installed. The machine can run Windows, Linux, or macOS.
  • Three servers with CentOS 7 fully installed. One server will be the manager node, while the other two will be the worker nodes.
  • A static IP address for the manager node and for each of the two worker nodes.


Sign in to each of your CentOS 7 servers. Once you’re logged in, run the command below to ensure the system is updated with the latest packages available:

yum update -y

Get Started

Before you begin the process, configure the /etc/hosts file on every node so that the nodes can communicate with each other using hostnames.

Open the /etc/hosts file on each node and add an entry for each of the three hosts: dkmanager, workernode1, and workernode2, each mapped to its IP address.
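For illustration, the entries might look like the following. The addresses here are the example IPs used later in this article; which address belongs to which node is an assumption, so substitute your servers’ real IP addresses:

```
172.168.0.101   dkmanager
172.168.0.102   workernode1
172.168.0.103   workernode2
```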

Save the file once you’re finished.

Now configure the hostname of every node to match the hosts file.

Run the command below for each node.

Manager node:

hostnamectl set-hostname dkmanager

Worker node1:

hostnamectl set-hostname workernode1

Worker node2:

hostnamectl set-hostname workernode2

Step 1:
Installing Docker Engine

Now, install Docker on each node. Set up the Docker repository, then install and start the Docker engine on the manager node.
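The exact installation commands are not shown above; a typical sequence on CentOS 7 using Docker’s official CE repository looks like this (a sketch, assuming root privileges and internet access):

```shell
# Install the utility that manages yum repositories
yum install -y yum-utils

# Add Docker's official CE repository for CentOS
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

# Install the Docker engine
yum install -y docker-ce

# Start Docker now and enable it at boot
systemctl start docker
systemctl enable docker
```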

Do the same on the two worker node servers.

Step 2:
Configuring Firewall on Each Node

The next step is to open the ports on the firewall to ensure the swarm cluster is working correctly.

Continue and run the commands below on the manager node:

Restart the Docker service:

Open the firewall ports below on each worker node, then restart the Docker service:
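The port numbers are not listed above; the standard swarm mode ports are 2377/tcp (cluster management, manager only), 7946/tcp and 7946/udp (node-to-node communication), and 4789/udp (overlay network traffic). A sketch of the commands, assuming firewalld is the active firewall:

```shell
# On the manager node: open the management and communication ports
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload

# Restart Docker so it runs with the new firewall rules
systemctl restart docker

# On each worker node: open the communication ports (2377 is not needed)
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
systemctl restart docker
```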

Step 3:
Launch the Swarm or Cluster

Initialize the swarm on your manager node. To do this, run the command below, passing the manager’s IP address to the --advertise-addr flag (the placeholder must be replaced with your real address):

docker swarm init --advertise-addr <manager-ip>

Make sure the command completes successfully.

The output includes a join token; worker nodes use this token to join the swarm managed by this node.

Verify the manager status using the command below:

docker info

The output should look like this:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 17.12.0-ce
Storage Driver: devicemapper
 Pool Name: docker-253:0-618740-pool
 Pool Blocksize: 65.54kB
 Base Device Size: 10.74GB
 Backing Filesystem: xfs
 Udev Sync Supported: true
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Data Space Used: 11.8MB
 Data Space Total: 107.4GB
 Data Space Available: 3.817GB
 Metadata Space Used: 581.6kB
 Metadata Space Total: 2.147GB
 Metadata Space Available: 2.147GB
 Thin Pool Minimum Free Space: 10.74GB
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.140-RHEL7 (2017-05-03)
Logging Driver: json-file
Cgroup Driver: cgroupfs
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
 NodeID: viwovkb0bk0kxlk98r78apopo
 Is Manager: true
 ClusterID: ttauawqrc8mmd0feluhcr1b0d
 Managers: 1
 Nodes: 1
  Task History Retention Limit: 5
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
  Force Rotate: 0
 Autolock Managers: false
 Root Rotation In Progress: false
 Node Address:
 Manager Addresses:
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 89623f28b87a6004d4b785663257362d1658a729
runc version: b2567b37d7b75eb4cf325b77297b140ea686ce8f
init version: 949e6fa
Security Options:
  Profile: default
Kernel Version: 3.10.0-693.11.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 1.102GiB
Name: centOS-7
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Experimental: false
Insecure Registries:
Live Restore Enabled: false

You can list all of the nodes present in your cluster using the command below:

docker node ls

The output lists each node’s ID, hostname, status, availability, and manager status.

Step 4:
Add Worker nodes to the Manager node

Add the Worker nodes to the docker swarm service with the command below:

docker swarm join --token SWMTKN-1
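The token above is truncated; the full command printed by docker swarm init has the following general form, where the token and address are placeholders for the values from your own swarm:

```shell
# Run on each worker node, substituting the real token and manager IP
docker swarm join --token <worker-token> <manager-ip>:2377
```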

Each worker should report that it has joined the swarm as a worker.

To check the status of the nodes, run the command below:

docker node ls

If the process is successful, the manager and both workers will be listed with a Ready status.

In case you want to retrieve a lost join token, run the command below (use worker in place of manager to print the worker join token):

docker swarm join-token manager -q

By now, the docker swarm mode should be running successfully with two worker nodes.

Step 5:
Set up the Service in Swarm mode

Now launch a service in swarm mode. In this case, we will launch an Apache (httpd) web service in Docker swarm mode using three containers.

Run the command below from the Docker Manager only:

docker service create -p 80:80 --name webservice --replicas 3 httpd

The output should look like this:

To check the status of your service, run the command below:

docker service ls

Output will be:

The above output shows that the containers were deployed successfully across the cluster nodes. You can now reach the web page from any node using the following addresses in a web browser:

http://172.168.0.101
http://172.168.0.102
http://172.168.0.103
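Alternatively, you can verify the service from the command line with curl. Because the swarm’s routing mesh publishes port 80 on every node, any of the three addresses should serve the default httpd page:

```shell
# Query any node; the routing mesh forwards the request to a running container
curl http://172.168.0.101
```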

Step 6:
Testing Container Self-Healing

Docker swarm mode includes unique features such as container self-healing. If a container fails, the manager ensures that a replacement container is started automatically on that particular node.

To test if the process is working, let’s remove a container from workernode2 and find out whether a new container is launched or not.

Run the command below on workernode2 to list its containers and their IDs:

docker ps

The output should be like this:

Now, run the command below to remove container 9b01b0a55cb7:

docker rm 9b01b0a55cb7 -f

Now check whether a new container is deployed from the Manager node:

docker service ps webservice

You will see that one container has failed and that another container has immediately been started on workernode2:

Step 7:
Scaling containers up and down for the service

In a Docker cluster, it’s possible to scale containers up and down. In this case, let’s scale the service up to 5 containers:

[root@dkmanager ~]# docker service scale webservice=5
webservice scaled to 5
[root@dkmanager ~]#

Check the status of the service again using the command below:

docker service ls

Now, let’s scale the service down to 2 containers:

[root@dkmanager ~]# docker service scale webservice=2
webservice scaled to 2
[root@dkmanager ~]#

Check that the process is complete with the command below:

docker service ls

You should now have a fully configured Docker Swarm cluster on CentOS 7.


There you have it. That’s how easy it is to set up Docker Swarm with the help of the new swarm mode in the Docker engine. It is important to note that after the setup is done, you should protect the servers by adding an extra layer of security; a firewall with monitoring capabilities is a good start.


