Differences in the ways of creating a Docker swarm

I have been reading a lot of articles and documentation for the past 3 days about the "new" Docker swarm that was recently built into the engine.
Having identified a couple of ways of creating a swarm (whether locally or on a cloud provider), I'm still confused about the differences between those methods and when you'd use one over the other.
Here are the methods to create a swarm that I have identified so far:
Method 1
docker-machine create -d virtualbox swarm-manager
docker-machine create -d virtualbox swarm-worker-1
docker-machine create -d virtualbox swarm-worker-2
manager_ip=$(docker-machine ip swarm-manager)
docker-machine ssh swarm-manager "docker swarm init --advertise-addr ${manager_ip}"
# fetch the worker join token after init instead of hardcoding a stale one
worker_token=$(docker-machine ssh swarm-manager "docker swarm join-token -q worker")
swarm_join_command="docker swarm join --token ${worker_token} ${manager_ip}:2377"
docker-machine ssh swarm-worker-1 "${swarm_join_command}"
docker-machine ssh swarm-worker-2 "${swarm_join_command}"
Method 2
docker-machine create -d virtualbox token
token=$(docker-machine ssh token "docker run swarm create" | tail -n 1)
docker-machine create -d virtualbox \
  --swarm --swarm-master \
  --swarm-discovery token://${token} \
  master-node
docker-machine create -d virtualbox \
  --swarm --swarm-discovery token://${token} \
  node-01
I am excluding consul because it seems it is no longer needed.
What is the difference between those methods?
When should I use one over the other?

Confusingly, there are two implementations of Docker Swarm: the first ran as containers, the second was integrated into the Docker Engine as part of the v1.12 release.
So embrace Method 1; Method 2 uses the legacy standalone Swarm (see also: Troubles using docker-machine to setup Swarm). The following example creates an HA setup with multiple managers.
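A minimal sketch of such a setup, assuming three VirtualBox VMs created with docker-machine (the names mgr1-mgr3 and the driver are placeholders):
# create three VMs to act as managers
docker-machine create -d virtualbox mgr1
docker-machine create -d virtualbox mgr2
docker-machine create -d virtualbox mgr3
# initialise the swarm on the first node
manager_ip=$(docker-machine ip mgr1)
docker-machine ssh mgr1 "docker swarm init --advertise-addr ${manager_ip}"
# join the remaining nodes as managers (not workers) to get HA
manager_token=$(docker-machine ssh mgr1 "docker swarm join-token -q manager")
docker-machine ssh mgr2 "docker swarm join --token ${manager_token} ${manager_ip}:2377"
docker-machine ssh mgr3 "docker swarm join --token ${manager_token} ${manager_ip}:2377"
With three managers, the raft quorum survives the loss of any single node.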
The older Swarm documentation contains the following:
You are viewing docs for legacy standalone Swarm. These topics
describe standalone Docker Swarm. If you use Docker 1.12 or higher,
Swarm mode is integrated with Docker Engine. Most users should use
integrated Swarm mode — a good place to start is Getting started with
swarm mode and Swarm mode CLI commands. Standalone Docker Swarm is not
integrated into the Docker Engine API and CLI commands.

Related

Is there a way to set up a test docker swarm on a single machine?

I am trying to set up a docker swarm on WSL2 for testing purposes. I want to know if it is possible to have a swarm with multiple "dummy" nodes on a single machine.
Here are the two ways that I tried:
Run multiple WSL instances as suggested here.
PS C:\Users\jdu> wsl -l
Windows Subsystem for Linux distributions:
Ubuntu3
Ubuntu
Ubuntu2
Docker is installed and running in each WSL instance. So I managed to initialize a swarm on Ubuntu and let Ubuntu2 and Ubuntu3 join as workers.
On Ubuntu
$ docker swarm init
Swarm initialized: current node (hude19jo7t9dqpe0akg55ipmy) is now a manager.
On Ubuntu2
$ docker swarm join --token SWMTKN-1-xxxxxxxxx-xxxxxxxxx 192.168.189.5:2377 --listen-addr 0.0.0.0:12377
This node joined a swarm as a manager.
Then if I check on Ubuntu
$ docker node ls
ID                          HOSTNAME        STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
hude19jo7t9dqpe0akg55ipmy * laptop-ebc155   Ready    Active         Leader           20.10.21
ozeq43yukgfbltjnfya0tlx08   laptop-ebc155   Ready    Active         Reachable        20.10.20
Inspired by the ideas here, I have tried docker-in-docker containers, i.e. I deploy multiple Docker instances on a single WSL instance.
# Init Swarm master
docker swarm init
# Get join token:
SWARM_TOKEN=$(docker swarm join-token -q worker)
echo $SWARM_TOKEN
# Get Swarm master IP (Docker for Mac xhyve VM IP)
SWARM_MASTER_IP=$(docker info | grep -w 'Node Address' | awk '{print $3}')
echo $SWARM_MASTER_IP
DOCKER_VERSION=dind
# set up docker-in-docker containers and join them to the swarm
docker run -d --privileged --name worker-1 --hostname=worker-1 -p 12377:2377 docker:${DOCKER_VERSION}
docker exec worker-1 docker swarm join --token ${SWARM_TOKEN} ${SWARM_MASTER_IP}:2377
docker run -d --privileged --name worker-2 --hostname=worker-2 -p 22377:2377 docker:${DOCKER_VERSION}
docker exec worker-2 docker swarm join --token ${SWARM_TOKEN} ${SWARM_MASTER_IP}:2377
docker run -d --privileged --name worker-3 --hostname=worker-3 -p 32377:2377 docker:${DOCKER_VERSION}
docker exec worker-3 docker swarm join --token ${SWARM_TOKEN} ${SWARM_MASTER_IP}:2377
After that
$ docker node ls
ID                          HOSTNAME        STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
s371tmygu9h640xfosn6kyca4 * laptop-ebc155   Ready    Active         Leader           20.10.21
w1ina9ttvje4hn6r13p3gzbge   worker-1        Ready    Active                          20.10.20
m8mqky6jchjao01nz8t5e392a   worker-2        Ready    Active                          20.10.20
n29afhbb090tlyn9p0byga9au   worker-3        Ready    Active                          20.10.20
To test the above two swarm setups, I use a very simple compose file as suggested by the official docs. As you can expect, neither setup worked that well :/
If MongoDB and MongoExpress are deployed on different nodes, both swarm setups show the same error MongoNetworkError: failed to connect to server [mongo:27017] on first connect. My understanding of this error is that MongoExpress cannot reach MongoDB at mongo:27017, which looks like a problem with the docker internal DNS. Can someone help me out? Or feel free to tell me not to pursue these single-machine multi-node ideas any more :D I'd appreciate any help!
I just tried the same two exercises :)
Approach 1 - swarm nodes in WSL instances
I think it is currently impossible because of the WSL2 design, see https://github.com/microsoft/WSL/issues/4304. WSL2 instances in fact share their network setup: IP, interfaces, network namespaces, and so on. Every change made in one of them is immediately visible in all the others, and this conflicts with the virtual interfaces and namespaces created by docker swarm nodes when they start up.
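A quick way to observe this sharing, assuming the distro names from the question (run from PowerShell; wsl -d runs a command in a given distro):
# both distros report the same eth0 address: they share one network namespace
wsl -d Ubuntu -- ip -4 addr show eth0
wsl -d Ubuntu2 -- ip -4 addr show eth0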
I tried configuring multiple IP addresses on the eth0 interface, so that each node could have its own (like here), and then used the --advertise-addr and --listen-addr options in the docker swarm init and docker swarm join commands. Still I'm getting this error in the dockerd logs:
moving interface ov-001000-yis5e to host ns failed, invalid argument, after config error error setting interface \"ov-001000-yis5e\" IP to 10.0.0.1/24: cannot program address 10.0.0.1/24 in sandbox interface because it conflicts with existing route {Ifindex: 4 Dst: 10.0.0.0/24 Src: 10.0.0.1 Gw: <nil> Flags: [] Table: 254}"
I believe docker swarm hits a problem here because it already sees the master's interfaces when it tries to set up the routing-mesh networking for the worker, all because master and worker share the network config.
Approach 2 - swarm nodes as docker containers (docker-in-docker)
But I got approach no. 2 working with just a small change in the swarm init command:
# advertise swarm on default bridge network
docker swarm init --advertise-addr 172.17.0.1
For me, the standard docker swarm init selected the eth0 address by default, which only worked for communication from dind -> wsl, but not the other way round.
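To check what a node actually advertises, something like the following should work (the 172.17.0.1 used above is the default bridge address):
# the address this node advertises to the swarm
docker info --format '{{.Swarm.NodeAddr}}'
# the default bridge address used in the init command above
ip -4 addr show docker0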
Another, but probably unrelated, problem was that I could not access services/stacks run this way from the Windows host. This seems to be a WSL bug, and luckily there is a workaround.
One last hint about this mongo stack is ... patience. The stack consists of 2 services: mongo, the database, and mongo-express, the client. The mongo image is a lot bigger (~600MB) while mongo-express is just ~135MB. The mongo-express image will be downloaded faster, and swarm will recreate its container multiple times before mongo has even started. Note also that docker images are downloaded independently by each worker in this setup, so rebalancing may also take some time.
I found these commands useful to see what is really happening:
# overview of services
docker service ls
# containers in each swarm service
docker service ps $(docker service ls --format '{{.Name}}')
# images in each dind worker
for i in $(seq "${NUM_WORKERS}"); do
  docker exec worker-${i} docker images
done
# containers in each dind worker
for i in $(seq "${NUM_WORKERS}"); do
  docker exec worker-${i} docker ps -a
done
Full listing of the commands necessary to get a working docker swarm using dind:
docker swarm init --advertise-addr docker0
SWARM_TOKEN=$(docker swarm join-token -q worker)
echo $SWARM_TOKEN
SWARM_MASTER_IP=$(docker info 2>&1 | grep -w 'Node Address' | awk '{print $3}')
echo $SWARM_MASTER_IP
DOCKER_VERSION=20.10.12-dind
NUM_WORKERS=3
# Run NUM_WORKERS workers with SWARM_TOKEN
for i in $(seq "${NUM_WORKERS}"); do
  docker run -d --privileged --name worker-${i} --hostname=worker-${i} docker:${DOCKER_VERSION}
  sleep 5
  docker exec worker-${i} docker swarm join --token ${SWARM_TOKEN} ${SWARM_MASTER_IP}:2377
done
# Set up the visualizer
docker service create \
  --detach=true \
  --name=viz \
  --publish=8000:8080/tcp \
  --constraint=node.role==manager \
  --mount=type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  dockersamples/visualizer
####### play with mongo
mkdir mongodemo && cd mongodemo
wget https://raw.githubusercontent.com/docker-library/docs/f6c9b596064e2eed9c3b6ac75bea606cb6d94099/mongo/stack.yml
docker stack deploy -c stack.yml mongo
# from windows:
# mongo will be available under <eth0>:8081
# visualizer under <eth0>:8000
ip -4 addr | grep eth0
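And when you are done, a teardown sketch, assuming the names used above (worker-1..worker-3, the mongo stack and the viz service):
# remove the demo workloads first
docker stack rm mongo
docker service rm viz
# remove the dind workers
for i in $(seq "${NUM_WORKERS}"); do
  docker exec worker-${i} docker swarm leave --force
  docker rm -f worker-${i}
done
# finally dissolve the swarm on the host
docker swarm leave --force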

Is it possible to create containers on multiple hosts using a single docker compose file?

I have to create containers on multiple hosts. I have a Dockerfile for each container. I found that docker-compose can be used to run multiple containers from a single yaml file. I have to run containerA on HostA, containerB on HostB, and so on. Is it possible to achieve this using docker-compose? Or what is the best way to create containers on different hosts using the Dockerfiles?
No, docker-compose alone won't achieve this. Managing containers across multiple hosts is generally the job of schedulers. The Docker Ecosystem: Scheduling and Orchestration is an older article, but it should give you an introduction.
The scheduler provided by Docker is called Swarm. Use Compose with Swarm is a good place to start to understand how to use them together.
This part of the answer is time limited, but during Mentor Week there are a few free courses you can take to learn more about Swarm. In your case Operations Beginner and Intermediate may be interesting.
The answer is no.
Docker compose is for multiple containers on a single host.
Docker swarm is for container(s) at a cluster level.
And you cannot decide where to run a container; only the docker swarm scheduler decides, but you can influence it. Check the links below, and focus on image affinity, i.e. you put specific images on only some nodes and configure the scheduler to use image affinity (see the sketch after these links).
Filters - Which hosts are chosen
Strategies - ranking the nodes.
docker filters
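As a sketch, scheduling hints in legacy standalone Swarm were passed as environment-variable constraints against the swarm manager endpoint (the image and node names here are just examples):
# prefer a node that already has the redis image pulled
docker run -d -e affinity:image==redis redis
# pin a container to a specific node by name
docker run -d -e constraint:node==node-1 nginx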
Here is a script to create a docker swarm cluster. Play with it until you learn the process itself:
id=$(docker run swarm create)
docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://$id swarm-master
docker-machine create -d virtualbox --swarm --swarm-discovery token://$id node1
docker-machine create -d virtualbox --swarm --swarm-discovery token://$id node2
docker-machine create -d virtualbox --swarm --swarm-discovery token://$id node3
docker-machine create -d virtualbox --swarm --swarm-discovery token://$id node4
Here is what happened:
A cluster with one master and four nodes is created.
run eval $(docker-machine env --swarm swarm-master) to configure your shell.
From this point on, creating and running containers is as usual.
running docker ps would show you where the container is running.
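For example (a sketch; the image name is arbitrary, and legacy Swarm prefixes container names with the node they landed on):
eval $(docker-machine env --swarm swarm-master)
docker run -d --name web nginx
docker ps   # NAMES shows e.g. node2/web, i.e. which node runs the container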
Docker-compose is exactly what you need. It is a tool for defining and running multi-container Docker applications. A good starting point is the official documentation: https://docs.docker.com/compose/gettingstarted/

How to set up multi-host networking with docker swarm on multiple remote machines

Before asking this question I read quite a few articles and Stack Overflow questions, but I couldn't get the right answer for my setup (perhaps it has already been answered). Here is the architecture I have been struggling to get to work.
I have three physical machines and I would like to set up a Docker swarm with multi-host networking so that I can run docker-compose.
For example:
Machine 1 (Docker Swarm Manager, contains Consul) (192.168.5.11)
Machine 2 (Docker Swarm Node) (192.168.5.12)
Machine 3 (Docker Swarm Node) (192.168.5.13)
And I need to run docker-compose from any other separate machine.
I have tried this Docker article, but there everything is set up on the same physical machine using docker-machine and VirtualBox. How can I achieve the above on three remote machines? Any help appreciated.
The latest version of Docker has Swarm Mode built in, so you don't need Consul.
To set up on your boxes, make sure they all have Docker version 1.12 or higher; then you just need to initialise the swarm and join it.
On Machine 1 run:
docker swarm init --advertise-addr 192.168.5.11
The output from that will tell you the command to run on Machine 2 and 3 to join them to the swarm. You'll have a unique swarm token, and the command is something like:
docker swarm join \
  --token SWMTKN-1-49nj1... \
  192.168.5.11:2377
Now you have a 3-node swarm. Back on Machine 1 you can create a multi-host overlay network:
docker network create -d overlay my-app
And then you run workloads in the network by deploying services. If you want to use Compose with Swarm Mode, you need to use distributed application bundles - which are currently only in the experimental build of Docker.
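For instance, a service attached to that overlay network could be created like this (the name, image, and replica count are illustrative):
docker service create --name web --network my-app --replicas 2 nginx:alpine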
I figured this needs an update, as compose files are now supported in docker swarm mode:
Initialize the swarm on Machine 1 using
docker swarm init --advertise-addr 192.168.5.11
Join the swarm from Machine 2 & 3 using
docker swarm join \
  --token <swarm token from previous step> 192.168.5.11:2377 \
  --advertise-addr eth0
eth0 is the network interface on machines 2 & 3 and could be different based on your config. I found that without the --advertise-addr option, containers couldn't talk to each other across hosts.
To list all the nodes in the swarm & see their status
docker node ls
After this, deploy the stack (group of services or containers) from a compose file
docker stack deploy -c <compose-file> my-app
This will create all the containers across multiple hosts
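As a sketch, a minimal swarm-compatible compose file could be written and deployed like this (the nginx image, port, and replica count are placeholders):
# write a compose file and deploy it as a stack
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 3
EOF
docker stack deploy -c docker-compose.yml my-app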
To list services (containers) on the swarm run docker service ls
See docker docs Getting started with swarm mode

Docker Swarm vs. Docker Cluster

I created a swarm cluster via
docker-machine -d azure --swarm --swarm-master --swarm-discovery token://SWARM_CLUSTER_TOKEN my-swarm-master
and
docker-machine -d azure --swarm --swarm-discovery token://SWARM_CLUSTER_TOKEN my-node-01
After that, I logged into cloud.docker.com - but when I click on Node Clusters or Nodes I can't see my swarm.
So is swarm (via command line) and cluster (via cloud.docker.com) not the same thing? What's the difference and when should I use which one?
Edit:
Yes, my Azure subscription is added in cloud.docker.com under Cloud Settings.
They are separate. The docker-machine commands you ran create a self-hosted swarm that you manage yourself. Docker Cloud creates an environment that is managed for you on Docker's infrastructure. Without access to the token used by your Swarm, Docker Cloud won't know about the nodes in it.

Adding services in different consul clients running on the same host

I've followed the section Testing a Consul cluster on a single host in the Consul documentation. Three consul servers were successfully added and are running on the same host for testing purposes. Afterwards, I also followed the tutorial and created a consul client node4 to expose ports. Is it possible to add more services and bind them to one of those consul clients?
Use the new 'swarm mode' instead of the legacy Swarm. Swarm mode doesn't require Consul; service discovery and the key/value store are now part of the docker daemon. Here's how to create a highly available cluster of 3 nodes (3 managers).
Create three nodes
docker-machine create --driver vmwarefusion node01
docker-machine create --driver vmwarefusion node02
docker-machine create --driver vmwarefusion node03
Find the ip of node01
docker-machine ls
Set one as the initial swarm master
docker $(docker-machine config node01) swarm init --advertise-addr <ip-of-node01>
Retrieve the token to let other nodes join as master
docker $(docker-machine config node01) swarm join-token manager
This will print out something like
docker swarm join \
  --token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
  192.168.40.144:2377
Add the other two nodes to the swarm as masters
docker $(docker-machine config node02) swarm join \
  --token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
  192.168.40.144:2377
docker $(docker-machine config node03) swarm join \
  --token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
  192.168.40.144:2377
Examine the swarm
docker node ls
You should now be able to shut down the leader node and see another manager take over as leader.
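A quick failover test, using the node names from above:
# stop the current leader ...
docker-machine stop node01
# ... then check the swarm from one of the surviving managers
docker $(docker-machine config node02) node ls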
Best practice for Consul is to run one consul agent per host, and when you want to talk to consul, you always talk locally. In general, everything one consul node knows, every other consul node also knows, so you can just talk to your localhost consul (127.0.0.1:8500) and do everything you need to do. When you add services, you add them to the local consul node that runs the service's process. There are projects like Registrator (https://github.com/gliderlabs/registrator) that will automatically add services from running docker containers, which makes life easier.
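For instance, registering a service with the local agent over Consul's HTTP API and querying it back (the service name and port here are made up):
# register a service with the local consul agent
curl --request PUT --data '{"Name": "web", "Port": 8080}' \
  http://127.0.0.1:8500/v1/agent/service/register
# query it back through the same local agent
curl http://127.0.0.1:8500/v1/catalog/service/web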
Overall, welcome to Consul, it's great stuff!
