How do the Docker Swarm Agents report their IPs back to the Swarm Manager?

I'd like to understand the communication mechanism between the Docker Swarm Manager and the Docker Swarm Agents:
The Swarm Manager generates a token.
The Swarm Agents are started with this token (and their own IP) passed to them.
Now, when the Manager needs to give instructions to the agents, how was it informed that agents exist at these IPs?
Hypothesis:
Do the Agents register themselves on some docker.com server with their token, and does the Manager get their addresses from it using the same token?
Thank you

Options are described in the docs here:
https://docs.docker.com/swarm/discovery/
In this example I use hosted discovery with Docker Hub. There are other options such as a static file, Consul, etcd, etc.
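For contrast, the static file back-end mentioned above is the simplest alternative when the node addresses are known up front. A minimal sketch (the file path and node addresses are assumptions):
echo "172.17.42.10:2375" > /tmp/cluster
echo "172.17.42.11:2375" >> /tmp/cluster
# mount the file into the manager container so it can read the node list
docker run -d -p 9999:2375 -v /tmp/cluster:/tmp/cluster swarm manage file:///tmp/cluster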
You create your docker cluster:
docker run --rm swarm create
This will give you a token to be used as your cluster id: e4802398adc58493...longtoken
You register one or more docker hosts with your cluster:
docker run -d swarm join --addr=172.17.42.10:2375 token://e4802398adc58493...longtoken
The IP address provided is the address of your docker host node.
This is how the future manager will know about agents/nodes.
You deploy the swarm manager to any of your docker hosts (let's say 172.17.42.10, the same host I used to create the swarm and register my first docker host):
docker run -d -p 9999:2375 swarm manager token://e4802398adc58493...longtoken
To use the cluster you set DOCKER_HOST to the IP address and published port of your swarm manager:
export DOCKER_HOST="tcp://172.17.42.10:9999"
Using something like docker info should now return information about nodes in your cluster.
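To answer the original question directly: with hosted discovery the agents register their address against the token on Docker Hub, and the manager fetches the list from there. You can inspect what is registered yourself; a minimal check, reusing the token from above:
docker run --rm swarm list token://e4802398adc58493...longtoken
# prints the registered node addresses, e.g. 172.17.42.10:2375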


What is the extra container with the "lb-" prefix in a docker swarm network? How do I set up the docker network not to have it?

A Docker network is created in a docker swarm containing several nodes, with this command:
docker network create --attachable --driver overlay [network-name]
Containers are attached to the network with the docker service create command.
An extra container named "lb-[network-name]" then appears in the network.
What is that container, and how can the docker network be configured not to have it?
From the docker swarm documentation (https://docs.docker.com/engine/swarm/key-concepts/):
Swarm mode has an internal DNS component that automatically assigns each service in the swarm a DNS entry. The swarm manager uses internal load balancing to distribute requests among services within the cluster based upon the DNS name of the service.
The "lb-[network-name]" entry is the sandbox for that internal load balancer on the node. It's part of the swarm architecture; you can't deactivate it.
Also take a look at this detailed answer on docker swarm networking:
https://stackoverflow.com/a/44649746/3730077
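If you want to see the load balancer rather than infer it from container names, docker network inspect has a verbose mode that shows it. A quick sketch, assuming a hypothetical network name my-net and service name web:
docker network create --attachable --driver overlay my-net
docker service create --name web --network my-net nginx
# on a node where the network is in use:
docker network inspect -v my-net
# the verbose output includes the load-balancer (lb-my-net) sandbox and its IP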

How to init docker swarm with consul

How do I start a docker swarm cluster with a consul back-end?
I can't see any discovery parameter in the docker swarm init command, or in the docker swarm join command. I successfully ran
docker swarm init ....
and then
docker swarm join
to start a cluster on the internal swarm discovery mechanism, but it's not recommended for production. So what am I missing?
You are running the newer Swarm Mode commands but asking about the usage of classic Swarm, which runs as a container; these are two very different things.
Swarm Mode uses a raft implementation for the manager state that is not swappable with an external key/value store. You run swarm mode with the commands you listed (docker swarm init and docker swarm join). The join command eliminates the need for an external node discovery database. https://docs.docker.com/engine/swarm/how-swarm-mode-works/nodes/
Classic Swarm used external node discovery; the default was a Docker Hub token, which was not recommended for production. To implement classic Swarm you run docker run swarm manage with options to publish the port for accessing the manager and to discover the nodes in the swarm. Classic Swarm has more in common with a reverse proxy to the docker API than with an orchestration tool like Swarm Mode or Kubernetes. https://docs.docker.com/swarm/reference/manage/
So the answer to your question is either to not use Swarm Mode commands and run the classic Swarm containers instead, or, if you want Swarm Mode, to not try to plug in your own external node discovery database, because that's not an option. I'd recommend the latter unless you have a specific need for classic Swarm; a sketch of the classic setup follows.
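For completeness, here is what classic Swarm with a Consul discovery back-end looks like; a sketch assuming a Consul server already reachable at 172.30.0.161:8500 and docker hosts exposing their daemons on port 2375 (both hypothetical values):
# on each node, register its address with the consul back-end
docker run -d swarm join --advertise=172.30.0.69:2375 consul://172.30.0.161:8500
# on the manager host, point the manager at the same back-end
docker run -d -p 4000:4000 swarm manage -H :4000 consul://172.30.0.161:8500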

Is the 'local' vm required once the swarm cluster has been deployed?

According to the official documentation on Install and Create a Docker Swarm, the first step is to create a VM named local, which is needed to obtain the token with swarm create.
Once the manager and all nodes have been created and added to the swarm cluster, do I need to keep running the local vm?
Note: this tutorial is for the first version of Swarm (called Swarm legacy). There is a new version called Swarm mode, available since Docker 1.12. I'm pointing this out because there is a lot of confusion between the two.
No, you don't have to keep the local VM; it is only there to get a unique cluster token from the Docker Hub discovery service.
Now this is a bit overkill just to generate a token. You can bypass this step by:
Running the swarm container directly, if you have Docker for Mac or more generally a local instance of Docker running:
docker run --rm swarm create
Directly querying the service discovery URL to generate a token:
curl -X POST "https://discovery.hub.docker.com/v1/clusters"
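Putting the second option together, a minimal sketch of generating a token and joining a node with it (the node address 172.17.42.10:2375 is an assumption):
TOKEN=$(curl -s -X POST "https://discovery.hub.docker.com/v1/clusters")
echo $TOKEN
docker run -d swarm join --addr=172.17.42.10:2375 token://$TOKEN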

Docker swarm, Consul and Spring Boot

I have 6 microservices packed in docker containers. On every swarm node I have installed a Consul agent, bound to the host IP, with the client listening on 0.0.0.0.
All the microservices are in a docker-compose file which I run from the Swarm manager.
The microservices are written in Java, and in bootstrap.yml I must specify the Consul agent endpoint. The possible choices are:
localhost
${HOSTIP} environment variable
Problems:
- localhost is not the host's localhost but the container's localhost, and the Consul agent is on the host, not in the container.
- ${HOSTIP}: in the compose file I have to supply this env var, but I don't know where the Swarm manager will schedule each microservice, so I cannot know which IP address will be used.
I tried to export the host IP address on each node, but since I am running compose from the manager, it does not read this variable.
Do you have any proposal on how to solve this? I have a Consul cluster: 3 managers and 3 nodes; on each manager and node a Consul agent is started (as a docker container). No matter what type of networking I use, I am not able to start up the microservices. I started Consul with --net=host and --net=bridge, but neither works.
Does anyone have an idea?
Thanks in advance.
So you are running Consul in containers as well, right? Is it possible in your setup to link containers? You could start the Consul container as "consul" on each host and link your microservices to it. Linked containers get a hosts entry, so the Consul service should be reachable at consul:8500 from within your services.
Edit: If you are using the official Consul Docker image from HashiCorp, you can configure the client address to 0.0.0.0; this makes the Consul API available to the other containers running on the host.
Let me answer my own question: this is not the way to do it. We cannot put some things inside Swarm and some things outside Swarm and expect it to work; it will not. Consul as service discovery cannot be used outside Swarm either. The simple answer is to use Docker's own orchestration and service discovery and not involve Consul. If you use Swarm, everything should be in overlay networks (RabbitMQ, Redis, ELK, and so on).
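Following that conclusion, a minimal sketch of keeping everything inside Swarm overlay networking so a service can reach Consul by service name (the network name app-net, the service names, and the single-server Consul setup are all assumptions; the official image accepts -client=0.0.0.0):
docker network create --driver overlay --attachable app-net
docker service create --name consul --network app-net \
  hashicorp/consul agent -server -bootstrap -client=0.0.0.0
# any service on the same network can now reach http://consul:8500
docker service create --name my-service --network app-net my-service-image
In bootstrap.yml the Consul host then becomes simply consul, port 8500.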

Can you explain the roles of the swarm machines?

In http://docs.docker.com/swarm/install-w-machine/
There are four machines:
Local, where swarm create will run
swarm-master
swarm-agent-00
swarm-agent-01
My understanding is that swarm-master will control the agents, but what is local used for?
It is for generating the discovery token using the Docker Swarm image.
That token is used when creating the swarm master.
The discovery service associates a token with instances of the Docker daemon running on each node. Other discovery service back-ends such as etcd, consul, and zookeeper are also available.
So the "local" machine is there to make sure the swarm manager discovers nodes. Its functions are:
register: register a new node
watch: callback method for the swarm manager
fetch: fetch the list of entries
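These operations map onto the hosted discovery HTTP API used by classic Swarm's token back-end. A minimal sketch of register and fetch on the wire (the node address is an assumption; <token> stands for a token created with swarm create):
# register a node (what swarm join does under the hood)
curl -X POST -d "172.17.42.10:2375" "https://discovery.hub.docker.com/v1/clusters/<token>"
# fetch the list of registered entries (what the manager does)
curl "https://discovery.hub.docker.com/v1/clusters/<token>"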