Join Docker swarm manager - docker

I use Docker on Windows 10 and I am trying to use swarm, but I can't add a worker to the swarm.
I managed to create a manager like this:
docker swarm init --advertise-addr my-ip-addr:2377
Then I get this output:
Swarm initialized: current node (uv8kzentx6jugl855a26e5qad) is now a
manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4ff8ldvzoasrbq0iprugdb5owqrzx5chyudbm7uu4r11t6dz0i-6om1gepq4yqm1s70a6gq75qtr
my-ip-addr:2377
To add a manager to this swarm, run 'docker swarm join-token manager'
and follow the instructions.
Then, from my remote machine (on the same network as the manager) I run the following command:
docker swarm join --token SWMTKN-1-4ff8ldvzoasrbq0iprugdb5owqrzx5chyudbm7uu4r11t6dz0i-6om1gepq4yqm1s70a6gq75qtr my-ip-addr:2377
But I get this error:
Error response from daemon: Timeout was reached before node joined.
The attempt to join the swarm will continue in the background. Use the
"docker info" command to see the current swarm status of your node.
Why do I get this error? What should I do?
Thanks in advance.

Related

Docker swarm join linux container Error - remote CA does not match fingerprint

Start docker swarm:
docker swarm init --advertise-addr <manager-ip>
Join docker swarm:
docker swarm join --token <token> <manager-ip>:2377
I am using Windows 10; it works fine in Windows container mode, but gives the error below in Linux container mode.
Error:
Error response from daemon: remote CA does not match fingerprint. Expected: 91030413f17ec7c023a2a796ee05a024915080ca8dfd646a597c7e966f667df6
On the Docker swarm manager host, docker node ls gives:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
2zf1l2o7sl2a1qka55s2vi77x * moby Ready Active Leader
The hostname is moby; when running in Windows container mode it shows the machine's hostname correctly.
Your token is wrong.
You can get a worker token on the manager node:
docker swarm join-token -q worker
It works for me.
https://docs.docker.com/engine/reference/commandline/swarm_join/
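For reference, a minimal sketch of the full workflow (the manager address below is a placeholder):
On the manager, print only the worker join token:
docker swarm join-token -q worker
On the worker, join using that token and the manager's address:
docker swarm join --token <token> <manager-ip>:2377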

getting docker swarm cluster worker node error

I'm working with Docker swarm. When I connect to a worker node I get this error:
Error response from daemon: rpc error: code = 14 desc = grpc: the
connection is unavailable
I have already stopped the firewall and run setenforce 0. What could be the problem?
If you are using a VM, you can init the docker swarm with an alternative IP address using docker swarm init --advertise-addr <ip>:<port>
Example: docker swarm init --advertise-addr 192.168.99.100:2377
and then add the nodes to the swarm.
Example: docker swarm join --token <token> --advertise-addr <ip>:<port>
docker swarm join --token SWMTKN-1-RANDOMTOKEN 192.168.99.100:2377
Some people say it only works on port 2377; check whether that holds for you as well.
If you run swarm init --advertise-addr <some ip>, you will get the join token, but when I tried to add a new node to the manager as a worker I got the error that the node is already part of a swarm.
So take care when using VM IP addresses: the worker's address must be different from the manager's, and the join command must point at the manager's exact IP.
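If a node reports that it is already part of a swarm, one possible way out (a sketch; this discards that node's existing swarm membership) is to make it leave and then rejoin:
docker swarm leave --force
docker swarm join --token <worker-token> <manager-ip>:2377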

How to setup multi-host networking with docker swarm on multiple remote machines

Before asking this question I read quite a few articles and Stack Overflow questions, but I couldn't find the right answer for my setup (perhaps it is already answered). Here is the architecture I have been struggling to get working.
I have three physical machines and I would like to setup the Docker swarm with multi-host networking so that I can run docker-compose.
For example:
Machine 1 (Docker Swarm Manager, runs Consul) (192.168.5.11)
Machine 2 (Docker Swarm Node) (192.168.5.12)
Machine 3 (Docker Swarm Node) (192.168.5.13)
And I need to run docker-compose from any other separate machine.
I have tried the Docker article, but there everything is set up on a single physical machine using docker-machine and VirtualBox. How can I achieve the above on three remote machines? Any help appreciated.
The latest version of Docker has Swarm Mode built in, so you don't need Consul.
To set this up on your boxes, make sure they all run Docker version 1.12 or higher; then you just need to initialise the swarm and join it.
On Machine 1 run:
docker swarm init --advertise-addr 192.168.5.11
The output from that will tell you the command to run on Machines 2 and 3 to join them to the swarm. You'll have a unique swarm token, and the command looks something like:
docker swarm join \
--token SWMTKN-1-49nj1... \
192.168.5.11:2377
Now you have a 3-node swarm. Back on Machine 1 you can create a multi-host overlay network:
docker network create -d overlay my-app
And then you run workloads in the network by deploying services. If you want to use Compose with Swarm Mode, you need to use distributed application bundles - which are currently only in the experimental build of Docker.
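As an illustration (a sketch; the service name, image, and port below are just examples), a service attached to that overlay network could be created with:
docker service create --name web --replicas 3 --network my-app -p 80:80 nginx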
I figured this needs an update, as Docker Compose files are now supported in swarm mode.
Initialize the swarm on Machine 1 using
docker swarm init --advertise-addr 192.168.5.11
Join the swarm from Machines 2 & 3 using
docker swarm join \
--token <swarm token from previous step> 192.168.5.11:2377 \
--advertise-addr eth0
eth0 is the network interface on Machines 2 & 3 and could be different based on your config. I found that without the --advertise-addr option, containers couldn't talk to each other across hosts.
To list all the nodes in the swarm and see their status:
docker node ls
After this, deploy the stack (a group of services or containers) from a compose file:
docker stack deploy -c <compose-file> my-app
This will create all the containers across the multiple hosts.
To list the services (containers) on the swarm, run docker service ls.
See the Docker docs: Getting started with swarm mode.
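For reference, a minimal compose file that could be deployed this way (a sketch; the service name and image are illustrative, and docker stack deploy will create a default overlay network for the stack):
version: "3"
services:
  web:
    image: nginx
    ports:
      - "80:80"
    deploy:
      replicas: 3
Saved as docker-compose.yml, it would be deployed with docker stack deploy -c docker-compose.yml my-app.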

Adding services in different consul clients running on same host

I've followed the section Testing a Consul cluster on a single host using Consul. Three Consul servers are successfully added and running on the same host for testing purposes. Afterwards, I also followed the tutorial and created a Consul client node4 to expose ports. Is it possible to add more services and bind them to one of those Consul clients?
Use the new 'swarm mode' instead of the legacy Swarm. Swarm mode doesn't require Consul; service discovery and the key/value store are now part of the Docker daemon. Here's how to create a highly available 3-node cluster (3 masters).
Create three nodes
docker-machine create --driver vmwarefusion node01
docker-machine create --driver vmwarefusion node02
docker-machine create --driver vmwarefusion node03
Find the IP of node01
docker-machine ls
Set one as the initial swarm master
docker $(docker-machine config node01) swarm init --advertise-addr <ip-of-node01>
Retrieve the token to let other nodes join as master
docker $(docker-machine config node01) swarm join-token manager
This will print out something like
docker swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
Add the other two nodes to the swarm as masters
docker $(docker-machine config node02) swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
docker $(docker-machine config node03) swarm join \
--token SWMTKN-1-0siwp7rzqeslnhuf42d16zcwodk543l99liy0wuq1mern8s8u9-8mbsrxzu9mgfw7x6ehpxh0dof \
192.168.40.144:2377
Examine the swarm
docker node ls
You should now be able to shut down the leader node and see another node take over as the leader.
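A quick way to try that (a sketch using docker-machine, with the node names from above):
docker-machine stop node01
docker $(docker-machine config node02) node ls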
Best practice for Consul is to run one Consul agent per host, and when you want to talk to Consul, you always talk locally. In general, everything one Consul node knows, every other Consul node also knows, so you can just talk to your localhost Consul agent (127.0.0.1:8500) and do everything you need to do. When you add services, you add them to the local Consul node on the host that runs the service's process. There are projects like Registrator (https://github.com/gliderlabs/registrator) that will automatically register services from running Docker containers, which makes life easier.
Overall, welcome to Consul, it's great stuff!
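To illustrate registering a service against the local agent (a sketch; the service name and port are made up), you can use the agent's HTTP API:
curl -X PUT -d '{"Name": "web", "Port": 80}' http://127.0.0.1:8500/v1/agent/service/register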

Docker 1.12 Swarm Nodes IP's

Is there a way to get the IPs of the nodes joined to the cluster?
In the "old" swarm there is a command you can run on the manager machine: docker exec -it <containerid> /swarm list consul://x.x.x.x:8500
To see a list of nodes, use:
docker node ls
Unfortunately they don't include IPs and ports in this output. You can run docker node inspect $hostname on each node to get its swarm IP/port. Then, if you need to add more nodes to your cluster, you can use docker swarm join-token worker, which does include the needed IP/port in its output.
What docker node ls does provide is the hostname of each node in your swarm cluster. Unlike standalone swarm, you do not connect your Docker client directly to a swarm port; you access the swarm from one of the manager hosts, the same way you'd connect to that host to init/join the swarm. After connecting to one of the manager hosts, you use docker service commands to control your running services.
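As a sketch (the .Status.Addr field assumes a newer Docker release; on 1.12 the address may only be exposed for managers under .ManagerStatus.Addr), you can loop over the nodes and print their addresses:
for n in $(docker node ls -q); do docker node inspect --format '{{ .Description.Hostname }} {{ .Status.Addr }}' $n; done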
