After setting up a swarm cluster with multiple workers, when I try to log in to the master node and list the worker nodes, it gives me an error.
root@swarm-master-91881543-0:~# docker node ls
Error response from daemon: This node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
root@swarm-master-91881543-0:~#
@biz Docker CE isn't available through the Azure portal yet. You'll need to provision through the Azure CLI with the command @Ajay mentioned.
As we know, in Docker swarm we can have more than one manager. Suppose we have 2 nodes and 2 managers (so each node is both a manager and a worker).
Now let a client (using the CLI tool) execute the following two separate scenarios:
1. docker service create some_service
2. docker service update --force some_service
where the client is launched on one of the swarm nodes.
Where will the above requests be sent? Only to the leader, or to each worker node? How does Docker deal with simultaneous requests?
I assume you're talking about the docker CLI talking to the manager API.
The docker cli on a node will default to connecting to localhost. Assuming you're on a manager, you can see which node your cli is talking to with docker node ls.
The * next to a node name indicates that's the one you're talking to.
From there if that node isn't the Leader, it will relay the commands to the Leader node and wait on a response to return to your cli. This all means:
Just ensure you're running the docker cli on a manager node or your cli is configured to talk to one.
It doesn't matter which manager, as they will all relay your command to the current Leader.
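As an illustration of that * marker (the node IDs and hostnames below are hypothetical, not taken from any real swarm), here is how the current node can be picked out of docker node ls-style output with standard shell tools:

```shell
# Hypothetical `docker node ls` output; the "*" in the second column marks
# the node your docker CLI is currently talking to.
sample='ID                HOSTNAME   STATUS  AVAILABILITY  MANAGER STATUS
abc123def456 *    manager-1  Ready   Active        Leader
789ghi012jkl      manager-2  Ready   Active        Reachable'

# Hostname of the node the CLI is connected to: the row whose second
# field is "*".
current=$(printf '%s\n' "$sample" | awk '$2 == "*" {print $3}')
echo "$current"   # prints "manager-1"
```

On a real manager you would pipe docker node ls itself instead of the sample text.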
I am trying to deploy the application on multiple instances.
On the master node, I used this set of commands:
docker swarm init
docker network create --attachable --driver overlay fabric
docker stack deploy --compose-file docker-compose-org2.yaml fabric
And the service was deployed on master node and is running properly.
Now I have another compose file named docker-compose-orderer.yaml, which I want to deploy on another AWS instance.
I used the following commands on the worker node:
docker swarm join --token SWMTKN-1-29jg0j594eluoy8g86dniy3opax0jphhe3a4w3hjuvglekzt1b-525ene2t4297pgpxp5h5ayf89 <IP>:2377
docker stack deploy --compose-file docker-compose-org1.yaml fabric
The command docker stack deploy --compose-file docker-compose-org1.yaml fabric fails with: this node is not a swarm manager. Use "docker swarm init" or "docker swarm join" to connect this node to swarm and try again.
Does anyone know how to deploy a compose file on a worker node?
Any help/suggestion would be appreciated.
Update 1:
The worker node joined the swarm successfully:
ID                          HOSTNAME           STATUS  AVAILABILITY  MANAGER STATUS  ENGINE VERSION
qz9y7p1ba3prp23xtuv3uo2dk   ip-172-31-18-206   Ready   Active                        18.06.1-ce
no97mrg6f7eftbbeu86xg88d9 * ip-172-31-40-235   Ready   Active        Leader          18.06.1-ce
You must run all docker service and docker stack commands on a manager node; the swarm then deploys the containers automatically onto the least-used nodes. When you want to explicitly place a container on a specific node, you must label that node and use placement constraints.
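A minimal sketch of that label-and-constraint approach (the label role=orderer and the service/image names are hypothetical): first tag the target node from a manager with docker node update --label-add role=orderer <node-id>, then pin the service in the stack file:

```yaml
# Hypothetical fragment of a stack file for `docker stack deploy`.
version: "3.7"
services:
  orderer:
    image: example/orderer:latest        # placeholder image
    deploy:
      placement:
        constraints:
          - node.labels.role == orderer  # runs only on nodes labeled role=orderer
```

Note that docker stack deploy itself is still run on a manager; the constraint only controls which node the container is scheduled onto.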
I am trying to join a worker node to a manager on another machine. The former is a Mac and the latter is a Windows machine. The worker host on the Mac gave this response:
Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
When I ran the join-token command again, I received a response saying:
This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
When I ran this command on the manager side:
docker node ls
it only shows one node, which is the manager node.
Am I doing something wrong here?
Do you use the same docker version on all hosts?
I am following this tutorial. I ran sudo docker swarm init --advertise-addr <myip> on the 1st Ubuntu machine. Then I took the manager join token and ran it on the 2nd Ubuntu machine, and it joined as a manager.
But the problem starts when I run docker network create --attachable --driver overlay my-net on the 1st machine; it gives me the following error:
Error response from daemon: rpc error: code = Unknown desc = The swarm does not have a leader. It's possible that too few managers are online. Make sure more than half of the managers are online.
If I run the above command to create the network before joining the 2nd node, the network is created successfully and the 2nd node also joins the swarm. But then, whenever I do anything on the 1st Ubuntu machine, I get the same error.
Both Ubuntu machines are on the same network and can ping each other.
Ubuntu version - 17.1 64 bit
Docker version 18.03.1-ce, build 9ee9f40
Docker-compose version 1.21.2, build a133471
It seems that the tutorial is off, as you will end up with only two managers, and that is not enough to form a quorum. You can either add a third manager node, or simply create a single manager (docker swarm init) and then join a single worker using the command that is output as part of the response to docker swarm init. You should SKIP the docker swarm join-token manager step from the tutorial.
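The quorum arithmetic behind that advice can be sketched in shell: swarm managers form a Raft group, and a majority of them, floor(N/2) + 1, must be reachable for the swarm to have a leader.

```shell
# Swarm managers use Raft: a majority, floor(n/2) + 1, must be online
# for the swarm to elect and keep a leader.
for n in 1 2 3 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "$n manager(s): quorum=$quorum, tolerates $tolerated failure(s)"
done
# With 2 managers the quorum is 2, so losing either one (or a network
# split between them) produces "The swarm does not have a leader".
```

This is why two managers are no more fault-tolerant than one: odd manager counts (1, 3, 5) are the useful configurations.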
Just change the network setting of your Ubuntu machine's VM:
Machine -> Settings -> Network -> select "Attached to: Bridged Adapter".
Then restart your machine.
I'd like to understand the communication mechanism between the Docker Swarm Manager and the Docker Swarm Agents :
The Swarm Manager generates a token.
The Swarm Agents are started with this token passed to them (and their own IP).
Now that the Manager needs to give instructions to the agents, how was it informed that the agents exist at those IPs?
Hypothesis:
Do the agents register themselves on some docker.com server with their token, and does the Manager get their addresses from it using the same token?
Thank you
Options are described in the doc here:
https://docs.docker.com/swarm/discovery/
In this example I use the hosted discovery with Docker Hub. There are other options, like a static file, Consul, etcd, etc.
You create your docker cluster:
docker run --rm swarm create
This will give you a token to be used as your cluster id: e4802398adc58493...longtoken
You register one or more docker hosts with your cluster:
docker run -d swarm join --addr=172.17.42.10:2375 token://e4802398adc58493...longtoken
The ip address provided is the address of your docker host node.
This is how the future manager will know about agents/nodes
You deploy the swarm manager to any of your docker hosts (let's say 172.17.42.10:2375, the same one I used to create the swarm and register my first docker host):
docker run -d -p 9999:2375 swarm manager token://e4802398adc58493...longtoken
To use the cluster you set DOCKER_HOST to the IP address and the published port of your swarm manager (9999 in the example above, not the engine's 2375):
export DOCKER_HOST="tcp://172.17.42.10:9999"
Using something like docker info should now return information about nodes in your cluster.