Switch a docker container from one node to another without draining or restarting the node - docker-swarm

How do I switch a docker container from one node to another node without draining or restarting the node? Every time I kill the container or mark it unhealthy, it is restarted on the same node. The purpose is to avoid frequent docker restarts on the same node.

To pin a container/service to a specific node, you specify a placement constraint: docker service create --name myname --constraint 'node.hostname == dahostname' myimage
https://docs.docker.com/engine/reference/commandline/service_create/#specify-service-constraints---constraint
and
https://docs.docker.com/engine/reference/commandline/service_update/#options
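To move an already-running service to a different node without draining, one option is to update its placement constraint; a rough sketch, assuming the service was created as above and that node-b is the hostname of the target node (a placeholder, not a real name from the question):

# remove the old constraint and pin the service to another node;
# swarm reschedules the task onto the matching node
docker service update \
  --constraint-rm 'node.hostname == dahostname' \
  --constraint-add 'node.hostname == node-b' \
  myname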

Related

Restarting docker container in swarm mode

I am running a docker service in swarm mode. When I want to restart it, there are two options I know of:
from the swarm manager: docker service scale myservice=0 then docker service scale myservice=1
from the server running the service: docker ps, take the container id of my service, and do docker stop <containerId>
And this works fine. However, if I go with option #2 and write docker restart instead of docker stop, it restarts the current instance, but because the service is in swarm mode it also starts a new one. So in the end I have two copies of the same service, even though my compose file specifies only 1 replica.
Is there any way to prevent docker restart and docker swarm from starting a 2nd service while one is already there?
I am using docker 18.09.2 on ubuntu 18.04
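A swarm-native alternative to both options is to force a rolling restart of the service's tasks, which sidesteps the duplicate-container problem entirely; a minimal sketch:

# force swarm to redeploy the service's task without changing any settings
docker service update --force myservice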

Docker container cannot reach other services for a few seconds

I have a docker swarm node running a set of docker services connected by an overlay network. When needed, I dynamically add another docker node via terraform. It's a separate ec2 instance set up and connected as a worker node to the existing swarm network.
I then run a container from my manager, and that container needs to talk to the existing services on the manager node, e.g. connecting to the postgres service and running a few queries:
docker -H <node ip> run --network <overlay network where services are running> <some image> <command>
The script running in the container fails with a "Name or service not known" error. I tried pinging manually by bashing into the container, and the ping succeeds after some 4 or 5 seconds. I have tried this hundreds of times and I always get the same issue. It also doesn't matter when the node joined the swarm; every time I run the above command, I face the same issue.
Also, I don't have control over what script is run in the container, so I cannot add retries.
One more thing: sometimes some services can be reached immediately. For example, postgres will fail, but another service exposing rest endpoints can be reached. But it's not always the case.
I was able to reproduce this issue with a bunch of test services:
Steps to reproduce the issue:
1. Create a docker swarm and add another machine as a worker node to the docker swarm.
2. Create an overlay network on node 1: docker network create -d overlay --attachable platform
3. Create services on node 1:
for i in {1..25}; do
  docker service create --network platform -p :80 --name "service-${i}" dockerbogo/docker-nginx-hello-world
done
4. Create a task from node 1 to be run on node 2: docker -H 10.128.0.3:2376 run --rm --network platform centos ping service-1
Docker daemon logs: https://pastebin.com/65Lihp8v
Any help?
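Since the script inside the container can't be changed, one possible workaround (a sketch built from the repro commands above, not a fix for the underlying delay) is to wait from the outside until the service name resolves on the new node before launching the real container; the probe image and 1-second interval are arbitrary choices:

# probe DNS on the overlay network with a throwaway container until the
# service name resolves, then run the actual workload
until docker -H 10.128.0.3:2376 run --rm --network platform centos getent hosts service-1 >/dev/null 2>&1; do
  sleep 1
done
docker -H 10.128.0.3:2376 run --rm --network platform centos ping service-1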

Able to scale a container with global mode in docker?

I have three swarm nodes.
I deployed a containerized service with mode "global" through docker swarm.
Later, I added one more swarm node, bringing the total to four.
How can I deploy the container service to the newly added node?
The command (docker service scale) can only be used with "replicated" mode.
If you have a lot of services running, I would recommend the following, which force-updates every service and thereby rebalances them evenly across all the docker nodes:
for i in $(docker service ls -q); do
  docker service update "$i" --detach=false --force
done
In case you have only one service, force an update of just that one:
docker service update --force <service_name>
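To confirm where the tasks ended up after the update, docker service ps lists every task of a service together with the node it was scheduled on:

# list each task of the service and the node it runs on
docker service ps <service_name>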

Can't list containers with manager node "Reachable", only with "Leader"

I worked with https://labs.play-with-docker.com.
I created a new service on one of the managers (not the leader):
docker service create --name example nginx
When I ran:
docker container ls
It didn't show me the containers.
But when I ran the same command on the leader manager node, it did.
Any explanation why that is?
You have created a service that is replicated only once (one container). If you want a global service (a container on each VM), you have to add the --mode global flag to your docker service create command. By the way, take a look at the --replicas flag too: https://docs.docker.com/engine/swarm/services/
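As a quick sketch of the difference between the two modes (service names here are placeholders):

# replicated (the default): a fixed number of tasks, placed wherever swarm chooses
docker service create --name web-replicated --replicas 3 nginx

# global: exactly one task on every node in the swarm
docker service create --name web-global --mode global nginx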

Docker 1.12 Swarm Nodes IPs

Is there a way to get the IPs of the nodes joined to the cluster?
In the "old" standalone swarm there is a command you can run on the manager machine: docker exec -it <containerid> /swarm list consul://x.x.x.x:8500
To see a list of nodes, use:
docker node ls
Unfortunately, they don't include IPs and ports in this output. You can run docker node inspect $hostname on each node to get its swarm IP/port. Then, if you need to add more nodes to your cluster, you can use docker swarm join-token worker, which does include the needed IP/port in its output.
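Putting that together, a small loop sketch to print each node's hostname and IP; note that the .Status.Addr field may not be populated on older releases (1.12-era nodes expose an address only via .ManagerStatus.Addr, and only for managers):

# print hostname and swarm address for every node in the cluster
for n in $(docker node ls -q); do
  docker node inspect --format '{{ .Description.Hostname }} {{ .Status.Addr }}' "$n"
done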
What docker node ls does provide is the hostname of each node in your swarm cluster. Unlike standalone swarm, you do not connect your docker client directly to a swarm port; you access the cluster from one of the manager hosts, the same way you'd connect to that host before to init/join the swarm. After connecting to one of the manager hosts, you use docker service commands to control your running services.
