Docker swarm - add new worker - rescale the service

I have created a Docker swarm manager, created a service, and scaled it to 5 instances on the same server.
I then added two workers. Now, how do I redistribute the 5 instances of the application across the 3 nodes?
Is there any way to do this without starting everything over from the beginning?
docker service scale id=5 does it, but is it the right way? I don't want to restart the already running instances, and it restarts them on node 1.
docker service update servicename
I removed one node from the cluster with docker swarm leave and then updated the service. All of my instances were rescheduled across the remaining nodes.
Good, the update worked as expected. But there is another twist.
I added the node back and then updated the service again. Now all of my instances are running as in the earlier case; the service doesn't make use of the new node.
How does it work?

To redistribute, you don't need to destroy and recreate the service, but the containers will all be bounced in a rolling upgrade. The command is docker service update --force $service_name
When adding or removing nodes, Docker only reschedules tasks when there is a difference between the current and target state. It doesn't prematurely kill running containers to reschedule them on new nodes until you run the update command above.
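For example, a minimal rebalancing sequence (the service name web is only a placeholder) might look like this:
# Confirm the new workers have joined the swarm
$ docker node ls
# Force a rolling restart so tasks are rescheduled across all available nodes
$ docker service update --force web
# Check how the tasks are now distributed
$ docker service ps web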

Related

Configure how many containers are kept in Docker Swarm controlled by Portainer

I don't know whether Docker Swarm or Portainer manages the removal of old, shut-down containers, but I want to configure how many are kept. It seems that currently 4 old instances are held back per slot (machine). When I update a service, a new instance is spawned, the old instance gets shut down, and the oldest one gets deleted. Where is the setting for the count of dead containers per slot?
Try this:
docker swarm update --task-history-limit=5
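For example, assuming you only want one stopped task kept per slot, a sketch of lowering and verifying the limit (the exact label in docker info may vary by Docker version) would be:
# Keep only 1 old task per slot in addition to the running one
$ docker swarm update --task-history-limit=1
# Confirm the new value in the Swarm section of docker info
$ docker info | grep "Task History Retention Limit"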

adding a manager back into swarm cluster

I have a swarm cluster with two nodes: 1 manager and 1 worker. I am running my application on the worker node, and as a test case I force-removed the manager from the swarm cluster.
My application continues to work, but I would like to know whether there is any way to add the force-removed manager back into the cluster. (I don't remember the join-token, and I haven't copied it anywhere.)
I understand Docker advises having an odd number of manager nodes to maintain quorum, but I would like to know whether Docker has addressed such scenarios anywhere.
docker swarm init --force-new-cluster --advertise-addr node01:2377
When you run the docker swarm init command with the --force-new-cluster flag, the Docker Engine where you run the command becomes the manager node of a single-node swarm which is capable of managing and running services. The manager has all the previous information about services and tasks, worker nodes are still part of the swarm, and services are still running. You need to add or re-add manager nodes to achieve your previous task distribution and ensure that you have enough managers to maintain high availability and prevent losing the quorum.
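If you go that route, a rough sketch of the follow-up steps (node01:2377 is the recovered manager's address from the command above) would be:
# On the recovered manager: print the join token for additional managers
$ docker swarm join-token manager
# On each machine you want to act as a manager, run the printed join command, e.g.
$ docker swarm join --token <manager-token> node01:2377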
How about creating a new cluster from the previous manager machine and having the worker leave the old cluster to join the new one? That seems like the only possible solution to me: since your cluster now has no managers and the previous manager is not part of any cluster, you can simply create a new cluster and let the other workers join it.
# In previously manager machine
$ docker swarm init --advertise-addr <manager ip address>
# *copy the generated command for worker to join new cluster after this command
# In worker machine
$ docker swarm leave
# *paste and execute the copied command here

Docker container restart priority

I have a bunch of Docker containers running in swarm mode (services). If the whole server restarts, the containers start one by one after the reboot. Is there a way to set the order in which the containers are created and started?
P.S. I can't use docker-compose, as these services were created dynamically through the Docker Remote API.
You can try setting a shorter restart delay (with --restart-delay) on the services you want to start first and a longer one on the next, and so on.
But I am not sure that works.
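A sketch of that idea, assuming two existing services named db and web (both names are hypothetical):
# Give the service that should come up first a short restart delay
$ docker service update --restart-delay 5s db
# Give the dependent service a longer delay so it starts later
$ docker service update --restart-delay 30s web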

Docker Swarm: will restarting a single node manager delete all my services?

I have a single node Docker Swarm setup with a dozen services created by simply calling docker service create [...].
Can anyone tell me what will happen to my services if I reboot my node? Will they automatically restart, or will I have to recreate them all?
I understand that Swarm services and docker-compose setups are different, but in case I have to recreate the services after a reboot, is there a way to save a docker-compose.yml file for each of my services (i.e. something that parses the output of docker service inspect)? Is there a better way of "saving" my services' configuration?
There is no need to recreate the services; they will remain the same even after a node restart. I have tested this in my swarm cluster. I have a three-node swarm setup (1 manager & 2 workers). I completely stopped the worker nodes, and the services on the worker nodes moved to the active node (the manager). I then restarted the active node (manager) and I can still see the services up and running on the manager node.
So even if you are running a one-node swarm, there is no need to worry about the services; they will be recreated automatically.
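A simple way to check this yourself (no particular service name assumed) is to compare the service list before and after the reboot:
# Note the running services and their replica counts before rebooting
$ docker service ls
# Reboot the node, then list the services again; they should come back on their own
$ docker service ls
$ docker service ps <service-name>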

How do I avoid download images on all docker hosts which are part of my swarm?

I have a swarm setup with around 6 nodes. Whenever I execute a docker run or docker pull command from the swarm manager, it downloads the new image on all the swarm nodes.
This is creating data redundancy and choking my network.
Is there any way I can avoid this ?
Swarm nodes need images available to them by design. That helps the swarm start the container immediately on an available node when the node currently hosting the container crashes or goes into maintenance (drain mode).
On the other hand, Docker images are pulled only once, and you can use them until you upgrade your service.
Also, Docker is designed for microservices; if your image is getting too large, maybe you should try to split it into multiple containers.
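If you still want to limit which nodes end up with the image, one possible approach (the label app=true and service name web are hypothetical) is to constrain where the service can be scheduled, since in swarm mode an image generally only needs to be present on nodes that are eligible to run its tasks:
# Label the nodes that should run (and therefore pull) the image
$ docker node update --label-add app=true node01
# Create the service so it is only scheduled on labelled nodes
$ docker service create --name web --constraint 'node.labels.app == true' --replicas 3 nginx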
