Configure how many containers are kept in Docker Swarm controlled by Portainer

I don't know whether Docker Swarm or Portainer manages the removal of old, shut-down containers, but I want to configure how many are kept. It seems that currently 4 old instances are kept per slot (machine). When I update a service, a new instance is spawned, the old instance gets shut down, and the oldest one gets deleted. Where is the setting for the number of dead containers kept per slot?

Try this:
docker swarm update --task-history-limit=5
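The default task history retention limit is 5, which is why a handful of shut-down instances linger per slot. After changing it, you should be able to confirm the new value on a manager node; docker info lists it in its Swarm section, for example:

docker info | grep -i "task history"

Lowering the limit also shortens the history shown by docker service ps, so keep enough entries to debug failed rolling updates.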

Related

Docker swarm recreate containers after reboot node

Docker Swarm recreates containers after a node reboot. That is, it actually destroys the old containers and launches new ones.
I do not like this behavior. How can I change it?
That is the default behavior of Swarm Mode and I'm not aware of an option to change the behavior that doesn't also risk worse issues (starting too many containers, and orphaned containers running outside of Swarm Mode's control). Containers should be treated as ephemeral, without persistent data inside the container (that data should be in volumes).
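As a minimal sketch of that pattern (the service, image, and volume names here are just placeholders), a stack file can mount a named volume so the data survives the container being destroyed and recreated after a reboot:

version: "3.8"
services:
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data   # persistent data lives in the volume, not the container
volumes:
  db-data:

When Swarm replaces the container after the node comes back up, the new task reattaches the same named volume, so only the container itself is ephemeral.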

Docker container restart priority

I have a bunch of Docker containers running in swarm mode (services). If the whole server restarts, the containers start one by one after the reboot. Is there a way to set the order in which containers are created and started?
P.S. I can't use docker-compose, as these services were created dynamically through the Docker Remote API.
You can try setting a shorter restart delay (with --restart-delay) for the services you want to start first, a longer one for the next, and so on.
But I am not sure that works.
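A rough sketch of that idea, using two hypothetical services named db and web:

docker service update --restart-delay 5s db     # retried soon after the node is back
docker service update --restart-delay 60s web   # retried later, once db is likely up

Note this only staggers restart attempts; it does not guarantee ordering, so the applications should still retry their own connections until their dependencies are reachable.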

Docker Swarm - Should I remove a stack before deploying a stack?

I am not new to Docker, but I am new to Docker Swarm.
Our deployments typically consist of building a new docker image with the latest code, pushing that to our registry and then running docker stack deploy against a compose file.
My question is, do I need to run docker stack rm $STACK_NAME before running the deploy?
I'm not sure if the deploy command for swarm is smart enough to figure out that a docker image has changed and that it needs to do something.
You can redeploy to the same stack name without deleting the old stack. If you expect to have services deleted from your compose file, then you'll want to include the --prune option. For any unchanged service, swarm will leave it unmodified. But for any service with changes, including a new image on the registry server, you will see a rolling update performed according to the update config you specify in the compose file.
When you use the default VIP to connect to a service, as long as the service exists, even across rolling updates, the VIP will keep the same IP address, so other containers connecting to your service can do so without worrying about a stale DNS reference. And with a replicated service, the rolling update can prevent any visible outage. The combination of the two gives you high availability that you would not have when deleting and recreating your swarm stack.
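A typical redeploy sequence under that workflow (registry, image, and stack names are placeholders) would look like:

docker build -t registry.example.com/myapp:1.2.3 .
docker push registry.example.com/myapp:1.2.3
docker stack deploy -c docker-compose.yml --with-registry-auth --prune mystack

Using a unique tag per build (instead of reusing latest) makes the image change unambiguous to Swarm, which reliably triggers the rolling update.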

Docker swarm - add new worker - re scale the service

I have created a Docker manager, created a service, and scaled it to 5 instances on the same server.
I added two workers. Now, how do I redistribute the 5 instances of the application across the 3 nodes?
Is there any option to do this without redoing everything from the beginning?
docker service scale <id>=5 does it. Is that the right way? I don't want to restart already running instances. They restart on node 1.
docker service update servicename
I removed one node from the cluster with docker swarm leave. I updated the service. All of my instances were rescheduled onto the remaining nodes.
Glad the update worked as expected. But there is another twist.
I added the node back. Then I updated the service. Now all of my instances are running as in the earlier case. It doesn't make use of the new node.
How does it work?
To redistribute, you don't need to destroy and recreate the service, but the containers will all be bounced in a rolling update. The command is docker service update --force $service_name
When adding or removing nodes, Docker only reschedules when there is a difference between the current and target state. It doesn't preemptively kill running containers to reschedule them on new nodes until you run the update command above.
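For example, after the new worker has joined (the service name is a placeholder):

docker service update --force my_service
docker service ps my_service    # tasks should now be spread across all three nodes

The --force flag triggers a rolling restart even though the service spec is unchanged, giving the scheduler a chance to place tasks on the new node.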

How do I avoid download images on all docker hosts which are part of my swarm?

I have a swarm setup with around 6 nodes. Whenever I execute a docker run or docker pull command from the swarm manager, it downloads the new image on all the swarm nodes.
This creates data redundancy and chokes my network.
Is there any way I can avoid this?
Swarm nodes need images available to them by design. That helps Swarm start the container on an available node immediately when the node currently hosting the container crashes or goes into maintenance (drain mode).
On the other hand, Docker images are pulled only once, and you can use them until you upgrade your service.
Also, Docker is designed for microservices; if your image is getting too large, maybe you should try to split it into multiple containers.
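For reference, the drain mode mentioned above is set per node (the node name is a placeholder); Swarm then reschedules that node's tasks elsewhere, which is exactly when having the image already pulled on every node pays off:

docker node update --availability drain worker-1
docker node update --availability active worker-1   # bring it back into scheduling later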
