Docker Swarm: will restarting a single node manager delete all my services? - docker

I have a single node Docker Swarm setup with a dozen services created by simply calling docker service create [...].
Can anyone tell me what will happen to my services if I reboot my node? Will they restart automatically, or will I have to recreate them all?
I understand that Swarm services and docker-compose setups are different, but if I do have to recreate the services after a reboot, is there a way to save a docker-compose.yml file for each of my services (i.e. something that parses the output of docker service inspect)? Is there a better way of "saving" my service configuration?

There is no need to recreate the services; they will remain as they were even after the node restarts. I have tested this in my own swarm cluster, a three-node setup (1 manager and 2 workers). I completely stopped the worker nodes, and the services running on them moved to the active manager node. I then restarted the manager node, and the services were still up and running on it afterwards.
So even if you are running a one-node swarm, there is no need to worry about your services: they will be recreated automatically.
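To address the second part of the question (saving each service's configuration): the output of docker service inspect is JSON rather than a compose file, but it does capture the full service spec. A minimal sketch of an export loop is below; the function and directory names are my own, and it assumes the docker CLI is talking to a swarm manager.

```shell
# Hypothetical helper: dump each service's full spec to a JSON file so the
# configuration exists outside the swarm and can be audited or re-created
# later. Assumes the local daemon is a swarm manager.
export_service_specs() {
  outdir="${1:-./service-specs}"
  mkdir -p "$outdir"
  for svc in $(docker service ls --quiet); do
    name=$(docker service inspect --format '{{.Spec.Name}}' "$svc")
    docker service inspect "$svc" > "$outdir/$name.json"
  done
}

# Only attempt the export when the docker CLI is actually available.
if command -v docker >/dev/null 2>&1; then
  export_service_specs ./service-specs
fi
```

Turning those JSON dumps back into docker service create calls or a compose file still requires some manual translation, but at least nothing about the configuration is lost on a reboot.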

Related

Docker - Restart specific container if another restarts

Is it possible to restart a container if another container fails and restarts?
I have a server container and multiple client containers, I want to have it that if the server container fails and restarts, that one of the client containers restarts as well.
I've already used the restart policies (always, on-failure etc.) but this would be linking two containers and triggering the restart of container A if container B restarts.
This question seems to be quite similar, if not duplicate, of this one.
TL;DR: There has been a shift away from defining complex restart policies in docker/docker-compose; the current recommendation is to check for dependencies explicitly from within your service, which keeps it deployment-agnostic. Build specific checks into the container that 'depends' on other services and have it crash cleanly when they are not met, so that a simple restart: always policy is all that is needed.
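As a concrete illustration of that TL;DR, here is a minimal entrypoint sketch for the client container: it probes the server and exits if the server is unreachable, letting restart: always do the rest. SERVER_HOST and SERVER_PORT are hypothetical names; point them at your server container's name on the shared network.

```shell
# Minimal "check the dependency, crash if absent" entrypoint sketch.
wait_for() {
  host="$1"; port="$2"; retries="${3:-5}"; i=0
  # Probe the TCP port until it answers or we run out of retries.
  until nc -z "$host" "$port" 2>/dev/null; do
    i=$((i + 1))
    [ "$i" -ge "$retries" ] && return 1
    sleep 1
  done
}

if ! wait_for "${SERVER_HOST:-server}" "${SERVER_PORT:-8080}" 3; then
  echo "server unreachable; exiting so 'restart: always' restarts this container" >&2
  # exit 1   # enabled in a real entrypoint; the restart policy does the rest
fi
```

Because the check lives inside the container, the same image behaves correctly under plain docker run, docker-compose, or swarm, with no container-to-container linking required.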

How Swarm mode image orchestration works?

I have setup a 3 node cluster (with no Internet access) with 1 manager and 2 worker-nodes using the standard swarm documentation.
How does the swarm manager in swarm mode know about the images present in worker nodes?
Lets say I have image A in worker-node-1 and image B in worker-node-2 and no images in the manager-node.
Now how do I start container for image A using the manager?
Will it start in manager or node-1?
When I query manager for the list of images will it give the whole list with A and B in it?
Does anyone know how this works?
I couldn’t get the details from the documentation.
A Docker Swarm manager node can also take on the worker role as a second duty, but it doesn't have to.
The image deployment policy is described in the docker-compose.yml file, which carries information such as target nodes, networks, hostnames, volumes, etc. for each service. So a service will start either on the node you specified or, by default, on the least-loaded one.
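For illustration, a minimal, hypothetical stack-file fragment showing where that placement information lives ('web' is a made-up service name):

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == worker   # pin tasks to worker nodes
```

Without the placement constraint, the scheduler is free to place the three replicas on whichever nodes have the most spare capacity.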
The swarm manager communicates with the worker nodes via Docker networks:
When you initialize a swarm or join a Docker host to an existing swarm, two new networks are created on that Docker host:
- an overlay network called ingress, which handles control and data traffic related to swarm services. When you create a swarm service and do not connect it to a user-defined overlay network, it connects to the ingress network by default;
- a bridge network called docker_gwbridge, which connects the individual Docker daemon to the other daemons participating in the swarm.
During stack deployment, the images for its services are pulled by the worker nodes according to the deployment policy.
The manager node will also hold images if it acts as a worker too (correct me if it doesn't).
The default configuration with swarm mode is to pull images from a registry server and use pinning to reference a unique hash for those images. This can be adjusted, but there is no internal mechanism to distribute images within a cluster.
For an offline environment, I'd recommend a stand-alone registry server accessible to the cluster. You can even run it on the cluster itself. Push your images there, and point your services at that registry to pull their images. See this doc for details on running a stand-alone registry, or any of the many 3rd-party options (e.g. Harbor): https://docs.docker.com/registry/
The other option is to disable image pinning and manually copy images to each of your swarm nodes. You need to do this before deploying any service changes, and you also lose the benefit of shared image layers when you copy images manually. Because of the issues this creates, the management overhead, and the risk of mistakes, I'd recommend against this option.
Run the docker stack deploy command with --with-registry-auth; that gives the workers access to pull the needed image.
By default, Docker Swarm pulls the latest image from the registry when deploying.
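Putting the registry-based answer together, the workflow looks roughly like this. It's a sketch: 'registry.example.com:5000', 'myapp', and 'mystack' are placeholders for your own registry address, image, and stack name.

```shell
deploy_from_local_registry() {
  # 1. Run a stand-alone registry reachable by every node in the cluster.
  docker run -d --restart=always -p 5000:5000 --name registry registry:2

  # 2. Tag the image with the registry address and push it there.
  docker tag myapp:latest registry.example.com:5000/myapp:latest
  docker push registry.example.com:5000/myapp:latest

  # 3. Deploy; --with-registry-auth forwards login credentials to the
  #    workers so each node can pull the image itself.
  docker stack deploy --with-registry-auth -c docker-compose.yml mystack
}
```

Each node then pulls only the images its own tasks need, and image layers are shared between services that use the same base image.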

Docker swarm - add new worker - re scale the service

I have created a docker manager. Created a service and scaled to 5 instances in the same server.
I added two workers. Now, How do I redistribute 5 instances of the applications across 3 nodes?
Is there any option to do without doing everything from the beginning?
docker service scale <id>=5 does it, but is that the right way? I don't want to restart already-existing instances, and it restarts them on node 1.
docker service update servicename
I removed one node from the cluster with docker swarm leave, then updated the service. All of my instances were rescheduled onto the remaining nodes.
Glad the update worked as expected. But there is another twist.
I added the node back and updated the service again. Now all of my instances are running as in the earlier case; the new node isn't being used.
How does it work?
To redistribute, you don't need to destroy and recreate the service, but all of its containers will be bounced in a rolling upgrade. The command is docker service update --force $service_name.
When adding or removing nodes, Docker only reschedules tasks when there is a difference between the current and desired state. It doesn't prematurely kill running containers to reschedule them on new nodes until you run the update command above.
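A short sketch of that rebalance, for one service or for every service on the swarm ('myservice' is a placeholder name; every forced update restarts that service's tasks in a rolling fashion):

```shell
# Rebalance a single service's tasks across the (now larger) cluster.
rebalance_service() {
  docker service update --force "${1:-myservice}"
}

# Rebalance every service on the swarm; each one is bounced in turn.
rebalance_all() {
  for svc in $(docker service ls --quiet); do
    docker service update --force "$svc"
  done
}
```

Because each update is rolling, run this during a quiet period if brief per-task restarts matter to you.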

Connect to docker containers with just service name and task slot?

I’m attempting to use docker swarm to manage a cluster of CouchDB 2 nodes where each node is a service. It is great that I can use the container name, which is comprised of {{.Service.Name}}.{{.Task.Slot}}.{{.Node.ID}} to connect the nodes, e.g. couchdb.1.52286blz1o0c7ym508sne6jyg. The issue is that the Node ID is a dynamic value and therefore my node names are constantly changing. This dynamic Node ID doesn’t play well with CouchDB’s node configuration, i.e. you really need static names for your nodes like couchdb.1, couchdb.2, etc…
Is there a way to configure the containers so that they can connect to each other with just {{.Service.Name}}.{{.Task.Slot}}? This way, if a node dies and is restarted, the restarted node will retake the dead node’s place automatically.
docker run appears to have a --network-alias flag that could be used to achieve this configuration, but nothing like --network-alias appears to be available in swarm mode.

How do I avoid download images on all docker hosts which are part of my swarm?

I have a swarm setup which has around 6 nodes. Whenever I execute a docker run or docker pull command from the swarm manager it downloads the new image on all the swarm nodes.
This is creating data redundancy and choking my network.
Is there any way I can avoid this ?
Swarm nodes need the images available to them by design. That lets swarm start a container on an available node immediately when the node currently hosting it crashes or is put into maintenance (drain mode).
On the other hand, images are only pulled once; nodes reuse them until you upgrade your service.
One more thing: Docker is designed for microservices. If your image is getting too large, maybe you should try splitting it into multiple containers.
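If the real goal is to keep a large image off most nodes entirely, one option (my suggestion, not part of the answer above) is to constrain the service to a labelled subset of nodes: in swarm mode, a node only pulls an image when a task is actually scheduled on it. 'node-1', 'bigimage=true', 'heavy', and 'myorg/heavy-image' are placeholder names.

```shell
# Label the nodes allowed to run the service, then constrain it to them.
pin_service_to_labelled_nodes() {
  docker node update --label-add bigimage=true node-1
  docker service create \
    --name heavy \
    --constraint 'node.labels.bigimage == true' \
    myorg/heavy-image:latest
}
```

The trade-off is exactly the one the answer describes: if a labelled node fails, swarm can only reschedule the task onto another node that matches the constraint.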