Swarm service on every node but the manager - docker-swarm

I have a swarm running with one manager and multiple workers.
I want a specific service to be deployed once (and only once) per node, but only on the workers.
The manager still runs other services.
What I found doesn't fit my needs:
mode: global does what I want for the "once and only once per node" part, but it does not exclude the manager.
mode: replicated
replicas: 6
placement:
  constraints:
    - node.role == worker
limits the service to the workers, but with that solution there could be more than one replica on a node. And --max-replicas-per-node doesn't exist yet.
docker node update --availability drain manager1 keeps tasks off the manager, but that's not an option either because my manager should still run other services.
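Side note for readers on newer releases: as far as I know, Docker Engine 19.03 later added exactly this per-node cap, exposed as max_replicas_per_node under placement in compose file format 3.8 and as --replicas-max-per-node on docker service create/update. If your version supports it, a sketch could look like this:
deploy:
  mode: replicated
  replicas: 6
  placement:
    max_replicas_per_node: 1   # compose file format 3.8 / Engine 19.03+; check your version
    constraints:
      - node.role == worker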

You can combine your first solution with the second one. Something like this works well for me in my environment:
mode: global
placement:
  constraints:
    - node.role == worker
Note that node.role is a built-in node attribute, so nothing extra is needed for the constraint above; only if you constrain on a custom label (e.g. node.labels.worker == true) would you have to assign that label to every node you want the service to run on.
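Put together in a compose file, a minimal sketch of that combination could look like this (the service name and image are placeholders, not from the question):
version: "3.7"
services:
  my-agent:                        # placeholder service name
    image: example/agent:latest    # placeholder image
    deploy:
      mode: global                 # one task per matching node
      placement:
        constraints:
          - node.role == worker    # matching nodes = workers only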

Related

Docker swarm run on multiple machines

I deployed a docker swarm on 5 nodes and I have 5 microservices. Docker swarm assigns all the services to only one node. Is there any way to tell docker swarm which node to use for each service, so that one service runs on every node?
Yes you can do this with the "deploy" configuration option in your compose file. For example:
deploy:
  placement:
    constraints:
      - "node.hostname == desired_machine_hostname"

Docker swarm: make sure that one replica always accesses the same replica of another service

Hey, first of all I'm not sure whether this is possible at all. I have two different services in my docker swarm. Each service is replicated n times. Service A accesses service B via DNS. Below you see a simplified version of my docker compose file:
version: "3.7"
services:
A:
image: <dockerimage_A>
deploy:
replicas: 5
B:
image: <dockerimage_B>
deploy:
replicas: 5
The replicas of service A access the replicas of service B via the DNS entry from the docker ingress network and send tasks to B. The runtime of a task on B varies and the call is blocking; the connection from A to B is blocking as well. Due to the round-robin load balancing it can happen that, while one A/B pair is fast, the fast A then connects to another B that is still blocked, and the now idle B has nothing to do.
To solve this it would be ideal if one replica of A were always routed to the same replica of B. Is there a possibility to change the load balancing in that way?
I solved it on my own with the following hacky solution, setting the hostname for each replica individually using the slot id:
services:
  A:
    hostname: "A-{{.Task.Slot}}"
    deploy:
      replicas: 2
  B:
    environment:
      - SERVICEA=http://A-{{.Task.Slot}}/
    deploy:
      replicas: 2
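If A is the caller, as described in the question, the same trick should work mirrored (a sketch, assuming the same slot-based hostname resolution that makes the workaround above function; SERVICEB is a hypothetical variable name):
services:
  A:
    environment:
      - SERVICEB=http://B-{{.Task.Slot}}/   # each A slot addresses the B replica with the same slot
    deploy:
      replicas: 2
  B:
    hostname: "B-{{.Task.Slot}}"
    deploy:
      replicas: 2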

Is there a way to prefer node deployment in docker swarm?

I have three nodes in my swarm, one manager and two workers (worker1 and worker2). I have a couple of services which should preferably run on the first worker node (worker1); however, when that node goes down I want them to start running on the second worker node.
From what I've gathered I could put a constraint like this.
placement:
  constraints:
    - node.hostname == worker1
This, however, forces the service to run on worker1, and when that node goes down the service simply goes down with it.
I could also do this
placement:
  constraints:
    - node.role == worker
This restricts the service to either one of the worker nodes but doesn't prioritize worker1 as I want to. Is there a way to prioritize the service to a specific node rather than putting a constraint?

Running docker stack & swarm in production on a single node with autostart?

How can I run a docker stack (from a docker-compose.yml) on a single docker swarm node which automatically starts on system reboots?
I am using docker-compose to compose my application of multiple services and then use docker stack deploy to deploy it on my server to a single-node docker swarm.
In my docker-compose.yml I have defined my services with a restart policy:
deploy:
  restart_policy:
    condition: any
    delay: 5s
    max_attempts: 3
    window: 120s
  placement:
    constraints: [node.role == manager]
which imho should force the service to always run/restart. But when the server/docker daemon is restarted the services are not started. Is there some easy way to do this?
docker service list would show:
ID NAME MODE REPLICAS IMAGE PORTS
s9gg88ul584t finalyzer-prod_backend replicated 0/1 registry.gitlab.com/hpoul/finalyzer/finalyzer-backend:latest *:8081->8080/tcp
vb9iwg7zcwxd finalyzer-prod_mongoadmin replicated 0/1 mrvautin/adminmongo:latest *:8082->1234/tcp
qtasgtqi7m0l finalyzer-prod_mongodb replicated 0/1 mongo@sha256:232dfea36769772372e1784c2248bba53c6bdf0893d82375a3b66c09962b5af9
wdnrtlbe8jpw finalyzer-prod_pgdb replicated 0/1 postgres@sha256:73d065c344b419ce97bba953c7887c7deead75e0998053518938231cd7beb22c
so it recognizes that each service should run 1 replica, but it does not scale them up. What is the right way to force docker swarm / docker stack to scale all configured services up to their configured replica counts after a server restart (or docker daemon restart)?
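For reference, here is the restart_policy block from above annotated with what each field does according to the compose file reference (values unchanged); note in particular that max_attempts caps how often swarm retries before giving up:
deploy:
  restart_policy:
    condition: any      # restart regardless of exit status
    delay: 5s           # wait 5s between restart attempts
    max_attempts: 3     # give up after 3 attempts within the window below
    window: 120s        # how long to wait before deciding a restart succeeded
  placement:
    constraints: [node.role == manager]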

docker swarm - how to balance already running containers in a swarm cluster?

I have a docker swarm cluster with 2 nodes on AWS. I stopped both instances and then started the swarm manager first and the worker afterwards. Before stopping the instances I had a service running with 4 replicas distributed between the manager and the worker.
When I started the swarm manager node first, all replica containers started on the manager itself and are not moving to the worker at all.
Please tell me how to balance the load.
Is the swarm manager not responsible for doing this when the worker comes back up?
Swarm currently (18.03) does not move or replace containers when new nodes are started, if services are in the default "replicated mode". This is by design. If I were to add a new node, I don't necessarily want a bunch of other containers stopped, and new ones created on my new node. Swarm only stops containers to "move" replicas when it has to (in replicated mode).
docker service update --force <servicename> will rebalance a service across all nodes that match its requirements and constraints.
Further advice: like other container orchestrators, you need to leave spare capacity on your nodes to handle the workloads of any service replicas that move during outages. Your spare capacity should match the level of redundancy you plan to support. If you want to handle two nodes failing at once, for instance, you'd need a minimum percentage of resources free on all nodes for those workloads to shift to the remaining nodes.
Here's a bash script I use to rebalance:
#!/usr/bin/env bash
set -e

# Services whose names match this pattern are skipped (NAME also filters out the header line).
EXCLUDE_LIST="(_db|portainer|broker|traefik|prune|logspout|NAME)"

# Force-update every remaining service, which reschedules (and thereby rebalances) its tasks.
for service in $(docker service ls | egrep -v "$EXCLUDE_LIST" | awk '{print $2}'); do
  docker service update --force "$service"
done
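A small usage note: docker service commands only work against a manager, so the script (saved e.g. as rebalance.sh, a name chosen here for illustration) has to be run on a manager node:
chmod +x rebalance.sh
./rebalance.sh   # run on (or pointed at) a swarm manager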
Swarm doesn't do auto-balancing once containers are created. You can scale up/down once all your workers are up and it will distribute containers per your config requirements/roles/etc.
see: https://github.com/moby/moby/issues/24103
There are problems with new nodes getting "mugged" as they are added. We also avoid pre-emption of healthy tasks. Rebalancing is done over time, rather than killing working processes. Pre-emption is being considered for the future.
As a workaround, scaling a service up and down should rebalance the tasks. You can also trigger a rolling update, as that will reschedule new tasks.
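On the command line, the scale-up/scale-down workaround from that quote could look like this (myservice and the replica counts are placeholders):
docker service scale myservice=6   # scale up; new tasks tend to land on less-loaded nodes
docker service scale myservice=5   # then back down to the desired count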
In docker-compose.yml, you can define:
version: "3"
services:
app:
image: repository/user/app:latest
networks:
- net
ports:
- 80
deploy:
restart_policy:
condition: any
mode: replicated
replicas: 5
placement:
constraints: [node.role == worker]
update_config:
delay: 2s
Remark: the constraint is node.role == worker
Using the flag "--replicas" implies we don't care which nodes the tasks are put on; if we want one task per node we can use "--mode=global" instead.
In Docker 1.13 and higher, you can use the --force or -f flag with the docker service update command to force the service to redistribute its tasks across the available worker nodes.
