I deployed a Docker swarm on 5 nodes and I have 5 microservices. Docker swarm assigns all the services to only one node. Is there any way to tell Docker swarm which node to use for each service, so that one service runs on every node?
Yes, you can do this with the "deploy" configuration option in your compose file. For example:
deploy:
  placement:
    constraints:
      - "node.hostname == desired_machine_hostname"
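To spell out the five-services-on-five-nodes case, a stack file could pin each service to its own node by hostname. A minimal sketch, assuming placeholder service names, images, and hostnames:

```yaml
version: "3.8"
services:
  service1:
    image: myorg/service1:latest    # placeholder image
    deploy:
      placement:
        constraints:
          - "node.hostname == node1"   # assumed hostname
  service2:
    image: myorg/service2:latest    # placeholder image
    deploy:
      placement:
        constraints:
          - "node.hostname == node2"   # assumed hostname
  # ... repeat for service3 through service5, one hostname each
```

Each service then only has one eligible node, so the scheduler places exactly one service per node.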
Related
In system1 (i.e. the hostname of the manager node), the swarm is started using
docker swarm init
Later, the compose files available in system1 (*.yml) are deployed using
docker stack deploy --compose-file file_1.yml system1
docker stack deploy --compose-file file_2.yml system1
docker stack deploy --compose-file file_3.yml system1
Next, in system2 (i.e. the hostname of the worker node), I join the manager node (system1) using the join token, which is obtained by running the following command on the manager:
docker swarm join-token worker
Running the output of that command on system2 successfully joined it to the manager node.
I also cross-verified using
docker node ls
I could see both the manager and worker nodes in Ready and Active state.
In my case, I'm using the worker node (system2) for failover.
Now I have similar compose files (*.yml) in system2.
How do I get them deployed in the Docker swarm?
Since system2 is a worker node, I cannot deploy from system2.
First, I'm not sure what you mean by
In my case I'm using worker node(system2) for failover .
We are running Docker Swarm in production, and the only way you can achieve failover with managers is to use more of them. Note that Docker Swarm's managers use the Raft consensus algorithm, which requires a quorum, so go with an odd number of managers: 1, 3, 5, ...
As for deployments from non-manager nodes: it is not possible in Docker Swarm unless you use a management service that proxies the Docker socket. Such a service runs on a manager, and since it all lives inside the swarm, you can then invoke the API calls from the worker.
But there is no way to directly deploy or administrate the swarm from the worker node.
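As a sketch of that pattern, the management service could be a stack service pinned to a manager node with the manager's Docker socket mounted; workers then reach it over the overlay network. The image, service name, and keep-alive command below are illustrative assumptions, not part of the original answer:

```yaml
version: "3.8"
services:
  deployer:
    image: docker:cli                  # illustrative: any image with a docker client
    entrypoint: ["tail", "-f", "/dev/null"]   # keep the container running
    volumes:
      # The manager's socket gives this service control of the swarm;
      # mount it read-write and treat this service as highly privileged.
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager
```

Anything that can exec into this service (or call an API it exposes) effectively has manager-level control, so it should be locked down accordingly.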
Some things:
First:
Docker contexts are used to communicate with a swarm manager remotely so that you do not have to be on the manager when executing docker commands.
i.e., to deploy remotely to a swarm, you could create and then use a context like this:
docker context create swarm1 --docker "host=ssh://user@node1"
docker --context swarm1 stack deploy --compose-file stack.yml stack1
Second:
Once the swarm is set up, you always communicate with a manager node, and it orchestrates the deployment of services to the available worker nodes. If worker nodes are added after services are deployed, Docker will not move tasks onto them until new deployments are performed, as it prefers not to interrupt running tasks. The goal is eventual balance. If you want to force Docker to rebalance onto the new worker node immediately, redeploy the stack, or run:
docker service update --force some-service
Third:
To control which worker nodes services run tasks on you can use placement constraints and node labels.
docker service create --constraint node.role==worker ... would only deploy onto nodes that have the worker role (are not managers)
or
docker service update --constraint-add "node.labels.is-nvidia-enabled==1" some-service would only deploy tasks to nodes that you have explicitly labeled with the corresponding label and value.
e.g. docker node update --label-add is-nvidia-enabled=1 node1 followed by docker node update --label-add is-nvidia-enabled=1 node3 (the command takes one node at a time)
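In a stack file, the same label constraint might look like the following sketch; the service name, image, and label value are assumptions carried over from the commands above:

```yaml
version: "3.9"
services:
  some-service:
    image: nginx    # placeholder image
    deploy:
      placement:
        constraints:
          # Only schedule tasks on nodes labeled with is-nvidia-enabled=1
          - node.labels.is-nvidia-enabled == 1
```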
I am working on a project which uses Raspberry Pis as worker nodes and my laptop as the master node. I want to control the deployment of my containers from my laptop, but I want the containers to run only on the worker nodes (which means no containers on the master node). How can I do this with Docker Swarm?
I am going to presume you are using a stack.yml file to describe your deployment using desired-state, but docker service create does have flags for this too.
There are a number of values that Docker defines that can be tested under the placement constraints node:
version: "3.9"
services:
  worker:
    image: nginx
    deploy:
      placement:
        constraints:
          - node.role==worker
I created 4 microservices using the Moleculer framework with docker-compose. How do I statically configure each microservice to run on a specific machine?
You may want to use Docker swarm, which has a feature called constraints that allows you to deploy a container on a specific node.
Node: a Docker node refers to a member of a swarm mode cluster. Every swarm node must be a Docker host. Source: What is the difference between docker host and node?
Constraints can be treated as node tags; they are key/value pairs associated with a particular node.
Each node by default has the following constraints:
node.id
node.hostname
node.role
A service can be deployed as the following:
docker service create --name backendapp --constraint 'node.hostname == web.example.com' <image>
Note that you can deploy to swarm using docker-compose.yml:
The deploy command supports compose file version 3.0 and above.
docker stack deploy --compose-file docker-compose.yml mystack
Also you can set constraints in docker-compose similar to the following example:
version: '3.3'
services:
  web:
    image: backendapp-image
    deploy:
      placement:
        constraints:
          - node.hostname == web.example.com
You can get started with Docker swarm here.
I have a Docker swarm cluster with 2 nodes on AWS. I stopped both instances and then started the swarm manager first and the worker second. Before stopping the instances, I had a service running with 4 replicas distributed between the manager and the worker.
When I started the swarm manager node first, all replica containers started on the manager itself and did not move to the worker at all.
How do I rebalance the load?
Isn't the swarm manager responsible for doing this when the worker starts?
Swarm currently (18.03) does not move or replace containers when new nodes are started, if services are in the default "replicated mode". This is by design. If I were to add a new node, I don't necessarily want a bunch of other containers stopped, and new ones created on my new node. Swarm only stops containers to "move" replicas when it has to (in replicated mode).
docker service update --force <servicename> will rebalance a service across all nodes that match its requirements and constraints.
Further advice: like other container orchestrators, you need to leave spare capacity on your nodes in order to handle the workloads of any service replicas that move during outages. Your spare capacity should match the level of redundancy you plan to support. If you want to handle 2 nodes failing at once, for instance, you need enough free resources on the remaining nodes for those workloads to shift to them.
Here's a bash script I use to rebalance:
#!/usr/bin/env bash
set -e

# Services whose tasks should not be forcibly rescheduled (NAME also drops the header row)
EXCLUDE_LIST="(_db|portainer|broker|traefik|prune|logspout|NAME)"

# Force-update every remaining service so swarm reschedules its tasks
for service in $(docker service ls | egrep -v "$EXCLUDE_LIST" | awk '{print $2}'); do
    docker service update --force "$service"
done
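To illustrate how the EXCLUDE_LIST filter behaves without needing a live swarm, here is the same pipeline run against fabricated `docker service ls` output (the service names below are made up for the demonstration):

```shell
# Fabricated `docker service ls` output, for demonstration only
sample='ID    NAME        MODE        REPLICAS
aaa   web_app     replicated  3/3
bbb   app_db      replicated  1/1
ccc   portainer   replicated  1/1
ddd   api         replicated  2/2'

EXCLUDE_LIST="(_db|portainer|broker|traefik|prune|logspout|NAME)"

# Same filter as the rebalance script: drop the header row (matches NAME)
# and the excluded services, then keep only the NAME column
echo "$sample" | egrep -v "$EXCLUDE_LIST" | awk '{print $2}'
# prints:
# web_app
# api
```

Only web_app and api survive the filter; app_db matches `_db` and portainer matches `portainer`, so they would not be force-updated.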
Swarm doesn't do auto-balancing once containers are created. You can scale up/down once all your workers are up and it will distribute containers per your config requirements/roles/etc.
see: https://github.com/moby/moby/issues/24103
There are problems with new nodes getting "mugged" as they are added.
We also avoid pre-emption of healthy tasks. Rebalancing is done over
time, rather than killing working processes. Pre-emption is being
considered for the future.
As a workaround, scaling a service up and down should rebalance the
tasks. You can also trigger a rolling update, as that will reschedule
new tasks.
In docker-compose.yml, you can define:
version: "3"
services:
  app:
    image: repository/user/app:latest
    networks:
      - net
    ports:
      - 80
    deploy:
      restart_policy:
        condition: any
      mode: replicated
      replicas: 5
      placement:
        constraints: [node.role == worker]
      update_config:
        delay: 2s
Remark: the constraint is node.role == worker
Using the "--replicas" flag implies we don't care which nodes the tasks are put on; if we want one task per node, we can use "--mode global" instead.
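In a compose file, global mode looks like this sketch (the image name is a placeholder); swarm then runs exactly one task of the service on every node that matches the constraints:

```yaml
version: "3"
services:
  agent:
    image: repository/user/agent:latest   # placeholder image
    deploy:
      mode: global        # one task per eligible node, no replica count
      placement:
        constraints: [node.role == worker]
```

This pattern is common for per-node agents such as log shippers or monitoring daemons.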
In Docker 1.13 and higher, you can use the --force or -f flag with the docker service update command to force the service to redistribute its tasks across the available worker nodes.
I have:
three nodes: 1 swarm manager and 2 swarm worker nodes
an application cluster whose services connect to each other
docker-compose.yml
services:
  service1:
    ports:
      - 8888:8888
    environment:
      - ADDITIONAL_NODES=service2:8889,service3:8890
  service2:
    ports:
      - 8889:8889
    environment:
      - ADDITIONAL_NODES=service1:8888,service3:8890
  service3:
    ports:
      - 8890:8890
    environment:
      - ADDITIONAL_NODES=service1:8888,service2:8889
If I run docker stack deploy -c docker-compose.yml server:
1. swarm manager (service1), swarm node1 (service2), swarm node2 (service3)
2. swarm manager (service1, service2, service3), swarm node1 (service1, service2, service3), swarm node2 (service1, service2, service3)
Which one will be the result?
If it is 2, how can I get layout 1 using Docker swarm? I need to use swarm because I'm also using a Docker overlay network.
If it is 1, then how are my services distributed? Is it distributed evenly? If so, in what sense is it "evenly" distributed?
Docker swarm has some logic it uses to decide which services run on which nodes. It might not be 100% what you expect, but there are smart people working on this, and it may consider things that you don't (such as CPU load or available RAM).
The goal is to spread the load evenly, as in your example 1. If some services cannot start on one node for some reason (for example, you use a private registry but didn't specify --with-registry-auth in stack deploy), then the services will all start on the nodes that can run them, after failing on the other nodes.
From personal experience I can tell you that it spreads tasks nicely across the swarm, but there's no guarantee where each service ends up.
If you want to force where services run, use constraints.
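For example, to force the layout in option 1, each of the three services in your compose file could be pinned to its node with a hostname constraint. A sketch, with assumed hostnames you would substitute with your own:

```yaml
services:
  service1:
    ports:
      - 8888:8888
    environment:
      - ADDITIONAL_NODES=service2:8889,service3:8890
    deploy:
      placement:
        constraints:
          - node.hostname == manager-host   # assumed hostname
  service2:
    ports:
      - 8889:8889
    environment:
      - ADDITIONAL_NODES=service1:8888,service3:8890
    deploy:
      placement:
        constraints:
          - node.hostname == worker1-host   # assumed hostname
  service3:
    ports:
      - 8890:8890
    environment:
      - ADDITIONAL_NODES=service1:8888,service2:8889
    deploy:
      placement:
        constraints:
          - node.hostname == worker2-host   # assumed hostname
```

Since the services share an overlay network, they can still reach each other by service name regardless of which node each task lands on.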