Based on the picture in this document: https://docs.docker.com/engine/swarm/how-swarm-mode-works/services/
What should the output of the command docker service ls look like for the two services depicted? My understanding is:
grey global "1/1" consul:latest
yellow replicated "1/3" consul:latest
I am not sure about the numbers shown in quotes.
I need some help understanding the output.
The correct output, based on the answer and the picture, would be:
grey global "5/5" consul:latest
yellow replicated "3/3" consul:latest
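Rendered as actual docker service ls output, that would look roughly like this (the IDs are hypothetical placeholders, and the real service names would of course differ from the colours used in the picture):
ID             NAME     MODE         REPLICAS   IMAGE           PORTS
x1y2z3a4b5c6   grey     global       5/5        consul:latest
d7e8f9g0h1i2   yellow   replicated   3/3        consul:latest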
Without a placement constraint, a service in global mode will be deployed with one replica on each cluster node. As a result, you will have 5/5 replicas.
You can use placement constraints to restrict deployment to specific nodes, for instance to worker nodes or to nodes carrying a specific node label (see the sketch after this list):
- You could use a placement constraint to restrict deployment to your worker nodes, which will result in 4 replicas.
- You could add a node label to n of your nodes and use it as a placement constraint, resulting in n replicas.
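As a minimal sketch (the constraint is just an example), restricting a global service to worker nodes in a compose file could look like this:
deploy:
  mode: global
  placement:
    constraints:
      - node.role == worker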
While global mode services guarantee that exactly one replica of a service is running on each node that fulfills the placement constraints, the same is not necessarily true for replicated mode services. Replicated mode services are usually fanned out across the nodes, but could also be placed on a single node...
Why do you list the replicas as 1/3 for the (yellow) replicated mode service? If all replicas are deployed successfully, it should be 3/3.
The numbers indicate a summary of the total deployment for the service. They indicate neither how the replicas are spread across the cluster nor where the replicas are running.
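If you want to see where the individual tasks actually run, you can use docker service ps (the service name here is just a placeholder):
docker service ps yellow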
Related
I'm currently in the process of setting up a swarm with 5 machines. I'm just wondering if I can and should limit the swarm to only allow one active instance of a service, with all the others waiting to jump in when that service fails.
This is to prevent potential concurrency problems with MariaDB (as the nodes still write to a NAS), or a connection limit to an external service (like Node-RED with Telegram).
If you're deploying with stack files you can set "replicas: 1" in the deploy section to make sure only one instance runs at a time.
If that instance fails (crashes or exits), Docker will start another one.
https://docs.docker.com/compose/compose-file/deploy/#replicas
If the service is replicated (which is the default), replicas specifies the number of containers that SHOULD be running at any given time.
services:
  frontend:
    image: awesome/webapp
    deploy:
      mode: replicated
      replicas: 6
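For the single-active-instance MariaDB scenario described above, a minimal sketch could look like this (the service name and image tag are assumptions, not taken from the question):
services:
  mariadb:
    image: mariadb:10.11
    deploy:
      mode: replicated
      replicas: 1
      restart_policy:
        condition: on-failure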
If you want multiple instances running and only one "active" hitting the database you'll have to coordinate that some other way.
I have a Docker swarm environment with 7 nodes (3 masters and 4 workers). I am trying to deploy a container, and my requirement is that at any point in time I need 2 instances of this container running, but when I scale it the container should be deployed to a different node than the one it is currently running on.
Example: say one instance of the container is running on Node 4; if I scale to scale=2, it should run on any node except Node 4.
I tried this, but no luck:
deploy:
  mode: global
  placement:
    constraints:
      - node.labels.cloud.type == nodesforservice
We solved this issue with a placement preferences configuration (under the placement section). We set a node.labels.worker label on all our worker nodes. We have 3 workers, which have the labels node.labels.worker = worker1, node.labels.worker = worker2 and node.labels.worker = worker3 respectively. On the Docker Compose side we then configure it like this:
placement:
  max_replicas_per_node: 2
  constraints:
    - node.role==worker
  preferences:
    - spread: node.labels.worker
Note that this will not FORCE replicas onto separate nodes; it will only spread them if possible. It is not a hard limit, so be aware of that.
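For reference, node labels like the ones above can be added with docker node update (the node names here are placeholders):
docker node update --label-add worker=worker1 swarm-worker-1
docker node update --label-add worker=worker2 swarm-worker-2
docker node update --label-add worker=worker3 swarm-worker-3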
I would like to deploy a stack to a docker swarm where I want each node to run a given service.
I looked at the deploy.placement configuration and the closest option I found is the placement preference spread=node.label.abc, which will distribute the services equally across the nodes matching the label. However, this requires updating the replica count all the time to match the number of nodes.
Is there a way to automatically deploy a service on all the nodes without manually updating the replica count?
Is there a way to automatically deploy a service on all the nodes without manually updating the replica count?
Yes, deploy your service in global mode instead of replicated. Example from the link:
version: '3'
services:
  worker:
    image: dockersamples/examplevotingapp_worker
    deploy:
      mode: global
This will run a single instance of the container on every node matching your constraints.
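If you deploy this as a stack, you can then check the per-node scheduling; the stack name below is just an example:
docker stack deploy -c docker-compose.yml mystack
docker service ps mystack_worker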
I'm using a docker-compose file version 3 with its deploy key to run a swarm (Docker version 1.13) and I would like to replicate a service in order to make it resilient against single-node failure.
However, when I'm adding a deploy section like this:
deploy:
  replicas: 2
in my four node cluster I sometimes end up with both replicas scheduled on the same node. What I'm missing is a constraint that schedules the two instances on different nodes.
I know that there's a global mode I could use but that would run an instance on every node, i.e. four instances in my case instead of just two.
Is there a simple way to specify this constraint in a generic way, without having to resort to a combination of global mode and labels to keep additional instances away from certain nodes?
Edit: After trying it again I find containers to be scheduled on different nodes this time around. I'm beginning to wonder if I may have had a 'node.hostname == X' constraint in place.
Edit 2: After another service update - and without any placement constraints - the service is again being scheduled on the same node (as displayed by ManoMarks Visualizer):
Extending VonC's answer: since in your example you are using a compose file, not the CLI, you could add max_replicas_per_node: 1, as in:
version: '3.8'
...
yourservice:
  deploy:
    replicas: 2
    placement:
      max_replicas_per_node: 1
The compose schema version 3.8 is key here, as there is no support for max_replicas_per_node below 3.8.
This was added in https://github.com/docker/cli/pull/1410
docker/cli PR 1612 seems to resolve issue 26259, and has been released in docker 19.03.
Added a new --replicas-max-per-node switch to docker service
How to verify it
Create two services and specify --replicas-max-per-node on one of them:
docker service create --detach=true --name web1 --replicas 2 nginx
docker service create --detach=true --name web2 --replicas 2 --replicas-max-per-node 1 nginx
See the difference in the command outputs:
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
0inbv7q148nn web1 replicated 2/2 nginx:latest
9kry59rk4ecr web2 replicated 1/2 (max 1 per node) nginx:latest
$ docker service ps --no-trunc web2
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
bf90bhy72o2ry2pj50xh24cfp web2.1 nginx:latest@sha256:b543f6d0983fbc25b9874e22f4fe257a567111da96fd1d8f1b44315f1236398c limint Running Running 34 seconds ago
xedop9dwtilok0r56w4g7h5jm web2.2 nginx:latest@sha256:b543f6d0983fbc25b9874e22f4fe257a567111da96fd1d8f1b44315f1236398c Running Pending 35 seconds ago "no suitable node (max replicas per node limit exceed)"
The error message would be:
no suitable node (max replicas per node limit exceed)
Examples from Sebastiaan van Stijn:
Create a service with max 2 replicas:
docker service create --replicas=2 --replicas-max-per-node=2 --name test nginx:alpine
docker service inspect --format '{{.Spec.TaskTemplate.Placement.MaxReplicas}}' test
2
Update the service (max replicas should keep its value)
docker service update --replicas=1 test
docker service inspect --format '{{.Spec.TaskTemplate.Placement.MaxReplicas}}' test
2
Update the max replicas to 1:
docker service update --replicas-max-per-node=1 test
docker service inspect --format '{{.Spec.TaskTemplate.Placement.MaxReplicas}}' test
1
And reset to 0:
docker service update --replicas-max-per-node=0 test
docker service inspect --format '{{.Spec.TaskTemplate.Placement.MaxReplicas}}' test
0
What version of Docker are you using? According to this post, this kind of problem has been rectified in 1.13; do take a look: https://github.com/docker/docker/issues/26259#issuecomment-260899732
Hope that answers your question.
I'm running Kubernetes 1.2.0 on a number of lab machines. The machines have swap enabled. As the machines are used for other purposes, too, I cannot disable swap globally.
I'm observing the following problem: if I start a pod with a memory limit, the container starts swapping after it reaches the memory limit. I would expect the container to be killed.
According to this issue this was a problem that has been fixed, but it still occurs with Kubernetes 1.2.0. If I check the running container with docker inspect, then I can see that MemorySwap = -1 and MemorySwappiness = -1. If I start a pod with low memory limits, it starts swapping almost immediately.
I had some ideas, but I couldn't figure out how to do any of these:
Change the default setting in Docker so no container is allowed to swap
Add a parameter to the Kubernetes container config so it passes --memory-swappiness=0
Fiddle with docker's cgroup and disallow swapping for the group
How can I prevent the containers from swapping?
Since version 1.8, Kubernetes, specifically the kubelet, fails to start if swap is enabled on Linux (flag --fail-swap-on=true), as Kubernetes can't handle swap. That means you can be sure that swap is disabled by default on Kubernetes.
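If you really must run the kubelet on a host where swap stays enabled (as in the question), the check can be overridden explicitly, although this is generally discouraged:
kubelet --fail-swap-on=false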
To test it in a local Docker container, set memory-swap == memory, e.g.:
docker run --memory="10m" --memory-swap="10m" dominikk/swap-test
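You can verify that swap is effectively disabled for the container by inspecting its memory settings; when the two values below are equal, the container cannot use any swap (replace <container> with the container's name or ID):
docker inspect -f 'memory={{.HostConfig.Memory}} memory-swap={{.HostConfig.MemorySwap}}' <container>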
My test image is based on this small program, with the addition of flushing the output so it shows up in the Docker logs:
setvbuf(stdout, NULL, _IONBF, 0); // make stdout unbuffered so every write is flushed immediately
You can also test it with docker-compose up (this only works with compose file version <= 2.x):
version: '2'
services:
  swap-test:
    image: dominikk/swap-test
    mem_limit: 10m
    # memswap_limit:
    #   -1: unlimited swap
    #    0: field unset
    #   >0: mem_limit + swap
    #   == mem_limit: swap disabled
    memswap_limit: 10m
If you are just playing around, there is no need to bother with turning swap off. Things will still run, but resource isolation won't work as well. If you are using Kubernetes seriously enough to need resource isolation, then you should not be running other things on the machines.