How to Set Minimum Container Requirements Using Docker Swarm

In Docker Swarm you can set maximum system requirements like so:
my-service:
  image: hello-world
  deploy:
    resources:
      limits:
        cpus: '2'
        memory: 4GB
I have a container whose minimum system requirements are 2 CPU cores and 4GB of RAM, which is exactly the size of the nodes in my Docker Swarm. This means that when this container is running, it needs to be the only container running on that node.
However, when I run the container alongside others, other containers get placed on the same node. How can I ensure that Docker gives this container a guaranteed minimum amount of CPU and RAM?
Update
I added reservations as suggested by @yamenk; however, I still get other containers starting on the same node, which causes performance problems for the container I am trying to protect:
my-service:
  image: hello-world
  deploy:
    resources:
      reservations:
        cpus: '2'
        memory: 4GB

Update
Apparently the effect of memory reservations in Docker Swarm is not very well documented, and they work on a best-effort basis. To understand the effect of the memory reservation flag, check the documentation:
When memory reservation is set, Docker detects memory contention or low memory and forces containers to restrict their consumption to a reservation limit.
...
Memory reservation is a soft-limit feature and does not guarantee the limit won't be exceeded. Instead, the feature attempts to ensure that, when memory is heavily contended for, memory is allocated based on the reservation hints/setup.
To enforce that no other container runs on the same node, you need to set service constraints. What you can do is give the nodes in the swarm specific labels and use those labels to schedule services onto only the nodes that carry them.
As described here, a node label can be added to a node using the command:
docker node update --label-add hello-world=yes <node-name>
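To double-check that the label was applied, docker node inspect can print a node's labels (the format string below is just one way to do it):
docker node inspect --format '{{ json .Spec.Labels }}' <node-name>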
Then, inside your stack file, you can restrict my-service to run only on nodes that have the specified label, and constrain the other containers to avoid nodes labeled with hello-world=yes.
my-service:
  image: hello-world
  deploy:
    placement:
      constraints:
        - node.labels.hello-world == yes

other-service:
  ...
  deploy:
    placement:
      constraints:
        - node.labels.hello-world == no
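One caveat: node.labels.hello-world == no only matches nodes that actually carry the hello-world label with the value no; nodes with no label at all satisfy neither constraint. You can either label the remaining nodes with hello-world=no as well, or use a != constraint, which also matches unlabeled nodes. A sketch of the alternative:
other-service:
  ...
  deploy:
    placement:
      constraints:
        - node.labels.hello-world != yes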
If you want to start replicas of my-service on multiple nodes, and still have exactly one container running on each node, you need to set my-service to global mode and add the same label to every node where you want a container to run.
Global mode ensures that exactly one container runs on each node that satisfies the service constraints:
my-service:
  image: hello-world
  deploy:
    mode: global
    placement:
      constraints:
        - node.labels.hello-world == yes
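With the labels and constraints in place, deploying or updating the stack is the usual command (the stack name here is just a placeholder):
docker stack deploy -c docker-compose.yml my-stack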
Old Answer:
You can set resource reservations as such:
version: '3'
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        reservations:
          cpus: '1'
          memory: 20M
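If you want to confirm what the scheduler actually recorded for a service, one way (assuming the stack was deployed under the name mystack) is to inspect the task template:
docker service inspect --format '{{ json .Spec.TaskTemplate.Resources }}' mystack_redis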

Related

docker-compose: Reserve a different GPU for each scaled container

I have a docker-compose file that looks like the following:
version: "3.9"
services:
api:
build: .
ports:
- "5000"
deploy:
resources:
reservations:
devices:
- capabilities: [gpu]
count: 1
When I run docker-compose up, this runs as intended, using the first GPU on the machine.
However, if I run docker-compose up --scale api=2, I would expect each docker container to reserve one GPU on the host.
The actual behaviour is that both containers receive the same GPU, meaning that they compete for resources. I also get this behaviour if I have two services specified in the docker-compose.yml, both with count: 1. If I manually specify device_ids for each container, it works.
How can I make it so that each docker container reserves exclusive access to 1 GPU? Is this a bug or intended behaviour?
The behaviour of docker-compose when a scale is requested is to create the additional containers from the exact specification provided by the service.
Very few specification parameters vary during the creation of the additional containers, and the devices, which are part of the host_config set of parameters, are copied without modification.
docker-compose is a Python project, so if this is an important feature for you, you can try to implement it. The logic that drives the lifecycle of the services (creation, scaling, etc.) resides in compose/services.py.
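Since manually specifying device_ids is reported to work, a practical workaround is to declare one service per GPU instead of scaling a single service. A rough sketch (the service names are made up; driver and device_ids are standard keys of the compose device reservation syntax):
version: "3.9"
services:
  api-gpu0:
    build: .
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['0']   # pin this service to the first GPU
              capabilities: [gpu]
  api-gpu1:
    build: .
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              device_ids: ['1']   # pin this service to the second GPU
              capabilities: [gpu]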

Start a docker service based on another service

Is there a possibility to start a service on a specific node, based on another running service? (using Docker Swarm)
To make myself a little more clear:
I want to run Nextcloud on a different node than, for example, a Typo3 instance, to spare some resources on my Nextcloud node.
How would I write that in a compose file?
Look into deploy and using labels:
Example:
deploy:
  mode: replicated
  replicas: 1
  placement:
    constraints:
      - node.labels.NextcloudDaemon == true
  restart_policy:
    condition: any
The above example will run exactly 1 container, and only on the node you've already given the label of "NextcloudDaemon".
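The label itself is applied from a manager node, with something like (the node name is a placeholder):
docker node update --label-add NextcloudDaemon=true <node-name>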

docker-compose scale with different cpuset

How can I scale a service but apply a different cpuset on each instance with docker-compose ?
For example: I have 4 cpus, I want 4 instances, each using 1 unique cpu.
What version of docker-compose are you using? I'm asking because accomplishing what you want is only possible with docker-compose v2.x or Docker Swarm, as you can see below.
You can find more info in the Docker docs linked at the end of this answer.
Supposing that you are using compose file version 2.4, you can define a service like this in your docker-compose.yaml:
version: '2.4'
services:
  redis:
    image: redis:1.0.0
    restart: always
    environment:
      - REDIS_PASSWORD=1234
    cpu_count: 1
    mem_limit: 200m
Where cpu_count is the number of CPU cores you want the service to use, and mem_limit is the maximum amount of memory your service can consume.
To define the number of replicas you must run:
docker-compose up --scale redis=2
Where redis is the name of the service in the docker-compose file and 2 is the number of replicas you want. Both containers will then spin up with 1 CPU core and 200m of memory.
To check the containers' resource consumption you can run docker stats.
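If you need actual CPU pinning (a distinct core per instance) rather than just a core count, the v2 compose format also has a cpuset option; because --scale copies the same service spec to every replica, one way to get a different pinning per instance is to declare each instance as its own service. A rough sketch:
version: '2.4'
services:
  redis-0:
    image: redis:1.0.0
    cpuset: "0"   # pin this instance to core 0
  redis-1:
    image: redis:1.0.0
    cpuset: "1"   # pin this instance to core 1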
Source:
https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources
https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources

How to make some node receive more replicas of a service in Swarm

I have 3 servers; one has more hardware than the others. I have a service with 25 replicas, and when I run docker stack deploy each server receives the same number of replicas of that service.
Can I specify some kind of percentage of replicas in the docker-compose file so that more of them go to that specific node?
You can define resource reservations for the containers you run, and Docker will avoid scheduling containers on nodes that are out of resources according to those memory and CPU reservations. Note that this should be done for all containers being scheduled, and you'll need to profile your applications to identify appropriate reservation and limit amounts. Keep in mind that under heavy load these numbers will increase, and the response to exceeding the memory limit is to kill the container and restart another instance, which would be bad during peak load, so make your limits sufficiently large.
Here's the example from Docker's documentation:
version: "3.7"
services:
redis:
image: redis:alpine
deploy:
resources:
limits:
cpus: '0.50'
memory: 50M
reservations:
cpus: '0.25'
memory: 20M
See also Docker's resource constraints documentation: https://docs.docker.com/config/containers/resource_constraints/
A second, less ideal, option is to define node labels that split the nodes into equal-sized groups, where size is based on the resources. Then you can abuse the functionality that allows workloads to be distributed across HA zones to split the workload between the machines based on their resources. For that, you would use a placement preferences spread policy to spread the workload between the different values of a label:
version: "3.7"
services:
db:
image: postgres
deploy:
placement:
preferences:
- spread: node.labels.zone
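For the spread preference to do anything, the zone label has to exist on the nodes first, e.g. putting the large machine in its own group (the node names and label values here are placeholders):
docker node update --label-add zone=large node-1
docker node update --label-add zone=small node-2
docker node update --label-add zone=small node-3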

Deploy a docker stack on one node (co-schedule containers like docker swarm)

I'm aware that docker-compose with docker-swarm (which is now legacy) is able to co-schedule some services on one node (using dependency filters such as link)
I was wondering if this kind of co-scheduling is possible using modern docker engine swarm mode and the new stack deployment introduced in Docker 1.13
In docker-compose file version 3, links are said to be ignored while deploying a stack in a swarm, so obviously links aren't the solution.
We have a bunch of servers to run batch short-running jobs and the network between them is not very high speed. We want to run each batch job (which consists of multiple containers) on one server to avoid networking overhead. Is this feature implemented in docker stack or docker swarm mode or we should use the legacy docker-swarm?
Also, I couldn't find co-scheduling with another container in the placement policies.
@Roman: You are right.
To deploy to a specific node you need to use placement policy:
version: '3'
services:
  job1:
    image: example/job1
    deploy:
      placement:
        constraints:
          - node.hostname == node-1
    networks:
      - example
  job2:
    image: example/job2
    deploy:
      placement:
        constraints:
          - node.hostname == node-1
    networks:
      - example
networks:
  example:
    driver: overlay
You can still use depends_on.
It's worth having a look at dockerize too.
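After deploying, docker stack ps shows which node each task ended up on, which is an easy way to confirm that the co-scheduling worked (the stack name is whatever you passed to docker stack deploy):
docker stack ps <stack-name>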
