Using Docker Swarm, I keep all my "overrides" in a single yml file like so:
docker stack deploy -c base.yml -c overrides.yml myStack
If my base.yml file defines these deploy limits for serviceA:
serviceA:
  . . .
  deploy:
    resources:
      limits: {memory: 1024M}
      reservations: {memory: 1024M}
I can easily override in overrides.yml:
serviceA:
  deploy:
    resources:
      limits: {memory: 2048M}
      reservations: {memory: 2048M}
This way, my base.yml can change as new versions of the product are released, and any overrides are easily carried over from the old version to the new one. However, what if I want to REMOVE something defined in base.yml? Say I want to keep the reservation but remove the limits definition, using only a second yml file. Is there any way to do this? Currently, I am on compose file version 3.6.
Neither of these two options works. This one is not parseable:
serviceA:
  deploy:
    resources:
      limits: {memory: }
      reservations: {memory: 2048M}
and this one just falls back to the default defined in base.yml:
serviceA:
  deploy:
    resources:
      reservations: {memory: 2048M}
When using multiple docker-compose files, the latter ones get merged into the former ones. That means: You cannot remove definitions, just modify existing ones or add new ones (see https://github.com/docker/compose/issues/3729). A PR to allow that was created and closed without ever being merged.
So all that leaves you with is to remove the limits definition from your base.yml and only have it in your overrides.yml.
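For instance, the split could look like this (a sketch using the names from the question):

```yaml
# base.yml -- no limits here, so there is nothing to "un-set" later
serviceA:
  deploy:
    resources:
      reservations: {memory: 1024M}

# overrides.yml -- add limits only for the deployments that want them
serviceA:
  deploy:
    resources:
      limits: {memory: 2048M}
      reservations: {memory: 2048M}
```

You can sanity-check the merged result with docker-compose -f base.yml -f overrides.yml config, which prints the effective configuration after merging.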
fjc provided the actual answer to my question. For the specific case I was interested in, I learned that if you 'override' and set the memory limit to zero (0), that is equivalent to not having set it at all. Not a general workaround, but at least a workaround for this specific case. For instance:
serviceA:
  deploy:
    resources:
      limits: { cpus: '0', memory: '0' }
would effectively not limit either the number of cpus or memory.
Related
On a machine with multiple Nvidia GPUs, is there a way to specify in the docker-compose.yml file that we want to use two specific GPU device IDs in the Docker container?
The following seems to be the equivalent of docker run --gpus=all, since all the GPUs are listed when running nvidia-smi inside the Docker container.
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          capabilities: [gpu]
You may use device_ids or count.
For example, to allow only GPUs with ID 0 and 3:
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          device_ids: ['0', '3']
          capabilities: [gpu]
Enabling GPU access with Compose
I introduced Docker Swarm resource limits on a cluster (24 GB RAM and 12 vCPUs) and specified service limits with the following configuration:
redis:
  image: redis
  deploy:
    replicas: 1
    resources:
      reservations:
        cpus: '1'
        memory: 300m
  ports:
    - "6379:6379"
Now the problem is that I get the error no suitable node (insufficient resources on 3 nodes), and I can't tell which resources are exhausted or on which nodes. Is there a way to see overall resource reservations?
I have an on-prem server on which I would like to deploy many microservices. I'm using a docker-compose file to declare all the services and would like to set the CPU limits. I am referring to the docs below: https://docs.docker.com/compose/compose-file/
My docker compose file is like:
version: "3.7"
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 256M
        reservations:
          cpus: '0.25'
          memory: 64M
  service1:
    image: service1 image
    deploy:
      resources:
        limits:
          cpus: '2'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 64M
  ...
I'm confused about how to calculate the CPU limits. For example, the CPU has 8 cores and there are 20 microservices.
Is there any way to calculate the CPU limit for each service, or a formula to do so?
----- UPDATE ------
To make it clearer, my main point here is the CPU limit. I'd like to send an alert if one microservice is using 80% of the CPU allotted to it. If I don't set a CPU limit, is it true that the microservice's CPU usage will be the same as the host's CPU usage? I don't use Docker Swarm, just plain Docker.
Any ideas are really appreciated.
Thanks,
Welcome to 2022. I'm using docker-compose 1.29 with compose file version 3.9.
If I set my yml file as follows:
version: '3.9'
services:
  astro-cron:
    build: ./cron
    image: astro-cron
    restart: unless-stopped
    volumes:
      - /home/docker-sync/sites/com/cron:/astro/cron
    environment:
      - TZ=America/Phoenix
    mem_limit: 300m
    mem_reservation: 100m
    cpus: 0.3
It nicely limits the container CPU to 30% of the machine.
The memory limit kills the container if exceeded, and the container then auto-restarts.
The CPU limit does not kill anything; it just throttles.
I'm not using Docker Swarm.
PS. To address the other answer's claim that an overcommitted CPU is not a problem: it IS a problem if one of your containers is an unimportant background job and another is a customer-facing web site or database. Unless someone can explain why that wouldn't be the case.
Having overcommitted CPU isn't really a problem. If you have 16 processes and they're all trying to do work requiring 100% CPU on an 8-core system, the kernel will time-share across the processes; in practice you should expect those processes to get 50% CPU each and the task to just take twice as long. One piece of advice I've heard (in a Kubernetes space) is to set CPU requests based on the steady-state load requirement of the service, and to not set CPU limits.
There's no magic formula to set either of these numbers. The best way is to set up some metrics system like Prometheus, run your combined system, and look at the actual resource utilization. In some cases it's possible to know that a process is single-threaded and will never use more than 1 core, or that you expect a process to be I/O bound and if it's not it should get throttled, but basing these settings on actual usage is probably better.
(Memory is different, in that you can actually run out of physical memory, but also that processes are capable of holding on to much more memory than they actually need. Again a metrics tool will be helpful here.)
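Translated into a compose file, that advice (a CPU reservation sized to steady-state load, no CPU limit, but a memory ceiling) might look like the sketch below; the specific numbers are assumed placeholders you would replace with measured usage from your metrics system:

```yaml
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        reservations:
          cpus: '0.25'    # assumed steady-state need; measure before committing
          memory: 64M
        limits:
          # no cpus limit: under contention the kernel time-shares fairly
          memory: 256M    # memory can genuinely run out, so keep a ceiling
```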
Your question suggests a single host. The deploy: section only works with Docker's Swarm cluster manager. If you're not using Swarm then you need to use a version: '2' docker-compose.yml file, which has a different set of resource constraint declarations (and mostly doesn't have the concept of "reservations"). For example,
version: '2'
services:
  redis:
    image: redis:alpine
    # cpus: 2.0
    mem_limit: 256m
    mem_reservation: 64m
I am running a service in Docker Swarm on a single machine. This is what I did to deploy the service:
docker swarm init
docker stack deploy -c docker-compose.yml MyApplication
Content of docker-compose.yml:
version: "3"
services:
  web:
    image: myimage:1.0
    ports:
      - "9000:80"
      - "9001:443"
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.5"
          memory: 256M
      restart_policy:
        condition: on-failure
Is Docker Swarm able to increase number of replicas automatically based on current traffic? If yes, how to configure it to do so? If no, how can I achieve it, maybe use Kubernetes?
Based on the CPU utilization of Pods, it is possible to autoscale Deployments. You need to use the kubectl autoscale command, which creates a HorizontalPodAutoscaler object that targets a specified resource and scales it as needed. The HPA periodically adjusts the number of replicas of the scale target to match the average CPU utilization that you specify.
When using kubectl autoscale, you need to specify a maximum and minimum number of replicas for your application, as well as a CPU utilization target.
For example, to set the maximum number of replicas to five and the minimum to two, with a CPU utilization target of 60%, run the following command:
$ kubectl autoscale deployment my-app --max 5 --min 2 --cpu-percent 60
You can find more about it in the documentation and in the following article. I hope it helps.
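For reference, the declarative equivalent of that kubectl autoscale command is a HorizontalPodAutoscaler manifest along these lines (the Deployment name my-app is taken from the example above):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:          # which object to scale
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 60
```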
I am running a service on Docker Swarm. This is what I did to deploy the service:
docker swarm init
docker stack deploy -c docker-compose.yml MyApplication
Content of docker-compose.yml:
version: "3"
services:
  web:
    image: myimage:1.0
    ports:
      - "9000:80"
      - "9001:443"
    deploy:
      replicas: 3
      resources:
        limits:
          cpus: "0.5"
          memory: 256M
      restart_policy:
        condition: on-failure
Let's say that I update the application and build a new image myimage:2.0. What is a proper way to deploy the new version of the image to the service without downtime?
A way to achieve this is:
provide a healthcheck. That way Docker will know whether your new deployment has succeeded.
https://docs.docker.com/engine/reference/builder/#healthcheck
https://docs.docker.com/compose/compose-file/#healthcheck
control how docker will update your service with update_config
https://docs.docker.com/compose/compose-file/#update_config
pay attention to order and parallelism; for example, if you choose order: stop-first + parallelism: 2 and your replica count equals the parallelism, your app will stop completely while updating
if your update doesn't succeed you probably want to rollback
https://docs.docker.com/compose/compose-file/#rollback_config
don't forget the restart_policy too
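Putting those pieces together, a sketch of the relevant sections might look like this (the values and the curl-based healthcheck are illustrative assumptions, not recommendations; the healthcheck assumes curl exists in the image):

```yaml
version: "3.7"
services:
  web:
    image: myimage:2.0
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]  # assumes curl in the image
      interval: 10s
      timeout: 5s
      retries: 3
    deploy:
      replicas: 3
      update_config:
        parallelism: 1          # keep below the replica count with stop-first
        order: start-first      # start the new task before stopping the old one
        failure_action: rollback
        delay: 10s
      rollback_config:
        parallelism: 1
        order: stop-first
      restart_policy:
        condition: on-failure
```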
I have some examples on that subject:
Docker Swarm Mode Replicated Example with Flask and Caddy
https://github.com/douglasmiranda/lab/tree/master/caddy-healthcheck-of-caddy-itself
With this in place you can simply run docker stack deploy ... again. If there were changes in the service, it will be updated.
You can also use the command docker service update --image, but it will start a new container with an implicit scale of 0/1.
How much downtime you see depends on your application.