How to limit memory usage in docker-compose? - docker

Here is part of my docker-compose.yaml file
version: '3.4'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    working_dir: /app
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 23M
I start it with docker-compose up -d.
When I run docker stats, it says the limit is still 1.9GiB. What am I doing wrong?
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM %
13b6588evc1e app_1 1.86% 20.45MiB / 1.952GiB 1.02%

The deploy key only takes effect in swarm mode, and only with Compose file version 3 and above.
In your case, use Compose file version 2 and define the resource limits directly on the service:
version: "2.2"
services:
  app:
    image: foo
    cpus: "0.5"
    mem_limit: 23m
See official docs here

Are you running docker-compose in swarm mode? If not, it is recommended to use the 2.x version of the Compose file format.
The 3.x format requires swarm mode for the new set of resource directives to take effect.
The 2.x alternatives are cpu_shares, cpu_quota, cpuset, mem_limit, memswap_limit, and mem_swappiness.
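For reference, a 2.x service combining several of these directives might look like this (a sketch; the image name and values are illustrative, not from the question):

```yaml
# Sketch: per-container resource directives in the 2.x format
version: '2.2'
services:
  app:
    image: foo            # illustrative image name
    cpu_shares: 512       # relative CPU weight vs. other containers
    cpu_quota: 50000      # hard cap: 50% of one CPU per 100ms period
    cpuset: "0,1"         # pin to CPUs 0 and 1
    mem_limit: 256m       # hard memory limit
    memswap_limit: 512m   # memory + swap limit
    mem_swappiness: 0     # discourage swapping container memory
```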

If you do not intend to use docker swarm stack deployments, always stick to the latest 2.x version supported by the Docker Engine version you operate. Docker 17.12 and later support Compose file version 2.4. docker-compose supports all the features the CLI provides, while swarm still lacks some of them: see https://github.com/moby/moby/issues/25303.
If you use docker-compose, all swarm-related elements in a 3.x file will be ignored, except secrets (or was it configs?!). If you start to mix 3.x-only elements with 2.x-only elements, your configuration will become invalid.

Related

docker-compose version 3 doesn't recognize runtime

I ran into the following issue: the compose file
version: '3'
services:
  minkowski:
    build:
      context: .
      dockerfile: DockerfileGPU
    volumes:
      - "../:/app:rw"
      - "${DATA_PATH}:/app/data:rw"
    working_dir: /app
    tty: true
    stdin_open: true
    network_mode: "host"
    runtime: nvidia
results in
ERROR: The Compose file './docker/compose-gpu.yaml' is invalid because:
services.minkowski.build contains unsupported option: 'runtime'
I have docker version 20.10.21 and docker-compose 1.25.0. Do you have any idea why that happens?
I tried using different versions. Running
sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi
works fine
The runtime: option isn't supported in Compose file version 3; it is only supported in version 2. More broadly, current versions of Compose support both Compose file versions 2 and 3, and it's okay to use either. Version 3's options are more oriented towards the Swarm orchestrator, and for some single-host-specific options like this you need to use version 2.
version: '2.4' # not 3.x
services:
  ...:
    runtime: nvidia
The newer Compose Specification also supports runtime:, but support for it is inconsistent across Compose versions; the Compose 1.25.0 you mention will not support it. The Specification doesn't suggest specific values for version:, and I might label a file as version: '4.0' if you're using Compose-Specification-specific functionality.
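Under a newer docker compose v2 that implements the Compose Specification, a file using runtime: might look like this (a sketch; the version: key is optional there and ignored if present):

```yaml
# Sketch for newer Compose (Compose Specification) releases
services:
  minkowski:
    build:
      context: .
      dockerfile: DockerfileGPU
    runtime: nvidia   # accepted by recent docker compose v2 releases
```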
I managed to fix the issue by installing the newer docker compose. With docker compose 2.x it works just fine, without further changes

docker-compose to K8S deployment

I've built an application through Docker with a docker-compose.yml file and I'm now trying to convert it into a deployment file for K8S.
I tried to use kompose convert command but it seems to work weirdly.
Here is my docker-compose.yml:
version: "3"
services:
  worker:
    build:
      dockerfile: ./worker/Dockerfile
    container_name: container_worker
    environment:
      - PYTHONUNBUFFERED=1
    volumes:
      - ./api:/app/
      - ./worker:/app2/
  api:
    build:
      dockerfile: ./api/Dockerfile
    container_name: container_api
    volumes:
      - ./api:/app/
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8050:8050"
    depends_on:
      - worker
Here is the output of the kompose convert command:
[root@user-cgb4-01-01 vm-tracer]# kompose convert
WARN Volume mount on the host "/home/david/vm-tracer/api" isn't supported - ignoring path on the host
WARN Volume mount on the host "/var/run/docker.sock" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/david/vm-tracer/api" isn't supported - ignoring path on the host
WARN Volume mount on the host "/home/david/vm-tracer/worker" isn't supported - ignoring path on the host
INFO Kubernetes file "api-service.yaml" created
INFO Kubernetes file "api-deployment.yaml" created
INFO Kubernetes file "api-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "api-claim1-persistentvolumeclaim.yaml" created
INFO Kubernetes file "worker-deployment.yaml" created
INFO Kubernetes file "worker-claim0-persistentvolumeclaim.yaml" created
INFO Kubernetes file "worker-claim1-persistentvolumeclaim.yaml" created
And it created 7 yaml files. But I expected to have only one deployment file. Also, I don't understand these warnings that I get. Is there a problem with my volumes?
Maybe it will be easier to convert the docker-compose to deployment.yml manually?
Thank you,
I'd recommend using Kompose as a starting point or inspiration more than an end-to-end solution. It does have some real limitations and it's hard to correct those without understanding Kubernetes's deployment model.
I would clean up your docker-compose.yml file before you start. You have volumes: that inject your source code into the containers, presumably hiding the application code in the image. This setup mostly doesn't work in Kubernetes (the cluster cannot reach back to your local system) and you need to delete these volumes: mounts. Doing that would get rid of both the Kompose warnings about unsupported host-path mounts and the PersistentVolumeClaim objects.
You also do not normally need to specify container_name: or several other networking-related options. Kubernetes does not support multiple networks and so if you have any networks: settings they will be ignored, but most practical Compose files don't need them either. The obsolete links: and expose: options, if you have them, can also usually be safely deleted with no consequences.
version: "3.8"
services:
  worker:
    build:
      dockerfile: ./worker/Dockerfile
    environment:
      - PYTHONUNBUFFERED=1
  api:
    build:
      dockerfile: ./api/Dockerfile
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8050:8050"
    depends_on:   # won't have an effect in Kubernetes,
      - worker    # but still good Docker Compose practice
The bind-mount of the Docker socket is a larger problem. This socket usually doesn't exist in Kubernetes, and if it does exist, it's frequently inaccessible (there are major security concerns around having it available, since it would allow you to launch unmanaged containers as well as root the node). If you need to dynamically launch containers, you'd need to use the Kubernetes API to do that instead (look at creating one-off Jobs). For many practical purposes, having a long-running worker container attached to a queueing system like RabbitMQ is a better approach. Kompose can't fix this architectural problem, though; you will have to modify your code.
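For reference, a minimal one-off Job manifest looks something like this (a sketch; the name and image are hypothetical):

```yaml
# Sketch: a one-off Kubernetes Job as an alternative to launching
# containers through the Docker socket
apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-task       # hypothetical name
spec:
  template:
    spec:
      containers:
        - name: task
          image: myapp-task:latest   # hypothetical image
      restartPolicy: Never
  backoffLimit: 2          # retry the pod up to twice on failure
```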
When all of this is done, I'd expect Kompose to create four files, with one Kubernetes YAML manifest in each: two Deployments, and two matching Services. Each of your Docker Compose services: would get translated into a separate Kubernetes Deployment, and you need a paired Kubernetes Service to be able to connect to it (even from within the cluster). There are a number of related objects that are often useful (ServiceAccounts, PodDisruptionBudgets, HorizontalPodAutoscalers) and a typical Kubernetes practice is to put each in its own file.
I guess this is fine:
All your Docker exposed ports are now Kubernetes Services.
Your volumes need a PV and a PVC; those are generated.
There is a Deployment YAML for your API and WORKER services.
This is how it should usually be.
However, if deploying all these files is confusing, try:
kubectl apply -f mymanifests/*.yaml - this will deploy all of them at once.
Or, if you just want a single file, you can concatenate all these files with --- one after another; --- separates multiple manifests while still keeping them in a single file. Something like:
apiVersion.... deploymentfile....
---
apiVersion.... servicefile...... and so on...
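A concatenated file is just multiple YAML documents separated by --- lines; a sketch with hypothetical names:

```yaml
# Sketch: combined.yaml - a Deployment and its Service in one file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: container_api:latest   # hypothetical image
          ports:
            - containerPort: 8050
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api
  ports:
    - port: 8050
      targetPort: 8050
```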

docker-compose scale with different cpuset

How can I scale a service but apply a different cpuset on each instance with docker-compose ?
For example: I have 4 cpus, I want 4 instances, each using 1 unique cpu.
What version of docker-compose are you using? I'm asking because accomplishing what you want is only possible with the Compose file format v2.x or docker swarm, as you can see below.
You can find more info here in the Docker docs.
Supposing that you are using Compose file format 2.4, you can define a service like this in your docker-compose.yaml:
version: '2.4'
services:
  redis:
    image: redis:1.0.0
    restart: always
    environment:
      - REDIS_PASSWORD=1234
    cpu_count: 1
    mem_limit: 200m
Where cpu_count is the number of CPU cores you want the service to use, and mem_limit is the limit of memory that your service can consume.
To define the number of replicas you must run:
docker-compose up --scale redis=2
Where redis is the name of the service in the docker-compose file and 2 is the number of replicas you want. Both containers will then spin up with 1 CPU core and 200m of memory.
To check the container resources consumption you can run docker stats
Source:
https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources
https://docs.docker.com/compose/compose-file/compose-file-v2/#cpu-and-other-resources
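Note that --scale gives every replica identical constraints. If each instance must be pinned to a different CPU, as the question asks, one workaround (a sketch of my own, not from the docs above) is to declare one service per core using the 2.x cpuset option:

```yaml
# Sketch: four services, each pinned to one distinct CPU via cpuset
# (the image name is hypothetical)
version: '2.4'
services:
  worker0:
    image: myapp:latest
    cpuset: "0"
  worker1:
    image: myapp:latest
    cpuset: "1"
  worker2:
    image: myapp:latest
    cpuset: "2"
  worker3:
    image: myapp:latest
    cpuset: "3"
```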

Increase memory of Docker container with docker-compose on Windows?

On Docker for Windows, I have a simple SQL Server container based on microsoft/mssql-server-windows-developer that is launched with docker-compose up via a simple docker-compose.yaml file.
Is there a way to allocate more than 1GB of memory to this container? I can do it when running the image directly or when I build my image with -m 4GB, but I can't figure out how to do this when using Docker Compose. This container needs more than 1GB of RAM to run properly and all of my research has revealed nothing helpful thus far.
I've looked into the resources configuration option, but that only applies when running under Docker Swarm, which I don't need.
In Compose file version 2.x you can use the mem_limit option as below:
version: '2.4'
services:
  my-svc:
    image: microsoft/mssql-server-windows-developer
    mem_limit: 4G
In Compose file version 3 it is replaced by the resources option, which requires docker swarm:
version: '3'
services:
  my-svc:
    image: microsoft/mssql-server-windows-developer
    deploy:
      resources:
        limits:
          memory: 4G
There is a compatibility flag that translates the deploy section into equivalent version 2 parameters when running docker-compose --compatibility up. However, this is not recommended for production deployments.
From the documentation:
docker-compose 1.20.0 introduces a new --compatibility flag designed to help developers transition to version 3 more easily. When enabled, docker-compose reads the deploy section of each service's definition and attempts to translate it into the equivalent version 2 parameter. Currently, the following deploy keys are translated:
resources (limits and memory reservations)
replicas
restart_policy (condition and max_attempts)
All other keys are ignored and produce a warning if present. You can review the configuration that will be used to deploy by using the --compatibility flag with the config command.
We recommend against using --compatibility mode in production. Because the resulting configuration is only an approximation using non-Swarm mode properties, it may produce unexpected results.
Looking for options to set resources on non swarm mode containers?
The options described here are specific to the deploy key and swarm mode. If you want to set resource constraints on non swarm deployments, use Compose file format version 2 CPU, memory, and other resource options. If you have further questions, refer to the discussion on the GitHub issue docker/compose/4513.
You can use the docker-compose file on version 2 instead of version 3. You can use mem_limit (available on version 2) to set the memory limit. So you can use a docker-compose file like this:
version: "2.4"
services:
  sql-server:
    image: microsoft/mssql-server-windows-developer
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=t3st&Pa55word
    mem_limit: 4GB
You can check the memory limit using docker stats.
I was also trying to set this up via docker-compose. I had a hard time figuring out why SQL Server worked on a new machine but no longer on my older one. Finally I recalled I had turned down the amount of memory Docker Desktop is allowed to allocate. You find it through the settings button, under Resources/Advanced. Setting Memory to 2GB resolved the issue for me.

use nvidia-docker from docker-compose

I would like to run 2 docker images with docker-compose.
One image should run with nvidia-docker and the other with plain docker.
I've seen this post: use nvidia-docker-compose launch a container, but exited soon
but this is not working for me (not even when running only one image)...
Any idea would be great.
UPDATE: please check nvidia-docker 2 and its docker-compose support first
https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#do-you-support-docker-compose
(I'd first suggest adding the nvidia-docker tag).
If you look at the nvidia-docker-compose code here, it only generates a specific docker-compose file after querying the nvidia configuration on localhost:3476.
You can also write this docker-compose file by hand, as they turn out to be quite simple. Follow this example, replace 375.66 with your nvidia driver version, and put as many /dev/nvidia[n] lines as you have graphics cards (I did not try to put services on separate GPUs, but go for it!):
services:
  exampleservice0:
    devices:
      - /dev/nvidia0
      - /dev/nvidia1
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia-uvm-tools
    environment:
      - EXAMPLE_ENV_VARIABLE=example
    image: company/image
    volumes:
      - ./disk:/disk
      - nvidia_driver_375.66:/usr/local/nvidia:ro
version: '2'
volumes:
  media: null
  nvidia_driver_375.66:
    external: true
Then just run this hand-made docker-compose file with a classic docker-compose command.
You can then compose with non-nvidia containers by skipping the nvidia-specific settings in the other services.
In addition to the accepted answer, here's my approach, a bit shorter.
I needed to use the old version of the Compose file format (2.3) because of the required runtime: nvidia (it won't necessarily work with version: 3 - see this). Setting NVIDIA_VISIBLE_DEVICES=all will make all the GPUs visible.
version: '2.3'
services:
  your-service-name:
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    # ...your stuff
My example is available here.
Tested on NVIDIA Docker 2.5.0, Docker CE 19.03.13 and NVIDIA-SMI 418.152.00 and CUDA 10.1 on Debian 10.
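As an aside, Compose releases newer than the ones tested above also support requesting GPUs through device reservations; a sketch (assumes a recent docker compose with the NVIDIA Container Toolkit installed, not the versions listed above):

```yaml
# Sketch: GPU access via device reservations (newer Compose releases)
services:
  gpu-service:
    image: nvidia/cuda:11.0.3-base-ubuntu20.04
    command: nvidia-smi
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1              # or `count: all`
              capabilities: [gpu]
```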
