Running a container in non-swarm mode. docker-compose.yml is configured this way:
version: '3.9'
services:
  backend:
    container_name: 'w_server'
    restart: always
    build: .
    mem_reservation: '30G'
    mem_limit: '40G'
    environment:
      NODE_USER: '[...]'
However, after successful building and starting the container, stats look like this:
docker stats --all --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d4426dd4e34d w_server 0.00% 573.7MiB / 23.55GiB 2.38% 225MB / 8.47MB 0B / 16MB 18
I learned about deploy.resources.reservations and deploy.resources.limits, but those only work in swarm mode: a warning is displayed when building such a configuration, and of course the settings aren't taken into account when the container runs.
Is there any other way to assign memory resources?
Docker and docker-compose versions are:
docker-compose version 1.28.5, build c4eb3a1f
Docker version 18.09.7, build 2d0083d
Edit:
Found this question, and its answers suggest that mem_reservation and mem_limit are available in docker-compose.yml with version 2.x; version 3.x doesn't support them.
However, changing just the version to 2.4 gave exactly the same result: the limit reported by docker stats was unchanged and not read from the configuration file.
You should define your limits for v3 like this:
version: '3.9'
services:
  backend:
    container_name: 'w_server'
    restart: always
    build: .
    deploy:
      resources:
        limits:
          memory: 40g
    environment:
      NODE_USER: '[...]'
But you need to use the --compatibility flag for it to work, like:
docker-compose --compatibility up
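To verify that the limit was actually applied (w_server is the container_name from the question), you can read it back from the container's HostConfig; the values are reported in bytes:
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.MemoryReservation}}' w_server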
Related
I am trying to define a container in my docker-compose.yml file like so -
gitea:
  image: gitea/gitea:latest
  depends_on:
    - mariadb
  env_file:
    - gitea_env
  mem_limit: 100000000
  ports:
    - "127.0.0.1:4567:3000"
  volumes:
    - /var/lib/gitea:/data
However, once the container starts, I see with docker stats that the memory assigned to it is not limited to 100 MB. I am using version: '2' of the docker-compose YAML syntax, and the docker-compose version is 1.25.5.
Output of docker stats --all gitea shows -
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
dfed6585837c gitea 0.25% 150MiB / 982.8MiB 15.27% 251kB / 102kB 57.2MB / 69.6kB 12
Docker version (docker --version) is -
Docker version 19.03.8-ce, build afacb8b7f0
What is going wrong in my configuration?
Make sure you are recreating the container after changing the memory limit:
docker-compose down && docker-compose up
I’m running Sonarqube on Docker compose and my file looks like this:
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
- "5432:5432"
links:
- db:db
environment:
- SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
- SONARQUBE_JDBC_USERNAME=postgres
- SONARQUBE_JDBC_PASSWORD=sonar
volumes:
- ..../Work/tools/_SonarQube_home/conf:/opt/sonarqube/conf
# - sonarqube_data:/opt/sonarqube_new/data
- ...../Work/tools/_SonarQube_home/data:/opt/sonarqube/data
- ....../Work/tools/_SonarQube_home/extensions:/opt/sonarqube/extensions
- ..../Work/tools/_SonarQube_home/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=sonar
- POSTGRES_DB=sonar
volumes:
- .../Work/tools/_PostgreSQL_data:/var/lib/postgresql
# This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
- ..../Work/tools/_PostgreSQL_data/data:/var/lib/postgresql/data
Everything works, which is great. At one point I noticed the SonarQube instance had started to act slowly, so I checked docker stats. It looks like this:
| CPU   | Mem Usage / Limit |
|-------|-------------------|
| 5.39% | 1.6GiB / 1.952GiB |
How do I define more RAM for the server, let's say 4 GB? Previously it was mem_limit, but that doesn't exist in version 3.
What would be a good solution for that?
Thanks!
If you are deploying to Swarm, then you can use the resources keyword in your Compose file. (it's described under Resources in the file reference https://docs.docker.com/compose/compose-file/)
So you can do something like this in Swarm:
version: "3.7"
services:
redis:
image: redis:alpine
deploy:
resources:
limits:
cpus: '0.50'
memory: 50M
reservations:
cpus: '0.25'
memory: 20M
If you are using Compose, then you have the option to go back to Compose file version 2.0, as described in the Compose file reference by Docker.
Looking for options to set resources on non swarm mode containers?
The options described here are specific to the deploy key and swarm mode. If you want to set resource constraints on non swarm deployments, use Compose file format version 2 CPU, memory, and other resource options. If you have further questions, refer to the discussion on the GitHub issue docker/compose/4513.
I'm not familiar with the Sonarqube memory issue, but you may want to have a look at this: https://docs.sonarqube.org/display/SONARqube71/Java+Process+Memory.
In Compose file version 3, resource limits moved under a deploy: {resources: ...} key, but are also only documented to work in Swarm mode. So to actually set them you need to switch to a mostly-compatible version 2 Compose file.
version: '2'
services:
  sonarqube:
    mem_limit: 4g
The default is for the container to be able to use an unlimited amount of memory. If you're running in an environment where Docker is inside a Linux VM (anything based on Docker Toolbox or Docker Machine, or Docker for Mac), it's limited by the memory size of the VM.
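To see how much memory the Docker engine (and therefore the VM, if there is one) actually has available, docker info reports the total in bytes; the 1.952GiB limit shown by docker stats above is simply this total:
docker info --format '{{.MemTotal}}'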
I want a policy in my docker-compose file so that when a container's memory usage goes above a certain limit, the container restarts.
This is what I have done so far:
version: '3'
services:
  modbus_collector:
    build: .
    image: modbus_collector:2.0.0
    container_name: modbus_collector
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 28M
I was expecting the container to be restarted when its memory usage exceeds 28M, but when I monitor the containers with docker stats, I see this container's memory usage keep growing and no restart ever happens!
I also tried restart: always, but the result was the same.
[UPDATE]:
With version 2 it works fine with mem_limit:. But it fails when using version 3; putting the limits under the deploy section doesn't seem to have any effect unless I am using swarm mode.
Even on version 2.1 I have a problem with restarting the container: the limitation is applied correctly, but when the container's memory usage grows, the limit merely caps it. I expected the container to be restarted instead of just having its memory held down.
version: '2.1'
services:
  modbus_collector:
    build: .
    image: modbus_collector:2.0.0
    container_name: modbus_collector
    restart: unless-stopped
    mem_limit: 28m
I am trying to set memory and CPU limits in a docker-compose file.
I get the error below:
The Compose file './docker-compose.yml' is invalid because:
Unsupported config option for services.web: 'resources'
My docker-compose.yml file is below
version: '3'
services:
  web:
    build: .
    volumes:
      - "./app:/home"
    ports:
      - "8080:8080"
    resources:
      limits:
        cpus: '0.001'
        memory: 512M
How can I use CPU, memory with in docker-compose?
The resources option was introduced in Compose file format version 3, which needs docker-compose 1.13 or newer. Chances are you are using an older version. Check the output of:
docker-compose version
See the upgrade guide.
The OP uses docker-compose 1.12, which does not yet support version 3.
Solved: I used version: '2' instead of version: '3' in the docker-compose file, and mem_limit instead of resources.
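For reference, a sketch of the file above rewritten that way (the 512M and cpu_shares values are illustrative; note that cpu_shares is a relative weight, not a hard cap like cpus):
version: '2'
services:
  web:
    build: .
    volumes:
      - "./app:/home"
    ports:
      - "8080:8080"
    mem_limit: 512M
    cpu_shares: 512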
If you are using Docker Compose v3 configs and running them with docker-compose up, then this can be useful.
This is not documented anywhere in docker-compose, but you can pass any valid setrlimit system-call option in ulimits.
So, you can specify this in docker-compose.yaml:
ulimits:
  as:
    hard: 130000000
    soft: 100000000
The memory size is in bytes. After going over this limit your process will get memory allocation exceptions, which you may or may not trap.
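For context, a minimal sketch of a service using this (the image and command are just placeholders); as maps to the kernel's RLIMIT_AS address-space limit set via setrlimit:
version: '3'
services:
  app:
    image: alpine
    command: ["sleep", "3600"]
    ulimits:
      as:
        hard: 130000000
        soft: 100000000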
I am unable to specify CPU and memory limitation for services specified in version 3.
With version 2 it works fine with the mem_limit & cpu_shares parameters under the services. But it fails when using version 3; putting them under the deploy section doesn't seem to have any effect unless I am using swarm mode.
Can somebody help?
version: "3"
services:
node:
build:
context: .
dockerfile: ./docker-build/Dockerfile.node
restart: always
environment:
- VIRTUAL_HOST=localhost
volumes:
- logs:/app/out/
expose:
- 8083
command: ["npm","start"]
cap_drop:
- NET_ADMIN
- SYS_ADMIN
I know the topic is a bit old and seems stale, but anyway I was able to use these options:
deploy:
  resources:
    limits:
      cpus: '0.001'
      memory: 50M
when using Compose file format version 3.7.
What helped in my case, was using this command:
docker-compose --compatibility up
The --compatibility flag stands for (taken from the documentation):
If set, Compose will attempt to convert deploy keys in v3 files to their non-Swarm equivalent
I think it's great that I don't have to revert my docker-compose file back to v2.
deploy:
  resources:
    limits:
      cpus: '0.001'
      memory: 50M
    reservations:
      cpus: '0.0001'
      memory: 20M
More: https://docs.docker.com/compose/compose-file/compose-file-v3/#resources
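If you want to see what the flag does to your file, you can render the converted configuration; the deploy limits should come back translated into the old v2-style options (mem_limit, cpus, and so on):
docker-compose --compatibility config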
In your specific case:
version: "3"
services:
node:
image: USER/Your-Pre-Built-Image
environment:
- VIRTUAL_HOST=localhost
volumes:
- logs:/app/out/
command: ["npm","start"]
cap_drop:
- NET_ADMIN
- SYS_ADMIN
deploy:
resources:
limits:
cpus: '0.001'
memory: 50M
reservations:
cpus: '0.0001'
memory: 20M
volumes:
- logs
networks:
default:
driver: overlay
Note:
expose is not necessary; the port is reachable by default on your stack network.
Images have to be pre-built; build is not supported within v3 stack deploys.
restart is also deprecated; you can use restart_policy under deploy with an on-failure condition instead.
You can use a standalone single-node "swarm"; most (if not all) v3 improvements are for swarm.
Also note:
Networks in Swarm mode do not bridge. If you would like to connect internally only, you have to attach to the network. You can either reference an external network from another compose file, or create the network with the --attachable flag (docker network create -d overlay My-Network --attachable), as sketched after the ports example below.
Otherwise you have to publish the port like this:
ports:
  - 80:80
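For the attachable overlay network option mentioned in the note above, a minimal sketch (the network and image names are placeholders), assuming the network was created beforehand with docker network create -d overlay My-Network --attachable:
version: "3"
services:
  node:
    image: USER/Your-Pre-Built-Image
    networks:
      - My-Network
networks:
  My-Network:
    external: true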
Docker Compose v1 does not support the deploy key. It's only respected when you use your version 3 YAML file in a Docker Stack.
This message is printed when you add the deploy key to your docker-compose.yml file and then run docker-compose up -d:
WARNING: Some services (database) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
The documentation (https://docs.docker.com/compose/compose-file/#deploy) says:
Specify configuration related to the deployment and running of services. This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
Nevertheless, you can use Docker Compose v2. Given the following Docker composition, you can use the deploy key to limit your containers' resources.
version: "3.9"
services:
database:
image: mariadb:10.10.2-jammy
container_name: mydb
environment:
MYSQL_ROOT_PASSWORD: root_secret
MYSQL_DATABASE: mydb
MYSQL_USER: myuser
MYSQL_PASSWORD: secret
TZ: "Europe/Zurich"
MARIADB_AUTO_UPGRADE: "true"
tmpfs:
- /var/lib/mysql:rw
ports:
- "127.0.0.1:3306:3306"
deploy:
resources:
limits:
cpus: "4.0"
memory: 200M
networks:
- mynetwork
When you run docker compose up -d (note: in version 2 of Docker Compose you call the docker binary and not the docker-compose Python application) and then inspect the resources, you see that the memory is limited to 200 MB. The CPU limit is not exposed by docker stats.
❯ docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
2c71fb8de607 mydb 0.04% 198MiB / 200MiB 99.02% 2.67MB / 3.77MB 70.6MB / 156MB 18
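Since docker stats doesn't show the CPU limit, you can cross-check both limits with docker inspect (mydb is the container_name from the file above); Memory is reported in bytes and NanoCpus in billionths of a CPU, so 200M and 4 CPUs should come out as roughly 209715200 and 4000000000:
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}' mydb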
This is possible with Compose file version >= 3.8. Here is an example using docker-compose >= 1.28.x:
version: '3.9'
services:
  app:
    image: nginx
    cpus: "0.5"
    mem_reservation: "10M"
    mem_limit: "250M"
Proof of it working can be seen in the MEM USAGE column of docker stats.
The expected behavior when reaching the memory limit is that the container gets killed. In that case, either set restart: always or adjust your app code.
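One way to confirm that a container was stopped by the memory limit rather than by its own error handling is the OOMKilled flag (the container name is a placeholder):
docker inspect --format '{{.State.OOMKilled}}' <container-name>
With restart: always set, Docker will then start a fresh container after the kill.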
Limit and restart settings in Docker Compose v3 should now be set as follows (restart: always is also deprecated):
deploy:
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 3
    window: 120s
  resources:
    limits:
      cpus: '0.50'
      memory: 50M
    reservations:
      cpus: '0.25'
      memory: 20M
I have a different experience; maybe somebody can explain this.
Maybe this is a bug (I think it is a feature), but I am able to use deploy limits (memory limits) in docker-compose without swarm; however, CPU limits don't work, while replication does.
$> docker-compose --version
docker-compose version 1.29.2
$> docker --version
Docker version 20.10.12
version: '3.2'
services:
  limits-test:
    image: alexeiled/stress-ng
    command: [
      '--vm', '1', '--vm-bytes', '20%', '--vm-method', 'all', '--verify', '-t', '10m', '-v'
    ]
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 1024M
Docker stats
b647e0dad247 dc-limits_limits-test_1 0.01% 547.1MiB / 1GiB 53.43% 942B / 0B 0B / 0B 3
Edited, thanks @Jimmix
I think there is confusion here over using docker-compose and docker compose (with a space). You can install the compose plugin using https://docs.docker.com/compose/install if you don't already have it.
Here is an example compose file just running Elasticsearch
version: "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
restart: always
ports:
- "9222:9200"
deploy:
resources:
limits:
cpus: "4"
memory: "2g"
environment:
- "node.name=elasticsearch"
- "bootstrap.memory_lock=true"
- "discovery.type=single-node"
- "xpack.security.enabled=false"
- "ingest.geoip.downloader.enabled=false"
I have it in a directory called estest; the file is called es-compose.yaml. The file sets CPU and memory limits.
If you launch using docker-compose e.g.
docker-compose -f es-compose.yaml up
Then look at docker stats and you see:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
e3b6253ee730 estest_elasticsearch_1 342.13% 32.39GiB / 62.49GiB 51.83% 7.7kB / 0B 27.3MB / 381kB 46
So the CPU and memory resource limits are ignored. During the launch you see the warning:
WARNING: Some services (elasticsearch) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
Which I think is what leads people to look at Docker stack/swarm. However, if you just switch to the newer docker compose now built into the docker CLI (https://docs.docker.com/engine/reference/commandline/compose/), e.g.
docker compose -f es-compose.yaml up
And look again at docker stats, you see:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d062eda10ffe estest-elasticsearch-1 0.41% 1.383GiB / 2GiB 69.17% 8.6kB / 0B 369MB / 44MB 6
Therefore the limits have been applied.
This is better, in my opinion, than swarm, as it still allows you to build containers as part of the compose project and pass environment variables easily via a file. I would recommend removing docker-compose and switching over to the newer docker compose wherever possible.
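To check which variant you have on a machine, both report a version:
docker compose version
docker-compose version
If the first command fails with an error along the lines of "'compose' is not a docker command", the compose plugin is not installed yet.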