Memory resources for a docker container running from docker-compose

I’m running Sonarqube on Docker compose and my file looks like this:
version: "3"
services:
sonarqube:
image: sonarqube
ports:
- "9000:9000"
- "5432:5432"
links:
- db:db
environment:
- SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
- SONARQUBE_JDBC_USERNAME=postgres
- SONARQUBE_JDBC_PASSWORD=sonar
volumes:
- ..../Work/tools/_SonarQube_home/conf:/opt/sonarqube/conf
# - sonarqube_data:/opt/sonarqube_new/data
- ...../Work/tools/_SonarQube_home/data:/opt/sonarqube/data
- ....../Work/tools/_SonarQube_home/extensions:/opt/sonarqube/extensions
- ..../Work/tools/_SonarQube_home/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
db:
image: postgres
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=sonar
- POSTGRES_DB=sonar
volumes:
- .../Work/tools/_PostgreSQL_data:/var/lib/postgresql
# This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
- ..../Work/tools/_PostgreSQL_data/data:/var/lib/postgresql/data
Everything works and that's great. At one point I noticed the SonarQube instance had started to respond slowly, so I checked docker stats. It looks like this:
| CPU % | Mem Usage / Limit |
|-------|-------------------|
| 5.39% | 1.6GiB / 1.952GiB |
How do I give the server more RAM, say 4 GB? Previously this was done with mem_limit, but that option no longer exists in version 3.
What would be a good solution for that?
Thanks!

If you are deploying to Swarm, then you can use the resources keyword in your Compose file. (it's described under Resources in the file reference https://docs.docker.com/compose/compose-file/)
So you can do something like this in Swarm:
version: "3.7"
services:
redis:
image: redis:alpine
deploy:
resources:
limits:
cpus: '0.50'
memory: 50M
reservations:
cpus: '0.25'
memory: 20M
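Note that these deploy limits only take effect when the file is deployed as a stack; a plain docker-compose up ignores them. A minimal sketch (the stack name mystack is just an example):

docker swarm init   # only needed if this node isn't already a swarm manager
docker stack deploy -c docker-compose.yml mystack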
If you are using Compose, then you have the option to go back to Compose file version 2.0, as described in the Compose file reference by Docker.
Looking for options to set resources on non swarm mode containers?
The options described here are specific to the deploy key and swarm mode. If you want to set resource constraints on non swarm deployments, use Compose file format version 2 (CPU, memory, and other resource options). If you have further questions, refer to the discussion on the GitHub issue docker/compose#4513.
I'm not familiar with SonarQube's memory issues, but you may want to have a look at this: https://docs.sonarqube.org/display/SONARqube71/Java+Process+Memory.
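As a side note on the SonarQube side: more container memory alone may not help if the JVMs inside are capped. Newer sonarqube images (roughly 8.x and later) document environment variables for the individual JVM options; a hedged sketch with example values only (check your image's documentation for the exact variable names):

sonarqube:
  environment:
    - SONAR_WEB_JAVAOPTS=-Xmx512m    # web server JVM
    - SONAR_CE_JAVAOPTS=-Xmx1g       # compute engine JVM
    - SONAR_SEARCH_JAVAOPTS=-Xmx1g   # embedded Elasticsearch JVM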

In Compose file version 3, resource limits moved under a deploy: {resources: ...} key, but they are also documented to work only in Swarm mode. So to actually set them you need to switch to a mostly-compatible version 2 Compose file.
version: '2'
services:
  sonarqube:
    mem_limit: 4g
The default is for the container to be able to use an unlimited amount of memory. If you're running in an environment where Docker is inside a Linux VM (anything based on Docker Toolbox or Docker Machine, or Docker Desktop for Mac), it's limited by the memory size of the VM.
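To confirm what limit (if any) is actually in effect, docker inspect exposes it directly; 0 means unlimited. The container name below is the default Compose name and may differ in your setup:

docker inspect -f '{{.HostConfig.Memory}}' sonarqube_sonarqube_1
# 0           -> no limit set
# 4294967296  -> 4g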

Related

docker-compose ignores memory reservation and memory limit

I'm running a container in non-swarm mode. docker-compose.yml is configured this way:
version: '3.9'
services:
  backend:
    container_name: 'w_server'
    restart: always
    build: .
    mem_reservation: '30G'
    mem_limit: '40G'
    environment:
      NODE_USER: '[...]'
However, after the container builds and starts successfully, the stats look like this:
docker stats --all --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d4426dd4e34d w_server 0.00% 573.7MiB / 23.55GiB 2.38% 225MB / 8.47MB 0B / 16MB 18
I learned about deploy.resources.reservations and deploy.resources.limits, but they only work in swarm mode: a warning is displayed when building such a configuration (and of course the settings aren't taken into account when the build is processed).
Is there any other way to assign memory resources?
Docker and docker-compose versions are:
docker-compose version 1.28.5, build c4eb3a1f
Docker version 18.09.7, build 2d0083d
Edit:
I found this question, and the answers suggest that mem_reservation and mem_limit are available in docker-compose.yml in version 2.x, while version 3.x doesn't support them.
However, changing just the version to 2.4 gave exactly the same results: the limit reported by docker stats was unchanged, not read from the configuration file.
You should define your limits for v3 like this:
version: '3.9'
services:
  backend:
    container_name: 'w_server'
    restart: always
    build: .
    deploy:
      resources:
        limits:
          memory: 40g
    environment:
      NODE_USER: '[...]'
But you need to use the --compatibility flag for it to work, like this:
docker compose --compatibility up
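You can also preview what the flag does without starting anything: docker-compose --compatibility config renders the file with the deploy limits translated back into the old v2-style options (the exact output shape may vary by docker-compose version):

docker-compose --compatibility config
# the rendered backend service should now carry the 40g memory limit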

Changing shared memory size in docker compose

I'm currently having some issues with the shared memory in one of my containers.
I have a docker-compose file where I expect to be able to set the size. I basically converted an old docker run command that had --shm-size 16gb, so I would guess it's as easy as adding shm_size: 16gb to my service in the compose file.
Adding it just gives me the message: Ignoring unsupported options: shm_size.
I did check the docs, but they didn't really help me.
Just to clarify, it's not in the build but really for the running state.
Has anyone had this issue / does anyone know how to solve it?
Setup:
docker swarm with 7 nodes
Service should run on just a single node
Only running stacks
64 GB RAM host
32 GB shm (host)
Docker version 18.09.7, build 2d0083d
Using v 3.7 in my compose file
Compose file:
version: "3.7"
services:
server:
shm_size: 16GB # <<<<<<< This fails
image: local_repo/my_app:v1-dev
command: run
environment:
- UPDATES=enabled
volumes:
- type: volume
source: data
target: /var/lib/my_app/
- type: volume
source: db
target: /var/lib/postgresql/10/main
networks:
- xxx_traefik
deploy:
mode: replicated
labels:
- traefik.docker.network=xxx_traefik
- traefik.enable=true
- traefik.port=80
- traefik.frontend.rule=Host:my_container.xxx.com
- traefik.backend.loadbalancer.stickiness=true
- traefik.protocol=http
replicas: 1
placement:
constraints:
- node.hostname==node2
volumes:
db:
external: true
data:
external: true
networks:
xxx_traefik:
external: true
# shm_size: 16GB <<<<<<< Also tried to put it here since documentation doesn't show indents
Any help is appreciated:)
It should go below the service; I can verify it, but here is what the official documentation says:
shm_size
Added in version 3.5 file format.
Set the size of the /dev/shm partition for this build's containers. Specify as an integer value representing the number of bytes or as a string expressing a byte value.
build:
  context: .
  shm_size: '2gb'
(See the Compose file reference on shm_size.)
Here is a test:
version: '3.7'
services:
  your_service:
    image: alpine
    command: ash -c "sleep 2 && df /dev/shm"
    shm_size: 2gb
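For a running container you can also read back what was actually applied; HostConfig.ShmSize is reported in bytes (the container name is a placeholder):

docker inspect -f '{{.HostConfig.ShmSize}}' <container>
# 2147483648  -> 2gb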

Docker-compose: how to do version 2 "mem_limit" in version 3?

Recently, I tried upgrading a version 2 docker-compose YAML file to version 3. Specifically, I was going from 2.1 to 3.4, using docker-compose version 1.18.0 and docker version 18.06.01.
The first attempt caused docker-compose to abort because of the presence of the version 2 option mem_limit. Reading these version 3 docs, it clearly states mem_limit was removed and to see "upgrading" to guide usage away from this option. Those instructions tell you to use the deploy section with resources. After making these changes to the docker-compose.yml file, the system started normally.
Unfortunately, I missed the disclaimer in there which states that deploy is ignored by docker-compose! My question: is there a way to use Compose file reference 3 and docker-compose while still enforcing a container memory limit?
No, there is not.
Between versions 2.x and 3.x...several options have been removed
...mem_limit, memswap_limit: These have been replaced by the resources key under deploy. deploy configuration only takes effect when using docker stack deploy, and is ignored by docker-compose.
See Compose: Upgrading from 2 to 3
Also, you don't have to upgrade; there is no reason to upgrade at all if you don't use swarm.
Sadly, the official Docker docs state:
Version 3 (most current, and recommended)
which isn't actually true: if you use docker-compose without swarm, there is hardly any reason to switch, or to use v3 for a new project. In the official repository you can see comments to this effect [2][3].
Also, in the compatibility matrix you can see that v2 still gets updates even though v3 has been out for quite some time, and only v1 is marked as deprecated.
Here's an example Docker Compose file, version 3.8, with memory limits:
version: '3.8'
services:
  web:
    image: nginx:latest
    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
      resources:
        limits:
          memory: 256M
    ports:
      - "80:80"
    volumes:
      - type: volume
        source: myapp-data
        target: /var/www/html
    networks:
      - myapp-net
    environment:
      - NGINX_HOST=example.com
      - NGINX_PORT=80
  db:
    image: mysql:latest
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
      resources:
        limits:
          memory: 512M
    volumes:
      - type: volume
        source: myapp-db
        target: /var/lib/mysql
    networks:
      - myapp-net
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=myapp
volumes:
  myapp-data:
  myapp-db:
networks:
  myapp-net:
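Since deploy: only takes effect under swarm, you would launch this with docker stack deploy (the stack name myapp is just an example) and can then read the limits back per service:

docker stack deploy -c docker-compose.yml myapp
docker service inspect myapp_web --pretty
# the output should list the 256M memory limit under Resources -> Limits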

Can not start 2 cassandra containers on mac

I have a situation with a cassandra container.
I have 2 docker-compose.yaml files in different folders.
docker-compose.yaml in folder 1
version: "3"
services:
cassandra-cluster-node-1:
image: cassandra:3.0
container_name: cassandra-cluster-node-1
hostname: cassandra-cluster-node-1
ports:
- '9142:9042'
- '7199:7199'
- '9160:9160'
docker-compose.yaml in folder 2
version: "3"
services:
cassandra-cluster-node-2:
image: cassandra:3.0
container_name: cassandra-cluster-node-2
hostname: cassandra-cluster-node-2
ports:
- '9242:9042'
- '7299:7199'
- '9260:9160'
I brought up cassandra from folder 1 and the system worked well; after that I brought up cassandra from folder 2. At that moment the cassandra service from folder 1 was killed automatically. I don't understand why, so could someone with Docker experience please explain this situation?
The error in cassandra_1 after I ran cassandra_2:
cassandra-cluster-node-1 exited with code 137
Thank you, I'm going to appreciate your help.
137 is an out-of-memory error. Cassandra uses a lot of memory if started with default settings: by default it takes 1/4 of the system memory, for each instance. You can restrict the memory usage using environment variables (see my example further down).
Docker Compose creates a network for each directory it runs under, so with your setup the two nodes will never be able to find each other. This is the output from my test; your files are put into two directories, cass1 and cass2:
$ docker network ls
NETWORK ID NAME DRIVER SCOPE
dbe9cafe0af3 bridge bridge local
70cf3d77a7fc cass1_default bridge local
41af3e02e247 cass2_default bridge local
21ac366b7a31 host host local
0787afb9aeeb none null local
You can see the two networks cass1_default and cass2_default, so the two nodes will not find each other.
If you want them to find each other, you have to give the first one as a seed to the second one, and they have to be on the same network (same docker-compose file):
version: "3"
services:
cassandra-cluster-node-1:
image: cassandra:3.0
container_name: cassandra-cluster-node-1
hostname: cassandra-cluster-node-1
environment:
- "MAX_HEAP_SIZE=1G"
- "HEAP_NEWSIZE=256M"
ports:
- '9142:9042'
- '7199:7199'
- '9160:9160'
cassandra-cluster-node-2:
image: cassandra:3.0
container_name: cassandra-cluster-node-2
hostname: cassandra-cluster-node-2
environment:
- "MAX_HEAP_SIZE=1G"
- "HEAP_NEWSIZE=256M"
- "CASSANDRA_SEEDS=cassandra-cluster-node-1"
ports:
- '9242:9042'
- '7299:7199'
- '9260:9160'
depends_on:
- cassandra-cluster-node-1
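Once both containers are up, you can verify that the two nodes actually formed a ring; nodetool ships inside the cassandra image:

docker-compose up -d
# give both nodes a couple of minutes to bootstrap, then:
docker exec cassandra-cluster-node-1 nodetool status
# both nodes should be listed with state UN (Up/Normal)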

How to specify Memory & CPU limit in docker compose version 3

I am unable to specify CPU and memory limits for services specified in a version 3 file.
With version 2 it works fine with the mem_limit and cpu_shares parameters under the services. But it fails when using version 3, and putting them under the deploy section doesn't seem useful unless I am using swarm mode.
Can somebody help?
version: "3"
services:
node:
build:
context: .
dockerfile: ./docker-build/Dockerfile.node
restart: always
environment:
- VIRTUAL_HOST=localhost
volumes:
- logs:/app/out/
expose:
- 8083
command: ["npm","start"]
cap_drop:
- NET_ADMIN
- SYS_ADMIN
I know the topic is a bit old and seems stale, but anyway I was able to use these options:
deploy:
  resources:
    limits:
      cpus: '0.001'
      memory: 50M
when using version 3.7 of the Compose file format.
What helped in my case was using this command:
docker-compose --compatibility up
The --compatibility flag stands for (taken from the documentation):
If set, Compose will attempt to convert deploy keys in v3 files to their non-Swarm equivalent.
I think it's great that I don't have to revert my docker-compose file back to v2.
deploy:
  resources:
    limits:
      cpus: '0.001'
      memory: 50M
    reservations:
      cpus: '0.0001'
      memory: 20M
More: https://docs.docker.com/compose/compose-file/compose-file-v3/#resources
In your specific case:
version: "3"
services:
node:
image: USER/Your-Pre-Built-Image
environment:
- VIRTUAL_HOST=localhost
volumes:
- logs:/app/out/
command: ["npm","start"]
cap_drop:
- NET_ADMIN
- SYS_ADMIN
deploy:
resources:
limits:
cpus: '0.001'
memory: 50M
reservations:
cpus: '0.0001'
memory: 20M
volumes:
- logs
networks:
default:
driver: overlay
Note:
expose is not necessary; the service will be reachable by default on your stack network.
Images have to be pre-built; building within v3 (stack deploy) is not possible.
restart is also deprecated; you can use restart_policy under deploy with the on-failure condition.
You can use a standalone one-node "swarm"; most (if not all) v3 improvements are for swarm.
Also note:
Networks in Swarm mode do not bridge. If you would like to connect internally only, you have to attach to the network. You can either specify an external network within another compose file, or create the network with the --attachable parameter (docker network create -d overlay My-Network --attachable).
Otherwise you have to publish the port like this:
ports:
  - 80:80
Docker Compose v1 does not support the deploy key; it's only respected when you use your version 3 YAML file with docker stack deploy.
This message is printed when you add the deploy key to your docker-compose.yml file and then run docker-compose up -d:
WARNING: Some services (database) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
The documentation (https://docs.docker.com/compose/compose-file/#deploy) says:
Specify configuration related to the deployment and running of services. This only takes effect when deploying to a swarm with docker stack deploy, and is ignored by docker-compose up and docker-compose run.
Nevertheless, you can use Docker Compose v2. Given the following Docker composition, you can use the deploy key to limit your containers' resources.
version: "3.9"
services:
database:
image: mariadb:10.10.2-jammy
container_name: mydb
environment:
MYSQL_ROOT_PASSWORD: root_secret
MYSQL_DATABASE: mydb
MYSQL_USER: myuser
MYSQL_PASSWORD: secret
TZ: "Europe/Zurich"
MARIADB_AUTO_UPGRADE: "true"
tmpfs:
- /var/lib/mysql:rw
ports:
- "127.0.0.1:3306:3306"
deploy:
resources:
limits:
cpus: "4.0"
memory: 200M
networks:
- mynetwork
When you run docker compose up -d (note: with version 2 of Docker Compose you call the docker binary, not the docker-compose Python application) and then inspect the resources, you see that the memory is limited to 200 MB. The CPU limit is not exposed by docker stats.
❯ docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
2c71fb8de607 mydb 0.04% 198MiB / 200MiB 99.02% 2.67MB / 3.77MB 70.6MB / 156MB 18
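The CPU limit is still visible via docker inspect; NanoCpus holds the limit in billionths of a CPU:

docker inspect -f '{{.HostConfig.NanoCpus}}' mydb
# 4000000000  -> 4.0 CPUs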
This is possible with file format version >= 3.8. Here is an example using docker-compose >= 1.28.x:
version: '3.9'
services:
  app:
    image: nginx
    cpus: "0.5"
    mem_reservation: "10M"
    mem_limit: "250M"
Proof of it working (see the MEM USAGE column):
The expected behavior when the memory limit is reached is that the container gets killed. In that case, either set restart: always or adjust your app code.
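You can tell an OOM kill apart from a normal crash by inspecting the dead container (the name is a placeholder); exit code 137 together with OOMKilled=true means the kernel killed it for exceeding the limit:

docker inspect -f '{{.State.OOMKilled}} {{.State.ExitCode}}' <container>
# true 137  -> killed for exceeding the memory limit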
Limit and restart settings in Docker Compose v3 should now be set like this (restart: always is also deprecated):
deploy:
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 3
    window: 120s
  resources:
    limits:
      cpus: '0.50'
      memory: 50M
    reservations:
      cpus: '0.25'
      memory: 20M
I have a different experience; maybe somebody can explain this.
Maybe this is a bug (I think it's a feature), but I am able to use deploy limits (memory limits) in docker-compose without swarm; however, CPU limits don't work, while replication does.
$> docker-compose --version
docker-compose version 1.29.2
$> docker --version
Docker version 20.10.12
version: '3.2'
services:
  limits-test:
    image: alexeiled/stress-ng
    command: [
      '--vm', '1', '--vm-bytes', '20%', '--vm-method', 'all', '--verify', '-t', '10m', '-v'
    ]
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 1024M
Docker stats
b647e0dad247 dc-limits_limits-test_1 0.01% 547.1MiB / 1GiB 53.43% 942B / 0B 0B / 0B 3
Edit: thanks @Jimmix.
I think there is confusion here over using docker-compose and docker compose (with a space). You can install the compose plugin using https://docs.docker.com/compose/install if you don't already have it.
Here is an example compose file just running Elasticsearch
version: "3.7"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
restart: always
ports:
- "9222:9200"
deploy:
resources:
limits:
cpus: "4"
memory: "2g"
environment:
- "node.name=elasticsearch"
- "bootstrap.memory_lock=true"
- "discovery.type=single-node"
- "xpack.security.enabled=false"
- "ingest.geoip.downloader.enabled=false"
I have it in a directory called estest; the file is called es-compose.yaml. The file sets CPU and memory limits.
If you launch it using docker-compose, e.g.:
docker-compose -f es-compose.yaml up
then look at docker stats, you see:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
e3b6253ee730 estest_elasticsearch_1 342.13% 32.39GiB / 62.49GiB 51.83% 7.7kB / 0B 27.3MB / 381kB 46
so the CPU and memory resource limits are ignored. During the launch you see the warning:
WARNING: Some services (elasticsearch) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
This, I think, is what leads people to look at Docker stack/swarm. However, if you just switch to the newer docker compose, now built into the docker CLI (https://docs.docker.com/engine/reference/commandline/compose/), e.g.:
docker compose -f es-compose.yaml up
and look at docker stats again, you see:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d062eda10ffe estest-elasticsearch-1 0.41% 1.383GiB / 2GiB 69.17% 8.6kB / 0B 369MB / 44MB 6
The limits have therefore been applied.
In my opinion this is better than swarm, as it still allows you to build containers as part of the compose project and pass environment easily via a file. I would recommend removing docker-compose and switching over to the newer docker compose wherever possible.
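To check which variant you are actually invoking:

docker compose version     # the newer CLI plugin; applies deploy.resources limits
docker-compose version     # the legacy Python tool; ignores deploy.* unless run with --compatibility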
