I am trying to define a container in my docker-compose.yml file like so -
gitea:
  image: gitea/gitea:latest
  depends_on:
    - mariadb
  env_file:
    - gitea_env
  mem_limit: 100000000
  ports:
    - "127.0.0.1:4567:3000"
  volumes:
    - /var/lib/gitea:/data
However, once the container starts, docker stats shows that its memory is not limited to 100 MB. I am using version '2' of the docker-compose file syntax, and my docker-compose version is 1.25.5.
Output of docker stats --all gitea shows -
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
dfed6585837c gitea 0.25% 150MiB / 982.8MiB 15.27% 251kB / 102kB 57.2MB / 69.6kB 12
Docker version (docker --version) is -
Docker version 19.03.8-ce, build afacb8b7f0
What is going wrong in my configuration?
Make sure you are recreating the container after changing the memory limit:
docker-compose down && docker-compose up
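It's also worth knowing that a bare number in mem_limit is interpreted as bytes, so 100000000 is roughly 95 MiB. A unit suffix is less error-prone; a minimal sketch of the same service:

```yaml
gitea:
  image: gitea/gitea:latest
  # 100m = 100 MiB; clearer than the raw byte count 100000000
  mem_limit: 100m
```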
Related
I am running a container in non-swarm mode. docker-compose.yml is configured this way:
version: '3.9'
services:
  backend:
    container_name: 'w_server'
    restart: always
    build: .
    mem_reservation: '30G'
    mem_limit: '40G'
    environment:
      NODE_USER: '[...]'
However, after successful building and starting the container, stats look like this:
docker stats --all --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d4426dd4e34d w_server 0.00% 573.7MiB / 23.55GiB 2.38% 225MB / 8.47MB 0B / 16MB 18
I learned about deploy.resources.reservations and deploy.resources.limits, but they only work in swarm mode; building such a configuration displays a warning (and of course, the settings are not applied).
Is there any other way to assign memory resources?
Docker and docker-compose versions are:
docker-compose version 1.28.5, build c4eb3a1f
Docker version 18.09.7, build 2d0083d
Edit:
I found this question, whose answers suggest that mem_reservation and mem_limit are available in docker-compose.yml in version 2.x, while version 3.x doesn't support them.
However, changing just the version to 2.4 gave exactly the same results: the limit reported by docker stats was unchanged, not read from the configuration file.
You should define your limits for v3 like this:
version: '3.9'
services:
  backend:
    container_name: 'w_server'
    restart: always
    build: .
    deploy:
      resources:
        limits:
          memory: 40g
    environment:
      NODE_USER: '[...]'
But you need to use the --compatibility flag for it to take effect:
docker compose --compatibility up
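If you'd rather not depend on the --compatibility flag, a sketch of the equivalent version 2.x file, where mem_limit and mem_reservation are supported directly:

```yaml
version: '2.4'
services:
  backend:
    container_name: 'w_server'
    restart: always
    build: .
    mem_reservation: '30G'
    mem_limit: '40G'
    environment:
      NODE_USER: '[...]'
```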
I tried to run ELK on CentOS 8 with docker-compose. Here is my docker-compose.yml:
version: '3.1'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
    container_name: elasticsearch
    hostname: elasticsearch
    ports:
      - "9200:9200"
    expose:
      - "9200"
    volumes:
      - elasticsearch-data:/usr/share/elasticsearch/data
    networks:
      - docker-network
  kibana:
    image: docker.elastic.co/kibana/kibana:6.2.4
    container_name: kibana
    ports:
      - "5601:5601"
    expose:
      - "5601"
    environment:
      - SERVER_NAME=kibana.localhost
      - ELASTICSEARCH_URL=http://elasticsearch:9200
      - ELASTICSEARCH_USERNAME=elastic
      - ELASTICSEARCH_HOST=elasticsearch
      - ELASTICSEARCH_PORT=9200
      - ELASTIC_PWD=changeme
      - KIBANA_PWD=changeme
    depends_on:
      - elasticsearch
    networks:
      - docker-network
networks:
  docker-network:
    driver: bridge
volumes:
  elasticsearch-data:
but I'm facing this error:
{"type":"log","#timestamp":"2020-03-03T22:53:19Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable
to revive connection: http://elasticsearch:9200/"}
However, I have checked the following:
elasticsearch is running fine.
docker exec kibana ping elasticsearch works fine.
Both kibana and elasticsearch are on the same network, as you can see in docker-compose.yml.
I also checked docker exec kibana curl http://elasticsearch:9200, and the result is:
Failed connect to elasticsearch:9200; No route to host
I looked at other similar problems and their solutions, but none of them worked.
If you are running Elasticsearch inside Docker, then you may need to check whether you have allocated sufficient memory to Docker. Too little memory can cause Elasticsearch to slow down and even crash.
By default, Docker Desktop is set to allow 2 GB of RAM for Docker. In my own project I found that 4 GB prevented crashing, and 5 GB produced an additional performance speedup. Your mileage may vary depending on the amount of data you are ingesting.
Docker Desktop memory settings can be set via:
Docker Desktop -> Preferences -> Resources -> Memory
To inspect memory usage within the Docker container
DOCKER_ID=`docker ps | tail -n1 | awk '{ print $1 }'`; docker exec -it $DOCKER_ID /bin/bash
free -h # repeatedly run to inspect changes over time
Note that Elasticsearch memory usage peaks during ingest and indexing and eventually settles down to a slightly lower number once indexing and consolidation are complete. So ideally, peak memory usage should be measured during ingest.
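Independently of the Docker Desktop allocation, Elasticsearch's own JVM heap can be capped through the standard ES_JAVA_OPTS environment variable in the compose file. A sketch with illustrative values (tune them to your data volume):

```yaml
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch:6.2.4
  environment:
    # Fix the heap at 1 GB: set -Xms and -Xmx to the same value, and keep
    # the heap well below the memory available to Docker itself
    - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
```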
I’m running Sonarqube on Docker compose and my file looks like this:
version: "3"
services:
  sonarqube:
    image: sonarqube
    ports:
      - "9000:9000"
      - "5432:5432"
    links:
      - db:db
    environment:
      - SONARQUBE_JDBC_URL=jdbc:postgresql://db:5432/sonar
      - SONARQUBE_JDBC_USERNAME=postgres
      - SONARQUBE_JDBC_PASSWORD=sonar
    volumes:
      - ..../Work/tools/_SonarQube_home/conf:/opt/sonarqube/conf
      # - sonarqube_data:/opt/sonarqube_new/data
      - ...../Work/tools/_SonarQube_home/data:/opt/sonarqube/data
      - ....../Work/tools/_SonarQube_home/extensions:/opt/sonarqube/extensions
      - ..../Work/tools/_SonarQube_home/bundled-plugins:/opt/sonarqube/lib/bundled-plugins
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=sonar
      - POSTGRES_DB=sonar
    volumes:
      - .../Work/tools/_PostgreSQL_data:/var/lib/postgresql
      # This needs explicit mapping due to https://github.com/docker-library/postgres/blob/4e48e3228a30763913ece952c611e5e9b95c8759/Dockerfile.template#L52
      - ..../Work/tools/_PostgreSQL_data/data:/var/lib/postgresql/data
Everything works, and that's great. At some point I noticed that the SonarQube instance started to act slowly, so I checked docker stats. It looks like this:
| CPU | Mem Usage/ Limit |
|-------| --------------------
| 5.39% | 1.6GiB / 1.952GiB |
How do I assign more RAM to the server, let's say 4 GB? Previously this was done with mem_limit, but it no longer exists in version 3.
What would be a good solution for that?
Thanks!
If you are deploying to Swarm, then you can use the resources keyword in your Compose file (it's described under Resources in the file reference, https://docs.docker.com/compose/compose-file/).
So you can do something like this in Swarm:
version: "3.7"
services:
  redis:
    image: redis:alpine
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 50M
        reservations:
          cpus: '0.25'
          memory: 20M
If you are using Compose, then you have the option to go back to Compose file version 2.0, as described in the Compose file reference by Docker.
Looking for options to set resources on non swarm mode containers?
The options described here are specific to the deploy key and swarm mode. If you want to set resource constraints on non swarm
deployments, use Compose file format version 2 CPU, memory, and other
resource options. If you have further questions, refer to the
discussion on the GitHub issue docker/compose/4513.
I'm not familiar with SonarQube memory issues, but you may want to have a look at https://docs.sonarqube.org/display/SONARqube71/Java+Process+Memory.
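Since that page is about SonarQube's Java processes: a container memory limit alone doesn't resize the JVM heaps, so you may also need to raise SonarQube's own JVM options. A hypothetical sketch (the variable name is an assumption for newer images; older versions use sonar.web.javaOpts in sonar.properties instead):

```yaml
sonarqube:
  image: sonarqube
  environment:
    # SONAR_WEB_JAVAOPTS is assumed here; check the docs for your version
    - SONAR_WEB_JAVAOPTS=-Xmx2g
```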
In Compose file version 3, resource limits moved under a deploy: {resources: ...} key, but they are also only documented to work in Swarm mode. So to actually set them, you need to switch to a mostly compatible version 2 Compose file.
version: '2'
services:
  sonarqube:
    mem_limit: 4g
The default is for the container to be able to use an unlimited amount of memory. If you're running in an environment where Docker is inside a Linux VM (anything based on Docker Toolbox or Docker Machine, or Docker Desktop for Mac), memory is limited by the memory size of the VM.
My Docker container keeps restarting when running docker-compose up -d. When inspecting the logs with docker logs --tail 50 --follow --timestamps db, I get the following error:
/usr/local/bin/docker-entrypoint.sh: line 37: "/run/secrets/db_mysql_root_pw": No such file or directory
This probably means that no secrets are created. The output of docker secret ls also shows no secrets.
My docker-compose.yml file looks something like this (excluding port info etc.):
version: '3.4'
services:
  db:
    image: mysql:8.0
    container_name: db
    restart: always
    environment:
      - MYSQL_USER_FILE="/run/secrets/db_mysql_user"
      - MYSQL_PASSWORD_FILE="/run/secrets/db_mysql_user_pw"
      - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/db_mysql_root_pw"
    secrets:
      - db_mysql_user
      - db_mysql_user_pw
      - db_mysql_root_pw
    volumes:
      - "./mysql-data:/docker-entrypoint-initdb.d"
secrets:
  db_mysql_user:
    file: ./db_mysql_user.txt
  db_mysql_user_pw:
    file: ./db_mysql_user_pw.txt
  db_mysql_root_pw:
    file: ./db_mysql_root_pw.txt
In the same directory I have the 3 text files which simply contain the values for the environment variables. e.g. db_mysql_user_pw.txt contains password.
I am running Linux containers on a Windows host.
This is pretty dumb but changing
environment:
  - MYSQL_USER_FILE="/run/secrets/db_mysql_user"
  - MYSQL_PASSWORD_FILE="/run/secrets/db_mysql_user_pw"
  - MYSQL_ROOT_PASSWORD_FILE="/run/secrets/db_mysql_root_pw"
to
environment:
  - MYSQL_USER_FILE=/run/secrets/db_mysql_user
  - MYSQL_PASSWORD_FILE=/run/secrets/db_mysql_user_pw
  - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_mysql_root_pw
made it work. I still don't know why I cannot see the secrets with docker secret ls though.
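A plausible explanation for the quotes mattering: with the list form of environment, everything after the first = (including the double quotes) becomes part of the value, which matches the quoted path in the error message. The mapping form sidesteps the quoting pitfall entirely:

```yaml
environment:
  MYSQL_USER_FILE: /run/secrets/db_mysql_user
  MYSQL_PASSWORD_FILE: /run/secrets/db_mysql_user_pw
  MYSQL_ROOT_PASSWORD_FILE: /run/secrets/db_mysql_root_pw
```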
I am unable to specify CPU and memory limits for services specified in version 3.
With version 2 it works fine with the mem_limit and cpu_shares parameters under the services. But it fails when using version 3; putting them under the deploy section doesn't seem worthwhile unless I am using swarm mode.
Can somebody help?
version: "3"
services:
  node:
    build:
      context: .
      dockerfile: ./docker-build/Dockerfile.node
    restart: always
    environment:
      - VIRTUAL_HOST=localhost
    volumes:
      - logs:/app/out/
    expose:
      - 8083
    command: ["npm","start"]
    cap_drop:
      - NET_ADMIN
      - SYS_ADMIN
I know the topic is a bit old and seems stale, but anyway I was able to use these options:
deploy:
  resources:
    limits:
      cpus: '0.001'
      memory: 50M
when using version 3.7 of the Compose file format.
What helped in my case was using this command:
docker-compose --compatibility up
The --compatibility flag stands for (taken from the documentation):
If set, Compose will attempt to convert deploy keys in v3 files to
their non-Swarm equivalent
I think it's great that I don't have to revert my docker-compose file back to v2.
deploy:
  resources:
    limits:
      cpus: '0.001'
      memory: 50M
    reservations:
      cpus: '0.0001'
      memory: 20M
More: https://docs.docker.com/compose/compose-file/compose-file-v3/#resources
In your specific case:
version: "3"
services:
  node:
    image: USER/Your-Pre-Built-Image
    environment:
      - VIRTUAL_HOST=localhost
    volumes:
      - logs:/app/out/
    command: ["npm","start"]
    cap_drop:
      - NET_ADMIN
      - SYS_ADMIN
    deploy:
      resources:
        limits:
          cpus: '0.001'
          memory: 50M
        reservations:
          cpus: '0.0001'
          memory: 20M
volumes:
  logs:
networks:
  default:
    driver: overlay
Note:
expose is not necessary; ports are reachable by default on your stack network.
Images have to be pre-built; building within v3 (when deploying to a swarm) is not possible.
restart is also deprecated here; use restart_policy under deploy with the on-failure condition.
You can use a standalone one-node "swarm"; most (if not all) v3 improvements are for swarm.
Also note:
Networks in swarm mode are not bridged. If you would like to connect internally only, you have to attach to the network. You can either specify an external network within another compose file, or create the network with the --attachable parameter (docker network create -d overlay My-Network --attachable).
Otherwise you have to publish the port like this:
ports:
  - 80:80
Docker Compose v1 does not support the deploy key. It's only respected when you use your version 3 YAML file in a Docker stack.
This message is printed when you add the deploy key to your docker-compose.yml file and then run docker-compose up -d:
WARNING: Some services (database) use the 'deploy' key, which will be
ignored. Compose does not support 'deploy' configuration - use docker stack deploy to deploy to a swarm.
The documentation (https://docs.docker.com/compose/compose-file/#deploy) says:
Specify configuration related to the deployment and running of
services. This only takes effect when deploying to a swarm with docker
stack deploy, and is ignored by docker-compose up and docker-compose
run.
Nevertheless you can use Docker Compose v2. Given the following Docker composition you can use the deploy key to limit your containers resources.
version: "3.9"
services:
  database:
    image: mariadb:10.10.2-jammy
    container_name: mydb
    environment:
      MYSQL_ROOT_PASSWORD: root_secret
      MYSQL_DATABASE: mydb
      MYSQL_USER: myuser
      MYSQL_PASSWORD: secret
      TZ: "Europe/Zurich"
      MARIADB_AUTO_UPGRADE: "true"
    tmpfs:
      - /var/lib/mysql:rw
    ports:
      - "127.0.0.1:3306:3306"
    deploy:
      resources:
        limits:
          cpus: "4.0"
          memory: 200M
    networks:
      - mynetwork
When you run docker compose up -d (note: in version 2 of Docker Compose you call the docker binary, not the docker-compose Python application) and then inspect the resources, you see that the memory is limited to 200 MB. The CPU limit is not exposed by docker stats.
❯ docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
2c71fb8de607 mydb 0.04% 198MiB / 200MiB 99.02% 2.67MB / 3.77MB 70.6MB / 156MB 18
This is possible with Compose file version >= 3.8. Here is an example using docker-compose >= 1.28.x:
version: '3.9'
services:
  app:
    image: nginx
    cpus: "0.5"
    mem_reservation: "10M"
    mem_limit: "250M"
Proof of it working: docker stats reports the configured value in the MEM USAGE / LIMIT column.
The expected behavior when the memory limit is reached is that the container gets killed. In that case, either set restart: always or adjust your app's memory usage.
Limit and restart settings in Docker Compose v3 should now be set under deploy (restart: always is also deprecated):
deploy:
  restart_policy:
    condition: on-failure
    delay: 5s
    max_attempts: 3
    window: 120s
  resources:
    limits:
      cpus: '0.50'
      memory: 50M
    reservations:
      cpus: '0.25'
      memory: 20M
My experience differs; maybe somebody can explain it.
Maybe this is a bug (I think it is a feature), but I am able to use deploy memory limits in docker-compose without swarm. However, CPU limits don't work, while replication does.
$> docker-compose --version
docker-compose version 1.29.2
$> docker --version
Docker version 20.10.12
version: '3.2'
services:
  limits-test:
    image: alexeiled/stress-ng
    command: [
      '--vm', '1', '--vm-bytes', '20%', '--vm-method', 'all', '--verify', '-t', '10m', '-v'
    ]
    deploy:
      resources:
        limits:
          cpus: '0.50'
          memory: 1024M
Docker stats shows:
b647e0dad247 dc-limits_limits-test_1 0.01% 547.1MiB / 1GiB 53.43% 942B / 0B 0B / 0B 3
Edited, thanks @Jimmix.
I think there is confusion here over using docker-compose and docker compose (with a space). You can install the compose plugin using https://docs.docker.com/compose/install if you don't already have it.
Here is an example compose file just running Elasticsearch:
version: "3.7"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.2
    restart: always
    ports:
      - "9222:9200"
    deploy:
      resources:
        limits:
          cpus: "4"
          memory: "2g"
    environment:
      - "node.name=elasticsearch"
      - "bootstrap.memory_lock=true"
      - "discovery.type=single-node"
      - "xpack.security.enabled=false"
      - "ingest.geoip.downloader.enabled=false"
I have it in a directory called estest; the file is called es-compose.yaml. The file sets CPU and memory limits.
If you launch it using docker-compose, e.g.
docker-compose -f es-compose.yaml up
and then look at docker stats, you see:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
e3b6253ee730 estest_elasticsearch_1 342.13% 32.39GiB / 62.49GiB 51.83% 7.7kB / 0B 27.3MB / 381kB 46
so the CPU and memory resource limits are ignored. During the launch you see the warning:
WARNING: Some services (elasticsearch) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
which I think is what leads people to look at Docker stack/swarm. However, if you just switch to the newer docker compose, now built into the Docker CLI (https://docs.docker.com/engine/reference/commandline/compose/), e.g.
docker compose -f es-compose.yaml up
and look again at docker stats, you see:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d062eda10ffe estest-elasticsearch-1 0.41% 1.383GiB / 2GiB 69.17% 8.6kB / 0B 369MB / 44MB 6
Therefore the limits have been applied.
This is better in my opinion than swarm, as it still allows you to build containers as part of the compose project and to pass environment variables easily via a file. I would recommend removing docker-compose and switching to the newer docker compose wherever possible.