How to see memory allotment for Docker Engine?

I am setting up a Docker instance of an Elasticsearch cluster.
The instructions say:
Make sure Docker Engine is allotted at least 4GiB of memory
I am SSH'ing into the host, not using Docker Desktop.
How can I see the resource allotments from the command line?
Reference URL:
https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html

I had the same problem with Docker Desktop on Windows 10 while running Linux containers on WSL2.
I found this issue: https://github.com/elastic/elasticsearch-docker/issues/92 and tried to apply similar logic to the solution described there.
I entered the WSL instance's terminal with the
wsl -d docker-desktop command.
Then I ran sysctl -w vm.max_map_count=262144 to raise the kernel's vm.max_map_count setting, which is what Elasticsearch actually requires (it is not strictly a memory allotment).
After these steps I could run Elasticsearch's Docker Compose example.
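As a small follow-up (not from the original answer): you can verify the current value and, on a typical Linux host, make it persistent across reboots. The drop-in file path below is the conventional sysctl.d location and is an assumption about your distribution.
# check the current value
$ sysctl vm.max_map_count
# set it for the running kernel
$ sudo sysctl -w vm.max_map_count=262144
# persist it across reboots (path assumes a standard sysctl.d layout)
$ echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-elasticsearch.conf
$ sudo sysctl --system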

I'd go about it with just one command:
docker stats --all
This will give output such as the following:
$ docker stats --all
CONTAINER ID NAME CPU% MEM USAGE/LIMIT MEM% NET I/O BLOCK I/O PIDS
5f8a1e2c08ac my-compose_my-nginx_1 0.00% 2.25MiB/1.934GiB 0.11% 1.65kB/0B 7.35MB/0B 2
To modify the limits:
When you're writing your docker-compose.yml, include the following under the deploy: key of the service (if you'd like to set up a 4 GiB limit):
deploy:
  resources:
    limits:
      memory: 4096M
    reservations:
      memory: 4096M
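For context, a minimal sketch of where that block sits in a compose file; the service name and image tag are placeholders, and deploy.resources is honored by swarm deployments and by recent versions of docker compose:
version: "3.8"
services:
  elasticsearch:   # hypothetical service name
    image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0   # placeholder tag
    deploy:
      resources:
        limits:
          memory: 4096M
        reservations:
          memory: 4096M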

Related

How to check heap size inside docker container

We are using Docker Swarm on the server, with OpenJDK 8. If I do:
docker service ls
I see the result:
ID NAME MODE REPLICAS IMAGE PORTS
7l89205dje61 integration_api replicated 1/1 docker.repo1.tomba.com/koppu/koppu-api:3.1.2.96019dc
.................
I am trying to update the JVM heap size for this service, so I tried:
docker service update --env-add JAVA_OPTS="-Xms3G -Xmx3G -XX:MaxPermSize=1024m" integration_api
I saw this result:
integration_api
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
Now I am trying to see the heap size, but I can't find a way: when I tried to get inside the container using the ID above:
docker exec -it 7l89205dje61 bash
I get the error:
this container does not exist.
Any suggestion?
Perhaps you can exec into the running container and display the current heap size with something like this?
# exec into a running container of your service (use a container ID from docker ps, not the service ID)
docker exec -it <CONTAINER-ID> bash
# then, inside the container, print the JVM heap flags:
java -XX:+PrintFlagsFinal -version | grep HeapSize
Use this Stack Overflow post to figure out how to exec into a service.
I got the Java command for printing heap settings from this Stack Overflow post.
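Not from the original answer, but since a swarm service ID is not a container ID, here is a small sketch of how you might locate the right container first (the service name integration_api comes from the question):
# on the node where the service task is running, find the container ID
$ docker ps --filter "name=integration_api" --format "table {{.ID}}\t{{.Names}}"
# then exec in and print the resolved heap sizes
$ docker exec -it <CONTAINER-ID> java -XX:+PrintFlagsFinal -version | grep -i heapsize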
Note that, to my knowledge, there isn't a good public example of these ideas yet. However, one good way of doing it is to implement a "healthcheck" process that queries JVM statistics such as heap usage and reports them to another system.
Another way is to expose the Spring Boot Actuator API so that Prometheus can read and track it over time.
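If the application is a Spring Boot app, a minimal sketch of that second approach, assuming the micrometer-registry-prometheus dependency is on the classpath (the application.yml below is hypothetical, not from the original post):
# application.yml (hypothetical)
management:
  endpoints:
    web:
      exposure:
        include: health,metrics,prometheus
Prometheus can then scrape /actuator/prometheus, which includes JVM memory metrics such as jvm_memory_used_bytes.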

Why is OpenJDK Docker Container ignoring Memory Limits in Kubernetes?

I am running several Java applications with the Docker image jboss/wildfly:20.0.1.Final on Kubernetes 1.19.3. The WildFly server runs on OpenJDK 11, so the JVM supports container memory limits (cgroups).
If I set a memory limit, the limit is totally ignored by the container when running in Kubernetes, but it is respected when I run it in plain Docker on the same machine:
1. Run Wildfly in Docker with a memory limit of 300M:
$ docker run -it --rm --name java-wildfly-test -p 8080:8080 -e JAVA_OPTS='-XX:MaxRAMPercentage=75.0' -m=300M jboss/wildfly:20.0.1.Final
Verify memory usage:
$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
515e549bc01f java-wildfly-test 0.14% 219MiB / 300MiB 73.00% 906B / 0B 0B / 0B 43
As expected, the container does NOT exceed the memory limit of 300M.
2. Run Wildfly in Kubernetes with a memory limit of 300M:
Now I start the same container within Kubernetes.
$ kubectl run java-wildfly-test --image=jboss/wildfly:20.0.1.Final --limits='memory=300M' --env="JAVA_OPTS='-XX:MaxRAMPercentage=75.0'"
Verify memory usage:
$ kubectl top pod java-wildfly-test
NAME CPU(cores) MEMORY(bytes)
java-wildfly-test 1089m 441Mi
The memory limit of 300M is totally ignored and exceeded immediately.
Why does this happen? Both tests can be performed on the same machine.
Answer
I'm placing this answer as community wiki since it might be helpful for the community. kubectl top was displaying incorrect data; the OP solved the problem by uninstalling the kube-prometheus stack and installing metrics-server instead:
The reason for the high values was incorrect metric data coming from the kube-prometheus project. After uninstalling kube-prometheus and installing metrics-server instead, all data is displayed correctly by kubectl top; it now shows the same values as docker stats. I do not know why kube-prometheus computed the wrong data; in fact it was reporting double the actual values for all memory data.
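Not part of the original answer, but a quick way to convince yourself the limit really is enforced, regardless of what the metrics stack reports (the pod name is taken from the question; the second command assumes cgroup v1 inside the container):
# the limit as recorded in the pod spec
$ kubectl get pod java-wildfly-test -o jsonpath='{.spec.containers[0].resources.limits.memory}'
# the limit the kernel actually enforces inside the container (cgroup v1)
$ kubectl exec java-wildfly-test -- cat /sys/fs/cgroup/memory/memory.limit_in_bytes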

ArangoDB Single Instance as a Docker Container Fills memory and restarts

We are running ArangoDB v3.5.2. The Docker container unexpectedly restarts at random intervals, and all connected clients get disconnected. After further investigation, we found that the Docker container running ArangoDB is completely filling the memory allocated to it. Memory usage grows steadily from the moment the container starts and never goes down; once the limit is reached, the container restarts.
Below is the docker command used to run the container:
docker run -d --name test -v /mnt/test:/var/lib/arangodb3 --restart always --memory="1200m" --cpus="1.5" -p 8529:8529 --log-driver="awslogs" --log-opt awslogs-region="eu-west-1" --log-opt awslogs-group="/docker/test" -e ARANGO_RANDOM_ROOT_PASSWORD=1 -e ARANGO_STORAGE_ENGINE=rocksdb arangodb/arangodb:3.5.2 --log.level queries=warn --log.level performance=warn --rocksdb.block-cache-size 256MiB --rocksdb.enforce-block-cache-size-limit true --rocksdb.total-write-buffer-size 256MiB --cache.size 256MiB
Why does the memory keep increasing and never go down, especially when the database is not being used? How do I solve this issue?
My Environment
ArangoDB Version: 3.5.2
Storage Engine: RocksDB
Deployment Mode: Single Server
Deployment Strategy: Manual Start in Docker
Configuration:
Infrastructure: AWS t3a.small Machine
Operating System: Ubuntu 16.04
Total RAM in your machine: 2GB. However, the container's limit is 1.2GB
Disks in use: SSD
Used Package: Docker
There's a known issue when you mix settings with inline comments in the configuration file arangodb.conf (source: https://github.com/arangodb/arangodb/issues/5414)
And here's the part where they talk about the solution: https://github.com/arangodb/arangodb/issues/5414#issuecomment-498287668
I think the config file syntax does not support lines such as
block-cache-size = 268435456 # 256M
I think effectively this will be interpreted as something like
block-cache-size = 268435456256000000, which is way higher than intended.
Just remove the inline comments and retry!
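A minimal sketch of what that fix looks like in the config file; the [rocksdb] section name is assumed, and the value is the 256M figure from the issue:
[rocksdb]
# before: the inline comment gets folded into the value
# block-cache-size = 268435456 # 256M

# after: keep the comment on its own line
# 256M
block-cache-size = 268435456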

What does the "MEM USAGE / LIMIT" column state in docker stats command output?

I created a Docker container from an image that is ~90 MB in size.
The container is running fine.
I wanted to know how much RAM the container must be using on its host machine, so I ran the docker stats command and it shows the following output:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
66c266363746 sbrconfigurator_v0_2 0.00% 32.29MiB / 740.2MiB 4.36% 0B / 0B 15.1MB / 4.1kB 10
Here it shows memory usage as followings:
MEM USAGE / LIMIT
32.29MiB / 740.2MiB
I don't know what this 740.2 MiB means: has 740.2 MiB of RAM been allocated to this container, i.e. is 740.2 MiB of RAM being used by it, or not?
Please help me figure out how much of the host machine's RAM this container must be using.
Host machine is Linux, Ubuntu.
The memory limit shows how much memory Docker will allow the container to use before the kernel kills a process in the container with an OOM (out-of-memory) kill. If you do not set a memory limit on a container, it defaults to all of the memory available on the Docker host. With Docker for Win/Mac, this is the memory allocated to the embedded VM, which you can adjust in the settings. You can set the memory limit in the compose file, or directly on the docker run CLI:
$ docker run -itd --name test-mem -m 1g busybox /bin/sh
f2f9f041a76c0b74e4c6ae51dd57070258a06c1f3ee884d07fef5a103f0925d4
$ docker stats --no-stream test-mem
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
f2f9f041a76c test-mem 0.00% 360KiB / 1GiB 0.03% 5.39kB / 0B 3.29MB / 0B 1
In the above example, busybox is not using 1GB of memory, only 360KB.
Without setting a limit, the limit defaults to the host's total memory; converting the GiB value shown by docker stats to KiB (GiB × 1024 × 1024) gives something very close to the total memory reported by the free command on the host. I'm not sure whether the small difference between the two is accounted for by the kernel or some other overhead.
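As an aside (not from the original answer), you can cross-check that default against the host directly on a Linux box:
# total memory as the host sees it
$ free -h
# total memory as the Docker daemon sees it (in bytes)
$ docker info --format '{{.MemTotal}}'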
The image size of 90 MB is only the size on disk; it doesn't have much to do with occupied memory (RAM).
When you start a Docker container on Linux, e.g.:
$ docker run -itd --name test busybox /bin/sh
74a1cb1ecf7ad902c5ddb842d055b4c2e4a11b3b84efd1e600eb47aefb563cb3
Docker will create a directory in the cgroup filesystem, typically mounted at /sys/fs/cgroup, under the long ID of the container:
/sys/fs/cgroup/memory/docker/74a1cb1ecf7ad902c5ddb842d055b4c2e4a11b3b84efd1e600eb47aefb563cb3/
In order to make the path shorter I'll write
/sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)
instead. The directory contains a number of plain-text files read and written by the kernel.
When you run a Docker container without the -m / --memory flag, the container is able to allocate all of the memory available on your system; in that case memory.limit_in_bytes holds a very large default value rather than a real limit:
$ numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/memory.limit_in_bytes)
When you specify a memory limit, e.g. -m 1g:
$ docker run -itd --name test -m 1g busybox /bin/sh
Docker will write the limit into the file memory.limit_in_bytes:
$ numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/memory.limit_in_bytes)
1.0G
This tells the Linux kernel to enforce a hard memory limit on the container. If the container tries to allocate more memory than the limit allows, the kernel invokes the infamous OOM killer.
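A quick way to check whether a stopped container was killed that way (a sketch, reusing the test container from this example):
$ docker inspect --format '{{.State.OOMKilled}}' test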
The "MEM USAGE" is probably read from memory.usage_in_bytes, which is approximation of actual memory usage.
$ numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/memory.usage_in_bytes)
312K
According to the cgroups documentation, a more precise value can be obtained from the memory.stat file:
If you want to know more exact memory usage, you should use RSS+CACHE(+SWAP)
value in memory.stat (see 5.2).
You'd have to sum those three lines yourself; normally memory.usage_in_bytes is a good enough estimate.
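If you do want the more precise number, here is a rough sketch that sums those fields for the test container (cgroup v1 assumed; the swap line only appears when swap accounting is enabled):
$ awk '/^(rss|cache|swap) /{sum+=$2} END{print sum}' \
    /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/memory.stat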

How to find out how much RAM a docker run execution consumed at maximum?

I have created a new Docker service and am determining its required resources. Since RAM allocation for a new service is greedy (if I say the container can have 8 GB of RAM, it will take it), I don't want to waste the cluster's resources.
Now I am trying to find out how much RAM a docker run took at its peak.
For example, I created an httpie image (for the rightfully paranoid, the Dockerfile is also on Docker Hub) that I execute via:
docker run -it k0pernikus/httpie-docker-alpine HEAD https://stackoverflow.com/
I know that there is a docker stats command, yet it appears to show the current memory usage, and I don't really want to monitor that.
If I run it after the container has ended, it shows 0. (To get the container ID, I use the -d flag.)
$ docker run -itd k0pernikus/httpie-docker-alpine HEAD https://stackoverflow.com/
132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c
$ docker stats 132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c
it will show:
CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c 0.00% 0 B / 0 B 0.00% 0 B / 0 B 0 B / 0 B 0
Yet how much RAM did it consume at maximum?
tl;dr:
Enable memory accounting
cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes
Docker uses cgroups under the hood. We only have to ask the kernel the right question, that is, cat the correct file. For this to work, memory accounting has to be enabled (as noted in the docs).
On systemd-based systems this is quite straightforward; create a drop-in config for the Docker daemon:
systemctl set-property docker.service MemoryAccounting=yes
systemctl daemon-reload
systemctl restart docker.service
(This adds a little overhead, though, as a counter has to be updated each time the container allocates RAM. Further details: https://lwn.net/Articles/606004/)
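To confirm the property took effect (a sketch; assumes a systemd host):
$ systemctl show docker.service --property=MemoryAccounting
# should print MemoryAccounting=yes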
Then, using the full container ID, which you can discover via docker inspect:
docker inspect --format="{{.Id}}" $CONTAINER_NAME
you can get the maximum memory used:
cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes
The container has to be running for this to work.
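Putting it together, a rough end-to-end sketch using the image from the question (cgroup v1 paths assumed; read the file while the container is still running, because the cgroup directory disappears once it stops):
# capture the full container ID printed by docker run -d
$ CONTAINER_ID=$(docker run -d k0pernikus/httpie-docker-alpine HEAD https://stackoverflow.com/)
# read the peak usage quickly, before the short-lived request finishes
$ numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes)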
