I created a Docker container from an image that is about 90 MB in size.
The container is running fine.
I wanted to know how much RAM the container is actually using on its host machine, so I ran the docker stats command, which shows the following output:
CONTAINER ID   NAME                   CPU %   MEM USAGE / LIMIT     MEM %   NET I/O   BLOCK I/O        PIDS
66c266363746   sbrconfigurator_v0_2   0.00%   32.29MiB / 740.2MiB   4.36%   0B / 0B   15.1MB / 4.1kB   10
Here it shows the memory usage as follows:
MEM USAGE / LIMIT
32.29MiB / 740.2MiB
I don't know what this 740.2 MiB means. Does it mean that 740.2 MiB of RAM has been allocated to (i.e. is being used by) this container, or not?
Please help me figure out how much of the host machine's RAM this container is actually using.
The host machine runs Linux (Ubuntu).
The memory limit shows how much memory Docker will allow the container to use before the kernel kills the container with an OOM (out-of-memory) error. If you do not set a memory limit on a container, it defaults to all of the memory available on the Docker host. With Docker for Win/Mac, this is the memory allocated to the embedded VM, which you can adjust in the settings. You can set the memory limit in the compose file (see the sketch after this example) or directly on the docker run CLI:
$ docker run -itd --name test-mem -m 1g busybox /bin/sh
f2f9f041a76c0b74e4c6ae51dd57070258a06c1f3ee884d07fef5a103f0925d4
$ docker stats --no-stream test-mem
CONTAINER ID   NAME       CPU %   MEM USAGE / LIMIT   MEM %   NET I/O       BLOCK I/O     PIDS
f2f9f041a76c   test-mem   0.00%   360KiB / 1GiB       0.03%   5.39kB / 0B   3.29MB / 0B   1
In the above example, busybox is not using 1 GiB of memory, only 360 KiB.
Without setting the limit, the limit shown by docker stats converts (GiB * 1024 * 1024 = KiB) to something very close to the total memory you see in the free command on the host. I'm not sure whether the small difference between the two accounts for the kernel or some other overhead.
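For reference, the same 1 GiB limit from a compose file looks roughly like this (a minimal sketch; the service name and command are placeholders, and in the v3/Swarm format the limit goes under deploy.resources.limits instead of mem_limit):

services:
  test-mem:
    image: busybox
    command: sleep 3600
    mem_limit: 1g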
The image size of 90 MB is only the size on disk; it doesn't have much to do with occupied memory (RAM).
When you start a Docker container on Linux, e.g.:
$ docker run -itd --name test busybox /bin/sh
74a1cb1ecf7ad902c5ddb842d055b4c2e4a11b3b84efd1e600eb47aefb563cb3
Docker will create a directory in the cgroup filesystem, typically mounted at /sys/fs/cgroup, named after the long ID of the container:
/sys/fs/cgroup/memory/docker/74a1cb1ecf7ad902c5ddb842d055b4c2e4a11b3b84efd1e600eb47aefb563cb3/
To make the paths shorter, I'll write
/sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)
instead. The directory contains a number of plain text files that are read and written by the kernel.
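For example, listing that directory for the test container shows the files used below (among them memory.limit_in_bytes, memory.usage_in_bytes and memory.stat):

$ ls /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/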
When you run a Docker container without the -m / --memory flag, no limit is written to its cgroup, so the container can allocate all memory available on your system. In that case docker stats reports the host's total memory as the limit (which is most likely what the 740.2MiB in the question is), and you can check that figure with:
$ numfmt --to=iec $(awk '/MemTotal/ {print $2 * 1024}' /proc/meminfo)
When you specify a memory limit, e.g. -m 1g:
$ docker run -itd --name test -m 1g busybox /bin/sh
Docker writes the limit into the container's memory.limit_in_bytes file:
$ numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/memory.limit_in_bytes)
1.0G
This tells the Linux kernel to enforce a hard memory limit on the container. If the container tries to allocate more memory than the limit allows, the kernel invokes the infamous OOM killer.
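A quick way to see this in action, as a rough sketch (the tail trick simply buffers /dev/zero in memory until the 64 MiB limit is hit; --memory-swap stops the container from spilling into swap first, and if swap accounting is disabled on your host Docker just prints a warning):

$ docker run --name oom-test -m 64m --memory-swap 64m busybox sh -c 'head -c 256m /dev/zero | tail'
$ docker inspect -f '{{.State.OOMKilled}}' oom-test   # should print "true" once the process has been OOM-killed
$ docker rm oom-test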
The "MEM USAGE" is probably read from memory.usage_in_bytes, which is approximation of actual memory usage.
$ numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/memory.usage_in_bytes)
312K
According to the cgroups documentation, a more precise value can be obtained from the memory.stat file:
If you want to know more exact memory usage, you should use RSS+CACHE(+SWAP) value in memory.stat (see 5.2).
where you'd have to sum those three lines yourself. Normally memory.usage_in_bytes is a good enough estimate.
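For the test container above, a rough one-liner that sums those fields (cgroup v1 layout; the swap line only appears if swap accounting is enabled):

$ awk '/^(rss|cache|swap) /{sum += $2} END{print sum}' /sys/fs/cgroup/memory/docker/$(docker inspect -f "{{.Id}}" test)/memory.stat | numfmt --to=iec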
Related
I am running several Java applications with the Docker image jboss/wildfly:20.0.1.Final on Kubernetes 1.19.3. The WildFly server runs on OpenJDK 11, so the JVM supports container memory limits (cgroups).
If I set a memory limit, that limit seems to be totally ignored by the container when running in Kubernetes, but it is respected on the same machine when I run it in plain Docker:
1. Run Wildfly in Docker with a memory limit of 300M:
$ docker run -it --rm --name java-wildfly-test -p 8080:8080 -e JAVA_OPTS='-XX:MaxRAMPercentage=75.0' -m=300M jboss/wildfly:20.0.1.Final
Verify the memory usage:
$ docker stats
CONTAINER ID   NAME                CPU %   MEM USAGE / LIMIT   MEM %    NET I/O     BLOCK I/O   PIDS
515e549bc01f   java-wildfly-test   0.14%   219MiB / 300MiB     73.00%   906B / 0B   0B / 0B     43
As expected, the container does NOT exceed the memory limit of 300M.
2. Run Wildfly in Kubernetes with a memory limit of 300M:
Now I start the same container within Kubernetes:
$ kubectl run java-wildfly-test --image=jboss/wildfly:20.0.1.Final --limits='memory=300M' --env="JAVA_OPTS='-XX:MaxRAMPercentage=75.0'"
Verify the memory usage:
$ kubectl top pod java-wildfly-test
NAME                CPU(cores)   MEMORY(bytes)
java-wildfly-test   1089m        441Mi
The memory limit of 300M is totally ignored and exceeded immediately.
Why does this happen? Both tests can be performed on the same machine.
Answer
The reason for the high values was incorrect metric data coming from the kube-prometheus project. After uninstalling kube-prometheus and installing metrics-server instead, all data is displayed correctly by kubectl top, and it now shows the same values as docker stats. I do not know why kube-prometheus computed wrong data; in fact, it was reporting double the actual values for all memory data.
I'm placing this answer as a community wiki since it might be helpful for the community. kubectl top was displaying incorrect data. The OP solved the problem by uninstalling the kube-prometheus stack and installing metrics-server instead.
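For anyone hitting the same discrepancy, a rough sketch of the fix (the manifest URL is the standard metrics-server install path; remove the kube-prometheus stack first, as described above):

$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
$ kubectl top pod java-wildfly-test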
I am setting up a Docker instance of an Elasticsearch cluster.
The instructions say:
Make sure Docker Engine is allotted at least 4GiB of memory
I am SSH'ing into the host, not using Docker Desktop.
How can I see the resource allotments from the command line?
Reference URL:
https://www.elastic.co/guide/en/elastic-stack-get-started/current/get-started-docker.html
I had the same problem with Docker Desktop on Windows 10 while running Linux containers on WSL2.
I found this issue: https://github.com/elastic/elasticsearch-docker/issues/92 and tried to apply similar logic to its solution.
I entered the WSL instance's terminal with the
wsl -d docker-desktop
command. Then I ran
sysctl -w vm.max_map_count=262144
to raise the kernel setting Elasticsearch complains about (strictly speaking this is the maximum number of memory map areas, not an amount of 'allotted memory').
After these steps I could run Elasticsearch's docker compose example.
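To actually see how much memory the engine has been allotted from the command line (the original question), docker info reports the total memory visible to the Docker engine; a small sketch, assuming a reasonably recent Docker CLI:

$ docker info --format 'Total Memory: {{.MemTotal}} bytes'
$ free -h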
If you'd like to do this with just one command:
docker stats --all
This will give output such as the following:
$ docker stats --all
CONTAINER ID   NAME                    CPU %   MEM USAGE / LIMIT    MEM %   NET I/O       BLOCK I/O     PIDS
5f8a1e2c08ac   my-compose_my-nginx_1   0.00%   2.25MiB / 1.934GiB   0.11%   1.65kB / 0B   7.35MB / 0B   2
To modify the limits:
When writing your docker-compose.yml, include the following under the relevant service's deploy: key (this example sets a 4 GiB limit):
deploy:
  resources:
    limits:
      memory: 4096M
    reservations:
      memory: 4096M
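After bringing the stack up, you can confirm that the limit took effect (service and container names depend on your project; with the legacy docker-compose v1 CLI you may need the --compatibility flag for deploy.resources to be honored):

$ docker compose up -d
$ docker stats --no-stream

The MEM USAGE / LIMIT column should now show the 4GiB limit for the constrained service.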
Docker has the --cpus flag to constrain CPU usage for a container.
According to the docs, it will:
Specify how much of the available CPU resources a container can use. For instance, if the host machine has two CPUs and you set --cpus="1.5", the container is guaranteed at most one and a half of the CPUs.
However, I am running on a machine where:
# cat /proc/cpuinfo | grep "cpu cores" | tail -n 1
8
# cat /proc/cpuinfo | grep "processor" | wc -l
16
Does it make sense to set --cpus=8 if I want a 50% limit for the container, or would that be 100%?
I don't see a clear answer in either the Docker documentation or the cgroups manual.
I saw a detailed explanation of the differences between physical CPUs, virtual CPUs and cores here, but it doesn't clarify what I should use for my limits with Docker.
By default, the process in the container runs with no CPU restrictions and can use all available processor time, competing with other processes running on the Linux host. Setting --cpus configures the cgroup settings to limit the processes inside that container to that many CPUs' worth of CPU time (enforced via the CFS quota rather than CPU shares). The value is counted in logical CPUs, i.e. what nproc reports, so on your host with 16 logical CPUs, --cpus=8 corresponds to roughly 50% of total host CPU time. The limit is managed by the kernel, but the underlying hardware is still visible in /proc/cpuinfo. Instead, you should look at the cgroup settings:
$ docker run -it --rm --cpus 0.75 busybox sh
/proc # cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
75000
/proc # cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
Contrast with an unlimited container:
$ docker run -it --rm busybox sh
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
-1
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
/ #
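To sanity-check the math for your 16-logical-CPU host, a small sketch read from the host side (assuming cgroup v1 with the cgroupfs driver; with the systemd driver the files live under system.slice instead):

$ docker run -itd --name cpu-test --cpus 8 busybox sh
$ cid=$(docker inspect -f '{{.Id}}' cpu-test)
$ echo $(( $(cat /sys/fs/cgroup/cpu/docker/$cid/cpu.cfs_quota_us) / $(cat /sys/fs/cgroup/cpu/docker/$cid/cpu.cfs_period_us) ))
8

A quota of 8 periods per period means 8 CPUs' worth of time, which on 16 logical CPUs is about 50% of the host.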
I have created a new Docker service and am determining its required resources. Since allocating RAM to a new service is greedy (if I say the container can have 8 GB of RAM, it will get them), I don't want to waste the cluster's resources.
Now I am trying to find out how much RAM a docker run took at its peak.
For example, I created an httpie image (for the rightfully paranoid, the Dockerfile is also on Docker Hub) that I execute via:
docker run -it k0pernikus/httpie-docker-alpine HEAD https://stackoverflow.com/
I know that there is a docker stats command, yet it appears to show only the current memory usage, and I don't really want to monitor that.
If I run it after the container has ended, it shows 0. (To get the container ID, I use the -d flag.)
$ docker run -itd k0pernikus/httpie-docker-alpine HEAD https://stackoverflow.com/
132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c
$ docker stats 132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c
it will show:
CONTAINER                                                          CPU %   MEM USAGE / LIMIT   MEM %   NET I/O     BLOCK I/O   PIDS
132a93ffc9e297250b8ca37b2563aa2b5e423e146890fe3383a91a7f26ef990c   0.00%   0 B / 0 B           0.00%   0 B / 0 B   0 B / 0 B   0
Yet how much RAM did it consume at maximum?
tl;dr:
Enable memory accounting
cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes
Docker uses cgroups under the hood. We only have to ask the kernel the right question, that is, cat the correct file. For this to work, memory accounting has to be enabled (as noted in the docs).
On systemd-based systems this is quite straightforward. Create a drop-in config for the Docker daemon:
systemctl set-property docker.service MemoryAccounting=yes
systemctl daemon-reload
systemctl restart docker.service
(This adds a little overhead, though, since a counter has to be updated each time the container allocates RAM. Further details: https://lwn.net/Articles/606004/)
Then, using the full container ID, which you can discover via docker inspect:
docker inspect --format="{{.Id}}" $CONTAINER_NAME
you can get the maximum memory used:
cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes
The container has to be running for this to work.
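Putting the two steps together while the container is still running, as a small sketch:

CONTAINER_ID=$(docker inspect --format="{{.Id}}" $CONTAINER_NAME)
numfmt --to=iec $(cat /sys/fs/cgroup/memory/docker/${CONTAINER_ID}/memory.max_usage_in_bytes)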
I am using Docker to run some containerized apps. I am interested in measuring how many resources they consume (in terms of CPU and memory usage).
Is there any way to measure the resources consumed by Docker containers, like RAM and CPU usage?
Thank you.
You can get this from docker stats, e.g.:
$ docker stats --no-stream
CONTAINER      CPU %   MEM USAGE / LIMIT   MEM %    NET I/O            BLOCK I/O             PIDS
6b5c0fcfa7d4   0.13%   2.203 MiB / 4 MiB   55.08%   5.223 kB / 648 B   102.4 kB / 876.5 kB   3
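If you only care about a couple of columns, docker stats also accepts a --format flag with Go-template placeholders, for example:

$ docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"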
Update: see Adrian Mouat's answer above, as Docker now supports docker stats!
There isn't a way to do this that's built into Docker in the current version. Future versions will support it via an API or plugin:
https://github.com/dotcloud/docker/issues/36
It does look like there's an LXC project that you should be able to use to track CPU and memory:
https://github.com/Soulou/acadock-live-lxc
Also, you can read resource metrics directly from cgroups.
See the example below (I am running Debian Jessie and Docker 1.2):
> docker ps -q
afa03c363af5
> ls /sys/fs/cgroup/memory/system.slice/ | grep docker-afa03c363af5
docker-afa03c363af54815d721d938e01fe4cb2debc4f6c15ebff1851e20f6cde3ae0e.scope
> cd docker-afa03c363af54815d721d938e01fe4cb2debc4f6c15ebff1851e20f6cde3ae0e.scope
> cat memory.usage_in_bytes
4358144
> cat memory.limit_in_bytes
1073741824
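CPU time can be read in the same way from the cpuacct controller (same systemd cgroup layout as above; the value is cumulative CPU time in nanoseconds, and on some systems the mount is named cpu,cpuacct):

> cat /sys/fs/cgroup/cpuacct/system.slice/docker-afa03c363af54815d721d938e01fe4cb2debc4f6c15ebff1851e20f6cde3ae0e.scope/cpuacct.usage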
Kindly check out the commands below for getting the CPU and memory usage of Docker containers:
docker stats container_ID   # to check a single container's resources
for i in $(docker ps -q); do docker stats $i --no-trunc --no-stream ; echo "--------"; done   # to check/list all container resources
docker stats --all   # to check all container resources live
docker system df -v   # to check storage-related information
Disk usage of Docker images, containers, and volumes:
docker system df -v
Local disk space on the host:
df -kh