Resource consumption inside a Docker container

CONTAINER CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
48c16e180af6 0.20% 91.48MiB / 31.31GiB 0.29% 3.86kB / 0B 85.3MB / 0B 33
f734efe5a249 0.00% 472KiB / 31.31GiB 0.00% 3.97kB / 0B 12.3kB / 0B 1
165a7b031093 0.00% 480KiB / 31.31GiB 0.00% 9.49kB / 0B 3.66MB / 0B 1
Does anyone know how to get the resource consumption of a specific Docker container from within its own running environment?
From outside a container, we can get it easily with the docker stats command. However, if I try to measure resource consumption from inside a container, I get the consumption (RAM, CPU) of the physical host the container runs on.
Another option is the htop command, but its output does not match what docker stats reports.

If you want the resource consumption of the processes inside the container, you can exec into the container and monitor its processes:
docker exec -it <container-name> watch ps aux
Notice that from inside the container you will only see the container's own processes; it knows nothing about any other Docker processes running on the host.
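Another way to get the container's own consumption from inside it, rather than the host's, is to read the kernel's cgroup accounting files directly. The sketch below is a minimal illustration, assuming the common cgroup v2 mount point with a fallback to the legacy v1 path; exact paths can vary by distribution and Docker setup:

```python
import os

# Candidate cgroup files holding the container's current memory usage:
# cgroup v2 first, then the legacy cgroup v1 location.
MEMORY_FILES = [
    "/sys/fs/cgroup/memory.current",                # cgroup v2
    "/sys/fs/cgroup/memory/memory.usage_in_bytes",  # cgroup v1
]

def container_memory_bytes():
    """Return the container's memory usage in bytes, or None if no cgroup file is found."""
    for path in MEMORY_FILES:
        if os.path.exists(path):
            with open(path) as f:
                return int(f.read().strip())
    return None

def human(n):
    """Format a byte count roughly the way docker stats does (KiB/MiB/GiB)."""
    for unit in ("B", "KiB", "MiB", "GiB", "TiB"):
        if n < 1024 or unit == "TiB":
            return f"{n:.4g}{unit}"
        n /= 1024

usage = container_memory_bytes()
print(human(usage) if usage is not None else "no cgroup memory file found")
```

Unlike ps or htop, these files reflect the cgroup the container runs in, which is why they report the container's usage rather than the host's.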

Related

What does increasing NET I/O value in docker stats mean?

I am running the command docker stats <container_id> > performance.txt over a period of 1 hour during multi-user testing. Some stats, such as memory and CPU, increase and then normalize, but the NET I/O value keeps increasing.
At the start, the output was:
NAME CPU % MEM USAGE / LIMIT NET I/O BLOCK I/O PIDS
my-service 0.10% 5.63GiB / 503.6GiB 310MB / 190MB 0B / 0B 80
NAME CPU % MEM USAGE / LIMIT NET I/O BLOCK I/O PIDS
my-service 0.20% 5.63GiB / 503.6GiB 310MB / 190MB 0B / 0B 80
After 1 hour, it is:
NAME CPU % MEM USAGE / LIMIT NET I/O BLOCK I/O PIDS
my-service 116.26% 11.54GiB / 503.6GiB 891MB / 523MB 0B / 0B 89
NAME CPU % MEM USAGE / LIMIT NET I/O BLOCK I/O PIDS
my-service 8.52% 11.54GiB / 503.6GiB 892MB / 523MB 0B / 0B 89
As shown above, the NET I/O value is always increasing. What could this mean?
The Docker documentation says it is the input received and the output sent by the container. If so, why does it keep increasing? Is there an issue with the image running in the container?
NET I/O is a cumulative counter. It only goes up (when your app receives and sends data).
https://docs.docker.com/engine/reference/commandline/stats/
Column name: NET I/O
Description: The amount of data the container has sent and received over its network interface
So it's accumulated over time, unlike, say, CPU %, which shows how much CPU the container is using right now.
The docker stats command returns a live data stream for running containers.
It's the total amount of data passed over the network since the container started. From the definition of stream:
computing: a continuous flow of data or instructions
The documentation doesn't say this explicitly, but you can infer it from the terms stream and continuous. Perhaps the documentation could be made a bit clearer in that respect.
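Since the counter is cumulative, an instantaneous throughput can be derived by sampling it twice and dividing the delta by the elapsed time. A small illustrative helper (the sample values below are the received-bytes figures from the question's output, taken one hour apart):

```python
def net_rate(bytes_t1, bytes_t2, seconds):
    """Average bytes per second between two cumulative NET I/O samples."""
    if seconds <= 0:
        raise ValueError("need a positive sampling interval")
    return (bytes_t2 - bytes_t1) / seconds

# The question's container went from 310MB to 891MB received over one hour:
rx_rate = net_rate(310e6, 891e6, 3600)
print(f"average receive rate: {rx_rate / 1e3:.0f} kB/s")
```

A steadily growing counter with a roughly constant rate like this is normal for any service that keeps handling traffic; only a rate that grows over time would indicate increasing load.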

How do I make Docker container resources mutually exclusive?

I want multiple running containers to have mutually exclusive resources. For example, with CPU cores id0 to id63, if 32 cores are allocated to each of two containers, their core sets should not overlap. Similarly, when the host has 16GB of RAM, I want to allocate 8GB to each container so that one container cannot affect the memory usage of another.
Is there a good way to do this?
I think all you need is to limit container resources. This way you can ensure that no container uses more than X cores and/or Y RAM. To cap CPU usage at one core, add --cpus=1.0 to your docker run command; to pin a container to specific, non-overlapping cores instead, use --cpuset-cpus (e.g. --cpuset-cpus=0-31). To limit RAM to 8 gigabytes, add -m=8g. Putting it all together:
docker run --rm --cpus=1 -m=8g debian:buster cat /dev/stdout
And if you then look at docker stats you will see that the memory limit is applied (there is no indication of the CPU limit, though):
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
8d9a33b00950 funny_shirley 0.00% 1MiB / 8GiB 0.10% 6.7kB / 0B 0B / 0B 1
Read more in the docs.

"docker run --memory" doesn't account hugepages

Docker is running in privileged mode, and I am running a DPDK-based application in the container. I want to know whether this behavior is expected.
My server has 128G of memory in total, and I have limited the container's memory to 4G, which I can see in docker stats:
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
4deda4634b22 my_docker 38.12% 1.455GiB / 4GiB 36.37% 1.53kB / 0B 1.94GB / 755MB 69
I am seeing that even with the container's memory constrained to 4G, the application is able to allocate 32G of huge-page memory in addition to other, non-huge-page memory.
Is this expected? Does the docker run --memory option only apply to non-huge-page memory?
root@server:~# docker exec -ti my_docker bash
root@4deda4634b22:/#
root@4deda4634b22:/# ps aux | grep riot
root 893 17.2 0.0 68345740 105260 pts/0 Sl 05:54 1:02 /app/riot <<<<<< application
root@4deda4634b22:/# cat /proc/meminfo | grep -i huge
AnonHugePages: 909312 kB
ShmemHugePages: 0 kB
HugePages_Total: 32
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 1048576 kB
root@4deda4634b22:/# ls -rlt /mnt/huge/* | wc -l
32
I normally pass access to the huge pages and VFIO devices with docker run -it --privileged -v /sys/bus/pci/drivers:/sys/bus/pci/drivers -v /sys/kernel/mm/hugepages:/sys/kernel/mm/hugepages -v /sys/devices/system/node:/sys/devices/system/node -v /dev:/dev.
It looks like you are missing the same.
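As far as I understand, the behavior is expected: docker run --memory sets the memory cgroup limit, and preallocated huge pages are accounted by the separate hugetlb cgroup controller, so they bypass the 4G cap. A small sketch that computes the huge-page footprint from /proc/meminfo-style text (the sample string below reuses the values from the question's output; note Hugepagesize is reported in kB):

```python
MEMINFO_SAMPLE = """\
HugePages_Total:      32
HugePages_Free:        0
Hugepagesize:    1048576 kB
"""

def hugepage_bytes(meminfo_text):
    """Total bytes reserved as huge pages: HugePages_Total * Hugepagesize."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        fields[key.strip()] = int(rest.split()[0])  # first number on the line
    return fields["HugePages_Total"] * fields["Hugepagesize"] * 1024  # kB -> bytes

print(hugepage_bytes(MEMINFO_SAMPLE) / 2**30, "GiB")  # 32 pages of 1 GiB each
```

That matches the 32G the application was able to allocate despite the 4G --memory limit.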

Docker container update --memory didn't work as expected

Good morning all,
In the process of training myself in Docker, I've run into trouble.
I created a Docker container from a WordPress image via docker compose.
[root@vps672971 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
57bb123aa365 wordpress:latest "docker-entrypoint.s…" 16 hours ago Up 2 0.0.0.0:8001->80/tcp royal-by-jds-wordpress-container
I would like to allocate more memory to this container; however, after executing the following command, the information returned by docker stats is not correct.
docker container update --memory 3GB --memory-swap 4GB royal-by-jds-wordpress-container
docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
57bb123aa365 royal-by-jds-wordpress-container 0.01% 9.895MiB / 1.896GiB 0.51% 2.68kB / 0B 0B / 0B 6
I also tried querying the Engine API to retrieve information about my container, but the limit displayed is not correct either.
curl --unix-socket /var/run/docker.sock http://localhost/v1.21/containers/royal-by-jds-wordpress-container/stats
[...]
"memory_stats":{
"usage":12943360,
"max_usage":12955648,
"stats":{},
"limit":2035564544
},
[...]
It seems that the modification of the memory allocated to the container didn't work.
Anyone have an idea?
Thank you in advance.
Maxence
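One observation: the two readings are at least consistent with each other. The API's limit of 2035564544 bytes is exactly what docker stats renders as 1.896GiB, so both sources report the same old limit rather than the requested 3GB; the update simply never took effect. A quick check of that conversion:

```python
def to_gib(n_bytes):
    """Convert a byte count to GiB, matching docker stats' binary units."""
    return n_bytes / 2**30

api_limit = 2035564544  # "limit" field from the /stats API response
print(f"{to_gib(api_limit):.3f}GiB")  # the same 1.896GiB that docker stats shows
```

So the next step is to find out why the update was rejected or overridden (for instance, a limit also declared in the compose file), rather than to distrust docker stats.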

Network usage data between every pair of docker containers

I have a few micro-services running in docker containers (One service in each container).
How do I find out the network usage between every pair of Docker containers, so that I can build a graph with containers as vertices and edges labeled with the number of bytes transmitted/received?
I used cAdvisor, but it gives me the overall network usage of each container.
Start with docker stats:
$ docker stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
a645ca0d8feb wp_wordpress_1 0.00% 135.5MiB / 15.55GiB 0.85% 12.2MB / 5.28MB 2.81MB / 9.12MB 11
95e1649c5b79 wp_db_1 0.02% 203MiB / 15.55GiB 1.28% 6.44MB / 3.11MB 0B / 1.08GB 30
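Note that docker stats only exposes per-container totals, so pairwise (per-peer) traffic needs something extra, such as packet capture or per-peer iptables/nftables counters on the Docker bridge. Still, the per-container NET I/O column is a starting point for the graph's vertices; a parsing sketch, assuming the "rx / tx" cell format and decimal units (MB, kB) shown in the output above:

```python
def parse_size(s):
    """Parse a docker stats size such as '12.2MB' into bytes (decimal units)."""
    units = {"B": 1, "kB": 1e3, "MB": 1e6, "GB": 1e9, "TB": 1e12}
    for suffix in sorted(units, key=len, reverse=True):  # try 'kB' before 'B'
        if s.endswith(suffix):
            return float(s[: -len(suffix)]) * units[suffix]
    raise ValueError(f"unrecognized size: {s}")

def parse_net_io(column):
    """Split a NET I/O cell such as '12.2MB / 5.28MB' into (rx_bytes, tx_bytes)."""
    rx, tx = (part.strip() for part in column.split("/"))
    return parse_size(rx), parse_size(tx)

print(parse_net_io("12.2MB / 5.28MB"))
```

For the edges themselves, a capture tool (e.g. tcpdump on the bridge interface, grouped by container IP pairs) is what actually yields per-pair byte counts.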
