What does increasing NET I/O value in docker stats mean? - docker

I am running the command docker stats <container_id> > performance.txt over a period of 1 hour during multi-user testing. Some stats, like memory and CPU, increase and then normalize. But the NET I/O value keeps increasing.
At the start, the output was:
NAME         CPU %    MEM USAGE / LIMIT     NET I/O         BLOCK I/O   PIDS
my-service   0.10%    5.63GiB / 503.6GiB    310MB / 190MB   0B / 0B     80
NAME         CPU %    MEM USAGE / LIMIT     NET I/O         BLOCK I/O   PIDS
my-service   0.20%    5.63GiB / 503.6GiB    310MB / 190MB   0B / 0B     80
After 1 hour, it is:
NAME         CPU %     MEM USAGE / LIMIT      NET I/O         BLOCK I/O   PIDS
my-service   116.26%   11.54GiB / 503.6GiB    891MB / 523MB   0B / 0B     89
NAME         CPU %     MEM USAGE / LIMIT      NET I/O         BLOCK I/O   PIDS
my-service   8.52%     11.54GiB / 503.6GiB    892MB / 523MB   0B / 0B     89
As shown above, the NET I/O value is always increasing. What could this mean?
The Docker documentation says it is the input received and output sent by the container. If so, why does it keep increasing? Is there some issue with the image running in the container?

NET I/O is a cumulative counter. It only goes up (when your app receives and sends data).
https://docs.docker.com/engine/reference/commandline/stats/
Column name   Description
NET I/O       The amount of data the container has sent and received over its network interface
So it accumulates over time, unlike, say, CPU %, which shows how much CPU the container is using right now.
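If you want per-interval traffic rather than the running total, one option is to sample the counter periodically and subtract consecutive values. A minimal sketch, assuming the container name my-service from the question and a Docker CLI whose docker stats supports the --format flag:
# Log a timestamped NET I/O sample every 60 seconds; the traffic for any
# interval is the difference between the cumulative values at its endpoints.
while true; do
  echo "$(date +%s) $(docker stats --no-stream --format '{{.NetIO}}' my-service)" >> net_io_samples.txt
  sleep 60
done
Each logged line then contains a Unix timestamp and a value like 310MB / 190MB, so the growth between any two samples is the traffic for that window.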

The docker stats command returns a live data stream for running containers.
It's the total amount of data passed over the network since the container started. From the definition of stream:
(computing) a continuous flow of data or instructions
The documentation doesn't say this explicitly, but you can infer it from the terms continuous and stream. Perhaps the documentation could be made a bit clearer in that respect.

Related

How do I make Docker container resources mutually exclusive?

I want multiple running containers to have mutually exclusive resources. For example, if the host has CPU cores id0 through id63 and 32 cores are allocated to each of two containers, the sets of cores assigned to them should be mutually exclusive. Likewise, when the host has 16GB of RAM, I want to allocate 8GB to each container so that one container does not affect the memory usage of another.
Is there a good way to do this?
I think all you need is to limit container resources. This way you can ensure that no container uses more than X cores and/or Y RAM. To limit CPU usage to 1 core, add --cpus=1.0 to your docker run command. To limit RAM to 8 gigabytes, add -m=8g. Putting it all together:
docker run --rm --cpus=1 -m=8g debian:buster cat /dev/stdout
And if you look at docker stats you will see that the memory limit is applied (there is no indication for CPU, though):
CONTAINER ID   NAME            CPU %   MEM USAGE / LIMIT   MEM %   NET I/O      BLOCK I/O   PIDS
8d9a33b00950   funny_shirley   0.00%   1MiB / 8GiB         0.10%   6.7kB / 0B   0B / 0B     1
Read more in the docs.
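If strictly disjoint core IDs are required, as in the question's 64-core example, one possible refinement (not part of the answer above) is the --cpuset-cpus flag, which pins a container to explicit cores. A hedged sketch, using placeholder image names:
# Container A: cores 0-31 and at most 8 GB of RAM (my-image-a is a placeholder)
docker run -d --name svc-a --cpuset-cpus="0-31" -m=8g my-image-a
# Container B: cores 32-63 and at most 8 GB of RAM (my-image-b is a placeholder)
docker run -d --name svc-b --cpuset-cpus="32-63" -m=8g my-image-b
Unlike --cpus, which only caps the amount of CPU time, --cpuset-cpus makes the core assignment itself disjoint.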

Network usage data between every two pairs of docker containers

I have a few micro-services running in Docker containers (one service in each container).
How do I find the network usage between every pair of Docker containers, so that I can build a graph with containers as vertices and the number of bytes transmitted/received as edge weights?
I used cAdvisor, but it gives me the overall network usage of each container.
Start with docker stats:
$ docker stats
CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
a645ca0d8feb   wp_wordpress_1   0.00%   135.5MiB / 15.55GiB   0.85%   12.2MB / 5.28MB   2.81MB / 9.12MB   11
95e1649c5b79   wp_db_1          0.02%   203MiB / 15.55GiB     1.28%   6.44MB / 3.11MB   0B / 1.08GB       30
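Note that docker stats only reports per-container totals, not per-pair traffic, but those totals can at least be captured in a machine-readable form. A minimal sketch, assuming a Docker CLI whose docker stats supports the --format flag:
# One row per container: name plus cumulative received / transmitted bytes
docker stats --no-stream --format 'table {{.Name}}\t{{.NetIO}}'
Edge weights between specific container pairs would still need packet-level accounting that docker stats does not provide.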

Able to malloc more than docker-compose mem_limit

I'm trying to limit my container so that it doesn't take up all the RAM on the host. From the Docker docs I understand that --memory limits the RAM and --memory-swap limits (RAM+swap). From the docker-compose docs it looks like the terms for those are mem_limit and memswap_limit, so I've constructed the following docker-compose file:
> cat docker-compose.yml
version: "2"
services:
  stress:
    image: progrium/stress
    command: '-m 1 --vm-bytes 15G --vm-hang 0 --timeout 10s'
    mem_limit: 1g
    memswap_limit: 2g
The progrium/stress image just runs stress, which in this case spawns a single thread which requests 15GB RAM and holds on to it for 10 seconds.
I'd expect this to crash, since 15>2. (It does crash if I ask for more RAM than the host has.)
The kernel has cgroups enabled, and docker stats shows that the limit is being recognised:
> docker stats
CONTAINER      CPU %   MEM USAGE / LIMIT   MEM %    NET I/O     BLOCK I/O    PIDS
7624a9605c70   0.00%   1024MiB / 1GiB      99.99%   396B / 0B   172kB / 0B   2
So what's going on? How do I actually limit the container?
Update:
Watching free, it looks like RAM usage is effectively limited (only 1GB of RAM is used) but swap is not: the container gradually increases swap usage until it has eaten through all of the swap and stress crashes (it takes about 20 seconds to get through 5GB of swap on my machine).
Update 2:
Setting mem_swappiness: 0 causes an immediate crash when requesting more memory than mem_limit, regardless of memswap_limit.
Running docker info shows WARNING: No swap limit support
According to https://docs.docker.com/engine/installation/linux/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities this is disabled by default ("Memory and swap accounting incur an overhead of about 1% of the total available memory and a 10% overall performance degradation.") You can enable it by editing the /etc/default/grub file:
Add or edit the GRUB_CMDLINE_LINUX line to add the following two key-value pairs:
GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1"
then update GRUB with update-grub and reboot.
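After the reboot, a quick way to confirm that swap accounting is active, assuming the same compose file as above, is to check that the warning is gone and re-run the stress container:
# The "No swap limit support" warning should no longer be printed
docker info | grep -i "swap limit"

# With mem_limit: 1g and memswap_limit: 2g, stress should now be killed once
# it exceeds 2 GB of RAM + swap instead of eating through all of the swap
docker-compose up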

Resource consumption inside a Docker container

CONTAINER      CPU %   MEM USAGE / LIMIT     MEM %   NET I/O       BLOCK I/O     PIDS
48c16e180af6   0.20%   91.48MiB / 31.31GiB   0.29%   3.86kB / 0B   85.3MB / 0B   33
f734efe5a249   0.00%   472KiB / 31.31GiB     0.00%   3.97kB / 0B   12.3kB / 0B   1
165a7b031093   0.00%   480KiB / 31.31GiB     0.00%   9.49kB / 0B   3.66MB / 0B   1
Does anyone know how to get the resource consumption of a specific Docker container from within its own running environment?
From outside a container, we can get it easily with the docker stats command. But if I try to measure resource consumption inside a container, I get the consumption (RAM, CPU) of the physical machine the container runs on.
Another option is the htop command, but its results do not match what docker stats reports.
If you want the resource consumption of the processes inside the container, you can exec into the container and monitor them:
docker exec -it <container-name> watch ps -aux
Note that inside the container you will only see the container's own processes; it has no visibility into the Docker processes running on the host.
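If you need the container's own totals from inside (rather than per-process numbers), the cgroup accounting files that Docker relies on are typically visible inside the container as well. A minimal sketch, assuming a cgroup v1 host; on cgroup v2 the file names differ:
# Current memory usage and limit of this container, in bytes (cgroup v1)
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
cat /sys/fs/cgroup/memory/memory.limit_in_bytes

# Cumulative CPU time consumed by this container, in nanoseconds (cgroup v1)
cat /sys/fs/cgroup/cpuacct/cpuacct.usage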

Docker stat network traffic

I want to ask two questions about docker stats. For example:
NAME          CPU %   MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O    PIDS
container_1   1.52%   11.72MiB / 7.388GiB   0.15%   2.99GB / 372MB   9.4MB / 0B   9
In this situation the NET I/O column shows 2.99GB / 372MB.
How much time is reflected in that value? One hour, or all of the time the container has been running?
And how can I check a Docker container's network traffic for a given hour or minute?
I would appreciate any other advice. Thank you.
This blog explains the network I/O of the docker stats command:
It displays total bytes received (RX) and transmitted (TX).
If you need finer-grained access, the blog also suggests using the network pseudo-files on your host system:
$ CONTAINER_PID=`docker inspect -f '{{ .State.Pid }}' $CONTAINER_ID`
$ cat /proc/$CONTAINER_PID/net/dev
To your second part: I'm not aware of any built-in method to get the traffic over a specific period; others might correct me. I think the easiest solution is to poll one of the two interfaces and calculate the differences yourself.
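For example, here is a rough sketch that measures the container's traffic over a one-minute window by sampling the pseudo-file twice, assuming the interface inside the container's network namespace is eth0 and the usual /proc/net/dev column layout:
# Resolve the container's PID so we can read the counters of its network namespace
CONTAINER_PID=$(docker inspect -f '{{ .State.Pid }}' "$CONTAINER_ID")

# On the eth0 line of /proc/<pid>/net/dev, field 2 is RX bytes and field 10 is TX bytes
RX1=$(awk '/eth0:/ {print $2}'  /proc/$CONTAINER_PID/net/dev)
TX1=$(awk '/eth0:/ {print $10}' /proc/$CONTAINER_PID/net/dev)

sleep 60   # measurement window: one minute

RX2=$(awk '/eth0:/ {print $2}'  /proc/$CONTAINER_PID/net/dev)
TX2=$(awk '/eth0:/ {print $10}' /proc/$CONTAINER_PID/net/dev)

echo "received:    $((RX2 - RX1)) bytes"
echo "transmitted: $((TX2 - TX1)) bytes"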
