Disk usage shown by cAdvisor - monitoring

Following this guide I have set up cAdvisor to monitor the local Docker containers. Prometheus is scraping cAdvisor for data, which is visualized in Grafana.
I am trying to get the disk usage of all the Docker containers running on the host. I want an output similar to the one shown after running
docker system df
TYPE            TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images          21      10       1.18GB    489.9MB (41%)
Containers      10      8        27B       0B (0%)
Local Volumes   16      5        2.369GB   1.615GB (68%)
Build Cache     0       0        0B        0B
Grafana is used to visualize the data. In order to get disk usage I am running the following query to fetch data from Prometheus.
sum(container_fs_usage_bytes)
Grafana screenshot
The problem is that the value I get is the usage of the entire overlay filesystem (82GB), not the per-container usage.
The following filesystem-usage series are shown by cAdvisor (127.0.0.1/metrics):
# TYPE container_fs_usage_bytes gauge
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="",container_label_com_docker_compose_container_number="",container_label_com_docker_compose_oneoff="",container_label_com_docker_compose_project="",container_label_com_docker_compose_service="",container_label_com_docker_compose_version="",container_label_maintainer="",container_label_org_label_schema_group="",device="/dev/sda2",id="/",image="",name=""} 4.4317888512e+10
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="",container_label_com_docker_compose_container_number="",container_label_com_docker_compose_oneoff="",container_label_com_docker_compose_project="",container_label_com_docker_compose_service="",container_label_com_docker_compose_version="",container_label_maintainer="",container_label_org_label_schema_group="",device="overlay",id="/",image="",name=""} 4.4317888512e+10
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="",container_label_com_docker_compose_container_number="",container_label_com_docker_compose_oneoff="",container_label_com_docker_compose_project="",container_label_com_docker_compose_service="",container_label_com_docker_compose_version="",container_label_maintainer="",container_label_org_label_schema_group="",device="shm",id="/",image="",name=""} 0
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="",container_label_com_docker_compose_container_number="",container_label_com_docker_compose_oneoff="",container_label_com_docker_compose_project="",container_label_com_docker_compose_service="",container_label_com_docker_compose_version="",container_label_maintainer="",container_label_org_label_schema_group="",device="tmpfs",id="/",image="",name=""} 0
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="0616b7e25768c2764fd083b53a061c53c6a2ffea9f8d5e74f6ec61fbc03aeeba",container_label_com_docker_compose_container_number="1",container_label_com_docker_compose_oneoff="False",container_label_com_docker_compose_project="dockprom",container_label_com_docker_compose_service="prometheus",container_label_com_docker_compose_version="1.17.1",container_label_maintainer="The Prometheus Authors <prometheus-developers@googlegroups.com>",container_label_org_label_schema_group="monitoring",device="/dev/sda2",id="/docker/6fbef095b2cf0c3f65e906d4ddca4102cd0646590c4003e50186af40f02ef805",image="prom/prometheus:v2.4.2",name="prometheus"} 409600
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="19747419ff433903067d8681ee6149861347fcf65e7db246f4b56301feb53b78",container_label_com_docker_compose_container_number="1",container_label_com_docker_compose_oneoff="False",container_label_com_docker_compose_project="dockprom",container_label_com_docker_compose_service="caddy",container_label_com_docker_compose_version="1.17.1",container_label_maintainer="",container_label_org_label_schema_group="monitoring",device="/dev/sda2",id="/docker/611fe3e5812d9c3a3abc9dab27fd8a91f68c2e3acc3257b5926eae0cbf8914cf",image="stefanprodan/caddy",name="caddy"} 53248
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="38ec70818559317528d83c7c8d889ab9ff2dcd78637a3e9cae71083014abc281",container_label_com_docker_compose_container_number="1",container_label_com_docker_compose_oneoff="False",container_label_com_docker_compose_project="dockprom",container_label_com_docker_compose_service="grafana",container_label_com_docker_compose_version="1.17.1",container_label_maintainer="",container_label_org_label_schema_group="monitoring",device="/dev/sda2",id="/docker/70cdb5ec332973c7a70cf339f5abc48afa5d8f6e6ff5ba0216031ff1c86aa2c5",image="grafana/grafana:5.2.4",name="grafana"} 204800
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="42eb1e0967f80c097581f64bace779010aac94368f2fd366215b2a2506340a60",container_label_com_docker_compose_container_number="1",container_label_com_docker_compose_oneoff="False",container_label_com_docker_compose_project="dockprom",container_label_com_docker_compose_service="pushgateway",container_label_com_docker_compose_version="1.17.1",container_label_maintainer="The Prometheus Authors <prometheus-developers@googlegroups.com>",container_label_org_label_schema_group="monitoring",device="/dev/sda2",id="/docker/39049775176f8315f4ee3acf8167a6986ee1ce4fe89cebc20061accec8cdb5cf",image="prom/pushgateway",name="pushgateway"} 69632
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="58ee943e7e48ae304fa7040d44ebeeb440627d311716a70665f4c43f16e6ad54",container_label_com_docker_compose_container_number="1",container_label_com_docker_compose_oneoff="False",container_label_com_docker_compose_project="dockprom",container_label_com_docker_compose_service="nodeexporter",container_label_com_docker_compose_version="1.17.1",container_label_maintainer="",container_label_org_label_schema_group="monitoring",device="/dev/sda2",id="/docker/6f4c169111b89ea2bb857fbd0cf52c1c729b17dbdd8e7a8e011a1f9ba7128e90",image="prom/node-exporter:v0.16.0",name="nodeexporter"} 184320
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="db9a938539bf7f1b610cda7420e62088d2bd27c50f52b2510b3bfc9d6c099744",container_label_com_docker_compose_container_number="1",container_label_com_docker_compose_oneoff="False",container_label_com_docker_compose_project="dockprom",container_label_com_docker_compose_service="cadvisor",container_label_com_docker_compose_version="1.17.1",container_label_maintainer="",container_label_org_label_schema_group="monitoring",device="/dev/sda2",id="/docker/2d9fee9bd94d9d627259f04e17468840700a78267a894f3fbedda0d375c5e985",image="google/cadvisor:v0.31.0",name="cadvisor"} 86016
container_fs_usage_bytes{container_label_com_docker_compose_config_hash="fdcc8c8d658803bc1254fab7a308bc999f76e777a0f414d2f6c58024530cdfa8",container_label_com_docker_compose_container_number="1",container_label_com_docker_compose_oneoff="False",container_label_com_docker_compose_project="dockprom",container_label_com_docker_compose_service="alertmanager",container_label_com_docker_compose_version="1.17.1",container_label_maintainer="",container_label_org_label_schema_group="monitoring",device="/dev/sda2",id="/docker/c6f72c08c11cfd862fa565bfd190744555ff2d345dfce553b8839a4d13c0d0d2",image="prom/alertmanager:v0.15.2",name="alertmanager"} 73728
Here is the relevant line from df -h on the host:
Filesystem   Size   Used   Avail   Use%   Mounted on
overlay      228G   39G    178G    18%    /var/lib/docker/overlay2/d3de4ea22cdec9aac513d0799b43b0c7fefdb7b75391b6c7b4bc35f46eec817c/merged
How can I get the same values in Grafana from cAdvisor as the ones returned by docker system df?

Grafana doesn't collect or store any data itself. It only visualizes data that is already collected and stored in a supported time-series database.
If cAdvisor exposes the metrics you need, configure a cAdvisor storage plugin (InfluxDB is a good start) and visualize them in Grafana.
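Alternatively, since Prometheus is already scraping cAdvisor in this setup, the query itself can be restricted to per-container series. A minimal sketch, assuming the label values shown in the /metrics output above (the host-level series have id="/" and an empty name):
sum(container_fs_usage_bytes{name!=""})
or, equivalently for the cgroup paths shown above:
sum(container_fs_usage_bytes{id=~"/docker/.+"})
Note that container_fs_usage_bytes only covers what cAdvisor attributes to each container (roughly its writable layer), so it will not reproduce the image, volume and build-cache totals of docker system df exactly.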

Related

Docker container update --memory didn't work as expected

Good morning all,
In the process of teaching myself Docker, I've run into a problem.
I created a Docker container from a WordPress image via docker compose.
[root@vps672971 ~]# docker ps -a
CONTAINER ID   IMAGE              COMMAND                  CREATED        STATUS   PORTS                  NAMES
57bb123aa365   wordpress:latest   "docker-entrypoint.s…"   16 hours ago   Up 2     0.0.0.0:8001->80/tcp   royal-by-jds-wordpress-container
I would like to allocate more memory to this container; however, after running the following command, the information returned by docker stats is not what I expect.
docker container update --memory 3GB --memory-swap 4GB royal-by-jds-wordpress-container
docker stats
CONTAINER ID   NAME                               CPU %   MEM USAGE / LIMIT     MEM %   NET I/O       BLOCK I/O   PIDS
57bb123aa365   royal-by-jds-wordpress-container   0.01%   9.895MiB / 1.896GiB   0.51%   2.68kB / 0B   0B / 0B     6
I also tried querying the Docker Engine API to retrieve information about my container, but the limit it displays is not correct either.
curl --unix-socket /var/run/docker.sock http:/v1.21/containers/royal-by-jds-wordpress-container/stats
[...]
"memory_stats":{
"usage":12943360,
"max_usage":12955648,
"stats":{},
"limit":2035564544
},
[...]
It seems that the modification of the memory allocated to the container didn't work.
Anyone have an idea?
Thank you in advance.
Maxence
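One way to narrow this down (a rough sketch, not a confirmed fix) is to compare the limit Docker has recorded for the container with the limit the kernel cgroup is actually enforcing:
# Limits stored by Docker (container name and ID taken from the question)
docker inspect --format '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' royal-by-jds-wordpress-container
# Limit enforced by the kernel on a cgroup v1 host (path may differ on your system)
cat /sys/fs/cgroup/memory/docker/57bb123aa365*/memory.limit_in_bytes
If HostConfig.Memory still shows 0, the update never took effect. If it shows 3221225472 but docker stats keeps reporting a ~1.9GiB limit, check how much memory the VM itself has: docker stats reports the host's total memory as the limit when no smaller effective limit applies, and 1.896GiB looks like the total RAM of a 2GB VPS.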

Docker daemon memory consumption grows over time

Here's the scenario:
On a Debian GNU/Linux 9 (stretch) VM I have two containers running. The day before yesterday I got a warning from the monitoring that memory usage is relatively high. After looking at the VM I could see that it is not the containers but the Docker daemon itself that needs the memory. (htop screenshot)
After a restart of the service I noticed memory demand rising again over the next two days. See the graphic.
RAM + Swap overview
Is there a known memory leak for this version?
Docker version
Memory development (containers) after 2 days:
- Container 1 is unchanged
- Container 2 increased from 21.02MiB to 55MiB
Memory development (VM) after 2 days:
- MEM on the machine increased from 273M (after reboot) to 501M
dockerd:
- after restart: 1.3% MEM
- 2 days later: 6.0% MEM
Monitor your containers to see if their memory usage changes over time:
> docker stats
CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT     MEM %   NET I/O       BLOCK I/O         PIDS
623104d00e43   hq     0.09%   81.16MiB / 15.55GiB   0.51%   6.05kB / 0B   25.5MB / 90.1kB   3
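It can also help to record the daemon's memory alongside the containers', so you can tell which one is actually growing. A rough sketch (the log file path is just an example), to be run from cron every few hours:
# Append a timestamped snapshot of dockerd's resident memory (KiB) and per-container usage
date >> /var/log/docker-mem.log
ps -C dockerd -o rss=,pcpu= >> /var/log/docker-mem.log
docker stats --no-stream --format '{{.Name}} {{.MemUsage}}' >> /var/log/docker-mem.log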
We saw a similar issue and it seems to have been related to the gcplogs logging driver. We saw the problem on Docker 19.03.6 and 19.03.9 (the most up-to-date versions we can easily use).
Switching back to using a log forwarding container (e.g. logspout) resolved the issue for us.
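To check whether you are affected by the same thing, the daemon-wide and per-container logging drivers can be inspected; a sketch, reusing the hq container from the answer above:
# Logging driver configured for the daemon
docker info --format '{{.LoggingDriver}}'
# Logging driver actually used by a given container
docker inspect --format '{{.HostConfig.LogConfig.Type}}' hq
Switching the daemon default back to json-file is done via the "log-driver" key in /etc/docker/daemon.json followed by a daemon restart.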

Network usage data between every pair of docker containers

I have a few micro-services running in Docker containers (one service per container).
How do I find the network usage between every pair of Docker containers, so that I can build a graph with containers as vertices and the number of bytes transmitted/received as edge weights?
I used cAdvisor, but it only gives me the overall network usage of each container.
Start with docker stats:
$ docker stats
CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O         PIDS
a645ca0d8feb   wp_wordpress_1   0.00%   135.5MiB / 15.55GiB   0.85%   12.2MB / 5.28MB   2.81MB / 9.12MB   11
95e1649c5b79   wp_db_1          0.02%   203MiB / 15.55GiB     1.28%   6.44MB / 3.11MB   0B / 1.08GB       30
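If the per-container totals over time are enough, the NET I/O column can be sampled periodically; a minimal sketch:
# Print cumulative RX / TX per container once a minute
while true; do
  docker stats --no-stream --format 'table {{.Name}}\t{{.NetIO}}'
  sleep 60
done
For traffic broken down per pair of containers you would still need something at the network level (e.g. packet capture), since docker stats and cAdvisor only report totals per container.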

Docker stat network traffic

I want to ask two questions about docker stats.
For example:
NAME          CPU %   MEM USAGE / LIMIT     MEM %   NET I/O          BLOCK I/O    PIDS
container_1   1.52%   11.72MiB / 7.388GiB   0.15%   2.99GB / 372MB   9.4MB / 0B   9
In this example the NET I/O column shows 2.99GB / 372MB.
What time period does that cover? The last hour, or the whole lifetime of the container?
And how can I check a container's network traffic for a specific hour or minute?
I would appreciate any other advice.
Thank you.
This blog explains the NET I/O column of the docker stats command:
Displays total bytes received (RX) and transmitted (TX).
If you need finer-grained access, the blog also suggests using the network pseudo-files on your host system.
$ CONTAINER_PID=`docker inspect -f '{{ .State.Pid }}' $CONTAINER_ID`
$ cat /proc/$CONTAINER_PID/net/dev
As for your second question: I'm not aware of any built-in way to get the traffic over a specific period (others may correct me). I think the easiest solution is to poll one of the two interfaces above yourself and calculate the differences.
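A minimal sketch of that polling approach, building on the commands above and assuming the interface inside the container's network namespace is the usual eth0:
CONTAINER_PID=$(docker inspect -f '{{ .State.Pid }}' "$CONTAINER_ID")
# In /proc/<pid>/net/dev, after replacing ':' with a space: field 1 = interface, field 2 = RX bytes, field 10 = TX bytes
sample() { awk '/eth0:/ { gsub(":", " "); print $2, $10 }' "/proc/$CONTAINER_PID/net/dev"; }
read -r rx1 tx1 <<< "$(sample)"
sleep 60
read -r rx2 tx2 <<< "$(sample)"
echo "last 60s: RX $((rx2 - rx1)) bytes, TX $((tx2 - tx1)) bytes"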

Mixing cpu-shares and cpuset-cpus in Docker

I would like to run two containers with the following resource allocation:
Container "C1": reserved cpu1, shared cpu2 with 20 cpu-shares
Container "C2": reserved cpu3, shared cpu2 with 80 cpu-shares
If I run the two containers in this way:
docker run -d --name='C1' --cpu-shares=20 --cpuset-cpus="1,2" progrium/stress --cpu 2
docker run -d --name='C2' --cpu-shares=80 --cpuset-cpus="2,3" progrium/stress --cpu 2
I find that C1 takes 100% of cpu1 as expected but 50% of cpu2 (instead of 20%), and C2 takes 100% of cpu3 as expected but 50% of cpu2 (instead of 80%).
It looks like the --cpu-shares option is ignored.
Is there a way to obtain the behavior I'm looking for?
The docker run documentation describes that parameter as:
--cpu-shares=0 CPU shares (relative weight)
And contrib/completion/zsh/_docker#L452 includes:
"($help)--cpu-shares=[CPU shares (relative weight)]:CPU shares:(0 10 100 200 500 800 1000)"
So those values are not %-based.
The OP mentions that --cpu-shares=20/80 works with the following cpuset constraints:
docker run -ti --cpuset-cpus="0,1" C1 # instead of 1,2
docker run -ti --cpuset-cpus="3,4" C2 # instead of 2,3
(those values are validated/checked only since docker 1.9.1 with PR 16159)
Note: there is also CPU quota constraint:
The --cpu-quota flag limits the container’s CPU usage. The default 0 value allows the container to take 100% of a CPU resource (1 CPU).
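For completeness, a hypothetical example of the quota approach (not from the original answer): capping a container at half of one CPU, independent of what other containers do:
# 50000us of CPU time per 100000us scheduling period, i.e. ~50% of a single CPU
docker run -d --name=C3 --cpu-period=100000 --cpu-quota=50000 progrium/stress --cpu 1
Unlike --cpu-shares, which is a relative weight that only matters under contention, --cpu-quota is an absolute ceiling.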
