Thanks for taking the time to read this. My problem is the following: my auto-scaling policies are tied to a Docker container's memory, so the container is scaled based on its memory usage. Inside the container, the processes (top) show less memory used than "docker stats <id>" does. There are times when the container's RAM becomes saturated because the dentry/page cache is not being released.
docker stats does not show the actual RAM consumption that the container uses:
docker stats bf257938fa2d
CONTAINER ID   NAME                   CPU %   MEM USAGE / LIMIT   MEM %    NET I/O           BLOCK I/O     PIDS
bf257938fa2d   ce88cfdda8f09bc08101   0.00%   66.54MiB / 512MiB   13.00%   1.44MB / 1.26MB   40.3MB / 0B   0
**docker exec -it bf257938fa2d top**
top - 23:24:02 up 53 min, 0 users, load average: 0.01, 0.21, 0.21
Tasks: 6 total, 1 running, 5 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.7%us, 0.3%sy, 0.0%ni, 95.6%id, 0.2%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 15660100k total, 1989516k used, 13670584k free, 95920k buffers
Swap: 0k total, 0k used, 0k free, 1167184k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 11604 2468 2284 S 0.0 0.0 0:00.02 bash
6 root 20 0 309m 12m 7036 S 0.0 0.1 0:00.09 php-fpm
7 root 20 0 59292 7100 6052 S 0.0 0.0 0:00.00 nginx
8 nginx 20 0 59728 4824 3352 S 0.0 0.0 0:00.03 nginx
9 nginx 20 0 59728 4800 3328 S 0.0 0.0 0:00.02 nginx
70 root 20 0 15188 2040 1832 R 0.0 0.0 0:00.02 top
How could I make the reported RAM consumption match inside the container (top) and outside the container (docker stats)?
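One thing I could check is the container's own cgroup accounting (a rough sketch, assuming cgroup v1 paths are visible inside the container):
# Total usage as the cgroup sees it, followed by the cache/rss breakdown:
$ docker exec bf257938fa2d cat /sys/fs/cgroup/memory/memory.usage_in_bytes
$ docker exec bf257938fa2d grep -E "^(cache|rss|inactive_file) " /sys/fs/cgroup/memory/memory.stat
If cache accounts for most of the difference, that would explain the gap between the in-container top numbers and docker stats.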
Thank you
I am using Composer to run some system workers in the Docker container; on remote servers these workers are normally started as the www-data user.
When I run them in the Docker container they are started by the root user, which is not correct, because the www-data user then cannot stop them from the browser app.
composer.json
...
"require": {
...
},
"scripts": {
"worker:start": [
"php path/to/the/script"
],
},
...
Start the script in the Docker container:
composer worker:start
And the top output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 267152 36396 29584 S 0.0 0.2 0:00.12 php-fpm
91 root 20 0 19132 4216 2208 S 0.0 0.0 0:00.04 sendmail-mta
103 www-data 20 0 267152 8952 2136 S 0.0 0.1 0:00.00 php-fpm
104 www-data 20 0 267152 8952 2136 S 0.0 0.1 0:00.00 php-fpm
154 root 20 0 2528 580 488 S 0.0 0.0 0:00.00 timeout
156 root 20 0 124460 56344 27900 S 0.0 0.4 0:00.18 php
157 root 20 0 2528 576 484 S 0.0 0.0 0:00.00 timeout
159 root 20 0 124460 55484 28224 S 0.0 0.3 0:00.19 php
160 root 20 0 2528 584 488 S 0.0 0.0 0:00.00 timeout
161 root 20 0 129012 61356 28020 S 0.0 0.4 0:00.27 php
162 root 20 0 4100 3428 2920 S 0.0 0.0 0:00.02 bash
168 root 20 0 7016 3360 2820 T 0.0 0.0 0:00.02 top
189 root 20 0 2528 576 484 S 0.0 0.0 0:00.00 timeout
191 root 20 0 124460 54948 27436 S 0.0 0.3 0:00.17 php
192 root 20 0 2528 576 484 S 0.0 0.0 0:00.00 timeout
194 root 20 0 122280 54548 28080 S 0.0 0.3 0:00.15 php
195 root 20 0 2528 640 548 S 0.0 0.0 0:00.00 timeout
196 root 20 0 128968 60336 27972 S 0.0 0.4 0:00.23 php
197 root 20 0 7016 3352 2812 R 0.0 0.0 0:00.00 top
As you can see, only the php-fpm processes run as the www-data user.
How do I configure the Docker container to run all PHP processes as the www-data user instead of root?
The reason FPM is running as that user is that it is set in the FPM config file, so it doesn't run as the root user but as the user named in the config.
For example, somewhere in one of your FPM config files there are settings similar to the ones below:
[www]
user = www-data
group = www-data
Composer doesn't seem to do this, at least not by default or with its current configuration.
I generally suggest switching the user in the Docker container anyway, for security purposes. Put this at the end of your Dockerfile:
USER www-data
This is good security practice and should also fix your problem.
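If you would rather not rebuild the image right away, a quick alternative (just a sketch; the container name is a placeholder) is to start the workers as www-data explicitly:
$ docker exec -u www-data -it <your container name> composer worker:start
If you use docker-compose, setting user: www-data on the service should have the same effect.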
I am able to run containers fine with this combination.
But I noticed that there is no /etc/docker directory on the Linux side, and when I run ps -eF I get the output below. I was expecting to see dockerd and the container processes as children of dockerd.
rookie#MAIBENBEN-PC:/mnt/c/Users/rookie$ ps -eF
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
root 1 0 0 223 580 6 04:07 ? 00:00:00 /init
root 98 1 0 223 80 5 04:07 ? 00:00:00 /init
root 99 98 0 223 80 5 04:07 ? 00:00:00 /init
rookie 100 99 0 191067 43220 0 04:07 pts/0 00:00:00 docker serve --address unix:///home/rookie/.docker/run/d
root 101 98 0 0 0 1 04:07 ? 00:00:00 [init] <defunct>
root 103 98 0 223 80 7 04:07 ? 00:00:00 /init
root 104 103 0 384463 28888 0 04:07 pts/1 00:00:00 /mnt/wsl/docker-desktop/docker-desktop-proxy --distro-na
root 142 1 0 223 80 4 05:17 ? 00:00:00 /init
root 143 142 0 223 80 6 05:17 ? 00:00:00 /init
rookie 144 143 0 2509 5048 2 05:17 pts/2 00:00:00 -bash
rookie 221 144 0 2654 3264 7 05:21 pts/2 00:00:00 ps -eF
Your Ubuntu session, like all WSL2 sessions, is set up as a docker client, but the actual docker daemon is running in a separate WSL session named "docker-desktop".
I generally recommend leaving this instance alone, as it is auto-configured and managed by Docker Desktop, but if you really want to take a look, run:
wsl -d docker-desktop
... from PowerShell, CMD, or Windows Start/Run.
Note that this instance is running BusyBox, so some commands will be different than you expect. For instance, the -F argument is not valid for ps.
You'll see dockerd and the associated containerd processes here.
There's also a separate image, docker-desktop-data, but it is not bootable (there is no init in it). If you want to see the filesystem, at least, you can wsl --export it and examine the tar file that is created. I wrote up an answer on Super User with details a few months ago.
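For example (a sketch run from PowerShell; BusyBox ps does not accept -F, but with no options it lists every process, so a simple filter is enough):
PS> wsl -d docker-desktop sh -c "ps | grep -E 'dockerd|containerd'"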
I run my service app in a Docker container named app-api.
Result of top:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6420 root 20 0 30.572g 0.028t 38956 S 47.8 92.5 240:40.95 app
...
Result of htop:
PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command
6420 root 20 0 30.6G 29.0G 38956 S 47.1 92.5 4h21:53 app
6554 root 20 0 30.6G 29.0G 38956 S 6.6 92.5 23:04.15 app
6463 root 20 0 30.6G 29.0G 38956 S 2.0 92.5 27:29.53 app
6430 root 20 0 30.6G 29.0G 38956 S 0.0 92.5 25:30.61 app
6429 root 20 0 30.6G 29.0G 38956 S 5.3 92.5 26:36.17 app
6428 root 20 0 30.6G 29.0G 38956 S 10.0 92.5 23:56.10 app
6426 root 20 0 30.6G 29.0G 38956 S 6.0 92.5 8:09.12 app
6427 root 20 0 30.6G 29.0G 38956 S 0.0 92.5 23:03.81 app
6425 root 20 0 30.6G 29.0G 38956 S 0.0 92.5 0:00.00 app
6424 root 20 0 30.6G 29.0G 38956 S 0.0 92.5 25:42.46 app
6423 root 20 0 30.6G 29.0G 38956 S 4.6 92.5 26:10.82 app
6422 root 20 0 30.6G 29.0G 38956 S 12.0 92.5 23:24.68 app
6421 root 20 0 30.6G 29.0G 38956 S 2.0 92.5 4:32.47 app
2202 gitlab-ru 20 0 231M 70132 53620 S 5.3 0.2 4h54:21 nginx: worker process
2203 gitlab-ru 20 0 228M 59040 47680 S 0.7 0.2 54:44.83 nginx: worker process
281 root 19 -1 175M 58104 47728 S 0.0 0.2 0:17.76 /lib/systemd/systemd-journald
1036 root 20 0 1893M 38164 13332 S 0.0 0.1 0:38.17 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
...
Result of docker stats:
$ docker stats --no-stream
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
14654b8a4bfb app-letsencrypt 13.08% 244.5MiB / 31.41GiB 0.76% 183GB / 192GB 12.4GB / 4.64MB 23
a932dabbced8 app-api 60.50% 7.258GiB / 31.41GiB 23.10% 53.2GB / 10.6GB 48.1MB / 0B 14
2cebc542dda6 app-redis 0.12% 3.902MiB / 31.41GiB 0.01% 24.2kB / 0B 1.84GB / 655kB 4
As you can see, the 0.028t (~29G) reported by top is much more than the 7.258GiB reported by docker stats. The difference is about 29 - 7.258 > 20G of RAM.
Please help me understand how to detect what this phantom is that takes 20G of RAM. Or point me to where I should dig: is it a problem with my application, with Docker (20.10.1), or with the operating system (Ubuntu 18.04)?
UPD
Output from pprof (heap):
# runtime.MemStats
# Alloc = 7645359160
# TotalAlloc = 2552206192400
# Sys = 31227357832
# Lookups = 0
# Mallocs = 50990505448
# Frees = 50882282691
# HeapAlloc = 7645359160
# HeapSys = 29526425600
# HeapIdle = 21707890688
# HeapInuse = 7818534912
# HeapReleased = 9017090048
# HeapObjects = 108222757
# Stack = 1474560 / 1474560
# MSpan = 101848496 / 367820800
# MCache = 13888 / 16384
# BuckHashSys = 10697838
# GCSys = 1270984696
# OtherSys = 49937954
# NextGC = 11845576832
# LastGC = 1615583458331895138
# PauseNs = ..................
# NumGC = 839
# NumForcedGC = 0
# GCCPUFraction = 0.027290987331299785
# DebugGC = false
# MaxRSS = 31197982720
docker stats reports the resource usage of the container's cgroup:
$ docker run -it -m 1g --cpus 1.5 --name test-stats busybox /bin/sh
/ # cat /sys/fs/cgroup/memory/memory.usage_in_bytes
2629632
/ # cat /sys/fs/cgroup/memory/memory.limit_in_bytes
1073741824
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
150000
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_period_us
100000
From another window (the numbers vary slightly because the cat commands above have finished):
$ docker stats --no-stream test-stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
9a69d1323422 test-stats 0.00% 2.395MiB / 1GiB 0.23% 5.46kB / 796B 3MB / 0B 1
Note that this will differ from the overall host memory and CPU if you have specified limits on your containers. Without limits, the CPU quota will be -1 (unrestricted) and the memory limit will be set to the page counter's max value.
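For comparison, a sketch of the same checks in a container started without limits (the exact max value depends on the host):
$ docker run -it --rm --name test-nolimit busybox /bin/sh
/ # cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us         # -1, i.e. unrestricted
/ # cat /sys/fs/cgroup/memory/memory.limit_in_bytes # the page counter max, e.g. 9223372036854771712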
Trying to add up memory usage from the top command is very error prone. There are different types of memory in the Linux kernel (including disk cache), memory gets shared between multiple threads (which is why you likely see multiple PIDs for app, each with exactly the same memory), some memory may be mmap'ed without being backed by RAM, and there is a long list of other challenges. People who know much more about this than I do will say that the kernel doesn't even know when it's actually out of memory until it attempts to reclaim memory from many processes and those attempts all fail.
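If you want to see how the cgroup figure is composed for your container, you can read its memory.stat directly (a sketch, assuming cgroup v1, using the test-stats container from above):
$ docker exec test-stats grep -E '^(rss|cache|mapped_file|inactive_file) ' /sys/fs/cgroup/memory/memory.stat
The rss and cache lines roughly add up to memory.usage_in_bytes, which is why summing per-process RSS from top rarely matches the cgroup figure.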
You are comparing top/htop RES mem (man):
The non-swapped physical memory a task has used.
RES = CODE + DATA.
with docker stats CLI output (doc):
On Linux, the Docker CLI reports memory usage by subtracting cache usage from the total memory usage.
Use the docker stats API and you will get a much more granular view, e.g. the memory stats:
{
"total_pgmajfault": 0,
"cache": 0,
"mapped_file": 0,
"total_inactive_file": 0,
"pgpgout": 414,
"rss": 6537216,
"total_mapped_file": 0,
"writeback": 0,
"unevictable": 0,
"pgpgin": 477,
"total_unevictable": 0,
"pgmajfault": 0,
"total_rss": 6537216,
"total_rss_huge": 6291456,
"total_writeback": 0,
"total_inactive_anon": 0,
"rss_huge": 6291456,
"hierarchical_memory_limit": 67108864,
"total_pgfault": 964,
"total_active_file": 0,
"active_anon": 6537216,
"total_active_anon": 6537216,
"total_pgpgout": 414,
"total_cache": 0,
"inactive_anon": 0,
"active_file": 0,
"pgfault": 964,
"inactive_file": 0,
"total_pgpgin": 477
}
You can see that memory is not just a single number; there are many types of memory, and each tool may report its own set and combination of them. My guess is that you will find the missing memory in the app's cache memory allocation.
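For reference, one way to pull that raw structure is through the Engine API on the Docker socket (a sketch; it assumes curl with --unix-socket support and jq are available, and uses the app-api container from the question):
$ curl -s --unix-socket /var/run/docker.sock \
    "http://localhost/containers/app-api/stats?stream=false" | jq '.memory_stats'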
You can check the overall basic memory allocation with the free command:
$ free -m
total used free shared buff/cache available
Mem: 2000 1247 90 178 662 385
Swap: 0 0 0
It is a normal state: Linux uses otherwise unused memory for buff/cache.
I have launched several Docker containers and, using docker stats, I have verified that one of them keeps increasing its RAM consumption from the moment it starts until it is restarted.
My question is whether there is any way to find out where that consumption comes from within the Docker container. Is there some way to check consumption inside the container, something in the style of docker stats but for the inside of the container?
Thanks for your cooperation.
Not sure if it's what you are asking for, but here's an example:
(Before you start):
Run a test container docker run --rm -it ubuntu
Install stress by typing apt-get update and apt-get install stress
Run stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 (it will start consuming memory)
1. with top
If you go to a new terminal you can type docker container exec -it <your container name> top and you will get something like the following:
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang top
top - 12:46:04 up 22 min, 0 users, load average: 1.48, 1.55, 1.12
Tasks: 4 total, 2 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.8 us, 0.8 sy, 0.0 ni, 78.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 6102828 total, 150212 free, 5396604 used, 556012 buff/cache
KiB Swap: 1942896 total, 1937508 free, 5388 used. 455368 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
285 root 20 0 4209376 4.007g 212 R 100.0 68.8 6:56.90 stress
1 root 20 0 18500 3148 2916 S 0.0 0.1 0:00.09 bash
274 root 20 0 36596 3072 2640 R 0.0 0.1 0:00.21 top
284 root 20 0 8240 1192 1116 S 0.0 0.0 0:00.00 stress
2. with ps aux
Again, from a new terminal you type docker container exec -it <your container name> ps aux
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18500 3148 pts/0 Ss 12:25 0:00 /bin/bash
root 284 0.0 0.0 8240 1192 pts/0 S+ 12:39 0:00 stress --vm-byt
root 285 99.8 68.8 4209376 4201300 pts/0 R+ 12:39 8:53 stress --vm-byt
root 286 0.0 0.0 34400 2904 pts/1 Rs+ 12:48 0:00 ps aux
My source for this stress thing is from this question: How to fill 90% of the free memory?
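To close the loop, you can compare this in-container view with what the host reports for the same container (dreamy_jang is the example container above):
$ docker stats --no-stream dreamy_jang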
I have a Rails app with Puma server and DelayedJob.
I did some load testing of it (multiple requests at the same time, etc.), and when I looked at htop I found a number of processes that made me suspicious that Puma is leaking / not killing processes. On the other hand, it may be normal behavior. I did see memory go up, though.
I have 2 Puma workers total in Rails configuration and 2 Delayed job workers.
Can someone with Puma experience confirm or dismiss my concern about a memory leak?
CPU[| 1.3%] Tasks: 54, 19 thr; 1 running
Mem[||||||||||||||||||||||||||||||||||||||||||||||||||||| 746/1652MB] Load average: 0.02 0.03 0.05
Swp[ 0/2943MB] Uptime: 1 day, 12:48:05
1024 admin 20 0 828M 183M 3840 S 0.0 11.1 0:00.00 puma: cluster worker 0: 819
1025 admin 20 0 828M 183M 3840 S 0.0 11.1 0:00.00 puma: cluster worker 0: 819
1026 admin 20 0 828M 183M 3840 S 0.0 11.1 0:02.68 puma: cluster worker 0: 819
1027 admin 20 0 828M 183M 3840 S 0.0 11.1 0:00.43 puma: cluster worker 0: 819
1028 admin 20 0 828M 183M 3840 S 0.0 11.1 0:07.04 puma: cluster worker 0: 819
1029 admin 20 0 828M 183M 3840 S 0.0 11.1 0:00.05 puma: cluster worker 0: 819
1022 admin 20 0 828M 183M 3840 S 0.0 11.1 0:13.23 puma: cluster worker 0: 819
1034 admin 20 0 829M 178M 3900 S 0.0 10.8 0:00.00 puma: cluster worker 1: 819
1035 admin 20 0 829M 178M 3900 S 0.0 10.8 0:00.00 puma: cluster worker 1: 819
1037 admin 20 0 829M 178M 3900 S 0.0 10.8 0:02.68 puma: cluster worker 1: 819
1038 admin 20 0 829M 178M 3900 S 0.0 10.8 0:00.44 puma: cluster worker 1: 819
1039 admin 20 0 829M 178M 3900 S 0.0 10.8 0:07.12 puma: cluster worker 1: 819
1040 admin 20 0 829M 178M 3900 S 0.0 10.8 0:00.00 puma: cluster worker 1: 819
1033 admin 20 0 829M 178M 3900 S 0.0 10.8 0:14.28 puma: cluster worker 1: 819
1043 admin 20 0 435M 117M 3912 S 0.0 7.1 0:00.00 delayed_job.0
1041 admin 20 0 435M 117M 3912 S 0.0 7.1 0:52.71 delayed_job.0
1049 admin 20 0 435M 116M 3872 S 0.0 7.1 0:00.00 delayed_job.1
1047 admin 20 0 435M 116M 3872 S 0.0 7.1 0:52.98 delayed_job.1
1789 postgres 20 0 125M 10964 7564 S 0.0 0.6 0:00.26 postgres: admin app_production_ [local] idle
1794 postgres 20 0 127M 11160 6460 S 0.0 0.7 0:00.18 postgres: admin app_production_ [local] idle
1798 postgres 20 0 125M 10748 7484 S 0.0 0.6 0:00.24 postgres: admin app_production_ [local] idle
1811 postgres 20 0 127M 10996 6424 S 0.0 0.6 0:00.11 postgres: admin app_production_ [local] idle
1817 postgres 20 0 127M 11032 6460 S 0.0 0.7 0:00.12 postgres: admin app_production_ [local] idle
1830 postgres 20 0 127M 11032 6460 S 0.0 0.7 0:00.14 postgres: admin app_production_ [local] idle
1831 postgres 20 0 127M 11036 6468 S 0.0 0.7 0:00.20 postgres: admin app_production_ [local] idle
1835 postgres 20 0 127M 11028 6460 S 0.0 0.7 0:00.06 postgres: admin app_production_ [local] idle
1840 postgres 20 0 125M 7288 4412 S 0.0 0.4 0:00.04 postgres: admin app_production_ [local] idle
1847 postgres 20 0 125M 7308 4432 S 0.0 0.4 0:00.06 postgres: admin app_production_ [local] idle
1866 postgres 20 0 125M 7292 4416 S 0.0 0.4 0:00.06 postgres: admin app_production_ [local] idle
1875 postgres 20 0 125M 7300 4424 S 0.0 0.4 0:00.04 postgres: admin app_production_ [local] idle
If the number of processes matches your concurrency configuration, I would say that's OK; if it keeps growing with every request, then you may have an issue with processes hanging. The default thread count for Puma, I believe, is 16. It also looks like you are using clustered mode, so it would have multiple processes and multiple threads per process.
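A quick way to tell threads from processes (a sketch; htop lists each thread on its own row by default, which is why several rows share the same 828M/183M figures):
$ ps -ef  | grep [p]uma          # one row per actual process
$ ps -eLf | grep [p]uma | wc -l  # row count including threads
In htop itself, pressing H toggles the display of userland threads, which collapses the list down to the real worker processes.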