I am using Composer to run some system workers in a Docker container. On remote servers these workers are normally started as the www-data user.
When I run them in the Docker container they are started as root, which is not correct, because the www-data user cannot stop them from the browser app.
composer.json
...
"require": {
...
},
"scripts": {
"worker:start": [
"php path/to/the/script"
],
},
...
Start the script in the Docker container:
composer worker:start
And the top output:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 267152 36396 29584 S 0.0 0.2 0:00.12 php-fpm
91 root 20 0 19132 4216 2208 S 0.0 0.0 0:00.04 sendmail-mta
103 www-data 20 0 267152 8952 2136 S 0.0 0.1 0:00.00 php-fpm
104 www-data 20 0 267152 8952 2136 S 0.0 0.1 0:00.00 php-fpm
154 root 20 0 2528 580 488 S 0.0 0.0 0:00.00 timeout
156 root 20 0 124460 56344 27900 S 0.0 0.4 0:00.18 php
157 root 20 0 2528 576 484 S 0.0 0.0 0:00.00 timeout
159 root 20 0 124460 55484 28224 S 0.0 0.3 0:00.19 php
160 root 20 0 2528 584 488 S 0.0 0.0 0:00.00 timeout
161 root 20 0 129012 61356 28020 S 0.0 0.4 0:00.27 php
162 root 20 0 4100 3428 2920 S 0.0 0.0 0:00.02 bash
168 root 20 0 7016 3360 2820 T 0.0 0.0 0:00.02 top
189 root 20 0 2528 576 484 S 0.0 0.0 0:00.00 timeout
191 root 20 0 124460 54948 27436 S 0.0 0.3 0:00.17 php
192 root 20 0 2528 576 484 S 0.0 0.0 0:00.00 timeout
194 root 20 0 122280 54548 28080 S 0.0 0.3 0:00.15 php
195 root 20 0 2528 640 548 S 0.0 0.0 0:00.00 timeout
196 root 20 0 128968 60336 27972 S 0.0 0.4 0:00.23 php
197 root 20 0 7016 3352 2812 R 0.0 0.0 0:00.00 top
As you can see, only the php-fpm processes run as the www-data user.
How can I configure the Docker container to run all PHP processes as the www-data user instead of root?
The reason FPM runs as that user is that the user is set in the FPM config file, so the FPM workers don't run as root but as the user given there.
For example, somewhere in one of your FPM config files there are settings similar to the ones below:
[www]
user = www-data
group = www-data
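If you are using the official php:fpm image, that pool file usually lives at /usr/local/etc/php-fpm.d/www.conf (an assumption based on that image's layout, not something from the question); you can locate the directive with:
grep -R "^user" /usr/local/etc/php-fpm.d/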
Composer doesn't seem to do this, at least not by default or with its current configuration; it just runs the script as the user that invoked it, which inside your container is root.
I suggest switching the user in the Docker container anyway, for security reasons. Put this at the end of your Dockerfile:
USER www-data
This is good security practice and should also fix your problem.
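A rough sketch of how the end of such a Dockerfile could look (the base image and paths are placeholders, not taken from the question):
# Placeholder base image; use whatever php-fpm image the project is built on
FROM php:8.2-fpm
# Example application path, purely illustrative
COPY --chown=www-data:www-data . /var/www/html
WORKDIR /var/www/html
# Drop privileges so that "composer worker:start" (and anything else run in the
# container) starts as www-data instead of root
USER www-data
With that in place, the workers started via composer worker:start run as www-data, so the browser app can stop them again.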
I am able to run containers fine with this combination.
But I noticed that there is no /etc/docker directory on the Linux side, and when I run ps -eF I get the output below. I was expecting dockerd, with the container processes as children of dockerd.
rookie@MAIBENBEN-PC:/mnt/c/Users/rookie$ ps -eF
UID PID PPID C SZ RSS PSR STIME TTY TIME CMD
root 1 0 0 223 580 6 04:07 ? 00:00:00 /init
root 98 1 0 223 80 5 04:07 ? 00:00:00 /init
root 99 98 0 223 80 5 04:07 ? 00:00:00 /init
rookie 100 99 0 191067 43220 0 04:07 pts/0 00:00:00 docker serve --address unix:///home/rookie/.docker/run/d
root 101 98 0 0 0 1 04:07 ? 00:00:00 [init] <defunct>
root 103 98 0 223 80 7 04:07 ? 00:00:00 /init
root 104 103 0 384463 28888 0 04:07 pts/1 00:00:00 /mnt/wsl/docker-desktop/docker-desktop-proxy --distro-na
root 142 1 0 223 80 4 05:17 ? 00:00:00 /init
root 143 142 0 223 80 6 05:17 ? 00:00:00 /init
rookie 144 143 0 2509 5048 2 05:17 pts/2 00:00:00 -bash
rookie 221 144 0 2654 3264 7 05:21 pts/2 00:00:00 ps -eF
Your Ubuntu session (like all your WSL2 sessions) is set up as a docker client, but the actual docker daemon runs in a separate WSL session named "docker-desktop".
I generally recommend leaving this instance alone, as it is auto-configured and managed by Docker Desktop, but if you really want to take a look, run:
wsl -d docker-desktop
... from PowerShell, CMD, or Windows Start/Run.
Note that this instance is running BusyBox, so some commands will be different than you expect. For instance, the -F argument is not valid for ps.
You'll see dockerd and the associated containerd processes here.
There's also a separate image, docker-desktop-data, but it is not bootable (there is no init in it). If you want to see the filesystem, at least, you can wsl --export it and examine the tar file that is created. I wrote up an answer on Super User with details a few months ago.
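For example, to grab a copy of its filesystem for inspection (the output path below is just an example):
wsl --export docker-desktop-data C:\temp\docker-desktop-data.tar
You can then open the resulting tar file in any archive tool and browse what Docker Desktop stores there (image layers, volumes, and so on).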
I want to mount a host directory into a container directory so I can get container-created files back into the host. I've investigated at least a dozen examples with no luck. As near as I can tell, the following should work.
C:\tmp>ls -al jmeter
total 0
drwxrwxrwx 1 0 0 0 May 22 19:25 .
drwxrwxrwx 1 0 0 0 May 22 19:36 ..
C:\tmp>docker run -v /tmp/jmeter:/tmp/jmeter -it ubuntu bash
root@62a046b1dd74:/# ls -al /tmp/jmeter
total 4
drwxr-xr-x 2 root root 40 May 23 02:00 .
drwxrwxrwt 1 root root 4096 May 23 02:00 ..
root@62a046b1dd74:/# touch /tmp/jmeter/bob.txt
root@62a046b1dd74:/# ls -al /tmp/jmeter
total 4
drwxr-xr-x 2 root root 60 May 23 02:01 .
drwxrwxrwt 1 root root 4096 May 23 02:00 ..
-rw-r--r-- 1 root root 0 May 23 02:01 bob.txt
root@62a046b1dd74:/# exit
exit
C:\tmp>ls -al jmeter
total 0
drwxrwxrwx 1 0 0 0 May 22 19:25 .
drwxrwxrwx 1 0 0 0 May 22 19:36 ..
C:\tmp>
My expectation is that /tmp/jmeter/bob.txt would exist on localhost.
FWIW, localhost here is Windows 10, but I have the same problem in a GitHub Action, which I believe runs on Linux.
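One possible explanation (an assumption, not something confirmed here) is that with Docker Desktop on Windows a host path like /tmp/jmeter is resolved inside the Docker VM rather than on the C: drive, so the mount may need the Windows path spelled out, along the lines of:
C:\tmp>docker run -v C:\tmp\jmeter:/tmp/jmeter -it ubuntu bash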
I have launched several Docker containers, and using docker stats I have verified that one of them keeps increasing its RAM consumption from the moment it starts until it is restarted.
My question is whether there is any way to find out, from within the Docker container, where that consumption comes from. Is there a way to check consumption inside the container, something in the style of docker stats but for the processes inside it?
Thanks for your cooperation.
Not sure if it's what you are asking for, but here's an example:
(Before you start):
Run a test container docker run --rm -it ubuntu
Install stress by typing apt-get update and apt-get install stress
Run stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 (it will start consuming memory)
1. with top
If you go to a new terminal you can type docker container exec -it <your container name> top and you will get something like the following:
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang top
top - 12:46:04 up 22 min, 0 users, load average: 1.48, 1.55, 1.12
Tasks: 4 total, 2 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.8 us, 0.8 sy, 0.0 ni, 78.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 6102828 total, 150212 free, 5396604 used, 556012 buff/cache
KiB Swap: 1942896 total, 1937508 free, 5388 used. 455368 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
285 root 20 0 4209376 4.007g 212 R 100.0 68.8 6:56.90 stress
1 root 20 0 18500 3148 2916 S 0.0 0.1 0:00.09 bash
274 root 20 0 36596 3072 2640 R 0.0 0.1 0:00.21 top
284 root 20 0 8240 1192 1116 S 0.0 0.0 0:00.00 stress
2. with ps aux
Again, from a new terminal you type docker container exec -it <your container name> ps aux
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18500 3148 pts/0 Ss 12:25 0:00 /bin/bash
root 284 0.0 0.0 8240 1192 pts/0 S+ 12:39 0:00 stress --vm-byt
root 285 99.8 68.8 4209376 4201300 pts/0 R+ 12:39 8:53 stress --vm-byt
root 286 0.0 0.0 34400 2904 pts/1 Rs+ 12:48 0:00 ps aux
My source for the stress command is this question: How to fill 90% of the free memory?
Thanks for taking the time to read this. My problem is the following: my auto-scaling policies are tied to a Docker container and fire when the container needs more memory. Inside the container, the processes (top) show less memory load than "docker stats <id>" does. There are times when the container's RAM becomes saturated because dentries and page cache are not reclaimed.
docker stats does not match the actual RAM consumption seen inside the container:
docker stats bf257938fa2d
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
bf257938fa2d ce88cfdda8f09bc08101 0.00% 66.54MiB / 512MiB 13.00% 1.44MB / 1.26MB 40.3MB / 0B 0
docker exec -it bf257938fa2d top
top - 23:24:02 up 53 min, 0 users, load average: 0.01, 0.21, 0.21
Tasks: 6 total, 1 running, 5 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.7%us, 0.3%sy, 0.0%ni, 95.6%id, 0.2%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 15660100k total, 1989516k used, 13670584k free, 95920k buffers
Swap: 0k total, 0k used, 0k free, 1167184k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 11604 2468 2284 S 0.0 0.0 0:00.02 bash
6 root 20 0 309m 12m 7036 S 0.0 0.1 0:00.09 php-fpm
7 root 20 0 59292 7100 6052 S 0.0 0.0 0:00.00 nginx
8 nginx 20 0 59728 4824 3352 S 0.0 0.0 0:00.03 nginx
9 nginx 20 0 59728 4800 3328 S 0.0 0.0 0:00.02 nginx
70 root 20 0 15188 2040 1832 R 0.0 0.0 0:00.02 top
How could I make the RAM consumption reported inside the container (top) match what is reported outside the container (docker stats)?
Thank you
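One way to see where the difference comes from, assuming the container is on cgroup v1 (an assumption on my part), is to read the memory controller statistics from inside the container; they split usage into process memory (rss) and page cache (cache):
grep -E '^(cache|rss) ' /sys/fs/cgroup/memory/memory.stat
If most of the usage shows up as cache rather than rss, the gap between top and docker stats is largely page cache, which the kernel can reclaim under memory pressure.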
I have a Rails app with a Puma server and DelayedJob.
I did some load testing of it (multiple requests at the same time, etc.), and when I looked at htop I found a number of processes that made me suspect Puma is leaking or not killing processes. On the other hand, it may be normal behavior. I did see memory go up, though.
I have 2 Puma workers in total in the Rails configuration and 2 DelayedJob workers.
Can someone with Puma experience confirm or dispel my concerns about a memory leak?
CPU[| 1.3%] Tasks: 54, 19 thr; 1 running
Mem[||||||||||||||||||||||||||||||||||||||||||||||||||||| 746/1652MB] Load average: 0.02 0.03 0.05
Swp[ 0/2943MB] Uptime: 1 day, 12:48:05
1024 admin 20 0 828M 183M 3840 S 0.0 11.1 0:00.00 puma: cluster worker 0: 819
1025 admin 20 0 828M 183M 3840 S 0.0 11.1 0:00.00 puma: cluster worker 0: 819
1026 admin 20 0 828M 183M 3840 S 0.0 11.1 0:02.68 puma: cluster worker 0: 819
1027 admin 20 0 828M 183M 3840 S 0.0 11.1 0:00.43 puma: cluster worker 0: 819
1028 admin 20 0 828M 183M 3840 S 0.0 11.1 0:07.04 puma: cluster worker 0: 819
1029 admin 20 0 828M 183M 3840 S 0.0 11.1 0:00.05 puma: cluster worker 0: 819
1022 admin 20 0 828M 183M 3840 S 0.0 11.1 0:13.23 puma: cluster worker 0: 819
1034 admin 20 0 829M 178M 3900 S 0.0 10.8 0:00.00 puma: cluster worker 1: 819
1035 admin 20 0 829M 178M 3900 S 0.0 10.8 0:00.00 puma: cluster worker 1: 819
1037 admin 20 0 829M 178M 3900 S 0.0 10.8 0:02.68 puma: cluster worker 1: 819
1038 admin 20 0 829M 178M 3900 S 0.0 10.8 0:00.44 puma: cluster worker 1: 819
1039 admin 20 0 829M 178M 3900 S 0.0 10.8 0:07.12 puma: cluster worker 1: 819
1040 admin 20 0 829M 178M 3900 S 0.0 10.8 0:00.00 puma: cluster worker 1: 819
1033 admin 20 0 829M 178M 3900 S 0.0 10.8 0:14.28 puma: cluster worker 1: 819
1043 admin 20 0 435M 117M 3912 S 0.0 7.1 0:00.00 delayed_job.0
1041 admin 20 0 435M 117M 3912 S 0.0 7.1 0:52.71 delayed_job.0
1049 admin 20 0 435M 116M 3872 S 0.0 7.1 0:00.00 delayed_job.1
1047 admin 20 0 435M 116M 3872 S 0.0 7.1 0:52.98 delayed_job.1
1789 postgres 20 0 125M 10964 7564 S 0.0 0.6 0:00.26 postgres: admin app_production_ [local] idle
1794 postgres 20 0 127M 11160 6460 S 0.0 0.7 0:00.18 postgres: admin app_production_ [local] idle
1798 postgres 20 0 125M 10748 7484 S 0.0 0.6 0:00.24 postgres: admin app_production_ [local] idle
1811 postgres 20 0 127M 10996 6424 S 0.0 0.6 0:00.11 postgres: admin app_production_ [local] idle
1817 postgres 20 0 127M 11032 6460 S 0.0 0.7 0:00.12 postgres: admin app_production_ [local] idle
1830 postgres 20 0 127M 11032 6460 S 0.0 0.7 0:00.14 postgres: admin app_production_ [local] idle
1831 postgres 20 0 127M 11036 6468 S 0.0 0.7 0:00.20 postgres: admin app_production_ [local] idle
1835 postgres 20 0 127M 11028 6460 S 0.0 0.7 0:00.06 postgres: admin app_production_ [local] idle
1840 postgres 20 0 125M 7288 4412 S 0.0 0.4 0:00.04 postgres: admin app_production_ [local] idle
1847 postgres 20 0 125M 7308 4432 S 0.0 0.4 0:00.06 postgres: admin app_production_ [local] idle
1866 postgres 20 0 125M 7292 4416 S 0.0 0.4 0:00.06 postgres: admin app_production_ [local] idle
1875 postgres 20 0 125M 7300 4424 S 0.0 0.4 0:00.04 postgres: admin app_production_ [local] idle
If the number of processes matches your concurrency configuration, I would say that's OK; if it keeps growing with every request, then you may have an issue with processes hanging. The default thread count for Puma, I believe, is 16. It also looks like you are using clustered mode, so it would have multiple processes and multiple threads per process.
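For reference, a minimal sketch of the kind of config/puma.rb that matches the htop output above (the numbers are illustrative assumptions, not taken from the question):
# config/puma.rb -- illustrative values only
workers 2      # two forked worker processes: "cluster worker 0" and "cluster worker 1"
threads 1, 5   # up to 5 request threads per worker; htop lists each thread as its own row
preload_app!   # optional: load the app before forking so workers share memory
With a configuration like this, the repeated "cluster worker 0" / "cluster worker 1" rows in htop are threads of the same two processes, so a stable number of rows is expected; only a steady growth of RES over time would point to a leak.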