Docker container keeps increasing RAM usage - docker

I have launched several Docker containers and, using docker stats, I have verified that one of them keeps increasing its RAM consumption from the moment it starts until it is restarted.
My question is whether there is any way to find out where that consumption comes from inside the container. Is there something in the style of docker stats, but for the processes running inside the container?
Thanks for your cooperation.

Not sure if it's what you are asking for, but here's an example:
(Before you start):
Run a test container: docker run --rm -it ubuntu
Install stress: apt-get update && apt-get install stress
Run stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 (it will start consuming memory)
1. With top
If you go to a new terminal, you can type docker container exec -it <your container name> top and you will get something like the following
(notice that the %MEM usage of PID 285 is 68.8%):
docker container exec -it dreamy_jang top
top - 12:46:04 up 22 min, 0 users, load average: 1.48, 1.55, 1.12
Tasks: 4 total, 2 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.8 us, 0.8 sy, 0.0 ni, 78.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 6102828 total, 150212 free, 5396604 used, 556012 buff/cache
KiB Swap: 1942896 total, 1937508 free, 5388 used. 455368 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
285 root 20 0 4209376 4.007g 212 R 100.0 68.8 6:56.90 stress
1 root 20 0 18500 3148 2916 S 0.0 0.1 0:00.09 bash
274 root 20 0 36596 3072 2640 R 0.0 0.1 0:00.21 top
284 root 20 0 8240 1192 1116 S 0.0 0.0 0:00.00 stress
2. With ps aux
Again, from a new terminal, type docker container exec -it <your container name> ps aux
(notice that the %MEM usage of PID 285 is 68.8%):
docker container exec -it dreamy_jang ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18500 3148 pts/0 Ss 12:25 0:00 /bin/bash
root 284 0.0 0.0 8240 1192 pts/0 S+ 12:39 0:00 stress --vm-byt
root 285 99.8 68.8 4209376 4201300 pts/0 R+ 12:39 8:53 stress --vm-byt
root 286 0.0 0.0 34400 2904 pts/1 Rs+ 12:48 0:00 ps aux
The stress command comes from this question: How to fill 90% of the free memory?
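If you just want the biggest consumers rather than an interactive view, you can also ask ps to sort for you. A small sketch (assuming the image ships procps, as the ubuntu image used above does; BusyBox's ps does not support --sort):

```shell
# List the five processes with the largest resident set size (RSS),
# highest first. Run this inside the container, or via
#   docker container exec -it <your container name> ps aux --sort=-rss
ps aux --sort=-rss | head -n 6
```

The first non-header line is the process eating the most physical memory, which is usually the one responsible for a steadily growing docker stats figure.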

Related

Does the pause container have PID 1 in the pod?

[root@k8s001 ~]# docker exec -it f72edf025141 /bin/bash
root@b33f3b7c705d:/var/lib/ghost# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 1012 4 ? Ss 02:45 0:00 /pause
root 8 0.0 0.0 10648 3400 ? Ss 02:57 0:00 nginx: master process nginx -g daemon off;
101 37 0.0 0.0 11088 1964 ? S 02:57 0:00 nginx: worker process
node 38 0.9 0.0 2006968 116572 ? Ssl 02:58 0:06 node current/index.js
root 108 0.0 0.0 3960 2076 pts/0 Ss 03:09 0:00 /bin/bash
root 439 0.0 0.0 7628 1400 pts/0 R+ 03:10 0:00 ps aux
This output comes from the internet; it says the pause container is the parent process of the other containers in the pod, and that if you attach to the pod or to other containers and run ps aux, you will see that.
Is that correct? In my k8s it is different: PID 1 is not /pause.
...Is that correct? In my k8s it is different: PID 1 is not /pause.
This has changed: pause no longer holds PID 1, despite being the first container created by the container runtime to set up the pod (e.g. cgroups, namespaces, etc.). Pause is isolated (hidden) from the rest of the containers in the pod, regardless of your ENTRYPOINT/CMD. See here for more background information.
By default, Docker will run your entrypoint (or the command, if there is no entrypoint) as PID 1. However, that is not necessarily always the case, since, depending on how you start the container, Docker (or your orchestrator) can also run its own init process as PID 1:
$ docker run -d --init --name test alpine sleep infinity
849efe38ecec439550738e981065ec4aff55ef5607f03b9fed975e2d3146b9b0
$ docker exec -ti test ps
PID USER TIME COMMAND
1 root 0:00 /sbin/docker-init -- sleep infinity
7 root 0:00 sleep infinity
8 root 0:00 ps
For more information on why you would want your entrypoint not to be PID 1, you can check this explanation from a tini developer:
Now, unlike other processes, PID 1 has a unique responsibility, which is to reap zombie processes.
Zombie processes are processes that:
Have exited.
Were not waited on by their parent process (wait is the syscall parent processes use to retrieve the exit code of their children).
Have lost their parent (i.e. their parent exited as well), which means they'll never be waited on by their parent.
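If you cannot pass --init at run time (for example because an orchestrator starts your containers), a common alternative is to bake an init into the image. A minimal sketch using tini on an Alpine base (the base image, tag, and final command here are only an illustration, not the original poster's setup):

```dockerfile
FROM alpine:3.19
# tini is packaged in Alpine; it runs as PID 1, reaps zombie
# processes, and forwards signals to the wrapped command.
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["sleep", "infinity"]
```

With this, ps inside the container shows /sbin/tini as PID 1 and your command as a child, just like the --init example above.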

What does the memory shown in docker stats really mean?

1) I start a container like this:
docker run --name test -idt python:3 python -m http.server
2) Then I try to check its memory usage, as follows:
a)
root@shubuntu1:~# ps aux | grep "python -m http.server"
root 17416 3.0 0.2 27368 19876 pts/0 Ss+ 17:11 0:00 python -m http.server
b)
root@shubuntu1:~# docker exec -it test ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.9 0.2 27368 19876 pts/0 Ss+ 09:11 0:00 python -m http.
c)
root@shubuntu1:~# docker stats --no-stream test
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d72f2ece6816 test 0.01% 12.45MiB / 7.591GiB 0.16% 3.04kB / 0B 0B / 0B 1
From both the Docker host and the Docker container, you can see that python -m http.server consumes 19876/1024 ≈ 19.41 MB of memory (RSS), but docker stats shows 12.45 MiB. Why does it report the container consuming even less memory than the PID 1 process inside the container?
rss        RSS      resident set size, the non-swapped physical memory that a task has used (in kiloBytes). (alias rssize, rsz).
MEM USAGE / LIMIT the total memory the container is using, and the total amount of memory it is allowed to use
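Part of the confusion is the unit: the RSS column from ps is in KiB, so it must be divided by 1024 to compare with the MiB figure from docker stats. A quick check:

```shell
# Convert the RSS reported by ps (KiB) into MiB.
echo 19876 | awk '{printf "%.2f MiB\n", $1 / 1024}'
# prints "19.41 MiB"
```

So 19876 KiB is about 19.41 MiB, still above the 12.45 MiB from docker stats. The remaining gap is plausibly accounting differences: the per-process RSS counts pages shared with other mappings, while docker stats reads the container's cgroup usage and (in recent versions) subtracts cache, so the two are not expected to match exactly.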

Docker stats memory anomaly

Thanks for taking the time to read this. My problem is the following: my autoscaling policies are tied to a Docker container, so the container is scaled up when it needs more memory. Inside the container, the processes (top) show less memory load than docker stats <id> does. At times the container's RAM becomes saturated because the dentry/page cache is not freed.
docker stats does not show the actual RAM consumption that the container uses:
docker stats bf257938fa2d
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
bf257938fa2d ce88cfdda8f09bc08101 0.00% 66.54MiB / 512MiB 13.00% 1.44MB / 1.26MB 40.3MB / 0B 0
docker exec -it bf257938fa2d top
top - 23:24:02 up 53 min, 0 users, load average: 0.01, 0.21, 0.21
Tasks: 6 total, 1 running, 5 sleeping, 0 stopped, 0 zombie
Cpu(s): 3.7%us, 0.3%sy, 0.0%ni, 95.6%id, 0.2%wa, 0.0%hi, 0.2%si, 0.0%st
Mem: 15660100k total, 1989516k used, 13670584k free, 95920k buffers
Swap: 0k total, 0k used, 0k free, 1167184k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1 root 20 0 11604 2468 2284 S 0.0 0.0 0:00.02 bash
6 root 20 0 309m 12m 7036 S 0.0 0.1 0:00.09 php-fpm
7 root 20 0 59292 7100 6052 S 0.0 0.0 0:00.00 nginx
8 nginx 20 0 59728 4824 3352 S 0.0 0.0 0:00.03 nginx
9 nginx 20 0 59728 4800 3328 S 0.0 0.0 0:00.02 nginx
70 root 20 0 15188 2040 1832 R 0.0 0.0 0:00.02 top
How could I get the reported RAM consumption to match inside the container (top) and outside the container (docker stats)?
Thank you
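One way to see what docker stats is actually reading is to look at the container's cgroup files yourself; top, by contrast, reads the host-wide /proc/meminfo, which is not namespaced, so the two will never agree out of the box. A sketch that tries the cgroup v1 paths and falls back to cgroup v2 (exact paths vary by distribution and Docker version):

```shell
# Current memory charged to this container's cgroup, in bytes
# (cgroup v1 path first, then the cgroup v2 equivalent).
cat /sys/fs/cgroup/memory/memory.usage_in_bytes 2>/dev/null \
  || cat /sys/fs/cgroup/memory.current

# The rss vs cache split that explains the top/docker stats gap.
grep -E '^(rss|cache) ' /sys/fs/cgroup/memory/memory.stat 2>/dev/null \
  || grep -E '^(anon|file) ' /sys/fs/cgroup/memory.stat
```

If the cache/file figure is large, the "saturation" is page cache charged to the container's cgroup, which the kernel can reclaim under pressure, not memory the processes in top are holding.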

Haproxy reload with different backend server ip

Is it possible to reload HAProxy when the backend server IPs have changed? If so, how?
This is essential for a Docker stack: on every deploy, new containers with different IPs replace the old ones.
In our implementation, services occasionally return 503 because an old HAProxy process has not terminated and is still accepting requests while its backend server is already gone. The httplog shows that some requests are forwarded to a backend that no longer exists.
# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 893 0.0 0.0 0 0 ? Zs 19:39 0:01 [haproxy] <defunct>
root 898 0.3 0.0 49416 9640 ? Ss 19:49 0:13 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 915 0.2 0.0 0 0 ? Zs 19:49 0:12 [haproxy] <defunct>
root 920 0.2 0.0 49308 10196 ? Ss 20:57 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 937 0.0 0.0 0 0 ? Zs 20:57 0:00 [haproxy] <defunct>
root 942 0.3 0.0 49296 9880 ? Ss 20:58 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 959 0.2 0.0 49296 9852 ? Ss 20:58 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
[Edit]
I am using Docker Swarm mode. I did try publishing the service's port to the host; however, the performance of Swarm's internal load balancer is poor, and I am trying to avoid it.
While it should be possible to change the HAProxy configuration to point to a different backend server, it seems like it would be easier to bind the Docker containers' ports to predictable ports on the Docker host, so the HAProxy config does not need to change.
For example:
docker run -d -p 9999:80 hello_world
And your HAProxy config could look like
backend something
# Assuming the Docker host's IP address is 192.0.2.123
server some-server 192.0.2.123:9999
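To answer the literal question as well: HAProxy does support graceful reloads via its -sf flag (standard HAProxy usage, not part of the original answer). A command sketch, reusing the config and pid-file paths from the ps output above:

```shell
# Start a new HAProxy process with the updated config, and tell the
# old processes (PIDs read from the pid file) to stop accepting new
# connections and exit once their in-flight requests finish.
haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid \
  -sf $(cat /var/run/haproxy.pid)
```

With -sf, old processes drain instead of competing for the listening socket, which avoids requests being routed by a stale process to a backend that is already gone.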

"ps" command prints wrong timestamp in container

I have a problem: in a container, when I execute the ps command, it prints a wrong, future timestamp, but the date command prints the right time. On the host, ps prints the right timestamp.
In the container:
#date
Tue Nov 1 14:20:11 CST 2016
#ps uax
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 10416 660 ? Ss Nov08 0:00 init [3]
root 58 0.0 0.0 49332 544 ? S Nov08 0:00 supervising syslog-ng
root 59 0.0 0.0 52512 2508 ? Ss Nov08 0:14 /sbin/syslog-ng
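A likely explanation: ps does not store a timestamp per process. It computes START as the kernel's boot time (the btime field in /proc/stat, which a container shares with its host) plus the process's start tick, so if the system clock was stepped after boot, date and ps will disagree. You can inspect btime yourself (a sketch; date -d is GNU date):

```shell
# The kernel's notion of boot time, as a Unix epoch timestamp.
btime=$(awk '/^btime/ {print $2}' /proc/stat)
echo "kernel boot time (epoch): $btime"
date -d "@$btime"   # the same value as a calendar date
```

If this date is inconsistent with the wall clock, every START column ps prints will be shifted by the same offset, which matches the "future timestamp" symptom above.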
