HAProxy reload with a different backend server IP - Docker

Is it possible to reload HAProxy when the backend server IP has changed? If so, how?
This is essential for a Docker stack: on every deploy, new containers with different IPs replace the old ones.
In our implementation, services occasionally return 503 because an old HAProxy process is not terminated and still accepts requests while its backend server is already gone. The httplog output shows that some requests are forwarded to a backend that no longer exists.
# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 893 0.0 0.0 0 0 ? Zs 19:39 0:01 [haproxy] <defunct>
root 898 0.3 0.0 49416 9640 ? Ss 19:49 0:13 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 915 0.2 0.0 0 0 ? Zs 19:49 0:12 [haproxy] <defunct>
root 920 0.2 0.0 49308 10196 ? Ss 20:57 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 937 0.0 0.0 0 0 ? Zs 20:57 0:00 [haproxy] <defunct>
root 942 0.3 0.0 49296 9880 ? Ss 20:58 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
root 959 0.2 0.0 49296 9852 ? Ss 20:58 0:01 /usr/local/sbin/haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid
[Edit]
I am using Docker swarm mode. I did try publishing the service's port to the host; however, the performance of swarm's internal load balancer is poor, and I would like to avoid it.

While it should be possible to change the HAProxy configuration to point to a different backend server, it seems like it would be easier to bind the Docker containers' ports to predictable ports on the Docker host, so the HAProxy config does not need to change.
For example:
# Publish the container's port on a fixed, known host port (9999 here)
docker run -d -p 9999:9999 hello_world
And your HAProxy config could look like
backend something
# Assuming the Docker host's IP address is 192.0.2.123
server some-server 192.0.2.123:9999
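If you do regenerate the HAProxy config with the new container IPs on each deploy instead, the standard mechanism is HAProxy's graceful reload: start a new process and pass the old PIDs with -sf, so the old process releases its listening sockets immediately and exits once its in-flight requests finish. A sketch, reusing the config and pidfile paths from the question:
# start a new haproxy and tell the old one(s) to finish their connections and exit
haproxy -D -f /app/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
This avoids the 503s caused by requests landing on an old instance whose backends no longer exist.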

Related

Does the pause container have PID 1 in the pod?

[root@k8s001 ~]# docker exec -it f72edf025141 /bin/bash
root@b33f3b7c705d:/var/lib/ghost# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 1012 4 ? Ss 02:45 0:00 /pause
root 8 0.0 0.0 10648 3400 ? Ss 02:57 0:00 nginx: master process nginx -g daemon off;
101 37 0.0 0.0 11088 1964 ? S 02:57 0:00 nginx: worker process
node 38 0.9 0.0 2006968 116572 ? Ssl 02:58 0:06 node current/index.js
root 108 0.0 0.0 3960 2076 pts/0 Ss 03:09 0:00 /bin/bash
root 439 0.0 0.0 7628 1400 pts/0 R+ 03:10 0:00 ps aux
This output comes from the internet; it says the pause container is the parent process of the other containers in the pod, and that if you attach to the pod or to another container and run ps aux, you would see that.
Is that correct? When I try it in my k8s cluster the result is different: PID 1 is not /pause.
...Is that correct? When I try it in my k8s cluster the result is different: PID 1 is not /pause.
This has changed: pause no longer holds PID 1, despite being the first container created by the container runtime to set up the pod (e.g. cgroups, namespaces, etc.). Pause is isolated (hidden) from the rest of the containers in the pod regardless of your ENTRYPOINT/CMD. See here for more background information.
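In current Kubernetes, each container in a pod gets its own PID namespace by default, so PID 1 inside your container is your own entrypoint; pause only appears as PID 1 for your containers when process namespace sharing is enabled for the pod (shareProcessNamespace: true). To see the pause container itself you have to look at the container runtime on the node, not inside the pod; a sketch, assuming a node that uses the Docker runtime like the one in the question:
# on the k8s node (not inside a pod): one k8s_POD_... sandbox container
# running the pause image exists per pod
docker ps | grep -i pause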
By default, Docker will run your entrypoint (or the command, if there is no entrypoint) as PID 1. However, that is not always the case: depending on how you start the container, Docker (or your orchestrator) can also run its own init process as PID 1:
$ docker run -d --init --name test alpine sleep infinity
849efe38ecec439550738e981065ec4aff55ef5607f03b9fed975e2d3146b9b0
$ docker exec -ti test ps
PID USER TIME COMMAND
1 root 0:00 /sbin/docker-init -- sleep infinity
7 root 0:00 sleep infinity
8 root 0:00 ps
For more information on why you would want your entrypoint not to be PID 1, you can check this explanation from a tini developer:
Now, unlike other processes, PID 1 has a unique responsibility, which is to reap zombie processes.
Zombie processes are processes that:
Have exited.
Were not waited on by their parent process (wait is the syscall parent processes use to retrieve the exit code of their children).
Have lost their parent (i.e. their parent exited as well), which means they'll never be waited on by their parent.
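A small sketch of my own (not from the quoted explanation) that reproduces all three conditions, assuming the alpine image: the subshell orphans a sleep child, so only PID 1 can ever reap it.
# Without --init, PID 1 is `sleep infinity`, which never calls wait(),
# so the exited child lingers as a zombie (ps shows it as [sleep] with no arguments).
docker run -d --name no-init alpine sh -c '(sleep 1 &); exec sleep infinity'
sleep 2; docker exec no-init ps
# With --init, /sbin/docker-init is PID 1 and reaps the orphan, so no zombie remains.
docker run -d --init --name with-init alpine sh -c '(sleep 1 &); exec sleep infinity'
sleep 2; docker exec with-init ps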

What does the memory shown in docker stats really mean?

1) I use the following to start a container:
docker run --name test -idt python:3 python -m http.server
2) Then I try to validate the memory usage as follows:
a)
root@shubuntu1:~# ps aux | grep "python -m http.server"
root 17416 3.0 0.2 27368 19876 pts/0 Ss+ 17:11 0:00 python -m http.server
b)
root@shubuntu1:~# docker exec -it test ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.9 0.2 27368 19876 pts/0 Ss+ 09:11 0:00 python -m http.
c)
root@shubuntu1:~# docker stats --no-stream test
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
d72f2ece6816 test 0.01% 12.45MiB / 7.591GiB 0.16% 3.04kB / 0B 0B / 0B 1
From both the Docker host and the Docker container we can see that python -m http.server consumes 19876/1024 ≈ 19.4 MiB of memory (RSS), but docker stats reports 12.45 MiB. Why does it show the container consuming even less memory than the PID 1 process inside it?
rss        RSS      resident set size, the non-swapped physical memory that a task has used (in kiloBytes). (alias rssize, rsz).
MEM USAGE / LIMIT the total memory the container is using, and the total amount of memory it is allowed to use
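docker stats is not computed by summing per-process RSS: it reads the container's cgroup memory accounting (on cgroup v1, roughly memory.usage_in_bytes with the page cache subtracted), so the two numbers need not match. A sketch for looking at the underlying counters, assuming a cgroup v1 host with the default cgroupfs driver (paths differ on cgroup v2 or with the systemd cgroup driver):
CID=$(docker inspect --format '{{.Id}}' test)
cat /sys/fs/cgroup/memory/docker/$CID/memory.usage_in_bytes
grep -E 'total_rss|total_cache' /sys/fs/cgroup/memory/docker/$CID/memory.stat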

Docker containers are still running even after stopping the Docker service altogether

This is the first time something like this has happened to me, and I'm really scared.
I've been coding and testing a Django webapp on my laptop. The app is running on Docker, with docker-compose. Both the host and guest are Ubuntu 18.04. It consists of 3 images: Django+Gunicorn, Nginx and Postgres.
Nothing really fancy and it worked perfectly, until 5 minutes ago.
When I tried to refresh the page (accessible via 127.0.0.1) in Chrome Incognito, it got stuck loading. Same thing with curl. At the time I was logged into the Django container (to run collectstatic whenever I needed it) and it was still running as usual.
I thought something was stuck somewhere, so I checked whether anything was listening on port 80. Nothing really special:
tcp6 0 0 :::80 :::* LISTEN 10815/docker-proxy
So, wanting to get back to coding as fast as possible, I tried to (sudo) bring the containers down and then kill them, to no avail:
ERROR: for xxxxxxxx_nginx_1 Cannot kill container: e94e64a75b1726ccd27623024a4223ffb3d77c6578b4d69f6240bea51e8e641b: Cannot kill container e94e64a75b1726ccd27623024a4223ffb3d77c6578b4d69f6240bea51e8e641b: unknown error after kill: docker-runc did not terminate sucessfully: container_linux.go:393: signaling init process caused "permission denied"
: unknown
No problem, I thought, and I just stopped the Docker service:
sudo systemctl stop docker
I refreshed the 127.0.0.1 page expecting to see a "This site can’t be reached" error ... only to see the webapp load!
I tried to see what containers are running so I could stop them, but docker ps returned this:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
This confirms the Docker service was down, and systemctl status confirmed just that. I also checked whether the server-side code was running; it was. I also tried changing some frontend code, and the page loaded the new version.
Can someone tell me what's going on, and how to stop this 'zombie' app from running?
Thanks!
EDIT
I just had the idea to run ps aux | grep docker and here's what I found:
root 1661 0.5 0.9 670260 74136 ? Ssl 17:47 1:15 dockerd -G docker --exec-root=/var/snap/docker/384/run/docker --data-root=/var/snap/docker/common/var-lib-docker --pidfile=/var/snap/docker/384/run/docker.pid --config-file=/var/snap/docker/384/config/daemon.json --debug
root 2148 0.3 0.4 756640 34944 ? Ssl 17:47 0:47 docker-containerd --config /var/snap/docker/384/run/docker/containerd/containerd.toml
root 4105 0.0 0.0 7508 4112 ? Sl 17:48 0:01 docker-containerd-shim -namespace moby -workdir /var/snap/docker/common/var-lib-docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/7709ab085e470228c120eff4c9b36590348dac483a40d9b107cfb8d62146e060 -address /var/snap/docker/384/run/docker/containerd/docker-containerd.sock -containerd-binary /snap/docker/384/bin/docker-containerd -runtime-root /var/snap/docker/384/run/docker/runtime-runc -debug
root 10618 0.0 0.0 7508 4464 ? Sl 17:57 0:01 docker-containerd-shim -namespace moby -workdir /var/snap/docker/common/var-lib-docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/3a689a845ef012584e46d631c053ca0a00dbe34bb430f5e52a4de879c7efe966 -address /var/snap/docker/384/run/docker/containerd/docker-containerd.sock -containerd-binary /snap/docker/384/bin/docker-containerd -runtime-root /var/snap/docker/384/run/docker/runtime-runc -debug
root 10815 0.0 0.0 425952 2956 ? Sl 17:58 0:07 /snap/docker/384/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.20.0.4 -container-port 80
root 10822 0.0 0.0 9172 5032 ? Sl 17:58 0:01 docker-containerd-shim -namespace moby -workdir /var/snap/docker/common/var-lib-docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/e94e64a75b1726ccd27623024a4223ffb3d77c6578b4d69f6240bea51e8e641b -address /var/snap/docker/384/run/docker/containerd/docker-containerd.sock -containerd-binary /snap/docker/384/bin/docker-containerd -runtime-root /var/snap/docker/384/run/docker/runtime-runc -debug
ahmed 26359 0.0 0.0 21536 1048 pts/5 S+ 21:52 0:00 grep --color=auto docker
EDIT 2
After manually killing some of the processes above, the situation went back to normal. Still, I'd love an explanation if there is one.
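One explanation consistent with the ps output above: dockerd and its helpers are running from /var/snap/docker/..., i.e. this Docker was installed as a snap. sudo systemctl stop docker stops the distro's docker.service, which is not the unit the snap uses, so the snap-managed daemon and its containers keep running. A sketch of how to check and stop it in that case (the exact unit name is an assumption based on the usual snap.<snap>.<app> scheme):
snap list docker                                 # confirm Docker is installed as a snap
sudo snap stop docker                            # stop the snap's services, dockerd included
sudo systemctl stop snap.docker.dockerd.service  # equivalent, via the snap's systemd unit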

Docker container's RAM usage keeps increasing

I have launched several Docker containers and, using docker stats, I have verified that one of them keeps increasing its RAM consumption from the moment it starts until it is restarted.
My question is whether there is any way to find out where that consumption comes from inside the container. Is there some way to check consumption from within the container, something in the style of docker stats but for the container's own processes?
Thanks for your cooperation.
Not sure if it's what you are asking for, but here's an example:
(Before you start):
Run a test container docker run --rm -it ubuntu
Install stress by typing apt-get update and apt-get install stress
Run stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 * 0.9;}' < /proc/meminfo)k --vm-keep -m 1 (it will start consuming memory)
1. with top
If you go to a new terminal you can type docker container exec -it <your container name> top and you will get something like the following:
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang top
top - 12:46:04 up 22 min, 0 users, load average: 1.48, 1.55, 1.12
Tasks: 4 total, 2 running, 2 sleeping, 0 stopped, 0 zombie
%Cpu(s): 20.8 us, 0.8 sy, 0.0 ni, 78.4 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 6102828 total, 150212 free, 5396604 used, 556012 buff/cache
KiB Swap: 1942896 total, 1937508 free, 5388 used. 455368 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
285 root 20 0 4209376 4.007g 212 R 100.0 68.8 6:56.90 stress
1 root 20 0 18500 3148 2916 S 0.0 0.1 0:00.09 bash
274 root 20 0 36596 3072 2640 R 0.0 0.1 0:00.21 top
284 root 20 0 8240 1192 1116 S 0.0 0.0 0:00.00 stress
2. with ps aux
Again, from a new terminal you type docker container exec -it <your container name> ps aux
(notice that the %MEM usage of PID 285 is 68.8%)
docker container exec -it dreamy_jang ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 18500 3148 pts/0 Ss 12:25 0:00 /bin/bash
root 284 0.0 0.0 8240 1192 pts/0 S+ 12:39 0:00 stress --vm-byt
root 285 99.8 68.8 4209376 4201300 pts/0 R+ 12:39 8:53 stress --vm-byt
root 286 0.0 0.0 34400 2904 pts/1 Rs+ 12:48 0:00 ps aux
My source for this stress thing is from this question: How to fill 90% of the free memory?
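If you would rather not exec into the container at all, a similar per-process view is available from the host: docker top shows only the container's processes and passes extra arguments through to ps. A small sketch, reusing the container name from the answer:
docker top dreamy_jang aux
# sort by RSS (column 6 of ps aux output) to spot the process whose memory grows
docker top dreamy_jang aux | sort -k6 -nr | head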

docker on upstart on scaleway

I have a Docker container based on Ubuntu 12.04 and wish to start it on Scaleway. This InstantApp runs on Ubuntu 15.04 with systemd. For my container I need upstart. I turned on upstart following this recommendation:
Install the upstart-sysv package, which will remove ubuntu-standard and systemd-sysv (but should not remove anything else -- if it does, yell!), and run sudo update-initramfs -u. After that, grub's "Advanced options" menu will have a corresponding "Ubuntu, with Linux ... (systemd)" entry where you can do an one-time boot with systemd.
Now my server is running with upstart:
# ps aux|grep upstart
root 1447 0.0 0.0 2632 1744 ? S 13:44 0:00 upstart-udev-bridge --daemon
root 1598 0.0 0.0 2044 176 ? S 13:44 0:00 upstart-file-bridge --daemon
root 2571 0.0 0.0 2032 1128 ? S 13:44 0:00 upstart-socket-bridge --daemon
root 32408 0.0 0.0 3156 1472 pts/4 S+ 14:27 0:00 grep --color=auto upstart
but Docker is not running:
# service docker status
* Docker is managed via upstart, try using service docker status
# service docker start
* Docker is managed via upstart, try using service docker start
How can I start Docker as a daemon?
See the answer to this Ask Ubuntu question; it's a workaround to get things running again until the kernel bug is addressed: https://askubuntu.com/questions/683462/docker-is-managed-via-upstart-try-using-service-docker
