Supposing my container is named fluentd, I'd expect this command to reload the config:
sudo docker kill -s HUP fluentd
Instead it kills the container.
It seems the entrypoint spawns a few processes:
PID USER TIME COMMAND
1 root 0:00 {entrypoint.sh} /usr/bin/dumb-init /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/pl
5 root 0:00 /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
13 fluent 0:00 /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
14 fluent 0:00 {fluentd} /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
16 fluent 0:00 {fluentd} /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
I tried HUPping PID 13 from inside the container and it seems to work.
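Something like this is what I mean by HUPping from inside the container (PID 13 being the /bin/sh -c wrapper in the listing above; adjust to whatever your ps shows):
docker exec fluentd kill -HUP 13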
Docker is sending the signal to the entrypoint. If I inspect the State.Pid, I see 4450. Here's the host ps:
root 4450 4432 0 18:30 ? 00:00:00 /usr/bin/dumb-init /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
root 4467 4450 0 18:30 ? 00:00:00 /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
ubuntu 4475 4467 0 18:30 ? 00:00:00 /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
ubuntu 4476 4475 0 18:30 ? 00:00:00 /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
ubuntu 4478 4476 0 18:30 ? 00:00:00 /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
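For reference, the State.Pid value comes from docker inspect:
sudo docker inspect --format '{{.State.Pid}}' fluentd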
Any ideas how to reload the config without a custom script to find the correct process to HUP?
I believe this command should work:
sudo docker exec fluentd pkill -1 -x fluentd
I tested it on a sleep command inside the fluentd container and it works.
In my case fluentd is running as a pod on Kubernetes.
The command that works for me is:
kubectl -n=elastic-system exec -it fluentd-pch5b -- kill --signal SIGHUP 7
where the number 7 is the process ID of fluentd inside the container, as you can see below:
root@fluentd-pch5b:/home/fluent# ps -elf
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
4 S root 1 0 0 80 0 - 2075 - 14:42 ? 00:00:00 tini -- /fluentd/entrypoint.sh
4 S root 7 1 0 80 0 - 56225 - 14:42 ? 00:00:02 ruby /fluentd/vendor/bundle/ruby/2.6.0/bin/fluentd -c /fluentd/etc/fluent.co
4 S root 19 7 0 80 0 - 102930 - 14:42 ? 00:00:06 /usr/local/bin/ruby -Eascii-8bit:ascii-8bit /fluentd/vendor/bundle/ruby/2.6.
4 S root 70 0 0 80 0 - 2439 - 14:52 pts/0 00:00:00 bash
0 R root 82 70 0 80 0 - 3314 - 14:54 pts/0 00:00:00 ps -elf
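If you'd rather not hard-code the PID, the pkill approach from the earlier answer should also work here (a sketch assuming pkill exists in the image; use -f instead of -x if the process name shows up as ruby rather than fluentd, as in the ps output above):
kubectl -n elastic-system exec fluentd-pch5b -- pkill -HUP -x fluentd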
In rootful containers, the solution to this problem is to run with --user "$(id -u):$(id -g)"; however, this does not work for rootless container systems (rootless Docker, or in my case podman):
$ mkdir x
$ podman run --user "$(id -u):$(id -g)" -v "$PWD/x:/x:rw" ubuntu:focal bash -c 'echo hi >> /x/test'
bash: /x/test: Permission denied
So for rootless container systems I should remove --user, since the container's root user is automatically mapped to the calling user:
$ podman run -v "$PWD/x:/x:rw" ubuntu:focal bash -c 'echo hi >> /x/test'
$ ls -al x
total 12
drwxr-xr-x 2 asottile asottile 4096 Sep 3 10:02 .
drwxrwxrwt 18 root root 4096 Sep 3 10:01 ..
-rw-r--r-- 1 asottile asottile 3 Sep 3 10:02 test
But, because this is now the root user, it can change ownership to users that are undeletable outside the container:
$ podman run -v "$PWD/x:/x:rw" ubuntu:focal bash -c 'mkdir -p /x/1/2/3 && chown -R nobody /x/1'
$ ls -al x/
total 16
drwxr-xr-x 3 asottile asottile 4096 Sep 3 10:03 .
drwxrwxrwt 18 root root 4096 Sep 3 10:01 ..
drwxr-xr-x 3 165533 asottile 4096 Sep 3 10:03 1
-rw-r--r-- 1 asottile asottile 3 Sep 3 10:02 test
$ rm -rf x/
rm: cannot remove 'x/1/2/3': Permission denied
So my question is: how do I allow writes to a mount, but prevent changing ownership, for rootless containers?
I think --user $(id -u):$(id -g) --userns=keep-id will get what you want.
$ id -un
erik
$ id -gn
erik
$ mkdir x
$ podman run -v "$PWD/x:/x:Z" --user $(id -u):$(id -g) --userns=keep-id docker.io/library/ubuntu:focal bash -c 'mkdir -p /x/1/2/3 && chown -R nobody /x/1'
chown: changing ownership of '/x/1/2/3': Operation not permitted
chown: changing ownership of '/x/1/2': Operation not permitted
chown: changing ownership of '/x/1': Operation not permitted
$ ls x
1
$ ls -l x
total 0
drwxr-xr-x. 3 erik erik 15 Sep 6 19:34 1
$ ls -l x/1
total 0
drwxr-xr-x. 3 erik erik 15 Sep 6 19:34 2
$ ls -l x/1/2
total 0
drwxr-xr-x. 2 erik erik 6 Sep 6 19:34 3
$
Regarding deleting files and directories that are not owned by your normal UID and GID (but by the extra ranges in /etc/subuid and /etc/subgid), you can use
podman unshare rm filepath
and
podman unshare rm -rf directorypath
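For example, to remove a subuid-owned tree like x/1 from the original question (a sketch assuming it sits in the current directory):
$ podman unshare rm -rf x/1
$ rm -rf x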
I had a bash session in one of my containers, started with docker exec -it mysql-instance bash, to which I lost connectivity. I would like to know if I can reconnect to the same session based on the TTY. Is this possible?
user@debian64 ~> docker exec -it mysql-instance ps -A u x
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
mysql 1 0.1 1.0 1485872 116320 ? Ssl 22:29 0:00 mysqld
root 93 0.0 0.0 18508 3180 pts/0 Ss+ 22:32 0:00 bash
root 111 0.0 0.0 18508 3120 pts/1 Ss+ 22:35 0:00 /bin/bash
In this case I would like to reconnect to the still-running session on TTY pts/0. How can I do that?
I run a container in the background using:
docker run --restart always --name lnmp -v /Users/gedongdong/web:/var/www/ -itd lnmp
dockerfile:
FROM alpine:edge
LABEL author=gedongdong2010@163.com
RUN mkdir -p /run/nginx && mkdir -p /shell
RUN echo http://mirrors.aliyun.com/alpine/edge/main > /etc/apk/repositories && \
echo http://mirrors.aliyun.com/alpine/edge/community >> /etc/apk/repositories && \
apk update && apk add --no-cache nginx
COPY vhosts.conf /etc/nginx/conf.d/
COPY start.sh /shell
RUN chmod -R 777 /shell
EXPOSE 80 443 6379
CMD ["/shell/start.sh"]
start.sh:
nginx -c /etc/nginx/nginx.conf
tail -f /dev/null
vhosts.conf:
server {
listen 80;
server_name docker.test;
root /var/www;
index index.html;
}
When I use docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a3910c0dc29 lnmp "/shell/start.sh" 16 minutes ago Restarting (1) 50 seconds ago lnmp
Why is my container always restarting?
Add #!/bin/sh to your start.sh file
#!/bin/sh
nginx -c /etc/nginx/nginx.conf
tail -f /dev/null
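Alternatively (a common variant), you can run nginx in the foreground so the script's process never exits and the tail -f placeholder isn't needed:
#!/bin/sh
# exec replaces the shell, so nginx's master process becomes PID 1 and runs in the foreground
exec nginx -c /etc/nginx/nginx.conf -g 'daemon off;'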
Why the container was always restarting:
As Henry pointed out in his comments, your --restart always setting told Docker to do so. In general, keep in mind that when PID 1 of the container stops or crashes, the container exits, and with --restart always it is immediately started again. Once start.sh runs correctly, your container shows something like this (notice the PID 1 line, which is where the problem was):
docker container exec -it lnmp top -n 1 -b
Mem: 2846060K used, 3256768K free, 62108K shrd, 83452K buff, 1102096K cached
CPU: 2% usr 2% sys 0% nic 95% idle 0% io 0% irq 0% sirq
Load average: 0.09 0.24 0.27 1/892 41
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
10 9 nginx S 15372 0% 0 0% nginx: worker process
12 9 nginx S 15372 0% 5 0% nginx: worker process
17 9 nginx S 15372 0% 1 0% nginx: worker process
11 9 nginx S 15372 0% 7 0% nginx: worker process
18 9 nginx S 15372 0% 5 0% nginx: worker process
15 9 nginx S 15372 0% 4 0% nginx: worker process
14 9 nginx S 15372 0% 1 0% nginx: worker process
16 9 nginx S 15372 0% 4 0% nginx: worker process
9 1 root S 14924 0% 6 0% nginx: master process nginx -c /etc/nginx/nginx.conf
1 0 root S 1592 0% 1 0% {start.sh} /bin/sh /shell/start.sh
34 0 root R 1532 0% 4 0% top -n 1 -b
13 1 root S 1524 0% 2 0% tail -f /dev/null
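To confirm a restart loop without watching docker ps, you can also check the restart count (a sketch; both template fields come from docker inspect's JSON output):
docker inspect -f 'status={{.State.Status}} restarts={{.RestartCount}}' lnmp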
For me, it happened because of a permission issue: my machine's password had been changed and was not updated in the Docker machine.
To debug it, check the container logs in a terminal:
docker logs <your container's name>
I started a container named pg. I wanted to debug a bash script in the container, so I installed bashdb inside it and started it:
root@f8693085f270:/# /usr/share/bin/bashdb docker-entrypoint.sh postgres
I go back to the host, and do:
[eric@almond volume]$ docker exec -ti pg bash
root@f8693085f270:/# ps ajxw
PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND
0 1 1 1 ? 3746 Ss 0 0:00 bash
1 3746 3746 1 console 3746 S+ 0 0:00 /bin/bash
[eric@almond postgres]$ ps ajxw | grep docker
30613 3702 3702 30613 pts/36 3702 Sl+ 1000 0:01 docker run --name pg -v /home/eric/tmp/bashdb:/bashdb -it postgres bash
3760 8049 8049 3760 pts/19 8049 S+ 0 0:00 /bin/bash /usr/share/bin/bashdb docker-entrypoint.sh postgres
4166 8294 8294 4166 pts/9 8294 Sl+ 1000 0:00 docker exec -ti pg bash
So in the container I see a TTY entry console, which I have never seen before, and I see the debugging entry in ps on the host!
What is going on?
Docker isolates the container from the host, but it doesn't isolate the host from the container. That means the host can see the processes running inside containers, though from a different namespace, so the PIDs will be different.
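You can see this mapping directly with docker top, which lists a container's processes using the host-side PIDs (pg is the container name from the question):
docker top pg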
Attaching to console appears to be something from bashdb. It has automatic detection for the tty to direct output to, and may be getting thrown off by the Docker isolation.
So I have the following scenario to create my Docker image and container. The question is: how can I have my process up at container startup?
1. Create the image:
cat /var/tmp/mod_sm.tar | docker import - mod_sm_39
2. See the images:
[root@sl1cdi151 etc]# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mod_sm_39 latest 1573470bfa06 2 hours ago 271 MB
mod_site_m0305 latest c029826a2253 4 days ago 53.8 MB
<none> <none> ee67b9aec2d3 4 days ago 163.4 MB
mod_site_soft latest 0933a386d56c 6 days ago 53.8 MB
mod_site_vm151 latest 4461c32e4772 6 days ago 53.8 MB
3. Create the container:
docker run -it --net=host -v /root:/root -v /usr/share/Modules:/usr/share/Modules -v /usr/libexec:/usr/libexec -v /var:/var -v /tmp:/tmp -v /bin:/bin -v /cgroup:/cgroup -v /dev:/dev -v /etc:/etc -v /home:/home -v /lib:/lib -v /lib64:/lib64 -v /sbin:/sbin -v /usr/lib64:/usr/lib64 -v /usr/bin:/usr/bin --name="mod_sm_39_c2" -d mod_sm_39 /bin/bash
4. Now, in the container, I go to my application and start the following:
[root@sl1cdi151 ven]# ./service.sh sm_start
5. Check if it's up:
[root@sl1cdi151 etc]# ps -ef | grep http
root 33 1 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 34 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 36 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 37 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 39 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 41 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 43 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 45 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 47 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
root 80 1 0 13:32 pts/2 00:00:00 grep http
So I need "./service.sh sm_start" to run when the container is started. How can I implement that? Thank you in advance.
Either specify the command as part of the docker run command:
docker run [options] mod_sm_39 /path/to/service.sh sm_start
or specify the command as part of the image's Dockerfile so that it will be run whenever the container is started without an explicit command:
# In the mod_sm_39 Dockerfile:
CMD ["/path/to/service.sh", "sm_start"]