I run a container in the background using:
docker run --restart always --name lnmp -v /Users/gedongdong/web:/var/www/ -itd lnmp
dockerfile:
FROM alpine:edge
LABEL author=gedongdong2010@163.com
RUN mkdir -p /run/nginx && mkdir -p /shell
RUN echo http://mirrors.aliyun.com/alpine/edge/main > /etc/apk/repositories && \
echo http://mirrors.aliyun.com/alpine/edge/community >> /etc/apk/repositories && \
apk update && apk add --no-cache nginx
COPY vhosts.conf /etc/nginx/conf.d/
COPY start.sh /shell
RUN chmod -R 777 /shell
EXPOSE 80 443 6379
CMD ["/shell/start.sh"]
start.sh:
nginx -c /etc/nginx/nginx.conf
tail -f /dev/null
vhosts.conf:
server {
    listen 80;
    server_name docker.test;
    root /var/www;
    index index.html;
}
When I use docker ps -a:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3a3910c0dc29 lnmp "/shell/start.sh" 16 minutes ago Restarting (1) 50 seconds ago lnmp
Why is my container always restarting?
Add #!/bin/sh at the top of your start.sh file:
#!/bin/sh
nginx -c /etc/nginx/nginx.conf
tail -f /dev/null
Why the container was always restarting:
As Henry pointed out in the comments, your --restart always setting tells Docker to restart the container whenever it exits. In general, keep in mind that when PID 1 of the container stops or crashes, the container exits. For example, your container shows something like this:
(notice the PID 1 line, where the problem was)
docker container exec -it lnmp top -n 1 -b
Mem: 2846060K used, 3256768K free, 62108K shrd, 83452K buff, 1102096K cached
CPU: 2% usr 2% sys 0% nic 95% idle 0% io 0% irq 0% sirq
Load average: 0.09 0.24 0.27 1/892 41
PID PPID USER STAT VSZ %VSZ CPU %CPU COMMAND
10 9 nginx S 15372 0% 0 0% nginx: worker process
12 9 nginx S 15372 0% 5 0% nginx: worker process
17 9 nginx S 15372 0% 1 0% nginx: worker process
11 9 nginx S 15372 0% 7 0% nginx: worker process
18 9 nginx S 15372 0% 5 0% nginx: worker process
15 9 nginx S 15372 0% 4 0% nginx: worker process
14 9 nginx S 15372 0% 1 0% nginx: worker process
16 9 nginx S 15372 0% 4 0% nginx: worker process
9 1 root S 14924 0% 6 0% nginx: master process nginx -c /etc/nginx/nginx.conf
1 0 root S 1592 0% 1 0% {start.sh} /bin/sh /shell/start.sh
34 0 root R 1532 0% 4 0% top -n 1 -b
13 1 root S 1524 0% 2 0% tail -f /dev/null
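To confirm from the host what PID 1 exited with before the next restart, you can check the state Docker recorded for the container and its last output (a quick sketch using the container name from the question):

# Last exit code and any error Docker recorded for the container
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' lnmp

# Whatever PID 1 printed before it died
docker logs --tail 20 lnmp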
For me, it happened because of a permission issue: my machine's password was changed, but it was not updated in the Docker machine.
To debug it, run the command below in a terminal:
docker logs <your container's name>
Related
I don't have much experience with containers or bash scripting, and I am having a really hard time trying to get a directory on my host machine to receive copies of the log files from a Celery container. I run Docker in rootless mode.
I have a Dockerfile where I create a user and group named "celery", install gosu, and define an entrypoint and a CMD. In the entrypoint I simply exec gosu "$USER_NAME" "$@", and the CMD is
celery \
-A src.core.celery_app \
worker \
--pool=gevent \
--concurrency=5 \
--loglevel=info \
--pidfile=/var/run/celery/%n.pid \
--logfile=/var/log/celery/%n%I.log
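For reference, the entrypoint described above boils down to something like this (a sketch; the file name and the USER_NAME variable are assumptions, only the gosu exec line is quoted from the question):

#!/bin/sh
# entrypoint.sh (hypothetical name): drop root and run the container's CMD as the unprivileged user
# USER_NAME is assumed to be set to "celery" in the Dockerfile
set -e
exec gosu "$USER_NAME" "$@"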
If I run the docker compose setup without volumes, everything works fine: the worker runs in the container under the celery user, and /var/log/celery/celery.log is owned by celery:celery.
# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
celery 1 0.0 0.0 2420 596 ? Ss 10:42 0:00 /bin/sh ./start-celeryworker
celery 12 0.8 0.5 210872 84496 ? Sl 10:42 0:01 /usr/local/bin/python /usr/local/bin/celery -A src.core...
# ls -la /var/log
total 752
drwxr-xr-x 1 celery celery 4096 Jun 24 10:42 celery
But if I try to add a volume so I can have the log files on the host (volumes: <local dir>:/var/log/celery), the local dir is created but it is empty, and I get a permission denied error:
File "/usr/local/lib/python3.9/logging/__init__.py", line 1175, in _open
return open(self.baseFilename, self.mode, encoding=self.encoding,
PermissionError: [Errno 13] Permission denied: '/var/log/celery/celery.log'
I even tried chmod 777 on the file in the container, but I still get the error. I looked at the official postgres image's Dockerfile and entrypoint.sh, and at the file structure of the postgres container after it is built (there I do get the pgdata directory on the host, owned by user 100069 and group jap, which is my user name, uid:gid 1000:1000), but as I mentioned before, I don't have that much knowledge of bash.
I gave uid and gid 1003 to the celery user and group in the container, and on the host the process runs under 101002.
Now, if I don't create the "celery" user and group and run Celery as root, everything works perfectly and the volume works too. Any ideas? Thanks.
I use the following command to run a container:
docker run -it -v /home/:/usr/ ubuntu64 /bin/bash
Then I run a program in the container. The program generates some files in the folder /usr/, which also appear in /home/, but outside the container I can't access the generated files: I get a Permission denied error.
I think this may be because the files are generated by root inside the container, while outside the container my user has no root privileges. But how do I solve it?
What I want to do is access the files generated by the program (installed in the container) from outside the container.
You need to use the -u flag
docker run -it -v $PWD:/data -w /data alpine touch nouser.txt
docker run -u `id -u` -it -v $PWD:/data -w /data alpine touch onlyuser.txt
docker run -u `id -u`:`id -g` -it -v $PWD:/data -w /data alpine touch usergroup.txt
Now if you do ls -alh on the host system
$ ls -alh
total 8.0K
drwxrwxr-x 2 vagrant vagrant 4.0K Sep 9 05:22 .
drwxrwxr-x 30 vagrant vagrant 4.0K Sep 9 05:19 ..
-rw-r--r-- 1 root root 0 Sep 9 05:21 nouser.txt
-rw-r--r-- 1 vagrant root 0 Sep 9 05:21 onlyuser.txt
-rw-r--r-- 1 vagrant vagrant 0 Sep 9 05:22 usergroup.txt
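Applied to the command from the original question, that would look something like this (a sketch; the image name and bind mount are copied from the question):

docker run -u `id -u`:`id -g` -it -v /home/:/usr/ ubuntu64 /bin/bash

Files the program writes under /usr/ inside the container are then owned by your host uid:gid, so they remain readable in /home/ on the host.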
Supposing my container is named fluentd, I'd expect this command to reload the config:
sudo docker kill -s HUP fluentd
Instead it kills the container.
It seems the entrypoint spawns a few processes:
PID USER TIME COMMAND
1 root 0:00 {entrypoint.sh} /usr/bin/dumb-init /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/pl
5 root 0:00 /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
13 fluent 0:00 /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
14 fluent 0:00 {fluentd} /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
16 fluent 0:00 {fluentd} /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
I tried HUPping PID 13 from inside the container and it seems to work.
Docker is sending the signal to the entrypoint. If I inspect the State.Pid, I see 4450. Here's the host ps:
root 4450 4432 0 18:30 ? 00:00:00 /usr/bin/dumb-init /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
root 4467 4450 0 18:30 ? 00:00:00 /bin/sh /bin/entrypoint.sh /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
ubuntu 4475 4467 0 18:30 ? 00:00:00 /bin/sh -c fluentd -c /fluentd/etc/${FLUENTD_CONF} -p /fluentd/plugins $FLUENTD_OPT
ubuntu 4476 4475 0 18:30 ? 00:00:00 /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
ubuntu 4478 4476 0 18:30 ? 00:00:00 /usr/bin/ruby /usr/bin/fluentd -c /fluentd/etc/fluentd.conf -p /fluentd/plugins
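For reference, that 4450 comes from reading the container's state, e.g.:

docker inspect --format '{{.State.Pid}}' fluentd

which prints the host PID of the container's PID 1 (the entrypoint).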
Any ideas how to reload the conf without a custom script to find the correct process to HUP?
This command should work, I believe:
sudo docker exec fluentd pkill -1 -x fluentd
I tested it on a sleep command inside the fluentd container and it works: pkill -1 -x fluentd sends signal 1 (SIGHUP) directly to the processes named exactly fluentd, rather than to PID 1 (the entrypoint).
In my case fluentd is running as a pod on kubernetes.
The command that works for me is:
kubectl -n=elastic-system exec -it fluentd-pch5b -- kill --signal SIGHUP 7
where the number 7 is the process ID of fluentd inside the container, as you can see below:
root@fluentd-pch5b:/home/fluent# ps -elf
F S UID PID PPID C PRI NI ADDR SZ WCHAN STIME TTY TIME CMD
4 S root 1 0 0 80 0 - 2075 - 14:42 ? 00:00:00 tini -- /fluentd/entrypoint.sh
4 S root 7 1 0 80 0 - 56225 - 14:42 ? 00:00:02 ruby /fluentd/vendor/bundle/ruby/2.6.0/bin/fluentd -c /fluentd/etc/fluent.co
4 S root 19 7 0 80 0 - 102930 - 14:42 ? 00:00:06 /usr/local/bin/ruby -Eascii-8bit:ascii-8bit /fluentd/vendor/bundle/ruby/2.6.
4 S root 70 0 0 80 0 - 2439 - 14:52 pts/0 00:00:00 bash
0 R root 82 70 0 80 0 - 3314 - 14:54 pts/0 00:00:00 ps -elf
I found a video by Packt Publishing about setting up the Docker remote API.
In the video we are told to change the /etc/init/docker.conf file by adding "-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock" to DOCKER_OPTS=. Then we have to restart docker for the changes to take effect.
However, after I do all that, I still can't curl localhost on that port. Doing so returns:
vagrant@vagrant-ubuntu-trusty-64:~$ curl localhost:4243/_ping
curl: (7) Failed to connect to localhost port 4243: Connection refused
I'm relatively new to Docker; if somebody could help me out here I'd be very grateful.
Edit:
docker.conf
description "Docker daemon"
start on (filesystem and net-device-up IFACE!=lo)
stop on runlevel [!2345]
limit nofile 524288 1048576
limit nproc 524288 1048576
respawn
kill timeout 20
pre-start script
# see also https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount
if grep -v '^#' /etc/fstab | grep -q cgroup \
|| [ ! -e /proc/cgroups ] \
|| [ ! -d /sys/fs/cgroup ]; then
exit 0
fi
if ! mountpoint -q /sys/fs/cgroup; then
mount -t tmpfs -o uid=0,gid=0,mode=0755 cgroup /sys/fs/cgroup
fi
(
cd /sys/fs/cgroup
for sys in $(awk '!/^#/ { if ($4 == 1) print $1 }' /proc/cgroups); do
mkdir -p $sys
if ! mountpoint -q $sys; then
if ! mount -n -t cgroup -o $sys cgroup $sys; then
rmdir $sys || true
fi
fi
done
)
end script
script
# modify these in /etc/default/$UPSTART_JOB (/etc/default/docker)
DOCKER=/usr/bin/$UPSTART_JOB
DOCKER_OPTS="-H tcp://127.0.0.1:4243 -H unix:///var/run/docker.sock"
if [ -f /etc/default/$UPSTART_JOB ]; then
. /etc/default/$UPSTART_JOB
fi
exec "$DOCKER" daemon $DOCKER_OPTS
end script
# Don't emit "started" event until docker.sock is ready.
# See https://github.com/docker/docker/issues/6647
post-start script
DOCKER_OPTS=
if [ -f /etc/default/$UPSTART_JOB ]; then
"/etc/init/docker.conf" 60L, 1582C
EDIT2: Output of ps aux | grep docker
vagrant@vagrant-ubuntu-trusty-64:~$ ps aux | grep docker
root 858 0.2 4.2 401836 21504 ? Ssl 06:12 0:00 /usr/bin/docker daemon --insecure-registry 11.22.33.44:5000
vagrant 1694 0.0 0.1 10460 936 pts/0 S+ 06:15 0:00 grep --color=auto docker
The problem
According to the output of ps aux | grep docker, the options the daemon was started with do not match the ones in the docker.conf file, so another file is being used to start the docker daemon service.
Solution
To solve this, track down the file that contains the option --insecure-registry 11.22.33.44:5000 (it may be /etc/default/docker, /etc/init/docker.conf, /etc/systemd/system/docker.service, or somewhere else) and modify it with the needed options.
Then restart the daemon and you're good to go!
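A minimal sketch of that hunt on Ubuntu 14.04, assuming the usual file locations (adjust paths as needed):

# Find which config file carries the daemon options actually in use
grep -rn "insecure-registry" /etc/default/docker /etc/init/docker.conf /etc/systemd/system 2>/dev/null

# After adding the -H options there, restart the daemon and verify
sudo service docker restart
curl localhost:4243/_ping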
So I have the following scenario to create my Docker image and container. The question is: how can I have my process up at container startup?
1. create image
cat /var/tmp/mod_sm.tar | docker import - mod_sm_39
2. see images
[root@sl1cdi151 etc]# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mod_sm_39 latest 1573470bfa06 2 hours ago 271 MB
mod_site_m0305 latest c029826a2253 4 days ago 53.8 MB
<none> <none> ee67b9aec2d3 4 days ago 163.4 MB
mod_site_soft latest 0933a386d56c 6 days ago 53.8 MB
mod_site_vm151 latest 4461c32e4772 6 days ago 53.8 MB
3. create container
docker run -it --net=host -v /root:/root -v /usr/share/Modules:/usr/share/Modules -v /usr/libexec:/usr/libexec -v /var:/var -v /tmp:/tmp -v /bin:/bin -v /cgroup:/cgroup -v /dev:/dev -v /etc:/etc -v /home:/home -v /lib:/lib -v /lib64:/lib64 -v /sbin:/sbin -v /usr/lib64:/usr/lib64 -v /usr/bin:/usr/bin --name="mod_sm_39_c2" -d mod_sm_39 /bin/bash
4. now in the container I go to my application and start the following:
[root@sl1cdi151 ven]# ./service.sh sm_start
5. check if it's up
[root@sl1cdi151 etc]# ps -ef | grep http
root 33 1 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 34 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 36 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 37 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 39 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 41 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 43 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 45 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
daemon 47 33 0 11:15 ? 00:00:00 ./httpd -k start -f /usr/local/clo/ven/mod_web/httpd/conf/httpd.conf
root 80 1 0 13:32 pts/2 00:00:00 grep http
So I need "./service.sh sm_start" to be up when the container is started. How can I implement that? Thank you in advance.
Either specify the command as part of the docker run command:
docker run [options] mod_sm_39 /path/to/service.sh sm_start
or specify the command as part of the image's Dockerfile so that it will be run whenever the container is started without an explicit command:
# In the mod_sm_39 Dockerfile:
CMD ["/path/to/service.sh", "sm_start"]