Keeping Docker containers alive with supervisord

I end my Debian Dockerfile with these lines:
EXPOSE 80 22
COPY etc/supervisor/conf.d /etc/supervisor/conf.d
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
In the /etc/supervisor/conf.d/start.conf file:
[program:ssh]
command=/usr/sbin/service ssh restart
[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'
[program:systemctl]
command=/bin/systemctl daemon-reload
[program:systemctl]
command=/bin/systemctl start php7-fpm.service
If I try to run this Docker image with the following command:
$ docker run -d -p 8080:80 -p 8081:22 lanti/debian
It immediately stops running. If I try to run it in the foreground:
$ docker run -it -v /home/core/share:/root/share -p 8080:80 -p 8081:22 lanti/debian
It's the same: immediate exit. If I run it with bash as the CMD:
$ docker run --rm -it -v /home/core/share:/root/share -p 8080:80 -p 8081:22 lanti/debian bash
It stays active in the console, but the commands predefined for supervisor do not run, so I need to run $ service supervisor restart inside the container; otherwise Nginx and SSH won't start.
How can I start a Docker container with multiple commands run at startup? In the past I used ExecStartPost lines in a systemd unit file on the host OS, but because of that the systemd file became complex, so I am trying to move the pre-start commands into the container to run automatically on any type of startup.
This docker container will have nginx, php, ssh, phpmyadmin and mysql in the future. I don't want multiple containers.
Thank you for your help!

Let's preface this by saying that running the kitchen sink in a Docker container is not a best practice. Docker is not a virtual machine.
That said, a few problems:
Just like the processes that supervisor controls, supervisor itself should NOT daemonize. Add -n.
I'm not entirely sure why you expect, need, or want to have systemd and supervisor running. Most docker containers do not have a functioning init system. Why not just use supervisor for everything? Unless docker has significantly changed in the last couple versions, systemd inside the container will not work like you think it should.
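As a sketch of what that could look like: keep supervisord in the foreground with -n and drop the systemctl entries entirely, letting supervisor start every service itself. Binary names and flags below are assumptions about a typical Debian image (the php-fpm line in particular depends on your PHP version), so treat it as a starting point rather than a drop-in config.
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
And in /etc/supervisor/conf.d/start.conf:
; every program runs in the foreground so supervisor can manage it
[program:sshd]
command=/usr/sbin/sshd -D

[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'

[program:php-fpm]
command=/usr/sbin/php-fpm7.0 -F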

Related

Enable systemctl in Docker container

I am trying to create my own Docker container and a custom service which I created for my work. This is my service file:
/etc/systemd/system/qsinavAI.service
[Unit]
Description=uWSGI instance to serve Qsinav AI
After=network.target
[Service]
User=www-data
Group=www-data
WorkingDirectory=/root/AI/
Environment="PATH=/root/AI/bin"
ExecStart=/root/AI/bin/uwsgi --ini ai.ini
[Install]
WantedBy=multi-user.target
and when I try to run this service I get this error:
System has not been booted with systemd as init system (PID 1). Can't
operate. Failed to connect to bus: Host is down
I searched a lot for a solution but could not find one. How can I enable systemctl in Docker?
This is the command that I am using to run the container:
docker run -dt -p 5000:5000 --name AIPython2 --privileged -v /sys/fs/cgroup:/sys/fs/cgroup:ro --cap-add SYS_ADMIN last_python_image
If your application is only ever run inside a container then you should create a docker-entrypoint.sh script with an "exec" at the end so that your application is run as a remapped PID 1 in the container. That way cloud systems can see if the application is alive and they can send a SIGTERM to stop the application.
#!/bin/bash
# run uwsgi as the container's PID 1 so it receives SIGTERM directly
cd /root/AI
PATH=/root/AI/bin
exec /root/AI/bin/uwsgi --ini ai.ini
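Wired into the image, that could look roughly like this (a sketch; where the script lives in the image is an arbitrary choice):
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]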
If your application shall be able to run in a systemd environment outside of a container then you can choose to reuse the systemd descriptor. It requires an init daemon on PID 1 and a service manager to check the "enabled" services. One example would be the systemctl-docker-replacement script.
Docker containers should have an "entrypoint" command that runs in foreground to keep the container running. The basic idea behind a container is that it runs as long as the root process that started it, keeps running. Since you will issue a systemctl start qsinavAI.service, the command will succeed but once this command exits, the container will stop.
By design, containers started in detached mode exit when the root process used to run the container exits, ...
See the official documentation for a reference about this and about starting an nginx service.
So instead of trying to run your application as a service, you should have an entrypoint statement at the end of your Dockerfile. Then when you start this container with docker run, you can specify -d to run it in "detached" mode.
Example, taking the command from ExecStart and assuming it runs in foreground:
ENTRYPOINT ["/root/AI/bin/uwsgi", "--ini", "ai.ini"]
Example of how to create an image with systemd and boot it like a real environment. A Dockerfile is required.
FROM ubuntu:22.04
RUN echo 'root:root' | chpasswd
RUN printf '#!/bin/sh\nexit 0' > /usr/sbin/policy-rc.d
RUN apt-get update
RUN apt-get install -y systemd systemd-sysv dbus dbus-user-session
ENTRYPOINT ["/sbin/init"]
/sbin/init as the entrypoint is important: it boots systemd and enables systemctl.
Then build the image and run it:
docker build -t testimage -f Dockerfile .
docker run -it --privileged --cap-add=ALL testimage
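Once the container has booted this way, systemd is PID 1 inside it, so systemctl works from a shell in the container, for example (container ID as reported by docker ps):
docker exec -it <container-id> systemctl status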

Docker container is not running even if -d

I'm French and new here (so I don't know how Stack Overflow and its community work); I'm going to try to adapt.
So, my first problem is the following:
I run a Docker container from an image I created with a Dockerfile (it is a DNS container).
In the Dockerfile, this container has to start script.sh when it starts.
But after running:
docker run -d -ti -p 53:53 alex/dns
(I use -p 53:53 because it is DNS.)
I can see my DNS running at the end of my script.sh, but when I do:
docker ps -a, the container is not running.
I'm a novice with Docker. I started learning it 2 days ago.
I tried to add (one by one of course):
CMD ["bash"]
CMD ["/bin/bash"]
to run bash and make sure the container does not power off.
I tried to add -d to the docker run command.
I tried to use:
docker commit ti alex/dns
and
docker exec -ti alex/dns /bin/bash
My Dockerfile:
FROM debian
...
RUN apt-get install bind9
...
ADD script.sh /usr/bin/script.sh
...
ENTRYPOINT ["/bin/bash", "script.sh]
CMD ["/bin/bash"]
My script.sh file:
service bind9 stop
*(It copies and replaces the conf file for bind9)*
service bind9 restart
I hope that there are not too many mistakes and that I managed to make myself understood.
I expect the DNS container to stay running so I can use it with docker exec.
But now, after using docker run, the container starts and then stops just after my script finishes. Yes, the DNS server is running; before closing, the container tells me [ok] Bind9 running or something like that. But afterwards the container stops.
I suspect the problem you're facing is that your container will terminate once service bind9 restart completes.
You need to have a foreground process running to keep the container running.
I'm unfamiliar with bind9 but I recommend you explore ways to run bind9 in the foreground in your container.
Your command to run the container is correct:
docker run -d -ti -p 53:53 alex/dns
You may need to:
RUN apt-get update && apt-get -y install bind9
You will likely need something like (don't know):
ENTRYPOINT ["/bind9"]
Googled it ;-)
https://manpages.debian.org/jessie/bind9/named.8.en.html
After you've configured it, you can run it as a foreground process:
ENTRYPOINT ["named","-g"]

Docker container exits when using -it option

Consider the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get install -y apache2 && \
apt-get clean
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
When running the container with the command docker run -p 8080:80 <image-id>, the container starts and remains running, allowing the default Apache web page to be accessed at http://localhost:8080 from the host as expected. With this run command, however, I am not able to quit the container using Ctrl+C, also as expected, since the container was not launched with the -it option. Now, if the -it option is added to the run command, the container exits immediately after startup. Why is that? Is there an elegant way to have Apache run in the foreground while exiting on Ctrl+C?
This behaviour is caused by Apache and it is not an issue with Docker. Apache is designed to shut down gracefully when it receives the SIGWINCH signal. When running the container interactively, the SIGWINCH signal is passed from the host to the container, effectively signalling Apache to shut down gracefully. On some hosts the container may exit immediately after it is started. On other hosts the container may stay running until the terminal window is resized.
It is possible to confirm that this is the source of the issue after the container exits by reviewing the Apache log file as follows:
# Run container interactively:
docker run -it <image-id>
# Get the ID of the container after it exits:
docker ps -a
# Copy the Apache log file from the container to the host:
docker cp <container-id>:/var/log/apache2/error.log .
# Use any text editor to review the log file:
vim error.log
# The last line in the log file should contain the following:
AH00492: caught SIGWINCH, shutting down gracefully
Sources:
https://bz.apache.org/bugzilla/show_bug.cgi?id=50669
https://bugzilla.redhat.com/show_bug.cgi?id=1212224
https://github.com/docker-library/httpd/issues/9
All that you need to do is pass the -d option to the run command:
docker run -d -p 8080:80 my-container
As yamenk mentioned, daemonizing works because you send it to the background and decouple the window resizing.
Since the follow-up post mentioned that running in the foreground may have been desirable, there is a good way to simulate that experience after daemonizing:
docker logs -f container-name
This will drop the usual stdout like "GET / HTTP..." connection messages back onto the console so you can watch them flow.
Now you can resize the window and stuff and still see your troubleshooting info.
I am also experiencing this problem on wsl2 under Windows 10, Docker Engine v20.10.7
Workaround:
# start bash in httpd container:
docker run --rm -ti -p 80:80 httpd:2.4.48 /bin/bash
# inside container execute:
httpd -D FOREGROUND
Now Apache httpd keeps running until you press CTRL-C or resize(?!) the terminal window.
After closing httpd, type:
exit
to leave the container
A workaround is to pipe the output to cat:
docker run -it -p 8080:80 <image-id> | cat
NOTE: It is important to use -i and -t.
Ctrl+C will work and resizing the terminal will not shut down Apache.

Docker issue commands to an app inside container?

I am using nodeBB. To start a server you can run ./nodebb start; to stop it you can do ./nodebb stop. Now that I have dockerized it (http://nodebb-francais.readthedocs.org/projects/nodebb/en/latest/installing/docker/nodebb-redis.html) I am not sure how I can interact with it.
I have followed the steps "Using docker-machine mac os x"
docker run --name my-forum-redis -d -p 6379:6379 nodebb/docker:ubuntu-redis
Then
docker run --name my-forum-nodebb --link my-forum-redis:redis -p 80:80 -p 443:443 -p 4567:4567 -P -t -i nodebb/docker:ubuntu
Then
docker start my-forum-nodebb
I had an issue with the redis address being in use, so I want to fix that and restart, but I am not sure how. Also, I would like to issue the command grunt in the project directory; again, not sure how.
My question is how can I interact with an app inside a docker container as if I had direct access to the project folder itself? Am I missing something?
All code in this answer is untested, as I'm currently at a computer without docker.
See whether the containers are still running
docker ps
Stop misconfigured containers
docker stop my-forum-redis
docker stop my-forum-nodebb
Remove misconfigured containers and their volumes
(The docker images they are based on will be retained.)
docker rm --volumes --force my-forum-nodebb
docker rm --volumes --force my-forum-redis
Start again
Then, issue your 3 commands again, now with the correct ports.
Execute arbitrary commands inside container
Also I would like to issue the command grunt in the project directory, again not sure how?
You probably want to do the following after the docker run --name my-forum-nodebb ... command but before docker start my-forum-nodebb.
docker run accepts a command to execute instead of the image's default command; note that it takes the image name (nodebb/docker:ubuntu), not the container name. Let's first use this to find out where in the container we'd land:
docker run nodebb/docker:ubuntu pwd
If that is the directory where you want to run grunt, just go forward with it:
docker run nodebb/docker:ubuntu grunt
If not, you'll have to stuff several commands into a single one. You can do that by invoking a shell:
docker run nodebb/docker:ubuntu bash -c 'cd /path/to/project/dir; grunt'
where /path/to/project/dir is to be replaced by where you want to run grunt.

running apache in docker

Ok, I have exhausted pretty much all threads and articles, but still can't get my Apache webserver to run in standalone mode in a CentOS Docker container.
Here is my simplified Dockerfile
# install apache
RUN yum -y install httpd
# start the webserver
ADD startservice /startservice
RUN chmod 775 /startservice
EXPOSE 80
CMD ["/startservice"]
My startservice script just has:
#!/usr/bin/sh
service httpd start
I can build fine, but can't seem to run the container in daemon/standalone mode. How do I do that?
I am using this to run the container in standalone mode
docker run -p 80:80 -d -t webserver
I have to log onto the container and start the service for the webserver to run.
docker run -p 80:80 -i -t webserver bash
service httpd start
This is a classic docker issue. The process you start must execute in the foreground, otherwise the container simply stops.
So, to be able to do so the following can be used in your startservice script:
#!/usr/bin/sh
service httpd start
# Tail the log file
tail -f /var/log/httpd/access_log
# Alternatively, you can tail any file or even /dev/null
#tail -f /dev/null
Note that there are also other ways of fixing this. One way is to use supervisord to keep your processes alive. The supervisord approach is cleaner and less hackish than the tail -f approach, and I would personally prefer that alternative.
Another alternative is simply not to start httpd as a service but instead to provide the -DFOREGROUND parameter. This keeps httpd attached to the shell (it will not fork off to a background process).
/usr/sbin/httpd -DFOREGROUND
For more info on http in foreground mode, check this question.
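In that case the startservice script shrinks to handing control straight to httpd (a sketch; exec makes httpd the container's main process so it receives Docker's stop signal):
#!/usr/bin/sh
# run Apache in the foreground instead of via the service wrapper
exec /usr/sbin/httpd -DFOREGROUND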
