Running Apache in Docker

OK, I have exhausted pretty much all threads and articles, but still can't get my Apache web server to run in standalone mode in a CentOS Docker container.
Here is my simplified Dockerfile
# install apache
RUN yum -y install httpd
# start the webserver
ADD startservice /startservice
RUN chmod 775 /startservice
EXPOSE 80
CMD ["/startservice"]
My startservice script just has
#!/usr/bin/sh
service httpd start
I can build fine, but I can't seem to run the container in daemon/standalone mode. How do I do that?
I am using this to run the container in standalone mode
docker run -p 80:80 -d -t webserver
I have to log onto the container and start the service for the webserver to run.
docker run -p 80:80 -i -t webserver bash
service httpd start

This is a classic Docker issue. The process you start must run in the foreground, otherwise the container simply stops.
To achieve that, the following can be used in your startservice script:
#!/usr/bin/sh
service httpd start
# Tail the log file
tail -f /var/log/httpd/access_log
# Alternatively, you can tail any file or even /dev/null
#tail -f /dev/null
Note that there are also other ways of fixing this. One way is to use supervisord, which keeps your processes alive. The supervisord approach is cleaner and less hackish than the tail -f approach, and I would personally prefer that alternative.
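A minimal sketch of the supervisord approach for the CentOS image above (the package name and config-file paths are assumptions for CentOS 7 with EPEL; adjust for your base image):
Dockerfile:
FROM centos:7
# supervisor is packaged in EPEL on CentOS (assumption)
RUN yum -y install epel-release && yum -y install httpd supervisor
COPY httpd.ini /etc/supervisord.d/httpd.ini
EXPOSE 80
# -n keeps supervisord itself in the foreground, so it stays PID 1
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisord.conf"]
httpd.ini:
[program:httpd]
# run httpd in the foreground; supervisord restarts it if it dies
command=/usr/sbin/httpd -DFOREGROUND
autorestart=true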
Another alternative is simply to not start httpd as a service but instead pass the -DFOREGROUND parameter. This keeps httpd attached to the shell (it does not fork off into a background process).
/usr/sbin/httpd -DFOREGROUND
For more info on httpd in foreground mode, check this question.
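Applied to the Dockerfile from the question, a sketch of that approach could drop the wrapper script entirely (the FROM line is an assumption, since the original Dockerfile was trimmed):
FROM centos:7
RUN yum -y install httpd
EXPOSE 80
# keep httpd in the foreground so it stays the container's main process
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]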

Related

Docker exits immediately after launching apache and neo4j

I have a script /init that launches apache and neo4j. This script is already in the image ubuntu:14. The following is the content of /init:
service apache2 start
service neo4j start
From this image, I am creating another image with the following dockerfile
FROM ubuntu:v14
EXPOSE 80 80
ENTRYPOINT ["/init"]
When I run the command docker run -d ubuntu:v15, the container starts and then exits. As far as I understood, the -d option runs the container in the background. Also, the script /init launches two daemons. Why does the container exit?
In fact, I think your first problem is the #! in the init file: if you do not add something like #!/bin/bash at the start, the container will complain like this:
shubuntu1#shubuntu1:~$ docker logs priceless_tu
standard_init_linux.go:207: exec user process caused "exec format error"
But even if you fix the above problem, you still can't keep your container running, for the reason the other folks gave: PID 1 must always be there. In your case, after service xxx start finishes, PID 1 exits, which also makes the container exit.
So, to conquer this problem you should have one command that never exits. A minimal workable example for your reference:
Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && \
apt-get install -y apache2
COPY init /
RUN chmod +x /init
EXPOSE 80
ENTRYPOINT ["/init"]
init:
#!/bin/bash
# you can add other service start here
# e.g. service neo4j start as you like if you have installed it already
# next will make apache run in foreground, so PID1 not exit.
/usr/sbin/apache2ctl -DFOREGROUND
When your Dockerfile specifies an ENTRYPOINT, the lifetime of the container is exactly the length of whatever its process is. Generally the behavior of service ... start is to start the service as a background process and then return immediately; so your /init script runs the two service commands and completes, and now that the entrypoint process is completed, the container exits.
Generally accepted best practice is to run only one process in a container. That's especially true when one of the processes is a database. In your case there are standard Docker Hub Apache httpd and neo4j images, so I'd start by using an orchestration tool like Docker Compose to run those two containers side-by-side.
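As a rough sketch of that side-by-side setup (the image tags and ports are assumptions; check each image's documentation), a docker-compose.yml could look like:
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "80:80"
  neo4j:
    image: neo4j:3.5
    ports:
      - "7474:7474"
      - "7687:7687"
Then docker-compose up -d starts both containers and keeps each one running on its own foreground process.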

Docker container is not running even with -d

I'm French and new here (so I don't know how Stack Overflow and its community work); I'm going to try to adapt.
So, my first problem is the following:
I run a Docker container from an image I created with a Dockerfile (it is a DNS container).
In the Dockerfile, the container has to start script.sh when it starts.
But after using this:
docker run -d -ti -p 53:53 alex/dns
(I use -p 53:53 because it is DNS.)
I can see my DNS running at the end of my script.sh, but when I do:
docker ps -a, the container is not running.
I'm a novice with Docker; I started learning it two days ago.
I tried to add (one by one of course):
CMD ["bash"]
CMD ["/bin/bash"]
to run bash and make sure the container does not power off.
I tried to add -d to the docker run command.
I tried to use :
docker commit ti alex/dns
and
docker exec -ti alex/dns /bin/bsh
My dockerfile file :
FROM debian
...
RUN apt-get install bind9
...
ADD script.sh /usr/bin/script.sh
...
ENTRYPOINT ["/bin/bash", "script.sh]
CMD ["/bin/bash"]
My file script.sh :
service bind9 stop
*It copies and replaces the conf files for bind9*
service bind9 restart
I hope that there are not too many mistakes and that I managed to make myself understood.
I expect the DNS container to stay running so I can use it with docker exec.
But right now, after docker run, the container starts and then stops just after my script finishes. Yes, the DNS server is running; before closing, the container tells me [ok] Bind9 running or something like that. But after that the container stops.
I suspect the problem you're facing is that your container will terminate once service bind9 restart completes.
You need to have a foreground process running to keep the container running.
I'm unfamiliar with bind9 but I recommend you explore ways to run bind9 in the foreground in your container.
Your command to run the container is correct:
docker run -d -ti -p 53:53 alex/dns
You may need to:
RUN apt-get update && apt-get -y install bind9
You will likely need something like (don't know):
ENTRYPOINT ["/bind9"]
Googled it ;-)
https://manpages.debian.org/jessie/bind9/named.8.en.html
After you've configured it, you can run it as a foreground process:
ENTRYPOINT ["named","-g"]

Docker exits after starting a command which goes into the background. Then how can we take benefit of that service?

I am starting a command which goes into the background on its own. On the terminal it appears that the command shows some output on the screen and exits.
On my host I can check that the command is still running by finding it in
$ ps -aux
So Docker thinks the command is done and it exits.
Based on that background running command, I want to run another command using --exec.
So how do I achieve this?
A Docker container lives only as long as the process that you have specified it to run has not exited.
Docker containers do not make use of daemons and services - you are supposed to run your process in the foreground of the container. This is the recommended usage of containers - although you can force it to do otherwise if you want to.
Something that has helped me a lot conceptually, is to think more of a docker container as a "process isolation" mechanism, and less of it as a box of software that you can start and stop.
You may find this guide useful if you want to start multiple processes in the container: https://docs.docker.com/config/containers/multi-service_container/
A little trick is to add an indefinitely running command to the end of your Docker ENTRYPOINT or CMD. One commonly used command is tail -f /dev/null, like this:
systemctl start myservice && tail -f /dev/null
I cannot say I can recommend this, but it will quite likely do what you want it to.
I will include a minimal example here, of how this can be used. Here's a Dockerfile where the ENTRYPOINT is specified to start a service (running in the background), and then tailing the null device, /dev/null:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2
ENTRYPOINT service apache2 start && tail -f /dev/null
Build it with:
docker build -t servicetest:01 .
Start it with:
docker run -p 8080:80 servicetest:01
And visit http://localhost:8080 to see it working

Docker container exits when using -it option

Consider the following Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get install -y apache2 && \
apt-get clean
ENTRYPOINT ["apache2ctl", "-D", "FOREGROUND"]
When running the container with the command docker run -p 8080:80 <image-id>, the container starts and remains running, allowing the default Apache web page to be accessed at http://localhost:8080 from the host, as expected. With this run command, however, I am not able to quit the container using Ctrl+C, also as expected, since the container was not launched with the -it option. Now, if the -it option is added to the run command, the container exits immediately after startup. Why is that? Is there an elegant way to have Apache run in the foreground while exiting on Ctrl+C?
This behaviour is caused by Apache and it is not an issue with Docker. Apache is designed to shut down gracefully when it receives the SIGWINCH signal. When running the container interactively, the SIGWINCH signal is passed from the host to the container, effectively signalling Apache to shut down gracefully. On some hosts the container may exit immediately after it is started. On other hosts the container may stay running until the terminal window is resized.
It is possible to confirm that this is the source of the issue after the container exits by reviewing the Apache log file as follows:
# Run container interactively:
docker run -it <image-id>
# Get the ID of the container after it exits:
docker ps -a
# Copy the Apache log file from the container to the host:
docker cp <container-id>:/var/log/apache2/error.log .
# Use any text editor to review the log file:
vim error.log
# The last line in the log file should contain the following:
AH00492: caught SIGWINCH, shutting down gracefully
Sources:
https://bz.apache.org/bugzilla/show_bug.cgi?id=50669
https://bugzilla.redhat.com/show_bug.cgi?id=1212224
https://github.com/docker-library/httpd/issues/9
All that you need to do is pass the -d option to the run command:
docker run -d -p 8080:80 my-container
As yamenk mentioned, daemonizing works because you send it to the background and decouple the window resizing.
Since the follow-up post mentioned that running in the foreground may have been desirable, there is a good way to simulate that experience after daemonizing:
docker logs -f container-name
This will drop the usual stdout like "GET / HTTP..." connection messages back onto the console so you can watch them flow.
Now you can resize the window and stuff and still see your troubleshooting info.
I am also experiencing this problem on WSL 2 under Windows 10, Docker Engine v20.10.7.
Workaround:
# start bash in httpd container:
docker run --rm -ti -p 80:80 httpd:2.4.48 /bin/bash
# inside container execute:
httpd -D FOREGROUND
Now Apache httpd keeps running until you press CTRL-C or resize(?!) the terminal window.
After closing httpd, type:
exit
to leave the container
A workaround is to pipe the output to cat:
docker run -it -p 8080:80 <image-id> | cat
NOTE: It is important to use -i and -t.
Ctrl+C will work and resizing the terminal will not shut down Apache.

Keeping alive Docker containers with supervisord

I end my Debian Dockerfile with these lines:
EXPOSE 80 22
COPY etc/supervisor/conf.d /etc/supervisor/conf.d
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
In /etc/supervisor/conf.d/start.conf file:
[program:ssh]
command=/usr/sbin/service ssh restart
[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'
[program:systemctl]
command=/bin/systemctl daemon-reload
[program:systemctl]
command=/bin/systemctl start php7-fpm.service
If I try to run this Docker image with the following command:
$ docker run -d -p 8080:80 -p 8081:22 lanti/debian
It immediately stops running. If I try to run it in the foreground:
$ docker run -it -v /home/core/share:/root/share -p 8080:80 -p 8081:22 lanti/debian
It's the same: immediate exit. If I run it with a bash CMD:
$ docker run --rm -it -v /home/core/share:/root/share -p 8080:80 -p 8081:22 lanti/debian bash
It stays active in the console, but the commands predefined for supervisor do not run, so I need to run $ service supervisor restart inside the container, otherwise Nginx and SSH won't be started.
How can I start a Docker container with multiple commands run at startup? In the past I used ExecStartPost lines in a systemd file under the host OS, but because of that the systemd file became complex, so I'm trying to move the pre-start commands into the container, to run automatically on any type of startup.
This docker container will have nginx, php, ssh, phpmyadmin and mysql in the future. I don't want multiple containers.
Thank You for your help!
Let's preface this by saying that running the kitchen sink in a Docker container is not a best practice. Docker is not a virtual machine.
That said, a few problems.
Just like the processes that supervisor controls, supervisor itself should NOT daemonize. Add -n.
I'm not entirely sure why you expect, need, or want to have both systemd and supervisor running. Most Docker containers do not have a functioning init system. Why not just use supervisor for everything? Unless Docker has significantly changed in the last couple of versions, systemd inside the container will not work the way you think it should.
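For illustration, a sketch of the "supervisor for everything" idea for this stack (the exact program commands and binary names are assumptions and depend on your Debian and PHP versions). The CMD gets -n so supervisord stays in the foreground:
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
and /etc/supervisor/conf.d/start.conf runs each program in the foreground instead of calling service or systemctl:
[program:ssh]
command=/usr/sbin/sshd -D
[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'
[program:php-fpm]
command=/usr/sbin/php-fpm7.0 --nodaemonize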
