Docker exits immediately after launching apache and neo4j

I have a script /init that launches apache and neo4j. This script is already in the image ubuntu:v14. The following is the content of /init:
service apache2 start
service neo4j start
From this image, I am creating another image with the following dockerfile
FROM ubuntu:v14
EXPOSE 80
ENTRYPOINT ["/init"]
When I run the command docker run -d ubuntu:v15, the container starts and then exits. As far as I understand, the -d option runs the container in the background. Also, the script /init launches two daemons. Why does the container exit?

In fact, I think your first problem is the missing shebang (#!) in the init file. If you did not add something like #!/bin/bash at the start, the container will complain like this:
shubuntu1@shubuntu1:~$ docker logs priceless_tu
standard_init_linux.go:207: exec user process caused "exec format error"
But even if you fix the above problem, you still can't keep your container running. The reason is the same as the other folks said: PID 1 should always be there, and in your case, after the service xxx start commands finish, PID 1 exits, which also results in the container exiting.
So, to overcome this problem you should have one command that never exits. A minimal working example for your reference:
Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && \
    apt-get install -y apache2
COPY init /
RUN chmod +x /init
EXPOSE 80
ENTRYPOINT ["/init"]
init:
#!/bin/bash
# you can add other service start here
# e.g. service neo4j start as you like if you have installed it already
# next will make apache run in foreground, so PID1 not exit.
/usr/sbin/apache2ctl -DFOREGROUND

When your Dockerfile specifies an ENTRYPOINT, the lifetime of the container is exactly the lifetime of that entrypoint process. Generally the behavior of service ... start is to start the service as a background process and then return immediately; so your /init script runs the two service commands and completes, and now that the entrypoint process has completed, the container exits.
Generally accepted best practice is to run only one process in a container. That's especially true when one of the processes is a database. In your case there are standard Docker Hub Apache httpd and neo4j images, so I'd start by using an orchestration tool like Docker Compose to run those two containers side-by-side.
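For illustration, a minimal docker-compose.yml sketch along those lines, using the official httpd and neo4j images from Docker Hub (the tags and port mappings here are illustrative assumptions, not taken from the question):
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "80:80"
  neo4j:
    image: neo4j:4.4
    ports:
      - "7474:7474"
      - "7687:7687"
Each container then runs exactly one foreground process, so neither exits prematurely, and docker-compose up -d starts both side by side.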

Related

RUN in Dockerfile with systemd as pid 1

Is it possible to use RUN in a dockerfile while having systemd as pid 1?
I am trying to execute an install script that requires systemd to be present and running on the system, inside a dockerfile. I.e.
FROM debian:stable
RUN apt install -y systemd
RUN someInstallScriptThatRequiresSystemd.sh
Is it possible to use RUN in a dockerfile while having systemd as pid 1?
No. Each RUN command generally runs in its own container, and the command itself (or the sh -c wrapper Docker provides) will be pid 1.
Also remember that a Docker image doesn't contain running processes, only a filesystem image and metadata that says how to run a container. You can't persist an image with systemd or other services running, and since the image doesn't contain running services, it doesn't make sense to restart a service in a Dockerfile.
Systemd wants to control a lot of things, most of which are host-level things that a container shouldn't be thinking about. You usually shouldn't run it in a container at all; if you must, the better setups remove most of the built-in system-setup tasks. Better is to use a lighter-weight supervisor like supervisord; better still is to run one process per container, and a minimal init like tini if you need it.
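As a sketch of that last suggestion, a minimal Dockerfile that runs tini as PID 1 supervising a single process; my-app here is a hypothetical binary standing in for the real workload:
FROM debian:stable
RUN apt-get update && apt-get install -y tini
# hypothetical application binary copied from the build context
COPY my-app /usr/local/bin/my-app
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["my-app"]
If you'd rather not bake an init into the image, docker run --init injects the same tini binary at runtime.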
If you just need to let this installer run systemctl, you can cause that "command" to exist:
RUN ln -s /bin/true /sbin/systemctl
RUN systemctl restart my-service # does nothing, successfully

How can I make the docker container to run a script every time when the container restart?

I know I can use the Dockerfile's CMD, RUN, and ENTRYPOINT instructions to run a script when the container starts, but how can I make the container run a script every time the container restarts on failure?
An ENTRYPOINT runs every time a container starts, or restarts. It's common practice to put startup configuration in a shell script that then execs the application's "true" entrypoint at the end. (See What purpose does using exec in docker entrypoint scripts serve? for why exec is important.)
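A minimal sketch of that pattern, assuming ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile and a CMD naming the real application (the log path is just for illustration):
#!/bin/sh
# entrypoint.sh: runs on every start and on every restart
echo "container (re)started at $(date)" >> /tmp/restarts.log
# hand PID 1 over to the real application (the CMD arguments)
exec "$@"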
Remember, Docker is really just a wrapper around filesystem, process, and network namespacing. It can't restart your container in any way other than rerunning the same process it started in the first place.
You can try it yourself with an invocation something like this:
docker run -d --restart=always --entrypoint=sh alpine -c "sleep 5; echo Exiting; exit"
If you docker logs -f that container, you'll see the Exiting come out every 5 seconds. Note that the container stopping will also stop the log following, though, so you'll have to run it again to see the next restart.

Docker exits after starting a command which goes into the background. How can we then take benefit of that service?

I am starting a command which goes into the background on its own. On the terminal it appears that the command shows some output on the screen and exits.
On my host I can check that the command is still running by finding it in
$ ps -aux
So Docker thinks the command is done and exits.
Based on that background-running command I want to run another command using docker exec.
So how do I achieve this?
A docker container lives as long as the process that you have specified it to run, has not exited.
Docker containers do not make use of daemons and services - you are supposed to run your process in the foreground of the container. This is the recommended usage of containers - although you can force it to do otherwise if you want to.
Something that has helped me a lot conceptually, is to think more of a docker container as a "process isolation" mechanism, and less of it as a box of software that you can start and stop.
You may find this guide useful if you want to start multiple processes in the container: https://docs.docker.com/config/containers/multi-service_container/
A little trick is to add an indefinitely running command to the end of your Docker ENTRYPOINT or CMD. One commonly used is tail -f /dev/null, like this:
service myservice start && tail -f /dev/null
I cannot say I can recommend this, but it will quite likely do what you want it to.
I will include a minimal example here, of how this can be used. Here's a Dockerfile where the ENTRYPOINT is specified to start a service (running in the background), and then tailing the null device, /dev/null:
FROM ubuntu
RUN apt-get update
RUN apt-get install -y apache2
ENTRYPOINT service apache2 start && tail -f /dev/null
Build it with:
docker build -t servicetest:01 .
Start it with:
docker run -p 8080:80 servicetest:01
And visit http://localhost:8080 to see it working

Why docker container exits immediately

I run a container in the background using
docker run -d --name hadoop h_Service
it exits quickly. But if I run in the foreground, it works fine. I checked logs using
docker logs hadoop
there was no error. Any ideas?
DOCKERFILE
FROM java_ubuntu_new
RUN wget http://archive.cloudera.com/cdh4/one-click-install/precise/amd64/cdh4-repository_1.0_all.deb
RUN dpkg -i cdh4-repository_1.0_all.deb
RUN curl -s http://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/archive.key | apt-key add -
RUN apt-get update
RUN apt-get install -y hadoop-0.20-conf-pseudo
RUN dpkg -L hadoop-0.20-conf-pseudo
USER hdfs
RUN hdfs namenode -format
USER root
RUN apt-get install -y sudo
ADD . /usr/local/
RUN chmod 777 /usr/local/start-all.sh
CMD ["/usr/local/start-all.sh"]
start-all.sh
#!/usr/bin/env bash
/etc/init.d/hadoop-hdfs-namenode start
/etc/init.d/hadoop-hdfs-datanode start
/etc/init.d/hadoop-hdfs-secondarynamenode start
/etc/init.d/hadoop-0.20-mapreduce-tasktracker start
sudo -u hdfs hadoop fs -chmod 777 /
/etc/init.d/hadoop-0.20-mapreduce-jobtracker start
/bin/bash
This did the trick for me:
docker run -dit ubuntu
After that, I checked that the container was running using:
docker ps -a
To attach to the container again:
docker attach CONTAINER_NAME
TIP: To exit without stopping the container, type ^P^Q.
A docker container exits when its main process finishes.
In this case it will exit when your start-all.sh script ends. I don't know enough about hadoop to tell you how to do it in this case, but you need to either leave something running in the foreground or use a process manager such as runit or supervisord to run the processes.
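For example, a minimal supervisord sketch; the program name and command path are hypothetical placeholders, since each program entry must be a foreground command for one of your actual services:
[supervisord]
nodaemon=true

; hypothetical: run the namenode in the foreground
[program:namenode]
command=/usr/bin/hdfs namenode
autorestart=true
The Dockerfile would then end with something like CMD ["supervisord", "-c", "/etc/supervisord.conf"] so that supervisord itself is the foreground process.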
I think you must be mistaken about it working if you don't specify -d; it should have exactly the same effect. I suspect you launched it with a slightly different command or using -it which will change things.
A simple solution may be to add something like:
while true; do sleep 1000; done
to the end of the script. I don't like this, however, as the script should really be monitoring the processes it kicked off.
(I should say I stole that code from https://github.com/sequenceiq/hadoop-docker/blob/master/bootstrap.sh)
I would like to extend, or dare I say improve, the answer mentioned by camposer.
When you run
docker run -dit ubuntu
you are basically running the container in the background in interactive mode.
When you attach to the container and exit it by CTRL+D (the most common way to do it), you stop the container, because you just killed the main process which you started your container with the above command.
Taking advantage of an already running container, I would just fork another bash process and get a pseudo-TTY by running:
docker exec -it <container ID> /bin/bash
Why docker container exits immediately?
If you want to force the image to hang around (in order to debug something or examine state of the file system) you can override the entry point to change it to a shell:
docker run -it --entrypoint=/bin/bash myimagename
Whenever I want a container to stay up after the script execution finishes, I add
&& tail -f /dev/null
at the end of the command. So it should be:
/usr/local/start-all.sh && tail -f /dev/null
If you just need to have a container running without exiting, run
docker run -dit --name MY_CONTAINER MY_IMAGE:latest
and then
docker exec -it MY_CONTAINER /bin/bash
and you will be in the bash shell of the container, and it should not exit.
Or if the exit happens during docker-compose, use
command: bash -c "MY_COMMAND --wait"
as already stated by two other answers here (though neither refers that clearly to docker-compose, which is why I mention the "wait" trick again).
Update: I tried this --wait again later and it did not work. It must have been an argument for some self-written Python or shell code, not a general Docker flag; if I ever find the time, I will look it up. Perhaps it also just shadowed the workaround from another answer in this Q/A.
Add this to the end of Dockerfile:
CMD tail -f /dev/null
Sample Docker file:
FROM ubuntu:16.04
# other commands
CMD tail -f /dev/null
A nice approach would be to start up your processes and services in the background and use the wait [n ...] command at the end of your script. In bash, the wait command forces the current process to:
Wait for each specified process and return its termination status. If n is not given, all currently active child processes are waited for, and the return status is zero.
I got this idea from Sébastien Pujadas' start script for his elk build.
Taking from the original question, your start-all.sh would look something like this...
#!/usr/bin/env bash
/etc/init.d/hadoop-hdfs-namenode start &
/etc/init.d/hadoop-hdfs-datanode start &
/etc/init.d/hadoop-hdfs-secondarynamenode start &
/etc/init.d/hadoop-0.20-mapreduce-tasktracker start &
sudo -u hdfs hadoop fs -chmod 777 /
/etc/init.d/hadoop-0.20-mapreduce-jobtracker start &
wait
You need to run it with the -d flag to leave it running as a daemon in the background.
docker run -d -it ubuntu bash
My practice is to start, in the Dockerfile, a shell which will not exit immediately: CMD ["sh", "-c", "service ssh start; bash"], then run docker run -dit image_name. This way the (ssh) service and the container stay up and running.
I added a read shell statement at the end. This keeps the main process of the container (the startup shell script) running.
Adding
exec "$@"
at the end of my shell script was my fix!
Coming from duplicates, I don't see any answer here which addresses the very common antipattern of running your main workload as a background job, and then wondering why Docker exits.
In simple terms, if you have
my-main-thing &
then either take out the & to run the job in the foreground, or add
wait
at the end of the script to make it wait for all background jobs.
It will then still exit if the main workload exits, so maybe run this in a while true loop to force it to restart forever:
while true; do
  my-main-thing &
  # other things which need to happen while the main
  # workload runs in the background, if you have such things
  wait
done
(Notice also how to write while true. It's common to see silly things like while [ true ] or while [ 1 ] which coincidentally happen to work, but don't mean what the author probably imagined they ought to mean.)
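For instance, [ word ] with a single word only tests that the word is a non-empty string, so even this loops forever:
# "false" is a non-empty string here, not the false command,
# so the test always succeeds and the loop never terminates
while [ false ]; do echo "still looping"; sleep 1; done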
There are many possible ways to cause a Docker container to exit immediately. For me, it was a problem with my Dockerfile. There was a bug in that file: I had ENTRYPOINT ["dotnet", "M4Movie_Api.dll] instead of ENTRYPOINT ["dotnet", "M4Movie_Api.dll"]. As you can see, I had missed one quotation mark (") at the end.
To analyze the problem I started my container and quickly attached to it so that I could see what the exact problem was.
C:\SVenu\M4Movie\Api\Api>docker start 4ea373efa21b
C:\SVenu\M4Movie\Api\Api>docker attach 4ea373efa21b
Where 4ea373efa21b is my container ID. This led me to the actual issue.
After finding the issue, I had to build, restore, and publish my container again.
If you check the Dockerfile of some containers, for example
fballiano/magento2-apache-php
you'll see that at the end of the file it adds the following command:
while true; do sleep 1; done
Now, what I recommend is that you run
docker container ls --all | grep 127
Then you will see whether your container exited with an error. If it exits with 0, it probably just needs one of these commands that sleep forever.
@camposer
The solution that works for me. I am running Docker on my MacBook. The container was not starting, but thanks to that method, I was able to start it correctly:
docker run -dit ubuntu
Since the image is Linux-based, one thing to check is to make sure any shell scripts used in the container have Unix line endings. If they have a ^M at the end, then they have Windows line endings. One way to fix them is with dos2unix on /usr/local/start-all.sh to convert them from Windows to Unix. Running the container in interactive mode can help you figure out other problems. You could have a file name typo or something. See https://en.wikipedia.org/wiki/Newline
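If dos2unix isn't available in the image, a quick sketch of the same fix with GNU sed, using the script path from the question:
# strip trailing carriage returns (^M) in place
sed -i 's/\r$//' /usr/local/start-all.sh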

running apache in docker

OK, I have exhausted pretty much all threads and articles, but still can't get my Apache webserver to run in standalone mode in a CentOS Docker container.
Here is my simplified Dockerfile
FROM centos
# install apache
RUN yum -y install httpd
# start the webserver
ADD startservice /startservice
RUN chmod 775 /startservice
EXPOSE 80
CMD ["/startservice"]
My startservice script just has
#!/usr/bin/sh
service httpd start
I can build fine, but can't seem to run the container in daemon/standalone mode. How do I do that?
I am using this to run the container in standalone mode
docker run -p 80:80 -d -t webserver
I have to log onto the container and start the service for the webserver to run.
docker run -p 80:80 -i -t webserver bash
service httpd start
This is a classic docker issue. The process you start must execute in the foreground, otherwise the container simply stops.
So, to be able to do so the following can be used in your startservice script:
#!/usr/bin/sh
service httpd start
# Tail the log file
tail -f /var/log/httpd/access_log
# Alternatively, you can tail any file or even /dev/null
#tail -f /dev/null
Note that there are also other ways of fixing this. One way is to use supervisord, which keeps your processes alive. The supervisord approach is cleaner and less hackish than the tail -f approach, and I would personally prefer that alternative.
Another alternative is simply that you do not start httpd as a service but instead provide the -DFOREGROUND parameter. This will make httpd stay attached to the foreground (and not fork off to a background process).
/usr/sbin/httpd -DFOREGROUND
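Putting it together, a minimal sketch of the question's Dockerfile using that approach (the centos:7 base tag is an assumption):
FROM centos:7
RUN yum -y install httpd
EXPOSE 80
# keep httpd attached to the foreground so PID 1 never exits
CMD ["/usr/sbin/httpd", "-DFOREGROUND"]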
For more info on httpd in foreground mode, check this question.
