If I docker run a container with some script inside using --rm and --detach, how can I find the RC (return code) of that container? I.e., how do I know whether the script inside the container finished successfully or failed?
Because of the --rm flag, I can't see that container in docker ps --all after it finishes.
You can't, since you're explicitly asking Docker to clean up after the container. That cleanup includes all of the metadata, like the exit status.
On the other hand, if you're actively planning to check the status code anyway, you'll have the opportunity to do the relevant cleanup yourself:
CONTAINER_ID=$(docker run -d ...)
...
docker stop "$CONTAINER_ID"   # if needed
# `docker wait` prints the container's exit status on stdout, so capture that
CONTAINER_RC=$(docker wait "$CONTAINER_ID")
docker rm "$CONTAINER_ID"
if [ "$CONTAINER_RC" -ne 0 ]; then
    echo "container failed" >&2
fi
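If you prefer, the exit code can also be read from the container's metadata before you remove it; a small equivalent sketch:
# Read the recorded exit code from the stopped container (before docker rm)
CONTAINER_RC=$(docker inspect --format '{{.State.ExitCode}}' "$CONTAINER_ID")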
The best way to check whether the script works is to first capture its output, e.g. command1 > everything.txt 2>&1.
Lastly, you can go inside the running container using docker exec -it <mycontainer> bash
Official Docker images like MySQL can be run like this:
docker run -d --name mysql_test mysql/mysql-server:8.0.13
And it can run indefinitely in the background.
I want to try to create an image which does the same, specifically a Flask development server (just for testing). But my container exits immediately. My Dockerfile is like this:
FROM debian:buster
ENV TERM xterm
RUN XXXX # some apt-get and Python installation stuffs
ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0:5000" ]
EXPOSE 80
EXPOSE 5000
USER myuser
WORKDIR /home/myuser
However, it exited immediately as soon as it was run. I also tried "bash" as the entrypoint, just to make sure it wasn't a Flask configuration issue, and it also exited.
How do I make it so that it runs as THE process in the container?
EDIT
OK, someone posted below (but later deleted) that the command to test with is tail -f /dev/null, and it does run indefinitely. I still don't understand why bash doesn't work as a process which doesn't exit (does it?). But my Flask configuration is probably off.
EDIT 2
I see that running without the -d flag prints out stdout (and stderr), so I can diagnose the problem.
Let's clear things up.
In general, a container exits as soon as its entrypoint process finishes.
In your case, without being a Python expert, I would expect ENTRYPOINT [ "flask", "run", "--host", "0.0.0.0:5000" ] to be enough to keep the container alive. But I guess you have some configuration error, and due to that error the container exited before the flask command could start serving. You can validate this by running docker ps -a and inspecting the exit code (possibly 1).
Let's now discuss the questions in your edits.
The key part of your misunderstanding derives from the -d flag.
You are right to think that setting bash as the entrypoint would be enough to keep the container alive, but you need to attach to that shell.
When running in detached mode (-d), the container will execute the bash command, but as soon as no one is attached to that shell, it will exit. In addition, using this flag prevents you from viewing the container's logs live (although you may use docker logs container_id to debug), which is very useful when you are in an early phase of setting things up. So I recommend using this flag only when you are sure that everything works as intended.
To attach to the bash shell and keep the container alive, you should use the -it flags so that the bash shell is attached to the terminal invoking the docker run command.
-t : Allocate a pseudo-tty
-i : Keep STDIN open even if not attached
Please also consult official documentation about foreground vs background mode.
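For illustration, a minimal comparison (myimage is a placeholder image name):
# Detached: the container stays up only as long as its main process does;
# output is available afterwards via docker logs <container_id>
docker run -d myimage
# Interactive: bash gets a TTY attached to your terminal, so the shell
# (and therefore the container) stays alive until you exit it
docker run -it myimage bash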
The answer to your edit is: when you do docker run <container> bash, it will literally call bash and exit 0, because the command (bash) completed successfully. Here bash is just the command being run; with no terminal or input attached, it has nothing to do and returns immediately.
If you ran docker run -it <container> tail -f /dev/null and then, from another terminal, docker exec -it <container> /bin/bash, you'd drop into the shell, because that is the command you ran.
Your Dockerfile doesn't have a persistent command to keep the container running; in MySQL's case, the image runs mysqld, which starts a server as PID 1.
When PID 1 exits, the container stops.
Your entrypoint is most likely failing to start, or starting and exiting because of how your command is running.
I would try changing your entrypoint to a long-running foreground command, such as tail -f /dev/null, while you sort out the flask configuration.
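For example, a debugging run along those lines might look like this (the image and container names are placeholders):
# Override the entrypoint with a no-op foreground process, then open a shell inside
docker run -d --name flask-debug --entrypoint tail myimagename -f /dev/null
docker exec -it flask-debug /bin/bash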
Without making the title too long, here is the scenario...
I have two scripts:
The first script (host_startup.sh) checks whether a website is up. When it finally responds, it opens the website in the default browser:
URL=http://localhost:9876
until contents=$(wget -q --spider --no-check-certificate "$URL")
do
sleep 1
done
xdg-open ${URL} &
The second script (run.sh) starts the host_startup script and then starts up a Docker container which serves up a webpage at the aforementioned address:
(../../host_startup.sh) &
docker run --rm -it \
-p 9786:9786 \
company:image
Note that the docker command runs with the --rm flag. However, when I run the run.sh script and Ctrl+C the process, the Docker container is still running; specifically, docker ps still shows the container.
I would like Ctrl+C to stop the container and clean it up.
I thought that the container would stop and be cleaned up, because I put the host_startup.sh script in the background, NOT the docker run command...
Please tell me how I can achieve the desired behavior.
Appearantly "Bash does not forward signals like SIGTERM to processes it is currently waiting on".
So you could modify your run.sh to:
(../../host_startup.sh) &
exec docker run --rm -it \
-p 9786:9786 \
company:image
This replaces the forked shell with the docker process (instead of the shell waiting on it), so signals like SIGINT go straight to docker.
Based on Stuart P. Bentley's answer on the Unix Stack Exchange.
Alternatively, you could manually listen for and act on signals, using a modified version of cuonglm's answer to the same question.
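A rough sketch of that signal-handling variant, assuming it is acceptable to name the container (webtest here is a placeholder) and to drop -it, since docker run is backgrounded:
#!/usr/bin/env bash
(../../host_startup.sh) &
# Stop (and, via --rm, remove) the container when the script is interrupted
cleanup() {
    docker stop webtest >/dev/null 2>&1 || true
}
trap cleanup INT TERM
docker run --rm --name webtest -p 9786:9786 company:image &
wait $!    # returns early on SIGINT/SIGTERM, after which the trap stops the container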
I want to make a simple bash script which runs a docker container with -d and then does something else if and only if the container has finished running its CMD. How can I do this while avoiding timing issues, given that the docker container can take a while to finish starting up?
My only thought was that the Dockerfile for the container will need to create some sort of state on the container itself when it's done and then the bash script can poll until the state file is there. Is there a better / standard way to do something like this?
Essentially I need a way for the host that ran a docker container with -d to be able to tell when it's ready.
Update
I made it work with the log-tailing method, but it seems a bit hacky:
docker run -d \
    --name sauceconnect \
    sauceconnect

# Tail logs until 'Sauce Connect is up'
docker logs -f sauceconnect | while read -r LINE
do
    echo "$LINE"
    if [[ "$LINE" == *"Sauce Connect is up"* ]]; then
        pkill -P $$ docker    # kill the `docker logs -f` so the loop ends
    fi
done
You should be fine checking the logs via docker logs -f <container_name_or_ID>
-f : same as tail -f
For example, when the CMD finishes, it can emit a log line such as JOB ABC is successfully started.
Your script can detect that line and run the rest of the jobs once it sees it.
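As an example, a minimal polling sketch along those lines (the container name myjob and the log message are placeholders):
# Poll the container logs until the readiness line shows up, then continue
until docker logs myjob 2>&1 | grep -q "JOB ABC is successfully started"; do
    sleep 1
done
echo "container is ready, running the remaining steps..."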
I run a container in the background using
docker run -d --name hadoop h_Service
It exits quickly. But if I run it in the foreground, it works fine. I checked the logs using
docker logs hadoop
there was no error. Any ideas?
DOCKERFILE
FROM java_ubuntu_new
RUN wget http://archive.cloudera.com/cdh4/one-click-install/precise/amd64/cdh4-repository_1.0_all.deb
RUN dpkg -i cdh4-repository_1.0_all.deb
RUN curl -s http://archive.cloudera.com/cdh4/ubuntu/precise/amd64/cdh/archive.key | apt-key add -
RUN apt-get update
RUN apt-get install -y hadoop-0.20-conf-pseudo
RUN dpkg -L hadoop-0.20-conf-pseudo
USER hdfs
RUN hdfs namenode -format
USER root
RUN apt-get install -y sudo
ADD . /usr/local/
RUN chmod 777 /usr/local/start-all.sh
CMD ["/usr/local/start-all.sh"]
start-all.sh
#!/usr/bin/env bash
/etc/init.d/hadoop-hdfs-namenode start
/etc/init.d/hadoop-hdfs-datanode start
/etc/init.d/hadoop-hdfs-secondarynamenode start
/etc/init.d/hadoop-0.20-mapreduce-tasktracker start
sudo -u hdfs hadoop fs -chmod 777 /
/etc/init.d/hadoop-0.20-mapreduce-jobtracker start
/bin/bash
This did the trick for me:
docker run -dit ubuntu
After that, I checked the running processes using:
docker ps -a
To attach to the container again:
docker attach CONTAINER_NAME
TIP: To exit without stopping the container, type ^P^Q.
A docker container exits when its main process finishes.
In this case it will exit when your start-all.sh script ends. I don't know enough about hadoop to tell you how to do it in this case, but you need to either leave something running in the foreground or use a process manager such as runit or supervisord to run the processes.
I think you must be mistaken about it working when you don't specify -d; it should have exactly the same effect. I suspect you launched it with a slightly different command, or used -it, which will change things.
A simple solution may be to add something like:
while true; do sleep 1000; done
to the end of the script. I don't like this however, as the script should really be monitoring the processes it kicked off.
(I should say I stole that code from https://github.com/sequenceiq/hadoop-docker/blob/master/bootstrap.sh)
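For what it's worth, a rough sketch of what such monitoring could look like (the process pattern my-main-service is a placeholder, not the actual Hadoop process name):
# Keep the container alive only while the main service process is running
while pgrep -f "my-main-service" > /dev/null; do
    sleep 10
done
echo "main service stopped, exiting container" >&2
exit 1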
I would like to extend, or dare I say improve, the answer mentioned by camposer.
When you run
docker run -dit ubuntu
you are basically running the container in the background in interactive mode.
When you attach to the container and exit it with CTRL+D (the most common way to do it), you stop the container, because you just killed the main process that the container was started with in the command above.
To take advantage of the already running container, I would just fork another bash process and get a pseudo-TTY by running:
docker exec -it <container ID> /bin/bash
Why does the docker container exit immediately?
If you want to force the image to hang around (in order to debug something or examine the state of the file system), you can override the entrypoint to change it to a shell:
docker run -it --entrypoint=/bin/bash myimagename
Whenever I want a container to stay up after the script execution finishes, I add
&& tail -f /dev/null
at the end of command. So it should be:
/usr/local/start-all.sh && tail -f /dev/null
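One caveat, in case it matters for your script: with &&, tail only starts if start-all.sh exits successfully. If you also want the container to stay up for inspection when the script fails, a plain ; could be used instead:
/usr/local/start-all.sh; tail -f /dev/null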
If you just need to have a container running without it exiting, run
docker run -dit --name MY_CONTAINER MY_IMAGE:latest
and then
docker exec -it MY_CONTAINER /bin/bash
and you will be in the bash shell of the container, and it should not exit.
Or if the exit happens during docker-compose, use
command: bash -c "MY_COMMAND --wait"
as already stated by two other answers here (though not clearly in the context of docker-compose, which is why I mention the "wait" trick again).
I tried this --wait again later and it did not work; it must have been an argument for some self-written Python or shell code. If I ever find the time, I will look it up. Perhaps it also just shadowed the workaround from another answer in this Q&A.
Add this to the end of Dockerfile:
CMD tail -f /dev/null
Sample Docker file:
FROM ubuntu:16.04
# other commands
CMD tail -f /dev/null
A nice approach would be to start up your processes and services running them in the background and use the wait [n ...] command at the end of your script. In bash, the wait command forces the current process to:
Wait for each specified process and return its termination status. If n is not given, all currently active child processes are waited for, and the return status is zero.
I got this idea from Sébastien Pujadas' start script for his ELK build.
Taking from the original question, your start-all.sh would look something like this...
#!/usr/bin/env bash
/etc/init.d/hadoop-hdfs-namenode start &
/etc/init.d/hadoop-hdfs-datanode start &
/etc/init.d/hadoop-hdfs-secondarynamenode start &
/etc/init.d/hadoop-0.20-mapreduce-tasktracker start &
sudo -u hdfs hadoop fs -chmod 777 /
/etc/init.d/hadoop-0.20-mapreduce-jobtracker start &
wait
You need to run it with the -d flag to leave it running as a daemon in the background.
docker run -d -it ubuntu bash
My practice is to start, in the Dockerfile, a shell which will not exit immediately, e.g. CMD [ "sh", "-c", "service ssh start; bash" ], and then run docker run -dit image_name. This way the (ssh) service and the container stay up and running.
I added a read shell statement at the end. This keeps the main process of the container (the startup shell script) running.
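For illustration, a minimal sketch of that pattern (note that read only blocks if stdin is kept open, e.g. by running the container with -i / -dit, or stdin_open: true in compose):
#!/usr/bin/env bash
# ... start services here ...
# block forever waiting for input that never arrives, keeping PID 1 alive
read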
Adding
exec "$#"
at the end of my shell script was my fix!
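For context, this exec "$@" line is the usual tail end of a wrapper entrypoint script; a minimal sketch (the setup step is a placeholder):
#!/usr/bin/env bash
set -e
# ... placeholder setup work (e.g. generating config files) ...
# hand PID 1 over to whatever CMD/arguments were passed to this entrypoint
exec "$@"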
Coming from duplicates, I don't see any answer here which addresses the very common antipattern of running your main workload as a background job, and then wondering why Docker exits.
In simple terms, if you have
my-main-thing &
then either take out the & to run the job in the foreground, or add
wait
at the end of the script to make it wait for all background jobs.
It will then still exit if the main workload exits, so maybe run this in a while true loop to force it to restart forever:
while true; do
    my-main-thing &
    # ... other things which need to happen while the main workload
    # runs in the background, if you have such things ...
    wait
done
(Notice also how to write while true. It's common to see silly things like while [ true ] or while [ 1 ] which coincidentally happen to work, but don't mean what the author probably imagined they ought to mean.)
There are many possible ways to cause a Docker container to exit immediately. For me, the problem was in my Dockerfile. There was a bug in that file: I had ENTRYPOINT ["dotnet", "M4Movie_Api.dll] instead of ENTRYPOINT ["dotnet", "M4Movie_Api.dll"]. As you can see, I had missed one quotation mark (") at the end.
To analyze the problem, I started my container and quickly attached to it so that I could see what the exact problem was.
C:\SVenu\M4Movie\Api\Api>docker start 4ea373efa21b
C:\SVenu\M4Movie\Api\Api>docker attach 4ea373efa21b
Where 4ea373efa21b is my container ID. This led me to the actual issue.
After finding the issue, I had to build, restore, and publish my container again.
If you check the Dockerfiles of other containers, for example
fballiano/magento2-apache-php
you'll see that at the end of the file it adds the following command:
while true; do sleep 1; done
Now, what I recommend is that you run:
docker container ls --all | grep 127
Then you will see whether your docker image had an error. If it exited with 0, then it probably just needs one of these commands that sleep forever.
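As a side note, docker ps can also filter on the exit code directly, which is a bit more precise than grepping:
# List only containers whose last exit code was 127 ("command not found")
docker ps --all --filter "exited=127"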
@camposer's solution is the one that works for me.
I am running Docker on my MacBook.
The container was not starting, but thanks to this method I was able to start it correctly:
`docker run -dit ubuntu`
Since the image is Linux-based, one thing to check is that any shell scripts used in the container have Unix line endings. If they have a ^M at the end, they have Windows line endings. One way to fix them is to run dos2unix on /usr/local/start-all.sh to convert them from Windows to Unix. Running the container in interactive mode can help you figure out other problems (you could have a file-name typo or something). See https://en.wikipedia.org/wiki/Newline
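A quick way to check and fix that (dos2unix may need to be installed in the image or on the host first):
# Reports "CRLF line terminators" if the script has Windows line endings
file /usr/local/start-all.sh
# Convert the script to Unix line endings in place
dos2unix /usr/local/start-all.sh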