I have a script that I want to optionally run within a container. I have observed that if I run it via an intermediate script it can be killed with Ctrl-C; however, if I do not, it can't.
Here is an example:
test1.sh:
#!/bin/bash
if [ "${1}" = true ]; then
while true; do echo "args: $@"; sleep 1; done
else
docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test2.sh "$@"
fi
test2.sh:
#!/bin/bash
/test1.sh true "$@"
basic-Dockerfile:
FROM alpine:3.7
RUN apk add --no-cache bash
COPY test1.sh test2.sh /
ENTRYPOINT ["bash"]
Running ./test1.sh true foo bar will happily print args: true foo bar once a second, and running ./test1.sh foo bar will do the same inside a container. Sending Ctrl-C will kill the process and delete the container as expected.
However, if I try to remove the need for the extra file by changing /test2.sh "$@" to /test1.sh true "$@":
test1.sh
#!/bin/bash
if [ "${1}" = true ]; then
while true; do echo "args: $@"; sleep 1; done
else
docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test1.sh true "$@"
fi
then the process can no longer be terminated with Ctrl-C, and instead must be stopped with docker kill.
Why is this happening?
Docker version 18.06.1-ce running on Windows 10 in WSL
That's a common source of confusion with Docker, but there is a good reason for it.
When a process runs as PID 1 in Linux it behaves a little differently. Specifically, it ignores signals such as SIGTERM and SIGINT (the latter is what Ctrl-C sends) unless it explicitly installs handlers for them. This doesn't happen when the PID is greater than 1.
That's why your first scenario works (PID 1 is test2.sh, and the signal ends up in its child test1.sh, which stops because it is not PID 1) but the second one doesn't (test1.sh itself is PID 1 and therefore doesn't stop on the signal).
To solve this, you can trap the signals in test1.sh and exit:
# add near the top of test1.sh: handle SIGTERM and SIGINT (Ctrl-C) explicitly
exit_func() {
  echo "SIGTERM detected"
  exit 1
}
trap exit_func SIGTERM SIGINT
Or tell docker run to start the container with a different process as PID 1. Specifically, if you add --init to docker run without further arguments, it uses a default init program, tini, which is designed to handle exactly these situations:
docker run --rm -it --init $(docker build -f basic-Dockerfile -q .) /test1.sh true "$@"
You can also use exec to replace the current shell with the process you actually want to run; that process then receives the signals directly and can be stopped with Ctrl-C.
For example, a start.sh script which starts an nginx server and runs uWSGI:
#!/usr/bin/env bash
service nginx start
uwsgi --ini uwsgi.ini
should be changed to:
#!/usr/bin/env bash
service nginx start
exec uwsgi --ini uwsgi.ini
After these changes, Ctrl-C will stop the container.
Related
I'm trying to understand why my Docker container does not stop gracefully and just times out. The container is running crond:
FROM alpine:latest
ADD crontab /etc/crontabs/root
RUN chmod 0644 /etc/crontabs/root
CMD ["crond", "-f"]
And the crontab file is:
* * * * * echo 'Working'
# this empty line required by cron
Built with docker build . -t periodic:latest
And run with docker run --rm --name periodic periodic:latest
This is all good, but when I try to docker stop periodic from another terminal, it doesn't stop gracefully; the timeout kicks in and it is killed abruptly. It's as if crond isn't responding to the SIGTERM.
crond is definitely PID 1
/ # ps
PID USER TIME COMMAND
1 root 0:00 crond -f
6 root 0:00 ash
11 root 0:00 ps
However, if I do this:
docker run -it --rm --name shell alpine:latest ash and
docker exec -it shell crond -f in another terminal, I can kill crond from the first shell with SIGTERM so I know it can be stopped with SIGTERM.
Thanks for any help.
Adding an init process to the container (init: true in docker-compose.yml) solved the problem.
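For reference, a minimal sketch of the corresponding docker-compose.yml fragment; the service layout is illustrative, only init: true is taken from the fix above:
version: "3.7"          # init: requires compose file format 2.2+ / 3.7+
services:
  periodic:
    image: periodic:latest
    init: true          # run tini as PID 1 so signals are forwarded to crond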
EDIT: I read this https://blog.thesparktree.com/cron-in-docker to understand the issues and solutions around running cron in Docker. From this article:
"Finally, as you’ve been playing around, you may have noticed that it’s difficult to kill the container running cron. You may have had to use docker kill or docker-compose kill to terminate the container, rather than using ctrl + C or docker stop.
Unfortunately, it seems like SIGINT is not always correctly handled by cron implementations when running in the foreground.
After researching a couple of alternatives, the only solution that seemed to work was using a process supervisor (like tini or s6-overlay). Since tini was merged into Docker 1.13, technically, you can use it transparently by passing --init to your docker run command. In practice you often can’t because your cluster manager doesn’t support it."
Since my original post and answer I've migrated to Kubernetes, so init in docker-compose.yml won't work. My container is based on Debian Buster, so I've now installed tini in the Dockerfile and changed the ENTRYPOINT to ["/usr/bin/tini", "--", "/usr/local/bin/entrypoint.sh"] (my entrypoint.sh finally does exec cron -f).
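For illustration, a minimal sketch of what such a Dockerfile could look like; the package names, paths and the entrypoint.sh contents are illustrative, not the exact files from that setup:
FROM debian:buster-slim
# tini is packaged in Debian Buster; cron provides the scheduler
RUN apt-get update \
    && apt-get install -y --no-install-recommends tini cron \
    && rm -rf /var/lib/apt/lists/*
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# tini runs as PID 1, forwards SIGTERM/SIGINT and reaps children;
# entrypoint.sh does its setup and finally runs: exec cron -f
ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/entrypoint.sh"]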
The key point is that you cannot kill the PID 1 process inside a container without stopping the container itself (and, if it was launched with --rm, removing it).
That's why, when you run docker run -it ... ash, the shell is PID 1 and you can kill the other processes.
If you want crond to be killable without stopping/killing the container, just make something else the main process:
Start cron after the container's entrypoint. For example, run tail -F /dev/null as the CMD and then launch cron separately: docker run -d yourdocker followed by docker exec yourdocker service cron start (see the sketch below).
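As a rough sketch of that idea using the image from the question (Alpine's busybox crond; the container name is illustrative):
docker run -d --name periodic periodic:latest tail -f /dev/null   # PID 1 is tail, not crond
docker exec periodic crond -b                                      # start crond in the background
docker exec periodic sh -c 'kill $(pidof crond)'                   # crond can now be stopped without stopping the container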
If I docker run a container with some script inside using --rm and --detach, how can I find the RC (exit code) of that container? I.e., whether the script inside the container finished successfully or failed?
Because of the --rm flag I can't see that container in docker ps --all after it finishes.
You can't, since you're explicitly asking Docker to clean up after the container. That cleanup includes all of the metadata, like the exit status.
On the other hand, if you're actively planning to check the status code anyways, you'll have the opportunity to do the relevant cleanup yourself.
CONTAINER_ID=$(docker run -d ...)
...
docker stop "$CONTAINER_ID"                   # if needed
CONTAINER_RC=$(docker wait "$CONTAINER_ID")   # docker wait prints the container's exit status on stdout
docker rm "$CONTAINER_ID"
if [ "$CONTAINER_RC" -ne 0 ]; then
  echo "container failed" >&2
fi
The best way to check whether the script works is to first capture its output, e.g. command1 > everything.txt 2>&1.
And lastly, you can go inside the running container using docker exec -it <mycontainer> bash and inspect that output.
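Putting those two together, a possible sequence; the script path and container name are hypothetical:
docker exec mycontainer sh -c '/myscript.sh > /tmp/everything.txt 2>&1'   # run the script, capturing stdout and stderr
echo "script exit status: $?"                                             # docker exec propagates the command's exit code
docker exec -it mycontainer bash                                          # inspect /tmp/everything.txt if something failed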
I am running a docker run command from bash, and it looks like, immediately after firing this command, control starts executing the next line instead of waiting for the Docker container to start up.
Is this how Docker works?
UPDATE: I am using -dit as the options, which means the container runs in detached mode, which I think explains why it jumps to the next line immediately. As it is a VM's startup script it will have to be detached, but is there any option where we can at least wait until the Docker container is done with its provisioning?
The -d is causing the container to detach immediately. Every container has a different idea of when it is "done with its provisioning", and Docker can't know how the internals of every container work, so it's hard for Docker to be responsible for this.
Docker has added a HEALTHCHECK so you can define a test specific to your container. Then you can query the container's state and wait for it to become healthy in your script.
HEALTHCHECK --interval=1m --timeout=5s \
CMD curl -f http://localhost/ || exit 1
Then wait in the script:
now="$(date +%s)"
let timeout=now+60
while sleep 5; do
res="$(docker inspect --format='{{.State.Health}}' container_id) 2>&1"
if [ "res" == "healthy" ]; then break; fi
if [ "$(date +%s)" -lt "$timeout" ]; then
echo "Error timeout: $res"
# handle error
break
fi
done
You can modify the wait to run any command, like curl or nc, if you want to forgo the HEALTHCHECK in the container.
docker logs container_id may also include the information you need to wait for. Most daemons will log something like "Ready to accept connections".
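For example, a rough way to block until such a line shows up (the container name, log message and 60-second limit are placeholders):
timeout 60 sh -c 'until docker logs container_id 2>&1 | grep -q "Ready to accept connections"; do sleep 2; done' \
    || { echo "Error: container did not become ready in time" >&2; exit 1; }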
FROM alpine:3.5
CMD ["echo", "hello world"]
So after building with docker build -t hello . I can run hello by calling docker run hello, and I get the output hello world.
Now let's assume I wish to run ls or sh instead; that's fine. But what I really want is to be able to pass arguments, e.g. ls -al, or even tail -f /dev/null to keep the container running, without having to change the Dockerfile.
How do I go about doing this? my attempt at exec mode fails miserably... docker run hello --cmd=["ls", "-al"]
Anything after the image name in the docker run command becomes the new value of CMD. So you can run:
docker run hello ls -al
Note that if an ENTRYPOINT is defined, the ENTRYPOINT will receive the value of CMD as args rather than running CMD directly. So you can define an entrypoint as a shell script with something like:
#!/bin/sh
echo "running the entrypoint code"
# if no args are passed, default to a /bin/sh shell
if [ $# -eq 0 ]; then
set -- /bin/sh
fi
# run the "CMD" with exec to replace the pid 1 of this shell script
exec "$#"
Q. But what I really want is to be able to pass arguments. e.g. ls -al, or even tail -f /dev/null to keep the container running without having to change the Dockerfile
This is just achieved with:
docker run -d hello tail -f /dev/null
So the container runs in the background, and that lets you execute arbitrary commands inside it:
docker exec <container-id> ls -la
And, for example a shell:
docker exec -it <container-id> bash
Also, I recommend reading what @BMitch says.
I want to make a simple bash script which runs one docker container with -d and then does something else if and only if the container has finished running its CMD. How can I do this while avoiding timing issues, since the docker container can take a while to finish starting up?
My only thought was that the Dockerfile for the container will need to create some sort of state on the container itself when it's done and then the bash script can poll until the state file is there. Is there a better / standard way to do something like this?
Essentially I need a way for the host that ran a docker container with -d to be able to tell when it's ready.
Update
Made it work with the tailing logs method, but it seems a bit hacky:
docker run -d \
--name sauceconnect \
sauceconnect
# Tail logs until 'Sauce Connect is up'
docker logs -f sauceconnect | while read LINE
do
echo "$LINE"
if [[ "$LINE" == *"Sauce Connect is up"* ]]; then
pkill -P $$ docker
fi
done
You should be fine to check the logs via docker logs -f <container_name_or_ID>
-f : same as tail -f
For example, when the CMD finishes it can emit a log line such as JOB ABC is successfully started.
Your script can detect that line and then run the rest of the jobs.
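A minimal sketch of that approach, reusing the example message above (the container name is a placeholder):
# block until the ready message appears; grep -q exits on the first match,
# and docker logs -f then terminates on the resulting broken pipe
docker logs -f mycontainer 2>&1 | grep -q "JOB ABC is successfully started"
echo "container is ready, running the rest of the jobs"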