My current command to run the container:
docker run -d -p 10000:3000 --restart=always --name metabase-prod metabase/metabase
However, this restart policy retries too aggressively. I only want a restart attempt to happen every 5 minutes. How can I achieve that?
You can use crontab to schedule a restart in case the container is down (run the container without --restart=always and let cron take over the restarts).
docker ps|grep 'my_container'
will produce output if the container is running.
*/5 * * * * /script_to_check_container_is_down_and_run
This crontab entry executes the script every 5 minutes (note */5; a plain 5 in the minute field would only run the job at minute 5 of every hour).
#!/bin/bash
# Start the container only if docker ps shows no running instance of it
if [[ $(docker ps | grep 'my_container' | wc -l) -eq 0 ]]
then
    docker start 'my_container'
fi
is a rudimentary example of the script that could live in script_to_check_container_is_down_and_run.
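For a slightly more robust check, a sketch like the one below could replace the grep-based test; it uses docker inspect and assumes the container name metabase-prod from the question:

#!/bin/bash
# Sketch: start the container only if docker inspect reports it as not running.
# The name metabase-prod is taken from the question; adjust to your container.
if [ "$(docker inspect -f '{{.State.Running}}' metabase-prod 2>/dev/null)" != "true" ]; then
    docker start metabase-prod
fi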
I'm trying to understand why my Docker container does not stop gracefully and just times out. The container is running crond:
FROM alpine:latest
ADD crontab /etc/crontabs/root
RUN chmod 0644 /etc/crontabs/root
CMD ["crond", "-f"]
And the crontab file is:
* * * * * echo 'Working'
# this empty line required by cron
Built with docker build . -t periodic:latest
And run with docker run --rm --name periodic periodic:latest
This is all good, but when I try to docker stop periodic from another terminal, it doesn't stop gracefully; the timeout kicks in and the container is killed abruptly. It's like crond isn't responding to the SIGTERM.
crond is definitely PID 1
/ # ps
PID USER TIME COMMAND
1 root 0:00 crond -f
6 root 0:00 ash
11 root 0:00 ps
However, if I do this:
docker run -it --rm --name shell alpine:latest ash and then
docker exec -it shell crond -f in another terminal, I can kill crond from the first shell with SIGTERM, so I know it can be stopped with SIGTERM.
Thanks for any help.
Adding an init process to the container (init: true in docker-compose.yml) solved the problem.
EDIT: I read this https://blog.thesparktree.com/cron-in-docker to understand the issues and solutions around running cron in Docker. From this article:
"Finally, as you’ve been playing around, you may have noticed that it’s difficult to kill the container running cron. You may have had to use docker kill or docker-compose kill to terminate the container, rather than using ctrl + C or docker stop.
Unfortunately, it seems like SIGINT is not always correctly handled by cron implementations when running in the foreground.
After researching a couple of alternatives, the only solution that seemed to work was using a process supervisor (like tini or s6-overlay). Since tini was merged into Docker 1.13, technically, you can use it transparently by passing --init to your docker run command. In practice you often can’t because your cluster manager doesn’t support it."
Since my original post and answer, I've migrated to Kubernetes, so init: true in docker-compose.yml won't work. My container is based on Debian Buster, so I've now installed tini in the Dockerfile and changed the ENTRYPOINT to ["/usr/bin/tini", "--", "/usr/local/bin/entrypoint.sh"] (my entrypoint.sh ends with exec cron -f).
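A rough sketch of that setup, with the package installation and entrypoint path based on the description above (treat it as an outline, not a verified build):

FROM debian:buster-slim
# tini and cron come from the Debian repositories; the tini package installs /usr/bin/tini
RUN apt-get update \
    && apt-get install -y --no-install-recommends tini cron \
    && rm -rf /var/lib/apt/lists/*
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# tini runs as PID 1 and forwards signals to the entrypoint
ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/entrypoint.sh"]

Here entrypoint.sh is expected to end with exec cron -f, as noted above, so cron ends up as tini's direct child and receives the forwarded signals.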
The key point is that you cannot stop a PID 1 process inside Docker the way you would a normal process: stopping it means the container stops (or is removed, if it was launched with --rm).
That's why, if you run -it ... ash, the shell has PID 1 and you can kill the other processes you start inside it.
If you want cron to be killable without stopping/killing the container, just launch another process as the entrypoint:
Launch cron after the Docker entrypoint, for example by running tail -F /dev/null as the container's command (docker run -d yourdocker) and then starting cron inside the running container; a sketch follows.
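A sketch of that idea, assuming a Debian/Ubuntu-based image (called yourimage here purely for illustration) that has cron installed; the docker exec step is my interpretation of how to start cron as a non-PID-1 process:

# Keep the container alive with a harmless foreground process as its command
docker run -d --name yourdocker yourimage tail -F /dev/null
# Start cron as a secondary process; it can later be killed without stopping the container
docker exec yourdocker service cron start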
I'm trying to schedule a Docker container that runs a Jenkins slave so that it automatically restarts every time the desktop reboots.
Checking that scheduled tasks work with crontab, using a simple script:
for i in $(seq 1 10000); do touch $i.stam && sleep 1; done
And adding it to run at reboot with crontab -e:
@reboot /root/script.sh
Works as expected - the script starts right after reboot
Checking the docker run command manually:
the docker_run.sh script runs the command
docker run -it -u jenkins:jenkins -v /home/jenkins/.ssh/:/home/jenkins/.ssh/ -v /root/docker-jnlp-slave/.aws/:/home/jenkins/.aws/ jenkins/jnlp-slave:latest
works as expected, both when running the entire docker run command directly and when saving it as the docker_run.sh script
So great - let's copy the docker run script to crontab:
@reboot /root/docker-jnlp-slave/docker_run.sh >/dev/null 2>&1
but then, nothing happens
"--restart always "resolved it. I didn't understand at the begining how it works.
Thanks
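For reference, here is a sketch of what docker_run.sh could look like after that change. The volume mounts and image are copied from the question; replacing -it with -d is an assumption on my part, since cron provides no TTY and a detached container is what lets the Docker daemon restart it after a reboot:

#!/bin/bash
# Run the Jenkins JNLP slave detached and let the daemon restart it on reboot/failure
docker run -d --restart always \
    -u jenkins:jenkins \
    -v /home/jenkins/.ssh/:/home/jenkins/.ssh/ \
    -v /root/docker-jnlp-slave/.aws/:/home/jenkins/.aws/ \
    jenkins/jnlp-slave:latest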
I have a script which I want to optionally run within a container. I have observed that if I run it via an intermediate script it can be killed with Ctrl-C; however, if I do not, it can't.
Here is an example:
test1.sh:
#!/bin/bash
if [ "${1}" = true ]; then
while true; do echo "args: $@"; sleep 1; done
else
docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test2.sh $@
fi
test2.sh:
#!/bin/bash
/test1.sh true $@
basic-Dockerfile:
FROM alpine:3.7
RUN apk add --no-cache bash
COPY test1.sh test2.sh /
ENTRYPOINT ["bash"]
Running ./test1.sh true foo bar will happily print out true foo bar, and running ./test1.sh foo bar will do the same in a container. Sending Ctrl-C will kill the process and delete the container as expected.
However, if I try to remove the need for an extra file by changing /test2.sh $@ to /test1.sh true $@:
test1.sh
#!/bin/bash
if [ "${1}" = true ]; then
while true; do echo "args: $@"; sleep 1; done
else
docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test1.sh true $@
fi
then the process can no longer be terminated with Ctrl-C, and instead must be stopped with docker kill.
Why is this happening?
Docker version 18.06.1-ce running on Windows 10 in WSL
That's a common misunderstanding in Docker, but it's there for a good reason.
When a process runs as PID 1 in Linux it behaves a little differently. Specifically, it ignores signals such as SIGINT (which is what hitting Ctrl-C sends) and SIGTERM, unless the process explicitly installs handlers for them. This doesn't happen when the PID is greater than 1.
That's why the version that goes through test2.sh works (PID 1 is the shell running test2.sh, which passes the signal on to test1.sh, and test1.sh stops because it is not PID 1), while the version where test1.sh re-invokes itself does not (test1.sh is PID 1, so it ignores the signal).
To solve that, you can trap the signal in test1.sh and exit:
exit_func() {
echo "SIGTERM detected"
exit 1
}
trap exit_func SIGTERM SIGINT
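Folded into test1.sh from the question, that would look roughly like this (a sketch, not a verified drop-in):

#!/bin/bash
# test1.sh with the signal trap added
exit_func() {
    echo "SIGTERM detected"
    exit 1
}
trap exit_func SIGTERM SIGINT

if [ "${1}" = true ]; then
    while true; do echo "args: $@"; sleep 1; done
else
    docker run --rm -it $(docker build -f basic-Dockerfile -q .) /test1.sh true "$@"
fi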
Or tell docker run to init the container with a different process as PID 1. Specifically, if you add --init to docker run with no more arguments, it uses a default program, tini, prepared to handle these situations:
docker run --rm -it --init $(docker build -f basic-Dockerfile -q .) /test1.sh true $@
You can also use exec so that the final command replaces the current shell; the resulting process can then be stopped with Ctrl-C.
For example, a start.sh script that starts the nginx server and runs uwsgi:
#!/usr/bin/env bash
service nginx start
uwsgi --ini uwsgi.ini
should be changed to:
#!/usr/bin/env bash
service nginx start
exec uwsgi --ini uwsgi.ini
After these changes, Ctrl-C will stop the container.
I'm studying the Docker documentation, but I'm having a hard time understanding the workflow of creating a container, exiting it, and then getting back into it (what I think of as "ssh-ing" back in).
I created a container with
docker run -ti ubuntu /bin/bash
Then, it starts the container and I can run commands. docker ps gives me
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0e37da213a37 ubuntu "/bin/bash" About a minute ago Up About a minute keen_sammet
The issue is that after I exit the container, I can't get back in.
I tried docker attach, which gives me Error: No such container, and I tried docker exec -ti <container> /bin/bash, which gives me the same message: Error: No such container.
How do I run the container and get back into it afterwards?
When you exit the bash process, the container exits (in general, a container will exit when the foreground process exits). The error message you are seeing is accurately describing the situation (the container is no longer running).
If you want to be able to docker exec into a container, you will want to run some sort of persistent command. For example, if you were to run:
docker run -ti -d --name mycontainer ubuntu bash
This would start a "detached" container. That means you've started bash, but it's just hanging around doing nothing. You could use docker exec to start a new process in this container:
$ docker exec -it mycontainer ps -fe
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 16:28 pts/0 00:00:00 bash
root 17 0 0 16:28 pts/1 00:00:00 ps -fe
Or:
$ docker exec -it mycontainer bash
There's really no reason to start bash as the main process in this case, since you're not interacting with it. You can just as easily run...
docker run -ti -d --name mycontainer ubuntu sleep inf
...and the behavior would be the same.
The most common use case for all of this is when your docker run command starts up some sort of persistent service (like a web server, or a database server, etc), and then you use docker exec to perform diagnostic or maintenance tasks.
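For example, a hypothetical illustration of that pattern (the image and commands here are not from the question):

# Start a long-running service as the container's main process
docker run -d --name web nginx
# Use exec against that same container for diagnostics or maintenance
docker exec -it web bash      # interactive troubleshooting shell
docker exec web nginx -t      # one-off check of the nginx configuration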
The docker attach command will re-connect you with the primary console of a detached container. In other words, if we return to the initial example:
docker run -ti -d --name mycontainer ubuntu bash
You could connect to that bash process (instead of starting a new one) by running:
docker attach mycontainer
At this point, exit would cause the container to exit.
First, you don't ssh to a Docker container (unless you run an sshd process in that container). But you can execute a command with docker exec -ti mycontainer bash -l.
However, you can only exec a command in a running container. If the container has already exited, you must use another approach: create an image from the container and run a new one.
Here is an example. First I create a container, create a file inside it, and then exit.
$ docker run -ti debian:9-slim bash -l
root@09f889e80153:/# echo aaaaaaaaaa > /zzz
root@09f889e80153:/# cat /zzz
aaaaaaaaaa
root@09f889e80153:/# exit
logout
As you can see, the container has exited (Exited (0) 24 seconds ago):
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
09f889e80153 debian:9-slim "bash -l" 45 seconds ago Exited (0) 24 seconds ago thirsty_hodgkin
So I create a new image with docker commit
$ docker commit 09f889e80153 bla
sha256:6ceb88470326d2da4741099c144a11a00e7eb1f86310cfa745e8d3441ac9639e
So I can run a new container that contains the previous container's content:
$ docker run -ti bla bash -l
root@479a0af3d197:/# cat zzz
aaaaaaaaaa
I want to make a simple bash script which runs one docker container with -d and then do something else if and only if the container has finished running its CMD. How can I do this while avoiding timing issues since the docker container can take a while to finish starting up?
My only thought was that the Dockerfile for the container will need to create some sort of state on the container itself when it's done and then the bash script can poll until the state file is there. Is there a better / standard way to do something like this?
Essentially I need a way for the host that ran a docker container with -d to be able to tell when it's ready.
Update
Made it work with the tailing logs method, but it seems a bit hacky:
docker run -d \
--name sauceconnect \
sauceconnect
# Tail logs until 'Sauce Connect is up'
docker logs -f sauceconnect | while read LINE
do
echo "$LINE"
if [[ "$LINE" == *"Sauce Connect is up"* ]]; then
pkill -P $$ docker
fi
done
You should be fine checking the logs via docker logs -f <container_name_or_ID>
-f: follow the log output, like tail -f
For example, when the CMD has finished it can write a log line such as JOB ABC is successfully started.
Your script can detect that line and then run the rest of the jobs.
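A sketch of that polling approach, reusing the hypothetical "JOB ABC is successfully started" marker from above and an assumed container name of mycontainer:

#!/bin/bash
# Poll the container logs until the ready marker appears, or give up after ~60 seconds.
for i in $(seq 1 60); do
    if docker logs mycontainer 2>&1 | grep -q "JOB ABC is successfully started"; then
        echo "Container is ready; running the remaining jobs"
        exit 0
    fi
    sleep 1
done
echo "Timed out waiting for the container to become ready" >&2
exit 1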