Running a Docker command in the background? - docker

How would you run a daemon or background process in Docker? I've seen some suggestions, like this answer that launches supervisor from CMD.
However, I'm trying to test a server configuration tool that connects via SSH. So I need to launch the SSH daemon in the background, and then run my tool.sh to test connecting via SSH to its own container. I need to monitor my tool's output in order to verify it's working. What's the best way to accomplish this?
Is there any way to make a RUN command run in the background, like RUN /usr/sbin/sshd -D & or would I have to have some wrapper script launched from CMD that does something like this?
#!/bin/bash
/usr/sbin/sshd -D &
tool.sh

You can run a daemon inside a Docker container the same way you would on a bare-metal Linux machine. The only hard part is getting it to start without the nice runlevel scripts to help.
How about this:
#!/bin/bash
run_script() {
    # Poll until at least one sshd process shows up
    ssh_pids=0
    while [ "${ssh_pids}" -lt 1 ]; do
        sleep 5
        ssh_pids=$(pgrep sshd | wc -l)
    done
    test.sh
}
run_script &
sshd -D > /dev/null 2>&1
I've used this trick before to do what you describe, and it's worked OK. It backgrounds the call to run_script and proceeds to start sshd in non-daemon mode, piping its output to /dev/null. Meanwhile, run_script polls for sshd; when it finds it, it stops polling and runs test.sh, which still has the terminal as its stdout. You'll probably need some external kill signal to stop the whole thing once test.sh is done.
If you don't like this tomfoolery, the other option would be to do as you described: write a wrapper script to use as the CMD/ENTRYPOINT, and have it start SSHD without the debug flag, and then start test.sh.
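If you go that route, a minimal sketch of the wrapper (assuming sshd is installed and test.sh is on the PATH) might look like this:
#!/bin/bash
# Start sshd daemonized: without the -D flag it forks into the background
/usr/sbin/sshd
# Run the test tool in the foreground; its output stays on the terminal,
# and the container exits when it finishes
test.sh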
The advantage of doing it with the script I posted is that the container will stick around after test.sh is finished, so you can log in and poke around, while also making your script wait until the daemon is running.

Related

Docker container not exiting when disconnected from a login shell

I have a docker container running on the server side as a user's login shell so that anyone can ssh into the server and get access to some resource inside.
Say, I have a user called test and I want people to be able to SSH into test's account using some publicly available password. Here's what I have in /etc/passwd
test:x:1000:1000::/:/bin/test-shell
and in /bin/test-shell
#!/bin/bash
docker run -it --rm --network none python:3.10-alpine /bin/sh
Now, whenever someone SSHes into my machine using ssh test@example.com, they are immediately dropped into a disposable Docker container. So far so good.
The problem I have is, if the user doesn't exit the shell by either calling exit or pressing Ctrl-D but just closes their terminal window instead, the container is left running indefinitely and taking up limited server resources. I'm wondering if it is possible (and if so, how) to make sure the container is properly stopped (and therefore deleted) when a user disconnects.
I have seen Why does SIGHUP not work on busybox sh in an Alpine Docker container? and tried trapping both SIGHUP and SIGPIPE (running trap exit SIGHUP SIGPIPE inside the container); unfortunately nothing happens. I suspect the signals are received by the host shell rather than the shell inside the container, but I'm not sure how to leverage that (if that's really what happens), since I have no way to get the dynamically generated container name, and I can't name the container because I want every single SSH attempt to spawn a different container.
I think this answer may help you: https://unix.stackexchange.com/a/85429
For example you can try something like this:
#!/bin/bash
# pam_exec session hook: act only for the test user, and only when the
# user's last session closes
[[ "$PAM_USER" != "test" ]] && exit 0
SESSION_COUNT="$(w -h "$PAM_USER" | wc -l)"
if (( SESSION_COUNT == 0 )) && [[ "$PAM_TYPE" == "close_session" ]]; then
    docker kill <containerId>
fi
You should use some unique tag for containerId; for example, you could associate the id of the container with the session id.
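A fuller sketch of that idea (hypothetical: it assumes /bin/test-shell starts each container with something like --label owner=test, so the hook can find them):
#!/bin/bash
# Hypothetical pam_exec session hook; pam_exec exports PAM_USER and PAM_TYPE
[[ "$PAM_USER" != "test" ]] && exit 0
[[ "$PAM_TYPE" != "close_session" ]] && exit 0
# Clean up only once the user's last session has ended
if [[ "$(w -h "$PAM_USER" | wc -l)" -eq 0 ]]; then
    # Kill every container carrying the assumed owner label
    docker ps -q --filter "label=owner=$PAM_USER" | xargs -r docker kill
fi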
Maybe setting up a connection timeout with ClientAliveInterval and ClientAliveCountMax in /etc/ssh/sshd_config will help. By default it is not active.
https://linux.die.net/man/5/sshd_config
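For example, a sketch of the relevant lines in /etc/ssh/sshd_config (the values are assumptions; tune them to your environment):
# Probe an unresponsive client every 30 seconds...
ClientAliveInterval 30
# ...and drop the connection after 3 unanswered probes (~90 seconds total)
ClientAliveCountMax 3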
You could add this in your /bin/test-shell script:
#!/bin/bash
# Generate a random name for the container
CONTAINER_NAME=${RANDOM}
# Register a handler to stop the container by name
# (/bin/stop-container is a helper you provide, e.g. a wrapper around docker stop)
trap '/bin/stop-container "$CONTAINER_NAME"' EXIT
# Run the container with that name
docker run --name "$CONTAINER_NAME" -it --rm --network none python:3.10-alpine /bin/sh
# Unregister the trap on a normal exit
trap - EXIT
At the moment I can't test it, but it could work, because as described in the trap manual:
The environment in which the shell executes a trap on EXIT shall be identical to the environment immediately after the last command executed before the trap on EXIT was taken.
Hope this helps you find the right direction.

Run curl commands after docker server (kong) starts

I have a docker container that internally starts a server. (I don't own this. I am just reusing it)
Once the server starts, I am running some curl commands that hit this server.
I am running the above steps in a script. Here's the issue:
The docker container starts, but internally I think it takes some time to actually start the server inside it.
Before that server is up and running, the curl commands start executing and fail with an error that the server could not be found. If I run them manually a few seconds later, they work fine.
Please let me know if there is a way to solve this. I don't think using ENTRYPOINT or CMD will work, for similar reasons.
Also, if that matters, the server I am using is kong.
thanks, Om.
The general answer to this is to perform some sort of health check; once you've verified that the server is healthy you can start making live requests to it. As you've noticed, the container existing or the server process running on its own isn't enough to guarantee that the container can handle requests.
A typical approach to this is to make some request to the server that will fail until the server is ready. You don't need to modify the container to do this. In some environments like Kubernetes, you can specify health checks or probes as part of the deployment configuration, but for a simple shell script, you can just run curl in a loop:
docker run -p 8080:8080 -d ...
RUNNING=false
for i in $(seq 30); do
    # Try GET / and see if the server answers at all
    # (any HTTP response, even a 404 from kong, means it's up)
    if curl -s http://localhost:8080/ > /dev/null; then
        echo "Server is running"
        RUNNING=true
        break
    else
        echo "Server not running, waiting"
        sleep 1
    fi
done
if [ "$RUNNING" = false ]; then
    echo "Server did not start within 30s"
    # docker stop ... && docker rm ...
    exit 1
fi
If you just need to know the port is up, this simple script is very handy:
https://github.com/vishnubob/wait-for-it
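For example, a sketch of its usage (assuming the script has been downloaded and made executable, and that kong listens on port 8080):
./wait-for-it.sh localhost:8080 --timeout=30 -- echo "kong is up"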

Run a script when docker is stopped

I am trying to create a Docker container using a Dockerfile where script-entry.sh is executed when the container starts and script-exit.sh is executed when the container stops.
ENTRYPOINT helped to accomplish the first part of the problem where script-entry.sh runs on startup.
How will I make sure script-exit.sh is executed on docker exit/stop?
docker stop sends a SIGTERM signal to the main process running inside the Docker container (the entry script). So you need a way to catch the signal and then trigger the exit script.
See this link for an explanation of signal trapping and an example (near the end of the page).
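Putting that together, a minimal sketch of an ENTRYPOINT wrapper (assuming your scripts live at /script-entry.sh and /script-exit.sh) might look like:
#!/bin/bash
# Start services via the entry script
/script-entry.sh

# On docker stop (SIGTERM), run the exit script and quit
cleanup() {
    /script-exit.sh
    exit 0
}
trap cleanup SIGTERM SIGINT

# Keep PID 1 alive in an interruptible way: `wait` returns as soon as a
# signal arrives, whereas a foreground sleep would delay the trap
sleep infinity &
wait $!
Keep in mind docker stop only waits 10 seconds by default before sending SIGKILL; pass -t <seconds> to give script-exit.sh more time.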
Create a script and save it as a bash file containing the following:
CONTAINER_NAME="someNameHere"
docker exec -it "$CONTAINER_NAME" bash -c "sh script-exit.sh"
docker stop "$CONTAINER_NAME"
Run that file instead of running docker stop, and that should do the trick. You can set up an alias for it as well.
As for automating it inside of Docker itself, I've never seen it done before. Good luck figuring it out, if that's the road you want to take.

How to keep Docker container running after starting services?

I've seen a bunch of tutorials that seem to do the same thing I'm trying to do, but for some reason my Docker containers exit. Basically, I'm setting up a web server and a few daemons inside a Docker container. I do the final parts of this through a bash script called run-all.sh that I run through CMD in my Dockerfile. run-all.sh looks like this:
service supervisor start
service nginx start
And I start it inside of my Dockerfile as follows:
CMD ["sh", "/root/run-all.sh"]
I can see that the services all start up correctly when I run things manually (i.e. getting onto the image with -i -t /bin/bash), and everything looks like it runs correctly when I run the image, but it exits once it finishes starting up my processes. I'd like the processes to run indefinitely, and as far as I understand, the container has to keep running for this to happen. Nevertheless, when I run docker ps -a, I see:
➜ docker_test docker ps -a
CONTAINER ID   IMAGE                      COMMAND                 CREATED         STATUS                     PORTS   NAMES
c7706edc4189   some_name/some_repo:blah   "sh /root/run-all.sh"   8 minutes ago   Exited (0) 8 minutes ago           grave_jones
What gives? Why is it exiting? I know I could just put a while loop at the end of my bash script to keep it up, but what's the right way to keep it from exiting?
If you are using a Dockerfile, try:
ENTRYPOINT ["tail", "-f", "/dev/null"]
(Obviously this is for dev purposes only; you shouldn't need to keep a container alive unless it's running a process, e.g. nginx...)
I just had the same problem and I found out that if you are running your container with the -t and -d flag, it keeps running.
docker run -td <image>
Here is what the flags do (according to docker run --help):
-d, --detach=false Run container in background and print container ID
-t, --tty=false Allocate a pseudo-TTY
The most important one is the -t flag. -d just lets you run the container in the background.
This is not really how you should design your Docker containers.
When designing a Docker container, you're supposed to build it such that there is only one process running (i.e. you should have one container for Nginx, and one for supervisord or the app it's running); additionally, that process should run in the foreground.
The container will "exit" when the process itself exits (in your case, that process is your bash script).
However, if you really need (or want) to run multiple services in your Docker container, consider starting from a base image such as phusion's baseimage-docker, which uses runit as a pseudo-init process: runit stays in the foreground while Nginx, Supervisor, and your other processes do their thing.
They have substantial docs, so you should be able to achieve what you're trying to do reasonably easily.
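As a rough sketch of their convention (the tag and the service script name are assumptions; check the baseimage docs for specifics), where nginx-run.sh would be a two-liner that execs nginx -g "daemon off;":
FROM phusion/baseimage:jammy-1.0.1   # tag is an assumption; pick a current release
# runit supervises every /etc/service/<name>/run script
RUN mkdir -p /etc/service/nginx
COPY nginx-run.sh /etc/service/nginx/run
RUN chmod +x /etc/service/nginx/run
# my_init is the image's init process; it stays in the foreground
CMD ["/sbin/my_init"]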
You can also run plain cat without any arguments, as mentioned by @Sa'ad, to keep the container alive (it does nothing but wait for input; Jenkins' Docker plugin does the same thing).
The reason it exits is that the shell script runs first as PID 1, and when it completes, PID 1 is gone; Docker only keeps the container running while PID 1 is alive.
You can use supervisor to do everything; when run with the -n flag it's told not to daemonize, so it will stay as the first process:
CMD ["/usr/bin/supervisord", "-n"]
And your supervisord.conf:
[supervisord]
nodaemon=true
[program:startup]
priority=1
command=/root/run-all.sh
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
startsecs=0
[program:nginx]
priority=10
command=nginx -g "daemon off;"
stdout_logfile=/var/log/supervisor/nginx.log
stderr_logfile=/var/log/supervisor/nginx.log
autorestart=true
Then you can have as many other processes as you want and supervisor will handle the restarting of them if needed.
That way you could use supervisord in cases where you might need nginx and php5-fpm and it doesn't make much sense to have them apart.
Motivation:
There is nothing wrong with running multiple processes inside a Docker container. If one likes to use Docker as a lightweight VM, so be it. Others like to split their applications into microservices. Methinks: a LAMP stack in one container? Just great.
The answer:
Stick with a good base image like the phusion base image. There may be others; please comment.
And this is yet another plea for supervisor, because the phusion base image provides supervisor besides some other things like cron and locale setup, stuff you like to have set up when running such a lightweight VM. For what it's worth, it also provides SSH connections into the container.
The phusion image itself will just start and keep running if you issue this basic docker run statement:
moin@stretchDEV:~$ docker run -d phusion/baseimage
521e8a12f6ff844fb142d0e2587ed33cdc82b70aa64cce07ed6c0226d857b367
moin@stretchDEV:~$ docker ps
CONTAINER ID   IMAGE               COMMAND           CREATED          STATUS
521e8a12f6ff   phusion/baseimage   "/sbin/my_init"   12 seconds ago   Up 11 seconds
Or dead simple:
If a base image is not for you... For a quick CMD to keep the container running, I would suggest something like this for bash:
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
Or this for busybox:
CMD exec /bin/sh -c "trap : TERM INT; (while true; do sleep 1000; done) & wait"
This is nice, because it will exit immediately on a docker stop.
Just plain sleep or cat will take a few seconds before the container is forcefully killed by docker.
Updates
In response to Charles Desbiens concerning running multiple processes in one container:
This is an opinion, and the docs point in this direction. A quote: "It's ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application." For sure it is obviously much more powerful to divide your complex service into multiple containers. But there are situations where it can be beneficial to go the one-container route, especially for appliances. The GitLab Docker image is my favourite example of a multi-process container: it makes deployment of this complex system easy, there is no way for mis-configuration, and GitLab retains all control over their appliance. Win-win.
Make sure that you add daemon off; to your nginx.conf or run it with CMD ["nginx", "-g", "daemon off;"] as per the official nginx image.
Then use the following to run supervisor as a service and nginx as the foreground process, which will prevent the container from exiting:
service supervisor start && nginx
In some cases you will need to have more than one process in your container, so forcing the container to have exactly one process won't work and can create more problems in deployment.
So you need to understand the trade-offs and make your decision accordingly.
Since Docker Engine v1.25 there is an option called --init.
Docker Compose has supported it since file format version 3.7.
So my current CMD when running a container that should run indefinitely:
CMD ["sleep", "infinity"]
and then run it using:
docker build -t app .
docker run --rm --init app
cf.:
rm docs and init docs
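A sketch of the Compose equivalent (the service and image names are placeholders):
version: "3.7"
services:
  app:
    image: app                        # placeholder image name
    init: true                        # run docker-init (tini) as PID 1
    command: ["sleep", "infinity"]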
Capture the PID of the nginx process in a variable (for example $NGINX_PID) and at the end of the entrypoint file do
wait $NGINX_PID
That way, your container runs as long as nginx is alive; when nginx stops, the container stops as well.
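A minimal sketch of such an entrypoint (assuming nginx is the service whose lifetime should drive the container):
#!/bin/bash
# Start nginx in the background and capture its PID
nginx -g "daemon off;" &
NGINX_PID=$!
# ...start any helper processes here...
# Block until nginx exits; the container exits with it
wait $NGINX_PID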
Along with having something along the lines of ENTRYPOINT ["tail", "-f", "/dev/null"] in your Dockerfile, you should also run the container with the -td options. This is particularly useful when the container runs on a remote machine. Think of it as if you had SSHed into a remote machine that has the image and started the container: when you exit the SSH session, the container will get killed unless it was started with -td. A sample command for running your image would be: docker run -td <any other additional options> <image name>
This holds for Docker version 20.10.2.
There are some cases during development when there is no service yet but you want to simulate it and keep the container alive.
It is very easy to write a bash placeholder that simulates a running service:
while true; do
    sleep 100
done
You can replace this with something more serious as development progresses.
How about using the supervise form of service if available?
service YOUR_SERVICE supervise
Once supervise is successfully running, it will not exit unless it is
killed or specifically asked to exit.
Saves having to create a supervisord.conf

Correct way to detach from a container without stopping it

In Docker 1.1.2 (latest), what's the correct way to detach from a container without stopping it?
So for example, if I try:
docker run -i -t foo /bin/bash or
docker attach foo (for already running container)
both of which get me to a terminal in the container, how do I exit the container's terminal without stopping it?
exit and Ctrl+C both stop the container.
Type Ctrl+P then Ctrl+Q. It will turn interactive mode into daemon mode.
See https://docs.docker.com/engine/reference/commandline/cli/#default-key-sequence-to-detach-from-containers:
Once attached to a container, users detach from it and leave it running using the CTRL-p CTRL-q key sequence. This detach key sequence is customizable using the detachKeys property. [...]
Update: As mentioned in the answers below, Ctrl+P, Ctrl+Q will now turn interactive mode into daemon mode.
Well, Ctrl+C (or Ctrl+\) will detach you from the container, but it will kill the container because your main process is a bash shell.
A little lesson about docker.
The container is not a real, fully functional OS. When you run a container, the process you launch takes PID 1 and assumes init power. So when that process is terminated, the daemon stops the container until a new process is launched (via docker start). (More explanation on the matter: http://phusion.github.io/baseimage-docker/#intro)
If you want a container that runs in detached mode all the time, I suggest you use
docker run -d foo
with an SSH server in the container (the easiest way is to follow the dockerizing OpenSSH tutorial: https://docs.docker.com/engine/examples/running_ssh_service/).
Or you can just relaunch your container via
docker start foo
(it will be detached by default)
I dug into this and all the answers above are partially right. It all depends on how the container is launched; it comes down to two things about how the container was launched:
was a TTY allocated (-t)
was stdin left open (-i)
^P^Q does work, BUT only when -t and -i are used to launch the container:
[berto@g6]$ docker run -ti -d --name test python:3.6 /bin/bash -c 'while [ 1 ]; do sleep 30; done;'
b26e39632351192a9a1a00ea0c2f3e10729b6d3e22f8e0676d6519e15c08b518
[berto@g6]$ docker attach test
# here I typed ^P^Q
read escape sequence
# i'm back to my prompt
[berto@g6]$ docker kill test; docker rm -v test
test
test
ctrl+c does work, BUT only when -t (without -i) is used to launch the container:
[berto@g6]$ docker run -t -d --name test python:3.6 /bin/bash -c 'while [ 1 ]; do sleep 30; done;'
018a228c96d6bf2e73cccaefcf656b02753905b9a859f32e60bdf343bcbe834d
[berto@g6]$ docker attach test
^C
[berto@g6]$
The third way to detach
There is a way to detach without killing the container, though; you need another shell. In summary, running pkill -9 -f 'docker.*attach' in another shell detaches you and leaves the container running:
[berto@g6]$ docker run -d --name test python:3.6 /bin/bash -c 'while [ 1 ]; do sleep 30; done;'
b26e39632351192a9a1a00ea0c2f3e10729b6d3e22f8e0676d6519e15c08b518
[berto@g6]$ docker attach test
# here I typed ^P^Q, but it doesn't work
^P
# ctrl+c doesn't work either
^C
# can't background either
^Z
# go to another shell and run the `pkill` command above
# i'm back to my prompt
[berto@g6]$
Why? Because you're killing the process that connected you to the container, not the container itself.
If you do docker attach <container id> you get into the container.
To exit from the container without stopping it, press Ctrl+P then Ctrl+Q.
I consider Ashwin's answer to be the most correct; my old answer is below.
I'd like to add another option here which is to run the container as follows
docker run -dti foo bash
You can then enter the container and run bash with
docker exec -ti ID_of_foo bash
No need to install sshd :)
Try Ctrl+P, Ctrl+Q to turn interactive mode into daemon mode.
If this does not work and you attached through docker attach, you can detach by killing the docker attach process.
A better way is to use the --sig-proxy parameter to avoid passing Ctrl+C to your container:
docker attach --sig-proxy=false [container-name]
The same option is available for the docker run command.
The default way to detach from an interactive container is Ctrl+P Ctrl+Q, but you can override it when running a new container or attaching to existing container using the --detach-keys flag.
You can use the --detach-keys option when you run docker attach to override the default CTRL+P, CTRL+Q sequence (which doesn't always work).
For example, when you run docker attach --detach-keys="ctrl-a" test and you press CTRL+A you will exit the container, without killing it.
Other examples:
docker attach --detach-keys="ctrl-a,x" test - press CTRL+A and then X to exit
docker attach --detach-keys="a,b,c" test - press A, then B, then C to exit
Extract from the official documentation:
If you want, you can configure an override of the Docker key sequence for detach. This is useful if the Docker default sequence conflicts with a key sequence you use for other applications. There are two ways to define your own detach key sequence: as a per-container override or as a configuration property for your entire configuration.
To override the sequence for an individual container, use the --detach-keys="<sequence>" flag with the docker attach command. The format of the <sequence> is either a letter [a-Z], or the ctrl- combined with any of the following:
a-z (a single lowercase alpha character)
@ (at sign)
[ (left bracket)
\\ (two backward slashes)
_ (underscore)
^ (caret)
The values a, ctrl-a, X, or ctrl-\\ are all examples of valid key sequences. To configure a different default key sequence for all containers, see the Configuration file section.
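For the global variant, a sketch of ~/.docker/config.json (the chosen sequence is just an example):
{
    "detachKeys": "ctrl-e,e"
}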
Note: This works since docker version 1.10+ (at the time of this answer, the current version is 18.03)
If you just want to see the output of the process running inside the container, you can do a simple docker container logs -f <container id>.
The -f flag makes it so that the output of the container is followed and updated in real-time. Very useful for debugging or monitoring.
A Docker container must run at least one process; only while that process is alive will the container keep running the image (ubuntu, httpd, etc., whatever it is) in the background without exiting.
For example, with the ubuntu Docker image:
To create a new container in detached mode (running in the background, with at least one process):
docker run -d -i -t f63181f19b2f /bin/bash
This creates a new container from the image (ubuntu) with id f63181f19b2f. The container runs in detached mode (in the background), and a small tty bash shell process runs inside it. So the container will keep running until that bash shell process is killed.
To attach to the running background container, use
docker attach b1a0873a8647
If you want to detach from the container without exiting (without killing the bash shell):
By default, you can use Ctrl+P, Ctrl+Q. It will drop you out of the container while leaving it running in the background (that is, without killing the bash shell).
You can pass a custom detach sequence when attaching to the container:
docker attach --detach-keys="ctrl-s" b1a0873a8647
This time the Ctrl+P, Ctrl+Q escape sequence won't work; instead, Ctrl+S will detach you from the container. You can pass any key, e.g. (ctrl-*).
You can simply kill the docker CLI process by sending SIGKILL. If you started the container with
docker run -it some/container
You can get its PID:
ps -aux | grep docker
user 1234 0.3 0.6 1357948 54684 pts/2 Sl+ 15:09 0:00 docker run -it some/container
Let's say it's 1234; you can "detach" it with
kill -9 1234
It's somewhat of a hack but it works!
To avoid being attached to the container's output, run it in detached mode using the -d flag:
docker run -d <your_command>
If you are already stuck, you can open a new window/tab in your terminal and close the first one. It won't stop the running job.
In case you are using Docker on Windows, you may use the combination Ctrl+D.
Old post, but you can just exit and then start the container again... The issue is that on a Windows machine Ctrl+P is tied to print... exiting and then starting the container should not hurt anything.
