How to keep a service running on a Docker container

I am trying to run a simple Docker container with my web application installed (not using a Dockerfile).
During testing I would always run the container with the -t -i options and then start the Tomcat service inside it by running a shell script.
Now that I am moving to production, I don't want to use the -t -i options any more; I just need my Tomcat service to start and be the only primary service.
I tried pointing the entrypoint to the startup script for Tomcat, but the container terminates after that script finishes.
How do I run a container, start a service, and keep that service as the single primary service of the container?
Note: I read some posts about supervisor, but I am not sure whether I would need to start building my image from scratch if I go that route? I would prefer not to do that.
Any suggestions?

If you have a Dockerfile that uses an entrypoint pattern, it will look something like this:
(Dockerfile)
FROM ubuntu
...Some configuration steps...
ADD start.sh /start.sh
ENTRYPOINT ["/start.sh"]
All you need to do is make sure your start.sh script 'hangs' in some way. Some people like to tail the syslogs, but tailing any file that exists will work.
(start.sh)
#!/bin/bash
service Your_Service_Or_Whatever start
tail -f /var/log/dmesg
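For the Tomcat case in the question, the same pattern applies: start the service, then tail one of its logs so the script never exits. A sketch, with illustrative paths for a typical Tomcat install:
#!/bin/bash
# start Tomcat in the background, then follow its log so the script never exits
/opt/tomcat/bin/startup.sh
tail -f /opt/tomcat/logs/catalina.out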

A shorter version:
FROM ubuntu
...Some configuration steps...
ENTRYPOINT ["/bin/sh", "-c", "while true; do sleep 1; done"]
Tested with Docker version 1.12.1, build 23cf638.
Use docker --version to find out your version.

By default, Docker containers run according to the configuration in the image's Dockerfile. If you run a container with the -i flag, you leave STDIN open, giving you access to the container's entrypoint, or it could be a bash shell. To achieve what you want, you can run the container in a detached state, passing your command to docker run directly.
docker run -d myapp /opt/catalina/bin/startup.sh
This will run the myapp container in a detached state and run the command passed as the third argument. If the command starts a long-lived service, the container will stay active as long as the service does.
This is explained in detail in the docs.
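Note that Tomcat's startup.sh normally forks the server into the background and exits, which would end a detached container almost immediately. If you hit that, running Tomcat in the foreground keeps the container alive; a sketch, assuming the same illustrative install path as above:
docker run -d myapp /opt/catalina/bin/catalina.sh run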

Related

Docker: "service" command works but "systemctl" command doesn't work

I pulled the centos6 image and made a container from it. I got its bash by:
$ docker run -i -t centos:centos6 /bin/bash
On the centos6 container, I could use the "service" command without any problem. But when I pulled and used the centos7 image:
$ docker run -i -t centos:centos7 /bin/bash
Neither "service" nor "systemctl" worked. The error message is:
Failed to get D-Bus connection: Operation not permitted
My question is:
1. How are people developing without "service" and "systemctl" commands?
2. If I want to use, for example, httpd.service on the centos7 container, what should I do? Or maybe running services on a container is not recommended?
There is no process supervisor running inside either container. The service command in your CentOS 6 container works by virtue of the fact that it just runs a script from /etc/init.d, which by design ultimately launches a command in the background and returns control to you.
CentOS 7 uses systemd, and systemd is not running inside your container, so there is nothing for systemctl to talk to.
In either situation, using the service or systemctl command is generally the wrong thing to do: you want to run a single application, and you want to run it in the foreground, so that your container continues to run (from Docker's perspective, a command that goes into the background has exited, and if that was pid 1 in the container, the container will exit).
How are people developing without "service" and "systemctl" commands?
They are starting their programs directly, by consulting the necessary documentation to figure out the appropriate command line.
If I want to use, for example, httpd.service on the centos7 container, what should I do? Or maybe running services on a container is not recommended?
You would start the httpd binary using something like:
CMD ["httpd", "-DFOREGROUND"]
If you would like to stick with the service/systemctl commands to start and stop services, you can do that in a CentOS 7 container by using the docker-systemctl-replacement script.
I had some deployment scripts that were using the service start/stop commands on a real machine, and they work fine with a container, without any further modification. When the systemctl.py script is used as the CMD, it will simply start all enabled services, somewhat like the init process on a real machine.
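A rough sketch of that pattern, assuming you have downloaded systemctl.py into your build context (the paths and the httpd service are illustrative):
FROM centos:centos7
# replace the real systemctl with the docker-systemctl-replacement script
COPY systemctl.py /usr/bin/systemctl
RUN yum -y install httpd
RUN systemctl enable httpd
CMD ["/usr/bin/systemctl"]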
systemd is included but not enabled by default in the CentOS 7 Docker image. It is mentioned on the repository page, along with steps to enable it.
https://hub.docker.com/_/centos/

How to keep the docker container up and running?

Here is my simple Dockerfile:
FROM java:8
EXPOSE 4000
Now when I run it using the following command:
sudo docker run --name hello dockerfile
and do docker ps -a, it shows the status as exited. I just want to keep this container up and running so I can ssh into it and perhaps transfer files and so on. It looks like containers are mainly used to run servers; am I correct?
You can at least keep your container up with something like docker run -d hello sleep infinity, but as said by René M, you should put something in your Dockerfile's CMD or ENTRYPOINT; see the docs:
https://docs.docker.com/engine/reference/builder/#cmd
and
https://docs.docker.com/engine/reference/builder/#entrypoint
That is really simple.
Your container is running nothing that lasts long: it starts, has nothing to do, and stops.
What you can do is:
Run the container in interactive mode with an attached tty. This way your console enters the container after it starts, and the allocated tty gives the container something to do, which prevents it from stopping. You can then work inside this container, for example installing an application. Your work will be lost after stopping the container, but you can run docker commit on that container, which makes your changes persistent.
docker run -i -t --name hello dockerfile
Enhance your Dockerfile with something useful, like copying an application into the container and providing a CMD command to run when the container starts.
After this the container will last as long as your CMD command runs. If the command is a server or daemon application, the container will run forever and will only stop when you stop it.
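A minimal sketch of that second option, extending the Dockerfile from the question (the jar name here is purely a placeholder):
FROM java:8
EXPOSE 4000
# copy your application into the image and run it as the long-lived foreground process
COPY app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]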

How to restart a single container with docker-compose

I have a docker-compose.yml file that contains 4 containers: redis, postgres, api and worker.
During the development of the worker container, I often need to restart it in order to apply changes. Is there any good way to restart a single container (e.g. worker) without restarting the others?
It is very simple. Use the command:
docker-compose restart worker
You can set the time to wait for the stop before killing the container (in seconds):
docker-compose restart -t 30 worker
Note that this will restart the container but without rebuilding it. If you want to apply your changes and then restart, take a look at the other answers.
The other answers on restarting a single service are on target: docker-compose restart worker. That will bounce the container, but it won't include any changes, even if you rebuilt the image separately. You can manually stop, rm, create, and start, but there are much easier methods.
If you've updated your code, you can do the build and reload in a single step with:
docker-compose up --detach --build
That will first rebuild your images from any changed code, which is fast if there are no changes since the cache is reused. And then it only replaces the changed containers. If your downloaded images are stale, you can precede the above command with:
docker-compose pull
to download any changed images first (the containers won't be restarted until you run a command like the up above). Doing an initial stop is unnecessary.
And to only do this for a single service, follow the up or pull command with the services you want to specify, e.g.:
docker-compose up --detach --build worker
Here's a quick example of the first option. The Dockerfile is structured to keep the frequently changing parts of the code near the end; in fact, the requirements are pulled in separately for the pip install, since that file rarely changes. And since the nginx and redis containers were up-to-date, they weren't restarted. Total time for the entire process was under 6 seconds:
$ time docker-compose -f docker-compose.nginx-proxy.yml up --detach --build
Building counter
Step 1 : FROM python:2.7-alpine
---> fc479af56697
Step 2 : WORKDIR /app
---> Using cache
---> d04d0d6d98f1
Step 3 : ADD requirements.txt /app/requirements.txt
---> Using cache
---> 9c4e311f3f0c
Step 4 : RUN pip install -r requirements.txt
---> Using cache
---> 85b878795479
Step 5 : ADD . /app
---> 63e3d4e6b539
Removing intermediate container 9af53c35d8fe
Step 6 : EXPOSE 80
---> Running in a5b3d3f80cd4
---> 4ce3750610a9
Removing intermediate container a5b3d3f80cd4
Step 7 : CMD gunicorn app:app -b 0.0.0.0:80 --log-file - --access-logfile - --workers 4 --keep-alive 0
---> Running in 0d69957bda4c
---> d41ff1635cb7
Removing intermediate container 0d69957bda4c
Successfully built d41ff1635cb7
counter_nginx_1 is up-to-date
counter_redis_1 is up-to-date
Recreating counter_counter_1
real 0m5.959s
user 0m0.508s
sys 0m0.076s
To restart a service with changes, here are the steps that I performed:
docker-compose stop -t 1 worker
docker-compose build worker
docker-compose up --no-start worker
docker-compose start worker
The following command
docker-compose restart worker
will just STOP and START the container, i.e. without loading any changes from docker-compose.yml.
STOP is similar to hibernating a PC, so stop/start will not pick up any changes made in the configuration file. To reload from the container's recipe (docker-compose.yml) we need to remove and create the container again (a similar analogy to rebooting a PC).
So commands will be as following
docker-compose stop worker    # go into hibernation
docker-compose rm worker      # shut down the PC
docker-compose create worker  # create the container from the image and put it into hibernation
docker-compose start worker   # bring the container to life from hibernation
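If you'd rather not type four commands, docker-compose up with the --force-recreate flag collapses the remove/create/start steps into one (it recreates the container from the current image without rebuilding it):
docker-compose up -d --force-recreate worker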
Since some of the other answers include info on rebuilding, and my use case also required a rebuild, I had a better solution (compared to those).
There's still a way to easily target just the single worker container, rebuilding and restarting it in a single line, albeit not with a single command. For me the best solution was simply to rebuild and restart:
docker-compose build worker && docker-compose restart worker
This accomplishes both major goals at once for me:
Targets the single worker container
Rebuilds and restarts it in a single line
Hope this helps anyone else getting here.
Restart Service with docker-compose file
docker-compose -f [COMPOSE_FILE_NAME].yml restart [SERVICE_NAME]
Use Case #1: If the COMPOSE_FILE_NAME is docker-compose.yml and service is worker
docker-compose restart worker
Use Case #2: If the file name is sample.yml and service is worker
docker-compose -f sample.yml restart worker
By default docker-compose looks for docker-compose.yml when we run the docker-compose command; otherwise we have the -f [FILE_NAME].yml flag to point it at a specific file.
The answers here talk about reflecting changes to the docker-compose.yml file.
But what if I want to incorporate the changes I have made in my code? I believe that is only possible by rebuilding the image, which I do with the following commands:
1. Stop the container
docker stop container-id
2. Remove the container
docker rm container-id
3. Remove the image
docker rmi image-id
4. Compose the container again
docker-compose up container-name
Restart container
If you want to just restart your container:
docker-compose restart servicename
Think of this command as "just restart the container by its name", which is equivalent to the docker restart command.
Note these caveats:
If you changed ENV variables, they won't be updated in the container. You need to stop it and start it again. Or, with the single command docker-compose up, changes will be detected and the container recreated.
As many others mentioned, if you changed docker-compose.yml file itself, simple restart won't apply those changes.
If you copy your code into the container at the build stage (in the Dockerfile, using the ADD or COPY commands), every time the code changes you have to rebuild the image (docker-compose build).
Correlation to your code
docker-compose restart should work perfectly fine if your code is mapped into the container by a volumes directive in docker-compose.yml, like so:
services:
  servicename:
    volumes:
      - .:/code
But I'd recommend using live code reloading, which is probably provided by your framework of choice in DEBUG mode (alternatively, you can search for auto-reload packages in your language of choice). Adding this should eliminate the need to restart the container every time your code changes; instead it reloads the process inside.
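As a sketch of that combination (Flask is an arbitrary framework choice here; FLASK_DEBUG=1 turns on its auto-reloader, so code changes mounted via the volume reload the process without a container restart):
services:
  servicename:
    volumes:
      - .:/code
    environment:
      - FLASK_DEBUG=1
    command: flask run --host=0.0.0.0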
The plain docker command knows nothing about the worker container. Use a command like this:
docker-compose -f docker-compose.yml restart worker
After making changes, you need to pull the changes onto the server and then recreate the container. As the documentation shows:
docker-compose pull worker && docker-compose up -d --no-deps worker
pull worker pulls only this service's image onto the server, and --no-deps prevents restarting the containers that the worker container depends on.
To apply changes to a Docker Compose file while only restarting a single service, you can use the docker-compose command with the up command and specify the service name. For example:
$ docker-compose up -d --no-deps myservice
This command will update the configuration for the myservice service and restart it, without touching any of the other services in the Compose file. The -d flag runs the services in the background, and the --no-deps flag tells Compose not to start any dependencies of the myservice service.
Alternatively, you can use the restart command to restart a single service:
$ docker-compose restart myservice
This will restart the myservice service using its current configuration.
Note that restart will not apply any changes made to the Compose file; it only restarts the service as it is currently configured.

Docker: how to restart process inside of container?

I have a set of tests which I would like to run on a Docker container.
In the middle of the tests I change my test data, and I need to restart Jetty.
What is the best way to do that?
I can imagine some options:
With SSH, but SSH is not the best option for Docker.
A Python agent in the container listening on a socket: expose one more port, connect, and restart Jetty.
Maybe there are better ideas for that?
Thanks
Sounds like the process you're trying to restart is the primary process of the Docker container (i.e. the one you set in your Dockerfile if you have one; when you run ps -ef inside the container you would see your process with PID 1). If this is the case, then you cannot restart it from inside the container. You should just restart the container itself:
docker restart <container_id>
Enter the container and restart it.
Manual Way:
docker exec -it <containeridorname> /bin/bash
Or Automated Way:
docker exec -it <containeridorname> /restartjettycommand.sh
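A sketch of what such a script might contain, assuming Jetty was started with a stop port configured (the port, key, and path below are illustrative):
#!/bin/sh
cd /opt/jetty
# stop the running Jetty instance via its configured stop port
java -DSTOP.PORT=28282 -DSTOP.KEY=secret -jar start.jar --stop
# start it again in the background
java -DSTOP.PORT=28282 -DSTOP.KEY=secret -jar start.jar &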
You need to use an entrypoint shell script.
You will need to build your Dockerfile so that you can copy the files into the container.
Your entrypoint.sh (or call it runjettytests.sh) in pseudocode will look something like:
#!/bin/sh
# start Jetty in the background (with a stop port configured) so the script can continue
java -DSTOP.PORT=8080 -DSTOP.KEY=stop_jetty -jar start.jar &
$runTests                # placeholder for your first test command
java -DSTOP.PORT=8080 -DSTOP.KEY=stop_jetty -jar start.jar --stop
cp dataset2 data/
java -DSTOP.PORT=8080 -DSTOP.KEY=stop_jetty -jar start.jar &
$runTests2               # placeholder for your second test command
java -DSTOP.PORT=8080 -DSTOP.KEY=stop_jetty -jar start.jar --stop
exit 0
Clearly your use case may vary, but that's the rough idea.

How to keep Docker container running after starting services?

I've seen a bunch of tutorials that seem to do the same thing I'm trying to do, but for some reason my Docker containers exit. Basically, I'm setting up a web server and a few daemons inside a Docker container. I do the final parts of this through a bash script called run-all.sh that I run through CMD in my Dockerfile. run-all.sh looks like this:
service supervisor start
service nginx start
And I start it inside of my Dockerfile as follows:
CMD ["sh", "/root/credentialize_and_run.sh"]
I can see that the services all start up correctly when I run things manually (i.e. getting on to the image with -i -t /bin/bash), and everything looks like it runs correctly when I run the image, but it exits once it finishes starting up my processes. I'd like the processes to run indefinitely, and as far as I understand, the container has to keep running for this to happen. Nevertheless, when I run docker ps -a, I see:
➜ docker_test docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7706edc4189 some_name/some_repo:blah "sh /root/run-all.sh 8 minutes ago Exited (0) 8 minutes ago grave_jones
What gives? Why is it exiting? I know I could just put a while loop at the end of my bash script to keep it up, but what's the right way to keep it from exiting?
If you are using a Dockerfile, try:
ENTRYPOINT ["tail", "-f", "/dev/null"]
(Obviously this is for dev purposes only; you shouldn't need to keep a container alive unless it's running a process, e.g. nginx...)
I just had the same problem, and I found out that if you run your container with the -t and -d flags, it keeps running.
docker run -td <image>
Here is what the flags do (according to docker run --help):
-d, --detach=false Run container in background and print container ID
-t, --tty=false Allocate a pseudo-TTY
The most important one is the -t flag. -d just lets you run the container in the background.
This is not really how you should design your Docker containers.
When designing a Docker container, you're supposed to build it such that there is only one process running (i.e. you should have one container for Nginx, and one for supervisord or the app it's running); additionally, that process should run in the foreground.
The container will "exit" when the process itself exits (in your case, that process is your bash script).
However, if you really need (or want) to run multiple services in your Docker container, consider starting from "Docker Base Image", which uses runit as a pseudo-init process (runit will stay online while Nginx and Supervisor run) and will stay in the foreground while your other processes do their thing.
They have substantial docs, so you should be able to achieve what you're trying to do reasonably easily.
You can run plain cat without any arguments, as mentioned by @Sa'ad, to simply keep the container working (actually doing nothing but waiting for user input). Jenkins' Docker plugin does the same thing.
The reason it exits is that the shell script runs first as PID 1, and when it completes, PID 1 is gone, and Docker only runs while PID 1 is.
You can use supervisor to do everything; if it is run with the -n flag it is told not to daemonize, so it will stay as the first process:
CMD ["/usr/bin/supervisord", "-n"]
And your supervisord.conf:
[supervisord]
nodaemon=true
[program:startup]
priority=1
command=/root/credentialize_and_run.sh
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
startsecs=0
[program:nginx]
priority=10
command=nginx -g "daemon off;"
stdout_logfile=/var/log/supervisor/nginx.log
stderr_logfile=/var/log/supervisor/nginx.log
autorestart=true
Then you can have as many other processes as you want and supervisor will handle the restarting of them if needed.
That way you could use supervisord in cases where you might need nginx and php5-fpm together and it doesn't make much sense to keep them apart.
Motivation:
There is nothing wrong with running multiple processes inside a Docker container. If one likes to use Docker as a lightweight VM, so be it. Others like to split their applications into microservices. Methinks: a LAMP stack in one container? Just great.
The answer:
Stick with a good base image like the phusion base image. There may be others. Please comment.
And this is yet just another plea for supervisor, because the phusion base image provides a supervisor besides some other things such as cron and locale setup, stuff you like to have set up when running such a lightweight VM. For what it's worth, it also provides ssh connections into the container.
The phusion image itself will just start and keep running if you issue this basic docker run statement:
moin@stretchDEV:~$ docker run -d phusion/baseimage
521e8a12f6ff844fb142d0e2587ed33cdc82b70aa64cce07ed6c0226d857b367
moin@stretchDEV:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
521e8a12f6ff phusion/baseimage "/sbin/my_init" 12 seconds ago Up 11 seconds
Or dead simple:
If a base image is not for you... For a quick CMD to keep the container running I would suggest something like this for bash:
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
Or this for busybox:
CMD exec /bin/sh -c "trap : TERM INT; (while true; do sleep 1000; done) & wait"
This is nice, because it will exit immediately on a docker stop.
Just plain sleep or cat will take a few seconds before the container is forcefully killed by docker.
Updates
As a response to Charles Desbiens concerning running multiple processes in one container:
This is an opinion, and the docs point in this direction. A quote: "It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application." For sure it is obviously much more powerful to divide your complex service into multiple containers. But there are situations where it can be beneficial to go the one-container route, especially for appliances. The GitLab Docker image is my favourite example of a multi-process container. It makes deployment of this complex system easy. There is no way to mis-configure it. GitLab retains all control over their appliance. Win-win.
Make sure that you add daemon off; to your nginx.conf, or run it with CMD ["nginx", "-g", "daemon off;"], as per the official nginx image.
Then use the following to run supervisor as a service and nginx as the foreground process, which will prevent the container from exiting:
service supervisor start && nginx
In some cases you will need to have more than one process in your container, so forcing the container to have exactly one process won't work and can create more problems in deployment.
So you need to understand the trade-offs and make your decision accordingly.
Since Docker Engine v1.25 there is an option called init.
docker-compose supports it as of Compose file format version 3.7.
So my current CMD when running a container that should run into infinity:
CMD ["sleep", "infinity"]
and then run it using:
docker build -t app .
docker run --rm --init app
cf.:
rm docs and init docs
Capture the PID of the nginx process in a variable (for example $NGINX_PID) and at the end of the entrypoint file do:
wait $NGINX_PID
That way, your container runs as long as nginx is alive; when nginx stops, the container stops as well.
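A minimal sketch of such an entrypoint (starting nginx in the background here purely for illustration):
#!/bin/bash
# run nginx in foreground mode, but as a background job of this script
nginx -g "daemon off;" &
NGINX_PID=$!
# block until nginx exits; the container lives exactly as long as nginx does
wait $NGINX_PID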
Along with having something along the lines of ENTRYPOINT ["tail", "-f", "/dev/null"] in your Dockerfile, you should also run the docker container with the -td option. This is particularly useful when the container runs on a remote machine. Think of it as having ssh'ed into a remote machine with the image and started the container: when you exit the ssh session, the container will be killed unless it was started with the -td option. A sample command for running your image would be: docker run -td <any other additional options> <image name>
This holds for Docker version 20.10.2.
There are some cases during development when there is no service yet but you want to simulate it and keep the container alive.
It is very easy to write a bash placeholder that simulates a running service:
while true; do
sleep 100
done
You can replace this with something more serious as development progresses.
How about using the supervise form of service if available?
service YOUR_SERVICE supervise
Once supervise is successfully running, it will not exit unless it is
killed or specifically asked to exit.
This saves having to create a supervisord.conf.
