I would like to have a runit service supervise a set of containers launched by the docker-compose tool; here's my runit script:
In /etc/sv/app/run
#!/bin/bash
exec 2>&1
APP_HOME=/home/myapp
source "$APP_HOME/env.sh"
exec docker-compose -f "$APP_HOME/docker-compose.yml" up
Here's what I have then:
sv start app - launches the docker-compose thing just fine
sv stop app - stops the docker-compose process itself, but for some unknown reason it leaves the containers running
Is there any way to have the stop command stop the containers as well? I thought that is what docker-compose would do when it gets stopped by runit.
I'm not familiar with Docker (yet), but I am familiar with runit.
When you issue sv stop app, you are actually telling runsvdir to signal the runsv for your docker-compose launch to tear down the process. If you need something to signal the containers to shut down, that won't happen: runsv will simply haul off and kill any child processes that are attached. You may wish to read up on ./finish scripts, which are tasked with cleaning things up.
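For this case, a ./finish script along the following lines should do the cleanup. This is only a sketch: it assumes the same APP_HOME layout as the run script above, and that your Compose version supports docker-compose down (older versions can use docker-compose stop instead):

```shell
#!/bin/bash
# /etc/sv/app/finish -- runit runs this after the run process exits.
exec 2>&1
APP_HOME=/home/myapp
# Stop and remove the containers that "up" created.
exec docker-compose -f "$APP_HOME/docker-compose.yml" down
```

Because runit always runs ./finish after the supervised process goes down, the containers get torn down whether the service was stopped with sv stop or crashed on its own.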
Related
I'm new to Docker.
What is the difference between these?
docker run 'an image'
docker-compose run 'something'
docker-compose start 'docker-compose.yml'
docker-compose up 'docker-compose.yml'
Thanks in advance.
https://docs.docker.com/compose/faq/#whats-the-difference-between-up-run-and-start
What’s the difference between up, run, and start?
Typically, you want docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml. In the default “attached” mode, you see all the logs from all the containers. In “detached” mode (-d), Compose exits after starting the containers, but the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “ad hoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
The docker-compose start command is useful only to restart containers that were previously created, but were stopped. It never creates new containers.
Also: https://docs.docker.com/compose/reference/
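Concretely, with a docker-compose.yml that defines services named web and db (hypothetical names), the commands differ like this; note that run and start take service names, not the path to the yml file:

```shell
# Create and start all services in ./docker-compose.yml,
# attached to their logs (add -d to detach):
docker-compose up

# Run a one-off command in a fresh "web" container; only the
# services "web" depends on (here, "db") are started:
docker-compose run web echo "one-off task"

# Start containers that were previously created and then stopped;
# this never creates new containers:
docker-compose start
```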
Using docker-compose up -d, when one of my containers fails to start (i.e. its command exits with an error code), it fails quietly. How can I make it fail loudly?
(Am I thinking about this the right way? My ultimate goal is using Docker for my development environment. I'd like to be able to spin up my environment and be informed of errors right away. I'd like to stick to Docker's true path as much as possible, and am hesitant to depend on additional tools like screen/tmux.)
Since you are running it detached (-d), docker-compose only spawns the containers and exits, without monitoring for any issues. If you run the containers in the foreground with:
docker-compose up --abort-on-container-exit
That should give you a pretty clear error on any container problems. Otherwise, I'd recommend looking into some of the other more advanced schedulers that monitor the running containers to recover from failures (e.g. Universal Control Plane or Kubernetes).
Update: If you want to script something outside of the docker-compose up -d, you can do a
docker events -f "container=${compose_prefix}_" -f "event=die"
and if anything gets output there, you had a container go down. There's also docker-compose events | grep "container die".
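A sketch of scripting that check (the event-line format and the helper name are assumptions; verify against your Docker version's docker events output):

```shell
#!/bin/bash
# Pull the container name out of a typical `docker events` line such as:
#   2016-01-01T12:00:00.000Z container die abcdef (image=foo, name=myproj_db_1)
event_container_name() {
  sed -n 's/.*name=\([^,)]*\).*/\1/p'
}

# Usage (requires a running Docker daemon):
#   docker events -f "event=die" | while read -r line; do
#     echo "ERROR: container $(echo "$line" | event_container_name) died" >&2
#   done
```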
As shown in compose/cli/main.py#L66-L68, docker-compose is supposed to fail on (Dockerfile) build:
except BuildError as e:
log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
sys.exit(1)
Since -d (Detached mode, which runs containers in the background) is incompatible with --abort-on-container-exit, a "docker way" would be to:
wrap the docker-compose up -d in a script
add a docker-compose logs call to that script, parse it for errors, and exit loudly if any error message is found.
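That wrapper's error check could be sketched like this (the error pattern is an assumption; tune it to your images' actual log output):

```shell
#!/bin/bash
# Succeeds if the log stream on stdin contains an error-looking line.
log_has_errors() {
  grep -q -i -E 'error|exception|fatal'
}

# Usage (requires a Docker daemon and a compose project):
#   docker-compose up -d
#   sleep 5   # crude: give the containers a moment to start
#   docker-compose logs --no-color | log_has_errors \
#     && { echo "ERROR found in container logs" >&2; exit 1; }
```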
I am attempting to send SIGSTOP, and then later, SIGKILL to a container. This line leads me to believe that it will behave as I expect: https://github.com/docker/docker/issues/5948#issuecomment-43684471
However, it is going ahead and actually removing the containers. The commands are:
docker kill -s STOP container
docker kill -s CONT container
(Equivalent through the dockerode API I am using, but I just went to the command line when that wasn't working.) Are there some options I'm missing?
I think you're actually looking for the commands docker pause and docker unpause. Using the STOP signal is likely to be error-prone and dependent on how the process handles the signal.
I guess what's happening in this case is that Docker thinks the process has terminated and stops the container (it shouldn't be removed, however; you can restart it with docker start).
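For comparison, pausing uses the cgroups freezer rather than signals, so the process gets no say in it. Assuming a container named mycontainer (a placeholder name):

```shell
# Freeze every process in the container (no signal is delivered,
# so the process cannot catch or ignore it):
docker pause mycontainer

# Thaw it again later:
docker unpause mycontainer
```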
I have a docker-compose setup with a database container, an application container, and one container which pre-loads the database with the necessary data.
I want to start all of the containers together with docker-compose up, while the pre-loading container terminates with exit 0 after it has completed its work.
But terminating this one container takes down the complete setup with the message:
composesetup_load_1 exited with code 0
Gracefully stopping... (press Ctrl+C again to force)
Stopping composesetup_app_1...
Stopping composesetup_db_1...
Is there any way of having multiple containers with different life-time in one docker-compose setup? If yes, how?
My workaround for now is to keep the pre-loading container running by adding tail -f /dev/null to the end of its entrypoint script. This keeps the process running while nothing actually happens.
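That workaround's entrypoint might look like the following sketch (load_data.sh is a hypothetical name for the actual pre-loading step):

```shell
#!/bin/bash
set -e
# Do the real pre-loading work (hypothetical script name).
./load_data.sh
# Then block forever so the container stays "up" and
# docker-compose up does not tear the other services down.
exec tail -f /dev/null
```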
Using the -d option, docker-compose up -d will run the processes in detached mode. This avoids the need to kill the service with Ctrl+C, and therefore the containers are not stopped.
Note: I am assuming you killed the process with Ctrl+C, from the "Gracefully stopping... (press Ctrl+C again to force)" message you shared.
I have been running docker processes (apps) via
docker run …
But under runit supervision (runit is like daemontools), so runit ensures that the process stays up, passes signals, etc.
Is this reasonable? Docker seems to want to do its own daemonization, but it isn't as thorough as runit. Furthermore, when runit restarts the app, a new container is created each time (fine) but it leaves a trace of the old one around; this seems to imply I am doing it the wrong way.
Should docker not be run this way?
Should I instead set up a container from the image, just once, and then have runit run/supervise that container for all time?
Docker does do some management of daemonized containers: if the system shuts down, then when the Docker daemon starts it will also restart any containers that were running at the time the system shut down. But if the container exits on its own or the kernel (or a user) kills the container while it is running, the Docker daemon won't restart it. In cases where you do want a restart, a process manager makes sense.
I don't know runit, so I can't give specific configuration guidance. But you should probably make the process manager communicate with the Docker daemon and check whether a given container id is running (docker ps | grep container_id or equivalent, or use the Docker Remote API directly). If the container has stopped, use Docker to restart it (docker start container_id) instead of running a new container. Or, if you do want a new container each time, then use docker run --rm to automatically clean it up when it exits or stops.
If you don't want your process manager to poll docker, you could instead run something that watches docker events.
You can get the container_id when you start the container as the return value of starting a daemon, or you can ask Docker to write it out to a file (docker run --cidfile myfilename, like a PID file).
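Putting those pieces together, a polling check might look like this sketch (the image name, cidfile path, and overall flow are assumptions, not a known-good configuration):

```shell
#!/bin/bash
CIDFILE=/var/run/myapp.cid

# First run only: create the container and record its id.
if [ ! -f "$CIDFILE" ]; then
  docker run --cidfile "$CIDFILE" -d myimage
fi

cid=$(cat "$CIDFILE")

# If that container is not currently running, restart the same one
# instead of creating a new container each time.
if ! docker ps -q --no-trunc | grep -q "$cid"; then
  docker start "$cid"
fi
```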
I hope that helps or helps another runit guru offer more detailed advice.
Yes, I think running docker under runit makes sense. Typically, when you start a process, there is a way to tell it not to daemonize if it does so by default, since the normal way to hand off from the runit run script to a process is via exec on the last line of your run script. For docker this means making sure not to set the -d flag.
For example, with docker you probably want your run script to look something like this:
#!/bin/bash -e
exec 2>&1
exec chpst -u dockeruser docker run -a stdin -a stdout -i ...
Using exec and chpst should resolve most issues with processes not terminating correctly when you bring down a runit service.