When I use docker-compose up -d and one of my containers fails to start (i.e. its command exits with an error code), it fails quietly. How can I make it fail loudly?
(Am I thinking about this the right way? My ultimate goal is to use Docker for my development environment. I'd like to be able to spin up my environment and be informed of errors right away. I'd like to stick to Docker's true path as much as possible, and am hesitant to depend on additional tools like screen/tmux.)
Since you are running it detached (-d), docker-compose only spawns the containers and exits, without monitoring for any issues. If you run the containers in the foreground with:
docker-compose up --abort-on-container-exit
That should give you a pretty clear error on any container problems. Otherwise, I'd recommend looking into some of the other more advanced schedulers that monitor the running containers to recover from failures (e.g. Universal Control Plane or Kubernetes).
Update: If you want to script something outside of the docker-compose up -d, you can do a
docker events -f "container=${compose_prefix}_" -f "event=die"
and if anything gets output there, you had a container go down. There's also docker-compose events | grep "container die".
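For example, a minimal watcher sketch built on that command; the project prefix is an assumption (substitute your compose project name), and the prefix-matching container filter is taken from the command above:
#!/bin/sh
# Hypothetical sketch: start the stack detached, then stay in the
# foreground and report loudly whenever a project container dies.
COMPOSE_PREFIX=myproject   # assumption -- substitute your project name
docker-compose up -d
# docker events blocks for the lifetime of the stack; each "die" event
# for a matching container prints a loud failure line to stderr.
docker events -f "container=${COMPOSE_PREFIX}_" -f "event=die" |
while read -r event; do
    echo "LOUD FAILURE, container died: $event" >&2
done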
As shown in compose/cli/main.py#L66-L68, docker-compose is supposed to fail on (Dockerfile) build:
except BuildError as e:
    log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
    sys.exit(1)
Since -d (Detached mode, which runs containers in the background) is incompatible with --abort-on-container-exit, a "docker way" would be to:
wrap the docker-compose up -d in a script
add to that script a docker-compose logs call, parse its output for errors, and exit loudly if any error message is found (see the sketch below).
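A minimal sketch of such a wrapper, assuming a 10-second grace period and a generic "error" pattern (tune both to your services' log format):
#!/bin/sh
# Hypothetical wrapper sketch: bring the stack up detached, wait a
# moment for services to start, then fail loudly on errors in the logs.
docker-compose up -d
sleep 10   # grace period -- an assumption, tune to your stack
if docker-compose logs | grep -i "error"; then
    echo "Errors found in docker-compose logs" >&2
    exit 1
fi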
Related
I run one of the open source microservices from here. When I run docker ps, all the containers' status is Up, meaning they keep running. My issue is that when I run a container separately, it does not keep running and exits. Below is one of the services defined in the docker-compose file.
social-graph-service:
  image: yg397/social-network-microservices
  hostname: social-graph-service
  restart: always
  entrypoint: SocialGraphService
When I run it using the command
sudo docker run -d --restart always --entrypoint SocialGraphService --hostname social-graph-service yg397/social-network-microservices
its status is not Up; it exits right after starting. Why do all the containers run continuously when I start them with sudo docker-compose up, but exit when I run them individually?
It looks like the graph service depends on MongoDB in order to run. My guess is it crashes when you run it individually because the mongo instance doesn't exist and it fails to connect.
The author of the repo wrote the docker-compose file to hide away some of the complexity from you, but that's a substantial tree of relationships between microservices, and most of them seem to depend on others existing in order to boot up.
-- Update --
The real issue is in the comments below. OP was already running the docker-compose stack while attempting to start another container, but forgot to connect the container to the docker network generated by docker-compose.
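For reference, a standalone container can join the network Compose created. A minimal sketch, assuming the compose project is named socialnetwork (check the actual network name with docker network ls):
# Compose names its default network <project>_default; "socialnetwork"
# is an assumption -- list the real name with: docker network ls
sudo docker run -d --restart always \
    --network socialnetwork_default \
    --entrypoint SocialGraphService \
    --hostname social-graph-service \
    yg397/social-network-microservices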
I'm new to Docker.
What is the difference between these?
docker run 'an image'
docker-compose run 'something'
docker-compose start 'docker-compose.yml'
docker-compose up 'docker-compose.yml'
Thanks in advance.
https://docs.docker.com/compose/faq/#whats-the-difference-between-up-run-and-start
What’s the difference between up, run, and start?
Typically, you want docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml. In the default “attached” mode, you see all the logs from all the containers. In “detached” mode (-d), Compose exits after starting the containers, but the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “adhoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
The docker-compose start command is useful only to restart containers that were previously created, but were stopped. It never creates new containers.
Also: https://docs.docker.com/compose/reference/
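As a rough illustration of the three commands (the web service name and the one-off command are assumptions, not from the docs):
# Start (or restart) every service defined in docker-compose.yml:
docker-compose up -d

# Run a one-off task in a new container for an assumed "web" service,
# starting only the services it depends on:
docker-compose run web ./run-tests.sh

# Restart containers that already exist but are stopped;
# start never creates new containers:
docker-compose stop
docker-compose start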
It appears that docker-compose replays captured output on container re-launch. This is unexpected, and is misleading about what my container is actually doing. Can this be disabled?
For instance,
I have a simple service that logs and exits w/ code 0.
In docker-compose.yml, I have restart: always set.
When running docker-compose up, each time the logging container comes back up after exiting, I see all previous output replayed, plus any new output from the current run.
Here's an easy-to-run example.
clone, cd <project>/fluentd, docker-compose build, & docker-compose up
I'm using docker-compose version 1.16.1, build 6d1ac21 on OSX.
Tips would be great!
This appears to be an open issue with Docker, where it's replaying logs on up. A workaround is mentioned here:
alias docker-logs-truncate="docker-machine ssh default -- 'sudo find /var/lib/docker/containers/ -iname \"*json.log\"|xargs -I{} sudo dd if=/dev/null of={}'"
Is this a lifecycle problem? There is a difference between a stop and an rm. If you do a docker-compose stop, the containers are stopped but kept around; docker-compose up restarts them from where they left off.
But docker-compose rm destroys the containers, and docker-compose up then recreates them from scratch.
Also: did you try removing restart: always for your container?
I am trying to create a Docker container using a Dockerfile, where script-entry.sh is to be executed when the container starts and script-exit.sh when the container stops.
ENTRYPOINT accomplished the first part of the problem: script-entry.sh runs on startup.
How will I make sure script-exit.sh is executed on docker exit/stop?
docker stop sends a SIGTERM signal to the main process running inside the Docker container (the entry script). So you need a way to catch the signal and then trigger the exit script.
See this link for an explanation of signal trapping and an example (near the end of the page).
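A minimal entrypoint sketch along those lines; the tail placeholder stands in for the container's real main process (an assumption):
#!/bin/sh
# Hypothetical entrypoint sketch: run the startup script, trap SIGTERM
# (what docker stop sends), and run the exit script before terminating.
sh script-entry.sh

cleanup() {
    sh script-exit.sh
    exit 0
}
trap cleanup TERM INT

# Placeholder main process (an assumption -- replace with whatever the
# container really runs); wait lets the shell receive the signal while
# the child keeps running.
tail -f /dev/null &
wait $!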
Create a script, and save it as a bash file, that contains the following:
CONTAINER_NAME="someNameHere"
docker exec -it "$CONTAINER_NAME" bash -c "sh script-exit.sh"
docker stop "$CONTAINER_NAME"
Run that file instead of running docker stop, and that should do the trick. You can set up an alias for it as well.
As for automating it inside of Docker itself, I've never seen it done before. Good luck figuring it out, if that's the road you want to take.
Up until recently, when one did docker-compose up for a bunch of containers and one of the started containers stopped, all of the containers were stopped. This is not the case anymore since https://github.com/docker/compose/issues/741, and this is really annoying for us: we use docker-compose to run Selenium tests, which means starting the application server, starting the Selenium hub + nodes, and starting the test driver, then exiting when the test driver stops.
Is there a way to get back old behaviour?
You can use:
docker-compose up --abort-on-container-exit
which will stop all containers if any one of them stops.
In your docker-compose file, set up your test driver container to depend on the other containers (with the depends_on parameter). Your docker-compose file should look like this:
services:
  application_server:
    ...
  selenium:
    ...
  test_driver:
    entrypoint: YOUR_TEST_COMMAND
    depends_on:
      - application_server
      - selenium
With dependencies expressed this way, run:
docker-compose run test_driver
and all the other containers will shut down when the test_driver container is finished.
This solution is an alternative to the docker-compose up --abort-on-container-exit answer. The latter will also shut down all other containers if any of them exits (not only the test driver). It depends on your use case which one is more adequate.
Did you try the workaround suggested in the link you provided?
Assuming your test script looked similar to this:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build
When the application tests ended, compose would exit and the test run would finish.
In this case, with the new docker-compose version, change your test container to have a default no-op command (something like echo, or true), and change your test script as follows:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build -d
$ docker-compose run tests test_command...
$ docker-compose stop
Using run allows you to get the exit status from the test run, and you only see the output of the tests (not all the dependencies).
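To make a wrapper script fail loudly on test failures, the exit status can be propagated like this (sketch, using the commands from the snippet above):
# docker-compose run returns the exit code of the command it ran,
# so capture it before tearing the stack down.
docker-compose run tests test_command
rc=$?
docker-compose stop
exit $rc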
Reference
If this is not acceptable, you could refer to Docker Remote API and watch for the stop event for the containers and act on it.
An example usage is this docker-gen tool written in golang which watches for container start events, to automatically regenerate configuration files.
I'm not sure this is the perfect answer to your problem, but maestro for Docker lets you manage multiple Docker containers as a single unit.
It should feel familiar, as you group them using a YAML file.