It appears that docker-compose replays previously captured output when a container is re-launched. This is contrary to what I expected, and it is misleading about what my container is actually doing. Can this be disabled?
For instance,
I have a simple service that logs and exits with code 0.
In docker-compose.yml, I have restart: always set.
When running docker-compose up, each time the logging container comes back up after exiting, I see all of its previous output logged again, plus any new output from the current run.
Here's an easy-to-run example:
clone the repo, cd <project>/fluentd, docker-compose build, and docker-compose up
I'm using docker-compose version 1.16.1, build 6d1ac21 on OSX.
Tips would be great!
This appears to be an open issue with Docker, where it's replaying logs on up. A workaround is mentioned here:
alias docker-logs-truncate="docker-machine ssh default -- 'sudo find /var/lib/docker/containers/ -iname \"*json.log\"|xargs -I{} sudo dd if=/dev/null of={}'"
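If you are not going through docker-machine, a roughly equivalent cleanup can be run directly on the Docker host. A sketch, assuming the default json-file logging driver and the default log location under /var/lib/docker/containers (requires root):

# Empty every container's JSON log file on the host
sudo find /var/lib/docker/containers/ -name '*-json.log' -exec truncate -s 0 {} +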
Is this a lifecycle problem? There is a difference between a stop and an rm. If you do a docker-compose stop, the containers are only suspended; docker-compose up restarts the same containers from where they left off.
But docker-compose rm will destroy the containers, and the next docker-compose up recreates them from scratch.
OK. Did you try removing restart: always for your container?
Related
In my .yml file I have defined restart: always. Is it possible to make this restart behave as the equivalent of the --force-recreate flag?
I have an issue with Xvfb: a standard restart doesn't solve it, but restarting with the --force-recreate flag does, and I'm looking for a way to do that automatically.
From the restart policy details in the Docker documentation (see the second bullet): "Always restart the container if it stops. If it is manually stopped, it is restarted only when the Docker daemon restarts or the container itself is manually restarted."
No, --force-recreate is not the equivalent of restart: always:
"--force-recreate   Recreate containers even if their configuration and image haven't changed."
I use a Makefile for start/stop, which is also more practical.
Example:
SHELL := /bin/bash

# Docker: up
up:
	docker-compose up -d --force-recreate --build

# Docker: down
down:
	docker-compose down
... and so on
And then I can use make up, make down, make logs, make attach, and so on.
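The extra targets mentioned there could look like this (illustrative only; "app" stands in for whatever service you want a shell in):

# Docker: logs
logs:
	docker-compose logs -f

# Docker: attach
attach:
	docker-compose exec app /bin/bash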
By the way, in most projects I also use Supervisor for automatic restarts and better logging.
I run one of the open-source microservices from here. When I run docker ps, the status of all the containers is Up, meaning they keep running. My issue is that when I run a container separately, it does not keep running and exits. Below is one of the services defined in the docker-compose file.
social-graph-service:
  image: yg397/social-network-microservices
  hostname: social-graph-service
  restart: always
  entrypoint: SocialGraphService
When I run it using the command
sudo docker run -d --restart always --entrypoint SocialGraphService --hostname social-graph-service yg397/social-network-microservices
its status is not Up; it exits after running. Why do all the containers run continuously when I start them with sudo docker-compose up, but exit when I run them individually?
It looks like the graph service depends on MongoDB in order to run. My guess is it crashes when you run it individually because the mongo instance doesn't exist and it fails to connect.
The author of the repo wrote the docker-compose file to hide away some of the complexity from you, but that's a substantial tree of relationships between microservices, and most of them seem to depend on others existing in order to boot up.
-- Update --
The real issue is in the comments below: the OP was already running the docker-compose stack while attempting to start another container, but forgot to connect that container to the Docker network generated by docker-compose.
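For reference, the fix amounts to attaching the standalone container to the network that docker-compose created. A sketch, assuming the default network name <project>_default ("socialnetwork" is a made-up project name here):

# Find the network docker-compose created for the stack
docker network ls

# Run the standalone container on that network so it can reach mongodb etc.
sudo docker run -d --restart always \
  --network socialnetwork_default \
  --entrypoint SocialGraphService \
  --hostname social-graph-service \
  yg397/social-network-microservices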
Using docker-compose up -d, when one of my containers fails to start (i.e. its command exits with an error code), it fails quietly. How can I make it fail loudly?
(Am I thinking about this the right way? My ultimate goal is using Docker for my development environment. I'd like to be able to spin up my environment and be informed of errors right away. I'd like to stick to Docker's true path as much as possible, and am hesitant to depend on additional tools like screen/tmux.)
Since you are running it detached (-d), docker-compose only spawns the containers and exits, without monitoring for any issues. If you run the containers in the foreground with:
docker-compose up --abort-on-container-exit
That should give you a pretty clear error on any container problems. Otherwise, I'd recommend looking into some of the other more advanced schedulers that monitor the running containers to recover from failures (e.g. Universal Control Plane or Kubernetes).
Update: If you want to script something outside of the docker-compose up -d, you can do a
docker events -f "container=${compose_prefix}_" -f "event=die"
and if anything gets output there, you had a container go down. There's also docker-compose events | grep "container die".
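A minimal watcher along those lines, assuming the compose project is named myproject (so its containers are prefixed myproject_):

# Stream 'die' events and surface only containers from this project
docker events --filter 'event=die' \
  --format '{{.Actor.Attributes.name}} exited with code {{.Actor.Attributes.exitCode}}' \
  | grep --line-buffered '^myproject_'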
As shown in compose/cli/main.py#L66-L68, docker-compose is supposed to fail on (Dockerfile) build:
except BuildError as e:
    log.error("Service '%s' failed to build: %s" % (e.service.name, e.reason))
    sys.exit(1)
Since -d (detached mode, which runs containers in the background) is incompatible with --abort-on-container-exit, a "docker way" would be to:
wrap the docker-compose up -d in a script
add to that script a docker-compose logs step, parsing the output for errors and exiting loudly if any error message is found
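A minimal sketch of such a wrapper, checking container state rather than parsing log text (the sleep and the checks are placeholder assumptions, not a recommendation):

#!/usr/bin/env bash
set -e

docker-compose up -d

# Give the containers a moment, then look for any that already exited non-zero
sleep 5
failed=$(docker-compose ps -q \
  | xargs docker inspect --format '{{.Name}} status={{.State.Status}} exit={{.State.ExitCode}}' \
  | grep 'status=exited' \
  | grep -v 'exit=0$' || true)

if [ -n "$failed" ]; then
  echo "Some containers failed to start:" >&2
  echo "$failed" >&2
  docker-compose logs
  exit 1
fi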
(Disclaimer: I'm a docker noob)
Each time I run sudo docker-compose up, the image name becomes a little longer. It looks like the image hash (or something like it) is being stuck onto the front each time:
1 a#ubuntu:~/projects/p⟫ sudo docker-compose up
Recreating 32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_p_postgres_1
...
1 a#ubuntu:~/projects/p⟫ sudo docker-compose up
Recreating 32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_p_postgres_1
...
1 a#ubuntu:~/projects/p⟫ sudo docker-compose up
Recreating 32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_32ba9196a0a9_p_postgres_1
As you might imagine, this is ... really irritating. How can I prevent this?
I'm still pretty new to Docker, so I'm not sure how to start debugging this. My more knowledgeable coworker is away for the next few days.
Other potentially useful information:
docker --version: 1.10.3, build 20f81dd
Ubuntu 14.04
sudo docker-compose build completes successfully
jazgot's suggestion of running with -p did the trick:
130 a#ubuntu:~/projects/p⟫ sudo docker-compose -p testtest up
Starting testtest_postgres_1
I'd prefer to mark jazgot's comment as the answer, but I don't see a way to do that (or to upvote his comment).
I wish I knew why this was happening in the first place, though. jazgot suggested that it was because my project name p was too short, but I just used p for anonymity; the actual name is 6 chars long, which seems long enough.
I had the same problem; -p did not help me, but a restart of the Docker service did:
service docker restart
Up until recently, when you ran docker-compose up for a bunch of containers and one of the started containers stopped, all of the containers were stopped. This is no longer the case since https://github.com/docker/compose/issues/741, and it is really annoying for us: we use docker-compose to run Selenium tests, which means starting the application server, starting the Selenium hub + nodes, starting the test driver, then exiting when the test driver stops.
Is there a way to get the old behaviour back?
You can use:
docker-compose up --abort-on-container-exit
which will stop all the containers if any one of them stops.
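Note that newer docker-compose releases also have --exit-code-from (check docker-compose up --help for your version); it implies --abort-on-container-exit and makes up return the exit code of the chosen service, which is handy for a test driver (the service name below is just an example):

docker-compose up --exit-code-from test_driver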
In your docker-compose file, set up your test driver container to depend on the other containers (with the depends_on parameter). Your docker-compose file should look like this:
services:
  application_server:
    ...
  selenium:
    ...
  test_driver:
    entrypoint: YOUR_TEST_COMMAND
    depends_on:
      - application_server
      - selenium
With dependencies expressed this way, run:
docker-compose run test_driver
and all the other containers will shut down when the test_driver container is finished.
This solution is an alternative to the docker-compose up --abort-on-container-exit answer. The latter will also shut down all the other containers if any of them exits (not only the test driver); which one is more appropriate depends on your use case.
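For completeness, a sketch of how this might be wired into a test script so the driver's exit status propagates (service names taken from the example above):

# run the driver (depends_on starts application_server and selenium first)
docker-compose run --rm test_driver
status=$?

# tear everything down, then report the driver's result
docker-compose down
exit $status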
Did you try the workaround suggested in the link you provided?
Assuming your test script looked similar to this:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build
When the application tests end, compose would exit and the tests finish.
In this case, with the new docker-compose version, change your test container to have a default no-op command (something like echo or true), and change your test script as follows:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build -d
$ docker-compose run tests test_command...
$ docker-compose stop
Using run allows you to get the exit status from the test run, and you only see the output of the tests (not all the dependencies).
Reference
If this is not acceptable, you could use the Docker Remote API, watch for the stop event on the containers, and act on it.
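A rough sketch of watching the events stream over the local Unix socket (this needs curl 7.40+ for --unix-socket; the query string is the URL-encoded filter {"event":["stop"]}):

curl --silent --no-buffer --unix-socket /var/run/docker.sock \
  'http://localhost/events?filters=%7B%22event%22%3A%5B%22stop%22%5D%7D'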
An example is the docker-gen tool, written in Go, which watches for container start events to automatically regenerate configuration files.
I'm not sure this is the perfect answer to your problem, but maestro for Docker lets you manage multiple Docker containers as a single unit.
It should feel familiar, as you group them using a YAML file.