Docker-compose --exit-code-from is ignored - docker

Suppose I have multiple containers deployed:
init
service1
service2
db
web
test
The init container runs to completion and then shuts down by itself. That is its job: do some
pre-configuration work, then exit.
When running locally, I don't have any issues running this in my desktop work environment.
My issue is when it is deployed in my CI pipeline. When my init container finishes up... it shuts down the whole
docker-compose network.
Even if I explicitly set --exit-code-from to my test container:
docker-compose up --exit-code-from test
The end result is that I am not able to run my test cases to completion, because everything is shut down when the init container exits.
Does anybody have hints on what I can do?

This is interesting. Is it possible to include the compose file? Maybe you have a depends_on defined, and the version of docker used by your CI pipeline handles it differently from the one on your dev environment.
At any rate, you'd want to stop using --exit-code-from, since it implies --abort-on-container-exit.
From https://docs.docker.com/compose/reference/up/:
--abort-on-container-exit Stops all containers if any container was
stopped. Incompatible with -d.
--exit-code-from SERVICE Return the exit code of the selected service
container. Implies --abort-on-container-exit.

I ran into the same issue when trying to run Cypress together with a MongoDB seeding container and a replica set starter container. The two mongo-related containers would exit quickly after doing their job, thus triggering the unintuitive --abort-on-container-exit implied by --exit-code-from cypress.
For me the simplest solution was to use the tail -f /dev/null hack. The idea is that if you run this command after the containers that you don't want to exit have finished their actual jobs, they will hang until another container triggers --abort-on-container-exit and pulls the entire docker-compose setup down with it.
Note that this is not a universal answer: the downside of this approach is that you have to find out what the original CMD is in containers that you don't have control over.
For example, let's take the mongo-seeding project and their Dockerfile. In order to keep the container alive after doing its job, I make my own Dockerfile in which I pull that image and define a custom ENTRYPOINT that first runs the CMD from the original definition of the mongo-seeding image and then runs tail -f /dev/null to keep the container alive. In their Dockerfile I can see that the CMD is simply seed, and I can assume it won't change in the future (good design), so my ENTRYPOINT script entry.sh can just look like this:
#!/bin/sh
seed
tail -f /dev/null
And my Dockerfile:
FROM pkosiec/mongo-seeding:3.6.0
ENTRYPOINT [ "/app/scripts/entry.sh" ]
Plus the relevant service in docker-compose with volumes mount for completeness:
mongo-seed:
  build:
    context: ./mongoSeed
  volumes:
    - ./mongoSeed/data:/app/data
    - ./mongoSeed/scripts:/app/scripts
  working_dir: /app/data
  depends_on:
    - mongodb
This makes the container do its job and then hang until Cypress exits, which causes the entire docker-compose setup to stop.

Related

How to avoid service dependencies from being stopped in Docker Compose?

Given the following Docker Compose file....
version: '3.8'
services:
  producer:
    image: producer
    container_name: producer
    depends_on: [db]
    build:
      context: ./producer
      dockerfile: ./Dockerfile
  db:
    image: some-db-image
    container_name: db
When I do docker-compose up producer, the db service obviously gets started too. When I press CTRL+C, both services are stopped. This is expected and fine.
But sometimes the db service was started earlier, in a different shell, so docker-compose up producer sees that db is already running and only starts producer. Yet when I hit CTRL+C, both producer and db are stopped, even though db was not started as part of this docker-compose up command.
Is there a way to avoid having the dependency services stopped when stopping their "parent"?
When running just docker-compose up, the CTRL+C command always stops all running services in the current compose scope. It doesn't care about depends_on.
You would need to spin it up with the detach option -d, like:
docker-compose up -d producer
Then you can do
docker stop producer
And db service should still be running.
As I understand your question: you want to stop a container A which depends on another container B, but when stopping A, you don't want docker-compose to stop B.
docker-compose stops the dependency containers ('B' in this case) when 'A' is stopped.
How I would approach this:
Split the docker-compose files into A and B.
In the docker-compose file for A, create a health check that tests (and waits) for container B to be alive.
Since this is a database, you could do this with a dummy query (see the sketch below).
Then you still have the dependency, but not the docker-compose connection of stopping dependent containers.
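As a rough sketch (it assumes B is a Postgres-style database reachable at host db, that A's image ships a client able to issue the dummy query, and that both projects share a network; adjust to your stack):
# compose file for A only
services:
  producer:
    image: producer
    healthcheck:
      test: ["CMD-SHELL", "psql -h db -U postgres -c 'SELECT 1' || exit 1"]
      interval: 5s
      retries: 12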
You can't simply do that with CTRL+C.
Your docker-compose file and the services defined in it are treated as a project. You may notice that all containers, networks and volumes are prefixed with the name of the directory where the docker-compose file is located by default. This is the project name. It can be changed via an environment variable or the -p flag of the docker-compose command.
What docker-compose does is it keeps track of all the resources for a given project.
In your case there are two services: db and producer. Whenever you run docker-compose up, both of them start up. They both end up being part of the same project. The same applies when you only start one of the services (e.g. with docker-compose up db). You can later start the other service and it will still be part of the same project.
One more thing to note here: Whenever you run docker-compose without the -d (detached) flag, you get attached to the whole project, meaning whenever you hit CTRL+C, you'll stop all services. It does not matter if the last compose command started only one of the services or if they depend on each other. Attaching to the project and hitting CTRL+C will stop them.
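For illustration, assuming the compose file above sits in a directory called myproject (a hypothetical name):
docker-compose ps                              # lists resources of project "myproject"
docker-compose -p other up -d                  # same file, tracked as project "other"
COMPOSE_PROJECT_NAME=other docker-compose ps   # environment-variable equivalent of -p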
A possible solution to your problem would be the following:
Start up your services via docker-compose up -d (both db and producer will get created). They are now in detached mode. If you still want to check the logs in real time (kinda like attaching), use docker-compose logs -f. Now, however, if you want to stop only one of the services you can simply do docker-compose stop $SVC_NAME (where $SVC_NAME is either db or producer) and this will keep the other one running. This way, whatever happens to your terminal session, your services won't stop, unless you explicitly tell them to.
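In command form, that approach is:
docker-compose up -d              # db and producer start in detached mode
docker-compose logs -f            # optional: follow logs; CTRL+C only stops the log stream
docker-compose stop producer      # stops producer only; db keeps running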
Is there a way to avoid getting the dependencies services stopped when stopping its "parent" ?
Yes.
Using the new docker compose command instead of docker-compose might solve your problem (Reference).
Simple example
Assuming now you are using the new version, your process could be something like this.
docker-compose.yml
version: "3.8"
services:
db:
build: .
producer:
build: .
depends_on: [db]
extra:
build: .
Dockerfile
FROM node:alpine
WORKDIR /app
COPY . .
ENTRYPOINT [ "/bin/sh", "script.sh" ]
script.sh
while :; do sleep 1; done
Suppose db has started before with:
$ docker compose up -d db
Then later:
$ docker compose up -d producer
Now you can stop only producer with:
$ docker compose stop producer
You can check if db is still running with:
$ docker compose ps
Notice the use of -d flag for detached mode, as pointed out in another answer, so you don't need to kill the process with CTRL+C. Also, using detached flag allows you to check the services that are running with docker compose ps.
A similar issue to yours was reported and fixed a while ago, as you can see here.
I was not able to reproduce the behavior you observe with a complete minimal example. Namely, when running docker compose stop producer, the underlying db is not stopped AFAICT.
Anyway, you may be interested in an alternative command that is a bit more flexible than docker compose up, regarding how to run "one-off commands": docker compose run.
The typical use cases are as follows:
docker compose run db bash → run the db service, replacing the default CMD with bash
docker compose run -d db → run the db service in the background (detach mode)
docker compose run --service-ports producer → run the service producer and its dependencies (unless they were run with docker compose up), enabling the ports mapping.
So for your specific use case, you could run:
docker compose up -d db
docker compose run --service-ports producer

Docker compose - run shell and application inside shell

I'm using docker compose for running my application in a dev environment.
version: '3.4'
services:
  web:
    build:
      context: .
      target: base
    ports:
      - "5000:5000"
    stdin_open: true
    tty: true
    volumes:
      - ./src:/src
    command: node src/main/server/index.js
Compose starts the container and I can see log output from the node application. When I press CTRL-C, the container is stopped and my application is stopped as well.
I would like to have only my application stopped when I press CTRL-C, not the whole container.
This is the same behavior as running an app within Windows CMD or a Linux shell. For example, to restart the app: press CTRL-C, repeat the startup command (node src/main/server/index.js, by pressing the up arrow key), and press enter.
I was thinking I could use something like this, but it does not work:
command: bash -c "node src/main/server/index.js"
I know I can use the commands below to achieve the expected behavior:
docker-compose up -d (to start in detached mode)
docker-compose exec web bash (run interactive shell)
node src/main/server/index.js (start node manually)
But maybe there is a way to start an interactive bash and run the application in it using the single command docker-compose up?
Docker runs a main process in its containers; as such, stopping the main process will also stop the container.
I will attempt to answer your question, but I don't think that you should work like that in a Dev environment.
Answering your question, you can "trap" the container in a main process, then just bash into the container and perform the app start.
In order to trap the container, just change the docker-compose command to:
command: sh -c "while true; do sleep 1; done"
To get into an interactive bash in the container:
docker exec -it <CONTAINER-ID> bash
And then you can start or stop the node app.
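Put together, a typical dev loop under this approach looks roughly like this (<CONTAINER-ID> is whatever docker ps reports for the web service):
docker-compose up -d                  # web stays up, idling in the sleep loop
docker exec -it <CONTAINER-ID> bash   # open an interactive shell in it
node src/main/server/index.js         # start the app; CTRL-C stops only the app, not the container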
It seems that the problem you are facing is a container taking a long time to start; you should probably reorder your Dockerfile to prevent it from re-downloading all dependencies (or repeating another long process) every time a file changes.
You should place your COPY command after all commands that should persist across builds, and take advantage of Docker's image layering.
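As a rough sketch of that ordering for a Node app (assuming npm and a package.json; adjust for your package manager and paths):
FROM node:alpine
WORKDIR /app
# dependency manifests first, so this layer stays cached until they change
COPY package.json package-lock.json ./
RUN npm install
# source changes only invalidate the layers from here on
COPY . .
CMD ["node", "src/main/server/index.js"]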
If you need a "hot reload" feature, you can research Webpack hot reloading.
You would need to bind your host volume to the container's work directory in order to let webpack properly watch the files and reload the app.

docker-compose execute command in sibling container

I am building an end to end test suite around a number of services. Some of these services aren't really services. They are actually procedural scripts which are run in sequence. These are executed at the command line and accept arguments, as you would expect a script to do.
We have docker images for these scripts/apps. I have compiled them into a docker-compose file. They are defined there as services which are sibling to the end to end test suite itself. So, for example:
docker-compose.yml
version: '3.4'
services:
  script:
    build: https://${GITHUB_ACCESS}:#github.com/company/script.git
    image: script:e2e
  e2e_tests:
    build: .
    image: e2e:e2e
Now, the e2e service needs to execute the script. Since the script isn't a service, I can't make a simple api call. How would I pass a command into the script container in order to execute it, from the e2e_tests container?
Problem
You want to call a command (let's say echo 1) which is located inside your script container (S1, derived from the image script:e2e) from your testing container (T1, derived from the image e2e:e2e).
Solution
You could use the possibility to expose the Docker socket to a container.
Expose the Docker socket to container T1 (which should run the tests):
docker run -it --name T1 --volume /var/run/docker.sock:/var/run/docker.sock e2e:e2e
Now from within the container T1 you are able to start other containers. This can be used to also start the script container S1 and execute a command:
docker run --name S1 script:e2e echo 1
1
The output of the command (here echo 1) will be piped to T1, so you can directly parse/use it.
How to transfer this to docker-compose.yml?
version: '3.4'
services:
  script:
    build: https://${GITHUB_ACCESS}:#github.com/company/script.git
    image: script:e2e
  e2e_tests:
    build: .
    image: e2e:e2e
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
Where to put the actual execution of the test (which in turn will have to execute docker run ... echo 1) depends on your specific use case. You could:
execute this directly from within the CMD of e2e
put it into a script, which is executed by the CMD of e2e (see the sketch below)
specify the entrypoint using the docker-compose.yml for e2e
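For instance, the second option could look roughly like this (a sketch; run-e2e.sh is a hypothetical name, and it assumes the Docker CLI is installed in the e2e image and the socket is mounted as above):
#!/bin/sh
# run-e2e.sh - used as the CMD of the e2e_tests image
RESULT=$(docker run --rm script:e2e echo 1)   # run the sibling script container
echo "script output: $RESULT"
# ...invoke the actual test suite here, using $RESULT as needed...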
Security
Be aware that the Docker socket is highly privileged (it is effectively root), so exposing it can have security implications. It is on the same level as executing your tests on a system with password-less sudo: the tests won't be executed with privileged permissions, but an attacker who is able to modify your tests could use it to gain privileged access. This might be ok, depending on your threat model.
For understanding the threat, see:
stackoverflow.com - Access Docker socket within container
Don't expose the Docker socket (not even to a container)
docker.com - Docker daemon attack surface

Spotify docker-gc: prevent auto stop after first run

I tried to use docker-gc to automatically collect unused docker images and containers. I use this config in docker-compose to run it:
gc:
  container_name: docker-gc
  build: ./docker/docker-gc
  dockerfile: Dockerfile
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /etc:/etc
When I first run it, all unused docker images and containers are removed automatically. But after that, the container exits. I want this container to keep running and check periodically.
Just another approach:
Use cron to run docker-compose periodically.
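For example, a crontab entry along these lines (the /opt/docker-gc path is hypothetical) would re-run the gc service every hour:
0 * * * * cd /opt/docker-gc && docker-compose up gc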
About the exiting container:
Containers run a script/service until it is finished or killed.
For example, library/nginx runs the nginx service. The container will be in the running state until the nginx service is stopped/killed. Then it will show up as exited with the appropriate exit code.
The entrypoint/cmd directives specify what script/service the container will execute when run without a user override.

How does one close a dependent container with docker-compose?

I have two containers that are spun up using docker-compose:
web:
  image: personal/webserver
  depends_on:
    - database
  entrypoint: /usr/bin/runmytests.sh
database:
  image: personal/database
In this example, runmytests.sh is a script that runs for a few seconds, then returns with either a zero or non-zero exit code.
When I run this setup with docker-compose, web_1 runs the script and exits. database_1 remains open, because the process running the database is still running.
I'd like to trigger a graceful exit on database_1 when web_1's tasks have been completed.
You can pass the --abort-on-container-exit flag to docker-compose up to have the other containers stop when one exits.
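With the compose file above, that would be the following; per the docs quoted earlier, --exit-code-from web implies the same flag and additionally propagates web's exit code to the shell:
docker-compose up --abort-on-container-exit
# or, to also get web's exit status back:
docker-compose up --exit-code-from web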
What you're describing is called a Pod in Kubernetes or a Task in AWS. It's a grouping of containers that form a unit. Docker doesn't have that notion currently (Swarm mode has "tasks" which come close but they only support one container per task at this point).
There is a hacky workaround besides scripting it as @BMitch described. You could mount the Docker daemon socket from the host. E.g.:
web:
  image: personal/webserver
  depends_on:
    - database
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  entrypoint: /usr/bin/runmytests.sh
and add the Docker client to your personal/webserver image. That would allow your runmytests.sh script to use the Docker CLI to shut down the database first. Eg: docker kill database.
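The end of runmytests.sh could then look roughly like this (run_the_tests is a placeholder for whatever the script actually does):
#!/bin/sh
run_the_tests          # placeholder for the actual test commands
STATUS=$?
docker kill database   # stop the sibling database container by name
exit $STATUS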
Edit:
Third option. If you want to stop all containers when one fails, you can use the --abort-on-container-exit option to docker-compose as @dnephin mentions in another answer.
I don't believe docker-compose supports this use case. However, making a simple shell script would easily resolve this:
#!/bin/sh
docker run -d --name=database personal/database
docker run --rm -it --entrypoint=/usr/bin/runmytests.sh personal/webserver
docker stop database
docker rm database
