I have services in my docker-compose.yml configuration that I would occasionally use, such as for end-to-end testing, linting, or some one-off service.
Something along the lines of this:
app:
  ...
e2e-or-linter-or-one-off:
  ...
  deploy:
    replicas: 0
With replicas set to 0, docker-compose up would not spin up the e2e-or-linter-or-one-off service when I just want to run my regular app container(s).
And when I would need that e2e-or-linter-or-one-off service, I want to do something like this:
docker-compose run e2e-or-linter-or-one-off bash
Is there a way to define a service that doesn't spin up on docker-compose up but is still able to be used with docker-compose run?
docker-compose up has a --scale flag that I can use if I wanted to spin everything up, such as:
docker-compose up --scale "e2e-or-linter-or-one-off"=1 e2e-or-linter-or-one-off
But docker-compose run doesn't have a similar flag, and I need docker-compose run so I can run the container interactively. Without it, this:
docker-compose run e2e bash
won't work and Docker returns: no containers to start
Thank you for your help 🙏
This article shows a way to use an environment variable for the replica count, allowing you to change the value at invocation-time:
app:
  ...
e2e-or-linter-or-one-off:
  ...
  deploy:
    replicas: ${E2E_REPLICAS:-0}
I modified the example a bit so you don't need to have the env var set 100% of the time. The :- in the variable expansion is an operator that says "use the default value to the right if the name to the left is unset or empty".
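For illustration, here is the same :- expansion in a plain shell (Compose uses identical substitution syntax):
$ unset E2E_REPLICAS
$ echo "${E2E_REPLICAS:-0}"
0
$ E2E_REPLICAS=1
$ echo "${E2E_REPLICAS:-0}"
1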
Now running docker-compose up should run every service with 1+ replicas defined, but invoking E2E_REPLICAS=1 docker-compose run --rm e2e-or-linter-or-one-off bash will set the env variable, overriding the default value of 0, create the container and service, and run bash. When you're done with your shell session, the --rm flag will tear down the container so the environment returns to its normal operational state.
Related
Given the following Docker Compose file....
version: '3.8'
services:
  producer:
    image: producer
    container_name: producer
    depends_on: [db]
    build:
      context: ./producer
      dockerfile: ./Dockerfile
  db:
    image: some-db-image
    container_name: db
When I do docker-compose up producer obviously the db service gets started too. When I CTRL+C both services are stopped. This is expected and fine.
But sometimes the db service has already been started beforehand, in a different shell, so docker-compose up producer sees that db is running and only starts producer. But when I hit CTRL+C, both producer and db are stopped, even though db was not started as part of this docker-compose up command.
Is there a way to avoid stopping the dependency services when stopping their "parent"?
When running just docker-compose up, the CTRL+C command always stops all running services in the current compose scope. It doesn't care about depends_on.
You would need to spin it up with the detach option -d, like
docker-compose up -d producer
Then you can do
docker stop producer
And db service should still be running.
As I understand your question: You want to stop a container A which depends on another container B. But when stopping A, you don't want docker-compose to stop B.
Docker-compose stops the dependency containers ('B' in this case) when 'A' is stopped.
How I would approach this:
Split up the docker-compose files into A and B
In docker-compose for A create a health check testing (and waiting) for container B to be alive.
Since this is a database, you could do this with a dummy query.
Then you still have the dependency, but not the docker-compose behaviour of stopping dependent containers together.
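A rough sketch of that idea, assuming the database is Postgres, that the pg_isready client is available inside A's image, and that both compose projects share a network on which the hostname db resolves (the image and service names here are placeholders, not from your setup):
# docker-compose.a.yml -- compose file for A only (sketch)
services:
  a:
    image: my-app-image            # placeholder
    healthcheck:
      # "dummy query" style check: wait until the db answers
      test: ["CMD-SHELL", "pg_isready -h db -p 5432"]
      interval: 5s
      timeout: 3s
      retries: 10
Since the db service lives in its own compose file, stopping A's project never touches it.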
You can't simply do that with CTRL+C.
Your docker-compose file and the services defined in it are treated as a project. You may notice that all containers, networks and volumes are prefixed with the name of the directory where the docker-compose file is located by default. This is the project name. It can be changed via an environment variable or the -p flag of the docker-compose command.
What docker-compose does is it keeps track of all the resources for a given project.
In your case there are two services: db and producer. Whenever you run docker-compose up, both of them start up. They both end up being part of the same project. The same applies when you only start one of the services (e.g. with docker-compose up db). You can later start the other service and it will still be part of the same project.
One more thing to note here: Whenever you run docker-compose without the -d (detached) flag, you get attached to the whole project, meaning whenever you hit CTRL+C, you'll stop all services. It does not matter if the last compose command started only one of the services or if they depend on each other. Attaching to the project and hitting CTRL+C will stop them.
A possible solution to your problem would be the following:
Start up your services via docker-compose up -d (both db and producer will get created). They are now in detached mode. If you still want to check the logs in real time (kinda like attaching), use docker-compose logs -f. Now, however, if you want to stop only one of the services you can simply do docker-compose stop $SVC_NAME (where $SVC_NAME is either db or producer) and this will keep the other one running. This way, whatever happens to your terminal session, your services won't stop, unless you explicitly tell them to.
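A quick sketch of that workflow:
docker-compose up -d             # db and producer both start, detached
docker-compose logs -f           # optional: follow logs; CTRL+C here only stops the log stream
docker-compose stop producer     # stops only producer; db keeps running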
Is there a way to avoid stopping the dependency services when stopping their "parent"?
Yes.
Using the new docker compose command instead of docker-compose might solve your problem (Reference).
Simple example
Assuming now you are using the new version, your process could be something like this.
docker-compose.yml
version: "3.8"
services:
db:
build: .
producer:
build: .
depends_on: [db]
extra:
build: .
Dockerfile
FROM node:alpine
WORKDIR /app
COPY . .
ENTRYPOINT [ "/bin/sh", "script.sh" ]
script.sh
while :; do sleep 1; done
Suppose db has started before with
$ docker compose up -d db
Then later,
$ docker compose up -d producer
Now you can stop only producer with
$ docker compose stop producer
You can check that db is still running with
$ docker compose ps
Notice the use of the -d flag for detached mode, as pointed out in another answer, so you don't need to kill the process with CTRL+C. Also, using the detached flag allows you to check the services that are running with docker compose ps.
A similar issue as yours was reported and fixed a while ago, as you can see here.
I was not able to reproduce the behavior you observe with a complete minimal example. Namely, when running docker compose stop producer, the underlying db is not stopped AFAICT.
Anyway, you may be interested in an alternative command that is a bit more flexible than docker compose up, regarding how to run "one-off commands": docker compose run.
The typical use cases are as follows:
docker compose run db bash → run the db service, replacing the default CMD with bash
docker compose run -d db → run the db service in the background (detach mode)
docker compose run --service-ports producer → run the service producer and its dependencies (unless they were run with docker compose up), enabling the ports mapping.
So for your specific use case, you could run:
docker compose up -d db
docker compose run --service-ports producer
I have a docker-compose file that exposes 2 services, a master service and a slave service. I want to be able to scale the slave service to some number of instances using
docker-compose up --scale slave=N
However, one of the options I must specify in the command run by the master service is the number of slave instances to expect. E.g. if I scale slave=10, I need to set --num-slaves=10 in the command on the master service.
Is there a way to determine the number of instances of a given service either from the docker-compose file itself, or from a customized entrypoint shellscript?
The problem I'm facing is that since there is no way I've yet found to specify the number of scaled instances from within the docker-compose file format itself, I'm relying on the person running the command to enter the scale factor consistently and to have that value align with the value I need to tell the master node to expect. And trusting users to do the right thing is a recipe for disaster. If I could continue to let the user specify the scale value on the command line, I need a way to determine what that value is at runtime.
The scale option is no longer available from Compose file version 3, but you may use replicas:
version: "3.7"
services:
redis:
image: redis:latest
deploy:
replicas: 1
and run it using:
docker-compose --compatibility up -d
docker-compose 1.20.0 introduces a new --compatibility flag designed
to help developers transition to version 3 more easily. When enabled,
docker-compose reads the deploy section of each service’s definition
and attempts to translate it into the equivalent version 2 parameter.
Currently, the following deploy keys are translated:
resources limits and memory reservations
replicas
restart_policy condition and max_attempts
but:
Do not use this in production!
We recommend against using --compatibility mode in production. Because
the resulting configuration is only an approximate using non-Swarm
mode properties, it may produce unexpected results.
see this
PS:
Docker container names must be unique: you cannot scale a service beyond 1 container if you have specified a custom container name. Attempting to do so results in an error.
Unfortunately there is no way to define replicas for docker-compose; it only works for Docker Swarm. The documentation specifies it (link):
Tip: Alternatively, in Compose file version 3.x, you can specify replicas under the deploy key as part of a service configuration for Swarm mode. The deploy key and its sub-options (including replicas) only works with the docker stack deploy command, not docker-compose up or docker-compose run.
So if you have the deploy section in the yaml, but run it with docker-compose, then it will not take any effect.
version: "3.3"
services:
alpine1:
image: alpine
container_name: alpine1
command: ["/bin/sleep", "10000"]
deploy:
replicas: 4
alpine2:
image: alpine
container_name: alpine2
command: ["/bin/sleep", "10000"]
deploy:
replicas: 2
So the only way to scale up in docker compose is by running the scale command manually.
docker-compose scale alpine1=3
Note: I had a job where they loved docker-compose, so we had bash scripts to perform operations such as the ones you describe. For example, we would have something like ./controller-app.sh scale test_service=10 and it would run docker-compose scale test_service=10.
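Such a wrapper can be very small; a hypothetical sketch of what that controller-app.sh might have looked like:
#!/bin/bash
# controller-app.sh (sketch): thin wrapper around docker-compose operations
case "$1" in
  scale)
    shift
    docker-compose scale "$@"    # e.g. ./controller-app.sh scale test_service=10
    ;;
  *)
    echo "usage: $0 scale SERVICE=N" >&2
    exit 1
    ;;
esac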
UPDATE
To check the number of replicas you can mount the docker socket into your container. Then run docker ps --format '{{ .Names }}' | grep $YOUR_CONTAINER_NAME.
Here is how you would mount the socket.
docker run -v /var/run/docker.sock:/var/run/docker.sock -it alpine sh
Install docker
apk update
apk add docker
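Putting that together with the original question, a hypothetical compose sketch (the image names are placeholders, myservice and --num-slaves come from the question; it assumes the docker CLI is installed in the master image and that all slave containers are already up when master starts):
services:
  slave:
    image: myorg/slave             # placeholder image
  master:
    image: myorg/master            # placeholder image, must contain the docker CLI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    entrypoint:
      - /bin/sh
      - -c
      # $$ stops compose from interpolating; the shell inside the container sees a single $
      - |
        NUM=$$(docker ps --format '{{.Names}}' | grep -c slave)
        exec myservice --num-slaves="$$NUM"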
I have a docker-compose.yml set up like this:
app:
  build:
    dockerfile: ./docker/app/Dockerfile.dev
  image: test/test:${ENV}-test-app
  ...
The Dockerfile called here has this line present:
...
RUN ln -s ../overrides/${ENV}/plugins ../plugins
...
And there is also a script I am running to get the whole environment up (it is dependent upon several containers, so I tried to omit irrelevant info).
It is a bash script running the following:
ENV=$1 docker-compose -p $1 up -d --force-recreate --build app
What I wanted to achieve is that I can run two app containers at the same time, and this works as follows:
sh initializer.sh foo -> creates foo-test-app container
sh initializer.sh bar -> creates bar-test-app container
Now the issue I'm having is that even with the --force-recreate flag present, the two images created are actually seen as the same image with two different tags.
And what this does when I inspect the containers is that both containers have a symbolic link to:
overrides/foo/plugins
It doesn't notice when I create the new container to re-do that part. How can I fix it?
Also if I sh to one container and change the symbolic link, it is automatically changed in the other container as well.
$ENV in your Dockerfile is not the same as the one in your compose file.
When you run docker-compose up, it can be roughly seen as a docker build followed by a docker run. So Docker builds the image, layer by layer; at that stage there is no env var called ENV. Only at docker run will $ENV be used.
Environment variables can be used at the build stage though; they are passed via ARG:
# compose.yml
build:
  context: frontend
  args:
    - BUILD_ENV=${BUILD_ENV}

# Dockerfile
ARG BUILD_ENV
RUN ./node_modules/.bin/ng build --$BUILD_ENV
You can do this to solve your problem; however, this will create one image per project, which you may not want. Alternatively, you can do it in an entrypoint script, as sketched below.
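A hypothetical sketch of that entrypoint alternative, using the same symlink as in your Dockerfile (the script name entrypoint.sh is made up; you would drop the RUN ln -s line from the image and let the container create the link at start-up instead):
#!/bin/sh
# entrypoint.sh (sketch): resolve the symlink at run time, so each project
# (foo, bar) gets its own link based on the $ENV passed to the container
ln -sfn "../overrides/${ENV}/plugins" ../plugins
exec "$@"
This would be wired up with ENTRYPOINT ["/entrypoint.sh"] in the Dockerfile and ENV forwarded to the container via environment: in the compose service.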
I have found the answer to be the project flag (-p) when creating my containers. So this is what I did:
docker-compose -p foo up -d
docker-compose -p bar up -d
This would bring containers up as 2 separate projects.
Link to documentation
Suppose I have multiple containers deployed:
init
service1
service2
db
web
test
The init container runs to completion and then shuts down by itself. That is its job: do some pre-configuration work, then exit.
When running locally, I don't have any issues running this in my desktop work environment.
My issue is when it is deployed in my CI pipeline. When my init container finishes up, it shuts down the whole docker-compose network, even if I explicitly set --exit-code-from to my test container:
docker-compose up --exit-code-from test
The end result is that I am not able to run my test cases to completion, because everything is being shut down when the init container exits.
Does anybody have hints on what I can do?
This is interesting. Is it possible to include the compose file? Maybe you have a depends_on defined, and the version of docker used by your CI pipeline handles it differently from the one on your dev environment.
At any rate, you'd want to stop using --exit-code-from; it apparently implies --abort-on-container-exit.
From https://docs.docker.com/compose/reference/up/:
--abort-on-container-exit Stops all containers if any container was
stopped. Incompatible with -d.
--exit-code-from SERVICE Return the exit code of the selected service
container. Implies --abort-on-container-exit.
I ran into the same issue when trying to run Cypress together with MongoDB seeding container and a replica set starter container. The 2 mongo-related containers would exit quickly after doing their job, thus triggering the unintuitive --abort-on-container-exit implied by --exit-code-from cypress.
For me the simplest solution was to use the tail -f /dev/null hack. The idea is that if you run this command after the containers you don't want to exit have finished their actual jobs, they will hang until another container triggers --abort-on-container-exit and pulls the entire docker-compose setup down with it.
Note that this is not a universal answer: the downside of this approach is that you have to find out what the original CMD is in containers that you don't have control over.
For example, let's take the mongo-seeding project and their Dockerfile. In order to keep the container alive after doing its job, I'd like to make my own Dockerfile in which I'll pull that image and define a custom ENTRYPOINT that will first run the CMD from the original definition of the mongo-seeding image and then run tail -f /dev/null to keep the container alive. In their Dockerfile I can see that the CMD is simply seed and I can assume it won't change in the future (good design) so my ENTRYPOINT script entry.sh can just look like this:
#!/bin/sh
seed
tail -f /dev/null
And my Dockerfile:
FROM pkosiec/mongo-seeding:3.6.0
ENTRYPOINT [ "/app/scripts/entry.sh" ]
Plus the relevant service in docker-compose with volumes mount for completeness:
mongo-seed:
  build:
    context: ./mongoSeed
  volumes:
    - ./mongoSeed/data:/app/data
    - ./mongoSeed/scripts:/app/scripts
  working_dir: /app/data
  depends_on:
    - mongodb
Which makes the container do its job and then hang until Cypress exits and causes the entire docker-compose setup to stop.
I'm using Docker Compose to link a master and a slave service together. The slave container is thus automatically injected by Compose with environment variables containing the various ports and IPs needed to connect to the other master container.
The service accepts the IP/Port of the master via a command line argument, so I set this in my commands.
master:
  command: myservice
  ports:
    - '29015'
slave:
  command: myservice --master ${MASTER_PORT_29015_TCP_ADDR}:${MASTER_PORT_29015_TCP_PORT}
  links:
    - master:master
The problem is that the environment variables like MASTER_PORT_29015_TCP_PORT are evaluated when the compose command is run, and not from within the container itself where they are actually set.
When starting the cluster, you see the warning: WARNING: The MASTER_PORT_29015_TCP_ADDR variable is not set. Defaulting to a blank string.
I tried setting entrypoint: ["/bin/sh", "-c"] but produced unusual behaviour where the service wouldn't see any variables at all. (For information, the service I'm actually using is RethinkDB).
As stated in the documentation, link environment variables are now discouraged, and you should just write master instead of $MASTER_PORT_29015_TCP_ADDR. Moreover, there doesn't seem to be any point to writing $MASTER_PORT_29015_TCP_PORT when you know its value's going to be 29015.
Hence, change the command to:
myservice --master master:29015
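So the slave definition from your file simply becomes:
slave:
  command: myservice --master master:29015
  links:
    - master:master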