How to override docker-compose values in multiple combined files? - docker

Let's imagine I have 3 compose files (only focusing on the mysql service):
docker-compose.yml
docker-compose.staging.yml
docker-compose.prod.yml
In my docker-compose.yml I have my basic mysql stuff with dev as the build target:
version: "3.4"
services:
mysql:
build:
target: dev
...
And start it with
docker-compose up -d
In my staging environment I would like to expose port 3306, but I also want another build target, so I would create the docker-compose.staging.yml with the following content:
version: "3.4"
services:
mysql:
build
target: prod
ports:
- 3306:3306
And combine it with
docker-compose -f docker-compose.yml -f docker-compose.staging.yml up -d
So the build target is overridden and port 3306 is now exposed to the outside.
Now I want the same in the docker-compose.prod.yml, just without having port 3306 exposed to the outside ... How can I override the ports directive so that no ports are exposed?
I tried to put an empty array in the prod.yml without success (port is still exposed):
version: "3.4"
services:
mysql:
ports: []
In the end I would like to stack the up command like this:
docker-compose -f docker-compose.yml -f docker-compose.staging.yml -f docker-compose.prod.yml up -d
I also know the docs say:
For the multi-value options ports, expose, external_links, dns, dns_search, and tmpfs, Compose concatenates both sets of values
But how can I reach my goal anyway without duplicating configuration?
Yes, for sure, I could omit the docker-compose.staging.yml, but the staging.yml defines build steps which should also be used for the prod stage so that there are no differences between the built containers.
So duplicating things isn't really an option.
Thanks

I would actually strongly suggest just not using the target option in your compose files. I find it extremely beneficial to build a single image for local/staging/production: build once, test it, and deploy it in each environment. In that case, you change behaviour using environment variables or mounted secrets/config files.
Further, using Compose to build the images is... fragile. I would recommend building the images in a CI system, pushing them to a registry, and then referencing the image version tags in your compose file; it is a much more reproducible system.
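For illustration, a minimal sketch of that approach; the registry path, tag, and the APP_ENV variable are hypothetical placeholders, not from the question:

# docker-compose.yml - the same image in every environment, built and pushed by CI
version: "3.4"
services:
  mysql:
    # hypothetical registry path and tag; CI bumps the tag on each release
    image: registry.example.com/myproject/mysql:1.2.3
    environment:
      # behaviour differences live in variables instead of build targets
      - APP_ENV=${APP_ENV:-dev}

The staging and prod override files then only add or change environment: and ports: entries; the image itself never differs between environments.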

You might consider using the extends key in your compose files like this:
mysql:
  extends:
    file: docker-compose.yml
    service: mysql
  ports:
    - 3306:3306
  # other definitions
Although you'd have to change your compose version from 3.4 to something below 3 (like 2.3), because v3 doesn't support this feature (ref); there is an open feature request that has been hanging for a long time now.
An important note here is that you shouldn't expose any ports in your base docker-compose.yml file, only in the environment-specific compose files.
Official docs ref for extends
Edit
The target clause is not supported in v2.0, so I've adjusted the answer to match both the extends and the target requirement. That's compose v2.3.
Edit from comments
Since there is a deploy keyword requirement, there is a compose v3 requirement, and as of now there is no possibility to extend compose files in v3. I've read in some official doc (can't find it now for ref) that they encourage us to use flat, environment-specific compose files so that it's always clear what gets deployed. Docker also states that this is hard to implement in v3 (ref in the above issue) and it's not going to be implemented any time soon. You have to use separate compose files per environment.
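Applied to the question, the flat per-environment layout could look like this (a sketch showing only the keys discussed here; each file is complete on its own):

# docker-compose.staging.yml
version: "3.4"
services:
  mysql:
    build:
      target: prod
    ports:
      - 3306:3306

# docker-compose.prod.yml - same build target, but no ports are published
version: "3.4"
services:
  mysql:
    build:
      target: prod

Each environment is then started with a single file, e.g. docker-compose -f docker-compose.prod.yml up -d.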

Can docker-compose profiles be used together with docker-compose override to have a common code with configurable environments?

I have the requirement of having a unique docker-compose.yaml / infrastructure code which will be versioned across the different deployment stages.
I would like to have some ports exposed in development and not in production. As I learned from other questions, this seems not to be possible (using the same .env file that is used to configure other environment variables for the containers).
My idea would be to have my docker-compose.yaml, for example:
version: "3.9"
services:
myservice:
image: myimage
# **
# configuration
# **
ports:
- 80:80
- 19980:19980
Then in production I would override it with a profile in docker-compose.production.yaml:
version: "3.9"
services:
myservice:
profiles:
- production
ports:
- 80:80
This would allow me to always have the same configuration (both .yaml files) and to switch between them by just calling the docker-compose up command with the production profile (--profile).
My question is: does this work as expected, or is the service always overwritten even when the profile flag is not provided?
Compose profiles only affect which services start; they do not have any effect on the options those services use. If you have multiple Compose files then the options in those files are merged according to a set of rules. My expectation is that this would take effect before the profile selection took place.
What you're describing seems like a fairly routine setup for multiple Compose files, without using the profile feature. The most common case I've seen is that a "development" setup strictly adds options to a "production" setup. In your example, both "production" and "development" publish port 80, but only development also publishes the debugger port. There also might be additional environment variables or bind mounts that only make sense in development, but you (usually) are not trying to remove values.
So in this setup, your base docker-compose.yml file would contain the production setup, with the minimum values that are used in all environments.
# docker-compose.yml
version: '3.8'
services:
  myservice:
    image: myimage
    ports:
      - '80:80'
Then you'd have a second file that only has the options that are added for the development setup:
# docker-compose.dev.yml / docker-compose.override.yml (see below)
version: '3.8'
services:
  myservice:
    # (do not need to repeat `image:`; could add `build:`)
    ports:
      - '19980:19980'
If you name the file docker-compose.override.yml, Compose will use both files automatically, and you need to make sure to push the base file but not the override file to the production environment.
# uses both files, if docker-compose.override.yml is present
docker-compose up -d
If you name it something else, you need to explicitly name all of the files with docker-compose -f options, on every Compose invocation.
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
docker-compose -f docker-compose.yml -f docker-compose.dev.yml ps
(Or you can set the $COMPOSE_FILE environment variable, but you have to remember to set it in every shell session in every environment.)
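A small sketch of that, assuming the file names used above (on Linux/macOS the entries in COMPOSE_FILE are separated by a colon):

# development shell: every compose command in this session uses both files
export COMPOSE_FILE=docker-compose.yml:docker-compose.dev.yml
docker-compose up -d
docker-compose ps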

docker compose pull newest image

I have a few microservices. Jenkins builds these projects, creates docker images and publishes them to the artifactory.
I have another project for automation testing which is using these docker images.
We have a docker-compose file that has all the configuration of all microservice images.
Following is a sample docker-compose file:
version: "1.0"
services:
my-service:
image: .../my-service:1.0.0-SNAPSHOT-1
container_name: 'my-service'
restart: always
volumes:
...
ports:
...
...
All of these are working fine.
Now, to update the image, I have to manually change the image tag (1.0.0-SNAPSHOT-2) in the docker-compose file.
This is an issue because it involves human intervention. Is there any way to pull the newest docker image without any change in docker-compose?
NOTE - I cannot create images with the latest tag. I get an issue when publishing an image with the same name to the artifactory (unauthorized: The client does not have permission for manifest: Not enough permissions to delete/overwrite artifact).
Well, what you can actually do is use environment variable substitution in CLI commands (envsubst). Let me explain a scenario as an example.
First, in the docker-compose.yaml you define an environment variable as the tag of the container:
version: "3"
services:
my-service:
image: .../my-service:$TAG
container_name: 'my-service'
restart: always
volumes:
...
ports:
...
...
Second, with a CLI command (or in the terminal) you define an environment variable with your version. This part is important because here you add your version tag for the container (and you can run bash commands to extract some id, the last git commit, or whatever else you want to use as the tag; here are some ideas):
export TAG=1.0.0-SNAPSHOT-1
export TAG="$(bash /path/to/script/tag.sh)"
export TAG="$(git log --format="%H" -n 1)"
The third and last part is to run envsubst and pipe the result into docker-compose to deploy your container. Note the pipe | and the -f - (read the configuration from stdin), which are very important for the execution.
envsubst < docker-compose.yaml | docker-compose -f - up -d
link to envsubst
I use this format to deploy tagged containers in Kubernetes, but the idea is the same with docker compose.
envsubst < deployment.yaml | kubectl apply -f -
And change the version to 3 in the docker-compose.yaml. Good luck.
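One side note on this approach: Compose itself substitutes environment variables that are referenced in the compose file, so once the file contains $TAG you may not need envsubst at all; exporting the variable and running the usual command is often enough. A sketch:

export TAG=1.0.0-SNAPSHOT-2
docker-compose up -d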

Access container_name in Dockerfile (from docker-compose)

I have set up a docker-compose project which creates multiple images:
cache_server:
  image: current_timezone/full-supervisord-cache-server:1.00
  container_name: renamed-varnish-cache
  networks:
    - network_frontend
  build:
    context: "./all-services/"
    dockerfile: "./cache-server/Dockerfile.cacheserver.varnish"
    args:
      - DOCKER_CONTAINER_USERNAME=username
  ports:
    - "6081:6081"
    - "6082:6082"
When I use docker-compose -f file1.yml -f file2.override.yml up I will then get the containers; in the case of the one above it will be named renamed-varnish-cache.
In the corresponding Dockerfile (./nginx-proxy/Dockerfile.proxy.nginx) I want to be able to use the container_name property defined in the docker-compose.yml shown above.
When the containers are created I want to update the Varnish configuration inline inside the Dockerfile: RUN sed -i "s|webserver_container_name|renamed-varnish-cache|g" /etc/varnish/default.vcl
For instance:
backend webserver_container_name {
  .host = "webserver_container_name";
  .port = "8080";
}
To this (I anticipate I will have to replace the - with _ for the backend name):
backend renamed_varnish_cache {
  .host = "renamed-varnish-cache";
  .port = "8080";
}
Is there a way to receive the docker-compose named items as variables inside Dockerfile?
In core Docker, there are two separate concepts. An image is a built version of some piece of software packaged together with its dependencies; a container is a running instance of an image. There are separate docker build and docker run commands to build images and launch containers, and you can launch multiple containers from a single image.
Docker Compose wraps these concepts. In particular, the build: block corresponds to the image-build step, and that is what invokes the Dockerfile. None of the other Compose options are available or visible inside the Dockerfile. You cannot access the container_name: or environment: variables or volumes: because those don't exist at this point in the build lifecycle; you also cannot contact other Compose services from inside the Dockerfile.
It's pretty common to have multiple containers run off the same image if they have largely the same code base but need a different top-level command. One example is a Python Django application that needs Celery background workers; you'd have the same project structure but a different command for the Celery worker.
version: '3.8'
services:
  web:
    build: .
    image: my/django-app
  worker:
    image: my/django-app
    command: celery worker ...
Now with this stack you can docker-compose build to build the one image, and then run docker-compose up to launch both containers from that image. (During the build you can't know what the container names will be, and there will be two container names so you can't just use one in the Dockerfile.)
At a design level, this means that you often can't include configuration-type settings in the image itself (other containers' hostnames, user IDs for host-shared filesystems). If your application lets you specify these things as environment variables, that's the easiest option. You can use bind mounts (volumes:) to inject whole config files. If neither of these things work for you, you can use an entrypoint script to rewrite the config file.
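A rough sketch of such an entrypoint script for the Varnish case above; the BACKEND_HOST variable name is hypothetical (not from the question), and the script assumes the placeholder webserver_container_name is still present in the image's /etc/varnish/default.vcl:

#!/bin/sh
# docker-entrypoint.sh - rewrite the VCL at container start, then run the real command
set -e

# default to the name used in the question; override via `environment:` in the compose file
: "${BACKEND_HOST:=renamed-varnish-cache}"

sed -i "s|webserver_container_name|${BACKEND_HOST}|g" /etc/varnish/default.vcl

exec "$@"

The Dockerfile would COPY this script and set it as the ENTRYPOINT while keeping the original Varnish command as CMD; each compose file can then pass a different BACKEND_HOST without rebuilding the image.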

Conditionally mount volumes in docker-compose for several conditions

I use Docker and docker-compose to package scientific tools into easily/universally executable modules. One example is a Docker image that packages a rather complicated Python library into a container that runs a Jupyter notebook server; the idea is that other scientists who are not terribly tech-savvy can clone a GitHub repository, run docker-compose up, then do their analyses without having to install the library, configure various plugins and other dependencies, etc.
I have this all working fine except that I'm having issues getting the volume mounts to work in a coherent fashion. The reason for this is that the library inside the docker container handles multiple kinds of datasets, which users will store in several separate directories that are conventionally tracked through shell environment variables. (Please don't tell me this is a bad way to do this--it's the way things are done in the field, not the way I've chosen to do things.) So, for example, if the user stores FreeSurfer data, they will have an environment variable named SUBJECTS_DIR that points to the directory containing the data; if they store HCP data, they will have an environment variable HCP_SUBJECTS_DIR. However, they may have both, either, or neither of these set (as well as a few others).
I would like to be able to put something like this in my docker-compose.yml file in order to handle these cases:
version: '3'
services:
  my_fancy_library:
    build: .
    ports:
      - "8080:8888"
    environment:
      - HCP_SUBJECTS_DIR="/hcp_subjects"
      - SUBJECTS_DIR="/freesurfer_subjects"
    volumes:
      - "$SUBJECTS_DIR:/freesurfer_subjects"
      - "$HCP_SUBJECTS_DIR:/hcp_subjects"
In testing this, if the user has both environment variables set, everything works swimmingly. However, if they don't have one of these set, I get an error about not mounting directories that are fewer than 2 characters long (which I interpret to be a complaint about mounting a volume specified by ":/hcp_subjects").
This question asks basically the same thing, and the answer points to here, which, if I'm understanding it right, basically explains how to have multiple docker-compose files that are resolved in some fashion. This isn't really a viable solution for my case for a few reasons:
This tool is designed for use by people who don't necessarily know anything about docker, docker-compose, or related utilities, so expecting them to write/edit their own docker-compose.yml file is a problem
There are more than just two of these directories (I have shown two as an example) and I can't realistically make a docker-compose file for every possible combination of these paths being declared or not declared
Honestly, this solution seems really clunky given that the information needed is right there in the variables that docker-compose is already reading.
The only decent solution I've been able to come up with is to ask the users to run a script ./run.sh instead of docker-compose up; the script examines the environment variables, writes out its own docker-compose.yml file with the appropriate volumes, and runs docker-compose up itself. This also seems somewhat clunky, but it works.
Does anyone know of a way to conditionally mount a set of volumes based on the state of the environment variables when docker-compose up is run?
You can set defaults for environment variables in a .env file shipped alongside the docker-compose.yml [1].
By setting your environment variables to /dev/null by default and then handling this case in the containerized application, you should be able to achieve what you need.
Example
$ tree -a
.
├── docker-compose.yml
├── Dockerfile
├── .env
└── run.sh
docker-compose.yml
version: "3"
services:
test:
build: .
environment:
- VOL_DST=${VOL_DST}
volumes:
- "${VOL_SRC}:${VOL_DST}"
Dockerfile
FROM alpine
COPY run.sh /run.sh
ENTRYPOINT ["/run.sh"]
.env
VOL_SRC=/dev/null
VOL_DST=/volume
run.sh
#!/usr/bin/env sh
set -euo pipefail
if [ ! -d "${VOL_DST}" ]; then
  echo "${VOL_DST} not mounted"
else
  echo "${VOL_DST} mounted"
fi
Testing
Environment variable VOL_SRC not defined:
$ docker-compose up
Starting test_test_1 ... done
Attaching to test_test_1
test_1 | /volume not mounted
test_test_1 exited with code 0
Environment variable VOL_SRC defined:
$ VOL_SRC="./" docker-compose up
Recreating test_test_1 ... done
Attaching to test_test_1
test_1 | /volume mounted
[1] https://docs.docker.com/compose/environment-variables/#the-env-file
Even though @Ente's answer solves the problem, here is an alternative solution for when you have more complex differences between environments.
Docker compose supports multiple docker-compose files for configuration overriding in different environments.
This is useful if you have different named volumes you need to potentially mount on the same path depending on the environment.
You can modify existing services or even add new ones, for instance:
# docker-compose.yml
version: '3.3'
services:
  service-a:
    image: "image-name"
    volumes:
      - type: volume
        source: vprod
        target: /data
    ports:
      - "80:8080"
volumes:
  vprod:
  vdev:
And then you have the override file to change the volume mapping:
# docker-compose.override.yml
services:
  service-a:
    volumes:
      - type: volume
        source: vdev
        target: /data
When running docker-compose up -d both configurations will be merged with the override file taking precedence.
Docker Compose picks up docker-compose.yml and docker-compose.override.yml by default. If you have more files, or files with different names, you need to specify them in order:
docker-compose -f docker-compose.yml -f docker-compose.custom.yml -f docker-compose.dev.yml up -d

Start particular service from docker-compose

I am new to Docker and have a docker-compose.yml containing many services, and I need to start one particular service. My docker-compose.yml file has this information:
version: '2'
services:
  postgres:
    image: ${ARTIFACTORY_URL}/datahub/postgres:${BUILD_NUMBER}
    restart: "no"
    volumes:
      - /etc/passwd:/etc/passwd
    volumes_from:
      - libs
    depends_on:
      - libs
  setup:
    image: ${ARTIFACTORY_URL}/setup:${B_N}
    restart: "no"
    volumes:
      - ${HOME}:/usr/local/
I am able to bring up the docker-compose.yml file using the command:
docker-compose -f docker-compose.yml up -d --no-build
But I need to start only the setup service from the docker-compose file.
How can I do this?
It's very easy:
docker compose up <service-name>
In your case:
docker compose -f docker-compose.yml up -d setup
To stop the service, you don't need to specify the service name:
docker compose down
will do.
Little side note: if you are in the directory where the docker-compose.yml file is located, then docker-compose will use it implicitly; there's no need to add it as a parameter.
You need to provide it in the following situations:
the file is not in your current directory
the file name is different from the default one, e.g. myconfig.yml
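One extra detail worth knowing: docker-compose up <service> also starts anything that service lists under depends_on. If you really want only that single container, add the --no-deps flag, for example:
docker-compose -f docker-compose.yml up -d --no-deps setup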
As far as I understand your question, you have multiple services in docker-compose but want to deploy only one.
docker-compose should be used for multi-container Docker applications. From the official docs:
Compose is a tool for defining and running multi-container Docker applications.
IMHO, you should run your service image separately with the docker run command.
PS: If you are asking about recreating only the container whose image is changed among the multiple services in your docker-compose file, then docker-compose handles that for you.
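For reference, a rough docker run equivalent of the setup service from the question, assuming the same environment variables are set in the shell (a sketch, not a drop-in replacement for the compose definition):

docker run -d --restart=no \
  -v "${HOME}:/usr/local/" \
  "${ARTIFACTORY_URL}/setup:${B_N}"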
