Is there a way to provide docker-compose env-file from stdin? - docker

In docker run one can do
docker run --env-file <(env | grep ^APP_) ...
Is there a similar way for docker-compose?
I would like to avoid a physical env file.

The equivalent of --env-file option of the docker cli in docker-compose is the env_file configuration option in the docker-compose file. But I think this requires a physical .env file.
If you want to use the environment variables of your host machine, you can define them in docker-compose (with an optional fallback value):
version: "3.9"
services:
  app:
    image: myapp
    environment:
      - APP_MYVAR=${APP_MYVAR-fallbackvalue}
It's not as convenient as grepping your ^APP_ vars, but it is one way to avoid the physical file.
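The ${APP_MYVAR-fallbackvalue} form is standard POSIX parameter expansion, and Compose applies the same rule during substitution. A quick shell sketch of the behavior (APP_MYVAR is just an example name):

```shell
# ${VAR-default}: if VAR is unset, the fallback is used;
# otherwise the variable's value wins.
unset APP_MYVAR
echo "${APP_MYVAR-fallbackvalue}"
APP_MYVAR=hostvalue
echo "${APP_MYVAR-fallbackvalue}"
```

The first echo prints the fallback, the second prints the host value, which is exactly what Compose does when interpolating the compose file.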

You can do it by supplying multiple compose files, as documented here. In your case the first one is a physical docker-compose.yml and the second one is a Compose file containing only the environment variables for the needed service. Obviously, the env variables must be properly formatted, so a sed that prepends an indented "- " is necessary because they are added as a YAML list.
docker-compose -f docker-compose.yml -f <(printf "services:
  your_service:
    environment:\n$(env | grep ^APP_ | sed -e "s/^/      - /")"
) up
This is how Docker Compose behaves:
When you supply multiple files, Compose combines them into a single configuration. Compose builds the configuration in the order you supply the files. Subsequent files override and add to their predecessors
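As a sanity check, the override text that the process substitution produces can be previewed without docker at all (APP_FOO and APP_BAR are made-up example variables):

```shell
# Simulate the generated override-file content (no docker needed).
export APP_FOO=1 APP_BAR=2
printf "services:\n  your_service:\n    environment:\n%s\n" \
  "$(env | grep '^APP_' | sort | sed -e 's/^/      - /')"
```

Each matching variable becomes one properly indented YAML list entry under environment:, which is why the sed prefix has to match the nesting depth of the service.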

Related

Cannot pass a variable to the volumes section in docker-compose

I have a docker-compose.yml file that defines the volumes section like this:
volumes:
  seqfs:
    driver: azure_file
    driver_opts:
      share_name: seqtest
      storage_account_name: stacctest
      storage_account_key: ${STORAGE_ACCOUNT_KEY}
I am trying to pass in STORAGE_ACCOUNT_KEY during the build command:
docker-compose -f docker-compose.yml build --build-arg STORAGE_ACCOUNT_KEY="##########"
But an error is returned:
The STORAGE_ACCOUNT_KEY variable is not set. Defaulting to a blank string.
Please note I do not want to save STORAGE_ACCOUNT_KEY into a file such as .env for security reasons -- I want to pass it from the command line.
How can I pass an argument to the volumes section in my docker-compose.yml?
Regarding the error you get:
docker-compose -f docker-compose.yml build --build-arg STORAGE_ACCOUNT_KEY="##########"
But an error is returned:
The STORAGE_ACCOUNT_KEY variable is not set. Defaulting to a blank string.
the fact is that --build-arg only deals with ARG directives within the Dockerfile.
(BTW it is simpler to run docker-compose up --build
rather than running docker-compose build then docker-compose up)
The only solutions I can think of to achieve what you want are:
1. Put STORAGE_ACCOUNT_KEY=foobar in a .env file and run docker-compose up --build
2. Put STORAGE_ACCOUNT_KEY=foobar in an other.env file and run docker-compose --env-file=other.env up --build
(Note: the docker/compose syntax contains two different kinds of "env-file"; for details on this subtlety, see this answer of mine: Pass variables from .env file to dockerfile through docker-compose.)
3. Run STORAGE_ACCOUNT_KEY=foobar docker-compose up --build
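Solution 3 works because a leading VAR=value assignment is placed in the environment of that one command only, where Compose picks it up for substitution. In plain shell terms (foobar is a placeholder value, and this assumes the variable is not already set in your shell):

```shell
# A leading assignment exports the variable for that single command.
STORAGE_ACCOUNT_KEY=foobar sh -c 'echo "key=$STORAGE_ACCOUNT_KEY"'
# It is not set in the surrounding shell afterwards.
echo "after=${STORAGE_ACCOUNT_KEY-unset}"
```

The child process sees key=foobar while the second echo reports the variable as unset.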
Concluding remarks:
You said you do not want to save STORAGE_ACCOUNT_KEY into a file such as .env for security reasons, so at first sight you would only accept solution 3. above.
However, note that it is unlikely that 3. is more secure if you assume you don't trust the environment of your docker host. Indeed, .env files may be protected thanks to file permissions (and .gitignore FWIW), while the foobar string above automatically leaks in the name of the running processes (try e.g. ps aux | grep STORAGE_ACCOUNT_KEY).

Adding a tag to all images in a docker-compose file?

For all my images I have a latest tag for production and a test tag for testing. I also have 2 docker-compose files that differ only in that one uses latest for all images (except nginx), while the other uses test. Is there a way to set this via a CLI variable, so that I don't have to keep those two files in sync manually all the time?
We can handle this with an environment variable.
docker-compose.yml:
services:
  ...
  my-service-one:
    image: my/service-one:$TAG
    ...
  my-service-two:
    image: my/service-two:$TAG
    ...
Then we can add a .env file (for production) with the content
TAG=latest
and a test.env file (for testing) with the content
TAG=test
When we run docker compose up, file .env will be used by default. If we want to start our test deployment, we can run docker compose --env-file test.env up.
You can simply run these commands for each compose file
TAG=latest docker-compose -p latestapp -f compose-for-latest.yaml up -d
and
TAG=test docker-compose -p testapp -f compose-for-test.yaml up -d
Of course, don't forget to reference your TAG variable in the compose files using ${TAG} or $TAG (both are valid).
If you are a Linux user, this should work fine.
If you are a Windows user and the commands don't work, you can use Git Bash instead.
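Both spellings really do expand identically; the braces only matter when the variable name is immediately followed by another word character. A quick illustration in the shell (the image names are just examples):

```shell
TAG=test
echo "my/service-one:$TAG"
echo "my/service-one:${TAG}"
# Braces are required when a valid name character follows,
# otherwise the shell would look for a variable named TAG_suffix.
echo "my/service:${TAG}_suffix"
```

Compose's interpolation follows the same convention when it reads the compose file.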

Is there a way to define the namespace your Compose Swarm lives in, in the yaml?

I have a simple docker-compose file which is used to launch my containers. I wish to have another yaml file which contains additional, optional containers. It can live in a separate directory. My goal is to find a way to force the namespace of the created swarms so they exist within the same network/use space so they can talk to each other.
compose1.yaml
services:
  web:
    build: .
compose2.yaml
services:
  web1:
    build: .
So if I run both of these, they would be prepended with the folder they exist in; in my case: a and b respectively.
I wanted to ensure that they flow together, despite not being in the same file hierarchy.
I have been combing through keywords in the docker-compose documentation and was not sure what the best way to do this in the YAML file would be, but noticed that in the CLI I might be able to update various names.
How does one accomplish this?
Note: I have also created a third file under the b directory, a sibling to compose2.yaml. So I can run those separately and they work just fine.
a/
  compose.yaml
b/
  compose2.yaml
  another.yaml
So I have been able to merge them together by doing cd /b/ && docker-compose -f compose2.yaml -f another.yaml up -d to run 2 files together, and they exist under the b namespace. Likewise, I can also run them sequentially instead of referencing them in 1 command.
So my question is how can I do something like:
docker-compose --namespace test compose.yaml up
docker-compose --namespace test compose2.yaml up
such that I could view items accordingly with docker? It seems that I would need to consider running the command from under the first shared parent folder?
so if a and b existed under test, I could just do:
cd /test
docker-compose -f a/compose.yaml up -d
docker-compose -f b/compose2.yaml up -d
then my services would be listed as: test_web, test_db-box, etc.
So I found out that one person's namespace is another person's project-name.
That being said, after understanding the nuances, the project name (-p | --project-name) is the prefix for the compose services.
docker-compose --project-name foo -f a/compose.yaml up
docker-compose --project-name foo -f b/compose2.yaml up
This will create the services: foo_web_1
The format for this is: {project-name}_{service-name}_{number}
The issue then is: can we find a way to implement this CLI property from within the YAML file, possibly as a config option? The Docker Compose documentation states that you can supply an environment variable (COMPOSE_PROJECT_NAME) to change the project name from the default of the base directory, but not from within a Compose YAML file.
If I want to launch multiple compose files under a particular project, what I would do is encapsulate it in a Bash or shell script.
#!/bin/bash
export COMPOSE_PROJECT_NAME=ultimate-project
docker-compose -f a/compose.yaml up -d
docker-compose -f b/compose2.yaml up -d
and that would create the services:
ultimate-project_web_1
ultimate-project_web2_1
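The export is what makes this work: without it, the variable stays local to the script's shell and the docker-compose child processes never see it. A minimal illustration, with a child shell standing in for docker-compose:

```shell
# export puts the variable into the environment of every child process
# started afterwards, which is how docker-compose picks it up.
export COMPOSE_PROJECT_NAME=ultimate-project
sh -c 'echo "project=$COMPOSE_PROJECT_NAME"'
```

Any docker-compose invocation later in the same script inherits the project name the same way.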

How can I pass the variables I have placed in the .env file to the containers in my docker swarm?

I am trying to use the same docker-compose.yml and .env files for both docker-compose and swarm. The variables from the .env file should get parsed, via sed, into a config file by running a run.sh script at boot. This setup works fine when using the docker-compose up command, but they're not getting passed when I use the docker stack deploy command.
How can I pass the variables into the container so that the run.sh script will parse them at boot?
Loading the .env file is a feature of docker-compose that is not part of the docker CLI. You can manually load the contents of this file in your shell before performing the deploy:
set -a; . ./.env; set +a
docker stack deploy -c docker-compose.yml stack_name
Other options include using docker-compose to pre-process the compose file:
docker-compose config >docker-compose.processed.yml
Or you could use envsubst to replace the variables to make a compose file with the variables already expanded:
set -a; . ./.env; set +a
envsubst <docker-compose.yml >docker-compose.processed.yml
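The set -a trick works because every variable assigned while allexport is active gets exported to child processes. A self-contained illustration, where demo.env is a stand-in for your .env file:

```shell
# set -a enables allexport: variables sourced from the file become
# environment variables visible to child processes (like docker).
cat > demo.env <<'EOF'
POSTGRES_VERSION=4.0
EOF
set -a; . ./demo.env; set +a
sh -c 'echo "POSTGRES_VERSION=$POSTGRES_VERSION"'
rm demo.env
```

Without the set -a/set +a pair, sourcing the file would only set shell-local variables that docker stack deploy's substitution could not see.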
To pass shell environment variables through to containers, use the env_file option:
web:
  env_file:
    - web-variables.env
As docs state:
You can pass multiple environment variables from an external file through to a service’s containers with the ‘env_file’ option
However, using .env as external filename may cause unexpected results and is semantically problematic.
Placing .env in the folder where the docker-compose command is run serves a different purpose:
As Docs, Docs2, Docs3 state:
The environment variables you define here are used for variable substitution in your Compose file
You can set default values for environment variables using a .env file, which Compose automatically looks for
So if the compose file contains:
db:
  image: "postgres:${POSTGRES_VERSION}"
Your .env would contain:
POSTGRES_VERSION=4.0
This feature indeed works only in compose:
The .env file feature only works when you use the docker-compose up command and does not work with docker stack deploy
Actually, I found the easiest way is to just add this to the docker-compose.yml file:
env_file:
  - .env

Conditionally mount volumes in docker-compose for several conditions

I use docker and docker compose to package scientific tools into easily/universally executable modules. One example is a docker that packages a rather complicated python library into a container that runs a jupyter notebook server; the idea is that other scientists who are not terribly tech-savvy can clone a github repository, run docker-compose up then do their analyses without having to install the library, configure various plugins and other dependencies, etc.
I have this all working fine except that I'm having issues getting the volume mounts to work in a coherent fashion. The reason for this is that the library inside the docker container handles multiple kinds of datasets, which users will store in several separate directories that are conventionally tracked through shell environment variables. (Please don't tell me this is a bad way to do this--it's the way things are done in the field, not the way I've chosen to do things.) So, for example, if the user stores FreeSurfer data, they will have an environment variable named SUBJECTS_DIR that points to the directory containing the data; if they store HCP data, they will have an environment variable HCP_SUBJECTS_DIR. However, they may have both, either, or neither of these set (as well as a few others).
I would like to be able to put something like this in my docker-compose.yml file in order to handle these cases:
version: '3'
services:
  my_fancy_library:
    build: .
    ports:
      - "8080:8888"
    environment:
      - HCP_SUBJECTS_DIR="/hcp_subjects"
      - SUBJECTS_DIR="/freesurfer_subjects"
    volumes:
      - "$SUBJECTS_DIR:/freesurfer_subjects"
      - "$HCP_SUBJECTS_DIR:/hcp_subjects"
In testing this, if the user has both environment variables set, everything works swimmingly. However, if they don't have one of these set, I get an error about not mounting directories that are fewer than 2 characters long (which I interpret to be a complaint about mounting a volume specified by ":/hcp_subjects").
This question asks basically the same thing, and the answer points to here, which, if I'm understanding it right, basically explains how to have multiple docker-compose files that are resolved in some fashion. This isn't really a viable solution for my case for a few reasons:
This tool is designed for use by people who don't necessarily know anything about docker, docker-compose, or related utilities, so expecting them to write/edit their own docker-compose.yml file is a problem
There are more than just two of these directories (I have shown two as an example) and I can't realistically make a docker-compose file for every possible combination of these paths being declared or not declared
Honestly, this solution seems really clunky given that the information needed is right there in the variables that docker-compose is already reading.
The only decent solution I've been able to come up with is to ask the users to run a script ./run.sh instead of docker-compose up; the script examines the environment variables, writes out its own docker-compose.yml file with the appropriate volumes, and runs docker-compose up itself. This also seems somewhat clunky, but it works.
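Such a wrapper can stay quite small: emit a volume entry only for each variable that is actually set, then hand the generated file to docker-compose as an override. A hypothetical sketch of that run.sh, using the service and variable names from the question (note it does not yet handle the edge case where no variable is set, which would leave an empty volumes: key):

```shell
#!/bin/sh
# Hypothetical run.sh: write an override file containing only the
# volumes whose source variables are set, then start compose with it.
{
  echo "services:"
  echo "  my_fancy_library:"
  echo "    volumes:"
  if [ -n "${SUBJECTS_DIR-}" ]; then
    echo "      - \"${SUBJECTS_DIR}:/freesurfer_subjects\""
  fi
  if [ -n "${HCP_SUBJECTS_DIR-}" ]; then
    echo "      - \"${HCP_SUBJECTS_DIR}:/hcp_subjects\""
  fi
} > docker-compose.volumes.yml
cat docker-compose.volumes.yml
# docker-compose -f docker-compose.yml -f docker-compose.volumes.yml up
```

Users still just run one command, and the override file is regenerated from their current environment every time.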
Does anyone know of a way to conditionally mount a set of volumes based on the state of the environment variables when docker-compose up is run?
You can set defaults for environment variables in a .env file shipped alongside the docker-compose.yml [1].
By setting your environment variables to /dev/null by default and then handling this case in the containerized application, you should be able to achieve what you need.
Example
$ tree -a
.
├── docker-compose.yml
├── Dockerfile
├── .env
└── run.sh
docker-compose.yml
version: "3"
services:
  test:
    build: .
    environment:
      - VOL_DST=${VOL_DST}
    volumes:
      - "${VOL_SRC}:${VOL_DST}"
Dockerfile
FROM alpine
COPY run.sh /run.sh
ENTRYPOINT ["/run.sh"]
.env
VOL_SRC=/dev/null
VOL_DST=/volume
run.sh
#!/usr/bin/env sh
set -euo pipefail
if [ ! -d "${VOL_DST}" ]; then
  echo "${VOL_DST} not mounted"
else
  echo "${VOL_DST} mounted"
fi
Testing
Environment variable VOL_SRC not defined:
$ docker-compose up
Starting test_test_1 ... done
Attaching to test_test_1
test_1 | /volume not mounted
test_test_1 exited with code 0
Environment variable VOL_SRC defined:
$ VOL_SRC="./" docker-compose up
Recreating test_test_1 ... done
Attaching to test_test_1
test_1 | /volume mounted
[1] https://docs.docker.com/compose/environment-variables/#the-env-file
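The whole approach hinges on the [ -d ] test: /dev/null exists but is a character device, not a directory, so the defaulted mount fails the check, while a real bind-mounted directory passes it. The same test can be verified outside the container (a temp dir stands in for the mount point):

```shell
# /dev/null exists but is not a directory -> reported as not mounted.
VOL_DST=/dev/null
if [ ! -d "$VOL_DST" ]; then echo "not mounted"; else echo "mounted"; fi
# A real directory -> reported as mounted.
VOL_DST=$(mktemp -d)
if [ ! -d "$VOL_DST" ]; then echo "not mounted"; else echo "mounted"; fi
rmdir "$VOL_DST"
```

That is why /dev/null is a convenient default source: the bind mount succeeds, yet the application can still detect that no real data directory was provided.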
Even though @Ente's answer solves the problem, here is an alternative solution for when you have more complex differences between environments.
Docker compose supports multiple docker-compose files for configuration overriding in different environments.
This is useful if you have different named volumes you need to potentially mount on the same path depending on the environment.
You can modify existing services or even add new ones, for instance:
# docker-compose.yml
version: '3.3'
services:
  service-a:
    image: "image-name"
    volumes:
      - type: volume
        source: vprod
        target: /data
    ports:
      - "80:8080"
volumes:
  vprod:
  vdev:
And then you have the override file to change the volume mapping:
# docker-compose.override.yml
services:
  service-a:
    volumes:
      - type: volume
        source: vdev
        target: /data
When running docker-compose up -d both configurations will be merged with the override file taking precedence.
Docker compose picks up docker-compose.yml and docker-compose.override.yml by default, if you have more files, or files with different names, you need to specify them in order:
docker-compose -f docker-compose.yml -f docker-compose.custom.yml -f docker-compose.dev.yml up -d
