How to pass arguments within docker-compose?

Docker 1.9 allows you to pass arguments to a Dockerfile.
See link: https://docs.docker.com/engine/reference/builder/#arg
How can I pass the same arguments within docker-compose.yml?
Please provide an example too, if possible.

Now docker-compose supports variable substitution.
Compose uses the variable values from the shell environment in which docker-compose is run. For example, suppose the shell contains POSTGRES_VERSION=9.3 and you supply this configuration in your docker-compose.yml file:
db:
  image: "postgres:${POSTGRES_VERSION}"
When you run docker-compose up with this configuration, Compose looks for the POSTGRES_VERSION environment variable in the shell and substitutes its value in. For this example, Compose resolves the image to postgres:9.3 before running the configuration.
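For instance, a minimal run with this setup might look like the following (assuming a Bash-like shell; the version number is just an illustration):
export POSTGRES_VERSION=9.3
docker-compose up -d
docker-compose config   # optional: prints the resolved file so you can verify the substitution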

This can now be done as of docker-compose v2+ as part of the build object:
docker-compose.yml
version: '2'
services:
  my_image_name:
    build:
      context: . # current dir as build context
      args:
        var1: 1
        var2: c
See the docker compose docs.
In the above example, "var1" and "var2" will be sent to the build environment.
Note: any environment variables (specified using the environment block) which have the same name as an args variable will override that variable.
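For the args to take effect, the Dockerfile built from that context has to declare matching ARG instructions. A minimal sketch (the base image here is only an assumption):
# Dockerfile (sketch)
FROM alpine:3
ARG var1
ARG var2
# build args exist only at build time unless you copy them into ENV
RUN echo "var1=$var1 var2=$var2"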

This feature was added in Compose file format 1.6.
Reference: https://docs.docker.com/compose/compose-file/#args
services:
  web:
    build:
      context: .
      args:
        FOO: foo

Something to add to these answers is that the args are picked up only when using docker-compose up --build and not when using docker-compose build. If you want to build and run in separate steps, you need to use docker-compose build --build-arg YOUR_ENV_VAR=${YOUR_ENV_VAR} or docker build --build-arg YOUR_ENV_VAR=${YOUR_ENV_VAR}
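For example, a separate build-then-run sequence could look like this (YOUR_ENV_VAR is just a placeholder for whatever ARG your Dockerfile declares):
export YOUR_ENV_VAR=some_value
docker-compose build --build-arg YOUR_ENV_VAR=${YOUR_ENV_VAR}
docker-compose up --no-build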

Create an environment variable in the Linux shell:
export TAG=0.1.2
Set the variable inside docker-compose.yml:
db:
  image: "redis:${TAG}"
Verify that the value was replaced:
docker-compose config
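With TAG exported as above, docker-compose config should print the resolved value, roughly like this (output trimmed):
services:
  db:
    image: redis:0.1.2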


Accessing shell environment variables from docker-compose?

How do you access environment variables exported in Bash from inside docker-compose?
I'm essentially trying to do what's described in this answer but I don't want to define a .env file.
I just want to make a call like:
export TEST_NAME=test_widget_abc
docker-compose -f docker-compose.yml -p myproject up --build --exit-code-from myproject_1
and have it pass TEST_NAME to the command inside my Dockerfile, which runs a unittest suite like:
ENV TEST_NAME ${TEST_NAME}
CMD python manage.py test $TEST_NAME
My goal is to allow running my docker container to execute a specific unittest without having to rebuild the entire image, by simply pulling in the test name from the shell at container runtime. Otherwise, if no test name is given, the command will run all tests.
As I understand, you can define environment variables in a .env file and then reference them in your docker-compose.yml like:
version: "3.6"
services:
app_test:
build:
args:
- TEST_NAME=$TEST_NAME
context: ..
dockerfile: Dockerfile
but that doesn't pull from the shell.
How would you do this with docker-compose?
For the setup you describe, I'd docker-compose run a temporary container
export COMPOSE_PROJECT_NAME=myproject
docker-compose run app_test python manage.py test_widget_abc
This uses all of the setup from the docker-compose.yml file except the ports:, and it uses the command you provide instead of the Compose command: or Dockerfile CMD. It will honor depends_on: constraints to start related containers (you may need an entrypoint wrapper script to actually wait for them to be running).
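If you do need such a wrapper, a minimal sketch might look like the one below; the "db" host, port 5432, and the use of nc are assumptions, not part of the setup in the question:
#!/bin/sh
# entrypoint.sh (sketch): wait for an assumed "db" dependency, then run the given command
until nc -z db 5432; do
  echo "waiting for db..."
  sleep 1
done
exec "$@"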
If the test code is built into your "normal" image you may not even need special Compose setup to do this; just point docker-compose run at your existing application service definition without defining a dedicated service for the integration tests.
Since Compose does (simple) environment variable substitution you could also provide the per-execution command: in your Compose file
version: "3.6"
services:
app_test:
build: ..
command: python manage.py $TEST_NAME # uses the host variable
Or, with the Dockerfile you have, pass through the host's environment variable; the CMD will run a shell to interpret the string when it starts up
version: "3.6"
services:
app_test:
build: ..
environment:
- TEST_NAME # without a specific value here passes through from the host
These would both work with the Dockerfile and Compose setup you show in the question.
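Usage would then be the same export-and-run flow as in the question, for example:
export TEST_NAME=test_widget_abc
docker-compose up --build app_test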
Environment variables in your docker-compose.yaml will be substituted with values from the environment. For example, if I write:
version: "3"
services:
app_test:
image: docker.io/alpine:latest
environment:
TEST_NAME: ${TEST_NAME}
command:
- env
Then if I export TEST_NAME in my local environment:
$ export TEST_NAME=foo
And bring up the stack:
$ docker-compose up
Creating network "docker_default" with the default driver
Creating docker_app_test_1 ... done
Attaching to docker_app_test_1
app_test_1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
app_test_1 | HOSTNAME=be3c12e33290
app_test_1 | TEST_NAME=foo
app_test_1 | HOME=/root
docker_app_test_1 exited with code 0
I see that TEST_NAME inside the container has received the value from my local environment.
It looks like you're trying to pass the environment variable into your image build process, rather than passing it in at runtime. Even if that works once, it's not going to be useful, because docker-compose won't rebuild your image every time you run it, so whatever value was in TEST_NAME at the time the image was built is what you would see inside the container.
It's better to pass the environment into the container at run time.
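If you only need a different value for a single run, you can also override it on the command line without rebuilding anything, for example (reusing the app_test service from the examples above):
docker-compose run -e TEST_NAME=test_widget_abc app_test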

How to pass host computer working directory as a build argument to my Dockerfile using docker-compose?

I would like to pass the current working directory (in this case the directory on the host computer where docker-compose.yml is located) as a build argument to my Dockerfile, so I can use that host computer directory later inside the container for some specific things.
My approach was to first define an ARG and ENV variable in the Dockerfile like this:
FROM python:3.8
WORKDIR /usr/src/runner
ARG HOST_WORKING_DIRECTORY
ENV HOST_WORKING_DIRECTORY=$HOST_WORKING_DIRECTORY
and then define my build argument (which I would like to be the current working directory) in the docker-compose.yml like this:
version: "3.7"
services:
runner:
build:
context: "./runner"
dockerfile: "Dockerfile"
args:
HOST_WORKING_DIRECTORY: $PWD
However, this does not work. When I do print(os.environ["HOST_WORKING_DIRECTORY"]) inside my running container, I get an empty string in response.
Any ideas how can I achieve this?
Update:
Interestingly I can achieve this when I build my image directly with Docker command line like this:
docker build --build-arg HOST_WORKING_DIRECTORY="${PWD}" -t myimage .
However, this does not help because I need to build my image using docker-compose.
You need to use string interpolation ("${ENV_VAR}") to get the actual value of an environment variable (see documentation).
version: "3.7"
services:
runner:
build:
context: "./runner"
dockerfile: "Dockerfile"
args:
HOST_WORKING_DIRECTORY: "${PWD}"
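A quick way to confirm the substitution before and after building (PWD is set automatically by most shells; the inline Python is just an illustrative check):
docker-compose config   # should show HOST_WORKING_DIRECTORY under build.args
docker-compose build runner
docker-compose run runner python -c "import os; print(os.environ['HOST_WORKING_DIRECTORY'])"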

How to use an environment variable from a docker-compose.yml in a Dockerfile?

My docker-compose.yml looks something like this:
version: '2'
services:
  myapp:
    build:
      context: .
    environment:
      - GITLAB_USER=myusername
I want to use that environment variable inside a Dockerfile but it does not work:
FROM node:7
ENV GITLAB_USER=${GITLAB_USER} \
RUN echo '${GITLAB_USER}'
echos just: ${GITLAB_USER}
How can I make this work so I can use variables from an .env file inside Docker (via docker-compose)?
There are two different time frames to understand. Building an image is separate from running the container. The first part uses the Dockerfile to create your image. And the second part takes the resulting image and all of the settings (e.g. environment variables) to create a container.
Inside the Dockerfile, the RUN line occurs at build time. If you want to pass a parameter into the build, you can use a build arg. Your Dockerfile would look like:
FROM node:7
ARG GITLAB_USER=default_user_name
# ENV is optional; without it the variable only exists at build time
# ENV GITLAB_USER=${GITLAB_USER}
RUN echo "${GITLAB_USER}"
And your docker-compose.yml file would look like:
version: '2'
services:
  myapp:
    build:
      context: .
      args:
        - GITLAB_USER=${GITLAB_USER}
The ${GITLAB_USER} value inside the yml file will be replaced with the value set inside your .env file.
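So the .env file next to your docker-compose.yml would simply contain something like (the value is only an example):
GITLAB_USER=myusername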

How do I pass an argument along with docker-compose up?

I have a docker-compose.yml file and in the terminal I am typing docker-compose up [something] but I would also like to pass an argument to docker-compose.yml. Is this possible? I've read about interpolation variables and tried to specify a variable in the .yml file using ${testval} and then docker-compose up [something] var="test" but I receive the following error:
WARNING: The testval variable is not set. Defaulting to a blank string.
ERROR: No such service: testval=test
Based on dnephin's answer, I created this sample repo showing how you can pass a variable to docker-compose up.
The usage is simple:
MAC / LINUX
TEST= docker-compose up creates and starts both the app and db containers. The API should then be running on your Docker daemon on port 3030.
TEST=DO docker-compose up creates and starts both the app and db containers. The API should execute the npm run test script from the package.json file.
WINDOWS (Powershell)
$env:TEST="";docker-compose up creates and starts both the app and db containers. The API should then be running on your Docker daemon on port 3030.
$env:TEST="do";docker-compose up creates and starts both the app and db containers. The API should execute the npm run test script from the package.json file.
You need to ensure 2 things:
The docker-compose.yml has the environment variable declared. For example,
services:
  app:
    image: python:3.7
    environment:
      - "SECRET_KEY=${SECRET_KEY}"
have the variable available in the environment when docker-compose up is called:
SECRET_KEY="not a secret" docker-compose up
Note that this is not equivalent to passing them during the build; storing secrets in Docker images is not advisable anyway.
You need to pass the variables as environment variables:
testvar=test docker-compose up ...
or
export testvar=test
docker-compose up
From the docs:
https://docs.docker.com/compose/reference/up/
https://docs.docker.com/compose/reference/build/
You can't pass arguments to docker-compose up, but you can pass arguments to docker-compose build:
docker-compose build --build-arg KEY1=VALUE1 --build-arg KEY2=VALUE2
I'm not sure what you want to do here, but if what you need is to pass an environment variable to a specific container, docker-compose.yml allows you to do that:
web:
  ...
  environment:
    - RAILS_ENV=production
    - VIRTUAL_HOST=www.example.com
    - VIRTUAL_PORT=3011
These variables will be specific to the container you define them for, and will not be shared between containers.
Also, "docker-compose up" doesn't take any arguments.
When dealing with build arguments, please declare them in the compose YML file as follows:
services:
  app: # name of the service
    build:
      context: docker/app/ # your docker build root
      dockerfile: Dockerfile # optional
      args:
        - COMPOSER_AUTH_TOKEN # name of the variable; the value will be taken from the host environment
Before running docker-compose up, export the variable as others have suggested. It works; I tried it. Use Compose file version 3 or above. Have fun.
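A rough usage sketch for the COMPOSER_AUTH_TOKEN example above (the token value is obviously a placeholder):
export COMPOSER_AUTH_TOKEN=placeholder_token
docker-compose build app
docker-compose up -d app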
Compose supports declaring default environment variables in an environment file named .env placed in the project directory.
Step 1:
Create a file named .env in the project directory
Step 2:
Declare variables in the form VAR=VAL
NOTE: There is no special handling of quotation marks, i.e. TESTVAL='test' means TESTVAL is 'test' (with the quotation marks) and not just test. So you'd declare it as TESTVAL=test.
Step 3:
Use the variables in the Compose file as:
environment:
  - myval=${TESTVAL}
Documentation: Declare default environment variables in file
BONUS: If you are building the image on the fly in your docker-compose.yaml, then you can even pass the build args using environment variables. Eg:
version: "3.8"
services:
myapp:
build:
context: ./myDir
dockerfile: ./myDir/myDockerfile
args:
- MYARG=${TESTVAL}
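For completeness, the matching .env file for the examples above would just contain:
TESTVAL=test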
I was trying to find a solution for a batch file. Based on Rafael Delboni's answer, you can add a command inside the batch file that calls PowerShell:
powershell $env:TEST="";docker-compose up ...
But because it's expensive to call PowerShell from inside a batch file, you can instead initialize the TEST variable inside the batch file and then call your docker-compose command.
Something like this:
set TEST=...
docker compose up ...

How can I use environment variables in docker-compose?

I would like to be able to use environment variables inside docker-compose.yml, with values passed in at the time of docker-compose up.
I am doing this today with a basic docker run command, which is wrapped in my own script. Is there a way to achieve this with Compose, without any such Bash wrappers? Here is an example of what I'm after:
proxy:
  hostname: $hostname
  volumes:
    - /mnt/data/logs/$hostname:/logs
    - /mnt/data/$hostname:/data
The Docker solution:
Docker-compose 1.5+ has enabled variable substitution: Releases · docker/compose
The latest Docker Compose allows you to access environment variables from your compose file. So you can source your environment variables, then run Compose like so:
set -a
source .my-env
docker-compose up -d
For example, assume we have the following .my-env file:
POSTGRES_VERSION=14
(or pass them via command-line arguments when calling docker-compose, like so: POSTGRES_VERSION=14 docker-compose up -d)
Then you can reference the variables in docker-compose.yml using a ${VARIABLE} syntax, like so:
db:
  image: "postgres:${POSTGRES_VERSION}"
And here is more information from the documentation, taken from Compose file specification
When you run docker-compose up with this configuration, Compose looks
for the POSTGRES_VERSION environment variable in the shell and
substitutes its value in. For this example, Compose resolves the image
to postgres:9.3 before running the configuration.
If an environment variable is not set, Compose substitutes with an
empty string. In the example above, if POSTGRES_VERSION is not set,
the value for the image option is postgres:.
Both $VARIABLE and ${VARIABLE} syntax are supported. Extended
shell-style features, such as ${VARIABLE-default} and
${VARIABLE/foo/bar}, are not supported.
If you need to put a literal dollar sign in a configuration value, use
a double dollar sign ($$).
The feature was added in this pull request.
Alternative Docker-based solution: Implicitly sourcing an environment variables file through the docker-compose command
If you want to avoid any Bash wrappers, or having to source an environment variables file explicitly (as demonstrated above), then you can pass a --env-file flag to the docker-compose command with the location of your environment variable file: Use an environment file
Then you can reference it within your docker-compose command without having to source it explicitly:
docker-compose --env-file .my-env up -d
If you don't pass a --env-file flag, the default environment variable file will be .env.
Note the following caveat with this approach:
Values present in the environment at runtime always override those defined inside the .env file. Similarly, values passed via command-line arguments take precedence as well.
So be careful about any environment variables that may override the ones defined in the --env-file!
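For instance, with this (assumed) setup the shell value wins over the file value:
# .my-env contains: POSTGRES_VERSION=14
export POSTGRES_VERSION=15
docker-compose --env-file .my-env config   # the image resolves to postgres:15, not postgres:14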
The Bash solution:
I notice that Docker's automated handling of environment variables can cause confusion. Instead of dealing with environment variables in Docker, let's go back to basics, like Bash! Here is a method using a Bash script and a .env file, with some extra flexibility to demonstrate the utility of environment variables:
POSTGRES_VERSION=14
# Note that the variable below is commented out and will not be used:
# POSTGRES_VERSION=15
# You can even define the compose file in an environment variable like so:
COMPOSE_CONFIG=my-compose-file.yml
# You can define other compose files, and just comment them out
# when not needed:
# COMPOSE_CONFIG=another-compose-file.yml
Then run this Bash script in the same directory, which should deploy everything properly:
#!/bin/bash
docker rm -f `docker ps -aq -f name=myproject_*`
set -a
source .env
cat ${COMPOSE_CONFIG} | envsubst | docker-compose -f - -p "myproject" up -d
Just reference your environment variables in your compose file with the usual Bash syntax (i.e. ${POSTGRES_VERSION} to insert the POSTGRES_VERSION from the .env file).
While this solution involves Bash, some may prefer it because it has better separation of concerns.
Note that COMPOSE_CONFIG is defined in my .env file and used in my Bash script, but you can easily just replace ${COMPOSE_CONFIG} with my-compose-file.yml in the Bash script.
Also note that I labeled this deployment by naming all of my containers with the "myproject" prefix. You can use any name you want, but it helps identify your containers so you can easily reference them later. Assuming that your containers are stateless, as they should be, this script will quickly remove and redeploy your containers according to your .env file parameters and your compose YAML file.
Since this answer seems pretty popular, I wrote a blog post that describes my Docker deployment workflow in more depth: Let's Deploy! (Part 1) This might be helpful when you add more complexity to a deployment configuration, like Nginx configurations, Let's Encrypt certificates, and linked containers.
It seems that docker-compose has native support now for default environment variables in a file.
All you need to do is declare your variables in a file named .env and they will be available in docker-compose.yml.
For example, for a .env file with contents:
MY_SECRET_KEY=SOME_SECRET
IMAGE_NAME=docker_image
You could access your variable inside docker-compose.yml or forward them into the container:
my-service:
  image: ${IMAGE_NAME}
  environment:
    MY_SECRET_KEY: ${MY_SECRET_KEY}
Create a template.yml, which is your docker-compose.yml with environment variables.
Suppose your environment variables are in a file 'env.sh'
Put the below piece of code in a sh file and run it.
source env.sh;
rm -rf docker-compose.yml;
envsubst < "template.yml" > "docker-compose.yml";
A new file docker-compose.yml will be generated with the correct values of environment variables.
Sample template.yml file:
oracledb:
  image: ${ORACLE_DB_IMAGE}
  privileged: true
  cpuset: "0"
  ports:
    - "${ORACLE_DB_PORT}:${ORACLE_DB_PORT}"
  command: /bin/sh -c "chmod 777 /tmp/start; /tmp/start"
  container_name: ${ORACLE_DB_CONTAINER_NAME}
Sample env.sh file:
#!/bin/bash
export ORACLE_DB_IMAGE=<image-name>
export ORACLE_DB_PORT=<port to be exposed>
export ORACLE_DB_CONTAINER_NAME=ORACLE_DB_SERVER
The best way is to specify environment variables outside the docker-compose.yml file. You can use the env_file setting and define your environment file on the same line. Then doing a docker-compose up again should recreate the containers with the new environment variables.
Here is how my docker-compose.yml looks:
services:
  web:
    env_file: variables.env
Note:
docker-compose expects each line in an env file to be in VAR=VAL format. Avoid using export inside the .env file. Also, the .env file should be placed in the folder where the docker-compose command is executed.
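So a minimal variables.env for the service above could look like this (the names and values are only examples):
DB_HOST=db
DB_PORT=5432
DEBUG=1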
The following is applicable for docker-compose 3.x
Set environment variables inside the container
Method 1: The straight method
web:
  environment:
    - DEBUG=1
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_USER=postgres
Method 2: The ".env" file
Create a .env file in the same location as the docker-compose.yml:
$ cat .env
TAG=v1.5
POSTGRES_PASSWORD=postgres
and your compose file will look like this:
$ cat docker-compose.yml
version: '3'
services:
  web:
    image: "webapp:${TAG}"
    environment:
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
source
When using environment variables for volumes you need to:
create a .env file in the same folder that contains the docker-compose.yaml file
declare the variable in the .env file:
HOSTNAME=your_hostname
change $hostname to ${HOSTNAME} in the docker-compose.yaml file
proxy:
  hostname: ${HOSTNAME}
  volumes:
    - /mnt/data/logs/${HOSTNAME}:/logs
    - /mnt/data/${HOSTNAME}:/data
Of course you can do that dynamically on each build like:
echo "HOSTNAME=your_hostname" > .env && sudo docker-compose up
Don't confuse the .env file and the env_file option!
They serve totally different purposes!
The .env file feeds those environment variables only to your docker compose file, which in turn can pass them to the containers as well.
But the env_file option only passes those variables to the containers and NOT to the docker compose file 😵‍💫
Example
OK, let's say we have this simple compose file:
services:
  foo:
    image: ubuntu
    hostname: suchHostname # <-------------- hard coded 'suchHostname'
    volumes:
      - /mnt/data/logs/muchLogs:/logs # <--- hard coded 'muchLogs'
      - /mnt/data/soBig:/data # <----------- hard coded 'soBig'
We don't want to hard code these anymore! So, we can put them in the current terminal's environment variables and check if docker-compose understands them:
$ export the_hostname="suchHostName"
$ export dir_logs="muchLogs"
$ export dir_data="soBig"
and change the docker-compose.yml file to:
services:
  foo:
    image: ubuntu
    hostname: $the_hostname # <-------------- use $the_hostname
    volumes:
      - /mnt/data/logs/$dir_logs:/logs # <--- use $dir_logs
      - /mnt/data/$dir_data:/data # <-------- use $dir_data
Now let's check whether it worked by executing $ docker-compose convert and inspecting the output:
name: tmp
services:
  foo:
    hostname: suchHostName # <------------- $the_hostname
    image: ubuntu
    networks:
      default: null
    volumes:
      - type: bind
        source: /mnt/data/logs/muchLogs # <-- $dir_logs
        target: /logs
        bind:
          create_host_path: true
      - type: bind
        source: /mnt/data/soBig # <---------- $dir_data
        target: /data
        bind:
          create_host_path: true
networks:
  default:
    name: tmp_default
OK it works! But let's use the .env file instead. Since docker-compose understands the .env file, let's just create one and set it up:
# .env file (in the same directory as 'docker-compose.yml')
the_hostname="suchHostName"
dir_logs="muchLogs"
dir_data="soBig"
OK, you can test it with a NEW terminal (so that the older environment variables we set with export don't interfere and make sure everything works in a clean terminal) 🖥 Just follow step 4 again and see that it works!
So far so good 😃 However, when you stumble upon the env_file option, it gets confusing 🤔 Let's say that you want to pass a password to the docker compose file (NOT the container).
🙄 In the wrong approach, you might put a password in a .secrets file:
# .secrets
somepassword="0P3N$3$#M!"
and then update the docker-compose file as follows:
services:
  foo:
    image: ubuntu
    hostname: $the_hostname
    volumes:
      - /mnt/data/logs/$dir_logs:/logs
      - /mnt/data/$dir_data:/data
    # 🔽 BAD:
    env_file:
      - .env
      - .secrets
    entrypoint: echo "Hush! This is a secret '$somepassword'"
Now checking it just like step 4 again would result in:
WARN[0000] The "somepassword" variable is not set. Defaulting to a blank string.
name: tmp # ^
services: # |
  foo: # |
    entrypoint: # |
      - echo # |
      - Hush! This is a secret '' # <---- 😵‍💫 Oh no!
    environment:
      dir_data: soBig
      dir_logs: muchLogs
      somepassword: 0P3N$$3$$#M! # <--- 🤔 Huh?!
      the_hostname: suchHostName
    hostname: suchHostName
    image: ubuntu
    networks:
      default: null
    volumes:
      - type: bind
        source: /mnt/data/logs/muchLogs
        target: /logs
        bind:
          create_host_path: true
      - type: bind
        source: /mnt/data/soBig
        target: /data
        bind:
          create_host_path: true
networks:
  default:
    name: tmp_default
So as you can see, the $somepassword variable is only passed to the container, and NOT to the docker compose file.
Wrapping up
You can pass environment variables to docker-compose files in two ways:
By exporting the variable to the terminal before running docker compose.
By putting the variables inside .env file.
The env_file option only passes those extra variables to the containers 📦 and not the compose file 🐳
Since 1.25.4, docker-compose supports the option --env-file that enables you to specify a file containing variables.
Yours should look like this:
hostname=my-host-name
And the command:
docker-compose --env-file /path/to/my-env-file config
To add an environment variable, you may define an env_file (let's call it var.env) as:
ENV_A=A
ENV_B=B
And add it to the docker compose manifest service. Moreover, you can define environment variables directly with environment.
For instance, in docker-compose.yaml:
version: '3.8'
services:
  myservice:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.myservice
    image: myself/myservice
    env_file:
      - ./var.env
    environment:
      - VAR_C=C
      - VAR_D=D
    volumes:
      - $HOME/myfolder:/myfolder
    ports:
      - "5000:5000"
Please check here for more/updated information: Manuals → Docker → Compose → Environment variables → Overview
Use:
env SOME_VAR="I am some var" OTHER_VAR="I am other var" docker stack deploy -c docker-compose.yml
Use version 3.6:
version: "3.6"
services:
one:
image: "nginx:alpine"
environment:
foo: "bar"
SOME_VAR:
baz: "${OTHER_VAR}"
labels:
some-label: "$SOME_VAR"
two:
image: "nginx:alpine"
environment:
hello: "world"
world: "${SOME_VAR}"
labels:
some-label: "$OTHER_VAR"
I got it from Feature request: Docker stack deploy pass environment variables via cli options #939.
You cannot ... yet. But here is an alternative; think of it as a docker-compose.yml generator:
https://gist.github.com/Vad1mo/9ab63f28239515d4dafd
Basically, it is a shell script that will replace your variables. You can also use a Grunt task to build your docker compose file at the end of your CI process.
I have a simple Bash script I created for this; it just means running it on your file before use:
https://github.com/antonosmond/subber
Basically just create your compose file using double curly braces to denote environment variables e.g:
app:
  build: "{{APP_PATH}}"
  ports:
    - "{{APP_PORT_MAP}}"
Anything in double curly braces will be replaced with the environment variable of the same name so if I had the following environment variables set:
APP_PATH=~/my_app/build
APP_PORT_MAP=5000:5000
on running subber docker-compose.yml the resulting file would look like:
app:
  build: "~/my_app/build"
  ports:
    - "5000:5000"
To focus solely on the issue of default and mandatory values for environment variables, and as an update to #modulito's answer:
Using default values and enforcing mandatory values within the docker-compose.yml file is now supported (from the docs):
Both $VARIABLE and ${VARIABLE} syntax are supported. Additionally when using the 2.1 file format, it is possible to provide inline default values using typical shell syntax:
${VARIABLE:-default} evaluates to default if VARIABLE is unset or empty in the environment.
${VARIABLE-default} evaluates to default only if VARIABLE is unset in the environment.
Similarly, the following syntax allows you to specify mandatory variables:
${VARIABLE:?err} exits with an error message containing err if VARIABLE is unset or empty in the environment.
${VARIABLE?err} exits with an error message containing err if VARIABLE is unset in the environment.
Other extended shell-style features, such as ${VARIABLE/foo/bar}, are not supported.
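A small compose sketch combining both forms (the variable names are arbitrary examples):
services:
  db:
    image: "postgres:${POSTGRES_VERSION:-14}"   # falls back to 14 if unset or empty
    environment:
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD:?database password must be set}"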
This was written for Docker v20, using the docker compose v2 commands.
I was having a similar roadblock and found that the --env-file parameter ONLY works for the docker compose config command. On top of that, using the docker compose env_file option still forced me to repeat values for the variables when I wanted to reuse them in places other than the Dockerfile, such as the environment section of docker-compose.yml. I just wanted one source of truth, my .env, with the ability to swap it per deployment stage. So here is how I got it to work: basically, use docker compose config to generate a base docker-compose.yml file that passes ARG values into the Dockerfiles.
.local.env - This would be your .env; I have mine split for different deployments.
DEVELOPMENT=1
PLATFORM=arm64
docker-compose.config.yml - This is my core docker compose file.
services:
  server:
    build:
      context: .
      dockerfile: docker/apache2/Dockerfile
      args:
        - PLATFORM=${PLATFORM}
        - DEVELOPMENT=${DEVELOPMENT}
    environment:
      - PLATFORM=${PLATFORM}
      - DEVELOPMENT=${DEVELOPMENT}
Now sadly I do need to pass in the variables twice, once for the Dockerfile, the other for environment. However, they are still coming from the single source .local.env so at least I do not need to repeat values.
I then use docker compose config to generate a semi-final docker-compose.yml. This lets me pass in my companion override docker-compose.local.yml for where the final deployment is happening.
docker compose --env-file=.local.env -f docker-compose.config.yml config > docker-compose.yml
This will now let my Dockerfile access the .env variables.
FROM php:5.6-apache
# Make sure to declare ARGs after FROM
ARG PLATFORM
ARG DEVELOPMENT
# Access args in strings with $PLATFORM, or wrapped as ${PLATFORM}
RUN echo "SetEnv PLATFORM $PLATFORM" > /etc/apache2/conf-enabled/environment.conf
# append (>>) so the second line does not overwrite the first
RUN echo "SetEnv DEVELOPMENT $DEVELOPMENT" >> /etc/apache2/conf-enabled/environment.conf
This then passes the .env variables from the docker-compose.yml into the Dockerfile, which passes them into my Apache HTTP server, which passes them to their final destination, the PHP code.
My next step is to pass in my docker compose overrides for my deployment stage.
docker-compose.local.yml - This is my docker-compose override.
services:
  server:
    volumes:
      - ./localhost+2.pem:/etc/ssl/certs/localhost+2.pem
      - ./localhost+2-key.pem:/etc/ssl/private/localhost+2-key.pem
Lastly, run the docker compose command.
docker compose -f docker-compose.yml -f docker-compose.local.yml up --build
Please note that if you change anything in your .env file, you will need to re-run docker compose config and add --build to docker compose up. Since builds are cached, this has little impact.
So for my final command I normally run:
docker compose --env-file=.local.env -f docker-compose.config.yml config > docker-compose.yml; docker compose --env-file=.local.env -f docker-compose.yml -f docker-compose.local.yml up --build
As far as I know, this is a work-in-progress. They want to do it, but it's not released yet. See 1377 (the "new" 495 that was mentioned by #Andy).
I ended up implementing the "generate .yml as part of CI" approach as proposed by #Thomas.
Add an environment variable to the .env file
Such as
VERSION=1.0.0
Then save it to deploy.sh
INPUTFILE=docker-compose.yml
RESULT_NAME=docker-compose.product.yml
NAME=test

prepare() {
  local inFile=$(pwd)/$INPUTFILE
  local outFile=$(pwd)/$RESULT_NAME
  cp $inFile $outFile
  # substitute every ${VAR} from .env (VAR=VAL lines) into the copied compose file
  while read -r line; do
    OLD_IFS="$IFS"
    IFS="="
    pair=($line)
    IFS="$OLD_IFS"
    sed -i -e "s/\${${pair[0]}}/${pair[1]}/g" $outFile
  done <.env
}

deploy() {
  # outFile is local to prepare(), so rebuild the path here
  docker stack deploy -c $(pwd)/$RESULT_NAME $NAME
}

prepare
deploy
Use a .env file to define dynamic values in docker-compose.yml, be it a port or any other value.
Sample docker-compose:
testcore.web:
  image: xxxxxxxxxxxxxxx.dkr.ecr.ap-northeast-2.amazonaws.com/testcore:latest
  volumes:
    - c:/logs:c:/logs
  ports:
    - ${TEST_CORE_PORT}:80
  environment:
    - CONSUL_URL=http://${CONSUL_IP}:8500
    - HOST=${HOST_ADDRESS}:${TEST_CORE_PORT}
Inside .env file you can define the value of these variables:
CONSUL_IP=172.31.28.151
HOST_ADDRESS=172.31.16.221
TEST_CORE_PORT=10002
I ended up using "sed" in my deploy.sh script to accomplish this, though my requirements were slightly different since docker-compose is being called by Terraform: Passing Variables to Docker Compose via a Terraform script for an Azure App Service
eval "sed -i 's/MY_VERSION/$VERSION/' ../docker-compose.yaml"
cat ../docker-compose.yaml
terraform init
terraform apply -auto-approve \
-var "app_version=$VERSION" \
-var "client_id=$ARM_CLIENT_ID" \
-var "client_secret=$ARM_CLIENT_SECRET" \
-var "tenant_id=$ARM_TENANT_ID" \
-var "subscription_id=$ARM_SUBSCRIPTION_ID"
eval "sed -i 's/$VERSION/MY_VERSION/' ../docker-compose.yaml"
It's simple like this:
Using command line as mentioned in the documentation:
docker-compose --env-file ./config/.env.dev config
Or using a .env file, I think this is the easiest way:
web:
  env_file:
    - web-variables.env
Documentation with a sample
