Docker compose refuses to apply environment variables - docker

UPDATE
It appears the problem is specifically related to the RUN command in the Dockerfile. If I remove it, the build works fine and the environment variables are clearly being picked up, since the password gets applied and I can connect using it. Not sure why the login fails in the RUN command; I've seen many examples using similar code.
I'm working on a very basic docker compose file to set up a dev environment for an app, and I started with the database server, which is MS SQL. Here's what the docker-compose.yml file looks like:
version: '3.8'
services:
  mssql:
    build:
      context: .
      dockerfile: docker/mssql/Dockerfile
    ports:
      - '1434:1433'
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "YourStrong!Passw0rd"
    volumes:
      - mssql-data:/var/opt/mssql
As you can see from my dockerfile path, that's in a sub-path and looks like this:
FROM mcr.microsoft.com/mssql/server:2019-latest
COPY ./docker/mssql/TESTDB.bak /var/opt/mssql/backup/TESTDB.bak
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" && /opt/mssql-tools/bin/sqlcmd -S localhost,1433 -U SA -P "YourStrong!Passw0rd" -Q 'RESTORE DATABASE TESTDB FROM DISK = "/var/opt/mssql/backup/TESTDB.bak" WITH MOVE "TESTDB_Data" to "/var/opt/mssql/data/TESTDB.mdf", MOVE "TESTDB_Log" to "/var/opt/mssql/data/TESTDB_log.ldf"'
(Yes, I realize that the password in the RUN command is redundant, I had tried to use a variable there earlier and since it wasn't working I hard coded it.)
When I run docker-compose up -d, I always get this error: Login failed for user 'SA'
I wasted way too much time thinking there was actually something wrong with the password until I realized that if I add the environment variables directly in the Dockerfile, it works. So in my Dockerfile, above the RUN command, I can just do this:
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=YourStrong!Passw0rd
So I concluded that my environment variables simply aren't being read. I tried with quotes, without quotes, using env_file instead, nothing seems to work. I also tried the following format, no luck:
environment:
  - ACCEPT_EULA=Y
  - SA_PASSWORD=YourStrong!Passw0rd
I also tried using MSSQL_SA_PASSWORD instead of SA_PASSWORD, as well as having both in there. I assumed that was unlikely to be the problem though given SA_PASSWORD works fine. Lastly, I tried using a 2017 image in case it was image specific, that didn't work either.
I'm assuming it must be something silly I'm missing. I saw a lot of talk of .env in the root being different, but if I understood correctly people go wrong with that when they try to use environment values in their docker-compose.yml file, which is not what I'm doing here. So I'm about ready to lose my mind on this as it seems like such a simple, basic thing.

I think you're confusing the ENV statement in the Dockerfile with the environment variables set when running an image. The key is in the details of the docs, which note that compose's environment settings are the same as saying docker run -e, not docker build.
What causes more confusion: when you use ENV, you are setting defaults for when the image runs later:
https://docs.docker.com/engine/reference/builder/#env
If you haven't yet, I very much recommend getting familiar with building and running your image with docker run and docker build before moving on to compose, it's much less confusing that way.
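For example, to see the distinction directly, you could build and run the image by hand (a rough sketch; the tag name is made up, the paths are from the question):

# build-time: only --build-arg values are visible to RUN steps
docker build -t mssql-test -f docker/mssql/Dockerfile .
# run-time: -e sets environment variables, which is what compose's 'environment:' maps to
docker run -e ACCEPT_EULA=Y -e 'SA_PASSWORD=YourStrong!Passw0rd' -p 1434:1433 mssql-test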

The issue with your build here stems from a confusion between build-time and run-time environment variables: with the environment or env_file properties you specify the environment variables to be set for the service container.
But the RUN command in your Dockerfile is executed at build time of the image! To pass variables when building a new image you should use build args instead, as you already mentioned in your comment:
services:
  mssql:
    build:
      context: .
      dockerfile: docker/mssql/Dockerfile
      args:
        SA_PASSWORD: "YourStrong!Passw0rd"
    # ...
With this you can use the SA_PASSWORD as a build ARG:
FROM mcr.microsoft.com/mssql/server:2019-latest
COPY ./docker/mssql/TESTDB.bak /var/opt/mssql/backup/TESTDB.bak
ARG SA_PASSWORD
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" && /opt/mssql-tools/bin/sqlcmd -S localhost,1433 -U SA -P "$SA_PASSWORD" -Q 'RESTORE DATABASE TESTDB FROM DISK = "/var/opt/mssql/backup/TESTDB.bak" WITH MOVE "TESTDB_Data" to "/var/opt/mssql/data/TESTDB.mdf", MOVE "TESTDB_Log" to "/var/opt/mssql/data/TESTDB_log.ldf"'
If you want to move the actual password to a .env file you can use variable substitution in the compose.yml:
services:
  mssql:
    build:
      # ...
      args:
        SA_PASSWORD: "$SA_PASSWORD"
    # ...
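For instance, a minimal .env next to the docker-compose.yml might look like this (a sketch):

# .env - picked up automatically by docker-compose for variable substitution
SA_PASSWORD=YourStrong!Passw0rd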

In your docker-compose.yml, have you tried:
- ACCEPT_EULA=Y
- SA_PASSWORD=YourStrong!Passw0rd

Both responses above are fine, just a few more things:
SA_PASSWORD is deprecated; use MSSQL_SA_PASSWORD instead.
It is always nice to define .env files with the variables, for instance:
sapassword.env
MSSQL_SA_PASSWORD=YourStrong!Passw0rd
sqlserver.env
ACCEPT_EULA=Y
MSSQL_DATA_DIR=/var/opt/sqlserver/data
MSSQL_LOG_DIR=/var/opt/sqlserver/log
MSSQL_BACKUP_DIR=/var/opt/sqlserver/backup
And reference the env files in docker-compose.yml with the env_file option (not environment, which takes individual variables):
env_file:
  - sqlserver.env
  - sapassword.env
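Once the service is up, a quick way to confirm the container actually received the variables (a sketch; the service name mssql is taken from the question):

docker-compose exec mssql env | grep -E 'ACCEPT_EULA|MSSQL'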

Related

Proper way to build a CICD pipeline with Docker images and docker-compose

I have a general question about DockerHub and GitHub. I am trying to build a pipeline on Jenkins using AWS instances and my end goal is to deploy the docker-compose.yml that my repo on GitHub has:
version: "3"
services:
db:
image: postgres
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
volumes:
- ./tmp/db:/var/lib/postgresql/data
web:
build: .
command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- .:/myapp
ports:
- "3000:3000"
depends_on:
- db
environment:
POSTGRES_USER: user
POSTGRES_PASSWORD: password
POSTGRES_HOST: db
I've read that in CI/CD pipelines people build their images and push them to DockerHub but what is the point of it?
You would just be pushing an individual image. Even if you pull the image later on a different instance, in order to run the app with its different services you would still need to run the containers using docker-compose, and you wouldn't have that file unless you pulled it from the GitHub repo again or created it in the pipeline, right?
Wouldn't it be better and more straightforward to just fetch the repo from GitHub and run docker-compose commands? Is there a "cleaner" or "proper" way of doing it? Thanks in advance!
The only thing you should need to copy to the remote system is the docker-compose.yml file. And even that is technically optional, since Compose just wraps basic Docker commands; you could manually docker network create and then docker run the two containers without copying anything at all.
For this setup it's important to delete the volumes: that overwrite the image's content with a copy of the application code. You also shouldn't need an override command:. For the deployment you'd need to replace build: with image:.
version: "3.8"
services:
db: *from-the-question
web:
image: registry.example.com/me/web:${WEB_TAG:-latest}
ports:
- "3000:3000"
depends_on:
- db
environment: *web-environment-from-the-question
# no build:, command:, volumes:
In a Compose setup you could put the build: configuration in a parallel docker-compose.override.yml file that wouldn't get copied to the deployment system.
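A minimal override file for that could look like this (a sketch; it stays on the development machine and is never copied to the server):

# docker-compose.override.yml (development only)
version: "3.8"
services:
  web:
    build: .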
So what? There are a couple of good reasons to structure things this way.
A forward-looking answer involves clustered container managers like Kubernetes, Nomad, or Amazon's proprietary ECS. In these a container runs somewhere in a cluster of indistinguishable machines, and the only way you have to copy the application code in is by pulling it from a registry. In these setups you don't copy any files anywhere but instead issue instructions to the cluster manager that some number of copies of the image should run somewhere.
Another good reason is to support rolling back the application. In the Compose fragment above, I refer to an environment variable ${WEB_TAG}. Say you push out one build a day and give each a date-stamped tag, e.g. registry.example.com/me/web:20220220. But something has gone wrong with today's build! While you figure it out, you can connect to the deployment machine and run
WEB_TAG=20220219 docker-compose up -d
and instantly roll back, again without trying to check out anything or copy the application.
In general, using Docker, you want to make the image as self-contained as it can be, though still acknowledging that there are things like the database credentials that can't be "baked in". So make sure to COPY the code in, don't override the code with volumes:, do set a sensible CMD. You should be able to start with a clean system with only Docker installed and nothing else, and docker run the image with only Docker-related setup. You can imagine writing a shell script to run the docker commands, and the docker-compose.yml file is just a declarative version of that.
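A Dockerfile in that spirit might look like this for the Rails app above (a sketch; the base image, versions, and file layout are assumptions):

# base image is a placeholder; pick the Ruby version your app needs
FROM ruby:3.1
WORKDIR /myapp
# install gems first so they cache independently of code changes
COPY Gemfile Gemfile.lock ./
RUN bundle install
# bake the application code into the image; no volumes: needed at run time
COPY . .
EXPOSE 3000
# a sensible default command
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0", "-p", "3000"]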
Finally remember that you don't have to use Docker. You can use a general-purpose system-management tool like Ansible, Salt Stack, or Chef to install Ruby on to the target machine and manually copy the code across. This is a well-proven deployment approach. I find Docker simpler, but there is the assumption that the code and all of its dependencies are actually in the image and don't need to be separately copied.

Question on using docker secrets and environments with an existing image

I've been struggling with this concept. To start, I'm new to Docker and teaching myself (slowly). I am using a Docker swarm instance and trying to leverage docker secrets for a simple username and password to an existing rocker/rstudio image. I've set up the reverse proxy and can successfully use https to access RStudio via my browser. Now when I pass the variables at path /run/secrets/user and /run/secrets/pass to the environment variables, it doesn't work. It essentially thinks the path is the actual username and password. I need the environment variables to actually pull the values (in this case user=test, pass=test123 as set up using the docker secret command). I've looked around and am a bit at a loss on how to accomplish this. I know some have mentioned leveraging a custom entrypoint shell script, and I'm a bit confused on how to do this. Here is what I've tried:
Rebuilt a brand new image using the existing R image with a Dockerfile that adds entrypoint.sh to the image -> it can't find the entrypoint.sh file.
Added entrypoint: entrypoint.sh as a part of my docker compose. Same issue.
I'm trying to use docker stack to build the containers. The stack gets built but the containers keep restarting to the point they are unusable.
Here are my files
Dockerfile
FROM rocker/rstudio
COPY entry.sh /
RUN chmod +x /entry.sh
ENTRYPOINT ["entry.sh"]
Here is my docker-compose.yaml
version: '3.3'
secrets:
  user:
    external: true
  pass:
    external: true
services:
  rserver:
    container_name: rstudio
    image: rocker/rstudio:latest # <-- this is the output of the build using rocker/rstudio and Dockerfile
    secrets:
      - user
      - pass
    environment:
      - USER=/run/secrets/user
      - PASSWORD=/run/secrets/pass
    volumes:
      - ./rstudio:/home/user/rstudio
    ports:
      - 8787:8787
    restart: always
    entrypoint: /entry.sh
Finally here is the entry.sh file that I found on another thread
#get your envs files and export envars
export $(egrep -v '^#' /run/secrets/* | xargs)
#if you need some specific file, where password is the secret name
#export $(egrep -v '^#' /run/secrets/password| xargs)
#call the dockerfile's entrypoint
source /docker-entrypoint.sh
In the end it would be great to use my secret user and pass and pass those to the environment variable so that I can authenticate into an R studio instance. If I just put a username and password in plain text under environment it works fine.
Any help is appreciated. Thanks in advance

Execute a host system command and pass the result as build arg in docker-compose

I'm building a docker-compose.yml file that builds my custom Dockerfile, and I need to execute a bash command on my host system first, then pass the result as a build argument to the Dockerfile.
Here is an example in practice:
Dockerfile:
#...
ARG SSH_KEY_BASE64
RUN echo "Build SSH_KEY_BASE64: $SSH_KEY_BASE64"
#...
docker-compose.yml:
#...
version: '3.4'
services:
  container:
    container_name: my-container
    build:
      context: .
      dockerfile: Dockerfile
      args:
        SSH_KEY_BASE64: ${SSH_KEY_BASE64_COMMAND}
    env_file: .env
#...
.env:
SSH_KEY_BASE64_COMMAND=$(cat ~/.ssh/id_rsa_mykey | base64)
At the moment the value of $SSH_KEY_BASE64 in the Dockerfile is unresolved and it prints just $(cat ~/.ssh/id_rsa_mykey | base64), but I want it to evaluate that command and print the base64 of the content of my key.
I would like to avoid to manually run $(cat ~/.ssh/id_rsa_mykey | base64) before running docker-compose up --build that's why I'm asking for an automatic way to do that.
What options do I have?
Thanks
Compose doesn't support this syntax, and can't directly execute commands on the host system. The only substitution syntaxes it supports are the $VARIABLE, ${VARIABLE}, ${VARIABLE:-default}, and ${VARIABLE:?error} environment variable expansions, and those only in the main docker-compose.yml file. The values in an env_file: file aren't interpreted or expanded at all.
In most cases you don't actually want to build an image that depends on the specific host system it's built on; an image is intended to be reused in multiple environments. In the case of an ssh key it's particularly dangerous to pass it as an ARG, since it can be pretty easily extracted from the final image (docker-compose run container cat /root/.id_rsa). You might need to do whatever operation needs the ssh key (for example, an authenticated git clone) on the host system outside of Docker.
The only workaround is to set a host environment variable and reference that instead, but it's probably better to get rid of the ARG entirely.
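If you do go the host-environment-variable route, it would look something like this (a sketch; the key path is from the question, and SSH_KEY_BASE64_COMMAND matches the compose file above):

# evaluate the command in the shell first, then let Compose substitute the value
export SSH_KEY_BASE64_COMMAND=$(base64 < ~/.ssh/id_rsa_mykey)
docker-compose up --build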

How to pass environment variables to docker-compose's applications

I want to pass environment variables that are readable by applications spun up by docker-compose up.
What is the proper way of using docker-compose up with varying configuration settings?
I don't want to use .env & environment: config as the environment variables are changing frequently & it is insecure to save tokens in a file.
docker-compose run -e does work a bit, but has drawbacks.
It does not map the ports that are defined in the docker-compose.yml services.
Also, multiple services are defined in docker-compose.yml, and I don't want to use depends_on just because docker-compose up doesn't work.
Let's say I define service in docker-compose.yml
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
In my serviceA.js, I simply use the environment variable:
console.log("This is ", process.env.KEY, "running in service A");
When I run docker-compose run -e KEY=DockerComposeRun service-a
I do get the environment variable KEY read by serviceA.js
This is DockerComposeRun running in service A
However, I could only get one single service running.
I could have used environment: in docker-compose.yml:
environment:
  - KEY=DockerComposeUp
But in my use case, each docker compose would have different environment variable values, meaning I would need to edit the file each time before I do docker-compose.
Also, it's not just a single service that would use the same environment variable; .env would do an even better job there, but it is not desired.
There doesn't seem to be a way to do the same for docker-compose up
I have tried KEY=DockerComposeUp docker-compose up,
but what I get is undefined.
export doesn't work for me either; it seems these approaches are all about using environment variables in docker-compose.yml, not in the applications inside the containers.
To safely pass sensitive configuration data to your containers you can use Docker secrets. Everything passed through Secrets is encrypted.
You can create and manage secrets using the commands below:
docker secret create
docker secret inspect
docker secret ls
docker secret rm
And use them in your docker-compose file, either referring to existing secrets (external) or use a file:
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
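As a sketch, the external secret referenced above could be created like this (requires swarm mode; the value here is made up):

# read the secret value from stdin and store it in the swarm
echo "test123" | docker secret create my_second_secret -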
You can use environment like this:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    - KEY=DockerComposeRun
Refer to: https://docs.docker.com/compose/environment-variables/

How can I use environment variables in docker-compose?

I would like to be able to use environment variables inside docker-compose.yml, with values passed in at the time of docker-compose up. Here is the example.
I am doing this today with a basic docker run command, which is wrapped around my own script. Is there a way to achieve it with compose, without any such bash wrappers?
proxy:
  hostname: $hostname
  volumes:
    - /mnt/data/logs/$hostname:/logs
    - /mnt/data/$hostname:/data
The Docker solution:
Docker-compose 1.5+ has enabled variable substitution: Releases · docker/compose
The latest Docker Compose allows you to access environment variables from your compose file. So you can source your environment variables, then run Compose like so:
set -a
source .my-env
docker-compose up -d
For example, assume we have the following .my-env file:
POSTGRES_VERSION=14
(or pass them via command-line arguments when calling docker-compose, like so: POSTGRES_VERSION=14 docker-compose up -d)
Then you can reference the variables in docker-compose.yml using a ${VARIABLE} syntax, like so:
db:
image: "postgres:${POSTGRES_VERSION}"
And here is more information from the documentation, taken from Compose file specification
When you run docker-compose up with this configuration, Compose looks
for the POSTGRES_VERSION environment variable in the shell and
substitutes its value in. For this example, Compose resolves the image
to postgres:14 before running the configuration.
If an environment variable is not set, Compose substitutes with an
empty string. In the example above, if POSTGRES_VERSION is not set,
the value for the image option is postgres:.
Both $VARIABLE and ${VARIABLE} syntax are supported. Extended
shell-style features, such as ${VARIABLE-default} and
${VARIABLE/foo/bar}, are not supported.
If you need to put a literal dollar sign in a configuration value, use
a double dollar sign ($$).
The feature was added in this pull request.
Alternative Docker-based solution: Implicitly sourcing an environment variables file through the docker-compose command
If you want to avoid any Bash wrappers, or having to source an environment variables file explicitly (as demonstrated above), then you can pass a --env-file flag to the docker-compose command with the location of your environment variable file: Use an environment file
Then you can reference it within your docker-compose command without having to source it explicitly:
docker-compose --env-file .my-env up -d
If you don't pass a --env-file flag, the default environment variable file will be .env.
Note the following caveat with this approach:
Values present in the environment at runtime always override those defined inside the .env file. Similarly, values passed via command-line arguments take precedence as well.
So be careful about any environment variables that may override the ones defined in the --env-file!
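For instance, reusing the .my-env file from above (a sketch):

# .my-env says POSTGRES_VERSION=14, but the shell value takes precedence,
# so this runs postgres:15
POSTGRES_VERSION=15 docker-compose --env-file .my-env up -d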
The Bash solution:
I notice that Docker's automated handling of environment variables can cause confusion. Instead of dealing with environment variables in Docker, let's go back to basics, like Bash! Here is a method using a Bash script and a .env file, with some extra flexibility to demonstrate the utility of environment variables:
POSTGRES_VERSION=14
# Note that the variable below is commented out and will not be used:
# POSTGRES_VERSION=15
# You can even define the compose file in an environment variable like so:
COMPOSE_CONFIG=my-compose-file.yml
# You can define other compose files, and just comment them out
# when not needed:
# COMPOSE_CONFIG=another-compose-file.yml
Then run this Bash script in the same directory, which should deploy everything properly:
#!/bin/bash
docker rm -f `docker ps -aq -f name=myproject_*`
set -a
source .env
cat ${COMPOSE_CONFIG} | envsubst | docker-compose -f - -p "myproject" up -d
Just reference your environment variables in your compose file with the usual Bash syntax (i.e. ${POSTGRES_VERSION} to insert the POSTGRES_VERSION from the .env file).
While this solution involves Bash, some may prefer it because it has better separation of concerns.
Note that COMPOSE_CONFIG is defined in my .env file and used in my Bash script, but you can easily just replace ${COMPOSE_CONFIG} with my-compose-file.yml in the Bash script.
Also note that I labeled this deployment by naming all of my containers with the "myproject" prefix. You can use any name you want, but it helps identify your containers so you can easily reference them later. Assuming that your containers are stateless, as they should be, this script will quickly remove and redeploy your containers according to your .env file parameters and your compose YAML file.
Since this answer seems pretty popular, I wrote a blog post that describes my Docker deployment workflow in more depth: Let's Deploy! (Part 1) This might be helpful when you add more complexity to a deployment configuration, like Nginx configurations, Let's Encrypt certificates, and linked containers.
It seems that docker-compose has native support now for default environment variables in a file.
All you need to do is declare your variables in a file named .env and they will be available in docker-compose.yml.
For example, for a .env file with contents:
MY_SECRET_KEY=SOME_SECRET
IMAGE_NAME=docker_image
You could access your variables inside docker-compose.yml or forward them into the container:
my-service:
  image: ${IMAGE_NAME}
  environment:
    MY_SECRET_KEY: ${MY_SECRET_KEY}
Create a template.yml, which is your docker-compose.yml with environment variables.
Suppose your environment variables are in a file 'env.sh'
Put the below piece of code in a sh file and run it.
source env.sh;
rm -rf docker-compose.yml;
envsubst < "template.yml" > "docker-compose.yml";
A new file docker-compose.yml will be generated with the correct values of environment variables.
Sample template.yml file:
oracledb:
  image: ${ORACLE_DB_IMAGE}
  privileged: true
  cpuset: "0"
  ports:
    - "${ORACLE_DB_PORT}:${ORACLE_DB_PORT}"
  command: /bin/sh -c "chmod 777 /tmp/start; /tmp/start"
  container_name: ${ORACLE_DB_CONTAINER_NAME}
Sample env.sh file:
#!/bin/bash
export ORACLE_DB_IMAGE=<image-name>
export ORACLE_DB_PORT=<port to be exposed>
export ORACLE_DB_CONTAINER_NAME=ORACLE_DB_SERVER
The best way is to specify environment variables outside the docker-compose.yml file. You can use the env_file setting and define your environment file on the same line. Then doing a docker-compose up again should recreate the containers with the new environment variables.
Here is how my docker-compose.yml looks like:
services:
  web:
    env_file: variables.env
Note:
docker-compose expects each line in an env file to be in VAR=VAL format. Avoid using export inside the .env file. Also, the .env file should be placed in the folder where the docker-compose command is executed.
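A variables.env following those rules might look like this (a sketch; the variable names are made up):

# variables.env - one VAR=VAL per line, no 'export'
DEBUG=1
API_TOKEN=changeme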
The following is applicable for docker-compose 3.x
Set environment variables inside the container
method - 1 Straight method
web:
  environment:
    - DEBUG=1
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_USER=postgres
method - 2 The ".env" file
Create a .env file in the same location as the docker-compose.yml
$ cat .env
TAG=v1.5
POSTGRES_PASSWORD=postgres
and your compose file will be like
$ cat docker-compose.yml
version: '3'
services:
  web:
    image: "webapp:${TAG}"
    environment:
      POSTGRES_PASSWORD: "${POSTGRES_PASSWORD}"
source
When using environment variables for volumes you need to:
Create a .env file in the same folder that contains the docker-compose.yaml file.
Declare the variable in the .env file:
HOSTNAME=your_hostname
Change $hostname to ${HOSTNAME} in the docker-compose.yaml file:
proxy:
  hostname: ${HOSTNAME}
  volumes:
    - /mnt/data/logs/${HOSTNAME}:/logs
    - /mnt/data/${HOSTNAME}:/data
Of course you can do that dynamically on each build like:
echo "HOSTNAME=your_hostname" > .env && sudo docker-compose up
Don't confuse the .env file and the env_file option!
They serve totally different purposes!
The .env file feeds those environment variables only to your docker compose file, which in turn, can be passed to the containers as well.
But the env_file option only passes those variables to the containers and NOT the docker compose file 😵‍💫
Example
OK, let's say we have this simple compose file:
services:
  foo:
    image: ubuntu
    hostname: suchHostname                # <-------------- hard coded 'suchHostname'
    volumes:
      - /mnt/data/logs/muchLogs:/logs     # <--- hard coded 'muchLogs'
      - /mnt/data/soBig:/data             # <----------- hard coded 'soBig'
We don't want to hard code these anymore! So, we can put them in the current terminal's environment variables and check if docker-compose understands them:
$ export the_hostname="suchHostName"
$ export dir_logs="muchLogs"
$ export dir_data="soBig"
and change the docker-compose.yml file to:
services:
  foo:
    image: ubuntu
    hostname: $the_hostname                # <-------------- use $the_hostname
    volumes:
      - /mnt/data/logs/$dir_logs:/logs     # <--- use $dir_logs
      - /mnt/data/$dir_data:/data          # <-------- use $dir_data
Now let's check out if it worked with executing $ docker-compose convert and inspecting the output:
name: tmp
services:
  foo:
    hostname: suchHostName                 # <------------- $the_hostname
    image: ubuntu
    networks:
      default: null
    volumes:
      - type: bind
        source: /mnt/data/logs/muchLogs    # <-- $dir_logs
        target: /logs
        bind:
          create_host_path: true
      - type: bind
        source: /mnt/data/soBig            # <---------- $dir_data
        target: /data
        bind:
          create_host_path: true
networks:
  default:
    name: tmp_default
OK it works! But let's use the .env file instead. Since docker-compose understands the .env file, let's just create one and set it up:
# .env file (in the same directory as 'docker-compose.yml')
the_hostname="suchHostName"
dir_logs="muchLogs"
dir_data="soBig"
OK, you can test it with a NEW terminal (so that the older environment variables we set with export don't interfere, making sure everything works in a clean terminal) 🖥 Just run docker-compose convert again and see that it works!
So far so good 😃 However, when you stumble upon the env_file option, it gets confusing 🤔 Let's say that you want to pass a password to the docker compose file (NOT the container).
🙄 In the wrong approach, you might put a password in a .secrets file:
# .secrets
somepassword="0P3N$3$#M!"
and then update the docker-compose file as follows:
services:
  foo:
    image: ubuntu
    hostname: $the_hostname
    volumes:
      - /mnt/data/logs/$dir_logs:/logs
      - /mnt/data/$dir_data:/data
    # 🔽 BAD:
    env_file:
      - .env
      - .secrets
    entrypoint: echo "Hush! This is a secret '$somepassword'"
Now running docker-compose convert again would result in:
WARN[0000] The "somepassword" variable is not set. Defaulting to a blank string.
name: tmp
services:
  foo:
    entrypoint:
      - echo
      - Hush! This is a secret ''        # <---- 😵‍💫 Oh no!
    environment:
      dir_data: soBig
      dir_logs: muchLogs
      somepassword: 0P3N$$3$$#M!         # <--- 🤔 Huh?!
      the_hostname: suchHostName
    hostname: suchHostName
    image: ubuntu
    networks:
      default: null
    volumes:
      - type: bind
        source: /mnt/data/logs/muchLogs
        target: /logs
        bind:
          create_host_path: true
      - type: bind
        source: /mnt/data/soBig
        target: /data
        bind:
          create_host_path: true
networks:
  default:
    name: tmp_default
So as you can see, the $somepassword variable is only passed to the container, and NOT to the docker compose file.
Wrapping up
You can pass environment variables to docker-compose files in two ways:
By exporting the variable to the terminal before running docker compose.
By putting the variables inside .env file.
The env_file option only passes those extra variables to the containers 📦 and not the compose file 🐳
Since 1.25.4, docker-compose supports the option --env-file that enables you to specify a file containing variables.
Yours should look like this:
hostname=my-host-name
And the command:
docker-compose --env-file /path/to/my-env-file config
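The flag is accepted ahead of other docker-compose commands as well, so you can bring the stack up the same way (a sketch):

docker-compose --env-file /path/to/my-env-file up -d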
To add an environment variable, you may define an env_file (let's call it var.env) as:
ENV_A=A
ENV_B=B
And add it to the docker compose manifest service. Moreover, you can define environment variables directly with environment.
For instance, in docker-compose.yaml:
version: '3.8'
services:
  myservice:
    build:
      context: .
      dockerfile: ./docker/Dockerfile.myservice
    image: myself/myservice
    env_file:
      - ./var.env
    environment:
      - VAR_C=C
      - VAR_D=D
    volumes:
      - $HOME/myfolder:/myfolder
    ports:
      - "5000:5000"
Please check here for more/updated information: Manuals → Docker → Compose → Environment variables → Overview
Use:
env SOME_VAR="I am some var" OTHER_VAR="I am other var" docker stack deploy -c docker-compose.yml
Use version 3.6:
version: "3.6"
services:
one:
image: "nginx:alpine"
environment:
foo: "bar"
SOME_VAR:
baz: "${OTHER_VAR}"
labels:
some-label: "$SOME_VAR"
two:
image: "nginx:alpine"
environment:
hello: "world"
world: "${SOME_VAR}"
labels:
some-label: "$OTHER_VAR"
I got it from Feature request: Docker stack deploy pass environment variables via cli options #939.
You cannot ... yet. But here is an alternative; think of it as a docker-compose.yml generator:
https://gist.github.com/Vad1mo/9ab63f28239515d4dafd
Basically a shell script that will replace your variables. Also, you can use a Grunt task to build your docker compose file at the end of your CI process.
I have a simple bash script I created for this; it just means running it on your file before use:
https://github.com/antonosmond/subber
Basically just create your compose file using double curly braces to denote environment variables e.g:
app:
  build: "{{APP_PATH}}"
  ports:
    - "{{APP_PORT_MAP}}"
Anything in double curly braces will be replaced with the environment variable of the same name so if I had the following environment variables set:
APP_PATH=~/my_app/build
APP_PORT_MAP=5000:5000
on running subber docker-compose.yml the resulting file would look like:
app:
  build: "~/my_app/build"
  ports:
    - "5000:5000"
To focus solely on the issue of default and mandatory values for environment variables, and as an update to #modulito's answer:
Using default values and enforcing mandatory values within the docker-compose.yml file is now supported (from the docs):
Both $VARIABLE and ${VARIABLE} syntax are supported. Additionally when using the 2.1 file format, it is possible to provide inline default values using typical shell syntax:
${VARIABLE:-default} evaluates to default if VARIABLE is unset or empty in the environment.
${VARIABLE-default} evaluates to default only if VARIABLE is unset in the environment.
Similarly, the following syntax allows you to specify mandatory variables:
${VARIABLE:?err} exits with an error message containing err if VARIABLE is unset or empty in the environment.
${VARIABLE?err} exits with an error message containing err if VARIABLE is unset in the environment.
Other extended shell-style features, such as ${VARIABLE/foo/bar}, are not supported.
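As a sketch, both forms in one compose file (the variable names are made up):

services:
  db:
    # falls back to 14 when POSTGRES_VERSION is unset or empty
    image: "postgres:${POSTGRES_VERSION:-14}"
    environment:
      # aborts with this message when DB_PASSWORD is unset or empty
      POSTGRES_PASSWORD: "${DB_PASSWORD:?DB_PASSWORD must be set}"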
This was written for Docker v20, using the docker compose v2 commands.
I was having a similar roadblock and found that the --env-file parameter ONLY works for the docker compose config command. On top of that, using the compose env_file option still forced me to repeat values for the variables when I wanted to reuse them in places other than the Dockerfile, such as environment in docker-compose.yml. I just wanted one source of truth, my .env, with the ability to swap it per deployment stage. So here is how I got it to work: basically, use docker compose config to generate a base docker-compose.yml file that will pass ARG values into Dockerfiles.
.local.env - This would be your .env; I have mine split for different deployments.
DEVELOPMENT=1
PLATFORM=arm64
docker-compose.config.yml - This is my core docker compose file.
services:
  server:
    build:
      context: .
      dockerfile: docker/apache2/Dockerfile
      args:
        - PLATFORM=${PLATFORM}
        - DEVELOPMENT=${DEVELOPMENT}
    environment:
      - PLATFORM=${PLATFORM}
      - DEVELOPMENT=${DEVELOPMENT}
Now sadly I do need to pass in the variables twice, once for the Dockerfile and once for environment. However, they still come from the single source .local.env, so at least I do not need to repeat values.
I then use docker compose config to generate a semi-final docker-compose.yml. This lets me pass in my companion override docker-compose.local.yml for where the final deployment is happening.
docker compose --env-file=.local.env -f docker-compose.config.yml config > docker-compose.yml
This will now let my Dockerfile access the .env variables.
FROM php:5.6-apache
# Make sure to declare after FROM
ARG PLATFORM
ARG DEVELOPMENT
# Access args in strings with $PLATFORM, and can wrap i.e. ${PLATFORM}
RUN echo "SetEnv PLATFORM $PLATFORM" > /etc/apache2/conf-enabled/environment.conf
# Append (>>) so the second line doesn't overwrite the first
RUN echo "SetEnv DEVELOPMENT $DEVELOPMENT" >> /etc/apache2/conf-enabled/environment.conf
This then passes the .env variables from the docker-compose.yml into Dockerfile which then passes it into my Apache HTTP server, which passes it to my final destination, the PHP code.
My next step to then to pass in my docker compose overrides from my deployment stage.
docker-compose.local.yml - This is my docker-compose override.
services:
  server:
    volumes:
      - ./localhost+2.pem:/etc/ssl/certs/localhost+2.pem
      - ./localhost+2-key.pem:/etc/ssl/private/localhost+2-key.pem
Lastly, run the docker compose command.
docker compose -f docker-compose.yml -f docker-compose.local.yml up --build
Please note that if you change anything in your .env file, you will need to re-run docker compose config and add --build to docker compose up. Since builds are cached, it has little impact.
So for my final command I normally run:
docker compose --env-file=.local.env -f docker-compose.config.yml config > docker-compose.yml; docker compose --env-file=.local.env -f docker-compose.yml -f docker-compose.local.yml up --build
As far as I know, this is a work-in-progress. They want to do it, but it's not released yet. See 1377 (the "new" 495 that was mentioned by #Andy).
I ended up implementing the "generate .yml as part of CI" approach as proposed by #Thomas.
Add an environment variable to the .env file
Such as
VERSION=1.0.0
Then save the following script as deploy.sh:
INPUTFILE=docker-compose.yml
RESULT_NAME=docker-compose.product.yml
NAME=test
OUTFILE=$(pwd)/$RESULT_NAME

prepare() {
    local inFile=$(pwd)/$INPUTFILE
    cp $inFile $OUTFILE
    # substitute each ${VAR} in the copy with its value from .env
    while read -r line; do
        OLD_IFS="$IFS"
        IFS="="
        pair=($line)
        IFS="$OLD_IFS"
        sed -i -e "s/\${${pair[0]}}/${pair[1]}/g" $OUTFILE
    done <.env
}

deploy() {
    # use the global OUTFILE; a local variable in prepare() would not be visible here
    docker stack deploy -c $OUTFILE $NAME
}

prepare
deploy
Use a .env file to define dynamic values in docker-compose.yml. Be it a port or any other value.
Sample docker-compose:
testcore.web:
  image: xxxxxxxxxxxxxxx.dkr.ecr.ap-northeast-2.amazonaws.com/testcore:latest
  volumes:
    - c:/logs:c:/logs
  ports:
    - ${TEST_CORE_PORT}:80
  environment:
    - CONSUL_URL=http://${CONSUL_IP}:8500
    - HOST=${HOST_ADDRESS}:${TEST_CORE_PORT}
Inside .env file you can define the value of these variables:
CONSUL_IP=172.31.28.151
HOST_ADDRESS=172.31.16.221
TEST_CORE_PORT=10002
I ended up using "sed" in my deploy.sh script to accomplish this, though my requirements were slightly different since docker-compose is being called by Terrafom: Passing Variables to Docker Compose via a Terraform script for an Azure App Service
eval "sed -i 's/MY_VERSION/$VERSION/' ../docker-compose.yaml"
cat ../docker-compose.yaml
terraform init
terraform apply -auto-approve \
-var "app_version=$VERSION" \
-var "client_id=$ARM_CLIENT_ID" \
-var "client_secret=$ARM_CLIENT_SECRET" \
-var "tenant_id=$ARM_TENANT_ID" \
-var "subscription_id=$ARM_SUBSCRIPTION_ID"
eval "sed -i 's/$VERSION/MY_VERSION/' ../docker-compose.yaml"
It's as simple as this:
Using command line as mentioned in the documentation:
docker-compose --env-file ./config/.env.dev config
Or using a .env file, I think this is the easiest way:
web:
  env_file:
    - web-variables.env
Documentation with a sample
