Within my docker-compose.yml file I have a service with lines like the following
environment:
  - ENV1=hello
  - ENV2=world
command: -f ./tmp/config.toml
volumes:
  - ./config/config_x.toml:/tmp/config.toml
I want to make it so that if ENV1 is defined (i.e. not an empty string), then mount the volume
- ./config/config_x.toml:/tmp/config.toml
otherwise, mount the volume
- ./config/config_y.toml:/tmp/config.toml
What would be the best way of doing this?
You can add an env variable that holds the name of the directory (or file) you want to mount. For example:
environment:
  - ENV1=myvolume
  - ENV2=world
command: -f ./tmp/config.toml
volumes:
  - ./${ENV1}/config_x.toml:/tmp/config.toml
Note that ${ENV1} is interpolated from the shell environment (or an .env file) when docker-compose parses the file, not from the environment: section above, so ENV1 must also be set there.
Docker Compose doesn't have an explicit conditional (if) construct.
To achieve what you want, you have to define a single property key whose value differs according to your target environment.
Besides, variables in a docker-compose template can be interpolated in broadly two ways:
set the variables in the calling shell
use an .env file (though that doesn't work with docker stack deploy).
Using .env alone is not enough for your case, because you would need one file per environment with the expected value for the same property key.
A possible approach is to set the variable in the shell.
Either manually: SRC_VOLUME=config_x.toml docker-compose up for the first case,
and SRC_VOLUME=config_y.toml docker-compose up for the second.
Or by defining two files, each containing the key=value pair to load into your shell environment.
For example, the first file would contain:
SRC_VOLUME=config_x.toml
and the second:
SRC_VOLUME=config_y.toml
In any case you just need to use a placeholder in the docker-compose template:
volumes:
  - ./config/${SRC_VOLUME}:/tmp/config.toml
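If you specifically want a fallback when the variable is empty or unset, Compose interpolation also supports a default value via the ${VARIABLE:-default} syntax. A minimal sketch, reusing the SRC_VOLUME name from above:
volumes:
  - ./config/${SRC_VOLUME:-config_y.toml}:/tmp/config.toml
With this, SRC_VOLUME=config_x.toml docker-compose up mounts config_x.toml, while a plain docker-compose up falls back to config_y.toml.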
Related
I'm running a containerized Milvus Standalone database and I'm trying to find the location of items added to the database. In the docker-compose.yml file, the volume location is defined as follows:
volumes:
  - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd:/etcd
Checking my docker server, I do not find an environment variable named DOCKER_VOLUME_DIRECTORY.
What does this definition mean? Also, what does the :-. part mean?
It is using Shell Parameter expansion:
${parameter:-word}
If parameter is unset or null, then word is used as a default value.
In this case, as DOCKER_VOLUME_DIRECTORY is not set, the default value of . (the current directory) is used.
$ echo ${DOCKER_VOLUME_DIRECTORY:-.}
.
So the volume will effectively be:
volumes:
  - ./volumes/etcd:/etcd
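If you do set DOCKER_VOLUME_DIRECTORY, its value is used instead. For example, with a hypothetical /data/milvus directory:
DOCKER_VOLUME_DIRECTORY=/data/milvus docker-compose up -d
# the etcd data then lives under /data/milvus/volumes/etcd on the host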
I am new to Docker. I recently came across a docker-compose file from our org's ACR in which the ports are defined as variables. I do NOT have the Dockerfile of the image used in the compose file.
version: "3"
services:
webapp:
image: p32d1830151.azurecr.io/web/weblogic:0.1
container_name: banker
hostname: banker
ports:
- "${URL_PORT}:8080"
- "${TCP_PORT}:12345"
The advantage of this docker-compose.yml file is that:
It can be executed with docker-compose up -d, in which case the default values are taken.
It can be executed with docker-compose --env-file d.env up -d, which overrides the defaults with the values from the env file.
I tried to achieve the same with my own docker image (a different one), and it fails with this error:
docker-compose up -d
WARNING: The URL_PORT variable is not set. Defaulting to a blank string.
WARNING: The TCP_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.webimage.ports contains an invalid type, it should be a number, or an object
services.webimage.ports contains an invalid type, it should be a number, or an object
but it works if I define the port as
ports:
  - "URL_PORT:8080"
  - "TCP_PORT:12345"
or
ports:
  - "URL_PORT:${URL_PORT}"
  - "TCP_PORT:${TCP_PORT}"
Has - "${URL_PORT}:8080"
- "${TCP_PORT}:12345" for any...? if so please let me know how to make this work ?
Should something be added to the docker file ?
Do we have some documentation on this ?
How do I attain this flexibility ?
1. How does this work?
Notice that ${VAR} or a plain $VAR substitutes environment variables inside the docker-compose.yml.
This means that when you've set an environment variable like URL_PORT, docker-compose will replace $URL_PORT with its value.
Setting the environment variable can be done by running export URL_PORT=1234 before you run docker-compose up -d, or by placing a .env file containing URL_PORT=1234 in the current directory.
2. Should something be added to the Dockerfile?
No, you don't have to add anything to the Dockerfile.
3. Do we have some documentation on this?
See: Environment variables in Compose
4. How do I attain this flexibility?
By setting environment variables.
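A minimal sketch of how this could look end to end (the port values are illustrative; d.env is the file name from the question, and the ${VAR:-default} defaults are what let a plain docker-compose up -d run without the blank-string warnings):
# d.env
URL_PORT=9090
TCP_PORT=15000
# docker-compose.yml
ports:
  - "${URL_PORT:-8080}:8080"
  - "${TCP_PORT:-12345}:12345"
# run with the defaults, or with the env file:
docker-compose up -d
docker-compose --env-file d.env up -d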
I have the following compose file:
services:
  myproject:
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - ASPNETCORE_URLS=https://+:443;http://+:80
      - ASPNETCORE_HTTPS_PORT=44308
      - PROJECT_NAME=MyProject
    volumes:
      - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
      - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
      - ${APPDATA}/Turma/${PROJECT_NAME}/Logs:/var/logs/${PROJECT_NAME}
On the line:
- ${APPDATA}/Turma/${PROJECT_NAME}/Logs:/var/logs/${PROJECT_NAME}
It recognises ${APPDATA}, but for ${PROJECT_NAME} it uses the literal string rather than the environment variable's value.
Is there a way to make this work so the actual project name is used in path?
As far as I am aware, you cannot reference env variables defined in the environment: section of a compose file elsewhere in the same compose file and have them interpolated. Your $APPDATA works because it is set in the host's environment, not in the compose file.
I tested both using the env variable and using a .env file, with compose file versions 2.3 and 3, and neither worked.
I recommend wrapping your docker-compose call in a run script where you set the needed variables in the host shell, so they are interpolated properly. If you're deploying with a standard tool such as Ansible or Jenkins, those can set the variables for you. The script can look like the following:
#!/bin/bash
export PROJECT_NAME=foo
docker-compose up -d
unset PROJECT_NAME
Although this may not work for creating volumes, if you only need the variable at the container's runtime (for example, to derive another environment variable), that logic can go into an entrypoint script instead.
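A minimal sketch of that entrypoint approach, assuming an image whose entrypoint you control (the script name and log path are hypothetical):
#!/bin/sh
# entrypoint.sh (hypothetical): derive a runtime value from PROJECT_NAME
export LOG_DIR="/var/logs/${PROJECT_NAME}"
mkdir -p "$LOG_DIR"
exec "$@"  # hand off to the container's main command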
I have a docker-compose file that allows me to pass environment variables in a file (an .env file). As I have multiple ENV variables, is there any option in a Dockerfile, like env_file in docker-compose, for passing multiple environment variables during docker build?
This is the docker-compose.yml
services:
  web:
    image: "node"
    links:
      - "db"
    env_file: "env.app"
AFAIK, there is no built-in way to inject environment variables from a file during the build step of a Dockerfile. However, in most cases people end up using an entrypoint script and injecting the variables during docker run or docker-compose up.
If it really is a necessity, you could write a shell wrapper that rewrites the values in the Dockerfile dynamically from a key=value text file, or do something like the snippet below, but the env file name then needs to be hard-coded in the Dockerfile.
COPY my-env-vars /
RUN export $(cat my-env-vars | xargs)
It's an open issue - https://github.com/moby/moby/issues/28617
PS - You need to be extra careful while using this approach because the secrets are baked into the image itself.
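As an alternative not covered in the answer above, build arguments can approximate a per-build env file. A hedged sketch in which the wrapper script and the my-env-vars file are hypothetical, and each key must also be declared with ARG in the Dockerfile:
#!/bin/sh
# build.sh (hypothetical): turn each KEY=VAL line of my-env-vars into a --build-arg flag
args=""
while IFS= read -r line; do
  args="$args --build-arg $line"
done < my-env-vars
docker build $args -t myimage .
# the Dockerfile must still declare each key, e.g. ARG MY_KEY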
I have a docker-compose.yml file, and in the terminal I am typing docker-compose up [something], but I would also like to pass an argument to docker-compose.yml. Is this possible? I've read about interpolation variables and tried to specify a variable in the .yml file using ${testval} and then running docker-compose up [something] var="test", but I receive the following error:
WARNING: The testval variable is not set. Defaulting to a blank string.
ERROR: No such service: testval=test
Based on dnephin's answer, I created this sample repo showing how you can pass a variable to docker-compose up.
The usage is simple:
MAC / LINUX
TEST= docker-compose up creates and starts both the app and db containers. The API should then be running on your docker daemon on port 3030.
TEST=DO docker-compose up creates and starts both the app and db containers. The API should execute the npm run test script from the package.json file.
WINDOWS (Powershell)
$env:TEST="";docker-compose up creates and starts both the app and db containers. The API should then be running on your docker daemon on port 3030.
$env:TEST="do";docker-compose up creates and starts both the app and db containers. The API should execute the npm run test script from the package.json file.
You need to ensure 2 things:
The docker-compose.yml has the environment variable declared. For example,
services:
  app:
    image: python3.7
    environment:
      - "SECRET_KEY=${SECRET_KEY}"
Have the variable available in the environment when docker-compose up is called:
SECRET_KEY="not a secret" docker-compose up
Note that this is not equivalent to passing them during build; it is not advisable to store secrets in docker images.
You need to pass the variables as environment variables:
testvar=test docker-compose up ...
or
export testvar=test
docker-compose up
From the docs:
https://docs.docker.com/compose/reference/up/
https://docs.docker.com/compose/reference/build/
You can't pass arguments to docker-compose up, but you can pass arguments to docker-compose build:
docker-compose build --build-arg KEY1=VALUE1 --build-arg KEY2=VALUE2
I'm not sure what you want to do here, but if what you need is to pass an environment variable to a specific container, docker-compose.yml allows you to do that:
web:
  ...
  environment:
    - RAILS_ENV=production
    - VIRTUAL_HOST=www.example.com
    - VIRTUAL_PORT=3011
These variables are specific to the container you define them for, and will not be shared between containers.
Also, docker-compose up doesn't accept arbitrary key=value arguments like that.
When dealing with build arguments, declare them in the compose YAML file as follows (see the usage sketch after this snippet):
services:
  app:                        # name of the service
    build:
      context: docker/app/    # where your docker build root is
      dockerfile: Dockerfile  # optional
      args:
        - COMPOSER_AUTH_TOKEN # name of the variable; the value will be taken from the host environment
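A hedged usage sketch for the snippet above, assuming the Dockerfile under docker/app/ declares a matching ARG COMPOSER_AUTH_TOKEN (the token value is illustrative):
export COMPOSER_AUTH_TOKEN=xyz123
docker-compose build app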
Well, before running docker-compose up, export the variable as others suggested. It will work; I tried it. Use compose file version 3 or above. Have fun.
Compose supports declaring default environment variables in an environment file named .env placed in the project directory.
Step 1:
Create a file named .env in the project directory
Step 2:
Declare variables in the form VAR=VAL
NOTE: There is no special handling of quotation marks, i.e. TESTVAL='test' means TESTVAL is 'test' (with the quotation marks) and not just test. So you'd declare it as TESTVAL=test.
Step 3:
Use the variables in the Compose file as:
environment:
  - myval=${TESTVAL}
Documentation: Declare default environment variables in file
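Putting the three steps together, a minimal sketch (the service name, image and value are purely illustrative):
# .env (next to docker-compose.yml)
TESTVAL=test
# docker-compose.yml
services:
  myapp:
    image: alpine
    environment:
      - myval=${TESTVAL}
# then simply:
docker-compose up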
BONUS: If you are building the image on the fly in your docker-compose.yaml, you can even pass build args using environment variables. E.g.:
version: "3.8"
services:
myapp:
build:
context: ./myDir
dockerfile: ./myDir/myDockerfile
args:
- MYARG=${TESTVAL}
I was trying to find a solution for a batch file. Based on Rafael Delboni's answer, you can add a command inside the batch file that calls PowerShell:
powershell $env:TEST="";docker-compose up ...
However, because it's expensive to call PowerShell from a batch file, you can instead initialize the TEST variable inside the batch file and then call your docker-compose command.
Something like this:
set TEST=...
docker-compose up ...
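For example, a complete run.bat could look like this (the file name and the value do are just illustrative):
@echo off
REM set the variable for this shell session, then start the stack
set TEST=do
docker-compose up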