Docker Compose volume definition using an environment variable

I'm running a containerized Milvus Standalone database and I'm trying to find where items added to the database are stored. In the docker-compose.yml file, the volume location is defined as follows:
volumes:
  - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd:/etcd
Checking my docker server, I do not find an environment variable named DOCKER_VOLUME_DIRECTORY.
What does this definition mean? Also, what does the
:-.
part mean?

It is using Shell Parameter expansion:
${parameter:-word}
If parameter is unset or null, then word is used as a default value.
In this case, as DOCKER_VOLUME_DIRECTORY is not set, the default value of . (the current directory) is used.
$ echo ${DOCKER_VOLUME_DIRECTORY:-.}
.
So the volume will effectively be:
volumes:
  - ./volumes/etcd:/etcd
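If you want the data to live somewhere other than the project directory, you can set the variable before starting the stack; a minimal sketch, where /data/milvus is just an example path:
$ DOCKER_VOLUME_DIRECTORY=/data/milvus docker-compose up -d
The etcd bind mount then resolves to /data/milvus/volumes/etcd.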

Related

How to parameterize variables in a docker-compose file so it can run with and without an env file

I am new to Docker. I recently came across a docker-compose file from our org's ACR; the ports are defined as variables in the compose file. I do not have the Dockerfile of the image used in the docker-compose file.
version: "3"
services:
  webapp:
    image: p32d1830151.azurecr.io/web/weblogic:0.1
    container_name: banker
    hostname: banker
    ports:
      - "${URL_PORT}:8080"
      - "${TCP_PORT}:12345"
The advantage of this docker-compose.yml file is that:
It can be run with docker-compose up -d, in which case the default values are taken.
It can be run with docker-compose --env-file d.env up -d, which overrides the defaults with the values from the env file.
I tried to achieve the same thing with my own docker-compose file (for a different image), and it fails with this error:
docker-compose up -d
WARNING: The URL_PORT variable is not set. Defaulting to a blank string.
WARNING: The TCP_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
services.webimage.ports contains an invalid type, it should be a number, or an object
services.webimage.ports contains an invalid type, it should be a number, or an object
but it works if I define the ports as
ports:
  - "URL_PORT:8080"
  - "TCP_PORT:12345"
or
ports:
  - "URL_PORT:${URL_PORT}"
  - "TCP_PORT:${TCP_PORT}"
Has - "${URL_PORT}:8080"
- "${TCP_PORT}:12345" for any...? if so please let me know how to make this work ?
Should something be added to the docker file ?
Do we have some documentation on this ?
How do I attain this flexibility ?
1 How does this work?
Note that ${} or a single $ substitutes environment variables inside the docker-compose.yml.
This means that when you've set an environment variable like URL_PORT, docker-compose will replace $URL_PORT with its value.
Setting the environment variable can be done by running export URL_PORT=1234 before you do docker-compose up -d, or by placing a .env file containing URL_PORT=1234 in the current directory.
2 Should something be added to the Dockerfile?
No, you don't have to add anything to the Dockerfile.
3 Do we have some documentation on this?
See: Environment variables in Compose
4 How do I attain this flexibility?
By setting environment variables.
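If you also need docker-compose up to work when neither an env file nor exported variables are present, Compose supports the same default-value syntax shown at the top of this page; a minimal sketch, where 8080 and 12345 are just example defaults:
ports:
  - "${URL_PORT:-8080}:8080"
  - "${TCP_PORT:-12345}:12345"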

Mount volume if environment variable is defined

Within my docker-compose.yml file I have a service with lines like the following
environment:
  - ENV1=hello
  - ENV2=world
command: -f ./tmp/config.toml
volumes:
  - ./config/config_x.toml:/tmp/config.toml
I want to make it so that if ENV1 is defined (i.e. not an empty string), then mount the volume
- ./config/config_x.toml:/tmp/config.toml
otherwise, mount the volume
- ./config/config_y.toml:/tmp/config.toml
What would be the best way of doing this?
You can add an env variable which holds the name of the volume you want to mount. For example:
environment:
  - ENV1=myvolume
  - ENV2=world
command: -f ./tmp/config.toml
volumes:
  - ./${myvolume}/config_x.toml:/tmp/config.toml
Docker Compose doesn't have an explicit conditional (if) construct.
To achieve what you want, you have to define a single property key whose value differs according to your target environment.
Besides, the interpolation of variables in a docker-compose template is done broadly in two ways:
set the variables in the calling shell
use an .env file (though it doesn't work with docker stack deploy).
Using .env alone is not enough for your case, because you need one file per environment with the expected value for the same property key.
A possible approach would be to set the variables in the shell.
Either manually: SRC_VOLUME=config_x.toml docker-compose up for the first case,
and SRC_VOLUME=config_y.toml docker-compose up for the second.
Or by defining two files that contain the key=value pair you have to load into your shell environment.
For example, the first would contain:
SRC_VOLUME=config_x.toml
and the second:
SRC_VOLUME=config_y.toml
In any case you just need to use a placeholder in the docker-compose template:
volumes:
  - ./config/${SRC_VOLUME}:/tmp/config.toml
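One way to load those per-environment files, sketched under the assumption that they are named x.env and y.env (the file names are arbitrary):
set -a            # export every variable defined while sourcing
. ./x.env         # contains SRC_VOLUME=config_x.toml
set +a
docker-compose up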

Variable substitution not working on Windows 10 with docker compose

I'm wondering if I've stumbled on a bug or whether there's something not properly documented about variable substitution on Windows in combination with Docker Machine and Compose (the installed Docker version is 1.11.1).
If I run the "docker-compose up" command for a yml file that looks like this:
volumes:
  - ${FOOBAR}/build/:/usr/share/nginx/html/
and this variable doesn't exist, Docker Compose will correctly complain about it:
The FOOBAR variable is not set. Defaulting to a blank string.
However, when I change it to an existing environment variable:
volumes:
  - ${PROJECT_DIR}/build/:/usr/share/nginx/html/
It will then not start the container properly and displays the following error (trying to access the nginx container gives a host-is-unreachable message):
ERROR: for nginx rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: not a directory"
If I run the echo command in the Docker Quickstart Terminal it will output the correct path that I've set in the environment variable. If I replace the ${PROJECT_DIR} with the environment variable value the container runs correctly.
I get the same type of error message if I try to use the environment variable for the official php image instead of the official nginx image. In both cases the docker compose file works if I substitute ${PROJECT_DIR} text with the content of the environment variable.
So is this a bug or am I missing something?
After some mucking about I've managed to get the containers to start correctly without error messages if I use the following (contains the full path to the local files):
volumes:
  - ${PROJECT_DIR}:/usr/share/nginx/html/
The nginx container is then up and running, though it can no longer find the files. If I replace the variable with the path it contains, it can find the files again.
The above behaviour isn't consistent. When I added a second environment variable for substitution, it gave the oci runtime error. It kept giving the error when I removed that second variable, and only started working again when I also removed the first variable. After that it suddenly accepted ${PROJECT_DIR}/build/, but still without finding the files.
Starting a bash session to the nginx container shows that the mount point for the volume contains no files.
I'm really at a loss about what Docker is doing and what it expects from me, especially as I have no idea what it expands the variables in the compose file to.
In the end the conclusion is that variable substitution is too quirky on Windows with Docker Machine to be useful. However, there is an alternative to variable substitution.
If you need a docker environment that does the following:
Can deploy on different computers that don't run the same OS
Doesn't care if the host uses Docker natively or via Virtual Box (this can require path changes)
Then your best bet is to use extending.
First you create the docker-compose.yml file that contains the images you'll need. For example, a PHP image with MySQL:
php:
  image: php:5.5-apache
  links:
    - php_db:mysql
    - maildev:maildev
  ports:
    - 8080:80
php_db:
  image: mariadb
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
You might notice that there aren't any volumes defined in this docker-compose file. That is something we're going to define in a file called docker-compose.override.yml:
php:
  volumes:
    - /workspaces/Eclipse/project/:/var/www/html/
When you have both files in one directory, docker-compose does something interesting: it combines them into one, adding/overwriting settings in docker-compose.yml with those present in docker-compose.override.yml.
Then, when you run docker-compose up, the result is a container configuration tailored to the machine you're working on.
You can get similar behaviour with custom file names if you change the docker-compose command a bit:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml up
The trick is that docker-compose can accept multiple compose files and will combine them into one; this happens from left to right.
Both methods allow you to create a basic compose file that configures the containers you need. You can then override/add the settings you need for the specific computer you're running Docker on.
The page Overview of docker-compose CLI has more details on how these commands work.
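To check what the combined result looks like before starting anything, you can print the merged and interpolated configuration with the config subcommand:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml config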

How do I pass an argument along with docker-compose up?

I have a docker-compose.yml file, and in the terminal I am typing docker-compose up [something], but I would also like to pass an argument to docker-compose.yml. Is this possible? I've read about variable interpolation and tried to specify a variable in the .yml file using ${testval} and then running docker-compose up [something] var="test", but I receive the following error:
WARNING: The testval variable is not set. Defaulting to a blank string.
ERROR: No such service: testval=test
Based on dnephin's answer, I created this sample repo showing how you can pass a variable to docker-compose up.
The usage is simple:
MAC / LINUX
TEST= docker-compose up to create and start both the app and db containers. The API should then be running on your Docker daemon on port 3030.
TEST=DO docker-compose up to create and start both the app and db containers. The API should execute the npm run test script from the package.json file.
WINDOWS (Powershell)
$env:TEST="";docker-compose up to create and start both the app and db containers. The API should then be running on your Docker daemon on port 3030.
$env:TEST="do";docker-compose up to create and start both the app and db containers. The API should execute the npm run test script from the package.json file.
You need to ensure 2 things:
The docker-compose.yml has the environment variable declared. For example,
services:
  app:
    image: python:3.7
    environment:
      - "SECRET_KEY=${SECRET_KEY}"
Have the variable available in the environment when docker-compose up is called:
SECRET_KEY="not a secret" docker-compose up
Note that this is not equivalent to passing them during build; it is not advisable to store secrets in Docker images.
You need to pass the variables as environment variables:
testvar=test docker-compose up ...
or
export testvar=test
docker-compose up
From the docs:
https://docs.docker.com/compose/reference/up/
https://docs.docker.com/compose/reference/build/
You can't pass arguments to docker-compose up, but you can pass arguments to docker-compose build:
docker-compose build --build-arg KEY1=VALUE1 --build-arg KEY2=VALUE2
I'm not sure what you want to do here, but if what you need is to pass an environment variable to a specific container, docker-compose.yml allows you to do that:
web:
  ...
  environment:
    - RAILS_ENV=production
    - VIRTUAL_HOST=www.example.com
    - VIRTUAL_PORT=3011
These variables will be specific to the container you declare them for, and will not be shared between containers.
Also, docker-compose up doesn't take that kind of argument.
When dealing with build arguments, declare them in the compose YAML file as follows:
services:
  app: # name of the service
    build:
      context: docker/app/ # your docker build root
      dockerfile: Dockerfile # optional
      args:
        - COMPOSER_AUTH_TOKEN # name of the variable; the value is taken from the host environment
Then, before running docker-compose up, export the variable as suggested in the other answers; it works. Use Compose file version 3 or above.
Compose supports declaring default environment variables in an environment file named .env placed in the project directory.
Step 1:
Create a file named .env in the project directory
Step 2:
Declare variables in the form VAR=VAL
NOTE: There is no special handling of quotation marks, i.e. TESTVAL='test' means TESTVAL is 'test' (with the quotation marks) and not just test. So you'd declare it as TESTVAL=test.
Step 3:
Use the variables in the Compose file as:
environment:
  - myval=${TESTVAL}
Documentation: Declare default environment variables in file
BONUS: If you are building the image on the fly in your docker-compose.yaml, you can even pass the build args using environment variables. E.g.:
version: "3.8"
services:
myapp:
build:
context: ./myDir
dockerfile: ./myDir/myDockerfile
args:
- MYARG=${TESTVAL}
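For the build arg to actually reach the image build, the Dockerfile (myDockerfile above) must declare it; a minimal sketch:
ARG MYARG
# optionally keep the value available in the running container too
ENV MYARG=${MYARG}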
I was trying to find a solution for a batch file. Based on Rafael Delboni's answer, you can add a command inside the batch file that calls PowerShell:
powershell $env:TEST="";docker-compose up ...
But because calling PowerShell from a batch file is expensive, you can instead initialize the TEST variable inside the batch file and then call your docker-compose command.
Something like this:
set TEST=...
docker compose up ...

Volume with variable host location in Docker Compose

I'm trying to share my mysql data directory with a docker container. The goal is to be able to configure the shared folder location with an environment variable on the host machine.
Using docker compose, the relevant portion of my docker-compose.yml file looks like this:
data:
  image: yappabe/data
  volumes:
    - ${MYSQL_DATA_DIR}:/var/lib/mysql
  tty: true
When running the container, I get this error:
Creating docker_data_1
ERROR: . includes invalid characters for a local volume name, only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
Running echo $MYSQL_DATA_DIR in the terminal returns the expected result.
From this issue comment, you would need to declare the environment variable in your docker-compose.yml file:
data:
  image: yappabe/data
  environment:
    - MYSQL_DATA_DIR
The OP jdp confirms that a volume path (as supported/illustrated here) can then use the environment variable, as in ${MYSQL_DATA_DIR}:/var/lib/mysql.
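To double-check that the variable actually reaches Compose, you can export it in the host shell and print the resolved file before starting anything; the path below is only an example:
export MYSQL_DATA_DIR=/srv/mysql-data   # example path, adjust to your host
docker-compose config                   # the volumes entry should show the expanded path
docker-compose up -d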
