I'm trying to share my mysql data directory with a docker container. The goal is to be able to configure the shared folder location with an environment variable on the host machine.
Using docker compose, the relevant portion of my docker-compose.yml file looks like this:
data:
  image: yappabe/data
  volumes:
    - ${MYSQL_DATA_DIR}:/var/lib/mysql
  tty: true
When running the container, I get this error:
Creating docker_data_1
ERROR: . includes invalid characters for a local volume name, only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
Running echo $MYSQL_DATA_DIR in the terminal returns the expected result.
According to this issue comment, you need to declare the environment variable in your docker-compose.yml file:
data:
  image: yappabe/data
  environment:
    - MYSQL_DATA_DIR
The OP jdp confirms that a volume path (as supported/illustrated here) can then use the environment variable, as in ${MYSQL_DATA_DIR}:/var/lib/mysql.
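Putting the answer together, a minimal sketch of the fixed service (following the issue comment, and assuming MYSQL_DATA_DIR is exported in the shell that runs docker-compose up) would look like:

data:
  image: yappabe/data
  environment:
    - MYSQL_DATA_DIR
  volumes:
    - ${MYSQL_DATA_DIR}:/var/lib/mysql
  tty: true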
Related
I am writing a docker-compose.yaml file for my project. I have checked the volumes documentation here.
I also understand the concept of a volume in Docker: I can mount a volume, e.g. -v my-data/:/var/lib/db, where my-data/ is a directory on my host machine and /var/lib/db is the path inside the database container.
My confusion is with the link I put above, which has the following sample:
version: "3.9"
services:
db:
image: db
volumes:
- data-volume:/var/lib/db
backup:
image: backup-service
volumes:
- data-volume:/var/lib/backup/data
volumes:
data-volume:
I wonder: does it mean that I have to create a directory named data-volume on my host machine? What if I have a directory on my machine with the path temp/my-data/ and I want to mount that path into the database container at /var/lib/db? Should I do something like below?
version: "3.9"
services:
db:
image: db
volumes:
- temp/my-data/:/var/lib/db
volumes:
temp/my-data/:
My main confusion is the volumes: section at the bottom. I am not sure whether the volume name should be the path of my directory or just literally a name I give, and if it is the latter, how would the given name be mapped to temp/my-data/ on my machine? The sample doesn't indicate that and is ambiguous on this point.
Could someone please clarify it for me?
P.S. I tried the docker-compose file above as a guess and ended up with the error:
ERROR: The Compose file './docker-compose.yaml' is invalid because:
volumes value 'temp/my-data/' does not match any of the regexes: '^[a-zA-Z0-9._-]+$'
Mapped volumes can either be files/directories on the host machine (sometimes called bind mounts in the documentation) or they can be docker volumes that can be managed using docker volume commands.
The volumes: section in a docker-compose file specifies Docker volumes, i.e. not files/directories. The first docker-compose file in your post uses such a volume.
If you want to map a file or directory (like in your last docker-compose file), you don't need to specify anything in the volumes: section.
Docker volumes (the ones specified in the volumes: section or created using docker volume create) are of course also stored somewhere on your host computer, but docker manages that and you shouldn't normally need to know where or what the format is.
This part of the documentation is pretty good at explaining it, I think: https://docs.docker.com/storage/volumes/
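To make the distinction concrete, here is a minimal sketch (the service and image names are made up) showing both kinds of mapping side by side; only the named volume needs a top-level volumes: entry:

services:
  db:
    image: db
    volumes:
      - ./my-data:/var/lib/db        # bind mount: a host path, no top-level entry needed
      - data-volume:/var/lib/backup  # named volume: managed by docker
volumes:
  data-volume: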
As @HansKilian mentions, you don't need both volumes and services.volumes. To use services.volumes, map the host directory to the container directory like this:
services:
  db:
    image: db
    volumes:
      - /host/path/lib/db:/container/path/lib/db
With that, the directory /host/path/lib/db on the host machine will be used by the container and available at /container/path/lib/db.
Now, if you're like me and get really confused by fake examples, let's say the real directory on your host machine is /var/lib/db and you just want to see it at /db when you run a shell in Docker (i.e., docker exec -it container-id /bin/bash).
docker-compose.yaml would look like this:
services:
  db:
    image: db
    volumes:
      - /var/lib/db:/db
Now when you run the shell, cd /db and ls, and you'll see the same results as if you'd run cd /var/lib/db and ls on the host.
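For the avoidance of doubt, the whole round trip looks something like this (the container name db_1 is hypothetical; use docker ps to find yours):

docker-compose up -d
docker exec -it db_1 /bin/bash
cd /db && ls    # shows the same contents as /var/lib/db on the host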
If you want to use the volumes section to indicate a global volume to use, you first have to create that volume using docker volume create. The documentation Hans linked includes steps to do this. The syntax of /host/path:/container/path is replaced by volume-name:/container/path. Then, once defined, you'd alter your docker-compose.yaml to be more like this:
services:
  db:
    image: db
    volumes:
      - your-global-volume-name:/db
volumes:
  your-global-volume-name:
    external: true
Note that I have not tested or used this configuration. I'm assuming it's correct based on the other method working and the few changes I can identify in the docs.
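One thing worth spelling out: with external: true the volume must already exist before docker-compose up runs. A sketch of the commands, using the volume name from the example above:

docker volume create your-global-volume-name
docker volume inspect your-global-volume-name   # shows where docker stores it on the host
docker-compose up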
I'm building a docker image from a project where I have a file with default credentials for the database. At the docker container run time, I want to pass the real credentials and replace the variables defined on that file. What is the best way to do it? I tried to use environment variables, but it's not working.
db_config.yml:
host: ${HOST}
user: ${USER}
pass: ${PASS}
port: ${PORT}
db: ${DB_NAME}
docker-compose.yml:
version: '2.3'
services:
  test_ctr:
    container_name: test
    image: container:latest
    network_mode: "host"
    environment:
      - HOST=${HOST}
      - USER=${USER}
      - PASS=${PASS}
      - PORT=${PORT}
      - DB_NAME=${DB_NAME}
db_config.yml is baked into the image and the language is Python. Basically, when I run the container, db_config.yml is read by a script, which uses the file's credentials. When I create the image, db_config.yml has the default credentials, but when I run the container, I want to replace that file.
To debug this, try running:
docker exec -it <name-of-the-container> <command>
In your case this translates to:
docker exec -it test sh
This should open a shell inside the container.
Then type:
printenv
This will print all environment variables and their values (that way you will see if the values you have passed are present).
There will be a problem if the container is crashing at startup (in this case it's not possible to use docker exec).
TIP:
Use a .env file located in the same directory as docker-compose.yml (or whatever your compose file is called) to pass variables.
.env:
KEY1=value1
KEY2=value2
In your case this might look something like:
HOST=1.2.3.4
USER=sa
PASS=42
PORT=4242
DB_NAME=mydb
When you run:
docker-compose up
docker-compose will look for this .env file and inject the values from it.
Good luck
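To actually rewrite db_config.yml with the real credentials at container start, which is what the question asks for, one common approach (not covered in the answer above) is to ship the file as a template and render it in an entrypoint script with envsubst from the gettext package. A sketch, assuming envsubst is installed in the image and the paths are hypothetical:

#!/bin/sh
# entrypoint.sh: fill in ${HOST}, ${USER}, etc. from the environment
# variables that docker-compose injected, then start the main process
envsubst < /app/db_config.yml.template > /app/db_config.yml
exec "$@"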
I want to pass environment variables that is readable by applications spin up by docker-compose up.
What is the proper way of using docker-compose up with varying configuration settings?
I don't want to use .env and the environment: config, as the environment variables change frequently and it is insecure to save tokens in a file.
docker-compose run -e does work to a degree, but it has drawbacks:
It does not map the ports defined for the services in docker-compose.yml.
Also, multiple services are defined in docker-compose.yml, and I don't want to have to use depends_on just because docker-compose up doesn't work.
Let's say I define a service in docker-compose.yml:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
In my serviceA.js, I simply use the environment variable:
console.log("This is ", process.env.KEY, "running in service A");
When I run docker-compose run -e KEY=DockerComposeRun service-a
I do get the environment variable KEY read by serviceA.js
This is DockerComposeRun running in service A
However, I could only get a single service running that way.
I could have used environment: in docker-compose.yml:
environment:
  - KEY=DockerComposeUp
But in my use case, each docker-compose run would have different environment variable values, meaning I would need to edit the file each time before running docker-compose.
Also, it's not just a single service that would use the same environment variable; .env would even handle that better, but it is not desired.
There doesn't seem to be a way to do the same for docker-compose up.
I have tried KEY=DockerComposeUp docker-compose up,
but what I get is undefined.
export doesn't work for me either; it seems these approaches are all about using environment variables in docker-compose.yml itself rather than in the applications inside the containers.
To safely pass sensitive configuration data to your containers you can use Docker secrets. Everything passed through secrets is encrypted.
You can create and manage secrets using the commands below:
docker secret create
docker secret inspect
docker secret ls
docker secret rm
And use them in your docker-compose file, either referring to existing secrets (external) or using a file:
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
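A secret declared at the top level only becomes available to services that reference it, and the docker secret commands above require swarm mode. A minimal sketch of a service using those secrets (the image name is made up); inside the container each secret appears as a file under /run/secrets/:

services:
  app:
    image: myapp:latest
    secrets:
      - my_first_secret
      - my_second_secret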
You can use environment like this:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    - KEY=DockerComposeRun
Refer to: https://docs.docker.com/compose/environment-variables/
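If the value needs to change on every run, you can also declare the variable with no value, in which case Compose takes it from the shell that invokes docker-compose up; a minimal sketch based on the service above:

service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    - KEY

KEY=DockerComposeUp docker-compose up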
I'm wondering if I've stumbled on a bug, or whether there's something not properly documented about variable substitution on Windows in combination with Docker Machine and Compose (the installed version of docker is 1.11.1).
If I run the "docker-compose up" command for a yml file that looks like this:
volumes:
  - ${FOOBAR}/build/:/usr/share/nginx/html/
If this variable doesn't exist, docker-compose will correctly complain about it:
The FOOBAR variable is not set. Defaulting to a blank string.
However, when I change it to an existing environment variable:
volumes:
  - ${PROJECT_DIR}/build/:/usr/share/nginx/html/
It will then fail to start the container properly and display the following error (trying to access the nginx container will give you a "host is unreachable" message):
ERROR: for nginx rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: not a directory"
If I run the echo command in the Docker Quickstart Terminal it will output the correct path that I've set in the environment variable. If I replace the ${PROJECT_DIR} with the environment variable value the container runs correctly.
I get the same type of error message if I try to use the environment variable for the official php image instead of the official nginx image. In both cases the docker compose file works if I substitute ${PROJECT_DIR} text with the content of the environment variable.
So is this a bug or am I missing something?
After some mucking about I've managed to get the containers to start correctly without error messages if I use the following (contains the full path to the local files):
volumes:
  - ${PROJECT_DIR}:/usr/share/nginx/html/
The nginx container is then up and running, though it can no longer find the files. If I replace the variable with the path it contains, it can find the files again.
The above behaviour isn't consistent either. When I added a second environment variable for substitution, it gave the oci runtime error, kept giving it when I removed that second variable, and only started working again when I also removed the first variable. After that it suddenly accepted ${PROJECT_DIR}/build/, but still without finding the files.
Starting a bash session to the nginx container shows that the mount point for the volume contains no files.
I'm really at a loss as to what docker is doing and what it expects from me, especially as I have no idea what it expands the variables in the compose file to.
In the end the conclusion is that variable substitution is too quirky on Windows with Docker Machine to be useful. However, there is an alternative to variable substitution.
If you need a docker environment that does the following:
Can deploy on different computers that don't run the same OS
Doesn't care whether the host uses Docker natively or via VirtualBox (this can require path changes)
Then your best bet is to use extending.
First you create the docker-compose.yml file that contains the images you'll need. For example, a PHP image with MySQL:
php:
  image: php:5.5-apache
  links:
    - php_db:mysql
    - maildev:maildev
  ports:
    - 8080:80
php_db:
  image: mariadb
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
You might notice that there aren't any volumes defined in this docker-compose file. That is something we're going to define in a file called docker-compose.override.yml:
php:
  volumes:
    - /workspaces/Eclipse/project/:/var/www/html/
When you have both files in one directory, docker-compose does something interesting: it combines them into one, adding/overwriting the settings in docker-compose.yml with those present in docker-compose.override.yml.
Then, when you run docker-compose up, the result is a docker run configured for the machine you're working on.
You can get similar behaviour with custom files names if you change a few things in your docker-compose command:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml up
The key detail is that docker-compose accepts multiple compose files and combines them into one. This happens from left to right.
Both methods allow you to create a basic compose file that configures the containers you need. You can then override/add the settings you need for the specific computer you're running docker on.
The page Overview of docker-compose CLI has more details on how these commands work.
I have a docker-compose.yml file, and in the terminal I am typing docker-compose up [something], but I would also like to pass an argument to docker-compose.yml. Is this possible? I've read about interpolation variables and tried to specify a variable in the .yml file using ${testval} and then running docker-compose up [something] var="test", but I receive the following error:
WARNING: The testval variable is not set. Defaulting to a blank string.
ERROR: No such service: testval=test
Based on dnephin's answer, I created this sample repo showing how you can pass a variable to docker-compose up.
The usage is simple:
MAC / LINUX
TEST= docker-compose up to create and start both the app and db containers. The api should then be running on your docker daemon on port 3030.
TEST=DO docker-compose up to create and start both the app and db containers. The api should execute npm run test from the package.json file.
WINDOWS (Powershell)
$env:TEST="";docker-compose up to create and start both app and db container. The api should then be running on your docker daemon on port 3030.
$env:TEST="do";docker-compose up to create and start both app and db container. The api should execute the npm run test inside the package.json file.
You need to ensure two things:
The docker-compose.yml has the environment variable declared. For example,
services:
  app:
    image: python:3.7
    environment:
      - "SECRET_KEY=${SECRET_KEY}"
Have the variable available in the environment when docker-compose up is called:
SECRET_KEY="not a secret" docker-compose up
Note that this is not equivalent to passing them during build, as it is not advisable to store secrets in docker images.
You need to pass the variables as environment variables:
testvar=test docker-compose up ...
or
export testvar=test
docker-compose up
From the docs:
https://docs.docker.com/compose/reference/up/
https://docs.docker.com/compose/reference/build/
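Note that exporting the variable only has an effect if the compose file actually references it; a minimal sketch tying the two together (the image name is made up):

services:
  app:
    image: myimage
    environment:
      - TESTVAR=${testvar}

testvar=test docker-compose up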
You can't pass arguments to docker-compose up, but you can pass arguments to docker-compose build:
docker-compose build --build-arg KEY1=VALUE1 --build-arg KEY2=VALUE2
I'm not sure what you want to do here, but if what you need is to pass an environment variable to a specific container, docker-compose.yml allows you to do that:
web:
  ...
  environment:
    - RAILS_ENV=production
    - VIRTUAL_HOST=www.example.com
    - VIRTUAL_PORT=3011
These variables will be specific to the container you specified them for, and will not be shared between containers.
Also, docker-compose up doesn't take arguments like that.
When dealing with build arguments, declare them in the compose yml file as follows:
services:
  app:                        # name of the service
    build:
      context: docker/app/    # your docker build root
      dockerfile: Dockerfile  # optional
      args:
        - COMPOSER_AUTH_TOKEN # name of the variable; its value is taken from the host environment
Then, before running docker-compose up, export the variable as others have suggested; it will work. I tried it. Use compose file version 3 or above. Have fun.
Compose supports declaring default environment variables in an environment file named .env placed in the project directory.
Step 1:
Create a file named .env in the project directory
Step 2:
Declare variables in the form VAR=VAL
NOTE: There is no special handling of quotation marks, i.e. TESTVAL='test' means TESTVAL is 'test' (with quotation marks) and not just test. So you'd declare it as TESTVAL=test.
Step 3:
Use the variables in the Compose file as:
environment:
  - myval=${TESTVAL}
Documentation: Declare default environment variables in file
BONUS: If you are building the image on the fly in your docker-compose.yaml, you can even pass build args using environment variables. E.g.:
version: "3.8"
services:
myapp:
build:
context: ./myDir
dockerfile: ./myDir/myDockerfile
args:
- MYARG=${TESTVAL}
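For the args mapping to take effect, the Dockerfile also has to declare the argument; a minimal sketch of what ./myDir/myDockerfile might contain (the base image is just an example):

FROM python:3.8
ARG MYARG
# optional: persist the build arg into the runtime environment
ENV MYARG=${MYARG}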
I was trying to find a solution for a batch file. Based on Rafael Delboni's answer, you can add a command inside the batch file that calls PowerShell:
powershell $env:TEST="";docker-compose up ...
But because it's expensive to call PowerShell from inside a batch file, you can instead initialize the TEST variable in the batch file and then call your docker-compose command.
Something like this:
set TEST=...
docker compose up ...