Variable substitution not working on Windows 10 with docker compose - docker

I'm wondering if I've stumbled on a bug or whether there's something not properly documented about variable substitution on Windows in combination with Docker Machine and Compose (the installed version of docker is 1.11.1).
If I run the "docker-compose up" command for a yml file that looks like this:
volumes:
  - ${FOOBAR}/build/:/usr/share/nginx/html/
If this variable doesn't exist, docker-compose will correctly complain about it:
The FOOBAR variable is not set. Defaulting to a blank string.
However, when I change it to an existing environment variable:
volumes:
  - ${PROJECT_DIR}/build/:/usr/share/nginx/html/
It then fails to start the container properly and displays the following error (trying to access the nginx container gives a host-is-unreachable message):
ERROR: for nginx rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: not a directory"
If I run the echo command in the Docker Quickstart Terminal, it outputs the correct path that I've set in the environment variable. If I replace ${PROJECT_DIR} with the value of the environment variable, the container runs correctly.
I get the same type of error message if I try to use the environment variable for the official php image instead of the official nginx image. In both cases the docker-compose file works if I substitute ${PROJECT_DIR} with the content of the environment variable.
So is this a bug or am I missing something?
After some mucking about I've managed to get the containers to start correctly, without error messages, if I use the following (the variable contains the full path to the local files):
volumes:
  - ${PROJECT_DIR}:/usr/share/nginx/html/
The nginx container is then up and running, though it can no longer find the files. If I replace the variable with the path it contains, it can find the files again.
The behaviour above isn't consistent either. When I added a second environment variable for substitution it gave the oci runtime error again. It kept giving that error when I removed the second variable, and only started working again when I also removed the first variable. After that it suddenly accepted ${PROJECT_DIR}/build/, but still without finding the files.
Starting a bash session in the nginx container shows that the mount point for the volume contains no files.
I'm really at a loss as to what docker is doing and what it expects from me, especially as I have no idea what it is expanding the variables in the compose file to.
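One thing that helps when debugging this (not mentioned in the original post, but worth trying if your docker-compose version has the config command) is asking Compose to print the resolved configuration, which shows exactly what the variables expand to:
# Validate the compose file and print it with all environment variables substituted,
# without starting any containers.
docker-compose config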

In the end the conclusion is that variable substitution is too quirky on Windows with Docker Machine to be useful. However, there is an alternative to variable substitution.
If you need a docker environment that:
- can deploy on different computers that don't run the same OS
- doesn't care whether the host uses Docker natively or via VirtualBox (which can require path changes)
then your best bet is to use extending.
First you create the docker-compose.yml file that contains the images you'll need, for example a PHP image with MySQL:
php:
  image: php:5.5-apache
  links:
    - php_db:mysql
    - maildev:maildev
  ports:
    - 8080:80
php_db:
  image: mariadb
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
You might notice that there aren't any volumes defined in this docker-compose file. That is something we're going to define in a file called docker-compose.override.yml:
php:
  volumes:
    - /workspaces/Eclipse/project/:/var/www/html/
When you have both files in the same directory, docker-compose does something interesting: it combines them into one configuration, adding to or overwriting the settings in docker-compose.yml with those present in docker-compose.override.yml.
Running docker-compose up then results in containers that are configured for the machine you're working on.
You can get similar behaviour with custom file names if you change your docker-compose command a little:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml up
The key detail is that docker-compose accepts multiple compose files and combines them into one; the files are merged from left to right.
Both methods allow you to create a basic compose file that configures the containers you need, and then override or add the settings needed for the specific computer you're running docker on.
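For example, a colleague whose copy of the project lives at a different host path could keep their own override file with just that path changed (the path below is only a placeholder):
php:
  volumes:
    - /home/colleague/workspace/project/:/var/www/html/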
The page Overview of docker-compose CLI has more details on how these commands work.

Related

How to mount volumes with docker-compose in- and outside of a VSCode devcontainer?

Scenario
The project I'm working on (a React app) uses docker-compose to set up its backend, webserver and frontend. I'm working inside a VSCode devcontainer (Node with TypeScript).
The Docker-in-Docker environment I've set up seems to work fine and I'm able to start each of the Docker containers, but I had to adapt the code in the following manner because otherwise Docker wasn't able to locate the specified volumes to mount.
Setup
First I needed to set a remote environment variable in my devcontainer.json:
"remoteEnv": {
// the original host directory which is needed for volume mount commands from inside the container (Docker in Docker)
"LOCAL_WORKSPACE_FOLDER": "${localWorkspaceFolder}"
}
I'm then using this environment variable in the docker-compose.yaml like so:
services:
  webserver:
    build:
      context: ./docker
      dockerfile: webserver/Dockerfile
    image: webserver
    container_name: webserver_nginx
    ports:
      - 8080:80
    volumes:
      - ${LOCAL_WORKSPACE_FOLDER}/webserver:/etc/nginx/conf.d
      - ${LOCAL_WORKSPACE_FOLDER}/build:/var/www/html
    restart: unless-stopped
    depends_on:
      - backend
  backend:
    ...
Problem
On my machine (and on the machines of my colleagues who also use VSCode) everything works fine. But I have some team members who don't use VSCode. When I commit the adapted docker-compose.yaml file, their setup doesn't work anymore, and vice versa when they adapt the file back to their needs.
Question
How can I ensure that Docker compose works in- and outside of VSCode's devcontainers?
Possible solutions?
Would it be possible to set the environment variable to a default value? In my case the value that should be used when the project is not opened inside a devcontainer is just a simple dot (.). When I run echo ${LOCAL_WORKSPACE_FOLDER} inside the integrated VSCode terminal, the correct path gets printed, so it seems that VSCode just sets a normal environment variable?
(If the assumption above is correct) wouldn't it be possible to write a simple Bash script install.sh that sets the correct path automatically? This script would only need to be run once during the setup of the project. What could this file look like?
Docker Compose allows a default value for variables:
${VARIABLE:-default} evaluates to default if VARIABLE is unset or empty in the environment.
See: https://docs.docker.com/compose/environment-variables/
For your case, you can use:
${LOCAL_WORKSPACE_FOLDER:-.}
PS: I've never used that personally.
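Applied to the compose file from the question, that would look something like this (just a sketch; the . default assumes docker-compose is run from the project root):
services:
  webserver:
    volumes:
      - ${LOCAL_WORKSPACE_FOLDER:-.}/webserver:/etc/nginx/conf.d
      - ${LOCAL_WORKSPACE_FOLDER:-.}/build:/var/www/html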

The same docker compose service in multiple folders

I have a package (bootstrap) that is included in multiple local projects. Example:
project1/:
  src/...
  tests/...
  vendor/bootstrap/...
project2/:
  src/...
  tests/...
  vendor/bootstrap/...
This package has its own internal tests and static code analyzers that I want to run inside each projectX/vendor/bootstrap folder. The tests and analyzers are run from docker containers, i.e. bootstrap has a docker-compose.yml with some configuration:
version: '3.7'
services:
  cli:
    build: docker/cli
    working_dir: /app
    volumes:
      - ./:/app
    tty: true
The problem is that when I run something inside project1/vendor/bootstrap, then switch to project2/vendor/bootstrap and run something there, docker thinks I'm executing containers from project1. I believe it's because of the identical folder name, as Docker Compose generates container names as [folder-name_service-name]. So when I run docker-compose exec cli sh, it checks whether there is a running container bootstrap_cli, but that container may have been created within the bootstrap folder of another project.
Example of docker ps:
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS   NAMES
128c3e834df4   bootstrap_cli   "docker-php-entrypoi…"   55 minutes ago   Up 55 minutes           bootstrap_cli
The NAMES value is the same for the containers in all these projectX folders.
There is an option to add container_name: bootstrap_project1_cli, but it seems Docker Compose ignores it when searching for a running container.
So is it possible to differentiate containers of the same service name and have all of them running at the same time?
Have a look at this github issue:
https://github.com/docker/compose/issues/2120
There are two options to set the project name: use the -p command-line flag or the COMPOSE_PROJECT_NAME environment variable. Both are documented here: https://docs.docker.com/compose/reference/overview/#compose-project-name
When you run docker-compose, it needs a project name for the containers. If you don't specify the -p option, docker-compose looks for an environment variable named COMPOSE_PROJECT_NAME. If neither is set, it defaults to the name of the current working directory. That's the behaviour you're seeing.
If you don't want to add a command-line parameter, you can specify the environment variable in an .env file inside the directory of your docker-compose file. See https://docs.docker.com/compose/env-file/
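For example (the project names below are only placeholders), each checkout could run its own copy like this:
# inside project1/vendor/bootstrap; the -p flag must be repeated for every docker-compose command
docker-compose -p project1_bootstrap up -d
docker-compose -p project1_bootstrap exec cli sh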
docker-compose is basically just a wrapper around the docker CLI. It provides some basic scoping for the services inside a compose file by prefixing the containers (networks and volumes, too) with the COMPOSE_PROJECT_NAME value. If not configured differently, this value corresponds to the directory name of the compose file. You can override this by setting the corresponding environment variable. A simple solution would be to place an .env file into each bootstrap directory that contains a line like
COMPOSE_PROJECT_NAME=project1_bootstrap
which will lead to auto-generated container names like e.g. project1_bootstrap_cli_1
Details:
https://docs.docker.com/compose/reference/envvars/
https://docs.docker.com/compose/env-file/

docker-compose: start service from same docker-compose file using env vars to alter container name

We are using docker in a team of developers. We have one project that all devs work on. Since we do not want to have one docker-compose.yml per developer, we use environment variables to pass the username to docker-compose. Inside the docker-compose file we have something like this:
services:
  myservice:
    image: myimage
    container_name: ${user}_myservice
This used to work very well for us but has stopped working lately. Assume there are two users. The first user runs docker-compose up myservice, launching ${user1}_myservice. When the second user issues the same command, instead of starting a second container it kills the container running as ${user1}_myservice and starts ${user2}_myservice.
Somehow it seems that docker services are now linked directly and not only through the container_name variable as before.
We recently upgraded docker to Docker version 17.09.0-ce, build afdb6d4. I attribute the change to the "new" docker version. I have tried downgrading docker-compose to previous versions and it seems this is not related to docker-compose.
UPDATE
Inspired by the answer below we found the following workaround:
We set the environment variable COMPOSE_PROJECT_NAME to the username when the user logs in on the host. We also extend the service names in our docker-compose.yml files to <proj>_<service>, thereby avoiding any conflicts between identical service names across projects.
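A minimal sketch of that login hook, assuming a bash login shell (the file name and location are up to your setup):
# e.g. in ~/.bashrc or /etc/profile.d/compose_project.sh
export COMPOSE_PROJECT_NAME="${USER}"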
Rather than mucking about with variables in docker-compose.yml, it's probably easier just to make use of the --project-name (-p) option to docker-compose.
Normally, docker-compose derives the project name from the name of the directory that contains your docker-compose.yaml file. So if two people try to start an application from a directory named myapp, they will end up with a conflict because both instances will attempt to use the same name.
However, if they were to run instead:
docker-compose --project-name ${USER}_myapp ...
Then docker-compose for each user would use different project names (like alice_myapp and bob_myapp) and there would be no conflict.
If people get tired of using the -p option, they could create a .env file like this:
COMPOSE_PROJECT_NAME=alice_myapp
And this would have the same effect as specifying -p alice_myapp on the command line.

Set $PROJECT_NAME in docker-compose file

I am using the same docker-compose.yml file for multiple projects. I am really lazy, so I don't want to start them with docker-compose -p $PROJECT_NAME up.
As of Docker version 17.06.0, is it possible to set the variable directly in the docker-compose.yml file?
UPDATE: You can now use the top-level name property in your docker-compose YAML file. This is available from Docker Compose v2.3.3.
This is the result of the #745 proposal, an issue that persisted for about 8 years.
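A minimal sketch of that property (the project name and service below are just placeholders):
name: someproject
services:
  web:
    image: nginx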
Previously:
Right now, we can use the .env file to set the custom project name like this:
COMPOSE_PROJECT_NAME=SOMEPROJECTNAME
It's not flexible, but it's better than nothing. Currently, there is an open issue regarding this as a proposal.
I know this question was asked a long time ago, but I ran into the same problem. There's a suggestion to add the feature https://github.com/docker/compose/issues/745, but they don't want to.
However, I have a Makefile in the root of the directory, and you can add something like this to it:
.PHONY: container-name
container-name:
	docker-compose -p $(PROJECT_NAME) up -d container-name
and then run make container-name
I know it isn't what you asked for, but it could maybe make your life a bit easier.
UPDATE (2022-08-06): you can now use the top-level name property in your docker-compose YAML file.
This is the result of the #745 proposal.
As of Docker Compose version 2.3.3, name can be given in the compose file, but please note the following from the compose-spec documentation on GitHub and the official Compose documentation:
Whenever project name is defined by top-level name or by some custom mechanism, it MUST be exposed for interpolation and environment variable resolution as COMPOSE_PROJECT_NAME.
name: stitch
services:
  foo:
    image: busybox
    environment:
      - COMPOSE_PROJECT_NAME
    command: echo "I'm running ${COMPOSE_PROJECT_NAME}"
Previously proposed solution:
You can set them as environment variables that are available in the session from which you bring up your containers with docker-compose.
I.e., if you want to use $PROJECT_NAME somewhere inside your docker-compose.yaml and this variable has a value in your session, it will be picked up.
Inside the yaml you can assign it to anything you want; even passing it as a command-line argument to some script is possible, e.g.:
working_dir: /opt
command: /bin/bash -c './script.sh ${PROJECT_NAME}'
volumes:
  - /var/run/:/host/var/run/
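For instance, the variable just needs to exist in the shell session that runs docker-compose (the value below is a placeholder):
# export the variable, then bring the services up
export PROJECT_NAME=myproject
docker-compose up -d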
I'm using
docker version : Docker version 17.09.0-ce, build afdb6d4
docker-compose version : docker-compose version 1.14.0, build c7bdf9e

Variable substitution in docker-compose.yml file when running docker-compose with sudo

I'm currently trying to use variable substitution in a docker-compose.yml file. This file contains the following:
jenkins:
  image: "jenkins:${JENKINS_VERSION}"
  external_links:
    - mongodb:mongo
  ports:
    - 8000:8080
When I try to start everything up, docker-compose shows a warning saying that the variable is not set. I suspect this is caused by the use of sudo to start docker-compose. My setup (a Jenkins docker container which has access to docker and docker-compose via volume mounts) currently requires the use of sudo. Would it be better to stop docker requiring sudo, or is there another way to fix this without changing the current setup?
sudo -E preserves the user environment when running the command. It should do what you want.
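For example (a sketch, assuming docker-compose is invoked directly with sudo):
# -E (--preserve-env) keeps variables such as JENKINS_VERSION visible to docker-compose
sudo -E docker-compose up -d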
