The same docker compose service in multiple folders

I have a package (bootstrap) that is included in multiple local projects. Example:
project1/:
  src/...
  tests/...
  vendor/bootstrap/...
project2/:
  src/...
  tests/...
  vendor/bootstrap/...
This package has its own internal tests and static code analyzers that I want to run inside each projectX/vendor/bootstrap folder. The tests and analyzers are run from docker containers, i.e. bootstrap has a docker-compose.yml with some configuration:
version: '3.7'
services:
  cli:
    build: docker/cli
    working_dir: /app
    volumes:
      - ./:/app
    tty: true
The problem is that when I run something inside project1/vendor/bootstrap, then switch to project2/vendor/bootstrap and run something there, Docker thinks I am executing containers from project1. I believe this is because both folders have the same name: Docker Compose generates container names as folder-name_service-name. So when I run docker-compose exec cli sh, it checks whether there is a running container named bootstrap_cli, but that container may have been created in the bootstrap folder of another project.
Example of docker ps:
CONTAINER ID   IMAGE           COMMAND                  CREATED          STATUS          PORTS   NAMES
128c3e834df4   bootstrap_cli   "docker-php-entrypoi…"   55 minutes ago   Up 55 minutes           bootstrap_cli
NAMES is the same for containers in all these projectX folders.
There is an option to add container_name: bootstrap_project1_cli, but it seems Docker Compose ignores it when searching for a running container.
So is it possible to differentiate containers of the same name and have all of them at the same time?

Have a look at this GitHub issue:
https://github.com/docker/compose/issues/2120
There are two ways to set the project name: the -p command-line flag or the COMPOSE_PROJECT_NAME environment variable. Both are documented here: https://docs.docker.com/compose/reference/overview/#compose-project-name
When you run docker-compose, it needs a project name for the containers. If you don't specify the -p option, docker-compose looks for an environment variable named COMPOSE_PROJECT_NAME. If neither is set, it defaults to the basename of the current working directory. That's the behaviour you are seeing.
If you don't want to add a command-line parameter, you can set the variable in an .env file inside the directory of your docker-compose file. See https://docs.docker.com/compose/env-file/
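For example, a sketch with illustrative project names (assuming project1 and project2 are sibling folders):
cd project1/vendor/bootstrap
docker-compose -p project1_bootstrap up -d
cd ../../../project2/vendor/bootstrap
docker-compose -p project2_bootstrap up -d
Each invocation now has its own project scope, so the containers from both folders can run side by side.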

docker-compose is basically just a wrapper around the docker CLI. It provides basic scoping for the services inside a compose file by prefixing the containers (and networks and volumes, too) with the COMPOSE_PROJECT_NAME value. If not configured differently, this value corresponds to the directory name of the compose file. You can override it by setting the corresponding environment variable. A simple solution would be to place an .env file into each bootstrap directory that contains an instruction like
COMPOSE_PROJECT_NAME=project1_bootstrap
which will lead to auto-generated container names like project1_bootstrap_cli_1
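Concretely, a sketch with one .env file per copy of the package (the project names are illustrative):
# project1/vendor/bootstrap/.env
COMPOSE_PROJECT_NAME=project1_bootstrap
# project2/vendor/bootstrap/.env
COMPOSE_PROJECT_NAME=project2_bootstrap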
Details:
https://docs.docker.com/compose/reference/envvars/
https://docs.docker.com/compose/env-file/

Related

Why does docker-compose launch a different image when run with the '-p' flag

We have a setup with our Jenkins server running within a Docker container which I have taken over from a colleague who has left.
I am seeing some behaviour which I do not understand and have not been able to work out what is going on from the documentation.
My folder structure looks like this:
└── Master
    ├── docker-compose.yml
    └── jenkins-master
        └── Dockerfile
My docker-compose.yaml file looks like this (this is just a snippet of the relevant part):
version: '3'
services:
  master:
    build: ./jenkins-master
I have updated the version of the base Jenkins image in jenkins-master/Dockerfile and then rebuilt using docker-compose build.
This succeeds and results in an image called master_master.
If I run docker images I see this new image as well as a previous image:
REPOSITORY       TAG      IMAGE ID   CREATED         SIZE
master_master    latest   <id1>      16 hours ago    704MB
jenkins_master   latest   <id2>      10 months ago   707MB
As I understand it, the name master_master is a result of the base folder name (i.e. Master) and the service name of master in the docker-compose.yaml file.
I don't know how the existing image ended up with the name jenkins_master. Would the folder name have had to be Jenkins rather than Master, or is there another way that would have resulted in this name?
When I run docker-compose up -d it uses the master_master image to launch a container (called master_master_1).
When I run docker-compose -p jenkins up -d it uses the jenkins_master image to launch a container (called jenkins_master_1).
Apart from the different container names, the resultant running containers are different as I can see that the Jenkins versions are different (as per the change I made in the Dockerfile).
I do not change the docker-compose file at all between running these 2 commands and yet different images are run.
The documentation that I have found for specifying the -p (--project-name) flag states:
Sets the project name. This value is prepended along with the service
name to the container on start up. For example, if your project name
is myapp and it includes two services db and web, then Compose
starts containers named myapp_db_1 and myapp_web_1 respectively.
Setting this is optional. If you do not set this, the
COMPOSE_PROJECT_NAME defaults to the basename of the project
directory.
There is nothing that leads me to believe that the -p flag will result in a different image being run.
So what is going on here?
How does docker-compose choose which image to run?
Is this happening due to the names of the images master_master vs jenkins_master?
If you're going to use the docker-compose -p option, you need to use it with every docker-compose command, including docker-compose build.
If your docker-compose.yml file doesn't specify an image:, Compose constructs an image name from the current project name and the Compose service name. The project name and Docker object metadata are the only way it has to remember anything. So what's happening here is that the plain docker-compose build builds the image for the master service in the master project, but then docker-compose -p jenkins up looks for the master service in the jenkins project, and finds the other image. The fix is to use the same project name for both steps:
docker-compose -p jenkins build
docker-compose -p jenkins up -d
It may or may not be easier to set the COMPOSE_PROJECT_NAME environment variable, possibly putting this in a .env file. In a Jenkins context, I also might consider using Jenkins's Docker integration to build (and push) the image, and only referring to image: in the docker-compose.yml file.
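For instance, a .env file next to the compose file (the project name here is just illustrative) saves you from repeating the flag:
# Master/.env
COMPOSE_PROJECT_NAME=jenkins
With that in place, plain docker-compose build and docker-compose up -d both operate on the jenkins project.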
Add the image option in the docker-compose.yml file. Compose will then build and run the container with the specified image name:
services:
  master:
    build: ./jenkins-master
    image: dockerimage_name:tag
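With image: set, the project name no longer decides which image is run; a sketch of the effect:
docker-compose build               # tags the result as dockerimage_name:tag
docker-compose -p anything up -d   # still runs dockerimage_name:tag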

Working directory when running a Docker container with a Docker-Compose file

I'm trying to set up a CI pipeline which uses a Docker container to run tests. The pipeline is supposed to create a container based on an image I already have and remove that container when it's finished.
For my tests I need to mount a few volumes and bind a few ports from my runner to my container, so to simplify things I want to use a docker-compose file that's permanently stored in /home/runner/docker/docker-compose.yml on my runner.
The problem is as follows:
in my docker-compose.yml I have the following lines, binding the current working directory to the HTML folder in my container:
volumes:
  - .:/var/www/html
When I use the command docker-compose -f "/home/runner/docker/docker-compose.yml" up -d, . should be whichever folder GitLab CI cloned my project to, not /home/runner/docker as is currently the case.
Is there a way to make it so that . is my cloned project folder (without hardcoding the name), or am I better off just executing a docker run in my GitLab CI script?
One option could be to use an environment variable to define the path to the repo, so that instead of
volumes:
  - .:/var/www/html
you have
volumes:
  - ${YOUR_REPO}:/var/www/html
This way you only need to set YOUR_REPO before running docker-compose and that's it.
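In GitLab CI, for instance, the predefined CI_PROJECT_DIR variable already points at the cloned project, so the job script could look something like this (a sketch):
test:
  script:
    - export YOUR_REPO="$CI_PROJECT_DIR"
    - docker-compose -f /home/runner/docker/docker-compose.yml up -d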

Mount files in read-only volume (where source is in .dockerignore)

My app depends on secrets, which I have stored in the folder .credentials (e.g. .credentials/.env, .credentials/.google_api.json, etc...) I don't want these files built into the docker image, however they need to be visible to the docker container.
My solution is:
Add .credentials to my .dockerignore
Mount the credentials folder in read-only mode with a volume:
# docker-compose.yaml
version: '3'
services:
  app:
    build: .
    volumes:
      - ./.credentials:/app/.credentials:ro
This is not working (I do not see any credentials inside the docker container). I'm wondering if the .dockerignore is causing the volume to break, or if I've done something else wrong?
Am I going about this the wrong way? e.g. I could just pass the .env file with docker run --env-file .env IMAGE_NAME
Edit:
My issue was to do with how I was running the image. I was doing docker-compose build and then docker run IMAGE_NAME, assuming that the volumes were built into the image. However, this seems not to be the case.
Instead, the above code works when I do docker-compose run app (where app is the service name) after building.
From the comments, the issue here is in looking at the docker-compose.yml file for your container definition while starting the container with docker run. The docker run command does not use the compose file, so no volumes were defined on the resulting container.
The build process itself creates an image, and volume sources are not part of that image. Only the Dockerfile and your build context are used as input to the build. The rest of the compose file consists of run-time settings that apply to containers. Many projects do not even use the compose file for building the image; for those projects, everything in the compose file is a way to define the default settings for the containers being created.
The solution is to use docker-compose up -d to test your docker-compose.yml.
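A quick way to verify the difference, assuming the service is named app as above:
docker-compose build                            # the image contains no credentials (.dockerignore excludes them)
docker-compose up -d                            # compose creates the container with the volume attached
docker-compose exec app ls /app/.credentials    # the mounted files are visible here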

docker-compose: start service from same docker-compose file using env vars to alter container name

We are using docker in a team of developers. We have one project all devs work on. Since we do not want to have a separate docker-compose.yml for each developer, we use environment variables to pass the username to docker-compose. Inside docker-compose.yml we have something like this:
services:
  myservice:
    image: myimage
    container_name: ${user}_myservice
This used to work very well for us but has stopped working lately. Assume there are two users. The first user runs docker-compose up myservice launching ${user1}_myservice. When the second user issues the same command, the second user will kill the container running under ${user1}_myservice and start ${user2}_myservice.
Somehow it seems that docker services are now linked directly and not only through the container_name variable as before.
We recently upgraded docker to Docker version 17.09.0-ce, build afdb6d4. I attribute the change to the "new" docker version. I have tried downgrading docker-compose to previous versions and it seems this is not related to docker-compose.
UPDATE
Inspired by the answer below we found the following workaround:
We set the env variable COMPOSE_PROJECT_NAME to be the username on login of the user on the host. Then we extend the service name in our docker-compose.yml files to be <proj>_<service>, thereby avoiding any conflicts between identical service names across projects.
Rather than mucking about with variables in docker-compose.yml, it's probably easier just to make use of the --project-name (-p) option to docker-compose.
Normally, docker-compose derives the project name from the name of the directory that contains your docker-compose.yaml file. So if two people try to start an application from a directory named myapp, they will end up with a conflict because both instances will attempt to use the same name.
However, if they were to run instead:
docker-compose --project-name ${USER}_myapp ...
Then docker-compose for each user would use different project names (like alice_myapp and bob_myapp) and there would be no conflict.
If people get tired of using the -p option, they could create a .env file like this:
COMPOSE_PROJECT_NAME=alice_myapp
And this would have the same effect as specifying -p alice_myapp on the command line.
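To make this automatic at login, as in the question's update, each user could export the variable from their shell profile (a sketch; the name is illustrative):
# ~/.bashrc
export COMPOSE_PROJECT_NAME="${USER}_myapp"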

Variable substitution not working on Windows 10 with docker compose

I'm wondering if I've stumbled on a bug or if there's something not properly documented about variable substitution on Windows in combination with Docker Machine and Compose (the installed docker version is 1.11.1).
If I run the "docker-compose up" command for a yml file that looks like this:
volumes:
  - ${FOOBAR}/build/:/usr/share/nginx/html/
If this variable doesn't exist, docker-compose will correctly complain about it:
The FOOBAR variable is not set. Defaulting to a blank string.
However, when I change it to an existing environment variable:
volumes:
  - ${PROJECT_DIR}/build/:/usr/share/nginx/html/
It will then not properly start the container and displays the following error (trying to access the nginx container will give you a host is unreachable message):
ERROR: for nginx rpc error: code = 2 desc = "oci runtime error: could not synchronise with container process: not a directory"
If I run the echo command in the Docker Quickstart Terminal it will output the correct path that I've set in the environment variable. If I replace the ${PROJECT_DIR} with the environment variable value the container runs correctly.
I get the same type of error message if I try to use the environment variable for the official php image instead of the official nginx image. In both cases the docker compose file works if I substitute ${PROJECT_DIR} text with the content of the environment variable.
So is this a bug or am I missing something?
After some mucking about I've managed to get the containers to start correctly without error messages if I use the following (the variable contains the full path to the local files):
volumes:
  - ${PROJECT_DIR}:/usr/share/nginx/html/
The nginx container is then up and running, though it can no longer find the files. If I replace the variable with the path it contains, it can find the files again.
The above behaviour isn't consistent. When I added a second environment variable for substitution, it gave the oci runtime error; it kept giving it when I removed that second variable, and only started working again when I also removed the first variable. After that it suddenly accepted ${PROJECT_DIR}/build/, but still without finding the files.
Starting a bash session to the nginx container shows that the mount point for the volume contains no files.
I'm really at a loss here as to what docker is doing and what it expects from me. Especially as I have no idea what it is expanding the variables in the compose file to.
In the end the conclusion is that variable substitution is too quirky on Windows with Docker Machine to be useful. However, there is an alternative to variable substitution.
If you need a docker environment that does the following:
Can deploy on different computers that don't run the same OS
Doesn't care if the host uses Docker natively or via VirtualBox (this can require path changes)
Then your best bet is to use extending.
First you create the docker-compose.yml file that contains the images you'll need. For example a PHP image with MySQL:
php:
  image: php:5.5-apache
  links:
    - php_db:mysql
    - maildev:maildev
  ports:
    - 8080:80
php_db:
  image: mariadb
  ports:
    - 3306:3306
  environment:
    MYSQL_ROOT_PASSWORD: examplepass
You might notice that there aren't any volumes defined in this docker-compose file. That is something we're going to define in a file called docker-compose.override.yml:
php:
  volumes:
    - /workspaces/Eclipse/project/:/var/www/html/
When you have both files in one directory, docker-compose does something interesting: it combines them into one, adding or overwriting settings in docker-compose.yml with those present in docker-compose.override.yml.
Then when running the command docker-compose up it will result in a docker run that is configured for the machine you're working on.
You can get similar behaviour with custom file names if you change a few things in your docker-compose command:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml up
The detail is that docker-compose can accept multiple compose files and it will combine them into one. This happens from left to right.
Both methods allow you to create a basic compose file that configures the containers you need. You can then override or add the settings you need for the specific computer you're running docker on.
The page Overview of docker-compose CLI has more details on how these commands work.
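If you want to check what the merged configuration looks like before running it, docker-compose config prints the combined result:
docker-compose -f docker-compose.yml -f docker-compose.conf.yml config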
