I have a few repos that I want to deploy using docker-compose within the same project. For each, I have a docker-compose.yaml in the root of the repo that defines the list of services. The Dockerfiles are located at docker/{service}/Dockerfile relative to the repo root, so each docker-compose.yaml looks something like:
version: "3"
services:
service1:
build:
context: .
dockerfile: ./docker/service1/Dockerfile
networks:
- default
service2:
build:
context: .
dockerfile: ./docker/service2/Dockerfile
networks:
- default
service3:
build:
context: .
dockerfile: ./docker/service3/Dockerfile
networks:
- default
networks:
default:
However, when I run this command from the directory above the repo roots:
docker-compose -p project -f repo1/docker-compose.yaml -f repo2/docker-compose.yaml build
I get the error ERROR: Cannot locate specified Dockerfile: ./docker/service1/Dockerfile
which is a service from repo2.
These commands run fine with just one file specified, and only one set of services (repo2's) is "not found". I assume the first file is therefore setting Compose's context; is there a way I can tell Compose not to do this?
The documentation for the docker-compose -f option notes:
When you use multiple Compose files, all paths in the files are relative to the first configuration file specified with -f.
Practically, this probably means all of the Compose files need to be in the same directory. If you have a file that really has no host paths at all (neither a build: context directory nor volumes: bind mounts) it could in principle be somewhere else, but IME this is pretty unusual.
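To make the merged invocation in the question work under that rule, repo2's file would have to spell out its paths relative to repo1's directory, something like this sketch (assuming the layout described above; it hard-codes the sibling repo's location, which is part of why this setup is discouraged):
# repo2/docker-compose.yaml, with paths written relative to repo1/
services:
  service1:
    build:
      context: ../repo2                          # resolved against repo1/, the first -f file's directory
      dockerfile: ./docker/service1/Dockerfile   # resolved against the build context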
For the setup you describe, it might be more practical to launch the separate projects separately:
(cd repo1 && docker-compose up --build -d)
(cd repo2 && docker-compose up --build -d)
A more typical use of multiple Compose files is to provide options split across several files; for example, a base file that specifies image names and an override file that provides developer-oriented features like publishing database ports and building images from source. Share Compose configurations between files and projects describes this use case a little more. That page similarly comments:
Tracking which fragment of a service is relative to which path is difficult and confusing, so to keep paths easier to understand, all paths must be defined relative to the base file.
In short: it's an intentional feature of Compose that all path references in all Compose files are relative to the location of the first file, regardless of the location of the file that contains the path reference, and there's not an override option for it.
I am very (read very) new to Docker so experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but the contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php, etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful as I modified my dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work, although it is slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory on the host is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists on my Docker Hub; this is the source code for it, just in case you are worried about anything malicious. I created it a couple of weeks ago to help someone else on Stack Overflow.
(Screenshot: files from the container shown on my machine, the host.)
You can read more about Docker Volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it overwrites the path within the container with the contents of the path on your host, which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This will give you command-line access inside the container. From there you can run ls -a /var/www/site.
Furthermore, you can also pre-stage ./site with a random test file in it (test.txt or whatever), then docker-compose up -d, then run the same docker exec command from the step above and see whether the staged test.txt file is now inside the container. That gives you definitive evidence that the data on your host overwrites the data in the container when the volume is mounted.
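Concretely, that staging test might look like this, using the container_name from your compose file:
echo test > site/test.txt                          # pre-stage a file on the host
docker-compose up -d
docker exec -it my_laravel ls -la /var/www/site    # test.txt is listed; the generated Laravel files are not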
With that being said, doing something like this to share a log directory will work: the volume path specified on the container is still overwritten, but the difference is that the container writes to that path; it doesn't rely on it for config files or app files.
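For instance, a mount along these lines (the paths here are hypothetical) is fine, because the container writes into it at runtime instead of expecting files that were baked into the image:
services:
  laravel:
    volumes:
      - ./logs:/var/www/site/storage/logs   # container writes log files here at runtime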
Hope this helps.
I use docker and docker compose to package scientific tools into easily/universally executable modules. One example is a docker that packages a rather complicated python library into a container that runs a jupyter notebook server; the idea is that other scientists who are not terribly tech-savvy can clone a github repository, run docker-compose up then do their analyses without having to install the library, configure various plugins and other dependencies, etc.
I have this all working fine except that I'm having issues getting the volume mounts to work in a coherent fashion. The reason for this is that the library inside the docker container handles multiple kinds of datasets, which users will store in several separate directories that are conventionally tracked through shell environment variables. (Please don't tell me this is a bad way to do this--it's the way things are done in the field, not the way I've chosen to do things.) So, for example, if the user stores FreeSurfer data, they will have an environment variable named SUBJECTS_DIR that points to the directory containing the data; if they store HCP data, they will have an environment variable HCP_SUBJECTS_DIR. However, they may have both, either, or neither of these set (as well as a few others).
I would like to be able to put something like this in my docker-compose.yml file in order to handle these cases:
version: '3'
services:
  my_fancy_library:
    build: .
    ports:
      - "8080:8888"
    environment:
      - HCP_SUBJECTS_DIR="/hcp_subjects"
      - SUBJECTS_DIR="/freesurfer_subjects"
    volumes:
      - "$SUBJECTS_DIR:/freesurfer_subjects"
      - "$HCP_SUBJECTS_DIR:/hcp_subjects"
In testing this, if the user has both environment variables set, everything works swimmingly. However, if they don't have one of these set, I get an error about not mounting directories that are fewer than 2 characters long (which I interpret to be a complaint about mounting a volume specified by ":/hcp_subjects").
This question asks basically the same thing, and the answer points to here, which, if I'm understanding it right, basically explains how to have multiple docker-compose files that are resolved in some fashion. This isn't really a viable solution for my case for a few reasons:
This tool is designed for use by people who don't necessarily know anything about docker, docker-compose, or related utilities, so expecting them to write/edit their own docker-compose.yml file is a problem
There are more than just two of these directories (I have shown two as an example) and I can't realistically make a docker-compose file for every possible combination of these paths being declared or not declared
Honestly, this solution seems really clunky given that the information needed is right there in the variables that docker-compose is already reading.
The only decent solution I've been able to come up with is to ask the users to run a script ./run.sh instead of docker-compose up; the script examines the environment variables, writes out its own docker-compose.yml file with the appropriate volumes, and runs docker-compose up itself. This also seems somewhat clunky, but it works.
Does anyone know of a way to conditionally mount a set of volumes based on the state of the environment variables when docker-compose up is run?
You can set defaults for environment variables in a .env file shipped alongside the docker-compose.yml [1].
By setting your environment variables to /dev/null by default and then handling this case in the containerized application, you should be able to achieve what you need.
Example
$ tree -a
.
├── docker-compose.yml
├── Dockerfile
├── .env
└── run.sh
docker-compose.yml
version: "3"
services:
test:
build: .
environment:
- VOL_DST=${VOL_DST}
volumes:
- "${VOL_SRC}:${VOL_DST}"
Dockerfile
FROM alpine
COPY run.sh /run.sh
ENTRYPOINT ["/run.sh"]
.env
VOL_SRC=/dev/null
VOL_DST=/volume
run.sh
#!/usr/bin/env sh
set -euo pipefail

if [ ! -d "${VOL_DST}" ]; then
  echo "${VOL_DST} not mounted"
else
  echo "${VOL_DST} mounted"
fi
Testing
Environment variable VOL_SRC not defined:
$ docker-compose up
Starting test_test_1 ... done
Attaching to test_test_1
test_1 | /volume not mounted
test_test_1 exited with code 0
Environment variable VOL_SRC defined:
$ VOL_SRC="./" docker-compose up
Recreating test_test_1 ... done
Attaching to test_test_1
test_1 | /volume mounted
[1] https://docs.docker.com/compose/environment-variables/#the-env-file
Even though @Ente's answer solves the problem, here is an alternative solution when you have more complex differences between environments.
Docker compose supports multiple docker-compose files for configuration overriding in different environments.
This is useful if you have different named volumes you need to potentially mount on the same path depending on the environment.
You can modify existing services or even add new ones, for instance:
# docker-compose.yml
version: '3.3'
services:
  service-a:
    image: "image-name"
    volumes:
      - type: volume
        source: vprod
        target: /data
    ports:
      - "80:8080"
volumes:
  vprod:
  vdev:
And then you have the override file to change the volume mapping:
# docker-compose.override.yml
services:
  service-a:
    volumes:
      - type: volume
        source: vdev
        target: /data
When running docker-compose up -d both configurations will be merged with the override file taking precedence.
Docker Compose picks up docker-compose.yml and docker-compose.override.yml by default. If you have more files, or files with different names, you need to specify them in order:
docker-compose -f docker-compose.yml -f docker-compose.custom.yml -f docker-compose.dev.yml up -d
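If you want to check what the merged result looks like before starting anything, docker-compose config prints the resolved configuration with the same -f rules applied:
docker-compose -f docker-compose.yml -f docker-compose.override.yml config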
I am following the Lynda Docker tutorials and working through the material on Compose files.
This is my docker-compose.yml file.
more docker-compose.yml
version: '3'
services:
  web:
    image: jboss/wildfly
    volumes:
      - ~/deployments:/opt/jboss/wildfly/standalone/deployments
    ports:
      - 8080:8080
As per the author, I am trying to copy the webapp.war file to the deployments/ folder, but it gives me an error. It looks like the volume mapping in the compose file is not working.
cp /home/user/Demos/docker-for-java/chapter2/webapp.war deployments/
cp: cannot create regular file ‘deployments/’: Not a directory
docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------
helloweb_web_1 /opt/jboss/wildfly/bin/sta ... Up 0.0.0.0:8080->8080/tcp
I think you might be misinterpreting the tutorial. I haven't seen the tutorial itself, but checking the documentation for the WildFly Docker image here, there's a mention that you need to extend the base image and add your war file inside:
To do this you just need to extend the jboss/wildfly image by creating a new one. Place your application inside the deployments/ directory with the ADD command (but make sure to include the trailing slash on the deployment folder path, more info). You can also do the changes to the configuration (if any) as additional steps (RUN command).
This means that you need to create a Dockerfile with approximately these contents (replace your-awesome-app.war with the path to your war file):
FROM jboss/wildfly
ADD your-awesome-app.war /opt/jboss/wildfly/standalone/deployments/
After that you need to change your docker-compose.yml to build from your Dockerfile instead of using jboss/wildfly (note the use of build: . instead of image: jboss/wildfly):
version: '3'
services:
  web:
    build: .
    ports:
      - 8080:8080
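Since the image is now built locally instead of pulled, rebuild it whenever the war file changes, e.g.:
docker-compose up --build -d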
Try that and comment if you run into any issues.
I am having problems writing files out from inside a docker container to my host computer. I believe this is a privilege issue and prefer not to set privileged: True. A workaround for writing out files is prepending ../ to a volume in my docker-compose.yml file. For example,
version: '3'
services:
  example:
    volumes:
      - ../:/example
What exactly is ../ doing here? Is it taking from the container's privileges and "going up" a directory to the host machine? Without ../, I am unable to write out files to my host machine.
Specifying a path as the source, as opposed to a volume name, bind mounts a host path to a path inside the container. In your example, ../ will be visible inside the container at /example on a recent version of docker.
Older versions of docker can only access the directory the build is run in and below it, not above, unless you specify the higher directory as the context.
To run the docker build from the parent directory:
docker build -f myapp/Dockerfile /home/me
As opposed to
docker build -f Dockerfile /home/me/myapp
Doing the same in Compose:
# docker-compose.yml
version: '3.3'
services:
  yourservice:
    build:
      context: /home/me
      dockerfile: myapp/Dockerfile
Or with your example:
version: '3'
services:
  example:
    build:
      context: /home/me/app
      dockerfile: docker/Dockerfile
    volumes:
      - /home/me/app:/example
Additionally, you have to supply full paths, not relative paths, e.g.:
- /home/me/myapp/files/example:/example
If you have a script that is generating the Dockerfile from an unknown path, you can use:
CWD=`pwd`; echo $CWD
To refer to the current working directory. From there you can append /..
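For example, a script could build with the parent of the current directory as the context while keeping the Dockerfile where it is (a sketch; adjust the paths to your layout):
CWD=`pwd`
# the Dockerfile stays in the current directory, but the build context is one level up,
# so COPY/ADD instructions can reach files from the parent directory
docker build -f "$CWD/Dockerfile" "$CWD/.."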
Alternatively, you can build the image from a directory one level up, use a volume that you can share with an image run from a higher directory, or output your file to stdout and redirect the command's output to the file you need from the script that runs it.
See also: Docker: adding a file from a parent directory
The statement volumes: ['../:/example'] makes the parent directory of the directory containing docker-compose.yml on the host (../) visible inside the container at /example. Host directory bind-mounts like this, plus some equivalent constructs using a named volume attached to a specific host directory, are the only way a container can write out to the host filesystem.
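A quick way to see this for yourself, assuming the example service defines an image with a shell available, is to write a file from inside the container and look for it on the host:
docker-compose run --rm example sh -c 'echo hello > /example/from-container.txt'
ls ../from-container.txt   # the file appears in the parent directory on the host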
Imagine two containers: webserver (1) is hosting static HTML files that need to be built from templates inside a data volume container (2).
The docker-compose.yml file looks something like this:
version: "2"
services:
webserver:
build: ./web
ports:
- "80:80"
volumes_from:
- templates
templates:
build: ./templates
The Dockerfile for the templates service looks like this:
FROM ruby:2.3
# ... there is more but that is should not be important
WORKDIR /tmp
COPY ./Gemfile /tmp/Gemfile
RUN bundle install
COPY ./source /tmp/source
RUN bundle exec middleman build --clean
VOLUME /tmp/build
When I run docker-compose up everything is working as expected: templates are built, webserver hosts them and you can view them in the browser.
Problem is, when I update the ./source and restart/rebuild the setup, the files the webserver hosts are still the old ones, although the log shows that the container was rebuilt - at least the last three layers after COPY ./source /tmp/source. So the changes inside the source folder are picked up by the rebuild, but I'm not able to get the changes shown in the browser.
What am I doing wrong?
Compose preserves volumes when containers are recreated, which is probably why you are seeing the old files.
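If you want to rule that out, removing the project's containers together with their volumes forces fresh copies on the next start (note that this deletes whatever is in those volumes):
docker-compose down -v    # remove containers and the volumes attached to them
docker-compose up --build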
Generally it is not a good idea to use volumes for source code (or in this case static html files). Volumes are for data you want to persist, like data in a database. Source code changes with each version of the image, so doesn't really belong in a volume.
Instead of using a data volume container for these files, you can use a builder container to compile them and a webserver service to host them. You'll need to add a COPY to the webserver Dockerfile to include the files.
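As an aside, on newer Docker versions a multi-stage build is one way to fold the builder step directly into the webserver image; a minimal sketch, assuming an nginx-based webserver and the templates Dockerfile from the question:
# build stage: same steps as the templates image above
FROM ruby:2.3 AS builder
WORKDIR /tmp
COPY ./Gemfile /tmp/Gemfile
RUN bundle install
COPY ./source /tmp/source
RUN bundle exec middleman build --clean

# final stage: copy only the generated static files into the webserver image
FROM nginx:alpine
COPY --from=builder /tmp/build /usr/share/nginx/html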
To accomplish this you would change your docker-compose.yml to this:
version: "2"
services:
webserver:
image: myapp:latest
ports: ["80:80"]
Now you just need to build myapp:latest. You could write a script (sketched below) which:
builds the builder container
runs the builder container
builds the myapp container
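A sketch of such a script; the image names and the output path are assumptions:
#!/bin/sh
set -e
# 1. build the builder image from the templates directory
docker build -t myapp-builder ./templates
# 2. run it and copy the generated site out of /tmp/build into ./web/build
docker run --rm -v "$PWD/web/build:/out" myapp-builder sh -c 'cp -a /tmp/build/. /out/'
# 3. build the webserver image; its Dockerfile COPYs build/ into the document root
docker build -t myapp:latest ./web
# 4. start the stack defined in docker-compose.yml
docker-compose up -d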
You can also use a tool like dobi instead of writing a script (disclaimer: I am the author of this tool). There is an example of building a minimal docker image which is very similar to what you're trying to do.
Your dobi.yaml might look something like this:
image=builder:
  image: myapp-dev
  context: ./templates

job=templates:
  use: builder

image=webserver:
  image: myapp
  tags: [latest]
  context: .
  depends: [templates]

compose=serve:
  files: [docker-compose.yml]
  depends: [webserver]
Now if you run dobi serve it will do all the steps for you. Each step will only be run if files have changed.