How to mount a tmp directory with docker-compose?

How do you specify a mount volume in docker-compose, so your Dockerfile can access files from it?
I have a docker-compose.yml like:
version: "3.6"
services:
app_test:
build:
context: ..
dockerfile: Dockerfile
volumes:
- /tmp/cache:/tmp/cache
And in my Dockerfile, I want to access files from /tmp/cache via RUN like:
RUN cat /tmp/cache/somebinary.tar.gz | processor.sh
However, running docker-compose gives me the error:
/tmp/cache/somebinary.tar.gz does not exist
Even though on the host, ls /tmp/cache/somebinary.tar.gz confirms it does exist.
Why is docker-compose/Docker unable to mount or access my host directory?

Dockerfile RUN commands are executed at build time of the image.
The volume is mounted at run time once the image is run as a container. So the mounted files will not be available until you spawn a container based on your image.
To define the command to run at container start, use CMD or, depending on how you intend your image to be used, ENTRYPOINT.
You would need to add this at the end of your Dockerfile:
CMD cat /tmp/cache/somebinary.tar.gz | processor.sh
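For example, a minimal sketch of the full Dockerfile (assuming an alpine base and that processor.sh lives in your build context; both are placeholders, not taken from the question):

FROM alpine:latest
# The script has to be baked into the image at build time
COPY processor.sh /usr/local/bin/processor.sh
RUN chmod +x /usr/local/bin/processor.sh
# /tmp/cache is empty at build time; the mount only appears when the
# container starts, so the pipeline belongs in CMD rather than RUN
CMD ["/bin/sh", "-c", "cat /tmp/cache/somebinary.tar.gz | processor.sh"]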

Related

How to COPY in Dockerfile after volume is mounted

I have a docker-compose.yml file that mounts a couple of volumes. Here is a snippet:
version: '3'
services:
  gitlab-runner:
    build: '.'
    volumes:
      - gitlab-config-volume:/etc/gitlab-runner
volumes:
  gitlab-config-volume:
    external: false
However, my Dockerfile has a COPY action into /etc/gitlab-runner/certs
FROM gitlab/gitlab-runner:latest
COPY files/ca.crt /etc/gitlab-runner/certs/ca.crt
The problem is that this COPY happens before the mount. Is there a way I can work around this issue?
The easiest approach is to not mount a volume over content configured in your Dockerfile.
services:
  gitlab-runner:
    build: .
    # no volumes:
If the volume actually is configuration, and it includes environment-specific settings like TLS CA certificates, it might not make sense to include this in your image at all. It will usually be easier to inject these files from the host system than to try to copy them into a named Docker volume.
services:
  gitlab-runner:
    # (could still `build: .`, but don't copy config into the image)
    image: gitlab/gitlab-runner:latest
    volumes:
      # uses a host directory and not a named volume
      - ./gitlab-config:/etc/gitlab-runner
mkdir -p gitlab-config/certs
cp files/ca.crt gitlab-config/certs/ca.crt
docker-compose up -d
Finally, if you really want the file to be included in your image, but to be able to mount some other content into it, you need to run code in your container to copy the file into the volume. This happens at container startup, so it will happen after the volume mount. This is the most complex option, though.
The shell script is straightforward:
#!/bin/sh
# Copy the CA certificate into the configuration directory if it is not already there.
if [ ! -f /etc/gitlab-runner/certs/ca.crt ]; then
  mkdir -p /etc/gitlab-runner/certs
  cp /opt/config/ca.crt /etc/gitlab-runner/certs/
fi
# Run the main container command.
exec "$@"
In your Dockerfile, you need to make this script be the ENTRYPOINT (with JSON-array syntax); you need to repeat the CMD from the base image; and you need to copy the default file into the filesystem, somewhere other than the volume mount point. Doing this correctly involves knowing some details of the base image, and finding its Dockerfile is all but essential.
FROM gitlab/gitlab-runner:latest
COPY files/ca.crt /opt/config/
COPY entrypoint.sh /opt/config/
RUN chmod +x /opt/config/entrypoint.sh
# These last two lines are derived from the base image and are not generic
ENTRYPOINT ["/usr/bin/dumb-init", "/opt/config/entrypoint.sh", "/entrypoint"]
CMD ["run", "--user=gitlab-runner", "--working-directory=/home/gitlab-runner"]

Using the same volume for two Docker containers

I have two containers, one of which provides a file that I need in another container, and I want to make the first container write that file to a volume, then have the second container access that volume and read the file.
I have the following docker-compose.yml file:
version: '3'
volumes:
  web_data:
services:
  build_jar:
    build:
      context: .
      dockerfile: Dockerfile-gradle
    volumes:
      - web_data:/workdir
  generate_html:
    depends_on:
      - build_jar
    ports:
      - "8080:80"
    build: .
    volumes:
      - web_data:/workdir
Dockerfile-gradle
FROM gradle:latest AS builder
USER root
RUN mkdir /workspace
ADD . /workspace
RUN cd /workspace && gradle shadowJar --no-daemon
RUN mkdir /workdir
RUN cp /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar /workdir/stat.jar
Dockerfile
FROM openjdk:8-jre-slim AS java
USER root
RUN java -jar /workdir/stat.jar
First of all, I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually, which seems not to be the case. So I create it using mkdir, and I do actually get my data saved: I can go to /var/lib/docker/volumes on my host machine and find the corresponding volume with the data the container wrote. Great.
Well, secondly, now I need to use this volume with another container, which also does not have the workdir directory existing already. So if I try to access /workdir/stat.jar, it does not exist, and if I manually create /workdir, it's an empty directory. How do I get the files on the volume that the first container put there? Am I missing something in either Dockerfiles or docker-compose.yml?
When you build a Docker image, the Dockerfile has no access to Docker networking, volumes, or any other part of the Docker ecosystem. It's not unreasonable to think of docker build as acting like Maven or Gradle: it produces an image that you can copy to other systems and run elsewhere, but then at build time it can't access data that will eventually be present when you run it.
Correspondingly, as a general rule, Docker images should be self-contained. An image should usually contain its language runtime and any code or artifacts necessary to run the application; sharing code (or jar files) via volumes isn't usually a best practice. (Of particular note, if you do this successfully, Docker will always use the old jar file in the volume, in both containers, in preference to what's built into the image.)
In this context it seems more like you're looking for a multi-stage build. You can combine these two Dockerfiles together, and then COPY the jar file from the first image to the second one. That results in
FROM gradle:latest AS builder
WORKDIR /workspace
COPY . .
RUN gradle shadowJar --no-daemon
FROM openjdk:8-jre-slim AS java
WORKDIR /workdir
COPY --from=builder /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar stat.jar
CMD java -jar /workdir/stat.jar
In the docker-compose.yml file, you can delete the volume along with the no-op container that does the build:
version: '3.8'
services:
  generate_html:
    ports:
      - "8080:80"
    build: .
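As a sketch of the resulting workflow, the whole build-and-share dance collapses to:

docker-compose build   # runs the gradle stage, then copies the jar into the runtime image
docker-compose up -d   # starts generate_html; no shared volume or build_jar service needed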
I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually
That is not the case: when you declare a volume mapping for a service, you only declare a mapping between the volume and a path in the future container. Your container image should guarantee that something exists at that path.
I need to use this volume with another container, which also does not have the workdir directory existing already
Your confusion probably comes from expecting volumes to work at build time; unfortunately, they do not.

Docker: sync files created by dockerfile with host through volume

When I use docker-compose with volumes to sync my files from host to container, I can't see any new files created by the Dockerfile.
my docker-compose.yml:
version: "3"
services:
web:
image: nginx:latest
volumes:
- ./:/code
links:
- php
php:
build: .
volumes:
- ./:/code
My dockerfile looks like this:
FROM php:7-fpm
WORKDIR /code
RUN touch testfile
Of course that's a simplified example, but why do I not see the "testfile" on my host system? If I use docker-compose exec php touch testfile, everything works as expected: I see the testfile on my host.
From my understanding, I need to see it outside of my container for it to be shared with the other containers that use the same volume (in this example nginx).
When you run RUN touch testfile within your Dockerfile, it creates the testfile within the image itself.
When you then start your container and bind-mount your ./ directory to /code, the mount covers the existing /code folder in the image, which is why you see it as empty. If you didn't add the volume mount in your compose file, the testfile would be in there.
Note: I don't fully understand your use case, but if you wanted your image to create that file within the volume mount, you would need to do it from your entrypoint, as sketched below.
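As a sketch of that approach (the script name is hypothetical), the file is created at container start, after the bind mount, so it shows up in the shared host directory too:

FROM php:7-fpm
WORKDIR /code
COPY docker-entrypoint.sh /usr/local/bin/
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["php-fpm"]

And docker-entrypoint.sh:

#!/bin/sh
# Runs at container start, after the volume mount, so the file
# appears in the bind-mounted host directory as well
touch /code/testfile
# Hand off to the main container command (php-fpm by default)
exec "$@"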

Mounted directory empty with docker-compose and custom Dockerfile

I am very (read very) new to Docker so experimenting. I have created a very basic Dockerfile to pull in Laravel:
FROM composer:latest
RUN composer_version="$(composer --version)" && echo $composer_version
RUN composer global require laravel/installer
WORKDIR /var/www
RUN composer create-project --prefer-dist laravel/laravel site
My docker-compose.yml file looks like:
version: '3.7'
services:
  laravel:
    build:
      context: .
      dockerfile: laravel.dockerfile
    container_name: my_laravel
    network_mode: host
    restart: on-failure
    volumes:
      - ./site:/var/www/site
When I run docker-compose up, the ./site directory is created but the contents are empty. I've put this in docker-compose as I plan on including other things like nginx, mysql, php etc.
The command:
docker run -v "/where/i/want/data/site:/var/www/site" my_laravel
Results in the same behaviour.
I know the install is successful as I modified my dockerfile with the following two lines appended to it:
WORKDIR /var/www/site
RUN ls -la
Which gives me the correct listing.
Clearly misunderstanding something here. Any help appreciated.
EDIT: So, I was able to get this to work... although it is slightly more difficult than just specifying a path.
You can accomplish this by specifying a volume in docker-compose.yml. The path to the directory (on the host) is labeled as device in the compose file. It appears that the root of the path has to be an actual volume (possibly a share would work), but the 'destination' of the path can be a directory on the specified volume.
I created a new volume called docker on my machine, but I suppose you could do this with your existing disk/volume.
I am on a Mac and this docker-compose.yml file worked for me:
version: '3.7'
services:
  nodemon-test:
    container_name: my-nodemon-test
    image: oze4/nodemon-docker-test
    ports:
      - "1337:1337"
    volumes:
      - docker_test_app:/app # see comment below on which name to use here
volumes:
  docker_test_app: # use this name under `volumes:` for the service
    name: docker_test_app
    driver: local
    driver_opts:
      o: bind
      type: none
      device: /Volumes/docker/docker_test_app
The container specified exists in my DockerHub; this is the source code for it, just in case you are worried about anything malicious. I created it a couple of weeks ago to help someone else on StackOverflow.
(Screenshot: files from the container shown on my machine, the host.)
You can read more about Docker Volume configs here if you would like.
ORIGINAL ANSWER:
It looks like you are trying to share the build directory with your host machine. After some testing, it appears Docker will overwrite the specified path in the container with the contents of the path on the host.
If you run docker logs my_laravel you should see an error about missing files at /var/www/site. So, even though the build is successful, once Docker mounts the directory from your machine (./site) onto the container (/var/www/site), it overwrites the path within the container with the contents of the path on your host, which is empty.
To test and make sure the contents of /var/www/site are in fact being overwritten, you can run docker exec -it my_laravel /bin/bash (you may need to replace /bin/bash with /bin/sh). This will give you command-line access inside the container. From there you can do ls -a /var/www/site.
Furthermore, you can pre-stage ./site with a random test file (test.txt or whatever), then docker-compose up -d, then run the same docker exec command from the step above and see whether the staged test.txt file is now inside the container; that gives you definitive evidence that when you mount a volume, the data on your host overwrites the data in the container.
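Concretely, that check might look like this (a sketch using the container name from the question):

echo hello > site/test.txt                       # pre-stage a marker file on the host
docker-compose up -d
docker exec -it my_laravel ls -a /var/www/site   # only test.txt appears; the built files are hidden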
With that being said, doing something like this to share a log directory will work: the volume path specified in the container is still overwritten, but the difference is that the container writes to that path at run time; it doesn't rely on it for config files/app files.
Hope this helps.

What is happening when using ../ with docker-compose volume

I am having problems with writing files out from inside a docker container to my host computer. I believe this is a privilege issue and prefer not to set privileged: true. A workaround for writing out files is prepending ../ to a volume in my docker-compose.yml file. For example,
version: '3'
services:
  example:
    volumes:
      - ../:/example
What exactly is ../ doing here? Is it taking from the container's privileges and "going up" a directory to the host machine? Without ../, I am unable to write out files to my host machine.
Specifying a path as the source, as opposed to a volume name, bind mounts a host path to a path inside the container. In your example, ../ will be visible inside the container at /example on a recent version of docker.
Note that docker build can only access files in the build context directory and below, not above it, unless you specify the higher directory as the context.
To run the docker build from the parent directory, make the parent the context and point -f at the Dockerfile inside it:
docker build -f myapp/Dockerfile /home/me
As opposed to building with the app directory itself as the context:
docker build /home/me/myapp
Doing the same in docker-compose:
# docker-compose.yml
version: '3.3'
services:
  yourservice:
    build:
      context: /home/me
      dockerfile: myapp/Dockerfile
Or with your example:
version: '3'
services:
  example:
    build:
      context: /home/me/app
      dockerfile: docker/Dockerfile
    volumes:
      - /home/me/app:/example
Additionally, with plain docker run -v you have to supply absolute paths, not relative ones. I.e.
- /home/me/myapp/files/example:/example
(docker-compose itself resolves host paths starting with ./ or ../ relative to the compose file.)
If you have a script that is generating the Dockerfile from an unknown path, you can use:
CWD=$(pwd); echo $CWD
to refer to the current working directory, and append /.. to that from there.
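For instance, a sketch (myapp is the placeholder directory from the earlier example):

# compute an absolute path to the parent of the current directory
PARENT="$(cd "$(pwd)/.." && pwd)"
docker build -f "$PARENT/myapp/Dockerfile" "$PARENT"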
Alternately, you can build the image from a directory one level up, use a volume that you can share with an image run from a higher directory, or have your script write the file to stdout and redirect the command's output to the file you need.
See also: Docker: adding a file from a parent directory
The statement volumes: ['../:/example'] makes the parent directory of the directory containing docker-compose.yml on the host (../) visible inside the container at /example. Host directory bind-mounts like this, plus some equivalent constructs using a named volume attached to a specific host directory, are the only way a container can write out to the host filesystem.
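As a minimal sketch of that behaviour (the image and command are arbitrary placeholders):

services:
  example:
    image: alpine
    command: sh -c 'echo generated > /example/out.txt'
    volumes:
      - ../:/example # out.txt lands in the parent directory on the host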
