How to COPY in a Dockerfile after a volume is mounted - docker

I have a docker-compose.yml file that mounts a couple of volumes. Here is a snippet:
version: '3'
services:
  gitlab-runner:
    build: '.'
    volumes:
      - gitlab-config-volume:/etc/gitlab-runner
volumes:
  gitlab-config-volume:
    external: false
However, my Dockerfile has a COPY action into /etc/gitlab-runner/certs
FROM gitlab/gitlab-runner:latest
COPY files/ca.crt /etc/gitlab-runner/certs/ca.crt
The problem is that this COPY happens before the mount. Is there a way I can work around this issue?

The easiest approach is to not mount a volume over content configured in your Dockerfile.
services:
  gitlab-runner:
    build: .
    # no volumes:
If the volume actually is configuration, and it includes environment-specific settings like TLS CA certificates, it might not make sense to include this in your image at all. It will usually be easier to inject these files from the host system than to try to copy them into a named Docker volume.
services:
  gitlab-runner:
    # (could still `build: .`, but don't copy config into the image)
    image: gitlab/gitlab-runner:latest
    volumes:
      # uses a host directory and not a named volume
      - ./gitlab-config:/etc/gitlab-runner
mkdir -p gitlab-config/certs
cp files/ca.crt gitlab-config/certs/ca.crt
docker-compose up -d
Finally, if you really want the file to be included in your image, but to be able to mount some other content into it, you need to run code in your container to copy the file into the volume. This happens at container startup, so it will happen after the volume mount. This is the most complex option, though.
The shell script is straightforward:
#!/bin/sh
# Copy the CA certificate into the configuration directory if it is not already there.
if [ ! -f /etc/gitlab-runner/certs/ca.crt ]; then
  mkdir -p /etc/gitlab-runner/certs
  cp /opt/config/ca.crt /etc/gitlab-runner/certs/ca.crt
fi
# Run the main container command.
exec "$@"
In your Dockerfile, you need to make this script be the ENTRYPOINT (with JSON-array syntax); you need to repeat the CMD from the base image; and you need to copy the default file into the filesystem, somewhere other than the volume mount point. Doing this correctly involves knowing some details of the base image, and finding its Dockerfile is all but essential.
FROM gitlab/gitlab-runner:latest
COPY files/ca.crt /opt/config/
COPY entrypoint.sh /opt/config/
# These last two lines are derived from the base image and are not generic
ENTRYPOINT ["/dumb-init", "/opt/config/entrypoint.sh", "/entrypoint"]
CMD ["run", "--user=gitlab-runner", "--working-directory=/home/gitlab-runner"]

Related

How to mount a tmp directory with docker-compose?

How do you specify a mount volume in docker-compose, so your Dockerfile can access files from it?
I have a docker-compose.yml like:
version: "3.6"
services:
app_test:
build:
context: ..
dockerfile: Dockerfile
volumes:
- /tmp/cache:/tmp/cache
And in my Dockerfile, I want to access files from /tmp/cache via RUN like:
RUN cat /tmp/cache/somebinary.tar.gz | processor.sh
However, running docker-compose gives me the error:
/tmp/cache/somebinary.tar.gz does not exist
Even though on the host, ls /tmp/cache/somebinary.tar.gz confirms it does exist.
Why is docker-compose/Docker unable to mount or access my host directory?
Dockerfile RUN commands are executed at image build time.
The volume is mounted at run time, once the image is run as a container, so the mounted files will not be available until you spawn a container from your image.
To define the commands to run at container run time, use CMD or, depending on how you intend your image to be used, ENTRYPOINT.
You would need to add this at the end of your Dockerfile:
CMD cat /tmp/cache/somebinary.tar.gz | processor.sh
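As a sketch of the resulting Dockerfile (the base image and the way processor.sh gets into the image are assumptions, since the full Dockerfile isn't shown in the question):
FROM alpine:3.19
# processor.sh is assumed to be in the build context; adjust to your real base image
COPY processor.sh /usr/local/bin/processor.sh
RUN chmod +x /usr/local/bin/processor.sh
# /tmp/cache only exists at run time, so the file is read in CMD, not RUN
CMD cat /tmp/cache/somebinary.tar.gz | /usr/local/bin/processor.sh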

Using the same volume for two Docker containers

I have two containers: one of them produces a file that the other one needs. I want the first container to write that file to a volume and then have the second container access that volume and read the file.
I have the following docker-compose.yml file:
version: '3'
volumes:
  web_data:
services:
  build_jar:
    build:
      context: .
      dockerfile: Dockerfile-gradle
    volumes:
      - web_data:/workdir
  generate_html:
    depends_on:
      - build_jar
    ports:
      - "8080:80"
    build: .
    volumes:
      - web_data:/workdir
Dockerfile-gradle
FROM gradle:latest AS builder
USER root
RUN mkdir /workspace
ADD . /workspace
RUN cd /workspace && gradle shadowJar --no-daemon
RUN mkdir /workdir
RUN cp /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar /workdir/stat.jar
Dockerfile
FROM openjdk:8-jre-slim AS java
USER root
RUN java -jar /workdir/stat.jar
First of all, I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually, which seems not to be the case. So I create it using mkdir, and I do actually get my data saved: I can go to /var/lib/docker/volumes on my host machine and find the corresponding volume with the data the container wrote. Great.
Secondly, now I need to use this volume with another container, which also does not have the /workdir directory already. So if I try to access /workdir/stat.jar, it does not exist, and if I manually create /workdir, it is an empty directory. How do I get the files on the volume that the first container put there? Am I missing something in either the Dockerfiles or docker-compose.yml?
When you build a Docker image, the Dockerfile has no access to Docker networking, volumes, or any other part of the Docker ecosystem. It's not unreasonable to think of docker build as acting like Maven or Gradle: it produces an image that you can copy to other systems and run elsewhere, but then at build time it can't access data that will eventually be present when you run it.
Correspondingly, as a general rule, Docker images should be self-contained. An image should usually contain its language runtime and any code or artifacts necessary to run the application; sharing code (or jar files) via volumes isn't usually a best practice. (Of particular note, if you do this successfully, Docker will always use the old jar file in the volume, in both containers, in preference to what's built into the image.)
In this context it seems more like you're looking for a multi-stage build. You can combine these two Dockerfiles together, and then COPY the jar file from the first image to the second one. That results in
FROM gradle:latest AS builder
WORKDIR /workspace
COPY . .
RUN gradle shadowJar --no-daemon
FROM openjdk:8-jre-slim AS java
WORKDIR /workdir
COPY --from=builder /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar stat.jar
CMD java -jar /workdir/stat.jar
In the docker-compose.yml file, you can delete the volume along with the no-op container that does the build:
version: '3.8'
services:
  generate_html:
    ports:
      - "8080:80"
    build: .
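Assuming both stages live in the single combined Dockerfile above, a quick way to convince yourself the jar is baked into the image rather than coming from a volume:
docker-compose build
docker-compose run --rm generate_html ls -l /workdir/stat.jar
docker-compose up -d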
I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually
That is not how it works: when you declare a volume mapping for a service, you only declare a mapping between the volume and a path in the future container. Your container image should guarantee that something exists at that path.
I need to use this volume with another container, which also does not have the workdir directory existing already
Your confusion probably comes from expecting volumes to work at build time, which unfortunately is not the case.

Docker: sync files created by Dockerfile with host through volume

When I use docker-compose with volumes to sync my files from host to container, I can't see any new files created in the Dockerfile.
My docker-compose.yml:
version: "3"
services:
web:
image: nginx:latest
volumes:
- ./:/code
links:
- php
php:
build: .
volumes:
- ./:/code
My Dockerfile looks like this:
FROM php:7-fpm
WORKDIR /code
RUN touch testfile
Of course that's a simplified example, but why do I not see the testfile on my host system? If I use docker-compose exec php touch testfile, everything works as expected: I see the testfile on my host.
From my understanding, I need to see it outside of my container for it to be shared with the other containers using the same volume (in this example nginx).
When you run RUN touch testfile in your Dockerfile, it creates testfile within the image itself.
When you start your container and bind-mount your ./ directory to /code, the mount covers the existing /code folder in the image, which is why you see it as empty. If you didn't add the volume mount in your compose file, the testfile would be in there.
Note: I don't fully understand your use case, but if you want the container to create that file within the volume mount, you need to do it in your entrypoint, as in the sketch below.
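A minimal sketch of such an entrypoint script (the name docker-entrypoint.sh and the touch are just illustrations of the idea):
#!/bin/sh
# Runs at container start, after ./ has been mounted over /code,
# so the file ends up in the shared bind mount
touch /code/testfile
# Hand control back to the main container command
exec "$@"
You would COPY this script into the image, set it as the ENTRYPOINT in JSON-array form, and keep the base image's php-fpm CMD so the container still runs PHP-FPM afterwards.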

How can I add a file to a volume in a Docker image, using values from the docker-compose.yml?

I have this .env file:
admin=admin
password=adminsPassword
stackName=integration-demo
the values of which are used in the docker-compose.yml file, like this:
myService:
  build:
    context: .
    dockerfile: myService.Dockerfile
    args:
      - instance=${stackName}.local
      - admin=${admin}
      - password=${password}
  volumes:
    - ./config:/config
I want to use these values in the Dockerfile, like this:
FROM openjdk:8-jdk-alpine
ARG docker_properties_file=Username=$admin\nPassword=$password\nHost=$instance
RUN $docker_proprties_file >> config/gradle-docker.properties
so that I have a gradle-docker.properties file that looks like:
username=admin
password=adminsPassword
host=integration.demo.local
in the /config directory.
However, no gradle-docker.properties file is getting written.
How can I use the variable in a docker-compose.yml file to add data to a volume?
Plain Docker and Docker Compose don’t have this capability. You can create the file outside of Docker on the host and mount it into the container as you show, but neither Docker nor Compose has the templating capability you would need to be able to do this.
The overall approach you’re describing in the question builds a custom image for each set of configuration options. That’s not really a best practice: imagine needing to recompile ls because you attached a USB drive you needed to look at.
One thing you can do in plain Docker is teach the image how to create its own configuration file at startup time. You can do that with an entrypoint script, for example:
#!/bin/sh
# I am docker-entrypoint.sh
# Create the config file
cat >config/gradle-docker.properties <<EOF
username=$USERNAME
et=$CETERA
EOF
# Run the main container process
exec "$#"
In your Dockerfile, COPY this file into the image and set it as the ENTRYPOINT; leave your CMD unchanged. You must use the JSON-array form of the ENTRYPOINT directive.
...
COPY docker-entrypoint.sh .
RUN chmod +x docker-entrypoint.sh
ENTRYPOINT ["./docker-entrypoint.sh"]
CMD ["java", "-jar", "application.jar"]
(In Kubernetes, the Helm package manager does have a templating system that can create content for a ConfigMap object that can be injected into a pod; but that’s a significant amount of extra machinery.)

docker-compose named volume copy contents on initial start

I may be a little confused about how volumes work; I keep reading the same things over and over, and it seems like this should be working. I want the contents of a folder inside the container to be copied over when the volume is initialized for the first time.
I have a Dockerfile like this:
https://github.com/docker-library/tomcat/blob/f6dc3671bf56465917b52c8df4356fa8f0ebafcd/7/jre7/Dockerfile
And right before
EXPOSE 8080
CMD ["catalina.sh", "run"]
I have something like this:
Tomcat Dockerfile
VOLUME ["/opt/tomcat/conf"]
EXPOSE 8080
CMD ["catalina.sh", "run"]
When I build this image, I tag it as tomcat.
Then I have another Dockerfile that sets a bunch of environment variables and adds a script, like so:
MyApp Dockerfile
FROM tomcat
ENV SOME_VAR=Test1
COPY assets/script.sh /script.sh
The second image builds from the first image and just adds a script and sets some settings. So far so good.
I want to do something like this in my docker-compose.yml file:
Docker Compose file
website:
  image: myapp
  ports:
    - "8000:8080"
  volumes:
    - /srv/myapp/conf:/opt/tomcat/conf
I want the contents of /opt/tomcat/conf to be copied into /srv/myapp/conf when that folder is first created. Everything I read suggests that this should work, but it just creates the folder and doesn't copy the contents. Am I missing something here?
Basically I have this issue:
https://github.com/moby/moby/issues/18670
Oh, and my docker-compose YAML file uses version 2.1, if that makes a difference.
What you are looking for is not possible when you bind-mount a host directory into the container. It only works with a named volume: on first use, Docker copies the content of the image's directory into the volume. You need to change your compose file to:
version: '3'
services:
  website:
    image: myapp
    ports:
      - "8000:8080"
    volumes:
      - appconfig:/opt/tomcat/conf
volumes:
  appconfig: {}
If you want to get the config out onto the host, you can use a shell script together with your original compose file:
#!/bin/bash
if [ ! -d "/srv/myapp/conf" ]; then
  mkdir -p /srv/myapp/conf
  docker create --name myappconfig myapp
  docker cp myappconfig:/opt/tomcat/conf /srv/myapp/
  docker rm myappconfig
fi
docker-compose up -d
For this to work, the directory should not exist the first time you run the script.
