I'm building a web application with Symfony, using Docker images that contain the application code, and I'm trying to find a way to share the source code between the application container and the nginx container.
Currently I use a named volume: it's fine for uploaded data, because I want to persist those files between application versions.
But when I use a named volume to share source code between containers, it creates a conflict when I update the application: the data in the named volume is still the previous data. I'm then forced to (roughly the command sequence sketched after this list):
stop containers
delete app and nginx containers
delete the source volume
then recreate the containers from the new image, which re-creates the volume and puts the new code inside.
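In compose terms, that cleanup amounts to something like this (a sketch; myproject_source is a placeholder for whatever name compose gave the source volume):
docker-compose down
docker volume rm myproject_source
docker-compose up -d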
Previously, we could declare a VOLUME in the app's Dockerfile and use volumes_from to retrieve the data.
Does anyone have an idea?
Thanks a lot
Unfortunately, there is no trivial solution at this time.
If you mount a volume containing existing data onto a directory, the directory's content will be overridden by the volume's content. This behavior is intended by the Docker developers, since volumes are often used as storage for valuable data; it protects you from losing that data.
There is no option in docker or docker-compose to change this behavior.
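A quick way to observe this behavior (a sketch; the volume name is made up, and /public is assumed to be where the image keeps the code):
docker volume create demo
# first use: the empty volume is populated from the image's /public
docker run --rm -v demo:/public my_web_app ls /public
# after rebuilding the image with new code, the old volume content still wins
docker run --rm -v demo:/public my_web_app ls /public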
But you can implement your own solution. One simple way to do that is to run a script which copies your files into a volume shared between both services.
For example, assuming your files are in the ./public directory and you start the app with the command app:
docker-compose.yml
services:
nginx-proxy:
image: nginx
volumes:
- assets:/www/data:ro
web:
image: my_web_app
volumes:
- assets:/assets:rw
volumes:
assets:
Dockerfile
...
COPY public ./public
COPY entrypoint.sh ./
CMD ["sh", "./entrypoint.sh"]
entrypoint.sh
#!/bin/sh
# Copy the built assets into the shared volume, then start the app
cp -r ./public/* /assets
exec app
This will copy your files into the /www/data directory of the nginx-proxy when the application container starts. If you want to copy files even when you run a custom command on the service, change CMD to ENTRYPOINT.
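For example, a minimal sketch of the ENTRYPOINT variant; the script must then end with exec "$@" so the custom command (or the default CMD) still runs:
Dockerfile
...
COPY entrypoint.sh ./
ENTRYPOINT ["sh", "./entrypoint.sh"]
CMD ["app"]
entrypoint.sh
#!/bin/sh
cp -r ./public/* /assets
exec "$@"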
Related
Supposed I have a Docker container and a folder on my host /hostFolder. Now if I want to add this folder to the Docker container as a volume, then I can do this either by using ADD in the Dockerfile or mounting it as a volume.
So far, so good.
Now /hostFolder contains a sub-folder, /hostFolder/subFolder.
I want to mount /hostFolder into the Docker container (read-write or read-only does not matter; both work for me), but I do NOT want /hostFolder/subFolder included. I want to exclude it, and I also want the Docker container to be able to make changes to this sub-folder without them being changed on the host as well.
Is this possible? If so, how?
Using docker-compose I'm able to use node_modules locally, but ignore it in the docker container using the following syntax in the docker-compose.yml
volumes:
- './angularApp:/opt/app'
- /opt/app/node_modules/
So everything in ./angularApp is mapped to /opt/app, and then I create another mounted volume at /opt/app/node_modules/, which is now an empty directory, even though ./angularApp/node_modules on my local machine is not empty.
If you want to have subdirectories ignored by docker-compose but persistent, you can do the following in docker-compose.yml:
volumes:
node_modules:
services:
server:
volumes:
- .:/app
- node_modules:/app/node_modules
This will mount your current directory as a shared volume, but mount a persistent docker volume in place of your local node_modules directory. This is similar to the answer by #kernix, but this will allow node_modules to persist between docker-compose up runs, which is likely the desired behavior.
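One caveat with this approach (a sketch; the myproject_ prefix depends on your compose project name): because the named volume persists, it is not refreshed when you rebuild the image after adding a package, so you may occasionally need to remove it by hand:
docker-compose down
docker volume rm myproject_node_modules
docker-compose up --build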
For those trying to get a nice workflow going where node_modules isn't overridden by the local directory, this might help.
Change your docker-compose to mount an anonymous persistent volume on node_modules to prevent your local directory from overriding it. This has been outlined in this thread a few times.
services:
server:
build: .
volumes:
- .:/app
- /app/node_modules
This is the important bit we were missing: when spinning up your stack, use docker-compose up -V. Without this, if you added a new package and rebuilt your image, the container would still be using the node_modules from your initial docker-compose launch.
-V, --renew-anon-volumes Recreate anonymous volumes instead of retrieving
data from the previous containers.
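A typical invocation after adding a package, rebuilding the image and renewing the anonymous volumes in one go:
docker-compose up -V --build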
To exclude a file, use the following
volumes:
- /hostFolder:/folder
- /dev/null:/folder/fileToBeExcluded
With the docker command line:
docker run \
--mount type=bind,src=/hostFolder,dst=/containerFolder \
--mount type=volume,dst=/containerFolder/subFolder \
...other-args...
The -v option may also be used (credit to Bogdan Mart), but --mount is clearer and recommended.
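For reference, a sketch of the same command using -v (a -v flag with only a container path creates an anonymous volume at that path):
docker run \
    -v /hostFolder:/containerFolder \
    -v /containerFolder/subFolder \
    ...other-args...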
First, using the ADD instruction in a Dockerfile is very different from using a volume (either via the -v argument to docker run or the VOLUME instruction in a Dockerfile). The ADD and COPY commands just take a copy of the files at the time docker build is run. These files are not updated until a fresh image is created with the docker build command. By contrast, using a volume is essentially saying "this directory should not be stored in the container image; instead use a directory on the host"; whenever a file inside a volume is changed, both the host and container will see it immediately.
I don't believe you can achieve what you want using volumes, you'll have to rethink your directory structure if you want to do this.
However, it's quite simple to achieve using COPY (which should be preferred to ADD). You can either use a .dockerignore file to exclude the subdirectory, or you could COPY all the files then do a RUN rm bla to remove the subdirectory.
Remember that any files you add to image with COPY or ADD must be inside the build context i.e. in or below the directory you run docker build from.
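For example, a minimal sketch of the .dockerignore approach, assuming you run docker build from /hostFolder (the base image here is arbitrary):
.dockerignore
subFolder
Dockerfile
FROM ubuntu
# subFolder is excluded from the build context, so it is never copied
COPY . /folder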
For those who also had the issue that the node_modules folder would still be overwritten from the local system, and the other way around:
volumes:
node_modules:
services:
server:
volumes:
- .:/app
- node_modules:/app/node_modules/
This is the solution, with the trailing / after node_modules being the fix.
It looks like the old solution doesn't work anymore (at least for me).
Creating an empty folder and mapping the target folder to it helped, though.
volumes:
- ./angularApp:/opt/app
- .empty:/opt/app/node_modules/
I found this link which saved me: Working with docker bind mounts and node_modules.
This working solution will create an "exclude" named volume in the docker volume manager. The volume name "exclude" is arbitrary, so you can use a custom name for the volume instead of exclude.
services:
node:
command: nodemon index.js
volumes:
- ./:/usr/local/app/
# the named volume below prevents our host system's node_modules from being mounted
- exclude:/usr/local/app/node_modules/
volumes:
exclude:
You can see more info about volumes in the official docs: Use a volume with docker compose.
To exclude a mounted file that comes from your machine's bind mount, you have to overwrite it by allocating a volume to that same file.
In your config file:
services:
server:
build: .
volumes:
- .:/app
An example in your Dockerfile:
# Image Location
FROM node:13.12.0-buster
VOLUME /app/you_overwrite_file
I am checking the docker documentation on how to use named volumes to share data between containers.
In Populate a volume using a container it is specified that:
If you start a container which creates a new volume, as above, and the container has files or directories in the directory to be mounted (such as /app/ above), the directory’s contents are copied into the volume. The container then mounts and uses the volume, and other containers which use the volume also have access to the pre-populated content.
So I did a simple example where:
I start a container which creates the volume and mounts it to a directory with existing files
I start a second container on which I mount the volume and indeed I can see the first container's files.
So far so good.
However, I wanted to see if it is possible to have pre-populated content from more than one container.
What I did was:
Create two simple images which have their respective configuration files in the same directory
FROM alpine:latest
WORKDIR /opt/test
RUN mkdir -p "/opt/test/conf" && \
echo "container from image 1" > /opt/test/conf/config_1.cfg
FROM alpine:latest
WORKDIR /opt/test
RUN mkdir -p "/opt/test/conf" && \
echo "container from image 2" > /opt/test/conf/config_2.cfg
Create a docker-compose file which defines a named volume that is mounted on both services:
services:
test_container_1:
image:
test_image_1
volumes:
- test_volume:/opt/test/conf
tty: true
test_container_2:
image:
test_image_2
volumes:
- test_volume:/opt/test/conf
tty: true
volumes:
test_volume:
Started the services.
> docker-compose -p example up
Creating network "example_default" with the default driver
Creating volume "example_test_volume" with default driver
Creating example_test_container_2_1 ... done
Creating example_test_container_1_1 ... done
Attaching to example_test_container_1_1, example_test_container_2_1
According to the logs, container_2 was created first and pre-populated the volume. The volume was then mounted into container_1, and the only file available on the mount was apparently /opt/test/conf/config_2.cfg, effectively removing config_1.
So my question is: is it possible to have a volume populated with data from two or more containers?
The reason I want to explore this is so that I can have additional app configuration loaded from different containers, to support a multi-tenant scenario, without having to rework the app to read the tenant configuration from different folders.
Thank you in advance
Once there is any content in a named volume at all, Docker will never automatically copy content into it. It will not merge content from two different images, update the volume if one of the images changes, or anything else.
I'd advise you to ignore the paragraph you quote in the Docker documentation. Assume any volume you mount into the container is initially empty. This matches the behavior you'll get with Docker bind-mounts (host directories), Kubernetes persistent volumes, and basically any other kind of storage besides Docker named volumes proper. Don't mount a volume over the content in your image.
If you can, restructure your application to avoid sharing files at all. One common use of named volumes I see is trying to republish static assets to a reverse proxy, for example; rather than trying to use a named volume (which will never update itself) you can COPY the static assets into a dedicated Web server image. This avoids the various complexities around trying to use a volume here.
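As a sketch of that alternative (image names and paths here are hypothetical), a multi-stage build that bakes the assets into the web server image:
FROM my_web_app AS app
FROM nginx
# copy the static assets out of the application image instead of sharing a volume
COPY --from=app /app/public /usr/share/nginx/html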
If you really don't have a choice in the matter, then you can approach this with dedicated code in both of the containers. The basic setup here is:
Have a data directory somewhere outside your application directory, and mount the volume there.
Include the original files in the image somewhere different.
In an entrypoint wrapper script, copy the original files into the data directory (the mounted volume).
Let's say for the sake of argument that you've installed the application into /opt/test, and the data directory will be /etc/test. The entrypoint wrapper script can be as short as:
#!/bin/sh
# Copy config files from the application tree into the config tree
# (overwriting anything that's already there)
cp /opt/test/* "$TEST_CONFIG_DIR"
# Run the main container command
exec "$@"
In the Dockerfile, you need to make sure that directory exists (and if you'll use a non-root user, that user needs permission to write to it).
FROM alpine
WORKDIR /opt/test
COPY ./ ./
ENV TEST_CONFIG_DIR=/etc/test
RUN mkdir "$TEST_CONFIG_DIR"
ENTRYPOINT ["./entrypoint.sh"]
CMD ["./my_app"]
Finally, in the Compose setup, mount the volume on that data directory (you can't use the environment variable, but consider the filesystem path part of the image's API):
version: '3.8'
volumes:
test_config:
services:
one:
build: ./one
volumes:
- test_config:/etc/test
two:
build: ./two
volumes:
- test_config:/etc/test
You would be able to run, for example,
docker-compose run one ls /etc/test
docker-compose run two ls /etc/test
to see both sets of files appear there.
The entrypoint script is code you control. There's nothing especially magical about it beyond the final exec "$@" line to run the main container command. If you want to ignore files that already exist, for example, or if you have a way to merge in changes, then you can implement something more clever than a simple cp command.
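For example, a sketch of a variant that leaves any existing files in the volume untouched:
#!/bin/sh
# Copy only the config files that don't already exist in the mounted volume
for f in /opt/test/*; do
    dest="$TEST_CONFIG_DIR/$(basename "$f")"
    [ -e "$dest" ] || cp "$f" "$dest"
done
# Run the main container command
exec "$@"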
I have two containers, one of which provides a file that I need in another container, and I want to make the first container write that file to a volume, then have the second container access that volume and read the file.
I have the following docker-compose.yml file:
version: '3'
volumes:
web_data:
services:
build_jar:
build:
context: .
dockerfile: Dockerfile-gradle
volumes:
- web_data:/workdir
generate_html:
depends_on:
- build_jar
ports:
- "8080:80"
build: .
volumes:
- web_data:/workdir
Dockerfile-gradle
FROM gradle:latest AS builder
USER root
RUN mkdir /workspace
ADD . /workspace
RUN cd /workspace && gradle shadowJar --no-daemon
RUN mkdir /workdir
RUN cp /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar /workdir/stat.jar
Dockerfile
FROM openjdk:8-jre-slim AS java
USER root
RUN java -jar /workdir/stat.jar
First of all, I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually, which seems to not be the case. So I create it using mkdir and I do actually get my data saved: I can go to var/lib/docker/volumes on my host machine and find the corresponding volume with the data the container wrote. Great.
Well, secondly, now I need to use this volume with another container, which also does not have the workdir directory existing already. So if I try to access /workdir/stat.jar, it does not exist, and if I manually create /workdir, it's an empty directory. How do I get the files on the volume that the first container put there? Am I missing something in either Dockerfiles or docker-compose.yml?
When you build a Docker image, the Dockerfile has no access to Docker networking, volumes, or any other part of the Docker ecosystem. It's not unreasonable to think of docker build as acting like Maven or Gradle: it produces an image that you can copy to other systems and run elsewhere, but then at build time it can't access data that will eventually be present when you run it.
Correspondingly, as a general rule, Docker images should be self-contained. An image should usually contain its language runtime and any code or artifacts necessary to run the application; sharing code (or jar files) via volumes isn't usually a best practice. (Of particular note, if you do this successfully, Docker will always use the old jar file in the volume, in both containers, in preference to what's built into the image.)
In this context it seems more like you're looking for a multi-stage build. You can combine these two Dockerfiles together, and then COPY the jar file from the first image to the second one. That results in
FROM gradle:latest AS builder
WORKDIR /workspace
COPY . .
RUN gradle shadowJar --no-daemon
FROM openjdk:8-jre-slim AS java
WORKDIR /workdir
COPY --from=builder /workspace/build/libs/datainfrastructure-1.0-SNAPSHOT-all.jar stat.jar
CMD java -jar /workdir/stat.jar
In the docker-compose.yml file, you can delete the volume along with the no-op container that did the build:
version: '3.8'
services:
generate_html:
ports:
- "8080:80"
build: .
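With the build folded into the image, a single command now rebuilds both stages and restarts the service:
docker-compose up --build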
I assumed that having created the volume in docker-compose.yml I would automatically get the directory /workdir without having to create it manually
That is not the case: when you declare a volume mapping for a service, you only declare a mapping between the volume and a path in the future container. Your container image should guarantee that something exists at that path.
I need to use this volume with another container, which also does not have the workdir directory existing already
Your confusion is probably related to the fact that you expect volumes to work at build time, which unfortunately is not the case.
I use docker-compose for a simple keycloak container and I've been trying to install a new theme for keycloak.
However, I've been unable to copy even a single file to the container using a Dockerfile. The Dockerfile and docker-compose.yml are in the same directory.
Neither of these commands works or causes any events or warnings in the logs.
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
Copying manually with
sudo docker cp test docker_keycloak_1:/tmp
works without any issues.
A quick recap on Docker:
docker build: create an image from a Dockerfile.
docker run: create a container from an image.
(You can build the image yourself or use an existing image from Docker Hub.)
Based on what you said, you have 2 options.
Create a new docker image based on the existing one and add the theme.
something like
# Dockerfile
FROM jboss/keycloak
COPY test /tmp/
COPY /home/adm/workspace/docker/keycloak-cluster/docker/kctheme/theme/login/. /opt/jboss/keycloak/themes/keycloak/login/
and then use docker build to create your new image
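For example (a sketch; my-keycloak is a hypothetical tag). Note that docker-compose only builds your Dockerfile when the service has a build: key; with image: alone, your Dockerfile is never used:
docker build -t my-keycloak .
Then point the service at the new image:
services:
  keycloak:
    image: my-keycloak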
Mount the theme into the correct directory using a docker-compose volume:
version: '3'
services:
keycloak:
image: quay.io/keycloak/keycloak:latest
volumes:
- "./docker/kctheme/theme/login:/opt/jboss/keycloak/themes/keycloak/login"
If you use COPY, files have to be inside the build context, i.e. in the same directory as your Dockerfile or a subdirectory of it, and they have to be present at build time. No absolute host paths.
/tmp as a destination is also a bit tricky, because the container's startup process might clean out /tmp, which means you would never see that file in a running container.
I am trying to wrap my mind around Docker volumes, but I must be missing something.
Let's say I have a Python app that requires some initialisation depending on env variables. What I'm trying to achieve is having a "code only" image from which I can start containers that would be mounted at execution time. The entrypoint script of the main container will then read and generate some files from/on the code-only container.
I tried to create an image to have a copy of the code
FROM ubuntu
COPY ./app /usr/local/code/app
Then docker create --name code_volume
And with docker-compose:
app:
image: python/app
hostname: app
ports:
- "2443:443"
environment:
- ENV=stuff
volumes_from:
- code_volume
I get an error from the app container saying it can't find a file in /usr/local/code/app/src, but when I run the code_volume container with bash and ls into the folder, the file is sitting there...
I tried changing access rights and adding /bin/true (having seen it in some examples), but I just can't get it to work. I checked the docker volume create feature, but it seems to be for storing/sharing data afterwards.
What am I missing? Is the entrypoint script executed before the volumes are mounted? Are there any best practices for cases like this that don't involve mounting folders and keeping one copy for every container? Should I be rethinking my containers?
You did not declare the volume on the code_volume container upon creation. Pass the -v flag to docker create, along with the image and a no-op command (your_code_image is a placeholder for the image you built above):
docker create -v /usr/local/code/app --name code_volume your_code_image /bin/true
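To check that the code is visible through the volume container (a sketch; --volumes-from is what compose's volumes_from maps to):
docker run --rm --volumes-from code_volume ubuntu ls /usr/local/code/app/src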