Suppose I have a Docker container and a folder on my host, /hostFolder. If I want to add this folder to the Docker container as a volume, I can do so either by using ADD in the Dockerfile or by mounting it as a volume.
So far, so good.
Now /hostFolder contains a sub-folder, /hostFolder/subFolder.
I want to mount /hostFolder into the Docker container (read-write or read-only does not matter; both work for me), but I do NOT want /hostFolder/subFolder to be included. I want to exclude it, and I also want the Docker container to be able to make changes to this sub-folder without those changes appearing on the host as well.
Is this possible? If so, how?
Using docker-compose, I'm able to use node_modules locally but ignore it in the Docker container, using the following syntax in docker-compose.yml:
volumes:
- './angularApp:/opt/app'
- /opt/app/node_modules/
So everything in ./angularApp is mapped to /opt/app, and then I create another mount volume at /opt/app/node_modules/, which is now an empty directory, even if ./angularApp/node_modules on my local machine is not empty.
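For context, a minimal docker-compose.yml around that snippet might look like this (the service name app and the build: . line are assumptions, not part of the original answer):

version: "3"
services:
  app:
    build: .
    volumes:
      - './angularApp:/opt/app'
      - /opt/app/node_modules/

The second entry is an anonymous volume; because it is declared inside the first mount, it masks the node_modules directory that would otherwise come from the host.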
If you want to have subdirectories ignored by docker-compose but persistent, you can do the following in docker-compose.yml:
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules
This will mount your current directory as a shared volume, but mount a persistent Docker volume in place of your local node_modules directory. This is similar to the answer by @kernix, but this will allow node_modules to persist between docker-compose up runs, which is likely the desired behavior.
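A note on how the named volume gets its contents: when an empty named volume is mounted over a path that already exists in the image, Docker copies the image's files into it on first use, so this approach assumes your Dockerfile runs npm install. After adding a dependency you have to remove the stale volume so it is repopulated from the rebuilt image; a rough sequence (the service name server comes from the snippet above):

docker-compose build server
docker-compose down -v   # also removes the project's other named volumes
docker-compose up

If the project has named volumes you want to keep (e.g. database data), remove only the node_modules volume with docker volume rm instead.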
For those trying to get a nice workflow going where node_modules isn't overridden by the local copy, this might help.
Change your docker-compose file to mount an anonymous persistent volume at node_modules to prevent your local directory from overriding it. This has been outlined in this thread a few times.
services:
  server:
    build: .
    volumes:
      - .:/app
      - /app/node_modules
This is the important bit many of us were missing: when spinning up your stack, use docker-compose up -V. Without this, if you added a new package and rebuilt your image, the container would keep using the node_modules from your initial docker-compose launch.
-V, --renew-anon-volumes Recreate anonymous volumes instead of retrieving
data from the previous containers.
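For example, after adding a new package (the service name server is taken from the snippet above, and the build step assumes your Dockerfile installs the dependencies):

docker-compose build server
docker-compose up -V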
To exclude a file, use the following
volumes:
- /hostFolder:/folder
- /dev/null:/folder/fileToBeExcluded
With the docker command line:
docker run \
--mount type=bind,src=/hostFolder,dst=/containerFolder \
--mount type=volume,dst=/containerFolder/subFolder \
...other-args...
The -v option may also be used (credit to Bogdan Mart), but --mount is clearer and recommended.
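For reference, the same pair of mounts written with -v would look roughly like this; a bare container path as the second argument creates an anonymous volume, just like the --mount line above:

docker run \
-v /hostFolder:/containerFolder \
-v /containerFolder/subFolder \
...other-args...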
First, using the ADD instruction in a Dockerfile is very different from using a volume (either via the -v argument to docker run or the VOLUME instruction in a Dockerfile). The ADD and COPY commands just take a copy of the files at the time docker build is run. These files are not updated until a fresh image is created with the docker build command. By contrast, using a volume is essentially saying "this directory should not be stored in the container image; instead use a directory on the host"; whenever a file inside a volume is changed, both the host and container will see it immediately.
I don't believe you can achieve what you want using volumes; you'll have to rethink your directory structure if you want to do this.
However, it's quite simple to achieve using COPY (which should be preferred to ADD). You can either use a .dockerignore file to exclude the subdirectory, or you could COPY all the files then do a RUN rm bla to remove the subdirectory.
Remember that any files you add to the image with COPY or ADD must be inside the build context, i.e. in or below the directory you run docker build from.
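As a minimal sketch of the .dockerignore variant, assuming the build context contains a hostFolder/ directory (the names mirror the question, but the exact layout and base image are assumptions):

.dockerignore
# keep the sub-folder out of the build context
hostFolder/subFolder

Dockerfile
FROM alpine:3.19
# subFolder is not copied because .dockerignore removes it from the context
COPY hostFolder/ /containerFolder/

Anything the container later writes to /containerFolder/subFolder stays in the container's writable layer and never appears on the host.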
For the people who also had the issue that the node_modules folder would still be overwritten from the local system, and the other way around:
volumes:
  node_modules:
services:
  server:
    volumes:
      - .:/app
      - node_modules:/app/node_modules/
This is the solution, with the trailing / after node_modules being the fix.
Looks like the old solution doesn't work anymore (at least for me).
Creating an empty folder and mapping the target folder to it helped, though.
volumes:
- ./angularApp:/opt/app
- .empty:/opt/app/node_modules/
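One practical detail, stated as an assumption about the setup rather than something from the original answer: the .empty directory should exist on the host before the stack starts (if it is missing, Docker will typically create it owned by root), and an empty placeholder file keeps it around in version control:

mkdir -p .empty
touch .empty/.gitkeep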
I found this link which saved me: Working with docker bind mounts and node_modules.
This working solution will create an "exclude" named volume in the Docker volume manager. The volume name "exclude" is arbitrary, so you can use a custom name for the volume instead of exclude.
services:
  node:
    command: nodemon index.js
    volumes:
      - ./:/usr/local/app/
      # the named volume below prevents our host system's node_modules from being mounted
      - exclude:/usr/local/app/node_modules/
volumes:
  exclude:
You can see more info about volumes in the official docs: Use a volume with Docker Compose.
To exclude a mounted file contained in the volume from your machine, you will have to overwrite it by allocating a volume to that same file.
In your config file:
services:
  server:
    build:
      context: .
      dockerfile: Dockerfile
    volumes:
      - .:/app
An example in your Dockerfile:
# Image Location
FROM node:13.12.0-buster
VOLUME /app/you_overwrite_file
We have mounted a folder on a Linux machine into our Docker container application using docker-compose:
volumes:
- /mnt/share:/mnt/share
/mnt/share is a mounted folder on the machine (not a real folder on the machine; it's our file server). If for some reason that mount is lost and then remounted, the application running in the Docker container no longer has access to the mounted folder until the container is restarted.
You might want to use a volume driver instead of bind-mounting a local filesystem.
See Share data among machines
Without knowing more about your environment it is impossible to give a more detailed answer. It would be helpful to know whether your container runs in an AWS data center, and whether you use NFSv3, NFSv4, or CIFS for mounting.
The following solution helped me to continue.
I wrote a script to check whether the folder exists.
The script is then called as a command in the docker-compose file.
version:"3"
services:
flowable-task-handler:
build: flowable-task-handler
ports:
- "8085:8085"
command: bash -c "/wait_for_file_mount.sh /mnt/share/fileshares/ && java -jar /app.jar"
wait_for_file_mount.sh
#!/bin/sh
# Used to check whether the mounted folder is ready for flowable to use
mountedfolder="$1"
until [ -d "$mountedfolder" ]; do
  echo "Error: mounted folder not found: $mountedfolder"
  sleep 2
done
It's a Spring Boot application. I have removed the ENTRYPOINT in the Dockerfile, and the application is started using the command in docker-compose (java -jar /app.jar).
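For completeness, the script has to be present inside the image so the compose command can call it. A hypothetical Dockerfile excerpt (the base image and jar path are assumptions, not taken from the original post):

FROM eclipse-temurin:17-jre
# copy the wait script and the Spring Boot jar into the image
COPY wait_for_file_mount.sh /wait_for_file_mount.sh
RUN chmod +x /wait_for_file_mount.sh
COPY target/app.jar /app.jar
# no ENTRYPOINT here; the start command comes from docker-compose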
Defining the mount propagation as :shared should fix this:
-v /autofs:/autofs:shared \
I'm not sure about docker-compose; I don't really use it. But you can define a volume with mount propagation and put this into your docker-compose file.
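An untested sketch of how that could look in a compose file, using the long mount syntax (this requires a compose file format that supports it; the service name myservice is a placeholder):

services:
  myservice:
    volumes:
      - type: bind
        source: /autofs
        target: /autofs
        bind:
          propagation: shared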
I'm running Jenkins in a Docker container. Following this article, I'm bind mounting the Docker socket in order to interact with it from the dockerized Jenkins. I'm also bind mounting the container directory jenkins_home. Here is a quick recap on my volumes:
# Jenkins
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /usr/local/bin/docker-compose:/usr/local/bin/docker-compose
- ./bar:/var/jenkins_home
I run this from the directory /home/foo/ of the host, so the following directory is created in the host filesystem (and mounted):
/home/foo/bar
Now, I have a Jenkins pipeline (mypipe) that runs a docker-compose file spinning up a MySQL container with the following volume:
# MySQL created from Jenkins
volumes:
- ./data:/var/lib/mysql
Weirdly enough, it ends up mounting:
/var/jenkins_home/workspace/mypipe/data → /var/lib/mysql
instead of:
/home/foo/bar/workspace/mypipe/data → /var/lib/mysql
Searching Stack Overflow, it turned out that this happens because:
The volume source path (left of :) does not refer to the middle container, but to the host filesystem!
And that's ok, but my question is:
Why there?
I mean, why does ./data get translated into exactly the path /var/jenkins_home/workspace/…/data, since the MySQL container is not aware of the path /var/jenkins_home?
When Docker creates a bind mount, it is always from an absolute path in the host filesystem to an absolute path in the container filesystem.
When your docker-compose.yml names a relative path, Compose first expands that path before handing it off to the Docker daemon. In your example, you're trying to bind-mount ./data from a file at /var/jenkins_home/workspace/mypipe/docker-compose.yml, so Compose fills in the absolute path you see when it invokes the Docker API. Compose has no idea that the current directory is actually a bind mount from a different path in the Docker daemon's context.
If you look in the Jenkins logs at what scripted pipeline invocations like docker.inside { ... } do, you'll see that they mount the workspace directory at an identical path inside the container they launch. Probably the easiest way to work around the mapping problem you're having is to use an identical /var/jenkins_home path on the host system, so the filesystem path is the same in every context, as sketched below.
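A sketch of that workaround, reusing the volume list from the question but keeping the Jenkins home at the same /var/jenkins_home path on the host (that host-side location is the assumption here):

# Jenkins
volumes:
  - /var/run/docker.sock:/var/run/docker.sock:ro
  - /usr/local/bin/docker-compose:/usr/local/bin/docker-compose
  - /var/jenkins_home:/var/jenkins_home

With this, a relative path like ./data expands to /var/jenkins_home/workspace/mypipe/data, which refers to the same directory whether it is resolved inside the Jenkins container or by the Docker daemon on the host.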
I am trying to set up a Jenkins and Nexus integration using a docker-compose file, where my Jenkins image is extended with a few plugins using a Dockerfile and a volume is created under /var/lib/jenkins/.
VOLUME ["/var/lib/jenkins/"]
In the compose file I am trying to map my volume to the local directory /opt/jenkins/:
jenkins:
  build: ./jenkins
  ports:
    - 9090:8080
  volumes:
    - /opt/jenkins/:/var/lib/jenkins/
But nothing is copied to my persistent directory (/opt/jenkins/).
I can see all my Jenkins jobs being created under a _data/jobs/ directory inside some Docker volume, not in the volume I defined for /var/lib/jenkins/.
Can anyone help me understand why this is happening?
From the documentation:
Volumes are initialized when a container is created. If the container’s base image contains data at the specified mount point, that existing data is copied into the new volume upon volume initialization. (Note that this does not apply when mounting a host directory.)
And in the mount a host directory as data volume:
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
So basically you are overlaying (hiding) anything that was in /var/lib/jenkins. Can your image function if those things are hidden?
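If the goal is to have the image's Jenkins content end up in the volume, a named volume (instead of the /opt/jenkins/ bind mount) would get the initialization behavior quoted above. A sketch, rewritten in the version 2/3 compose layout (the volume name jenkins_data is arbitrary):

services:
  jenkins:
    build: ./jenkins
    ports:
      - 9090:8080
    volumes:
      - jenkins_data:/var/lib/jenkins/

volumes:
  jenkins_data:

The named volume's files live under Docker's own volume storage (the _data/ directory you are seeing), not under /opt/jenkins/; if you need the data at a specific host path, you have to keep the bind mount and accept that it starts out empty.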
I would like to know if there's any way that I can share a directory from my host machine with the Docker container (a shared volume) using the Dockerfile.
I understand that we can do this using volumes (the -v option) with docker run, but I couldn't find any way to do it using an instruction in the Dockerfile.
I already tried the VOLUME instruction in the Dockerfile but couldn't get it to work.
Here's some details about my environment:
[me@myHost new]$ tree -L 1
.
|-- docker-compose.yml
|-- Dockerfile
`-- Shared    // This is the directory I wish to share with my containers.
Until now, I was using the docker-compose.yml file to mount this directory:
volumes:
- ./Shared:/shared # "Relative Path at the host":"Absolute Path at the container"
But now, due to some reasons, I need to mount it in the Dockerfile. I already tried the following, but it didn't work (it just creates a new, empty volume at /shared):
VOLUME ./Shared:/shared
I could use docker run and save the image after making manual changes in the container, but I wish I could do this in the Dockerfile itself.
Thanks.
You can't mount a local directory using commands in a Dockerfile. You must do this with docker run, or a proxy to docker run like docker-compose.
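For reference, the run-time equivalent of the compose snippet above would be something like this (myimage is just a placeholder for whatever your Dockerfile builds):

docker build -t myimage .
docker run -v "$(pwd)/Shared:/shared" myimage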