When mounting a volume, the directory is empty in Docker

When I run Node-RED with
docker run -v D:/mydir:/data nodered
the content of /data is copied into my volume on the first run; that's what I expected.
If I run
docker run -v D:/mydir:/usr/src/node-red/node_modules nodered
then the volume is empty.
I was expecting the content of node_modules to be copied into the volume at start time... what am I missing?
I can illustrate this a little more:
docker run --rm -v d:/VM:/data nodered/node-red-docker ls /data
--> lists files
docker run --rm nodered/node-red-docker ls /usr/src/node-red/node_modules
--> lists the content of node_modules
docker run --rm -v d:/VM:/usr/src/node-red/node_modules nodered/node-red-docker ls /usr/src/node-red/node_modules
--> is empty!

You're mounting host directories as volumes, so there isn't any copying going on - the mount path inside the container is being mapped to the path on the host, so you're seeing the contents of the host directory.
Volumes sit outside the Union File System when you mount them, so you don't get an overlay which merges the contents of the image and the contents of the host directory. Instead you're effectively bypassing the contents of the image for that volume, and repointing it to your host.
Samples:
touch /docker/nodered-modules/sample.txt
docker run --rm -v /docker/nodered-modules:/usr/src/node-red/node_modules nodered/node-red-docker ls /usr/src/node-red/node_modules
sample.txt
touch /docker/nodered-data/sample.txt
docker run --rm -v /docker/nodered-data:/data nodered/node-red-docker ls /data
sample.txt
The reason you're seeing a difference is because the /data volume is defined in the Dockerfile and empty in the image, so you see the contents of your host directory as expected. The modules directory isn't empty in the image, but you're repointing it to an empty directory on your host.

Docker does not support copying data from the base image into host directories that are mounted as container volumes.
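If you do need the image's node_modules content on the host, one option is to seed the host directory from a stopped container before mounting it. A minimal sketch, using the image from the question (the tmp-nodered container name is just a placeholder):
docker create --name tmp-nodered nodered/node-red-docker
docker cp tmp-nodered:/usr/src/node-red/node_modules/. D:/mydir
docker rm tmp-nodered
# D:/mydir now contains the image's modules, so the bind mount is no longer empty
docker run -v D:/mydir:/usr/src/node-red/node_modules nodered/node-red-docker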

Related

Docker volume is empty

When using the -v switch, the files from the container should be copied to the local host volume, right? But it seems the jenkins_home directory isn't created at all.
If I create the jenkins_home directory manually and then mount it, the directory is still empty.
I want to preserve the Jenkins configs so I can re-run the image later.
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:latest
If you docker run -v jenkins_home:... where the first half of the -v option has no slashes in it at all, that syntax creates a named Docker volume; it isn't a bind mount.
If you docker run -v "$PWD/jenkins_home:..." then that host directory is mounted over the corresponding container directory. At startup time, nothing is ever copied into the host directory; if the host directory is empty, that empty directory gets mounted into the container, hiding everything that was in the image.
If you use the docker run -v named-volume:... syntax, and the named volume is empty, then in this case only, and only the very first time the container is run, the contents of the image are copied into the named volume. This doesn't work for bind mounts, and it doesn't work if there is already data in the volume (perhaps from a previous docker run). It also does not work in other container environments such as Kubernetes. I do not recommend relying on this behavior.
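You can see that copy-on-first-use behaviour directly. A quick demonstration (a sketch; jh-demo is a throwaway volume name, and --entrypoint bypasses the image's startup script so ls actually runs):
docker volume create jh-demo
# First run against the empty named volume: the image's files are copied in
docker run --rm --entrypoint ls -v jh-demo:/var/jenkins_home jenkins/jenkins /var/jenkins_home
docker volume rm jh-demo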
Probably the easiest way to make this work is to launch a one-off container to export the contents of the image, and then use bind-mount syntax:
cd jenkins_home
# Run a one-off container that tars up the image's /var/jenkins_home
# to stdout, and unpack that tar stream on the host.
docker run \
  --rm \
  -w /var/jenkins_home \
  jenkins/jenkins \
  tar cf - . \
  | tar xf -
# Now launch the container as normal
docker run -d -p ... -v "$PWD:/var/jenkins_home" jenkins/jenkins
Figured it out.
It turned out that by default the volume is created in /var/lib/docker/volumes/jenkins_home/ instead of in the current directory.
Also, I had tried docker volume create jenkins_home before running the image. So I'm not sure whether it was the -v jenkins_home:/var/jenkins_home or the docker volume create that created the directory in /var/lib/docker/volumes/.
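To confirm where a named volume lives on disk, you can ask Docker directly:
docker volume inspect --format '{{ .Mountpoint }}' jenkins_home
# typically prints /var/lib/docker/volumes/jenkins_home/_data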

Docker VOLUME command inside Dockerfile not working as expected

FROM ubuntu:15.04
RUN mkdir -p /app/tina
RUN touch /app/tina/foo.txt
RUN echo "testing tina" > /app/tina/foo.txt
VOLUME /app/tina
CMD sh
As per the Docker guide:
This Dockerfile results in an image that causes docker run to create a
new mount point at /app/tina and copy the foo.txt file into the newly
created volume
but when I do
docker run --rm -it -v /tmp/foo:/app/tina imagename sh
ls /app/tina/
I can't find foo.txt inside it.
From https://docs.docker.com/engine/reference/builder/#volume
The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native
host or other containers.
You are using /tmp/foo which is a directory, not a volume. Try:
docker volume create my-vol
docker run --rm -it -v my-vol:/app/tina imagename ls /app/tina/
The problem is that attaching an external directory as a volume using -v actually performs a bind mount: /tmp/foo directory is mounted to the /app/tina directory of the container.
In Linux, when you mount something, all files which were previously seen in the mount point (/app/tina in your case) become invisible. So, when you mount /tmp/foo (empty directory) to /app/tina (which contains foo.txt), the foo.txt file becomes invisible and you see the contents of /tmp/foo in the /app/tina directory, i.e. nothing.
You can verify that foo.txt is still there in /app/tina by unmounting /tmp/foo from it:
root@84d8cfad500a:/# ls /app/tina
root@84d8cfad500a:/# umount /app/tina
root@84d8cfad500a:/# ls /app/tina
foo.txt
However, this only works in a privileged container (docker run --privileged); otherwise you will not be able to unmount /app/tina.
Your files are hidden. This is simply how mounts work. If I were to plug in a flash drive and mount it to ~/someDirectory, then anything in ~/someDirectory would be masked by the files available in the new mount. The volumes feature in docker works the same way.
You can avoid this behavior if you create an entrypoint.sh and put these lines into the entrypoint script:
#!/bin/sh
mkdir -p /app/tina
touch /app/tina/foo.txt
echo "testing tina" > /app/tina/foo.txt
exec "$@"
When you create the container (not the image), Docker creates the volume first, and the entrypoint script then creates foo.txt and writes "testing tina" into it.
Of course, don't forget to reference the entrypoint in the Dockerfile.
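The corresponding Dockerfile lines might look like this (a sketch; the /entrypoint.sh location is just a convention):
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["sh"]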

docker mount volume dir ubuntu

I'm trying to use docker to do this:
Run Docker image, make sure you mount your User (for MAC) or home (for
Ubuntu) directory as a volume so you can access your local files
The code that I've been given is:
docker run -v /Users/:/host -p 5000:5000 -t -i bjoffe/openface_flask_v2 /bin/bash
I know that the part that I should modify to my local files is -v /Users/:/host, but I am unsure how to do so.
The files I want to load in the container are inside /home/user/folder-i-want-to-read.
How should this code be written?
A bind mount is just a mapping of host files or directories into container files or directories; both point to the same physical location on disk.
In your case, you could try this command:
docker container run -it -p 5000:5000 -v /home/user/folder-i-want-to-read/:/path_in_container bjoffe/openface_flask_v2 /bin/bash
Once it is running, verify that the files from the host path /home/user/folder-i-want-to-read appear at the container path you mapped (/path_in_container).
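For the original instruction (mount your home directory), a sketch on Ubuntu would be:
docker run -v "$HOME":/host -p 5000:5000 -t -i bjoffe/openface_flask_v2 /bin/bash
# inside the container, your home directory is visible under /host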

Does Docker update contents of volume when mounted if changes are made in Dockerfile?

I have Jenkins running in a Docker container. The home directory is in a host volume, in order to ensure that the build history is preserved when updates to the container are actioned.
I have updated the image so that it creates an additional file in the home directory. When the new image is pulled and run, I cannot see the new file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml is not created. Why is this?
Basically, when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
you overlay /container-dest-dir with whatever is in /host-src-dir.
From the docs:
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the
container at /webapp. If the path /webapp already exists inside the
container’s image, the /src/webapp mount overlays but does not remove
the pre-existing content. Once the mount is removed, the content is
accessible again. This is consistent with the expected behavior of the
mount command.
This SO question is also relevant: docker mounting volumes on host.
It seems you want it the other way around (i.e. the container is the source and the host is the destination).
Here is a workaround:
Create the volume in your Dockerfile
Run it without -v i.e.: docker run --name=my_container my_image
Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
This means the directory as it exists in the container was mounted into the host directory /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data
Unfortunately, I do not know a way to make it mount on a specific host directory instead.
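That said, you can seed a specific host directory from the image yourself before bind-mounting it, along the lines of the tar trick shown earlier (a sketch; adjust the image name and paths to your setup):
mkdir -p /host/directory
docker run --rm -w /var/jenkins_home jenkins tar cf - . | tar xf - -C /host/directory
# the bind mount now starts out with the image's contents
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins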

Does docker run -v or Dockerfile VOLUME take precedence?

Dockerfile
FROM nginx:1.7
# Deliberately declaring VOLUME before RUN
VOLUME /var/tmp
RUN touch /var/tmp/test.txt
Observation
Now I believe I understand the implications of declaring the VOLUME statement before creating test.txt: the volume /var/tmp exposed at runtime will be based on the intermediate container from before the test.txt file was created, so it will be empty (I hope this observation is correct).
So as expected the following docker run does not show test.txt:
docker run kz/test-volume:0.1
But then I tried supplying the volume at runtime as below:
docker run -v /var/tmp kz/test-volume:0.1
Question
The outcome was the same. So what does this mean? Does the -v /var/tmp in the docker run command map to the empty /var/tmp dir exposed by the VOLUME instruction in the Dockerfile, and not to the /var/tmp directory with test.txt in the latest image?
Feeling a tad confused.
There is no image which contains /var/tmp/test.txt. The effect of declaring the volume before creating the file is that the RUN instruction runs in a temporary container which has its own volume. Volumes bypass the Union File System so when the build saves that intermediate container, the contents of the volume are not saved, so they don't get persisted in the image layer.
Every container you create from that image will have its own volume, the -v option doesn't change that, unless you use it to map the volume to a host path.
Using your Dockerfile, you can see that by inspecting two containers. The first without the -v option:
> docker run -d temp
c3c4f7de411f166b3a67397ff1221552fe5b94c46bc100725a50a57231da427b
> docker inspect -f '{{ .Mounts }}' c3c
[
{67267d2eeb57373f76a9dd50c25f744b7d99d0a75647bf06aa0d17b70807cf71 /var/lib/docker/volumes/67267d2eeb57373f76a9dd50c25f744b7d99d0a75647bf06aa0d17b70807cf71/_data /var/cache/nginx local true }
{91490ea73e3a9d42df9f00e32a91fe571d2143f54248071959765d5d55c23d46 /var/lib/docker/volumes/91490ea73e3a9d42df9f00e32a91fe571d2143f54248071959765d5d55c23d46/_data /var/tmp local true }
]
Here there are two volumes, mounted from /var/lib/docker on the host. One is from the nginx base image, and one from your image. With the explicit -v:
> docker run -d -v /var/tmp temp
6fa1a8713b2d6638675a3d048669943419bc7a3924ed98371771100bcfde3954
> docker inspect -f '{{ .Mounts }}' 6fa
[
{9adf6954ed3e826f23a914cbbd768753e6dec1f176eed10e03c2d5503287d101 /var/lib/docker/volumes/9adf6954ed3e826f23a914cbbd768753e6dec1f176eed10e03c2d5503287d101/_data /var/cache/nginx local true }
{7dd8be71ce88017ffbeef249837bdd1c96c071b802a2f43b18fd406983e1076a /var/lib/docker/volumes/7dd8be71ce88017ffbeef249837bdd1c96c071b802a2f43b18fd406983e1076a/_data /var/tmp local true }
]
Same outcome, but different host paths, because each container has its own volume.
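If the goal is for test.txt to actually end up in the image (and from there in a fresh volume), declare the VOLUME only after the file has been created, so the RUN writes into an image layer rather than into a throwaway volume. A sketch of the reordered Dockerfile:
FROM nginx:1.7
RUN touch /var/tmp/test.txt
# The volume is declared after the file exists, so the file is baked into the image
VOLUME /var/tmp
With that ordering, each container's fresh anonymous or named volume for /var/tmp is initialised from the image, so test.txt shows up (unless you bind-mount a host directory over it, as discussed above).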
