Docker inside Docker: file missing

I am launching Docker inside another Docker container and I'm trying to make files visible inside the "deepest" container.
My first container is built on the python:3.8-slim image, its entrypoint is ["python"], and it is called test-client.
I launch it as docker run --rm -it -v /home/.../inputs:/inputs -v /var/run/docker.sock:/var/run/docker.sock --network ... test-client start_client.py ....
Now the inner container. Inside start_client.py I run it with the docker==5.0.3 library:
def check_docker():
    import time

    import docker
    from docker.types import Mount

    inputs = Mount('/inputs', 'inputs')  # target, source
    client = docker.from_env()
    client.images.pull('alpine')
    time.sleep(30)  # I will explain this later
    output = client.containers.run(
        'alpine', 'ls inputs -al',
        mounts=[inputs]
    ).decode('utf-8')
    for line in output.split('\n'):
        print(line)
So. I used time.sleep to give myself time to dive into the first container and check whether the needed file is present. Yes it is: my file is inside the first container. But the output of the deepest container shows no files inside the inputs directory.
What am I doing wrong?

You can't directly mount a directory from one container to another. In the mounts option you show (and in docker run -v and Compose volumes:) the host path is always a path on the system where the Docker daemon is running. If you're bind-mounting the host's Docker socket, these paths will be paths on the host; if $DOCKER_HOST points into a VM or at a remote machine, the paths will be paths on that system and not your local one.
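To illustrate, here is a minimal sketch of the same SDK call with an explicit bind mount; the path /home/user/inputs is an assumption standing in for wherever the files actually live on the daemon's host:
# Hedged sketch: Mount sources are resolved by the Docker daemon on the
# host, not inside the container that makes the API call.
import docker
from docker.types import Mount

client = docker.from_env()
output = client.containers.run(
    'alpine', 'ls -al /inputs',
    # type='bind' plus an absolute *host* path is the key difference
    mounts=[Mount('/inputs', '/home/user/inputs', type='bind')],
    remove=True,
).decode('utf-8')
print(output)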
But, in your specific example, the directory you're trying to remount is already a mount itself. If you mount the same host location into both containers, then you'll be able to see the files. I'd suggest specifying this in an environment variable
inputs = Mount('/inputs', os.getenv('INPUT_SOURCE', 'inputs'))
and when you run the container, pass that directory in as a variable
export INPUT_SOURCE="$PWD/inputs"
docker run --rm -it \
  -e INPUT_SOURCE \
  -v "$INPUT_SOURCE:/inputs" \
  --network ... \
  test-client \
  start_client.py ...
If you use a bare string as the Mount source, as you've done, it will mount (and automatically create) a named volume. You can use your container to inspect this:
docker run --rm -v inputs:/inputs test-client \
  -c 'import os; print(os.listdir("/inputs"))'
(you can use a simpler shell syntax if you remove the ENTRYPOINT ["python"] line from your Dockerfile).
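Equivalently, a short sketch with the Python SDK to confirm the named volume that was created as a side effect (the name inputs comes from the bare string above):
import docker

client = docker.from_env()
vol = client.volumes.get('inputs')  # raises docker.errors.NotFound if absent
print(vol.name, vol.attrs['Mountpoint'])  # mountpoint is a path on the daemon host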

Related

Docker volume is empty

When using the -v switch, the files from the container should be copied to the host volume, right? But it seems like the jenkins_home directory isn't created at all.
If I create the jenkins_home directory manually and then mount it, the directory is still empty.
I want to preserve the Jenkins configs so I can re-run the image later.
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home jenkins/jenkins:latest
If you docker run -v jenkins_home:... where the first half of the -v option has no slashes in it at all, that syntax creates a named Docker volume; it isn't a bind mount.
If you docker run -v "$PWD/jenkins_home:..." then that host directory is mounted over the corresponding container directory. At startup time, nothing is ever copied into the host directory; if the host directory is empty, that empty directory gets mounted into the container, hiding everything that was in the image.
If you use the docker run -v named-volume:... syntax, and the named volume is empty, then in this case only, and only the very first time the container is run, the contents of the image are copied into the named volume. This doesn't work for bind mounts, and it doesn't work if there is already data in the volume (perhaps from a previous docker run). It also does not work in other container environments such as Kubernetes. I do not recommend relying on this behavior.
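If you want to see that copy-on-first-use rule in action anyway, here is a rough sketch with the Python docker SDK; the volume name demo_home is made up for the example, and it relies on the jenkins/jenkins entrypoint exec'ing a custom command instead of starting Jenkins:
import docker

client = docker.from_env()

# First use: mounting the empty named volume copies the image's
# /var/jenkins_home contents into it.
client.containers.run(
    'jenkins/jenkins', ['ls', '/var/jenkins_home'],
    volumes={'demo_home': {'bind': '/var/jenkins_home', 'mode': 'rw'}},
    remove=True,
)

# The copied files are now visible from any other container.
out = client.containers.run(
    'alpine', 'ls /data',
    volumes={'demo_home': {'bind': '/data', 'mode': 'ro'}},
    remove=True,
)
print(out.decode())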
Probably the easiest way to make this work is to launch a one-off container to export the contents of the image, and then use bind-mount syntax:
cd jenkins_home
# --rm cleans up the one-off container when done; -w sets the container
# working directory; the first tar writes it to stdout and the second
# unpacks it on the host
docker run --rm -w /var/jenkins_home jenkins/jenkins tar cf - . \
| tar xf -
# Now launch the container as normal
docker run -d -p ... -v "$PWD:/var/jenkins_home" jenkins/jenkins
Figured it out.
Turned out that by default it creates the volume in /var/lib/docker/volumes/jenkins_home/ instead of in the current directory.
Also, I had tried docker volume create jenkins_home before running the image. So I am not sure whether it was -v jenkins_home:/var/jenkins_home or docker volume create that created the directory in /var/lib/docker/volumes/.
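As a quick sanity check, a sketch along these lines shows where a named volume actually lives on the host (assuming the jenkins_home volume from the command above):
import docker

client = docker.from_env()
vol = client.volumes.get('jenkins_home')
print(vol.attrs['Mountpoint'])  # typically /var/lib/docker/volumes/jenkins_home/_data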

How to mount volume inside child docker created by parent docker sharing docker.sock

I am trying to create a wrapper container to build and run a set of containers using a docker-compose file I cannot modify. The docker-compose file mounts several volumes, but when starting it from inside the wrapper container, the volumes are still mounted from the host, since docker.sock is volume-mounted to be the host's docker.sock.
I would like to avoid full docker-in-docker, due to all the problems associated with it outlined in jpetazzo's article.
I would also like to avoid --volumes-from, since I cannot edit the docker-compose file mentioned previously.
Is there a way to get this snippet to correctly use the parent container's file instead of going to the host filesystem and mounting it from there?
FROM docker:latest
RUN mkdir -p /tmp/parent/ && echo "This is from the parent docker" > /tmp/parent/parent.txt
CMD docker run -v /tmp/parent/parent.txt:/root/parent.txt --rm ubuntu:18.04 bash -c "cat /root/parent.txt"
When run with a command akin to this:
docker build -t parent . && docker run --rm -v /var/run/docker.sock:/var/run/docker.sock parent
Make your paths the same on the host and inside of the docker image, e.g.
docker run -v /var/run/docker.sock:/var/run/docker.sock \
  -v /home/user:/home/user -w /home/user/project parent_image ...
By mounting the volume as /home/user in the same location inside the image, a command like docker-compose up with relative bind mounts will use the container path names when talking to the docker socket, which will match the paths on the host.
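A hedged sketch of why this works, in Python: the bind source is just a string handed to the daemon, so resolving a relative path inside the wrapper only yields a valid mount if the wrapper's paths match the host's, as with the -v /home/user:/home/user mount above (parent.txt here is a hypothetical file in the shared directory):
import os

import docker

client = docker.from_env()
# Resolves inside the wrapper container; only valid for the daemon because
# the wrapper mounted /home/user at the same path it has on the host.
src = os.path.abspath('parent.txt')
out = client.containers.run(
    'ubuntu:18.04', 'cat /root/parent.txt',
    volumes={src: {'bind': '/root/parent.txt', 'mode': 'ro'}},
    remove=True,
)
print(out.decode())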

How to allow docker sibling container to bind subdirectory from existing volume

I want to know how I can allow a "child" (sibling) docker container to access some subdirectory of an already mounted volume. As an explanation, this is a simple setup:
I have the following Dockerfile, which just installs Docker in a Docker container:
FROM ubuntu
RUN apt-get update && apt-get install -y curl
RUN curl -fsSL https://get.docker.com/ | sh
I have the following data directory on my host machine
/home/user/data/
data1.txt
subdir/
data2.txt
Build the parent image:
[host]$> docker build -t parent .
Then run the parent container:
[host]$> docker run --rm --name parent -it -v /home/user/data/:/data/ -v /var/run/docker.sock:/var/run/docker.sock parent
Now I have a running container, and am "inside" the new container. Since I have the docker socket bound to the parent, I am able to run docker commands to create "child" containers, which are actually sibling containers. The data volume has been successfully mapped:
[parent]$> ls /data/
subdir data1.txt
Now I want to create a sibling container that can only see the subdir directory:
[parent]$> docker run --rm --name child -it -v /data/subdir/:/data/ ubuntu
This creates a sibling container, and I am successfully "inside" it; however, the new data directory is empty. My assumption is that this happens because the path I give, /data/subdir/, is resolved by the host, where it doesn't exist, rather than inside the parent container.
[child]$> ls /data/
<nothing>
What can I do to allow this mapping to work, so that the child can create files in the subdirectory, and that the parent container can see and access these files? The child is not allowed to see data1.txt (or anything else above the subdirectory).
"Sibling" container is the correct term, there is no direct relationship between what you have labeled the "parent" and "child" containers, even though you ran the docker command in one of the containers.
The container with the docker socket mounted still controls the dockerd running on the host, so any paths sent to dockerd via the API will be in the hosts scope.
There are docker commands where using the container's filesystem does change things: these are the cases where the docker utility itself accesses the local file system. docker build, docker cp, docker import, and docker export are examples where docker interacts with the local file system.
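The Python SDK has the same split: get_archive() (the API behind docker cp) streams file content through the client, so it lands wherever your code runs rather than being resolved as a daemon-side path. A small sketch, assuming an ubuntu:18.04 image is available:
import io
import tarfile

import docker

client = docker.from_env()
c = client.containers.create('ubuntu:18.04')
# get_archive streams a tar of the path through the API client
stream, stat = c.get_archive('/etc/hostname')
data = b''.join(stream)
with tarfile.open(fileobj=io.BytesIO(data)) as tar:
    print(tar.getnames())
c.remove()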
Mount
Use -v /home/user/data/subdir:/data for the second container
docker run --name parent_volume \
  -it --rm -v /home/user/data:/data ubuntu
docker run --name child_volume \
  -it --rm -v /home/user/data/subdir:/data ubuntu
The processes you run need to be careful about what writes to data mounted into multiple containers, so the data doesn't get clobbered.
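From inside the "parent" container, that means passing the host's path to the daemon, not the parent's own /data path. A sketch with the Python docker SDK; HOST_DATA is a hypothetical environment variable you would set to /home/user/data when starting the parent:
import os

import docker

client = docker.from_env()
# The daemon resolves this path on the host, so it must be the host's path.
host_subdir = os.path.join(os.environ['HOST_DATA'], 'subdir')
out = client.containers.run(
    'ubuntu', 'ls -al /data',
    volumes={host_subdir: {'bind': '/data', 'mode': 'rw'}},
    remove=True,
)
print(out.decode())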

Does Docker update contents of volume when mounted if changes are made in Dockerfile?

I have Jenkins running in a Docker container. The home directory is in a host volume, in order to ensure that the build history is preserved when updates to the container are actioned.
I have updated the image to create an additional file in the home directory. When the new image is pulled, I cannot see the changed file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml is not created. Why is this, please?
Basically when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
You will overlay /container-dest-dir with whatever is in /host-src-dir.
From the docs:
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
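A quick way to see that overlay rule, sketched with the Python docker SDK (run on the host; /etc of ubuntu:18.04 is just an arbitrary populated image directory):
import tempfile

import docker

client = docker.from_env()
empty_dir = tempfile.mkdtemp()  # empty directory on the host

out = client.containers.run(
    'ubuntu:18.04', 'ls -a /etc',
    volumes={empty_dir: {'bind': '/etc', 'mode': 'rw'}},
    remove=True,
)
print(out.decode())  # only '.' and '..': the image's /etc is hidden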
This SO question is also relevant: docker mounting volumes on host.
It seems you want it the other way around (i.e. the container is the source and the host is the destination).
Here is a workaround:
1. Create the volume in your Dockerfile
2. Run it without -v, i.e.: docker run --name=my_container my_image
3. Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
Which means your directory, as it is in the container, was mounted into the host directory /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data
Unfortunately, I do not know a way to make it mount on a specific host directory instead.
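For step 3 above, the same inspection is a one-liner with the Python docker SDK (my_container is the container name used in step 2):
import docker

client = docker.from_env()
c = client.containers.get('my_container')
for m in c.attrs['Mounts']:
    print(m['Destination'], '->', m['Source'])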

How to re-mount a docker volume without overriding existing files?

When running Docker, you can mount files and directories using the --volume option, e.g.:
docker run --volume "$PWD/local:/remote" myimage
I'm running a docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" on the host so that I can edit it. I know I could run docker exec ..., but I hope to circumvent that overhead for editing only one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
So the question is:
Is there a way to re-mount volumes defined in an image? Or to get at the directory more easily than with my one-liner above?
EDIT: After comments by zeppelin, I've tried rebinding the volume, with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect shows correctly that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. One I call an anonymous volume (a very long uuid visible when you run docker volume ls). The second is a host volume or bind mount that maps a directory on the host directly into the container. So each container you spun up is looking at different places.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
  cp -av /etc/gitlab-runner/. /target/.
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
  cp -av /source/. /target/.
Try to avoid modifying data inside a container from the host directly; it is much nicer to wrap your task in another container that you then start with the --volumes-from option, when possible in your case.
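For example, a sketch of that wrapping with the Python docker SDK, reusing the test1 container from the session above via volumes_from:
import docker

client = docker.from_env()
out = client.containers.run(
    'busybox', 'ls /etc/gitlab-runner',
    volumes_from=['test1'],  # reuse the anonymous volume from test1
    remove=True,
)
print(out.decode())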
Not sure I understood your problem; anyway, as for the documentation you mention:
The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers. [...] The docker run command initializes the newly created volume with any data that exists at the specified location within the base image.
So, following the example Dockerfile, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.
