I am creating a Docker image from a base image that, in one of its early layers, declares a volume at the current user's home folder using the VOLUME directive.
This mount leads to unwanted file removals in my own image once a container is started from it.
Is there a way to unmount this volume from within the Dockerfile of my image?
Related
I'd like to mount a whole docker image as a volume in the host. I could run the image, but I don't really need to; I just need the files to be visible as files within a directory/volume/mount point outside of docker.
(To be clear, I don't need to mount a host directory as a volume in a running docker instance, which is what docker run -v does; I need the opposite: to mount a directory in the docker image, or the whole image, as a volume in the host; read-only is okay.)
We have a system in which the user can start sessions inside a number of docker containers. When they do this, their home directory is automatically mounted into the docker container. We can't modify the system which starts the docker containers and mounts the directory.
Our goal is to have one image where this directory is not automatically mounted. Is there something I can do to the image to basically make one directory unmountable?
No. If you can docker run a container, you can always use docker run -v to mount any host directory over any directory, and the original contents of the image will be hidden.
Docker's general model is that the image has somewhat limited powers, but you can specify most things when you start the container. Trying to prevent a volume mount (or, as is more frequently asked, trying to force one) is the opposite of this model; the image has no way to control how it will eventually be used.
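For example (the host path and image name below are made up for illustration), anyone who can run the container can hide the image's home-folder contents behind an arbitrary host directory, and nothing in the image can stop it:

# an empty host directory mounted over the image's home folder hides everything the image shipped there
docker run -v /tmp/empty-dir:/home/appuser my_image ls -la /home/appuser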
I'd like to mount a file from a Docker container to my docker host.
Data volumes are not the solution for me, as these are mounts from the docker host into docker containers, and I need it the other way around.
Thanks
When docker mounts a volume, it overlays the directory inside the container with that of the volume. There is one exception: a named volume that is empty will be initialized with the content of that directory in the image. There's no other built-in method to copy files out of the image to the volume.
Therefore, to go the other direction and copy the contents of the image directory out to the host with a host volume, you'll need to add your own copy command inside the container. That can be part of your entrypoint script that runs in the container.
An example of this is the volume caching scripts in my base image: one script moves the files to a cached location inside the image during the build, and a second script copies files from the image's cached location to the volume mount in the entrypoint.
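A minimal sketch of that pattern (the paths, image layout, and entrypoint script below are hypothetical, not the actual scripts from my base image): cache a copy of the directory at build time, then copy the cache into the possibly host-mounted volume when the container starts.

In the Dockerfile:

# cache the files inside the image before any volume can hide them
RUN cp -a /app/config /opt/config-cache
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

And in entrypoint.sh:

#!/bin/sh
# copy the cached files into the volume mount (which may be a host directory), then run the main command
cp -a /opt/config-cache/. /app/config/
exec "$@"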
Is there any difference between:
Mounting a host directory into a container path (the path is not exposed as a volume), e.g.:
docker run -v /host/directory:/container/directory my_image command_to_run
Dockerfile of my_image does not include VOLUME ["/container/directory"]
Mounting a host directory into a container path exposed as a volume:
docker run -v /host/directory:/container/directory my_image command_to_run
Dockerfile of my_image includes VOLUME ["/container/directory"]
I know that volume data persists independently of the container life-cycle. However, since I want to work on my local data from within a container, does it make any difference whether the mount point inside the container is a volume?
There is no difference if you mount the path from the host into the container. The filesystem from the host will be mounted over top of that directory inside the container.
The difference between listing the volume and not listing it inside the image is the behavior of docker when you create a container without specifying a volume. When the volume is defined on the image, docker will create an "anonymous" volume that you can see with docker volume ls as a long uuid string. These volumes are rarely useful, so I recommend against defining a volume in the image and instead only defining volumes on your docker run command or in your docker-compose.yml definition.
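As an illustration (the image name below is made up), a single VOLUME line is enough to leave a new anonymous volume behind for every container started without a -v for that path:

# Dockerfile
FROM alpine
VOLUME /data

# each plain run creates another anonymous volume
docker build -t my_image .
docker run my_image true
docker volume ls
# the new volume appears under a long uuid-style name, e.g. local  55ba20d1c3f0...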
Downsides of defining a volume in the image include:
Later lines in the Dockerfile, or lines in descendant Dockerfiles, may not be able to change the contents at this location. Docker's behavior here varies by scenario and version, so for predictability, once a volume is defined in an image, I consider that directory off limits.
Anonymous volumes are difficult to use and are likely to clutter up the filesystem (the cleanup commands below show how to find and remove them).
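A sketch of that cleanup, using standard docker CLI commands:

# list volumes that no container currently uses
docker volume ls -f dangling=true

# remove volumes no longer used by any container (asks for confirmation first)
docker volume prune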
I posted a blog on this topic a while back if you're interested in more details.
What does docker do when you bind-mount a volume in your docker run command over a path that is already defined as a managed volume in the Dockerfile/image?
Example:
dockerfile defines /myvolume as managed volume
then: docker run -v /<my_host_dir>:/myvolume ... <image>
What I see is that the managed volume is no longer created.
Instead, the bind mount takes precedence and mounts the host directory into the container.
What goes on behind the scenes?
Is this documented somewhere and therefore something one can count on?
br volker
The VOLUME statement in a Dockerfile just marks a directory as intended to be mounted from somewhere else, as a hint to users of the image. For example, when you create a database image, the user of that image usually wants to persist the data outside of the container.
If you (as the creator of the image/writer of the Dockerfile) mark a directory as a VOLUME, the user of the image (the one who executes docker run or similar) gets an idea of where in the container they should mount a directory from outside.
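As a sketch (the paths and image name are illustrative, not taken from any particular image): the image author declares the data directory, and the user of the image decides where that data actually lives on the host.

# Dockerfile of the database image
VOLUME /var/lib/db-data

# the user of the image picks the host location for the data
docker run -v /srv/db-data:/var/lib/db-data my_database_image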