How to name a volume created by a Dockerfile - docker

If I start a container from an image, the Dockerfile of which has an entry like this:
VOLUME ["/data"]
with what option of docker run should I start a container, so that when I list the volumes via docker volume ls, I see the name I gave to the volume and not some long random hash?

If you use the ordinary docker run -v option to mount something on that same path, Docker won't create an anonymous volume there.
docker volume create something
docker run -v something:/data ...
In fact, you don't need a Dockerfile VOLUME directive to do this: you can mount a volume or host directory onto any container path regardless of whether or not it's declared as a VOLUME directory. There isn't much benefit to having that line in the Dockerfile, and it has some confusing side effects; I'd suggest just deleting it.
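For example, a minimal sketch (the volume name app-data and the image name my-image are placeholders, not from the question):
docker volume create app-data
docker run -d -v app-data:/data my-image
docker volume ls   # lists "app-data" under VOLUME NAME instead of a long hash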

Related

Can you mount a directory from an image into another container

Is it currently possible with docker to do something like this conceptually?
docker run --mount type=xxx,image=imageX,subdir=/somedir,dst=/mnt-here imageY ...
I understand this can be done at docker build time with COPY --from=....; however, in my use case it would only really be beneficial if it can be done at container creation time.
The only things it's possible to mount into a container are arbitrary host directories, tmpfs directories, and Docker named volumes. You can make a named volume use anything you could mount with the Linux mount(8) command. Potentially you can install additional volume drivers to mount other things. But these are all of the possible options.
None of these options allow you to mount image or container content into a different container. The COPY --from=other-image syntax you suggest is probably the best approach here.
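As a rough sketch of that approach, reusing the placeholder names from the question (imageX, imageY, /somedir, /mnt-here):
# Dockerfile for the image you actually run
FROM imageY
COPY --from=imageX /somedir /mnt-here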
If you really absolutely needed it in a volume, one option is to create a volume yourself, copy the content from the source image into it, and then mount that volume into the destination container.
docker volume create some-volume
# Since the volume is empty, mounting it into the container will
# copy the contents from the image into the volume. This only happens
# with native Docker volumes and only if the volume is totally empty.
# Docker will never modify the contents of this volume after this.
# Create an empty temporary container to set up the volume
docker run -v some-volume:/somedir --rm some-image /bin/true
# Now you can mount the volume into the actual container
docker run -v some-volume:/mnt-here ...

Combining VOLUME + docker run -v

I was looking for an explanation on the VOLUME entry when writing a Dockerfile and came across this statement
A volume is persistent data stored in /var/lib/docker/volumes/...
You can either declare it in a Dockerfile, which means each time a container is started from the image, the volume is created (empty), even if you don't have any -v option.
You can declare it at runtime with docker run -v [host-dir:]container-dir.
combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume persisted by the container in /var/lib/docker/volumes/...
docker volume create creates a volume without having to define a Dockerfile and build an image and run a container. It is used to quickly allow other containers to mount said volume.
But I'm having a hard time understanding this line:
...combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume persisted by the container in /var/lib/docker/volumes/...
For example, let's say I have a config file on my host machine and I run a container based off the image I built with the Dockerfile I wrote. Will it copy the config file into the directory I declared in the VOLUME entry?
Would it be something like (pseudocode)
#dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install -y mysql-server
VOLUME /etc/mysql/conf.d
CMD systemctl start mysql
And when I run it
docker run -it -v /path/to/config/file: ubuntu_based_image
Is this what they mean?
You probably don't want VOLUME in your Dockerfile. It's not needed in order to mount files or directories at runtime, and it has confusing side effects, like making subsequent RUN commands silently lose their changes to that directory.
If an image does have a VOLUME, and you don't mount anything else there when you start the container, Docker will create an anonymous volume and mount it for you. This can result in space leaks if you don't clean these volumes up.
You can use a docker run -v option on any container directory regardless of whether or not it's declared as a VOLUME.
If you docker run -v /host/path:/container/path, the two directories are actually the same; nothing is copied, and writes to one are (supposed to be) immediately visible on the other.
docker run -v /host/path:/container/path bind mounts aren't visible in /var/lib/docker at all.
You shouldn't usually be looking at content in /var/lib/docker (and can't if you're not on a native-Linux host). If you need to access the volume file content directly, use a bind mount rather than a named or anonymous volume.
Bind mounts like you've shown are appropriate for injecting config files into containers, and for reading log files back out. Named volumes are appropriate for stateful applications' storage, like the data for a MySQL database. Neither type of volume is appropriate for code or libraries; build these directly into Docker images instead.
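As an illustrative sketch of both patterns (the image name and the config and data paths here follow the official mysql image's conventions; adjust them for your own image):
# bind mount: inject a config file from the host
# named volume: persistent storage for the database data
docker volume create mysql-data
docker run -d \
  -v "$PWD/my.cnf:/etc/mysql/conf.d/my.cnf" \
  -v mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=secret \
  mysql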

Mounting local directory into Docker container path that is not exposed as a VOLUME

Is there any difference between:
Mounting a host directory into a container path (the path is not exposed as a volume), e.g.:
docker run -v /host/directory:/container/directory my_image command_to_run
Dockerfile of my_image does not include VOLUME ["/container/directory"]
Mounting a host directory into a container path exposed as a volume:
docker run -v /host/directory:/container/directory my_image command_to_run
Dockerfile of my_image includes VOLUME ["/container/directory"]
I know that volume data persists independent of the container life-cycle. However, since I want to work on my local data from within a container, does that make any difference if the mount-point inside the container is a volume?
There is no difference if you mount the path from the host into the container. The filesystem from the host will be mounted over top of that directory inside the container.
The difference between listing the volume and not listing it inside the image is the behavior of docker when you create a container without specifying a volume. When the volume is defined on the image, docker will create an "anonymous" volume you can see with docker volume ls as a long uuid string. These volumes are rarely useful, so I recommend against defining a volume in the image and instead only defining them on your docker run command or docker-compose.yml definition.
Downsides of defining a volume in the image include:
Later lines in the Dockerfile or in descendant Dockerfiles may not be able to change the contents at this location. Docker's behavior with this varies by scenario and version, so for predictability, once a volume is defined in an image, I consider that directory off limits (see the sketch after this list).
The anonymous volumes that get created are difficult to use and are likely to clutter up the filesystem.
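A minimal sketch of the first pitfall (busybox is used here only for illustration; the exact behavior varies by Docker version):
FROM busybox
RUN mkdir -p /data && echo "from build" > /data/file
VOLUME /data
# Because /data is now a declared volume, this change may be discarded
# when a container is started from the image:
RUN echo "changed after VOLUME" > /data/file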
I posted a blog on this topic a while back if you're interested in more details.

What is the purpose of defining VOLUME mount points within a Dockerfile rather than ad-hoc cmd-line -v?

I understand that using the VOLUME command within a Dockerfile defines a mount point within the container.
FROM centos:6
VOLUME /html
However, I noticed that even without that VOLUME definition, it's still possible to mount a host directory at that path:
docker run -ti -v /path/to/my/html:/html centos:6
What is the purpose of defining VOLUME mount points in the dockerfile? I suspect it's for readability so people can read the Dockerfile and instantly know what is meant to be mounted?
The VOLUME instruction in a Dockerfile does not let you specify a host mount, that is, mounting a directory from the host OS into the container.
However, other containers can still mount the volumes of a container created with the VOLUME instruction in the Dockerfile, by using docker run --volumes-from <container name>.
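As a rough sketch (the container and image names are placeholders, and some-image is assumed to declare VOLUME /shared in its Dockerfile):
docker run --name data-holder some-image true
docker run --volumes-from data-holder another-image ls /shared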
I understand that using the VOLUME command within a Dockerfile defines a mount point within the container.
That's not right. In that case the volume is defined for an image, not for a container.
When a volume is defined in the Dockerfile, it's set for an image, so every container run from that image gets that volume defined.
If you define the volume in the command line (docker run -v ...) the volume is defined just for that specific container.

Dockerfile: understanding VOLUME instruction

Let's take an example.
The following is the VOLUME instruction for the nginx image:
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
Here are my questions:
When I start the container, will these directories show up on my host? And when I stop the container, will the directories stay?
If some (or all) of these directories already exist in my host, what will happen? For example, let's say the image comes with a default config file within the /etc/nginx directory of the container, and I also have a config file within /etc/nginx on my host. When the container starts, which of these files will get priority?
What's the key difference between -v /host/dir:/container/dir and VOLUME?
References:
https://github.com/dockerfile/nginx/blob/master/Dockerfile
http://www.tech-d.net/2014/11/03/docker-indepth-volumes/
How to mount host volumes into docker containers in Dockerfile during build
http://jpetazzo.github.io/2015/01/19/dockerfile-and-data-in-volumes/
A container's volumes are just directories on the host regardless of what method they are created by. If you don't specify a directory on the host, Docker will create a new directory for the volume, normally under /var/lib/docker/volumes.
However the volume was created, it's easy to find where it is on the host by using the docker inspect command, e.g.:
$ ID=$(docker run -d -v /data debian echo "Data container")
$ docker inspect -f '{{ .Mounts }}' $ID
[{0d7adb21591798357ac1e140735150192903daf3de775105c18149552a26f951 /var/lib/docker/volumes/0d7adb21591798357ac1e140735150192903daf3de775105c18149552a26f951/_data /data local true }]
We can see that Docker has created a directory for the volume at /var/lib/docker/volumes/0d7adb21591798357ac1e140735150192903daf3de775105c18149552a26f951/_data.
You are free to modify/add/delete files in this directory from the host, but note that you may need to use sudo for permissions.
Docker will only delete volume directories in two circumstances:
If the --rm option is given to docker run, any anonymous volumes will be deleted when the container exits
If a container is deleted with docker rm -v CONTAINER, any anonymous volumes will be removed.
In both cases, volumes will only be deleted if no other containers refer to them. Volumes mapped to specific host directories (the -v HOST_DIR:CON_DIR syntax) are never deleted by Docker. However, if you remove the container for a volume, the naming scheme means you will have a hard time figuring out which directory contains the volume.
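To make those rules concrete, an illustrative sketch (debian is used only as a convenient small image):
# Anonymous volume: removed together with its container
ID=$(docker create -v /data debian true)
docker rm -v "$ID"
# Host-directory mount: Docker never deletes /host/dir
docker run --rm -v /host/dir:/data debian true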
So, specific questions:
Yes and yes, with the above caveats.
Each Docker-managed volume gets a new directory on the host; existing host files (such as your own /etc/nginx) are not used or affected, and the image's content at that path is copied into the new volume directory.
The VOLUME instruction is identical to -v without specifying the host dir. When the host dir is specified, Docker does not create any directories for the volume, will not copy in files from the image and will never delete the volume (docker rm -v CONTAINER will not delete volumes mapped to user-specified host directories).
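For example, an illustrative comparison (the image and host paths are placeholders based on the question's nginx example):
docker run -d -v /var/www/html dockerfile/nginx               # Docker-managed volume; image content copied in, deletable as above
docker run -d -v /host/html:/var/www/html dockerfile/nginx    # host directory mounted as-is; nothing copied, never deleted by Docker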
More information here:
https://blog.container-solutions.com/understanding-volumes-docker
