Can you mount a directory from an image into another container - docker

Is it currently possible with docker to do something like this conceptually?
docker run --mount type=xxx,image=imageX,subdir=/somedir,dst=/mnt-here imageY ...
I understand this can be done at docker build time with COPY --from=...; however, in my use-case it would only really be beneficial if it can be done at container creation time.

The only things it's possible to mount into a container are arbitrary host directories, tmpfs directories, and Docker named volumes. You can make a named volume use anything you could mount with the Linux mount(8) command. Potentially you can install additional volume drivers to mount other things. But these are all of the possible options.
None of these options allow you to mount image or container content into a different container. The COPY --from=other-image syntax you suggest is probably the best approach here.
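For reference, the build-time approach would look roughly like this, using the imageX/imageY names and paths from the question as placeholders:

```Dockerfile
# Build-time copy of content from another image; imageX, /somedir,
# and /mnt-here are the placeholder names from the question above.
FROM imageY
COPY --from=imageX /somedir /mnt-here
```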
If you really absolutely needed it in a volume, one option is to create a volume yourself, copy the content from the source image, and then mount that into the destination image.
docker volume create some-volume
# Since the volume is empty, mounting it into the container will
# copy the contents from the image into the volume. This only happens
# with native Docker volumes and only if the volume is totally empty.
# Docker will never modify the contents of this volume after this.
# Create an empty temporary container to set up the volume
docker run -v some-volume:/somedir --rm some-image /bin/true
# Now you can mount the volume into the actual container
docker run -v some-volume:/mnt-here ...

Related

Combining VOLUME + docker run -v

I was looking for an explanation on the VOLUME entry when writing a Dockerfile and came across this statement
A volume is persistent data stored in /var/lib/docker/volumes/...
You can either declare it in a Dockerfile, which means each time a container is started from the image, the volume is created (empty), even if you don't have any -v option.
You can declare it at runtime: docker run -v [host-dir:]container-dir.
combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume persisted by the container in /var/lib/docker/volumes/...
docker volume create creates a volume without having to define a Dockerfile and build an image and run a container. It is used to quickly allow other containers to mount said volume.
But I'm having a hard time understanding this line:
...combining the two (VOLUME + docker run -v) means that you can mount the content of a host folder into your volume persisted by the container in /var/lib/docker/volumes/...
For example, let's say I have a config file on my host machine, and I run a container based off the image I made with the Dockerfile I wrote. Will it copy the config file into the volume I stated in the VOLUME entry?
Would it be something like (pseudocode)
#dockerfile (pseudocode)
FROM ubuntu
RUN apt-get update
RUN apt-get install -y mysql-server
VOLUME /etc/mysql/conf.d
CMD systemctl start mysql
And when I run it
docker run -it -v /path/to/config/file: ubuntu_based_image
Is this what they mean?
You probably don't want VOLUME in your Dockerfile. It's not needed in order to mount files or directories at runtime, and it has confusing side effects, like making subsequent RUN commands silently lose their changes.
If an image does have a VOLUME, and you don't mount anything else there when you start the container, Docker will create an anonymous volume and mount it for you. This can result in space leaks if you don't clean these volumes up.
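If you do end up with leaked anonymous volumes, they show up as dangling volumes and can be cleaned up with the standard CLI (these commands assume a running Docker daemon):

```shell
# List volumes not referenced by any container; anonymous volumes
# appear here as long hex IDs.
docker volume ls -qf dangling=true
# Interactively remove all unused local volumes.
docker volume prune
```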
You can use a docker run -v option on any container directory regardless of whether or not it's declared as a VOLUME.
If you docker run -v /host/path:/container/path, the two directories are actually the same; nothing is copied, and writes to one are (supposed to be) immediately visible on the other.
Bind mounts created with docker run -v /host/path:/container/path aren't visible in /var/lib/docker at all.
You shouldn't usually be looking at content in /var/lib/docker (and can't if you're not on a native-Linux host). If you need to access the volume file content directly, use a bind mount rather than a named or anonymous volume.
Bind mounts like you've shown are appropriate for injecting config files into containers, and for reading log files back out. Named volumes are appropriate for stateful applications' storage, like the data for a MySQL database. Neither type of volume is appropriate for code or libraries; build these directly into Docker images instead.
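As a concrete sketch of the config-file case (the my.cnf path and the mysql:8 tag here are assumptions, not from the question):

```shell
# Bind-mount a hypothetical local config file read-only, and keep
# MySQL's actual data in a named volume.
docker run -d \
  -v "$PWD/my.cnf:/etc/mysql/conf.d/my.cnf:ro" \
  -v mysql-data:/var/lib/mysql \
  mysql:8
```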

How to name a volume, created by Dockerfile

If I start a container from an image, the Dockerfile of which has an entry like this:
VOLUME ["/data"]
what option of docker run should I use to start the container, so that when I list the volumes via docker volume ls, I see the name I gave the volume and not some long random hash?
If you use the ordinary docker run -v option to mount something on that same path, Docker won't create an anonymous volume there.
docker volume create something
docker run -v something:/data ...
In fact, you don't need a Dockerfile VOLUME directive to do this: you can mount a volume or host directory onto any container path regardless of whether or not it's declared as a VOLUME directory. There aren't many benefits to having that line in the Dockerfile, and it has some confusing side effects; I'd suggest just deleting it.

Can I mount a Docker image as a volume in Docker?

I would like to distribute some larger static files/assets as a Docker image so that it is easy for users to pull those optional files down the same way they would pull the app itself. But I cannot really find a good way to expose files from one Docker image to another. Is there a way to mount a Docker image itself (or a directory in it) as a volume in another Docker container?
I know that there are volume plugins I could use, but I could not find one that would let me do this or something similar.
It's possible to expose any directory of an image as a Docker volume, but not the full image. At least not in a pretty or simple way.
If you want to create a directory from your image as a docker volume you can create a named volume:
docker volume create your_volume
docker run -d \
-it \
--name=yourcontainer \
-v your_volume:/dir_with_data_you_need \
your_docker_image
From this point on, your_volume is accessible and contains the data from the image your_docker_image.
The reason you cannot mount the whole image as a volume is that Docker doesn't let you specify / as the source of a named volume. You'll get Cannot create container for service your-srv: invalid volume spec "/": invalid volume specification: '/' even if you try with docker-compose.
I don't know of any direct way.
You can use a folder on your host as a bridge to share things; this is an indirect way to achieve it.
docker run -d -v /any_of_your_host_folder:/your_assets_folder_in_your_image_container your_image
docker run -d -v /any_of_your_host_folder:/your_folder_of_your_new_container your_container_want_to_use_assets
For your_image, you need to add a CMD to the Dockerfile that copies the assets into your_assets_folder_in_your_image_container (the one you use as the volume, since CMD executes after the volume is mounted).
This costs some time, but only the first time the assets container starts. After it runs, the files have been copied from the assets container to the host folder and no longer depend on the assets image, so you can delete that image and waste no space.
Since your aim is just to make the assets easy for others to use, why not give them a script that automatically fetches the image, starts the container (whose CMD copies the files into the volume), and then deletes the image and container? The assets end up on the host, and people can use that host folder as a volume for whatever they do next.
Of course, it would be better if a container could use another image's resources directly; still, this can serve as a solution, if an imperfect one.
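The workflow described in this answer might be scripted roughly like this (all image names and paths are placeholders):

```shell
# 1. Run the assets image once; its CMD copies the files into the bind mount.
docker run --rm -v /srv/assets:/assets your_image
# 2. The assets now live on the host, so the image can be deleted.
docker rmi your_image
# 3. Mount the host copy into any container that needs it.
docker run -d -v /srv/assets:/assets your_app_image
```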
You can add the Docker socket as a volume, which allows you to start one of your Docker images from within your container.
To do this, add the two following volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/usr/bin/docker:/usr/bin/docker"
If you need to share files between the containers, map the volume /tmp:/tmp when starting both containers.

Mounting local directory into Docker container path that is not exposed as a VOLUME

Is there any difference between:
Mounting a host directory into a container path (the path is not exposed as a volume), e.g.:
docker run -v /host/directory:/container/directory my_image command_to_run
Dockerfile of my_image does not include VOLUME ["/container/directory"]
Mounting a host directory into a container path exposed as a volume:
docker run -v /host/directory:/container/directory my_image command_to_run
Dockerfile of my_image includes VOLUME ["/container/directory"]
I know that volume data persists independent of the container life-cycle. However, since I want to work on my local data from within a container, does that make any difference if the mount-point inside the container is a volume?
There is no difference if you mount the path from the host into the container. The filesystem from the host will be mounted over top of that directory inside the container.
The difference between declaring the volume in the image and not declaring it shows up when you run a container without specifying a volume mount. When the volume is defined on the image, docker will create an "anonymous" volume, which you can see with docker volume ls as a long uuid string. These volumes are rarely useful, so I recommend against defining a volume in the image; instead, define volumes only on your docker run command or docker-compose.yml definition.
Downsides of defining a volume in the image include:
Later lines in the Dockerfile, or in descendant Dockerfiles, may not be able to change the contents at this location. Docker's behavior here varies by scenario and version, so for predictability, once a volume is defined in an image, I consider that directory off limits.
Anonymous volumes are difficult to use and likely to clutter up the filesystem.
I posted a blog on this topic a while back if you're interested in more details.

What is the purpose of defining VOLUME mount points within a Dockerfile rather than ad-hoc cmd-line -v?

I understand that using the VOLUME command within a Dockerfile, defines a mount point within container.
FROM centos:6
VOLUME /html
However, I noticed that even without that VOLUME definition, it's still possible to mount onto that path:
docker run -ti -v /path/to/my/html:/html centos:6
What is the purpose of defining VOLUME mount points in the dockerfile? I suspect it's for readability so people can read the Dockerfile and instantly know what is meant to be mounted?
The VOLUME instruction in a Dockerfile does not let us do a host mount, i.e. mount a directory from the host OS into the container.
However, other containers can still mount the volumes of a container created from that image by using --volumes-from <container name>.
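A minimal sketch of that cross-container sharing, assuming a hypothetical some-image that declares VOLUME /data:

```shell
# Start a container from an image that declares VOLUME /data.
docker run -d --name datastore some-image
# Mount all of that container's volumes into a second container.
docker run --rm --volumes-from datastore busybox ls /data
```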
I understand that using the VOLUME command within a Dockerfile,
defines a mount point within container.
That's not right. In that case the volume is defined for an image, not for a container.
When a volume is defined in the Dockerfile, it's set for an image, so every container run from that image gets that volume defined.
If you define the volume in the command line (docker run -v ...) the volume is defined just for that specific container.
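You can verify the difference with docker inspect (the image and container names here are placeholders):

```shell
# Volumes baked into the image (null/empty if none are declared):
docker image inspect --format '{{json .Config.Volumes}}' some-image
# Mounts actually attached to one specific container:
docker inspect --format '{{json .Mounts}}' some-container
```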
