Difference between --volume and --volumes-from in Docker volumes

What is the exact difference between the two flags -v and --volumes-from used with Docker volumes? It seems to me that they do the same job; consider the following scenario.
First, let's create a volume named myvol using the command:
$ docker volume create myvol
Now create and run a container named c1 that uses myvol, and get into its bash:
$ docker run -it --name c1 -v myvol:/data nginx bash
Let's create a file test.txt in the mounted directory of the container:
root@766f90ebcf37:/# touch /data/test.txt
root@766f90ebcf37:/# ls /data
test.txt
Using the -v (--volume) flag:
Now create another container named c2 that also uses myvol:
$ docker run -it --name c2 -v myvol:/data nginx bash
As expected, the newly created container c2 also has access to all the files in myvol:
root@393418742e2c:/# ls /data
test.txt
Now let's do the same thing with --volumes-from.
Create a container named c3 using the volumes from container c1:
$ docker run -it --name c3 --volumes-from c1 nginx bash
This results in the same thing in c3:
root@27eacbe25f92:/# ls /data
test.txt
The point is: if -v and --volumes-from work the same way, i.e. to share data between containers, then why are they different flags, and what can --volumes-from do that -v cannot?

The point is: if -v and --volumes-from work the same way, i.e. to share data between containers
-v and --volumes-from do not work the same way, but with both of them you can share data between containers.
What can --volumes-from do that -v cannot?
For example, it can mount another container's volumes without knowing how those volumes are named and without specifying a path. You can also append a permission suffix such as :ro or :rw to the container name or ID.
More details can be found in the "Mount volumes from container (--volumes-from)" section of the Docker documentation.
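For instance, building on the containers above, mounting c1's volumes read-only could look like this (the container name c4 is just an example):
$ docker run -it --name c4 --volumes-from c1:ro nginx bash
# inside c4, /data is visible but read-only; a write such as
# `touch /data/blocked.txt` should fail with a read-only filesystem error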
The other question is: what can -v do that --volumes-from cannot?
For example, it can mount named volumes, host directories, or tmpfs (the last of which cannot be shared between containers). With --volumes-from you cannot share data with the host directly.
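For illustration (the host path /srv/conf here is made up), the -v flag accepts either a named volume or a host directory, while tmpfs mounts are typically created with the separate --tmpfs flag:
$ docker run -it -v myvol:/data nginx bash       # named volume
$ docker run -it -v /srv/conf:/data nginx bash   # host directory (bind mount)
$ docker run -it --tmpfs /data nginx bash        # tmpfs, cannot be shared between containers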
The conclusion is: the purpose of --volume is to share data with the host (here you can find more use cases).
The purpose of --volumes-from is to share data between containers.
They both work nicely together.

Related

How to properly share a folder between a few Docker containers in read mode?

I have Docker installed on top of a CentOS system.
I tried to use a volume, but each new container deletes (or hides) the content of the folder I want to share.
My volume is always empty after a docker run.
In order to create my containers, I use:
docker run -dit --name $CONTAINER_NAME -p $PORT:8080 \
-v $VOLUME_PATH:/opt/conf/ \
$IMAGE_NAME
I aim to share a folder from the host between a few Docker containers (for READ access), and I also want to write into this folder from the host.
What is an elegant way to do that?
One solution is to use Data Volume Containers.
First, create a data volume container
docker run -d --name <data-volume-name> -v /<data-volume-name> ubuntu
You can add any data you want in this container.
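For example (the host path /path/on/host is illustrative), you could populate it from the host through a throwaway container:
docker run --rm --volumes-from <data-volume-name> -v /path/on/host:/source ubuntu cp -a /source/. /<data-volume-name>/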
Then create the containers that will share the data by using the --volumes-from option.
Let's create container foo and container bar using the shared data container:
docker run -it --name foo --volumes-from=<data-volume-name> ubuntu
docker run -it --name bar --volumes-from=<data-volume-name> centos
Enjoy yourself
In my example, the shared volume is mounted directly under the root folder of each container.
From either bar or foo you can see /<data-volume-name> in the filesystem.
You can also use a named volume.
Create a volume:
docker volume create --name test-volume
Create containers foo and bar that will be mapped to the volume:
docker run -dit --name foo -v test-volume:/path/in/container/ <image-name>
docker run -dit --name bar -v test-volume:/path/in/container/ <image-name>
Anything a container writes into the volume will be visible to the other containers.
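Since the question asks for read access from the containers while the host writes, a possible variation (reusing the question's variables) is to bind-mount the host folder read-only with the :ro suffix:
docker run -dit --name foo -v $VOLUME_PATH:/opt/conf/:ro $IMAGE_NAME
docker run -dit --name bar -v $VOLUME_PATH:/opt/conf/:ro $IMAGE_NAME
The host keeps full write access to $VOLUME_PATH, while foo and bar can only read it.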

Bind data to a db container

I have test data which I built into a Docker image (docker build with a COPY to /db/data) and pushed to Docker Hub.
I want to run a db instance that will use that data.
I would expect to be able to:
run docker create to create a container from the image and map it to a volume (maybe a named volume), which would effectively copy the data to that volume;
run docker run with --volumes-from to map that data from the first container into the second.
When I tried it out, I always saw the folder mapping in the second container, but I couldn't reach any pre-populated data from the data container.
--volumes-from adds the volumes of one Docker container to another.
In your case you created an image that contains your data; you didn't create a volume that contains your data.
Docker currently supports the following:
a host bind mount that is listed as a bind mount in both services
a named volume that is used by both services
volumes_from another service
For example, if you have your data in /my/data on the Docker host, you can use the following:
sudo docker run -d --name firstcontainer -v /my/data:/db/data <image name>
sudo docker run -d --name secondcontainer -v /my/data:/db/data <image name>
If you want to use named volumes, follow these steps:
create the named volume
sudo docker volume create <volumename>
Mount it to a transitional container and copy your data into it with docker cp (see the sketch after these steps):
sudo docker cp /path/to/your/file <containername>:/destination/path
Mount your named volume on multiple containers.
sudo docker run -d --name firstcontainer -v <volumename>:/db/data <image name>
sudo docker run -d --name secondcontainer -v <volumename>:/db/data <image name>
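As a sketch of the transitional-container step above (the container name datacopier is made up; the sleep just keeps the helper container alive while copying):
sudo docker run -d --name datacopier -v <volumename>:/db/data busybox sleep 300
sudo docker cp /path/to/your/file datacopier:/db/data/
sudo docker rm -f datacopier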

How to re-mount a docker volume without overriding existing files?

When running Docker, you can mount files and directories using the --volume option. E.g.:
docker run --volume "$(pwd)/local":/remote myimage
I'm running a docker image that defines VOLUMEs in the Dockerfile. I need to access a config file that happens to be inside one of the defined volumes. I'd like to have that file "synced" on the host so that I can edit it. I know I could run docker exec ..., but I hope to circumvent that overhead for only editing one file. I found out that the volumes created by the VOLUME lines are stored in /var/lib/docker/volumes/<HASH>/_data.
Using docker inspect I was able to find the directory that is mounted:
docker inspect gitlab-runner | grep -B 1 '"Destination": "/etc/gitlab-runner"' | head -n 1 | cut -d '"' -f 4
Output:
/var/lib/docker/volumes/9c233c085c36380c6c33035222c16e5d061368c5060cc81dda2a9a713a2b2b3b/_data
So the question is:
Is there a way to re-mount volumes defined in an image? Or to somehow get the directory more easily than with my one-liner above?
EDIT: after comments by zeppelin, I've tried rebinding the volume with no success:
$ mkdir etc
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test1 ls /etc/gitlab-runner/
certs
config.toml
$ docker exec test2 ls /etc/gitlab-runner/
# empty. no files
$ ls etc
# also empty
docker inspect shows correctly that the volume is bound to ~/etc, but the files inside the container at /etc/gitlab-runner/ seem lost.
$ docker run -d --name test1 gitlab/gitlab-runner
$ docker run -d --name test2 -v ~/etc:/etc/gitlab-runner gitlab/gitlab-runner
You've got two different volume types there. One I call an anonymous volume (a very long uuid visible when you run docker volume ls). The second is a host volume or bind mount that maps a directory on the host directly into the container. So each container you spun up is looking at a different place.
Anonymous volumes and named volumes (docker run -d -v mydata:/etc/gitlab-runner gitlab/gitlab-runner) get initialized to the contents of the image at that directory location. This initialization only happens when the volume is empty and is mounted into a new container. Host volumes, as you've seen, only get the contents of the host filesystem, even if it's empty at that location.
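To illustrate the initialization (the container name test3 and volume name runnercfg are made up, and this assumes the image actually ships files under /etc/gitlab-runner):
$ docker run -d --name test3 -v runnercfg:/etc/gitlab-runner gitlab/gitlab-runner
$ docker exec test3 ls /etc/gitlab-runner/
# unlike test2, this should list the same files as test1, because an empty
# named volume is seeded with the image content at that path on first use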
With that background, the short answer to your question is no, you cannot mount a file inside the container back out to your host. But you can copy the file out with several methods, assuming you don't overlay the source of the file with a host volume mount. With a running container, there's the docker cp command. Personally, I like:
docker run --rm -v ~/etc:/target gitlab/gitlab-runner \
cp -av /etc/gitlab-runner/. /target/.
If you have a named volume with data you want to copy in or out, you can use any image with the tools you need to do the copy:
docker run --rm -v mydata:/source -v ~/etc:/target busybox \
cp -av /source/. /target/.
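The docker cp route for a single file could look like this (using the gitlab-runner container name from the question):
$ docker cp gitlab-runner:/etc/gitlab-runner/config.toml ./config.toml
# edit it locally, then copy it back into the container
$ docker cp ./config.toml gitlab-runner:/etc/gitlab-runner/config.toml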
Try to avoid modifying data inside a container from the host directly; it is much nicer to wrap your task in another container that you then start with the --volumes-from option, when that is possible in your case.
Not sure I understood your problem; anyway, as for the documentation you mention:
The VOLUME instruction creates a mount point with the specified name
and marks it as holding externally mounted volumes from native host or
other containers. [...] The docker run command initializes the newly
created volume with any data that exists at the specified location
within the base image.
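For reference, the example Dockerfile from that part of the documentation looks roughly like this:
FROM ubuntu
RUN mkdir /myvol
RUN echo "hello world" > /myvol/greeting
VOLUME /myvol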
So, following that example Dockerfile, after having built the image
docker build -t mytest .
and having the container running
docker run -d -ti --name mytestcontainer mytest /bin/bash
you can access it from the container itself, e.g.
docker exec -ti mytestcontainer ls -l /myvol/greeting
docker exec -ti mytestcontainer cat /myvol/greeting
Hope it helps.

Share the same resource in multiple containers in Docker

I need to set up one container volume to be used by multiple containers.
For example:
Container 1 (web app 1): volume path -v /var/www/html/
Container 2 (web app 2): volume path -v /var/www/html/
Container 3 (common files): volume path -v /var/www/html/
I need the common files in Container 3 to be usable by the other two containers.
How can I achieve this?
You should name your volumes so you can mount them by name instead of by container. So:
docker run -d --name web1 -v web1-html:/var/www/html web-img
docker run -d --name web2 -v web2-html:/var/www/html web-img
docker run -d --name common -v web1-html:/var/www/web1/html \
-v web2-html:/var/www/web2/html your-img
With the volumes your two web apps create today, you'll see them listed with a GUID under docker volume ls. By giving them a name, you can easily reuse those volumes in other containers.
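For example, after starting the three containers above, listing the volumes would show the named ones:
$ docker volume ls
# web1-html and web2-html appear by name, whereas anonymous volumes
# (e.g. those created from a plain VOLUME instruction) show up as long hex IDs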

Docker shared volume not working as described in the documentation

I am learning Docker, and according to the documentation a shared data volume is only destroyed when the last container holding a reference to the shared volume is removed with the -v flag. Nevertheless, in my initial tests this is not the behaviour I saw.
From the documentation:
Managing Data in Containers
If you remove containers that mount volumes, including the initial dbdata container, or the subsequent containers db1 and db2, the volumes will not be deleted. To delete the volume from disk, you must explicitly call docker rm -v against the last container with a reference to the volume. This allows you to upgrade, or effectively migrate data volumes between containers.
I did the following:
1. docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
2. docker run -d --volumes-from dbdata --name db1 ubuntu:14.04 /bin/bash
3. Created some files in the /dbdata directory
4. Exited the db1 container
5. docker run -d --volumes-from dbdata --name db2 ubuntu:14.04 /bin/bash
6. I could access the files created in step 3 and create some new files
7. Exited the db2 container
8. docker run -d --volumes-from dbdata --name db3 ubuntu:14.04 /bin/bash
9. I could access the files created in steps 3 and 6 and create some new files
10. Exited the db3 container
11. Removed all containers without the -v flag
12. Created the dbdata container again, but the data was not there.
As stated in the user manual:
This allows you to upgrade, or effectively migrate data volumes between containers.
I wonder what I am doing wrong.
You are doing nothing wrong. In step 12, you are creating a new container with the same name. It has a different volume, which initially is empty.
Maybe the following example can illustrate what is happening (IDs and paths may vary on your system or in other Docker versions):
$ docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
7c23cc1e6637e29f36c6cdd4c1461f6e1742b201e05227279ac3db55328da674
Run a container that has a volume /dbdata and give it the name dbdata. The ID is returned (yours will be different).
Now let's inspect the container and print the "Volumes" information:
$ docker inspect --format "{{ .Volumes }}" dbdata
map[/dbdata:/var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e]
We can see that your /dbdata volume is located at /var/lib/docker/vfs/dir/248641...
Let's create some new data inside the container's volume:
$ docker run --rm --volumes-from dbdata ubuntu:14.04 /bin/bash -c "echo fuu >> /dbdata/test"
And check if it is available
$ docker run --rm --volumes-from dbdata -it ubuntu:14.04 cat /dbdata/test
fuu
Afterwards you delete the containers, without the -v flag.
$ docker rm dbdata
The dbdata container (with ID 7c23cc1e6637) is gone; however, the volume's data is still present on your filesystem, as you can see if you inspect the folder:
$ cat /var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e/test
fuu
(Please note: if you delete the container with the -v flag, i.e. docker rm -v dbdata, the volume's files on your host filesystem will be deleted, and the above cat command would result in a "No such file or directory" message or similar.)
Finally, in step 12, you start a new container with a different volume and give it the same name: dbdata.
docker run -d -v /dbdata --name dbdata ubuntu:14.04 echo Data-only container for postgres
2500731848fd6f2093243da3be064db79e76d731904e6f5349c3f00f054e5f8c
Inspection yields a different volume, which is initially empty.
docker inspect --format "{{ .Volumes }}" dbdata
map[/dbdata:/var/lib/docker/vfs/dir/faffba00358060024026412203a1562125f73d2bdd69a2202483e858dda04740]
If you want to re-use the volume, you have to create a new container and import/restore the data from the filesystem into the data container. In your case, you should not delete the data container in the first place, as you want to reuse the volume from it.
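If the old volume directory is still on the host (as in the cat example above), a minimal recovery sketch, reusing the example paths from above (yours will differ), is to copy its contents into the new container's volume:
$ sudo cp -a /var/lib/docker/vfs/dir/248641a5f51a80b5004f72f622a7329835e93881e9915a01b3c7112189d0b55e/. \
    /var/lib/docker/vfs/dir/faffba00358060024026412203a1562125f73d2bdd69a2202483e858dda04740/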
