Strange issue when applying a Docker named volume to a new container

As I understand from all the information I've found, a Docker volume can be created in three ways:
1- by omitting the host path (it will automatically create a directory with a random ID)
2- by specifying the host path (it will also automatically create a directory with a random ID)
3- by creating a named volume and mounting it on a container path
So I tried the first two ways:
$ docker run --name mongo-docker -v /data/db -p 27017:27017 -d mongo
$ docker run --name mongo-docker2 -v $(pwd)/data/:/data/db -p 27222:27017 -d mongo
Then I looked at the docker volume list:
$ docker volume ls
DRIVER VOLUME NAME
local 5b829a731245cb7fe3a1f28aca4c4c3c3791105be228182ccb9b2f72319180c8
local fb058e804412fb56b2096e2cb903e3ae73647ef6ca076ad9003708b80f94ffc5
It looks just like what I expected.
But when I tried the last one, by first creating a volume:
$ docker volume create mongoVol
mongoVol
$ docker volume ls
DRIVER VOLUME NAME
local mongoVol
and used it as the host-volume path, it came up like this:
$ docker run --name mongo-docker3 -v mongoVol:/data/db -p 27322:27017 -d mongo
86bea0e52c9f395268665e191edc59f795d07266f17667502c7fa32879a6e021
$ docker volume ls
DRIVER VOLUME NAME
local 0de25c92be504d0a6b9bb9c83aa8a6fe17bf9bc195562314ca49edb1c4cf4377 <=== created a new one for the new container?
local mongoVol
Why does this create a new volume? Shouldn't it just be the "mongoVol" volume?
I can't find answers to this in any forum, post, or video.

The mongo image's Dockerfile has two directories named in a VOLUME statement. You're mounting content on /data/db but not on /data/configdb.
If the Dockerfile declares a directory as a VOLUME and nothing is explicitly mounted there, Docker automatically creates an anonymous volume (your first case). That's what results in the additional volume appearing in the docker volume ls listing.
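For completeness, here is a minimal sketch of avoiding the anonymous volume by covering both declared directories with named volumes (the second volume name and host port are illustrative, not from the question):
$ docker volume create mongoConfigVol
$ docker run --name mongo-docker4 -v mongoVol:/data/db -v mongoConfigVol:/data/configdb -p 27422:27017 -d mongo
With both VOLUME directories mounted explicitly, docker volume ls should show only the two named volumes for this container and no new random ID.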

Related

Why are some volumes already created inside the Docker engine?

Whenever I run the below command:
docker volume ls
I can see some volumes already created in my docker engine.
DRIVER VOLUME NAME
local 5df9458932cd504e10b2b37856c434cbdf3876733684b100cbf390c965ac9581
local 6f7037bc33861a5e42a9f8bcd699f8184ff1916a297a718ccc4df5f369d07530
local 8a86c462020f35f1051b47c48555228a1df359251f2496c32ed45a9081bb1872
local 85ed838d2e081eddc672fd8ddb15bbb3eecc73adb270678c98b7c50a03ecb2fc
Why are those volumes created?
How can I find out for what purpose they exist?
If you started a Docker container with a volume that doesn't have a name or host mount point, Docker will create a unique name for it. These docs briefly mention anonymous volumes like this. Most likely, a Dockerfile had a VOLUME section and wasn't run with a corresponding --mount or -v flag to bind some local volume to the container's volume.
Also see this devops stack exchange answer.
Here's an example of when an anonymous volume is created:
Dockerfile with anonymous volumes:
FROM alpine:3.9
VOLUME ["/root", "/test"]
Building/running the container without mounting or otherwise naming the /root and /test volumes:
$ docker volume ls
DRIVER VOLUME NAME
$ docker build -t test .
$ docker run -it --rm -d --name volume-test test:latest sh
$ docker volume ls
DRIVER VOLUME NAME
local 5b332abd25b77c1ac324a0e3c00dc9a554cfe80c996a20bd77ef10c35c8ef98a
local 05c903f47f3f3666e03ee06154ff54b23547a5cc65750ca18bb40be40ed4049c
local 6f595aada6ae7c9fb16831996c2bdd8d652bec55a7cedf96afef95aec8f4e6e1
local 7f54c9dbbec46acc5a843499c65a50e23a78baa884facd026704d0dcb0362c9e
local 47a791197d6164757b015df1e2aba48bac3999720ead6b5981820a3aaece4113
local 214155fe63200cc859c1eddd2b31aa990fd6eb7c8614aa02bd8b57690b0fe53e
Of course, you can always inspect the volumes to try to find out where they came from, but this may or may not be useful for you:
docker inspect 5b332abd25b77c1ac324a0e3c00dc9a554cfe80c996a20bd77ef10c35c8ef98a
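If a volume is still referenced by a container, you can also work backwards from the volume ID with a ps filter (the volume filter is part of the standard docker CLI):
$ docker ps -a --filter volume=5b332abd25b77c1ac324a0e3c00dc9a554cfe80c996a20bd77ef10c35c8ef98a
Any container listed is one that has that volume mounted.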

Instead of using existing docker volume, couchdb docker image always create new volume

On Ubuntu 18.04, I installed CouchDB using this repo. For data persistence, I created a Docker volume using the command docker volume create --name couchdbvolume.
I used the docker run -p 5984:5984 -d couchdb -v couchdbvolume:/opt/couchdb/data --name some-couchdb command to create a new container. Instead of using the existing volume, Docker creates a new volume every time, so I lose data on every restart.
As per this question, unnamed volumes are created if the Dockerfile doesn't give a name in the VOLUME keyword. I think because of this line the volume doesn't have a name, so it creates an unnamed volume.
Instead of multiple Docker volumes, I expect only one (I have only one CouchDB image).
According to the documentation, options should precede the image name; everything after the image name is treated as the command to run inside the container, so Docker never saw your -v flag.
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]
Please try the following:
docker run -p 5984:5984 -d -v couchdbvolume:/opt/couchdb/data --name some-couchdb couchdb
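To verify that Docker parsed the flag and mounted the named volume, you can inspect the container's mounts with a Go template (container name as above, output fields are standard):
$ docker inspect -f '{{ range .Mounts }}{{ .Name }}:{{ .Destination }} {{ end }}' some-couchdb
This should print couchdbvolume:/opt/couchdb/data rather than an anonymous volume ID.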

Docker Created Volume Does Not Exist With Inspect

I am new to volumes in Docker.
Following Creating and mounting a data volume container, I created a volume called mochawesome with:
docker create -v /mochawesome-reports --name mochawesome dman777/vista-e2e-test-runner
I see it existing:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
273b14f7e0ea dman777/vista-e2e-test-runner "./run_test.sh" 3 minutes ago Created mochawesome
However if I do docker volume inspect mochawesome I get:
Error: No such volume: mochawesome
Why is this?
The argument --name in docker create specifies the name of the container (not the name of the volume). Therefore docker volume inspect cannot find this name.
To create a named volume use docker create -v my-named-volume:/mochawesome-reports --name mochawesome dman777/vista-e2e-test-runner. Then you can use docker volume inspect my-named-volume.
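A short sketch of the distinction, using the answer's example names:
$ docker create -v my-named-volume:/mochawesome-reports --name mochawesome dman777/vista-e2e-test-runner
$ docker inspect mochawesome
$ docker volume inspect my-named-volume
The first inspect finds the container named mochawesome; the second finds the volume named my-named-volume. Running docker volume inspect mochawesome still fails, because no volume carries that name.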

How to run a docker container linked to a previously created volume

I have a named volume with stuff in it.
I would like to provide this volume the same way I provide a path: docker run -v /host:/path.in.docker.container works for paths, and I'd like to do the same with a volume I manually created and filled.
I know about --volumes-from, but how do I first connect the volume to the empty container?
You can create a volume with docker volume create (see the documentation), then mount this volume with the --volume option of docker run, as in docker run -v volumename:/data -it my_image.
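A minimal sketch of pre-filling a named volume and then attaching it to a new container (volumename and my_image come from the answer; the alpine helper container is illustrative):
$ docker volume create volumename
$ docker run --rm -v volumename:/data alpine sh -c 'echo hello > /data/seed.txt'
$ docker run -v volumename:/data -it my_image
The throwaway alpine container only populates the volume; the data outlives it and is visible to my_image under /data.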

Mounting volumes on Bluemix containers and sharing between them does not work

I've created a volume with
$ cf ic volume create mosquitto_config
This information shows up as expected:
$ cf ic volume list
mosquitto_config
Then, I created two containers based on an image that contains the VOLUME ["/etc/mosquitto"] line in its Dockerfile and into which I can log in via SSH:
$ cf ic run -p 22:22 --volume mosquitto_config:/etc/mosquitto --name ssh-test registry.ng.bluemix.net/{reg-name}/{image-name}:latest
$ cf ic run -p 22:22 --volume mosquitto_config:/etc/mosquitto --name ssh-test-2 registry.ng.bluemix.net/{reg-name}/{image-name}:latest
After logging in, I see the mount point /etc/mosquitto as a directory in both containers. However, if I create a file in that directory within one container, the new file does not show up in the other container. As far as I understand the volume concept, the new file should show up in the other container. Is this currently not working, or how do I set it up correctly?
I think this kind of volume sharing is not supported by Docker.
In order to give a container access to another container's volumes, you can simply pass the --volumes-from argument to docker run. For example:
$ docker run -it -h NEWCONTAINER --volumes-from container-test debian /bin/bash
All the volumes mounted in 'container-test' will be available to 'NEWCONTAINER' (with the same mount options).
It's important to note that this works even if container-test is not running: a volume will never be deleted as long as a container is linked to it.
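A self-contained illustration, reusing the names from the example above:
$ docker run -d --name container-test -v /shared debian sleep infinity
$ docker exec container-test sh -c 'echo hi > /shared/f'
$ docker run --rm --volumes-from container-test debian cat /shared/f
The last command prints hi, because the anonymous volume mounted at /shared in container-test is reused by the new container.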
For further help, check this URL: http://container-solutions.com/understanding-volumes-docker/
