Docker: how to get volumes used by a container

I'm using Docker version 1.10. How can I get the volumes used by a container?
I know I can get the containers by:
docker ps
And I can inspect them with:
docker inspect $containerID
I also know that the volume API is available, so I can also do:
docker volume ls
and
docker volume inspect $volumeID
But I can't find any link information between them. What should I use?

You can get detailed volume information for a container with:
docker inspect --format="{{.Mounts}}" $containerID
If I create a volume named "volumehello", and start a container named "hello" which uses "volumehello":
docker volume create --name volumehello
docker run -it -d --name=hello -v volumehello:/tmp/data hello-world
Then we can get the volume information of the "hello" container by running:
docker inspect --format="{{.Mounts}}" hello
We will get:
[{volumehello /var/lib/docker/volumes/volumehello/_data /tmp/data local z true rprivate}]
volumehello is the volume name
/var/lib/docker/volumes/volumehello/_data is the host location of the volume
/tmp/data is the mapped location of the volume within the container
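The Go-template output above is positional and a bit awkward to parse programmatically. If you need the same information in a script, one option (a sketch, not tied to a particular Docker version) is to take the full JSON from `docker inspect` and pull the volume entries out of its Mounts array in Python:

```python
import json

def volume_mounts(inspect_json):
    """Extract (volume name, host path, container path) triples
    from the JSON output of `docker inspect <container>`."""
    triples = []
    for container in json.loads(inspect_json):
        for m in container.get("Mounts", []):
            if m.get("Type") == "volume":
                triples.append((m["Name"], m["Source"], m["Destination"]))
    return triples

# Sample shaped like `docker inspect hello` output (trimmed):
sample = '''[{"Mounts": [{"Type": "volume",
                          "Name": "volumehello",
                          "Source": "/var/lib/docker/volumes/volumehello/_data",
                          "Destination": "/tmp/data",
                          "Driver": "local",
                          "Mode": "z",
                          "RW": true,
                          "Propagation": "rprivate"}]}]'''

print(volume_mounts(sample))
# -> [('volumehello', '/var/lib/docker/volumes/volumehello/_data', '/tmp/data')]
```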

Related

Why are some volumes already created inside the Docker engine?

Whenever I run the below command:
docker volume ls
I can see some volumes already created in my docker engine.
DRIVER VOLUME NAME
local 5df9458932cd504e10b2b37856c434cbdf3876733684b100cbf390c965ac9581
local 6f7037bc33861a5e42a9f8bcd699f8184ff1916a297a718ccc4df5f369d07530
local 8a86c462020f35f1051b47c48555228a1df359251f2496c32ed45a9081bb1872
local 85ed838d2e081eddc672fd8ddb15bbb3eecc73adb270678c98b7c50a03ecb2fc
Why were those volumes created?
How can I find out for what purpose they exist?
If you start a Docker container with a volume that has no name or host mount point, Docker creates a unique name for it. The docs briefly mention anonymous volumes like this. Most likely, the image's Dockerfile had a VOLUME instruction and the container wasn't run with a corresponding --mount or -v flag to bind a local directory or named volume to it.
Also see this devops stack exchange answer.
Here's an example of when an anonymous volume is created:
Dockerfile with anonymous volumes:
FROM alpine:3.9
VOLUME ["/root", "/test"]
Building and running the container without mounting or otherwise naming the /root and /test volumes:
$ docker volume ls
DRIVER VOLUME NAME
$ docker build -t test .
$ docker run -it --rm -d --name volume-test test:latest sh
$ docker volume ls
DRIVER VOLUME NAME
local 5b332abd25b77c1ac324a0e3c00dc9a554cfe80c996a20bd77ef10c35c8ef98a
local 05c903f47f3f3666e03ee06154ff54b23547a5cc65750ca18bb40be40ed4049c
local 6f595aada6ae7c9fb16831996c2bdd8d652bec55a7cedf96afef95aec8f4e6e1
local 7f54c9dbbec46acc5a843499c65a50e23a78baa884facd026704d0dcb0362c9e
local 47a791197d6164757b015df1e2aba48bac3999720ead6b5981820a3aaece4113
local 214155fe63200cc859c1eddd2b31aa990fd6eb7c8614aa02bd8b57690b0fe53e
Of course, you can always inspect the volumes to try to find out where they came from, but this may or may not be useful:
docker inspect 5b332abd25b77c1ac324a0e3c00dc9a554cfe80c996a20bd77ef10c35c8ef98a
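If you only want to separate these generated names from volumes you created yourself, a quick heuristic (an assumption based on the examples above: anonymous volumes get a 64-character hex ID) can be sketched in Python:

```python
import re

ANON = re.compile(r"^[0-9a-f]{64}$")  # anonymous volumes get a 64-char hex ID

def split_volumes(names):
    """Partition volume names into (anonymous, named) lists."""
    anon = [n for n in names if ANON.match(n)]
    named = [n for n in names if not ANON.match(n)]
    return anon, named

names = ["volumehello",
         "5b332abd25b77c1ac324a0e3c00dc9a554cfe80c996a20bd77ef10c35c8ef98a"]
print(split_volumes(names))
# -> (['5b332abd...98a'], ['volumehello'])
```

Note this is only a heuristic: a user could in principle create a named volume whose name happens to be 64 hex characters.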

How can I see detached docker containers?

I started using Docker only recently. It is my understanding that mounting a local folder into a volume inside the container C1, based on the image image_name, can be done by running the following code:
var=$(pwd)
docker run -d --name=C1 -v $var:/host image_name
However, because I am detaching the container, I am not able to see it among the containers listed by docker ps or docker container ls.
Yet if I run docker volume list and then docker volume rm VOLUMEID, I get the error: volume is in use - [CONTAINER_C1_ID].
Any idea how I can see where C1 is?
What am I doing wrong?
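One likely explanation (an assumption, since the error message shows C1 still exists): the container exited shortly after starting, and docker ps only lists running containers, while docker ps -a also lists exited ones. A minimal Python sketch of that filtering, over hypothetical records shaped like `docker ps --format json` output:

```python
def visible_containers(records, show_all=False):
    """Mimic `docker ps` vs `docker ps -a`: without -a,
    only running containers are listed."""
    if show_all:
        return [r["Names"] for r in records]
    return [r["Names"] for r in records if r["State"] == "running"]

# Hypothetical records shaped like `docker ps -a --format json` output:
records = [{"Names": "C1", "State": "exited"},
           {"Names": "web", "State": "running"}]

print(visible_containers(records))        # like `docker ps`
print(visible_containers(records, True))  # like `docker ps -a`
```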

Docker inside docker : volume is mounted, but empty

I am running a Docker container with the host's Docker socket mounted inside, using:
docker run -v /Path/to/service:/src/service -v /var/run/docker.sock:/var/run/docker.sock --net=host image-name python run.py
This runs a python script that creates a data folder in /src and fills it. When printing os.listdir('/src/data'), I get a list of files.
I then run a container from within this container, mounting the data folder, using docker-py.
volumes = {'/src/data': {'bind': '/src', 'mode': 'rw'}}
client.containers.run(image, command='ls data', name=container_key, network='host', volumes=volumes)
And it prints:
Starting with UID: 0 and HOME: /src\n0\n'
Which means the volume is mounted, but empty. What am I doing wrong?
Mounting the Docker socket inside the container means that containers started from in there actually run on your HOST machine.
The end result is that you have two containers on the host: one with
/Path/to/service:/src/service
and one with
/src/data:/src
The second container's /src/data is resolved against the host filesystem, not against the first container, which is why it shows up empty.
If you want to share a volume between two containers, you should usually use a named volume:
docker run -v sharedvolume:/src/data and docker run -v sharedvolume:/src
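With docker-py (as used in the question above), the same named-volume idea translates into the volumes mapping passed to client.containers.run. A minimal sketch that only builds the mappings, taking the volume name sharedvolume and the paths from the example above:

```python
def named_volume_binding(volume, *paths):
    """Build one docker-py `volumes` mapping per container,
    all mounting the same named volume at different paths."""
    return [{volume: {"bind": p, "mode": "rw"}} for p in paths]

outer, inner = named_volume_binding("sharedvolume", "/src/data", "/src")
print(outer)  # volumes= for the outer container's run(...)
print(inner)  # volumes= for the inner container's run(...)
# outer -> {'sharedvolume': {'bind': '/src/data', 'mode': 'rw'}}
# inner -> {'sharedvolume': {'bind': '/src', 'mode': 'rw'}}
```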

Docker Created Volume Does Not Exist With Inspect

I am new to volumes in Docker.
Following Creating and mounting a data volume container, I created a volume called mochawesome with:
docker create -v /mochawesome-reports --name mochawesome dman777/vista-e2e-test-runner
I can see it exists:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
273b14f7e0ea dman777/vista-e2e-test-runner "./run_test.sh" 3 minutes ago Created mochawesome
However if I do docker volume inspect mochawesome I get:
Error: No such volume: mochawesome
Why is this?
The argument --name in docker create specifies the name of the container (not the name of the volume). Therefore docker volume inspect cannot find this name.
To create a named volume use docker create -v my-named-volume:/mochawesome-reports --name mochawesome dman777/vista-e2e-test-runner. Then you can use docker volume inspect my-named-volume.
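The distinction between the three forms of -v trips many people up. A simplified Python sketch of how the flag is interpreted (ignoring mode suffixes like :ro, which the real parser also handles):

```python
def classify_volume_flag(spec):
    """Classify a `docker run/create -v` argument (simplified sketch:
    named volume vs bind mount vs anonymous volume)."""
    parts = spec.split(":")
    if len(parts) == 1:
        return "anonymous volume"  # -v /mochawesome-reports
    if parts[0].startswith("/"):
        return "bind mount"        # -v /host/dir:/container/dir
    return "named volume"          # -v my-named-volume:/container/dir

print(classify_volume_flag("/mochawesome-reports"))                  # anonymous volume
print(classify_volume_flag("my-named-volume:/mochawesome-reports"))  # named volume
print(classify_volume_flag("/host/reports:/mochawesome-reports"))    # bind mount
```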

Docker mount namespace

When I run $ docker run -v /tmp:/tmp -ti ubuntu /bin/bash, the running container uses the /tmp filesystem of the host. When I close that container with the exit command and then start a new container linked to it with $ docker run --volumes-from=<closed container id> -ti ubuntu /bin/bash, the new container also uses the /tmp files.
How is it possible that, even after the container is closed, its volumes can still be referenced by another container? Please explain what is happening in Docker.
This is expected behavior: you mapped -v /tmp:/tmp on the first instance, which means /tmp on your host OS is mapped to /tmp inside the container. Any changes you make within the container remain on the host OS, and are accessible to the second or third instance via --volumes-from as long as the first container (its <container id>) still exists.
The container exists until it is removed with docker rm <container id>. You can get the <container id> from docker ps -a, which lists all containers, both running and exited, that have not been removed.
See Container Solutions' "Understanding Volumes in Docker".
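Conceptually, --volumes-from just reuses the exited container's mount configuration. A rough Python sketch that rebuilds the equivalent -v flags from a container's Mounts records (record shape assumed from `docker inspect` output):

```python
def mounts_to_flags(mounts):
    """Rebuild `-v` flags from a container's Mounts records -- roughly
    the mount configuration that `--volumes-from` reuses."""
    flags = []
    for m in mounts:
        # Named volumes are referenced by name, bind mounts by host path.
        src = m["Name"] if m["Type"] == "volume" else m["Source"]
        flags.append(f"-v {src}:{m['Destination']}")
    return flags

# Mounts of the exited container from the question above:
closed = [{"Type": "bind", "Source": "/tmp", "Destination": "/tmp"}]
print(mounts_to_flags(closed))  # -> ['-v /tmp:/tmp']
```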