I started using Docker only recently. My understanding is that mounting a local folder into the container C1, created from the image image_name, can be done by running the following:
var=$(pwd)
docker run -d --name=C1 -v "$var":/host image_name
However, after detaching the container, I am not able to see it among the containers listed by docker ps or docker container ls.
Yet if I run docker volume list and then docker volume rm VOLUMEID, I get the error volume is in use - [CONTAINER_C1_ID].
Any idea how I can see where C1 is? What am I doing wrong?
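One thing worth checking (my suggestion, not stated in the question) is whether the container started and then exited immediately; docker ps only lists running containers, while the -a flag includes stopped ones:

docker ps -a --filter name=C1   # shows C1 even if it has exited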
I am running a Docker container with the host's Docker socket mounted inside, using:
docker run -v /Path/to/service:/src/service -v /var/run/docker.sock:/var/run/docker.sock --net=host image-name python run.py
This runs a python script that creates a data folder in /src and fills it. When printing os.listdir('/src/data'), I get a list of files.
I then run a container from within this container, mounting the data folder, using docker-py.
volumes = {'/src/data': {'bind': '/src', 'mode': 'rw'}}
client.containers.run(image, command='ls data', name=container_key, network='host', volumes=volumes)
And it prints :
Starting with UID: 0 and HOME: /src\n0\n'
This means data is mounted, but empty. What am I doing wrong?
So: mounting the Docker socket inside a container means that containers started from within it actually run on your HOST machine.
The end result is that you have two containers on the host: one with
/Path/to/service:/src/service
and one with
/src/data:/src
If you want to share a volume between two containers, you should usually use a "named" volume, like:
docker run -v sharedvolume:/src/data
docker run -v sharedvolume:/src
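A fuller sketch of the named-volume approach (the container and image names here are placeholders, not from the question):

docker volume create sharedvolume                         # create the named volume once
docker run -d --name writer -v sharedvolume:/src/data writer-image
docker run -d --name reader -v sharedvolume:/src reader-image
# both containers now see the same files through the shared volume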
I have created a container from a docker image. The docker image is:
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/tensorflow/tensorflow latest-gpu 7f09e75cdc12 4 months ago 1.289 GB
And the running container is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e99c80d2d53e gcr.io/tensorflow/tensorflow:latest-gpu "/run_jupyter.sh" 21 hours ago Up 11 minutes 6006/tcp, 0.0.0.0:8888->8888/tcp deep
I need to share a folder between the host Ubuntu 16.04 OS and the docker container.
I ran this command for doing this:
docker run -v /home/cortana/deep-learning/:/home gcr.io/tensorflow/tensorflow:latest-gpu
This didn't lead to the folder being loaded into the container deep. I don't know what to do after this and am really new to containers in Docker. Please explain your answer a bit too.
EDIT:
I deleted the container and then ran these commands:
docker run -v /home/cortana/deep-learning/:/home gcr.io/tensorflow/tensorflow:latest-gpu
nvidia-docker run -p 8888:8888 --name deep gcr.io/tensorflow/tensorflow:latest-gpu
nvidia-docker exec -it deep bash
There is no folder called deep-learning in the /home/ folder in the container. What have I done wrong here?
There's no API I'm aware of to change the mounted volumes on a running container. You need to destroy the existing container (docker stop and docker rm) and create a new one with the proper configuration (docker run). If you find yourself trying to maintain a single long-lived container, upgrading apps or keeping data inside it, odds are good that you're trying to recreate a VM rather than isolate a process, which is an anti-pattern.
From your edit, you didn't create the /home/deep-learning folder, you created the /home folder. You also appear to be creating a second container named deep without any volume mounts and exec'ing into that one. To make a container with the /home/deep-learning volume mount and the name deep, run it like:
docker run -v /home/cortana/deep-learning:/home/deep-learning \
-p 8888:8888 --name deep gcr.io/tensorflow/tensorflow:latest-gpu
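To verify the mount afterwards (a check I'm adding, not part of the original answer):

docker exec -it deep ls /home/deep-learning   # should list the contents of /home/cortana/deep-learning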
I'm using Docker version 1.10. How can I get the volumes used by a container?
I know I can get the containers by:
docker ps
And I can inspect them with:
docker inspect $containerID
I also know that the volume API is available, so I can also do:
docker volume ls
and
docker volume inspect $volumeID
But I can't find any link information between them. What should I use?
You can get detailed volume information for a container with:
docker inspect --format="{{.Mounts}}" $containerID
If I create a volume named "volumehello" and start a container named "hello" that uses "volumehello":
docker volume create --name volumehello
docker run -it -d --name=hello -v volumehello:/tmp/data hello-world
Then we can get the volume information of "hello" container by running:
docker inspect --format="{{.Mounts}}" hello
We will get:
[{volumehello /var/lib/docker/volumes/volumehello/_data /tmp/data local z true rprivate}]
volumehello is the volume name
/var/lib/docker/volumes/volumehello/_data is the host location of the volume
/tmp/data is the mapped location of the volume within the container
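If you only want specific fields rather than the whole Mounts struct, a Go template range works as well, e.g.:

docker inspect --format '{{range .Mounts}}{{.Name}} {{.Source}} {{.Destination}}{{"\n"}}{{end}}' hello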
When I run docker run -v /tmp:/tmp -ti ubuntu /bin/bash, the running container uses the filesystem of the host for /tmp. When I close that container with the exit command and then link its container id into a new one with docker run --volumes-from="closed container id" -ti ubuntu /bin/bash, the new container also sees the /tmp files. How is it possible that a closed container can still be referred to by another container? Please explain to me in a better way what is happening in Docker.
This is expected behavior, because you mapped the volume -v /tmp:/tmp on the first instance, which means you mapped /tmp on your host OS to /tmp inside the container. Any changes you make within the container therefore remain on the host OS and are accessible to the second or third instance; the --volumes-from reference keeps working unless the <container id> is removed.
The container exists until it is removed with docker rm <container id>. You can get the <container id> from docker ps -a, which lists all containers, both running and exited, that have not been removed.
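A quick way to see this end to end (a sketch; the container name first is mine, not from the question):

docker run --name first -v /tmp:/tmp -ti ubuntu /bin/bash   # touch /tmp/hello inside, then exit
docker ps -a --filter status=exited                         # 'first' still appears here
docker run --volumes-from=first -ti ubuntu ls /tmp          # hello is visible in the new container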
Check Container Solutions' Understanding Volumes in Docker
I have the following containers:
Data container, which is built directly on quay.io from a GitHub repo; basically it is a website.
FPM container
NGINX container
The three of them are linked together and working just fine. BUT the problem is that every time I change something in the website (the data container) it is rebuilt (of course), and I have to remove that container, and also the FPM and NGINX ones, and recreate them all to be able to read the new content.
I started with a "backup approach", in which I copy the data from the container to a host directory and mount that into the FPM and NGINX containers; this way I can update the data without restarting/removing any service.
But I really don't like the idea of moving the data from the data container onto the host, so I'm wondering if there is a "docker way", or a better way, of doing this.
Thanks!
UPDATE: Adding more context
Dockerfile (data container definition):
FROM debian
ADD data/* /home/mustela/
VOLUME /home/mustela/
Where data only has 2 files: hello.1 and hello.2
Compiling the image:
docker build -t="mustela/data" .
Running the data container:
docker run --name mustela-data mustela/data
Creating another container to link to the previous one:
docker run -d -it --name nginx --volumes-from mustela-data ubuntu bash
Listing the mounted files:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
Now, let's rebuild the data container image, but first add some new files, so that inside data we now have hello.1 hello.2 hello.3 hello.4:
docker rm mustela-data
docker build -t="mustela/data" .
docker run --name mustela-data mustela/data
If I ls /home/mustela from the running container, the files aren't being updated:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
But if I run a new container, I can see the files:
docker run -it --name nginx2 --volumes-from mustela-data ubuntu ls /home/mustela
Result:
hello.1 hello.2 hello.3 hello.4
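This is consistent with what the inspect technique from the earlier answer would show: the running nginx container is still attached to the volume created by the original mustela-data container, while nginx2 picked up the new one (my illustration, reusing commands from above):

docker inspect --format="{{.Mounts}}" nginx    # still shows the old container's volume
docker inspect --format="{{.Mounts}}" nginx2   # shows the rebuilt container's volume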