How to share a folder between the host OS and a Docker container

I have created a container from a docker image. The docker image is:
REPOSITORY TAG IMAGE ID CREATED SIZE
gcr.io/tensorflow/tensorflow latest-gpu 7f09e75cdc12 4 months ago 1.289 GB
And the container is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e99c80d2d53e gcr.io/tensorflow/tensorflow:latest-gpu "/run_jupyter.sh" 21 hours ago Up 11 minutes 6006/tcp, 0.0.0.0:8888->8888/tcp deep
I need to share a folder between the host Ubuntu 16.04 OS and the docker container.
I ran this command for doing this:
docker run -v /home/cortana/deep-learning/:/home gcr.io/tensorflow/tensorflow:latest-gpu
This didn't result in the folder being loaded into the container deep. I don't know what to do after this and am really new to containers in Docker. Please explain your answer a bit too.
EDIT:
I deleted the container and then ran these commands:
docker run -v /home/cortana/deep-learning/:/home gcr.io/tensorflow/tensorflow:latest-gpu
nvidia-docker run -p 8888:8888 --name deep gcr.io/tensorflow/tensorflow:latest-gpu
nvidia-docker exec -it deep bash
There is no folder called deep-learning in the /home/ folder in the container. What have I done wrong here?

There's no API I'm aware of for changing the mounted volumes on a running container. You need to destroy the existing container (docker stop and docker rm) and create a new one with the proper configuration (docker run). If you find yourself trying to maintain a single long-lived container, upgrading apps or keeping data inside it, odds are good that you're trying to recreate a VM rather than isolate a process, which is an anti-pattern.
From your edit: you didn't mount the folder at /home/deep-learning, you mounted it at /home. You also appear to be creating a second container named deep without any volume mounts and exec'ing into that one. To make a container with the /home/deep-learning volume mount and the name deep, run it like:
docker run -v /home/cortana/deep-learning:/home/deep-learning \
-p 8888:8888 --name deep gcr.io/tensorflow/tensorflow:latest-gpu
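To confirm the bind mount is in place, a quick check (assuming the container name and paths above) is to list the directory from inside the container and to inspect its mounts:
docker exec -it deep ls /home/deep-learning
docker inspect -f '{{ .Mounts }}' deep
The Mounts output should show /home/cortana/deep-learning on the host as the source and /home/deep-learning in the container as the destination.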

Related

Inject configuration into volume before Docker container starts

I am looking for a way to create a Docker volume and put some data on it just before a specific container, which needs the configuration on startup, is started.
I do not want to modify the container. I would like to use a vanilla container straight from the Docker Hub.
Any ideas?
Update
I did not mention that all this has to be done in a Compose file. If I were doing it manually, I could simply wait for the configuration-injecting container to finish.
Absolutely! Just create your volume beforehand, attach it to any container (a base OS image like Ubuntu works great), add your data, and you're good to go!
Create the volume:
docker volume create test_volume
Attach it to an instance where you can add data:
docker run --rm -it --name ubuntu_1 -v test_volume:/app ubuntu /bin/sh
Add some data:
Do this inside the container you are now in from the previous command:
touch /app/my_file
Exit the container:
exit
Attach the volume to your new container:
Of course, replace ubuntu with your real image name.
docker run --rm -it --name ubuntu_2 -v test_volume:/app ubuntu /bin/sh
Verify the data is there:
~> ls /app/
my_file
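Since the update mentions Compose: one way to script the same idea (a sketch, assuming your compose file declares test_volume as an external volume and lives in the current directory) is to seed the volume first and only then bring the stack up:
docker volume create test_volume
docker run --rm -v test_volume:/app ubuntu touch /app/my_file
docker-compose up -d
Because the volume already contains my_file when the Compose services start, the dependent container sees its configuration on startup.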

Docker Volume point to host Directory in Dockerfile

I have the following Dockerfile :
FROM jboss/wildfly
USER jboss
RUN mkdir -p /opt/jboss/wildfly/standalone/log
VOLUME /opt/jboss/wildfly/standalone/log
CMD /bin/bash
# CMD true
The resulting image is started with docker run -ti --name=data_volume data/volume. The next Dockerfile
FROM jboss/wildfly
RUN sed -i 's|<file relative-to="jboss.server.log.dir" path="server.log"/>|\<file relative-to="jboss.server.log.dir" path="\${jboss.host.name}-server.log"/\>|' \
    /opt/jboss/wildfly/standalone/configuration/standalone.xml
overrides the logging of the resulting JBoss to log to "servername"-server.log in the logging dir. When I start the resulting image with docker run -ti --name=wild-01 --volumes-from=data_volume my/wildfly and docker run -ti --name=wild-02 --volumes-from=data_volume my/wildfly, I have two log files in my data_volume container. So far so good.
I would like to point my volume to a directory on the host, e.g. /var/log/wildfly.
How can I achieve this in the Dockerfiles and not with the -v parameter when running data/volume?
Thanks a lot in advance.
Inside a Dockerfile you can only define volumes that Docker itself places under /var/lib/docker/volumes. This is because every host can be different from every other.
Docker uses /var/lib/docker as the "docker area" where it stores all Docker-related data. It's the one directory that's guaranteed on every host because it gets created on installation.
If you were to point a volume in the Dockerfile to, say, /home/mbieren/docker_vol, the image would produce errors when run on a different host, as that directory does not exist there and the user probably has insufficient permissions to create it.
Docker gets around that problem by not allowing custom mount paths to be set in the Dockerfile.
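You can see where such a Dockerfile-declared volume actually lands (a quick check, using the data_volume container from the question) by inspecting its mounts; the source will be an anonymous volume somewhere under /var/lib/docker/volumes:
docker inspect -f '{{ .Mounts }}' data_volume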
I would like to point my volume to a directory on the host, e.g. /var/log/wildfly.
Remove all mention of volumes from your Dockerfile, then launch your container using
docker run -d -v /var/log/wildfly:/var/log/wildfly your-image-name
then in your code just reference the normal path
/var/log/wildfly
Your syntax for launching the container, docker run -ti, makes the container shell interactive, whereas -d is the normal mode to spin it up as a daemon running in the background.
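Applied to the setup from the question, that looks roughly like this (a sketch; the image name my/wildfly and the log path come from the question, and the host directory /var/log/wildfly is assumed to exist and be writable by the jboss user):
docker run -d -v /var/log/wildfly:/opt/jboss/wildfly/standalone/log --name wild-01 my/wildfly
docker run -d -v /var/log/wildfly:/opt/jboss/wildfly/standalone/log --name wild-02 my/wildfly
Both containers then write their "servername"-server.log files into the same host directory, with no data container needed.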

Docker mount namespace

When I run $ docker run -v /tmp:/tmp -ti ubuntu /bin/bash, the running container uses the /tmp filesystem of the host. When I close that container with the exit command and then link its container id to a new one with $ docker run --volumes-from="closed container id" -ti ubuntu /bin/bash, the new container also uses the /tmp files. How is it possible that a container that has been closed can still be referenced from another container? Please explain to me in a better way what is happening in Docker.
How is it possible that a container that has been closed can still be referenced from another container? Please explain to me in a better way what is happening in Docker.
This is expected behavior: you mapped the volume with -v /tmp:/tmp on the first instance, which means you mapped /tmp on your host OS to /tmp inside the container. Any changes you make within the container remain on the host OS and are accessible by the second or third instance, unless the <container id> is removed.
The container exists until it is removed with docker rm <container id>. You can get the <container id> from docker ps -a, which returns the list of all containers that are running or have exited and have not been removed.
Check Container Solutions' Understanding Volumes in Docker.
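The behavior is easy to reproduce end to end (a small sketch; the container name first is just an illustration):
docker run -v /tmp:/tmp --name first -ti ubuntu /bin/bash      # exit when done; the container is now Exited, not removed
docker run --rm --volumes-from first -ti ubuntu ls /tmp        # still works, because the exited container "first" still exists
docker rm first                                                # after this, --volumes-from first no longer resolves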

Mounting volumes on Bluemix containers and sharing between them does not work

I've created a volume with
$ cf ic volume create mosquitto_config
This information shows up as expected:
$ cf ic volume list
mosquitto_config
Then, I created two containers based on an image whose Dockerfile contains the line VOLUME ["/etc/mosquitto"] and into which I'm able to log in via SSH:
$ cf ic run -p 22:22 --volume mosquitto_config:/etc/mosquitto --name ssh-test registry.ng.bluemix.net/{reg-name}/{image-name}:latest
$ cf ic run -p 22:22 --volume mosquitto_config:/etc/mosquitto --name ssh-test-2 registry.ng.bluemix.net/{reg-name}/{image-name}:latest
After logging in, I see the mount point /etc/mosquitto as a directory in both containers. However, if I create a file in that directory within one container, the new file does not show up in the other container. As far as I understand the volume concept, the new file should show up in the other container. Is this currently not working, or how do I set it up correctly?
I think this way of sharing volumes is not supported by Docker.
To give a container access to another container's volumes, you can simply pass the --volumes-from argument to docker run. For example:
$ docker run -it -h NEWCONTAINER --volumes-from container-test debian /bin/bash
All the volumes mounted in 'container-test' will be available to 'NEWCONTAINER' (with the same mount options).
It's important to note that this works even if container-test is not running: a volume is never deleted as long as a container is linked to it.
For further help, check this URL:
http://container-solutions.com/understanding-volumes-docker/
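As a minimal illustration of the --volumes-from pattern described above (a sketch using the debian image from the example; the file name mosquitto.conf is just a placeholder):
$ docker run -v /etc/mosquitto --name container-test debian touch /etc/mosquitto/mosquitto.conf
$ docker run --rm --volumes-from container-test debian ls /etc/mosquitto
The second command prints mosquitto.conf even though container-test has already exited, because the anonymous volume stays attached to it until the container is removed.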

Share and update docker data containers across containers

I have the following containers:
A data container, which is built directly on quay.io from a GitHub repo; basically it is a website.
FPM container
NGINX container
The three of them are linked together and working just fine. BUT the problem is that every time I change something in the website (data container), it is rebuilt (of course) and I have to remove that container, and also the FPM and NGINX ones, and recreate them all to be able to see the new content.
I started with a "backup approach" in which I copy the data from the container to a host directory and mount that into the FPM and NGINX containers; this way I can update the data without restarting or removing any service.
But I really don't like the idea of moving the data from the data container onto the host. So I'm wondering if there is a "docker way", or a better way, of doing it.
Thanks!
UPDATE: Adding more context
Dockerfile data container definition:
FROM debian
ADD data/* /home/mustela/
VOLUME /home/mustela/
Where data only has 2 files: hello.1 and hello.2
Compiling the image:
docker build -t="mustela/data" .
Running the data container:
docker run --name mustela-data mustela/data
Creating another container to link to the previous one:
docker run -d -it --name nginx --volumes-from mustela-data ubuntu bash
Listing the mounted files:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
Now, let's rebuild the data container image, but first add some new files, so that inside data we now have hello.1 hello.2 hello.3 hello.4:
docker rm mustela-data
docker build -t="mustela/data" .
docker run --name mustela-data mustela/data
If I ls /home/mustela from the running container, the files aren't being updated:
docker exec -it nginx ls /home/mustela
Result:
hello.1 hello.2
But if I run a new container, I can see the files:
docker run -it --name nginx2 --volumes-from mustela-data ubuntu ls /home/mustela
Result: hello.1 hello.2 hello.3 hello.4
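One way to see why this happens (a quick check, reusing the container names above) is to compare which volume each container is actually attached to: the old nginx container still points at the anonymous volume created by the first mustela-data container, while nginx2 points at the volume of the rebuilt one.
docker inspect -f '{{ .Mounts }}' nginx
docker inspect -f '{{ .Mounts }}' nginx2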
