Mount a Host Directory as a Data Volume in Docker?

When I try to mount a host directory as a data volume in Docker, no host directory gets mounted into the container.
When I run
sudo docker run -t -i --name web -v /home/rponna/src/webapp:/opt/webapp ramnathreddy
it returns a container ID:
7dcc03c397d56514015220a073c9e951478bf84aceb90b880bb93a5716079212
But when I run that container, there are no files in /opt/webapp (it's empty).
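One likely explanation, offered as a guess: a bind mount shows exactly what is on the host path, so if /home/rponna/src/webapp is empty (or mistyped) on the host, /opt/webapp will look empty in the container. A quick sanity check compares both sides:
# What the host directory really contains
ls -la /home/rponna/src/webapp
# What the running container sees at the mount point
sudo docker exec web ls -la /opt/webapp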

Related

Mount a local data directory into jupyterhub

I build and run JupyterHub as a Docker image: https://hub.docker.com/r/joergklein/jupyterhub
Is it a good idea to declare the mount point in the Dockerfile as
# Create a mountpoint
VOLUME /data
or is it better to use
# Create a mountpoint
VOLUME /home/data
I have a local data directory on my computer that I want to mount into the container at /data or /home/data.
First I pull and run the image:
docker run -p 8000:8000 -d --name jupyterhub joergklein/jupyterhub jupyterhub
Then I mount the datasets directory onto /data in the container. The datasets directory contains a lot of CSV files.
docker run -v /home/user/datasets:/data -t jupyterhub /bin/bash
I want to run JupyterHub on a subdomain for a team.
We want to share the data. How can all team members work in this directory?
How can we add new data to this directory?
Which is the right docker run command?
This works fine for me:
docker run -p 8000:8000 -d --name jupyterhub --volume $(pwd)/datasets:/data joergklein/jupyterhub jupyterhub
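Because a bind mount points at the same files on disk, anything a team member puts into the host directory is immediately visible inside the container; no rebuild or restart is needed. A minimal sketch, assuming the host path from the question and the container name from the command above:
# Any team member with write access to the host directory can add data
cp new-dataset.csv /home/user/datasets/
# The file is visible inside the running container right away
docker exec jupyterhub ls /data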

docker mount volume dir ubuntu

I'm trying to use Docker to do this:
Run the Docker image, and make sure you mount your User (for Mac) or home (for Ubuntu) directory as a volume so you can access your local files.
The code that I've been given is:
docker run -v /Users/:/host -p 5000:5000 -t -i bjoffe/openface_flask_v2 /bin/bash
I know that the part I should modify to point at my local files is -v /Users/:/host, but I am unsure how to do so.
The files I want to load into the container are inside /home/user/folder-i-want-to-read
How should this command be written?
A bind mount is simply a mapping of host files or directories onto container files or directories; both names point to the same physical location on disk.
In your case, you could try this command:
docker container run -it -p 5000:5000 -v /home/user/folder-i-want-to-read/:/path_in_container bjoffe/openface_flask_v2 /bin/bash
Once it is running, verify that the files from the host path /home/user/folder-i-want-to-read appear in the container path you mapped.
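A quick way to confirm the mapping took effect, from the shell the command above drops you into (using the placeholder /path_in_container from that command):
# Inside the container's shell:
ls -la /path_in_container    # should list the contents of folder-i-want-to-read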

How to copy files from Docker container to bind mount directory on docker run?

In my Dockerfile, I copy a config file over like so:
VOLUME ["/root/.litecoin"]
WORKDIR $HOME/.litecoin
COPY litecoin.conf .
I'm able to start the Docker container with the config file if I use a named or unnamed volume:
docker run -d --name litecoind -v litecoin-blockchain:/root/.litecoin -t test
or
docker run -d --name litecoind -v /root/.litecoin -t test
However, if I try with a bind mount (an empty directory), it replaces the contents of /root/.litecoin with the empty host directory and I lose my config. E.g.:
docker run --name litecoind -v /mnt/LV_LTC:/root/.litecoin -t test
What is the best non-hacky way to get config files copied from the container into the bind-mounted directory on docker run? It's annoying that my self-contained Docker app breaks just because I need to use bind mounts (block storage).
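One common pattern, sketched here as a suggestion rather than a confirmed fix: keep a pristine copy of the config at another path in the image and copy it into the bind mount at startup via an entrypoint script. The path /defaults and the script name docker-entrypoint.sh are my assumptions:
#!/bin/sh
# docker-entrypoint.sh (hypothetical): seed an empty bind mount on first run.
# The image keeps a pristine copy of the config under /defaults.
if [ ! -f /root/.litecoin/litecoin.conf ]; then
    cp /defaults/litecoin.conf /root/.litecoin/
fi
# Hand control to the container's main process
exec "$@"
The Dockerfile would then COPY litecoin.conf /defaults/ and set this script as the ENTRYPOINT; an empty bind mount gets the default config, while an existing config is left untouched.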

Does Docker update contents of volume when mounted if changes are made in Dockerfile?

I have Jenkins running in a Docker container. The home directory lives in a host volume, to ensure the build history is preserved when the container is updated.
I have updated the image so that it creates an additional file in the home directory. When the new image is pulled and run, I cannot see the new file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml is not created. Why is this, please?
Basically, when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
you overlay /container-dest-dir with whatever is in /host-src-dir.
From the docs:
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
This SO question is also relevant: docker mounting volumes on host.
It seems you want it the other way around (i.e. the container is the source and the host is the destination).
Here is a workaround:
Create the volume in your Dockerfile
Run it without -v i.e.: docker run --name=my_container my_image
Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
Which means the directory as it exists in the container was mounted into the host directory /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data
Unfortunately, I do not know a way to make it mount on a specific host directory instead.
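One possible workaround, offered as my suggestion rather than part of the answer above: seed the host directory once from the anonymous volume with docker cp, then bind-mount that directory on later runs:
# Copy the populated home directory out of the container to the host
# (the trailing /. copies the directory contents, not the directory itself)
docker cp my_container:/var/jenkins_home/. /host/directory
# From then on, bind-mount the seeded host directory as before
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins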

Bind data to a db container

I have test data which I build into a Docker image (docker build with a COPY to /db/data) and push to Docker Hub.
I want to run a db instance that will use that data.
I would expect to be able to:
run a "docker create" to make a container from the image and map it to a volume (maybe a named volume), which would effectively copy the data into that volume.
run a "docker run" with --volumes-from to map that data from the first container into the second.
When I tried it out, I always see the folder mapping in the second container, but I can't reach any pre-populated data from the data container.
--volumes-from will add volumes from one Docker container to another.
In your case you created an image that contains your data; you didn't create a volume that contains your data.
Docker currently supports the following:
a host bind mount that is listed as a bind mount in both services
a named volume that is used by both services
volumes_from another service
For example, if you have your data in /my/data on the Docker host, you can use the following:
sudo docker run -d --name firstcontainer -v /my/data:/db/data <image name>
sudo docker run -d --name secondcontainer -v /my/data:/db/data <image name>
If you want to use named volumes, follow these steps:
Create the named volume:
sudo docker volume create <volumename>
Mount it in a transitional container and copy your data into it with docker cp (a combined sketch follows these steps):
sudo docker cp /path/to/your/file <containername>:/destination/path
Mount your named volume on multiple containers:
sudo docker run -d --name firstcontainer -v <volumename>:/db/data <image name>
sudo docker run -d --name secondcontainer -v <volumename>:/db/data <image name>
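Putting the named-volume steps together as one end-to-end sketch (the names mydata and seedbox are placeholders of my choosing):
# 1. Create the named volume
sudo docker volume create mydata
# 2. Mount it in a transitional container and copy the data in
sudo docker run -d --name seedbox -v mydata:/db/data <image name>
sudo docker cp /path/to/your/file seedbox:/db/data
sudo docker rm -f seedbox
# 3. Mount the populated volume on the real containers
sudo docker run -d --name firstcontainer -v mydata:/db/data <image name>
sudo docker run -d --name secondcontainer -v mydata:/db/data <image name>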
