First, create a volume, sample_vol:
docker volume create sample_vol
My Dockerfile:
FROM archlinux/base
RUN touch /root/testing [**edited** see the note at the commented-out RUN below]
# VOLUME sample_vol:/root [**edited** this will not work, because VOLUME does not accept named volumes. So this will not mount at /root; it tries to use the path sample_vol:/root, which does not exist]
VOLUME "/root" or VOLUME ["/root"] [**edited** this creates an anonymous local volume that only lives while the container is running. I tried to use a named volume, like VOLUME ["name:/root"], but it didn't work]
# RUN touch /root/testing [**edited** this will not work, because once the volume has been declared, only files created before the declaration are kept]
Build the image:
docker build -t archlinux/sample_vol .
Check whether the testing file was created in sample_vol:
docker run --rm -it -v=sample_vol:/tmp/myvolume archlinux/base ls /tmp/myvolume
It does not show that the file testing was created,
while
$ docker run --rm -it --name sample_vol archlinux/sample_vol ls /root/testing
shows that the file testing was created in /root/ of the built image.
So why is sample_vol not mounted at /root, with testing created inside it?
Update: the reason I found appears to be documented here:
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#volume
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
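A quick way to see the runtime behaviour (an illustrative command; sample_vol2 is a fresh, made-up volume name): when a new, empty named volume is mounted at /root of the built image, Docker seeds the volume with the image's content at that path, so the file does appear:
docker run --rm -v sample_vol2:/root archlinux/sample_vol ls /root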
You are misunderstanding Docker volumes.
Docker images are about build time.
Docker volumes are only useful at runtime.
Try running following commands to get an idea:
docker run --rm -it -v=sample_vol:/tmp/myvolume archlinux/base touch /tmp/myvolume/1.txt
docker run --rm -it -v=sample_vol:/tmp/myvolume archlinux/base touch /tmp/myvolume/2.txt
docker run --rm -it -v=sample_vol:/tmp/myvolume archlinux/base touch /tmp/myvolume/3.txt
docker run --rm -it -v=sample_vol:/tmp/myvolume archlinux/base ls -altr /tmp/myvolume/
The 1st container creates a file 1.txt in the docker volume mounted at /tmp/myvolume, and the container is deleted after this operation.
The 2nd container creates a file 2.txt in the docker volume mounted at /tmp/myvolume, and the container is deleted after this operation.
The 3rd container creates a file 3.txt in the docker volume mounted at /tmp/myvolume, and the container is deleted after this operation.
The 4th container lists the files in the docker volume mounted at /tmp/myvolume, and the container is deleted after this operation.
A docker volume stores persistent data outside of the lifecycle of a container. That means that when you remove the container, you still have the data living on inside the volume.
So the next time you create a container and attach that docker volume, you automatically get all the data with the new container.
Consider the example of a database image where you want the data in a volume, so that when you replace the container with a higher version you still get the old data in the new database.
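As a sketch of that pattern (the image tags and container names here are illustrative, and it assumes the newer version can read the older on-disk format):
docker volume create redis-data
docker run -d --name redis-old -v redis-data:/data redis:6
# ... the application writes data to /data ...
docker rm -f redis-old
# the new container finds the old data in /data
docker run -d --name redis-new -v redis-data:/data redis:7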
Related
I am looking at this example
docker run --rm --volumes-from myredis -v $(pwd)/backup:/backup debian cp /data/dump.rdb /backup/
from the Using Docker book.
Why do we need the --rm flag?
Why do we have --volumes-from?
The idea here is that:
you have a redis container named myredis which has some volumes for persistent storage (that you'd like to back up);
you run a temporary debian container that saves the backup to your_current_dir/backup and is then removed.
docker run --rm ... debian runs the container and removes it after it exits
--volumes-from myredis: this way the debian container has access to the database's volumes
-v $(pwd)/backup:/backup this second volume is used to put the backup in your current dir, $(pwd)/backup. If it weren't used, the backup would only have been copied to /backup (inside the container) and later removed together with the container. This way the backup persists.
cp /data/dump.rdb /backup/ copies the actual file
The --rm flag tells the Docker Engine to remove the container once it exits. Without this flag, you would need to remove the container manually after you stop it.
The --volumes-from flag mounts all the volumes defined by the referenced container; it ensures the two containers mount the same volumes.
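Restoring the backup into the volume is the mirror image of the command above (a sketch, assuming the same container name and paths):
docker run --rm --volumes-from myredis -v $(pwd)/backup:/backup debian cp /backup/dump.rdb /data/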
I have Jenkins running in a Docker container. The home directory is in a host volume, in order to ensure that the build history is preserved when the container is updated.
I have updated the image so that it creates an additional file in the home directory. When the new image is pulled and run, I cannot see the new file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml has not been created. Why is this, please?
Basically when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
you overlay your /container-dest-dir with whatever is in /host-src-dir.
From the docs:
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory /src/webapp into the container at /webapp. If the path /webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
This SO question is also relevant: docker mounting volumes on host.
It seems you want it the other way around (i.e. the container is source and the host is destination).
Here is a workaround:
Create the volume in your Dockerfile
Run it without -v, i.e.: docker run --name=my_container my_image
Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
This means that your directory, as it exists in the container, was mounted into the host directory /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data
Unfortunately, I do not know a way to make it mount on a specific host directory instead.
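What can work instead is seeding the host directory once, by copying the image's content out of a created container and only then bind-mounting it (a sketch using the image and paths from this question; seed is a made-up container name):
docker create --name seed my_image
docker cp seed:/var/jenkins_home/. /host/directory/
docker rm seed
docker run -v /host/directory:/var/jenkins_home -p 80:8080 my_image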
When I run Node-RED with
docker run -v D:/mydir:/data nodered
the content of /data is copied into my volume on the first run; that's what I expected.
If I run
docker run -v D:/mydir:/usr/src/node-red/node_modules nodered
then the volume is empty.
I was expecting the content of node_modules to be copied into the volume at start time. What am I missing?
I can illustrate that a little more:
docker run --rm -v d:/VM:/data nodered/node-red-docker ls /data
--> lists the files
docker run --rm nodered/node-red-docker ls /usr/src/node-red/node_modules
--> lists the content of node_modules
docker run --rm -v d:/VM:/usr/src/node-red/node_modules nodered/node-red-docker ls /usr/src/node-red/node_modules
--> is empty!
You're mounting host directories as volumes, so there isn't any copying going on - the mount path inside the container is being mapped to the path on the host, so you're seeing the contents of the host directory.
Volumes sit outside the Union File System when you mount them, so you don't get an overlay which merges the contents of the image and the contents of the host directory. Instead you're effectively bypassing the contents of the image for that volume, and repointing it to your host.
Samples:
touch /docker/nodered-modules/sample.txt
docker run --rm -v /docker/nodered-modules:/usr/src/node-red/node_modules nodered/node-red-docker ls /usr/src/node-red/node_modules
sample.txt
touch /docker/nodered-data/sample.txt
docker run --rm -v /docker/nodered-data:/data nodered/node-red-docker ls /data
sample.txt
The reason you're seeing a difference is because the /data volume is defined in the Dockerfile and empty in the image, so you see the contents of your host directory as expected. The modules directory isn't empty in the image, but you're repointing it to an empty directory on your host.
Docker does not support copying data from the base image into host directories that are mounted as container volumes.
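One workaround (a sketch; the volume name modules_vol is made up): use a named volume instead of a host directory. When an empty named volume is mounted over a non-empty directory in the image, Docker copies the image's content into the volume on first use:
docker volume create modules_vol
docker run --rm -v modules_vol:/usr/src/node-red/node_modules nodered/node-red-docker ls /usr/src/node-red/node_modules
--> lists the content of node_modules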
Please bear with me as I learn my way around Docker; I'm using v1.11.1.
I am writing a Dockerfile and would like to specify that a folder of the container should be persisted, per user (i.e. per computer running the container). I originally thought that including:
VOLUME /path/to/dir/to/persist
would be enough, but when I start my container with docker run -t -i myimage:latest bash, manually add files, and then exit, I expect to be able to find my files again. But when I run the image again (as above), the added files are no longer there.
I've read around, but the answers seem either outdated in regard to the use of VOLUME, or suggest things I would rather not do, which are:
I don't want to use -v in the run command
I would rather not make a volume container (seems like overkill for my one tiny folder)
What is it that I'm doing wrong? Any help would be greatly appreciated.
Cheers guys.
Update: I can persist data using a named volume, i.e.: docker run -v name:/path/to/persist -t -i myimage:latest bash. But building with a Dockerfile that contains VOLUME name:/path/to/persist does not work.
What is not very obvious is that you are creating a brand new container every time you do a "docker run". Each new container would then have a fresh volume.
So your data is being persisted, but you're not reading the data from the container you wrote it to.
Example to illustrate the problem
Sample Dockerfile
FROM ubuntu
VOLUME /data
built as normal
$ docker build . -t myimage
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM ubuntu
---> bd3d4369aebc
Step 2 : VOLUME /data
---> Running in db84d80841de
---> 7c94335543b8
Now run it twice
$ docker run -ti myimage echo hello world
$ docker run -ti myimage echo hello world
And take a look at the volumes
$ docker volume ls
DRIVER VOLUME NAME
local 078820609d31f814cd5704cf419c3f579af30672411c476c4972a4aad3a3916c
local cad0604d02467a02f2148a77992b1429bb655dba8137351d392b77a25f30192b
The "docker rm" command has a special "-v" option that will cleanup any volumes associated with containers.
$ docker rm -v $(docker ps -qa)
How to use a data container
Using the same docker image built in the previous example, create a container whose sole purpose is to persist data via its volume:
$ docker create --name mydata myimage
Launch another container that saves some data into the "/data" volume
$ docker run -it --rm --volumes-from mydata myimage bash
root@a1227abdc212:/# echo hello world > /data/helloworld.txt
root@a1227abdc212:/# exit
Launch a second container that retrieves the data
$ docker run -it --rm --volumes-from mydata myimage cat /data/helloworld.txt
hello world
Cleanup: simply remove the container and specify the "-v" option to ensure its volume is cleaned up.
$ docker rm -v mydata
Notes:
The "volumes-from" parameter means all data is saved into the underlying volume associated with the "mydata" container
When running the containers the "rm" option will ensure they are automatically removed, useful for once-off containers.
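On current Docker versions, a named volume gives you the same persistence without a dedicated data container (a sketch, reusing the image built above):
$ docker volume create mydata
$ docker run --rm -v mydata:/data myimage bash -c 'echo hello world > /data/helloworld.txt'
$ docker run --rm -v mydata:/data myimage cat /data/helloworld.txt
hello world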
I was using Docker in the old way, with a volume container:
docker run -d --name jenkins-data jenkins:tag echo "data-only container for Jenkins"
But now I changed to the new way by creating a named volume:
docker volume create --name my-jenkins-volume
I bound this new volume to a new Jenkins container.
The only thing I have left is a folder containing the /var/jenkins_home of my previous Jenkins container (extracted with docker cp).
Now I want to fill my new named volume with the content of that folder.
Can I just copy the content of that folder to /var/lib/docker/volumes/my-jenkins-volume/_data?
You can certainly copy data directly into /var/lib/docker/volumes/my-jenkins-volume/_data, but by doing this you are:
Relying on physical access to the docker host. This technique won't work if you're interacting with a remote docker api.
Relying on a particular aspect of the volume implementation that could change in the future, breaking any processes you have that rely on it.
I think you are better off relying on things you can accomplish using the docker api, via the command line client. The easiest solution is probably just to use a helper container, something like:
docker run -v my-jenkins-volume:/data --name helper busybox true
docker cp . helper:/data
docker rm helper
You don't need to start a container to add data to an already existing named volume; just create a container and copy the data there:
docker container create --name temp -v my-jenkins-volume:/data busybox
docker cp . temp:/data
docker rm temp
You can reduce the accepted answer to one line using, e.g.:
docker run --rm -v `pwd`:/src -v my-jenkins-volume:/data busybox cp -r /src/. /data
(The /src/. ensures the directory's contents, not the directory itself, are copied into the volume.)
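To check the result, an illustrative listing:
docker run --rm -v my-jenkins-volume:/data busybox ls /data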
Here are the steps for copying the contents of ~/data to a docker volume named my-vol.
Step 1. Attach the volume to a "temporary" container. For that, run this command in a terminal:
docker run --rm -it --name alpine --mount type=volume,source=my-vol,target=/data alpine
Step 2. Copy the contents of ~/data into my-vol. For that, run these commands in a new terminal window:
cd ~/data
docker cp . alpine:/data
This copies the contents of ~/data into the my-vol volume. After the copy, exit the temporary container.
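You can then verify the copy from a throwaway container (illustrative):
docker run --rm -v my-vol:/data alpine ls -la /data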
You can add this Bash function to your .bashrc to copy files to an existing Docker volume without running a container:
# Usage: copy-to-docker-volume SRC_PATH DEST_VOLUME_NAME [DEST_PATH]
copy-to-docker-volume() {
SRC_PATH="$1"
DEST_VOLUME_NAME="$2"
DEST_PATH="${3:-}"
# create smallest Docker image possible
echo -e 'FROM scratch\nLABEL empty=""' | docker build -t empty -
# create temporary container to be able to mount volume
CONTAINER_ID=$(docker container create -v "${DEST_VOLUME_NAME}":/data empty cmd)
# copy files to volume
docker cp "${SRC_PATH}" "${CONTAINER_ID}":"/data/${DEST_PATH}"
# remove temporary container
docker rm "${CONTAINER_ID}"
}
Example
# create volume as destination
docker volume create my-volume
# create directory to copy
mkdir my-dir
echo "hello file1" > my-dir/my-file-1
# copy directory to volume
copy-to-docker-volume my-dir my-volume
# list directory on volume
docker run --rm -it -v my-volume:/data busybox ls -la /data/my-dir
# show file content on volume
docker run --rm -it -v my-volume:/data busybox cat /data/my-dir/my-file-1
# create another file to copy
echo "hello file2" > my-file-2
# copy file to directory on volume
copy-to-docker-volume my-file-2 my-volume my-dir
# list (updated) directory on volume
docker run --rm -it -v my-volume:/data busybox ls -la /data/my-dir
# check volume content
docker run --rm -it -v my-volume:/data busybox cat /data/my-dir/my-file-2
If you don't want to create a container, and you have privileged access to the Docker host, you can simply do this (on Linux systems):
docker volume create my_named_volume
sudo cp -rp . /var/lib/docker/volumes/my_named_volume/_data/
This also lets you access the data at container runtime, or while the containers are stopped.
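To avoid hard-coding that path, you can look up the volume's mount point first (this likewise requires local access to the Docker host):
MOUNTPOINT=$(docker volume inspect --format '{{ .Mountpoint }}' my_named_volume)
sudo cp -rp . "${MOUNTPOINT}/"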
If you don't want to create a temporary helper container on Windows Docker Desktop (backed by WSL 2), copy the files to the location below:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\my-volume\_data
Here my-volume is the name of your named volume. Browse to the path above from the address bar in File Explorer; \\wsl$ is an internal network share created by WSL on Windows.
Note: it might be better to use the docker API, as mentioned by larsks, but I have not faced any issues on Windows.
Similarly, on Linux, files can be copied to
/var/lib/docker/volumes/my-volume/_data/