I tried to share data between the Docker container and the host, for example by adding the parameter -v /Users/name/Desktop/Tutorials:/cntk/Tutorials to the docker run command, but I noticed that it also deletes all the files in the container under /cntk/Tutorials.
My question is: how can I create the same mount, but instead have all the files in /cntk/Tutorials copied to the host (at /Users/name/Desktop/Tutorials)?
Thank you
Unfortunately that is not possible; take a look here. That is because of how mounting works in Linux.
It is not correct to say that the files were deleted. They are still present in the underlying image, but the act of mounting another directory at the same path has obscured them. They exist, but are not accessible while the mount is in place.
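A quick way to convince yourself of this (the image name microsoft/cntk is an assumption here; substitute whichever image you are using):
# without a mount, the image's own files are visible
docker run --rm microsoft/cntk ls /cntk/Tutorials
# with a host directory mounted over the same path, only the host
# directory's contents appear; the image's files are hidden, not deleted
docker run --rm -v /Users/name/Desktop/Tutorials:/cntk/Tutorials microsoft/cntk ls /cntk/Tutorials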
One way you can accomplish this is by mounting a volume into your container at a different path and then copying the container's files to that path. Something like this:
Mount a host volume using a different path than the one the container already has for the files you are interested in.
docker run -v /Users/name/Desktop/Tutorials:/cntk/Tutorials2 [...]
Now, execute a command that will copy the files already in the docker image, into the mounted volume from the outside host.
docker exec <container-id> cp -r /cntk/Tutorials/. /cntk/Tutorials2
(The trailing /. copies the directory's contents; without it you would end up with a nested /cntk/Tutorials2/Tutorials folder.)
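Once the copy finishes, the files should appear on the host side of the mount (host path taken from the question):
ls /Users/name/Desktop/Tutorials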
The docker cp command allows you to copy files/folders on demand between the host and a container:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
docker cp ContainerName:/home/data.txt . <== copy from container to host
docker cp ./test.txt ContainerName:/test.txt <== copy from host to container
docker cp ContainerName:/test.txt ./test2.txt <== copy from container to host
For details run docker cp --help
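Note that docker cp also works on containers that are not running, which is handy for pulling a file out of an image without starting it (the names myimage, temp, and /data/output.txt are hypothetical):
docker create --name temp myimage   # create a container without starting it
docker cp temp:/data/output.txt .   # copy a file out of the stopped container
docker rm temp                      # clean up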
Related
I am attempting to mount a non-empty directory into a container, however, the directory inside the container is empty. The directory does not already exist in the container. My command is:
docker run --volume /tmp:/test busybox:latest ls -l /test
I have also tried this using the --mount flag instead, but with no success. Why is the /test directory inside my container empty? It should contain the contents of /tmp from the host.
Finally figured things out, in part prompted by @BMitch's comment. I was running Docker in a multi-node environment, so Docker was creating the new container on a separate node, and the bind mount was picking up /tmp on that node rather than on my machine. I solved the problem by copying the file into the new container via docker cp.
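For reference, the docker cp workaround looks like this (the file name and container id are placeholders):
docker cp ./myfile.txt <container-id>:/test/myfile.txt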
I have the following command:
docker run -it -v ~/Desktop:/var/task mylambda bash
From my understanding, this command will mount a volume so that all files inside /var/task within my container are copied to ~/Desktop. But that's not the case. Am I misunderstanding the command? How else can I get /var/task/lambdatest.zip onto my local machine?
It works the other way around.
The command you have mounts ~/Desktop (note that the command usually requires an absolute path) into the container, so that the container's /var/task directory shows the contents of your desktop. This mounts ~/Desktop over any content that already exists within the container's /var/task directory, so /var/task/lambdatest.zip is not accessible to the container.
You want to use the docker cp command:
https://docs.docker.com/engine/reference/commandline/cp/
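For example, to pull the zip from the question out of the container onto your desktop (mylambda_container is a placeholder; use docker ps to find the real name):
docker cp mylambda_container:/var/task/lambdatest.zip ~/Desktop/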
You are using bind mounts. This is actually their behaviour. Your goal can be achieved with volumes.
docker run -it -v a_docker_managed_volume:/var/task mylambda bash
Have a look at the reference https://docs.docker.com/storage/volumes/
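A named volume that is empty on first use is initialised with the image's content at the mount point, so /var/task/lambdatest.zip ends up inside it. You can then copy the file out via a throwaway container, for example:
docker run --rm -v a_docker_managed_volume:/source -v "$PWD":/target busybox cp /source/lambdatest.zip /target/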
What is the use of "VOLUME" or "RUN mkdir /m"?
Even if I do not specify either of these instructions in the Dockerfile, "docker run -v ${PWD}/m:/m" still works.
Inside a Dockerfile, VOLUME marks a directory as a mount point for an external volume. Even if the docker run command doesn't mount an existing folder into that mount point, docker will create an anonymous volume (with a generated name) to hold the data.
RUN mkdir /m does what mkdir does on any Unix system. It makes a directory named m at the root of the filesystem.
docker run -v ... binds a host directory to a path inside a container. It will work whether or not the mount point was declared as a volume in a Dockerfile, and it will also create the directory if it doesn't exist. So neither VOLUME nor RUN mkdir is strictly necessary before using that command, though they may be helpful to communicate the intent to the user.
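A minimal Dockerfile sketch of the two instructions, purely for illustration:
FROM busybox
RUN mkdir /m    # creates the directory in the image, like on any Unix system
VOLUME ["/m"]   # marks /m as a mount point; if nothing is mounted there at
                # run time, docker creates an anonymous volume to hold its data
docker run -v ${PWD}/m:/m behaves the same with or without these two lines.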
I have Jenkins running in a Docker container. The home directory is in a host volume, in order to ensure that the build history is preserved when the container is updated.
I have updated the image to create an additional file in the home directory. When the new image is pulled and a container is started from it, I cannot see the new file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml is not created. Why is this, please?
Basically when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
You will overlay your /container-dest-dir with what is in /host-src-dir
From the docs:
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
This SO question is also relevant docker mounting volumes on host
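To see the overlay behaviour in isolation, here is a minimal sketch (the image tag overlay-demo and the file contents are invented for the demonstration):
printf 'FROM busybox\nRUN mkdir /vol && echo hello > /vol/file.txt\n' > Dockerfile
docker build -t overlay-demo .
docker run --rm overlay-demo cat /vol/file.txt            # prints "hello"
mkdir -p /tmp/empty
docker run --rm -v /tmp/empty:/vol overlay-demo ls /vol   # empty: the image's file is hidden
The same thing happens to settings.xml above: it is baked into the image under /var/jenkins_home, and the host directory mounted at that path hides it.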
It seems you want it the other way around (i.e. the container is source and the host is destination).
Here is a workaround:
Create the volume in your Dockerfile
Run it without -v, i.e. docker run --name=my_container my_image
Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
This means the directory's contents, as they exist in the container, are stored on the host under /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data
Unfortunately, I do not know a way to make it mount on a specific host directory instead.
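With root access you can still copy the data out of docker's internal directory (the path comes from the inspect output above; ~/jenkins_home_copy is just an example target):
sudo cp -r /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data ~/jenkins_home_copy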
I am wondering if I can map a volume in docker to another folder on my Linux host. The reason I want to do this is that, if I don't misunderstand, the default mapping folder is under /var/lib/docker/... and I don't have access to that folder. So I am thinking about changing it to a folder I do have access to (for example /tmp/) when I create the image. I can modify the Dockerfile if this can be done before creating the image, or must it be done after creating the image or after creating the container?
I found this article, which helped me use a local directory as the volume in docker:
https://docs.docker.com/engine/userguide/containers/dockervolumes/
Command I use while creating a new container:
docker run -d -P --name randomname -v /tmp/localfolder:/volumepath imageName
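Files written under /volumepath in the container then show up in /tmp/localfolder on the host (names taken from the command above):
docker exec randomname sh -c 'echo hello > /volumepath/test.txt'
cat /tmp/localfolder/test.txt   # prints "hello"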
Docker doesn't have any tools I know of to map named or container volumes back to the host, though they are just subdirectories under /var/lib/docker, so writing your own tool wouldn't be impossible; you'd need root access to run it, though. Note that with access to docker on the host, there are likely a lot of ways to obtain root privileges on the host anyway. Creating a symlink or bind mount to the target folder should be all that's needed (a hard link won't work, since hard links to directories aren't permitted).
The docker way to access the named volume would be to create a disposable container to access your files. You can even create an additional host volume to export the data. E.g.
docker run -it --rm \
  -v test:/source -v `pwd`/data:/target \
  busybox /bin/sh -c "tar -cC /source . | tar -xC /target"
Where "test" is the named volume you want to export/copy. You may need to also run a chown -R $uid /target in a container to change everything to your uid on the host.