Docker Container Mount Folder - docker

I am trying to mount a folder from my VM into a container using the command below:
sudo docker run -d -it --name devtest \
-v /home/minhaj/GOQTINDOOR:/home/user:Z therecipe/qt:linux bash
But I do not see any files in /home/user inside my container. Please advise what is wrong with my command, or whether I need to run additional commands to mount the folder in the container.

Your issue is that you are running the container in detached mode. Remove -d:
sudo docker run -it --name devtest -v /home/minhaj/GOQTINDOOR:/home/user therecipe/qt:linux bash
After this, if you compile something inside the container and copy it into the /home/user folder, it will automatically be available inside /home/minhaj/GOQTINDOOR. You can copy and delete any file inside /home/minhaj/GOQTINDOOR, but you can't delete the /home/minhaj/GOQTINDOOR folder itself, as it is the mount point.
Any files or folders inside /home/minhaj/GOQTINDOOR can be deleted from inside the container by deleting them from the /home/user folder.
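As a quick sanity check (the file name here is just an example), create a file inside the container and look for it on the host:
touch /home/user/hello.txt            # run inside the container
ls /home/minhaj/GOQTINDOOR/hello.txt  # run on the host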
The docker cp command is only required if you want to copy a file that is not under any mounted path.
For that you can use:
docker cp <containerid>:<pathinsidecontainer> <pathonhost>
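For example, assuming the devtest container from above and a hypothetical file /tmp/output.log inside it:
docker cp devtest:/tmp/output.log /home/minhaj/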

Related

In Docker while binding host directory with container directory I am facing a problem

I am trying to bind mount a directory from a Docker container to my host directory /home. The container directory I am trying to sync is /test, and it contains a file called new.txt.
My Dockerfile is in /home/sampledocker1 directory. Its contents are as follows:
FROM ubuntu:18.04
RUN ["/bin/bash", "-c", "mkdir test"]
COPY new.txt test
Here, the local file new.txt is available in the current path.
I executed the commands below: first I built the Docker image, then I started the container:
docker build -t sample1:latest . # image is created properly
docker run -t -d -v /home:/test sample1:latest /bin/bash
After creating the container with the mount option, I expected the file new.txt in the container's /test folder to appear in my /home directory, but it did not.
The bind mount is not working the way I expect.
By using the -v option you actually override the directory that already exists in the image.
If you run:
docker run -ti sample1:latest /bin/bash
You will find the /test/new.txt file, because it is added to an image layer by the COPY command in the Dockerfile.
If you run:
docker run -ti -v /home:/test sample1:latest /bin/bash
You will find the contents of your computer's /home directory in /test of the container, because -v (the mounted volume) hides the original image layer created with the COPY command in the Dockerfile.
THE SUGGESTION: Remove both the COPY and mkdir commands from your Dockerfile:
FROM ubuntu:18.04
# Nothing at all
And mount your current directory with your docker run command:
docker run -ti -v $(pwd):/test sample1:latest /bin/bash
Since your Dockerfile is now empty, the equivalent command is just running the ubuntu:18.04 image:
docker run -ti -v $(pwd):/test ubuntu:18.04 /bin/bash
P.S. I changed -d (detached) to -i (interactive) in the example to make sure that you enter the container as soon as you run the docker run command.
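As a quick check (run from the directory that contains new.txt), the host file should be visible inside the container:
docker run -ti -v $(pwd):/test ubuntu:18.04 ls /test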

`docker run -v`: Copy all files from container to host?

I have the following command:
docker run -it -v ~/Desktop:/var/task mylambda bash
From my understanding, this command will mount a volume so that all files inside /var/task within my container are copied to ~/Desktop. But that's not the case. Am I misunderstanding the command? How else do I get /var/task/lambdatest.zip onto my local machine?
It works the other way around.
The command you have mounts ~/Desktop (usually the command requires an absolute path) into the container, so that the container's /var/task directory shows the contents of your desktop. As a consequence, ~/Desktop is mounted over any content that already exists in the container's /var/task directory, so /var/task/lambdatest.zip is not accessible inside the container.
You want to use the docker cp command:
https://docs.docker.com/engine/reference/commandline/cp/
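For example (the container name mylambda-run is hypothetical; docker cp also works against a created or stopped container):
docker create --name mylambda-run mylambda
docker cp mylambda-run:/var/task/lambdatest.zip ~/Desktop/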
You are using a bind mount. This is actually how bind mounts behave. Your goal can be achieved with named volumes.
docker run -it -v a_docker_managed_volume:/var/task mylambda bash
Have a look at the reference https://docs.docker.com/storage/volumes/
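Assuming the volume name above: when an empty named volume is mounted over a directory that already contains files in the image, Docker copies those files into the volume, and you can then find where the volume lives on the host with:
docker volume inspect a_docker_managed_volume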

How to update and sync Docker container files using a volume

I'm trying to use a volume so I can edit the project files with Visual Studio Code in a folder on my desktop and have them sync with a Docker container. I'm not sure if I'm doing it correctly, because my changes aren't showing up in the container, even when I manually restart it. Are there any additional steps needed, or did I reference the "www" folders wrong?
The Docker container has an Ubuntu project with files in the /var/www/ directory.
docker run -it -v /Users/.../Desktop/docker/test2/bh_files:/www -v /www/ -p 8080:8080 k/bh:latest
You are linking your project folder with the /www/ folder inside your container, NOT /var/www/. Simply update the path and it should work.
Edit: Change your container volume path as follows: docker run -it -v /Users/.../Desktop/docker/test2/bh_files:/var/www -p 8080:8080 k/bh:latest
I am not really sure that you need the second volume -v /www/. This serves no purpose without a host folder.
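As a quick check once the container is running with the corrected path (use the container name shown by docker ps), the host files should be visible at the mount target:
docker exec <container_name> ls /var/www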

Does Docker update contents of volume when mounted if changes are made in Dockerfile?

I have Jenkins running in a Docker container. The home directory is on a host volume, to ensure that the build history is preserved when the container is updated.
I have updated the image so that it creates an additional file in the home directory. When the new image is pulled and run, I cannot see the new file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml is not created. Why is this, please?
Basically when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
you overlay /container-dest-dir with whatever is in /host-src-dir.
From the docs:
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
This SO question is also relevant: docker mounting volumes on host.
It seems you want it the other way around (i.e. the container is source and the host is destination).
Here is a workaround:
Create the volume in your Dockerfile
Run it without -v i.e.: docker run --name=my_container my_image
Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
This means that the directory, as it exists in the container, was mounted into the host directory /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data.
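For example, you can list that directory directly on the host, using the Source path from the inspect output above (root access is usually required):
sudo ls /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data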
Unfortunately, I do not know a way to make it mount on a specific host directory instead.

docker cp the content of a folder

I am trying to copy the contents of a folder (on my server) into my container:
docker cp sonatype-work-backup/* nexus:/sonatype-work/
So I want the contents of sonatype-work-backup inside /sonatype-work/ of the nexus container. But it doesn't work with the *, and without the star it copies the sonatype-work-backup directory itself into my sonatype-work directory. I can't perform an mv afterwards.
You could just mount that directory in your container at run time:
docker run -v /sonatype-work-backup:/mnt --name nexus nexus-image
Then:
docker exec -it nexus bash
and just cp from /mnt to your desired folder.
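For example, from inside the container (paths as in the question; the -a flag preserves ownership and permissions, and /mnt/. copies the directory's contents rather than the directory itself):
cp -a /mnt/. /sonatype-work/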
