Docker - mounted directory is empty inside container

I am attempting to mount a non-empty directory into a container, however, the directory inside the container is empty. The directory does not already exist in the container. My command is:
docker run --volume /tmp:/test busybox:latest ls -l /test
I have also tried this using the --mount flag instead, but with no success. Why is the /test directory inside my container empty? It should contain the contents of /tmp from the host.

Finally figured things out, in part prompted by @BMitch's comment. I was running Docker in a multi-node environment, so Docker was creating the new container on a separate node, where the host path held a different (empty) directory. I solved the problem by copying the file into the new container via docker cp.
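For reference, docker cp copies files between the host and a container; the container name and paths below are illustrative, assuming a running container named web:

```shell
# Copy a host file into a running container (here named "web"),
# and copy a file back out of the container to the host.
docker cp /tmp/data.txt web:/test/data.txt
docker cp web:/test/results.txt ./results.txt
```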

Related

Why docker run can't find file which was copied during build

Dockerfile
FROM centos
RUN mkdir /test
# it is ensured that sample.sh exists alongside the Dockerfile when the build is run
COPY ./sample.sh /test
CMD ["sh", "/test/sample.sh"]
Docker run cmd:
docker run -d -p 8081:8080 --name Test -v /home/Docker/Container_File_System:/test test:v1
Log output:
sh: /test/sample.sh: No such file or directory
There are two problems here:
The output says sh: /test/sample.sh: No such file or directory.
Since I mapped a host folder to a container folder, I expected the test folder and sample.sh to be available at /home/Docker/Container_File_System after the run, which did not happen.
Any help is appreciated.
When you map a folder from the host to the container, the host files become available in the container. This means that if your host has file a.txt and the container has b.txt, when you run the container the file a.txt becomes available in the container and the file b.txt is no longer visible or accessible.
Additionally, file b.txt is not available on the host at any time.
In your case, since your host does not have sample.sh, the moment you mount the directory, sample.sh is no longer available in the container (which causes the error).
What you want to do is copy the sample.sh file to the correct directory on the host and then start the container.
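With the paths from the question, that fix can be sketched as:

```shell
# Copy the script into the host directory that will be bind-mounted,
# so the mount that hides the image's /test still contains sample.sh.
cp ./sample.sh /home/Docker/Container_File_System/
docker run -d -p 8081:8080 --name Test \
    -v /home/Docker/Container_File_System:/test test:v1
```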
The problem is in the volume mapping. If I create a named volume and map that instead, it works fine; directly mapping the host folder to the container folder does not.
The following worked fine:
docker volume create my-vol
docker run -d -p 8081:8080 --name Test -v my-vol:/test test:v1
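This works because when an empty named volume is mounted, Docker first copies the image's existing content at the mount point into the volume; bind-mounted host directories get no such initialization. A quick way to confirm, using the image from the question:

```shell
# The named volume is populated from the image's /test on first use,
# so sample.sh should be listed here.
docker volume create my-vol
docker run --rm -v my-vol:/test test:v1 ls -l /test
```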

Docker : Dynamically created file copy to local machine

I am new to Docker. I'm dynamically creating a file inside a Docker container and want it copied to the local machine at the same time. Please let me know how this is possible through volumes.
For now, I have to use the below command again and again to check the file data :
docker cp source destination
How can this be done through volumes? The files will be in .csv or .xlsx format. What command should I put in my Docker files so that the file gets copied?
What you need is a volume, specifically a bind mount. You have to add your current directory as a volume to the Docker container when you first create the container, so that the host directory and the container directory are the same folder. By doing this, the files in that folder stay in sync automatically. I'm assuming you're using Docker for a development environment.
This is how I run my container.
docker run -d -it --name {container_name} --volume $PWD:{directory_in_container} --entrypoint /bin/bash {image_name}
In your case, that means adding --volume $PWD:{directory_in_container} to your own run command.
If you have a problem again, just add more detail to your question.
Things you can add might be your Dockerfile, and how you first run your container.
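A minimal sketch of this setup, assuming the container writes its file to /data:

```shell
# Bind-mount the current host directory as /data; a CSV written by the
# container then appears on the host immediately, no docker cp needed.
docker run --rm --volume "$PWD":/data busybox:latest \
    sh -c 'echo "col1,col2" > /data/report.csv'
cat report.csv
```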

`docker run -v`: Copy all files from container to host?

I have the following command:
docker run -it -v ~/Desktop:/var/task mylambda bash
From my understanding, this command here will mount a volume so all files inside /var/task within my container will be copied to ~/Desktop. But that's not the case. Do I misunderstand that command? How do I otherwise get /var/task/lambdatest.zip to my localhost?
It works the other way around.
The command you have mounts ~/Desktop (the shell expands ~ to an absolute path, which bind mounts require) into the container, so that the container's /var/task directory shows the contents of your desktop. As a consequence, ~/Desktop is mounted over any content that already exists within the container's /var/task directory, and so /var/task/lambdatest.zip is not accessible to the container.
You want to use docker cp command:
https://docs.docker.com/engine/reference/commandline/cp/
You are using a bind mount. This is actually how bind mounts behave. Your goal can be achieved with named volumes.
docker run -it -v a_docker_managed_volume:/var/task mylambda bash
Have a look at the reference https://docs.docker.com/storage/volumes/
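If the goal is simply to get /var/task/lambdatest.zip onto the host, docker cp also works without any mounts; a sketch using the image from the question:

```shell
# Create (but don't start) a container from the image, copy the file
# out to the host, then remove the temporary container.
docker create --name tmp-lambda mylambda
docker cp tmp-lambda:/var/task/lambdatest.zip ~/Desktop/
docker rm tmp-lambda
```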

Does Docker update contents of volume when mounted if changes are made in Dockerfile?

I have Jenkins running in a Docker container. The home directory is in a host volume, in order to ensure that the build history is preserved when updates to the container are actioned.
I have updated the image so that it creates an additional file in the home directory. When the new image is pulled and a container started from it, I cannot see the changed file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml is not created. Why is this, please?
Basically when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
You will overlay your /container-dest-dir with what is in /host-src-dir.
From the docs:
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory /src/webapp into the container at /webapp. If the path /webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
This SO question is also relevant docker mounting volumes on host
It seems you want it the other way around (i.e. the container is source and the host is destination).
Here is a workaround:
Create the volume in your Dockerfile
Run it without -v i.e.: docker run --name=my_container my_image
Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
This means your directory, as it exists in the container, was mounted into the host directory /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data
Unfortunately, I do not know a way to make it mount on a specific host directory instead.
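The workaround above can be sketched as a short script (container and image names are placeholders):

```shell
# Run without -v so the VOLUME declared in the Dockerfile becomes an
# anonymous volume, then ask Docker where that volume lives on the host.
docker run -d --name my_container my_image
host_dir=$(docker inspect --format='{{ (index .Mounts 0).Source }}' my_container)
echo "Volume data is at: $host_dir"
ls "$host_dir"
```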

How to share data between the docker container and the host?

I tried to share data between the Docker container and the host, for example by adding the parameter -v /Users/name/Desktop/Tutorials:/cntk/Tutorials to the docker run command, but I noticed that it also deletes all the files in the container under /cntk/Tutorials.
My question is how to make the same link, but instead have all the files in /cntk/Tutorials copied to the host (at /Users/name/Desktop/Tutorials).
Thank you
Unfortunately, that is not possible; take a look here. That is because this is how mounting works in Linux.
It is not correct to say that the files were deleted. They are still present in the underlying image, but the act of mounting another directory at the same path has obscured them. They exist, but are not accessible in this condition.
One way you can accomplish this is by mounting a volume into your container at a different path, and then copying the container's files to that path. Something like this.
Mount a host volume using a different path than the one the container already has for the files you are interested in.
docker run -v /Users/name/Desktop/Tutorials:/cntk/Tutorials2 [...]
Now, execute a command in the container that copies the files already in the Docker image into the mounted volume:
docker exec <container-id> cp -r /cntk/Tutorials /cntk/Tutorials2
The docker cp command allows you to copy files/folders on demand between host and the container:
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
docker cp [OPTIONS] SRC_PATH|- CONTAINER:DEST_PATH
docker cp ContainerName:/home/data.txt . <== copy from container to host
docker cp ./test.txt ContainerName:/test.txt <== copy from host to container
docker cp ContainerName:/test.txt ./test2.txt <== copy from container to host
For details run docker cp --help
