Create a volume in docker from windows host - docker

I have the following folder on my windows host
C:\Tmp\TmpVolume
"TmpVolume" has a number of files.
This is where I will be putting my source code for development purposes on the host machine.
Now I want to run the container and mount this folder onto the container. This is the command I execute:
docker run -p 49160:3000 -v C:/Tmp/TmpVolume/:/usr/src/app/TmpVolume -d containerName
My problem is that when I move into the directory /usr/src/app in the container there is a TmpVolume folder, but it is empty; there is nothing inside it. What am I doing wrong here?
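No answer is included here, but one thing worth checking (my assumption, not from the original post) is the Windows path syntax and whether the C: drive is shared with Docker. Depending on the Docker client, the host path may need to be written differently, for example:
# Docker for Windows: forward-slash drive path, with the C: drive shared in the Docker settings
docker run -p 49160:3000 -v C:/Tmp/TmpVolume:/usr/src/app/TmpVolume -d containerName
# Docker Toolbox / Git Bash style path
docker run -p 49160:3000 -v //c/Tmp/TmpVolume:/usr/src/app/TmpVolume -d containerName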

Related

Docker Save Logs To Host's Directory

I am writing to a text file within my Docker container; the path inside the container is /app/data/text.txt.
When I run my app, it writes to this file just fine; however, I want the file written to my HOST system, not within the container, so I tried the below:
docker run -v /home/pi/mmm:/app/data -d smartazanmobilebackgroundservice
and still I can't see any text.txt file in my /home/pi/data folder.
My working dir for my Docker app is ...
WORKDIR /app
.NET code to get the directory is:
string logPath = Path.Combine("data");
docker run -v /home/pi/mmm:/app/data -d smartazanmobilebackgroundservice
I was using -v after the image name; putting it before the image name (as shown above) fixed it.
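For context (this note is mine, not the original poster's): in docker run, anything placed after the image name is passed to the container as its command arguments, so a trailing -v is never interpreted as a mount. A minimal before/after sketch:
# wrong: -v appears after the image name, so it is handed to the container, not to docker run
docker run -d smartazanmobilebackgroundservice -v /home/pi/mmm:/app/data
# right: all docker run options come before the image name
docker run -v /home/pi/mmm:/app/data -d smartazanmobilebackgroundservice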

Why docker run can't find file which was copied during build

Dockerfile
FROM centos
RUN mkdir /test
# sample.sh is guaranteed to exist in the same directory as the Dockerfile when the build is run
COPY ./sample.sh /test
CMD ["sh", "/test/sample.sh"]
Docker run cmd:
docker run -d -p 8081:8080 --name Test -v /home/Docker/Container_File_System:/test test:v1
Log output :
sh: /test/sample.sh: No such file or directory
There are 2 problems here:
1. The output says sh: /test/sample.sh: No such file or directory.
2. Since I have mapped a host folder to a container folder, I was expecting the test folder and sample.sh to be available at /home/Docker/Container_File_System after the run, which did not happen.
Any help is appreciated.
When you map a folder from the host to the container, the host files become available in the container. This means that if your host has file a.txt and the container has b.txt, when you run the container the file a.txt becomes available in the container and the file b.txt is no longer visible or accessible.
Additionally, file b.txt is not available on the host at any time.
In your case, since your host does not have sample.sh, the moment you mount the directory, sample.sh is no longer available in the container (which causes the error).
What you want to do is copy the sample.sh file to the correct directory in the host and then start the container.
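For example, using the paths from the question (a sketch of the suggestion above, not the original answer's code):
cp sample.sh /home/Docker/Container_File_System/
docker run -d -p 8081:8080 --name Test -v /home/Docker/Container_File_System:/test test:v1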
The problem was in the volume mapping. If I create a named volume and map it, it works fine, but directly mapping a host folder to the container folder does not work.
The below worked fine:
docker volume create my-vol
docker run -d -p 8081:8080 --name Test -v my-vol:/test test:v1
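This works because, when an empty named volume is mounted for the first time, Docker copies the image's existing content at that path (here /test, including sample.sh) into the volume, whereas a bind mount simply hides it. A quick way to confirm (my addition, not part of the original answer):
docker run --rm -v my-vol:/test test:v1 ls /test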

Link a docker container folder to a host folder

I am new to Docker and I am trying to do the following: I would like to have a folder on my host machine which is synced with a folder in the Docker container. I need this because I would like to edit some files in the container folder with the usual software tools I use on my host machine (e.g., Sublime Text, VS Code). Then, once I am done editing the files on my host computer, I will compile them in the Docker container and test them directly there.
My workflow is the following:
In the Dockerfile I clone a git repository, let's call it repo1; it will then be in the Docker container at /root/repo1.
I build the image (and I remove the old ones, but that is not important for this question).
# Run docker, setup and keep running
echo Running docker, setting it up and keeping it running ...
docker run -dt \
--privileged \
-v /path_to_existing_folder_on_host_machine:/root/repo1 \
-e DISPLAY=:0 \
-p 14556:14556/udp \
--name name_container_1 \
name_container_1
echo ... Finished setting up docker and kept it running in the background
The folders are synced: if I create a file on the host machine, I can see it from the Docker container. However, I end up with a folder that is empty on both the host and the container.
EDIT: I understood that what I was doing is wrong, since mounting a volume from the host machine effectively "overrides" files that exist in the container. Therefore, I think I have to find another solution.
Maybe you want to mount your host folder like this:
docker run -v <host-file-system-directory>:<docker-file-system-directory>
Refer to: Access a bash script variable outside the docker container in which the script is running.
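One common way to keep both sides populated (my suggestion, not part of the original answers) is to seed the host folder from the image first and then bind-mount it, for example:
# the container name repo1_seed is arbitrary; paths are the ones from the question
docker create --name repo1_seed name_container_1
docker cp repo1_seed:/root/repo1/. /path_to_existing_folder_on_host_machine
docker rm repo1_seed
# now both the host folder and /root/repo1 in the container show the repo files
docker run -dt -v /path_to_existing_folder_on_host_machine:/root/repo1 --name name_container_1 name_container_1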

docker mount volume dir ubuntu

I'm trying to use docker to do this:
Run Docker image, make sure you mount your User (for MAC) or home (for
Ubuntu) directory as a volume so you can access your local files
The code that I've been given is:
docker run -v /Users/:/host -p 5000:5000 -t -i bjoffe/openface_flask_v2 /bin/bash
I know that the part that I should modify to my local files is -v /Users/:/host, but I am unsure how to do so.
The files I want to load in the container are inside home/user/folder-i-want-to-read
How should this code be written?
A bind mount is just a mapping of host files or directories into container files or directories; both basically point to the same physical location on disk.
In your case, you could try this command:
docker container run -it -p 5000:5000 -v /home/user/folder-i-want-to-read/:/path_in_container bjoffe/openface_flask_v2 /bin/bash
And, once it is running, verify that the contents of the host path /home/user/folder-i-want-to-read are visible at the container path you have mapped.
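For instance, from the bash prompt that command opens inside the container (with /path_in_container being whatever directory you chose above):
ls /path_in_container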

Does Docker update contents of volume when mounted if changes are made in Dockerfile?

I have Jenkins running in a Docker container. The home directory is in a host volume, in order to ensure that the build history is preserved when updates to the container are actioned.
I have updated the image so that it creates an additional file in the home directory. When the new image is pulled and a container is run from it, I cannot see the new file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml is not created. Why is this, please?
Basically when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
you overlay your /container-dest-dir with what is in /host-src-dir.
From the docs:
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
This SO question is also relevant: docker mounting volumes on host.
It seems you want it the other way around (i.e. the container is source and the host is destination).
Here is a workaround:
1. Create the volume in your Dockerfile.
2. Run the container without -v, i.e.: docker run --name=my_container my_image
3. Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
This means the directory as it exists in the container was mounted into the host directory /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data.
Unfortunately, I do not know a way to make it mount on a specific host directory instead.
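One pragmatic option (my suggestion, not from the original answer) is to put the new file on the host side of the bind mount yourself, since both sides see the same files:
# copy the new settings.xml into the host directory that is bind-mounted to /var/jenkins_home
mkdir -p /host/directory/.m2
cp settings.xml /host/directory/.m2/settings.xml
# match the ownership used by the jenkins user (uid/gid 1000 in the official image)
chown -R 1000:1000 /host/directory/.m2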
