I have a Docker container which has some data in, let's say, /opt/files: file A and file B. How can I start that container and access these files on my host machine?
I'm using Docker for Windows (Hyper-V). When I start the container with:
docker run -it -v C:/tmp:/opt/files myImage
I see an empty folder on my Windows machine and inside the container. Any new files I create there are of course reflected on both sides, but how can I access files that are already in the container (e.g. because they're added in the Dockerfile)?
You can't share files from inside the container to the host this way. There are two ways to do it:
Copy the files from the container:
docker cp <containerid>:<file_path_inside_container> localpath
Share a folder other than the one where files will be generated
docker run -it -v C:/tmp:/opt/files_temp myImage
Then get inside the container and copy the files from /opt/files to /opt/files_temp.
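One way to do that copy without opening an interactive shell is docker exec (a sketch, assuming <containerid> is the running container's id or name and the image has a standard cp):
docker exec <containerid> cp -r /opt/files/. /opt/files_temp/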
Once your container is started, you can copy files inside it to your host.
Use docker cp for this (https://docs.docker.com/engine/reference/commandline/cp/).
Example : docker cp CONTAINER:SRC_PATH DEST_PATH|-
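Applied to the paths from the question, that could look like this (assuming <containerid> is the id or name of your container):
docker cp <containerid>:/opt/files C:/tmp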
I was trying to run a simple Docker container with a bind mount, so the application can read and modify a data.json file (from the host machine). I placed data.json in /home/usr/project and ran the container with
docker container run -it -v /home/usr/project:/app container_name main.exe
The project directory contains 3 files; the other 2 files were included in the container build. When I try to run the container, it gives an error about the other 2 files not being found. Placing those files in /home/usr/project on the local host solves the issue. Since I want the container to only look for data.json, is there any way to do this without keeping the other 2 files unnecessarily in the bind mount directory?
You can bind mount individual files in Docker:
docker run -it -v /home/usr/project/data.json:/app/data.json alpine cat /app/data.json
And you can even make them read-only inside the container to avoid unwanted modifications:
docker run -it -v /home/usr/project/data.json:/app/data.json:ro alpine cat /app/data.json
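Applied to the original command, that would look something like this (mounting only data.json; the other two files then come from the image, where they were added at build time):
docker container run -it -v /home/usr/project/data.json:/app/data.json container_name main.exe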
I have created a mount on my container which maps a physical path on the server to a path within the Docker container. However, when files are placed within the container's path, those files are not appearing on the server's path (and vice versa).
Here is my docker run cmd:
docker run -d -p 127.0.0.1:7001:5000 --name myContainer myContainer -v /var/www/Images:/app/wwwroot/
The server is running CentOS. My application that runs within this Docker container places files in the /app/wwwroot folder within its container. I expected these files to also appear on the server's /var/www/Images folder but they do not.
Any ideas why?
Thanks
I expected these files to also appear on the server's /var/www/Images folder but they do not.
When you bind mount a directory, the container path /app/wwwroot is overridden (hidden) by the host directory's contents; the -v option essentially tells Docker to override whatever is at that path inside the container with the host files.
When you use a bind mount, a file or directory on the host machine is mounted into a container. The file or directory is referenced by its full or relative path on the host machine.
bind-mounts
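You can see this effect directly: with an empty host directory mounted over /app/wwwroot, listing that path inside the container shows nothing, even though the image has files there (a sketch, assuming myContainer is the image name from the question's command and that the image lets you override the command):
docker run --rm -v /var/www/Images:/app/wwwroot myContainer ls /app/wwwroot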
Or, if you want to copy files out of the container, one way is to start the container:
docker run -it --rm --name test my_container
then copy the files from the container (note that the container was named test above):
docker cp test:/app/wwwroot/ /var/www/Images
Now you have the container's files under /var/www/Images on the host.
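Once /var/www/Images has been seeded with those files, you can start the container again with the bind mount in place (a sketch reusing the names and port mapping from the question; note that run options like -v must come before the image name):
docker run -d -p 127.0.0.1:7001:5000 --name myContainer -v /var/www/Images:/app/wwwroot myContainer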
I am new to Docker volumes, and my use case is the following:
I have two different containers running on the same host, and both need to read/write files on it. It is my understanding that I should use Docker volumes, but before I try that, I want to make sure that I can delete files on the host filesystem from inside the containers (e.g. using a Go app).
Yes, you should use Docker volumes (bind mounts). They share a directory between the host and containers. For example, if you want to read/write files in /mnt, you can mount /mnt into the container:
docker run -it -v /mnt:/mnt ubuntu:latest touch /mnt/hello.log
Now /mnt/hello.log has been created, and you can edit the file /mnt/hello.log from your host filesystem.
Then,
docker run -it -v /mnt:/mnt ubuntu:latest rm /mnt/hello.log
After the command above, the file /mnt/hello.log is deleted from inside the container, and it disappears from the host filesystem as well.
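Because both containers would mount the same host directory, any other container that mounts /mnt sees the same change, which covers the two-container part of the question (a quick check):
docker run -it -v /mnt:/mnt ubuntu:latest ls /mnt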
And you can do the same deletion from a Go app, like this:
os.Remove("/mnt/hello.log")
The use case is that I want to download an image that contains Python code files. Assume the image does not have any text editor installed. So I want to mount a directory on the host, so that files in the container show up in this host mount and I can use the different editors installed on my host to update the code. Saved changes are to be reflected in the image.
If I run the following:
docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
then /host/empty/dir is still empty, and browsing the container dir also shows it as empty. What I want is for the contents of /container/folder/with/code/files to show up in /host/empty/dir.
Sébastien Helbert's answer is correct. But there is still a way to do this in two steps.
First run the container to extract the files:
docker run --rm -it myimage
In another terminal, type this command to copy what you want from the container.
docker cp <container_id>:/container/folder/with/code/files /host/empty/dir
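If you need the container id for the command above, you can look it up while the container is running:
docker ps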
Now stop the container. It will be deleted (--rm) when stopped.
Now if you run your original command, it will work as expected.
docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
There is another way to access the files from within the container without copying them, but it's very cumbersome.
Your /host/empty/dir is always empty because the volume binding replaces (overrides) the container folder with your empty host folder. But you cannot do the opposite, that is, have a container folder replace your host folder.
However, there is a workaround: manually copy the files from your container folder to your host folder before using them, as you have suggested.
For example:
run your Docker image with a volume mapping between your host folder and a temp folder: docker run -v /host/empty/dir:/some-temp-folder -it myimage
copy the contents of /container/folder/with/code/files into /some-temp-folder to fill your host folder with your container folder's files (see the sketch below)
run your container with a volume mapping on /host/empty/dir, which is now no longer empty: docker run -v /host/empty/dir:/container/folder/with/code/files -it myimage
Note that steps 1 & 2 may be replaced by: Copying files from Docker container to host
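For step 2, one way to do that copy without opening a shell in the container is docker exec (a sketch, assuming <container_id> is the container started in step 1 and the image has a standard cp):
docker exec <container_id> cp -r /container/folder/with/code/files/. /some-temp-folder/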
I have mounted my USB devices to a docker container using docker run --privileged -v /dev/bus/usb:/dev/bus/usb -d ubuntu
Within the container, I would like to delete a few files from /dev/bus/usb/.
This results in the deletion of files from the host as well, which is not what I want
I would like to delete files from the container, but continue to have them in the host
Is there any way that I can achieve this ?
This is because you are using a shared volume, so when you delete files the action takes effect both in your container and on the host.
Maybe you can write a little Dockerfile to create an image with a copy of your USB files, and not share the volume into the container.
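Note that COPY can only read from the build context (the directory you pass to docker build), not from arbitrary absolute host paths, so you would first copy the files next to the Dockerfile (a rough sketch; keep in mind that /dev/bus/usb contains device nodes rather than ordinary files, so this only illustrates the approach):
cp -r /dev/bus/usb ./usb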
FROM ubuntu
COPY usb /path/for/your/copy
After that you can build your image:
docker build -t imagename .
And finally launch it:
docker run -d imagename
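To check that the files actually ended up in the image, you can list the target path (reusing the path from the Dockerfile above):
docker run --rm imagename ls /path/for/your/copy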