Mount directory from docker container to host - docker

I'm trying to mount a directory from a docker container to the host file system
sudo mount 172.17.0.2:/mnt/my_storage /home/user/data/
but the command seems to hang.
I used this command in the past with another instance of the same container image and everything was fine.
Are there any checks I can run to troubleshoot the issue? Is there another way to accomplish this?

Why not add a mount during container creation?
Bind-mount a local folder such as /home/user/data/:
docker run [..] -v /home/user/data/:/mnt/my_storage [..]
Or use a named volume such as app_data:
docker run [..] -v app_data:/mnt/my_storage [..]
Have a look at: https://docs.docker.com/storage/volumes/
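For completeness, a full invocation might look like this (the image name my_image is a placeholder, not something from the question):
# bind mount: anything written to /mnt/my_storage in the container shows up
# directly under /home/user/data/ on the host
docker run -d -v /home/user/data/:/mnt/my_storage my_image
# named volume: Docker manages the storage (typically under /var/lib/docker/volumes/app_data/_data)
docker run -d -v app_data:/mnt/my_storage my_image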

Related

Docker volume bind empty volume or convert files to folders

I'm running a container with the host's Docker socket mounted into it so it can launch a sibling container. From inside that container I try to run another container and mount a volume to access some data; however, in the sibling container the volume is either empty or the file is converted to a folder...
Running the first container:
$ docker run -v /var/run/docker.sock:/var/run/docker.sock -it example /bin/bash
root@3aa35965846a:/home/node/example# ls some_volume/
test.txt
root@3aa35965846a:/home/node/example# cat some_volume/test.txt
hello
Running the second container:
root@3aa35965846a:/home/node/example# docker run -v /home/node/example/some_volume/:/some_volume/ -it node:10 /bin/bash
root@6a84739fbb92:/# ls /some_volume/
test.txt
root@6a84739fbb92:/# cat /some_volume/test.txt/
cat: /some_volume/test.txt/: Is a directory
The first time I run the second container the volume is empty; if I try to mount a file directly, it is converted to a folder; and after that, if I try to mount the folder as in the example above, it only contains the file I tried to mount earlier, now turned into a folder.
How is this possible? If I try to mount a volume outside the first container I don't have any problems. How can I fix this?
The first path in the docker run -v option is always on the host system. For example, if you
docker run -v /etc:/x busybox cat /x/shadow
it will dump out the host's encrypted password file, regardless of whether you ran this command directly from the host or from a container.
There isn't a way to share an arbitrary directory from one container to another. If the launching container knows something about its own directory structure (in particular that some directory was mounted from a specific host path or named volume) then it can replicate that to the other container, but that's not a generic answer. The other behaviors you're seeing are just a consequence of those directories not existing on the host system.
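As a sketch of the "replicate a known host path" idea described above (the /host/data path and the extra -v mapping on the first container are assumptions, not taken from the question):
# start the launching container with a known host directory mounted in
docker run -v /var/run/docker.sock:/var/run/docker.sock \
  -v /host/data:/home/node/example/some_volume -it example /bin/bash
# from inside it, reuse the *host* path (not the container path) for the sibling
docker run -v /host/data:/some_volume -it node:10 /bin/bash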
In general I would advise not using Docker for short-lived processes that principally interact with the outside world through the filesystem. Take whatever program you'd run in the other container, install it in your image's Dockerfile, and run it directly without going through Docker.
If you really can't avoid this workflow, the only thing I've found to work reliably is to docker create the container, docker cp files in, docker start it, and docker wait for it to finish. When it's done, docker cp the result out before docker rm it. That's a kind of painstaking workflow but it gets around the problem of the two containers not sharing any filesystem space.
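A rough sketch of that create/cp/start/wait/cp/rm workflow (the image, command, and paths are placeholders, not taken from the question):
# create the container without starting it
cid=$(docker create node:10 node /work/process.js)
# copy the input files in
docker cp ./input/. "$cid":/work
# run it and wait for it to finish
docker start "$cid"
docker wait "$cid"
# copy the result out, then remove the container
docker cp "$cid":/work/output ./output
docker rm "$cid"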

How to automatically create a file, not a directory, when mounting a host file into a container

I'm writing a CLI that generates a markdown file when it finishes, and I built a docker image for that CLI.
I want to mount the markdown file generated by the container onto the host machine.
docker run -v automatically creates a folder, not a file, when the path does not exist on the host.
For example:
~/result.md does not exist at first.
docker run -it --rm -v ~/result.md:/usr/src/work_dir/result.md cli:latest generate_markdown
After running, ~/result.md is created as a folder, not a file, and the CLI throws an exception because it is writing to a directory rather than a file.
To avoid this, I have to create the file first and then run the docker CLI. That works fine.
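In other words, the workaround described above looks roughly like this:
touch ~/result.md
docker run -it --rm -v ~/result.md:/usr/src/work_dir/result.md cli:latest generate_markdown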
Is it possible to avoid creating the file beforehand?
Try:
$ docker volume create myvol
$ docker run -it --rm -v myvol:/usr/src/work_dir/ cli:latest generate_markdown
Alternatively, you can skip the explicit volume creation, since Docker creates the named volume automatically, and just run:
$ docker run -it --rm -v myvol:/usr/src/work_dir/ cli:latest generate_markdown
Want an explanation?
You are using a bind mount; in your case
docker run -it --rm -v ~/result.md:/usr/src/work_dir/result.md cli:latest generate_markdown
The solution to your problem might just be a volume mount.
For more info refer - https://docs.docker.com/storage/volumes/
First create a docker volume:
$ docker volume create myvol
You can give it any name instead of myvol.
Once the volume is created, you can check that it was created successfully with:
$ docker volume ls
This will list all your volumes; your newly created volume should be among them.
ak@ubuntu:~$ docker volume create myvol
myvol
ak@ubuntu:~$ docker volume ls
DRIVER              VOLUME NAME
local               myvol
Docker volumes are stored in a separate area on the host file system and are completely managed by Docker, as opposed to bind mounts. Docker volumes store state outside of containers, so your data survives when you replace the container to update your app.
Docker volumes are also created automatically if you specify a name instead of a directory path. In the following example a volume named myvol2 will be created automatically:
$ docker run -it -v myvol2:/home/myfiles imagename:tag
Docker volumes are usually stored in /var/lib/docker/volumes on Linux and in C:\ProgramData\docker\volumes on Windows.
Now here's the useful part. Any data, files, or directories that already exist in the specified container directory are automatically copied onto the docker volume when it is first used. Therefore, if the /usr/src/work_dir/ directory mentioned in the above example contains any files (like the markdown file in your case), they are copied onto the volume automatically.
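If you then need the generated file back on the host, one way (a sketch, assuming the CLI writes result.md into /usr/src/work_dir/) is to read it out of the named volume with a throwaway container:
$ docker run -it --rm -v myvol:/usr/src/work_dir/ cli:latest generate_markdown
$ docker run --rm -v myvol:/data alpine cat /data/result.md > ~/result.md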
Hope this helps.
When the host path passed in does not exist, the -v mount assumes it is a directory, not a file. You can either mount your home directory ~ or create a directory first and mount that instead.
mkdir ~/markdown
docker run -it --rm -v ~/markdown/:/usr/src/work_dir/ cli:latest generate_markdown
According to best practices you should avoid mounting single files altogether. That being said, it is not possible to change the default behaviour that you describe.
The solution to your problem (not having to create the file each time before running the container) is to mount the directory, omitting the file name:
docker run -it --rm -v ~/:/usr/src/work_dir/ cli:latest generate_markdown

Docker file mount understanding

I'm using Docker and I get strange behaviour when I try to mount a container file onto a host file.
docker run -v /var/tmp/foo.txt:/var/tmp/foo.txt myapp
The command above runs the myapp container, which creates a foo.txt file in the /var/tmp directory inside the container. Because I need to keep this file on the host after myapp dies, I create a mount point.
My problem is that instead of creating foo.txt as a file on the host, I end up with an empty directory named "foo.txt" (with nothing inside).
But if I create an empty text file foo.txt on the host and run myapp again, it works as expected.
So my question is: do I need to create the file on the host before starting the container when I use a file mount with docker?
I think I missed something. Thank you for your explanations.
In fact, as you discovered, to mount a host file as a data volume the file must exist; otherwise docker will create a directory and mount that instead.
From: https://docs.docker.com/engine/tutorials/dockervolumes/
Mount a host file as a data volume
The -v flag can also be used to mount a single file - instead of just directories - from the host machine.
$ docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash
Note that it never says the file may be missing on the host.
As suggested in the comment, it is better to mount a directory if you want the container to write into it.
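For example (a sketch reusing the paths from the question; the host directory /var/tmp/myapp is a name chosen here for illustration):
mkdir -p /var/tmp/myapp
docker run -v /var/tmp/myapp:/var/tmp myapp
# after myapp exits, the file is available on the host at /var/tmp/myapp/foo.txt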
Regards

Bind-mount a host directory into a volume of a running docker container

Let's say that I start a docker container with a bind-mounted local folder:
docker run --rm -v /ux1/dmtest:/data -it ubuntu
Then, on the host (not inside the container), I bind-mount a directory from another filesystem into /ux1/dmtest:
mkdir /ux1/dmtest/bm
mount --bind /ux0/bm /ux1/dmtest/bm
Now, from the container, I can see /data/bm/ and write content to it, but this content is not visible on the host under /ux0/bm.
Where is this content stored?
And is there any way to mount additional storage into a running docker container (this workaround clearly doesn't work)?
Mounts done after the fact won't be seen by the container due to mount namespaces that Docker uses. The files will be in the /ux1/dmtest directory that was in place before your second bind mount.
If you do want to use a bind mount, put it in place before starting the docker daemon, and your container will then see it.
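A sketch of the ordering described above, using the paths from the question (run on the host before the container starts; as noted above, restart the docker daemon first if it was already running):
mkdir -p /ux1/dmtest/bm
mount --bind /ux0/bm /ux1/dmtest/bm
docker run --rm -v /ux1/dmtest:/data -it ubuntu
# /data/bm inside the container now reflects the contents of /ux0/bm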

Mounting a single file from a Docker data volume in a Docker container

I'm trying to mount a single file from a Docker volume in a container when using "docker run".
I've been able to mount an entire volume as a directory, e.g:
docker run -v my_volume:/root/volume my_container
I've also mounted single files from the physical machine, e.g:
docker run -v /usr/local/bin/docker:/usr/local/bin/docker
Is there a way to do the same with a single file from a Docker volume?
There is a way, as long as the destination path/file doesn't already exist in the container: if you've created a named volume, you can bind-mount a single file from its data directory on the host (similar to the deprecated volumes_from):
docker run -v /var/lib/docker/volumes/my_volume/_data/MY_FILE.txt:/destination_folder/MY_FILE.txt
That works because when you create a named volume and run a service/container with docker run -v my_volume:/root/volume my_container, the data is stored in /var/lib/docker/volumes/my_volume/_data.
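A sketch of the same idea that doesn't hard-code the /var/lib/docker path (docker volume inspect is a standard command; the image name my_container is taken from the question):
# find where the named volume lives on the host
vol_path=$(docker volume inspect -f '{{ .Mountpoint }}' my_volume)
# bind-mount a single file from the volume's data directory
docker run -v "$vol_path"/MY_FILE.txt:/destination_folder/MY_FILE.txt my_container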
