Docker file mount understanding

I'm using Docker and I see some strange behaviour when I try to mount a container file onto a host file.
docker run -v /var/tmp/foo.txt:/var/tmp/foo.txt myapp
The command above runs the myapp container, which creates a foo.txt file in the /var/tmp directory inside the container. Because I need to keep this file on the host after myapp dies, I create a mount.
My problem is that instead of getting foo.txt as a file on the host, I end up with an empty directory named "foo.txt" (with nothing inside).
But if I create an empty text file foo.txt on the host first and run myapp again, it works as expected.
So my question is: do I need to create the file on the host before starting the container when I use a file mount with Docker?
I think I missed something. Thank you for your explanations.

In fact, as you discovered, to mount a host file as a data volume the file must exist, otherwise Docker will create a directory with that name and mount it.
From: https://docs.docker.com/engine/tutorials/dockervolumes/
Mount a host file as a data volume
The -v flag can also be used to mount a single file - instead of just directories - from the host machine.
$ docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash
Note that it never says the file may be missing on the host.
As suggested in the comments, it is better to mount a directory if you want the container to write into it.
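For example, you could either pre-create the file on the host before starting the container, or simply mount the parent directory instead; a minimal sketch based on the command above:
touch /var/tmp/foo.txt
docker run -v /var/tmp/foo.txt:/var/tmp/foo.txt myapp
or
docker run -v /var/tmp:/var/tmp myapp
In the second form, whatever myapp writes into /var/tmp inside the container shows up in /var/tmp on the host.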

Related

Is it possible to "open" vscode to see the contents of a docker container?

I have a docker image, and I am running it now (the run command finishes with bash, so I get a shell).
When I do, I have a file structure inside the container.
However, this is not some file structure mapped (with -v) from outside the container. These files and folders exist only inside the container.
My question is, since it is bothersome to be opening each file with vi and navigating from the terminal, is there a way that I can open vscode on these files?
Be aware that these files do not exist outside the container.
I found how to do it from this link.
However, I used the "attach to running container" command.
I rarely do that, but when I have to, I usually mount an empty volume into the container, then exec into the container and copy the folder I need into that empty volume, which is then replicated on my host machine. From my host machine I then open it in vscode.
However please be careful if you have sensitive information in that container, not to expose something by accident.
So the steps are:
Create an empty volume (docker-compose example):
Note: do not overwrite the folder/file which you want to extract. containerpath is a path that does not exist in the container prior to creating the mount.
volumes:
- ./hostpath:/containerpath
Find the container id so that you can use it to exec into the container:
docker ps
Exec into the container:
docker exec -it <container_id> /bin/sh
Copy the file/folder to that empty volume:
cp -r folder /containerpath
Exit the container and look at your files in the ./hostpath folder.
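Putting these steps together, a minimal docker-compose sketch (the service name app and image myapp:latest are placeholders, and /containerpath must not already exist in the image):
version: "3"
services:
  app:
    image: myapp:latest
    volumes:
      - ./hostpath:/containerpath
docker-compose up -d
docker ps
docker exec -it <container_id> /bin/sh
cp -r /path/to/folder /containerpath
exit
Everything copied to /containerpath now appears in ./hostpath on the host and can be opened in vscode.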

Mount directory from docker container to host

I'm trying to mount a directory from a docker container to the host file system
sudo mount 172.17.0.2:/mnt/my_storage /home/user/data/
but the command just hangs and never completes.
I used this command in the past with another instance of the same container image and everything was fine.
Any idea how to troubleshoot this? Is there another way to accomplish it?
Why not add a mount when you create the container instead?
Local folder /home/user/data/:
docker run [..] -v /home/user/data/:/mnt/my_storage [..]
Named volume app_data:
docker run [..] -v app_data:/mnt/my_storage [..]
Have a look at: https://docs.docker.com/storage/volumes/
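If you go with the named volume, docker volume inspect will show you where the data lives on the host (typically under /var/lib/docker/volumes/):
docker volume inspect app_data
Look for the "Mountpoint" field in the output.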

Mounted a volume in Docker, but container and localhost changes are independent

I think I may either be experiencing an error, or misunderstanding the way volumes work in Docker containers.
I am starting my image using the following command: docker run --name Goku -ti -p 3000:3000 -v VSPM:/root/goku:rw ubuntu:goku
VSPM is the local directory and I want to mount it to /root/goku in the docker container. Well, it mounts just fine; however, if I create a new file within the container, that new file doesn't show up on the localhost in the VSPM directory. The same happens in reverse: if I create a new file on the host in that folder, nothing changes in the container's folder.
What am I doing wrong? I just simply want a shared folder between the host and the container. Nothing more, nothing less.
Use the full file path of the local directory instead of just VSPM. With just a name and no path, Docker treats VSPM as a named volume rather than a bind mount of your local directory.
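For example, assuming VSPM is a folder inside your current working directory, a bind mount would look like this:
docker run --name Goku -ti -p 3000:3000 -v "$(pwd)/VSPM":/root/goku:rw ubuntu:goku
With an absolute host path on the left side, changes are shared in both directions between the host folder and /root/goku in the container.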

How to remove a mount for existing container?

I'm learning docker and reading their chapter "Manage data in containers", in the section "Mount a host directory as a data volume", where they mention the following paragraph:
In addition to creating a volume using the -v flag you can also mount a directory from your Docker engine’s host into a container.
$ docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /opt/webapp. If the path /opt/webapp already exists inside the container’s image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
Experiment 1
Then when I tried to run this command and inspect the container, I found that the container doesn't even run. Then I used docker logs web and found this error:
can't open file 'app.py': [Errno 2] No such file or directory
I assume that the /src/webapp mount overlays /opt/webapp, and since my host directory /src/webapp has no content, there is no app.py to run.
Question 1
How can I remove this mount and check if the content is still there as the quote said?
Experiment 2
When I tried to run
$ docker run -d -P --name web2 -v newvolume:/opt/webapp training/webapp python app.py
I found that the container ran correctly. Then I used docker exec -it web2 /bin/bash and found that all of the existing content is still inside /opt/webapp. I can also add more files there. So in this case, it looks like the volume does not overlay the directory but is combined with it. If I use docker inspect web2 and check Mounts, I see that the volume is created under /var/lib/docker/volumes/newvolume/_data.
Question 2
If I give a name instead of an absolute host-dir path, will the volume not overlay the container dir /opt/webapp but instead connect the two directories together?
An alternative solution is to commit the container (or export it) using the docker CLI and re-create it without the mapping.
Question 1 How can I remove this mount and check if the content is still there as the quote said?
You would create a new container without the volume mount. E.g.
$ docker run -d -P --name web training/webapp python app.py
(Theoretically it's possible to perform some privileged operations to remove the mount on a running container, but inside the container you will not normally have this permission, and it's a good practice to get into the habit of treating containers as ephemeral.)
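For example (a sketch; since a container named web is left over from Experiment 1, it has to be removed first):
docker rm -f web
docker run -d -P --name web training/webapp python app.py
docker exec web ls /opt/webapp
The last command should list app.py and the rest of the image content, since no mount hides /opt/webapp anymore.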
Question 2 If I give a name instead of a host-dir absolute path, then the volume will not overlay the container-dir /opt/webapp but connect the two dir together?
Almost. What's happening with named volumes is that docker provides an initialization step when the volume is empty and the container is created with that volume mount. The initialization step copies the contents of the image at that directory into the volume, including all files and directories recursively, along with their ownership and permissions. This is very useful for running containers as a non-root user with a volume directory that the user inside the container needs to be able to write into. After that initialization has happened, future containers with the same named volume will skip the initialization, even if the image content has changed, e.g. if you add new content into the image.
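A quick way to observe that initialization, as a sketch using the same image (demo_data is just an example volume name):
docker volume create demo_data
docker run --rm -v demo_data:/opt/webapp training/webapp ls /opt/webapp
sudo ls /var/lib/docker/volumes/demo_data/_data
The first run copies app.py and the other image files into the empty volume, and the same files are then visible on the host under the volume's _data directory.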

How can I use VOLUME in a Dockerfile to persist individual files in a directory?

This application I'm trying to Dockerize has configuration files in the root of the install dir. If I use VOLUME to mount the install dir on the host, I'll end up with the application on the host, too. I only want to store the configuration files on the host.
Should I use hard links in the container and use VOLUME to mount the dir that has the hardlinks? Do hard links even work in a container?
You can mount individual files. Below is from the docker documentation https://docs.docker.com/engine/userguide/containers/dockervolumes/
Mount a host file as a data volume
The -v flag can also be used to mount a single file - instead of just directories - from the host machine.
$ docker run --rm -it -v ~/.bash_history:/root/.bash_history ubuntu /bin/bash
This will drop you into a bash shell in a new container, you will have your bash history from the host and when you exit the container, the host will have the history of the commands typed while in the container.
Note: Many tools used to edit files, including vi and sed --in-place, may result in an inode change. Since Docker v1.1.0, this will produce an error such as "sed: cannot rename ./sedKdJ9Dy: Device or resource busy". In the case where you want to edit the mounted file, it is often easiest to instead mount the parent directory.
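So instead of a VOLUME on the whole install directory, you can bind-mount just the configuration file(s) at run time. A hedged sketch, with made-up paths standing in for your application's install dir:
docker run -d -v /srv/myapp/config.ini:/opt/myapp/config.ini myapp
As noted in the first answer above, the host file has to exist beforehand, otherwise Docker will create a directory with that name instead.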
