I have mounted a directory as an input mount of a Docker container (the directory contains my files).
Inside the container, my script lists the files in the input mount at the beginning and again in the middle of its run. The problem is that some files do not show up in the mount.
Sometimes some of them are missing at the beginning of the script; other times all of them are present in the input mount at the beginning, but some of them are gone by the middle of the script.
What is the problem?
P.S.: I mostly face this problem when launching many containers in bulk.
Related
I'm running an application in a Docker container and expose its entire app-data directory to the host. The app-data directory inside the container contains many files and directories, including a logs subdirectory that I want to treat differently and map to another disk partition on the host. My docker-compose.yml now contains this:
volumes:
- /mnt/app:/var/app-data
- /tmp/logs:/var/app-data/logs
While this seems to work, I encountered a side-effect that I don't understand. After starting the application, the /tmp/logs directory correctly contains the application logs, but the /mnt/app directory now contains an empty logs directory:
drwxr-xr-x. 2 root root 4096 Dec 1 10:47 logs
Where does this empty directory come from? Is that just expected Docker behavior and totally safe to deploy this way?
That empty directory is needed as the mount point for the nested mount.
The first important detail is that Docker internally sorts all mounts by their container path, and mounts them in order. So, the /var/app-data mount is completed before Docker starts to consider the /var/app-data/logs directory.
When Docker does the bind mount, it uses the Linux mount(2) system call. Docker needs to tell the kernel what to mount and where to mount it. If the mount point doesn't exist, Docker creates it first.
In your scenario, after /var/app-data is mounted, the directory /var/app-data/logs doesn't exist, so Docker creates it before performing the actual mount. Because the outer directory is already mounted at that point, the new directory is created on the host side too, as /mnt/app/logs.
This is a normal consequence of using nested mounts and it's nothing to worry about.
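The sequence above can be sketched in shell as roughly what the daemon does via mount(2). This is an illustration only, not Docker's actual code; it requires root, and the paths are the ones from the question:

```shell
# Rough shell equivalent of Docker's nested-mount setup (requires root; illustrative only)
mount --bind /mnt/app /var/app-data          # outer mount first (mounts are sorted by container path)
mkdir -p /var/app-data/logs                  # mount point is missing, so it gets created;
                                             # because /var/app-data is already a bind mount,
                                             # this mkdir also lands on the host as /mnt/app/logs
mount --bind /tmp/logs /var/app-data/logs    # nested mount then covers the empty directory
```

After the last step, the empty directory is hidden inside the container but still visible on the host under /mnt/app.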
I have the following in my docker-compose.yml file:
volumes:
- .:/var/www/app
- my_modules:/var/www/app/node_modules
I do this because I don't have node_modules on my host and everything gets installed in the image and is located at /var/www/app/node_modules.
I have two questions regarding this:
An empty folder gets created on my host named node_modules. If I add another volume (named or anonymous) to the list in my .yml file, it'll show up on my host in the same folder that contains my .yml file. From this answer, it seems to have to do with the fact that there are these two mappings going on simultaneously. However, why is the folder empty on my host? Shouldn't it either a) contain the files from the named volume or b) not show up at all on the host?
How does Docker know to check the underlying /var/www/app/node_modules from the image when initializing the volume, rather than just saying "Oh, node_modules doesn't exist"? (I'm assuming the host bind mount happens before the named volume gets initialized, so /var/www/app should no longer have a folder named node_modules.) It even seems to work when I create a sample node_modules folder on my host and a new volume while keeping my_modules:/var/www/app/node_modules: it still uses the node_modules from the image rather than from the host (which is not what I expected, although not unwanted).
As an implementation detail, Docker actually uses the Linux kernel filesystem mount facility whenever it mounts a volume. A volume has to be mounted onto a directory, so if the mount target doesn't already exist, Docker creates a new empty directory to be the mount point. If that mount point is itself inside a mounted volume, you'll see the empty directory get created there, but the nested mount's contents won't show through it.
(If you're on a Linux host, try running mount in a shell while the container is running.)
That is:
/container_root/app is a bind mount to /host_path/app; they are the same underlying files.
mkdir /container_root/app/node_modules creates /host_path/app/node_modules too.
Mounting something else on /container_root/app/node_modules doesn't cause anything to be mounted on /host_path/app/node_modules.
...which leaves an empty /host_path/app/node_modules directory.
If you mount an empty volume into a container, then the first time the container starts, and only then, the contents from the image get copied into the volume. You're telling Docker this directory contains critical data that needs to be persisted for longer than the lifespan of the container. It is not a magic "don't use the host directory volume" knob, and if you do things like change your package.json file, Docker will not update the contents of this volume.
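Because of that one-time copy, a stale volume has to be removed to pick up image changes. A minimal sketch, assuming the compose project is named app, so the volume is app_my_modules (the actual name is an assumption; check with docker volume ls):

```shell
# Rebuild the image, then remove the stale named volume so Docker re-copies
# node_modules from the new image on the next start (volume name is an assumption)
docker compose down
docker volume rm app_my_modules
docker compose up --build
```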
I have a little problem with Docker.
I'm trying to use a volume shared with my computer. I can see the files from my computer, but they appear empty from inside my container.
I tried to create a file in the /root of my container (outside the shared volume) and I can see the file without any problem.
If I do echo test > test.txt (in my shared volume), the file content appears empty.
I execute this command:
docker run -v "D:\My App:/home/app" -it MyImage /bin/bash
In the /home/app folder, I can see the files on my computer. But if I do:
cat /home/app/test.txt
It tells me there's nothing in the file, even though there is text in it (the file exists).
If I create a file from my container, in the shared volume, I find it on my computer (and it is not empty).
If I create a file from my computer, I find it in the container, but it is empty when I try to display it.
Currently, when I do cat test.txt, it doesn't display anything, though it should display this is a test.
First, do check your Docker for Windows settings:
If your D:\ drive is not shared, you won't see much in your container.
docker/for-win issue 25 points out multiple possible issues:
If you are using Docker Toolbox:
In my case, Docker Toolbox created a VM named default in VirtualBox, and I added the Shared Folder in the VM: VirtualBox -> default (VM) -> Settings -> Shared Folders -> Add.
Then you can specify the paths in both your machine and the mapped path in the VM, like:
The 1st field is the path in your machine, like D:\my\app
The 2nd is the path in the VM, like /my-vm/app
Choose to Mount Automatically
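The same shared folder can be added from the command line with VBoxManage; a sketch assuming the VM is named default (as in the answer) and the host path is D:\my\app, with an illustrative share name:

```shell
# CLI equivalent of the VirtualBox GUI steps above (VM name, share name, and path are assumptions)
VBoxManage sharedfolder add default --name "my-vm-app" --hostpath "D:\my\app" --automount
```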
Another:
One of the issues I had when learning, was to try and mount a volume in my container, but then have a folder that conflicted.
For example, I'd make my workingdir /foo/bar, then try to use a volume for /foo/bar/private as well, BUT already have a folder called private in my initial mount.
I would see no error, but I'd see the first folder and not my second volume.
Or:
docker/for-win issue 2151: "Volumes mounted from a Linux WSL instance don't resolve in container".
It refers to "how to use Docker with WSL".
The last thing we need to do is set things up so that volume mounts work. This tripped me up for a while because check this out…
When using WSL, Docker for Windows expects you to supply your volume paths in a format that matches this: /c/Users/nick/dev/myapp.
But, WSL doesn’t work like that. Instead, it uses the /mnt/c/Users/nick/dev/myapp format.
Honestly I think Docker should change their path to use /mnt/c because it’s more clear on what’s going on, but that’s a discussion for another time.
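A commonly cited workaround for this path mismatch is to remount the Windows drives at / inside WSL, so that Docker-style /c/Users/... paths resolve. A sketch (run inside the WSL distro; the WSL session must be fully restarted afterwards for the change to take effect):

```shell
# Remount Windows drives at / instead of /mnt so /c/... paths resolve inside WSL
sudo tee /etc/wsl.conf >/dev/null <<'EOF'
[automount]
root = /
options = "metadata"
EOF
# Then exit all WSL shells and restart the session before testing the new paths
```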
I have a docker container I use to compile a project, build a setup and export it out of the container. For this I mount the checked out sources (using $(Build.SourcesDirectory):C:/git/ in the volumes section of the TFS docker run task) and an output folder in 2 different folders. Now my project contains a submodule which is also correctly checked out, all the files are there. However, when my script executes nmake I get the following error:
Cannot find file: \ContainerMappedDirectories\347DEF6A-D43B-48C0-A5DF-CE228E5A10FD\src\Submodule\Submodule.pro
The mapped container path corresponds to C:/git/ inside the Windows Docker container (running on a Windows host). I was able to start the container with an interactive PowerShell, mount the folder, and find out the following:
All the files are there in the container.
When doing docker cp project/ container:C:/test/ and running my build script it finds all the files and compiles successfully.
When copying the mounted project within the container using PowerShell and then starting the build script, it also works.
So it seems nmake has trouble traversing a directory mounted into the container. Any idea how to fix this? I'd rather avoid copying the project into the container, because that takes quite some time compared to simply mounting the checked-out project.
I'm trying to avoid slow osxfs when using docker. So I'm running docker-sync volume container.
I want to mount only subdir of this volume to another container.
Current example errors out:
docker: Error response from daemon: create docker-sync/www/marcopolo.front: "docker-sync/www/marcopolo.front" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed.
See 'docker run --help'.
As you've seen, Docker does not currently provide a way to mount only a subdirectory of a named volume. You'll have to mount the entire volume and then either pick out your specific files, or mount the volume in a separate location and include a symbolic link to your specific subdirectory as part of the image.
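A minimal sketch of the symlink approach, using the subdirectory path from the error message; the mount location /mnt/sync, the image name myimage, the link target /var/www/front, and the start-app command are all illustrative assumptions:

```shell
# Mount the whole named volume elsewhere, then point the app's expected path
# at the wanted subdirectory via a symlink (names/paths here are assumptions)
docker run -d \
  -v docker-sync:/mnt/sync \
  myimage sh -c 'ln -sfn /mnt/sync/www/marcopolo.front /var/www/front && exec start-app'
```

Baking the same ln -s line into the image (e.g. in its Dockerfile) avoids repeating it on every run.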