Does anyone know if Docker somehow caches files/file systems? And if it does, is there a way to suppress that or force it to regenerate/refresh the cache? I am editing my files outside my docker image, but the directory inside the docker image doesn't seem to pick them up. Outside the docker image, however, the "same" directory does include the files. The only thing that makes sense to me is that docker has an internal "copy" of the directory and isn't going to the disk, so it sees an outdated copy of the directory from before the file was added.
Details:
I keep my "work" files in a directory on a separate partition (/Volumes/emacs) on my MacBook, i.e.:
/Volumes/emacs/datapelago
I do my editing in emacs on the MacBook, but not in the docker container. I have a link to that directory called:
/projects
And I might edit or create a file called:
/projects/nuc-440/qflow/ToAttribute.java
In the meantime I have a docker container I created this way:
docker container create -p 8080:8080 -v /Volumes/emacs/datapelago:/projects -v /Users/christopherfclark:/Users/cfclark --name nuc-440 gradle:7.4.2-jdk17 tail -f /dev/null
I keep a shell running in that container:
docker container exec -it nuc-440 /bin/bash
cd /projects/nuc-440
And after making changes I run the build/test sequence:
gradle clean;gradle build;gradle test
However, recently I have noticed that when I make changes or add files, they don't always get reflected inside the docker container, which of course can cause the build to fail or the tests not to pass.
Thus, this question.
I'd rather not have to start/stop the container each time and instead just keep it running and tell it to "refetch" the /projects/nuc-440 directory and its children.
I restarted the docker container and it is now "tracking" file changes again. That is, I can make changes in macOS and they are reflected inside docker with no additional steps. I don't seem to have to continually restart it. It must have gotten into a "weird" state. However, I don't have any useful details beyond that. Sorry.
I am finding many answers on how to develop inside a container in Visual Studio Code with the Remote Containers extension, but surprisingly none address my use case.
I can add to the repo only from the host, but can run the code only from the container. If I edit a file in the container, I have to manually copy it to the host so I can commit it, but if I edit the file on the host, I have to manually copy it into the container so I can test it.
I would like to set up the IDE to automatically copy files from the host into the container whenever I save or change a file in a particular directory. This way I can both commit files on the host, and run them in the container, without having to manually run docker cp each time I change a file. Files should not automatically be copied from the container to the host, since the container will contain built files which should not be added to the repo.
It seems highly unlikely that this is impossible; but how?
This can be configured using the Run on Save extension.
Set it to run docker cp on save.
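For example (the container name and paths here are hypothetical), the command the extension runs on each save would just be a docker cp of the saved file into the container:
# copy the file that was just saved on the host into the running container
docker cp ./src/Main.java my-dev-container:/workspace/src/Main.java
If the extension supports substituting the saved file's path into the command (check its documentation), you can avoid hard-coding individual file names.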
So I have this remote folder /mnt/shared mounted with fuse. It is mostly available, except for occasional disconnections.
The actual mounted folder /mnt/shared becomes available again when the re-connection happens.
The issue is that I put this folder into a docker volume to make it available to my app: /shared. When I start the container, the volume is available.
But if a disconnection happens in between, then even though the /mnt/shared folder on the host machine becomes available again, the /shared folder is not accessible from the container, and I get:
user@machine:~$ docker exec -it e313ec554814 bash
root@e313ec554814:/app# ls /shared
ls: cannot access '/shared': Transport endpoint is not connected
In order to get it to work again, the only solution I found is to docker restart e313ec554814, which brings downtime to my app, hence is not an acceptable solution.
So my questions are:
Is this somehow a docker "bug" not to reconnect to the mounted folder when it is available again?
Can I execute this task manually, without having to restart the whole container?
Thanks
I would try the following solution.
If you mount the volume to your docker like so:
docker run -v /mnt/shared:/shared my-image
I would create an intermediate directory /mnt/base/shared and mount it to docker like so:
docker run -v /mnt/base/shared:/base/shared my-image
and I would also adjust my code to refer to the new path, or create a link from /base/shared to /shared inside the container (as sketched below).
Explanation:
The problem is that the mounted directory /mnt/shared is probably deleted on the host machine when there is a disconnection, and a new directory is created once the connection is back. But the container started running with a directory mapping to the old directory, which has since been deleted. By creating an intermediate directory and mapping to it instead, you avoid this mapping issue.
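If you go the link route rather than changing paths in your code, a minimal sketch of the container-side step (assuming /shared does not already exist in the image), e.g. in the image's entrypoint:
# keep the old application path working by pointing it at the newly mapped directory
ln -s /base/shared /shared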
Another solution that might work is to mount the directory using bind-propagation=shared
e.g.:
--mount type=bind,source=/mnt/shared,target=/shared,bind-propagation=shared
See the docker docs explaining bind-propagation.
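For completeness, a full command using the question's paths might look like this (the image name and the -d flag are placeholders/assumptions):
docker run -d --name my-app --mount type=bind,source=/mnt/shared,target=/shared,bind-propagation=shared my-image
Keep in mind that the effect also depends on the propagation settings of the host mount point itself.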
Why is Docker trying to create the folder that I'm mounting? If I cd to C:\Users\szx\Projects and run:
docker run --rm -it -v "${PWD}:/src" ubuntu /bin/bash
This command exits with the following error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: error while creating mount source path '/c/Users/szx/Projects': mkdir /c/Users/szx/Projects: file exists.
I'm using Docker Toolbox on Windows 10 Home.
For anyone running macOS and encountering this: I restarted Docker Desktop in order to resolve this issue.
Edit: It would appear this also fixes the issue on Windows 10
My trouble was a fuse-mounted volume (e.g. sshfs, etc.) that got mounted again into the container. It didn't help that the fuse mount had the same ownership as the user inside the container.
I assume the underlying problem is that the docker/root supervising process needs to get a hold of the fuse-mount as well when setting up the container.
Eventually it helped to mount the fuse volume with the allow_other option. Be aware that this opens access to any user. Better might be allow_root – not tested, as blocked for other reasons.
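For an sshfs mount, for instance, that option is passed at mount time; a minimal sketch (host, remote path and mount point are placeholders):
# remount the fuse volume so users other than the mounting one (including the docker daemon running as root) can access it
sshfs -o allow_other user@remote-host:/remote/dir /path/to/mountpoint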
I got this error after changing my Windows password. I had to go into Docker settings and do "Reset credentials" under "Shared Drives", then restart Docker.
Make sure the folder is being shared with the docker embedded VM. This differs between the various types of Docker for desktop installs. With Toolbox, I believe you can find the shared folders in the VirtualBox configuration. You should also note that these directories are case-sensitive. One way to debug is to try:
docker run --rm -it -v "/:/host" ubuntu /bin/bash
And see what the filesystem looks like under "/host".
I encountered this problem on Docker for Windows after upgrading to 2.2.0.0 (42247). The issue was with casing in the folder name that I'd provided in my arguments to the docker command.
Did you use this container before? You could try to remove all the docker-volumes before re-executing your command.
docker volume rm $(docker volume ls -qf dangling=true)
I tried your command locally (MacOS) without any error.
I met this problem too.
I used to run the following command to share the folder with the container:
docker run ... -v c:/seleniumplus:/dev/seleniumplus ...
But it doesn't work anymore.
I am using Windows 10 as the host.
My docker has recently been upgraded to "19.03.5 build 633a0e".
I did change my Windows password recently.
I followed the instructions to re-share the "C" drive, restarted Docker and even restarted the computer, but it didn't work :-(.
All of a sudden, I found that the folder is "C:\SeleniumPlus" in the file explorer, so I ran
docker run ... -v C:/SeleniumPlus:/dev/seleniumplus ...
And it did work. So the Windows shared folder is case-sensitive when we specify it in the latest Docker ("19.03.5 build 633a0e").
I am working in Linux (WSL2 under Windows, to be more precise) and my problem was that there existed a symlink for that folder on my host:
# docker run --rm -it -v /etc/localtime:/etc/localtime ...
docker: Error response from daemon: mkdir /etc/localtime: file exists.
# ls -al /etc/localtime
lrwxrwxrwx 1 root root 25 May 23 2019 /etc/localtime -> ../usr/share/zoneinfo/UTC
It worked for me to bind mount the source /usr/share/zoneinfo/UTC instead.
I had this issue when I was working with Docker in a CryFS-encrypted directory in Ubuntu 20.04 LTS. The same probably happens in other UNIX-like OSes.
The problem was that by default the CryFS-mounted virtual directory is not accessible by root, but Docker runs as root. The solution is to enable root access for FUSE-mounted volumes by editing /etc/fuse.conf: just uncomment the user_allow_other setting in it. Then mount the encrypted directory with the command cryfs <secretdir> <opendir> -o allow_root (where <secretdir> and <opendir> are the encrypted directory and the FUSE mount point for the decrypted virtual directory, respectively).
Credits to the author of this comment on GitHub for calling my attention to the -o allow_root option.
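Putting the two steps together (the directory placeholders are the same as in the answer above):
# 1. enable allow_other / allow_root for non-root FUSE mounts (requires root):
#    in /etc/fuse.conf, uncomment the line:  user_allow_other
# 2. remount the encrypted directory so that root (and therefore Docker) can see it:
cryfs <secretdir> <opendir> -o allow_root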
Had the exact same error. In my case, I had used c instead of C when changing into my directory.
I solved this by restarting docker and rebuilding the images.
I put user_allow_other in /etc/fuse.conf.
Then mounting as in the example below solved the problem.
$ sshfs -o allow_other user@remote_server:/directory/
I had this issue in WSL, likely caused by leaving some containers alive too long. None of the advice here worked for me. Finally, based on this blog post, I managed to fix it with the following commands, which wipe all the volumes completely to start fresh.
docker-compose down
docker rm -f $(docker ps -a -q)
docker volume rm $(docker volume ls -q)
docker-compose up
Then, I restarted WSL (wsl --shutdown), restarted docker desktop, and tried my command again.
In case you work with a separate Windows user, with which you share the volume (C: usually): you need to make sure it has access to the folders you are working with -- including their parents, up to your home directory.
Also make sure that EFS (Encrypting File System) is disabled for the shared folders.
See also my answer here.
I had the same issue when developing using docker. After I moved the project folder locally, Docker could not mount files that were listed with relative paths, and tried to make directories instead.
Pruning docker volumes / images / containers did not solve the issue. A simple restart of docker-desktop did the job.
This error cropped up for me because my docker-compose file was looking for the APPDATA path on my machine on macOS. macOS doesn't have an APPDATA environment variable, so I just created a .env file with the contents:
APPDATA=~/Library/
And my problem was solved.
I faced this error when another running container was already using the folder that is being mounted in the docker run command. Please check for that and, if the other container is not needed, stop it. The best solution is to use a volume, created with the following command:
docker volume create
Then mount this created volume where required; it can also be used by multiple containers.
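For instance, a minimal sketch with a named volume (the volume, container and image names are placeholders):
docker volume create shared-data
docker run -d --name app1 -v shared-data:/shared my-image
docker run -d --name app2 -v shared-data:/shared my-image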
For anyone having this issue on a Linux-based OS, try remounting the remote folders that are used by the docker image. This helped me on Ubuntu:
sudo mount -a
I am running Docker Desktop (docker engine v20.10.5) on Windows 10 and faced a similar error. I went ahead and removed the existing image from the Docker Desktop UI, deleted the folder in question (for me deleting the folder was an option because I was just doing some local testing), removed the existing container, restarted Docker, and it worked.
In my case my volume path (in a .env file for docker-compose) had a space in it
/Volumes/some\ thing/folder
which did work on Docker 3 but didn't after updating to Docker 4. So I had to set my env variable to:
"/Volumes/some thing/folder"
I had this problem when the directory on my host was inside a directory mounted with gocryptfs. By default, even root can't see the directory mounted by gocryptfs; only the user who executed the gocryptfs command can. To fix this, add user_allow_other to /etc/fuse.conf and use the -allow_other flag, e.g. gocryptfs -allow_other encrypted mnt
In my specific instance, Windows couldn't tell me who owned my SSL certs (probably docker). I took control of the SSL certs again under Properties, added read permission for docker-users and my user, and it seems to have fixed the problem. After tearing my hair out for 3 days with just the Daemon: Access Denied error, I finally got a meaningful error related to another answer above, "mkdir failed" or whatever, on a mounted file (the SSL cert).
I have a docker-compose dev stack. When I run docker-compose up --build, the container will be built and it will execute
Dockerfile:
RUN composer install --quiet
That command will write a bunch of files inside the ./vendor/ directory, which is then only available inside the container, as expected. The vendor/ directory that also exists on the host is not touched and is, hence, out of date.
Since I use that container for development and want my changes to be available, I mount the current directory inside the container as a volume:
docker-compose.yml:
my-app:
  volumes:
    - ./:/var/www/myapp/
This loads an outdated vendor directory into my container, forcing me to rerun composer install either on the host or inside the container in order to have the up-to-date version.
I wonder how I could manage my docker-compose stack differently, so that the changes during the docker build on the current folder are also persisted on the host directory and I don't have to run the command twice.
I do want to keep the vendor folder mounted, as some vendors are my own and I like being able to modify them in my current project. So only mounting the folders I need to run my application would not be the best solution.
I am looking for a way to tell docker-compose: Write all the stuff inside the container back to the host before adding the volume.
You can run a short side container after docker-compose build:
docker run --rm -v "$(pwd)/vendor:/target" my-app cp -a vendor/. /target/.
The cp could also be something more efficient like an rsync. Then after that container exits, you do your docker-compose up, which mounts ./vendor from the host.
Write all the stuff inside the container back to the host before adding the volume.
There isn't any way to do this directly, but there are a few options to do it as a second command.
As already suggested, you can run a container and copy or rsync the files.
Use docker cp to copy the files out of a container (without using a volume), as sketched below.
Use a tool like dobi (disclaimer: dobi is my own project) to automate these tasks. You can use one image to update vendor, and another image to run the application. That way updates are done on the host, but can be built into the final image. dobi takes care of skipping unnecessary operations when the artifact is still fresh (based on the modified time of files or resources), so you never run them when they aren't needed.
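For the docker cp option, a minimal sketch (the in-image path /var/www/myapp/vendor follows the question's compose file; the image and container names are placeholders):
docker-compose build my-app
# create (but don't start) a container from the built image, copy vendor/ out, then remove it
docker create --name vendor-copy <built-image-name>
docker cp vendor-copy:/var/www/myapp/vendor/. ./vendor/
docker rm vendor-copy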