Error: "error creating aufs mount to" when building dockerfile - docker

I get this error when I try to build a Dockerfile:
error creating aufs mount to /var/lib/docker/aufs/mnt
/6c1b42ce1a98b1c0f2d2a7f17c196221445f1054566065d4c607e4f1b99930eb-init:
invalid argument
What does it mean? How do I fix it?

I had some unresolved errors after removing /var/lib/docker/aufs, which a couple of extra steps cleared up.
To add to @BenWalther's answer, since I lack the reputation to comment:
# Cleaning up through docker avoids these errors
# ERROR: Service 'master' failed to build:
# open /var/lib/docker/aufs/layers/<container_id>: no such file or directory
# ERROR: Service 'master' failed to build: failed to register layer:
# open /var/lib/docker/aufs/layers/<container_id>: no such file or directory
docker rm -f $(docker ps -a -q)
docker rmi -f $(docker images -a -q)
# As per @BenWalther's answer above
sudo service docker stop
sudo rm -rf /var/lib/docker/aufs
# Removing the linkgraph.db fixed this error:
# Conflict. The name "/jenkins_data_1" is already in use by container <container_id>.
# You have to remove (or rename) that container to be able to reuse that name.
sudo rm -f /var/lib/docker/linkgraph.db
sudo service docker start

If you try to use Docker inside a Live CD with persistence enabled, you may encounter this error. I suspect it's because you can't create aufs mounts on top of an overlayfs mount, which is what the persistence layer uses.
The solution was simply to use a different storage driver. I used vfs in /etc/docker/daemon.json:
{
"storage-driver": "vfs"
}
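After restarting Docker you can confirm the driver took effect. A quick check (the restart command assumes a systemd host, and the --format flag is available on modern Docker versions):
sudo systemctl restart docker
docker info --format '{{.Driver}}'
# should print: vfs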

I removed /var/lib/docker/aufs/diff and got the same problem:
error creating aufs mount to /var/lib/docker/aufs/mnt/blah-blah-init: invalid argument
I solved it by running the following commands:
docker stop $(docker ps -a -q);
docker rm $(docker ps -a -q);
docker rmi -f $(docker images -a -q)

AUFS is unable to mount the Docker container filesystem.
This is either because the path is already mounted, or because there's a race condition in Docker's interaction with AUFS due to the large number of existing volumes.
To solve this, try the following:
Restart the Docker service or daemon and try again.
Check mount for aufs filesystems mounted on any paths under /var/lib/docker/aufs/. If found, stop Docker, then umount them (needs sudo).
Example:
mount
none on /var/lib/docker/aufs/mnt/55639da9aa959e88765899ac9dc200ccdf363b2f09ea933370cf4f96051b22b9 type aufs (rw,relatime,si=5abf628bd5735419,dio,dirperm1)
then
sudo umount /var/lib/docker/aufs/mnt/55639da9aa959e88765899ac9dc200ccdf363b2f09ea933370cf4f96051b22b9
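If there are several stale mounts, something like this sketch can unmount them all in one go. It assumes the mount output format shown above, and -r is GNU xargs, so double-check before running:
sudo service docker stop
mount | awk '$5 == "aufs" && $3 ~ "^/var/lib/docker/aufs/" {print $3}' | xargs -r -n1 sudo umount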
If that doesn't work, stop Docker, then run sudo rm -rf /var/lib/docker/aufs. You will lose any existing stopped containers and all images, but this is just about guaranteed to solve the problem.

Unfortunately, on my system I could not resolve this with the above answers. The Docker daemon kept referencing a file in the aufs layer that it could no longer reach, and other solutions didn't work either. So if this is an option for you, try the following fix: uninstall/purge docker and docker-engine:
apt-get purge docker docker-engine
Then make sure everything from /var/lib/docker is removed.
rm -rf /var/lib/docker
After that, install Docker again.
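For reference, the whole sequence might look like this (docker-ce as the package name is an assumption; it varies by distro and Docker release):
sudo apt-get purge docker docker-engine
sudo rm -rf /var/lib/docker
sudo apt-get update && sudo apt-get install docker-ce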

I'm using Raspbian on a Raspberry Pi 4.
The best way to do it:
Check your current storage driver with sudo docker info and look at "Storage Driver". Then:
sudo systemctl stop docker
sudo nano /etc/docker/daemon.json
Write the configuration below and save it:
{
"storage-driver": "vfs"
}
sudo systemctl start docker
Although vfs has performance issues and may not be the best choice... :)

I just had a similar issue on Lubuntu (kernel 4.15.0-20-generic) with Docker CE 18.03. None of the described options helped.
It appears that the latest Docker versions use the overlay2 storage driver, but some applications require aufs. A possible fix is to follow this Docker guide to change the storage driver to aufs (simply replace "overlay2" with "aufs").

I was running a container inside another container (with Docker installed in the inner container as well), and it was trying to create aufs storage on top of an overlayfs mount, which is not possible. So I changed the host's storage driver from overlayfs to aufs, which solved my issue. To check the storage driver, use the command below.
docker info
The solution was simply to use a different driver. I used aufs in /etc/docker/daemon.json:
{
"storage-driver": "aufs"
}
For a detailed explanation, read the documentation below.
Docker storage documentation

A similar issue arose while I was using Docker in Windows:
ERROR: Service 'daemon' failed to build: error creating overlay mount
to /var/lib/docker/overlay2/83c98f716020954420e8b89e6074b1af6
1b2b86cd51ac6a54724ed263b3663a2-init/merged: no such file or directory
The problem occurred after having removed a volume from the image's Dockerfile, rebuilding the image and then rebooting the PC. Maybe this is a common cause?
I managed to solve the problem by clicking Docker -> Settings -> Reset -> Reset to factory defaults...
All my images were subsequently lost but that didn't matter for me. I also figured that removing the VM disk image (the path to which can be found under the Advanced tab in Settings) could solve the issue. I haven't tried this approach however.

On Windows, restarting the Docker machine after a reboot solved the problem for me.
Use these commands:
docker-machine stop
docker-machine start
docker-compose up

I'm putting this answer here as well, since a Google search led me here: @whitebrow's answer contains the term I searched for.
ERROR: Service '***' failed to build: error creating overlay mount
to /var/lib/docker/overlay2/***/merged: no such file or directory
In my case, the working workaround was, surprisingly, to restrict the number of RUN commands/layers in the Dockerfile. Whenever the number of layers surpassed about 60, the build always ended with that missing 'merged' folder error, no matter what the command contained; even a command as simple as RUN ls -la failed once the total count went above roughly 60. The 'merged' subfolder was always missing, and even when I created all the missing 'merged' subfolders by hand, a new layer with a new hash was created on the fly, again without that subfolder.
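If you run into a layer limit like this, the usual workaround is to chain shell commands so several steps share a single layer. A generic illustration (not the original Dockerfile):
# one layer per RUN instruction:
RUN apt-get update
RUN apt-get install -y curl
# chained into a single layer:
RUN apt-get update && \
    apt-get install -y curl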

I faced the same issue. I resolved it by adding the storage driver to /etc/docker/daemon.json.
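For example (overlay2 is just an illustration here; pick whichever driver suits your kernel and workload):
{
"storage-driver": "overlay2"
}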
You can refer to this link as well to see other driver options: https://docs.docker.com/storage/storagedriver/select-storage-driver/

Related

Getting error while running local docker registry

I am getting an error while running a local Docker registry on a CentOS system. I am explaining the error below.
docker: Error response from daemon: lstat /var/lib/docker/overlay2/3202584ed599bad99c7896e0363ac9bb80a0385910844ce13e9c5e8849494d07: no such file or directory.
I am setting up the local registry like below.
vi /etc/docker/daemon.json:
{ "insecure-registries":["ip:5000"] }
I have the registry image installed on my system, and I am running it using the below command.
docker run -dit -p 5000:5000 --name registry bundle/tools:registry_3.0.0-521
I have cleaned all volumes as per some suggestions from Google, but I still have the same issue. Can anybody help me resolve this error?
The error is not related to the registry; it is happening on the client side because of local caching (or some other Docker-related issue) on your system.
I've seen this error a lot in the Docker community, and the most suggested approach is to clean up the whole /var/lib/docker directory.
On your local client system, if you don't care about your current containers, images, and caches, try stopping the Docker daemon, removing the whole /var/lib/docker directory, and starting it again.
Note that sometimes it gets fixed by only restarting the daemon, so it's worth trying that first:
sudo service docker restart
If a simple restart can't solve the problem, go ahead and destroy it:
sudo service docker stop
sudo rm -rf /var/lib/docker
sudo service docker start
(I'm not sure whether these commands will work the same way on your CentOS.)
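On a systemd-based CentOS 7 or later, the equivalents should be:
sudo systemctl restart docker
# or, for the full wipe:
sudo systemctl stop docker
sudo rm -rf /var/lib/docker
sudo systemctl start docker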

Change Docker (snap) data-root folder

I'm trying to change the default data folder for Docker images, containers, etc. to a different path. The snap installation of Docker keeps that folder at /var/snap/docker/common/var-lib-docker.
Theoretically I could change that with the data-root option in daemon.json. But if I edit daemon.json to add "data-root": "/home/user/docker", Docker won't start due to a conflict with its flags (which always contain the default path described above).
I can start Docker with my custom path if I stop it and then start it like this: sudo snap start docker.dockerd --data-root=/home/user/docker. That is not pretty, but it works. Is there a way to change the Docker snap's flags on startup, or to make it prefer the daemon.json options?
I've read this archived post, which covers the issue on Docker version 17, but it didn't help much, and neither did several other materials I found online. It seems a symbolic link may be a way, though...
I'm using Docker 19.03.11, snap-installed, on Ubuntu 20.04.
P.S.: The new path is on a second HDD mounted as my home directory. Changing the path will save space on my system SSD.
Thanks for the attention.
From https://github.com/docker-snap/docker-snap/issues/3 and https://askubuntu.com/questions/550348/how-to-make-mount-bind-permanent, the not-perfect-but-working solution seems to be a bind mount between /var/snap/docker/common/var-lib-docker and /home/username/docker, which is the previous Docker data-root I had before installing Docker with snap.
So first, clear the data-root option in daemon.json.
Then add the following at the end of /etc/fstab (writing to /etc/fstab requires root, hence tee):
echo '/home/username/docker /var/snap/docker/common/var-lib-docker none bind' | sudo tee -a /etc/fstab
After reboot, your docker data root will be stored in /home/username/docker
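You can verify that the bind mount took effect after the reboot (findmnt is part of util-linux):
findmnt /var/snap/docker/common/var-lib-docker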
I ran out of space on an Ubuntu VirtualBox VM and had to do the following:
Stop the VM and create a new Fixed Volume
Start the VM and make sure the new volume was mounted
Stop the docker service
sudo systemctl stop docker.service
sudo systemctl stop docker.socket
Copy /var/lib/docker to the new volume
sudo rsync -aqxP /var/lib/docker/ /media/username/spare\ disk/
Update /etc/docker/daemon.json
{
"data-root": "/media/username/spare disk/docker",
"storage-driver": "overlay2"
}
Reload systemd and start docker service
sudo systemctl daemon-reload
sudo systemctl start docker
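You can confirm afterwards that Docker picked up the new location (DockerRootDir is a standard field in docker info output):
docker info --format '{{.DockerRootDir}}'
# expected: /media/username/spare disk/docker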
See: https://docs.docker.com/config/daemon/systemd/#runtime-directory-and-storage-driver

Increase Docker container size from default 10GB on rhel7

When I launch a container from the rhel7.3 image, the default container size is 10GB. I want to increase it to 20GB. I tried the ways below, but I had no luck.
1) Added "DOCKER_STORAGE_OPTIONS": "--storage-opt dm.basesize=20G" in the /etc/docker/daemon.json file. /etc/docker/daemon.json is not there by default, so I had to add it, and then tried restarting Docker. The restart fails with the below error:
"unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives don't match any configuration option: DOCKER_STORAGE_OPTIONS\n"
2) Added "dm.basesize=20G" parameter while I launch the conatiner
docker run --privileged --storage-opt "dm.basesize=20G" -d IMAGE_ID
but it fails to launch with the error:
"docker: Error response from daemon: Unknown option dm.basesize."
Any help on how I can launch a container with 20GB instead of the default 10GB?
Thanks,
Premchand
I changed the storage driver to overlay with the following steps:
1) Added {"storage-driver": "overlay"} in the /etc/docker/daemon.json file. This file was not there in RHEL 7.3, so I added it manually.
2) Restarted Docker.
My issue of increasing the container size is resolved, as each container now gets the total amount of space available on the host.
Had the same issue as you. After a lot of research, I found a simple solution:
stop the docker service:
sudo systemctl stop docker
edit your docker service file, located at:
/usr/lib/systemd/system/docker.service
find the execution line:
ExecStart=/usr/bin/dockerd
and change it to: ExecStart=/usr/bin/dockerd --storage-opt dm.basesize=20G
reload systemd so it picks up the edited unit file, then start the Docker service again:
sudo systemctl daemon-reload
sudo systemctl start docker
all done.
You have the correct flag, --storage-opt dm.basesize=some_size; however, this is an argument that should be given to dockerd, not docker.
Try reformatting your daemon.json file to contain:
"storage-opts": [ "dm.basesize=20G" ]

Are you trying to mount a directory onto a file (or vice-versa)?

I have Docker version 17.06.0-ce. When I try to install NGINX using Docker with the command:
docker run -p 80:80 -p 8080:8080 --name nginx -v $PWD/www:/www -v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf -v $PWD/logs:/wwwlogs -d nginx:latest
It shows:
docker: Error response from daemon: oci runtime error:
container_linux.go:262: starting container process caused
"process_linux.go:339: container init caused \"rootfs_linux.go:57:
mounting \\"/appdata/nginx/conf/nginx.conf\\" to rootfs
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0\\"
at
\\"/var/lib/docker/aufs/mnt/dcea22444e9ffda114593b18fc8b574adfada06947385aedc2ac09f199188fa0/etc/nginx/nginx.conf\\"
caused \\"not a directory\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
If I do not mount the nginx.conf file, everything is okay. So how can I mount the configuration file?
This should no longer happen (since v2.2.0.0); see here.
If you are using Docker for Windows, this error can happen if you have recently changed your password.
How to fix:
First make sure to delete the broken container's volume
docker rm -v <container_name>
Update: The steps below may work without needing to delete volumes first.
Open Docker Settings
Go to the "Shared Drives" tab
Click on the "Reset Credentials..." link on the bottom of the window
Re-Share the drives you want to use with Docker
You should be prompted to enter your username/password
Click "Apply"
Go to the "Reset" tab
Click "Restart Docker"
Re-create your containers/volumes
Credit goes to BaranOrnarli on GitHub for the solution.
TL;DR: Remove the volumes associated with the container.
Find the container name using docker ps -a then remove that container using:
docker rm -v <container_name>
Problem:
The error you are facing might occur if you previously ran the docker run command while the file was not present at the expected location in the host directory.
In that case, the Docker daemon will have created a directory in its place, which later fails to map to the proper file once the correct file is put in the host directory and the docker command is run again.
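A minimal sketch that reproduces this behaviour (file names are hypothetical):
# run before the host file exists: the daemon creates ./app.conf as a directory
docker run --rm -v "$PWD/app.conf":/etc/app.conf alpine true
ls -ld ./app.conf   # shows a directory, not a file
# replace it with a real file and the same mount works again
rmdir ./app.conf
echo 'key=value' > ./app.conf
docker run --rm -v "$PWD/app.conf":/etc/app.conf alpine cat /etc/app.conf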
Solution:
Remove the volumes that are associated with the container. If you are not concerned about other containers' volumes, you can also use:
# WARNING, THIS WILL REMOVE ALL VOLUMES
docker volume rm $(docker volume ls -q)
This happens because Docker recognizes $PWD/conf/nginx.conf as a folder and not as a file. Check whether the $PWD/conf/ directory contains nginx.conf as a directory.
Test with
> cat $PWD/conf/nginx.conf
cat: nginx.conf/: Is a directory
Otherwise, open a Docker issue.
It's working fine for me with the same configuration.
The explanation given by @Ayushya was the reason I hit this somewhat confusing error message, and the necessary housekeeping can be done easily like this:
$ docker container prune
$ docker volume prune
Answer for people using Docker Toolbox
There have been at least 3 answers here touching on the problem, but not explaining it properly and not giving a full solution. This is just a folder mounting problem.
Description of the problem:
Docker Toolbox bypasses the Hyper-V requirement of Docker by creating a virtual machine (in VirtualBox, which comes bundled). Docker is installed and run inside the VM. In order for Docker to function properly, it needs to have access to the project files from the host machine, which here it doesn't.
After I installed Docker Toolbox, it created the VirtualBox VM and only mounted C:\Users to the machine, as /c/Users. My project was in C:\projects, so nowhere on the mounted volume. When I was sending the path to the VM, it would not exist, as C:\projects isn't mounted. Hence the error above.
Let's say I had my project containing my nginx config in C:/projects/project_name/
Fixing it:
Go to VirtualBox, right click on Default (the VM from Docker) > Settings > Shared Folders
Click the small icon with the plus on the right side to add a new share. I mapped Folder Path C:\projects to Folder Name projects.
The above will map C:\projects to /projects (ROOT/projects) in the VM, meaning that now you can reference any path in projects like this: /projects/project_name - because project_name from C:\projects\project_name is now mounted.
To use relative paths, please consider naming the path c/projects not projects
Restart everything and it should now work properly. I manually stopped the virtual machine in VirtualBox and restarted the Docker Toolbox CLI.
In my compose file, I now reference nginx.conf like this:
volumes:
- /projects/project_name/docker_config/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
Where nginx.conf actually resides in C:\projects\project_name\docker_config\nginx\nginx.conf
I had the same problem. I was using Docker Desktop with WSL in Windows 10 17.09.
Cause of the problem:
The problem is that Docker for Windows expects you to supply your volume paths in a format that matches this:
/c/Users/username/app
BUT, WSL instead uses the format:
/mnt/c/Users/username/app
This is confusing, because when checking the file in the console I saw it, and to me everything looked correct. I wasn't aware of Docker for Windows' expectations about volume paths.
Solution to the problem:
I bound custom mount points to fix the difference between Docker for Windows and WSL:
sudo mount --bind /mnt/c /c
As suggested in this amazing guide: Setting Up Docker for Windows and WSL to Work Flawlessly, and everything is working perfectly now.
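On more recent WSL versions, an alternative that survives restarts is to change the automount root in /etc/wsl.conf inside the WSL distro, so drives are mounted at /c instead of /mnt/c (restart WSL afterwards):
[automount]
root = /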
Before I started using WSL I was using Git Bash and I had this problem as well.
On my Mac I had to uncheck the box "Use gRPC FUSE for file sharing" in Settings -> General
Maybe someone will find this useful. My compose file had the following volume mounted:
./file:/dir/file
As ./file did not exist, it was mounted into ABC as a directory (the default).
In my case, I had a container that resulted from:
docker commit ABC cool_image
When I later created ./file and ran docker-compose up, I got the error:
[...] Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
The container brought up from cool_image remembered that /dir/file was a directory, and it conflicted with the newly created and mounted ./file.
The solution was:
touch ./file
docker run --name ABC -v "$PWD/file":/dir/file abc_image
# ... desired changes to ABC
docker commit ABC cool_image
I am using Docker Toolbox for Windows. By default the C drive is mounted automatically, so in order to mount files, make sure your files and folders are inside the C DRIVE.
Example: C:\Users\%USERNAME%\Desktop
I'll share my case here as this may save a lot of time for someone else in the future.
I had a perfectly working docker-compose on my macOS, until I started using docker-in-docker in GitLab CI. I was only given permission to work as Master in the repository, and the GitLab CI is self-hosted, set up by someone else, with no other information shared about how it's set up.
The following caused the issue:
volumes:
- ./.docker/nginx/wordpress/wordpress.conf:/etc/nginx/conf.d/default.conf
Only when I noticed that this might be running under Windows (after hours of head-scratching) did I try renaming wordpress.conf to default.conf and just set the directory pathnames:
volumes:
- ./.docker/nginx/wordpress:/etc/nginx/conf.d
This solved the problem!
I had the same issue: docker-compose was creating a directory instead of a file, then crashing mid-way.
What I did:
Run the container without any mapping.
Copy the .conf file to the host location:
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf
Remove the container (docker-compose down).
Put the mapping back.
Re-mount the container.
Docker Compose will find the .conf file and map it, instead of trying to create a directory.
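Put together, the sequence looks roughly like this (container and file names as in the steps above):
docker-compose up -d    # with the volume mapping removed
docker cp containername:/etc/nginx/nginx.conf ./nginx.conf
docker-compose down
# restore the ./nginx.conf:/etc/nginx/nginx.conf mapping, then:
docker-compose up -d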
unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I had a similar error with nginx in a Mac environment.
Docker didn't recognize the default.conf file correctly. Once I changed the relative path to an absolute path, the error was fixed.
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf
In Windows 10, I just got this error without changing anything in my docker-compose.yml file or Docker configuration in general.
In my case, I was using a VPN with a firewall policy that blocks port 445.
After disconnecting from the VPN, the problem disappeared.
So I recommend checking your firewall and not using a proxy or VPN when running Docker Desktop.
Check Docker for windows - Firewall rules for shared drives for more details.
I hope this will help someone else.
Could you please use the absolute/complete path instead of $PWD/conf/nginx.conf? Then it will work.
EX:docker run --name nginx-container5 --rm -v /home/sree/html/nginx.conf:/etc/nginx/nginx.conf -d -p 90:80 nginx
b9ead15988a93bf8593c013b6c27294d38a2a40f4ac75b1c1ee362de4723765b
root@sree-VirtualBox:/home/sree/html# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b9ead15988a9 nginx "nginx -g 'daemon of…" 7 seconds ago Up 6 seconds 0.0.0.0:90->80/tcp nginx-container5
e2b195a691a4 nginx "/bin/bash" 16 minutes ago Up 16 minutes 0.0.0.0:80->80/tcp test-nginx
I experienced the same issue using Docker over WSL1 on Windows 10 with this command line:
echo $PWD
/mnt/d/nginx
docker run --name nginx -d \
-v $PWD/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
I resolved it by changing the path for the file on the host system to a UNIX style absolute path:
docker run --name nginx -d \
-v /d/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
or using a Windows-style absolute path with / instead of \ as path separators:
docker run --name nginx -d \
-v D:/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
nginx
To strip the /mnt prefix, which seems to cause problems, from the path, I use bash parameter expansion:
-v ${PWD/mnt\/}/conf/nginx.conf:/etc/nginx/nginx.conf
Updating VirtualBox to 6.0.10 fixed this issue for Docker Toolbox:
https://github.com/docker/toolbox/issues/844
I was experiencing this kind of error:
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ touch resolv.conf
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv.conf ubuntu /bin/bash
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/c/Users/mlepisto/G/Projects/resolv.conf\\\" to rootfs \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged\\\" at \\\"/mnt/sda1/var/lib/docker/overlay2/61eabcfe9ed7e4a87f40bcf93c2a7d320a5f96bf241b2cf694a064b46c11db3f/merged/etc/resolv.conf\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
# mounting to some other file name inside the container did work just fine
mlepisto@DESKTOP-VKJ76GO MINGW64 ~/G/Projects/
$ docker run --rm -it -v $PWD/resolv.conf:/etc/resolv2.conf ubuntu /bin/bash
root@a5020b4d6cc2:/# exit
exit
After updating VirtualBox, all commands worked just fine 🎉
Had the same head-scratcher because I did not have the file locally, so Docker created it as a folder.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile
mimas@Anttis-MBP:~/random/dockerize/tube$ docker run --rm -v $(pwd)/logs.txt:/usr/app/logs.txt devopsdockeruh/first_volume_exercise
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"rootfs_linux.go:58: mounting \\\"/Users/mimas/random/dockerize/tube/logs.txt\\\" to rootfs \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged\\\" at \\\"/var/lib/docker/overlay2/75891ea3688c58afb8f0fddcc977c78d0ac72334e4c88c80d7cdaa50624e688e/merged/usr/app/logs.txt\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
mimas@Anttis-MBP:~/random/dockerize/tube$ ls
Dockerfile logs.txt/
For me, this did not work:
volumes:
- ./:/var/www/html
- ./nginx.conf:/etc/nginx/conf.d/site.conf
But this works fine (obviously I moved my config file into a new directory too):
volumes:
- ./:/var/www/html
- ./nginx/nginx.conf:/etc/nginx/conf.d/site.conf
I had this problem under Windows 7 because my Dockerfile was on a different drive.
Here's what I did to fix the problem:
Open VirtualBox Manager
Select the "default" container and edit the settings.
Select Shared Folders and click the icon to add a new shared folder
Folder Path: x:\
Folder Name: /x
Check Auto-mount and Make Permanent
Restart the virtual machine
At this point, docker-compose up should work.
I got the same error on Windows 10 after an update of Docker: 2.3.0.2 (45183).
... caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
I was using absolute paths like this //C/workspace/nginx/nginx.conf and everything worked like a charm.
The update broke my docker-compose, and I had to change the paths to /C/workspace/nginx/nginx.conf with a single / for the root.
Note that this situation will also occur if you try to mount a volume from the host which has not been added to the Resources > File Sharing section of Docker Preferences.
Adding the root path as a file sharing resource will permit Docker to access the resource and mount it in the container. Note that you may need to erase the contents of your Docker container to re-mount the volume.
For example, if your application is located at /mysites/myapp, you will want to add /mysites as the file sharing resource location.
In my case, it was a problem with Docker for Windows and a partition encrypted by BitLocker. If your project files are on an encrypted partition, then after a restart and drive unlock, Docker doesn't see the project files properly.
All you need to do is restart Docker.
CleanWebpackPlugin can be the problem. In my case, in my Dockerfile I copy a file like this:
COPY --chown=node:node dist/app.js /usr/app/app.js
and then during development I mount that file via docker-compose:
volumes:
- ./dist/app.js:/usr/app/app.js
I would intermittently get the Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type. error or some version of it.
The problem was that CleanWebpackPlugin was deleting the file before webpack re-built it. If Docker tried to mount the file while it was deleted, Docker would fail, hence the intermittence.
Either remove CleanWebpackPlugin completely or configure its options to play nicer.
I had this happen when the JSON file on the host had the executable permission set. I don't know the reason behind this.
For me, it was enough to just do this:
docker compose down
docker compose up -d
I have solved the mount problem. I am using a Windows 7 environment, and the same problem happened to me:
Are you trying to mount a directory onto a file?
The container has a default sync directory at C:\Users\, so I moved my project to C:\Users\ and recreated it. Now it works.

Docker - ERROR: failed to register layer: symlink

I'm running a docker-compose file we have. I usually run it with the command:
docker-compose up
But today I'm getting this error.
ERROR: failed to register layer: symlink ../bdf441e8145a625c4ab289f13ac2274b37d35475b97680f50b7eccda4328f973/diff /var/lib/docker/overlay2/l/7O5XKRTJV6RMTXBV5DTPDOHYNX: no such file or directory
To solve this issue, just stop and start the Docker service from the terminal:
# service docker stop
# service docker start
For me, this issue came up when I tried to clear the /var/lib/docker/overlay2 folder by deleting all its contents (not a good thing to do). After that, I was not able to build any of my images back.
I solved it by running this:
docker system prune --volumes -a
Warning: this removes all volumes and their contents, which may result in data loss. That was fine for me, since I had already deleted everything.
Following this answer, just restarting Docker fixed the problem:
https://stackoverflow.com/a/35325477/4031815
Restart Docker, or if that doesn't work, do Docker > Reset > Remove all data.
I got the same error, and the latter is the only thing that ended up working for me.
What worked for me and did not involve losing all my data:
Make sure all docker containers are down: docker compose down
Remove problematic overlay2: sudo rm -R /var/lib/docker/overlay2
Remove images: sudo rm -R /var/lib/docker/image
Clear any other cached data: docker system prune -f
Restart docker service: systemctl restart docker
Close and reopen VS Code
Then, docker compose build finally worked fine for me
