LXD loses snapshots after creating containers from them when using zfs - lxc

I am having issues when creating containers from snapshots while using LXD with ZFS as the backing storage. I can create the container, but when I delete it, the snapshot gets deleted at the ZFS level (not at the LXD level).
I am using LXD on Ubuntu 16.04, and the odd thing is that this does not happen on every box: some work fine, others never did, and others worked fine for a while and then broke.
I now wonder whether creating containers from snapshots is a legitimate operation, or whether its semantics are unspecified.
This can be seen in the following example:
$ lxc launch ubuntu:14.04 mycontainer # Happens with other images
$ lxc stop mycontainer # stop is optional
$ lxc snapshot mycontainer mysnap0
$ lxc copy mycontainer/mysnap0 mycontainer1
$ lxc delete mycontainer1
$ lxc copy mycontainer/mysnap0 mycontainer2
error: rsync failed: rsync: change_dir "/var/lib/lxd/snapshots/mycontainer/mysnap0" failed: No such file or directory (2)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1]
But LXD still thinks the snapshot is fine:
$ lxc info mycontainer
Name: mycontainer
Architecture: x86_64
Created: 2016/06/16 10:53 UTC
Status: Stopped
Type: persistent
Profiles: default
Snapshots:
mysnap0 (taken at 2016/06/16 10:54 UTC) (stateless)
Another example that makes me think it is LXD that asks ZFS to delete the snapshot:
$ lxc snapshot mycontainer mysnap1
$ lxc copy mycontainer/mysnap1 mycontainer2
$ lxc copy mycontainer/mysnap1 mycontainer3
$ lxc delete mycontainer2
error: Failed to destroy ZFS filesystem: cannot destroy 'debtool/containers/mycontainer@snapshot-mysnap1': snapshot has dependent clones
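One way to cross-check what ZFS itself still holds (using the pool and dataset names from the error above) would be:
$ sudo zfs list -t snapshot -r debtool/containers/mycontainer   # snapshots ZFS still has for the source container
$ sudo zfs get origin debtool/containers/mycontainer3           # which snapshot the copied container was cloned from
If mysnap1 no longer shows up in the first listing while lxc info still reports it, the snapshot really was destroyed underneath LXD.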
And some info on the versions I use:
zfs-zed 0.6.5.6-0ubuntu8
zfsutils-linux 0.6.5.6-0ubuntu8
lxd 2.0.2-0ubuntu1~16.04.1
lxd-client 2.0.2-0ubuntu1~16.04.1
lxd-tools 2.0.2-0ubuntu1~16.04.1
liblxc1 2.0.0-0ubuntu2
lxc 2.0.0-0ubuntu2
lxc-common 2.0.0-0ubuntu2
lxc-templates 2.0.0-0ubuntu2
lxc1 2.0.0-0ubuntu2
lxcfs 2.0.0-0ubuntu2.1
Any ideas on how I could fix/troubleshoot the issue?

Related

minikube does not start on ubuntu 20.04 LTS. Exiting due to GUEST_PROVISION

I am trying to set up minikube in a VM with Ubuntu Desktop 20.04 LTS installed, using the docker driver.
I've followed the steps here, and also taken into consideration the limitations of the docker driver (reported here) that have to do with runtime security options. When I try to start minikube, the error I get is: Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key.
This is what I did to get minikube installed on my brand new VM:
Install docker
Add my user to the docker group, otherwise minikube start would fail because dockerd runs as root (aka Rootless mode in docker terminology).
Install kubectl (not strictly necessary, but I opted for it instead of the kubectl embedded in minikube)
Install minikube
When I start minikube, this is what I get:
ubuntuDesktop:~$ minikube start
😄 minikube v1.16.0 on Ubuntu 20.04
✨ Using the docker driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating docker container (CPUs=2, Memory=4500MB) ...
✋ Stopping node "minikube" ...
🛑 Powering off "minikube" via SSH ...
🔥 Deleting "minikube" in docker ...
🤦 StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset051825440 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset051825440: no such file or directory
: exit status 1
🔥 Creating docker container (CPUs=2, Memory=4500MB) ...
😿 Failed to start docker container. Running "minikube delete" may fix it: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset544814591 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset544814591: no such file or directory
: exit status 1
❌ Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset544814591 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset544814591: no such file or directory
: exit status 1
😿 If the above advice does not help, please let us know:
👉 https://github.com/kubernetes/minikube/issues/new/choose
I suspect that the error has to do with the security-settings issues of the docker driver, but this feels like a dog chasing its tail: if I don't use rootless mode in docker and I attempt to start minikube with sudo (so that docker can also start up with a privileged user), then I get this:
ubuntuDesktop:~$ sudo minikube start
[sudo] password for alberto:
😄 minikube v1.16.0 on Ubuntu 20.04
✨ Automatically selected the docker driver. Other choices: virtualbox, none
🛑 The "docker" driver should not be used with root privileges.
💡 If you are running minikube within a VM, consider using --driver=none:
📘 https://minikube.sigs.k8s.io/docs/reference/drivers/none/
❌ Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
So either I am missing something or minikube doesn't work at all with the docker driver, which I doubt.
Here is my environment info:
ubuntuDesktop:~$ docker version
Client:
Version: 19.03.11
API version: 1.40
Go version: go1.13.12
Git commit: dd360c7
Built: Mon Jun 8 20:23:26 2020
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 19.03.11
API version: 1.40 (minimum version 1.12)
Go version: go1.13.12
Git commit: 77e06fd
Built: Mon Jun 8 20:24:59 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.13
GitCommit: 7ad184331fa3e55e52b890ea95e65ba581ae3429
runc:
Version: 1.0.0-rc10
GitCommit:
docker-init:
Version: 0.18.0
GitCommit: fec3683
ubuntuDesktop:~$ minikube version
minikube version: v1.16.0
commit: 9f1e482427589ff8451c4723b6ba53bb9742fbb1-dirty
If someone has minikube working on Ubuntu 20.04 and could share versions and driver, I would appreciate it. With the info on the minikube and docker sites, I don't know what else to check to make this work.
Solution:
As I mentioned in my comment you may just need to run:
docker system prune
then:
minikube delete
and finally:
minikube start --driver=docker
This should help.
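If it comes up cleanly, a quick sanity check (assuming kubectl is installed, as in your setup) is:
minikube status
kubectl get nodes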
Explanation:
Although, as I already mentioned in my comment, it's difficult to say what the issue was in your specific case, such a situation may happen as a consequence of a previous unsuccessful attempt to run your Minikube instance.
It sometimes also happens when a different driver is used and the instance runs as a VM; basically, deleting that VM may help. Usually running minikube delete && minikube start is enough.
In this case, when --driver=docker is used, your Minikube instance is configured as a container in your docker runtime, but apart from the container itself other things like networking and storage are configured as well.
The docker system prune command removes all unused containers, networks, images (both dangling and unreferenced) and, optionally, volumes. So what we can say for sure is that it was one of the above.
Judging by the exact error message:
❌ Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: prepare kic ssh: copying pub key: docker copy /tmp/tmpf-memory-asset544814591 into minikube:/home/docker/.ssh/authorized_keys, output: lstat /tmp/tmpf-memory-asset544814591: no such file or directory
: exit status 1
I guess what helped in your case was simply clearing some cached data and removing broken references to non-existing files. The above message explains quite clearly what couldn't be done: docker couldn't copy the public ssh key to the destination minikube:/home/docker/.ssh/authorized_keys because the source file /tmp/tmpf-memory-asset544814591 it attempted to copy it from simply didn't exist. So it's actually very simple to say what happened, but to be able to tell why it happened might require diving a bit deeper into both Docker and Minikube internals and analyzing step by step how a Minikube instance is provisioned when using --driver=docker.
It's a good point that you may try to analyze your docker logs, but I seriously doubt you will find there the exact reason why the non-existing temporary file /tmp/tmpf-memory-asset544814591 was referenced or why it didn't exist.
minikube start --force --driver=docker fixed it for me
The issue is that the docker driver should not be used with root privileges, while by default the docker daemon always runs as the root user. To use docker as a non-root user, create a Unix group called docker and add users to it. When the Docker daemon starts, it creates a Unix socket accessible by members of the docker group.
Run the following commands to fix the issue.
Create the docker group if it does not exist:
sudo groupadd docker
Add your user to the docker group:
sudo usermod -aG docker [user]
Activate the group change:
newgrp docker
Start the minikube cluster:
minikube start
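Optionally, before starting minikube you can confirm that the group change took effect and that docker works without sudo:
id -nG   # should include "docker"
docker run hello-world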
This worked for me
minikube start --driver=docker --container-runtime=containerd
if you use Linux Desktop OS with docker and minikube already installed, just run
sudo usermod -aG docker $USER
and restart your computer.
It worked for me.
I was running into the same issue when I attempted to install Minikube on an Ubuntu 20.04 system.
The "docker system prune" didn't help in my case, but I figured out the cause for my issue was that /var was mounted with the nosuid option and I had to remove that and remount /var. The minikube cluster initialization then worked.
I might be too ignorant but I didn't find that info stated as a requirement.
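For anyone checking the same thing, a rough sketch of how to inspect and fix the mount options (adjust the device and /etc/fstab entry for your own system):
findmnt -no OPTIONS /var          # look for "nosuid" in the output
sudo mount -o remount,suid /var   # remount without nosuid for the current boot
# for a permanent fix, remove "nosuid" from the /var entry in /etc/fstab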
Restarting my mac helped me.
I was getting the error below earlier:
❌ Exiting due to DRV DOCKER NOT RUNNING: Found docker, but the docker service isn't running. Try restarting the docker service.
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo systemctl enable docker
systemctl status docker
sudo systemctl start docker   # (or stop, as needed)
sudo groupadd docker
sudo usermod -aG docker user_name   # add the user to the docker group
newgrp docker                       # activate the group
minikube start                      # or: minikube start --driver=docker
On my Raspberry Pi this problem was resolved with:
sudo usermod -aG docker $USER && newgrp docker
Try the following:
minikube delete
Then try to delete all docker images with names like k8s... and minikube:
docker rmi <image id> <image id2> <image id3>
Finally:
minikube start
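To find the image IDs to pass to docker rmi, one simple approach (just a sketch) is to filter the image list by name:
docker images | grep -E 'k8s|minikube'   # note the IMAGE ID column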
On my end, just a docker system prune did the job (Ubuntu).
I had a few configurations on my minikube profile that I did not want to lose, and it recreated the container accordingly and booted fine.
So it is something to try before deleting the minikube profile.
It's worth checking whether it's running in Docker Desktop on a Mac. If it is, run the kubectl command. If that returns the command help screen, you're good to go.
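For example (assuming kubectl is on your PATH), running kubectl with no arguments should print its command list, and a quick cluster check is:
kubectl
kubectl get nodes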

Run docker inside ubuntu container

For 2 days I have been trying to run docker inside an ubuntu container:
docker run -it ubuntu bash
Install docker following the instructions at https://docs.docker.com/engine/install/ubuntu/ and/or https://phoenixnap.com/kb/how-to-install-docker-on-ubuntu-18-04
Finally I got docker installed:
root@e65411d2b70a:/# docker -v
Docker version 19.03.6, build 369ce74a3c
But when I try to run docker run hello-world I have a problem:
root@5ac21097b6f6:/# docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
Docker is not in the service list:
root@5ac21097b6f6:/# service docker start
docker: unrecognized service
root@5ac21097b6f6:/# service --status-all
[ - ] apparmor
[ + ] cgroupfs-mount
[ - ] dbus
[ ? ] hwclock.sh
[ - ] procps
[ ? ] ubuntu-fan
When I try to run dockerd:
root@5ac21097b6f6:/# dockerd
INFO[2020-04-23T07:01:11.622627006Z] Starting up
INFO[2020-04-23T07:01:11.624389266Z] libcontainerd: started new containerd process pid=154
INFO[2020-04-23T07:01:11.624460438Z] parsed scheme: "unix" module=grpc
INFO[2020-04-23T07:01:11.624477203Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-04-23T07:01:11.624532871Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 <nil>}] <nil>} module=grpc
INFO[2020-04-23T07:01:11.624560679Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2020-04-23T07:01:11.664827037Z] starting containerd revision= version="1.3.3-0ubuntu1~18.04.2"
ERRO[2020-04-23T07:01:11.664943052Z] failed to change OOM score to -500 error="write /proc/154/oom_score_adj: permission denied"
...
INFO[2020-04-23T07:01:11.816951247Z] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.6.1: can't initialize iptables table `nat': Permission denied (you must be root)
Perhaps iptables or your kernel needs to be upgraded.
(exit status 3)
I don't understand why I get Permission denied if the user is root.
I installed sudo and added root to the group, but it didn't help.
apt-get install sudo
usermod -a -G sudo root
sudo dockerd has the same problem.
How can I make docker work inside an ubuntu container? Do you have any ideas?
PS: I know about docker-in-docker; I need specifically docker inside an ubuntu container.
PPS: I know about -v /var/run/docker.sock:/var/run/docker.sock, but I need an independent docker service inside the ubuntu container.
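PPPS: From what I understand (my assumption, not something confirmed above), the iptables and oom_score_adj permission errors usually mean the outer container is unprivileged; running a standalone dockerd inside a container generally needs the outer container to be started with extra privileges, something like:
docker run --privileged -it ubuntu bash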
When running docker in docker, the container must use the docker engine on your host.
Here is a simple working setup:
1) Create a Dockerfile with the docker CLI installed. I am using the official compose image, so you also get docker-compose:
FROM docker/compose:1.25.5
WORKDIR /app
ENTRYPOINT ["/bin/sh"]
2) When running it, mount the docker socket:
$ docker build -t dind .
$ docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock dind
From within the container, you now have docker. Try running docker ps.
If you want to do docker in docker without -v /var/run/docker.sock:/var/run/docker.sock then I am afraid that there is no good way to do this.
Sharing the docker socket from the host is the classic way to make docker containers run within another docker container.
I was trying my best to run containers within containers, just like you, for the past few days and wasted many hours. So far most people advise me to do things like using docker's DinD image, which is not applicable in my case as I need the main container to be Ubuntu OS, or to run some privileged command and map the daemon socket into the container, like -v /var/run/docker.sock:/var/run/docker.sock
(which never worked for me, or for any Ubuntu OS I tried; the reason being that the main container, which is based on Ubuntu OS, does not come with systemd, which is important for running docker containers conveniently, like on a usual local machine).
The solution I found was to use Nestybox on my Ubuntu 20.04 system, and it works best. It's also extremely simple to set up, provided your local system is Ubuntu (which they support best), as the container runtime is specifically designed for this kind of application. It also has the most flexible options.
The free edition of Nestybox is perhaps the best method as of Nov 2022. I highly recommend you try it without bothering with all the tedious setup other people suggest. They have many pre-built solutions that address such specific needs with a simple command line.
Nestybox provides a special runtime environment for newly created docker containers; they also provide some Ubuntu/common OS images with docker and systemd built in.
Their goal is to make the main container function securely, exactly like a virtual machine. You can literally ssh into your Ubuntu main container without being able to access anything on the main machine. From your main container you may create all kinds of containers, just like a normal local system does. That systemd is very important for setting up docker conveniently inside the container.
One simple, common command to run a container with sysbox:
docker run --runtime=sysbox-runc -it any_image
If you think that's what you are looking for, you can find out more on their GitHub:
https://github.com/nestybox/sysbox
Quick link to the instructions on how to deploy a simple sysbox runtime environment container:
https://github.com/nestybox/sysbox/blob/master/docs/quickstart/README.md

selinux and user namespaces can't co-exist in docker?

I have the contents below in /etc/sysconfig/docker:
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --userns-remap=default --log-opt=max-size=10M --log-opt=max-file=30'
DOCKER_CERT_PATH=/etc/docker
# Enable insecure registry communication by appending the registry URL
# to the INSECURE_REGISTRY variable below and uncommenting it
# INSECURE_REGISTRY='--insecure-registry '
# On SELinux System, if you remove the --selinux-enabled option, you
# also need to turn on the docker_transition_unconfined boolean.
# setsebool -P docker_transition_unconfined
# Location used for temporary files, such as those created by
# docker load and build operations. Default is /var/lib/docker/tmp
# Can be overriden by setting the following environment variable.
# DOCKER_TMPDIR=/var/tmp
# Controls the /etc/cron.daily/docker-logrotate cron job status.
# To disable, uncomment the line below.
# LOGROTATE=false
# Allow creation of core dumps
GOTRACEBACK=crash
But I can't run any containers with this configuration:
[root@server ~]# docker run -ti hello-world
docker: Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"mqueue\\\" to rootfs \\\"/var/lib/docker/231072.231072/overlay2/ac28bae7fd341860112089d08b04e54aeeb8b85304be9455c8705ff6d883c4ac/merged\\\" at \\\"/dev/mqueue\\\" caused \\\"operation not permitted\\\"\"".
But when I remove --selinux-enabled from /etc/sysconfig/docker, it works just fine:
[root@server ~]# docker run -ti hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
5b0f327be733: Pull complete
Digest: sha256:07d5f7800dfe37b8c2196c7b1c524c33808ce2e0f74e7aa00e603295ca9a0972
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://cloud.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/engine/userguide/
Can't these options co-exist?
Docker version:
[root@server ~]# docker -v
Docker version 17.03.1-ce, build 276fd32
SELinux versions:
[root@server ~]# rpm -qa | grep selinux
libselinux-python-2.5-11.el7.x86_64
libselinux-2.5-11.el7.i686
selinux-policy-3.13.1-166.0.2.el7_4.4.noarch
libselinux-2.5-11.el7.x86_64
selinux-policy-targeted-3.13.1-166.0.2.el7_4.4.noarch
libselinux-utils-2.5-11.el7.x86_64
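A diagnostic sketch (assuming the audit tools are available on the host) to check whether SELinux is actually denying the mqueue mount right after a failed run:
[root@server ~]# ausearch -m avc -ts recent   # recent SELinux AVC denials
[root@server ~]# dmesg | grep -i avc          # fallback if auditd is not running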

Docker data volume container. I can't seem to get it to back up

Reading these links:
https://docs.docker.com/userguide/dockervolumes/#backup-restore-or-migrate-data-volumes
Backing up data volume containers off machine
My understanding is that I can take a data volume container and archive a backup of it.
However, following the first link, I can't seem to get it to work.
docker create -v /sonatype-work --name sonatype-work sonatype/nexus /bin/true
I launch the sonatype/nexus image in a container using:
--volumes-from sonatype-nexus
All good: after running nexus, I inspect the data volume and can see the contents created; I can stop and remove nexus and start it again, with all changes saved.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f84abb054d2e sonatype/nexus "/bin/sh -c 'java -" 22 seconds ago Up 21 seconds 0.0.0.0:8081->8081/tcp nexus
1aea2674e482 sonatype/nexus "/bin/true" 25 seconds ago Created sonatype-work
I now want to back up sonatype-work, but have had no luck.
[root@ansible22 ~]# pwd
/root
[root@ansible22 ~]# docker run --volumes-from sonatype-work -v $(pwd):/backup ubuntu tar cvf /backup/sonatype-work-backup.tar /sonatype-work
tar: /backup/sonatype-work-backup.tar: Cannot open: Permission denied
tar: Error is not recoverable: exiting now
I have tried running as -u root, and I also tried with:
/root/sonatype-work-backup.tar
When doing so, I can see it tarring stuff, but I don't see the tar file. Based on the example and my understanding, I don't think that's right anyway.
Can anyone see what I'm doing wrong?
EDIT: Linux Version Info
Fedora release 22 (Twenty Two)
NAME=Fedora
VERSION="22 (Twenty Two)"
ID=fedora
VERSION_ID=22
PRETTY_NAME="Fedora 22 (Twenty Two)"
ANSI_COLOR="0;34"
CPE_NAME="cpe:/o:fedoraproject:fedora:22"
HOME_URL="https://fedoraproject.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Fedora"
REDHAT_BUGZILLA_PRODUCT_VERSION=22
REDHAT_SUPPORT_PRODUCT="Fedora"
REDHAT_SUPPORT_PRODUCT_VERSION=22
PRIVACY_POLICY_URL=https://fedoraproject.org/wiki/Legal:PrivacyPolicy
VARIANT="Server Edition"
VARIANT_ID=server
Fedora release 22 (Twenty Two)
Fedora release 22 (Twenty Two)
The reason for this is related to selinux labelling. There are a couple of good Project Atomic pages on this:
Docker and SELinux
The default type for a confined container process is svirt_lxc_net_t. This type is permitted to read and execute all files types under /usr and most types under /etc. svirt_lxc_net_t is permitted to use the network but is not permitted to read content under /var, /home, /root, /mnt … svirt_lxc_net_t is permitted to write only to files labeled svirt_sandbox_file_t and docker_var_lib_t. All files in a container are labeled by default as svirt_sandbox_file_t.
Then in Using Volumes with Docker can Cause Problems with SELinux:
This will label the content inside the container with the exact MCS label that the container will run with, basically it runs chcon -Rt svirt_sandbox_file_t -l s0:c1,c2 /var/db where s0:c1,c2 differs for each container.
(In this case not /var/db but /root)
If you volume mount an image with -v /SOURCE:/DESTINATION:z, docker will automatically relabel the content for you to s0. If you volume mount with Z, then the label will be specific to the container and cannot be shared between containers.
So either z or Z is suitable in this case, but one would usually prefer Z for the isolation.
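Applied to the command from the question, that would look like this (the :z suffix on the backup mount is the only change; a sketch, not verified here):
docker run --volumes-from sonatype-work -v $(pwd):/backup:z ubuntu tar cvf /backup/sonatype-work-backup.tar /sonatype-work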
The reason I'm getting permission denied is SELinux. I am not sure why yet, but will edit this answer when/if I find out. After disabling SELinux and restarting, I was able to take a backup.

Error: "error creating aufs mount to" when building dockerfile

I get this error when I try to build a Dockerfile:
error creating aufs mount to /var/lib/docker/aufs/mnt/6c1b42ce1a98b1c0f2d2a7f17c196221445f1054566065d4c607e4f1b99930eb-init: invalid argument
What does it mean? How do I fix it?
I had some unresolved errors after removing /var/lib/docker/aufs, which a couple of extra steps cleared up.
To add to @BenWalther's answer, since I lack the reputation to comment:
# Cleaning up through docker avoids these errors
# ERROR: Service 'master' failed to build:
# open /var/lib/docker/aufs/layers/<container_id>: no such file or directory
# ERROR: Service 'master' failed to build: failed to register layer:
# open /var/lib/docker/aufs/layers/<container_id>: no such file or directory
docker rm -f $(docker ps -a -q)
docker rmi -f $(docker images -a -q)
# As per @BenWalther's answer above
sudo service docker stop
sudo rm -rf /var/lib/docker/aufs
# Removing the linkgraph.db fixed this error:
# Conflict. The name "/jenkins_data_1" is already in use by container <container_id>.
# You have to remove (or rename) that container to be able to reuse that name.
sudo rm -f /var/lib/docker/linkgraph.db
sudo service docker start
If you try to use docker inside a persistence-enabled Live CD, you may encounter this error. I guess it is because you can't create aufs mounts on top of the overlayfs mounts that the persistence layer uses.
The solution was simply to use a different driver. I used vfs in /etc/docker/daemon.json.
Here it is:
{
"storage-driver": "vfs"
}
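After changing /etc/docker/daemon.json you typically need to restart the daemon, and you can then confirm which driver is active, for example:
sudo systemctl restart docker
docker info | grep -i 'storage driver'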
I removed /var/lib/docker/aufs/diff and got the same problem:
error creating aufs mount to /var/lib/docker/aufs/mnt/blah-blah-init: invalid argument
It was solved by running the following commands:
docker stop $(docker ps -a -q);
docker rm $(docker ps -a -q);
docker rmi -f $(docker images -a -q)
AUFS is unable to mount the docker container filesystem. This is either because the path is already mounted, or because there's a race condition in docker's interaction with AUFS due to the large number of existing volumes.
To solve this, try the following:
restart the docker service or daemon and try again.
check mount for aufs mounted on any paths under /var/lib/docker/aufs/. If found, stop docker, then umount them (requires sudo).
example:
mount
none on /var/lib/docker/aufs/mnt/55639da9aa959e88765899ac9dc200ccdf363b2f09ea933370cf4f96051b22b9 type aufs (rw,relatime,si=5abf628bd5735419,dio,dirperm1)
then
sudo umount /var/lib/docker/aufs/mnt/55639da9aa959e88765899ac9dc200ccdf363b2f09ea933370cf4f96051b22b9
If that doesn't work, stop docker, then sudo rm -rf /var/lib/docker/aufs. You will lose any existing stopped containers and all images. But this is just about guaranteed to solve the problem.
Unfortunately, on my system I could not resolve this with the above answers. Docker kept remembering a certain file in the aufs layer that it couldn't reach anymore. Other solutions didn't work either. So if this is an option for you, you could try the following fix: uninstall/purge docker and docker-engine:
apt-get purge docker docker-engine
Then make sure everything from /var/lib/docker is removed.
rm -rf /var/lib/docker
After that install docker again.
I'm using Raspbian on a Raspberry Pi 4.
The best way to do it:
Check your storage driver with:
sudo docker info and look at "Storage Driver"
sudo systemctl stop docker
sudo nano /etc/docker/daemon.json
Write the code below and save it:
{
"storage-driver": "vfs"
}
sudo systemctl start docker
Although vfs has performance issues and may not be the best choice. :)
I just had a similar issue on Lubuntu (Ubuntu kernel 4.15.0-20-generic) with Docker CE 18.03. None of the described options helped.
It appears that the latest docker versions use the overlay2 storage driver, but some applications require aufs. A possible fix might be to follow this docker guide to change the storage driver to aufs (simply replace "overlay2" with "aufs").
I am running a container inside another container (with docker also installed in that container), and it was trying to create aufs storage on top of an overlayfs mount, which is not possible. So I changed the host overlayfs storage to aufs, which solved my issue. To check the storage driver, use the command below:
docker info
The solution was simply to use a different driver. I used aufs in /etc/docker/daemon.json.
Here it is:
{
"storage-driver": "aufs"
}
For a detailed explanation, read the documentation below:
Docker storage documentation
A similar issue arose while I was using Docker in Windows:
ERROR: Service 'daemon' failed to build: error creating overlay mount to /var/lib/docker/overlay2/83c98f716020954420e8b89e6074b1af61b2b86cd51ac6a54724ed263b3663a2-init/merged: no such file or directory
The problem occurred after having removed a volume from the image's Dockerfile, rebuilding the image and then rebooting the PC. Maybe this is a common cause?
I managed to solve the problem by clicking Docker -> Settings -> Reset -> Reset to factory defaults...
All my images were subsequently lost but that didn't matter for me. I also figured that removing the VM disk image (the path to which can be found under the Advanced tab in Settings) could solve the issue. I haven't tried this approach however.
On Windows, the docker-machine problem was solved for me after a restart.
Use these commands:
docker-machine stop
docker-machine start
docker-compose up
I am also putting this answer here, as a Google search led me here because @whitebrow's answer contains the term I searched for:
ERROR: Service '***' failed to build: error creating overlay mount to /var/lib/docker/overlay2/***/merged: no such file or directory
In my case, the working workaround, surprisingly, was to restrict the number of RUN docker build commands/layers. Whenever the number surpassed about 60 layers/commands, the build always ended with that missing 'merged' folder error, no matter what the content of the command was; even a simple command such as RUN ls -la ended with that error once the total number of commands went above roughly 60. Strange. The merged subfolder was always missing; even when I generated all the merged subfolders automatically, a new layer with a new hash was created on the fly which was again missing that subfolder.
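As an illustration of that workaround (the packages here are just placeholders), several RUN steps can be collapsed into a single layer by chaining commands:
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl git && \
    rm -rf /var/lib/apt/lists/*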
I faced the same issue. I resolved it by adding the storage driver to /etc/docker/daemon.json.
You can refer to this link as well to see other driver options:
https://docs.docker.com/storage/storagedriver/select-storage-driver/
