How to fix docker storage-driver=overlay2 problem

I need to change the underlying storage for a Proxmox LXC Debian Buster container from RAW to ZFS. To do this I restored a snapshot to ZFS storage. This is normally transparent to the OS in the container, but in this case docker no longer starts.
The initial symptom was that docker wasn't running, and after some digging around I found this:
# dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
INFO[2021-08-03T09:24:40.909844803Z] Starting up
...
ERRO[2021-08-03T09:24:56.914420548Z] failed to mount overlay: invalid argument storage-driver=overlay2
ERRO[2021-08-03T09:24:56.914439880Z] [graphdriver] prior storage driver overlay2 failed: driver not supported
failed to start daemon: error initializing graphdriver: driver not supported
How can I fix this?
EDIT:
I tried the suggested fix, but still no cigar:
root@mail:/var/log# systemctl status docker.service
* docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-10-09 10:05:49 UTC; 1min 23s ago
Docs: https://docs.docker.com
Process: 236 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 236 (code=exited, status=1/FAILURE)
Oct 09 10:05:49 mail systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Oct 09 10:05:49 mail systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Oct 09 10:05:49 mail systemd[1]: Stopped Docker Application Container Engine.
Oct 09 10:05:49 mail systemd[1]: docker.service: Start request repeated too quickly.
Oct 09 10:05:49 mail systemd[1]: docker.service: Failed with result 'exit-code'.
Oct 09 10:05:49 mail systemd[1]: Failed to start Docker Application Container Engine.
The link offered suggests creating a new zpool within the container. Seems a bit of overkill for that to be necessary, no?

Configure Docker to use zfs. Edit /etc/docker/daemon.json and set the storage-driver to zfs. If the file was empty before, it should now look like this:
{
  "storage-driver": "zfs"
}
More details: https://docs.docker.com/storage/storagedriver/zfs-driver/
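Note that switching storage drivers makes images and containers created under the previous driver invisible to Docker, so if the daemon still refuses to start, moving the old data aside may also be needed. A minimal sketch, assuming the default data-root of /var/lib/docker:
systemctl stop docker
mv /var/lib/docker /var/lib/docker.bak   # keeps the old overlay2 data as a backup
systemctl start docker
docker info | grep -i storage            # should now report: Storage Driver: zfs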

Related

Run Docker on Raspberry Pi4 with overlay fs

I want to set up a Raspberry Pi 4 where Docker keeps running while the SD card is read-only, using overlayfs.
A database runs in the Docker container; the database's data is written to a USB stick (volume mapping).
When overlayfs is activated (enabled via "sudo raspi-config", then a reboot), docker no longer starts up.
I followed the steps on https://docs.docker.com/storage/storagedriver/overlayfs-driver/.
System information:
Linux raspberrypi 5.10.63-v8+ #1488 SMP PREEMPT Thu Nov 18 16:16:16 GMT 2021 aarch64 GNU/Linux
Docker information:
pi@raspberrypi:~ $ docker info
Client:
Context: default
Debug Mode: false
Plugins:
app: Docker App (Docker Inc., v0.9.1-beta3)
buildx: Build with BuildKit (Docker Inc., v0.6.3-docker)
Server:
Containers: 1
Running: 1
Paused: 0
Stopped: 0
Images: 1
Server Version: 20.10.11
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
userxattr: false
...
Status of docker after restart:
pi@raspberrypi:~ $ sudo systemctl status docker.*
Warning: The unit file, source configuration file or drop-ins of docker.service changed on disk. Run 'systemctl daemon-reload' to reload units.
● docker.socket - Docker Socket for the API
Loaded: loaded (/lib/systemd/system/docker.socket; enabled; vendor preset: enabled)
Active: failed (Result: service-start-limit-hit) since Thu 2021-12-09 14:30:43 GMT; 1h 13min ago
Triggers: ● docker.service
Listen: /run/docker.sock (Stream)
CPU: 2ms
Dec 09 14:30:36 raspberrypi systemd[1]: Starting Docker Socket for the API.
Dec 09 14:30:36 raspberrypi systemd[1]: Listening on Docker Socket for the API.
Dec 09 14:30:43 raspberrypi systemd[1]: docker.socket: Failed with result 'service-start-limit-hit'
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2021-12-09 14:30:43 GMT; 1h 13min ago
TriggeredBy: ● docker.socket
Docs: https://docs.docker.com
Main PID: 992 (code=exited, status=1/FAILURE)
CPU: 162ms
Dec 09 14:30:43 raspberrypi systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Dec 09 14:30:43 raspberrypi systemd[1]: Stopped Docker Application Container Engine.
Dec 09 14:30:43 raspberrypi systemd[1]: docker.service: Start request repeated too quickly.
Dec 09 14:30:43 raspberrypi systemd[1]: docker.service: Failed with result 'exit-code'.
Dec 09 14:30:43 raspberrypi systemd[1]: Failed to start Docker Application Container Engine.
Running the command given in docker.service with an additional overlay flag:
pi@raspberrypi:~ $ sudo /usr/bin/dockerd --storage-driver=overlay -H fd:// --containerd=/run/containerd/containerd.sock
unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay, from file: overlay2)
pi@raspberrypi:~ $ sudo /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
INFO[2021-12-09T14:34:31.667296985Z] Starting up
failed to load listeners: no sockets found via socket activation: make sure the service was started by systemd
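Apparently -H fd:// only works when systemd passes the socket via socket activation, so for a manual test the daemon would have to be started without that flag, for example:
pi@raspberrypi:~ $ sudo dockerd --debug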
Which steps am I missing to be able to run Docker with overlayfs, so that the SD card in the Raspberry Pi stays read-only?
Without the overlayfs active, it all works as expected.
I ran into this issue as well and found a way around it. In summary, you can't run the default Docker FS driver (overlay2) on overlayfs. Fortunately, Docker supports other storage drivers, including fuse-overlayfs. Switching to this driver resolves the issue but there's one final catch. When Docker starts, it attempts to rename /var/lib/docker/runtimes and since overlayfs doesn't support renames of directories already in lower layers, it fails. If you simply rm -rf this directory while Docker is stopped and before you enable RPi's overlayfs, everything should work.
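A minimal sketch of that switch, assuming Raspberry Pi OS with the fuse-overlayfs package in the repositories, the default Docker paths, and an otherwise empty daemon.json:
sudo systemctl stop docker
sudo apt-get install -y fuse-overlayfs   # userspace overlayfs implementation used by the driver
sudo rm -rf /var/lib/docker/runtimes     # the directory Docker fails to rename on overlayfs
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "storage-driver": "fuse-overlayfs"
}
EOF
Do all of this before enabling the read-only overlay in raspi-config; after the reboot, docker info should report Storage Driver: fuse-overlayfs.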

Does dockerd support WatchdogSec sd_notify health checks?

We've been having issues where the Docker daemon will occasionally stop responding on one of our Kubernetes systems, but Systemd still thinks the service is running:
systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Mon 2019-04-15 20:40:57 UTC; 3 months 22 days ago
Docs: https://docs.docker.com
Main PID: 1281 (dockerd)
Tasks: 1409
Memory: 31.0G
CPU: 5d 17h 3min 4.758s
CGroup: /system.slice/docker.service
├─ 1281 /usr/bin/dockerd -H fd://
...
There isn't anything in the journalctl -u docker or syslog files to indicate what the issue is, but the Docker daemon no longer responds to requests (docker ps just hangs). We are currently using the 17.03.2~ce-0~ubuntu-xenial package for Ubuntu 16.04, which has the following service unit:
cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket firewalld.service
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd://
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
I noticed that even though it is a Type=notify service, there isn't a WatchdogSec= defined in the service unit.
Does the Docker daemon support setting a watchdog timeout for sd_notify based health checks?
No, currently the components/engine/cmd/dockerd/daemon_linux.go file only implements systemdDaemon.SdNotifyReady to notify Systemd when the process has started. For watchdog support it would have to use something like SdWatchdogEnabled to continually send SdNotifyWatchdog = "WATCHDOG=1" notifications.
If you try to set WatchdogSec=60s in the docker.service file, systemd will kill and restart the service because the daemon doesn't send the required notifications.
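One way to try this without editing the packaged unit file is a drop-in (a sketch using standard systemd tooling):
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/watchdog.conf <<'EOF'
[Service]
WatchdogSec=60s
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
With that in place, the service cycles through started, watchdog-killed, and restarting states: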
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2019-08-08 02:09:52 UTC; 50s ago
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: deactivating (stop-sigabrt) (Result: watchdog) since Thu 2019-08-08 02:10:02 UTC; 45ms ago
systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: activating (start) since Thu 2019-08-08 02:10:04 UTC; 777ms ago
# Log entries:
Aug 08 02:09:14 kam1 systemd[1]: Starting Docker Application Container Engine...
Aug 08 02:09:15 kam1 systemd[1]: Started Docker Application Container Engine.
Aug 08 02:10:15 kam1 systemd[1]: docker.service: Watchdog timeout (limit 60s)!
Aug 08 02:10:15 kam1 systemd[1]: docker.service: Killing process 12383 (dockerd) with signal SIGABRT.
Aug 08 02:10:16 kam1 systemd[1]: docker.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Aug 08 02:10:16 kam1 systemd[1]: docker.service: Failed with result 'watchdog'.
Aug 08 02:10:18 kam1 systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Aug 08 02:10:18 kam1 systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Aug 08 02:10:18 kam1 systemd[1]: Stopped Docker Application Container Engine.
Aug 08 02:10:18 kam1 systemd[1]: Starting Docker Application Container Engine...

docker: docker start fails after creating daemon.json

I am trying to set up awslogs for docker.
The docs say to add this to daemon.json:
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "eu-central-1"
  }
}
When I create /etc/docker/daemon.json on Ubuntu with the content above, docker won't start again.
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Fr 2018-07-20 10:59:53 CEST; 11s ago
Docs: https://docs.docker.com
Process: 647 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
Main PID: 647 (code=exited, status=1/FAILURE)
Jul 20 10:59:53 dev01-ubuntu systemd[1]: Failed to start Docker Application Container Engine.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Unit entered failed state.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: Stopped Docker Application Container Engine.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Start request repeated too quickly.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: Failed to start Docker Application Container Engine.
Can anybody explain this behaviour?
I have never used it myself.
But from this page: https://docs.docker.com/config/containers/logging/plugins/, it seems you need to install a plugin for any new log driver; check with docker plugin ls.
Maybe it is only available in the Amazon cloud environment and not on a local PC, just in case you did not notice that.
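In any case, the daemon's startup error should be visible via standard systemd tooling, which should show why dockerd exited:
journalctl -u docker.service -n 50 --no-pager
# or run the daemon in the foreground and read the error directly:
sudo dockerd --debug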

How do I clear a thinpool device for docker

I am running docker on a Red Hat system with devicemapper and a thinpool device, just as recommended for production systems. Now when I want to reinstall docker I need two steps:
1) remove the docker directory (in my case /area51/docker)
2) clear the thinpool device
The docker documentation states that when using devicemapper with the dm.metadatadev and dm.datadev options, the easiest way of cleaning devicemapper would be:
If setting up a new metadata pool it is required to be valid.
This can be achieved by zeroing the first 4k to indicate empty metadata, like this:
$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1
Unfortunately, according to the documentation, dm.metadatadev is deprecated; it says to use dm.thinpooldev instead.
My thinpool has been created along the lines of this docker instruction
So, my setup now looks like this:
cat /etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/thinpool_VG_38401-thinpool",
    "dm.basesize=18G"
  ]
}
Under the devicemapper directory I see the following thinpool devices:
ls -l /dev/mapper/thinpool_VG_38401-thinpool*
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool -> ../dm-8
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tdata -> ../dm-7
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tmeta -> ../dm-6
So, after running docker successfully, I tried to reinstall as described above and clear the thinpool by writing 4K of zeroes into the tmeta device and restarting docker:
dd if=/dev/zero of=/dev/mapper/thinpool_VG_38401-thinpool_tmeta bs=4096 count=1
systemctl start docker
And ended up with:
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-12-06 10:28:46 UTC; 10s ago
Docs: https://docs.docker.com
Process: 1566 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 1566 (code=exited, status=1/FAILURE)
Memory: 236.0K
CGroup: /system.slice/docker.service
Dec 06 10:28:45 yoda3 systemd[1]: Starting Docker Application Container Engine...
Dec 06 10:28:45 yoda3 dockerd[1566]: time="2017-12-06T10:28:45.816049000Z" level=info msg="libcontainerd: new containerd process, pid: 1577"
Dec 06 10:28:46 yoda3 dockerd[1566]: time="2017-12-06T10:28:46.816966000Z" level=warning msg="failed to rename /area51/docker/tmp for background deletion: renam...chronously"
Dec 06 10:28:46 yoda3 dockerd[1566]: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (thinpool_VG_38401-...data blocks
Dec 06 10:28:46 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 06 10:28:46 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
Dec 06 10:28:46 yoda3 systemd[1]: Unit docker.service entered failed state.
Dec 06 10:28:46 yoda3 systemd[1]: docker.service failed.
I assumed I could get around the 'unable to take ownership of thin-pool' error by rebooting. But after the reboot, trying to start docker again gave the following error:
systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-12-06 10:30:37 UTC; 2min 29s ago
Docs: https://docs.docker.com
Process: 3180 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 3180 (code=exited, status=1/FAILURE)
Memory: 37.9M
CGroup: /system.slice/docker.service
Dec 06 10:30:36 yoda3 systemd[1]: Starting Docker Application Container Engine...
Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.893777000Z" level=warning msg="libcontainerd: makeUpgradeProof could not open /var/run/docker/lib...containerd"
Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.901958000Z" level=info msg="libcontainerd: new containerd process, pid: 3224"
Dec 06 10:30:37 yoda3 dockerd[3180]: Error starting daemon: error initializing graphdriver: devicemapper: Non existing device thinpool_VG_38401-thinpool
Dec 06 10:30:37 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 06 10:30:37 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
Dec 06 10:30:37 yoda3 systemd[1]: Unit docker.service entered failed state.
Dec 06 10:30:37 yoda3 systemd[1]: docker.service failed.
So, obviously, writing zeroes into the thinpool _tmeta device is not the right thing to do; it seems to destroy the thinpool device.
Can anyone here tell me the right steps to clear the thin-pool device? Preferably the solution should not require a reboot.
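One route that avoids writing into _tmeta at all, sketched under the assumption that the pool was created with LVM as in Docker's devicemapper guide (volume group thinpool_VG_38401; the size percentages are the guide's defaults, not values verified here), is to remove and recreate the thin pool logical volume itself:
systemctl stop docker
rm -rf /area51/docker
lvremove -y thinpool_VG_38401/thinpool
lvcreate --wipesignatures y -n thinpool thinpool_VG_38401 -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta thinpool_VG_38401 -l 1%VG
lvconvert -y --zero n -c 512K --thinpool thinpool_VG_38401/thinpool --poolmetadata thinpool_VG_38401/thinpoolmeta
systemctl start docker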

docker service does not start after creating daemon.json

The following error message appears when doing the steps below:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2017-08-30 09:21:52 CEST; 13s ago
Docs: https://docs.docker.com
Process: 11581 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
Main PID: 11581 (code=exited, status=1/FAILURE)
CPU: 28ms
Aug 30 09:21:52 debian systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 30 09:21:52 debian systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Aug 30 09:21:52 debian systemd[1]: Stopped Docker Application Container Engine.
Aug 30 09:21:52 debian systemd[1]: docker.service: Start request repeated too quickly.
Aug 30 09:21:52 debian systemd[1]: Failed to start Docker Application Container Engine.
Aug 30 09:21:52 debian systemd[1]: docker.service: Unit entered failed state.
Aug 30 09:21:52 debian systemd[1]: docker.service: Failed with result 'exit-code'.
Aug 30 09:22:00 debian systemd[1]: docker.service: Start request repeated too quickly.
Aug 30 09:22:00 debian systemd[1]: Failed to start Docker Application Container Engine.
Aug 30 09:22:00 debian systemd[1]: docker.service: Failed with result 'exit-code'.
I created a fresh Ubuntu 64bit VM on VirtualBox.
Then I used the install script to install docker: https://get.docker.com/
After the installation completed successfully, I tried to configure the daemon to listen on 10.0.2.15:2375 so I can forward it to my host OS.
I ran nano /etc/docker/daemon.json to create the file.
I pasted the following example into it:
{
  "debug": true,
  "tls": false,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "hosts": ["tcp://10.0.2.15:2375"]
}
Then I ran service docker restart.
Running service docker status shows the message above.
Check the docker version on your machine with:
docker --version
I was facing the same issue, and it was solved after upgrading docker to the latest available version.
Even the documentation on docker's official website does not mention anything like that.
Once you upgrade docker, restart it with:
systemctl restart docker
The error will be gone, and the new changes will take effect.
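If the upgrade alone does not help: with the stock unit file dockerd is started with -H fd://, and when daemon.json also sets "hosts", the daemon refuses to start because the listen address is configured twice. Docker's documentation suggests clearing the flag with a drop-in; a sketch:
mkdir -p /etc/systemd/system/docker.service.d
tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd
EOF
systemctl daemon-reload
systemctl restart docker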
