I am running Docker on a Red Hat system with the devicemapper storage driver and a thinpool device, just as recommended for production systems. Now when I want to reinstall Docker I need two steps (step 1 is sketched below):
1) remove the Docker directory (in my case /area51/docker)
2) clear the thinpool device
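In shell terms, step 1 is simply (with the daemon stopped; paths from my setup):
systemctl stop docker
rm -rf /area51/docker
Step 2 is where it gets tricky.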
The Docker documentation states that when using devicemapper with the dm.metadatadev and dm.datadev options, the easiest way of cleaning devicemapper would be:
If setting up a new metadata pool it is required to be valid.
This can be achieved by zeroing the first 4k to indicate empty metadata, like this:
$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1
Unfortunately, according to the documentation, dm.metadatadev is deprecated; it says to use dm.thinpooldev instead.
My thinpool was created along the lines of this Docker instruction.
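For reference, that instruction boils down to roughly the following LVM sequence (the physical device and VG names here are placeholders; my VG is thinpool_VG_38401):
# create a PV/VG on a dedicated block device, then data and metadata LVs
pvcreate /dev/xvdf
vgcreate docker /dev/xvdf
lvcreate --wipesignatures y -n thinpool docker -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG
# combine the two LVs into a thin pool
lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta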
So, my setup now looks like this:
cat /etc/docker/daemon.json
{
"storage-driver": "devicemapper",
"storage-opts": [
"dm.thinpooldev=/dev/mapper/thinpool_VG_38401-thinpool",
"dm.basesize=18G"
]
}
Under /dev/mapper I see the following thinpool devices:
ls -l /dev/mapper/thinpool_VG_38401-thinpool*
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool -> ../dm-8
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tdata -> ../dm-7
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tmeta -> ../dm-6
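(The same pool from LVM's side can be inspected with lvs; I am omitting my output here:)
lvs -a -o name,attr,size,data_percent,metadata_percent thinpool_VG_38401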
So, after running Docker successfully, I tried to reinstall as described above, clearing the thinpool by writing 4K of zeroes into the tmeta device and restarting Docker:
dd if=/dev/zero of=/dev/mapper/thinpool_VG_38401-thinpool_tmeta bs=4096 count=1
systemctl start docker
And ended up with:
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-12-06 10:28:46 UTC; 10s ago
Docs: https://docs.docker.com
Process: 1566 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 1566 (code=exited, status=1/FAILURE)
Memory: 236.0K
CGroup: /system.slice/docker.service
Dec 06 10:28:45 yoda3 systemd[1]: Starting Docker Application Container Engine...
Dec 06 10:28:45 yoda3 dockerd[1566]: time="2017-12-06T10:28:45.816049000Z" level=info msg="libcontainerd: new containerd process, pid: 1577"
Dec 06 10:28:46 yoda3 dockerd[1566]: time="2017-12-06T10:28:46.816966000Z" level=warning msg="failed to rename /area51/docker/tmp for background deletion: renam...chronously"
Dec 06 10:28:46 yoda3 dockerd[1566]: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (thinpool_VG_38401-...data blocks
Dec 06 10:28:46 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 06 10:28:46 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
Dec 06 10:28:46 yoda3 systemd[1]: Unit docker.service entered failed state.
Dec 06 10:28:46 yoda3 systemd[1]: docker.service failed.
I assumed I could get around the 'Unable to take ownership of thin-pool' error by rebooting. But after the reboot, trying to start Docker again gave the following error:
systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-12-06 10:30:37 UTC; 2min 29s ago
Docs: https://docs.docker.com
Process: 3180 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 3180 (code=exited, status=1/FAILURE)
Memory: 37.9M
CGroup: /system.slice/docker.service
Dec 06 10:30:36 yoda3 systemd[1]: Starting Docker Application Container Engine...
Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.893777000Z" level=warning msg="libcontainerd: makeUpgradeProof could not open /var/run/docker/lib...containerd"
Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.901958000Z" level=info msg="libcontainerd: new containerd process, pid: 3224"
Dec 06 10:30:37 yoda3 dockerd[3180]: Error starting daemon: error initializing graphdriver: devicemapper: Non existing device thinpool_VG_38401-thinpool
Dec 06 10:30:37 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 06 10:30:37 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
Dec 06 10:30:37 yoda3 systemd[1]: Unit docker.service entered failed state.
Dec 06 10:30:37 yoda3 systemd[1]: docker.service failed.
So obviously writing zeroes into the tmeta device is not the right thing to do; it seems to destroy my thinpool device.
Can anyone here tell me the right steps to clear the thin-pool device? Preferably the solution should not require a reboot.
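One approach I am considering, as an untested sketch: let LVM drop and rebuild the pool instead of zeroing its metadata device by hand, using the names from my setup:
systemctl stop docker
rm -rf /area51/docker
lvremove -y thinpool_VG_38401/thinpool
# recreate the pool as during the initial setup (lvcreate/lvconvert as above)
systemctl start docker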
Related
I had changed my nodes' runtime to containerd, but now I would like to use the docker tooling instead. I tried to refresh the dockerd service config as follows:
[Service]
......
ExecStart=/usr/bin/dockerd --selinux-enabled=false --insecure-registry=127.0.0.1 -H fd:// --containerd=/var/run/containerd/containerd.sock --cri-containerd --debug
......
Unexpectedly, it doesn't work:
$ systemctl status docker.service
● docker.service
Loaded: loaded (/etc/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/docker.service.d
└─http-proxy.conf
Active: failed (Result: start-limit) since Mon 2022-11-07 16:10:51 CST; 10s ago
Docs: https://docs.docker.com
Process: 4872 ExecStart=/usr/bin/dockerd --selinux-enabled=false --insecure-registry=127.0.0.1 -H fd:// --containerd=/var/run/containerd/containerd.sock --cri-containerd --debug (code=exited, status=1/FAILURE)
Main PID: 4872 (code=exited, status=1/FAILURE)
Nov 07 16:10:51 master systemd[1]: Failed to start docker.service.
Nov 07 16:10:51 master systemd[1]: Unit docker.service entered failed state.
Nov 07 16:10:51 master systemd[1]: docker.service failed.
Nov 07 16:10:51 master systemd[1]: docker.service holdoff time over, scheduling restart.
Nov 07 16:10:51 master systemd[1]: Stopped docker.service.
Nov 07 16:10:51 master systemd[1]: start request repeated too quickly for docker.service
Nov 07 16:10:51 master systemd[1]: Failed to start docker.service.
Nov 07 16:10:51 master systemd[1]: Unit docker.service entered failed state.
Nov 07 16:10:51 master systemd[1]: docker.service failed.
I suppose this should be simple. Please tell me the right config; I expect it to just work.
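As far as I know, --cri-containerd is not a flag that dockerd accepts (it belonged to the old standalone cri-containerd shim), so one thing worth trying is dropping it:
[Service]
ExecStart=/usr/bin/dockerd --selinux-enabled=false --insecure-registry=127.0.0.1 -H fd:// --containerd=/var/run/containerd/containerd.sock --debug
followed by systemctl daemon-reload and systemctl restart docker.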
I need to change the underlying storage for a Proxmox LXC Debian Buster container from RAW to ZFS. For this I restored a snapshot to ZFS storage. This is normally transparent for the OS in the container, but in this case docker no longer starts.
The initial problem was that Docker wasn't starting, and after some digging around I found this:
# dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock
INFO[2021-08-03T09:24:40.909844803Z] Starting up
...
ERRO[2021-08-03T09:24:56.914420548Z] failed to mount overlay: invalid argument storage-driver=overlay2
ERRO[2021-08-03T09:24:56.914439880Z] [graphdriver] prior storage driver overlay2 failed: driver not supported
failed to start daemon: error initializing graphdriver: driver not supported
How can I fix this?
EDIT:
I tried the suggested fix, but still no cigar:
root@mail:/var/log# systemctl status docker.service
* docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sat 2021-10-09 10:05:49 UTC; 1min 23s ago
Docs: https://docs.docker.com
Process: 236 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 236 (code=exited, status=1/FAILURE)
Oct 09 10:05:49 mail systemd[1]: docker.service: Service RestartSec=2s expired, scheduling restart.
Oct 09 10:05:49 mail systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Oct 09 10:05:49 mail systemd[1]: Stopped Docker Application Container Engine.
Oct 09 10:05:49 mail systemd[1]: docker.service: Start request repeated too quickly.
Oct 09 10:05:49 mail systemd[1]: docker.service: Failed with result 'exit-code'.
Oct 09 10:05:49 mail systemd[1]: Failed to start Docker Application Container Engine.
The link offered suggests creating a new zpool within the container. That seems a bit of overkill, no?
Configure Docker to use zfs. Edit /etc/docker/daemon.json and set the storage-driver to zfs. If the file was empty before, it should now look like this:
{
"storage-driver": "zfs"
}
more details: https://docs.docker.com/storage/storagedriver/zfs-driver/
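After editing the file, restart the daemon and check that the driver took effect, e.g.:
systemctl restart docker
docker info --format '{{.Driver}}'    # should print: zfs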
I am trying to setup awslogs for docker.
The docs say to add this to daemon.json:
{
"log-driver": "awslogs",
"log-opts": {
"awslogs-region": "eu-central-1"
}
}
When I create /etc/docker/daemon.json on Ubuntu with the content above, Docker won't start again.
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: inactive (dead) (Result: exit-code) since Fr 2018-07-20 10:59:53 CEST; 11s ago
Docs: https://docs.docker.com
Process: 647 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
Main PID: 647 (code=exited, status=1/FAILURE)
Jul 20 10:59:53 dev01-ubuntu systemd[1]: Failed to start Docker Application Container Engine.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Unit entered failed state.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Failed with result 'exit-code'.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Service hold-off time over, scheduling restart.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: Stopped Docker Application Container Engine.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: docker.service: Start request repeated too quickly.
Jul 20 10:59:53 dev01-ubuntu systemd[1]: Failed to start Docker Application Container Engine.
Can anybody explain this behaviour?
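A side note on debugging: the systemctl status output above hides the daemon's real complaint behind "Start request repeated too quickly"; the underlying error is usually visible with:
journalctl -u docker.service --no-pager -n 50
# or run the daemon in the foreground to see it directly:
sudo dockerd --debug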
I've never used it myself.
But from this: https://docs.docker.com/config/containers/logging/plugins/, it seems you need to install a plugin for any new log driver; check with docker plugin ls.
Maybe it is only available in the Amazon cloud environment and not on a local PC, just in case you did not notice that.
I installed Docker on CentOS 7 (Linux version 3.10.0-327.el7.x86_64) with the command yum install -y docker, but when I try to start Docker with systemctl start docker, it fails to start. Below is the error message:
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-03-15 16:38:37 CST; 10s ago
Docs: http://docs.docker.com
Process: 5166 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES (code=exited, status=1/FAILURE)
Main PID: 5166 (code=exited, status=1/FAILURE)
Mar 15 16:38:36 localhost.localdomain systemd[1]: Starting Docker Application Container Engine...
Mar 15 16:38:36 localhost.localdomain dockerd-current[5166]: time="2018-03-15T16:38:36.570661801+08:00" level=info msg="libcontainerd... 5171"
Mar 15 16:38:37 localhost.localdomain dockerd-current[5166]: time="2018-03-15T16:38:37.585565695+08:00" level=warning msg="overlay2: the ba...
Mar 15 16:38:37 localhost.localdomain dockerd-current[5166]: Error starting daemon: SELinux is not supported with the overlay2 graph ...false)
Mar 15 16:38:37 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 15 16:38:37 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.
Mar 15 16:38:37 localhost.localdomain systemd[1]: Unit docker.service entered failed state.
Mar 15 16:38:37 localhost.localdomain systemd[1]: docker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
How can I solve this issue?
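The truncated error line itself hints at the fix: the daemon was started with SELinux support enabled, which the overlay2 driver on this kernel does not support. In CentOS's packaging the flag comes from /etc/sysconfig/docker, so (assuming the stock file; your OPTIONS line may carry other flags) changing --selinux-enabled to --selinux-enabled=false should get past it:
# /etc/sysconfig/docker
OPTIONS='--selinux-enabled=false --log-driver=journald --signature-verification=false'
Then: systemctl restart docker.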
I'm trying to run Docker in Ubuntu 16.04 after a system reboot. I created a service for it, /etc/systemd/system/openvpnBOX.service:
[Unit]
Description=Openvpn Docker
[Service]
User=root
ExecStart=/etc/init/openvpn.conf
[Install]
WantedBy=multi-user.target
Alias=openvpnBOX.service
openvpn.conf:
#!/bin/bash
exec docker run --volumes-from ovpn-data --rm -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn
When I run this service with sudo service openvpnBOX start, I see that it runs; but after rebooting the system, the service can't start:
sudo service openvpnBOX status
● openvpnBOX.service - Openvpn Docker
Loaded: loaded (/etc/systemd/system/openvpnBOX.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2017-10-01 21:35:48 SST; 2min 51s ago
Process: 1771 ExecStart=/etc/init/openvpn.conf (code=exited, status=1/FAILURE)
Main PID: 1771 (code=exited, status=1/FAILURE)
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Unit entered failed state.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Failed with result 'exit-code'.
Oct 01 21:35:48 systemd[1]: Started Openvpn Docker.
Oct 01 21:35:48 openvpn.conf[1771]: Error response from daemon: 404 page not found
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Main process exited, code=exited, status=1/FAILURE
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Unit entered failed state.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Failed with result 'exit-code'.
Oct 01 21:35:48 systemd[1]: openvpnBOX.service: Start request repeated too quickly.
Oct 01 21:35:48 systemd[1]: Failed to start Openvpn Docker.
I can use "sudo docker run --restart=always --volumes-from ovpn-data -p 1194:1194/udp --cap-add=NET_ADMIN kylemanna/openvpn", but it doesn't solve my problem, because I would like to understand why my service doesn't work after the reboot.
Any idea?
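One likely explanation (a sketch, not verified on this box): the unit declares no ordering against docker.service, so at boot it fires before the Docker daemon is ready to accept requests, which would match the "Error response from daemon: 404 page not found" above. Adding a dependency and a restart policy along these lines may help:
[Unit]
Description=Openvpn Docker
After=docker.service
Requires=docker.service

[Service]
User=root
ExecStart=/etc/init/openvpn.conf
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Alias=openvpnBOX.service
After editing, run systemctl daemon-reload and re-enable the unit.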