Can't start working with docker

I use Ubuntu 16.04.
Suddenly, I can't run docker anymore.
When I run this command in the terminal, this is all I get (I expected information about both the client and the daemon versions):
$ sudo docker --version
Docker version 1.12.3, build 6b644ec
But when I run commands like this:
$ sudo docker ps
it just hangs with no output for a long time.
How can I overcome this problem? Here is the service status:
$ sudo service docker status
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; disabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Wed 2017-01-04 18:14:48 MSK; 12s ago
Docs: https://docs.docker.com
Process: 9534 ExecStart=/usr/bin/dockerd -H fd:// (code=exited, status=1/FAILURE)
Main PID: 9534 (code=exited, status=1/FAILURE)
Jan 04 18:14:47 kenenbek dockerd[9534]: time="2017-01-04T18:14:47.446210980+03:00" level=warning msg="Your kernel does not support swap memory limit."
Jan 04 18:14:47 kenenbek dockerd[9534]: time="2017-01-04T18:14:47.447160673+03:00" level=info msg="Loading containers: start."
Jan 04 18:14:47 kenenbek dockerd[9534]: .................time="2017-01-04T18:14:47.469385119+03:00" level=info msg="Firewalld running: false"
Jan 04 18:14:47 kenenbek dockerd[9534]: time="2017-01-04T18:14:47.881263583+03:00" level=info msg="Default bridge (docker0) is assigned with an IP addr
Jan 04 18:14:48 kenenbek dockerd[9534]: time="2017-01-04T18:14:48.736641043+03:00" level=info msg="Loading containers: done."
Jan 04 18:14:48 kenenbek dockerd[9534]: time="2017-01-04T18:14:48.790061315+03:00" level=fatal msg="Error creating cluster component: error while loadi
Jan 04 18:14:48 kenenbek systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jan 04 18:14:48 kenenbek systemd[1]: Failed to start Docker Application Container Engine.
Jan 04 18:14:48 kenenbek systemd[1]: docker.service: Unit entered failed state.
Jan 04 18:14:48 kenenbek systemd[1]: docker.service: Failed with result 'exit-code'.
And I get this output when I run:
$ sudo service docker restart
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.

This looks like a docker swarm certificate-related issue, as reported here.
A fix for this problem will be released in version 1.13. For now you can try force-recreating the swarm, as explained here.
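If losing the existing swarm state is acceptable, here is a minimal sketch of one heavy-handed way to force a fresh swarm (an assumption pieced together from the linked reports, not an official procedure; /var/lib/docker/swarm is the default location of the swarm state and its certificates):
$ sudo systemctl stop docker
$ sudo mv /var/lib/docker/swarm /var/lib/docker/swarm.bak   # set the stale swarm state (with its expired certificates) aside
$ sudo systemctl start docker
$ sudo docker swarm init                                    # re-create the swarm from scratch; services defined in the old swarm are lost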

Related

docker start failed after adding daemon.json file

I'm trying to install Kubernetes on CentOS 7.7, so I have to install docker first.
I followed the Kubernetes documentation to install docker-ce and modify the daemon.json file.
$ yum install yum-utils device-mapper-persistent-data lvm2
$ yum-config-manager --add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
$ yum update && yum install \
containerd.io-1.2.10 \
docker-ce-19.03.4 \
docker-ce-cli-19.03.4
$ mkdir /etc/docker
$ cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
$ mkdir -p /etc/systemd/system/docker.service.d
$ systemctl daemon-reload
$ systemctl start docker
When I started the docker service, it said:
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
$ systemctl status -l docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Tue 2020-01-07 14:44:11 UTC; 7min ago
Docs: https://docs.docker.com
Process: 9879 ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock (code=exited, status=1/FAILURE)
Main PID: 9879 (code=exited, status=1/FAILURE)
Jan 07 14:44:09 love61y2222c.mylabserver.com systemd[1]: Failed to start Docker Application Container Engine.
Jan 07 14:44:09 love61y2222c.mylabserver.com systemd[1]: Unit docker.service entered failed state.
Jan 07 14:44:09 love61y2222c.mylabserver.com systemd[1]: docker.service failed.
Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: docker.service holdoff time over, scheduling restart.
Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: Stopped Docker Application Container Engine.
Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: start request repeated too quickly for docker.service
Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: Failed to start Docker Application Container Engine.
Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: Unit docker.service entered failed state.
Jan 07 14:44:11 love61y2222c.mylabserver.com systemd[1]: docker.service failed.
$ journalctl -xe
.
.
-- Unit docker.service has begun starting up.
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time="2020-01-07T15:28:25.722780008Z" level=info msg="Starting up"
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time="2020-01-07T15:28:25.728447514Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time="2020-01-07T15:28:25.728479813Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time="2020-01-07T15:28:25.728510943Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time="2020-01-07T15:28:25.728526075Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time="2020-01-07T15:28:25.732325726Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time="2020-01-07T15:28:25.733844225Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time="2020-01-07T15:28:25.733880664Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time="2020-01-07T15:28:25.733898044Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: time="2020-01-07T15:28:25.743421350Z" level=warning msg="Using pre-4.0.0 kernel for overlay2, mount failures may require
Jan 07 15:28:25 love61y2223c.mylabserver.com dockerd[29628]: failed to start daemon: error initializing graphdriver: overlay2: the backing xfs filesystem is formatted without d_type
Jan 07 15:28:25 love61y2223c.mylabserver.com systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Jan 07 15:28:25 love61y2223c.mylabserver.com systemd[1]: Failed to start Docker Application Container Engine.
-- Subject: Unit docker.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit docker.service has failed.
--
-- The result is failed.
Jan 07 15:28:25 love61y2223c.mylabserver.com systemd[1]: Unit docker.service entered failed state.
Jan 07 15:28:25 love61y2223c.mylabserver.com systemd[1]: docker.service failed.
Could anyone tell me why the docker service fails to start after modifying the daemon.json file? And how do I specify the cgroup driver, default log-driver and default storage-driver in the right way?
Any suggestion will be greatly appreciated.
Thanks.
This error points to docker being forced to use overlay2 on a backing filesystem that doesn't support it:
failed to start daemon: error initializing graphdriver: overlay2: the backing xfs filesystem is formatted without d_type
See docker's table for details on backing filesystem requirements for the different storage drivers: https://docs.docker.com/storage/storagedriver/#supported-backing-filesystems
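As a quick check before changing anything (a hedged sketch; point it at whichever mount point actually backs /var/lib/docker), xfs exposes d_type support through its ftype flag:
$ xfs_info / | grep ftype   # ftype=1 means the filesystem supports d_type; ftype=0 reproduces the error above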
The fix is to remove the storage-driver settings, or to rebuild the backing filesystem with the options needed to support overlay2:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
As for changing the xfs options, that appears to require rebuilding the filesystem; see this answer for more details on the needed steps.
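If you go with the simpler route of dropping the storage-driver settings, a short follow-up sketch; dockerd re-reads daemon.json on restart, so no extra systemd step is needed for that file:
$ systemctl restart docker
$ docker info | grep -i 'storage driver'   # confirm which driver docker selected on this filesystem
The destructive alternative is reformatting the backing device with d_type enabled (mkfs.xfs -n ftype=1), which wipes whatever is on it.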

Set the selinux status to `Permissive`, still can not run docker

After installing docker, I set the SELinux status to Permissive, but I still cannot run docker.
In my /etc/selinux/config I have set SELINUX=disabled,
and I ran setenforce 0. Checking with:
# getenforce
Permissive
I ran systemctl start docker, but it failed with the error below:
# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2018-06-29 09:05:47 CST; 14s ago
Docs: http://docs.docker.com
Process: 21615 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --init-path=/usr/libexec/docker/docker-init-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES (code=exited, status=1/FAILURE)
Main PID: 21615 (code=exited, status=1/FAILURE)
Jun 29 09:05:46 123.xyz systemd[1]: Starting Docker Application Container Engine...
Jun 29 09:05:46 123.xyz dockerd-current[21615]: time="2018-06-29T09:05:46.451911058+08:00" level=warning msg="could not ch...found"
Jun 29 09:05:46 123.xyz dockerd-current[21615]: time="2018-06-29T09:05:46.453472267+08:00" level=info msg="libcontainerd: ...21626"
Jun 29 09:05:47 123.xyz dockerd-current[21615]: time="2018-06-29T09:05:47.463085812+08:00" level=warning msg="overlay2: the back...
Jun 29 09:05:47 123.xyz dockerd-current[21615]: Error starting daemon: SELinux is not supported with the overlay2 graph dr...false)
Jun 29 09:05:47 123.xyz systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Jun 29 09:05:47 123.xyz systemd[1]: Failed to start Docker Application Container Engine.
Jun 29 09:05:47 123.xyz systemd[1]: Unit docker.service entered failed state.
Jun 29 09:05:47 123.xyz systemd[1]: docker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
Why does it still say:
Error starting daemon: SELinux is not supported with the overlay2 graph dr...false)
My Linux is CentOS 7.2.
I found the solution.
In /etc/sysconfig/docker:
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
DOCKER_CERT_PATH=/etc/docker
fi
Change --selinux-enabled to --selinux-enabled=false.
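With that change applied, the relevant part of /etc/sysconfig/docker looks roughly like this (a sketch of just the edited line), followed by a restart:
OPTIONS='--selinux-enabled=false --log-driver=journald --signature-verification=false'
# systemctl restart docker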

Docker could not start after install on CentOS 7

I installed docker on CentOS 7 (Linux version 3.10.0-327.el7.x86_64) with the command yum install -y docker, but when I try to start docker with systemctl start docker, it fails to start. Below is the error message:
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Thu 2018-03-15 16:38:37 CST; 10s ago
Docs: http://docs.docker.com
Process: 5166 ExecStart=/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --seccomp-profile=/etc/docker/seccomp.json $OPTIONS $DOCKER_STORAGE_OPTIONS $DOCKER_NETWORK_OPTIONS $ADD_REGISTRY $BLOCK_REGISTRY $INSECURE_REGISTRY $REGISTRIES (code=exited, status=1/FAILURE)
Main PID: 5166 (code=exited, status=1/FAILURE)
Mar 15 16:38:36 localhost.localdomain systemd[1]: Starting Docker Application Container Engine...
Mar 15 16:38:36 localhost.localdomain dockerd-current[5166]: time="2018-03-15T16:38:36.570661801+08:00" level=info msg="libcontainerd... 5171"
Mar 15 16:38:37 localhost.localdomain dockerd-current[5166]: time="2018-03-15T16:38:37.585565695+08:00" level=warning msg="overlay2: the ba...
Mar 15 16:38:37 localhost.localdomain dockerd-current[5166]: Error starting daemon: SELinux is not supported with the overlay2 graph ...false)
Mar 15 16:38:37 localhost.localdomain systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 15 16:38:37 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.
Mar 15 16:38:37 localhost.localdomain systemd[1]: Unit docker.service entered failed state.
Mar 15 16:38:37 localhost.localdomain systemd[1]: docker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.
How to solve this issue?

How do I clear a thinpool device for docker

I am running docker on a Red Hat system with devicemapper and a thinpool device, just as recommended for production systems. Now, when I want to reinstall docker, I need two steps:
1) remove the docker directory (in my case /area51/docker)
2) clear the thinpool device
The docker documentation states that when using devicemapper with the dm.metadatadev and dm.datadev options, the easiest way of cleaning devicemapper would be:
If setting up a new metadata pool it is required to be valid.
This can be achieved by zeroing the first 4k to indicate empty metadata, like this:
$ dd if=/dev/zero of=$metadata_dev bs=4096 count=1
Unfortunately, according to the documentation, dm.metadatadev is deprecated; it says to use dm.thinpooldev instead.
My thinpool has been created along the lines of this docker instruction
So, my setup now looks like this:
cat /etc/docker/daemon.json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.thinpooldev=/dev/mapper/thinpool_VG_38401-thinpool",
    "dm.basesize=18G"
  ]
}
Under the devicemapper directory I see the following thinpool devices:
ls -l /dev/mapper/thinpool_VG_38401-thinpool*
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool -> ../dm-8
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tdata -> ../dm-7
lrwxrwxrwx 1 root root 7 Dec 6 08:31 /dev/mapper/thinpool_VG_38401-thinpool_tmeta -> ../dm-6
So, after running docker successfully, I tried to reinstall as described above and clear the thinpool by writing 4K of zeroes into the tmeta device, then restarting docker:
dd if=/dev/zero of=/dev/mapper/thinpool_VG_38401-thinpool_tmeta bs=4096 count=1
systemctl start docker
And ended up with:
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-12-06 10:28:46 UTC; 10s ago
Docs: https://docs.docker.com
Process: 1566 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 1566 (code=exited, status=1/FAILURE)
Memory: 236.0K
CGroup: /system.slice/docker.service
Dec 06 10:28:45 yoda3 systemd[1]: Starting Docker Application Container Engine...
Dec 06 10:28:45 yoda3 dockerd[1566]: time="2017-12-06T10:28:45.816049000Z" level=info msg="libcontainerd: new containerd process, pid: 1577"
Dec 06 10:28:46 yoda3 dockerd[1566]: time="2017-12-06T10:28:46.816966000Z" level=warning msg="failed to rename /area51/docker/tmp for background deletion: renam...chronously"
Dec 06 10:28:46 yoda3 dockerd[1566]: Error starting daemon: error initializing graphdriver: devmapper: Unable to take ownership of thin-pool (thinpool_VG_38401-...data blocks
Dec 06 10:28:46 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 06 10:28:46 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
Dec 06 10:28:46 yoda3 systemd[1]: Unit docker.service entered failed state.
Dec 06 10:28:46 yoda3 systemd[1]: docker.service failed.
I assumed I could get around the 'Unable to take ownership of thin-pool' error by rebooting. But after the reboot, trying to start docker again gave the following error:
systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Wed 2017-12-06 10:30:37 UTC; 2min 29s ago
Docs: https://docs.docker.com
Process: 3180 ExecStart=/usr/bin/dockerd -G uwsgi --data-root=/area51/docker -H unix:///var/run/docker.sock (code=exited, status=1/FAILURE)
Main PID: 3180 (code=exited, status=1/FAILURE)
Memory: 37.9M
CGroup: /system.slice/docker.service
Dec 06 10:30:36 yoda3 systemd[1]: Starting Docker Application Container Engine...
Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.893777000Z" level=warning msg="libcontainerd: makeUpgradeProof could not open /var/run/docker/lib...containerd"
Dec 06 10:30:36 yoda3 dockerd[3180]: time="2017-12-06T10:30:36.901958000Z" level=info msg="libcontainerd: new containerd process, pid: 3224"
Dec 06 10:30:37 yoda3 dockerd[3180]: Error starting daemon: error initializing graphdriver: devicemapper: Non existing device thinpool_VG_38401-thinpool
Dec 06 10:30:37 yoda3 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Dec 06 10:30:37 yoda3 systemd[1]: Failed to start Docker Application Container Engine.
Dec 06 10:30:37 yoda3 systemd[1]: Unit docker.service entered failed state.
Dec 06 10:30:37 yoda3 systemd[1]: docker.service failed.
So obviously writing zeroes into the thinpool tmeta device is not the right thing to do; it seems to destroy my thinpool device.
Can anyone here tell me the right steps to clear the thin-pool device? Preferably the solution should not require a reboot.
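Not a definitive answer, but since the pool was created along the lines of the linked direct-lvm instructions, one hedged sketch is to rebuild the thin pool with LVM instead of zeroing its metadata device. The volume group and LV names below come from the question; the sizes and any thin-pool autoextend profile should match however the pool was originally created, this wipes all existing images and containers, and if LVM refuses because old docker thin devices are still active, they may need to be removed with dmsetup first:
systemctl stop docker
rm -rf /area51/docker                                   # docker's data-root from the unit file above
lvremove -y thinpool_VG_38401/thinpool                  # drop the damaged thin pool
lvcreate --wipesignatures y -n thinpool thinpool_VG_38401 -l 95%VG
lvcreate --wipesignatures y -n thinpoolmeta thinpool_VG_38401 -l 1%VG
lvconvert -y --zero n -c 512K --thinpool thinpool_VG_38401/thinpool --poolmetadata thinpool_VG_38401/thinpoolmeta
systemctl start docker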

Docker can't start on centos7: failed to start docker application container engine

I'm running CentOS 7 in VMware Workstation Player:
[root@localhost Desktop]# uname -r
3.10.0-229.14.1.el7.x86_64
First, I ran yum install docker-engine.
Then I appended other_args="--selinux-enabled" to /etc/sysconfig/docker.
When I run service docker start, I get this error:
[root@localhost Desktop]# systemctl status docker.service -l
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled)
Active: activating (start) since Sun 2015-10-25 19:49:32 PDT; 46s ago
Docs: https://docs.docker.com
Main PID: 14387 (docker)
CGroup: /system.slice/docker.service
└─14387 /usr/bin/docker daemon -H fd://
Oct 25 19:49:32 localhost.localdomain systemd[1]: Failed to start Docker Application Container Engine.
Oct 25 19:49:32 localhost.localdomain systemd[1]: Unit docker.service entered failed state.
Oct 25 19:49:32 localhost.localdomain systemd[1]: Starting Docker Application Container Engine...
Oct 25 19:49:33 localhost.localdomain docker[14387]: time="2015-10-25T19:49:33.092885953-07:00" level=info msg="[graphdriver] using prior storage driver \"devicemapper\""
Oct 25 19:49:33 localhost.localdomain docker[14387]: time="2015-10-25T19:49:33.093697949-07:00" level=info msg="Option DefaultDriver: bridge"
Oct 25 19:49:33 localhost.localdomain docker[14387]: time="2015-10-25T19:49:33.093729432-07:00" level=info msg="Option DefaultNetwork: bridge"
Oct 25 19:49:33 localhost.localdomain docker[14387]: time="2015-10-25T19:49:33.108983655-07:00" level=warning msg="Running modprobe bridge nf_nat br_netfilter failed with message: modprobe: WARNING: Module br_netfilter not found.\n, error: exit status 1"
Who can help me? Thanks.
