My error:
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ sudo apt-get update
Hit:1 https://download.docker.com/linux/ubuntu focal InRelease
Hit:2 http://archive.ubuntu.com/ubuntu focal InRelease
Get:3 http://security.ubuntu.com/ubuntu focal-security InRelease [114 kB]
Hit:4 http://archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:5 http://archive.ubuntu.com/ubuntu focal-backports InRelease
Fetched 114 kB in 4s (26.4 kB/s)
Reading package lists... Done
W: Target Packages (stable/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:50 and /etc/apt/sources.list.d/docker.list:1
W: Target Packages (stable/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:50 and /etc/apt/sources.list.d/docker.list:1
W: Target Translations (stable/i18n/Translation-en) is configured multiple times in /etc/apt/sources.list:50 and /etc/apt/sources.list.d/docker.list:1
W: Target CNF (stable/cnf/Commands-amd64) is configured multiple times in /etc/apt/sources.list:50 and /etc/apt/sources.list.d/docker.list:1
W: Target CNF (stable/cnf/Commands-all) is configured multiple times in /etc/apt/sources.list:50 and /etc/apt/sources.list.d/docker.list:1
W: Target Packages (stable/binary-amd64/Packages) is configured multiple times in /etc/apt/sources.list:50 and /etc/apt/sources.list.d/docker.list:1
W: Target Packages (stable/binary-all/Packages) is configured multiple times in /etc/apt/sources.list:50 and /etc/apt/sources.list.d/docker.list:1
W: Target Translations (stable/i18n/Translation-en) is configured multiple times in /etc/apt/sources.list:50 and /etc/apt/sources.list.d/docker.list:1
W: Target CNF (stable/cnf/Commands-amd64) is configured multiple times in /etc/apt/sources.list:50 and /etc/apt/sources.list.d/docker.list:1
W: Target CNF (stable/cnf/Commands-all) is configured multiple times in /etc/apt/sources.list:50 and /etc/apt/sources.list.d/docker.list:1
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ sudo apt-get install docker-ce docker-ce-cli containerd.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
docker-ce-rootless-extras docker-scan-plugin pigz slirp4netns
Suggested packages:
aufs-tools cgroupfs-mount | cgroup-lite
The following NEW packages will be installed:
containerd.io docker-ce docker-ce-cli docker-ce-rootless-extras docker-scan-plugin pigz slirp4netns
0 upgraded, 7 newly installed, 0 to remove and 45 not upgraded.
Need to get 104 MB of archives.
After this operation, 448 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 https://download.docker.com/linux/ubuntu focal/stable amd64 containerd.io amd64 1.4.8-1 [24.7 MB]
Get:2 http://archive.ubuntu.com/ubuntu focal/universe amd64 pigz amd64 2.4-1 [57.4 kB]
Get:3 http://archive.ubuntu.com/ubuntu focal/universe amd64 slirp4netns amd64 0.4.3-1 [74.3 kB]
Get:4 https://download.docker.com/linux/ubuntu focal/stable amd64 docker-ce-cli amd64 5:20.10.7~3-0~ubuntu-focal [41.4 MB]
Get:5 https://download.docker.com/linux/ubuntu focal/stable amd64 docker-ce amd64 5:20.10.7~3-0~ubuntu-focal [24.8 MB]
Get:6 https://download.docker.com/linux/ubuntu focal/stable amd64 docker-ce-rootless-extras amd64 5:20.10.7~3-0~ubuntu-focal [9063 kB]
Get:7 https://download.docker.com/linux/ubuntu focal/stable amd64 docker-scan-plugin amd64 0.8.0~ubuntu-focal [3889 kB]
Fetched 104 MB in 17s (6216 kB/s)
Selecting previously unselected package pigz.
(Reading database ... 32256 files and directories currently installed.)
Preparing to unpack .../0-pigz_2.4-1_amd64.deb ...
Unpacking pigz (2.4-1) ...
Selecting previously unselected package containerd.io.
Preparing to unpack .../1-containerd.io_1.4.8-1_amd64.deb ...
Unpacking containerd.io (1.4.8-1) ...
Selecting previously unselected package docker-ce-cli.
Preparing to unpack .../2-docker-ce-cli_5%3a20.10.7~3-0~ubuntu-focal_amd64.deb ...
Unpacking docker-ce-cli (5:20.10.7~3-0~ubuntu-focal) ...
Selecting previously unselected package docker-ce.
Preparing to unpack .../3-docker-ce_5%3a20.10.7~3-0~ubuntu-focal_amd64.deb ...
Unpacking docker-ce (5:20.10.7~3-0~ubuntu-focal) ...
Selecting previously unselected package docker-ce-rootless-extras.
Preparing to unpack .../4-docker-ce-rootless-extras_5%3a20.10.7~3-0~ubuntu-focal_amd64.deb ...
Unpacking docker-ce-rootless-extras (5:20.10.7~3-0~ubuntu-focal) ...
Selecting previously unselected package docker-scan-plugin.
Preparing to unpack .../5-docker-scan-plugin_0.8.0~ubuntu-focal_amd64.deb ...
Unpacking docker-scan-plugin (0.8.0~ubuntu-focal) ...
Selecting previously unselected package slirp4netns.
Preparing to unpack .../6-slirp4netns_0.4.3-1_amd64.deb ...
Unpacking slirp4netns (0.4.3-1) ...
Setting up slirp4netns (0.4.3-1) ...
Setting up docker-scan-plugin (0.8.0~ubuntu-focal) ...
Setting up containerd.io (1.4.8-1) ...
Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.
Setting up docker-ce-cli (5:20.10.7~3-0~ubuntu-focal) ...
Setting up pigz (2.4-1) ...
Setting up docker-ce-rootless-extras (5:20.10.7~3-0~ubuntu-focal) ...
Setting up docker-ce (5:20.10.7~3-0~ubuntu-focal) ...
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.
invoke-rc.d: could not determine current runlevel
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for systemd (245.4-4ubuntu3.6) ...
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ sudo docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ docker run
"docker run" requires at least 1 argument.
See 'docker run --help'.
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ docker --version
Docker version 20.10.7, build f0df350
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ systemctl start docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ sudo dockerd
INFO[2021-07-26T09:26:04.598999900+07:00] Starting up
INFO[2021-07-26T09:26:05.751575600+07:00] libcontainerd: started new containerd process pid=4266
INFO[2021-07-26T09:26:05.751933800+07:00] parsed scheme: "unix" module=grpc
INFO[2021-07-26T09:26:05.752582200+07:00] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2021-07-26T09:26:05.752789000+07:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2021-07-26T09:26:05.753058600+07:00] ClientConn switching balancer to "pick_first" module=grpc
INFO[2021-07-26T09:26:05.784888200+07:00] starting containerd revision=7eba5930496d9bbe375fdf71603e610ad737d2b2 version=1.4.8
INFO[2021-07-26T09:26:05.807969400+07:00] loading plugin "io.containerd.content.v1.content"... type=io.containerd.content.v1
INFO[2021-07-26T09:26:05.809913000+07:00] loading plugin "io.containerd.snapshotter.v1.aufs"... type=io.containerd.snapshotter.v1
INFO[2021-07-26T09:26:06.359366500+07:00] skip loading plugin "io.containerd.snapshotter.v1.aufs"... error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/4.4.0-19041-Microsoft\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-07-26T09:26:06.360318200+07:00] loading plugin "io.containerd.snapshotter.v1.btrfs"... type=io.containerd.snapshotter.v1
INFO[2021-07-26T09:26:06.361771600+07:00] skip loading plugin "io.containerd.snapshotter.v1.btrfs"... error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (wslfs) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-07-26T09:26:06.362037500+07:00] loading plugin "io.containerd.snapshotter.v1.devmapper"... type=io.containerd.snapshotter.v1
WARN[2021-07-26T09:26:06.362243500+07:00] failed to load plugin io.containerd.snapshotter.v1.devmapper error="devmapper not configured"
INFO[2021-07-26T09:26:06.362974300+07:00] loading plugin "io.containerd.snapshotter.v1.native"... type=io.containerd.snapshotter.v1
INFO[2021-07-26T09:26:06.365365000+07:00] loading plugin "io.containerd.snapshotter.v1.overlayfs"... type=io.containerd.snapshotter.v1
INFO[2021-07-26T09:26:06.368026100+07:00] loading plugin "io.containerd.snapshotter.v1.zfs"... type=io.containerd.snapshotter.v1
INFO[2021-07-26T09:26:06.368363300+07:00] skip loading plugin "io.containerd.snapshotter.v1.zfs"... error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2021-07-26T09:26:06.369411200+07:00] loading plugin "io.containerd.metadata.v1.bolt"... type=io.containerd.metadata.v1
WARN[2021-07-26T09:26:06.377457800+07:00] could not use snapshotter devmapper in metadata plugin error="devmapper not configured"
INFO[2021-07-26T09:26:06.378196300+07:00] metadata content store policy set policy=shared
INFO[2021-07-26T09:26:06.384144600+07:00] loading plugin "io.containerd.differ.v1.walking"... type=io.containerd.differ.v1
INFO[2021-07-26T09:26:06.384857900+07:00] loading plugin "io.containerd.gc.v1.scheduler"... type=io.containerd.gc.v1
INFO[2021-07-26T09:26:06.386379300+07:00] loading plugin "io.containerd.service.v1.introspection-service"... type=io.containerd.service.v1
INFO[2021-07-26T09:26:06.393867000+07:00] loading plugin "io.containerd.service.v1.containers-service"... type=io.containerd.service.v1
INFO[2021-07-26T09:26:06.394944900+07:00] loading plugin "io.containerd.service.v1.content-service"... type=io.containerd.service.v1
INFO[2021-07-26T09:26:06.395951300+07:00] loading plugin "io.containerd.service.v1.diff-service"... type=io.containerd.service.v1
INFO[2021-07-26T09:26:06.397012700+07:00] loading plugin "io.containerd.service.v1.images-service"... type=io.containerd.service.v1
INFO[2021-07-26T09:26:06.397965800+07:00] loading plugin "io.containerd.service.v1.leases-service"... type=io.containerd.service.v1
INFO[2021-07-26T09:26:06.398988200+07:00] loading plugin "io.containerd.service.v1.namespaces-service"... type=io.containerd.service.v1
INFO[2021-07-26T09:26:06.400056700+07:00] loading plugin "io.containerd.service.v1.snapshots-service"... type=io.containerd.service.v1
INFO[2021-07-26T09:26:06.401187500+07:00] loading plugin "io.containerd.runtime.v1.linux"... type=io.containerd.runtime.v1
INFO[2021-07-26T09:26:06.408917100+07:00] loading plugin "io.containerd.runtime.v2.task"... type=io.containerd.runtime.v2
INFO[2021-07-26T09:26:06.411047900+07:00] loading plugin "io.containerd.monitor.v1.cgroups"... type=io.containerd.monitor.v1
INFO[2021-07-26T09:26:06.412279400+07:00] loading plugin "io.containerd.service.v1.tasks-service"... type=io.containerd.service.v1
INFO[2021-07-26T09:26:06.412639500+07:00] loading plugin "io.containerd.internal.v1.restart"... type=io.containerd.internal.v1
INFO[2021-07-26T09:26:06.413666800+07:00] loading plugin "io.containerd.grpc.v1.containers"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.414645100+07:00] loading plugin "io.containerd.grpc.v1.content"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.415881400+07:00] loading plugin "io.containerd.grpc.v1.diff"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.417013800+07:00] loading plugin "io.containerd.grpc.v1.events"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.423490700+07:00] loading plugin "io.containerd.grpc.v1.healthcheck"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.424583600+07:00] loading plugin "io.containerd.grpc.v1.images"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.425780700+07:00] loading plugin "io.containerd.grpc.v1.leases"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.426810700+07:00] loading plugin "io.containerd.grpc.v1.namespaces"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.427972000+07:00] loading plugin "io.containerd.internal.v1.opt"... type=io.containerd.internal.v1
INFO[2021-07-26T09:26:06.430384700+07:00] loading plugin "io.containerd.grpc.v1.snapshots"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.430569100+07:00] loading plugin "io.containerd.grpc.v1.tasks"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.431579800+07:00] loading plugin "io.containerd.grpc.v1.version"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.433089700+07:00] loading plugin "io.containerd.grpc.v1.introspection"... type=io.containerd.grpc.v1
INFO[2021-07-26T09:26:06.440447100+07:00] serving... address=/var/run/docker/containerd/containerd-debug.sock
INFO[2021-07-26T09:26:06.441881300+07:00] serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
INFO[2021-07-26T09:26:06.443648400+07:00] serving... address=/var/run/docker/containerd/containerd.sock
INFO[2021-07-26T09:26:06.444409700+07:00] containerd successfully booted in 0.663149s
INFO[2021-07-26T09:26:06.458975000+07:00] parsed scheme: "unix" module=grpc
INFO[2021-07-26T09:26:06.459340200+07:00] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2021-07-26T09:26:06.459937000+07:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2021-07-26T09:26:06.461038500+07:00] ClientConn switching balancer to "pick_first" module=grpc
INFO[2021-07-26T09:26:06.464507500+07:00] parsed scheme: "unix" module=grpc
INFO[2021-07-26T09:26:06.465303100+07:00] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2021-07-26T09:26:06.465948100+07:00] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}] <nil> <nil>} module=grpc
INFO[2021-07-26T09:26:06.469570200+07:00] ClientConn switching balancer to "pick_first" module=grpc
WARN[2021-07-26T09:26:06.508606600+07:00] Your kernel does not support cgroup memory limit
WARN[2021-07-26T09:26:06.509102200+07:00] Unable to find cpu cgroup in mounts
WARN[2021-07-26T09:26:06.509641700+07:00] Unable to find blkio cgroup in mounts
WARN[2021-07-26T09:26:06.511422700+07:00] Unable to find cpuset cgroup in mounts
WARN[2021-07-26T09:26:06.512210200+07:00] Unable to find pids cgroup in mounts
INFO[2021-07-26T09:26:06.513102200+07:00] Loading containers: start.
WARN[2021-07-26T09:26:06.525628500+07:00] Running iptables --wait -t nat -L -n failed with message: `iptables v1.8.4 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.`, error: exit status 3
INFO[2021-07-26T09:26:06.633151400+07:00] stopping event stream following graceful shutdown error="<nil>" module=libcontainerd namespace=moby
INFO[2021-07-26T09:26:06.634747000+07:00] stopping healthcheck following graceful shutdown module=libcontainerd
INFO[2021-07-26T09:26:06.634766100+07:00] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby
WARN[2021-07-26T09:26:07.650886300+07:00] grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial unix:///var/run/docker/containerd/containerd.sock: timeout". Reconnecting... module=grpc
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.8.4 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
(exit status 3)
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ systemctl start docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ docker run hello-world
docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/create: dial unix /var/run/docker.sock: connect: permission denied.
See 'docker run --help'.
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ sudo rm -rf /etc/systemd/system/docker.service.d
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ sudo systemctl deamon-reload
Unknown operation deamon-reload.
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ systemctl status docker.service
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
donhuvy#VYLAPTOP:~/temp2607/reaction-development-platform$ sudo su
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# systemctl start docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down [ OK ]
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
Executing: /lib/systemd/systemd-sysv-install enable docker
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# systemctl restart docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# systemctl restart docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# systemctl restart docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# /etc/init.d/dbus start
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# service docker stop
* Docker already stopped - file /var/run/docker-ssd.pid not found.
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# cd /var/run/docker/libcontainerd
bash: cd: /var/run/docker/libcontainerd: No such file or directory
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# service docker start
* Starting Docker: docker [ OK ]
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# docker --version
Docker version 20.10.7, build f0df350
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# docker run hello-world
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
See 'docker run --help'.
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# docker run -v /var/run/docker.sock:/var/run/docker.sock
"docker run" requires at least 1 argument.
See 'docker run --help'.
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# make
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Removed docker development symlink for reaction-hydra
Running pre-build hook script for reaction-hydra.
reaction-hydra post-project-start script invoked.
/bin/sh: 1: docker-compose: not found
make: *** [Makefile:264: build-reaction-hydra] Error 127
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
and
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform# sudo systemctl is-active docker
System has not been booted with systemd as init system (PID 1). Can't operate.
Failed to connect to bus: Host is down
root#VYLAPTOP:/home/donhuvy/temp2607/reaction-development-platform#
How to fix it?
WSL doesn't run an init system (there is no systemd as PID 1), so services like the Docker daemon can't be managed with systemctl. You need to use the native Windows Docker (Docker Desktop), which runs its daemon in a special WSL distribution.
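Since systemd is not PID 1 under WSL, systemctl cannot work there; a quick sketch of how to confirm this and fall back to the SysV service wrapper instead:

```shell
# Under WSL, PID 1 is /init, not systemd, which is why every
# "systemctl ..." call above fails with "Host is down".
ps -p 1 -o comm=            # prints "init" under WSL, "systemd" on a normal Linux boot

# The SysV wrapper works without systemd:
sudo service docker start   # uses /etc/init.d/docker instead of systemctl
sudo service docker status
```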
I looked for a solution to this problem and I think I found exactly what you were looking for. My goal was to be able to use Docker from my WSL distro of choice (Ubuntu).
1. Make sure you have WSL2 (see "How to check it").
2. Download Docker Desktop (I know you don't want to use this, but stay with me).
3. Open Docker Desktop, go to Settings, then Resources, then WSL Integration. Here, activate the integration with your distro of choice.
4. Do not forget to hit Apply & Restart.
5. Now open your WSL shell and type docker run hello-world to test whether Docker works.
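Once the integration is enabled, you can also check from inside WSL that the client is actually talking to Docker Desktop's daemon (a sketch; if the Server section is missing from the output, the integration is not active):

```shell
# With WSL integration enabled, the client in your distro talks to the
# daemon that Docker Desktop runs in its own utility distro.
docker version                                            # should show both Client and Server sections
docker info --format '{{.ServerVersion}} {{.OperatingSystem}}'
docker run --rm hello-world
```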
I haven't entirely managed to get rid of the same error (I use Windows 10, WSL2 and Ubuntu 20.04, and I sure as hell do not want to use the crappy shareware called Docker Desktop, which annoys the hell out of me with all its upgrade and "Pro" nagging).
However, following this guide seems to help: https://dev.to/bowmanjd/install-docker-on-windows-wsl-without-docker-desktop-34m9
I got rid of the iptables-related error messages by adding "iptables": false to the /etc/docker/daemon.json configuration. I have not managed to start dockerd just now, though (an "error creating default \"bridge\" network: permission denied" is still haunting me somewhat).
Give it a whirl if you like.
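For reference, the "iptables": false change mentioned above goes into /etc/docker/daemon.json. A minimal sketch (if the file already exists with other keys, merge this key in instead of overwriting):

```shell
# Stop dockerd from manipulating iptables, which fails under WSL kernels
# that lack the nat table. Note this is a workaround: with iptables off,
# published container ports are not NATed automatically.
sudo mkdir -p /etc/docker
echo '{ "iptables": false }' | sudo tee /etc/docker/daemon.json
sudo service docker restart    # SysV wrapper; systemctl does not work under WSL
```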
Edit: the reason for the error message was that my WSL Ubuntu 20.04 LTS distro was, despite what I thought, not a WSL2 distro but a WSL1 one.
Make sure to set the distro's version to 2 and make sure to actually use that distro, possibly setting both as defaults (version 2 and the distro). See https://stackoverflow.com/a/65005633/15610035 for details on how to do it.
After changing the distro to version 2, the error message went away.
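The version switch can be done from a Windows command prompt or PowerShell; a sketch, assuming the distro is named "Ubuntu-20.04" (check the actual name with the list command first):

```shell
wsl.exe -l -v                          # list installed distros and the WSL version each one uses
wsl.exe --set-version Ubuntu-20.04 2   # convert the distro to WSL2 (conversion can take a while)
wsl.exe --set-default-version 2        # make WSL2 the default for newly installed distros
```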
However, I currently have trouble getting dockerd to expose ports in a way that the Windows localhost can reach, because the WSL localhost is not the same as the Windows localhost: networking is handled differently in WSL2 compared to WSL1.
Additional useful articles: https://superuser.com/questions/1131874/how-to-access-localhost-of-linux-subsystem-from-windows and https://github.com/microsoft/WSL/issues/4150 .
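To find the address Windows can use to reach a service bound inside WSL2, you can read the distro's own interface address; a sketch:

```shell
# WSL2 runs behind a virtual NIC, so "localhost" inside the distro is not
# the Windows localhost. This prints the WSL2 address (in CIDR form) that
# Windows can reach; recent Windows builds also auto-forward localhost
# for ports bound inside WSL2.
ip -4 addr show eth0 | awk '/inet /{print $2}'
```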
Related
I'm getting an error while trying to have Docker set iptables to false; minikube start fails.
Below are my logs:
minikube v1.20.0 on Centos 7.6.1810 (amd64)
* Using the none driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing none bare metal machine for "minikube" ...
* OS release is CentOS Linux 7 (Core)
* Preparing Kubernetes v1.20.2 on Docker 20.10.6 ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
stdout:
[init] Using Kubernetes version: v1.20.2
[preflight] Running pre-flight checks
stderr:
[WARNING Firewalld]: firewalld is active, please ensure ports [8443 10250] are open or your cluster may not function correctly
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-socat]: socat not found in system path
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
The error you included states that you are missing bridge-nf-call-iptables.
bridge-nf-call-iptables is exported by the br_netfilter kernel module.
What you need to do is issue the command
sudo modprobe br_netfilter
and then ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
(Note: write a drop-in file under /etc/sysctl.d/ rather than redirecting with > into /etc/sysctl.conf, which would overwrite that file's existing contents.)
This should fix your problem.
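Before re-running minikube start, it's worth verifying that the module is loaded and the flag actually took effect:

```shell
lsmod | grep br_netfilter                          # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables          # should report "... = 1"
cat /proc/sys/net/bridge/bridge-nf-call-iptables   # the file kubeadm's preflight checks; should print 1
```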
During the installation of Kubernetes, an error is reported when I initialize the master node. I am using an ARM platform server, and the operating system is CentOS 7.6 aarch64. Does Kubernetes support deploying master nodes on the ARM platform?
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
Jun 30 22:53:04 master kubelet[54238]: W0630 22:53:04.188966 54238 pod_container_deletor.go:75] Container "51615bc1d926dcc56606bca9f452c178398bc08c78a2418a346209df28b95854" not found in pod's containers
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.189353 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: I0630 22:53:04.218672 54238 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.236484 54238 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://192.168.1.112:6443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.238898 54238 certificate_manager.go:400] Failed while requesting a signed certificate from the master: cannot create certificate signing request: Post https://192.168.1.112:6443/apis/certificates.k8s.io/v1beta1/certificatesigningrequests: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: I0630 22:53:04.260520 54238 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.289516 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.389666 54238 kubelet.go:2248] node "master" not found
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.436810 54238 reflector.go:125] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: Failed to list *v1.Service: Get https://192.168.1.112:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 192.168.1.112:6443: connect: connection refused
Jun 30 22:53:04 master kubelet[54238]: E0630 22:53:04.489847 54238 kubelet.go:2248] node "master" not found
To start a Kubernetes cluster, make sure you meet the minimum requirements for the Kubernetes platform.
If you want a Kubernetes cluster on low compute resources, we could discuss that separately.
You need:
Docker
A compute node with at least 4 GB memory and 2 CPUs.
The answer below is written for a node like that.
Docker
On each of your machines, install Docker. Version 19.03.11 is recommended, but 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09 are known to work as well. Keep track of the latest verified Docker version in the Kubernetes release notes.
Use the following commands to install Docker on your system:
Install required packages
yum install -y yum-utils device-mapper-persistent-data lvm2
Add the Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
Install Docker CE
yum update -y && yum install -y \
containerd.io-1.2.13 \
docker-ce-19.03.11 \
docker-ce-cli-19.03.11
Create /etc/docker
mkdir /etc/docker
Set up the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
Restart Docker
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
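After the restart, you can confirm that Docker picked up the systemd cgroup driver from daemon.json; this is the setting the kubeadm preflight warning about "cgroupfs" complains about:

```shell
docker info --format '{{.CgroupDriver}}'   # should print "systemd"
sudo systemctl is-active docker            # should print "active"
```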
Kubernetes
As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
Make sure that the br_netfilter module is loaded before this step. This can be done by running lsmod | grep br_netfilter. To load it explicitly call sudo modprobe br_netfilter.
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF
Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
systemctl daemon-reload
systemctl restart kubelet
Initializing your control-plane node
The control-plane node is the machine where the control plane components run, including etcd (the cluster database) and the API Server (which the kubectl command line tool communicates with).
Master
Initialize the Kubernetes cluster (run this on the master node)
kubeadm init --pod-network-cidr 192.168.0.0/16
Note: I will use Calico here, so the pod network CIDR is 192.168.0.0/16.
Move the kubeconfig to the user directory (assuming root)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Worker Node
Join other nodes (run the command below from each worker node)
kubeadm join <IP_PUBLIC>:6443 --token <TOKEN> \
--discovery-token-ca-cert-hash sha256:<HASH>
Note: you will get this command in the output when you successfully initialize the master.
Master Node
Applying calico
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Verify cluster
kubectl get nodes
Reference : https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
I'm trying to start Minikube on Windows 10 using the command below (minikube version v1.10.1):
minikube start --vm-driver=virtualbox --no-vtx-check
But I'm getting the error below:
Creating virtualbox VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
* Preparing Kubernetes v1.18.2 on Docker 19.03.8 ...
* Unable to load cached images: loading cached images: Docker load /var/lib/minikube/images/pause_3.2: loadimage docker.: docker load -i /var/lib/minikube/images/pause_3.2: Process exited with status 1
stdout:
stderr:
Error processing tar file(exit status 1): archive/tar: invalid tar header
*
* [OOM_KILL_SCP] Failed to update cluster updating node: downloading binaries: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:2506->127.0.0.1:2427: wsarecv: An existing connection was forcibly closed by the remote host.
* Suggestion: Disable dynamic memory in your VM manager, or pass in a larger --memory value
* Related issue: https://github.com/kubernetes/minikube/issues/1766
So I thought of downgrading the Minikube version. I used v1.7.2 and then v1.3.0, but in both cases I got the same error mentioned above. Kindly suggest.
Regards
It worked. Below are the steps I followed to get Minikube working on Windows 10 Home edition, where Hyper-V is not supported.
Step 1: Enable virtualization and install VirtualBox
Step 2: Install kubectl and the Minikube installer
Step 3:
Run the command below
minikube start --vm-driver=virtualbox --memory 4096
If it fails, then
run minikube delete and delete the .minikube and .kube folders
Enable WSL 2
Open PowerShell as Administrator and run:
Enable WSL1
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart
Enable WSL2
dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
Restart the system
Install Linux Distribution Package
Disable hypervisorlaunchtype
Open CMD
Run bcdedit to check hypervisor status
bcdedit
If hypervisorlaunchtype is set to auto then disable it:
bcdedit /set hypervisorlaunchtype off
Reboot
Run Minikube again
minikube start --vm-driver=virtualbox --memory 4096
I am trying to run a sample application in my Ubuntu 18 VM.
I have installed the Docker client and server, version 18.06.1-ce. I already have VirtualBox running.
I used the link below to install kubectl 1.14 as well: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux
I also have Minikube v1.0.1 installed, but the minikube start command gets stuck at "Waiting for pods: apiserver" and times out.
harshana#-Virtual-Machine:~$ sudo minikube start
😄 minikube v1.0.1 on linux (amd64)
🤹 Downloading Kubernetes v1.14.1 images in the background ...
⚠️ Ignoring --vm-driver=virtualbox, as the existing "minikube" VM was created using the none driver.
⚠️ To switch drivers, you may create a new VM using `minikube start -p <name> --vm-driver=virtualbox`
⚠️ Alternatively, you may delete the existing VM using `minikube delete -p minikube`
🔄 Restarting existing none VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is xxx.xxx.x.xxx
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.1-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubeadm v1.14.1
💾 Downloading kubelet v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
🔄 Relaunching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver
sudo minikube logs:
May 19 08:11:40 harshana-Virtual-Machine kubelet[10572]: E0519 08:11:40.825465 10572 kubelet.go:2244] node "minikube" not found
May 19 08:11:40 harshana-Virtual-Machine kubelet[10572]: E0519 08:11:40.895848 10572 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I got the same behaviour because I had first created the VM using KVM. I followed the instructions, deleted the VM, and ran the commands below:
1- minikube delete -p minikube
2- minikube start
I am having trouble starting Docker containers on a particular machine: docker run gives random results, and that is the case whether I install atom, Debian Stretch, or Ubuntu 18.04. On the Debian-based OSes, I am using a fresh install of Docker version 18.09.6, build 481bc77.
The most common issue is Error response from daemon: OCI runtime create failed.
Here is what I see when trying to run the hello-world example (it works roughly 1.5 times out of 7):
user#machine:~$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
user#machine:~$ sudo docker run hello-world
docker: Error response from daemon: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v1.linux/moby/02c7ab23649c89b19720d57a549eb703aa442805aa3b468e7610c19e6d8fa2eb/log.json: no such file or directory): runc did not terminate sucessfully: unknown.
ERRO[0001] error waiting for container: context canceled
user#machine:~$ sudo docker run hello-world
docker: Error response from daemon: ttrpc: client shutting down: read unix #->#/containerd-shim/moby/4de0da9c33103f4622907a3ab25535075325366e9a4d0f1c4849ec20ca3cb91f/shim.sock: read: connection reset by peer: unknown.
ERRO[0001] error waiting for container: context canceled
user#machine:~$ sudo docker run hello-world
docker: Error response from daemon: ttrpc: client shutting down: read unix #->#/containerd-shim/moby/151f1ba68a9b28260a00e9cff433c5009382880fb75a28ee79fa549ffdfb21a9/shim.sock: read: connection reset by peer: unknown.
ERRO[0001] error waiting for container: context canceled
user#machine:~$ sudo docker run hello-world
docker: Error response from daemon: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/containerd/io.containerd.runtime.v1.linux/moby/32de5ca60771884d4a236e3e9d2704a48f18f03e93fc6dd195f4e39fb7b56501/log.json: no such file or directory): runc did not terminate sucessfully: unknown.
ERRO[0001] error waiting for container: context canceled
user#machine:~$ sudo docker run hello-world
docker: Error response from daemon: ttrpc: client shutting down: read unix #->#/containerd-shim/moby/dcbb905d8783c65302c1a3afe8fb7913c58e7d5765b5a79072d55fb36f7bc1ea/shim.sock: read: connection reset by peer: unknown.
ERRO[0001] error waiting for container: context canceled
user#machine:~$ sudo docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
docker: Error response from daemon: OCI runtime state failed: runc did not terminate sucessfully: SIGILL: illegal instruction
PC=0x55611122e30c m=3 sigcode=2
goroutine 20 [running]:
runtime.aeshashbody()
/.GOROOT/src/runtime/asm_amd64.s:939 +0x1c fp=0xc42002d6b8 sp=0xc42002d6b0 pc=0x55611122e30c
runtime.mapaccess1_faststr(0x556111a6ad00, 0xc42007f590, 0x5561116ceb02, 0x2, 0x556100000001)
/.GOROOT/src/runtime/hashmap_fast.go:233 +0x1d1 fp=0xc42002d728 sp=0xc42002d6b8 pc=0x5561111e3031
text/template/parse.lexIdentifier(0xc4200bab60, 0x556111ae6e70)
/.GOROOT/src/text/template/parse/lex.go:441 +0x138 fp=0xc42002d7b8 sp=0xc42002d728 pc=0x556111415128
text/template/parse.(*lexer).run(0xc4200bab60)
/.GOROOT/src/text/template/parse/lex.go:228 +0x39 fp=0xc42002d7d8 sp=0xc42002d7b8 pc=0x556111413f99
runtime.goexit()
/.GOROOT/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc42002d7e0 sp=0xc42002d7d8 pc=0x55611122f3b1
created by text/template/parse.lex
/.GOROOT/src/text/template/parse/lex.go:221 +0x161
goroutine 1 [chan receive, locked to thread]:
text/template/parse.(*lexer).nextItem(...)
/.GOROOT/src/text/template/parse/lex.go:195
text/template/parse.(*Tree).next(...)
/.GOROOT/src/text/template/parse/parse.go:64
text/template/parse.(*Tree).nextNonSpace(0xc42009a200, 0x0, 0x0, 0x0, 0x0, 0x0)
/.GOROOT/src/text/template/parse/parse.go:102 +0x159
text/template/parse.(*Tree).parse(0xc42009a200)
/.GOROOT/src/text/template/parse/parse.go:284 +0x2fa
text/template/parse.(*Tree).Parse(0xc42009a200, 0x5561116cead5, 0xf0, 0x0, 0x0, 0x0, 0x0, 0xc42007f800, 0xc42007c6c0, 0x2, ...)
/.GOROOT/src/text/template/parse/parse.go:233 +0x228
text/template/parse.Parse(0x5561116b62fb, 0x5, 0x5561116cead5, 0xf0, 0x0, 0x0, 0x0, 0x0, 0xc42007c6c0, 0x2, ...)
/.GOROOT/src/text/template/parse/parse.go:55 +0x10a
text/template.(*Template).Parse(0xc42008c240, 0x5561116cead5, 0xf0, 0x5561112abfaa, 0x5561116c0486, 0x1d)
/.GOROOT/src/text/template/template.go:198 +0x11a
rax 0x5561116ceb02
rbx 0x55611122e2d0
rcx 0x2
rdx 0xc42002d6c8
rdi 0xc6b7000000000000
rsi 0x1
rbp 0xc42002d718
rsp 0xc42002d6b0
r8 0xc42002d728
r9 0x0
r10 0x3
r11 0x286
r12 0xc42006e468
r13 0xff
r14 0xff
r15 0xf
rip 0x55611122e30c
rflags 0x10202
cs 0x33
fs 0x0
gs 0x0
: unknown.
ERRO[0002] error waiting for container: context canceled
Does anyone know what the error could be?
I had some weird networking errors when installing Docker, but launching the same apt install again worked:
user#machine:~$ sudo apt-get install docker-ce docker-ce-cli containerd.io
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
aufs-tools cgroupfs-mount libltdl7 pigz
The following NEW packages will be installed:
aufs-tools cgroupfs-mount containerd.io docker-ce docker-ce-cli libltdl7 pigz
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 50.7 MB of archives.
After this operation, 243 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://archive.ubuntu.com/ubuntu bionic/universe amd64 pigz amd64 2.4-1 [57.4 kB]
Get:2 http://archive.ubuntu.com/ubuntu bionic/universe amd64 aufs-tools amd64 1:4.9+20170918-1ubuntu1 [104 kB]
E: Method https has died unexpectedly!
E: Sub-process https received signal 4.
If you are facing issues after upgrading to containerd 1.4.0, downgrade to 1.3.4.
For example, if you are on Arch Linux, you can probably do:
cd /var/cache/pacman/pkg/
sudo pacman -U containerd-1.3.4-2-x86_64.pkg.tar.zst
Specifically, this is the error message you might be facing:
docker: Error response from daemon: ttrpc: closed: unknown.
If you need 1.4.0 for some reason, there is an open issue tracking this on GitHub; it is best to follow its status there: https://github.com/containerd/containerd/issues/4483