Docker daemon on FreeBSD fails to start: issue with graphdriver and zfs - docker

I am on x86 FreeBSD. I followed the instructions here: https://wiki.freebsd.org/Docker.
When I start docker, I get
[root@udoo:dev ]# docker run -it quay.io/skilbjo/router-logs bash
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
So I restart the daemon ...
[root@udoo:dev ]# service docker start && docker -dD
Starting docker...
DEBU[0000] Registering GET, /info
DEBU[0000] Registering GET, /images/search
DEBU[0000] Registering GET, /containers/ps
... etc ....
DEBU[0000] Registering POST, /containers/{name:.*}/pause
DEBU[0000] Registering POST, /containers/{name:.*}/exec
DEBU[0000] Registering POST, /containers/{name:.*}/rename
DEBU[0000] Registering DELETE, /containers/{name:.*}
DEBU[0000] Registering DELETE, /images/{name:.*}
DEBU[0000] Registering OPTIONS,
WARN[0000] Kernel version detection is available only on linux
DEBU[0000] Warning: could not change group /var/run/docker.sock to docker: Group docker not found
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
FATA[0000] Error starting daemon: error initializing graphdriver: Failed to find zfs dataset mounted on '/var/lib/docker' in /proc/mounts
Some other helpful info (exactly as the docs describe):
[root@udoo:dev ]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 10.6M 3.74G 23K /zroot
zroot/docker 10.4M 3.74G 10.4M /usr/docker
[root@udoo:dev ]# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 3.88G 10.6M 3.86G - 0% 0% 1.00x ONLINE -
[root@udoo:dev ]# docker version
Client version: 1.7.0-dev
Client API version: 1.19
Go version (client): go1.8.3
Git commit (client): 582db78
OS/Arch (client): freebsd/amd64
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
What's going on here? Why won't the docker daemon load? I found very little while researching "/var/lib/docker in /proc/mounts".
EDIT: Trying @tarun-lalwani's suggestion gets me closer, but not quite started yet...
DEBU[0000] Warning: could not change group /var/run/docker.sock to docker: Group docker not found
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
DEBU[0000] [zfs] zfs get -rHp -t filesystem all zroot/docker2
DEBU[0000] Using graph driver zfs
DEBU[0000] Using default logging driver json-file
DEBU[0000] Creating images graph
DEBU[0000] Restored 0 elements
DEBU[0000] Creating repository list
DEBU[0000] [bridge] init driver
WARN[0000] port allocator - using fallback port range 49153-65535 due to error: open /proc/sys/net/ipv4/ip_local_port_range: no such file or directory
DEBU[0000] [bridge] found ip address: 172.17.42.1/16
FATA[0000] Error starting daemon: unknown exec driver native

There are several pieces here. First, you do need /var/lib/docker mounted on ZFS.
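Given the zroot/docker dataset shown in the question, a minimal sketch, assuming you want to reuse that dataset rather than create a fresh one:
# remount the existing dataset where the daemon expects it
zfs set mountpoint=/var/lib/docker zroot/docker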
Second, you need to create a docker group in /etc/group.
pw groupadd docker
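You will likely also want your login user in that group so you can reach the socket without root; a sketch with a hypothetical user name:
# "youruser" is a placeholder for your actual login
pw groupmod docker -m youruser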
It also looks like it's trying to hit procfs, which has not been mounted by default on FreeBSD for several releases. Add this to /etc/fstab:
proc /proc procfs rw 0 0
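Then mount it right away rather than rebooting:
mount /proc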
You also need to be on a new enough FreeBSD release; as I recall, it required 10.x or later.
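A quick way to confirm what you are running (the freebsd-version utility ships with 10.0 and later):
# -k shows the installed kernel version, -u the userland version
freebsd-version -ku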
The Docker project did not want to take upstream patches for FreeBSD support, and the effort kind of fell apart after that. Microsoft even tried to encourage it with some hackathons for Docker on FreeBSD. It's recommended not to use Docker on FreeBSD, especially for production. In fact, as of this writing, the docker-freebsd port hasn't been modified since 2018.

Related

How to use vpnkit with minikube on mac

There are many questions around this topic, but not the specific info I am after.
The host OS is macOS, and we recently had to uninstall Docker Desktop due to the licensing change. So instead we have moved to minikube, and it is all working great with the VirtualBox driver.
But ideally we would like to use the hyperkit driver, as it requires fewer resources than VirtualBox and is (anecdotally) faster. This also works great until we connect to our VPN (using Cisco AnyConnect), at which point all outbound networking from within the minikube VM stops working, e.g.:
k8> minikube ssh "traceroute 8.8.8.8"
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 46 byte packets
1 host.minikube.internal (192.168.64.1) 0.154 ms 0.181 ms 0.151 ms
2 * * *
Everything else is fine: inbound networking via ingress is all good, and maven-docker-plugin is happily creating images with the minikube Docker daemon. Just nothing outbound.
So I figured I'd try to work with VPNKit, as I have read it is meant to address this issue. But I cannot find much detailed documentation, and so am struggling.
We have tried starting VPNKit with minimal config:
vpnkit --ethernet /tmp/vpnkit-ethernet.socket --debug
And then attempt to start minikube, but it fails:
k8> minikube delete
🔥 Deleting "minikube" in hyperkit ...
💀 Removed all traces of the "minikube" cluster.
k8> minikube start --driver=hyperkit --hyperkit-vpnkit-sock=/tmp/vpnkit-ethernet.socket
😄 minikube v1.25.1 on Darwin 10.15.7
✨ Using the hyperkit driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🔥 Deleting "minikube" in hyperkit ...
🤦 StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: hyperkit crashed! command line:
hyperkit loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
🔥 Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
😿 Failed to start hyperkit VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: hyperkit crashed! command line:
hyperkit loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
โŒ Exiting due to PR_HYPERKIT_CRASHED: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: hyperkit crashed! command line:
hyperkit loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
💡 Suggestion: Hyperkit is broken. Upgrade to the latest hyperkit version and/or Docker for Desktop. Alternatively, you may choose an alternate --driver
🍿 Related issues:
▪ https://github.com/kubernetes/minikube/issues/6079
▪ https://github.com/kubernetes/minikube/issues/5780
And in the vpnkit log we see:
time="2022-02-14T06:07:57Z" level=debug msg="usernet: accepted vmnet connection"
time="2022-02-14T06:07:57Z" level=warning msg="Uwt: Pipe.listen: rejected ethernet connection: EOF"
time="2022-02-14T06:08:07Z" level=debug msg="usernet: accepted vmnet connection"
time="2022-02-14T06:08:07Z" level=warning msg="Uwt: Pipe.listen: rejected ethernet connection: EOF"
So this kind of implies something is not right with how I started vpnkit. I have played with the IP args to ensure it all matches, but it does not help.
My guess is that the --ethernet=path arg is not the right type of socket. I have seen there is also --vsock-path=path but specifying this does not appear to create the socket file like --ethernet=path does. Do I have to create this some other way?
Or are there other config options I need to mess with. e.g. I thought --gateway-forwards=path could help, but can find no documentation on file format or contents.
So, I guess two main questions:
Is what we are trying even possible? Is it the right way to go about it? Or is it much more complicated than simply running the vpnkit command?
If we are on the right track, does anyone have experience with this, and know how to set up the socket for minikube+vpnkit+hyperkit? What args, config, or other setup is required?
And just to note: --hyperkit-vpnkit-sock=auto is not an option for us, as we do not have docker installed, and so the docker socket file does not exist.
And just in case it's a version issue:
k8> minikube version
minikube version: v1.25.1
commit: 3e64b11ed75e56e4898ea85f96b2e4af0301f43d
k8> vpnkit --version
854498c13b1884d4a48d84f3569eb34681af2126
k8> hyperkit -v
hyperkit: 0.20200908
Homepage: https://github.com/docker/hyperkit
License: BSD

Can I run k8s master INSIDE a docker container? Getting errors about k8s looking for host's kernel details

In a docker container I want to run k8s.
When I run kubeadm join ... or kubeadm init commands I see sometimes errors like
\"modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could
not open moddep file
'/lib/modules/3.10.0-1062.1.2.el7.x86_64/modules.dep.bin'.
nmodprobe:
FATAL: Module configs not found in directory
/lib/modules/3.10.0-1062.1.2.el7.x86_64",
err: exit status 1
because (I think) my container does not have the expected kernel header files.
I realise that the container reports its kernel based on the host that is running the container; and looking at k8s code I see
// getKernelConfigReader search kernel config file in a predefined list. Once the kernel config
// file is found it will read the configurations into a byte buffer and return. If the kernel
// config file is not found, it will try to load kernel config module and retry again.
func (k *KernelValidator) getKernelConfigReader() (io.Reader, error) {
    possibePaths := []string{
        "/proc/config.gz",
        "/boot/config-" + k.kernelRelease,
        "/usr/src/linux-" + k.kernelRelease + "/.config",
        "/usr/src/linux/.config",
    }
so I am a bit confused about the simplest way to run k8s inside a container such that it consistently gets past this kernel check.
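One thing those search paths suggest trying is bind-mounting the host's kernel artifacts into the container so the validator can find them; a rough sketch (my-k8s-image is just a placeholder for whatever image I run kubeadm in):
# expose the host's modules and kernel config read-only inside the container
docker run -it --rm \
  -v /lib/modules:/lib/modules:ro \
  -v /boot:/boot:ro \
  my-k8s-image /bin/bash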
I note that when running docker run -it solita/centos-systemd:7 /bin/bash on a macOS host I see:
# uname -r
4.9.184-linuxkit
# ls -l /proc/config.gz
-r--r--r-- 1 root root 23834 Nov 20 16:40 /proc/config.gz
but running the exact same on an Ubuntu VM I see:
# uname -r
4.4.0-142-generic
# ls -l /proc/config.gz
ls: cannot access /proc/config.gz
[Weirdly I don't see this FATAL: Module configs not found in directory error every time, but I guess that is a separate question!]
UPDATE 22/November/2019. I see now that k8s DOES run okay in a container. Real problem was weird/misleading logs. I have added an answer to clarify.
I do not believe that is possible given the nature of containers.
You should instead test your app in a Docker container, then deploy that image to k8s either in the cloud or locally using minikube.
Another solution is to run it under kind, which uses the Docker driver instead of VirtualBox:
https://kind.sigs.k8s.io/docs/user/quick-start/
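For reference, the quick-start boils down to something like this (the default cluster name is "kind", hence the kind-kind context):
# create a single-node cluster that runs inside a Docker container
kind create cluster
# point kubectl at it and verify
kubectl cluster-info --context kind-kind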
It seems the FATAL error part was a bit misleading: it was badly formatted by my test environment (all on one line). When k8s was failing I saw the FATAL and assumed (incorrectly) that it was the root cause. When I format the logs nicely I see:
kubeadm join 172.17.0.2:6443 --token 21e8ab.1e1666a25fd37338 --discovery-token-unsafe-skip-ca-verification --experimental-control-plane --ignore-preflight-errors=all --node-name 172.17.0.3
[preflight] Running pre-flight checks
[WARNING FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] The system verification failed. Printing the output from the verification:
KERNEL_VERSION: 4.4.0-142-generic
DOCKER_VERSION: 18.09.3
OS: Linux
CGROUPS_CPU: enabled
CGROUPS_CPUACCT: enabled
CGROUPS_CPUSET: enabled
CGROUPS_DEVICES: enabled
CGROUPS_FREEZER: enabled
CGROUPS_MEMORY: enabled
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-142-generic/modules.dep.bin'\nmodprobe: FATAL: Module configs not found in directory /lib/modules/4.4.0-142-generic\n", err: exit status 1
[discovery] Trying to connect to API Server "172.17.0.2:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.17.0.2:6443"
[discovery] Failed to request cluster info, will try again: [the server was unable to return a response in the time allotted, but may still be processing the request (get configmaps cluster-info)]
There are other errors later, which I originally thought were a side effect of the nasty-looking FATAL error, e.g. "[util/etcd] Attempt timed out", but I now think the root cause is that the etcd part sometimes times out.
Adding this answer in case someone else puzzled like I was.

Minikube start stuck in waiting for pods and timeout

I am trying to run a sample application in my Ubuntu 18 VM.
I have installed Docker client and server version 18.06.1-ce. I already have VirtualBox running.
I used the link below to install kubectl 1.14 too: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux
I also have Minikube v1.0.1 installed. But the minikube start command gets stuck at "Waiting for pods: apiserver" and times out:
harshana#-Virtual-Machine:~$ sudo minikube start
😄 minikube v1.0.1 on linux (amd64)
🤹 Downloading Kubernetes v1.14.1 images in the background ...
⚠️ Ignoring --vm-driver=virtualbox, as the existing "minikube" VM was created using the none driver.
⚠️ To switch drivers, you may create a new VM using `minikube start -p <name> --vm-driver=virtualbox`
⚠️ Alternatively, you may delete the existing VM using `minikube delete -p minikube`
🔄 Restarting existing none VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is xxx.xxx.x.xxx
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.1-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubeadm v1.14.1
💾 Downloading kubelet v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
🔄 Relaunching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver
sudo minikube logs:
May 19 08:11:40 harshana-Virtual-Machine kubelet[10572]: E0519 08:11:40.825465 10572 kubelet.go:2244] node "minikube" not found
May 19 08:11:40 harshana-Virtual-Machine kubelet[10572]: E0519 08:11:40.895848 10572 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I got the same behaviour because I had first created the VM using KVM. I followed the instructions and deleted the VM, then ran the below:
1- minikube delete -p minikube
2- minikube start
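Once it comes back up, something like this should confirm the apiserver is reachable:
minikube status
kubectl get pods -n kube-system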

Docker daemon not starting after adding the -H flag

I'm trying to use Docker Swarm. To do that, I need to start the Docker daemon with the -H flag on each node, using this command:
docker -H tcp://0.0.0.0:2375 -d
When doing this on my node (Debian 8, Docker 1.6.0) the command never returns, even though it displays that the daemon has completed initialization.
The complete output:
INFO[0000] +job init_networkdriver()
INFO[0000] +job serveapi(tcp://0.0.0.0:2375)
INFO[0000] Listening for HTTP on tcp (0.0.0.0:2375)
INFO[0000] /!\ DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\
INFO[0000] -job init_networkdriver() = OK (0)
WARN[0000] mountpoint for memory not found
INFO[0000] Loading containers: start.
INFO[0000] Loading containers: done.
INFO[0000] docker daemon: 1.6.0 4749651; execdriver: native-0.2; graphdriver: aufs
INFO[0000] +job acceptconnections()
INFO[0000] -job acceptconnections() = OK (0)
INFO[0000] Daemon has completed initialization
After this last line nothing happens and I'm not able to write another command.
I also ran the command under screen, to be able to run a command after the first one, but I get an error message when running a Docker command:
FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
This error clearly states that the daemon didn't start correctly. How can I have a Docker daemon that starts and ensures that the remote API on the Swarm agents is available over TCP for the Swarm manager?
That message only states that the client cannot talk to the Docker daemon/engine/server. According to your logs, the server is running.
With only -H tcp://0.0.0.0:2375, if you didn't export DOCKER_HOST=127.0.0.1:2375, the docker client won't be able to talk to the daemon. You have two ways to handle this:
Exporting DOCKER_HOST
# Exporting DOCKER_HOST when you want to talk to it
$ export DOCKER_HOST=127.0.0.1:2375
$ docker ps
Or update your server options to also bind to the Unix socket, like this:
# docker -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -d
$ docker ps
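Also note the /!\ warning in your log: binding to 0.0.0.0 without TLS exposes the API to anyone who can reach the port. A hedged sketch of the TLS variant (2376 is the conventional TLS port; the .pem names follow the Docker docs and are placeholders, not files you already have):
# generate the CA, server cert, and key first; these file names are assumptions
docker -d -H tcp://0.0.0.0:2376 --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem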

Running docker -d fails on Ubuntu 14.04

I am working on a fresh VM delivered by Host Europe that matches the description on
https://docs.docker.com/installation/ubuntulinux/#ubuntu-trusty-1404-lts-64-bit
(so Ubuntu Trusty 14.04 (LTS) (64-bit), 3.13.0 Linux kernel).
After installing the docker.io package, docker ps fails with:
"Cannot connect to the Docker daemon. Is 'docker -d' running on this host?"
When running docker -d I get:
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] +job init_networkdriver()
inappropriate ioctl for device
INFO[0000] -job init_networkdriver() = ERR (1)
FATA[0000] inappropriate ioctl for device
Apparently this error happens as well when the docker service tries to start via upstart.
I also tried it with the latest docker package according to "Docker-maintained Package Installation" in the above-mentioned description.
Here is the more detailed output using docker -D -d:
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
DEBU[0000] libdevmapper(3): ioctl/libdm-iface.c:363 (-1) /dev/mapper/control: open failed: Operation not permitted
DEBU[0000] libdevmapper(3): ioctl/libdm-iface.c:415 (-1) Failure to communicate with kernel device-mapper driver.
DEBU[0000] libdevmapper(3): ioctl/libdm-iface.c:417 (-1) Check that device-mapper is available in the kernel.
DEBU[0000] Using graph driver vfs
DEBU[0000] Creating images graph
DEBU[0000] Restored 0 elements
DEBU[0000] Creating repository list
INFO[0000] +job init_networkdriver()
DEBU[0000] Creating bridge docker0 with network 172.17.42.1/16
DEBU[0000] setting bridge mac address = true
inappropriate ioctl for device
INFO[0000] -job init_networkdriver() = ERR (1)
FATA[0000] inappropriate ioctl for device
Ideas, anybody? Thanks in advance. (Seems like a dead end to me after lots of successful "dockerizing" on local VMs.)
Most probably your hosting provider doesn't provide cgroups. That happens sometimes, depending on the kind of virtualization they use.
I have the same problem with www.stratro.de
That's the case when cat /proc/cgroups returns an empty table.
You can see more here: https://mannlinstones.wordpress.com/2014/08/12/docker-v-server-strato-final-results/
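To check on your own host, a couple of quick probes (empty output from either is the bad case):
# controllers the kernel exposes, and where (if anywhere) cgroupfs is mounted
cat /proc/cgroups
grep cgroup /proc/mounts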
Did you check the runtime dependencies from Docker -> Check runtime dependencies? It is definitely a problem with your filesystem; maybe it is related to this problem.
From Docker:
a properly mounted cgroupfs hierarchy (having a single, all-encompassing "cgroup" mount point is not sufficient)
