Running docker -d fails on Ubuntu 14.04 - docker

I am working on a fresh VM delivered by Host Europe that matches the description on
https://docs.docker.com/installation/ubuntulinux/#ubuntu-trusty-1404-lts-64-bit
(so Ubuntu Trusty 14.04 (LTS) (64-bit), 3.13.0 Linux kernel).
After installing the docker.io package, docker ps fails with
"Cannot connect to the Docker daemon. Is 'docker -d' running on this host?"
When running docker -d I get:
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] +job init_networkdriver()
inappropriate ioctl for device
INFO[0000] -job init_networkdriver() = ERR (1)
FATA[0000] inappropriate ioctl for device
Apparently this error also happens when the docker service tries to start via upstart.
I also tried it with the latest docker package according to "Docker-maintained Package Installation" in the above-mentioned description.
Here is the more detailed output using docker -D -d:
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
DEBU[0000] libdevmapper(3): ioctl/libdm-iface.c:363 (-1) /dev/mapper/control: open failed: Operation not permitted
DEBU[0000] libdevmapper(3): ioctl/libdm-iface.c:415 (-1) Failure to communicate with kernel device-mapper driver.
DEBU[0000] libdevmapper(3): ioctl/libdm-iface.c:417 (-1) Check that device-mapper is available in the kernel.
DEBU[0000] Using graph driver vfs
DEBU[0000] Creating images graph
DEBU[0000] Restored 0 elements
DEBU[0000] Creating repository list
INFO[0000] +job init_networkdriver()
DEBU[0000] Creating bridge docker0 with network 172.17.42.1/16
DEBU[0000] setting bridge mac address = true
inappropriate ioctl for device
INFO[0000] -job init_networkdriver() = ERR (1)
FATA[0000] inappropriate ioctl for device
Ideas, anybody? Thanks in advance. (Seems like a "dead end" to me after lots of successful "dockerizing" on local VMs.)

Most probably your hosting provider doesn't provide cgroups. That happens sometimes, depending on the kind of virtualization they use.
I have the same problem with www.stratro.de
That's when cat /proc/cgroups returns an empty table.
You can see more here: https://mannlinstones.wordpress.com/2014/08/12/docker-v-server-strato-final-results/
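A quick way to test for that condition is to count the lines of /proc/cgroups: just the header line means no cgroup subsystems are available. The helper below is illustrative only (the function name is made up, not part of Docker):

```shell
# Illustrative check: a /proc/cgroups that contains only its header line
# means the kernel exposes no cgroup subsystems, and the Docker daemon
# cannot start on this host.
has_cgroups() {
  [ "$(wc -l < "$1")" -gt 1 ] && echo yes || echo no
}

has_cgroups /proc/cgroups
```

On an affected V-Server this prints "no".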

Did you check the runtime dependencies from Docker -> Check runtime dependencies? It is definitely a problem with your filesystem; maybe it is related to this problem.
From Docker:
a properly mounted cgroupfs hierarchy (having a single, all-encompassing "cgroup" mount point is not sufficient)
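That requirement can be checked concretely by counting the per-subsystem cgroup mounts; on a healthy host each subsystem (cpu, memory, ...) appears as its own mount of type cgroup. A sketch (the helper name is made up):

```shell
# Illustrative helper: count filesystem entries of type "cgroup" in a
# /proc/mounts-style file. Zero, or a single all-encompassing mount,
# matches the insufficient case the Docker docs warn about.
cgroup_mounts() {
  awk '$3 == "cgroup"' "$1" | wc -l | tr -d ' '
}

cgroup_mounts /proc/mounts
```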

Related

How to use vpnkit with minikube on mac

There are many questions around this topic, but not the specific info I am after.
The host OS is Mac, and we recently had to uninstall Docker Desktop due to their licensing change. So instead we have moved to minikube, and it is all working great with the VirtualBox driver.
But ideally we would like to use the hyperkit driver, as it requires fewer resources than VirtualBox and is (anecdotally) faster. This also all works great until we connect to our VPN (using Cisco AnyConnect), at which point all outbound networking from within the minikube VM stops working. e.g.
k8> minikube ssh "traceroute 8.8.8.8"
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 46 byte packets
1 host.minikube.internal (192.168.64.1) 0.154 ms 0.181 ms 0.151 ms
2 * * *
Everything else is fine: inbound networking via ingress is all good, and maven-docker-plugin is happily creating images with the minikube docker daemon. Just nothing outbound.
So I figured I'd try to work with VPNKit, as I have read it is meant to address this issue. But I cannot find a lot of detailed documentation, and so am struggling.
We have tried starting VPNKit with minimal config:
vpnkit --ethernet /tmp/vpnkit-ethernet.socket --debug
And then attempt to start minikube, but it fails:
k8> minikube delete
🔥 Deleting "minikube" in hyperkit ...
💀 Removed all traces of the "minikube" cluster.
k8> minikube start --driver=hyperkit --hyperkit-vpnkit-sock=/tmp/vpnkit-ethernet.socket
😄 minikube v1.25.1 on Darwin 10.15.7
✨ Using the hyperkit driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🔥 Deleting "minikube" in hyperkit ...
🤦 StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: hyperkit crashed! command line:
hyperkit loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
🔥 Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
😿 Failed to start hyperkit VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: hyperkit crashed! command line:
hyperkit loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
❌ Exiting due to PR_HYPERKIT_CRASHED: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: hyperkit crashed! command line:
hyperkit loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
💡 Suggestion: Hyperkit is broken. Upgrade to the latest hyperkit version and/or Docker for Desktop. Alternatively, you may choose an alternate --driver
🍿 Related issues:
▪ https://github.com/kubernetes/minikube/issues/6079
▪ https://github.com/kubernetes/minikube/issues/5780
And in the vpnkit log we see:
time="2022-02-14T06:07:57Z" level=debug msg="usernet: accepted vmnet connection"
time="2022-02-14T06:07:57Z" level=warning msg="Uwt: Pipe.listen: rejected ethernet connection: EOF"
time="2022-02-14T06:08:07Z" level=debug msg="usernet: accepted vmnet connection"
time="2022-02-14T06:08:07Z" level=warning msg="Uwt: Pipe.listen: rejected ethernet connection: EOF"
So this kind of implies something is not right with how I started vpnkit. I have played with IP args to ensure it all matches, but it does not help.
My guess is that the --ethernet=path arg is not the right type of socket. I have seen there is also --vsock-path=path, but specifying this does not appear to create the socket file like --ethernet=path does. Do I have to create it some other way?
Or are there other config options I need to mess with? e.g. I thought --gateway-forwards=path could help, but I can find no documentation on the file format or contents.
So, I guess two main questions:
Is what we are trying even possible? Is it the right way to go about it? Or is it much more complicated than simply running the vpnkit command?
If we are on the right track, does anyone have experience with this, and know how to set up the socket for minikube+vpnkit+hyperkit? What args, config, or other setup is required?
And just to note: --hyperkit-vpnkit-sock=auto is not an option for us, as we do not have docker installed, and so the docker socket file does not exist.
And just in case it's a version issue:
k8> minikube version
minikube version: v1.25.1
commit: 3e64b11ed75e56e4898ea85f96b2e4af0301f43d
k8> vpnkit --version
854498c13b1884d4a48d84f3569eb34681af2126
k8> hyperkit -v
hyperkit: 0.20200908
Homepage: https://github.com/docker/hyperkit
License: BSD

Minikube start stuck in waiting for pods and timeout

I am trying to run a sample application in my Ubuntu 18 VM.
I have installed the Docker client and server, version 18.06.1-ce. I already have VirtualBox running.
I used the link below to install kubectl 1.14 too: https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl-on-linux
I also have Minikube v1.0.1 installed. But the minikube start command gets stuck at "Waiting for pods: apiserver" and times out.
harshana@harshana-Virtual-Machine:~$ sudo minikube start
😄 minikube v1.0.1 on linux (amd64)
🤹 Downloading Kubernetes v1.14.1 images in the background ...
⚠️ Ignoring --vm-driver=virtualbox, as the existing "minikube" VM was created using the none driver.
⚠️ To switch drivers, you may create a new VM using `minikube start -p <name> --vm-driver=virtualbox`
⚠️ Alternatively, you may delete the existing VM using `minikube delete -p minikube`
🔄 Restarting existing none VM for "minikube" ...
⌛ Waiting for SSH access ...
📶 "minikube" IP address is xxx.xxx.x.xxx
🐳 Configuring Docker as the container runtime ...
🐳 Version of container runtime is 18.06.1-ce
⌛ Waiting for image downloads to complete ...
✨ Preparing Kubernetes environment ...
💾 Downloading kubeadm v1.14.1
💾 Downloading kubelet v1.14.1
🚜 Pulling images required by Kubernetes v1.14.1 ...
🔄 Relaunching Kubernetes v1.14.1 using kubeadm ...
⌛ Waiting for pods: apiserver
sudo minikube logs:
May 19 08:11:40 harshana-Virtual-Machine kubelet[10572]: E0519 08:11:40.825465 10572 kubelet.go:2244] node "minikube" not found
May 19 08:11:40 harshana-Virtual-Machine kubelet[10572]: E0519 08:11:40.895848 10572 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
I got the same behaviour because I had created a first VM using kvm. I followed the instructions, deleted the VM, and ran the below:
1- minikube delete -p minikube
2- minikube start

Docker daemon on FreeBSD fails to start: issue with graphdriver and zfs

I am on x86 FreeBSD. I followed the instructions here: https://wiki.freebsd.org/Docker.
When I start docker, I get
[root@udoo:dev ]# docker run -it quay.io/skilbjo/router-logs bash
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
So I restart the daemon ...
[root@udoo:dev ]# service docker start && docker -dD
Starting docker...
DEBU[0000] Registering GET, /info
DEBU[0000] Registering GET, /images/search
DEBU[0000] Registering GET, /containers/ps
... etc ....
DEBU[0000] Registering POST, /containers/{name:.*}/pause
DEBU[0000] Registering POST, /containers/{name:.*}/exec
DEBU[0000] Registering POST, /containers/{name:.*}/rename
DEBU[0000] Registering DELETE, /containers/{name:.*}
DEBU[0000] Registering DELETE, /images/{name:.*}
DEBU[0000] Registering OPTIONS,
WARN[0000] Kernel version detection is available only on linux
DEBU[0000] Warning: could not change group /var/run/docker.sock to docker: Group docker not found
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
FATA[0000] Error starting daemon: error initializing graphdriver: Failed to find zfs dataset mounted on '/var/lib/docker' in /proc/mounts
Some other helpful info (exactly as the docs describe):
[root@udoo:dev ]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 10.6M 3.74G 23K /zroot
zroot/docker 10.4M 3.74G 10.4M /usr/docker
[root@udoo:dev ]# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 3.88G 10.6M 3.86G - 0% 0% 1.00x ONLINE -
[root@udoo:dev ]# docker version
Client version: 1.7.0-dev
Client API version: 1.19
Go version (client): go1.8.3
Git commit (client): 582db78
OS/Arch (client): freebsd/amd64
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
What's going on here? Why won't the docker daemon load? I found very little while researching "/var/lib/docker in /proc/mounts".
EDIT: Trying @tarun-lalwani's suggestion gets me closer, but it's not quite started yet...
DEBU[0000] Warning: could not change group /var/run/docker.sock to docker: Group docker not found
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
DEBU[0000] [zfs] zfs get -rHp -t filesystem all zroot/docker2
DEBU[0000] Using graph driver zfs
DEBU[0000] Using default logging driver json-file
DEBU[0000] Creating images graph
DEBU[0000] Restored 0 elements
DEBU[0000] Creating repository list
DEBU[0000] [bridge] init driver
WARN[0000] port allocator - using fallback port range 49153-65535 due to error: open /proc/sys/net/ipv4/ip_local_port_range: no such file or directory
DEBU[0000] [bridge] found ip address: 172.17.42.1/16
FATA[0000] Error starting daemon: unknown exec driver native
There are several pieces here. First, you do need /var/lib/docker mounted on ZFS.
Second, you need to create a docker group in /etc/group.
pw groupadd docker
It also looks like it's trying to hit procfs, which has not been mounted by default on FreeBSD for several releases. Add this to /etc/fstab:
proc /proc procfs rw 0 0
You also need to be on a new enough FreeBSD release; as I recall, it required 10.x or higher.
The docker project did not want to take upstream patches for FreeBSD support, and the effort to support docker kind of fell apart after that. Microsoft even tried to encourage it with some hackathons for docker on FreeBSD. It is recommended not to use docker on FreeBSD, especially for production. In fact, as of this writing, the docker-freebsd port hasn't been modified since 2018.
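Collected together, the prerequisites above might look like the setup fragment below on a FreeBSD host. This is a sketch only, assuming the zroot/docker dataset from the question and root privileges; it is not a tested script:

```shell
# Setup sketch for the FreeBSD prerequisites described above (run as root).

# 1. The zfs graphdriver needs the dataset mounted on /var/lib/docker
#    (the question's dataset was mounted on /usr/docker instead).
zfs set mountpoint=/var/lib/docker zroot/docker

# 2. A docker group, so the daemon can chown /var/run/docker.sock
pw groupadd docker

# 3. procfs, which the daemon expects at /proc
echo 'proc /proc procfs rw 0 0' >> /etc/fstab
mount /proc

# 4. Then start the daemon
service docker start
```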

Docker daemon not starting after adding the -H flag

I'm trying to use Docker Swarm. To do that, I need to start the Docker daemon with the -H flag on each node using this command:
docker -H tcp://0.0.0.0:2375 -d
When doing this on my node (Debian 8, Docker 1.6.0) the command never returns, even though it displays that the daemon has completed initialization.
The complete output:
INFO[0000] +job init_networkdriver()
INFO[0000] +job serveapi(tcp://0.0.0.0:2375)
INFO[0000] Listening for HTTP on tcp (0.0.0.0:2375)
INFO[0000] /!\ DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\
INFO[0000] -job init_networkdriver() = OK (0)
WARN[0000] mountpoint for memory not found
INFO[0000] Loading containers: start.
INFO[0000] Loading containers: done.
INFO[0000] docker daemon: 1.6.0 4749651; execdriver: native-0.2; graphdriver: aufs
INFO[0000] +job acceptconnections()
INFO[0000] -job acceptconnections() = OK (0)
INFO[0000] Daemon has completed initialization
After this last line nothing happens, and I'm not able to type another command.
I also ran the command using screen, to be able to run a command after the first one, but then I get an error message when running any Docker command:
FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
This message suggests that the daemon didn't start correctly. How can I have a Docker daemon that starts and makes the remote API on the Swarm agents available over TCP to the Swarm manager?
That message states that the client cannot talk to the docker daemon/engine/server. According to the logs, your server is running.
With only -H tcp://0.0.0.0:2375, if you didn't export DOCKER_HOST=127.0.0.1:2375, the docker client won't be able to talk to the daemon. You have two ways to handle this:
Exporting DOCKER_HOST
# Exporting DOCKER_HOST when you want to talk to it
$ export DOCKER_HOST=127.0.0.1:2375
$ docker ps
Or update your server options to also bind to the socket, like this
# docker -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -d
$ docker ps

Docker containers keep losing internet

I have configured docker on CentOS 6.5. Quite often the containers lose internet access, and in such instances I have to restart Docker on the host. Recently, I tried to run a yum update from inside a container, which failed. Following is the log from /var/log/docker:
2014/07/15 10:33:36 GET /v1.12/containers/update_test/json
[b601ba8c] +job container_inspect(update_test)
[b601ba8c] -job container_inspect(update_test) = OK (0)
2014/07/15 10:33:36 POST /v1.12/containers/update_test/attach?stderr=1&stdin=1&stdout=1&stream=1
[b601ba8c] +job container_inspect(update_test)
2014/07/15 10:33:36 POST /v1.12/containers/update_test/start
[b601ba8c] +job start(update_test)
[b601ba8c] -job container_inspect(update_test) = OK (0)
[b601ba8c] +job attach(update_test)
[b601ba8c] +job allocate_interface(5a5c0247441ef5872b531ba720ba1f7d8af2df1cbd47b4a98b84a7b995384d8b)
[b601ba8c] -job allocate_interface(5a5c0247441ef5872b531ba720ba1f7d8af2df1cbd47b4a98b84a7b995384d8b) = OK (0)
[b601ba8c] -job start(update_test) = OK (0)
2014/07/15 10:33:36 POST /v1.12/containers/update_test/resize?h=37&w=165
[b601ba8c] +job resize(update_test, 37, 165)
[b601ba8c] -job resize(update_test, 37, 165) = OK (0)
[b601ba8c] +job release_interface(5a5c0247441ef5872b531ba720ba1f7d8af2df1cbd47b4a98b84a7b995384d8b)
[b601ba8c] -job release_interface(5a5c0247441ef5872b531ba720ba1f7d8af2df1cbd47b4a98b84a7b995384d8b) = OK (0)
[error] container.go:492 5a5c0247441ef5872b531ba720ba1f7d8af2df1cbd47b4a98b84a7b995384d8b: Error closing terminal: invalid argument
[b601ba8c] -job attach(update_test) = OK (0)
As mentioned above, restarting Docker on the host solves the issue. I don't want to keep restarting docker, as I am planning to run a production application through it. Does anybody have any ideas in this regard?
Please let me know if you need more information.
It is my bad that I did not mention the host was hosted at Rackspace. My apologies for not clarifying that (at the time I thought it was irrelevant). It was Rackspace's automated routine that kept messing up the iptables rules, which obviously affected docker's routing. Rackspace suggested creating a lock file somewhere in /etc to prevent their automated routine from touching iptables; I have forgotten the details now, but it should not be difficult for anybody to get this from them if they experience the issue.
As suggested by Maduraiveeran, to prevent Rackspace from overwriting Docker's custom iptables rules, you need to create a lock file in /etc. The file should be named rackconnect-allow-custom-iptables:
touch /etc/rackconnect-allow-custom-iptables
Once the lock file has been created, restart your docker daemon with sudo service docker restart (CentOS 6 uses upstart, not systemd).
https://support.rackspace.com/how-to/how-to-prevent-rackconnect-from-overwriting-custom-iptables-rules-on-linux-cloud-servers/
If you don't set a DNS server in the host's /etc/resolv.conf, you can face internet issues inside containers. If you still have this issue, consider using --dns 209.244.0.3 in your docker run settings; the solution might look like this:
docker run -d --dns 209.244.0.3 centos webapp.sh
Another useful param you may require at some point is --add-host, which adds an entry to the container's /etc/hosts file.
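The effect of --add-host can be illustrated without docker at all: it simply appends an ip-to-name line to the container's /etc/hosts. A simulation with a throwaway file (the db.internal/10.0.0.5 mapping and the helper name are made up):

```shell
# Simulates what `docker run --add-host db.internal:10.0.0.5 ...` does
# to a container's /etc/hosts, using a plain temp file instead.
add_host_entry() {   # usage: add_host_entry host:ip file
  host=${1%%:*}
  ip=${1#*:}
  printf '%s\t%s\n' "$ip" "$host" >> "$2"
}

rm -f /tmp/hosts.demo
add_host_entry db.internal:10.0.0.5 /tmp/hosts.demo
cat /tmp/hosts.demo
```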
