Docker daemon not starting after adding the -H flag

I'm trying to use Docker Swarm. To do that, I need to start the Docker daemon with the -H flag on each node using this command:
docker -H tcp://0.0.0.0:2375 -d
When I do this on my node (Debian 8, Docker 1.6.0), the command never returns, even though it reports that the daemon has completed initialization.
The complete output:
INFO[0000] +job init_networkdriver()
INFO[0000] +job serveapi(tcp://0.0.0.0:2375)
INFO[0000] Listening for HTTP on tcp (0.0.0.0:2375)
INFO[0000] /!\ DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\
INFO[0000] -job init_networkdriver() = OK (0)
WARN[0000] mountpoint for memory not found
INFO[0000] Loading containers: start.
INFO[0000] Loading containers: done.
INFO[0000] docker daemon: 1.6.0 4749651; execdriver: native-0.2; graphdriver: aufs
INFO[0000] +job acceptconnections()
INFO[0000] -job acceptconnections() = OK (0)
INFO[0000] Daemon has completed initialization
After this last line, nothing happens and I can't enter another command.
I also ran the command inside screen so I could run another command after it, but I get an error message when running any Docker command:
FATA[0000] Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
This message suggests that the daemon didn't start correctly. How can I start the Docker daemon so that the remote API on the Swarm agents is available over TCP for the Swarm manager?

This message states that the client cannot talk to the docker daemon/engine/server. According to the logs, your daemon is running.
With only -H tcp://0.0.0.0:2375, if you didn't export DOCKER_HOST=127.0.0.1:2375, the docker client won't be able to talk to the daemon. You have two ways to handle this:
Exporting DOCKER_HOST
# Exporting DOCKER_HOST when you want to talk to it
$ export DOCKER_HOST=127.0.0.1:2375
$ docker ps
Or update your daemon options to also bind to the unix socket, like this:
# docker -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock -d
$ docker ps
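If you want both listeners to survive a restart, one option is to put the flags in the daemon's options file instead of starting it by hand (a sketch; the path below assumes the sysvinit/upstart packaging of this era, and systemd-based setups may ignore it):
# /etc/default/docker
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
Then restart the daemon with service docker restart.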

Related

How to run minikube inside a docker container?

I intend to test a non-trivial Kubernetes setup as part of CI and wish to run the full system before CD. I cannot run --privileged containers, so I am running the docker container as a sibling to the host using docker run -v /var/run/docker.sock:/var/run/docker.sock
The basic docker setup seems to be working on the container:
linuxbrew@03091f71a10b:~$ docker run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
However, minikube fails to start inside the docker container, reporting connection issues:
linuxbrew@03091f71a10b:~$ minikube start --alsologtostderr -v=7
I1029 15:07:41.274378 2183 out.go:298] Setting OutFile to fd 1 ...
I1029 15:07:41.274538 2183 out.go:345] TERM=xterm,COLORTERM=, which probably does not support color
...
...
...
I1029 15:20:27.040213 197 main.go:130] libmachine: Using SSH client type: native
I1029 15:20:27.040541 197 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a1e20] 0x7a4f00 <nil> [] 0s} 127.0.0.1 49350 <nil> <nil>}
I1029 15:20:27.040593 197 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I1029 15:20:27.040992 197 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:49350: connect: connection refused
This is despite the network being linked and the port being properly forwarded:
linuxbrew@51fbce78731e:~$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
93c35cec7e6f gcr.io/k8s-minikube/kicbase:v0.0.27 "/usr/local/bin/entr…" 2 minutes ago Up 2 minutes 127.0.0.1:49350->22/tcp, 127.0.0.1:49351->2376/tcp, 127.0.0.1:49348->5000/tcp, 127.0.0.1:49349->8443/tcp, 127.0.0.1:49347->32443/tcp minikube
51fbce78731e 7f7ba6fd30dd "/bin/bash" 8 minutes ago Up 8 minutes bpt-ci
linuxbrew@51fbce78731e:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
1e800987d562 bridge bridge local
aa6b2909aa87 host host local
d4db150f928b kind bridge local
a781cb9345f4 minikube bridge local
0a8c35a505fb none null local
linuxbrew@51fbce78731e:~$ docker network connect a781cb9345f4 93c35cec7e6f
Error response from daemon: endpoint with name minikube already exists in network minikube
The minikube container seems to be alive and well when trying to curl from the host, and even ssh is responding:
mastercook@linuxkitchen:~$ curl https://127.0.0.1:49350
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 127.0.0.1:49350
mastercook@linuxkitchen:~$ ssh root@127.0.0.1 -p 49350
The authenticity of host '[127.0.0.1]:49350 ([127.0.0.1]:49350)' can't be established.
ED25519 key fingerprint is SHA256:0E41lExrrezFK1QXULaGHgk9gMM7uCQpLbNPVQcR2Ec.
This key is not known by any other names
What am I missing and how can I make minikube properly discover the correctly working minikube container?
Because minikube does not complete the cluster creation, running Kubernetes in a (sibling) Docker container favours kind instead.
Since the (sibling) container does not know enough about its setup, the networking connections are a bit flawed. Specifically, a loopback IP is selected by kind (and minikube) upon cluster creation, even though the actual container sits on a different IP in the host docker.
To correct the networking, the (sibling) container needs to be connected to the network actually hosting the Kubernetes image. The procedure is illustrated below:
1.) Create a Kubernetes cluster:
linuxbrew@324ba0f819d7:~$ kind create cluster --name acluster
Creating cluster "acluster" ...
βœ“ Ensuring node image (kindest/node:v1.21.1) πŸ–Ό
βœ“ Preparing nodes πŸ“¦
βœ“ Writing configuration πŸ“œ
βœ“ Starting control-plane πŸ•ΉοΈ
βœ“ Installing CNI πŸ”Œ
βœ“ Installing StorageClass πŸ’Ύ
Set kubectl context to "kind-acluster"
You can now use your cluster with:
kubectl cluster-info --context kind-acluster
Thanks for using kind! 😊
2.) Verify if the cluster is accessible:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 127.0.0.1:36779 was refused - did you specify the right host or port?
3.) Since the cluster cannot be reached, retrieve the control plane's master IP. Note the "-control-plane" suffix appended to the cluster name:
linuxbrew@324ba0f819d7:~$ export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
4.) Update the kube config with the actual master IP:
linuxbrew@324ba0f819d7:~$ sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
5.) This IP is still not accessible from the (sibling) container. To connect the container to the correct network, retrieve the docker network ID:
linuxbrew@324ba0f819d7:~$ export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
6.) Finally, connect the (sibling) container (its ID should be stored in the $HOSTNAME environment variable) to the cluster's docker network:
linuxbrew@324ba0f819d7:~$ docker network connect $MASTER_NET $HOSTNAME
7.) Verify whether the control plane is accessible after the changes:
linuxbrew@324ba0f819d7:~$ kubectl cluster-info --context kind-acluster
Kubernetes control plane is running at https://172.18.0.4:6443
CoreDNS is running at https://172.18.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
If kubectl returns the Kubernetes control plane and CoreDNS URLs, as shown in the last step above, the configuration has succeeded.
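For convenience, steps 1 through 7 can be condensed into a few lines (a sketch, run inside the sibling container, assuming the same cluster name as above and the 4-space server: indentation kind writes to the kubeconfig):
# create the cluster, then patch the kubeconfig and networking
kind create cluster --name acluster
export MASTER_IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' acluster-control-plane)
export MASTER_NET=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.NetworkID}}{{end}}' acluster-control-plane)
sed -i "s/^    server:.*/    server: https:\/\/$MASTER_IP:6443/" $HOME/.kube/config
docker network connect $MASTER_NET $HOSTNAME
kubectl cluster-info --context kind-acluster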
You can also run minikube in a docker-in-docker (dind) container; it will use the docker driver.
docker run --name dind -d --privileged docker:20.10.17-dind
docker exec -it dind sh
/ # wget https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
/ # mv minikube-linux-amd64 minikube
/ # chmod +x minikube
/ # ./minikube start --force
...
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
/ # ./minikube kubectl -- run hello --image=hello-world
/ # ./minikube kubectl -- logs pod/hello
Hello from Docker!
...
Also, note that --force is for running minikube with the docker driver as root, which we shouldn't do according to the minikube instructions.
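As a quick sanity check once the cluster is up, the bundled kubectl can be exercised through the minikube wrapper (a minimal example, in the same dind shell as above):
/ # ./minikube kubectl -- get nodes
/ # ./minikube kubectl -- get pods -A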

How to access etcd in docker

I create a container with:
docker run -d --name etcd \
-v /usr/share/ca-certificates/:/etc/ssl/certs \
quay.io/coreos/etcd:v3.0.4 /usr/local/bin/etcd -advertise-client-urls \
http://0.0.0.0:2379 -listen-client-urls http://0.0.0.0:2379
And use
docker exec 40cc9457f132 ifconfig
to get its IP "172.17.0.2"
And then I use local etcdctl to get data,
etcdctl --endpoint=http://172.17.0.2:2379 get /testdir/testkey1
but fail with:
Error: client: etcd cluster is unavailable or misconfigured
error #0: dial tcp 0.0.0.0:2379: getsockopt: connection refused
What should I do?
PS:
To make sure the data is actually stored in the container, I stopped the local etcd first:
systemctl stop etcd
If I don't do that, I can get the data, but it's not the same as the result of
docker exec 40cc9457f132 etcdctl get /testdir/testkey1
"40cc9457f132 " is the container id.
OK, I fixed it. It was a version problem.
My local etcd is v2.2.4 (installed via apt), while the etcd image version is v3.0.4.
I updated both of them to v3.3.5 and set ETCDCTL_API=3.
Now it all seems to work.
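For reference, the v3 client invocation differs slightly from v2 (note --endpoints instead of --endpoint; a sketch assuming etcdctl v3.3+, with a placeholder value):
export ETCDCTL_API=3
etcdctl --endpoints=http://172.17.0.2:2379 put /testdir/testkey1 "value1"
etcdctl --endpoints=http://172.17.0.2:2379 get /testdir/testkey1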
It seems that the same port is used by both the localhost etcd and the docker container.
Please assign another port to the container, for example mapping 2379->2380:
etcdctl --endpoint=http://localhost:2379 -> localhost
etcdctl --endpoint=http://localhost:2380 -> docker container
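For example, publishing the container's client port 2379 on host port 2380 (an arbitrary choice) keeps it from clashing with the local etcd:
docker run -d --name etcd -p 2380:2379 \
    quay.io/coreos/etcd:v3.0.4 /usr/local/bin/etcd \
    -advertise-client-urls http://0.0.0.0:2379 \
    -listen-client-urls http://0.0.0.0:2379
etcdctl --endpoint=http://localhost:2380 get /testdir/testkey1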

Docker daemon on FreeBSD fails to start: issue with graphdriver and zfs

I am on x86 FreeBSD. I followed the instructions here: https://wiki.freebsd.org/Docker.
When I start docker, I get
[root@udoo:dev ]# docker run -it quay.io/skilbjo/router-logs bash
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
So I restart the daemon ...
[root@udoo:dev ]# service docker start && docker -dD
Starting docker...
DEBU[0000] Registering GET, /info
DEBU[0000] Registering GET, /images/search
DEBU[0000] Registering GET, /containers/ps
... etc ....
DEBU[0000] Registering POST, /containers/{name:.*}/pause
DEBU[0000] Registering POST, /containers/{name:.*}/exec
DEBU[0000] Registering POST, /containers/{name:.*}/rename
DEBU[0000] Registering DELETE, /containers/{name:.*}
DEBU[0000] Registering DELETE, /images/{name:.*}
DEBU[0000] Registering OPTIONS,
WARN[0000] Kernel version detection is available only on linux
DEBU[0000] Warning: could not change group /var/run/docker.sock to docker: Group docker not found
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
FATA[0000] Error starting daemon: error initializing graphdriver: Failed to find zfs dataset mounted on '/var/lib/docker' in /proc/mounts
Some other helpful info (exactly as the docs describe):
[root@udoo:dev ]# zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 10.6M 3.74G 23K /zroot
zroot/docker 10.4M 3.74G 10.4M /usr/docker
[root@udoo:dev ]# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 3.88G 10.6M 3.86G - 0% 0% 1.00x ONLINE -
[root@udoo:dev ]# docker version
Client version: 1.7.0-dev
Client API version: 1.19
Go version (client): go1.8.3
Git commit (client): 582db78
OS/Arch (client): freebsd/amd64
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
What's going on here? Why won't the docker daemon load? I found very little while researching "/var/lib/docker in /proc/mounts".
EDIT: Trying @tarun-lalwani's suggestion gets me closer, but the daemon still doesn't start:
DEBU[0000] Warning: could not change group /var/run/docker.sock to docker: Group docker not found
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
DEBU[0000] [zfs] zfs get -rHp -t filesystem all zroot/docker2
DEBU[0000] Using graph driver zfs
DEBU[0000] Using default logging driver json-file
DEBU[0000] Creating images graph
DEBU[0000] Restored 0 elements
DEBU[0000] Creating repository list
DEBU[0000] [bridge] init driver
WARN[0000] port allocator - using fallback port range 49153-65535 due to error: open /proc/sys/net/ipv4/ip_local_port_range: no such file or directory
DEBU[0000] [bridge] found ip address: 172.17.42.1/16
FATA[0000] Error starting daemon: unknown exec driver native
There are several pieces here. First, you do need a ZFS dataset mounted at /var/lib/docker; in your zfs list output, zroot/docker is mounted at /usr/docker instead, which is not where the daemon is looking.
Second, you need to create a docker group in /etc/group.
pw groupadd docker
It also looks like it's trying to hit procfs which is not enabled by default on FreeBSD for several releases. Add this to /etc/fstab
proc /proc procfs rw 0 0
You also need to be on a new enough FreeBSD release. As I recall it required 10.x or higher.
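Putting those pieces together, the setup might look like this (a sketch; dataset and pool names assumed, adjust to your layout):
# create a ZFS dataset mounted where the graphdriver expects it
zfs create -o mountpoint=/var/lib/docker zroot/docker
# create the docker group the daemon warns about
pw groupadd docker
# enable procfs now and on boot
echo "proc /proc procfs rw 0 0" >> /etc/fstab
mount /proc
service docker start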
The docker project did not want to take upstream patches for FreeBSD support, and the effort kind of fell apart after that. Microsoft even tried to encourage it with some hackathons for docker on FreeBSD. It's recommended not to use docker on FreeBSD, especially in production; in fact, as of this writing, the docker-freebsd port hasn't been modified since 2018.

How can I share a network interface with docker without setns error?

I want to fire up two docker containers on the same interface, so I tried the following from the docker docs:
First container:
bash-4.1$ docker run -ti --name=target ubuntu /bin/bash
root@45edefd42404:/#
Second container:
bash-4.1$ docker run -ti --rm --net=container:target ubuntu /bin/bash
setup networking failed to setns current network namespace: invalid argument
FATA[0002] Error response from daemon: Cannot start container ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461: setup networking failed to setns current network namespace: invalid argument
I've googled for failures related to setns and can't find anything relevant. Is there anyplace else I can look to debug this?
My docker daemon log contains this related to the failure (full log: https://gist.github.com/paulweb515/990a1a9edeef1e73b752):
time="2015-04-23T09:17:59-04:00" level="error" msg="Warning: error unmounting device ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461: UnmountDevice: device not-mounted id ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461\n"
time="2015-04-23T09:17:59-04:00" level="info" msg="+job log(die, ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461, ubuntu:14.04)"
time="2015-04-23T09:17:59-04:00" level="info" msg="-job log(die, ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461, ubuntu:14.04) = OK (0)"
Cannot start container ba28e4f14f4b3c2d7b94aa4b0cca8f5b70e6b354842818fe77b31885acc77461: setup networking failed to setns current network namespace: invalid argument

Running docker -d fails on Ubuntu 14.04

I am working on a fresh VM delivered by Host Europe that matches the description on
https://docs.docker.com/installation/ubuntulinux/#ubuntu-trusty-1404-lts-64-bit
(so Ubuntu Trusty 14.04 (LTS) (64-bit), 3.13.0 Linux kernel).
After installing the docker.io package, docker ps fails with:
"Cannot connect to the Docker daemon. Is 'docker -d' running on this host?"
When running docker -d I get:
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
INFO[0000] +job init_networkdriver()
inappropriate ioctl for device
INFO[0000] -job init_networkdriver() = ERR (1)
FATA[0000] inappropriate ioctl for device
Apparently this error happens as well when the docker service tries to start via upstart.
I also tried it with the latest docker package according to "Docker-maintained Package Installation" in the above-mentioned description.
Here is the more detailed output using docker -D -d:
INFO[0000] +job serveapi(unix:///var/run/docker.sock)
DEBU[0000] libdevmapper(3): ioctl/libdm-iface.c:363 (-1) /dev/mapper/control: open failed: Operation not permitted
DEBU[0000] libdevmapper(3): ioctl/libdm-iface.c:415 (-1) Failure to communicate with kernel device-mapper driver.
DEBU[0000] libdevmapper(3): ioctl/libdm-iface.c:417 (-1) Check that device-mapper is available in the kernel.
DEBU[0000] Using graph driver vfs
DEBU[0000] Creating images graph
DEBU[0000] Restored 0 elements
DEBU[0000] Creating repository list
INFO[0000] +job init_networkdriver()
DEBU[0000] Creating bridge docker0 with network 172.17.42.1/16
DEBU[0000] setting bridge mac address = true
inappropriate ioctl for device
INFO[0000] -job init_networkdriver() = ERR (1)
FATA[0000] inappropriate ioctl for device
Ideas, anybody? Thanks in advance. (Seems like a "dead end" to me after lots of successful "dockerizing" on local VMs.)
Most probably your hoster doesn't provide cgroups. That happens sometimes depending on the kind of virtualization they use.
I have the same problem with www.stratro.de.
That's when cat /proc/cgroups returns an empty table.
You can see more here: https://mannlinstones.wordpress.com/2014/08/12/docker-v-server-strato-final-results/
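A quick diagnostic you can run on your own VM (an empty table, or no cgroup mounts at all, points to the virtualization limitation described above):
cat /proc/cgroups
mount | grep cgroup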
Did you check the runtime dependencies from Docker -> Check runtime dependencies? It is definitely a problem with your filesystem; maybe it is related to this problem.
From Docker:
a properly mounted cgroupfs hierarchy (having a single, all-encompassing "cgroup" mount point is not sufficient)
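If the kernel does expose cgroups but nothing has mounted the hierarchy, a minimal manual mount looks something like this (a sketch for cgroup v1 only; on Ubuntu 14.04 the cgroup-lite package normally takes care of this):
sudo mkdir -p /sys/fs/cgroup
sudo mount -t tmpfs cgroup_root /sys/fs/cgroup
for sys in cpu memory devices freezer; do
  sudo mkdir -p /sys/fs/cgroup/$sys
  sudo mount -t cgroup -o $sys $sys /sys/fs/cgroup/$sys
done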
