I am using Ubuntu 22.04 with Docker version 20.10.8.
Issue: while Docker is running (I start it with docker-compose up in my terminal, using a Dockerfile), I can search in Google or Firefox, but I can NOT open the result pages, and I can NOT access some other websites like Amazon, Stack Overflow, Instacart, etc. The browser shows "This site can't be reached".
If I run sudo cat /var/log/syslog, I get this:
Sep 15 11:02:59 pop-os slack.desktop[117910]: [09/15/22, 11:02:59:922] info: [API-Q] (T07RRCX35) f5a63264-1663254179.919 client.counts is ENQUEUED
Sep 15 11:03:01 pop-os slack.desktop[117910]: [09/15/22, 11:03:01:920] info: [CHECK-FOR-OLD-IMS] (T7TUUDHHN) Within limit [external DMs]: 0
Sep 15 11:03:01 pop-os slack.desktop[117910]: [09/15/22, 11:03:01:920] info: [CHECK-FOR-OLD-IMS] (T7TUUDHHN) Within limit [internal DMs]: 3
Sep 15 11:03:01 pop-os slack.desktop[117910]: [09/15/22, 11:03:01:921] info: [API-Q] (T7TUUDHHN) f5a63264-1663254181.921 users.channelSections.list called with reason: conditional-fetch-manager
Sep 15 11:03:01 pop-os slack.desktop[117910]: [09/15/22, 11:03:01:921] info: [API-Q] (T7TUUDHHN) f5a63264-1663254181.921 users.channelSections.list is ENQUEUED
Sep 15 11:03:01 pop-os slack.desktop[117910]: [09/15/22, 11:03:01:962] info: [API-Q] (T7TUUDHHN) f5a63264-1663254181.921 users.channelSections.list is ACTIVE
Sep 15 11:03:03 pop-os slack.desktop[117910]: [09/15/22, 11:03:03:920] info: [API-Q] (T0390MPHBCN) f5a63264-1663254183.919 client.counts called with reason: client-counts-api/fetchClientCounts
Sep 15 11:03:03 pop-os slack.desktop[117910]: [09/15/22, 11:03:03:920] info: [API-Q] (T0390MPHBCN) f5a63264-1663254183.919 client.counts is ENQUEUED
Sep 15 11:03:09 pop-os slack.desktop[117910]: [09/15/22, 11:03:09:920] info: [API-Q] (T7TUUDHHN) f5a63264-1663254189.920 client.counts called with reason: client-counts-api/fetchClientCounts
Sep 15 11:03:09 pop-os slack.desktop[117910]: [09/15/22, 11:03:09:920] info: [API-Q] (T7TUUDHHN) f5a63264-1663254189.920 client.counts is ENQUEUED
Sep 15 11:03:10 pop-os slack.desktop[117910]: [09/15/22, 11:03:10:575] info: [API-Q] (T03U4EJLT) f5a63264-1663254165.925 client.counts is ERROR: request did not complete
Sep 15 11:03:13 pop-os slack.desktop[117910]: [09/15/22, 11:03:13:555] info: [API-Q] (TKF75556W) f5a63264-1663254165.960 users.channelSections.list is ERROR: request did not complete
Sep 15 11:03:13 pop-os slack.desktop[117910]: [09/15/22, 11:03:13:555] info: [API-Q] (TKF75556W) f5a63264-1663254170.919 client.counts is ACTIVE
Sep 15 11:03:13 pop-os slack.desktop[117910]: [09/15/22, 11:03:13:651] info: [API-Q] (T07RRCX35) f5a63264-1663254171.921 users.channelSections.list is ERROR: request did not complete
Sep 15 11:03:13 pop-os slack.desktop[117910]: [09/15/22, 11:03:13:651] info: [API-Q] (T07RRCX35) f5a63264-1663254179.919 client.counts is ACTIVE
Sep 15 11:03:14 pop-os slack.desktop[117910]: [09/15/22, 11:03:14:920] info: [API-Q] (TKF75556W) f5a63264-1663254165.960 users.channelSections.list is retrying, attempt 2
Sep 15 11:03:14 pop-os slack.desktop[117910]: [09/15/22, 11:03:14:920] info: [API-Q] (TKF75556W) f5a63264-1663254165.960 users.channelSections.list is ENQUEUED
Sep 15 11:03:15 pop-os slack.desktop[117910]: [09/15/22, 11:03:15:920] info: [API-Q] (T03U4EJLT) f5a63264-1663254165.925 client.counts is retrying, attempt 2
Sep 15 11:03:15 pop-os slack.desktop[117910]: [09/15/22, 11:03:15:920] info: [API-Q] (T03U4EJLT) f5a63264-1663254165.925 client.counts is ENQUEUED
Sep 15 11:03:15 pop-os slack.desktop[117910]: [09/15/22, 11:03:15:920] info: [API-Q] (T03U4EJLT) f5a63264-1663254165.925 client.counts is ACTIVE
Sep 15 11:03:15 pop-os slack.desktop[117910]: [09/15/22, 11:03:15:920] info: [API-Q] (T07RRCX35) f5a63264-1663254171.921 users.channelSections.list is retrying, attempt 2
Sep 15 11:03:15 pop-os slack.desktop[117910]: [09/15/22, 11:03:15:920] info: [API-Q] (T07RRCX35) f5a63264-1663254171.921 users.channelSections.list is ENQUEUED
Sep 15 11:03:16 pop-os slack.desktop[117910]: [09/15/22, 11:03:16:719] info: [API-Q] (T0390MPHBCN) f5a63264-1663254173.921 users.channelSections.list is ERROR: request did not complete
Sep 15 11:03:16 pop-os slack.desktop[117910]: [09/15/22, 11:03:16:719] info: [API-Q] (T0390MPHBCN) f5a63264-1663254183.919 client.counts is ACTIVE
Sep 15 11:03:21 pop-os slack.desktop[117910]: [09/15/22, 11:03:21:922] info: [API-Q] (T0390MPHBCN) f5a63264-1663254173.921 users.channelSections.list is retrying, attempt 2
Sep 15 11:03:21 pop-os slack.desktop[117910]: [09/15/22, 11:03:21:922] info: [API-Q] (T0390MPHBCN) f5a63264-1663254173.921 users.channelSections.list is ENQUEUED
Sep 15 11:03:22 pop-os kernel: [96800.697247] [UFW BLOCK] IN=wlp4s0 OUT= MAC=01:00:5e:00:00:01:a4:56:cc:bd:dc:61:08:00 SRC=10.0.0.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0xC0 TTL=1 ID=34348 PROTO=2
Sep 15 11:03:25 pop-os kernel: [96803.769337] [UFW BLOCK] IN=wlp4s0 OUT= MAC=01:00:5e:00:00:fb:a4:56:cc:bd:dc:61:08:00 SRC=10.0.0.1 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
Sep 15 11:03:25 pop-os slack.desktop[117910]: [09/15/22, 11:03:25:840] info: [API-Q] (T7TUUDHHN) f5a63264-1663254181.921 users.channelSections.list is ERROR: request did not complete
Sep 15 11:03:25 pop-os slack.desktop[117910]: [09/15/22, 11:03:25:844] info: [API-Q] (T7TUUDHHN) f5a63264-1663254189.920 client.counts is ACTIVE
Before starting docker,
route -n gives:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.0.1 0.0.0.0 UG 600 0 0 wlp4s0
10.0.0.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp4s0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlp4s0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.144.0 0.0.0.0 255.255.240.0 U 0 0 0 br-be62c9f6aadf
docker network list gives:
NETWORK ID NAME DRIVER SCOPE
be62c9f6aadf 0-course-fall-2022_default bridge local
6ee13c43c492 bridge bridge local
6f15c9584a6f host host local
9c4377540201 none null local
ip addr gives:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp5s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether f0:2f:74:20:cc:4b brd ff:ff:ff:ff:ff:ff
3: enp6s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether f0:2f:74:20:cc:49 brd ff:ff:ff:ff:ff:ff
4: wlp4s0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 44:af:28:30:96:61 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.169/24 brd 10.0.0.255 scope global dynamic noprefixroute wlp4s0
valid_lft 162382sec preferred_lft 162382sec
inet6 2601:84:8900:9770:1b6f:ce0:b930:88eb/64 scope global temporary dynamic
valid_lft 300sec preferred_lft 300sec
inet6 2601:84:8900:9770:7d59:93a5:a2d2:63be/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 300sec preferred_lft 300sec
inet6 fe80::ed79:4b63:ef2b:7fe9/64 scope link noprefixroute
valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:66:6f:0f:01 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
47: br-be62c9f6aadf: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:c2:99:82:c8 brd ff:ff:ff:ff:ff:ff
inet 192.168.144.1/20 brd 192.168.159.255 scope global br-be62c9f6aadf
valid_lft forever preferred_lft forever
inet6 fe80::42:c2ff:fe99:82c8/64 scope link
valid_lft forever preferred_lft forever
After starting docker,
route -n gives:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 0.0.0.0 0.0.0.0 U 0 0 0 veth2dfd7b5
0.0.0.0 10.0.0.1 0.0.0.0 UG 600 0 0 wlp4s0
10.0.0.0 0.0.0.0 255.255.255.0 U 600 0 0 wlp4s0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 veth2dfd7b5
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 wlp4s0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.144.0 0.0.0.0 255.255.240.0 U 0 0 0 br-be62c9f6aadf
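For what it's worth, route selection over the table above can be sketched in a few lines: the kernel picks the longest matching prefix first, then the lowest metric, so the new metric-0 default route on veth2dfd7b5 outranks the metric-600 gateway route on wlp4s0 for all external destinations. This is only a simplified illustration using Python's ipaddress module; pick_route is a hypothetical helper, not part of any tool.

```python
import ipaddress

# Entries from the `route -n` table above: (destination, genmask, metric, iface).
# (The gateway column is omitted; it doesn't affect which entry is chosen.)
routes = [
    ("0.0.0.0",       "0.0.0.0",       0,    "veth2dfd7b5"),
    ("0.0.0.0",       "0.0.0.0",       600,  "wlp4s0"),
    ("10.0.0.0",      "255.255.255.0", 600,  "wlp4s0"),
    ("169.254.0.0",   "255.255.0.0",   0,    "veth2dfd7b5"),
    ("169.254.0.0",   "255.255.0.0",   1000, "wlp4s0"),
    ("172.17.0.0",    "255.255.0.0",   0,    "docker0"),
    ("192.168.144.0", "255.255.240.0", 0,    "br-be62c9f6aadf"),
]

def pick_route(dst: str) -> str:
    """Pick the outgoing interface for dst: longest prefix wins,
    then lowest metric (a simplification of the kernel's selection)."""
    addr = ipaddress.ip_address(dst)
    best_key, best_iface = None, None
    for dest, mask, metric, iface in routes:
        net = ipaddress.ip_network(f"{dest}/{mask}")
        if addr in net:
            key = (net.prefixlen, -metric)
            if best_key is None or key > best_key:
                best_key, best_iface = key, iface
    return best_iface

# After `docker-compose up`, external traffic matches the new metric-0
# default route on the veth interface rather than the real gateway:
print(pick_route("142.250.80.46"))  # -> veth2dfd7b5 (a public address)
print(pick_route("10.0.0.1"))       # -> wlp4s0 (the LAN gateway)
```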
docker network inspect be62c9f6aadf gives:
[
{
"Name": "somename",
"Id": "be62c9f6aadf496bcb1a771ab91c12abe3224a20ef1fdf7e43be18c49279ffe5",
"Created": "2022-09-15T10:04:25.791740449-04:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "192.168.144.0/20",
"Gateway": "192.168.144.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"4d898f2de0dccd8ae33d98c4fefc79617839e900080c146a02796e8a168df4e6": {
"Name": "0-course-fall-2022_ggtree_1",
"EndpointID": "1e9ac1158d47479fecc2634167aefddcfc55d047f54176e03a4948b6245687f2",
"MacAddress": "02:42:c0:a8:90:02",
"IPv4Address": "192.168.144.2/20",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "default",
"com.docker.compose.project": "0-course-fall-2022",
"com.docker.compose.version": "1.29.2"
}
}
]
ip addr gives:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp5s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
link/ether f0:2f:74:20:cc:4b brd ff:ff:ff:ff:ff:ff
3: enp6s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether f0:2f:74:20:cc:49 brd ff:ff:ff:ff:ff:ff
4: wlp4s0: <BROADCAST,MULTICAST,DYNAMIC,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 44:af:28:30:96:61 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.169/24 brd 10.0.0.255 scope global dynamic noprefixroute wlp4s0
valid_lft 161995sec preferred_lft 161995sec
inet6 2601:84:8900:9770:253f:ef6d:b518:d3d3/64 scope global temporary dynamic
valid_lft 300sec preferred_lft 300sec
inet6 2601:84:8900:9770:7d59:93a5:a2d2:63be/64 scope global dynamic mngtmpaddr noprefixroute
valid_lft 300sec preferred_lft 300sec
inet6 fe80::2544:291c:7ab:f742/64 scope link noprefixroute
valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:66:6f:0f:01 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
47: br-be62c9f6aadf: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c2:99:82:c8 brd ff:ff:ff:ff:ff:ff
inet 192.168.144.1/20 brd 192.168.159.255 scope global br-be62c9f6aadf
valid_lft forever preferred_lft forever
inet6 fe80::42:c2ff:fe99:82c8/64 scope link
valid_lft forever preferred_lft forever
57: veth2dfd7b5@if56: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-be62c9f6aadf state UP group default
link/ether 0a:98:ae:0f:6e:cf brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 169.254.173.91/16 brd 169.254.255.255 scope global veth2dfd7b5
valid_lft forever preferred_lft forever
inet6 fe80::898:aeff:fe0f:6ecf/64 scope link
valid_lft forever preferred_lft forever
docker network list gives:
NETWORK ID NAME DRIVER SCOPE
be62c9f6aadf somename bridge local
6ee13c43c492 bridge bridge local
6f15c9584a6f host host local
9c4377540201 none null local
Also, if I switch Wi-Fi to my personal hotspot, there is no problem using Docker.
Any suggestions?
I'm trying to use Docker on my Raspberry Pi 2B running Arch Linux ARM, but the Docker daemon cannot contact the Docker registry.
docker pull hello-world results in this Docker daemon output:
DEBU[2020-07-02T16:47:21.391929909Z] Calling HEAD /_ping
DEBU[2020-07-02T16:47:21.394012289Z] Calling GET /v1.40/info
DEBU[2020-07-02T16:47:21.444644977Z] Calling POST /v1.40/images/create?fromImage=hello-world&tag=latest
DEBU[2020-07-02T16:47:21.445747989Z] Trying to pull hello-world from https://registry-1.docker.io v2
WARN[2020-07-02T16:47:36.446771393Z] Error getting v2 registry: Get "https://registry-1.docker.io/v2/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
INFO[2020-07-02T16:47:36.447023996Z] Attempting next endpoint for pull after error: Get "https://registry-1.docker.io/v2/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
ERRO[2020-07-02T16:47:36.447505346Z] Handler for POST /v1.40/images/create returned error: Get "https://registry-1.docker.io/v2/": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I can, however, curl that address; curl https://registry-1.docker.io/v2/ gives:
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
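That UNAUTHORIZED body is the registry's normal unauthenticated response, so the TCP/TLS path from the host is fine and only the daemon's own requests time out. As a small sanity check of what the body means, it can be parsed like this (a sketch, only re-reading the JSON shown above):

```python
import json

# The body returned by `curl https://registry-1.docker.io/v2/` above.
body = '{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}'

err = json.loads(body)["errors"][0]
# An UNAUTHORIZED error means the registry answered: connectivity works,
# it just wants a token, which `docker pull` normally fetches itself.
print(err["code"], "-", err["message"])  # UNAUTHORIZED - authentication required
```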
I can also dig it; dig registry-1.docker.io gives:
; <<>> DiG 9.16.4 <<>> registry-1.docker.io
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 54200
;; flags: qr rd ra; QUERY: 1, ANSWER: 8, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;registry-1.docker.io. IN A
;; ANSWER SECTION:
registry-1.docker.io. 53 IN A 52.5.11.128
registry-1.docker.io. 53 IN A 35.174.73.84
registry-1.docker.io. 53 IN A 52.72.232.213
registry-1.docker.io. 53 IN A 52.1.121.53
registry-1.docker.io. 53 IN A 52.54.232.21
registry-1.docker.io. 53 IN A 52.4.20.24
registry-1.docker.io. 53 IN A 54.236.131.166
registry-1.docker.io. 53 IN A 54.85.107.53
;; Query time: 0 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Thu Jul 02 16:54:59 UTC 2020
;; MSG SIZE rcvd: 177
My daemon.json is:
{
"dns": ["8.8.8.8", "8.8.4.4"],
"debug": true
}
/etc/resolv.conf
# Generated by resolvconf
nameserver 8.8.8.8
nameserver 8.8.4.4
output of ip link:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether b8:27:eb:e1:df:6c brd ff:ff:ff:ff:ff:ff
3: wlan0: <BROADCAST,MULTICAST> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether b8:27:eb:b4:8a:39 brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:2b:be:08:ee brd ff:ff:ff:ff:ff:ff
5: br-35ff30d41af9: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:01:4c:7e:30 brd ff:ff:ff:ff:ff:ff
Any ideas?
Problem: the internet isn't accessible from within a Docker container.
on my bare metal Ubuntu 17.10 box...
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=10.8 ms
but...
$ docker run --rm debian:latest ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from 7911d89db6a4 (192.168.220.2): Destination Host Unreachable
I think the root cause is that I had to set up a non-default network for docker0, because the default range 172.17.0.1 was already in use within my organization.
My /etc/docker/daemon.json file needs to look like this in order for Docker to start successfully.
$ cat /etc/docker/daemon.json
{
"bip": "192.168.220.1/24",
"fixed-cidr": "192.168.220.0/24",
"fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
"mtu": 1500,
"default-gateway": "192.168.220.10",
"default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
"dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
"debug": true
}
Note that the default-gateway setting looks wrong. However, if I correct it to read 192.168.220.1, the Docker service fails to start. Running dockerd directly at the command line produces the most helpful logging, thus:
With "default-gateway": "192.168.220.1" in daemon.json...
$ sudo dockerd
-----8<-----
many lines removed
----->8-----
Error starting daemon: Error initializing network controller: Error creating default "bridge" network: failed to allocate secondary ip address (DefaultGatewayIPv4:192.168.220.1): Address already in use
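The two addresses involved can be checked against each other with a quick sketch (values copied from the daemon.json above; Python's ipaddress module):

```python
import ipaddress

bridge_net = ipaddress.ip_network("192.168.220.0/24")    # fixed-cidr
bridge_ip  = ipaddress.ip_interface("192.168.220.1/24")  # bip
gateway    = ipaddress.ip_address("192.168.220.10")      # default-gateway

# The configured gateway is an ordinary host inside the bridge subnet,
# not the bridge's own address -- which is why it "looks wrong".
assert gateway in bridge_net
assert gateway != bridge_ip.ip

# Setting default-gateway to .1 instead collides with the address the
# bridge itself holds, matching the "Address already in use" error.
assert ipaddress.ip_address("192.168.220.1") == bridge_ip.ip
```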
Here's the info for docker0...
$ ip addr show docker0
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:10:bc:66:fd brd ff:ff:ff:ff:ff:ff
inet 192.168.220.1/24 brd 192.168.220.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:10ff:febc:66fd/64 scope link
valid_lft forever preferred_lft forever
And routing table...
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.62.131.1 0.0.0.0 UG 100 0 0 enp14s0
10.62.131.0 0.0.0.0 255.255.255.0 U 100 0 0 enp14s0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 enp14s0
192.168.220.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
Is this the root cause? How do I achieve the seemingly mutually exclusive states of:
docker0 interface address is x.x.x.1
gateway address is same, x.x.x.1
dockerd runs ok
?
Thanks!
Longer answer to Wedge Martin's question. I made the changes to daemon.json as you suggested:
{
"bip": "192.168.220.2/24",
"fixed-cidr": "192.168.220.0/24",
"fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
"mtu": 1500,
"default-gateway": "192.168.220.1",
"default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
"dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
"debug": true
}
so at least the daemon starts, but I still don't have internet access within a container...
$ docker run -it --rm debian:latest bash
root@bd9082bf70a0:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:dc:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.220.3/24 brd 192.168.220.255 scope global eth0
valid_lft forever preferred_lft forever
root@bd9082bf70a0:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from bd9082bf70a0 (192.168.220.3): Destination Host Unreachable
It turned out that less is more. Simplifying daemon.json to the following resolved my issues.
{
"bip": "192.168.220.2/24"
}
If you don't set the gateway, Docker will set it to the first non-network address in the subnet, i.e. .1; but if you do set it, Docker conflicts when allocating the bridge, because the .1 address is already in use. You should only set default-gateway if it's outside the network range.
Now, bip can tell Docker to use an address other than .1 for the bridge, so setting bip can avoid the conflict, but I am not sure it will end up doing what you want. It will probably cause routing issues, as the default route will point to an address with no host responding.
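In ipaddress terms, the "first non-network address" mentioned above is simply the first usable host of the subnet; a quick sketch with the addresses from the simplified daemon.json:

```python
import ipaddress

net = ipaddress.ip_network("192.168.220.0/24")

# The "first non-network address" is the first usable host, .1 ...
first_host = next(net.hosts())
print(first_host)  # 192.168.220.1

# ... so with bip set to 192.168.220.2/24 the bridge keeps clear of .1,
# and no explicit default-gateway needs to be configured at all.
bip = ipaddress.ip_interface("192.168.220.2/24")
assert bip.network == net and bip.ip != first_host
```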
I'm learning Docker Machine and have run into some problems.
My computer is a Mac running Docker for Mac. I created two VMs, vm1 and vm2, with docker-machine, and tried to init a swarm whose nodes are vm1, vm2, and my Mac. My steps are below:
1. Create an image called "sprinla/cms:latest" and a docker-compose.yml:
version: "3"
services:
web:
image: sprinla/cms:latest
deploy:
replicas: 1
ports:
- "80:80"
networks:
- webnet
command: /data/start.sh
networks:
webnet:
2. Create two VMs. Here is the VM info:
yuxrdeMBP:~ yuxr$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
vm1 - virtualbox Running tcp://192.168.99.100:2376 v17.12.0-ce
vm2 - virtualbox Running tcp://192.168.99.101:2376 v17.12.0-ce
3. Init the swarm on my Mac host:
yuxrdeMBP:~ yuxr$ docker swarm init
Swarm initialized: current node (uf6rg1v91exlwntlskyj8iim7) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-3qb32l84n0s8vl74rj9d6psm7bzdany3piw55ohtrq0q7ly814-c5km5zg3kj9d6vn6vrtt6xxtg 192.168.65.2:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
4. Join vm1 to the swarm; then comes the problem:
yuxrdeMBP:~ yuxr$ docker-machine ssh vm1 "docker swarm join --token SWMTKN-1-3qb32l84n0s8vl74rj9d6psm7bzdany3piw55ohtrq0q7ly814-c5km5zg3kj9d6vn6vrtt6xxtg 192.168.65.2:2377"
Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.
exit status 1
5. Cat the Docker log:
time="2018-01-03T17:13:50.387854642Z" level=debug msg="Calling GET /_ping"
time="2018-01-03T17:13:50.388228524Z" level=debug msg="Calling GET /_ping"
time="2018-01-03T17:13:50.388521374Z" level=debug msg="Calling POST /v1.35/swarm/join"
time="2018-01-03T17:13:50.388583426Z" level=debug msg="form data: {\"AdvertiseAddr\":\"\",\"Availability\":\"\",\"DataPathAddr\":\"\",\"JoinToken\":\"*****\",\"ListenAddr\":\"0.0.0.0:2377\",\"RemoteAddrs\":[\"192.168.65.2:2377\"]}"
time="2018-01-03T17:13:55.392578452Z" level=error msg="failed to retrieve remote root CA certificate" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node
time="2018-01-03T17:14:02.394608777Z" level=error msg="failed to retrieve remote root CA certificate" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node
time="2018-01-03T17:14:09.395720474Z" level=error msg="failed to retrieve remote root CA certificate" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node
time="2018-01-03T17:14:10.393743738Z" level=error msg="Handler for POST /v1.35/swarm/join returned error: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the \"docker info\" command to see the current swarm status of your node."
time="2018-01-03T17:14:16.398095265Z" level=error msg="failed to retrieve remote root CA certificate" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node
time="2018-01-03T17:14:23.399587783Z" level=error msg="failed to retrieve remote root CA certificate" error="rpc error: code = DeadlineExceeded desc = context deadline exceeded" module=node
time="2018-01-03T17:14:25.399943337Z" level=error msg="cluster exited with error: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Below is my Mac's ifconfig info:
yuxrdeMBP:~ yuxr$ ifconfig
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
inet 127.0.0.1 netmask 0xff000000
inet6 ::1 prefixlen 128
inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
XHC20: flags=0<> mtu 0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether ac:bc:32:81:97:37
inet6 fe80::4d8:6b2:718a:5d3b%en0 prefixlen 64 secured scopeid 0x5
inet 192.168.199.169 netmask 0xffffff00 broadcast 192.168.199.255
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304
ether 0e:bc:32:81:97:37
media: autoselect
status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1484
ether 36:9f:65:fd:34:c3
inet6 fe80::349f:65ff:fefd:34c3%awdl0 prefixlen 64 scopeid 0x7
nd6 options=201<PERFORMNUD,DAD>
media: autoselect
status: active
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=60<TSO4,TSO6>
ether 6a:00:00:e3:4c:30
media: autoselect <full-duplex>
status: inactive
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
options=60<TSO4,TSO6>
ether 6a:00:00:e3:4c:31
media: autoselect <full-duplex>
status: inactive
bridge0: flags=8822<BROADCAST,SMART,SIMPLEX,MULTICAST> mtu 1500
options=63<RXCSUM,TXCSUM,TSO4,TSO6>
ether 6a:00:00:e3:4c:30
Configuration:
id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
ipfilter disabled flags 0x2
member: en1 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 8 priority 0 path cost 0
member: en2 flags=3<LEARNING,DISCOVER>
ifmaxaddr 0 port 9 priority 0 path cost 0
media: <unknown type>
status: inactive
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
options=6403<RXCSUM,TXCSUM,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
inet6 fe80::441e:c0e3:5429:2abb%utun0 prefixlen 64 scopeid 0xb
nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
options=6403<RXCSUM,TXCSUM,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
inet6 fe80::7820:5bac:4735:7f82%utun1 prefixlen 64 scopeid 0xc
inet6 fd44:5cb3:4ab4:5d08:7820:5bac:4735:7f82 prefixlen 64
nd6 options=201<PERFORMNUD,DAD>
utun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
options=6403<RXCSUM,TXCSUM,CHANNEL_IO,PARTIAL_CSUM,ZEROINVERT_CSUM>
inet6 fe80::26f2:e964:8dfb:e884%utun2 prefixlen 64 scopeid 0xd
nd6 options=201<PERFORMNUD,DAD>
gpd0: flags=8862<BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1400
ether 02:50:41:00:01:01
vboxnet0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
ether 0a:00:27:00:00:00
inet 192.168.99.1 netmask 0xffffff00 broadcast 192.168.99.255
Why?
The Mac host has IP 192.168.99.1, vm1 has IP 192.168.99.100, and vm2 has IP 192.168.99.101. They are on the same network, so why can't vm1 or vm2 join the Mac host's swarm?
Another question: if I use vm1 as the swarm manager and run the "docker swarm join" command on the Mac host, joining as a worker succeeds but the node can't be used; joining as a manager gives this error:
yuxrdeMBP:~ yuxr$ docker swarm join --token SWMTKN-1-49w1hd28hs1mtj3sgmd0o3q7n59zgppvd18vs0iwhcnjemzmwb-7mk35zdnaslt1p41gninvwlud 192.168.99.100:2377
Error response from daemon: manager stopped: can't initialize raft node: rpc error: code = Unknown desc = could not connect to prospective new cluster member using its advertised address: rpc error: code = Unavailable desc = grpc: the connection is unavailable
Thank you for helping me!
There is no routing between the Mac host and Docker for Mac. So on a Mac you can only set up multi-node swarms between VMs; the standard Docker for Mac instance cannot participate in a multi-node swarm. This is a limitation of how networking is implemented on macOS.
See the documentation, where this is explained.
Also see this issue for more background.
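The addresses in the question line up with this: the Mac's docker swarm init advertised 192.168.65.2, an address inside the Docker for Mac VM, not anything on the VirtualBox host-only network (vboxnet0) that the VMs can actually reach. A quick sketch with the values from the question:

```python
import ipaddress

# Host-only network the VirtualBox VMs share with the Mac (vboxnet0).
vbox_net = ipaddress.ip_network("192.168.99.0/24")
vm1 = ipaddress.ip_address("192.168.99.100")
vm2 = ipaddress.ip_address("192.168.99.101")

# Address the Mac's swarm manager advertised in its join command.
advertised = ipaddress.ip_address("192.168.65.2")

# The VMs sit on the host-only network, but the advertised manager
# address does not -- it lives inside the Docker for Mac VM, which is
# consistent with the join RPCs timing out.
assert vm1 in vbox_net and vm2 in vbox_net
assert advertised not in vbox_net
```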
For me, this error was resolved by setting the AWS security group's inbound rules to allow all traffic.
I got the same error when trying to join a swarm cluster as a worker. I used two VMs from Google Cloud for this.
The manager node was working fine; docker info showed no swarm errors. But when I tried to join the worker nodes with the token, I got the error "Error response from daemon: Timeout was reached before node joined. The attempt to join the swarm will continue in the background. Use the "docker info" command to see the current swarm status of your node.", while docker info showed
"rpc error: code = DeadlineExceeded desc = context deadline exceeded in swarm error".
I tried a lot of different things; finally, the solution below worked.
Solution: I ran "docker swarm init --force-new-cluster" on one of the VMs I had tried to join as a worker. Then I ran "docker swarm leave --force" on the existing manager node and joined it as a worker to the newly created cluster. The other VM also joined the new cluster as a worker without trouble.
Ubuntu 18.04
Docker version 20.10.17
From this PR that was recently merged into Docker's 17.06 release candidate, we now have support for host networking with swarm services. However, trying a very similar command, I'm seeing an error:
$ docker service create --name nginx-host --network host nginx
Error response from daemon: could not find the corresponding predefined swarm network: network host not found
I'm running the 17.06 release candidate:
$ docker version
Client:
Version: 17.06.0-ce-rc2
API version: 1.30
Go version: go1.8.3
Git commit: 402dd4a
Built: Wed Jun 7 10:07:14 2017
OS/Arch: linux/amd64
Server:
Version: 17.06.0-ce-rc2
API version: 1.30 (minimum version 1.12)
Go version: go1.8.3
Git commit: 402dd4a
Built: Wed Jun 7 10:06:06 2017
OS/Arch: linux/amd64
Experimental: true
What's different between my command and what Docker now supports?
After discussing with the Docker devs: this feature needs swarm to be initialized after the upgrade to 17.06. Host and bridge networks created before swarm init runs cannot be used with node-local networks. Since this was a test environment, I recreated my swarm with:
$ docker swarm leave --force
Node left the swarm.
$ docker swarm init
Swarm initialized: current node (***) is now a manager.
...
Now the docker service create command works:
$ docker service create --name nginx-host --network host nginx
i83udvgk0qga0k7toq4v7kh0x
$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
i83udvgk0qga nginx-host replicated 1/1 docker.io/library/nginx@sha256:41ad9967ea448d7c2b203c699b429abe1ed5af331cd92533900c6d77490e0268
To verify, let's check the network interfaces inside the container:
$ docker ps | grep nginx
7024a2764b46 nginx "nginx -g 'daemon ..." 16 hours ago Up 16 hours nginx-host.1.i2blydombywzhz9zy06j8wrzf
$ docker exec 702 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
link/ether ***
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether ***
...