I cannot ping anything from containers using bridge networking (example: docker run --network bridge --rm -it bash ping 8.8.8.8). Not even the default gateway of the container.
ip route from inside container:
bash-5.1# ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link src 172.17.0.2
ip link from my machine:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eno1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN mode DEFAULT group default qlen 1000
link/ether 6c:02:e0:77:5a:c1 brd ff:ff:ff:ff:ff:ff
altname enp16s0
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DORMANT group default qlen 1000
link/ether a4:97:b1:86:f9:6b brd ff:ff:ff:ff:ff:ff
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 02:42:bd:d0:fb:cc brd ff:ff:ff:ff:ff:ff
6: veth40b832a@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether ba:e5:3a:88:e4:67 brd ff:ff:ff:ff:ff:ff link-netnsid 0
The docker0 interface stays down even if containers are running.
brctl shows that the container interfaces don't get bridged to docker0:
bridge name bridge id STP enabled interfaces
docker0 8000.0242bdd0fbcc no
Here's the output of iptables -S -t nat:
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N DOCKER
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A DOCKER -i docker0 -j RETURN
So far I've tried reinstalling docker and switching between iptables and nftables with iptables-nft. The whole issue started when I tried running a k3d example cluster. I'm running everything on Arch using the official packages.
I finally figured it out.
Both NetworkManager and systemd-networkd were running on my system, and their competing DHCP clients kept reassigning interface IPs. That broke bridge networking, which in turn broke the containers' traffic.
Pro tip: don't run multiple daemons that try to do the same thing.
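For anyone hitting the same thing, this is roughly how I checked which network daemons were active and disabled the extra one (assuming you want to keep NetworkManager; swap the unit names if you prefer systemd-networkd):
# See which network management daemons are currently active
systemctl is-active NetworkManager systemd-networkd
# Keep only one of them; here systemd-networkd gets disabled and stopped
sudo systemctl disable --now systemd-networkd
# Restart Docker so it can recreate docker0 and the veth pairs cleanly
sudo systemctl restart docker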
Also check /etc/sysctl.conf (or the files under /etc/sysctl.d/) and make sure IP forwarding is enabled:
net.ipv4.ip_forward=1
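A quick way to check and enable it at runtime (the sysctl.conf entry only makes it persistent across reboots):
# Show the current value; 0 means the host will not forward container traffic
sysctl net.ipv4.ip_forward
# Enable it immediately
sudo sysctl -w net.ipv4.ip_forward=1
# Re-read /etc/sysctl.conf and /etc/sysctl.d/* after editing them
sudo sysctl --system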
I manually installed WebLogic and configured a domain inside a Docker container [overlay network, Docker Swarm].
[root@host ~]# docker network ls
NETWORK ID NAME DRIVER SCOPE
e3f32e71cc16 bridge bridge local
26628a46774c docker_gwbridge bridge local
20c80427519f host host local
9ejgeett1y4y ingress overlay swarm
52d14f492cda none null local
f628wowngc6z myoverlay overlay swarm
After starting the Admin Server, I see that its port 7001 is bound only to the overlay interface and not to the ANY interface 0.0.0.0. So even though I exposed port 7001 when creating the container, I am not able to access it from the external public network.
[root@host /]# netstat -anp | grep 7001
tcp 0 0 10.0.0.5:7001 0.0.0.0:* LISTEN 8168/java
[root@host /]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
265: eth0@if266: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:00:00:05 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.0.0.5/24 scope global eth0
valid_lft forever preferred_lft forever
267: eth1@if268: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:13:00:06 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.19.0.6/16 scope global eth1
valid_lft forever preferred_lft forever
I also started sshd in the same container, but this was bound to the 0.0.0.0 ANY interface.
[root@host /]# netstat -anp | grep 22
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1/sshd
I need the WebLogic Admin Server console to be accessible from the public network, not just from containers within the overlay network. So how do I bind the WebLogic Admin Server port to all interfaces (0.0.0.0)?
The problem lies in the WebLogic configuration. If the Listen Address is left empty, WebLogic itself binds the port to all interfaces. This solved my problem.
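To verify the change took effect, re-check the binding inside the container after restarting the Admin Server; the output below is illustrative, but it should now show the wildcard address instead of the overlay IP:
[root@host /]# netstat -anp | grep 7001
tcp        0      0 0.0.0.0:7001      0.0.0.0:*      LISTEN      <pid>/java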
I have a bunch of docker containers running on the default bridge network, that need to communicate with each other.
I want to move some of the containers to a separate user defined network so I can specify their IP addresses.
Is there any way to do this without having to take down/replicate all the containers and move them to the other network, or is this the only way?
It's possible to create networks and connect containers while they are live. You may still need to stop/start processes if a process is listening on a specific IP address rather than on all interfaces (* or ::).
Create a network
docker network create \
--driver=bridge \
--subnet=192.168.38.0/24 \
--gateway=192.168.38.1 \
<NETWORK>
Connect a container
docker network connect \
--ip 192.168.38.14 \
<NETWORK> \
<CONTAINER>
Disconnect from original network
docker network disconnect <OLDNETWORK> <CONTAINER>
Example
Before, the container's eth0 is on the default bridge network:
→ docker exec $CONTAINER ip ad sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.4/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Afterwards, eth1 has been added and eth0 is gone:
→ docker exec $CONTAINER ip ad sh
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
17: eth1@if18: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:c0:a8:26:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.38.14/24 brd 192.168.38.255 scope global eth1
valid_lft forever preferred_lft forever
You should also think about using Docker Compose. It creates a network automatically, with its own DNS, so the containers can reach each other by name.
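For example, containers on a user-defined network (whether created by hand or by Compose) can resolve each other by container name, which the default bridge does not do; a quick sketch with throwaway names:
# Create a user-defined bridge network (Compose does the equivalent for you)
docker network create demo-net
# Start two containers on it; container names become DNS names on that network
docker run -d --name web --network demo-net nginx
docker run --rm --network demo-net busybox ping -c 1 web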
Problem: the Internet isn't accessible within a docker container.
on my bare metal Ubuntu 17.10 box...
$ ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=52 time=10.8 ms
but...
$ docker run --rm debian:latest ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from 7911d89db6a4 (192.168.220.2): Destination Host Unreachable
I think the root cause is that I had to set up a non-default network for docker0 because the default one 172.17.0.1 was already in use within my organization.
My /etc/docker/daemon.json file needs to look like this in order for docker to start successfully.
$ cat /etc/docker/daemon.json
{
"bip": "192.168.220.1/24",
"fixed-cidr": "192.168.220.0/24",
"fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
"mtu": 1500,
"default-gateway": "192.168.220.10",
"default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
"dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
"debug": true
}
Note that the default-gateway setting looks wrong. However, if I correct it to read 192.168.220.1 the docker service fails to start. Running dockerd at the command line directly produces the most helpful logging, thus:
With "default-gateway": 192.168.220.1 in daemon.json...
$ sudo dockerd
-----8<-----
many lines removed
----->8-----
Error starting daemon: Error initializing network controller: Error creating default "bridge" network: failed to allocate secondary ip address (DefaultGatewayIPv4:192.168.220.1): Address already in use
Here's the info for docker0...
$ ip addr show docker0
10: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:10:bc:66:fd brd ff:ff:ff:ff:ff:ff
inet 192.168.220.1/24 brd 192.168.220.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:10ff:febc:66fd/64 scope link
valid_lft forever preferred_lft forever
And routing table...
$ route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.62.131.1 0.0.0.0 UG 100 0 0 enp14s0
10.62.131.0 0.0.0.0 255.255.255.0 U 100 0 0 enp14s0
169.254.0.0 0.0.0.0 255.255.0.0 U 1000 0 0 enp14s0
192.168.220.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
Is this the root cause? How do I achieve the seemingly mutually exclusive states of:
docker0 interface address is x.x.x.1
gateway address is same, x.x.x.1
dockerd runs ok
?
Thanks!
Longer answer to Wedge Martin's question. I made the changes to daemon.json as you suggested:
{
"bip": "192.168.220.2/24",
"fixed-cidr": "192.168.220.0/24",
"fixed-cidr-v6": "0:0:0:0:0:ffff:c0a8:dc00/120",
"mtu": 1500,
"default-gateway": "192.168.220.1",
"default-gateway-v6": "0:0:0:0:0:ffff:c0a8:dc0a",
"dns": ["10.0.0.69","10.0.0.70","10.1.1.11"],
"debug": true
}
so at least the daemon starts, but I still don't have internet access within a container...
$ docker run -it --rm debian:latest bash
root@bd9082bf70a0:/# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
15: eth0@if16: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:c0:a8:dc:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 192.168.220.3/24 brd 192.168.220.255 scope global eth0
valid_lft forever preferred_lft forever
root@bd9082bf70a0:/# ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
92 bytes from bd9082bf70a0 (192.168.220.3): Destination Host Unreachable
It turned out that less is more. Simplifying daemon.json to the following resolved my issues.
{
"bip": "192.168.220.2/24"
}
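For anyone following along: after editing /etc/docker/daemon.json you need to restart the daemon, and then you can confirm the docker0 address and container connectivity (assuming systemd):
# Apply the new daemon.json
sudo systemctl restart docker
# docker0 should now carry the address given in "bip"
ip addr show docker0
# And containers should reach the outside world again
docker run --rm debian:latest ping -c 3 8.8.8.8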
If you don't set the gateway, Docker sets it to the first non-network address in the subnet, i.e. the .1, but if you set it explicitly, Docker conflicts when allocating the bridge because the .1 address is already in use. You should only set default-gateway if it is outside of the network range.
The bip setting tells Docker to use an address other than the .1, so setting bip can avoid the conflict, but I am not sure it will end up doing what you want. It will probably cause routing issues, since the non-network route will go to an address that has no host responding.
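If you want to see which gateway Docker actually handed out, you can inspect the default bridge network and check the route from inside a container, e.g.:
# Show the subnet/gateway Docker allocated for the default bridge
docker network inspect bridge --format '{{json .IPAM.Config}}'
# Check which gateway a container actually routes through
docker run --rm busybox ip route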
I use Docker on VirtualBox and I'm trying to add one more running image.
The first 'run' I did is:
$ docker run -Pit --name rpython -p 8888:8888 -p 8787:8787 -p 6006:6006 \
    -p 8022:22 -v /c/Users/lenovo:/home/dockeruser/hosthome \
    datascienceschool/rpython
$ docker port rpython
8888/tcp -> 0.0.0.0:8888
22/tcp -> 0.0.0.0:8022
27017/tcp -> 0.0.0.0:32781
28017/tcp -> 0.0.0.0:32780
5432/tcp -> 0.0.0.0:32783
6006/tcp -> 0.0.0.0:6006
6379/tcp -> 0.0.0.0:32782
8787/tcp -> 0.0.0.0:8787
It works fine; I can reach those ports from a local browser.
But for the second 'run':
$ docker run -Pit -i -t -p 8888:8888 -p 8787:8787 -p 8022:22 -p 3306:3306 \
    --name ubuntu jhleeroot/dave_ubuntu:1.0.1
$ docker port ubuntu
22/tcp -> 0.0.0.0:8022
3306/tcp -> 0.0.0.0:3306
8787/tcp -> 0.0.0.0:8787
8888/tcp -> 0.0.0.0:8888
It doesn't work.
root@90dd963fe685:/# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
23: eth0@if24: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
Any idea about this?
Your first command ran an image (datascienceschool/rpython) that presumably kicked off a Python app listening on the ports you were testing.
Your second command ran a different image (jhleeroot/dave_ubuntu:1.0.1), and from the pasted output you are only running a bash shell in it. Bash isn't listening on those ports inside the container, so Docker forwards to a closed port and the browser sees a closed connection.
Docker doesn't run the server behind your published ports; it relies on you to run that inside the container and simply forwards the requests.
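An easy way to convince yourself of this is to start something that actually listens on one of the published ports inside the second container and then hit it from the host. A sketch, assuming python3 is available in that image (it may not be):
# Inside the ubuntu container: listen on 8888 with a throwaway HTTP server
root@90dd963fe685:/# python3 -m http.server 8888
# From the host (use the VirtualBox/docker-machine IP): the forwarded port now answers
curl http://<docker-machine-ip>:8888/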
When I inspect ingress, I get info about the containers:
$ docker inspect ingress
"Containers": {
"1234567890": {
"Name": "gateway.1.qwertyuiop",
"EndpointID": "12345678990",
"MacAddress": "00:11:22:33:44:55",
"IPv4Address": "10.255.0.11/16",
"IPv6Address": ""
},
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "1234567890",
"MacAddress": "00:11:22:33:44:55",
"IPv4Address": "10.255.0.3/16",
"IPv6Address": ""
}
},
Then I can inspect the first container, but inspecting ingress-endpoint returns nothing:
$ docker inspect ingress-endpoint
[]
What I'm trying to find is the local IP of the ingress-endpoint on the swarm node.
Not sure why you need that, but if you are looking for where incoming packets are being routed, you can look at iptables:
iptables -n -t nat -L DOCKER-INGRESS
On my setup (where one of the services publishes port 80) I get the following output:
Chain DOCKER-INGRESS (2 references)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80 to:172.18.0.2:80
RETURN all -- 0.0.0.0/0 0.0.0.0/0
So 172.18.0.2 is the IP address of the ingress container.
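You can cross-check this by listing the endpoints on docker_gwbridge, where ingress-sbox shows up with that address (output will differ per node):
docker network inspect docker_gwbridge --format '{{json .Containers}}'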
Notice that this ingress-sbox container is part of both networks that Swarm mode creates for you:
ingress
docker_gwbridge
This is all part of Linux network namespaces, as explained in this very good post: https://neuvector.com/network-security/docker-swarm-container-networking/ which I recommend reading.
# cd /var/run/
# ln -s docker/netns .
# ip netns exec ingress_sbox ip addr
This gives (on my system):
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default
link/ether 02:42:0a:ff:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.255.0.2/16 brd 10.255.255.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.255.0.4/32 brd 10.255.0.4 scope global eth0
valid_lft forever preferred_lft forever
10: eth1@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:12:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 1
inet 172.18.0.2/16 brd 172.18.255.255 scope global eth1
valid_lft forever preferred_lft forever
You can also run iptables commands from within those network namespaces to try and follow how packets get magically routed...
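For example, the nat and mangle tables inside the ingress sandbox show the DNAT and firewall-mark rules that feed IPVS for published ports (assuming the /var/run/docker/netns symlink from above is in place):
# NAT rules as seen from inside the ingress sandbox
ip netns exec ingress_sbox iptables -n -t nat -L
# Mangle table, where packets for published ports get marked for IPVS
ip netns exec ingress_sbox iptables -n -t mangle -L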
Since docker inspect ingress-endpoint returns [], it very likely means that endpoint is a stale container which is still hooked into the ingress network. You might want to force-disconnect it, remove the ingress network, and then rejoin the Swarm cluster:
docker network disconnect --force ingress ingress-endpoint
docker network rm ingress