Docker is overriding my default route configuration

A noob here, starting with Docker on an Orange Pi 3 (a Raspberry Pi clone).
I'm trying to configure and start a Docker container (bitwarden_rs), but when I do, I lose the connection to the external network. Docker messes with my route table.
Network configuration: I have a bridge br0 that bridges eth0 and wlan0.
(eth0 connects to the router; wlan0 is configured in AP mode.)
Routing table when the container is stopped:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default _gateway 0.0.0.0 UG 425 0 0 br0 <---OK
link-local 0.0.0.0 255.255.0.0 U 1000 0 0 br0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 425 0 0 br0
192.168.2.0 0.0.0.0 255.255.255.0 U 425 0 0 br0
Routing table when the container is running (no Internet access):
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 0.0.0.0 0.0.0.0 U 205 0 0 docker0 <---NOT OK
default _gateway 0.0.0.0 UG 425 0 0 br0
link-local 0.0.0.0 255.255.0.0 U 205 0 0 docker0
link-local 0.0.0.0 255.255.0.0 U 230 0 0 vethed140ce
link-local 0.0.0.0 255.255.0.0 U 1000 0 0 br0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.1.0 0.0.0.0 255.255.255.0 U 425 0 0 br0
192.168.2.0 0.0.0.0 255.255.255.0 U 425 0 0 br0
What can I do to fix this? Is it a Docker configuration problem, or maybe a problem with my system (Armbian)?
Thanks

On Ubuntu 20.04, I tried many methods,
like preventing dhcpcd from updating routes
or configuring NetworkManager to ignore veth* devices.
Neither of the above worked.
I spent a lot of time and found that the connman service changes the default route. Change its config file /etc/connman/main.conf by uncommenting the following line:
#NetworkInterfaceBlacklist = vmnet,vboxnet,virbr,ifb,veth-,vb-
and run
systemctl restart connman
to restart the connman service. The issue was eventually resolved.
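After the edit, the relevant section of /etc/connman/main.conf should look something like this (a sketch, assuming the stock [General] section layout; the veth- entry is what keeps connman away from Docker's veth interfaces):
[General]
NetworkInterfaceBlacklist = vmnet,vboxnet,virbr,ifb,veth-,vb-
You can then run ip route to confirm that docker0 no longer owns a default route.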

This happens because, as you can see, Docker creates a Linux bridge named 'docker0'.
You can change the default settings for the Docker bridge to resolve the issue.
Configure the default bridge network by providing the bip option with the desired subnet in daemon.json:
# vi /etc/docker/daemon.json
{
  "bip": "172.200.0.1/16"
}
and restart the service:
systemctl restart docker
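To confirm the new subnet took effect after the restart, you can query the default bridge network's IPAM config; a quick check using a standard Go template (the printed value assumes the bip above):
# docker network inspect bridge -f '{{(index .IPAM.Config 0).Subnet}}'
172.200.0.0/16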

Related

Docker: Route Traffic from One Container to Another Container Through a VPN Container

I set up an SSTP client container (172.17.0.3) that communicates with an SSTP server container (172.17.0.2) via the ppp0 interface. All traffic from the SSTP client container is routed through its ppp0 interface, as seen using netstat on the SSTP client container (192.168.20.1 is the SSTP server container's ppp0 IP address):
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.20.1 0.0.0.0 UG 0 0 0 ppp0
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.20.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
Now I have an HTTP server container (172.17.0.4) running, and I want to use yet another client container (for example, one that runs Apache Benchmark, ab) to talk to the HTTP server container via the SSTP server container. To do so, I use --net=container:sstp-client on the ab client container so that it uses the SSTP client container's network stack. However, the ab client container cannot seem to reach the HTTP server container, even though it is able to benchmark servers on the Internet (e.g., 8.8.8.8). For example, if I traceroute from a container through the SSTP client container:
docker run -it --name alpine --net=container:sstp-client alpine ash
/ # traceroute -s 192.168.20.2 google.com
traceroute to google.com (142.250.65.206) from 192.168.20.2, 30 hops max, 46 byte packets
1 192.168.20.1 (192.168.20.1) 1.088 ms 1.006 ms 1.077 ms
2 * * *
3 10.0.2.2 (10.0.2.2) 1.710 ms 1.695 ms 0.977 ms
4 * * *
...
I am eventually able to reach Google.
But if I traceroute to my HTTP server container:
/ # traceroute -s 192.168.20.2 172.17.0.4
traceroute to 172.17.0.4 (172.17.0.4) from 192.168.20.2, 30 hops max, 46 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
...
It fails.
My suspicion is that the routing configuration on the SSTP server container is incorrect, but I am not sure how to fix it. My goal is to be able to reach both the outside world and the internal containers. I've messed around with both iptables and route quite a bit, but I still can't make it work. This is the current configuration of the SSTP server container:
/ # netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.20.0 172.17.0.2 255.255.255.255 UGH 0 0 0 eth0
192.168.20.0 172.17.0.2 255.255.255.0 UG 0 0 0 eth0
192.168.20.2 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
/ # iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i ppp+ -j ACCEPT
-A FORWARD -j ACCEPT
-A OUTPUT -o ppp+ -j ACCEPT
I've seen many online solutions for routing to the Internet via a VPN container, but nothing about routing to other containers. I'm very much a newbie in this area. Any suggestions welcome! Thank you.
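One thing worth checking first (an assumption on my part, not something confirmed in the question): for the SSTP server container to relay VPN traffic onto the Docker bridge, it needs IP forwarding enabled, and replies from other containers will only find their way back if the 192.168.20.0/24 clients are NATed behind the server container's eth0. A minimal sketch, run inside the SSTP server container:
# enable forwarding between ppp0 and eth0
sysctl -w net.ipv4.ip_forward=1
# masquerade VPN clients behind the container's eth0 address (172.17.0.2)
iptables -t nat -A POSTROUTING -s 192.168.20.0/24 -o eth0 -j MASQUERADE
With that in place, a packet from 192.168.20.2 to 172.17.0.4 leaves the server container with source 172.17.0.2, an address the HTTP server container already knows how to answer.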
I had to take a similar approach to this problem. In my case, though, it runs inside gitlab-ci...
Start the VPN container and find its IP:
docker run -it -d --rm \
--name vpn-client-${SOME_VARIABLE} \
--privileged \
--net gitlab-runners \
-e VPNADDR=... \
-e VPNUSER=... \
-e VPNPASS=... \
auchandirect/forticlient || true && sleep 2
export VPN_CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vpn-client-${SOME_VARIABLE})
Next, you have to pass a new variable into your application container and add a new route, e.g. with docker-compose:
version: "3"
services:
test-server:
image: alpine
variables:
ROUTE_IP: "${VPN_CONTAINER_IP}"
command: "ip route add <CIDR_TARGET_SUBNET> via ${ROUTE_IP}"
networks:
default:
name: gitlab-runners
external: true
But be advised: the target subnet mentioned above HAS to be in CIDR format (e.g. 192.168.1.0/24), and both containers have to be in the SAME Docker network.
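One more caveat (my assumption, not stated in the answer): ip route add inside a container requires CAP_NET_ADMIN, which services don't get by default, so the compose service above would typically also need:
    cap_add:
      - NET_ADMIN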

boot2docker / bootsync.sh ignores my default gateway?

I'm trying to configure a static IP address for a docker machine.
Thanks to VonC's answer, I managed to get started.
However, I'm facing a problem: boot2docker seems to ignore the "route add default gw 192.168.0.1", no matter what.
To reproduce:
1. Create a new docker machine: OK
docker-machine create -d hyperv --hyperv-virtual-switch "Primary Virtual Switch" Swarm-Worker1
2. Apply your batch: OK
dmvbf Swarm-Worker1 0 108
3. SSH into the machine and check that bootsync.sh is fine: OK
cat /var/lib/boot2docker/bootsync.sh
output:
kill $(more /var/run/udhcpc.eth0.pid)
ifconfig eth0 192.168.0.108 netmask 255.255.255.0 broadcast 192.168.0.255 up
route add default gw 192.168.0.1
4. Exit SSH, restart the machine then regenerate certs: OK
docker-machine restart Swarm-Worker1
docker-machine regenerate-certs Swarm-Worker1
5. Check that the IP matches the desired one: OK
docker-machine env Swarm-Worker1
6. SSH into the machine and check its routes: KO
route -n
output:
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
7. Try to set the gateway manually and check if it works: OK
route add default gw 192.168.0.1
OR
ip route add default via 192.168.0.1
Does anyone know what's happening? Why would boot2docker ignore only the route instruction? How can I solve this?
P.S.: My docker machines run Docker Engine - Community 18.09.6.
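Not a confirmed fix, but since the manual route add works, one workaround sketch is to assume something (e.g. a lingering udhcpc) re-applies its own configuration after bootsync.sh runs, and to retry the route a few seconds later by backgrounding it in bootsync.sh:
kill $(more /var/run/udhcpc.eth0.pid)
ifconfig eth0 192.168.0.108 netmask 255.255.255.0 broadcast 192.168.0.255 up
# retry once boot settles; the 5-second delay is a guess
(sleep 5 && route add default gw 192.168.0.1) &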

Can't reach a service on the host from a container

On the host, there is a service listening on port 3308:
#server# netstat -ln | grep 3308
tcp6 0 0 :::3308 :::* LISTEN
It can be reached from remote machines.
The container is in a user-defined bridge network.
The server's IP address is 192.168.1.30.
[localhost ~]# ifconfig
br-a54fd3b63acd: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.18.0.1 netmask 255.255.0.0 broadcast 172.18.255.255
inet6 fe80::42:1eff:fecc:92e8 prefixlen 64 scopeid 0x20<link>
ether 02:42:1e:cc:92:e8 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:37ff:fe9f:e4f1 prefixlen 64 scopeid 0x20<link>
ether 02:42:37:9f:e4:f1 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 34 bytes 4018 (3.9 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.1.30 netmask 255.255.255.0 broadcast 192.168.1.255
And ping from the container also works:
#33208c18aa61:~# ping -c 2 192.168.1.30
PING 192.168.1.30 (192.168.1.30) 56(84) bytes of data.
64 bytes from 192.168.1.30: icmp_seq=1 ttl=64 time=0.120 ms
64 bytes from 192.168.1.30: icmp_seq=2 ttl=64 time=0.105 ms
And the service is available (tested from the server itself):
#server# telnet 192.168.1.30 3308
Trying 192.168.1.30...
Connected to 192.168.1.30.
Escape character is '^]'.
N
But the service can't be reached from the container.
#33208c18aa61:~# telnet 192.168.1.30 3308
Trying 192.168.1.30...
telnet: Unable to connect to remote host: No route to host
I checked:
"Make docker use IPv4 for port binding": I made sure I didn't have IPv6 set to bind only on IPv6:
# sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0
"From inside of a Docker container, how do I connect to the localhost of the machine?": I found my route is a little different:
# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default router.asus.com 0.0.0.0 UG 100 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-a54fd3b63acd
192.168.1.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
Does that matter? Or could there be another reason?
Your Docker container is in a different network namespace and connected to a different interface than your host machine; that's why you can't reach it using the IP 192.168.x.x.
What you need to do is use the Docker network gateway instead, in your case 172.17.0.1. Be aware that this IP might not be the same from host to host, so to reproduce this everywhere and be completely sure which IP to use, you can create a user-defined network specifying the subnet and gateway, and run your container there. For example:
docker network create -d bridge --subnet 172.16.0.0/24 --gateway 172.16.0.1 dockernet
docker run --net=dockernet ubuntu
Also, whatever service you are trying to connect to must be listening on the Docker bridge interface as well.
Another option is to run the container in the same network namespace as the host with the --net=host flag; in that case you can access services outside the container using localhost.
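For example, from the container in the question, the telnet test would target the gateway address rather than the host's LAN address (assuming the service also listens on the bridge interface):
#33208c18aa61:~# telnet 172.17.0.1 3308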
Inspired by the official documentation:
"The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other."
I checked the iptables rules on the server; as an experiment, I stopped iptables temporarily, and then the container could reach that service successfully. Later I was told the server had been rebooted recently, so I'm guessing some config was lost after that reboot. I'm not very familiar with iptables, and when I tried
systemctl status iptables.service
it said the service was not installed. After installing and running the service,
iptables -L -n
is almost empty. So now I have no clue what kind of iptables rules could cause that mess.
But if anyone faces the 'ping succeeds, telnet fails' situation, iptables could be where the root cause lies.
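For what it's worth (an assumption based on common RHEL/CentOS defaults, not something shown in the question): 'ping succeeds but telnet says No route to host' is the classic signature of a REJECT rule with icmp-host-prohibited, which many distro firewalls append to the INPUT chain after an ACCEPT for ICMP. A sketch of what to look for and a way to let the container subnet through:
# look for: REJECT all -- 0.0.0.0/0 0.0.0.0/0 reject-with icmp-host-prohibited
iptables -L INPUT -n
# insert an ACCEPT for the user-defined bridge subnet ahead of the REJECT (sketch)
iptables -I INPUT -s 172.18.0.0/16 -p tcp --dport 3308 -j ACCEPT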

Docker container IP 172.17.X.X unreachable from Windows host machine 192.168.X.X

I have successfully run a Docker container on the default bridge network on Ubuntu (in VirtualBox). I am able to communicate (ping) from my container, but not from my host OS (Windows). The VirtualBox network adapter is bridged.
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 enp0s3
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s3
You need to ping from the host (Windows) too; I think the result will be a request timeout. The solution is to change your IP to the same class (subnet) as the Windows HOST.
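An alternative worth noting (my addition, not part of this answer): since the VirtualBox adapter is bridged, Windows can be given an explicit route to the Docker subnet via the Ubuntu VM's LAN address. The VM address below (192.168.0.50) is hypothetical; substitute your VM's actual IP, and note the VM's iptables FORWARD rules must still permit the traffic into docker0:
route add 172.17.0.0 mask 255.255.0.0 192.168.0.50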

Which IPv4 address is used while running the ping *hostname* command?

If I have a hostname that has several IPv4 addresses assigned, which IPv4 address will be used by the ping request when resolving the hostname [for example, while running "ping Some-Pc"]?
Run the command 'route' in Linux and you will see the routing table. Based on the destination address and the routing table, you should be able to determine the interface used to send the ICMP messages, and thus the src IP address.
For example, given this routing table in Linux:
[mynode]$ route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 enp0s3
10.0.2.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s3
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.56.0 0.0.0.0 255.255.255.0 U 100 0 0 enp0s8
192.168.124.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0
If you send a ping to address 10.0.2.45, it will use enp0s3 and that interface's IP address as the src address.
If you send a ping to an address in 172.17.0.0/16, it will go out through NIC docker0 with that interface's corresponding src IP address.
With ifconfig in Linux (ipconfig in Windows) you can see the IP address assigned to each interface.
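You can also ask the kernel directly which source address it would pick; ip route get prints both the egress interface and the chosen src (the addresses below are illustrative):
$ ip route get 10.0.2.45
10.0.2.45 dev enp0s3 src 10.0.2.15 uid 1000
    cache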
