boot2docker / bootsync.sh ignores my default gateway? - docker

I'm trying to configure a static IP address for a docker machine.
Thanks to VonC's answer, I managed to get started.
However, I'm facing a problem: boot2docker seems to ignore the "route add default gw 192.168.0.1" line, no matter what.
To reproduce:
1. Create a new docker machine: OK
docker-machine create -d hyperv --hyperv-virtual-switch "Primary Virtual Switch" Swarm-Worker1
2. Apply your batch: OK
dmvbf Swarm-Worker1 0 108
3. SSH into the machine and check that bootsync.sh is fine: OK
cat /var/lib/boot2docker/bootsync.sh
output:
kill $(more /var/run/udhcpc.eth0.pid)
ifconfig eth0 192.168.0.108 netmask 255.255.255.0 broadcast 192.168.0.255 up
route add default gw 192.168.0.1
4. Exit SSH, restart the machine then regenerate certs: OK
docker-machine restart Swarm-Worker1
docker-machine regenerate-certs Swarm-Worker1
5. Check that the IP matches the desired one: OK
docker-machine env Swarm-Worker1
6. SSH into the machine and check its routes: KO
route -n
output:
Destination Gateway Genmask Flags Metric Ref Use Iface
127.0.0.1 0.0.0.0 255.255.255.255 UH 0 0 0 lo
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
7. Try to set the gateway manually and check if it works: OK
route add default gw 192.168.0.1
OR
ip route add default via 192.168.0.1
Does anyone know what's happening? Why would boot2docker ignore only the route instruction? How can I solve this?
P.S.: My docker machines run Docker Engine - Community 18.09.6
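A workaround I might try next (an untested sketch, not a confirmed fix): re-assert the route from /var/lib/boot2docker/bootlocal.sh, which boot2docker runs later in the boot sequence than bootsync.sh, after eth0 has settled:
#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh -- untested workaround: re-add the default gateway after boot
sleep 5
route add default gw 192.168.0.1
(and chmod +x /var/lib/boot2docker/bootlocal.sh so it actually gets executed.)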

Related

Docker: Route Traffic from One Container to Another Container Through a VPN Container

I set up an SSTP client container (172.17.0.3) that communicates with an SSTP server container (172.17.0.2) via the ppp0 interface. All traffic from the SSTP client container is routed through its ppp0 interface, as seen using netstat on the SSTP client container (192.168.20.1 is the SSTP server container's ppp0 IP address):
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 192.168.20.1 0.0.0.0 UG 0 0 0 ppp0
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.20.1 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
Now, I have an HTTP server container (172.17.0.4) running, and I want to use yet another client container (for example, a container that runs Apache Benchmark ab) to talk to the HTTP server container via the SSTP server container. To do so, I use --net=container:sstp-client on the ab client container so it uses the SSTP client container's network. However, the ab client container cannot seem to reach the HTTP server container, even though it is able to benchmark servers on the Internet (e.g., 8.8.8.8). For another example, if I do traceroute from a container through the SSTP client container to 8.8.8.8:
docker run -it --name alpine --net=container:sstp-client alpine ash
/ # traceroute -s 192.168.20.2 google.com
traceroute to google.com (142.250.65.206) from 192.168.20.2, 30 hops max, 46 byte packets
1 192.168.20.1 (192.168.20.1) 1.088 ms 1.006 ms 1.077 ms
2 * * *
3 10.0.2.2 (10.0.2.2) 1.710 ms 1.695 ms 0.977 ms
4 * * *
...
I am able to finally reach Google.
But if I traceroute to my HTTP server container:
/ # traceroute -s 192.168.20.2 172.17.0.4
traceroute to 172.17.0.4 (172.17.0.4) from 192.168.20.2, 30 hops max, 46 byte packets
1 * * *
2 * * *
3 * * *
4 * * *
5 * * *
...
It fails.
My suspicion is that the routing configuration on the SSTP server container is incorrect, but I am not sure how I can fix that to make it work. My goal is to be able to reach both the outside world and the internal containers. I've messed around with both iptables and route quite a bit, but still can't make it work. This is my current configuration of the SSTP server container:
/ # netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
192.168.20.0 172.17.0.2 255.255.255.255 UGH 0 0 0 eth0
192.168.20.0 172.17.0.2 255.255.255.0 UG 0 0 0 eth0
192.168.20.2 0.0.0.0 255.255.255.255 UH 0 0 0 ppp0
/ # iptables -S
-P INPUT ACCEPT
-P FORWARD ACCEPT
-P OUTPUT ACCEPT
-A INPUT -i ppp+ -j ACCEPT
-A FORWARD -j ACCEPT
-A OUTPUT -o ppp+ -j ACCEPT
I've seen many online solutions for routing via a VPN container to the Internet, but nothing about routing to other containers. I'm very much a newbie in this area. Any suggestions welcome! Thank you.
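One thing I still plan to double-check (a diagnostic sketch, not a confirmed fix) is whether the SSTP server container actually forwards and NATs traffic between ppp0 and the docker bridge, since the HTTP server container has no route back to 192.168.20.0/24:
# run inside the SSTP server container (172.17.0.2); purely a hypothetical check/workaround
sysctl net.ipv4.ip_forward
# should print 1, otherwise forwarding between ppp0 and eth0 is off
iptables -t nat -A POSTROUTING -s 192.168.20.0/24 -o eth0 -j MASQUERADE
# source-NAT the VPN clients to 172.17.0.2 so replies from 172.17.0.4 find their way back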
I had to take a similar approach to this problem, although in my case it runs inside gitlab-ci...
Start the VPN container and find its IP:
docker run -it -d --rm \
--name vpn-client-${SOME_VARIABLE} \
--privileged \
--net gitlab-runners \
-e VPNADDR=... \
-e VPNUSER=... \
-e VPNPASS=... \
auchandirect/forticlient || true && sleep 2
export VPN_CONTAINER_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vpn-client-${SOME_VARIABLE})
Next, you have to add a new variable to your application container and add a new route, e.g. with docker-compose:
version: "3"
services:
  test-server:
    image: alpine
    environment:
      ROUTE_IP: "${VPN_CONTAINER_IP}"
    command: "ip route add <CIDR_TARGET_SUBNET> via ${ROUTE_IP}"
networks:
  default:
    name: gitlab-runners
    external: true
But be advised that the target subnet mentioned above has to be in CIDR format (e.g. 192.168.1.0/24) and the VPN container also has to be in the same docker network.
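Outside of compose, the same idea can be sketched manually; the container name, target subnet, and use of --privileged below are assumptions for illustration, not a verified recipe:
# find the VPN client container's bridge IP, then route the target subnet through it from the app container
VPN_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' vpn-client-x)
docker exec --privileged test-server ip route add <CIDR_TARGET_SUBNET> via "$VPN_IP"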

Failed to connect, connection refused

I am setting up a blockchain network. One of my nodes is running on port 11628 of my machine, and another node is running inside a docker container on port 11625. I want to access the node running on my machine from inside the docker container. When I do
>curl localhost:11628
on the machine, it works fine. I understand that I cannot execute this command inside the container, as localhost there would refer to the container itself.
When I execute
route
inside docker container, it gives output
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 172.17.0.1 0.0.0.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
So I also tried curl 172.17.0.1:11628, which gives
curl: (7) Failed to connect to 172.17.0.1 port 11628: Connection refused
What should I do now?
Is there a problem with how the port is exposed?
I started the container with the command
docker run -it --name container_name image
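For reference, here is what I was planning to check next (a sketch, assuming the node might only be bound to localhost):
# on the host: see which address the node on 11628 is listening on (hypothetical diagnostic)
sudo ss -tlnp | grep 11628
# 127.0.0.1:11628 means the container can never reach it via 172.17.0.1;
# 0.0.0.0:11628 (or *:11628) means the gateway address should work
# if rebinding the node is not an option, running the container on the host network is another sketch:
docker run -it --network host --name container_name image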

Docker container IP 172.17.XXX unable to reach from windows host machine 192.168.X.X

I have successfully run a docker container with the default bridge network on Ubuntu (VirtualBox). I am able to communicate (ping) from my container, but not from my host OS (Windows). The VirtualBox network adapter is set to bridged mode.
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 enp0s3
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 enp0s3
You need to ping from the host (Windows) too; I think the result is an RTO (request timed out). The solution is to change your IP to the same class (subnet) as the Windows host.
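As an alternative sketch (assuming the Ubuntu VM keeps a bridged 192.168.0.x address), you could instead tell the Windows host that 172.17.0.0/16 lives behind the VM; <ubuntu-vm-ip> below is a placeholder for the VM's bridged address:
:: run in an elevated command prompt on the Windows host (hypothetical example)
route add 172.17.0.0 mask 255.255.0.0 <ubuntu-vm-ip>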

Can't ping docker IPv6 container

I ran the docker daemon with global IPv6 enabled for containers:
docker daemon --ipv6 --fixed-cidr-v6="xxxx:xxxx:xxxx:xxxx::/64"
After that I ran a docker container:
docker run -d --name my-container some-image
It successfully got a global IPv6 address (I checked with docker inspect my-container). But I can't ping my container at this IP:
Destination unreachable: Address unreachable
But I can successfully ping docker0 bridge by it's IPv6 address.
The output of route -n -6 contains the following lines:
Destination Next Hop Flag Met Ref Use If
xxxx:xxxx:xxxx:xxxx::/64 :: U 256 0 0 docker0
xxxx:xxxx:xxxx:xxxx::/64 :: U 1024 0 0 docker0
fe80::/64 :: U 256 0 0 docker0
The docker0 interface has a global IPv6 address:
inet6 addr: xxxx:xxxx:xxxx:xxxx::1/64 Scope:Global
xxxx:xxxx:xxxx:xxxx:: is the same everywhere, and it's the global IPv6 prefix of my eth0 interface.
Does docker require any additional configuration for accessing my containers via IPv6?
Assuming IPv6 in your guest OS is properly configured, you are probably pinging the container not from the host OS but from outside, and the neighbor discovery protocol is not configured. Other hosts do not know that your container sits behind your host. I'm doing this after starting a container with IPv6 (on the host OS, in ExecStartPost clauses of the systemd .service file):
/usr/sbin/sysctl net.ipv6.conf.interface_name.proxy_ndp=1
/usr/bin/ip -6 neigh add proxy $(docker inspect --format '{{.NetworkSettings.GlobalIPv6Address}}' container_name) dev interface_name
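Filled in with concrete (hypothetical) names, e.g. host interface eth0 and a container called my-container, that would look like:
# hypothetical substitution of the two commands above
/usr/sbin/sysctl net.ipv6.conf.eth0.proxy_ndp=1
/usr/bin/ip -6 neigh add proxy $(docker inspect --format '{{.NetworkSettings.GlobalIPv6Address}}' my-container) dev eth0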
Beware of IPv6: docker developers say in replies to bug reports that they do not have enough time to make IPv6 production-ready in version 1.10, and say nothing about 1.11.
Maybe you are using the wrong ping command. For IPv6 it's ping6.
$ ping6 2607:f0d0:1002:51::4

Setting up 4 containers with 4 IPs and 2 interfaces on EC2

I am trying to set up 4 containers (with nginx) on a system with 4 IPs and 2 interfaces. Can someone please help me? For now only 3 containers are accessible; the 4th one times out when accessed from the browser instead of showing a welcome page. I have added the IP routes needed.
Host is Ubuntu.
When this happened I thought it had something to do with the IP routes. So on the same system I installed apache and created 4 virtual hosts, each listening on a different IP and with a different document root.
When checked, all the IPs were accessible and showed the correct documents.
So now I am stuck. What do I do now?
Configuration:
4 IPs and 2 interfaces, so I created 2 IP aliases. All IPs are configured in /etc/network/interfaces except the first one; eth0 is set to DHCP mode.
auto eth0:1
iface eth0:1 inet static
    address 172.31.118.182
    netmask 255.255.255.0

auto eth1
iface eth1 inet static
    address 172.31.119.23
    netmask 255.255.255.0

auto eth1:1
iface eth1:1 inet static
    address 172.31.119.11
    netmask 255.255.255.0
It goes like this (the IPs are private, so I guess there is no problem sharing them here):
eth0 - 172.31.118.249
eth0:1 - 172.31.118.182
eth1 - 172.31.119.23
eth1:1 - 172.31.119.11
Now the docker creation commands
All are just basic nginx containers, so when working they will show the default nginx page.
sudo docker create -i -t -p 172.31.118.249:80:80 --name web1 web_fresh
sudo docker create -i -t -p 172.31.118.182:80:80 --name web2 web_fresh
sudo docker create -i -t -p 172.31.119.23:80:80 --name web3 web_fresh
sudo docker create -i -t -p 172.31.119.11:80:80 --name web4 web_fresh
sudo docker start web1
sudo docker start web2
sudo docker start web3
sudo docker start web4
--
Now web1 & web2 became immediately accessible, but the containers bound to eth1 and eth1:1 were not. So I figured the IP routes must be the issue and went ahead and added some:
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.23 table eth1
ip route add default via 172.31.119.1 table eth1
ip route add 172.31.119.0/24 dev eth1 src 172.31.119.11 table eth11
ip route add default via 172.31.119.1 table eth11
ip rule add from 172.31.119.23 lookup eth1 prio 1002
ip rule add from 172.31.119.11 lookup eth11 prio 1003
This made web3 accessible as well, but not the one on eth1:1. So this is where I am stuck at the moment.
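For completeness, this is how I am verifying the policy-routing setup (a diagnostic sketch; none of this is confirmed as the fix):
# check that both source rules exist and that table eth11 has the expected routes
ip rule show
ip route show table eth11
# watch whether requests for the 4th IP even arrive on eth1
sudo tcpdump -ni eth1 host 172.31.119.11 and port 80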
