How to route Docker container traffic through a host's network adapter - docker

I have the following network routes on my host PC. I am using SoftEther VPN; it is set up on the adapter vpn_se.
$ ip route
default via 192.168.43.1 dev wlan0 proto dhcp metric 600
vpn.ip.adress via 192.168.43.1 dev wlan0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.30.0/24 dev vpn_se proto kernel scope link src 192.168.30.27
192.168.43.0/24 dev wlan0 proto kernel scope link src 192.168.43.103 metric 600
Now I want to route all traffic from a Docker container through vpn_se, something like:
172.17.0.6 via 192.168.30.1 dev vpn_se
How can I achieve this?
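One way to get that effect (not part of the original question) is source-based policy routing: put packets coming from the container's address into a separate routing table whose default route points at the VPN gateway. A minimal sketch, assuming the container keeps the address 172.17.0.6 and 192.168.30.1 is the gateway on vpn_se; the table number 100 is arbitrary:
ip rule add from 172.17.0.6 lookup 100
ip route add default via 192.168.30.1 dev vpn_se table 100
ip route add 172.17.0.0/16 dev docker0 table 100   # keep container-to-container traffic on the bridge
Docker's standard MASQUERADE rule then rewrites the source to the vpn_se address (192.168.30.27) as the packets leave that interface.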

Related

Why doesn't cURL work from inside a Docker container?

I have 2 services in Docker (each service has its own docker-compose.yml, nginx + php-fpm).
Service #1 on port 48801.
Service #2 on port 48802.
My server IP 99.99.99.44 (CentOS 8).
I make a cURL request (via PHP) from inside Service #1 to Service #2 (i.e. to 99.99.99.44:48802), but I get the following error:
Failed to connect to 99.99.99.44 port 48802 after 1017 ms: Host is unreachable
There is a problem with my server. I need help (or a direction to look in).
Some info:
On another server these services work fine.
A request from inside the container to port 80 of this server works fine.
A request from the host (not from inside a container) to the custom port 48802 works fine.
All services are available from a browser (via the custom ports).
SELinux is disabled.
Firewalld is disabled.
My ip route result:
default via 99.99.99.1 dev eno1 proto static metric 100
99.99.99.1 dev eno1 proto static scope link metric 100
172.18.0.0/16 dev br-2f405adcc89d proto kernel scope link src 172.18.0.1
172.19.0.0/16 dev br-19c596fe7618 proto kernel scope link src 172.19.0.1
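The question does not include them, but a few checks that would narrow this down (hypothetical commands, to run on the failing server and compare with the working one) could look like:
sysctl net.ipv4.ip_forward                   # should be 1, otherwise bridged traffic cannot leave the host
iptables -S FORWARD | head                   # look for a DROP/REJECT policy or rule hitting the bridges
iptables -t nat -S DOCKER | grep 48802       # the DNAT rule for the published port should be present
docker-compose exec app curl -v http://99.99.99.44:48802/   # from inside Service #1; "app" is an assumed service name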

Set the IP used for all incoming and outgoing traffic

I have a host with 5 IPs assigned to it.
I can access the host via any of these IPs.
Any connection made from that host, including from Docker containers, is seen as coming from IP1.
I have a container on that host that I want to use IP2. How can I set up that container so that when it makes connections to external servers, they see IP2 instead of IP1?
Thanks!
To achieve this you need to edit the routes on your machine. You can start by running this command to find out the current routes:
$ ip route show
default via 10.1.73.254 dev eth0 proto static metric 100
10.1.73.0/24 dev eth0 proto kernel scope link src 10.1.73.17 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1
Then you need to change the default route, like this (replace rather than add, since a default route already exists):
ip route replace default via ${YOUR_IP} dev eth0 proto static metric 100
What worked for me in the end was to create a Docker network and add an iptables POSTROUTING rule for that local range:
docker network create bridge-smtp --subnet=192.168.1.0/24 --gateway=192.168.1.1
iptables -t nat -I POSTROUTING -s 192.168.1.0/24 -j SNAT --to-source MYIP_ADD_HERE
docker run --rm --network bridge-smtp byrnedo/alpine-curl http://www.myip.ch
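A side note (my reading, not stated in the answer): the rule is inserted with -I so that it sits above Docker's own MASQUERADE rule for the same subnet; otherwise the MASQUERADE would match first and the connections would still appear to come from IP1. You can check the ordering with:
iptables -t nat -S POSTROUTING
# the "-s 192.168.1.0/24 -j SNAT --to-source ..." line should appear before any
# "-s 192.168.1.0/24 ! -o br-... -j MASQUERADE" rule Docker adds for the bridge-smtp network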

Interpretation of ip route rules

The citation comes from: https://github.com/docker/labs/blob/master/networking/concepts/05-bridge-networks.md
When we peek into the host routing table we can see the IP interfaces
in the global network namespace that now includes docker0. The host
routing table provides connectivity between docker0 and eth0 on the
external network, completing the path from inside the container to the
external network.
host$ ip route
default via 172.31.16.1 dev eth0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
172.31.16.0/20 dev eth0 proto kernel scope link src 172.31.16.102
It is written that the host routing table provides connectivity between docker0 and eth0, but I cannot see where in those rules the connectivity is introduced. Can you explain?
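The routes themselves only say which interface owns which prefix; the actual docker0-to-eth0 connectivity comes from the kernel forwarding packets between the two interfaces, plus the MASQUERADE rule Docker installs for the bridge subnet. A sketch of what to look at (the commands are mine, not from the lab text):
sysctl net.ipv4.ip_forward                 # Docker enables this; 1 means the host routes between its interfaces
iptables -t nat -S POSTROUTING | grep 172.17
# typically shows something like:
# -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE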

Docker swarm overlay network with vxlan routing over openvpn

I have set up a Docker swarm with 3 nodes (Docker 18.03). These nodes use an overlay network to communicate.
node1:
laptop
host tun0 172.16.0.6 --> openvpn --> nat gateway
container n1
ip = 192.169.1.10
node2:
aws ec2
host eth2 10.0.30.62
container n2
ip = 192.169.1.9
node3:
aws ec2
host eth2 10.0.140.122
container n3
ip = 192.169.1.12
nat-gateway:
aws ec2
tun0 172.16.0.1 --> openvpn --> laptop
eth0 10.0.30.198
The scheme is partly working:
1. Containers can ping each other by name (n1, n2, n3)
2. Docker swarm commands are working; services can be deployed
The overlay is only partly working: some nodes cannot communicate with each other over either TCP or UDP. I tried all combinations of the 3 nodes with TCP and UDP:
I did a tcpdump on the nat gateway to monitor overlay vxlan network activity (port 4789):
tcpdump -l -n -i eth0 "port 4789"
tcpdump -l -n -i tun0 "port 4789"
Then I tried a TCP connection from node1 to node3. On node3:
nc -l -s 0.0.0.0 -p 8999
On node1:
telnet 192.169.1.12 8999
Node1 will then try to connect to node3. I see packets coming in on the nat-gateway over the tun0 interface:
On the nat-gateway eth0 interface:
It seems that the nat-gateway is not sending replies back over the tun0 interface.
The iptables configuration of the nat-gateway:
The routing of the nat-gateway:
Can you help me solve this issue?
I have been able to fix the issue using the following configuration on the NAT gateway:
and
No masquerading of 172.16.0.0/22 is needed. All the workers and managers will route their traffic for 172.16.0.0/22 via the NAT gateway, and it knows how to send the packets over tun0.
Masquerading of eth0 was just wrong...
All the containers can now ping and establish tcp/ip connections to each other.
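The actual snippets are not reproduced above, so purely as a hypothetical reconstruction of what the description implies (forwarding enabled, VPN-range traffic excluded from the eth0 masquerade, workers routing 172.16.0.0/22 via the gateway), the configuration could look something like:
# Hypothetical sketch, not the answer's real configuration.
# On the NAT gateway:
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -I POSTROUTING -d 172.16.0.0/22 -j ACCEPT   # do not masquerade traffic headed back into the VPN
# On each AWS worker/manager (node2, node3):
ip route add 172.16.0.0/22 via 10.0.30.198                  # 10.0.30.198 is the NAT gateway's eth0 address from the question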

Dataflow worker unable to connect to Kafka through Cloud VPN

I have issues connecting a KafkaIO source to brokers available only through a Cloud VPN tunnel.
The tunnel is set up to allow traffic from a specific subnetwork (secure) and routes are set up and working for compute engines in that subnetwork.
Executing the pipeline with the DirectRunner, KafkaIO is able to connect to the brokers, whether through the VPN on a standard Compute Engine instance in the secure subnetwork or through a local machine with ssh tunnels set up by sshuttle.
Running the pipeline with the DataflowRunner, connections to the brokers fail with:
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
The pipeline is executed within the secure subnetwork.
Connecting to the Compute Engine instance spawned by the job, the following routes are visible:
jgrabber#REDACTED-harness-REDACTED ~ $ ip r
default via 10.74.252.1 dev eth0 proto dhcp src 10.74.252.3 metric 1024
default via 10.74.252.1 dev eth0 proto dhcp metric 1024
10.74.252.1 dev eth0 proto dhcp scope link src 10.74.252.3 metric 1024
10.74.252.1 dev eth0 proto dhcp metric 1024
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
The IPv4 addresses of the brokers are within a 172.17.0.0/16 (remote) network. The VPN is configured with a remote network range of 172.16.0.0/12.
Could the remote 172.17.0.0/16 network be shadowed by the virtual network set up and used by Docker?
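That is what the route output suggests: the link route for 172.17.0.0/16 on docker0 is more specific than the default route, so broker traffic is sent to the (down) local bridge instead of through the VPN. A hypothetical way to confirm and, where you control the Docker daemon, avoid the clash:
ip route get 172.17.0.5            # 172.17.0.5 is a made-up broker address; it resolves to "dev docker0", not the VPN path
# Generic fix where /etc/docker/daemon.json is under your control: move the default
# bridge off 172.17.0.0/16, e.g.
#   { "bip": "192.168.228.1/24" }
# and restart dockerd. Whether that is feasible on Dataflow worker VMs is a separate question.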
