I am trying to emulate a hardware configuration consisting of two Intel NUC servers and a Raspberry Pi server using Docker containers and Docker Compose. Since the latter is ARM and the test host is x86, I decided to run the RPi image within QEMU, encapsulated in one of the Docker containers. The docker compose file is pretty simple:
services:
  rpi:
    build: ./rpi
    networks:
      iot:
        ipv4_address: 192.168.1.3
      ether:
        ipv4_address: 192.168.2.3
  nuc1:
    build: ./nuc
    networks:
      - "iot"
  nuc2:
    build: ./nuc
    networks:
      - "ether"
networks:
  iot:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
  ether:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.2.0/24
          gateway: 192.168.2.1
You'll notice that there are two separate networks: I have two VLANs I am trying to emulate, and the RPi server is a part of both, hence the Docker container is a part of both. I have QEMU running fine in the rpi container, and the Docker containers themselves are behaving on the network as intended.
The problem I am having is trying to set up the networking so that the QEMU image appears to be just another node on the network at the addresses 192.168.1.3/192.168.2.3. My requirements are that:
The QEMU guest knows that its own IP on each network is 192.168.1.3 and 192.168.2.3 respectively.
The other NUC docker containers can reach the QEMU image at those IP addresses (i.e. I don't want to give the docker container running the QEMU image its own IP address, have the other containers hit that IP address, and then NAT to the QEMU address).
I tried the steps listed on this gist with no luck. Additionally, I tried creating a TAP with an address of 10.0.0.1 on the QEMU docker container, binding the QEMU guest to that TAP, and then creating an iptables rule to NAT traffic received to the TAP. The issue is that the destination address is then 10.0.0.1 while the QEMU guest thinks its own address is 192.168.1.3, so it won't accept the packet.
As you can see, I am a bit stuck conceptually, and need some help with a direction to take this.
Right now, this is the network configuration that I set up to handle traffic on the QEMU container (excuse the lack of consistent ip usage; iproute2 is not installed on the image I am using and I can't seem to get the containers to reach the internet):
# create a bridge and move the two container interfaces onto it
brctl addbr br0
ip addr flush dev eth0
ip addr flush dev eth1
brctl addif br0 eth0
brctl addif br0 eth1
# create a tap for the QEMU guest and add it to the bridge
tunctl -t tap0 -u $(whoami)
brctl addif br0 tap0
ifconfig br0 up
ifconfig tap0 up
# put the container's addresses and routes on the bridge
ip addr add 192.168.1.3 dev br0
ip addr add 192.168.2.3 dev br0
ip route add 192.168.1.1/24 dev br0
ip route add 192.168.2.1/24 dev br0
# host-side address for the tap (used by the NAT rules below)
ip addr add 10.0.0.1 dev tap0
Then I added the following NAT rules:
iptables -t nat -A PREROUTING -i br0 -d 192.168.1.1 -j DNAT --to 10.0.0.1
iptables -t nat -A POSTROUTING -o br0 -s 10.0.0.1 -j SNAT --to 192.168.1.1
iptables -t nat -A PREROUTING -i br0 -d 192.168.2.1 -j DNAT --to 10.0.0.1
iptables -t nat -A POSTROUTING -o br0 -s 10.0.0.1 -j SNAT --to 192.168.2.1
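For reference, the guest is attached to tap0 along these lines; the machine type, memory, kernel, and disk flags below are placeholders rather than the exact command I run:
# illustrative only: hand the pre-created tap0 to the guest as its NIC
qemu-system-aarch64 \
  -M virt -m 1024 \
  -kernel kernel.img \
  -drive file=rpi-rootfs.img,format=raw \
  -netdev tap,id=net0,ifname=tap0,script=no,downscript=no \
  -device virtio-net-device,netdev=net0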
Related
I am running a Wireguard server on a VPS. What I want to achieve is to route specific internet traffic (the VPS firewall is set to accept traffic on ports 10000:11000) from the VPN to the Docker containers on my home server.
[Internet] <-> [Wireguard 10.100.0.1] <-> [Home Server 10.100.0.2 (Docker Containers)]
Details:
Wireguard Server
OS: Ubuntu 20.04.2 LTS
iptables PostUp/PostDown rules from wg0.conf:
iptables -A FORWARD -i eth0 -j ACCEPT;
iptables -t nat -A PREROUTING -p tcp --dport 10000:11000 -j DNAT --to-destination 10.100.0.2;
iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE;
sysctl -p:
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
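For context, the DNAT rule above only rewrites the destination; the FORWARD chain still has to accept that traffic in both directions. A sketch of the companion rules that usually go with it, assuming the server-side WireGuard interface is called wg0:
# allow forwarded traffic coming back from the tunnel
iptables -A FORWARD -i wg0 -j ACCEPT;
# allow reply traffic for already-established connections
iptables -A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT;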
Home Server
OS: Ubuntu 20.04.2 LTS (Desktop)
systemd-networkd (.network file for wireguard interface) configuration:
[Match]
Name = wguard
[Network]
Address = 10.100.0.2/32
[RoutingPolicyRule]
From = 10.200.0.0/16
Table = 250
[Route]
Gateway = 10.100.0.2
Table = 250
[Route]
Destination = 0.0.0.0/0
Type = blackhole
Metric = 1
Table = 250
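A quick way to check that networkd actually installed the policy rule and the routes in table 250 on the home server:
ip rule show              # should include: from 10.200.0.0/16 lookup 250
ip route show table 250   # should show default via 10.100.0.2 and the blackhole route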
I created a Docker network in 10.200.0.0/16, and the containers are using this network. The Docker containers return the VPN IP address when I check it with curl.
The home server returns my home IP address with a plain curl query, but it returns the VPN IP address when queried via the wireguard interface.
My problems:
I cannot ping the VPN host from the home server, but I can ping other VPN clients. Also, I can ping the VPN host from inside the containers, which is strange.
Port forwarding from the VPN to containers does not work properly.
Example docker-compose:
version: "3.7"
services:
rutorrent:
container_name: rutorrent1
image: romancin/rutorrent:latest
networks:
- wireguard0
ports:
- "10010:51415"
- "10011:80"
- "10012:443"
environment:
- PUID=1000
- PGID=1000
volumes:
- /home/user/Downloads/rutorrent/config:/config
- /home/user/Downloads/rutorrent/downloads:/downloads
networks:
wireguard0:
external: true
I would appreciate it if you could at least point me to the part where I am going wrong. I think my iptables rules have missing lines, but I couldn't find a good reference or a book to fully understand how to set this up properly.
Thanks
I have a host with 5 IPs assigned to it.
I can access the host via any of these IPs.
Any connection made from that host, including from Docker containers, is detected as coming from IP1.
I have a Docker container on that host that I want to use IP2. How can I set up that container so that any connection made from it to external servers is seen as coming from IP2 instead of IP1?
Thanks!
To achieve this you need to edit the routes on your machine. You can start by running this command to find out the current routes:
$ ip route show
default via 10.1.73.254 dev eth0 proto static metric 100
10.1.73.0/24 dev eth0 proto kernel scope link src 10.1.73.17 metric 100
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
172.18.0.0/16 dev docker_gwbridge proto kernel scope link src 172.18.0.1
Then you need to change the default route like this:
ip route add default via ${YOUR_IP} dev eth0 proto static metric 100
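Depending on what you are after, an alternative sketch is to keep the existing gateway and only pin the preferred source address on the default route (gateway taken from the example output above, ${YOUR_IP} being the address you want used). Note this affects host-originated traffic; masqueraded container traffic may still need an explicit SNAT rule, as in the answer below:
ip route change default via 10.1.73.254 dev eth0 src ${YOUR_IP} proto static metric 100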
What worked for me in the end was to create a Docker network and add an iptables POSTROUTING rule for that local range:
docker network create bridge-smtp --subnet=192.168.1.0/24 --gateway=192.168.1.1
iptables -t nat -I POSTROUTING -s 192.168.1.0/24 -j SNAT --to-source MYIP_ADD_HERE
docker run --rm --network bridge-smtp byrnedo/alpine-curl http://www.myip.ch
I have two VPSs. The first machine (proxy from now on) is a proxy and the second machine (dock from now on) is a Docker host. I want to redirect all traffic generated inside a Docker container on dock through proxy, so that dock's public IP is not exposed.
As the connection between the VPSs goes over the internet (there is no local connection), I created a tunnel between them with ip tunnel, as follows:
On proxy:
ip tunnel add tun10 mode ipip remote x.x.x.x local y.y.y.y dev eth0
ip addr add 192.168.10.1/24 peer 192.168.10.2 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
On dock:
ip tunnel add tun10 mode ipip remote y.y.y.y local x.x.x.x dev eth0
ip addr add 192.168.10.2/24 peer 192.168.10.1 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
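A quick sanity check that the tunnel itself works, before involving iptables and docker at all:
# on dock: reach proxy's end of the tunnel
ping -c 3 192.168.10.1
# on proxy: reach dock's end of the tunnel
ping -c 3 192.168.10.2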
PS: I don't know whether ip tunnel can be used in production; that is another question. I am planning to use libreswan or openvpn as the tunnel between the VPSs anyway.
After that, I added SNAT rules to iptables on both VPSs, plus some routing rules, as follows:
On proxy:
iptables -t nat -A POSTROUTING -s 192.168.10.2/32 -j SNAT --to-source y.y.y.y
On dock:
iptables -t nat -A POSTROUTING -s 172.27.10.0/24 -j SNAT --to-source 192.168.10.2
ip route add default via 192.168.10.1 dev tun10 table rt2
ip rule add from 192.168.10.2 table rt2
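One thing worth double-checking here: a named table like rt2 only resolves if it has been registered in /etc/iproute2/rt_tables (or you can use a numeric table id instead). Registration looks roughly like this, where 200 is just an arbitrary free table number:
echo "200 rt2" >> /etc/iproute2/rt_tables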
And last but not least, I created a docker network with one test container attached to it, as follows:
docker network create --attachable --opt com.docker.network.bridge.name=br-test --opt com.docker.network.bridge.enable_ip_masquerade=false --subnet=172.27.10.0/24 testnet
docker run --network testnet alpine:latest /bin/sh
Unfortunately, all of this ended with no success. So the question is: how do I debug this? Is this the correct approach? How would you do the redirection over the proxy?
Some words about the theory: traffic coming from the 172.27.10.0/24 subnet hits the iptables SNAT rule on dock and its source IP changes to 192.168.10.2. The routing rule then sends it over the tun10 device, i.e. the tunnel. On proxy it hits another SNAT rule that changes the source IP to y.y.y.y, and it finally goes to the destination.
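As for debugging, I would check the path hop by hop. A sketch of the commands (interface and subnet names as above, 172.27.10.2 standing in for a container address, 8.8.8.8 as an arbitrary external destination):
# on dock: does container traffic leave the tunnel with the rewritten source?
tcpdump -ni tun10 'src 192.168.10.2'
# on dock: are the SNAT rules actually matching (packet counters increasing)?
iptables -t nat -vnL POSTROUTING
# on dock: which route does a container packet really take?
ip route get 8.8.8.8 from 172.27.10.2 iif br-test
# on proxy: does the traffic arrive over the tunnel at all?
tcpdump -ni tun10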
I have set up a docker swarm with 3 nodes (docker 18.03). These nodes use an overlay network to communicate.
node1: laptop
  host tun0 172.16.0.6 --> openvpn --> nat-gateway
  container n1, ip = 192.169.1.10
node2: aws ec2
  host eth2 10.0.30.62
  container n2, ip = 192.169.1.9
node3: aws ec2
  host eth2 10.0.140.122
  container n3, ip = 192.169.1.12
nat-gateway: aws ec2
  tun0 172.16.0.1 --> openvpn --> laptop
  eth0 10.0.30.198
The scheme is partly working:
1. Containers can ping each other using their names (n1, n2, n3)
2. Docker swarm commands are working, services can be deployed
The overlay is only partly working: some nodes cannot communicate with each other using either tcp/ip or udp. I tried all combinations of the 3 nodes with udp and tcp/ip.
I did a tcpdump on the nat gateway to monitor overlay vxlan network activity (port 4789):
tcpdump -l -n -i eth0 "port 4789"
tcpdump -l -n -i tun0 "port 4789"
Then I tried tcp/ip communication from node1 to node3. On node3:
nc -l -s 0.0.0.0 -p 8999
On node1:
telnet 192.169.1.12 8999
Node1 will then try to connect to node3. I see the packets coming in on the nat-gateway over the tun0 interface, and I also see traffic on the nat-gateway eth0 interface, but it seems that the nat-gateway is not sending the replies back over the tun0 interface.
The iptables configuration of the nat-gateway:
The routing table of the nat-gateway:
Can you help me solve this issue?
I have been able to fix the issue by adjusting the iptables rules and the routing configuration on the NAT gateway.
No masquerading of 172.16.0.0/22 is needed. All the workers and managers will route their traffic for 172.16.0.0/22 via the NAT gateway, and it knows how to send the packets over tun0.
Masquerading of eth0 was just wrong...
All the containers can now ping and establish tcp/ip connections to each other.
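For anyone reproducing this, the route that the workers/managers need looks roughly like the following on a node in the same subnet as the gateway (10.0.30.198 is the NAT gateway's eth0 address from the description above); nodes in other subnets would get the equivalent entry via their VPC route table instead:
ip route add 172.16.0.0/22 via 10.0.30.198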
I can't get logspout to connect to papertrail. I get the following error:
!! lookup logs5.papertrailapp.com on 127.0.0.11:53: read udp 127.0.0.1:46185->127.0.0.11:53: i/o timeout
where 46185 changes every time I run the container. It seems like a DNS error, but nslookup logs5.papertrailapp.com gives the expected output, as does docker run busybox nslookup logs5.papertrailapp.com.
Beyond that, I don't even know how to interpret that error message, let alone address it. Any help debugging this would be hugely appreciated.
My Docker Compose file:
version: '2'
services:
  logspout:
    image: gliderlabs/logspout
    command: "syslog://logs5.papertrailapp.com:12345"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  sleep:
    image: benwhitehead/env-loop
Where 12345 is the actual papertrail port. Result is the same whether using syslog:// or syslog-tls://.
From https://docs.docker.com/engine/userguide/networking/configure-dns/:
the docker daemon implements an embedded DNS server which provides built-in service discovery for any container
It looks like your container is unable to connect to this DNS server. If your container is on the default bridge network, it won't reach the embedded DNS server. You can either set --dns to point at an outside resolver or update /etc/resolv.conf. It doesn't sound like a Papertrail issue at all.
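For example, the docker run equivalent of the compose service above with an outside resolver pinned (8.8.8.8 is just a stand-in for whatever DNS server you want):
docker run -d --dns 8.8.8.8 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout syslog://logs5.papertrailapp.com:12345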
Docker and iptables got in a fight. So I spun up a new machine, failed to set up iptables, and the problem was solved: no firewall at all to get in the way of Docker's connections!
Just kidding, don't do that. I got a toy database hacked that way.
Fortunately, it's now relatively easy to get iptables and Docker to live in harmony, using the DOCKER-USER iptables chain.
The solution, excerpted from my blog:
Configure Docker with iptables=true, and append to iptables configuration:
iptables -A DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -j DROP
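Order matters here: the ACCEPT rules have to sit above the final DROP, since DOCKER-USER is evaluated top to bottom before Docker's own forwarding rules. A quick way to confirm the chain looks right and is matching traffic:
iptables -L DOCKER-USER -n -v --line-numbers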