Port Forwarding from WireGuard to Docker Containers

I am running a WireGuard server on a VPS. What I want to achieve is to route specific internet traffic (ports 10000:11000 are set to accept traffic in the VPS firewall) through the VPN to the Docker containers on my home server.
[Internet] <-> [Wireguard 10.100.0.1] <-> [Home Server 10.100.0.2 (Docker Containers)]
Details:
Wireguard Server
OS: Ubuntu 20.04.2 LTS
iptables post up/down rules from wg0.conf:
iptables -A FORWARD -i eth0 -j ACCEPT;
iptables -t nat -A PREROUTING -p tcp --dport 10000:11000 -j DNAT --to-destination 10.100.0.2;
iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE;
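From the guides I have seen, similar setups usually also allow forwarding toward the tunnel and masquerade out the WireGuard interface so that replies return the same way; I am not sure whether these are the lines I am missing (untested sketch):
iptables -A FORWARD -o wg0 -j ACCEPT;
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE;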
sysctl -p:
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
Home Server
OS: Ubuntu 20.04.2 LTS (Desktop)
systemd-networkd (.network file for wireguard interface) configuration:
[Match]
Name = wguard
[Network]
Address = 10.100.0.2/32
[RoutingPolicyRule]
From = 10.200.0.0/16
Table = 250
[Route]
Gateway = 10.100.0.2
Table = 250
[Route]
Destination = 0.0.0.0/0
Type = blackhole
Metric = 1
Table = 250
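For reference, the effect of this file can be checked on the home server with (expected output paraphrased, not captured):
ip rule show             # should include: from 10.200.0.0/16 lookup 250
ip route show table 250  # should show: default via 10.100.0.2, plus the blackhole route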
I created a Docker network on 10.200.0.0/16, and the containers use this network. From inside the containers, curl reports the VPN IP address as the public IP.
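For reference, a network like this (matching the compose file's external wireguard0 below) can be created with:
docker network create --subnet=10.200.0.0/16 wireguard0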
The home server itself reports my home IP address with a plain curl query, but it reports the VPN IP address when queried via the WireGuard interface.
My problems:
I cannot ping the VPN host from the home server, but I can ping other VPN clients. Strangely, I can also ping the VPN host from inside the containers.
Port forwarding from the VPN to the containers does not work properly.
Example docker-compose:
version: "3.7"

services:
  rutorrent:
    container_name: rutorrent1
    image: romancin/rutorrent:latest
    networks:
      - wireguard0
    ports:
      - "10010:51415"
      - "10011:80"
      - "10012:443"
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /home/user/Downloads/rutorrent/config:/config
      - /home/user/Downloads/rutorrent/downloads:/downloads

networks:
  wireguard0:
    external: true
I would appreciate it if you could at least point me to where I am going wrong. I suspect my iptables rules are missing some lines, but I couldn't find a good reference or a book that fully explains how to set this up properly.
Thanks

Related

Putting a QEMU guest onto a network created by docker compose

I am trying to emulate a hardware configuration between two Intel NUC servers and a Raspberry Pi server using docker containers and docker compose. Since the latter is ARM, and the test host is x86, I decided to run the RPi image within QEMU encapsulated in one of the docker containers. The docker compose file is pretty simple:
services:
  rpi:
    build: ./rpi
    networks:
      iot:
        ipv4_address: 192.168.1.3
      ether:
        ipv4_address: 192.168.2.3
  nuc1:
    build: ./nuc
    networks:
      - "iot"
  nuc2:
    build: ./nuc
    networks:
      - "ether"

networks:
  iot:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.1.0/24
          gateway: 192.168.1.1
  ether:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 192.168.2.0/24
          gateway: 192.168.2.1
You'll notice that there are two separate networks: I have two VLANs I am trying to emulate, and the rpi server is a part of both, hence its docker container is a part of both. I have QEMU running fine in the rpi container, and the docker containers themselves are behaving on the network as intended.
The problem I am having is trying to set up the networking so that the QEMU image appears to be just another node on the network at the addresses 192.168.1.3/192.168.2.3. My requirements are that:
The QEMU guest knows that its own IP on each network is 192.168.1.3 and 192.168.2.3 respectively.
The other NUC docker containers can reach the QEMU image at those IP addresses (i.e. I don't want to give the docker container running the QEMU image its own IP address, have the other containers hit that address, and then NAT it to the QEMU address).
I tried the steps listed in this gist with no luck. Additionally, I tried creating a TAP with an address of 10.0.0.1 in the QEMU docker container, bound the QEMU guest to that TAP, and then created an iptables rule to NAT traffic received on the TAP. However, the issue is that the destination address is then 10.0.0.1 while the QEMU guest thinks its own address is 192.168.1.3, so it won't receive the packet.
As you can see, I am a bit stuck conceptually, and need some help with a direction to take this.
Right now, this is the network configuration I set up to handle traffic on the QEMU container (excuse the lack of consistent ip usage; iproute2 is not installed on the image I am using, and I can't seem to get the containers to reach the internet):
brctl addbr br0                 # bridge joining both docker networks and the tap
ip addr flush dev eth0
ip addr flush dev eth1
brctl addif br0 eth0
brctl addif br0 eth1
tunctl -t tap0 -u $(whoami)     # tap device for the QEMU guest
brctl addif br0 tap0
ifconfig br0 up
ifconfig tap0 up
ip addr add 192.168.1.3 dev br0
ip addr add 192.168.2.3 dev br0
ip route add 192.168.1.0/24 dev br0
ip route add 192.168.2.0/24 dev br0
ip addr add 10.0.0.1 dev tap0
Then I added the following forwarding rules:
iptables -t nat -A PREROUTING -i br0 -d 192.168.1.1 -j DNAT --to 10.0.0.1
iptables -t nat -A POSTROUTING -o br0 -s 10.0.0.1 -j SNAT --to 192.168.1.1
iptables -t nat -A PREROUTING -i br0 -d 192.168.2.1 -j DNAT --to 10.0.0.1
iptables -t nat -A POSTROUTING -o br0 -s 10.0.0.1 -j SNAT --to 192.168.2.1
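A direction I am also considering (a sketch only, untested) is to keep br0 free of IP addresses and let the QEMU guest own 192.168.1.3/192.168.2.3 itself, which would avoid the NAT mismatch entirely:
# flush the container's addresses and bridge eth0/eth1 with a tap;
# the host side keeps no IPs at all
ip addr flush dev eth0
ip addr flush dev eth1
brctl addbr br0
brctl addif br0 eth0
brctl addif br0 eth1
tunctl -t tap0 -u $(whoami)
brctl addif br0 tap0
ifconfig br0 up
ifconfig tap0 up
# then start QEMU with something like:
#   -netdev tap,id=n0,ifname=tap0,script=no,downscript=no -device virtio-net-pci,netdev=n0
# and configure 192.168.1.3 and 192.168.2.3 inside the guest itself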

Why can't I reach my traefik dashboard via HTTPS?

I am trying to run a traefik container on my docker swarm cluster. Because we are using TLS-encrypted communication, I want the traefik dashboard to be available via HTTPS.
In my browser, I try to access traefik via the docker swarm manager hostname at https://my.docker.manager, and therefore I mounted my host's certificate and key into the traefik service.
When I try to open https://my.docker.manager in my browser, I get a timeout.
When I try to curl https://my.docker.manager directly on the host (my.docker.manager), I get HTTP code 403 as the response.
My traefik config:
debug = true
logLevel = "DEBUG"
defaultEntryPoints = ["http", "https"]

[entryPoints]
  [entryPoints.http]
    address = ":80"
    [entryPoints.http.redirect]
      entryPoint = "https"
  [entryPoints.https]
    address = ":443"
    [entryPoints.https.tls]
      [[entryPoints.https.tls.certificates]]
        certFile = "/etc/traefik/certs/my.docker.manager.crt"
        keyFile = "/etc/traefik/certs/my.docker.manager.key"
      [entryPoints.https.tls.defaultCertificate]
        certFile = "/etc/traefik/certs/my.docker.manager.crt"
        keyFile = "/etc/traefik/certs/my.docker.manager.key"

[api]
  address = ":8080"

[docker]
  watch = true
  swarmMode = true
My traefik compose file:
version: "3.7"

services:
  traefik:
    image: traefik
    ports:
      - 80:80
      - 443:443
    networks:
      - devops-net
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /mnt/docker-data/secrets/certs/:/etc/traefik/certs/
    configs:
      - source: traefik.conf
        target: /etc/traefik/traefik.toml
    deploy:
      placement:
        constraints:
          - node.role == manager
      labels:
        - "traefik.docker.network=devops-net"
        - "traefik.frontend.rule=Host:my.docker.manager"
        - "traefik.port=8080"

networks:
  devops-net:
    driver: overlay
    external: true

configs:
  traefik.conf:
    external: true
As described in this article (https://www.digitalocean.com/community/tutorials/how-to-use-traefik-as-a-reverse-proxy-for-docker-containers-on-ubuntu-16-04), I expected to see the traefik dashboard when I call https://my.docker.manager in my browser. But I only get a timeout, and with curl https://my.docker.manager I get HTTP code 403. I followed the mentioned article except for two differences:
1) I did not configure credentials
2) I used my host's own certificates instead of Let's Encrypt
In the meantime I found the reason for my problem (though I am not sure what the best solution is). In case someone is interested, I will try to explain it.
I have a swarm of three nodes in the network prod.company.de.
My client is in another network, intranet.company.de.
My swarm manager is addressed as docker-manager.prod.company.de. On this host I have deployed the traefik service, which I want to access via https://docker-manager.prod.company.de (port 443, forwarded by the traefik config to the dashboard on 8080 inside the container).
If I trace my network traffic, I can see that the HTTPS request (https://docker-manager.prod.company.de) from my client browser reaches the server and that the traffic is then forwarded to the docker_gwbridge address 172.18.0.2. But the answer does not find its way back to my client because of the docker NAT configuration.
iptables -t nat -L -v

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target      prot opt in   out               source          destination
    8   504 MASQUERADE  all  --  any  docker_gwbridge   anywhere        anywhere     ADDRTYPE match src-type LOCAL
    0     0 MASQUERADE  all  --  any  !docker0          172.17.0.0/16   anywhere
    0     0 MASQUERADE  all  --  any  !docker_gwbridge  172.18.0.0/16   anywhere
MASQUERADE means that the source IP of the request is replaced with the bridge IP (in my case 172.18.0.1) so that answers are routed back to this IP. In the output above, the rule "8 504 MASQUERADE all -- any docker_gwbridge anywhere anywhere" would do this, but it is limited to requests from local addresses by "ADDRTYPE match src-type LOCAL". That is why using a browser on the docker host itself works, while connections from my client do not: the answer never finds its way back to my client's address.
Currently, I added one more NAT rule to my iptables:
iptables -t nat -A POSTROUTING -o docker_gwbridge -j MASQUERADE
which results in
1 52 MASQUERADE all -- any docker_gwbridge anywhere anywhere
After that, I can see the traefik Dashboard, when opening https://docker-manager.prod.company.de in my browser on the client.
But I do not understand why I have to do this: I found nothing about it in any documentation, and I don't think my use case is really rare. That's why I would be happy if someone could have a look at this post and either check whether I did something else wrong or explain to me why I have to do such things to get a standard use case working.
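One practical note for anyone copying the workaround: the extra rule does not survive a reboot. On Debian/Ubuntu it can be persisted with iptables-persistent (a sketch, assuming that package):
apt-get install iptables-persistent
iptables -t nat -A POSTROUTING -o docker_gwbridge -j MASQUERADE
netfilter-persistent save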
Kind regards

Docker container hits iptables to proxy

I have two VPSs: the first machine ("proxy" from now on) is the proxy, and the second machine ("dock" from now on) is the docker host. I want to redirect all traffic generated inside a docker container over the proxy, so as not to expose the dock machine's public IP.
As the connection between the VPSs goes over the internet rather than a local network, I created a tunnel between them with ip tunnel as follows:
On proxy:
ip tunnel add tun10 mode ipip remote x.x.x.x local y.y.y.y dev eth0
ip addr add 192.168.10.1/24 peer 192.168.10.2 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
On dock:
ip tunnel add tun10 mode ipip remote y.y.y.y local x.x.x.x dev eth0
ip addr add 192.168.10.2/24 peer 192.168.10.1 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
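Before layering NAT on top, the tunnel itself can be sanity-checked by pinging the peer addresses across it:
ping -c 3 192.168.10.1   # from dock: the proxy end of the tunnel
ping -c 3 192.168.10.2   # from proxy: the dock end of the tunnel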
PS: I do not know whether ip tunnel can be used in production; that is another question. I am planning to use libreswan or openvpn as the tunnel between the VPSs anyway.
Afterwards, I added SNAT rules to iptables on both VPSs, plus some routing rules, as follows:
On proxy:
iptables -t nat -A POSTROUTING -s 192.168.10.2/32 -j SNAT --to-source y.y.y.y
On dock:
iptables -t nat -A POSTROUTING -s 172.27.10.0/24 -j SNAT --to-source 192.168.10.2
ip route add default via 192.168.10.1 dev tun10 table rt2
ip rule add from 192.168.10.2 table rt2
And last but not least, I created a docker network with one test container attached to it as follows:
docker network create --attachable --opt com.docker.network.bridge.name=br-test --opt com.docker.network.bridge.enable_ip_masquerade=false --subnet=172.27.10.0/24 testnet
docker run -it --network testnet alpine:latest /bin/sh
Unfortunately, all of this ended with no success. So the question is: how do I debug this? Is this the correct approach? How would you do the redirection over a proxy?
Some words about the theory: traffic coming from the 172.27.10.0/24 subnet hits the iptables SNAT rule, and its source IP changes to 192.168.10.2. The routing rule then sends it over the tun10 device, i.e. the tunnel. On the proxy it hits the second SNAT rule, which changes the source IP to y.y.y.y, and the packet finally goes to its destination.
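To see where packets stop along that path, each hop can be watched with tcpdump (a debugging sketch; br-test is the bridge name set above):
tcpdump -ni br-test                # on dock: traffic leaving the container subnet
tcpdump -ni tun10                  # on dock: traffic after SNAT, entering the tunnel
tcpdump -ni tun10                  # on proxy: traffic arriving from the tunnel
tcpdump -ni eth0 'host y.y.y.y'    # on proxy: traffic after the final SNAT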

Docker swarm overlay network with vxlan routing over openvpn

I have set up a docker swarm with 3 nodes (docker 18.03). These nodes use an overlay network to communicate.
node1 (laptop):
    host: tun0 172.16.0.6 --> openvpn --> nat-gateway
    container n1: ip = 192.169.1.10
node2 (aws ec2):
    host: eth2 10.0.30.62
    container n2: ip = 192.169.1.9
node3 (aws ec2):
    host: eth2 10.0.140.122
    container n3: ip = 192.169.1.12
nat-gateway (aws ec2):
    tun0 172.16.0.1 --> openvpn --> laptop
    eth0 10.0.30.198
The scheme is partly working:
1. Containers can ping each other by name (n1, n2, n3)
2. Docker swarm commands are working; services can be deployed
The overlay is only partly working, though: some nodes cannot communicate with each other over either tcp or udp. I tried all combinations of the 3 nodes with udp and tcp.
I did a tcpdump on the nat gateway to monitor overlay vxlan network activity (port 4789):
tcpdump -l -n -i eth0 "port 4789"
tcpdump -l -n -i tun0 "port 4789"
Then I tried tcp communication from node1 to node3. On node3:
nc -l -s 0.0.0.0 -p 8999
On node1:
telnet 192.169.1.12 8999
Node1 then tries to connect to node3. I see the packets coming in on the nat-gateway over the tun0 interface and going out again on the eth0 interface, but it seems that the nat-gateway is not sending the replies back over the tun0 interface.
(The nat-gateway's iptables configuration and routing table were attached as images.)
Can you help me solve this issue?
I have been able to fix the issue by changing the configuration on the NAT gateway (the exact rules were likewise attached as images).
No masquerading of 172.16.0.0/22 is needed. All the workers and managers will route their traffic for 172.16.0.0/22 via the NAT gateway, and it knows how to send the packets over tun0.
Masquerading of eth0 was just wrong...
All the containers can now ping and establish tcp/ip connections to each other.
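Equivalent commands would look roughly like this (a reconstruction of the description above, not verbatim, since the actual rules were images):
# on the nat-gateway: plain forwarding between tunnel and VPC,
# no MASQUERADE on eth0 for 172.16.0.0/22
iptables -A FORWARD -i tun0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o tun0 -j ACCEPT
# on each worker/manager (or in the VPC route table): send the tunnel
# range to the nat-gateway
ip route add 172.16.0.0/22 via 10.0.30.198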

Logspout can't connect to papertrail

I can't get logspout to connect to papertrail. I get the following error:
!! lookup logs5.papertrailapp.com on 127.0.0.11:53: read udp 127.0.0.1:46185->127.0.0.11:53: i/o timeout
where 46185 changes every time I run the container. It seems like a DNS error, but nslookup logs5.papertrailapp.com gives the expected output, as does docker run busybox nslookup logs5.papertrailapp.com.
Beyond that, I don't even know how to interpret that error message, let alone address it. Any help debugging this would be hugely appreciated.
My Docker Compose file:
version: '2'

services:
  logspout:
    image: gliderlabs/logspout
    command: "syslog://logs5.papertrailapp.com:12345"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  sleep:
    image: benwhitehead/env-loop
Here 12345 stands for the actual Papertrail port. The result is the same whether I use syslog:// or syslog-tls://.
From https://docs.docker.com/engine/userguide/networking/configure-dns/:
the docker daemon implements an embedded DNS server which provides built-in service discovery for any container
It looks like your container is unable to connect to this DNS server. If your container is on the default bridge network, it won't reach the embedded DNS server. You can either set --dns to point at an outside resolver or update /etc/resolv.conf. It doesn't sound like a Papertrail issue at all.
(source)
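In the compose file above, that advice translates to the dns option (a sketch; 8.8.8.8 is just an example resolver):
version: '2'

services:
  logspout:
    image: gliderlabs/logspout
    command: "syslog://logs5.papertrailapp.com:12345"
    dns:
      - 8.8.8.8
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock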
Docker and iptables got in a fight. So I spun up a new machine, failed to set up iptables, and the problem was solved: no firewall at all to get in the way of Docker's connections!
Just kidding, don't do that. I got a toy database hacked that way.
Fortunately, it's now relatively easy to get iptables and Docker to live in harmony, using the DOCKER-USER iptables chain.
The solution, excerpted from my blog:
Configure Docker with iptables=true, and append the following to your iptables configuration:
# let replies to established connections back in
iptables -A DOCKER-USER -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# allow new connections to the published web ports
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
iptables -A DOCKER-USER -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
# drop everything else arriving on eth0 for containers
iptables -A DOCKER-USER -i eth0 -j DROP
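For reference, iptables=true is already the default for dockerd; setting it explicitly belongs in /etc/docker/daemon.json:
{
  "iptables": true
}
The DOCKER-USER chain is created by Docker but never flushed by it, so rules appended there are not overwritten when the daemon restarts.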
