I have a problem with a WireGuard VPN connection to my office network. I am just testing whether WireGuard can replace OpenVPN (which is working fine).
Both sides are Debian 9.7.
The connection between client and server is established successfully; I can ping and ssh in both directions.
The server has the local network 10.5.5.0/24 attached; the server's address there is 10.5.5.5, and there are two other computers, 10.5.5.100 and 10.5.5.200.
Server WireGuard Address = 10.0.1.1/24, Client = 10.0.1.3/24
AllowedIPs on Server: 10.0.1.3/32
AllowedIPs on Client: 10.0.1.1/32, 10.5.5.0/24
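In wg0.conf terms, the relevant parts of the two configs look roughly like this (keys and Endpoint omitted; all values are the ones listed above):

# Server /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.1.1/24
[Peer]
AllowedIPs = 10.0.1.3/32

# Client /etc/wireguard/wg0.conf
[Interface]
Address = 10.0.1.3/24
[Peer]
AllowedIPs = 10.0.1.1/32, 10.5.5.0/24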
Routes on the client are set; I can ping the server from the client at both 10.0.1.1 and 10.5.5.5.
I can't ping/access any other computer on 10.5.5.0/24 (10.5.5.100, 10.5.5.200).
I need to know whether the problem is with WireGuard, with Debian, or somewhere between chair and keyboard.
Finally... I figured it out: a missing iptables rule:
iptables -t nat -A POSTROUTING -o ens224 -j MASQUERADE
where ens224 is the network interface for the 10.5.5.0/24 subnet.
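For completeness, the full set of server-side pieces that makes the LAN reachable looks roughly like this (the WireGuard interface name wg0 is my assumption; everything else comes from the setup above):

sysctl -w net.ipv4.ip_forward=1                          # forwarding must be enabled
iptables -A FORWARD -i wg0 -o ens224 -j ACCEPT           # only needed if the FORWARD policy is DROP
iptables -A FORWARD -i ens224 -o wg0 -j ACCEPT
iptables -t nat -A POSTROUTING -o ens224 -j MASQUERADE   # the rule that was missing

An alternative to masquerading would be a static route for 10.0.1.0/24 via 10.5.5.5 on the LAN hosts (or on their default gateway), which keeps the real VPN client addresses visible to them.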
I am running a WireGuard server on a VPS. What I want to achieve is to route specific internet traffic (the VPS firewall is set to accept traffic on ports 10000:11000) through the VPN to the Docker containers on my home server.
[Internet] <-> [Wireguard 10.100.0.1] <-> [Home Server 10.100.0.2 (Docker Containers)]
Details:
Wireguard Server
OS: Ubuntu 20.04.2 LTS
iptables PostUp/PostDown rules from wg0.conf:
iptables -A FORWARD -i eth0 -j ACCEPT;
iptables -t nat -A PREROUTING -p tcp --dport 10000:11000 -j DNAT --to-destination 10.100.0.2;
iptables -w -t nat -A POSTROUTING -o eth0 -j MASQUERADE;
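For comparison, a more complete rule set that is commonly suggested for this kind of DNAT forwarding would look something like the sketch below; the two FORWARD rules are my assumption about what might be missing, not something I have verified:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 10000:11000 -j DNAT --to-destination 10.100.0.2
iptables -A FORWARD -i eth0 -o wg0 -p tcp -d 10.100.0.2 --dport 10000:11000 -j ACCEPT   # let the DNATed traffic into the tunnel
iptables -A FORWARD -i wg0 -o eth0 -j ACCEPT                                            # let reply/egress traffic back out
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE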
sysctl -p:
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
Home Server
OS: Ubuntu 20.04.2 LTS (Desktop)
systemd-networkd configuration (.network file for the WireGuard interface):
[Match]
Name = wguard
[Network]
Address = 10.100.0.2/32
[RoutingPolicyRule]
From = 10.200.0.0/16
Table = 250
[Route]
Gateway = 10.100.0.2
Table = 250
[Route]
Destination = 0.0.0.0/0
Type = blackhole
Metric = 1
Table = 250
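To verify that this policy routing is actually in effect, the kernel can be queried directly (these are standard iproute2 commands; the expected output is my reading of the config above):

ip rule show                 # should include: from 10.200.0.0/16 lookup 250
ip route show table 250      # should show the default via 10.100.0.2 and the blackhole fallback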
I created a Docker network on 10.200.0.0/16, and the containers use this network. The Docker containers return the VPN IP address when I check it with curl.
The home server returns my home IP address to a plain curl query, but the VPN IP address when queried via the WireGuard interface.
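For reference, the network itself would have been created with something like the following; the name wireguard0 is taken from the compose file below, and the exact flags are an assumption:

docker network create --driver bridge --subnet 10.200.0.0/16 wireguard0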
My problems:
I cannot ping the VPN host from the home server, but I can ping other VPN clients. I can, however, ping the VPN host from inside the containers, which is strange.
Port forwarding from the VPN to containers does not work properly.
Example docker-compose:
version: "3.7"
services:
  rutorrent:
    container_name: rutorrent1
    image: romancin/rutorrent:latest
    networks:
      - wireguard0
    ports:
      - "10010:51415"
      - "10011:80"
      - "10012:443"
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /home/user/Downloads/rutorrent/config:/config
      - /home/user/Downloads/rutorrent/downloads:/downloads

networks:
  wireguard0:
    external: true
I would appreciate it if you could at least point out where I am going wrong. I think my iptables rules are missing some lines, but I couldn't find a good reference or book to fully understand how to set this up properly.
Thanks
I have set up an environment in Jelastic that includes a load balancer (I tested both Apache and Nginx with the same results) with a public IP, and an application server running the Univention UCS DC Master docker image (I have also tried a plain Ubuntu 20.04 install).
The application server has a private IP address and is correctly reachable from the internet; I can also SSH into both the load balancer and the app server.
The one thing I can't seem to achieve is to have the app server access the internet (outbound traffic).
I have tried setting up the network on the app server and tried a few Nginx load-balancing configurations, but to be honest I've never used a load balancer before, and I suspect that configuring load balancing will not resolve my issue (I might be wrong).
Of course my intention is to learn load balancing, but if someone could just point me in the right direction I would be very grateful.
Question: what needs to be configured in Jelastic or in the servers to have the machines behind the load balancer access the internet?
Thank you for your time.
Cristiano
I was able to resolve the issue by simply detaching and re-attaching the public IP address to the server, so it was not a setup problem; something in Jelastic just got stuck.
Thanks all!
Edit: Actually, to effectively resolve the issue, I have to detach the public IP address from the univention/ucs docker image, attach it to another node in the environment (e.g. an Ubuntu server I have), then attach the public IP back to the univention docker image. I can't really figure out why, but it works for me.
To have the machines access the internet you should add a route in them using your load balancer as the gateway, like this:
Destination     Gateway     Genmask
0.0.0.0         <LB IP>     0.0.0.0
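On a typical Linux VM that corresponds to something like the following (replace the placeholder with the load balancer's private address):

ip route add default via <LB IP>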
Your VMs' firewalls should not block ports 80 and 443 for inbound/outbound traffic; using iptables:
sudo iptables -A INPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
On your load balancer you should masquerade outgoing traffic (change the source IP) and forward incoming traffic to your VMs' subnet via the LB interface connected to that subnet:
sudo iptables -t nat -A POSTROUTING --out-interface eth0 -j MASQUERADE
sudo iptables -A FORWARD -p tcp --dport 80 -i eth0 -o eth1 -j ACCEPT
sudo iptables -A FORWARD -p tcp --dport 443 -i eth0 -o eth1 -j ACCEPT
You should also enable IP forwarding on your load balancer:
echo 1 > /proc/sys/net/ipv4/ip_forward
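To make the forwarding setting and the rules survive a reboot, something like the following is commonly used on Debian/Ubuntu (iptables-persistent is one option among several):

echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf   # persist the sysctl
sysctl -p                                            # apply it now
apt-get install iptables-persistent                  # prompts to save the current rules
netfilter-persistent save                            # re-save after later changes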
I have two VPSs: the first machine ("proxy" from now on) is the proxy, and the second machine ("dock" from now on) is a Docker host. I want to redirect all traffic generated inside a Docker container over the proxy, so as not to expose the dock machine's public IP.
As the connection between the VPSs is over the internet (no local connection), I created a tunnel between them with ip tunnel as follows:
On proxy:
ip tunnel add tun10 mode ipip remote x.x.x.x local y.y.y.y dev eth0
ip addr add 192.168.10.1/24 peer 192.168.10.2 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
On dock:
ip tunnel add tun10 mode ipip remote y.y.y.y local x.x.x.x dev eth0
ip addr add 192.168.10.2/24 peer 192.168.10.1 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
PS: I don't know whether ip tunnel can be used in production; that is another question. In any case, I plan to use Libreswan or OpenVPN as the tunnel between the VPSs.
Afterwards, I added SNAT rules to iptables on both VPSs, plus some routing rules, as follows:
On proxy:
iptables -t nat -A POSTROUTING -s 192.168.10.2/32 -j SNAT --to-source y.y.y.y
On dock:
iptables -t nat -A POSTROUTING -s 172.27.10.0/24 -j SNAT --to-source 192.168.10.2
ip route add default via 192.168.10.1 dev tun10 table rt2
ip rule add from 192.168.10.2 table rt2
And last but not least, I created a Docker network with one test container attached to it, as follows:
docker network create --attachable --opt com.docker.network.bridge.name=br-test --opt com.docker.network.bridge.enable_ip_masquerade=false --subnet=172.27.10.0/24 testnet
docker run -it --rm --network testnet alpine:latest /bin/sh
Unfortunately, all of this ended with no success. So the question is: how do I debug this? Is this the correct approach? How would you do the redirection over a proxy?
A few words about the theory: traffic coming from the 172.27.10.0/24 subnet hits the iptables SNAT rule, and its source IP changes to 192.168.10.2. The routing rule sends it over the tun10 device, i.e. the tunnel. It then hits the second iptables SNAT rule, which changes the source IP to y.y.y.y, and the traffic finally reaches its destination.
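One way to debug it, step by step, is to watch the packets at each hop in that chain (interface names are the ones from the setup above; ICMP from a ping inside the container is the simplest test traffic):

# on dock: does container traffic leave the bridge and enter the tunnel with the rewritten source?
tcpdump -ni br-test icmp
tcpdump -ni tun10 icmp
# on proxy: does it arrive through the tunnel and leave eth0 with source y.y.y.y?
tcpdump -ni tun10 icmp
tcpdump -ni eth0 icmp
# common silent killers: reverse-path filtering and forwarding being off
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.tun10.rp_filter
sysctl net.ipv4.ip_forward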
I have set up a Docker swarm with 3 nodes (Docker 18.03). These nodes use an overlay network to communicate.
node1:
laptop
host tun0 172.16.0.6 --> openvpn --> nat gateway
container n1
ip = 192.169.1.10
node2:
aws ec2
host eth2 10.0.30.62
container n2
ip = 192.169.1.9
node3:
aws ec2
host eth2 10.0.140.122
container n3
ip = 192.169.1.12
nat-gateway:
aws ec2
tun0 172.16.0.1 --> openvpn --> laptop
eth0 10.0.30.198
The scheme is partly working:
1. Containers can ping each other by name (n1, n2, n3)
2. Docker swarm commands work; services can be deployed
The overlay is only partly working, though: some nodes cannot communicate with each other over either TCP or UDP. I tried all combinations of the 3 nodes with both UDP and TCP.
I did a tcpdump on the nat gateway to monitor overlay vxlan network activity (port 4789):
tcpdump -l -n -i eth0 "port 4789"
tcpdump -l -n -i tun0 "port 4789"
Then I tried TCP communication from node1 to node3. On node3:
nc -l -s 0.0.0.0 -p 8999
On node1:
telnet 192.169.1.12 8999
Node1 then tries to connect to node3. I see the packets coming in on the nat-gateway over the tun0 interface and going out on the eth0 interface, but it seems that the nat-gateway is not sending the replies back over the tun0 interface.
(iptables configuration and routing table of the nat-gateway omitted)
Can you help me solve this issue?
I have been able to fix the issue with a different iptables and routing configuration on the NAT gateway.
No masquerading of 172.16.0.0/22 is needed. All the workers and managers route their traffic for 172.16.0.0/22 via the NAT gateway, which knows how to send the packets over tun0. Masquerading on eth0 was just wrong...
All the containers can now ping each other and establish TCP connections to each other.
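As a sketch of what that means in practice (addresses taken from the diagram above; this is my reading of the fix, not the exact commands used, and on AWS the node-side route would normally go into the VPC route table, with source/destination checks disabled on the gateway instance):

# on each worker/manager (or in the VPC route table): send the VPN range to the NAT gateway
ip route add 172.16.0.0/22 via 10.0.30.198
# on the NAT gateway: plain forwarding, no MASQUERADE on eth0
sysctl -w net.ipv4.ip_forward=1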
I've got some issues with my captive portal.
I want a pop-up to open whenever anyone connects to my Raspberry Pi wifi access point. To do this, I have turned my RPi into a wifi access point and put a LAMP server on it.
Currently I use dnsmasq, and I changed the conf file to:
address=/#/10.0.0.1
listen-address=10.0.0.1
dhcp-range=10.0.0.10,10.0.0.50,12h
And I changed the iptables rules in order to capture all connections:
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.1:443
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80
So when I connect with my phone and open the browser, I'm redirected to the home page of the server => this is what I want, so that part works :)
But my problem is that I want the home page to open automatically when I connect to the network.
Does anyone know how to do this?
Another question: when I browse to "google.fr", I'm redirected to my Apache home page, but when I launch a search request in the browser, I get an error. Does anyone know why?
The reason you get an error is one of the following:
your server is not set up for HTTPS requests
if you request google.com/search?=whatever, /search doesn't exist on your server
you need to:
configure your server for HTTPS (but it will show a security alert because of the invalid certificate)
tell your server to rewrite any "unknown" URL to a specific virtual host that shows your home page (see the sketch below)
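For the second point, a minimal Apache sketch could look like this (portal.local and the paths are made-up values; FallbackResource comes with mod_dir and is one way among several):

<VirtualHost *:80>
    ServerName portal.local
    DocumentRoot /var/www/html
    # any request that does not map to a real file falls back to the portal page
    FallbackResource /index.php
</VirtualHost>

The DNAT rules above already send every HTTP request to the Pi; this just makes Apache answer all of them with the portal page instead of a 404.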
This tutorial for Ubuntu is a good follow-along for the Raspberry Pi if you are using Apache and PHP in your captive portal setup:
http://aryo.info/labs/captive-portal-using-php-and-iptables.html (from archive)