CoovaChilli fails to redirect - http-redirect

I'm trying to set up a captive portal with CoovaChilli. So far I can get my router to distribute IP addresses from the 10.1.0.0/24 subnet, but when I attempt to go to www.youtube.com the browser simply hangs. I can access the captive portal only by manually entering 10.1.0.1. The relevant files are below.
cat /etc/chilli/config
HS_LANIF=eth1 # Subscriber Interface for client devices
HS_NETWORK=10.1.0.0 # HotSpot Network (must include HS_UAMLISTEN)
HS_NETMASK=255.255.0.0 # HotSpot Network Netmask
HS_UAMLISTEN=10.1.0.1 # HotSpot IP Address (on subscriber network)
HS_UAMPORT=3990 # HotSpot UAM Port (on subscriber network)
HS_UAMUIPORT=4990 # HotSpot UAM "UI" Port (on subscriber network, for embedded portal)
HS_NASID=localhost
HS_RADIUS=localhost
HS_RADIUS2=localhost
HS_RADSECRET=testing123 # Set to be your RADIUS shared secret
HS_UAMSECRET=greatsecret # Set to be your UAM secret
HS_UAMALIASNAME=chilli
HS_SSID="GreenEarth"
HS_NASIP=127.0.0.1 # To explicitly set NAS-IP-Address
HS_UAMSERVER=$HS_UAMLISTEN
HS_UAMFORMAT=http://\$HS_UAMLISTEN/cake2/rd_cake/dynamic_details/chilli_browser_detect/
HS_MACAUTH=on # To turn on MAC Authentication
HS_TCP_PORTS="80 23 8000"
HS_MODE=hotspot
HS_TYPE=chillispot
HS_WWWDIR=/etc/chilli/www
HS_WWWBIN=/etc/chilli/wwwsh
HS_PROVIDER=Coova
HS_PROVIDER_LINK=http://www.coova.org/
HS_LOC_NAME="My HotSpot" # WISPr Location Name and used in portal
HS_COAPORT=3799
cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address 10.1.0.0
netmask 255.255.255.0
cat /etc/chilli/ipup.sh
iptables -I POSTROUTING -t nat -o $HS_WANIF -j MASQUERADE
cat /proc/sys/net/ipv4/ip_forward
1
Any help would be greatly appreciated. Thanks.

You need to enable HTTPS redirect in the CoovaChilli config file:
HS_REDIRSSL=on
HS_SSLKEYFILE=/etc/chilli/key.pem
HS_SSLCERTFILE=/etc/chilli/cert.pem
To generate the certificate files, see "How to create a self-signed certificate with openssl?".
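For instance, a self-signed pair matching the paths used above could be generated roughly like this (a sketch; the CN here is assumed to be your UAM listen address):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=10.1.0.1" \
  -keyout /etc/chilli/key.pem -out /etc/chilli/cert.pem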
You'll also need CoovaChilli built with SSL support enabled.
With this configuration your users should be redirected to the login page when they enter HTTPS URLs (like the YouTube one),
BUT they will get a browser warning because the certificate won't be the one the browser expects...

Related

How to channel docker container traffic through a host's network adapter

I have the following network routes on my host PC. I am using SoftEther VPN; it is set up on the adapter vpn_se.
$ ip route
default via 192.168.43.1 dev wlan0 proto dhcp metric 600
vpn.ip.adress via 192.168.43.1 dev wlan0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.30.0/24 dev vpn_se proto kernel scope link src 192.168.30.27
192.168.43.0/24 dev wlan0 proto kernel scope link src 192.168.43.103 metric 600
Now I want to route all traffic from a docker container through vpn_se, something like:
172.17.0.6 via 192.168.30.1 dev vpn_se
How can I achieve this?
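A rough, untested sketch using policy routing on the host (this assumes the container keeps the address 172.17.0.6 and that 192.168.30.1 is the gateway on the vpn_se side; neither is confirmed in the question):
# route everything originating from the container's address through a dedicated table
ip rule add from 172.17.0.6 table 100
ip route add default via 192.168.30.1 dev vpn_se table 100
# NAT the forwarded traffic as it leaves the VPN interface
iptables -t nat -A POSTROUTING -o vpn_se -j MASQUERADE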

How to connect to Coral Dev Board without USB connection

I have a person detection model running on my Google Dev Board, exposed as a Flask application.
I have enabled a direct wifi connection on the Dev Board as per the Coral documentation.
When the Dev Board is connected via a USB (OTG) cable, I am able to access the application using the following URL:
http://192.168.100.2:4664
When I disconnect the USB (OTG) connection, I am not able to access this URL from my laptop, which is connected to the same wifi network.
Please help.
You can use the standard ssh Linux tool to connect to the board. As a matter of fact, mdt is just a friendly wrapper around ssh.
On your host machine, do this and just keep typing enter:
host# ssh-keygen
It should generate a file ~/.ssh/id_rsa.pub containing an RSA public key. Copy that key exactly, log back into the board, create the file /home/mendel/.ssh/authorized_keys, and paste the key there.
Then get the IP address of the board's wlan0 interface using ip addr; for instance, mine looks like this:
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 3000
...
inet 192.168.0.160/24 brd 192.168.0.255 scope global dynamic noprefixroute wlan0
...
That is all; just log in with the address you found. For me it is:
ssh mendel@192.168.0.160
More on ssh: https://www.ssh.com/ssh/
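As a side note, if password logins to the board are enabled, ssh-copy-id can install the public key in one step instead of pasting it by hand (a sketch, using the example address above):
ssh-copy-id mendel@192.168.0.160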

Interpretation of ip route rules

The quote below comes from: https://github.com/docker/labs/blob/master/networking/concepts/05-bridge-networks.md
When we peek into the host routing table we can see the IP interfaces
in the global network namespace that now includes docker0. The host
routing table provides connectivity between docker0 and eth0 on the
external network, completing the path from inside the container to the
external network.
host$ ip route
default via 172.31.16.1 dev eth0
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.42.1
172.31.16.0/20 dev eth0 proto kernel scope link src 172.31.16.102
It is written that the host routing table provides connectivity between docker0 and eth0. I cannot see where in those rules the connectivity is introduced. Can you explain?
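As an illustration (not part of the original post), ip route get shows which of those entries the kernel would pick for a packet forwarded from the container side; 172.17.0.5 is an assumed container address:
host$ ip route get 8.8.8.8 from 172.17.0.5 iif docker0
8.8.8.8 from 172.17.0.5 via 172.31.16.1 dev eth0
# the default route carries the container's traffic out through eth0; docker's MASQUERADE rule then rewrites the source address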

Docker container hits iptables to proxy

I have two VPSs: the first machine (proxy from now on) is the proxy, and the second machine (dock from now on) is the docker host. I want to redirect all traffic generated inside a docker container over the proxy, so as not to expose the dock machine's public IP.
As the connection between the VPSs is over the internet, not a local link, I created a tunnel between them with ip tunnel as follows:
On proxy:
ip tunnel add tun10 mode ipip remote x.x.x.x local y.y.y.y dev eth0
ip addr add 192.168.10.1/24 peer 192.168.10.2 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
On dock:
ip tunnel add tun10 mode ipip remote y.y.y.y local x.x.x.x dev eth0
ip addr add 192.168.10.2/24 peer 192.168.10.1 dev tun10
ip link set dev tun10 mtu 1492
ip link set dev tun10 up
PS: I do not know whether ip tunnel is suitable for production; that is another question. I'm planning to use Libreswan or OpenVPN as the tunnel between the VPSs anyway.
Afterwards, I added SNAT rules to iptables on both VPSs and some routing rules as follows:
On proxy:
iptables -t nat -A POSTROUTING -s 192.168.10.2/32 -j SNAT --to-source y.y.y.y
On dock:
iptables -t nat -A POSTROUTING -s 172.27.10.0/24 -j SNAT --to-source 192.168.10.2
ip route add default via 192.168.10.1 dev tun10 table rt2
ip rule add from 192.168.10.2 table rt2
And last but not least, I created a docker network with one test container attached to it as follows:
docker network create --attachable --opt com.docker.network.bridge.name=br-test --opt com.docker.network.bridge.enable_ip_masquerade=false --subnet=172.27.10.0/24 testnet
docker run --network testnet alpine:latest /bin/sh
Unfortunately, all of this ended with no success. So the question is: how do I debug this? Is it the correct approach? How would you do the redirection over the proxy?
Some words about the theory: traffic coming from the 172.27.10.0/24 subnet hits the iptables SNAT rule on dock and its source IP changes to 192.168.10.2. The routing rule sends it over the tun10 device, i.e. the tunnel. On proxy it hits another iptables SNAT rule that changes the source IP to y.y.y.y, and it finally goes to the destination.
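One way to debug this (a sketch, not from the original post) is to generate test traffic from the container, e.g. ping 8.8.8.8, and watch each hop of the path described above with tcpdump to see where the packets stop:
# on dock: does the container's traffic reach the bridge, and does it leave SNATed over the tunnel?
tcpdump -ni br-test icmp
tcpdump -ni tun10 icmp
# on proxy: does it arrive over the tunnel, and does it leave eth0 with source y.y.y.y?
tcpdump -ni tun10 icmp
tcpdump -ni eth0 icmp
# also confirm forwarding is enabled on both machines
sysctl net.ipv4.ip_forward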

Dataflow worker unable to connect to Kafka through Cloud VPN

I have issues connecting a KafkaIO source to brokers available only through a Cloud VPN tunnel.
The tunnel is set up to allow traffic from a specific subnetwork (secure) and routes are set up and working for compute engines in that subnetwork.
When executing the pipeline with the DirectRunner, KafkaIO is able to connect to the brokers, whether through the VPN on a standard compute engine in the secure subnetwork or from a local machine with ssh tunnels set up by sshuttle.
When running the pipeline with the DataflowRunner, connections to the brokers fail with:
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
The pipeline gets executed within the secure subnetwork.
Connecting to the compute engine instance spawned by the job, the following routes are visible:
jgrabber#REDACTED-harness-REDACTED ~ $ ip r
default via 10.74.252.1 dev eth0 proto dhcp src 10.74.252.3 metric 1024
default via 10.74.252.1 dev eth0 proto dhcp metric 1024
10.74.252.1 dev eth0 proto dhcp scope link src 10.74.252.3 metric 1024
10.74.252.1 dev eth0 proto dhcp metric 1024
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown
The IPv4 addresses of the brokers are within a (remote) 172.17.0.0/16 network. The VPN is configured with a remote network range of 172.16.0.0/12.
Could the remote 172.17.0.0/16 network be shadowed by the virtual network set up and used by docker?
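If that is the case, the kernel-scope 172.17.0.0/16 route on docker0 shown above would be preferred over the default route, so traffic to the brokers would be sent to the local bridge instead of through the VPN. One hedged way to test this on a host where you control the docker daemon is to move the default bridge off 172.17.0.0/16 (the 192.168.200.1/24 range below is only an example) and restart docker:
cat /etc/docker/daemon.json
{
  "bip": "192.168.200.1/24"
}
systemctl restart docker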
