OpenWrt: Wired connection stops working when wireless is enabled on RT-N13U

I have an Asus RT-N13U running BARRIER BREAKER (14.07, r42625). I used TFTP to install the openwrt-ramips-rt305x-rt-n13u-squashfs-sysupgrade.bin image onto it. Everything had been working fine until I enabled the wan interface; all of a sudden my wired connections stopped working. Any ideas or pointers on where to look? This is my network config:
root@OpenWrt:~# cat /etc/config/network

config interface 'loopback'
        option ifname 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

config globals 'globals'
        option ula_prefix 'xxxx:xxxx:4553::/48'

config interface 'lan'
        option ifname 'eth0.1'
        option force_link '1'
        option type 'bridge'
        option proto 'static'
        option netmask '255.255.255.0'
        option ip6assign '60'
        option macaddr '00:0c:43:41:46:32'
        option ipaddr '192.168.7.1'

config interface 'wan'
        option ifname 'eth0.2'
        option proto 'dhcp'
        option macaddr '00:0c:43:xx:xx:xx'

config interface 'wan6'
        option ifname '@wan'
        option proto 'dhcpv6'

config switch
        option name 'rt305x'
        option reset '1'
        option enable_vlan '1'

config switch_vlan
        option device 'rt305x'
        option vlan '1'
        option ports '0 1 2 3 5 6t'

config switch_vlan
        option device 'rt305x'
        option vlan '2'
        option ports '4 6t'
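
As a first debugging step, you could dump the live switch state and compare it against the VLAN sections above (a sketch; the switch name rt305x is taken from the config, and swconfig is the standard switch tool on this OpenWrt target):

root@OpenWrt:~# swconfig list                # list the switch devices the kernel knows about
root@OpenWrt:~# swconfig dev rt305x show     # per-port link state and VLAN membership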


Set up a captive portal on OpenWrt

I'm trying to set up a captive portal on OpenWrt.
I did all the work, and when users connect to the router, they are automatically redirected to the index.html page.
But when the Internet is disconnected, an "Internet may not be available" message appears, and Android devices cannot detect the captive portal page.
This is my /etc/config/dhcp file:
root@OpenWrt:/etc/config# cat dhcp
config dnsmasq
        option domainneeded '1'
        option boguspriv '1'
        option localise_queries '1'
        option rebind_protection '1'
        option rebind_localhost '1'
        option local '/lan/'
        option domain 'lan'
        option expandhosts '1'
        option authoritative '1'
        option readethers '1'
        option leasefile '/tmp/dhcp.leases'
        option resolvfile '/tmp/resolv.conf.auto'
        option logqueries '1'

config dhcp 'lan'
        option interface 'lan'
        option start '100'
        option limit '150'
        option leasetime '12h'
        option dhcpv6 'server'
        option ra 'server'
        option ra_management '1'

config dhcp 'wan'
        option interface 'wan'
        option ignore '1'

config odhcpd 'odhcpd'
        option maindhcp '0'
        option leasefile '/tmp/hosts/odhcpd'
        option leasetrigger '/usr/sbin/odhcpd-update'

config domain
        option name 'connectivitycheck.gstatic.com'
        option ip '192.168.1.1'

config domain
        option name 'apple.com'
        option ip '192.168.1.1'

config domain
        option name 'captive.apple.com'
        option ip '192.168.1.1'

config domain
        option name 'detectportal.firefox.com'
        option ip '192.168.1.1'

config domain
        option name 'gstatic.com'
        option ip '192.168.1.1'

config domain
        option name 'clients3.google.com'
        option ip '192.168.1.1'

config domain
        option name 'connectivitycheck.android.com'
        option ip '192.168.1.1'

config domain
        option name 'msftconnecttest.com'
        option ip '192.168.1.1'

config domain
        option name 'play.googleapis.com'
        option ip '192.168.1.1'

config domain
        option name 'spectrum.s3.amazonaws.com'
        option ip '192.168.1.1'

config domain
        option name 'mtalk.google.com'
        option ip '192.168.1.1'

config domain
        option name 'alt3-mtalk.google.com'
        option ip '192.168.1.1'

config domain
        option name 'alt4-mtalk.google.com'
        option ip '192.168.1.1'

config domain
        option name 'connectivity-check.ubuntu.com'
        option ip '192.168.1.1'
I think Android devices send ICMP packets to check for Internet connectivity, so I used iptables to drop all ICMP packets, but that did not help either.
Note: this problem occurs only when Android users connect. Ubuntu and Firefox recognize the index page.
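
For reference, the ICMP-drop rule described above would look something like this (a sketch; br-lan as the LAN bridge is an assumption, and note that Android's connectivity check is HTTP-based, so dropping ICMP alone may not change its behavior):

iptables -I FORWARD -i br-lan -p icmp -j DROP    # drop ICMP forwarded from LAN clients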

Docker Bridge Conflicts with Host Network

Docker seems to be creating a bridge after a container starts running that then conflicts with my host network. This is not the default bridge docker0, but another bridge that is created after a container has started. I am able to configure the default bridge according to the older user guide https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/; however, I do not know how to configure this other bridge so it does not conflict with 172.17.0.0/16.
The current issue is that my container cannot access other systems on the host network once this bridge becomes active.
Any ideas?
Version of docker:
Version 18.03.1-ce-mac65 (24312)
This is the bridge that gets created. It is not always in 172.17, but sometimes it is.
br-f7b50f41d024 Link encap:Ethernet  HWaddr 02:42:7D:1B:05:A3
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
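
To confirm which docker network owns such a bridge, you can look it up by the id fragment in the interface name (a sketch; the hex string after "br-" is a prefix of the docker network id):

docker network inspect f7b50f41d024 --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'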
When docker networks are created (e.g. using docker network create or indirectly through docker-compose) without explicitly specifying a subnet range, dockerd allocates a new /16 network, starting from 172.N.0.0/16, where N is a number that is incremented (e.g. N=17, N=18, N=19, N=20, ...). A given N is skipped if a docker network (a custom one, or the default docker bridge) already exists in the range.
You can explicitly specify a safe IP range when creating a docker bridge on the CLI (i.e. one that excludes the host IPs in your network). But usually bridge networks are created automatically by docker-compose with default blocks, so excluding these IPs reliably would require modifying every docker-compose.yaml file you encounter, and it's bad practice to include host-specific things inside a compose file.
Instead, you can play with the networks that docker considers allocated, to force dockerd to "skip" subnets. I'm outlining four methods below:
Method #0 -- configure the pool of IPs in the daemon config
If your docker version is recent enough (check whether your dockerd supports the --default-address-pool option), and you have permissions to configure the docker daemon's command line arguments, you can try passing --default-address-pool ARG options to the dockerd command. For example:
# allocate /24 subnets with the given CIDR prefix only.
# note that this prefix excludes 172.17.*
--default-address-pool base=172.24.0.0/13,size=24
You can add this setting in one of the etc files (/etc/default/docker or /etc/sysconfig/docker, depending on your distribution). There is also a way to set this parameter in daemon.json, sketched below.
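
The equivalent daemon.json setting would look like this (a sketch using the same example pool as above; place it in /etc/docker/daemon.json and restart dockerd):

{
  "default-address-pools": [
    { "base": "172.24.0.0/13", "size": 24 }
  ]
}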
Method #1 -- create a dummy placeholder network
You can prevent the entire 172.17.0.0/16 from being used by dockerd (in future bridge networks) by creating a very small docker network anywhere inside 172.17.0.0/16.
Find 4 consecutive IPs in 172.17.* that you know are not in use in your host network, and sacrifice them in a "tombstone" docker bridge. Below, I'm assuming the IPs 172.17.253.0, 172.17.253.1, 172.17.253.2, and 172.17.253.3 (i.e. 172.17.253.0/30) are unused in your host network.
docker network create --driver=bridge --subnet 172.17.253.0/30 tombstone
# created: c48327b0443dc67d1b727da3385e433fdfd8710ce1cc3afd44ed820d3ae009f5
Note the /30 suffix here, which defines a block of 4 different IPs. In theory, the smallest valid network subnet would be a /31, which consists of a total of 2 IPs (network identifier + broadcast). Docker asks for a /30 minimum, probably to account for a gateway host and another container. I picked .253.0 arbitrarily; you should pick something that's not in use in your environment. Also note that the identifier tombstone is nothing special; you can rename it to anything that will help you remember why it's there when you find it again several months later.
Docker will modify your routing table to send traffic for these 4 IPs to go through that new bridge instead of the host network:
# output of route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.5.1 0.0.0.0 UG 0 0 0 eth1
172.17.253.0 0.0.0.0 255.255.255.252 U 0 0 0 br-c48327b0443d
172.20.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.5.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
Note: Traffic for 172.17.253.{0,1,2,3} goes through the tombstone docker bridge just created (br-c4832...). Traffic for any other IP in the 172.17.* would go through the default route (host network). My docker bridge (docker0) is on 172.20.0.1, which may appear unusual -- I've modified bip in /etc/docker/daemon.json to do that. See this page for more details.
The twist: if there exists a bridge occupying even a subportion of a /16, new bridges created will skip that range. If we create new docker networks, we can see that the rest of 172.17.0.0/16 is skipped, because the range is not entirely available.
docker network create foo_test
# c9e1b01f70032b1eff08e48bac1d5e2039fdc009635bfe8ef1fd4ca60a6af143
docker network create bar_test
# 7ad5611bfa07bda462740c1dd00c5007a934b7fc77414b529d0ec2613924cc57
The resulting routing table:
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.5.1 0.0.0.0 UG 0 0 0 eth1
172.17.253.0 0.0.0.0 255.255.255.252 U 0 0 0 br-c48327b0443d
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-c9e1b01f7003
172.19.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-7ad5611bfa07
172.20.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.5.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
Notice that the rest of the IPs in 172.17.0.0/16 have not been used. The new networks reserved 172.18.0.0/16 and 172.19.0.0/16. Sending traffic to any of your conflicting IPs outside that tombstone network would go via your host network.
You would have to keep that tombstone network around in docker, but not use it in your containers. It's a dummy placeholder network.
Method #2 -- bring down the conflicting bridge network
If you wish to temporarily avoid the IP conflict, you can bring the conflicting docker bridge down using ip: ip link set dev br-xxxxxxxxxxxx down (where br-xxxxxxxxxxxx is the name of the bridge device from route -n or ip link show). This will have the effect of removing the corresponding bridge routing entry from the routing table, without modifying any of the docker metadata.
This is arguably not as good as the method above, because you'd have to bring down the interface possibly every time dockerd starts, and it would interfere with your container networking if there was any container using that bridge.
If method 1 stops working in the future (e.g. because docker tries to be smarter and reuse unused parts of an ip block), you could combine both approaches: e.g. create a large tombstone network with the entire /16, not use it in any container, and then bring its corresponding br-x device down.
Method #3 -- reconfigure your docker bridge to occupy a subportion of the conflicting /16
As a slight variation of the above, you could make the default docker bridge overlap with a region of 172.17.*.* that is not used in your host network. You can change the default docker bridge subnet by changing the bridge ip (i.e. bip key) in /etc/docker/daemon.json (See this page for more details). Just make it a subregion of your /16, e.g. in a /24 or smaller.
I've not tested this, but I presume any new docker network would skip the remainder of 172.17.0.0/16 and allocate an entirely different /16 for each new bridge.
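
For example, a daemon.json that parks the default bridge on a small slice of 172.17.* might look like this (a sketch; 172.17.100.0/24 is an arbitrary choice, pick a slice that is unused in your host network):

{
  "bip": "172.17.100.1/24"
}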
In my case, the bridge was created by docker-compose, and it can be configured within the compose file.
Answer found here: Docker create two bridges that corrupts my internet access
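
If the conflicting bridge comes from docker-compose, you can pin its subnet in the compose file itself (a sketch; the subnet value is an assumption, choose one outside your host network):

networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: 172.24.100.0/24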

Mesh network with OpenWrt: clients cannot ping each other

I am building a WiFi mesh network using OpenWrt 802.11s and TP-Link WR703N mini routers for my final year project. OLSR is running as the routing protocol. I am using Linux.
There are four routers in total:

          LAN IP Address    MAC    Mesh IP Address
Node A    192.168.10.1      A0     192.168.5.1
Node B    192.168.11.1      6E     192.168.5.2
Node C    192.168.12.1      42     192.168.5.3
Node D    192.168.13.1      54     192.168.5.4

Above you can see the LAN IP address and the mesh IP address of each router.
Client X is connected to Node A with a cable and is assigned the IP address 192.168.10.100. Client Y is connected to Node D and is assigned the IP address 192.168.13.50.
When I try to ping X from Y, I cannot get it to work. I also can't ping the mesh IP addresses from the clients' terminals. But when I am logged into OpenWrt via terminal, I am able to ping any IP address within the mesh.
I have captured some 802.11s beacon frames, which I am adding to the post.
If you look at the very end:
Capability: 0x01
...
.... 0... = Mesh Forwarding: No
...
I feel like that's the problem, because in a previous thesis project the same setting was Yes, and it was working.
So, does anybody have any idea?
Additionally, I checked with Wireshark that OLSR is working perfectly and transmits HELLO messages, TC messages, etc.
Here are one router's config files (olsrd, network, wireless); they are all the same except for the IP addresses:
root@OpenWrt:/etc/config# cat wireless

config wifi-device 'radio0'
        option type 'mac80211'
        option macaddr '14:cf:92:3c:67:54'
        option hwmode '11ng'
        option htmode 'HT20'
        list ht_capab 'SHORT-GI-20'
        list ht_capab 'SHORT-GI-40'
        list ht_capab 'RX-STBC1'
        list ht_capab 'DSSS_CCK-40'
        option country 'IE'
        option channel '11'
        option txpower '7'

config wifi-iface
        option device 'radio0'
        option mesh_id 'mesh_OpenWrt'
        option mode 'mesh'
        option network 'mesh'
        option encryption 'none'

root@OpenWrt:/etc/config# cat network

config interface 'loopback'
        option ifname 'lo'
        option proto 'static'
        option ipaddr '127.0.0.1'
        option netmask '255.0.0.0'

config interface 'lan'
        option ifname 'eth0'
        option type 'bridge'
        option proto 'static'
        option netmask '255.255.255.0'
        option ipaddr '192.168.13.1'
        option gateway '192.168.5.4'

config interface 'mesh'
        option _orig_ifname 'wlan0'
        option _orig_bridge 'false'
        option proto 'static'
        option ipaddr '192.168.5.4'
        option netmask '255.255.255.0'

root@OpenWrt:/etc/config# cat olsrd

config olsrd
        option IpVersion '4'
        option FIBMetric 'flat'
        option LinkQualityLevel '2'
        option LinkQualityAlgorithm 'etx_ff'
        option OlsrPort '698'
        option Willingness '3'
        option NatThreshold '1.0'

config LoadPlugin
        option library 'olsrd_arprefresh.so.0.1'

config LoadPlugin
        option library 'olsrd_dyn_gw.so.0.5'

config LoadPlugin
        option library 'olsrd_httpinfo.so.0.1'
        option port '1978'
        list Net '0.0.0.0 0.0.0.0'

config LoadPlugin
        option library 'olsrd_nameservice.so.0.3'

config LoadPlugin
        option library 'olsrd_txtinfo.so.0.1'
        option accept '0.0.0.0'

config Interface
        option ignore '0'
        option Mode 'mesh'
        option interface 'mesh'

config InterfaceDefaults
        option Mode 'mesh'
I believe there will be one bridge interface, br-lan, and two wireless interfaces, wlan0 and wlan1.
On Node A:
1. Add the two interfaces wlan0 and wlan1 into the bridge br-lan:
   wlan0<----[br-lan]--->wlan1
2. Make wlan0 a mesh point and wlan1 an AP (see the sketch after this list).
3. Make these changes in /etc/config/network:
   option type 'bridge'
   option proto 'static'
   option netmask '255.255.255.0'
   option ipaddr '192.168.13.1'
4. Run the DHCP server on br-lan of Node A.
5. Change /etc/config/network on the other nodes to:
   option proto 'dhcp'
Now Node B, Node C, and Node D are all in Node A's DHCP subnet, 192.168.13.x: DHCP clients run on Nodes B/C/D and the DHCP server runs on Node A.
This will resolve your end-to-end ping issue.
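
For step 2, Node A's wireless config might look like this (a sketch; the AP SSID is an assumption, the mesh_id is taken from your config, and both wifi-iface sections attach to the lan network so they land in br-lan):

config wifi-iface
        option device 'radio0'
        option mode 'mesh'
        option mesh_id 'mesh_OpenWrt'
        option network 'lan'
        option encryption 'none'

config wifi-iface
        option device 'radio0'
        option mode 'ap'
        option ssid 'mesh-access'
        option network 'lan'
        option encryption 'none'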
Another approach, if you want all nodes to have internet access.
The setup should look like this:
ISP<----ETH--->wan[NodeA]-wlan0<---mesh-->wlan0-[NodeB]<---mesh-->wlan0-[NodeC]<---mesh--->wlan0-[NodeD]-wlan1<---wifi--->sta/pc
All nodes will get a DHCP IP; a DHCP client needs to run on every node's br-lan.
On Node A:
- The wan interface is eth0.2.
- Add all the interfaces eth0.2, wlan0, wlan1 into the bridge br-lan.
- Make these changes in /etc/config/network:
  option type 'bridge'
  option proto 'dhcp'
  # option netmask '255.255.255.0' /* comment this line out */
  # option ipaddr '192.168.13.1' /* comment this line out */
The rest of the nodes stay the same as before.
This will resolve your end-to-end ping issue, and every node and STA will have internet access.

Connecting VMs Using GRE Tunnels - Open vSwitch

Hello everyone, I'm really new to networking, so I'm a little bit lost; I hope someone can help me.
I have two physical nodes with the same interface configuration:
# The primary network interface
#auto eth0
#iface eth0 inet dhcp
auto br0
iface br0 inet dhcp
        bridge_ports eth0
        bridge_fd 9
        bridge_hello 2
        bridge_maxage 12
        bridge_stp off
My nodes have the following public IPs:
ubuntu001: 158.42.104.129
ubuntu002: 158.42.104.139
I run one VM on each node using the default libvirt configuration:
VM on ubuntu001: 10.1.1.189
VM on ubuntu002: 10.1.1.59
I want the VMs to ping each other through a GRE tunnel using OVS, so I did the following, but it didn't work:
First, I create an OVS bridge:
# ovs-vsctl add-br ovs-br0
Second, I connect my bridge to its uplink, which in this case is eth0:
# ovs-vsctl add-port ovs-br0 eth0
Third, I run a VM on each node (ubuntu001: 10.1.1.189 and ubuntu002: 10.1.1.59, respectively).
Fourth, I add a port for the GRE tunnel (the first command on ubuntu001, the second on ubuntu002):
# ovs-vsctl add-port ovs-br0 gre0 -- set interface gre0 type=gre options:remote_ip=158.42.104.139
# ovs-vsctl add-port ovs-br0 gre0 -- set interface gre0 type=gre options:remote_ip=158.42.104.129
I did the same on the other node, and this is what ovs-vsctl show reports:
root@ubuntu001:~# ovs-vsctl show
41268e02-3996-4caa-b941-e4fe9c718e35
    Bridge "ovs-br0"
        Port "ovs-br0"
            Interface "ovs-br0"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="158.42.104.139"}
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.0.2"

root@ubuntu002:~# ovs-vsctl show
f0128df4-1a89-4999-8add-b5076ff055ee
    Bridge "ovs-br0"
        Port "ovs-br0"
            Interface "ovs-br0"
                type: internal
        Port "gre0"
            Interface "gre0"
                type: gre
                options: {remote_ip="158.42.104.129"}
        Port "eth0"
            Interface "eth0"
    ovs_version: "2.0.2"
What am I doing wrong, or is something missing?
Add this to /etc/network/interfaces:
auto br-ovs=br-ovs
iface br-ovs inet manual
        ovs_type OVSBridge
        ovs_ports gre1 gre2
        ovs_extra set bridge ${IFACE} stp_enable=true
        mtu 1462

allow-br-ovs gre1
iface gre1 inet manual
        ovs_type OVSPort
        ovs_bridge br-ovs
        ovs_extra set interface ${IFACE} type=gre options:remote_ip=158.42.104.139 options:key=1

auto br1
iface br1 inet manual  # (or static, or DHCP)
        mtu 1462
I do not know how to do this with commands (but see the sketch after these notes).
I think eth0 should not be in the output of ovs-vsctl show.
stp_enable=true is optional; I don't think it is needed in the case of 2 nodes.
Set the mtu to suit your needs. This example is for when the real NIC's mtu is 1500.
remote_ip=158.42.104.139 should contain the other node's IP; it is different on the 2 nodes.
options:key=1 is also optional; it can be used to label 2 GRE networks (e.g. the second mesh would have key=2, etc.).
You can add VMs to br1 and they will be able to ping each other.
Don't forget to set the VMs' mtu to 1462.
This tutorial might be useful: https://wiredcraft.com/blog/multi-host-docker-network/
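
Regarding the note above about commands: a rough ovs-vsctl equivalent of that interfaces stanza might be the following (a sketch, untested, using the same names and ubuntu001's remote IP):

# create the bridge and enable STP on it
ovs-vsctl add-br br-ovs
ovs-vsctl set bridge br-ovs stp_enable=true
# add the GRE port pointing at the other node
ovs-vsctl add-port br-ovs gre1 -- set interface gre1 type=gre options:remote_ip=158.42.104.139 options:key=1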

CoovaChilli fails to redirect

I'm trying to set up a captive portal with CoovaChilli. So far I can get my router to distribute IP addresses from the 10.1.0.0/24 subnet, but when I attempt to go to www.youtube.com the browser simply hangs. I can access the captive portal only by manually entering 10.1.0.1. The related files are below:
cat /etc/chilli/config
HS_LANIF=eth1 # Subscriber Interface for client devices
HS_NETWORK=10.1.0.0 # HotSpot Network (must include HS_UAMLISTEN)
HS_NETMASK=255.255.0.0 # HotSpot Network Netmask
HS_UAMLISTEN=10.1.0.1 # HotSpot IP Address (on subscriber network)
HS_UAMPORT=3990 # HotSpot UAM Port (on subscriber network)
HS_UAMUIPORT=4990 # HotSpot UAM "UI" Port (on subscriber network, for embedded portal)
HS_NASID=localhost
HS_RADIUS=localhost
HS_RADIUS2=localhost
HS_RADSECRET=testing123 # Set to be your RADIUS shared secret
HS_UAMSECRET=greatsecret # Set to be your UAM secret
HS_UAMALIASNAME=chilli
HS_SSID="GreenEarth"
HS_NASIP=127.0.0.1 # To explicitly set NAS-IP-Address
HS_UAMSERVER=$HS_UAMLISTEN
HS_UAMFORMAT=http://\$HS_UAMLISTEN/cake2/rd_cake/dynamic_details/chilli_browser_detect/
HS_MACAUTH=on # To turn on MAC Authentication
HS_TCP_PORTS="80 23 8000"
HS_MODE=hotspot
HS_TYPE=chillispot
HS_WWWDIR=/etc/chilli/www
HS_WWWBIN=/etc/chilli/wwwsh
HS_PROVIDER=Coova
HS_PROVIDER_LINK=http://www.coova.org/
HS_LOC_NAME="My HotSpot" # WISPr Location Name and used in portal
HS_COAPORT=3799
cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
        address 10.1.0.0
        netmask 255.255.255.0
cat /etc/chilli/ipup.sh
iptables -I POSTROUTING -t nat -o $HS_WANIF -j MASQUERADE
cat /proc/sys/net/ipv4/ip_forward
1
Any help would be greatly appreciated. Thanks.
You need to enable HTTPS redirect in the CoovaChilli config file:
HS_REDIRSSL=on
HS_SSLKEYFILE=/etc/chilli/key.pem
HS_SSLCERTFILE=/etc/chilli/cert.pem
To generate certificate files, see How to create a self-signed certificate with openssl?.
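
A matching self-signed pair could be generated like this (a sketch; the key size, validity period, and subject CN are assumptions, and the paths match the config lines above):

openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /etc/chilli/key.pem -out /etc/chilli/cert.pem \
  -subj "/CN=10.1.0.1"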
You'll also need CoovaChilli built with SSL support enabled.
With this configuration your users should be redirected to the login page when entering HTTPS URLs (like the YouTube one).
BUT they will get a browser warning, because the certificate won't be the one the browser expects...
