How to connect to Coral Dev Board without USB connection

I have a person detection model running on my Google Coral Dev Board, exposed as a Flask application.
I have enabled a direct wifi connection on the Dev Board as per the Coral documentation.
When the Dev Board is connected via a USB (OTG) cable, I am able to access the application using the following URL:
http://192.168.100.2:4664
When I disconnect the USB (OTG) connection, I am not able to access this URL from my laptop, which is connected to the same wifi network.
Please help.

You can use the standard ssh Linux tool to connect to the board. In fact, mdt is just a friendly wrapper around ssh.
On your host machine, run the following and just keep pressing enter:
host# ssh-keygen
It should generate the file ~/.ssh/id_rsa.pub containing an RSA public key. Copy that key, log back into the board, create the file /home/mendel/.ssh/authorized_keys, and paste the key there.
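If your system has ssh-copy-id, the manual copy can be done in one step instead. A minimal sketch, assuming the board is still reachable over the USB (OTG) connection at 192.168.100.2 and that password login is enabled:
# sketch: install the public key on the board in one step
# (192.168.100.2 is the board's USB/OTG address from the question)
ssh-copy-id mendel@192.168.100.2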
Then get the board's IP address with ip addr, looking at the wlan0 interface; for instance, mine looks like this:
3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 3000
...
inet 192.168.0.160/24 brd 192.168.0.255 scope global dynamic noprefixroute wlan0
...
That is all. Just log in with the address you found; for me it is:
ssh mendel@192.168.0.160
More on ssh: https://www.ssh.com/ssh/

Related

tc qdisc with GRE in openwrt

I'm trying to apply traffic control to a GRE interface on an OpenWrt board. I followed the steps below:
Create a GRE interface named gre1 on both tunnel end devices.
Tested reachability with ping: success.
Create a qdisc using the following command:
tc qdisc add dev gre1 root handle 1: htb default 2
Before creating tc classes, I tried to ping the tunnel interface, but this failed.
I tried to capture packets on gre1 but found 0 packets.
I monitored the statistics of the qdisc using the command
tc -p -s -d qdisc show dev gre1
and found that the packet drop count is increasing.
I have tested the same setup on an Ubuntu PC and found it working. Also, if I change the tunnel to a VPN tunnel instead of GRE, it works fine.
Is there anything additional I need to handle to implement tc on GRE?
Any help will be appreciated.
Fixed!
Add a class:
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 60kbps ceil 100kbps
then add an sfq qdisc for the class:
tc qdisc add dev eth0 parent 1:2 handle 20: sfq
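For completeness, a consolidated sketch of the whole sequence applied to the tunnel interface from the question; the answer's commands use eth0, so substituting gre1 (and attaching the class directly to the root qdisc) is an assumption here:
# sketch: htb root qdisc with a default class, plus sfq, on gre1
tc qdisc add dev gre1 root handle 1: htb default 2
tc class add dev gre1 parent 1: classid 1:2 htb rate 60kbps ceil 100kbps
tc qdisc add dev gre1 parent 1:2 handle 20: sfq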

route all traffic over gre tunnel

I have an openvswitch sw1 with subnet 10.207.39.0/24 that has LXC containers attached, and I have the same setup on another physical server; I have successfully connected these using a GRE tunnel. However, the LXC containers have additional ports on additional openvswitches, e.g. sw4 with subnet 192.220.39.0/24, and I want to push that traffic over the single GRE tunnel on sw1, because there is only one physical interface and it's not possible to have multiple GRE tunnels on each openvswitch with the same physical-interface IP endpoints. Is it possible to push the traffic on the other openvswitches over the GRE tunnel on sw1? Or is there a better way to connect multiple subnets in LXC containers on two physical hosts? Thanks.
I solved this "myself", with help from the two links provided below, after sleeping on it and relentless Google searches over several frustrating days.
I realize the solution is pretty simple and would be clear to a networking professional. I am an Oracle DBA and only know as much networking as I need to work with orabuntu-lxc software, LXC containers, and Oracle software, so please keep that in mind if the below is "obvious" - it wasn't obvious to me in my network ignorance.
I got the clue on how to solve the actual steps from this blog post:
http://www.cnblogs.com/popsuper1982/p/3800548.html
I confirmed that any subnet should be routable over a GRE tunnel from this blog post (which gave me hope to keep working towards a solution):
https://supportforums.adtran.com/thread/1408
In particular the author stated in the adtran comment that "GRE tunnels have no limitation on the types of traffic which can traverse it. It can route multiple subnets without multiple tunnels."
That post told me that the solution was likely a routing solution and that only one GRE tunnel would be needed for this use case.
Note that this feature of "no limitation" on the types of traffic is great for Oracle RAC because we need to be able to send multicast over the GRE tunnel for RAC.
This use case:
I am building an Oracle RAC infrastructure to run in LXC Linux containers. I have a public network 10.207.39.0/24 on openvswitch sw1 and a private RAC interconnect network 192.220.39.0/24 on openvswitch sw4. I want to be able to build the RAC in LXC linux containers that span multiple physical hosts and so I created a GRE tunnel to connect the 10.207.39.1 tunnel endpoint on colossus to 10.207.39.5 tunnel endpoint on guardian.
Here are the setup details:
Host "guardian":
LAN wireless physical network interface: wlp4s0 (IP 192.168.1.11)
sw1 10.207.39.5
sw4 192.220.39.5
Host "colossus":
LAN wireless physical network interface: wlp4s0 (IP 192.168.1.15)
sw1 10.207.39.1
sw4 192.220.39.1
Step 1:
Create a GRE tunnel between the sw1 openvswitches on the two physical hosts, with the physical wireless LAN interfaces as endpoints:
Host "guardian": Create gre tunnel phys hosts (guardian --> colossus).
sudo ovs-vsctl add-port sw1 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.15
Host "colossus": Create gre tunnel phys hosts (colossus --> guardian).
sudo ovs-vsctl add-port sw1 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.1.11
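To sanity-check step 1, you can dump the switch configuration on each host and confirm that the gre0 port appears; a quick check, not from the original post:
# sketch: verify the gre0 port exists on sw1
sudo ovs-vsctl show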
Step 2:
Route the 192.220.39.0/24 network over the established GRE tunnel as shown below:
Host "guardian": route 192.220.39.0/24 openvswitch sw4 over GRE tunnel:
sudo route add -net 192.220.39.0/24 gw 10.207.39.5 dev sw1
Host "colossus": route 192.220.39.0/24 openvswitch sw4 over GRE tunnel:
sudo route add -net 192.220.39.0/24 gw 10.207.39.1 dev sw1
Note: To add additional subnets repeat step 2 for each subnet.
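If the net-tools route command is unavailable, the same routes can be expressed with iproute2. A sketch of the equivalent commands:
# on guardian
sudo ip route add 192.220.39.0/24 via 10.207.39.5 dev sw1
# on colossus
sudo ip route add 192.220.39.0/24 via 10.207.39.1 dev sw1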
Note on MTU:
Also, you have to allow for GRE encapsulation in the MTU if you want to ssh over these tunnels.
Therefore, in the above example, for the main GRE tunnel connecting the hosts, the MTU needs to be set to 1420 to allow 80 bytes for the GRE overhead.
The MTU on the LXC container virtual interfaces on both the sw1 and sw4 switches needs to be set to 1420 in the LXC container config files.
Note that the MTU on the openvswitches sw1 and sw4 should automatically adjust to the MTU of the LXC interfaces as long as ALL LXC virtual interfaces are set to the new lower MTU value, so explicitly setting the MTU on sw1 and sw4 themselves should not be necessary.
If you still run into issues with SSH over the tunnels, but ping works across hosts and containers, then re-check all MTU settings on the virtual interfaces and openvswitches.
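As a sketch, the MTU line in each LXC container config would look like this (the key name assumes the classic LXC 1.x config format; newer LXC releases use lxc.net.0.mtu instead):
# in each container's config file
lxc.network.mtu = 1420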

Docker connect to mocked service on host port

I am using Docker to run my web app on my local machine, and I have created a mocked web service using SoapUI on the host machine.
The mocked service is accessible through localhost:8099 and 127.0.0.1:8099 (verified using telnet); however, I am unable to access it from the running Docker container.
I have read some articles about discovering the host IP address through
ip addr show docker0
with results:
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:e3:36:43:5b brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:e3ff:fe36:435b/64 scope link
valid_lft forever preferred_lft forever
When I ping 172.17.0.1 from the Docker container I get responses just fine, but when my web app tries to call the mocked web service, I get No route to host.
I have also tried modifying iptables with iptables -A INPUT -i docker0 -j ACCEPT, but with no success.
Is there any other setting that I am missing?
Any help is appreciated.
Thanks, shimon
If I have read your question right, your local and host machines are not the same machine. In that case you won't be able to access your mocked service on the host machine using localhost (unless you have set up a tunnel on localhost:8099), as localhost will resolve to your local IP on your local machine.
What you need to do is make sure both machines can talk to each other, and use the host machine's IP instead of localhost.
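If they are in fact the same machine, a quick test from inside the container is to call the service through the docker0 gateway address shown in the question; a sketch, assuming SoapUI is bound to all interfaces (0.0.0.0) rather than only 127.0.0.1:
# from inside the container; 172.17.0.1 is the docker0 address above
curl http://172.17.0.1:8099/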

CoovaChilli fails to redirect

I'm trying to set up a captive portal with CoovaChilli. So far I can get my router to distribute IP addresses from the 10.1.0.0/24 subnet, but when I attempt to go to www.youtube.com the browser simply hangs. I can access the captive portal only by manually entering 10.1.0.1. The related files are below.
cat /etc/chilli/config
HS_LANIF=eth1 # Subscriber Interface for client devices
HS_NETWORK=10.1.0.0 # HotSpot Network (must include HS_UAMLISTEN)
HS_NETMASK=255.255.0.0 # HotSpot Network Netmask
HS_UAMLISTEN=10.1.0.1 # HotSpot IP Address (on subscriber network)
HS_UAMPORT=3990 # HotSpot UAM Port (on subscriber network)
HS_UAMUIPORT=4990 # HotSpot UAM "UI" Port (on subscriber network, for embedded portal)
HS_NASID=localhost
HS_RADIUS=localhost
HS_RADIUS2=localhost
HS_RADSECRET=testing123 # Set to be your RADIUS shared secret
HS_UAMSECRET=greatsecret # Set to be your UAM secret
HS_UAMALIASNAME=chilli
HS_SSID="GreenEarth"
HS_NASIP=127.0.0.1 # To explicitly set NAS-IP-Address
HS_UAMSERVER=$HS_UAMLISTEN
HS_UAMFORMAT=http://\$HS_UAMLISTEN/cake2/rd_cake/dynamic_details/chilli_browser_detect/
HS_MACAUTH=on # To turn on MAC Authentication
HS_TCP_PORTS="80 23 8000"
HS_MODE=hotspot
HS_TYPE=chillispot
HS_WWWDIR=/etc/chilli/www
HS_WWWBIN=/etc/chilli/wwwsh
HS_PROVIDER=Coova
HS_PROVIDER_LINK=http://www.coova.org/
HS_LOC_NAME="My HotSpot" # WISPr Location Name and used in portal
HS_COAPORT=3799
cat /etc/network/interfaces
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address 10.1.0.0
netmask 255.255.255.0
cat /etc/chilli/ipup.sh
iptables -I POSTROUTING -t nat -o $HS_WANIF -j MASQUERADE
cat /proc/sys/net/ipv4/ip_forward
1
Any help would be greatly appreciated. Thanks.
You need to enable HTTPS redirect in the CoovaChilli config file:
HS_REDIRSSL=on
HS_SSLKEYFILE=/etc/chilli/key.pem
HS_SSLCERTFILE=/etc/chilli/cert.pem
To generate certificate files, see How to create a self-signed certificate with openssl?.
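A sketch of generating a self-signed key/certificate pair with openssl, using the file names from the config above (adjust -days and the subject prompts as needed):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /etc/chilli/key.pem -out /etc/chilli/cert.pem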
You'll also need CoovaChilli built with SSL support enabled.
With this configuration your users should be redirected to the login page when entering HTTPS URLs (like the YouTube one).
BUT they will get a browser warning, because the certificate won't be the one the browser expects...

Java application cannot get IP address of the host in docker container with static IP

I used OpenStack for a while to manage my applications. Now I want to move them to Docker, as one container per app, because Docker is more lightweight and efficient.
The problem is that almost everything related to networking goes wrong at runtime.
In my design, every application container should have a static IP address, and I can use the hosts file to locate the container network.
Here is my implementation (the bash file name is docker_addnet.sh):
#!/bin/bash
# Usage:
#   docker_addnet.sh container_name IP
# interface name: veth_(containername)
# gateway: 172.17.42.1
if [ $# -ne 2 ]; then
    echo -e "ERROR! Wrong args"
    exit 1
fi
container_netmask=16
container_gw=172.17.42.1
container_name=$1
bridge_if=veth_$(echo ${container_name} | cut -c 1-10)
container_ip=$2/${container_netmask}
container_id=$(docker ps | grep $1 | awk '{print $1}')  # currently unused
pid=$(docker inspect -f '{{.State.Pid}}' ${container_name})
echo "Container: " $container_name "pid: " $pid
# expose the container's network namespace to "ip netns"
mkdir -p /var/run/netns
ln -s /proc/$pid/ns/net /var/run/netns/$pid
# remove any stale interface from a previous run, then create a veth pair
brctl delif docker0 $bridge_if
ip link add A type veth peer name B
# host end: rename, attach to the docker0 bridge, bring it up
ip link set A name $bridge_if
brctl addif docker0 $bridge_if
ip link set $bridge_if up
# container end: move into the namespace, rename to eth0, configure
ip link set B netns $pid
ip netns exec $pid ip link set dev B name eth0
ip netns exec $pid ip link set eth0 up
ip netns exec $pid ip addr add $container_ip dev eth0
ip netns exec $pid ip route add default via $container_gw
The script sets a static IP address for the container; you must start the container with --net=none so that the network can be set up manually.
You can now start a container by
sudo docker run --rm -it --name repl --dns=8.8.8.8 --net=none clojure bash
and set up the network with:
sudo zsh docker_addnet.sh repl 172.17.15.1
In the container's bash, you can see the IP address via ip addr; the output looks something like:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
67: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 2e:7b:7e:5a:b5:d6 brd ff:ff:ff:ff:ff:ff
inet 172.17.15.1/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::2c7b:7eff:fe5a:b5d6/64 scope link
valid_lft forever preferred_lft forever
So far so good.
Let's try to get the container's host IP address from a Clojure REPL. First start the REPL:
lein repl
next eval the code below
(. java.net.InetAddress getLocalHost)
The Clojure code is equivalent to:
System.out.println(Inet4Address.getLocalHost());
What you get is an exception
UnknownHostException 5a8efbf89c79: Name or service not known
java.net.Inet6AddressImpl.lookupAllHostAddr (Inet6AddressImpl.java:-2)
Another odd thing is that the RMI server cannot get the client IP address via RemoteServer.getClientHost().
So what may be causing this issue? I remember that Java sometimes picks up the wrong network configuration, but I don't know the reason.
The documentation for InetAddress.getLocalHost() says:
Returns the address of the local host. This is achieved by retrieving the name of the host from the system, then resolving that name into an InetAddress.
Since you didn't take any steps to make your static IP address resolvable inside the container, it doesn't work.
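The quickest way to confirm this is to make the container's hostname resolvable; a sketch to run inside the container, using the static address from the question:
# map the container's hostname to the static address, then retry getLocalHost()
echo "172.17.15.1  $(hostname)" >> /etc/hosts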
To find the address in Java without going via the hostname, you could enumerate all network interfaces via NetworkInterface.getNetworkInterfaces(), then iterate over each interface, inspecting each address to find the one you want. Example code at:
Getting the IP address of the current machine using Java
Another option would be to use Docker's --add-host and --hostname options on the docker run command to put in a mapping for the address you want; then getLocalHost() should work as you expect.
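A sketch of that invocation, reusing the names and static address from the question; mapping the hostname repl to 172.17.15.1 is the assumption here:
# the hosts entry and hostname are set by docker; the script above
# still assigns the address itself
sudo docker run --rm -it --name repl --hostname repl \
  --add-host repl:172.17.15.1 --dns=8.8.8.8 --net=none clojure bash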
In my design, every application container should have a static IP
address and I can use hosts file to locate the container network.
Why not rethink your original design? Let's start with two obvious options:
Container linking
Add a service discovery component
Container linking
This approach is described in the Docker documentation.
https://docs.docker.com/userguide/dockerlinks/
When launching a container you specify that it is linked to another. This results in environment variables being injected into the linked container containing the IP address and port number details of the collaborating container.
Your application's configuration stops using hard coded IP addresses and instead uses soft code references that are set at run-time.
Currently Docker container linking is limited to a single host, but I expect this concept to continue evolving toward multi-host implementations. Worst case, you could inject environment variables into your container at run-time.
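A minimal sketch of the linking flow, with hypothetical image and container names; the injected variable names follow Docker's documented ALIAS_PORT_<port>_<proto> pattern:
# hypothetical: a database container and a web container linked to it
sudo docker run -d --name db postgres
sudo docker run -d --name web --link db:db mywebapp
# inside "web", Docker injects env vars such as DB_PORT_5432_TCP_ADDR
# and adds a "db" entry to /etc/hosts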
Service discovery
This is a common approach taken by large distributed applications. Example implementations of such systems include:
zookeeper
etcd
consul
..
With such a system in place, your back-end service components (e.g. a database) would register themselves on startup, and client processes would dynamically discover their location at run-time. This form of decoupled operation is very Docker-friendly and scales very well.
