How to deal with LoRa interference

I'm trying to test my LoRaWAN network and I'm having some very disappointing results.
I have an Ideetron Lorank8v1 gateway, which can work over distances of up to 15 km.
In addition, I used the Ideetron Nexus Board as a microcontroller and a Nexus Demoboard on which the sensors are mounted.
I'm sending one packet per minute with my humidity and temperature measurements.
Beyond about 200 m, my packets are no longer received by the gateway.
I've run the packet-logger software on my gateway, and every packet that is captured fails the CRC_CODE verification, which I think is due to interference.
I thought I might have interference from LTE/3G/4G networks, but those networks do not operate in the 868 MHz band in Greece.
My gateway has 8 simultaneous channels and my node uses SF9 with a 125 kHz bandwidth. When I changed these parameters, nothing changed. I'm using it in an urban area.
What should I do?
Maybe I have to configure the data rate, the spreading factor, and the frequencies at which my node transmits?
Are there any better ideas out there?
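(For context on that last point: the spreading factor is the main knob trading airtime for receiver sensitivity. Below is a rough Python sketch of the Semtech SX127x time-on-air formula, assuming a 20-byte payload, coding rate 4/5, explicit header, and CRC on; each SF step roughly doubles how long a packet spends on air, which also matters for the 868 MHz duty-cycle limits.)

import math

def lora_airtime_ms(payload_bytes, sf, bw_hz=125_000, cr=1, preamble_syms=8):
    # Time-on-air per the Semtech SX127x datasheet (explicit header, CRC on)
    de = 1 if (sf >= 11 and bw_hz == 125_000) else 0  # low-data-rate optimization
    t_sym = (2 ** sf) / bw_hz                          # symbol duration, seconds
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16) / (4 * (sf - 2 * de)))
        * (cr + 4), 0)
    return (preamble_syms + 4.25 + n_payload) * t_sym * 1000

for sf in range(7, 13):
    print(f"SF{sf}: {lora_airtime_ms(20, sf):7.1f} ms")  # assumed 20-byte payload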


How much impact does the network delay have on IoT Edge throughput?

We have a customer who has deployed a number of IoT Edge transparent gateways and routes data from a large number of leaf devices to the cloud.
Recently they noticed that on some of the edge devices the output (edge to IoT Hub) cannot keep up with the input, which is causing severe latency for their messages.
Here are the built-in metrics reported by edgeHub on the device named 8B:

edgehub_queue_length: 981061

edgehub_message_send_duration_seconds (~110 ms typical):
  quantile 0.1:  0.0632608 s
  quantile 0.5:  0.1136008 s
  quantile 0.9:  0.127605 s
  quantile 0.99: 0.2449048 s

edgehub_message_process_duration_seconds: 0.5-2.0 ms
We would like to clarify two questions:
What is the recommended network latency for an IoT Edge gateway?
Are there any other methods we can use to improve the output throughput of edgeHub?
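(For reference, a minimal Python sketch of how one can watch whether the queue is draining or growing, assuming edgeHub's built-in Prometheus metrics endpoint is exposed on its default port 9600:)

import re
import time
import requests

METRICS_URL = "http://localhost:9600/metrics"  # assumption: default built-in metrics endpoint

def total_queue_length():
    text = requests.get(METRICS_URL, timeout=5).text
    # Sum edgehub_queue_length across all endpoints/priorities
    return sum(float(m.group(1)) for m in
               re.finditer(r'^edgehub_queue_length\{[^}]*\}\s+(\S+)', text, re.M))

q0, t0 = total_queue_length(), time.time()
time.sleep(60)
q1, t1 = total_queue_length(), time.time()
rate = (q1 - q0) / (t1 - t0)
print(f"queue growing at {rate:.1f} msg/s" if rate > 0
      else f"queue draining at {-rate:.1f} msg/s")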

WebRTC maximum connections, only using data channels

My question is: if I plan to use WebRTC with a P2P architecture, but only use its data channel to send a constant stream of small text messages, what is the maximum number of peer connections that a single peer can support? (I know this is heavily going to depend on the device, network, etc. of each peer, but could somebody give me a ballpark estimate?)
Edit: by constant text messages I mean around 30 per second.
One limitation might be the maximum number of available ports in the device's OS. For example, Ubuntu has about 65k available ports. So, supposing that you have enough memory, CPU, and network bandwidth, and one port per data channel, you have roughly 65k connections.
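Bandwidth is likely to bind well before the port limit, though. Here is a back-of-envelope sketch in Python; the 30 msg/s figure is from the question, while the message and per-message overhead sizes are rough assumptions for SCTP/DTLS/UDP/IP framing:

MSGS_PER_SEC = 30        # per peer, from the question
PAYLOAD_BYTES = 100      # assumed small text message
OVERHEAD_BYTES = 80      # rough guess: SCTP + DTLS + UDP + IP per message

per_peer_bps = MSGS_PER_SEC * (PAYLOAD_BYTES + OVERHEAD_BYTES) * 8

for peers in (100, 1_000, 10_000, 65_000):
    print(f"{peers:>6} peers: {per_peer_bps * peers / 1e6:8.1f} Mbit/s")

At ~65k peers that is roughly 2.8 Gbit/s under these assumptions, and in practice per-connection memory and the CPU cost of DTLS encryption will cap things far earlier.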

Using consumer cellphones to build a mesh network for IoT devices?

I have been looking into LoRaWAN for a low-cost, waterproof asset tracker I am looking at building.
AFAIK, the primary benefits of LoRaWAN over, say, LTE-M or cellular are no connectivity costs and potentially lower power consumption.
What I'm wondering is: why can't we use our own cellphones as the "base station" that the IoT device talks to? We can do this with Bluetooth and WiFi, so why not cellular? Is it the LTE protocol that prevents it? Physics? What am I missing?
There are quite a few architectural reasons why peer-to-peer LTE isn't feasible, but the largest is probably the fact that in LTE the uplink and downlink use different modulation techniques.
In the downlink (the connection from the base stations (eNodeBs) to the User Equipment (our mobile phones)), Orthogonal Frequency Division Multiple Access (OFDMA) is used; this means the phone listens on the RF interface for the OFDMA signal.
This works well: OFDMA is a great way of encoding data onto the air interface, but it has a very high peak-to-average power ratio. This means that if the UEs used OFDMA in the uplink (from the UE to the eNodeB), they'd have awful battery life.
Instead, in the uplink LTE uses Single Carrier Frequency Division Multiple Access (SC-FDMA), which is much more power efficient and allows you to talk all day, so the eNodeBs listen on their RF interface for SC-FDMA modulated traffic.
This means our UEs (mobile phones) use one type of modulation to send and a different modulation scheme to receive, so they can't talk directly to one another: they can't send OFDMA modulated data, only receive it, and vice versa.
Some more reading on OFDMA & SC-FDMA.
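To make the PAPR point concrete, here is a small numpy sketch. This is not a real LTE waveform, just random QPSK on an assumed 1024 subcarriers, and SC-FDMA is more involved than plain single-carrier QPSK, but the envelope argument is the same:

import numpy as np

rng = np.random.default_rng(0)
n_sub = 1024  # OFDM subcarriers (assumption, not an exact LTE configuration)

# Random QPSK symbols, one per subcarrier; each has unit magnitude
symbols = (rng.choice([-1.0, 1.0], n_sub)
           + 1j * rng.choice([-1.0, 1.0], n_sub)) / np.sqrt(2)

# OFDM time-domain signal: the superposition of all subcarriers (inverse FFT)
ofdm_signal = np.fft.ifft(symbols)

# Single-carrier: the same symbols transmitted one after another
sc_signal = symbols

def papr_db(x):
    # Peak-to-average power ratio in dB
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

print(f"OFDM PAPR:           {papr_db(ofdm_signal):5.1f} dB")  # typically ~10 dB
print(f"Single-carrier PAPR: {papr_db(sc_signal):5.1f} dB")    # 0 dB: constant envelope

The ~10 dB gap is why an OFDMA transmitter needs a power amplifier with far more headroom, which is exactly the battery-life problem described above.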
The LTE relay interface introduced as part of Release 10 allows the deployment of relay nodes (a kind of low-cost eNB) that are fixed and that use in-band LTE to extend the coverage of standard eNodeBs by one hop, improve signal quality, and increase network capacity. Relays can be placed so that one long hop is converted into two shorter hops.
However, the approach of using the UE itself as a relay has many challenges, as it burdens the UE with functional changes across layers (MAC, PHY, RRC, NAS): it would have to take on additional functionality from relay nodes/eNBs, ranging from lower-layer signalling, coordination, and mobility to forwarding. There might also be additional power consumption and antenna changes needed to support this, all of which adds to the cost of the UE.

TensorFlow scalability

I am using TensorFlow to train a DNN. My network structure is very simple, and each minibatch takes about 50 ms with only one parameter server and one worker. To process a huge number of samples, I am using distributed ASGD training. However, I found that increasing the worker count does not increase throughput: for example, 40 machines achieve 1.5 million samples per second, and after doubling the parameter server and worker machine counts, the cluster still processes only 1.5 million samples per second, or even fewer. The reason is that each step takes much longer when the cluster is large. Does TensorFlow scale well, and is there any advice for speeding up training?
The general approach to solving these problems is to find where the bottlenecks are. You could be hitting a bottleneck in software or in your hardware.
A general example of doing the math: suppose you have 250M parameters and each backward pass takes 1 second. At 4 bytes per float32 parameter, that is 1 GB of gradients, so each worker will be sending 1 GB/sec of data and receiving 1 GB/sec of data. If you have 40 machines, that's 80 GB/sec of transfer between workers and parameter servers. Suppose the parameter server machines only have 1 GB/sec full-duplex NIC cards. This means that if you have fewer than 40 parameter server shards, your NIC card speed will be the bottleneck.
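The same arithmetic as a runnable Python sketch (all the numbers are the assumptions from the example above):

params = 250e6          # model parameters
bytes_per_param = 4     # float32
passes_per_sec = 1.0    # one backward pass per second per worker
workers = 40
nic_gb_per_sec = 1.0    # full-duplex NIC on each parameter server shard

grad_gb = params * bytes_per_param / 1e9            # ~1 GB of gradients per pass
total_gb = grad_gb * passes_per_sec * workers * 2   # 40 GB/s each way = 80 GB/s
min_shards = grad_gb * passes_per_sec * workers / nic_gb_per_sec

print(f"gradient size:    {grad_gb:.0f} GB")
print(f"total transfer:   {total_gb:.0f} GB/s")
print(f"PS shards needed: {min_shards:.0f}")  # fewer than this and NICs are the bottleneck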
After ruling that out, you should consider interconnect speed. You may have N network cards in your cluster, but the cluster most likely can't handle all network cards sending data to all other network cards. Can your cluster handle 80GB/sec of data flowing between 80 machines? Google designs their own network hardware to handle their interconnect demands, so this is an important problem constraint.
Once you've checked that your network hardware can handle the load, I would check software. For example, suppose you have a single worker: how does "time to send" scale with the number of parameter server shards? If the scaling is strongly sublinear, this suggests a bottleneck, perhaps some inefficient scheduling of threads or some such.
As an example of finding and fixing a software bottleneck, see the "grpc RecvTensor is slow" issue. That issue involved the gRPC layer becoming inefficient when sending messages larger than 100 MB. The fix landed in an upstream gRPC release but has not yet been integrated into a TensorFlow release, so the current work-around is to break messages into pieces of 100 MB or smaller.
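As a rough illustration of that work-around (a sketch, not TensorFlow's actual internal code), splitting a flat buffer into sub-100 MB pieces looks like this:

import numpy as np

MAX_BYTES = 100 * 2**20  # stay under the slow >100 MB gRPC path

def chunk_array(arr, max_bytes=MAX_BYTES):
    # Split a flat array into views no larger than max_bytes each
    flat = arr.ravel()
    step = max(1, max_bytes // flat.itemsize)
    return [flat[i:i + step] for i in range(0, flat.size, step)]

params = np.zeros(250_000_000, dtype=np.float32)  # ~1 GB, as in the example above
chunks = chunk_array(params)
print(f"{len(chunks)} chunks of <= {MAX_BYTES / 2**20:.0f} MB")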
The general approach to finding these is to write lots of benchmarks to validate your assumptions about the speed.
Here are some examples:
a benchmark of sending messages between workers (local)
a sharded parameter server benchmark (local)
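Those benchmark links aren't reproduced here, but the pattern behind them is simple. A minimal timing harness along these lines (warmup runs, then the median over repeats) is usually enough to check how "time to send" scales:

import time
import statistics

def benchmark(fn, warmup=3, iters=20):
    # Median wall-clock time of fn(), after warmup runs to let caches settle
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        fn()
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Hypothetical usage: a 100 MB in-memory copy as a stand-in for one "send"
payload = bytearray(100 * 2**20)
print(f"{benchmark(lambda: bytes(payload)) * 1000:.1f} ms")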

3G/4G peer-to-peer network for long-distance communication in a remote area?

I'm working on an engineering project where I want a go-kart to maintain a direct connection with a base station. The base and go-kart can be separated by about half a mile (with lots of obstacles in between), which is too far for WiFi.
I'm thinking about using 3G/4G to directly connect the two. Does anyone have any resources or ideas that might help?
Or, alternatively, a better way to connect them? I'm just trying to send some sensor data (pretty low bandwidth) in real-time.
The biggest problem you face is the radio spectrum that you are allowed to use. All 3G/4G spectrum is licensed to some firm, and they get really unhappy (e.g., have you hunted down and fined) when you transmit in their space.
I did find DASH7, which
is an open source wireless sensor networking standard … which operates in the 433 MHz unlicensed ISM band. DASH7 provides multi-year battery life, range of up to 2 km, indoor location with 1 meter accuracy, low latency for connecting with moving things, a very small open source protocol stack …
with a parts cost of around US$10. This sounds like it satisfies your requirements and keeps the local constabulary from bothering you.
You could maybe use SMS between a modem on the kart and a mobile phone or modem at the base.
A mobile data connection, like a telephone call, isn't possible directly between the two; you have to make a data connection from the kart to a server in your operator's core network, identified by the APN. Then you can access IP addresses as with a normal internet connection, so the base computer would have to be a web server.
