LoRaWAN planning - do M gateways suffice for N sensors at 1 msg/hr?

We have installed N LoRaWAN sensors and M gateways. The sensors require an acknowledgment from the network server for each message. Communication takes place every hour at SF9 on the 915 MHz band. If the acknowledgment is not received, a sensor switches to a degraded mode and transmits K times more often, in order to reach a gateway.
To try to solve this problem, we have considered increasing the density of the gateway network.
How many gateways should we add?
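For a first-order answer it helps to put rough numbers on the uplink load and on the ACK load per gateway. The sketch below is a back-of-the-envelope model only: N, M, K, the airtimes and the pure-ALOHA collision formula are all assumptions to be replaced with your real values (and ideally a proper simulation).

    import math

    # --- assumptions: replace with your real values ---
    N = 1000           # number of sensors (placeholder)
    M = 3              # gateways currently deployed (placeholder)
    K = 4              # extra transmissions per message in degraded mode (placeholder)
    channels = 8       # usable 125 kHz uplink channels in one US915 sub-band
    t_uplink = 0.21    # s, rough airtime of a ~20-byte SF9/125 kHz uplink (assumption)
    t_ack = 0.15       # s, rough airtime of one downlink ACK (assumption)
    msgs_per_hour = 1

    # Worst case: every sensor is in degraded mode and sends 1 + K uplinks per message.
    uplinks_per_s = N * msgs_per_hour * (1 + K) / 3600.0

    # Pure-ALOHA collision probability on one channel.
    G = uplinks_per_s * t_uplink / channels        # normalized per-channel load
    p_collision = 1 - math.exp(-2 * G)

    # Downlink side: a gateway cannot receive while it transmits an ACK,
    # so ACK airtime spent per second is receive time lost.
    ack_duty_per_gw = uplinks_per_s * t_ack / M

    print(f"per-channel load G        = {G:.3f}")
    print(f"ALOHA collision estimate  = {p_collision:.1%}")
    print(f"ACK TX duty per gateway   = {ack_duty_per_gw:.1%}")

If the ACK duty per gateway approaches regulatory or practical transmit limits, or the collision estimate is already high with K = 0, adding gateways alone will not fix the problem.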

Related

How do LoRaWAN devices work in confirmed mode?

When the device is in confirmed mode, it waits for a downlink confirmation (ACK) from the network after each uplink. If the confirmation is not received, the device repeats the uplink (up to a maximum of 8 times, increasing the SF of the uplink if it had been lowered before) until it receives a confirmation. Sending all 8 repetitions can take about 30 seconds.
If the device does not see a confirmation and needs to send another uplink (e.g. an alarm or a new periodic measurement), it sends the new uplink and forgets the previous one. To operate in confirmed mode, the device must be declared in confirmed (ACK) mode on the network platform. You can activate it via the IoT configurator in the network parameters.
Be aware that this mode consumes much more battery power than unconfirmed operation, even more so if the network quality is poor.
If the transmitter loses a lot of frames, it is better to reposition the transmitter (if possible) or the gateway (if possible) to improve the link rather than to activate the ACK, which will drain the battery faster than "expected" depending on the network conditions.
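To make the battery impact concrete, here is a minimal estimate of the charge used by one confirmed uplink under the retry behaviour described above. The currents, airtimes and RX-window duration are placeholder values for a typical LoRa radio, not figures from any specific device.

    # Rough battery-cost comparison for the retry behaviour described above.
    # All numbers (TX/RX current, airtimes, retry count) are illustrative assumptions.

    tx_current_mA = 120       # transmit current of a typical LoRa radio (assumption)
    t_uplink_s    = 0.21      # SF9 airtime of one uplink (assumption)
    rx_window_s   = 0.16      # time spent listening for the ACK (assumption)
    rx_current_mA = 11        # receive current (assumption)
    max_retries   = 8         # per the answer: up to 8 repetitions

    def charge_mAh(retries):
        """Charge used by one confirmed uplink that needs `retries` transmissions."""
        tx = retries * t_uplink_s * tx_current_mA
        rx = retries * rx_window_s * rx_current_mA
        return (tx + rx) / 3600.0

    good_link = charge_mAh(1)            # ACK received on the first try
    bad_link  = charge_mAh(max_retries)  # no ACK, all 8 repetitions used

    print(f"good link : {good_link*1000:.3f} µAh per message")
    print(f"bad link  : {bad_link*1000:.3f} µAh per message "
          f"({bad_link/good_link:.0f}x the charge of a good link)")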

Does iOS delay peripheralManager:didReceiveWriteRequests: and peripheralManager:didReceiveReadRequest:?

I use an iPhone as a peripheral to communicate with a microcontroller (the BLE chip in question is the BGM113). After connecting, the MCU sends a couple of read and write requests for characteristics serially. Each request takes only a few ms on the MCU. On the iPhone side, responding to each request also takes only a few ms in the relevant methods (peripheralManager:didReceiveWriteRequests: and peripheralManager:didReceiveReadRequest:).
Still, I see roughly 500 ms of delay between each request. I have a support request open with the Bluetooth chip manufacturer to clarify, but my gut feeling tells me that the fruity company is to blame...
Can anyone confirm such delays when reading or writing characteristics?
(More details: all characteristics are in the same service, reads and writes may happen on the same characteristic serially, and there are several characteristics that I operate on.)
Your delay will be between one and two times the connection interval, so set the connection interval to match your maximum delay requirement. Note, though, that the radio's energy consumption is proportional to the inverse of the connection interval: a shorter interval means more connection events and therefore more power.
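As a quick sanity check on that rule of thumb, the snippet below simply tabulates the expected per-request delay for a few candidate connection intervals (the interval values are illustrative; read back the parameters actually granted on the BGM113 side rather than assuming your request was accepted):

    # Expected request latency versus negotiated connection interval, per the
    # rule of thumb above: each GATT request/response waits for a connection
    # event, so latency falls between 1x and 2x the interval.

    def latency_bounds_ms(conn_interval_ms):
        return conn_interval_ms, 2 * conn_interval_ms

    for interval in (15, 30, 100, 250, 500):   # candidate intervals in ms (illustrative)
        lo, hi = latency_bounds_ms(interval)
        print(f"interval {interval:4d} ms -> per-request delay {lo:4d}-{hi:4d} ms")

A consistent ~500 ms per request would be in line with a connection interval of roughly 250-500 ms, which suggests the parameter negotiation, rather than the GATT handling code, is where to look.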

Wifi networking delay and amount of network traffic

In CSMA, a sender postpones its wireless transmission while other devices are transmitting. That is why even UDP datagrams are delayed on a WiFi router serving many mobile devices.
I am interested in finding a way to shorten this delay. With the hypothesis "Even if the other devices do not lower their network usage, device A will get lower round-trip latency if A itself uses less of the network", I ran the experiment described below. Fortunately, the test result seems to support my hypothesis.
My guess is that, since the other interfering devices keep up their network usage, the CSMA backoff delay itself should not decline, but the probability of colliding with them might decline, and so the overall delay is affected. But I cannot explain it any further; I am not a hardware engineer.
Is my hypothesis right? If so, what is the reason?
Devices: a WiFi router, device A (wireless laptop), and device B (wireless laptop)
Device B: exchanges 10 MB/s of data with another wired computer
Device A: sends ping-pong UDP packets to another wired computer
Device A:
<Server>
When a UDP datagram D is received from S on port 12345:
Send D to S.
<Client>
For every 1 millisecond:
Send 20 bytes of data to Server:12345.
Device B: just uses iperf for bulk traffic generation.
Test runs: 5 runs of * minutes each
The result:
Device A sends 1000 times per second: ping-pong latency is 130 ms
Device A sends 300 times per second: ping-pong latency is 40 ms
Remark: High deviation.
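For reproducibility, here is a runnable Python version of the ping-pong setup described above (same port 12345 and 20-byte payload; the send rate and run duration are parameters). It embeds a timestamp in each datagram so the client can compute the round-trip latency, which the pseudocode leaves implicit.

    import socket, struct, threading, time, statistics, sys

    PORT = 12345

    def server():
        """Device A <Server>: echo every received datagram back to its sender."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PORT))
        while True:
            data, addr = sock.recvfrom(2048)
            sock.sendto(data, addr)

    def client(server_ip, rate_hz=1000, duration_s=60):
        """Device A <Client>: send 20-byte datagrams at a fixed rate, measure RTT."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", 0))                 # grab an ephemeral port for the replies
        rtts = []

        def receiver():
            while True:
                data, _ = sock.recvfrom(2048)
                sent_at, = struct.unpack("d", data[:8])
                rtts.append((time.perf_counter() - sent_at) * 1000.0)

        threading.Thread(target=receiver, daemon=True).start()

        interval, deadline, sent = 1.0 / rate_hz, time.perf_counter() + duration_s, 0
        while time.perf_counter() < deadline:
            # 8-byte timestamp padded to the 20-byte payload used in the experiment
            payload = struct.pack("d", time.perf_counter()).ljust(20, b"\0")
            sock.sendto(payload, (server_ip, PORT))
            sent += 1
            time.sleep(interval)

        time.sleep(0.5)                    # let late replies arrive
        if rtts:
            print(f"sent {sent}, received {len(rtts)}, "
                  f"median RTT {statistics.median(rtts):.1f} ms")
        else:
            print(f"sent {sent}, received 0 replies")

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])

Saved as, say, pingpong.py, run "python pingpong.py server" on the wired computer and "python pingpong.py client <server-ip>" on device A; changing rate_hz reproduces the 1000 vs 300 packets-per-second comparison.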

How does Linux kernel wifi driver determine when a connection is lost?

My understanding is that current WiFi drivers use a rate-control algorithm to choose a data rate from a small set of predetermined values when sending packets over the WiFi medium. Different algorithms exist for this purpose. But how does the WiFi driver decide that the connection is lost and shut the connection down altogether? Which part of the code should I read in an open-source WiFi driver such as MadWifi and the like?
The WiFi driver for your hardware, which runs in Linux, communicates with the WiFi chip, which itself runs fairly complex firmware. The interface between the driver and the firmware is hardware specific. In some hardware the detection of connection-loss events is done completely by the firmware and the driver only gets a "disconnected" event, while in others the driver is also involved.
Regardless of who does what, disconnection usually occurs for one of these reasons:
Receiving a DEAUTH frame from the AP.
Detecting too many missed beacons. Beacons are WiFi frames sent periodically by the AP (for most APs, about every 100 ms). If you get too far from the AP, or the AP has just been powered off, you stop seeing beacons in the air and you will usually signal a disconnection or try to roam to a different AP.
Too many failures on Tx of packets (i.e. not receiving ACK frames for too much of the traffic).
This usually indicates that you have gone too far from the AP. It could be that you can still "hear" the AP but it can no longer hear you. In this case it also makes sense to signal a disconnection.
For example, you can look at the TI WiFi driver in the Linux kernel, drivers/net/wireless/ti/wlcore/events.c, and the function wlcore_event_beacon_loss().
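The logic amounts to roughly the following. This is an illustrative Python sketch of the three cases above, not real driver code; the thresholds are made-up values, and the real ones live in the firmware or in driver code such as wlcore_event_beacon_loss().

    # Illustrative sketch of the disconnection conditions described above.
    BEACON_LOSS_THRESHOLD = 10   # consecutive missed beacons (value is an assumption)
    TX_FAIL_THRESHOLD     = 25   # consecutive unacked frames (value is an assumption)

    class LinkMonitor:
        def __init__(self):
            self.missed_beacons = 0
            self.tx_failures = 0

        def on_beacon(self):
            self.missed_beacons = 0            # heard the AP, reset the counter

        def on_beacon_interval_elapsed(self):
            self.missed_beacons += 1           # expected a beacon, none arrived
            return self._check()

        def on_tx_result(self, acked):
            self.tx_failures = 0 if acked else self.tx_failures + 1
            return self._check()

        def on_deauth_from_ap(self):
            return "disconnect"                # AP told us to leave: disconnect immediately

        def _check(self):
            if self.missed_beacons >= BEACON_LOSS_THRESHOLD:
                return "disconnect"            # we cannot hear the AP anymore
            if self.tx_failures >= TX_FAIL_THRESHOLD:
                return "disconnect"            # the AP cannot hear us anymore
            return "connected"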
In the cfg80211 architecture, assume we are in station mode.
The driver calls the kernel API cfg80211_send_disassoc() when a disassoc/deauth frame is received; this function notifies the corresponding application (e.g. wpa_supplicant) of a disconnect event.
On the other hand, when we decide to disconnect from the AP, the application (e.g. wpa_supplicant) issues the request through nl80211/cfg80211, which triggers the corresponding driver operation to finish the disconnection; the driver then reports completion with the kernel API cfg80211_disconnected().

WLAN triangulation of mobile devices from the WLAN side, as opposed to a cell phone app

I'm trying to measure the signal strength of mobile devices, either from existing WLAN routers or by building directional antennas. I want to see which routers pick up the top 3-4 signal strengths from a specific mobile device, and then use triangulation to work out its location. Any ideas on the best way to do this?
I don't know whether such a router exists, but I can offer an alternative and convenient way. The wireless channel is symmetric in both directions; that is, if the router transmits at 20 dBm and the mobile device receives that signal at -30 dBm, then the received signal strength at the router would also be -30 dBm if the mobile phone transmits at 20 dBm (given that the environment does not change much). So simply install a WiFi analyzer app on your Android phone and record the signal strength of your usual routers.
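Once you can collect per-router RSSI for the target device (on whichever side you measure it, given the symmetry argument above), the positioning step itself is straightforward. Below is a minimal sketch that converts RSSI to distance with a log-distance path-loss model and then does a brute-force fit. The router coordinates, the RSSI values, the 1 m reference power tx_power_dbm and the path-loss exponent are all assumptions that need on-site calibration.

    import math
    from itertools import product

    def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exponent=2.7):
        """Log-distance path-loss model: RSSI = P0 - 10*n*log10(d)."""
        return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

    def trilaterate(routers, rssi):
        """Grid search for the point whose distances best match the RSSI-derived
        distances. routers: {name: (x, y)} in metres, rssi: {name: dBm}."""
        dist = {name: rssi_to_distance(r) for name, r in rssi.items()}
        best, best_err = None, float("inf")
        for x, y in product(range(0, 51), repeat=2):     # 50 m x 50 m grid, 1 m step
            err = sum((math.hypot(x - rx, y - ry) - dist[name]) ** 2
                      for name, (rx, ry) in routers.items())
            if err < best_err:
                best, best_err = (x, y), err
        return best

    routers = {"ap1": (0, 0), "ap2": (40, 0), "ap3": (20, 35)}   # known positions (m)
    rssi    = {"ap1": -62, "ap2": -70, "ap3": -55}               # measured dBm (example)
    print(trilaterate(routers, rssi))

With only 3-4 RSSI readings and indoor multipath, expect several metres of error at best; fingerprinting usually beats pure path-loss trilateration.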
