When the device is in confirmed mode, it waits for a downlink acknowledgement (ACK) from the network after each uplink. If no confirmation is received, the device repeats the uplink, up to a maximum of 8 attempts, increasing the spreading factor (SF) of the uplink if it had been lowered beforehand. Sending all 8 repetitions can take about 30 seconds.
If the device still sees no confirmation and needs to send another uplink (e.g. an alarm or a new periodic measurement), it sends the new uplink and abandons the previous one. To operate in confirmed mode, the device must be declared as confirmed (or ACK) on the network platform; you can activate this via the IoT configurator in the network parameters.
Be aware that this mode consumes much more battery power than unconfirmed operation, and even more so when the network quality is poor.
If the transmitter loses many frames, it is better to reposition the transmitter (if possible) or the gateway (if possible) to improve transmission, rather than activating the ACK, which will drain the battery faster than "expected" depending on network conditions.
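For illustration, here is a minimal sketch of the retry loop described above; the radio functions are stubs standing in for a real LoRaWAN stack, and the SF stepping is illustrative only, not the actual LoRaWAN retransmission schedule:

    /* Sketch of the confirmed-uplink retry behaviour described above.
     * radio_send() and wait_for_ack() are stubs, not a real stack API. */
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_CONFIRMED_RETRIES 8

    static void radio_send(int sf) { printf("uplink at SF%d\n", sf); } /* stub */
    static bool wait_for_ack(void) { return false; } /* stub: no ACK received */

    /* Raise the SF back toward SF12 for a more robust retry. */
    static int step_up_sf(int sf) { return sf < 12 ? sf + 1 : 12; }

    static bool send_confirmed_uplink(int sf)
    {
        for (int attempt = 1; attempt <= MAX_CONFIRMED_RETRIES; attempt++) {
            radio_send(sf);
            if (wait_for_ack())
                return true;      /* network confirmed reception */
            sf = step_up_sf(sf);  /* retry more robustly if SF was lowered */
        }
        return false;  /* give up; the next uplink supersedes this one */
    }

    int main(void)
    {
        if (!send_confirmed_uplink(7))
            printf("no ACK after %d attempts\n", MAX_CONFIRMED_RETRIES);
        return 0;
    }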
How can I get the number of BLE connections in iOS?
I want to prevent the user from adding more BLE sensors once a particular number of connections is reached, so I need to know how many BLE connections a device can handle.
A connection represents state, not traffic. The number of connections is bounded either by available memory or by the data structures the Bluetooth stack uses to manage them, both of which are unknown. From this perspective, my answer is, "As many as it can and no more."
Packets represent traffic, and each is handled one at a time. From this perspective, my answer is, "One."
However, if a packet cannot be processed out of the critical paths in the chip and protocol stack fast enough to begin processing the next packet, packets can be dropped. Experience has shown that these critical paths in iOS depend on the traffic's packet size and rate. Additionally, other devices in the area that are not connected to your BLE stack may be flooding the radio spectrum and causing packet collisions outside the stack. I have seen BLE traffic go to hell with more than 20 connections and with as few as one. From this perspective, my answer is, "It depends."
In CSMA, a sender postpones its wireless transmission while other devices are transmitting. That is why even UDP datagrams are delayed on a Wi-Fi router serving many mobile devices.
I am interested in finding a way to shorten this delay. I ran an experiment (described below) to test the hypothesis: "Even if other devices do not lower their network usage, device A will see lower round-trip latency if A lowers its own network usage." Fortunately, my assumption seems to be confirmed by the results.
My guess is that the other interfering devices keep their network usage constant, so the CSMA delay itself should not decline, but the probability of interference might decline, and thus the observed delay might improve. Beyond that I cannot figure out the reason; I am not a hardware engineer.
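To make my mental model concrete, here is a toy sketch of the contention mechanism I have in mind; it is a simplification of binary exponential backoff, not the actual 802.11 DCF specification:

    /* Toy model of CSMA/CA-style contention: a station that attempts
     * fewer transmissions collides less often, so its contention
     * window, and therefore its average wait, stays small. */
    #include <stdio.h>
    #include <stdlib.h>

    #define CW_MIN 16
    #define CW_MAX 1024

    /* Draw a random backoff; double the window after a collision,
     * reset it after a success (binary exponential backoff). */
    static int backoff_slots(int *cw, int collided)
    {
        *cw = collided ? (*cw < CW_MAX ? *cw * 2 : CW_MAX) : CW_MIN;
        return rand() % *cw;
    }

    int main(void)
    {
        int cw = CW_MIN;
        printf("backoff after success:   %d slots\n", backoff_slots(&cw, 0));
        printf("backoff after collision: %d slots\n", backoff_slots(&cw, 1));
        return 0;
    }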
Is my hypothesis right? If so, what is the reason?
Devices: a Wi-Fi router, device A (wireless laptop) and device B (wireless laptop)
Device B: exchanges 10 MB/s of data with another wired computer
Device A: sends ping-pong UDP packets to another wired computer
Device A:
<Server>
When a UDP datagram D is received from sender S on port 12345:
Send D back to S.
<Client>
Every 1 millisecond:
Send 20 bytes of data to Server:12345.
Device B: just uses iperf for bulk traffic generation. (A runnable sketch of device A's ping-pong pair is given below.)
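For reference, a C version of device A's pair under the setup above, using POSIX sockets; 192.0.2.1 is a placeholder server address, and error handling and latency timing are omitted for brevity:

    /* Device A's ping-pong pair from the pseudocode above.
     * Build the echo side with -DSERVER. */
    #include <arpa/inet.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define PORT 12345

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr = { 0 };
        char buf[64];

        addr.sin_family = AF_INET;
        addr.sin_port = htons(PORT);

    #ifdef SERVER
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        bind(s, (struct sockaddr *)&addr, sizeof(addr));
        for (;;) {  /* echo every datagram back to its sender */
            struct sockaddr_in src;
            socklen_t slen = sizeof(src);
            ssize_t n = recvfrom(s, buf, sizeof(buf), 0,
                                 (struct sockaddr *)&src, &slen);
            if (n > 0)
                sendto(s, buf, n, 0, (struct sockaddr *)&src, slen);
        }
    #else
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* placeholder */
        memset(buf, 'x', 20);
        for (;;) {  /* 20-byte probe roughly every 1 ms */
            sendto(s, buf, 20, 0, (struct sockaddr *)&addr, sizeof(addr));
            usleep(1000);
        }
    #endif
    }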
Test times: 5 times * minutes
The result:
Device A sends 1000 times per second: ping-pong latency is 130 ms
Device A sends 300 times per second: ping-pong latency is 40 ms
Remark: High deviation.
I want to know, with regard to iOS, the typical advertising rate for peripherals running in the background. I know it will vary depending on what you are doing in the foreground, but I do not know to what degree. I have read that it changes only by tens of milliseconds, but I have also read it can stretch to 4 seconds between advertisement packets. I can fit all of the data I need into one packet. Between advertising this one packet in the background and nominal iPhone usage, how likely is it that my advertising rate will still be in the millisecond range?
Also, I have read that I can change this interval programmatically, but the examples only last 30 seconds (I'm guessing due to the limited-discoverability mode?). Is there a way I could keep re-triggering this advertising boost? Or maybe I could start as a central scanning in the background instead, and when I scan a certain packet, switch to peripheral mode (since you can only do one or the other in the background) so that my advertising boost happens then. Then, when that thirty-second boost is over, I could switch back to central mode until I scan that certain packet again, right? This way I would only have a small window of poor advertising rate every 30 seconds. Is that feasible?
My understanding is that current WiFi drivers use a rate-control algorithm to choose a data rate from a small set of predetermined values when sending packets over the WiFi medium. Different algorithms exist for this purpose. But how does this process work when the WiFi driver decides that the connection is lost and shuts the connection down altogether? Which part of the code should I read in an open-source WiFi driver such as MadWifi and the like?
The WiFi driver for your hardware, which runs in Linux, communicates with the WiFi chip, which itself runs fairly complex firmware. The interface between the driver and the firmware is hardware-specific. In some hardware, connection-loss events are detected entirely by the firmware and the driver only receives a "disconnected" event, while in others the driver is also involved.
Regardless of who does what, disconnection usually occurs for one of the following reasons:
Receiving a DEAUTH frame from the AP
Detecting too many missed beacons. Beacons are WiFi frames sent periodically by the AP (for most APs, every ~100 ms). If you move too far from the AP, or the AP has just been powered off, you stop seeing its beacons in the air, and usually you will signal a disconnection or try to roam to a different AP.
Too many Tx failures (i.e. not receiving ACK frames for too much of the traffic).
This usually indicates that you have moved too far from the AP. It could be that you can still "hear" the AP but it can no longer hear you. In this case it also makes sense to signal a disconnection.
For example, you can look at the TI WiFi driver in the Linux kernel, drivers/net/wireless/ti/wlcore/events.c, and the function wlcore_event_beacon_loss().
In the cfg80211 architecture, assume we are in station mode.
The driver calls the kernel API cfg80211_send_disassoc() when a disassoc/deauth frame is received; this notifies the corresponding application (e.g. wpa_supplicant) of a disconnect event.
On the other hand, when we decide to disconnect from the AP ourselves, the application (e.g. wpa_supplicant) issues a disconnect command over nl80211, which invokes the corresponding driver callback to carry out the disconnection; the driver then reports completion back through the kernel API cfg80211_disconnected().
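For illustration, a driver-side sketch of reporting a locally detected beacon loss through cfg80211. Only cfg80211_disconnected(), WLAN_REASON_DEAUTH_LEAVING and the kernel headers are real; struct my_vif and the threshold are hypothetical driver-private details, and older kernels lack the locally_generated argument:

    #include <net/cfg80211.h>
    #include <linux/ieee80211.h>

    #define MY_BEACON_LOSS_THRESHOLD 10  /* ~1 s at a 100 ms beacon interval */

    struct my_vif {
        struct net_device *ndev;
        int missed_beacons;
    };

    /* Called from the driver's beacon-tracking path. */
    static void my_check_beacon_loss(struct my_vif *vif)
    {
        if (++vif->missed_beacons < MY_BEACON_LOSS_THRESHOLD)
            return;

        /* Notify cfg80211 (and, through nl80211, wpa_supplicant);
         * locally_generated = true because we dropped the link ourselves. */
        cfg80211_disconnected(vif->ndev, WLAN_REASON_DEAUTH_LEAVING,
                              NULL, 0, true, GFP_KERNEL);
        vif->missed_beacons = 0;
    }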
I'm building a portion of an app that requires the user to download files of varying sizes. Currently, I'm using Apple's Reachability code to tell me whether I have a connection.
(http://developer.apple.com/library/ios/#samplecode/Reachability/Introduction/Intro.html)
While the Reachability code can keep track of having a connection, it has no way to tell me about a worsening connection. It seems I need to extend the functionality of Apple's code to meet my requirement. Is it possible to measure the number or percentage of packets dropped during the data transfer? That would let me message the user and pause or stop the download.
There is no iOS API that hooks deep enough into the networking stack to tell you when and why packets are dropped. A packet could be dropped at a router, or locally because of a checksum error.
What kind of data is it? I assume TCP.
You might be able to infer the quality of the connection by examining the throughput rate: count how many bytes you receive per second.
TCP has congestion control: the end hosts (including iOS devices) throttle back their sending rate as packets are dropped and go unacknowledged, so throughput tends to correlate with network quality. However, your throughput may also vary for many other reasons, such as network congestion, packet shaping, QoS, and so on. A sketch of this sampling idea follows.
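A platform-neutral sketch of that sampling idea, using POSIX sockets (which also exist on iOS); the 1-second window and the threshold are arbitrary illustrative choices, and the socket is assumed to be already connected:

    /* Sketch: infer connection quality from receive throughput. */
    #include <stdio.h>
    #include <time.h>
    #include <sys/socket.h>

    #define DEGRADED_BPS 1024  /* below this, treat the link as poor */

    void monitor_download(int sock)
    {
        char buf[4096];
        long window_bytes = 0;
        time_t window_start = time(NULL);
        ssize_t n;

        while ((n = recv(sock, buf, sizeof(buf), 0)) > 0) {
            window_bytes += n;
            time_t now = time(NULL);
            if (now - window_start >= 1) {
                long bps = window_bytes / (long)(now - window_start);
                if (bps < DEGRADED_BPS)
                    printf("connection degraded: %ld B/s\n", bps);
                window_bytes = 0;
                window_start = now;
            }
            /* ... hand buf off to the file writer here ... */
        }
    }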
iOS does not have a public API for monitoring the WiFi signal strength. There is a private API, but your app would not be approved for the App Store if you used it.