Is it possible to implement flow control for CBCharacteristicWriteWithoutResponse?

Is it possible to use writeValue with type CBCharacteristicWriteWithoutResponse and still have some flow control to avoid sending data faster than BLE stack manages to actually send it out? Currently it works on Android but not on iOS.
The long story.
I have previously implemented duplex communication channel over BLE on Android. It is basically using two ATT characteristics - one is write/writeWithoutResponse and the other is notifiable.
On Android, even when I use writeWithoutResponse, Android gives me an onCharacteristicWrite callback to signal that the data packet has at least reached the BLE stack, and in this callback I send out the next data packet, sized to the current ATT_MTU - 3 bytes.
This works fine: the data reaches the target intact and I can achieve transfer speeds of about 10 KB/s.
But on iOS there is a problem. When using writeValue with type CBCharacteristicWriteWithoutResponse, iOS (at least iOS 8) does not call didWriteValueForCharacteristic, and this is intended and documented behavior. Thus I have no way of knowing whether the data packet has reached the BLE stack. The best I can do is call writeValue in a loop. Also, writeValue seems to be non-blocking (async). As a result, not all of my data reaches the peripheral device. In the logs I see that the incoming data stream stops too soon. My guess is that if I call writeValue too fast, iOS just overwrites the previously cached characteristic value and thus loses some data bytes in between.
If I use writeValue with CBCharacteristicWriteWithResponse, it works fine, and what's strange - it works fine even if I ignore didWriteValueForCharacteristic and just call writeValue in a loop. It seems, with CBCharacteristicWriteWithResponse iOS is doing some internal housekeeping and uses BLE acknowledgments to avoid overwriting current value of the characteristic, therefore the data is being sent in order and without any losses.
Of course, I don't expect to get reliable writes using CBCharacteristicWriteWithoutResponse, but I'd at least like it to work for most cases. If it works on Android, then why shouldn't it work on iOS?

Apple's implementation kind of sucks. All other implementations I've seen have proper flow control. What you could do, if you don't want to implement some advanced TCP-like layer on top of BLE, is to stick with Write Without Response packets but send every 10th packet or so as a Write With Response. Then you will (with high probability) not get any packet drops, and it will probably cost only a small amount of performance. You should also increase the MTU to raise the throughput even further.
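As a rough illustration of that approach, here is a minimal Objective-C sketch (the class and property names are mine, not from the question): the payload is streamed as Write Without Response packets, and every 10th packet is sent as a Write With Response so the didWriteValueForCharacteristic: callback can act as a flow-control barrier.

#import <CoreBluetooth/CoreBluetooth.h>

@interface ChunkedWriter : NSObject <CBPeripheralDelegate>
@property (nonatomic, strong) NSData *payload;
@property (nonatomic, assign) NSUInteger offset;
@property (nonatomic, assign) NSUInteger packetCount;
@property (nonatomic, assign) NSUInteger chunkSize;   // e.g. ATT_MTU - 3
@end

@implementation ChunkedWriter

- (void)pumpTo:(CBPeripheral *)peripheral characteristic:(CBCharacteristic *)characteristic
{
    while (self.offset < self.payload.length) {
        NSUInteger len = MIN(self.chunkSize, self.payload.length - self.offset);
        NSData *chunk = [self.payload subdataWithRange:NSMakeRange(self.offset, len)];
        self.offset += len;
        self.packetCount++;

        if (self.packetCount % 10 == 0) {
            // Every 10th packet is acknowledged: stop the loop here and
            // resume from the didWriteValueForCharacteristic: callback.
            [peripheral writeValue:chunk
                 forCharacteristic:characteristic
                              type:CBCharacteristicWriteWithResponse];
            return;
        }
        [peripheral writeValue:chunk
             forCharacteristic:characteristic
                          type:CBCharacteristicWriteWithoutResponse];
    }
}

// Requires peripheral.delegate to be set to this object.
- (void)peripheral:(CBPeripheral *)peripheral
didWriteValueForCharacteristic:(CBCharacteristic *)characteristic
             error:(NSError *)error
{
    if (error == nil) {
        [self pumpTo:peripheral characteristic:characteristic];
    }
}

@end

The "every 10th" figure is a tunable trade-off between throughput and the risk of dropped packets.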

Related

How to improve performance of CBCentralManager while CBPeripheralManager is active

We've created an iOS app that implements a CBCentralManager to connect to a device that we've created, which transmits data at 10 Hz. It's vitally important that this data comes through and displays quickly, so we have built tight latency checks around it: if too many points are missed, or the local clock detects that incoming values have slowed, we fault and break the connection.
The client has asked us to implement a second iOS app that will observe the first one. We implemented a CBPeripheralManager in the original app which advertises, can be connected to, and periodically publishes its data to a few outgoing characteristics.
What we are finding is that we cannot seem to connect the observer iOS app to the original iOS app (i.e., the original iOS app has both a CBCentral connection to the device and CBPeripheral connection to the observer app active at the same time), without tripping up our latency checks on the incoming data from the device.
I've tried everything I can think of. I've used separate queues for both CBPeripheralManager and CBCentralManager, as follows:
q = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0);
ptr_CBPeriphMgr = [[CBPeripheralManager alloc] initWithDelegate:self queue:q];
Also,
I logged and timestamped everything and verified that none of my code is taking too long.
I moved almost all of my code out of the BLE handlers to make them very light and non-blocking.
I tried the separate queues (example shown above), with low priorities
I have tried slowing my CBPeripheralManager data rates to a trickle, a few updates a second
I have tried suspending the latency checks for three seconds after a CBPeripheralManager connection is established (which is far from ideal), but the problem seems to kick in randomly, not just after a connection.
It seems that no matter what I try, after 4-5 minutes of both peripheral and central connections being active (we have a loop where the second app repeatedly connects and disconnects every five seconds, to challenge the device connection), my incoming value updates from the device to the central slow to about 1/4 or 1/5 speed, or they stop for a full second and then three or four updates come in nearly simultaneously -- both of which trip our latency checks. It's like some queue is getting filled up and performance flatlines, but as I mentioned above I think I'm using separate queues.
I'm at my wits' end... does anybody have any thoughts about how to prioritize my central functions over my peripheral functions in the iOS app, or how to somehow improve performance so this isn't an issue and my app stays responsive to 10 Hz updates from the device, even when being observed as a peripheral?
(Edited to state that we are connecting/disconnecting the second App repeatedly... perhaps I'm not cleaning up after the disconnection properly, and the garbage piles up and screws up BLE? That would explain why the problem seems to occur after 4-5 minutes regardless of the frequency of data updates over the second connection.)
Here are some suggestions:
Try creating queues using QOS_CLASS_USER_INITIATED or higher instead of QOS_CLASS_UTILITY.
Make sure that you call -[CBCentralManager stopScan] whenever you do not need to be scanning for peripherals, and -[CBPeripheralManager stopAdvertising] whenever you do not need to be prepared for incoming connections (probably whenever you are connected).
Call -[CBPeripheralManager setDesiredConnectionLatency:forCentral:] to reserve more resources.
However, if you can target iOS 11+, what I would recommend to decrease latency and increase speed is using an L2CAP channel, even though CoreBluetooth's L2CAP support is not well documented.
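For the first three suggestions, here is a minimal Objective-C sketch (assuming self is the CBPeripheralManagerDelegate of the original app; ptr_CBPeriphMgr is the property name from the question, everything else is assumed):

#import <CoreBluetooth/CoreBluetooth.h>

- (void)setUpPeripheralManager
{
    // Use a higher-priority queue than QOS_CLASS_UTILITY for BLE callbacks.
    dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0);
    ptr_CBPeriphMgr = [[CBPeripheralManager alloc] initWithDelegate:self queue:q];
}

- (void)peripheralManager:(CBPeripheralManager *)peripheral
                  central:(CBCentral *)central
didSubscribeToCharacteristic:(CBCharacteristic *)characteristic
{
    // The observer app is now connected: stop advertising and ask the
    // system to reserve more resources for this link.
    [peripheral stopAdvertising];
    [peripheral setDesiredConnectionLatency:CBPeripheralManagerConnectionLatencyLow
                                 forCentral:central];
}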

iOS input stream receives fast data only if output stream is simultaneously sending

My iOS app requires socket communication. I'm following this Ray Wenderlich tutorial for setting up the input and output streams. The server I'm using is Twisted. My app requires sending and receiving fast bursts of data generated by external events, like gyroscope data. It sends and receives data in the form of JSON strings. So largely, it's very much like a real-time messaging chat app, but the sending and receiving is very fast and in bursts.
So my app layout is that I have one view controller, DViewController, and a tab bar controller with 3 table view controllers. I need to send and receive data in all 4 of these view controllers, hence I implemented the socket stream initialization in the App Delegate. For all 3 tabs, my App Delegate sets [self.inputstream setDelegate:self], but when DViewController is active it sets the input stream's delegate to a reference of DViewController. In -viewWillDisappear of DViewController, I reset the input stream delegate back to a reference of the App Delegate to let it regain control over the input stream.
For the output stream, the delegate is always set to the App Delegate and never changed.
Both my AppDelegate and DViewController conform to <NSStreamDelegate> and both implement:
- (void)stream:(NSStream *)theStream handleEvent:(NSStreamEvent)streamEvent
{
}
with all the stream event cases implemented.
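For reference, a minimal sketch of what that shared handler might look like (the pendingData NSMutableData property and the parsing helper are hypothetical names, not from the question):

- (void)stream:(NSStream *)theStream handleEvent:(NSStreamEvent)streamEvent
{
    switch (streamEvent) {
        case NSStreamEventHasBytesAvailable: {
            uint8_t buffer[1024];
            // Drain everything currently buffered; one read may return only
            // part of a JSON string, or several strings at once.
            while ([(NSInputStream *)theStream hasBytesAvailable]) {
                NSInteger len = [(NSInputStream *)theStream read:buffer
                                                       maxLength:sizeof(buffer)];
                if (len > 0) {
                    [self.pendingData appendBytes:buffer length:len];
                }
            }
            [self processCompleteJSONStringsFromPendingData];
            break;
        }
        case NSStreamEventErrorOccurred:
            NSLog(@"Stream error: %@", theStream.streamError);
            break;
        case NSStreamEventEndEncountered:
            [theStream close];
            break;
        default:
            break;
    }
}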
So basically, my entire setup works well, but only if the data is not fast (or, for that matter, I cannot seem to pinpoint the exact problem).
So here are a few observations which I have made while testing (with a Simulator-iPhone and iPhone-iPhone setup):
A. SIMULATOR - iPHONE:
Now in this setup, I am able to send data fast and exactly the way I want from Simulator to iPhone, but not from iPhone to Simulator. The iPhone receives all the strings well and acts as required.
Sending from iPhone to Simulator, the Simulator can read only one JSON string at a time, and it doesn't work when data is sent fast. If sent fast, all strings received by the Simulator are cut in half (only one half is received). NOTE: The server receives and sends full strings, all of them, even when the data is fast; there is no problem with the server.
If I send data simultaneously, at the same time, from both the Simulator and iPhone, even if fast, both receive and process all the strings well.
B. iPHONE - iPHONE:
Either iPhone (any one of them sending, not both together) can receive only one JSON string at a time, and neither works when data is sent fast. If sent fast, all strings received by either iPhone are cut in half (only one half is received). NOTE: The server receives and sends full strings, all of them, even when the data is fast; there is no problem with the server.
If I send data simultaneously, at the same time, from both the iPhones to each other, even if fast, both receive and process all the strings well.
These observations led me to believe that the iPhone receives all fast strings only if it is simultaneously sending something to the server. Or I could be totally wrong, because when the Simulator sends to the iPhone, the iPhone is able to receive everything no matter what. I want to know what the Simulator is doing differently such that the strings it sends are taken in as complete by the iPhone, but not the other way round. Is it that the iPhone sends much faster than the Simulator, so that all its sent strings don't get registered by the receiver? Somebody help me crack this please!
NOTE: In all cases, the server works perfectly and it sends and receives data in full length, no matter whatever speed. And I'm using iOS 7.
UPDATE 1:
Okay, so I've been experimenting with this the entire day, and I finally made it work. The thing is, it is exactly what my question title says: the output stream cannot stay idle if you want to receive continuously and fast from the input stream. I don't know why that happens; if anyone could enlighten me, please do. So the quick fix I'm using is that whenever I get bytes on the input stream, I immediately send blank data to the server to keep the output stream active. Now the input stream can read complete data, and fast. But I feel it is a waste of server resources, plus it's not a reliable solution. I'm looking for a concrete solution. I want to know how the Simulator does it without being bothered about the utilization of the output stream. Can anyone help please?
UPDATE 2:
Learning from the previous update, it's not about sending blank data to the server; rather, I need to send dummy data to the sender if I want to receive its next string complete. I need to keep the end-to-end communication alive with dummy/blank data if I want to send/receive data fast and complete. Has anyone had this issue and found a more reliable/concrete way to do it?

Central is getting an error response for a write request if the Peripheral responds a little late

I have been using the CoreBluetooth framework to communicate between BTLE iOS devices and I see a strange behavior. Here is what I am doing:
iOS device 1 (Peripheral): Expose a writable characteristics.
iOS device 2 (Central): Scan for the writable characteristics and write data into it.
iOS device 1 (Peripheral): Receives the write request. Waits for some time before acknowledging receipt of the data.
iOS device 2 (Central): Gets a callback on the delegate method below, with the error shown.
Issue: If I respond to the write request within a few seconds by calling the API [iPeripheral respondToRequest:iRequest withResult:iStatus], then it all works fine and I get a success on my Central. But if I take longer, I get an error response back on the Central even though my Peripheral has not yet responded to the write request.
Is this some kind of connection loss after a few seconds, or known CoreBluetooth framework behavior? Any ideas?
- (void)peripheral:(CBPeripheral *)iPeripheral didWriteValueForCharacteristic:(CBCharacteristic *)iCharacteristic error:(NSError *)iError
Error Domain=CBErrorDomain Code=0 "Unknown error." UserInfo=0x183a6d70 {NSLocalizedDescription=Unknown error.}
Both my Central and Peripheral are running on iOS 7.0.
I also observed this problem when I had deadlocks in my code and couldn't respond in time ;-) The way I observed it, iOS sends back an automatic error response with an arbitrary error code if a request is not answered within 10 seconds. I have not found a way to change this, but it makes sense from a protocol perspective.
In Bluetooth Low Energy, a central can only send a single Characteristic Value Write Request at a time. After it has sent this request, it cannot send a different Write Request unless the first one is responded to. Therefore, it is crucial to always respond to requests as fast as possible.
In the comments, you mentioned that you are waiting for user input to affect the result code you want to send to the central. I guess "Success" if the user confirms in the UI that an operation should be started, and an error code if the user denies that. This is not the way an LE based protocol should be designed. It's like blocking the UI thread until an operation is finished, just from the other side. You are effectively blocking the BT communications until a blocking operation (waiting for user input) completes.
A different design would be to send a write request to the other phone, responding immediately with a "Success" error code to indicate that the request was received and the popup is displayed. Then, send a Characteristic Value Indication with the user's choice from the peripheral to the central.
There's one small caveat if you target iOS 6: indications don't work nicely in many cases (reentrancy bugs etc.; best not to touch them). There, you should instead send a Read Request from your central and return the user's choice in the read response if it's already available. Again, don't block while giving the answer; send back a "user is still choosing" value if the answer is not yet ready.
Single rule: answer requests as fast as possible. It's the way Bluetooth LE is designed to work.
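A minimal Objective-C sketch of that design on the peripheral side (the characteristic property, UI call, and method names are hypothetical):

- (void)peripheralManager:(CBPeripheralManager *)peripheral
    didReceiveWriteRequests:(NSArray *)requests
{
    for (CBATTRequest *request in requests) {
        // Respond immediately; "success" only means "request received".
        [peripheral respondToRequest:request withResult:CBATTErrorSuccess];
        [self showConfirmationPopupForRequest:request];
    }
}

// Called later, once the user has made a choice.
- (void)userDidChoose:(BOOL)accepted
{
    uint8_t result = accepted ? 1 : 0;
    NSData *value = [NSData dataWithBytes:&result length:1];
    // self.resultCharacteristic is an assumed CBMutableCharacteristic with
    // the indicate property; the central must have subscribed to it.
    [self.peripheralManager updateValue:value
                      forCharacteristic:self.resultCharacteristic
                   onSubscribedCentrals:nil];
}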
You may be exceeding the maximum time allowed for a write to be acknowledged. Try testing several different ack times and see if it reliably fails beyond a certain threshold.
If you are using iPhone 4 devices, note that the iPhone 4 does not support BLE. BLE is supported on the iPhone 4S and later.

iOS gamekit/bluetooth data streaming

I have written a program using GameKit/Bluetooth to transfer low-quality video as compressed JPEGs from one iOS device to another. I do realize that GameKit/Bluetooth is not meant for this purpose (it is intended for small chunks of data), but it does indeed work well, streaming 15 low-quality compressed JPEGs per second with little to no latency.
The question I have is that once I increase either the quality or the frame rate on the sending iOS device, a lag or delay occurs and the stream is no longer real time. If there is a delay, I'd like the sending iOS device to somehow discard frames so that the receiver can catch up, or the receiver to ignore the backlog queue.
In GameKit I have set the session mode to use GKSendDataUnreliable to see if it could help, but to no avail.
If delays occur, what is the best solution and correct approach to discarding frames (JPEGs) so that the iOS receiver can catch back up to real time? Would the sender need to stop transmission for a period of time, or is there something the receiving client can do to discard the accumulating queue?
I've used NSStream before as well, and while using wifi allows for greater bandwidth, the same problem will still occur in terms of delays if too much data is being transmitted.
Thank you in advance for your help.
Could you not attach a timestamp to each JPEG (time since epoch, perhaps) so the receiving client will ignore all images that are not within a given timeframe?
Also you could have the receiving client respond back with simple acknowledgement packets indicating that a jpg has been received. If the sending client hasn't received an acknowledge packet within a given timeframe it discards all images it was going to send and starts from scratch.
With this solution if the receiving client falls X seconds behind the sender it will stop sending acknowledgement packets and discard all incoming data until the sender throws out everything in its queue and starts sending "live" frames again.
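A minimal sketch of the timestamp idea (it assumes the two devices' clocks are roughly in sync; the 0.5-second cutoff and the method names are hypothetical):

- (NSData *)packetForJPEG:(NSData *)jpeg
{
    // Prepend the send time so the receiver can judge the frame's age.
    NSMutableData *packet = [NSMutableData data];
    double sentAt = [[NSDate date] timeIntervalSince1970];
    [packet appendBytes:&sentAt length:sizeof(sentAt)];
    [packet appendData:jpeg];
    return packet;
}

- (void)handleIncomingPacket:(NSData *)packet
{
    double sentAt = 0;
    [packet getBytes:&sentAt length:sizeof(sentAt)];
    double age = [[NSDate date] timeIntervalSince1970] - sentAt;
    if (age > 0.5) {
        return; // stale frame, skip it so playback catches back up
    }
    NSData *jpeg = [packet subdataWithRange:
                    NSMakeRange(sizeof(sentAt), packet.length - sizeof(sentAt))];
    [self displayJPEG:jpeg];
}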

Why do I get packet loss when using GKMatchSendDataReliable?

I have a multiplayer iOS game, and I am sending data using GKMatchSendDataReliable. However, occasionally a data packet is lost. I've checked on the sending end and I am not getting an error; I'm just not receiving it on the receiving end. It is intermittent, and I have NSLogs right at the beginning of my receive method, so I always know when I get a message.
Is GKMatchSendDataReliable 100% reliable? It seems like a waste to have to set up my own reliable data sending routines.
It seems that this only happens when one device is on Verizon's LTE network. I haven't tried any other cellular network. When using Wi-Fi only, not necessarily the same Wi-Fi network, it works fine.
This happens to me too. It appears that while GKMatchSendDataReliable is oodles more reliable than GKMatchSendDataUnreliable (which loses about 2% of packets in my tests), GKMatchSendDataReliable seems to occasionally lose the first packet I send (immediately after connecting).
My users also complain that some data may be accidentally lost during the game. I wrote a test app and figured out that GKMatchSendDataReliable is not really reliable. On weak internet connection (e.g. EDGE) some packets are regularly lost without any error from the Game Center API.
So the only option is to add an extra transport layer for truly reliable delivery.
I wrote a simple lib for this purpose: RoUTP. It saves all sent messages until an acknowledgement is received for each, resends lost messages, and buffers received messages in case of a broken sequence.
In my tests the combination "RoUTP + GKMatchSendDataUnreliable" works even better than "RoUTP + GKMatchSendDataReliable" (and of course better than pure GKMatchSendDataReliable, which is not really reliable).
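For illustration, a minimal sketch of the sender side of such a layer (not the actual RoUTP code; class and method names are mine): every message carries a sequence number, is kept until acknowledged, and anything unacknowledged can be resent over the unreliable channel. The receiving side (sending acks and reordering out-of-sequence messages) is omitted.

#import <GameKit/GameKit.h>

@interface ReliableSender : NSObject
@property (nonatomic, strong) GKMatch *match;
@property (nonatomic, strong) NSMutableDictionary *pending; // seq -> packet
@property (nonatomic, assign) uint32_t nextSequence;
@end

@implementation ReliableSender

- (instancetype)initWithMatch:(GKMatch *)match
{
    if ((self = [super init])) {
        _match = match;
        _pending = [NSMutableDictionary dictionary];
    }
    return self;
}

- (void)sendPayload:(NSData *)payload
{
    uint32_t seq = self.nextSequence++;
    NSMutableData *packet = [NSMutableData dataWithBytes:&seq length:sizeof(seq)];
    [packet appendData:payload];
    self.pending[@(seq)] = packet; // keep it until the peer acknowledges seq
    [self.match sendDataToAllPlayers:packet
                        withDataMode:GKMatchSendDataUnreliable
                               error:nil];
}

- (void)didReceiveAckForSequence:(uint32_t)seq
{
    [self.pending removeObjectForKey:@(seq)]; // delivered, stop tracking it
}

- (void)resendUnacknowledged
{
    // Called from a periodic timer; anything still pending is resent.
    for (NSData *packet in [self.pending allValues]) {
        [self.match sendDataToAllPlayers:packet
                            withDataMode:GKMatchSendDataUnreliable
                                   error:nil];
    }
}

@end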
Apple stated that this was a bug and fixed it in iOS 7.
