AFNetworking/NSURLConnection HTTPS keep-alive shows strange behaviour - iOS

We're currently dealing with performance issues in our app, and we believe some of them might be related to the fact that the app and the underlying AFNetworking network stack seem to ignore keep-alive on HTTP/1.1.
We got information from Apple that persistent connections are purged after 3, 6 or 30 seconds respectively, depending on iOS version and WiFi/WWAN connectivity, regardless of server-side keep-alive information.
While monitoring the connection handshakes on our servers, we noticed the odd behavior that an SSL connection from our app on an iOS device is left open and not closed with a FIN packet. As soon as a new request is made from the app, the leftover connection from the previous request is only then closed with a FIN packet and a new connection is created.
While we understand that iOS purges connections to keep battery consumption low, we wonder why it doesn't terminate the existing connection properly instead of deferring that termination until the start of a new request.
Could someone explain this behavior, and suggest solutions to avoid expensive SSL handshakes in connections which are covered by keep-alive under regular conditions?
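For reference, one client-side mitigation is to make sure every request goes through a single long-lived session, so that connection reuse and TLS session resumption can at least happen within whatever window iOS allows. A minimal sketch of that idea with plain URLSession for illustration (the class name and host are placeholders):

import Foundation

// A single long-lived session shared by all requests, so the underlying
// connections (and TLS sessions) can be reused while they are still alive.
// Creating a new session or manager per request defeats reuse.
final class APIClient {
    static let shared = APIClient()

    private let session: URLSession

    private init() {
        let configuration = URLSessionConfiguration.default
        // Funnel requests onto a small number of warm connections per host.
        configuration.httpMaximumConnectionsPerHost = 2
        session = URLSession(configuration: configuration)
    }

    func get(_ url: URL, completion: @escaping (Data?, URLResponse?, Error?) -> Void) {
        session.dataTask(with: url, completionHandler: completion).resume()
    }
}

// Usage with a placeholder URL:
// APIClient.shared.get(URL(string: "https://api.example.com/ping")!) { data, response, error in
//     // handle the response
// }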

I bumped into the same problem some weeks ago.
The solution was to force the web server to ignore the keep-alive HTTP header from iOS devices and close the connection immediately.

Related

iOS BLE unstable connection

I'm currently developing a React Native app that communicates with a device over BLE. So far everything works as expected, but I'm having issues with the stability of the connection between the iOS app and the device. The device connects and I can update some characteristics, but it regularly either disconnects with a CBErrorDomain error 7 or the response to a write times out. The implementation on the app or device side does not seem to be the problem, as Android is stable and the device also disconnects when connecting with the LightBlue app.
I've already updated the BLE connection parameters as suggested here:
https://developer.apple.com/library/archive/qa/qa1931/_index.html.
This increased stability but did not resolve the problem completely. I've tried playing around with the values, but so far no luck.
The current set of parameters we are using are:
conn_min_interval: 15
conn_max_interval: 15
conn_latency: 0
supervision_timeout: 2000
adv_min_interval: 1285
adv_max_interval: 1285
My question now is whether somebody has an idea what other things I could check or which parameters to tune.
Are you checking maximumWriteValueLength and making sure your writes are smaller than this? A likely cause of your problems is overwhelming the device so that it fails to keep up with sending ACKs. What version of Bluetooth does your device support, and does it implement DLE (Data Length Extension)?
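For reference, a minimal CoreBluetooth sketch of that check (the helper function and the surrounding delegate plumbing are assumptions):

import CoreBluetooth

// Hypothetical helper: respect the negotiated write limit and send at most
// one chunk at a time. `peripheral` and `characteristic` are assumed to be
// already discovered on a connected peripheral.
func writeFirstChunk(of payload: Data, to characteristic: CBCharacteristic, on peripheral: CBPeripheral) {
    // Largest payload the current connection will carry per write-with-response.
    let maxLength = peripheral.maximumWriteValueLength(for: .withResponse)
    let chunk = payload.prefix(maxLength)
    peripheral.writeValue(Data(chunk), for: characteristic, type: .withResponse)
    // Queue the next chunk from peripheral(_:didWriteValueFor:error:) so the
    // device never has more than one outstanding write to acknowledge.
}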
Your conn_min_interval and conn_max_interval are suspicious. Asking for 15ms with no leeway is likely to negotiate to 30ms instead. (See 41.6 Connection Parameters) Is your device comfortable with being re-negotiated to something other than 15ms? Can your device actually keep up with 15ms and no connection latency if it does get that? I'm betting it can't. Try setting your connection interval to 30ms (or even a bit slower), and you might even try setting your connection latency to 1 to make the connection a bit more forgiving (though I'd focus more on slowing the CI than increasing latency; increasing latency would be more of a hack in this case).
All my suspicions are around your peripheral not keeping up with its side of the connection. If you have any synchronous activities in response to the data, you need to make sure that it's not blocking your BLE stack from sending the required responses.
Finally, I've found the answer to my problem. The problem was that the pairing procedure of the BLE server was faulty, and thus iOS was unable to maintain a stable connection. Now that this is fixed, the connection is very stable.
I'm still unsure why iOS was able to communicate at all without the pairing, but I hope that this helps some people in the future.

iOS 15: Websocket inside WKWebView issue

In our app we rely on a web socket inside a WKWebView. In previous releases this web socket worked well. In the iOS 15 betas, though, it behaves differently: it connects to our server successfully, but as soon as the client tries to send any data through it, the web socket throws an error and closes with a nondescript error:
The operation couldn’t be completed. (kNWErrorDomainPOSIX error 54 - Connection reset by peer)
Looking into the system log the deepest error I can make out is:
nw_protocol_boringssl_error(1772) [C12.1.1:2][0x102e0d540] Lower protocol stack error post TLS handshake. [54: ]
A test web socket to another server seems to be working.
I also noticed that a MitM proxy like Charles no longer shows web socket connections in the iOS 15 beta. This just indicates that something might have changed.
Because the communication via this socket is very important for the functionality of our app, I need to know what the issue is. I tried adding ATS exceptions for the URL of the socket, to no avail.
Maybe this is a temporary bug in iOS 15 that will be fixed before it's released? Or has anyone experienced this kind of error in the past?
It seems that the issue is related to WebSocket compression (permessage-deflate) on iOS 15. Disabling the compression for iOS 15 devices on the server side helped.
This is obviously not a solution, but only a quick fix (if you have access to the server). Here is a discussion on the same topic.
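If you don't have access to the server, one way to narrow things down is to open the same endpoint with URLSessionWebSocketTask outside the WKWebView and check whether a plain send succeeds there. A rough diagnostic sketch (the URL is a placeholder):

import Foundation

// Rough diagnostic sketch: connect to the same endpoint outside the
// WKWebView and try to send one message. The URL below is a placeholder.
let task = URLSession.shared.webSocketTask(with: URL(string: "wss://example.com/socket")!)
task.resume()

task.send(.string("ping")) { error in
    if let error = error {
        print("send failed outside WKWebView too:", error)
    } else {
        print("send succeeded outside WKWebView")
    }
}

// Close when done observing:
// task.cancel(with: .goingAway, reason: nil)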

iOS Networking - HTTP Connections & running in background

I have an app that lets the user send messages with images. A user might hit send, then immediately close their phone or switch to another app.
We were running into an issue where, if the network connection was temporarily bad, the message would fail to send. We switched to using NSURLSession backgroundConfigurationWithIdentifier so that backgrounding the app doesn't immediately time out the running request. We switched to using this for all our API requests, thinking that it couldn't hurt for every request to be able to continue in the background if the app were closed at the wrong time.
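Roughly, the setup we switched to looks like this (modern Swift naming; the identifier, URL, and delegate are illustrative placeholders):

import Foundation

// A background session used for API requests so they can finish after the
// app is backgrounded.
final class APISessionDelegate: NSObject, URLSessionDownloadDelegate {
    func urlSession(_ session: URLSession, downloadTask: URLSessionDownloadTask,
                    didFinishDownloadingTo location: URL) {
        // Read the response body from `location` before this method returns.
    }
}

let config = URLSessionConfiguration.background(withIdentifier: "com.example.app.api")
let session = URLSession(configuration: config, delegate: APISessionDelegate(), delegateQueue: nil)

// Background sessions run their transfers in a separate system process and
// only support upload and download tasks (no completion-handler data tasks).
session.downloadTask(with: URL(string: "https://api.example.com/messages")!).resume()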
Fast forward a couple of weeks and we're noticing that all requests seem slow. Using Wireshark I just discovered that this background session seems to use a new HTTP connection per request, meaning it requires setting up a TCP connection and a new TLS handshake for every request, which was adding ~500 ms of latency to every request in our app. This is a pretty big deal, but I can't find this behavior documented anywhere, including the link above or Apple's background transfer considerations.
So my question is, is this behavior expected, or am I doing something wrong somewhere? Is there an easy way with NSURLSession to make an HTTP request that will use an existing keep-alive connection if there is one, but can fall back to the backgroundConfiguration if the app gets moved to the background?
NSURLSession is the recommended way to fulfill your use case. Have you tried setting backgroundSessionConfig.discretionary = true?
iOS Reference
A Boolean value that determines whether background tasks can be scheduled at the discretion of the system for optimal performance.
If that doesn't help, I recommend filing a bug with iOS.
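A minimal sketch of that setting (the identifier is a placeholder; in Swift the property is spelled isDiscretionary):

import Foundation

// Sketch of the suggestion above. Note that `discretionary` is the
// Objective-C name of the property; in Swift it is isDiscretionary.
let backgroundSessionConfig = URLSessionConfiguration.background(withIdentifier: "com.example.app.api")
backgroundSessionConfig.isDiscretionary = true
let backgroundSession = URLSession(configuration: backgroundSessionConfig)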

Sending SPDY requests results in "The request timed out" errors with NSURLSession in iOS

My iOS app loads images from an nginx HTTP server. After I send 400+ such requests the networking 'gets stuck' and all subsequent HTTP requests result in "The request timed out" error. I can make the images load again only when I restart the app.
Details:
I am using NSURLSession.sharedSession().dataTaskWithURL to send four hundred HTTP GET requests to jpeg files.
Requests are sent sequentially, one after another. The interval between requests is 10 ms.
Each previous unfinished request is cancelled with the cancel() method of the NSURLSessionDataTask object.
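Roughly, the request pattern looks like this (modern Swift naming; the URLs are placeholders):

import Foundation

// Send a GET request every 10 ms, cancelling the previous unfinished task.
let urls = (1...400).map { URL(string: "https://example.com/images/\($0).jpg")! }
var previousTask: URLSessionDataTask?

for (index, url) in urls.enumerated() {
    DispatchQueue.main.asyncAfter(deadline: .now() + .milliseconds(10 * index)) {
        previousTask?.cancel()   // cancel the previous unfinished request
        let task = URLSession.shared.dataTask(with: url) { data, _, _ in
            // use the JPEG data
        }
        previousTask = task
        task.resume()
    }
}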
Interestingly:
I only see this issue with HTTPS requests, and only when SPDY is enabled on the server.
Non-secure HTTP requests work fine.
Non-SPDY HTTPS requests work fine. I tested it by turning SPDY off on the server side, in the nginx config.
Problem appears both on iOS 8 and 9, on physical device and in the simulator. Both on Wi-Fi and LTE.
When I look at the nginx access logs, I can still see the 'stuck' requests coming in. Important nuance: the request log record appears at the exact moment when the iOS app gives up on it, after the timeout period ends.
I was hoping to analyze the HTTP requests with Charles Proxy, but the problem cures itself when requests go through Charles. That is, everything works with Charles, much like the observer effect in quantum mechanics, where the act of looking influences the outcome.
I was able to reproduce the issue when the iOS app connected to two different servers with vastly different nginx configurations. This probably means that the issue is not related to a particular nginx setup.
I analyzed the app using the "Activity Monitor" instrument. The number of threads it uses during the bulk HTTP requests jumps from 5 to 10. In comparison, when I send just a single HTTP request, the number of threads jumps to 8. CPU load rarely goes above 30%.
What can be the cause of the issue? Can anyone recommend other ways or tools for analysing and debugging it?
Analysing with scheduling instrument
Demo app
This demo app reproduces the issue 100% of the time for me.
https://github.com/exchangegroup/ImageLoadDemo
Versions and settings
My nginx config: http://pastebin.com/pYYjdxfP
OS X: 10.10.4 (14E46), iOS: 8 and 9, Xcode: 7.0 (7A218), nginx: 1.9.4
Not ideal workaround
I managed to keep requests working only if I create a new NSURLSession for each individual request and clear the previous session with finishTasksAndInvalidate or invalidateAndCancel.
// Request 1
let url = NSURL(string: "https://example.com/image.jpg")!  // placeholder image URL
let configuration = NSURLSessionConfiguration.defaultSessionConfiguration()
let session = NSURLSession(configuration: configuration)
session.dataTaskWithURL(url) { data, response, error in
    // handle the downloaded image
}.resume()

// Request 2
// clear the previous session so its connections are torn down
session.finishTasksAndInvalidate()
let session2 = NSURLSession(configuration: configuration)
session2.dataTaskWithURL(url) { data, response, error in
    // handle the downloaded image
}.resume()
One possibility is that iOS started sending the request, and then packet loss prevented the headers and request body from being fully delivered.
Another possibility that comes to mind is that your server may not be logging the request until it actually finishes trying to deliver it, which would make the time stamps in the server logs line up with when the connection was closed, rather than when it was opened. (IIRC, that's what Apache does; I haven't worked with nginx, so I can't speak for its behavior.) If that's the case, then this is just a simple connection stall. As for why it is stalling, I couldn't guess.
Does the problem occur exclusively for HTTPS traffic? If you can reproduce it with HTTP, you don't need Charles Proxy; just use OS X's "Internet Sharing" feature, and capture the packets with tcpdump or wireshark, listening on the bridge interface. If you can't reproduce it with HTTP, my money would be on a problem with fetching the CRLs or performing the OCSP check while validating the server's certificate.
Is your app ending up with a huge number of threads as a result of excessive async dispatching to new queues, by any chance? Because that could easily cause all sorts of odd misbehavior.
How long is the timeout? If it is too short, your app might simply be running up against performance limitations of the hardware while processing the results of 400 requests delivered in only four seconds.
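If the timeout does turn out to be too tight for a burst like this, it can be raised on the session configuration; a quick sketch with modern Swift naming (the value is arbitrary):

import Foundation

// Raise the per-request timeout on the session configuration.
let configuration = URLSessionConfiguration.default
configuration.timeoutIntervalForRequest = 120   // seconds of inactivity allowed per request
let session = URLSession(configuration: configuration)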
Also, are you trying to schedule these requests simultaneously? Because I seem to recall reading about a bug that causes NSURLSession to hit a brick wall if you start too many tasks in a single session at the same time. You might try adding tasks only after the number of tasks in a session drops below some threshold and see if that fixes the problem.
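As a rough illustration of that last suggestion (modern Swift naming; the concurrency limit is arbitrary), one could feed new tasks into the session only as earlier ones finish:

import Foundation

// Keep at most `maxConcurrent` tasks running in one session, starting the
// next task only when an earlier one completes.
final class ThrottledLoader {
    private let session = URLSession(configuration: .default)
    private let queue = DispatchQueue(label: "ThrottledLoader")
    private let maxConcurrent = 4
    private var pending: [URL]
    private var inFlight = 0

    init(urls: [URL]) {
        pending = urls
    }

    func start() {
        queue.async { self.fill() }
    }

    private func fill() {
        // Always called on `queue`, so the counters are accessed serially.
        while inFlight < maxConcurrent && !pending.isEmpty {
            let url = pending.removeFirst()
            inFlight += 1
            session.dataTask(with: url) { _, _, _ in
                self.queue.async {
                    self.inFlight -= 1
                    self.fill()   // start the next task only after one finishes
                }
            }.resume()
        }
    }
}

// Usage: ThrottledLoader(urls: imageURLs).start()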

What could cause a socket to close quickly after every HTTP transaction?

I have a Delphi 6 application that talks to an external device that acts as an HTTP server. I am using the ICS TWSocket components for this application. I open up a socket to talk to the device and handle the necessary header and body crafting to talk to the server. In other words, I am not using the ICS HTTP client component but using the lower level TWSocket component and handling the necessary HTTP "handshaking" myself.
The headers I craft and send to the external device have the keep-alive flag set to TRUE. On my system, after I send anything to the external device, the connection will stay open continuously and will not close until approximately 30 seconds of inactivity occurs (30 seconds where I don't make any requests of the external device as an HTTP server). I don't know if the external device closes it or if Microsoft Windows does it. But the important point is that normally I can do multiple sends and the connection will stay open until I send nothing for about 30 seconds. This works fine and is what my code expects.
However, on some of my users' systems the socket is closing after every send. I do have code that checks for a closed socket and attempts a reconnect to the external device if necessary, but it does not expect to have to reconnect with each transaction.
My questions are:
Is there a system setting for sockets that might be causing this anomalous behavior on some users' systems?
If so, are there Windows API calls I can use to query the offending parameter and then set it so that the connection closes after 30 seconds of inactivity, as expected, instead of after each transaction?
If so, can I do that (and how) in a manner that will not adversely affect any other programs running on the user's system?
The server is closing the socket. There are three possible reasons for this:
The client made an HTTP/1.0 request
The client set a Connection: close header in the request
The server does not support persistent connections
HTTP/1.0 did not support persistent connections, and the server would be correct in closing the socket after an HTTP/1.0 request.
HTTP/1.1 specifies that a connection is implicitly persistent, unless the client specifies a Connection: close header. The server would be correct in closing the connection if it receives this header. If the server does not support persistent connections, it would also be correct in closing the connection.
If you are using HTTP/1.1, you can force the connection to be persistent (as long as the server supports it) by sending a Connection: keep-alive header. You should then also send a Keep-Alive: timeout=<secs>, max=<max-requests> header, where <secs> and <max-requests> are integers representing the desired behaviour.
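For illustration, a hand-crafted request that asks the device for a persistent connection might look like this (the path, host, and values are placeholders):

GET /status HTTP/1.1
Host: 192.168.0.10
Connection: keep-alive
Keep-Alive: timeout=30, max=100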
