NSURLConnection (iOS, Objective-C) seems to be throttling after repeated sequential use

This is an issue that's making me question my own sanity, but I'm posting the question in case it's something real rather than a problem of my own making.
I have an iOS app that uses the NSURLConnection class to send requests to a webserver. The object is instantiated and instructed to call back the delegate, which receives the corresponding didReceiveResponse / didReceiveData / didFinishLoading / didFailWithError notifications. It is effectively the same code that is posted on Apple's dev page for using the class. The requests are all short POST transmissions with JSON data; the responses are also JSON-formatted and come back from an Apache Tomcat Java servlet.
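For context, here is a minimal sketch of that delegate pattern; the class name, URL, and JSON body are placeholders for illustration, not the actual app code:

@interface PollClient : NSObject <NSURLConnectionDataDelegate>
@property (nonatomic, strong) NSMutableData *receivedData;
@end

@implementation PollClient

- (void)startPoll {
    // Hypothetical endpoint and payload, for illustration only
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:
        [NSURL URLWithString:@"http://testserver.local/poll"]];
    request.HTTPMethod = @"POST";
    [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
    request.HTTPBody = [@"{\"job\":1}" dataUsingEncoding:NSUTF8StringEncoding];
    (void)[[NSURLConnection alloc] initWithRequest:request delegate:self];
}

- (void)connection:(NSURLConnection *)c didReceiveResponse:(NSURLResponse *)r {
    self.receivedData = [NSMutableData data];   // reset for each response
}

- (void)connection:(NSURLConnection *)c didReceiveData:(NSData *)d {
    [self.receivedData appendData:d];           // normally arrives in ~10 KB chunks
}

- (void)connectionDidFinishLoading:(NSURLConnection *)c {
    // parse self.receivedData as JSON here
}

- (void)connection:(NSURLConnection *)c didFailWithError:(NSError *)e {
    NSLog(@"poll failed: %@", e);
}

@end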
For the most part it all works as advertised. The app sends a series of requests to the server in order to start a job and poll for partial results. Most of the exchanges are short, but the responses can reach roughly 100-200 KB when partial results are available.
The individual pieces of data get handed back by the operating system in chunks of about 10 KB each, give or take. The transport is essentially instantaneous, as it is talking to a test server on the LAN.
However: after a few dozen polling operations, the rate of transport grinds to a near standstill. The sequence of response/data.../finished still proceeds normally: the webserver has delivered its payload, but the iOS app receives it in chunks of exactly 2896 bytes, spaced 20-30 seconds apart. It is the correct data, and waiting about 5 minutes for 130 KB of data does confirm that everything is otherwise operating correctly.
Nothing I do seems to work around it. I tried switching to the "async" invocation method with a response block; same result. Talking to a remote website rather than my LAN test deployment: same result. Running in the simulator or on an iPhone: same result. The server returns Content-Length and doesn't try to do anything weird like keeping the connection alive.
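For reference, the block-based variant mentioned above is presumably +sendAsynchronousRequest:queue:completionHandler:, roughly as follows (request being the same kind of POST request shown earlier):

// Block-based invocation; the same throttling was observed with this form
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response,
                                           NSData *data, NSError *error) {
    // the whole payload (or an error) is delivered here in one shot
}];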
Changing the frequency of the polling achieves little; only when I crank the delay between polls up to 50 seconds does everything work fine, presumably because the app only ends up polling once or twice.
A hypothesis that fits this observation is that the NSURLConnection object hangs around long after it has been released and chews up resources. Once a certain limit is hit, the transfer rate grinds to a near halt. If the slowed-down connection actually completes, subsequent connections work normally again, presumably because the earlier ones have been cleaned up.
So does this sound familiar to anyone?

Related

HttpSendRequest blocks on SOAP call

I have trouble with a Delphi XE2 app. Sometimes a WinInet call to an ASMX service blocks and never returns; the user must terminate the process from Task Manager to close the app.
To connect to the ASMX service, the app uses code generated by the WSDLImp tool.
During its work, the app makes a lot of calls to the web service (~1000-2000). At some moment (last time it was the 782nd request; the first time it was near the end), the app freezes. After some digging and logging, I found out that the app blocks on
WinInetResult := HttpSendRequest(Request, nil, 0, DatStr.Bytes, DatStr.Size);
in the Soap.SOAPHTTPTrans unit.
My first guess was that it was a server-side problem: the server hangs while processing the request. But during the trials the server kept processing requests from other clients while the target one was blocked. And when you use Fiddler to debug the HTTP traffic from the app, everything works as expected, with no locks. Also, WinInet's SendTimeout, ReceiveTimeout, and ConnectTimeout have no effect; there are no timeout errors. One more point: the app blocks not on one specific method call, but on different ones.
After googling, I found out that HttpSendRequest can block when the maximum number of parallel connections is exceeded. But there is no parallel execution in the app; each action is performed in the main GUI thread.
My next try was to use Indy for the HTTP communication instead of WinInet. With Indy, the app does its work as it should, with no locks. The downside is performance degradation: the app's work takes twice as long with Indy.
This is not very good, so I want to go back to WinInet. But for that I need to find the reason for the blocking. Does anybody know why HttpSendRequest can block?
P.S.
It is strange that we get such performance degradation with Indy. Maybe there are some properties or parameters to tune to increase performance?
So, I have finally fixed this issue. After all the trials with no success, I re-implemented the SOAP calls using WinHTTP instead of WinInet.
With WinHTTP, everything works normally.

Streaming/Chunked HTTP and NSURLSession Hanging

I have a piece of code that I have been trying to port. The code works 100% fine on Windows using a WinHTTP implementation. On the iOS 7 simulator I am using NSURLSession, and regular HTTPS GET/POST seems to work fine.
Things start breaking down when I use "streaming" HTTP. In this case the content length is unknown, because the data streams in continuously.
I have a blocking synchronous call (below) that waits until the current request completes. When I use the first line, the synchronous loop exits after the delegate is hit. However, if I replace it with the commented-out second line, the synchronous loop hangs.
[m_pDelegate.session invalidateAndCancel];        // loop exits promptly
// [m_pDelegate.session finishTasksAndInvalidate]; // loop hangs for minutes
blockUntilOperationsComplete();
Eventually it does exit, and I do get my data callbacks. I believe the callbacks finally trigger minutes later because the small keep-alive messages (16 bytes long) eventually overflow a buffer and trigger a delegate call. Is there a way to reduce the buffering threshold?
After wasting two days on this, I'll leave this for the next soul that comes by. There is no way to reduce this buffer through the existing NSURL* classes. It turns out that the current implementation (on iOS 7, and it seems it has been like this forever) buffers chunk-encoded incoming data until 512 bytes of payload have gathered, and only then do the callbacks occur. The important part: this applies only if the Content-Type is "text/html". After that initial buffer, all subsequent traffic triggers callbacks in real time.
However, if the server changes the Content-Type header to "application/json", the response is not buffered, and your callbacks fire as soon as something is actually received.
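To illustrate the finding: the fix lives in the server's Content-Type header, not in any client-side setting. The sketch below (a method that would sit in an NSURLSessionDataDelegate implementation) is just the receiving side:

// With Content-Type: application/json from the server, this fires per chunk;
// with text/html, iOS first buffers ~512 bytes of chunk-encoded payload
- (void)URLSession:(NSURLSession *)session
          dataTask:(NSURLSessionDataTask *)dataTask
    didReceiveData:(NSData *)data {
    NSLog(@"received %lu bytes", (unsigned long)data.length);
}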

NSURLConnection and multiple asynchronous requests - is it messing with the data being transmitted?

I have an NSArray of links. I want to parse through them with an online article-extractor API (Clear Read), and with the result given back for each article (some HTML) I throw it into an NSString.
My problem arises from the fact that if my array has, say, 100 URLs in it, I loop through the array shooting each item at the API and getting back some results in JSON. This fires something like 100 NSURLConnection calls at once, asynchronously.
I wasn't sure if that would be a problem, but when I give it 100 URLs (real strings, none are nil), the data that comes back often has empty values for the JSON keys (when it shouldn't), or the data coming back is nil. There are also a bunch of duplicates.
Should I be handling multiple asynchronous connections better than I am now? If so, how?
A couple of thoughts:
If you're doing concurrent asynchronous requests and are using asynchronous NSURLConnection, then you'll want to define your own class for this download operation to make sure that every connection keeps track of its own properties. That way, everything can be encapsulated within this class where the resulting download objects can keep track of what's downloaded, what's been parsed, etc. If you're not using asynchronous NSURLConnection (e.g. you're just using dataWithContentsOfURL), it's even easier, though you lose some of the progress updates that NSURLConnection provides and/or streaming opportunities.
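A minimal sketch of such a wrapper class; the name ArticleDownload and its completion block are hypothetical, just one way to encapsulate per-connection state:

// One instance per request, so each connection's delegate callbacks
// mutate only that instance's state
@interface ArticleDownload : NSObject <NSURLConnectionDataDelegate>
@property (nonatomic, strong) NSURL *url;
@property (nonatomic, strong) NSMutableData *data;
@property (nonatomic, copy) void (^completion)(NSData *data, NSError *error);
@end

@implementation ArticleDownload

- (void)start {
    self.data = [NSMutableData data];
    NSURLRequest *request = [NSURLRequest requestWithURL:self.url];
    (void)[[NSURLConnection alloc] initWithRequest:request delegate:self];
}

- (void)connection:(NSURLConnection *)c didReceiveData:(NSData *)d {
    [self.data appendData:d];
}

- (void)connectionDidFinishLoading:(NSURLConnection *)c {
    if (self.completion) self.completion(self.data, nil);
}

- (void)connection:(NSURLConnection *)c didFailWithError:(NSError *)e {
    if (self.completion) self.completion(nil, e);
}

@end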
For best performance, you should issue concurrent requests. Having said that, you should not have more than four or five concurrent requests going to any particular server. This is an iOS-imposed constraint, and especially if you have a slow network connection, you otherwise risk having connections time out.
If you're doing preliminary testing on the simulator, you may want to make sure you try out the "network link conditioner". It's part of the "Hardware IO Tools for Xcode", available at the Downloads for Apple Developers. There are issues (such as the aforementioned timeout problems if you have too many concurrent requests going to a particular server) that only manifest themselves in slow connections.
Having said that, you also want to make sure to test your solution on a device with real-world network speeds. It's easy to run massively parallel tasks successfully on the simulator that are too greedy for the device. Limiting the number of concurrent sessions to five will diminish this resource problem, but it should be part of your testing strategy.
I agree with JRG-Developer that you should look into established frameworks, such as AFNetworking. Just make sure to set the maxConcurrentOperationCount for the AFHTTPClient's operation queue if queueing 100-plus operations.
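As a sketch, assuming AFNetworking 1.x (where AFHTTPClient exposes its operation queue) and a hypothetical base URL:

// Cap the client's queue so no more than 5 requests hit the server at once
AFHTTPClient *client = [AFHTTPClient clientWithBaseURL:
    [NSURL URLWithString:@"https://api.example.com"]];
client.operationQueue.maxConcurrentOperationCount = 5;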
I don't know how much data your 100 requests entail, but be forewarned that the app-approval process has been known to reject apps that make extraordinary network requests over cellular networks. What constitutes excessive cellular network activity is not explicitly stated in the App Review Guidelines, though Avoiding iPhone App Rejection From Apple claims that you should ensure you don't exceed 4.5 MB in 5 minutes. You can use Reachability to determine what type of network you are on and perhaps warn the user if they're on cellular (if the amount of data approaches this threshold).
Have you considered using a third-party framework, such as AFNetworking, and limiting the number of asynchronous calls happening at once? Perhaps this might help solve your problem.
In particular, you might consider creating a networking manager class that creates and manages AFHTTPClient(s), which in turn manages AFHTTPRequestOperations, for each endpoint (base URL) you hit.

How many simultaneous downloads make sense on iOS

I have an iOS app which synchronizes a certain number of assets at startup. I'm using AFNetworking and have set up an NSOperationQueue to handle all of the downloads. I was wondering how many simultaneous downloads make sense. Is there a limit beyond which network performance will drop if I have too many at the same time? At the moment I'm doing at most 5 downloads at a time.
This depends on several factors:
What is the network speed and latency?
What is the data size of the requests and responses?
How long does processing a request take on the server?
How long does processing a response take on the client?
How many parallel requests can the server fulfill efficiently?
How many users will make requests at the same time?
What is the minimal speed and memory size of the target device?
For small and medium sized applications, the limiting factor is usually the device's network latency, but that might not be the case in your situation. In the end, you'll have to test and figure out the most efficient compromise. 5 is a good number to start with.
You might want to set the number of concurrent downloads based on the available network connection (WLAN, 3G, or even slower...).
The beauty of using NSOperationQueues is that they are closely tied into the underlying OS (iOS or OSX). The queue decides how many operations to run based on many factors, including free memory, load on the system, etc.
You should not try to second guess the system and throttle yourself. Queue as many operations as you have and let the OS deal with it. I have an iPhone app that adds hundreds of operations in the queue when it has to fetch images of varying sizes etc. Works great, UI is not blocked, etc.
EDIT: well, it seems that when doing NSURLConnections and similar network connections, NSOperationQueue is NOT really keyed in to network usage. I asked on the Apple internal forums this summer, and in the end was told by Quinn "The Eskimo" (Apple network guru) to use a limit of something like 4. So this post is correct in the sense of pure processing power (NSOperationQueue will do the right thing), but when it comes to network operations you do need to set a limit.
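In practice, that advice amounts to something like the following sketch:

// Explicitly cap concurrent network operations instead of trusting the queue
NSOperationQueue *downloadQueue = [[NSOperationQueue alloc] init];
downloadQueue.maxConcurrentOperationCount = 4;  // the limit Quinn suggested
// ...then add one download operation per asset to downloadQueue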
It depends mostly on your hardware, I would say. The best way to address this is to test it across multiple cases with multiple trials. Try to diversify the hardware you test on as much as possible (and remember: do not use the simulator to test this!).
There actually is a constant the SDK provides that varies depending on various constraints. I would recommend you look into using it.
Regarding this question, I've done some tests on an iPad 2 running iOS 6.0. I created a little app that performs an HTTP GET request to a webserver. The webserver provides data for 60 seconds (this is to get a meaningful result; I will change it to 10 minutes later in my tests).
For one HTTP GET request it works very well. Then I tried to perform several HTTP requests at the same time and see how many, and how fast, I can download over the iPad's Wi-Fi connection.
I made two versions: one using NSOperation and one using NSThread with synchronous HTTP GET requests. In short, I always get a timeout for my 6th request (the TCP SYN never even reaches my HTTP server).
Extra info:
NSThread implementation:
Simply make a for loop and create a thread for each request. Each thread performs a synchronous HTTP request.
There I observe that my 6th request times out after 20 seconds. If I set the timeout to 80 seconds, I clearly see that my 6th request is launched only after my first HTTP request ends (after 60 seconds)...
NSOperation implementation:
Create a queue and set maxConcurrentOperationCount to 12. Add 12 HTTP-request operations to the queue. Here as well I notice that the 6th request fails with a -1001 error code (meaning: timeout), and I see no TCP SYN for the 6th request.

Should I convert my action method to an async action method?

I have a web site where users can upload a PDF and convert it to a Word doc.
It works nicely, but sometimes (5-6 times per hour) users have to wait longer than usual for the conversion to take place...
I use ASP.NET MVC and the flow is:
- user uploads file -> get the stream and convert it to Word -> save the Word file as a temp file -> return the URL to the user
I am not sure whether I have to convert this flow to asynchronous. Basically, my flow is sequential now, BUT I have about 3-5 requests per second, and the CPU is dual-core with 4 GB of RAM.
As far as I know, maxConcurrentRequestsPerCPU is 5000, and the default value of Threads Per Processor Limit is 25, so these default settings should be more than fine, right?
Then why does my web app still have waits sometimes? Are there any IIS settings I need to change from their defaults, or should I just go and make my synchronous conversion method asynchronous?
P.S.: The conversion itself takes between 1 second and 40-50 seconds, depending on the PDF file size.
UPDATE: Basically, what's not very clear to me is this: if a user uploads a file and the conversion is long, shouldn't only the current request suffer because of it? The next request is independent and is served on a different thread, so there should be no waiting, shouldn't there?
There are a couple of things that must be defined clearly here. An async(hronous) method and an asynchronous flow are not the same thing, at least as far as I understand.
An asynchronous method (using Task, usually also leveraging the async/await keywords) works in the following way:
The execution starts on thread t1 until it reaches an await
The (potentially) long operation does not take place on thread t1; sometimes it does not run on an app thread at all, leveraging IOCP (I/O completion ports)
Thread t1 is freed and released back to the thread pool, ready to service other requests if needed
When the (potentially) long operation completes, a thread is taken from the thread pool (it could even be the same t1 or, more probably, another one) and execution resumes from the last await encountered
The rest of the code executes
There's a couple of things to note here:
a. The client is blocked during the whole process; the switching of threads and so on happens only on the server
b. This approach is mainly designed to alleviate an unwanted condition called thread starvation. It is not meant to reduce the total time a client waits, and it usually doesn't speed up the process.
As far as I understand an asynchronous flow would mean, at least in this case, that after the user's request of converting the document, the client (i.e. the client's browser) would quickly receive a response in which (s)he is informed that this potentially long process has started on the server, the user should be patient and this current response page might provide progress feedback.
In your case I recommend the second approach, because the first one would not help at all.
Of course, this will not be easy. You need to emulate a queue, and you need a processing agent and an eviction policy (most probably enforced by the same agent, if you don't want a second one).
This would work along the following lines:
a. The end user submits a file, and the web server receives it
b. The web server places it in the queue and receives a job number
c. The web server returns the user a response with the job number (say, an HTML page with a polling mechanism that periodically requests progress from the server)
d. The agent starts processing the document when it gets the chance (i.e., when it finishes other work) and updates the job's status in a common place where the web server can pick it up
e. The web server receives the calls from the HTML response asking for the status of the job, finds out that the job is complete, and offers a download link or starts the download directly
This can be refined in some ways:
instead of the client polling the server, WebSockets or long polling could be used (SignalR, for example, covers both)
many processing agents could be used instead of one if the hardware configuration makes sense
The queue can be implemented with a simple RDBMS; Remus Rușanu has a nice article about this.
