I noticed that the framerate while scrolling a collection view on an iPhone 4 dropped significantly (at times to 5 FPS) when a download using NSURLConnection was taking place in the background. I first suspected AFNetworking to be the culprit, but it turns out that the same thing happens when I simply use a block:
- (void)startBlockDownload:(id)sender
{
    NSLog(@"starting block download");

    dispatch_queue_t defQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    void (^downloadBlock)(void);
    downloadBlock = ^{
        NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:_urlString]];
        NSURLResponse *response = nil;
        NSError *error = nil;
        NSData *result = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
        NSLog(@"block request done");
    };

    dispatch_async(defQueue, downloadBlock);
}
What gives? Is a background download so demanding that it renders the UI extremely sluggish? Is it the slow flash memory? Is there anything that can be done to keep the UI very responsive while doing a background download?
I've created a sample project to demonstrate the issue: https://github.com/jfahrenkrug/AFNetworkingPerformanceTest
Also see this issue on AFNetworking that I have started about the topic: https://github.com/AFNetworking/AFNetworking/issues/1030#issuecomment-18563005
Any help is appreciated!
As the comments you linked to say: the iPhone 4 is still a single-core machine, so no matter how much "multitasking" you do, it will still only be executing one set of code at a time. What's worse, sendSynchronousRequest and the UI code are both blocking sets of code, so each will take up the maximum amount of time allotted to it by the OS, and then the OS will perform an expensive context switch to prepare to execute the other.
You can try two things:
1) Use the async API of NSURLConnection instead of dispatching off a synchronous request (though I have the feeling you already tried this with AFNetworking).
2) Lower the priority of the queue you dispatch to (preferably to DISPATCH_QUEUE_PRIORITY_BACKGROUND if possible). This might give the main queue more cycles to execute on, but it may not, since the entire operation is still one big chunk. Both options are sketched below.
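Roughly, the two options could look like this (a minimal sketch only, reusing _urlString from the question; error handling omitted):

NSURLRequest *request = [NSURLRequest requestWithURL:[NSURL URLWithString:_urlString]];

// 1) One way to use the asynchronous API: the connection runs off the main
//    thread and delivers the result to the queue you pass in.
[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // handle data or error
}];

// 2) Or keep the synchronous call, but on the lowest-priority global queue.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    NSURLResponse *response = nil;
    NSError *error = nil;
    NSData *result = [NSURLConnection sendSynchronousRequest:request
                                           returningResponse:&response
                                                       error:&error];
    // handle result
});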
If those both fail to work, then it probably simply is too much for a single-core processor to handle (scrolling is not a light operation, and downloading requires constant running time to receive data). That's my best guess anyway...
Related
I'm currently working on an iOS project that utilises the AWS SDK to download large media files to the device. I am using CloudFront to distribute the content and the downloads are working well, however I am having problems implementing a network queue for these operations. No matter what I try, all the files want to download at once.
I am using the AWSContent downloadWithDownloadType: method to initiate and monitor progress on the actual downloads.
I have tried using an NSOperationQueue and setting setMaxConcurrentOperationCount, and all the code blocks execute at once. :(
I have a feeling it might be configurable with AWSServiceConfiguration in the AppDelegate, but the documentation is extremely vague on what variables you can pass into that object... http://docs.aws.amazon.com/AWSiOSSDK/latest/Classes/AWSServiceConfiguration.html
Has anyone had any experience with this?
TIA
Your problem is most likely that you misunderstand how asynchronous operations work.
I have tried using an NSOperationQueue and setting setMaxConcurrentOperationCount, and all the code blocks execute at once. :(
It's difficult to say what's definitely wrong without seeing the actual code, but most likely it comes down to the following steps:
You create an NSOperationQueue
You set maxConcurrentOperationCount to 2, for example
You add 4 blocks to it with AWSContent downloadWithDownloadType:
You expect no more than 2 downloads to run simultaneously
What you are probably doing wrong
The key is in step 3. What exactly does the block do? My guess is that it returns before the actual download completes. So if you have something like:
NSOperationQueue *queue = [NSOperationQueue new];
queue.maxConcurrentOperationCount = 2;

for (AWSContent *content in contentArray) { // Assume you already have this array
    [queue addOperationWithBlock:^{
        [content downloadWithDownloadType:AWSContentDownloadTypeIfNotCached
                          pinOnCompletion:YES
                            progressBlock:nil
                        completionHandler:^(AWSContent *content, NSData *data, NSError *error) {
            // do some stuff here on completion
        }];
    }];
}
then your block exits before the download is finished, allowing the next blocks to run on the queue and start further downloads.
What to try
You simply need to add a synchronization mechanism to your block so the operation finishes only when the completion handler has fired. For example:
NSOperationQueue *queue = [NSOperationQueue new];
queue.maxConcurrentOperationCount = 2;

for (AWSContent *content in contentArray) { // Assume you already have this array
    [queue addOperationWithBlock:^{
        dispatch_semaphore_t dsema = dispatch_semaphore_create(0);
        [content downloadWithDownloadType:AWSContentDownloadTypeIfNotCached
                          pinOnCompletion:YES
                            progressBlock:nil
                        completionHandler:^(AWSContent *content, NSData *data, NSError *error) {
            // do some stuff here on completion
            // ...
            // Important: signal in both the error and success cases,
            // otherwise the block never leaves the queue.
            dispatch_semaphore_signal(dsema);
        }];
        // or another dispatch_time if you want your own timeout instead of AWS's
        dispatch_semaphore_wait(dsema, DISPATCH_TIME_FOREVER);
    }];
}
Effectively your answer is https://stackoverflow.com/a/4326754/2392973
You just schedule as many of these blocks as you need onto your operation queue.
More reading
https://developer.apple.com/reference/dispatch
Is it possible to run multiple background threads to improve performance on iOS? Currently I am using the following code to send, let's say, 50 network requests on a background thread:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void){
    // send 50 network requests
});
EDIT:
After updating my code to something like this, no performance gain was achieved :( (taken from here):
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
dispatch_group_t fetchGroup = dispatch_group_create();

// This will allow up to 8 parallel downloads.
dispatch_semaphore_t downloadSema = dispatch_semaphore_create(8);

// We start ALL our downloads in parallel throttled by the above semaphore.
for (NSURL *url in urlsArray) {
    dispatch_group_async(fetchGroup, fetchQ, ^(void) {
        dispatch_semaphore_wait(downloadSema, DISPATCH_TIME_FOREVER);

        NSMutableURLRequest *headRequest = [NSMutableURLRequest requestWithURL:url cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:60.0];
        [headRequest setHTTPMethod:@"GET"];
        [headRequest addValue:cookieString forHTTPHeaderField:@"Cookie"];

        NSOperationQueue *queue = [[[NSOperationQueue alloc] init] autorelease];
        [NSURLConnection sendAsynchronousRequest:headRequest
                                           queue:queue // created at class init
                               completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
            // do something with data or handle error
            NSLog(@"request completed");
        }];

        dispatch_semaphore_signal(downloadSema);
    });
}

// Now we wait until ALL our dispatch_group_async are finished.
dispatch_group_wait(fetchGroup, DISPATCH_TIME_FOREVER);

// Update your UI
dispatch_sync(dispatch_get_main_queue(), ^{
    //[self updateUIFunction];
});

// Release resources
dispatch_release(fetchGroup);
dispatch_release(downloadSema);
dispatch_release(fetchQ);
Be careful not to confuse threads with queues
A single concurrent queue can operate across multiple threads, and GCD never guarantees which thread your tasks will run on.
The code you currently have will submit 50 network tasks to be run on a background concurrent queue, this much is true.
However, all 50 of these tasks will be executed on the same thread.
GCD basically acts like a giant thread pool, so your block (containing your 50 tasks) will be submitted to the next available thread in the pool. Therefore, if the tasks are synchronous, they will be executed serially. This means that each task will have to wait for the previous one to finish before proceeding. If they are asynchronous tasks, then they will all be dispatched immediately (which begs the question of why you need to use GCD in the first place).
If you want multiple synchronous tasks to run at the same time, then you need a separate dispatch_async for each of your tasks. This way you have a block per task, and therefore they will be dispatched to multiple threads from the thread pool and therefore can run concurrently.
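As a minimal sketch of that idea (the per-task method doNetworkTaskSynchronously: is hypothetical, standing in for whatever your synchronous task actually is):

dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// One dispatch_async per task, so the blocks can be picked up by
// different threads from GCD's pool and run concurrently.
for (int i = 0; i < 50; i++) {
    dispatch_async(concurrentQueue, ^{
        [self doNetworkTaskSynchronously:i]; // hypothetical synchronous task
    });
}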
Although you should be careful that you don't submit too many network tasks to operate at the same time (you don't say specifically what they're doing) as it could potentially overload a server, as gnasher says.
You can easily limit the number of concurrent tasks (whether they're synchronous or asynchronous) operating at the same time using a GCD semaphore. For example, this code will limit the number of concurrent operations to 6:
long numberOfConcurrentTasks = 6;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(numberOfConcurrentTasks);

for (int i = 0; i < 50; i++) {
    dispatch_async(concurrentQueue, ^{
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        [self doNetworkTaskWithCompletion:^{
            dispatch_semaphore_signal(semaphore);
            NSLog(@"network task %i done", i);
        }];
    });
}
Edit
The problem with your code is the line:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
When NULL is passed to the attr parameter, GCD creates a serial queue (it's also a lot more readable if you actually specify the queue type here). You want a concurrent queue. Therefore you want:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", DISPATCH_QUEUE_CONCURRENT);
You also need to signal your semaphore from within the completion handler of the request instead of at the end of the dispatched block. Because the request is asynchronous, your current code signals the semaphore as soon as the request has been sent off, which immediately lets another network task start. You want to wait for the network task to return before signalling.
[NSURLConnection sendAsynchronousRequest:headRequest
                                   queue:queue // created at class init
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // do something with data or handle error
    NSLog(@"request completed");
    dispatch_semaphore_signal(downloadSema);
}];
Edit 2
I just noticed you are updating your UI using a dispatch_sync. I see no reason for it to be synchronous, as it'll just block the background thread until the main thread has updated the UI. I would use a dispatch_async to do this.
Edit 3
As CouchDeveloper points out, it is possible that the number of concurrent network requests might be being capped by the system.
The easiest solution appears to be migrating over to NSURLSession and configuring the maxConcurrentOperationCount property of the NSOperationQueue used. That way you can ditch the semaphores altogether and just dispatch all your network requests on a background queue, using a callback to update the UI on the main thread.
I am not at all familiar with NSURLSession though; I was only answering this from a GCD standpoint.
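For what it's worth, a rough NSURLSession sketch of that idea might look like the following (untested, and only an assumption about how you would wire it up; note that the per-host connection limit can also be capped directly via HTTPMaximumConnectionsPerHost on the session configuration, which may be a more direct knob than the delegate queue's maxConcurrentOperationCount):

// urlsArray is assumed to exist, as in the question's code.
NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
config.HTTPMaximumConnectionsPerHost = 6; // cap simultaneous connections per host
NSURLSession *session = [NSURLSession sessionWithConfiguration:config];

for (NSURL *url in urlsArray) {
    NSURLSessionDataTask *task =
        [session dataTaskWithURL:url
               completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            // do something with data or handle error
            dispatch_async(dispatch_get_main_queue(), ^{
                // update the UI on the main thread
            });
        }];
    [task resume];
}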
You can send multiple requests, but sending 50 requests in parallel is usually not a good idea. There is a good chance that a server confronted with 50 simultaneous request will handle the first few and return errors for the rest. It depends on the server, but using a semaphore you can easily limit the number of running requests to anything you like, say four or eight. You need to experiment with the server in question to find out what works reliably on that server and gives you the highest performance.
And there seems to be a bit of confusion here: usually all your network requests will run asynchronously. That is, you send the request to the OS (which is usually very quick), then nothing happens for a while, then a callback method of yours is called to process the data. Whether you send the requests from the main thread or from a background thread doesn't make much difference.
Processing the results of these requests can be time consuming. You can process the results on a background thread. You can process the results of all requests on the same serial queue, which makes it a lot easier to avoid multithreading problems. That's what I do because it's easy and even in the worst case uses one processor for intensive processing of the results, while the other processor can do UI etc.
If you use synchronous network requests (which is a bad idea), then you need to dispatch each one by itself on a background thread. If you run a loop running 50 synchronous network requests on a background thread, then the second request will wait until the first one is completely finished.
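As a minimal sketch of the pattern described above (asynchronous request, all processing funneled through one serial queue, UI update back on the main queue; the queue label and request setup are placeholders):

dispatch_queue_t processingQueue =
    dispatch_queue_create("com.example.response-processing", DISPATCH_QUEUE_SERIAL);

[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    dispatch_async(processingQueue, ^{
        // parse / process `data` here, one response at a time
        dispatch_async(dispatch_get_main_queue(), ^{
            // update the UI with the processed result
        });
    });
}];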
I have been using the following structure in my projects for consuming API data:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(queue, ^{
    // SYNCHRONOUS network request
    // Data processing
    dispatch_async(dispatch_get_main_queue(), ^{
        // UI update
    });
});
On the other hand, I have seen quite frequently another structure where the network request is asynchronous (i.e. using AFNetworking) and then the data processing and UI update are handled in the completion block (which is not async - I think).
Here is an example of what I am saying:
NSURLRequest *request = [NSURLRequest requestWithURL:url];
AFHTTPRequestOperation *operation = [[AFHTTPRequestOperation alloc] initWithRequest:request];
operation.responseSerializer = [AFJSONResponseSerializer serializer];

[operation setCompletionBlockWithSuccess:^(AFHTTPRequestOperation *operation, id responseObject) {
    // Data processing
    // UI update
} failure:^(AFHTTPRequestOperation *operation, NSError *error) {
    // Error handling
}];

[operation start];
So, my questions are:
In the second structure, the data processing is not being run asynchronously, is it?
Why are we generally encouraged to run async network requests instead of sync ones wrapped in async blocks/threads?
Why is the second structure much more widely known and used than the first?
Is there something that I am missing?
In my opinion, async calls are a much more elegant solution for networking in any language.
Yes, the data processing is done on the main thread in that example. Parsing strings and creating objects usually isn't a CPU-cycle-consuming job. However, let's say you download an image and want to process it: you shouldn't do that in this block (spawn another thread or queue instead). But image processing is a different concern; it isn't related to the network code, so the network code is working as intended.
With async network calls you can cancel or pause the request; with sync calls you cannot.
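For example, with the AFHTTPRequestOperation from the snippet above (a tiny illustration only):

// AFHTTPRequestOperation is an NSOperation subclass, so an
// in-flight request can simply be cancelled mid-flight.
[operation cancel];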
Also, GCD is fairly new to Objective-C (I am not completely sure, but I believe it became available with iOS 4; correct me if I am wrong). Before that we were using delegation, which meant a lot of boilerplate code. With the second approach you can manage the networking code much more easily.
I hope this helps.
In the second structure, the data processing is not being run asynchronously, is it?
No, it's not. It runs on the main thread. But the time-consuming task of fetching data from the remote server has already been done on another thread. Once the data is retrieved, processing it generally will not take much time and can be run on the main thread. In rare cases (I have not come across any) where the data processing itself takes long enough to block the UI, you can use NSOperation or GCD queues to process it.
This is achieved by making use of blocks, which work just like callback functions.
Why is the second structure much more widely known and used than the first?
It's easier to implement and also to understand once you know the syntax for blocks ;)
I am developing a static library that needs to do some stuff in the background, without interacting with the main thread. To give you an idea, think of just logging some user events. The library must keep doing this stuff until the user exits the app or sends it to the background (pushes the home button) - in other words it needs to keep doing stuff inside a loop.
The only interaction between the main app thread and the spawned thread is that occasionally the main app thread will put some stuff (an event object) into a queue that the spawned thread can read/consume. Other than that, the spawned thread just keeps going until the app exits or is backgrounded.
Part of what the spawned thread needs to do (though not all of it) involves sending data to an HTTP server. I would have thought that it would be easy to subclass NSThread, override its main method, and just make a synchronous call to NSURLConnection with some sort of timeout on that connection so the thread doesn't hang forever. For example, in Java/Android, we just subclass Thread, override the run() method and call a synchronous HTTP GET method (say from Apache's HttpClient class). This is very easy and works fine. But from what I have seen here and elsewhere, apparently on iOS it is much more complicated than this, and I'm more than a bit confused as to what the best approach is that actually works.
So should I subclass NSThread and somehow use NSURLConnection? It seems the asynchronous NSURLConnection does not work inside NSThread because delegate methods don't get called, but what about the synchronous method? Do I somehow need to use and configure the run loop and set up an autorelease pool? Or should I use an NSOperation? It seems to me that what I am trying to do is pretty common - does anyone have a working example of how to do this properly?
As I understand it, to use NSURLConnection asynchronously you need a run loop. Even if you use an NSOperation you still need a run loop.
All the examples I have seen use the main thread, which has a run loop, to start the NSURLConnection. The examples using NSOperation are set up so the operation is concurrent, which tells NSOperationQueue not to provide its own thread; they then make sure that NSURLConnection is started on the main thread, for example via a call to performSelectorOnMainThread:.
Here is an example:
Pulse Engineering Blog: Concurrent Downloads using NSOperationQueues
You can also search the Apple documentation for QRunLoopOperation in the LinkedImageFetcher sample which is an example class showing some ins and outs of this kind of thing.
(Although I'm not sure I actually saw any code in that example showing how to run your own run loop; again, the example relies on the main thread.)
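For reference, here is a rough sketch of my own (not taken from those samples) of keeping an asynchronous NSURLConnection alive on a background thread by pumping that thread's run loop; the connectionFinished flag and the URL are hypothetical, and your NSURLConnection delegate callbacks would be responsible for setting the flag:

- (void)startConnectionOnCurrentThread
{
    NSURLRequest *request =
        [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/"]]; // placeholder URL
    NSURLConnection *connection = [[NSURLConnection alloc] initWithRequest:request
                                                                  delegate:self
                                                          startImmediately:NO];
    [connection scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [connection start];

    // Keep this thread's run loop spinning so the delegate callbacks get delivered here.
    while (!self.connectionFinished &&
           [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode
                                    beforeDate:[NSDate distantFuture]]) {
        // connectionDidFinishLoading: / connection:didFailWithError: should
        // set connectionFinished = YES to break out of this loop
    }
}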
I've used Grand Central Dispatch (GCD) to achieve this. Here is an example that worked for me in a simple test app (I'm not sure if it applies in a static library, but it may be worth a look). I'm using ARC.
In the example, I am kicking off some background work from my viewDidLoad method, but you can kick it off from anywhere. The key is that "dispatch_async(dispatch_get_global_queue…" runs the block in a background thread. See this answer for a good explanation of that method: https://stackoverflow.com/a/12693409/215821
Here is my viewDidLoad:
- (void)viewDidLoad
{
    [super viewDidLoad];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ^(void) {
                       [self doStuffInBackground];
                   });
}
The doStuffInBackground method is running in the background at this point, so you can just use NSURLConnection synchronously. In my example here, the method loops making network calls until presumably some other code sets backgroundStuffShouldRun = false. A network call is made with a 10 second timeout. After the call, I'm updating a UI label just to show progress. Note that the UI update is performed with "dispatch_async(dispatch_get_main_queue()…". This runs the UI update on the UI thread, as required.
One potential issue with this background work: there isn't a way to cancel the http request itself. But, with a 10 second timeout, you'd be waiting a max of 10 seconds for the thread to abort itself after an outsider (likely some event in your UI) sets backgroundStuffShouldRun = false.
- (void)doStuffInBackground
{
    while (backgroundStuffShouldRun) {
        // prepare for network call...
        NSURL *url = [[NSURL alloc] initWithString:@"http://maps.google.com/maps/geo"];

        // set a 10 second timeout on the request
        NSURLRequest *request = [[NSURLRequest alloc] initWithURL:url
                                                      cachePolicy:NSURLRequestUseProtocolCachePolicy
                                                  timeoutInterval:10];

        NSError *error = nil;
        NSURLResponse *response = nil;

        // make the request
        NSData *data = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];

        // were we asked to stop the background processing?
        if (!backgroundStuffShouldRun) {
            return;
        }

        // process response...
        NSString *status = @"Success";
        if (error) {
            if (error.code == NSURLErrorTimedOut) {
                // handle timeout...
                status = @"Timed out";
            }
            else {
                // handle other errors...
                status = @"Other error";
            }
        }
        else {
            // success, handle the response body
            NSString *dataAsString = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];
            NSLog(@"%@", dataAsString);
        }

        // update the UI with our status
        dispatch_async(dispatch_get_main_queue(), ^{
            [statusLabel setText:[NSString stringWithFormat:@"completed network call %d, status = %@", callCount, status]];
        });

        callCount++;
        sleep(1); // 1 second breather. not necessary, but good idea for testing
    }
}
I know the difference between how each works, but I want to know about it from a performance point of view (resources on the iPhone).
Let's say I send an async request and wait for the delegate to be called. This won't block my executing thread. But what is the difference between doing this and just sending a synchronous request on another thread with GCD?
Like this:
dispatch_queue_t findPicsQueue;
findPicsQueue = dispatch_queue_create("FindPicsQueue", NULL);

dispatch_async(findPicsQueue, ^{
    NSData *theResponse = [NSURLConnection sendSynchronousRequest:theRequest
                                                returningResponse:&response
                                                            error:&error];
    NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)response;

    if (error) {
        NSLog(@"Error: %@", error);
    }

    if (httpResponse.statusCode == 200) {
        [self parseXMLFile:theResponse]; // Parses Data and modifies picturesFound
        for (PictureData *tmp in picturesFound) {
            NSLog(@"%@", tmp);
        }
    }
});
It won't lock my interface since it's not being executed on the main thread, but it will block that specific thread. And I also think GCD runs queues concurrently.
Thanks in advance. I really want to clarify this question.
If you use NSURLConnection with sendAsynchronousRequest: and pass the main queue as the completion queue, then almost all processing takes place on the main thread; in particular, the XML parsing will be done on the main thread. Your code example, however, uses a different thread for the processing.
This difference is relevant if you have an iPhone or iPad processor with two cores. Then the XML parsing can run in parallel with some UI activity on the main thread (in your example). So it can be completed earlier compared to running everything on the main thread (sendAsynchronousRequest approach).
For older devices with just one core, only one thread will run at a time and the two approaches should behave almost identically.
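For comparison, here is a minimal sketch of the sendAsynchronousRequest: variant discussed above, with the main queue as the completion queue (so the parsing call from the question's code would run on the main thread):

[NSURLConnection sendAsynchronousRequest:theRequest
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    // This block, including the XML parsing, runs on the main thread.
    NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)response;
    if (!error && httpResponse.statusCode == 200) {
        [self parseXMLFile:data]; // same parsing call as in the question's code
    }
}];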