Running multiple background threads on iOS

Is it possible to run multiple background threads to improve performance on iOS? Currently I am sending, let's say, 50 network requests on a background thread like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void) {
    // send 50 network requests
});
EDIT:
After updating my code to the following (taken from here), no performance gain was achieved :(
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
dispatch_group_t fetchGroup = dispatch_group_create();

// This will allow up to 8 parallel downloads.
dispatch_semaphore_t downloadSema = dispatch_semaphore_create(8);

// We start ALL our downloads in parallel throttled by the above semaphore.
for (NSURL *url in urlsArray) {
    dispatch_group_async(fetchGroup, fetchQ, ^(void) {
        dispatch_semaphore_wait(downloadSema, DISPATCH_TIME_FOREVER);
        NSMutableURLRequest *headRequest = [NSMutableURLRequest requestWithURL:url cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:60.0];
        [headRequest setHTTPMethod:@"GET"];
        [headRequest addValue:cookieString forHTTPHeaderField:@"Cookie"];
        NSOperationQueue *queue = [[[NSOperationQueue alloc] init] autorelease];
        [NSURLConnection sendAsynchronousRequest:headRequest
                                           queue:queue // created at class init
                               completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                                   // do something with data or handle error
                                   NSLog(@"request completed");
                               }];
        dispatch_semaphore_signal(downloadSema);
    });
}

// Now we wait until ALL our dispatch_group_async are finished.
dispatch_group_wait(fetchGroup, DISPATCH_TIME_FOREVER);

// Update your UI
dispatch_sync(dispatch_get_main_queue(), ^{
    //[self updateUIFunction];
});

// Release resources
dispatch_release(fetchGroup);
dispatch_release(downloadSema);
dispatch_release(fetchQ);

Be careful not to confuse threads with queues
A single concurrent queue can operate across multiple threads, and GCD never guarantees which thread your tasks will run on.
The code you currently have will submit 50 network tasks to be run on a background concurrent queue; this much is true.
However, all 50 of these tasks will be executed on the same thread.
GCD basically acts like a giant thread pool, so your block (containing your 50 tasks) will be submitted to the next available thread in the pool. Therefore, if the tasks are synchronous, they will be executed serially. This means that each task will have to wait for the previous one to finish before proceeding. If they are asynchronous tasks, then they will all be dispatched immediately (which raises the question of why you need to use GCD in the first place).
If you want multiple synchronous tasks to run at the same time, then you need a separate dispatch_async for each of your tasks. This way you have a block per task, and therefore they will be dispatched to multiple threads from the thread pool and therefore can run concurrently.
That said, be careful not to submit too many network tasks to operate at the same time (you don't say specifically what they're doing), as that could potentially overload a server, as gnasher says.
You can easily limit the number of concurrent tasks (whether they're synchronous or asynchronous) operating at the same time using a GCD semaphore. For example, this code will limit the number of concurrent operations to 6:
long numberOfConcurrentTasks = 6;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(numberOfConcurrentTasks);

for (int i = 0; i < 50; i++) {
    dispatch_async(concurrentQueue, ^{
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        [self doNetworkTaskWithCompletion:^{
            dispatch_semaphore_signal(semaphore);
            NSLog(@"network task %i done", i);
        }];
    });
}
Edit
The problem with your code is the line:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
When NULL is passed to the attr parameter, GCD creates a serial queue (it's also a lot more readable if you actually specify the queue type here). You want a concurrent queue. Therefore you want:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", DISPATCH_QUEUE_CONCURRENT);
You also need to be signalling your semaphore from within the completion handler of the request, instead of at the end of the dispatched block. Because the request is asynchronous, the semaphore currently gets signalled as soon as the request is sent off, which immediately allows another network task to start. You want to wait for the network task to return before signalling.
[NSURLConnection sendAsynchronousRequest:headRequest
                                   queue:queue // created at class init
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                           // do something with data or handle error
                           NSLog(@"request completed");
                           dispatch_semaphore_signal(downloadSema);
                       }];
Edit 2
I just noticed you are updating your UI using a dispatch_sync. I see no reason for it to be synchronous, as it'll just block the background thread until the main thread has updated the UI. I would use a dispatch_async to do this.
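A minimal sketch of that change, reusing the fetchGroup from your snippet (updateUIFunction is the commented-out method from your code):
// Wait for the group, then hop to the main queue asynchronously so the
// background queue isn't blocked while the UI updates.
dispatch_group_wait(fetchGroup, DISPATCH_TIME_FOREVER);
dispatch_async(dispatch_get_main_queue(), ^{
    [self updateUIFunction];
});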
Edit 3
As CouchDeveloper points out, it is possible that the number of concurrent network requests might be being capped by the system.
The easiest solution appears to be migrating over to NSURLSession and configuring the maxConcurrentOperationCount property of the NSOperationQueue used. That way you can ditch the semaphores altogether and just dispatch all your network requests on a background queue, using a callback to update the UI on the main thread.
I am not at all familiar with NSURLSession though, I was only answering this from a GCD stand-point.
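For what it's worth, here is a rough sketch of that NSURLSession approach, assuming the urlsArray from your code and an arbitrary limit of 6 (not a definitive implementation):
// Throttle the callbacks by giving the session's delegate queue a limited
// maxConcurrentOperationCount, then fire off all the requests.
NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
config.HTTPMaximumConnectionsPerHost = 6; // also caps parallel connections per host

NSOperationQueue *callbackQueue = [[NSOperationQueue alloc] init];
callbackQueue.maxConcurrentOperationCount = 6;

NSURLSession *session = [NSURLSession sessionWithConfiguration:config
                                                       delegate:nil
                                                  delegateQueue:callbackQueue];

for (NSURL *url in urlsArray) {
    [[session dataTaskWithURL:url
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        // handle data or error on the callback queue...
        dispatch_async(dispatch_get_main_queue(), ^{
            // ...and update the UI on the main thread
        });
    }] resume];
}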

You can send multiple requests, but sending 50 requests in parallel is usually not a good idea. There is a good chance that a server confronted with 50 simultaneous requests will handle the first few and return errors for the rest. It depends on the server, but using a semaphore you can easily limit the number of running requests to anything you like, say four or eight. You need to experiment with the server in question to find out what works reliably on that server and gives you the highest performance.
There also seems to be a bit of confusion here: usually, all your network requests run asynchronously. That is, you send the request to the OS (which usually goes very quickly), then nothing happens for a while, then a callback method of yours is called to process the data. Whether you send the requests from the main thread or from a background thread doesn't make much difference.
Processing the results of these requests can be time consuming. You can process the results on a background thread. You can process the results of all requests on the same serial queue, which makes it a lot easier to avoid multithreading problems. That's what I do because it's easy and even in the worst case uses one processor for intensive processing of the results, while the other processor can do UI etc.
If you use synchronous network requests (which is a bad idea), then you need to dispatch each one by itself on a background thread. If you run a loop running 50 synchronous network requests on a background thread, then the second request will wait until the first one is completely finished.

Related

GCD in iOS always creating new threads

I have a lot of code like this:
dispatch_async(dispatch_get_global_queue(0, 0), ^{});
dispatch_async creates a new thread each time I call it.
After the app runs for a while, I end up with many threads: 50, 60, 70. That's not good.
How can I reuse those threads, like tableView dequeueReusableCellWithIdentifier:?
This is my code. It needs to do some image stitching after the download, then save the result.
- (void)sdImageWith:(NSString *)urlString saveIn:(NSString *)savePath completion:(completionSuccess)successCompletion failure:(completionFalse)failureCompletion {
    [[SDWebImageDownloader sharedDownloader] downloadImageWithURL:[NSURL URLWithString:urlString] options:SDWebImageDownloaderUseNSURLCache progress:nil completed:^(UIImage * _Nullable image, NSData * _Nullable data, NSError * _Nullable error, BOOL finished) {
        if (data.length <= 100 || error != nil) { failureCompletion(error); return; }
        dispatch_async(imageStitch, ^{
            NSLog(@"thread: %@", [NSThread currentThread]);
            [[DLStitchingWarper shareSingleton] StitchingImage:data savePath:savePath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:savePath]) {
                successCompletion(savePath);
            } else {
                NSError *error = [[NSError alloc] initWithDomain:@"x" code:404 userInfo:nil];
                failureCompletion(error);
            }
        });
    }];
}
Dispatching to a global queue obtained via dispatch_get_global_queue doesn't necessarily create new threads. GCD will pull threads from a limited pool of "worker" threads that it manages for you. When it's done running your dispatched task, it will return that thread to the pool of worker threads.
When you dispatch something to a GCD queue, it will grab an available worker thread from this pool. You have no assurances as to which one it uses from one invocation to the next. But you simply don't need to worry about whether it's a different thread, as GCD is managing this pool of threads to ensure that threads are not created and destroyed unnecessarily. It's one of the main reasons we use GCD instead of doing our own NSThread programming. It's a lot more efficient.
The only thing you need to worry about is the degree of concurrency that you employ in your app so that you don't exhaust this pool of worker threads (having unintended impact on other background tasks that might be drawing on the same pool of worker threads).
The most draconian way of limiting the degree of concurrency is to employ a shared serial queue that you create yourself. That means that only one thing will run on that serial queue at a time. (Note, even in this situation you don't have assurances that it will use the same thread every time; only that you'll only be using one background worker thread at a time.)
A slightly more refined way to constrain the degree of concurrency in your app is to use NSOperationQueue (a layer above GCD) and set its maxConcurrentOperationCount. With this, you can constrain the degree of concurrency to something greater than 1, but still small enough to not exhaust the worker threads. E.g. for network queues, it's not unusual to specify a maxConcurrentOperationCount of 4 or 5.
In your revised question, you show us a code snippet. So, a couple of thoughts:
Don't worry about what [NSThread currentThread] reports. GCD will manage the threads for you.
Is this stitching process slow and potentially using a fair degree of memory?
If so, I would suggest neither a serial queue (only allowing one at a time might be too constraining), nor a global queue (because you could have enough of these running concurrently that you'd use up all the available worker threads), nor a GCD concurrent queue (again, the degree of concurrency is unbounded), but instead an NSOperationQueue with some reasonable, limited degree of concurrency:
@property (nonatomic, strong) NSOperationQueue *stitchQueue;
And
self.stitchQueue = [[NSOperationQueue alloc] init];
self.stitchQueue.name = @"com.domain.app.stitch";
self.stitchQueue.maxConcurrentOperationCount = 4;
And
- (void)sdImageWith:(NSString *)urlString saveIn:(NSString *)savePath completion:(completionSuccess)successCompletion failure:(completionFalse)failureCompletion {
    [[SDWebImageDownloader sharedDownloader] downloadImageWithURL:[NSURL URLWithString:urlString] options:SDWebImageDownloaderUseNSURLCache progress:nil completed:^(UIImage * _Nullable image, NSData * _Nullable data, NSError * _Nullable error, BOOL finished) {
        if (data.length <= 100 || error != nil) { failureCompletion(error); return; }
        [self.stitchQueue addOperationWithBlock:^{
            // NSLog(@"thread: %@", [NSThread currentThread]); // stop worrying about `NSThread`
            [[DLStitchingWarper shareSingleton] StitchingImage:data savePath:savePath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:savePath]) {
                successCompletion(savePath);
            } else {
                NSError *error = [[NSError alloc] initWithDomain:@"x" code:404 userInfo:nil];
                failureCompletion(error);
            }
        }];
    }];
}
If you prefer to use a custom GCD serial queue (with only one stitching operation possible at a time) or a custom GCD concurrent queue (with no limit on how many stitching tasks run at any given time), feel free. You know how time-consuming and/or resource-intensive these operations are, so only you can make that call. But operation queues offer the benefits of concurrency along with simple control over the degree of concurrency.
Instead of using dispatch_get_global_queue, use dispatch_queue_create and save a reference to the queue. Reuse it like any other instance variable.
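A rough sketch of that idea, assuming ARC; the class and queue names are made up, and the stitching call is borrowed from the question:
// Create the queue once, keep a reference to it, and reuse it for every call
// instead of grabbing a global queue each time.
@interface ImageStitcher : NSObject
- (void)stitchData:(NSData *)data savePath:(NSString *)savePath;
@end

@implementation ImageStitcher {
    dispatch_queue_t _stitchQueue; // created once, reused like any other ivar
}

- (instancetype)init {
    if ((self = [super init])) {
        _stitchQueue = dispatch_queue_create("com.example.imagestitch", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)stitchData:(NSData *)data savePath:(NSString *)savePath {
    dispatch_async(_stitchQueue, ^{
        // stitching work always runs on the same private queue
        [[DLStitchingWarper shareSingleton] StitchingImage:data savePath:savePath];
    });
}
@end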
The default concurrent-queue implementation does reuse threads, but it doesn't wait for a free thread: if none are free, it will create one.
It will thus overcommit, and you need to make sure you don't spawn too many long-running tasks yourself. For a 'real' thread pool you need:
a) your own queue(s)
b) a 'limiter' so that you don't oversubscribe them
For an example see: IOS thread pool

iPhone App Crashes with dispatch_async if operation is not completed

In my shipping iPhone App, I've used a dispatch_async block without issues. The App checks a web site for price updates, parses the HTML, updates a Core Data model accordingly, and then refreshes the table being viewed.
In my latest App however, I've found I can crash the App by switching out of the App while the price update process is running. The differences between the first and second usage seem to me to be only that I'm calling the dispatch block from the table's refreshController (ie. tableViewController's now built-in pull-to-refresh mechanism) and that this is now iOS7.
Can anyone suggest how dispatch_async should be gracefully aborted under known conditions, such as the user wishing to stop the process or switching apps like this, so that I can intercept that activity and manage the block properly, please?
If there's any good background reading on do's and don'ts with blocks, I'd be pleased to look through such links likewise - thanks!
This is the (mostly boilerplate) dispatch_async code I'm using, for your convenience:
priceData = [[NSMutableData alloc] init]; // priceData is declared in the header
priceURL = … // the price update URL
NSURL *requestedPriceURL = [[NSURL alloc] initWithString:@"myPriceURL.com"];
NSURLRequest *urlRequest = [NSURLRequest requestWithURL:requestedPriceURL];
dispatch_queue_t dispatchQueue = dispatch_queue_create("net.fudoshindesign.loot.priceUpdateDispatchQueue", NULL); //ie. my made-up queue name
dispatch_async(dispatchQueue, ^{
    dispatch_async(dispatch_get_main_queue(), ^{
        NSURLConnection *conn = [[NSURLConnection alloc] initWithRequest:urlRequest delegate:self startImmediately:YES];
        [conn start];
    });
});
That boilerplate code looks quite useless.
You create a serial queue. You dispatch a block on the queue which does nothing but dispatching a block on the main queue. You might as well have dispatched on the main queue directly.
Although you have an async block, you are executing the NSURLConnection request on the main thread inside that block; that's why the app crashes if the process doesn't complete. Execute the request on a background thread instead. As written, you are tying the work to the main thread.
You can do it like this:
dispatch_queue_t dispatchQueue = dispatch_queue_create("net.fudoshindesign.loot.priceUpdateDispatchQueue", 0); //ie. your made-up queue name
dispatch_async(dispatchQueue, ^{
    NSURLConnection *conn = [[NSURLConnection alloc] initWithRequest:urlRequest delegate:self startImmediately:YES];
    [conn start];
    ...
    //other code
    ...
    dispatch_async(dispatch_get_main_queue(), ^{
        //process completed. update UI or other tasks requiring main thread
    });
});
Try reading and practicing more about GCD.
Grand Central Dispatch (GCD) Reference from Apple Docs
GCD Tutorial
There's no explicit provision in dispatch queues for cancelling; basically, you would have to roll your own, e.g. with a semaphore or flag that the block checks.
NSOperationQueue (a higher level abstraction, but still built using GCD underneath) has support for canceling operations. You can create a series of NSOperations and add them to an NSOperationQueue and then message cancelAllOperations to the queue when you don't need it to complete.
Useful link:
http://www.cimgf.com/2008/02/16/cocoa-tutorial-nsoperation-and-nsoperationqueue/
Dispatch queues: How to tell if they're running and how to stop them
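A rough sketch of the cancellable NSOperationQueue approach described above (the queue name and itemCount are hypothetical):
NSOperationQueue *priceUpdateQueue = [[NSOperationQueue alloc] init];

NSBlockOperation *op = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakOp = op;
[op addExecutionBlock:^{
    // long-running price update work, checking for cancellation as it goes
    for (NSInteger i = 0; i < itemCount; i++) {
        if (weakOp.isCancelled) { return; } // a running operation must check this itself
        // process item i of the price update...
    }
}];
[priceUpdateQueue addOperation:op];

// Later, e.g. when the user leaves the screen or stops the refresh:
[priceUpdateQueue cancelAllOperations];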

Issue with GCD and too many threads

I have an image loader class which, provided with an NSURL, loads an image from the web and executes a completion block. The code is actually quite simple:
- (void)downloadImageWithURL:(NSString *)URLString completion:(BELoadImageCompletionBlock)completion
{
    dispatch_async(_queue, ^{
    // dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        UIImage *image = nil;
        NSURL *URL = [NSURL URLWithString:URLString];
        if (URL) {
            image = [UIImage imageWithData:[NSData dataWithContentsOfURL:URL]];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(image, URLString);
        });
    });
}
When I replace
dispatch_async(_queue, ^{
with the commented-out
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
Images load much faster, which is quite logical (before, images would load one at a time; now a bunch of them load simultaneously). My issue is that I have perhaps 50 images, I call downloadImageWithURL:completion: for all of them, and when I use the global queue instead of _queue my app eventually crashes and I see 85+ threads. Can the problem be that calling dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0) 50 times in a row makes GCD create too many threads? I thought GCD handles all the threading and makes sure the number of threads is not huge, but if that's not the case, is there any way I can influence the number of threads?
The kernel creates additional threads when workunits on existing GCD worker threads for a global concurrent queue are blocked in the kernel for a significant amount of time (as long as there is further work pending on the global queue).
This is necessary so that the application can continue to make progress overall (e.g. the execution of one of the pending blocks may be what allows the blocked threads to become unblocked).
If the reason for worker threads to be blocked in the kernel is IO (e.g. the +[NSData dataWithContentsOfURL:] in this example), the best solution is to replace those calls with an API that performs that IO asynchronously without blocking, e.g. NSURLConnection for networking or dispatch I/O for filesystem IO.
Alternatively you can limit the number of concurrent blocking operations manually, e.g. by using a counting dispatch semaphore.
The WWDC 2012 GCD session went over this topic in some detail.
Well from http://developer.apple.com/library/ios/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html
Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in the order in which they were added to the queue. The currently executing tasks run on distinct threads that are managed by the dispatch queue. The exact number of tasks executing at any given point is variable and depends on system conditions.
and
Serial queues (also known as private dispatch queues) execute one task at a time in the order in which they are added to the queue. The currently executing task runs on a distinct thread (which can vary from task to task) that is managed by the dispatch queue.
By dispatching all your blocks, each containing [NSData dataWithContentsOfURL:URL] (a synchronous, blocking network operation), to the high-priority concurrent dispatch queue, it looks like the default GCD behaviour will be to spawn a load of threads to execute your blocks ASAP.
You should be dispatching to DISPATCH_QUEUE_PRIORITY_BACKGROUND. These tasks are in no way "High Priority". Any image processing should be done when there is spare time and nothing is happening on the main thread.
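As a minimal sketch, that change to the commented-out dispatch from the question would look like this:
// Background priority instead of high priority for the downloads.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    // download and decode the image here
});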
If you want more control over how many of these things are happening at once, I recommend that you look into using NSOperation. You can take your blocks and embed them in an operation using NSBlockOperation, and then submit these operations to your own NSOperationQueue. An NSOperationQueue has a - (NSInteger)maxConcurrentOperationCount, and as an added benefit operations can also be cancelled after scheduling if needed.
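A rough sketch of that approach, assuming a hypothetical imageURLs array and the completion(image, URLString) callback from the question:
// Wrap each download in an NSBlockOperation and cap the concurrency.
NSOperationQueue *downloadQueue = [[NSOperationQueue alloc] init];
downloadQueue.maxConcurrentOperationCount = 4;

for (NSString *URLString in imageURLs) {
    [downloadQueue addOperation:[NSBlockOperation blockOperationWithBlock:^{
        NSURL *URL = [NSURL URLWithString:URLString];
        UIImage *image = URL ? [UIImage imageWithData:[NSData dataWithContentsOfURL:URL]] : nil;
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(image, URLString);
        });
    }]];
}

// If the images are no longer needed (e.g. the screen goes away):
// [downloadQueue cancelAllOperations];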
You can use NSOperationQueue, which is supported by NSURLConnection.
And it has the following instance method:
- (void)setMaxConcurrentOperationCount:(NSInteger)count

Why use async HTTP request over sync HTTP in a separate thread?

I know about the difference in how each works, but I want to know from a performance point of view (resources inside the iPhone).
Let's say I send an async request and wait for the delegate to be called. This won't block my execution thread. But what is the difference between doing this and just sending a sync request on another thread with GCD?
Like this:
dispatch_queue_t findPicsQueue;
findPicsQueue = dispatch_queue_create("FindPicsQueue", NULL);

dispatch_async(findPicsQueue, ^{
    NSData *theResponse = [NSURLConnection sendSynchronousRequest:theRequest
                                                returningResponse:&response
                                                            error:&error];
    NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)response;

    if (error) {
        NSLog(@"Error: %@", error);
    }

    if (httpResponse.statusCode == 200)
    {
        [self parseXMLFile:theResponse]; // Parses Data and modifies picturesFound
        for (PictureData *tmp in picturesFound) {
            NSLog(@"%@", tmp);
        }
    }
});
It won't block my interface since it's not being executed on the main thread, but it will block this specific thread. And I also think GCD runs queues concurrently.
Thanks in advance. I really want to clarify this question.
If you use NSURLConnection with sendAsynchronousRequest, then almost all processing takes place on the main thread, in particular, the XML parsing will be done on the main thread. Your code example however uses a different thread for processing.
This difference is relevant if you have an iPhone or iPad processor with two cores. Then the XML parsing can run in parallel with some UI activity on the main thread (in your example). So it can be completed earlier compared to running everything on the main thread (sendAsynchronousRequest approach).
For older devices with just one core, only one thread will run at a time and the two approaches should behave almost identical.
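For comparison, here is a sketch of the sendAsynchronousRequest approach that keeps the parsing off the main thread by handing it a background NSOperationQueue; theRequest, parseXMLFile: and picturesFound are borrowed from the question's code:
// The completion handler (including the XML parsing) runs on a background
// queue instead of the main queue.
NSOperationQueue *parseQueue = [[NSOperationQueue alloc] init];

[NSURLConnection sendAsynchronousRequest:theRequest
                                   queue:parseQueue
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    NSHTTPURLResponse *httpResponse = (NSHTTPURLResponse *)response;
    if (!error && httpResponse.statusCode == 200) {
        [self parseXMLFile:data]; // heavy work stays off the main thread
        dispatch_async(dispatch_get_main_queue(), ^{
            // update the UI with picturesFound here
        });
    }
}];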

How should I use GCD dispatch_barrier_async in iOS (seems to execute before and not after other blocks)

I'm trying to synchronize the following code in iOS5:
- an object has a method which makes an HTTP request from which it gets some data, including a URL to an image
- once the data arrives, the textual data is used to populate a CoreData model
- at the same time, a second thread is dispatched async to download the image; this thread will signal via KVO to a viewController when the image is already cached and available in the CoreData model
- since the image download will take a while, we immediately return the CoreData object, which has all attributes but the image, to the caller
Also, when the second thread is done downloading, the CoreData model can be saved.
This is the (simplified) code:
- (void)insideSomeMethod
{
    [SomeHTTPRequest withCompletionHandler:
     ^(id retrievedData)
     {
         if(!retrievedData)
         {
             handler(nil);
         }

         // Populate CoreData model with retrieved Data...

         dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
             NSURL* userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
             aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
         });

         handler(aCoreDataNSManagedObject);
         [self shouldCommitChangesToModel];
     }];
}

- (void)shouldCommitChangesToModel
{
    dispatch_barrier_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        NSError *error = nil;
        if(![managedObjectContext save:&error])
        {
            // Handle error
        }
    });
}
But what's going on is that the barrier-based save block is always executed before the image-loading block. That is,
dispatch_barrier_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSError *error = nil;
    if(![managedObjectContext save:&error])
    {
        // Handle error
    }
});
Executes before:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSURL* userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});
So obviously I'm not really dispatching the image-loading block before the barrier, or the barrier would wait until the image-loading block is done before executing (which was my intention).
What am I doing wrong? how do I make sure the image-loading block is enqueued before the barrier block?
At first glance the issue may be that you are dispatching the barrier block on a global concurrent queue. You can only use barrier blocks on your own custom concurrent queue. Per the GCD docs on dispatch_barrier_async, if you dispatch a block to a global queue, it will behave like a normal dispatch_async call.
Mike Ash has a good blog post on GCD barrier blocks: http://www.mikeash.com/pyblog/friday-qa-2011-10-14-whats-new-in-gcd.html
Good luck
T
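A minimal sketch of that fix, using a custom concurrent queue (the queue name is made up; the other identifiers come from the question's code):
// Barriers only act as barriers on a custom concurrent queue.
dispatch_queue_t modelQueue = dispatch_queue_create("com.example.model", DISPATCH_QUEUE_CONCURRENT);

// Regular work (e.g. the image download) goes in with dispatch_async...
dispatch_async(modelQueue, ^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});

// ...and the save goes in with dispatch_barrier_async, so it only runs after
// every block submitted to modelQueue before it has finished.
dispatch_barrier_async(modelQueue, ^{
    NSError *error = nil;
    if (![managedObjectContext save:&error]) {
        // Handle error
    }
});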
You need to create your own queue and not dispatch to the global queues, as per the ADC docs:
The queue you specify should be a concurrent queue that you create yourself using the dispatch_queue_create function. If the queue you pass to this function is a serial queue or one of the global concurrent queues, this function behaves like the dispatch_async function.
from https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html#//apple_ref/c/func/dispatch_barrier_async .
You can create plenty of your own GCD queues just fine; GCD queues are very small, and you can create lots of them without issue. You just need to release them when you're done with them.
For what you seem to be trying to solve, dispatch_barrier_async may not be the best solution.
Have a look at the Migrating Away From Threads section of the Concurrency Programming Guide. Just using dispatch_sync on a your own serial queue may solve your synchronization problem.
Alternatively, you can use NSOperation and NSOperationQueue. Unlike GCD, NSOperation allows you to easily manage dependencies (you can do it using GCD, but it can get ugly fast).
I'm a little late to the party, but maybe next time you could try using dispatch_groups to your advantage. http://www.raywenderlich.com/63338/grand-central-dispatch-in-depth-part-2
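For instance, here is a rough sketch using a dispatch group to run the save only once the grouped image download has finished (the identifiers are borrowed from the question's code):
// Tie the save to the completion of the grouped work with dispatch_group_notify.
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t workQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);

dispatch_group_async(group, workQueue, ^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});

// Runs only after every block added to the group has completed.
dispatch_group_notify(group, workQueue, ^{
    NSError *error = nil;
    if (![managedObjectContext save:&error]) {
        // Handle error
    }
});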
