GCD in iOS always creating new threads

I have a lot of code like this:
dispatch_async(dispatch_get_global_queue(0, 0), ^{});
It seems that dispatch_async creates a new thread every time I call it.
After the app has run for a while, I end up with many threads: 50, 60, 70. That's not good.
How can I reuse those threads, the way a table view reuses cells with dequeueReusableCellWithIdentifier?
This is my code. It needs to do some image stitching after the download, then save the result.
- (void)sdImageWith:(NSString *)urlString saveIn:(NSString *)savePath completion:(completionSuccess)successCompletion failure:(completionFalse)failureCompletion {
    [[SDWebImageDownloader sharedDownloader] downloadImageWithURL:[NSURL URLWithString:urlString] options:SDWebImageDownloaderUseNSURLCache progress:nil completed:^(UIImage * _Nullable image, NSData * _Nullable data, NSError * _Nullable error, BOOL finished) {
        if (data.length <= 100 || error != nil) { failureCompletion(error); return; }
        dispatch_async(imageStitch, ^{
            NSLog(@"thread:%@", [NSThread currentThread]);
            [[DLStitchingWarper shareSingleton] StitchingImage:data savePath:savePath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:savePath]) {
                successCompletion(savePath);
            } else {
                NSError *error = [[NSError alloc] initWithDomain:@"x" code:404 userInfo:nil];
                failureCompletion(error);
            }
        });
    }];
}

Dispatching to a global queue doesn't necessarily create new threads. GCD pulls threads from a limited pool of "worker" threads that it manages for you. When it's done running your dispatched task, it returns that thread to the pool of worker threads.
When you dispatch something to a GCD queue, it will grab an available worker thread from this pool. You have no assurances as to which one it uses from one invocation to the next. But you simply don't need to worry about whether it's a different thread, as GCD is managing this pool of threads to ensure that threads are not created and destroyed unnecessarily. It's one of the main reasons we use GCD instead of doing our own NSThread programming. It's a lot more efficient.
The only thing you need to worry about is the degree of concurrency that you employ in your app so that you don't exhaust this pool of worker threads (having unintended impact on other background tasks that might be drawing on the same pool of worker threads).
The most draconian way of limiting the degree of concurrency is to employ a shared serial queue that you create yourself. That means that only one thing will run on that serial queue at a time. (Note, even in this situation you don't have assurances that it will use the same thread every time; only that you'll only be using one background worker thread at a time.)
A slightly more refined way to constrain the degree of concurrency in your app is to use NSOperationQueue (a layer above GCD) and set its maxConcurrentOperationCount. With this, you can constrain the degree of concurrency to something greater than 1, but still small enough to not exhaust the worker threads. E.g. for network queues, it's not unusual to specify a maxConcurrentOperationCount of 4 or 5.
In your revised question, you show us a code snippet. So, a couple of thoughts:
Don't worry about what [NSThread currentThread] reports. GCD will manage the threads for you.
Is this stitching process slow and potentially using a fair degree of memory?
If so, I would not suggest either a serial queue (only allowing one at a time might be too constraining), nor a global queue (because you could have enough of these running concurrently that you'd use up all the available worker threads), nor a GCD concurrent queue (again, the degree of concurrency is unbound), but instead use an NSOperationQueue with some reasonable limited degree of concurrency:
@property (nonatomic, strong) NSOperationQueue *stitchQueue;
And
self.stitchQueue = [[NSOperationQueue alloc] init];
self.stitchQueue.name = @"com.domain.app.stitch";
self.stitchQueue.maxConcurrentOperationCount = 4;
And
- (void)sdImageWith:(NSString *)urlString saveIn:(NSString *)savePath completion:(completionSuccess)successCompletion failure:(completionFalse)failureCompletion {
    [[SDWebImageDownloader sharedDownloader] downloadImageWithURL:[NSURL URLWithString:urlString] options:SDWebImageDownloaderUseNSURLCache progress:nil completed:^(UIImage * _Nullable image, NSData * _Nullable data, NSError * _Nullable error, BOOL finished) {
        if (data.length <= 100 || error != nil) { failureCompletion(error); return; }
        [self.stitchQueue addOperationWithBlock:^{
            // NSLog(@"thread:%@", [NSThread currentThread]); // stop worrying about `NSThread`
            [[DLStitchingWarper shareSingleton] StitchingImage:data savePath:savePath];
            if ([[NSFileManager defaultManager] fileExistsAtPath:savePath]) {
                successCompletion(savePath);
            } else {
                NSError *error = [[NSError alloc] initWithDomain:@"x" code:404 userInfo:nil];
                failureCompletion(error);
            }
        }];
    }];
}
If you prefer to use a custom GCD serial queue (with only one stitching operation possible at a time) or a custom GCD concurrent queue (with no limit as to how many stitching tasks run at any given time), feel free. You know how time-consuming and/or resource-intensive these operations are, so only you can make that call. But operation queues offer the benefits of concurrency along with simple control over its degree.

Instead of using dispatch_get_global_queue, use dispatch_queue_create and save a reference to the queue. Reuse it like any other instance variable.
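For example, a minimal sketch of that approach (the property name and queue label are illustrative; assumes ARC):

@property (nonatomic, strong) dispatch_queue_t imageStitchQueue;

// Create it once, e.g. in init, and keep the reference around:
self.imageStitchQueue = dispatch_queue_create("com.example.imageStitch", DISPATCH_QUEUE_SERIAL);

// Later, every call reuses the same queue instead of dispatch_get_global_queue:
dispatch_async(self.imageStitchQueue, ^{
    [[DLStitchingWarper shareSingleton] StitchingImage:data savePath:savePath];
});

Because the queue is serial, only one stitching task runs at a time; use DISPATCH_QUEUE_CONCURRENT instead if you want several, with the caveats about unbounded concurrency discussed above.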

The default concurrent-queue implementation does reuse threads, but it doesn't wait for a free thread: if none is free, it creates one.
It will thus overcommit, and you need to make sure you don't spawn too many long-running tasks yourself. For a 'real' thread pool you need:
a) your own queue(s)
b) a 'limiter' so that you never have more than a fixed number of tasks in flight (see the sketch below)
For an example see: IOS thread pool
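A rough sketch of (a) and (b) together (names and the limit of 4 are illustrative; the wait happens before dispatching, on the submitting background thread, so at most 4 blocks ever occupy worker threads at once):

dispatch_queue_t workQueue = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_CONCURRENT); // a) your own queue
dispatch_semaphore_t limiter = dispatch_semaphore_create(4);                                       // b) the 'limiter': at most 4 in flight

for (NSUInteger i = 0; i < taskCount; i++) {                    // taskCount is illustrative
    dispatch_semaphore_wait(limiter, DISPATCH_TIME_FOREVER);    // blocks the submitter until a slot is free
    dispatch_async(workQueue, ^{
        // ... perform one long-running task ...
        dispatch_semaphore_signal(limiter);                     // hand the slot back when done
    });
}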

Related

Why is NSOperationQueue.mainQueue.maxConcurrentOperationCount set to 1

The reason for this question is the reactions to this question.
I realized that the understanding of the problem was not fully there, nor was the reason for asking it in the first place. So I am trying to boil the other question down to this one at its core.
First a little preface, and some history: I know NSOperation(Queue) existed before GCD, and that operation queues were implemented using threads before dispatch queues.
The next thing you need to understand is that, by default, meaning no "waiting" methods being used on operations or operation queues (just a standard addOperation:), an NSOperation's main method is executed on the underlying queue of the NSOperationQueue asynchronously (e.g. via dispatch_async()).
To conclude my preface, I'm questioning the purpose of setting NSOperationQueue.mainQueue.maxConcurrentOperationCount to 1 in this day and age, now that the underlyingQueue is actually the main GCD serial queue (i.e. the return value of dispatch_get_main_queue()).
If NSOperationQueue.mainQueue already executes its operations' main methods serially, why worry about maxConcurrentOperationCount at all?
To see the issue of it being set to 1, please see the example in the referenced question.
It's set to 1 because there's no reason to set it to anything else, and it's probably slightly better to keep it set to 1 for at least three reasons I can think of.
Reason 1
Because NSOperationQueue.mainQueue's underlyingQueue is dispatch_get_main_queue(), which is serial, NSOperationQueue.mainQueue is effectively serial (it could never run more than a single block at a time, even if its maxConcurrentOperationCount were greater than 1).
We can check this by creating our own NSOperationQueue, putting a serial queue in its underlyingQueue target chain, and setting its maxConcurrentOperationCount to a large number.
Create a new project in Xcode using the macOS > Cocoa App template with language Objective-C. Replace the AppDelegate implementation with this:
@implementation AppDelegate {
    dispatch_queue_t concurrentQueue;
    dispatch_queue_t serialQueue;
    NSOperationQueue *operationQueue;
}

- (void)applicationDidFinishLaunching:(NSNotification *)aNotification {
    concurrentQueue = dispatch_queue_create("q", DISPATCH_QUEUE_CONCURRENT);
    serialQueue = dispatch_queue_create("q2", nil);
    operationQueue = [[NSOperationQueue alloc] init];

    // concurrent queue targeting serial queue
    //dispatch_set_target_queue(concurrentQueue, serialQueue);
    //operationQueue.underlyingQueue = concurrentQueue;

    // serial queue targeting concurrent queue
    dispatch_set_target_queue(serialQueue, concurrentQueue);
    operationQueue.underlyingQueue = serialQueue;

    operationQueue.maxConcurrentOperationCount = 100;

    for (int i = 0; i < 100; ++i) {
        NSOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
            NSLog(@"operation %d starting", i);
            sleep(3);
            NSLog(@"operation %d ending", i);
        }];
        [operationQueue addOperation:operation];
    }
}

@end
If you run this, you'll see that operation 1 doesn't start until operation 0 has ended, even though I set operationQueue.maxConcurrentOperationCount to 100. This happens because there is a serial queue in the target chain of operationQueue.underlyingQueue. Thus operationQueue is effectively serial, even though its maxConcurrentOperationCount is not 1.
You can play with the code to try changing the structure of the target chain. You'll find that if there is a serial queue anywhere in that chain, only one operation runs at a time.
But if you set operationQueue.underlyingQueue = concurrentQueue, and do not set concurrentQueue's target to serialQueue, then you'll see that 64 operations run simultaneously. For operationQueue to run operations concurrently, the entire target chain starting with its underlyingQueue must be concurrent.
Since the main queue is always serial, NSOperationQueue.mainQueue is effectively always serial.
In fact, if you set NSOperationQueue.mainQueue.maxConcurrentOperationCount to anything but 1, it has no effect. If you print NSOperationQueue.mainQueue.maxConcurrentOperationCount after trying to change it, you'll find that it's still 1. I think it would be even better if the attempt to change it raised an assertion. Silently ignoring attempts to change it is more likely to lead to confusion.
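A quick way to see this for yourself (a sketch; the logged value reflects the behaviour described above):

[NSOperationQueue mainQueue].maxConcurrentOperationCount = 5;
NSLog(@"max = %ld", (long)[NSOperationQueue mainQueue].maxConcurrentOperationCount); // still logs 1; the change is silently ignored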
Reason 2
NSOperationQueue submits up to maxConcurrentOperationCount blocks to its underlyingQueue simultaneously. Since the mainQueue.underlyingQueue is serial, only one of those blocks can run at a time. Once those blocks are submitted, it may be too late to use the -[NSOperation cancel] message to cancel the corresponding operations. I'm not sure; this is an implementation detail that I haven't fully explored. Anyway, if it is too late, that is unfortunate as it may lead to a waste of time and battery power.
Reason 3
As mentioned in reason 2, NSOperationQueue submits up to maxConcurrentOperationCount blocks to its underlyingQueue simultaneously. Since mainQueue.underlyingQueue is serial, only one of those blocks can execute at a time. The other blocks, and any other resources the dispatch_queue_t uses to track them, must sit around idly, waiting for their turns to run. This is a waste of resources. Not a big waste, but a waste nonetheless. If mainQueue.maxConcurrentOperationCount is set to 1, it will only submit a single block to its underlyingQueue at a time, thus preventing GCD from allocating resources uselessly.

Running multiple background threads iOS

Is it possible to run multiple background threads to improve performance on iOS? Currently I am using the following code to send, let's say, 50 network requests on a background thread:
dispatch_async(dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void){
// send 50 network requests
});
EDIT:
After updating my code to something like the following (taken from here), no performance gain was achieved :(
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
dispatch_group_t fetchGroup = dispatch_group_create();

// This will allow up to 8 parallel downloads.
dispatch_semaphore_t downloadSema = dispatch_semaphore_create(8);

// We start ALL our downloads in parallel throttled by the above semaphore.
for (NSURL *url in urlsArray) {
    dispatch_group_async(fetchGroup, fetchQ, ^(void) {
        dispatch_semaphore_wait(downloadSema, DISPATCH_TIME_FOREVER);
        NSMutableURLRequest *headRequest = [NSMutableURLRequest requestWithURL:url cachePolicy:NSURLRequestUseProtocolCachePolicy timeoutInterval:60.0];
        [headRequest setHTTPMethod:@"GET"];
        [headRequest addValue:cookieString forHTTPHeaderField:@"Cookie"];
        NSOperationQueue *queue = [[[NSOperationQueue alloc] init] autorelease];
        [NSURLConnection sendAsynchronousRequest:headRequest
                                           queue:queue // created at class init
                               completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                                   // do something with data or handle error
                                   NSLog(@"request completed");
                               }];
        dispatch_semaphore_signal(downloadSema);
    });
}

// Now we wait until ALL our dispatch_group_async are finished.
dispatch_group_wait(fetchGroup, DISPATCH_TIME_FOREVER);

// Update your UI
dispatch_sync(dispatch_get_main_queue(), ^{
    //[self updateUIFunction];
});

// Release resources
dispatch_release(fetchGroup);
dispatch_release(downloadSema);
dispatch_release(fetchQ);
Be careful not to confuse threads with queues
A single concurrent queue can operate across multiple threads, and GCD never guarantees which thread your tasks will run on.
The code you currently have will submit 50 network tasks to be run on a background concurrent queue, this much is true.
However, all 50 of these tasks will be executed on the same thread.
GCD basically acts like a giant thread pool, so your block (containing your 50 tasks) will be submitted to the next available thread in the pool. Therefore, if the tasks are synchronous, they will be executed serially. This means that each task will have to wait for the previous one to finish before proceeding. If they are asynchronous tasks, then they will all be dispatched immediately (which begs the question of why you need to use GCD in the first place).
If you want multiple synchronous tasks to run at the same time, then you need a separate dispatch_async for each of your tasks. This way you have a block per task, and therefore they will be dispatched to multiple threads from the thread pool and therefore can run concurrently.
Although you should be careful that you don't submit too many network tasks to operate at the same time (you don't say specifically what they're doing) as it could potentially overload a server, as gnasher says.
You can easily limit the number of concurrent tasks (whether they're synchronous or asynchronous) operating at the same time using a GCD semaphore. For example, this code will limit the number of concurrent operations to 6:
long numberOfConcurrentTasks = 6;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(numberOfConcurrentTasks);

for (int i = 0; i < 50; i++) {
    dispatch_async(concurrentQueue, ^{
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        [self doNetworkTaskWithCompletion:^{
            dispatch_semaphore_signal(semaphore);
            NSLog(@"network task %i done", i);
        }];
    });
}
Edit
The problem with your code is the line:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
When NULL is passed to the attr parameter, GCD creates a serial queue (it's also a lot more readable if you actually specify the queue type here). You want a concurrent queue. Therefore you want:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", DISPATCH_QUEUE_CONCURRENT);
You need to be signalling your semaphore from within the completion handler of the request instead of at the end of the request. As it's asynchronous, the semaphore will get signalled as soon as the request is sent off, therefore queueing another network task. You want to wait for the network task to return before signalling.
[NSURLConnection sendAsynchronousRequest:headRequest
                                   queue:queue // created at class init
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                           // do something with data or handle error
                           NSLog(@"request completed");
                           dispatch_semaphore_signal(downloadSema);
                       }];
Edit 2
I just noticed you are updating your UI using a dispatch_sync. I see no reason for it to be synchronous, as it'll just block the background thread until the main thread has updated the UI. I would use a dispatch_async to do this.
Edit 3
As CouchDeveloper points out, it is possible that the number of concurrent network requests might be being capped by the system.
The easiest solution appears to be migrating over to NSURLSession and configuring the maxConcurrentOperationCount property of the NSOperationQueue used. That way you can ditch the semaphores altogether and just dispatch all your network requests on a background queue, using a callback to update the UI on the main thread.
I am not at all familiar with NSURLSession though, I was only answering this from a GCD stand-point.
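For reference, a minimal NSURLSession sketch along those lines (assumptions: urlsArray and cookieString are the variables from the question, the limits shown are arbitrary, and HTTPMaximumConnectionsPerHost is an additional per-host cap on the session configuration):

NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
config.HTTPMaximumConnectionsPerHost = 6;            // cap simultaneous connections per host

NSOperationQueue *callbackQueue = [[NSOperationQueue alloc] init];
callbackQueue.maxConcurrentOperationCount = 4;       // cap how many completion handlers run at once

NSURLSession *session = [NSURLSession sessionWithConfiguration:config delegate:nil delegateQueue:callbackQueue];

for (NSURL *url in urlsArray) {
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    [request addValue:cookieString forHTTPHeaderField:@"Cookie"];
    [[session dataTaskWithRequest:request completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        // do something with data or handle error
        dispatch_async(dispatch_get_main_queue(), ^{
            // update UI here
        });
    }] resume];
}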
You can send multiple requests, but sending 50 requests in parallel is usually not a good idea. There is a good chance that a server confronted with 50 simultaneous request will handle the first few and return errors for the rest. It depends on the server, but using a semaphore you can easily limit the number of running requests to anything you like, say four or eight. You need to experiment with the server in question to find out what works reliably on that server and gives you the highest performance.
And there seems to be a bit of confusion around: Usually all your network requests will run asynchronously. That is you send the request to the OS (which goes very quick usually), then nothing happens for a while, then a callback method of yours is called, processing the data. Whether you send the requests from the main thread or from a background thread doesn't make much difference.
Processing the results of these requests can be time consuming. You can process the results on a background thread. You can process the results of all requests on the same serial queue, which makes it a lot easier to avoid multithreading problems. That's what I do because it's easy and even in the worst case uses one processor for intensive processing of the results, while the other processor can do UI etc.
If you use synchronous network requests (which is a bad idea), then you need to dispatch each one by itself on a background thread. If you run a loop running 50 synchronous network requests on a background thread, then the second request will wait until the first one is completely finished.

NSOperation and NSOperationQueue working thread vs main thread

I have to carry out a series of download and database write operations in my app. I am using NSOperation and NSOperationQueue for this.
This is application scenario:
Fetch all postcodes from a place.
For each postcode fetch all houses.
For each house fetch inhabitant details
As said, I have defined an NSOperation for each task. In the first case (Task 1), I send a request to the server to fetch all postcodes. The delegate within the NSOperation receives the data, which is then written to the database. The database operation is defined in a different class; from the NSOperation class I call the write function defined in the database class.
My question is whether the database write operation occurs on the main thread or on a background thread. Since I was calling it from within an NSOperation, I expected it to run on a different thread (not the main thread), just like the NSOperation. Can someone please explain how this works with NSOperation and NSOperationQueue?
My question is whether the database write operation occurs on the main thread or on a background thread?
If you create an NSOperationQueue from scratch as in:
NSOperationQueue *myQueue = [[NSOperationQueue alloc] init];
It will be in a background thread:
Operation queues usually provide the threads used to run their operations. In OS X v10.6 and later, operation queues use the libdispatch library (also known as Grand Central Dispatch) to initiate the execution of their operations. As a result, operations are always executed on a separate thread, regardless of whether they are designated as concurrent or non-concurrent operations.
Unless you are using the mainQueue:
NSOperationQueue *mainQueue = [NSOperationQueue mainQueue];
You can also see code like this:
NSOperationQueue *myQueue = [[NSOperationQueue alloc] init];
[myQueue addOperationWithBlock:^{
    // Background work
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        // Main thread work (UI usually)
    }];
}];
And the GCD version:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void)
{
    // Background work
    dispatch_async(dispatch_get_main_queue(), ^(void)
    {
        // Main thread work (UI usually)
    });
});
NSOperationQueue gives finer control over what you want to do. You can create dependencies between the two operations (download and save to database). To pass the data between one block and the other, you can assume, for example, that an NSData instance will be coming from the server, so:
__block NSData *dataFromServer = nil;

NSBlockOperation *downloadOperation = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakDownloadOperation = downloadOperation;

[weakDownloadOperation addExecutionBlock:^{
    // Download your stuff
    // Finally put it on the right place:
    dataFromServer = ....
}];

NSBlockOperation *saveToDataBaseOperation = [[NSBlockOperation alloc] init];
__weak NSBlockOperation *weakSaveToDataBaseOperation = saveToDataBaseOperation;

[weakSaveToDataBaseOperation addExecutionBlock:^{
    // Work with your NSData instance
    // Save your stuff
}];

[saveToDataBaseOperation addDependency:downloadOperation];
[myQueue addOperation:saveToDataBaseOperation];
[myQueue addOperation:downloadOperation];
Edit: Why I am using a __weak reference for the operations can be found here. In a nutshell, it's to avoid retain cycles.
If you want to perform the database writing operation in the background thread you need to create a NSManagedObjectContext for that thread.
You can create the background NSManagedObjectContext in the start method of your relevant NSOperation subclass.
Check the Apple docs for Concurrency with Core Data.
You can also create an NSManagedObjectContext that executes requests in its own background thread by creating it with NSPrivateQueueConcurrencyType and performing the requests inside its performBlock: method.
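A minimal sketch of that setup (psc is assumed to be your existing NSPersistentStoreCoordinator):

NSManagedObjectContext *backgroundContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
backgroundContext.persistentStoreCoordinator = psc;   // or set a parentContext instead

[backgroundContext performBlock:^{
    // Create or update managed objects here; this block runs on the context's own private queue.
    NSError *error = nil;
    if (![backgroundContext save:&error]) {
        // handle the error
    }
}];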
From NSOperationQueue
In iOS 4 and later, operation queues use Grand Central Dispatch to execute operations. Prior to iOS 4, they create separate threads for non-concurrent operations and launch concurrent operations from the current thread.
So,
[NSOperationQueue mainQueue] // added operations execute on main thread
[NSOperationQueue new] // post-iOS4, guaranteed to be not the main thread
In your case, you might want to create your own "database thread" by subclassing NSThread and send messages to it with performSelector:onThread:.
The execution thread of an NSOperation depends on the NSOperationQueue to which you added the operation. Look at this statement in your code -
[[NSOperationQueue mainQueue] addOperation:yourOperation]; // or any other similar add method of NSOperationQueue class
All this assumes you have not done any further threading inside the main method of your NSOperation, which is where the actual work instructions are (expected to be) written.
However, in the case of concurrent operations the scenario is different. The queue may spawn a thread for each concurrent operation, although that's not guaranteed and depends on system resources versus the operations' resource demands at that point. You can control the concurrency of an operation queue via its maxConcurrentOperationCount property.
EDIT -
I found your question interesting and did some analysis/logging myself. I created an NSOperationQueue on the main thread like this -
self.queueSendMessageOperation = [[[NSOperationQueue alloc] init] autorelease];
NSLog(@"Operation queue creation. current thread = %@ \n main thread = %@", [NSThread currentThread], [NSThread mainThread]);
self.queueSendMessageOperation.maxConcurrentOperationCount = 1; // restrict concurrency
And then I went on to create an NSOperation and added it using addOperation. In the main method of this operation, when I checked the current thread,
NSLog(@"Operation obj = %@\n current thread = %@ \n main thread = %@", self, [NSThread currentThread], [NSThread mainThread]);
it was not the main thread; the current thread object was not the main thread object.
So creating a custom queue on the main thread (with no concurrency among its operations) doesn't necessarily mean the operations will execute serially on the main thread itself.
The summary from the docs is that operations are always executed on a separate thread (post-iOS 4 this implies GCD-backed operation queues).
It's trivial to check that it is indeed running on a non-main thread:
NSLog(#"main thread? %#", [NSThread isMainThread] ? #"YES" : #"NO");
When running in a thread it's trivial to use GCD/libdispatch to run something on the main thread, whether core data, user interface or other code required to run on the main thread:
dispatch_async(dispatch_get_main_queue(), ^{
    // this is now running on the main thread
});
If you're doing any non-trivial threading, you should use FMDatabaseQueue.
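For example, a minimal FMDatabaseQueue sketch (dbPath, the table, and name are illustrative):

FMDatabaseQueue *dbQueue = [FMDatabaseQueue databaseQueueWithPath:dbPath];

[dbQueue inDatabase:^(FMDatabase *db) {
    // FMDatabaseQueue serializes all access on its own internal queue,
    // so this is safe to call from any thread.
    [db executeUpdate:@"INSERT INTO inhabitants (name) VALUES (?)", name];
}];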

Issue with GCD and too many threads

I have an image loader class which, provided with an NSURL, loads an image from the web and executes a completion block. The code is actually quite simple:
- (void)downloadImageWithURL:(NSString *)URLString completion:(BELoadImageCompletionBlock)completion
{
    dispatch_async(_queue, ^{
    // dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        UIImage *image = nil;
        NSURL *URL = [NSURL URLWithString:URLString];
        if (URL) {
            image = [UIImage imageWithData:[NSData dataWithContentsOfURL:URL]];
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(image, URLString);
        });
    });
}
When I replace
dispatch_async(_queue, ^{
with commented out
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
Images are loading much faster, which is quite logical (before, images were loaded one at a time; now a bunch of them load simultaneously). My issue is that I have perhaps 50 images, I call the downloadImageWithURL:completion: method for all of them, and when I use the global queue instead of _queue my app eventually crashes and I see there are 85+ threads. Can the problem be that calling dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0) 50 times in a row makes GCD create too many threads? I thought that GCD handles all the threading and makes sure the number of threads is not huge, but if that's not the case, is there any way I can influence the number of threads?
The kernel creates additional threads when workunits on existing GCD worker threads for a global concurrent queue are blocked in the kernel for a significant amount of time (as long as there is further work pending on the global queue).
This is necessary so that the application can continue to make progress overall (e.g. the execution of one of the pending blocks may be what allows the blocked threads to become unblocked).
If the reason for worker threads to be blocked in the kernel is IO (e.g. the +[NSData dataWithContentsOfURL:] in this example), the best solution is replace those calls with an API that will perform that IO asynchronously without blocking, e.g. NSURLConnection for networking or dispatch I/O for filesystem IO.
Alternatively you can limit the number of concurrent blocking operations manually, e.g. by using a counting dispatch semaphore.
The WWDC 2012 GCD session went over this topic in some detail.
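As a sketch of the non-blocking suggestion, the blocking +[NSData dataWithContentsOfURL:] call could be replaced with an asynchronous request so no GCD worker thread sits blocked on IO; this example uses +[NSURLConnection sendAsynchronousRequest:queue:completionHandler:] (the era-appropriate API) with the names from the question:

- (void)downloadImageWithURL:(NSString *)URLString completion:(BELoadImageCompletionBlock)completion
{
    NSURL *URL = [NSURL URLWithString:URLString];
    if (!URL) {
        completion(nil, URLString);
        return;
    }
    NSURLRequest *request = [NSURLRequest requestWithURL:URL];
    [NSURLConnection sendAsynchronousRequest:request
                                       queue:[NSOperationQueue mainQueue]
                           completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
                               // The handler runs on the main queue, matching the original behaviour.
                               UIImage *image = data ? [UIImage imageWithData:data] : nil;
                               completion(image, URLString);
                           }];
}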
Well from http://developer.apple.com/library/ios/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html
Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in the order in which they were added to the queue. The currently executing tasks run on distinct threads that are managed by the dispatch queue. The exact number of tasks executing at any given point is variable and depends on system conditions.
and
Serial queues (also known as private dispatch queues) execute one task at a time in the order in which they are added to the queue. The currently executing task runs on a distinct thread (which can vary from task to task) that is managed by the dispatch queue.
By dispatching all your blocks to the high-priority concurrent dispatch queue, each containing a call to
[NSData dataWithContentsOfURL:URL]
which is a synchronous, blocking network operation, it looks like the default GCD behaviour is to spawn a load of threads to execute your blocks ASAP.
You should be dispatching to DISPATCH_QUEUE_PRIORITY_BACKGROUND. These tasks are in no way "High Priority". Any image processing should be done when there is spare time and nothing is happening on the main thread.
If you want more control over how many of these things are happening at once, I recommend that you look into using NSOperation. You can take your blocks and embed them in an operation using NSBlockOperation, and then you can submit these operations to your own NSOperationQueue. An NSOperationQueue has a - (NSInteger)maxConcurrentOperationCount, and as an added benefit operations can also be cancelled after scheduling if needed.
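For instance, a sketch of that approach applied to the loader above (the queue, its name, and the limit of 4 are illustrative; URLString and completion come from the question's method):

// Created once, e.g. in the image loader's init:
NSOperationQueue *imageLoadQueue = [[NSOperationQueue alloc] init];
imageLoadQueue.maxConcurrentOperationCount = 4;   // at most 4 downloads in flight

// Inside downloadImageWithURL:completion:, instead of dispatch_async:
NSBlockOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:URLString]];
    UIImage *image = data ? [UIImage imageWithData:data] : nil;
    dispatch_async(dispatch_get_main_queue(), ^{
        completion(image, URLString);
    });
}];
[imageLoadQueue addOperation:operation];          // can be cancelled later via -[NSOperation cancel]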
You can use an NSOperationQueue, which is supported by NSURLConnection.
And it has the following instance method:
- (void)setMaxConcurrentOperationCount:(NSInteger)count

How should I use GCD dispatch_barrier_async in iOS (seems to execute before and not after other blocks)

I'm trying to synchronize the following code in iOS5:
an object has a method which makes an HTTP request from which it gets some data, including a URL to an image
once the data arrives, the textual data is used to populate a CoreData model
at the same time, a second thread is dispatched async to download the image; this thread will signal via KVO to a viewController when the image is already cached and available in the CoreData model.
since the image download will take a while, we immediately return the CoreData object, which has all attributes except the image, to the caller.
Also, when the second thread is done downloading, the CoreData model can be saved.
This is the (simplified) code:
- (void)insideSomeMethod
{
    [SomeHTTPRequest withCompletionHandler:^(id retrievedData)
    {
        if (!retrievedData)
        {
            handler(nil);
        }

        // Populate CoreData model with retrieved Data...

        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
            aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
        });

        handler(aCoreDataNSManagedObject);
        [self shouldCommitChangesToModel];
    }];
}

- (void)shouldCommitChangesToModel
{
    dispatch_barrier_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        NSError *error = nil;
        if (![managedObjectContext save:&error])
        {
            // Handle error
        }
    });
}
But what's going on is that the barrier-based save block is always executed before the image-loading block. That is,
dispatch_barrier_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSError *error = nil;
    if (![managedObjectContext save:&error])
    {
        // Handle error
    }
});
Executes before:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});
So obviously I'm not really dispatching the image-loading block before the barrier, or the barrier would wait until the image-loading block is done before executing (which was my intention).
What am I doing wrong? how do I make sure the image-loading block is enqueued before the barrier block?
At first glance the issue may be that you are dispatching the barrier block on a global concurrent queue. You can only use barrier blocks on your own custom concurrent queue. Per the GCD docs on dispatch_barrier_async, if you dispatch a block to a global queue, it will behave like a normal dispatch_async call.
Mike Ash has a good blog post on GCD barrier blocks: http://www.mikeash.com/pyblog/friday-qa-2011-10-14-whats-new-in-gcd.html
Good luck
T
You need to create your own queue and not dispatch to the global queues as per the ADC Docs
The queue you specify should be a concurrent queue that you create yourself using the dispatch_queue_create function. If the queue you pass to this function is a serial queue or one of the global concurrent queues, this function behaves like the dispatch_async function.
from https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html#//apple_ref/c/func/dispatch_barrier_async .
You can create your own GCD queues just fine; GCD queues are very lightweight and you can create lots of them without issue. You just need to release them when you're done with them.
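Putting those answers together, a sketch of what the setup could look like (the queue name is illustrative; the blocks are the ones from the question):

// A concurrent queue you create yourself; barriers only act as barriers here.
dispatch_queue_t imageSyncQueue = dispatch_queue_create("com.example.imageSync", DISPATCH_QUEUE_CONCURRENT);

// Image download, submitted as a normal block:
dispatch_async(imageSyncQueue, ^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});

// The save, submitted as a barrier: it runs only after every block
// previously submitted to imageSyncQueue has finished.
dispatch_barrier_async(imageSyncQueue, ^{
    NSError *error = nil;
    if (![managedObjectContext save:&error])
    {
        // Handle error
    }
});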
For what you seem to be trying to solve, dispatch_barrier_async may not be the best solution.
Have a look at the Migrating Away From Threads section of the Concurrency Programming Guide. Just using dispatch_sync on a your own serial queue may solve your synchronization problem.
Alternatively, you can use NSOperation and NSOperationQueue. Unlike GCD, NSOperation allows you to easily manage dependencies (you can do it using GCD, but it can get ugly fast).
I'm a little late to the party, but maybe next time you could try using dispatch_groups to your advantage. http://www.raywenderlich.com/63338/grand-central-dispatch-in-depth-part-2
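A sketch of that dispatch_group idea for this case (assuming the group is stored, e.g. as an ivar, where both methods can reach it):

dispatch_group_t imageGroup = dispatch_group_create(); // shared by both methods

// Track the image download in the group:
dispatch_group_async(imageGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});

// Save only once everything in the group has finished:
dispatch_group_notify(imageGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSError *error = nil;
    if (![managedObjectContext save:&error])
    {
        // Handle error
    }
});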
