Issue with GCD and too many threads - iOS

I have an image loader class which, given an NSURL, loads an image from the web and executes a completion block. The code is actually quite simple:
- (void)downloadImageWithURL:(NSString *)URLString completion:(BELoadImageCompletionBlock)completion
{
dispatch_async(_queue, ^{
// dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
UIImage *image = nil;
NSURL *URL = [NSURL URLWithString:URLString];
if (URL) {
image = [UIImage imageWithData:[NSData dataWithContentsOfURL:URL]];
}
dispatch_async(dispatch_get_main_queue(), ^{
completion(image, URLString);
});
});
}
When I replace
dispatch_async(_queue, ^{
with the commented-out
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
images load much faster, which is quite logical (before, images were loaded one at a time; now a bunch of them load simultaneously). My issue is that I have perhaps 50 images, I call the downloadImageWithURL:completion: method for all of them, and when I use the global queue instead of _queue my app eventually crashes and I can see there are 85+ threads. Can the problem be that calling dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0) 50 times in a row makes GCD create too many threads? I thought GCD handles all the threading and keeps the number of threads from getting huge, but if that's not the case, is there any way I can influence the number of threads?

The kernel creates additional threads when workunits on existing GCD worker threads for a global concurrent queue are blocked in the kernel for a significant amount of time (as long as there is further work pending on the global queue).
This is necessary so that the application can continue to make progress overall (e.g. the execution of one of the pending blocks may be what allows the blocked threads to become unblocked).
If the reason for worker threads to be blocked in the kernel is IO (e.g. the +[NSData dataWithContentsOfURL:] in this example), the best solution is to replace those calls with an API that will perform that IO asynchronously without blocking, e.g. NSURLConnection for networking or dispatch I/O for filesystem IO.
Alternatively you can limit the number of concurrent blocking operations manually, e.g. by using a counting dispatch semaphore.
The WWDC 2012 GCD session went over this topic in some detail.
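For illustration, here is a minimal sketch of the non-blocking variant of the method from the question, using NSURLSession's dataTaskWithURL:completionHandler: (available since iOS 7). The method signature and completion block come from the question; the rest is illustrative only:
- (void)downloadImageWithURL:(NSString *)URLString completion:(BELoadImageCompletionBlock)completion
{
    NSURL *URL = [NSURL URLWithString:URLString];
    if (!URL) {
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(nil, URLString);
        });
        return;
    }
    // The session performs the networking without tying up a GCD worker thread.
    NSURLSessionDataTask *task = [[NSURLSession sharedSession] dataTaskWithURL:URL
        completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            UIImage *image = data ? [UIImage imageWithData:data] : nil;
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(image, URLString);
            });
        }];
    [task resume];
}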

Well, from http://developer.apple.com/library/ios/#documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html:
Concurrent queues (also known as a type of global dispatch queue) execute one or more tasks concurrently, but tasks are still started in the order in which they were added to the queue. The currently executing tasks run on distinct threads that are managed by the dispatch queue. The exact number of tasks executing at any given point is variable and depends on system conditions.
and
Serial queues (also known as private dispatch queues) execute one task at a time in the order in which they are added to the queue. The currently executing task runs on a distinct thread (which can vary from task to task) that is managed by the dispatch queue.
By dispatching all your blocks to the high-priority concurrent dispatch queue, where each block calls
[NSData dataWithContentsOfURL:URL]
(a synchronous, blocking network operation), it looks like the default GCD behaviour will be to spawn a load of threads to execute your blocks ASAP.
You should be dispatching to DISPATCH_QUEUE_PRIORITY_BACKGROUND. These tasks are in no way "High Priority". Any image processing should be done when there is spare time and nothing is happening on the main thread.
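That is, keeping the rest of the method from the question unchanged, the dispatch would look something like:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    // load and decode the image here, then hop back to the main queue for the completion block
});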
If you want more control over how many of these things happen at once, I recommend that you look into using NSOperation. You can take your blocks and embed them in operations using NSBlockOperation, and then submit those operations to your own NSOperationQueue. An NSOperationQueue has a maxConcurrentOperationCount, and as an added benefit operations can also be cancelled after scheduling if needed.
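A rough sketch of that approach, reusing the names from the question (the queue, its width of 4, and the operation body are illustrative):
NSOperationQueue *imageQueue = [[NSOperationQueue alloc] init];
imageQueue.maxConcurrentOperationCount = 4; // tune to taste

NSBlockOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
    NSURL *URL = [NSURL URLWithString:URLString];
    NSData *data = URL ? [NSData dataWithContentsOfURL:URL] : nil;
    UIImage *image = data ? [UIImage imageWithData:data] : nil;
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        completion(image, URLString);
    }];
}];
[imageQueue addOperation:operation]; // keep a reference if you want to cancel it later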

You can use NSOperationQueue, which is supported by NSURLConnection.
And it has the following instance method:
- (void)setMaxConcurrentOperationCount:(NSInteger)count

Related

What happens if I use more than one dispatch_async?

I need to unzip 10 files in the documents directory, and for that I use dispatch_async like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // unzip 5 files
});
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // unzip another 5 files
});
My question is: will it do the unzipping concurrently?
If so, will the first 5 files and the other 5 files be getting unzipped at the same time?
How can I do it efficiently?
Any help would be appreciated.
Unzipping all of the files in a single block will be fine, because the work already runs asynchronously. Problems tend to arise when you try to run other background work simultaneously on top of an ongoing background operation.
DISPATCH_QUEUE_PRIORITY_HIGH: items dispatched to the queue will run at high priority, i.e. the queue will be scheduled for execution before any default-priority or low-priority queue.
If you want to run a single independent queued operation and you’re not concerned with other concurrent operations, you can use the global concurrent queue
dispatch_queue_t globalConcurrentQueue =
dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
It runs asynchronously on a background thread. This is done because parsing data may be a time-consuming task, and it could block the main thread, which would stop all animations and make the application unresponsive.
Calling dispatch_get_global_queue returns a concurrent, background queue (i.e. a queue capable of running more than one queue item at once on a background thread) that is managed by the OS. The concurrency of the operations really depends on the number of cores the device has.
Each block you pass to dispatch_async is a queue item, so the code inside the block will run linearly on the background thread when that block is dequeued and run. If for example you are unzipping with a for loop, as such:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    for (int i = 0; i < 5; i++) {
        // Unzip here
    }
});
Then the task's running time will be 5 x the per-file unzip time. Batching them into two sets of 5 could potentially mean the total unzip time is halved.
If you want all 10 files unzipped with maximum concurrency (i.e. as much concurrency as the system will allow), then you'd be better off dispatching 10 blocks to the global queue, as such:
for (int i = 0; i < 10; i++) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        // Unzip here
    });
}
Because global queues are concurrent queues, your second dispatch_async will happen concurrently with the first one. If you do this, you must ensure that:
The unzip class/instance is thread safe;
All model updates must be synchronized; and
The peak memory usage resulting from simultaneous zip operations isn't too high.
If the above conditions aren't satisfied (i.e. if you want to run these zip tasks consecutively, not concurrently), you can create a serial queue for unzipping (with dispatch_queue_create). That way, any unzipping tasks dispatched to that queue will be performed serially on a background queue, preventing them from running concurrently.
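For instance, a minimal sketch of that serial-queue variant (the queue label and unzipFileAtIndex: are placeholders for whatever you actually use):
// Create the queue once, e.g. in init; DISPATCH_QUEUE_SERIAL (or NULL) gives a serial queue.
dispatch_queue_t unzipQueue = dispatch_queue_create("com.example.unzip", DISPATCH_QUEUE_SERIAL);

for (int i = 0; i < 10; i++) {
    dispatch_async(unzipQueue, ^{
        // Runs off the main thread, but only one unzip at a time.
        [self unzipFileAtIndex:i]; // placeholder for your actual unzip call
    });
}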

Running multiple background threads iOS

Is it possible to run multiple background threads to improve performance on iOS? Currently I am using the following code to send, let's say, 50 network requests on a background thread:
dispatch_async(dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void){
// send 50 network requests
});
EDIT:
After updating my code to something like this, no performance gain was achieved :( (taken from here):
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
dispatch_group_t fetchGroup = dispatch_group_create();
// This will allow up to 8 parallel downloads.
dispatch_semaphore_t downloadSema = dispatch_semaphore_create(8);
// We start ALL our downloads in parallel throttled by the above semaphore.
for (NSURL *url in urlsArray) {
dispatch_group_async(fetchGroup, fetchQ, ^(void) {
dispatch_semaphore_wait(downloadSema, DISPATCH_TIME_FOREVER);
NSMutableURLRequest *headRequest = [NSMutableURLRequest requestWithURL:url cachePolicy: NSURLRequestUseProtocolCachePolicy timeoutInterval:60.0];
[headRequest setHTTPMethod:@"GET"];
[headRequest addValue:cookieString forHTTPHeaderField:@"Cookie"];
NSOperationQueue *queue = [[[NSOperationQueue alloc] init] autorelease];
[NSURLConnection sendAsynchronousRequest:headRequest
queue:queue // created at class init
completionHandler:^(NSURLResponse *response, NSData *data, NSError *error){
// do something with data or handle error
NSLog(#"request completed");
}];
dispatch_semaphore_signal(downloadSema);
});
}
// Now we wait until ALL our dispatch_group_async are finished.
dispatch_group_wait(fetchGroup, DISPATCH_TIME_FOREVER);
// Update your UI
dispatch_sync(dispatch_get_main_queue(), ^{
//[self updateUIFunction];
});
// Release resources
dispatch_release(fetchGroup);
dispatch_release(downloadSema);
dispatch_release(fetchQ);
Be careful not to confuse threads with queues
A single concurrent queue can operate across multiple threads, and GCD never guarantees which thread your tasks will run on.
The code you currently have will submit 50 network tasks to be run on a background concurrent queue, this much is true.
However, all 50 of these tasks will be executed on the same thread.
GCD basically acts like a giant thread pool, so your block (containing your 50 tasks) will be submitted to the next available thread in the pool. Therefore, if the tasks are synchronous, they will be executed serially. This means that each task will have to wait for the previous one to finish before proceeding. If they are asynchronous tasks, then they will all be dispatched immediately (which begs the question of why you need to use GCD in the first place).
If you want multiple synchronous tasks to run at the same time, then you need a separate dispatch_async for each of your tasks. This way you have a block per task, and therefore they will be dispatched to multiple threads from the thread pool and therefore can run concurrently.
You should be careful, though, not to submit too many network tasks to operate at the same time (you don't say specifically what they're doing), as it could potentially overload a server, as gnasher says.
You can easily limit the number of concurrent tasks (whether they're synchronous or asynchronous) operating at the same time using a GCD semaphore. For example, this code will limit the number of concurrent operations to 6:
long numberOfConcurrentTasks = 6;
dispatch_semaphore_t semaphore = dispatch_semaphore_create(numberOfConcurrentTasks);
for (int i = 0; i < 50; i++) {
dispatch_async(concurrentQueue, ^{
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
[self doNetworkTaskWithCompletion:^{
dispatch_semaphore_signal(semaphore);
NSLog(#"network task %i done", i);
}];
});
}
Edit
The problem with your code is the line:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", NULL);
When NULL is passed to the attr parameter, GCD creates a serial queue (it's also a lot more readable if you actually specify the queue type here). You want a concurrent queue. Therefore you want:
dispatch_queue_t fetchQ = dispatch_queue_create("Multiple Async Downloader", DISPATCH_QUEUE_CONCURRENT);
You need to be signalling your semaphore from within the completion handler of the request, not at the end of the dispatched block. Because the request is asynchronous, the semaphore currently gets signalled as soon as the request has been sent off, which immediately allows another network task to be queued. You want to wait for the network task to return before signalling.
[NSURLConnection sendAsynchronousRequest:headRequest
queue:queue // created at class init
completionHandler:^(NSURLResponse *response, NSData *data, NSError *error){
// do something with data or handle error
NSLog(#"request completed");
dispatch_semaphore_signal(downloadSema);
}];
Edit 2
I just noticed you are updating your UI using a dispatch_sync. I see no reason for it to be synchronous, as it'll just block the background thread until the main thread has updated the UI. I would use a dispatch_async to do this.
Edit 3
As CouchDeveloper points out, it is possible that the number of concurrent network requests might be being capped by the system.
The easiest solution appears to be migrating over to NSURLSession and configuring the maxConcurrentOperationCount property of the NSOperationQueue used. That way you can ditch the semaphores altogether and just dispatch all your network requests on a background queue, using a callback to update the UI on the main thread.
I am not at all familiar with NSURLSession though, I was only answering this from a GCD stand-point.
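For completeness, a sketch of what that migration might look like (the names are made up; note that HTTPMaximumConnectionsPerHost caps the simultaneous connections themselves, while the delegate queue's maxConcurrentOperationCount only limits how many completion handlers run at once):
NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
config.HTTPMaximumConnectionsPerHost = 6; // caps simultaneous connections to one host

NSOperationQueue *callbackQueue = [[NSOperationQueue alloc] init];
callbackQueue.maxConcurrentOperationCount = 6; // completion handlers run here, off the main thread

NSURLSession *session = [NSURLSession sessionWithConfiguration:config
                                                       delegate:nil
                                                  delegateQueue:callbackQueue];
for (NSURL *url in urlsArray) {
    [[session dataTaskWithURL:url completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        // do something with data or handle error
        dispatch_async(dispatch_get_main_queue(), ^{
            // update the UI for this item
        });
    }] resume];
}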
You can send multiple requests, but sending 50 requests in parallel is usually not a good idea. There is a good chance that a server confronted with 50 simultaneous requests will handle the first few and return errors for the rest. It depends on the server, but using a semaphore you can easily limit the number of running requests to anything you like, say four or eight. You need to experiment with the server in question to find out what works reliably on that server and gives you the highest performance.
And there seems to be a bit of confusion around: Usually all your network requests will run asynchronously. That is you send the request to the OS (which goes very quick usually), then nothing happens for a while, then a callback method of yours is called, processing the data. Whether you send the requests from the main thread or from a background thread doesn't make much difference.
Processing the results of these requests can be time consuming. You can process the results on a background thread. You can process the results of all requests on the same serial queue, which makes it a lot easier to avoid multithreading problems. That's what I do because it's easy and even in the worst case uses one processor for intensive processing of the results, while the other processor can do UI etc.
If you use synchronous network requests (which is a bad idea), then you need to dispatch each one by itself on a background thread. If you run a loop running 50 synchronous network requests on a background thread, then the second request will wait until the first one is completely finished.
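As a sketch of that serial-results-queue pattern (request and parseResponseData: are placeholders for your own request and processing code):
// One serial queue shared by all requests, so results are processed one at a time.
dispatch_queue_t resultsQueue = dispatch_queue_create("com.example.results", DISPATCH_QUEUE_SERIAL);

[NSURLConnection sendAsynchronousRequest:request
                                   queue:[NSOperationQueue mainQueue]
                       completionHandler:^(NSURLResponse *response, NSData *data, NSError *error) {
    dispatch_async(resultsQueue, ^{
        // Heavy processing happens here, never on the main thread.
        [self parseResponseData:data]; // placeholder for your own processing
    });
}];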

How to parallelize many (100+) tasks without hitting global GCD limit?

The problem:
When lazy-loading a list of 100+ icons in the background, I hit the GCD thread limit (64 threads), which causes my app to freeze up with a semaphore_wait_trap on the main thread. I want to restructure my threading code to prevent this from happening, while still loading the icons asynchronous to prevent UI blocking.
Context:
My app loads a screen with SVG icons on it. The number varies, typically from 10 to 200. The icons are drawn using a local SVG image or a remote SVG image (if there is a custom icon), and then post-processed to get the final image result.
Because this takes some time, and they aren't vital for the user, I want to load and post-process them in the background, so they would pop in over time. For every icon I use the following:
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(concurrentQueue, ^{
//code to be executed in the background
SVGKImage *iconImage = [Settings getIconImage:location];
dispatch_async(dispatch_get_main_queue(), ^{
//code to be executed on the main thread when background task is finished
if (iconImage) {
[iconImgView setImage:iconImage.UIImage];
}
});
});
The getIconImage method handles the initial loading of the base SVG, reading it synchronously with [NSInputStream inputStreamWithFileAtPath:path] if it is local, and with NSData from [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error] if it should be loaded remotely. This all happens synchronously.
Then there is some post-processing of recoloring the SVG, before it gets returned and put in the UIImageView on the main thread.
Question:
Is there a way to structure my code to allow for parallelized background loading but prevent deadlock because of too many threads?
Solution EDIT:
_iconOperationQueue = [[NSOperationQueue alloc]init];
_iconOperationQueue.maxConcurrentOperationCount = 8;
// Code will be executed on the background
[_iconOperationQueue addOperationWithBlock:^{
// I/O code
SVGKImage *baseIcon = [Settings getIconBaseSVG:location];
// CPU-only code
SVGKImage *iconImage = [Settings getIconImage:location withBaseSVG:baseIcon];
UIImage *svgImage = iconImage.UIImage; // Converting SVGKImage to UIImage is expensive, so don't do this on the main thread
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
// Code to be executed on the main thread when background task is finished
if (svgImage) {
[iconImgView setImage:svgImage];
}
}];
}];
Instead of directly using GCD with a concurrent queue, use an NSOperationQueue. Set its maxConcurrentOperationCount to something reasonable, like 4 or 8.
If you can, you should also separate I/O from pure computation: use the width-restricted operation queue for the I/O, and an unrestricted operation queue or plain GCD for the pure computation.
The reason is that I/O blocks. GCD detects that the system is idle and spins up another worker thread and starts another task from the queue. That blocks in I/O, too, so it does that some more until it hits its limit. Then, the I/O starts completing and the tasks unblock. Now you have oversubscribed the system resources (i.e. CPU) because there are more tasks in flight than cores and suddenly they are actually using CPU instead of being blocked by I/O.
Pure computation tasks don't provoke this problem because GCD sees that the system is actually busy and doesn't dequeue more tasks until earlier ones have completed.
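One way to express that split, reusing the method names from the question's edit (the queue setup and the width of 4 are illustrative):
NSOperationQueue *ioQueue = [[NSOperationQueue alloc] init];
ioQueue.maxConcurrentOperationCount = 4; // only the blocking I/O is width-restricted

[ioQueue addOperationWithBlock:^{
    // I/O-bound part: reading the local file or downloading the remote SVG.
    SVGKImage *baseIcon = [Settings getIconBaseSVG:location];

    // CPU-bound part: post-processing can go to an unrestricted global queue.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        SVGKImage *iconImage = [Settings getIconImage:location withBaseSVG:baseIcon];
        UIImage *svgImage = iconImage.UIImage;
        dispatch_async(dispatch_get_main_queue(), ^{
            if (svgImage) {
                [iconImgView setImage:svgImage];
            }
        });
    });
}];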
You can stay with GCD by using a semaphore, something like this. Run the whole loop in the background, otherwise waiting on the semaphore will stall the UI:
dispatch_semaphore_t throttleSemaphore = dispatch_semaphore_create(8);
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
for /* Loop through your images */ {
dispatch_semaphore_wait(throttleSemaphore, DISPATCH_TIME_FOREVER);
dispatch_async(concurrentQueue, ^{
//code to be executed in the background
SVGKImage *iconImage = [Settings getIconImage:location];
dispatch_async(dispatch_get_main_queue(), ^{
//code to be executed on the main thread when background task is finished
if (iconImage) {
[iconImgView setImage:iconImage.UIImage];
}
dispatch_semaphore_signal(throttleSemaphore);
});
});
}

Get underlying dispatch_queue_t from NSOperationQueue

I seem to have some confusion between dispatch_queue_t and NSOperationQueue queues.
By default, AFNetworking's AFImageRequestOperation will execute the success callback block on the application's main thread. To change this, AFHTTPRequestOperation has the property successCallbackQueue which lets you choose on which queue to run the callback.
I'm trying to execute the success callback on the same background queue / background threads which already did the HTTP request. Instead of returning to the main thread, the NSOperationQueue which ran the HTTP request should run the callback as well, since there are some heavy calculations I need to do using some of the returned images.
My first try was to set successCallbackQueue to the NSOperationQueue instance on which the AFImageRequestOperation ran. However, the successCallbackQueue property is of type dispatch_queue_t, so I need a way to get the underlying dispatch_queue_t of my NSOperation instance, if there is such a thing.
Is that possible, or do I need to create a separate dispatch_queue_t?
The reason I ask: it's somewhat strange that AFNetworking inherits from NSOperation but expects us to use dispatch_queue_t queues for the callbacks, kind of mixing the two paradigms, dispatch_queue_t and NSOperationQueue.
Thanks for any hints!
There is no such thing; there isn't a one-to-one correspondence between an NSOperationQueue and a dispatch_queue_t, and the queueing concepts in the two APIs are very different (e.g. NSOperationQueue does not have strict FIFO queueing like GCD does).
The only dispatch queue used by NSOperationQueue to execute your code is the default priority global concurrent queue.
NSOperationQueue is not your bottleneck with AFNetworking. Request operations are bound by network, not CPU or memory. All of the work is done asynchronously in dispatch queues, which are accessible as properties in AFHTTPRequestOperation.
It is not advisable to use the network thread to do any processing. This will not improve performance in any way.
Instead, if you're noticing performance issues, try limiting the maximum number of concurrent operations in the operation queue, as a way to indirectly control the amount of work being done by those background processing queues.
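For example, assuming the request operations are enqueued on an NSOperationQueue you control (requestOperation here stands in for the AFImageRequestOperation you already build elsewhere):
NSOperationQueue *requestQueue = [[NSOperationQueue alloc] init];
requestQueue.maxConcurrentOperationCount = 4; // indirectly limits how much result processing is in flight

[requestQueue addOperation:requestOperation]; // AFImageRequestOperation is an NSOperation subclass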
It is interesting that AFHTTPClient uses an NSOperationQueue to run the AFHTTPRequestOperations but GCD dispatch_queues to handle the results.
Regarding NSOperationQueue the Apple docs say:
Note: In iOS 4 and later, operation queues use Grand Central Dispatch to execute operations.
but there doesn't seem to be a public API to get the dispatch_queue for a given operation.
If it's not that important to you that the success callback runs on exactly the same queue/thread the original operation was executed on, why not set:
successCallbackQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND,0);
Xcode 6.4 for iOS 8.4, ARC enabled
1) "...so I need a way to get the underlying dispatch_queue_t of my NSOperation instance, if there is such a thing."
There is a property of NSOperationQueue that can help:
@property(assign) dispatch_queue_t underlyingQueue
It can be used as follows to assign to NSOperationQueue:
NSOperationQueue *concurrentQueueForServerCommunication = [[NSOperationQueue alloc] init];
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
concurrentQueueForServerCommunication.underlyingQueue = concurrentQueue;
Or assign from NSOperationQueue:
NSOperationQueue *concurrentQueueForServerCommunication = [[NSOperationQueue alloc] init];
dispatch_queue_t concurrentQueue = concurrentQueueForServerCommunication.underlyingQueue;
Not sure if the API you are using for network communication updates your UI after the network task's completion, but in case it does not, remember to get back onto the main queue when the completion block is executed:
dispatch_sync(dispatch_get_main_queue(), ^{
    //Update your UI here...
});
Hope this helps! Cheers.
First of all, it's good behaviour to execute the success block of the AFImageRequestOperation on the main thread, because the main usage of this operation is to download an image in the background and display it in the UI (which should happen on the main thread). But in order to satisfy the needs of those users (your case too) who want to execute the callback on other threads, there is also successCallbackQueue.
So you can create your own dispatch_queue_t with the dispatch_queue_create method, or, the recommended way, use dispatch_get_global_queue to get one of the global concurrent queues.
In either case, make sure that if you are making changes to the UI in your success block, you put them inside dispatch_async(dispatch_get_main_queue(), ^{ // main op here });
Swift 3 code, based on @serge-k's answer:
// Initialize the operation queue.
let operationQueue = OperationQueue()
operationQueue.name = "com.example.myOperationQueue"
operationQueue.qualityOfService = .userInitiated
// Initialize a backing DispatchQueue so we can reuse it for network operations.
// Because no additional info is given, the dispatch queue will have the same QoS as the operation queue.
let operationQueueUnderlyingQueue = DispatchQueue(label: "com.example.underlyingQueue")
operationQueue.underlyingQueue = operationQueueUnderlyingQueue
You can then use this in Alamofire (or AFNetworking) in the following manner:
Alamofire.request("https://example.com/get", parameters: nil).validate().responseJSON(queue: operationQueue.underlyingQueue) { response in
    // response handler code
}
A warning here, from Apple's documentation on setting the OperationQueue's underlying queue:
The value of this property should only be set if there are no operations in the queue; setting the value of this property when operationCount is not equal to 0 raises an invalidArgumentException. The value of this property must not be the value returned by dispatch_get_main_queue(). The quality-of-service level set for the underlying dispatch queue overrides any value set for the operation queue's qualityOfService property.

How should I use GCD dispatch_barrier_async in iOS (seems to execute before and not after other blocks)

I'm trying to synchronize the following code in iOS 5:
An object has a method which makes an HTTP request from which it gets some data, including a URL to an image.
Once the data arrives, the textual data is used to populate a CoreData model.
At the same time, a second thread is dispatched async to download the image; this thread will signal via KVO to a viewController when the image is already cached and available in the CoreData model.
Since the image download will take a while, we immediately return the CoreData object, which has all attributes except the image, to the caller.
Also, when the second thread is done downloading, the CoreData model can be saved.
This is the (simplified) code:
- (void)insideSomeMethod
{
[SomeHTTPRequest withCompletionHandler:
^(id retrievedData)
{
if(!retrievedData)
{
handler(nil);
}
// Populate CoreData model with retrieved Data...
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
NSURL* userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});
handler(aCoreDataNSManagedObject);
[self shouldCommitChangesToModel];
}];
}
- (void)shouldCommitChangesToModel
{
dispatch_barrier_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
NSError *error = nil;
if(![managedObjectContext save:&error])
{
// Handle error
}
});
}
But what's going on is that the barrier-based save-block is always executed before the image-loading block. That is,
dispatch_barrier_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
NSError *error = nil;
if(![managedObjectContext save:&error])
{
// Handle error
}
});
Executes before:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
NSURL* userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});
So obviously I'm not really dispatching the image-loading block before the barrier, or the barrier would wait until the image-loading block is done before executing (which was my intention).
What am I doing wrong? How do I make sure the image-loading block is enqueued before the barrier block?
At first glance the issue may be that you are dispatching the barrier block on a global concurrent queue. You can only use barrier blocks on your own custom concurrent queue. Per the GCD docs on dispatch_barrier_async, if you dispatch a block to a global queue, it will behave like a normal dispatch_async call.
Mike Ash has a good blog post on GCD barrier blocks: http://www.mikeash.com/pyblog/friday-qa-2011-10-14-whats-new-in-gcd.html
Good luck
T
You need to create your own queue and not dispatch to the global queues, as per the ADC docs:
The queue you specify should be a concurrent queue that you create yourself using the dispatch_queue_create function. If the queue you pass to this function is a serial queue or one of the global concurrent queues, this function behaves like the dispatch_async function.
from https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html#//apple_ref/c/func/dispatch_barrier_async .
You can create tons of your own GCD queues just fine. GCD queues are very lightweight and you can create lots of them without issue. You just need to release them when you're done with them.
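A minimal sketch of the custom-queue version, reusing the identifiers from the question (the queue label is made up):
// Create once and reuse for everything the barrier has to order itself against.
dispatch_queue_t syncQueue = dispatch_queue_create("com.example.model-sync", DISPATCH_QUEUE_CONCURRENT);

dispatch_async(syncQueue, ^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});

dispatch_barrier_async(syncQueue, ^{
    // Runs only after every block submitted to syncQueue above has finished.
    NSError *error = nil;
    if (![managedObjectContext save:&error]) {
        // Handle error
    }
});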
For what you seem to be trying to solve, dispatch_barrier_async may not be the best solution.
Have a look at the Migrating Away From Threads section of the Concurrency Programming Guide. Just using dispatch_sync on a your own serial queue may solve your synchronization problem.
Alternatively, you can use NSOperation and NSOperationQueue. Unlike GCD, NSOperation allows you to easily manage dependencies (you can do it using GCD, but it can get ugly fast).
I'm a little late to the party, but maybe next time you could try using dispatch_groups to your advantage. http://www.raywenderlich.com/63338/grand-central-dispatch-in-depth-part-2
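For instance, a dispatch_group sketch of the same flow, again reusing the question's identifiers (Core Data threading concerns aside):
dispatch_group_t imageGroup = dispatch_group_create();

dispatch_group_async(imageGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});

// Runs once every block added to the group has finished.
dispatch_group_notify(imageGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSError *error = nil;
    if (![managedObjectContext save:&error]) {
        // Handle error
    }
});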

Resources