Waiting until two async blocks are executed before starting another block - ios

When using GCD, we want to wait until two async blocks are executed and done before moving on to the next steps of execution. What is the best way to do that?
We tried the following, but it doesn't seem to work:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^ {
// block1
});
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^ {
// block2
});
// wait until both the block1 and block2 are done before start block3
// how to do that?
dispatch_sync(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^ {
// block3
});

Use dispatch groups: for an example, see "Waiting on Groups of Queued Tasks" in the "Dispatch Queues" chapter of Apple's Concurrency Programming Guide (in the iOS Developer Library).
Your example could look something like this:
dispatch_group_t group = dispatch_group_create();
dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // block1
    NSLog(@"Block1");
    [NSThread sleepForTimeInterval:5.0];
    NSLog(@"Block1 End");
});
dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // block2
    NSLog(@"Block2");
    [NSThread sleepForTimeInterval:8.0];
    NSLog(@"Block2 End");
});
dispatch_group_notify(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    // block3
    NSLog(@"Block3");
});
// only for non-ARC projects; handled automatically in ARC-enabled projects
dispatch_release(group);
and could produce output like this:
2012-08-11 16:10:18.049 Dispatch[11858:1e03] Block1
2012-08-11 16:10:18.052 Dispatch[11858:1d03] Block2
2012-08-11 16:10:23.051 Dispatch[11858:1e03] Block1 End
2012-08-11 16:10:26.053 Dispatch[11858:1d03] Block2 End
2012-08-11 16:10:26.054 Dispatch[11858:1d03] Block3

Expanding on Jörn Eyrich's answer (upvote his answer if you upvote this one): if you do not have control over the dispatch_async calls for your blocks, as might be the case for async completion blocks, you can use GCD groups directly via dispatch_group_enter and dispatch_group_leave.
In this example, we're pretending computeInBackground is something we cannot change (imagine it is a delegate callback, NSURLConnection completionHandler, or whatever), and thus we don't have access to the dispatch calls.
// create a group
dispatch_group_t group = dispatch_group_create();

// pair a dispatch_group_enter with each dispatch_group_leave
dispatch_group_enter(group);     // pair 1 enter
[self computeInBackground:1 completion:^{
    NSLog(@"1 done");
    dispatch_group_leave(group); // pair 1 leave
}];

// again... (and again...)
dispatch_group_enter(group);     // pair 2 enter
[self computeInBackground:2 completion:^{
    NSLog(@"2 done");
    dispatch_group_leave(group); // pair 2 leave
}];

// Next, set up the code to execute after all the paired enter/leave calls.
//
// Option 1: get notified via a block that will be scheduled on the specified queue:
dispatch_group_notify(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSLog(@"finally!");
});

// Option 2: block and wait for the calls to complete in code already running
// (as cbartel points out, be careful with running this on the main/UI queue!):
//
// dispatch_group_wait(group, DISPATCH_TIME_FOREVER); // blocks current thread
// NSLog(@"finally!");
In this example, computeInBackground:completion: is implemented as:
- (void)computeInBackground:(int)no completion:(void (^)(void))block {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        NSLog(@"%d starting", no);
        sleep(no*2);
        block();
    });
}
Output (with timestamps from a run):
12:57:02.574 2 starting
12:57:02.574 1 starting
12:57:04.590 1 done
12:57:06.590 2 done
12:57:06.591 finally!

With Swift 5.1, Grand Central Dispatch offers many ways to solve your problem. According to your needs, you may choose one of the seven patterns shown in the following Playground snippets.
#1. Using DispatchGroup, DispatchGroup's notify(qos:flags:queue:execute:) and DispatchQueue's async(group:qos:flags:execute:)
The Apple Developer Concurrency Programming Guide states about DispatchGroup:
Dispatch groups are a way to block a thread until one or more tasks finish executing. You can use this behavior in places where you cannot make progress until all of the specified tasks are complete. For example, after dispatching several tasks to compute some data, you might use a group to wait on those tasks and then process the results when they are done.
import Foundation
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
let queue = DispatchQueue(label: "com.company.app.queue", attributes: .concurrent)
let group = DispatchGroup()
queue.async(group: group) {
    print("#1 started")
    Thread.sleep(forTimeInterval: 5)
    print("#1 finished")
}
queue.async(group: group) {
    print("#2 started")
    Thread.sleep(forTimeInterval: 2)
    print("#2 finished")
}
group.notify(queue: queue) {
    print("#3 finished")
}
/*
prints:
#1 started
#2 started
#2 finished
#1 finished
#3 finished
*/
#2. Using DispatchGroup, DispatchGroup's wait(), DispatchGroup's enter() and DispatchGroup's leave()
import Foundation
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
let queue = DispatchQueue(label: "com.company.app.queue", attributes: .concurrent)
let group = DispatchGroup()
group.enter()
queue.async {
    print("#1 started")
    Thread.sleep(forTimeInterval: 5)
    print("#1 finished")
    group.leave()
}
group.enter()
queue.async {
    print("#2 started")
    Thread.sleep(forTimeInterval: 2)
    print("#2 finished")
    group.leave()
}
queue.async {
    group.wait()
    print("#3 finished")
}
/*
prints:
#1 started
#2 started
#2 finished
#1 finished
#3 finished
*/
Note that you can also mix DispatchGroup wait() with DispatchQueue async(group:qos:flags:execute:) or mix DispatchGroup enter() and DispatchGroup leave() with DispatchGroup notify(qos:flags:queue:execute:).
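For instance, the enter()/leave() pairing can be combined with notify(queue:execute:) like this (a minimal sketch; the queue label, the lock, and the `finished` array are illustrative additions, not from the original answer):

```swift
import Dispatch
import Foundation

let queue = DispatchQueue(label: "com.company.app.mixed", attributes: .concurrent)
let group = DispatchGroup()
let lock = NSLock()
var finished: [Int] = []

for i in 1...2 {
    group.enter()          // manually enter for each unit of work
    queue.async {
        lock.lock()
        finished.append(i) // lock-protected shared state
        lock.unlock()
        group.leave()      // leave from the completion side
    }
}

group.notify(queue: queue) {
    print("all done: \(finished.sorted())")
}
```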
#3. Using Dispatch​Work​Item​Flags barrier and DispatchQueue's async(group:qos:flags:execute:)
The Grand Central Dispatch Tutorial for Swift 4: Part 1/2 article from raywenderlich.com gives a definition of barriers:
Dispatch barriers are a group of functions acting as a serial-style bottleneck when working with concurrent queues. When you submit a DispatchWorkItem to a dispatch queue you can set flags to indicate that it should be the only item executed on the specified queue for that particular time. This means that all items submitted to the queue prior to the dispatch barrier must complete before the DispatchWorkItem will execute.
Usage:
import Foundation
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
let queue = DispatchQueue(label: "com.company.app.queue", attributes: .concurrent)
queue.async {
    print("#1 started")
    Thread.sleep(forTimeInterval: 5)
    print("#1 finished")
}
queue.async {
    print("#2 started")
    Thread.sleep(forTimeInterval: 2)
    print("#2 finished")
}
queue.async(flags: .barrier) {
    print("#3 finished")
}
/*
prints:
#1 started
#2 started
#2 finished
#1 finished
#3 finished
*/
#4. Using DispatchWorkItem, Dispatch​Work​Item​Flags's barrier and DispatchQueue's async(execute:)
import Foundation
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
let queue = DispatchQueue(label: "com.company.app.queue", attributes: .concurrent)
queue.async {
    print("#1 started")
    Thread.sleep(forTimeInterval: 5)
    print("#1 finished")
}
queue.async {
    print("#2 started")
    Thread.sleep(forTimeInterval: 2)
    print("#2 finished")
}
let dispatchWorkItem = DispatchWorkItem(qos: .default, flags: .barrier) {
    print("#3 finished")
}
queue.async(execute: dispatchWorkItem)
/*
prints:
#1 started
#2 started
#2 finished
#1 finished
#3 finished
*/
#5. Using DispatchSemaphore, DispatchSemaphore's wait() and DispatchSemaphore's signal()
Soroush Khanlou wrote the following lines in The GCD Handbook blog post:
Using a semaphore, we can block a thread for an arbitrary amount of time, until a signal from another thread is sent. Semaphores, like the rest of GCD, are thread-safe, and they can be triggered from anywhere. Semaphores can be used when there’s an asynchronous API that you need to make synchronous, but you can’t modify it.
Apple Developer API Reference also gives the following discussion for DispatchSemaphore init(value:​) initializer:
Passing zero for the value is useful for when two threads need to reconcile the completion of a particular event. Passing a value greater than zero is useful for managing a finite pool of resources, where the pool size is equal to the value.
Usage:
import Foundation
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
let queue = DispatchQueue(label: "com.company.app.queue", attributes: .concurrent)
let semaphore = DispatchSemaphore(value: 0)
queue.async {
    print("#1 started")
    Thread.sleep(forTimeInterval: 5)
    print("#1 finished")
    semaphore.signal()
}
queue.async {
    print("#2 started")
    Thread.sleep(forTimeInterval: 2)
    print("#2 finished")
    semaphore.signal()
}
queue.async {
    semaphore.wait()
    semaphore.wait()
    print("#3 finished")
}
/*
prints:
#1 started
#2 started
#2 finished
#1 finished
#3 finished
*/
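The "make an asynchronous API synchronous" use case mentioned in the quote above can be sketched like this (fetchValue and fetchValueSync are illustrative names, not part of any framework):

```swift
import Dispatch

// A hypothetical async API that we cannot modify.
func fetchValue(completion: @escaping (Int) -> Void) {
    DispatchQueue.global().async {
        completion(42)
    }
}

// A blocking wrapper built with a zero-valued semaphore.
// Never call this on the queue that delivers the completion,
// or the wait() will deadlock.
func fetchValueSync() -> Int {
    let semaphore = DispatchSemaphore(value: 0)
    var result = 0
    fetchValue { value in
        result = value
        semaphore.signal() // release the waiting caller
    }
    semaphore.wait()       // block until the completion fires
    return result
}
```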
#6. Using OperationQueue and Operation's addDependency(_:)
The Apple Developer API Reference states about Operation​Queue:
Operation queues use the libdispatch library (also known as Grand Central Dispatch) to initiate the execution of their operations.
Usage:
import Foundation
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
let operationQueue = OperationQueue()
let blockOne = BlockOperation {
    print("#1 started")
    Thread.sleep(forTimeInterval: 5)
    print("#1 finished")
}
let blockTwo = BlockOperation {
    print("#2 started")
    Thread.sleep(forTimeInterval: 2)
    print("#2 finished")
}
let blockThree = BlockOperation {
    print("#3 finished")
}
blockThree.addDependency(blockOne)
blockThree.addDependency(blockTwo)
operationQueue.addOperations([blockThree, blockTwo, blockOne], waitUntilFinished: false)
/*
prints:
#1 started
#2 started
#2 finished
#1 finished
#3 finished
or
#2 started
#1 started
#2 finished
#1 finished
#3 finished
*/
#7. Using OperationQueue and OperationQueue's addBarrierBlock(_:) (requires iOS 13)
import Foundation
import PlaygroundSupport
PlaygroundPage.current.needsIndefiniteExecution = true
let operationQueue = OperationQueue()
let blockOne = BlockOperation {
    print("#1 started")
    Thread.sleep(forTimeInterval: 5)
    print("#1 finished")
}
let blockTwo = BlockOperation {
    print("#2 started")
    Thread.sleep(forTimeInterval: 2)
    print("#2 finished")
}
operationQueue.addOperations([blockTwo, blockOne], waitUntilFinished: false)
operationQueue.addBarrierBlock {
    print("#3 finished")
}
/*
prints:
#1 started
#2 started
#2 finished
#1 finished
#3 finished
or
#2 started
#1 started
#2 finished
#1 finished
#3 finished
*/

Another GCD alternative is a barrier:
dispatch_queue_t queue = dispatch_queue_create("com.company.app.queue", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(queue, ^{
    NSLog(@"start one!\n");
    sleep(4);
    NSLog(@"end one!\n");
});
dispatch_async(queue, ^{
    NSLog(@"start two!\n");
    sleep(2);
    NSLog(@"end two!\n");
});
dispatch_barrier_async(queue, ^{
    NSLog(@"Hi, I'm the final block!\n");
});
Just create a concurrent queue, dispatch your two blocks, and then dispatch the final block with a barrier, which will make it wait for the other two to finish.

I know you asked about GCD, but if you wanted, NSOperationQueue also handles this sort of stuff really gracefully, e.g.:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSOperation *completionOperation = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"Starting 3");
}];

NSOperation *operation;

operation = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"Starting 1");
    sleep(7);
    NSLog(@"Finishing 1");
}];
[completionOperation addDependency:operation];
[queue addOperation:operation];

operation = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"Starting 2");
    sleep(5);
    NSLog(@"Finishing 2");
}];
[completionOperation addDependency:operation];
[queue addOperation:operation];

[queue addOperation:completionOperation];

The answers above are all good, but they all miss one thing: when you use dispatch_group_enter/dispatch_group_leave, the group's tasks (blocks) execute on whatever thread they were invoked from.
- (IBAction)buttonAction:(id)sender {
    dispatch_queue_t demoQueue = dispatch_queue_create("com.demo.group", DISPATCH_QUEUE_CONCURRENT);
    dispatch_async(demoQueue, ^{
        dispatch_group_t demoGroup = dispatch_group_create();
        for (int i = 0; i < 10; i++) {
            dispatch_group_enter(demoGroup);
            [self testMethod:i block:^{
                dispatch_group_leave(demoGroup);
            }];
        }
        dispatch_group_notify(demoGroup, dispatch_get_main_queue(), ^{
            NSLog(@"All group tasks are done!");
        });
    });
}

- (void)testMethod:(NSInteger)index block:(void (^)(void))completeBlock {
    NSLog(@"Group task started...%ld", (long)index);
    NSLog(@"Current thread is %@ thread", [NSThread isMainThread] ? @"main" : @"not main");
    [NSThread sleepForTimeInterval:1.f];
    if (completeBlock) {
        completeBlock();
    }
}
This runs on the created concurrent queue demoQueue. If I don't create any queue, it runs on the main thread:
- (IBAction)buttonAction:(id)sender {
    dispatch_group_t demoGroup = dispatch_group_create();
    for (int i = 0; i < 10; i++) {
        dispatch_group_enter(demoGroup);
        [self testMethod:i block:^{
            dispatch_group_leave(demoGroup);
        }];
    }
    dispatch_group_notify(demoGroup, dispatch_get_main_queue(), ^{
        NSLog(@"All group tasks are done!");
    });
}

- (void)testMethod:(NSInteger)index block:(void (^)(void))completeBlock {
    NSLog(@"Group task started...%ld", (long)index);
    NSLog(@"Current thread is %@ thread", [NSThread isMainThread] ? @"main" : @"not main");
    [NSThread sleepForTimeInterval:1.f];
    if (completeBlock) {
        completeBlock();
    }
}
And there's a third way to have the tasks executed on another thread:
- (IBAction)buttonAction:(id)sender {
    dispatch_queue_t demoQueue = dispatch_queue_create("com.demo.group", DISPATCH_QUEUE_CONCURRENT);
    // dispatch_async(demoQueue, ^{
    __weak ViewController *weakSelf = self;
    dispatch_group_t demoGroup = dispatch_group_create();
    for (int i = 0; i < 10; i++) {
        dispatch_group_enter(demoGroup);
        dispatch_async(demoQueue, ^{
            [weakSelf testMethod:i block:^{
                dispatch_group_leave(demoGroup);
            }];
        });
    }
    dispatch_group_notify(demoGroup, dispatch_get_main_queue(), ^{
        NSLog(@"All group tasks are done!");
    });
    // });
}
Of course, as mentioned you can use dispatch_group_async to get what you want.

The first answer is essentially correct, but if you want the very simplest way to accomplish the desired result, here's a stand-alone code example demonstrating how to do it with a semaphore (which is also how dispatch groups work behind the scenes, JFYI):
#include <dispatch/dispatch.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    dispatch_queue_t myQ = dispatch_queue_create("my.conQ", DISPATCH_QUEUE_CONCURRENT);
    dispatch_semaphore_t mySem = dispatch_semaphore_create(0);

    dispatch_async(myQ, ^{ printf("Hi, I'm block one!\n"); sleep(2); dispatch_semaphore_signal(mySem); });
    dispatch_async(myQ, ^{ printf("Hi, I'm block two!\n"); sleep(4); dispatch_semaphore_signal(mySem); });
    dispatch_async(myQ, ^{
        // wait once per signaling block, so the final block starts only after both are done
        dispatch_semaphore_wait(mySem, DISPATCH_TIME_FOREVER);
        dispatch_semaphore_wait(mySem, DISPATCH_TIME_FOREVER);
        printf("Hi, I'm the final block!\n");
    });
    dispatch_main();
}

Swift 4.2 example:
let group = DispatchGroup.group(count: 2)
group.notify(queue: DispatchQueue.main) {
    self.renderingLine = false
    // all groups are done
}
DispatchQueue.main.async {
    self.renderTargetNode(floorPosition: targetPosition, animated: closedContour) {
        group.leave()
        // first done
    }
    self.renderCenterLine(position: targetPosition, animated: closedContour) {
        group.leave()
        // second done
    }
}
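Note that DispatchGroup.group(count:) is not a standard Dispatch API; the snippet presumably relies on a small convenience extension along these lines (an assumption, since the original extension is not shown):

```swift
import Dispatch

extension DispatchGroup {
    /// Creates a group that has already been entered `count` times,
    /// so each completion handler only needs to call leave().
    static func group(count: Int) -> DispatchGroup {
        let group = DispatchGroup()
        for _ in 0..<count {
            group.enter()
        }
        return group
    }
}
```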

The accepted answer in Swift:
let group = DispatchGroup()
DispatchQueue.global(qos: .default).async(group: group) {
    // block1
    print("Block1")
    Thread.sleep(forTimeInterval: 5.0)
    print("Block1 End")
}
DispatchQueue.global(qos: .default).async(group: group) {
    // block2
    print("Block2")
    Thread.sleep(forTimeInterval: 8.0)
    print("Block2 End")
}
group.notify(queue: DispatchQueue.global(qos: .default)) {
    // block3
    print("Block3")
}
// No dispatch_release here: in Swift, the group's lifetime is managed automatically.

Not to say the other answers aren't great for certain circumstances, but this is one snippet I always use, from Google:
- (void)runSigninThenInvokeSelector:(SEL)signInDoneSel {
if (signInDoneSel) {
[self performSelector:signInDoneSel];
}
}

Related

Why is the performance gap between GCD, ObjC and Swift so large

OC: Simulator iPhoneSE iOS 13;
(60-80 seconds)
NSTimeInterval t1 = NSDate.date.timeIntervalSince1970;
NSInteger count = 100000;
for (NSInteger i = 0; i < count; i++) {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        NSLog(@"op begin: %ld", (long)i);
        self.idx = i;
        NSLog(@"op end: %ld", (long)i);
        if (i == count - 1) {
            NSLog(@"Elapsed: %f", NSDate.date.timeIntervalSince1970 - t1);
        }
    });
}
Swift: Simulator iPhoneSE iOS 13;
(10-14 seconds)
let t1 = Date().timeIntervalSince1970
let count = 100000
for i in 0..<count {
    DispatchQueue.global().async {
        print("subOp begin i: \(i), thread: \(Thread.current)")
        self.idx = i
        print("subOp end i: \(i), thread: \(Thread.current)")
        if i == count - 1 {
            print("Elapsed: \(Date().timeIntervalSince1970 - t1)")
        }
    }
}
My previous work was written in Objective-C. Having recently learned Swift, I was surprised by the wide performance gap for such a common use of GCD.
tl;dr
You most likely are doing a debug build. In Swift, debug builds do all sort of safety checks. If you do a release build, it turns off those safety checks, and the performance is largely indistinguishable from the Objective-C rendition.
A few observations:
Neither of these are thread-safe regarding their interaction with index, namely your interaction with this property is not being synchronized. If you’re going to update a property from multiple threads, you have to synchronize your access (with locks, serial queue, reader-writer concurrent queue, etc.).
In your Swift code, are you doing a release build? Debug builds perform safety checks that the Objective-C rendition doesn't. Do a "release" build of both to compare apples-to-apples.
Your Objective-C rendition is using DISPATCH_QUEUE_PRIORITY_HIGH, but your Swift iteration is using a QoS of .default. I’d suggest using a QoS of .userInteractive or .userInitiated to be comparable.
Global queues are concurrent queues. So, checking i == count - 1 is not sufficient to know whether all of the current tasks are done. Generally we’d use dispatch groups or concurrentPerform/dispatch_apply to know when they’re done.
FWIW, dispatching 100,000 tasks to a global queue is not advised, because you’re going to quickly exhaust the worker threads. Don’t do this sort of thread explosion. You can have unexpected lockups in your app if you do that. In Swift we’d use concurrentPerform. In Objective-C we’d use dispatch_apply.
You’re doing NSLog in Objective-C and print in Swift. Those are not the same thing. I’d suggest doing NSLog in both if you want to compare performance.
When performance testing, I might suggest using unit tests’ measure routine, which repeats it a number of times.
Anyway, when correcting for all of this, the time for the Swift code was largely indistinguishable from the Objective-C performance.
Here are the routines I used. In Swift:
class SwiftExperiment {
    // This is not advisable, because it suffers from thread explosion, which will
    // exhaust the very limited number of worker threads.
    func experiment1(completion: @escaping (TimeInterval) -> Void) {
        let t1 = Date()
        let count = 100_000
        let group = DispatchGroup()
        for i in 0..<count {
            DispatchQueue.global(qos: .userInteractive).async(group: group) {
                NSLog("op end: %ld", i)
            }
        }
        group.notify(queue: .main) {
            let elapsed = Date().timeIntervalSince(t1)
            completion(elapsed)
        }
    }

    // This is safer (though it's a poor use of `concurrentPerform`, as there's not
    // enough work being done on each thread).
    func experiment2(completion: @escaping (TimeInterval) -> Void) {
        let t1 = Date()
        let count = 100_000
        DispatchQueue.global(qos: .userInteractive).async {
            DispatchQueue.concurrentPerform(iterations: count) { i in
                NSLog("op end: %ld", i)
            }
            let elapsed = Date().timeIntervalSince(t1)
            completion(elapsed)
        }
    }
}
And the equivalent Objective-C routines:
// This is not advisable, because it suffers from thread explosion, which will
// exhaust the very limited number of worker threads.
- (void)experiment1:(void (^ _Nonnull)(NSTimeInterval))block {
    NSDate *t1 = [NSDate date];
    NSInteger count = 100000;
    dispatch_group_t group = dispatch_group_create();
    for (NSInteger i = 0; i < count; i++) {
        dispatch_group_async(group, dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
            NSLog(@"op end: %ld", (long)i);
        });
    }
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSTimeInterval elapsed = [NSDate.date timeIntervalSinceDate:t1];
        NSLog(@"Elapsed: %f", elapsed);
        block(elapsed);
    });
}

// This is safer (though it's a poor use of `dispatch_apply`, as there's not
// enough work being done on each thread).
- (void)experiment2:(void (^ _Nonnull)(NSTimeInterval))block {
    NSDate *t1 = [NSDate date];
    NSInteger count = 100000;
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^{
        dispatch_apply(count, dispatch_get_global_queue(QOS_CLASS_USER_INTERACTIVE, 0), ^(size_t index) {
            NSLog(@"op end: %zu", index);
        });
        NSTimeInterval elapsed = [NSDate.date timeIntervalSinceDate:t1];
        NSLog(@"Elapsed: %f", elapsed);
        block(elapsed);
    });
}
And my unit tests:
class MyApp4Tests: XCTestCase {
    func testSwiftExperiment1() throws {
        let experiment = SwiftExperiment()
        measure {
            let e = expectation(description: "experiment1")
            experiment.experiment1 { elapsed in
                e.fulfill()
            }
            wait(for: [e], timeout: 1000)
        }
    }

    func testSwiftExperiment2() throws {
        let experiment = SwiftExperiment()
        measure {
            let e = expectation(description: "experiment2")
            experiment.experiment2 { elapsed in
                e.fulfill()
            }
            wait(for: [e], timeout: 1000)
        }
    }

    func testObjcExperiment1() throws {
        let experiment = ObjectiveCExperiment()
        measure {
            let e = expectation(description: "experiment1")
            experiment.experiment1 { elapsed in
                e.fulfill()
            }
            wait(for: [e], timeout: 1000)
        }
    }

    func testObjcExperiment2() throws {
        let experiment = ObjectiveCExperiment()
        measure {
            let e = expectation(description: "experiment2")
            experiment.experiment2 { elapsed in
                e.fulfill()
            }
            wait(for: [e], timeout: 1000)
        }
    }
}
That resulted in largely indistinguishable timings between the Swift and Objective-C experiments.

iOS GCD Max concurrent operations

On an iPad how many parallel operations can start to get maximum performance? in each run a query operation , calculations , etc ...
It depends on the iPad model (CPU)?
int count = [objects count];
if (count > 0) {
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    for (int i = 0; i < count; i++) {
        dispatch_group_async(group, queue, ^{
            for (int j = i + 1; j < count; j++) {
                dispatch_group_async(group, queue, ^{
                    /** LOTS AND LOTS OF WORK FOR EACH OBJECT **/
                });
            }
        });
    }
    dispatch_group_notify(group, queue, ^{
        /** END OF ALL OPERATIONS */
    });
}
This is basically a UX question; it depends on the needs of the end user.
Does the user really need all those computations started and finished quickly?
Can you delay some or most of them?
It's good practice to show the progress of each computation (a progress bar) and to notify the user upon completion.
Let the user choose which tasks to start / stop / pause (this is a very important feature).
If all the tasks are local, CPU-intensive, and not related to fetching resources over the network, then it depends on the device's CPU: how many threads it can run in parallel.
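As a rough rule for CPU-bound work, one option is to cap concurrency at the device's active core count instead of dispatching everything at once; a minimal sketch using OperationQueue (the work closure here is just a placeholder for the real per-object work):

```swift
import Foundation

let queue = OperationQueue()
// Limit in-flight operations to the number of cores actually available.
queue.maxConcurrentOperationCount = ProcessInfo.processInfo.activeProcessorCount

for i in 0..<8 {
    queue.addOperation {
        // placeholder for "LOTS AND LOTS OF WORK FOR EACH OBJECT"
        _ = (0..<1_000).reduce(0, +) + i
    }
}
queue.waitUntilAllOperationsAreFinished()
```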

Retarget blocks submitted to dispatch queue

I have serial dispatch queue Q (with another serial queue T as a target) and few blocks already submitted via dispatch_async(Q, block). Is there a way to retarget pending blocks to another queue A?
My simple test shows that Q forwards blocks to T as soon as possible, thus setting new target has no effect:
#define print(f, ...) printf(f "\n", ##__VA_ARGS__)
dispatch_queue_t Q = dispatch_queue_create("Q", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t T = dispatch_queue_create("T", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t A = dispatch_queue_create("A", DISPATCH_QUEUE_SERIAL);
dispatch_set_target_queue(Q, T);
dispatch_async(T, ^{ print("T sleeping"); sleep(2); print("T ready"); });
dispatch_async(A, ^{ print("A sleeping"); sleep(5); print("A ready"); });
dispatch_async(Q, ^{
    print("block 1");
    dispatch_set_target_queue(Q, A); // no effect!
});
dispatch_async(Q, ^{ print("block 2"); });
dispatch_async(Q, ^{ print("block 3"); });
dispatch_async(Q, ^{ print("block 4"); });
Output:
A sleeping
T sleeping
(wait 2 seconds)
T ready
block 1
block 2
block 3
block 4
(wait 3 seconds)
A ready
As you can see, blocks 2-4 were pinned to T, even though the manual states:
The new target queue setting will take effect between block executions on the object, but not in the middle of any existing block executions (non-preemptive).
It is unclear to me whether that only applies to dispatch sources, or whether "existing" means already-submitted (even if not yet executed) blocks; either way, it doesn't happen for my serial queue.
Is there a way to do that?
Okay, after studying a bit, I came up with the following solution. It is based on a custom dispatch source, and I think it is the only way to submit blocks just-in-time.
BlockSource.h:
dispatch_source_t dispatch_block_source_create(dispatch_queue_t queue);
void dispatch_block_source_add_block(dispatch_source_t source, dispatch_block_t block);
BlockSource.c:
struct context {
    CFMutableArrayRef array;
    pthread_mutex_t mutex;
    dispatch_source_t source;
};

static void
s_event(struct context *context)
{
    dispatch_block_t block;
    CFIndex pending;

    pthread_mutex_lock(&context->mutex); {
        block = (dispatch_block_t)CFArrayGetValueAtIndex(context->array, 0);
        CFArrayRemoveValueAtIndex(context->array, 0);
        pending = CFArrayGetCount(context->array);
    }
    pthread_mutex_unlock(&context->mutex);

    block();
    Block_release(block);

    if (pending)
        dispatch_source_merge_data(context->source, 1);
}

static void
s_cancel(struct context *context)
{
    CFIndex count = CFArrayGetCount(context->array);
    for (CFIndex i = 0; i < count; i++) {
        dispatch_block_t block = (dispatch_block_t)CFArrayGetValueAtIndex(context->array, i);
        Block_release(block);
    }
    CFRelease(context->array);
    pthread_mutex_destroy(&context->mutex);
    print("canceled");
}

dispatch_source_t
dispatch_block_source_create(dispatch_queue_t queue)
{
    dispatch_source_t source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_OR, 0, 0, queue);
    struct context *context = calloc(1, sizeof(*context));
    context->array = CFArrayCreateMutable(kCFAllocatorDefault, 0, NULL);
    pthread_mutex_init(&context->mutex, NULL);
    context->source = source;
    dispatch_set_context(source, context);
    dispatch_source_set_event_handler_f(source, (dispatch_function_t)s_event);
    dispatch_source_set_cancel_handler_f(source, (dispatch_function_t)s_cancel);
    dispatch_set_finalizer_f(source, (dispatch_function_t)free);
    return source;
}

void
dispatch_block_source_add_block(dispatch_source_t source, dispatch_block_t block)
{
    struct context *context = dispatch_get_context(source);
    pthread_mutex_lock(&context->mutex); {
        CFArrayAppendValue(context->array, Block_copy(block));
        dispatch_source_merge_data(context->source, 1);
    }
    pthread_mutex_unlock(&context->mutex);
}
And the test case:
dispatch_queue_t T = dispatch_queue_create("T", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t A = dispatch_queue_create("A", DISPATCH_QUEUE_SERIAL);

static int queue_name_key;
dispatch_queue_set_specific(T, &queue_name_key, "T", NULL);
dispatch_queue_set_specific(A, &queue_name_key, "A", NULL);

dispatch_source_t source = dispatch_block_source_create(T);
dispatch_resume(source);

for (int i = 1; i <= 10; i++) {
    dispatch_block_source_add_block(source, ^{
        print("block %d on queue %s", i, dispatch_get_specific(&queue_name_key));
        sleep(1);
        if (i == 2) {
            dispatch_set_target_queue(source, A);
        }
        else if (i == 5) {
            dispatch_source_cancel(source);
            dispatch_release(source);
        }
    });
}
Output:
block 1 on queue T
block 2 on queue T
block 3 on queue A
block 4 on queue A
block 5 on queue A
canceled
At least now it follows the dispatch source's scheme after a dispatch_set_target_queue() call. This means that if a new target queue is set from inside one of the submitted blocks, it is guaranteed that all remaining blocks will go to the new queue.
It may still contain bugs, though.

How to break the loop if we are using dispatch_apply?

If we are using the GCD approach for iteration, how do we break/stop the loop once the condition is matched?
queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_apply(count, queue, ^(size_t i) {
    printf("%zu\n", i);
    // doing thread-safe (also heavy) operation here
    if (condition) {
        // exit the loop
    }
});
It is not possible to cancel dispatch_apply, because its iterations run concurrently rather than sequentially; the purpose of dispatch_apply is to parallelize a for-loop whose iterations are independent of one another.
However, you can use a boolean flag indicating that the condition was satisfied. Iterations invoked after the flag is set return immediately (iterations already running still complete):
__block BOOL stop = NO;
queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(count, queue, ^(size_t i) {
    if (stop)
        return;
    // Do stuff
    if (condition)
        stop = YES;
});
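The same early-return idea in Swift, using DispatchQueue.concurrentPerform and a lock-protected flag (the condition `i == 42` is just a stand-in for the real condition):

```swift
import Foundation

let lock = NSLock()
var stop = false
var matched = -1

DispatchQueue.concurrentPerform(iterations: 100) { i in
    lock.lock()
    let skip = stop
    lock.unlock()
    if skip { return } // iterations invoked after the match return immediately

    // ... heavy, thread-safe work for iteration i ...
    if i == 42 {       // stand-in for the real condition
        lock.lock()
        stop = true
        matched = i
        lock.unlock()
    }
}
print("matched iteration \(matched)")
```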

Operation synchronisation challenge in iOS

I have 3 operations: A, B, C.
A, B can be processed concurrently
if C runs, A and B should wait
if A or B runs C should wait
I would solve it with a dispatch group and a semaphore:
public var dgLoadMain = dispatch_group_create()
public var semaLoadMain = dispatch_semaphore_create(1)
A, B would look like this:
dispatch_group_enter(dgLoadMain)
dispatch_semaphore_wait(semaLoadMain, DISPATCH_TIME_FOREVER)
dispatch_semaphore_signal(semaLoadMain) // maybe odd, but right after the wait it signals; it only checks whether C is in its critical section. If not, it releases the semaphore and lets the other of A/B continue too.
//..
dispatch_group_leave(dgLoadMain)
C would look like this:
dispatch_group_wait(dgLoadMain, DISPATCH_TIME_FOREVER)
dispatch_semaphore_wait(semaLoadMain, DISPATCH_TIME_FOREVER)
//..
dispatch_semaphore_signal(semaLoadMain)
Do you think it is OK?
dispatch_barrier_async is a more elegant fit here. Take a look at this code; it's Objective-C, but the same concept works just as well in Swift.
#import <Foundation/Foundation.h>

void A(void)
{
    NSLog(@"A begin");
    sleep(1);
    NSLog(@"A end");
}

void B(void)
{
    NSLog(@"B begin");
    sleep(1);
    NSLog(@"B end");
}

void C(void)
{
    NSLog(@"C begin");
    sleep(1);
    NSLog(@"C end");
}

int main(void)
{
    dispatch_queue_t q = dispatch_queue_create("ABC", DISPATCH_QUEUE_CONCURRENT);
    dispatch_queue_t qA = dispatch_queue_create("A", DISPATCH_QUEUE_SERIAL);
    dispatch_set_target_queue(qA, q);
    dispatch_queue_t qB = dispatch_queue_create("B", DISPATCH_QUEUE_SERIAL);
    dispatch_set_target_queue(qB, q);

    dispatch_barrier_async(q, ^{ C(); });
    dispatch_async(qA, ^{ A(); });
    dispatch_async(qA, ^{ A(); });
    dispatch_async(qB, ^{ B(); });
    dispatch_barrier_async(q, ^{ C(); });
    dispatch_async(qB, ^{ B(); });
    dispatch_barrier_async(q, ^{ C(); });
    dispatch_async(qA, ^{ A(); });

    dispatch_main();
    return 0;
}
The result.
C begin
C end
A begin
B begin
A end
B end
B begin
A begin
B end
A end
A begin
A end
C begin
C end
C begin
C end
Your solution probably works, but reasoning about its correctness was painful. I came up with what I think is a cleaner solution using 2 semaphores. It basically combines task A and task B and treats them as one unit: each of A and B does a timed (DISPATCH_TIME_NOW) wait on the shared semaphore to check whether the other task finished first, and signals the appropriate semaphore accordingly. Here's the code:
let semaphore = dispatch_semaphore_create(1)
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
let ABSema = dispatch_semaphore_create(0)

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), {
    print("A")
    if dispatch_semaphore_wait(ABSema, DISPATCH_TIME_NOW) != 0 {
        // B has not finished yet; record that A is done
        dispatch_semaphore_signal(ABSema)
    } else {
        // B already finished, so both are done
        dispatch_semaphore_signal(semaphore)
    }
})
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), {
    print("B")
    if dispatch_semaphore_wait(ABSema, DISPATCH_TIME_NOW) != 0 {
        // A has not finished yet; record that B is done
        dispatch_semaphore_signal(ABSema)
    } else {
        // A already finished, so both are done
        dispatch_semaphore_signal(semaphore)
    }
})

dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), {
    print("C")
    dispatch_semaphore_signal(semaphore)
})
