iOS multiple threads versus groups

I have a question about threading. I have a view which displays two images (the banners of the two opponents). I have read about dispatch groups, whose tasks can run together.
The way I have it now is:
- (void)setBanners {
    [self getBanner:@"TeamA"];
    [self getBanner:@"TeamB"];
}

- (void)getBanner:(NSString *)team {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^(void){
        // ...go to the server and get the logo
    });
}
So my question is: does this behave the same way as using a dispatch group, or does the call for team two only start once team one is finished? With a group it would look like this:
- (void)setBanners {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, queue, ^{
        // get logo for team A
    });
    dispatch_group_async(group, queue, ^{
        // get logo for team B
    });
}

For that purpose the two are almost equal: in both versions the two fetches are dispatched concurrently onto GCD's thread pool, so the call for TeamB does not wait for TeamA to finish. The dispatch_group version adds the ability to be notified (or to wait) once both blocks have finished; otherwise there is no practical difference.
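If you do want something to happen once both downloads are finished, that is where a group earns its keep via dispatch_group_notify. A minimal sketch (fetchLogoForTeam:, updateBannerViews, and the two image properties are assumed names, not from the question):
- (void)setBanners {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();

    dispatch_group_async(group, queue, ^{
        // hypothetical synchronous fetch; stands in for "go to the server and get the logo"
        self.teamABannerImage = [self fetchLogoForTeam:@"TeamA"];
    });
    dispatch_group_async(group, queue, ^{
        self.teamBBannerImage = [self fetchLogoForTeam:@"TeamB"];
    });

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        // both fetches are done; it's safe to touch the UI here
        [self updateBannerViews];
    });
}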

Intangible Order of Execution (dispatch_semaphore_t, dispatch_group_async) and the Use of Them in Combination with Different Dispatch Queue Types

I just took some time in the evening to play around with GCD, especially with dispatch_semaphore_t because I never used it. Never had the need to.
So I wrote the following as a test:
- (void)viewDidLoad
{
UIView *firstView = [[UIView alloc] initWithFrame:(CGRect){{0, 0}, self.view.frame.size.width/4, self.view.frame.size.width/5}];
firstView.backgroundColor = [UIColor purpleColor];
[self.view addSubview:firstView];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^
{
for (long i = 0; i < 1000; i++)
{
sleep(5);
dispatch_async(dispatch_get_main_queue(), ^
{
firstView.layer.opacity = ((i%2) ? 0: 1);
});
}
});
dispatch_queue_t queue1 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group1 = dispatch_group_create();
dispatch_group_async(group1, queue1, ^
{
sleep(3);
NSLog(#"dispatch group 1");
});
dispatch_group_notify(group1, queue1, ^
{
NSLog(#"dispatch notify 1");
});
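// (declared elsewhere in the question: myQueue is a global concurrent queue; mySemaphore was created with dispatch_semaphore_create(0), see question 5 below)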
dispatch_async(myQueue, ^
{
for(int z = 0; z < 10; z++)
{
NSLog(#"%i", z);
sleep(1);
}
dispatch_semaphore_signal(mySemaphore);
});
dispatch_semaphore_wait(mySemaphore, DISPATCH_TIME_FOREVER);
NSLog(#"Loop is Done");
}
If I ran the above, the output would be:
0
1
2
dispatch group 1
dispatch notify 1
3
4
5
6
7
8
9
Loop is Done
After the above, firstView appears on the screen (before the semaphore finished, the whole screen was black), and finally this gets executed:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^
{
for (long i = 0; i < 1000; i++)
{
sleep(5);
dispatch_async(dispatch_get_main_queue(), ^
{
firstView.layer.opacity = ((i%2) ? 0: 1);
});
}
});
which only alternates the opacity, since that loop runs after the semaphore is done.
1.)
So it seems that I have to wait until dispatch_semaphore finishes its work before anything UI-related takes place.
BUT:
It seems like the dispatch_group_t block runs concurrently with the dispatch_semaphore work, as shown by the output above (the dispatch group lines appear in between 1, 2, 3, ...).
???
2.)
And if I change the for loop above to use: dispatch_async(dispatch_get_main_queue(), ^
instead of:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^,
nothing gets shown on the screen, even after the semaphore is done.
How so???
3.)
Furthermore, if I change the semaphore block to the following, using the main queue as opposed to a global queue like in the above:
dispatch_async(dispatch_get_main_queue(), ^
{
for(int z = 0; z < 10; z++)
{
NSLog(#"%i", z);
sleep(1);
}
dispatch_semaphore_signal(mySemaphore);
});
Only the dispatch_group work takes place; nothing else gets executed, not the for loop above, not the UI. Nothing.
4.)
So, besides what I pointed out above, what can I do to keep the semaphore from blocking my UI and my other work, and just let the UI and the other processes do their thing?
And as mentioned above, why does changing the semaphore's queue from global to main cause nothing to be shown on screen, with even the loop not executing (only the dispatch_group)?
5.)
If I change semaphore to:
dispatch_semaphore_t mySemaphore = dispatch_semaphore_create(1);
//1 instead of 0 (zero)
Everything (i.e., both the for loop and the UI) runs immediately, and NSLog(@"Loop is Done"); is also displayed immediately, which tells me that the semaphore didn't wait here:
dispatch_semaphore_wait(mySemaphore, DISPATCH_TIME_FOREVER);
NSLog(#"Loop is Done");
???
I spent the whole evening trying to figure this out, but to no avail.
I hope someone with great GCD knowledge can enlighten me on this.
First things first: as a general rule, you should never block the main queue. That applies to dispatch_semaphore_wait() and sleep(), as well as to any of the synchronous dispatches, any group wait, and so on; none of these potentially blocking calls belong on the main queue. If you follow this rule, your UI should never become unresponsive.
Your code sample and subsequent questions might seem to suggest a confusion between groups and semaphores. Dispatch groups are a way of keeping track of a group of dispatched blocks. But you're not taking advantage of the features of dispatch groups here, so I might suggest setting them aside, as they're irrelevant to the discussion about semaphores.
Dispatch semaphores are, on the other hand, simply a mechanism for one thread to send a signal to another thread that is waiting for the signal. Needless to say, the fact that you've created a semaphore and sent signals via that semaphore will not affect any of your dispatched tasks (whether to group or not) unless the code in question happens to call dispatch_semaphore_wait.
Finally, in some of your later examples you tried having the semaphore send multiple signals, or changing the initial count supplied when creating the semaphore. For each signal, you generally want a corresponding wait: if you have ten signals, you want ten waits.
So, let's illustrate semaphores in a way where your main queue (and thus the UI) will never be blocked. Here, we can send ten signals between two separate concurrently running tasks, having the latter one update the UI:
dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
// send 10 signals from one background thread
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
for (NSInteger i = 0; i < 10; i++) {
NSLog(#"Sleeping %d", i);
sleep(3);
NSLog(#"Sending signal %d", i);
dispatch_semaphore_signal(semaphore);
}
NSLog(#"Done signaling");
});
// and on another thread, wait for those 10 signals ...
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
for (NSInteger i = 0; i < 10; i++) {
NSLog(#"Waiting for signal %d", i);
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
NSLog(#"Got signal %d", i);
// if you want to update your UI, then dispatch that back to the main queue
dispatch_async(dispatch_get_main_queue(), ^{
// update your UI here
});
}
NSLog(#"Done waiting");
});
This is, admittedly, not a terribly useful example of semaphores, but it illustrates how theoretically you could use them. In practice, it's rare that you have to use semaphores, as for most business problems, there are other, more elegant coding patterns. If you describe what you're trying to do, we can show you how to best achieve it.
As for an example with a non-zero value passed to dispatch_semaphore_create, that's used to control access to some finite resource. In this example, let's assume you had 100 tasks to run, but you didn't want more than 5 to run at any given time (e.g. you're using network connections, which are limited, or each operation takes up so much memory that you want to avoid having more than five running at once). Then you could do something like:
// we only want five to run at any given time
dispatch_semaphore_t semaphore = dispatch_semaphore_create(5);
// send this to background queue, so that when we wait, it doesn't block main queue
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
for (NSInteger i = 0; i < 100; i++)
{
// wait until one of our five "slots" are available
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
// when it is, dispatch code to background queue
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
NSLog(#"starting %d", i);
// to simulate something slow happening in the background, we'll just sleep
sleep(5);
NSLog(#"Finishing %d", i);
// when done, signal that this "slot" is free (please note, this is done
// inside the dispatched block of code)
dispatch_semaphore_signal(semaphore);
});
}
});
Again, this isn't a great example of semaphores (in this case, I'd generally use an NSOperationQueue with a maxConcurrentOperationCount), but it illustrates why you'd use a non-zero value for dispatch_semaphore_create.
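For comparison, here is roughly what that NSOperationQueue alternative might look like; the queue itself enforces the limit of five, so no semaphore is needed (a sketch, not a drop-in replacement):
NSOperationQueue *operationQueue = [[NSOperationQueue alloc] init];
operationQueue.maxConcurrentOperationCount = 5;  // at most five operations run at once

for (NSInteger i = 0; i < 100; i++) {
    [operationQueue addOperationWithBlock:^{
        NSLog(@"starting %ld", (long)i);
        sleep(5);   // simulate something slow happening in the background
        NSLog(@"finishing %ld", (long)i);
    }];
}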
You've asked a number of questions about groups. I contend that groups are unrelated to your own semaphores. You might use a group, for example, if you want to run a block of code when all of the tasks are complete. So here is a variation of the above example, but using a dispatch_group_notify to do something when all of the other tasks in that group are complete.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); // or create your own concurrent queue
dispatch_semaphore_t semaphore = dispatch_semaphore_create(5);
dispatch_group_t group = dispatch_group_create();
// send this to background queue, so that when we wait, it doesn't block main queue
dispatch_async(queue, ^{
for (NSInteger i = 0; i < 100; i++)
{
// wait until one of our five "slots" are available
dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
// when it is, dispatch code to background queue
dispatch_group_async(group, queue, ^{
NSLog(#"starting %d", i);
// to simulate something slow happening in the background, we'll just sleep
sleep(5);
NSLog(#"Finishing %d", i);
dispatch_semaphore_signal(semaphore);
});
}
dispatch_group_notify(group, queue, ^{
NSLog(#"All done");
});
});

Sync dispatch on current queue

I know you might find this an odd question, but I'm just learning GCD and I want to fully understand all its aspects. So here it is:
Is there ever any reason to dispatch a task SYNC on the CURRENT QUEUE?
For example:
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(...);
dispatch_async(concurrentQueue, ^{
//this is work task 0
//first do something here, then suddenly:
dispatch_sync(concurrentQueue, ^{
//work task 1
});
//continue work task 0
});
I understand one thing: if instead of concurrentQueue I use a serial queue, I get a deadlock on that serial queue, because work task 1 cannot start until work task 0 is finished (the serial queue guarantees order of execution), while at the same time work task 0 cannot continue, because it is waiting for the SYNC dispatch call to return (please correct me if I'm wrong; that would make me a total noob).
So coming back to the original idea, is there any difference whatsoever between the code above and the same code where, instead of calling dispatch_sync, I simply write the work task 1 code directly?
No. I can't think of a reason to ever dispatch_sync on the same concurrent queue you're already on. If you do that, GCD will just immediately invoke your block, in-line, on the same thread, as if you had called it directly. (I checked.) And as you pointed out, doing that on a serial queue will deadlock you.
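A quick way to see that for yourself is to log the current thread on both sides; a sketch:
dispatch_queue_t concurrentQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(concurrentQueue, ^{
    NSLog(@"work task 0 on %@", [NSThread currentThread]);
    dispatch_sync(concurrentQueue, ^{
        // typically logs the same thread: the block is run in place on the calling thread
        NSLog(@"work task 1 on %@", [NSThread currentThread]);
    });
});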
Assume this queue for all examples:
dispatch_queue_t queue = dispatch_queue_create("com.somecompany.queue", NULL);
Situation 1 - OK
dispatch_async(queue, ^{
[self goDoSomethingLongAndInvolved];
dispatch_async(queue, ^{
NSLog(#"Situation 1");
});
});
Situation 2 - Not OK! Deadlock!
dispatch_sync(queue, ^{
[self goDoSomethingLongAndInvolved];
dispatch_sync(queue, ^{
NSLog(#"Situation 2”); // NOT REACHED! DEADLOCK!
});
});
Situation 3 - Not OK! Deadlock!
dispatch_async(queue, ^{
[self goDoSomethingLongAndInvolved];
dispatch_sync(queue, ^{
NSLog(#"Situation 3"); // NOT REACHED! DEADLOCK!
});
});
Situation 4 - OK
dispatch_sync(queue, ^{
[self goDoSomethingLongAndInvolved];
dispatch_async(queue, ^{
NSLog(#"Situation 4");
});
});
Basically, dispatch_sync does not like to be on the inside when the inner dispatch targets the same serial queue you're already running on.
Only dispatch_async can safely go inside.
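If you ever genuinely need "sync onto this serial queue unless I'm already on it", one common pattern is to tag the queue with dispatch_queue_set_specific and check it before dispatching. A sketch (the key and the helper block are made-up names):
static void *kQueueSpecificKey = &kQueueSpecificKey;

dispatch_queue_t queue = dispatch_queue_create("com.somecompany.queue", NULL);
dispatch_queue_set_specific(queue, kQueueSpecificKey, kQueueSpecificKey, NULL);

void (^runOnQueueSync)(dispatch_block_t) = ^(dispatch_block_t block) {
    if (dispatch_get_specific(kQueueSpecificKey) != NULL) {
        block();                      // already on the queue: run in place, no deadlock
    } else {
        dispatch_sync(queue, block);  // not on the queue: safe to dispatch_sync
    }
};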

iOS-Managing and Keeping Track of Multiple Concurrent tasks

In my app I need to load up data from multiple sources and put them together in a table view. Gathering each of the sources one after another would take forever. To get around this I need to run all of the download operations together. Since they are download tasks, in theory I could just run them, but the issue is that only part of the code on the thread runs asynchronously, which means it will need the main thread to complete the operation.
So in order to get ALL of it running in the background, I need to use GCD, which I don't have much experience with.
//DataLoader.m
dispatch_async(dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void){
[self.webLoader getFeedWithCompletion:self.thatOtherCompletionBlock];
[self.otherDataLoader getDataWithCompletion:self.completionBlock];
[self.thatDataLoader getThatDataWithCompletion:self.anotherCompletionBlock];
dispatch_async(dispatch_get_main_queue(), ^(void){
});
});
However, since part of the task is already asynchronous, I need to figure out where to put GCD code.
I could put it before starting the task, like I did above. This could work; however, since the tasks already partially run in the background (and in some cases I cannot change that), it seems wasteful to wrap something that already does its work on a background thread in yet another background dispatch. Why run something that already runs on a background thread on another thread?
Another option would be to use GCD in the actual class that gets the feed (e.g. the web loader), applying it to all the code that isn't already running in the background:
- (void)connectionDidFinishLoading:(NSURLConnection *)connection {
dispatch_async(dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void){
.......
});
Which way is better?
There is also another problem. Since part of the tasks are asynchronous, they use completion blocks. Not only do I need to run the completion blocks in the background as well, I also need to figure out which one is the last to finish, so I can run some code to clean up and neatly package and ship the data to the view controller.
The way I thought of would be to use a BOOL for each task, simply setting it to YES when that task is done. Then in my completion blocks I can check whether all the other tasks are complete, and if so, run the cleanup code. However, this may not be the most elegant solution.
What would be the best way to deal with these tasks, ensuring that it all happens in the background?
GCD groups could easily be used for this. Groups allow you to track arbitrary "members" of the group, and hook a block up to run when all members of the group have finished. It's quite handy. For example (using your code):
dispatch_async(dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(void){
dispatch_group_t group = dispatch_group_create();
dispatch_group_enter(group); // + 1
[self.webLoader getFeedWithCompletion: ^{
self.thatOtherCompletionBlock();
dispatch_group_leave(group); // - 1
}];
dispatch_group_enter(group); // + 1
[self.otherDataLoader getDataWithCompletion:^{
self.completionBlock();
dispatch_group_leave(group); // - 1
}];
dispatch_group_enter(group); // + 1
[self.thatDataLoader getThatDataWithCompletion:^{
self.anotherCompletionBlock();
dispatch_group_leave(group); // - 1
}];
dispatch_group_notify(group, dispatch_get_global_queue( DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
// This will get executed once all three of the prior completion blocks have been run.
// i.e. when the group "count" goes to zero.
});
dispatch_release(group); // needed under manual reference counting; under ARC with a 6.0+ deployment target, GCD objects are ARC-managed and this call is omitted
});
You could also, albeit a bit circuitously, use NSOperation's inter-operation dependency feature to achieve this. Like this:
NSOperationQueue* q = [[[NSOperationQueue alloc] init] autorelease];
NSOperation* completionA = [NSBlockOperation blockOperationWithBlock: self.thatOtherCompletionBlock];
NSOperation* completionB = [NSBlockOperation blockOperationWithBlock: self.completionBlock];
NSOperation* completionC = [NSBlockOperation blockOperationWithBlock: self.anotherCompletionBlock];
NSBlockOperation* afterAllThree = [[[NSBlockOperation alloc] init] autorelease];
[afterAllThree addDependency: completionA];
[afterAllThree addDependency: completionB];
[afterAllThree addDependency: completionC];
[afterAllThree addExecutionBlock:^{
// This will get executed once all three of the prior completion blocks have been run.
}];
// Kick off the tasks
[q addOperationWithBlock:^{
[self.webLoader getFeedWithCompletion: ^{ [q addOperation: completionA];}];
[self.otherDataLoader getDataWithCompletion:^{ [q addOperation: completionB]; }];
[self.thatDataLoader getThatDataWithCompletion:^{ [q addOperation: completionC]; }];
}];
I personally prefer the dispatch_group method, but they would both get the job done.

How to shift operation from main queue to background releasing the main queue

This is what I am doing.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0ul);
dispatch_async(queue, ^{
NSData* data = [NSData dataWithContentsOfURL:[NSURL URLWithString:[NSString stringWithFormat:@"http://myurl"]]];
dispatch_sync(dispatch_get_main_queue(), ^{
if(!data) {
// data not received or bad data; initiate a reachability test
// I have built a wrapper for the Reachability class called ReachabilityController
// ReachabilityController also notifies the user about availability (UI)
ReachabilityController *reachability = [[ReachabilityController alloc] init];
[reachability checkReachability];
return;
}
//update table
});
});
My problem is that the reachability test is being done on the main queue, which often freezes the UI. I want to run it in the background.
I want to process the reachability test in the background, or at a low priority. But again, my reachability controller does notify the user of the current network availability, so at some point I will have to use the main queue again.
I strongly believe that there must be a better way.
This is, however, a correct way. It doesn't look entirely pretty, but that doesn't mean it's incorrect. If you want your code to look 'cleaner' you might want to take a look at NSThread and work your way through it, but this is a far easier approach.
To make it read more easily, in my project we made a simple class called Dispatcher that uses blocks:
+ (void)dispatchOnBackgroundAsync:(void(^)(void))block {
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), block);
}
+ (void)dispatchOnMainAsync:(void(^)(void))block {
dispatch_async(dispatch_get_main_queue(), block);
}
used like this:
[Dispatcher dispatchOnBackgroundAsync:^{
// background thread code
[Dispatcher dispatchOnMainAsync:^{
// main thread code
}];
}];
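Applied to the reachability case from the question, that might look something like this (a sketch; ReachabilityController is the wrapper class described in the question):
[Dispatcher dispatchOnBackgroundAsync:^{
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:@"http://myurl"]];
    if (!data) {
        // run the reachability check itself off the main queue; when it needs to
        // tell the user something, it can hop back with dispatchOnMainAsync: for
        // just that piece of UI work
        ReachabilityController *reachability = [[ReachabilityController alloc] init];
        [reachability checkReachability];
        return;
    }
    [Dispatcher dispatchOnMainAsync:^{
        // update the table on the main queue
    }];
}];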

using the same dispatch queue in a method for background processing

I have a method that updates two sections in a table, and it takes a while. I want to do something like:
dispatch_queue_t lowQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t mainQueue = dispatch_get_main_queue();
dispatch_async(lowQueue, ^{
NSArray *tempArray = // do long running task to get the data
dispatch_async(mainQueue, ^{
// update the main thread
[self.activityIndicatorView stopAnimating];
[self.reportsTableView reloadData];
});
});
dispatch_async(lowQueue, ^{
NSArray *tempArray2 = // same thing, do another long task
// similarly, dispatch back to the main queue and update it
});
If I use the same lowQueue in the same method, is that ok? Thanks.
Yes, you can use lowQueue in the same method. When you grab the DISPATCH_QUEUE_PRIORITY_LOW global queue and store a reference to it in lowQueue, you can continue to enqueue additional blocks on it with multiple dispatch_async GCD calls. Every time you call dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), you'll get back a reference to the exact same dispatch queue.
Since all the global dispatch queues are concurrent queues, each block from both of your two tasks will be dequeued and executed simultaneously, provided that GCD determines this is most efficient for the system at runtime (given system load, CPU cores available, number of other threads currently executing, etc).
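And if you would rather reload the table only once, after both long-running tasks have produced their arrays, the same dispatch group technique from the earlier answers works here as well; a sketch:
dispatch_queue_t lowQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_group_t group = dispatch_group_create();
__block NSArray *tempArray = nil;
__block NSArray *tempArray2 = nil;

dispatch_group_async(group, lowQueue, ^{
    tempArray = nil;   // placeholder for the first long-running task
});
dispatch_group_async(group, lowQueue, ^{
    tempArray2 = nil;  // placeholder for the second long-running task
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // both tasks are done; update the UI exactly once
    [self.activityIndicatorView stopAnimating];
    [self.reportsTableView reloadData];
});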
