We're using GCD to perform some image processing operations 'in the background' in our image editor view, which works great. The problem is that if we open the editor view, do some processing, and then just sit in the editor view for 10-20 minutes, we get these OSSpinLockLock freezes, even though we're not using spin locks or locks of any kind. We have these properties:
@property (nonatomic, readonly) dispatch_semaphore_t processingSemaphore;
@property (nonatomic, readonly) dispatch_queue_t serialQueue;
and set up the queues like so:
processingSemaphore = dispatch_semaphore_create(1);
serialQueue = dispatch_queue_create("com.myapp.imageProcessingQueue", NULL);
dispatch_set_target_queue(serialQueue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0));
and process thusly:
dispatch_async(self.serialQueue, ^{
    dispatch_semaphore_wait(self.processingSemaphore, DISPATCH_TIME_FOREVER);
    ....<do stuff>....
    dispatch_semaphore_signal(self.processingSemaphore);
    dispatch_sync(dispatch_get_main_queue(), ^{
        ....<notify that we're done>....
    });
});
I'm wondering if it's the semaphore somehow.
libdispatch does not use OSSpinLockLock either in the queue or the semaphore implementation, but malloc does (and thus Block_copy, which libdispatch calls as part of dispatch_async).
Can you show the backtraces of all threads when you are blocked in OSSpinLockLock?
Perhaps instead of using the semaphore you could create a serial queue. Create your queue like this:
serialQueue = dispatch_queue_create("com.myapp.imageProcessingQueue", DISPATCH_QUEUE_SERIAL);
This will ensure only one block at a time is executed.
You cannot cancel operations on this queue though. To do that you will need to use NSOperationQueue.
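For instance, a minimal sketch of the same flow without the semaphore might look like this (processImage and notifyDone are hypothetical placeholders for your own <do stuff> and <notify that we're done> code):
serialQueue = dispatch_queue_create("com.myapp.imageProcessingQueue", DISPATCH_QUEUE_SERIAL);

dispatch_async(serialQueue, ^{
    // The serial queue already guarantees one block at a time, so no semaphore is needed.
    [self processImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        [self notifyDone]; // notify on the main thread
    });
});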
Related
I've got this method:
- (void)addObjectToProcess:(NSObject *)object;
and I want this method to add the object to a processing queue that can process up to 4 objects in parallel.
I've created my own dispatch queue and semaphore:
_concurrentQueue = dispatch_queue_create([queue_id UTF8String],DISPATCH_QUEUE_CONCURRENT);
_processSema = dispatch_semaphore_create(4);
and the implementation of the method is:
- (void)addObjectToProcess:(NSObject *)object {
    dispatch_semaphore_wait(self.processSema, DISPATCH_TIME_FOREVER);
    __weak MyViewController *weakSelf = self;
    dispatch_async(self.concurrentQueue, ^{
        // PROCESS...........
        // ..................
        dispatch_semaphore_signal(self.processSema);
        dispatch_async(dispatch_get_main_queue(), ^{
            // call delegate from UI thread
        });
    });
}
It seems the caller sometimes gets blocked because of the semaphore barrier.
Is there any other/easier way to implement what I'm trying to do here?
Thanks
The problem is that you're calling dispatch_semaphore_wait on whatever thread you called addObjectToProcess on (presumably the main thread). Thus, if you already have four tasks running, when you schedule this fifth process, it will wait on the main thread.
You don't, though, just want to move the waiting for the semaphore into the block dispatched to self.concurrentQueue, because while that will successfully constrain the "PROCESS" tasks to four at a time, you will consume another worker thread for each one of these backlogged dispatched tasks, and there are a finite number of those worker threads. And when you exhaust those, you could adversely affect other processes.
One way to address this would be to create a serial scheduling queue in addition to your concurrent processing queue, and then dispatch this whole scheduling task asynchronously to this scheduling queue. Thus you enjoy the maximum concurrency on the process queue, while neither blocking the main thread nor using up worker threads for backlogged tasks. For example:
@property (nonatomic, strong) dispatch_queue_t schedulingQueue;
And
self.schedulingQueue = dispatch_queue_create("com.domain.scheduler", 0);
And
- (void)addObjectToProcess:(NSObject *)object {
    dispatch_async(self.schedulingQueue, ^{
        dispatch_semaphore_wait(self.processSema, DISPATCH_TIME_FOREVER);
        typeof(self) __weak weakSelf = self;
        dispatch_async(self.concurrentQueue, ^{
            // PROCESS...........
            // ..................
            typeof(self) __strong strongSelf = weakSelf;
            if (strongSelf) {
                dispatch_semaphore_signal(strongSelf.processSema);
                dispatch_async(dispatch_get_main_queue(), ^{
                    // call delegate from UI thread
                });
            }
        });
    });
}
Another good approach (especially if the "PROCESS" is synchronous) is to use NSOperationQueue that has a maxConcurrentOperationCount, which controls the degree of concurrency for you. For example:
@property (nonatomic, strong) NSOperationQueue *processQueue;
And initialize it:
self.processQueue = [[NSOperationQueue alloc] init];
self.processQueue.maxConcurrentOperationCount = 4;
And then:
- (void)addObjectToProcess:(NSObject *)object {
    [self.processQueue addOperationWithBlock:^{
        // PROCESS...........
        // ..................
        dispatch_async(dispatch_get_main_queue(), ^{
            // call delegate from UI thread
        });
    }];
}
The only trick is if the "PROCESS", itself, is asynchronous. In that case you can't just use addOperationWithBlock; instead you have to write your own custom, asynchronous NSOperation subclass and then add it to the NSOperationQueue with addOperation. It's not hard to write an asynchronous NSOperation subclass, but there are a few little details involved. See Configuring Operations for Concurrent Execution in the Concurrency Programming Guide.
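For reference, a rough sketch of such an asynchronous subclass might look like the following (the dispatched block simulates whatever asynchronous work you would really be wrapping, and the KVO bookkeeping around isExecuting/isFinished is the "few little details" the guide describes):
@interface MyAsyncOperation : NSOperation
@end

@implementation MyAsyncOperation {
    BOOL _executing;
    BOOL _finished;
}

- (BOOL)isConcurrent { return YES; }       // "isAsynchronous" on newer SDKs
- (BOOL)isExecuting  { return _executing; }
- (BOOL)isFinished   { return _finished; }

- (void)start {
    if (self.isCancelled) {
        [self willChangeValueForKey:@"isFinished"];
        _finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }
    [self willChangeValueForKey:@"isExecuting"];
    _executing = YES;
    [self didChangeValueForKey:@"isExecuting"];

    // Kick off the asynchronous work; simulated here with dispatch_async.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ... the real asynchronous PROCESS would go here ...
        [self completeOperation];
    });
}

- (void)completeOperation {
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _executing = NO;
    _finished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}

@end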
I have a question about threading. I have a view that displays two images (the banners of the two opposing teams). I have read about dispatch groups, which can run tasks together.
The way I have it now is:
- (void)setBanners {
    [self getBanner:@"TeamA"];
    [self getBanner:@"TeamB"];
}

- (void)getBanner:(NSString *)team {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^(void){
        // ...go to the server and get the logo...
    });
}
So my question is: does this happen the same way as grouping the tasks, or does the method for team two only get called when team one's is finished? With grouping it would look like this:
- (void)setBanners {
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, queue, ^{
        // get logo for team A
    });
    dispatch_group_async(group, queue, ^{
        // get logo for team B
    });
}
For your purposes they are almost equal: in both versions the two fetches are dispatched asynchronously onto the same concurrent global queue, so fetching team B's logo does not wait for team A's to finish, and both draw on GCD's shared thread pool. The practical difference is that the group version lets you use dispatch_group_notify (or dispatch_group_wait) to find out when both downloads are done.
-- oh, and obviously GCD uses blocks
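If you do want to react once both banners have finished loading (for example, to update the UI a single time), a minimal sketch using the group might look like this; the fetch and update comments are placeholders for your own code:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group, queue, ^{
    // fetch logo for team A (placeholder)
});
dispatch_group_async(group, queue, ^{
    // fetch logo for team B (placeholder)
});

// Runs once, on the main queue, after both blocks above have completed.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // update both banner image views here
});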
I'm looking for a common and elegant way to manage interface updates.
I know that user interface code must run on the main thread, so when I need some computation or a network task I use GCD with this pattern:
dispatch_queue_t aQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_async(aQueue, ^{
    // Background code
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Update the UI
    });
});
The problem with this code is that I always need to check whether the user has changed views during my computation, so the code becomes:
dispatch_sync(dispatch_get_main_queue(), ^{
    if ((mylabel != nil) && ([mylabel superview] != nil)) {
        mylabel.text = _result_from_computation_;
    }
});
Is there a better way?
Thanks.
You pretty well have it. However, in case you want to do more reading or want a more thorough explanation of what's going on...
You should read the Apple Docs Grand Central Dispatch (GCD) Reference and watch the WWDC 2012 video, Session 712 - Asynchronous Design Patterns with Blocks, GCD and XPC.
If you're working with iOS, you can disregard XPC (interprocess communication) as it's not supported by the current OS version (6.1 at the time of this writing).
Example: Load a large image in the background and set the image when completed.
@interface MyClass ()
@property (copy) dispatch_block_t task;
@end

@implementation MyClass

- (void)viewDidLoad {
    self.task = ^{
        // Background Thread, i.e., your task
        // (`data` and `view` are placeholders for your own data and image view)
        NSImage *image = [[NSImage alloc] initWithData:data];
        dispatch_async(dispatch_get_main_queue(), ^{
            // Main Thread, setting the loaded image
            [view setImage:image];
        });
    };
}
- (IBAction)cancelTaskButtonClick:(id)sender { // This can be -viewWillDisappear
    self.task = nil; // Cancels this enqueued item in default global queue
}

- (IBAction)runTaskButtonClick:(id)sender {
    // Main Thread
    dispatch_queue_t queue;
    queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, self.task);
}

@end
In order to cancel the work and reload the interface later, you set the dispatch_block_t variable to nil; note, though, that a block already submitted with dispatch_async has been copied by the queue and will still run, so clearing the property mainly prevents it from being enqueued again.
Perhaps more specifically to your problem, this example piece of code deals with Reading Data from a Descriptor, i.e., either the disk or network.
Typically, you would use the Call-Callback pattern which essentially gets a background thread, executes a task, and when completed calls another block to get the main thread to update the UI.
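As a rough sketch of that call-callback shape (the method name and result type here are illustrative, not from any particular API):
// A hypothetical fetch method that takes a completion block.
- (void)fetchResultWithCompletion:(void (^)(NSString *result))completion {
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // ... long-running computation or network work ...
        NSString *result = @"computed value"; // placeholder
        dispatch_async(dispatch_get_main_queue(), ^{
            // Call back on the main thread so the caller is free to touch the UI.
            if (completion) completion(result);
        });
    });
}
The caller then does its UI work inside the completion block, which is the one place it has to check whether its views are still on screen.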
Hope this helps!
You can check the view's window property:
if (myLabel.window) {
    // update label
}
The explicit nil check, if (label != nil), is redundant here: if the label is nil, then all of its properties will also read as nil (or zero), and setting them will not raise an exception, because messages to nil are no-ops.
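Putting it together, a minimal sketch of the whole pattern might be as follows (this assumes a myLabel outlet on the view controller, the computation is a placeholder, and the weak reference keeps the block from holding the label alive unnecessarily):
__weak UILabel *weakLabel = self.myLabel;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSString *result = @"result from computation"; // placeholder for the long-running work
    dispatch_async(dispatch_get_main_queue(), ^{
        UILabel *label = weakLabel;
        if (label.window) { // still part of a window hierarchy?
            label.text = result;
        }
    });
});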
I have a method that updates two sections in a table, and it takes a while. I want to do something like:
dispatch_queue_t lowQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t mainQueue = dispatch_get_main_queue();

dispatch_async(lowQueue, ^{
    NSArray *tempArray = // do long running task to get the data
    dispatch_async(mainQueue, ^{
        // update the main thread
        [self.activityIndicatorView stopAnimating];
        [self.reportsTableView reloadData];
    });
});

dispatch_async(lowQueue, ^{
    NSArray *tempArray2 = // same thing, do another long task
    // similarly, update the main thread
});
If I use the same lowQueue in the same method, is that ok? Thanks.
Yes, you can use lowQueue in the same method. When you grab the DISPATCH_QUEUE_PRIORITY_LOW global queue and store a reference to it in lowQueue, you can continue to enqueue additional blocks on it with multiple dispatch_async GCD calls. Every time you call dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), you'll get back a reference to the exact same dispatch queue.
Since all the global dispatch queues are concurrent queues, each block from both of your two tasks will be dequeued and executed simultaneously, provided that GCD determines this is most efficient for the system at runtime (given system load, CPU cores available, number of other threads currently executing, etc).
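For example, this small sketch illustrates the point: both variables refer to the same global concurrent queue, and the two blocks may run at the same time (the NSLog calls are just placeholders):
dispatch_queue_t q1 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
dispatch_queue_t q2 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
// q1 == q2: both reference the same underlying global queue object.

dispatch_async(q1, ^{ NSLog(@"task 1 running"); });
dispatch_async(q2, ^{ NSLog(@"task 2 running"); }); // may run concurrently with task 1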
I have a method that builds a package, sends it to a web service, gets a package back, opens it, and returns an NSDictionary. How can I call it on a background queue so that I can display a HUD while it requests the data?
You could detach a new thread like the following:
- (void)fetchData
{
    // Show HUD
    // Start thread
    [NSThread detachNewThreadSelector:@selector(getDataThreaded)
                             toTarget:self
                           withObject:nil];
}

- (void)getDataThreaded
{
    // Start fetching data
    // Hide HUD from main UI thread
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update UI if you have to
        // Hide HUD
    });
}
Grand Central Dispatch (GCD) provides great support for doing what you ask. Running something in the background using GCD is simple:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSDictionary *data = [self fetchAndParseData];
    dispatch_async(dispatch_get_main_queue(), ^{
        [self dataRetrieved:data];
    });
});
This call will return immediately (so your UI will continue to be responsive) and dataRetrieved will be called when the data is ready.
Now, depending on how fetchAndParseData works, it may need to be more complicated. If you use NSURLConnection or something similar, you might need to create an NSRunLoop to process data callbacks on the GCD thread. NSURLConnection is for the most part asynchronous anyway (though callbacks like didReceiveData will be routed through the UI thread), so you can use GCD just to parse the data once it has all been retrieved. It depends on how asynchronous you want to be.
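To tie this back to the HUD part of the question, a sketch might look like the following, where showHUD and hideHUD are hypothetical stand-ins for whatever progress-HUD API you are using:
- (void)loadData
{
    [self showHUD]; // hypothetical: show the HUD on the main thread before starting
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSDictionary *data = [self fetchAndParseData]; // the long-running work
        dispatch_async(dispatch_get_main_queue(), ^{
            [self hideHUD]; // hypothetical: dismiss the HUD on the main thread
            [self dataRetrieved:data];
        });
    });
}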
In addition to the previous replies, why don't you use the NSOperation and NSOperationQueue classes? These classes are abstractions built on top of GCD and they are very simple to use.
I like the NSOperation class since it lets me modularize the code in the apps I develop.
To set up an NSOperation you can just subclass it, like this:
//.h
@interface MyOperation : NSOperation
@end

//.m
@implementation MyOperation

// override the main method to perform the operation in a different thread...
- (void)main
{
    // long running operation here...
}

@end
Now in the main thread you can provide that operation to a queue like the following:
MyOperation *op = [[MyOperation alloc] initWithDocument:[self document]];
[[self someQueue] addOperation:op];
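Here someQueue is assumed to be an NSOperationQueue the controller owns (backed by an _someQueue ivar, both of which are assumptions, not part of the original answer); a minimal sketch of creating it lazily might be:
- (NSOperationQueue *)someQueue
{
    if (!_someQueue) {
        _someQueue = [[NSOperationQueue alloc] init];
        _someQueue.name = @"com.myapp.operationQueue"; // name is an arbitrary example
        _someQueue.maxConcurrentOperationCount = 1;    // raise this for more concurrency
    }
    return _someQueue;
}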
P.S. You cannot start an asynchronous operation inside the main method of an NSOperation: once main returns, delegates tied to that operation will not be called. To tell the truth, you can, but it involves dealing with the run loop or writing a concurrent operation.
Here some links on how to use them.
http://www.cimgf.com/2008/02/16/cocoa-tutorial-nsoperation-and-nsoperationqueue/
https://developer.apple.com/cocoa/managingconcurrency.html
and obviously the class reference for NSOperation