I'm using the open source software TMCache.
It caches expensive data asynchronously. There is also a synchronous method, which uses dispatch_semaphore_wait() to block until the operation completes.
Source
- (id)objectForKey:(NSString *)key
{
    if (!key)
        return nil;

    __block id objectForKey = nil;

    dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);

    [self objectForKey:key block:^(TMCache *cache, NSString *key, id object) {
        objectForKey = object;
        dispatch_semaphore_signal(semaphore);
    }];

    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

#if !OS_OBJECT_USE_OBJC
    dispatch_release(semaphore);
#endif

    return objectForKey;
}
This works fine on my machine. On a colleague's machine it does not.
The program hangs at dispatch_semaphore_wait(), and it's absolutely not reproducible for me.
The method above is called in tableView:viewForTableColumn:row:, so it's executed on the main queue.
Any idea why this is happening? Should I call the method from another queue?
Most likely you are running out of threads. The dispatch_semaphore_signal(semaphore) that should release dispatch_semaphore_wait() needs to be executed on a new thread (see objectForKey:block: for details). If the OS fails to spin up that new thread, you are stuck, because nobody will ever send the dispatch_semaphore_signal.
How often and when this happens depends on the computer/device speed, how fast you scroll the table, and so on. That's why you can't reproduce the issue on your computer.
A quick workaround is to keep the number of blocked threads low, for example by using the same dispatch-semaphore approach with the timeout set to DISPATCH_TIME_NOW, since you must not block the main queue.
I would prefer changing the way TMCache.m works, though. I don't think the dispatch-semaphore approach is justified here: gaining code brevity (wrapping an async method in a synchronous counterpart) at the expense of reliability does not seem right to me. I'm used to wrapping synchronous methods with async ones, but not vice versa.
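For illustration, a minimal sketch of that asynchronous approach, assuming the NSTableView-based UI from the question (self.cache and keyForRow: are hypothetical names): request the object with objectForKey:block:, return a placeholder cell immediately, and reload the row on the main queue once the cached object arrives.
- (NSView *)tableView:(NSTableView *)tableView viewForTableColumn:(NSTableColumn *)tableColumn row:(NSInteger)row
{
    NSTableCellView *cell = [tableView makeViewWithIdentifier:@"Cell" owner:self];
    NSString *key = [self keyForRow:row]; // hypothetical helper

    [self.cache objectForKey:key block:^(TMCache *cache, NSString *cacheKey, id object) {
        // TMCache invokes this block on its own queue; hop back to the main queue for UI work.
        dispatch_async(dispatch_get_main_queue(), ^{
            [tableView reloadDataForRowIndexes:[NSIndexSet indexSetWithIndex:row]
                                 columnIndexes:[NSIndexSet indexSetWithIndex:0]];
        });
    }];

    // Return a placeholder right away; the row is refreshed when the cache answers.
    return cell;
}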
Here is the fix
https://github.com/rushproject/TMCache
Note that only synchronous objectForKey methods were patched.
Related
In Objective-C, I am making my program wait using a while loop:
void doInitialize(void)
{
    dispatch_group_t loadDataGroup = dispatch_group_create();

    dispatch_group_async(loadDataGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        renewauth();
    });

    dispatch_group_notify(loadDataGroup, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Do other tasks once the renew session has completed...
    });
}

void renewauth(void)
{
    RenewAuthTokenInProgress = true;
    startRenewThread();

    // Busy-wait: spins until the async renewal flips the flag.
    while (RenewAuthTokenInProgress);
}
The startRenewThread() function in turn performs a dispatch_async operation of its own, so I have to make renewauth() wait. The async task inside startRenewThread() updates the bool variable once the renewal succeeds.
Is there a better approach than dispatch_groups? And is it good practice to make other threads wait with a while (true) statement?
Manoj Kumar,
using a while loop to wait until a boolean variable changes is not the correct approach to this problem. Here are a few of the issues with that method:
Your CPU is unnecessarily burdened with checking the variable over and over.
It suggests the developer isn't taking advantage of the basic concurrency features the language and platform already provide.
If for any reason the variable never changes, the CPU keeps checking the bool in the while loop forever and blocks the execution of any further code on that thread.
Here are a few correct approaches:
Blocks or closures: use a completion block to run your code asynchronously once RenewAuthToken is done (see the sketch after this list).
Delegates: if blocks are harder to understand, use a delegate and trigger it when you are done with RenewAuthToken.
Notifications: add observers in the classes that need to respond when RenewAuthToken is done, post a notification from the async task, and let those classes catch it and run their code.
Locks: if it really is necessary to block a thread until the response arrives, use locks to control thread execution rather than a while loop.
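For instance, a completion-block version of the renewal (a sketch only; renewAuthWithCompletion: and performTokenRefresh are hypothetical names, not part of your code) removes the busy-wait entirely:
// Sketch: a hypothetical completion-block API that replaces the busy-wait.
- (void)renewAuthWithCompletion:(void (^)(BOOL success))completion
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        BOOL success = [self performTokenRefresh]; // hypothetical synchronous token refresh
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) {
                completion(success); // the caller continues here instead of spinning on a flag
            }
        });
    });
}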
EDIT
As pointed out by fogmeister in the comments:
If you block the main thread for too long with a while(true) loop then
the app will actually be terminated by the iOS Watchdog as it will
assume it has crashed
Please have a look at the link provided by fogmeister: understand iOS watchdog termination reasons.
Hope it helps.
I believe what you need is a semaphore, like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    dispatch_semaphore_t sem = dispatch_semaphore_create(0);
    __block BOOL done = FALSE;

    while (true) {
        [self someCompletionMethod:^(BOOL success) {
            if (success) { // Stop condition
                done = TRUE;
            }
            // do something
            dispatch_semaphore_signal(sem); // This will let a new iteration start
        }];

        // Block this background thread until the completion block signals.
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);

        if (done) {
            dispatch_async(dispatch_get_main_queue(), ^{
                // Dispatch to main
                NSLog(@"Done!");
            });
            break; // the break must happen outside the dispatched block
        }
    }
});
Semaphores are an old-school threading concept introduced to the world by the ever-so-humble Edsger W. Dijkstra. Semaphores are a complex topic because they build upon the intricacies of operating system functions.
You can see a tutorial about semaphores, and find more links, here: https://www.raywenderlich.com/63338/grand-central-dispatch-in-depth-part-2
I hope this can help you.
What you do is absolutely lethal. It blocks the running thread (presumably the main thread), so the UI is frozen. It runs one core at 100% load for no reason whatsoever, which drains the battery rapidly and heats up the phone. This will get you some very, very unhappy customers or very, very happy ex-customers.
Anything like this has to run in the background: startRenewThread should kick off work that, when finished, sets RenewAuthTokenInProgress = NO, records whether there is a new token or not, and then triggers the follow-up action.
This is an absolutely essential programming pattern on iOS (and Android as far as I know).
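A sketch of that pattern applied to the dispatch group from the question (renewAuthWithCompletion: and authManager are hypothetical names for a completion-block version of the renewal): enter the group before kicking off the renewal, leave it from the completion handler, and let dispatch_group_notify fire once the token work is really done, with no busy-wait anywhere.
void doInitialize(void)
{
    dispatch_group_t loadDataGroup = dispatch_group_create();

    // Manually enter the group; it is left from the async completion handler.
    dispatch_group_enter(loadDataGroup);
    [authManager renewAuthWithCompletion:^(BOOL success) {
        dispatch_group_leave(loadDataGroup);
    }];

    dispatch_group_notify(loadDataGroup,
                          dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Runs only after the renewal has completed; do the dependent work here.
    });
}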
Recently, I had the need for a function that I could use to guarantee synchronous execution of a given block on a particular serial dispatch queue. There was the possibility that this shared function could be called from something already running on that queue, so I needed to check for this case in order to prevent a deadlock from a synchronous dispatch to the same queue.
I used code like the following to do this:
void runSynchronouslyOnVideoProcessingQueue(void (^block)(void))
{
    dispatch_queue_t videoProcessingQueue = [GPUImageOpenGLESContext sharedOpenGLESQueue];

    if (dispatch_get_current_queue() == videoProcessingQueue)
    {
        block();
    }
    else
    {
        dispatch_sync(videoProcessingQueue, block);
    }
}
This function relies on the use of dispatch_get_current_queue() to determine the identity of the queue this function is running on and compares that against the target queue. If there's a match, it knows to just run the block inline without the dispatch to that queue, because the function is already running on it.
I've heard conflicting things about whether or not it was proper to use dispatch_get_current_queue() to do comparisons like this, and I see this wording in the headers:
Recommended for debugging and logging purposes only:
The code must not make any assumptions about the queue returned,
unless it is one of the global queues or a queue the code has itself
created. The code must not assume that synchronous execution onto a
queue is safe from deadlock if that queue is not the one returned by
dispatch_get_current_queue().
Additionally, in iOS 6.0 (but not yet for Mountain Lion), the GCD headers now mark this function as being deprecated.
It sounds like I should not be using this function in this manner, but I'm not sure what I should use in its place. For a function like the above that targeted the main queue, I could use [NSThread isMainThread], but how can I check if I'm running on one of my custom serial queues so that I can prevent a deadlock?
Assign whatever identifier you want using dispatch_queue_set_specific(). You can then check your identifier using dispatch_get_specific().
Remember that dispatch_get_specific() is nice because it'll start at the current queue, and then walk up the target queues if the key isn't set on the current one. This usually doesn't matter, but can be useful in some cases.
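A minimal sketch of that approach, with an illustrative queue label and key name (not part of GPUImage): tag the queue with dispatch_queue_set_specific() when it is created, then test the tag with dispatch_get_specific() to decide between running inline and dispatching synchronously.
// Sketch: tag the queue once at creation, then check the tag instead of
// the deprecated dispatch_get_current_queue(). Names are illustrative.
static void *const kVideoQueueKey = (void *)&kVideoQueueKey;

static dispatch_queue_t videoProcessingQueue(void)
{
    static dispatch_queue_t queue;
    static dispatch_once_t once;
    dispatch_once(&once, ^{
        queue = dispatch_queue_create("com.example.videoProcessing", DISPATCH_QUEUE_SERIAL);
        dispatch_queue_set_specific(queue, kVideoQueueKey, (void *)1, NULL);
    });
    return queue;
}

void runSynchronouslyOnVideoProcessingQueue(void (^block)(void))
{
    if (dispatch_get_specific(kVideoQueueKey) != NULL) {
        block();                                      // already on the queue, run inline
    } else {
        dispatch_sync(videoProcessingQueue(), block); // safe: we are not on this queue
    }
}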
This is a very simple solution. It is not as performant as using dispatch_queue_set_specific and dispatch_get_specific manually – I don't have the metrics on that.
#import <libkern/OSAtomic.h>

BOOL dispatch_is_on_queue(dispatch_queue_t queue)
{
    int key;
    static int32_t incrementer;
    CFNumberRef value = CFBridgingRetain(@(OSAtomicIncrement32(&incrementer)));

    // Temporarily tag the queue, then test whether the current queue
    // (or one of its target queues) carries that tag.
    dispatch_queue_set_specific(queue, &key, (void *)value, NULL);
    BOOL result = dispatch_get_specific(&key) == value;

    // Remove the temporary tag and release the boxed number.
    dispatch_queue_set_specific(queue, &key, NULL, NULL);
    CFRelease(value);

    return result;
}
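For example, it could stand in for the dispatch_get_current_queue() comparison in the function above (a sketch only):
if (dispatch_is_on_queue(videoProcessingQueue)) {
    block();                                     // already on the target queue, run inline
} else {
    dispatch_sync(videoProcessingQueue, block);  // safe to dispatch synchronously
}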
I'm trying to synchronize the following code in iOS 5:
an object has a method which makes an HTTP request from which it gets some data, including a URL to an image
once the data arrives, the textual data is used to populate a Core Data model
at the same time, a second thread is dispatched asynchronously to download the image; this thread will signal via KVO to a view controller when the image is already cached and available in the Core Data model
since the image download will take a while, we immediately return the Core Data object, which has all attributes except for the image, to the caller
also, when the second thread is done downloading, the Core Data model can be saved
This is the (simplified) code:
- (void)insideSomeMethod
{
    [SomeHTTPRequest withCompletionHandler:^(id retrievedData)
    {
        if (!retrievedData)
        {
            handler(nil);
            return;
        }

        // Populate Core Data model with retrieved data...

        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
            NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
            aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
        });

        handler(aCoreDataNSManagedObject);
        [self shouldCommitChangesToModel];
    }];
}

- (void)shouldCommitChangesToModel
{
    dispatch_barrier_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        NSError *error = nil;
        if (![managedObjectContext save:&error])
        {
            // Handle error
        }
    });
}
But what's going on is that the barrier-based save block is always executed before the image-loading block. That is,
dispatch_barrier_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSError *error = nil;
    if (![managedObjectContext save:&error])
    {
        // Handle error
    }
});
Executes before:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});
So obviously I'm not really dispatching the image-loading block before the barrier, or the barrier would wait until the image-loading block is done before executing (which was my intention).
What am I doing wrong? How do I make sure the image-loading block is enqueued before the barrier block?
At first glance the issue may be that you are dispatching the barrier block on a global concurrent queue. You can only use barrier blocks on your own custom concurrent queue. Per the GCD docs on dispatch_barrier_async, if you dispatch a block to a global queue, it will behave like a normal dispatch_async call.
Mike Ash has a good blog post on GCD barrier blocks: http://www.mikeash.com/pyblog/friday-qa-2011-10-14-whats-new-in-gcd.html
Good luck
T
You need to create your own queue and not dispatch to the global queues, as per the ADC docs:
The queue you specify should be a concurrent queue that you create
yourself using the dispatch_queue_create function. If the queue you
pass to this function is a serial queue or one of the global
concurrent queues, this function behaves like the dispatch_async
function.
from https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html#//apple_ref/c/func/dispatch_barrier_async .
You can create tons of your own GCD queues just fine. GCD queues are very lightweight, so you can create lots of them without issue; you just need to release them when you're done with them.
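A hedged sketch of that fix, reusing the names from the question (syncQueue is a hypothetical property that would be created once, e.g. in init): submit the image download to a private concurrent queue, then submit the save with dispatch_barrier_async on the same queue so it runs only after every block enqueued before it has finished.
// Sketch: a private concurrent queue so dispatch_barrier_async really acts as a barrier.
self.syncQueue = dispatch_queue_create("com.example.model.sync", DISPATCH_QUEUE_CONCURRENT);

// Enqueued first: download the image.
dispatch_async(self.syncQueue, ^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});

// Enqueued second, as a barrier: waits for the download block above before saving.
dispatch_barrier_async(self.syncQueue, ^{
    NSError *error = nil;
    if (![managedObjectContext save:&error]) {
        // Handle error
    }
});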
For what you seem to be trying to solve, dispatch_barrier_async may not be the best solution.
Have a look at the Migrating Away From Threads section of the Concurrency Programming Guide. Just using dispatch_sync on your own serial queue may solve your synchronization problem.
Alternatively, you can use NSOperation and NSOperationQueue. Unlike GCD, NSOperation allows you to easily manage dependencies (you can do it using GCD, but it can get ugly fast).
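For example, a sketch of the NSOperation route, again reusing the question's variables: the dependency makes the ordering explicit without any barriers.
// Sketch: NSOperation dependencies guarantee the save starts only after the download.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSBlockOperation *downloadOp = [NSBlockOperation blockOperationWithBlock:^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
}];

NSBlockOperation *saveOp = [NSBlockOperation blockOperationWithBlock:^{
    NSError *error = nil;
    if (![managedObjectContext save:&error]) {
        // Handle error
    }
}];

[saveOp addDependency:downloadOp];
[queue addOperations:@[downloadOp, saveOp] waitUntilFinished:NO];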
I'm a little late to the party, but maybe next time you could try using dispatch_groups to your advantage. http://www.raywenderlich.com/63338/grand-central-dispatch-in-depth-part-2
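A sketch of the dispatch_group variant, reusing the question's variables: the notify block fires only once every block added to the group has finished, which is exactly when it is safe to save.
// Sketch: dispatch_group_notify runs its block only after the group drains.
dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSURL *userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});

dispatch_group_notify(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSError *error = nil;
    if (![managedObjectContext save:&error]) {
        // Handle error
    }
});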
As mentioned in the title, I would like to open a UIManagedDocument synchronously, i.e. I would like my execution to wait until the open completes. I'm opening the document on the main thread only.
The current API to open a document uses a completion block:
- (void)openWithCompletionHandler:(void (^)(BOOL success))completionHandler;
The lock-based approach mentioned at the link works well on threads other than the main thread. If I use locks on the main thread, it freezes the app.
Any advice would be helpful. Thanks.
First, let me say that I strongly discourage doing this. Your main thread just waits, and does nothing while waiting for the call to complete. Under certain circumstances, the system will kill your app if it does not respond on the main thread. This is highly unusual.
I guess you should be the one to decide when/how you should use various programming tools.
This one does exactly what you want... block the main thread until the completion handler runs. Again, I do not recommend doing this, but hey, it's a tool, and I'll take the NRA stance: guns don't kill people...
__block BOOL waitingOnCompletionHandler = YES;

[object doSomethingWithCompletionHandler:^{
    // Do your work in the completion handler block and when done...
    waitingOnCompletionHandler = NO;
}];

while (waitingOnCompletionHandler) {
    usleep(USEC_PER_SEC / 10);
}
Another option is to execute the run loop. However, this isn't really synchronous, because the run loop will actually process other events. I've used this technique in some unit tests. It is similar to the above, but still allows other stuff to happen on the main thread (for example, the completion handler may invoke an operation on the main queue, which may not get executed in the previous method).
__block BOOL waitingOnCompletionHandler = YES;

[object doSomethingWithCompletionHandler:^{
    // Do your work in the completion handler block and when done...
    waitingOnCompletionHandler = NO;
}];

while (waitingOnCompletionHandler) {
    NSDate *futureTime = [NSDate dateWithTimeIntervalSinceNow:0.1];
    [[NSRunLoop currentRunLoop] runUntilDate:futureTime];
}
There are other methods as well, but these are simple, easy to understand, and stick out like a sore thumb so it's easy to know you are doing something unorthodox.
I should also note that I've never encountered a good reason to do this in anything other than tests. You can deadlock your code, and not returning from the main run loop is a slippery slope (even if you are manually executing it yourself - note that what called you is still waiting and running the loop again could re-enter that code, or cause some other issue).
Asynchronous APIs are GREAT. The condition variable approach or using barriers for concurrent queues are reasonable ways to synchronize when using other threads. Synchronizing the main thread is the opposite of what you should be doing.
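For completeness, a sketch of the condition-variable approach mentioned above, using the same hypothetical doSomethingWithCompletionHandler: and suitable only when the waiting thread is not the main thread:
// Sketch: wait on a background thread using NSCondition instead of polling.
NSCondition *condition = [[NSCondition alloc] init];
__block BOOL finished = NO;

[object doSomethingWithCompletionHandler:^{
    [condition lock];
    finished = YES;          // record completion under the lock
    [condition signal];      // wake the waiting thread
    [condition unlock];
}];

[condition lock];
while (!finished) {          // loop guards against spurious wakeups
    [condition wait];
}
[condition unlock];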
Good luck... and make sure you register your guns, and always carry your concealed weapons permit. This is certainly the wild west. There's always a John Wesley Harden out there looking for a gun fight.