Can a block capture a Core Foundation object? - ios

I can't find anything in Apple's documentation about what to do when I want a block to capture a Core Foundation object. But in Apple's Concurrency Programming Guide, the sample code manually manages the dispatch object's lifetime for the case where ARC does not cover dispatch objects, like this:
void average_async(int *data, size_t len, dispatch_queue_t queue, void (^block)(int))
{
    // Retain the queue provided by the user to make
    // sure it does not disappear before the completion
    // block can be called.
    dispatch_retain(queue);

    // Do the work on the default concurrent queue and then
    // call the user-provided block with the results.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        int avg = average(data, len);
        dispatch_async(queue, ^{ block(avg); });

        // Release the user-provided queue when done
        dispatch_release(queue);
    });
}
Do I need to treat a CF object the way the dispatch object is treated above? And what if I need to invoke the block many times?
Maybe I can use __attribute__((NSObject)), but I don't know what that will actually do.
Does Apple say anything about this?

I didn't see anything explicit from Apple, but there are some mentions in the llvm.org documentation, which are elaborated on in this cocoa-dev mailing list thread.
It looks like you should be okay using __attribute__((NSObject)): it is given an implicit __strong qualification (per the documentation), and in practice (per the mailing list thread) the object is retained when the block is queued up and released when the block finishes.
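If you'd rather not rely on the attribute, you can manage the CF object's lifetime by hand, mirroring the dispatch_retain pattern from the guide's sample. A minimal sketch (CGColorRef is just an arbitrary CF type chosen for illustration):

// Hedged sketch: under ARC a block does not retain a captured CF object,
// so retain it manually before the async call and release it after the
// last use inside the block.
void recolorLater(CGColorRef color, dispatch_queue_t queue, void (^block)(CGColorRef))
{
    CFRetain(color);                  // keep the CF object alive for the block
    dispatch_async(queue, ^{
        block(color);                 // safe: we still hold a reference
        CFRelease(color);             // balance the CFRetain above
    });
}

If the block is going to be invoked many times, keep the extra retain until after the last invocation and release it there.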

First of all, dispatch_queue_t is not a Core Foundation object.
Dispatch objects are considered Objective-C objects by the compiler (for ARC and blocks purposes) if your deployment target is iOS 6+ / OS X 10.8+.
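Concretely, if your deployment target is iOS 6+ under ARC, the manual dispatch_retain/dispatch_release calls in the guide's sample are unnecessary (and are in fact compile errors when dispatch objects are ARC-managed); the block's capture of the queue keeps it alive. A rough sketch of the same function under those assumptions:

// Hedged sketch, assuming ARC and an iOS 6+ deployment target: the blocks
// retain `queue` and `block` automatically, so no manual retain/release.
void average_async(int *data, size_t len, dispatch_queue_t queue, void (^block)(int))
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        int avg = average(data, len);   // assumes the guide's average() helper
        dispatch_async(queue, ^{ block(avg); });
    });
}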

Related

How about iOS GCD Performance?

I invoke a method (named A) repeatedly, about 3000 times per second. In method A, I allocate some resources and do some work.
If I allocate and do the work directly in method A, it works fine. But if I do the allocation and work on a custom serial queue created with GCD as below, it crashes:
NSString *queueName = [NSString stringWithFormat:@"com.realank.thread"];
dispatch_queue_t serialQueue = dispatch_queue_create([queueName cStringUsingEncoding:NSUTF8StringEncoding], NULL);
dispatch_async(serialQueue, ^{
    // alloc and do something
});
So I think the crash is caused by some performance limit on GCD threads. Does anybody know about this? Thanks very much~
You're allocating a new dispatch queue every time you call dispatch_queue_create. You must release it afterwards using dispatch_release, as mentioned by Apple; otherwise you'll run out of memory!
https://developer.apple.com/reference/dispatch/1453030-dispatch_queue_create#return-value
You can get a reference to the newly created queue from the return value of dispatch_queue_create, and then release it when you're done with it.
Here's how Apple has mentioned it in their documentation.
When your application no longer needs the dispatch queue, it should
release it with the dispatch_release function.
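Given the call rate in the question (about 3000 calls per second), a simpler fix is to create the serial queue once and reuse it instead of creating a new queue on every call. A rough sketch (assuming ARC and an iOS 6+ deployment target, so no explicit dispatch_release is required):

// Hedged sketch: create the serial queue once with dispatch_once and reuse it.
// On older deployment targets, balance dispatch_queue_create with
// dispatch_release when the queue is no longer needed.
- (void)A
{
    static dispatch_queue_t serialQueue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        serialQueue = dispatch_queue_create("com.realank.thread", NULL);
    });

    dispatch_async(serialQueue, ^{
        // alloc and do something
    });
}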

Is there a way to still get which queue I am on instead of dispatch_get_current_queue? [duplicate]

Recently, I had the need for a function that I could use to guarantee synchronous execution of a given block on a particular serial dispatch queue. There was the possibility that this shared function could be called from something already running on that queue, so I needed to check for this case in order to prevent a deadlock from a synchronous dispatch to the same queue.
I used code like the following to do this:
void runSynchronouslyOnVideoProcessingQueue(void (^block)(void))
{
    dispatch_queue_t videoProcessingQueue = [GPUImageOpenGLESContext sharedOpenGLESQueue];

    if (dispatch_get_current_queue() == videoProcessingQueue)
    {
        block();
    }
    else
    {
        dispatch_sync(videoProcessingQueue, block);
    }
}
This function relies on the use of dispatch_get_current_queue() to determine the identity of the queue this function is running on and compares that against the target queue. If there's a match, it knows to just run the block inline without the dispatch to that queue, because the function is already running on it.
I've heard conflicting things about whether or not it was proper to use dispatch_get_current_queue() to do comparisons like this, and I see this wording in the headers:
Recommended for debugging and logging purposes only:
The code must not make any assumptions about the queue returned,
unless it is one of the global queues or a queue the code has itself
created. The code must not assume that synchronous execution onto a
queue is safe from deadlock if that queue is not the one returned by
dispatch_get_current_queue().
Additionally, in iOS 6.0 (but not yet for Mountain Lion), the GCD headers now mark this function as being deprecated.
It sounds like I should not be using this function in this manner, but I'm not sure what I should use in its place. For a function like the above that targeted the main queue, I could use [NSThread isMainThread], but how can I check if I'm running on one of my custom serial queues so that I can prevent a deadlock?
Assign whatever identifier you want using dispatch_queue_set_specific(). You can then check your identifier using dispatch_get_specific().
Remember that dispatch_get_specific() is nice because it'll start at the current queue, and then walk up the target queues if the key isn't set on the current one. This usually doesn't matter, but can be useful in some cases.
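A rough sketch of that approach applied to the function from the question (the queue label and key name here are made up; the GPUImage shared queue from the question would be tagged the same way):

// Hedged sketch: tag the queue once via dispatch_queue_set_specific(), then
// check dispatch_get_specific() to decide between running inline and dispatch_sync.
static void *kVideoQueueKey = &kVideoQueueKey;

static dispatch_queue_t videoProcessingQueue(void)
{
    static dispatch_queue_t queue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        queue = dispatch_queue_create("com.example.videoProcessing", DISPATCH_QUEUE_SERIAL);
        // Any non-NULL context value works; we only test for its presence.
        dispatch_queue_set_specific(queue, kVideoQueueKey, (void *)1, NULL);
    });
    return queue;
}

void runSynchronouslyOnVideoProcessingQueue(void (^block)(void))
{
    if (dispatch_get_specific(kVideoQueueKey) != NULL)
    {
        block();                                      // already on (or targeting) the queue
    }
    else
    {
        dispatch_sync(videoProcessingQueue(), block);
    }
}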
This is a very simple solution. It is not as performant as setting up dispatch_queue_set_specific and dispatch_get_specific manually, though I don't have metrics on that.
#import <libkern/OSAtomic.h>

BOOL dispatch_is_on_queue(dispatch_queue_t queue)
{
    int key;                                  // the address of this local is the key
    static int32_t incrementer;
    // Box a unique number and hold a CF reference to it for the comparison.
    CFNumberRef value = (CFNumberRef)CFBridgingRetain(@(OSAtomicIncrement32(&incrementer)));
    dispatch_queue_set_specific(queue, &key, (void *)value, NULL);
    BOOL result = dispatch_get_specific(&key) == value;
    dispatch_queue_set_specific(queue, &key, NULL, NULL);
    CFRelease(value);
    return result;
}
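Using it, the function from the question could then look roughly like this:

// Possible usage, assuming the same GPUImage shared queue as in the question.
void runSynchronouslyOnVideoProcessingQueue(void (^block)(void))
{
    dispatch_queue_t videoProcessingQueue = [GPUImageOpenGLESContext sharedOpenGLESQueue];
    if (dispatch_is_on_queue(videoProcessingQueue))
    {
        block();                                      // already on the queue
    }
    else
    {
        dispatch_sync(videoProcessingQueue, block);
    }
}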

How should I use GCD dispatch_barrier_async in iOS (seems to execute before and not after other blocks)

I'm trying to synchronize the following code in iOS 5:

- an object has a method which makes an HTTP request from which it gets some data, including a URL to an image
- once the data arrives, the textual data is used to populate a Core Data model
- at the same time, a second thread is dispatched asynchronously to download the image; this thread will signal via KVO to a view controller when the image is cached and available in the Core Data model
- since the image download will take a while, we immediately return the Core Data object, which has every attribute except the image, to the caller
- also, when the second thread is done downloading, the Core Data model can be saved
This is the (simplified) code:
- (void)insideSomeMethod
{
    [SomeHTTPRequest withCompletionHandler:
     ^(id retrievedData)
     {
         if(!retrievedData)
         {
             handler(nil);
         }

         // Populate CoreData model with retrieved Data...

         dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
             NSURL* userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
             aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
         });

         handler(aCoreDataNSManagedObject);
         [self shouldCommitChangesToModel];
     }];
}

- (void)shouldCommitChangesToModel
{
    dispatch_barrier_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
        NSError *error = nil;
        if(![managedObjectContext save:&error])
        {
            // Handle error
        }
    });
}
But what's happening is that the barrier-based save block always executes before the image-loading block. That is,
dispatch_barrier_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSError *error = nil;
    if(![managedObjectContext save:&error])
    {
        // Handle error
    }
});
Executes before:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    NSURL* userImageURL = [NSURL URLWithString:[retrievedData valueForKey:@"imageURL"]];
    aCoreDataNSManagedObject.profileImage = [NSData dataWithContentsOfURL:userImageURL];
});
So obviously I'm not really dispatching the image-loading block before the barrier, or the barrier would wait until the image-loading block is done before executing (which was my intention).
What am I doing wrong? How do I make sure the image-loading block is enqueued before the barrier block?
At first glance the issue may be that you are dispatching the barrier block on a global concurrent queue. You can only use barrier blocks on your own custom concurrent queue. Per the GCD docs on dispatch_barrier_async, if you dispatch a block to a global queue, it will behave like a normal dispatch_async call.
Mike Ash has a good blog post on GCD barrier blocks: http://www.mikeash.com/pyblog/friday-qa-2011-10-14-whats-new-in-gcd.html
Good luck
T
You need to create your own queue and not dispatch to the global queues, as per the ADC docs:
The queue you specify should be a concurrent queue that you create
yourself using the dispatch_queue_create function. If the queue you
pass to this function is a serial queue or one of the global
concurrent queues, this function behaves like the dispatch_async
function.
from https://developer.apple.com/library/ios/documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html#//apple_ref/c/func/dispatch_barrier_async .
You can create plenty of your own GCD queues just fine; they are very lightweight, so creating lots of them is not a problem. You just need to release them when you're done with them.
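A rough sketch of the barrier used as intended, on a private concurrent queue (the queue label and method name are illustrative):

// Hedged sketch: dispatch_barrier_async only acts as a barrier on a concurrent
// queue you created yourself, not on a global queue.
- (void)downloadImageAndSave
{
    static dispatch_queue_t imageQueue;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        imageQueue = dispatch_queue_create("com.example.imageWork", DISPATCH_QUEUE_CONCURRENT);
    });

    dispatch_async(imageQueue, ^{
        // download the image and attach it to the managed object
    });

    dispatch_barrier_async(imageQueue, ^{
        // Runs only after every block submitted to imageQueue before it has finished.
        NSError *error = nil;
        if (![managedObjectContext save:&error])
        {
            // Handle error
        }
    });
}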
For what you seem to be trying to solve, dispatch_barrier_async may not be the best solution.
Have a look at the Migrating Away From Threads section of the Concurrency Programming Guide. Just using dispatch_sync on your own serial queue may solve your synchronization problem.
Alternatively, you can use NSOperation and NSOperationQueue. Unlike GCD, NSOperation lets you easily manage dependencies (you can do it with GCD, but it can get ugly fast).
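A rough sketch of that NSOperation approach, with the save operation made dependent on the download (names are illustrative):

// Hedged sketch: addDependency: guarantees the save does not start until the
// download operation has finished.
- (void)downloadImageThenSave
{
    NSOperationQueue *queue = [[NSOperationQueue alloc] init];

    NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
        // download the image and attach it to the managed object
    }];
    NSBlockOperation *save = [NSBlockOperation blockOperationWithBlock:^{
        NSError *error = nil;
        if (![managedObjectContext save:&error])
        {
            // Handle error
        }
    }];

    [save addDependency:download];
    [queue addOperations:@[download, save] waitUntilFinished:NO];
}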
I'm a little late to the party, but maybe next time you could try using dispatch_groups to your advantage. http://www.raywenderlich.com/63338/grand-central-dispatch-in-depth-part-2
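For completeness, a rough sketch of the dispatch_group alternative (again with illustrative names), where the save only runs once the grouped work has finished:

// Hedged sketch: dispatch_group_notify fires after everything added to the
// group has completed.
- (void)downloadImageAndSaveWithGroup
{
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);

    dispatch_group_async(group, queue, ^{
        // download the image and attach it to the managed object
    });

    dispatch_group_notify(group, queue, ^{
        NSError *error = nil;
        if (![managedObjectContext save:&error])
        {
            // Handle error
        }
    });
}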

POSIX threading on ios

I've started experimenting with POSIX threads on the iOS platform. Coming from NSThread, it's pretty daunting.
Basically, in my sample app I have a big array filled with values of type mystruct. Every so often (very frequently) I want to perform a task with the contents of one of these structs in the background, so I pass it to detachnewthread to kick things off.
I think I have the basics down, but I'd like to get a professional opinion before I attempt to move on to more complicated stuff.
Does what I have here seem OK, and could you point out anything missing that could cause problems? Can you spot any memory management issues, etc.?
struct mystruct
{
    pthread_t thread;
    int a;
    long c;
};
void detachnewthread(mystruct *str)
{
    // pthread_t thread;
    if(str)
    {
        int rc;
        // printf("In detachnewthread: creating thread %d\n", str->soundid);
        rc = pthread_create(&str->thread, NULL, DoStuffWithMyStruct, (void *)str);
        if (rc){
            printf("ERROR; return code from pthread_create() is %d\n", rc);
            //exit(-1);
        }
    }
    //
    /* Last thing that main() should do */
    // pthread_exit(NULL);
}
void *DoStuffWithMyStruct(void *threadid)
{
    mystruct *sptr;
    sptr = (mystruct *)threadid;
    // do stuff with data in my struct
    pthread_detach(sptr->thread);
    return NULL;
}
One potential issue would be how the storage for the passed in structure mystruct is created. The lifetime of that variable is very critical to its usage in the thread. For example, if the caller of detachnewthread had that declared on the stack and then returned before the thread finished, it would be undefined behavior. Likewise, if it were dynamically allocated, then it is necessary to make sure it is not freed before the thread is finished.
In response to the comment/question: The necessity of some kind of mutex depends on the usage. For the sake of discussion, I will assume it is dynamically allocated. If the calling thread fills in the contents of the structure prior to creating the "child" thread and can guarantee that it will not be freed until after the child thread exits, and the subsequent access is read/only, then you would not need a mutex to protect it. I can imagine that type of scenario if the structure contains information that the child thread needs for completing its task.
If, however, more than one thread will be accessing the contents of the structure and one or more threads will be changing the data (writing to the structure), then you probably do need a mutex to protect it.
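A rough sketch of the dynamically-allocated variant described above, where the struct is filled in before the thread is created and treated as read-only afterwards, so no mutex is needed (the function shapes are adapted from the question):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct mystruct
{
    pthread_t thread;
    int a;
    long c;
};

static void *DoStuffWithMyStruct(void *arg)
{
    struct mystruct *sptr = (struct mystruct *)arg;
    pthread_detach(pthread_self());               // detach without touching sptr->thread
    // ... read sptr->a and sptr->c and do the work ...
    free(sptr);                                   // the thread is the last user, so it frees the struct
    return NULL;
}

static void detachnewthread(int a, long c)
{
    struct mystruct *str = malloc(sizeof *str);   // heap storage outlives the caller's frame
    if (!str) return;
    str->a = a;
    str->c = c;
    if (pthread_create(&str->thread, NULL, DoStuffWithMyStruct, str) != 0)
    {
        fprintf(stderr, "pthread_create failed\n");
        free(str);                                // thread never started; clean up here
    }
}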
Try using Apple's Grand Central Dispatch (GCD), which will manage the threads for you. GCD provides the capability to dispatch work, via blocks, to various queues that are managed by the system. Some of the queue types are concurrent, serial, and of course the main queue where the UI runs. Based upon the CPU resources at hand, the system will manage the queues and the necessary threads to get the work done. A simple example, which shows how you can nest calls to different queues, looks like this:
__block MyClass *blockSelf = self;
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [blockSelf doSomeWork];
    dispatch_async(dispatch_get_main_queue(), ^{
        [blockSelf.textField setText:@"Some work is done, updating UI"];
    });
});
__block MyClass *blockSelf=self is used simply to avoid retain cycles associated with how blocks work.
Apple's docs:
http://developer.apple.com/library/ios/#documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html
Mike Ash's Q&A blog post:
http://mikeash.com/pyblog/friday-qa-2009-08-28-intro-to-grand-central-dispatch-part-i-basics-and-dispatch-queues.html
