I think the best way to ask this question is with some code:
// Main method
for (int i = 0; i < 10; i++)
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self foo:i];
    });
}
- (void)foo:(int)i
{
    @synchronized(self)
    {
        NSLog(@"%d", i);
    }
}
In this case, is it guaranteed that the numbers 0-9 will be printed out in order? Is there ever a chance that one of the threads that is waiting on the run queue will be skipped over? How about in reality? Realistically, does that ever happen? What if I wanted the behavior above (still using threads); how could I accomplish this?
In this case, is it guaranteed that the numbers 0-9 will be printed out in order?
No.
Is there ever a chance that one of the threads that is waiting on the run queue will be skipped over?
Unclear what "skipped over" means. If it means "will the blocks be executed in order?" the answer is "probably, but it is an implementation detail".
How about in reality? Realistically, does that ever happen?
Irrelevant. If you are writing concurrency code based on assumptions about realistic implementation details, you are writing incorrect concurrency code.
What if I wanted the behavior above (still using threads); how could I accomplish this?
Create a serial dispatch queue and dispatch to that queue in the order you need things to be executed. Note that this is significantly faster than @synchronized() (of course, @synchronized() wouldn't work for you anyway in that it doesn't guarantee order, but merely exclusivity).
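For example, a minimal sketch (the queue label is arbitrary):
// A serial queue executes its blocks one at a time, in FIFO order.
dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

for (int i = 0; i < 10; i++) {
    dispatch_async(serialQueue, ^{
        NSLog(@"%d", i); // prints 0-9 in order, off the calling thread
    });
}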
From the documentation of dispatch_get_global_queue:
Blocks submitted to these global concurrent queues may be executed concurrently with respect to each other.
So that means there is no guarantee of anything there. You are passing a block of code to the queue and the queue takes it from there.
Related
Sometimes I need a synchronous return. With dispatch_sync it's just:
__block int foo;
dispatch_sync(queue, ^{ // the target queue; name is illustrative
    foo = 3;
});
return foo;
I am not sure how that translates to NSOperationQueue. I have tried maxConcurrentOperationCount = 1, but I don't think that blocks. My understanding is that this only makes the operation queue "serial", not "synchronous".
One can use addOperations:waitUntilFinished: (addOperations(_:waitUntilFinished:) in Swift), passing YES (true in Swift) for that second parameter.
For the sake of future readers, while one can wait using waitUntilFinished, 9 times out of 10, it is a mistake. If you have an asynchronous operation, it is generally asynchronous for a good reason, and you should embrace asynchronous patterns and use asynchronous block completion handler parameters in your own code, rather than forcing it to finish synchronously.
E.g., rather than:
- (int)bar {
    __block int foo;
    NSOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
        foo = 3;
    }];
    [self.queue addOperations:@[operation] waitUntilFinished:YES];
    return foo;
}
One generally would do:
- (void)barWithCompletion:(void (^ _Nonnull)(int))completion {
    [self.queue addOperationWithBlock:^{
        completion(3);
    }];
}
[self barWithCompletion:^(int value) {
    // use `value` here
}];
// but not here, as the above runs asynchronously
When developers first encounter asynchronous patterns, they almost always try to fight them, making them behave synchronously, but it is almost always a mistake. Embrace asynchronous patterns, don't fight them.
FWIW, the new async-await patterns in Swift allow us to write asynchronous code that is far easier to reason about than the traditional asynchronous blocks/closures. So if Swift and its new concurrency patterns are an option, you might consider that as well.
In the comments below, you seem to be concerned about the performance characteristics of synchronous behavior of operation queues versus dispatch queues. One does not use operation queues over dispatch queues for performance reasons, but rather if you benefit from the added features (including dependencies, priorities, controlling degree of concurrency, elegant cancelation features, wrapping asynchronous processes in operations, etc.).
If performance is of optimal concern (e.g., for thread-safe synchronizing), something like an os_unfair_lock will eclipse dispatch queue performance. But we generally don't let performance dictate our choice except for those 3% of the cases where one absolutely needs that. We try to use the highest level of abstraction possible (even at the cost of modest performance hits) suitable for the task at hand.
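For illustration, a minimal os_unfair_lock sketch (this code is mine, not from the original answer):
#import <os/lock.h>

os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;
int sharedCounter = 0;

// Keep the critical section tiny; the lock itself has very low overhead.
os_unfair_lock_lock(&lock);
sharedCounter += 1;
os_unfair_lock_unlock(&lock);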
Now that dispatch_get_current_queue is deprecated in iOS 6, how do I use dispatch_after to execute something in the current queue?
The various links in the comments don't say "it's better not to do it." They say you can't do it. You must either pass the queue you want or dispatch to a known queue. Dispatch queues don't have the concept of "current." Blocks often feed from one queue to another (called "targeting"). By the time you're actually running, the "current" queue is not really meaningful, and relying on it can (and historically did) lead to deadlock. dispatch_get_current_queue() was never meant for dispatching; it was a debugging method. That's why it was removed (since people treated it as if it meant something meaningful).
If you need that kind of higher-level book-keeping, use an NSOperationQueue which tracks its original queue (and has a simpler queuing model that makes "original queue" much more meaningful).
There are several approaches used in UIKit that are appropriate:
Pass the call-back dispatch_queue as a parameter (this is probably the most common approach in new APIs). See [NSURLConnection setDelegateQueue:] or addObserverForName:object:queue:usingBlock: for examples. Notice that NSURLConnection expects an NSOperationQueue, not a dispatch_queue. Higher-level APIs and all that.
Call back on whatever queue you're on and leave it up to the receiver to deal with it. This is how callbacks have traditionally worked.
Demand that there be a runloop on the calling thread, and schedule your callbacks on the calling runloop. This is how NSURLConnection historically worked before queues.
Always make your callbacks on one of the well-known queues (particularly the main queue) unless told otherwise. I don't know of anywhere that this is done in UIKit, but I've seen it commonly in app code, and is a very easy approach most of the time.
Create a queue manually and dispatch both your calling code and your dispatch_after code onto that. That way you can guarantee that both pieces of code are run from the same queue.
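A minimal sketch of that last approach (the queue label is hypothetical):
// Use one known queue for both the calling code and the delayed block,
// so the notion of a "current" queue is never needed.
dispatch_queue_t workQueue = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL);

dispatch_async(workQueue, ^{
    // ... calling code ...
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2 * NSEC_PER_SEC)),
                   workQueue, ^{
        // ... delayed work, guaranteed to run on workQueue ...
    });
});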
Having to do this is likely a sign of a hack. You can work around it with another hack:
void (^block)(void) = ^{
    [self doSomething];
    usleep(delay_in_us);
    [self doSomethingOther];
};
Instead of usleep() you might consider running a run loop.
I would not recommend this "approach" though. The better way is to have a method that takes both a queue and a block as parameters, where the block is then executed on the specified queue.
And, by the way, there are ways, while a block executes, to check whether it is running on a particular queue (or on any of its parent queues), provided you have a reference to that queue beforehand: use the functions dispatch_queue_set_specific and dispatch_get_specific.
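A minimal sketch of that technique (the key and label are hypothetical):
// Tag the queue with a key so executing code can later detect whether it is
// running on that queue (or on a queue that targets it).
static void *kMyQueueKey = &kMyQueueKey;

dispatch_queue_t myQueue = dispatch_queue_create("com.example.tagged", DISPATCH_QUEUE_SERIAL);
dispatch_queue_set_specific(myQueue, kMyQueueKey, (void *)1, NULL);

dispatch_async(myQueue, ^{
    if (dispatch_get_specific(kMyQueueKey) != NULL) {
        // running on myQueue (or a queue targeting it)
    }
});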
Essentially, I have a set of data in an NSDictionary, but for convenience I'm setting up some NSArrays with the data sorted and filtered in a few different ways. The data will be coming in via different threads (blocks), and I want to make sure there is only one block at a time modifying my data store.
I went through the trouble of setting up a dispatch queue this afternoon, and then randomly stumbled onto a post about @synchronized that made it seem like pretty much exactly what I want to be doing.
So what I have right now is...
// a property on my object
@property (assign) dispatch_queue_t matchSortingQueue;
// in my object init
_matchSortingQueue = dispatch_queue_create("com.asdf.matchSortingQueue", NULL);
// then later...
- (void)sortArrayIntoLocalStore:(NSArray *)matches
{
    dispatch_async(_matchSortingQueue, ^{
        // do stuff...
    });
}
And my question is, could I just replace all of this with the following?
- (void)sortArrayIntoLocalStore:(NSArray *)matches
{
    @synchronized (self) {
        // do stuff...
    }
}
...And what's the difference between the two anyway? What should I be considering?
Although the functional difference might not matter much to you, it's what you'd expect: if you use @synchronized then the thread you're on is blocked until it can get exclusive execution. If you dispatch to a serial dispatch queue asynchronously then the calling thread can get on with other things and whatever it is you're actually doing will always occur on the same, known queue.
So they're equivalent for ensuring that a third resource is used from only one queue at a time.
Dispatching could be a better idea if, say, you had a resource that is accessed by the user interface from the main queue and you wanted to mutate it. Then your user interface code doesn't need explicitly to #synchronize, hiding the complexity of your threading scheme within the object quite naturally. Dispatching will also be a better idea if you've got a central actor that can trigger several of these changes on other different actors; that'll allow them to operate concurrently.
Synchronising is more compact and a lot easier to step debug. If what you're doing tends to be two or three lines and you'd need to dispatch it synchronously anyway then it feels like going to the effort of creating a queue isn't worth it — especially when you consider the implicit costs of creating a block and moving it over onto the heap.
In the second case you would block the calling thread until "do stuff" was done. Using queues and dispatch_async you will not block the calling thread. This would be particularly important if you call sortArrayIntoLocalStore from the UI thread.
This is a two part question. Hope someone could reply with a complete answer.
NSOperations are powerful objects. They can be of two different types: non-concurrent or concurrent.
The first type runs synchronously. You can take advantage of a non-concurrent operation by adding it to an NSOperationQueue. The latter creates a thread (or threads) for you, with the result that the operation runs in a concurrent manner. The only caveat regards the lifecycle of such an operation: when its main method finishes, it is removed from the queue. This can be a problem when you deal with async APIs.
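For instance, a minimal sketch of handing a non-concurrent (block) operation to a queue:
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

// The queue runs the block on a thread it manages, and removes the
// operation as soon as the block returns.
[queue addOperationWithBlock:^{
    NSLog(@"running on a queue-managed thread");
}];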
Now, what about concurrent operations? From the Apple docs:
If you want to implement a concurrent operation—that is, one that runs asynchronously with respect to the calling thread—you must write additional code to start the operation asynchronously. For example, you might spawn a separate thread, call an asynchronous system function, or do anything else to ensure that the start method starts the task and returns immediately and, in all likelihood, before the task is finished.
This is almost clear to me. They run asynchronously. But you must take the appropriate actions to ensure that they do.
What is not clear to me is the following. The doc says:
Note: In OS X v10.6, operation queues ignore the value returned by isConcurrent and always call the start method of your operation from a separate thread.
What does it really mean? What happens if I add a concurrent operation to an NSOperationQueue?
Then, in this post Concurrent Operations, concurrent operations are used to download some HTTP content by means of NSURLConnection (in its async form). Operations are concurrent and included in a specific queue.
UrlDownloaderOperation * operation = [UrlDownloaderOperation urlDownloaderWithUrlString:url];
[_queue addOperation:operation];
Since NSURLConnection requires a run loop, the author shunts the start method onto the main thread (so I suppose adding the operation to the queue spawned a different one). In this manner, the main run loop can invoke the delegate methods included in the operation.
- (void)start
{
    if (![NSThread isMainThread])
    {
        [self performSelectorOnMainThread:@selector(start) withObject:nil waitUntilDone:NO];
        return;
    }

    [self willChangeValueForKey:@"isExecuting"];
    _isExecuting = YES;
    [self didChangeValueForKey:@"isExecuting"];

    NSURLRequest *request = [NSURLRequest requestWithURL:_url];
    _connection = [[NSURLConnection alloc] initWithRequest:request
                                                  delegate:self];
    if (_connection == nil)
        [self finish];
}
- (BOOL)isConcurrent
{
    return YES;
}
// delegate method here...
My question is the following: is this thread safe? The run loop listens for sources, but the invoked methods are called on a background thread. Am I wrong?
Edit
I've completed some tests of my own based on the code provided by Dave Dribin (see 1). I've noticed, as you wrote, that the callbacks of NSURLConnection are called on the main thread.
OK, but now I'm still very confused. I'll try to explain my doubts.
Why include within a concurrent operation an async pattern whose callbacks are called on the main thread? Shunting the start method onto the main thread allows the callbacks to execute on the main thread, but then what about queues and operations? Where do I take advantage of the threading mechanisms provided by GCD?
Hope this is clear.
This is kind of a long answer, but the short version is that what you're doing is totally fine and thread safe since you've forced the important part of the operation to run on the main thread.
Your first question was, "What happens if I add a concurrent operation in a NSOperationQueue?" As of iOS 4, NSOperationQueue uses GCD behind the scenes. When your operation reaches the top of the queue, it gets submitted to GCD, which manages a pool of private threads that grows and shrinks dynamically as needed. GCD assigns one of these threads to run the start method of your operation, and guarantees this thread will never be the main thread.
When the start method finishes in a concurrent operation, nothing special happens (which is the point). The queue will allow your operation to run forever until you set isFinished to YES and do the proper KVO willChange/didChange calls, regardless of the calling thread. Typically you'd make a method called finish to do that, which it looks like you have.
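Such a finish method might look like this (a sketch, assuming _isExecuting and _isFinished ivars backing the respective properties):
- (void)finish
{
    [self willChangeValueForKey:@"isExecuting"];
    [self willChangeValueForKey:@"isFinished"];
    _isExecuting = NO;
    _isFinished = YES;
    [self didChangeValueForKey:@"isExecuting"];
    [self didChangeValueForKey:@"isFinished"];
}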
All this is fine and well, but there are some caveats involved if you need to observe or manipulate the thread on which your operation is running. The important thing to remember is this: don't mess with threads managed by GCD. You can't guarantee they'll live past the current frame of execution, and you definitely can't guarantee that subsequent delegate calls (i.e., from NSURLConnection) will occur on the same thread. In fact, they probably won't.
In your code sample, you've shunted start off to the main thread so you don't need to worry much about background threads (GCD or otherwise). When you create an NSURLConnection it gets scheduled on the current run loop, and all of its delegate methods will get called on that run loop's thread, meaning that starting the connection on the main thread guarantees its delegate callbacks also happen on the main thread. In this sense it's "thread safe" because almost nothing is actually happening on a background thread besides the start of the operation itself, which may actually be an advantage because GCD can immediately reclaim the thread and use it for something else.
Let's imagine what would happen if you didn't force start to run on the main thread and just used the thread given to you by GCD. A run loop can potentially hang forever if its thread disappears, such as when it gets reclaimed by GCD into its private pool. There are some techniques floating around for keeping the thread alive (such as adding an empty NSPort), but they don't apply to threads created by GCD, only to threads you create yourself and can guarantee the lifetime of.
The danger here is that under light load you actually can get away with running a run loop on a GCD thread and think everything is fine. Once you start running many parallel operations, especially if you need to cancel them midflight, you'll start to see operations that never complete and never deallocate, leaking memory. If you wanted to be completely safe, you'd need to create your own dedicated NSThread and keep the run loop going forever.
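A common sketch of that dedicated-thread technique (method names are mine, not from the original answer):
+ (NSThread *)networkThread
{
    static NSThread *thread = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        thread = [[NSThread alloc] initWithTarget:self
                                         selector:@selector(runNetworkThread)
                                           object:nil];
        [thread start];
    });
    return thread;
}

+ (void)runNetworkThread
{
    @autoreleasepool {
        // Attach a port so the run loop always has a source and never exits.
        [[NSRunLoop currentRunLoop] addPort:[NSMachPort port] forMode:NSDefaultRunLoopMode];
        [[NSRunLoop currentRunLoop] run];
    }
}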
In the real world, it's much easier to do what you're doing and just run the connection on the main thread. Managing the connection consumes very little CPU and in most cases won't interfere with your UI, so there's very little to gain by running the connection completely in the background. The main thread's run loop is always running and you don't need to mess with it.
It is possible, however, to run an NSURLConnection connection entirely in the background using the dedicated thread method described above. For an example, check out JXHTTP, in particular the classes JXOperation and JXURLConnectionOperation.
I'm trying to use GCD as a replacement for dozens of atomic properties. I remember at WWDC they were saying that GCD could be used for efficient transactional locking mechanisms.
In my OpenGL ES run loop method I put all drawing code in a block executed by dispatch_sync on a custom-created serial queue. The run loop method is called by a CADisplayLink, which to my knowledge fires on the main thread.
There are ivars and properties which are used both for drawing but also for controlling what will be drawn. The problem is that there must be some locking in place to prevent concurrency problems, and a way of transactionally querying and modifying the state of the OpenGL ES scene from the main thread between two drawn frames.
I can modify a group of properties in a transactional way with GCD by executing a block on that serial queue.
But it seems I can't read values back on the main thread, using GCD, while blocking the queue that executes the drawing code. dispatch_sync doesn't have a return value, but I want access to presentation values exactly between the drawing of two frames, both for reading and writing.
Is it this barrier thing they were talking about? How does that work?
This is what the async writer / sync reader model was designed to accomplish. Let's say you have an ivar (and for the purpose of discussion let's assume that you've gone a wee bit further and encapsulated all your ivars into a single structure, just for simplicity's sake):
struct {
    int x, y;
    char *n;
    dispatch_queue_t _internalQueue;
} myIvars;
Let's further assume (for brevity) that you've initialized the ivars in a dispatch_once() and created the _internalQueue as a serial queue with dispatch_queue_create() earlier in the code.
Now, to write a value:
dispatch_async(myIvars._internalQueue, ^{ myIvars.x = 10; });
dispatch_async(myIvars._internalQueue, ^{ myIvars.n = "Hi there"; });
And to read one:
__block int val; __block char *v;
dispatch_sync(myIvars._internalQueue, ^{ val = myIvars.x; });
dispatch_sync(myIvars._internalQueue, ^{ v = myIvars.n; });
Using the internal queue makes sure everything is appropriately serialized and that writes can happen asynchronously but reads wait for all pending writes to complete before giving you back the value. A lot of "GCD aware" data structures (or routines that have internal data structures) incorporate serial queues as implementation details for just this purpose.
dispatch_sync takes the block to execute as its second argument; within that block you can take the values computed on your serial queue and hand them back to the main queue to use there. So it would look something like:
dispatch_sync(serialQueue, ^{
    // execute a block
    dispatch_async(dispatch_get_main_queue(), ^{
        // use your calculations here
    });
});
And serial queues handle the concurrency part themselves, so if another piece of code tries to access the same resource at the same time, the queue itself takes care of it. Hope this was of some help.