How to do blocking `dispatch_sync` using `NSOperationQueue` (iOS)

Sometimes I need a synchronous return. With dispatch_sync it's just:
__block int foo;
dispatch_sync(queue, ^{
    foo = 3;
});
return foo;
I am not sure how that translates to NSOperationQueue. I have looked at maxConcurrentOperationCount = 1, but I don't think that's blocking. My understanding is that it only makes the operation queue "serial", but not "synchronous".

One can use addOperations:waitUntilFinished: (addOperations(_:waitUntilFinished:) in Swift), passing YES (true in Swift) for that second parameter.
For the sake of future readers: while one can wait using waitUntilFinished:, nine times out of ten it is a mistake. If you have some asynchronous operation, it is generally asynchronous for a good reason, and you should embrace asynchronous patterns, using asynchronous completion handler parameters in your own code rather than forcing it to finish synchronously.
E.g., rather than:
- (int)bar {
    __block int foo;
    NSOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
        foo = 3;
    }];
    [self.queue addOperations:@[operation] waitUntilFinished:YES];
    return foo;
}
One generally would do:
- (void)barWithCompletion:(void (^ _Nonnull)(int))completion {
    [self.queue addOperationWithBlock:^{
        completion(3);
    }];
}
And the caller would do:
[self barWithCompletion:^(int value) {
    // use `value` here
}];
// but not here, as the above runs asynchronously
When developers first encounter asynchronous patterns, they almost always try to fight them, making them behave synchronously, but it is almost always a mistake. Embrace asynchronous patterns, don't fight them.
FWIW, the new async-await patterns in Swift allow us to write asynchronous code that is far easier to reason about than the traditional asynchronous blocks/closures. So if Swift and its new concurrency patterns are an option, you might consider that as well.
In the comments below, you seem concerned about the performance characteristics of synchronous operation queues versus dispatch queues. One does not choose operation queues over dispatch queues for performance reasons, but rather for the added features (dependencies, priorities, control over the degree of concurrency, elegant cancellation, the ability to wrap asynchronous processes in operations, etc.).
If performance is of paramount concern (e.g., for thread-safe synchronization), something like an os_unfair_lock will eclipse dispatch queue performance. But we generally don't let performance dictate our choice except for those 3% of cases where one absolutely needs it. We try to use the highest level of abstraction suitable for the task at hand, even at the cost of modest performance hits.
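For illustration, a minimal sketch of lock-based synchronization with os_unfair_lock (iOS 10+); the Counter class here is hypothetical, invented for this example:
#import <Foundation/Foundation.h>
#import <os/lock.h>

@interface Counter : NSObject
- (void)increment;
- (NSInteger)value;
@end

@implementation Counter {
    os_unfair_lock _lock;   // cheap, non-reentrant lock
    NSInteger _value;       // the state being protected
}

- (instancetype)init {
    if ((self = [super init])) {
        _lock = OS_UNFAIR_LOCK_INIT;
    }
    return self;
}

- (void)increment {
    os_unfair_lock_lock(&_lock);
    _value += 1;
    os_unfair_lock_unlock(&_lock);
}

- (NSInteger)value {
    os_unfair_lock_lock(&_lock);
    NSInteger result = _value;
    os_unfair_lock_unlock(&_lock);
    return result;
}
@end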

Related

2017 / Swift 3.1 - GCD vs NSOperation

I am diving a bit deeper into concurrency and have been reading extensively about GCD and NSOperation. However, a lot of posts like the canonical answer on SO are several years old.
It seemed to me that NSOperation's main advantages used to be, at the cost of some performance:
- it is generally "the way to go" for anything more than a simple dispatch, as the highest-level abstraction (built atop GCD)
- it makes task manipulation (cancellation, etc.) a lot easier
- it makes it easy to set up dependencies between tasks
Given GCD's DispatchWorkItem and block cancellation, DispatchGroup, and qos in particular, is there really an incentive (cost-performance wise) to use NSOperation anymore for concurrency, apart from cases where you need to cancel a task after it has begun executing, or query the task state?
Apple seems to put a lot more emphasis on GCD, at least at WWDC (granted, GCD is the more recent of the two).
I see them each still having their own purpose. I just recently rewatched the 2015 WWDC talk about this (Advanced NSOperations), and I see two main points here.
Run Time & User Interaction
From the talk:
NSOperations run for a little bit longer than you would expect a block to run; blocks usually take a few nanoseconds, maybe at most a millisecond, to execute.
NSOperations, on the other hand, can run for much longer, anywhere from a couple of milliseconds to even several minutes.
The example they talk about is in the WWDC app, where there exists an NSOperation that has a dependency on having a logged-in user. The dependency NSOperation presents a login view controller and waits for the user to authenticate. Once finished, that NSOperation finishes and the NSOperationQueue resumes its work. I don't think you'd want to use GCD for this scenario.
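For reference, a minimal sketch of that kind of dependency in Objective-C (LoginOperation and LikeVideoOperation are hypothetical names invented for this example):
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
NSOperation *login = [[LoginOperation alloc] init];        // presents login UI, finishes after auth
NSOperation *likeVideo = [[LikeVideoOperation alloc] init];
// likeVideo will not start until login has finished.
[likeVideo addDependency:login];
[queue addOperations:@[login, likeVideo] waitUntilFinished:NO];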
Subclassing
Since NSOperations are just classes, you can subclass them to get more reusability out of them. This isn't possible with GCD.
Example: (Using the WWDC login scenario from above)
You have many NSOperations in your code base that are associated with a user interaction requiring the user to be authenticated (liking a video, in this example). You could extend NSOperation to create an AuthenticatedOperation, then have all those NSOperations extend this new class.
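A minimal sketch of what that subclass might look like (the class names are assumed for illustration; a real implementation would share a single login operation across the queue):
@interface AuthenticatedOperation : NSOperation
// Designated initializer takes the shared login operation as a dependency.
- (instancetype)initWithLoginOperation:(NSOperation *)login;
@end

@implementation AuthenticatedOperation
- (instancetype)initWithLoginOperation:(NSOperation *)login {
    if ((self = [super init])) {
        [self addDependency:login];   // won't start until login finishes
    }
    return self;
}
@end

// Concrete operations inherit the dependency simply by subclassing:
@interface LikeVideoOperation : AuthenticatedOperation
@end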
First off, NSOperationQueue lets you enqueue operations, that is, some sort of asynchronous task with a start method, a cancel method and a few observable properties, while with a dispatch queue one submits a block, closure or function, which is then executed.
An "Operation" is semantically fundamentally different than a block (or closure, function). An operation has an underlying asynchronous task, while a block (closure or functions) is just that.
What comes close to an NSOperation, though, is an asynchronous function, e.g.:
func asyncTask(param: Param, completion: @escaping (T?, Error?) -> Void)
Now with Futures we can define the same asynchronous function like:
func asyncTask(param: Param) -> Future<T>
which makes such asynchronous functions quite handy.
Since futures have combinator functions like map and flatMap and so on, we can quite easily "emulate" the "dependency" feature of NSOperation, just in a more powerful, more concise and more comprehensible way.
We can also implement some sort of NSOperationQueue with a few lines of code based solely on GCD, say a "TaskQueue", with basically the same features, like "maxConcurrentTasks", and use it to enqueue task functions (not operations), again in a more powerful, more concise and more comprehensible way. ;)
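A rough Objective-C sketch of such a "TaskQueue", using a semaphore to cap concurrency (the class and its API are invented for this example, and a production version would avoid blocking worker threads on the semaphore):
@interface TaskQueue : NSObject
- (instancetype)initWithMaxConcurrentTasks:(long)maxConcurrentTasks;
// Each task receives a completion block that it must call when its
// (possibly asynchronous) work has finished.
- (void)enqueue:(void (^)(dispatch_block_t completion))task;
@end

@implementation TaskQueue {
    dispatch_queue_t _workQueue;
    dispatch_semaphore_t _slots;
}

- (instancetype)initWithMaxConcurrentTasks:(long)maxConcurrentTasks {
    if ((self = [super init])) {
        _workQueue = dispatch_queue_create("taskqueue", DISPATCH_QUEUE_CONCURRENT);
        _slots = dispatch_semaphore_create(maxConcurrentTasks);
    }
    return self;
}

- (void)enqueue:(void (^)(dispatch_block_t))task {
    dispatch_async(_workQueue, ^{
        // Wait for a free slot, then run the task; the slot is released
        // when the task calls its completion block.
        dispatch_semaphore_wait(self->_slots, DISPATCH_TIME_FOREVER);
        task(^{ dispatch_semaphore_signal(self->_slots); });
    });
}
@end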
In order to get a cancelable operation, you need to create a subclass of NSOperation, while you can create an async function "ad hoc", inline.
Also, since cancellation is an independent concept, we can assume that there exists some library, implemented solely on top of GCD, which solves this problem in the, uhm, usual way. ;) It may look like this:
self.cancellationRequest = CancellationRequest()
self.asyncTask(param: param, cancellationToken: self.cancellationRequest.token).map { result in
    ...
}
and later:
override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    self.cancellationRequest.cancel()
}
So, IMHO, there's really no reason to use the clunky NSOperation and NSOperationQueue, and no reason any more to subclass NSOperation, which is quite elaborate and surprisingly difficult to get right, unless you don't care about data races.

Are these two ways of dispatching work to the main thread (GCD and NSOperationQueue) equivalent?

I'm curious whether these two ways to dispatch work to the main queue are equivalent, or whether there are some differences:
dispatch_async(dispatch_get_main_queue()) {
// Do stuff...
}
and
NSOperationQueue.mainQueue().addOperationWithBlock { [weak self] () -> Void in
// Do stuff..
}
There are differences, but they are somewhat subtle.
Operations enqueued to -[NSOperationQueue mainQueue] get executed one operation per pass of the run loop. This means, among other things, that there will be a "draw" pass between operations.
With dispatch_async(dispatch_get_main_queue(), ...) and -performSelectorOnMainThread:..., all enqueued blocks/selectors are called one after the other without spinning the run loop (i.e. without allowing views to draw or anything like that); the run loop continues only after executing all enqueued blocks.
So, with respect to drawing, dispatch_async(dispatch_get_main_queue(), ...) and -performSelectorOnMainThread:... batch operations into one draw pass, whereas -[NSOperationQueue mainQueue] will draw after each operation.
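To make the difference concrete, a small sketch (updateStep: is a hypothetical UI-updating method):
for (int i = 0; i < 3; i++) {
    // These three blocks run back to back within a single run-loop pass,
    // so views draw at most once afterwards.
    dispatch_async(dispatch_get_main_queue(), ^{
        [self updateStep:i];
    });
}
for (int i = 0; i < 3; i++) {
    // These three operations each run in their own run-loop pass,
    // so a draw pass can occur between each of them.
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        [self updateStep:i];
    }];
}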
For a full, in-depth investigation of this, see my answer over here.
At a very basic level they are not both the same thing.
Yes, the operation queue method will ultimately be scheduled on a GCD queue. But it also gets all the rich benefits of operation queues, such as an easy way to add dependent operations, state observation, the ability to cancel an operation…
So no, they are not equivalent.
Yes, there are differences between GCD and NSOperation.
GCD is lightweight and suited to quick multithreaded work: loading a profile picture, loading a web page, a network call that is sure to return promptly.
NSOperationQueue:
1. Usually used for heavier work: large network calls, sorting thousands of records, etc.
2. You can add new operations, cancel them, and query the current state of any operation.
3. You can add completion handlers.
4. You can query the operation count, and so on.
These are added advantages over GCD.
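For example, a quick sketch of the completion handler and operation count features mentioned above (the queue variable is assumed to be an existing NSOperationQueue):
NSBlockOperation *op = [NSBlockOperation blockOperationWithBlock:^{
    // heavy work here, e.g. sorting thousands of records
}];
op.completionBlock = ^{
    NSLog(@"operation finished");
};
[queue addOperation:op];
NSLog(@"%lu operations pending", (unsigned long)queue.operationCount);
[queue cancelAllOperations];   // cancel anything still pending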

What behavior is guaranteed with Grand Central Dispatch in Objective-C?

I think the best way to ask this question is with some code:
//Main method
for (int i = 0; i < 10; i++)
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self foo:i];
    });
}

- (void)foo:(int)i
{
    @synchronized(self)
    {
        NSLog(@"%d", i);
    }
}
In this case, is it guaranteed that the numbers 0-9 will be printed out in order? Is there ever a chance that one of the threads waiting on the run queue will be skipped over? And what about in reality: realistically, does that ever happen? What if I wanted the behavior above (still using threads); how could I accomplish this?
In this case, is it guaranteed that the numbers 0-9 will be printed out in order?
No.
Is there ever a chance that one of the threads that is waiting on the run queue will be skipped over?
Unclear what "skipped over" means. If it means "will the blocks be executed in order?" the answer is "probably, but it is an implementation detail".
How about in reality. Realistically, does that ever happen?
Irrelevant. If you are writing concurrency code based on assumptions about realistic implementation details, you are writing incorrect concurrency code.
What if I wanted the behavior above (still using threads); how could I accomplish this?
Create a serial dispatch queue and dispatch to that queue in the order you need things to be executed. Note that this is significantly faster than @synchronized() (of course, @synchronized() wouldn't work for you anyway, in that it guarantees only exclusivity, not order).
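A minimal sketch of that approach (the queue label is arbitrary):
dispatch_queue_t queue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
for (int i = 0; i < 10; i++)
{
    // Blocks on a serial queue run one at a time, in FIFO order,
    // so this prints 0 through 9 in order.
    dispatch_async(queue, ^{
        NSLog(@"%d", i);
    });
}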
From the documentation of dispatch_get_global_queue
Blocks submitted to these global concurrent queues may be executed concurrently with respect to each other.
So there is no guarantee of ordering there. You pass a block of code to the queue, and the queue takes it from there.

@synchronized block versus GCD dispatch_async()

Essentially, I have a set of data in an NSDictionary, but for convenience I'm setting up some NSArrays with the data sorted and filtered in a few different ways. The data will be coming in via different threads (blocks), and I want to make sure there is only one block at a time modifying my data store.
I went through the trouble of setting up a dispatch queue this afternoon, and then randomly stumbled onto a post about @synchronized that made it seem like pretty much exactly what I want to be doing.
So what I have right now is...
// a property on my object
@property (assign) dispatch_queue_t matchSortingQueue;
// in my object init
_matchSortingQueue = dispatch_queue_create("com.asdf.matchSortingQueue", NULL);
// then later...
- (void)sortArrayIntoLocalStore:(NSArray *)matches
{
    dispatch_async(_matchSortingQueue, ^{
        // do stuff...
    });
}
And my question is, could I just replace all of this with the following?
- (void)sortArrayIntoLocalStore:(NSArray *)matches
{
    @synchronized (self) {
        // do stuff...
    }
}
...And what's the difference between the two anyway? What should I be considering?
Although the functional difference might not matter much to you, it's what you'd expect: if you use @synchronized, then the thread you're on is blocked until it can get exclusive execution. If you dispatch to a serial dispatch queue asynchronously, then the calling thread can get on with other things, and whatever it is you're actually doing will always occur on the same, known queue.
So they're equivalent for ensuring that a third resource is used from only one queue at a time.
Dispatching could be a better idea if, say, you had a resource that is accessed by the user interface from the main queue and you wanted to mutate it. Then your user interface code doesn't need an explicit @synchronized, hiding the complexity of your threading scheme within the object quite naturally. Dispatching will also be a better idea if you've got a central actor that can trigger several of these changes on other different actors; that'll allow them to operate concurrently.
Synchronising is more compact and a lot easier to step through in the debugger. If what you're doing tends to be two or three lines and you'd need to dispatch it synchronously anyway, then it feels like going to the effort of creating a queue isn't worth it, especially when you consider the implicit costs of creating a block and moving it over onto the heap.
In the second case you would block the calling thread until "do stuff" was done. Using queues and dispatch_async you will not block the calling thread. This would be particularly important if you call sortArrayIntoLocalStore from the UI thread.

How to use GCD for lightweight transactional locking of resources?

I'm trying to use GCD as a replacement for dozens of atomic properties. I remember at WWDC they were talking about how GCD could be used for efficient transactional locking mechanisms.
In my OpenGL ES runloop method I put all drawing code in a block executed by dispatch_sync on a custom created serial queue. The runloop is called by a CADisplayLink which is to my knowledge happening on the main thread.
There are ivars and properties which are used both for drawing but also for controlling what will be drawn. The problem is that there must be some locking in place to prevent concurrency problems, and a way of transactionally querying and modifying the state of the OpenGL ES scene from the main thread between two drawn frames.
I can modify a group of properties in a transactional way with GCD by executing a block on that serial queue.
But it seems I can't read values back on the main thread, using GCD, while blocking the queue that executes the drawing code. dispatch_sync doesn't have a return value, but I want access to presentation values exactly between the drawing of two frames, both for reading and writing.
Is it this barrier thing they were talking about? How does that work?
This is what the async writer / sync reader model was designed to accomplish. Let's say you have an ivar (and, for purposes of discussion, let's assume that you've gone a wee bit further and encapsulated all your ivars into a single structure, just for simplicity's sake):
struct {
    int x, y;
    char *n;
    dispatch_queue_t _internalQueue;
} myIvars;
Let's further assume (for brevity) that you've initialized the ivars in a dispatch_once() and created the _internalQueue as a serial queue with dispatch_queue_create() earlier in the code.
Now, to write a value:
dispatch_async(myIvars._internalQueue, ^{ myIvars.x = 10; });
dispatch_async(myIvars._internalQueue, ^{ myIvars.n = "Hi there"; });
And to read one:
__block int val;
__block char *v;
dispatch_sync(myIvars._internalQueue, ^{ val = myIvars.x; });
dispatch_sync(myIvars._internalQueue, ^{ v = myIvars.n; });
Using the internal queue makes sure everything is appropriately serialized and that writes can happen asynchronously but reads wait for all pending writes to complete before giving you back the value. A lot of "GCD aware" data structures (or routines that have internal data structures) incorporate serial queues as implementation details for just this purpose.
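As for the "barrier thing" mentioned in the question: with a concurrent queue, dispatch_barrier_async gives a reader/writer variant of the same pattern, letting reads run in parallel while writes run exclusively. A sketch, reusing the myIvars structure from above:
dispatch_queue_t q = dispatch_queue_create("com.example.ivars", DISPATCH_QUEUE_CONCURRENT);
// Writes are barriers: each waits for in-flight reads to finish, runs
// alone, and then lets readers resume.
dispatch_barrier_async(q, ^{ myIvars.x = 10; });
// Reads run concurrently with each other, but never alongside a write.
__block int val;
dispatch_sync(q, ^{ val = myIvars.x; });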
From inside the block you submit to your serial queue, you can dispatch back to the main queue, where you can use the values computed on the serial queue. So it would look something like:
dispatch_sync(serialQueue, ^{
    // execute a block
    dispatch_async(dispatch_get_main_queue(), ^{
        // use your calculations here
    });
});
And serial queues handle the concurrency part themselves, so if another piece of code tries to access the same state at the same time, it will be handled by the queue itself. Hope this was of some help.
