Calling dispatch_sync from a concurrent queue - does it block entirely?

Let's say I hypothetically call a dispatch_sync from a concurrent queue - does it block the entire queue or just that thread of execution?

dispatch_sync will block the calling thread until execution of the block completes. A concurrent queue has multiple threads, so it will only block one of the threads on that queue; the other threads will still execute.
Here is what Apple says about this:
Submits a block to a dispatch queue for synchronous execution. Unlike
dispatch_async, this function does not return until the block has
finished. Calling this function and targeting the current queue
results in deadlock.
Unlike with dispatch_async, no retain is performed on the target
queue. Because calls to this function are synchronous, it "borrows"
the reference of the caller. Moreover, no Block_copy is performed on
the block.
As an optimization, this function invokes the block on the current
thread when possible.
Source
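To make that concrete, here is a minimal Swift sketch (the queue labels, sleep duration, and printed messages are made up for illustration) showing that the sync call blocks only the work item that makes it, while the rest of the concurrent queue keeps going:
import Foundation

let concurrentQueue = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)
let otherQueue = DispatchQueue(label: "com.example.other")

concurrentQueue.async {
    // This work item is blocked here until `otherQueue` has run the block...
    otherQueue.sync {
        Thread.sleep(forTimeInterval: 1)
        print("sync block finished")
    }
    print("first work item resumes")
}

concurrentQueue.async {
    // ...but this work item, on the same concurrent queue, keeps running
    // on another worker thread in the meantime.
    print("second work item is not blocked")
}

// Keep the process alive long enough to see the output when run as a script.
Thread.sleep(forTimeInterval: 2)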

Related

Does sync/async behave similarly to serial/concurrent, i.e., do they both control DispatchQueues, or do sync/async control threads only?

Most answers on Stack Overflow imply that the sync vs async behaviour is quite similar to the serial vs concurrent queue distinction, like the link in the first comment by @Roope.
I have started to think that
Serial and concurrent are related to DispatchQueue, and sync/async for how an operation will get executed on a thread.
Am I right?
Like if we've got DQ.main.sync then the task/operation closure will get executed in a synchronous manner on this serial (main) queue.
And, if I do DQ.main.async then the task will get executed asynchronously on some other background queue, and on reaching completion will return control on the main thread.
And, since main is a serial queue, it won't let any other task/operation get into the execution state (start getting executed) until the current closure task has finished its execution.
Then,
DQ.global().sync would execute a task synchronously on the thread on which its task/operation has been assigned i.e., it will block that thread from doing any other task/operation by blocking any context switching on that particular thread.
And, since global is a concurrent queue, it will keep on putting the tasks present in it to the execution state irrespective of the previous task/operation's execution state.
DQ.global().async would allow context switching on the thread on which the operation closure has been put for execution
Is this the correct interpretation of the above dispatch queues and sync vs async?
You are asking the right questions, but I think you got a bit confused (mostly due to not very clear posts about this topic on the internet).
Concurrent / Serial
Let's look at how you can create a new dispatch Queue:
let serialQueue = DispatchQueue(label: label)
If you don't specify any other additional parameter, this queue will behave as a serial queue:
This means that every block dispatched on this queue (sync or async, it doesn't matter) will be executed alone, without the possibility for other blocks on that same queue to be executed simultaneously.
This doesn't mean that anything else is stopped; it just means that if something else is dispatched on that same queue, it will wait for the first block to finish before starting its execution. Other threads and queues will still run on their own.
You can, however, create a concurrent queue, which will not constrain blocks of code in this manner; instead, if more blocks of code happen to be dispatched on that same queue at the same time, it will execute them at the same time (on different threads):
let concurrentQueue = DispatchQueue(label: label,
                                    qos: .background,
                                    attributes: .concurrent,
                                    autoreleaseFrequency: .inherit,
                                    target: .global())
So, you just need to pass the .concurrent attribute to the queue, and it won't be serial anymore.
(I won't be talking about the other parameters since they are not the focus of this particular question; I think you can read about them in the other SO post linked in the comment or, if that's not enough, ask another question.)
If you want to understand more about concurrent queues (aka: skip if you don't care about concurrent queues)
You could ask: When do I even need a concurrent queue?
Well, just for example, let's think of a use-case where you want to synchronize READS on a shared resource: since the reads can be done simultaneously without issues, you could use a concurrent queue for that.
But what if you want to write on that shared resource?
Well, in this case a write needs to act as a "barrier": during the execution of that write, no other writes and no reads can operate on that resource simultaneously.
To obtain this kind of behavior, the Swift code would look something like this:
concurrentQueue.async(flags: .barrier, execute: { /*your barriered block*/ })
So, in other words, you can make a concurrent queue temporarily work as a serial queue when you need to.
Once again, the concurrent / serial distinction is only valid for blocks dispatched to that same queue, it has nothing to do with other concurrent or serial work that can be done on another thread/queue.
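To tie the read/write example together, here is a minimal sketch of that pattern (the class, label, and method names are made up, not from the question): concurrent reads via sync, barriered writes via async(flags: .barrier).
import Foundation

final class SharedResource<Value> {
    private var value: Value
    private let queue = DispatchQueue(label: "com.example.resource",
                                      attributes: .concurrent)

    init(_ value: Value) {
        self.value = value
    }

    // Reads may run simultaneously with other reads on the concurrent queue.
    func read() -> Value {
        queue.sync { value }
    }

    // The barrier block waits for in-flight reads to finish and keeps
    // other reads/writes out while the write executes.
    func write(_ newValue: Value) {
        queue.async(flags: .barrier) { self.value = newValue }
    }
}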
SYNC / ASYNC
This is an entirely separate issue, with virtually no connection to the previous one.
These two ways to dispatch some block of code are relative to the current thread/queue you are on at the time of the dispatch call. The dispatch call either blocks (in the case of sync) or doesn't block (async) the execution of that thread/queue while the code you dispatched runs on the other queue.
So let's say I'm executing a method and in that method I dispatch async something on some other queue (I'm using main queue but it could be any queue):
func someMethod() {
    var aString = "1"
    DispatchQueue.main.async {
        aString = "2"
    }
    print(aString)
}
What happens is that this block of code is dispatched on another queue and could be executed serially or concurrently on that queue, but that has no correlation to what is happening on the current queue (which is the one on which someMethod is called).
What happens on the current queue is that the code will continue executing and won't wait for that block to be completed before printing that variable.
This means that, very likely, you will see it print 1 and not 2. (More precisely, you can't know which will happen first.)
If instead you dispatched it sync, then it would ALWAYS print 2 instead of 1, because the current queue would wait for that block of code to be completed before continuing its execution.
So this will print 2:
func someMethod() {
    var aString = "1"
    DispatchQueue.main.sync {
        aString = "2"
    }
    print(aString)
}
But does it mean that the queue on which someMethod is called is actually stopped?
Well, it depends on the current queue:
If it's serial, then yes. All the blocks previously dispatched to that queue, or that will be dispatched on that queue, will have to wait for that block to be completed.
If it's concurrent, then no. All concurrent blocks will continue their execution; only this specific block of execution will be blocked, waiting for this dispatch call to finish its work. Of course, if we are in the barriered case, then it behaves like a serial queue.
What happens when the currentQueue and the queue on which we dispatch are the same?
Assuming we are on serial queues (which I think will be most of your use-cases)
In case we dispatch sync, then deadlock. Nothing will ever execute on that queue anymore. That's the worst that could happen.
In case we dispatch async, then the code will be executed at the end of all the code already dispatched on that queue (including, but not limited to, the code executing right now in someMethod).
So be extra careful when you use the sync method, and be sure you are not on that same queue you are dispatching into.
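If you genuinely can't be sure which queue you are on, one defensive option (this is only a sketch of a common technique, not part of the original answer; all names here are made up) is to tag the queue with a DispatchSpecificKey and check it before calling sync:
import Foundation

let queueKey = DispatchSpecificKey<Void>()
let stateQueue = DispatchQueue(label: "com.example.state")
stateQueue.setSpecific(key: queueKey, value: ())

func performOnStateQueue(_ work: () -> Void) {
    if DispatchQueue.getSpecific(key: queueKey) != nil {
        // Already on stateQueue: a sync call here would deadlock, so just run the work.
        work()
    } else {
        stateQueue.sync(execute: work)
    }
}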
I hope this helps you understand it better.
I have started to think that Serial and concurrent are related to DispatchQueue, and sync/async for how an operation will get executed on a thread.
In short:
Whether the destination queue is serial or concurrent dictates how that destination queue will behave (namely, can that queue run this closure at the same time as other things that were dispatched to that same queue or not);
Whereas sync vs async dictates how the current thread from which you are dispatching will behave (namely, should the calling thread wait until the dispatched code to finish or not).
So, serial/concurrent affects the destination queue to which you are dispatching, whereas sync/async affects the current thread from which you are dispatching.
You go on to say:
Like if we've got DQ.main.sync then the task/operation closure will get executed in a synchronous manner on this serial (main) queue.
I might rephrase this to say “if we've got DQ.main.sync then the current thread will wait for the main queue to perform this closure.”
FWIW, we don’t use DQ.main.sync very often, because 9 times out of 10, we’re just doing this to dispatch some UI update, and there’s generally no need to wait. It’s minor, but we almost always use DQ.main.async. We do use sync when we’re trying to provide thread-safe interaction with some resource. In that scenario, sync can be very useful. But it is often not required in conjunction with main, and only introduces inefficiencies.
And, if I do DQ.main.async then the task will get executed asynchronously on some other background queue, and on reaching completion will return control on the main thread.
No.
When you do DQ.main.async, you’re specifying that the closure will run asynchronously on the main queue (the queue to which you dispatched) and that your current thread (presumably a background thread) doesn’t need to wait for it, but will immediately carry on.
For example, consider a sample network request, whose responses are processed on a background serial queue of the URLSession:
let task = URLSession.shared.dataTask(with: url) { data, _, error in
    // parse the response
    DispatchQueue.main.async {
        // update the UI
    }
    // do something else
}
task.resume()
So, the parsing happens on this URLSession background thread, it dispatches a UI update to the main thread, and then carries on doing something else on this background thread. The whole purpose of sync vs async is whether the “do something else” has to wait for the “update the UI” to finish or not. In this case, there’s no point to block the current background thread while the main is processing the UI update, so we use async.
Then, DQ.global().sync would execute a task synchronously on the thread on which its task/operation has been assigned i.e., ...
Yes, DQ.global().sync says “run this closure on a background queue, but block the current thread until that closure is done.”
Needless to say, in practice, we would never do DQ.global().sync. There’s no point in blocking the current thread waiting for something to run on a global queue. The whole point in dispatching closures to the global queues is so you don’t block the current thread. If you’re considering DQ.global().sync, you might as well just run it on the current thread because you’re blocking it anyway. (In fact, GCD knows that DQ.global().sync doesn’t achieve anything and, as an optimization, will generally run it on the current thread anyway.)
Now if you were going to use async or using some custom queue for some reason, then that might make sense. But there’s generally no point in ever doing DQ.global().sync.
... it will block that thread from doing any other task/operation by blocking any context switching on that particular thread.
No.
The sync doesn’t affect “that thread” (the worker thread of the global queue). The sync affects the current thread from which you dispatched this block of code. Will this current thread wait for the global queue to perform the dispatched code (sync) or not (async)?
And, since global is a concurrent queue, it will keep on putting the tasks present in it to the execution state irrespective of the previous task/operation's execution state.
Yes. Again, I might rephrase this: “And, since global is a concurrent queue, this closure will be scheduled to run immediately, regardless of what might already be running on this queue.”
The technical distinction is that when you dispatch something to a concurrent queue, while it generally starts immediately, sometimes it doesn’t. Perhaps all of the cores on your CPU are tied up running something else. Or perhaps you’ve dispatched many blocks and you’ve temporarily exhausted GCD’s very limited number of “worker threads”. Bottom line, while it generally will start immediately, there could always be resource constraints that prevent it from doing so.
But this is a detail: Conceptually, when you dispatch to a global queue, yes, it generally will start running immediately, even if you might have a few other closures that you have dispatched to that queue which haven’t finished yet.
DQ.global().async would allow context switching on the thread on which the operation closure has been put for execution.
I might avoid the phrase “context switching”, as that has a very specific meaning which is probably beyond the scope of this question. If you’re really interested, you can see WWDC 2017 video Modernizing Grand Central Dispatch Usage.
The way I’d describe DQ.global().async is that it simply “allows the current thread to proceed, unblocked, while the global queue performs the dispatched closure.” This is an extremely common technique, often called from the main queue to dispatch some computationally intensive code to some global queue, but not wait for it to finish, leaving the main thread free to process UI events, resulting in more responsive user interface.
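A minimal sketch of that pattern (the helper functions are hypothetical, added only to make the shape clear): heavy work on a global queue, the UI update dispatched back to main, and the main thread never blocked.
import Foundation

// Hypothetical helpers, only here so the sketch is self-contained.
func expensiveComputation() -> Int {
    (1...1_000_000).reduce(0, +)
}

func updateUI(with value: Int) {
    print("result: \(value)")     // imagine a label update here
}

func reloadData() {
    DispatchQueue.global(qos: .userInitiated).async {
        let result = expensiveComputation()   // runs off the main thread
        DispatchQueue.main.async {
            updateUI(with: result)            // back on the main thread
        }
    }
}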

Will a Serial DispatchQueue wait if a task spawns a different thread

Serial dispatch queues will execute their tasks one at a time. But what if I have task1 and task2 in the queue? task1 starts execution and calls a function with a completion block (which I assume will use a different thread to execute). At this point, I believe task1 will exit, even though the completion block has not yet been called. Is it possible that task2 will start executing before the completion block from task1 is executed?
Yes. In the normal case, that's exactly what will happen.
If you want to wait for something to complete before continuing, research DispatchGroup.
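A minimal sketch of the DispatchGroup approach (someAsyncCall and the queue label are made up): task1 doesn't return until its completion handler has fired, so task2 cannot jump ahead of it.
import Foundation

// Hypothetical async API whose completion arrives on some other thread.
func someAsyncCall(completion: @escaping () -> Void) {
    DispatchQueue.global().async {
        completion()
    }
}

let serialQueue = DispatchQueue(label: "com.example.serial")
let group = DispatchGroup()

serialQueue.async {                // task1
    group.enter()
    someAsyncCall {
        // ... handle the result ...
        group.leave()
    }
    group.wait()                   // blocks this worker thread until leave() is called
}

serialQueue.async {                // task2: starts only after task1, including its completion, is done
    print("task2")
}
Note that group.wait() ties up a GCD worker thread while it waits; if you can restructure the work, group.notify (or chaining from the completion handler) avoids that cost.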

When does dispatch_async actually dispatch?

Let's say I have the following code:
dispatch_async(dispatch_get_main_queue()) {
    myFunction()
}
This says to call the block, which calls myFunction, asynchronously. Let's say that I call this code in my main queue, which is also the queue specified for the dispatch_async call.
When does this block actually get called in this case? Does my current queue get pre-empted and the block run immediately, or does the current call stack unroll and the block gets called at the next event loop? Or something else?
When does this block actually get called in this case? Does my current queue get pre-empted and the block run immediately, or does the current call stack unroll and the block gets called at the next event loop? Or something else?
In short, if you dispatch asynchronously to the main queue from the main queue, the dispatched block will not run until you yield back to the main run loop (and also after any other blocks dispatched to the main queue also finish).
From Grand Central Dispatch (GCD) Reference: dispatch_async
The target queue determines whether the block is invoked serially or concurrently with respect to other blocks submitted to that same queue.
From OperationQueues: Performing Tasks on the Main Thread
You can get the dispatch queue for your application’s main thread by calling the dispatch_get_main_queue function. Tasks added to this queue are performed serially on the main thread itself. Therefore, you can use this queue as a synchronization point for work being done in other parts of your application.
From these two pieces of information, we know the main queue is a serial dispatch queue and dispatch_async() will follow the rules of serial execution.
So the simple answer is the task will be run on the main queue sometime after the completion of the current context.
I couldn't find an official description of the run loop's internals, but rob mayoff gives a good breakdown in:
Order of operations in runloop on iOS
Note that the run loop is structured so only one of these branches happens on each iteration:
Ready timers fire, or
Blocks on dispatch_get_main_queue() run, or
A single version 1 source is dispatched to its callback.
If the context is an input source or a timer fire, then the task will happen in a different iteration of the run loop. If the context is a dispatched task, then the task may actually run within the same iteration of the run loop.
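A minimal sketch of the ordering this describes (assuming the code is already running on the main thread, e.g. inside a view controller method; the print statements are illustrative):
import Foundation

// Running on the main thread:
print("1: before dispatch")

DispatchQueue.main.async {
    // Runs only after the current call stack returns to the main run loop.
    print("3: dispatched block")
}

print("2: after dispatch")          // prints before the dispatched block runs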

What is purpose of dispatch sync?

I'm pretty clear on what dispatch_async does, but I'm not clear on what dispatch_sync's purpose is. For example: what is the difference between this:
NSLog(@"A");
NSLog(@"B");
and this:
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"A");
});
NSLog(@"B");
As I understand it, in both cases the output will be A then B, because sync executes in the order it is written. Thanks.
As the name says, dispatch_sync makes it possible to synchronize the tasks to be executed even if they are not executed on the main queue.
Saheb Roy's answer is only half of the truth. You can only specify the dispatch queue your code should be executed on. The actual thread is chosen by GCD.
Code blocks dispatched with dispatch_async on a concurrent queue are also dequeued in FIFO order and are guaranteed to start in the order you dispatch them (though they may run and finish concurrently). The main difference is that dispatch_sync on a serial queue also guarantees that subsequent code blocks are not executed before the previous block has finished. dispatch_sync blocks your current dispatch queue, i.e. the queue on which your dispatch_sync call is made. So your calling function is blocked until the dispatched code block returns, whereas dispatch_async returns immediately.
An execution timeline using dispatch_async on a concurrent queue may look like this (the blocks overlap):
Block A [..............]
Block B    [.....]
Block C       [....]
while using dispatch_sync on a serial queue it looks like this (each block waits for the previous one):
Block A [..............]
Block B                 [.....]
Block C                        [....]
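For a concrete picture of why you would ever want the caller to wait, here is a minimal Swift sketch (the Counter class and label are made up) of the classic dispatch_sync use case: serializing access to shared state from multiple threads.
import Foundation

final class Counter {
    private var value = 0
    private let queue = DispatchQueue(label: "com.example.counter")   // serial

    func increment() {
        queue.sync { value += 1 }    // caller waits until the write has happened
    }

    var current: Int {
        queue.sync { value }         // caller waits for the read, then uses the result
    }
}
Because the caller blocks until the block returns, it can safely use the result immediately after the call, which is exactly what dispatch_async cannot give you.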
The purpose of the dispatch_sync queue call is that it will dispatch blocks of code on the thread you mentioned and that they will run synchronously, meaning one by one, or rather one after the other, in FIFO order.
Do check out NSOperationQueue to understand the function of dispatch_sync in a better way
According to Docs
Submits a block to a dispatch queue for synchronous execution. Unlike
dispatch_async, this function does not return until the block has
finished. Calling this function and targeting the current queue
results in deadlock.
Unlike with dispatch_async, no retain is performed on the target
queue. Because calls to this function are synchronous, it "borrows"
the reference of the caller. Moreover, no Block_copy is performed on
the block.
As an optimization, this function invokes the block on the current
thread when possible.
Its purpose is multitasking: two or more pieces of work run at the same time, one on a background thread and another on the main thread. Mostly the work runs on a background thread and UI updates happen on the main thread, to avoid blocking the screen.

Calling code sequentially after dispatch_async

I'm doing some customization in iOS: I'm subclassing a system class that executes a method asynchronously (presumably with dispatch_async).
Sample code:
-(void)originalAsyncMethod {
    [super originalAsyncMethod];
    dispatch_async(dispatch_get_main_queue(), ^{
        //do something that needs to happen just after originalAsyncMethod finishes executing
    });
}
Is there a way I can make sure my custom code runs AFTER the async super method is executed?
It's unclear to me whether this would be possible based on your question, but if you have direct access to the implementation of super, then this shouldn't be too hard to achieve.
First, assuming that you have access to the super class and that the super implementation also dispatches asynchronously to the main queue, then you don't actually have to do anything to get this working as expected. When you use dispatch_get_main_queue(), you're adding your dispatch block to the end of a serial queue on the main thread that is executed in FIFO (first in, first out) order.
The second option is also pretty heavily reliant on having access to the super implementation, as it would require you to manually create your own dispatch queue to execute tasks on. It goes without saying that if you use a serial dispatch queue then you have FIFO ordering in that queue, the same as with dispatch_get_main_queue(), only you wouldn't have to execute on the main thread.
And the last option I can think of wouldn't necessarily require you to modify the super class, but would require you to know the queue on which super was executing (and it still might not work right if that's a global queue). By using a dispatch barrier, you could allow your super implementation to execute asynchronously on a concurrent queue, knowing that the subclass dispatch block has also been added to the queue (via dispatch_barrier) and will be executed once the super dispatch (and any other previous submissions to the queue) has completed.
Quoting the docs
A dispatch barrier allows you to create a synchronization point within
a concurrent dispatch queue. When it encounters a barrier, a
concurrent queue delays the execution of the barrier block (or any
further blocks) until all blocks submitted before the barrier finish
executing. At that point, the barrier block executes by itself. Upon
completion, the queue resumes its normal execution behavior.
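A minimal Swift sketch of the barrier idea from that quote (the queue label and work items are placeholders; note that barriers only take effect on a custom concurrent queue, not on a global queue):
import Foundation

let workQueue = DispatchQueue(label: "com.example.work", attributes: .concurrent)

workQueue.async {
    // stands in for the work super kicks off asynchronously
}
workQueue.async {
    // some other concurrent work already in flight
}

workQueue.async(flags: .barrier) {
    // Starts only after every block submitted above has finished.
    print("all previously submitted work is done")
}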

Resources