What is the difference between dispatch_sync and dispatch_main?

I understand that dispatch_async() runs something on a background thread, and dispatch_main() runs it on the main thread, so where does dispatch_sync() come in?

You don't generally want to use dispatch_main(). It's for things other than regular applications (system daemons and such). It is, in fact, guaranteed to break your program if you call it in a regular app.
dispatch_sync runs a block on a queue and waits for it to finish; dispatch_async runs a block on a queue and doesn't wait for it to finish.
Serial queues run one block at a time, in order. Concurrent queues run multiple blocks at a time, and therefore aren't necessarily in order.
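To make those two distinctions concrete, here's a hypothetical model in Python rather than GCD (a ThreadPoolExecutor with one worker stands in for a serial queue, and waiting on the returned future stands in for the sync wait — an analogy, not the real API):

```python
from concurrent.futures import ThreadPoolExecutor

# A 1-worker pool models a serial queue: one block at a time, FIFO order.
serial = ThreadPoolExecutor(max_workers=1)
order = []

# "dispatch_async": submit and don't wait for the result.
serial.submit(order.append, "first")
serial.submit(order.append, "second")

# "dispatch_sync": submit and block the caller until the block finishes.
# Because the queue is serial, this also waits for the two blocks above.
serial.submit(order.append, "third").result()

print(order)  # → ['first', 'second', 'third']
serial.shutdown()
```

With `max_workers` greater than one you get the concurrent-queue behavior instead: blocks start in order, but can run (and finish) simultaneously.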
(edit)
Perhaps when you said dispatch_main() you were thinking of dispatch_get_main_queue()?

dispatch_main() is not for running things on the main thread — it runs the GCD block dispatcher. In a normal app, you won't need or want to use it.
dispatch_sync() blocks the current thread until the block completes. This may or may not be the main thread.
If you want to run something on the main thread, you can use dispatch_get_main_queue() to get the main queue and use one of the normal dispatch methods to run a block there.

Related

Understanding dispatch_queues and synchronous/asynchronous dispatch

I'm an Android engineer trying to port some iOS code that uses 5 SERIAL dispatch queues. I want to make sure I'm thinking about things the right way.
dispatch_sync to a SERIAL queue is basically using the queue as a synchronized queue: only one thread may access it, and the block that gets executed can be thought of as a critical region. It happens immediately on the current thread; it's the equivalent of
get_semaphore()
queue.pop()
do_block()
release_semaphore()
dispatch_async to a serial queue performs the block on another thread and lets the current thread return immediately. However, since it's a serial queue, it promises that only one of these asynchronous blocks is going to execute at a time (the next call to dispatch_async will wait until all other blocks are finished). That block can also be thought of as a critical region, but it will occur on another thread. So it's the same code as above, but it's passed to a worker thread first.
Am I off in any of that, or did I figure it out correctly?
This feels like an overly complicated way of thinking of it and there are lots of little details of that description that aren't quite right. Specifically, "it happens immediately on the current thread" is not correct.
First, let's step back: The distinction between dispatch_async and dispatch_sync is merely whether the current thread waits for it or not. But when you dispatch something to a serial queue, you should always imagine that it's running on a separate worker thread of GCD's own choosing. Yes, as an optimization, sometimes dispatch_sync will use the current thread, but you are in no way guaranteed of this fact.
Second, when you discuss dispatch_sync, you say something about it running "immediately". But it's by no means assured to be immediate. If a thread does dispatch_sync to some serial queue, then that thread will block until (a) any block currently running on that serial queue finishes; (b) any other queued blocks for that serial queue run and complete; and (c) obviously, the block that the thread itself dispatched runs and completes.
Now, when you use a serial queue to synchronize thread-safe access to some object in memory, that synchronization is often very quick, so the waiting thread will generally be blocked for a negligible amount of time while its dispatched block (and any prior dispatched blocks) finish. But in general, it's misleading to say that it will run immediately. (If it could always run immediately, then you wouldn't need a queue to synchronize access.)
Now, your question talks about a "critical region", by which I assume you mean some bit of code that must be synchronized, to ensure thread safety or for some similar reason. So, when running this code to be synchronized, the only question re dispatch_sync vs. dispatch_async is whether the current thread must wait. A common pattern, for example, is to dispatch_async writes to some model (because there's no need to wait for the model to update before proceeding), but dispatch_sync reads from the model (because you obviously don't want to proceed until the read value is returned).
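That async-write/sync-read pattern can be sketched outside GCD. This is a hypothetical Python model (a one-worker pool stands in for the serial queue; the class name and methods are made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

class SyncedValue:
    """All access funneled through a one-worker ("serial") pool."""
    def __init__(self, value):
        self._queue = ThreadPoolExecutor(max_workers=1)
        self._value = value

    def write(self, value):
        # "dispatch_async": fire and forget; the caller doesn't wait.
        self._queue.submit(self._set, value)

    def read(self):
        # "dispatch_sync": the caller blocks until the read completes --
        # which also means all earlier queued writes have completed.
        return self._queue.submit(lambda: self._value).result()

    def _set(self, value):
        self._value = value

v = SyncedValue(0)
v.write(1)
v.write(2)
print(v.read())  # → 2
```

Because the queue is serial, the read is guaranteed to see every write that was enqueued before it, even though the writers never waited.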
A further optimization of that sync/async pattern is the reader-writer pattern, where concurrent reads are permissible but concurrent writes are not. Thus, you'll use a concurrent queue, dispatch_barrier_async the writes (achieving serial-like behavior for the writes), but dispatch_sync the reads (enjoying concurrent performance with respect to other read operations).
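The behavior that pattern buys you — concurrent reads, exclusive writes — is essentially a reader-writer lock. Here's a minimal sketch of one in Python (an analogy for the concurrent-queue-plus-barrier setup, not GCD itself; the class is simplified and can starve writers):

```python
import threading

class ReaderWriterLock:
    """Concurrent reads, exclusive writes -- the behavior a private
    concurrent queue gives you when writes go through
    dispatch_barrier_async and reads go through dispatch_sync."""
    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0

    def read(self, fn):
        with self._cond:
            self._readers += 1        # reads may overlap with other reads
        try:
            return fn()
        finally:
            with self._cond:
                self._readers -= 1
                self._cond.notify_all()

    def write(self, fn):
        with self._cond:              # the "barrier": wait until no reader
            while self._readers:      # is active, then run alone
                self._cond.wait()
            return fn()

store = {}
lock = ReaderWriterLock()
lock.write(lambda: store.update(count=1))
print(lock.read(lambda: store["count"]))  # → 1
```

The barrier write holds the lock for its whole duration, so it excludes other writers and makes new readers wait, just as dispatch_barrier_async drains the concurrent queue before running.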
To pick nits, dispatch_sync doesn't necessarily run the code on the current thread, but if it doesn't, it still blocks the current thread until the task completes. The distinction is only potentially important if you're relying on thread IDs or thread-local storage.
But otherwise, yes, unless I missed something subtle.

Can you call dispatch_sync from a concurrent queue to itself without deadlocking?

I know you would deadlock by doing this on a serial queue, but I haven't found anything that mentions deadlocking by doing it on a concurrent queue. I just wanted to verify it won't deadlock (it doesn't seem like it would, as it would only block one of the threads on the queue, and the task would run on another thread on the same queue).
Also, is it true that you can control order of execution by calling dispatch_sync on a concurrent queue? (It's mentioned here) I don't understand why that would happen, as async vs sync have to do with the caller thread.
This will not deadlock since the dispatched block can start running immediately - it's not a serial queue so it doesn't have to wait for the current block to finish.
But it's still not a good idea. Blocking one thread causes the OS to spin up a new one (because there is still free CPU while that thread is sleeping), which wastes memory.
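The same shape can be demonstrated with a thread pool in Python (a hypothetical stand-in for a concurrent queue, not GCD itself):

```python
from concurrent.futures import ThreadPoolExecutor

# Several workers model a concurrent queue; with only one worker
# (i.e. a serial queue) this same pattern would deadlock.
pool = ThreadPoolExecutor(max_workers=4)

def outer():
    # From inside a pool task, submit another task to the same pool and
    # wait for it -- "dispatch_sync from the queue to itself". A free
    # worker picks it up, so there is no deadlock, but the outer task's
    # worker just sits blocked the whole time.
    inner = pool.submit(lambda: "inner done")
    return inner.result()

result = pool.submit(outer).result()
print(result)  # → inner done
pool.shutdown()
```

Note the cost the answer describes: while `outer` waits, its worker thread is occupied doing nothing, which is why the scheduler ends up creating extra threads.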

Suspending and resuming Grand Central Dispatch thread

I am using Grand Central Dispatch to run a process in the background. I want to know how I can suspend, resume and stop that background thread. I have tried
dispatch_suspend(background_thread);
dispatch_resume(background_thread);
but these functions don't help me; the work keeps on running. Please can someone help me?
You seem to have some confusion. Direct manipulation of threads is not part of the GCD API. The GCD object you normally manipulate is a queue, not a thread. You put blocks in a queue, and GCD runs those blocks on any thread it wants.[1]
Furthermore, the dispatch_suspend man page says this:
The dispatch framework always checks the suspension status before executing a block, but such changes never affect a block during execution (non-preemptive).
In other words, GCD will not suspend a queue while the queue is running a block. It will only suspend a queue while the queue is in between blocks.
I'm not aware of any public API that lets you stop a thread without cooperation from the function running on that thread (for example by setting a flag that is checked periodically on that thread).
If possible, you should break up your long-running computation so that you can work on it incrementally in a succession of blocks. Then you can suspend the queue that runs those blocks.
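That "succession of blocks" idea can be sketched as a toy model in Python (the class and its methods are hypothetical, written to mimic the man page behavior: suspension is only checked between blocks, never mid-block):

```python
import queue, threading

class SuspendableSerialQueue:
    """Runs queued blocks one at a time on a worker thread, checking the
    suspended flag only *between* blocks -- like dispatch_suspend, it
    never interrupts a block that is already running."""
    def __init__(self):
        self._blocks = queue.Queue()
        self._resumed = threading.Event()
        self._resumed.set()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            block = self._blocks.get()
            self._resumed.wait()   # suspension honored here, between blocks
            block()

    def dispatch_async(self, block):
        self._blocks.put(block)

    def suspend(self):
        self._resumed.clear()

    def resume(self):
        self._resumed.set()

q = SuspendableSerialQueue()
done, finished = [], threading.Event()
q.suspend()
for i in range(3):
    q.dispatch_async(lambda i=i: done.append(i))
q.dispatch_async(finished.set)
assert done == []          # suspended: no block has run yet
q.resume()
finished.wait(timeout=5)
print(done)  # → [0, 1, 2]
```

The point of the exercise: if the long-running computation is one giant block, suspending does nothing until it finishes; if it's many small blocks, suspension takes effect at the next block boundary.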
[1] Except the main queue. If you put a block on the main queue, GCD will only run that block on the main thread.
You are describing a concurrent processing model, where different processes can be suspended and resumed. This is often achieved using threads, or in some cases coroutines.
GCD uses a different model, one of partially ordered blocks where each block is sequentially executed without pre-emption, suspension or resumption directly supported.
GCD semaphores do exist and may suit your needs; however, creating general cooperating concurrent threads with them is not the goal of GCD. Otherwise, look at a thread-based solution using NSThread or even POSIX threads.
Take a look at Apple's Migrating Away from Threads guide to see whether your model is suited to migration to GCD; not all models are.

Concurrency and synchronous execution

I was reading O'Reilly's iOS 6 Programming Cookbook and am confused about something. Quoting from page 378, chapter 6, "Concurrency":
For any task that doesn’t involve the UI, you can use global concurrent queues in GCD.
These allow either synchronous or asynchronous execution. But synchronous execution
does not mean your program waits for the code to finish before continuing. It
simply means that the concurrent queue will wait until your task has finished before it
continues to the next block of code on the queue. When you put a block object on a
concurrent queue, your own program always continues right away without waiting for
the queue to execute the code. This is because concurrent queues, as their name implies,
run their code on threads other than the main thread.
I bolded the text that intrigues me. I think it is false because as I've just learned today synchronous execution means precisely that the program waits for the code to finish before continuing.
Is this correct or how does it really work?
How is this paragraph wrong? Let us count the ways:
For any task that doesn’t involve the UI, you can use global
concurrent queues in GCD.
This is overly specific and inaccurate. Certain UI-centric tasks, such as loading images, could be done off the main thread. This would be better said as "In most cases, don't interact with UIKit classes except from the main thread," but there are exceptions (for instance, drawing to a UIGraphicsContext is thread-safe as of iOS 4, IIRC, and drawing is a great example of a CPU-intensive task that could be offloaded to a background thread). FWIW, any work unit you can submit to a global concurrent queue you can also submit to a private concurrent queue.
These allow either synchronous or asynchronous execution. But
synchronous execution does not mean your program waits for the code to
finish before continuing. It simply means that the concurrent queue
will wait until your task has finished before it continues to the next
block of code on the queue.
As iWasRobbed speculated, they appear to have conflated sync/async work submission with serial/concurrent queues. Synchronous execution does, by definition, mean that your program waits for the code to return before continuing. Asynchronous execution, by definition, means that your program does not wait. Similarly, serial queues only execute one submitted work unit at a time, executing each in FIFO order. Concurrent queues, private or global, in the general case (more in a second), schedule submitted blocks for execution, in the order in which they were enqueued, on one or more background threads. The number of background threads used is an opaque implementation detail.
When you put a block object on a concurrent queue, your own program
always continues right away without waiting for the queue to execute
the code.
Nope. Not true. Again, they're mixing up sync/async and serial/concurrent. I suspect what they're trying to say is: When you enqueue a block asynchronously, your own program always continues right away without waiting for the queue to execute the code.
This is because concurrent queues, as their name implies, run their code on threads other than the main thread.
This is also not correct. For instance, if you have a private concurrent queue that you are using to act as a reader/writer lock that protects some mutable state, if you dispatch_sync to that queue from the main thread, your code will, in many cases, execute on the main thread.
Overall, this whole paragraph is really pretty horrible and misleading.
EDIT: I mentioned this in a comment on another answer, but it might be helpful to put it here for clarity. The concept of "Synchronous vs. Asynchronous dispatch" and the concept of "Serial vs. Concurrent queues" are largely orthogonal. You can dispatch work to any queue (serial or concurrent) in either a synchronous or asynchronous way. The sync/async dichotomy is primarily relevant to the "dispatch*er*" (in that it determines whether the dispatcher is blocked until completion of the block or not), whereas the Serial/Concurrent dichotomy is primarily relevant to the dispatch*ee* block (in that it determines whether the dispatchee is potentially executing concurrently with other blocks or not).
I think that bit of text is poorly written, but they are basically explaining the difference between execution on a serial queue and execution on a concurrent queue. A serial queue is run on one thread, so it doesn't have a choice but to execute one task at a time, whereas a concurrent queue can use one or more threads.
Serial queues execute one task after the next in the order in which they were put into the queue. Each task has to wait for the prior task to finish before it can be executed (i.e. synchronously).
In a concurrent queue, tasks can be run at the same time that other tasks are also run since they normally use multiple threads (i.e. asynchronously), but again they are still executed in the order they were enqueued and they can effectively be finished in any order. If you use NSOperation, you can also set up dependencies on a concurrent queue to ensure that certain tasks are executed before other tasks are.
More info:
https://developer.apple.com/library/ios/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html
The author is Vandad Nahavandipoor. I don't want to affect this guy's sales income, but all his books contain the same mistakes in the concurrency chapters:
http://www.amazon.com/Vandad-Nahavandipoor/e/B004JNSV7I/ref=sr_tc_2_rm?qid=1381231858&sr=8-2-ent
Which is ironic since he's got a 50-page book exactly on this subject.
http://www.amazon.com/Concurrent-Programming-Mac-iOS-Performance/dp/1449305636/ref=la_B004JNSV7I_1_6?s=books&ie=UTF8&qid=1381232139&sr=1-6
People should STOP reading this guy's books.
When you put a block object on a concurrent queue, your own program
always continues right away without waiting for the queue to execute
the code. This is because concurrent queues, as their name implies,
run their code on threads other than the main thread.
I find it confusing, and the only explanation I can think of is that she is talking about who blocks whom. From man dispatch_sync:
Conceptually, dispatch_sync() is a convenient wrapper around
dispatch_async() with the addition of a semaphore to wait for
completion of the block, and a wrapper around the block to signal its
completion.
So execution returns to your code right away, but the next thing dispatch_sync does after queueing the block is wait on a semaphore until the block is executed. Your code blocks because it chooses to.
The other way your code would block is when the queue chooses to run a block using your thread (the one from where you executed dispatch_sync). In this case, your code wouldn't recover control until the block is executed, so the check on the semaphore would always find the block is done.
Erica Sadun for sure knows better than me, so maybe I'm missing some nuance here, but this is my understanding.
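The man page's description — async submission plus a semaphore — can be reproduced almost literally in Python (a hypothetical sketch; the `dispatch_sync` function here is made up for illustration, with a thread pool standing in for the queue):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def dispatch_sync(queue, block):
    """Model of the man page: dispatch_async plus a semaphore the caller
    waits on, plus a wrapper that signals the block's completion."""
    done = threading.Semaphore(0)
    result = {}

    def wrapper():
        result["value"] = block()
        done.release()          # signal completion

    queue.submit(wrapper)       # the "dispatch_async" part
    done.acquire()              # caller blocks here, by its own choice
    return result["value"]

q = ThreadPoolExecutor(max_workers=1)
answer = dispatch_sync(q, lambda: 6 * 7)
print(answer)  # → 42
```

This makes the "your code blocks because it chooses to" point visible: the submission itself returns immediately; it's the `done.acquire()` that parks the caller.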

Difference between dispatch_get_main_queue and dispatch_get_global_queue

I have just started working on iOS and have been going through Apple's reference material on GCD. dispatch_get_global_queue returns a concurrent queue to which one can submit a block to be executed.
However, we can achieve the same using dispatch_get_main_queue as well, right? Then, what exactly is the difference between dispatch_get_global_queue and dispatch_get_main_queue?
The global queue is a background queue and executes its blocks on a non-main thread. The main queue executes its blocks on the main thread.
You should put background work that does not involve changes to the user interface on the global queue, but use the main queue when the blocks make changes to the user interface. A very common pattern, for example, is to execute a "work" block on the global queue, and to have the work block itself dispatch back to the main queue to update a progress indicator.
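That work-then-hop-back pattern can be modeled in Python (a hypothetical analogy: a plain Queue drained by the "main thread" stands in for the main queue and its run loop):

```python
import queue, threading

main_queue = queue.Queue()   # stand-in for the main queue / run loop
progress = []                # stand-in for a UI progress label

def work():
    # Heavy work on a background thread (the "global queue")...
    total = sum(range(1_000_000))
    # ...then hop back to the "main queue" to update the UI.
    main_queue.put(lambda: progress.append(f"done: {total}"))

threading.Thread(target=work).start()

# The "main thread" drains its queue, as the main run loop would.
main_queue.get(timeout=5)()
print(progress)  # → ['done: 499999500000']
```

The essential property is the same as in GCD: only the main thread ever touches the UI state; background threads merely enqueue the update.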
dispatch_get_main_queue - should be used when you want to manipulate UI elements.
dispatch_get_global_queue - can be used for network calls or Core Data. (It gets a background queue to which you can dispatch background tasks that run asynchronously; it won't block your user interface.)