I have been using DispatchQueue.main.async for a long time to perform UI-related operations.
Swift provides both DispatchQueue.main.async and DispatchQueue.main.sync, and both are performed on the main queue.
Can anyone tell me the difference between them?
When should I use each?
DispatchQueue.main.async {
    self.imageView.image = image
    self.lbltitle.text = ""
}
DispatchQueue.main.sync {
    self.imageView.image = image
    self.lbltitle.text = ""
}
Why Concurrency?
As soon as you add heavy tasks to your app, such as data loading, they slow your UI work down or even freeze it.
Concurrency lets you perform 2 or more tasks “simultaneously”.
The disadvantage of this approach is thread safety, which is not always easy to control, e.g. when different tasks want to access the same resources, such as changing the same variable on different threads, or accessing resources that are already locked by other threads.
There are a few abstractions we need to be aware of.
Queues.
Synchronous/Asynchronous task performance.
Priorities.
Common troubles.
Queues
A queue is either serial or concurrent, and at the same time either global (system-provided) or private.
With serial queues, tasks are finished one by one, while with concurrent queues tasks are performed simultaneously and finish in an unpredictable order. The same group of tasks will take far more time on a serial queue than on a concurrent queue.
You can create your own private queues (both serial or concurrent) or use already available global (system) queues.
The main queue is the only serial queue out of all of the global queues.
It is highly recommended not to perform heavy tasks that are unrelated to UI work (e.g. loading data from the network) on the main queue, but instead to run them on other queues, so the UI stays unfrozen and responsive to user actions. If we let the UI be changed on other queues, the changes can happen at an unexpected time and speed: some UI elements may be drawn before or after they are needed, which can break or even crash the UI. We also need to keep in mind that the global queues are system queues, so the system may run some of its own tasks on them as well.
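A minimal sketch of that division of labor, assuming a hypothetical view controller with an imageView property and a loadImageData() stand-in for the slow work:

import UIKit

final class AvatarViewController: UIViewController {
    // `imageView` and `loadImageData()` are hypothetical stand-ins.
    let imageView = UIImageView()

    func refreshImage() {
        // Heavy work goes onto a global (background) queue...
        DispatchQueue.global(qos: .userInitiated).async {
            let data = self.loadImageData()        // slow network/disk work
            let image = UIImage(data: data)
            // ...and the UI update hops back onto the main queue.
            DispatchQueue.main.async {
                self.imageView.image = image
            }
        }
    }

    private func loadImageData() -> Data {
        // Placeholder for a real download; sleeps to simulate latency.
        Thread.sleep(forTimeInterval: 1)
        return Data()
    }
}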
Quality of Service / Priority
Queues also have different QoS (quality of service) classes, which set the priority with which tasks are performed (from highest to lowest here):
.userInteractive - the QoS of the main queue
.userInitiated - for tasks initiated by the user, where the user is waiting for a response
.utility - for tasks that take some time and don't require an immediate response, e.g. working with data
.background - for tasks that aren't related to the visual part and aren't strict about their completion time
There is also .default, which doesn't transfer QoS information. If the QoS couldn't be detected, a QoS between .userInitiated and .utility is used.
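For illustration, here are a couple of hypothetical private queues created with explicit QoS classes (the labels are made up):

import Foundation

// Two hypothetical private queues with explicit QoS classes.
let parsingQueue = DispatchQueue(label: "com.example.parsing", qos: .utility)
let downloadQueue = DispatchQueue(label: "com.example.downloads",
                                  qos: .userInitiated,
                                  attributes: .concurrent)

parsingQueue.async {
    // long-running work the user isn't actively waiting on
}
downloadQueue.async {
    // work the user initiated and is waiting for
}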
Tasks can be performed synchronously or asynchronously.
Synchronous function returns control to the current queue only after the task is finished. It blocks the queue and waits until the task is finished.
Asynchronous function returns control to the current queue right after the task has been sent to be performed on a different queue. It doesn't wait until the task is finished. It doesn't block the queue.
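A tiny sketch of the difference, using a private serial queue (the label is a made-up example):

import Foundation

let queue = DispatchQueue(label: "com.example.worker")   // hypothetical label

// async: control returns immediately, so "after async" may print
// before the dispatched block runs.
queue.async {
    print("async block")
}
print("after async")

// sync: the caller is blocked until the block has finished,
// so "sync block" always prints before "after sync".
queue.sync {
    print("sync block")
}
print("after sync")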
Common Troubles.
The most common mistakes programmers make when designing concurrent apps are the following:
Race condition - caused when the app's behavior depends on the order in which parts of the code happen to execute.
Priority inversion - when higher-priority tasks wait for lower-priority tasks to finish because of resources blocked by the latter.
Deadlock - when several queues wait forever for resources (variables, data, etc.) that are already blocked by some of those same queues.
NEVER call the sync function on the main queue.
If you call the sync function on the main queue, it blocks the queue while waiting for the dispatched task to complete, but that task can never finish because it can't even start: the queue it was dispatched to is already blocked. This is called a deadlock.
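A minimal sketch of that deadlock (do not ship this; it will hang the app when run from the main thread):

import Foundation

// Running this from code that is already on the main queue hangs forever:
// main is serial, so the dispatched block cannot start until the current
// block returns, and the current block cannot return because sync is
// waiting for the dispatched block.
DispatchQueue.main.sync {
    print("never reached when called from the main queue")
}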
When to use sync?
When we need to wait until the task is finished, e.g. when we are making sure that some function/method is not called a second time before the current call has completely finished.
When you use async, the calling queue moves on without waiting until the dispatched block is executed. By contrast, sync makes the calling queue stop and wait until the work you've dispatched in the block is done. Therefore sync is prone to deadlocks: try running DispatchQueue.main.sync from the main queue and the app will freeze, because the calling queue waits until the dispatched block finishes, but the block can't even start (the queue is stopped, waiting).
When to use sync? When you need to wait for something done on a DIFFERENT queue and only then continue working on your current queue
Example of using sync:
On a serial queue you could use sync as a mutex in order to make sure that only one thread at a time is able to execute the protected piece of code.
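A minimal sketch of that mutex pattern; the Account type, its isolation queue label, and the _balance property are hypothetical:

import Foundation

// `Account`, its isolation queue label, and `_balance` are hypothetical.
final class Account {
    private let isolationQueue = DispatchQueue(label: "com.example.account.isolation")
    private var _balance = 0

    var balance: Int {
        // Reads wait for any in-flight mutation to finish first.
        isolationQueue.sync { _balance }
    }

    func deposit(_ amount: Int) {
        // Only one caller at a time can execute this protected block.
        isolationQueue.sync {
            _balance += amount
        }
    }
}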
DispatchQueue.<>.sync vs DispatchQueue.<>.async
GCD allows you to execute a task either synchronously or asynchronously.
A synchronous (block and wait) call returns control only once the task has completed.
An asynchronous (dispatch and proceed) call returns control immediately, dispatching the task to start on an appropriate queue but not waiting for it to complete.
sync or async methods have no effect on the queue on which they are called.
sync blocks the thread from which it is called, not the queue on which it is called. It is a property of the DispatchQueue itself whether it waits for the current task to finish before executing the next one (serial queue) or can run the next task before the current one finishes (concurrent queue).
So even though DispatchQueue.main.async is an async call, a heavy-duty operation submitted with it can still freeze the UI, because the main queue's work is executed serially on the main thread. If this method is called from a background thread, control returns to that thread immediately even while the UI appears frozen, because the async call was made on DispatchQueue.main, not on the caller's queue.
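A small sketch of that effect; the artificial reduce loop is just a stand-in for any heavy work (don't do this in a real app):

import Foundation

DispatchQueue.global(qos: .utility).async {
    DispatchQueue.main.async {
        // Heavy work: freezes the UI even though it was dispatched "async",
        // because it still runs serially on the main thread.
        let sum = (0..<50_000_000).reduce(0, +)
        print(sum)
    }
    // Control returns to the background thread immediately after the
    // async call above, even while the main thread is busy.
    print("background thread continues right away")
}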
I'm relatively new to programming in Swift for iOS. I was wondering if I do:
for n in 1...10 {
    DispatchQueue.global(qos: .background).async {
        sleep(5)
        print("Hello world!")
    }
}
Will this spawn 10 more background threads that will immediately start working on my task?
Or will there ever only be 1 background thread and I will have just enqueued 10 things to do?
From the DispatchQueue docs:
Work submitted to dispatch queues executes on a pool of threads managed by the system. Except for the dispatch queue representing your app's main thread, the system makes no guarantees about which thread it uses to execute a task.
When a task scheduled by a concurrent dispatch queue blocks a thread, the system creates additional threads to run other queued concurrent tasks.
The system may use one of the existing threads, or it may create a new one. No guarantees that two work items executed on the same dispatch queue will be executed on the same thread (unless it's the main queue).
You are creating 10 tasks, but the order is not guaranteed. global() returns a concurrent queue, so the tasks may execute at the same time, out of order, or in order. Additionally, you have no control over the actual threads: the system uses whatever threads are available, so if task 9 finishes first, task 10 may get the thread that task 9 had. The dispatch APIs were meant to take a lot of thread management away from the programmer; in fact, even on a serial queue you couldn't guarantee which threads the system uses. The main queue guarantees the thread, but that's the only case I know of. As for the number of threads working at once, the system decides; in this case, with a low priority (.background), it will do as many at once as it can.
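To get a rough look at the thread pool, here is a variant of the loop that prints the executing thread; it assumes a command-line script, hence the trailing sleep to keep the process alive:

import Foundation

// Print the thread each block runs on: you will typically see a handful
// of distinct pool threads reused across the 10 work items, not 10 new ones.
for n in 1...10 {
    DispatchQueue.global(qos: .background).async {
        print("task \(n) on \(Thread.current)")
    }
}
// Keep the command-line process alive long enough for the blocks to run.
Thread.sleep(forTimeInterval: 1)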
I have read many articles which specify that we need to update the UI on the main thread; however, whenever I update my UI, the code I use is always DispatchQueue.main, which gives me back the queue, not the thread. How exactly would I access the thread, or are the two the same?
Imagine a train station where the number of the train is the same as the number of the platform that it leaves from.
So if you want the #1 train, you stand on #1 platform. You cannot get on the train without first standing on the platform. Everyone else who wants to get on this train also stands on the platform to wait for their chance to get on the train.
The train is the thread. The platform is the queue.
If you want to get on the main thread, get on the main queue.
From Dispatch Queues in the Concurrency Programming Guide:
Main dispatch queue
The main dispatch queue is a globally available serial queue that executes tasks on the application’s main thread. This queue works with the application’s run loop (if one is present) to interleave the execution of queued tasks with the execution of other event sources attached to the run loop. Because it runs on your application’s main thread, the main queue is often used as a key synchronization point for an application.
Generally, GCD maintains a pool of threads, and there is no 1-1 relationship between dispatch queues and threads. But the main queue is special: it is tied to the main thread, and all items dispatched to the main queue are executed on the main thread. (The same is true for OperationQueue.main.)
Dispatching code to DispatchQueue.main (or OperationQueue.main) ensures that it is executed on the main thread, and synchronized with other UI updates.
In this sense, the terms “execute on the main thread” and “execute on the main queue” are often used interchangeably.
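A quick way to check this yourself (sketched for a playground or command-line file; the run-loop call just keeps the process alive long enough):

import Foundation

// Dispatching to the main *queue* lands you on the main *thread*:
DispatchQueue.global().async {
    print(Thread.isMainThread)         // false: some thread from the pool
    DispatchQueue.main.async {
        print(Thread.isMainThread)     // true: main queue work runs on the main thread
    }
}
RunLoop.main.run(until: Date().addingTimeInterval(1))   // keep a script alive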
A DispatchQueue manages the execution of your code on a pool of threads managed by the system, rather than on one specific thread.
From Apple documentation:
DispatchQueue manages the execution of work items. Each work item
submitted to a queue is processed on a pool of threads managed by the
system.
So, when you call
DispatchQueue.main.async {
    // your code
}
This code is submitted to the main queue, which in turn runs on the main thread.
// main thread
DispatchQueue.main.async {
    // e.g. update your UI here
    tableview.reloadData()
}
How to stop/cancel/suspend/resume tasks on GCD queue
How does one stop background queue operations? I want to stop them on some screens in our app, and on some screens they should resume automatically. Also, how does one pass a queue around in iOS?
I mean, while the user is browsing the app we run background work on a dispatch_queue_t, but it never stops or resumes in the code. So how does one suspend and resume a queue?
To suspend a dispatch queue, it is simply queue.suspend() (dispatch_suspend(queue) in Objective-C). That doesn't affect any tasks currently running, but merely prevents new tasks from starting on that queue. Also, you obviously only suspend queues that you created (not global queues, not main queue).
To resume a dispatch queue, it is queue.resume() (or dispatch_resume(queue) in Objective-C). There's no concept of “auto resume”, so you'd just have to manually resume it when appropriate.
To pass a dispatch queue around, you simply pass the DispatchQueue object that you created (or the dispatch_queue_t object that you created when you called dispatch_queue_create() in Objective-C).
In terms of canceling tasks queued on dispatch queues, this was introduced in iOS 8. One can call cancel() on a DispatchWorkItem (dispatch_block_cancel(block) on a dispatch_block_t object in Objective-C). This cancels queued blocks/items that have not started, but does not stop ones that are already underway. If you want to be able to interrupt a dispatched block/item, you have to periodically check item.isCancelled (or dispatch_block_testcancel() in Objective-C).
See https://stackoverflow.com/a/38372384/1271826 for examples on canceling dispatch work items.
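For a rough idea, here's a minimal sketch of cooperative cancellation with a DispatchWorkItem plus queue suspend/resume; the queue label and the chunk loop are hypothetical placeholders:

import Foundation

let queue = DispatchQueue(label: "com.example.downloads")   // hypothetical label

// A cancellable unit of work. cancel() keeps a not-yet-started item from
// running; an item already underway must poll isCancelled and bail out itself.
var item: DispatchWorkItem!
item = DispatchWorkItem {
    for chunk in 0..<100 {
        if item.isCancelled { return }         // cooperative cancellation
        print("processing chunk \(chunk)")
        Thread.sleep(forTimeInterval: 0.05)    // stand-in for real work
    }
}
queue.async(execute: item)

// Later, e.g. when the user leaves the screen:
queue.suspend()    // prevents new items from starting on the queue
item.cancel()      // flags the running item; the loop above sees it and returns
queue.resume()     // a suspended queue must be resumed before it is released

Thread.sleep(forTimeInterval: 1)   // keep a command-line process alive to observe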
If you want to cancel tasks, you might also consider using operation queues, i.e. OperationQueue (NSOperationQueue in Objective-C). Its cancelable operations have been around for a while and you're likely to find lots of examples online. It also supports constraining the degree of concurrency with maxConcurrentOperationCount (whereas with dispatch queues you can only choose between serial and concurrent, and controlling concurrency more than that requires a tiny bit of effort on your part).
If using operation queues, you suspend and resume by changing the suspended property of the queue. And to pass it around, you just pass the NSOperationQueue object you instantiated.
Having said all of that, I'd suggest you expand your question to elaborate what sort of tasks are running in the background and articulate why you want to suspend them. There might be better approaches than suspending the background queue.
In your comments, you mention that you were using NSTimer (a.k.a. Timer in Swift). If you want to stop a timer, call timer.invalidate(); create a new timer when you want to start it again.
Or if the timer is really running on a background thread, GCD “dispatch source timers” do this far more gracefully. With a GCD timer, you can suspend/resume it just like you suspend/resume a queue, just using the timer object instead of the queue object.
You can't pause / cancel when using a GCD queue. If you need that functionality (and in a lot of general cases even if you don't) you should be using the higher level API - NSOperationQueue. This is built on top of GCD but it gives you the ability to control how many things are executing at the same time, suspend processing of the queue and to cancel individual / all operations.
I was reading O'Reilly's iOS 6 Programming Cookbook and am confused about something. Quoting from page 378, chapter 6, "Concurrency":
For any task that doesn’t involve the UI, you can use global concurrent queues in GCD.
These allow either synchronous or asynchronous execution. But synchronous execution
does not mean your program waits for the code to finish before continuing. It
simply means that the concurrent queue will wait until your task has finished before it
continues to the next block of code on the queue. When you put a block object on a
concurrent queue, your own program always continues right away without waiting for
the queue to execute the code. This is because concurrent queues, as their name implies,
run their code on threads other than the main thread.
I bolded the text that intrigues me. I think it is false because, as I've just learned today, synchronous execution means precisely that the program waits for the code to finish before continuing.
Is this correct or how does it really work?
How is this paragraph wrong? Let us count the ways:
For any task that doesn’t involve the UI, you can use global
concurrent queues in GCD.
This is overly specific and inaccurate. Certain UI centric tasks, such as loading images, could be done off the main thread. This would be better said as "In most cases, don't interact with UIKit classes except from the main thread," but there are exceptions (for instance, drawing to a UIGraphicsContext is thread-safe as of iOS 4, IIRC, and drawing is a great example of a CPU intensive task that could be offloaded to a background thread.) FWIW, any work unit you can submit to a global concurrent queue you can also submit to a private concurrent queue too.
These allow either synchronous or asynchronous execution. But
synchronous execution does not mean your program waits for the code to
finish before continuing. It simply means that the concurrent queue
will wait until your task has finished before it continues to the next
block of code on the queue.
As iWasRobbed speculated, they appear to have conflated sync/async work submission with serial/concurrent queues. Synchronous execution does, by definition, mean that your program waits for the code to return before continuing. Asynchronous execution, by definition, means that your program does not wait. Similarly, serial queues only execute one submitted work unit at a time, executing each in FIFO order. Concurrent queues, private or global, in the general case (more in a second), schedule submitted blocks for execution, in the order in which they were enqueued, on one or more background threads. The number of background threads used is an opaque implementation detail.
When you put a block object on a concurrent queue, your own program
always continues right away without waiting for the queue to execute
the code.
Nope. Not true. Again, they're mixing up sync/async and serial/concurrent. I suspect what they're trying to say is: When you enqueue a block asynchronously, your own program always continues right away without waiting for the queue to execute the code.
This is because concurrent queues, as their name implies, run their code on threads other than the main thread.
This is also not correct. For instance, if you have a private concurrent queue that you are using to act as a reader/writer lock that protects some mutable state, if you dispatch_sync to that queue from the main thread, your code will, in many cases, execute on the main thread.
Overall, this whole paragraph is really pretty horrible and misleading.
EDIT: I mentioned this in a comment on another answer, but it might be helpful to put it here for clarity. The concept of "Synchronous vs. Asynchronous dispatch" and the concept of "Serial vs. Concurrent queues" are largely orthogonal. You can dispatch work to any queue (serial or concurrent) in either a synchronous or asynchronous way. The sync/async dichotomy is primarily relevant to the dispatcher (in that it determines whether the dispatcher is blocked until completion of the block or not), whereas the serial/concurrent dichotomy is primarily relevant to the dispatched block itself (in that it determines whether that block is potentially executing concurrently with other blocks or not).
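To make the orthogonality concrete, here's a minimal sketch of the reader/writer pattern mentioned above: synchronous reads against a private concurrent queue, with barrier-flagged asynchronous writes. The Cache type and its queue label are hypothetical:

import Foundation

// A private *concurrent* queue used as a reader/writer lock: synchronous
// reads (which may well run on the caller's thread) and barrier-flagged
// writes that get the queue to themselves.
final class Cache {
    private let queue = DispatchQueue(label: "com.example.cache",
                                      attributes: .concurrent)
    private var storage: [String: Data] = [:]

    func value(for key: String) -> Data? {
        // Synchronous dispatch: many reads may execute concurrently.
        queue.sync { storage[key] }
    }

    func setValue(_ data: Data, for key: String) {
        // Asynchronous barrier: waits for in-flight reads, then runs alone.
        queue.async(flags: .barrier) {
            self.storage[key] = data
        }
    }
}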
I think that bit of text is poorly written, but they are basically explaining the difference between execution on a serial queue and execution on a concurrent queue. A serial queue executes one task at a time, whereas a concurrent queue can use one or more threads.
Serial queues execute one task after the next in the order in which they were put into the queue. Each task has to wait for the prior task to finish before it can be executed (i.e. synchronously).
In a concurrent queue, tasks can run at the same time as other tasks, since the queue normally uses multiple threads (i.e. asynchronously); they are still started in the order they were enqueued, but they can effectively finish in any order. If you use NSOperation, you can also set up dependencies on a concurrent queue to ensure that certain tasks are executed before other tasks, as sketched below.
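A minimal sketch of such a dependency with OperationQueue (the operation names and count are made up):

import Foundation

// `parse` will not start until `download` has finished, even though the
// queue is free to run unrelated operations concurrently.
let operationQueue = OperationQueue()
operationQueue.maxConcurrentOperationCount = 4

let download = BlockOperation { print("downloading...") }
let parse = BlockOperation { print("parsing...") }
parse.addDependency(download)

operationQueue.addOperations([download, parse], waitUntilFinished: true)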
More info:
https://developer.apple.com/library/ios/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html
The author is Vandad Nahavandipoor. I don't want to affect this guy's sales income, but all his books contain the same mistakes in the concurrency chapters:
http://www.amazon.com/Vandad-Nahavandipoor/e/B004JNSV7I/ref=sr_tc_2_rm?qid=1381231858&sr=8-2-ent
Which is ironic since he's got a 50-page book exactly on this subject.
http://www.amazon.com/Concurrent-Programming-Mac-iOS-Performance/dp/1449305636/ref=la_B004JNSV7I_1_6?s=books&ie=UTF8&qid=1381232139&sr=1-6
People should STOP reading this guy's books.
When you put a block object on a concurrent queue, your own program
always continues right away without waiting for the queue to execute
the code. This is because concurrent queues, as their name implies,
run their code on threads other than the main thread.
I find it confusing, and the only explanation I can think of is that she is talking about who blocks whom. From man dispatch_sync:
Conceptually, dispatch_sync() is a convenient wrapper around
dispatch_async() with the addition of a semaphore to wait for
completion of the block, and a wrapper around the block to signal its
completion.
So execution returns to your code right away, but the next thing the dispatch_sync does after queueing the block, is wait on a semaphore until the block is executed. Your code blocks because it chooses to.
The other way your code would block is when the queue chooses to run a block using your thread (the one from where you executed dispatch_sync). In this case, your code wouldn't recover control until the block is executed, so the check on the semaphore would always find the block is done.
Erica Sadun for sure knows better than me, so maybe I'm missing some nuance here, but this is my understanding.
I am trying to re-schedule a queued block that will handle the update operations.
The main goal is updating UI objects (an online-user table, etc.) with the minimum number of UI update requests. (The server sometimes rains down a massive amount of updates, yay!)
For simplicity, the main scenario is:
The dispatch_queue_t instance (the queue that will handle the given UI-updating block) is a serial dispatch queue (a private dispatch queue).
The operation (the UI-updating block) is scheduled with dispatch_after with a delay of t (instead of updating the UI for every data-set update, collect the update requests that arrive within t and perform a single UI update for them).
In case our data set is updated, check whether a scheduled event already exists. If yes, unschedule it from the dispatch_queue_t instance, then re-schedule the same block with a delay of t.
Also:
t is a small time interval that probably won't be noticed by the user (e.g. 500 ms).
Any alternative approach is welcome.
My motive behind this:
I applied the same logic via Android's Handler (a post & removeCallbacks combination with a Runnable instance) and I hope I can achieve the same on iOS.
Edit:
As @Sven suggested, using NSOperationQueue is more suitable for this scenario, since each NSOperation supports cancellation. I skimmed through the documents and found:
Canceling Operations
Once added to an operation queue, an operation object is effectively owned by the queue and cannot be removed. The only way to dequeue an operation is to cancel it. You can cancel a single individual operation object by calling its cancel method or you can cancel all of the operation objects in a queue by calling the cancelAllOperations method of the queue object.
You should cancel operations only when you are sure you no longer need them. Issuing a cancel command puts the operation object into the “canceled” state, which prevents it from ever being run. Because a canceled operation is still considered to be “finished”, objects that are dependent on it receive the appropriate KVO notifications to clear that dependency. Thus, it is more common to cancel all queued operations in response to some significant event, like the application quitting or the user specifically requesting the cancellation, rather than cancel operations selectively.
This can easily be done with GCD as well, no need to reach for the big hammer that is NSOperationQueue here.
Just use a non-repeating dispatch timer source directly instead of dispatch_after (which is just a convenience wrapper around such a timer source; it doesn't actually enqueue the block onto the queue until the timer goes off).
You can reschedule a pending timer source execution with dispatch_source_set_timer().
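As a rough sketch of how that could look in Swift: the UpdateCoalescer type, its queue label, and the applyPendingUpdates closure are hypothetical stand-ins for the real UI refresh.

import Foundation

// A single non-repeating timer source on a private serial queue; every new
// data-set update just pushes the deadline back, so at most one UI refresh
// fires per burst of updates.
final class UpdateCoalescer {
    private let queue = DispatchQueue(label: "com.example.updates")
    private let timer: DispatchSourceTimer
    private let delay: DispatchTimeInterval = .milliseconds(500)

    init(applyPendingUpdates: @escaping () -> Void) {
        timer = DispatchSource.makeTimerSource(queue: queue)
        timer.setEventHandler {
            DispatchQueue.main.async(execute: applyPendingUpdates)
        }
        // Arm the source, but push the first deadline far into the future.
        timer.schedule(deadline: .distantFuture)
        timer.resume()
    }

    // Call this on every incoming update. Re-calling schedule() replaces the
    // pending deadline, the Swift equivalent of dispatch_source_set_timer().
    func noteDataSetChanged() {
        timer.schedule(deadline: .now() + delay)
    }

    deinit {
        timer.cancel()
    }
}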
You cannot remove or otherwise change an operation enqueued on a dispatch queue. Try using the higher-level NSOperationQueue instead, which supports cancellation.