How to remove a queued block from a GCD dispatch queue?

I am trying to re-schedule a queued block that will handle update operations.
The main goal is to update UI objects (an online-user table, etc.) with the minimum number of UI update requests. (The server sometimes rains down a massive amount of updates, yay!)
For simplicity, the main scenario is:
The dispatch_queue_t instance (the queue that will handle the given UI-updating block) is a serial dispatch queue (a private dispatch queue).
The operation (the UI-updating block) is scheduled with dispatch_after with t amount of delay (instead of updating for each data-set update, collect update requests within t amount of time and perform a single UI update for them).
In case our data set is updated, check whether there is already a scheduled event. If yes, unschedule it from the dispatch_queue_t instance, then re-schedule the same block with t amount of delay.
Also;
t is a small time interval that probably won't be noticed by the user (like 500 ms).
Any alternative approach is welcome.
My motive behind this;
I applied the same logic via Android's Handler (a post & removeCallbacks combination with a Runnable instance) and I hope I can achieve the same on iOS.
Edit:
As @Sven suggested, NSOperationQueue is more suitable for this scenario since it supports cancelling each NSOperation. I skimmed through the documents and found:
Canceling Operations
Once added to an operation queue, an operation object is effectively owned by the queue and cannot be removed. The only way to dequeue an operation is to cancel it. You can cancel a single individual operation object by calling its cancel method or you can cancel all of the operation objects in a queue by calling the cancelAllOperations method of the queue object.
You should cancel operations only when you are sure you no longer need them. Issuing a cancel command puts the operation object into the “canceled” state, which prevents it from ever being run. Because a canceled operation is still considered to be “finished”, objects that are dependent on it receive the appropriate KVO notifications to clear that dependency. Thus, it is more common to cancel all queued operations in response to some significant event, like the application quitting or the user specifically requesting the cancellation, rather than cancel operations selectively.

This can easily be done with GCD as well, no need to reach for the big hammer that is NSOperationQueue here.
Just use a non-repeating dispatch timer source directly instead of dispatch_after (which is just a convenience wrapper around such a timer source, it doesn't actually enqueue the block onto the queue until the timer goes off).
You can reschedule a pending timer source execution with dispatch_source_set_timer().
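For illustration, here is a minimal Swift sketch of that debounce idea, assuming a private serial queue and a 500 ms window; the class name and queue label are made up, and DispatchSourceTimer is the Swift wrapper around the dispatch_source_set_timer() API mentioned above:

    import Foundation

    // Hypothetical coalescer: repeated calls within `delay` collapse into a single UI update.
    final class UIUpdateCoalescer {
        private let queue = DispatchQueue(label: "com.example.ui-coalescer")  // private serial queue
        private lazy var timer: DispatchSourceTimer = {
            let t = DispatchSource.makeTimerSource(queue: queue)
            t.resume()  // armed, but it won't fire until a deadline is set
            return t
        }()

        func scheduleUpdate(after delay: TimeInterval = 0.5, handler: @escaping () -> Void) {
            queue.async {
                self.timer.setEventHandler {
                    DispatchQueue.main.async { handler() }  // perform the actual UI work on main
                }
                // Swift equivalent of dispatch_source_set_timer(): re-arming the pending
                // (non-repeating) timer simply pushes its deadline back.
                self.timer.schedule(deadline: .now() + delay)
            }
        }
    }

Each data-set update would call scheduleUpdate(); only the last call within the window actually reaches the main queue.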

You cannot remove or otherwise change an operation enqueued on a dispatch queue. Try using the higher level NSOperationQueue instead which supports cancellation.

Related

Scheduling execution of blocks in Objective C

I'm creating an application in Objective-C where I have two threads:
The main thread, which is woken up from sleep and is called into asynchronously by a module above it
The callback block (thread), whose execution is asynchronous and depends on an external module "M" sending a notification.
On my main thread, I want to wait for the callback to come in before I start doing my tasks. So, I tried using dispatch_group_enter and dispatch_group_wait(FOREVER) on the main thread while calling dispatch_group_leave on the callback thread. This ensured that when the main thread is the first to execute, things happen as they are supposed to, i.e. the main thread waits for the callback to come in and unblock it before performing its tasks.
However, I'm seeing a race condition where the callback block sometimes gets called first and is stuck on dispatch_group_leave (since at this point the main thread has not called dispatch_group_enter).
Is there a different GCD construct I can use for this purpose?
The “main thread” is the thread that handles UI, system events, notifications, etc. We never block that thread. Blocking it results in a horrible UX where the app appears to freeze, and your app may even be terminated by the “watchdog” process, which kills apps that it thinks are frozen. In some cases, the app will deadlock.
So, if you really mean “main thread”, then the answer is that you would never “wait” on that thread (or otherwise block it). The pattern is to have your background thread do what it needs, and then dispatch model/UI updates back to the main thread with GCD (or submit your notification and let the main thread process it).
If you want a UX where the user is not allowed to interact with the UI while this background process is underway, you would present something in your UI that makes that clear. A common pattern is a dimming/blurring view that covers the whole view, often with a UIActivityIndicatorView (i.e., a spinner), and when the task dispatched to the background queue is done (or have the notification handler do that), you’d then remove that dimming/blurred view and the spinner and update the UI accordingly.
But you never block the main thread by waiting.
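As a rough sketch of that pattern (the fetch and render functions below are placeholders, not APIs from the question):

    import Foundation

    // Placeholder helpers standing in for the app's real work; only the dispatch pattern matters.
    func fetchOnlineUsers() -> [String] { ["alice", "bob"] }                   // simulated long-running fetch
    func render(_ users: [String]) { print("showing \(users.count) users") }   // stand-in for the UI update

    func reloadOnlineUsers() {
        DispatchQueue.global(qos: .userInitiated).async {
            let users = fetchOnlineUsers()      // heavy work, off the main thread
            DispatchQueue.main.async {
                render(users)                   // hop back to the main queue; never block it
            }
        }
    }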

Difference between DispatchQueue.main.async and DispatchQueue.main.sync

I have been using DispatchQueue.main.async for a long time to perform UI related operations.

Swift provides both DispatchQueue.main.async and DispatchQueue.main.sync, and both are performed on the main queue.

Can anyone tell me the difference between them? 

When should I use each?

DispatchQueue.main.async {
    self.imageView.image = imageView
    self.lbltitle.text = ""
}

DispatchQueue.main.sync {
    self.imageView.image = imageView
    self.lbltitle.text = ""
}
Why Concurrency?
As soon as you add heavy tasks to your app, like data loading, they slow down your UI work or even freeze it.
Concurrency lets you perform 2 or more tasks “simultaneously”.
The disadvantage of this approach is thread safety, which is not always easy to control, e.g. when different tasks want to access the same resources, such as changing the same variable from different threads, or when they access resources that are already locked by other threads.
There are a few abstractions we need to be aware of.
Queues.
Synchronous/Asynchronous task performance.
Priorities.
Common troubles.
Queues
A queue is either serial or concurrent, and also either global (system) or private.
With serial queues, tasks finish one by one, while with concurrent queues tasks are performed simultaneously and finish on an unpredictable schedule. The same group of tasks will take more time on a serial queue than on a concurrent queue.
You can create your own private queues (either serial or concurrent) or use the already available global (system) queues.
The main queue is the only serial queue among the global queues.
It is highly recommended not to perform heavy tasks that aren't UI work on the main queue (e.g. loading data from the network); do them on other queues instead to keep the UI unfrozen and responsive to user actions. If we let the UI be changed on other queues, the changes can happen at an unexpected time and speed, some UI elements can be drawn before or after they are needed, and the UI can break. We also need to keep in mind that, since the global queues are system queues, the system may run other tasks on them.
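A quick sketch of those options (the labels are arbitrary):

    // Private serial queue: tasks run strictly one after another.
    let serialQueue = DispatchQueue(label: "com.example.serial")

    // Private concurrent queue: tasks may overlap.
    let concurrentQueue = DispatchQueue(label: "com.example.concurrent", attributes: .concurrent)

    // Global (system) concurrent queue.
    let globalQueue = DispatchQueue.global()

    serialQueue.async { print("runs alone on the serial queue") }
    concurrentQueue.async { print("may run alongside other work") }
    globalQueue.async { print("background work on a system queue") }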
Quality of Service / Priority
Queues also have different QoS (Quality of Service) levels, which set the priority of task execution (from highest to lowest here):
.userInteractive - as for the main queue
.userInitiated - for user-initiated tasks for which the user is waiting for some response
.utility - for tasks which take some time and don't require an immediate response, e.g. working with data
.background - for tasks which aren't related to the visual part and aren't strict about completion time
There is also .default, which doesn't convey QoS information.
If the QoS can't be determined, it will fall between .userInitiated and .utility.
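For example, a QoS can be chosen per queue or per dispatch (the queue label is arbitrary):

    // Choosing a QoS either for a whole private queue or for a single dispatch.
    let dataQueue = DispatchQueue(label: "com.example.data", qos: .utility)

    dataQueue.async {
        // parsing / disk work at .utility priority
    }

    DispatchQueue.global(qos: .userInitiated).async {
        // something the user is actively waiting for
    }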
Tasks can be performed synchronously or asynchronously.
A synchronous function returns control to the current queue only after the task is finished. It blocks the queue and waits until the task is finished.
An asynchronous function returns control to the current queue right after the task has been sent off to be performed on the other queue. It doesn't wait until the task is finished and doesn't block the queue.
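A small sketch of the difference (the queue label is arbitrary):

    let worker = DispatchQueue(label: "com.example.worker")

    worker.sync {
        print("sync block")          // the caller is blocked until this runs
    }
    print("after sync")              // always printed after "sync block"

    worker.async {
        print("async block")         // runs on the worker queue, sometime later
    }
    print("after async")             // printed without waiting; may appear before "async block"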
Common Troubles.
The most common mistakes programmers make when designing concurrent apps are the following:
Race condition - caused when the app's behavior depends on the order in which parts of the code execute.
Priority inversion - when higher-priority tasks wait for lower-priority tasks to finish because some resources are blocked.
Deadlock - when several queues wait forever for resources (variables, data, etc.) already blocked by some of those same queues.
NEVER call the sync function on the main queue from the main queue.
If you do, the main queue will block, waiting for the dispatched task to be completed, but the task will never even be able to start because the queue it needs is already blocked. This is called a deadlock.
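The classic example, in a couple of lines:

    // Running this from the main queue deadlocks: main is blocked waiting for the
    // block, and the block is waiting for main to become free.
    DispatchQueue.main.sync {
        print("never reached when called from the main queue")
    }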
When to use sync?
When we need to wait until the task is finished, e.g. when we are making sure that some function/method is not called a second time before the first call has completely finished. Here's some code for this concern: How to find out what caused error crash report on IOS device?
When you use async, it lets the calling queue move on without waiting until the dispatched block is executed. In contrast, sync makes the calling queue stop and wait until the work you've dispatched in the block is done. Therefore sync is prone to deadlocks. Try running DispatchQueue.main.sync from the main queue and the app will freeze, because the calling queue will wait until the dispatched block finishes, but the block can't even start (because the queue is stopped and waiting).
When to use sync? When you need to wait for something done on a DIFFERENT queue and only then continue working on your current queue
Example of using sync:
On a serial queue you could use sync as a mutex in order to make sure that only one thread is able to perform the protected piece of code at the same time.
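A minimal sketch of that mutex idea (the class and label are hypothetical):

    import Foundation

    // A tiny counter protected by a private serial queue; `sync` acts like a mutex.
    final class Counter {
        private var value = 0
        private let lockQueue = DispatchQueue(label: "com.example.counter")  // serial by default

        func increment() {
            lockQueue.sync { value += 1 }        // only one caller mutates `value` at a time
        }

        var current: Int {
            lockQueue.sync { value }             // reads go through the same queue
        }
    }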
DispatchQueue.<>.sync vs DispatchQueue.<>.async
GCD allows you to execute a task synchronously or asynchronously.
A synchronous (block and wait) function returns control only when the task has completed.
An asynchronous (dispatch and proceed) function returns control immediately, dispatching the task to an appropriate queue but not waiting for it to complete.
sync or async methods have no effect on the queue on which they are called.
sync blocks the thread from which it is called, not the queue on which it is called. It is a property of the DispatchQueue itself that decides whether the queue waits for the task to finish (serial queue) or can run the next task before the current one finishes (concurrent queue).
So even though DispatchQueue.main.async is an async call, a heavy operation added to it can freeze the UI, because its operations are executed serially on the main thread. If this method is called from a background thread, control returns to that thread instantly even while the UI appears frozen, because the async call is made on DispatchQueue.main.

OperationQueue is not removing the operations from the queue when I call cancelAllOperations

I have an operation queue on which I am calling cancelAllOperations, but if I check
operationQueue.operationCount it is not returning zero.
I am overriding the cancel method and everything is working, but operationCount is not zero. Is that expected?
See Apple's API Document for the NSOperation cancel method (emphasis mine):
This method does not force your operation code to stop. Instead, it updates the object’s internal flags to reflect the change in state. If the operation has already finished executing, this method has no effect. Canceling an operation that is currently in an operation queue, but not yet executing, makes it possible to remove the operation from the queue sooner than usual.
The cancel method will either mark the operation as 'ready' if it is in a queue, or mark it finished immediately if it is not in a queue. Since your operations are in a queue, this means the cancelled operations will start 'sooner'. If sub-classed correctly, your cancelled operations should immediately mark itself finished and generate its final KVO notifications. Only then will your operations be dequeued.
See also Responding to the Cancel Command for more information about cancelling custom operations.
If you need to know when the operation queue has 0 operations left in its operations array property, you might consider registering the queue's owner as an observer for the operationCount key path using KVO. Then when you are notified that the value of that property has changed, you can check to see if the value is 0 and then perform any logic required. Note that NSOperations send their KVO notifications on the thread they are operating on, which is usually going to be a background thread if they are run from an NSOperationQueue - this means that if you need to perform any UI/blocking logic, you will need to ensure it is run on the main thread.
If you decide to add an observer using KVO, make sure that you balance that by removing the observer later. In fact, if you do decide to leverage KVO, I highly suggest you digest all of the KVO programming guide and read through the KVO API docs, anything short of due-diligence when working with that framework can result in undefined behaviors, memory leaks, or even bad_access crashes.
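As a sketch of that observation idea, here is a keypath/block-based KVO variant (rather than the classic addObserver API); the NSKeyValueObservation token removes the observer automatically when it is deallocated, which takes care of the balancing mentioned above. The names here are illustrative only:

    import Foundation

    final class QueueWatcher {
        private let queue: OperationQueue
        private var observation: NSKeyValueObservation?

        init(queue: OperationQueue) {
            self.queue = queue
            // KVO change handlers may arrive on a background thread, so hop to main for UI work.
            observation = queue.observe(\.operationCount, options: [.new]) { _, change in
                if change.newValue == 0 {
                    DispatchQueue.main.async {
                        print("queue drained")   // safe place for UI / completion logic
                    }
                }
            }
        }
    }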

Understanding dispatch_queues and synchronous/asynchronous dispatch

I'm an Android engineer trying to port some iOS code that uses 5 SERIAL dispatch queues. I want to make sure I'm thinking about things the right way.
dispatch_sync to a SERIAL queue is basically using the queue as a synchronized queue - only one thread may access it, and the block that gets executed can be thought of as a critical region. It happens immediately on the current thread - it's the equivalent of
    get_semaphore()
    queue.pop()
    do_block()
    release_semaphore()
dispatch_async to a serial queue performs the block on another thread and lets the current thread return immediately. However, since it's a serial queue, it promises that only one of these asynchronous blocks will execute at a time (the next call to dispatch_async will wait until all the others are finished). That block can also be thought of as a critical region, but it will occur on another thread. So it's the same code as above, but handed to a worker thread first.
Am I off in any of that, or did I figure it out correctly?
This feels like an overly complicated way of thinking of it and there are lots of little details of that description that aren't quite right. Specifically, "it happens immediately on the current thread" is not correct.
First, let's step back: The distinction between dispatch_async and dispatch_sync is merely whether the current thread waits for it or not. But when you dispatch something to a serial queue, you should always imagine that it's running on a separate worker thread of GCD's own choosing. Yes, as an optimization, sometimes dispatch_sync will use the current thread, but you are in no way guaranteed of this fact.
Second, when you discuss dispatch_sync, you say something about it running "immediately". But it's by no means assured to be immediate. If a thread does dispatch_sync to some serial queue, then that thread will block until (a) any block currently running on that serial queue finishes; (b) any other queued blocks for that serial queue run and complete; and (c) obviously, the block that the thread itself dispatched runs and completes.
Now, when you use a serial queue for synchronization, e.g. for thread-safe access to some object in memory, that synchronization process is often very quick, so the waiting thread will generally be blocked for a negligible amount of time while its dispatched block (and any prior dispatched blocks) finish. But in general, it's misleading to say that it will run immediately. (If it could always run immediately, then you wouldn't need a queue to synchronize access.)
Now your question talks about a "critical region", by which I assume you mean some bit of code that, in order to ensure thread safety or for some other reason like that, must be synchronized. So, when running this code to be synchronized, the only question re dispatch_sync vs dispatch_async is whether the current thread must wait. A common pattern, for example, is to say that one may dispatch_async writes to some model (because there's no need to wait for the model to update before proceeding), but dispatch_sync reads from some model (because you obviously don't want to proceed until the read value is returned).
A further optimization of that sync/async pattern is the reader-writer pattern, where concurrent reads are permissible but concurrent writes are not. Thus, you'll use a concurrent queue, dispatch_barrier_async the writes (achieving serial-like behavior for the writes), but dispatch_sync the reads (enjoying concurrent performance with respect to other read operations).
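A minimal sketch of that reader-writer pattern (the type and label are hypothetical):

    import Foundation

    // Reader-writer sketch: concurrent reads, exclusive (barrier) writes.
    final class UserStore {
        private var users: [String: String] = [:]
        private let accessQueue = DispatchQueue(label: "com.example.users", attributes: .concurrent)

        func user(for id: String) -> String? {
            accessQueue.sync { users[id] }            // reads may run concurrently with each other
        }

        func setUser(_ name: String, for id: String) {
            accessQueue.async(flags: .barrier) {      // the barrier waits for in-flight reads,
                self.users[id] = name                 // then runs alone, like a serial write
            }
        }
    }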
To pick nits, dispatch_sync doesn't necessarily run the code on the current thread, but if it doesn't, it still blocks the current thread until the task completes. The distinction is only potentially important if you're relying on thread IDs or thread-local storage.
But otherwise, yes, unless I missed something subtle.

How to stop/cancel/suspend/resume tasks on GCD queue

How does one stop background queue operations? I want to stop them on some screens in our app, and on some screens they should resume automatically. Also, how does one pass a queue around in iOS?
I mean, while the user is browsing the app we run background work on a dispatch_queue_t, but it is never stopped or resumed in the code. So how does one suspend and resume a queue?
To suspend a dispatch queue, it is simply queue.suspend() (dispatch_suspend(queue) in Objective-C). That doesn't affect any tasks currently running, but merely prevents new tasks from starting on that queue. Also, you obviously only suspend queues that you created (not global queues, not main queue).
To resume a dispatch queue, it is queue.resume() (or dispatch_resume(queue) in Objective-C). There's no concept of “auto resume”, so you'd just have to manually resume it when appropriate.
To pass a dispatch queue around, you simply pass the DispatchQueue object that you created (or the dispatch_queue_t object that you created when you called dispatch_queue_create() in Objective-C).
In terms of canceling tasks queued on dispatch queues, this capability was introduced in iOS 8. One can call item.cancel() on a DispatchWorkItem (dispatch_block_cancel(block) on a dispatch_block_t object in Objective-C). This cancels queued blocks/items that have not started, but does not stop ones that are already underway. If you want to be able to interrupt a dispatched block/item, you have to periodically check item.isCancelled (or dispatch_block_testcancel() in Objective-C).
See https://stackoverflow.com/a/38372384/1271826 for examples on canceling dispatch work items.
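A short sketch of that idea, assuming an arbitrary queue label (see the linked answer for fuller examples):

    let queue = DispatchQueue(label: "com.example.tasks")

    let item = DispatchWorkItem {
        print("expensive work")               // long-running work would also periodically check isCancelled
    }

    queue.asyncAfter(deadline: .now() + 0.5, execute: item)

    // Cancelling before the item starts means it never runs at all;
    // cancelling a running item only sets its isCancelled flag.
    item.cancel()
    print(item.isCancelled)                   // true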
If you want to cancel tasks, you might also consider using operation queues, i.e. OperationQueue (NSOperationQueue in Objective-C). Its cancelable operations have been around for a while and you're likely to find lots of examples online. It also supports constraining the degree of concurrency with maxConcurrentOperationCount (whereas with dispatch queues you can only choose between serial and concurrent, and controlling concurrency more than that requires a tiny bit of effort on your part).
If using operation queues, you suspend and resume by changing the suspended property of the queue. And to pass it around, you just pass the NSOperationQueue object you instantiated.
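For example (hypothetical queue and operations; isSuspended is the Swift spelling of that property):

    let downloads = OperationQueue()
    downloads.maxConcurrentOperationCount = 2        // cap how many operations run at once

    downloads.addOperation { /* fetch thumbnail A */ }
    downloads.addOperation { /* fetch thumbnail B */ }

    // Pausing: operations already running finish, queued ones won't start...
    downloads.isSuspended = true
    // ...until the queue is resumed.
    downloads.isSuspended = false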
Having said all of that, I'd suggest you expand your question to elaborate what sort of tasks are running in the background and articulate why you want to suspend them. There might be better approaches than suspending the background queue.
In your comments, you mention that you were using NSTimer, a.k.a. Timer in Swift. If you want to stop a timer, call timer.invalidate(). Create a new timer when you want to start it again.
Or if the timer is really running on a background thread, GCD “dispatch source timers” do this far more gracefully. With a GCD timer, you can suspend/resume it just like you suspend/resume a queue, just using the timer object instead of the queue object.
You can't pause / cancel when using a GCD queue. If you need that functionality (and in a lot of general cases even if you don't) you should be using the higher level API - NSOperationQueue. This is built on top of GCD but it gives you the ability to control how many things are executing at the same time, suspend processing of the queue and to cancel individual / all operations.

Resources