Synchronise async tasks in a serial queue - ios

let serialQueue = DispatchQueue(label: "Serial Queue")

func performCriticalSectionTask() {
    serialQueue.async {
        performLongRunningAsyncTask()
    }
}

func performLongRunningAsyncTask() {
    // some long-running asynchronous task
}
The function performCriticalSectionTask() can be called many times from different places.
I want only one instance of this function to run at a time, so I placed the critical section inside the serial queue's async block.
But the problem here is that the critical section itself is performLongRunningAsyncTask(), which returns immediately, so the serial queue will not wait for the current task to complete before starting the next one.
How can I solve this problem?

If performLongRunningAsyncTask only ran on one thread, it would only be called once at a time. In your case it delegates its work to another thread, so wrapping the call in a serial queue doesn't help: the actual work ends up on another thread anyway.
You could add checks in the method itself; the simplest way is to add a boolean flag. (Or you could add these checks in the class that calls this method, together with a completion handler.)
Other options are dispatch groups, semaphores, or locks.
If you still need the call to be executed later rather than dropped, use a dispatch group, an OperationQueue, or a semaphore (a sketch follows the code below).
var isAlreadyRunning = false

func performLongRunningAsyncTask() {
    var shouldRun = false
    serialQueue.sync {
        if !isAlreadyRunning {
            isAlreadyRunning = true
            shouldRun = true
        }
    }
    guard shouldRun else { return }   // a plain `return` inside the sync closure would not exit the function

    asyncTask { result in
        self.serialQueue.sync {
            self.isAlreadyRunning = false
        }
    }
}
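For the "executed later" option mentioned above, here is a minimal sketch using an OperationQueue with maxConcurrentOperationCount = 1. It assumes a variant of performLongRunningAsyncTask that takes a completion handler; that parameter is an assumption, not part of the original question.

let operationQueue: OperationQueue = {
    let queue = OperationQueue()
    queue.maxConcurrentOperationCount = 1   // serial: only one operation runs at a time
    return queue
}()

func performCriticalSectionTask() {
    operationQueue.addOperation {
        let semaphore = DispatchSemaphore(value: 0)
        performLongRunningAsyncTask {        // assumed completion-handler variant
            semaphore.signal()
        }
        semaphore.wait()                     // keep this operation alive until the async work finishes
    }
}

Because each operation blocks until the async work signals the semaphore, the queue cannot start the next performCriticalSectionTask until the previous one has fully completed.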

Related

How to use gcd barrier in iOS?

I want to use a GCD barrier to implement a thread-safe store object, but it does not work correctly: the setter sometimes runs earlier than the getters. What's wrong with it?
https://gist.github.com/Terriermon/02c446d1238ad6ec1edb08b607b1bf05
class MutiReadSingleWriteObject<T> {

    let queue = DispatchQueue(label: "com.readwrite.concurrency", attributes: .concurrent)
    var _object: T?

    var object: T? {
        @available(*, unavailable)
        get {
            fatalError("You cannot read from this object.")
        }
        set {
            queue.async(flags: .barrier) {
                self._object = newValue
            }
        }
    }

    func getObject(_ closure: @escaping (T?) -> Void) {
        queue.async {
            closure(self._object)
        }
    }
}
func testMutiReadSingleWriteObject() {
    let store = MutiReadSingleWriteObject<Int>()
    let queue = DispatchQueue(label: "com.come.concurrency", attributes: .concurrent)

    for i in 0...100 {
        queue.async {
            store.getObject { obj in
                print("\(i) -- \(String(describing: obj))")
            }
        }
    }

    print("pre --- ")
    store.object = 1
    print("after ---")

    store.getObject { obj in
        print("finish result -- \(String(describing: obj))")
    }
}
Whenever you create a DispatchQueue, whether serial or concurrent, it schedules and runs its work items independently of the code that submits them (strictly speaking, GCD draws its worker threads from a shared pool rather than giving every queue its own dedicated thread, but the effect here is the same). This means that whenever you instantiate a MutiReadSingleWriteObject<T>, its queue synchronizes your setter and getObject method separately from the calling code.
However: this also means that in your testMutiReadSingleWriteObject method, the queue that you use to execute the 100 getObject calls in a loop runs its work separately too. The method therefore has 3 separate execution contexts to coordinate between:
The thread that testMutiReadSingleWriteObject is called on (likely the main thread),
The work scheduled on store.queue, and
The work scheduled on queue
These all run concurrently with one another, which means that an async dispatch call like
queue.async {
    store.getObject { ... }
}
will enqueue a work item to run on queue at some point, and keep executing code on the current thread.
This means that by the time you get to running store.object = 1, you are guaranteed to have scheduled 100 work items on queue, but crucially, how and when those work items actually start executing is up to the queue, the CPU scheduler, and other environmental factors. While somewhat rare, this does mean that there's a chance that none of those tasks have gotten to run before the assignment of store.object = 1, which means that by the time they do happen, they'll see a value of 1 stored in the object.
In terms of ordering, you might see a combination of:
100 getObject calls, then store.object = 1
N getObject calls, then store.object = 1, then (100 - N) getObject calls
store.object = 1, then 100 getObject calls
Case (2) can actually prove the behavior you're looking to confirm: all of the calls before store.object = 1 should return nil, and all of the ones after should return 1. If you have a getObject call after the setter that returns nil, you'd know you have a problem. But the timing here is pretty much impossible to control.
In terms of how to address the timing issue here: for this method to be meaningful, you'll need to remove one of these layers of concurrency so that all of your calls to store are coordinated from a single place.
This can be done by either:
Dropping queue, and just accessing store on the thread that the method was called on. This does mean that you cannot call store.getObject asynchronously
Making all calls through queue, whether sync or async. This gives you the opportunity to better control exactly how the store methods are called
These two approaches have different semantics, so it's up to you to decide what you want this method to be testing. Do you want to be guaranteed that all 100 calls will go through before store.object = 1 is reached? If so, you can get rid of queue entirely, because you don't actually want those getters to be called asynchronously. Or, do you want to try to cause the getters and the setter to overlap in some way? Then stick with queue, but it'll be more difficult to ensure you get meaningful results, because you aren't guaranteed to have stable ordering with the concurrent calls.
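For reference, a more common shape of this pattern (an assumption on my part, not code from the question) uses synchronous reads on the concurrent queue together with barrier writes, so the caller gets the value back directly and the ordering relative to the surrounding code is easy to reason about:

final class MultiReadSingleWriteBox<T> {
    private let queue = DispatchQueue(label: "com.readwrite.concurrency", attributes: .concurrent)
    private var _value: T?

    var value: T? {
        get { queue.sync { _value } }                                     // concurrent reads
        set { queue.async(flags: .barrier) { self._value = newValue } }  // exclusive write
    }
}

With a synchronous getter, writing box.value = 1 and then reading box.value from the same thread is guaranteed to observe the write, because the read is submitted after the barrier block and cannot start until the barrier has finished.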

GCD serial async queue vs serial sync queue nested in async

I have to protect a critical section of my code.
I don't want the caller to be blocked by the function, which can be time consuming, so I'm creating a serial queue with background QoS and then dispatching asynchronously:
private let someQueue = DispatchQueue(label: "\(type(of: self)).someQueue", qos: .background)

func doSomething() {
    self.someQueue.async {
        // critical section
    }
}
From my understanding, the function will return immediately on the calling thread without blocking.
I've also seen, elsewhere, code that first dispatches asynchronously on the global queue and then synchronously on a serial queue:
private let someQueue2 = DispatchQueue(label: "\(type(of: self)).someQueue2")

func doSomething() {
    DispatchQueue.global(qos: .background).async {
        self.someQueue2.sync {
            // critical section
        }
    }
}
What's the difference between the two approaches?
Which is the right approach?
In the first approach, the calling thread is not blocked, and the task (the critical section) passed in the async block is executed in the background.
In the second approach, the calling thread is not blocked either, but a global-queue worker thread sits waiting while the sync block (the critical section) is executed on the serial queue.
I don't know what you do in your critical section, but the first approach seems the better one. Note that the background QoS is quite slow; consider using the default QoS for your queue unless you know what you are doing. Also note that, by convention, you use your bundle identifier as the label for your queue. So something like this:
private let someQueue = DispatchQueue(label: "\(Bundle.main.bundleIdentifier ?? "").\(type(of: self)).someQueue")
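If the caller ever needs to know when the critical section has finished, the first approach can also be extended with a completion handler. This is a minimal sketch; the completion parameter is an assumption, not part of the original question:

private let someQueue = DispatchQueue(label: "\(Bundle.main.bundleIdentifier ?? "").someQueue")

func doSomething(completion: @escaping () -> Void) {
    someQueue.async {
        // critical section
        DispatchQueue.main.async(execute: completion)   // report completion on the main queue
    }
}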

Completion block method vs. DispatchQueue

I have implemented the following completion-block approach: once the block completes, I update the UI and my objects accordingly.
func doPaging() {
    fetchProducts(page: pageNumber, completion: { success in
        if let products = success as? Products {
            DispatchQueue.main.async {
                self.products.append(contentsOf: products)
                self.isWating = false
                self.productTableView.reloadData()
            }
        }
    })
}

func fetchProducts(page: Int, completion: @escaping ((AnyObject) -> Void)) {
    // URLSession call here
}
However, in the following approach it is clear that the REST call will happen on a background thread, and once it completes, the UI and objects are updated.
func doPaging() {
    DispatchQueue.global(qos: .background).async {
        // Background thread
        self.fetchProducts(page: self.pageNumber)
        DispatchQueue.main.async {
            self.pageNumber += 1
            self.productTableView.reloadData()
            self.isWating = false
        }
    }
}

func fetchProducts(page: Int) {
    // URLSession call here
}
I am confused between completion block method vs. DispatchQueue.
Which one is recommended?
In the first approach, you call a method fetchProducts() which internally uses NSURLSession. A REST call using NSURLSession runs in the background, and when the REST call finishes, the task's completion is called. In that completion, you call the completion handler of fetchProducts(). This approach seems fine to me.
In the second approach, you use the global background queue and asynchronously call the NSURLSession APIs (I assume so), and don't wait for the call to complete. The block dispatched to the main queue will be called almost immediately, and at that point the NSURLSession task may or may not have completed.
So, this approach is problematic.
The first method seems OK as long as fetchProducts runs asynchronously. In fetchProducts(), if you call the completion block on the main queue, you won't even need to hop to the main queue again in the doPaging() method.
In your second method, you are calling fetchProducts() on a global (concurrent) queue. Although global queues start each task in the order it was added to the queue, they run tasks concurrently. And since fetchProducts() takes time, the code block that contains self.pageNumber += 1 may execute before fetchProducts' URLSession task has even started. So this approach won't work.
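To illustrate the first point, here is a minimal sketch of fetchProducts built on URLSession that calls its completion handler on the main queue (the URL and the decoding step are assumptions; decoding the response into your Products type is left as a comment):

func fetchProducts(page: Int, completion: @escaping (AnyObject) -> Void) {
    let url = URL(string: "https://example.com/products?page=\(page)")!
    URLSession.shared.dataTask(with: url) { data, _, error in
        guard let data = data, error == nil else { return }
        // ... decode `data` into your Products value here; the raw data is just a stand-in ...
        let products: AnyObject = data as AnyObject
        DispatchQueue.main.async {
            completion(products)   // the caller no longer needs to dispatch to main itself
        }
    }.resume()
}

With this, the DispatchQueue.main.async wrapper inside doPaging() becomes unnecessary.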
Completion block and Dispatch Queue are two different concepts.
A completion block is used when your function performs an action that takes time to run, and you need to come back and run some code even after the function has "ended". For example,
func networkCall(foo: Int, completion: @escaping (_ result: Bool) -> Void) { ... }
func otherFunc() { ... }

func A() {
    networkCall(foo: 1) { (success) in
        // handle your stuff
    }
    otherFunc()
}
When you run A(), it first runs networkCall(); however, networkCall() may take time to run the network request, and the app moves on to run otherFunc(). When the network request is done, networkCall() calls its completion block so that A() can handle the result.
A dispatch queue is Apple's safe encapsulation of threading. A network request can be performed on the main thread as well, but it will block other work.
A common practice is to run the network request on a background queue (DispatchQueue.global(qos: .background).async) and call the completion block after it finishes. If anything needs to be updated on the main thread, like the UI, do it inside DispatchQueue.main.async.
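A minimal sketch of that pattern, reusing the networkCall signature from above (the body is a placeholder, not a real request):

func networkCall(foo: Int, completion: @escaping (_ result: Bool) -> Void) {
    DispatchQueue.global(qos: .background).async {
        // ... perform the request / heavy work off the main thread ...
        let success = true                  // placeholder result
        DispatchQueue.main.async {
            completion(success)             // hand the result back on the main thread for UI updates
        }
    }
}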

Synchronization of multiple tasks on single thread

How can I prevent a block of code from being repeatedly entered from the same thread?
Suppose, I have the next code:
func sendAnalytics() {
    // some synchronous work
    asyncTask() { _ in
        completion()
    }
}
I want to prevent any thread from entering "// some synchronous work" before completion has been called.
objc_sync_enter(self)
objc_sync_exit(self)
seem to only prevent this code from being entered by multiple threads, and don't save me from re-entering it from a single thread. Is there a way to do this correctly, without a custom solution?
By repeatedly accessing, I mean calling sendAnalytics from one thread multiple times. Suppose I have a for loop like this:
for i in 0...10 {
    sendAnalytics()
}
Each subsequent call won't wait for the completion inside sendAnalytics to be called (obviously). Is there a way to make the next calls wait until completion fires? Or is the whole way of thinking wrong, and should I solve this problem higher up, at the for-loop body?
You can use a DispatchSemaphore to ensure that one call completes before the next can start
let semaphore = DispatchSemaphore(value: 1)

func sendAnalytics() {
    self.semaphore.wait()
    // some synchronous work
    asyncTask() { _ in
        completion()
        self.semaphore.signal()
    }
}
The second call to sendAnalytics will block until the first asyncTask is complete. You should be careful not to block the main queue as that will cause your app to become non-responsive. It is probably safer to dispatch the sendAnalytics call onto its own serial dispatch queue to eliminate this risk:
let semaphore = DispatchSemaphore(value: 1)
let analyticsQueue = DispatchQueue(label: "analyticsQueue")

func sendAnalytics() {
    analyticsQueue.async {
        self.semaphore.wait()
        // some synchronous work
        asyncTask() { _ in
            completion()
            self.semaphore.signal()
        }
    }
}
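An equivalent sketch using a DispatchGroup instead of a semaphore (asyncTask and completion are placeholders from the question): enter the group before the work, leave it from the async callback, and wait on the serial queue so the next call cannot start early.

let analyticsQueue = DispatchQueue(label: "analyticsQueue")
let group = DispatchGroup()

func sendAnalytics() {
    analyticsQueue.async {
        self.group.enter()
        // some synchronous work
        asyncTask { _ in
            completion()
            self.group.leave()
        }
        self.group.wait()   // blocks this serial queue until the async task has finished
    }
}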

Is it normal that CPU usage exceeds 100% using dispatch async in Xcode 7

I'm a beginner in Swift 2, and I'm trying to make my program block, showing only a progress spinner, until some operation finishes. I put this code snippet in a button with the "touch up inside" action. My problem is that while debugging in Xcode 7, CPU usage jumps to 190% once I tap my button and stays high until the flag changes its value. Is it normal for CPU usage to jump like that? Also, is it good practice to use the following snippet, or should I use sleep or some other mechanism inside my infinite loop?
let queue2 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)

dispatch_async(self.queue2) { () -> Void in
    while(flag == true)
    {
        // wait until flag is set to false from the previous func
    }
    self.dispatch_main({
        // continue after the flag became false
    })
}
This is a very economical completion handler
func test(completion: () -> ())
{
    // do hard work
    completion()
}

let queue2 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(queue2) {
    test() {
        print("completed")
    }
}
or with additional dispatch to the main queue to update the UI
let queue2 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(queue2) {
    test() {
        print("completed")
        dispatch_async(dispatch_get_main_queue()) {
            // update UI
        }
    }
}
This is a totally wrong approach, as you are busy-waiting in a while loop. You should use a completion handler to achieve this kind of thing.
Completion handlers are callbacks that allow a client to perform some action when a framework method or function completes its task. Often the client uses a completion handler to free state or update the user interface. Several framework methods let you implement completion handlers as blocks (instead of, say, delegation methods or notification handlers).
Refer to Apple's documentation for more details.
I suppose you have some sort of class that manages those operations that need to finish.
When your operations finish, you can communicate that via a completion handler or delegation. In the meantime, you can disable user interaction on your UI until those operations end.
If you provide more information about your background operations, I can add some code snippets.
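As a general illustration of that idea, here is a minimal sketch in current Swift syntax (the view controller, the spinner outlet, and doHardWork are all assumptions, not code from the question): run the work off the main thread, keep the UI disabled, and re-enable it from the completion handler instead of spinning in a loop.

import UIKit

final class WorkViewController: UIViewController {
    @IBOutlet private var spinner: UIActivityIndicatorView!   // assumed outlet

    private func doHardWork(completion: @escaping () -> Void) {
        DispatchQueue.global(qos: .userInitiated).async {
            // ... the long-running operation ...
            DispatchQueue.main.async(execute: completion)      // report back on the main thread
        }
    }

    @IBAction func buttonTapped(_ sender: UIButton) {
        sender.isEnabled = false
        spinner.startAnimating()
        doHardWork {
            self.spinner.stopAnimating()
            sender.isEnabled = true    // no polling loop, so no wasted CPU
        }
    }
}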
