How to wait for the first dispatch to finish execution - iOS

I have two dispatch_async() calls like this:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
/* Code here */
}
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
/* Code here */
}
I want the second dispatch to wait until the first one finishes its execution. How can I do that?
Thanks in advance

I'll give a few solutions, in order of increasing complexity:
1
The simplest way is to include both code blocks in the same async call:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
// code block 1
// code block 2
}
2
If you don't know precisely when they will run (for example, code block 1 is triggered when the user presses a button and code block 2 runs when the user presses another button), use a serial queue:
let serialQueue = dispatch_queue_create("mySerialQueue", DISPATCH_QUEUE_SERIAL)
dispatch_async(serialQueue) {
// code block 1
}
dispatch_async(serialQueue) {
// code block 2
}
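If you are on Swift 3 or later (an assumption; the snippets above use the older GCD API), the equivalent serial queue looks like this:
let serialQueue = DispatchQueue(label: "mySerialQueue") // label-only init creates a serial queue
serialQueue.async {
// code block 1
}
serialQueue.async {
// code block 2
}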
3
If your code blocks run asynchronously, e.g. first making a web service call to authenticate and then making a second call to get the user's profile, you have to implement the waiting yourself:
let groupID = dispatch_group_create()
let task1 = session.dataTaskWithRequest(request1) { data, response, error in
// handle the response...
// Tell Grand Central Dispatch that the request is done
dispatch_group_leave(groupID)
}
let task2 = session.dataTaskWithRequest(request2) { data, response, error in
// handle the response...
}
dispatch_async(dispatch_get_global_queue(Int(QOS_CLASS_BACKGROUND.rawValue), 0)) {
dispatch_group_enter(groupID) // Tell GCD task1 is starting
task1.resume()
dispatch_group_wait(groupID, DISPATCH_TIME_FOREVER) // Wait until task1 is done
task2.resume()
}
4
For anything more complicated, I strongly suggest you learn NSOperationQueue. There's a WWDC session on it.
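As a rough illustration of that approach (a minimal sketch in the same older Swift syntax as above; the operation contents are placeholders), operation dependencies let the second block wait for the first without any manual signalling:
let operationQueue = NSOperationQueue()
let firstOperation = NSBlockOperation {
// code block 1
}
let secondOperation = NSBlockOperation {
// code block 2
}
// secondOperation will not start until firstOperation has finished
secondOperation.addDependency(firstOperation)
operationQueue.addOperations([firstOperation, secondOperation], waitUntilFinished: false)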

Related

API calls block UI thread in Swift

I need to sync a web database into my Core Data store, for which I perform service API calls. I am using Alamofire with Swift 3. There are 23 API calls, producing nearly 24k rows across different Core Data entities.
My problem: these API calls block the UI for a minute, which is a long time for a user to wait.
I tried using DispatchQueue and performing the task on a background thread, but nothing worked. This is how I tried it:
let dataQueue = DispatchQueue.init(label: "com.app.dataSyncQueue")
dataQueue.async {
DataSyncController().performStateSyncAPICall()
DataSyncController().performRegionSyncAPICall()
DataSyncController().performStateRegionSyncAPICall()
DataSyncController().performBuildingRegionSyncAPICall()
PriceSyncController().performBasicPriceSyncAPICall()
PriceSyncController().performHeightCostSyncAPICall()
// Apis which will be used in later screens are called in background
self.performSelector(inBackground: #selector(self.performBackgroundTask), with: nil)
}
An API call from DataSyncController:
func performStateSyncAPICall() -> Void {
DataSyncRequestManager.fetchStatesDataWithCompletionBlock {
success, response, error in
self.apiManager.didStatesApiComplete = true
}
}
DataSyncRequestManager Code:
static func fetchStatesDataWithCompletionBlock(block: @escaping requestCompletionBlock) {
if appDelegate.isNetworkAvailable {
Util.setAPIStatus(key: kStateApiStatus, with: kInProgress)
DataSyncingInterface().performStateSyncingWith(request: DataSyncRequest().createStateSyncingRequest(), withCompletionBlock: block)
} else {
//TODO: show network failure error
}
}
DataSyncingInterface Code:
func performStateSyncingWith(request: Request, withCompletionBlock block: @escaping requestCompletionBlock)
{
self.interfaceBlock = block
let apiurl = NetworkHttpClient.getBaseUrl() + request.urlPath!
Alamofire.request(apiurl, parameters: request.getParams(), encoding: URLEncoding.default).responseJSON { response in
guard response.result.isSuccess else {
block(false, "error", nil )
return
}
guard let responseValue = response.result.value else {
block(false, "error", nil)
return
}
block(true, responseValue, nil)
}
}
I know many similar questions have already been posted on Stack Overflow, and most of them suggest using GCD or an operation queue, but trying DispatchQueue didn't work for me.
Am I doing something wrong?
How can I avoid blocking the UI and perform the API calls simultaneously?
You can do this to run on a background thread:
DispatchQueue.global(qos: .background).async {
// Do any processing you want.
DispatchQueue.main.async {
// Go back to the main thread to update the UI.
}
}
DispatchQueue manages the execution of work items. Each work item submitted to a queue is processed on a pool of threads managed by the system.
I usually use NSOperationQueue with Alamofire, but the concepts are similar. When you set up an async queue, you allow work to be performed independently of the main (UI) thread, so that your app doesn't freeze (refuse user input). The work will still take however long it takes, but your program doesn't block while waiting for it to finish.
You really have only put one item into the queue.
You are adding to the queue only once, so all those "perform" calls wait for the previous one to finish. If it is safe to run them concurrently, you need to add each of them to the queue separately. There's more than one way to do this, but the bottom line is each time you call .async {} you are adding one item to the queue.
dataQueue.async {
DataSyncController().performStateSyncAPICall()
}
dataQueue.async {
DataSyncController().performRegionSyncAPICall()
}
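Note that a DispatchQueue created with only a label is serial, so even with separate .async calls the blocks above still run one after another. If the goal is to run the calls at the same time, a concurrent queue plus a DispatchGroup can start them in parallel and hop back to the main queue when the submitted blocks return (a sketch, assuming the controller calls are safe to run concurrently; the method names are taken from the question and the queue label is a placeholder):
let group = DispatchGroup()
let concurrentQueue = DispatchQueue(label: "com.app.dataSyncQueue.concurrent", attributes: .concurrent)
concurrentQueue.async(group: group) {
DataSyncController().performStateSyncAPICall()
}
concurrentQueue.async(group: group) {
DataSyncController().performRegionSyncAPICall()
}
concurrentQueue.async(group: group) {
PriceSyncController().performBasicPriceSyncAPICall()
}
group.notify(queue: .main) {
// All submitted blocks have returned; update the UI here.
}
Keep in mind the group only tracks the blocks themselves: since each perform... call kicks off an asynchronous Alamofire request and returns immediately, you would need group.enter()/group.leave() inside the completion handlers to wait for the actual responses, as later answers on this page show.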

Synchronization of multiple tasks on a single thread

How can I prevent a block of code to be repeatedly accessed from the same thread?
Suppose, I have the next code:
func sendAnalytics() {
// some synchronous work
asyncTask() { _ in
completion()
}
}
I want to prevent any thread from accessing "// some synchronous work" before completion has been called.
objc_sync_enter(self)
objc_sync_exit(self)
seem to only prevent access to this code from multiple threads and don't save me from repeated access from a single thread. Is there a way to do this correctly, without using custom solutions?
By repeatedly accessing, I mean calling sendAnalytics from one thread multiple times. Suppose I have a for loop like this:
for i in 0...10 {
sendAnalytics()
}
Each subsequent call won't wait for the completion inside sendAnalytics to be called (obviously). Is there a way to make the next call wait until completion fires? Or is the whole way of thinking wrong, and do I have to solve this problem higher up, in the body of the for loop?
You can use a DispatchSemaphore to ensure that one call completes before the next can start:
let semaphore = DispatchSemaphore(value:1)
func sendAnalytics() {
self.semaphore.wait()
// some synchronous work
asyncTask() { _ in
completion()
self.semaphore.signal()
}
}
The second call to sendAnalytics will block until the first asyncTask is complete. You should be careful not to block the main queue as that will cause your app to become non-responsive. It is probably safer to dispatch the sendAnalytics call onto its own serial dispatch queue to eliminate this risk:
let semaphore = DispatchSemaphore(value:1)
let analyticsQueue = DispatchQueue(label:"analyticsQueue")
func sendAnalytics() {
analyticsQueue.async {
self.semaphore.wait()
// some synchronous work
asyncTask() { _ in
completion()
self.semaphore.signal()
}
}
}
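With that in place, the loop from the question can stay exactly as it is; each iteration enqueues a block onto analyticsQueue and the semaphore forces it to wait for the previous completion (a sketch of the intended usage):
for _ in 0...10 {
sendAnalytics() // returns immediately; the enqueued work runs one call at a time
}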

How does a serial queue/private dispatch queue know when a task is complete?

(Perhaps answered by How does a serial dispatch queue guarantee resource protection? but I don't understand how)
Question
How does GCD know when an asynchronous task (e.g. a network task) is finished? Should I be using dispatch_retain and dispatch_release for this purpose? Update: I cannot call either of these methods with ARC... What should I do?
Details
I am interacting with a 3rd party library that does a lot of network access. I have created a wrapper via a small class that basically offers all the methods I need from the 3rd party class, but wraps the calls in dispatch_async(serialQueue) { () -> Void in (where serialQueue is a member of my wrapper class).
I am trying to ensure that each call to the underlying library finishes before the next begins (somehow that's not already implemented in the library).
The serialisation of work on a serial dispatch queue is at the unit of work that is directly submitted to the queue. Once execution reaches the end of the submitted closure (or it returns) then the next unit of work on the queue can be executed.
Importantly, any other asynchronous tasks that may have been started by the closure may still be running (or may not have even started running yet), but they are not considered.
For example, for the following code:
dispatch_async(serialQueue) {
print("Start")
dispatch_async(backgroundQueue) {
functionThatTakes10Seconds()
print("10 seconds later")
}
print("Done 1st")
}
dispatch_async(serialQueue) {
print("Start")
dispatch_async(backgroundQueue) {
functionThatTakes10Seconds()
print("10 seconds later")
}
print("Done 2nd")
}
The output would be something like:
Start
Done 1st
Start
Done 2nd
10 seconds later
10 seconds later
Note that the first 10 second task hasn't completed before the second serial task is dispatched. Now, compare:
dispatch_async(serialQueue) {
print("Start")
dispatch_sync(backgroundQueue) {
functionThatTakes10Seconds()
print("10 seconds later")
}
print("Done 1st")
}
dispatch_async(serialQueue) {
print("Start")
dispatch_sync(backgroundQueue) {
functionThatTakes10Seconds()
print("10 seconds later")
}
print("Done 2nd")
}
The output would be something like:
Start
10 seconds later
Done 1st
Start
10 seconds later
Done 2nd
Note that this time because the 10 second task was dispatched synchronously the serial queue was blocked and the second task didn't start until the first had completed.
In your case, there is a very good chance that the operations you are wrapping are going to dispatch asynchronous tasks themselves (since that is the nature of network operations), so a serial dispatch queue on its own is not enough.
You can use a DispatchGroup to block your serial dispatch queue.
dispatch_async(serialQueue) {
let dg = dispatch_group_create()
dispatch_group_enter(dg)
print("Start")
dispatch_async(backgroundQueue) {
functionThatTakes10Seconds()
print("10 seconds later")
dispatch_group_leave(dg)
}
dispatch_group_wait(dg, DISPATCH_TIME_FOREVER)
print("Done")
}
This will output
Start
10 seconds later
Done
The dispatch_group_wait() call blocks the serial queue until the number of dispatch_group_leave() calls matches the number of dispatch_group_enter() calls. If you use this technique, you need to be careful to ensure that all possible completion paths for your wrapped operation call dispatch_group_leave(). You can also pass a finite timeout to dispatch_group_wait() instead of DISPATCH_TIME_FOREVER.
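For example, a sketch of waiting with a 10-second timeout instead of forever (the timeout value is arbitrary):
let timeout = dispatch_time(DISPATCH_TIME_NOW, Int64(10 * NSEC_PER_SEC))
if dispatch_group_wait(dg, timeout) != 0 {
// The group did not empty within 10 seconds; handle the timeout here
}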
As mentioned before, DispatchGroup is a very good mechanism for that.
You can use it for synchronous tasks:
let group = DispatchGroup()
DispatchQueue.global().async(group: group) {
syncTask()
}
group.notify(queue: .main) {
// done
}
It is better to use notify than wait: wait blocks the current thread, so it is only safe to use on a non-main thread.
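If you do need wait, a sketch of using it safely off the main thread (assuming the same group as above; functionally similar to the notify variant):
DispatchQueue.global().async {
group.wait() // safe here because we are not on the main thread
DispatchQueue.main.async {
// done
}
}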
You can also use it to perform async tasks:
let group = DispatchGroup()
group.enter()
asyncTask {
group.leave()
}
group.notify(queue: .main) {
// done
}
Or you can even perform any number of parallel tasks of any synchronicity:
let group = DispatchGroup()
group.enter()
asyncTask1 {
group.leave()
}
group.enter() //other way of doing a task with synchronous API
DispatchQueue.global().async {
syncTask1()
group.leave()
}
group.enter()
asyncTask2 {
group.leave()
}
DispatchQueue.global().async(group: group) {
syncTask2()
}
group.notify(queue: .main) {
// runs when all tasks are done
}
It is important to note a few things.
Always check that your asynchronous functions actually call their completion callback on every path; third-party libraries sometimes forget to, and a weakly captured self can mean the body is silently skipped when self is nil. If a completion is never called, the group never empties and you never get the notify callback.
Remember to perform all the needed group.enter() and async(group: group) calls before you call group.notify. Otherwise you can get a race condition, and the group.notify block can fire before your tasks have actually finished.
BAD EXAMPLE
let group = DispatchGroup()
DispatchQueue.global().async {
group.enter()
syncTask1()
group.leave()
}
group.notify(queue: .main) {
// Can run before syncTask1 completes - DON'T DO THIS
}
The answer to the question in your question's body:
I am trying to ensure that each call to the underlying library finishes before the next begins
A serial queue does guarantee that tasks are executed one at a time, in the order you add them to the queue.
I do not really understand the question in the title though:
How does a serial queue ... know when a task is complete?

How to wait until all NSOperations are finished?

I have the following code:
func testFunc(completion: (Bool) -> Void) {
let queue = NSOperationQueue()
queue.maxConcurrentOperationCount = 1
for i in 1...3 {
queue.addOperationWithBlock{
Alamofire.request(.GET, "https://httpbin.org/get").responseJSON { response in
switch (response.result){
case .Failure:
print("error")
break;
case .Success:
print("i = \(i)")
}
}
}
//queue.addOperationAfterLast(operation)
}
queue.waitUntilAllOperationsAreFinished()
print("finished")
}
and the output is:
finished
i = 3
i = 1
i = 2
but I expect the following:
i = 3
i = 1
i = 2
finished
So why doesn't queue.waitUntilAllOperationsAreFinished() wait?
Each operation you've added to the queue finishes almost immediately, because Alamofire.request simply returns without waiting for the response data.
Furthermore, there is a possibility of deadlock here. Since the responseJSON block is executed on the main queue by default, blocking the main thread by calling waitUntilAllOperationsAreFinished would prevent it from executing the completion block at all.
First, to fix the deadlock issue, tell Alamofire to execute the completion block on a different queue. Second, use a dispatch_group_t to group the asynchronous HTTP requests and keep the main thread waiting until all the requests in the group have finished executing:
let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)
let group = dispatch_group_create()
for i in 1...3 {
dispatch_group_enter(group)
Alamofire.request(.GET, "https://httpbin.org/get").responseJSON(queue: queue, options: .AllowFragments) { response in
print(i)
dispatch_async(dispatch_get_main_queue()) {
// Main thread is still blocked. You can update the UI here but it will take effect after all HTTP requests are finished.
}
dispatch_group_leave(group)
}
}
dispatch_group_wait(group, DISPATCH_TIME_FOREVER)
print("finished")
I would suggest using KVO to observe when the queue has finished all its tasks, instead of blocking the current thread until all the operations have finished. Or you can use dependencies. Take a look at this SO question.
To check whether all operations have finished, we could use KVO to observe the number of operations in the queue. Unfortunately, both operations and operationCount are now deprecated.
So it's safer to use the following dependency-based option.
To check that a set of operations has finished, use dependencies:
Create a final operation called "finishOperation", then make it depend on all the other required operations. This way, "finishOperation" will execute only when the operations it depends on have finished. Check this answer for a code sample; a sketch is also shown below.
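A minimal sketch of that dependency approach in the same Swift 2-era syntax as the question (the operation bodies are placeholders):
let queue = NSOperationQueue()
let finishOperation = NSBlockOperation {
print("finished")
}
for i in 1...3 {
let operation = NSBlockOperation {
// synchronous work goes here
print("i = \(i)")
}
finishOperation.addDependency(operation)
queue.addOperation(operation)
}
// finishOperation runs only after every operation it depends on has finished
queue.addOperation(finishOperation)
Keep in mind that, as with waitUntilAllOperationsAreFinished, this only helps if each operation does its work synchronously (or is an NSOperation subclass that finishes itself when its asynchronous work completes); a block that merely starts an Alamofire request still finishes immediately.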

Waiting for async function to finish using GCD

I have an async function that queries Parse. I need to wait until all objects from the Parse query have returned before calling my second function. The problem is, I'm using:
var group: dispatch_group_t = dispatch_group_create()
dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) { () -> Void in
asyncFunctionA() // this includes an async Parse query
}
dispatch_group_notify(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) { () -> Void in
asyncFunctionB() // must be called when asyncFunctionA() has finished
}
...but asyncFunctionB() is getting called before I even have any objects appended to my arrays in asyncFunctionA(). Isn't the point of using GCD's notify to observe the completion of a prior function? Why isn't that working here?
Just as Parse employs the concept of completion blocks/closures, you need to do the same in your asyncFunctionA:
func asyncFunctionA(completionHandler: () -> ()) {
// your code to prepare the background request goes here, but the
// key is that in the background task's closure, you add a call to
// your `completionHandler` that we defined above, e.g.:
gameScore.saveInBackgroundWithBlock { success, error in
if (success) {
// The object has been saved.
} else {
// There was a problem, check error.description
}
completionHandler()
}
}
Then you could do something like your code snippet:
let group = dispatch_group_create()
dispatch_group_enter(group)
asyncFunctionA() {
dispatch_group_leave(group)
}
dispatch_group_notify(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0)) {
self.asyncFunctionB()
}
Note: if function A is really using Parse's asynchronous methods, then there's no need to use dispatch_async there. But if you need it for some reason, feel free to add it back in; just make sure the dispatch_group_enter occurs before you dispatch to the background thread.
Frankly, I'd only use groups if I had a whole bunch of items added to the group. If it really is just B waiting for a single call to A, I'd retire the groups entirely and just do:
asyncFunctionA() {
self.asyncFunctionB()
}