Swift: execute asynchronous tasks in order (iOS)

I have a few asynchronous network tasks that I need to perform in my app. Let's say I have 3 resources that I need to fetch from a server; call them A, B, and C. Let's say I have to finish fetching resource A before fetching either B or C. Sometimes I'd want to fetch B first, other times C first.
Right now, I just have a long-chained closure like so:
func fetchA() {
    AFNetworking.get(completionHandler: {
        self.fetchB()
        self.fetchC()
    })
}
This works for now, but the obvious limitation is that I've hard-coded the order of execution into the completion handler of fetchA. Now, say I want to fetch C only after fetchB has finished; in that case I'd have to go change my implementation of fetchB...
Essentially, I'd like to know if there's some magic way to do something like:
let orderedAsync = [fetchA, fetchB, fetchC]
orderedAsync.executeInOrder()
where fetchA, fetchB, and fetchC are all async functions, but fetchB won't execute until fetchA has finished and so on. Thanks!

You can use a serial DispatchQueue combined with a DispatchGroup, which will ensure that only one execution block runs at a time.
let serialQueue = DispatchQueue(label: "serialQueue")
let group = DispatchGroup()

group.enter()
serialQueue.async { // call this whenever you need to add a new work item to your queue
    fetchA {
        // in the completion handler, call
        group.leave()
    }
}

serialQueue.async {
    group.wait()
    group.enter()
    fetchB {
        // in the completion handler, call
        group.leave()
    }
}

serialQueue.async {
    group.wait()
    group.enter()
    fetchC {
        group.leave()
    }
}
Or, if you are allowed to use a 3rd-party library, use PromiseKit; it makes handling, and especially chaining, async methods far easier than anything GCD provides. See the official GitHub page for more info.
You can wrap an async method with a completion handler in a Promise and chain them together like this:
Promise.wrap(fetchA(completion: $0)).then { valueA -> Promise<TypeOfValueB> in
    return Promise.wrap(fetchB(completion: $0))
}.then { valueB in
    // use valueB here
}.catch { error in
    // handle error
}
Also, all errors are propagated through your promises.

You could use a combination of DispatchGroup and DispatchSemaphore to perform the asynchronous code blocks in sequence.
DispatchGroup maintains the enter/leave balance and notifies you when all the tasks are completed.
A DispatchSemaphore with value 1 makes sure only one task block is executed at a time.
Sample code, where fetchA, fetchB, and fetchC are functions that take a closure (completion handler):
// Create DispatchQueue
private let dispatchQueue = DispatchQueue(label: "taskQueue", qos: .background)
// value 1 indicates only one task will be performed at once
private let semaphore = DispatchSemaphore(value: 1)

func sync() -> Void {
    let group = DispatchGroup()

    group.enter()
    self.dispatchQueue.async {
        self.semaphore.wait()
        fetchA() { (modelResult) in
            // success or failure handler
            // semaphore signal to remove wait and execute next task
            self.semaphore.signal()
            group.leave()
        }
    }

    group.enter()
    self.dispatchQueue.async {
        self.semaphore.wait()
        fetchB() { (modelResult) in
            // success or failure handler
            // semaphore signal to remove wait and execute next task
            self.semaphore.signal()
            group.leave()
        }
    }

    group.enter()
    self.dispatchQueue.async {
        self.semaphore.wait()
        fetchC() { (modelResult) in
            self.semaphore.signal()
            group.leave()
        }
    }

    group.notify(queue: .main) {
        // Perform any task once all the intermediate tasks (fetchA(), fetchB(), fetchC()) are completed.
        // This block of code will be called once all the enter and leave statement counts are matched.
    }
}

Not sure why other answers are adding unnecessary code; what you are describing is already the default behavior of a serial queue:
let fetchA = { print("a starting"); sleep(1); print("a done") }
let fetchB = { print("b starting"); sleep(1); print("b done") }
let fetchC = { print("c starting"); sleep(1); print("c done") }

let orderedAsync = [fetchA, fetchB, fetchC]

let queue = DispatchQueue(label: "fetchQueue")
for task in orderedAsync {
    queue.async(execute: task) // notice "async" here
}
print("all enqueued")
sleep(5)
"all enqueued" will print immediately, and each task will wait for the previous one to finish before it starts.
FYI, if you added attributes: .concurrent to your DispatchQueue initialization, then they wouldn't be guaranteed to execute in order. But even then you can use the .barrier flag when you want things to execute in order.
In other words, this would also fulfill your requirements:
let queue = DispatchQueue(label: "fetchQueue", attributes: .concurrent)
for task in orderedAsync {
    queue.async(flags: .barrier, execute: task)
}

Related

How to use the completion handler version of an async function in Swift?

I have an async function func doWork(id: String) async throws -> String. I want to call this function from a concurrent dispatch queue like this, to test some things:
for i in 1...100 {
    queue.async {
        obj.doWork(id: "foo") { result, error in
            ...
        }
    }
}
I want to do this because queue.async { try await obj.doWork() } is not supported. I get an error:
Cannot pass function of type '@Sendable () async throws -> Void' to parameter expecting synchronous function type
But the compiler does not provide me with a completion handler version of doWork(id:). When I call it from Objective-C, I am able to use the completion handler version: [obj doWorkWithId:@"foo" completionHandler:^(NSString *val, NSError * _Nullable error) { ... }]
How do I do something similar in Swift?
You can define a serial DispatchQueue. This queue will wait for the previous work item to complete before proceeding to the next.
let queue = DispatchQueue(label: "queue")

func doWork(id: String) async -> String {
    print("Do id \(id)")
    return id
}

func doWorksConcurrently() {
    for i in 0...100 {
        queue.async {
            Task.init {
                await doWork(id: String(i))
            }
        }
    }
}

doWorksConcurrently()
You are initiating an asynchronous task and immediately finishing the dispatch without waiting for doWork to finish. Thus the dispatch queue is redundant. One could do:
for i in 1...100 {
    Task {
        let results = try await obj.doWork(id: "foo")
        ...
    }
}
Or, if you wanted to catch/display the errors:
for i in 1...100 {
    Task {
        do {
            let results = try await obj.doWork(id: "foo")
            ...
        } catch {
            print(error)
            throw error
        }
    }
}
Now, generally in Swift, we would want to remain within structured concurrency and use a task group. But if you are trying to mirror what you'll experience from Objective-C, the above should be sufficient.
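For completeness, a minimal sketch of that task-group approach, assuming obj is an instance exposing the doWork(id:) async throws -> String method from the question (the distinct "foo\(i)" ids are just for illustration):

func doAllWork() async throws -> [String] {
    try await withThrowingTaskGroup(of: String.self) { group in
        for i in 1...100 {
            group.addTask {
                try await obj.doWork(id: "foo\(i)") // hypothetical distinct ids
            }
        }
        // collect the results as the child tasks finish
        var results: [String] = []
        for try await result in group {
            results.append(result)
        }
        return results
    }
}

The child tasks run concurrently, mirroring the concurrent queue in the question, and the function only returns once they have all completed (or one of them throws).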
Needless to say, if your Objective-C code is creating a queue solely for the purpose for calling the completion-handler rendition of doWork, then the queue is unnecessary there, too. But we cannot comment on that code without seeing it.

Adding condition based on previous result on DispatchQueue

Is it possible to set a condition on the next item in a DispatchQueue? Suppose there are 2 API calls that should be executed sequentially, callAPI1 -> callAPI2, but callAPI2 should only be executed if callAPI1 returns true. Please check the code below for a clearer picture of the situation:
let dispatchQueue: DispatchQueue = DispatchQueue(label: "queue")
let dispatchGroup = DispatchGroup()
var isSuccess: Bool = false

dispatchGroup.enter()
dispatchQueue.sync {
    self.callAPI1(completion: { (result) in
        isSuccess = result
        dispatchGroup.leave()
    })
}

dispatchGroup.enter()
dispatchQueue.sync {
    if isSuccess { //--> This one always gets false
        self.callAPI2(completion: { (result) in
            isSuccess = result
            dispatchGroup.leave()
        })
    } else {
        dispatchGroup.leave()
    }
}

dispatchGroup.notify(queue: DispatchQueue.main, execute: {
    completion(isSuccess) //--> This one always gets false
})
Currently the above code always returns isSuccess as false despite callAPI1 returning true, which causes only callAPI1 to be called.
All non-playground code was typed directly into the answer, so expect minor errors.
It appears that you are trying to make an asynchronous call into a synchronous one, and the way you are attempting this simply will not work. Assuming callAPI1 is asynchronous then after:
self.callAPI1(completion: { (result) in
    isSuccess = result
})
the completion block has (in all probability) not yet run, so you cannot test isSuccess immediately, as in:
self.callAPI1(completion: { (result) in
    isSuccess = result
})
if isSuccess
{
    // in all probability this will never be reached
}
Wrapping the code into a synchronous block will have no effect whatsoever:
dispatchQueue.sync
{
    self.callAPI1(completion: { (result) in
        isSuccess = result
    })
    // at this point, in all probability, the completion block
    // has not yet run, therefore...
}
// at this point it has also not run
A sync dispatch just runs its block on a different queue and waits for it to complete; if that block contains asynchronous code, as yours does, then it is not magically made synchronous - it executes asynchronously as normal, the synchronously dispatched block terminates, the sync dispatch returns, and your code continues. The sync dispatch has no real effect (apart from running the block on a different queue while blocking the current one).
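To illustrate, here is a small standalone sketch (not the question's code) showing that a sync dispatch returns as soon as its block ends, even if that block started asynchronous work:

import Foundation

let demoQueue = DispatchQueue(label: "demo")
var done = false

demoQueue.sync {
    // asynchronous work started *inside* the synchronously dispatched block
    DispatchQueue.global().asyncAfter(deadline: .now() + 1) { done = true }
}
// the sync dispatch has returned, but the asynchronous work has not run yet
print(done) // false
sleep(2)
print(done) // true, only after the asynchronous work has completed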
If you need to sequence a number of asynchronous calls you can do it a number of ways. One method is to simply chain the calls through the completion blocks. Using this approach your code becomes:
self.callAPI1(completion: { (result) in
    if !result { completion(false) }
    else
    {
        self.callAPI2(completion: { (result) in
            completion(result)
        })
    }
})
Using Semaphores
If you have a long sequence of such calls using the above pattern, the code can become deeply nested; in such a case, instead of nesting, you can use semaphores to sequence the calls. A simple semaphore can be used to block (thread) execution, using wait(), until it is signalled (by an unblocked thread), using signal().
Notice the emphasis here on blocking: once you introduce the ability to block execution, all sorts of issues have to be considered. Among them are UI responsiveness (blocking the UI thread is not good) and deadlock (for example, if the code that will issue the semaphore wait and signal operations is executing on the same thread, then after a wait there will be no signal...).
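As a minimal (hypothetical) sketch of that deadlock: the block that would signal is queued on the same serial queue that is blocked by the wait, so it can never run:

let deadlockQueue = DispatchQueue(label: "same-queue") // serial queue
let sem = DispatchSemaphore(value: 0)
deadlockQueue.async {
    // the signalling block is queued behind the currently running block...
    deadlockQueue.async { sem.signal() }
    // ...which now blocks forever waiting for it: deadlock
    sem.wait()
}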
Here is a sample Swift Playground script to demonstrate using semaphores. The pattern follows your original code but uses a semaphore in addition to your boolean.
import Cocoa

// some convenience functions for our dummy callAPI1 & callAPI2
func random(_ range : CountableClosedRange<UInt32>) -> UInt32
{
    let lower = range.lowerBound
    let upper = range.upperBound
    return lower + arc4random_uniform(upper - lower + 1)
}

func randomBool() -> Bool
{
    return random(0...1) == 1
}

class Demo
{
    // grab the global concurrent utility queue to schedule our work on
    let workerQueue = DispatchQueue.global(qos : .utility)

    // dummy callAPI1, just pauses and then randomly returns success or failure
    func callAPI1(_ completion : @escaping (Bool) -> Void) -> Void
    {
        // do the "work" on workerQueue, which is concurrent so other work
        // can be executing, or *blocked*, on the same queue
        let pause = random(1...2)
        workerQueue.asyncAfter(deadline: .now() + Double(pause))
        {
            // produce a random success result
            let success = randomBool()
            print("callAPI1 after \(pause) -> \(success)")
            completion(success)
        }
    }

    func callAPI2(_ completion : @escaping (Bool) -> Void) -> Void
    {
        let pause = random(1...2)
        workerQueue.asyncAfter(deadline: .now() + Double(pause))
        {
            let success = randomBool()
            print("callAPI2 after \(pause) -> \(success)")
            completion(success)
        }
    }

    func runDemo(_ completion : @escaping (Bool) -> Void) -> Void
    {
        // We run the demo as a standard async function
        // which doesn't block the main thread
        workerQueue.async
        {
            print("Demo starting...")
            var isSuccess: Bool = false
            let semaphore = DispatchSemaphore(value: 0)

            // do the first call
            // this will asynchronously execute on a different thread
            // *including* its completion block
            self.callAPI1
            { (result) in
                isSuccess = result
                semaphore.signal() // signal completion
            }

            // we can safely wait for the semaphore to be
            // signalled as callAPI1 is executing on a different
            // thread so we will not deadlock
            semaphore.wait()

            if isSuccess
            {
                self.callAPI2
                { (result) in
                    isSuccess = result
                    semaphore.signal() // signal completion
                }
                semaphore.wait() // wait for completion
            }

            completion(isSuccess)
        }
    }
}
Demo().runDemo { (result) in print("Demo result: \(result)") }

// For the Playground
// ==================
// The Playground can terminate a program run once the main thread is done
// and before all async work is finished. This can result in incomplete execution
// and/or errors. To avoid this we sleep the main thread for a few seconds.
sleep(6)
print("All done")

// Run the Playground multiple times, the results should vary
// (different wait times, callAPI2 may not run). Wait until
// the "All done" before starting the next run
// (i.e. don't push stop, it confuses the Playground).
Or...
Another approach to avoid the nesting is to design functions (or operators) which take two async methods and produce a single one by implementing the nesting pattern. Long nested sequences can then be reduced to more linear sequences. This approach is left as an exercise.
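As a rough illustration only, here is a minimal sketch of what such a combining function might look like, assuming the (Bool) -> Void completion-handler shape used by callAPI1/callAPI2 above (callAPI3 in the usage comment is hypothetical):

// Combine two completion-handler steps into a single one; the second step
// only runs if the first reports success.
func chain(_ first: @escaping ((Bool) -> Void) -> Void,
           _ second: @escaping ((Bool) -> Void) -> Void) -> ((Bool) -> Void) -> Void
{
    return { completion in
        first { ok in
            if ok { second(completion) }
            else { completion(false) }
        }
    }
}

// Usage (hypothetical):
// let combined = chain(callAPI1, chain(callAPI2, callAPI3))
// combined { finalResult in print("sequence result: \(finalResult)") }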
HTH

How to execute closures serially in Swift

I have a closure that is executed asynchronously in a for loop.
for i in 0..<10 {
    closure
}
How do I make the for loop wait for the closure to be executed before going on to the next iteration?
This is what Max tried to say:
var queue = OperationQueue()
queue.maxConcurrentOperationCount = 1
for i in 0..<10 {
    queue.addOperation {
        // closure here
    }
}
You'd probably want to use NSOperations or GCD. See Dispatch.
Or you could use PromiseKit if you're doing this a lot.
You can do this very simply with Dispatch:
import Dispatch    // Necessary for DispatchQueue
import Foundation  // Necessary for sleep

let closures = [ // The array of closures to execute serially
    { print(1); sleep(1) },
    { print(2); sleep(1) },
    { print(3); sleep(1) },
    { print(4); sleep(1) },
    { print(5); sleep(1) }
]

let queue = DispatchQueue(label: "My Serial Queue") // TODO: Name me

for closure in closures {
    queue.sync(execute: closure)
}

// Alternatively:
// closures.forEach(queue.sync(execute:))
You can add them to an OperationQueue and set its maxConcurrentOperationCount to 1. Then run them.
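A minimal sketch of that approach, assuming closures is an array of () -> Void closures like the one in the answer above:

let operationQueue = OperationQueue()
operationQueue.maxConcurrentOperationCount = 1 // run the operations one at a time, in order
let operations = closures.map { BlockOperation(block: $0) }
operationQueue.addOperations(operations, waitUntilFinished: true) // blocks the caller until all have finished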

GCD Serial Queue not returning in order

I don't understand why the second function returns before the first. Here's my code; I think I'm missing something simple.
let queue = dispatch_queue_create(nil, DISPATCH_QUEUE_SERIAL)
dispatch_async(queue) {
    self.requestToServer()
    self.sayHello()
    dispatch_async(dispatch_get_main_queue(), {
        // Update the UI...
    })
}
The requestToServer() function obviously takes longer than the sayHello() function, but shouldn't they execute one at a time on the serial queue that I have created? What am I doing wrong here?
You misunderstood the idea of a serial queue. A serial queue guarantees that blocks will be executed in the order you add them. It does not control statements within the same block. The block can end before all its statements have completed.
If I were to describe your block as a railroad branch, it looks like this:
(another queue) -- requestToServer() ---------------------------
/
(serial queue) start ------- sayHello() ----
\
(main queue) --- Update the UI
requestToServer() does not have a chance to finish before you update the GUI.
Instead, rewrite your requestToServer() to take a completion handler:
func requestToServer(completion: () -> Void) {
    let session = NSURLSession(configuration: ...)
    let task = session.dataTaskWithURL(url) { data, response, error in
        // check the response from server
        ...
        // when everything is OK, call the completion handler
        completion()
    }
    task.resume()
}
self.requestToServer() {
    self.sayHello()
    dispatch_async(dispatch_get_main_queue()) {
        // Update the UI...
    }
}

How to know when parallel HTTP requests are fulfilled in iOS?

I need to run some code only after requesting multiple HTTP resources for gathering some data.
I've read a lot of documentation and I've found out I should use GCD and dispatch groups:
Create a group with dispatch_group_create()
For each request:
    Enter the dispatch group with dispatch_group_enter()
    Run the request
    When receiving a response, leave the group with dispatch_group_leave()
Wait with dispatch_group_wait()
Release the group with dispatch_release()
Yet I'm not sure if this practice has some pitfalls, or whether there is a better way to wait for parallel requests to finish.
The code below seems to work well:
// Just send a request and call the whenFinished closure
func sendRequest(url: String, whenFinished: () -> Void) {
    let request = NSMutableURLRequest(URL: NSURL(string: url))
    let task = NSURLSession.sharedSession().dataTaskWithRequest(request, completionHandler: {
        (data, response, error) -> Void in
        whenFinished()
    })
    task.resume()
}

let urls = ["http://example.com?a",
            "http://example.com?b",
            "http://example.com?c",
            "http://example.com?d",
            "http://invalid.example.com"]

var fulfilledUrls: Array<String> = []
let group = dispatch_group_create();

for url in urls {
    dispatch_group_enter(group)
    sendRequest(url, {
        () in
        fulfilledUrls.append(url)
        dispatch_group_leave(group)
    })
}

dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

for url in fulfilledUrls { println(url) }
Yup, this is the basic idea, although you would ideally use dispatch_group_notify instead of dispatch_group_wait since dispatch_group_wait blocks the calling thread until the group completes, whereas dispatch_group_notify will call a block when the group completes without blocking the calling thread in the interim.
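For example, the final wait-and-print could become a non-blocking notification instead (a sketch in the same Swift 1/2-era GCD syntax as above, reusing the group and fulfilledUrls from the question):

dispatch_group_notify(group, dispatch_get_main_queue()) {
    // runs once every dispatch_group_enter has been balanced by a dispatch_group_leave,
    // without blocking the thread that set up the requests
    for url in fulfilledUrls { println(url) }
}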
