Synchronous network calls blocking other calls on startup - iOS

I have an app which makes hundreds of different network calls (HTTP GET requests) to a REST service. The calls are made from every single page of the app and there are many of them. But there is a requirement that two requests have to complete (on startup or awake) before any other network request happens. The result of these two requests is some config data which is needed for all other subsequent requests. (This requirement exists for many reasons.)
I have one central method for all GET requests. It uses AFNetworking and (of course) asynchronous handlers:
func GET(path: String, var parameters: Dictionary<String, String>? = nil) -> Future<AnyObject!> {
    let promise = Promise<AnyObject!>()
    manager.GET(path, parameters: parameters, success: { (task: NSURLSessionDataTask!, response: AnyObject!) -> Void in
        // some processing...
        promise.success(response)
    }) { (task: NSURLSessionDataTask!, error: NSError!) -> Void in
        // some failure handling...
        promise.failure(error)
    }
    return promise.future
}
The problem now is: how do I make these first two calls and block all other calls until those two succeed? The obvious solution would be a semaphore which blocks the thread (not the main one!) until those two calls return successfully, but if possible I want to avoid that solution (because of deadlocks, race conditions, error handling, etc... the usual suspects).
So is there any better solution for this?
The synchronous order basically has to be:
1st call
wait for successful response of 1st call
2nd call
wait for successful response of 2nd call
allow all other calls (async) in any order
I cannot do this logic in the upper layers of the app because the GET requests could come from every part of the app, so I would need to rewrite everything. I want to do it centrally in this single GET method.
Maybe this is also possible with the Promise/Future pattern I already use; any hints are welcome.
Thanks.

There are a couple of approaches to tackle this class of problem:
You can use GCD (as shown in the answer you linked to) using semaphores or dispatch groups.
You can use a custom asynchronous NSOperation subclass in conjunction with NSOperationQueue.
Or you can use promises.
I'd generally suggest you pick one of these three, but don't try to mix dispatch/operation queue patterns with your existing futures/promises code. People will suggest dispatch/operation queue approaches simply because that is how one generally solves this type of problem (futures have not gained popular acceptance yet, and there are competing promises/futures libraries out there). But promises/futures are designed to solve precisely this problem, and if you're already using them, just stick with them.
On the specifics of the dispatch/operation approaches, I could explain why I don't like the GCD approach and try to convince you to use the NSOperationQueue approach, but that's moot if you're using promises/futures.
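For completeness, though, here is a minimal sketch of the dispatch-group idea from the first bullet (not what I would recommend if you are already invested in futures). The names fetchConfigA, fetchConfigB, and performGET are hypothetical stand-ins for your own calls, and error handling is omitted:
let configGroup = DispatchGroup()

func loadConfig() {
    configGroup.enter()
    fetchConfigA { _ in
        fetchConfigB { _ in              // second call only after the first succeeds
            configGroup.leave()          // releases everything waiting on the group
        }
    }
}

func gatedGET(_ path: String, completion: @escaping (AnyObject?) -> Void) {
    // notify(queue:) runs the block once the group is empty, without blocking a thread;
    // this assumes loadConfig() was called (and entered the group) at startup
    configGroup.notify(queue: .global()) {
        performGET(path, completion: completion)
    }
}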
The challenge here is that you appear to be using an old version of Thomvis/BrightFutures. The current version takes two types for the generic, so my code below probably won't work for you as-is. But I'll offer it up as a suggestion, as it may be illustrative.
For example, let's imagine that you had a loginURL (the first request) and some second request that you wanted to perform after that, but then had an array of URLs, urls, that you wanted to run concurrently with respect to each other, but only after the first two requests were done. Using version 3.2.2 of BrightFutures, it might look like:
GET(loginURL).flatMap { responseObject -> Future<AnyObject!, NSError> in
    return self.GET(secondRequestURL)
}.flatMap { responseObject -> Future<Int, NSError> in
    return urls.map { self.GET($0) }.fold(0) { sum, _ in sum + 1 }
}.onSuccess { count in
    print("\(count) concurrent requests completed successfully")
}.onFailure { error in
    print("not successful: \(error)")
}
Judging from your code snippet, you must be using an older version of BrightFutures, so the above probably won't work as written, but hopefully it illustrates the basic idea: use the capabilities of BrightFutures to manage these asynchronous tasks and control which are done sequentially and which are done concurrently.
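Building on the same idea, here is a hedged sketch of how you might gate your central GET on a stored "config" future, so that every ordinary request waits (asynchronously) for the two startup calls. The names configReady, firstConfigURL, secondConfigURL, and gatedGET are hypothetical, again written in the BrightFutures 3.x style:
lazy var configReady: Future<AnyObject!, NSError> = {
    return self.GET(self.firstConfigURL).flatMap { _ in
        self.GET(self.secondConfigURL)       // 2nd call starts only after the 1st succeeds
    }
}()

func gatedGET(path: String) -> Future<AnyObject!, NSError> {
    // every normal request chains off the config future; once it has succeeded,
    // subsequent requests proceed immediately
    return configReady.flatMap { _ in self.GET(path) }
}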

Related

How to properly cancel Swift async/await function

I have watched the Explore structured concurrency in Swift video and other relevant videos / articles / books I was able to find (Swift by Sundell, Hacking with Swift, Ray Wenderlich), but all the examples there are very trivial - async functions usually have only one async call in them. How should this work in real-life code?
For example:
...
task = Task {
    var longRunningWorker: LongRunningWorker? = nil
    do {
        var fileURL = state.fileURL
        if state.needsCompression {
            longRunningWorker = LongRunningWorker(inputURL: fileURL)
            fileURL = try await longRunningWorker!.doAsyncWork()
        }
        let urls = try await ApiService.i.fetchUploadUrls()
        if let image = state.image, let imageData = image.jpegData(compressionQuality: 0.8) {
            guard let imageUrl = urls.signedImageUrl else {
                fatalError("Cover art supplied but art upload URL is nil")
            }
            try await ApiService.i.uploadData(url: imageUrl, data: imageData)
        }
        let fileData = try Data(contentsOf: fileURL) // use the (possibly compressed) local fileURL
        try await ApiService.i.uploadData(url: urls.signedFileUrl, data: fileData)
        try await ApiService.i.doAnotherAsyncNetworkCall()
    } catch {
        longRunningWorker?.deleteFilesIfNecessary()
        throw error
    }
}
...
Then at some point I will call task.cancel().
Who's responsible for cancelling what? Examples I've seen so far use try Task.checkCancellation(), but in this code that line would have to appear every few lines - is that how it should be done?
If the API service used the async variant of URLSession, the calls would be cancelled automatically on iOS 15, but we don't use the async URLSession APIs, so we have to cancel the calls manually. The same applies to all the long-running worker code.
I am also thinking that I could add this check within each of the async functions, but then basically all async functions would have the same boilerplate code, which again seems wrong, and I haven't seen that done in any of the videos.
EDIT:
I have removed callback calls as those are irrelevant to the question.
There are two basic patterns for the implementation of our own cancelation logic:
Use withTaskCancellationHandler(operation:onCancel:) to wrap your cancelable asynchronous process.
This is useful when calling a cancelable legacy API and wrapping it in a Task. This way, canceling a task can proactively stop the asynchronous process in your legacy API, rather than waiting until you reach a manual isCancelled or checkCancellation call. This pattern works well with iOS 13/14 URLSession API, or any asynchronous API that offers a cancelation method.
Periodically check isCancelled or try checkCancellation.
This is useful in scenarios where you are performing some manual, computationally intensive process with a loop.
Many discussions about handling cooperative cancelation tend to dwell on those isCancelled/checkCancellation checks, but when dealing with a legacy cancelable API, the aforementioned withTaskCancellationHandler is generally the better solution.
So, I would personally focus on implementing cooperative cancelation in the methods that wrap some legacy asynchronous process. Generally the cancelation logic will percolate up, frequently not requiring additional checking further up in the call chain, and it is often handled by whatever error handling logic you already have.
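For example, here is a hedged sketch of pattern 1 (LegacyWorker and its start/cancel methods are hypothetical, not from your code), where canceling the enclosing Task proactively cancels the underlying work:
func doAsyncWork(inputURL: URL) async throws -> URL {
    let worker = LegacyWorker(inputURL: inputURL)        // legacy API offering start(...) and cancel()
    return try await withTaskCancellationHandler {
        try await withCheckedThrowingContinuation { continuation in
            worker.start { (result: Result<URL, Error>) in
                continuation.resume(with: result)        // resume exactly once
            }
        }
    } onCancel: {
        worker.cancel()                                  // forward the cancelation to the legacy API
    }
}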
Examples I've seen so far would use try Task.checkCancellation(), but for this code that line should appear every few lines - is that how it should be done?
Basically yes. Cancellation is a totally voluntary venture. The runtime doesn't know what cancellation means for your particular task, so it just leaves it up to you. You look at Task.isCancelled, or, if your intention is to throw just in case the task is cancelled, you can call Task.checkCancellation.
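For example, a tiny hedged sketch of that periodic check inside a manual loop (compressChunk is a hypothetical async helper, not from the question):
func compress(chunks: [Data]) async throws -> [Data] {
    var output: [Data] = []
    for chunk in chunks {
        try Task.checkCancellation()                 // throws CancellationError once the task is cancelled
        output.append(try await compressChunk(chunk))
    }
    return output
}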
Note that if, within your task, you are calling (with try) any async material that throws when cancelled, you do not need to do any cancellation work with regard to that material, because when it throws due to cancellation, you will throw due to cancellation automatically.
Having said all that, I have to add, as a footnote, that your code is extremely strange. Callbacks and async/await are opposites; the idea that you would do a do/catch and call a callback within a Task is extremely weird and I would advise against it. You are basically negating all the advantages of a Task by doing that, as well as invalidating what I just said about the error trickling up and out of your task.

When would a queue consider a task is completed?

In the following code, when would queueT (a serial queue) consider "task A" to be completed?
The moment aNetworkRequest switches to another thread?
Or in the doneInAnotherQueue block? (commented // 1)
In other words, when would "task B" be executed?
let queueT = DispatchQueue(label: "com.test.a")
queueT.async { // task A
    aNetworkRequest.doneInAnotherQueue() { // completed in another thread possibly
        // 1
    }
}
queueT.async { // task B
    print("It's my turn")
}
It would be much better if you could explain the mechanism by which a queue considers a task completed.
Thanks in advance.
In short, the first example starts an asynchronous network request, so the async call “finishes” as soon as that network request is submitted (but does not wait for that network request to finish).
I am assuming that the real question is that you want to know when the network request is done. Bottom line, GCD is not well suited for managing dependencies between tasks that are, themselves, asynchronous requests. Dispatching the initiation of a network request to a serial queue is undoubtedly not going to achieve what you want. (And before someone suggests using semaphores or dispatch groups to wait for the asynchronous request to finish, note that this can solve the tactical issue, but it is a pattern to be avoided because it is an inefficient use of resources and, in edge cases, can introduce deadlocks.)
One pattern is to use completion handlers:
func performRequestA(completion: @escaping () -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue() { object in
        ...
        completion()
    }
}
Now, in practice, we would generally use the completion handler with a parameter, perhaps even a Result type:
func performRequestA(completion: @escaping (Result<Foo, Error>) -> Void) { // task A
    aNetworkRequest.doneInAnotherQueue() { result in
        guard ... else {
            completion(.failure(error))
            return
        }
        let foo = ...
        completion(.success(foo))
    }
}
Then you can use the completion handler pattern, to process the results, update models, and perhaps initiate subsequent requests that are dependent upon the results of this request. For example:
performRequestA { result in
    switch result {
    case .failure(let error):
        print(error)
    case .success(let foo):
        // update models or initiate next step in the process here
        break
    }
}
If you are really asking how to manage dependencies between asynchronous tasks, there are a number of other, elegant patterns (e.g., Combine, custom asynchronous Operation subclass, the forthcoming async/await pattern contemplated in SE-0296 and SE-0303, etc.). All of these are elegant solutions for managing dependencies between asynchronous tasks, controlling the degree of concurrency, etc.
We probably would need to better understand the nature of your broader needs before making any specific recommendations. You have asked about a single dispatch, but the question is probably best viewed in the broader context of what you are trying to achieve. For example, I'm assuming you are asking because you have multiple asynchronous requests to initiate: Do you really need to make sure that they happen sequentially and lose all the performance benefits of concurrency? Or can you allow them to run concurrently, and you just need to know when all of the concurrent requests are done and how to get the results in the correct order? And might you have so many concurrent requests that you need to constrain the degree of concurrency?
The answers to those questions will probably influence our recommendation of how best to manage your multiple asynchronous requests. But the answer is almost certainly not a GCD queue.
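That said, if you do stay with completion handlers, one non-blocking way to know when a batch of concurrent requests has finished is DispatchGroup.notify. This is a sketch of my own, reusing the hypothetical performRequestA from above; note that results arrive in completion order, not request order:
let group = DispatchGroup()
var results = [Result<Foo, Error>]()
let lock = NSLock()                      // protects `results` across concurrent completions

for _ in 0..<5 {
    group.enter()
    performRequestA { result in
        lock.lock()
        results.append(result)
        lock.unlock()
        group.leave()
    }
}

group.notify(queue: .main) {             // runs once every enter() has been balanced by a leave()
    print("all \(results.count) requests finished")
}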
You can do a simple check
let queueT = DispatchQueue(label: "com.test.a")
queueT.async { // task A
    DispatchQueue(label: "com.test2.a").async { // create another queue inside
        for i in 0..<6 {
            print(i)
        }
    }
}
queueT.async { // task B
    for i in 10..<20 {
        print(i)
    }
}
You'll get different output each run. This means that, yes, once the work is handed off to another thread, the original task is considered done.
A GCD work item is complete when the closure you pass returns. So for your example, I'm going to rewrite it to make the function calls and parameters more explicit (rather than using trailing closure syntax).
queueT.async(execute: {
    // This is a function call that takes a closure parameter. When this
    // function returns, this closure will continue. Whether that is before or
    // after running closureParameter is an internal detail of doneInAnotherQueue.
    aNetworkRequest.doneInAnotherQueue(closureParameter: { ... })
    // At this point, the closure is complete. What doneInAnotherQueue() does with
    // its closure is its business.
})
Assuming that doneInAnotherQueue() executes its closure parameter "sometime in the future", your task B will likely run before that closure runs (it may not; it's really a race at that point, but probably). If doneInAnotherQueue() blocks on its closure before returning, then closureParameter will definitely run before task B.
There is absolutely no magic here. The system has no idea what doneInAnotherQueue does with its parameter. It may never run it. It may run it immediately. It may run it sometime in the future. The system just calls doneInAnotherQueue() and passes it a closure.
I rewrote async in normal "function with parameters" syntax to make it even more clear that async() is just a function, and it takes a closure parameter. It also isn't magic. It's not part of the language. It's just a normal function in the Dispatch framework. All it does is take its parameter, put it on a dispatch queue, and return. It doesn't execute anything. There are just closures that get put on queues, scheduled, and executed.
Swift is in the process of adding structured concurrency, which will add more language-level concurrency features that will allow you to express much more advanced things than the simple primitives provided by GCD.
Your task A returns straight away; the call that hands work off to another queue returns immediately. Think of the block (the trailing closure) after doneInAnotherQueue as just an argument to the doneInAnotherQueue function, no different from passing an Int or a String. You pass that block along and then return immediately with the closing brace of task A.

How do I wait for an asynchronous call in Swift?

So I've recently come back to Swift & iOS after a hiatus and I've run into an issue with asynchronous execution. I'm using Giphy's iOS SDK to save myself a lot of work, but their documentation is pretty much nonexistent so I'm not sure what might be happening under the hood in their function that calls their API.
I'm calling my function containing the below code from the constructor of a static object (I don't think that's the problem as I've also tried calling it from a cellForItemAt method for a Collection View).
My issue is that my function returns and execution continues before the API call is finished. I've tried utilizing DispatchQueue.main.async, removing Dispatch entirely, and DispatchGroups, all to no avail. The one thing that worked was a semaphore, but I think I remember reading that that isn't best practice?
Any tips would be great, I've been stuck on this for waaaaaay too long. Thanks so much in advance
GiphyCore.shared.gifByID(id) { (response, error) in
    if let media = response?.data {
        DispatchQueue.main.sync {
            print(media)
            ret = media
        }
    }
}
return ret
My issue is that my function is returning and execution continues before the API call is finished.
That's the whole point of asynchronous calls. A network call can take an arbitrary amount of time, so it kicks off the request in the background and tells you when it's finished.
Instead of returning a value from your code, take a callback parameter and call it when you know the Giphy call has finished. Or use a promise library. Or the delegate pattern.
The one thing that worked was a semaphore, but I think I remember reading that it wasn't best practice?
Don't do this. It will block your UI until the network call completes. Since you don't know how long that will take, your UI will be unresponsive for an unknown amount of time. Users will think your app has crashed on slow connections.
You could just wrap this inside a method that takes a completion handler, so you do not need to wait for the response. You could do it like this:
func functionName(completion: @escaping (YOURDATATYPE) -> Void) {
    GiphyCore.shared.gifByID(id) { (response, error) in
        if let media = response?.data {
            completion(media)
            return
        }
    }
}
Call your method like this
functionName() { response in
    DispatchQueue.main.async {
        // UPDATE the UI here
    }
}

Swift - How to get an unknown number of asynchronous operations to execute one after the other (synchronously)?

I have a set of asynchronous operations, of an unknown amount, that need to be executed one after another because they update the same resource. At the end of all the executions, I want a single point of completion to be notified that they're all complete.
e.g. I have a basket that has an unknown number of eggs in it (numEggs). I need to call api.removeEgg(eggID:fromBasket:completion:) numEggs times - but I don't want the next egg to be removed until the previous egg is completely removed, as they cannot modify the basket at the same time. When all of the eggs are removed, the client code should be notified once.
What is the best mechanism to achieve this given that the number of tasks is unknown? I've attempted to use DispatchGroup, but it seems the asynchronous tasks are kicked off at the same time. OperationQueue would work the same way.
NOTE: This is not a duplicate of this question: Calling asynchronous tasks one after the other in Swift (iOS)
The difference is that I do not know the number of asynchronous tasks that need to be completed one after the other. In the referenced post, the tasks are known at compile time and can simply be chained - I don't know until runtime how many asynchronous tasks I'll need to execute.
You can make one operation depend on another operation that has to finish first.
For example,
let operation = RandomOperation()
let laterOperation = RandomOperation()
laterOperation.addDependency(operation)
What this does is make laterOperation wait to start until operation has finished.
Since it seems that you have an unknown number of operations to take care of, I recommend you make a custom "group operation" class. All this custom class does is hold an OperationQueue and an array of operations.
At the end, you add the operations to the queue with
queue.addOperations(_:waitUntilFinished:)
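For an unknown number of operations, a hedged sketch of that idea might look like this. Note the caveat: a plain BlockOperation only works if the work inside it is synchronous; for the asynchronous api.removeEgg call you would need a custom asynchronous Operation subclass, and removeEggSynchronously and eggIDs below are hypothetical names:
let queue = OperationQueue()
var previous: Operation? = nil

let operations: [Operation] = eggIDs.map { id in
    let op = BlockOperation {
        removeEggSynchronously(id)               // hypothetical synchronous wrapper
    }
    if let previous = previous {
        op.addDependency(previous)               // each removal starts only after the prior one
    }
    previous = op
    return op
}

let done = BlockOperation { print("all eggs removed") }
operations.forEach { done.addDependency($0) }    // single completion point

queue.addOperations(operations + [done], waitUntilFinished: false)
(A recursive approach, like the one shown below, sidesteps that caveat by only issuing the next request from inside the previous request's completion handler.)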
Recursive function:
func removeAllEggs(eggIDs: [String], completion: @escaping () -> ()) {
    var ids = eggIDs
    let id = ids.last // the egg to remove in this call
    Alamofire.request(endpoint, method: .post, parameters: parameters, encoding: JSONEncoding.default, headers: headers).responseData { response in
        // check response etc...
        if ids.count > 1 {
            ids.removeLast()
            removeAllEggs(eggIDs: ids, completion: completion)
        } else {
            // you just sent the last item
            return completion()
        }
    }
}
and to use it:
func callItHere() {
    removeAllEggs(eggIDs: allEggs) {
        // Done! poof! all gone!
    }
}

Managing asynchronous calls to web API in iOS

I am fetching data (news articles) in JSON format from a web service. The fetched data needs to be converted to an Article object and that object should be stored or updated in the database. I am using Alamofire for sending requests to the server and Core Data for database management.
My approach was to create a DataFetcher class for fetching the JSON data and converting it to Article objects:
class DataFetcher {
    var delegate: DataFetcherDelegate?

    func fetchArticlesFromUrl(url: String, andCategory category: ArticleCategory) {
        //convert json to article
        //send articles to delegate
        getJsonFromUrl(url) { (json: JSON?, error: NSError?) in
            if error != nil {
                print("An error occurred while fetching json: \(error)")
            }
            if json != nil {
                let articles = self.getArticleFromJson(json!, andCategory: category)
                self.delegate?.receivedNewArticles(articles, fromCategory: category)
            }
        }
    }
}
After I fetch the data, I send it to a DataImporter class to store it in the database:
func receivedNewArticles(articles: [Article], fromCategory category: ArticleCategory) {
    //update the database with new articles
    //send articles to delegate
    delegate?.receivedUpdatedArticles(articles, fromCategory: category)
}
The DataImporter class sends the articles to its delegate, which in my case is the ViewController. This pattern was fine when I had only one API call to make (fetchArticles), but now I need to make another API call for fetching categories, and this call needs to be executed before the fetchArticles call in the ViewController.
This is the viewDidLoad method of my viewController:
override func viewDidLoad() {
    super.viewDidLoad()
    self.dataFetcher = DataFetcher()
    let dataImporter = DataImporter()
    dataImporter.delegate = self
    self.dataFetcher?.delegate = dataImporter
    self.loadCategories()
    self.loadArticles()
}
My questions are:
What is the best way to ensure that one call to the API gets executed before the other?
Is the pattern that I implemented good, given that I need to make a different method for each API call?
What is the best way to ensure that one call to the API gets executed before the other?
If you want to ensure that two or more asynchronous functions execute sequentially, you should first remember this:
If you implement a function which calls an asynchronous function, the calling function becomes asynchronous as well.
An asynchronous function should have a means to signal the caller that it has finished.
If you look at the network function getJsonFromUrl - which is an asynchronous function - it has a completion handler parameter which is one approach to signal the caller that the underlying task (a network request) has finished.
Now, fetchArticlesFromUrl calls the asynchronous function getJsonFromUrl and thus becomes asynchronous as well. However, in your current implementation it has no means to signal the caller that its underlying task (getJsonFromUrl) has finished. So, you first need to fix this, for example, by adding an appropriate completion handler and ensuring that it will eventually be called from within the body.
The same is true for your functions loadArticles and loadCategories. I assume these are asynchronous and require a means to signal the caller that the underlying task has finished - for example, by adding a completion handler parameter.
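For illustration, here is a hedged sketch of what fetchArticlesFromUrl might look like once it signals completion to its caller (the signature is my own suggestion, written in the same Swift 2-era style as your snippet):
func fetchArticlesFromUrl(url: String,
                          andCategory category: ArticleCategory,
                          completion: ([Article]?, NSError?) -> Void) {
    getJsonFromUrl(url) { (json: JSON?, error: NSError?) in
        if let error = error {
            completion(nil, error)               // signal failure to the caller
            return
        }
        if let json = json {
            let articles = self.getArticleFromJson(json, andCategory: category)
            self.delegate?.receivedNewArticles(articles, fromCategory: category)
            completion(articles, nil)            // signal success to the caller
        }
    }
}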
Once you have a number of asynchronous functions, you can chain them - that is, they will be called sequentially:
Given, two asynchronous functions:
func loadCategories(completion: (AnyObject?, ErrorType?) -> ())
func loadArticles(completion: (AnyObject?, ErrorType?) -> ())
Call them as shown below:
loadCategories { (categories, error) in
    if let categories = categories {
        // do something with categories:
        ...
        // Now, call loadArticles:
        loadArticles { (articles, error) in
            if let articles = articles {
                // do something with the articles
                ...
            } else {
                // handle error:
                ...
            }
        }
    } else {
        // handle error
        ...
    }
}
Is the pattern that I implemented good, given that I need to make a different method for each API call?
IMHO, you should not merge two functions into one where one performs the network request and the other processes the returned data. Just keep them separate. The reason is, you might want to explicitly specify the "execution context" - that is, the dispatch queue where you want the code to be executed. Usually, Core Data, CPU-bound functions and network functions should not or cannot share the same dispatch queue - possibly also due to concurrency constraints. Because of this, you may want control over where your code executes, through a parameter which specifies a dispatch queue.
If processing the data may take perceivable time (e.g. > 100 ms), don't hesitate to execute it asynchronously on a dedicated queue (not the main queue). Chain several asynchronous functions as shown above.
So, your code may consist of four asynchronous functions: network request 1, process data 1, network request 2, process data 2. Possibly, you need another function specifically for storing the data into Core Data.
Other hints:
Unless there's a parameter which can be set by the caller and which explicitly specifies the "execution context" (e.g. a dispatch queue) on which the completion handler should be called, it is preferable to call the completion handler on a concurrent global dispatch queue. This performs better and avoids deadlocks. This is in contrast to Alamofire, which usually calls its completion handlers on the main thread by default, which is prone to deadlocks and also performs suboptimally. If you can configure the queue on which the completion handler will be executed, please do so.
Prefer to execute functions and code on a dispatch queue which is not associated with the main thread - e.g. not the main queue. In your code, it seems, the bulk of the data processing will be executed on the main thread. Just ensure that UIKit methods are executed on the main thread.
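As an illustration of the first hint, the network function might accept a parameter specifying the queue on which its completion handler runs (a hedged sketch using modern GCD syntax; the names and defaults are my own):
func getJsonFromUrl(_ url: String,
                    completionQueue: DispatchQueue = .global(),
                    completion: @escaping (JSON?, Error?) -> Void) {
    // ... perform the request on whatever queue/session the networking layer uses ...
    let json: JSON? = nil                        // placeholder for the parsed result
    completionQueue.async {
        completion(json, nil)                    // the caller decides where this runs
    }
}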
