Make multiple asynchronous requests but wait for only one - iOS

I have a question concerning asynchronous requests. I want to request data from different sources on the web. Each source might have the data I want but I do not know that beforehand. Because I only want that information once, I don't care about the other sources as soon as one source has given me the data I need. How would I go about doing that?
I thought about doing it with a didSet and only setting it once, something like this:
var dogPicture: DogPicture? = nil {
    didSet {
        // Do something with the picture
    }
}
func findPictureOfDog(_ sources: [String]) -> DogPicture? {
    for source in sources {
        let task = URLSession.shared.dataTask(with: URL(string: source)!) { (data, response, error) in
            // error handling ...
            if data.isWhatIWanted() && dogPicture == nil {
                dogPicture = data.getPicture()
            }
        }
        task.resume()
    }
}
let sources = ["yahoo.com", "google.com", "pinterest.com"]
findPictureOfDog(sources)
However, it would be very helpful if I could just wait until findPictureOfDog() is finished, because depending on whether I find something or not, I have to ask the user for more input.
I don't know how I could do that with the above approach, because if I don't find anything the didSet will never be called, but in that case I should still ask the user for a picture.
A plus: isWhatIWanted() is rather expensive, so it would be great if there was a way to abort the execution of the handler once I have found a DogPicture.
I hope I made myself clear and hope someone can help me out with this!
Best regards and thank you for your time

A couple of things:
First, we’re dealing with asynchronous processes, so you shouldn’t return the DogPicture, but rather use the completion-handler pattern. E.g. rather than:
func findPictureOfDog(_ sources: [String]) -> DogPicture? {
    ...
    return dogPicture
}
You instead would probably do something like:
func findPictureOfDog(_ sources: [String], completion: @escaping (Result<DogPicture, Error>) -> Void) {
    ...
    completion(.success(dogPicture))
}
And you’d call it like:
findPictureOfDog(sources) { result in
    switch result {
    case .success(let dogPicture): ...
    case .failure(let error): ...
    }
}
// but don’t try to access the DogPicture or Error here
While the above addresses the “you can’t just return a value from an asynchronous process” issue, the related observation is that you don’t want to rely on a property as the trigger to signal when the process is done. All of the “when the first process finishes” logic should be in the findPictureOfDog routine, which should call the completion handler when it’s done.
I would advise against using properties and their observers for this process, because it raises questions about how one synchronizes access to ensure thread-safety, etc. Completion handlers are unambiguous and avoid these secondary issues.
You mention that isWhatIWanted is computationally expensive. That has two implications:
If it is computationally expensive, then you likely don’t want to call that synchronously inside the dataTask(with:completionHandler:) completion handler, because that is a serial queue. Whenever dealing with serial queues (whether main queue, network session serial queue, or any custom serial queue), you often want to get in and out as quickly as possible (so the queue is free to continue processing other tasks).
E.g. Let’s imagine that the Google request came in first but, unbeknownst to you at this point, it doesn’t contain what you wanted, and isWhatIWanted is now slowly checking the result. And let’s imagine that in the intervening time the Yahoo request came in. If you call isWhatIWanted synchronously, you won’t be able to start checking the Yahoo result until the Google check has failed, because you’re making synchronous calls on this serial queue.
I would suggest that you probably want to start checking results as they come in, not waiting for the others. To do this, you want a rendition of isWhatIWanted that runs asynchronously with respect to the network serial queue.
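For example, a rough sketch of hopping off the session’s serial callback queue before doing the expensive check (processingQueue here is a hypothetical concurrent queue; url and isWhatIWanted() are placeholders from the question):

let processingQueue = DispatchQueue(label: "com.example.isWhatIWanted", attributes: .concurrent)

let task = URLSession.shared.dataTask(with: url) { data, response, error in
    guard let data = data, error == nil else { return }
    // Get off the session's serial callback queue immediately so other
    // responses can be delivered while this slow check runs.
    processingQueue.async {
        if data.isWhatIWanted() {
            // hand the picture back (e.g. via a completion handler)
        }
    }
}
task.resume()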
Is the isWhatIWanted a cancelable process? Ideally it would be, so if the Yahoo image succeeded, it could cancel the now-unnecessary Pinterest isWhatIWanted. Canceling the network requests is easy enough, but more than likely, what we really want to cancel is this expensive isWhatIWanted process. But we can’t comment on that without seeing what you’re doing there.
But, let’s imagine that you’re performing the object classification via VNCoreMLRequest objects. You might therefore cancel any pending requests as soon as you find your first match.
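A rough sketch of that idea, assuming a hypothetical VNCoreMLModel, a hypothetical “dog” label, and glossing over the synchronization of pendingRequests (none of this is from the question):

import Vision

var pendingRequests = [VNRequest]()

func checkForDog(in data: Data, using model: VNCoreMLModel, completion: @escaping (Bool) -> Void) {
    let request = VNCoreMLRequest(model: model) { request, error in
        let observations = request.results as? [VNClassificationObservation] ?? []
        completion(observations.contains { $0.identifier == "dog" && $0.confidence > 0.8 })
    }
    pendingRequests.append(request)

    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(data: data, options: [:])
        try? handler.perform([request])
    }
}

// As soon as one source succeeds, cancel whatever is still pending:
pendingRequests.forEach { $0.cancel() }
pendingRequests.removeAll()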
In your example, you list three sources. How many sources might there be? When dealing with problems like this, you often want to constrain the degree of concurrency. E.g. let’s say that in the production environment you’d be querying a hundred different sources; you’d probably want to ensure that no more than, say, a half dozen are running at any given time, because of memory and CPU constraints.
All of this having been said, all of these considerations (asynchronous, cancelable, constrained concurrency) seem to be begging for an Operation based solution.
So, in answer to your main question, the idea would be to write a routine that iterates through the sources, calls the main completion handler upon the first success, and makes sure that any subsequent/concurrent requests cannot call the completion handler, too:
You could save a local reference to the completion handler.
As soon as you successfully find a suitable image, you can:
call that saved completion handler;
nil your saved reference (so that if other requests complete at roughly the same time, they can’t call the completion handler again, eliminating any race conditions); and
cancel any pending operations so that any requests that have not finished will stop (or, if they have not even started yet, will never start at all).
Note, you’ll want to synchronize the above logic, so you don’t have any races in this process of calling and resetting the completion handler.
Make sure to have a completion handler that you call after all the requests are done processing, in case you didn’t end up finding any dogs at all.
Thus, that might look like:
func findPictureOfDog(_ sources: [String], completion: @escaping DogPictureCompletion) {
    var firstCompletion: DogPictureCompletion? = completion
    let synchronizationQueue: DispatchQueue = .main // note, we could have used any *serial* queue for this, but main queue is convenient

    let completionOperation = BlockOperation {
        synchronizationQueue.async {
            // if firstCompletion is not nil by the time we get here, that means none of them matched
            firstCompletion?(.failure(DogPictureError.noneFound))
        }
        print("done")
    }

    for source in sources {
        let url = URL(string: source)!
        let operation = DogPictureOperation(url: url) { result in
            if case .success(_) = result {
                synchronizationQueue.async {
                    firstCompletion?(result)
                    firstCompletion = nil
                    Queues.shared.cancelAllOperations()
                }
            }
        }
        completionOperation.addDependency(operation)
        Queues.shared.processingQueue.addOperation(operation)
    }

    OperationQueue.main.addOperation(completionOperation)
}
So what might that DogPictureOperation look like? I might create an asynchronous custom Operation subclass (I just subclass a general-purpose AsynchronousOperation subclass, like the one here) that will initiate the network request and then run an inference on the resulting image upon completion. And if canceled, it would cancel the network request and/or any pending inferences (pursuant to point 3, above).
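Purely as a sketch (not the linked implementation): assuming an AsynchronousOperation base class that manages the isExecuting/isFinished KVO and exposes a finish() method, plus the Queues singleton referenced above, it might look roughly like this. DogPicture, DogPictureError, isWhatIWanted() and getPicture() are the placeholders from the question.

typealias DogPictureCompletion = (Result<DogPicture, Error>) -> Void

class Queues {
    static let shared = Queues()
    let processingQueue: OperationQueue = {
        let queue = OperationQueue()
        queue.maxConcurrentOperationCount = 6   // constrain the degree of concurrency, as discussed above
        return queue
    }()
    func cancelAllOperations() { processingQueue.cancelAllOperations() }
}

class DogPictureOperation: AsynchronousOperation {
    private let url: URL
    private let completion: DogPictureCompletion
    private var task: URLSessionTask?

    init(url: URL, completion: @escaping DogPictureCompletion) {
        self.url = url
        self.completion = completion
        super.init()
    }

    override func main() {
        let task = URLSession.shared.dataTask(with: url) { [weak self] data, _, error in
            guard let self = self else { return }
            guard let data = data, error == nil, !self.isCancelled else {
                self.completion(.failure(error ?? DogPictureError.noneFound))
                self.finish()                                   // assumed AsynchronousOperation API
                return
            }
            // Hop off the session's serial callback queue before the expensive check.
            DispatchQueue.global(qos: .userInitiated).async {
                if !self.isCancelled && data.isWhatIWanted() {
                    self.completion(.success(data.getPicture()))
                } else {
                    self.completion(.failure(DogPictureError.noneFound))
                }
                self.finish()
            }
        }
        self.task = task
        task.resume()
    }

    override func cancel() {
        super.cancel()
        task?.cancel()
    }
}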

If you care about only one result, use a completion handler and call completion(nil) if no picture was found.
var dogPicture: DogPicture?

func findPictureOfDog(_ sources: [String], completion: @escaping (DogPicture?) -> Void) {
    for source in sources {
        let task = URLSession.shared.dataTask(with: URL(string: source)!) { (data, response, error) in
            // error handling ...
            if data.isWhatIWanted() && dogPicture == nil {
                let picture = data.getPicture()
                completion(picture)
            }
        }
        task.resume()
    }
}
let sources = ["yahoo.com", "google.com", "pinterest.com"]

findPictureOfDog(sources) { [weak self] picture in
    if let picture = picture {
        self?.dogPicture = picture
        print("picture set")
    } else {
        print("No picture found")
    }
}
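One way to actually deliver that completion(nil) once every source has reported back without a match is to count the finished tasks on a private serial queue; a rough sketch, reusing the question’s placeholder types:

func findPictureOfDog(_ sources: [String], completion: @escaping (DogPicture?) -> Void) {
    let syncQueue = DispatchQueue(label: "findPictureOfDog.sync")   // serializes access to the two vars below
    var remaining = sources.count
    var finished = false

    for source in sources {
        let task = URLSession.shared.dataTask(with: URL(string: source)!) { data, _, _ in
            syncQueue.async {
                remaining -= 1
                guard !finished else { return }
                if let data = data, data.isWhatIWanted() {
                    finished = true
                    completion(data.getPicture())
                } else if remaining == 0 {
                    completion(nil)          // every source came back without a match
                }
            }
        }
        task.resume()
    }
}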

You can use DispatchGroup to run a check when all of your requests have returned:
func findPictureOfDog(_ sources: [String]) {
    let group = DispatchGroup()
    for source in sources {
        group.enter()
        let task = URLSession.shared.dataTask(with: URL(string: source)!) { (data, response, error) in
            // error handling ...
            if data.isWhatIWanted() && dogPicture == nil {
                dogPicture = data.getPicture()
            }
            group.leave()
        }
        task.resume()
    }
    group.notify(queue: DispatchQueue.main) {
        if dogPicture == nil {
            // all requests came back but none had a result.
        }
    }
}

Related

How to change a value outside, when inside a Dispatch Semaphore (Edited)

I have a question, and I will explain it as clearly as possible:
I need to pass an object to my func, create a mutable copy of it, change some property values one by one, and save the new version to the cloud. My problem is: when I change the property values inside my DispatchSemaphore blocks, the variable outside somehow does not change; there is some problem here that I could not understand. Here is the code:
func savePage(model: PageModel, savingHandler: @escaping (Bool) -> Void) {
    // some code .....
    var page = model // (1) I created a variable from the function arg
    let newQueue = DispatchQueue(label: "image upload queue")
    let semaphore = DispatchSemaphore(value: 0)

    newQueue.async {
        if let picURL1 = model.picURL1 {
            self.saveImagesToFireBaseStorage(pictureURL: picURL1) { urlString in
                let url = urlString // (2) urlString value is valid and exists
                page.picURL1 = url // (3) modified the newly created "page" object
                print(page.picURL1!) // (4) it works, the object prints the modified value
            }
        }
        semaphore.signal()
    }
    semaphore.wait()

    newQueue.async {
        if let picURL2 = model.picURL2 {
            self.saveImagesToFireBaseStorage(pictureURL: picURL2) { urlString in
                let url = urlString
                page.picURL2 = url
            }
        }
        semaphore.signal()
    }
    semaphore.wait()

    print(page.picURL1!) // (5) "page" object has the old value?

    newQueue.async {
        print(page.picURL1!) // (6) "page" object has the old value?
        do {
            try pageDocumentRef.setData(from: page)
            savingHandler(true)
        } catch let error {
            print("Error writing city to Firestore: \(error)")
        }
        semaphore.signal()
    }
    semaphore.wait()
}
I need to upload some pictures to the cloud and get their URLs, so I can create an updated version of the object and save it over the old version in the cloud. But the "page" object doesn't change somehow. Inside the semaphore block it prints the right value; outside, or inside another async semaphore block, it prints the old value. I am new to concurrency and could not find a way.
What I tried before:
Using Operation Queue and adding blocks as dependencies.
Creating the queue as DispatchQueue.global().
What am I missing here?
Edit: I added semaphore.wait() after the second async call. It actually was in my code, but I accidentally deleted it while pasting into the question; thanks to Chip Jarred for pointing it out.
UPDATE
This is how I changed the code with async/await:
func savePage(model: PageModel, savingHandler: @escaping () -> Void) {
    // Some code
    Task {
        do {
            let page = model
            let updatedPage = try await updatePageWithNewImageURLS(page)
            // some code
        } catch {
            // some code
        }
    }
    // Some code
}
private func saveImagesToFireBaseStorage(pictureURL: String?) async -> String? {
    // some code
    return downloadURL.absoluteString
}

private func updatePageWithNewImageURLS(_ page: PageModel) async throws -> PageModel {
    let picUrl1 = await saveImagesToFireBaseStorage(pictureURL: page.picURL1)
    let picUrl2 = await saveImagesToFireBaseStorage(pictureURL: page.picURL2)
    let picUrl3 = await saveImagesToFireBaseStorage(pictureURL: page.picURL3)
    let newPage = page
    return try await addNewUrlstoPage(newPage, url1: picUrl1, url2: picUrl2, url3: picUrl3)
}

private func addNewUrlstoPage(_ page: PageModel, url1: String?, url2: String?, url3: String?) async throws -> PageModel {
    var newPage = page
    if let url1 = url1 { newPage.picURL1 = url1 }
    if let url2 = url2 { newPage.picURL2 = url2 }
    if let url3 = url3 { newPage.picURL3 = url3 }
    return newPage
}
So one async function gets a new photo URL for an old URL; a second async function runs the first one multiple times to get new URLs for all three URLs of the object, then calls a third async function to create and return an updated object with the new values.
This is my first time using async - await.
Let's look at your first async call:
newQueue.async {
    if let picURL1 = model.picURL1 {
        self.saveImagesToFireBaseStorage(pictureURL: picURL1) { urlString in
            let url = urlString // (2) urlString value is valid and exists
            page.picURL1 = url // (3) modified the newly created "page" object
            print(page.picURL1!) // (4) it works, the object prints the modified value
        }
    }
    semaphore.signal()
}
I would guess that the inner closure, that is, the one passed to saveImagesToFireBaseStorage, is also called asynchronously. If I'm right about that, then saveImagesToFireBaseStorage returns almost immediately, the outer block executes the signal, but the inner closure has not run yet, so the new value isn't set yet. Then, after some latency, the inner closure is finally called, but that's after your "outer" code that depends on page.picURL1 has already run, so page.picURL1 ends up being set afterwards.
So you need to call signal in the inner closure, but you still have to handle the case where the inner closure isn't called. I'm thinking something like this:
newQueue.async {
    if let picURL1 = model.picURL1 {
        self.saveImagesToFireBaseStorage(pictureURL: picURL1) { urlString in
            let url = urlString
            page.picURL1 = url
            print(page.picURL1!)
            semaphore.signal() // <--- ADDED THIS
        }
        /*
         If saveImagesToFireBaseStorage might not call the closure,
         such as on error, you need to detect that and call signal here
         in that case as well, or perhaps in some closure that's called in
         the failure case. That depends on your design.
         */
    }
    else { semaphore.signal() } // <--- MOVED INTO `else` block
}
Your second async would need to be modified similarly.
I notice that you're not calling wait after the second async, the one that sets page.picURL2. So you have 2 calls to wait, but 3 calls to signal. That wouldn't affect whether page.picURL1 is set properly in the first async, but it does mean that the semaphore will have unbalanced waits and signals at the end of your code example, and the blocking behavior of wait after the third async may not be what you expect.
If it's an option for your project, refactoring to use async and await keywords would resolve the problem in a way that's easier to maintain, because it would remove the need for the semaphore entirely.
Also, if my premise that saveImagesToFireBaseStorage is called asynchronously is correct, you don't really need the async calls at all, unless there is more code in their closures that isn't shown.
Update
In comments, it was revealed that using the solution above caused the app to "freeze". This suggests that saveImagesToFireBaseStorage calls its completion handler on the same queue that savePage(model:savingHandler:) is called on, and it's almost certainly DispatchQueue.main. The problem is that DispatchQueue.main is a serial queue (as is newQueue), which means it won't execute any tasks until the next iteration of its run loop, but it never gets to do that, because it calls semaphore.wait(), which blocks waiting for the completion handler for saveImagesToFireBaseStorage to call semaphore.signal. By waiting, it prevents the very thing it's waiting for from ever executing.
You say in comments that using async/await solved the problem. That's probably the cleanest way, for lots of reasons, not the least of which is that you get compile time checks for a lot of potential problems.
In the meantime, I came up with this solution using DispatchSemaphore. I'll put it here, in case it helps someone.
First I moved the creation of newQueue outside of savePage. Creating a dispatch queue is a fairly heavyweight operation, so you should create whatever queues you need once, and then reuse them. I assume that it's a global variable or an instance property of whatever object owns savePage.
The second thing is that savePage doesn't block anymore, but we still want the sequential behavior, preferably without going to completion handler hell (deeply nested completion handlers).
I refactored the code that calls saveImagesToFireBaseStorage into a local function, and made it behave synchronously by using a DispatchSemaphore to block until its completion handler is called, but only in that local function. I do create the DispatchSemaphore outside of that function so that I can reuse the same instance for both invocations, but I treat it as though it were a local variable in the nested function.
I also have to use a time-out for the wait, because I don't know if I can assume that the completion handler for saveImagesToFireBaseStorage will always be called. Are there failure conditions in which it wouldn't be? The time-out value is almost certainly wrong, and should be considered a place-holder for the real value. You'd need to determine the maximum latency you want to allow based on your knowledge of your app and its working environment (servers, networks, etc...).
The local function uses a key path to allow setting differently named properties of PageModel (picURL1 vs picURL2), while still consolidating the duplicated code.
Here's the refactored savePage code:
func savePage(model: PageModel, savingHandler: @escaping (Bool) -> Void)
{
    // some code .....
    var page = model

    let saveImageDone = DispatchSemaphore(value: 0)
    let waitTimeOut = DispatchTimeInterval.microseconds(500)

    func saveModelImageToFireBaseStorage(
        from urlPath: WritableKeyPath<PageModel, String?>)
    {
        if let picURL = model[keyPath: urlPath]
        {
            saveImagesToFireBaseStorage(pictureURL: picURL)
            {
                page[keyPath: urlPath] = $0
                print("page.\(urlPath) = \(page[keyPath: urlPath]!)")
                saveImageDone.signal()
            }

            if .timedOut == saveImageDone.wait(timeout: .now() + waitTimeOut) {
                print("saveImagesToFireBaseStorage timed out!")
            }
        }
    }

    newQueue.async
    {
        saveModelImageToFireBaseStorage(from: \.picURL1)
        saveModelImageToFireBaseStorage(from: \.picURL2)

        print(page.picURL1!)

        do {
            try self.pageDocumentRef.setData(from: page)

            // Assume handler might do UI stuff, so it needs to execute
            // on main
            DispatchQueue.main.async { savingHandler(true) }
        } catch let error {
            print("Error writing city to Firestore: \(error)")
            // Should savingHandler(false) be called here?
        }
    }
}
It's important to note that savePage does not block the thread it's called on, which I believe to be DispatchQueue.main. I assume that any code that is sequentially called after a call to savePage, if any, does not depend on the results of calling savePage. Anything that does depend on it should be in its savingHandler.
And speaking of savingHandler, I have to assume that it might update the UI, and since the point where it would be called is inside a closure on newQueue, it has to be explicitly dispatched to DispatchQueue.main, so I do that.

How do I ensure that all network calls have been made before accessing my core data model?

I am making multiple API calls in succession, and when I finally push to my next view controller, the data from my Core Data model comes up completely blank. In ViewController A I make the following requests in this order:
Api.verifyOtp(email, otp).continueWith { (task) -> Any? in
    if task.succeed {
        self.apiCallOne()
        self.apiCallTwo()
        self.apiCallThree()
        self.apiCallFour()
        self.apiCallFive()
    } else {
        Hud.hide()
        task.showError()
    }
    return nil
}
Now all of these calls are made asynchronously. However, the last method, self.apiCallFive(), is the one that pushes to ViewController B. Here is the call:
Api.apiCallFive().continueOnSuccessWith { (task) -> Any? in
    Hud.hide()
    if task.succeed {
        let storyboard = UIStoryboard(name: "Main", bundle: nil)
        let viewB = storyboard.instantiateViewController(withIdentifier: "ViewControllerB")
        self.navigationController?.pushViewController(viewB, animated: true)
    }
    return nil
}
My guess is that since all of these requests happen asynchronously, there's no guarantee which call will finish first. So apiCallFive() pushes and loads ViewController B before the others are able to finish. How can I make it so the next view is not loaded or pushed until all of the tasks have completed?
Thank you!
I have faced the same issue. I fixed it by using DispatchGroup.
Code:
Define this as a property:
let APIGroup = DispatchGroup()
Execute the code below when any API call starts:
APIGroup.enter()
Execute the code below when any API call completes:
APIGroup.leave()
Notify Block:
APIGroup.notify(queue: DispatchQueue.main) {
    print("All APIs called successfully: Perform required operation")
}
There is no need to manage this with any counter or other variables. The notify block is called automatically when all the tasks have completed successfully.
What’s really important here is the enter-leave pairs. You have to be very careful and make sure that you leave the group. It would be easy to introduce a bug in the code above. Let’s say that we didn’t leave the group in that guard statement above, just before the return. If the API call failed, or the JSON was malformed, the number of group entries would not match the number of leaves. So the group completion handler would never get called. If you’re calling this method from the UI and displaying an activity indicator while your networking requests are running, you would never get a callback from the method, and you would keep on spinning 🙂
Apple documents
To solve this you need a way of getting notified when each call is finished.
The easiest way of doing this is using completion blocks on each call.
func apiCall(completion: @escaping () -> Void) {
    ....
}
After adding completion blocks to the api calls, your blocks could look like this:
let dispatchGroup = DispatchGroup()

dispatchGroup.enter()
apiCallOne {
    dispatchGroup.leave()
}

dispatchGroup.enter()
apiCallTwo {
    dispatchGroup.leave()
}

...

dispatchGroup.enter()
apiCallN {
    dispatchGroup.leave()
}

dispatchGroup.wait(timeout: Constants.timeout)
Keep in mind that the wait statement will block the thread where you call it until all the leave() statements are executed, so be careful that you don't end up with a deadlock.
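To avoid blocking altogether, you can use notify instead of wait. A rough sketch, assuming apiCallOne ... apiCallFive have been given completion parameters as described above, and reusing the push code from the question:

let dispatchGroup = DispatchGroup()

dispatchGroup.enter()
apiCallOne { dispatchGroup.leave() }

dispatchGroup.enter()
apiCallTwo { dispatchGroup.leave() }

// ... enter/leave pairs for the remaining calls ...

dispatchGroup.notify(queue: .main) {
    // Runs once after every leave() has been called; nothing is blocked in the meantime.
    Hud.hide()
    let storyboard = UIStoryboard(name: "Main", bundle: nil)
    let viewB = storyboard.instantiateViewController(withIdentifier: "ViewControllerB")
    self.navigationController?.pushViewController(viewB, animated: true)
}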

API calls block UI thread in Swift

I need to sync a web database into my Core Data store, for which I perform service API calls. I am using Alamofire with Swift 3. There are 23 API calls, producing nearly 24k rows across different Core Data entities.
My problem: these API calls block the UI for a minute, which is a long time for a user to wait.
I tried using DispatchQueue and performing the task on a background thread, but nothing worked. This is how I tried:
let dataQueue = DispatchQueue.init(label: "com.app.dataSyncQueue")
dataQueue.async {
    DataSyncController().performStateSyncAPICall()
    DataSyncController().performRegionSyncAPICall()
    DataSyncController().performStateRegionSyncAPICall()
    DataSyncController().performBuildingRegionSyncAPICall()
    PriceSyncController().performBasicPriceSyncAPICall()
    PriceSyncController().performHeightCostSyncAPICall()
    // APIs which will be used in later screens are called in the background
    self.performSelector(inBackground: #selector(self.performBackgroundTask), with: nil)
}
An API call from DataSyncController:
func performStateSyncAPICall() -> Void {
    DataSyncRequestManager.fetchStatesDataWithCompletionBlock {
        success, response, error in
        self.apiManager.didStatesApiComplete = true
    }
}
DataSyncRequestManager Code:
static func fetchStatesDataWithCompletionBlock(block: @escaping requestCompletionBlock) {
    if appDelegate.isNetworkAvailable {
        Util.setAPIStatus(key: kStateApiStatus, with: kInProgress)
        DataSyncingInterface().performStateSyncingWith(request: DataSyncRequest().createStateSyncingRequest(), withCompletionBlock: block)
    } else {
        //TODO: show network failure error
    }
}
DataSyncingInterface Code:
func performStateSyncingWith(request: Request, withCompletionBlock block: @escaping requestCompletionBlock) {
    self.interfaceBlock = block
    let apiurl = NetworkHttpClient.getBaseUrl() + request.urlPath!
    Alamofire.request(apiurl, parameters: request.getParams(), encoding: URLEncoding.default).responseJSON { response in
        guard response.result.isSuccess else {
            block(false, "error", nil)
            return
        }
        guard let responseValue = response.result.value else {
            block(false, "error", nil)
            return
        }
        block(true, responseValue, nil)
    }
}
I know many similar questions have already been posted on Stack Overflow, and mostly it is suggested to use GCD or an operation queue, but trying DispatchQueue didn't work for me.
Am I doing something wrong?
How can I not block UI and perform the api calls simultaneously?
You can do this to run on a background thread:
DispatchQueue.global(qos: .background).async {
    // Do any processing you want.
    DispatchQueue.main.async {
        // Go back to the main thread to update the UI.
    }
}
DispatchQueue manages the execution of work items. Each work item submitted to a queue is processed on a pool of threads managed by the system.
I usually use NSOperationQueue with Alamofire, but the concepts are similar. When you set up an async queue, you allow work to be performed independently of the main (UI) thread, so that your app doesn't freeze (refuse user input). The work will still take however long it takes, but your program doesn't block while waiting for it to finish.
You really have only put one item into the queue.
You are adding to the queue only once, so all those "perform" calls wait for the previous one to finish. If it is safe to run them concurrently, you need to add each of them to the queue separately. There's more than one way to do this, but the bottom line is each time you call .async {} you are adding one item to the queue.
dataQueue.async {
    DataSyncController().performStateSyncAPICall()
}
dataQueue.async {
    DataSyncController().performRegionSyncAPICall()
}

Alamofire request blocking during application lifecycle

I'm having trouble with Alamofire using Operation and OperationQueue.
I have an OperationQueue named NetworkingQueue and I push some operations (wrapping Alamofire requests) into it. Everything works fine, but at some point during the application's lifetime, all Alamofire requests stop being sent. My queue gets bigger and bigger and no request ever completes.
I do not have a reliable way to reproduce it.
Does anybody have a clue that could help me?
Here is a sample of code
The BackgroundAlamoSession
let configuration = URLSessionConfiguration.background(withIdentifier: "[...].background")
self.networkingSessionManager = Alamofire.SessionManager(configuration: configuration)
AbstractOperation.swift
import UIKit
import XCGLogger

class AbstractOperation: Operation {
    private let _LOGGER: XCGLogger = XCGLogger.default

    enum State: String {
        case Ready = "ready"
        case Executing = "executing"
        case Finished = "finished"

        var keyPath: String {
            get {
                return "is" + self.rawValue.capitalized
            }
        }
    }

    override var isAsynchronous: Bool {
        get {
            return true
        }
    }

    var state = State.Ready {
        willSet {
            willChangeValue(forKey: self.state.rawValue)
            willChangeValue(forKey: self.state.keyPath)
            willChangeValue(forKey: newValue.rawValue)
            willChangeValue(forKey: newValue.keyPath)
        }
        didSet {
            didChangeValue(forKey: oldValue.rawValue)
            didChangeValue(forKey: oldValue.keyPath)
            didChangeValue(forKey: self.state.rawValue)
            didChangeValue(forKey: self.state.keyPath)
        }
    }

    override var isExecuting: Bool {
        return state == .Executing
    }

    override var isFinished: Bool {
        return state == .Finished
    }
}
A concrete Operation implementation
import UIKit
import XCGLogger
import SwiftyJSON

class FetchObject: AbstractOperation {
    public let _LOGGER: XCGLogger = XCGLogger.default

    private let _objectId: Int
    private let _force: Bool

    public var object: ObjectModel?

    init(_ objectId: Int, force: Bool) {
        self._objectId = objectId
        self._force = force
    }

    convenience init(_ objectId: Int) {
        self.init(objectId, force: false)
    }

    override var description: String {
        get {
            return "FetchObject(\(self._objectId))"
        }
    }

    public override func start() {
        self.state = .Executing
        _LOGGER.verbose("Fetch object operation start")

        if !self._force {
            let objectInCache: ObjectModel? = Application.main.collections.availableObjectModels[self._objectId]
            if let objectInCache = objectInCache {
                _LOGGER.verbose("object with id \(self._objectId) found in cache")
                self.object = objectInCache
                self._LOGGER.verbose("Fetch object operation end : success")
                self.state = .Finished
                return
            }
        }

        if !self.isCancelled {
            let url = "[...]\(self._objectId)"
            _LOGGER.verbose("Requesting object with id \(self._objectId) on server")
            Application.main.networkingSessionManager.request(url, method: .get)
                .validate()
                .responseJSON(completionHandler: { response in
                    switch response.result {
                    case .success:
                        guard let raw: Any = response.result.value else {
                            self._LOGGER.error("Error while fetching json program : Empty response")
                            self._LOGGER.verbose("Fetch object operation end : error")
                            self.state = .Finished
                            return
                        }
                        let data: JSON = JSON(raw)
                        self._LOGGER.verbose("Received object from server \(data["bId"])")
                        self.object = ObjectModel(objectId: data["oId"].intValue, data: data)
                        Application.main.collections.availableObjectModels[self.object!.objectId] = self.object
                        self._LOGGER.verbose("Fetch object operation end : success")
                        self.state = .Finished
                    case .failure(let error):
                        self._LOGGER.error("Error while fetching json program \(error)")
                        self._LOGGER.verbose("Fetch object operation end : error")
                        self.state = .Finished
                    }
                })
        } else {
            self._LOGGER.verbose("Fetch object operation end : cancel")
            self.state = .Finished
        }
    }
}
The NetworkQueue
class MyQueue {
    public static let networkQueue: SaootiQueue = SaootiQueue(name: "NetworkQueue", concurent: true)
}
How I use it in another operation and wait for the result
let getObjectOperation: FetchObject = FetchObject(30)
MyQueue.networkQueue.addOperations([getObjectOperation], waitUntilFinished: true)
How I use it in the main operation using KVO
let operation: FetchObject = FetchObject(30)
operation.addObserver(self, forKeyPath: #keyPath(Operation.isFinished), options: [.new], context: nil)
operation.addObserver(self, forKeyPath: #keyPath(Operation.isCancelled), options: [.new], context: nil)
queue.addOperation(operation)

//[...]

override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey: Any]?, context: UnsafeMutableRawPointer?) {
    if let operation = object as? FetchObject {
        operation.removeObserver(self, forKeyPath: #keyPath(Operation.isFinished))
        operation.removeObserver(self, forKeyPath: #keyPath(Operation.isCancelled))
        if keyPath == #keyPath(Operation.isFinished) {
            // Do something
        }
    }
}
A few clarifications:
My application is a radio player and I need, while playing music in the background, to fetch the currently playing program. This is why I need a background session.
In fact, I also use the background session for all the networking I do when the app is in the foreground. Should I avoid that?
The wait I'm using is from another queue and is never used in the main queue (I know it is a threading antipattern and I take care of it).
In fact, it is used when I do two networking operations and the second one depends on the result of the first. I put a wait after the first operation to avoid KVO observing. Should I avoid that?
Additional edit:
When I say "my queue is getting bigger and bigger and no request ever completes", I mean that at some point during the application lifecycle, seemingly at random (I cannot find a way to reproduce it every time), Alamofire requests don't reach the response method.
Because of that, the Operation wrapper doesn't end and the queue keeps growing.
By the way, I'm working on converting the Alamofire requests into plain URLRequests to get some clues, and I found some problems around using the main queue. I have to sort out what is due to the fact that Alamofire uses the main queue for its response methods, and I'll see if I find a potential deadlock.
I'll keep you informed. Thanks
There are minor issues, but this operation implementation looks largely correct. Sure, you should make your state management thread-safe, and there are other stylistic improvements you could make, but I don't think this is critical to your question.
What looks worrisome is addOperations(_:waitUntilFinished:). From which queue are you waiting? If you do that from the main queue, you will deadlock (i.e. it will look like the Alamofire requests never finish). Alamofire uses the main queue for its completion handlers (unless you override the queue parameter of responseJSON), but if you're waiting on the main thread, this can never take place. (As an aside, if you can refactor so you never explicitly "wait" for operations, that not only avoids the deadlock risk, but is a better pattern in general.)
I also notice that you're using Alamofire requests wrapped in operations in conjunction with a background session. Background sessions are antithetical to operations and completion handler closure patterns. Background sessions continue after your app has been jettisoned and you have to rely solely upon the SessionDelegate closures that you set when you first configure your SessionManager when the app starts. When the app restarts, your operations and completion handler closures are long gone.
Bottom line, do you really need background session (i.e. uploads and downloads that continue after your app terminates)? If so, you may want to lose this completion handler and operation based approach. If you don't need this to continue after the app terminates, don't use background sessions. Configuring Alamofire to properly handle background sessions is a non-trivial exercise, so only do so if you absolutely need to. Remember to not conflate background sessions and the simple asynchronous processing that Alamofire (and URLSession) do automatically for you.
You asked:
My application is a radio player and I need, while playing music and the background, to fetch the currently playing program. This is why I need background Session.
You need background sessions if you want downloads to proceed while the app is not running. If your app is running in the background, though, playing music, you probably don't need background sessions. But, if the user chooses to download a particular media asset, you may well want background session so that the download proceeds when the user leaves the app, whether the app is playing music or not.
In fact I also use the background session for all the networking I do when the app is foreground. Should I avoid that ?
It's fine. It's a little slower, IIRC, but it's fine.
The problem isn't that you're using background session, but that you're doing it wrong. The operation-based wrapping of Alamofire doesn't make sense with a background session. For sessions to proceed in the background, you are constrained as to how you use URLSession, namely:
You cannot use data tasks while the app is not running; only upload and download tasks.
You cannot rely upon completion handler closures (because the entire purpose of background sessions is to keep them running when your app terminates and then fire up your app again when they're done; but if the app was terminated, your closures are all gone).
You have to use delegate based API only for background sessions, not completion handlers.
You have to implement the app delegate method to capture the system provided completion handler that you call when you're done processing background session delegate calls. You have to call that when your URLSession tells you that it's done processing all the background delegate methods.
All of this is a significant burden, IMHO. Given that the system is keeping you app alive for background music, you might contemplate using a standard URLSessionConfiguration. If you're going to use background session, you might need to refactor all of this completion handler-based code.
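For reference, a minimal sketch of the app delegate hook from point 4 above (standard UIKit API; the session delegate side is omitted):

import UIKit

class AppDelegate: UIResponder, UIApplicationDelegate {
    var backgroundSessionCompletionHandler: (() -> Void)?

    func application(_ application: UIApplication,
                     handleEventsForBackgroundURLSession identifier: String,
                     completionHandler: @escaping () -> Void) {
        // Save the system-provided handler; call it from
        // urlSessionDidFinishEvents(forBackgroundURLSession:) once all of the
        // background delegate callbacks have been processed.
        backgroundSessionCompletionHandler = completionHandler
    }
}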
The wait I'm using is from another queue and is never used in the main queue (I know it is a threading antipattern and I take care of it).
Good. There's still serious code smell from ever using "wait", but if you are 100% confident that it's not deadlocking here, you can get away with it. But it's something you really should check (e.g. put some logging statement after the "wait" and make sure you're getting past that line, if you haven't already confirmed this).
In fact it is used when I do two networking operation and the second one depends of the result of the second. I put a wait after the first operation to avoid KVO observing. Should I avoid that ?
Personally, I'd lose that KVO observing and just establish a dependency between the operations with addDependency. Also, if you get rid of that KVO observing, you can get rid of your double KVO notification process. But I don't think this KVO stuff is the root of the problem, so maybe you can defer that.
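A minimal sketch of that idea, using the FetchObject and MyQueue types from your question (the follow-up work here is hypothetical):

let fetchOperation = FetchObject(30)

let processOperation = BlockOperation {
    // The dependency guarantees fetchOperation has finished (or been cancelled)
    // before this block runs, so its `object` property is safe to read.
    guard let object = fetchOperation.object else { return }
    print("fetched \(object)")
    // ... use the object ...
}
processOperation.addDependency(fetchOperation)

MyQueue.networkQueue.addOperation(fetchOperation)
OperationQueue.main.addOperation(processOperation)   // run the follow-up on main if it touches UI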

Cancel Alamofire file download if the file already exists

How do I cancel an Alamofire request if the downloaded file already exists in the documents folder?
Here is the code for request:
Alamofire.download(.GET, fileUrls[button.tag], destination: { (temporaryURL, response) in
    if let directoryURL = NSFileManager.defaultManager().URLsForDirectory(.DocumentDirectory, inDomains: .UserDomainMask)[0] as? NSURL {
        let fileURL = directoryURL.URLByAppendingPathComponent(response.suggestedFilename!)
        self.localFilePaths[button.tag] = fileURL
        if NSFileManager.defaultManager().fileExistsAtPath(fileURL.path!) {
            NSFileManager.defaultManager().removeItemAtPath(fileURL.path!, error: nil)
        }
        return fileURL
    }
    println("temporaryURL - \(temporaryURL)")
    self.localFilePaths[button.tag] = temporaryURL
    return temporaryURL
}).progress { _, totalBytesRead, totalBytesExpectedToRead in
    println("\(totalBytesRead) - \(totalBytesExpectedToRead)")
    dispatch_async(dispatch_get_main_queue()) {
        self.progressBar.setProgress(Float(totalBytesRead) / Float(totalBytesExpectedToRead), animated: true)
        if totalBytesRead == totalBytesExpectedToRead {
            self.progressBar.hidden = true
            self.progressBar.setProgress(0, animated: false)
        }
    }
}.response { (_, _, data, error) in
    let previewQL = QLReaderViewController()
    previewQL.dataSource = self
    previewQL.currentPreviewItemIndex = button.tag
    self.navigationController?.pushViewController(previewQL, animated: true)
}
I've also tried to create a request variable (var request: Alamofire.Request?) and then cancel it (request?.cancel()) if the file exists, but it doesn't work.
Can someone help me to solve this issue?
Rather than cancelling the request, IMO you shouldn't make it in the first place. You should do the file check BEFORE you start the Alamofire request.
If you absolutely feel you need to start the request, you can always cancel immediately after starting the request.
var shouldCancel = false

let request = Alamofire.request(.GET, "some_url") { _, _ in
    shouldCancel = true
}
.progress { _, _, _ in
    // todo...
}
.response { _, _, _ in
    // todo...
}

if shouldCancel {
    request.cancel()
}
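As a rough illustration of the "check first" approach (written in current Swift; it assumes the destination file name can be derived up front from the URL string in fileUrls[button.tag] rather than from response.suggestedFilename, and the two helpers are hypothetical):

let fileName = (fileUrls[button.tag] as NSString).lastPathComponent
let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let destinationURL = documentsURL.appendingPathComponent(fileName)

if FileManager.default.fileExists(atPath: destinationURL.path) {
    // Already on disk: skip the network entirely and show the preview.
    localFilePaths[button.tag] = destinationURL
    showPreview(forItemAt: button.tag)      // hypothetical helper wrapping the QLReaderViewController code
} else {
    startDownload(forItemAt: button.tag)    // hypothetical wrapper around the Alamofire.download call above
}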
TL;DR: Canceling a request is a bit cumbersome in many cases. Even Alamofire, as far as I know, does not guarantee that a request will be cancelled immediately when you ask. However, you may use dispatch_suspend or NSOperation to overcome this.
Grand Central Dispatch (GCD)
This way utilizes functional programming.
Here we light our way with low-level programming. Apple introduced a good library, aka GCD, to do some thread-level programming.
You cannot cancel a block, unless... you suspend a queue (if it is not the main or a global queue).
There is a C function called dispatch_suspend (from Apple's GCD Reference):
void dispatch_suspend(dispatch_object_t object);
Suspends the invocation of block objects on a dispatch object.
And you can also create queues (which are dispatch_object_ts) with dispatch_queue_create.
So you can run your task on a user-created queue, and you may suspend this queue in order to prevent the CPU from doing unnecessary work.
NSOperation (also NSThread)
This way puts an object-oriented interface over the same ideas.
Apple also introduced NSOperation, which wraps the work in an object, making it easier to deal with.
NSOperation is an abstract class that associates code and data, according to Apple's documentation.
In order to use this class, you should either use one of its system-defined subclasses or create your own subclass. In your case particularly, I suppose NSBlockOperation is the one.
You may refer to this code block:
let block = NSBlockOperation { () -> Void in
    // do something here...
}

// Cancel operation
block.cancel()
Nevertheless, it also does not guarantee that the operation stops whatever it is doing. Apple also states:
This method does not force your operation code to stop. Instead, it updates the object’s internal flags to reflect the change in state. If the operation has already finished executing, this method has no effect. Canceling an operation that is currently in an operation queue, but not yet executing, makes it possible to remove the operation from the queue sooner than usual.
If you want to take advantage of flags, you should read more: Responding to the Cancel Command
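In other words, the operation's own code has to check the flag. A tiny sketch (using the modern BlockOperation spelling; the chunked work is hypothetical):

let operation = BlockOperation()
operation.addExecutionBlock { [unowned operation] in
    for chunk in 0..<100 {
        if operation.isCancelled { return }   // stop cooperatively once cancel() is called
        // ... process one chunk of the download/work here ...
        print("processed chunk \(chunk)")
    }
}

let queue = OperationQueue()
queue.addOperation(operation)

// Later, e.g. once you discover the file already exists:
operation.cancel()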
