How to pause an operation queue in Swift (iOS)

I have a couple of operations to perform on an IoT device from an iOS app, so all of my operations are in an OperationQueue that runs them serially.
I want to perform one operation at a time, and each operation needs to wait until I get a response from the IoT device.
The IoT device's response can take a while to come back, so how do I make the current operation in the operation queue wait until the response arrives?
Is there any way to pause the currently running operation until the response arrives, and then resume it so that the next operation in the queue can start?
I tried sleeping the operation, but that requires a fixed time, and there is no guarantee about when the IoT device will respond.
Any suggestions would be appreciated. Thank you in advance.

The basic idea is that you don’t pause (or wait, or sleep), but rather you define a “concurrent” operation (see discussion of concurrent operations in the documentation) that doesn’t trigger the isFinished KVO until the device responds.
A simple way to do this is to write a concurrent operation class, like the one shown in this answer. Then your IoT operation can subclass that AsynchronousOperation class, and just call finish() when the device responds.
Then your operation queue (which presumably has a maxConcurrentOperationCount of 1, or perhaps uses dependencies) will not start an operation until the prior operation has finished.
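For illustration, here is a minimal sketch of that pattern, assuming an AsynchronousOperation base class like the one linked above (one that manages the isExecuting/isFinished KVO and exposes a finish() method). IoTDevice, sendCommand, and the command strings are hypothetical stand-ins for your own device API:

// A minimal sketch, assuming an `AsynchronousOperation` base class that handles
// the `isExecuting`/`isFinished` KVO and exposes `finish()`. `IoTDevice` and
// `sendCommand` are hypothetical stand-ins for your own device API.
class IoTCommandOperation: AsynchronousOperation {
    private let device: IoTDevice
    private let command: String

    init(device: IoTDevice, command: String) {
        self.device = device
        self.command = command
        super.init()
    }

    override func main() {
        // Kick off the asynchronous request; do NOT block or sleep here.
        device.sendCommand(command) { [weak self] response in
            // Only when the device answers do we mark the operation finished,
            // which is what lets the serial queue start the next operation.
            self?.finish()
        }
    }
}

// Usage: a serial queue guarantees one in-flight command at a time.
let device = IoTDevice() // hypothetical device handle
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 1
queue.addOperation(IoTCommandOperation(device: device, command: "turnOn"))
queue.addOperation(IoTCommandOperation(device: device, command: "readStatus"))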

As Rob said, you can implement an asynchronous Operation class and derive your IoT operation from it. To me that looks like the preferred way to implement your case.
As an alternative, in cases where you need to continue only after some asynchronous event on another thread has completed, you can use NSCondition. This is a mechanism inherited from Objective-C that provides an easy way to wait for a condition to occur.
Here is an example:
let cond = NSCondition()
var available = false
var sharedString = ""

class WriterThread: Thread {
    override func main() {
        for _ in 0..<5 {
            cond.lock()
            sharedString = "😅"
            available = true
            cond.signal() // Notify and wake up the waiting thread(s)
            cond.unlock()
        }
    }
}

class PrinterThread: Thread {
    override func main() {
        for _ in 0..<5 { // Just do it 5 times
            cond.lock()
            while !available { // Protect against spurious wakeups
                cond.wait()
            }
            print(sharedString)
            sharedString = ""
            available = false
            cond.unlock()
        }
    }
}

let writerThread = WriterThread()
let printerThread = PrinterThread()
printerThread.start()
writerThread.start()

You could use a DispatchQueue and call .suspend() on it when you send the operation, and have the code that receives the response call .resume(). Then, wherever you want to wait for the response before continuing, just submit a dummy queue.sync { print("done waiting") } and it will automatically wait until .resume() has been called before printing and continuing.
import Dispatch

var queue = DispatchQueue(label: "queue")

func sendOperationToIoTDevice() {
    // send the operation
    // ...
    queue.suspend()
}

...

// whatever code gets the response:
// get response
// ...
queue.resume()

...

// main code
sendOperationToIoTDevice()
queue.sync { print("done waiting") } // will hang here until .resume() is called

Related

In Xcode 9 / Swift 4 Google APIs Client Library for Objective-C for REST: threading notification not working

In Xcode 9 / Swift 4, using the Google APIs Client Library for Objective-C for REST: why does service.executeQuery return the thread-completion notification before the thread completes?
I have been trying various approaches but I am stuck with the following code, where the notification is delivered before the thread completes. See below for the code, the actual output, and what I would expect to see (the notification arriving once the thread has completed).
What am I doing wrong?
Thanks
func myFunctionTest() {
    let workItem = DispatchWorkItem {
        self.service.executeQuery(query,
                                  delegate: self,
                                  didFinish: #selector(self.displayResultWithTicket2b(ticket:finishedWithObject:error:)))
    }
    let group = DispatchGroup()
    group.enter()
    group.notify(queue: service.callbackQueue) {
        print("************************** NOTIFY MAIN THREAD *************************************")
    }
    service.callbackQueue.async(group: group) {
        workItem.perform()
    }
    group.leave()
}

@objc func displayResultWithTicket2b(ticket: GTLRServiceTicket,
                                     finishedWithObject messagesResponse: GTLRGmail_ListMessagesResponse,
                                     error: NSError?) {
    // some code to run here
    print("************************** 02.displayResultWithTicket2b ***************************")
}
Output
************************** NOTIFY MAIN THREAD *************************************
************************** 02.displayResultWithTicket2b ***************************
What I would expect = Thread notification comes when thread has completed
************************** 02.displayResultWithTicket2b ***************************
************************** NOTIFY MAIN THREAD *************************************
The problem is that you're dealing with an asynchronous API and you're calling leave when you're done submitting the request. The leave() call has to be inside the completion handler or selector method of your executeQuery call. If you're going to stick with this selector based approach, you're going to have to save the dispatch group in some property and then have displayResultWithTicket2b call leave.
It would be much easier if you used the block/closure completion handler based rendition of the executeQuery API, instead of the selector-based API. Then you could just move the leave into the block/closure completion handler and you'd be done. If you use the block based implementation, not only does it eliminate the need to save the dispatch group in some property, but it probably eliminates the need for the group at all.
Also, the callback queue presumably isn't designed for you to add your own tasks. It's the queue that the library will use for its callbacks (the queue on which completion blocks and/or delegate methods will be run). Just call executeQuery and the library takes care of running the callbacks on that queue. And no DispatchWorkItem is needed:
service.executeQuery(query) { ticket, object, error in
    // do whatever you need here; this runs on the callback queue

    DispatchQueue.main.async {
        // when you need to update model/UI, do that on the main queue
    }
}
The only time I'd use a dispatch group would be if I was performing a series of queries and needed to know when they were all done:
let group = DispatchGroup()

for query in queries {
    group.enter()
    service.executeQuery(query) { ticket, object, error in
        defer { group.leave() }
        // do whatever you need here; this runs on the callback queue
    }
}

group.notify(queue: .main) {
    // do something when done; this runs on the main queue
}

Alamofire request blocking during application lifecycle

I'm having trouble with Alamofire using Operation and OperationQueue.
I have an OperationQueue named NetworkingQueue and I push some operations (wrapping Alamofire requests) into it. Everything works fine, but at some point during the application's lifetime all Alamofire requests stop being sent. My queue gets bigger and bigger and no request runs to completion.
I don't have a reliable way to reproduce it.
Does anybody have a clue that could help me?
Here is a sample of code
The BackgroundAlamoSession
let configuration = URLSessionConfiguration.background(withIdentifier: "[...].background")
self.networkingSessionManager = Alamofire.SessionManager(configuration: configuration)
AbstractOperation.swift
import UIKit
import XCGLogger
class AbstractOperation: Operation {
    private let _LOGGER: XCGLogger = XCGLogger.default

    enum State: String {
        case Ready = "ready"
        case Executing = "executing"
        case Finished = "finished"

        var keyPath: String {
            return "is" + self.rawValue.capitalized
        }
    }

    override var isAsynchronous: Bool {
        return true
    }

    var state = State.Ready {
        willSet {
            willChangeValue(forKey: self.state.rawValue)
            willChangeValue(forKey: self.state.keyPath)
            willChangeValue(forKey: newValue.rawValue)
            willChangeValue(forKey: newValue.keyPath)
        }
        didSet {
            didChangeValue(forKey: oldValue.rawValue)
            didChangeValue(forKey: oldValue.keyPath)
            didChangeValue(forKey: self.state.rawValue)
            didChangeValue(forKey: self.state.keyPath)
        }
    }

    override var isExecuting: Bool {
        return state == .Executing
    }

    override var isFinished: Bool {
        return state == .Finished
    }
}
A concrete Operation implementation
import UIKit
import XCGLogger
import SwiftyJSON
class FetchObject: AbstractOperation {
    public let _LOGGER: XCGLogger = XCGLogger.default

    private let _objectId: Int
    private let _force: Bool

    public var object: ObjectModel?

    init(_ objectId: Int, force: Bool) {
        self._objectId = objectId
        self._force = force
    }

    convenience init(_ objectId: Int) {
        self.init(objectId, force: false)
    }

    override var desc: String {
        return "FetchObject(\(self._objectId))"
    }

    public override func start() {
        self.state = .Executing
        _LOGGER.verbose("Fetch object operation start")

        if !self._force {
            let objectInCache: ObjectModel? = Application.main.collections.availableObjectModels[self._objectId]
            if let objectInCache = objectInCache {
                _LOGGER.verbose("Object with id \(self._objectId) found in cache")
                self.object = objectInCache
                self._LOGGER.verbose("Fetch object operation end : success")
                self.state = .Finished
                return
            }
        }

        if !self.isCancelled {
            let url = "[...]\(self._objectId)"
            _LOGGER.verbose("Requesting object with id \(self._objectId) on server")

            Application.main.networkingSessionManager.request(url, method: .get)
                .validate()
                .responseJSON(completionHandler: { response in
                    switch response.result {
                    case .success:
                        guard let raw: Any = response.result.value else {
                            self._LOGGER.error("Error while fetching json program : Empty response")
                            self._LOGGER.verbose("Fetch object operation end : error")
                            self.state = .Finished
                            return
                        }
                        let data: JSON = JSON(raw)
                        self._LOGGER.verbose("Received object from server \(data["bId"])")
                        self.object = ObjectModel(objectId: data["oId"].intValue, data: data)
                        Application.main.collections.availableObjectModels[self.object!.objectId] = self.object
                        self._LOGGER.verbose("Fetch object operation end : success")
                        self.state = .Finished
                    case .failure(let error):
                        self._LOGGER.error("Error while fetching json program \(error)")
                        self._LOGGER.verbose("Fetch object operation end : error")
                        self.state = .Finished
                    }
                })
        } else {
            self._LOGGER.verbose("Fetch object operation end : cancel")
            self.state = .Finished
        }
    }
}
The NetworkQueue
class MyQueue {
    public static let networkQueue: SaootiQueue = SaootiQueue(name: "NetworkQueue", concurent: true)
}
How I use it in another operation and wait for the result
let getObjectOperation:FetchObject = FetchObject(30)
SaootiQueue.networkQueue.addOperations([getObjectOperation], waitUntilFinished: true)
How I use it in the main operation, using KVO
let operation: FetchObject = FetchObject(30)
operation.addObserver(self, forKeyPath: #keyPath(Operation.isFinished), options: [.new], context: nil)
operation.addObserver(self, forKeyPath: #keyPath(Operation.isCancelled), options: [.new], context: nil)
queue.addOperation(operation)

//[...]

override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
    if let operation = object as? FetchObject {
        operation.removeObserver(self, forKeyPath: #keyPath(Operation.isFinished))
        operation.removeObserver(self, forKeyPath: #keyPath(Operation.isCancelled))
        if keyPath == #keyPath(Operation.isFinished) {
            // Do something
        }
    }
}
A few clarifications:
My application is a radio player and I need, while playing music in the background, to fetch the currently playing program. This is why I need a background session.
In fact, I also use the background session for all the networking I do while the app is in the foreground. Should I avoid that?
The wait I'm using is from another queue and is never used on the main queue (I know it is a threading antipattern and I take care of it).
In fact, it is used when I do two networking operations and the second one depends on the result of the first. I put a wait after the first operation to avoid KVO observing. Should I avoid that?
Additional edit:
When I say "My queue is getting bigger and bigger and no request go to the end", I mean that at some moment during the application lifecycle, random for now (I cannot find a way to reproduce it every time), Alamofire requests don't reach the response method.
Because of that the Operation wrapper doesn't end and the queue keeps growing.
By the way, I'm working on converting the Alamofire requests into plain URLRequests to get more clues, and I found some problems related to using the main queue. I have to sort out what is due to the fact that Alamofire uses the main queue for its response methods, and I'll see if I find a potential deadlock.
I'll keep you informed. Thanks
There are minor issues, but this operation implementation looks largely correct. Sure, you should make your state management thread-safe, and there are other stylistic improvements you could make, but I don't think this is critical to your question.
What looks worrisome is addOperations(_:waitUntilFinished:). From which queue are you waiting? If you do that from the main queue, you will deadlock (i.e. it will look like the Alamofire requests never finish). Alamofire uses the main queue for its completion handlers (unless you override the queue parameter of responseJSON), but if you're waiting on the main thread, this can never take place. (As an aside, if you can refactor so you never explicitly "wait" for operations, that not only avoids the deadlock risk, but is a better pattern in general.)
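If you do override that queue, a minimal sketch might look like this (assuming Alamofire 4, where responseJSON accepts an optional queue: parameter; responseQueue is a name introduced here for illustration):

// A minimal sketch, assuming Alamofire 4's `responseJSON(queue:completionHandler:)`.
// The handler then runs on `responseQueue` instead of the main queue, so waiting
// elsewhere cannot deadlock against main-queue completion handlers.
let responseQueue = DispatchQueue(label: "com.example.network.response")

Application.main.networkingSessionManager.request(url, method: .get)
    .validate()
    .responseJSON(queue: responseQueue) { response in
        // handle the response here (not on the main queue)
    }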
I also notice that you're using Alamofire requests wrapped in operations in conjunction with a background session. Background sessions are antithetical to operations and completion handler closure patterns. Background sessions continue after your app has been jettisoned and you have to rely solely upon the SessionDelegate closures that you set when you first configure your SessionManager when the app starts. When the app restarts, your operations and completion handler closures are long gone.
Bottom line, do you really need background session (i.e. uploads and downloads that continue after your app terminates)? If so, you may want to lose this completion handler and operation based approach. If you don't need this to continue after the app terminates, don't use background sessions. Configuring Alamofire to properly handle background sessions is a non-trivial exercise, so only do so if you absolutely need to. Remember to not conflate background sessions and the simple asynchronous processing that Alamofire (and URLSession) do automatically for you.
You asked:
My application is a radio player and I need, while playing music in the background, to fetch the currently playing program. This is why I need a background session.
You need background sessions if you want downloads to proceed while the app is not running. If your app is running in the background, though, playing music, you probably don't need background sessions. But, if the user chooses to download a particular media asset, you may well want a background session so that the download proceeds when the user leaves the app, whether the app is playing music or not.
In fact, I also use the background session for all the networking I do while the app is in the foreground. Should I avoid that?
It's fine. It's a little slower, IIRC, but it's fine.
The problem isn't that you're using a background session, but that you're doing it wrong. The operation-based wrapping of Alamofire doesn't make sense with a background session. For sessions to proceed in the background, you are constrained in how you use URLSession, namely:
You cannot use data tasks while the app is not running; only upload and download tasks.
You cannot rely upon completion handler closures (because the entire purpose of background sessions is to keep them running when your app terminates and then fire up your app again when they're done; but if the app was terminated, your closures are all gone).
You have to use delegate based API only for background sessions, not completion handlers.
You have to implement the app delegate method to capture the system-provided completion handler, and call it when your URLSession tells you that it's done processing all the background delegate methods.
All of this is a significant burden, IMHO. Given that the system is keeping your app alive for background music, you might contemplate using a standard URLSessionConfiguration. If you're going to use a background session, you might need to refactor all of this completion handler-based code.
The wait I'm using is from another queue and is never used on the main queue (I know it is a threading antipattern and I take care of it).
Good. There's still serious code smell from ever using "wait", but if you are 100% confident that it's not deadlocking here, you can get away with it. But it's something you really should check (e.g. put some logging statement after the "wait" and make sure you're getting past that line, if you haven't already confirmed this).
In fact, it is used when I do two networking operations and the second one depends on the result of the first. I put a wait after the first operation to avoid KVO observing. Should I avoid that?
Personally, I'd lose that KVO observing and just establish dependencies between the operations (via addDependency(_:)). Also, if you get rid of that KVO observing, you can get rid of your double KVO notification process. But I don't think this KVO stuff is the root of the problem, so you can probably defer that.
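As a rough sketch of that dependency-based approach (FetchObject and the queue follow the question's code; ProcessObject is a hypothetical follow-up operation, not from the question):

// A rough sketch: chain two operations with a dependency instead of KVO or waiting.
// `ProcessObject` is a hypothetical operation that consumes the fetch result.
let fetchOperation = FetchObject(30)
let processOperation = ProcessObject()

// The second operation will not start until the first has finished.
processOperation.addDependency(fetchOperation)

// `MyQueue.networkQueue` per the declaration in the question; adjust to however you expose it.
MyQueue.networkQueue.addOperations([fetchOperation, processOperation], waitUntilFinished: false)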

Synchronization of multiple tasks on single thread

How can I prevent a block of code from being repeatedly entered from the same thread?
Suppose I have the following code:
func sendAnalytics() {
    // some synchronous work
    asyncTask() { _ in
        completion()
    }
}
I want to prevent any thread from entering "// some synchronous work" before completion has been called.
objc_sync_enter(self)
objc_sync_exit(self)
seem only to prevent this code from being accessed from multiple threads, and don't save me from re-entering it from a single thread. Is there a way to do this correctly, without rolling a custom solution?
By repeatedly accessing, I mean calling sendAnalytics from one thread multiple times. Suppose I have a for loop like this:
for i in 0...10 {
    sendAnalytics()
}
Each subsequent call won't wait for the completion inside sendAnalytics to be called (obviously). Is there a way to make the next call wait until completion fires? Or is the whole way of thinking wrong, and should I solve this problem at a higher level, in the for loop's body?
You can use a DispatchSemaphore to ensure that one call completes before the next can start
let semaphore = DispatchSemaphore(value: 1)

func sendAnalytics() {
    self.semaphore.wait()
    // some synchronous work
    asyncTask() { _ in
        completion()
        self.semaphore.signal()
    }
}
The second call to sendAnalytics will block until the first asyncTask is complete. You should be careful not to block the main queue as that will cause your app to become non-responsive. It is probably safer to dispatch the sendAnalytics call onto its own serial dispatch queue to eliminate this risk:
let semaphore = DispatchSemaphore(value: 1)
let analyticsQueue = DispatchQueue(label: "analyticsQueue")

func sendAnalytics() {
    analyticsQueue.async {
        self.semaphore.wait()
        // some synchronous work
        asyncTask() { _ in
            completion()
            self.semaphore.signal()
        }
    }
}

Swift 3 multithreading using which queue?

I'm going through Stanford CS193p, looking at a Twitter client.
When a network call is made, I assumed it would always be called back on the main queue unless explicitly invoked on another queue. However, without dispatching back onto the main queue (as below), the app does not work as expected, meaning we must not be on the main queue. How?
When tweets are fetched, the following closure is used, and updating the UI means that the work needs to be done on the main thread (DispatchQueue.main.async):
request.fetchTweets { [weak self] (newTweets) in
    DispatchQueue.main.async {
        if request == self?.lastTwitterRequest {
            self?.tweets.insert(newTweets, at: 0)
            self?.tableView.insertSections([0], with: .fade)
        }
    }
}
This calls a convenience function that is commented as "handler is not necessarily invoked on the main queue". I can't find anywhere that declares which queue it is invoked on, so I assume it is on the main queue?
// convenience "fetch" for when self is a request that returns Tweet(s)
// handler is not necessarily invoked on the main queue
open func fetchTweets(_ handler: @escaping ([Tweet]) -> Void) {
    fetch { results in
        var tweets = [Tweet]()
        var tweetArray: NSArray?
        if let dictionary = results as? NSDictionary {
            if let tweets = dictionary[TwitterKey.Tweets] as? NSArray {
                tweetArray = tweets
            } else if let tweet = Tweet(data: dictionary) {
                tweets = [tweet]
            }
        } else if let array = results as? NSArray {
            tweetArray = array
        }
        if tweetArray != nil {
            for tweetData in tweetArray! {
                if let tweet = Tweet(data: tweetData as? NSDictionary) {
                    tweets.append(tweet)
                }
            }
        }
        handler(tweets)
    }
}
I did not write the Twitter framework, and it appears to have been authored by the Stanford instructor.
You ask:
This calls a convenience function that is commented as "handler is not necessarily invoked on the main queue". I can't find anywhere that declares which queue it is invoked on, so I assume it is on the main queue?
No, you cannot assume it is on the main queue. In fact, it sounds like it's explicitly warning you that it isn't. The only time you can be assured it's on the main queue, is if it explicitly says so.
For example, if the underlying framework is using URLSession, it does not, by default, use the main queue for its completion handlers. The init(configuration:delegate:delegateQueue:) documentation describes the queue parameter as follows:
An operation queue for scheduling the delegate calls and completion handlers. The queue should be a serial queue, in order to ensure the correct ordering of callbacks. If nil, the session creates a serial operation queue for performing all delegate method calls and completion handler calls.
And for a given framework, it may be completely unrelated to URLSession queue behavior. It might also be using its own queues for completion handlers.
Bottom line, if the framework doesn't explicitly assure you that the closure always runs on the main queue, you should never assume it does. So, yes, in the absence of any assurances to this effect, you'd want to dispatch any UI stuff to the main queue and do the appropriate synchronization for any model objects.
If you have code that must run on a particular queue and you want to make sure that's the case, you can add a dispatchPrecondition to test it. The behavior of this check changes between debug and release builds, but it's a quick way of testing whether you're on the queue you think you're on:
dispatchPrecondition(condition: .onQueue(.main))

Is it normal that CPU usage exceeds 100% using dispatch async in Xcode 7

I'm a beginner in Swift 2, and I'm trying to make my program block, showing only a progress spinner, until some operation finishes. I put the following code snippet in a button with the "touch up inside" action. My problem is that while debugging, Xcode 7 shows CPU usage jumping to 190% once I tap my button, and it stays high until the flag changes its value. Is it normal for CPU usage to jump like that? Also, is it good practice to use the following snippet, or should I use sleep or some other mechanism inside my infinite loop?
let queue2 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(self.queue2) { () -> Void in
    while(flag == true)
    {
        // wait until flag is set to false from previous func
    }
    self.dispatch_main({
        // continue after the flag became false
    })
}
This is a very economical completion handler
func test(completion: () -> ())
{
    // do hard work
    completion()
}

let queue2 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(queue2) {
    test() {
        print("completed")
    }
}
or with additional dispatch to the main queue to update the UI
let queue2 = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(queue2) {
    test() {
        print("completed")
        dispatch_async(dispatch_get_main_queue()) {
            // update UI
        }
    }
}
This is a totally wrong approach, as you are using a while loop for waiting. You should use a completion handler to achieve this kind of thing.
Completion handlers are callbacks that allow a client to perform some action when a framework method or function completes its task. Often the client uses a completion handler to free state or update the user interface. Several framework methods let you implement completion handlers as blocks (instead of, say, delegation methods or notification handlers).
Refer Apple documentation for more details.
I suppose you have some sort of class which manages these operations that need to finish.
When your operations finish, you can communicate that via a completion handler or delegation. In the meantime you can disable user interaction in your UI until those operations end.
If you provide more information about your background operations I can add some code snippets.
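For illustration, a minimal sketch of that idea in the same Swift 2 style as the snippets above (the spinner outlet, doHardWork function, and button action are hypothetical names, not from the question):

// A minimal sketch (Swift 2 style, matching the snippets above).
// `spinner`, `doHardWork`, and `startTapped` are hypothetical names.
@IBAction func startTapped(sender: UIButton) {
    view.userInteractionEnabled = false   // block the UI instead of busy-waiting
    spinner.startAnimating()

    let queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
    dispatch_async(queue) {
        doHardWork {
            // completion handler: hop back to the main queue to update the UI
            dispatch_async(dispatch_get_main_queue()) {
                self.spinner.stopAnimating()
                self.view.userInteractionEnabled = true
            }
        }
    }
}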
