Swift - Start/Stop Synchronous OperationQueue - iOS

I have some operations that need to run synchronously. I tried to follow this link but it's not clear enough for my situation.
op2 doesn't start until op1 is finished, and op3 doesn't start until op2 is finished, but during that time I need to be able to stop any of the operations and restart all over again. For example, if op2 is running, I know that it cannot be stopped, but for whatever reason I need to be able to prevent op3 from executing because op1 has restarted. How can I do this?
This is a very simple example; the actual code is more intricate.
var queue1 = OperationQueue()
var queue2 = OperationQueue()
var queue3 = OperationQueue()
var operation1: BlockOperation?
var operation2: BlockOperation?
var operation3: BlockOperation?
// a DispatchGroup has finished running now it's time to start the operations ...
dispatchGroup.notify(queue: .global(qos: .background)) { [weak self] in
DispatchQueue.main.async { [weak self] in
self?.runFirstFunc()
}
}
func runFirstFunc() {
var count = 0
for num in arr {
count += num
}
// now that the loop is finished start the second func but there is a possibility something may happen in the first that should prevent the second func from running
runSecondFunc(count: count)
}
func runSecondFunc(count: Int) {
do {
try ...
// if the do-try is successful do something with count then start thirdFunc but there is a possibility something may happen in the second func that should prevent the third func from running
runThirdFunc()
} catch {
return
}
}
func runThirdFunc() {
// this is the final operation, once it hits here I know it can't be stopped even if I have to restart op1 again but that is fine
}

You said:
op2 doesn't start until op1 is finished and op3 doesn't start until op2 is finished ...
If using OperationQueue you can accomplish that by creating the three operations, and defining op1 to be a dependency of op2 and defining op2 as a dependency of op3.
... but during that time I need to be able to stop any of the operations and restart all over again.
If using OperationQueue, if you want to stop all operations that have been added to the queue, you call cancelAllOperations.
For example if op2 is running, I know that it cannot be stopped, ...
Well, it depends upon what op2 is doing. If it's spinning in a loop doing calculations, then, yes, it can be canceled mid-operation. You just check isCancelled, and if it is set, stop the operation in question. Or if it is a network request (or something else that is cancelable), you can override the cancel method and cancel the task, too. It depends upon what the operation is doing.
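For illustration, a rough sketch of those two cancellation styles (ComputeOperation, NetworkOperation, and the placeholder URL are made up for this example):
import Foundation

// Spinning calculation: poll isCancelled and bail out.
class ComputeOperation: Operation {
    override func main() {
        for i in 0..<1_000_000 {
            if isCancelled { return }   // stop mid-loop as soon as cancellation is noticed
            _ = i * i                   // placeholder work
        }
    }
}

// Cancelable network request: also override cancel() to stop the underlying task.
// (In practice this would be an asynchronous Operation subclass; this only shows the cancel idea.)
class NetworkOperation: Operation {
    private var task: URLSessionTask?

    override func main() {
        guard !isCancelled, let url = URL(string: "https://example.com") else { return }
        task = URLSession.shared.dataTask(with: url) { _, _, _ in
            // handle the response ...
        }
        task?.resume()
    }

    override func cancel() {
        super.cancel()
        task?.cancel()                  // cancel the in-flight request, too
    }
}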
... but for whatever reason I need to be able to prevent op3 from executing because op1 has restarted.
Sure, having canceled all the operations with cancelAllOperations, you can then re-add three new operations (with their associated dependencies) to the queue.
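A minimal sketch of that restart flow, assuming a makeOperations() helper that rebuilds the three operations each time (the names and the work placeholders are illustrative):
import Foundation

let queue = OperationQueue()

func makeOperations() -> [Operation] {
    let op1 = BlockOperation()
    op1.addExecutionBlock { [weak op1] in
        guard op1?.isCancelled == false else { return }
        // op1's work ...
    }
    let op2 = BlockOperation()
    op2.addExecutionBlock { [weak op2] in
        guard op2?.isCancelled == false else { return }
        // op2's work ...
    }
    let op3 = BlockOperation()
    op3.addExecutionBlock { [weak op3] in
        guard op3?.isCancelled == false else { return }
        // op3's work ...
    }
    op2.addDependency(op1)   // op2 won't start until op1 finishes
    op3.addDependency(op2)   // op3 won't start until op2 finishes
    return [op1, op2, op3]
}

func restartEverything() {
    queue.cancelAllOperations()                                     // flag running/pending work as cancelled
    queue.addOperations(makeOperations(), waitUntilFinished: false) // re-add fresh operations
}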

Here's a not-tested implementation that allows cancellation while any task is doing its subtasks (repeatedly).
In case the second task fails/throws, it automatically restarts from the first task.
In case the user manually stops/starts, the last in-flight task quits its execution (as soon as it can).
Note: You must take care of the [weak self] part according to your own implementation.
import Foundation
class TestWorker {
let workerQueue = DispatchQueue.global(qos: .utility)
var currentWorkItem: DispatchWorkItem?
func start() {
self.performTask { self.performTask1() }
}
func stop() {
currentWorkItem?.cancel()
}
func performTask(block: @escaping () -> Void) {
let workItem = DispatchWorkItem(block: block)
self.currentWorkItem = workItem
workerQueue.async(execute: workItem)
}
func performTask1() {
guard let workItem = self.currentWorkItem else { return }
func subtask(index: Int) {}
for i in 0..<100 {
if workItem.isCancelled { return }
subtask(index: i)
}
self.performTask { self.performTask2() }
}
func performTask2() {
guard let workItem = self.currentWorkItem else { return }
func subtask(index: Int) throws {}
for i in 0..<100 {
if workItem.isCancelled { return }
do { try subtask(index: i) }
catch {
self.start()
return
}
}
self.performTask { self.performTask3() }
}
func performTask3() {
guard let workItem = self.currentWorkItem else { return }
func subtask(index: Int) {}
for i in 0..<100 {
if workItem.isCancelled { return }
subtask(index: i)
}
/// Done
}
}

Maybe, this is a good reason to look into Swift Combine:
Define your tasks as Publishers.
Use flatMap to chain them, optionally pass output from previous to the next.
Use switchToLatest to restart the whole thing and cancel the previous one when it is still running - if any.
Use cancel on the subscriber to cancel the whole thing.
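A hedged sketch of that shape (the task1/task2 publishers and the restart subject are stand-ins for your real work):
import Combine
import Foundation

final class Pipeline {
    private let restart = PassthroughSubject<Void, Never>()
    private var cancellable: AnyCancellable?

    // Hypothetical stand-ins for op1 and op2; replace with your real publishers.
    private func task1() -> AnyPublisher<Int, Never> {
        Deferred { Just(42) }
            .delay(for: .seconds(1), scheduler: DispatchQueue.global())
            .eraseToAnyPublisher()
    }

    private func task2(_ input: Int) -> AnyPublisher<String, Never> {
        Deferred { Just("result \(input)") }
            .delay(for: .seconds(1), scheduler: DispatchQueue.global())
            .eraseToAnyPublisher()
    }

    func start() {
        cancellable = restart
            .map { [weak self] _ -> AnyPublisher<String, Never> in
                guard let self = self else { return Empty().eraseToAnyPublisher() }
                return self.task1()
                    .flatMap { self.task2($0) }   // task2 starts only after task1 emits
                    .eraseToAnyPublisher()
            }
            .switchToLatest()                     // a restart cancels any in-flight chain
            .sink { print("pipeline finished:", $0) }
        restart.send(())                          // kick off the first run
    }

    func restartAll() { restart.send(()) }        // cancel the current run and start over
    func stopAll() { cancellable?.cancel() }      // cancel everything
}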


Subclassing OperationQueue adding sleep period

import Foundation
class MyOperationQueue {
static let shared = MyOperationQueue()
private var queue: OperationQueue
init() {
self.queue = OperationQueue()
queue.name = "com.myqueue.name"
queue.maxConcurrentOperationCount = 1
queue.qualityOfService = .background
}
func requestDataOperation() {
queue.addOperation {
print("START NETWORK \(Date())")
NetworkService.shared.getData()
print("END NETWORK \(Date())")
}
}
func scheduleSleep() {
queue.cancelAllOperations()
queue.addOperation {
print("SLEEP START \(Date())")
Thread.sleep(forTimeInterval: 5)
print("SLEEP END \(Date())")
}
}
func cancelAll() {
queue.cancelAllOperations()
}
}
I put the requestDataOperation function inside a timer with a 10-second interval, and I have a button to call scheduleSleep manually. I expected the request to be debounced by 5 more seconds each time I tap the button.
But I am getting something like this:
START NETWORK
END NETWORK
SLEEP START 2021-03-11 11:13:40 +0000
SLEEP END 2021-03-11 11:13:45 +0000
SLEEP START 2021-03-11 11:13:45 +0000
SLEEP END 2021-03-11 11:13:50 +0000
START NETWORK
END NETWORK
How do I add 5 more seconds from my last tap and combine it into one operation rather than splitting it into two? I call queue.cancelAllOperations and start a new sleep operation, but it doesn't seem to work.
Expect result:
START NETWORK
END NETWORK
SLEEP START 2021-03-11 11:13:40 +0000
// <- the second tap, 2 seconds in
SLEEP END 2021-03-11 11:13:47 +0000 // 2+5
START NETWORK
END NETWORK
If you want some operation to be delayed for a certain amount of time, I would not create a “queue” class, but rather I would just define an Operation that simply will not be isReady until that time has passed (e.g., five seconds later). That not only eliminates the need for two separate “sleep operations”, but eliminates them altogether.
E.g.,
class DelayedOperation: Operation {
@Atomic private var enoughTimePassed = false
private var timer: DispatchSourceTimer?
private var block: (() -> Void)?
override var isReady: Bool { enoughTimePassed && super.isReady } // this operation won't run until (a) enough time has passed; and (b) any dependencies or the like are satisfied
init(timeInterval: TimeInterval = 5, block: @escaping () -> Void) {
self.block = block
super.init()
resetTimer(for: timeInterval)
}
override func main() {
block?()
block = nil
}
func resetTimer(for timeInterval: TimeInterval = 5) {
timer = DispatchSource.makeTimerSource() // create GCD timer (eliminating reference to any prior timer will cancel that one)
timer?.setEventHandler { [weak self] in
guard let self = self else { return }
self.willChangeValue(forKey: #keyPath(isReady)) // make sure to do necessary `isReady` KVO notification
self.enoughTimePassed = true
self.didChangeValue(forKey: #keyPath(isReady))
}
timer?.schedule(deadline: .now() + timeInterval)
timer?.resume()
}
}
I am synchronizing my interaction with enoughTimePassed with the following property wrapper, but you can use whatever synchronization mechanism you want:
@propertyWrapper
struct Atomic<Value> {
private var value: Value
private var lock = NSLock()
init(wrappedValue: Value) {
value = wrappedValue
}
var wrappedValue: Value {
get { synchronized { value } }
set { synchronized { value = newValue } }
}
private func synchronized<T>(block: () throws -> T) rethrows -> T {
lock.lock()
defer { lock.unlock() }
return try block()
}
}
Just make sure that isReady is thread-safe.
Anyway, having defined that DelayedOperation, then you can do something like
logger.debug("creating operation")
let operation = DelayedOperation {
logger.debug("some task")
}
queue.addOperation(operation)
And it will delay running that task (in this case, just logging the “some task” message) for five seconds. If you want to reset the timer, just call that method on the operation subclass:
operation.resetTimer()
For example, here I created the task, added it to the queue, reset it three times at two second intervals, and the block actually runs five seconds after the last reset:
2021-09-30 01:13:12.727038-0700 MyApp[7882:228747] [ViewController] creating operation
2021-09-30 01:13:14.728953-0700 MyApp[7882:228747] [ViewController] delaying operation
2021-09-30 01:13:16.728942-0700 MyApp[7882:228747] [ViewController] delaying operation
2021-09-30 01:13:18.729079-0700 MyApp[7882:228747] [ViewController] delaying operation
2021-09-30 01:13:23.731010-0700 MyApp[7882:228829] [ViewController] some task
Now, if you're using operations for network requests, then you have presumably already implemented your own asynchronous Operation subclass that does the necessary KVO for isFinished, isExecuting, etc., so you may choose to marry the above isReady logic with that existing Operation subclass.
But the idea is that one can completely lose the "sleep" operation with an asynchronous pattern. If you did want a dedicated sleep operation, you could still use the above pattern (but make it an asynchronous operation rather than blocking a thread with sleep).
All of this having been said, if I personally wanted to debounce a network request, I would not integrate this into the operation or operation queue. I would just do that debouncing at the time that I started the request:
weak var timer: Timer?
func debouncedRequest(in timeInterval: TimeInterval = 5) {
timer?.invalidate()
timer = .scheduledTimer(withTimeInterval: timeInterval, repeats: false) { _ in
// initiate request here
}
}

Using Operations to manage imbalances between function calls

I am writing a VideoPlayer() class that has a start() and a stop() function to initiate playback for a given video. Each instance of VideoPlayer() manages a single video.
The start() and stop() functions are asynchronous and I get notified via a delegate of success/failure. This delegation is done by a third party SDK.
class VideoPlayer {
func start() {
// Async call
}
func stop() {
// Async call
}
// Callback from third party library with status
func videoDidStart(view: UIView) {
// Notify clients
}
func videoDidStop(stopped: Bool) {
// Notify clients
}
}
I have clients calling the start() and stop() functions, but sometimes it so happens that they call start/stop in quick succession. This leads to unexpected behavior.
For example, if the clients call start() before a previous call to stop() has finished, the video won't play.
Ideally, the clients would wait till I send success/failure notification that I receive via the delegates.
But of course, that's not happening, and I am tasked with making the VideoPlayer() class manage the imbalances between start() and stop() and queue all the calls so everything executes in order.
I would like the VideoPlayer() class to ensure that every time there is an imbalance between start() and stop(), every start() is matched with a stop() and the rogue calls are put in a queue and execute them after the imbalance has been sorted out.
How do I manage such a thing? I believe I need to use Operations with a dependency between start() and stop(). Should I make start()/stop() a single operation and queue the rogue calls until the operation finishes?
Are there other approaches I could use? I have looked dispatch_groups and dispatch_barriers but I am not sure if they fit my use case.
Any help would be greatly appreciated. Thank you
I don't see why an operation is needed. As a crude example of the sort of thing you are describing, I made a small dispatcher architecture:
struct Start {
let callback : () -> ()
var started = false
}
class Dispatcher {
var q = [Start]()
func start(_ f: @escaping () -> ()) {
q.append(Start(callback:f))
self.startFirstOne()
}
func stop(_ f: @escaping () -> ()) {
// assume this can only refer to the first one in the queue
DispatchQueue.main.asyncAfter(deadline: .now()+DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
self.q.removeFirst()
f()
self.startFirstOne()
}
}
private func startFirstOne() {
guard !q.isEmpty else { return }
guard !q[0].started else { return }
print("starting")
self.q[0].started = true
DispatchQueue.main.asyncAfter(deadline: .now()+DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
self.q.first!.callback()
}
}
}
The start calls are queued up, and we don't actually start one until we have received sufficient stop calls to bring it to the front of the queue.
I personally don't like this at all, because we are making three very strong assumptions about contract-keeping:
we assume that everyone who calls start will also call stop,
and we are assuming that someone who calls start will not call stop until after being called back from the start call,
and we are assuming that no one who did not call start will ever call stop.
Nevertheless, if everyone does keep to the contract, then everyone's stop will be called back corresponding to that person's start. Here's a rough test bed, using random delays throughout to simulate asynchronicity:
let d = Dispatcher()
DispatchQueue.main.asyncAfter(deadline: .now()+DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
d.start {
print("hearts start")
DispatchQueue.main.asyncAfter(deadline: .now()+DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
d.stop { print("hearts stop") }
}
}
}
DispatchQueue.main.asyncAfter(deadline: .now()+DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
d.start {
print("diamonds start")
DispatchQueue.main.asyncAfter(deadline: .now()+DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
d.stop { print("diamonds stop") }
}
}
}
DispatchQueue.main.asyncAfter(deadline: .now()+DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
d.start {
print("clubs start")
DispatchQueue.main.asyncAfter(deadline: .now()+DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
d.stop { print("clubs stop") }
}
}
}
DispatchQueue.main.asyncAfter(deadline: .now()+DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
d.start {
print("spades start")
DispatchQueue.main.asyncAfter(deadline: .now()+DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
d.stop { print("spades stop") }
}
}
}
Try it, and you'll see that, whatever order and with whatever timing these things run, the start-stop pairs are all maintained by the dispatcher.
As I say, I don't like it. I would prefer to assume that callers might not adhere to the contract and be prepared to do something about it. But there it is, for what it's worth.

Completion handlers and Operation queues

I am trying to do the following approach,
let operationQueue = OperationQueue()
operationQueue.maxConcurrentOperationCount = 10
func registerUser(completionHandler: @escaping (Result<Data, Error>) -> Void) -> String {
self.registerClient() { (result) in
switch result {
case .success(let data):
self.downloadUserProfile(data.profiles)
case .failure(let error):
return self.handleError(error)
}
}
}
func downloadUserProfile(urls: [String]) {
for url in urls {
queue.addOperation {
self.client.downloadTask(with: url)
}
}
}
I am checking whether there is any way I can get notified when all operations have completed, so I can call the success handler there.
I tried checking the Apple dev documentation, which suggests using
queue.addBarrierBlock {
<#code#>
}
but this is available only from iOS 13.0
Pre iOS 13, we’d use dependencies. Declare a completion operation, and then when you create operations for your network requests, you’d define those operations to be dependencies for your completion operation.
let completionOperation = BlockOperation { ... }
let networkOperation1 = ...
completionOperation.addDependency(networkOperation1)
queue.addOperation(networkOperation1)
let networkOperation2 = ...
completionOperation.addDependency(networkOperation2)
queue.addOperation(networkOperation2)
OperationQueue.main.addOperation(completionOperation)
That having been said, you should be very careful with your operation implementation. Do I correctly infer that downloadTask(with:) returns immediately after the download task has been initiated and doesn’t wait for the request to finish? In that case, neither dependencies nor barriers will work the way you want.
When wrapping network requests in an operation, you’d want to make sure to use an asynchronous Operation subclass (e.g. https://stackoverflow.com/a/32322851/1271826).
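For example, a rough sketch of such a wrapper, assuming an AsynchronousOperation base class along the lines of the linked answer (one that does the isExecuting/isFinished KVO and exposes a finish() method); DownloadOperation and its url parameter are illustrative, not your actual API:
import Foundation

class DownloadOperation: AsynchronousOperation {
    private let url: URL
    private var task: URLSessionTask?

    init(url: URL) {
        self.url = url
        super.init()
    }

    override func main() {
        task = URLSession.shared.dataTask(with: url) { data, _, error in
            defer { self.finish() }     // the operation only finishes when the request does
            // handle data / error ...
        }
        task?.resume()
    }

    override func cancel() {
        super.cancel()
        task?.cancel()
    }
}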
The pre-iOS 13 way is to observe the operationCount property of the operation queue
var observation: NSKeyValueObservation?
...
observation = operationQueue.observe(\.operationCount, options: [.new]) { observed, change in
if change.newValue == 0 {
print("operations finished")
}
}

How to suspend dispatch queue inside for loop?

I have play and pause buttons. When I press the play button, I want to play asynchronous talking inside a for loop. I used a dispatch group to wait for the async method inside the for loop. But I cannot achieve pause.
startStopButton.rx.tap.bind {
if self.isPaused {
self.isPaused = false
dispatchGroup.suspend()
dispatchQueue.suspend()
} else {
self.isPaused = true
self.dispatchQueue.async {
for i in 0..<self.textBlocks.count {
self.dispatchGroup.enter()
self.startTalking(string: self.textBlocks[i]) { isFinished in
self.dispatchGroup.leave()
}
self.dispatchGroup.wait()
}
}
}
}.disposed(by: disposeBag)
And I tried to do it with OperationQueue, but it is still not working. It still continues talking.
startStopButton.rx.tap.bind {
if self.isPaused {
self.isPaused = false
self.talkingQueue.isSuspended = true
self.talkingQueue.cancelAllOperations()
} else {
self.isPaused = true
self.talkingQueue.addOperation {
for i in 0..<self.textBlocks.count {
self.dispatchGroup.enter()
self.startTalking(string: self.textBlocks[i]) { isFinished in
self.dispatchGroup.leave()
}
self.dispatchGroup.wait()
}
}
}
}.disposed(by: disposeBag)
Is there any advice?
A few observations:
Pausing a group doesn’t do anything. You suspend queues, not groups.
Suspending a queue stops new items from starting on that queue, but it does not suspend anything already running on that queue. So, if you’ve added all the textBlock calls in a single dispatched item of work, then once it’s started, it won’t suspend.
So, rather than dispatching all of these text blocks to the queue as a single task, instead, submit them individually (presuming, of course, that your queue is serial). So, for example, let’s say you had a DispatchQueue:
let queue = DispatchQueue(label: "...")
And then, to queue the tasks, put the async call inside the for loop, so each text block is a separate item in your queue:
for textBlock in textBlocks {
queue.async { [weak self] in
guard let self = self else { return }
let semaphore = DispatchSemaphore(value: 0)
self.startTalking(string: textBlock) {
semaphore.signal()
}
semaphore.wait()
}
}
FYI, while dispatch groups work, a semaphore (great for coordinating a single signal with a wait) might be a more logical choice here, rather than a group (which is intended for coordinating groups of dispatched tasks).
Anyway, when you suspend that queue, the queue will be prevented from starting anything queued (but will finish the current textBlock).
Or you can use an asynchronous Operation, e.g., create your queue:
let queue: OperationQueue = {
let queue = OperationQueue()
queue.name = "..."
queue.maxConcurrentOperationCount = 1
return queue
}()
Then, again, you queue up each spoken word, each respectively a separate operation on that queue:
for textBlock in textBlocks {
queue.addOperation(TalkingOperation(string: textBlock))
}
That of course assumes you encapsulated your talking routine in an operation, e.g.:
class TalkingOperation: AsynchronousOperation {
let string: String
init(string: String) {
self.string = string
}
override func main() {
startTalking(string: string) {
self.finish()
}
}
func startTalking(string: String, completion: @escaping () -> Void) { ... }
}
I prefer this approach because
we’re not blocking any threads;
the logic for talking is nicely encapsulated in that TalkingOperation, in the spirit of the single responsibility principle; and
you can easily suspend the queue or cancel all the operations.
By the way, this is a subclass of an AsynchronousOperation, which abstracts the complexity of asynchronous operation out of the TalkingOperation class. There are many ways to do this, but here’s one random implementation. FWIW, the idea is that you define an AsynchronousOperation subclass that does all the KVO necessary for asynchronous operations outlined in the documentation, and then you can enjoy the benefits of operation queues without making each of your asynchronous operation subclasses too complicated.
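For illustration only, a minimal sketch of what such a base class might look like (an assumption, not the exact implementation behind that link):
import Foundation

class AsynchronousOperation: Operation {
    private let stateLock = NSLock()
    private var _executing = false
    private var _finished = false

    override var isAsynchronous: Bool { true }

    override var isExecuting: Bool {
        stateLock.lock(); defer { stateLock.unlock() }
        return _executing
    }

    override var isFinished: Bool {
        stateLock.lock(); defer { stateLock.unlock() }
        return _finished
    }

    override func start() {
        if isCancelled {
            finish()
            return
        }
        willChangeValue(forKey: "isExecuting")
        stateLock.lock(); _executing = true; stateLock.unlock()
        didChangeValue(forKey: "isExecuting")
        main()   // subclasses kick off their asynchronous work in main() ...
    }

    /// ... and call finish() from their completion handlers when that work is done.
    func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        stateLock.lock(); _executing = false; _finished = true; stateLock.unlock()
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}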
For what it’s worth, if you don’t need suspend, but would be happy just canceling, the other approach is to dispatching the whole for loop as a single work item or operation, but check to see if the operation has been canceled inside the for loop:
So, define a few properties:
let queue = DispatchQueue(label: "...")
var item: DispatchWorkItem?
Then you can start the task:
item = DispatchWorkItem { [weak self] in
guard let textBlocks = self?.textBlocks else { return }
for textBlock in textBlocks where self?.item?.isCancelled == false {
let semaphore = DispatchSemaphore(value: 0)
self?.startTalking(string: textBlock) {
semaphore.signal()
}
semaphore.wait()
}
self?.item = nil
}
queue.async(execute: item!)
And then, when you want to stop it, just call item?.cancel(). You can do this same pattern with a non-asynchronous Operation, too.
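For completeness, a hedged sketch of that same isCancelled-checking pattern with a (synchronous) Operation, reusing the serial OperationQueue, textBlocks, and startTalking(string:completion:) from above:
let operation = BlockOperation()
operation.addExecutionBlock { [weak self, weak operation] in
    guard let self = self, let operation = operation else { return }
    for textBlock in self.textBlocks where !operation.isCancelled {
        let semaphore = DispatchSemaphore(value: 0)
        self.startTalking(string: textBlock) { semaphore.signal() }
        semaphore.wait()
    }
}
queue.addOperation(operation)

// ...and later, to stop:
queue.cancelAllOperations()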

How to ensure to run some code on same background thread?

I am using Realm in my iOS Swift project. Search involves complex filters on a big data set, so I am fetching records on a background thread.
But a Realm can only be used from the same thread on which it was created.
I am saving a reference to the results I got after searching Realm on a background thread. This object can only be accessed from that same background thread.
How can I ensure that code dispatched at different times runs on the same thread?
I tried the following, as suggested, to solve the issue, but it didn't work:
let realmQueue = DispatchQueue(label: "realm")
var orginalThread:Thread?
override func viewDidLoad() {
super.viewDidLoad()
realmQueue.async {
self.orginalThread = Thread.current
}
let deadlineTime = DispatchTime.now() + .seconds(2)
DispatchQueue.main.asyncAfter(deadline: deadlineTime) {
self.realmQueue.async {
print("realm queue after some time")
if self.orginalThread == Thread.current {
print("same thread")
}else {
print("other thread")
}
}
}
}
Output is
realm queue after some time
other thread
Here's a small worker class that works in a similar fashion to async dispatching on a serial queue, with the guarantee that the thread stays the same for all work items.
// Performs submitted work items on a dedicated thread
class Worker {
// the worker thread
private var thread: Thread?
// used to put the worker thread into sleep mode, so it won't consume
// CPU while the queue is empty
private let semaphore = DispatchSemaphore(value: 0)
// using a lock to avoid race conditions if the worker and the enqueuer threads
// try to update the queue at the same time
private let lock = NSRecursiveLock()
// and finally, the glorious queue, where all submitted blocks end up, and from
// where the worker thread consumes them
private var queue = [() -> Void]()
// enqueues the given block; the worker thread will execute it as soon as possible
public func enqueue(_ block: @escaping () -> Void) {
// add the block to the queue, in a thread safe manner
locked { queue.append(block) }
// signal the semaphore, this will wake up the sleeping beauty
semaphore.signal()
// if this is the first time we enqueue a block, detach the thread
// this makes the class lazy - it doesn't dispatch a new thread until the first
// work item arrives
if thread == nil {
thread = Thread(block: work)
thread?.start()
}
}
// the method that gets passed to the thread
private func work() {
// just an infinite sequence of sleeps while the queue is empty
// and block executions if the queue has items
while true {
// let's sleep until we get signalled that items are available
semaphore.wait()
// extract the first block in a thread safe manner, execute it
// if we get here we know for sure that the queue has at least one element
// as the semaphore gets signalled only when an item arrives
let block = locked { queue.removeFirst() }
block()
}
}
// synchronously executes the given block in a thread-safe manner
// returns the same value as the block
private func locked<T>(do block: () -> T) -> T {
lock.lock(); defer { lock.unlock() }
return block()
}
}
Just instantiate it and let it do the job:
let worker = Worker()
worker.enqueue { print("On background thread, yay") }
You have to create your own thread with a run loop for that. Apple gives an example of a custom run loop in Objective-C. You may create a thread class in Swift based on that, like:
class MyThread: Thread {
public var runloop: RunLoop?
public var done = false
override func main() {
runloop = RunLoop.current
done = false
repeat {
let result = CFRunLoopRunInMode(.defaultMode, 10, true)
if result == .stopped {
done = true
}
}
while !done
}
func stop() {
if let rl = runloop?.getCFRunLoop() {
CFRunLoopStop(rl)
runloop = nil
done = true
}
}
}
Now you can use it like this:
let thread = MyThread()
thread.start()
sleep(1)
thread.runloop?.perform {
print("task")
}
thread.runloop?.perform {
print("task 2")
}
thread.runloop?.perform {
print("task 3")
}
Note: The sleep is not very elegant but is needed, since the thread needs some time to start up. It would be better to check whether the runloop property is set, and perform the block later if necessary. My code (esp. runloop) is probably not safe against race conditions, and it's only for demonstration. ;-)
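As a rough sketch of that suggested improvement (waitForRunLoop is an assumed helper, not part of the class above), you could wait for the run loop to appear instead of sleeping a fixed second:
import Foundation

func waitForRunLoop(of thread: MyThread, timeout: TimeInterval = 1.0) -> RunLoop? {
    let deadline = Date().addingTimeInterval(timeout)
    while thread.runloop == nil && Date() < deadline {
        Thread.sleep(forTimeInterval: 0.01)   // brief poll; a semaphore/condition would avoid polling entirely
    }
    return thread.runloop
}

let thread = MyThread()
thread.start()
if let runloop = waitForRunLoop(of: thread) {
    runloop.perform { print("task") }
}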
