How to suspend dispatch queue inside for loop? - ios

I have a play and pause button. When I press the play button, I want to play asynchronous speech inside a for loop. I used a dispatch group to wait for the async method inside the for loop, but I cannot achieve pause.
startStopButton.rx.tap.bind {
    if self.isPaused {
        self.isPaused = false
        dispatchGroup.suspend()
        dispatchQueue.suspend()
    } else {
        self.isPaused = true
        self.dispatchQueue.async {
            for i in 0..<self.textBlocks.count {
                self.dispatchGroup.enter()
                self.startTalking(string: self.textBlocks[i]) { isFinished in
                    self.dispatchGroup.leave()
                }
                self.dispatchGroup.wait()
            }
        }
    }
}.disposed(by: disposeBag)
And I tried to do it with OperationQueue, but it's still not working. It still continues talking.
startStopButton.rx.tap.bind {
    if self.isPaused {
        self.isPaused = false
        self.talkingQueue.isSuspended = true
        self.talkingQueue.cancelAllOperations()
    } else {
        self.isPaused = true
        self.talkingQueue.addOperation {
            for i in 0..<self.textBlocks.count {
                self.dispatchGroup.enter()
                self.startTalking(string: self.textBlocks[i]) { isFinished in
                    self.dispatchGroup.leave()
                }
                self.dispatchGroup.wait()
            }
        }
    }
}.disposed(by: disposeBag)
Is there any advice?

A few observations:
Pausing a group doesn’t do anything. You suspend queues, not groups.
Suspending a queue stops new items from starting on that queue, but it does not suspend anything already running on that queue. So, if you’ve added all the textBlock calls in a single dispatched item of work, then once it’s started, it won’t suspend.
So, rather than dispatching all of these text blocks to the queue as a single task, submit them individually (presuming, of course, that your queue is serial). For example, let's say you had a DispatchQueue:
let queue = DispatchQueue(label: "...")
And then, to queue the tasks, put the async call inside the for loop, so each text block is a separate item in your queue:
for textBlock in textBlocks {
    queue.async { [weak self] in
        guard let self = self else { return }
        let semaphore = DispatchSemaphore(value: 0)
        self.startTalking(string: textBlock) {
            semaphore.signal()
        }
        semaphore.wait()
    }
}
FYI, while dispatch groups work, a semaphore (great for coordinating a single signal with a wait) might be a more logical choice here, rather than a group (which is intended for coordinating groups of dispatched tasks).
Anyway, when you suspend that queue, the queue will be prevented from starting anything queued (but will finish the current textBlock).
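Tying that back to the question, the play/pause tap handler can then simply toggle suspension of that same queue. A rough sketch (the isPaused flag, startStopButton, RxSwift binding, and disposeBag come from the question; storing queue as a property is an assumption):

startStopButton.rx.tap
    .bind { [weak self] in
        guard let self = self else { return }
        if self.isPaused {
            self.isPaused = false
            self.queue.resume()       // picks up with the next queued text block
        } else {
            self.isPaused = true
            self.queue.suspend()      // the current text block finishes, then nothing new starts
        }
    }
    .disposed(by: disposeBag)

Note that the for loop above enqueues the text blocks once, when playback first starts; this handler only pauses and resumes the queue.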
Or you can use an asynchronous Operation, e.g., create your queue:
let queue: OperationQueue = {
    let queue = OperationQueue()
    queue.name = "..."
    queue.maxConcurrentOperationCount = 1
    return queue
}()
Then, again, you queue up each spoken word, each respectively a separate operation on that queue:
for textBlock in textBlocks {
    queue.addOperation(TalkingOperation(string: textBlock))
}
That of course assumes you encapsulated your talking routine in an operation, e.g.:
class TalkingOperation: AsynchronousOperation {
    let string: String

    init(string: String) {
        self.string = string
    }

    override func main() {
        startTalking(string: string) {
            self.finish()
        }
    }

    func startTalking(string: String, completion: @escaping () -> Void) { ... }
}
I prefer this approach because
we’re not blocking any threads;
the logic for talking is nicely encapsulated in that TalkingOperation, in the spirit of the single responsibility principle; and
you can easily suspend the queue or cancel all the operations (a short sketch of that follows).
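For example, pausing and resuming the operation queue shown above might be as simple as this sketch:

// pausing: the operation that is already executing finishes, nothing new starts
queue.isSuspended = true

// resuming:
queue.isSuspended = false

// stopping entirely (an operation that is already executing keeps going unless
// it checks isCancelled or overrides cancel):
queue.cancelAllOperations()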
By the way, this is a subclass of an AsynchronousOperation, which abstracts the complexity of asynchronous operation out of the TalkingOperation class. There are many ways to do this, but here’s one random implementation. FWIW, the idea is that you define an AsynchronousOperation subclass that does all the KVO necessary for asynchronous operations outlined in the documentation, and then you can enjoy the benefits of operation queues without making each of your asynchronous operation subclasses too complicated.
For what it's worth, if you don't need suspend, but would be happy just canceling, the other approach is to dispatch the whole for loop as a single work item or operation, but check whether the operation has been canceled inside the for loop:
So, define a few properties:
let queue = DispatchQueue(label: "...")
var item: DispatchWorkItem?
Then you can start the task:
item = DispatchWorkItem { [weak self] in
    guard let textBlocks = self?.textBlocks else { return }
    for textBlock in textBlocks where self?.item?.isCancelled == false {
        let semaphore = DispatchSemaphore(value: 0)
        self?.startTalking(string: textBlock) {
            semaphore.signal()
        }
        semaphore.wait()
    }
    self?.item = nil
}

queue.async(execute: item!)
And then, when you want to stop it, just call item?.cancel(). You can do this same pattern with a non-asynchronous Operation, too.
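For completeness, here is a hedged sketch of that same cancellation check using a plain (non-asynchronous) BlockOperation on a serial OperationQueue, assuming startTalking has the same signature as in the snippets above:

let talkingQueue = OperationQueue()
talkingQueue.maxConcurrentOperationCount = 1

let operation = BlockOperation()
operation.addExecutionBlock { [weak self, weak operation] in
    guard let self = self else { return }
    for textBlock in self.textBlocks where operation?.isCancelled == false {
        let semaphore = DispatchSemaphore(value: 0)
        self.startTalking(string: textBlock) {
            semaphore.signal()
        }
        semaphore.wait()
    }
}
talkingQueue.addOperation(operation)

// later, to stop:
// operation.cancel()   // or talkingQueue.cancelAllOperations()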

Related

Swift - Start/Stop Synchronous OperationQueue

I have some operations that need to run synchronously. I tried to follow this link but it's not clear enough for my situation.
op2 doesn't start until op1 is finished and op3 doesn't start until op2 is finished but during that time I need to be able to stop any of the operations and restart all over again. For example if op2 is running, I know that it cannot be stopped, but for whatever reason I need to be able to prevent op3 from executing because op1 has restarted. How can I do this?
This is a very simple example; the actual code is more intricate:
var queue1 = OperationQueue()
var queue2 = OperationQueue()
var queue3 = OperationQueue()
var operation1: BlockOperation?
var operation2: BlockOperation?
var operation3: BlockOperation?
// a DispatchGroup has finished running; now it's time to start the operations ...
dispatchGroup.notify(queue: .global(qos: .background)) { [weak self] in
    DispatchQueue.main.async { [weak self] in
        self?.runFirstFunc()
    }
}
func runFirstFunc() {
    var count = 0
    for num in arr {
        count += num
    }
    // now that the loop is finished, start the second func, but there is a possibility something may happen in the first that should prevent the second func from running
    runSecondFunc(count: count)
}

func runSecondFunc(count: Int) {
    do {
        try ...
        // if the do-try is successful, do something with count then start thirdFunc, but there is a possibility something may happen in the second func that should prevent the third func from running
        runThirdFunc()
    } catch {
        return
    }
}

func runThirdFunc() {
    // this is the final operation; once it hits here I know it can't be stopped, even if I have to restart op1 again, but that is fine
}
You said:
op2 doesn't start until op1 is finished and op3 doesn't start until op2 is finished ...
If using OperationQueue you can accomplish that by creating the three operations, and defining op1 to be a dependency of op2 and defining op2 as a dependency of op3.
... but during that time I need to be able to stop any of the operations and restart all over again.
If using OperationQueue, if you want to stop all operations that have been added to the queue, you call cancelAllOperations.
For example if op2 is running, I know that it cannot be stopped, ...
Well, it depends upon what op2 is doing. If it's spinning in a loop doing calculations, then, yes, it can be canceled, mid-operation. You just check isCancelled, and if it is, stop the operation in question. Or if it is a network request (or something else that is cancelable), you can override cancel method and cancel the task, too. It depends upon what the operation is doing.
... but for whatever reason I need to be able to prevent op3 from executing because op1 has restarted.
Sure, having canceled all the operations with cancelAllOperations, you can then re-add three new operations (with their associated dependencies) to the queue.
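Putting those pieces together, a rough, untested sketch (the operation bodies are placeholders for the asker's runFirstFunc/runSecondFunc/runThirdFunc):

let queue = OperationQueue()

func startAllOver() {
    queue.cancelAllOperations()            // abandon anything still pending

    let op1 = BlockOperation { /* runFirstFunc work; check isCancelled in long loops */ }
    let op2 = BlockOperation { /* runSecondFunc work */ }
    let op3 = BlockOperation { /* runThirdFunc work */ }

    op2.addDependency(op1)                 // op2 won't start until op1 finishes
    op3.addDependency(op2)                 // op3 won't start until op2 finishes

    queue.addOperations([op1, op2, op3], waitUntilFinished: false)
}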
Here's a not-tested implementation that allows cancellation while any task is doing its subtasks (repeatedly).
In case the second task fails/throws, it automatically restarts from the first task.
In case the user manually stops/starts, the last in-flight task quits its execution (as soon as it can).
Note: you must take care of the [weak self] part according to your own implementation.
import Foundation

class TestWorker {
    let workerQueue = DispatchQueue.global(qos: .utility)
    var currentWorkItem: DispatchWorkItem?

    func start() {
        self.performTask { self.performTask1() }
    }

    func stop() {
        currentWorkItem?.cancel()
    }

    func performTask(block: @escaping (() -> Void)) {
        let workItem = DispatchWorkItem(block: block)
        self.currentWorkItem = workItem
        workerQueue.async(execute: workItem)
    }

    func performTask1() {
        guard let workItem = self.currentWorkItem else { return }
        func subtask(index: Int) {}
        for i in 0..<100 {
            if workItem.isCancelled { return }
            subtask(index: i)
        }
        self.performTask { self.performTask2() }
    }

    func performTask2() {
        guard let workItem = self.currentWorkItem else { return }
        func subtask(index: Int) throws {}
        for i in 0..<100 {
            if workItem.isCancelled { return }
            do { try subtask(index: i) }
            catch {
                self.start()
                return
            }
        }
        self.performTask { self.performTask3() }
    }

    func performTask3() {
        guard let workItem = self.currentWorkItem else { return }
        func subtask(index: Int) {}
        for i in 0..<100 {
            if workItem.isCancelled { return }
            subtask(index: i)
        }
        /// Done
    }
}
Maybe this is a good reason to look into Swift Combine (a rough sketch follows this list):
Define your tasks as Publishers.
Use flatMap to chain them, optionally pass output from previous to the next.
Use switchToLatest to restart the whole thing and cancel the previous one when it is still running - if any.
Use cancel on the subscriber to cancel the whole thing.
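Here is that rough sketch (the Pipeline class and op1/op2/op3 are placeholders, not the asker's code; in this simplified version a failure terminates the subscription, so real restart-on-failure would need retry, catch, or re-subscription):

import Combine

final class Pipeline {
    private let restart = PassthroughSubject<Void, Never>()
    private var cancellable: AnyCancellable?

    // Placeholder steps; each Future stands in for one of the real tasks.
    private func op1() -> AnyPublisher<Int, Error> {
        Future<Int, Error> { promise in promise(.success(1)) }.eraseToAnyPublisher()
    }
    private func op2(_ input: Int) -> AnyPublisher<Int, Error> {
        Future<Int, Error> { promise in promise(.success(input + 1)) }.eraseToAnyPublisher()
    }
    private func op3(_ input: Int) -> AnyPublisher<Int, Error> {
        Future<Int, Error> { promise in promise(.success(input + 1)) }.eraseToAnyPublisher()
    }

    func start() {
        cancellable = restart
            .setFailureType(to: Error.self)
            .map { [unowned self] _ in
                self.op1()
                    .flatMap { self.op2($0) }
                    .flatMap { self.op3($0) }
            }
            .switchToLatest()                        // a new restart cancels the chain still in flight
            .sink(receiveCompletion: { print("finished:", $0) },
                  receiveValue: { print("result:", $0) })
        restart.send(())                             // first run
    }

    func restartAll() { restart.send(()) }           // cancel the current run and start over
    func stop() { cancellable = nil }                // cancel everything
}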

Swift - Run 1000 async tasks with a sleeper after every 50 - how to communicate between DispatchGroups

I have to run 1000 async calculations. Since the API has a limit of 50 requests/min, I have to split it up into chunks of 50 and wait for a minute after processing each chunk. Eventually I want to print the results.
var resultsArray = [Double]()

// chunked is an extension
points.chunked(into: 50).forEach { pointsChunk in
    pointsChunk.forEach { pointsPair in
        // this function is async
        service.calculate(pointsPair) { result in
            resultsArray.append(result)
        }
    }
    // wait for a minute before continuing with the next chunk
}
// after all 1000 calculations are done, print result
print(resultsArray)
I did try finding a solution using DispatchGroup but struggled with how to incorporate a timer:
let queue = DispatchQueue(label: "MyQueue", attributes: .concurrent)
let chunkGroup = DispatchGroup()
let workGroup = DispatchGroup()

points.chunked(into: 50).forEach { pointsChunk in
    chunkGroup.enter()
    pointsChunk.forEach { routePointsPair in
        workGroup.enter()
        // do something async and in the callback:
        workGroup.leave()
    }
    workGroup.notify(queue: queue) {
        sleep(60)
        chunkGroup.leave()
    }
}

chunkGroup.notify(queue: .main) {
    print(resultsArray)
}
This just executes all chunks at once instead of delayed by 60 seconds.
What I have implemented in a similar situation is manual suspending and resuming of my serial queue.
My queue reference:

public static let serialQueue = DispatchQueue(label: "com.queue.MyProvider.Serial")

func serialQueue() -> DispatchQueue {
    return MyProvider.serialQueue
}
Suspend the queue:

func suspendSerialQueue() -> Void {
    self.serialQueue().suspend()
}
Resume the queue after a delay:

func resumeSerialQueueAfterDelay(seconds: Double) -> Void {
    DispatchQueue.global(qos: .userInitiated).asyncAfter(deadline: .now() + seconds) {
        self.serialQueue().resume()
    }
}
This way I have full control over when I suspend and when I resume the queue, and I can spread out many API calls evenly over a longer period of time.
self.serialQueue().async {
    self.suspendSerialQueue()
    // API call completion block {
        self.resumeSerialQueueAfterDelay(seconds: delay)
    // }
}
Not sure if this is what you were looking for, but maybe you can adapt my example to your needs.
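For instance, adapted to the question's chunked API calls, the idea might look roughly like this (untested; points, chunked(into:), and service.calculate come from the question, while the throttle queue, the lock, and the fixed 60-second delay are assumptions):

let throttleQueue = DispatchQueue(label: "com.example.throttle")   // serial
let resultsLock = NSLock()
var resultsArray = [Double]()

for chunk in points.chunked(into: 50) {
    throttleQueue.async {
        let group = DispatchGroup()
        for pointsPair in chunk {
            group.enter()
            service.calculate(pointsPair) { result in
                resultsLock.lock()
                resultsArray.append(result)
                resultsLock.unlock()
                group.leave()
            }
        }
        // Pause the serial queue until this chunk has finished and a minute has passed.
        throttleQueue.suspend()
        group.notify(queue: .global()) {
            DispatchQueue.global().asyncAfter(deadline: .now() + 60) {
                throttleQueue.resume()
            }
        }
    }
}

// Enqueued last, so it runs only after every chunk block above has completed
// (including the final one-minute pause).
throttleQueue.async {
    print(resultsArray)
}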

How to achieve DispatchGroup functionality using OperationQueue? [duplicate]

I have an Operation subclass and Operation queue with maxConcurrentOperationCount = 1.
This performs my operations in the sequential order that I add them, which is good, but now I need to wait until all operations have finished before running another process.
I was trying to use a notify group, but as this runs in a for loop, the notify group fires as soon as the operations have been added to the queue. How do I wait for all operations to leave the queue before running another process?
for (index, _) in self.packArray.enumerated() {
    myGroup.enter()
    let myArrayOperation = ArrayOperation(collection: self.outerCollectionView, id: self.packArray[index].id, count: index)
    myArrayOperation.name = self.packArray[index].id
    downloadQueue.addOperation(myArrayOperation)
    myGroup.leave()
}
myGroup.notify(queue: .main) {
    // do stuff here
}
You can use operation dependencies to initiate some operation upon the completion of a series of other operations:
let queue = OperationQueue()

let completionOperation = BlockOperation {
    // all done
}

for object in objects {
    let operation = ...
    completionOperation.addDependency(operation)
    queue.addOperation(operation)
}

OperationQueue.main.addOperation(completionOperation) // or, if you don't need it on the main queue, just `queue.addOperation(completionOperation)`
Or, in iOS 13 and later, you can use barriers:
let queue = OperationQueue()

for object in objects {
    queue.addOperation(...)
}

queue.addBarrierBlock {
    DispatchQueue.main.async {
        // all done
    }
}
A suitable solution is KVO
First, before the loop, add the observer (assuming queue is the OperationQueue instance):
queue.addObserver(self, forKeyPath: "operations", options: .new, context: nil)
Then implement
override func observeValue(forKeyPath keyPath: String?, of object: Any?, change: [NSKeyValueChangeKey : Any]?, context: UnsafeMutableRawPointer?) {
    if object as? OperationQueue == queue && keyPath == "operations" {
        if queue.operations.isEmpty {
            // Do something here when your queue has completed
            self.queue.removeObserver(self, forKeyPath: "operations")
        }
    } else {
        super.observeValue(forKeyPath: keyPath, of: object, change: change, context: context)
    }
}
Edit:
In Swift 4 it's much easier
Declare a property:
var observation : NSKeyValueObservation?
and create the observer
observation = queue.observe(\.operationCount, options: [.new]) { [unowned self] (queue, change) in
    if change.newValue! == 0 {
        // Do something here when your queue has completed
        self.observation = nil
    }
}
Since iOS 13 and macOS 10.15, operationCount has been deprecated. The replacement is to observe progress.completedUnitCount.
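If you prefer block-based KVO over Combine, observing the replacement key path might look roughly like this sketch (packArray is taken from the question; also, the queue reportedly only reports progress once you set progress.totalUnitCount yourself):

var progressObservation: NSKeyValueObservation?

queue.progress.totalUnitCount = Int64(packArray.count)   // tell the queue how much work to expect

progressObservation = queue.observe(\.progress.completedUnitCount, options: [.new]) { queue, change in
    if change.newValue == queue.progress.totalUnitCount {
        print("queue finished")
    }
}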
Another modern way is to use the KVO publisher of Combine
var cancellable: AnyCancellable?

cancellable = queue.publisher(for: \.progress.completedUnitCount)
    .filter { $0 == queue.progress.totalUnitCount }
    .sink() { _ in
        print("queue finished")
        self.cancellable = nil
    }
I use the following solution:
private let queue = OperationQueue()

private func addOperations(_ operations: [Operation], completionHandler: @escaping () -> ()) {
    DispatchQueue.global().async { [unowned self] in
        self.queue.addOperations(operations, waitUntilFinished: true)
        DispatchQueue.main.async(execute: completionHandler)
    }
}
Set the maximum number of concurrent operations to 1
operationQueue.maxConcurrentOperationCount = 1
then each operation will be executed in order (as if each was dependent on the previous one) and your completion operation will execute at the end.
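In other words, something like this minimal sketch:

let operationQueue = OperationQueue()
operationQueue.maxConcurrentOperationCount = 1      // strictly one at a time, in the order added

for i in 0..<5 {
    operationQueue.addOperation { print("operation \(i)") }
}
operationQueue.addOperation { print("all done") }   // runs only after everything above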
Code at the end of the queue
refer to this link
NSOperation and NSOperationQueue are great and useful Foundation framework tools for asynchronous tasks. One thing puzzled me though: How can I run code after all my queue operations finish? The simple answer is: use dependencies between operations in the queue (unique feature of NSOperation). It's just 5 lines of code solution.
NSOperation dependency trick
With Swift it is easy to implement, like this:
extension Array where Element: NSOperation {
    /// Execute block after all operations from the array.
    func onFinish(block: () -> Void) {
        let doneOperation = NSBlockOperation(block: block)
        self.forEach { [unowned doneOperation] in doneOperation.addDependency($0) }
        NSOperationQueue().addOperation(doneOperation)
    }
}
My solution is similar to that of https://stackoverflow.com/a/42496559/452115, but I don't add the completionOperation in the main OperationQueue but into the queue itself. This works for me:
var a = [Int](repeating: 0, count: 10)

let queue = OperationQueue()

let completionOperation = BlockOperation {
    print(a)
}

queue.maxConcurrentOperationCount = 2

for i in 0...9 {
    let operation = BlockOperation {
        a[i] = 1
    }
    completionOperation.addDependency(operation)
    queue.addOperation(operation)
}

queue.addOperation(completionOperation)

print("Done 🎉")

How to ensure to run some code on same background thread?

I am using Realm in my iOS Swift project. Search involves complex filters for a big data set, so I am fetching records on a background thread.
But Realm can be used only from the same thread on which it was created.
I am saving a reference to the results which I got after searching Realm on a background thread. This object can only be accessed from that same background thread.
How can I ensure that code dispatched at different times runs on the same thread?
I tried the below as suggested to solve the issue, but it didn't work:
let realmQueue = DispatchQueue(label: "realm")
var orginalThread: Thread?

override func viewDidLoad() {
    super.viewDidLoad()

    realmQueue.async {
        self.orginalThread = Thread.current
    }

    let deadlineTime = DispatchTime.now() + .seconds(2)
    DispatchQueue.main.asyncAfter(deadline: deadlineTime) {
        self.realmQueue.async {
            print("realm queue after some time")
            if self.orginalThread == Thread.current {
                print("same thread")
            } else {
                print("other thread")
            }
        }
    }
}
Output is
realm queue after some time
other thread
Here's a small worker class that works in a similar fashion to async dispatching on a serial queue, with the guarantee that the thread stays the same for all work items.
// Performs submitted work items on a dedicated thread
class Worker {
    // the worker thread
    private var thread: Thread?

    // used to put the worker thread in sleep mode, so it won't consume
    // CPU while the queue is empty
    private let semaphore = DispatchSemaphore(value: 0)

    // using a lock to avoid race conditions if the worker and the enqueuer threads
    // try to update the queue at the same time
    private let lock = NSRecursiveLock()

    // and finally, the glorious queue, where all submitted blocks end up, and from
    // where the worker thread consumes them
    private var queue = [() -> Void]()

    // enqueues the given block; the worker thread will execute it as soon as possible
    public func enqueue(_ block: @escaping () -> Void) {
        // add the block to the queue, in a thread safe manner
        locked { queue.append(block) }

        // signal the semaphore, this will wake up the sleeping beauty
        semaphore.signal()

        // if this is the first time we enqueue a block, detach the thread;
        // this makes the class lazy - it doesn't dispatch a new thread until the first
        // work item arrives
        if thread == nil {
            thread = Thread(block: work)
            thread?.start()
        }
    }

    // the method that gets passed to the thread
    private func work() {
        // just an infinite sequence of sleeps while the queue is empty
        // and block executions if the queue has items
        while true {
            // let's sleep until we get signalled that items are available
            semaphore.wait()

            // extract the first block in a thread safe manner, execute it;
            // if we get here we know for sure that the queue has at least one element,
            // as the semaphore gets signalled only when an item arrives
            let block = locked { queue.removeFirst() }
            block()
        }
    }

    // synchronously executes the given block in a thread-safe manner
    // returns the same value as the block
    private func locked<T>(do block: () -> T) -> T {
        lock.lock(); defer { lock.unlock() }
        return block()
    }
}
Just instantiate it and let it do the job:
let worker = Worker()
worker.enqueue { print("On background thread, yay") }
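Reworking the question's original test against this Worker (a sketch), blocks enqueued at different times all land on the same dedicated thread:

let worker = Worker()
var originalThread: Thread?

worker.enqueue {
    originalThread = Thread.current
}

DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
    worker.enqueue {
        // prints "same thread": the Worker always uses its one dedicated thread
        print(originalThread == Thread.current ? "same thread" : "other thread")
    }
}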
You have to create your own thread with a run loop for that. Apple gives an example for a custom run loop in Objective C. You may create a thread class in Swift with that like:
class MyThread: Thread {
    public var runloop: RunLoop?
    public var done = false

    override func main() {
        runloop = RunLoop.current
        done = false
        repeat {
            let result = CFRunLoopRunInMode(.defaultMode, 10, true)
            if result == .stopped {
                done = true
            }
        } while !done
    }

    func stop() {
        if let rl = runloop?.getCFRunLoop() {
            CFRunLoopStop(rl)
            runloop = nil
            done = true
        }
    }
}
Now you can use it like this:
let thread = MyThread()
thread.start()
sleep(1)

thread.runloop?.perform {
    print("task")
}
thread.runloop?.perform {
    print("task 2")
}
thread.runloop?.perform {
    print("task 3")
}
Note: the sleep is not very elegant, but it is needed, since the thread needs some time for its startup. It would be better to check whether the runloop property is set, and perform the block later if necessary. My code (esp. runloop) is probably not safe against race conditions, and it's only for demonstration. ;-)

Waiting for completion handler to complete, before continuing

I have an array of 9 images and I'd like to save them all to the user's camera roll. You can do that with UIImageWriteToSavedPhotosAlbum. I wrote a loop to save each image. The problem with this is that for some reason, it will only save the first five. Now, order is important, so if an image fails to save, I want to retry and wait until it succeeds, rather than have some unpredictable race.
So, I implemented a completion handler and thought I could use semaphores, like so:
func save() {
    for i in (0...(self.imagesArray.count-1)).reversed() {
        print("saving image at index ", i)
        semaphore.wait()
        let image = imagesArray[i]
        self.saveImage(image)
    }
}
func saveImage(_ image: UIImage) {
    UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
}
func image(_ image: UIImage, didFinishSavingWithError error: NSError?, contextInfo: UnsafeRawPointer) {
    // due to some write limit, only 5 images get written at once.
    if let error = error {
        print("trying again")
        self.saveImage(image)
    } else {
        print("successfully saved")
        semaphore.signal()
    }
}
The problem with my code is that it gets blocked after the first save and semaphore.signal() never gets called. I'm thinking my completion handler is supposed to be called on the main thread, which is already being blocked by the initial semaphore.wait(). Any help appreciated. Thanks.
As others have pointed out, you want to avoid waiting on the main thread, risking deadlocking. So, while you can push it off to a global queue, the other approach is to employ one of the many mechanisms for performing a series of asynchronous tasks. Options include asynchronous Operation subclass or promises (e.g. PromiseKit).
For example, to wrap the image saving task in an asynchronous Operation and add them to an OperationQueue you could define your image save operation like so:
class ImageSaveOperation: AsynchronousOperation {
    let image: UIImage
    let imageCompletionBlock: ((NSError?) -> Void)?

    init(image: UIImage, imageCompletionBlock: ((NSError?) -> Void)? = nil) {
        self.image = image
        self.imageCompletionBlock = imageCompletionBlock
        super.init()
    }

    override func main() {
        UIImageWriteToSavedPhotosAlbum(image, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
    }

    func image(_ image: UIImage, didFinishSavingWithError error: NSError?, contextInfo: UnsafeRawPointer) {
        imageCompletionBlock?(error)
        complete()
    }
}
Then, assuming that you had an array, images, that was a [UIImage], you could do:
let queue = OperationQueue()
queue.name = Bundle.main.bundleIdentifier! + ".imagesave"
queue.maxConcurrentOperationCount = 1

let operations = images.map {
    return ImageSaveOperation(image: $0) { error in
        if let error = error {
            print(error.localizedDescription)
            queue.cancelAllOperations()
        }
    }
}

let completion = BlockOperation {
    print("all done")
}
operations.forEach { completion.addDependency($0) }

queue.addOperations(operations, waitUntilFinished: false)
OperationQueue.main.addOperation(completion)
You can obviously customize this to add retry logic upon error, but that is likely not needed now because the root of the "too busy" problem was a result of too many concurrent save requests, which we've eliminated. That only leaves errors that are unlikely to be solved by retrying, so I probably wouldn't add retry logic. (The errors are more likely to be permissions failures, out of space, etc.) But you can add retry logic if you really want. More likely, if you have an error, you might want to just cancel all of the remaining operations on the queue, like I have above.
Note, the above subclasses AsynchronousOperation, which is just an Operation subclass for which isAsynchronous returns true. For example:
/// Asynchronous Operation base class
///
/// This class performs all of the necessary KVO of `isFinished` and
/// `isExecuting` for a concurrent `NSOperation` subclass. So, to develop
/// a concurrent NSOperation subclass, you instead subclass this class which:
///
/// - must override `main()` with the tasks that initiate the asynchronous task;
///
/// - must call the `complete()` function when the asynchronous task is done;
///
/// - optionally, periodically check `self.cancelled` status, performing any clean-up
///   necessary and then ensuring that `complete()` is called; or
///   override the `cancel` method, calling `super.cancel()` and then cleaning up
///   and ensuring `complete()` is called.
public class AsynchronousOperation: Operation {

    private let syncQueue = DispatchQueue(label: Bundle.main.bundleIdentifier! + ".opsync")

    override public var isAsynchronous: Bool { return true }

    private var _executing: Bool = false
    override private(set) public var isExecuting: Bool {
        get {
            return syncQueue.sync { _executing }
        }
        set {
            willChangeValue(forKey: "isExecuting")
            syncQueue.sync { _executing = newValue }
            didChangeValue(forKey: "isExecuting")
        }
    }

    private var _finished: Bool = false
    override private(set) public var isFinished: Bool {
        get {
            return syncQueue.sync { _finished }
        }
        set {
            willChangeValue(forKey: "isFinished")
            syncQueue.sync { _finished = newValue }
            didChangeValue(forKey: "isFinished")
        }
    }

    /// Complete the operation
    ///
    /// This will result in the appropriate KVO of isFinished and isExecuting
    public func complete() {
        if isExecuting { isExecuting = false }
        if !isFinished { isFinished = true }
    }

    override public func start() {
        if isCancelled {
            isFinished = true
            return
        }

        isExecuting = true

        main()
    }
}
Now, I appreciate that operation queues (or promises) are going to seem like overkill for your situation, but it's a useful pattern that you can employ wherever you have a series of asynchronous tasks. For more information on operation queues, feel free to refer to the Concurrency Programming Guide: Operation Queues.
A year ago I faced the same problem.
You should try to put your code in a DispatchQueue.global() block; it will surely help.
Reason: I don't really know the exact reason either. I think it's how semaphores work; the wait and signal may need to happen on a background thread.
As Mike alter mentioned, using DispatchQueue.global().async helped solve the problem. The code now looks like this:
func save() {
    for i in (0...(self.imagesArray.count-1)).reversed() {
        DispatchQueue.global().async { [unowned self] in
            self.semaphore.wait()
            let image = self.imagesArray[i]
            self.saveImage(image)
        }
    }
}
I suspect the problem was that the completion handler gets executed on the main thread, which was already blocked by the semaphore.wait() called initially. So when the completion occurs, semaphore.signal() never gets called.
This is solved by running the tasks on a background queue.
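An alternative, hedged variant is to dispatch the whole loop once rather than once per image, so only that single background closure ever blocks on the semaphore. This sketch assumes the semaphore starts at DispatchSemaphore(value: 0) and that the didFinishSavingWithError selector signals it on success, as in the code above:

func save() {
    DispatchQueue.global(qos: .utility).async { [unowned self] in
        for image in self.imagesArray.reversed() {
            self.saveImage(image)     // kicks off the asynchronous system write
            self.semaphore.wait()     // block this background thread until the completion signals
        }
        print("all images saved")
    }
}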
