How to get cancellation state for multiple DispatchWorkItems - ios

Background
I'm implementing a search. Each search query results in one DispatchWorkItem which is then queued for execution. As the user can trigger a new search faster than the previous one can be completed, I'd like to cancel the previous one as soon as I receive a new one.
This is my current setup:
var currentSearchJob: DispatchWorkItem?
let searchJobQueue = DispatchQueue(label: QUEUE_KEY)

func updateSearchResults(for searchController: UISearchController) {
    let queryString = searchController.searchBar.text?.lowercased() ?? ""

    // if there is already an (older) search job running, cancel it
    currentSearchJob?.cancel()

    // create a new search job
    currentSearchJob = DispatchWorkItem() {
        self.filter(queryString: queryString)
    }

    // start the new job
    searchJobQueue.async(execute: currentSearchJob!)
}
Problem
I understand that dispatchWorkItem.cancel() doesn't kill the running task immediately. Instead, I need to check dispatchWorkItem.isCancelled manually. But how do I get the right DispatchWorkItem object in this case?
If I were setting currentSearchJob only once, I could simply access that property, as done in this case. However, that isn't applicable here, because the property will be overridden before the filter() method has finished. How do I know which instance is actually running the code in which I want to check dispatchWorkItem.isCancelled?
Ideally, I'd like to pass the newly created DispatchWorkItem as an additional parameter to the filter() method. But that's not possible, because I'd get a "Variable used within its own initial value" error.
I'm new to Swift, so I hope I'm just missing something. Any help is appreciated very much!

The trick is how to have a dispatched task check whether it has been canceled. I'd actually suggest considering an OperationQueue approach rather than using dispatch queues directly.
There are at least two approaches:
Most elegant, IMHO, is to just subclass Operation, passing whatever you want to it in the init method, and performing the work in the main method:
class SearchOperation: Operation {
    private var queryString: String

    init(queryString: String) {
        self.queryString = queryString
        super.init()
    }

    override func main() {
        // do something synchronous, periodically checking `isCancelled`
        // e.g., for illustrative purposes

        print("starting \(queryString)")

        for i in 0 ... 10 {
            if isCancelled { print("canceled \(queryString)"); return }
            print("    \(queryString): \(i)")
            heavyWork()
        }

        print("finished \(queryString)")
    }

    func heavyWork() {
        Thread.sleep(forTimeInterval: 0.5)
    }
}
Because that's in an Operation subclass, isCancelled implicitly refers to the operation itself rather than some ivar, avoiding any confusion about what it's checking. And your "start a new query" code can just say "cancel anything currently on the relevant operation queue and add a new operation onto that queue":
private var searchQueue: OperationQueue = {
    let queue = OperationQueue()
    // queue.maxConcurrentOperationCount = 1   // make it serial if you want
    queue.name = Bundle.main.bundleIdentifier! + ".backgroundQueue"
    return queue
}()

func performSearch(for queryString: String) {
    searchQueue.cancelAllOperations()

    let operation = SearchOperation(queryString: queryString)
    searchQueue.addOperation(operation)
}
I recommend this approach as you end up with a small cohesive object, the operation, that nicely encapsulates a block of work that you want to do, in the spirit of the Single Responsibility Principle.
While the following is less elegant, technically you can also use BlockOperation, which is block-based, but for which you can decouple the creation of the operation from the adding of the closure to it. Using this technique, you can actually pass a reference to the operation to its own closure:
private weak var lastOperation: Operation?

func performSearch(for queryString: String) {
    lastOperation?.cancel()

    let operation = BlockOperation()
    operation.addExecutionBlock { [weak operation, weak self] in
        print("starting \(queryString)")

        for i in 0 ... 10 {
            if operation?.isCancelled ?? true { print("canceled \(queryString)"); return }
            print("    \(queryString): \(i)")
            self?.heavyWork()
        }

        print("finished \(queryString)")
    }

    searchQueue.addOperation(operation)
    lastOperation = operation
}

func heavyWork() {
    Thread.sleep(forTimeInterval: 0.5)
}
I only mention this for the sake of completeness. I think the Operation subclass approach is frequently a better design. I'll use BlockOperation for one-off sort of stuff, but as soon as I want more sophisticated cancellation logic, I think the Operation subclass approach is better.
I should also mention that, in addition to more elegant cancellation capabilities, Operation objects offer all sorts of other sophisticated capabilities (e.g., asynchronously managing a queue of tasks that are, themselves, asynchronous; constraining the degree of concurrency; etc.), as sketched below. This is all beyond the scope of this question.
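For instance, here is a rough sketch of two of those capabilities, dependencies and a constrained degree of concurrency, reusing the SearchOperation class from above (the dependency itself is purely illustrative, not something the question asked for):

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 2            // constrain the degree of concurrency

let search = SearchOperation(queryString: "swift")
let updateUI = BlockOperation {
    // runs only after `search` has finished (or been cancelled)
}
updateUI.addDependency(search)

queue.addOperations([search, updateUI], waitUntilFinished: false)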

you wrote
Ideally, I'd like to provide the newly-created DispatchWorkItem as an
additional parameter
You are mistaken: to be able to cancel a running task, you need a reference to that task, not to the next one that is ready to dispatch.
cancel() doesn't cancel a running task; it only sets the internal isCancelled flag in a thread-safe way, or removes the task from the queue before it starts executing. Once the task is executing, checking isCancelled gives you a chance to finish the job early (early return).
import PlaygroundSupport
import Foundation

PlaygroundPage.current.needsIndefiniteExecution = true

let queue = DispatchQueue.global(qos: .background)
let prq = DispatchQueue(label: "print.queue")

var task: DispatchWorkItem?

func work(task: DispatchWorkItem?) {
    sleep(1)
    var d = Date()
    if task?.isCancelled ?? true {
        prq.async {
            print("cancelled", d)
        }
        return
    }
    sleep(3)
    d = Date()
    prq.async {
        print("finished", d)
    }
}

for _ in 0..<3 {
    task?.cancel()
    let item = DispatchWorkItem {
        work(task: task)
    }
    item.notify(queue: prq) {
        print("done")
    }
    queue.asyncAfter(deadline: .now() + 0.5, execute: item)
    task = item
    sleep(1) // comment this line
}
In this example, only the very last job is actually executed to completion:
cancelled 2018-12-17 23:49:13 +0000
done
cancelled 2018-12-17 23:49:14 +0000
done
finished 2018-12-17 23:49:18 +0000
done
Try commenting out the sleep(1) at the end of the loop, and it prints:
done
done
finished 2018-12-18 00:07:28 +0000
done
The difference is that the first two executions never happened at all (the items were removed from the dispatch queue before execution).
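Applied to the question's search setup, the early-return check could look roughly like this (a sketch only; allElements and the filter body are assumed, not taken from the question):

func filter(queryString: String, job: DispatchWorkItem?) {
    var results: [String] = []
    for element in allElements {
        // bail out as soon as this particular job has been cancelled
        if job?.isCancelled ?? true { return }
        if element.lowercased().contains(queryString) {
            results.append(element)
        }
    }
    // ... deliver `results`, e.g. back on the main queue ...
}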

Related

How to change a value outside, when inside a Dispatch Semaphore (Edited)

I have a question, and I will explain it as clearly as possible:
I need to pass an object into my func, create a mutable copy of it, change some property values one by one, and use the new version for saving to the cloud. My problem is that when I change property values inside my DispatchSemaphore block, the variable outside somehow doesn't change; there is something going on that I could not understand. Here is the code:
func savePage(model: PageModel, savingHandler: @escaping (Bool) -> Void) {
    // some code .....
    var page = model // (1) I created a variable from the function arg

    let newQueue = DispatchQueue(label: "image upload queue")
    let semaphore = DispatchSemaphore(value: 0)

    newQueue.async {
        if let picURL1 = model.picURL1 {
            self.saveImagesToFireBaseStorage(pictureURL: picURL1) { urlString in
                let url = urlString  // (2) urlString value is valid and exists
                page.picURL1 = url   // (3) modified the newly created "page" object
                print(page.picURL1!) // (4) it works, object prints modified
            }
        }
        semaphore.signal()
    }
    semaphore.wait()

    newQueue.async {
        if let picURL2 = model.picURL2 {
            self.saveImagesToFireBaseStorage(pictureURL: picURL2) { urlString in
                let url = urlString
                page.picURL2 = url
            }
        }
        semaphore.signal()
    }
    semaphore.wait()

    print(page.picURL1!) // (5) "page" object has the old value?

    newQueue.async {
        print(page.picURL1!) // (6) "page" object has the old value?
        do {
            try pageDocumentRef.setData(from: page)
            savingHandler(true)
        } catch let error {
            print("Error writing city to Firestore: \(error)")
        }
        semaphore.signal()
    }
    semaphore.wait()
}
I need to upload some pictures to the cloud and get their URLs so I can create an updated version of the object and save it over the old version in the cloud. But the "page" object somehow doesn't change. Inside the completion closure it prints the right value; outside, or inside another async block, it prints the old value. I am new to concurrency and could not find a way.
What I tried before:
Using OperationQueue and adding blocks as dependencies.
Creating the queue as DispatchQueue.global()
What is the thing I am missing here?
Edit: I added semaphore.wait() after the second async call. It actually was in my code, but I accidentally deleted it while pasting into the question; thanks to Chip Jarred for pointing it out.
UPDATE
This is how I changed the code with async/await:
func savePage(model: PageModel, savingHandler: @escaping () -> Void) {
    // Some code
    Task {
        do {
            let page = model
            let updatedPage = try await updatePageWithNewImageURLS(page)
            // some code
        } catch {
            // some code
        }
    }
    // Some code
}

private func saveImagesToFireBaseStorage(pictureURL: String?) async -> String? {
    // some code
    return downloadURL.absoluteString
}

private func updatePageWithNewImageURLS(_ page: PageModel) async throws -> PageModel {
    let picUrl1 = await saveImagesToFireBaseStorage(pictureURL: page.picURL1)
    let picUrl2 = await saveImagesToFireBaseStorage(pictureURL: page.picURL2)
    let picUrl3 = await saveImagesToFireBaseStorage(pictureURL: page.picURL3)
    let newPage = page
    return try await addNewUrlstoPage(newPage, url1: picUrl1, url2: picUrl2, url3: picUrl3)
}

private func addNewUrlstoPage(_ page: PageModel, url1: String?, url2: String?, url3: String?) async throws -> PageModel {
    var newPage = page
    if let url1 = url1 { newPage.picURL1 = url1 }
    if let url2 = url2 { newPage.picURL2 = url2 }
    if let url3 = url3 { newPage.picURL3 = url3 }
    return newPage
}
So one async function gets a new photo URL for an old URL, a second async function calls that first function several times to get new URLs for all three URLs of the object, and then calls a third async function to create and return an updated object with the new values.
This is my first time using async/await.
Let's look at your first async call:
newQueue.async {
    if let picURL1 = model.picURL1 {
        self.saveImagesToFireBaseStorage(pictureURL: picURL1) { urlString in
            let url = urlString  // (2) urlString value is valid and exists
            page.picURL1 = url   // (3) modified the newly created "page" object
            print(page.picURL1!) // (4) it works, object prints modified
        }
    }
    semaphore.signal()
}
I would guess that the inner closure, that is, the one passed to saveImagesToFireBaseStorage, is also called asynchronously. If I'm right about that, then saveImagesToFireBaseStorage returns almost immediately, the signal executes, but the inner closure has not run yet, so the new value isn't yet set. Then, after some latency, the inner closure is finally called, but that's after your "outer" code that depends on page.picURL1 has already run, so page.picURL1 ends up being set afterwards.
So you need to call signal in the inner closure, but you still have to handle the case where the inner closure isn't called. I'm thinking something like this:
newQueue.async {
    if let picURL1 = model.picURL1 {
        self.saveImagesToFireBaseStorage(pictureURL: picURL1) { urlString in
            let url = urlString
            page.picURL1 = url
            print(page.picURL1!)
            semaphore.signal() // <--- ADDED THIS
        }
        /*
         If saveImagesToFireBaseStorage might not call the closure,
         such as on error, you need to detect that and call signal here
         in that case as well, or perhaps in some closure that's called in
         the failure case. That depends on your design.
         */
    }
    else { semaphore.signal() } // <--- MOVED INTO `else` block
}
Your second async would need to be modified similarly.
I notice that you're not calling wait after the second async, the one that sets page.picURL2. So you have 2 calls to wait, but 3 calls to signal. That wouldn't affect whether page.picURL1 is set properly in the first async, but it does mean that semaphore will have unbalanced waits and signals at the end of your code example, and the blocking behavior of wait after the third async may not be as you expect it to be.
If it's an option for your project, refactoring to use async and await keywords would resolve the problem in a way that's easier to maintain, because it would remove the need for the semaphore entirely.
Also, if my premise that saveImagesToFireBaseStorage is called asynchronously is correct, you don't really need the async calls at all, unless there is more code in their closures that isn't shown.
Update
In comments, it was revealed that using the solution above caused the app to "freeze". This suggests that saveImagesToFireBaseStorage calls its completion handler on the same queue that savePage(model:savingHandler:) is called on, and that's almost certainly DispatchQueue.main. The problem is that DispatchQueue.main is a serial queue (as is newQueue), which means it won't execute any new tasks until the next iteration of its run loop, but it never gets to do that, because it calls semaphore.wait(), which blocks while waiting for the completion handler of saveImagesToFireBaseStorage to call semaphore.signal(). By waiting, it prevents the very thing it's waiting for from ever executing.
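Stripped of the Firebase details, the deadlock described above boils down to something like this (a minimal illustration of my own, not the poster's code; imagine it running on an app's main queue):

import Dispatch

let semaphore = DispatchSemaphore(value: 0)

// Assume this code runs on the main queue of an app:
DispatchQueue.main.async {
    semaphore.signal()   // never runs, because main is blocked in wait() below
}
semaphore.wait()         // blocks the main queue, so the async block above can never execute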
You say in comments that using async/await solved the problem. That's probably the cleanest way, for lots of reasons, not the least of which is that you get compile time checks for a lot of potential problems.
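For reference, here is a minimal sketch of what the bridging might look like (my own illustration, not the poster's code; it assumes saveImagesToFireBaseStorage calls its completion handler exactly once, and the wrapper name is made up):

private func imageURLFromFireBaseStorage(for pictureURL: String) async -> String {
    await withCheckedContinuation { continuation in
        self.saveImagesToFireBaseStorage(pictureURL: pictureURL) { urlString in
            continuation.resume(returning: urlString)
        }
    }
}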
In the meantime, I came up with this solution using DispatchSemaphore. I'll put it here in case it helps someone.
First, I moved the creation of newQueue outside of savePage. Creating a dispatch queue is a fairly heavyweight operation, so you should create whatever queues you need once and then reuse them. I assume it's a global variable or an instance property of whatever object owns savePage.
The second thing is that savePage doesn't block anymore, but we still want the sequential behavior, preferably without going to completion handler hell (deeply nested completion handlers).
I refactored the code that calls saveImagesToFireBaseStorage into a local function and made it behave synchronously by using a DispatchSemaphore to block until its completion handler is called, but only within that local function. I create the DispatchSemaphore outside of that function so that I can reuse the same instance for both invocations, but I treat it as though it were a local variable in the nested function.
I also have to use a time-out for the wait, because I don't know if I can assume that the completion handler for saveImagesToFireBaseStorage will always be called. Are there failure conditions in which it wouldn't be? The time-out value is almost certainly wrong, and should be considered a place-holder for the real value. You'd need to determine the maximum latency you want to allow based on your knowledge of your app and its working environment (servers, networks, etc...).
The local function uses a key path to allow setting differently named properties of PageModel (picURL1 vs picURL2), while still consolidating the duplicated code.
Here's the refactored savePage code:
func savePage(model: PageModel, savingHandler: @escaping (Bool) -> Void)
{
    // some code .....
    var page = model

    let saveImageDone = DispatchSemaphore(value: 0)
    let waitTimeOut = DispatchTimeInterval.microseconds(500)

    func saveModelImageToFireBaseStorage(
        from urlPath: WritableKeyPath<PageModel, String?>)
    {
        if let picURL = model[keyPath: urlPath]
        {
            saveImagesToFireBaseStorage(pictureURL: picURL)
            {
                page[keyPath: urlPath] = $0
                print("page.\(urlPath) = \(page[keyPath: urlPath]!)")
                saveImageDone.signal()
            }

            if .timedOut == saveImageDone.wait(timeout: .now() + waitTimeOut) {
                print("saveImagesToFireBaseStorage timed out!")
            }
        }
    }

    newQueue.async
    {
        saveModelImageToFireBaseStorage(from: \.picURL1)
        saveModelImageToFireBaseStorage(from: \.picURL2)

        print(page.picURL1!)

        do {
            try self.pageDocumentRef.setData(from: page)

            // Assume handler might do UI stuff, so it needs to execute
            // on main
            DispatchQueue.main.async { savingHandler(true) }
        } catch let error {
            print("Error writing city to Firestore: \(error)")
            // Should savingHandler(false) be called here?
        }
    }
}
It's important to note that savePage does not block the thread it's called on, which I believe to be DispatchQueue.main. I assume that any code that is sequentially called after a call to savePage, if any, does not depend on the results of calling savePage. Anything that does depend on it should be in its savingHandler.
And speaking of savingHandler, I have to assume that it might update the UI, and since the point where it would be called is inside a closure for newQueue.async, it has to be explicitly dispatched to DispatchQueue.main, so I do that.
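A hypothetical call site might then look like this (refreshPageList is just an illustrative name, not from the question):

savePage(model: page) { success in
    // This handler is dispatched onto the main queue by savePage,
    // so it's safe to update the UI here.
    guard success else { return }
    self.refreshPageList()
}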

Completion block is getting triggered even before my operation completes in main method

I am trying to create a user in Firebase using OperationQueue and Operation. I placed the Firebase Auth call in the operation's main method. The completion block of the operation is getting triggered even before the Firebase registration process has succeeded.
RegistrationViewModal.swift
// This is the operation initialization
let operationQueues = OperationQueues()
let registrationRecord = RegistrationRecord(user: self.user!, encryptedData: self.fireBaseAuthCompliance)
let userRegistrationOperation = UserRegistrationOperation(registrationRecord: registrationRecord)

userRegistrationOperation.completionBlock = {
    // I am expecting this completion block will be called only when my
    // Firebase invocation in the main() method is finished
    DispatchQueue.main.async {
        // Since this block is getting triggered even before completion,
        // the value is returning as null
        self.user?.uid = userRegistrationOperation.registrationRecord.user.uid
    }
}

operationQueues.userRegistrationQueue.addOperation(userRegistrationOperation)
UserRegistrationOperation.swift
class OperationQueues {
    lazy var userRegistrationQueue: OperationQueue = {
        var queue = OperationQueue()
        queue.maxConcurrentOperationCount = 1
        queue.name = "User registration queue"
        return queue
    }()
}

class UserRegistrationOperation: Operation {
    var registrationRecord: RegistrationRecord

    init(registrationRecord: RegistrationRecord) {
        self.registrationRecord = registrationRecord
    }

    override func main() {
        guard !isCancelled else { return }

        self.registrationRecord.state = RegistrationStatus.pending

        // Firebase invocation to create a user in Firebase Auth
        Auth.auth().createUser(withEmail: self.registrationRecord.user.userEmail, password: self.registrationRecord.encryptedData) { [weak self] (result, error) in
            if error != nil {
                print("Error occurred while user registration process")
                self?.registrationRecord.state = RegistrationStatus.failed
                return
            }
            self?.registrationRecord.user.uid = result?.user.uid
            self?.registrationRecord.state = RegistrationStatus.processed
        }
    }
}
The problem is that your operation is initiating an asynchronous process, but the operation finishes when the asynchronous task is started, not when the asynchronous task finishes.
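Here is a tiny illustration of that point (my own, not the poster's code): the operation's block returns as soon as the asynchronous work is dispatched, so completionBlock can fire before the asynchronous work ever runs.

let queue = OperationQueue()

let op = BlockOperation {
    DispatchQueue.global().async {
        print("async work")                               // the operation does NOT wait for this
    }
}
op.completionBlock = { print("operation finished") }      // can print before "async work"

queue.addOperation(op)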
You need to do the KVO associated with a “concurrent” operation, as outlined in the documentation:
If you are creating a concurrent operation, you need to override the following methods and properties at a minimum:
start()
isAsynchronous
isExecuting
isFinished
In a concurrent operation, your start() method is responsible for starting the operation in an asynchronous manner. Whether you spawn a thread or call an asynchronous function, you do it from this method. Upon starting the operation, your start() method should also update the execution state of the operation as reported by the isExecuting property. You do this by sending out KVO notifications for the isExecuting key path, which lets interested clients know that the operation is now running. Your isExecuting property must also provide the status in a thread-safe manner.
Upon completion or cancellation of its task, your concurrent operation object must generate KVO notifications for both the isExecuting and isFinished key paths to mark the final change of state for your operation. (In the case of cancellation, it is still important to update the isFinished key path, even if the operation did not completely finish its task. Queued operations must report that they are finished before they can be removed from a queue.) In addition to generating KVO notifications, your overrides of the isExecuting and isFinished properties should also continue to report accurate values based on the state of your operation.
Now all of that sounds quite hairy, but it's actually not that bad. One way is to write a base operation class that takes care of all of this KVO stuff, and this answer outlines one example implementation.
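As a rough idea of what such a base class involves, here is a minimal sketch (a simplified version of the common pattern, not the implementation from the linked answer):

import Foundation

class AsynchronousOperation: Operation {
    private let stateLock = NSLock()
    private var _executing = false
    private var _finished = false

    override var isAsynchronous: Bool { return true }

    override var isExecuting: Bool {
        stateLock.lock(); defer { stateLock.unlock() }
        return _executing
    }

    override var isFinished: Bool {
        stateLock.lock(); defer { stateLock.unlock() }
        return _finished
    }

    override func start() {
        guard !isCancelled else {
            finish()                        // queues still need to see isFinished
            return
        }

        willChangeValue(forKey: "isExecuting")
        stateLock.lock(); _executing = true; stateLock.unlock()
        didChangeValue(forKey: "isExecuting")

        main()                              // subclasses kick off their async work here
    }

    /// Subclasses must call this when their asynchronous work is done.
    func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        stateLock.lock(); _executing = false; _finished = true; stateLock.unlock()
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}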
Then you can subclass AsynchronousOperation instead, and make sure to call finish (or whatever triggers the isFinished KVO) when the task is done:
class UserRegistrationOperation: AsynchronousOperation {
    var registrationRecord: RegistrationRecord

    init(registrationRecord: RegistrationRecord) {
        self.registrationRecord = registrationRecord
        super.init() // whenever you subclass, remember to call `super`
    }

    override func main() {
        self.registrationRecord.state = .pending

        // Firebase invocation to create a user in Firebase Auth
        Auth.auth().createUser(withEmail: registrationRecord.user.userEmail, password: registrationRecord.encryptedData) { [weak self] result, error in
            defer { self?.finish() } // make sure to call `finish` regardless of how we leave this closure

            guard let result = result, error == nil else {
                print("Error occurred while user registration process")
                self?.registrationRecord.state = .failed
                return
            }

            self?.registrationRecord.user.uid = result.user.uid
            self?.registrationRecord.state = .processed
        }
    }
}
There are lots of ways to implement that AsynchronousOperation class, and this is just one example. But once you have a class that nicely encapsulates the concurrent-operation KVO, you can subclass it and write your own concurrent operations with very few changes to your code.
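For instance, the calling code from the question stays essentially the same; the difference is that completionBlock now fires only after finish() is called inside the Firebase callback (a sketch reusing the question's own names):

let operation = UserRegistrationOperation(registrationRecord: registrationRecord)
operation.completionBlock = {
    // Runs only after the operation calls finish(), i.e. after the
    // Firebase callback has completed.
    DispatchQueue.main.async {
        self.user?.uid = operation.registrationRecord.user.uid
    }
}
operationQueues.userRegistrationQueue.addOperation(operation)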

How to handle Race Condition Read/Write Problem in Swift?

I have got a concurrent queue with a dispatch barrier from a Ray Wenderlich post. Example:
private let concurrentPhotoQueue = DispatchQueue(label: "com.raywenderlich.GooglyPuff.photoQueue", attributes: .concurrent)
Write operations are done in:
func addPhoto(_ photo: Photo) {
    concurrentPhotoQueue.async(flags: .barrier) { [weak self] in
        // 1
        guard let self = self else {
            return
        }
        // 2
        self.unsafePhotos.append(photo)
        // 3
        DispatchQueue.main.async { [weak self] in
            self?.postContentAddedNotification()
        }
    }
}
While the read operation is done in:
var photos: [Photo] {
    var photosCopy: [Photo]!
    // 1
    concurrentPhotoQueue.sync {
        // 2
        photosCopy = self.unsafePhotos
    }
    return photosCopy
}
This resolves the race condition. But why is only the write operation done with a barrier and the read done with sync? Why is the read not done with a barrier and the write with sync? With a sync write, it would wait until the reads complete, like a lock, while a barrier read would still only be a read operation. For example, with the flipped version:
set(10, forKey: "Number")
print(object(forKey: "Number"))
set(20, forKey: "Number")
print(object(forKey: "Number"))

public func set(_ value: Any?, forKey key: String) {
    concurrentQueue.sync {
        self.dictionary[key] = value
    }
}

public func object(forKey key: String) -> Any? {
    // returns after concurrentQueue has finished the operation,
    // because concurrentQueue is run synchronously
    var result: Any?
    concurrentQueue.async(flags: .barrier) {
        result = self.dictionary[key]
    }
    return result
}
With this flipped behavior, I am getting nil both times; with the barrier on the write (the original setup), it correctly gives 10 and 20.
You ask:
Why is Read not done with barrier ... ?.
In this reader-writer pattern, you don't use a barrier with "read" operations because reads are allowed to happen concurrently with respect to other "reads" without impacting thread safety. That's the whole motivating idea behind the reader-writer pattern: to allow concurrent reads.
So, you could use barrier with “reads” (it would still be thread-safe), but it would unnecessarily negatively impact performance if multiple “read” requests happened to be called at the same time. If two “read” operations can happen concurrently with respect to each other, why not let them? Don’t use barriers (reducing performance) unless you absolutely need to.
Bottom line, only “writes” need to happen with barrier (ensuring that they’re not done concurrently with respect to any “reads” or “writes”). But no barrier is needed (or desired) for “reads”.
[Why not] ... write with sync?
You could “write” with sync, but, again, why would you? It would only degrade performance. Let’s imagine that you had some reads that were not yet done and you dispatched a “write” with a barrier. The dispatch queue will ensure for us that a “write” dispatched with a barrier won’t happen concurrently with respect to any other “reads” or “writes”, so why should the code that dispatched that “write” sit there and wait for the “write” to finish?
Using sync for writes would only negatively impact performance, and offers no benefit. The question is not “why not write with sync?” but rather “why would you want to write with sync?” And the answer to that latter question is, you don’t want to wait unnecessarily. Sure, you have to wait for “reads”, but not “writes”.
You mention:
With the flip behavior, I am getting nil ...
Yep, so let's consider your hypothetical "read" operation with async:
public func object(forKey key: String) -> Any? {
    var result: Any?
    concurrentQueue.async {
        result = self.dictionary[key]
    }
    return result
}
This effectively says "set up a variable called result, dispatch a task to retrieve it asynchronously, but don't wait for the read to finish before returning whatever result currently contains (i.e., nil)."
You can see why reads must happen synchronously, because you obviously can’t return a value before you update the variable!
So, reworking your latter example, you read synchronously without barrier, but write asynchronously with barrier:
public func object(forKey key: String) -> Any? {
    return concurrentQueue.sync {
        self.dictionary[key]
    }
}

public func set(_ value: Any?, forKey key: String) {
    concurrentQueue.async(flags: .barrier) {
        self.dictionary[key] = value
    }
}
Note, because the sync method in the "read" operation returns whatever its closure returns, you can simplify the code quite a bit, as shown above.
Or, personally, rather than object(forKey:) and set(_:forKey:), I’d just write my own subscript operator:
public subscript(key: String) -> Any? {
    get {
        concurrentQueue.sync {
            dictionary[key]
        }
    }
    set {
        concurrentQueue.async(flags: .barrier) {
            self.dictionary[key] = newValue
        }
    }
}
Then you can do things like:
store["Number"] = 10
print(store["Number"])
store["Number"] = 20
print(store["Number"])
If you find this reader-writer pattern too complicated, note that you could just use a serial queue (which is like using a barrier for both "reads" and "writes"). You'd still probably do sync "reads" and async "writes"; that works too, as sketched below. But in environments with high contention for "reads", it's just a tad less efficient than the above reader-writer pattern.
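A quick sketch of that serial-queue variant (the queue label is illustrative; dictionary is the same backing store as in the examples above):

private let serialQueue = DispatchQueue(label: "com.example.store.serial")   // serial by default

public subscript(key: String) -> Any? {
    get {
        serialQueue.sync { dictionary[key] }                      // block until the read completes
    }
    set {
        serialQueue.async { self.dictionary[key] = newValue }     // don't make the caller wait for writes
    }
}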

Dispose (cancel) observable. SubscribeOn and observeOn different schedulers

Reformulated question
I have reformulated my question to cover the general case.
I want to generate items with RxSwift on a background thread (loading from disk, long-running calculations, etc.) and observe them on the main thread. And I want to be sure that no items will be delivered after dispose (called from the main thread).
According to documentation (https://github.com/ReactiveX/RxSwift/blob/master/Documentation/GettingStarted.md#disposing):
So can this code print something after the dispose call is executed? The answer is: it depends.
If the scheduler is a serial scheduler (ex. MainScheduler) and dispose is called on the same serial scheduler, the answer is no.
Otherwise it is yes.
But when using subscribeOn and observeOn with different schedulers, we cannot guarantee that nothing will be emitted after dispose (whether manual or via a dispose bag, it does not matter).
How should I generate items (images, for example) in the background and be sure the result will not be used after dispose?
I made a workaround in my real project, but I want to solve this problem properly and understand how to avoid it in similar cases.
In my test project I have used small time intervals; they demonstrate the problem perfectly!
import RxSwift

class TestClass {
    private var disposeBag = DisposeBag()
    private var isCancelled = false

    init(cancelAfter: TimeInterval, longRunningTaskDuration: TimeInterval) {
        assert(Thread.isMainThread)
        load(longRunningTaskDuration: longRunningTaskDuration)
        DispatchQueue.main.asyncAfter(deadline: .now() + cancelAfter) { [weak self] in
            self?.cancel()
        }
    }

    private func load(longRunningTaskDuration: TimeInterval) {
        assert(Thread.isMainThread)
        // We set task not cancelled
        isCancelled = false
        DataService
            .shared
            .longRunngingTaskEmulation(sleepFor: longRunningTaskDuration)
            // We want long running task to be executed in background thread
            .subscribeOn(ConcurrentDispatchQueueScheduler.init(queue: .global()))
            // We want to process result in Main thread
            .observeOn(MainScheduler.instance)
            .subscribe(onSuccess: { [weak self] (result) in
                assert(Thread.isMainThread)
                guard let strongSelf = self else {
                    return
                }
                if !strongSelf.isCancelled {
                    print("Should not be called! Task is cancelled!")
                } else {
                    // Do something with result, set image to UIImageView, for instance
                    // But if task was cancelled, this method will set invalid (old) data
                    print(result)
                }
            }, onError: nil)
            .disposed(by: disposeBag)
    }

    // Cancel all tasks. Can be called in prepareForReuse.
    private func cancel() {
        assert(Thread.isMainThread)
        // For test purposes. After cancel, old task should not make any changes.
        isCancelled = true
        // Cancel all tasks by creating new DisposeBag (and disposing old)
        disposeBag = DisposeBag()
    }
}

class DataService {
    static let shared = DataService()
    private init() { }

    func longRunngingTaskEmulation(sleepFor: TimeInterval) -> Single<String> {
        return Single
            .deferred {
                assert(!Thread.isMainThread)
                // Emulate long running task
                Thread.sleep(forTimeInterval: sleepFor)
                // Return dummy result for test purposes.
                return .just("Success")
            }
    }
}
class MainClass {
    static let shared = MainClass()
    private init() { }

    func main() {
        Timer.scheduledTimer(withTimeInterval: 0.150, repeats: true) { [weak self] (_) in
            assert(Thread.isMainThread)
            let longRunningTaskDuration: TimeInterval = 0.050
            let offset = TimeInterval(arc4random_uniform(20)) / 1000.0
            let cancelAfter = 0.040 + offset
            self?.executeTest(cancelAfter: cancelAfter, longRunningTaskDuration: longRunningTaskDuration)
        }
    }

    var items: [TestClass] = []

    func executeTest(cancelAfter: TimeInterval, longRunningTaskDuration: TimeInterval) {
        let item = TestClass(cancelAfter: cancelAfter, longRunningTaskDuration: longRunningTaskDuration)
        items.append(item)
    }
}
Call MainClass.shared.main() somewhere to start.
We call a method to load some data and later we call cancel (all from the main thread). After cancel we sometimes still receive the result (on the main thread too), but it is already stale.
In the real project, TestClass is a UITableViewCell subclass and the cancel method is called in prepareForReuse. Then the cell is reused and new data is set on it. Later we get the result of the OLD task, and the old image is set on the cell!
ORIGINAL QUESTION (OLD):
I would like to load an image with RxSwift in iOS. I want to load the image in the background and use it on the main thread, so I subscribeOn a background scheduler and observeOn the main scheduler. The function looks like this:
func getImage(path: String) -> Single<UIImage> {
    return Single
        .deferred {
            if let image = UIImage(contentsOfFile: path) {
                return Single.just(image)
            } else {
                return Single.error(SimpleError())
            }
        }
        .subscribeOn(ConcurrentDispatchQueueScheduler(qos: .background))
        .observeOn(MainScheduler.instance)
}
But I get problems with cancellation. Because different schedulers are used to create items and to call dispose (disposing from the main thread), the subscription event can be raised after dispose is called. So in my UITableViewCell use case I receive an invalid (old) image.
If I create the item (load the image) on the same scheduler that observes (the main thread), everything works fine!
But I would like to load images in the background, and I want loading to be cancelled after disposing (in prepareForReuse or when a new path is set). What is the common template for this?
EDIT:
I have created a test project, where I can emulate the problem when the event is received after dispose.
And I have one simple solution that works: we should emit items on the same scheduler. So we capture the current queue and emit items there (after the long-running task completes).
func getImage2(path: String) -> Single<UIImage> {
    return Single
        .create(subscribe: { (single) -> Disposable in
            // We capture the current queue to execute the callback in
            // TODO: It can be nil if called from a background thread
            let callbackQueue = OperationQueue.current
            // For async calculations
            OperationQueue().addOperation {
                // Perform any long-running task
                let image = UIImage(contentsOfFile: path)
                // Emit item in captured queue
                callbackQueue?.addOperation {
                    if let result = image {
                        single(.success(result))
                    } else {
                        single(.error(SimpleError()))
                    }
                }
            }
            return Disposables.create()
        })
        .observeOn(MainScheduler.instance)
}
But it is not the Rx way, and I think this is not the best solution.
Maybe I should use CurrentThreadScheduler to emit items, but I cannot understand how. Is there any tutorial or example of item generation with scheduler usage? I did not find any.
Interesting test case. There is a small bug: it should be if strongSelf.isCancelled instead of if !strongSelf.isCancelled. Apart from that, the test case shows the problem.
I would intuitively expect that it is checked whether a dispose has already taken place before emitting, if it happens on the same thread.
Additionally, I found this:
just to make this clear, if you call dispose on one thread (like
main), you won't observe any elements on that same thread. That is a
guarantee.
see here: https://github.com/ReactiveX/RxSwift/issues/38
So maybe it is a bug.
To be sure I opened an issue here:
https://github.com/ReactiveX/RxSwift/issues/1778
Update
It seems it was actually a bug. Meanwhile, the fine people at RxSwift have confirmed it and fortunately fixed it very quickly. See the issue link above.
Testing
The bug was fixed with commit bac86346087c7e267dd5a620eed90a7849fd54ff. So if you are using CocoaPods, you can simply use something like the following for testing:
target 'RxSelfContained' do
  use_frameworks!

  pod 'RxAtomic', :git => 'https://github.com/ReactiveX/RxSwift.git', :commit => 'bac86346087c7e267dd5a620eed90a7849fd54ff'
  pod 'RxSwift', :git => 'https://github.com/ReactiveX/RxSwift.git', :commit => 'bac86346087c7e267dd5a620eed90a7849fd54ff'
end

Using DispatchGroup() in Swift 3 to perform a task?

With Swift 3, the GCD API has changed and dispatch groups are now DispatchGroup(), and I'm trying to learn to use it in my code.
Currently I have a function in another class that attempts to download a file and output its speed. I'd like to have that function finish first, because I assign its speed to a var that I will be using in the first class to perform other tasks that depend upon that var.
It goes something like this:
Second class:
func checkSpeed() {
    // call other functions and perform task to download file from link
    // print out speed of download
    nMbps = speedOfDownload
}
First Class:
let myGroup = DispatchGroup()
let check: SecondClass = SecondClass()

myGroup.enter()
check.checkSpeed()
myGroup.leave()

myGroup.notify(queue: DispatchQueue.main, execute: {
    print("Finished all requests.")
    print("speed = \(check.nMbps)")
})
The problem is that "Finished all requests." gets output first, thus returning nil for speed; afterwards checkSpeed finishes and outputs the correct download speed.
I believe I'm doing this wrong, but I'm not sure?
How can I ensure that speed obtains the correct value after the completion of checkSpeed in my first class?
The details of checkSpeed is exactly the same as connectedToNetwork from GitHub: connectedness.swift
You need to call DispatchGroup.leave() when the entered task has completed. So, in your code, myGroup.leave() needs to be placed at the end of the completion handler inside your checkSpeed() method.
You may need to modify your code like this:
func checkSpeed(in myGroup: DispatchGroup) {
    //...
    ...downLoadTask... { ...its completion handler... in
        //...
        // print out speed of download
        nMbps = speedOfDownload
        myGroup.leave() //<- This needs to be placed at the end of the completion handler
    }
    // You should not place any code after invoking the asynchronous task.
}
And use it as:
myGroup.enter()
check.checkSpeed(in: myGroup)

myGroup.notify(queue: DispatchQueue.main, execute: {
    print("Finished all requests.")
    print("speed = \(check.nMbps)")
})
But, as noted in vadian's comment or Pangu's answer, you usually do not use DispatchGroup for a single asynchronous task.
ADDITION
I need to say, I strongly recommend the completion-handler pattern shown in Pangu's answer. It's a more general way to handle asynchronous tasks.
If you have modified your checkSpeed() to checkSpeed(completion:) as suggested, you can easily experiment with DispatchGroup like this:
let myGroup = DispatchGroup()
let check: SecondClass = SecondClass()
let anotherTask: ThirdClass = ThirdClass()

myGroup.enter() // for `checkSpeed`
myGroup.enter() // for `doAnotherAsync`

check.checkSpeed {
    myGroup.leave()
}
anotherTask.doAnotherAsync {
    myGroup.leave()
}

myGroup.notify(queue: DispatchQueue.main) {
    print("Finished all requests.")
    print("speed = \(check.nMbps)")
}
With the hint provided in the comment and the solution found here, from @vadian: since I am only performing one task, I used an async completion handler.
Second Class:
func checkSpeed(completion: @escaping () -> ()) {
    // call other functions and perform task to download file from link
    // print out speed of download
    nMbps = speedOfDownload
    completion()
}
First Class:
let check: SecondClass = SecondClass()

check.checkSpeed {
    print("speed = \(check.nMbps)")
}
Now checkSpeed will complete first and speed is assigned the appropriate value.
