I am using a for loop coupled with DispatchQueue.main.asyncAfter to incrementally decrease playback volume (a fade-out) over the course of a 5- or 10-minute duration.
Here is how I am currently implementing it:
for i in 0...(numberOfSecondsToFadeOut * timesChangePerSecond) {
    DispatchQueue.main.asyncAfter(deadline: .now() + Double(i) / Double(timesChangePerSecond)) {
        if self.activityHasEnded {
            NSLog("Activity has ended") // This will keep on printing
        } else {
            let volumeSetTo = originalVolume - reductionAmount * Float(i)
            self.setVolume(volumeSetTo)
        }
    }
    if self.activityHasEnded {
        break
    }
}
My goal is to have activityHasEnded act as the breaker. The issue, as noted in the comment, is that despite using break, the NSLog keeps printing every period. What would be a better way to fully break out of this for loop that uses DispatchQueue.main.asyncAfter?
Updated: As noted by Rob, it makes more sense to use a Timer. Here is what I did:
self.fadeOutTimer = Timer.scheduledTimer(withTimeInterval: timerFrequency, repeats: true) { (timer) in
    let currentVolume = self.getCurrentVolume()
    if currentVolume > destinationVolume {
        let volumeSetTo = currentVolume - reductionAmount
        self.setVolume(volumeSetTo)
        print("Lowered volume to \(volumeSetTo)")
    }
}
When the timer is no longer needed, I call self.fadeOutTimer?.invalidate()
You don’t want to use asyncAfter: while you could use the DispatchWorkItem rendition (which is cancelable), you will end up with a mess trying to keep track of all of the individual work items. Worse, a series of individually dispatched items is subject to “timer coalescing”, where later tasks start to clump together, no longer firing at the desired interval.
The simple solution is to use a repeating Timer, which avoids coalescing and is easily invalidated when you want to stop it.
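For illustration, here is a minimal sketch of that repeating-timer approach, reusing the names from the question's update (fadeOutTimer, timerFrequency, destinationVolume, reductionAmount, getCurrentVolume/setVolume) and assuming they are properties/methods on the same object; the only addition is that the timer invalidates itself once the target volume is reached:

func startFadeOut() {
    fadeOutTimer?.invalidate() // don't stack multiple fades
    fadeOutTimer = Timer.scheduledTimer(withTimeInterval: timerFrequency, repeats: true) { [weak self] timer in
        guard let self = self else { timer.invalidate(); return }
        let currentVolume = self.getCurrentVolume()
        if currentVolume > self.destinationVolume {
            self.setVolume(currentVolume - self.reductionAmount)
        } else {
            timer.invalidate() // fade finished, stop firing
        }
    }
}

Calling fadeOutTimer?.invalidate() elsewhere (e.g. when the activity ends) stops the fade immediately, which is the cancellation behavior the question's break was trying to achieve.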
You can utilise DispatchWorkItem, which can be dispatched to a DispatchQueue asynchronously and can also be cancelled even after it has been dispatched.
for i in 0...(numberOfSecondsToFadeOut * timesChangePerSecond) {
    let work = DispatchWorkItem {
        if self.activityHasEnded {
            NSLog("Activity has ended") // This will keep on printing
        } else {
            let volumeSetTo = originalVolume - reductionAmount * Float(i)
            self.setVolume(volumeSetTo)
        }
    }
    DispatchQueue.main.asyncAfter(deadline: .now() + Double(i) / Double(timesChangePerSecond), execute: work)
    if self.activityHasEnded {
        work.cancel() // cancel the async work
        break         // exit the loop
    }
}
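If you do stay with asyncAfter, the alternative is the bookkeeping the first answer warns about: keep a reference to every scheduled work item so they can all be cancelled together when the activity ends. A minimal sketch, assuming the question's names (setVolume, originalVolume, reductionAmount, etc.) are available as properties, and adding a hypothetical pendingFadeItems array:

var pendingFadeItems = [DispatchWorkItem]()

func startFade() {
    for i in 0...(numberOfSecondsToFadeOut * timesChangePerSecond) {
        let work = DispatchWorkItem {
            self.setVolume(self.originalVolume - self.reductionAmount * Float(i))
        }
        pendingFadeItems.append(work)
        DispatchQueue.main.asyncAfter(deadline: .now() + Double(i) / Double(timesChangePerSecond),
                                      execute: work)
    }
}

func stopFade() {
    // Work items cancelled before their deadline never execute.
    pendingFadeItems.forEach { $0.cancel() }
    pendingFadeItems.removeAll()
}

This works, but as the answer above notes, a single repeating Timer that you invalidate is simpler than tracking a queue of work items.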
Background
I'm implementing a search. Each search query results in one DispatchWorkItem which is then queued for execution. As the user can trigger a new search faster than the previous one can be completed, I'd like to cancel the previous one as soon as I receive a new one.
This is my current setup:
var currentSearchJob: DispatchWorkItem?
let searchJobQueue = DispatchQueue(label: QUEUE_KEY)
func updateSearchResults(for searchController: UISearchController) {
let queryString = searchController.searchBar.text?.lowercased() ?? ""
// if there is already an (older) search job running, cancel it
currentSearchJob?.cancel()
// create a new search job
currentSearchJob = DispatchWorkItem() {
self.filter(queryString: queryString)
}
// start the new job
searchJobQueue.async(execute: currentSearchJob!)
}
Problem
I understand that dispatchWorkItem.cancel() doesn't kill the running task immediately. Instead, I need to check dispatchWorkItem.isCancelled manually. But how do I get the right dispatchWorkItem object in this case?
If I were setting currentSearchJob only once, I could simply access that attribute as done in that case. However, this isn't applicable here, because the attribute will be overwritten before the filter() method has finished. How do I know which instance is actually running the code in which I want to check dispatchWorkItem.isCancelled?
Ideally, I'd like to provide the newly-created DispatchWorkItem as an additional parameter to the filter() method. But that's not possible, because I'll get a "Variable used within its own initial value" error.
I'm new to Swift, so I hope I'm just missing something. Any help is appreciated very much!
The trick is how to have a dispatched task check whether it has been canceled. I'd actually suggest considering the OperationQueue approach, rather than using dispatch queues directly.
There are at least two approaches:
Most elegant, IMHO, is to just subclass Operation, passing whatever you want to it in the init method, and performing the work in the main method:
class SearchOperation: Operation {
    private var queryString: String

    init(queryString: String) {
        self.queryString = queryString
        super.init()
    }

    override func main() {
        // do something synchronous, periodically checking `isCancelled`
        // e.g., for illustrative purposes

        print("starting \(queryString)")

        for i in 0 ... 10 {
            if isCancelled { print("canceled \(queryString)"); return }
            print("    \(queryString): \(i)")
            heavyWork()
        }

        print("finished \(queryString)")
    }

    func heavyWork() {
        Thread.sleep(forTimeInterval: 0.5)
    }
}
Because that's in an Operation subclass, isCancelled implicitly refers to the operation itself rather than to some ivar, avoiding any confusion about what it's checking. And your "start a new query" code can just say "cancel anything currently on the relevant operation queue and add a new operation onto that queue":
private var searchQueue: OperationQueue = {
    let queue = OperationQueue()
    // queue.maxConcurrentOperationCount = 1   // make it serial if you want
    queue.name = Bundle.main.bundleIdentifier! + ".backgroundQueue"
    return queue
}()

func performSearch(for queryString: String) {
    searchQueue.cancelAllOperations()
    let operation = SearchOperation(queryString: queryString)
    searchQueue.addOperation(operation)
}
I recommend this approach as you end up with a small cohesive object, the operation, that nicely encapsulates a block of work that you want to do, in the spirit of the Single Responsibility Principle.
While the following is less elegant, technically you can also use BlockOperation, which is block-based, but for which you can decouple the creation of the operation from the adding of the closure to the operation. Using this technique, you can actually pass a reference to the operation to its own closure:
private weak var lastOperation: Operation?

func performSearch(for queryString: String) {
    lastOperation?.cancel()

    let operation = BlockOperation()
    operation.addExecutionBlock { [weak operation, weak self] in
        print("starting \(queryString)")

        for i in 0 ... 10 {
            if operation?.isCancelled ?? true { print("canceled \(queryString)"); return }
            print("    \(queryString): \(i)")
            self?.heavyWork()
        }

        print("finished \(queryString)")
    }

    searchQueue.addOperation(operation)
    lastOperation = operation
}

func heavyWork() {
    Thread.sleep(forTimeInterval: 0.5)
}
I only mention this for the sake of completeness. I'll use BlockOperation for one-off sort of stuff, but as soon as I want more sophisticated cancelation logic, I think the Operation subclass approach is the better design.
I should also mention that, in addition to more elegant cancelation capabilities, Operation objects offer all sorts of other sophisticated capabilities (e.g. asynchronously managing a queue of tasks that are themselves asynchronous, constraining the degree of concurrency, etc.). This is all beyond the scope of this question.
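For a flavor of those capabilities, here is a tiny hypothetical sketch (not part of the answer above) limiting concurrency and making one operation depend on another:

let queue = OperationQueue()
queue.maxConcurrentOperationCount = 2   // at most two operations run at once

let download = BlockOperation { /* fetch data */ }
let parse = BlockOperation { /* parse the downloaded data */ }
parse.addDependency(download)           // parse won't start until download finishes

queue.addOperations([download, parse], waitUntilFinished: false)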
You wrote:
"Ideally, I'd like to provide the newly-created DispatchWorkItem as an additional parameter"
You are wrong: to be able to cancel a running task, you need a reference to that task itself, not to the next one that is ready to be dispatched.
cancel() doesn't cancel a running task; it only sets the internal isCancelled flag in a thread-safe way, or removes the task from the queue before it starts executing. Once the task is executing, checking isCancelled gives you a chance to finish the job early (early return).
import PlaygroundSupport
import Foundation

PlaygroundPage.current.needsIndefiniteExecution = true

let queue = DispatchQueue.global(qos: .background)
let prq = DispatchQueue(label: "print.queue")

var task: DispatchWorkItem?

func work(task: DispatchWorkItem?) {
    sleep(1)
    var d = Date()
    if task?.isCancelled ?? true {
        prq.async {
            print("cancelled", d)
        }
        return
    }
    sleep(3)
    d = Date()
    prq.async {
        print("finished", d)
    }
}

for _ in 0..<3 {
    task?.cancel()
    let item = DispatchWorkItem {
        work(task: task)
    }
    item.notify(queue: prq) {
        print("done")
    }
    queue.asyncAfter(deadline: .now() + 0.5, execute: item)
    task = item
    sleep(1) // comment this line
}
In this example, only the very last job is really fully executed:
cancelled 2018-12-17 23:49:13 +0000
done
cancelled 2018-12-17 23:49:14 +0000
done
finished 2018-12-17 23:49:18 +0000
done
Try commenting out the last line of the loop (the sleep(1)) and it prints
done
done
finished 2018-12-18 00:07:28 +0000
done
The difference is that the first two executions never happened (they were removed from the dispatch queue before execution).
Revised question
I have revised my question to cover the general case.
I want to generate items with RxSwift on a background thread (loading from disk, long-running calculations, etc.) and observe the items on the main thread. And I want to be sure that no items will be delivered after dispose (called from the main thread).
According to documentation (https://github.com/ReactiveX/RxSwift/blob/master/Documentation/GettingStarted.md#disposing):
So can this code print something after the dispose call is executed? The answer is: it depends.
If the scheduler is a serial scheduler (ex. MainScheduler) and dispose is called on the same serial scheduler, the answer is no.
Otherwise it is yes.
But in the case of using subscribeOn and observeOn with different schedulers, we cannot guarantee that nothing will be emitted after dispose (whether manual or via a dispose bag, it does not matter).
How should I generate items (images, for example) in the background and be sure that the result will not be used after dispose?
I made a workaround in my real project, but I want to solve this problem properly and understand how to avoid it in similar cases.
In my test project I have used small time intervals; they demonstrate the problem perfectly!
import RxSwift

class TestClass {

    private var disposeBag = DisposeBag()
    private var isCancelled = false

    init(cancelAfter: TimeInterval, longRunningTaskDuration: TimeInterval) {
        assert(Thread.isMainThread)
        load(longRunningTaskDuration: longRunningTaskDuration)
        DispatchQueue.main.asyncAfter(deadline: .now() + cancelAfter) { [weak self] in
            self?.cancel()
        }
    }

    private func load(longRunningTaskDuration: TimeInterval) {
        assert(Thread.isMainThread)
        // We set the task to not cancelled
        isCancelled = false

        DataService
            .shared
            .longRunngingTaskEmulation(sleepFor: longRunningTaskDuration)
            // We want the long-running task to be executed on a background thread
            .subscribeOn(ConcurrentDispatchQueueScheduler.init(queue: .global()))
            // We want to process the result on the main thread
            .observeOn(MainScheduler.instance)
            .subscribe(onSuccess: { [weak self] (result) in
                assert(Thread.isMainThread)
                guard let strongSelf = self else {
                    return
                }
                if !strongSelf.isCancelled {
                    print("Should not be called! Task is cancelled!")
                } else {
                    // Do something with result, set image to UIImageView, for instance
                    // But if task was cancelled, this method will set invalid (old) data
                    print(result)
                }
            }, onError: nil)
            .disposed(by: disposeBag)
    }

    // Cancel all tasks. Can be called in prepareForReuse.
    private func cancel() {
        assert(Thread.isMainThread)
        // For test purposes. After cancel, the old task should not make any changes.
        isCancelled = true
        // Cancel all tasks by creating a new DisposeBag (and disposing the old one)
        disposeBag = DisposeBag()
    }
}

class DataService {

    static let shared = DataService()
    private init() { }

    func longRunngingTaskEmulation(sleepFor: TimeInterval) -> Single<String> {
        return Single
            .deferred {
                assert(!Thread.isMainThread)
                // Emulate a long-running task
                Thread.sleep(forTimeInterval: sleepFor)
                // Return a dummy result for test purposes.
                return .just("Success")
            }
    }
}
class MainClass {

    static let shared = MainClass()
    private init() { }

    func main() {
        Timer.scheduledTimer(withTimeInterval: 0.150, repeats: true) { [weak self] (_) in
            assert(Thread.isMainThread)
            let longRunningTaskDuration: TimeInterval = 0.050
            let offset = TimeInterval(arc4random_uniform(20)) / 1000.0
            let cancelAfter = 0.040 + offset
            self?.executeTest(cancelAfter: cancelAfter, longRunningTaskDuration: longRunningTaskDuration)
        }
    }

    var items: [TestClass] = []

    func executeTest(cancelAfter: TimeInterval, longRunningTaskDuration: TimeInterval) {
        let item = TestClass(cancelAfter: cancelAfter, longRunningTaskDuration: longRunningTaskDuration)
        items.append(item)
    }
}
Call MainClass.shared.main() somewhere to start.
We call a method to load some data and later we call cancel (all from the main thread). After cancel we sometimes still receive the result (also on the main thread), but by then it is already stale.
In the real project, TestClass is a UITableViewCell subclass and the cancel method is called in prepareForReuse. The cell is then reused and new data is set on it. And later we get the result of the OLD task, and the old image is set on the cell!
ORIGINAL QUESTION (OLD):
I would like to load an image with RxSwift in iOS. I want to load the image in the background and use it on the main thread. So I subscribeOn a background scheduler and observeOn the main scheduler. The function looks like this:
func getImage(path: String) -> Single<UIImage> {
    return Single
        .deferred {
            if let image = UIImage(contentsOfFile: path) {
                return Single.just(image)
            } else {
                return Single.error(SimpleError())
            }
        }
        .subscribeOn(ConcurrentDispatchQueueScheduler(qos: .background))
        .observeOn(MainScheduler.instance)
}
But I have problems with cancellation. Because different schedulers are used to create items and to call dispose (disposing happens on the main thread), the subscription event can be raised after dispose is called. So in my UITableViewCell use case I receive an invalid (old) image.
If I create the item (load the image) on the same scheduler that observes (the main thread), everything works fine!
But I would like to load images in the background, and I want the work to be cancelled after disposing (in prepareForReuse, or when a new path is set). What is the common pattern for this?
EDIT:
I have created a test project where I can emulate the problem of the event being received after dispose.
And I have one simple solution that works: we should emit items on the same scheduler, so we capture the scheduler and emit items there (after the long-running task completes).
func getImage2(path: String) -> Single<UIImage> {
    return Single
        .create(subscribe: { (single) -> Disposable in
            // We capture the current queue to execute the callback in
            // TODO: It can be nil if called from a background thread
            let callbackQueue = OperationQueue.current
            // For async calculations
            OperationQueue().addOperation {
                // Perform any long-running task
                let image = UIImage(contentsOfFile: path)
                // Emit the item on the captured queue
                callbackQueue?.addOperation {
                    if let result = image {
                        single(.success(result))
                    } else {
                        single(.error(SimpleError()))
                    }
                }
            }
            return Disposables.create()
        })
        .observeOn(MainScheduler.instance)
}
But this is not the Rx way, and I don't think it is the best solution.
Maybe I should use CurrentThreadScheduler to emit items, but I cannot figure out how. Is there any tutorial or example of generating items with scheduler usage? I did not find any.
Interesting test case. There is a small bug: it should be if strongSelf.isCancelled instead of if !strongSelf.isCancelled. Apart from that, the test case shows the problem.
I would intuitively expect that, before emitting, it is checked whether a dispose has already taken place, if both happen on the same thread.
Additionally, I found this:
just to make this clear, if you call dispose on one thread (like
main), you won't observe any elements on that same thread. That is a
guarantee.
see here: https://github.com/ReactiveX/RxSwift/issues/38
So maybe it is a bug.
To be sure I opened an issue here:
https://github.com/ReactiveX/RxSwift/issues/1778
Update
It seems it was actually a bug. Meanwhile, the fine people at RxSwift have confirmed it and fortunately fixed it very quickly. See the issue link above.
Testing
The bug was fixed with commit bac86346087c7e267dd5a620eed90a7849fd54ff. So if you are using CocoaPods, you can simply use something like the following for testing:
target 'RxSelfContained' do
  use_frameworks!

  pod 'RxAtomic', :git => 'https://github.com/ReactiveX/RxSwift.git', :commit => 'bac86346087c7e267dd5a620eed90a7849fd54ff'
  pod 'RxSwift', :git => 'https://github.com/ReactiveX/RxSwift.git', :commit => 'bac86346087c7e267dd5a620eed90a7849fd54ff'
end
How can I prevent a block of code from being repeatedly entered from the same thread?
Suppose I have the following code:
func sendAnalytics() {
    // some synchronous work
    asyncTask() { _ in
        completion()
    }
}
I want to prevent any thread from accessing "// some synchronous work" before completion has been called.
objc_sync_enter(self)
objc_sync_exit(self)
seem to only prevent this code from being accessed from multiple threads, and don't save me from accessing it repeatedly from a single thread. Is there a way to do this correctly, without resorting to custom solutions?
By repeatedly accessing, I mean calling sendAnalytics from one thread multiple times. Suppose I have a for loop like this:
for i in 0...10 {
    sendAnalytics()
}
Each subsequent call won't wait for the completion inside sendAnalytics to be called (obviously). Is there a way to make the next calls wait until completion fires? Or is the whole way of thinking wrong, and do I have to solve this problem higher up, in the body of the for loop?
You can use a DispatchSemaphore to ensure that one call completes before the next can start:
let semaphore = DispatchSemaphore(value: 1)

func sendAnalytics() {
    self.semaphore.wait()
    // some synchronous work
    asyncTask() { _ in
        completion()
        self.semaphore.signal()
    }
}
The second call to sendAnalytics will block until the first asyncTask is complete. You should be careful not to block the main queue as that will cause your app to become non-responsive. It is probably safer to dispatch the sendAnalytics call onto its own serial dispatch queue to eliminate this risk:
let semaphore = DispatchSemaphore(value: 1)
let analyticsQueue = DispatchQueue(label: "analyticsQueue")

func sendAnalytics() {
    analyticsQueue.async {
        self.semaphore.wait()
        // some synchronous work
        asyncTask() { _ in
            completion()
            self.semaphore.signal()
        }
    }
}
I am creating a game where the user can move an SKShapeNode around. Now, I am trying to write a utility function that will perform back-to-back animations sequentially.
Description of what I'm Trying
I first dispatch_async to a serial queue. That queue then calls dispatch_sync on the main queue to perform an animation. Now, to make the animations run sequentially, I would like to block the GlobalSerialAnimateQueue until the animation is completed on the main thread. By doing this, I (theoretically) would be able to run animations sequentially. My code is pasted below for more detail.
func moveToCurrentPosition() {
    let action = SKAction.moveTo(self.getPositionForCurrRowCol(), duration: 1.0)
    dispatch_async(GlobalSerialAnimateQueue) {
        // this just creates an action to move to a point
        dispatch_sync(GlobalMainQueue, {
            self.userNode!.runAction(action) {
                // inside the completion block now want to continue
                // WOULD WANT TO TRIGGER THREAD TO CONTINUE HERE
            }
        })
        // WOULD LIKE TO PAUSE HERE, THIS BLOCK FINISHING ONLY WHEN THE ANIMATION IS COMPLETE
    }
}
So, my question is: how would I write a function that can take in requests for animations and then perform them sequentially? Which Grand Central Dispatch tools should I use, or should I be trying a completely different approach?
I figured out how to do this using a Grand Central Dispatch semaphore. My updated code is here:
func moveToCurrentPosition() {
    let action = SKAction.moveTo(self.getPositionForCurrRowCol(), duration: Animation.USER_MOVE_DURATION)
    dispatch_async(GlobalSerialAnimateQueue) {
        let semaphore = dispatch_semaphore_create(0)
        dispatch_sync(GlobalMainQueue, {
            self.userNode!.runAction(action) {
                // signal done
                dispatch_semaphore_signal(semaphore)
            }
        })
        // wait here...
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
    }
}
SpriteKit provides a much simpler and more intuitive API for running actions in sequence. Have a look at the documentation here.
You can simply perform actions as a sequence of events, with blocks in between or as completion:
let action = SKAction.moveTo(self.getPositionForCurrRowCol(), duration: Animation.USER_MOVE_DURATION)
let otherAction = SKAction.runBlock({
    // Perform completion here.
})
self.userNode!.runAction(SKAction.sequence([action, otherAction]))
I am using a Particle Core to get the temperature of my room. The temperature is accessed through the cloud and is constantly updated in a variable. This is how I access the variable and display it:
func updateTemp() {
    let seconds = 3.0
    let delay = seconds * Double(NSEC_PER_SEC) // nanoseconds per second
    let dispatchTime = dispatch_time(DISPATCH_TIME_NOW, Int64(delay))
    dispatch_after(dispatchTime, dispatch_get_main_queue(), {
        self.myPhoton?.getVariable("tempF", completion: { (result: AnyObject!, error: NSError!) -> Void in
            if let _ = error {
                print("Failed reading temperature from device")
            } else {
                if let larry = result as? Int {
                    self.temp.text = "\(larry)˚"
                    self.truth++ // Once a value has been found, update the count.
                }
            }
        })
    })
}

override func viewDidLoad() {
    sparkStart()
}

override func viewDidLayoutSubviews() {
    updateTemp()
    NSTimer.scheduledTimerWithTimeInterval(100.0, target: self, selector: "updateTemp", userInfo: nil, repeats: true) // Guarantees that the app is updated every 100 seconds, so we have a fresh temperature often.

    // Stop the spinning once a value has been found
    if truth == 1 {
        activity.stopAnimating()
        activity.removeFromSuperview()
    }
}
Since this is my Particle Core detecting the temperature of the environment, the temperature variable is constantly changing. However, when I use NSTimer, the code does not update at the specified interval. Instead, it begins by updating at the specified interval, but then the interval starts to decrease exponentially and the variable ends up being updated every 0.001 seconds or so. Any thoughts?
I'm assuming what we see is not the full code. In your viewDidLayoutSubviews function, you call updateTemp twice: once explicitly and once via the timer callback.
Your updateTemp function schedules the network call on the main run loop, which is where the timer is also running. The dispatch_after call queues the readout updates one after the other. I am now assuming that something in your display code causes repeated triggers of viewDidLayoutSubviews, each of which schedules two new updates, and so on. Even if that assumption is false (there are a couple of other possibilities, given that the network code is slow and the timer also runs on the main run loop), I am guessing that if you drop the explicit call to updateTemp you'll lose the "exponential" behavior and should be fine.
In general, as the web call is largely asynchronous, you could just use the timer and call your sensor directly. Or, if you feel GCD has an important performance advantage, switch to dispatch_async and ask for the next available queue on each call via dispatch_get_global_queue.
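As a rough illustration of that dispatch_async variant, keeping the question's pre-Swift-3 GCD style and assuming the same myPhoton/getVariable API and temp label from the question (UI work is hopped back to the main queue):

func updateTemp() {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0)) {
        self.myPhoton?.getVariable("tempF", completion: { (result: AnyObject!, error: NSError!) -> Void in
            dispatch_async(dispatch_get_main_queue()) {
                if error != nil {
                    print("Failed reading temperature from device")
                } else if let larry = result as? Int {
                    self.temp.text = "\(larry)˚"   // update UI on the main queue only
                }
            }
        })
    }
}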