In my app I have a method that makes cloud calls and takes a completion handler. At some point a user makes this call to the cloud and, while waiting for the completion, hits log out.
This removes the controller from the stack, so the completion block returns to a controller that is no longer on the stack.
This causes a crash, since I do some UI work in that completion handler.
I did a workaround where I don't do anything with the UI if the controller is no longer on the stack.
However, I'm curious whether it's possible to cancel/stop all pending callbacks somehow on logout?
It sounds like your completion handler is tightly coupled to the controller. Try capturing self weakly:
{ [weak self] () -> Void in
    guard let _ = self else { return }
    // rest of your code
}
If the controller gets deinitialized, your completion handler simply won't proceed.
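As a minimal sketch of this pattern (CloudService, fetchFromCloud, and ProfileViewController are hypothetical names, not from the question): once the controller has been deallocated after logout, the guard turns the UI work into a no-op.
import UIKit

// Hypothetical service and controller, only to illustrate the weak capture.
final class CloudService {
    static let shared = CloudService()
    func fetchFromCloud(completion: @escaping (String) -> Void) {
        DispatchQueue.global().async { completion("data") }   // stands in for the real cloud call
    }
}

final class ProfileViewController: UIViewController {
    func load() {
        CloudService.shared.fetchFromCloud { [weak self] result in
            DispatchQueue.main.async {
                // If the controller was popped on logout, self is nil and no UI work happens.
                guard let self = self else { return }
                self.title = result
            }
        }
    }
}
This avoids the crash, but it does not cancel the underlying request; the cancellation-token approach below addresses that part.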
For granular control over cancellation, you can return a cancellation token from your function and call it when you need to cancel the operation.
Here is an example of how it can be achieved:
typealias CancellationToken = () -> Void

func performWithDelay(callback: @escaping () -> Void) -> CancellationToken {
    var cancelled = false
    // For the sake of example, delayed async execution
    // is used to emulate callback behavior.
    DispatchQueue.main.asyncAfter(deadline: .now() + 2) {
        if !cancelled {
            callback()
        }
    }
    return { cancelled = true }
}

let cancellationToken = performWithDelay {
    print("test")
}
cancellationToken()
For cases where you just need to ensure that all necessary prerequisites and conditions are still met when the block executes, you can use guard:
{ [weak self] in
    guard let `self` = self else { return }
    // Your code here... You can write code below
    // without worrying about unwrapping self or
    // creating retain cycles.
}
func checkUsername(username: String) -> Bool {
    var avalible = true
    Database.database().reference().child("usernames").child(username).observeSingleEvent(of: .value, with: { snapshot in
        if snapshot.exists() {
            print("exists")
            avalible = false
        }
    })
    return available
}
I'm trying to check if a username exists.
I've already tried multiple things, but it seems like the function always returns before the completion handler even finishes and always gives the same output (true) even if the username is already taken.
Does somebody have an idea how to "wait" for the completion handler to finish before returning the "available" variable?
An explanation with the answer would be excellent.
Thanks.
You cannot return something from an asynchronous task. Your code – with fixed typos – returns true even before the database request has been started.
Maybe there is a native async/await version of observeSingleEvent, but in any case you can use a continuation:
func checkUsername(username: String) async -> Bool {
    return await withCheckedContinuation { continuation in
        Database.database().reference().child("usernames").child(username).observeSingleEvent(of: .value, with: { snapshot in
            continuation.resume(returning: !snapshot.exists())
        })
    }
}
And call it:
Task {
    let isAvailable = await checkUsername(username: "Foo")
}
This is because the Firebase operation is asynchronous; you cannot simply turn it into a synchronous operation without blocking the thread.
Have a look at How could I create a function with a completion handler in Swift?
The correct way to approach this, without the use of the new async/await syntax, is to use a completion handler yourself.
func checkUsername(username: String, completion: @escaping (Bool) -> ()) {
    Database.database().reference().child("usernames").child(username).observeSingleEvent(of: .value) { snapshot in
        completion(!snapshot.exists())
    }
}

// To use:
checkUsername(username: "foo") { isAvailable in
    print("available? ", isAvailable)
}
I have some operations that need to run synchronously. I tried to follow this link but it's not clear enough for my situation.
op2 doesn't start until op1 is finished and op3 doesn't start until op2 is finished but during that time I need to be able to stop any of the operations and restart all over again. For example if op2 is running, I know that it cannot be stopped, but for whatever reason I need to be able to prevent op3 from executing because op1 has restarted. How can I do this?
This is a very simple example; the actual code is more intricate:
var queue1 = OperationQueue()
var queue2 = OperationQueue()
var queue3 = OperationQueue()
var operation1: BlockOperation?
var operation2: BlockOperation?
var operation3: BlockOperation?
// a DispatchGroup has finished running; now it's time to start the operations ...
dispatchGroup.notify(queue: .global(qos: .background)) { [weak self] in
    DispatchQueue.main.async { [weak self] in
        self?.runFirstFunc()
    }
}
func runFirstFunc() {
    var count = 0
    for num in arr {
        count += num
    }
    // now that the loop is finished, start the second func, but there is a possibility something may happen in the first that should prevent the second func from running
    runSecondFunc(count: count)
}
func runSecondFunc(count: Int) {
    do {
        try ...
        // if the do-try is successful, do something with count then start thirdFunc, but there is a possibility something may happen in the second func that should prevent the third func from running
        runThirdFunc()
    } catch {
        return
    }
}

func runThirdFunc() {
    // this is the final operation, once it hits here I know it can't be stopped even if I have to restart op1 again but that is fine
}
You said:
op2 doesn't start until op1 is finished and op3 doesn't start until op2 is finished ...
If using OperationQueue you can accomplish that by creating the three operations, and defining op1 to be a dependency of op2 and defining op2 as a dependency of op3.
... but during that time I need to be able to stop any of the operations and restart all over again.
If using OperationQueue, if you want to stop all operations that have been added to the queue, you call cancelAllOperations.
For example if op2 is running, I know that it cannot be stopped, ...
Well, it depends upon what op2 is doing. If it's spinning in a loop doing calculations, then yes, it can be canceled mid-operation. You just check isCancelled, and if it is set, stop the operation in question. Or if it is a network request (or something else that is cancelable), you can override the cancel method and cancel the task, too. It depends upon what the operation is doing.
... but for whatever reason I need to be able to prevent op3 from executing because op1 has restarted.
Sure, having canceled all the operations with cancelAllOperations, you can then re-add three new operations (with their associated dependencies) to the queue.
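Here is a minimal, untested sketch of that approach; makeOperations(), restart(), and the placeholder work loops are illustrative, not the asker's actual functions.
import Foundation

let queue = OperationQueue()

func makeOperations() -> [Operation] {
    let op1 = BlockOperation()
    op1.addExecutionBlock { [unowned op1] in
        for _ in 0..<1_000 {
            if op1.isCancelled { return }   // bail out mid-operation
            // ... op1's work ...
        }
    }

    let op2 = BlockOperation()
    op2.addExecutionBlock { [unowned op2] in
        guard !op2.isCancelled else { return }   // a cancelled dependency still "finishes", so check here too
        // ... op2's work ...
    }

    let op3 = BlockOperation()
    op3.addExecutionBlock { [unowned op3] in
        guard !op3.isCancelled else { return }
        // ... op3's work ...
    }

    op2.addDependency(op1)   // op2 won't start until op1 has finished
    op3.addDependency(op2)   // op3 won't start until op2 has finished
    return [op1, op2, op3]
}

func restart() {
    queue.cancelAllOperations()                                       // stop whatever is pending or running
    queue.addOperations(makeOperations(), waitUntilFinished: false)   // re-add fresh operations with their dependencies
}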
Here's an untested implementation that allows cancellation while any task is (repeatedly) doing its subtasks.
In case the second task fails/throws, it automatically restarts from the first task.
In case the user manually stops/starts, the last in-flight task quits its execution (as soon as it can).
Note: you must take care of the [weak self] part according to your own implementation.
import Foundation

class TestWorker {
    let workerQueue = DispatchQueue.global(qos: .utility)
    var currentWorkItem: DispatchWorkItem?

    func start() {
        self.performTask { self.performTask1() }
    }

    func stop() {
        currentWorkItem?.cancel()
    }

    func performTask(block: @escaping (() -> Void)) {
        let workItem = DispatchWorkItem(block: block)
        self.currentWorkItem = workItem
        workerQueue.async(execute: workItem)
    }

    func performTask1() {
        guard let workItem = self.currentWorkItem else { return }
        func subtask(index: Int) {}
        for i in 0..<100 {
            if workItem.isCancelled { return }
            subtask(index: i)
        }
        self.performTask { self.performTask2() }
    }

    func performTask2() {
        guard let workItem = self.currentWorkItem else { return }
        func subtask(index: Int) throws {}
        for i in 0..<100 {
            if workItem.isCancelled { return }
            do { try subtask(index: i) }
            catch {
                self.start()
                return
            }
        }
        self.performTask { self.performTask3() }
    }

    func performTask3() {
        guard let workItem = self.currentWorkItem else { return }
        func subtask(index: Int) {}
        for i in 0..<100 {
            if workItem.isCancelled { return }
            subtask(index: i)
        }
        /// Done
    }
}
Maybe this is a good reason to look into Swift Combine:
Define your tasks as Publishers.
Use flatMap to chain them, optionally passing output from the previous one to the next.
Use switchToLatest to restart the whole chain and cancel the previous run, if any, while it is still in flight.
Use cancel on the subscriber to cancel the whole thing outright (see the sketch below).
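A rough, untested sketch of this Combine approach; the Pipeline class, the placeholder task(_:) publisher, and the one-second delays are all made up for illustration.
import Combine
import Foundation

final class Pipeline {
    private let restart = PassthroughSubject<Void, Never>()
    private var cancellables = Set<AnyCancellable>()

    // Stand-in for a real unit of work (op1/op2/op3 in the question).
    private func task(_ name: String) -> AnyPublisher<String, Never> {
        Just(name)
            .delay(for: .seconds(1), scheduler: DispatchQueue.global())
            .handleEvents(receiveOutput: { print("\($0) finished") })
            .eraseToAnyPublisher()
    }

    func setUp() {
        restart
            .map { [unowned self] _ in
                // op1 -> op2 -> op3, each starting only after the previous one finishes.
                self.task("op1")
                    .flatMap { _ in self.task("op2") }
                    .flatMap { _ in self.task("op3") }
                    .eraseToAnyPublisher()
            }
            .switchToLatest()   // a new restart cancels the run that is still in flight
            .sink { _ in print("all three finished") }
            .store(in: &cancellables)
    }

    func run() { restart.send(()) }                      // start (or restart) from op1
    func stopEverything() { cancellables.removeAll() }   // cancel the whole thing
}
Sending a new value through restart while a previous chain is still in flight tears that chain down, which matches the "op1 has restarted, so op3 must not run" requirement.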
I am writing a VideoPlayer() class that has a start() and stop() function to initiate playback for a given video. Each instance of VideoPlayer() manages a single video.
The start() and stop() functions are asynchronous, and I get notified of success/failure via a delegate. This delegation is done by a third-party SDK.
class VideoPlayer {
    func start() {
        // Async call
    }

    func stop() {
        // Async call
    }

    // Callback from third party library with status
    func videoDidStart(view: UIView) {
        // Notify clients
    }

    func videoDidStop(stopped: Bool) {
        // Notify clients
    }
}
I have clients calling the start() and stop() functions, but sometimes it happens that they call start/stop in quick succession. This leads to unexpected behavior.
For example, if the clients call start() before a previous call to stop() has finished, the video won't play.
Ideally, the clients would wait till I send success/failure notification that I receive via the delegates.
But of course, that's not happening, and I am tasked with making the VideoPlayer() class manage the imbalances between start() and stop() and queue all the calls so everything executes in order.
I would like the VideoPlayer() class to ensure that every time there is an imbalance between start() and stop(), every start() is matched with a stop(), and the rogue calls are put in a queue and executed after the imbalance has been sorted out.
How do I manage such a thing? I believe I need to use Operations with a dependency between start() and stop(). Should I make start()/stop() a single operation and queue the rogue calls until the operation finishes?
Are there other approaches I could use? I have looked at dispatch groups and dispatch barriers, but I am not sure if they fit my use case.
Any help would be greatly appreciated. Thank you
I don't see why an operation is needed. As a crude example of the sort of thing you are describing, I made a small dispatcher architecture:
struct Start {
    let callback : () -> ()
    var started = false
}

class Dispatcher {
    var q = [Start]()

    func start(_ f: @escaping () -> ()) {
        q.append(Start(callback: f))
        self.startFirstOne()
    }

    func stop(_ f: @escaping () -> ()) {
        // assume this can only refer to the first one in the queue
        DispatchQueue.main.asyncAfter(deadline: .now() + DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
            self.q.removeFirst()
            f()
            self.startFirstOne()
        }
    }

    private func startFirstOne() {
        guard !q.isEmpty else { return }
        guard !q[0].started else { return }
        print("starting")
        self.q[0].started = true
        DispatchQueue.main.asyncAfter(deadline: .now() + DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
            self.q.first!.callback()
        }
    }
}
The start calls are queued up, and we don't actually start one until we have received sufficient stop calls to bring it to the front of the queue.
I personally don't like this at all, because we are making three very strong assumptions about contract-keeping:
we assume that everyone who calls start will also call stop,
and we are assuming that someone who calls start will not call stop until after being called back from the start call,
and we are assuming that no one who did not call start will ever call stop.
Nevertheless, if everyone does keep to the contract, then everyone's stop will be called back corresponding to that person's start. Here's a rough test bed, using random delays throughout to simulate asynchronicity:
let d = Dispatcher()

DispatchQueue.main.asyncAfter(deadline: .now() + DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
    d.start {
        print("hearts start")
        DispatchQueue.main.asyncAfter(deadline: .now() + DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
            d.stop { print("hearts stop") }
        }
    }
}
DispatchQueue.main.asyncAfter(deadline: .now() + DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
    d.start {
        print("diamonds start")
        DispatchQueue.main.asyncAfter(deadline: .now() + DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
            d.stop { print("diamonds stop") }
        }
    }
}
DispatchQueue.main.asyncAfter(deadline: .now() + DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
    d.start {
        print("clubs start")
        DispatchQueue.main.asyncAfter(deadline: .now() + DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
            d.stop { print("clubs stop") }
        }
    }
}
DispatchQueue.main.asyncAfter(deadline: .now() + DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
    d.start {
        print("spades start")
        DispatchQueue.main.asyncAfter(deadline: .now() + DispatchTimeInterval.seconds(Int.random(in: 1...10))) {
            d.stop { print("spades stop") }
        }
    }
}
Try it, and you'll see that, whatever order and with whatever timing these things run, the start-stop pairs are all maintained by the dispatcher.
As I say, I don't like it. I would prefer to assume that callers might not adhere to the contract and be prepared to do something about it. But there it is, for what it's worth.
I am using a Combine Future to wrap an async block operation and adding a subscriber to that publisher to receive the values. I am noticing that the future object is not getting deallocated, even after the subscribers are deallocated. The Xcode memory graph and the Instruments leaks graph show no reference to these future objects. I am puzzled as to why they are still around.
func getUsers(forceRefresh: Bool = false) -> AnyPublisher<[User], Error> {
    let future = Future<[User], Error> { [weak self] promise in
        guard let params = self?.params else {
            promise(.failure(CustomErrors.invalidData))
            return
        }
        self?.restApi.getUsers(params: params, forceRefresh: forceRefresh, success: { (users: [User]?, _) in
            guard let users = users else {
                return promise(.failure(CustomErrors.invalidData))
            }
            promise(.success(users))
        }) { (error: Error) in
            promise(.failure(error))
        }
    }
    return future.eraseToAnyPublisher()
}
Here's how I am adding a subscription:
self.userDataService?.getUsers(forceRefresh: forceRefresh)
    .sink(receiveCompletion: { [weak self] completion in
        self?.isLoading = false
        if case let .failure(error) = completion {
            self?.publisher.send(.error(error))
            return
        }
        guard let users = self?.users, !users.isEmpty else {
            self?.publisher.send(.empty)
            return
        }
        self?.publisher.send(.data(users))
    }) { [weak self] (response: Array<User>) in
        self?.users = response
    }.store(in: &self.subscribers)

deinit {
    self.subscribers.removeAll()
}
This is a screenshot of the leaked memory for the future that got created above. It's still staying around even after the subscribers are all deleted. Instruments is also showing a similar memory graph. Any thoughts on what could be causing this?
Future invokes its closure immediately upon creation, which may be impacting this. You might try wrapping the Future in Deferred so that it isn't created until a subscription happens (which may be what you're expecting anyway from scanning the code).
The fact that it's creating one immediately is what (I think) is being reflected in the objects listed when there are no subscribers.
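For illustration, here is a minimal sketch of that suggestion applied to the getUsers method from the question (it reuses the question's params, restApi, and CustomErrors, which are assumed to exist); the Future is now created lazily, once per subscription.
func getUsers(forceRefresh: Bool = false) -> AnyPublisher<[User], Error> {
    Deferred { [weak self] in
        Future<[User], Error> { promise in
            guard let params = self?.params else {
                promise(.failure(CustomErrors.invalidData))
                return
            }
            self?.restApi.getUsers(params: params, forceRefresh: forceRefresh, success: { (users: [User]?, _) in
                guard let users = users else {
                    return promise(.failure(CustomErrors.invalidData))
                }
                promise(.success(users))
            }) { (error: Error) in
                promise(.failure(error))
            }
        }
    }
    .eraseToAnyPublisher()
}
Note that with Deferred the request runs once per subscriber, so if this publisher may be subscribed to more than once, an operator like share() might be worth considering.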
I am trying the following approach:
let operationQueue = OperationQueue()
operationQueue.maxConcurrentOperationCount = 10

func registerUser(completionHandler: @escaping (Result<Data, Error>) -> Void) -> String {
    self.registerClient() { (result) in
        switch result {
        case .success(let data):
            self.downloadUserProfile(data.profiles)
        case .failure(let error):
            return self.handleError(error)
        }
    }
}

func downloadUserProfile(urls: [String]) {
    for url in urls {
        queue.addOperation {
            self.client.downloadTask(with: url)
        }
    }
}
I am checking whether there is any way I can get notified when all operations have completed, so that I can call the success handler there.
I checked the Apple developer documentation, which suggests using
queue.addBarrierBlock {
    <#code#>
}
but this is available only from iOS 13.0.
Pre iOS 13, we’d use dependencies. Declare a completion operation, and then when you create operations for your network requests, you’d define those operations to be dependencies for your completion operation.
let completionOperation = BlockOperation { ... }
let networkOperation1 = ...
completionOperation.addDependency(networkOperation1)
queue.addOperation(networkOperation1)
let networkOperation2 = ...
completionOperation.addDependency(networkOperation2)
queue.addOperation(networkOperation2)
OperationQueue.main.addOperation(completionOperation)
That having been said, you should be very careful with your operation implementation. Do I correctly infer that downloadTask(with:) returns immediately after the download task has been initiated and doesn’t wait for the request to finish? In that case, neither dependencies nor barriers will work the way you want.
When wrapping network requests in an operation, you’d want to make sure to use an asynchronous Operation subclass (e.g. https://stackoverflow.com/a/32322851/1271826).
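For reference, here is a rough, untested sketch of such an asynchronous Operation subclass (the DownloadOperation name and the URLSession usage are illustrative; the linked answer has a more complete, thread-safe version):
import Foundation

final class DownloadOperation: Operation {
    private let url: URL
    private var task: URLSessionTask?

    private var _executing = false
    private var _finished = false

    init(url: URL) {
        self.url = url
        super.init()
    }

    override var isAsynchronous: Bool { true }
    override var isExecuting: Bool { _executing }
    override var isFinished: Bool { _finished }

    override func start() {
        guard !isCancelled else { finish(); return }

        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")

        task = URLSession.shared.dataTask(with: url) { [weak self] _, _, _ in
            // Handle data/response/error here as needed, then mark the operation finished.
            self?.finish()
        }
        task?.resume()
    }

    override func cancel() {
        super.cancel()
        task?.cancel()   // propagate cancellation to the in-flight request
    }

    private func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}
With operations like this, the completion operation's dependencies (or a barrier) only fire after the downloads themselves complete, not merely after they have been kicked off; real code would also make the _executing/_finished state thread-safe.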
The pre-iOS 13 way is to observe the operationCount property of the operation queue:
var observation: NSKeyValueObservation?

...

observation = operationQueue.observe(\.operationCount, options: [.new]) { observed, change in
    if change.newValue == 0 {
        print("operations finished")
    }
}