Memory leak when using `Publishers.Sequence` - iOS

I have a function that creates a collection of publishers:
func publishers(from text: String) -> [AnyPublisher<SignalToken, Never>] {
    let signalTokens: [SignalToken] = translate(from: text)
    var delay: Int = 0
    let signalPublishers: [AnyPublisher<SignalToken, Never>] = signalTokens.map { token in
        let publisher = Just(token)
            .delay(for: .milliseconds(delay), scheduler: DispatchQueue.main)
            .eraseToAnyPublisher()
        delay += token.delay
        return publisher
    }
    return signalPublishers
}
In the service class I have two methods, one for play():
func play(signal: String) {
    anyCancellable = signalTokenSubject.sink(receiveValue: { token in print(token) })
    anyCancellable2 = publishers(from: signal)
        .publisher
        .flatMap { $0 }
        .subscribe(on: DispatchQueue.global())
        .sink(receiveValue: { [weak self] token in
            self?.signalTokenSubject.send(token)
        })
}
and one for stop():
func stop() {
    anyCancellable?.cancel()
    anyCancellable2?.cancel()
}
I have a problem with memory. When the collection of publishers is large and I call stop() before the whole Publishers.Sequence has finished, memory increases and is never released.
Is there a way to complete the Publishers.Sequence earlier, before Combine has iterated over the whole collection?

To reclaim the memory, release the pipelines:
func stop() {
    anyCancellable?.cancel()
    anyCancellable2?.cancel()
    anyCancellable = nil
    anyCancellable2 = nil
}
Actually you don't need the cancel calls, because releasing the pipelines does cancel in good order; that is the whole point of AnyCancellable. So you can just say:
func stop() {
    anyCancellable = nil
    anyCancellable2 = nil
}
Another thing to note is that you are running all your publishers at once. The sequence does not arrive sequentially; the whole sequence is dumped into the flatMap, which starts all the publishers publishing simultaneously. Thus cancelling doesn't do you all that much good. You might want to set maxPublishers: on your flatMap so that backpressure prevents more than some small number of publishers from arriving simultaneously (for example, one at a time).
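For illustration, a sketch of how that might look in the asker's play() method (assuming the same signalTokenSubject and publishers(from:) as above; untested):
func play(signal: String) {
    anyCancellable = signalTokenSubject.sink(receiveValue: { token in print(token) })
    anyCancellable2 = publishers(from: signal)
        .publisher
        // Backpressure: only one inner publisher is subscribed at a time,
        // so cancelling prevents the remaining ones from ever starting.
        .flatMap(maxPublishers: .max(1)) { $0 }
        .subscribe(on: DispatchQueue.global())
        .sink(receiveValue: { [weak self] token in
            self?.signalTokenSubject.send(token)
        })
}
Note that serializing the inner publishers this way changes the timing, since the original code bakes absolute delays into each publisher.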

Related

Swift - Start/Stop Synchronous OperationQueue

I have some operations that need to run synchronously. I tried to follow this link but it's not clear enough for my situation.
op2 doesn't start until op1 is finished and op3 doesn't start until op2 is finished but during that time I need to be able to stop any of the operations and restart all over again. For example if op2 is running, I know that it cannot be stopped, but for whatever reason I need to be able to prevent op3 from executing because op1 has restarted. How can I do this?
This is a very simple example; the actual code is more intricate.
var queue1 = OperationQueue()
var queue2 = OperationQueue()
var queue3 = OperationQueue()
var operation1: BlockOperation?
var operation2: BlockOperation?
var operation3: BlockOperation?

// a DispatchGroup has finished running, now it's time to start the operations ...
dispatchGroup.notify(queue: .global(qos: .background)) { [weak self] in
    DispatchQueue.main.async { [weak self] in
        self?.runFirstFunc()
    }
}

func runFirstFunc() {
    var count = 0
    for num in arr {
        count += num
    }
    // now that the loop is finished, start the second func, but there is a possibility something may happen in the first that should prevent the second func from running
    runSecondFunc(count: count)
}

func runSecondFunc(count: Int) {
    do {
        try ...
        // if the do-try is successful, do something with count then start thirdFunc, but there is a possibility something may happen in the second func that should prevent the third func from running
        runThirdFunc()
    } catch {
        return
    }
}

func runThirdFunc() {
    // this is the final operation; once it hits here I know it can't be stopped even if I have to restart op1 again, but that is fine
}
You said:
op2 doesn't start until op1 is finished and op3 doesn't start until op2 is finished ...
If using OperationQueue you can accomplish that by creating the three operations, and defining op1 to be a dependency of op2 and defining op2 as a dependency of op3.
... but during that time I need to be able to stop any of the operations and restart all over again.
If using OperationQueue, if you want to stop all operations that have been added to the queue, you call cancelAllOperations.
For example if op2 is running, I know that it cannot be stopped, ...
Well, it depends upon what op2 is doing. If it's spinning in a loop doing calculations, then, yes, it can be canceled, mid-operation. You just check isCancelled, and if it is, stop the operation in question. Or if it is a network request (or something else that is cancelable), you can override cancel method and cancel the task, too. It depends upon what the operation is doing.
... but for whatever reason I need to be able to prevent op3 from executing because op1 has restarted.
Sure, having canceled all the operations with cancelAllOperations, you can then re-add three new operations (with their associated dependencies) to the queue.
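A minimal sketch of that approach (hypothetical queue and operations; the isCancelled check and whether cancel() needs overriding depend on what each operation actually does):
let queue = OperationQueue()

func startAll() {
    queue.cancelAllOperations()            // drop anything still pending from a previous run

    let op1 = BlockOperation()
    op1.addExecutionBlock { [weak op1] in
        for _ in 0..<1_000 {
            if op1?.isCancelled ?? true { return }   // bail out mid-work if cancelled
            // ... op1's work ...
        }
    }
    let op2 = BlockOperation { /* ... op2's work ... */ }
    let op3 = BlockOperation { /* ... op3's work ... */ }

    op2.addDependency(op1)                 // op2 waits for op1
    op3.addDependency(op2)                 // op3 waits for op2

    queue.addOperations([op1, op2, op3], waitUntilFinished: false)
}

func stopAll() {
    queue.cancelAllOperations()            // then call startAll() again to restart from op1
}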
Here's a not-tested implementation that allows cancellation while any task is doing its subtasks (repeatedly).
If the second task fails/throws, it automatically restarts from the first task.
If the user manually stops / starts, the last in-flight task quits its execution (as soon as it can).
Note: you must take care of the [weak self] part according to your own implementation.
import Foundation

class TestWorker {
    let workerQueue = DispatchQueue.global(qos: .utility)
    var currentWorkItem: DispatchWorkItem?

    func start() {
        self.performTask { self.performTask1() }
    }

    func stop() {
        currentWorkItem?.cancel()
    }

    func performTask(block: @escaping (() -> Void)) {
        let workItem = DispatchWorkItem(block: block)
        self.currentWorkItem = workItem
        workerQueue.async(execute: workItem)
    }

    func performTask1() {
        guard let workItem = self.currentWorkItem else { return }
        func subtask(index: Int) {}
        for i in 0..<100 {
            if workItem.isCancelled { return }
            subtask(index: i)
        }
        self.performTask { self.performTask2() }
    }

    func performTask2() {
        guard let workItem = self.currentWorkItem else { return }
        func subtask(index: Int) throws {}
        for i in 0..<100 {
            if workItem.isCancelled { return }
            do { try subtask(index: i) }
            catch {
                self.start()
                return
            }
        }
        self.performTask { self.performTask3() }
    }

    func performTask3() {
        guard let workItem = self.currentWorkItem else { return }
        func subtask(index: Int) {}
        for i in 0..<100 {
            if workItem.isCancelled { return }
            subtask(index: i)
        }
        /// Done
    }
}
Maybe, this is a good reason to look into Swift Combine:
Define your tasks as Publishers.
Use flatMap to chain them, optionally pass output from previous to the next.
Use switchToLatest to restart the whole thing and cancel the previous one when it is still running - if any.
Use cancel on the subscriber to cancel the whole thing.
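A rough sketch of that Combine approach, using hypothetical placeholder tasks (task1/task2/task3 and restart are illustrative names, not the asker's code):
import Combine

// Placeholder tasks; each would wrap the real work in a Future/Deferred.
func task1() -> AnyPublisher<Int, Never> { Just(1).eraseToAnyPublisher() }
func task2(_ input: Int) -> AnyPublisher<Int, Never> { Just(input + 1).eraseToAnyPublisher() }
func task3(_ input: Int) -> AnyPublisher<Int, Never> { Just(input + 1).eraseToAnyPublisher() }

let restart = PassthroughSubject<Void, Never>()

let cancellable = restart
    .map { _ in
        // Each task starts only when the previous one emits its value.
        task1()
            .flatMap { task2($0) }
            .flatMap { task3($0) }
    }
    .switchToLatest()   // a new restart cancels whatever chain is still running
    .sink { print("finished with \($0)") }

restart.send()          // start (or restart) the whole sequence
// cancellable.cancel() stops everything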

How to prevent calling an async function many times but call the completion for each of them?

This is the code I am currently using:
typealias ResponseHandler = (SomeResponse?, Error?) -> Void

class LoginService {
    private var authorizeTokenCompletions = [ResponseHandler]()

    func authorizeToken(withRefreshToken refreshToken: String, completion: @escaping ResponseHandler) {
        if authorizeTokenCompletions.isEmpty {
            authorizeTokenCompletions.append(completion)
            post { [weak self] response, error in
                self?.authorizeTokenCompletions.forEach { $0(response, error) }
                self?.authorizeTokenCompletions.removeAll()
            }
        } else {
            authorizeTokenCompletions.append(completion)
        }
    }

    private func post(completion: @escaping ResponseHandler) {
        // async
        completion(nil, nil)
    }
}
What is the idea of the above code?
The authorizeToken function may be called as many times as needed (for example, 20 times).
Only one asynchronous request (post) may be pushed at a time.
All completions from called authorizeToken functions should be called with the same parameters as the first one completed.
Usage:
let service = LoginService()
service.authorizeToken(withRefreshToken: "") { a, b in print(a)}
service.authorizeToken(withRefreshToken: "") { a, b in print(a)}
service.authorizeToken(withRefreshToken: "") { a, b in print(a)}
service.authorizeToken(withRefreshToken: "") { a, b in print(a)}
service.authorizeToken(withRefreshToken: "") { a, b in print(a)}
All the completions above should be called with the result from the first request that was made.
Is it possible to do this with RxSwift?
PS: I will award a bounty of 100, once it is possible, to the one who helps me with this ;)
Is it possible to do this with RxSwift?
Yes, it is possible. See RxSwift and Handling Invalid Tokens.
The simplest solution:
func authorizeToken(withRefreshToken refreshToken: String) -> Observable<SomeResponse> {
    Observable.create { observer in
        DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
            print("async operation")
            observer.onNext(SomeResponse())
        }
        return Disposables.create()
    }
}
let response = authorizeToken(withRefreshToken: "")
    .share(replay: 1)

response.subscribe(onNext: { print($0) })
response.subscribe(onNext: { print($0) })
response.subscribe(onNext: { print($0) })
response.subscribe(onNext: { print($0) })
response.subscribe(onNext: { print($0) })
The above will only work if all requests (subscribes) are made before the first one completes. Just like your code.
If you want to store the response for use even after completion, then you can use replay instead of share.
let response = authorizeToken(withRefreshToken: "")
    .replayAll()

let disposable = response.connect() // this calls the async function. The result will be stored until `disposable.dispose()` is called.

response.subscribe(onNext: { print($0) })

DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
    response.subscribe(onNext: { print($0) }) // this won't perform the async operation again, even if the operation completed some time ago.
}
Answering
is it possible to do this with RxSwift
that's not possible, as every time we trigger the function it gets dispatched, and we can't access the callbacks from other threads.
You're creating a race condition. A workaround is to populate the data once in a singleton and, rather than calling the function multiple times, use that singleton.
Some other approach might also work; a singleton is just an example.
Race condition: a race condition is what happens when the expected completion order of a sequence of operations becomes unpredictable, causing our program logic to end up in an undefined state.

Why is this Combine pipeline not letting items through?

I'm stuck on a Combine problem, and I can't find a proper solution for this.
My goal is to monitor a queue and process the items in it until it's empty. If someone then adds more items to the queue, I resume processing. Items need to be processed one by one, and I don't want to lose any item.
I wrote a very simplified queue below to reproduce the problem. My items are modeled as just strings for the sake of simplicity again.
Given the constraints above:
I use a changePublisher on the queue to monitor for changes.
A button lets me add a new item to the queue
The flatMap operator relies on the maxPublishers parameter to only allow one in-flight processing.
The buffer operator prevents items from being lost if the flatMap is busy.
Additionally, I'm using a combineLatest operator to only trigger the pipeline under some conditions. For simplicity, I'm using a Just(true) publisher here.
The problem
If I tap the button, a first item goes in the pipeline and is processed. The changePublisher triggers because the queue is modified (item is removed), and the pipeline stops at the compactMap because the peek() returns nil. So far, so good. Afterwards, though, if I tap on the button again, a value is sent in the pipeline but never makes it through the buffer.
Solution?
I noticed that removing the combineLatest prevents the problem from happening, but I don't understand why.
Code
import Combine
import UIKit

class PersistentQueue {
    let changePublisher = PassthroughSubject<Void, Never>()
    var strings = [String]()

    func add(_ s: String) {
        strings.append(s)
        changePublisher.send()
    }

    func peek() -> String? {
        strings.first
    }

    func removeFirst() {
        strings.removeFirst()
        changePublisher.send()
    }
}

class ViewController: UIViewController {
    private let queue = PersistentQueue()
    private var cancellables: Set<AnyCancellable> = []

    override func viewDidLoad() {
        super.viewDidLoad()
        start()
    }

    @IBAction func tap(_ sender: Any) {
        queue.add(UUID().uuidString)
    }

    /*
     Listen to changes in the queue, and process them one at a time. Once processed, remove the item from the queue.
     Keep doing this until there are no more items in the queue. The pipeline should also be triggered if new items are
     added to the queue (see `tap` above)
     */
    func start() {
        queue.changePublisher
            .print("Change")
            .buffer(size: Int.max, prefetch: .keepFull, whenFull: .dropNewest)
            .print("Buffer")
            // NOTE: If I remove this combineLatest (and the filter below, to make it compile), I don't have the issue anymore.
            .combineLatest(
                Just(true)
            )
            .print("Combine")
            .filter { _, enabled in return enabled }
            .print("Filter")
            .compactMap { _ in
                self.queue.peek()
            }
            .print("Compact")
            // maxPublishers lets us process one page at a time
            .flatMap(maxPublishers: .max(1)) { reference in
                return self.process(reference)
            }
            .sink { reference in
                print("Sink for \(reference)")
                // Remove the processed item from the queue. This will also trigger the queue's changePublisher,
                // which will re-run this pipeline if needed
                self.queue.removeFirst()
            }
            .store(in: &cancellables)
    }

    func process(_ value: String) -> AnyPublisher<String, Never> {
        return Future<String, Never> { promise in
            print("Starting processing of \(value)")
            DispatchQueue.global(qos: .userInitiated).asyncAfter(deadline: .now() + 2) {
                promise(.success(value))
            }
        }.eraseToAnyPublisher()
    }
}
Output
Here is a sample run of the pipeline if you tap on the button twice:
Change: receive subscription: (PassthroughSubject)
Change: request max: (9223372036854775807)
Buffer: receive subscription: (Buffer)
Combine: receive subscription: (CombineLatest)
Filter: receive subscription: (Print)
Compact: receive subscription: (Print)
Compact: request max: (1)
Filter: request max: (1)
Combine: request max: (1)
Buffer: request max: (1)
Change: receive value: (())
Buffer: receive value: (())
Combine: receive value: (((), true))
Filter: receive value: (((), true))
Compact: receive value: (3999C98D-4A86-42FD-A10C-7724541E774D)
Starting processing of 3999C98D-4A86-42FD-A10C-7724541E774D
Change: request max: (1) (synchronous)
Sink for 3999C98D-4A86-42FD-A10C-7724541E774D // First item went through pipeline
Change: receive value: (())
Compact: request max: (1)
Filter: request max: (1)
Combine: request max: (1)
Buffer: request max: (1)
Buffer: receive value: (())
Combine: receive value: (((), true))
Filter: receive value: (((), true))
// Second time compactMap is hit, value is nil -> doesn't forward any value downstream.
Filter: request max: (1) (synchronous)
Combine: request max: (1) (synchronous)
Change: request max: (1)
// Tap on button
Change: receive value: (())
// ... Nothing happens
[EDIT] Here is a much more constrained example, which can run in Playgrounds and which also demonstrates the problem:
import Combine
import Foundation
import PlaygroundSupport

PlaygroundPage.current.needsIndefiniteExecution = true

func process(_ value: String) -> AnyPublisher<String, Never> {
    return Future<String, Never> { promise in
        print("Starting processing of \(value)")
        DispatchQueue.global(qos: .userInitiated).asyncAfter(deadline: .now() + 0.1) {
            promise(.success(value))
        }
    }.eraseToAnyPublisher()
}

var count = 3
let s = PassthroughSubject<Void, Never>()
var cancellables = Set<AnyCancellable>([])

// This reproduces the problem. Switching buffer and combineLatest fixes the problem…
s
    .print()
    .buffer(size: Int.max, prefetch: .keepFull, whenFull: .dropNewest)
    .combineLatest(Just("a"))
    .filter { _ in count > 0 }
    .flatMap(maxPublishers: .max(1)) { _, a in process("\(count)") }
    .sink {
        print($0)
        count -= 1
        s.send()
    }
    .store(in: &cancellables)

s.send()
Thread.sleep(forTimeInterval: 3)
count = 1
s.send()
Switching combineLatest and buffer fixes the problem.
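For illustration, the reordered playground pipeline would presumably look like this (untested sketch, same s, count, and process as above):
s
    .print()
    .combineLatest(Just("a"))
    .filter { _ in count > 0 }
    // buffer now sits directly upstream of flatMap, so values that arrive while
    // flatMap is busy are held here instead of getting stuck behind combineLatest
    .buffer(size: Int.max, prefetch: .keepFull, whenFull: .dropNewest)
    .flatMap(maxPublishers: .max(1)) { _, a in process("\(count)") }
    .sink {
        print($0)
        count -= 1
        s.send()
    }
    .store(in: &cancellables)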
I am not sure why the pipeline is blocked, but there is no reason to publish when the queue is empty. Fixing this resolved the problem for me.
func removeFirst() {
    guard !strings.isEmpty else {
        return
    }
    strings.removeFirst()
    if !strings.isEmpty {
        changePublisher.send()
    }
}
Just tried your example. It works as expected if buffer is placed right before flatMap (and removeFirst is updated according to Paulw11's answer above):
queue.changePublisher
    .print("Change")
    // NOTE: If I remove this combineLatest (and the filter below, to make it compile), I don't have the issue anymore.
    .combineLatest(Just(true))
    .print("Combine")
    .filter { _, enabled in return enabled }
    .print("Filter")
    .compactMap { _ in
        self.queue.peek()
    }
    .print("Compact")
    // maxPublishers lets us process one page at a time
    .buffer(size: Int.max, prefetch: .keepFull, whenFull: .dropNewest)
    .print("Buffer")
    .flatMap(maxPublishers: .max(1)) { reference in
        return self.process(reference)
    }
    .sink { reference in
        print("Sink for \(reference)")
        // Remove the processed item from the queue. This will also trigger the queue's changePublisher,
        // which will re-run this pipeline if needed
        self.queue.removeFirst()
        print("COUNT: " + self.queue.strings.count.description)
    }
    .store(in: &cancellables)

How does DispatchQueue.main.async store its blocks

I have code similar to this:
func fetchBalances() -> Observable<Result<[User], Error>> {
    Observable.create { observer in
        var dataChangeDisposable: Disposable?
        DispatchQueue.main.async {
            let realm = try! Realm()
            let user = realm.objects(UserData.self)
            dataChangeDisposable = Observable.collection(from: user)
                .map { $0.map { UserData.convert($0) } }
                .subscribe(onNext: {
                    observer.onNext(.success($0))
                })
        }
        return Disposables.create {
            dataChangeDisposable?.dispose()
        }
    }
}
I need to use some thread with a run loop in order to maintain the subscription to the Realm database (Realm's restriction). For now I'm using the DispatchQueue.main.async {} method, and I noticed that the subscription remains active all the time. How does DispatchQueue.main store its submitted blocks, and if the Observable is destroyed, does it mean that I'm leaking blocks in memory?
The block sent to the dispatch queue is deleted immediately after execution. It isn't stored for very long at all.
If your subscription "remains active all the time" then it's because it's not being disposed of properly. Likely what is happening here is that the block sent to Disposables.create is being called before dataChangeDisposable contains a value.
Test my hypothesis by changing the code to:
return Disposables.create {
    dataChangeDisposable!.dispose()
}
If your app crashes because dataChangeDisposable is nil, then that's your problem.
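If that turns out to be the case, one common RxSwift pattern (a sketch, not the answerer's code) is to create the disposable container up front, so that disposal still works even if it happens before the inner subscription exists:
func fetchBalances() -> Observable<Result<[User], Error>> {
    Observable.create { observer in
        // Created synchronously, so the dispose path always has something to dispose.
        let dataChangeDisposable = SingleAssignmentDisposable()
        DispatchQueue.main.async {
            let realm = try! Realm()
            let user = realm.objects(UserData.self)
            dataChangeDisposable.setDisposable(
                Observable.collection(from: user)
                    .map { $0.map { UserData.convert($0) } }
                    .subscribe(onNext: { observer.onNext(.success($0)) })
            )
        }
        return dataChangeDisposable
    }
}
If the outer subscription is disposed before the async block runs, SingleAssignmentDisposable disposes the inner subscription as soon as it is assigned.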

Combine Future Publisher is not getting deallocated

I am using Combine's Future to wrap an async block operation and adding a subscriber to that publisher to receive the values. I am noticing that the future object is not getting deallocated, even after the subscribers are deallocated. The Xcode memory graph and the Instruments leaks graph show no reference to these future objects. I am puzzled why they are still around.
func getUsers(forceRefresh: Bool = false) -> AnyPublisher<[User], Error> {
    let future = Future<[User], Error> { [weak self] promise in
        guard let params = self?.params else {
            promise(.failure(CustomErrors.invalidData))
            return
        }
        self?.restApi.getUsers(params: params, forceRefresh: forceRefresh, success: { (users: [User]?, _) in
            guard let users = users else {
                return promise(.failure(CustomErrors.invalidData))
            }
            promise(.success(users))
        }) { (error: Error) in
            promise(.failure(error))
        }
    }
    return future.eraseToAnyPublisher()
}
Here's how I am adding a subscription:
self.userDataService?.getUsers(forceRefresh: forceRefresh)
    .sink(receiveCompletion: { [weak self] completion in
        self?.isLoading = false
        if case let .failure(error) = completion {
            self?.publisher.send(.error(error))
            return
        }
        guard let users = self?.users, !users.isEmpty else {
            self?.publisher.send(.empty)
            return
        }
        self?.publisher.send(.data(users))
    }) { [weak self] (response: Array<User>) in
        self?.users = response
    }.store(in: &self.subscribers)

deinit {
    self.subscribers.removeAll()
}
This is the screenshot of the leaked memory for the future that got created above. It's still staying around even after the subscribers are all deleted. Instruments is also showing a similar memory graph. Any thoughts on what could be causing this?
Future invokes its closure immediately upon creation, which may be impacting this. You might try wrapping the Future in Deferred so that it isn't created until a subscription happens (which may be what you're expecting anyway from scanning the code).
The fact that it's creating one immediately is what (I think) is being reflected in the objects listed when there are no subscribers.
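A sketch of that suggestion applied to the asker's getUsers (the only change is the Deferred wrapper; everything else is assumed to be as above):
func getUsers(forceRefresh: Bool = false) -> AnyPublisher<[User], Error> {
    // Deferred delays creating the Future (and the work it kicks off)
    // until something actually subscribes.
    Deferred {
        Future<[User], Error> { [weak self] promise in
            guard let params = self?.params else {
                promise(.failure(CustomErrors.invalidData))
                return
            }
            self?.restApi.getUsers(params: params, forceRefresh: forceRefresh, success: { (users: [User]?, _) in
                guard let users = users else {
                    return promise(.failure(CustomErrors.invalidData))
                }
                promise(.success(users))
            }) { (error: Error) in
                promise(.failure(error))
            }
        }
    }
    .eraseToAnyPublisher()
}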
