How to compose functions and persist progress - iOS

I have a sequence of asynchronous methods defined like so:
func step1(input: Step1InputData, completion: (Step1OutputData -> Void)) { /* do something */ }
func step2(input: Step1OutputData, completion: (Step2OutputData -> Void)) { /* do something */ }
// etc...
As you can see, the output of step1 is the input of step2. These types all implement the StepData protocol:
protocol StepData {}
class Step1InputData : StepData { }
class Step1OutputData : StepData { }
class Step2OutputData : StepData { }
Finally, I have this custom operator:
infix operator => { associativity left }
func => <P:StepData, Q:StepData, R:StepData> (left:((P, Q -> Void) -> Void), right:((Q, R -> Void) -> Void)) -> ((P, R -> Void) -> Void) {
    return { (x, completion) in
        left(x, { y in
            right(y, completion)
        })
    }
}
... which means I can write the following:
let combinedStep = step1 => step2
Which is great because it's super readable, and crucially it enforces type safety between the steps.
The problem is that I want to be able to persist the progress of a combinedStep. For example, step1 could be an image upload and step2 could be a local storage write. If step2 fails, the next time I try, I want to pick up where I left off, rather than re-uploading the image. Of course, there could be any number of steps chained together.
I could build a system that keeps an array of steps, manages the passing of data between them, and persists the progress; however, I can't think of a way to do this while still keeping the compile-time type safety.
Can someone with more experience of functional programming point me in the right direction?

I would suggest you look into frameworks like PromiseKit, BrightFutures, and ReactiveCocoa.
Those frameworks are in the same problem and solution space as you: asynchronous operations that need to be composed. They all provide multiple ways to handle errors and retries.
If you like any of those, you could adopt it in place of your custom implementation and leverage the support of a big community of developers. Or you could just take inspiration from their code and bring it back into your own.
Enjoy

I think you need to make your operator right associative and pass an enum in the completion handler to allow for failure.
enum StepResult<Output> {
    case Success(Output)
    case Failure((StepResult<Output> -> Void) -> Void)

    static func toFailHandlerWithInput<Input>(input: Input, processor: (Input, (StepResult<Output>) -> Void) -> Void) -> StepResult<Output> {
        let handler: ((StepResult<Output> -> Void) -> Void) = { completion in
            processor(input) { pout in
                completion(pout)
            }
        }
        return .Failure(handler)
    }
}
infix operator => { associativity right }
func => <Input, Intermediate, Output>(left: (Input, (StepResult<Intermediate>) -> Void) -> Void, right: (Intermediate, (StepResult<Output>) -> Void) -> Void) -> (Input, (StepResult<Output>) -> Void) -> Void {
    var mergedOp: ((Input, (StepResult<Output>) -> Void) -> Void)!
    mergedOp = { input, completion in
        left(input) { intermediate in
            switch intermediate {
            case .Success(let output):
                right(output, completion)
            case .Failure:
                let failure = StepResult.toFailHandlerWithInput(input, processor: mergedOp)
                completion(failure)
            }
        }
    }
    return mergedOp
}
var counter1 = 0
func step1(input: Int, completion: (StepResult<Double>) -> Void) {
    print("performing step 1...")
    counter1 += 1
    if counter1 > 1 {
        print("Step 1 will succeed...")
        let result = 2.0 * Double(input + counter1)
        completion(.Success(result))
    } else {
        print("Step 1 fails...")
        completion(StepResult.toFailHandlerWithInput(input, processor: step1))
    }
}

var counter2 = 0
func step2(input: Double, completion: (StepResult<Double>) -> Void) {
    print("performing Step 2...")
    counter2 += 1
    if counter2 > 2 {
        print("Step 2 will succeed...")
        let result = 3 * input + Double(counter2)
        completion(.Success(result))
    } else {
        print("Step 2 fails...")
        completion(StepResult.toFailHandlerWithInput(input, processor: step2))
    }
}

var counter3 = 0
func step3(input: Double, completion: (StepResult<Double>) -> Void) {
    print("performing Step 3...")
    counter3 += 1
    if counter3 > 1 {
        print("Step 3 will succeed...")
        let result = 4 * input + Double(counter3)
        completion(.Success(result))
    } else {
        print("Step 3 fails...")
        completion(StepResult.toFailHandlerWithInput(input, processor: step3))
    }
}
func comboHandler(result: StepResult<Double>) {
    switch result {
    case .Success(let output):
        print("output: \(output)")
    case .Failure(let failHandler):
        failHandler(comboHandler) // call again until success
    }
}

let combinedSteps = step1 => step2 => step3
combinedSteps(5) { result in
    comboHandler(result)
}

The answer to the specific question I asked here, in case it is useful to anyone else, turned out to be to take an approach similar to memoization. Strictly speaking it is caching rather than memoization, but the concept of wrapping the step function in another function that takes a cache key parameter is the same.
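For illustration, here is a rough sketch of that idea in the same Swift 2-era style as the code above. The stepCache dictionary, the cached wrapper, and the key strings are just illustrative names (and a real app would persist the cache to disk rather than keep it in memory): each wrapped step checks the cache for its key, completes immediately with the stored value if there is one, and otherwise runs the real step and stores its output before completing.
var stepCache = [String: StepData]()

func cached<P: StepData, Q: StepData>(key: String, step: (P, Q -> Void) -> Void) -> (P, Q -> Void) -> Void {
    return { input, completion in
        if let hit = stepCache[key] as? Q {
            // This step already finished on a previous run; reuse the persisted result.
            completion(hit)
        } else {
            step(input, { output in
                stepCache[key] = output // persist progress before handing the result on
                completion(output)
            })
        }
    }
}

// Composition and type safety work exactly as before, e.g.:
// let cachedStep = cached("imageUpload", step: step1) => cached("localWrite", step: step2)
Because the wrapper has the same shape as the original step, it still composes with => and keeps the compile-time type safety.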

Related

How to escape (to initial calling place) from a recursive function in swift?

var currentCount = 0
let totalCount = 100
func myEscapingRecursiveFunction(_ json: String, done: @escaping (Bool) -> Void) {
    currentCount += 1
    if currentCount == totalCount {
        done(true)
    }
    else {
        myEscapingRecursiveFunction("xyz") { done in
            // what should I do here????
        }
    }
}
Calling
// (Initially called here)
myEscapingRecursiveFunction("xyz") { done in
if done {
print("completed") // I want to get response here after the recursion is finished
}
}
I want my function to escape only when the current count is equal to the total count; otherwise it should recurse. The problem is that I want to get the response at the place where the function was initially called, but it always executes the completion handler where it was last called.
You just need to pass the same escaping block to your recursive function, so call it like this:
myEscapingRecursiveFunction("xyz", done: done)

Kotlin coroutines delay does not work on iOS queue dispatcher

I have a KMM app, and there is this code:
fun getWeather(callback: (WeatherInfo) -> Unit) {
    println("Start loading")
    GlobalScope.launch(ApplicationDispatcher) {
        while (true) {
            val response = httpClient.get<String>(API_URL) {
                url.parameters.apply {
                    set("q", "Moscow")
                    set("units", "metric")
                    set("appid", weatherApiKey())
                }
                println(url.build())
            }
            val result = Json {
                ignoreUnknownKeys = true
            }.decodeFromString<WeatherApiResponse>(response).main
            callback(result)
            // because ApplicationDispatcher on iOS does not support delay
            withContext(Dispatchers.Default) { delay(DELAY_TIME) }
        }
    }
}
And if I replace withContext(Dispatchers.Default) { delay(DELAY_TIME) } with delay(DELAY_TIME), execution never returns to the while loop and it runs only one iteration.
And ApplicationDispatcher for iOS looks like:
internal actual val ApplicationDispatcher: CoroutineDispatcher = NsQueueDispatcher(dispatch_get_main_queue())

internal class NsQueueDispatcher(
    private val dispatchQueue: dispatch_queue_t
) : CoroutineDispatcher() {
    override fun dispatch(context: CoroutineContext, block: Runnable) {
        dispatch_async(dispatchQueue) {
            block.run()
        }
    }
}
And from the delay source code I can guess that DefaultDelay should be returned, and there should be similar behaviour with/without withContext(Dispatchers.Default):
/** Returns [Delay] implementation of the given context */
internal val CoroutineContext.delay: Delay get() = get(ContinuationInterceptor) as? Delay ?: DefaultDelay
Thanks!
P.S. I got ApplicationDispatcher from ktor-samples.
Probably ApplicationDispatcher is some old stuff; you don't need to use it anymore:
CoroutineScope(Dispatchers.Default).launch {
}
or
MainScope().launch {
}
And don't forget to use the -native-mt version of coroutines; more info in this issue.

objc_sync_enter / objc_sync_exit not working with DISPATCH_QUEUE_PRIORITY_LOW

I need a read/write lock for my application. I've read https://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock
and wrote my own class, because there is no read/write lock in Swift:
class ReadWriteLock {
    var logging = true
    var b = 0
    let r = "vdsbsdbs" // string1 for locking
    let g = "VSDBVSDBSDBNSDN" // string2 for locking

    func waitAndStartWriting() {
        log("wait Writing")
        objc_sync_enter(g)
        log("enter writing")
    }

    func finishWriting() {
        objc_sync_exit(g)
        log("exit writing")
    }

    // waits until all reading is finished so reading can start
    // and the mutex can be grabbed
    func waitAndStartReading() {
        log("wait reading")
        objc_sync_enter(r)
        log("enter reading")
        b++
        if b == 1 {
            objc_sync_enter(g)
            log("read lock writing")
        }
        print("b = \(b)")
        objc_sync_exit(r)
    }

    func finishReading() {
        objc_sync_enter(r)
        b--
        if b == 0 {
            objc_sync_exit(g)
            log("read unlock writing")
        }
        print("b = \(b)")
        objc_sync_exit(r)
    }

    private func log(s: String) {
        if logging {
            print(s)
        }
    }
}
It works well, until I try to use it from GCD threads.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0)
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
When I try to use this class from different async blocks, at some moment it allows writing while writing is locked.
Here is a sample log:
wait reading
enter reading
read lock writing
b = 1
wait reading
enter reading
b = 2
wait reading
enter reading
b = 3
wait reading
enter reading
b = 4
wait reading
enter reading
b = 5
wait reading
enter reading
b = 6
wait reading
enter reading
b = 7
wait reading
enter reading
b = 8
wait reading
enter reading
b = 9
b = 8
b = 7
b = 6
b = 5
wait Writing
enter writing
exit writing
wait Writing
enter writing
So, as you can see, g was locked, but objc_sync_enter(g) allowed execution to continue.
Why could this happen?
BTW, I checked how many times ReadWriteLock is constructed, and it's 1.
Why is objc_sync_exit not working, allowing objc_sync_enter(g) to proceed when the lock has not been freed?
P.S. ReadWriteLock is defined as
class UserData {
static let lock = ReadWriteLock()
Thanks.
objc_sync_enter is an extremely low-level primitive, and isn't intended to be used directly. It's an implementation detail of the old @synchronized system in ObjC. Even that is extremely outdated and should generally be avoided.
Synchronized access in Cocoa is best achieved with GCD queues. For example, this is a common approach that achieves a reader/writer lock (concurrent reading, exclusive writing).
public class UserData {
    private let myPropertyQueue = dispatch_queue_create("com.example.mygreatapp.property", DISPATCH_QUEUE_CONCURRENT)
    private var _myProperty = "" // Backing storage

    public var myProperty: String {
        get {
            var result = ""
            dispatch_sync(myPropertyQueue) {
                result = self._myProperty
            }
            return result
        }
        set {
            dispatch_barrier_async(myPropertyQueue) {
                self._myProperty = newValue
            }
        }
    }
}
All your concurrent properties can share a single queue, or you can give each property its own queue. It depends on how much contention you expect (a writer will lock the entire queue).
The "barrier" in "dispatch_barrier_async" means that it is the only thing allowed to run on the queue at that time, so all previous reads will have completed, and all future reads will be prevented until it completes. This scheme means that you can have as many concurrent readers as you want without starving writers (since writers will always be serviced), and writes are never blocking. On reads are blocking, and only if there is actual contention. In the normal, uncontested case, this is extremely very fast.
Are you 100% sure your blocks are actually executing on different threads?
objc_sync_enter() / objc_sync_exit() only guard against the object being accessed from different threads. They use a recursive mutex under the hood, so they will neither deadlock nor prevent you from repeatedly accessing the object from the same thread.
So if you lock in one async block and unlock in another, a third block executed in between on the same thread can still access the guarded object.
This is one of those very subtle nuances that is easy to miss.
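You can see the recursive behaviour directly with a tiny snippet like this (not taken from the question's code, just an illustration):
import Foundation

let token = NSObject()
objc_sync_enter(token)
objc_sync_enter(token) // does not deadlock: the underlying mutex is recursive
print("the same thread can re-enter the lock")
objc_sync_exit(token)
objc_sync_exit(token)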
Locks in Swift
You have to be really careful about what you use as a lock. In Swift, String is a struct, meaning it's passed by value.
Whenever you call objc_sync_enter(g), you are not giving it g, but a copy of g. So each thread is essentially creating its own lock, which, in effect, is like having no locking at all.
Use NSObject
Instead of using a String or Int, use a plain NSObject.
let lock = NSObject()

func waitAndStartWriting() {
    log("wait Writing")
    objc_sync_enter(lock)
    log("enter writing")
}

func finishWriting() {
    objc_sync_exit(lock)
    log("exit writing")
}
That should take care of it!
In addition to @rob-napier's solution: I've updated this to Swift 5.1, added generic typing, and added a couple of convenient append methods. Note that only methods that access resultArray via get/set or append are thread safe, so I also added a concurrent append for my practical use case, where the result data is updated over many result calls from instances of Operation.
public class ConcurrentResultData<E> {
    // Concurrent queue; writes use a barrier so reads always see a consistent array.
    private let resultPropertyQueue = DispatchQueue(label: UUID().uuidString, attributes: .concurrent)
    private var _resultArray = [E]() // Backing storage

    public var resultArray: [E] {
        get {
            var result = [E]()
            resultPropertyQueue.sync {
                result = self._resultArray
            }
            return result
        }
        set {
            resultPropertyQueue.async(group: nil, qos: .default, flags: .barrier) {
                self._resultArray = newValue
            }
        }
    }

    public func append(element: E) {
        resultPropertyQueue.async(group: nil, qos: .default, flags: .barrier) {
            self._resultArray.append(element)
        }
    }

    public func appendAll(array: [E]) {
        resultPropertyQueue.async(group: nil, qos: .default, flags: .barrier) {
            self._resultArray.append(contentsOf: array)
        }
    }
}
For an example running in a playground, add this:
//MARK:- helpers
var count: Int = 0
let numberOfOperations = 50
func operationCompleted(d: ConcurrentResultData<Dictionary<AnyHashable, AnyObject>>) {
    if count + 1 < numberOfOperations {
        count += 1
    }
    else {
        print("All operations complete \(d.resultArray.count)")
        print(d.resultArray)
    }
}
func runOperationAndAddResult(queue: OperationQueue, result: ConcurrentResultData<Dictionary<AnyHashable, AnyObject>>) {
    queue.addOperation {
        let id = UUID().uuidString
        print("\(id) running")
        let delay: Int = Int(arc4random_uniform(2) + 1)
        for _ in 0..<delay {
            sleep(1)
        }
        let dict: [Dictionary<AnyHashable, AnyObject>] = [[ "uuid": NSString(string: id), "delay": NSString(string: "\(delay)") ]]
        result.appendAll(array: dict)
        DispatchQueue.main.async {
            print("\(id) complete")
            operationCompleted(d: result)
        }
    }
}
let q = OperationQueue()
let d = ConcurrentResultData<Dictionary<AnyHashable, AnyObject>>()
for _ in 0..<10 {
    runOperationAndAddResult(queue: q, result: d)
}
I had the same problem using queues in the background. The synchronization does not always work in queues with "background" (low) priority.
One fix I found was to use semaphores instead of "objc_sync":
static private var syncSemaphores: [String: DispatchSemaphore] = [:]

static func synced(_ lock: String, closure: () -> ()) {
    // get the semaphore or create it
    var semaphore = syncSemaphores[lock]
    if semaphore == nil {
        semaphore = DispatchSemaphore(value: 1)
        syncSemaphores[lock] = semaphore
    }
    // lock semaphore
    semaphore!.wait()
    // execute closure
    closure()
    // unlock semaphore
    semaphore!.signal()
}
The function idea comes from What is the Swift equivalent to Objective-C's "@synchronized"?, an answer by @bryan-mclemore.
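For reference, usage looks something like this, assuming the two members above are placed in a helper type (hypothetically named Sync here). Note that the syncSemaphores dictionary itself is not protected, so the very first concurrent calls for a new key could still race while the semaphore is being created:
import Foundation

enum Sync {
    static private var syncSemaphores: [String: DispatchSemaphore] = [:]

    // Same helper as above, just namespaced for the example.
    static func synced(_ lock: String, closure: () -> ()) {
        var semaphore = syncSemaphores[lock]
        if semaphore == nil {
            semaphore = DispatchSemaphore(value: 1)
            syncSemaphores[lock] = semaphore
        }
        semaphore!.wait()
        closure()
        semaphore!.signal()
    }
}

var sharedCounter = 0
Sync.synced("counter.lock") {
    sharedCounter += 1 // runs while the "counter.lock" semaphore is held
}
Sync.synced("counter.lock") {
    sharedCounter += 1
}
print(sharedCounter) // 2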

Create a moving average (and other FIR filters) using ReactiveCocoa

I'm still getting started with ReactiveCocoa and functional reactive programming concepts, so maybe this is a dumb question.
ReactiveCocoa seems naturally designed to react to streams of live data, touch events, accelerometer sensor input, etc.
Is it possible to apply finite impulse response filters in ReactiveCocoa in an easy, reactive fashion? Or if not, what would be the least-ugly hacky way of doing this? How would one go about implementing something like a simple moving average?
Ideally I'm looking for a Swift 2 + RA4 solution, but I'm also interested in whether this is possible at all in Objective-C and RA2/RA3.
What you actually need is some sort of period buffer, which keeps a period of values buffered and only starts sending them out once the buffer has reached capacity (the code below is heavily inspired by the takeLast operator).
extension SignalType {
    func periodBuffer(period: Int) -> Signal<[Value], Error> {
        return Signal { observer in
            var buffer: [Value] = []
            buffer.reserveCapacity(period)
            return self.observe { event in
                switch event {
                case let .Next(value):
                    // To avoid exceeding the reserved capacity of the buffer, we remove then add.
                    // Remove elements until we have room to add one more.
                    while (buffer.count + 1) > period {
                        buffer.removeAtIndex(0)
                    }
                    buffer.append(value)
                    if buffer.count == period {
                        observer.sendNext(buffer)
                    }
                case let .Failed(error):
                    observer.sendFailed(error)
                case .Completed:
                    observer.sendCompleted()
                case .Interrupted:
                    observer.sendInterrupted()
                }
            }
        }
    }
}
Based on that, you can map it to any algorithm you want:
let pipe = Signal<Int, NoError>.pipe()
pipe.0
    .periodBuffer(3)
    .map { Double($0.reduce(0, combine: +)) / Double($0.count) } // simple moving average
    .observeNext { print($0) }

pipe.1.sendNext(10) // does nothing
pipe.1.sendNext(11) // does nothing
pipe.1.sendNext(15) // prints 12
pipe.1.sendNext(7)  // prints 11
pipe.1.sendNext(9)  // prints 10.3333
pipe.1.sendNext(6)  // prints 7.3333
Probably the scan signal operator is what you're looking for. Inspired by Andy Jacobs' answer, I came up with something like this (a simple moving average implementation):
let (signal, observer) = Signal<Int, NoError>.pipe()
let maxSamples = 3
let movingAverage = signal.scan([Int]()) { (previousSamples, nextValue) in
        let samples: [Int] = previousSamples.count < maxSamples ? previousSamples : Array(previousSamples.dropFirst())
        return samples + [nextValue]
    }
    .filter { $0.count >= maxSamples }
    .map { $0.average }

movingAverage.observeNext { (next) -> () in
    print("Next: \(next)")
}

observer.sendNext(1)
observer.sendNext(2)
observer.sendNext(3)
observer.sendNext(4)
observer.sendNext(42)
Note: I had to move the average method into a protocol extension; otherwise the compiler would complain that the expression was too complex. I used a nice solution from this answer:
extension Array where Element: IntegerType {
    var total: Element {
        guard !isEmpty else { return 0 }
        return reduce(0) { $0 + $1 }
    }
    var average: Double {
        guard let total = total as? Int where !isEmpty else { return 0 }
        return Double(total) / Double(count)
    }
}

Is there a way to specify an error code for an ErrorType value in Swift?

In my app I am using a custom error type ProgrammerError with three error values .Messiness, .Procrastination and .Arrogance (these are just examples). Later in the code I need to cast the errors to NSError. The NSError objects have code properties starting from 0, following the order of the error values I declared: 0, 1, 2, etc.
enum ProgrammerError: ErrorType {
    case Messiness
    case Procrastination
    case Arrogance
}

(ProgrammerError.Messiness as NSError).code // 0
(ProgrammerError.Procrastination as NSError).code // 1
(ProgrammerError.Arrogance as NSError).code // 2
My question is: Is there a way to set different error codes for the enumeration values? For example, can I set Messiness to have a code value of 100 instead of 0?
You can implement the var _code: Int { get } property.
enum ProgrammerError: ErrorType {
    case Messiness
    case Procrastination
    case Arrogance

    var _code: Int {
        switch self {
        case .Messiness:
            return 100
        case .Procrastination:
            return 101
        case .Arrogance:
            return 102
        }
    }
}

(ProgrammerError.Messiness as NSError).code // 100
(ProgrammerError.Procrastination as NSError).code // 101
(ProgrammerError.Arrogance as NSError).code // 102
You can also implement var _domain: String { get } if you need to.
But I must warn you: these methods are undocumented, so they might stop working in the future.
Or, you may try explicit conversion.
extension FileActionError {
    func getCode() -> Int {
        switch self {
        case .BadFileNodeIndex: return 1
        case .BadFileNodePath: return 2
        }
    }
    func toNSError() -> NSError {
        return NSError(domain: "", code: getCode(), userInfo: [NSLocalizedDescriptionKey: "\(self)"])
    }
}
It might not be what you expected.
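Adapted to the ProgrammerError enum from the question, the explicit approach would look roughly like this (the domain string is just a placeholder):
extension ProgrammerError {
    func getCode() -> Int {
        switch self {
        case .Messiness: return 100
        case .Procrastination: return 101
        case .Arrogance: return 102
        }
    }
    func toNSError() -> NSError {
        return NSError(domain: "ProgrammerError", code: getCode(), userInfo: [NSLocalizedDescriptionKey: "\(self)"])
    }
}

ProgrammerError.Messiness.toNSError().code // 100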
