Swift's way to handle this situation instead of using Bools - iOS

I have a function that I want to execute only if it is not already executing.
I have used a Bool variable to track the current execution.
Is there any other solution provided by Swift to handle this instead of using a Bool?
guard
    !isExecuting,
    let currentNavVC = tabBarController.selectedViewController as? UINavigationController,
    let first = currentNavVC.viewControllers.first,
    let last = currentNavVC.viewControllers.last
else { return }

isExecuting = true

var controllers = [first]
if first != last {
    controllers = [first, last]
}

DispatchQueue.main.async {
    currentNavVC.viewControllers = controllers
    isExecuting = false
}
Bool variable: isExecuting
Note:
Tried using semaphores (DispatchSemaphore), but they were of no help.
Also, I am calling the above function in didReceiveMemoryWarning().
Any help will be appreciated. Thanks in advance!

I have a function that I want to execute only if it is not already executing
You're looking for a lock. But locks by themselves are tricky and dangerous. The easy, safe way to get a lock is to use a serial queue. As we say, a serial queue is a form of lock. So:
If your function is called on the main queue, then it cannot be called again while it is already executing, and there is nothing to do. The main queue is a serial queue, and there can be Only One.
If your function is called on a background queue, then make sure that your queue is a serial queue. For example, if you create your own DispatchQueue, it is serial by default.
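For example, a minimal sketch of that advice; the queue label, performWork, and doWorkThatMustNotOverlap are placeholder names of mine, not from the question:
import Foundation

// A private serial queue acts as the lock: only one block runs at a time.
let workQueue = DispatchQueue(label: "com.example.serial-work") // serial by default

func doWorkThatMustNotOverlap() {
    // ... the body of the function that must never run concurrently with itself ...
}

func performWork() {
    workQueue.async {
        // Calls simply line up on the serial queue, so the work never overlaps itself.
        doWorkThatMustNotOverlap()
    }
}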

I believe you can also use Operation with OperationQueue in this case.
Operation supports cancellation as well as checking whether it is executing.
Ref:
OperationQueue: https://developer.apple.com/documentation/foundation/operationqueue
Operation: https://developer.apple.com/documentation/foundation/operation
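A rough sketch of that idea (my own illustration, not from the question; SyncOperation and syncQueue are placeholder names):
import Foundation

class SyncOperation: Operation {
    override func main() {
        guard !isCancelled else { return }
        // ... the work that must not overlap ...
    }
}

let syncQueue = OperationQueue()
syncQueue.maxConcurrentOperationCount = 1 // serial: at most one operation runs at a time

// Only enqueue a new operation if nothing is running or waiting already.
if syncQueue.operations.isEmpty {
    syncQueue.addOperation(SyncOperation())
}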

Related

How do I write thread-safe code that uses a completionHandler with a function that delegates code to an instance of OperationQueue?

I've been using the CloudKitShare sample code found here as a guide while writing code for my app. I want to use performWriterBlock and performReaderBlockAndWait as found in BaseLocalCache with a completionHandler, without violating the purpose of the design of the code, which focuses on being thread-safe. I include the code from CloudKitShare below that is pertinent to my question, along with the comments that explain it; I wrote comments to identify which code is mine.
I would like to be able to use an escaping completionHandler if possible. Does using an escaping completionHandler still comply with the principles of thread-safe code, or does it in any way violate the purpose of this sample code's design of being thread-safe? If I use an escaping completionHandler, I would need to consider when the completionHandler actually runs relative to other code outside the scope of the perform function that uses the BaseLocalCache perform block. For one thing, I would need to be aware of what other code in my project runs between the time the method executes and the time the dispatch queue in BaseLocalCache actually executes the block of code, and thus the completionHandler.
class BaseLocalCache {
    // A CloudKit task can be a single operation (CKDatabaseOperation)
    // or multiple operations that you chain together.
    // Provide an operation queue to get more flexibility on CloudKit operation management.
    //
    lazy var operationQueue: OperationQueue = OperationQueue()

    // This sample uses this dispatch queue to implement the following logic:
    // - It serializes writer blocks.
    // - The reader block can be concurrent, but it needs to wait for the enqueued writer blocks to complete.
    //
    // To achieve that, this sample uses the following pattern:
    // - Use a concurrent queue, cacheQueue.
    // - Use cacheQueue.async(flags: .barrier) {} to execute writer blocks.
    // - Use cacheQueue.sync {} to execute reader blocks. The queue is concurrent,
    //   so reader blocks can be concurrent, unless any writer blocks are in the way.
    // Note that writer blocks block the reader, so they need to be as small as possible.
    //
    private lazy var cacheQueue: DispatchQueue = {
        return DispatchQueue(label: "LocalCache", attributes: .concurrent)
    }()

    func performWriterBlock(_ writerBlock: @escaping () -> Void) {
        cacheQueue.async(flags: .barrier) {
            writerBlock()
        }
    }

    func performReaderBlockAndWait<T>(_ readerBlock: () -> T) -> T {
        return cacheQueue.sync {
            return readerBlock()
        }
    }
}

final class TopicLocalCache: BaseLocalCache {
    private var serverChangeToken: CKServerChangeToken?

    func setServerChangeToken(newToken: CKServerChangeToken?) {
        performWriterBlock { self.serverChangeToken = newToken }
    }

    func getServerChangeToken() -> CKServerChangeToken? {
        return performReaderBlockAndWait { return self.serverChangeToken }
    }

    // Trial: How to use an escaping completionHandler with performWriterBlock?
    func setServerChangeToken(newToken: CKServerChangeToken?, completionHandler: @escaping (Result<Void, Error>) -> Void) {
        performWriterBlock {
            self.serverChangeToken = newToken
            completionHandler(.success(Void()))
        }
    }

    // Trial: How to use an escaping completionHandler with performReaderBlockAndWait?
    func getServerChangeToken(completionHandler: (Result<CKServerChangeToken, Error>) -> Void) {
        performReaderBlockAndWait {
            if let serverChangeToken = self.serverChangeToken {
                completionHandler(.success(serverChangeToken))
            } else {
                completionHandler(.failure(NSError(domain: "nil CKServerChangeToken", code: 0)))
            }
        }
    }
}
You asked:
Does using an escaping completionHandler still comply with principles of thread-safe code, or does it in any way violate the purpose of the design of this sample code to be thread-safe?
An escaping completion handler does not violate thread-safety.
That having been said, it does not ensure thread-safety, either. Thread-safety is solely a question of whether you ever access some shared resource from one thread while mutating it from another.
If I use an escaping completionHandler, I would need to consider when the completionHandler actually runs relative to other code outside of the scope of the actual perform function that uses the BaseLocalCache perform block.
Yes, you need to be aware that the escaping completion handler is called asynchronously (i.e., later). That is less a thread-safety concern than a matter of understanding your general application flow. It is only a question of what you might be doing in that closure.
IMHO, the more important observation is that the completion handler is called on the cacheQueue used internally by BaseLocalCache. So, the caller needs to be aware that the closure is not called on the caller’s current queue, but on cacheQueue.
It should be noted that elsewhere in that project, they employ another common pattern, where the completion handler is dispatched back to a particular queue, e.g., the main queue.
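For illustration, that pattern might look roughly like this if added to TopicLocalCache above (a sketch, not the sample's actual code; fetchServerChangeToken is a hypothetical name):
// Read on cacheQueue via the existing reader helper, then deliver the result
// on the main queue so the caller never has to think about cacheQueue.
func fetchServerChangeToken(completionHandler: @escaping (CKServerChangeToken?) -> Void) {
    let token = performReaderBlockAndWait { self.serverChangeToken }
    DispatchQueue.main.async {
        completionHandler(token)
    }
}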
Bottom line, thread-safety is not a question of whether a closure is escaping or not, but rather (a) from what thread the method calls the closure; and (b) what the supplied closure actually does:
Do you interact with the UI? Then you will want to ensure that you dispatch that back to the main queue.
Do you interact with your own properties? Then you will want to make sure you synchronize all of your access to them, whether with actors, by relying on the main queue, with your own serial queues, or with a reader-writer pattern like the one in the example you shared with us.
If you are ever unsure about your code's thread-safety, you might consider temporarily turning on the Thread Sanitizer (TSan), as described in Diagnosing Memory, Thread, and Crash Issues Early.

How to "loosely" trigger a function?

I have the following async recursive code:
func syncData() {
    dal.getList(...) { [unowned self] list, error in
        if let objects = list {
            if oneTime {
                oneTime = false
                syncOtherStuffNow()
            }
            syncData() // recurse until all data synced
        } else if let error = error { ... }
    }
}

func syncOtherStuffNow() { } // with its own recursion
My understanding is that the recursion will build up the call stack until all the function calls complete, at which point they will all unwind and free up the stack.
I also want to trigger another function (syncOtherStuffNow) from within the closure, but I don't want to bind it to the closure with a strong reference waiting for its return (even though it's async too).
How can I essentially trigger the syncOtherStuffNow() selector to run, and not affect the current closure with hanging on to its return call?
I thought of using Notifications, but that seems overkill given the two functions are in the same class.
Since dal.getList() takes a callback, I guess it is asynchronous, so the first call starts the async work and then returns immediately, which lets syncData() return.
If syncOtherStuffNow() is async it will return immediately too, so syncData() will not wait on it finishing its job and will continue with its execution to the end.
You can test whether something builds a call stack by putting a breakpoint on every recursion and looking at the call stack to see how many calls of the same function are on top.
What I do is recurse with asyncAfter, which unwinds the call stack.
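A sketch of that pattern (Syncer, stillHaveWorkToDo, doOneChunkOfWork, and the 0.1-second delay are placeholders I've made up, not names from the question):
import Foundation

class Syncer {
    var stillHaveWorkToDo = true // placeholder condition

    func poll() {
        guard stillHaveWorkToDo else { return }
        doOneChunkOfWork()
        // Re-schedule instead of calling poll() directly: the current call returns
        // immediately, so the call stack does not grow with the number of iterations.
        DispatchQueue.main.asyncAfter(deadline: .now() + 0.1) { [weak self] in
            self?.poll()
        }
    }

    func doOneChunkOfWork() {
        // ... one step of the sync ...
    }
}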

How to run async Operation

For example, I have this custom operation:
class CustomOperation: Operation {
    override init() {
        super.init()
        self.qualityOfService = .userInitiated
    }

    override func main() {
        // ..
    }
}
And this is what I'm doing to run the CustomOperation:
let customOperation = CustomOperation()
customOperation.completionBlock = { print("custom operation finished") }
customOperation.start()
I have a few CustomOperations trying to run at the same time. Is there any way to run them asynchronously without creating an OperationQueue for each CustomOperation? The isAsynchronous property is read-only.
You don't have to create a queue for each operation. You can put them all in the same queue. The queue's maxConcurrentOperationCount determines how many are run simultaneously.
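For example (a sketch; CustomOperation is the class from the question, and the counts of 4 and 10 are arbitrary):
let queue = OperationQueue()
queue.maxConcurrentOperationCount = 4 // tune how many operations may run at once

for _ in 0..<10 {
    let op = CustomOperation()
    op.completionBlock = { print("custom operation finished") }
    queue.addOperation(op) // the queue starts it asynchronously; don't call start() yourself
}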
If you don't want to use a queue at all, you need to override start() and isAsynchronous, and have start() spin up a thread (or kick off asynchronous work) and run. There is more you need to do than that (read the docs):
https://developer.apple.com/reference/foundation/operation
Go to the "Methods to Override" section:
If you are creating a concurrent operation, you need to override the following methods and properties at a minimum:
start()
isAsynchronous
isExecuting
isFinished
In a concurrent operation, your start() method is responsible for starting the operation in an asynchronous manner. Whether you spawn a thread or call an asynchronous function, you do it from this method. Upon starting the operation, your start() method should also update the execution state of the operation as reported by the isExecuting property. You do this by sending out KVO notifications for the isExecuting key path, which lets interested clients know that the operation is now running. Your isExecuting property must also provide the status in a thread-safe manner.
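For reference, the usual skeleton for such a concurrent (asynchronous) Operation looks roughly like this. It is a sketch, not Apple's code, and for brevity it omits the locking that a production version should add around the state flags:
import Foundation

class AsynchronousOperation: Operation {
    // Backing storage for the KVO-compliant state properties.
    private var _executing = false
    private var _finished = false

    override var isAsynchronous: Bool { return true }
    override var isExecuting: Bool { return _executing }
    override var isFinished: Bool { return _finished }

    override func start() {
        guard !isCancelled else {
            finish()
            return
        }
        willChangeValue(forKey: "isExecuting")
        _executing = true
        didChangeValue(forKey: "isExecuting")

        main() // kick off the asynchronous work; it must eventually call finish()
    }

    override func main() {
        // Subclasses start their asynchronous work here and call finish() when done.
        fatalError("Subclasses must override main() and call finish()")
    }

    func finish() {
        willChangeValue(forKey: "isExecuting")
        willChangeValue(forKey: "isFinished")
        _executing = false
        _finished = true
        didChangeValue(forKey: "isExecuting")
        didChangeValue(forKey: "isFinished")
    }
}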

SceneKit: how to animate multiple SCNNodes together then call completion block once

The goal is to animate multiple SCNNodes at the same time, then call a completion block once all the animations complete. The parallel animations have the same duration, so they will complete at the same time if started together.
This SO answer suggested using the group function for SpriteKit, but there is no analog in SceneKit because the SCNScene class lacks a runAction method.
One option is to run all the actions individually against each node and have each one call the same completion function, which must maintain a flag to ensure it's only called once.
Another option is to avoid the completion handler and call the completion code after a delay matched to the animation duration. This creates race conditions during testing, however, since sometimes the animations get held up before completing.
This seems clunky, though. What's the right way to group the animation of multiple nodes in SceneKit then invoke a completion handler?
The way I initially approached this was, since all the initial animations have the same duration, to apply the completion handler to just one of the actions. But, on occasion, the animations would hang up (SCNAction completion handler awaits gesture to execute).
My current, successful solution is to not use the completion handler in conjunction with an SCNAction but with a delay:
func delay(delay: Double, closure: () -> ()) {
    dispatch_after(
        dispatch_time(
            DISPATCH_TIME_NOW,
            Int64(delay * Double(NSEC_PER_SEC))
        ),
        dispatch_get_main_queue(), closure)
}
An example of invocation:
delay(0.95) {
    self.scaleNode_2.runAction(moveGlucoseBack)
    self.fixedNode_2.runAction(moveGlucoseBack)
    self.scaleNode_3.hidden = true
    self.fixedNode_3.hidden = true
}
I doubt this can be called "the right way" but it works well for my uses and eliminates the random hang-ups I experienced trying to run animations on multiple nodes with completion handlers.
I haven't thought this through completely but I'll post it in hopes of being useful.
The general problem, do something after the last of a set of actions completes, is what GCD's dispatch_barrier is about. Submit all of the blocks to a private concurrent queue, then submit the Grand Finale completion block with dispatch_barrier. Grand Finale runs after all previous blocks have finished.
What I don't see right away is how to integrate these GCD calls with SceneKit calls and completion handlers.
Maybe dispatch_group is a better approach.
Edits and comments welcome!
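Following up on that thought, a DispatchGroup sketch might look like this (my own illustration in current Swift syntax; nodes, action, and the helper's name are placeholders):
import SceneKit

func run(_ action: SCNAction, on nodes: [SCNNode], completion: @escaping () -> Void) {
    let group = DispatchGroup()
    for node in nodes {
        group.enter()
        node.runAction(action) {
            group.leave() // SceneKit calls this when the action finishes on this node
        }
    }
    // Fires once, after every enter() has been balanced by a leave();
    // it also fires immediately if nodes is empty.
    group.notify(queue: .main, execute: completion)
}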
Try something like this:
private class CountMonitor {
    var completed: Int = 0
    let total: Int
    let then: () -> Void

    init(for total: Int, then: @escaping () -> Void) {
        self.total = total
        self.then = then
    }

    func didOne() {
        completed += 1
        if completed == total {
            then() // Generally you should dispatch this off the main thread though
        }
    }
}
Then creating the actions looks something like:
private func test() {
    // for context of types
    let nodes: [SCNNode] = []
    let complexActionsToRun: SCNAction = .fadeIn(duration: 100)

    // Set up the monitor so it knows how many 'didOne' calls it should get,
    // and what to do when they are all done ...
    let monitor = CountMonitor(for: nodes.count) {
        // do whatever you want at the end here
        print("Done!")
    }

    for node in nodes {
        node.runAction(complexActionsToRun) {
            monitor.didOne()
        }
    }
}
Note you should also account for the nodes array being empty (you might still want to do whatever you wanted to do at the end, just immediately in that case).

Object passed by reference will not exist. Swift

I have an array.
var array: [customType] = [] // pseudo code

func Generate_New_Array() {
    // initialization of generatedNewArray
    array = generatedNewArray
    for (index, element) in array.enumerated() {
        async_process {
            Update_Data_From_Web(&array[index])
        }
    }
}

func Update_Data_From_Web(object: inout customType) {
    download_process {
        object = downloadedData
    }
}
The question is: what should I do if I call Generate_New_Array again before Update_Data_From_Web has finished for each of the elements? Those calls will store values back to indexes that no longer exist in the array. How do I avoid problems with that?
You have a couple of options:
Make the Generate_New_Array process cancelable, and then cancel the old one before starting the new one.
Make Generate_New_Array serial, so that a subsequent call to this method finishes the earlier calls first. For example, you could have it enqueue an operation on a serial queue.
Regardless of which approach you adopt, if this is multithreaded code, make sure you synchronize your interaction with the model object (via GCD queues or locks or whatever).
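A lightweight sketch of the first option, using a generation counter so results from a superseded pass are simply discarded (DataSource, CustomType, makeNewArray, and updateDataFromWeb are hypothetical names, and the sketch assumes completions are delivered on the main queue):
import Foundation

struct CustomType { } // placeholder element type

class DataSource {
    var array: [CustomType] = []
    private var generation = 0 // bumped every time a new array is generated

    func generateNewArray() {
        generation += 1
        let currentGeneration = generation
        array = makeNewArray()
        for index in array.indices {
            updateDataFromWeb(at: index) { [weak self] downloadedData in
                // Delivered on the main queue in this sketch, so access to array stays serialized.
                guard let self = self,
                      self.generation == currentGeneration,  // a newer pass started? drop the result
                      self.array.indices.contains(index)     // index still valid?
                else { return }
                self.array[index] = downloadedData
            }
        }
    }

    private func makeNewArray() -> [CustomType] { return [] } // placeholder

    private func updateDataFromWeb(at index: Int, completion: @escaping (CustomType) -> Void) {
        // Placeholder for the real download; call completion on the main queue when done.
    }
}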
