Flow Framework: How does #Signal behave, and can it be used to perform dynamic tasks that change the execution flow in SWF?

Some of my observations from researching the Flow Framework are:
#Signal starts executing in the decider replay once the signal is received, and the #Signal method is executed in all future replays of the same workflow (once a signal has been received, the decider executes #Signal on every replay). #Asynchronous methods should not be long-running tasks, as the decider will schedule activity tasks only after all #Asynchronous methods have completed execution.
Are my observations correct? If yes: what if, in the same workflow, I want a signal that performs some task and then stops executing in future replays? For example a pause signal: the user might pause and resume a workflow multiple times.
Another problem is: how does Flow handle the following kind of case? A decider times out, and meanwhile two events arrive: a workflow cancellation and an activity completion. How does the decider figure out that they are related and, if the cancellation is processed, not respond to the ActivityCompletedEvent?

It helps not to think about workflow behavior in terms of replay. Replay is just a mechanism for recovering workflow state. When workflow logic is written, replay is not really visible, apart from the requirement that the code be deterministic, asynchronous, and non-blocking. So never think about replay when designing your workflow logic. Write it as if it were a locally executing asynchronous program.
With replay out of the picture, #Signal is just a callback method that executes once per received signal. So if you invoke some action from the #Signal method, that action is going to execute once.
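To make the pause/resume idea concrete: the signal handler only needs to record state that the rest of the workflow logic consults before scheduling each activity. The Flow Framework itself is Java, so the following is only a rough, language-agnostic sketch written in Swift; the names (PausableWorkflow, run, and so on) are made up and are not part of any SWF API.
// Conceptual sketch only -- NOT Flow Framework code. A "pause" signal
// handler runs once per received signal and simply flips a flag; the
// workflow logic checks that flag before scheduling each activity.
actor PausableWorkflow {
    private var paused = false
    // Analogous to the #Signal handlers: each runs once per received signal.
    func pauseSignal()  { paused = true }
    func resumeSignal() { paused = false }
    // Analogous to the workflow logic: before each activity, wait until
    // the workflow is no longer paused.
    func run(steps: [() async -> Void]) async {
        for step in steps {
            while paused {
                // Crude polling wait, purely for illustration.
                try? await Task.sleep(nanoseconds: 200_000_000)
            }
            await step()
        }
    }
}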
As for the second question, it depends on the order in which the cancellation and the activity completion are received. If the cancellation is first, it is delivered to the workflow first, which might cause cancellation of the activity. Cancellation actually blocks, waiting for the activity to cancel; the completion of the activity (which is the next event) then unblocks it. If the completion is first, the activity completes, and whatever follows it is cancelled by the next event. In the majority of cases the result is exactly the same: the activity completion is received, but all logic after it is cancelled.

Related

How to understand Dart async operations?

As we know, Dart is a single-threaded language. According to the documentation, we can use Future/Stream to implement an async operation, which sends the time-consuming operation to the Event Queue.
What confuses me is where the Event Queue runs. Does it run on the Dart thread? If yes, won't it block the app?
Another question: is the Event Queue a FIFO queue? Say I have two operations, one a networking request that takes one minute, the other a click event. Both operations are sent to the Event Queue.
So will the click event be blocked by the networking request, since the queue is FIFO?
And where does the event queue actually run?
Thank you very much!
One thing to note is that asynchrony and multithreading are two different things. Dart uses Futures and async/await to achieve asynchronicity, but Dart is still inherently a single-threaded language.
The way it works is when a Future is created (either manually or via calling an async method), that process is added to an event queue, as you read. Then, in the middle of all the synchronous execution, whenever there is a lull, the event queue can take priority. It can then go through the processes and figure out if any of the Futures have been completed. If so, the result is passed along to any other asynchronous processes that are waiting on that resource, if any.
This also means that, yes, if your program hangs in the middle of an asynchronous operation (with the easy example of an endless loop via while (true) {}), it will freeze the entire program, including the synchronous code and other asynchronous processes still waiting to resolve (even if the conditions allowing them to resolve have already occurred).
However, in your case, this won't be an issue. If you fire an asynchronous process in the form of a network request followed by another in the form of a "click event" (not sure what you're referring to, but I'll assume it's asynchronous as well), they will both be added to the event queue in that order. But if the click event resolves before the network request, the event queue will merely recognize that the network request Future has not yet resolved and will move on to the click event that has.
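The same blocking-versus-suspending distinction exists in other single-threaded, event-loop-style environments. Purely as an analogy (this is Swift running on the main actor, not Dart), a slow request that suspends does not hold up a later, quicker task, while a long synchronous loop would starve both:
@MainActor
func demo() {
    // A slow "network request" that suspends while waiting: the single
    // executor is free to run other work in the meantime.
    Task {
        try? await Task.sleep(nanoseconds: 60 * 1_000_000_000)  // pretend 1-minute request
        print("network request finished")
    }
    // A quick "click event" queued afterwards: it runs long before the
    // request above resolves, because that task is suspended, not running.
    Task {
        print("click handled")
    }
    // By contrast, a synchronous `while true {}` here would block the
    // thread and starve both tasks -- the equivalent of the endless loop
    // freezing the whole Dart program.
}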
As a side note, Dart does have a multi-threading capability, albeit in a fairly roundabout way. Dart has something called an Isolate, which isn't a thread but a completely separate child program. This means that the Isolate cannot access any of the same data in memory as the root program itself. However, data can be passed between the two using SendPorts and ReceivePorts. This makes using Isolates slightly more complicated than threads, but it also means that, if no memory is shared, it virtually eliminates race conditions based on which thread accesses the memory first.

RxSwift -- MainScheduler.instance vs MainScheduler.asyncInstance

What is the difference between using RxSwift's MainScheduler.instance and MainScheduler.asyncInstance within the context of observeOn?
asyncInstance guarantees asynchronous delivery of events whereas instance can deliver events synchronously if it’s already on the main thread.
As for why you would ever need to force asynchronous delivery when you’re already on the main thread: it’s fairly rare and I would typically try to avoid it, but sometimes you have a recursive reactive pipeline where one event triggers the delivery of a new event in the same pipeline. If this happens synchronously, it breaks the Rx contract and RxSwift will spit out a warning that you tried to deliver a second event before the first event finished. In this case you can observe on MainScheduler.asyncInstance to break the cycle.
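As a small sketch of that recursive case (RxSwift 6 spelling; on older versions the operator is observeOn(_:) rather than observe(on:)):
import RxSwift
import RxRelay

let disposeBag = DisposeBag()
let steps = PublishRelay<Int>()
steps
    // Without this line, accepting a new value from inside onNext would
    // deliver a second event before the first one finished, and RxSwift
    // would log a re-entrancy warning.
    .observe(on: MainScheduler.asyncInstance)
    .subscribe(onNext: { value in
        print("handling step \(value)")
        if value < 3 {
            // Feeding the same pipeline from its own handler; the async
            // scheduler breaks the recursion up across main-queue hops.
            steps.accept(value + 1)
        }
    })
    .disposed(by: disposeBag)
steps.accept(0)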

Synchronous API requests with Queue in Swift?

I need to execute synchronous requests against an API using Swift. Requests must be queued: if one is already in progress and awaiting a response, it must not be canceled or interrupted by the next request that enters the queue or is already in the queue.
Requests must be executed in the order they enter the queue (FIFO). The next request must not start until the previous one in the queue has finished/completed.
Also, every single request in the queue must be executed until the queue is empty. Requests can enter the queue at any time.
I intend to implement the synchronous API client as a singleton that contains its own queue for queued requests. Requests must not stop/freeze the UI; the UI has to remain responsive to user interaction at all times.
I know this can be done with semaphores but, unless you know what you are doing and are completely sure how semaphores work, they are not the safest or necessarily the best way to do it; potential bugs and crashes could appear.
I'm expecting successful execution of every request that enters the queue (in FIFO order, regardless of whether it returns success or an error as a response), with the UI updating immediately afterwards.
So, my question is what is the best way to approach and solve this problem?
Thanks for your help and time.
You can create your own DispatchQueue and put your operations on it as DispatchWorkItems. It is serial by default. Just remember to call your completions on DispatchQueue.main if you plan to update the UI.
John Sundell has a wonderful article about DispatchQueues here:
https://www.swiftbysundell.com/articles/a-deep-dive-into-grand-central-dispatch-in-swift/
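A minimal sketch of that idea, assuming a simple GET-style request (the SyncAPIClient name is made up, and Data(contentsOf:) is used only to keep each work item synchronous; a real client would more likely wrap URLSession):
import Foundation

// Hypothetical singleton: requests run strictly one at a time, in FIFO
// order, on a private serial queue, so the main thread never blocks.
final class SyncAPIClient {
    static let shared = SyncAPIClient()
    private let queue = DispatchQueue(label: "api.client.requests")  // serial by default

    func enqueue(_ url: URL, completion: @escaping (Result<Data, Error>) -> Void) {
        let work = DispatchWorkItem {
            // The synchronous fetch keeps the serial queue busy until this
            // request finishes, so the next enqueued request cannot start early.
            do {
                let data = try Data(contentsOf: url)
                DispatchQueue.main.async { completion(.success(data)) }
            } catch {
                DispatchQueue.main.async { completion(.failure(error)) }
            }
        }
        queue.async(execute: work)
    }
}
// Both calls return immediately; the requests themselves execute one after
// the other, and the completions arrive on the main queue.
SyncAPIClient.shared.enqueue(URL(string: "https://example.com/a")!) { _ in }
SyncAPIClient.shared.enqueue(URL(string: "https://example.com/b")!) { _ in }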

Handling of Alamofire requests as iOS app is terminating

AppDelegate.applicationWillTerminate is called when the application is about to terminate. In this function, I am issuing a network request via Alamofire, to notify the server that the app is terminating. Alamofire's response handler is never invoked. It looks to me like the termination completes before the completion handler is invoked.
Alamofire's completion handlers appear to run on the main thread. I found documentation saying that the app is responsible for draining the main queue: "Although you do not need to create the main dispatch queue, you do need to make sure your application drains it appropriately. For more information on how this queue is managed, see Performing Tasks on the Main Thread." (From https://developer.apple.com/library/content/documentation/General/Conceptual/ConcurrencyProgrammingGuide/OperationQueues/OperationQueues.html) And this is where I am stuck.
How do I drain the main thread? I need to ensure that this last Alamofire request runs before the main thread exits.
Don't worry about “draining” the main thread. The problem is simpler than that. It's just a question of how to do something when your app leaves the “foreground”/“active” state.
When a user leaves your app to go do something else, it is generally not terminated. It enters a “suspended” state where it remains in memory but does not execute any code. So when the app is suspended, it cannot process your request (but the app isn't yet terminated, either).
There are two approaches to solve this problem.
You could just request a little time to finish your request (see Extending Your App's Background Execution Time). By doing this, your app is not suspended, but temporarily enters a "background" state, where execution can continue for a short period of time.
The advantage of this approach is that it is a fairly simple process. Just get a background task identifier before starting the request, and end the background task in the Alamofire completion handler.
The disadvantage of this approach is that you only have 30 seconds (previously 3 minutes) for the request to be processed. If you have a good connection, this is generally adequate. But if you don't have a good network connection in that period, the request might never get sent.
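A sketch of this first approach, assuming Alamofire 5 (the URL is a placeholder):
import Alamofire
import UIKit

func notifyServerBeforeLeaving() {
    // Ask iOS for extra execution time so the request can finish after
    // the app leaves the foreground.
    var taskID: UIBackgroundTaskIdentifier = .invalid
    taskID = UIApplication.shared.beginBackgroundTask {
        // Expiration handler: time ran out before the request completed.
        UIApplication.shared.endBackgroundTask(taskID)
        taskID = .invalid
    }
    AF.request("https://example.com/api/leaving", method: .post)
        .response { _ in
            // Tell the system we're done as soon as the request completes.
            UIApplication.shared.endBackgroundTask(taskID)
            taskID = .invalid
        }
}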
The second approach is a little more complicated: you could make your request using a background URLSession. In this scenario, you are effectively telling iOS to take over the handling of this request, and the OS will continue to do so even if your app is suspended (or later terminated during its natural lifecycle).
But this is much more complicated than the first approach I outlined, and you lose much of the ease and elegance of Alamofire in the process. You can contort yourself to do it (see https://stackoverflow.com/a/26542755/1271826 for an example), but it is far from the obvious and intuitive interface that you're used to with Alamofire. For example, you cannot use the simple response/responseJSON completion handlers. You can only use download/upload tasks (no data tasks). You have to write code to handle the OS restarting your app to tell you that the network request was sent (even if you're not doing anything meaningful with the response). Etc.
But the advantage of this more complicated approach is that it is more robust. There's no 30-second limit on this process. The OS will take care of sending the request on your behalf whenever connectivity is reestablished. Your app may even have been terminated by that point, and the OS will still send the request on your behalf.
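For comparison, the bare bones of the background-URLSession approach look roughly like this (the identifier, URL, and file handling are placeholders, and a real implementation also needs a session delegate plus the AppDelegate's handleEventsForBackgroundURLSession callback):
import Foundation

// A background session hands the transfer to the OS, which carries it out
// even if the app is later suspended or terminated by the system.
let config = URLSessionConfiguration.background(withIdentifier: "com.example.leaving-ping")
let session = URLSession(configuration: config, delegate: nil, delegateQueue: nil)
var request = URLRequest(url: URL(string: "https://example.com/api/leaving")!)
request.httpMethod = "POST"
// Background sessions only support upload/download tasks, and the upload
// body must come from a file rather than from memory.
let bodyFile = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("payload.json")
try? Data("{}".utf8).write(to: bodyFile)
session.uploadTask(with: request, fromFile: bodyFile).resume()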
Note that neither of these approaches can handle a "force-quit" (e.g. the user double-taps the home button and swipes up to terminate the app). They only handle the normal, graceful case of the user leaving the app to go do something else.

Suspending an already executing task in NSOperationQueue

I have a problem suspending the task currently being executed. I have tried setting the NSOperationQueue's setSuspended=YES for pausing and setSuspended=NO for resuming the process.
According to the Apple docs, I cannot suspend an already executing task:
If you want to issue a temporary halt to the execution of operations, you can suspend the corresponding operation queue using the setSuspended: method. Suspending a queue does not cause already executing operations to pause in the middle of their tasks. It simply prevents new operations from being scheduled for execution. You might suspend a queue in response to a user request to pause any ongoing work, because the expectation is that the user might eventually want to resume that work.
My app needs to suspend a time-consuming upload operation when the internet becomes unavailable and resume the same operation once the internet is available again. Is there any workaround for this, or do I just need to restart the currently executing task from zero?
I think you need to start from zero; otherwise two problems arise. If you resume the current upload, you cannot be sure that no packets have been missed. At the same time, if the connection only becomes available again after a long period of time, the server may have deleted the data you uploaded previously because of the incomplete operation.
Whether or not you can resume or pause an operation queue is not your issue here...
If it worked the way you imagine (and it doesn't), by the time you got back to servicing the TCP connection it might very well be in a bad state: it could have timed out, been closed remotely...
You will want to find out what your server supports and use the facilities of a REST (or similar) service to resume a stalled upload on a brand new, fresh connection.
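If the server does support resuming, the client side looks roughly like this (the status endpoint, its plain-text byte-count response, and the Content-Range usage are all hypothetical; check what your backend actually offers):
import Foundation

// Hypothetical resume flow: ask the server how many bytes it already has,
// then upload only the remainder over a fresh connection.
func resumeUpload(of fileURL: URL, to uploadURL: URL, statusURL: URL,
                  using session: URLSession = .shared) {
    session.dataTask(with: statusURL) { data, _, _ in
        // Assume the status endpoint returns the received byte count as plain text.
        let alreadyReceived = data.flatMap { Int(String(decoding: $0, as: UTF8.self)) } ?? 0
        guard let fileData = try? Data(contentsOf: fileURL),
              alreadyReceived < fileData.count else { return }
        let remainder = fileData.subdata(in: alreadyReceived..<fileData.count)
        var request = URLRequest(url: uploadURL)
        request.httpMethod = "PUT"
        // Tell the server which slice of the file this request carries.
        request.setValue("bytes \(alreadyReceived)-\(fileData.count - 1)/\(fileData.count)",
                         forHTTPHeaderField: "Content-Range")
        session.uploadTask(with: request, from: remainder).resume()
    }.resume()
}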
If you haven't yet, print out this and put it on the walls of your cube, make t-shirts for your family members to wear... maybe add it as a screensaver?
