Cleanup on termination of durable function - azure-durable-functions

Is it possible to somehow get notified that the durable function has been, or is about to be, terminated, so that it is possible to initiate cleanup of the already-completed activity functions, for example? In my case we're sending multiple requests to a subsystem, and we need to revoke or refund the orders if the durable function is terminated.

I don't believe there is any way to subscribe to termination events in Durable Functions, as the Durable Task Framework handles termination before user code is ever invoked.
One option, instead of using the explicit terminate API built into Durable Functions, is to listen for some CustomTerminate event within your orchestration, using a Task.WhenAny() approach whenever you schedule an activity or sub-orchestration. Then, if you ever receive this CustomTerminate event instead of the activity or sub-orchestration response, you can manually handle the cleanup at that point.
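The answer above frames this in C# (Task.WhenAny); as an illustration only, here is a minimal sketch of the same race using the JavaScript/TypeScript Durable Functions SDK. The activity names ("PlaceOrder", "RefundOrder") and the "CustomTerminate" event name are hypothetical:

```typescript
import * as df from "durable-functions";

// Sketch: race each activity against a CustomTerminate external event,
// and run compensation if the terminate signal wins the race.
const orchestrator = df.orchestrator(function* (context) {
  const orderId: string = context.df.getInput();

  const order = context.df.callActivity("PlaceOrder", orderId);       // hypothetical activity
  const terminate = context.df.waitForExternalEvent("CustomTerminate");

  const winner = yield context.df.Task.any([order, terminate]);

  if (winner === terminate) {
    // Compensate the already-placed order instead of relying on the
    // built-in terminate API, which gives user code no chance to run.
    yield context.df.callActivity("RefundOrder", orderId);            // hypothetical activity
    return "terminated and refunded";
  }
  return order.result;
});

export default orchestrator;
```

The caller would then raise the event via the orchestration client's raiseEvent(instanceId, "CustomTerminate") rather than calling the terminate API.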

Related

Does the entire code in the top level of the manifest v3 service worker run repeatedly every time it wakes up?

Tested: to avoid repeated execution of some code (for example, repeated execution of chrome.contextMenus.create produces
Unchecked runtime.lastError: Cannot create item with duplicate id
), it needs to be moved into chrome.runtime.onInstalled.addListener.
But some code (like chrome.action.onClicked.addListener), if moved into chrome.runtime.onInstalled.addListener, won't run on the next wakeup.
If chrome.action.onClicked.addListener is placed at the top level of the service worker, will the listener be added again every time the service worker wakes up? Will there be multiple duplicate listeners? Will the functions in both the newly added listener and the previously added listener be executed?
https://developer.chrome.com/docs/extensions/mv3/service_workers/ says:
A background service worker is loaded when it is needed, and unloaded when it goes idle. Some examples include:
The extension is first installed or updated to a new version.
The background page was listening for an event, and the event is dispatched.
A content script or other extension sends a message.
Another view in the extension, such as a popup, calls runtime.getBackgroundPage.
It says "unloaded when it goes idle": will the previously added listener be unloaded too? If so, how is the service worker awakened again?
Or is only the function inside the listener unloaded, with an empty listener shell kept just to awaken the service worker?
Yes, it reruns anew.
No, there'll be no duplicate listeners.
No multiple threads, no sleeping/suspending/resuming.
The confusion is caused by a rather inept description in the old version of the documentation, now it's rewritten. What actually happens is that after a certain timeout the service worker is simply terminated. It doesn't "unload" or "resume". It "terminates completely" and "starts fresh".
When it terminates, the JavaScript environment disappears (JS listeners, variables, everything).
When it's started by the browser in reaction to an event to which you subscribed via addListener in the previous run of the SW, your SW script runs in its entirety. Each addListener for a chrome event registers this listener internally. Then the event that woke the worker is dispatched to the listeners. This is why it is important to register the listeners synchronously in the first task of the event loop when the script starts. (The old documentation used the rather arcane V8 term "top-level" and oversimplified it to the need to declare the listeners in the global scope of the script, which is not mandatory: you can certainly do it inside a function call as long as that call is synchronous.)
The contextMenus API is different: the data is saved inside Chrome's internal preferences, so there's no need to recreate it on each run; doing it inside chrome.runtime.onInstalled is sufficient. Firefox doesn't save them yet, but I guess they will once they implement MV3.
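A minimal sketch of the two registration patterns just described (the menu id and title are illustrative):

```typescript
// Runs once per install/update; Chrome persists the menu item in its
// internal preferences, so it must not be recreated on every wakeup.
chrome.runtime.onInstalled.addListener(() => {
  chrome.contextMenus.create({ id: "refresh-item", title: "Refresh" });
});

// Runs synchronously at the top level on EVERY start of the service
// worker. There are no duplicates: the previous JS environment (and its
// listeners) was completely destroyed when the worker terminated, and
// Chrome dispatches the waking event to this freshly registered listener.
chrome.action.onClicked.addListener((tab) => {
  console.log("action clicked in tab", tab.id);
});
```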
P.S.
The lifetime duration is 30 seconds after the last incoming external event. Using a runtime port adds another 5 minutes to the timeout. Using native host messaging keeps the service worker alive indefinitely, but it is also possible to emulate a persistent service worker to a degree even without native messaging: more info.
As for the documentation line "Another view in the extension, such as a popup, calls runtime.getBackgroundPage": this is no longer true in MV3.

How can I implement throttling by the message value using MassTransit? (backend is SNS/SQS but flexible)

I'm interested in using MassTransit as the event bus to help me bust a cache, but I'm not sure how to properly throttle the service.
The situation
I have a .Net service that has a refreshCache(itemId) API which recomputes the cache for itemId. I want to call this whenever code in my organization modifies any data related to itemId.
However, due to legacy code, I may have 10 events for a given itemId emitted within the same second. Since the refreshCache(itemId) call is expensive, I'd prefer to only call it once every second or so per itemId.
For instance, imagine that I have 10 events emitted for item1 and then 1 event emitted for item2. I'd like refreshCache to be called twice, once with item1 and once with item2.
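For concreteness, here is a hypothetical in-process TypeScript sketch of those debounce-by-key semantics (leading-edge: fire immediately, then suppress repeats for the window). This is not a MassTransit or SQS feature, just a way to pin down the desired behavior:

```typescript
// Hypothetical sketch: collapse bursts of events for the same itemId into
// at most one refreshCache call per window, firing on the first event.
const suppressedUntil = new Map<string, number>();

function onItemChanged(itemId: string, windowMs = 1000): void {
  const now = Date.now();
  const until = suppressedUntil.get(itemId) ?? 0;
  if (now < until) return;              // still inside the window: drop it
  suppressedUntil.set(itemId, now + windowMs);
  refreshCache(itemId);                 // fires once per itemId per window
}

function refreshCache(itemId: string): void {
  // expensive recomputation lives elsewhere
}

// 10 rapid events for item1 plus 1 for item2 => refreshCache runs twice.
for (let i = 0; i < 10; i++) onItemChanged("item1");
onItemChanged("item2");
```

Note the trade-off: leading-edge firing is immediate but drops trailing events inside the window; a trailing-edge variant would catch the last write at the cost of delaying the call by one window.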
Trouble with MassTransit
I could send event messages that essentially are just itemId over SNS/SQS, and the .Net service could use a MassTransit consumer to listen to that SQS queue and call refreshCache for each message. Ideally, I can also throttle either in SNS/SQS or MassTransit.
I've read these docs: https://masstransit-project.com/advanced/middleware/rate-limiter.html and have tried to find the middleware in the code but wasn't able to locate it.
They seem to suggest that the rate limiting just delays the delivery of messages, which means that my refreshCache would get called 10 times with item1 before being called with item2. Instead, I'd prefer it be called once per item, ideally both immediately.
Similarly, it seems as if SNS and SQS can either rate-limit in-order delivery or throttle based on the queue, but not based on the contents of that queue. It would not be feasible for me to have a separate queue per itemId, as there will be 100,000+ distinct itemIds.
The Ask
Is what I'm trying to do possible in MassTransit? If not, is it possible via SQS? I'm also able to be creative with using RabbitMQ or adding in Lambdas, but would prefer to keep it simple.

RxSwift -- MainScheduler.instance vs MainScheduler.asyncInstance

What is the difference between using RxSwift's MainScheduler.instance and MainScheduler.asyncInstance within the context of observeOn?
asyncInstance guarantees asynchronous delivery of events whereas instance can deliver events synchronously if it’s already on the main thread.
As for why you would ever need to force asynchronous delivery when you’re already on the main thread: it’s fairly rare and I would typically try to avoid it, but sometimes you have a recursive reactive pipeline where one event triggers the delivery of a new event in the same pipeline. If this happens synchronously, it breaks the Rx contract and RxSwift will spit out a warning that you tried to deliver a second event before the first event finished. In this case you can observe on MainScheduler.asyncInstance to break the cycle.
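RxSwift's MainScheduler is Swift-specific, but the same sync-versus-async delivery distinction exists across Rx implementations. A small sketch using RxJS/TypeScript (the names here are RxJS analogues, not the RxSwift API):

```typescript
import { of, asyncScheduler } from "rxjs";
import { observeOn } from "rxjs/operators";

console.log("before subscribe");

// Synchronous delivery (analogous to MainScheduler.instance when already
// on the main thread): the callback runs inline, before subscribe() returns.
of(1).subscribe((n) => console.log("sync delivery:", n));

// Forced asynchronous delivery (analogous to MainScheduler.asyncInstance):
// the callback is deferred to a later task on the scheduler, which is what
// breaks a re-entrant cycle like the one described above.
of(2)
  .pipe(observeOn(asyncScheduler))
  .subscribe((n) => console.log("async delivery:", n));

console.log("after subscribe");
// Output order: before subscribe, sync delivery: 1, after subscribe, async delivery: 2
```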

How can I get a callback after a caller has been "enqueued"?

I want to run some code that takes a few seconds to complete (look up the appropriate agents and call them). If I do it inline, the queueing of the call is delayed by a few seconds and the caller hears silence.
The "action" callback only triggers after the caller leaves the queue, and if I do it in the "waitUrl" callback, the hold music is delayed.
Is there an elegant solution for this? Like some way to run the code async, or do it in a callback that won't affect the caller experience?
I guess I could use a 3rd party service (like Zapier, e.g. incoming webhook that calls a Twilio function from an outgoing webhook) to defer the long-running code, but I'd prefer to keep everything on the Twilio platform.
Twilio developer evangelist here.
As you've noted, there are several points at which Twilio makes requests to your application and gives you the opportunity to perform these actions. But in the context of a voice call, these webhooks are synchronous.
Asynchronous webhooks come in the form of the statusCallback, but these callbacks only occur for major events in the lifecycle of the call, such as queued (this is for when calls are initiated, not when they are enqueued), ringing, in-progress, completed, busy, failed, or no-answer.
For asynchronous actions you want to take in response to synchronous webhooks, you will need to set up an asynchronous call or pass the long-running action off to a job to be processed outside the synchronous call flow. There isn't anything inherent in Twilio to do this for you.
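A minimal TypeScript/Node sketch of that pattern: respond with the Enqueue TwiML immediately, and hand the slow agent lookup to deferred work. The queue name and the job mechanism are illustrative; in production the deferred work would go to a real job queue rather than setImmediate:

```typescript
import express from "express";
import twilio from "twilio";

const app = express();
app.use(express.urlencoded({ extended: false }));

app.post("/voice", (req, res) => {
  // Answer the webhook right away so the caller hears hold music
  // immediately instead of silence.
  const twiml = new twilio.twiml.VoiceResponse();
  twiml.enqueue("support"); // illustrative queue name

  res.type("text/xml").send(twiml.toString());

  // Defer the multi-second work past the synchronous webhook response.
  // setImmediate stands in for a proper background job (SQS, Bull, ...).
  setImmediate(() => lookupAndCallAgents(req.body.CallSid));
});

async function lookupAndCallAgents(callSid: string): Promise<void> {
  // hypothetical: find the available agents and dial them into the queue
}

app.listen(3000);
```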

Flow Framework: How does #Signal behave, and can it be used to perform dynamic tasks that change the execution flow in SWF?

Some of my observations from researching the Flow Framework are:
#Signal starts executing in the decider replay once the signal is received. The #Signal method is executed in all future replays of the same workflow (once a signal is received, the decider executes #Signal on each replay). #Asynchronous methods should not be long-running tasks, as the decider will schedule activity tasks only after all #Asynchronous methods have completed execution.
Are my observations correct? If yes, then what if, in the same workflow, I want a signal that performs some task and then stops executing in future replays? For example, a pause signal: a user might pause and resume a workflow multiple times.
Another problem: how does Flow handle the following kind of case? A decider times out, and meanwhile two events arrive: a workflow cancellation and an activity completion. How does the decider figure out that they are related and, if the cancellation is processed, not respond to the ActivityCompletedEvent?
It helps not to think about workflow behavior in terms of replay. Replay is just a mechanism for recovering workflow state; when workflow logic is written, it is not really visible, besides the requirement of determinism and that workflow code be asynchronous and non-blocking. So never think about replay when designing your workflow logic. Write it as if it were a locally executing asynchronous program.
With replay out of the way, #Signal is just a callback method that executes once per received signal. So if you invoke some action from the #Signal method, it is going to execute once.
As for the second question, it depends on the order in which the cancellation and the activity completion are received. If the cancellation is first, it is delivered to the workflow first, which might cause cancellation of the activity. Cancellation actually blocks waiting for the activity to cancel, and the completion of the activity (which is the next event) unblocks it. If the completion is first, the activity completes, and whatever follows it is cancelled by the next event. In the majority of cases the result is exactly the same: the activity completion is received, but all logic after it is cancelled.
