Basically what the title says: I want different workflows to be able to wait on the completion of shared tasks. For example, workflow 1 needs tasks A, B, and C completed, while workflow 2 needs tasks C, D, and E completed, before they move on to do other things. I know activities have unique IDs, so if workflow 2 tried to start "C" while workflow 1 had already started "C", it would get an ACTIVITY_ID_ALREADY_IN_USE error and would know not to start a duplicate copy of the activity. The problem is: how do I notify both workflows once C is complete?
Thanks
Which SDK are you using to implement the workflow? If you use the AWS Flow Framework for Java, the framework abstracts all this activity-ID bookkeeping away from you.
Both workflow 1 and workflow 2 will schedule activity C, and whenever the respective tasks are completed by the activity C worker, the corresponding workflow is informed.
I think the Flow Framework is available for Ruby as well.
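If you're not on the Flow Framework (it's Java-first, with the Ruby port mentioned above), here is a rough sketch of the worker side using the low-level AWS SDK for .NET, just to show how completion gets routed back to the right workflow. This is only an outline under my assumptions; the domain and task-list names are placeholders:

```csharp
// Hedged sketch: an activity C worker using the low-level AWS SDK for .NET.
// Each workflow execution that schedules activity C gets its own task;
// completing a task notifies only the execution that scheduled it.
using System.Threading.Tasks;
using Amazon.SimpleWorkflow;
using Amazon.SimpleWorkflow.Model;

class ActivityCWorker
{
    static async Task HandleOneTaskAsync(AmazonSimpleWorkflowClient swf)
    {
        // Workflow 1's C task and workflow 2's C task both arrive on this list,
        // but each carries a token tied to its own workflow execution.
        var poll = await swf.PollForActivityTaskAsync(new PollForActivityTaskRequest
        {
            Domain = "my-domain",                          // placeholder
            TaskList = new TaskList { Name = "activity-c" } // placeholder
        });
        if (string.IsNullOrEmpty(poll.ActivityTask?.TaskToken)) return; // long-poll timed out

        string result = DoTaskC(poll.ActivityTask.Input);

        // SWF records an ActivityTaskCompleted event in the history of the
        // execution that scheduled this task, which is how that workflow
        // (and only that workflow) learns that C is done.
        await swf.RespondActivityTaskCompletedAsync(new RespondActivityTaskCompletedRequest
        {
            TaskToken = poll.ActivityTask.TaskToken,
            Result = result
        });
    }

    static string DoTaskC(string input) => input; // placeholder for the real work
}
```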
There is a task assigned to Worker A; however, after spending some time on it, Worker A realizes it cannot be handled alone and needs to be transferred to Worker B.
How can we achieve this using Twilio Task Router?
First you have to understand the lifecycle of a Task.
When a Task is created, its first state is pending.
Then Twilio looks for a worker who has capacity to take the Task.
The Task is now reserved.
While a Task is reserved, it cannot be assigned to a new agent, because that would violate the Task lifecycle (https://www.twilio.com/docs/taskrouter/lifecycle-task-state).
If you are going to solve this problem, you have two options:
If you want a solution for the Twilio Flex platform, you can use the plugin available in the solutions library (https://www.twilio.com/docs/flex/solutions-library/chat-and-sms-transfers).
If you want to solve it with a backend solution, you have to:
1. Delete or complete the Task.
2. Create a new one with the same Task attributes to preserve the data in the conversation.
3. Create a new channel to connect the worker with the task user.
4. Assign the task to the WorkerSid of Worker B. Remember that you have to handle the case where Worker B has no capacity to receive a new Task.
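As a rough, non-authoritative sketch of those backend steps with the Twilio C# SDK (all SIDs and credentials are placeholders, and the channel creation and capacity check are left to you):

```csharp
// Hypothetical sketch of the backend transfer flow using the Twilio C# SDK.
using Twilio;
using Twilio.Rest.Taskrouter.V1.Workspace;

class TaskTransfer
{
    public static void TransferToWorkerB(string workspaceSid, string taskSid, string workerBSid)
    {
        TwilioClient.Init("ACCOUNT_SID", "AUTH_TOKEN"); // your credentials

        // Fetch the original task so its attributes (the conversation data) survive.
        var original = TaskResource.Fetch(pathWorkspaceSid: workspaceSid, pathSid: taskSid);

        // Step 1: complete (or delete) the original task to end its lifecycle.
        TaskResource.Update(
            pathWorkspaceSid: workspaceSid,
            pathSid: taskSid,
            assignmentStatus: TaskResource.StatusEnum.Completed);

        // Step 2: create a replacement task with the same attributes.
        TaskResource.Create(
            pathWorkspaceSid: workspaceSid,
            attributes: original.Attributes,
            workflowSid: original.WorkflowSid);

        // Steps 3 and 4: creating the new channel and steering the task to
        // Worker B (e.g. via an attribute your workflow routes on) are left out
        // here; check Worker B's capacity before assigning.
    }
}
```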
I'm attempting to use Azure Durable Tasks to orchestrate some microservices, but I'm running into a small gap in understanding how task hubs work, as well as how to coordinate the projects correctly.
I'm trying to create a main orchestrator that is in charge of kicking off sub orchestrations to do the actual work. Below is a diagram of what I'm trying to achieve.
The idea is that each .NET project will be able to scale independently of the others, so if .NET project 2 were under a lot of load, I could scale that project alone and not have to worry about the other two. The problem I'm running into: from what I understand, the task hub queue is shared by all the services, so there is no way to have each process focus only on its own work. Each project can see everything in the queue, and one project may dequeue a message intended for project 2. Is this correct?
From reading the documentation, it doesn't seem clear that I can send project 2 its sub-orchestration messages as well as send project 3 its specific orchestration.
Am I thinking about this problem incorrectly, is there a different way I might want to approach this?
What you want cannot be achieved.
As of now, Azure Functions only allows orchestrator functions to call activity and sub-orchestrator functions that exist in the same function app. The main reason is a technical one: queues within a task hub are shared across all functions, so there's no way to guarantee that a message intended for FunctionAppA does not get picked up by FunctionAppB.
If cross-app communication is required, the correct method is to use HTTP or queues between the function apps.
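For example, here is a hedged sketch of the HTTP route, where the main orchestrator reaches Project 2 through an activity function (the URL and function names are placeholders, not anything from your setup):

```csharp
// Sketch: Project 1's orchestrator delegates work to Project 2 over HTTP
// instead of a cross-app sub-orchestration, so each app's task hub stays private.
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class MainOrchestrator
{
    [FunctionName("MainOrchestrator")]
    public static async Task RunOrchestrator(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // The HTTP call lives in an activity, not in the orchestrator itself,
        // to keep the orchestrator deterministic.
        await context.CallActivityAsync("StartProject2Work", "payload-for-project-2");
    }

    private static readonly HttpClient Http = new HttpClient();

    [FunctionName("StartProject2Work")]
    public static async Task StartProject2Work([ActivityTrigger] string payload)
    {
        // POST to the HTTP trigger that starts Project 2's own orchestration.
        await Http.PostAsync(
            "https://project2.azurewebsites.net/api/orchestrators/Project2Orchestrator", // placeholder URL
            new StringContent(payload));
    }
}
```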
I have a requirement in my MVC app.
I have an export-to-Excel feature that takes about 3 minutes (the user clicks an export button and waits).
This export downloads an Excel file with multiple worksheets, built after applying certain rules to the data.
These rules involve data manipulation plus applying colors to the cells of certain columns.
In order to avoid the wait time, I was asked to develop code within the MVC app that can run like a scheduled job.
This job has to export the Excel file to a dedicated folder within the network at the scheduled time (once daily).
I was also asked to develop a web page within the app that has links to download these files.
Questions (any help would be appreciated):
I have chosen Quartz.NET to implement this requirement. It is an open-source library (to my limited knowledge) that provides the facility to schedule a job (a class developed in .NET). Is it the right choice, or would there be any implications in the future?
Is it really necessary to develop job-like code, or can another approach address this?
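For reference, the kind of Quartz.NET setup I have in mind looks roughly like this (just a sketch; ExcelExportJob stands in for my real export class, and the cron expression fires daily at 6 pm):

```csharp
// Rough sketch of a Quartz.NET (3.x-style) daily job.
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public class ExcelExportJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // Apply the rules, build the multi-worksheet workbook,
        // and write it to the dedicated network folder.
        return Task.CompletedTask;
    }
}

public static class SchedulerSetup
{
    public static async Task StartAsync()
    {
        var scheduler = await new StdSchedulerFactory().GetScheduler();
        await scheduler.Start();

        var job = JobBuilder.Create<ExcelExportJob>()
            .WithIdentity("daily-excel-export")
            .Build();

        // Fire once a day at 18:00.
        var trigger = TriggerBuilder.Create()
            .WithIdentity("daily-excel-export-trigger")
            .WithCronSchedule("0 0 18 * * ?")
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}
```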
I'm not very familiar with Quartz.net, but I do know that trying to run background/scheduled tasks from within the same process as the MVC application can be problematic.
Ref 1: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/
Ref 2: http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
Essentially, you can't guarantee that the process will complete correctly, due to how IIS handles app pools (which is where your MVC process runs, assuming you're hosting on IIS).
You mention running a scheduled task within your MVC app. Again, this is incorrect. Why can't you just slap a console app project into the solution and drive the code from there, then put it on the server and use the Windows Task Scheduler?
In terms of background tasks, the "correct" way to do this is to send a command from your MVC app to some sort of message queue, which can then ensure that the command doesn't get dropped. I've used RabbitMQ in the past (a middleware message broker). Perhaps this is the aim of Quartz.net.
This setup typically involves another app (for me, usually a console app run on the server) that receives the command message from the message queue and runs in its own process, entirely separate from MVC and thus free of the issues inherent in IIS app pools and background tasks.
A lot of work, really... one would think it'd be easier, but that's the surefire way to do it and maintain the integrity of the task to be run.
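To make the queue route concrete, here's a minimal sketch using the RabbitMQ .NET client; the host, queue name, and command format are all assumptions, and the consumer half would live in that separate console app:

```csharp
// Minimal sketch with the RabbitMQ.Client (6.x-style) API.
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class ExportCommands
{
    // Called from the MVC action: enqueue the command and return immediately.
    public static void Publish(string command)
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // placeholder host
        using var connection = factory.CreateConnection();
        using var channel = connection.CreateModel();

        // Durable queue so the command survives a broker restart.
        channel.QueueDeclare(queue: "excel-export", durable: true,
                             exclusive: false, autoDelete: false, arguments: null);
        channel.BasicPublish(exchange: "", routingKey: "excel-export",
                             basicProperties: null,
                             body: Encoding.UTF8.GetBytes(command));
    }

    // Lives in the separate console app, outside IIS.
    public static void Consume(IModel channel)
    {
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (_, ea) =>
        {
            var command = Encoding.UTF8.GetString(ea.Body.ToArray());
            // Run the long export here, then acknowledge.
            channel.BasicAck(ea.DeliveryTag, multiple: false);
        };
        channel.BasicConsume(queue: "excel-export", autoAck: false, consumer: consumer);
    }
}
```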
I have four servers that are registered for the same activity type. All four servers are constantly polling SWF. I start one workflow, and one of the nodes starts a processing routine. This routine takes an hour and consumes 80% of the server's CPU.
How do I make sure that the next workflow I start does not utilize this same server? And so on for the third and fourth workflows I start? Is there any logic I can put in my decider to do this?
I think this is better handled at the level of the activity worker. The basic idea is that after a poll returns an activity task, the next poll is not issued until that task is completed, so a busy server simply doesn't ask for more work. By monitoring the depth of the task list you can also support autoscaling of worker nodes if necessary.
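To illustrate, here's a rough sketch of such a worker loop with the AWS SDK for .NET; the domain and task-list names are placeholders, and the pending-count call is what you could feed into an autoscaling decision:

```csharp
// Hedged sketch: one-task-at-a-time worker plus a task-list depth check.
using System;
using System.Threading.Tasks;
using Amazon.SimpleWorkflow;
using Amazon.SimpleWorkflow.Model;

class SingleTaskWorker
{
    static async Task RunAsync(AmazonSimpleWorkflowClient swf)
    {
        while (true)
        {
            // Because this loop is sequential, the next poll is not issued
            // until the current task completes, so a server already running
            // the hour-long routine never picks up a second one.
            var poll = await swf.PollForActivityTaskAsync(new PollForActivityTaskRequest
            {
                Domain = "my-domain",                          // placeholder
                TaskList = new TaskList { Name = "heavy-jobs" } // placeholder
            });
            if (string.IsNullOrEmpty(poll.ActivityTask?.TaskToken)) continue; // poll timed out

            var result = RunHourLongRoutine(poll.ActivityTask.Input);

            await swf.RespondActivityTaskCompletedAsync(new RespondActivityTaskCompletedRequest
            {
                TaskToken = poll.ActivityTask.TaskToken,
                Result = result
            });

            // Task-list depth: if this stays high, add worker nodes.
            var backlog = await swf.CountPendingActivityTasksAsync(new CountPendingActivityTasksRequest
            {
                Domain = "my-domain",
                TaskList = new TaskList { Name = "heavy-jobs" }
            });
            Console.WriteLine($"Pending tasks: {backlog.Count}");
        }
    }

    static string RunHourLongRoutine(string input) => input; // placeholder for the real work
}
```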
Sorry for another non-programming question, but I'm using Quartz.NET, a scheduler for .NET applications, in a Windows Service that allows users to schedule transferring files that match a regular expression from various sources. For example, a user may schedule a job to occur every day at 6 pm that transfers files from a network path to an FTP server.
Adding jobs and managing them is done through an ASP.NET project, and I'm creating a dashboard to display useful info to the user. I have the following information on the dashboard so far:
Total number of jobs
Windows Service status
Time since scheduler active
I know it's a very general question, but what other snippets of info can I add to the dashboard? It's very sparse at the moment.
I've worked as a product manager on a few schedulers. Here are some common requirements for these types of things, but I urge you to talk to some target users to find out if they are applicable to your application.
The use cases:
1. Trying to identify if the jobs are running okay.
2. If jobs are not running okay, give the user clues as to the cause, and give them tools to debug and fix it.
General requirements:
1. Table with info on the last N jobs:
- Time started, time completed, completion status (success/failure), length of time, any errors, the user who scheduled the job, any dependencies this job has on other jobs or events, and the specific machine the job ran on (if in a cluster).
- It might be nice to include links in this dashlet that let you cancel a job that might be hung.
- Priority of the job (if you have priorities).
2. Compare all jobs: % succeeded, % failed, average time to complete a job.
3. Compare jobs by scheduling user: average time, % success, % failure.
This is by no means a comprehensive list; I'm just trying to give you a few ideas based on what I can remember off the top of my head.