I'm attempting to use Azure Durable Tasks to orchestrate some microservices, but I'm running into a gap in my understanding of how task hubs work and of how to coordinate the projects correctly.
I'm trying to create a main orchestrator that is in charge of kicking off sub-orchestrations to do the actual work. Below is a diagram of what I'm trying to achieve.
The idea is that each .NET project will be able to scale independently of the others, so if .NET project 2 were under heavy load I could scale that project alone and not have to worry about the other two. The problem I'm running into is that, from what I understand, the task hub queues are shared by all the services, so there is no way to have each process focus only on its own work: every project can see everything in the queue, and one project may dequeue a message intended for project 2. Is this correct?
From reading the documentation, it isn't clear how I could send project 2 its sub-orchestration messages while also sending project 3 its specific orchestration messages.
Am I thinking about this problem incorrectly? Is there a different way I should approach it?
What you want cannot be achieved.
As of now, Azure Functions only allows orchestrator functions to call activity and sub-orchestrator functions that exist in the same function app. The main reason is a technical one: the queues within a task hub are shared across all functions, so there is no way to guarantee that a message intended for FunctionAppA does not get picked up by FunctionAppB.
If cross-project communication is required, the correct method is to use HTTP calls or queues.
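As a rough illustration of the queue option: the main app writes a message to a storage queue, and the other function app starts its own orchestration, in its own task hub, when that message arrives. This is only a sketch against Durable Functions 2.x with the Storage queue bindings; the queue name and all function names here are made up.

```csharp
// Hypothetical sketch (Durable Functions 2.x + Azure Storage queue bindings).
// Queue name "project2-work" and all function names are made up for illustration.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;
using Microsoft.Extensions.Logging;

public static class CrossAppHandOff
{
    // Function App 1 (the "main orchestrator"): instead of calling a
    // sub-orchestration in another app, drop a message on a queue.
    [FunctionName("EnqueueProject2Work")]
    [return: Queue("project2-work")]
    public static string EnqueueProject2Work([ActivityTrigger] string workItem, ILogger log)
    {
        log.LogInformation("Handing work off to project 2: {workItem}", workItem);
        return workItem; // the return value is written to the queue
    }

    // Function App 2: a queue-triggered function starts its *own* orchestration
    // in its *own* task hub, so only this app ever dequeues this work.
    [FunctionName("StartProject2Orchestration")]
    public static async Task StartProject2Orchestration(
        [QueueTrigger("project2-work")] string workItem,
        [DurableClient] IDurableOrchestrationClient client,
        ILogger log)
    {
        string instanceId = await client.StartNewAsync(
            "Project2Orchestration", Guid.NewGuid().ToString(), workItem);
        log.LogInformation("Started orchestration {instanceId}", instanceId);
    }
}
```

Each app keeps its own task hub, so scaling project 2 independently works the way you described; the queue is the only shared contract between them.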
So I have a system that receives messages from devices; each message then goes through 3 different servers, and countless services run for each job. From an architecture perspective, what considerations are there in using Sidekiq to make my program async? Are there downsides to running sub-processes with Sidekiq? Any advice?
Architecture (system design) should be based on the problems you are trying to solve. If your services are designed around distinct business domains and they are compatible with asynchronous execution, then you can spawn sub-jobs for each service. If not, or if you need flexible transactions across services, then a job per request is the right choice. You may end up with both of these approaches in your system, depending on the requirements.
The upside of making your program async with Sidekiq is that it is easy and produces good reporting when an error occurs. The downside of using Sidekiq for this task is that there is a lot of overhead in creating and executing the jobs. This can become enough of a problem that it represents the majority of the resources used.
I have a requirement in my MVC app.
I have an export-to-Excel feature that takes about 3 minutes (the user clicks an export button and waits).
This export downloads an Excel file with multiple worksheets, produced after applying certain rules to the data.
These rules are data manipulations plus applying colors to the cells in certain columns.
In order to avoid the wait time, I was asked to develop code within the MVC app that can run like a scheduled job.
This job has to export the Excel file to a dedicated folder on the network at the scheduled time (once daily).
I was also asked to develop a web page within the app that has links to download these Excel files.
Questions (any help would be appreciated):
I have chosen Quartz.NET to implement this requirement. As far as I know, it is an open-source library that provides the facility to schedule a job (a class developed in .NET). Is it the right choice, or would there be any implications in the future?
Is it really necessary to develop job-like code, or can some other approach address this?
I'm not very familiar with Quartz.net, but I do know that trying to run background/scheduled tasks from within the same process as the MVC application can be problematic.
Ref 1: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/
Ref 2: http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
Essentially, you can't guarantee that the process will complete correctly, because of how IIS handles app pools (which is where your MVC process runs, assuming you're hosting on IIS).
You mention running a scheduled task within your MVC app. Again, this is the wrong place for it. Why not add a console app project to the solution and drive the code from there, then put it on the server and run it with the Windows Task Scheduler?
In terms of background tasks, the "correct" way to do this is to send a command from your MVC app to some sort of message queue, which can then ensure that the command doesn't get dropped. I've used RabbitMQ in the past (a middleware message broker). Perhaps this is the aim of Quartz.net.
This setup typically involves another app (for me, usually a console app run on the server) that receives the command message from the message queue and runs in its own process, entirely separate from MVC and thus free of the issues inherent in IIS app pools and background tasks.
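As a rough sketch of that shape, using the RabbitMQ.Client package: the MVC action only publishes a command, and a separate console app consumes it and does the slow export. The queue name, message shape, and the export call are made up, and the client API differs slightly between versions (e.g. whether ea.Body is a byte[] or a ReadOnlyMemory&lt;byte&gt;).

```csharp
// Hypothetical sketch of the "command over a queue" approach using RabbitMQ.Client.
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

public static class ExportCommands
{
    const string QueueName = "excel-export-commands";   // made-up queue name

    // Called from the MVC action: publish a command and return immediately.
    // (For real use you would reuse the connection rather than open one per call.)
    public static void Publish(string commandJson)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(QueueName, durable: true, exclusive: false, autoDelete: false);
            channel.BasicPublish("", QueueName, null, Encoding.UTF8.GetBytes(commandJson));
        }
    }

    // Runs in the separate console app: consume commands and do the slow export there.
    public static void Consume()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        var connection = factory.CreateConnection();
        var channel = connection.CreateModel();
        channel.QueueDeclare(QueueName, durable: true, exclusive: false, autoDelete: false);

        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (sender, ea) =>
        {
            var command = Encoding.UTF8.GetString(ea.Body.ToArray());
            // RunExport(command);   // the 3-minute Excel export happens here, not in IIS
            channel.BasicAck(ea.DeliveryTag, multiple: false);
        };
        channel.BasicConsume(QueueName, false, consumer);
    }
}
```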
It's a lot of work, really... one would think it'd be easier, but that's the surefire way to do it while maintaining the integrity of the task to be run.
I have a web service written in Cowboy, and I am planning to use RabbitMQ as the DB layer. So my Cowboy service will be one of the producers that write to the queue, and the consumer writes to the database. There are a couple more asynchronous tasks that will come from another service (not Cowboy).
Now the question is where these consumers should go. Should they be part of a single Erlang app, or should I create a separate Erlang app for all the consumers?
Any advice would be highly appreciated.
Since Erlang is not the exclusive producer, and since one can usually imagine the consumers running without knowledge of the producers, having separate applications is not a bad idea at all. You can have multiple top-level applications in a single Erlang release (that's what the dependencies are, really), so you can always keep all the code in the same repository (I usually have a top-level apps/ directory for these) and, if needed later on, split them out into separate repos.
Having them as separate applications also makes it easier to decide later to distribute the system across multiple Erlang nodes: just start the relevant producer applications on some nodes and the consumer application on others.
So while either way will probably work, separate apps is probably a cleaner design and keeps the door open for future expansion in a slightly nicer way.
I have an ASP.NET MVC 4 app hosted as an Azure web role. I want to do something that seems like it should be pretty standard: I want to create a function that I can call that initiates a VIP swap and raises an event (or calls a callback) when the VIP swap operation is done.
Just to add some context to the situation: My website implements a workflow that takes about an hour (or less) to complete. If I want to release a new version of the website code, it's convenient (i.e. much less "backward compatibility" code to write) to first let all of the current users complete the workflow so that the new code doesn't need to deal with data created by the previous version of the code. So a management function in my website would first poke a value into the database that disables new workflows; it would then wait until all current workflows are done; it would then call the "VIP Swap" routine; finally, when the VIP Swap routine signals its completion, it would poke the database value to re-enable new workflows.
I found the Microsoft documentation for how to programmatically initiate a VIP swap here:
http://msdn.microsoft.com/en-us/library/ee460814.aspx
The procedure involves POSTing to a magic URL with some required headers, then periodically performing a GET against another magic URL and checking the response code.
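To make the shape of it concrete, here is roughly what I understand the calls to look like, using HttpClient and a management certificate. The URLs, the x-ms-version value, and the XML element names are taken from my reading of the linked Swap Deployment docs and should be double-checked; this is a sketch, not a verified implementation.

```csharp
// Hypothetical sketch of the "Swap Deployment" + "Get Operation Status" calls.
// Subscription ID, service name, deployment names, and the x-ms-version value
// are placeholders; verify them against the linked documentation.

using System;
using System.Linq;
using System.Net.Http;
using System.Security.Cryptography.X509Certificates;
using System.Text;
using System.Threading.Tasks;
using System.Xml.Linq;

public static class VipSwap
{
    public static async Task SwapAndWaitAsync(
        string subscriptionId, string serviceName,
        string productionDeployment, string stagingDeployment,
        X509Certificate2 managementCert)
    {
        // Classic Service Management API authenticates with a management certificate.
        // (On older .NET Framework versions, WebRequestHandler exposes ClientCertificates instead.)
        var handler = new HttpClientHandler();
        handler.ClientCertificates.Add(managementCert);

        using (var http = new HttpClient(handler))
        {
            http.DefaultRequestHeaders.Add("x-ms-version", "2012-03-01"); // placeholder version

            // 1. POST the swap request.
            var body = new StringContent(
                $@"<Swap xmlns=""http://schemas.microsoft.com/windowsazure"">
                     <Production>{productionDeployment}</Production>
                     <SourceDeployment>{stagingDeployment}</SourceDeployment>
                   </Swap>",
                Encoding.UTF8, "application/xml");

            var swapUri = $"https://management.core.windows.net/{subscriptionId}/services/hostedservices/{serviceName}";
            var response = await http.PostAsync(swapUri, body);
            response.EnsureSuccessStatusCode();

            // 2. Poll the asynchronous operation until it finishes.
            var requestId = response.Headers.GetValues("x-ms-request-id").First();
            var statusUri = $"https://management.core.windows.net/{subscriptionId}/operations/{requestId}";

            while (true)
            {
                var statusXml = await http.GetStringAsync(statusUri);
                var status = XDocument.Parse(statusXml).Root
                    .Element(XName.Get("Status", "http://schemas.microsoft.com/windowsazure"))?.Value;

                if (status == "Succeeded") return;
                if (status == "Failed") throw new InvalidOperationException("VIP swap failed.");

                await Task.Delay(TimeSpan.FromSeconds(10));   // keep polling
            }
        }
    }
}
```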
The more I think about this, the more non-trivial it seems. In addition to the basic complexities of wiring up a background timer and completion notification, I don't know what complexities, if any, I might run into trying to do this stuff in the IIS environment. Can I even perform HTTP operations on a background thread? For that matter, will I run into complications just trying to use any of the half dozen or so different "do things in the background" mechanisms baked into .NET?
Any help or guidance will be greatly appreciated. In particular, I'd be ecstatic if someone could point me at a ready-to-go implementation of this function!
I don't think you will find an easy solution to this, as the fabric controller is set up to do some very fancy things without your involvement. Running hour-long workflows in a cloud computing environment, where an instance can be pulled out from underneath you (with a maximum of 5 minutes from the OnStopping event being called in which to clean up), requires that you do other work anyway to make sure that all of your tasks complete.
The simple question is: "What do you do if an instance goes down while workflows are still running?" Do you restart them, or are they lost? If they get lost then you don't care anyway, so killing workflows for an upgrade is equally unimportant. If you restart them, then use that same mechanism to decide whether a node is due to be shut down, and distribute the jobs accordingly. This pattern is eerily similar to the Hadoop JobTracker. Don't just run the workflows on any ol' instance; submit them to a (job tracker) service that decides what to do. The (job tracker) service can then use the Service Management API to scale up as many instances as you need running the version that you want, run workflows on the appropriate nodes, and shut them down when they are no longer needed or are outdated.
Unfortunately this may not be the simple solution that you are looking for, but something in your architecture needs to change, rather than trying to force PaaS to fit your current approach. Decompose your workloads, create loosely coupled services, design for failure, and consider the other usual cloud/distributed-computing practices. There is a reason why Hadoop is built the way it is, and it has a reputation for being able to get work done on a bunch of somewhat unreliable commodity hardware.
I am developing my first web application using ASP.Net MVC, and I am in a situation where I would like a background service to process status notifications outside of the application, not unlike the reputation/badge system on stackoverflow.
What is the best way to handle something like this? Is it even possible in a shared-hosting environment like GoDaddy, which I am using?
I don't need to communicate with the background worker directly, since I will be adding notification records to a database table with a column set to an "unprocessed" state. The worker will then just scan the table on a regular schedule and process whatever is ready.
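To make that concrete, this is roughly the kind of worker loop I have in mind (the table name, columns, and connection string are just placeholders):

```csharp
// Rough sketch of the polling worker described above. "Notifications", its
// columns, and the connection string are hypothetical.

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Threading;

public static class NotificationWorker
{
    public static void Run(string connectionString, CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            var pending = new List<Tuple<int, string>>();

            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();

                // Pick up unprocessed notifications.
                using (var select = new SqlCommand(
                    "SELECT Id, Payload FROM Notifications WHERE Status = 'unprocessed'", conn))
                using (var reader = select.ExecuteReader())
                {
                    while (reader.Read())
                        pending.Add(Tuple.Create(reader.GetInt32(0), reader.GetString(1)));
                }

                foreach (var item in pending)
                {
                    SendNotification(item.Item2);   // the actual notification work

                    using (var update = new SqlCommand(
                        "UPDATE Notifications SET Status = 'processed' WHERE Id = @id", conn))
                    {
                        update.Parameters.AddWithValue("@id", item.Item1);
                        update.ExecuteNonQuery();
                    }
                }
            }

            Thread.Sleep(TimeSpan.FromMinutes(1));  // poll on a regular schedule
        }
    }

    private static void SendNotification(string payload)
    {
        // e.g. send an email or record a badge award
    }
}
```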
Thanks for your advice.
Have you tried Quartz.NET? I think it may fit your needs.
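As a very rough sketch of what that looks like (Quartz.NET 3.x API; the job class and the 02:00 cron schedule are just for illustration, and per the other answers the scheduler is better hosted outside IIS, e.g. in a console app or a Windows service):

```csharp
// Minimal Quartz.NET sketch: a job scheduled to run once a day.
using System;
using System.Threading.Tasks;
using Quartz;
using Quartz.Impl;

public class ExcelExportJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // Run the long export here and write the file to the shared folder.
        Console.WriteLine($"Export started at {DateTime.Now}");
        return Task.CompletedTask;
    }
}

public static class SchedulerSetup
{
    public static async Task StartAsync()
    {
        IScheduler scheduler = await new StdSchedulerFactory().GetScheduler();
        await scheduler.Start();

        IJobDetail job = JobBuilder.Create<ExcelExportJob>()
            .WithIdentity("excelExport")
            .Build();

        // Cron fields: sec min hour day-of-month month day-of-week -> daily at 02:00.
        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("dailyExcelExport")
            .WithCronSchedule("0 0 2 * * ?")
            .Build();

        await scheduler.ScheduleJob(job, trigger);
    }
}
```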
Also take a look at this article: Simulate a Windows Service using ASP.NET to run scheduled jobs.
It explains a nice way to schedule operations with no external dependencies.
The idea is to use the cache timeout to control the schedule. I've implemented it successfully on a project which required regular temp-file cleaning. This cleanup is a bit heavy, so we moved it into a scheduled job (using the ASP.NET cache) to avoid having to deploy a scheduled task or a custom program.
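A sketch of that cache-expiration trick, assuming the pattern from the article above; the CleanTempFiles work item and the 60-minute interval are placeholders. Keep in mind that, as the other answers point out, an IIS app pool can still recycle or idle out, so the schedule is only approximate.

```csharp
// Sketch: use a cache item's expiration callback as a poor man's scheduler.
using System;
using System.Web;
using System.Web.Caching;

public static class CacheScheduler
{
    private const string CacheKey = "ScheduledCleanupToken";

    // Call this once, e.g. from Application_Start in Global.asax.
    public static void ScheduleNextRun()
    {
        HttpRuntime.Cache.Insert(
            CacheKey,
            DateTime.UtcNow,
            null,
            DateTime.UtcNow.AddMinutes(60),        // when this expires, the callback fires
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnCacheItemRemoved);
    }

    private static void OnCacheItemRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        try
        {
            CleanTempFiles();                      // the actual scheduled work (hypothetical)
        }
        finally
        {
            ScheduleNextRun();                     // re-insert the item to re-arm the "timer"
        }
    }

    private static void CleanTempFiles()
    {
        // delete old temp files here
    }
}
```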
To answer whether GoDaddy will support a separate service, you need to ask them.
However, there are a number of creative ways that you can "get around" this issue on shared hosting.
Have a secure page whose purpose is to execute your background work. You could then have a scheduled task on a machine under your control call that page at set intervals (see the sketch at the end of this answer).
Use a variation of the background worker thread answer from @safi. Your background worker thread could check whether another is already processing and stop, so that only one instance runs at a time.
If only one background task is enough for you, then use WebBackgrounder.
And this is the article with detailed explanation.
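Here is a minimal sketch of the "secure page" option above: an MVC action protected by a shared key that an external scheduled task (or a simple curl/PowerShell call) hits on a timer. The key name and the processing method are placeholders.

```csharp
// Sketch: an MVC action an external scheduler can call to trigger the background work.
using System.Configuration;
using System.Web.Mvc;

public class MaintenanceController : Controller
{
    // e.g. hit by a scheduled task: curl "https://yoursite/maintenance/run?key=..."
    [HttpGet]
    public ActionResult Run(string key)
    {
        // Compare against a secret stored in config so random visitors can't trigger the work.
        if (key != ConfigurationManager.AppSettings["MaintenanceKey"])
            return new HttpStatusCodeResult(403);

        ProcessPendingNotifications();   // hypothetical: scan the table and handle "unprocessed" rows
        return Content("done");
    }

    private void ProcessPendingNotifications()
    {
        // the actual background work goes here
    }
}
```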