How can I make a method run in the background ASP MVC - asp.net-mvc

I have a particularly long-running method that I need to execute from my controller. The method is in its own model. I am using an async controller, and I have the method set up with the AsyncFunc library to make it asynchronous. I have also tried invoking it in its own process. I want the controller to go ahead and return a view so the user can continue doing other things, since the method will notify the user via e-mail when it completes or encounters any errors.
The problem is that even though it is an asynchronous method, the controller will not move forward to return the view until the process is done (15+ minutes), and if you navigate to a different page the method stops executing.
So how can I get the method to execute as a worker and free up the controller?
Any help would be greatly appreciated.
all the best,
Chase Q, Aucoin

Use ThreadPool.QueueUserWorkItem() as a fire-and-forget approach in the ASPX page.
Do the long-running work in the WaitCallback you pass to QueueUserWorkItem.
When the work is complete, that WaitCallback can send an e-mail, or whatever else it needs to do.
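For example, a minimal fire-and-forget sketch (the controller name, DoLongRunningWork, and SendMail are placeholders, not from the question):

using System;
using System.Threading;
using System.Web.Mvc;

public class ReportController : Controller
{
    public ActionResult Start()
    {
        // Queue the long-running work on the thread pool and return the view immediately.
        ThreadPool.QueueUserWorkItem(state =>
        {
            try
            {
                DoLongRunningWork();                        // the 15+ minute job
                SendMail("Your report finished successfully.");
            }
            catch (Exception ex)
            {
                SendMail("Your report failed: " + ex.Message);
            }
        });

        return View("Started");
    }

    private static void DoLongRunningWork() { /* ... */ }

    private static void SendMail(string body) { /* e.g. System.Net.Mail.SmtpClient */ }
}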
You need to take care to handle the case where w3wp.exe is stopped during the 15-minute run. What will you do if the work is 2/3 complete? Some options are making the work restartable, or simply allowing the interrupted work to be forgotten.
Making it restartable might mean that when w3wp.exe restarts, your ASP.NET logic makes sure to begin again any work that was interrupted. It might mean that your ASP.NET logic sets "syncpoints" so that it knows where to restart.
If you want the restartable option, you might think about Workflow, which is specifically designed for this purpose - maintaining state of long-running workflows, restarting automatically, and so on. If you use Workflow, you can set it to run asynchronously, and you may decide you do not need QueueUserWorkItem.
see also:
Moving a time taking process away from my asp.net application
the Workflow Foundation tag

This will help > http://msdn.microsoft.com/en-us/library/ms227433.aspx
It is the standard way of running a background process on the server in the .NET stack.

I don't know why, but I remain convinced that this should not be done. Executing background threads in ASP.NET smells. You will also steal threads from the ASP.NET thread pool, which is controlled by IIS. IIS can decide that something is wrong with your worker process and restart it at any time just to keep memory consumption, processing time, or thread usage low. If you need background logic, create a custom Windows (NT) service and call the process on that service via either old-style .NET Remoting or WCF.
By the way, the approach I described is used frequently in commercial applications, and those that don't use it often self-host the whole web server.
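A minimal sketch of that design, with illustrative names (ILongRunningWork, the net.tcp address, and so on are assumptions, not from the answer): the work runs in a self-hosted WCF service inside a Windows service, and the web application just makes a one-way call to it.

using System;
using System.ServiceModel;
using System.ServiceProcess;
using System.Threading.Tasks;

[ServiceContract]
public interface ILongRunningWork
{
    [OperationContract(IsOneWay = true)]
    void StartWork(string jobId);
}

public class LongRunningWork : ILongRunningWork
{
    public void StartWork(string jobId)
    {
        // Runs inside the service process, outside the IIS worker process,
        // so an app pool recycle does not kill it.
        Task.Factory.StartNew(() => { /* the long-running job, then notify by e-mail */ });
    }
}

public class WorkerWindowsService : ServiceBase
{
    private ServiceHost host;

    protected override void OnStart(string[] args)
    {
        host = new ServiceHost(typeof(LongRunningWork),
            new Uri("net.tcp://localhost:8523/LongRunningWork"));
        host.AddServiceEndpoint(typeof(ILongRunningWork), new NetTcpBinding(), "");
        host.Open();
    }

    protected override void OnStop()
    {
        host.Close();
    }
}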

Related

C# 5 .NET MVC long async task, progress report and cancel globally

I use ASP.NET MVC 5 and I have a long-running action which has to poll web services, process the data, and store it in a database.
For that I want to use the TPL to start the task asynchronously.
But I wonder how to do three things:
I want to report the progress of this task. For this I'm thinking about SignalR.
I want to be able to leave the page where I start this task and still see its progress across the website (from a panel on the left, but that part is fine).
And I want to be able to cancel this task globally (from my panel on the left).
I know a fair amount about all of the technologies involved, but I'm not sure about the best way to achieve this.
Can someone help me with the best solution?
The fact that you want to run long running work while the user can navigate away from the page that initiates the work means that you need to run this work "in the background". It cannot be performed as part of a regular HTTP request because the user might cancel his request at any time by navigating away or closing the browser. In fact this seems to be a key scenario for you.
Background work in ASP.NET is dangerous. You can certainly pull it off, but it is not easy to get right. Also, worker processes can exit for many reasons (app pool recycle, deployment, machine reboot, machine failure, a StackOverflowException or OutOfMemoryException on an unrelated thread). So make sure your long-running work tolerates being aborted mid-way. You can reduce the likelihood that this happens but never exclude the possibility.
You can make your code safe in the face of arbitrary termination by wrapping all work in a transaction. This of course only works if you don't cause non-transacted side-effects like web-service calls that change state. It is not possible to give a general answer here because achieving safety in the presence of arbitrary termination depends highly on the concrete work to be done.
Here's a possible architecture that I have used in the past:
When a job comes in you write all necessary input data to a database table and report success to the client.
You need a way to start a worker to work on that job. You could start a task immediately for that. You also need a periodic check that looks for unstarted work in case the app exits after having added the work item but before starting a task for it. Have the Windows task scheduler call a secret URL in your app once per minute that does this.
When you start working on a job you mark that job as running so that it is not accidentally picked up a second time. Work on the job, write the results, and mark it as done, all in a single transaction; when your process happens to exit mid-way, the database will roll back all data involved (see the sketch after this list).
Write job progress to a separate table row on a separate connection and separate transaction. The browser can poll the server for progress information. You could also use SignalR but I don't have experience with that and I expect it would be hard to get it to resume progress reporting in the presence of arbitrary termination.
Cancellation would be done by setting a cancel flag in the progress information row. The app needs to poll that flag.
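As a rough sketch of the job-claiming step (the Jobs table, its columns, and the status values are illustrative assumptions, not from the answer): the worker claims one pending job, does the work, and commits everything in a single transaction, so an aborted process leaves the job in its pending state.

using System.Data.SqlClient;

class JobWorker
{
    private readonly string connectionString;

    public JobWorker(string connectionString)
    {
        this.connectionString = connectionString;
    }

    public void ProcessOnePendingJob()
    {
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (var transaction = connection.BeginTransaction())
            {
                // Claim one unstarted job and mark it running so it is not picked up twice.
                // UPDLOCK/READPAST keep concurrent workers from claiming the same row.
                var claim = new SqlCommand(
                    @"UPDATE TOP (1) Jobs WITH (UPDLOCK, READPAST)
                      SET Status = 'Running'
                      OUTPUT inserted.Id, inserted.Payload
                      WHERE Status = 'Pending'",
                    connection, transaction);

                int? jobId = null;
                string payload = null;
                using (var reader = claim.ExecuteReader())
                {
                    if (reader.Read())
                    {
                        jobId = reader.GetInt32(0);
                        payload = reader.GetString(1);
                    }
                }
                if (jobId == null) { transaction.Rollback(); return; }

                string result = DoWork(payload);   // the actual long-running work

                var finish = new SqlCommand(
                    "UPDATE Jobs SET Status = 'Done', Result = @result WHERE Id = @id",
                    connection, transaction);
                finish.Parameters.AddWithValue("@result", result);
                finish.Parameters.AddWithValue("@id", jobId.Value);
                finish.ExecuteNonQuery();

                // If the process dies before this point the transaction never commits
                // and the row reverts to 'Pending', so the periodic check picks it up again.
                transaction.Commit();
            }
        }
    }

    private static string DoWork(string payload)
    {
        return "processed: " + payload;   // placeholder for the real processing
    }
}

Progress and cancellation would go through a separate connection and transaction, as described above, so they stay visible while the job transaction is still open.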
Maybe you can make use of message queueing for job processing, but I'm always wary of using it. To process a message in a transacted way you need MSDTC, which is unsupported with many high-availability solutions for SQL Server.
You might think that this architecture is not very sophisticated. It makes use of polling for lots of things. Polling is a primitive technique but it works quite well. It is reliable and well-understood. It has a simple concurrency model.
If you can assume that your application never exits at inopportune times the architecture would be much simpler. But this cannot be assumed. You cannot assume that there will be no deployments during work hours and that there will be no bugs leading to crashes.
Even though running long tasks on an HTTP worker is a bad idea, I have made a small example of how to manage it with SignalR:
Inside this example you can:
Start a task
See task progress
Cancel the task
It's based on:
Twitter Bootstrap
Knockout.js
SignalR
C# 5.0 async/await with CancellationToken and IProgress
You can find the source of this example here :
https://github.com/dragouf/SignalR.Progress
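The core of it is the pattern below, shown here as a minimal sketch (not copied from the repository): an async method that reports progress through IProgress<T> and honors a CancellationToken, which a SignalR hub can then relay to the browser.

using System;
using System.Threading;
using System.Threading.Tasks;

public class LongJob
{
    public async Task RunAsync(IProgress<int> progress, CancellationToken cancellationToken)
    {
        for (int step = 1; step <= 100; step++)
        {
            // Cancelling globally means signalling this token from anywhere in the app.
            cancellationToken.ThrowIfCancellationRequested();

            // Simulate one unit of work (poll a web service, store data, ...).
            await Task.Delay(100, cancellationToken);

            // Report percentage complete; a SignalR hub can forward this to the panel.
            progress.Report(step);
        }
    }
}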

Waiting for applications to finish loading [duplicate]

I have an application which needs to run several other applications in a chain. I am running them via ShellExecuteEx. The order in which each of the apps runs is very important because they are dependent on each other. For example:
Start(App1);
If App1.IsRunning then
Start(App2);
If App2.IsRunning then
Start(App3);
.........................
If App(N-1).IsRunning then
Start(App(N));
Everything works fine but there is one possible problem:
ShellExecuteEx starts the application and returns almost immediately. The problem arises when, for example, App1 has started properly but has not finished some internal tasks and is not yet ready to use. ShellExecuteEx is already starting App2, which depends on App1, and App2 won't start properly because it needs a fully initialized App1.
Please note, that I don't want to wait for App(N-1) to finish and then start AppN.
I don't know if this is possible to solve with ShellExecuteEx, I've tried to use
SEInfo.fMask := SEE_MASK_NOCLOSEPROCESS or SEE_MASK_NOASYNC;
but without any effect.
After starting the AppN application I have a handle to its process. If I assume that the application is initialized after its main window is created (all of the apps have a window), can I somehow put a hook on its message queue and wait until WM_CREATE appears, or maybe WM_ACTIVATE? In the presence of such a message my application would know that it can move on.
It's just an idea. However, I don't know how to set up such a hook, so if you could help me with this, or you have a better idea, that would be great. :)
Also, the solution must work on Windows XP and above.
Thanks for your time.
Edited
@Cosmic Prund: I don't understand why you deleted your answer? I might try your idea...
You can probably achieve what you need by calling WaitForInputIdle() on each process handle returned by ShellExecute().
Waits until the specified process has finished processing its initial input and is waiting for user input with no input pending, or until the time-out interval has elapsed.
If your application has some custom initialization logic that doesn't run on the UI thread, then WaitForInputIdle might not help. In that case you need a mechanism for the previous app to signal that it has finished initializing.
For signaling you can use named pipes, sockets, some RPC mechanism or a simple file based lock.
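For illustration only (the question uses Delphi and ShellExecuteEx, not .NET), the same WaitForInputIdle pattern looks like this through the .NET Process wrapper; the executable names are placeholders:

using System.Diagnostics;

class ChainLauncher
{
    static void Main()
    {
        string[] apps = { "App1.exe", "App2.exe", "App3.exe" };

        foreach (string app in apps)
        {
            Process process = Process.Start(app);

            // Blocks until the process has created its message queue and is idle
            // waiting for input, or until the 30 second timeout elapses.
            // This only works for applications that have a UI thread.
            if (!process.WaitForInputIdle(30000))
            {
                break;   // the previous app never became ready; stop the chain
            }
        }
    }
}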
You can always use IPC and interprocess synchronization to make your applications communicate with (and wait for, if needed) each other, as long as you write both applications.

Ruby/Rails synchronous job manager

Hi,
I'm going to set up a Rails website where, after some initial user input, some heavy calculations are done (via a C extension to Ruby, using multithreading). As these calculations are going to consume almost all CPU time (and memory too), there should never be more than one calculation running at a time. Also, I can't use (asynchronous) background jobs (like with Delayed Job) because Rails has to show the results of that calculation, and the site should work without JavaScript.
So I suppose I need a separate process where all Rails instances have to queue their calculation requests and wait for the answer (maybe an error message if the queue is full), a kind of synchronous job manager.
Does anyone know if there is a gem/plugin with such functionality?
(Nanite seemed pretty cool to me, but it seems to be asynchronous only, so the Rails instances would not know when the calculation is finished. Is that correct?)
Another idea is to write my own using distributed Ruby (DRb), but why reinvent the wheel if it already exists?
Any help would be appreciated!
EDIT: Because of zaius's tips I think I will be able to do this asynchronously, so I'm going to try Resque.
Ruby has mutexes / semaphores.
http://www.ruby-doc.org/core/classes/Mutex.html
You can use a semaphore to make sure only one resource intensive process is happening at the same time.
http://en.wikipedia.org/wiki/Mutex
http://en.wikipedia.org/wiki/Semaphore_(programming)
However, the idea of blocking a front end process while other tasks finish doesn't seem right to me. If I was doing this, I would use a background worker, and then use a page (or an iframe) with the refresh meta tag to continuously check on the progress.
http://en.wikipedia.org/wiki/Meta_refresh
That way, you can use the same code for both javascript enabled and disabled clients. And your web app threads aren't blocking.
If you have a separate process, then you have a background job... so either you can have it or you can't...
What I have done is have the website write the request params to a database. Then a separate process looks for pending requests in the database - using the daemons gem. It does the work and writes the results back to the database.
The website then polls the database until the results are ready and then displays them.
Although I use javascript to make it do the polling.
If you really can't use JavaScript, then it seems you need to either do the work in the web request thread or make that thread wait for the background thread to finish.
To make the web request thread wait, just run a loop in it, checking the database until the reply is saved back into it. Once it's there, you can complete the request.
HTH, chris

How are IIS requests parallelized when using COMET?

I have an ASP.NET MVC 2 Beta application where I need to block incoming requests for a specific action until I have some data available to return or just release the request after 30 seconds with no new data available.
In order to accomplish this, I'm using AutoResetEvent.WaitOne(30000);
The big issue is that IIS does not seem to be accepting any new requests while the thread is blocked at the WaitOne instruction. New requests hang until the thread is released.
I need to be able to parallelize the requests while still keeping the WaitOne behavior.
Async handlers are what you're looking for. If you're building a comet solution, you may want to check out our .NET implementation of a comet server here; it'll save you some time. If you want to roll your own, you'll definitely need to use async handlers to avoid hitting upper concurrency limits by the time you get past 60 or 70 users, but even with async handlers you'll still have to do some fancy footwork. Basically, you're still going to hit some upper limits in the thread pool unless you hand off the requests to a bounded thread pool that can manage all the incoming requests for you.
Good luck!
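As a minimal sketch of what an async handler looks like in MVC 2 (the controller and the MessageStore stand-in are illustrative assumptions): the action registers a wait on an event and releases the request thread, and the completion method runs when data arrives or the 30 second timeout elapses.

using System.Threading;
using System.Web.Mvc;

public static class MessageStore
{
    public static readonly AutoResetEvent NewDataEvent = new AutoResetEvent(false);
    public static object TakeData() { return "new data"; }   // placeholder
}

public class MessagesController : AsyncController
{
    public void PollAsync()
    {
        AsyncManager.OutstandingOperations.Increment();

        // No request thread is blocked while waiting on the handle.
        ThreadPool.RegisterWaitForSingleObject(
            MessageStore.NewDataEvent,
            (state, timedOut) =>
            {
                AsyncManager.Parameters["data"] = timedOut ? null : MessageStore.TakeData();
                AsyncManager.OutstandingOperations.Decrement();
            },
            null,
            30000,
            true);   // execute the callback only once

        // The action returns here; the request completes later via PollCompleted.
    }

    public ActionResult PollCompleted(object data)
    {
        return Json(new { data }, JsonRequestBehavior.AllowGet);
    }
}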
You should not be blocking incoming requests at all. If the data you need are not ready, then return an empty response, or perhaps return an error code.
For a web application it is more advisable (though not a hard rule) to return a message telling the users to retry later, for whatever reason you want to give.
Stalling/blocking the requests by 'waiting' doesn't really help much, as the wait is nondeterministic, unless of course you have a mechanism to make it so.
I do not know the nature/context/traffic pattern of your website. 30 seconds can be a number that works for you. Perhaps my points above are not really relevant, just my 2 cents.
Actually, it turns out that this behavior only happens with ASP.NET MVC 2 Beta. I had this working fine with MVC 2 Preview 2 and rolled back to this version to re-test and confirmed that the application worked fine with that version.
Now, the question is: Why am I seeing this different behavior between these two MVC release versions, and what is the correct behavior I should expect to get in this scenario?

Any better timer in ASP.NET?

I used System.Timers.Timer in Global.asax in ASP.NET to set a timer for scheduling the execution of a function, let's say transferMoney().
But it seems that this timer might stop unexpectedly after several hours, and this causes all the actions to be left pending.
I want to know whether there are any better methods to set up a timer in ASP.NET MVC 1.0.
Thanks in advance!
It might just be because the application got recycled. Global.asax is not really the right place to do long-running tasks, because if your AppDomain gets recycled your timer will die. I suggest making a Windows service for the job instead.
Edit: Well, it's fairly easy to create a Windows service project in Visual Studio. Just go to [File] > [Add] > [New Project...] > [Windows] > [Windows Service] and you will get the stub code for the project.
It's hard to come up with a complete example so I suggest you google it. ;) There are tons of samples out there for you to look at.
This article on CodeProject seems to be a good introduction to Windows Services.
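As a minimal sketch of that approach (the service and method names are illustrative), the stub you get can host a System.Timers.Timer that keeps running regardless of IIS recycling:

using System.ServiceProcess;
using System.Timers;

public class TransferMoneyService : ServiceBase
{
    private Timer timer;

    protected override void OnStart(string[] args)
    {
        timer = new Timer(60 * 60 * 1000);               // fire once an hour
        timer.Elapsed += (sender, e) => TransferMoney();
        timer.Start();
    }

    protected override void OnStop()
    {
        timer.Stop();
        timer.Dispose();
    }

    private static void TransferMoney()
    {
        // The scheduled work goes here; it is unaffected by AppDomain recycles in the web app.
    }

    public static void Main()
    {
        ServiceBase.Run(new TransferMoneyService());
    }
}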
Any timer you use in ASP.NET apps will eventually "terminate", but this is expected behavior due to process recycling.
The timer will never work reliably because IIS recycles the worker process regularly based on the application pool settings, so when it recycles, your timer gets destroyed and you need to recreate it.
You can put a check on whether the timer object is still available and, if not, create it again; using any other timer object will not work. But this still has a problem: if you don't get any web requests for a particular period of time, the application will still get shut down. Best is to set up a ping monitor somewhere else that can keep your website alive.
You can't reliably run a timer in ASP.NET. If there are no requests coming in, the IIS can shut down the application, and it will not start until the next request arrives.
Why do you think that you need a timer? In most web applications this is not needed at all to do periodical updates unless they depend on an external source.
If you are just moving data around inside your application, the actual transactions don't have to happen at an exact interval; you only have to calculate what the result would be if they had happened. Whenever a request comes in, you calculate how many transactions would have happened since the last request, and apply them to catch up to the current state.
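A minimal sketch of that idea (the Account class and the hourly rate are made up for illustration): instead of a timer, each request first catches the data up to "now" by applying however many scheduled transactions were missed.

using System;

public class Account
{
    public decimal Balance { get; set; }
    public DateTime LastProcessedUtc { get; set; }

    // Apply one scheduled transaction per elapsed hour since the last request touched this account.
    public void CatchUp(DateTime nowUtc)
    {
        int missedIntervals = (int)(nowUtc - LastProcessedUtc).TotalHours;
        for (int i = 0; i < missedIntervals; i++)
        {
            Balance += Balance * 0.0001m;   // whatever the periodic transaction would have done
        }
        LastProcessedUtc = LastProcessedUtc.AddHours(missedIntervals);
    }
}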
If your transactions rely on an external source, so that they actually have to run at a specific time, you simply can't do it with ASP.NET alone. You need an application that runs outside IIS, for example one started periodically by the Windows scheduler.
You could try System.Threading.Timer:
http://msdn.microsoft.com/en-us/library/system.threading.timer.aspx
