ASP.NET MVC 4: running a method non-stop without it slowing down or blocking the application - asp.net-mvc

After looking for several days for a clear answer on this, I decided to just ask it here myself.
At my company I am tasked with making a web application for gas and fire detection. It talks to a Modbus library over Modbus TCP/IP. This connection is working very well and it reads about once per second (although I can easily adjust this).
I've read a lot about AsyncControllers and async/await tasks in the ASP.NET MVC model.
The problem is that I have to keep this running non-stop. When my application starts for the first time (with the intention of never shutting it down, since it will be a control-room kind of app), it needs to start reading and keep on reading, writing this information into a database that the rest of the application works from. Since this is fire detection, the updates need to work all the time and almost instantly.
I was thinking of calling it in Global.asax so it runs at startup. But of course, when I call this method it just blocks my entire application. How can I make this work?
Any help would be greatly appreciated.
EDIT:
It seems to work by adding:
Task.Factory.StartNew(() =>
{
    ModbusDB.Main();
});
to Global.asax!
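For reference, a slightly more defensive variant of that quick fix marks the task as long-running and catches exceptions so an unhandled error doesn't silently kill the polling loop. This is only a sketch: ModbusDB.Main() and the one-second interval come from the question, while the cancellation token and the Trace logging call are assumptions.

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Web;

public class MvcApplication : HttpApplication
{
    private static readonly CancellationTokenSource PollingCts = new CancellationTokenSource();

    protected void Application_Start()
    {
        // Other MVC startup code (routes, bundles, ...) goes here.

        Task.Factory.StartNew(() =>
        {
            while (!PollingCts.Token.IsCancellationRequested)
            {
                try
                {
                    ModbusDB.Main(); // one polling cycle against the Modbus devices, as in the question
                }
                catch (Exception ex)
                {
                    System.Diagnostics.Trace.TraceError(ex.ToString()); // placeholder logging
                }

                Thread.Sleep(TimeSpan.FromSeconds(1)); // roughly one read per second
            }
        }, PollingCts.Token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
    }
}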

A web application isn't meant for this sort of "always-on" model. What you're essentially talking about is two applications (which can run on the same web server):
A Windows Service (or scheduled console application, whichever you want) which does the background work of interfacing with the external system and updating the application's database. This is the "always-on" part. It polls the external system, gets the data it needs from it, and updates the application database. Conversely, it can poll the application database, get the data it needs from that, and update the external system. Basically, it does the ongoing background stuff.
A Web Application which handles interaction with the users. This isn't "always-on" in the strictest sense. A web application is designed to receive a request and return a response; after that interaction, it's essentially done, just waiting for other requests. So this application would just interact with the application database and present the users with the interface they need. From the users' perspective, this application is connecting them to that external system, but from a technical perspective it's just a simple web application connecting to a simple database; something else handles the interaction with the external system.
This provides the added benefit of separating your concerns into distinct modules. If you ever need to build another application to interact with that external system (maybe some kind of system-tray-installed monitoring service on people's workstations) then all that application needs to do is interact with the database, not with the external system.
Conversely, if the external system ever changes then you only have to change that background service, not the application(s) that the users use to interact with it.
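As a rough illustration of the first piece, a minimal Windows Service wrapping that polling loop might look like the sketch below. ModbusDB.Main() is carried over from the question; the service class name and the one-second interval are assumptions.

using System;
using System.ServiceProcess;
using System.Threading;
using System.Threading.Tasks;

public class ModbusPollingService : ServiceBase   // hypothetical service name
{
    private readonly CancellationTokenSource _cts = new CancellationTokenSource();
    private Task _pollingTask;

    protected override void OnStart(string[] args)
    {
        _pollingTask = Task.Factory.StartNew(() =>
        {
            while (!_cts.Token.IsCancellationRequested)
            {
                ModbusDB.Main();                        // read the devices and update the application database
                Thread.Sleep(TimeSpan.FromSeconds(1));  // polling interval from the question
            }
        }, _cts.Token, TaskCreationOptions.LongRunning, TaskScheduler.Default);
    }

    protected override void OnStop()
    {
        _cts.Cancel();
        _pollingTask.Wait(TimeSpan.FromSeconds(5));     // give the loop a moment to finish cleanly
    }

    public static void Main()
    {
        ServiceBase.Run(new ModbusPollingService());
    }
}

The web application then only ever reads from (and writes to) the database; it never talks to the Modbus hardware directly.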

Related

Scheduled Job equivalent functionality in MVC

I have a requirement in my MVC app.
I have an export-to-Excel feature that takes about 3 minutes (the user clicks an export button and waits).
This export downloads an Excel file that has multiple worksheets after applying certain rules to the data.
These rules are data manipulations plus applying colors to the cells belonging to certain columns.
In order to avoid the wait time, I was asked to develop code within the MVC app that can run like a scheduled job.
This job has to export the Excel file to a dedicated folder on the network at the scheduled time (once daily).
I was also asked to develop a web page within the app which has links to download these Excel files.
Questions here (any help would be appreciated):
I have chosen Quartz.NET to implement this requirement. To my limited knowledge, this is an open-source library that can schedule a job (a class developed in .NET). Is it the right choice, or would there be any implications in the future?
Is it really necessary to develop job-like code, or can another way of coding address this?
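For what it's worth, scheduling a daily job with Quartz.NET follows roughly the pattern below. This is only a sketch against the older synchronous (2.x-era) Quartz.NET API (in 3.x these calls are async); the job class, the cron expression, and the ExcelExporter call are placeholders, not the actual export code.

using Quartz;
using Quartz.Impl;

public class ExcelExportJob : IJob            // hypothetical job class
{
    public void Execute(IJobExecutionContext context)
    {
        // Run the existing export logic and write the file to the network share.
        // ExcelExporter.ExportToFolder(@"\\server\share\exports");  // placeholder call
    }
}

public static class ExportScheduler
{
    public static void Start()
    {
        IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<ExcelExportJob>()
            .WithIdentity("excel-export")
            .Build();

        ITrigger trigger = TriggerBuilder.Create()
            .WithIdentity("excel-export-daily")
            .WithCronSchedule("0 0 2 * * ?")   // every day at 02:00 (assumed time)
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}

Whether that scheduler should live inside the MVC process at all is exactly what the answer below questions.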
I'm not very familiar with Quartz.net, but I do know that trying to run background/scheduled tasks from within the same process as the MVC application can be problematic.
Ref 1: http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/
Ref 2: http://www.hanselman.com/blog/HowToRunBackgroundTasksInASPNET.aspx
Essentially, you can't guarantee that the process will complete correctly when running it this way, due to how IIS handles app pools (which is where your MVC process runs; assuming hosting on IIS anyway).
You mention running a scheduled task within your MVC app. Again, this is incorrect. Why can't you just slap a console app project into the solution and drive the code from there, then put it on the server and use the Windows Task Scheduler?
In terms of background tasks, the "correct" way to do this is to send a command from your MVC app to some sort of message queue, which can then ensure that the command doesn't get dropped. I've used RabbitMQ in the past (a middleware message broker). Perhaps this is the aim of Quartz.net.
This setup typically involves another app (for me, usually a console app run on the server) that receives the command message from the message queue and runs in its own process, entirely separate from MVC and thus the issues inherent in IIS app pools and background tasks.
A lot of work, really... one would think it'd be easier, but that's the surefire way to do it and maintain the integrity of the task to be run.
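To make the queue idea concrete, publishing a command from the MVC side with the RabbitMQ .NET client looks roughly like the sketch below (older 5.x-era client API). The queue name and message payload are assumptions; the console-app consumer on the other end would read from the same queue with a matching BasicConsume.

using System.Text;
using RabbitMQ.Client;

public static class ExportCommandPublisher
{
    public static void QueueExport()
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // assumed broker location

        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Durable queue so the command survives a broker restart.
            channel.QueueDeclare(queue: "excel-export-jobs",
                                 durable: true,
                                 exclusive: false,
                                 autoDelete: false,
                                 arguments: null);

            var body = Encoding.UTF8.GetBytes("run-export"); // placeholder message payload
            channel.BasicPublish(exchange: "",
                                 routingKey: "excel-export-jobs",
                                 basicProperties: null,
                                 body: body);
        }
    }
}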

How to perform work after the request in MVC

I have a long-running, CPU-bound task that I want to initiate from a link in my MVC application. When I click the link, I want the server to create a GUID to identify the job, return that GUID to the client, and perform the job after returning.
I set this up using ThreadPool.QueueUserWorkItem, but I've read this can be problematic in MVC. Is there a better option for this case? Is there a different approach I should be using?
In my experience it is better to perform long-running CPU tasks not in the ASP.NET application itself but in a separate application. For example, you can create a separate Windows service to process the tasks. To interchange data you can use, for example, a message queue, a database (probably the easiest way) or a web service.
This approach has the following advantages:
1) Integrity of the background job. IIS can be configured to restart worker processes periodically. If your background job is running at that moment, it will be interrupted, which could be undesirable.
2) Planned server load balancing. For example, you can move this background service to a separate server, which frees the web server and can provide a better end-user experience.
Take a look at this example to see how it can be implemented with Azure.
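As a rough sketch of the "database as the interchange" idea, the MVC action could simply insert a job row and return the GUID immediately, while the separate service polls the table and does the heavy lifting. The repository call and the polling action are hypothetical placeholders, not a real API.

using System;
using System.Web.Mvc;

public class JobsController : Controller
{
    public ActionResult Start()
    {
        var jobId = Guid.NewGuid();

        // Hypothetical data-access call: insert a "pending" row that the
        // background service will pick up. Swap in your own repository or ORM.
        // JobRepository.Insert(jobId, status: "Pending", createdUtc: DateTime.UtcNow);

        // Return immediately; the client polls another action with this id
        // to find out when the job has finished.
        return Json(new { jobId }, JsonRequestBehavior.AllowGet);
    }
}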
You can do fire-and-forget by creating an asynchronous task without waiting for it, and it will run successfully most of the time, but due to IIS application lifecycle management those tasks may be abruptly cut off.
You can register an IRegisteredObject with the hosting environment, so ASP.NET will let such an object know that the app domain is being shut down.
Please take a look at this article:
http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx/
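If you do keep the work in-process, the IRegisteredObject hook mentioned above looks roughly like this. It is only a sketch; what you actually do inside Stop depends on the work being tracked.

using System.Web.Hosting;

public class BackgroundWorkRegistration : IRegisteredObject
{
    private readonly object _lock = new object();
    private bool _shuttingDown;

    public BackgroundWorkRegistration()
    {
        // Tell ASP.NET this object wants to be notified before the app domain unloads.
        HostingEnvironment.RegisterObject(this);
    }

    public void Stop(bool immediate)
    {
        lock (_lock)
        {
            _shuttingDown = true;
            // Finish or abandon any in-flight work here before the domain goes away.
        }

        HostingEnvironment.UnregisterObject(this);
    }

    public bool ShuttingDown
    {
        get { lock (_lock) { return _shuttingDown; } }
    }
}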

Best way to run rails with long delays

I'm writing a Rails web service that interacts with various pieces of hardware scattered throughout the country.
When a call is made to the web service, the Rails app then attempts to contact the appropriate piece of hardware, get the needed information, and reply to the web client. The time between the client's call and the reply may be up to 10 seconds, depending upon lots of factors.
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
I basically see two options. Either run JRuby and use multithreading, or else run several regular Ruby instances and hope that not many people try to use the service at a time. JRuby seems like the much better solution, but it still doesn't seem to be mainstream or to have out-of-the-box support at Heroku and EngineYard. The multiple-instance solution seems like a total kludge.
1) Am I right about my two options? Is there a better one I'm missing?
2) Is there an easy deployment option for JRuby?
I do not want to split the web service call in two (ask for information, answer immediately with a pending reply, then force another api call to get the actual results).
From an engineering perspective, this seems like it would be the best alternative.
Why don't you want to do it?
There's a third option: If you host your Rails app with Passenger and enable global queueing, you can do this transparently. I have some actions that take several minutes, with no issues (caveat: some browsers may time out, but that may not be a concern for you).
If you're worried about browser timeout, or you cannot control the deployment environment, you may want to process it in the background:
User requests data
You enter request into a queue
Your web service returns a "ticket" identifier to check the progress
A background process processes the jobs in the queue
The user polls back, referencing the "ticket" id
As far as hosting on JRuby, I've deployed a couple of small internal applications using the glassfish gem, but I'm not sure how much I would trust it for customer-facing apps. Just make sure you enable config.threadsafe! in production.rb. I've heard good things about Trinidad, too.
You can also run the web service call in a delayed background job so that it's not hogging a web server, and it can even be run on a separate physical box. This is also a much more scalable approach. If you make the web call using AJAX, then you can ping the server every second or two to see if your results are ready; that way your client is not held in limbo while the results are being calculated, and the request does not time out.

Keeping applications and infrastructure connected

I work in an IT department that is divided into two groups. One group develops and manages applications; the other manages the company's infrastructure and servers. One of the problems we face is a breakdown in communication. I work for the application group, and one of the problems I have is not being notified when a server is taken down by infrastructure, or when a database is being refreshed.
Does anyone have suggestions on how to improve communication between the two groups, or any ideas on how to keep a lightweight log across multiple systems (both Linux and Windows)? Ideally it would be nice if we could have our boxes just tweet their statuses or something.
Thanks for the help,
Ben
One thing you could do to communicate server status is to have your Infrastructure group set up a network monitoring system like Nagios. This will give everyone in your application group the ability to get a snapshot view of the status of every server in the system. Having this kind of status is invaluable when you are doing development.
Nagios gives you network monitoring, but also allows you to show scheduled down time for a particular server in the system.
Another thing your group could do to foster communication with the Infrastructure group is to have your build system report which servers it is currently using for building and testing your products.
Also, setting up regular meetings between stakeholders of both groups is probably a good idea. If you are all talking to each other, even for 15 minutes a week, you'll probably see incidents like the one you described above go down quite a bit.
I think this is a bigger issue of change control.
You should have hardware and software change control and an approval process.
Ultimately, infrastructure serves you - the purpose for IT infrastructure is to run applications.
In my current large financial data company, servers are not TOUCHED without proper authorization from the client and application groups. It seems like a huge pain, but every single server is there for a reason: to meet a specific business goal and run a specific application. There is simply no excuse for the infrastructure group to change things or upset servers of their own volition.
Response to critical hardware failure might be an exception.
Needed software and OS updates are handled through scheduled maintenance windows and an approved change process.
I like the Nagios idea as well. If you want to set up something that's more of a communication tool, I would recommend a content management system like Drupal.
We use Drupal internally to communicate between teams. When one team takes a server down, they add an event in Drupal. The rest of us get it as an email, an RSS item, or just see it by refreshing the page.
Implement a change control process where changes are submitted, approved, and scheduled for BOTH groups. This lets everyone know what is going on. This process can be as lightweight or heavyweight as you want.

Using MSTest as site/environment monitoring tool

We currently use HP SiteScope for monitoring synthetic transactions across some of our web apps. This works pretty well, except that the licensing cost for each synthetic transaction makes it prohibitive to ensure adequate coverage across our applications.
So, an alternative would be to use SiteScope's URL monitoring, which can basically call a URL and then run some basic checks for certain strings. With that approach, I'd like to create a page that either calls a bunch of pages or tries to tap into an MSTest group somehow to run the tests.
In the end, I'd like a set of test cases that can be used against multiple environments to be used for production verification, uptime, status, etc.
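As a starting point, a plain MSTest class that hits a URL and checks the response for an expected string (the same kind of check SiteScope's URL monitor does) might look like the sketch below. The base URL, endpoints, and marker strings are placeholders; in practice the environment would be fed in via test settings so the same tests can run against each environment.

using Microsoft.VisualStudio.TestTools.UnitTesting;
using System.Net;

[TestClass]
public class ProductionVerificationTests
{
    // Placeholder base URL; per-environment values would come from test settings.
    private const string BaseUrl = "https://myapp.example.com";

    [TestMethod]
    public void HomePage_Responds_And_Contains_Marker()
    {
        using (var client = new WebClient())
        {
            string html = client.DownloadString(BaseUrl + "/");
            StringAssert.Contains(html, "Welcome");              // placeholder marker string
        }
    }

    [TestMethod]
    public void Status_Endpoint_Reports_Ok()
    {
        using (var client = new WebClient())
        {
            string body = client.DownloadString(BaseUrl + "/status"); // hypothetical status page
            StringAssert.Contains(body, "OK");
        }
    }
}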
Thanks,
Matt
Have you taken a look at System Center Operations Manager 2007?
I'm just getting started, but it appears to do what you are describing in your question.
We are looking to monitor our data center and a web application... From the few things I have found on the web, it is going to fit our needs.
Update
I've since moved to Application Insights. A great overview can be found here: https://azure.microsoft.com/en-us/documentation/articles/app-insights-monitor-web-app-availability/
There are two methods one can use: a simple ping, or recording a multi-step synthetic user "experience". Basically you act as a user and, using IE and a Visual Studio Web Test project, you record navigating around your site and upload that file to Azure.
For example, I record logging in, navigating a few pages, and then logging out. As long as all of those events happen in a timely manner the site is in a good operating state.
If the tests fail, take too long to respond for example, I'll get an email alerting me something isn't exactly right.
