Setting timeout for a long-running WCF service

I have a WCF service method which takes more than two hours to execute (it runs some reports). How can I make sure it doesn't time out, regardless of how long it takes? I know there are many timeout settings in the WCF config, but I'm not sure which one is relevant here. For ASMX web services there was an option to specify an infinite timeout; is there a similar one for WCF? Also, do I need to alter any IIS settings (the WCF service is hosted in IIS), such as worker process recycling or idle timeouts?

This is an abuse of web services. Don't do this.
Instead, have the web service kick off the long-running operation, running in a separate process. If the clients need to know when the reports are done, then have the "separate process" keep track of the report creation and have it note when the reports are finished. The client can call a web service to check that status.
You really don't want to be depending on an HTTP connection remaining open for hours. It's a network. Things happen on networks. Bad things.
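A minimal sketch of that kick-off-and-poll pattern; the names (StartReport, GetStatus, the ticket and status types) are all illustrative, not from the question:

using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[ServiceContract]
public interface IReportService
{
    // Returns immediately with a ticket; the actual work runs in a separate process.
    [OperationContract]
    Guid StartReport(string reportName);

    // The client polls with the ticket until the report is done.
    [OperationContract]
    ReportState GetStatus(Guid ticket);
}

[DataContract]
public enum ReportState
{
    [EnumMember] Running,
    [EnumMember] Completed,
    [EnumMember] Failed
}

The implementation would hand the ticket to the separate report-runner process and persist the state somewhere both processes can see (a database table, say), so neither call ever holds an HTTP connection open for more than a moment.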

Have you considered using callbacks? The client sends a request and then waits for a notification from the server when the work is done. This would probably require a change to the client, but that way your service can "rattle the chain" and tell the client when the report is finished.
help: http://idunno.org/archive/2008/05/29/wcf-callbacks-a-beginners-guide.aspx
WCF timeouts:
http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/84551e45-19a2-4d0d-bcc0-516a4041943d/
You should also consider the timeouts on the client side (binding.OpenTimeout, ReceiveTimeout, CloseTimeout, etc.).
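A sketch of where those knobs live when the binding is built in code; the values are arbitrary, and TimeSpan.MaxValue is the nearest WCF equivalent of the old ASMX "infinite" setting:

using System;
using System.ServiceModel;

public static class BindingFactory
{
    public static WSHttpBinding CreateLongRunningBinding()
    {
        return new WSHttpBinding
        {
            OpenTimeout    = TimeSpan.FromMinutes(1),  // establishing the channel
            CloseTimeout   = TimeSpan.FromMinutes(1),  // shutting it down
            SendTimeout    = TimeSpan.FromHours(3),    // client side: how long to wait for the reply
            ReceiveTimeout = TimeSpan.FromHours(3)     // service side: idle time before the session is dropped
        };
    }
}

The client's SendTimeout is the one that typically fires on a two-hour call; the same four values can equally be set in the binding element of the config file.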
Another option would be to host the WCF in a Windows Service, which could simplify your situation, as it removes IIS from the equation:
http://msdn.microsoft.com/en-us/library/ms733069.aspx
Or what about using a one-way WCF call? That way the call returns to the client as soon as the request has been sent.
Need sample fire and forget async call to WCF service
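For reference, a minimal sketch of such a contract (the names are illustrative):

using System.ServiceModel;

[ServiceContract]
public interface IReportKickoff
{
    // One-way operations must return void. The call returns as soon as the
    // transport accepts the message, but the client learns nothing about
    // success or failure, so errors must be surfaced some other way.
    [OperationContract(IsOneWay = true)]
    void RunReports(string reportName);
}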

Consider creating a WCF workflow service (using WF) instead. These are specifically designed to handle long-running processes, especially if you use persistence.

Related

How can background tasks be executed from a library in an ASP.NET MVC 5 app

In my job we build web apps that rely on a common Enterprise class. This class has a method that sends a request to our server whenever the app_start or app_end event fires, so we can monitor the status remotely. We now also need each web app to report its status at least once a day, a bit like telemetry. I don't know how to accomplish this; so far I have found some options, but each has limitations:
Use Hangfire. I don't like this since it requires setting up a database (or adding more tables) and installing a new NuGet package in each project, but it could be my last resort.
Use a Windows service that reads the databases. This could be less work, but it can't access the web app's web.config.
Use a JavaScript task that sends an AJAX request. This requires an open web browser, which is a big risk.
I'm looking for a server-side approach that can trigger an event or function at 1 AM.
I would go with Hangfire.
It is dead easy to set up and very reliable.
You don't need to set up a database; you might want to check the in-memory storage option:
https://github.com/perrich/Hangfire.MemoryStorage
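A sketch of what that looks like, assuming the Hangfire and Hangfire.MemoryStorage packages; StatusJobs.SendStatus is a hypothetical stand-in for your Enterprise class call (note that in-memory jobs do not survive an app restart):

using Hangfire;
using Hangfire.MemoryStorage;

public static class StatusJobs
{
    private static BackgroundJobServer _server;

    // Hypothetical stand-in for the shared Enterprise class call.
    public static void SendStatus() { /* POST the status to the monitoring server */ }

    // Call once at startup, e.g. from Application_Start.
    public static void Schedule()
    {
        GlobalConfiguration.Configuration.UseMemoryStorage();
        _server = new BackgroundJobServer();   // processes jobs inside this process
        RecurringJob.AddOrUpdate("daily-status",
            () => SendStatus(),
            Cron.Daily(1));                    // fires at 01:00 every day
    }
}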
Also check:
What is the equivalent to CRON jobs in ASP.NET? - C#
You can use FluentScheduler instead of Hangfire (it is more lightweight).
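A FluentScheduler sketch under the same assumptions (recent FluentScheduler versions; the lambda body is a placeholder for your status call):

using System;
using FluentScheduler;

public class StatusRegistry : Registry
{
    public StatusRegistry()
    {
        // Replace the lambda body with the actual status call.
        Schedule(() => Console.WriteLine("status sent"))
            .ToRunEvery(1).Days().At(1, 0);   // every day at 01:00
    }
}

// At startup: JobManager.Initialize(new StatusRegistry());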
Instead of a JavaScript task that sends an AJAX request, you can use a WebJob or an Azure Function.

Scalable SignalR + Azure - where to put SignalR, and should I be using Azure Queues?

I'm developing an application that has various types of Notifications. Examples of notifications:
Message Created
Listing Submitted
Listing Approved
I'd like to tie all of these up to SignalR so that any connected clients get updates in real-time.
As far as architecture goes - right now the application is entirely within a single solution hosted on an Azure Website. The triggers for each of these notification types live within this application.
When a trigger is hit, I'd like to tell SignalR, "Hey, send this message to the following clients" along with a list of userIds. I'm assuming that it's possible to identify connected clients based on userId... and I'm assuming that the process of sending messages to clients should be executed outside of the web application, so as not to slow down the MVC app or risk losing data in a broken async call. First question - are these assumptions correct?
Assuming so, this means that I'll need something like a dedicated web/worker role to be sending messages to clients. I could pass messages from my web application directly to this process, but what happens if the process dies? The resiliency concerns lead me to believe that the proper way to pass messages would be via a queue of some sort. Second question - is this a valid train of thought?
Assuming so, I could use a good ol' Azure SQL database as a queue, but it seems there are specialized (and maybe cheaper) services built for message queueing, such as this:
http://www.windowsazure.com/en-us/develop/net/how-to-guides/queue-service/
Third question: Should this be used as the queueing mechanism for SignalR? I'm interested in using Redis for caching in the future... would Redis be better or worse than the queue service?
Final Question:
I've attempted to illustrate my proposed architecture here:
What I'm most unclear on here is how the MVC app will know when to queue, or how the SignalR processes will know when to broadcast. Should the MVC app queue blindly, without caring about connected clients? This seems to introduce a lot of wasted space on the queue, and wasted cycles in the worker roles, since a very small percentage of clients will ever be connected.
The only other approach I can think of is to somehow give the MVC app visibility into the SignalR processes to see if the client is connected... and if they are, then Enqueue. This makes me uncomfortable though because it means I have to hit that red line on the diagram for every trigger that gets hit, which - even if done async - gets me worrying about performance and reliability.
What is the recommended architecture for scalable, performant SignalR message broadcasting? Performance is top priority, followed closely by cost.
Bonus question:
What if some messages are of higher priority than others? Should two queues be used, one of which always gets checked before the other?
If you want to target specific users, you'll have to come up with a mechanism. Off the top of my head: if a user hits a page, you can create a group for that page and push to all users in that group.
It's not clear to me why you need the queues. Usually users subscribe to events when they hit a page, or through an action like joining a chat room, and the server pushes data using those events/functions when appropriate.
For scalability, you can run SignalR on multiple servers, in which case you should use SQL Server, Service Bus, or Redis as a backplane.
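A sketch of the group idea with SignalR 2.x; the hub name, the per-page grouping, and the client-side "notify" handler are all assumptions:

using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class NotificationsHub : Hub
{
    // The client calls this when it lands on a page.
    public Task JoinPage(string pageName)
    {
        return Groups.Add(Context.ConnectionId, pageName);
    }
}

public static class Notifier
{
    // Called from wherever a trigger fires (e.g. "listing approved").
    public static void Push(string pageName, string message)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<NotificationsHub>();
        hub.Clients.Group(pageName).notify(message); // "notify" is a handler the JS client registers
    }
}

With a Redis or Service Bus backplane configured, the same Group call reaches clients connected to any server.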
Firstly you need to create a SignalR server to which all the users can connect. This SignalR server can live in either the web role or the worker role; if you have a huge user base, it's better to give the SignalR server its own role.
Then, wherever a trigger is hit and you want to send messages to users, create a SignalR client (.NET or JavaScript), connect to the SignalR server, and send the message, which the server will broadcast to all the other connected users. After that you can disconnect. This way you don't have to use queues to communicate with the SignalR role.
To send messages to specific users, store the connection IDs along with the user IDs in a table (Azure table storage should do) when clients connect to the SignalR server. Then, using the connection ID, you can send messages to a specific user.
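A sketch of that mapping idea, with a ConcurrentDictionary standing in for the Azure table and the authenticated user name standing in for the user ID:

using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNet.SignalR;

public class UserHub : Hub
{
    private static readonly ConcurrentDictionary<string, string> Connections =
        new ConcurrentDictionary<string, string>();

    public override Task OnConnected()
    {
        // Assumes the user is authenticated so Context.User is populated.
        Connections[Context.User.Identity.Name] = Context.ConnectionId;
        return base.OnConnected();
    }

    public static void SendTo(string userName, string message)
    {
        string connectionId;
        if (Connections.TryGetValue(userName, out connectionId))
        {
            GlobalHost.ConnectionManager.GetHubContext<UserHub>()
                      .Clients.Client(connectionId).notify(message);
        }
    }
}

SignalR 2 can also do this for you via Clients.User(userId) and a custom IUserIdProvider.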

Best choice for robust self-hosting server: WCF vs. ASP.NET Web API

We currently have a .NET 4 application that consists of a Windows service running in the background and local or remote clients (usually only 1-3).
The clients have a WPF GUI and need some data from the Windows service, so we use WCF with NamedPipe binding for the local client and NetTcp binding for remote clients. This works, but we often have problems with endpoints that are not reachable (channel faulted, not found, etc.). We already try to rebuild faulted connections, but the whole thing seems pretty fragile...
Now enter Web API: an HTTP-based stack looks more robust (no channels, no endpoints, and it can be self-hosted in a Windows service as well). There seem to be no problems with broken channels because each request is handled individually; if something fails, you just repeat the request. (And we have experience with ASP.NET MVC from other apps, so this is not new to us.)
Now we are wondering what our best bet might be. Is it better to "harden" our existing WCF service (one service interface with about 15 operations) or to move the interface to Web API and serve it as HTTP requests (with JSON data)? Performance is not our main issue here...
Any ideas?
Hartmut
I recommend you stick with WCF (SOAP) services for your WPF application rather than moving to the Web API. There are a number of reasons for this. First, consider what the Web API is trying to address: providing a framework for RESTful/HTTP/hypermedia services. It is a good fit for applications that make heavy use of HTTP, such as web, mobile and JavaScript applications, where you want to maximise the "reach" or interoperability of your services, irrespective of platform. That is not to say you can't use it for WPF clients, but in your case, where all traffic stays within your domain, it makes more sense to stick with your current implementation.
The binding choices you have made for your services and clients sound fine to me. I would focus on why your channels are faulting and address those issues. You may also want to consider hosting your services in IIS and using WAS to expose your non-HTTP endpoints. I have had much success with this in the past, and it has been pretty stable for the most part. It also takes away a few of the headaches of managing your own host. If you are concerned about the TCP binding faults, just create a new HTTP or wsHttp endpoint and use that instead. This gives you exactly the same transport the Web API uses without changing your programming model.
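A sketch of that last suggestion, adding an HTTP endpoint alongside the existing one on the self-hosted service; the contract and addresses are placeholders:

using System.ServiceModel;

[ServiceContract]
public interface IDataService
{
    [OperationContract]
    string GetStatus();
}

public class DataService : IDataService
{
    public string GetStatus() { return "ok"; }
}

public static class HostStarter
{
    // Called from the Windows service's OnStart.
    public static ServiceHost Start()
    {
        var host = new ServiceHost(typeof(DataService));
        host.AddServiceEndpoint(typeof(IDataService), new NetTcpBinding(),
            "net.tcp://localhost:8523/data");   // existing remote clients
        host.AddServiceEndpoint(typeof(IDataService), new WSHttpBinding(),
            "http://localhost:8524/data");      // additional, more forgiving transport
        host.Open();
        return host;
    }
}

Note that self-hosting an HTTP endpoint on Windows requires a URL reservation (netsh http add urlacl) unless the service runs with admin rights.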

Running a queue-based background process on an externally hosted website

I have an ASP.NET MVC web application which is hosted by an external provider, on IIS 7.
I wish to run a process every 15 minutes or so, which checks a backlog of emails that need to be sent, and actually sends them.
It seems that the normal way to do this is with Microsoft Message Queue, but since this is a hosted environment which I can't directly control, I won't be able to install or configure MSMQ.
So far I've decided to do it by appending rows to a table in my SQL Server database (same hosting).
So how should I implement the bit where I check the backlog and send the emails?
Should it be some kind of separate thread in my main web application, which restarts itself every 15 minutes?
Another option I considered was just opening an HTTP-POST interface which, when called with an appropriate admin password, runs an iteration of the email sender.
I could then create a small console app on my local PC which calls the interface every 15 minutes.
The first option is simpler, but the second might be more robust.
Any ideas?
I would recommend taking a look at Quartz.NET. An important thing to be aware of is that the web server can unload the ASP.NET application from memory when it is idle, meaning that any threads it has spawned simply die. That's one of the reasons such tasks shouldn't be performed in ASP.NET applications but rather offloaded to Windows services.
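A Quartz.NET sketch of the 15-minute check (2.x API; the job body is a placeholder). The same caveat applies: if IIS unloads the app, the scheduler dies with it.

using Quartz;
using Quartz.Impl;

public class EmailBacklogJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Read the SQL backlog table and send any pending emails here.
    }
}

public static class EmailScheduler
{
    // Call once at application start.
    public static void Start()
    {
        IScheduler scheduler = StdSchedulerFactory.GetDefaultScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<EmailBacklogJob>().Build();
        ITrigger trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(x => x.WithIntervalInMinutes(15).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}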
Jeff Atwood did a post on how he originally achieved the badge system on Stack Overflow using an expiring cache to reset the process periodically.
https://blog.stackoverflow.com/2008/07/easy-background-tasks-in-aspnet/
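The trick from that post, roughly: add a cache item that expires every 15 minutes and do the work in its removal callback (SendPendingEmails is a placeholder):

using System;
using System.Web;
using System.Web.Caching;

public static class BackgroundTask
{
    private const string Key = "email-backlog-task";

    // Call once from Application_Start.
    public static void Register()
    {
        HttpRuntime.Cache.Add(Key, DateTime.Now, null,
            DateTime.Now.AddMinutes(15),     // fire roughly every 15 minutes
            Cache.NoSlidingExpiration,
            CacheItemPriority.NotRemovable,
            OnCacheRemoved);
    }

    private static void OnCacheRemoved(string key, object value, CacheItemRemovedReason reason)
    {
        SendPendingEmails();  // placeholder: drain the backlog table
        Register();           // schedule the next run
    }

    private static void SendPendingEmails() { /* ... */ }
}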
I have done something similar to this in the past, sending emails out every day. The service was non-essential, and it didn't matter if the emails missed a day or two, as they would go out eventually anyway, but the system worked quite well. It's all ASP.NET, so it works fine in the hosting environments I use, without access to services on the server or a local trigger on your desktop.

What is the standard method for a website to communicate with a win32 executable?

I have some Delphi code which, given a list of items, calculates the total price, taking into account any special deals that might apply.
This code is non-trivial to rewrite in another language.
How should I set it up to communicate with a website running on the same server? The website will need to ask it for a price every time the user updates their shopping cart. It's possible that there will be multiple concurrent requests.
The Delphi code needs to maintain an in-memory list of special deals, periodically refreshed from a database, so it cannot simply be executed from scratch on every request or anything that simple.
I don't know what the website is written in, or even which http server it runs under, so I'm just looking for ideas or standard methods.
It sounds like the win32 app is already running as a Windows Service on the box. So, if you can't modify that service, you are going to have to deal with whatever way it wants to accept and respond to requests. This could be through sockets or some higher level communication protocol like web services.
You could do a couple of things. Write an assembly that knows how to communicate with the service and have your web site use that assembly. Or you could build a shim service that knows how to communicate with the legacy service, but exposes communication over higher level protocols such as web services. Either way will have the benefit of hiding the concurrency, threading and communications issue behind an easy to call interface, but the latter will make communicating with the service easier for everyone going forward.
If you can modify the delphi app to take an XML request and respond with an XML answer over a TCP socket (ideally using the HTTP protocol), you will be able to make it interoperate with most web server frameworks relatively easily. But the exact details of how to make that integration happen will depend on the language/framework it was written in.
If the web server is on windows you can compile your delphi app as a DLL that can return XML or HTML, taking parameters as part of the URL or a POST operation. Some details on making a Delphi DLL for web servers are here.
It doesn't matter what web server or OS the existing system is running under. What matters is what you want YOUR code to run under. If it is Windows, then the easiest solution would be to use WebBroker and write a custom ISAPI application, or use SOAP to expose web services. The first method could be used if you wanted to write a REST-like API, for instance; the second if your web application is able to consume web services.
Another option, if you are running both on the same box under IIS, is to create a COM/Automation object which you then invoke via server-side scripting (ASP). If the application is an ASP.NET application, then I would use PRISM to port your code into an assembly.
I have done this with a quite complicated workers' compensation calculator. I created a Windows service using the RemObjects SDK. The calculations are exposed as a SOAP method so they can be accessed by nearly anything.
It's not necessary to use RemObjects in the service, but it makes things much easier as it handles a lot of the underlying plumbing. The clients don't need RemObjects; they just need to be able to call SOAP methods, and nearly any programming language can do that.
You could also create an ISAPI DLL for IIS that exposes a SOAP interface. This would be useful if other websites on different servers needed access to the methods. In my case, however, I have handled this by opening a port in the firewall to access my Windows service.
There are a lot of examples on the web. A couple of places to start reading are About.com and Dr. Bob.
Turn this app into a Windows service, then write a web service that communicates with it. You should spend some time designing that web service, because it is going to be your consistent interface, shielding the old Delphi app. In the future, whenever you want to write a web app, mobile app, or whatever else you can imagine, you will have one consistent interface: an XML web service.
A popular way to integrate a web application with background services is a message broker.
The message flow would be:
the web application sends a "calculation request" message to a destination on the message broker; the message contains all needed parameters plus a correlation id used to match the request with the response from the Delphi service
one (or, in a high-availability / load-balanced environment, more) Delphi services handle the messages: pull the next incoming message, process it by feeding the parameters to the calculation engine, and send a "calculation result" message back to the web server
the web server can either synchronously wait for the response (discarding responses with no matching correlation id) and build the result HTML document, or continue with other tasks and receive the calculation result asynchronously in a separate thread, for example in an Ajax-based web application
For an introduction, see this slideshow about the Dopplr image service:
http://de.slideshare.net/carsonified/dopplr-its-made-of-messages-matt-biddulph-presentation
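A sketch of the web application's side of that flow, using MSMQ (System.Messaging) as a stand-in for the broker; the queue paths are placeholders, and the Delphi service would set CorrelationId = request.Id on its reply:

using System;
using System.Messaging;

public static class PriceClient
{
    public static string RequestPrice(string cartXml)
    {
        using (var requests  = new MessageQueue(@".\private$\price-requests"))
        using (var responses = new MessageQueue(@".\private$\price-responses"))
        {
            var request = new Message(cartXml);
            requests.Send(request);              // request.Id is assigned on send

            // Block (with a timeout) until the matching reply arrives.
            Message reply = responses.ReceiveByCorrelationId(
                request.Id, TimeSpan.FromSeconds(30));
            reply.Formatter = new XmlMessageFormatter(new[] { typeof(string) });
            return (string)reply.Body;
        }
    }
}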
If you can make it a service (but not a library), you have to do inter-process communication somehow - there are a few ways to do this on Windows:
Sockets directly, which is hardest since you have to do marshalling/auth yourself
Shared memory (yuck!)
RPC, which works great but isn't trivial
DCOM, which is easier but a pain to configure
WCF - but can you call it from your Windows service written in Delphi?
