Communicating between Node.js and an ASP.NET MVC Application - asp.net-mvc

I have an existing complex website built using ASP.NET MVC, including a database backend, data layer, as well as the Web UI layer. Rebuilding this website in another language is not a feasible option.
There are some UI elements on some views (client side) which would benefit from live interactivity, involving both push and pull, so rather than implement some kind of custom long-polling or WebSocket server in ASP.NET, I am looking to leverage Node.js for Windows and Socket.io.
My problem is that I need two way communication between both applications. Each user should only be able to receive data once they are authorised on the ASP.NET website, so I first need communication for this. Secondly, once certain events occur on the ASP.NET website I want to immediately push this data to the Node server, to be broadcast to specific users or groups of users. Thirdly, I would like any data sent to the node.js server to be pushed to the ASP.NET website for processing, as this is where all our business logic lies. The sole reason for adding Node.js is to have the possibility to push data directly to the client, I do not want to build any business logic into it (or as little as possible).
I would like to know what the fastest method of two-way push communication is between Node.js and ASP.NET. The only good option I'm aware of so far is to create a special listener on a specific port on the Node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method? I also know that you could use a database in between, but surely this would need to be polled and would be less efficient? Both applications will be running on the same machine under a Visual Studio project.
Many thanks for any help you can provide.

I'm not an ASP.NET expert, but I think there are multiple ways you can achieve this:
1) As you said, you could make Node listen on a specific port for data and then react based on the data received (TCP)
2) You can make POST requests to Node.js (HTTP) and also send an auth-key in the process to be extra-secure. As in 1), Node would react to the data you send (see the sketch after this list).
3) Use something like Redis for pub/sub: send messages from ASP.NET (pub) and receive them on the Node.js side (sub). This is even better if you want to scale your app across multiple machines, etc.
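For instance, the HTTP approach in 2) could look something like this on the ASP.NET side (a minimal sketch; the Node.js URL, header name and shared secret are assumptions, and the Node.js endpoint itself is not shown):

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    public static class NodeNotifier
    {
        private static readonly HttpClient Client = new HttpClient();

        // Push an event for a given user to the Node.js server, which then
        // broadcasts it over Socket.io to that user's connected clients.
        public static async Task NotifyAsync(string userId, string eventJson)
        {
            var request = new HttpRequestMessage(HttpMethod.Post,
                "http://localhost:3000/events/" + userId)
            {
                Content = new StringContent(eventJson, Encoding.UTF8, "application/json")
            };

            // Shared secret so Node.js can reject requests that don't come
            // from the ASP.NET application.
            request.Headers.Add("X-Auth-Key", "shared-secret-known-to-both-apps");

            var response = await Client.SendAsync(request);
            response.EnsureSuccessStatusCode();
        }
    }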

The only good option I'm aware of so far is to create a special listener on a specific port on the Node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method?
You can look at the Redis pub/sub model, where the ASP.NET MVC application and Node.js communicate through separate channels in order to achieve full-duplex communication. You could also try CouchDB change notifications.
I also know that you could use a database in between, but surely this would need to be polled and would be less efficient?
The techniques above do not require you to poll for changes; instead, they notify you when a change happens or a channel message arrives.
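On the ASP.NET side, the Redis pub/sub approach could look roughly like the sketch below (using the StackExchange.Redis client; channel names are made up, and the Node.js side would subscribe and publish on the mirror-image channels):

    using StackExchange.Redis;

    public static class NodeBridge
    {
        private static readonly ConnectionMultiplexer Redis =
            ConnectionMultiplexer.Connect("localhost:6379");

        // Call once at application start-up.
        public static void Start()
        {
            // Messages published by the Node.js side arrive here,
            // so all business logic stays in the ASP.NET application.
            Redis.GetSubscriber().Subscribe("from-node", (channel, message) =>
            {
                ProcessIncomingMessage(message);
            });
        }

        // Push an event to Node.js, which broadcasts it to connected clients.
        public static void PushToNode(string payload)
        {
            Redis.GetSubscriber().Publish("to-node", payload);
        }

        private static void ProcessIncomingMessage(string message)
        {
            // Placeholder for the real business-logic processing.
        }
    }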

Related

Can multiple instances of SignalR be configured to use a different message bus for different instances

I am running a site in which I am using SignalR with a custom scaleout backplane that allows us to push out real-time data to users connected to any of our load balanced web servers.
I recently found out that the Kendo UI components for MVC (which we use for other site features) can be configured to use SignalR rather than AJAX for binding with the data model. It seems like using web sockets via SignalR could potentially offer a performance boost over using AJAX as we do now. However, I would ideally like to let our Kendo components access a SignalR instance that only uses whatever web server they connect to, rather than using the instance with the scaleout backplane, since that would involve a lot of overhead that isn't necessary for that data binding.
I should mention that there would be cases where we would have one page with one partial view that uses one SignalR configuration and another partial view that uses the other.
Is this something that can be done? If so, are there recommended ways of going about doing so?
I am not entirely certain about your approach, but you can of course have two SignalR servers using different kinds of backplanes and connect to both from a client. You would, though, have to look into how to handle cross-domain requests with SignalR.
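As a rough sketch of what two side-by-side configurations could look like in one OWIN startup class (paths and the connection string are illustrative, assuming SignalR 2.x with the SQL Server backplane package):

    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // Endpoint 1: uses the scaleout backplane, for messages that must
            // reach users connected to any of the load-balanced web servers.
            GlobalHost.DependencyResolver.UseSqlServer("backplane-connection-string");
            app.MapSignalR("/signalr-scaleout", new HubConfiguration());

            // Endpoint 2: a separate dependency resolver with no backplane, so the
            // Kendo data binding only talks to the server the client is connected to.
            var localConfig = new HubConfiguration
            {
                Resolver = new DefaultDependencyResolver()
            };
            app.MapSignalR("/signalr-local", localConfig);
        }
    }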

Scalable SignalR + Azure - where to put SignalR, and should I be using Azure Queues?

I'm developing an application that has various types of Notifications. Examples of notifications:
Message Created
Listing Submitted
Listing Approved
I'd like to tie all of these up to SignalR so that any connected clients get updates in real-time.
As far as architecture goes - right now the application is entirely within a single solution hosted on an Azure Website. The triggers for each of these notification types live within this application.
When a trigger is hit, I'd like to tell SignalR, "Hey, send this message to the following clients" along with a list of userIds. I'm assuming that it's possible to identify connected clients based on userId... and I'm assuming that the process of sending messages to clients should be executed outside of the web application, so as to not slow down the MVC app or risk losing data in a broken async call. First question - are these assumptions correct?
Assuming so, this means that I'll need something like a dedicated web/worker role to be sending messages to clients. I could pass messages from my web application directly to this process, but what happens if the process dies? The resiliency concerns lead me to believe that the proper way to pass messages would be via a queue of some sort. Second question - is this a valid train of thought?
Assuming so, I could use a good ol' Azure SQL database as a queue, but it seems like there are specialized (and maybe cheaper) services to handle message queueing, such as this:
http://www.windowsazure.com/en-us/develop/net/how-to-guides/queue-service/
Third question: Should this be used as a queueing mechanism for SignalR? I'm interested in using Redis for caching in the future... would Redis be better or worse than the queue service?
Final Question:
I've attempted to illustrate my proposed architecture here (diagram not shown):
What I'm most unclear on here is how the MVC app will know when to queue, or how the SignalR processes will know when to broadcast. Should the MVC app queue blindly, without caring about connected clients? This seems to introduce a lot of wasted space on the queue, and wasted cycles in the worker roles, since a very small percentage of clients will ever be connected.
The only other approach I can think of is to somehow give the MVC app visibility into the SignalR processes to see if the client is connected... and if they are, then Enqueue. This makes me uncomfortable though because it means I have to hit that red line on the diagram for every trigger that gets hit, which - even if done async - gets me worrying about performance and reliability.
What is the recommended architecture for scalable, performant SignalR message broadcasting? Performance is top priority, followed closely by cost.
Bonus question:
What if some messages are of higher priority than others? Should two queues be used, one of which always gets checked before the other?
If you want to target specific users, you'll have to come up with a mechanism. Off the top of my head, one example: if a user hits a page, you can create a group for that page and push to all users in that group (i.e. on that page).
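As a rough sketch of that group-per-page idea (hub, group and client method names are made up):

    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class PageNotificationsHub : Hub
    {
        // Called from the client when a page loads, so the connection joins
        // a group named after that page.
        public Task JoinPage(string pageName)
        {
            return Groups.Add(Context.ConnectionId, pageName);
        }
    }

    public static class NotificationPublisher
    {
        // Called from server-side code when, e.g., a listing is approved;
        // only users currently on the "listings" page receive it.
        public static void ListingApproved(int listingId)
        {
            var hub = GlobalHost.ConnectionManager.GetHubContext<PageNotificationsHub>();
            hub.Clients.Group("listings").listingApproved(listingId);
        }
    }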
It's not clear to me why you need the queues. Usually users subscribe to some events when hitting a page, or through some action like joining a chat room, and the server pushes data using those events/functions when appropriate.
For scalability, you can run SignalR on different servers, in which case you should use SQL Server, Service Bus or Redis as a backplane.
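Wiring a backplane up is essentially a one-liner in the OWIN startup; for example, with the Redis backplane package it could look roughly like this (host, port, password and app name are placeholders):

    using Microsoft.AspNet.SignalR;
    using Owin;

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            // All SignalR servers pointed at the same Redis instance relay
            // messages to each other, so a client can connect to any of them.
            GlobalHost.DependencyResolver.UseRedis("redis-host", 6379, "password", "MyApp");
            app.MapSignalR();
        }
    }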
Firstly, you need to create a SignalR server to which all the users can connect. This SignalR server can be created in either the web role or the worker role. If you have a huge user base, then it's better to create the SignalR server in a separate role.
Then, wherever the trigger is hit and you want to send messages to users, you create a SignalR client (.NET or JavaScript) and connect to the SignalR server. You can then send the message to the SignalR server, which in turn broadcasts it to the other connected users. After that you can disconnect from the SignalR server. This way you don't have to use queues to communicate with the SignalR role.
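A hedged sketch of that connect-send-disconnect flow with the .NET client (URL, hub and method names are illustrative; assumes the Microsoft.AspNet.SignalR.Client package):

    using Microsoft.AspNet.SignalR.Client;

    public static class TriggerNotifier
    {
        // Called wherever a trigger is hit in the web or worker role.
        public static void Notify(string message)
        {
            var connection = new HubConnection("http://my-signalr-role.example.com/");
            var hub = connection.CreateHubProxy("NotificationsHub");

            connection.Start().Wait();
            hub.Invoke("BroadcastMessage", message).Wait();
            connection.Stop();
        }
    }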
Also, to send messages to specific users, you can store their connection IDs along with their user IDs in a table (Azure Table Storage should do) when they connect to the SignalR server. Then, using the connection ID, you can send messages to a specific user.
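A minimal sketch of that mapping (an in-memory dictionary stands in for the Azure Table Storage table; names and the client-side method are made up, assuming SignalR 2.x):

    using System.Collections.Concurrent;
    using System.Threading.Tasks;
    using Microsoft.AspNet.SignalR;

    public class UserTrackingHub : Hub
    {
        private static readonly ConcurrentDictionary<string, string> ConnectionsByUser =
            new ConcurrentDictionary<string, string>();

        public override Task OnConnected()
        {
            ConnectionsByUser[Context.User.Identity.Name] = Context.ConnectionId;
            return base.OnConnected();
        }

        public override Task OnDisconnected(bool stopCalled)
        {
            string removed;
            ConnectionsByUser.TryRemove(Context.User.Identity.Name, out removed);
            return base.OnDisconnected(stopCalled);
        }

        // Server-side code can target a single user by looking up their connection ID.
        public static void SendToUser(string userName, string message)
        {
            string connectionId;
            if (ConnectionsByUser.TryGetValue(userName, out connectionId))
            {
                GlobalHost.ConnectionManager.GetHubContext<UserTrackingHub>()
                          .Clients.Client(connectionId).receiveMessage(message);
            }
        }
    }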

Best choice for robust self hosting server: WCF vs. ASP.NET Web Api

We currently have a .NET 4 application that consists of a Windows Service running in the background and local or remote clients (only 1-3 normally).
The clients have a WPF GUI and need some data from the windows service. Therefore, we use WCF with NamedPipe binding for a local client and NetTcp binding for remote clients. This works, but we often have problems with endpoints that are not reachable (channel faulted or not found etc.). We already try to rebuild faulted connections but it seems to be pretty fragile...
Now enter Web API: it looks like an HTTP-based stack might be more robust (no channels, no endpoints, and it can be self-hosted in a Windows service as well). There seem to be no problems with broken channels because each request is handled individually. So if something fails, you just repeat the request. (And we have experience with ASP.NET MVC from other apps, so this is not new to us.)
Now we are thinking about what might be our best bet. Is it better to "harden" our existing WCF service (one service interface with about 15 operations) or to move the interface to Web API and run it as HTTP requests (with JSON data)? Performance is not our main issue here...
Any ideas?
Hartmut
I recommend you stick with WCF (SOAP) services for your WPF application rather than moving to the Web API. There are a number of reasons for this. First, I think we need to consider what the new Web API is trying to address - namely, to provide a framework for supporting RESTful/HTTP/hypermedia services. This is likely to be a good fit for building applications that make heavy use of HTTP, such as web, mobile and JavaScript applications, where you want to maximise the "reach" or interoperability of your services (irrespective of platform). This is not to say that you can't use it for WPF clients, but in your case, where all traffic is local to your domain, it makes more sense to stick with your current implementation.
The binding choices you have made for your services / clients sound OK to me. I would focus on why your channels are faulting and address those issues. You may also want to consider hosting your services via IIS and using WAS to expose your non-HTTP endpoints. I have had much success with this in the past, and for the most part it has been pretty stable. It also takes away a few of the headaches of managing your own host. If you are concerned about the TCP binding faults, then just create a new HTTP or wsHttp endpoint and use that instead. This will provide exactly the same transport the Web API uses without having to change your programming model.
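For the "add another endpoint" option, a minimal self-hosting sketch (service and contract names are illustrative): the same ServiceHost can expose the existing named-pipe endpoint alongside a wsHttp endpoint without changing the service implementation.

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IDataService
    {
        [OperationContract]
        string GetStatus();
    }

    public class DataService : IDataService
    {
        public string GetStatus() { return "OK"; }
    }

    class Program
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(DataService));

            // Existing local endpoint.
            host.AddServiceEndpoint(typeof(IDataService),
                new NetNamedPipeBinding(), "net.pipe://localhost/dataservice");

            // Additional HTTP endpoint for clients that keep faulting over TCP.
            host.AddServiceEndpoint(typeof(IDataService),
                new WSHttpBinding(), "http://localhost:8733/dataservice");

            host.Open();
            Console.WriteLine("Service running. Press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }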

How can I retrieve updated records in real-time? (push notifications?)

I'm trying to create a Ruby on Rails e-commerce application, where potential customers will be able to place an order and the store owner will be able to receive the order in real-time.
The finalized order will be recorded in the database (at the moment SQLite), and the store owner will have a browser window open, where new orders will appear just after the order is finalized.
(Application info: I'm using the HOBO rails framework, and planning to host the app in Heroku)
I'm now considering the best technology to implement this, as the application is expected to have a lot of users sending in a lot of orders:
1) Each browser window refreshes the page every X minutes, polling the server continuously for new records (new orders). Of course, this puts a heavy load on the server.
2) As above, but poll the server with some kind of AJAX framework.
3) Use some kind of server push technology, like 'comet' asynchronous messaging. I found Juggernaut; the only problem is that it uses Flash and custom ports, which could be a problem as my app should be accessible behind corporate firewalls and NAT.
4) I'm also looking at the Node.js framework, which seems to be efficient for this kind of asynchronous messaging, though it is not supported on Heroku.
Which is the most efficient way to implement this kind of functionality? Is there perhaps another method that I have not thought of?
Thank you for your time and help!
Node.js would probably be a nice fit - it's fast, loves realtime and has great comet support. The only downside is that you are introducing another technology into your solution. It's pretty fun to program in, though, and a lot of the libraries have been inspired by Rails and Sinatra.
I know Heroku has been running a Node.js beta for a while and people were using it as part of the recent Node Knockout competition. See this blog post. If that's not an option, you could definitely host it elsewhere. If you host it at Heroku, you might be able to proxy requests. Otherwise, you could happily run it off a subdomain so you can share cookies.
Also check out Socket.io. It does a great job of choosing the best way to do comet based on the browser's capabilities.
To share data between Node and Rails, you could share cookies and then store the session data in your database where both applications can get to it. A more involved architecture might involve using Redis to publish messages between them. Or you might be able to get away with passing everything you need in the HTTP requests.
In HTTP, requests can only come from the client. Thus the best options are what you already mentioned (polling and HTTP streaming).
Polling is the easier option to implement; it will use quite a bit of bandwidth though. That's why you should keep the requests and responses as small as possible, so you should definitely use XHR (Ajax) for this.
Your other option is HTTP streaming (Comet); it will require more work on the set up, but you might find it worth the effort. You can give Realtime on Rails a shot. For more information and tips on how to reduce bandwidth usage, see:
http://ajaxpatterns.org/Periodic_Refresh
http://ajaxpatterns.org/HTTP_Streaming
Actually, if you have your storeowner run Chrome (other browsers will follow soon), you can use WebSockets (just for the storeowner's notification though), which allows you to have a constant connection open, and you can send data to the browser without the browser requesting anything.
There are a few WebSocket libraries for Node.js, but I believe you could do it easily yourself using just a regular TCP connection.

What is the standard method for a website to communicate with a win32 executable?

I have some delphi code which, given a list of items, calculates the total price taking into account any special deals that might apply.
This code is non-trivial to rewrite in another language.
How should I set it up to communicate with a website running on the same server? The website will need to ask it for a price every time the user updates their shopping cart. It's possible that there will be multiple concurrent requests.
The delphi code needs to maintain an in-memory list of special deals, periodically refreshed from a database. So it cannot simply be executed every time or anything as simple as that.
I don't know what the website is written in, or even which http server it runs under, so I'm just looking for ideas or standard methods.
It sounds like the win32 app is already running as a Windows Service on the box. So, if you can't modify that service, you are going to have to deal with whatever way it wants to accept and respond to requests. This could be through sockets or some higher level communication protocol like web services.
You could do a couple of things. Write an assembly that knows how to communicate with the service and have your website use that assembly. Or you could build a shim service that knows how to communicate with the legacy service but exposes communication over higher-level protocols such as web services. Either way will have the benefit of hiding the concurrency, threading and communication issues behind an easy-to-call interface, but the latter will make communicating with the service easier for everyone going forward.
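A hedged sketch of the "assembly that knows how to communicate with the service" idea (the host, port and message format are assumptions, since the legacy service's actual protocol isn't known): a thin .NET wrapper the website can call directly.

    using System.Net.Sockets;
    using System.Text;

    public class PricingServiceClient
    {
        private readonly string _host;
        private readonly int _port;

        public PricingServiceClient(string host = "localhost", int port = 9000)
        {
            _host = host;
            _port = port;
        }

        // Sends the shopping cart to the legacy pricing service and returns its reply.
        public string GetTotalPrice(string cartXml)
        {
            using (var client = new TcpClient(_host, _port))
            using (var stream = client.GetStream())
            {
                var request = Encoding.UTF8.GetBytes(cartXml);
                stream.Write(request, 0, request.Length);

                var buffer = new byte[8192];
                var read = stream.Read(buffer, 0, buffer.Length);
                return Encoding.UTF8.GetString(buffer, 0, read);
            }
        }
    }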
If you can modify the delphi app to take an XML request and respond with an XML answer over a TCP socket (ideally using the HTTP protocol), you will be able to make it interoperate with most web server frameworks relatively easily. But the exact details of how to make that integration happen will depend on the language/framework it was written in.
If the web server is on windows you can compile your delphi app as a DLL that can return XML or HTML, taking parameters as part of the URL or a POST operation. Some details on making a Delphi DLL for web servers are here.
It doesn't matter what web server or OS the existing system is running under. What matters is what you want YOUR code to run under. If it is Windows, then the easiest solution would be to use WebBroker and write a custom ISAPI application, or use SOAP to expose web services. The first method could be used if you wanted to write a REST-like API, for instance; the second if your web application has the ability to consume web services.
Another option, if you are running both on the same box under IIS, is to create a COM/Automation object which you then invoke via server side scripting (ASP). If the application is an ASP.NET application, then I would use PRISM to port your code into an assembly.
I have done this with a quite complicated workers' compensation calculator. I created a Windows service using the RemObjects SDK. The calculations are exposed as a SOAP method so they can be accessed by nearly anything.
It's not necessary to use RemObjects in the service, but it makes it much easier as it handles a lot of the underlying plumbing. The clients don't need RemObjects; they just need to be able to call SOAP methods. Nearly any programming language can do that.
You could also create an ISAPI DLL for IIS that exposes a SOAP interface. This would be useful if other websites on different servers needed access to the methods. However, I have handled this in my case by opening a port in the firewall to access my Windows service.
There is a lot of examples on the web. A couple of places to start reading are About.Com and Dr Bob.
Turn this app into a Windows Service. Write a Web Service that will communicate with your Windows Service. You should spend some time designing your Web Service, because it is going to be your consistent interface, shielding the old Delphi app. So in the future, whenever you want to write a web app, mobile app, or whatever you imagine, you will have one consistent interface: an XML Web Service.
A popular way to integrate a web application with background services is a message broker.
The message flow would be:
the web application sends a "calculation request" message to a message destination on the message broker, which contains all needed parameters and also a correlation id to match the calculation request with the response from the Delphi service
one (or, in a high-availability / load-balanced environment, more) Delphi service handles the messages: it pulls the next incoming message, processes it by feeding the parameters to the calculation engine, and sends a "calculation result" message back to the web server
the web server can either synchronously wait for the response (discarding responses with no matching correlation id) and build the result HTML document, or continue with other tasks and asynchronously receive the calculation result in a separate thread, for example in an Ajax-based web application (a sketch of the web side follows below)
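A hedged sketch of the web-application side of this flow, assuming a RabbitMQ broker (the answer doesn't name one) and JSON payloads; queue names and the synchronous wait are illustrative:

    using System;
    using System.Collections.Concurrent;
    using System.Linq;
    using System.Text;
    using RabbitMQ.Client;
    using RabbitMQ.Client.Events;

    public class PriceCalculationClient
    {
        // Sends a calculation request and blocks until the matching response arrives.
        public string RequestCalculation(string cartJson)
        {
            var factory = new ConnectionFactory { HostName = "localhost" };
            using (var connection = factory.CreateConnection())
            using (var channel = connection.CreateModel())
            {
                // Temporary reply queue plus a correlation id, so this request
                // can be matched with its response.
                var replyQueue = channel.QueueDeclare().QueueName;
                var correlationId = Guid.NewGuid().ToString();

                var props = channel.CreateBasicProperties();
                props.CorrelationId = correlationId;
                props.ReplyTo = replyQueue;

                channel.BasicPublish("", "calculation-requests", props,
                                     Encoding.UTF8.GetBytes(cartJson));

                var responses = new BlockingCollection<string>();
                var consumer = new EventingBasicConsumer(channel);
                consumer.Received += (sender, ea) =>
                {
                    // Ignore responses that belong to other requests.
                    if (ea.BasicProperties.CorrelationId == correlationId)
                    {
                        responses.Add(Encoding.UTF8.GetString(ea.Body.ToArray()));
                    }
                };
                channel.BasicConsume(replyQueue, true, consumer);

                return responses.Take();
            }
        }
    }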
For an introduction, see this slideshow about the Dopplr image service:
http://de.slideshare.net/carsonified/dopplr-its-made-of-messages-matt-biddulph-presentation
If you can make it a service (but not a library), you have to do inter-process communication somehow - there are a few ways to do this on Windows:
Sockets directly which is hardest since you have to do marshalling/auth yourself
Shared Memory (yuck!)
RPC which works great but isn't trivial
DCOM which is easier but a pain to configure
WCF - but can you call it from your Windows Service written in Delphi?
