I have implemented SignalR support for a web application. It works great. The problem I'm dealing with now is making it work in a non-Azure web farm environment. SignalR supports Windows Azure Service Bus and Redis out of the box, and there is also a RabbitMQ implementation on GitHub. All of these solutions implement the IMessageBus interface.
Based on our current situation we can't use Redis or RabbitMQ, so I have a few questions:
1) Is there any alternative solution that uses SQL Server or MSMQ?
2) Is it difficult (or even possible) to implement your own solution for SQL Server or MSMQ? David's post on SignalR 0.5 (http://weblogs.asp.net/davidfowler/archive/2012/05/02/signalr-0-5.aspx) says they are going to support SQL Server QNS or Service Broker (not the SQL Server database itself), so maybe this is the wrong approach altogether?
3) Is there a way to work around this until that support is implemented? For example, it sounds like we need to share the state of the connection list between the servers. If we know the number of nodes and their IPs, we could exchange this information between the servers via web service calls instead. Does that make any sense?
Damian Edwards appears to have just started working on the SQL scaleout implementation. You can find the details of that implementation here on GitHub and the issue tracking this work can be followed here.
I am working on a chat service with some unique features, and I'm thinking about a server to dispatch messages and handle all the IM-related work. The first-priority client is going to be iOS, built with Swift.
Is it feasible to build the server on Node.js with Express, or maybe LoopBack? I have looked at multiple options, including ready-made solutions such as QuickBlox and Parse.
As for building it from scratch, I'm considering Node.js or Erlang.
At what stage should I make this decision so that I don't waste too much time later reconfiguring everything for scale, while still keeping development fast and convenient?
With technologies like Socket.io, Node.js, and Express, you could make a chat application fairly quickly.
Sockets are typically the best solution and the most common route to implementing a chat system, as they provide two-way communication between the client and the server.
You could use practically any backend for a socket server, but it may end up being quicker to use Node.js and socket.io, depending on your comfort level with JavaScript.
All you would need is a socket-compatible server and a client-side library that connects to it - there are plenty of JavaScript libs out there, including socket.io-client.
Check out socket.io's chat demo on their site for a quick look at how it works:
http://socket.io/demos/chat/
They even provide a first party iOS Swift client:
https://github.com/socketio/socket.io-client-swift
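For a sense of how small the server side can be, here is a minimal sketch of a socket.io chat relay in TypeScript. The "chat message" event name simply mirrors the demo above and is not part of any fixed API; ports and file layout are assumptions.

    // Minimal socket.io chat relay (sketch; event names are illustrative).
    import express from "express";
    import { createServer } from "http";
    import { Server } from "socket.io";

    const app = express();
    const httpServer = createServer(app);
    const io = new Server(httpServer);

    io.on("connection", (socket) => {
      // Relay each incoming chat message to every connected client.
      socket.on("chat message", (msg: string) => {
        io.emit("chat message", msg);
      });
    });

    httpServer.listen(3000, () => console.log("chat server listening on :3000"));

The Swift client from the repository above connects to the same kind of server and listens for the same events, so the iOS app and a web page can share one chat backend.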
Personally I recommend you check out SailsJS, a great framework for building an API and a chat server at the same time. It uses socket.io internally, so every route in a Sails app is also reachable over socket.io (in other words, you can make any API request via a socket whenever you wish!).
I've built a complete, working iOS app with a chat feature. Its backend was developed entirely with SailsJS, which saved me hundreds of hours. The Sails documentation also covers scaling for production. Please have a look at http://sailsjs.org
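To give a rough idea of what that looks like, here is a hedged sketch of a Sails controller that joins a socket to a room and broadcasts to it via sails.sockets. The controller name, room name, and event name are made up for illustration; the `declare` line simply stands in for the `sails` global that Sails provides at runtime.

    // api/controllers/ChatController.ts - illustrative sketch only.
    declare const sails: any; // provided globally by Sails at runtime

    module.exports = {
      // Called over a socket: subscribe the requesting socket to the "lobby" room.
      join(req: any, res: any) {
        if (!req.isSocket) {
          return res.badRequest("This endpoint only accepts socket requests.");
        }
        sails.sockets.join(req, "lobby", (err: Error) => {
          if (err) return res.serverError(err);
          return res.json({ joined: "lobby" });
        });
      },

      // Called over HTTP or a socket: push a message to everyone in the room.
      post(req: any, res: any) {
        sails.sockets.broadcast("lobby", "message", { text: req.param("text") });
        return res.ok();
      },
    };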
Maybe a silly question :)
If we use separate physical servers for the application and the database, then when using the Traversal framework, which of the servers actually runs the queries (the DB server or the application server)?
Neo4j traversals run on the Neo4j database server, if you are using the server version of Neo4j.
Disclaimer: I'm building a similar system right now, so my view is probably biased.
My Node.js application server serves an Angular app from the /public folder. The client application talks only to the application server.
Generally it works like this:
Client sends a request to the server; the client-side call returns a promise.
Server queries the DB and performs any necessary manipulation of the data.
Server sends its response, which resolves the client's promise.
Client interprets the response.
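As a rough sketch of that flow, assuming Express on the server and a faked database call so the example stays self-contained (the /api/items route is made up):

    // Server side: serve the Angular app and expose a JSON endpoint (sketch).
    import express from "express";

    const app = express();
    app.use(express.static("public")); // the Angular app lives in /public

    // Stand-in for a real database query (hypothetical).
    async function queryItems(): Promise<{ id: number; name: string }[]> {
      return [{ id: 1, name: "example" }];
    }

    app.get("/api/items", async (_req, res) => {
      try {
        const rows = await queryItems(); // server queries the DB
        res.json(rows);                  // the response resolves the client's promise
      } catch (err) {
        res.status(500).json({ error: "query failed" });
      }
    });

    app.listen(3000, () => console.log("app server on :3000"));

On the client, the Angular service's HTTP call is what actually produces the promise the UI waits on.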
Hope that's helpful.
I've downloaded the latest SignalR.Redis package (v0.1) and compiled the latest Redis source code (2.4.26).
I tried running Redis on my local PC (the server and client work well), but when I start SignalR with Redis as the message broadcaster, it seems that SignalR opens multiple connections to the server (the same server, localhost, but multiple port numbers).
I know the Redis integration in SignalR is new and perhaps buggy, but is it possible to run Redis + SignalR on a local machine, or is that not a supported scenario?
Thanks.
SignalR will attempt to make a variety of connections to the server in order to keep an open connection. For most browsers it ends up long polling the server (which results in multiple requests regardless). What I ended up doing was allowing SignalR to connect in the normal fashion to my MVC app and then calling actions on my controllers, which in turn communicated with Redis. This gives me the added benefit of being able to perform business logic in between. Not sure I answered your question, but I just wanted to share how it's worked for me.
We currently have a .NET 4 application that consists of a Windows service running in the background and local or remote clients (usually only 1-3).
The clients have a WPF GUI and need some data from the Windows service. Therefore, we use WCF with the NamedPipe binding for the local client and the NetTcp binding for remote clients. This works, but we often have problems with endpoints that are not reachable (channel faulted, not found, etc.). We already try to rebuild faulted connections, but it all seems pretty fragile...
Now enter Web API: an HTTP-based stack looks like it might be more robust (no channels, no endpoints, and it can be self-hosted in a Windows service as well). Broken channels stop being an issue because each request is handled individually: if something fails, you just repeat the request. (And we have experience with ASP.NET MVC from other apps, so this is not new to us.)
Now we are wondering what our best bet is. Is it better to "harden" our existing WCF service (one service interface with about 15 operations) or to move the interface to Web API and run it over HTTP requests (with JSON data)? Performance is not our main concern here...
Any ideas?
Hartmut
I recommend you stick with WCF (SOAP) services for your WPF application rather than moving to Web API. There are a number of reasons for this. First, consider what the new Web API is trying to address - namely, to provide a framework for building RESTful/HTTP/hypermedia services. That is a good fit for applications that make heavy use of HTTP, such as web, mobile and JavaScript clients, where you want to maximise the "reach" or interoperability of your services (irrespective of platform). This is not to say that you can't use it for WPF clients, but in your case, where all traffic stays within your own domain, it makes more sense to stick with your current implementation.
The binding choices you have made for your services and clients sound fine to me. I would focus on why your channels are faulting and address those issues. You may also want to consider hosting your services in IIS and using WAS to expose your non-HTTP endpoints. I have had much success with this in the past, and for the most part it has been pretty stable. It also takes away a few of the headaches of managing your own host. If you are concerned about the TCP binding faults, just create a new HTTP or wsHttp endpoint and use that instead. This gives you exactly the same transport Web API uses without having to change your programming model.
I have an existing complex website built using ASP.NET MVC, including a database backend, data layer, as well as the Web UI layer. Rebuilding this website in another language is not a feasible option.
There are some UI elements on some views (client side) that would benefit from live interactivity, involving both push and pull, so rather than implementing some kind of custom long-polling or WebSocket server in ASP.NET, I am looking to leverage Node.js for Windows and Socket.IO.
My problem is that I need two-way communication between the two applications. Each user should only be able to receive data once they are authorised on the ASP.NET website, so I first need communication for that. Secondly, once certain events occur on the ASP.NET website, I want to push that data to the Node server immediately, to be broadcast to specific users or groups of users. Thirdly, I would like any data sent to the Node.js server to be pushed on to the ASP.NET website for processing, as that is where all our business logic lives. The sole reason for adding Node.js is to be able to push data directly to the client; I want to build as little business logic into it as possible.
I would like to know the fastest method of two-way push communication between Node.js and ASP.NET. The only good option I'm aware of so far is to create a special listener on a specific port on the Node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method? I also know that you could use a database in between, but surely that would need to be polled and would be less efficient? Both applications will be running on the same machine, under one Visual Studio project.
Many thanks for any help you can provide.
I'm not an ASP.NET expert, but I think there are multiple ways you can achieve this:
1) As you said, you could make Node listen on a specific port for data and then react based on the data it receives (TCP).
2) You can make POST requests to Node.js (HTTP) and also send an auth key in the process to be extra secure; as in 1), Node would react to the data you send (see the sketch after this list).
3) Use something like Redis for pub/sub: publish messages from ASP.NET (pub) and receive them on the Node.js side (sub). This is even better if you want to scale your app across multiple machines, etc.
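Here is a minimal sketch of option 2, assuming a shared-secret header and socket.io for the final push to browsers. The /push route, the x-push-secret header name, and the payload shape are all made up for illustration.

    // Node side of option 2: accept authenticated POSTs from ASP.NET and
    // forward the payload to connected browsers via socket.io (sketch).
    import express from "express";
    import { createServer } from "http";
    import { Server } from "socket.io";

    const SHARED_SECRET = process.env.PUSH_SECRET ?? "change-me";

    const app = express();
    app.use(express.json());

    const httpServer = createServer(app);
    const io = new Server(httpServer);

    app.post("/push", (req, res) => {
      if (req.get("x-push-secret") !== SHARED_SECRET) {
        return res.status(403).end(); // reject callers other than the ASP.NET app
      }
      const { room, event, data } = req.body;
      io.to(room).emit(event, data);    // push to the targeted group of users
      res.status(204).end();
    });

    httpServer.listen(3001);

The ASP.NET side then only needs an ordinary HTTP client call whenever one of its events should reach the browsers.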
"The only good option I'm aware of so far is to create a special listener on a specific port on the node.js server and connect to that, but I was wondering if there's a more elegant or more efficient method?"
You can look at the Redis pub/sub model, where the ASP.NET MVC application and Node.js communicate over separate channels in order to achieve full-duplex communication. You could also try CouchDB change notifications.
"I also know that you could use a database inbetween but surely this would need to be polled and would be less efficient?"
The former techniques do not require you to poll for changes; instead, they notify you when a change happens or a channel message arrives.
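For example, the Node.js side of the Redis route could look roughly like this, using the node-redis v4 client. The "aspnet-events" channel name is just an example; the ASP.NET side would publish to the same channel.

    // Node.js subscriber sketch for the Redis pub/sub approach.
    import { createClient } from "redis";

    async function main() {
      const subscriber = createClient({ url: "redis://localhost:6379" });
      await subscriber.connect();

      // Each message published by the ASP.NET side arrives here without polling.
      await subscriber.subscribe("aspnet-events", (message) => {
        const event = JSON.parse(message);
        console.log("received from ASP.NET:", event);
        // ...push to connected browsers via socket.io, etc.
      });
    }

    main().catch(console.error);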