Grails 3, how to send/receive messages (not topics!) via web sockets

Does anyone know of an example with a really simple onConnect/onMessage/onClose structure for use with grails-spring-websocket?
The first problem when trying to implement web sockets in Grails 3 is which plugin library to use. It needs to be one that will stay supported in Grails so it keeps getting new versions, and one that has lots of users, examples and/or documentation.
This article has exactly what I need - the ability for users to connect, to store the list of connected users in a collection, for clients to send messages to the server, and to have the server send async "replies" back to one or a small subset of the connected users. But it uses an obscure socket implementation (javax.websocket:javax.websocket-api:1.1) with Grails.
This one: https://plugins.grails.org/plugin/zyro/grails-spring-websocket seems to be more popular, but all the examples I can find only cover publish/subscribe via topics and don't offer any way to address an individual client. While I could create a topic per client, this is far more complicated than it should be. Also, the configuration looks arcane and overly complex; ideally there should be none.
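For reference, the client side of those topic-based examples looks roughly like this (a sketch only; the /stomp endpoint and the /topic/hello and /app/hello destinations are taken from typical plugin samples and may differ):

var socket = new SockJS("/stomp");
var client = Stomp.over(socket);

client.connect({}, function () {
    // every browser subscribed to "/topic/hello" receives every broadcast;
    // there is no obvious handle on one individual client connection
    client.subscribe("/topic/hello", function (message) {
        console.log(message.body);
    });
    // messages sent to "/app/hello" are routed to a message-mapped action on the server
    client.send("/app/hello", {}, JSON.stringify("hello from the browser"));
});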
Someone suggested using the wschat plugin, as a tic-tac-toe game was done with it, but really I just need a simple socket implementation with onConnect, onMessage and onClose callbacks - no publish/subscribe, no topics, etc.
I implemented this in 5 minutes in node.js using the "ws" package, which is really simple, e.g.:
wss.on("connection", myFunction(ws) {..}
ws.on('message', function(message) {
messageHandler(message, ws)
})
ws.on('error', function(er) {
console.log(er)
})
ws.on('close', function() {
console.log('Connection closed')
})
Is anything like this available in Grails with an official web socket plugin, or should I stick to Node for websockets? No security or other features are required (security is handled by passing user/pass down the secure socket in a "login" message - more or less the same as a web login).
Update
No luck yet finding a way to handle simple websocket events and to send web socket messages back to users. This plugin looked promising, but it only handles topics/queues. The article mentioned earlier, which uses javax.websocket-api, is ugly but more or less what we need; unfortunately, as it is standalone code, you can't access the database or services from the handlers, and services can't send messages.

Related

How to update a web page from requests made by another client (in rails)?

Here is my need:
I have to display some information on a web page.
The web browser is actually on the same machine (localhost).
I want the data to be updated dynamically by the server initiative.
Since HTTP is a request/response protocol, I know that to get this functionality, the connection between the server and the client (which is local here) should be kept open in some way (WebSocket, Server-Sent Events, etc.).
Yes, "realtime" is really a fashion trend nowadays and there are many frameworks out there to do this (Meteor, etc.).
And indeed, it seems that Rails supports this functionality too, in addition to using WebSockets (Server-Sent Events in Rails 4 and ActionCable in Rails 5).
So achieving this functionality would not be a big deal, I guess...
Nevertheless, what I really want is to trigger an update of the web page (displayed here locally) from a request made by another client.
This picture will explain that better:
At the beginning, the browser connects to the (local) server (green arrows).
I guess that a thread is executed where all the session data (instance variables) are stored.
In order to use some "realtime" mechanisms, the connection remains open and therefore the thread Y is not terminated. (I guess this is how it works)
A second user connects (blue arrows) to the server (it could be the same web page or not) and performs some action (e.g. posting a form).
Here the response to that external client does not matter. Just an HTTP OK response is fine. But a confirmation web page could also be returned.
In any case, thread X (and/or its connection) has no particular reason to be kept open.
Ok, here is my question (BTW thank you for reading me thus far).
How can I echo this new data on the local web browser?
I see two different ways to do this:
Path A: Before terminating, thread X passes the data (its instance variables) to thread Y, which still has its connection open. Thus the server is able to update the web browser.
Path B: Before terminating, thread X sends a request (I mean a response, since it is the server) directly to the web browser using a particular socket.
Which mechanisms should I use in either path to achieve this functionality?
For path A, how can I exchange data between threads?
For path B, how can I use an already opened socket?
But which of these two paths (or another one) is actually the best way to do that?
Again thank you for reading me thus far, and sorry for my bad English.
I hope I've been clear enough to expose my need.
You are overthinking this. There is no need to think of such low-level mechanisms as threads and sockets. Most (all?) pub-sub live-update tools (ActionCable, faye, etc.) operate in terms of "channels" and "events".
So, your flow will look like this:
Client A (web browser) makes a request to your server and subscribes to events from channel "client-a-events" (or something).
Client B (the other browser) makes a request to your server with instructions to post an event to channel "client-a-events".
Pub-sub library does its magic.
Client A gets an update and updates the UI accordingly.
Check out this intro guide: Action Cable Overview.
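A rough client-side sketch of that flow with Action Cable (Rails 5); the channel name "ClientAEventsChannel", the data fields and the element id are made up for illustration and would need a matching channel class on the server (App.cable is the consumer the default cable.js defines):

// Client A subscribes to its own channel when the page loads
App.clientAEvents = App.cable.subscriptions.create("ClientAEventsChannel", {
    received: function (data) {
        // called whenever Client B's request triggers a broadcast to "client-a-events"
        document.getElementById("events").insertAdjacentHTML(
            "beforeend", "<li>" + data.message + "</li>");
    }
});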

How to dynamically and efficiently pull information from database (notifications) in Rails

I am working on a Rails application, and below is the scenario requiring a solution.
I'm running some time-consuming processes in the background using Sidekiq and saving the related information in the database. Now, when each of the processes completes, we would like to show a notification in a separate area saying that the process has finished.
So, the notifications area really needs to pull things from the back end (this notification area will be available on every page) and show them dynamically. So, I thought Ajax must be an option. But I don't know how to trigger it for a particular area only. Or is there any other option by which the client can fetch dynamic content from the server efficiently without creating much traffic?
I know it would be a broad topic to say about. But any relevant info would be greatly appreciated. Thanks :)
You're looking at a perpetual connection (either using SSEs or WebSockets), something Rails has started to look at with ActionController::Live.
Live
You're looking for "live" connectivity:
"Live" functionality works by keeping a connection open
between your app and the server. Rails is an HTTP request-based
framework, meaning it only sends responses to requests. The way to
send live data is to keep the response open (using a perpetual connection), which allows you to send updated data to your page on its
own timescale
The way to do this is to use a front-end method to keep the connection "live", and a back-end stack to serve the updates. The front end will need either SSEs or a WebSocket, which you'll connect to with JavaScript.
SSEs and WebSockets basically give you access to the server outside the scope of "normal" requests (SSEs use the text/event-stream MIME type).
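A minimal front-end sketch of the SSE variant, assuming an ActionController::Live action streaming text/event-stream at /notifications/stream (the path and the event name are made up):

var source = new EventSource("/notifications/stream");

// fires for events named "job_finished" in the stream
source.addEventListener("job_finished", function (event) {
    var payload = JSON.parse(event.data);
    console.log("background job finished:", payload);
});

source.onerror = function () {
    // the browser reconnects automatically after an error
    console.log("SSE connection lost, retrying...");
};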
Recommendation
We use a service called Pusher.
This basically provides a third-party websocket service, to which you can push updates. Once the service receives the updates, it sends them to any clients subscribed to the relevant channels. You can split the channels it broadcasts to using the pub/sub pattern.
I'd recommend using this service directly (they have a Rails gem and a super simple API; I'm not affiliated with them).
Other than that, you should look at the ActionController::Live functionality of Rails
The answer suggested in the comment by #h0lyalg0rithm is one option to go with.
However, more primitive options are:
Use setInterval in JavaScript to perform a task every x seconds, i.e. polling.
Use jQuery or native Ajax to poll a controller action via a route, and have the controller return the data as JSON.
Use document.getElementById or jQuery to update the data on the page (see the sketch below).
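A minimal sketch of that polling approach, assuming a route that exposes pending notifications as JSON at /notifications.json (the path, field names and element id are illustrative):

setInterval(function () {
    $.getJSON("/notifications.json", function (notifications) {
        var area = document.getElementById("notification-area");
        notifications.forEach(function (n) {
            area.insertAdjacentHTML("beforeend", "<li>" + n.message + "</li>");
        });
    });
}, 5000); // poll every 5 seconds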

Ruby on Rails chat application over port 80 which is hosting-site agnostic (no Flash and WebSockets)

I want to build a chat-like application (i.e. bidirectional message passing to multiple connected clients). I looked at the Faye gem, but it opens a new port apart from port 80.
The big problem is that if the client is behind a firewall, access to all ports except 80 is restricted, and not all hosting sites provide that support.
The ActionController::Live component does not have any mechanism to register clients, so messages cannot be pushed to the registered clients when a specific event occurs.
I'm looking for a solution where the live clients are stored in a collection (an array or something like that), and when any of those clients sends a message, the collection can be iterated and the message written to each of them. All of this must happen only over port 80.
Good question - having implemented something similar, let me explain how it works:
Connections
A "live" web application is not really "live" at all - it's just got a persistent request; meaning it still works exactly the same as a "normal" Rails app, except clients don't close the connection (hence why you're interested in opening another port)
The way you handle the request is where the magic happens. This has as much to do with the client side as it does with Rails (the server side).
Clients
When you connect to a "chat" application, your browser opens a live connection with the server. This will typically be done with either server-sent events (Ajax long polling) or WebSockets.
The way this works is to open the connection through the normal Rails ActionDispatch middleware and then keep it available for live updates.
If you've played with the ActionController::Live functionality, you'll find that it's not a typical controller action. It's actually more like a separate technology (like Resque or Redis) which you call from another controller action, which gives you room to do some interesting things.
Server
The way you'd handle something like this is to separate the "live" functionality from the "normal" Rails app. It's one of the current shortcomings of Rails, in that it's probably better to implement something like Node.js with socket.io to handle the live data (with an endpoint like chat.yourapp.com, as sketched below), whilst using Rails to handle authentication & authorization.
From a server perspective, its job is to handle incoming & outgoing requests - not to handle persistent connections. So I guess you may want to look at ways you could "outsource" the websocket connectivity. Admittedly, my experience is slightly thin in this area, so you may do well searching the net.
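A rough sketch of that Node.js/socket.io side, keeping the connected clients in a collection and relaying messages, as the question asks; the event names and the port are assumptions, and Rails would still handle authentication separately:

var http = require("http").createServer();
var io = require("socket.io")(http);

var clients = {};                       // live sockets, keyed by socket id

io.on("connection", function (socket) {
    clients[socket.id] = socket;

    socket.on("chat message", function (msg) {
        // iterate the collection and write the message to every live client
        Object.keys(clients).forEach(function (id) {
            clients[id].emit("chat message", msg);
        });
    });

    socket.on("disconnect", function () {
        delete clients[socket.id];
    });
});

http.listen(80);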
Solutions
We've had a lot of success using a third-party system called Pusher
This is a web socket system which allows you to open a persistent connection as a client, and integrates with Rails in a similar way to Redis (you can push to it)
This means you can host the "chat" application with Rails (http://yourapp.com/chat), send the messages to your Rails app (http://yourapp.com/chat/send), and handle the incoming chats from pusher (or similar)
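On the browser side that boils down to something like this (the key, the channel and the event name are placeholders):

var pusher = new Pusher("YOUR_APP_KEY");
var channel = pusher.subscribe("chat");

// fired whenever the Rails app pushes a "new-message" event to the "chat" channel
channel.bind("new-message", function (data) {
    console.log(data.user + ": " + data.text);
});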
Maybe you want to use my open source Comet web server (https://github.com/TorstenRobitzki/Sioux). There is a Ruby web chat example. I use this to implement an interactive role-playing map with Rails (http://dungeonpilot.com).

SignalR: calling a specific client from outside the hub

I'm aware of the Chris Fulstow project log4net.signalr; it is a great idea if you want a non-production log, since it logs all messages from all requests. I would like to have something that discriminates log messages by the request that originated them and sends them back to the proper browser.
Here is what I've done in the appender:
public class SignalRHubAppender : AppenderSkeleton
{
    protected override void Append(log4net.Core.LoggingEvent loggingEvent)
    {
        if (HttpContext.Current != null)
        {
            var cookie = HttpContext.Current.Request.Cookies["log-id"];
            if (null != cookie)
            {
                var formattedEvent = RenderLoggingEvent(loggingEvent);
                var context = GlobalHost.ConnectionManager.GetHubContext<Log4NetHub>();
                context.Clients[cookie.Value].onLog(new { Message = formattedEvent, Event = loggingEvent });
            }
        }
    }
}
I'm trying to attach the connection id to a cookie, but this does not work with multiple pages on the same machine because the cookie gets overwritten.
Here is the code I use on the client to attach the event:
//start hubs
$.connection.hub.start()
    .done(function () {
        console.log("hub subsystem running...");
        console.log("hub connection id=" + $.connection.hub.id);
        $.cookie("log-id", $.connection.hub.id);
        log4netHub.listen();
    });
As a result, only the last page connected shows the log messages. I would like to know whether there is some strategy for obtaining the connection id of the browser that originated the current request, if there is one.
Also, I'm interested to know if there is a better design for achieving per-browser logging.
EDIT
I could make a convention-based cookie name (like log-id-someguid), but I wonder if there is something smarter.
BOUNTY
I decided to start a bounty on this question, and I would additionally ask about the architecture, in order to see if my strategy makes sense or not.
My doubt is this: I'm using the hub in a single "direction", from server to client, and I use it to log activity that does not originate from calls to the hub but from other requests (potentially requests raised on other hubs). Is that a correct approach, given that the goal is a browser-visible log4net appender?
The idea for correctly targeting the right browser instance/tab, even when multiple tabs are open on the same SPA, is to differentiate them through the URL. One possible way to implement that is to redirect them on first access from http://foo.com to http://foo.com/hhd83hd8hd8dh3, randomly generated each time. That URL rewriting could be done in other ways too; it's just a way to illustrate the idea. This way the appender will be able to inspect the originating URL, and from the URL, through some mapping you keep server side, you can identify the right SignalR ConnectionId. The implementation details may vary, but the basic idea is this one. By tracking some more info available in the HttpContext since the first connection, you could also put additional strategies in place in order to prevent any hijacking.
About your architecture, I can tell you that this is exactly the way I used it in ElmahR. I have messages originating from outside the notification hub (errors posted from other web apps), and I do a broadcast to all clients connected to that hub (and subscribing certain groups): it works fine.
I'm not an authoritative source, but I also guess that such an architecture is OK, even with multiple hubs, because hubs at the end of the day are just an abstraction over a (single) persistent connection which allows you to group messaging by context. Behind the scenes (I'm simplifying) you have just a persistent connection with messages going back and forth, so whatever hub structure you define on top of it (which is there just to help you organize things), you are still using that one connection, so you cannot do any harm.
SignalR is good at doing 2 things: massive broadcast (Clients), and one-to-one communication (Caller). As long as you do not try to do weird things like keeping server-side references to specific callers, you should be OK, whatever number of hubs, and interactions among them, you have.
These are my conclusions, coming from the field. Maybe you can tweet #dfowler about this question and see if he has (much) more authoritative guidelines.

Ideas for web application with external input and realtime notification

I am to build a web application which will accept different events from external sources and present them quickly to the user for further actions. I want to use Ruby on Rails for the web application. This is an internal development project. I would prefer simple, easy-to-use solutions for rapid development over highly reliable and complex systems.
What it should do
The user has the web application open in his browser. Now a phone call comes in. The phone call is registered by a PBX monitoring daemon, in this case via the Asterisk Manager Interface. The daemon somehow sends the available information (remote extension, local extension, call direction, channel status, start time, end time) to the web application. Next, the user receives a notification about the phone call event. The user can now work with this, for example by entering a summary or by matching the call to a customer profile.
The duration from the first event on the PBX (e.g. the creation of a new channel) to the popup notification in the browser should be short. Given a fast network, I would like it to be within two seconds. The individual pieces of information about an event are created asynchronously. The local extension may be supplied separately from the remote extension. The user can enter a summary before the call has ended. The end time, new status, etc. will show up in the interface as soon as one party has hung up.
The PBX monitor is just one data source. There will be more monitors, like email or a request via a web form. The monitoring daemons will not necessarily run on the same host as the database or web server. I do not imagine the application will serve thousands of logged-in users or concurrent requests soon, but by design, 200 users with maybe about the same number of events per minute should not be a scalability issue.
How should I do it?
I am interested to know how you would design such an application. What technologies would you suggest? How do the daemons communicate their information? When and by whom is the data about an event stored into the main database? How does the user get notified? Should the browser receive a complete dataset on behalf of a daemon or just a short note that new data is available? Which JS library to use and how to create the necessary code on the server side?
In my research I came across a lot of possibilities: message brokers, queue services, some Rails background task solutions, HTTP push services, XMPP and so on. Some products I am going to look into: ActiveMQ, Starling and Workling, Juggernaut and Bosh.
Maybe I am aiming too high? If there is a simpler or easier way, like just using the XML or JSON interface of Rails, I would like to read about that even more.
I hope the text is not too long :)
Thanks.
If you want to skip Java and Flash, perhaps it makes sense to use a technology in the Comet family to do the push from the server to the browser?
http://en.wikipedia.org/wiki/Comet_%28programming%29
For the sake of simplicity, for notifications from daemons to the Web browser, I'd leave Rails in the middle, create a RESTful interface to that Rails application, and have all of the daemons report to it. Then in your daemons you can do something as simple as use curl or libcurl to post the notifications. The Rails app would then be responsible for collecting the incoming notifications from the various sources and reporting them to the browser, either via JavaScript using a Comet solution or via some kind of fatter client implemented using Flash or Java.
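To illustrate the shape of such a notification (shown here with Node's built-in http module rather than curl, purely as a sketch; the /events path and the payload fields are assumptions):

var http = require("http");

var event = JSON.stringify({
    source: "asterisk",
    remote_extension: "1234",
    direction: "inbound",
    start_time: new Date().toISOString()
});

var req = http.request({
    host: "localhost",
    port: 3000,
    path: "/events",
    method: "POST",
    headers: {
        "Content-Type": "application/json",
        "Content-Length": Buffer.byteLength(event)
    }
}, function (res) {
    console.log("web app answered with status " + res.statusCode);
});

req.write(event);
req.end();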
You could approach this in a number of ways, but my only comment would be: push, don't pull. For low latency it's not only quicker, it's more efficient, as your server no longer has to handle n clients polling the db/queue once a second. ActiveMQ is OK, but Starling will probably serve you better if you're not looking for insane levels of persistence.
You'll almost certainly end up using Flash on the client side (Juggernaut uses it last time I checked) or Java. This may be an issue for your clients (if they don't have Flash/Java installed) but for most people it's not an issue; still, a fallback mechanism onto a pull notification system might be prudent to implement.
Perhaps http://goldfishserver.com might be of some use to you. It provides a simple API to allow push notifications to your web pages. In short, when your data updates, send it (some payload data) to the Goldfish servers and your client browsers will be notified, with the same data.
Disclaimer: I am a developer working on goldfish.
The problem
There is an event - either external, or perhaps internal to your app.
Users should be notified.
One solution
I am myself facing this problem. I haven't solved it yet, but this is how I intend to do it. It may help you too:
(A) The app must learn about the event (via an exposed endpoint)
Expose an endpoint by which your app can be notified about external events.
When the endpoint is hit (and after authentication), the users need to be notified.
(B) Notification
You can notify the user directly by changing the DOM on the current web page they are on.
You can notify users by using the Push API (but you need to make sure your target browsers support it).
All of these notification features should be able to be handled via Action Cable: (i) either by updating the DOM to notify you when a phone call comes in, or (ii) via a push notification that pops up in your browser.
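A hedged client-side sketch of (i) and (ii) with Action Cable; "CallsChannel", the field names and the element id are invented for illustration, and App.cable is assumed to be the default consumer:

App.calls = App.cable.subscriptions.create("CallsChannel", {
    received: function (call) {
        // (i) update the DOM with the incoming call details
        document.getElementById("call-popup").textContent =
            "Incoming call from " + call.remote_extension;

        // (ii) optionally show a browser notification as well
        if (window.Notification && Notification.permission === "granted") {
            new Notification("Incoming call", { body: call.remote_extension });
        }
    }
});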
Summary: use Action Cable.
(Also: why use an external service like Pusher, when you have ActionCable at your disposal? Some people say scalability and infrastructure management, but I do not know enough to comment on these issues.)
