How to implement business logic in Faye? (Ruby on Rails)

I have a Rack Faye application running on a Thin server, and I have some logic that runs on every Faye client handshake/subscribe/disconnect.
This logic requires storing data in a DB, doing some calculations, and publishing messages back to some channels.
Where and how should one implement such logic so that the extensions don't block Faye's main thread?

Author of Faye here. It depends which DB you're talking to, but in general you should use a non-blocking database client (i.e. one based on EventMachine's TCP stack). This means the extension will return quickly (assuming you're not waiting on the result of the DB call to affect the incoming/outgoing message), so Faye can continue processing messages while the DB call is in progress.
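As an illustration only, here is a minimal sketch of such an extension, assuming a Redis store accessed through the EventMachine-based em-hiredis gem (any non-blocking client follows the same pattern; the key name and connection URL are made up):

```ruby
# Minimal sketch: a Faye extension that records subscriptions in Redis
# using em-hiredis, a non-blocking (EventMachine-based) client.
require 'faye'
require 'em-hiredis'

class SubscriptionRecorder
  def incoming(message, callback)
    if message['channel'] == '/meta/subscribe'
      # sadd returns a deferrable immediately; we don't wait on it,
      # so the reactor thread is never blocked.
      redis.sadd('faye:subscribers', message['clientId'])
    end
    callback.call(message)
  end

  private

  def redis
    # Lazily open one connection on the EventMachine reactor.
    @redis ||= EM::Hiredis.connect('redis://localhost:6379/0')
  end
end

faye_server = Faye::RackAdapter.new(mount: '/faye', timeout: 25)
faye_server.add_extension(SubscriptionRecorder.new)
run faye_server   # e.g. from config.ru, served by Thin
```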

Related

Can Rails 6 with ActionCable implement a long-polling API?

I need to create an API for my Rails project, and the API's main purpose is to start actions that will run in the background for some time (using ActiveJob).
I need the API client to be able to monitor the background job using long polling: the client makes the request, and if the job is still running, the request waits until some timeout to return the result, or returns earlier in case the job has terminated in the meantime.
If I were to design a full-stack application, I'd use ActionCable to notify the client when the job terminates. But here I don't want to expose any ActionCable websocket API; I want to stick with plain HTTP.
Is it possible to use ActionCable to wait for messages on a channel in a Rails controller (server-side)? Could such a controller hit a connection count limit, especially for PostgreSQL connections (which I won't be using while I'm waiting on Redis for ActionCable messages)?
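Not an authoritative answer, but one possible sketch of the long-polling idea under the constraints described (block on Redis rather than Postgres, sidestepping ActionCable entirely); the `job:<id>:done` list name, the 25-second timeout, and the controller name are all assumptions:

```ruby
# Hypothetical long-polling endpoint: waits on Redis, not Postgres.
class JobStatusesController < ApplicationController
  def show
    job_id = params[:id]

    # Give the ActiveRecord connection back to the pool so we don't
    # hold a Postgres connection while we wait.
    ActiveRecord::Base.connection_pool.release_connection

    # BRPOP blocks until the worker pushes a payload onto the list
    # or the timeout (in seconds) expires, in which case it returns nil.
    _key, payload = Redis.new.brpop("job:#{job_id}:done", timeout: 25)

    if payload
      render json: { state: 'done', result: JSON.parse(payload) }
    else
      render json: { state: 'pending' }, status: :accepted
    end
  end
end
```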

How to dynamically and efficiently pull information (notifications) from the database in Rails

I am working on a Rails application, and below is the scenario requiring a solution.
I'm running some time-consuming processes in the background using Sidekiq and saving the related information in the database. When each process completes, we would like to show a notification in a separate area saying that the process has finished.
So the notifications area really needs to pull things from the back end (this notification area will be available on every page) and show them dynamically. I thought Ajax must be an option, but I don't know how to trigger it for a particular area only. Or is there any other option by which the client can fetch dynamic content from the server efficiently without creating much traffic?
I know this is a broad topic, but any relevant info would be greatly appreciated. Thanks :)
You're looking at a perpetual connection (either using SSEs or websockets), something Rails has started to address with ActionController::Live.
Live
You're looking for "live" connectivity:
"Live" functionality works by keeping a connection open between your app and the server. Rails is an HTTP request-based framework, meaning it only sends responses to requests. The way to send live data is to keep the response open (using a perpetual connection), which allows you to send updated data to your page on its own timescale.
The way to do this is to use a front-end method to keep the connection "live", and a back-end stack to serve the updates. The front end will need either SSEs or a websocket, which you'll connect to using JS.
SSEs and websockets basically give you access to the server outside the scope of "normal" requests (SSE, for instance, uses the text/event-stream MIME type).
Recommendation
We use a service called Pusher.
This basically gives you a third-party websocket service to which you can push updates. Once the service receives an update, it sends it on to any clients connected to the relevant channel; you can split which channels it broadcasts to using the pub/sub pattern.
I'd recommend using this service directly (they have a Rails gem, and I'm not affiliated with them); it also provides a super simple API.
Other than that, you should look at the ActionController::Live functionality in Rails.
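For the Sidekiq scenario in the question, the push could come straight from the worker when it finishes. A rough sketch using the pusher gem's Pusher.trigger call (the worker, channel, and event names are made up):

```ruby
# Hypothetical worker: does the slow work, then notifies the browser
# through Pusher so the notifications area can update itself.
class ReportWorker
  include Sidekiq::Worker

  def perform(user_id)
    generate_report(user_id)   # placeholder for the time-consuming work

    Pusher.trigger("private-user-#{user_id}", 'process-completed',
                   message: 'Your report is ready')
  end
end
```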
The answer suggested in the comment by #h0lyalg0rithm is one option to go with.
However, more primitive options are:
Use setInterval in JavaScript to perform a task every x seconds, i.e. polling.
Use jQuery or native Ajax to poll a controller/action via a route and have the controller return the data as JSON (a server-side sketch follows below).
Use document.getElementById or jQuery to update the data on the page.
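The server side of that polling loop could be as simple as a controller action that returns unread notifications as JSON; a sketch assuming a Notification model with an `unread` scope (both hypothetical):

```ruby
# Hypothetical endpoint hit by setInterval/Ajax every few seconds.
class NotificationsController < ApplicationController
  def index
    notifications = current_user.notifications.unread.order(created_at: :desc)
    render json: notifications.as_json(only: [:id, :message, :created_at])
  end
end
```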

Ruby on Rails chat application over port 80 which is hosting-site agnostic (no Flash and no websockets)

I want to build a chat-like application (i.e. bidirectional message passing to multiple connected clients). I looked at the Faye gem, but it opens a new port apart from port 80.
The big problem is that if the client is behind a firewall, access to all ports except 80 is restricted, and not all hosting sites provide that support.
The ActionController::Live component does not have any mechanism to register clients, so messages cannot be passed to registered clients when a specific event occurs.
I'm looking for a solution where the alive clients are stored in a collection (an array or something like that), and when any of the alive clients sends a message, the collection can be iterated and the message written to each of them. All of this must happen only over port 80.
Good question - having implemented something similar, let me explain how it works:
Connections
A "live" web application is not really "live" at all - it's just got a persistent request; meaning it still works exactly the same as a "normal" Rails app, except clients don't close the connection (hence why you're interested in opening another port)
The way you handle the request is where the magic happens. This is as much to do with the client-side, as it is with Rails (server-side)
Clients
When you connect to a "chat" application, your browser opens a live connection with the server. This will typically be done with either server-sent events (or Ajax long polling) or websockets.
The way this works is to open the connection through the normal Rails ActionDispatch middleware and then keep it open for the client.
If you've played with the ActionController::Live functionality, you'll find that it's not a typical controller action. It's actually a separate technology (like Resque or Redis) which you call from another controller action. This gives you room to do some cool things with it.
Server
The way you'd handle something like this is to separate the "live" functionality from the "normal" Rails app. It's one of the current downfalls of Rails: it's probably better to implement something like Node.js with socket.io to handle the live data (with an endpoint like chat.yourapp.com), whilst using Rails to handle authentication & authorization.
From a server perspective, Rails' job is to handle incoming & outgoing requests, not to maintain persistent connections. So you may want to look at ways to "outsource" the websocket connectivity. Admittedly, my experience is slightly thin in this area, so you may do well searching the net.
Solutions
We've had a lot of success using a third-party system called Pusher.
This is a websocket system which allows you to open a persistent connection as a client, and it integrates with Rails in a similar way to Redis (you can push to it).
This means you can host the "chat" application with Rails (http://yourapp.com/chat), send the messages to your Rails app (http://yourapp.com/chat/send), and handle the incoming chats from Pusher (or similar); a rough sketch of the send endpoint follows.
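A sketch of that send endpoint, assuming the pusher gem (Pusher.trigger is its documented call; the `chat-room` channel, `new-message` event, and params are assumptions):

```ruby
# Hypothetical "send" action: Rails receives the message over plain HTTP
# (port 80), then broadcasts it through Pusher's hosted websocket service,
# so no extra port has to be opened on your own host.
class ChatsController < ApplicationController
  def create
    Pusher.trigger('chat-room', 'new-message',
                   user: current_user.name,
                   body: params.require(:body))
    head :ok
  end
end
```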
Maybe you want to use my open source comet web server (https://github.com/TorstenRobitzki/Sioux). There is a ruby web chat example. I use this to implement an interactive role playing map with rails (http://dungeonpilot.com).

Process job using workers while client waits and return response when complete

I'm building an API using Rails where requests come in and need to be executed by a cluster of workers running on a different server (these workers call remote APIs, parse the data, etc.). I'm going to be using Sidekiq or Resque to handle the queueing/processing.
My issue is that the client needs to wait while this is happening, and the controller needs to return the response to the client once it's complete. How would I handle this in the controller? We're using a Redis backend, so I was thinking of something along the lines of subscribing to a pub/sub channel and waiting for the worker to publish a status message. The controller would wait for a set time period and then return a 'check back later' response to the client if it doesn't receive a message in time. What would be the best way to implement that, or is there a better solution?
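For what it's worth, the approach described above could look roughly like this; a sketch only, where the channel name, the 15-second window, and the use of Ruby's Timeout around the blocking subscribe are all assumptions:

```ruby
require 'redis'
require 'timeout'

# Block until the worker publishes to job:<id>:status, or give up.
def wait_for_worker(job_id, seconds: 15)
  payload = nil
  redis = Redis.new
  Timeout.timeout(seconds) do
    redis.subscribe("job:#{job_id}:status") do |on|
      on.message do |_channel, message|
        payload = message
        redis.unsubscribe   # stop blocking once we have an answer
      end
    end
  end
  payload
rescue Timeout::Error
  nil                       # caller returns a "check back later" response
end
```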
Do not make your clients wait! There are a lot of issues if you make the controller block for a long running job:
Other programs may assume the request timed out (proxies, browsers, scripts, etc.)
It turns your API endpoints into a potential source of denial of service
It requires you to put more engineering work into your web servers (since a Rails process can't handle another web request while it's handling the blocking call)
Part of the reason for using Sidekiq or Resque is to avoid controllers that do heavy lifting during the HTTP request.
Instead, background jobs should report their status to the database. The web server should then query the database and return the latest status to the client (a sketch follows after the list below).
If clients need more immediate feedback, you can:
make clients poll periodically
post a request back to the client (if the API consumer is another web server)
use another protocol or mechanism (e.g. websockets).
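A minimal sketch of that status-reporting pattern, assuming a JobRun record with state and result columns (the model, worker, and `call_remote_api` helper are all hypothetical):

```ruby
# Worker: does the slow work off the request cycle and records the outcome.
class RemoteApiWorker
  include Sidekiq::Worker

  def perform(job_run_id)
    job_run = JobRun.find(job_run_id)
    job_run.update!(state: 'done', result: call_remote_api) # placeholder call
  rescue StandardError => e
    job_run&.update!(state: 'failed', result: e.message)
  end
end

# Controller: clients poll this instead of blocking on the original request.
class JobRunsController < ApplicationController
  def show
    render json: JobRun.find(params[:id]).as_json(only: [:id, :state, :result])
  end
end
```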

Rails 3.1 - Firing a specific event with EventMachine

I would like to use the em-eventsource plugin (https://github.com/AF83/em-eventsource) for server-sent events in a Rails 3.1 project. My problem is that it is only explained how to listen for events and receive messages, not how to fire a specific event and send the message. I would like to produce the event in an ActiveRecord observer. Am I right in thinking that I have to defer an operation with EventMachine to produce this event, or how else can I solve this?
And yes, it has to be Ruby on Rails. If I don't get this working with EventMachine, I'll try to bypass the whole Ruby part with Node.js.
Actually, I worked on this library a little with the maintainer. I think you mixed up the client part with the server part: em-eventsource is a client library which you can use to consume a Server-Sent Events API; it is not meant to fire SSEs.
On the server side, it doesn't really matter whether you are using Rails or any other stack (Node.js, PHP…) as long as the server you are running on supports streaming. The default web server shipped with Rails (WEBrick) does not, but there are many others which do: Thin, Puma, Goliath…
In order to fire SSEs in Rails, you would have to use a streaming-capable server among those cited, and abide by the SSE specification. It mostly comes down to, first, responding with the proper Content-Type header ("text/event-stream") so that the client (browser) knows it should hang on, and then starting to stream on the socket. That latter part is not easily possible as of today in Rails 3 (yet not impossible!); Rails 4 now supports streaming in an easy way, with a clean and simple internal API, so it's definitely coming.
In the meantime, you'd either:
mess with Rack's API in Rails (using EventMachine, I guess; there are some examples in the wild)
or be smart about it and make use of the streaming feature provided by Sinatra, built on top of Rack (see https://gist.github.com/1476463 for an example of a Sinatra app which can be mounted in a Rails one!)
or use an external service such as Pusher
or leverage an entirely different stack…
A good overview: http://blog.phusion.nl/2012/08/03/why-rails-4-live-streaming-is-a-big-deal/
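For completeness, here is roughly what the Rails 4 route looks like: ActionController::Live plus the text/event-stream content type described above (the controller name, event name, and payload are assumptions, not part of the original answer):

```ruby
# Sketch of an SSE endpoint using Rails 4's ActionController::Live.
class EventsController < ApplicationController
  include ActionController::Live

  def index
    response.headers['Content-Type'] = 'text/event-stream'
    sse = SSE.new(response.stream, event: 'tick')

    # A real app would write whenever something happens (e.g. from a
    # Redis subscription); here we just emit three events and close.
    3.times do |i|
      sse.write(count: i)
      sleep 1
    end
  ensure
    sse&.close
  end
end
```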
Maybe I'm wrong, but IIRC Rails can't support long polling. Rails blocks the whole server (or thread, if you have more than one running inside the server) for each request and can't reuse it until the whole response has been sent. That's why you should set up a reverse proxy (like nginx) in front of a Rails application if you suspect there could be many concurrent connections: it buffers slow client requests and only passes them to Rails once the whole request has been received. It's just how Rack works; there's probably not much you can do about it.
