Rails 5 app (Action Cable) as Socket.io server AND client

I am now familiar with Action Cable (Rails 5 functionality) as an emitter or server of websockets. However, I am supposed to consume an API which sends the data over websockets (e.g. 'socket.provider.com?token=12345').
I made some tests with a plain Ruby file using socket.io-client-simple (https://github.com/shokai/ruby-socket.io-client-simple) and it works, but I am not sure how this would work in a deployed Rails app. I'm guessing I need a separate process which listens constantly for events emitted by the API provider. Has anyone done something similar? I am going to use Heroku for deployment.
Note: I think using a client-side approach for receiving the websockets and then posting them to my Rails app (i.e. the JavaScript Socket.IO library) is not an option, since I need to receive AND persist some of the data coming from the events in real time, and not depend on the connectivity of at least one client.
I'm also wondering if there is any way to automatically set Action Cable to act as a 'listener of events' somehow. I haven't read anything on that topic so far, but would love to see some suggestions in that direction.
Update: Here is the Ruby code I'm using so far to connect to the provider's websockets API:
require 'rubygems'
require 'socket.io-client-simple'

socket = SocketIO::Client::Simple.connect 'https://api.provider.com', token: '12345'

socket.on :connect do
  puts "connect!!!"
end

socket.on :disconnect do
  puts "disconnected!!"
end

socket.on :providerevent do |data|
  puts data
end

socket.on :error do |err|
  p err
end

# Keep the process alive so the socket callbacks can fire
loop do
  sleep 100
end

ActionCable can't listen to an external site's events on its own, so you'll have to combine socket.io and ActionCable.
ActionCable can send updates to the channel like this:
ActionCable.server.broadcast "channel_name", param1: your_param1, param2: your_param2
to update the channel when an event occurs. The received action of your channel's CoffeeScript file is where you have to do something with it.
From what I understand, you're looking for something like this in the controller where you would be listening for events:
def listen
  socket.on :connect do
    # persist your data
    # example: @post = Post.create(...)
    ActionCable.server.broadcast "channel_name", post_title: @post.title
  end
end
and in your channel_name.coffee:
received: (data) ->
  console.log(data["post_title"])
  # do something
With this setup, you would be receiving events from the API and broadcasting them to your channel. The page where the channel is set up would be updated each time your socket receives an event.
You should first follow DHH's tutorial, and then you'll probably understand my solution better (it's pretty easy to implement).
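To make this concrete, here is a minimal sketch of that separate listener process, written as a rake task you could run in a Heroku worker dyno. The endpoint, token, event name and the ProviderEvent model are assumptions carried over from the question, so adapt them to the real API:

# lib/tasks/provider.rake (a sketch, not a drop-in solution)
namespace :provider do
  desc 'Bridge the provider websocket API into Action Cable'
  task listen: :environment do
    require 'socket.io-client-simple'

    socket = SocketIO::Client::Simple.connect 'https://api.provider.com', token: '12345'

    socket.on :providerevent do |data|
      # Persist first so nothing is lost, then notify subscribed browsers
      event = ProviderEvent.create!(payload: data) # hypothetical model
      ActionCable.server.broadcast 'provider_channel', payload: event.payload
    end

    loop { sleep 1 } # keep the process alive so the callbacks can fire
  end
end

With a Procfile entry like worker: bundle exec rake provider:listen, the listener runs independently of your web dynos; as long as Action Cable uses the Redis adapter, broadcasts from this process reach subscribers connected through the web dynos.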


Must a server-sent event always be firing regardless of what page a user is on?

I am pretty new to SSE so feel free to let me know if I've misunderstood the purpose and there's a much better way of implementing what I want!
I have a working SSE that, every minute, updates a user's dashboard. The code looks like this:
# SitesController
def dashboard
end

def regular_update
  response.headers['Content-Type'] = 'text/event-stream'
  sse = SSE.new(response.stream, event: 'notice')
  begin
    sse.write(NoticeTask.perform) # custom code returning the JSON
    sleep 60
  rescue ClientDisconnected
  ensure
    sse.close
  end
end
# routes
get "/dashboard(/:id)" => "sites#dashboard"
get "/site_update" => 'sites#regular_update'
# view - /dashboard
var source = new EventSource('/site_update');
source.addEventListener('notice', function(event) {
  var data = JSON.parse(event.data)
  appendNoticeAndAlert(data)
});
This works just fine. When I'm on /dashboard for a user, the right info is being updated regularly by the SSE, great!
However, I notice that if I'm on any random page, like the home page, the SSE is still running in the background. Now, obviously this makes sense, since there's nothing in the code that limits it, but shouldn't there be? Shouldn't there be a way to scope the SSE somehow? Isn't it a huge waste of resources for the SSE to be constantly working in the background, updating the /dashboard page, if the user is never on /dashboard?
Again, new to SSE, if this is fundamentally wrong, please advise as well. Thanks!
In your controller, when handling SSE you're expected to do updates in a loop; ActionController::Live::ClientDisconnected is then raised by response.stream.write once the client is gone:
def regular_update
  response.headers['Content-Type'] = 'text/event-stream'
  sse = SSE.new(response.stream, event: 'notice')
  loop do
    sse.write(NoticeTask.perform) # custom code returning the JSON
    sleep 60
  end
rescue ClientDisconnected
  logger.info "Client is gone"
ensure
  sse.close
end
Your code disconnects the client after the first update and delay, but everything appears to work because EventSource automatically reconnects (so you're effectively getting long-polling updates).
On the client, the EventSource should be close()d once it is not needed. Usually this happens automatically upon navigating away from the page containing it, so:
- make sure the EventSource javascript is only on the dashboard page, not in the javascript bundle (or is in the bundle, but only enabled on that specific page)
- if you're using Turbolinks, you have to close() the connection manually; as a quick solution, try adding <meta name="turbolinks-visit-control" content="reload"> to the page header, or disable Turbolinks temporarily
Also, think again about whether you actually need SSE for this specific task: for plain periodic updates you can just poll a JSON action from client-side code that renders the same data. That makes the controller simpler, doesn't keep a connection busy for each client, has wider server compatibility, etc.
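For illustration, the polling variant can be as small as this (a sketch reusing NoticeTask from the question; the action and route names are made up):

# GET /site_update_poll - same data as the SSE, but plain JSON and no held-open connection
def site_update_poll
  render json: NoticeTask.perform
end

The client then fetches this URL from a setTimeout loop instead of holding an EventSource open.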
For SSE to be justified, at least check whether something has actually changed and skip the message if there's nothing new. A better way is to use some kind of pub/sub (like Redis' SUBSCRIBE/PUBLISH, or Postgres' LISTEN/NOTIFY): emit events to a topic every time something that affects the dashboard changes, subscribe on SSE connect, and so on (you may also want to throttle updates, depending on your application). Something similar can be implemented with ActionCable (a bit overkill, but it can be handy, since it already has pub/sub integrated).
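Here is a rough sketch of the Redis pub/sub variant, assuming writers PUBLISH to a 'dashboard_updates' topic whenever dashboard data changes (the topic name is made up for illustration; a real version would also clean up the Redis connection):

def regular_update
  response.headers['Content-Type'] = 'text/event-stream'
  sse = SSE.new(response.stream, event: 'notice')
  # Redis#subscribe blocks, so this action writes only when an event arrives
  Redis.new.subscribe('dashboard_updates') do |on|
    on.message do |_channel, message|
      sse.write(message) # fires only when something actually changed
    end
  end
rescue ClientDisconnected
ensure
  sse.close
end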

Send message on hook success - Delayed_job

How can I send a message when the job finishes successfully? I would like to send the message and show it in a swal (SweetAlert) in JavaScript when the work finishes correctly, but I do not know how to do this. Any suggestions?
I do not need to do anything other than send a message
class CompileProjectJob < Struct.new(:url)
  def perform
  end

  def success(job)
    # send message when the work is successful
  end
end
At the end of the perform method, queue a new delayed job that sends the message:
class CompileProjectJob < Struct.new(:url)
  def perform
    # the code of this job here
    # queue the new job
  end
end
The code in the perform method is executed sequentially, like any regular code.
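For example, the queueing line could look like this (SendMessageJob is a hypothetical Struct-based job, defined the same way as CompileProjectJob):

def perform
  # the code of this job here
  Delayed::Job.enqueue SendMessageJob.new(url)
end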
Update
To send the message to the front end there are two approaches (push and pull):
- push: using websockets, you push the message from the backend to the front end
- pull: the front end sends requests at regular intervals to check whether the backend has new data
You can use either of these techniques to solve the problem.
If you use polling, the job updates a data store (Redis or MySQL, for example), and the front end sends a request every interval to check for new data. In some scenarios this is the better solution, but I think you are looking for the other technique.
Pushing:
Here you can use something like Action Cable (https://guides.rubyonrails.org/action_cable_overview.html) or a third party like Pusher (https://www.pusher.com/tutorials/realtime-table-ruby-rails).
The main idea is that your front-end app opens a websocket connection to your server. The socket stays open and listens for updates from the backend through a channel, so when you send the update through this channel after finishing the job, the front end receives it and you can add code to show the message.
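Putting that together with the success hook from the question, a rough sketch of the push approach could look like this (the channel name and payload are made up for illustration):

class CompileProjectJob < Struct.new(:url)
  def perform
    # the code of this job here
  end

  def success(job)
    # Delayed_job calls this hook when perform finishes without errors;
    # broadcast to a channel the front end is subscribed to
    ActionCable.server.broadcast 'compilation_channel', status: 'success', url: url
  end
end

In the channel's client-side received callback you would then show the swal popup.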

ActionCable broadcasts not hitting received JS function

I have been trying to get a Rails app together to replace a nastily coded PHP monstrosity. The current incarnation pulls data from a non-Rails db, puts it into the Rails db and then displays it in the views. The db is mainly populated with temperature readings that are added every few seconds. I can display a static-ish page in Rails with no problems, but trying to add ActionCable/realtime data has proven problematic. Most things seem to be working properly, but when I broadcast to my channel, it does not seem to hit the received function in the channel's .coffee file.
My Setup:
- Server: Passenger (bundle exec passenger start)
- Jobs: Resque
- ActionCable: Redis
Data is imported from the legacy db by a job that grabs the raw SQL and creates new records. After this, another job broadcasts to the channel.
The problems are coming from ActionCable, I think. All examples that I can find require user input to trigger the JS, it seems. However, I am trying to trigger things strictly from the server side. This job:
class DatabroadcastJob < ApplicationJob
  queue_as :default
  self.queue_adapter = :resque

  def perform
    ActionCable.server.broadcast 'dashboard_channel', content: render_thedata
  end

  private

  def render_thedata
    dataArr = [Data1.last, Data2.last, Data3.last]
    ApplicationController.renderer.render(partial: 'dashboard/data_tables', locals: {item: dataArr})
  end
end
It works: I see the broadcast hitting the dashboard_channel. However, nothing in dashboard.coffee gets triggered by the broadcast. This is incredibly confusing.
Dashboard.coffee
App.dashboard = App.cable.subscriptions.create "DashboardChannel",
  connected: ->
    # Called when the subscription is ready for use on the server

  disconnected: ->
    # Called when the subscription has been terminated by the server

  received: (data) ->
    # Called when there's incoming data on the websocket for this channel
    alert data['content']
Nothing happens. The logs show the broadcast, but nothing hits dashboard.coffee and raises an alert in the browser. Am I thinking about this the wrong way because of all of the chat examples? Is there another place where I should grab the broadcast and push it to subscribers when only making server-side changes?
If any other info is needed to address this, please let me know. This issue has been driving me mental for days now.
First, check your frames. Are you sure you're getting the messages you want?
Then, in your channel you should set an ID for your subscriptions. If you have a stream that is related to a model, the broadcasting name can be generated from the model and channel.
class DashboardChannel < ApplicationCable::Channel
  def subscribed
    post = Post.find(params[:id])
    stream_for post
  end
end
Then you can broadcast to your channel like so
DashboardChannel.broadcast_to(@post, @comment)
Otherwise, you should do the following:
class DashboardChannel < ApplicationCable::Channel
  def subscribed
    stream_from 'dashboard_channel'
  end
end
But this is bad practice, because you won't be able to tell which user you're transmitting to.
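For the per-user case, a sketch of what that could look like, assuming current_user is identified on the connection (the payload key is made up for illustration):

class DashboardChannel < ApplicationCable::Channel
  def subscribed
    stream_for current_user # a private stream per user
  end
end

# and from the job, instead of the global channel name:
DashboardChannel.broadcast_to(user, content: rendered_html)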
One thing I would add for troubleshooting and testing the coffee/javascript: console.log is your friend. Adding console.log "First step complete" and so on throughout really helped to track down where the errors were occurring.

What can be the reason for "Unable to find subscription with identifier" in Rails ActionCable?

I'm building a messenger application using Rails 5.0.0.rc1 + ActionCable + Redis.
I have a single channel, ApiChannel, with a number of actions in it. There are some "unicast" actions (ask for something, get something back) and some "broadcast" actions (do something, broadcast the payload to some connected clients).
From time to time I'm getting a RuntimeError exception from here: https://github.com/rails/rails/blob/master/actioncable/lib/action_cable/connection/subscriptions.rb#L70 Unable to find subscription with identifier (...).
What can be the reason for this? In what situations can I get such an exception? I've spent quite a lot of time investigating the issue (and will continue to do so), and any hints would be greatly appreciated!
It looks like it's related to this issue: https://github.com/rails/rails/issues/25381
It's some kind of race condition where Rails replies that the subscription has been created, but in fact it hasn't been yet.
As a temporary solution, adding a small timeout after establishing the subscription has solved the issue.
More investigation needs to be done, though.
The reason for this error might be a difference between the identifiers you subscribe with and message with. I use ActionCable in Rails 5 API mode (with gem 'devise_token_auth') and I faced the same error too:
SUBSCRIBE (ERROR):
{"command":"subscribe","identifier":"{\"channel\":\"UnreadChannel\"}"}
SEND MESSAGE (ERROR):
{"command":"message","identifier":"{\"channel\":\"UnreadChannel\",\"correspondent\":\"client2@example.com\"}","data":"{\"action\":\"process_unread_on_server\"}"}
For some reason ActionCable requires your client instance to apply the same identifier twice - while subscribing and while messaging:
/var/lib/gems/2.3.0/gems/actioncable-5.0.1/lib/action_cable/connection/subscriptions.rb:74
def find(data)
  if subscription = subscriptions[data['identifier']]
    subscription
  else
    raise "Unable to find subscription with identifier: #{data['identifier']}"
  end
end
This is a live example: I implemented a messaging subsystem where users get unread-message notifications in real time. At subscription time I don't really need a correspondent, but at messaging time I do.
So the solution is to move the correspondent from the identifier hash to the data hash:
SEND MESSAGE (CORRECT):
{"command":"message","identifier":"{\"channel\":\"UnreadChannel\"}","data":"{\"correspondent\":\"client2@example.com\",\"action\":\"process_unread_on_server\"}"}
This way the error is gone.
Here's my UnreadChannel code:
class UnreadChannel < ApplicationCable::Channel
  def subscribed
    if current_user
      unread_channel_token = signed_token current_user.email
      stream_from "unread_#{unread_channel_token}_channel"
    else
      # http://api.rubyonrails.org/classes/ActionCable/Channel/Base.html#class-ActionCable::Channel::Base-label-Rejecting+subscription+requests
      reject
    end
  end

  def unsubscribed
    # Any cleanup needed when channel is unsubscribed
  end

  def process_unread_on_server(param_message)
    correspondent = param_message["correspondent"]
    correspondent_user = User.find_by email: correspondent
    if correspondent_user
      unread_channel_token = signed_token correspondent
      ActionCable.server.broadcast "unread_#{unread_channel_token}_channel",
                                   sender_id: current_user.id
    end
  end
end
Helper (you shouldn't expose plain identifiers; encode them the same way Rails encodes plain cookies into signed ones):
def signed_token(string1)
  token = string1
  # http://vesavanska.com/2013/signing-and-encrypting-data-with-tools-built-in-to-rails
  secret_key_base = Rails.application.secrets.secret_key_base
  verifier = ActiveSupport::MessageVerifier.new secret_key_base
  signed_token1 = verifier.generate token
  pos = signed_token1.index('--') + 2
  signed_token1.slice pos..-1
end
To summarize: you must first send the SUBSCRIBE command if you want to send the MESSAGE command later. Both commands must have the same identifier hash (here, "channel"). What is interesting here is that the subscribed hook is not required (!): even without it you can still send messages (after SUBSCRIBE), but nobody would receive them without the subscribed hook.
Another interesting point is that inside the subscribed hook I use this code:
stream_from "unread_#{unread_channel_token}_channel"
and obviously the unread_channel_token could be anything; it applies only to the "receiving" direction.
So the subscription identifier (like \"channel\":\"UnreadChannel\") has to be considered a "password" for future message-sending operations (i.e. it applies only to the "sending" direction): if you want to send a message, (first subscribe, and then) provide the same "pass" again, or you'll get the described error.
And more than that, it's really just a "password": as you can see, you can actually send a message to wherever you want:
ActionCable.server.broadcast "unread_#{unread_channel_token}_channel", sender_id: current_user.id
Weird, right?
This all is pretty complicated. Why is it not described in the official documentation?

Pull/push status in Rails 3

I have a long-running task in the background. How exactly would I pull status from the background task, or would it be better to somehow communicate the task's completion to my front end?
Background:
Basically my app uses a third-party service for processing data, and I don't want this external web service workload to block incoming requests to my website, so I put this call inside a background job (I use Sidekiq). When the task is done, I was thinking of sending a webhook to a certain controller which would notify the front end that the task is complete.
How can I do this? Is there a better solution for this?
Update:
My app is hosted on Heroku.
Update II:
I've done some research on the topic and found out that I can create a separate app on Heroku which will handle this. I found this example:
https://github.com/heroku-examples/ruby-websockets-chat-demo
This long-running task will be run per user, on a website with a lot of traffic. Is this a good idea?
I would implement this using a pub/sub system such as Faye or Pusher. The idea behind this is that you would publish the status of your long running job to a channel, which would then cause all subscribers of that channel to be notified of the status change.
For example, within your job runner you could notify Faye of a status change with something like:
client = Faye::Client.new('http://localhost:9292/')
client.publish('/jobstatus', {id: jobid, status: 'in_progress'})
And then in your front end you can subscribe to that channel using javascript:
var client = new Faye.Client('http://localhost:9292/');
client.subscribe('/jobstatus', function(message) {
  alert('the status of job #' + message.jobid + ' changed to ' + message.status);
});
Using a pub/sub system in this way allows you to scale your realtime page events separately from your main app - you could run Faye on another server. You could also go for a hosted (and paid) solution like Pusher, and let them take care of scaling your infrastructure.
It's also worth mentioning that Faye uses the Bayeux protocol, which means it will use websockets where they are available, and long-polling where they are not.
We have this pattern and use two different approaches. In both cases background jobs are run with Resque, but you could likely do something similar with DelayedJob or Sidekiq.
Polling
In the polling approach, we have a javascript object on the page that sets a timeout for polling with a URL passed to it from the rails HTML view.
This causes an Ajax ("script") call to the provided URL, which means Rails looks for the JS template. So we use that to respond with state and fire an event for the object to respond to, whether the result is available or not.
This is somewhat complicated and I wouldn't recommend it at this point.
Sockets
The better solution we found was to use WebSockets (with shims). In our case we use PubNub, but there are numerous services that handle this. It keeps the polling/open connections off your web server and is much more cost-effective than running the servers needed to handle those connections.
You've stated you are looking for front-end solutions and you can handle all the front-end with PubNub's client JavaScript library.
Here's a rough idea of how we notify PubNub from the backend.
class BackgroundJob
  @queue = :some_queue

  def perform
    # Do some action
  end

  def after_perform
    publish some_state, client_channel
  end

  private

  def publish(some_state, client_channel)
    Pubnub.new(
      publish_key: Settings.pubnub.publish_key,
      subscribe_key: Settings.pubnub.subscribe_key,
      secret_key: Settings.pubnub.secret_key
    ).publish(
      channel: client_channel,
      message: some_state.to_json,
      http_sync: true
    )
  end
end
The simplest approach I can think of is to set a flag in your DB when the task is complete, and have your front end (view) send an Ajax request periodically to check the flag's state in the db. If the flag is set, you take the appropriate action in the view. Below are code samples.
Since you said this long-running task needs to run per user, let's add a boolean to the users table: task_complete. When you add the job to Sidekiq, you can unset the flag:
# Sidekiq worker: app/workers/task.rb
class Task
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)
    # Long running task code here, which executes per user
    user.task_complete = true
    user.save!
  end
end
# When adding the task to the sidekiq queue
user = User.find(params[:id])
# The flag would have been set to true by a previous execution.
# If it is false, sidekiq already has a job entry and we don't need to add it again.
if user.task_complete?
  Task.perform_async(user.id)
  user.task_complete = false
  user.save!
end
In the view you can periodically check whether the flag was set using ajax requests:
<script type="text/javascript">
  var complete = false;
  (function worker() {
    $.ajax({
      url: 'task/status/<%= @user.id %>',
      success: function(data) {
        // update the view based on the ajax response, if you need to
      },
      complete: function() {
        // Schedule the next request when the current one completes. Once the
        // global 'complete' variable is true, the task is done and we stop polling.
        if (!complete) {
          setTimeout(worker, 5000); // in milliseconds
        }
      }
    });
  })();
</script>
# status action which returns the status of the task
# GET /task/status/:id
def status
  @user = User.find(params[:id])
end
# status.js.erb - add view logic based on what you want to achieve, given whether the task is complete or not
<% if @user.task_complete? %>
  $('#success').show();
  complete = true;
<% else %>
  $('#processing').show();
<% end %>
You can set the timeout based on the average execution time of your task. Say your task takes 10 minutes on average; then there's no point in checking at a 5-second frequency.
Also, if your task's execution frequency is something complex (and not once per day), you may want to add a timestamp, task_completed_at, and base your logic on a combination of the flag and the timestamp.
As for this part:
"This long running task will be run per user, on a website with a lot of traffic, is this a good idea?"
I don't see a problem with this approach, though architectural changes like executing jobs (sidekiq workers) on separate hardware will help. These are lightweight Ajax calls, and some intelligence built into your JavaScript (like the global complete flag) avoids unnecessary requests. If you have huge traffic and DB reads/writes are a concern, you may want to store the flag directly in Redis instead (since you already have it for Sidekiq). I believe that will resolve your read/write concerns, and I don't see it causing problems. This is the simplest and cleanest approach I can think of, though you could achieve the same with websockets, which are supported by most modern browsers (though they can cause problems in older versions).
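For reference, the Redis variant of the flag is a small swap on each side (a sketch assuming a configured $redis connection; the key name is made up for illustration):

# in the worker, instead of user.task_complete = true; user.save!
$redis.set("task_complete:#{user_id}", "1")

# in the status action
@complete = $redis.get("task_complete:#{params[:id]}") == "1"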
