Rails push EM-Socket

I'm writing a Rails app and I want users to receive a notification as soon as a new message is saved to the DB. For websockets I'm using the em-websocket gem. After a connection is opened, I store the client id and socket instance in an array.
Question: how do I push data to the client from a controller/model action (before_save, for example)? Is it possible to do this with em-websocket?
chat.rb
EVENTCHAT_CONFIG = YAML.load_file("#{Rails.root}/config/eventchat.yml")[Rails.env].symbolize_keys

# escape html/xss
include ERB::Util

Thread.abort_on_exception = true
Thread.new {
  EventMachine.run {
    @sockets = Array.new
    EventMachine::WebSocket.start(EVENTCHAT_CONFIG) do |ws|
      ws.onopen do
      end
      ws.onclose do
        index = @sockets.index { |i| i[:socket] == ws }
        client = @sockets.delete_at index
        @sockets.each { |s| s[:socket].send h("#{client[:id]} has disconnected!") }
      end
      ws.onmessage do |msg|
        client = JSON.parse(msg).symbolize_keys
        case client[:action]
        when 'connect'
          @sockets.push({ :id => client[:id], :socket => ws })
          @sockets.each { |s| s[:socket].send h("#{client[:id]} has connected!") }
        when 'say'
          @sockets.each { |s| s[:socket].send h("#{client[:id]} says: #{client[:data]}") }
        end
      end
    end
  }
}

When deploying, you may run into problems caused by each Rails worker having its own EM reactor (for example, errors about failing to bind to the socket, or not all users receiving all messages).
Rails controllers are, from the websocket reactor's point of view, just another type of client. The easiest way is to open a connection with a special client id, push some data, and close it afterwards (a minimal sketch of this follows below). A more efficient way is to keep an open connection from each Rails worker.
If an EM-based single-process server is used (for example, Thin), you can use EM::Queue to deliver messages to the websocket worker in-process, or even write to the websocket directly from the controller.
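As an illustration of the "connect, push, close" approach, here is a minimal sketch. It assumes the websocket-client-simple gem, a Message model, and that EVENTCHAT_CONFIG contains :host and :port keys; none of these details come from the question itself.
# Hypothetical sketch: a model callback acting as just another websocket client,
# speaking the same JSON protocol that chat.rb above expects.
require 'websocket-client-simple'

class Message < ActiveRecord::Base
  after_save :push_notification

  private

  def push_notification
    payload = { action: 'say', id: 'rails-backend', data: "new message ##{id}" }.to_json
    ws = WebSocket::Client::Simple.connect(
      "ws://#{EVENTCHAT_CONFIG[:host]}:#{EVENTCHAT_CONFIG[:port]}"
    )
    # push once the handshake completes; close (or keep) the socket as needed
    ws.on(:open) { ws.send(payload) }
  end
end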

How to receive data in ActionCable Channel without JS?

I'm writing a Rails application that uses WebSockets to communicate with other machines (there is no browser or client-side logic involved in this process). I have a channel:
class MachinesChannel < ApplicationCable::Channel
  def subscribed
    ...
  end

  def unsubscribed
    ...
  end

  def handle_messages
    ...
  end
end
To receive the data, the only way I know about is the JavaScript client:
ActionCable.createConsumer('/cable').subscriptions.create 'MachinesChannel',
  received: (message) ->
    @perform('handle_messages')
I can call server-side methods from JS via the perform() method.
Is there any way to omit the JS part and somehow directly handle the incoming data in MachinesChannel?
The ideal situation would be to have the handle_messages method accept a data argument and have this method called on incoming data.
After looking into the ActionCable source code I came up with the following solution. You just have to create a method in MachinesChannel that you want to be called, e.g. handle_messages(data). Then, in the client that connects to your websocket, you need to send a message in the following format (example in Ruby):
id = { channel: 'MachinesChannel' }
ws = WebSocket::Client::Simple.connect(url)
ws.send(JSON.generate(
  command: 'message',
  identifier: JSON.generate(id),
  data: JSON.generate(action: 'handle_messages', foo: 'bar', biz: 'baz')
))
action has to be the name of the method you want to be called in MachinesChannel. The rest of the key-value pairs can be whatever you want; this is the data you receive in the ActionCable channel.
Recently a gem, action_cable_client, has been released which seems perfect for exactly this kind of usage. I haven't used it, so I don't know how well it works in practice.
Instead of:
def handle_messages
...
end
This works for me:
def receive(data)
puts data
...
end
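For reference, when the receive override is used the client does not need to name an action at all: ActionCable routes any data payload without an "action" key to receive. A sketch mirroring the raw-protocol example above (url is assumed, as before):
require 'websocket-client-simple'
require 'json'

id = { channel: 'MachinesChannel' }
ws = WebSocket::Client::Simple.connect(url)
# no "action" key here, so ActionCable dispatches the payload to MachinesChannel#receive
ws.send(JSON.generate(
  command: 'message',
  identifier: JSON.generate(id),
  data: JSON.generate(foo: 'bar', biz: 'baz')
))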

Rails: How to listen to / pull from service or queue?

Most Rails applications work by waiting for requests coming from a client and then doing their magic.
But if I want to use a Rails application as part of a microservice architecture (for example) with some asynchronous communication (Service A sends an event into a Kafka or RabbitMQ queue, and Service B, my Rails app, is supposed to listen to this queue), how can I configure/start the Rails app so that it immediately listens to a queue and is triggered by events from there? (Meaning the initial trigger does not come from a client, but from the app itself.)
Thanks for your advice!
I just set up RabbitMQ messaging within my application and will be implementing it for decoupled (multiple, distributed) applications in the next day or so. I found this article very helpful (and the RabbitMQ tutorials, too). All the code below is for RabbitMQ and assumes you have a RabbitMQ server up and running on your local machine.
Here's what I have so far that's working for me:
#Gemfile
gem 'bunny'
gem 'sneakers'
I have a Publisher that sends to the queue:
# app/agents/messaging/publisher.rb
module Messaging
  class Publisher
    class << self
      def publish(args)
        connection = Bunny.new
        connection.start
        channel = connection.create_channel
        queue_name = "#{args.keys.first.to_s.pluralize}_queue"
        queue = channel.queue(queue_name, durable: true)
        channel.default_exchange.publish(args[args.keys.first].to_json, :routing_key => queue.name)
        puts "in #{self}.#{__method__}, [x] Sent #{args}!"
        connection.close
      end
    end
  end
end
Which I use like this:
Messaging::Publisher.publish(event: {... event details...})
Then I have my 'listener':
# app/agents/messaging/events_queue_receiver.rb
require_dependency "#{Rails.root.join('app','agents','messaging','events_agent')}"

module Messaging
  class EventsQueueReceiver
    include Sneakers::Worker
    from_queue :events_queue, env: nil

    def work(msg)
      logger.info msg
      response = Messaging::EventsAgent.distribute(JSON.parse(msg).with_indifferent_access)
      ack! if response[:success]
    end
  end
end
The 'listener' sends the message to Messaging::EventsAgent.distribute, which is like this:
# app/agents/messaging/events_agent.rb
require_dependency "#{Rails.root.join('app','agents','fsm','state_assignment_agent')}"

module Messaging
  class EventsAgent
    EVENT_HANDLERS = {
      enroll_in_program: ["FSM::StateAssignmentAgent"]
    }

    class << self
      def publish(event)
        Messaging::Publisher.publish(event: event)
      end

      def distribute(event)
        puts "in #{self}.#{__method__}, message"
        if event[:handler]
          puts "in #{self}.#{__method__}, event[:handler]: #{event[:handler]}"
          event[:handler].constantize.handle_event(event)
        else
          event_name = event[:event_name].to_sym
          EVENT_HANDLERS[event_name].each do |handler|
            event[:handler] = handler
            publish(event)
          end
        end
        return {success: true}
      end
    end
  end
end
Following the instructions on Codetunes, I have:
# Rakefile
# Add your own tasks in files placed in lib/tasks ending in .rake,
# for example lib/tasks/capistrano.rake, and they will automatically be available to Rake.
require File.expand_path('../config/application', __FILE__)
require 'sneakers/tasks'
Rails.application.load_tasks
And:
# app/config/sneakers.rb
Sneakers.configure({})
Sneakers.logger.level = Logger::INFO # the default DEBUG is too noisy
I open two console windows. In the first, I say (to get my listener running):
$ WORKERS=Messaging::EventsQueueReceiver rake sneakers:run
... a bunch of start up info
2016-03-18T14:16:42Z p-5877 t-14d03e INFO: Heartbeat interval used (in seconds): 2
2016-03-18T14:16:42Z p-5899 t-14d03e INFO: Heartbeat interval used (in seconds): 2
2016-03-18T14:16:42Z p-5922 t-14d03e INFO: Heartbeat interval used (in seconds): 2
2016-03-18T14:16:42Z p-5944 t-14d03e INFO: Heartbeat interval used (in seconds): 2
In the second, I say:
$ rails c --sandbox
2.1.2 :001 > Messaging::Publisher.publish({:event=>{:event_name=>"enroll_in_program", :program_system_name=>"aha_chh", :person_id=>1}})
in Messaging::Publisher.publish, [x] Sent {:event=>{:event_name=>"enroll_in_program", :program_system_name=>"aha_chh", :person_id=>1}}!
=> :closed
Then, back in my first window, I see:
2016-03-18T14:17:44Z p-5877 t-19nfxy INFO: {"event_name":"enroll_in_program","program_system_name":"aha_chh","person_id":1}
in Messaging::EventsAgent.distribute, message
in Messaging::EventsAgent.distribute, event[:handler]: FSM::StateAssignmentAgent
And in my RabbitMQ server's management UI, I can see the queue activity.
It's a pretty minimal setup and I'm sure I'll be learning a lot more in coming days.
Good luck!
I'm afraid that for RabbitMQ at least you will need a client. RabbitMQ implements the AMQP protocol, as opposed to the HTTP protocol used by web servers. As Sergio mentioned above, Rails is a web framework, so it doesn't have AMQP support built into it. You'll have to use an AMQP client such as Bunny in order to subscribe to a Rabbit queue from within a Rails app.
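For illustration, a minimal Bunny subscriber might look like the sketch below (the queue name is an assumption, and you would run this in a long-lived process such as a rake task or a dedicated daemon rather than inside a web request):
# Hypothetical standalone subscriber built on Bunny
require 'bunny'

connection = Bunny.new
connection.start
channel = connection.create_channel
queue = channel.queue('events_queue', durable: true)

# block: true keeps the process alive, handling messages as they arrive
queue.subscribe(block: true) do |_delivery_info, _properties, payload|
  puts "Received: #{payload}"
  # hand the payload off to your Rails code here
end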
Let's say Service A is sending events to a Kafka queue. You can have a background process running alongside your Rails app which watches the Kafka queue and processes the queued messages; a sketch of that idea follows below. For the background process you can go with a cron job or something like Sidekiq.
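As a sketch of that idea using the ruby-kafka gem (the broker address, topic, group id and task name are all assumptions), the listener could be a long-running rake task started alongside the app:
# lib/tasks/kafka_listener.rake (hypothetical)
require 'kafka'

namespace :kafka do
  desc 'Listen to the events topic and process incoming messages'
  task listen: :environment do
    kafka = Kafka.new(['localhost:9092'])
    consumer = kafka.consumer(group_id: 'rails-service-b')
    consumer.subscribe('service_a_events')

    # blocks forever, yielding each message as it arrives
    consumer.each_message do |message|
      payload = JSON.parse(message.value)
      Rails.logger.info("Kafka event: #{payload.inspect}")
      # trigger whatever Rails-side processing you need here
    end
  end
end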
Rails is a lot of things. Parts of it handle web requests. Other parts (ActiveRecord) don't care whether you are a web request or a script or whatever. Rails itself does not even come with a production-worthy web server; you use other gems (e.g., Thin for plain old web browsers, or wash_out for incoming SOAP requests) for that. Rails only gives you the infrastructure/middleware to combine all the pieces regarding servers.
Unless your queue can call out to your application in some form of HTTP, for example SOAP requests, you'll need something that listens to your queueing system, whatever that may be, and translates new "tickets" on your queue into controller actions in your Rails world.

The strategy for building a talk-to-talk system using em-websocket in Rails?

Maybe this is a good example of a server push system. There are many users in the system, and users can talk with each other. It can be accomplished like this: one user sends a message (through a websocket) to the server, then the server forwards the message to the other user. The key is to find the binding between the ws (websocket object) and the user. Example code is below:
EM.run {
  EM::WebSocket.run(:host => "0.0.0.0", :port => 8080, :debug => false) do |ws|
    ws.onopen { |handshake|
      # extract the user id from handshake and store the binding between user and ws
    }
    ws.onmessage { |msg|
      # extract the text and receiver id from msg
      # extract the ws_receiver from the binding
      ws_receiver.send(text)
    }
  end
}
I want to figure out the following issues:
Can the ws object be serialized so it can be stored on disk or in a database? Otherwise I can only keep the binding in memory.
What are the differences between em-websocket and websocket-rails?
Which gem do you recommend for websockets?
You're approaching a use case that websockets are pretty good for, so you're on the right track.
You could serialize the ws object with Marshal, but think of websocket objects as being a bit like HTTP request objects in that they are abstractions for a type of communication. You are probably best off marshaling/storing the data and keeping the live connections in memory, as in the sketch below.
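For example, a minimal in-memory binding can just be a hash keyed by user id; how the user id is extracted from the handshake (query string, cookie, or an initial auth message) is left as an assumed helper here:
require 'em-websocket'
require 'json'

# Sketch only: keep the user-id -> websocket binding in memory, not on disk
bindings = {}

EM.run {
  EM::WebSocket.run(:host => "0.0.0.0", :port => 8080) do |ws|
    ws.onopen { |handshake|
      user_id = extract_user_id(handshake) # hypothetical helper
      bindings[user_id] = ws
    }
    ws.onclose {
      bindings.delete_if { |_, socket| socket == ws }
    }
    ws.onmessage { |msg|
      data = JSON.parse(msg) # e.g. {"to" => 42, "text" => "hi"}
      receiver = bindings[data['to']]
      receiver.send(data['text']) if receiver
    }
  end
}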
em-websocket is a lower(ish) level websocket library built more or less directly on EventMachine. websocket-rails is a higher-level abstraction over websockets, with a lot of nice tools built in and pretty decent docs. It is built on top of faye-websocket, which is itself built on EventMachine. Note: Action Cable, the new websocket library for Rails 5, is built on faye.
I've used websocket-rails in the past and rather like it. It will take care of a lot for you. However, if you can use Rails 5 and Action Cable, do that; it's the future.
The following is in addition to Chase Gilliam's succinct answer, which included references to em-websocket, websocket-rails (which hasn't been maintained in a long while), faye-websocket and ActionCable.
I would recommend the Plezi framework. It works both as an independent application framework and as a Rails websocket enhancement.
I would consider the following points as well:
Do you need the message to persist between connections (i.e. if the other user is offline, should the message wait in a "message box"? And for how long should it wait)?
Do you wish to preserve message history?
These points will help you decide whether or not to use persistent storage (i.e. a database) for the messages.
For example, to use Plezi with Rails, create an init_plezi.rb in your application's config/initializers folder and use (as an example) the following code:
class ChatDemo
  # use JSON events instead of raw websockets
  @auto_dispatch = true

  protected # protected functions are hidden from regular HTTP requests

  def auth msg
    @user = User.auth_token(msg['token'])
    return close unless @user
    # creates a websocket "mailbox" that will remain open for 9 hours.
    register_as @user.id, lifetime: 60*60*9, max_connections: 5
  end

  def chat msg, received = false
    unless @user # require authentication first
      close
      return false
    end
    if received
      # this is only true when we sent the message
      # using the `broadcast` or `notify` methods
      write msg # writes to the client websocket
    end
    msg['from'] = @user.id
    msg['time'] = Plezi.time # an existing time object
    unless msg['to'] && registered?(msg['to'])
      # send an error message event
      return {event: :err, data: 'No recipient or recipient invalid'}.to_json
    end
    # everything was good, let's send the message and inform
    # this will invoke the `chat` event on the other websocket
    # notice the `true` is setting the `received` flag.
    notify msg['to'], :chat, msg, true
    # returning a String will send it to the client
    # when using the auto-dispatch feature
    {event: 'message_sent', msg: msg}.to_json
  end
end
# remember our route for websocket connections.
route '/ws_chat', ChatDemo
# a route to the Javascript client (optional)
route '/ws/client.js', :client
Plezi sets up its own server (Iodine, a Ruby server), so remember to remove from your application any references to puma, thin or any other custom server.
On the client side you might want to use the Javascript helper provided by Plezi (it's optional)... add:
<script src='/ws/client.js'></script>
<script>
  TOKEN = "<%= @user.token %>";
  c = new PleziClient(PleziClient.origin + "/ws_chat"); // the client helper
  c.log_events = true; // debug
  c.chat = function(event) {
    // do what you need to print a received message to the screen
    // `event` is the JSON data, i.e.: event.event == 'chat'
  };
  c.error = function(event) {
    // do what you need to print an error message to the screen
    alert(event.data);
  };
  c.message_sent = function(event) {
    // invoked after the message was sent
  };
  // authenticate once the connection is established
  c.onopen = function(event) {
    c.emit({event: 'auth', token: TOKEN});
  };
  // to send a chat message:
  // c.emit({event: 'chat', to: 8, data: "my chat message"})
</script>
I didn't test the actual message code because it's just a skeleton, and it also requires a Rails app with a User model and a token which I didn't want to set up just to answer a question (no offense).

Thread running in Middleware is using old version of parent's instance variable

I've used the Heroku tutorial to implement websockets.
It works properly with Thin, but does not work with Unicorn and Puma.
There's also an echo message implemented, which responds to the client's message. It works properly on every server, so there are no problems with the websocket implementation itself.
The Redis setup is also correct (it catches all messages and executes the code inside the subscribe block).
How does it work now:
On server start, an empty @clients array is initialized. Then a new Thread is started, which listens to Redis and is intended to send each message to the corresponding user from the @clients array.
On page load, a new websocket connection is created and stored in the @clients array.
If we receive a message from the browser, we send it back to all clients connected as the same user (that part works properly on both Thin and Puma).
If we receive a message from Redis, we also look up all the user's connections stored in the @clients array.
This is where the weird thing happens:
If running with Thin, it finds the connections in the @clients array and sends the message to them.
If running with Puma/Unicorn, the @clients array is always empty, even if we try it in this order (without page reload or anything):
Send message from browser -> @clients.length is 1, message is delivered
Send message via Redis -> @clients.length is 0, message is lost
Send message from browser -> @clients.length is still 1, message is delivered
Could someone please clarify what I am missing?
Related config of Puma server:
workers 1
threads_count = 1
threads threads_count, threads_count
Related middleware code:
require 'faye/websocket'

class NotificationsBackend
  def initialize(app)
    @app = app
    @clients = []
    Thread.new do
      redis_sub = Redis.new
      redis_sub.subscribe(CHANNEL) do |on|
        on.message do |channel, msg|
          # logging @clients.length from here will always return 0
          # [..] retrieve user
          send_message(user.id, msg)
        end
      end
    end
  end

  def call(env)
    if Faye::WebSocket.websocket?(env)
      ws = Faye::WebSocket.new(env, nil, { ping: KEEPALIVE_TIME })
      ws.on :open do |event|
        # [..] retrieve current user
        if user
          # add ws connection to @clients array
        else
          # close ws
        end
      end
      ws.on :message do |event|
        # [..] retrieve current user
        Redis.current.publish(CHANNEL, { user_id: user.id, message: "ECHO: #{event.data}" }.to_json)
      end
      ws.rack_response
    else
      @app.call(env)
    end
  end

  def send_message(user_id, message)
    # logging @clients.length here will always return the correct result
    # cs = all connections which belong to that client
    cs.each { |c| c.send(message.to_json) }
  end
end
Unicorn (and apparently Puma) both start up a master process and then fork one or more workers. fork copies (or at least presents the illusion of copying; an actual copy usually only happens as you write to pages) your entire process, but only the thread that called fork exists in the new process.
Clearly your app is being initialised before being forked; this is normally done so that workers can start quickly and benefit from copy-on-write memory savings. As a consequence, your Redis-checking thread is only running in the master process, whereas @clients is being modified in the child process.
You can probably work around this by either deferring the creation of your Redis thread (for example via Puma's on_worker_boot hook, sketched below) or by disabling app preloading. However, you should be aware that your setup will prevent you from scaling beyond a single worker process (which would be less of a constraint with Puma and a thread-friendly Ruby implementation such as JRuby).
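A sketch of the "defer the thread" option using Puma's on_worker_boot hook; it assumes the subscriber-thread startup is extracted out of initialize into a method that can be called later (NotificationsBackend.instance and start_redis_subscriber are hypothetical names, not part of the code above):
# config/puma.rb (sketch)
workers 1
threads 1, 1
preload_app!

on_worker_boot do
  # runs in each worker after the fork, so the Redis-listening thread
  # ends up in the same process as the @clients array it reads
  NotificationsBackend.instance.start_redis_subscriber
end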
Just in case somebody faces the same problem, here are the two solutions I came up with:
1. Disable app preloading (this was the first solution I came up with)
Simply remove preload_app! from the puma.rb file. Then every worker will have its own @clients variable, and it will be accessible from the other middleware methods (like call etc.).
Drawback: you will lose all the benefits of app preloading. It is OK if you have only 1 or 2 workers with a couple of threads, but if you need a lot of them, then it's better to keep app preloading. So I continued my research, and here is another solution:
2. Move thread initialization out of initialize method (this is what I use now)
For example, I moved it to the call method, so this is what the middleware class code looks like:
attr_accessor :subscriber

def call(env)
  @subscriber ||= Thread.new do # if no subscriber is present, start one
    redis_sub = Redis.new(url: ENV['REDISCLOUD_URL'])
    redis_sub.subscribe(CHANNEL) do |on|
      on.message do |_, msg|
        # parse the message here, retrieve the user
        send_message(user.id, msg)
      end
    end
  end
  # other code from the method
end
Both solutions solve the same problem: the Redis-listening thread is initialized for each Puma worker/thread, not for the main process (which does not actually serve requests).

Can't connect to AMQP twice in order to send messages to it

I am having a problem connecting to AMQP in my RSpec tests. I have code like this:
module Rabbit
  class Client
    def start
      EventMachine.run do
        connection = AMQP.connect(Settings_object) # it holds host, username and password information
        channel = AMQP::Channel.new(connection)
        channel.queue("queue_name", :durable => true)
        channel.default_exchange.publish("A test message", :routing_key => "queue_name")
      end
    end
  end
end
module Esper
  class Server
    def start
      EventMachine.run do
        connection = AMQP.connect(Settings_object) # it holds host, username and password information
        # Some code to subscribe to queues
      end
    end
  end
end
My problem shows up when I run the spec:
@client = Rabbit::Client.new
@server = Esper::Server.new

Thread.new do
  @client.start
end

Thread.new do
  @server.start
end
At first the Client is able to connect to AMQP while the Server isn't, but when I run it a second time the Client can't connect either. I can't seem to overcome this problem. I don't see a reason why the Client would stop connecting when I run it the second time.
The root cause of this problem is that each AMQP queue needs its own separate channel. For example:
queue1_channel = AMQP::Channel.new(connection)
queue2_channel = AMQP::Channel.new(connection)
And use them like that.
But the overall solution for this situation is to use the daemon-kit gem. It separates the AMQP part into a separate application, and the AMQP connections are handled within that "application", or better yet, a daemon.
It also has a generator for AMQP, so a good approach would be to use that.
