Rails, ActionController::Live, Puma: ThreadError

I want to stream notifications to the client. For this, I use Redis pub/sub and ActionController::Live. Here is what my StreamingController looks like:
class StreamingController < ActionController::Base
  include ActionController::Live

  def stream
    response.headers['Content-Type'] = 'text/event-stream'
    $redis.psubscribe("user-#{params[:user_id]}:*") do |on|
      on.pmessage do |subscription, event, data|
        response.stream.write "data: #{data}\n\n"
      end
    end
  rescue IOError
    logger.info "Stream closed"
  ensure
    response.stream.close
  end
end
Here is the JS part that listens to the stream:
var source = new EventSource("/stream?user_id=" + user_id);
source.addEventListener("message", function(e) {
  data = jQuery.parseJSON(e.data);
  switch(data.type) {
    case "unread_receipts":
      updateUnreadReceipts(data);
      break;
  }
}, false);
Now if I push something to Redis, the client gets the push notification, so this part works fine. But when I click on a link, nothing happens. After stopping the Rails server (I use Puma) with Ctrl+C, I get the following error:
ThreadError: Attempt to unlock a mutex which is locked by another thread
The problem can be solved by adding config.middleware.delete Rack::Lock to development.rb, but then I don't see any console output after pushing to the client. Setting config.cache_classes = true and config.eager_load = true is not an option, because I don't want to restart my server every time in development.
Is there any other solution?

If you want to avoid restarting the server to pick up changes, then I think you'd need to be running multiple processes, so that a request held open by the stream only blocks its own process.
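A minimal sketch of what that could look like with Puma's cluster mode; the worker and thread counts here are assumptions, not values from the question:

# config/puma.rb (sketch)
workers 2          # more than one process, so a held-open stream only blocks its own worker
threads 1, 16      # several threads per worker for the long-lived SSE connections

With more than one worker, the streaming request can hold its process while other requests are handled by the remaining workers, and class reloading in development stays enabled.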

Related

Action Cable Broadcast message from sidekiq shows up only after refresh, works instantly from console

I followed this tutorial to create an Action Cable broadcast, but it's not quite working as expected. The channel streams and the web app subscribes successfully, but messages broadcast from the Sidekiq background job are only displayed after refreshing the page. Using the same command in the console does result in an immediate update to the page.
When looking at the frames in Chrome's developer tools, I cannot see the messages broadcast from the background job but can immediately see the ones sent from the console. However, I can confirm that the Sidekiq background job is broadcasting those messages somewhere, since they do show up upon refresh; I just don't know where they are being queued.
Are there any additional configuration changes needed to keep the messages from the background job from being queued somewhere? Are there any typos or errors in my code that could be causing this?
Action Cable Broadcast message:
ActionCable.server.broadcast "worker_channel", {html:
  "<div class='alert alert-success alert-block text-center'>
    Market data retrieval complete.
  </div>"
}
smart_worker.rb -- this is called via perform_async from the controller's action:
class SmartWorker
  include Sidekiq::Worker
  include ApplicationHelper

  sidekiq_options retry: false

  def perform
    ActionCable.server.broadcast "worker_channel", {html:
      "<div class='alert alert-success alert-block text-center'>
        Market data retrieval complete.
      </div>"
    }
  end
end
connection.rb:
module ApplicationCable
  class Connection < ActionCable::Connection::Base
    identified_by :current_user

    def connect
      self.current_user = current_user # find_verified_user ignored until the method is implemented correctly and does not always return unauthorized
    end

    private

    def find_verified_user
      if current_user = User.find_by(id: cookies.signed[:user_id])
        current_user
      else
        reject_unauthorized_connection
      end
    end
  end
end
worker_channel:
class WorkerChannel < ApplicationCable::Channel
  def subscribed
    stream_from "worker_channel"
  end

  def unsubscribed
  end
end
worker.js:
App.notifications = App.cable.subscriptions.create('WorkerChannel', {
  connected: function() {
    console.log('message connected');
  },
  disconnected: function() {},
  received: function(data) {
    console.log('message received');
    $('#notifications').html(data.html);
  }
});
cable.yml
development:
  adapter: redis
  url: redis://localhost:6379/1
test:
  adapter: async
production:
  adapter: redis
  url: <%= ENV.fetch("REDIS_URL") { "redis://localhost:6379/1" } %>
  channel_prefix: smarthost_production
Also added
to the view but that didn't make a difference.
I'm not sure this is the entire explanation, but this is what I have observed through further testing:
After multiple server restarts, the broadcast started working and would log as expected in the development logger. Console messages were still hit or miss, so I added some additional identifiers to the broadcast messages and found that they were being broadcast before the next page had finished loading. This caused two things:
1) A quick flash of the flash messages triggered by the broadcast (in what was perceived to be the old page, i.e. it only appears to work after a refresh).
2) Missing or inconsistent output in the browser console: because the Sidekiq job finishes so quickly, sometimes even before the browser starts rendering the new page, I believe the console messages are being cleared by the page load and are therefore not visible when you check the logs (or even if you stare at them for a while).
It seems as though this is working as expected, and is simply working too quickly in the local environment, which makes it seem as though it's not working as intended.
Action Cable normally does not queue messages, and messages broadcast when there is no subscriber should be lost. The observed behaviour can happen if the notification actually arrives later than you expect.
I'd check:
Run the entire job in the console, not just the notification, and see whether it runs slowly.
Check the Sidekiq queue latency.
Add logging before and after the notification in the job and check the logs to confirm the job actually runs successfully (see the sketch below).
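For the last point, a minimal sketch of what that logging could look like in the SmartWorker from the question (the log messages themselves are made up):

class SmartWorker
  include Sidekiq::Worker
  sidekiq_options retry: false

  def perform
    logger.info "SmartWorker: about to broadcast to worker_channel"
    ActionCable.server.broadcast "worker_channel", {
      html: "<div class='alert alert-success alert-block text-center'>Market data retrieval complete.</div>"
    }
    logger.info "SmartWorker: broadcast sent"
  end
end

Comparing the timestamps of those two lines with the request log for the next page load should show whether the broadcast fires before the new page has subscribed.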

Rails push EM-Socket

I'm writing a Rails app and I want users to receive a notification as soon as a new message is saved to the DB. For websockets I'm using the em-websocket gem. After connection, I store the client id and socket instance in an array.
Question: how do I push data to the client from a controller/model action (before_save, for example)? Is it possible to do so with em-websocket?
chat.rb
EVENTCHAT_CONFIG = YAML.load_file("#{Rails.root}/config/eventchat.yml")[Rails.env].symbolize_keys

# escape html/xss
include ERB::Util

Thread.abort_on_exception = true

Thread.new {
  EventMachine.run {
    @sockets = Array.new
    EventMachine::WebSocket.start(EVENTCHAT_CONFIG) do |ws|
      ws.onopen do
      end
      ws.onclose do
        index = @sockets.index { |i| i[:socket] == ws }
        client = @sockets.delete_at index
        @sockets.each { |s| s[:socket].send h("#{client[:id]} has disconnected!") }
      end
      ws.onmessage do |msg|
        client = JSON.parse(msg).symbolize_keys
        case client[:action]
        when 'connect'
          @sockets.push({ :id => client[:id], :socket => ws })
          @sockets.each { |s| s[:socket].send h("#{client[:id]} has connected!") }
        when 'say'
          @sockets.each { |s| s[:socket].send h("#{client[:id]} says : #{client[:data]}") }
        end
      end
    end
  }
}
When deploying, you may have problems caused by having a separate EM reactor in each Rails worker (for example, errors about some of them failing to bind to the socket, or not all users receiving all messages).
As far as the websocket reactor is concerned, Rails controllers are just another type of client. The easiest way is to open a connection with a special client id, push some data, and close it afterwards. A more efficient way is to keep an open connection from each Rails worker.
If an EM-based single-process server is used (for example, thin), you can use EM::Queue to deliver messages to the websocket worker in-process, or even write to the websocket directly from the controller, as in the sketch below.
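A minimal sketch of that single-process case, writing to the sockets from a model callback (the same idea works from a controller). It assumes the @sockets array from chat.rb is exposed through some shared accessor (EventChat.sockets is a made-up name), and the Message column names are assumptions as well:

class Message < ActiveRecord::Base
  include ERB::Util # for h(), as in chat.rb

  after_save :push_to_clients

  private

  def push_to_clients
    payload = h("#{user_id} says : #{body}") # user_id and body are assumed column names

    # Hand the write over to the EventMachine reactor thread instead of
    # touching the sockets from the Rails request thread directly.
    EM.next_tick do
      EventChat.sockets.each { |s| s[:socket].send(payload) }
    end
  end
end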

Thread running in Middleware is using old version of parent's instance variable

I've used the Heroku tutorial to implement websockets.
It works properly with Thin, but does not work with Unicorn and Puma.
There's also an echo message implemented, which responds to the client's message. It works properly on every server, so there are no problems with the websockets implementation.
The Redis setup is also correct (it catches all messages and executes the code inside the subscribe block).
How it works now:
On server start, an empty @clients array is initialized. Then a new Thread is started, which listens to Redis and is intended to send each message to the corresponding user from the @clients array.
On page load, a new websocket connection is created and stored in the @clients array.
If we receive a message from the browser, we send it back to all clients connected as the same user (that part works properly on both Thin and Puma).
If we receive a message from Redis, we also look up all of the user's connections stored in the @clients array.
This is where the weird thing happens:
If running with Thin, it finds the connections in the @clients array and sends the message to them.
If running with Puma/Unicorn, the @clients array is always empty, even if we try it in this order (without a page reload or anything):
Send message from browser -> @clients.length is 1, message is delivered
Send message via Redis -> @clients.length is 0, message is lost
Send message from browser -> @clients.length is still 1, message is delivered
Could someone please clarify what I am missing?
Related config of Puma server:
workers 1
threads_count = 1
threads threads_count, threads_count
Related middleware code:
require 'faye/websocket'

class NotificationsBackend
  def initialize(app)
    @app     = app
    @clients = []
    Thread.new do
      redis_sub = Redis.new
      redis_sub.subscribe(CHANNEL) do |on|
        on.message do |channel, msg|
          # logging @clients.length from here will always return 0
          # [..] retrieve user
          send_message(user.id, { message: msg })
        end
      end
    end
  end

  def call(env)
    if Faye::WebSocket.websocket?(env)
      ws = Faye::WebSocket.new(env, nil, { ping: KEEPALIVE_TIME })
      ws.on :open do |event|
        # [..] retrieve current user
        if user
          # add ws connection to @clients array
        else
          # close ws
        end
      end
      ws.on :message do |event|
        # [..] retrieve current user
        Redis.current.publish(CHANNEL, { user_id: user.id, message: "ECHO: #{event.data}" }.to_json)
      end
      ws.rack_response
    else
      @app.call(env)
    end
  end

  def send_message(user_id, message)
    # logging @clients.length here will always return the correct result
    # cs = all connections which belong to that client
    cs.each { |c| c.send(message.to_json) }
  end
end
Unicorn (and apparently Puma) both start up a master process and then fork one or more workers. fork copies (or at least presents the illusion of copying; an actual copy usually only happens as pages are written to) your entire process, but only the thread that called fork exists in the new process.
Clearly your app is being initialised before being forked. This is normally done so that workers can start quickly and benefit from copy-on-write memory savings. As a consequence, your Redis-checking thread is only running in the master process, whereas @clients is being modified in the child process.
You can probably work around this by either deferring the creation of your Redis thread or disabling app preloading; however, you should be aware that your setup will prevent you from scaling beyond a single worker process (which with Puma and a thread-friendly runtime like JRuby would be less of a constraint).
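A sketch of the "defer the creation of your Redis thread" option using Puma's on_worker_boot hook; NotificationsBackend.start_listener is a hypothetical method you would have to extract from the middleware's initialize, it is not part of the code above:

# config/puma.rb (sketch)
workers 1
threads 1, 1
preload_app!

on_worker_boot do
  # runs inside each forked worker, so the subscriber thread (and the
  # @clients array it reads) live in the process that serves the websockets
  NotificationsBackend.start_listener # hypothetical hook, see note above
end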
Just in case somebody faces the same problem, here are two solutions I have come up with:
1. Disable app preloading (this was the first solution I came up with)
Simply remove preload_app! from the puma.rb file. Therefore, all threads will have their own @clients variable, and it will be accessible by the other middleware methods (like call, etc.).
Drawback: you will lose all the benefits of app preloading. It is OK if you have only 1 or 2 workers with a couple of threads, but if you need a lot of them, then it's better to keep app preloading. So I continued my research, and here is another solution:
2. Move thread initialization out of the initialize method (this is what I use now)
For example, I moved it to the call method, so this is how the middleware class code looks:
attr_accessor :subscriber

def call(env)
  @subscriber ||= Thread.new do # if no subscriber present, init a new one
    redis_sub = Redis.new(url: ENV['REDISCLOUD_URL'])
    redis_sub.subscribe(CHANNEL) do |on|
      on.message do |_, msg|
        # parse the message here, retrieve the user
        send_message(user.id, { message: msg })
      end
    end
  end
  # other code from the method
end
Both solutions solve the same problem: the Redis-listening thread is initialized for each Puma worker/thread, not for the master process (which does not actually serve requests).

How to know when a user disconnects from a Faye channel?

I'm trying to use Faye to build a simple chat room with Rails and host it on Heroku. So far I have been able to get the Faye server running and instant messaging working. The crucial lines of code that I'm using are:
JavaScript file launched when the page loads:
$(function() {
  var faye = new Faye.Client(<< My Faye server on Heroku here >>);
  faye.subscribe("/messages/new", function(data) {
    eval(data);
  });
});
create.js.erb, triggered when the user sends a message
<% broadcast "/messages/new" do %>
  $("#chat").append("<%= j render(@message) %>");
<% end %>
Everything is working fine, but now I would like to be notified when a user disconnects from the chat. How should I do this?
I already looked at the monitoring section on Faye's website, but it's not clear where I should put that code.
Event monitoring goes in your rackup file. Here is an example I'm using in production:
Faye::WebSocket.load_adapter('thin')

server = Faye::RackAdapter.new(mount: '/faye', timeout: 25)

server.bind(:disconnect) do |client_id|
  puts "Client #{client_id} disconnected"
end

run server
Of course you can do whatever you like in the block you pass to #bind.
You may want to bind to the subscribe and unsubscribe events instead of the disconnect event. Read the word of warning at the bottom of the Faye monitoring docs.
This has worked well for me:
server.bind(:subscribe) do |client_id|
  # code to execute
  # puts "Client #{client_id} connected"
end

server.bind(:unsubscribe) do |client_id|
  # code to execute
  # puts "Client #{client_id} disconnected"
end
I also recommend using the private_pub gem; this will help secure your Faye app.

Safely stopping em-websocket in rails on thin

I have a Rails app that I am running on the Thin server to utilize the EventMachine run loop. I would like to include em-websocket to process information coming in from a websocket, and to be able to stop and start that websocket without stopping the EM run loop. This is how I am starting the websocket:
EventMachine::WebSocket.start(:host => "0.0.0.0", :port => 8080) do |ws|
  ws.onopen { }
  ws.onclose { }
  ws.onmessage { |msg| }
end
The problem is in the start/stop code. From em-websocket's docs
# Start WebSocket
def self.start(options, &blk)
  EM.epoll
  EM.run do
    trap("TERM") { stop }
    trap("INT")  { stop }
    EventMachine::start_server(options[:host], options[:port],
                               EventMachine::WebSocket::Connection, options) do |c|
      blk.call(c)
    end
  end
end

# Stop WebSocket
def self.stop
  puts "Terminating WebSocket Server"
  EventMachine.stop
end
The problem is that the internal em-websocket code does not keep the signature returned by EM::start_server, so there is no way to call EventMachine::stop_server(signature) to shut it down. Is there a way I can override these functions without modifying em-websocket, so that I can safely start and stop these websockets? I would like it to behave more like a standard EventMachine server.
It seems to me you don't need to use EM::WebSocket.start(). Instead, write your own start/stop code; then you can manage the signature yourself.
# Start a ws server and return the signature.
# The caller is responsible for trapping signals and stopping it later using that signature.
def start_ws_server(options, &blk)
  EventMachine::start_server(options[:host], options[:port],
                             EventMachine::WebSocket::Connection, options) do |c|
    blk.call(c)
  end
end

# Stop a previously started ws server
def stop_ws_server(signature)
  EventMachine::stop_server signature
end
So now you can start the server, capture the signature, and stop it later using that signature. There is no trap code in the start method because at that point the signature is unknown; since you are capturing the signature outside the method, you can set up your traps outside too and use the stored signature there.
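A usage sketch, assuming the reactor is already running (e.g. under Thin); the host and port are just the values from the question:

signature = nil

# start the listener from inside the reactor thread
EM.next_tick do
  signature = start_ws_server(:host => "0.0.0.0", :port => 8080) do |ws|
    ws.onopen { }
    ws.onclose { }
    ws.onmessage { |msg| }
  end
end

# ... later, shut down just the websocket listener; the EM run loop
# (and the rest of the Thin app) keeps running
EM.next_tick { stop_ws_server(signature) if signature }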
