I'm using private_pub to implement a one-to-one chat-like application.
Here is my story: as a user, I would like to receive a message when my partner leaves the chat – closes the window, etc.
Looking through the Faye monitoring docs, here is my attempt at binding on unsubscribe:
# Run with: rackup private_pub.ru -s thin -E production
require "bundler/setup"
require "yaml"
require "faye"
require "private_pub"
require "active_support/core_ext"
Faye::WebSocket.load_adapter('thin')
PrivatePub.load_config(File.expand_path("../config/private_pub.yml", __FILE__), ENV["RAILS_ENV"] || "development")
wts_pubsub = PrivatePub.faye_app
wts_pubsub.bind(:subscribe) do |client_id, channel|
  puts "[#{Time.now}] Client #{client_id} joined #{channel}"
end

wts_pubsub.bind(:unsubscribe) do |client_id, channel|
  puts "[#{Time.now}] Client #{client_id} disconnected from #{channel}"
  PrivatePub.publish_to channel, { marius_says: 'quitter' }
end
run wts_pubsub
but I keep getting timeouts: [ERROR] [Faye::RackAdapter] Timeout::Error
Prying into PrivatePub#publish_to, data holds what I expect both when I'm publishing from the Rails app and from the private_pub app, but the private_pub app keeps hanging.
How can I get publishing from private_pub to work?
Your second bind should be to the disconnect event instead of unsubscribe.
Also, remember to fire off a Faye/PrivatePub disconnect event in your client side code when a browser window is closed.
Note: you might need to do this for all open sessions with the Faye server, or just on a channel-by-channel basis, depending on your chat application's design.
In plain JS this might be something like:
window.onbeforeunload = functionThatTriggersFayeDisconnectEvent;
Sorry for not using proper markup, posting from mobile.
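On the server side, a minimal sketch of that binding, reusing the wts_pubsub app from the rackup file above (note that Faye's disconnect event only yields the client_id, not a channel):
# Sketch only: logs disconnects; publishing back to a channel would need the
# channel to be tracked separately, since :disconnect does not provide one.
wts_pubsub.bind(:disconnect) do |client_id|
  puts "[#{Time.now}] Client #{client_id} disconnected"
end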
After hours of research and numerous attempts, this is the solution I found:
Replace PrivatePub.publish_to channel, { marius_says: 'quitter' } with:
system "curl http://localhost:9292/faye -d 'message={\"channel\":\"#{channel}\", \"data\":{\"channel\":\"#{channel}\",\"data\":{\"message\":{\"content\":\"#{client_id} disconnected from this channel.\"}}}, \"ext\":{\"private_pub_token\":\"ADD_APPROPRIATE_SECRET_HERE\"}}' &"
This will trigger an asynchronous request (curl + &) which will bypass the problem. Not the best fix, but it works.
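The same idea can also be sketched in plain Ruby without shelling out to curl, by posting from a short-lived thread so the Faye reactor is not blocked (this assumes the same /faye endpoint, port and secret placeholder as the curl command above):
require "net/http"
require "json"

# Hypothetical replacement for the curl call: post the Faye message from a
# background thread instead of a shell process.
Thread.new do
  message = {
    channel: channel,
    data: {
      channel: channel,
      data: { message: { content: "#{client_id} disconnected from this channel." } }
    },
    ext: { private_pub_token: "ADD_APPROPRIATE_SECRET_HERE" }
  }
  Net::HTTP.post_form(URI("http://localhost:9292/faye"), "message" => message.to_json)
end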
I'm constructing a basic Rails app that I'm refactoring to do the heavy lifting on an external Docker/compute-as-a-service platform, i.e. iron.io (the 'worker').
In the refactoring I created a Grape API so that the remote 'worker' can report processing status and notify the server when processing is done. The user interface then uses Ajax to poll the local server for updates. The API and basic tests are all OK. It also works in development using Delayed::Job to run the worker.
However, I cannot seem to get my Capybara tests to work end to end, as the Delayed::Job process making the HTTP request back to the server always gets 'connection refused'.
It works fine if I run a Rails server in parallel with the tests (RAILS_ENV="test" rails s -p 3001) and then make sure the ENV variable is set to port 3001.
I have tried:
various combinations of Capybara.configure (as below)
in the test, visiting the URL directly (visit url, where url = "http://#{Capybara.server_host}:#{Capybara.server_port}") to see if that 'kicks off' the server
various webdrivers (poltergeist, selenium, etc.)
Any thoughts, experience or guidance much appreciated
Ben
Note: in the code,
the domain and port are populated via ENV variables (these environment variables will be set in the running environment, iron.io)
the port and app_host are set as below
the ENV variables are populated in the test
Capybara.configure do |config|
  config.run_server = true
  config.server_port = "9876"
  config.app_host = "http://127.0.0.1:9876"
end
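For reference, the ENV-driven version described above might look something like this (the APP_DOMAIN and APP_PORT names are illustrative, not from the original code):
# Hypothetical sketch: read the domain and port from ENV so the running
# environment (e.g. iron.io) can supply them.
port = ENV.fetch("APP_PORT", "9876")
host = ENV.fetch("APP_DOMAIN", "127.0.0.1")

Capybara.configure do |config|
  config.run_server  = true
  config.server_port = port
  config.app_host    = "http://#{host}:#{port}"
end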
rails 4.1.0
rspec 3.4.0
capybara 2.7.0
poltergeist 1.5.1
selenium 2.53.0
I think you're trying to have your test do too much. I would recommend that you "mock out" the interactions with the other service to make the tests self-sufficient. In the past I have added a test.js that:
Mocks out ajax on the page
Checks for specific requests to have been made (page.evaluate_script)
Responds back to them in the way your external service will (execute_script)
Like this:
// test.js
$.ajax = function(settings) {
  window.__ajaxRequests || (window.__ajaxRequests = []);
  window.__ajaxRequests.push(settings);
  return {
    done: function(cb) { settings.__done = cb; }
  };
};
# spec/features/jobs_spec.rb
visit '/jobs'
click_button 'Start job'

requests = page.evaluate_script('window.__ajaxRequests')
expect(requests.size).to eq(1)
expect(requests[0]['url']).to eq('http://jobs.yourproduct.com/start')
...

expect(page).not_to have_content('Job completed')
page.execute_script('window.__ajaxRequests[0].__done({data:{status:"complete"}})')
expect(page).to have_content('Job completed')
Most Rails applications work by waiting for requests coming from a client and then doing their magic.
But if I want to use a Rails application as part of a microservice architecture (for example) with some asynchronous communication (Service A sends an event into a Kafka or RabbitMQ queue and Service B - my Rails app - is supposed to listen to this queue), how can I tune/start the Rails app so that it immediately listens to a queue and is triggered by events from there? (Meaning the initial trigger does not come from a client, but from the app itself.)
Thanks for your advice!
I just set up RabbitMQ messaging within my application and will be implementing it for decoupled (multiple, distributed) applications in the next day or so. I found this article very helpful (and the RabbitMQ tutorials, too). All the code below is for RabbitMQ and assumes you have a RabbitMQ server up and running on your local machine.
Here's what I have so far that's working for me:
#Gemfile
gem 'bunny'
gem 'sneakers'
I have a Publisher that sends to the queue:
# app/agents/messaging/publisher.rb
module Messaging
  class Publisher
    class << self

      def publish(args)
        connection = Bunny.new
        connection.start
        channel = connection.create_channel
        queue_name = "#{args.keys.first.to_s.pluralize}_queue"
        queue = channel.queue(queue_name, durable: true)
        channel.default_exchange.publish(args[args.keys.first].to_json, :routing_key => queue.name)
        puts "in #{self}.#{__method__}, [x] Sent #{args}!"
        connection.close
      end

    end
  end
end
Which I use like this:
Messaging::Publisher.publish(event: {... event details...})
Then I have my 'listener':
# app/agents/messaging/events_queue_receiver.rb
require_dependency "#{Rails.root.join('app','agents','messaging','events_agent')}"

module Messaging
  class EventsQueueReceiver
    include Sneakers::Worker
    from_queue :events_queue, env: nil

    def work(msg)
      logger.info msg
      response = Messaging::EventsAgent.distribute(JSON.parse(msg).with_indifferent_access)
      ack! if response[:success]
    end
  end
end
The 'listener' sends the message to Messaging::EventsAgent.distribute, which is like this:
# app/agents/messaging/events_agent.rb
require_dependency "#{Rails.root.join('app','agents','fsm','state_assignment_agent')}"

module Messaging
  class EventsAgent
    EVENT_HANDLERS = {
      enroll_in_program: ["FSM::StateAssignmentAgent"]
    }

    class << self

      def publish(event)
        Messaging::Publisher.publish(event: event)
      end

      def distribute(event)
        puts "in #{self}.#{__method__}, message"
        if event[:handler]
          puts "in #{self}.#{__method__}, event[:handler]: #{event[:handler]}"
          event[:handler].constantize.handle_event(event)
        else
          event_name = event[:event_name].to_sym
          EVENT_HANDLERS[event_name].each do |handler|
            event[:handler] = handler
            publish(event)
          end
        end
        return { success: true }
      end

    end
  end
end
Following the instructions on Codetunes, I have:
# Rakefile
# Add your own tasks in files placed in lib/tasks ending in .rake,
# for example lib/tasks/capistrano.rake, and they will automatically be available to Rake.
require File.expand_path('../config/application', __FILE__)
require 'sneakers/tasks'
Rails.application.load_tasks
And:
# app/config/sneakers.rb
Sneakers.configure({})
Sneakers.logger.level = Logger::INFO # the default DEBUG is too noisy
I open two console windows. In the first, I say (to get my listener running):
$ WORKERS=Messaging::EventsQueueReceiver rake sneakers:run
... a bunch of start up info
2016-03-18T14:16:42Z p-5877 t-14d03e INFO: Heartbeat interval used (in seconds): 2
2016-03-18T14:16:42Z p-5899 t-14d03e INFO: Heartbeat interval used (in seconds): 2
2016-03-18T14:16:42Z p-5922 t-14d03e INFO: Heartbeat interval used (in seconds): 2
2016-03-18T14:16:42Z p-5944 t-14d03e INFO: Heartbeat interval used (in seconds): 2
In the second, I say:
$ rails s --sandbox
2.1.2 :001 > Messaging::Publisher.publish({:event=>{:event_name=>"enroll_in_program", :program_system_name=>"aha_chh", :person_id=>1}})
in Messaging::Publisher.publish, [x] Sent {:event=>{:event_name=>"enroll_in_program", :program_system_name=>"aha_chh", :person_id=>1}}!
=> :closed
Then, back in my first window, I see:
2016-03-18T14:17:44Z p-5877 t-19nfxy INFO: {"event_name":"enroll_in_program","program_system_name":"aha_chh","person_id":1}
in Messaging::EventsAgent.distribute, message
in Messaging::EventsAgent.distribute, event[:handler]: FSM::StateAssignmentAgent
And in my RabbitMQ server, I see:
It's a pretty minimal setup and I'm sure I'll be learning a lot more in coming days.
Good luck!
I'm afraid that for RabbitMQ at least you will need a client. RabbitMQ implements the AMQP protocol, as opposed to the HTTP protocol used by web servers. As Sergio mentioned above, Rails is a web framework, so it doesn't have AMQP support built into it. You'll have to use an AMQP client such as Bunny in order to subscribe to a Rabbit queue from within a Rails app.
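To make that concrete, a minimal sketch of subscribing to a queue with Bunny from a long-running process (the queue name and handling code are illustrative):
# Hypothetical sketch: run this from a rake task or a separate process,
# not from inside a web request.
require "bunny"

connection = Bunny.new
connection.start

channel = connection.create_channel
queue   = channel.queue("events_queue", durable: true)

queue.subscribe(block: true, manual_ack: true) do |delivery_info, _properties, payload|
  # handle the payload, then acknowledge it
  puts "Received: #{payload}"
  channel.ack(delivery_info.delivery_tag)
end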
Let's say Service A is sending events to a Kafka queue. You can have a background process running alongside your Rails app which looks at the Kafka queue and processes the queued messages. For the background process you can use something like a cron job or Sidekiq.
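As a rough sketch of that approach with Kafka, using the ruby-kafka gem (the broker address and topic name are placeholders):
# Hypothetical sketch: a long-running consumer started alongside the Rails app,
# e.g. from a rake task or a process managed by foreman/systemd.
require "kafka"

kafka = Kafka.new(["localhost:9092"])

kafka.each_message(topic: "service_a_events") do |message|
  # hand the event off to the Rails application code here
  puts "Received #{message.value} at offset #{message.offset}"
end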
Rails is a lot of things. Parts of it handle web requests. Other parts (ActiveRecord) don't care if you are a web request or a script or whatever. Rails itself does not even come with a production worthy web server, you use other gems (e.g., thin for plain old web browsers, or wash_out for incoming SOAP requests) for that. Rails only gives you the infrastructure/middleware to combine all the pieces regarding servers.
Unless your queue can call out to your application in some fashion over HTTP, for example in the form of SOAP requests, you'll need something that listens to your queueing system, whatever that may be, and translates new "tickets" on your queue into controller actions in your Rails world.
I've used Heroku tutorial to implement websockets.
It works properly with Thin, but does not work with Unicorn and Puma.
Also there's an echo message implemented, which responds to the client's message. It works properly on each server, so there are no problems with the websockets implementation.
Redis setup is also correct (it catches all messages, and executes the code inside subscribe block).
How does it work now:
On server start, an empty @clients array is initialized. Then a new Thread is started, which listens to Redis and is intended to send incoming messages to the corresponding user from the @clients array.
On page load, a new websocket connection is created and stored in the @clients array.
If we receive a message from the browser, we send it back to all clients connected as the same user (that part works properly on both Thin and Puma).
If we receive a message from Redis, we also look up all of the user's connections stored in the @clients array.
This is where the weird thing happens:
If running with Thin, it finds the connections in the @clients array and sends the message to them.
If running with Puma/Unicorn, the @clients array is always empty, even if we try it in this order (without page reload or anything):
Send message from browser -> @clients.length is 1, message is delivered
Send message via Redis -> @clients.length is 0, message is lost
Send message from browser -> @clients.length is still 1, message is delivered
Could someone please clarify what I am missing?
Related config of Puma server:
workers 1
threads_count = 1
threads threads_count, threads_count
Related middleware code:
require 'faye/websocket'

class NotificationsBackend

  def initialize(app)
    @app     = app
    @clients = []

    Thread.new do
      redis_sub = Redis.new
      redis_sub.subscribe(CHANNEL) do |on|
        on.message do |channel, msg|
          # logging @clients.length from here will always return 0
          # [..] parse msg, retrieve user and message
          send_message(user.id, message)
        end
      end
    end
  end

  def call(env)
    if Faye::WebSocket.websocket?(env)
      ws = Faye::WebSocket.new(env, nil, { ping: KEEPALIVE_TIME })

      ws.on :open do |event|
        # [..] retrieve current user
        if user
          # add ws connection to @clients array
        else
          # close ws
        end
      end

      ws.on :message do |event|
        # [..] retrieve current user
        Redis.current.publish(CHANNEL, { user_id: user.id, message: "ECHO: #{event.data}" }.to_json)
      end

      ws.rack_response
    else
      @app.call(env)
    end
  end

  def send_message(user_id, message)
    # logging @clients.length here will always return the correct result
    # cs = all connections which belong to that client
    cs.each { |c| c.send(message.to_json) }
  end
end
Unicorn (and apparently puma) both start up a master process and then fork one or more workers. fork copies (or at least presents the illusion of copying - an actual copy usually only happens as you write to pages) your entire process but only the thread that called fork exists in the new process.
Clearly your app is being initialised before being forked - this is normally done so that workers can start quickly and benefit from copy-on-write memory savings. As a consequence, your Redis-checking thread is only running in the master process, whereas @clients is being modified in the child process.
You can probably work around this by either deferring the creation of your Redis thread or disabling app preloading; however, you should be aware that your setup will prevent you from scaling beyond a single worker process (which, with Puma and a thread-friendly Ruby implementation like JRuby, would be less of a constraint).
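A tiny standalone illustration of that fork behaviour (MRI on a Unix-like system): the background thread exists only in the parent, while the child starts with just the thread that called fork.
# Demonstrates that threads do not survive fork: the child sees only one thread.
t = Thread.new { sleep }

pid = fork do
  puts "threads in child:  #{Thread.list.size}"  # => 1
end

puts "threads in parent: #{Thread.list.size}"    # => 2
Process.wait(pid)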
Just in case somebody faces the same problem, here are the two solutions I have come up with:
1. Disable app preloading (this was the first solution I came up with)
Simply remove preload_app! from the puma.rb file. That way, each worker will have its own @clients variable, and it will be accessible from the other middleware methods (like call, etc.).
Drawback: you will lose all benefits of app preloading. It is OK if you have only 1 or 2 workers with a couple of threads, but if you need a lot of them, then it's better to have app preloading. So I continued my research, and here is another solution:
2. Move thread initialization out of initialize method (this is what I use now)
For example, I moved it to the call method, so this is how the middleware class code looks:
attr_accessor :subscriber

def call(env)
  @subscriber ||= Thread.new do # if no subscriber present, init new one
    redis_sub = Redis.new(url: ENV['REDISCLOUD_URL'])
    redis_sub.subscribe(CHANNEL) do |on|
      on.message do |_, msg|
        # parsing message code here, retrieve user and message
        send_message(user.id, message)
      end
    end
  end
  # other code from method
end
Both solutions solve the same problem: the Redis-listening thread gets initialized for each Puma worker/thread, rather than in the main process (which does not actually serve requests).
For my Rails app I need to notify all connected clients when new data is uploaded, so I want to use websockets. Currently I have created a new file in initializers which starts the socket server in a new thread:
require 'em-websocket'

$websocket_clients = []

Thread.new do
  EventMachine.run {
    EventMachine::WebSocket.start(:host => "0.0.0.0", :port => 8080) do |ws|
      ws.onopen {
        $websocket_clients << ws
      }
      ws.onclose {
        $websocket_clients.delete(ws)
      }
    end
  }
end
So I can use
$websocket_clients.each do |ws|
  ws.send "text"
end
in my controller.
My question now is: is this good practice, or will I experience any problems with it?
This may cause problems when you deploy your application. When you deploy, you usually fork multiple worker processes which each handle requests - at least in the two most popular servers (Phusion Passenger and Unicorn).
Each worker process will try to start a websocket thread. The first one starts smoothly, but the next ones will probably crash because the port is blocked by the first one. If you fix this problem and are just using the code to distribute messages to the clients, as posted above, it will probably work without major problems.
However, problems will arise if you start to query your database, unless you enable thread safety in ActiveRecord. When the websocket part of your application gets larger, you can move it into an extra daemon that handles requests separately from the server processes.
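As a rough illustration of that last suggestion (not part of the original answer), the EventMachine code from the question could live in its own small script and run as a separate daemon, with the Rails processes pushing messages to it over something like a Redis channel instead of holding the connections themselves:
# websocket_daemon.rb - hypothetical standalone daemon, run with: ruby websocket_daemon.rb
require 'em-websocket'

clients = []

EventMachine.run do
  EventMachine::WebSocket.start(:host => "0.0.0.0", :port => 8080) do |ws|
    ws.onopen  { clients << ws }
    ws.onclose { clients.delete(ws) }
  end
end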
I'm trying to use Faye to build a simple chat room with Rails and host it on Heroku. So far I have been able to get the Faye server running and instant messaging working. The crucial lines of code that I'm using are:
Javascript file launched when the page loads:
$(function() {
  var faye = new Faye.Client(<< My Faye server on Heroku here >>);
  faye.subscribe("/messages/new", function(data) {
    eval(data);
  });
});
create.js.erb, triggered when the user sends a message
<% broadcast "/messages/new" do %>
  $("#chat").append("<%= j render(@message) %>");
<% end %>
Everything is working fine, but now I would like to notify when a user disconnects from the chat. How should I do this?
I already looked at the monitoring section on Faye's website, but it's not clear where I should put that code.
Event monitoring goes in your rackup file. Here is an example I'm using in production:
Faye::WebSocket.load_adapter('thin')

server = Faye::RackAdapter.new(mount: '/faye', timeout: 25)

server.bind(:disconnect) do |client_id|
  puts "Client #{client_id} disconnected"
end

run server
Of course you can do whatever you like in the block you pass to #bind.
You may want to bind to the subscribe and unsubscribe events instead of the disconnect event. Read the word of warning at the bottom of the Faye monitoring docs.
This has worked well for me:
server.bind(:subscribe) do |client_id|
  # code to execute
  # puts "Client #{client_id} connected"
end

server.bind(:unsubscribe) do |client_id|
  # code to execute
  # puts "Client #{client_id} disconnected"
end
I also recommend using the private_pub gem - this will help secure your Faye app.