There are three parts to my Rails application:
a part which listens for HTML5 WebSockets (em-websocket),
a piece of JavaScript which connects to them,
and a part whose task is to connect to these sockets from inside the same web application (em-websocket-client). (Yes, I am trying to do some IPC in a Phusion Passenger environment.)
The JavaScript code connects fine and the WebSocket server is happy with such a client, but when I connect from em-websocket-client a strange thing happens: the onclose handler is called without onopen ever being called, and moreover it is called for a socket that was opened by the web browser, not by em-websocket-client.
The same em-websocket-client code, when run as a separate Ruby script from the command line, works as planned. Here is a sample of the em-websocket-client code:
require 'em-websocket-client'

class WebSocketsClient
  def initialize
    Thread.new do
      log 'In a thread'
      EventMachine.run do
        log 'EM run'
        @conn = EventMachine::WebSocketClient.connect("ws://localhost:5050?user_id=1&page_token=JYUTbfYDTTliglififi")

        @conn.callback do
          log 'Callback'
          @conn.send_msg({ message_type: 'phone_call', user_id: 1, order_id: 1 }.to_json)
          @conn.close_connection
        end

        @conn.errback do |e|
          log 'Errback'
          puts "Got error: #{e}"
        end

        @conn.stream do |msg|
          #log 'Stream'
          #puts "<#{msg}>"
          #if msg.data == 'done'
          #  @conn.close_connection
          #end
        end

        @conn.disconnect do
          puts 'gone'
          EventMachine::stop_event_loop
        end
      end
    end
  end

  def send_phone_call(order_id, user_id)
    @conn.send_msg({ message_type: 'phone_call', user_id: user_id, order_id: order_id }.to_json)
  end

  def log(text)
    puts "WebSocketsClient: #{text}\n"
  end
end

WebSocketsClient.new
onclose on the server side is called as soon as EventMachine::WebSocketClient.connect is executed on the client side. Execution never even reaches the @conn.disconnect call.
The only conjecture I can offer is that this behaviour comes from the server and the client sharing the same EventMachine reactor inside the same Rails application.
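If that conjecture is right, one direction to try is to stop calling EventMachine.run a second time and instead schedule the client on the reactor that em-websocket already started. A rough sketch of that idea (an assumption on my part, not a confirmed fix):

require 'em-websocket-client'

# Sketch: attach the client to an already-running reactor instead of starting a new one.
def start_client
  connect = proc do
    @conn = EventMachine::WebSocketClient.connect('ws://localhost:5050?user_id=1&page_token=JYUTbfYDTTliglififi')
    @conn.callback { log 'Callback' }
    @conn.errback  { |e| log "Errback: #{e}" }
  end

  if EventMachine.reactor_running?
    EventMachine.schedule(&connect)           # hop onto the existing reactor thread
  else
    Thread.new { EventMachine.run(&connect) } # fall back to starting our own reactor
  end
end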
Related
Lately I have been experimenting with Ruby and WebSockets, so I created a new Rails 5 project with ActionCable, and everything seems to work fine with it.
But I also created a plain Ruby script using Faye's WebSocket client. Unlike most tutorials on the internet, I want to try a server-side script (acting as a client), not a frontend JS script inside an HTML file.
I tried the basic usage and got the handshake to complete successfully, but I can't continue testing because I can't figure out where to subscribe to a channel exposed by the Rails server once I'm connected.
Here is my Ruby script:
require 'faye/websocket'
require 'eventmachine'

EM.run {
  ws = Faye::WebSocket::Client.new('ws://localhost:3001/cable', nil, {
    headers: { 'Origin' => 'ws://localhost:3001/cable' }
  })

  ws.on :open do |event|
    p [:open]
    ws.send({ data: 'hello' })
  end

  ws.on :message do |event|
    p [:message, event.data]
  end

  ws.on :close do |event|
    p [:close, event.code, event.reason]
    ws = nil
  end

  ws.on :error do |event|
    p [:error, event.message]
    ws = nil
  end

  ws.send({ data: 'yoyoyooy' }) # This gets sent to nowhere..

  # I was hoping to subscribe to a channel and register callbacks for it, something like:
  # ws.subscribe('my-channel', receive_message_callback, error_callback)
}
On the ActionCable side my connection class does trigger the connect method, but I am still unsure how to interact with a channel - that is, how to subscribe from the client so I can start sending and receiving messages.
By reading the README of the websocket action cable client gem I realized that any WebSocket client will do the job; I just need to send the payload that ActionCable expects in the send call. In this case:

ws.send({ command: 'subscribe', identifier: { channel: channel, some_id: '1' }.to_json }.to_json)

Off-topic: I couldn't find this in the Rails overview page; I just needed to search for the ActionCable protocol.
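Putting that together with the Faye client from the question, the raw exchange could look roughly like the sketch below. ChatChannel, some_id and the speak action are made-up names here; only the command/identifier/data envelope comes from the ActionCable protocol.

require 'json'

identifier = { channel: 'ChatChannel', some_id: '1' }.to_json

ws.on :open do |event|
  # Step 1: ask ActionCable to subscribe us to the channel.
  ws.send({ command: 'subscribe', identifier: identifier }.to_json)
end

ws.on :message do |event|
  msg = JSON.parse(event.data)
  next if msg['type'] == 'ping'   # ActionCable sends periodic pings; ignore them

  if msg['type'] == 'confirm_subscription'
    # Step 2: once subscribed, perform a channel action ("speak" is hypothetical).
    ws.send({ command: 'message',
              identifier: identifier,
              data: { action: 'speak', text: 'hello' }.to_json }.to_json)
  else
    # Broadcasts arrive wrapped as { "identifier" => ..., "message" => ... }.
    p [:broadcast, msg['message']]
  end
end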
How can I get RSpec (or even plain Ruby) to read a Faye message? The messages come through in Faye's log all right, but I can't seem to connect to Faye through RSpec:
it 'gets Faye message' do
  EM.run do
    client = Faye::Client.new('http://localhost:9292/faye')

    sub = client.subscribe('/documents') do |message|
      puts message
    end

    sub.callback do |message|
      puts message
    end
  end
end
This just hangs. The messages come through in Faye's log. What am I doing wrong?
http://www.rubydoc.info/github/eventmachine/eventmachine/EventMachine.run
(Read the NOTE block)
I'd say the EM.run call blocks (it never returns and just waits for connections), and that's why your test hangs.
I'm not really seeing what your test is trying to do, though, so I can't give you a pointer on how to improve it.
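One way to keep a spec like this from hanging forever (a sketch of my own, not part of the answer above) is to give the reactor an explicit stop condition: stop as soon as a message arrives, plus a timeout as a safety net.

it 'gets a Faye message without hanging' do
  received = nil

  EM.run do
    EM.add_timer(5) { EM.stop }   # safety net: stop the reactor after 5 seconds
    client = Faye::Client.new('http://localhost:9292/faye')
    client.subscribe('/documents') do |message|
      received = message
      EM.stop                     # stop as soon as something arrives
    end
  end

  expect(received).not_to be_nil
end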
So I've solved my own problem. I'll post my solution here in the event that it helps someone else.
it 'generates a document process and gets a push response from faye' do
  EM.run do
    client = Faye::Client.new('http://localhost:9292/faye')
    Thread.new { subject.perform(:generate, id) }

    client.subscribe "/documents/#{id}/notify" do |response|
      publication = client.publish("/documents/#{id}/notify", '0')

      publication.callback do
        if response != '0'
          expect(response).to eq(id.to_s)
          EM.stop
        else
          p "FAYE RESPONSE: #{response}" # diagnostic only
        end
      end

      publication.errback { |error| p "FAYE RESPONSE: #{error.inspect}" }
    end
  end
end
My end game was simply to get RSpec to receive the Faye messages sent from the subject.perform... process. Mission accomplished. Not the neatest thing in the world, but who cares.
My server gets stuck when users open the SSE stream many times, because it seems Redis has some issues with SSE: the stream won't be closed even when clients close the browser or go to another page.
By the way, I don't know when or where
logger.info "Stream closed"
logger.info "Client disconnected"
will be invoked (they are not invoked when I close the browser).
Is there a workaround to avoid this issue?
def new_prizes_stream
  # http://ngauthier.com/2013/02/rails-4-sse-notify-listen.html
  begin
    response.headers.delete('Content-Length')
    response.headers['Cache-Control'] = 'no-cache'
    response.headers['Content-Type'] = 'text/event-stream'
    logger.info "New stream starting, connecting to redis"
    redis = Redis.new
    redis.subscribe('messages.create', 'heartbeat') do |on|
      on.message do |event, data|
        if event == 'messages.create'
          response.stream.write "event: #{event}\n"
          response.stream.write "data: #{data}\n\n"
        elsif event == 'heartbeat'
          response.stream.write("event: heartbeat\ndata: heartbeat\n\n")
        end
      end
    end
  rescue IOError
    logger.info "Stream closed"
  rescue ActionController::Live::ClientDisconnected
    logger.info "Client disconnected"
  ensure
    ap "close a live stream"
    redis.quit
    response.stream.close
  end
end
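A note on those rescue blocks: ActionController::Live generally only notices a dead client on the next write to the stream, so "Stream closed" / "Client disconnected" will not be logged just because the browser went away. A common workaround is to publish a periodic heartbeat from elsewhere in the app, so that the next response.stream.write fails with IOError and the ensure block runs. A rough sketch, with the initializer location, the thread approach and the interval all being my assumptions:

# config/initializers/sse_heartbeat.rb (sketch)
# Publish a heartbeat every few seconds so stale SSE streams fail on write
# and fall through to the rescue/ensure blocks in the controller above.
Thread.new do
  redis = Redis.new
  loop do
    redis.publish('heartbeat', 'thump')
    sleep 3
  end
end

Under a forking server such as Passenger you would want to start this after fork, or run it as a separate process or scheduled task instead.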
My recommendation is that you do not create a new connection for every request/SSE stream, and that you benchmark the results. Every time you execute

redis = Redis.new

you open a brand-new connection to the Redis server. If you can reuse connections (singleton or factory patterns), instead of running this you would do something like:

redis = myPoolObj.getRedisConnection()

You then decide how that pool behaves and how many connections you want to keep open. I checked the redis-rb docs but did not see a built-in API for this like the one I saw when inspecting the Python client.
You can open an issue on the repo asking whether there's a built-in way to do this without managing your own pool.
Did this help?
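If you go the pooling route, the connection_pool gem is one concrete way to do it; a minimal sketch under that assumption (it is not built into redis-rb):

require 'connection_pool'
require 'redis'

# A process-wide pool of Redis connections, e.g. set up in an initializer.
REDIS_POOL = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }

# Check a connection out only for as long as it is needed.
REDIS_POOL.with do |redis|
  redis.publish('messages.create', { text: 'hello' }.to_json)
end

Note that subscribe holds its connection for the life of the stream, so pooling mainly helps the publishing side; each open SSE stream still ties up one Redis connection of its own.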
Here is my code:
#!/usr/bin/env ruby
# encoding: utf-8

require 'rubygems'
require 'amqp'
require 'mq'

class ParsepdfClient
  def self.test
    EventMachine.run do
      connection = AMQP.connect(:host => '127.0.0.1')
      puts "Connected to AMQP broker. Running #{AMQP::VERSION} version of the gem..."

      channel  = AMQP::Channel.new(connection)
      queue    = channel.queue("amqpgem.examples.helloworld", :auto_delete => true)
      exchange = channel.direct("")

      queue.subscribe do |payload|
        sleep(1.minutes)
        puts "Received a message: #{payload}. Disconnecting..."
        connection.close { EventMachine.stop }
      end

      exchange.publish "Hello, world!", :routing_key => queue.name
    end
  end
end
I'm using RabbitMQ as the broker with the Rails amqp gem, and I'm calling
ParsepdfClient.test from a controller.
From my understanding the call shouldn't sleep for a minute, but it waits for a minute, outputs
"Received a message: Hello, world!. Disconnecting..."
and only then executes the rest of my controller code. Shouldn't the call be asynchronous?
If not, how can I make it asynchronous?
What I mean is: shouldn't it execute the rest of my controller code right after outputting
Connected to AMQP broker. Running #{AMQP::VERSION} version of the gem...
I managed to implement RabbitMQ in Rails successfully with the help of my senior. If anyone is having any issues, I have written a blog post here.
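For context on why the original call blocks: EventMachine.run does not return until the reactor is stopped, so the controller sits inside it (including the sleep in the subscribe block) before continuing. A rough sketch of one way around that, assuming you are happy to run the reactor in a background thread (this is my sketch, not necessarily what the blog post describes):

class ParsepdfClient
  def self.start
    # Start the reactor once, off the request thread.
    Thread.new { EventMachine.run } unless EventMachine.reactor_running?
    sleep 0.1 until EventMachine.reactor_running?

    EventMachine.next_tick do
      connection = AMQP.connect(:host => '127.0.0.1')
      channel    = AMQP::Channel.new(connection)
      queue      = channel.queue("amqpgem.examples.helloworld", :auto_delete => true)

      # Messages are handled on the reactor thread as they arrive;
      # the controller that called ParsepdfClient.start has already returned.
      queue.subscribe do |payload|
        puts "Received a message: #{payload}."
      end
    end
  end
end

Under Phusion Passenger you would also need to restart this thread after the process is forked.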
With the launch of Amazon's Relational Database Service today and its 'enforced' maintenance windows, I wondered whether anyone has a solution for handling a missing database connection in Rails.
Ideally I'd like to automatically present a maintenance page to visitors if the database connection disappears (i.e. while Amazon is doing its maintenance) - has anyone ever done anything like this?
Cheers
Arfon
You can do this with a Rack Middleware:
class RescueFromNoDB < Struct.new(:app)
  def call(env)
    app.call(env)
  rescue Mysql::Error => e
    if e.message =~ /Can't connect to/
      [500, {"Content-Type" => "text/plain"}, ["Can't get to the DB server right now."]]
    else
      raise
    end
  end
end
Obviously you can customize the error message, and the e.message =~ /Can't connect to/ check may just be paranoia; almost all other SQL errors should be caught inside ActionController::Dispatcher.
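To get the middleware into the stack you still need to register it; a minimal example, assuming the class is loadable from your app (the MyApp module name is hypothetical):

# config/application.rb
module MyApp
  class Application < Rails::Application
    # Wrap the whole stack so any request that touches the DB gets the friendly error.
    config.middleware.use RescueFromNoDB
  end
end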