Testing Faye for message success from Rails app to Faye server

I have my main Rails server and a separate Faye server (localhost:9292) set up correctly to handle messaging. Everything works as expected, but I'm having trouble testing it.
I want to set up the test as a worker that runs once a day, and if a message isn't received I want the app to send me an email. Here's what I have so far, based on the info in Faye's docs:
require 'eventmachine'
require 'faye'  # Faye::Client lives in the faye gem, not faye-websocket

class PingFayeServerWorker < BaseWorker
  def process
    EventMachine.run {
      client = Faye::Client.new('http://localhost:9292/faye')
      client.set_header('Authorization', 'OAuth abcd-1234')

      client.subscribe('/foo') do |message|
        puts message.inspect
      end

      publication = client.publish('/foo', 'text' => 'Hello world')
      publication.callback do
        puts 'Message received by server!'
        EM.stop  # stop the reactor so the worker can finish
      end
      publication.errback do |error|
        puts 'There was a problem: ' + error.message
        EM.stop
      end
    }
  end
end
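What the worker above doesn't cover yet is the failure path: if the server never acknowledges the publish, neither callback fires and no email goes out. A minimal sketch of one way to handle that, using an EventMachine timer as a timeout and assuming a mailer named PingFailureMailer (hypothetical, not from the original app):

require 'eventmachine'
require 'faye'

# A minimal sketch: publish a ping and email if no ack arrives within 10 seconds.
# PingFailureMailer is a hypothetical mailer; substitute your own.
EventMachine.run {
  client = Faye::Client.new('http://localhost:9292/faye')
  client.set_header('Authorization', 'OAuth abcd-1234')

  timeout = EM.add_timer(10) do
    PingFailureMailer.ping_failed('no ack within 10s').deliver
    EM.stop
  end

  publication = client.publish('/foo', 'text' => 'ping')
  publication.callback do
    EM.cancel_timer(timeout)
    EM.stop
  end
  publication.errback do |error|
    EM.cancel_timer(timeout)
    PingFailureMailer.ping_failed(error.message).deliver
    EM.stop
  end
}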

Related

Faye websocket - 200 error

I'm building an app that supports realtime bidding using faye-websocket, but I'm getting this 200 error and have no idea what the problem is.
Error:
WebSocket connection to 'ws://localhost/auctions/3' failed: Error during WebSocket handshake: Unexpected response code: 200
SocketConnection.rb
require 'faye/websocket'
require 'websocket/extensions'
require 'thread'
require 'json'

class SocketConnection
  KEEPALIVE_TIME = 15 # in seconds

  def initialize(app)
    @app = app
  end

  def call(env)
    if Faye::WebSocket.websocket?(env)
      socket = Faye::WebSocket.new(env)
      socket.ping 'Mic check, one, two' do
        p [:ping, socket.object_id, socket.url]
      end
      socket.on :open do |event|
        p [:open, socket.object_id, socket.url]
        p [:open, socket.url, socket.version, socket.protocol]
      end
      socket.rack_response
    else
      @app.call(env)
    end
  end
end
I figured out the problem: it requires a server that supports socket connections. In my case I used the Thin server, and all the errors were fixed.
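For reference, a minimal sketch of wiring the middleware up under Thin (file names and the downstream app are assumptions, not from the original post):

# config.ru — a minimal sketch; adjust paths and the downstream app to your setup
require_relative 'socket_connection'
require_relative 'app'  # hypothetical downstream Rack app

use SocketConnection
run App.new

Start it with, for example, bundle exec thin start -R config.ru -p 3000. A server without async/hijack support (such as WEBrick) answers the upgrade request with a plain HTTP 200 instead of 101, which is exactly the "Unexpected response code: 200" handshake error above.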

rails global variable inaccessible in thread

I'm using Faye websockets and Redis to attempt a master/client websocket setup for a slideshow presentation.
The clients should all follow along with the master (i.e. the master moves forward a slide, a message is sent out to all of the clients, and they all move forward a slide).
The issue I'm having is that my list of client websockets is empty when I access it inside the thread created in the initialize function. The separate thread is necessary because Redis's subscribe is a blocking call.
This file is a middleware and runs on app boot.
I know that the point of global variables in Rails is that they're shared across threads, but in my case something seems to be preventing that.
Is there a way to store a list of websockets in a globally accessible place? Globally as in: for all running instances of the app on the same server.
(I can't use Redis for that, because it can't store objects.)
require 'faye/websocket'
require 'redis'
require 'uri'

class WsCommunication
  KEEPALIVE_TIME = 15 # seconds
  CHANNEL = 'vip'

  def initialize(app)
    @app = app
    $clients = []
    uri = URI.parse(ENV['REDISCLOUD_URL'])
    Thread.new do
      redis_sub = Redis.new(host: uri.host, port: uri.port, password: uri.password)
      redis_sub.subscribe(CHANNEL) do |on|
        on.message do |channel, msg|
          puts 'client list on thread'
          puts $clients
          #### prints nothing
          $clients.each { |ws| ws.send(msg) }
        end
      end
    end
  end

  def call(env)
    if Faye::WebSocket.websocket?(env)
      ws = Faye::WebSocket.new(env, nil, {ping: KEEPALIVE_TIME})
      ws.on :open do |event|
        $clients << ws
      end
      ws.on :message do |event|
        puts 'client list'
        puts $clients
        ### prints the full list of clients
        $redis.publish(CHANNEL, event.data)
      end
      ws.on :close do |event|
        $clients.delete(ws)
        ws = nil
      end
      # Return async Rack response
      ws.rack_response
    else
      @app.call(env)
    end
  end
end
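One thing worth ruling out (a guess, not a confirmed diagnosis): if the app server preforks or runs several worker processes, each process gets its own copy of $clients, and the subscriber thread may live in a different process than the one holding the connections. A quick hypothetical check is to log the process id in both places and compare:

puts "subscriber: pid=#{Process.pid}, clients=#{$clients.length}"  # inside the Redis thread
puts "handler: pid=#{Process.pid}, clients=#{$clients.length}"     # inside ws.on :message
# If the pids differ, each process has its own $clients array,
# and a process-local global variable can never bridge them.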

Heroku timeout when sending emails

I am on Heroku with a custom domain, and I have the Redis add-on. I need help understanding how to create a background worker for email notifications. Users can inbox-message each other, and I would like to send an email notification to the user for each new message received. I have the notifications working in development, but I am not good at creating background jobs, which Heroku requires here; otherwise the server times out.
Messages Controller:
def create
  @recipient = User.find(params[:user])
  current_user.send_message(@recipient, params[:body], params[:subject])
  flash[:notice] = "Message has been sent!"
  if request.xhr?
    render :json => {:notice => flash[:notice]}
  else
    redirect_to :conversations
  end
end
User model:
def mailboxer_email(object)
  if self.no_email
    email
  else
    nil
  end
end
Mailboxer.rb:
Mailboxer.setup do |config|
  # Configures whether your application uses email sending for Notifications and Messages
  config.uses_emails = false
  # Configures the default "from" address for emails sent for Mailboxer Messages and Notifications
  config.default_from = "no-reply@domain.com"
  # Configures the methods needed by mailboxer
  config.email_method = :mailboxer_email
  config.name_method = :name
  # Configures whether you use a search engine and which one
  # Supported engines: [:solr, :sphinx]
  config.search_enabled = false
  config.search_engine = :sphinx
end
Sidekiq is definitely the way to go on Heroku. I don't think Mailboxer supports background sending out of the box. Thankfully, it's still really easy with Sidekiq's queueing.
Add gem 'sidekiq' to your Gemfile and run bundle.
Create a worker file, app/workers/message_worker.rb:
class MessageWorker
  include Sidekiq::Worker

  def perform(sender_id, recipient_id, body, subject)
    sender = User.find(sender_id)
    recipient = User.find(recipient_id)
    sender.send_message(recipient, body, subject)
  end
end
Update your controller to queue up the worker.
Remove: current_user.send_message(@recipient, params[:body], params[:subject])
Add: MessageWorker.perform_async(current_user.id, @recipient.id, params[:body], params[:subject])
Note: you should never pass ActiveRecord objects to workers. That's why this method passes the user ids and looks the records up in the worker's perform method, instead of passing the entire objects.
Finally, restart your server and run bundle exec sidekiq. Now your app should be sending the email in the background.
When you deploy, you will need a separate dyno for the worker, which should look like this: worker: bundle exec sidekiq. You will also need Heroku's Redis add-on.
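For reference, a minimal Procfile sketch (the web line is an assumption; use whatever server your app already runs):

web: bundle exec rails server -p $PORT
worker: bundle exec sidekiq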
Sounds like an H12 Request Timeout:
An HTTP request took longer than 30 seconds to complete.
To create a background worker for this in Rails, you can grab Resque, a Redis-backed background queueing library. There are several demos of the setup around, and Heroku's own article on Resque is worth reading, as are the various tutorials (some of them are old).
There is also a resque_mailer gem that will speed things up for you.
gem install resque_mailer # or add it to your Gemfile and use Bundler
It is fairly straightforward. Here is a snippet from a working demo by the author:
class Notifier < ActionMailer::Base
  include Resque::Mailer
  default :from => "from@example.com"

  def test(data = {})
    data.symbolize_keys!
    Rails.logger.info "sending test mail"
    Rails.logger.info "params: #{data.keys.join(',')}"
    @subject = data[:subject] || "Testing mail"
    mail(:to => "nap@localhost.local",
         :subject => @subject)
  end
end
Calling Notifier.test.deliver will deliver the mail.
You can also consider using mail delivery services like SES.
Sidekiq is an option that you could consider. To get it working you can add something like RedisToGo, then configure an initializer for Redis. Then on Heroku you can add something like worker: bundle exec sidekiq ... to your Procfile.
https://github.com/mperham/sidekiq/wiki/Getting-Started
It also has a dashboard for monitoring.
https://github.com/mperham/sidekiq/wiki/Monitoring
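A minimal sketch of that Redis initializer, assuming the add-on exposes its URL as REDISTOGO_URL (the variable name depends on the add-on you pick):

# config/initializers/sidekiq.rb
Sidekiq.configure_server do |config|
  config.redis = { url: ENV['REDISTOGO_URL'] }
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV['REDISTOGO_URL'] }
end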

Unable to push the method to the rabbitmq queue

I am working with RabbitMQ, trying to push a message onto a queue from my Ruby on Rails app, while a server-side Ruby script reads the queue and executes the method sent in the payload. Here is my client-side code.
require 'amqp'

module Rabbitesh
  #debugger
  def self.call_rabbits(payload, queue_name)
    AMQP.start(:host => "localhost") do |connection|
      channel = AMQP::Channel.new(connection)
      queue = channel.queue(queue_name)
      channel.default_exchange.publish(payload, :routing_key => queue.name)
      #EM.add_timer(0.01) do
      connection.close do
      #end
      end
    end
  end
end
This is how I call the RabbitMQ function:
Rabbitesh::call_rabbits(obj, "welcome_mail")
where "welcome_mail" is the queue name.
This is the server-side script:
require 'rubygems'
require 'amqp'
require 'daemons'

options = { :backtrace => true, :dir => '.', :log_output => true }

Daemons.run_proc('rabbitmq_daemon', options) do
  AMQP.start(:host => "localhost") do |connection|
    channel = AMQP::Channel.new(connection)
    queue = channel.queue("welcome_mail")

    Signal.trap("INT") do
      connection.close do
        EM.stop { exit }
      end
    end

    puts " [*] Waiting for messages. To exit press CTRL+C"

    queue.subscribe do |body|
      UserMailers.welcome_organic(body).deliver
    end
  end
end
The problem: when my Rails app calls the RabbitMQ function, the console stops there saying "updating client properties", and even though my server-side Ruby script is running, it never reads the queue and executes the process. I can't work out what's wrong with the code; kindly help me out.
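One thing worth checking (a guess, not a confirmed fix for the post above): AMQP.start boots its own EventMachine reactor and blocks the calling thread, which is a poor fit inside a Rails request cycle. A synchronous client such as Bunny avoids the reactor entirely; a minimal sketch of the same publish:

require 'bunny'

# A minimal synchronous publish with the Bunny gem
# (an alternative to the amqp gem, not the original poster's code).
connection = Bunny.new  # defaults to localhost:5672
connection.start
channel = connection.create_channel
queue = channel.queue("welcome_mail")
channel.default_exchange.publish("payload", :routing_key => queue.name)
connection.close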

Rails + TweetStream gem reconnecting

Hey, I just tested the TweetStream gem.
Example:
TweetStream::Client.new('myuser', 'mypass').track('ruby', 'rails') do |status|
  puts "[#{status.user.screen_name}] #{status.text}"
end
This example works.
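Since the questions below mention the daemon: the gem also ships a daemonized wrapper with the same interface. A sketch based on the gem's old README (the 'tracker' process name is an assumption):

TweetStream::Daemon.new('myuser', 'mypass', 'tracker').track('ruby', 'rails') do |status|
  puts "[#{status.user.screen_name}] #{status.text}"
end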
Questions:
I tried restarting my router (internet connection lost) and after that no new messages arrived. Can someone explain this behavior to me?
I tested the daemon. What happens if no internet connection is available for a day or more? Will it reconnect automatically?
I like the rufus-scheduler gem (for background processes). Can I somehow integrate this code with Rufus so I can check whether the process is still active?
My reconnect solution (config/initializers/tweet_stream.rb):
client = nil
scheduler = Rufus::Scheduler.start_new

scheduler.every '30min', :first_in => '1s' do |job|
  client.stop rescue nil
  client = TweetStream::Client.new('user', 'pass').on_error do |message|
    Rails.logger.error "[Rufus][#{Time.now}] TweetStream error: #{message}"
  end.track('love') do |status|
    Rails.logger.info "[TweetStream] Status: #{status.id}"
  end
end
Thx!
