I have two Rails applications on Heroku, App1 and App2 (App2 has the CloudAMQP gem added). App1 publishes a message when a button is clicked:
App1
require 'bunny'

class Publisher
  def publish
    # Start a communication session with RabbitMQ
    connection = Bunny.new(:host => "chimpanzee.rmq.cloudamqp.com", :vhost => "test", :user => "test", :password => "password")
    connection.start
    # Open a channel
    channel = connection.create_channel
    # Declare a queue
    queue = channel.queue("test1")
    # Publish a message to the default exchange, which then gets routed to this queue
    queue.publish("Hello, everybody!")
    # Close the connection so each click doesn't leak a connection
    connection.close
  end
end
In App2 I have to consume all those messages without any button click and hand them to Sidekiq to process the data. I am stuck on how to read from that queue automatically; I know the code for reading values from a queue. People are suggesting the sneakers gem, but I am a bit confused about Sidekiq versus Sneakers. Any idea how we can do this on Heroku?
To read the messages you publish from App1, in App2 you are going to need sneakers (https://github.com/jondot/sneakers).
Your reader would look something like this:
class Reader
  include Sneakers::Worker
  from_queue 'test1'

  def work(message)
    # your code
    ack!
  end
end
You also need to configure your environment; take a look at https://github.com/jondot/sneakers/wiki/Configuration.
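As a rough sketch, the configuration side in App2 could look like this; CLOUDAMQP_URL is the environment variable the CloudAMQP add-on sets on Heroku, so adjust the name if your setup differs:
# config/initializers/sneakers.rb -- minimal sketch, assuming the CloudAMQP add-on
require 'sneakers'

Sneakers.configure amqp: ENV['CLOUDAMQP_URL'], # assumption: CloudAMQP sets this var
                   threads: 1,
                   workers: 1
Then a worker dyno in App2's Procfile keeps the consumer running without any button click:
worker: WORKERS=Reader bundle exec rake sneakers:run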
Related
I am trying to run message queues on Heroku. For this I am using the RabbitMQ Bigwig plugin.
I am publishing messages using the bunny gem and trying to receive messages with the sneakers gem. This whole setup works smoothly on my local machine.
I take the following steps to set up the queue.
I run this rake task on the server:
namespace :rabbitmq do
  desc 'Setup routing'
  task :setup_test_commands_queue do
    require 'bunny'

    conn = Bunny.new(ENV['SYNC_AMQP'], read_timeout: 10, heartbeat: 10)
    conn.start
    ch = conn.create_channel
    # get or create exchange
    x = ch.direct('testsync.pcc', :durable => true)
    # get or create queue (note the durable setting; :ack and :routing_key are not
    # queue-declare options in Bunny -- the binding below sets the routing key)
    queue = ch.queue('test.commands', :durable => true)
    # bind queue to exchange
    queue.bind(x, :routing_key => 'test_cmd')
    conn.close
  end
end
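I run it once against the Heroku app like this (the app name is a placeholder):
heroku run rake rabbitmq:setup_test_commands_queue -a my-app-name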
I am able to see this queue in the RabbitMQ management plugin, with the mentioned binding.
class TestPublisher
  def self.publish(test)
    x = channel.direct("testsync.pcc", :durable => true)
    puts "publishing this = #{test}"
    x.publish(test, :persistent => true, :routing_key => 'pcc_cmd')
  end

  def self.channel
    @channel ||= connection.create_channel
  end

  def self.connection
    # getting configuration from rabbitmq.yml
    @conn = Bunny.new(ENV['RABBITMQ_BIGWIG_TX_URL'], read_timeout: 10, heartbeat: 10)
    @conn.start
  end
end
I am calling TestPublisher.publish() to publish the message.
I have a Sneakers worker like this:
require 'test_sync'

class TestsWorker
  include Sneakers::Worker
  from_queue "test.commands", env: nil

  def work(raw_event)
    puts "^" * 100
    puts raw_event
    # o = CaseNote.create!(content: raw_event, creator_id: 1)
    # puts "#########{o}"
    test = Oj.load raw_event
    test.execute
    # event_params = JSON.parse(raw_event)
    # SomeWiseService.build.call(event_params)
    ack!
  end
end
My Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
sneaker: WORKERS=TestsWorker bundle exec rake sneakers:run
My Rakefile
require File.expand_path('../config/application', __FILE__)
require 'rake/dsl_definition'
require 'rake'
require 'sneakers/tasks'
Test::Application.load_tasks
My Sneakers configuration
require 'sneakers'

Sneakers.configure amqp: ENV['RABBITMQ_BIGWIG_RX_URL'],
                   log: "log/sneakers.log",
                   threads: 1,
                   workers: 1

puts "configuring sneaker"
I am sure that the message gets published; I am able to see it in the RabbitMQ management plugin. But the Sneakers worker does not pick it up, and there is nothing in sneakers.log that can help.
sneakers.log on heroku :
# Logfile created on 2016-04-05 14:40:59 +0530 by logger.rb/41212
Sorry for this late response. I was able to get this working on Heroku. When I faced this error, I was not able to fix it after hours of debugging, so I rewrote all of the above code and did not check what was wrong with my previous version.
The only difference between this code and the correct code is the queue binding.
I had two queues on the same exchange: pcc.commands with routing key pcc_cmd, and test.commands with routing key test_cmd.
I was working with test_cmd, but per the following line in TestPublisher:
x.publish(test, :persistent => true, :routing_key => 'pcc_cmd')
I was publishing to a different queue (pcc.commands). Hence I was not able to receive the message on the test.commands queue.
In TestsWorker:
from_queue "test.commands", env: nil
This states: fetch messages only from the test.commands queue.
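So the fix was simply to publish with the routing key the queue is actually bound to:
x.publish(test, :persistent => true, :routing_key => 'test_cmd')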
Regarding the sneakers.log file:
The above setup was not able to write anything to sneakers.log. Yes, this setup works on your local development machine, but it was not working on Heroku. These days, to debug such issues, I omit the log attribute from the configuration, like this:
require 'sneakers'

Sneakers.configure amqp: ENV['RABBITMQ_BIGWIG_RX_URL'],
                   # log: "log/sneakers.log",
                   threads: 1,
                   workers: 1
This way you will get the Sneakers logs (even heartbeat logs) in the Heroku logs, which can be seen by running heroku logs -a app_name --tail.
I am on Heroku with a custom domain, and I have the Redis add-on. I need help understanding how to create a background worker for email notifications. Users can send inbox messages to each other, and I would like to send an email notification to the user for each new message received. I have the notifications working in development, but I am not good at creating background jobs, which are required on Heroku; otherwise the server would time out.
Messages Controller:
def create
  @recipient = User.find(params[:user])
  current_user.send_message(@recipient, params[:body], params[:subject])
  flash[:notice] = "Message has been sent!"
  if request.xhr?
    render :json => { :notice => flash[:notice] }
  else
    redirect_to :conversations
  end
end
User model:
def mailboxer_email(object)
  if self.no_email
    email
  else
    nil
  end
end
Mailboxer.rb:
Mailboxer.setup do |config|
  # Configures whether your application uses email sending for Notifications and Messages
  config.uses_emails = false
  # Configures the default from address for emails sent for Messages and Notifications of Mailboxer
  config.default_from = "no-reply@domain.com"
  # Configures the methods needed by mailboxer
  config.email_method = :mailboxer_email
  config.name_method = :name
  # Configures whether you use a search engine and which one you are using
  # Supported engines: [:solr, :sphinx]
  config.search_enabled = false
  config.search_engine = :sphinx
end
Sidekiq is definitely the way to go with Heroku. I don't think mailboxer supports background configuration out of the box. Thankfully, it's still really easy with sidekiq's queueing process.
Add gem 'sidekiq' to your Gemfile and run bundle.
Create a worker file at app/workers/message_worker.rb:
class MessageWorker
  include Sidekiq::Worker

  def perform(sender_id, recipient_id, body, subject)
    sender = User.find(sender_id)
    recipient = User.find(recipient_id)
    sender.send_message(recipient, body, subject)
  end
end
Update your controller to queue up the worker.
Remove: current_user.send_message(@recipient, params[:body], params[:subject])
Add: MessageWorker.perform_async(current_user.id, @recipient.id, params[:body], params[:subject])
Note: you should never pass ActiveRecord objects to workers. That's why this method passes the user ids and looks them up in the worker's perform method, instead of passing the entire objects.
Finally, restart your server and run bundle exec sidekiq. Now your app should be sending the email in the background.
When you deploy, you will need a separate dyno for the worker, which should look like this: worker: bundle exec sidekiq. You will also need Heroku's Redis add-on.
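For reference, the Procfile for such a setup might look like this (the web line is whatever your app already uses):
web: bundle exec rails server -p $PORT
worker: bundle exec sidekiq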
Sounds like an H12 Request Timeout:
An HTTP request took longer than 30 seconds to complete.
To create a background worker for this in Rails, you should grab Resque, a Redis-backed background-queueing library.
To learn more about using Resque on Heroku, you can read the Heroku Dev Center article on it, or one of the many tutorials and demos around (some are old but still useful).
There is also a resque_mailer gem that will speed things up for you.
gem install resque_mailer #or add it to your Gemfile & use bundler
It is fairly straightforward. Here is a snippet from a working demo by the author:
class Notifier < ActionMailer::Base
  include Resque::Mailer
  default :from => "from@example.com"

  def test(data = {})
    data.symbolize_keys!
    Rails.logger.info "sending test mail"
    Rails.logger.info "params: #{data.keys.join(',')}"
    Rails.logger.info ""
    @subject = data[:subject] || "Testing mail"
    mail(:to => "nap@localhost.local",
         :subject => @subject)
  end
end
Calling Notifier.test.deliver will deliver the mail.
You can also consider using mail delivery services like SES.
Sidekiq is an option that you could consider. To get it working you can add something like RedisToGo, then configure an initializer for Redis. Then on Heroku you can add something like worker: bundle exec sidekiq ... to your Procfile.
https://github.com/mperham/sidekiq/wiki/Getting-Started
It also has a dashboard for monitoring.
https://github.com/mperham/sidekiq/wiki/Monitoring
Say I rescue from an exception and I do:
begin
  raise StandardError
rescue StandardError => ex
  ExceptionNotifier.notify_exception(ex)
end
How can I make that ExceptionNotifier email be sent from a queue, so that it is asynchronous to the application's process?
In the docs I can see how to send an ExceptionNotifier email if the error happened within a worker, but not how to enqueue the sending itself.
The queue aspect of Rails has to be handled by a third-party semi-persistent data store; we use Redis & Resque.
--
Here is a good tutorial on this:
Initializer
#app/config/initializers/redis.rb
require 'resque/server' #-> allows processing of jobs
require 'resque_scheduler' #-> allows for scheduling
uri = URI.parse(ENV["REDISCLOUD_URL"] ||= "http://localhost:6379")
Resque.redis = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)
-
Resque
This will allow you to send data to Redis, using your Resque queue to handle it:
def your_action
  Resque.enqueue(SendEmail, [[data ref]])
end
-
Queue
Then you can use Resque to work through the Redis queue & send the emails:
$ rake resque:work QUEUE='*'
Quite a vague description, I know, but hopefully it gives you an idea of how to use a third-party queue-based system to handle sending emails for you.
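For completeness, here is a minimal sketch of what the SendEmail job referenced above could look like; the class name, queue name, and payload are assumptions, since only the enqueue call is shown:
# app/jobs/send_email.rb -- hypothetical Resque job
class SendEmail
  @queue = :emails # queue name is an assumption

  def self.perform(message, backtrace)
    # Exceptions don't serialize cleanly through Redis, so pass the details
    # as plain strings and rebuild a minimal exception in the worker
    error = StandardError.new(message)
    error.set_backtrace(backtrace)
    ExceptionNotifier.notify_exception(error)
  end
end
The enqueue side then passes serializable arguments only:
Resque.enqueue(SendEmail, ex.message, ex.backtrace)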
I have a question that I am not finding much useful information for. I'm wondering if this is possible and, if so, how to best implement it.
We are building an app in Rails which has heavy data-processing in the background via DelayedJob (…it is working well for us.)
The app runs in AWS and we have a few different environments configured in Capistrano.
When we have heavy processing loads, our DelayedJob queues can back up--which is mostly fine. I do have one or two queues that I'd like to have a separate node tend to. Since it would be ignoring the 'clogged' queues, it would keep tending its one or two queues and they would stay current. For example, some individual jobs can take over an hour and I wouldn't want a forgotten-password-email delivery to be held up for 90 minutes until the next worker completes a task and checks for a priority job.
What I want is to have a separate EC2 instance that has one worker launched that tends to two different, explicit queues.
I can do this manually on my dev machine by launching one or two workers with the --queues option, for example:
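RAILS_ENV=development script/delayed_job --queues=mailers,alerts -n 1 start
(the queue names there are just examples)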
Here is my question: how can I define a new role in Capistrano and tell that role's nodes to start a different number of workers and tend to specific queues? Again, my normal delayed_job role is set to 3 workers and runs all queues.
Is this possible? Is there a better way?
Presently on Rails 3.2.13 with PostgreSQL 9.2 and the delayed_job gem.
Try this code - place it in deploy.rb after requiring default delayed_job recipes.
# This overrides default delayed_job tasks to support args per role
# If you want to use command line options, for example to start multiple workers,
# define a Capistrano variable delayed_job_args_per_role:
#
# set :delayed_job_args_per_role, {:worker_heavy => "-n 4",:worker_light => "-n 1" }
#
# Target server roles are taken from delayed_job_args_per_role keys.
namespace :delayed_job do
  def args_per_host(host)
    roles.each do |role|
      find_servers(:roles => role).each do |server|
        return args[role] if server.host == host
      end
    end
    nil # no explicit args for this host
  end

  def args
    fetch(:delayed_job_args_per_role, { :app => "" })
  end

  def roles
    args.keys
  end

  desc "Start the delayed_job process"
  task :start, :roles => lambda { roles } do
    find_servers_for_task(current_task).each do |server|
      run "cd #{current_path};#{rails_env} script/delayed_job start #{args_per_host server.host}", :hosts => server.host
    end
  end

  desc "Restart the delayed_job process"
  task :restart, :roles => lambda { roles } do
    find_servers_for_task(current_task).each do |server|
      run "cd #{current_path};#{rails_env} script/delayed_job restart #{args_per_host server.host}", :hosts => server.host
    end
  end
end
P.S. I've tested it only with a single role in the hash, but multiple roles should work fine too.
In Capistrano 3, using the official capistrano3-delayed-job gem, you can do this without modifying the Capistrano methods:
# If you have several servers handling Delayed Jobs and you want to configure
# different pools per server, you can define delayed_job_pools_per_server:
#
# set :delayed_job_pools_per_server, {
# 'server11-prod' => {
# 'default,emails' => 3,
# 'loud_notifications' => 1,
# 'silent_notifications' => 1,
# },
# 'server12-prod' => {
# 'default' => 2
# }
# }
# Server names (server11-prod, server12-prod) in :delayed_job_pools_per_server
# must match the hostnames on Delayed Job servers. You can verify it by running
# `hostname` on your servers.
# If you use :delayed_job_pools_per_server, :delayed_job_pools will be ignored.
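In practice that means dropping the uncommented version into deploy.rb; the hostnames and pool sizes here are just the examples from above:
set :delayed_job_pools_per_server, {
  'server11-prod' => {
    'default,emails' => 3,
    'loud_notifications' => 1,
    'silent_notifications' => 1,
  },
  'server12-prod' => {
    'default' => 2
  }
}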
I am writing a Rails app which needs to track users' statuses to see whether they are available, busy, or offline. I'm using the private_pub gem, which uses Faye underneath. When a user signs in, he subscribes to a channel /user/[:user_id]. I want to update the user's status to ONLINE when they subscribe, using Faye's subscribe event listener. I added this code at the end of the private_pub.ru file:
server = PrivatePub.faye_app

server.bind :subscribe do |client_id, channel|
  if (m = /\/user\/(?<user_id>\d+)/.match(channel))
    user = User.find(m[:user_id])
    user.status = 1 # 1 means online
    user.save
  end
end

run server
The problem is that every time a user subscribes, the thin server reports:
[ERROR] [Faye::RackAdapter] uninitialized constant User
I guess I need to require certain files to be able to use ActiveRecord models in the rackup file, but I don't know how.
Thanks for any help.
In our project we decided to use Redis for a similar case.
Gemfile:
gem 'redis-objects'
Faye: use redis-rb for writing the status.
require 'redis'

Redis.current = Redis.new(:host => '127.0.0.1', :port => 6379)

# init faye server
...

server.bind(:subscribe) do |client_id, channel|
  if (m = /\/user\/(?<user_id>\d+)/.match(channel))
    Redis.current.set("user:#{m[:user_id]}:online_status", "1")
  end
end
Rails: use the redis-objects gem for reading it in the User model.
class User < ActiveRecord::Base
  include Redis::Objects

  value :online_status
end

@user.online_status # returns "1" if the channel is connected
Hope this helps.
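If you also need to flip users back to offline, the same pattern works with Faye's :unsubscribe event; a sketch under the same assumptions:
server.bind(:unsubscribe) do |client_id, channel|
  if (m = /\/user\/(?<user_id>\d+)/.match(channel))
    Redis.current.set("user:#{m[:user_id]}:online_status", "0")
  end
end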