How do I integrate Hoptoad with DelayedJob and DaemonSpawn? - ruby-on-rails

I have been happily using the DelayedJob idiom:
foo.send_later(:bar)
This calls the method bar on the object foo in the DelayedJob process.
And I've been using DaemonSpawn to kick off the DelayedJob process on my server.
But... if foo throws an exception Hoptoad doesn't catch it.
Is this a bug in any of these packages... or do I need to change some configuration... or do I need to insert some exception handling in DS or DJ that will call the Hoptoad notifier?
In response to the first comment below.
class DelayedJobWorker < DaemonSpawn::Base
  def start(args)
    ENV['RAILS_ENV'] ||= args.first || 'development'
    Dir.chdir RAILS_ROOT
    require File.join('config', 'environment')
    Delayed::Worker.new.start
  end
end

Try monkeypatching Delayed::Worker#handle_failed_job:
# lib/delayed_job_airbrake.rb
module Delayed
  class Worker
    protected

    def handle_failed_job_with_airbrake(job, error)
      say "Delayed job failed -- logging to Airbrake"
      HoptoadNotifier.notify(error)
      handle_failed_job_without_airbrake(job, error)
    end

    alias_method_chain :handle_failed_job, :airbrake
  end
end
This worked for me.
(in a Rails 3.0.10 app using delayed_job 2.1.4 and hoptoad_notifier 2.4.11)
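One thing worth checking: files under lib/ are not required automatically in Rails 3, so make sure the patch is actually loaded, for example from an initializer (the file name below is illustrative):

# config/initializers/delayed_job_airbrake.rb
# delayed_job is already loaded by Bundler at this point,
# so the patch finds Delayed::Worker#handle_failed_job to wrap.
require File.join(Rails.root, 'lib', 'delayed_job_airbrake')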

Check out the source for Delayed::Job... there's a snippet like:
# This is a good hook if you need to report job processing errors in additional or different ways
def log_exception(error)
logger.error "* [JOB] #{name} failed with #{error.class.name}: #{error.message} - #{attempts} failed attempts"
logger.error(error)
end
I haven't tried it, but I think you could do something like:
class Delayed::Job
  def log_exception_with_hoptoad(error)
    log_exception_without_hoptoad(error)
    HoptoadNotifier.notify(error)
  end

  alias_method_chain :log_exception, :hoptoad
end

Hoptoad uses the Rails rescue_action_in_public hook method to intercept exceptions and log them. This method is only executed when the request is dispatched by a Rails controller.
For this reason, Hoptoad is completely unaware of any exception generated, for example, by rake tasks or the rails script/runner.
If you want Hoptoad to track your exceptions, you have to integrate it manually.
It should be quite straightforward. The following code fragment demonstrates how Hoptoad is invoked:
def rescue_action_in_public_with_hoptoad(exception)
  notify_hoptoad(exception) unless ignore?(exception) || ignore_user_agent?
  rescue_action_in_public_without_hoptoad(exception)
end
Just including the Hoptoad library in your environment and calling notify_hoptoad(exception) should work. Make sure your environment provides the same API as a Rails controller, or Hoptoad might complain.
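If you need to report errors from a rake task or a standalone script, a minimal sketch is to wrap the work and call the notifier directly (the task name and its body are illustrative):

# lib/tasks/report_errors.rake -- illustrative task
task :nightly_import => :environment do
  begin
    # ... the actual work ...
  rescue => e
    HoptoadNotifier.notify(e)
    raise
  end
end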

Just throwing it out there - your daemon should require the Rails environment that you're working in. It should look something along the lines of:
RAILS_ENV = ARGV.first || ENV['RAILS_ENV'] || 'production'
require File.join('config', 'environment')
This way you can specify the environment in which the daemon runs.
Since it runs Delayed Job, chances are the daemon already does that (it needs ActiveRecord), but maybe you're only requiring a minimal ActiveRecord setup to make delayed_job happy without Rails.

Related

Testing unmarshalling of asynchronous ActionMailer methods using Sidekiq

TLDR; How can I test that a PORO argument for an asynchronous ActionMailer action (using Sidekiq) serializes and deserializes correctly?
Sidekiq provides RSpec matchers for testing that a job is enqueued and performing a job (with given arguments).
--
To give you some context, I have a Ruby on Rails 4 application with an ActionMailer. Within the ActionMailer is a method that takes in a PORO as an argument - with references to data I need in the email. I use Sidekiq to handle the background jobs. It turns out that there was an issue deserializing the argument, so the job would fail when Sidekiq decided to perform it. I haven't been able to find a way to test the correctness of the un/marshalling, i.e. that the PORO I called the action with is the one used when the job is performed.
For example:
Given an ActionMailer with an action
class ApplicationMailer < ActionMailer::Base
  def send_alert(profile)
    @profile = profile
    mail(to: profile.email)
  end
end
...
I would use it like this
profile = ProfileDetailsService.new(user)
ApplicationMailer.send_alert(profile).deliver_later
...
And have a test like this (but this fails)
let(:profile) { ProfileDetailsService.new(user) }

it 'request an email to be sent' do
  expect {
    ApplicationMailer.send_alert(profile).deliver_later
  }.to have_enqueued_job.on_queue('mailers')
end
Any assistance would be much appreciated.
You can test it in a synchronous way (using only IRB and without the need to start the Sidekiq workers).
Let's say your worker class has the following implementation:
class Worker
  include Sidekiq::Worker

  def perform(poro_obj)
    puts poro_obj.inspect
    # ...
  end
end
You can open IRB (bundle exec irb) and type the following commands:
require 'sidekiq'
require 'worker'
Worker.new.perform(poro_obj)
Thus, you will execute the code in a synchronous way AND using Sidekiq (note that we're invoking the perform method instead of the perform_async).
I think this is the best way to debug your code.
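If you want a spec that exercises the same (de)serialization step deliver_later goes through, one option is to call ActiveJob's argument serializer directly. This is a minimal sketch, assuming deliver_later is backed by ActiveJob (Rails 4.2+); the user setup is illustrative:

# spec/mailers/application_mailer_spec.rb -- a sketch, not a drop-in test
require 'rails_helper'

RSpec.describe ApplicationMailer do
  let(:user)    { User.new(email: 'someone@example.com') } # illustrative setup
  let(:profile) { ProfileDetailsService.new(user) }

  it 'round-trips the mailer argument through job serialization' do
    expect {
      serialized = ActiveJob::Arguments.serialize([profile])
      ActiveJob::Arguments.deserialize(serialized)
    }.not_to raise_error
  end
end

A plain PORO argument typically makes ActiveJob::Arguments.serialize raise ActiveJob::SerializationError, so a failing round trip surfaces in the spec instead of in the worker.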

How can I turn all my 'perform_later's into 'perform_now's locally?

I'm working on a product that makes a lot of perform_later calls. This works for our product in production because we have a series of workers who will run all the jobs.
But, when I'm using the app locally, I don't have access to these workers, and I'd like to change all the perform_laters into perform_nows only when I use the app locally.
What's the best way to do this? One idea I had was to add something in my env file that would add a variable to make all perform_laters into perform_nows -- but I'm not sure what a flag or variable like that would look like.
Ideas?
The clean solution is to change the adapter in the development environment.
In your /config/environments/development.rb you need to add:
Rails.application.configure do
  config.active_job.queue_adapter = :inline
end
"When enqueueing jobs with the Inline adapter the job will be executed immediately."
In your app you can have:
/my_app/config/initializers/jobs_initializer.rb
module JobsExt
  extend ActiveSupport::Concern

  class_methods do
    def perform_later(*args)
      puts "I'm on the #{Rails.env} environment, so I'll run right now"
      perform_now(*args)
    end
  end
end

if Rails.env != "production"
  puts "including mixin"
  ActiveJob::Base.send(:include, JobsExt)
end
This mixin will be included in test and development environments only.
Then, if you have the job in:
/my_app/app/jobs/my_job.rb
class MyJob < ActiveJob::Base
  def perform(param)
    "I'm a #{param}!"
  end
end
You can execute:
MyJob.perform_later("job")
And get:
#=> "I'm a job!"
Instead of the job instance:
#<MyJob:0x007ff197cd1938 @arguments=["job"], @job_id="aab4dbfb-3d57-4f6d-8994-065a178dc09a", @queue_name="default">
Remember: doing this, ALL your jobs will be executed right away in the test and dev environments. If you want to enable this functionality for a single job, you will need to include the JobsExt mixin in that job only.
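For instance, to limit it to one job you could include the mixin in that job class instead of patching ActiveJob::Base (a sketch reusing the MyJob example above; wrap the include in an environment check if you only want it outside production):

class MyJob < ActiveJob::Base
  include JobsExt # only this job gets the run-right-now perform_later

  def perform(param)
    "I'm a #{param}!"
  end
end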
We solved this by calling an intermediate method which then called perform_later or perform_now depending on Rails config:
def self.perform(*args)
  if Rails.application.config.perform_later
    perform_later(*args)
  else
    perform_now(*args)
  end
end
And simply updated the environment configs accordingly.
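A sketch of what those environment configs might look like (perform_later here is a custom config key read by the wrapper above, not a built-in Rails setting):

# config/environments/development.rb
Rails.application.configure do
  config.perform_later = false # run jobs immediately while developing
end

# config/environments/production.rb
Rails.application.configure do
  config.perform_later = true  # enqueue jobs for the background workers
end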

How do I turn off Rails SQL logging in test?

For some reason (probably an updated gem) Rails is logging all my SQL commands now. I run autotest and they are being spammed during tests also. How do I turn it off?
I tried adding this to config/environments/test.rb but it didn't work; the logger was already nil.
# ActiveRecord::Base.logger = nil
# ActiveRecord::Base.logger.level = 1
Rails 4.0.0
Ok I found it. This worked:
config.after_initialize do
  ActiveRecord::Base.logger = nil
end
Another thing you can do is call this code at runtime; it doesn't need to be in a config file.
For example, if you put in your specific test case
# test/functionals/application_controller_test.rb for example
ActiveRecord::Base.logger = nil
It would work just as well, and this way you can toggle it at runtime. Useful if you only want to stifle a few lines of code or a block.
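If you only want to stifle a block, a small helper along these lines restores the logger afterwards (the helper and the model in the usage example are illustrative, not part of Rails):

# silence SQL logging only for the duration of a block
def without_sql_logging
  old_logger = ActiveRecord::Base.logger
  ActiveRecord::Base.logger = nil
  yield
ensure
  ActiveRecord::Base.logger = old_logger
end

without_sql_logging do
  User.where(admin: true).to_a # no SQL lines logged for this query
end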
In case someone wants to actually knock out SQL statement logging (without changing logging level, and while keeping the logging from their AR models):
The line that writes to the log (in Rails 3.2.16, anyway) is the call to 'debug' in lib/active_record/log_subscriber.rb:50.
That debug method is defined by ActiveSupport::LogSubscriber.
So we can knock out the logging by overwriting it like so:
module ActiveSupport
  class LogSubscriber
    def debug(*args, &block)
    end
  end
end

delayed_job: NoMethodError

Here is my tiny Rails3 controller:
class HomeController < ApplicationController
  def index
    HomeController.delay.do_stuff
  end

  def self.do_stuff
    puts "Hello"
  end
end
Upon accessing index, the job gets correctly inserted into the database:
--- !ruby/struct:Delayed::PerformableMethod
object: !ruby/object:Class HomeController
method_name: :do_stuff
PROBLEM: When executing bundle exec rake jobs:work, I get:
Class#do_stuff failed with NoMethodError:
undefined method `do_stuff' for #<Class:0x0000000465f910>
Despite the fact that HomeController.do_stuff works perfectly. Any idea?
See https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-undefined_method_xxx_for_class in documentation.
It seems that you should have
..object: !ruby/class HomeController method_name ...
in the database, but you have
..object: !ruby/object:Class HomeController method_name ...
instead. Which is bad.
Even the delayed_job author doesn't know the reason; it somehow depends on the web server you run it on. Try the wiki's recommendation.
I had the same problem.
I found several discussions stating that different YAML parsers were used when the job was put in the queue by the web application and when it was executed later.
Some suggest the psych parser should be used, some suggest syck. First I tried psych but ended up with incompatibility issues with other gems, so I picked syck.
I wasn't able to sort out which configuration files are used by the web server and the queue worker. After a lot of experimentation I ended up with the following configuration (all of it at the top of each file):
# application.rb
require File.expand_path('../boot', __FILE__)
require 'rails/all'
require 'yaml'
YAML::ENGINE.yamler = 'syck'
# ...
and
# environment.rb
require 'yaml'
YAML::ENGINE.yamler = 'syck'
# ...
and
# boot.rb
require 'yaml'
YAML::ENGINE.yamler = 'syck'
require 'rubygems'
# ...
I'm using Ruby 1.9.3, Rails 3.2.8, Webrick, delayed_job 3.0.3
In my case the problem was mainly because I was passing a Hash as a parameter to the object that was passed to the delayed_job queue.
But after 25 trials I have come to the conclusion that delayed_job accepts objects only with an integer as the parameter. Hence I stored all the parameters in the database, then passed that record's id as the parameter to delayed_job; inside the perform method we can access all the parameters via that record id, and delete the record after fetching the data.
Delayed::Job.enqueue LeadsJob.new(params[:customer]) # this job will be queued but will never run, because of the way delayed_job serializes and deserializes objects
Instead do something like this:
@customer = Customer.create(params[:customer])
Delayed::Job.enqueue LeadsJob.new(@customer.id)
If the customer record was created only to pass the parameters, delete it inside the perform method.
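A sketch of what the matching job might look like, using delayed_job's Struct-based custom job pattern (the LeadsJob and Customer names come from the snippet above; the body is illustrative):

class LeadsJob < Struct.new(:customer_id)
  def perform
    customer = Customer.find(customer_id)
    # ... do the actual work with the customer's attributes ...
    customer.destroy # if the record existed only to carry the parameters
  end
end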
Please ping me if you need more details on the same.
The problem might also be because of the YAML parser that delayed_job uses, but I haven't tried that option, which is mentioned by @Stefan Pettersson.

Delayed job: How to reload the payload classes during every call in Development mode

I am running a Delayed Job worker. Whenever I invoke the foo method, the worker prints "Hello".
class User
  def foo
    puts "Hello"
  end
  handle_asynchronously :foo
end
If I make some changes to the foo method, I have to restart the worker for the changes to take effect. In development mode this can become quite tiresome.
I am trying to find a way to reload the payload class (in this case the User class) for every request. I tried monkey-patching the DelayedJob library to invoke require_dependency before the payload method invocation.
module Delayed::Backend::Base
  def payload_object_with_reload
    if Rails.env.development? and @payload_object_with_reload.nil?
      require_dependency(File.join(Rails.root, "app", "models", "user.rb"))
    end
    @payload_object_with_reload ||= payload_object_without_reload
  end
  alias_method_chain :payload_object, :reload
end
This approach doesn't work, as the classes registered using require_dependency need to be reloaded before the invocation, and I haven't figured out how to do that. I spent some time reading the dispatcher code to figure out how Rails reloads the classes for every request, but wasn't able to locate the reload code.
Has anybody tried this before? How would you advise me to proceed? Or do you have any pointers for locating the Rails class reload code?
I managed to find a solution. I used the ActiveSupport::Dependencies.clear method to clear the loaded classes.
Add a file called config/initializers/delayed_job.rb:
Delayed::Worker.backend = :active_record

if Rails.env.development?
  module Delayed::Backend::Base
    def payload_object_with_reload
      if @payload_object_with_reload.nil?
        ActiveSupport::Dependencies.clear
      end
      @payload_object_with_reload ||= payload_object_without_reload
    end
    alias_method_chain :payload_object, :reload
  end
end
As of version 4.0.6, DelayedJob reloads automatically if Rails.application.config.cache_classes is set to false:
In development mode, if you are using Rails 3.1+, your application code will automatically reload every 100 jobs or when the queue finishes. You no longer need to restart Delayed Job every time you update your code in development.
This looks like it solves your problem without the alias_method hackery:
https://github.com/Viximo/delayed_job-rails_reloader
