TLDR; How can I test that a PORO argument for an asynchronous ActionMailer action (using Sidekiq) serializes and deserializes correctly?
Sidekiq provides RSpec matchers for testing that a job is enqueued and performing a job (with given arguments).
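For illustration, here is a sketch of that style of matcher, assuming the rspec-sidekiq gem (the matcher name has varied across gem versions):
# spec/workers/some_worker_spec.rb -- SomeWorker is a hypothetical example;
# rspec-sidekiq puts Sidekiq in fake (queue-collecting) mode by default
it 'enqueues the job with the given arguments' do
  SomeWorker.perform_async('user@example.com')
  expect(SomeWorker).to have_enqueued_sidekiq_job('user@example.com')
end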
--
To give you some context, I have a Ruby on Rails 4 application with an ActionMailer. Within the ActionMailer is a method that takes a PORO as an argument, with references to data I need in the email. I use Sidekiq to handle the background jobs. It turns out there was an issue deserializing the argument: the job would fail when Sidekiq got around to performing it. I haven't found a way to test the correctness of the un/marshaling, i.e. that the PORO I called the action with is the one used when the job is performed.
For example:
Given an ActionMailer with an action
class ApplicationMailer < ActionMailer::Base
  def send_alert(profile)
    @profile = profile
    mail(to: profile.email)
  end
end
...
I would use it like this
profile = ProfileDetailsService.new(user)
ApplicationMailer.send_alert(profile).deliver_later
...
And have a test like this (but this fails)
let(:profile) { ProfileDetailsService.new(user) }
it 'requests an email to be sent' do
  expect {
    ApplicationMailer.send_alert(profile).deliver_later
  }.to have_enqueued_job.on_queue('mailers')
end
Any assistance would be much appreciated.
You can test it in a synchronous way (using only IRB, without the need to start the Sidekiq workers).
Let's say your worker class has the following implementation:
class Worker
  include Sidekiq::Worker

  def perform(poro_obj)
    puts poro_obj.inspect
    # ...
  end
end
You can open IRB (bundle exec irb) and type the following commands:
require 'sidekiq'
require 'worker'
Worker.new.perform(my_poro) # my_poro: the object you would otherwise pass to perform_async
Thus, you will execute the code in a synchronous way AND using Sidekiq (note that we're invoking the perform method instead of perform_async).
I think this is the best way to debug your code.
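If you also want to exercise the JSON round-trip that Sidekiq performs, one option is a sketch like the following, using Sidekiq's own dump_json/load_json helpers to serialize and reload the arguments yourself before calling perform (the argument shown is illustrative):
require 'sidekiq'

args = [{ 'email' => 'user@example.com' }] # stand-in for your real job arguments
round_tripped = Sidekiq.load_json(Sidekiq.dump_json(args))
Worker.new.perform(*round_tripped)
A PORO that does not survive this round-trip (a plain object typically serializes to a useless string) is exactly the failure the question describes.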
Related
I have been trying to see if this is possible and so far have found nothing, so I will ask specifically:
Is it possible to have a Sidekiq worker receive a method, for example a lambda, and pass arguments on to it?
Example case:
I need to do some heavy computation on my server, and my options are either to make a specific Sidekiq worker for a job that will only ever run once, which will end up cluttering my code base, or to make a worker which could accept something like:
lot_of_work.each do |args|
  Workers::Tmp::LetsGo.perform_async(args) { |a| a.lets_go }
end
I've tried looking through old Stack Overflow posts and the Sidekiq documentation.
I've tried the above, which I hoped would work like a normal method call, but it does not.
I would have liked it to execute the method passed to the worker, so that I don't need to create workers for one-off cases and don't have to fall back to single-threaded computation.
I found a solution to this problem; there are probably better ones, but this worked for me.
Make a worker like this:
module Workers
  module Default
    class TesterWorker
      include Sidekiq::Worker
      sidekiq_options queue: :default, retry: false

      def perform(method_name, method)
        eval(method)
        send(method_name)
      end
    end
  end
end
After this you simply have to write your code as a string, making sure the method name you pass matches the method the string defines:
method_name = 'test'
spec = 'def test; puts 1; end'
Workers::Default::TesterWorker.perform_async(method_name, spec)
And this will execute the puts 1 action on the Sidekiq side. (Note that eval runs arbitrary strings, so only enqueue input you trust.)
I have a Ruby script that was implemented as a standalone piece of functionality.
Now I would like to execute this script in my Rails environment, with the added difficulty of executing it as a background job, because it needs a great amount of processing time.
After adding the delayed_job gem, I tried calling the following:
delay.system("ruby my_script.rb")
And this is the error I get:
Completed 500 Internal Server Error in 95ms
TypeError (can't dump anonymous module: #<Module:0x007f8a9ce14dc0>):
app/controllers/components_controller.rb:49:in `create'
Calling the self.delay method from your controller won't work, because DJ will try to serialize your controller into the job. You'd better create a class to handle your task, then flag its method as asynchronous:
class AsyncTask
  def run
    system('ruby my_script.rb')
  end
  handle_asynchronously :run
end
In your controller:
def create
  ...
  AsyncTask.new.run
  ...
end
See the second example in the "Queueing Jobs" section of the readme.
Like Jef stated, the best solution is to create a custom job.
The problem with Jef's answer is that its syntax (as far as I know) is not correct, and that his job handles a single system command, while the following allows you more customization:
# lib/system_command_job.rb
class SystemCommandJob < Struct.new(:cmd)
def perform
system(cmd)
end
end
Note the cmd argument for the Struct initializer. It allows you to pass arguments to your job so the code would look like:
Delayed::Job.enqueue(SystemCommandJob.new("ruby my_script.rb"))
I would like to test some Resque workers, and the actions that enqueue jobs with these workers. I am using RSpec and Rails.
Currently I have a model, let's call it Article.rb, that has a before_save callback called update_related_categories, which checks whether a job with CategoriesSorter needs to be enqueued. If so, it enqueues a job with that worker, with the argument of the category id that the article is related to.
In test, however, these jobs are sent to the same queue that the development server sends jobs to (I check using the Resque web server that you can tie into your app at root/redis/overview).
I want to know:
1) How can I send test jobs to a different queue than the development jobs?
If this is possible, any other advice on testing resque is also welcome.
I have seen some related questions that suggest resque-unit and resque-spec, but these are pretty undeveloped, and I couldn't get them to a useful working state. Also, I have heard of using Resque.inline, but I don't know if that is relevant in this case, as Resque isn't called directly in the specs; it's called from the Article model when objects created in the test are saved.
Sample Code:
Article.rb:
class Article < ActiveRecord::Base
  before_save :update_related_categories

  def update_related_categories
    # some if statements/checks to see if a related category needs updating
    Resque.enqueue(CategoriesSorter, [category_id])
  end
end
CategoriesSorter:
class CategoriesSorter
  @queue = :sorting_queue

  def self.perform(ids)
    ids.each do |id|
      # some code
    end
  end
end
Specs:
it "should do something" do
#article = #set something to enqueue a job
#article.save
#can i check if a job is enqueued? can i send this job NOT to the development queue but a different one?
end
As I see it, you don't want to test whether enqueueing the job results in perform being called - we trust Resque does what it should.
However, you can test two things separately: 1. that calling update_related_categories enqueues the job, and 2. that a worker calling perform results in the desired behavior.
For testing Resque in general, a combination of resque-spec and simulating a worker can accomplish the above two goals.
For 1, with resque-spec, you can trigger the enqueue and then check that the job has been queued correctly:
describe "Calling update_related_categories " do
before(:each) do
ResqueSpec.reset!
end
it "should enqueue the job" do
Article.update_related_categories
CategoriesSorter.should have_queue_size_of(1)
end
end
For 2, you can create a Job (specifying a queue name separate from your development queue), a Resque::Worker, and then assign the Worker to the Job:
def run_worker_simulation
  # see the Resque::Job API for setting the args you want
  Resque::Job.create('test_queue_name', 'class_name', 'type', 'id')

  worker = Resque::Worker.new('test_queue_name')
  worker.very_verbose = true

  job = worker.reserve
  worker.perform(job)
end
Hope that gives you some ideas.
You shouldn't test Resque itself; that's the Resque development team's job. Your application should only test that your perform method does what it has to do. You can also test that your model Article.rb enqueues the job as you expect. Sending real jobs to Resque is useless: your tests will end, and the Resque queue will be full of useless jobs.
Do something like:
describe 'Article' do
  before(:each) do
    Resque.stub!(:enqueue)
  end

  it 'enqueues the job' do
    cat_id = 1
    Resque.should_receive(:enqueue).once.with(CategoriesSorter, [cat_id])
    Article.create(:category_id => cat_id)
  end
end

describe 'CategoriesSorter' do
  it 'sorts the categories' do
    result = CategoriesSorter.perform([1,4,6,3,2])
    result.should == [1,2,3,4,6]
  end
end
Stub or mock unneeded methods/classes.
EDIT: also, as you test Article, it's good to set a before(:each) filter that stubs Resque so your spec never sends jobs to the real queue; I've edited my answer accordingly.
I have
Resque.inline = ENV['RAILS_ENV'] == "test"
This makes all the Resque jobs run inline in the test environment.
For testing each job class, I have separate specs that exercise each job's perform method directly.
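Concretely, that toggle can live in an initializer (path assumed); with inline set, Resque.enqueue calls perform immediately in-process, so tests never touch Redis:
# config/initializers/resque.rb -- assumed location
require 'resque'
Resque.inline = Rails.env.test?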
I have this class, which responds to perform so it can be run by Resque. I get an error at the line recipient.response = response.body:
undefined method `response=' for #<Hash:0x00000003969da0>
I think that's because the worker and ActiveRecord can't work together.
P.S. I have already loaded my environment, and this class is placed in the lib directory.
Using:
Ruby 1.9.2
Rails 3
Resque 1.10.0
class Msg
  def self.perform(message, sender, host, path, recipient)
    message_logger ||= Logger.new("#{Rails.root}/log/message.log")
    response = Net::HTTP.get_response(host, path)
    begin
      recipient.response = response.body
      recipient.sent_at = Time.zone.now
      recipient.save
      # Logging
      log = "Message #{message.sent_at}\n\tResponse:\n\t\tBody: #{response.body}\n\t\tCode: #{response.code}\n"
      message_logger.info(log)
    rescue Exception => e
      message_logger.error(e.message + "\n" + e.backtrace.inspect)
    end
  end
end
Resque uses JSON serialization, and JSON will not give you back an object of the original class with its methods intact; you get plain hashes, arrays, strings, and numbers.
If you have an instance of Recipient (named "recipient") and want to use it in the method to perform/persist a response then you should enqueue the id of the recipient and fetch it from your persistence layer when perform is called.
https://github.com/defunkt/resque (check out the section on Persistence)
Resque is different from DelayedJob/BackgroundJob and others in this way, which is why I like it: the same queue can be shared by multiple Ruby implementations (JRuby, MRI, ...).
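A sketch of that id-based approach, assuming Recipient is an ActiveRecord model and adapting the Msg worker from the question (queue name assumed):
require 'net/http'

class Msg
  @queue = :messages # assumed queue name

  def self.perform(host, path, recipient_id)
    recipient = Recipient.find(recipient_id) # re-fetch instead of deserializing
    response = Net::HTTP.get_response(host, path)
    recipient.response = response.body
    recipient.sent_at = Time.zone.now
    recipient.save
  end
end

# At the enqueue site, pass plain ids, not objects:
Resque.enqueue(Msg, host, path, recipient.id)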
That doesn't sound like an issue with Resque and ActiveRecord at all. The error says the recipient parameter you passed in was a Hash. Where's the code that enqueued the job? You can also take a look at the log output from the worker where you saw that error message, to see what parameters were passed into the job.
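You can see the effect of the JSON round-trip for yourself; with Rails loaded, an ActiveRecord object comes back as a plain Hash, which is roughly what Resque does to your arguments on the way through Redis:
JSON.parse(recipient.to_json)
# => a Hash of attributes (the exact shape depends on your serialization settings),
#    not a Recipient -- so it has no response= method, hence the error above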
I have been happily using the DelayedJob idiom:
foo.send_later(:bar)
This calls the method bar on the object foo in the DelayedJob process.
And I've been using DaemonSpawn to kick off the DelayedJob process on my server.
But... if foo throws an exception, Hoptoad doesn't catch it.
Is this a bug in any of these packages... or do I need to change some configuration... or do I need to insert some exception handling in DS or DJ that will call the Hoptoad notifier?
In response to the first comment below.
class DelayedJobWorker < DaemonSpawn::Base
  def start(args)
    ENV['RAILS_ENV'] ||= args.first || 'development'
    Dir.chdir RAILS_ROOT
    require File.join('config', 'environment')
    Delayed::Worker.new.start
  end
end
Try monkeypatching Delayed::Worker#handle_failed_job:
# lib/delayed_job_airbrake.rb
module Delayed
  class Worker
    protected

    def handle_failed_job_with_airbrake(job, error)
      say "Delayed job failed -- logging to Airbrake"
      HoptoadNotifier.notify(error)
      handle_failed_job_without_airbrake(job, error)
    end

    alias_method_chain :handle_failed_job, :airbrake
  end
end
This worked for me.
(in a Rails 3.0.10 app using delayed_job 2.1.4 and hoptoad_notifier 2.4.11)
Check out the source for Delayed::Job... there's a snippet like:
# This is a good hook if you need to report job processing errors in additional or different ways
def log_exception(error)
  logger.error "* [JOB] #{name} failed with #{error.class.name}: #{error.message} - #{attempts} failed attempts"
  logger.error(error)
end
I haven't tried it, but I think you could do something like:
class Delayed::Job
  def log_exception_with_hoptoad(error)
    log_exception_without_hoptoad(error)
    HoptoadNotifier.notify(error)
  end

  alias_method_chain :log_exception, :hoptoad
end
Hoptoad uses the Rails rescue_action_in_public hook method to intercept exceptions and log them. This method is only executed when the request is dispatched by a Rails controller.
For this reason, Hoptoad is completely unaware of any exception generated, for example, by rake tasks or the rails script/runner.
If you want Hoptoad to track your exceptions, you should integrate it manually.
It should be quite straightforward; the following code fragment demonstrates how Hoptoad is invoked:
def rescue_action_in_public_with_hoptoad(exception)
  notify_hoptoad(exception) unless ignore?(exception) || ignore_user_agent?
  rescue_action_in_public_without_hoptoad(exception)
end
Including the Hoptoad library in your environment and calling notify_hoptoad(exception) should work. Make sure your environment provides the same API as a Rails controller, or Hoptoad might complain.
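For code running outside a controller (a rake task, script/runner, or a DJ worker), the manual integration can be as small as the following sketch; do_the_heavy_work is a placeholder for your task's entry point:
require 'hoptoad_notifier'

begin
  do_the_heavy_work # placeholder for the task being wrapped
rescue Exception => e
  HoptoadNotifier.notify(e)
  raise
end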
Just throwing it out there: your daemon should require the Rails environment you're working in. It should look something like:
RAILS_ENV = ARGV.first || ENV['RAILS_ENV'] || 'production'
require File.join('config', 'environment')
This way you can specify the environment the daemon is called in.
Since it runs delayed_job, chances are the daemon already does that (it needs ActiveRecord), but maybe you're only requiring a minimal ActiveRecord setup to make delayed_job happy without Rails.