I am calling an interactor inside a worker and I want the Sidekiq job to fail if the interactor fails.
def perform(id)
  result = RandomInteractor.call(id: id)
  # catch and respond to result.failure?
end
Right now, this will display the job as completed rather than failed.
I haven't used the interactor gem before, but based on your question, this should work:
def perform(id)
  result = RandomInteractor.call(id: id)
  raise StandardError if result.failure?
end
Since you have set up Sidekiq not to retry failed jobs, the job should be marked as failed as soon as the exception is raised.
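If you want the failure reason to show up in Sidekiq's retry/dead queues, a dedicated error class carrying the interactor's message helps. A minimal sketch, assuming the interactor fails with `context.fail!(error: "...")` so that `result.error` is set; the `JobFailed` class and the `FakeResult` stand-in are hypothetical:

```ruby
# Dedicated error class so interactor failures are easy to spot
# in Sidekiq's UI (name is hypothetical).
class JobFailed < StandardError; end

# Stand-in for an interactor result object, for illustration only.
class FakeResult
  attr_reader :error

  def initialize(failure:, error: nil)
    @failure = failure
    @error = error
  end

  def failure?
    @failure
  end
end

# What the worker body would do with the interactor result.
def check!(result)
  raise JobFailed, result.error if result.failure?
  :ok
end
```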
I have a Rails app, Sidekiq, and Sentry.
I want to find events in Sentry by job arguments.
Sample:
I have SomeJob, which is executed with arguments [{some_arg: 'Arg1'}].
The job failed with an error and sent an event to Sentry.
How can I find the event by job arguments?
I tried full-text search, but it doesn't work.
Search in Sentry is limited to what they allow you to search by.
From a brief read of their Search docs, you can use either:
sentry tags
messages
Either way, you would want to enrich your sentry events.
For example, let's assume you rescue from the error raised in your job:
class SomeJob
  include Sidekiq::Worker

  def perform(args)
    # do stuff with args
  rescue StandardError
    SentryJobError.new(args: args)
    raise # re-raise so Sidekiq still marks the job as failed
  end
end
SentryJobError is really just a PORO that would be called by your job classes.
class SentryJobError
  def initialize(args:)
    return if Rails.env.development?

    Sentry.configure_scope do |scope|
      scope.set_context('job_args', { args: args })
      scope.set_context('message', { text: "job #{args[:some_arg]} failed" })
    end
  end
end
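Since Sentry tags are plain key:value strings and are indexed for search (contexts are displayed on the event but are not searchable), another option is to flatten the job arguments into a single tag value. A sketch of just the flattening step; the `job_args` tag name is an assumption:

```ruby
# Flatten Sidekiq-style args (an array of hashes) into one short,
# searchable string, suitable for scope.set_tags(job_args: ...).
# In the Sentry UI you could then search: job_args:"some_arg=Arg1"
def job_args_tag(args)
  args.flat_map { |h| h.map { |k, v| "#{k}=#{v}" } }.join(",")
end
```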
I have a job like this:
class CrawlSsbHistory < BaseJob
  retry_on(Ssb::SessionExpired) do
    Ssb::Login.call
  end

  def perform
    response = Ssb::Client.get_data
    SsbHistory.last.destroy
  end
end
and I have a test like this:
it "retries the job if session error" do
  allow(Ssb::Client).to receive(:get_data).and_raise(Ssb::SessionExpired)
  allow(Ssb::Login).to receive(:call)
  described_class.perform_now # it is CrawlSsbHistory
  expect(Ssb::Login).to have_received(:call)
end
CrawlSsbHistory is a job that crawls some data. It calls Ssb::Client.get_data to get the data.
Inside Ssb::Client.get_data, I raise Ssb::SessionExpired if the session has expired, so I can capture the raised error in the job using retry_on. When that happens, I want to retry the job.
But I get an error like this:
(Ssb::Login (class)).call(*(any args))
expected: 1 time with any arguments
received: 0 times with any arguments
Does the job not call retry_on? Or am I testing it wrong? How do I write an RSpec test that verifies retry_on is working and that Ssb::Login.call is called?
retry_on does not call the block immediately on each retry, only when the attempts are exhausted. Otherwise, it just reschedules the job.
From the documentation:
You can also pass a block that'll be invoked if the retry attempts fail for custom logic rather than letting the exception bubble up. This block is yielded with the job instance as the first and the error instance as the second parameter.
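A pure-Ruby sketch of those semantics (a simplified model, not ActiveJob's internals): each intermediate failure just triggers another attempt, and the fallback block fires exactly once, only after all attempts fail:

```ruby
# Simplified model of retry_on: re-run on each failure (real
# ActiveJob re-enqueues the job rather than retrying inline), and
# call the fallback block only when attempts are exhausted.
def run_with_retry(attempts:, on_exhausted:)
  tries = 0
  begin
    tries += 1
    yield
  rescue StandardError => e
    retry if tries < attempts
    on_exhausted.call(e) # the retry_on block runs only here
  end
  tries
end
```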
I assume you are testing ActiveJob. I suspect retry_on will enqueue the job instead of performing it immediately, so you could try setting up the job queue using ActiveJob::TestHelper.
Your test case should be:
require 'rails_helper'

RSpec.describe CrawlSsbHistory, type: :job do
  include ActiveJob::TestHelper

  before(:all) do
    ActiveJob::Base.queue_adapter = :test
  end

  it "retries the job if session error" do
    allow(Ssb::Client).to receive(:get_data).and_raise(Ssb::SessionExpired)
    expect(Ssb::Login).to receive(:call)

    # enqueue the job and drain the test queue
    perform_enqueued_jobs do
      described_class.perform_later # or perform_now
    end
  end
end
I am pretty new to Rails, and my team recently moved to Sidekiq.
I am calling a worker with this instruction within a model:
CoolClassJob.perform_async(...)
I'm using a worker with a code similar to this:
class CoolClassJob
  include Sidekiq::Worker
  sidekiq_options queue: "payment", retry: 5

  sidekiq_retry_in do |count|
    10
  end

  def perform
    ...
    whatever = {...}
    if whatever.status == 'successful'
      thisCoolFunction # successfully ends job
    elsif whatever.status == 'failed'
      anotherCoolFunction # successfully ends job
    elsif whatever.pending? # I want to retry if it falls in this condition, since it is "waiting" for another task to complete.
      raise 'We are trying again'
    end
    ...
  end
  ...
end
I tried with
begin
  raise 'We are trying again!'
rescue
  nil
end
But when I run my tests, I get this error:
Failure/Error: raise 'We are trying again!'
RuntimeError:
'We are trying again!'
...
Which, of course, makes sense to me, since I'm raising the error. I tried searching but wasn't able to come up with a solution.
I'm wondering whether it's possible to a) retry again without raising an error, or b) tell Capybara (RSpec) to keep trying without throwing an error.
One way would be to reschedule your worker:
def perform
  ...
  whatever = {...}
  if whatever.status == 'successful'
    thisCoolFunction # successfully ends job
  elsif whatever.status == 'failed'
    anotherCoolFunction # successfully ends job
  elsif whatever.pending? # retry, since it is "waiting" for another task to complete
    self.class.perform_in(1.minute)
  end
  ...
end
Or you can check this SO answer: Sidekiq/Airbrake only post exception when retries extinguished
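One caveat with self-rescheduling: a job that stays pending forever will reschedule forever. A sketch of threading an attempt counter through the job arguments to cap it (all names here are hypothetical):

```ruby
# Cap self-rescheduling by passing an attempt count through the
# job args; inside the worker this decision would become
#   self.class.perform_in(1.minute, attempt + 1)
MAX_RESCHEDULES = 10

def next_step_for_pending(attempt)
  if attempt >= MAX_RESCHEDULES
    :give_up # e.g. notify someone and stop rescheduling
  else
    [:reschedule, attempt + 1]
  end
end
```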
Given the Delayed::Job worker:
class UserCommentsListWorker
  attr_accessor :opts

  def initialize(opts = {})
    @opts = opts
  end

  def perform
    UserCommentsList.new(@opts)
  end

  def before(job)
    p 'before hook', job
  end

  def after(job)
    p 'after hook', job
  end

  def success(job)
    p 'success hook', job
  end

  def error(job, exception)
    p '4', exception
  end

  def failure(job)
    p '5', job
  end

  def enqueue(job)
    p '-1', job
  end
end
When I run Delayed::Job.enqueue UserCommentsListWorker.new(client: client) from a Rails console, I get the expected sequence of print statements and the proper delayed job lifecycle hooks fire, including the feedback from the worker that the job was a success.
But when I make the same call to run the worker via a standard Rails controller endpoint like:
include OctoHelper
include QueryHelper
include ObjHelper
include StructuralHelper

class CommentsController < ApplicationController
  before_filter :authenticate_user!

  def index
    if params['updateCache'] == 'true'
      client = build_octoclient current_user.octo_token
      Delayed::Job.enqueue UserCommentsListWorker.new(client: client)
    end
  end
end
I'm noticing that the worker runs and creates the delayed job, but none of the hooks get called and the worker never logs the job as completed.
Notice the screenshot,
Jobs 73, 75, and 76 were all triggered via a round trip to the endpoint referenced above, while job 74 was triggered via the Rails console. What is wrong with my setup, and/or what am I failing to notice in this process? I will stress that the first time the web server hits the controller endpoint, the job queues and runs properly, but in all subsequent instances where the job should run properly, it appears to do nothing and gives me no feedback in the process.
I would also highlight that I never see the failure, error, or enqueue hooks run.
thanks :)
The long and the short of it: I was attempting to store a client object in the delayed job, which was causing the problem. Don't store complex objects in the job; just pass basic data: ids (1), strings ('foo'), booleans (true), etc.
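The underlying reason: Delayed::Job serializes the worker object into the database as YAML, so anything that doesn't survive a YAML round trip (live connections, sockets, authenticated API clients) breaks when the daemon deserializes it. A quick illustration with a trivial worker that holds only an id:

```ruby
require "yaml"

# Delayed::Job stores the handler as YAML. Primitives round-trip
# cleanly; a live API client generally does not.
SimpleWorker = Struct.new(:user_id)

worker   = SimpleWorker.new(42)
yaml     = YAML.dump(worker)      # roughly what lands in the jobs table
restored = YAML.unsafe_load(yaml) # what the DJ daemon gets back
```

Inside `perform`, the worker would then rebuild the client from the id (e.g. look the user up and re-authenticate), rather than expecting the client to survive the trip through the database.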
To avoid running unnecessarily, I'd like my Sidekiq worker to check for a certain condition at each stage. If that condition is not met, Sidekiq should stop and report the error.
Currently I have:
class BotWorker
  include Sidekiq::Worker

  def perform(id)
    user = User.find(id)
    if user.nil?
      # report the error? Thinking of using UserMailer
      return false # stop the worker
    end
    # other processing here
  end
end
This seems like a naive way to handle Sidekiq errors. The app needs to notify an admin immediately if something breaks in the worker.
Am I missing something? What is a better way to handle errors in Sidekiq?
You can create your own error handler:
Sidekiq.configure_server do |config|
  config.error_handlers << proc { |exception, context_hash| MyErrorService.notify(exception, context_hash) }
end
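Note that for the handler to ever fire, the worker has to raise rather than return false; returning early just marks the job as successful. A minimal sketch of that part (the error class name is hypothetical):

```ruby
# Raising is what makes Sidekiq mark the job as failed, trigger
# retries, and invoke the configured error_handlers.
class UserMissingError < StandardError; end

def ensure_user!(user)
  raise UserMissingError, "user not found" if user.nil?
  user
end
```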