I am confused about how ActiveJob handles retries when an exception is raised during the execution of a job. The Rails Guide on ActiveJob has this example:
10.1 Retrying or Discarding failed jobs
It's also possible to retry or discard a job if an exception is raised during execution. For example:
class RemoteServiceJob < ApplicationJob
  retry_on CustomAppException # defaults to 3s wait, 5 attempts
  discard_on ActiveJob::DeserializationError

  def perform(*args)
    # Might raise CustomAppException or ActiveJob::DeserializationError
  end
end
To get more details see the API Documentation for ActiveJob::Exceptions.
So there is a method to explicitly tell ActiveJob to retry a job on certain exceptions, and another method to explicitly tell it to discard a job on certain exceptions.
But how does ActiveJob handle exceptions when the developer has defined neither retry_on nor discard_on? What is the default behavior? Does it discard the job, or retry it? And if it retries, how many times and at what interval?
Related
Rails ActiveJob has a retry_on hook that allows you to customize retry behavior. For example:
retry_on AnotherCustomAppException, wait: ->(executions) { executions * 2 }
Rails also passes the current retry number as executions with the job_data and there's a retry_job method for further customization.
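To see what that wait lambda actually produces, here is a tiny plain-Ruby sketch (no Rails needed; the lambda mirrors the one above, and the delays are in seconds):

```ruby
# The wait lambda receives the number of executions so far and returns
# the delay before the next attempt, giving a linear backoff here.
wait = ->(executions) { executions * 2 }

# Delays before the 2nd, 3rd, and 4th run of the job.
DELAYS = (1..3).map { |n| wait.call(n) }
# DELAYS == [2, 4, 6]
```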
However, if you use the delayed_job_active_record gem as your backend, it looks like there's a separate config called max_attempts that controls the retry behavior.
My question is, if you use the delayed_job_active_record backend, can you still use retry_on without issues?
If you can't use retry_on, then what would be an appropriate strategy for imitating that customization of rescues?
When you're using delayed_job as a backend to ActiveJob, you end up with two retry mechanisms: first from ActiveJob, configurable using the retry_on method, and second from delayed_job, which is controlled by the max_attempts variable.
You can turn off the retry behaviour from delayed_job with the following:
# in config/initializers/delayed_job.rb
Delayed::Worker.max_attempts = 1
Now your retries are controlled entirely by the ActiveJob retry_on call, which should result in predictable behaviour.
My Rails application is using ActiveJob + DelayedJob to execute some background jobs.
I am trying to figure out how to define what happens on failure (not on error): once DelayedJob has marked the job as failed, after the allowed 3 attempts, I want to perform some operation.
This is what I know so far:
DelayedJob has the aptly named failure hook.
This hook is not supported in ActiveJob.
ActiveJob has a rescue_from method.
The rescue_from method is probably not the right solution, since I do not want to do something on each exception, but rather only after 3 attempts (read: only after DelayedJob has deemed the job as failed).
ActiveJob has an after_perform hook, which I cannot utilize since (as far as I can see) it is not called when perform fails.
Any help is appreciated.
You may have already found a solution, but for people still struggling with this issue: you can use ActiveJob's retry_on method with a block to run custom logic once the maximum number of attempts has been reached and the job still fails.
class RemoteServiceJob < ApplicationJob
  retry_on(CustomAppException) do |job, error|
    ExceptionNotifier.caught(error)
  end

  def perform(*args)
    # Might raise CustomAppException
  end
end
You can find more info about exception handling in ActiveJob at https://api.rubyonrails.org/v6.0.3.2/classes/ActiveJob/Exceptions/ClassMethods.html
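To make the ordering concrete: the block passed to retry_on fires once, only after the final failed attempt. Here is a plain-Ruby simulation of that behaviour (no Rails; the run_job method, the message string, and the notifications array standing in for ExceptionNotifier are all illustrative):

```ruby
class CustomAppException < StandardError; end

# Runs a permanently failing "job" with retry_on-style semantics and
# returns [executions, notifications].
def run_job(attempts:)
  notifications = []
  executions = 0
  begin
    executions += 1
    raise CustomAppException, "remote service down"
  rescue CustomAppException => e
    if executions < attempts
      retry                        # earlier failures retry silently
    else
      notifications << e.message   # stands in for ExceptionNotifier.caught(error)
    end
  end
  [executions, notifications]
end

run_job(attempts: 5) # => [5, ["remote service down"]] — one notification, after five runs
```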
I'm using Sidekiq to process thousands of jobs per hour, all of which ping an external API (Google). One out of every X thousand requests returns an unexpected (or empty) result. As far as I can tell, this is unavoidable when dealing with an external API.
Currently, when I encounter such a response, I raise an exception so that the retry logic automatically takes care of it on the next try. Something is only really wrong when the same job fails over and over many times. Exceptions are handled by Airbrake.
However, my Airbrake gets clogged up with these mini-outages that aren't really 'issues'. I'd like Airbrake to be notified of them only if the same job has already failed X times.
Is it possible to either:
disable the automated Airbrake integration so that I can use sidekiq_retries_exhausted to report the error manually via Airbrake.notify,
rescue the error somehow so it doesn't notify Airbrake but still gets retried, or
do this in a different way that I'm not thinking of?
Here's my code outline:
class GoogleApiWorker
  include Sidekiq::Worker
  sidekiq_options queue: :critical, backtrace: 5

  def perform
    # Do stuff interacting with the Google API
  rescue Exception => e
    if is_a_mini_google_outage?(e)
      # How do I make it so this harmless error DOES NOT get reported to Airbrake but still gets retried?
      raise e
    end
  end

  def is_a_mini_google_outage?(e)
    # check to see if this is a harmless outage
  end
end
As far as I know, Sidekiq has API classes for retries and jobs. You can look up your current job through its arguments (comparing arguments may not be effective) or through its jid (in which case you'd need to record the jid somewhere), check the number of retries, and then decide whether or not to notify Airbrake.
https://github.com/mperham/sidekiq/wiki/API
https://github.com/mperham/sidekiq/blob/master/lib/sidekiq/api.rb
(I can't give more info than this.)
If you're looking for a Sidekiq-specific solution, see https://blog.eq8.eu/til/retry-active-job-sidekiq-when-exception.html
If you're more interested in configuring Airbrake so that you don't get these errors until a certain retry count, check Airbrake::Sidekiq::RetryableJobsFilter:
https://github.com/airbrake/airbrake#airbrakesidekiqretryablejobsfilter
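Whichever route you pick, the core decision is the same: stay quiet while the retry count is below some threshold and alert only once it crosses it. A plain-Ruby sketch of that logic (the retry count would come from Sidekiq's job payload, e.g. msg['retry_count'] in a retries-exhausted handler; the threshold of 5 is an assumption):

```ruby
NOTIFY_AFTER = 5  # assumed threshold; tune to taste

# retry_count would come from the Sidekiq job hash in a real handler.
def notify_airbrake?(retry_count)
  retry_count >= NOTIFY_AFTER
end

notify_airbrake?(2) # => false — a mini-outage; keep retrying quietly
notify_airbrake?(5) # => true  — persistent failure; report via Airbrake.notify
```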
I'm trying to create an ActiveJob in Rails 4.2 that runs at a regular interval. The job is called the first time, but it does not start again. My code throws the exception below after calling perform_later.
log output
[ActiveJob] Enqueued ProcessInboxJob (Job ID: 76a63689-e330-47a1-af92-8e4838b508ae) to Inline(default)
[ActiveJob] [ProcessInboxJob] [76a63689-e330-47a1-af92-8e4838b508ae] Performing ProcessInboxJob from Inline(default)
ProcessInboxJob running...
[ActiveJob] [ProcessInboxJob] [76a63689-e330-47a1-af92-8e4838b508ae] [AWS S3 200 0.358441 0 retries] list_objects(:bucket_name=>"...",:max_keys=>1000)
[ActiveJob] [ProcessInboxJob] [76a63689-e330-47a1-af92-8e4838b508ae] Enqueued ProcessInboxJob (Job ID: dfd3dd7a-06ab-4dba-9bbf-ce1ad606f7e5) to Inline(default) with arguments: {:wait=>30 seconds}
[ActiveJob] [ProcessInboxJob] [76a63689-e330-47a1-af92-8e4838b508ae] Performed ProcessInboxJob from Inline(default) in 599.72ms
Exiting
/Users/antarrbyrd/.rbenv/versions/2.1.2/lib/ruby/gems/2.1.0/gems/activejob-4.2.0/lib/active_job/arguments.rb:60:in `serialize_argument': Unsupported argument type: ActiveSupport::Duration (ActiveJob::SerializationError)
process_inbox_job.rb
class ProcessInboxJob < ActiveJob::Base
  queue_as :default

  # FREQUENCY = 3.minutes

  def perform
    # do some work
  end

  # reschedule job
  after_perform do |job|
    self.class.perform_later(wait: 30.seconds)
  end
end
The correct syntax is self.class.set(wait: 30.seconds).perform_later. But that's not a reliable way of doing it: if an exception occurs, the chain breaks. You must also schedule the initial job yourself.
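Why the options must go to set rather than perform_later can be seen with a tiny stand-in (plain Ruby; these classes only mirror the shape of the Active Job API, they are not the real thing): set returns a configured proxy that carries the enqueue options, while anything given to perform_later becomes a job argument, which is exactly why the 30.seconds Duration hit the serializer in the traceback above.

```ruby
class FakeJob
  # Stand-in for the ConfiguredJob proxy that Active Job's `set` returns.
  Proxy = Struct.new(:wait) do
    def perform_later(*args)
      { wait: wait, args: args }   # enqueue options and job arguments kept apart
    end
  end

  def self.set(wait: 0)
    Proxy.new(wait)
  end

  def self.perform_later(*args)
    { wait: 0, args: args }        # no proxy: everything becomes a job argument
  end
end

FakeJob.set(wait: 30).perform_later  # => { wait: 30, args: [] }
FakeJob.perform_later(wait: 30)      # => { wait: 0, args: [{ wait: 30 }] }
```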
If you use resque you can use https://rubygems.org/gems/activejob-scheduler
As #bcd said, you have to use self.class.set(wait: 30.seconds).perform_later with a queue adapter that supports queuing, i.e. not the default (inline) adapter.
I'm posting to give a different point of view on the rescheduling question, which may help future readers.
after_perform will not be called if an exception is raised, but that does not make it a bad place to reschedule a job. If a job raises an exception, it's better to rescue it (with the class method rescue_from) and send yourself a notification if your backend doesn't do that already.
You can then try to fix the problem (either in the data or in your code) and retry if you can, or enqueue a similar job again.
For the scheduling part, activejob-scheduler is great and works with more than just resque, but it has some downsides.
It uses rufus-scheduler, which keeps its delays in memory, so whenever your server restarts you lose all scheduling information. That can be a real problem for some tasks (I schedule tasks one month in the future and deploy my app every week, which means a restart each time).
You also lose all the advantages of using an actual queuing backend, such as beanstalk with backburner.
ActiveJob-scheduler also claims to perform jobs at exactly the right time, which is false. The adapter enqueues the job at the specified time, but depending on your setup it may take some time before the job is actually performed, e.g. when you run your jobs on another server.
Lastly, for the initial scheduling you can include a code that checks if the job exists at the worker start, and schedule it if needed.
To sum up: ActiveJob-Scheduler is great, but you'll lose some ActiveJob features, and it does not do everything.
Depending on which queuing system you are using, you can try https://github.com/codez/delayed_cron_job or https://github.com/ondrejbartas/sidekiq-cron. With DJ cron you can use a UI like rails_admin to actually edit the cron expression. Sidekiq-cron gives you a Sinatra web UI where you can manually kick off a job or pause it.
Say I have a worker class that looks like this:
class BuilderWorker
  include Sidekiq::Worker
  sidekiq_options retry: false

  def perform(order_id)
    if order_id == 5
      # How can I finish the job here? Say I want to finish it with a status of FAIL or COMPLETE.
    end
  end
end
I am looking for a way to finish a job from the worker class and, when finished, give it the status of FAILED. The finish should be quiet (not raising an exception).
With Sidekiq there are only two job results:
success - a job returns from the perform method without error
failure - a job raises an error, going onto the retry queue to be retried in the future
Your complicated scenario is called application logic and Sidekiq cannot provide it. It is up to you to write that logic.
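For the asker's case, one application-level approach is to record the status yourself and return normally, so Sidekiq sees a success and never retries. A plain-Ruby sketch of that pattern; the JOB_STATUS Hash stands in for a database column or Redis key, and perform_build for the worker's perform method:

```ruby
JOB_STATUS = {}  # stand-in for a status column / Redis key

def perform_build(order_id)
  if order_id == 5
    JOB_STATUS[order_id] = :failed   # record the failure quietly...
    return                           # ...and return normally: no exception, no retry
  end
  # build the order...
  JOB_STATUS[order_id] = :complete
end

perform_build(5)  # records :failed, nothing raised
perform_build(1)  # records :complete
```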