Rails ActiveJob: Retry the same job with some delay

I am using Active Job and it works wonderfully well. While playing around, I noticed something I'd like to improve.
I have a job like:
class SomeJob < ApplicationJob
  queue_as :default

  def perform(param)
    # if the condition holds, re-try after x minutes
    if condition
      self.class.set(wait: x.minutes).perform_later(param)
      return
    end
    # something else
  end
end
Upon some condition, I re-schedule the current job to run after an x-minute delay with the same original parameters. The scheduling works great, but I observed a nuance at the database level that I'd like to improve.
The issue is that a new job is created: a new row in the db table. Instead, I'd like it to behave as the same job, just with an added delay (basically I want to re-schedule the current job with the same parameters, not spawn a new one).
I do realize that raising an error will probably do the trick as far as staying on the same job is concerned. One nice thing about that is that the attempts count gets incremented too. But I'd like to be able to just add a delay before the job runs again (the same job, without creating a new one).
How can I do this? Thanks.

Yes, you'll want to retry rather than enqueue a new job. Look at the customizations available through the class method retry_on.
Changing your code, it could look like:
class SomeJob < ApplicationJob
  queue_as :default

  retry_on RetrySomeJobException, wait: x.minutes

  def perform(param)
    raise RetrySomeJobException if condition
    # Do the work!
  end
end
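retry_on also accepts a proc for the :wait option, receiving the execution count, so the delay can grow with each attempt. A minimal sketch of such a backoff function in plain Ruby (the 30-second step and 10-minute cap are illustrative values, not ActiveJob defaults):

```ruby
# Backoff suitable for retry_on's wait: option, e.g.
#   retry_on RetrySomeJobException, wait: ->(executions) { backoff(executions) }
# Returns a delay in seconds that grows linearly and is capped at 10 minutes.
def backoff(executions)
  [30 * executions, 600].min
end
```
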

Related

What is the best way to re-queue job in Rails with delay?

I want to re-queue the job automatically after it's done, with an optional 10-second delay if nothing was processed.
I can see two approaches. The first is to put the logic inside the perform method; the second is to use the around_perform callback.
Which is the more elegant way to do this?
Here is an example code block with such logic:
# Process queue automatically
class ProcessingJob < ApplicationJob
  queue_as :processing_queue

  around_perform do |_job, block|
    if block.call.zero?
      # wait 10 seconds if 0 items were handled
      self.class.set(wait: 10.seconds).perform_later
    else
      # don't rest, there is more work to do
      self.class.perform_later
    end
  end

  def perform
    # returns the number of processed items
    ProcessingService.handle_next_batch
  end
end
Should I put the around_perform logic into the perform method instead?
def perform
  # returns the number of processed items
  processed_items_count = ProcessingService.handle_next_batch
  delay_for_next_job = processed_items_count.zero? ? 10 : 0
  next_job = self.class.set(wait: delay_for_next_job.seconds)
  next_job.perform_later
end
You can extract delay_for_next_job as a private method, apply further refactoring as needed, etc.
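For instance, the extraction could look like this sketch (delay_for_next_job is just a hypothetical method name; the 10-second value mirrors the example above):

```ruby
# Hypothetical private helper: decide how long (in seconds) to wait
# before the next run, based on how much work the last run found.
def delay_for_next_job(processed_items_count)
  processed_items_count.zero? ? 10 : 0
end
```
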
Why use around_perform? You don't need the job instance here.
Depending on your needs, you can also check whether any jobs are currently pending using something like https://github.com/mhenrixon/sidekiq-unique-jobs (sorry, I'm not really familiar with the ActiveJob API).

How to make a delayed job enqueue itself

I am implementing a video streaming interface for Azure's Media Services API in Rails. I need to continuously update the uploaded video in order to process it (copy, encode) through Media Services; the status will eventually be either available or failed. To do this I decided to use Delayed Job, but I am not sure of the best way to keep a job always running.
class UpdateAzureVideosJob < ApplicationJob
  queue_as :azure_media_service

  def perform
    to_update = AzureVideo.all.map { |v| v if v.state != 5 }.compact
    to_update.each do |video|
      video.update
    end
    sleep(5)
    Delayed::Job.enqueue self
  end

  def before(job)
    Delayed::Job.where("last_error is NULL AND queue = ? AND created_at < ?", job.queue, DateTime.now).delete_all
  end
end
The reason I delete previous jobs of the same queue is that when I call enqueue inside perform, it adds an extra job, which then adds another extra job, and the queue of scheduled jobs gets dirty really quickly.
I am just experimenting, and this is probably the closest workaround (although a bit silly) for my case. I haven't tried other alternatives, but any suggestions would be appreciated. Thank you.
This is how I did it:
def after(job)
  Delayed::Job.transaction do
    Delayed::Job.create(handler: job.handler, queue: job.queue, run_at: job.run_at + 5.minutes)
    job.destroy
  end
end
It will re-schedule the job to run every 5 minutes after the original has finished. I've been using it in production for the better part of a year without any issues. I also have the same logic in the #error(job) callback.
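Note the design choice of anchoring the new run_at to the original job's run_at rather than to the current time: the former keeps a steady cadence even when a run finishes late, while the latter drifts by however long each run took. As a plain-Ruby sketch of the two options (interval in seconds; neither function is delayed_job API):

```ruby
# Anchoring to the previous run_at keeps a fixed cadence (what the
# answer above does with job.run_at + 5.minutes).
def next_run_anchored(previous_run_at, interval_seconds)
  previous_run_at + interval_seconds
end

# Anchoring to "now" drifts: each run's duration pushes the schedule back.
def next_run_from_now(now, interval_seconds)
  now + interval_seconds
end
```
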

Rails and sucker_punch: Debounce x seconds before executing job to control rate of execution

In my Rails 3.2 project, I am using sucker_punch to run an expensive background task when a model is created/updated.
Users can perform different types of interactions on this model. Most of the time these updates are pretty well spaced out; however, for some actions like re-ordering, bulk updates, etc., the POST requests can come in very frequently, and that's when they overwhelm the server.
My question is: what would be the most elegant/smart strategy to start the background job when the first update happens, but wait, say, 10 seconds to make sure no more updates are coming in to that model (the table, not a row), and then execute the job? So effectively throttling without queuing.
My sucker_punch worker looks something like this:
class StaticMapWorker
  include SuckerPunch::Job
  workers 10

  def perform(map, markers)
    # perform some expensive job
  end
end
It gets called from the Marker and Map models, and sometimes from controllers (for update_all cases), like so:
after_save :generate_static_map_html

def generate_static_map_html
  StaticMapWorker.new.async.perform(self.map, self.map.markers)
end
So, a pretty standard setup for running a background job. How do I make the job wait, or not get scheduled, until there have been no updates to my model (or table) for x seconds?
If it helps, Map has_many Markers, so triggering the job with logic that fires when any marker association of a map updates would be alright too.
What you are looking for is delayed jobs, implemented through ActiveJob's perform_later. According to the edge guides (see the ActiveJob::QueueAdapters comparison), delayed execution isn't implemented in sucker_punch.
Fret not, however, because you can implement it yourself pretty simply. When your worker picks the job up, first compare the record's modified_at timestamp to 10 seconds ago. If the model has been modified within that window, simply re-add the job to the queue and abort gracefully.
As per the example in the sucker_punch README on GitHub, which explains how to enqueue a job from within a worker:
class StaticMapWorker
  include SuckerPunch::Job
  workers 10

  def perform(map, markers)
    if Map.where(modified_at: 10.seconds.ago..Time.now).count > 0
      StaticMapWorker.new.async.perform(map, markers)
    else
      # perform some expensive job
    end
  end
end
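The guard in the worker boils down to a window check: defer the expensive work while the record is still being modified. A standalone sketch of that predicate (the 10-second window mirrors the example above; the method name is made up):

```ruby
# True while the record changed within the debounce window, i.e. the job
# should re-enqueue itself instead of doing the expensive work yet.
def still_settling?(last_modified_at, now, window_seconds = 10)
  (now - last_modified_at) < window_seconds
end
```
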

Delayed Job: Configure run_at and max_attempts for a specific job

I need to override Delayed::Worker.max_attempts for one specific job, which I want to retry a lot of times. I also don't want the next scheduled time to be determined exponentially (from the docs: 5 seconds + N ** 4, where N is the number of retries).
I don't want to override the global Delayed::Worker settings and affect other jobs.
My job is already a custom job (I handle errors in a certain way), so that might be helpful. Any pointers on how to do this?
I figured it out by looking through the delayed_job source code; it's not documented anywhere in their docs.
Here's what I did:
class MyCustomJob < Struct.new(:param1, :param2)
  def perform
    # do something
  end

  # the time and attempts params are required by delayed_job
  def reschedule_at(time, attempts)
    30.seconds.from_now
  end

  def max_attempts
    50
  end
end
Then run it wherever you need to by using enqueue, like this:
Delayed::Job.enqueue(MyCustomJob.new(param1, param2))
Hope this helps someone in the future.
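Since delayed_job passes the current attempt count into reschedule_at, the delay doesn't have to be constant. A sketch of a linear, capped backoff as an alternative body (the 30-second step and 5-minute cap are made-up numbers, not delayed_job defaults):

```ruby
# Alternative reschedule_at body using the attempts argument delayed_job
# passes in: back off by 30 seconds per attempt, capped at 5 minutes.
def reschedule_at(current_time, attempts)
  current_time + [30 * (attempts + 1), 300].min
end
```
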

More Advanced Control Over Delayed Job workers retry

So I'm using Delayed::Job workers (on Heroku) in an after_create callback after a user creates a certain model.
A common use case, it turns out, is for users to create something and then immediately delete it (likely because they made a mistake).
When this occurs the workers fire up, but by the time they query for the model, it's already deleted. Because of the auto-retry feature, this ill-fated job will retry 25 times and can never succeed.
Is there any way I can catch certain errors and, when they occur, prevent that specific job from ever retrying, while other errors still retry as usual?
Abstract the checks into the function you call with delayed_job. Make the relevant checks on whether your desired job can proceed, and either work on the job or return success.
To expand on David's answer, instead of doing this:
def after_create
  self.send_later :spam_this_user
end
I'd do this:
# user.rb
def after_create
  Delayed::Job.enqueue SendWelcomeEmailJob.new(self.id)
end

# send_welcome_email_job.rb
class SendWelcomeEmailJob < Struct.new(:user_id)
  def perform
    user = User.find_by_id(user_id)
    return if user.nil? # user must have been deleted
    # do stuff with user
  end
end
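The same guard-clause idea can be expressed as a pure function, which makes it easy to test without a database (find_user here is any callable standing in for User.find_by_id; the method and return values are hypothetical, not delayed_job API):

```ruby
# Hypothetical guard: return cleanly when the record is gone so the job
# counts as a success (no retry); any other error raised by the work
# itself would still propagate and be retried as usual.
def perform_welcome_email(user_id, find_user)
  user = find_user.call(user_id)
  return :skipped if user.nil? # deleted before the job ran: do nothing
  # ... do stuff with user, e.g. send the welcome email ...
  :sent
end
```
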
