Sidekiq retries exhausted with arguments

I crawl leads from other sites and activate them on mine. With Sidekiq I run a HealthCheckerWorker every hour to deactivate a lead on my site unless it's still up and running on the other site.
Sometimes the HealthCheckerWorker throws an error, e.g. a bad URI. My dilemma is that once the job has failed 10 times and landed in the Dead Job Queue, the lead is still considered active even though it may be long gone on the other site.
I thought the solution would be to add a sidekiq_retries_exhausted block like so:
sidekiq_retries_exhausted do |lead_id|
  Lead.find(lead_id).deactivate
end
But as defined in the docs:
The hook receives the queued message as an argument. This hook is called right before Sidekiq moves the job to the DJQ.
So it only gets the message as an argument. How can I deactivate the lead after the retries are exhausted?

The message is the entire job, including the arguments. Just pull out the arguments.
sidekiq_retries_exhausted do |msg|
  lead_id = msg['args'].first
  Lead.find(lead_id).deactivate
end
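For context, the hook is declared at the class level of the worker, and msg is the whole job hash. A sketch of how it might fit together (the retry count and the perform body here are assumptions):
class HealthCheckerWorker
  include Sidekiq::Worker
  sidekiq_options retry: 10 # assumption: matches the "fails 10 times" above

  # msg is the job payload hash, e.g.
  # { 'class' => 'HealthCheckerWorker', 'args' => [42], 'error_message' => 'bad URI ...' }
  sidekiq_retries_exhausted do |msg|
    Lead.find(msg['args'].first).deactivate
  end

  def perform(lead_id)
    # check the lead on the other site; raises on bad URI etc.
  end
end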


Display info (date) about completed Sidekiq job

I'm using Rails 6 with Sidekiq. I've got a FetchAllProductsWorker like the one below:
module Imports
  class FetchAllProductsWorker
    include Sidekiq::Worker

    sidekiq_options queue: 'imports_fetch_all', retry: 0

    def perform
      # (...)
    end
  end
end
I want to check when FetchAllProductsWorker last finished successfully and display this info in my front-end. This job will be fired sporadically, but the user must have feedback about when the last database sync (which FetchAllProductsWorker is responsible for) succeeded.
I want this info only for this one worker. I saw a lot of useful things in the Sidekiq API docs, but none of them relate to the history of completed jobs.
You could use the Sidekiq Batches API, which provides an on_success callback, but that is mostly meant for tracking batch work and is overkill for your problem. I suggest writing your own code at the end of the perform method.
def perform
  # (...) run the already implemented code
  # if it finished successfully, notify/log here
end
The simplified default lifecycle of a Sidekiq job looks like this:
– If there is an error, the job will be retried a couple of times (read about Retries in the Sidekiq docs). During that time you can see the failing job and the error in the Sidekiq Web UI, if configured.
– If the job finishes successfully, it is removed from Redis and no information about this specific job remains available to the application.
That means Sidekiq does not really support querying for jobs that ran successfully in the past. If you need information about past jobs, you have to build it yourself. I see basically three options for monitoring Sidekiq jobs:
Write useful information to your application's log. Most logging tools support watching for specific messages, sending notifications, or creating views for specific events. This might be enough if you just need this information for debugging purposes.
def perform
  Rails.logger.info("#{self.class.name} started")
  begin
    # job code
  rescue => exception
    Rails.logger.error("#{self.class.name} failed: #{exception.message}")
    raise # re-raise the exception to trigger Sidekiq's default retry behavior
  else
    Rails.logger.info("#{self.class.name} finished successfully")
  end
end
If you are mostly interested in being informed when there is a problem, I suggest looking at a tool like Dead Man's Snitch. The way those tools work is that you ping their API as the last step of a job, which is only reached when there was no error. You then configure the tool to notify you if its API hasn't been pinged in the expected timeframe. For example, if you have a daily import job, Dead Man's Snitch would send you a message only if there wasn't a successful import job in the last 24 hours; as long as the job succeeds, it will not spam you every single day.
require 'open-uri'

def perform
  # job code
  open("https://nosnch.in/#{TOKEN}") # on Ruby >= 2.7, use URI.open instead
end
If you want your application's users to see job statuses on a dashboard in the application, then it makes sense to store that information in the database. You could, for example, create a JobStatus ActiveRecord model with columns like job_name, status, payload, and created_at, and then create records in that table whenever it feels useful. Once the data is in the database you can present it like any other model's data to the user.
def perform
  begin
    # job code
  rescue => exception
    JobStatus.create(job_name: self.class.name, status: 'failed', payload: exception.to_json)
    raise # re-raise the exception to trigger Sidekiq's default retry behavior
  else
    JobStatus.create(job_name: self.class.name, status: 'success')
  end
end
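For completeness, here is a minimal sketch of a migration for such a table, plus the query that answers the original question (the column types and the index are assumptions):
class CreateJobStatuses < ActiveRecord::Migration[6.0]
  def change
    create_table :job_statuses do |t|
      t.string :job_name, null: false
      t.string :status, null: false
      t.text :payload
      t.datetime :created_at, null: false
    end
    add_index :job_statuses, [:job_name, :created_at]
  end
end

# "When did the worker last finish successfully?"
JobStatus.where(job_name: 'Imports::FetchAllProductsWorker', status: 'success')
         .order(created_at: :desc)
         .first&.created_at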
And, of course, you can combine all those techniques and tools for different use cases: 1. for history and statistics, 2. for admins and people on call, 3. for users of your application.

How to avoid hitting Heroku's API rate limit with delayed_job and workless

My Survey model has about 2500 instances and I need to apply the set_state method to each instance twice. I need to apply it the second time only after every instance has had the method applied to it once. (The state of an instance can depend on the state of other instances.)
I'm using delayed_job to create delayed jobs and workless to automatically scale up/down my worker dynos as required.
The set_state method typically takes about a second to execute. So I've run the following at the heroku console:
2.times do
  Survey.all.each do |survey|
    survey.delay.set_state
    sleep(4)
  end
end
Shouldn't be any issues with overloading the API, right?
And yet I'm still seeing the following in my logs for each delayed job:
Heroku::API::Errors::ErrorWithResponse: Expected(200) <=> Actual(429 Unknown)
I'm not seeing any infinite loops -- it just returns this message as soon as I create the delayed job.
How can I avoid blowing Heroku's API rate limits?
Reviewing workless, it looks like it incurs an API call per delayed job to check the worker count, and potentially a second API call to scale up/down. So if you are running 5000 (2500 x 2) jobs within a short period, you'll end up with 5000+ API calls, well in excess of the 1200-requests-per-hour limit. I've commented over there to hopefully help toward reducing the overall API usage (https://github.com/lostboy/workless/issues/33#issuecomment-20982433), but I think we can offer a more specific solution for you.
In the meantime, especially if your workload is pretty predictable (like this), I'd recommend skipping workless and doing that portion yourself, i.e. it sounds like you already know WHEN the scaling needs to happen (scale up right before the loop above, scale down right after). If that is the case, you could do something like this to emulate the behavior of workless:
require 'heroku-api'

heroku = Heroku::API.new(:api_key => ENV['HEROKU_API_KEY'])

# Scale up before enqueuing the jobs.
heroku.post_ps_scale(ENV['APP_NAME'], 'worker', Survey.count)

2.times do
  Survey.all.each do |survey|
    survey.delay.set_state
    sleep(4)
  end
end

# Scale back down to the usual minimum afterwards.
min_workers = ENV['WORKLESS_MIN_WORKERS'].present? ? ENV['WORKLESS_MIN_WORKERS'].to_i : 0
heroku.post_ps_scale(ENV['APP_NAME'], 'worker', min_workers)
Note that you'll also need to remove workless from these jobs. I didn't see a particular way to do this for JUST certain jobs, so you might want to ask on that project if you need that. Also, if this needs to be two passes (the first pass must finish before the second starts), the 4-second sleep may in some cases be insufficient, but that is a different can of worms; one possible approach is sketched below.
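As a sketch of one way to handle the two-pass requirement with the default ActiveRecord backend of delayed_job: drop the per-job sleep (it was only there for workless's benefit) and instead wait for the queue to drain between passes. This assumes every pending row in the delayed_jobs table belongs to this batch:
2.times do
  Survey.all.each { |survey| survey.delay.set_state }
  # Block until this pass's jobs have been worked off (or failed)
  # before enqueuing the next pass.
  sleep(10) while Delayed::Job.where(failed_at: nil).exists?
end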
I hope that helps narrow in on what you needed, but I'm certainly happy to discuss further and/or elaborate on the above as needed. Thanks!

Is there a way to run a job at a set time later, without cron, say a scheduled queue?

I have a rails application where I want to run a job in the background, but I need to run the job 2 hours from the original event.
The use case might be something like this:
User posts a product listing.
A background job is queued to syndicate the listing to third-party APIs, but even after the original request, the response can take a while; the third party's solution is to poll them every 2 hours to see if we can get a success acknowledgement.
So is there a way to queue a job, so that a worker daemon knows to ignore it or only listen to it at the scheduled time?
I don't want to use cron because it will load up a whole application stack and may execute twice on overlapping long-running jobs.
Can a priority queue be used for this? What solutions are there to implement this?
Try delayed_job: https://github.com/collectiveidea/delayed_job
Something along these lines?
class ProductCheckSyndicateResponseJob < Struct.new(:product_id)
  def perform
    product = Product.find(product_id)
    if product.still_needs_syndicate_response
      # do it ...
      # still no response, check again in two hours
      Delayed::Job.enqueue(ProductCheckSyndicateResponseJob.new(product.id), :run_at => 2.hours.from_now)
    else
      # nothing to do ...
    end
  end
end
Initialize the job the first time in the controller, or maybe in a before_create callback on the model?
Delayed::Job.enqueue(ProductCheckSyndicateResponseJob.new(@product.id), :run_at => 2.hours.from_now)
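As a sketch of the model-callback variant (using after_create rather than before_create, so the record already has an id to hand to the job):
class Product < ActiveRecord::Base
  after_create :schedule_syndicate_response_check

  private

  def schedule_syndicate_response_check
    Delayed::Job.enqueue(ProductCheckSyndicateResponseJob.new(id), :run_at => 2.hours.from_now)
  end
end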
Use the Rufus Scheduler gem. It runs as a background thread, so you don't have to load the entire application stack again. Add it to your Gemfile, and then your code is as simple as:
# in an initializer:
SCHEDULER = Rufus::Scheduler.start_new

# then, wherever you want in your Rails app:
SCHEDULER.in('2h') do
  # whatever code you want to run in 2 hours
end
The GitHub page has tons more examples.
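One version caveat, as an assumption about which release you're on: Rufus::Scheduler.start_new is the older rufus-scheduler 2.x entry point; on 3.x you would write SCHEDULER = Rufus::Scheduler.new instead, while the SCHEDULER.in('2h') call stays the same.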

Delay sending mails to boost page load time

In my Product#create method I have something like
ProductNotificationMailer.notify_product(n.email).deliver
This fires off if the product gets saved. The thing is, before the above is fired, there is a bunch of logic and calculation happening which delays the confirmation page load time. Is there a way to make sure the next page loads first and the mail delivery happens later, in the background?
Thanks
Yes, you'll want to look into background workers. Sidekiq, DelayedJob or Resque are some popular ones.
Here's a great RailsCast demonstrating Sidekiq.
class NotificationWorker
  include Sidekiq::Worker

  def perform(n_id)
    n = N.find(n_id)
    ProductNotificationMailer.notify_product(n.email).deliver
  end
end
I'm not sure what n was in your example, so I just went with it. Now where you do the work, you can replace it with:
NotificationWorker.perform_async(n.id)
The reason you don't pass the full object n as an argument is that the arguments will be serialized, and it's easier/faster to serialize just the integer id.
Once the job is stored, you have a second process running in the background that will do the work, freeing up your web process to immediately go back to rendering the response.
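That second process is the Sidekiq server itself, started alongside your web process. On a Heroku-style setup that would be a Procfile entry along these lines (a sketch):
web: bundle exec rails server
worker: bundle exec sidekiq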
Delayed Job will do this too.
Here is the GitHub page, and here is a RailsCast on setting it up.

Long running delayed_job jobs stay locked after a restart on Heroku

When a Heroku worker is restarted (either on command or as the result of a deploy), Heroku sends SIGTERM to the worker process. In the case of delayed_job, the SIGTERM signal is caught and then the worker stops executing after the current job (if any) has stopped.
If the worker takes too long to finish, Heroku sends SIGKILL. In the case of delayed_job, this leaves a locked job in the database that won't get picked up by another worker.
I'd like to ensure that jobs eventually finish (unless there's an error). Given that, what's the best way to approach this?
I see two options, but I'd like to get other input:
Modify delayed_job to stop working on the current job (and release the lock) when it receives a SIGTERM.
Figure out a (programmatic) way to detect orphaned locked jobs and then unlock them.
Any thoughts?
Abort Job Cleanly on SIGTERM
A much better solution is now built into delayed_job. Use this setting to throw an exception on TERM signals by adding this in your initializer:
Delayed::Worker.raise_signal_exceptions = :term
With that setting, the job will properly clean up and exit prior to Heroku issuing the final KILL signal intended for non-cooperating processes:
You may need to raise exceptions on SIGTERM signals, Delayed::Worker.raise_signal_exceptions = :term will cause the worker to raise a SignalException causing the running job to abort and be unlocked, which makes the job available to other workers. The default for this option is false.
Possible values for raise_signal_exceptions are:
false - No exceptions will be raised (Default)
:term - Will only raise an exception on TERM signals but INT will wait for the current job to finish.
true - Will raise an exception on TERM and INT
Available since Version 3.0.5.
See this commit where it was introduced.
TLDR:
Put this at the top of your job method:
begin
  term_now = false
  old_term_handler = trap 'TERM' do
    term_now = true
    old_term_handler.call
  end
AND
Make sure this is called at least once every ten seconds:
if term_now
  puts 'told to terminate'
  return true
end
AND
At the end of your method, put this:
ensure
  trap 'TERM', old_term_handler
end
Explanation:
I was having the same problem and came upon this Heroku article.
The job contained an outer loop, so I followed the article and added a trap('TERM') and an exit. However, delayed_job picks that up as a failure with SystemExit and marks the task as failed.
With the SIGTERM now caught by our trap, the worker's own handler isn't called; instead the worker immediately restarts the job and then gets SIGKILL a few seconds later. Back to square one.
I tried a few alternatives to exit:
A return true marks the job as successful (and removes it from the queue), but suffers from the same problem if there's another job waiting in the queue.
Calling exit! will successfully exit the job and the worker, but it doesn't allow the worker to remove the job from the queue, so you still have the 'orphaned locked jobs' problem.
My final solution was the one given at the top of my answer; it comprises three parts:
Before we start the potentially long job we add a new interrupt handler for 'TERM' by doing a trap (as described in the Heroku article), and we use it to set term_now = true.
But we must also grab the old_term_handler which the delayed job worker code set (which is returned by trap) and remember to call it.
We still must ensure that we return control to the Delayed::Job worker with sufficient time for it to clean up and shut down, so we should check term_now at least (just under) every ten seconds and return if it is true.
You can either return true or return false depending on whether you want the job to be considered successful or not.
Finally, it is vital to remember to remove your handler and reinstall the Delayed::Job worker's one when you have finished. If you fail to do this you will keep a dangling reference to the one we added, which can result in a memory leak if you add another one on top of that (for example, when the worker starts this job again).
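Putting the three parts together, a long-running perform might look like this (a sketch; finished? and work_one_chunk are hypothetical placeholders for your own job logic):
def perform
  term_now = false
  old_term_handler = trap 'TERM' do
    term_now = true
    old_term_handler.call
  end

  until finished? # hypothetical completion check
    work_one_chunk # hypothetical slice of work, each well under ten seconds

    if term_now
      puts 'told to terminate'
      return true # or false, if a partial run should count as failed
    end
  end
ensure
  trap 'TERM', old_term_handler
end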
New to the site, so can't comment on Dave's post, and need to add a new answer.
The issue I have with Dave's approach is that my tasks are long (minutes up to 8 hours), and are not repetitive at all. I can't "ensure to call" every 10 seconds.
Also, I have tried Dave's answer, and the job is always removed from the queue, regardless of what I return -- true or false. I am unclear as to how to keep the job on the queue.
See this pull request. I think this may work for me. Please feel free to comment on it and support the pull request.
I am currently experimenting with trapping and then rescuing the exit signal... No luck so far.
That is what max_run_time is for: after max_run_time has elapsed from the time the job was locked, other processes will be able to acquire the lock.
See this discussion on Google Groups.
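For reference, the setting lives on Delayed::Worker and is typically set in an initializer. A sketch (the value is an assumption; pick something comfortably longer than your longest job):
# config/initializers/delayed_job_config.rb
Delayed::Worker.max_run_time = 4.hours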
I ended up having to do this in a few places, so I created a module that I stick in lib/, and then run ExitOnTermSignal.execute { long_running_task } from inside my delayed job's perform block.
# Exits whatever is currently running when a SIGTERM is received. Needed since
# Delayed::Job traps TERM, so it does not clean up a job properly if the
# process receives a SIGTERM then SIGKILL, as happens on Heroku.
module ExitOnTermSignal
  def self.execute(&block)
    original_term_handler = Signal.trap 'TERM' do
      original_term_handler.call
      # Easiest way to kill the job immediately and have DJ mark it as failed:
      exit
    end

    begin
      yield
    ensure
      Signal.trap 'TERM', original_term_handler
    end
  end
end
I use a state machine to track the progress of jobs and make the process idempotent, so I can call perform on a given job/object multiple times and be confident it won't re-apply a destructive action. Then I update the rake task/delayed_job to release the lock on TERM.
When the process restarts it will continue as intended.
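A minimal sketch of that idea (the Import model, its state predicates, and the bang transitions are all hypothetical):
def perform
  import = Import.find(import_id)
  # Each step runs only if its state hasn't been reached yet, so re-running
  # perform after a restart never repeats a destructive action.
  import.fetch_data! if import.pending?   # pending -> fetched
  import.apply_data! if import.fetched?   # fetched -> applied
  import.finalize!   if import.applied?   # applied -> done
end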
