Long-running delayed_job jobs stay locked after a restart on Heroku

When a Heroku worker is restarted (either on command or as the result of a deploy), Heroku sends SIGTERM to the worker process. In the case of delayed_job, the SIGTERM signal is caught and the worker stops executing once the current job (if any) has finished.
If the worker takes too long to finish, Heroku sends SIGKILL. In the case of delayed_job, this leaves a locked job in the database that won't get picked up by another worker.
I'd like to ensure that jobs eventually finish (unless there's an error). Given that, what's the best way to approach this?
I see two options, but I'd like to get other input:
Modify delayed_job to stop working on the current job (and release the lock) when it receives a SIGTERM.
Figure out a (programmatic) way to detect orphaned locked jobs and then unlock them.
Any thoughts?

Abort Job Cleanly on SIGTERM
A much better solution is now built into delayed_job: you can have the worker raise an exception on TERM signals by adding this to an initializer:
Delayed::Worker.raise_signal_exceptions = :term
With that setting, the job will clean up properly and exit before Heroku issues the final KILL signal intended for non-cooperating processes:
You may need to raise exceptions on SIGTERM signals. Delayed::Worker.raise_signal_exceptions = :term will cause the worker to raise a SignalException, causing the running job to abort and be unlocked, which makes the job available to other workers. The default for this option is false.
Possible values for raise_signal_exceptions are:
false - No exceptions will be raised (Default)
:term - Will only raise an exception on TERM signals but INT will wait for the current job to finish.
true - Will raise an exception on TERM and INT
Available since Version 3.0.5.
See this commit where it was introduced.
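For a Rails app this typically goes in an initializer. A minimal sketch, assuming a standard Rails setup (the config/initializers/delayed_job.rb file name is just a convention):
# config/initializers/delayed_job.rb
# Raise a SignalException on TERM so the running job aborts, is unlocked,
# and can be picked up by another worker after the dyno restarts.
Delayed::Worker.raise_signal_exceptions = :term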

TLDR:
Put this at the top of your job method:
begin
  term_now = false
  old_term_handler = trap 'TERM' do
    term_now = true
    old_term_handler.call
  end
AND
Make sure this is called at least once every ten seconds:
if term_now
  puts 'told to terminate'
  return true
end
AND
At the end of your method, put this:
ensure
  trap 'TERM', old_term_handler
end
Explanation:
I was having the same problem and came upon this Heroku article.
The job contained an outer loop, so I followed the article and added a trap('TERM') and exit. However, delayed_job picks that up as a SystemExit and marks the task as failed.
With SIGTERM now trapped by our handler, the worker's own handler isn't called; instead the worker immediately restarts the job and then gets SIGKILL a few seconds later. Back to square one.
I tried a few alternatives to exit:
A return true marks the job as successful (and removes it from the queue), but suffers from the same problem if there's another job waiting in the queue.
Calling exit! will successfully exit the job and the worker, but it doesn't allow the worker to remove the job from the queue, so you still have the 'orphaned locked jobs' problem.
My final solution is the one given at the top of my answer; it consists of three parts:
Before we start the potentially long job we add a new interrupt handler for 'TERM' by doing a trap (as described in the Heroku article), and we use it to set term_now = true.
But we must also grab the old_term_handler which the delayed job worker code set (which is returned by trap) and remember to call it.
We still must ensure that we return control to Delayed::Worker with enough time for it to clean up and shut down, so we should check term_now at least every ten seconds (just under, to be safe) and return if it is true.
You can either return true or return false depending on whether you want the job to be considered successful or not.
Finally, it is vital to remember to remove your handler and reinstall the Delayed::Worker one when you have finished. If you fail to do this, you will keep a dangling reference to the handler we added, which can result in a memory leak if you add another one on top of it (for example, when the worker starts this job again).
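Putting the three pieces together, a long-running perform might look like this sketch; more_work? and process_next_chunk are hypothetical placeholders for your own work loop, and each chunk should take well under ten seconds:
def perform
  term_now = false
  old_term_handler = trap('TERM') do
    term_now = true
    old_term_handler.call  # let the worker's original handler run too
  end

  while more_work?         # hypothetical: iterate over your own units of work
    process_next_chunk     # hypothetical: one short chunk of the job
    if term_now
      puts 'told to terminate'
      return true          # or false, depending on how you want the job recorded
    end
  end
  true
ensure
  trap('TERM', old_term_handler)  # reinstall the worker's handler
end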

New to the site, so can't comment on Dave's post, and need to add a new answer.
The issue I have with Dave's approach is that my tasks are long (minutes up to 8 hours) and are not repetitive at all, so I can't "make sure this is called" every 10 seconds.
Also, I have tried Dave's answer, and the job is always removed from the queue, regardless of what I return -- true or false. I am unclear as to how to keep the job on the queue.
See this pull request. I think it may work for me. Please feel free to comment on it and support the pull request.
I am currently experimenting with a trap and then rescuing the exit signal... no luck so far.

That is what max_run_time is for: after max_run_time has elapsed from the time the job was locked, other processes will be able to acquire the lock.
See this discussion on Google Groups.
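For example, if no job should ever hold its lock for more than 20 minutes, a sketch of that setting in an initializer (the value is only an illustration):
# Locks older than this are treated as expired, so a job abandoned by a
# killed worker becomes available to other workers again.
Delayed::Worker.max_run_time = 20.minutes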

I ended up having to do this in a few places, so I created a module that I stick in lib/, and then run ExitOnTermSignal.execute { long_running_task } from inside my delayed job's perform block.
# Exits whatever is currently running when a SIGTERM is received. Needed since
# Delayed::Job traps TERM, so it does not clean up a job properly if the
# process receives a SIGTERM then SIGKILL, as happens on Heroku.
module ExitOnTermSignal
  def self.execute(&block)
    original_term_handler = Signal.trap 'TERM' do
      original_term_handler.call
      # Easiest way to kill the job immediately and have DJ mark it as failed:
      exit
    end

    begin
      yield
    ensure
      Signal.trap 'TERM', original_term_handler
    end
  end
end
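Usage from a job's perform method might look like this sketch; LongImportJob and import_all_the_things are hypothetical names:
class LongImportJob
  def perform
    ExitOnTermSignal.execute do
      import_all_the_things  # the potentially long-running work
    end
  end
end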

I use a state machine to track the progress of jobs and make the process idempotent, so I can call perform on a given job/object multiple times and be confident it won't re-apply a destructive action. Then I update the rake task/delayed_job to release the lock on TERM.
When the process restarts it will continue as intended.
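A rough sketch of that idea, assuming a hypothetical state column and a destructive charge! step that must not run twice (order stands in for whatever record the job operates on):
def perform
  # Skip work that a previous (killed) run already completed.
  return if order.state == 'charged'

  order.charge!                    # destructive action, guarded by the state check
  order.update(state: 'charged')   # record progress so a re-run is a no-op
end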

Related

How to run the job synchronously with sidekiq

I am currently working with queued jobs in Ruby on Rails using Sidekiq. I have two jobs that depend on each other, and I want the first job to finish before the second job starts. Is there any way to do this with Sidekiq?
Yes, you can use the YourSidekiqJob.new.perform(parameters_to_the_job) pattern. This will run your jobs in order, synchronously.
However, there are 2 things to consider here:
What happens if the first job fails?
How long does each job run?
For #2, the pattern blocks execution for the length of time each job takes to run. If the jobs are extremely short in runtime, why use the jobs in the first place? If they're long, are you expecting the user to wait until they're done?
Alternatively, you can schedule the running of the second job as the last line in the body of the first one. You still need to account for the failure mode of job #1 or #2. Also, you need to consider that the job won't necessarily run when it's scheduled to run, due to the state of the queue at schedule time. How does this affect your business logic?
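A minimal sketch of that chaining approach; FirstJob and SecondJob are hypothetical worker classes:
class FirstJob
  include Sidekiq::Worker

  def perform(record_id)
    # ... do the first job's work ...
    # Enqueue the second job only once the first has finished successfully.
    SecondJob.perform_async(record_id)
  end
end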
Hope this helps
--edit according to last comment
class SecondJob < SidekiqJob
  def perform(params)
    data = SomeData.find
    return unless data.ready?

    # do whatever you need to do with the ready data
  end
end

sidekiq retries exhausted with arguments

I crawl leads from other sites and activate them at mine. With Sidekiq I run a HealthCheckerWorker every hour to deactivate the lead on my site unless it's still up and running on the other site.
Sometimes the HealthCheckerWorker throws an error, e.g. a bad URI. My dilemma is that if the job fails 10 times and lands in the Dead Job Queue, the lead is still considered active even though it may be long gone on the other site.
I thought the solution would be to add a sidekiq_retries_exhausted block like so:
sidekiq_retries_exhausted do |lead_id|
  Lead.find(lead_id).deactivate
end
But as defined in the docs:
The hook receives the queued message as an argument. This hook is called right before Sidekiq moves the job to the DJQ.
So it only gets the message as an argument. How can I deactivate the lead after the retries are exhausted?
The message is the entire job, including the arguments. Just pull out the arguments.
sidekiq_retries_exhausted do |msg|
  lead_id = msg['args'].first
  Lead.find(lead_id).deactivate
end

Sidekiq handling re-queue when processing large data

See the updated question below.
Original question:
In my current Rails project, I need to parse a large XML/CSV data file and save it into MongoDB.
Right now I use these steps:
Receive the uploaded file from the user and store the data in MongoDB
Use Sidekiq to process the data in MongoDB asynchronously.
After processing finishes, delete the raw data.
For small and medium data on localhost, the steps above run well. But on Heroku, I use HireFire to dynamically scale worker dynos up and down. While the worker is still processing the large data, HireFire sees an empty queue and scales the worker dyno down. This sends a kill signal to the process and leaves the processing in an incomplete state.
I'm searching for a better way to do the parsing that allows the parsing process to be killed at any time (saving the current state when it receives the kill signal) and allows the job to be re-queued.
Right now I'm using Model.delay.parse_file and it doesn't get re-queued.
UPDATE
After reading the Sidekiq wiki, I found an article about job control. Can anyone explain the code, how it works, how it preserves its state when receiving a SIGTERM signal, and how the worker gets re-queued?
Is there any alternative way to handle job termination, save current state, and continue right from the last position?
Thanks,
It might be easier to explain the process and the high-level steps, give a sample implementation (a stripped-down version of one that I use), and then talk about throw and catch:
Insert the raw CSV rows with an incrementing index (to be able to resume from a specific row/index later)
Process the CSV, stopping every 'chunk' to check whether the job is done by checking if Sidekiq::Fetcher.done? returns true
When the fetcher is done?, store the index of the currently processed item on the user and return, so that the job completes and control is returned to Sidekiq.
Note that if a job is still running after a short timeout (default 20s) it will be killed.
Then, when the job runs again, simply start where you left off last time (or at 0).
Example:
class UserCSVImportWorker
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)

    items = user.raw_csv_items.where(:index => {'$gte' => user.last_csv_index.to_i})
    items.each_with_index do |item, i|
      # Every 100 items, check whether the fetcher is shutting down.
      if ((i + 1) % 100).zero? && Sidekiq::Fetcher.done?
        user.update(last_csv_index: item.index)
        return
      end

      # Process the item as normal
    end
  end
end
The above class makes sure that every 100 items we check whether the fetcher is done (a proxy for whether shutdown has started) and, if so, end execution of the job. Before execution ends, however, we update the user with the last index that was processed, so that we can start where we left off next time.
throw/catch is a way to implement the functionality above a little more cleanly (maybe), but it is a little like using Fibers: a nice concept, but hard to wrap your head around. Technically, throw/catch is more like goto than most people are generally comfortable with.
edit
Alternatively, you could skip the call to Sidekiq::Fetcher.done? and record the last_csv_index on each row, or on each chunk of rows processed. That way, if your worker is killed without the opportunity to record the last_csv_index, you can still resume 'close' to where you left off.
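That variant might look like this small sketch (persisting progress every 100 rows without the done? check):
items.each_with_index do |item, i|
  # Process the item as normal, then persist progress every 100 rows so a
  # killed worker can resume close to where it left off.
  user.update(last_csv_index: item.index) if ((i + 1) % 100).zero?
end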
You are trying to address the concept of idempotency, the idea that processing a thing multiple times with potential incomplete cycles does not cause problems. (https://github.com/mperham/sidekiq/wiki/Best-Practices#2-make-your-jobs-idempotent-and-transactional)
Possible steps forward
Split the file up into parts and process those parts with a job per part.
Lift the threshold for hirefire so that it will scale when jobs are likely to have fully completed (10 minutes)
Don't allow hirefire to scale down while a job is working (set a redis key on start and clear it on completion; see the sketch after this list)
Track progress of the job as it is processing and pick up where you left off if the job is killed.
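A rough sketch of that redis-key guard, assuming a hypothetical import:in_progress key that your scale-down check consults before shutting workers down:
class ImportWorker
  include Sidekiq::Worker

  def perform(user_id)
    # Flag that a long import is running so scale-down logic can back off.
    Sidekiq.redis { |redis| redis.set('import:in_progress', user_id) }
    # ... do the long-running processing here ...
  ensure
    # Clear the flag so scale-down is allowed again.
    Sidekiq.redis { |redis| redis.del('import:in_progress') }
  end
end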

Moving a Resque job between queues

Is there anyway to move a resque job between two different queues?
We sometimes get into the situation where we have a big queue and a job near the end that we need to "bump up in priority." We thought an easy way might be to simply move it to another queue that has a worker waiting for any high-priority jobs.
This happens rarely and is usually a case where we get a special call from a customer, so scaling or re-engineering doesn't seem necessary.
There is nothing built into Resque for this. You can use rpoplpush like this:
module Resque
  def self.move_queue(source, destination)
    r = Resque.redis
    r.llen("queue:#{source}").times do
      r.rpoplpush("queue:#{source}", "queue:#{destination}")
    end
  end
end
https://gist.github.com/rafaelbandeira3/7088498
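Usage is then a one-liner; the queue names here are just an illustration:
# Drain everything from the 'low' queue into the 'high' queue.
Resque.move_queue('low', 'high')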
If it's a rare occurrence you're probably better off just manually pushing a new job into a shorter queue. You'll want to make sure that your system has a way to identify that the job has already run and to bail out so that when the job in the long queue is finally reached it is not processed again (if double processing is a problem for you).
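A rough sketch of that manual bump, assuming Resque.enqueue_to is available in your Resque version; SpecialRequestJob and the already_processed? guard are hypothetical names:
# Push a duplicate of the job onto a short, high-priority queue.
Resque.enqueue_to(:high, SpecialRequestJob, customer_id)

class SpecialRequestJob
  @queue = :low

  def self.perform(customer_id)
    # Bail out if the high-priority copy (or an earlier run) already did the work.
    return if already_processed?(customer_id)
    # ... do the work ...
  end
end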

erlang supervisor restart strategy

I would like to start several processes as children of a given supervisor. The restart strategy is one_for_one. For my needs, every process that terminates should be restarted after a given amount of time (e.g. 20 seconds).
How can this be done? Maybe with a delay in the init or terminate functions, in combination with:
Shutdown = brutal_kill | integer() >=0 | infinity
Is there a better way to achieve this?
Don't use init/1 for this. While init is running, the supervisor is blocked. It is better to start up the process right away, but only let it register itself for operations like this after it has waited for 20 seconds. You could use a simple erlang:send_after(..) call in the init to trigger this startup delay.
I don't like the termination thing either. Perhaps have a close-down state in which you linger for a bit before termination. This could make sure that nobody else runs while you are shutting down. I'd recommend that if you are in control of when to close down: simply enter this state and then await a timer trigger like the one above. Note, though, that this solution will only free external resources (files, ETS tables, sockets) after the grace period, unless they are explicitly freed.
