I have made a fair number of changes to one of my jobs over the last year: triggering it from after_commit instead of after_create on the respective model, cleaning up the logic, and covering corner cases.
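(For reference, the callback change looks roughly like this; the model, job, and method names below are placeholders, not the actual app code.)
class Order < ApplicationRecord # placeholder model
  # Before: enqueued inside the transaction, so a worker could pick the job up
  # before the record was committed.
  # after_create :enqueue_processing_job

  # After: enqueue only once the transaction has committed.
  after_commit :enqueue_processing_job, on: :create

  private

  def enqueue_processing_job
    ProcessOrderJob.perform_async(id) # placeholder Sidekiq worker
  end
end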
I can see my old jobs from months ago retrying over and over again in the Papertrail logs for my Heroku Ruby on Rails app. The new ones are fine, and I believe my changes have fixed the issues. The problem is: how do I stop all those old jobs, and why do I not see them in the Sidekiq UI? The Sidekiq UI just shows a number of completed jobs, but 0 failed, dead, busy, or enqueued. It says 0, yet I see the logs churning away.
I log the job IDs, but I have read that you cannot kill a specific job. I have restarted my server multiple times with no luck. Every day they try again.
I should note that all recent jobs are fine; anything within the last month or so does not repeat. Out of the 5,000 objects that had an after_create-triggered job, only 1-60 are retrying. The others passed and are fine.
If you know the jids, you can do this from a Rails console:
queue = Sidekiq::Queue.new("my_queue")
queue.each do |job|
  job.delete if job.jid == 'abcdef1234567890'
end
If it's in the retry set, you can do:
query = Sidekiq::RetrySet.new
query.each do |job|
  job.delete if job.jid == 'abcdef1234567890'
end
If you can't delete the jobs because they are in flight, stop your worker processes (i.e. shut 'em down) for a few minutes and then run the above.
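Since the UI shows 0 enqueued and 0 retrying, it may also be worth sweeping the retry, scheduled, and dead sets in one pass; a minimal sketch, where the jid list is a placeholder you would fill from your Papertrail logs:
bad_jids = %w[abcdef1234567890] # placeholder jids copied from the logs
[Sidekiq::RetrySet.new, Sidekiq::ScheduledSet.new, Sidekiq::DeadSet.new].each do |set|
  set.each { |job| job.delete if bad_jids.include?(job.jid) }
end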
Related
I want to use Heroku, but the fact that they restart dynos every 24 hours at random times is making things a bit difficult.
I have a series of jobs dealing with payment processing that are very important, and I want them backed by the database so they're 100% reliable. For this reason, I chose DJ, which is slow.
Because I chose DJ, it also means I can't just push 5,000,000 events to the database at once (one per email send).
Because of THAT, I have longer-running jobs (send 200,000 text messages over a few hours).
With these longer-running jobs, it's more challenging to get them working correctly if they're cut off right in the middle.
It appears Heroku sends SIGTERM and then expects the process to shut down within 30 seconds. That is not going to happen for my longer jobs.
Now I'm not sure how to handle them. The only way I can think of is to update the database immediately after each text is sent (for example, an sms_sent_at column), but that just means I'm destroying database performance instead of sending a single update query for every batch.
This would be a lot better if I could schedule restarts; at least then I could do it at night, when I'm 99% sure I won't be running any jobs that take longer than 30 seconds to shut down.
Or, another way: can I 'listen' for SIGTERM within a long-running DJ and at least abort the loop early so it can resume later?
Manual restarts reset the 24-hour clock; running heroku ps:restart at your preferred time ought to give you the control you're looking for.
More info can be found here: Dynos and the Dyno Manager
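If you want that restart to happen automatically, one option is a nightly rake task run by Heroku Scheduler; this is only a sketch and assumes the platform-api gem plus a HEROKU_OAUTH_TOKEN and HEROKU_APP_NAME you would have to provision yourself:
# lib/tasks/heroku_restart.rake -- hypothetical file
require 'platform-api'

namespace :heroku do
  desc 'Restart all dynos so the 24-hour cycle begins during a quiet window'
  task :restart do
    heroku = PlatformAPI.connect_oauth(ENV.fetch('HEROKU_OAUTH_TOKEN'))
    heroku.dyno.restart_all(ENV.fetch('HEROKU_APP_NAME'))
  end
end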
Here's the proper answer: you listen for SIGTERM (I'm using DJ here) and then rescue gracefully. It's important that the jobs are idempotent.
Long running delayed_job jobs stay locked after a restart on Heroku
class WithdrawPaymentsJob
  def perform
    begin
      term_now = false
      # Remember the old TERM handler and just set a flag; the loop decides when
      # it is safe to stop.
      old_term_handler = trap('TERM') { term_now = true; old_term_handler.call }
      loop do
        puts 'doing long running job'
        sleep 1
        if term_now
          # Raising fails the job cleanly, so delayed_job unlocks it for a retry.
          raise 'Gracefully terminating job early...'
        end
      end
    ensure
      # Restore the original TERM handler.
      trap('TERM', old_term_handler)
    end
  end
end
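Building on the sms_sent_at idea from the question, here is a sketch of how the body of such a loop can stay idempotent; Recipient, SmsGateway, and the column are placeholders rather than anything from the original code:
class SendSmsBatchJob
  def perform
    term_now = false
    old_term_handler = trap('TERM') { term_now = true; old_term_handler.call }
    Recipient.where(sms_sent_at: nil).in_batches(of: 200) do |batch|
      batch.each { |r| SmsGateway.deliver(r.phone, r.body) } # placeholder gateway call
      # One UPDATE per batch rather than per row; after a SIGTERM the retried job
      # simply resumes with whichever recipients are still unmarked.
      batch.update_all(sms_sent_at: Time.current)
      raise 'Gracefully terminating job early...' if term_now
    end
  ensure
    trap('TERM', old_term_handler)
  end
end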
Here's how you solve it with Que:
if Que.worker_count.zero?
  raise 'Gracefully terminating job early...'
end
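In context, that check sits inside the job's work loop, playing the same role as the term_now flag in the DJ version above; a sketch, with the model and method names made up:
class WithdrawPaymentsJob < Que::Job
  def run
    Payment.where(withdrawn_at: nil).find_each do |payment| # placeholder model and column
      payment.withdraw!
      # Per the answer above, Que's worker count drops to zero while it is shutting
      # down, so bail out here and let the job be retried after the restart.
      raise 'Gracefully terminating job early...' if Que.worker_count.zero?
    end
  end
end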
I have a fairly simple Rails 5 app for monitoring site speeds. It sends jobs off to an external page speed testing service and periodically checks whether the jobs are complete; if so, it retrieves and stores the data about the page.
Each project can have many pages, and each page can have many jobs, which is where the response from the testing service is stored.
There is an Active Job, with a Sidekiq backend, that is supposed to run every minute, check for pages that are due to run, and launch a job to enqueue any that are found. It also checks whether there are any enqueued jobs and, if there are, spools up a job to check their status and save the data if they're complete.
def perform
  # Todo:- Add some way of setting the status into the view and check for active here - no way to stop jobs at the mo!
  pagestorun = Page.where("runtime < ?", DateTime.now)
  pagestorun.each do |page|
    job = Job.new(:status => "new")
    page.jobs << job
    updatedatetime(page)
  end

  if pagestorun.count != 0
    # Run the queuejob task to pick out all unqueued jobs and send them to webpagespeedtest
    QueuejobsJob.perform_later
  end

  # Check any jobs in the queue, then schedule the next task
  GetrunningtasksJob.perform_later
  FindjobstorunJob.set(wait: 1.minute).perform_later
end
This seems to work as expected for a while, but after 5 minutes or so two jobs seem to end up spawning at the same time. Eventually each of those spawns more of its own, and after a few days I end up with tens of thousands trying to run per hour. There are no errors or failing jobs as best I can tell, and I can't find any reason why it'd be happening. Any help would be much appreciated :)
There's a chance the jobs are being retried due to failures which, when overlapping with the regular 60-second schedule, could cause the double scheduling you're experiencing. For more info, see Error Handling in the Sidekiq wiki.
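If retries do turn out to be the culprit, one option (a sketch, assuming Rails 5.1+ where Active Job has discard_on) is to let the scheduling job fail without being retried, so a retried copy can never overlap the next scheduled run; the trade-off is that an unhandled failure stops the chain until you kick it off again:
class FindjobstorunJob < ApplicationJob
  # Discard instead of retrying: a retried copy running alongside the next
  # scheduled run is exactly what doubles the schedule.
  discard_on StandardError

  def perform
    # ... existing scheduling logic from the question ...
  end
end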
BTW, I'm not entirely sure an Active Job is the best way to run periodic tasks (unless you're using Sidekiq Enterprise's periodic jobs). Instead, I'd use a cron job running a rake task every 60 seconds; the rake task would schedule the jobs to check specific pages:
QueuejobsJob.perform_later page.id
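A minimal sketch of that approach; the task name and file path are made up, and it reuses the Page scope from the question:
# lib/tasks/page_checks.rake -- hypothetical file
namespace :page_checks do
  desc 'Enqueue a check for every page that is due to run'
  task enqueue_due: :environment do
    Page.where("runtime < ?", DateTime.now).find_each do |page|
      QueuejobsJob.perform_later page.id
    end
  end
end
The crontab entry would then be something like * * * * * cd /path/to/app && RAILS_ENV=production bundle exec rake page_checks:enqueue_due.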
We are using DelayedJob to run tasks in the background because they could take a while, and also because if an error is thrown, we still want the web request to succeed.
The issue is that sometimes the job can be really big (changing hundreds or thousands of database rows) and sometimes it can be really small (like 5 db rows). For the small ones, we'd still like them to run as delayed jobs so that error handling works the same way, but we'd love not to have to wait roughly 5 seconds for DJ to pick up the job.
Is there a way to queue the job so it runs in the background, but then run it immediately so we don't have to wait for the worker to pick it up 5 seconds later?
Edit: Yes, this is Ruby on Rails :-)
Delayed Job polls the database for new delayed_job records at a set interval (5 seconds by default). You can reconfigure this interval in an initializer:
# config/delayed_job.rb
Delayed::Worker.sleep_delay = 2 # or 1 if you're feeling wild.
This will affect DJ globally.
How about:
SomeJob.set(
  wait: 0,
  queue: "queue_name",
).perform_later
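One caveat, assuming the delayed_job Active Job adapter: wait: 0 only marks the job as runnable immediately; a worker still won't see it until its next poll, so it pairs naturally with the sleep_delay setting from the previous answer:
# Assumed combination of the two answers: a short poll interval plus an immediate enqueue.
Delayed::Worker.sleep_delay = 1
SomeJob.set(wait: 0, queue: "queue_name").perform_later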
Whenever I deploy with Capistrano or run cap production delayed_job:restart, I end up with the currently running delayed_job row remaining locked.
The delayed_job process is successfully stopped, a new delayed_job process is started, and a new row is locked by the new process. The problem is that the last process's row is still sitting there, marked as locked. So I have to go into the database manually, delete the row, and then manually add that job back into the queue for the new delayed_job process to get to.
Is there a way for the database cleanup & re-queue of the previous job to happen automatically?
I have the same problem. This happens whenever a job is forcibly killed. Part of the problem is that worker processes are managed by the daemons gem rather than delayed_job itself. I'm currently investigating ways to fix this, such as:
Setting a longer timeout before daemons forcibly terminates (nothing about this in the docs for delayed_job or daemons)
Clearing locks before starting delayed_job workers
I'll post back here when and if I come up with a solution.
Adjust your Daemon wait time or raise an exception on SIGINT.
@John Carney is correct. In short, all delayed_job workers get sent something like a SIGINT (nice interrupt) on a redeploy. By default, delayed_job workers will complete their current job (if they are working on one) and then gracefully terminate.
However, if the job they are working on is a longer-running job, there's an amount of time the daemon manager waits before it gets annoyed and sends a more serious signal, like a SIGTERM or SIGKILL. This wait time, and what gets sent, really depends on your setup and configuration.
When that happens, the delayed_job worker gets killed immediately, without being able to finish the job it is working on or even clean up after itself and mark the job as no longer locked.
This ends up in a "stranded" job that is marked as "locked" but locked to a process/worker that no longer exists. Not good.
That's the crux of the issue and what is happening. To get around it, you have two main options, depending on what your jobs look like (we use both):
1. Raise an exception when an interrupt is received.
You can do this by setting the raise_signal_exceptions configuration to either :term or true:
Delayed::Worker.raise_signal_exceptions = :term
This configuration option accepts :term, true, or false (the default). You can read more in the original commit here.
I would try :term first and see if that solves your issue. If not, you may need to set it to true.
Setting it to :term or true will gracefully raise an exception and unlock the job for another delayed_job worker to pick up and start working on.
Setting it to true means that your delayed_job workers won't even attempt to finish the current job they are working on. They will just immediately raise an exception, unlock the job, and terminate themselves.
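Where that line lives is up to you; a common pattern (the file name here is assumed) is a dedicated initializer:
# config/initializers/delayed_job_config.rb -- assumed file name
Delayed::Worker.raise_signal_exceptions = :term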
2. Adjust how your workers are interrupted/terminated/killed on a redeploy.
This really depends on your redeploy setup. In our case, we are using Cloud66 to handle deploys, so we just had to configure this with them. This is what ours looks like:
stop_sequence: int, 172800, term, 90, kill # Allows long-running delayed jobs to finish before being killed (i.e. on redeploy). Sends SIGINT, waits 48 hours, sends SIGTERM, waits 90 seconds, sends SIGKILL.
On a redeploy, this tells the daemon manager to follow these steps with each delayed_job worker:
Send a SIGINT.
Wait 172800 seconds (2 days) - we have very long-running jobs.
Send a SIGTERM, if the worker is still alive.
Wait 90 seconds.
Send a SIGKILL, if the worker is still alive.
Anyway, that should help get you on the right track to configuring this properly for yourself.
We use both methods: a lengthy timeout plus raising an exception when a SIGTERM is received. This ensures that if a job runs past the 2-day limit, it will at least raise an exception and unlock the job, allowing us to investigate, instead of just leaving a stranded job locked to a process that no longer exists.
In our application we are using a rake task to send emails to around 11,000 users. Each email send is executed as a delayed job, as follows:
Users.each do |a|
  a.delay.send_email(body, text)
end
It was working perfectly two weeks ago and has suddenly slowed down: it used to get through all of those emails in a single day, but now it takes much longer.
We have tried to track down this performance issue but haven't found anything so far:
1. We investigated the code and tried a single delayed job, commenting out the part that reads from the DB, etc., but it takes the same amount of time.
2. We tried commenting out the email-sending part, but the time taken to execute the delayed job was the same.
Later on we noticed the Heroku worker process dynos. We currently have 1 worker and 2 web dynos. Is that the reason it is getting delayed? If so, how was it working previously? Would adding more workers increase performance?