I am using sidekiq in my rails app. Users of my app create reports that start a sidekiq job. However, sometimes users want to be able to cancel "processing" reports. Deleting the report is easy but I also need to be able to delete the sidekiq job as well.
So far I have been able to get a list of workers like so:
workers = Sidekiq::Workers.new
and each worker has args that include a report_id, so I can identify which job belongs to which report. However, I'm not sure how to actually delete the job. Note that I want to delete the job whether it is currently busy or sitting in the retry set.
According to this Sidekiq documentation page, to delete a job with a given id you need to iterate over the queue and call .delete on the matching entry.
queue = Sidekiq::Queue.new("mailer")
queue.each do |job|
  job.klass # => 'MyWorker'
  job.args  # => [1, 2, 3]
  job.delete if job.jid == 'abcdef1234567890'
end
There is also a plugin called sidekiq-status that provides the ability to cancel a single job:
scheduled_job_id = MyJob.perform_in 3600
Sidekiq::Status.cancel scheduled_job_id #=> true
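For the status tracking and cancellation to work, the worker also needs the gem's module mixed in (and its client/server middleware configured as described in the sidekiq-status README). A rough sketch:
class MyJob
  include Sidekiq::Worker
  include Sidekiq::Status::Worker # enables status tracking / cancellation for this worker

  def perform(*args)
    # ...
  end
end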
The simplest way I found to do this is:
job = Sidekiq::ScheduledSet.new.find_job(job_id)
where job_id is the JID that pertains to the report, followed by:
job.delete
I found no need to iterate through the entire queue as described by other answers here.
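The same find_job lookup also works for jobs sitting in the retry set or still waiting in a queue (a sketch; it assumes you stored the JID on the report when enqueuing, so report.job_id here is hypothetical):
job   = Sidekiq::ScheduledSet.new.find_job(report.job_id)
job ||= Sidekiq::RetrySet.new.find_job(report.job_id)
job ||= Sidekiq::Queue.new.find_job(report.job_id) # default queue; pass a name for other queues
job.delete if job
A job that is actively being processed can't be removed this way; the worker itself would have to check a cancellation flag and bail out.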
I had the same problem, but the difference is that I needed to cancel a scheduled job, and my solution is:
Sidekiq::ScheduledSet.new.each do |_job|
  next unless [online_jid, offline_jid].include? _job.jid
  _job.delete
end
If you want to cancel a scheduled job, I'm not sure about @KimiGao's answer, but this is what I adapted from Sidekiq's current API documentation:
jid = MyCustomWorker.perform_in(1.hour) # scheduled jobs are created with perform_in / perform_at
r = Sidekiq::ScheduledSet.new
jobs = r.select { |job| job.jid == jid }
jobs.each(&:delete)
Hope it helps.
You can delete a Sidekiq job by filtering on worker class and args:
class UserReportsWorker
  include Sidekiq::Worker

  def perform(report_id)
    # ...
  end
end

jobs = Sidekiq::ScheduledSet.new.select do |job|
  job.klass == "UserReportsWorker" && job.args == [42]
end
jobs.each(&:delete)
I had the same problem.
I solved it by storing the Sidekiq JID when the job is enqueued and by adding a cancel! class method that deletes the job.
Here is the code:
# Inside the ActiveJob class (ActiveJob wraps the job, so the Sidekiq JID has to be looked up)
after_enqueue do |job|
  sidekiq_job = nil

  # Look for the job in the queue first...
  queue = Sidekiq::Queue.new
  sidekiq_job = queue.detect do |j|
    j.item['args'][0]['job_id'] == job.job_id
  end

  # ...then in the scheduled set
  if sidekiq_job.nil?
    scheduled = Sidekiq::ScheduledSet.new
    sidekiq_job = scheduled.detect do |j|
      j.item['args'][0]['job_id'] == job.job_id
    end
  end

  # Store the JID on the record so the job can be found again later
  if sidekiq_job.present?
    booking = job.arguments.first
    booking.close_comments_jid = sidekiq_job.jid
    booking.save
  end
end

def perform(booking)
  # do something
end

def self.cancel!(booking)
  queue = Sidekiq::Queue.new
  sidekiq_job = queue.find_job(booking.close_comments_jid)
  if sidekiq_job.nil?
    scheduled = Sidekiq::ScheduledSet.new
    sidekiq_job = scheduled.find_job(booking.close_comments_jid)
  end

  if sidekiq_job.nil?
    # Report bug in my Bug Tracking tool
  else
    sidekiq_job.delete
  end
end
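Usage would then look something like this (a sketch; the CloseCommentsJob name is made up to match the close_comments_jid attribute above):
CloseCommentsJob.perform_later(booking) # after_enqueue stores the Sidekiq JID on the booking
CloseCommentsJob.cancel!(booking)       # later, finds that job by its JID and deletes it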
There is a simple way of deleting a job if you know the job_id:
job = Sidekiq::ScheduledSet.new.find_job(job_id)
begin
  job.delete
rescue
  Rails.logger.error "Job: (job_id: #{job_id}) not found while deleting jobs."
end
Or you can use the Sidekiq web UI in your Rails server.
For example, at http://localhost:3000/sidekiq you can stop/remove Sidekiq jobs.
Before that, you have to update routes.rb:
require 'sidekiq/web'
mount Sidekiq::Web => '/sidekiq'
Related
I'm working on a Ruby on Rails project. I'm wondering if I can get the name of the worker that triggered a particular job using the job id, so I can display the appropriate message after it is done. Is this possible? Or should I just save the worker name in the model?
# I have a method that creates the job
def generate
  my_model.job_id = HardWorker.perform_async
  my_model.save!
end

def check_status
  if my_model.job_id && Sidekiq::Status::complete?(my_model.job_id)
    # if HardWorker
    #   "HardWorker is done!"
    # elsif AnotherWorker
    #   "AnotherWorker is Done!"
    # end
  end
end
You can use the following code to get the worker class using the job id:
queue = Sidekiq::Queue.new
job = queue.detect { |job| job.jid == job_id } # job_id is the JID you got back from perform_async
Your code would be something like this:
def check_status
  queue = Sidekiq::Queue.new
  job = queue.detect { |j| j.jid == my_model.job_id }
  if my_model.job_id && Sidekiq::Status::complete?(my_model.job_id)
    puts "#{job.klass} is done" # job.klass is the worker that triggered the job
  end
end
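One caveat: detect only finds jobs that are still sitting in the queue, so once a job has completed it returns nil. If you need the worker name after completion, you may be better off storing it on the model at enqueue time, roughly like this (the worker_name column is hypothetical):
my_model.job_id = HardWorker.perform_async
my_model.worker_name = 'HardWorker' # saved alongside the JID so it survives job completion
my_model.save!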
More on queues here
Is there a way to get a list of all the jobs currently in the queue and running? Basically, I want to know if a job of a given class is already there, because in that case I don't want to insert another job. I've seen other options, but I want to do it this way.
I can see here how to get the list of jobs in the queue.
queue = Sidekiq::Queue.new("mailer")
queue.each do |job|
  job.klass # => 'MyWorker'
end
From what I understand, this will not include processing/running jobs. Is there any way to get them?
If you want to list all currently running jobs from the console, try this:
workers = Sidekiq::Workers.new
workers.each do |_process_id, _thread_id, work|
  p work
end
Each work is a hash.
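Depending on your Sidekiq version, that hash looks roughly like { 'queue' => ..., 'run_at' => ..., 'payload' => { 'class' => ..., 'args' => ..., 'jid' => ... } } (newer versions wrap it in a small object, and some older versions keep the payload as a JSON string), so a defensive sketch for pulling out the class and args would be:
workers.each do |_process_id, _thread_id, work|
  payload = work['payload']
  payload = JSON.parse(payload) if payload.is_a?(String) # some versions store the payload as a JSON string
  p payload['class'], payload['args'], payload['jid']
end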
To list data for all queues:
queues = Sidekiq::Queue.all
queues.each do |queue|
  queue.each do |job|
    p job.klass, job.args, job.jid
  end
end
For a specific queue, change this to Sidekiq::Queue.new('queue_name').
Similarly, you can get all scheduled jobs using Sidekiq::ScheduledSet.new.
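For example, a scheduled job entry also exposes the time it is due to run; the retry set works the same way via Sidekiq::RetrySet.new:
Sidekiq::ScheduledSet.new.each do |job|
  p job.klass, job.args, job.at # job.at is when the job is scheduled to run
end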
Running jobs:
Sidekiq::Workers.new.each do |_process_id, _thread_id, work|
  p work
end
Queued jobs across all queues:
Sidekiq::Queue.all.each do |queue|
  # p queue.name, queue.size
  queue.each do |job|
    p job.klass, job.args
  end
end
Assuming you passed a Hash as the argument to Sidekiq when you enqueued:
args = {
  "student_id": 1,
  "student_name": "Michael Moore"
}
YourWorker.perform_in(1.second, args)
Then, anywhere in your application, you can retrieve it as follows:
ss = Sidekiq::ScheduledSet.new
student_id_list = ss.map { |job| job['args'].first["student_id"] }
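If other workers can be scheduled too, it may be safer to scope the lookup to the worker class first (a sketch; note that Sidekiq round-trips arguments through JSON, so the hash comes back with string keys):
ss = Sidekiq::ScheduledSet.new
student_id_list = ss.select { |job| job.klass == 'YourWorker' }.map { |job| job.args.first['student_id'] }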
I am using this in ApplicationJob to check whether there is already a job in the queue with the same class/arguments, and to prevent it from being enqueued twice.
app/jobs/application_job.rb
class ApplicationJob < ActiveJob::Base
  # Check if the same job is already queued before enqueuing
  around_enqueue do |job, block|
    existing_queued_jobs = list_queued_jobs(job.class, job.queue_name, job.arguments)
    if existing_queued_jobs.size == 0
      block.call # this will enqueue your job
    else
      puts "JOB not enqueued because already queued (#{job.class}, #{job.queue_name}, #{job.arguments})"
    end
  end

  def list_queued_jobs(job_class, queue_name, arguments)
    found_jobs = []
    queues = Sidekiq::Queue.all
    queues.each do |queue|
      queue.each do |job|
        job.args.each do |arg|
          found_jobs << job if arg['job_class'].to_s == job_class.to_s && arg['queue_name'] == queue_name && arg['arguments'] == arguments
        end
      end
    end
    found_jobs
  end
end
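With that in place, any job inheriting from ApplicationJob gets the check automatically. A usage sketch (ReportJob is a made-up example):
class ReportJob < ApplicationJob
  queue_as :default

  def perform(report_id)
    # ...
  end
end

ReportJob.perform_later(42)
ReportJob.perform_later(42) # skipped with the "JOB not enqueued..." message while the first job is still in the queue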
I schedule reminder emails when a user creates a task for a certain date using the following code in the create action:
if @post.save
  EmailWorker.perform_in(@time.minutes, @post.id)
end
I want to delete the scheduled reminder mail whenever the associated task is deleted. I tried using a model method on before_destroy:
before_destroy :destroy_sidekiq_job

def destroy_sidekiq_job
  post_id = self.id
  queue = Sidekiq::Queue.new('critical')
  queue.each do |job|
    if job.klass == 'EmailWorker' && job.args.first == post_id
      job.delete
    end
  end
end
However, the jobs aren't deleted from the queue. Any suggestions for me to fix this?
Don't do this. Let Sidekiq execute the job but verify the post exists and is current when the email job is run.
def perform(post_id)
  post = Post.find_by_id(post_id)
  return unless post
  ...
end
The scheduled jobs are not in a queue yet; use Sidekiq::ScheduledSet to find them:
def destroy_sidekiq_jobs
  scheduled = Sidekiq::ScheduledSet.new
  scheduled.each do |job|
    if job.klass == 'EmailWorker' && job.args.first == id
      job.delete
    end
  end
end
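Alternatively, since perform_in returns the job's JID, you could store it on the post when scheduling and delete that one job directly instead of scanning the whole set (a sketch; the reminder_jid column is hypothetical):
jid = EmailWorker.perform_in(@time.minutes, @post.id)
@post.update(reminder_jid: jid)

# later, e.g. in before_destroy:
job = Sidekiq::ScheduledSet.new.find_job(reminder_jid)
job.delete if job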
I have some update triggers which push jobs onto the Sidekiq queue. So in some cases, there can be multiple jobs to process the same object.
There are a couple of uniqueness plugins ("Middleware", Unique Jobs). They're not documented much, but they seem to be more like throttlers to prevent repeat processing; what I want is something that prevents repeated creation of the same job. That way, an object will always be processed in its freshest state. Is there a plugin or technique for this?
Update: I didn't have time to make a middleware, but I ended up with a related cleanup function to ensure queues are unique: https://gist.github.com/mahemoff/bf419c568c525f0af903
What about a simple client middleware?
module Sidekiq
  class UniqueMiddleware
    def call(worker_class, msg, queue_name, redis_pool)
      if msg["unique"]
        queue = Sidekiq::Queue.new(queue_name)
        queue.each do |job|
          if job.klass == msg['class'] && job.args == msg['args']
            return false
          end
        end
      end
      yield
    end
  end
end
Just register it
Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add Sidekiq::UniqueMiddleware
  end
end
Then, in your job, just set unique: true in sidekiq_options when needed, for example:
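A sketch with a hypothetical worker (sidekiq_options copies these options into the job payload, which is what the middleware receives as msg):
class MyUniqueWorker
  include Sidekiq::Worker
  sidekiq_options unique: true # read by UniqueMiddleware via msg["unique"]

  def perform(*args)
    # ...
  end
end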
My suggestion is to search for prior scheduled jobs based on some selection criteria and delete them before scheduling a new one. This has been useful for me when I want a single scheduled job for a particular object and/or one of its methods.
Some example methods in this context:
##
# find job(s) scheduled via the delayed extensions for a particular class and method
#
def self.find_jobs_for_object_by_method(klass, method)
  jobs = Sidekiq::ScheduledSet.new
  jobs.select { |job|
    job.klass == 'Sidekiq::Extensions::DelayedClass' &&
      ((job_klass, job_method, args) = YAML.load(job.args[0])) &&
      job_klass == klass &&
      job_method == method
  }
end
##
# delete job(s) specific to a particular class, method, and particular record
# will only remove djs on an object for that method
#
def self.delete_jobs_for_object_by_method(klass, method, id)
  jobs = Sidekiq::ScheduledSet.new
  jobs.select do |job|
    job.klass == 'Sidekiq::Extensions::DelayedClass' &&
      ((job_klass, job_method, args) = YAML.load(job.args[0])) &&
      job_klass == klass &&
      job_method == method &&
      args[0] == id
  end.map(&:delete)
end
##
# delete job(s) specific to a particular class and particular record
# will remove any djs on that Object
#
def self.delete_jobs_for_object(klass, id)
  jobs = Sidekiq::ScheduledSet.new
  jobs.select do |job|
    job.klass == 'Sidekiq::Extensions::DelayedClass' &&
      ((job_klass, job_method, args) = YAML.load(job.args[0])) &&
      job_klass == klass &&
      args[0] == id
  end.map(&:delete)
end
Take a look at this: https://github.com/mhenrixon/sidekiq-unique-jobs
It's Sidekiq with unique jobs added.
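The exact option names depend on the gem version (older releases configure uniqueness via unique:, newer ones via lock:), so check its README, but the shape is roughly:
class MyWorker
  include Sidekiq::Worker
  # older sidekiq-unique-jobs releases: sidekiq_options unique: :until_executed
  sidekiq_options lock: :until_executed # newer releases

  def perform(*args)
    # ...
  end
end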
Maybe you could use Queue Classic, which enqueues jobs in a Postgres database in a very open way, so it could be extended (it's open source) to check for uniqueness before enqueuing.
Delayed::Job's auto-retry feature is great, but there's a job that I want to manually retry now. Is there a method I can call on the job itself like...
Delayed::Job.all[0].perform
or run, or something. I tried a few things, and combed the documentation, but couldn't figure out how to execute a manual retry of a job.
To manually call a job
Delayed::Job.find(10).invoke_job # 10 is the job.id
This does not remove the job if it is run successfully. You need to remove it manually:
Delayed::Job.find(10).destroy
Delayed::Worker.new.run(Delayed::Job.last)
This will remove the job after it is done.
You can do it exactly the way you said, by finding the job and running perform.
However, what I generally do is just set the run_at back so the job processor picks it up again.
I have a method in a controller for testing purposes that just resets all delayed jobs when I hit a URL. Not super elegant but works great for me:
# For testing purposes
def reset_all_jobs
  Delayed::Job.all.each do |dj|
    dj.run_at = Time.now - 1.day
    dj.locked_at = nil
    dj.locked_by = nil
    dj.attempts = 0
    dj.last_error = nil
    dj.save
  end
  head :ok
end
Prior answers above might be out of date. I found I needed to set failed_at, locked_by, and locked_at to nil:
(for each job you want to retry):
d.last_error = nil
d.run_at = Time.now
d.locked_at = nil
d.locked_by = nil
d.attempts = 0
d.failed_at = nil # needed in Rails 5 / delayed_job (4.1.2)
d.save!
If you have failed delayed jobs that you need to re-run, select only those and reset everything related to the failed attempts:
Delayed::Job.where("last_error is not null").each do |dj|
  dj.run_at = Time.now.advance(seconds: 5)
  dj.locked_at = nil
  dj.locked_by = nil
  dj.attempts = 0
  dj.last_error = nil
  dj.failed_at = nil
  dj.save
end
In a development environment, through rails console, following Joe Martinez's suggestion, a good way to retry all your delayed jobs is:
Delayed::Job.all.each{|d| d.run_at = Time.now; d.save!}
Delayed::Job.all.each(&:invoke_job)
Put this in an initializer file!
module Delayed
  module Backend
    module ActiveRecord
      class Job
        def retry!
          self.run_at = Time.now - 1.day
          self.locked_at = nil
          self.locked_by = nil
          self.attempts = 0
          self.last_error = nil
          self.failed_at = nil
          self.save!
        end
      end
    end
  end
end
Then you can run Delayed::Job.find(1234).retry!
This will basically stick the job back into the queue and process it normally.