Rails 6: how to check if a Sidekiq job is running

In my Rails 6 API-only app I've got a FetchAllProductsWorker background job which takes around 1h30m to run.
module Imports
  class FetchAllProductsWorker
    include Sidekiq::Worker

    sidekiq_options queue: 'imports_fetch_all'

    def perform
      # do some things
    end
  end
end
During this time the frontend app sends requests to an endpoint on the backend which checks whether the job is still running, and I need to return true/false for that process. According to the docs there is a scan method - https://github.com/mperham/sidekiq/wiki/API#scan - but none of these calls works for me, even when the worker is up and running:
# endpoint method to check sidekiq status
def status
  ss = Sidekiq::ScheduledSet.new
  render json: ss.scan('FetchAllProductsWorker') { |job| job.present? }
end
The console shows me:
> ss.scan("\"class\":\"FetchAllProductsWorker\"") {|job| job }
=> nil
> ss.scan("FetchAllProductsWorker") { |job| job }
=> nil
How can I check whether a particular Sidekiq job has finished or not?

Maybe this will be useful for someone. Sidekiq provides programmatic access to the currently active workers using Sidekiq::Workers: https://github.com/mperham/sidekiq/wiki/API#workers
So based on that we could do something like:
active_workers = Sidekiq::Workers.new.map do |_process_id, _thread_id, work|
  work
end

active_workers.select do |worker|
  worker['queue'] == 'imports_fetch_all'
end.present?
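Note that Sidekiq::ScheduledSet only contains jobs scheduled to run in the future, which is why scanning it while the job is already executing returns nothing. For completeness, here is a minimal sketch (not part of the original answer; names are illustrative) of how the status endpoint from the question could use the Sidekiq::Workers check:

# sketch of the status endpoint using Sidekiq::Workers instead of ScheduledSet
def status
  running = Sidekiq::Workers.new.any? do |_process_id, _thread_id, work|
    work['queue'] == 'imports_fetch_all'
  end

  render json: { running: running }
end

Keep in mind that Sidekiq::Workers reflects what the worker processes report to Redis, so the data can be slightly stale; treat it as an approximation rather than a real-time view.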

Related

How to create a background job for get request with Sidekiq and httparty?

I need help developing a worker with sidekiq for this situation:
I have a helper that looks like this:
module UploadsHelper
def save_image
response = HTTParty.get(ENV['IMAGE_URI'])
image_data = JSON.parse(response.body)
images = image_data["rows"].map do |line|
u = Image.new
u.description = line[5]
u.image_url = line[6]
u.save
u
end
images.select(&:persisted?)
end
end
In my app/views/uploads/index.html.erb I just do this
<% save_image %>
Now, when a user visits the uploads/index page the images are saved to the database.
The problem is that the get request to the API is really slow. I want to prevent request timeouts by moving this to a background job with sidekiq.
This is my workers/api_worker.rb
class ApiWorker
  include Sidekiq::Worker

  def perform
  end
end
I just don't know the best way to proceed from here.
Performing this task in a Sidekiq worker means the task will run asynchronously, so it will not be able to return the response immediately, which is what images.select(&:persisted?) currently does.
First of all, instead of calling save_image, you need to call the worker's perform_async method.
<% ApiWorker.perform_async %>
This will enqueue a job in Sidekiq's queue (your_queue in this example). Then, in the worker's perform method, call the save_image method of UploadsHelper.
class ApiWorker
  include Sidekiq::Worker

  sidekiq_options queue: 'your_queue'

  include UploadsHelper

  def perform
    save_image
  end
end
You may want to save the return value of save_image somewhere. To get Sidekiq to start processing jobs, run bundle exec sidekiq from your app directory.
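Since Sidekiq discards the value returned from perform, one option is to log or persist the result inside the worker. A minimal sketch (the log message and variable names are illustrative, not part of the original answer):

class ApiWorker
  include Sidekiq::Worker
  include UploadsHelper

  sidekiq_options queue: 'your_queue'

  def perform
    saved_images = save_image
    # Sidekiq ignores perform's return value, so record the outcome here,
    # for example in the log or in a database table
    Rails.logger.info("ApiWorker persisted #{saved_images.size} images")
  end
end

Also note that with a custom queue name Sidekiq has to be told to work that queue, for example bundle exec sidekiq -q your_queue; otherwise it only processes the default queue.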

How to notify a user after background task is finished?

I use Rails with ActiveJob and Sidekiq as the backend. When a user comes to a page, Sidekiq creates a long-running background task. How can I notify the user (by rendering a partial on the web page) when the task has completed?
Rails and Sidekiq run as different processes. This fact confuses me: I don't understand how to handle the completed status of a background job.
ActiveJob provides an after_perform callback which, according to the docs, works like this:
class VideoProcessJob < ActiveJob::Base
  queue_as :default

  after_perform do |job|
    UserMailer.notify_video_processed(job.arguments.first)
  end

  def perform(video_id)
    Video.find(video_id).process
  end
end
So you don't have to worry about integrating directly with Sidekiq or any other queueing backend; just talk to ActiveJob :)
My approach in this situation is:
Add sidekiq-status so that background jobs can be tracked by ID.
In the client call that creates the background job, return the newly-created job's ID.
class MyController < ApplicationController
  def create
    # sidekiq-status lets us retrieve a unique job ID when creating a job
    job_id = Workers::MyJob.perform_async(...)

    # tell the client where to find the progress of this job
    render :json => {
      :next => "/my/progress?job_id=#{job_id}"
    }
  end
end
Poll a 'progress' endpoint on the server with that job ID. This endpoint fetches job progress information for the job and returns it to the client.
class MyController < ApplicationController
  def progress
    # fetch job status from sidekiq-status
    status = Sidekiq::Status::get_all(params[:job_id])

    # in practice, status can be nil if the info has expired from
    # Redis; I'm ignoring that for the purpose of this example
    if status["complete"]
      # job is complete; notify the client in some way,
      # perhaps by sending it a rendered partial
      payload = {
        :html => render_to_string({
          :partial => "my/job_finished",
          :layout => nil
        })
      }
    else
      # tell client to check back again later
      payload = {:next => "/my/progress?job_id=#{params[:job_id]}"}
    end

    render :json => payload
  end
end
If the client sees that the job has completed, it can then display a message or take whatever next step is required.
var getProgress = function(progress_url, poll_interval) {
  $.get(progress_url).done(function(progress) {
    if (progress.html) {
      // job is complete; show HTML returned by server
      $('#my-container').html(progress.html);
    } else {
      // job is not yet complete, try again later at the URL
      // provided by the server
      setTimeout(function() {
        getProgress(progress.next, poll_interval);
      }, poll_interval);
    }
  });
};

$("#my-button").on('click', function(e) {
  $.post("/my").done(function(data) {
    getProgress(data.next, 5000);
  });
  e.preventDefault();
});
Caveat emptor: that code is meant to be illustrative, and is missing things you should take care of such as error handling, preventing duplicate submissions, and so forth.

Why doesn't my delayed job work more than once when triggered from a Rails server?

Given the delayed job worker:
class UserCommentsListWorker
  attr_accessor :opts

  def initialize opts = {}
    @opts = opts
  end

  def perform
    UserCommentsList.new(@opts)
  end

  def before job
    p 'before hook', job
  end

  def after job
    p 'after hook', job
  end

  def success job
    p 'success hook', job
  end

  def error job, exception
    p '4', exception
  end

  def failure job
    p '5', job
  end

  def enqueue job
    p '-1', job
  end
end
When I run Delayed::Job.enqueue UserCommentsListWorker.new(client: client) from a Rails console, I can repeatedly get the sequence of print statements and the proper delayed job lifecycle event hooks, including the feedback from the worker that the job was a success.
But including the same call to run the worker via a standard Rails controller endpoint, like this:
include OctoHelper
include QueryHelper
include ObjHelper
include StructuralHelper

class CommentsController < ApplicationController
  before_filter :authenticate_user!

  def index
    if params['updateCache'] == 'true'
      client = build_octoclient current_user.octo_token
      Delayed::Job.enqueue UserCommentsListWorker.new(client: client)
    end
  end
end
I'm noticing that the worker will run and create the delayed job, but none of the hooks get called and the worker never logs the job as completed.
Notice in the screenshot: jobs 73, 75 and 76 were all triggered via a round trip to the endpoint referenced above, while job 74 was triggered via the Rails console. What is wrong with my setup, and/or what am I failing to notice in this process? I will stress that the first time the web server hits the controller endpoint, the job queues and runs properly, but all subsequent instances where the job should run appear to do nothing and give me no feedback.
I would also highlight that I never see the failure, error or enqueue hooks run.
Thanks :)
The long and the short of the answer to this problem: as you may notice, I was attempting to store a client object in the delayed job payload, which was causing the problems. So don't store complex objects in the job; just pass basic data such as ids (1), strings ("foo"), booleans (true), and so on. Capisce?
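A minimal sketch of that fix, assuming the OctoHelper#build_octoclient helper from the question can be included into the worker (names are illustrative): pass the plain token into the job and rebuild the client inside perform.

class UserCommentsListWorker
  # assumption: the helper that builds the client is available to the worker
  include OctoHelper

  attr_accessor :opts

  def initialize opts = {}
    # store only simple, serializable data (here: a token string)
    @opts = opts
  end

  def perform
    # rebuild the complex client object inside the job instead of serializing it
    client = build_octoclient @opts[:octo_token]
    UserCommentsList.new(client: client)
  end
end

The controller would then enqueue it with Delayed::Job.enqueue UserCommentsListWorker.new(octo_token: current_user.octo_token).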

Sidekiq run job only once

I have a question about how you can run a Sidekiq job only once, e.g. just at the start of the Rails web app. One thing I tried was to initialize a Redis semaphore in config/initializer.rb and then use it in the job, but it didn't quite work. Here's the code I'm trying to get working:
config/initializer.rb
SEMAPHORE = Redis::Semaphore.new(:semaphore_name, :host => "localhost")
queue_worker.rb
class QueueWorker
  include Sidekiq::Worker

  def perform
    logger.info 'Start polling'

    unless SEMAPHORE.locked?
      logger.info 'Im here only once'
      SEMAPHORE.lock
    end
  end
end
root_controller_action
QueueWorker.perform_async
Another variant: I don't know whether it's possible to run a Sidekiq job from the initializer. If you can do that, there's no need for a semaphore at all.
Thx for answering.

Sending recurring emails with Sidekiq and Sidetiq

I have a problem with sending recurring emails with Sidekiq and Sidetiq. I've tried almost everything and I didn't find a solution.
I have a Sidekiq worker which looks like this:
class InvoiceEmailSender
  include Sidekiq::Worker
  include Sidetiq::Schedulable

  recurrence { minutely(2) }

  def perform(invoice_id, action)
    @invoice = Invoice.find(invoice_id.to_i)
    if action == "invoice"
      send_invoice
    else
      send_reminder
    end
  end

  private

  def send_invoice
    if @invoice.delivery_date == Date.today
      InvoiceMailer.delay.send_invoice(@invoice)
    else
      InvoiceMailer.delay_for(@invoice.delivery_date.to_time).send_invoice(@invoice)
    end
  end

  def send_reminder
    InvoiceMailer.delay.send_invoice_reminder(@invoice) unless @invoice.paid?
  end
end
And in the controller I use it this way:
InvoiceEmailSender.perform_async(#invoice.id, "invoice")
And when I try to send these emails I get the following error in the Sidekiq console:
2014-08-26T05:36:01.107Z 4664 TID-otcc5idts WARN: {"retry"=>true, "queue"=>"default", "class"=>"InvoiceEmailSender", "args"=>[1409031120.0, 1409031240.0], "jid"=>"06dc732831c24e1a6f78d929", "enqueued_at"=>1409031120.7438812, "error_message"=>"Couldn't find Invoice with 'id'=1409031120", "error_class"=>"ActiveRecord::RecordNotFound", "failed_at"=>1409031249.1003482, "retry_count"=>2, "retried_at"=>1409031361.1066737}
2014-08-26T05:36:01.107Z 4664 TID-otcc5idts WARN: Couldn't find Invoice with 'id'=1409031120
2014-08-26T05:36:01.107Z 4664 TID-otcc5idts WARN: /home/mateusz/.rvm/gems/ruby-2.0.0-p0#rails4/gems/activerecord-4.1.2/lib/active_record/relation/finder_methods.rb:320:in `raise_record_not_found_exception!'
In the Sidekiq web monitor, the Scheduled tab shows the same thing. Please help, because I have no idea what is going on...
The data passed in looks like epoch timestamps; it turns out Sidetiq passes the last and current occurrence times as the two parameters to your worker, according to the documentation.
I'm not sure how you go about passing custom parameters to a scheduled worker. You'll probably need a different strategy: instead of trying to create more scheduled workers, have one (or two, since it looks like you made this class do two jobs) scheduled worker(s) that processes a list of work to do every so often.
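As a rough sketch of that strategy, assuming the Invoice model and InvoiceMailer from the question (the worker name, queries and recurrence below are illustrative, not a drop-in solution):

class InvoiceDispatchWorker
  include Sidekiq::Worker
  include Sidetiq::Schedulable

  # run once a day; Sidetiq still passes timestamps, which we ignore
  recurrence { daily }

  def perform(*)
    # send invoices that are due today
    Invoice.where(delivery_date: Date.today).find_each do |invoice|
      InvoiceMailer.delay.send_invoice(invoice)
    end

    # send reminders for invoices that are still unpaid
    Invoice.where(paid: false).find_each do |invoice|
      InvoiceMailer.delay.send_invoice_reminder(invoice)
    end
  end
end

This way there is a single recurring worker that decides what to send each time it runs, instead of one scheduled job per invoice.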
