Sidekiq Error Handling and stop worker - ruby-on-rails

To avoid running unnecessarily, I'd like my Sidekiq worker to check a certain condition at each stage. If that condition is not met, Sidekiq should stop and report the error.
Currently I have:
class BotWorker
  include Sidekiq::Worker

  def perform(id)
    user = User.find(id)
    if user.nil?
      # report the error? Thinking of using UserMailer
      return false # stop the worker
    end
    # other processing here
  end
end
This seems like a naive way to handle Sidekiq errors. The app needs to immediately notify admin if something breaks in the worker.
Am I missing something? What is a better way to handle errors in Sidekiq?

You can create your own error handler:
Sidekiq.configure_server do |config|
  config.error_handlers << Proc.new { |exception, context_hash| MyErrorService.notify(exception, context_hash) }
end
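For the "stop and report" part of the question, a raised exception (rather than `return false`) is what reaches the retry machinery and the `error_handlers` registered above. A minimal sketch follows; note that ActiveRecord's `find` raises `ActiveRecord::RecordNotFound` instead of returning nil, so the sketch uses `find_by`. The `UserNotFoundError` name and the stub `Sidekiq::Worker`/`User` definitions (included so the sketch runs outside a Rails app) are illustrative assumptions:

```ruby
# Minimal stand-ins so this sketch runs standalone (hypothetical):
module Sidekiq; module Worker; end; end

class User
  def self.find_by(id:)
    nil # pretend the user is missing
  end
end

class UserNotFoundError < StandardError; end

class BotWorker
  include Sidekiq::Worker

  def perform(id)
    user = User.find_by(id: id)
    # Raising (instead of returning false) lets Sidekiq retry the job
    # and invoke any configured error_handlers.
    raise UserNotFoundError, "no user with id=#{id}" if user.nil?
    # other processing here
  end
end
```

With the error handler from the answer registered, the raise both stops the job and triggers the notification.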


Rails 6 how to check if sidekiq job is running

In my Rails 6 API-only app I've got a FetchAllProductsWorker background job which takes around 1h30m.
module Imports
  class FetchAllProductsWorker
    include Sidekiq::Worker
    sidekiq_options queue: 'imports_fetch_all'

    def perform
      # do some things
    end
  end
end
During this time the frontend app sends requests to an endpoint on the backend which checks whether the job is still running; I need to return true/false for that process. According to the docs there is a scan method (https://github.com/mperham/sidekiq/wiki/API#scan), but none of my attempts works, even when the worker is up and running:
# endpoint method to check sidekiq status
def status
  ss = Sidekiq::ScheduledSet.new
  render json: ss.scan('FetchAllProductsWorker') { |job| job.present? }
end
The console shows me:
> ss.scan("\"class\":\"FetchAllProductsWorker\"") {|job| job }
=> nil
> ss.scan("FetchAllProductsWorker") { |job| job }
=> nil
How can I check whether a particular Sidekiq job is still running?
Maybe this will be useful for someone. Sidekiq provides programmatic access to the currently active workers via Sidekiq::Workers (https://github.com/mperham/sidekiq/wiki/API#workers), so based on that we could do something like:
active_workers = Sidekiq::Workers.new.map do |_process_id, _thread_id, work|
  work
end

active_workers.select do |worker|
  worker['queue'] == 'imports_fetch_all'
end.present?
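A sketch of wrapping that check in a helper the status endpoint can call. Sidekiq::Workers yields [process_id, thread_id, work] triples, and each 'work' hash carries the queue name; note the workers registry refreshes every few seconds, so the answer can lag slightly. The helper name and the controller line are hypothetical:

```ruby
# Returns true when any in-progress job belongs to the given queue.
# worker_triples: an array of [process_id, thread_id, work] as yielded
# by Sidekiq::Workers#each.
def queue_busy?(worker_triples, queue_name)
  worker_triples.any? { |_pid, _tid, work| work['queue'] == queue_name }
end

# In the controller (hypothetical):
#   def status
#     render json: { running: queue_busy?(Sidekiq::Workers.new.to_a, 'imports_fetch_all') }
#   end
```

Keeping the predicate separate from Sidekiq::Workers makes it trivial to unit-test with hand-built triples.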

How to create a background job for a GET request with Sidekiq and HTTParty?

I need help developing a worker with sidekiq for this situation:
I have a helper that looks like this:
module UploadsHelper
  def save_image
    response = HTTParty.get(ENV['IMAGE_URI'])
    image_data = JSON.parse(response.body)

    images = image_data["rows"].map do |line|
      u = Image.new
      u.description = line[5]
      u.image_url = line[6]
      u.save
      u
    end

    images.select(&:persisted?)
  end
end
In my app/views/uploads/index.html.erb I just do this
<% save_image %>
Now, when a user visits the uploads/index page the images are saved to the database.
The problem is that the get request to the API is really slow. I want to prevent request timeouts by moving this to a background job with sidekiq.
This is my workers/api_worker.rb
class ApiWorker
include Sidekiq::Worker
def perform
end
end
I just don't know the best way to proceed from here.
Performing this task with a Sidekiq worker means it runs asynchronously, so it cannot immediately return the response produced by images.select(&:persisted?).
First of all, instead of calling save_image, you need to call the worker's perform_async method.
<% ApiWorker.perform_async %>
This will enqueue a job on Sidekiq's queue (your_queue in this example). Then, in the worker's perform method, call UploadsHelper's save_image method:
class ApiWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'your_queue'
  include UploadsHelper

  def perform
    save_image
  end
end
You may want to save the response of save_image somewhere. To get Sidekiq to start processing jobs, run bundle exec sidekiq from your app directory.
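Pulling the row parsing out of save_image into a pure helper makes the worker logic testable without any HTTP call. Column indexes 5 and 6 are the ones the original helper reads; the helper name image_attributes is an invented illustration:

```ruby
require 'json'

# Parses the API response body into attribute hashes, mirroring the
# mapping done inside save_image (description at index 5, URL at index 6).
def image_attributes(response_body)
  JSON.parse(response_body)['rows'].map do |line|
    { description: line[5], image_url: line[6] }
  end
end
```

The worker could then do `image_attributes(response.body).each { |attrs| Image.create(attrs) }`, keeping the network and persistence steps separate.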

Sidekiq stacktrace in logs

I'm running a Sidekiq application on Heroku with the Papertrail add-on, and I use exceptions to fail jobs. The full stacktrace of every exception is stored in the Papertrail logs, which is definitely not what I want.
I couldn't find a way to turn that behavior off. Could you give me a hint what I could do about it?
Maybe I should handle job failures in a different way?
Thanks!
Here's a modification of the standard error logger that limits the backtrace logging to the lines unique to the application:
class ExceptionHandlerLogger
  def call(ex, ctx_hash)
    Sidekiq.logger.warn(Sidekiq.dump_json(ctx_hash)) unless ctx_hash.empty?
    Sidekiq.logger.warn "#{ex.class.name}: #{ex.message}"
    unless ex.backtrace.nil?
      Sidekiq.logger.warn filter_backtrace(ex.backtrace).join("\n")
    end
  end

  def filter_backtrace(backtrace)
    index = backtrace.index { |item| item.include?('/lib/sidekiq/processor.rb') }
    return backtrace if index.nil? # no Sidekiq frame found: keep the whole trace
    backtrace.first(index)
  end
end

# Array#reject! returns nil when nothing was removed, so this detects
# a renamed or missing default logger class.
if Sidekiq.error_handlers.reject! { |h| h.class == Sidekiq::ExceptionHandler::Logger }.nil?
  fail "default sidekiq logger class changed!"
end
Sidekiq.error_handlers << ExceptionHandlerLogger.new
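To see what the trimming actually keeps, here is a stand-alone run of the same idea against a synthetic backtrace (the frame paths are invented for illustration, and a small guard is added to keep the whole trace when no Sidekiq frame is present):

```ruby
# Drops the Sidekiq processor frame and everything below it,
# keeping only the application's own frames.
def trim_backtrace(backtrace)
  index = backtrace.index { |item| item.include?('/lib/sidekiq/processor.rb') }
  return backtrace if index.nil? # no Sidekiq frame: keep everything
  backtrace.first(index)
end

TRACE = [
  "app/workers/bot_worker.rb:12:in `perform'",
  "app/models/user.rb:8:in `check'",
  "/gems/sidekiq/lib/sidekiq/processor.rb:202:in `execute_job'",
  "/gems/sidekiq/lib/sidekiq/processor.rb:170:in `process'"
]

trim_backtrace(TRACE)
# keeps only the two app/ frames
```

In Papertrail this turns a dozens-of-lines gem trace into the two or three frames you actually wrote.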

Sidekiq run job only once

I have a question about how you can run a Sidekiq job only once, e.g. just at the start of the Rails web app. One thing I tried was to initialize a Redis semaphore in an initializer and then use it in the job, but it didn't quite work. Here's the code I'm trying to get working:
config/initializer.rb
SEMAPHORE = Redis::Semaphore.new(:semaphore_name, :host => "localhost")
queue_worker.rb
class QueueWorker
  include Sidekiq::Worker

  def perform
    logger.info 'Start polling'
    unless SEMAPHORE.locked?
      logger.info 'Im here only once'
      SEMAPHORE.lock
    end
  end
end
root_controller_action
QueueWorker.perform_async
As another variant: I don't know whether it's possible to run a Sidekiq job from the initializer. If you can do that, there's no need for a semaphore at all.
Thanks for answering.
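One hedged sketch of the semaphore idea: an atomic "set if not exists" flag means only the first perform does the work. The in-memory OnceFlag class below is a stand-in so the pattern runs on its own; in the real app the flag must live in Redis, shared across processes, e.g. redis-rb's `redis.set('queue_worker:started', 1, nx: true)`, which returns true only for the first caller. All names here are invented for illustration:

```ruby
require 'set'

# In-memory stand-in for an atomic SETNX-style flag. Unlike Redis,
# this is NOT shared across processes or restarts; it only
# demonstrates the control flow.
class OnceFlag
  def initialize
    @seen = Set.new
  end

  # Returns true only the first time a given key is acquired.
  def acquire(key)
    return false if @seen.include?(key)
    @seen.add(key)
    true
  end
end

FLAG = OnceFlag.new

def perform_polling
  return :skipped unless FLAG.acquire('queue_worker:started')
  :ran # "Im here only once"
end
```

The check-and-set must be a single atomic operation (as SETNX is); the `locked?`-then-`lock` sequence in the question leaves a window where two jobs can both pass the check.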

sidekiq - fall back to standard sync ruby code when redis server is not running

I'm using Sidekiq in a Rails app to send some emails asynchronously. How can I ensure that the code (the job itself) is executed even when the Redis server is not running?
CommentsWorker.perform_async(@user.id, @comment.id)
In the comments worker, I'm fetching the user and the comment, and I send an email:
def perform(user_id, comment_id)
  user = User.find(user_id)
  comment = Comment.find(comment_id)
  CommentMailer.new_comment(user, comment).deliver
end
If I stop the Redis server, my app raises an exception Redis::CannotConnectError
I still want to send that email, even when the server is stopped, using old fashioned sync code. I tried to rescue from that exception, but for some reason it doesn't work.
Figured it out. The solution was to test for a redis connection and rescue from the exception, but before the call to perform_async. There's now only the minor issue of having to wait for the connection to time out, but I guess I can live with that.
redis_available = true
Sidekiq.redis do |connection|
  begin
    connection.info
  rescue Redis::CannotConnectError
    redis_available = false
  end
end

if redis_available
  CommentsWorker.perform_async(@user.id, @comment.id, @award.id)
else
  # sync code
  user = User.find(@user.id)
  comment = Comment.find(@comment.id)
  CommentMailer.new_comment(user, comment).deliver
end
I know this has already been answered and I liked @mihai's approach, but we wanted to reuse the code elsewhere in our application for multiple workers, and we wanted to make it generic enough to work with any worker.
We decided to extend Sidekiq::Worker with an additional method, perform_async_with_failover, defined below:
module Sidekiq
  module Worker
    module ClassMethods
      def perform_async_with_failover(*args)
        redis_available = true
        Sidekiq.redis do |connection|
          begin
            connection.info
          rescue Redis::CannotConnectError
            redis_available = false
          end
        end

        if redis_available
          # process the job asynchronously
          perform_async(*args)
        else
          # otherwise, instantiate and perform synchronously
          self.new.perform(*args)
        end
      end
    end
  end
end
Improving Kyle's solution above: connection.info is presumably there to see whether a connection is available, but info sends an extra command when Redis is online, which is unnecessary. I would just catch the exception from perform_async instead:
module Sidekiq
  module Worker
    module ClassMethods
      def perform_async_with_failover(*args)
        # process the job asynchronously
        perform_async(*args)
      rescue Redis::CannotConnectError
        # otherwise, instantiate and perform synchronously
        self.new.perform(*args)
      end
    end
  end
end
Edit: you may want to have a look at Sidekiq's error handling wiki section.
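The rescue-based failover can be demonstrated stand-alone with a stub worker whose perform_async fails the way Sidekiq's client does when Redis is down. The Redis::CannotConnectError definition and the CommentsWorker body below are stand-ins so the sketch runs without the sidekiq/redis gems installed:

```ruby
# Stand-in for the real exception class from the redis gem:
module Redis; class CannotConnectError < StandardError; end; end

class CommentsWorker
  # Simulates Sidekiq's client when Redis is unreachable.
  def self.perform_async(*_args)
    raise Redis::CannotConnectError, 'redis is down'
  end

  def self.perform_async_with_failover(*args)
    perform_async(*args)
    :enqueued
  rescue Redis::CannotConnectError
    new.perform(*args) # fall back to inline, synchronous execution
    :performed_inline
  end

  def perform(user_id, comment_id)
    # the real worker would load the records and send the mail here
  end
end

CommentsWorker.perform_async_with_failover(1, 2)
# falls back and returns :performed_inline
```

One caveat of the inline fallback: the job now runs inside the web request, so a slow perform delays the response, and a raised error inside perform surfaces to the caller instead of being retried.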
Implementing @mihai's answer, I ended up creating a service to encapsulate the action of "gracefully delivering email".
Example of calling the class:
message = MyMailer.order_confirmation(email_arguments)
GracefullyDeliverEmail.call(message)
The class:
class GracefullyDeliverEmail
  ###
  # Attempts to queue the email for async sending but falls back
  # gracefully to delivering it immediately.
  #
  # @param message {ActionMailer::MessageDelivery}
  ###
  def self.call(message)
    validate!(message)

    if redis_available?
      message.deliver_later(wait: 2.minutes)
    else
      message.deliver_now # Fallback to inline delivery
    end
  end

  # == Private Class Methods ================================================

  # https://stackoverflow.com/questions/15993080/sidekiq-fall-back-to-standard-sync-ruby-code-when-redis-server-is-not-running/42247913#42247913
  def self.redis_available?
    redis_available = true
    Sidekiq.redis do |connection|
      begin
        connection.info
      rescue Redis::CannotConnectError
        redis_available = false
      end
    end
    redis_available
  end

  def self.validate!(message)
    unless message.is_a?(ActionMailer::MessageDelivery)
      raise "message must be of class ActionMailer::MessageDelivery"
    end
  end

  private_class_method :redis_available?, :validate!
end
You should check whether Redis is reachable; something like:
def perform(user_id, comment_id)
  user = User.find(user_id)
  comment = Comment.find(comment_id)
  redis_info = Sidekiq.redis { |conn| conn.info } # raises Redis::CannotConnectError when Redis is down
  CommentMailer.new_comment(user, comment).deliver
rescue Redis::CannotConnectError
  CommentMailer.new_comment(user, comment).deliver
end
should do it.
