how to retry a sidekiq worker without raising an exception - ruby-on-rails

My Sidekiq worker makes a GET request to fetch the speech recognition outcome; the response status will be "SUCCESS", "FAILED" or "RUNNING". When the status is "RUNNING", I want to retry the Sidekiq worker in 10 minutes. How can I retry without sleeping or raising an exception? Sleeping would tie up a worker thread, and raising an exception leaves a New Relic error log entry which I don't want to record.
class GetAsrTextWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :default, :retry => 5

  sidekiq_retry_in do |count|
    600 * (count + 1)
  end

  def perform(task_id)
    # GET request to fetch the outcome
    if status == "SUCCESS"
      # save the outcome
    elsif status == "FAILED"
      raise AsrError
    elsif status == "RUNNING"
      raise "Asr running"
    end
  rescue AsrError => e
    Sidekiq.logger.error e.message
  end
end
This approach retries the worker, but it leaves ugly error entries in New Relic which I don't want to record.

Use the New Relic Ruby agent option error_collector.ignore_errors to ignore the specific exception you are raising:
Define a custom exception you can raise:
# lib/retry_job.rb
class RetryJob < StandardError; end
Define a worker that raises the exception:
# app/workers/foo_worker.rb
class FooWorker
  include Sidekiq::Worker

  def perform
    raise RetryJob
  end
end
Ignore that exception with the New Relic agent:
# config/newrelic.yml
common: &default_settings
  error_collector:
    ignore_errors: "ActionController::RoutingError,Sinatra::NotFound,RetryJob"
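Alternatively, if you want no exception at all, the worker can re-enqueue itself with Sidekiq's perform_in and return normally. A minimal sketch, assuming a hand-rolled attempt counter (this bypasses Sidekiq's built-in retry machinery, and fetch_asr_status is a hypothetical stand-in for your GET request):
class GetAsrTextWorker
  include Sidekiq::Worker
  MAX_ATTEMPTS = 5

  def perform(task_id, attempt = 0)
    status = fetch_asr_status(task_id) # hypothetical helper wrapping the GET request

    case status
    when "SUCCESS"
      # save the outcome
    when "FAILED"
      Sidekiq.logger.error "ASR failed for task #{task_id}"
    when "RUNNING"
      # schedule this worker to run again in 10 minutes; nothing is raised,
      # so nothing shows up in the New Relic error log
      self.class.perform_in(600, task_id, attempt + 1) if attempt < MAX_ATTEMPTS
    end
  end
end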

Related

Not catching the unicorn timeout exception in rails

This is sample code with which I am trying to handle a timeout exception from Unicorn.
unicorn.rb
worker_processes Integer(ENV['WEB_CONCURRENCY'] || 3)
timeout 15
preload_app true
timeout.rb
Rack::Timeout.timeout = 12
Sample code
def create_store
  ActiveRecord::Base.transaction do
    @store = Store.new(params[:store])
    if @store.save!
      sleep 12.3
      InformUserWorker.perform_async(store_id)
    end
  end
rescue => exception
  exception.backtrace.each { |trace| puts trace }
rescue Timeout::Error
  puts 'That took too long, exiting...1'
rescue Rack::Timeout::RequestTimeoutException
  puts 'That took too long, exiting...2'
rescue Rack::Timeout::RequestTimeoutError
  puts 'That took too long, exiting...3'
rescue Rack::Timeout::RequestExpiryError
  puts 'That took too long, exiting...4'
end
I am getting code=H13 desc="Connection closed without response" with this sleep of 12.3 seconds, and the transaction rollback happens, but none of these rescue blocks execute. I have added a couple of exception handlers here. Is anything wrong here?
Rescue clauses are tried in order, so your first clause (a bare rescue => exception, which matches any StandardError) captures everything raised there, and execution never reaches the more specific clauses below it.
Try putting it last:
# other, more specific rescues in here...
rescue => exception
  exception.backtrace.each { |trace| puts trace }
end # this one is last, only executed if none of the others match
If the problem is that you're not sure which exception class is being raised, use something like pry to inspect exception.class at the rescue site.
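For example (assuming the pry gem is in your bundle):
begin
  create_store
rescue => exception
  require "pry"
  binding.pry # inspect exception.class and exception.message interactively
  raise
end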

Sidekiq: error handler threw an error

I keep getting this error (without any backtrace) in my production logs:
!!! ERROR HANDLER THREW AN ERROR !!!
As I understand it, that means a registered error handler is raising an exception.
However, my application doesn't register any custom error handlers.
How can I debug / solve this issue?
My application looks like this:
class DeliveryJob
  include Sidekiq::Worker
  sidekiq_options queue: :deliveries, retry: 3

  sidekiq_retry_in { |count| 20 * (3 ** count) }

  sidekiq_retries_exhausted do |s|
    # do something (some methods use Redis) ...
    Rails.logger.tagged('delivery') do
      Rails.logger.error "Sidekiq retries exhausted for ..."
    end
  end

  def perform(id)
    Notification.find(id).deliver
  end
end
class Notification
  def deliver
    # do something that may fail
  rescue => e
    Rails.logger.tagged('delivery') do
      Rails.logger.error "Delivery error for #{endpoint}: #{e.to_s}"
    end
    raise e
  end
end
So in my logs I often see something like:
Delivery error for https://example.com/path: SSL connection error, etc.
Followed by:
!!! ERROR HANDLER THREW AN ERROR !!!
The delivery error is OK and expected; however, I don't understand why I get an error from an error handler.
https://github.com/mperham/sidekiq/blob/d8f11c26518dbe967880f76fd23bb99e9d2411d5/lib/sidekiq/exception_handler.rb#L23
Sidekiq logs this message from its own exception handling code: when it reports an error, it calls every registered error handler in turn, and if one of those handlers itself raises, Sidekiq rescues that and prints this line instead. Note that error handlers can be registered by gems (monitoring agents, for example), not only by your own code.
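One way to find the culprit is to wrap the registered handlers so their own failures get logged with a backtrace before Sidekiq swallows them. A sketch, assuming Sidekiq's error_handlers array API (each handler is called with the exception and a context hash); run it after all gems have registered their handlers:
# config/initializers/sidekiq_handler_debug.rb -- temporary debugging aid
Sidekiq.configure_server do |config|
  config.error_handlers.map! do |handler|
    proc do |ex, ctx|
      begin
        handler.call(ex, ctx)
      rescue => handler_error
        Sidekiq.logger.error "handler #{handler.inspect} failed: " \
          "#{handler_error.class}: #{handler_error.message}\n" \
          "#{handler_error.backtrace.join("\n")}"
      end
    end
  end
end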

deliver_later with Sidekiq causes 500 error when unable to connect to Redis

In one of my Rails application controllers, I do something like this:
def create
  @user = User.create(user_params)
  # send welcome email
  UserMailer.welcome(@user).deliver_later
end
Now, I've intentionally disabled my Redis server so that I can replicate what would happen in the case that a connection couldn't be made from my app.
Unfortunately, this entire create request fails with a 500 if the deliver_later is unable to connect to Redis.
What I'd like is that the request still succeeds, but the mailer fails silently.
How can I accomplish this?
Additional information:
In config/initializers/action_mailer.rb:
rescue_from(Redis::CannotConnectError) do |exception|
  Rails.logger.error "Original record not found: #{@serialized_arguments.join(', ')}"
end
This never gets called on exception, though. I tried rescue_from(StandardError) and (Exception), but neither was called either.
I'm using sidekiq as my job queue adapter:
config.active_job.queue_adapter = :sidekiq
The 500 error I get is:
Redis::CannotConnectError (Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED)):
My UserMailer is a subclass of ApplicationMailer which is a subclass of ActionMailer::Base.
In order to prevent calls to deliver_later from crashing when Redis is down, we added the following monkey-patch:
# If +raise_delivery_errors+ is false, errors occurring while attempting to
# enqueue the email for delivery will be ignored (for instance, if Redis is
# unreachable). In these cases, the email will never be delivered.
module MessageDeliveryWithSilentQueueingErrors
def deliver_later
super
rescue Redis::CannotConnectError => e
raise if raise_delivery_errors
# Log details of the failed email here
end
def deliver_later!
super
rescue Redis::CannotConnectError => e
raise if raise_delivery_errors
# Log details of the failed email here
end
end
ActionMailer::MessageDelivery.send(:prepend, MessageDeliveryWithSilentQueueingErrors)
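If you'd rather not monkey-patch ActionMailer, a narrower alternative is to rescue at the call site; a minimal sketch based on the controller action from the question:
def create
  @user = User.create(user_params)
  begin
    UserMailer.welcome(@user).deliver_later
  rescue Redis::CannotConnectError => e
    # swallow the enqueueing failure so the request still succeeds;
    # the welcome email is simply never sent
    Rails.logger.error "Could not enqueue welcome email: #{e.message}"
  end
end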

How to run background jobs during cucumber tests?

What is the best way to test something that requires background jobs with Cucumber? I need to run DelayedJob and Sneakers workers in background while tests are running.
You can run any application in the background:
@pid = Process.spawn "C:/Apps/whatever.exe"
Process.detach(@pid)
And even kill it after tests are done:
Process.kill('KILL', @pid) unless @pid.nil?
You can create your own step definition in features/step_definitions/whatever_steps.rb (hopefully with a better name)
When /^I wait for background jobs to complete$/ do
  Delayed::Worker.new.work_off
end
That can be extended for any other scripts you'd like to run with that step. Then in the test, it goes something like:
Then I should see the text "..."
When I wait for background jobs to complete
And I refresh the page
Then I should see the text "..."
In case anyone has a similar problem: I ended up writing this (thanks to a Square blog post):
require "timeout"
class CucumberExternalWorker
attr_accessor :worker_pid, :start_command
def initialize(start_command)
raise ArgumentError, "start_command was expected" if start_command.nil?
self.start_command = start_command
end
def start
puts "Trying to start #{start_command}..."
self.worker_pid = fork do
start_child
end
at_exit do
stop_child
end
end
private
def start_child
exec({ "RAILS_ENV" => Rails.env }, start_command)
end
def stop_child
puts "Trying to stop #{start_command}, pid: #{worker_pid}"
# send TERM and wait for exit
Process.kill("TERM", worker_pid)
begin
Timeout.timeout(10) do
Process.waitpid(worker_pid)
puts "Process #{start_command} stopped successfully"
end
rescue Timeout::Error
# Kill process if could not exit in 10 seconds
puts "Sending KILL signal to #{start_command}, pid: #{worker_pid}"
Process.kill("KILL", worker_pid)
end
end
end
This can be called as follows (I added it to env.rb for Cucumber):
# start delayed job
$delayed_job_worker = CucumberExternalWorker.new("rake jobs:work")
$delayed_job_worker.start
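The question also mentions Sneakers; the same wrapper works for anything you can start from a shell command. For example (assuming the sneakers gem's standard rake task; adjust to whatever command starts your workers):
# start sneakers workers the same way
$sneakers_worker = CucumberExternalWorker.new("rake sneakers:run")
$sneakers_worker.start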

How to test concurrency locally?

What's the best way to test concurrency locally? I.e., I want to test 10 concurrent hits. I am aware of services like Blitz; however, I am trying to find a simpler way of doing it locally to test against race conditions.
Any ideas? Via curl maybe?
Check out Apache Bench (ab). Basic usage is dead simple (-n is the total number of requests, -c the concurrency level):
ab -n 100 -c 10 http://your.application
For locally testing race conditions in your tests, you can use helpers like these:
# call the block in a forked process
def fork_with_new_connection(config, object = nil, options = {})
  raise ArgumentError, "Missing block" unless block_given?
  options = {
    :stop => true, :index => 0
  }.merge(options)

  fork do
    # pause the child immediately after forking (SIGSTOP cannot be trapped,
    # so the child sends it to itself); the parent later resumes all children
    # at once with SIGCONT so the blocks run concurrently
    Process.kill('STOP', Process.pid) if options[:stop]
    begin
      ActiveRecord::Base.establish_connection(config)
      yield(object)
    ensure
      ActiveRecord::Base.remove_connection
    end
  end
end
# call the block multiple times, each in its own fork
def multi_times_call_in_fork(count = 3, &block)
  raise ArgumentError, "Missing block" unless block_given?
  config = ActiveRecord::Base.remove_connection
  pids = []
  count.times do |index|
    pids << fork_with_new_connection(config, nil, :index => index, &block)
  end

  # resume the stopped forks at (roughly) the same time
  Process.kill("CONT", *pids)
  Process.waitall
  ActiveRecord::Base.establish_connection(config)
end
# example
multi_times_call_in_fork(5) do
  # do something with race conditions
  # add asserts
end
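For instance, a classic non-atomic read-modify-write makes the race visible. A sketch, assuming a hypothetical Counter model with an integer value column starting at 0:
multi_times_call_in_fork(5) do
  counter = Counter.first
  counter.update!(value: counter.value + 1) # read-modify-write, not atomic
end

# Without locking, the concurrent updates overwrite each other, so the final
# value is often less than 5. An atomic alternative such as
#   Counter.where(id: Counter.first.id).update_all("value = value + 1")
# does not lose updates.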
