Get error message out of Sidekiq job - ruby-on-rails

I want to get the exception's error message out of a Sidekiq job. When I set the back_trace option to true it retries my job, but I want the job to exit when an error is raised and then read the error message.
Even just knowing whether the process ended successfully or failed would be enough.
def perform(text)
  begin
    fail StandardError, 'Error!'
  rescue
    fail 'EEE' # I want to get this error when I call the job
  end
end
# call
NormalJob.perform_async('test')
# I want to get the error here, after the call

If I were you I would try the sidekiq-status gem. It has several options which can be helpful in such situations:
You can retrieve the status of your worker:
job_id = MyJob.perform_async(*args)
# :queued, :working, :complete or :failed , nil after expiry (30 minutes)
status = Sidekiq::Status::status(job_id)
Sidekiq::Status::queued? job_id
Sidekiq::Status::working? job_id
Sidekiq::Status::complete? job_id
Sidekiq::Status::failed? job_id
You also have options for tracking progress and for saving and retrieving data associated with the job:
class MyJob
  include Sidekiq::Worker
  include Sidekiq::Status::Worker # Important!

  def perform(*args)
    # your code goes here

    # the common idiom to track progress of your task
    total 100 # by default
    at 5, "Almost done"

    # a way to associate data with your job
    store vino: 'veritas'

    # a way of retrieving said data
    # remember that retrieved data is always String|nil
    vino = retrieve :vino
  end
end
job_id = MyJob.perform_async(*args)
data = Sidekiq::Status::get_all job_id
data # => {status: 'complete', update_time: 1360006573, vino: 'veritas'}
Sidekiq::Status::get job_id, :vino #=> 'veritas'
Sidekiq::Status::at job_id #=> 5
Sidekiq::Status::total job_id #=> 100
Sidekiq::Status::message job_id #=> "Almost done"
Sidekiq::Status::pct_complete job_id #=> 5
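Applying this to the job from the question, a minimal sketch (assuming sidekiq-status is installed and configured, and with retries disabled so the job fails immediately) could store the exception message and read it back from the caller:
class NormalJob
  include Sidekiq::Worker
  include Sidekiq::Status::Worker # Important!
  sidekiq_options retry: false    # fail immediately instead of retrying

  def perform(text)
    fail StandardError, 'Error!'
  rescue => e
    store error_message: e.message # keep the message for the caller
    raise                          # re-raise so the status becomes :failed
  end
end

job_id = NormalJob.perform_async('test')
# later, e.g. after polling Sidekiq::Status::failed?(job_id)
Sidekiq::Status::get job_id, :error_message # => "Error!"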
Another option is to use Sidekiq batches and their status.
This is what batches allow you to do:
batch = Sidekiq::Batch.new
batch.description = "Batch description (this is optional)"
batch.notify(:email, :to => 'me@example.org')
batch.jobs do
  rows.each { |row| RowWorker.perform_async(row) }
end
puts "Just started Batch #{batch.bid}"
b = Sidekiq::Batch.new(bid) # bid is a method on Sidekiq::Worker that gives access to the Batch ID associated to the job.
b.jobs do
  SomeWorker.perform_async(1)
  sleep 1
  # Uh oh, Sidekiq has finished all outstanding batch jobs
  # and fires the complete message!
  SomeWorker.perform_async(2)
end
status = Sidekiq::Batch::Status.new(bid)
status.total # jobs in the batch => 98
status.failures # failed jobs so far => 5
status.pending # jobs which have not succeeded yet => 17
status.created_at # => 2012-09-04 21:15:05 -0700
status.complete? # if all jobs have executed at least once => false
status.join # blocks until the batch is considered complete, note that some jobs might have failed
status.failure_info # an array of failed jobs
status.data # a hash of data about the batch which can easily be converted to JSON for javascript usage
It can be used out of the box.
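If you want failures surfaced automatically, batches also support callbacks; a minimal sketch (the callback class name here is illustrative, and this assumes the batch callback API with an on_complete hook as documented in the Sidekiq wiki):
class BatchReport
  def on_complete(status, options)
    return if status.failures.zero?
    Rails.logger.error("Batch #{status.bid} finished with #{status.failures} failed job(s)")
  end
end

batch = Sidekiq::Batch.new
batch.on(:complete, BatchReport)
batch.jobs { NormalJob.perform_async('test') }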

Related

Rails 5 - Sidekiq worker shows job done but nothing happens

I'm using Sidekiq for delayed jobs together with the sidekiq-status and sidekiq-ent gems. I've created a worker which is responsible for setting minor to false when a user is an adult but still has minor: true. This worker should be fired every day at midnight ET, like below:
# config/initializers/sidekiq.rb
config.periodic do |mgr|
  # every day at midnight ET: "0 5 * * *"
  mgr.register("0 5 * * *", MinorWorker)
end
# app/workers/minor_worker.rb
class MinorWorker
  include Sidekiq::Worker

  def perform
    User.adults.where(minor: true).remove_minor_status
  rescue => e
    Rails.logger.error("Unable to update minor field. Exception: #{e.message} : #{e.backtrace.join('\n')}")
  end
end
# app/models/user.rb
class User < ApplicationRecord
  scope :adults, -> { where('date_of_birth <= ?', 18.years.ago) }

  def self.remove_minor_status
    update(minor: false)
  end
end
Now I want to check this on my local machine. To do so I'm using the timecop gem to time travel:
# config/application.rb
config.time_zone = 'Eastern Time (US & Canada)'

# config/environments/development.rb
config.after_initialize do
  t = Time.local(2021, 12, 21, 23, 59, 0)
  Timecop.travel(t)
end
After firing up Sidekiq with bundle exec sidekiq and bundle exec rails s, I wait a minute and see that the worker shows up:
2021-12-21T22:59:00.130Z 25711 TID-ovvzr9828 INFO: Managing 3 periodic jobs
2021-12-21T23:00:00.009Z 25711 TID-ovw69k4ao INFO: Enqueued periodic job SettlementWorker with JID ddab15264f81e0b417e7dd83 for 2021-12-22 00:00:00 +0100
2021-12-21T23:00:00.011Z 25711 TID-ovw69k4ao INFO: Enqueued periodic job MinorWorker with JID 0bcd6b76d6ee4ff9e7850b35 for 2021-12-22 00:00:00 +0100
But it didn't do anything; the user's minor status is still set to minor: true:
2.4.5 :002 > User.last.date_of_birth
=> Mon, 22 Dec 2003
2.4.5 :001 > User.last.minor
=> true
Did I miss something?
EDIT
I should add that when I call this worker from rails c everything works fine. I even have an RSpec test which passes:
RSpec.describe MinorWorker, type: :worker do
  subject(:perform) { described_class.new.perform }

  context 'when User has minor status' do
    let(:user1) { create(:user, minor: true) }

    it 'removes minor status' do
      expect { perform }.to change { user1.reload.minor }.from(true).to(false)
    end

    context 'when user is adult' do
      let(:registrant2) { create(:registrant) }

      it 'does not change minor status' do
        expect(registrant2.reload.minor).to eq(false)
      end
    end
  end
end
Since this is a class method, update won't work here:
def self.remove_minor_status
  update(minor: false)
end
Make use of #update_all instead:
def self.remove_minor_status
  update_all(minor: false)
end
Also, I think it's best practice to have some test cases to ensure the methods work as expected. For now you can also try this from the rails console and verify that it actually works:
test "update minor status" do
  user = User.create(date_of_birth: 19.years.ago, minor: true)
  User.adults.where(minor: true).remove_minor_status
  assert_equal user.reload.minor, false
end
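A quick manual check from the rails console along the same lines (a sketch; the output comments are hypothetical):
User.create!(date_of_birth: 19.years.ago, minor: true)
User.adults.where(minor: true).remove_minor_status # with update_all this issues a single UPDATE
User.adults.where(minor: true).count               # => 0 if the status was cleared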
I think you need to either do update_all or update each record by itself, like this:
User.adults.where(minor: true).update_all(minor: false)
or
class MinorWorker
  include Sidekiq::Worker

  def perform
    users = User.adults.where(minor: true)
    users.each { |user| user.remove_minor_status }
  rescue => e
    Rails.logger.error("Unable to update minor field. Exception: #{e.message} : #{e.backtrace.join('\n')}")
  end
end
You may also want to consider changing update to update! so that it raises an error on failure, which will then be caught by the rescue in your job:
def self.remove_minor_status
  update!(minor: false)
end
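Note that the per-record loop above calls remove_minor_status on each user, which requires an instance method rather than a class method; a minimal sketch of that variant, combined with update!:
class User < ApplicationRecord
  scope :adults, -> { where('date_of_birth <= ?', 18.years.ago) }

  # instance-level version, callable on each record in the worker's loop
  def remove_minor_status
    update!(minor: false) # raises ActiveRecord::RecordInvalid if the save fails
  end
end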

Rails cache counter

I have a simple Ruby method meant to throttle some execution.
MAX_REQUESTS = 60
# per
TIME_WINDOW = 1.minute

def throttle
  cache_key = "#{request.ip}_count"
  count = Rails.cache.fetch(cache_key, expires_in: TIME_WINDOW.to_i) { 0 }

  if count.to_i >= MAX_REQUESTS
    render json: { message: 'Too many requests.' }, status: 429
    return
  end

  Rails.cache.increment(cache_key)
  true
end
After some testing I've found that cache_key never invalidates.
I investigated with binding.pry and found the issue:
[35] pry(#<Refinery::ApiReferences::Admin::ApiHostsController>)> Rails.cache.write(cache_key, count += 1, expires_in: 60, raw: true)
=> true
[36] pry(#<Refinery::ApiReferences::Admin::ApiHostsController>)> Rails.cache.send(:read_entry, cache_key, {})
=> #<ActiveSupport::Cache::Entry:0x007fff1e34c978 @created_at=1495736935.0091069, @expires_in=60.0, @value=11>
[37] pry(#<Refinery::ApiReferences::Admin::ApiHostsController>)> Rails.cache.increment(cache_key)
=> 12
[38] pry(#<Refinery::ApiReferences::Admin::ApiHostsController>)> Rails.cache.send(:read_entry, cache_key, {})
=> #<ActiveSupport::Cache::Entry:0x007fff1ee105a8 @created_at=1495736965.540865, @expires_in=nil, @value=12>
So increment is wiping out the expires_in value and changing created_at; regular writes do the same thing.
How do I prevent this? I just want to update the value for a given cache key.
UPDATE
Per suggestion I tried:
MAX_REQUESTS = 60
# per
TIME_WINDOW = 1.minute

def throttle
  cache_key = "#{request.ip}_count"
  count = Rails.cache.fetch(cache_key, expires_in: TIME_WINDOW.to_i, raw: true) { 0 }

  if count.to_i >= MAX_REQUESTS
    render json: { message: 'Too many requests.' }, status: 429
    return
  end

  Rails.cache.increment(cache_key)
  true
end
No effect. Cache does not expire.
Here's a "solution," though I won't mark it correct because surely this isn't necessary?
MAX_REQUESTS = 60
# per
TIME_WINDOW = 1.minute

def throttle
  count_cache_key = "#{request.ip}_count"
  window_cache_key = "#{request.ip}_window"

  window = Rails.cache.fetch(window_cache_key) { (Time.zone.now + TIME_WINDOW).to_i }

  if Time.zone.now.to_i >= window
    Rails.cache.write(window_cache_key, (Time.zone.now + TIME_WINDOW).to_i)
    Rails.cache.write(count_cache_key, 1)
  end

  count = Rails.cache.read(count_cache_key) || 0

  if count.to_i >= MAX_REQUESTS
    render json: { message: 'Too many requests.' }, status: 429
    return
  end

  Rails.cache.write(count_cache_key, count + 1)
  true
end
Incrementing a raw value (with the raw: true option) in the Rails cache works exactly the way you desire, i.e. it updates only the value, not the expiration time. However, when debugging this you cannot rely too much on the output of read_entry, as it does not fully correspond to the raw value stored in the cache: the cache store does not give back the expiry time when it stores just a raw value.
That is why, normally (without the raw option), Rails does not store just the raw value but creates a cache Entry object which, besides the value, holds additional data such as the expiry time. It then serializes this object and saves it to the cache store. Upon reading the value back, it de-serializes the object and still has access to all the info, including the expiry time.
However, since you cannot increment a serialized object, you need to store a raw value instead, i.e. use the raw: true option. This makes Rails store the value directly and pass the expiry time as a parameter to the cache store's write method (without the possibility of reading it back from the store).
So, to sum up, you must use raw: true when caching a value for incrementing, and the expiry time will normally be preserved in the cache store. See the following test (done with the mem_cache_store store):
# cache_test.rb
cache_key = "key"
puts "setting..."
Rails.cache.fetch(cache_key, expires_in: 3.seconds, raw: true) { 1 }
puts "#{Time.now} cached value: #{Rails.cache.read(cache_key)}"
sleep(2)
puts "#{Time.now} still cached: #{Rails.cache.read(cache_key)}"
puts "#{Time.now} incrementing..."
Rails.cache.increment(cache_key)
puts "#{Time.now} incremented value: #{Rails.cache.read(cache_key)}"
sleep(1)
puts "#{Time.now} gone!: #{Rails.cache.read(cache_key).inspect}"
When running this, you'll get:
$ rails runner cache_test.rb
Running via Spring preloader in process 31666
setting...
2017-05-25 22:15:26 +0200 cached value: 1
2017-05-25 22:15:28 +0200 still cached: 1
2017-05-25 22:15:28 +0200 incrementing...
2017-05-25 22:15:28 +0200 incremented value: 2
2017-05-25 22:15:29 +0200 gone!: nil
As you can see, the value has been incremented without resetting the expiry time.
Update: I set up a minimal test of your code, though not run through a real controller but only as a script. I made only four small changes to the throttle code from your OP:
lowered the time window
changed render to a simple puts
used only a single key, as if all requests came from a single IP address
printed the incremented value
The script:
# chache_test2.rb
MAX_REQUESTS = 60
# per
#TIME_WINDOW = 1.minute
TIME_WINDOW = 3.seconds
def throttle
#cache_key = "#{request.ip}_count"
cache_key = "127.0.0.1_count"
count = Rails.cache.fetch(cache_key, expires_in: TIME_WINDOW.to_i, raw: true) { 0 }
if count.to_i >= MAX_REQUESTS
#render json: { message: 'Too many requests.' }, status: 429
puts "too many requests"
return
end
puts Rails.cache.increment(cache_key)
true
end
62.times do |i|
throttle
end
sleep(3)
throttle
The run prints the following:
$ rails runner cache_test2.rb
Running via Spring preloader in process 32589
2017-05-26 06:11:26 +0200 1
2017-05-26 06:11:26 +0200 2
2017-05-26 06:11:26 +0200 3
2017-05-26 06:11:26 +0200 4
...
2017-05-26 06:11:26 +0200 58
2017-05-26 06:11:26 +0200 59
2017-05-26 06:11:26 +0200 60
2017-05-26 06:11:26 +0200 too many requests
2017-05-26 06:11:26 +0200 too many requests
2017-05-26 06:11:29 +0200 1
Perhaps you don't have caching configured in development at all? I recommend testing this with the memcached store, which is the preferred cache store for production environments. In development you need to switch it on explicitly:
# config/environments/development.rb
config.cache_store = :mem_cache_store
Also, if you are running a recent Rails 5.x version, you may need to run the rails dev:cache command, which creates the tmp/caching-dev.txt file used by the development config to actually enable caching in the development environment.
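For reference, a stock Rails 5 development config gates caching on that file with roughly this block (as generated by rails new; adjust the store to :mem_cache_store per the recommendation above):
# config/environments/development.rb (default Rails 5 scaffold)
if Rails.root.join('tmp/caching-dev.txt').exist?
  config.action_controller.perform_caching = true
  config.cache_store = :memory_store
  config.public_file_server.headers = {
    'Cache-Control' => "public, max-age=#{2.days.to_i}"
  }
else
  config.action_controller.perform_caching = false
  config.cache_store = :null_store
end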

Can I automatically re-run a method if a timeout error occurs?

We have an application that makes hundreds of API calls to external services. Sometimes some calls take too long to respond.
I am using the rake_timeout gem to find time-consuming processes, so Timeout::Error will be raised whenever some request takes too long to respond. I am rescuing this error and retrying the method:
def new
  tries ||= 0
  @make_one_external_service_call = exteral_api_fetch1(params[:id])
  @make_second_external_call = exteral_api_fetch1(@make_one_external_service_call)

# Below code will be repeated in every method
rescue Timeout::Error => e
  tries += 1
  retry if tries <= 3
  logger.error e.message
end
This lets the method fully re-run. It is very verbose and I am repeating it every time.
Is there any way to do this so that, if a Timeout::Error occurs, the method is automatically retried up to three times?
I have a little module for that:
# in lib/retryable.rb
module Retryable
  # Options:
  # * :tries - Number of tries to perform. Defaults to 1. If you want to retry once you must set tries to 2.
  # * :on    - The Exception on which a retry will be performed. Defaults to Exception, which retries on any Exception.
  # * :log   - The log level to log the exception. Defaults to nil.
  #
  # If you work with something like ActiveRecord#find_or_create_by_foo, remember to put that call in an
  # uncached { } block. That forces subsequent finds to hit the database again.
  #
  # Example
  # =======
  #   retryable(:tries => 2, :on => OpenURI::HTTPError) do
  #     # your code here
  #   end
  #
  def retryable(options = {}, &block)
    opts = { :tries => 1, :on => Exception }.merge(options)
    retry_exception, retries = opts[:on], opts[:tries]

    begin
      return yield
    rescue retry_exception => e
      logger.send(opts[:log], e.message) if opts[:log]
      retry if (retries -= 1) > 0
    end

    yield
  end
end
and then in your model:
extend Retryable

def new
  retryable(:tries => 3, :on => Timeout::Error, :log => :error) do
    @make_one_external_service_call = exteral_api_fetch1(params[:id])
    @make_second_external_call = exteral_api_fetch1(@make_one_external_service_call)
  end
  ...
end
You could do something like this:
module Foo
  def self.retryable(options = {})
    retry_times   = options[:times] || 10
    try_exception = options[:on] || Exception

    yield if block_given?
  rescue *try_exception => e
    retry if (retry_times -= 1) > 0
    raise e
  end
end
Foo.retryable(on: Timeout::Error, times: 5) do
  # your code here
end
You can even pass multiple exceptions to "catch":
Foo.retryable(on: [Timeout::Error, StandardError]) do
  # your code here
end
I think what you need is the retryable gem.
With the gem, you can write your method like below
def new
  retryable :on => Timeout::Error, :times => 3 do
    @make_one_external_service_call = exteral_api_fetch1(params[:id])
    @make_second_external_call = exteral_api_fetch1(@make_one_external_service_call)
  end
end
Please read the documentation for more information on how to use the gem and the other options it provides
You could also just write a helper method for that:
class TimeoutHelper
  def self.call_and_retry(tries = 3)
    yield
  rescue Timeout::Error => e
    tries -= 1
    retry if tries > 0
    Rails.logger.error e.message
  end
end
(completely untested) and call it via
TimeoutHelper.call_and_retry { [your code] }
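Applied to the action from the question, that would look roughly like this (method and variable names carried over from the question):
def new
  TimeoutHelper.call_and_retry do
    @make_one_external_service_call = exteral_api_fetch1(params[:id])
    @make_second_external_call = exteral_api_fetch1(@make_one_external_service_call)
  end
end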

How do I reschedule a failed Rufus every job?

I have a Rufus "every" job that runs periodically, but sometimes it may fail to perform its task.
On failure, I would like to schedule a retry soon, rather than wait until the next cycle.
class PollProducts
  def initialize
  end

  def call(job)
    puts "Updating products"
    begin
      # Do something that might fail
      raise if rand(3) == 1
    rescue Exception => e
      puts "Request failed - rescheduling: #{e}"
      # job.in("5s") <-- What can I do?
    end
  end
end
scheduler.every '6h', PollProducts.new, :first_in => '5s', :blocking => true
Is this possible?
OK, this worked for me:
job.scheduler.in '5s', self
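Put back into the handler from the question, the retry hook sits in the rescue block; a sketch (rufus-scheduler invokes #call on whatever handler you pass to scheduler.in):
class PollProducts
  def call(job)
    puts "Updating products"
    # Do something that might fail
    raise if rand(3) == 1
  rescue StandardError => e
    puts "Request failed - rescheduling: #{e}"
    job.scheduler.in '5s', self # one-off retry of this same handler in 5 seconds
  end
end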

Rails backgroundRB plugin: need to schedule a job and queue it to the database for persistence

I'm trying to do the following:
Run a worker and a method within it every 15 minutes
Keep a log of the job's last runtime in the database table bdrd_job_queue.
What I've done:
I have a schedule every 15 minutes in my backgroundRB.yml file
The method call has a persistent_job.finish! call, but it's not working, because the persistent_job object is nil.
How can I ensure it's logged in the DB, but still automatically scheduled from backgroundRB.yml?
I was finally able to do it.
The workaround is to schedule a task that will queue it to the database, scheduled to run right away.
In your worker ...
class NotificationWorker < BackgrounDRb::MetaWorker
  set_worker_name :notification_worker

  def create(args = nil)
  end

  def queue_notify_changes(args = nil)
    BdrbJobQueue.insert_job(:worker_name => 'notification_worker',
                            :worker_method => 'notify_new_changes_DAEMON',
                            :args => 'hello_world',
                            :scheduled_at => Time.now.utc,
                            :job_key => 'email_changes_notification_task')
  end

  def notify_new_changes_DAEMON
    # Do incredibly cool stuff here
  end
end
In the config file backgroundrb.yml
---
:backgroundrb:
  :ip: 0.0.0.0
  :port: 11006
  :environment: production
  :log: foreground
  :debug_log: true
  :persistent_disabled: false
  :persistent_delay: 10

:schedules:
  :notification_worker:
    :queue_notify_changes:
      :trigger_args: 0 0 0 * * *
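To verify the persistence side, you can inspect the queued row from a console; an illustrative check (exact columns depend on the backgroundrb migration):
# rails console
BdrbJobQueue.find_by_job_key('email_changes_notification_task')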
