Sidekiq failed jobs - ruby-on-rails

I've set up Sidekiq to monitor some asynchronous and scheduled tasks.
When I queue a job I can see it on the web monitoring tool.
Here is an example of a job:

class HardWorker
  include Sidekiq::Worker

  def perform(name, count)
    raise "Error"
  end
end
If I then run HardWorker.perform_in(10.seconds, 'bob', 5) and the job fails (which it always does, intentionally), it seems to disappear from the web GUI. 'Failed', 'Retries', 'Processed', etc. don't go up. None of the graphs change.
Here is what the log spits out:
2015-04-19T11:03:40.013Z 1438 TID-3fk WARN: uninitialized constant HardWorker
This makes sense, as I created the class through the console rather than in my project, but shouldn't Sidekiq show this as a failed job?
I've also tried force setting the following:
sidekiq_options :retry => false
sidekiq_options :failures => true
Anyone got any suggestions how to get the web app to show those failed jobs?

Turns out that because I wrote the class in rails c, the separate Sidekiq processes didn't know about it. That part was intentional, but I hadn't considered that, since the class couldn't even be loaded, the job never actually ran inside a worker, so there was nothing to report the failure back to Sidekiq.
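For completeness, a minimal sketch of what avoids this: the worker defined in a file the Sidekiq process can load at boot (app/workers is the conventional path; the exact file name is an assumption), followed by a restart of the Sidekiq process.

# app/workers/hard_worker.rb -- loaded when the Sidekiq process boots,
# so a raise inside #perform shows up under 'Failed' and 'Retries'.
class HardWorker
  include Sidekiq::Worker

  def perform(name, count)
    raise "Error" # intentionally failing job, for testing the UI counters
  end
end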

Related

How to set a timeout for jobs in Sidekiq

I've encountered an issue with Sidekiq: I want to set a timeout for jobs, meaning that when a job's processing time exceeds the timeout, the job should be stopped.
I have found how to set a global timeout in the sidekiq.yml config file, but I want to set a separate timeout for different jobs, i.e. a particular worker class should have its own timeout config.
Can you help me? Thanks so much.
There's no approved way to do this. You cannot stop a thread safely while it is executing. You need to change your job to check periodically if it should stop.
You can set network timeouts on any 3rd party calls you are making so that they time out.
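A minimal sketch of that advice, checking a cancel flag between chunks of work (the Batch model, its cancelled? flag, and the items association are all assumptions made for illustration):

class ChunkedWorker
  include Sidekiq::Worker

  def perform(batch_id)
    batch = Batch.find(batch_id) # hypothetical record that carries a cancel flag
    batch.items.in_batches(of: 100) do |chunk|
      return if batch.reload.cancelled? # bail out cleanly between chunks
      process(chunk)
    end
  end

  private

  def process(chunk)
    # the actual work for one chunk goes here
  end
end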
You can wrap your job code inside a timeout block like the one below:

Timeout::timeout(2.hours) do
  # do possibly long-running task
end

The job will fail automatically after 2 hours.
This is the same method as yassen suggested, but more concrete.
class MyCustomWorker
  include Sidekiq::Worker

  def perform
    begin
      Timeout::timeout(30.minutes) do # set timeout to 30 minutes
        perform_job()
      end
    rescue Timeout::Error
      Rails.logger.error "timeout reached for worker"
    end
  end

  def perform_job
    # worker logic here
  end
end

How to run a cyclic background process in Ruby-on-Rails?

I have some methods that work with the API of a third-party app. Doing it on a button click is no problem, but it should be a permanent process.
How do I run them in the background? And how can I pause the cycle to do some other work with the same API, then resume the cycle after that job is done?
So far I've only read about ActiveJob, but it only deals with time-based scheduling...
UPDATE
I've tried to make it work with whenever and sidekiq; the task runs, but it does nothing, and I can't figure out where to look for logs.
**schedule.rb**

every 1.minute do
  runner "UpdateWorker.perform_async"
end

**update_worker.rb**

class UpdateWorker
  include Sidekiq::Worker
  include CommonMods

  def perform
    logger.info "Things are happening."
    logger.debug "Here's some info: #{hash.inspect}"
    myMethod
  end

  def myMethod
    ....
    ....
    ....
  end
end
It's not exactly what I need, but it's better than nothing. Can somebody explain this to me with examples?
UPDATE 2: After changing the code it is absolutely necessary to restart sidekiq. With that, the problem is solved, but I'm not sure this is the best way.
You can define a job which enqueues itself:
class MyJob < ActiveJob::Base
  def perform(*args)
    # Do something unless some flag is raised
  ensure
    self.class.set(wait: 1.hour).perform_later(*args)
  end
end
There are several libraries to schedule jobs on a regular basis. For example, you could use sidekiq-cron to run a job every minute.
If you want to pause it for some time, you could set a flag somewhere (Redis/database/file) and skip execution as long as it is detected.
On a somewhat related note: don't use sidetiq. It was really great, but it's not maintained anymore and has incompatibilities with current Sidekiq versions.
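A minimal sketch of the sidekiq-cron route combined with such a pause flag, assuming the flag lives in Redis (the key name is made up for illustration):

# config/initializers/sidekiq_cron.rb (sidekiq-cron gem)
Sidekiq::Cron::Job.create(
  name:  "UpdateWorker - every minute",
  cron:  "* * * * *",
  class: "UpdateWorker"
)

# app/workers/update_worker.rb
class UpdateWorker
  include Sidekiq::Worker

  def perform
    # Skip this run while the pause flag is set (hypothetical Redis key).
    return if Sidekiq.redis { |conn| conn.get("update_worker_paused") }
    # ... talk to the third-party API ...
  end
end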
Just enqueue the next execution in an ensure section after the job completes, after checking some flag that indicates whether it should continue.
I also recommend adding some delay there, so that you don't end up in a tight loop when an error occurs inside the job.
I don't know ActiveJob, but I can recommend the whenever gem for creating cron (periodic background) jobs. Basically you end up writing a rake task, like this:
desc 'send digest email'
task send_digest_email: :environment do
  # ... set options if any
  UserMailer.digest_email_update(options).deliver!
end
I never added a rake task to itself, but for repeated processing you could do something like this (from answers to this specific question):

Rake::Task["send_digest_email"].execute
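One note if you go the whenever route: the blocks in config/schedule.rb only take effect once they have been written to the crontab, typically by running the gem's whenever --update-crontab command (or via its Capistrano integration), so a schedule that was never installed is a common reason the job silently never fires.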

DelayedJob sometimes cannot load job class with a namespace

Sometimes I've got errors in delayed_job worker
NameError: uninitialized constant Notifiers::MessageNotifierJob
full backtrace https://gist.github.com/olegantonyan/eeca9d612f9a10864efe
Notifiers::MessageNotifierJob is defined in app/jobs/notifiers/message_notifier_job.rb
By sometimes I mean that this job may fail -> retry -> succeed. The same thing happens with other jobs that have a namespace; jobs without a namespace work just fine.
I tried to add app/jobs/ to the autoload paths explicitly, without any luck:
config.autoload_paths += Dir[ Rails.root.join('app', 'jobs', '**/') ]
The job itself looks like this:

module Notifiers
  class MessageNotifierJob < BaseNotifierJob
    def perform(from, to, text)
      # some code to send slack notification
    end
  end
end
Solved. Neither delayed_job nor the autoloader is to blame.
A week before adding these new jobs (like Notifiers::MessageNotifierJob) I had increased the number of delayed_job workers (using the capistrano3-delayed-job gem) from 1 to 4. But capistrano3-delayed-job hadn't killed the old delayed_job process; it only started the 4 new ones. So I ended up with one old worker process that knew nothing about my new job classes. Whenever that old process picked up the job, it failed. Then one of the new processes picked it up and succeeded.
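One way to rule out stale workers after a deploy, assuming the standard daemons-based delayed_job script is in use, is a full restart (for example RAILS_ENV=production bin/delayed_job -n 4 restart), so every worker process is killed and comes back up with the current code.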

How to finish a Sidekiq job?

Say I have a worker class that looks like this:
class BuilderWorker
  include Sidekiq::Worker
  sidekiq_options retry: false

  def perform(order_id)
    if order_id == 5
      # How can I finish the job here? Say I want to finish it with a status of FAIL or COMPLETE.
    end
  end
end
I am looking for a way to finish a job from the worker class and, when finished, give it the status FAILED. The finish should be quiet (not raising an exception).
With Sidekiq there are only two job results:
success - a job returns from the perform method without error
failure - a job raises an error, going onto the retry queue to be retried in the future
Your complicated scenario is called application logic and Sidekiq cannot provide it. It is up to you to write that logic.
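A minimal sketch of what such application logic could look like, assuming a hypothetical status store of your own (here a Redis key; the key name and the record_status helper are made up for illustration), since Sidekiq itself only knows success or raised error:

class BuilderWorker
  include Sidekiq::Worker
  sidekiq_options retry: false

  def perform(order_id)
    if order_id == 5
      record_status(order_id, "FAILED") # our own bookkeeping, not Sidekiq's
      return                            # quiet finish: Sidekiq still counts this as processed successfully
    end
    # ... do the actual build work ...
    record_status(order_id, "COMPLETE")
  end

  private

  # Hypothetical helper: persist the outcome wherever the app can read it back.
  def record_status(order_id, status)
    Sidekiq.redis { |conn| conn.set("builder_status_#{order_id}", status) }
  end
end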

Sidekiq worker running for thousands of seconds even though there is a timeout

I have a sidekiq worker that shouldn't take more than 30 seconds, but after a few days I'll find that the entire worker queue stops executing because all of the workers are locked up.
Here is my worker:
class MyWorker
  include Sidekiq::Worker
  include Sidekiq::Status::Worker

  sidekiq_options queue: :my_queue, retry: 5, timeout: 4.minutes

  sidekiq_retry_in do |count|
    5
  end

  sidekiq_retries_exhausted do |msg|
    store({message: "Gave up."})
  end

  def perform(id)
    begin
      Timeout::timeout(3.minutes) do
        got_lock = with_semaphore("lock_#{id}") do
          # DO WORK
        end
      end
    rescue ActiveRecord::RecordNotFound => e
      # Handle
    rescue Timeout::Error => e
      # Handle
      raise e
    end
  end

  def with_semaphore(name, &block)
    Semaphore.get(name, {stale_client_timeout: 1.minute}).lock(1, &block)
  end
end
And the Semaphore class we use (redis-semaphore gem):

class Semaphore
  def self.get(name, options = {})
    Redis::Semaphore.new(name.to_sym,
      :redis => Application.redis,
      stale_client_timeout: options[:stale_client_timeout] || 1.hour,
    )
  end
end
Basically I'll stop the worker and it will state done: 10000 seconds, which the worker should NEVER be running for.
Anyone have any ideas on how to fix this or what is causing it? The workers are running on EngineYard.
Edit: One additional comment. The # DO WORK section has a chance of firing off a PostgreSQL function. I have noticed some mentions of PG::TRDeadlockDetected: ERROR: deadlock detected in the logs. Would this cause the worker to never complete, even with a timeout set?
Given that you want to ensure unique job execution, I would attempt removing all the locks and delegating job uniqueness control to a plugin like Sidekiq Unique Jobs.
In that case, even if the same job id is enqueued twice, the plugin ensures it will be enqueued/processed a single time.
You might also try the ActiveRecord with_lock mechanism: http://api.rubyonrails.org/classes/ActiveRecord/Locking/Pessimistic.html
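A minimal sketch of the with_lock approach, assuming the work revolves around a single ActiveRecord row (the Order model name here is an assumption):

class MyWorker
  include Sidekiq::Worker

  def perform(id)
    order = Order.find(id)
    # with_lock opens a transaction and takes a row-level lock (SELECT ... FOR UPDATE),
    # so two workers cannot process the same record at the same time.
    order.with_lock do
      # DO WORK
    end
  rescue ActiveRecord::RecordNotFound
    # the record was deleted before the job ran; nothing to do
  end
end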
I have had a similar problem before. To solve it, you should stop using Timeout.
As explained in this article, you should never use Timeout in a Sidekiq job. If you use Timeout, Sidekiq processes and threads can easily break.
Not only Ruby but also Java has a similar problem: stopping a thread from the outside is inherently dangerous, regardless of the language.
If you continue to have the same problem after removing Timeout, check whether you are using threads carelessly anywhere in your code.
Since Sidekiq's architecture is solid, in almost all cases the source of a bug like this lies outside Sidekiq itself.
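A minimal sketch of the safer alternative the other answers point to: put the timeout on the blocking call itself instead of wrapping the whole job in Timeout (the endpoint URL and the timeout values here are assumptions for illustration):

require "net/http"

class MyWorker
  include Sidekiq::Worker

  def perform(id)
    uri = URI("https://api.example.com/orders/#{id}") # hypothetical third-party endpoint
    http = Net::HTTP.new(uri.host, uri.port)
    http.use_ssl = true
    http.open_timeout = 5   # seconds allowed to establish the connection
    http.read_timeout = 30  # seconds allowed to wait for a response
    response = http.get(uri.request_uri)
    # ... handle the response ...
  end
end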
