I have a Sidekiq worker in my application that essentially works like this:
class SetResultsWorker
  include Sidekiq::Worker

  def perform
    ResultSetter.new.set_results!
  end
end

class ResultSetter
  def set_results!
    events = Event.started.without_results
    do_something if events.any?
  end
end
The problem is that events.any? is returning false when it should not. It's not a problem with the scopes, because when executing the worker synchronously (SetResultsWorker.new.perform) everything works OK. Any ideas?
EDIT: More information. This worker is executed periodically by a daemon. The first executions are OK, but then it stops working.
I have been trying to see if this is possible, and so far I have found nothing, so I will ask specifically:
Is it possible to have a Sidekiq worker which can receive a method (for example a lambda) and pass arguments on to it?
Example case:
I need to do some heavy computation on my server, and my options are either to make a specific Sidekiq worker for a job that will only ever run once, which will end up cluttering my code base, or to make a worker which could accept something like:
lot_of_work.each do |args|
  Workers::Tmp::LetsGo.perform_async(args) { |a| a.lets_go }
end
I've tried looking through old Stack Overflow posts and the Sidekiq documentation.
I've tried the above approach, which I hoped would work like a normal method call, but it does not.
I would have liked it to execute the method that was passed to the worker, so that I do not need to create workers for one-time cases and don't have to fall back to single-threaded computation.
I found a solution to this problem; there are probably better ones, but this worked for me.
Make a worker like this:
module Workers
  module Default
    class TesterWorker
      include Sidekiq::Worker
      sidekiq_options queue: :default, retry: false

      def perform(method_name, method)
        # NOTE: eval executes arbitrary code from the queue -- only do this
        # if you fully trust everything that can enqueue jobs.
        eval(method)      # defines the method from its string source
        send(method_name) # then invokes it by name
      end
    end
  end
end
After this you simply just have to write your code as a string like this:
method_name = 'tester'
spec = "def tester; puts 1; end"
Workers::Default::TesterWorker.perform_async(method_name, spec)
And this will execute, for example, the puts 1 action on the Sidekiq worker ^^
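A note on the design choice: since eval runs arbitrary strings pulled off the queue, a safer variation of the same idea, assuming the work can be expressed as a public method on a known class, is to pass the class and method names and resolve them at run time (a sketch; DispatchWorker and HeavyComputation are made-up names):

module Workers
  module Default
    class DispatchWorker
      include Sidekiq::Worker
      sidekiq_options queue: :default, retry: false

      # class_name and method_name are plain strings, so they serialize
      # cleanly to JSON; no code is eval'd.
      def perform(class_name, method_name, *args)
        Object.const_get(class_name).new.public_send(method_name, *args)
      end
    end
  end
end

# Usage (hypothetical HeavyComputation class):
# Workers::Default::DispatchWorker.perform_async("HeavyComputation", "lets_go", 42)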
How can I detect if a particular request is still active?
For example I have this request uuid:
# my_controller.rb
def my_action
  request.uuid # -> ABC1233
end
From another request, how can I know if the request with uuid ABC1233 is still working?
For the curious:
Following beanstalk directives I am running cron jobs using URL requests.
I don't want to start the next iteration if the previous one is still running. I cannot just rely on a start/end flag updated by the request, because the request sometimes dies before it finishes.
Using normal cron tasks I was managing this properly using the PID of the process.
But I don't think I can use PID any more because processes in a web server can be reused among different requests.
I don't think Rails (or more correctly, Rack) has support for this, since (to the best of my knowledge) each Rails request doesn't know about any other requests. You may try to get access to all running threads (and even processes), but such an implementation (if even possible) seems ugly to me.
How about implementing it yourself?
class ApplicationController < ActionController::Base
  before_filter :register_request
  after_filter :unregister_request

  def register_request
    # SET needs a value as well as a key; any placeholder will do
    $redis.set(request.uuid, 1)
  end

  def unregister_request
    # Redis has no "unset"; DEL removes the key
    $redis.del(request.uuid)
  end
end
You'll still need to figure out what to do with exceptions, since after_filters are skipped when one is raised (perhaps move this whole code to a middleware: in the before phase it writes the uuid to Redis, and in the after phase it removes the key). There are a bunch of other ways to achieve this, I'm sure, and obviously you can substitute Redis with your favorite persistence of choice.
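For reference, a minimal sketch of that middleware idea (assuming the same $redis global; the ensure block covers the exception case that after_filter misses, and the expiry guards against crashed processes leaking keys):

class RequestTracker
  def initialize(app)
    @app = app
  end

  def call(env)
    request = ActionDispatch::Request.new(env)
    # Mark the request as active, with an expiry as a safety net.
    $redis.set(request.uuid, 1, ex: 3600)
    @app.call(env)
  ensure
    $redis.del(request.uuid)
  end
end

# config/application.rb
# config.middleware.use RequestTracker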
Finally I recovered my previous approach based on PIDs.
I implemented something like this:
# The Main Process
module MyProcess
  def self.run_forked
    Process.fork do
      run
    end
    Process.wait
  end

  def self.run
    RedisClient.set Process.pid # store the PID
    # ... my long process code is here
  end

  def self.still_alive?(pid)
    # Signal 0 sends nothing; it only checks whether the process exists
    !!Process.kill(0, pid) rescue false
  end
end

# In one thread I can do:
MyProcess.run_forked

# In another thread I can do:
pid = RedisClient.get
MyProcess.still_alive?(pid) # -> true if the process is still running
I can call this code from a Rails request, and even if the request process is reused, the child one is not, so I can monitor the PID of the child process to see if the Ruby process is still running.
I have some methods that work with the API of a third-party app. Running them on a button click is no problem, but they should run as a permanent process.
How do I run them in the background? And how do I pause the cycle to do some other work with the same API, and then resume the cycle after that job is done?
I've read about ActiveJob, but it only deals with time-based scheduling...
UPDATE
I've tried to make it work with whenever and Sidekiq; the task runs, but it does nothing, and I can't figure out where to look for logs.
**schedule.rb**
every 1.minute do
  runner "UpdateWorker.perform_async"
end
**update_worker.rb**
class UpdateWorker
  include Sidekiq::Worker
  include CommonMods

  def perform
    logger.info "Things are happening."
    logger.debug "Here's some info: #{hash.inspect}"
    myMethod
  end

  def myMethod
    # ...
  end
end
It's not exactly what I need, but it's better than nothing. Can somebody explain this to me with examples?
UPDATE 2: After changing the code, it's absolutely necessary to restart Sidekiq. With this, the problem is solved, but I'm not sure that this is the best way.
You can define a job which enqueues itself:
class MyJob < ActiveJob::Base
  def perform(*args)
    # Do something unless some flag is raised
  ensure
    self.class.set(wait: 1.hour).perform_later(*args)
  end
end
There are several libraries to schedule jobs on a regular basis. For example, you could use sidekiq-cron to run a job every minute.
If you want to pause it for some time, you could set a flag somewhere (Redis/database/file) and skip execution as long as the flag is detected.
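As a sketch of that flag idea, combined with the self-enqueueing job above (assuming Redis; the jobs:paused key name and do_the_work method are made up):

class MyJob < ActiveJob::Base
  def perform(*args)
    # Skip the actual work while the pause flag is set, but keep
    # re-enqueueing so the cycle resumes once the flag is cleared.
    do_the_work(*args) unless $redis.get("jobs:paused")
  ensure
    self.class.set(wait: 1.minute).perform_later(*args)
  end
end

# Pause from a console or another job:
# $redis.set("jobs:paused", "1")
# Resume:
# $redis.del("jobs:paused")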
On a somewhat related note: don't use Sidetiq. It was really great, but it's not maintained anymore and is incompatible with current Sidekiq versions.
Just enqueue the next execution in the ensure section after the job completes, after checking some flag that indicates it should continue.
I also recommend adding some delay there, so that you don't end up in a tight loop when some error inside the job keeps recurring.
I don't know ActiveJob, but I can recommend the whenever gem to create cron (periodic background) jobs. Basically, you end up writing a rake task, like this:
desc 'send digest email'
task send_digest_email: :environment do
  # ... set options if any
  UserMailer.digest_email_update(options).deliver!
end
I have never added a rake task to itself, but for repeated processing you could do something like this (from answers to this specific question):
Rake::Task["send_digest_email"].execute
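To run such a task periodically with whenever, the corresponding schedule entry would look something like this (a sketch; send_digest_email is the task defined above):

# config/schedule.rb
every 1.day, at: '4:30 am' do
  rake "send_digest_email"
end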
I have a sidekiq worker that shouldn't take more than 30 seconds, but after a few days I'll find that the entire worker queue stops executing because all of the workers are locked up.
Here is my worker:
class MyWorker
  include Sidekiq::Worker
  include Sidekiq::Status::Worker

  sidekiq_options queue: :my_queue, retry: 5, timeout: 4.minutes

  sidekiq_retry_in do |count|
    5
  end

  sidekiq_retries_exhausted do |msg|
    store({message: "Gave up."})
  end

  def perform(id)
    begin
      Timeout::timeout(3.minutes) do
        got_lock = with_semaphore("lock_#{id}") do
          # DO WORK
        end
      end
    rescue ActiveRecord::RecordNotFound => e
      # Handle
    rescue Timeout::Error => e
      # Handle
      raise e
    end
  end

  def with_semaphore(name, &block)
    Semaphore.get(name, {stale_client_timeout: 1.minute}).lock(1, &block)
  end
end
And the Semaphore class we use (redis-semaphore gem):
class Semaphore
  def self.get(name, options = {})
    Redis::Semaphore.new(name.to_sym,
                         redis: Application.redis,
                         stale_client_timeout: options[:stale_client_timeout] || 1.hour)
  end
end
Basically, I'll stop the worker and it will state done: 10000 seconds, even though the worker should NEVER run that long.
Anyone have any ideas on how to fix this or what is causing it? The workers are running on EngineYard.
Edit: One additional comment. The # DO WORK step can fire off a PostgreSQL function. I have noticed some mentions of PG::TRDeadlockDetected: ERROR: deadlock detected in the logs. Would this cause the worker to never complete, even with a timeout set?
Given that you want to ensure unique job execution, I would attempt removing all locks and delegating job uniqueness control to a plugin like Sidekiq Unique Jobs.
In this case, even if Sidetiq enqueues the same job id twice, the plugin ensures it will be enqueued/processed a single time.
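For illustration, with sidekiq-unique-jobs the worker declaration might look roughly like this (the exact option name varies by gem version; older releases use unique:, newer ones lock:):

class MyWorker
  include Sidekiq::Worker
  # Keep at most one copy of this job per set of arguments
  # until it has actually run.
  sidekiq_options queue: :my_queue, retry: 5, lock: :until_executed

  def perform(id)
    # DO WORK -- no manual semaphore needed
  end
end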
You might also try the ActiveRecord with_lock mechanism: http://api.rubyonrails.org/classes/ActiveRecord/Locking/Pessimistic.html
I have had a similar problem before. To solve this problem, you should stop using Timeout.
As explained in this article, you should never use Timeout in a Sidekiq job. If you use Timeout, Sidekiq processes and threads can easily break.
Not only Ruby, but also Java has a similar problem. Stopping a thread from the outside is inherently dangerous, regardless of the language.
If you continue to have the same problem after deleting Timeout, check whether you are using threads carelessly in your code.
As Sidekiq's architecture is so sophisticated, in almost all cases, the source of the bug is outside of Sidekiq.
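If the timeout was there to guard a slow database call, one alternative (a sketch, not something from the answer above) is to let PostgreSQL enforce the limit itself via statement_timeout, which cancels the query server-side instead of killing the Ruby thread:

# Scope a server-side timeout to one block of work (PostgreSQL).
# A query exceeding it raises ActiveRecord::StatementInvalid,
# which can be rescued like any other error.
ActiveRecord::Base.transaction do
  ActiveRecord::Base.connection.execute("SET LOCAL statement_timeout = '180s'")
  # ... the slow query / function call ...
end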
I am running a Delayed Job worker. Whenever I invoke the foo method, the worker prints Hello.
class User
  def foo
    puts "Hello"
  end
  handle_asynchronously :foo
end
If I make some changes to the foo method, I have to restart the worker for the changes to take effect. In development mode this can become quite tiresome.
I am trying to find a way to reload the payload class (in this case, the User class) for every request. I tried monkey patching the DelayedJob library to invoke require_dependency before the payload method invocation.
module Delayed::Backend::Base
  def payload_object_with_reload
    if Rails.env.development? and @payload_object_with_reload.nil?
      require_dependency(File.join(Rails.root, "app", "models", "user.rb"))
    end
    @payload_object_with_reload ||= payload_object_without_reload
  end
  alias_method_chain :payload_object, :reload
end
This approach doesn't work, as the classes registered using require_dependency need to be reloaded before the invocation, and I haven't figured out how to do that. I spent some time reading the dispatcher code to figure out how Rails reloads classes for every request, but I wasn't able to locate the reload code.
Has anybody tried this before? How would you advise me to proceed? Or do you have any pointers for locating the Rails class reload code?
I managed to find a solution. I used the ActiveSupport::Dependencies.clear method to clear the loaded classes.
Add a file called config/initializers/delayed_job.rb
Delayed::Worker.backend = :active_record

if Rails.env.development?
  module Delayed::Backend::Base
    def payload_object_with_reload
      if @payload_object_with_reload.nil?
        ActiveSupport::Dependencies.clear
      end
      @payload_object_with_reload ||= payload_object_without_reload
    end
    alias_method_chain :payload_object, :reload
  end
end
As of version 4.0.6, DelayedJob reloads automatically if Rails.application.config.cache_classes is set to false:
In development mode, if you are using Rails 3.1+, your application code will automatically reload every 100 jobs or when the queue finishes. You no longer need to restart Delayed Job every time you update your code in development.
This looks like it solves your problem without the alias_method hackery:
https://github.com/Viximo/delayed_job-rails_reloader