I have some methods that work with the API of a third-party app. Calling them on a button click is no problem, but this should be a permanent process.
How do I run them in the background? And how can I pause the cycle to do some other work with the same API, then resume the cycle after that job is done?
So far I have read about ActiveJob, but it only has time-based scheduling...
UPDATE
I've tried to make it with whenever and sidekiq; the task runs, but it does nothing. I can't figure out where to look for logs.
**schedule.rb**
every 1.minute do
  runner "UpdateWorker.perform_async"
end
**update_worker.rb**
class UpdateWorker
  include Sidekiq::Worker
  include CommonMods

  def perform
    logger.info "Things are happening."
    logger.debug "Here's some info: #{hash.inspect}"
    myMethod
  end

  def myMethod
    ....
    ....
    ....
  end
end
It's not exactly what I need, but it's better than nothing. Can somebody explain this to me with examples?
UPDATE 2
After changing the code it is absolutely necessary to restart sidekiq. With that, the problem is solved, but I'm not sure this is the best way.
You can define a job which enqueues itself:
class MyJob < ActiveJob::Base
  def perform(*args)
    # Do something unless some flag is raised
  ensure
    self.class.set(wait: 1.hour).perform_later(*args)
  end
end
There are several libraries to schedule jobs on a regular basis. For example, you could use sidekiq-cron to run a job every minute.
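A minimal sketch of the sidekiq-cron approach, assuming you want to keep the UpdateWorker from the question (the initializer path and job name are just examples):

# config/initializers/sidekiq_cron.rb
Sidekiq::Cron::Job.create(
  name:  'Update worker - every minute',
  cron:  '* * * * *',          # standard cron syntax
  class: 'UpdateWorker'        # the Sidekiq worker class to enqueue
)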
If you want to pause it for some time, you could set a flag somewhere (Redis/database/file) and skip execution as long as that flag is set.
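Building on the job above, a rough sketch of that pause flag, assuming it lives in Rails.cache; the key name my_job_paused and the do_api_work method are made up:

class MyJob < ActiveJob::Base
  def perform(*args)
    return if paused?          # skip the API work while the pause flag is set
    do_api_work(*args)         # hypothetical method containing the real logic
  ensure
    self.class.set(wait: 1.hour).perform_later(*args)  # keep re-enqueuing even while paused
  end

  private

  def paused?
    Rails.cache.read('my_job_paused').present?
  end
end

Pausing is then just Rails.cache.write('my_job_paused', true) from wherever you need exclusive access to the API, and deleting the key resumes the cycle.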
On a somewhat related note: don't use sidetiq. It was really great, but it's not maintained anymore and is incompatible with current Sidekiq versions.
Just enqueue the next execution in an ensure section after the job completes, after checking some flag that indicates whether it should continue. I also recommend adding some delay there so that you don't end up in a tight loop if an error occurs inside the job.
I don't know ActiveJob, but I can recommend the whenever gem to create cron (periodic background) jobs. Basically you end up writing a rake task, like this:
desc 'send digest email'
task send_digest_email: :environment do
  # ... set options if any
  UserMailer.digest_email_update(options).deliver!
end
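For completeness, the matching whenever entry could look something like this (the schedule itself is just an example):

# config/schedule.rb
every 1.day, at: '4:30 am' do
  rake 'send_digest_email'
end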
I have never made a rake task re-run itself, but for repeated processing you could do something like this (from the answers to this specific question):
Rake::Task["send_digest_email"].execute
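I haven't tried this myself, but a rough sketch of a looping wrapper task (the task name and the sleep interval are made up) might look like:

desc 'repeatedly run the digest email task'
task repeat_digest_email: :environment do
  loop do
    Rake::Task['send_digest_email'].execute  # execute re-runs the task even if it was already invoked
    sleep 60                                 # avoid a tight loop between runs
  end
end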
I am creating an automatic raffling system. I have a draw button that will run a draw function to select a winner or winners and it sends an email to the admin.
I want this to be a completely automated system so that the admin only has to create the raffles and they receive an email with who won after the draw date has passed. My raffles have a draw date associated with them and once that passes, I need the function to be called.
How do I tell the application to check the time/date to see if any of the raffle draw times have passed? I have looked everywhere and cannot seem to find a way to do it.
You could use the whenever gem to define a job that runs hourly (or however often you want), checks the draw dates, and runs the draw for any that have passed.
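A rough whenever sketch of that idea; Raffle.draw_pending! is a hypothetical class method that draws every raffle whose draw date has passed and emails the admin:

# config/schedule.rb
every 1.hour do
  runner "Raffle.draw_pending!"  # hypothetical method containing the draw logic
end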
I use Clockwork in my Rails apps whenever I need to schedule things. Simply set it up to run a job when you want, and do your logic within that job to work out which raffles need to be processed. Example:
Clockwork config
every(1.day, 'Raffle::CheckJob', at: '01:00')
Job
Raffle.not_complete.find_each(batch_size: 10) do |raffle|
  if raffle.has_ended?
    # logic
  end
end
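For reference, a minimal clock.rb sketch wiring the two together; the enqueue call inside the handler is an assumption about how Raffle::CheckJob is actually run:

require 'clockwork'
require './config/boot'
require './config/environment'

module Clockwork
  every(1.day, 'Raffle::CheckJob', at: '01:00') do
    Raffle::CheckJob.perform_later  # assumes an ActiveJob job class
  end
end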
You should write a rake task and add its execution to your crontab on the server. You can use the whenever gem to simplify the crontab scripting and update it automatically on each deploy (whenever-capistrano/whenever-mina). An example rake task:
namespace :raffle do
  task check: :environment do
    Raffle.get_winners.each do |w|
      Mailer.send_win_mail(w).deliver_later
    end
  end
end
deliver_later runs in the background through whichever queue adapter you use (DelayedJob/Resque/Backburner, etc.)
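If you go this route, make sure ActiveJob points at the backend you actually want, e.g. (assuming delayed_job here):

# config/application.rb
config.active_job.queue_adapter = :delayed_job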
I encountered an issue with Sidekiq: I want to set a timeout for jobs, so that when a job's processing time exceeds the timeout, the job is stopped.
I have searched for how to set a global timeout config in the sidekiq.yml file, but I want to set separate timeouts for different jobs, meaning a particular worker class should have its own timeout config.
Can you help me? Thanks so much.
There's no approved way to do this. You cannot stop a thread safely while it is executing. You need to change your job to check periodically if it should stop.
You can set network timeouts on any 3rd party calls you are making so that they time out.
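One common cooperative pattern is to store a cancel flag in Redis keyed by the job id and check it between units of work. A sketch (the worker name and process_one are made up; only Sidekiq.redis and jid come from Sidekiq itself):

class LongRunningWorker
  include Sidekiq::Worker

  def perform(ids)
    ids.each do |id|
      return if cancelled?  # bail out cooperatively between units of work
      process_one(id)       # hypothetical per-item work
    end
  end

  def cancelled?
    # redis-rb >= 4.2; older versions use exists and return an integer
    Sidekiq.redis { |conn| conn.exists?("cancelled-#{jid}") }
  end

  # Call LongRunningWorker.cancel!(jid) from elsewhere to request a stop.
  def self.cancel!(jid)
    Sidekiq.redis { |conn| conn.setex("cancelled-#{jid}", 86_400, 1) }
  end
end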
You can wrap your job code inside a Timeout block like the one below:
Timeout::timeout(2.hours) do
  # ... do possibly long-running task ...
end
The job will fail automatically after 2 hours.
This is the same method as yassen suggested, but more concrete.
class MyCustomWorker
  include Sidekiq::Worker

  def perform
    begin
      Timeout::timeout(30.minutes) do # set timeout to 30 minutes
        perform_job
      end
    rescue Timeout::Error
      Rails.logger.error "timeout reached for worker"
    end
  end

  def perform_job
    # worker logic here
  end
end
My Resque job is not running in the background; instead it takes up my processor and then effectively times out.
I have in my controller:
after_save :handle_file

def handle_test
  Resque.enqueue UnpackFileOnS3, parent.id
end
It hits this mark, and then the entire app waits for it to set up and upload the files as prescribed inside my job. Then it predictably times out because the upload takes a while.
This occurs in my console as well. If I run:
Resque.enqueue UnpackFileOnS3, 4
Then instead of enqueuing it, it locks up my console while it tries to run the entire job. I think that normally the console would just enqueue it to a worker and Redis.
Why isn't this actually happening in the background? I assume that if it were, the timeouts would not occur.
My guess is that you are running Resque in inline mode. In this mode queuing is disabled. Check your configs for this kind of code:
Resque.inline = ENV['RAILS_ENV'] == "cucumber"
# or whatever; the important part is the inline option
I have an app with both the sidekiq and delayed_job gems installed. When I trigger handle_asynchronously in ActiveRecord models, it appears to be handled by sidekiq, while I would like it to go through delayed_job.
Is there a way to deactivate sidekiq for a specific model?
UPDATE:
Sidekiq now provides ways to either disable its delay module completely or alias it as sidekiq_delay. Please check this to see how to do it. https://github.com/mperham/sidekiq/wiki/Delayed-Extensions#disabling-extensions
For older versions of sidekiq:
I use this monkey patch so that calling .sidekiq_delay() goes to sidekiq and .delay() goes to DelayedJob. According to the answer by Viren, I think this may also solve your problem.
The patch is less complex (just a bunch of aliases) and gives you the power to consciously decide which delay you are actually calling.
As I mention in the comment, in order to get it working you have to redefine/basically monkey patch the handle_asynchronously method, something like this.
Anywhere you like (but make sure it is loaded), e.g. in config/initializers/patch.rb, the code looks like this:
module Patch
  def handle_asynchronously(method, opts = {})
    aliased_method, punctuation = method.to_s.sub(/([?!=])$/, ''), $1
    with_method, without_method = "#{aliased_method}_with_delay#{punctuation}", "#{aliased_method}_without_delay#{punctuation}"
    define_method(with_method) do |*args|
      curr_opts = opts.clone
      curr_opts.each_key do |key|
        if (val = curr_opts[key]).is_a?(Proc)
          curr_opts[key] = if val.arity == 1
            val.call(self)
          else
            val.call
          end
        end
      end
      ## Replace this with other syntax
      # delay(curr_opts).__send__(without_method, *args)
      __delay__(curr_opts).__send__(without_method, *args)
    end
    alias_method_chain method, :delay
  end
end

Module.send(:include, Patch)
And I believe everything else will then work the way it should :)
Reason:
DelayedJob includes its delay method on Object, while Sidekiq includes its delay method on ActiveRecord.
Hence, when the class tries to invoke delay, it looks up its ancestors (including the eigenclass) and finds the method defined/included in ActiveRecord::Base first, which is sidekiq's delay.
__delay__ works because the alias keeps a copy of the existing method, i.e. DelayedJob's delay (the one included into Object); hence, when you invoke __delay__, you invoke DelayedJob's delay method.
Note:
Although the solution is a bit of a patch, it works. Keep in mind that every direct .delay invocation calls sidekiq's delay, not DelayedJob's; to invoke DelayedJob's delay you always have to call it as __delay__.
Suggestion:
Monkey patching is simply bad practice in my personal view, and I would rather not use two entirely different background-processing libraries in a single application to achieve the same task. If the task is to process things in the background, why can't it be done with a single library, either delayed_job or sidekiq (why do you need both of them)?
So, to keep your background processing simple and maintainable going forward, I sincerely advise you to pick just one of the two libraries; I feel that would be the valid answer to your question, instead of monkey patching and doing other crazy stuff.
Hope this helps.
I have a Rails 3 application and have looked around the internet for daemons, but didn't find the right one for me.
I want a daemon which permanently fetches data (exchange rates) from a web resource and saves it to the database,
like:
while true
  Model.update_attribute(:course, http::get.new("asdasd").response)
end
I've only seen cron-like jobs, but they only run at specific times... I want it to run permanently, each iteration starting as soon as the previous query finishes...
Do you understand what I mean?
The light-daemon gem I wrote should work very well in your case.
http://rubygems.org/gems/light-daemon
You can write your code in a class which has a perform method, use a queue system like this, and at application startup enqueue the job with Resque.enqueue(Updater).
Obviously the job won't end until the application is stopped. Personally I don't like that, but if this is the requirement...
For this reason, if you need to execute other tasks, you should configure more than one worker process and optionally more than one queue.
If you can change your requirements and find a trigger for the update mechanism, the same approach still works; you only have to remove the while true loop.
Sample class needed:
class Updater
  @queue = :endless_queue

  def self.perform
    while true
      Model.update_attribute(:course, http::get.new("asdasd").response)
    end
  end
end
Finally I found a cool solution for my problem:
I use the god gem -> http://god.rubyforge.org/
with a bash script (link) for starting/stopping a simple rake task (with an infinite loop in it).
Now it works fine, and I even have some monitoring with god running that ensures the rake task keeps running.
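For reference, a minimal god config for that kind of setup could look roughly like this; the paths, watch name and rake task name are placeholders:

# updater.god
God.watch do |w|
  w.name  = "course-updater"
  w.dir   = "/path/to/app/current"
  w.start = "bundle exec rake courses:update RAILS_ENV=production"
  w.keepalive  # restart the process if it dies
end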