Rails rufus-scheduler jobs returning nil - ruby-on-rails

I'm using rufus-scheduler to schedule jobs at a certain date through the following code:
job = Rufus::Scheduler.singleton.schedule_at @post.read_attribute(:parse_time).to_s do
end
I then save the id of that job in my post class
@post.update_attribute(:job_id, job.id)
However, if I try and access that job again by calling:
Rufus::Scheduler.singleton.job(@post.read_attribute(:job_id)).unschedule
I get an error because the job is nil. If I try and look at the jobs of the Scheduler by calling:
Rufus::Scheduler.singleton.jobs
I get an empty array. Can anyone explain why my jobs aren't being kept / tracked properly?
Here's my initialization file for the scheduler. Do I have to do anything to enable the singleton, or does it come with Rails automatically?
require 'rufus-scheduler'
# Create singleton rufus scheduler
s = Rufus::Scheduler.singleton

rufus-scheduler doesn't keep triggered jobs around.
Your job has probably triggered and is gone.
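If the unschedule call can race with the trigger time, guard against the job already being gone. A minimal sketch, reusing @post and the singleton scheduler from the question (nothing is assumed beyond those):
job = Rufus::Scheduler.singleton.job(@post.read_attribute(:job_id))
# job is nil once the at-job has triggered and been discarded by the scheduler
job.unschedule if job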

Related

ActiveJob does not execute the job asynchronously

I am trying to implement an API endpoint that would queue a request and return immediately.
I am using the gem https://rubygems.org/gems/activejob/versions/5.2.0 (I am on an old version for historical reasons).
I have defined a job that looks something like:
class Service::ExportBooks::Job < ActiveJob::Base
  def perform
    ## ... Do the job
  rescue StandardError
    binding.pry
    raise
  end
end
In the controller, I am calling:
Service::ExportBooks::Job.perform_later
The job gets called synchronously, and the controller even receives any errors raised by the job.
I've also tried other options such as:
job = Service::ExportBooks::Job.new
job.enqueue(wait: 5.seconds)
but it does the same thing: the job is not enqueued, it is executed immediately.
UPDATE:
It looks like the method Resque.inline? returns true and so the execution is inline and not async. How can I make sure that it's async? I tried to set Resque.inline = false manually and the job was queued but it wasn't executed...
I have started a worker using the command:
QUEUE=* PIDFILE=./tmp/resque.pid bundle exec rake environment resque:work
Two things to do here:
Make sure Resque.inline = false.
Start up the Resque workers in another process.
This will get the job enqueued and run on the worker process.
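As a hedged sketch of what that configuration can look like (the file paths and the per-environment toggle are assumptions, not taken from the question):
# config/application.rb -- hand ActiveJob work to Resque
config.active_job.queue_adapter = :resque

# config/initializers/resque.rb -- only run jobs inline in the test environment
Resque.inline = Rails.env.test?
With that in place, perform_later pushes onto a Resque queue and the rake resque:work process picks the job up.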

Why is Rufus scheduling the first job twice?

I have a Rails app that uses rufus-scheduler combined with Delayed Job to execute background jobs. There are other jobs, but the one I'm having trouble with is scheduled in a controller using this code:
def create
  @harvest_plan = HarvestPlan.new(resource_params)
  @harvest_plan.start_date = Time.parse(resource_params[:start_date])
  if @harvest_plan.save
    ApplicationController.new.insert_in_messages_list(session, :success, 'Harvest plan created')
    schedule_harvest
    redirect_to farms_path
  end
end

private

def schedule_harvest
  Rufus::Scheduler.singleton.every "#{@harvest_plan.hours_between}h",
    :times => @harvest_plan.repetitions, :first_at => @harvest_plan.start_date do
    CreateHarvestFromPlanJob.perform_later
  end
end
The job is supposed to be scheduled according to the harvest plan model, which indicates how many hours must pass between jobs, when the first one should run, and how many repetitions must occur. Everything works perfectly except for the first job: it does run at the time specified with first_at, but for some reason it is scheduled twice, and Delayed Job then executes it twice. I tried using the mutex, blocking and overlap options, but that made no difference. After the first job (scheduled twice) everything works fine: the next jobs are scheduled on time and just once. I have just one Delayed Job worker.
Why is this happening?
I am running Rails 4.2.4, Ruby 2.2.2 and Rufus 3.3.2. Since the error happens with both Passenger and WEBrick, I assume the app server has nothing to do with the problem.
Why is Rufus scheduling the first job twice?
Because of a bug you found: https://github.com/jmettraux/rufus-scheduler/issues/231
Thanks a lot!

DelayedJob sometimes cannot load job class with a namespace

Sometimes I get errors in the delayed_job worker:
NameError: uninitialized constant Notifiers::MessageNotifierJob
full backtrace https://gist.github.com/olegantonyan/eeca9d612f9a10864efe
Notifiers::MessageNotifierJob is defined in app/jobs/notifiers/message_notifier_job.rb
By "sometimes" I mean that this job may fail -> retry -> succeed. The same thing happens with other jobs that have a namespace; jobs without a namespace work just fine.
I tried to add app/jobs/ to autoload paths explicitly without any luck
config.autoload_paths += Dir[ Rails.root.join('app', 'jobs', '**/') ]
The job itself looks like this
module Notifiers
  class MessageNotifierJob < BaseNotifierJob
    def perform(from, to, text)
      # some code to send slack notification
    end
  end
end
Solved. Neither Delayed Job nor the autoloader is to blame.
A week before adding these new jobs (like Notifiers::MessageNotifierJob) I increased the number of delayed_job workers (using the capistrano3-delayed-job gem) from 1 to 4. But capistrano3-delayed-job hadn't killed the old delayed_job process; it only started the 4 new ones. So I ended up with 1 old worker that knew nothing about my new job classes. Whenever this old process picked up the job it failed; then one of the new processes picked it up and succeeded.
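For anyone debugging a similar deploy, here is a hedged sketch against delayed_job's ActiveRecord backend that lists which worker processes are currently holding locks; stale entries point to processes that survived the deploy and are running old code:
# Worker names (host/pid) currently locking jobs in the delayed_jobs table
Delayed::Job.where.not(locked_by: nil).distinct.pluck(:locked_by)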

How can I clear enqueued jobs in Backburner with beanstalkd in Rails when updating an object

While creating a campaign (model) in my project, I enqueue jobs for Backburner with the help of beanstalkd in Rails, as described below:
Backburner::Worker.enqueue(DeviceJob, [ad, campaign.id, "Ad"], :delay => add_job.to_i.minutes)
That is working fine, but when I update a campaign the previous jobs remain alongside the current ones, and I want to remove all previous jobs for the current campaign.
In delayed_job, this can be done through the ActiveRecord table.
In Resque, we can use the Redis server.
But how can it be done in Backburner?
Thanks.
I would either use the beaneater API directly or use a tool like the beanstalkd_view gem, which provides a web-based management UI.
# Connect to beanstalkd and drain every ready job from the tube
beanstalk = Beaneater::Pool.new(['localhost:11300'])
tube = beanstalk.tubes["some-tube"]
while tube.peek(:ready)
  job = tube.reserve
  job.delete
end
beanstalk.close
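An alternative sketch, assuming your Backburner version accepts a :queue option on enqueue (the "campaign-..." tube name is purely illustrative): give each campaign its own tube, so an update only has to drain that one tube. Note that Backburner prefixes tube names with its namespace ("backburner.worker.queue." by default), so look the tube up under the expanded name when draining it with beaneater.
# Hypothetical per-campaign tube name; everything else mirrors the enqueue above
Backburner::Worker.enqueue(DeviceJob, [ad, campaign.id, "Ad"],
  :queue => "campaign-#{campaign.id}",
  :delay => add_job.to_i.minutes)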

Delayed Job just not working

I am building an application where at some point I need to sync a bunch of data from Facebook with my database, so I am attempting to use Delayed Job to push this into the background. Here is what part of my Delayed Job class looks like:
class FbSyncJob < Struct.new(:user_id)
  require 'RsvpHelper'

  def perform
    user = User.find(user_id)
    FbSyncJob.sync_user(user)
  end

  def FbSyncJob.sync_user(user)
    friends = HTTParty.get(
      "https://graph.facebook.com/me/friends?access_token=#{user.fb['token']}"
    )
    friends_list = friends["data"].map { |friend| friend["id"] }
    user.fb["friends"] = friends_list
    user.fb["sync"]["friends"] = Time.now
    user.save!
    FbSyncJob.friend_crawl(user)
  end
end
With the RsvpHelper class living in lib/RsvpHelper.rb. So at some point in my application I call Delayed::Job.enqueue(FbSyncJob.new(user.id)) with a known valid user. The worker I set up even tells me that the job has been completed successfully:
1 jobs processed at 37.1777 j/s, 0 failed
However, when I check the user in the database, his friends list has not been updated. Am I doing something wrong? Thanks so much for the help, this has been driving me crazy.
Delayed::Job.enqueue will put a record in the delayed_jobs table, but you need to run a separate process to execute the job code (the perform method).
Typically in development this would be bundle exec rake jobs:work (note: you must restart this rake task any time you make code changes; it will not auto-load changes).
see https://github.com/collectiveidea/delayed_job#running-jobs
I usually put the following into my Delayed Job configuration. In development this never puts a record in the delayed_jobs table and runs all background code synchronously, and by default Rails will reload changes to your code:
Delayed::Worker.delay_jobs = !(Rails.env.test? || Rails.env.development?)
https://github.com/collectiveidea/delayed_job#gory-details (see config/initializers/delayed_job_config.rb example section)
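A sketch of what that initializer can look like; the extra worker settings are assumptions shown only as common companions, not something the answer requires:
# config/initializers/delayed_job_config.rb
Delayed::Worker.delay_jobs = !(Rails.env.test? || Rails.env.development?)
Delayed::Worker.max_attempts = 3          # assumption: give up after 3 tries
Delayed::Worker.max_run_time = 10.minutes # assumption: stop runaway jobs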
