In my current project I run some processing in the background using the Delayed Job gem. It works fine locally, but in production the jobs are inserted into the delayed_jobs table and the worker does not pick up some of them. Does anyone know how Delayed Job picks up jobs from the delayed_jobs table?
Below is the Delayed job code in my project:
Delayed::Job.enqueue(ProposalJob.new(current_user, @proposal, request.host, params[:proposal][:revision_notes], params[:proposal][:close_date]), :queue => 'publishing')
Delayed job configuration in my project:
Delayed::Worker.destroy_failed_jobs = false
Delayed::Worker.max_attempts = 3
Delayed::Worker.max_run_time = 1.hours
Delayed::Worker.read_ahead = 10
It's not possible for Delayed Job to pick up some jobs and not others.
Either it processes all jobs or it processes none. If it processes any job at all, your configuration is right.
There might be an issue due to the Delayed::Worker.destroy_failed_jobs = false setting. If any of your jobs fails three times (your max_attempts), it is not deleted from the delayed_jobs table. Because of this you might get the impression that those jobs were never processed.
Try debugging your delayed job code with puts.
You can also try https://github.com/resque/resque-scheduler, which provides a visual way to watch delayed jobs.
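One thing worth checking in production is whether the jobs that seem to be skipped are actually failing (with destroy_failed_jobs = false they stay in the table), and whether your workers are watching the 'publishing' queue at all, since a worker started for specific queues ignores the rest. A minimal console sketch, assuming the standard delayed_job_active_record columns:

# Jobs that have failed at least once; last_error holds the exception and backtrace.
Delayed::Job.where('last_error IS NOT NULL').pluck(:id, :queue, :attempts, :last_error)

# Jobs still waiting to run, grouped by queue, to confirm the 'publishing' queue is being drained.
Delayed::Job.where(failed_at: nil).group(:queue).count

If the 'publishing' jobs just sit there with no last_error, make sure at least one worker handles that queue (for example QUEUE=publishing rake jobs:work, or bin/delayed_job --queue=publishing start).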
Related
I have a Rails app that uses Rufus Scheduler combined with Delayed Job to execute background jobs. There are other jobs, but the one I'm having trouble with is scheduled in a controller using this code:
def create
  @harvest_plan = HarvestPlan.new(resource_params)
  @harvest_plan.start_date = Time.parse(resource_params[:start_date])
  if @harvest_plan.save
    ApplicationController.new.insert_in_messages_list(session, :success, 'Harvest plan created')
    schedule_harvest
    redirect_to farms_path
  end
end

private

def schedule_harvest
  Rufus::Scheduler.singleton.every "#{@harvest_plan.hours_between}h",
    :times => @harvest_plan.repetitions, :first_at => @harvest_plan.start_date do
    CreateHarvestFromPlanJob.perform_later
  end
end
The job is supposed to be scheduled according to the harvest plan model, which indicates how many hours must pass between jobs, when the first one is supposed to run, and how many repetitions must occur. Everything works perfectly except for the first job, which does run at the time specified with first_at but for some reason is scheduled twice, and Delayed Job then executes it twice. I tried using the mutex, blocking and overlap options, but it made no difference. After the first job (scheduled twice) everything works fine: the next jobs are scheduled on time and just once. I have just one delayed_job worker.
Why is this happening?
I am running Rails 4.2.4, Ruby 2.2.2 and Rufus 3.3.2. Since the error happens with both Passenger and WEBrick, I assume the app server has nothing to do with the problem.
Why is Rufus scheduling the first job twice?
Because of a bug you found: https://github.com/jmettraux/rufus-scheduler/issues/231
Thanks a lot!
Sometimes I get errors in the delayed_job worker:
NameError: uninitialized constant Notifiers::MessageNotifierJob
full backtrace https://gist.github.com/olegantonyan/eeca9d612f9a10864efe
Notifiers::MessageNotifierJob is defined in app/jobs/notifiers/message_notifier_job.rb
By sometimes I mean that this job may fail -> retry -> succeed. The same thing happens with other jobs that have a namespace; jobs without a namespace work just fine.
I tried adding app/jobs/ to the autoload paths explicitly, without any luck:
config.autoload_paths += Dir[ Rails.root.join('app', 'jobs', '**/') ]
The job itself looks like this
module Notifiers
  class MessageNotifierJob < BaseNotifierJob
    def perform(from, to, text)
      # some code to send slack notification
    end
  end
end
Solved. Neither Delayed Job nor the autoloader is to blame.
A week before adding these new jobs (like Notifiers::MessageNotifierJob) I had increased the number of delayed_job workers (using the capistrano3-delayed-job gem) from 1 to 4. But capistrano3-delayed-job hadn't killed the old delayed_job process; it only started the 4 new ones. So I ended up with one old worker process that knew nothing about my new job classes. Whenever that old process picked up the job, it failed. Then one of the new processes picked up the job and succeeded.
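One way to spot a stale worker like this is to check which processes are currently holding job locks, since delayed_job records the worker's host and pid in the locked_by column (a minimal console sketch, assuming the standard delayed_job_active_record schema):

# Worker identifiers (host/pid) currently locking jobs; compare them with the
# processes you expect to be running after the deploy.
Delayed::Job.where('locked_by IS NOT NULL').pluck(:locked_by).uniq

Any pid in that list that doesn't match a freshly started worker is a candidate for the kind of leftover process described above.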
While creating a campaign (model) in my project, I enqueue jobs for Backburner with the help of beanstalkd in Rails, as described below:
Backburner::Worker.enqueue(DeviceJob, [ad, campaign.id, "Ad"], :delay => add_job.to_i.minutes)
That works fine, but when I update the campaign, the previous jobs remain alongside the current ones. I want to remove all previous jobs for the current campaign.
In delayed_job, this can be done through its ActiveRecord table.
In Resque, we can use the Redis server.
But how can this be done in Backburner?
Thanks.
I would either use the beaneater API directly or use a tool like the beanstalkd_view gem, which provides a web-based management UI.
beanstalk = Beaneater::Pool.new(['localhost:11300'])
tube = beanstalk.tubes["some-tube"]
while tube.peek(:ready)
  job = tube.reserve
  job.delete
end
beanstalk.close
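If you only want to drop the jobs belonging to the campaign being updated rather than drain the whole tube, a hedged variation is to reserve each ready job, inspect its body, and delete only the matching ones. This assumes Backburner's default JSON body format ({"class": ..., "args": [...]}) and an illustrative tube name:

require 'json'

beanstalk = Beaneater::Pool.new(['localhost:11300'])
tube = beanstalk.tubes['some-tube']

while tube.peek(:ready)
  job  = tube.reserve
  args = JSON.parse(job.body)['args'] rescue []
  if args.include?(campaign.id)
    job.delete   # job belongs to the campaign being updated: drop it
  else
    job.bury     # park everything else out of the way for now
  end
end

# Return the parked jobs to the ready queue (note: this also kicks any jobs
# that were already buried for other reasons).
tube.kick(tube.stats.current_jobs_buried)
beanstalk.close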
I'm using rufus-scheduler to schedule jobs at a certain date through the following code:
job = Rufus::Scheduler.singleton.schedule_at @post.read_attribute(:parse_time).to_s do
end
I then save the id of that job in my post class
@post.update_attribute(:job_id, job.id)
However, if I try and access that job again by calling:
Rufus::Scheduler.singleton.job(@post.read_attribute(:job_id)).unschedule
I get an error because the job is nil. If I try and look at the jobs of the Scheduler by calling:
Rufus::Scheduler.singleton.jobs
I get a blank array. Can anyone explain why my jobs aren't being kept / tracked properly?
Here's my initialization file for the scheduler. Do I have to do anything to enable the singleton, or does it come with Rails automatically?
require 'rufus-scheduler'
# Create singleton rufus scheduler
s = Rufus::Scheduler.singleton
rufus-scheduler doesn't keep triggered jobs around.
Your job has probably triggered and is gone.
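Since an "at" job fires once and is then discarded from the scheduler's job list, it's also worth guarding the unschedule call in case the job has already run (a small sketch reusing the @post attributes from the question):

job = Rufus::Scheduler.singleton.job(@post.read_attribute(:job_id))
job.unschedule if job   # nil when the job already triggered and was discarded

If you need the scheduled time to survive process restarts or to stay cancellable later, you would have to re-schedule from persisted data (e.g. the parse_time column) rather than rely on the scheduler's in-memory job list.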
I am facing a very interesting problem. I have tested the Delayed Job gem 4 times. I suspect it is either a design problem of the gem or a bug. I use the command rake jobs:work to start a worker that runs the delayed jobs.
Once I create a LongTask record, I also create a delayed job which will change the attribute minutes_delayed to 2.
The gem works perfectly if I don't update the attributes. But once I edit the description, it no longer works properly: it does not execute the delayed job, yet the related delayed job record is removed from the database.
Interesting final result:
It seems to reference an object whose attributes are exactly the same; this picture was captured before the run time had passed.
This one was captured after all tests had gone through. You can see the delayed job record for test4 has been removed even though that delayed job didn't have any effect.
Terminal results (only 2 jobs are executed):
[Worker(host:Jasonteki-MacBook-Air.local pid:1726)] Starting job worker
[Worker(host:Jasonteki-MacBook-Air.local pid:1726)] LongTask#set_delay_time_without_delay completed after 0.0343
[Worker(host:Jasonteki-MacBook-Air.local pid:1726)] 1 jobs processed at 16.6270 j/s, 0 failed ...
[Worker(host:Jasonteki-MacBook-Air.local pid:1726)] LongTask#set_delay_time_without_delay completed after 0.0105
[Worker(host:Jasonteki-MacBook-Air.local pid:1726)] 1 jobs processed at 51.4774 j/s, 0 failed ...
Code in model:
def set_delay_time(time)
  self.minutes_delayed = time
  # very important for this, otherwise cannot write the change into the database
  self.save
end
handle_asynchronously :set_delay_time, :run_at => Proc.new { 2.minutes.from_now }
Code in controller:
def create
  @long_task = LongTask.new(params[:long_task])
  respond_to do |format|
    if @long_task.save
      @long_task.set_delay_time(2)
Without seeing your code, it's impossible to tell for sure, but it's likely that both of your delayed jobs are working on serialized copies of your object, rather than reloading them from the database.
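If that is what's happening, a hedged workaround is to reload the record inside the delayed method so the worker writes against the current row instead of the copy captured at enqueue time (a minimal sketch, not necessarily the only fix):

def set_delay_time(time)
  reload   # re-read the row so edits made after enqueueing (e.g. the description) aren't lost
  self.minutes_delayed = time
  save
end
handle_asynchronously :set_delay_time, :run_at => Proc.new { 2.minutes.from_now }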