delayed_job, jobs randomly disappear - ruby-on-rails

We have a model that generates reports.
Each report can be very complicated and may take a long time to build. Therefore, we are using delayed_job to do this in the background.
Everything works on my local machine, but in our production environment jobs randomly disappear. They do not even show up in delayed_job.log as success or failed: the jobs are created, but sometimes they are deleted without throwing any errors or doing the work.
This is the method in our model:
def generate_html
  # Instantiate a controller solely to get access to render_to_string
  ac = DelayedJobsController.new
  tmp_html = ac.render_to_string(partial: self.partial_path, object: self)
  self.update_attributes(html: tmp_html, done: true)
end
# Make calls to generate_html run in the background via delayed_job
handle_asynchronously :generate_html
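With handle_asynchronously in place, a plain call enqueues a Delayed::Job record instead of rendering inline. A minimal usage sketch (the Report model name and the id are hypothetical):

report = Report.find(42)
report.generate_html # returns immediately; a worker renders the HTML later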

After a lot of work we found the problem.
When we ran crontab -l and ps aux we could see that two instances of delayed_job were running. After we killed the older of the two, everything worked as it should.
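The same condition can also be spotted from a Rails console. A sketch, assuming Rails 3+ and the standard delayed_job schema, which lists the workers currently holding job locks (delayed_job writes each worker's host and pid into locked_by):

Delayed::Job.where("locked_by IS NOT NULL").map(&:locked_by).uniq
# two distinct "host:... pid:..." strings here means two workers are competing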

Related

Why is Rufus scheduling the first job twice?

I have a Rails app that uses Rufus Scheduler combined with Delayed::Job to execute background jobs. There are other jobs, but the one I'm having trouble with is scheduled in a controller using this code:
def create
  @harvest_plan = HarvestPlan.new(resource_params)
  @harvest_plan.start_date = Time.parse(resource_params[:start_date])
  if @harvest_plan.save
    ApplicationController.new.insert_in_messages_list(session, :success, 'Harvest plan created')
    schedule_harvest
    redirect_to farms_path
  end
end

private

def schedule_harvest
  Rufus::Scheduler.singleton.every "#{@harvest_plan.hours_between}h",
    :times => @harvest_plan.repetitions, :first_at => @harvest_plan.start_date do
    CreateHarvestFromPlanJob.perform_later
  end
end
The job is supposed to be scheduled according to the harvest plan model, which indicates how many hours must pass between jobs, when the first one should run, and how many repetitions must occur. Everything works perfectly except for the first job: it fires at the time specified by first_at, but for some reason it is scheduled twice, and delayed_job then executes it twice. I tried the mutex, blocking and overlap options, but they made no difference. After the first (doubled) job everything works fine; the following jobs are scheduled on time and exactly once. I have just one delayed_job worker.
Why is this happening?
I am running Rails 4.2.4, Ruby 2.2.2 and Rufus 3.3.2. Since the error happens with both Passenger and WEBrick, I assume the app server has nothing to do with the problem.
Because of a bug you found: https://github.com/jmettraux/rufus-scheduler/issues/231
Thanks a lot!
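Until that fix shipped, one defensive option was to de-duplicate the first firing on the application side. This is only a sketch of that idea, not code from the issue; last_enqueued_at is a name invented here:

def schedule_harvest
  last_enqueued_at = nil # captured by the scheduler block below
  Rufus::Scheduler.singleton.every "#{@harvest_plan.hours_between}h",
    :times => @harvest_plan.repetitions, :first_at => @harvest_plan.start_date do
    now = Time.now
    # Skip the spurious second firing of the first run (issue #231)
    next if last_enqueued_at && (now - last_enqueued_at) < 60
    last_enqueued_at = now
    CreateHarvestFromPlanJob.perform_later
  end
end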

Resque job not actually backgrounding

It is instead taking up my processor, and then effectively timing out.
I have this in my model:
after_save :handle_file

def handle_file
  Resque.enqueue UnpackFileOnS3, parent.id
end
It hits this point, and then the entire app waits while the job sets up and uploads the files as prescribed. Then it predictably times out, because the upload takes a while.
This occurs in my console as well. If I run:
Resque.enqueue UnpackFileOnS3, 4
then instead of enqueueing it, it locks up my console while it tries to run the entire job. Normally the console would just enqueue it to Redis for a worker to pick up.
Why isn't this actually happening in the background? If it were, I assume the timeouts would not occur.
My guess is that you are running Resque in inline mode. In this mode queueing is disabled and jobs run synchronously. Check your configs for this kind of code:
Resque.inline = ENV['RAILS_ENV'] == "cucumber"
# or whatever; the important part is the inline option
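One common shape is to make the behaviour explicit per environment in an initializer. A minimal sketch (the file path and environment list are assumptions):

# config/initializers/resque.rb
# Run jobs synchronously only in test-like environments; everywhere else
# Resque.enqueue pushes the job to Redis and returns immediately.
Resque.inline = Rails.env.test? || Rails.env.cucumber?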

ruby on rails - delayed jobs with different process ids updating the same column

I discovered the problem: it had nothing to do with locks.
It seems that in production I had a rake jobs:work process running permanently, started I don't know how! So all the jobs processed by that process were doing their work somewhere else.
And that somewhere else was not my database, so I just killed it and everything started to work fine.
Sorry for wasting your time!
Sorry, I forgot to mention that I'm working with Rails 2.3.8!
I have asynchronous updates to the same row, same column, from different background processes. I'm using the delayed_job gem.
What I want to do is:
ActiveRecord::Base.connection.execute(
  "UPDATE table_name SET column = column + #{updated_number} WHERE id = #{self.id}"
)
My database is MySQL and the table I write to is InnoDB.
The problem is that running that query in different delayed_job workers causes some increments to be lost. Please note the column = column + #{updated_number}: I want to increment the current value in the table!
Using Rails' locking doesn't work, because each delayed_job runs in a different process; I was thinking more of table-level locks so that updates happen safely.
One more thing about lock!: in development I ran rake jobs:work three times, confirmed in the delayed_job table that three different processes had locked three jobs, and the code worked perfectly.
But when I put that code in production it doesn't work; increments are still being lost.
Use pessimistic locking:
your_object.with_lock do
  your_object.column += updated_number
  your_object.save!
end
This will make sure the updates are serialized by the database: with_lock reloads the row with SELECT ... FOR UPDATE inside a transaction.
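If the column really is a plain counter, ActiveRecord's update_counters is another way to get the same atomic increment without managing locks. A sketch, with ModelName and column standing in for the real names:

# Generates a single atomic statement:
#   UPDATE model_names SET column = column + <n> WHERE id = <id>
ModelName.update_counters(self.id, :column => updated_number)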

Permanent daemon for querying a web resource

I have a Rails 3 application and have looked around the internet for daemons, but didn't find the right one for me.
I want a daemon which permanently fetches data (exchange rates) from a web resource and saves it to the database,
like:
while true
  Model.first.update_attribute(:course, Net::HTTP.get(URI("http://asdasd")))
end
I've only seen cron-like jobs, but those only run at specific intervals... I want it to run permanently, with each fetch starting as soon as the previous query ends.
Do you understand what I mean?
The light-daemon gem, which I wrote, should work very well in your case.
http://rubygems.org/gems/light-daemon
You can write your code in a class with a perform method, use a queue system like Resque, and at application startup enqueue the job with Resque.enqueue(Updater).
Obviously the job won't end until the application is stopped. Personally I don't like that, but if this is the requirement...
For this reason, if you need to execute other tasks, you should configure more than one worker process and, optionally, more than one queue.
If you can change your requirements and find a trigger for the update mechanism, the same approach still works; you only have to remove the while true loop.
A sample class:
require 'net/http'

class Updater
  # Resque reads the target queue name from this variable
  @queue = :endless_queue

  def self.perform
    while true
      Model.first.update_attribute(:course, Net::HTTP.get(URI("http://asdasd"))) # whichever record holds the rate
    end
  end
end
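Enqueuing it once at startup could then be done from an initializer; the path below is an assumption, not part of the answer:

# config/initializers/enqueue_updater.rb
Resque.enqueue(Updater)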
Finally I found a nice solution for my problem:
I use the god gem -> http://god.rubyforge.org/
with a bash script (link) for starting/stopping a simple rake task (with an infinite loop in it).
Now it works fine, and I even have some monitoring with god that ensures the rake task keeps running.
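For reference, a minimal god watch for such a rake task might look like this sketch; the watch name and task are invented for illustration:

# updater.god
God.watch do |w|
  w.name  = "course-updater"               # hypothetical watch name
  w.start = "bundle exec rake updater:run" # hypothetical rake task holding the loop
  w.keepalive                              # restart the process if it dies
end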

delayed_job - performs out-of-date code?

I'm using delayed_job (I tried both tobi's and collective_idea's versions) on site5.com shared hosting, with Passenger as the Rails environment.
I managed to get jobs running.
However, the plugin seems to ignore any changes to a job class's source code after the first run.
I have restarted the server on every change (touch tmp/restart.txt), but it still ignores them.
Example:
file: lib/xx_job.rb
class XxJob
  def perform
    Rails.logger.info "XX START"
    TempTest.delete_all
    i = 0
    10.times {
      i += 1
      TempTest.create(:name => "XXX")
      sleep(1)
    }
    Rails.logger.info "XX END"
  end
end
In a simple controller I call:
Delayed::Job.enqueue(XxJob.new)
Conclusions I have gathered:
If I rename xx_job.rb to xx_job1.rb - error in the controller.
If I rename class XxJob to class XxJob1 - error in the controller.
If I delete all of the perform method's content - the old code is still executed.
A new .rb file with a class and perform, enqueuing that class - works perfectly.
If I change something in that new file's perform and run the job again - the old code is executed.
Between every change I restarted the server.
It seems like Passenger or something else is caching the classes.
How can I delete this cache? Where is it stored on the server? (I hope I have access to it from the shared hosting.)
Thanks!
If you run delayed_job workers daemonized, you need to restart them to reload the code. Also keep in mind that each worker loads its own instance of Rails.
Eventually I figured it out - several workers were running in the background, each of them grabbing jobs with its own cached copy of the code.
I didn't know how to kill them, so I changed the table's name for several seconds. That killed them :)
Then I used https://github.com/tobi/delayed_job/wiki/Running-Delayed::Worker-as-a-daemon to start the worker, and it works great.
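For reference, the daemonized worker from that wiki is controlled through a small script; with collectiveidea's delayed_job the generated script/delayed_job looks roughly like this (a sketch of the conventional generator output, not code from the question):

#!/usr/bin/env ruby
# Boot the full Rails app, then hand control to delayed_job's daemon wrapper
require File.expand_path(File.join(File.dirname(__FILE__), '..', 'config', 'environment'))
require 'delayed/command'
Delayed::Command.new(ARGV).daemonize

Stopping and starting the worker through this script is what reloads the job classes.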
