So I have many database operations in my helpers that I want to run in the background. As an example, I define a record_activity method in my Users helper. When a Post gets created, I want to record this activity, i.e. in the create method of the Posts controller:
def create
  # operations to save the post
  record_activity(user, post)
end
For performance reasons, I want to delay this record_activity call and others like it, and run them with workers on the back end. I use delayed_job for delaying mailers and it works excellently. In the Rails console, method.delay appears to work great, i.e. I can do:
record_activity.delay
However, the same .delay doesn't work when written in a controller, i.e. the following still runs live, not delayed:
def create
  # operations to save the post
  record_activity(user, post).delay
end
Why is this? I'm using Rails 3.0.9 in one app and Rails 3.1.3 in another, plus I have delayed_job version 2.1.4.
Can anyone suggest how to make this work?
EDIT:
I think the answer provided by mu_is_too_short is the right path. It creates a delayed job, only it doesn't execute the record_activity method properly. When the delayed_job worker starts, it executes the job with no errors and deletes the record as if it worked, but no activity gets recorded. To give some context, here is the call and the method I am troubleshooting now:
self.delay.record_activity(user, @comment)
And the method:
def record_activity(current_user, act)
  activity = Activity.new
  activity.user_id = current_user.id
  activity.act_id = act.id
  activity.act_type = act.class.name
  activity.created_at = act.created_at
  activity.save
end
I then thought that maybe I couldn't pass whole model objects through in this case, so I tried to just pass integer values and strings instead. I restarted the delayed_job workers and tried these methods, to no avail:
self.delay.record_activity(user.id, @comment.id, @comment.class.name, @comment.created_at)
And the altered method:
def record_activity(current_user_id, act_id, act_type, act_created_at)
  activity = Activity.new
  activity.user_id = current_user_id
  activity.act_id = act_id
  activity.act_type = act_type
  activity.created_at = act_created_at
  activity.save
end
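One way to surface whatever is silently failing here would be to call save! instead of save inside the delayed method: save returns false on a failed validation without raising, so the worker sees a "successful" job, while save! raises and delayed_job records the error on the job row instead of deleting it. A sketch of that change:

def record_activity(current_user_id, act_id, act_type, act_created_at)
  activity = Activity.new
  activity.user_id = current_user_id
  activity.act_id = act_id
  activity.act_type = act_type
  activity.created_at = act_created_at
  activity.save!  # raises on validation failure, so the failure shows up in the job's last_error
end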
I don't think record_activity.delay in the console is working the way you think it is. That will execute record_activity before delay has a chance to do anything.
The delay call has to go first, then you call your delayed method on what delay returns:
def create
self.delay.record_activity(user, post)
end
The delay call will return an object that intercepts all method calls (probably through method_missing), YAMLizes them, and adds the YAML to its delayed-job queue table in the database. So just saying record_activity.delay doesn't do anything useful: it executes record_activity, creates the delayed-job interceptor object, and then throws away what delay created.
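A stripped-down illustration of that pattern (a sketch of the idea, not delayed_job's actual source):

require 'yaml'

# Toy version of what obj.delay returns: a proxy that captures the
# method call instead of executing it.
class DelayProxy
  def initialize(target)
    @target = target
  end

  def method_missing(name, *args)
    # The real library serializes a payload like this into the
    # delayed_jobs table for a worker to pick up later.
    handler = YAML.dump(:object => @target, :method => name, :args => args)
    Delayed::Job.create!(:handler => handler)
  end
end

# So obj.delay.some_method(...) never runs some_method directly;
# the proxy records the call for a worker instead.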
Related
I have a before_destroy callback on my model named Connection.
It looks like this:
def run_my_worker
  SomeWorker.perform_async(self.id)
end
The method enqueues a Sidekiq worker. By the time the worker runs, the model has been destroyed, and the worker can't find it when it queries by the id that was passed through.
How can I get around this/what are my alternatives to this situation?
The simplest approaches are:
do the work synchronously, or
pass all the data you need to the asynchronous method (like in Reyko's answer)
If neither of those work, you'll need to do the asynchronous work, then destroy the object once you're done with it.
One approach is to write a new method (like do_whatever_then_destroy). You can use Sidekiq's Batches feature to get a callback when the work has completed. At this point you can destroy the model object, since you're finally done with it.
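A rough sketch of that batch-based approach, assuming Sidekiq Pro's Batches API (the ConnectionCleanup class and destroy_later method names are hypothetical):

# Callback class: Sidekiq calls on_success once every job in the batch succeeds.
class ConnectionCleanup
  def on_success(status, options)
    Connection.find(options['connection_id']).destroy
  end
end

# On the Connection model: enqueue the work, then destroy via the callback.
def destroy_later
  batch = Sidekiq::Batch.new
  batch.on(:success, ConnectionCleanup, 'connection_id' => id)
  batch.jobs do
    SomeWorker.perform_async(id)
  end
end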
You could pass the whole object, which should still be available to your worker even if the record gets destroyed.
def run_my_worker
  SomeWorker.perform_async(self)
end
Update 1
Then parse the JSON inside your worker:
def perform(my_object)
  # parsed_object will store a hash representation of my_object
  parsed_object = JSON.parse(my_object)
end
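Since Sidekiq serializes job arguments to JSON, it may be safer to serialize explicitly at enqueue time rather than passing the record itself; a minimal sketch, assuming the worker only needs the attribute values and not a live record:

def run_my_worker
  # Serialize the attributes explicitly so the worker receives plain JSON.
  SomeWorker.perform_async(self.attributes.to_json)
end

class SomeWorker
  include Sidekiq::Worker

  def perform(json)
    attrs = JSON.parse(json)
    # attrs is a plain hash (e.g. attrs['id'], attrs['name']);
    # work with the values directly, since the row may already be gone.
  end
end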
I've been trying to implement a DelayedJob custom job for a very long time, but I'm not finding much information online about how to do this from start to finish, and almost everything I do find is about sending mass emails (I've read collectiveidea's GitHub intro, Railscasts, SO questions, etc.). As someone relatively new to Rails, I imagine that while the instructions are probably clear to someone more experienced, they are not clear enough for someone at my level to get this working properly.
The aim of my task is to run the job and then destroy the object. (I am aware that DelayedJob destroys all completed jobs, but I also want the object removed from my database upon completion of the job.)
Previously, I was doing this with a standard (non-custom) DelayedJob call in my controller's create method: user.delay.scrape, followed by user.delay.destroy, which worked well. So everything else in my application is working fine, and the problem lies strictly in how I am setting up this custom job. However, for various reasons, it would be much better in this case to create a custom job.
Below is the current (non-working) way I have DelayedJob set up in my app. However, when I run the app, the console reports: uninitialized constant UsersController::UserScrapeJob. Any suggestions of how to get this to work properly would be greatly appreciated, and I'd be happy to answer any questions about this request.
Here is my model:
class User < ActiveRecord::Base
  def scrape
    # some code here...
  end
end
In my controller, the delayed job needs to function as part of the create method.
And here is my controller (with only the create method shown):
class UsersController < ApplicationController
  def create
    @user = User.new(params[:user])
    if @user.save
      Delayed::Job.enqueue UserScrapeJob.new(@user.id)
    else
      render :action => 'new'
    end
  end
end
And here is the job file userScrapeJob.rb, which is in the app/jobs folder:
class UserScrapeJob < Struct.new(:user_id)
  def perform
    user = User.find(user_id)
    user.scrape
    user.destroy
  end
end
You have a typo when you create the job: the name of the class is UserScrapeJob, with a capital 'U' (class names in Ruby are constants).
Delayed::Job.enqueue UserScrapeJob.new(@user.id)
You also have a syntax error in the if: it's if ... else ... end, not if ... end else ... end.
Try renaming your job file from userScrapeJob.rb to user_scrape_job.rb.
When you call UserScrapeJob.new, Rails converts the class name to snake case (i.e. user_scrape_job) and looks for the corresponding file of that name, user_scrape_job.rb.
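Roughly what the autoloader does with the missing constant (illustrative, not the actual Rails code):

# When Ruby hits the uninitialized constant, Rails' const_missing hook fires:
"UserScrapeJob".underscore  # => "user_scrape_job"
# ...and Rails then requires user_scrape_job.rb from its autoload paths,
# e.g. app/jobs/user_scrape_job.rb, which is why the file name matters.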
I've been trying to figure this out for a long time, without success.
I am using DelayedJob in my Rails app in order to run a script to fill out some forms on a website via a Mechanize script. However, after the job completes, I don't want any record of the entry to be stored in any database in my application, as there is no reason anyone should access it again.
The process works perfectly when I run it as a simple background method within the controller's create method, that is, by calling @course.delay.scrape right after if @course.save. But now that I want to destroy the object right after the background job finishes, I believe I need to create a custom job, and am struggling with that.
I am aware that the DelayedJob documentation lists the method def after(job). In order to use that method, I need to create a custom job. I'm confused about how to create a custom job, as nearly every example I can find is for sending mass emails, whereas this is for a different purpose. I don't know how to get the script to run this way.
If you can help me with fixing up this code at all, that would be greatly appreciated! I've tried many variations, looking at as many examples as possible. I'm aware it has at least a few errors, but am not advanced enough to know what to change. This is the last thing I tried before throwing in the towel.
Here is my model (in models/course.rb):
class Course < ActiveRecord::Base
  after_create :send_to_delayed_job

  def scrape
    # ...Mechanize script goes here...
  end

  def send_to_delayed_job
    Delayed::Job.enqueue CourseJob.new(self.id), :queue => 'mycoursequeue'
  end
end
Here is my job (in models/course_job.rb):
class CourseJob < Struct.new(:course_id)
  def perform
    course = Course.find(self.id)
    course.scrape
  end

  def after(job)
    Course.destroy(params[:id])
  end
end
Can you just have course.destroy as the last line of the CourseJob#perform method?
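In other words, something like this sketch (it also looks up the record through the struct's course_id member rather than self.id, and drops the after hook, which has no params available inside a job):

class CourseJob < Struct.new(:course_id)
  def perform
    course = Course.find(course_id)  # use the struct member, not self.id
    course.scrape
    course.destroy  # remove the record once the scrape is done
  end
end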
I have a simple Resque job that accepts a model id, fetches the record, and then calls a method on it.
class CupcakeWorker
  @queue = :cupcake_queue

  def self.perform(cupcake_id)
    @cupcake = Cupcake.find(cupcake_id)
    @cupcake.bake
  end
end
I queue it from a controller action using the 'enqueue' method:
def bake
  Resque.enqueue(CupcakeWorker, params[:cupcake_id])
  render :json => 'Baking...'
end
The job queues and executes correctly. However, if I modify the record's data in the database and proceed to queue the job again, the operation doesn't execute using the new values.
I can only get it to fetch the new values if I restart the Resque worker process. Is Resque caching the object? Is there a way to ensure it refetches from the database every time the worker is called?
This answer is a little crude, but could work. You could call reload on the ActiveRecord model before doing any processing. This forces ActiveRecord to update the data.
The worker would then look like this:
class CupcakeWorker
  @queue = :cupcake_queue

  def self.perform(cupcake_id)
    @cupcake = Cupcake.find(cupcake_id).reload
    @cupcake.bake
  end
end
I'm afraid I don't know why Resque might be doing this, however. Try Sidekiq and see if you get better results.
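If you do try Sidekiq, the equivalent worker might look like this sketch (assuming the same Cupcake model and queue name):

class CupcakeWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :cupcake_queue

  def perform(cupcake_id)
    # Each job does a fresh find against the database.
    Cupcake.find(cupcake_id).bake
  end
end

# Enqueued from the controller with:
# CupcakeWorker.perform_async(params[:cupcake_id])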
I'm using the Delayed Jobs plugin for Rails 2, and every time I try to modify a model and save it in the "perform" method required by Delayed Jobs, it fails out (no error messages or anything, it's just listed as a failure in the database).
I have the "perform" method in one of my rails model files (Video), and I'm passing an instance of that model (#video, let's say) to the Delayed::Job.enqueue
Is it a known issue that you can't do database modifications while in the queue? Am I doing something wrong (it only fails when it tries to save, not when I'm actually changing the attributes, and that sounds like a database modification issue).
If this IS expected: How can I fix it? I'm trying to save a "done" attribute to true, so I know when the model is ready to get to the next step. Is there some standard way to figure out when a delayed job is done?
EDIT: I have confirmed that calling perform standalone (without delayed job) has no problems with saving (no errors or warnings, or anything). When I call it through DelayedJobs it fails IMMEDIATELY (no time out) the second it gets to the save line.
EDIT: Wait, I think I see what is going on: my "perform" is part of an "after_create" callback... which is all well and good, until I try to SAVE. It looks like when I save, it calls perform AGAIN (while already in perform), and that doesn't fly with Delayed Jobs (nor should it). For some reason I thought after_create would only get called once, not after every save. Wait, a simple test just showed that that IS the case. Hrrm... So why is perform called twice when I save, and once when I don't, in delayed jobs?
My code:
after_create :start_transcodes

def start_transcodes
  Delayed::Job.enqueue self
end

def perform
  puts "performing"
  self.flash_status = 100
  self.save!
  puts "done"
end
What I see:
performing
performing
2 jobs processed at 3.3406 j/s, 2 failed ...
I don't see it say "done" ever.
What I DO see in my rails log is:
"* [JOB] Video failed with NameError: undefined local variable or method `flush_deletes' for #<Paperclip::Attachment:0xb6e51da0> - 2 failed attempts
undefined local variable or method `flush_deletes' for #<Paperclip::Attachment:0xb6e51da0>"
I am using the paperclip plugin for this class, and I can call save all day (even in that perform method) and get no problems. I ALSO can call save (again, even in perform) all day and not see my after_create method called more than once, UNLESS I'm using Delayed Job. (Might it be doing some sort of auto-retry?)
I'm gonna look around my paperclip plugin, see what's going on...
If your save fails, it's got nothing to do with delayed_job (at least it shouldn't be, unless the save takes longer than MAX_RUN_TIME).
Try diagnosing the problem with the save by not using delayed_job.
Also take a look at the delayed_job.log file in your logs.
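For example, you could run the same perform synchronously in the Rails console and watch for exceptions that delayed_job would otherwise swallow (a sketch, assuming a Video record exists):

# In the Rails console:
video = Video.first
video.perform  # any exception raised by the save will surface here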
Okay, not sure EXACTLY what was happening, but I've made a skeleton "TranscodeJob" class in my lib directory. This class gets initialized with the id of the video I want it to process, processes it, saves it, and plays nicely with Delayed Job.
Basically, it looks like passing my entire complicated Video object (complete with paperclip plugins) to Delayed Job was freaking things out, and passing a simple object with no more info than it needs makes things much easier.
Below is the code I used, which works just fine (and if that works fine, I can add my long-running code back little by little and confirm it continues to do so; it worked fine before, it just hiccuped on saving).
class TranscodeJob
  def initialize(video_id)
    @video_id = video_id
  end

  # delayed_job's expected method
  def perform
    @video = Video.find(@video_id)
    @video.flash_status = 100
    @video.save!
  end
end
This code is STILL called from an after_create filter, and I'm not seeing it called twice, so it looks like I mistook Delayed Job's auto-retry for recursion, or something.
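For completeness, the enqueueing callback presumably changes to hand Delayed Job the lightweight job object instead of the model itself; a sketch:

# In the Video model:
after_create :start_transcodes

def start_transcodes
  Delayed::Job.enqueue TranscodeJob.new(self.id)
end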