Why can't my resque job find the object? - ruby-on-rails

In our Rails app, we save a model (Video). We have a callback on that object:
after_create :send_to_background_job, :if => :persisted?
The method looks like:
def send_to_background_job
  Resque.enqueue(AddVideo, self.id)
end
When the worker is called, it does the following:
class AddVideo
  @queue = :high

  def self.perform(video_id)
    video = Video.find(video_id)
    video.original_file_name
    ....
Resque-web reports an error:
AddVideo
Arguments: 51061
Exception: NoMethodError
Error: undefined method `original_filename' for nil:NilClass
This is a bit odd because, if I go to the Rails console and look for this video, it does exist. Furthermore, calling Resque.enqueue(AddVideo, 51061) a second time runs without any errors.
It is as if it takes more time to save the record in the database than it takes to create the worker/job. But even that doesn't add up, since the object enqueues the Resque job only after it is saved; in Rails, this is done via a callback method in the model (after_create).
I don't know if this plays a role in the issue, but in an initializer file I have:
Resque.before_fork do
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

Resque.after_fork do
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end

This probably happens because the transaction in which the object is saved has not yet committed when the background job starts working on it.
You should switch the after_create callback to:
after_commit :send_to_background_job, on: :create
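For completeness, in the Video model the change would look something like this (a minimal sketch based on the code in the question):

class Video < ActiveRecord::Base
  # after_commit fires only once the surrounding database transaction
  # has committed, so the worker is guaranteed to find the record.
  after_commit :send_to_background_job, on: :create

  def send_to_background_job
    Resque.enqueue(AddVideo, self.id)
  end
end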

Related

Rails methods not initialized in time for worker

Earlier, I had posted this question – and thought it was resolved:
Rails background worker always fails first time, works second
However, after continuing with tests and development, the error is back again, but in a slightly different way.
I'm using Sidekiq (with Rails 3.2.8, Ruby 1.9.3) to run background processes, after_save. Below is the code for my model, worker, and controller.
Model:
class Post < ActiveRecord::Base
  attr_accessible :description,
                  :name,
                  :key

  after_save :process

  def process
    ProcessWorker.perform_async(id, key) if key.present?
    true
  end

  def secure_url
    key.match(/(.*\/)+(.*$)/)[1]
  end

  def nonsecure_url
    key.gsub('https', 'http')
  end
end
Worker:
class ProcessWorker
  include Sidekiq::Worker

  def perform(id, key)
    post = Post.find(id)
    puts post.nonsecure_url
  end
end
(Updated) Controller:
def create
  @user = current_user
  @post = @user.posts.create(params[:post])
  render nothing: true
end
Whenever jobs are first dispatched, no matter the method, they fail initially:
undefined method `gsub' for nil:NilClass
Then, they always succeed on the first retry.
I've come across the following GitHub issue, which appears to be resolved and relates to this same issue:
https://github.com/mperham/sidekiq/issues/331
Here, people are saying that creating initializers to initialize the ActiveRecord methods on the model resolves their issue.
To accomplish this, I've tried creating an initializer in lib/initializers called sidekiq.rb, with the following, simply to initialize the methods on the Post model:
Post.first
Now, the first job created completes successfully the first time. This is good. However, a second job created fails the first time – and completes upon retry... putting me right back to where I started.
This is really blowing my mind – has anyone had the same issue? Any help is appreciated.
Change your model callback from after_save to after_commit for the create action. Sidekiq can start your worker before the transaction that saves the model has actually committed to the database.
after_commit :process, :on => :create
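Applied to the Post model from the question, the change is roughly this (a sketch; everything else stays as posted):

class Post < ActiveRecord::Base
  attr_accessible :description, :name, :key

  # Enqueue only after the INSERT has committed, so the Sidekiq worker
  # can always find the record and its key.
  after_commit :process, :on => :create

  def process
    ProcessWorker.perform_async(id, key) if key.present?
    true
  end
end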

Strange behavior with a resque scheduler job

So, some context: I got some advice here:
Scheduling events in Ruby on Rails
and have been trying to implement it today. I can't seem to make it work, though. This is my scheduler job, which is used to move my questions between a delayed queue and a ready-to-send queue (I've since decided to use email instead of SMS):
require 'Assignment'
require 'QuestionMailer'

module SchedulerJob
  @delayed_queue = :delayed_queue
  @ready_queue

  def self.perform()
    @delayed_queue.each do |a|
      if(Time.now >= a.question.schedule)
        @ready_queue << a
        @delayed_queue.delete(a)
      end
    end
    push_questions
  end

  def self.gather()
    assignments = Assignment.find :all
    assignments.each do |a|
      @delayed_queue << a unless @delayed_queue.include? a
    end
  end

  private

  def self.push_questions
    @ready_queue.each do |a|
      QuestionMailer.question(a)
    end
  end
end
I use an after_create callback to call the gather method every time an assignment is created, and then the perform method actually does the sending of emails when Resque runs.
I'm getting a strange error from the callback, though.
undefined method `include?' for :delayed_queue:Symbol
Here is the code from the Assignment model:
class Assignment < ActiveRecord::Base
  belongs_to :user
  belongs_to :question

  attr_accessible :title, :body, :user_id, :question_id, :response, :correct

  after_create :queue_assignments

  def grade
    self.correct = (response == self.question.solution) unless response == nil
  end

  def queue_assignments
    SchedulerJob.gather
  end
end
Any ideas what's going on? I think this is a problem with my understanding of how these queues work with resque-scheduler. I assumed that the queues were list-like objects I could operate on, but it appears the variable holds a symbol instead of something with methods like include?. I assume the << notation for adding something to it is also invalid.
Also, please advise if this isn't the way to go about handling this kind of job scheduling.
It appears you may not have restarted your Rails app after adding the new gather method to the SchedulerJob module. Try restarting your app to resolve this.
You may also be able to add the directory containing your Resque worker to Rails' watchable_dirs array so that changes you make to Resque worker modules in development don't require restarting your app. See this blog post for details:
http://wondible.com/2012/01/13/rails-3-2-autoloading-in-theory/
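In a Rails 3.2 app that configuration would look something like this in config/environments/development.rb (a sketch; app/workers is an assumed location for your Resque modules, and MyApp is a placeholder for your application name):

# config/environments/development.rb
MyApp::Application.configure do
  # Watch .rb files under app/workers so edited worker modules are
  # reloaded in development without restarting the app.
  config.watchable_dirs['app/workers'] = [:rb]
end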

Delayed job; does it skip before filter

I have a delayed job that runs perfectly against the public schema in PostgreSQL.
Most of my operations, however, are against other schemas (one for each client).
To handle different schemas I've followed the instructions and put code to switch the search path in my before_filter (in the application controller).
I've noticed that the code in the before_filter gets called perfectly during typical operations, but not at all during a delayed job.
I trimmed out everything but the simplest thing I could think of, to show the entry point.
class ApplicationController < ActionController::Base
  protect_from_forgery

  def write_to_log(text)
    File.open('c:\temp.txt', 'ab') do |f|
      f.write text + "\r\n"
      f.close
    end
  end

  before_filter :on_before_filter

  def on_before_filter
    write_to_log('hey dave');
    return if(use_token() == false);
    set_active_schema if(goto_log_in? == false);
  end
end
The code in the worker class
def run_job(id)
  upload = Upload.find(id)
  upload.run_job();
end
handle_asynchronously :run_job, :priority => 10, :queue => 'public'
Quite standard stuff. Though the code in the job runs, the before_filter code doesn't get called.
So my question is: did I do something wrong? Or, more importantly, how can I do it right?
I'm not recommending this approach; I'm just answering your question by providing this code. Since you essentially want your code to run before any attempted call to the database, you can monkey patch ActiveRecord. Add the following code to config/initializers/active_record_monkey_patch.rb
class ActiveRecord::ConnectionAdapters::ConnectionPool
  # create an alias for the old 'connection' method
  alias_method :old_connection, :connection

  # redefine the 'connection' method
  def connection
    # output something just to make sure the monkey patch is working
    puts "*** custom connection method called ***"

    # your custom code is here
    write_to_log('hey dave');
    return if(use_token() == false);
    set_active_schema if(goto_log_in? == false);

    # call the old 'connection' method
    old_connection
  end
end
You'll see your custom connection method getting called frequently now, and it will work without a controller. You can test it by opening up a rails console and performing any database query, and you should see the "custom connection method called" message displayed several times.
If you want to manipulate the ActiveRecord search path for Postgres and schemas you can use a full-featured gem like apartment: https://github.com/bradrobertson/apartment
You can switch to a new schema:
Apartment::Database.switch('database_name')
This works regardless of whether you call it in an application controller request or in a background job.
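With apartment in place, one way to do this is to switch schemas at the top of the job, before any tenant-specific lookups. A rough sketch, assuming the schema name is passed in when the job is enqueued:

def run_job(id, schema)
  # Point this connection at the client's schema before touching tenant data.
  Apartment::Database.switch(schema)
  upload = Upload.find(id)
  upload.run_job
end
handle_asynchronously :run_job, :priority => 10, :queue => 'public'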

perform not being called for Delayed Jobs

I'm using delayed_job 2.1.4 from collectiveidea, and it seems the perform method is never called even though the jobs are processed and removed from the queue. Am I missing something?
I'm using Rails 3.0.5 on Heroku
In the Controller:
Delayed::Job.enqueue FacebookJob.new
In the Job class:
class FacebookJob
  def initialize
  end

  def perform
    fb_auths = Authentication.where(:provider => 'facebook')
    fb_auths.each do |auth|
      checkins = FbGraph::User.new('me', :access_token => URI.encode(auth.token)).checkins
      if checkins != nil
        checkins.each do |checkin|
          [...]
        end
      end
    end
  end
end
(the whole code: https://gist.github.com/966509)
The simple answer: does DelayedJob know about the Authentication and FbGraph::User classes? If not, you'll see exactly the behavior you describe: the items will be silently removed from the queue.
See this entry in the Delayed Job wiki.
Try adding require 'authentication' and require 'fb_graph' (or whatever) at the top of your facebook_job.rb file.
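A sketch of what the top of facebook_job.rb might look like with the explicit requires (the exact require paths are assumptions and depend on how your classes are loaded):

# facebook_job.rb
require 'fb_graph'        # the fb_graph gem used inside perform
require 'authentication'  # the Authentication model; adjust the path to your app

class FacebookJob
  def perform
    # ... as shown above ...
  end
end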

How to save something to the database after failed ActiveRecord validations?

Basically, what I want to do is log an action on MyModel in the MyModelLog table. Here's some pseudo code:
class MyModel < ActiveRecord::Base
  validate :something

  def something
    # test
    errors.add(:data, "bug!!")
  end
end
I also have a model looking like this:
class MyModelLog < ActiveRecord::Base
  def self.log_something
    self.create(:log => "something happened")
  end
end
In order to log, I tried to:
Add MyModelLog.log_something in the something method of MyModel
Call MyModelLog.log_something in the after_validation callback of MyModel
In both cases the creation is rolled back when the validation fails, because it happens inside the validation's transaction. Of course, I also want to log when validations fail. I don't really want to log to a file or somewhere other than the database, because I need the relationships between log entries and other models, and the ability to query them.
What are my options?
Nested transactions do seem to work in MySQL.
Here is what I tried on a freshly generated Rails (with MySQL) project:
./script/generate model Event title:string --skip-timestamps --skip-fixture
./script/generate model EventLog error_message:text --skip-fixture
class Event < ActiveRecord::Base
  validates_presence_of :title

  after_validation_on_create :log_errors

  def log_errors
    EventLog.log_error(self) if errors.on(:title).present?
  end
end
class EventLog < ActiveRecord::Base
  def self.log_error(event)
    connection.execute('BEGIN') # If I wrap this in a `transaction do` block, it doesn't work.
    create :error_message => event.errors.on(:title)
    connection.execute('COMMIT')
  end
end
# And then in script/console:
>> Event.new.save
=> false
>> EventLog.all
=> [#<EventLog id: 1, error_message: "can't be blank", created_at: "2010-10-22 13:17:41", updated_at: "2010-10-22 13:17:41">]
>> Event.all
=> []
Maybe I have oversimplified it, or am missing some point.
Would this be a good fit for an Observer? I'm not sure, but I'm hoping that exists outside of the transaction... I have a similar need where I might want to delete a record on update...
I've solved a problem like this by taking advantage of Ruby's variable scoping. Basically, I declare an error variable outside of the transaction block, then catch the error, store its message, and raise it again.
It looks something like this:
def something
  error = nil

  ActiveRecord::Base.transaction do
    begin
      # place codez here
    rescue ActiveRecord::Rollback => e
      error = e.message
      raise ActiveRecord::Rollback
    end
  end

  MyModelLog.log_something(error) unless error.nil?
end
By declaring the error variable outside of the transaction scope the contents of the variable persist even after the transaction has exited.
I am not sure if it applies to you, but I assume you are trying to save/create a model from your controller. In the controller it is easy to check the outcome of that action, and you most likely already do so to provide the user with a useful flash message, so you could easily log an appropriate message there.
I am also assuming you do not use any explicit transactions, so if you handle it in the controller, it is outside of the transaction (every save and destroy runs in its own transaction).
What do you think?
MyModelLog.log_something should be done using a different connection.
You can make the MyModelLog model always use a different connection by using establish_connection.
class MyModelLog < ActiveRecord::Base
  establish_connection Rails.env # Use a different connection

  def self.log_something
    self.create(:log => "something happened")
  end
end
Not sure if this is the right way to do logging!!
You could use a nested transaction. This way the code in your callback executes in a different transaction than the failing validation. The Rails documentation for ActiveRecord::Transactions::ClassMethods discusses how this is done.
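Inside the callback, that call would look roughly like this (a sketch; note that with savepoint-based nesting the inner record is still discarded if the outer transaction rolls back, so a separate connection as suggested above is more reliable when the outer transaction fails):

def something
  errors.add(:data, "bug!!")
  # :requires_new => true opens a real nested (savepoint) transaction
  # instead of reusing the enclosing one.
  MyModelLog.transaction(:requires_new => true) do
    MyModelLog.log_something
  end
end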
