send object as parameter in resque - ruby-on-rails

Sorry for my newbie question.
I'm passing the current_user helper (which holds a User object) as a parameter to my Resque job runner, like this:
JobRunner.run LikerCommenterAnalyzerJob, current_user, session['basic_instagram_data']['access_token'], 505
but when I read the current_user parameter in the job's perform method, it has been converted to a Hash:
def self.perform(current_user, access_token, archive_id)
  account = current_user.InstagramAccount.where('access_token', access_token).first
end
and my error is
undefined method `InstagramAccount' for #<Hash:0x007f9e39c27458>

Basically, when you push a job to Resque, it is stored somewhere (Redis, for example) to be run later. Resque can't store your live Ruby object, so it stores a description of the parameters you passed; the Hash above is that description.
Resque then pulls the description back down when it runs the job.
So the solution is to pass the information needed to retrieve the object, rather than the object itself (this is the best practice).
When pushing the task:
JobRunner.run LikerCommenterAnalyzerJob, current_user.id, session['basic_instagram_data']['access_token'], 505
When running the task:
def self.perform(user_id, access_token, archive_id)
  user = User.find(user_id)
  account = user.InstagramAccount.where(access_token: access_token).first
end
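For reference, here is a minimal sketch of what the whole job class could look like with that change (the :analyzer queue name and the InstagramAccount association are assumptions, adjust them to your app):
class LikerCommenterAnalyzerJob
  # Resque reads the target queue from this class instance variable
  @queue = :analyzer

  def self.perform(user_id, access_token, archive_id)
    user = User.find(user_id)
    account = user.InstagramAccount.where(access_token: access_token).first
    # ... run the analysis with account and archive_id ...
  end
end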

Related

Sending email as a job on ruby on rails

Good day everyone,
I created a mailer to send an email to my client. Right now I'm still testing it, but I can't get it to work. I've read up on Redis, Sidekiq, and Rails mailers, and still nothing. I can see that the mail is in the queue in the Sidekiq UI, but I never receive the email.
Here's the flow of my code.
The user checks a box on the view if they want to send an email to a client.
Then a method is triggered in the controller. Here's my code.
def send_workorder_message
  if params.has_key?(:to_send_email)
    WorkorderMessage::WorkorderMessageJob.perform_in(10.seconds, @curr_user, params[:message])
  end
end
Then a workorder job is created. Here's the code.
class WorkorderMessage::WorkorderMessageJob
  # include SuckerPunch::Job
  include Sidekiq::Worker
  sidekiq_options queue: 'mailers'

  def perform(user, message)
    Spree::WorkorderMailer.workorder_send_to_email(user, message).deliver_now
    # ActiveRecord::Base.connection_pool.with_connection do
    # end
  end
end
After that it triggers the WorkorderMailer. Here's the code.
class WorkorderMailer < BaseMailer
  def workorder_send_to_email(to_user, message)
    ActiveRecord::Base.connection_pool.with_connection do
      subject = "sample message mailer"
      @message = message
      @user = to_user
      mail(
        to: @user.email,
        # 'reply-to': Spree::Store.current.support_address,
        from: Spree::Store.current.support_address,
        subject: subject
      )
    end
  end
end
When I use the mailer preview I can see the UI working fine.
Also, I've noticed that in the Sidekiq UI the job's argument shows up as a User object. Is that normal?
According to the Sidekiq documentation, the arguments you pass must be primitives that cleanly serialize to JSON, and not full Ruby objects, like the user you are passing here:
Complex Ruby objects do not convert to JSON, by default it will
convert with to_s and look like #<Quote:0x0000000006e57288>. Even if
they did serialize correctly, what happens if your queue backs up and
that quote object changes in the meantime? Don't save state to
Sidekiq, save simple identifiers. Look up the objects once you
actually need them in your perform method.
The arguments you pass to perform_async must be composed of simple
JSON datatypes: string, integer, float, boolean, null(nil), array and
hash. This means you must not use ruby symbols as arguments. The
Sidekiq client API uses JSON.dump to send the data to Redis. The
Sidekiq server pulls that JSON data from Redis and uses JSON.load to
convert the data back into Ruby types to pass to your perform method.
Don't pass symbols, named parameters or complex Ruby objects (like
Date or Time!) as those will not survive the dump/load round trip
correctly.
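As a quick illustration of that round trip (a standalone sketch, not code from the question):
require 'json'

# roughly what the Sidekiq client does on enqueue and the server does on pickup
payload  = JSON.dump({ user_id: 42, priority: :high })  # => '{"user_id":42,"priority":"high"}'
restored = JSON.load(payload)                            # => {"user_id"=>42, "priority"=>"high"}

# Symbols come back as plain strings, and a full ActiveRecord object would come back
# as a Hash of attributes (or a useless "#<User:0x...>" string), which is why simple
# identifiers are the safe thing to pass.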
I would suggest you change it to lookup the User by ID within the job, and only pass the ID instead of the entire user object.
# pass @curr_user.id instead of @curr_user
WorkorderMessage::WorkorderMessageJob.perform_in(10.seconds, @curr_user.id, params[:message])
# in the worker, accept the ID instead of the user
def perform(user_id, message)
  # look the user object up here
  user = User.find(user_id)
  # then hand the real object to the mailer, as before
  Spree::WorkorderMailer.workorder_send_to_email(user, message).deliver_now
end

Setting tenant scope for DelayedJob

I have a multitenant Rails app that has a tenant_id column on many models.
Each model that belongs to a specific tenant has a default scope based on a class variable on the Tenant class:
default_scope { where(tenant_id: Tenant.current_id) }
Tenant.current_id is set in the application controller.
The problem is that when I send mail (via a Delayed Job) regarding a tenant-scoped object (i.e. UserMailer.delay.contact_user(@some_user_in_a_specific_tenant)), I get a NoMethodError for NilClass whenever I call anything on @some_user_in_a_specific_tenant within the mailer. Presumably this is because the Delayed Job process isn't setting Tenant.current_id.
How can I get DJ to access the objects I'm passing in?
Grab the current_id when you queue the job and build a scope from it that doesn't depend on a class variable in the app. Or get the list of record ids to operate on first and pass that to DJ.
Examples:
def method_one(id)
  Whatever.where(:tenant_id => id).do_stuff
end

def method_two(ids)
  Whatever.find(ids).do_stuff
end

handle_asynchronously :method_one
handle_asynchronously :method_two
# then
method_one(Tenant.current_id)
# or
ids = Whatever.all.map(&:id)
method_two(ids)
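Another option (a sketch only; it assumes Tenant.current_id has a writer, since the application controller assigns it) is to capture the tenant id at enqueue time and restore it around perform, so the default scopes keep working inside the worker:
# hypothetical custom job; the class and mailer names are illustrative
class TenantScopedMailJob < Struct.new(:tenant_id, :user_id)
  def perform
    previous = Tenant.current_id
    Tenant.current_id = tenant_id  # restore the tenant so the default scopes resolve
    UserMailer.contact_user(User.find(user_id)).deliver
  ensure
    Tenant.current_id = previous   # don't leak the tenant into other jobs
  end
end

# enqueue from the app, where Tenant.current_id is still set
Delayed::Job.enqueue TenantScopedMailJob.new(Tenant.current_id, @some_user_in_a_specific_tenant.id)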

delayed_job: how to check for presence of a particular job based on a triggered method

I have a method that goes through an array of APIs and launches a delayed_job for every API found, like this:
def refresh_users_list
  apis_array.each do |api|
    api.myclass.new.delay.get_and_create_or_update_users
  end
end
I have an after_filter on the users#index controller action that triggers this method. This creates many jobs, which will eventually cause "too many connections" problems on Heroku.
I'm wondering if there's a way to check for the presence of a job in the database for each API the array iterates over. That would let me trigger a refresh only if that API hasn't been updated within a given time.
Any idea how to do this?
In config/application.rb, add the following
config.autoload_paths += Dir["#{config.root}/app/jobs/**/"]
Create a new directory at app/jobs/.
Create a file at app/jobs/api_job.rb that looks like
class ApiJob < Struct.new(:attr1, :attr2, :attr3)
  attr_accessor :token

  def initialize(*attrs)
    super(*attrs) # set the struct members before building the token
    self.token = self.class.token(attr1, attr2, attr3)
  end

  def display_name
    self.class.token(attr1, attr2, attr3)
  end

  #
  # Class methods
  #
  def self.token(attr1, attr2, attr3)
    [name.parameterize, attr1.id, attr2.id, attr3.id].join("/")
  end

  def self.find_by_token(token)
    Delayed::Job.where("handler like ?", "%token: #{token}%")
  end
end
Note: You will replace attr1, attr2, and attr3 with whatever attributes you need (if any) to pass to the ApiJob to perform the queued task. More on how to call this in a moment.
For each of your APIs whose get_and_create_or_update_users you queue, you'll create another job class. For example, if I have some Facebook API model, I might have a class at app/jobs/facebook_api_job.rb that looks like
class FacebookApiJob < ApiJob
  def perform
    FacebookApi.new.get_and_create_or_update_users(attr1, attr2, attr3)
  end
end
Note: In your Question you did not pass any attributes to get_and_create_or_update_users. I am just showing you where you would do this if you need the job to have attributes passed to it.
Finally, wherever your refresh_users_list is defined, define something like this job_exists? method
def job_exists?(tokens)
  tokens = [tokens] unless tokens.is_a?(Array) # allow a String or an Array of tokens
  tokens.each do |token|
    return true unless ApiJob.find_by_token(token).empty?
  end
  false
end
Now, within your refresh_users_list loop, you can build tokens and call job_exists? to check whether a job for that API is already queued. For example:
def refresh_users_list
  apis_array.each do |api|
    # build a token the same way the job classes do
    token = ApiJob.token(attr1, attr2, attr3)
    next if job_exists?(token)
    api.myclass.new.delay.get_and_create_or_update_users
  end
end
Note: Again, I want to point out that you won't be able to just drop in the code above and have it work. You must tailor it to your application and the jobs you're running.
Why is this so complicated?
From my research, there's no way to "tag" or uniquely identify a queued job using only what delayed_job provides. Sure, each job has a unique :id attribute. You could store the ID values for each created job in a hash somewhere, like
{
  "FacebookApi": [1, 4, 12],
  "TwitterApi": [3, 193, 44],
  # ...
}
and then check the corresponding hash key for an ID, but I find this limiting and not always sufficient for the problem. When you need to identify a specific job by multiple attributes, as above, you need a way to find those jobs without loading every job into memory and looping over them to see if one matches your criteria.
How does this work?
The Struct that ApiJob extends has a :token attribute. The token is based on the attributes passed (attr1, attr2, attr3) and is built whenever a class extending ApiJob is instantiated.
The find_by_token class method simply searches the string representation of the job in the delayed_job queue for a match based on a token built using the same token class method.
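A quick usage sketch of the pieces above (assuming the FacebookApiJob from earlier and three attributes that respond to #id):
job = FacebookApiJob.new(attr1, attr2, attr3)
Delayed::Job.enqueue(job)             # the token is serialized into the stored handler

ApiJob.find_by_token(job.token).any?  # => true while a matching job is queued
job_exists?(job.token)                # => true, using the helper defined above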

Ruby on Rails - Delayed job in model

I need a delayed job to count fbLikes in my model, but I get an "undefined method `send_later`" error. Is there any way to run my fb_likes method in the model as a delayed job?
Latest update:
This is the latest code in my project. The behavior is still the same: fb_likes does not show the likes count.
[Company.rb]-MODEL
require "delayed_job"
require "count_job.rb"
class Company < ActiveRecord::Base
before_save :fb_likes
def fb_likes
Delayed::Job.enqueue(CountJob.new(self.fbId))
end
end
[config/lib/count_job.rb]
class CountJob < Struct.new(:fbId)
  def perform
    uri = URI("http://graph.facebook.com/#{fbId}")
    data = Net::HTTP.get(uri)
    self.fbLikes = JSON.parse(data)['likes']
  end
end
[controller]
def create
  @company = Company.new(params[:company])
  if @company.save!
    flash[:success] = "New company successfully registered."
    # ---- and other more code ----
Library files are not required by default.
Rename the job file to count_job.rb. Using camelCase for filenames is insane and will burn you in unpredictable ways.
Create an initializer and add require 'count_job.rb'
One way is to create a separate worker that gets queued and, when it runs, fetches the updated model and calls its fb_likes method (the method will need to be public). Or move the logic into the worker itself.
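A rough sketch of that second option, moving the logic into the job itself (it assumes the Company model with fbId and fbLikes columns from the question, Rails 3.1+ for update_column, and that the job is enqueued after the record is saved so it can be looked up by id):
require 'net/http'
require 'json'

class CountJob < Struct.new(:company_id)
  def perform
    company = Company.find(company_id)
    data = Net::HTTP.get(URI("http://graph.facebook.com/#{company.fbId}"))
    # write the count back to the record (update_column skips validations and callbacks)
    company.update_column(:fbLikes, JSON.parse(data)['likes'])
  end
end

# enqueue with the record's id, e.g. from an after_commit callback
Delayed::Job.enqueue CountJob.new(company.id)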

Sending email through resque: object treated as hash

I'm sending emails out through Resque. All emails send properly except this one, which sends fine in development locally but fails on the staging server.
It seems to treat the Account object as a Hash instead of an Account instance. Any ideas?
account.rb
class Account < ActiveRecord::Base
  after_commit :send_welcome_email

  def send_welcome_email
    SubscriptionNotifier.welcome(self).deliver
  end
end
subscription_notifier.rb
class SubscriptionNotifier < ActionMailer::Base
  def welcome(account)
    @subscriber = account
    mail(:to => account.admin.email, :subject => "Welcome!")
  end
end
Resque error
SubscriptionNotifier Arguments
"welcome"
{"account"=>{"address_line1"=>nil, "address_line2"=>nil, "city"=>nil, "created_at"=>"2012-02-08T10:56:22-08:00", "currency"=>"United States Dollar (USD)", "deleted_at"=>nil, "description"=>nil, "email"=>"test2#test.com", "full_domain"=>"www.test.net", "id"=>3, "initial_plan"=>nil, "latitude"=>nil, "longitude"=>nil, "name"=>"macs", "phone"=>nil, "setup_steps_complete"=>0, "state"=>nil, "time_zone"=>"Pacific Time (US & Canada)", "updated_at"=>"2012-02-08T10:56:22-08:00", "website"=>nil, "zip"=>nil}}
Exception
NoMethodError
Error
undefined method `admin' for #<Hash:0x0000000585aa70>
I think you should just pass the account ID into the queue and have the worker fetch the Account object in its perform method. That should lessen your Hash woes.
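A minimal sketch of that approach (the WelcomeEmailJob class and :mailer queue are made-up names; wire it into however you actually enqueue mail through Resque):
class Account < ActiveRecord::Base
  after_commit :send_welcome_email

  def send_welcome_email
    Resque.enqueue(WelcomeEmailJob, id)  # pass only the id, not the record
  end
end

class WelcomeEmailJob
  @queue = :mailer

  def self.perform(account_id)
    account = Account.find(account_id)   # re-fetch the real object in the worker
    SubscriptionNotifier.welcome(account).deliver
  end
end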
This is an old question, but still relevant. The answers here give workarounds, but don't describe why the problem arises or how to design jobs to avoid it.
Basically, for Resque to persist jobs in Redis, the arguments need to be serialized so they can be saved. You can't store a live Ruby object in a database (for example), so the arguments are serialized to JSON (which can be persisted). In your case, it calls account.to_json and stores that as an argument to your job.
The likely reason this is an issue in staging but not in development is that your development Redis is only storing the jobs in memory (and therefore they don't need to be serialized), while staging is persisting them to disk or a database, so they have to be serialized.
To avoid this problem, arguments to jobs should be strings, numbers, or data structures that can be converted to JSON.
You will need to load the Rails environment by running rake environment resque:work QUEUE='*' RAILS_ENV=staging,
that is, unless you have
task "resque:setup" => :environment
defined in a resque.rake file.
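For reference, a common way to set that up (a sketch; the file path is just a convention):
# lib/tasks/resque.rake
require 'resque/tasks'

# make resque:work load the full Rails environment so models are available
task "resque:setup" => :environment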
Try this:
In account.rb:
def send_welcome_email
  SubscriptionNotifier.welcome(self.admin.email).deliver
end
In subscription_notifier.rb:
def welcome(account_admin_email)
  # @subscriber = account -- the account is no longer passed in; see the note below
  mail(:to => account_admin_email, :subject => "Welcome!")
end
I had the same error. Apparently you are passing account as a Hash, and it has no admin method. So just get the email in the send_welcome_email method and pass it as a parameter, instead of passing a hash and trying to access the email in the welcome method.
NOTE: for @subscriber, you would need to pass, just like the email, whatever parameters you use in the email template: for example, self.admin.name in the model and @name = account_admin_name in the welcome method.
Hope this helps.
