class Radar
  include Mongoid::Document

  after_save :post_on_facebook

  private

  def post_on_facebook
    if self.user.settings.post_facebook
      Delayed::Job.enqueue(::FacebookJob.new(self.user, self.body, url, self.title), 0, self.active_from)
    end
  end
end
class FacebookJob < Struct.new(:user, :body, :url, :title)
  include SocialPluginsHelper

  def perform
    facebook_client(user).publish_feed('', :message => body, :link => url, :name => title)
  end
end
I want to execute the post_on_facebook method at a specific date, which I store in the "active_from" field.
The code above works, and the job is executed at the correct date.
But in some cases I first create a Radar object, which sends a job to the Delayed Job queue. After that I update the object, which sends another job to Delayed Job.
This is wrong behavior, because I want the job to be executed only once, at the correct time. With this implementation two jobs will be executed. How can I delete the previous job so that only the updated one is executed?
Rails 3.0.7
Delayed Job => 2.1.4 https://github.com/collectiveidea/delayed_job
PS: Sorry for my English, I'm trying my best.
Sounds like you want to de-queue any existing job when a Radar object gets updated, and then re-queue.
Delayed::Job.enqueue returns a Delayed::Job record, so you can grab the ID off of that and save it back onto the Radar record (create a field for it on the Radar document) so you can easily find it again later.
You should change it to a before_save so you don't enter an infinite loop of saving.
before_save :post_on_facebook

def post_on_facebook
  if self.user.settings.post_facebook && self.valid?
    # delete the existing delayed job if present
    # (find_by_id avoids raising if the job already ran and was deleted)
    Delayed::Job.find_by_id(self.delayed_job_id).try(:destroy) if self.delayed_job_id

    # enqueue a new job
    dj = Delayed::Job.enqueue(
      ::FacebookJob.new(self.user, self.body, url, self.title), 0, self.active_from
    )

    # save the id of the delayed job on the Radar record
    self.delayed_job_id = dj.id
  end
end
Did you try storing the delayed job so you can delete it later? Note that Delayed::Job.enqueue returns the job record itself, not an id, e.g.:

job = Delayed::Job.enqueue(::FacebookJob.new(self.user, self.body, url, self.title), 0, self.active_from)
# later, once you have stored job.id somewhere:
Delayed::Job.find(job.id).destroy
I'm trying to use delayed_jobs (background workers) to process my incoming email.
class EmailProcessor
  def initialize(email)
    @raw_html = email.raw_html
    @subject = email.subject
  end

  def process
    # do something with @raw_html & @subject
  end

  handle_asynchronously :process, :priority => 20
end
The problem is that I can't pass instance variables (@raw_html & @subject) into delayed jobs. Delayed Job expects me to save the data onto the model so it can be retrieved in the background task, but I would prefer to have a background worker complete the entire task (including saving the record).
Any thoughts?
Use delay to pass params to a method that you want to run in the background:
class EmailProcessor
  def self.process(email)
    # do something with the email
  end
end

# Then somewhere down the line:
EmailProcessor.delay.process(email)
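To make the idea concrete, here is a runnable, Rails-free sketch of the pattern above (the method body and return value are illustrative): pass plain values into a class method so delayed_job can serialize the call cleanly, and let the method do the whole task, including any record saving.

```ruby
# Illustrative sketch: the class method receives plain values rather than
# relying on instance state, so it can run safely in a background worker.
class EmailProcessor
  def self.process(raw_html, subject)
    # ... parse, persist records, etc. Here we just return a summary string.
    "#{subject}: #{raw_html.length} bytes processed"
  end
end

# In the app you would enqueue it with delayed_job's delay:
#   EmailProcessor.delay.process(email.raw_html, email.subject)
EmailProcessor.process("<p>hi</p>", "Welcome")  # => "Welcome: 9 bytes processed"
```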
I need to be able to find queued and/or working and/or failed jobs for a model object. For example, when the model object is destroyed, we want to find all of them and decide (conditionally) whether to delete the object or destroy the jobs.
Is there a recommended way to do this before I reinvent the wheel?
Example:
If you want to create a before_destroy callback that destroys all queued and failed jobs when the object is destroyed, and only destroys the object if there are no working jobs.
Some pseudo code of what I am thinking to do for this example use case:
Report model
class Report < ActiveRecord::Base
  before_destroy :check_if_working_jobs, :destroy_queued_and_failed_jobs

  def check_if_working_jobs
    # find all working jobs related to this report object
    working_jobs = ProcessReportWorker.find_working_jobs_by_report_id(self.id)
    return false unless working_jobs.empty?
  end

  def destroy_queued_and_failed_jobs
    # find all jobs related to this report object
    queued_jobs = ProcessReportWorker.find_queued_jobs_by_report_id(self.id)
    failed_jobs = ProcessReportWorker.find_failed_jobs_by_report_id(self.id)

    # destroy/remove all jobs found
    (queued_jobs + failed_jobs).each do |job|
      # destroy the job here ... commands?
    end
  end
end
Report processing worker class for resque / redis backed jobs
class ProcessReportWorker
  # find the jobs by report id, which is one of the arguments for the job?
  # envisioned as separate methods so they can be used independently as needed

  def self.find_queued_jobs_by_report_id(id)
    # parse all jobs in all queues to find based on the report id argument?
  end

  def self.find_working_jobs_by_report_id(id)
    # parse all jobs in working queues to find based on the report id argument?
  end

  def self.find_failed_jobs_by_report_id(id)
    # parse all jobs in the failed queue to find based on the report id argument?
  end
end
Is this approach on track with what needs to happen?
What are the missing pieces above to find the queued or working jobs by model object id and then destroy it?
Are there already methods in place to find and/or destroy by associated model object id that I have missed in the documentation or my searching?
Update: Revised the usage example to use working_jobs only as a check on whether we should delete, rather than suggesting we will try to delete working jobs as well (deleting working jobs is more involved than simply removing the Redis key entries).
It's been quiet here with no responses, so I spent the day tackling this myself, following the path indicated in my question. There may be a better solution or other methods available, but this seems to get the job done so far. Feel free to comment if there are better options for the methods used or if it can be improved further.
The overall approach is to search through all jobs (queued, working, failed), filter down to jobs for the relevant class and queue, and keep only those whose args array contains the object record id you are looking for at the correct index position. For example (after confirming the class and queue match), if argument position 0 is where the object id is, then you can test whether args[0] matches the object id.
Essentially, a job is associated with the object id if: job_class == self.name && job_queue == @queue && job_args[OBJECT_ID_ARGS_INDEX].to_i == object_id
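The matching rule above can be distilled into a small, framework-free predicate (all names here are illustrative, not part of Resque). Comparing queue names as strings matters because decoded job payloads store the queue as a string while worker classes usually declare it as a symbol, and comparing args with .to_i covers ids that were serialized as strings:

```ruby
# Framework-free sketch of the association test described above.
def job_for_object?(job_class, job_queue, job_args, klass, queue, object_id, args_index)
  job_class == klass &&
    job_queue.to_s == queue.to_s &&            # decoded payloads hold string queue names
    job_args[args_index].to_i == object_id
end

job_for_object?("ProcessReportWorker", "report_processing", ["42"],
                "ProcessReportWorker", :report_processing, 42, 0)  # => true
```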
Queued Jobs: To find all queued jobs you need to collect all Redis
entries with the keys named queue:#{@queue}, where @queue is the
name of the queue your worker class is using. Modify accordingly by
looping through multiple queues if you are using multiple queues for
a particular worker class. Resque.redis.lrange("queue:#{@queue}", 0, -1)
Failed Jobs: To find all failed jobs you need to collect all Redis
entries with the key named failed (unless you are using multiple
failure queues or some other non-default setup). Resque.redis.lrange("failed", 0, -1)
Working Jobs: To find all working jobs you can use Resque.workers
which contains an array of all workers and the jobs that are running. Resque.workers.map(&:job)
Job: Each job in each of the above lists will be an encoded hash. You
can decode the job into a ruby hash using Resque.decode(job).
Class and args: For queued jobs, the class and args keys are job["class"]
and job["args"]. For failed and working jobs these are job["payload"]["class"] and job["payload"]["args"].
Queue: For each of the failed and working jobs found, the queue will be job["queue"]. Before testing the args list for the object id, you only want jobs that match the class and queue. Your queued jobs list will already be limited to the queue you collected.
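Since Resque.decode is essentially JSON parsing, the payload shapes described above can be sketched without Resque installed (the class name and args here are illustrative):

```ruby
require 'json'

# What a queued job looks like on the redis list:
raw = '{"class":"ProcessReportWorker","args":[42]}'
job = JSON.parse(raw)       # equivalent to Resque.decode(raw)
job["class"]                # => "ProcessReportWorker"
job["args"][0].to_i         # => 42

# A failed or working job nests the same keys under "payload":
raw_failed = '{"queue":"report_processing","payload":{"class":"ProcessReportWorker","args":[42]}}'
failed = JSON.parse(raw_failed)
failed["payload"]["class"]  # => "ProcessReportWorker"
failed["queue"]             # => "report_processing"
```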
Below are the example worker class and model methods to find (and to remove) jobs that are associated to the example model object (report).
Report processing worker class for resque / redis backed jobs
class ProcessReportWorker
  # queue name
  @queue = :report_processing

  # tell the worker class where the report id is in the arguments list
  REPORT_ID_ARGS_INDEX = 0

  # <snip> rest of class, not needed here for this answer

  # find jobs methods - find by report id (report is the 'associated' object)
  def self.find_queued_jobs_by_report_id report_id
    queued_jobs(@queue).select do |job|
      is_job_for_report? :queued, job, report_id
    end
  end

  def self.find_failed_jobs_by_report_id report_id
    failed_jobs.select do |job|
      is_job_for_report? :failed, job, report_id
    end
  end

  def self.find_working_jobs_by_report_id report_id
    working_jobs.select do |worker, job|
      is_job_for_report? :working, job, report_id
    end
  end

  # association test method - determine if this job is associated
  def self.is_job_for_report? state, job, report_id
    attributes = job_attributes(state, job)
    attributes[:klass] == self.name &&
      # compare as strings: decoded failed/working jobs store the queue name
      # as a string, while @queue is a symbol
      attributes[:queue].to_s == @queue.to_s &&
      attributes[:args][REPORT_ID_ARGS_INDEX].to_i == report_id
  end

  # remove jobs methods
  def self.remove_failed_jobs_by_report_id report_id
    find_failed_jobs_by_report_id(report_id).each do |job|
      Resque::Failure.remove(job["index"])
    end
  end

  def self.remove_queued_jobs_by_report_id report_id
    find_queued_jobs_by_report_id(report_id).each do |job|
      Resque::Job.destroy(@queue, job["class"], *job["args"])
    end
  end

  # reusable methods - these could live elsewhere and be shared across worker classes

  # job attributes method
  def self.job_attributes(state, job)
    if state == :queued && job["args"].present?
      args = job["args"]
      klass = job["class"]
    elsif job["payload"] && job["payload"]["args"].present?
      args = job["payload"]["args"]
      klass = job["payload"]["class"]
    else
      return { args: nil, klass: nil, queue: nil }
    end
    { args: args, klass: klass, queue: job["queue"] }
  end

  # jobs list methods
  def self.queued_jobs queue
    Resque.redis.lrange("queue:#{queue}", 0, -1).collect do |job|
      job = Resque.decode(job)
      job["queue"] = queue # for consistency only
      job
    end
  end

  def self.failed_jobs
    Resque.redis.lrange("failed", 0, -1).each_with_index.collect do |job, index|
      job = Resque.decode(job)
      job["index"] = index # required if removing
      job
    end
  end

  def self.working_jobs
    Resque.workers.zip(Resque.workers.map(&:job))
      .reject { |w, j| w.idle? || j["queue"].nil? }
  end
end
So then the usage example for Report model becomes
class Report < ActiveRecord::Base
  before_destroy :check_if_working_jobs, :remove_queued_and_failed_jobs

  def check_if_working_jobs
    # find all working jobs related to this report object
    working_jobs = ProcessReportWorker.find_working_jobs_by_report_id(self.id)
    return false unless working_jobs.empty?
  end

  def remove_queued_and_failed_jobs
    # find all jobs related to this report object
    queued_jobs = ProcessReportWorker.find_queued_jobs_by_report_id(self.id)
    failed_jobs = ProcessReportWorker.find_failed_jobs_by_report_id(self.id)

    # extra code and conditionals here for example only, as all that is really
    # needed is to call the remove methods without first finding or checking
    unless queued_jobs.empty?
      ProcessReportWorker.remove_queued_jobs_by_report_id(self.id)
    end
    unless failed_jobs.empty?
      ProcessReportWorker.remove_failed_jobs_by_report_id(self.id)
    end
  end
end
The solution needs to be modified if you use multiple queues for the worker class or if you have multiple failure queues. Also, redis failure backend was used. If a different failure backend is used, changes may be required.
I want to trigger mail to be sent one hour before an appointment comes up. I am using the at field from the @appointment instance variable.
class AppointmentController < ApplicationController
  def create
    if DateTime.now + 1.hour > @appointment.at
      AppointmentReminder.send_appointment_email(@appointment).deliver_now
    end
  end
end
This works if the appointment was created within an hour, but if the appointment was created in the future... then our poor customer won't be notified. Is there a mechanism where Rails can automatically deliver the email at the right time? I don't want to use a cronjob or rake task.
I'd recommend looking at background processing systems like Sidekiq or Sucker Punch which can be configured to perform jobs "later".
This way when the appointment is created you can schedule the job to execute at the correct time. You'll need to add checks to make sure when the job finally runs that it's still legitimate, etc.
http://sidekiq.org
https://github.com/brandonhilkert/sucker_punch
As you tagged your question with Rails 4.2, Active Job is exactly what you need.
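With Active Job in Rails 4.2, mailers gain deliver_later with a wait_until option, so the reminder can be scheduled at creation time rather than checked immediately. A sketch of the idea follows; the mailer call is shown as a comment since it requires the Rails app, while the scheduling arithmetic is plain Ruby:

```ruby
require 'time'

# Hypothetical appointment time; in the controller this would be @appointment.at
appointment_at = Time.utc(2015, 6, 1, 14, 0, 0)
reminder_at    = appointment_at - 3600  # one hour before (i.e. `@appointment.at - 1.hour` in Rails)

# In the controller's create action you would then schedule the mail:
#   AppointmentReminder.send_appointment_email(@appointment)
#                      .deliver_later(wait_until: reminder_at)
reminder_at  # => 2015-06-01 13:00:00 UTC
```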
You could use whenever to run a block of code on a schedule. Say, every 5 minutes, look for appointments that start within the next hour and send an email.
To prevent multiple servers from sending an email, you could have a status on the appointment to keep track of if the email has been sent.
Then, using postgres, you can use this SQL to grab records to send and use the database to decide which server is going to send out a specific email:
Email.find_by_sql("
  WITH updated AS (
    UPDATE emails SET status = 'processing' WHERE lower(status) = 'new' RETURNING id
  )
  SELECT *
  FROM emails
  WHERE id IN (SELECT id FROM updated)
  ORDER BY id ASC
")
I will share how I have done it. It works just fine.
First, install whenever gem.
You should have your mailer. Here is mine:
class WeeklyDigestMailer < ApplicationMailer
  default :from => "bla@bla.org"

  # Subject can be set in your I18n file at config/locales/en.yml
  # with the following lookup:
  #
  #   en.weekly_digest_mailer.weekly_promos.subject
  #
  helper ApplicationHelper

  def weekly_promos(suscriptor, promos)
    @promos = promos
    mail(:to => "<#{suscriptor.email}>", :subject => "Mercadillo digital semanal")
  end
end
Of course, you need to style your view.
Then, you create a rake task (in lib/tasks). Just like this:
desc 'send digest email'
task send_weekly_email: :environment do
  @promociones = Promo.where("validez > ?", Time.zone.now).order("created_at DESC")
  if @promociones.count > 0
    @suscriptors = Suscriptor.where(email_confirmation: true)
    @suscriptors.each do |suscriptor|
      WeeklyDigestMailer.weekly_promos(suscriptor, @promociones).deliver_now
    end
  end
end
Finally, you configure your schedule with the whenever gem. As I want to send the mails every Thursday at 9 am, I just put:
every :thursday, at: '9:00 am' do # Use any day of the week or :weekend, :weekday
  rake "send_weekly_email"
end
One important point: since you are using a rake task, use deliver_now instead of deliver_later, because if the task finishes before all the emails have been sent, the rest will go undelivered.
That's all.
I have a Post model (below) which has a callback method to modify the body attribute via a delayed job. If I remove "delay." and just execute #shorten_urls! instantly, it works fine. However, from the context of a delayed job, it does NOT save the updated body.
class Post < ActiveRecord::Base
  after_create :shorten_urls

  def shorten_urls
    delay.shorten_urls!
  end

  def shorten_urls!
    # this task might take a long time,
    # but for this example I'll just change the body to something else
    self.body = 'updated body'
    save!
  end
end
Strangely, the job is processed without any problems:
[Worker(host:dereks-imac.home pid:76666)] Post#shorten_urls! completed after 0.0021
[Worker(host:dereks-imac.home pid:76666)] 1 jobs processed at 161.7611 j/s, 0 failed ...
Yet, the body is not updated. Anyone know what I'm doing wrong?
-- EDIT --
Per Alex's suggestion, I've updated the code to look like this (but to no avail):
class Post < ActiveRecord::Base
  after_create :shorten_urls

  def self.shorten_urls!(post_id = nil)
    post = Post.find(post_id)
    post.body = 'it worked'
    post.save!
  end

  def shorten_urls
    Post.delay.shorten_urls!(self.id)
  end
end
One of the reasons might be that self is not serialized correctly when you pass a method to delay. Try making shorten_urls! a class method that takes a record id and fetches the record from the DB.
I have a process which takes generally a few seconds to complete so I'm trying to use delayed_job to handle it asynchronously. The job itself works fine, my question is how to go about polling the job to find out if it's done.
I can get an id from delayed_job by simply assigning it to a variable:
job = Available.delay.dosomething(:var => 1234)
+------+----------+----------+------------+------------+-------------+-----------+-----------+-----------+------------+-------------+
| id | priority | attempts | handler | last_error | run_at | locked_at | failed_at | locked_by | created_at | updated_at |
+------+----------+----------+------------+------------+-------------+-----------+-----------+-----------+------------+-------------+
| 4037 | 0 | 0 | --- !ru... | | 2011-04-... | | | | 2011-04... | 2011-04-... |
+------+----------+----------+------------+------------+-------------+-----------+-----------+-----------+------------+-------------+
But as soon as the job completes, it is deleted, and searching for the completed record returns an error:

@job = Delayed::Job.find(4037)
ActiveRecord::RecordNotFound: Couldn't find Delayed::Backend::ActiveRecord::Job with ID=4037

@job = Delayed::Job.exists?(params[:id])
Should I bother to change this, and maybe postpone the deletion of completed records? I'm not sure how else I can get a notification of its status. Or is polling a dead record as proof of completion OK? Has anyone else faced something similar?
Let's start with the API. I'd like to have something like the following.
@available.working?  # => true or false, so we know it's running
@available.finished? # => true or false, so we know it's finished (already ran)
Now let's write the job.
class AwesomeJob < Struct.new(:options)
  def perform
    do_something_with(options[:var])
  end
end
So far so good. We have a job. Now let's write logic that enqueues it. Since Available is the model responsible for this job, let's teach it how to start this job.
class Available < ActiveRecord::Base
  def start_working!
    Delayed::Job.enqueue(AwesomeJob.new(options))
  end

  def working?
    # not sure what to put here yet
  end

  def finished?
    # not sure what to put here yet
  end
end
So how do we know if the job is working or not? There are a few ways, but in Rails it just feels right that when my model creates something, it's associated with that something. How do we associate? Using ids in the database. Let's add a job_id to the Available model.
While we're at it, how do we know that a job is not working because it already finished, rather than because it hasn't started yet? One way is to check for what the job actually did. If it created a file, check that the file exists. If it computed a value, check that the result is written. Some jobs are not as easy to check, though, since there may be no clearly verifiable result of their work. For such a case, you can use a flag or a timestamp in your model. Assuming this is our case, let's add a job_finished_at timestamp to distinguish a job that hasn't run yet from one that has already finished.
class AddJobIdToAvailable < ActiveRecord::Migration
  def self.up
    add_column :available, :job_id, :integer
    add_column :available, :job_finished_at, :datetime
  end

  def self.down
    remove_column :available, :job_id
    remove_column :available, :job_finished_at
  end
end
Alright. So now let's actually associate Available with its job as soon as we enqueue the job, by modifying the start_working! method.
def start_working!
  job = Delayed::Job.enqueue(AwesomeJob.new(options))
  update_attribute(:job_id, job.id)
end
Great. At this point I could've written belongs_to :job, but we don't really need that.
So now we know how to write the working? method, so easy.
def working?
  job_id.present?
end
But how do we mark the job finished? Nobody knows a job has finished better than the job itself. So let's pass available_id into the job (as one of the options) and use it in the job. For that we need to modify the start_working! method to pass the id.
def start_working!
  job = Delayed::Job.enqueue(AwesomeJob.new(options.merge(:available_id => id)))
  update_attribute(:job_id, job.id)
end
And we should add the logic into the job to update our job_finished_at timestamp when it's done.
class AwesomeJob < Struct.new(:options)
  def perform
    available = Available.find(options[:available_id])
    do_something_with(options[:var])
    # Depending on whether you consider an error'ed job to be finished,
    # you may want to put this under an ensure. That way the job
    # will be deemed finished even if it error'ed out.
    available.update_attribute(:job_finished_at, Time.current)
  end
end
With this code in place we know how to write our finished? method.
def finished?
  job_finished_at.present?
end
And we're done. Now we can simply poll against @available.working? and @available.finished? Also, you gain the convenience of knowing which exact job was created for your Available by checking @available.job_id. You can easily turn it into a real association by saying belongs_to :job.
I ended up using a combination of Delayed Job with an after(job) callback that populates a memcached object with the same ID as the created job. This way I minimize the number of times I hit the database asking for the status of the job; I poll the memcached object instead, and it contains the entire object I need from the completed job, so I don't even have a round-trip request. I got the idea from an article by the GitHub guys, who did pretty much the same thing.
https://github.com/blog/467-smart-js-polling
and used a jquery plugin for the polling, which polls less frequently, and gives up after a certain number of retries
https://github.com/jeremyw/jquery-smart-poll
Seems to work great.
def after(job)
  prices = Room.prices.where("space_id = ? AND bookdate BETWEEN ? AND ?", space_id.to_i, date_from, date_to).to_a
  Rails.cache.fetch(job.id) do
    bed = Bed.new(:space_id => space_id, :date_from => date_from, :date_to => date_to, :prices => prices)
  end
end
I think the best way would be to use the callbacks available in delayed_job: :success, :error and :after.
so you can put some code in your model with the after:
class ToBeDelayed
  def perform
    # do something
  end

  def after(job)
    # do something
  end
end
Because if you insist on using obj.delay.method, then you'll have to monkey patch Delayed::PerformableMethod and add the after method there.
IMHO it's far better than polling for some value which might be even backend specific (ActiveRecord vs. Mongoid, for instance).
The simplest method of accomplishing this is to change your polling action to be something similar to the following:
def poll
  @job = Delayed::Job.find_by_id(params[:job_id])
  if @job.nil?
    # The job has completed and is no longer in the database.
  else
    if @job.last_error.nil?
      # The job is still in the queue and has not been run.
    else
      # The job has encountered an error.
    end
  end
end
Why does this work? When Delayed::Job runs a job from the queue, it deletes it from the database if successful. If the job fails, the record stays in the queue to be run again later, and the last_error attribute is set to the encountered error. Using these two pieces of behavior, you can treat a deleted record as a successful run.
The benefits to the method above are:
You get the polling effect that you were looking for in your original post
Using a simple logic branch, you can provide feedback to the user if there is an error in processing the job
You can encapsulate this functionality in a model method by doing something like the following:
# Include this in your initializers somewhere
class Queue < Delayed::Job
  def self.status(id)
    job = find_by_id(id)
    job.nil? ? "success" : (job.last_error.nil? ? "queued" : "failure")
  end
end

# Use this method in your poll method like so:
def poll
  status = Queue.status(params[:id])
  if status == "success"
    # Success, notify the user!
  elsif status == "failure"
    # Failure, notify the user!
  end
end
I'd suggest that if it's important to get notification that the job has completed, then write a custom job object and queue that rather than relying upon the default job that gets queued when you call Available.delay.dosomething. Create an object something like:
class DoSomethingAvailableJob
  attr_accessor :options

  def initialize(options = {})
    @options = options
  end

  def perform
    Available.dosomething(@options)
    # Do some sort of notification here
    # ...
  end
end
and enqueue it with:
Delayed::Job.enqueue DoSomethingAvailableJob.new(:var => 1234)
The delayed_jobs table in your application is intended to provide the status of running and queued jobs only. It isn't a persistent table, and really should be kept as small as possible for performance reasons. That's why the jobs are deleted immediately after completion.
Instead you should add field to your Available model that signifies that the job is done. Since I'm usually interested in how long the job takes to process, I add start_time and end_time fields. Then my dosomething method would look something like this:
def self.dosomething(model_id)
  model = Model.find(model_id)
  begin
    model.start!
    # do some long work ...
  rescue Exception => e
    # ...
  ensure
    model.finish!
  end
end
The start! and finish! methods just record the current time and save the model. Then I would have a completed? method that your AJAX can poll to see if the job is finished.
def completed?
  return true if start_time and end_time
  return false
end
There are many ways to do this but I find this method simple and works well for me.