I have a Post model (below) which has a callback method to modify the body attribute via a delayed job. If I remove "delay." and just execute #shorten_urls! instantly, it works fine. However, from the context of a delayed job, it does NOT save the updated body.
class Post < ActiveRecord::Base
  after_create :shorten_urls

  def shorten_urls
    delay.shorten_urls!
  end

  def shorten_urls!
    # this task might take a long time,
    # but for this example I'll just change the body to something else
    self.body = 'updated body'
    save!
  end
end
Strangely, the job is processed without any problems:
[Worker(host:dereks-imac.home pid:76666)] Post#shorten_urls! completed after 0.0021
[Worker(host:dereks-imac.home pid:76666)] 1 jobs processed at 161.7611 j/s, 0 failed ...
Yet, the body is not updated. Anyone know what I'm doing wrong?
-- EDIT --
Per Alex's suggestion, I've updated the code to look like this (but to no avail):
class Post < ActiveRecord::Base
  after_create :shorten_urls

  def self.shorten_urls!(post_id = nil)
    post = Post.find(post_id)
    post.body = 'it worked'
    post.save!
  end

  def shorten_urls
    Post.delay.shorten_urls!(self.id)
  end
end
One of the reasons might be that self is not serialized correctly when you pass a method to delay. Try making shorten_urls! a class method that takes the record id and fetches the record from the database.
Related
I want to save this message five times from an Active Job, not from the controller.
Is there any way to do that?
Here message.save! just returns true and does not save a new message to the database.
class MessageBroadcastJob < ApplicationJob
  queue_as :default

  def perform(message)
    5.times do
      message.save!
      ActionCable.server.broadcast 'chat', { message: render_message(message) }
    end
  end

  private

  def render_message(message)
    MessagesController.render(
      partial: 'message',
      locals: {
        message: message
      }
    )
  end
end
This code is from the model:
class Message < ApplicationRecord
  belongs_to :user

  after_create_commit {
    MessageBroadcastJob.perform_later(self)
  }
end
You are not calling your job anywhere in your code. With after_create_commit you are saying that after a message has been saved you want to run this job, but you never save one.
So you have to save a message at least once for the job to run; that means somewhere in your code (your controller, for example) you have to save a message, and the job will run once that has happened. If you still have an issue, please check that the job worker is running properly and that your Redis configuration is correct.
From your code it looks like you are using Action Cable, which means you need to put something like the following in your conversation channel file:
Message.create(message_params)
As soon as the code above runs, your job will run as well. If all of the above is in place, there is still an issue in your job: you keep saving the same message, so it won't save five messages. You have to build new records from its attributes, like this:
m = Message.new
m.body = message.body
m.user_id = message.user_id
# set up any other required attributes
m.save
ActionCable.server.broadcast 'chat', { message: render_message(message) }
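For what it's worth, here is a minimal sketch of the whole perform written that way (assuming Message only needs body and user_id, and that you really do want five separate rows):

class MessageBroadcastJob < ApplicationJob
  queue_as :default

  def perform(message)
    5.times do
      # Build a fresh record each time instead of re-saving the same one.
      # Note: Message's after_create_commit will enqueue this job again for
      # each copy, so you may want to guard against that (not shown here).
      copy = Message.create!(body: message.body, user_id: message.user_id)
      ActionCable.server.broadcast 'chat', { message: render_message(copy) }
    end
  end

  private

  def render_message(message)
    MessagesController.render(partial: 'message', locals: { message: message })
  end
end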
I'm trying to find out, from inside a particular action, which job started that action.
Let me explain.
class MyClass
  def go_for_it(delay = true)
    if delay
      delay(run_at: 2.minutes.from_now).go_for_it(false)
    else
      # How can I know if I was called by a DelayedJob, and if yes, which one?
      puts "I'll do it"
    end
  end
end
my_class = MyClass.new
my_class.delay(run_at: 2.minutes.from_now).go_for_it
My aim here is to put restrictions on job creation. I don't want the go_for_it method called twice, but the method can delay itself again depending on certain conditions. If I add these lines to go_for_it:
calling_method = caller_locations[0].label
job = Delayed::Job.where(queue: "my_queue").first
puts job.payload_object.id
# => id of MyClass if recorded
puts job.payload_object.method_name
# => :go_for_it
In the case of go_for_it delaying itself, this data is not enough, because the job variable can be the job itself, and then it's not a second, different call of go_for_it; it's just the same call delayed again.
What I need to know here is which job called run or invoke_job on the go_for_it method.
If I'm understanding correctly, you need to know which job is actually running.
You can use a custom job with a before hook to run some code before the job executes; at that point you also have full access to the job object.
Example:
class MyClassJob
  def initialize(my_object: MyClass.new)
    @my_object = my_object
  end

  def before(job)
    binding.pry
    another_job = Delayed::Job.where(queue: "my_queue").where('id <> ?', job.id)
  end

  def perform
    @my_object.go_for_it
  end
end
MyClassJob.new().delay.perform
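Building on that, here is a minimal sketch of how the before hook could answer the original question (which job is running me?) and skip duplicate work; the de-duplication rule itself is just an assumption:

class MyClassJob
  def initialize(my_object: MyClass.new)
    @my_object = my_object
  end

  # delayed_job calls this hook with the Delayed::Job record that is
  # currently being run, so we can keep a reference to it.
  def before(job)
    @current_job = job
  end

  def perform
    # Any other pending job on the same queue, excluding the one running now?
    others = Delayed::Job.where(queue: "my_queue").where('id <> ?', @current_job.id)
    # Run the work directly (delay = false) unless a duplicate is already queued.
    @my_object.go_for_it(false) unless others.exists?
  end
end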
I need to run a delayed job to count fbLikes in my model, but I get the error "undefined method `send_later`". Is there any way to run my fb_likes method in the model as a delayed job?
-- EDIT: latest code --
This is the latest code in my project. Things are still the same: fb_likes still does not get the likes count.
[Company.rb]-MODEL
require "delayed_job"
require "count_job.rb"
class Company < ActiveRecord::Base
before_save :fb_likes
def fb_likes
Delayed::Job.enqueue(CountJob.new(self.fbId))
end
end
[config/lib/count_job.rb]
class CountJob < Struct.new(:fbId)
  def perform
    uri = URI("http://graph.facebook.com/#{fbId}")
    data = Net::HTTP.get(uri)
    self.fbLikes = JSON.parse(data)['likes']
  end
end
[controller]
def create
  @company = Company.new(params[:company])
  if @company.save!
    flash[:success] = "New company successfully registered."
    ----and other more code----
Library files are not required by default.
Rename the job file to count_job.rb. Using camelCase for filenames is insane and will burn you in unpredictable ways.
Create an initializer and add require 'count_job.rb'
One way is to create a separate worker that gets queued, then runs, fetches the updated model, and calls its fb_likes method on it (the method will need to be public). Or move the logic into the worker itself.
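A minimal sketch of the second option (moving the logic into the worker), assuming the Company model has fbId and fbLikes columns, that the job is handed the company's database id, and Rails 3.1+ for update_column:

require 'net/http'
require 'json'

class CountJob < Struct.new(:company_id)
  def perform
    company = Company.find(company_id)
    data = Net::HTTP.get(URI("http://graph.facebook.com/#{company.fbId}"))
    # update_column writes straight to the database and skips callbacks,
    # so the before_save hook doesn't enqueue yet another job here.
    company.update_column(:fbLikes, JSON.parse(data)['likes'])
  end
end

It would then be enqueued from the model with something like Delayed::Job.enqueue(CountJob.new(id)), ideally from an after_save rather than before_save so the record already has an id.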
class Radar
  include Mongoid::Document

  after_save :post_on_facebook

  private

  def post_on_facebook
    if self.user.settings.post_facebook
      Delayed::Job.enqueue(::FacebookJob.new(self.user, self.body, url, self.title), 0, self.active_from)
    end
  end
end
class FacebookJob < Struct.new(:user, :body, :url, :title)
  include SocialPluginsHelper

  def perform
    facebook_client(user).publish_feed('', :message => body, :link => url, :name => title)
  end
end
I want to execute the post_on_facebook method at a specific date, which I store in the active_from field.
The code above works, and the job is executed at the correct date.
But in some cases I first create a Radar object and enqueue a job, and after that I update the object and enqueue another job.
This is the wrong behavior, because I want the job executed only once, at the correct time; with this implementation I end up with two jobs that will both be executed. How can I delete the previous job so that only the updated one is executed?
Rails 3.0.7
Delayed Job => 2.1.4 https://github.com/collectiveidea/delayed_job
PS: sorry for my English, I'm trying my best.
Sounds like you want to de-queue any existing job when a Radar object gets updated, and then re-queue.
Delayed::Job.enqueue should return a Delayed::Job record, so you can grab the id off of that and save it back onto the Radar record (create a field for it on the Radar document) so you can find it again easily later.
You should change it to a before_save so you don't enter an infinite loop of saving.
before_save :post_on_facebook

def post_on_facebook
  if self.user.settings.post_facebook && self.valid?
    # delete the existing delayed job if present
    Delayed::Job.find(self.delayed_job_id).destroy if self.delayed_job_id
    # enqueue a new job
    dj = Delayed::Job.enqueue(
      ::FacebookJob.new(self.user, self.body, url, self.title), 0, self.active_from
    )
    # save the id of the delayed job on the radar record
    self.delayed_job_id = dj.id
  end
end
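Since Radar is a Mongoid document, the delayed_job_id used above also has to be declared as a field; a minimal sketch (the field name matches the code above, the Integer type assumes the ActiveRecord-backed delayed_jobs table):

class Radar
  include Mongoid::Document

  # Holds the id of the currently pending Delayed::Job, if any.
  field :delayed_job_id, :type => Integer
end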
Did you try storing the id of the delayed job so it can be deleted later?
E.g.:
job = Delayed::Job.enqueue(::FacebookJob.new(self.user, self.body, url, self.title), 0, self.active_from)
job_id = job.id  # store this id somewhere for later
# ...later, when the Radar is updated:
Delayed::Job.find(job_id).delete
I have a process which generally takes a few seconds to complete, so I'm trying to use delayed_job to handle it asynchronously. The job itself works fine; my question is how to go about polling the job to find out if it's done.
I can get the job record (and its id) from delayed_job by simply assigning the return value to a variable:
job = Available.delay.dosomething(:var => 1234)
+------+----------+----------+------------+------------+-------------+-----------+-----------+-----------+------------+-------------+
| id | priority | attempts | handler | last_error | run_at | locked_at | failed_at | locked_by | created_at | updated_at |
+------+----------+----------+------------+------------+-------------+-----------+-----------+-----------+------------+-------------+
| 4037 | 0 | 0 | --- !ru... | | 2011-04-... | | | | 2011-04... | 2011-04-... |
+------+----------+----------+------------+------------+-------------+-----------+-----------+-----------+------------+-------------+
But as soon as the job completes it is deleted, and searching for the completed record raises an error:
@job = Delayed::Job.find(4037)
ActiveRecord::RecordNotFound: Couldn't find Delayed::Backend::ActiveRecord::Job with ID=4037
@job = Delayed::Job.exists?(params[:id])
Should I bother to change this, and maybe postpone the deletion of completed records? I'm not sure how else I can get a notification of its status. Or is polling a dead record as proof of completion OK? Has anyone else faced something similar?
Let's start with the API. I'd like to have something like the following.
@available.working?  # => true or false, so we know it's running
@available.finished? # => true or false, so we know it's finished (already ran)
Now let's write the job.
class AwesomeJob < Struct.new(:options)
  def perform
    do_something_with(options[:var])
  end
end
So far so good. We have a job. Now let's write logic that enqueues it. Since Available is the model responsible for this job, let's teach it how to start this job.
class Available < ActiveRecord::Base
  def start_working!
    Delayed::Job.enqueue(AwesomeJob.new(options))
  end

  def working?
    # not sure what to put here yet
  end

  def finished?
    # not sure what to put here yet
  end
end
So how do we know if the job is working or not? There are a few ways, but in Rails it just feels right that when my model creates something, it's usually associated with that something. How do we associate? Using ids in the database. Let's add a job_id column to the Available model.
While we're at it, how do we know that the job is not working because it already finished, or because it hasn't started yet? One way is to check what the job actually did. If it created a file, check that the file exists. If it computed a value, check that the result is written. Some jobs are not as easy to check, though, since there may be no clearly verifiable result of their work. For such cases, you can use a flag or a timestamp in your model. Assuming this is our case, let's add a job_finished_at timestamp to distinguish a job that hasn't run yet from one that has already finished.
class AddJobIdToAvailable < ActiveRecord::Migration
  def self.up
    add_column :availables, :job_id, :integer
    add_column :availables, :job_finished_at, :datetime
  end

  def self.down
    remove_column :availables, :job_id
    remove_column :availables, :job_finished_at
  end
end
Alright. So now let's actually associate Available with its job as soon as we enqueue the job, by modifying the start_working! method.
def start_working!
  job = Delayed::Job.enqueue(AwesomeJob.new(options))
  update_attribute(:job_id, job.id)
end
Great. At this point I could've written belongs_to :job, but we don't really need that.
So now we know how to write the working? method, so easy.
def working?
  job_id.present?
end
But how do we mark the job finished? Nobody knows a job has finished better than the job itself. So let's pass available_id into the job (as one of the options) and use it in the job. For that we need to modify the start_working! method to pass the id.
def start_working!
  job = Delayed::Job.enqueue(AwesomeJob.new(options.merge(:available_id => id)))
  update_attribute(:job_id, job.id)
end
And we should add the logic into the job to update our job_finished_at timestamp when it's done.
class AwesomeJob < Struct.new(:options)
  def perform
    available = Available.find(options[:available_id])
    do_something_with(options[:var])
    # Depending on whether you consider an errored job to be finished,
    # you may want to put this under an ensure. This way the job
    # will be deemed finished even if it errored out.
    available.update_attribute(:job_finished_at, Time.current)
  end
end
With this code in place we know how to write our finished? method.
def finished?
  job_finished_at.present?
end
And we're done. Now we can simply poll against @available.working? and @available.finished? Also, you gain the convenience of knowing which exact job was created for your Available by checking @available.job_id. You can easily turn it into a real association by saying belongs_to :job.
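If you do want that real association, a minimal sketch (assuming the ActiveRecord backend for delayed_job, so jobs live in the delayed_jobs table):

class Available < ActiveRecord::Base
  # job_id on availables points at delayed_jobs.id; note the row disappears
  # once the job has run successfully, so the association can return nil.
  belongs_to :job, :class_name => 'Delayed::Job', :foreign_key => :job_id
end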
I ended up using a combination of delayed_job and an after(job) callback that populates a memcached object with the same id as the job. This way I minimize the number of times I hit the database asking for the status of the job, polling the memcached object instead. And it contains the entire object I need from the completed job, so I don't even have a round-trip request. I got the idea from an article by the GitHub guys who did pretty much the same thing.
https://github.com/blog/467-smart-js-polling
and used a jquery plugin for the polling, which polls less frequently, and gives up after a certain number of retries
https://github.com/jeremyw/jquery-smart-poll
Seems to work great.
def after(job)
  prices = Room.prices.where("space_id = ? AND bookdate BETWEEN ? AND ?", space_id.to_i, date_from, date_to).to_a
  Rails.cache.fetch(job.id) do
    bed = Bed.new(:space_id => space_id, :date_from => date_from, :date_to => date_to, :prices => prices)
  end
end
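For reference, the polling side can then read straight from the cache instead of the database; a minimal sketch (the controller and action names are assumptions):

class AvailabilitiesController < ApplicationController
  # Hypothetical endpoint hit by the jQuery smart-poll plugin.
  def poll
    result = Rails.cache.read(params[:job_id])
    if result
      # The after(job) hook has run, so the work is done.
      render :json => { :done => true, :bed => result }
    else
      render :json => { :done => false }
    end
  end
end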
I think the best way would be to use the callbacks available in delayed_job.
These are:
:success, :error and :after.
So you can put some code in your job class with the after hook:
class ToBeDelayed
  def perform
    # do something
  end

  def after(job)
    # do something
  end
end
Because if you insist on using obj.delay.method, then you'll have to monkey patch Delayed::PerformableMethod and add the after method there.
IMHO it's far better than polling for some value which might even be backend specific (ActiveRecord vs. Mongoid, for instance).
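A minimal sketch of that monkey patch, assuming delayed_job invokes lifecycle hooks defined on the payload object; the body here just logs, so adapt it to whatever notification you need:

# e.g. in an initializer (hypothetical location)
module Delayed
  class PerformableMethod
    # Called by the worker after the job has run.
    def after(job)
      Rails.logger.info "Delayed job ##{job.id} on #{object.class} finished"
    end
  end
end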
The simplest method of accomplishing this is to change your polling action to be something similar to the following:
def poll
  @job = Delayed::Job.find_by_id(params[:job_id])
  if @job.nil?
    # The job has completed and is no longer in the database.
  else
    if @job.last_error.nil?
      # The job is still in the queue and has not been run.
    else
      # The job has encountered an error.
    end
  end
end
Why does this work? When Delayed::Job runs a job from the queue, it deletes the record from the database if the job succeeds. If the job fails, the record stays in the queue to be run again later, and its last_error attribute is set to the encountered error. Using those two pieces of functionality, you can check for deleted records to see if they were successful.
The benefits to the method above are:
You get the polling effect that you were looking for in your original post
Using a simple logic branch, you can provide feedback to the user if there is an error in processing the job
You can encapsulate this functionality in a model method by doing something like the following:
# Include this in your initializers somewhere
class Queue < Delayed::Job
def self.status(id)
self.find_by_id(id).nil? ? "success" : (job.last_error.nil? ? "queued" : "failure")
end
end
# Use this method in your poll method like so:
def poll
  status = Queue.status(params[:id])
  if status == "success"
    # Success, notify the user!
  elsif status == "failure"
    # Failure, notify the user!
  end
end
I'd suggest that if it's important to get notification that the job has completed, then write a custom job object and queue that rather than relying upon the default job that gets queued when you call Available.delay.dosomething. Create an object something like:
class DoSomethingAvailableJob
  attr_accessor :options

  def initialize(options = {})
    @options = options
  end

  def perform
    Available.dosomething(@options)
    # Do some sort of notification here
    # ...
  end
end
and enqueue it with:
Delayed::Job.enqueue DoSomethingAvailableJob.new(:var => 1234)
The delayed_jobs table in your application is intended to provide the status of running and queued jobs only. It isn't a persistent table, and really should be as small as possible for performance reasons. That's why the jobs are deleted immediately after completion.
Instead you should add a field to your Available model that signifies that the job is done. Since I'm usually interested in how long the job takes to process, I add start_time and end_time fields. Then my dosomething method would look something like this:
def self.dosomething(model_id)
  model = Model.find(model_id)
  begin
    model.start!
    # do some long work ...
  rescue Exception => e
    # ...
  ensure
    model.finish!
  end
end
The start! and finish! methods just record the current time and save the model. Then I would have a completed? method that your AJAX can poll to see if the job is finished.
def completed?
  return true if start_time and end_time
  return false
end
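For completeness, a minimal sketch of the start! and finish! helpers mentioned above (assuming start_time and end_time datetime columns on the model):

def start!
  update_attribute(:start_time, Time.current)
end

def finish!
  update_attribute(:end_time, Time.current)
end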
There are many ways to do this but I find this method simple and works well for me.