I am using delayed_job (the tobi gem), and when I run the job the fbLikes count is all wrong: it seems to increment each time I add one more company. I'm not sure where the logic is wrong. I tested the fbLikes method before I switched to delayed_job and it worked.
I'm not sure where the "1" comes from...
[output]
coca-cola
http://www.cocacola.com
Likes: 1 <--- not sure why fbLikes is 1; the second company's fbLikes is 2, and so on...
.
[Worker(host:aname.local pid:1400)] Starting job worker
[Worker(host:aname.local pid:1400)] CountJob completed after 0.7893
[Worker(host:aname.local pid:1400)] 1 jobs processed at 1.1885 j/s, 0 failed ...
I am running the delayed_job from the model, trying to run a job that counts the Facebook likes. Here is my code:
[lib/count_job.rb]
require 'net/http'
require 'json'

class CountJob < Struct.new(:fbid)
  def perform
    uri = URI("http://graph.facebook.com/#{fbid}")
    data = Net::HTTP.get(uri)
    JSON.parse(data)['likes']
  end
end
[Company model]
before_save :fb_likes

def fb_likes
  self.fbLikes = Delayed::Job.enqueue(CountJob.new(self.fbId))
end
The issue is coming from:
before_save :fb_likes

def fb_likes
  self.fbLikes = Delayed::Job.enqueue(CountJob.new(self.fbId))
end
The enqueue method will not return the result of running the CountJob; it returns the newly enqueued Delayed::Job record. When you save that into the integer fbLikes column, the stored value is a coerced number (most likely the job's id), which is why the first company shows 1, the second 2, and so on. It is not a likes count at all.
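You can see what you're actually storing from a console session (a sketch; the point is that the return value is the job record, not a likes count):

job = Delayed::Job.enqueue(CountJob.new(company.fbId))
job.class # => Delayed::Job -- a record describing the queued work
job.id    # => 1, 2, 3, ... grows with every job enqueued, matching the counts you're seeing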
You should set fbLikes inside the job that delayed_job runs, not from the return value of the enqueue call.
before_save :enqueue_fb_likes

def enqueue_fb_likes
  Delayed::Job.enqueue(CountJob.new(self.fbId))
end
Your perform method in the CountJob class should probably take the model's id, so it can look up the company and access both the fbId and fbLikes attributes, instead of taking just the fbId:
class CountJob < Struct.new(:id)
  def perform
    company = Company.find(id)
    uri = URI("http://graph.facebook.com/#{company.fbId}")
    data = Net::HTTP.get(uri)
    company.fbLikes = JSON.parse(data)['likes']
    # beware: save re-runs the before_save callback above and will enqueue
    # another job each time; a guard (or update_column) avoids that loop
    company.save
  end
end
I have a recurring delayed job that sends out a confirmation email and marks the order as completed so that the next time the delayed job runs the order will not be re-processed.
Sometimes (it seems to be when a certain string is tied to a promo code field, but that might just be a coincidence) the job processes and sends the email but does not save the record and mark it as completed. I have used IRB to set the record to what the code would, and verified that the record is valid.
Any ideas why this might be happening or has anyone seen this happen?
class PaymentEmailAndLock < Struct.new(:blank)
  include Delayed::RecurringJob
  run_at '8:00pm'
  run_every 1.day
  timezone 'US/Eastern'
  queue 'dailyjobs'

  def perform
    time_set = 30.hours.ago..2.hours.ago
    @mail_and_lock = Cart.where(updated_at: time_set, payment_sent: true, complete_order_lock: false)
    @mail_and_lock.each do |obj|
      obj.complete_order_lock = true
      obj.survey_available = true
      obj.save
      if obj.payment == 1
        MessageMailer.delay(queue: 'mailers').message_payment_paper(obj.cust_email, obj)
      else
        MessageMailer.delay(queue: 'mailers').message_payment_digital(obj.cust_email, obj)
      end
    end
  end
end
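One way to find out why the save is silently failing (a sketch: save! raises on a failed validation instead of returning false, so the rescue can log the actual error messages):

begin
  obj.save!
rescue ActiveRecord::RecordInvalid => e
  Rails.logger.error("Cart #{obj.id} failed to lock: #{e.record.errors.full_messages.join(', ')}")
end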
I am working on a Ruby on Rails application. We have many Sidekiq workers that can process multiple jobs at a time. Each job makes calls to the Shopify API; the limit set by Shopify is 2 calls per second. I want to synchronize that, so that only two jobs can call the API in any given second.
The way I'm doing that right now, is like this:
# frozen_string_literal: true
class Synchronizer
  attr_reader :shop_id, :queue_name, :limit, :wait_time

  def initialize(shop_id:, queue_name:, limit: nil, wait_time: 1)
    @shop_id = shop_id
    @queue_name = queue_name.to_s
    @limit = limit
    @wait_time = wait_time
  end

  # This method should be called for each api call
  def synchronize_api_call
    raise "a block is required." unless block_given?
    get_api_call
    time_to_wait = calculate_time_to_wait
    sleep(time_to_wait) unless Rails.env.test? || time_to_wait.zero?
    yield
  ensure
    return_api_call
  end

  def set_api_calls
    redis.del(api_calls_list)
    redis.rpush(api_calls_list, calls_list)
  end

  private

  def get_api_call
    logger.log_message(synchronizer: 'Waiting for api call', color: :yellow)
    @api_call_timestamp = redis.brpop(api_calls_list)[1].to_i
    logger.log_message(synchronizer: 'Got api call.', color: :yellow)
  end

  def return_api_call
    redis_timestamp = redis.time[0]
    redis.rpush(api_calls_list, redis_timestamp)
  ensure
    redis.ltrim(api_calls_list, 0, limit - 1)
  end

  def last_call_timestamp
    @api_call_timestamp
  end

  def calculate_time_to_wait
    current_time = redis.time[0]
    time_passed = current_time - last_call_timestamp.to_i
    time_to_wait = wait_time - time_passed
    time_to_wait > 0 ? time_to_wait : 0
  end

  def reset_api_calls
    redis.multi do |r|
      r.del(api_calls_list)
    end
  end

  def calls_list
    redis_timestamp = redis.time[0]
    limit.times.map { redis_timestamp }
  end

  def api_calls_list
    @api_calls_list ||= "api-calls:shop:#{shop_id}:list"
  end

  def redis
    Thread.current[:redis] ||= Redis.new(db: $redis_db_number)
  end
end
The way I use it is like this:
synchronizer = Synchronizer.new(shop_id: shop_id, queue_name: 'shopify_queue', limit: 2, wait_time: 1)
# this is called once when the process starts, i.e. not by the jobs themselves
# but by the app from where the process is kicked off
synchronizer.set_api_calls # populates api_calls_list with 2 timestamps; those timestamps are used to know when the last api call was sent
Then, when a job wants to make a call:
synchronizer.synchronize_api_call do
  # make the call
end
The problem
The problem with this is that if for some reason a job fails to return the api_call it took to the api_calls_list, that job and all the other jobs get stuck forever, or until we notice and call set_api_calls again. The problem doesn't affect only that particular shop, but the other shops as well, because the Sidekiq workers are shared between all the shops using our app. Sometimes we don't notice until a user calls us, and we find that something has been stuck for many hours when it should have finished in a few minutes.
The Question
I recently realised that Redis is not the best tool for shared locking. So I am asking: is there a better tool for this job? If not in the Ruby world, I'd like to learn from others as well. I'm interested in the techniques as well as the tools. Every bit helps.
You may want to restructure your code and create a micro-service to process the API calls, which would use a local locking mechanism and force your workers to wait on the socket. It comes with the added complexity of maintaining the micro-service. But if you're in a hurry, then Ent-Rate-Limiting looks cool too.
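If you do stay on Redis, one way to keep a lost token from wedging the queue forever is to make the limiter window-based, so its state expires on its own. A minimal sketch, assuming the redis-rb gem (the method name is mine):

# count calls in the current one-second window; the keys expire on their own,
# so a crashed job can never leave the limiter stuck
def acquire_slot(redis, shop_id, limit: 2)
  loop do
    window = redis.time[0] # current second, per the Redis server clock
    key = "api-calls:shop:#{shop_id}:#{window}"
    count = redis.incr(key)
    redis.expire(key, 2) # old windows clean themselves up
    return if count <= limit
    sleep 0.1 # over the limit: retry in the next window
  end
end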
I have three Rails jobs to process a player's yellow/red cards in a soccer tournament, and the penalties these players will receive for getting those cards.
The idea is that the first job collects all Incidences (an Incidence is when a player gets a yellow card, for example) and counts all the cards a player got.
class ProcessBookedPlayersJob < ApplicationJob
  queue_as :default
  @cards = []

  def perform(*args)
    @cards = []
    yellows = calculate_cards(1)
    reds = calculate_cards(2)
    @cards << yellows << reds
  end

  def after_perform(*match)
    # ProcessPenaltiesJob.perform_later @cards
    ProcessPenalties.perform_later @cards
    # PenaltiesFinalizerJob.perform_later match
    PenaltiesFinalizer.perform_later match
  end

  def calculate_cards(card_type)
    cards = Hash.new
    players = Player.fetch_players_with_active_incidences
    players.each { |p|
      # 1 is yellow, 2 is red
      counted_cards = Incidence.incidences_for_player_id_and_incidence_type(p.id, card_type).size
      cards[p] = counted_cards
    }
    return cards
  end
end
This first job is executed when an Incidence is created.
class Incidence < ApplicationRecord
  belongs_to :player
  belongs_to :match

  after_save :process_incidences, on: :create

  def self.incidences_for_player_id_and_incidence_type(player_id, card_type)
    return Incidence.where(status: 1).where(incidence_type: card_type).where(player_id: player_id)
  end

  protected

  def process_incidences
    ProcessBookedPlayers.perform_later
  end
end
After this, another job runs and creates the necessary Penalties (a Penalty is a ban for the next Match, for example) according to the Hash output that the previous job created.
class ProcessPenaltiesJob < ApplicationJob
  queue_as :default

  def perform(*cards)
    yellows = cards[0]
    reds = cards[1]
    create_penalties_for_yellow_cards(yellows)
    create_penalties_for_red_cards(reds)
  end
  # rest of the job...
And there's also another job that disables these bans once they have expired.
class PenaltiesFinalizerJob < ApplicationJob
  queue_as :default

  def perform(match)
    active_penalties = Penalty.find_by(status: 1)
    active_penalties.each do |p|
      # penalty.starting_match.order + penalty.length == el_match_que_inserte.order (check whether it should be >=)
      if p.match.order + p.length >= match.order
        p.status = 2 # inactivate
        p.save!
      end
    end
  end
end
As you can see in ProcessBookedPlayersJob's after_perform method
def after_perform(*match)
  ProcessPenalties.perform_later @cards
  PenaltiesFinalizer.perform_later match
end
I'm trying to get those two other jobs (ProcessPenaltiesJob and PenaltiesFinalizerJob) executed, with no luck. The ProcessBookedPlayersJob itself is being executed (I can see this in the log):
[ActiveJob] [ProcessBookedPlayersJob] [dbb8445e-a706-4443-9cb8-2c45f49a4f8f] Performed ProcessBookedPlayersJob (Job ID: dbb8445e-a706-4443-9cb8-2c45f49a4f8f) from Async(default) in 38.81ms
But the other two jobs aren't executed. So, how can I get both ProcessPenaltiesJob and PenaltiesFinalizerJob to run after ProcessBookedPlayersJob has finished? I don't mind if they run in parallel, but they need to run after the first one finishes, since they need its output as their input.
I have searched for this, and the closest match I found was this answer. Quoting it:
"If the sequential jobs you are talking about however are of different jobs / classes, then you can just call the other job once the first job has finished."
That's exactly the behaviour I'm trying to have... but how can I get my jobs to run sequentially?
For now, I'm thinking of adding the first job's logic into Incidence's after_save hook, but that doesn't sound too natural. Is there any other way to pipeline the execution of my jobs?
Many thanks in advance
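For reference, the quoted advice amounts to enqueueing the follow-ups at the end of perform, so they only run once the first job's work is done. A sketch (not the asker's code; note that ActiveJob only serializes simple types, so keying the cards hash by player id rather than by Player object would be safer):

class ProcessBookedPlayersJob < ApplicationJob
  queue_as :default

  def perform(*args)
    cards = [calculate_cards(1), calculate_cards(2)]
    # the follow-up jobs are enqueued only after the work above has finished,
    # and the first job's output is passed along as their input
    ProcessPenaltiesJob.perform_later(cards)
    PenaltiesFinalizerJob.perform_later(args)
  end
end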
I have a Rails app in which I use delayed_job. I want to detect whether I am in a delayed_job process or not; something like
if in_delayed_job?
  # do something only if it is a delayed_job process...
else
  # do something only if it is not a delayed_job process...
end
But I can't figure out how. This is what I'm using now:
IN_DELAYED_JOB = begin
  basename = File.basename $0
  arguments = $*
  rake_args_regex = /\Ajobs:/
  (basename == 'delayed_job') ||
    (basename == 'rake' && arguments.find { |v| v =~ rake_args_regex })
end
Another solution is, as @MrDanA said:
$ DELAYED_JOB=true script/delayed_job start
# And in the app:
IN_DELAYED_JOB = ENV['DELAYED_JOB'].present?
but they are IMHO weak solutions. Can anyone suggest a better solution?
The way that I handle these is through a paranoid worker. I use delayed_job to transcode video that was uploaded to my site. Within the video model I have a field called video_processing, which is set to 0/null by default. Whenever the video is being transcoded by the delayed_job (whether on create or update of the video file), the job's delayed_job hooks update video_processing when the job starts; once the job completes, the hook sets the field back to 0.
In my view/controller I can then do video.video_processing? ? "Video Transcoding in Progress" : "Video Finished Transcoding".
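In delayed_job terms, that flag flipping can live in the job's lifecycle hooks. A sketch (the model and field names are assumed from the description above):

class TranscodeJob < Struct.new(:video_id)
  def perform
    # ... the actual transcoding work ...
  end

  def before(job)
    # mark the video as busy when a worker picks the job up
    Video.find(video_id).update_column(:video_processing, 1)
  end

  def success(job)
    # clear the flag once the job finished without errors
    Video.find(video_id).update_column(:video_processing, 0)
  end
end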
Maybe something like this. Add a field to your class and set it when you invoke the method that does all your work from delayed_job:
class User < ActiveRecord::Base
  attr_accessor :in_delayed_job

  def queue_calculation_request
    Delayed::Job.enqueue(CalculationRequest.new(self.id))
  end

  def do_the_work
    if in_delayed_job
      puts "Im in delayed job"
    else
      puts "I was called directly"
    end
  end

  class CalculationRequest < Struct.new(:id)
    def perform
      user = User.find(id)
      user.in_delayed_job = true
      user.do_the_work
    end

    def display_name
      "Perform the needful user Calculations"
    end
  end
end
Here is how it looks:
From Delayed Job:
[Worker(host:Johns-MacBook-Pro.local pid:67020)] Starting job worker
Im in delayed job
[Worker(host:Johns-MacBook-Pro.local pid:67020)] Perform the needful user Calculations completed after 0.2787
[Worker(host:Johns-MacBook-Pro.local pid:67020)] 1 jobs processed at 1.5578 j/s, 0 failed ...
From the console
user = User.first.do_the_work
User Load (0.8ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT 1 [["id", 101]]
I was called directly
This works for me:
def delayed_job_worker?
  ENV["_"].include?("delayed_job")
end
Unix will set the "_" environment variable to the current command.
It'll be wrong if you have a bin script called "not_a_delayed_job", but don't do that.
How about ENV['PROC_TYPE']? Speaking only of Heroku: when you're in a worker dyno, this is set to 'worker'.
I use it as my "I'm in a DJ" check.
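As a sketch (Heroku-only; the helper name is mine):

def in_delayed_job?
  ENV['PROC_TYPE'] == 'worker'
end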
You can create a plugin for delayed job, e.g. create the file is_dj_job_plugin.rb in the config/initializers directory.
class IsDjJobPlugin < Delayed::Plugin
  callbacks do |lifecycle|
    lifecycle.around(:invoke_job) do |job, *args, &block|
      begin
        old_is_dj_job = Thread.current[:is_dj_job]
        Thread.current[:is_dj_job] = true
        block.call(job, *args) # forward the call to the next callback in the chain
      ensure
        Thread.current[:is_dj_job] = old_is_dj_job # restore even if the job raises
      end
    end
  end

  def self.is_dj_job?
    Thread.current[:is_dj_job] == true
  end
end

Delayed::Worker.plugins << IsDjJobPlugin
You can then test it in the following way:
class PrintDelayedStatus
  def run
    puts IsDjJobPlugin.is_dj_job? ? 'delayed' : 'not delayed'
  end
end

PrintDelayedStatus.new.run
PrintDelayedStatus.new.delay.run
I have a delayed_job designed to send an email using a mailer.
Upon completion, I need to record that the email was sent -- I do this by saving the newly created ContactEmail.
Right now, the new ContactEmail record gets saved even if the delayed_job fails.
How do I correct that so that the new ContactEmail is only saved when the mailer is successfully sent?
Here is the snippet from the cron task which calls the delayed_job:
puts contact_email.subject
contact_email.date_sent = Date.today
contact_email.date_created = Date.today
contact_email.body = email.substituted_message(contact, contact.colleagues)
contact_email.status = "sent"
# Delayed::Job.enqueue OutboundMailer.deliver_campaign_email(contact, contact_email)
Delayed::Job.enqueue SomeMailJob.new(contact, contact_email)
contact_email.save # now save the record
Here is the some_mail_job.rb
class SomeMailJob < Struct.new(:contact, :contact_email)
  def perform
    OutboundMailer.deliver_campaign_email(contact, contact_email)
  end
end
And here is the outbound_mailer:
class OutboundMailer < Postage::Mailer
  def campaign_email(contact, email)
    subject    email.subject
    recipients contact.email
    from       '<me@me.com>'
    sent_on    Date.today
    body       :email => email
  end
end
You could update the status in the perform of the job itself.
For example, something like:
contact_email.status = 'queued'
contact_email.save
contact_email.delay.deliver_campaign_email
And then in your ContactEmail class, something to the effect of
def deliver_campaign_email
  OutboundMailer.deliver_campaign_email(self.contact, self)
  self.status = 'sent' # or handle failure and set it appropriately
  self.save
end
delayed_job adds some magic bits to your models that handle the persistence.
In order to deal with your OutboundMailer throwing an exception, you can do something like so:
def deliver_campaign_email
  begin
    OutboundMailer.deliver_campaign_email(self.contact, self)
    self.status = 'sent'
  rescue
    self.status = 'failed' # or better yet, grab the message from the exception
  end
  self.save
end
You need synchronous delivery, so stop using delayed_job in this case and do a standard mailer delivery.
Or add a success column to your ContactEmail: initially save it as false, then update it to true in the job.
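The second suggestion would look roughly like this (a sketch; the success column is assumed to exist on ContactEmail):

# in the cron task: persist the record up front, marked as not yet sent
contact_email.success = false
contact_email.save
Delayed::Job.enqueue SomeMailJob.new(contact, contact_email)

# in the job: flip the flag only after the mailer returned without raising
class SomeMailJob < Struct.new(:contact, :contact_email)
  def perform
    OutboundMailer.deliver_campaign_email(contact, contact_email)
    contact_email.update_attribute(:success, true)
  end
end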