In my Rails app, I have added a job to update the User status from created to processing.
class UserStatusCreatedToProcessingJob < ApplicationScheduledJob
  def perform
    title = "[UserStatusCreatedToProcessingJob]"
    Rails.logger.info "==> #{title} - Start"
    users = User.where(status: 'created')
    total = users.count
    if total == 0
      Rails.logger.info "==> #{title} : User Count Is Zero."
    else
      users.update_all(status: 'processing')
      Rails.logger.info "==> #{title} : User Count : #{total} Changed to Processing."
    end
  end
end
If the job finds any users, it updates their status from created to processing. If it finds no users, it enters the if branch and logs "User Count Is Zero."
But either way, at the end the job's status becomes Failed:
[xxxxx] Scheduled Job Failed: UserStatusCreatedToProcessingJob 13493
The job ran successfully, but it is still reported as Failed. Please suggest what might be wrong.
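ApplicationScheduledJob is a custom base class not shown here, so this is only a debugging sketch, assuming the wrapper marks a job Failed when perform raises: rescue and log inside perform to see whether an exception is being swallowed somewhere.

def perform
  title = "[UserStatusCreatedToProcessingJob]"
  # ... existing body unchanged ...
rescue => e
  # log the real error, then re-raise so the scheduler still records the failure
  Rails.logger.error "==> #{title} failed: #{e.class}: #{e.message}"
  raise
end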
Related
I have a recurring delayed job that sends out a confirmation email and marks the order as completed, so that the next time the delayed job runs the order will not be re-processed.
Sometimes (it seems to be when a certain string is tied to a promo code field, but that might just be a coincidence) the job processes and sends the email but does not save the record and mark it as completed. I have used IRB to set the record to what the code would, and verified that the record is valid.
Any ideas why this might be happening, or has anyone seen this happen?
class PaymentEmailAndLock < Struct.new(:blank)
  include Delayed::RecurringJob
  run_at '8:00pm'
  run_every 1.day
  timezone 'US/Eastern'
  queue 'dailyjobs'

  def perform
    time_set = 30.hours.ago..2.hours.ago
    @mail_and_lock = Cart.where(updated_at: time_set, payment_sent: true, complete_order_lock: false)
    @mail_and_lock.each do |obj|
      obj.complete_order_lock = true
      obj.survey_available = true
      obj.save
      if obj.payment == 1
        MessageMailer.delay(queue: 'mailers').message_payment_paper(obj.cust_email, obj)
      else
        MessageMailer.delay(queue: 'mailers').message_payment_digital(obj.cust_email, obj)
      end
    end
  end
end
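One thing worth checking (an assumption, since the Cart validations are not shown): save returns false silently when a validation fails, and the mailer call below it still runs, which would produce exactly this "email sent but record never locked" behavior. A sketch of the same loop that surfaces those failures and skips the email:

@mail_and_lock.each do |obj|
  obj.complete_order_lock = true
  obj.survey_available = true
  unless obj.save
    # record why the lock did not persist, and do not send the email
    Rails.logger.warn "Cart #{obj.id} not locked: #{obj.errors.full_messages.join(', ')}"
    next
  end
  # ... mailer calls as before ...
end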
every 1.minute do
  runner "DailyNotificationChecker.send_notifications"
end
This is the whenever gem schedule entry.
class DailyNotificationChecker
  def self.send_notifications
    puts "Triggered send_notifications"
    expiry_time = Time.now + 57
    while (Time.now < expiry_time)
      if RUN_SCHEDULER == "true" || RUN_SCHEDULER == true
        process_notes
      end
      sleep 4 # seconds
    end
  end

  def self.process_notes
    notes = nil
    time = Benchmark.measure do
      Note.uncached do
        notes = Note.within_2_mins.unprocessed
        if notes.present?
          note_ids = notes.pluck(:id)
          note_hash = {}
          note_hash[:note_ids] = note_ids.to_json
          url = URI.parse(PROCESS_NOTE_API_URL)
          resp, data = Net::HTTP.post_form(url, note_hash)
          notes.each do |note|
            puts "note_id = #{note.id}"
            puts "processed #{note.processed}"
            RealtimeNotificationChecker.perform_async(note.user_id, "note_created", "TempUserNote", note.id)
          end
        end
      end
    end
    puts "time #{time}"
  end
end
My purpose is to run realtime notifications: if a user creates a note in the app, then within 4 seconds I will send him a push notification.
So I run a cron job every 1 minute that hits the send_notifications method in my DailyNotificationChecker class, which in turn is supposed to run the process_notes method every 4 seconds. Every new note that is created is set with a processed = 0 flag, and I retrieve every unprocessed note not more than 2 minutes old and perform my operation on it.
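The within_2_mins and unprocessed scopes are not shown in the question; given the description above, they presumably look something like this sketch (assuming an integer processed column):

class Note < ActiveRecord::Base
  scope :unprocessed,   -> { where(processed: 0) }
  scope :within_2_mins, -> { where('created_at >= ?', 2.minutes.ago) }
end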
These lines:
note_ids = notes.pluck(:id)
note_hash = {}
note_hash[:note_ids] = note_ids.to_json
url = URI.parse(PROCESS_NOTE_API_URL)
resp, data = Net::HTTP.post_form(url, note_hash)
get all the note_ids that are unprocessed and send them to the master server to mark them processed = 1, so that on the next 4-second pass these notes do not come up for processing again.
Then I loop over every note and send the push notification from a background job:
notes.each do |note|
  RealtimeNotificationChecker.perform_async(note.user_id,"note_created","TempUserNote",note.id)
end
But I guess something is wrong here: when I have more than 2 notes to be processed, it doesn't send notifications to all users.
Can someone suggest what's wrong and what's required?
Note:
I may have more than 10-15 notes to be processed within 4 seconds.
I am using the sucker_punch gem for background jobs.
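A guess at the bottleneck, given that exactly 2 notes go through: sucker_punch runs each job class on a small worker pool (2 workers by default), so a burst of notifications queues up behind the pool while the 4-second window moves on. Raising the pool size is one thing to try. A sketch, assuming the job class looks roughly like this:

class RealtimeNotificationChecker
  include SuckerPunch::Job
  workers 10 # allow up to 10 notifications in flight at once

  def perform(user_id, event, source_type, source_id)
    # ... existing push-notification logic ...
  end
end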
I created a batch email system for my website. The problem I have, which is terrible, is that it continuously sends out emails; the job seems to be stuck in an infinite loop. Please advise. It is crazy because on my development server only one email is sent per account, but on my production server I received 5 emails, meaning all users of my site received multiple emails.
Controller:
class BatchEmailsController < ApplicationController
  before_filter :authenticate_admin_user!

  def deliver
    flash[:notice] = "Email Being Delivered"
    Delayed::Job.enqueue(BatchEmailJob.new(params[:batch_email_id]), 3, 10.seconds.from_now, :queue => 'batch-email', :attempts => 0)
    redirect_to admin_batch_emails_path
  end
end
Job in the lib folder:
class BatchEmailJob < Struct.new(:batch_email_id)
  def perform
    be = BatchEmail.find(batch_email_id)
    if be.to.eql?("Contractors")
      cs = Contractor.all
      cs.each do |c|
        begin
          BatchEmailMailer.batch_email(be.subject, be.message, be.link_name, be.link_path, be.to, c.id).deliver
        rescue Exception => e
          Rails.logger.warn "Batch Email Error: #{e.message}"
        end
      end
    else
      ps = Painter.all
      ps.each do |p|
        begin
          BatchEmailMailer.batch_email(be.subject, be.message, be.link_name, be.link_path, be.to, p.id).deliver
        rescue Exception => e
          Rails.logger.warn "Batch Email Error: #{e.message}"
        end
      end
    end
  end
end
Delayed Job Initializer:
Delayed::Worker.max_attempts = 0
Please provide feedback on this approach. I want to send the batch email to all users but avoid retrying multiple times if something goes wrong. I added the rescue block to catch email exceptions, in the hope that the batch will skip errors and continue processing, and, as a last resort, not run again if something else goes wrong.
What one of my apps does, which seems to work flawlessly after millions of emails:
1) In an initializer, do NOT let DelayedJob re-attempt a failed job, AND ALSO do not let DJ delete failed jobs:
Delayed::Worker.destroy_failed_jobs = false
Delayed::Worker.max_attempts = 1
2) Scheduling a mass email is 1 job, aka the "master job".
3) When THAT job runs, it spawns N jobs, where N is the number of emails being sent. So each email gets its own job. (Note: if you use a production email service with 'batch' capability, one "email" might actually be a batch of 100 or 1000 emails.)
4) We have an admin panel that shows us if we have any failed jobs, and because we don't delete them, we can inspect any failed job and see what happened (malformed email address, etc.).
If one email fails, the others are unaffected. And no email can ever be sent twice.
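Applied to the code in the question, the shape would be something like this sketch (BatchEmailMasterJob and SingleEmailJob are illustrative names, not from the original code):

class BatchEmailMasterJob < Struct.new(:batch_email_id)
  def perform
    be = BatchEmail.find(batch_email_id)
    recipients = be.to.eql?("Contractors") ? Contractor.all : Painter.all
    recipients.each do |r|
      # one job per email, so a failure affects only that recipient
      Delayed::Job.enqueue(SingleEmailJob.new(batch_email_id, r.id))
    end
  end
end

class SingleEmailJob < Struct.new(:batch_email_id, :recipient_id)
  def perform
    be = BatchEmail.find(batch_email_id)
    BatchEmailMailer.batch_email(be.subject, be.message, be.link_name, be.link_path, be.to, recipient_id).deliver
  end
end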
I am using the delayed_job gem by tobi, and when I run delayed_job the fbLikes count is all wrong; it seems to increment each time I add one more company. Not sure where the logic is wrong. The fbLikes method worked when I tested it before I changed to delayed_job.
Not sure where the "1" comes from...
[output]
coca-cola
http://www.cocacola.com
Likes: 1 <--- Not sure why fbLikes is 1; it increments, so the second company's fbLikes is 2 and so on...
.
[Worker(host:aname.local pid:1400)] Starting job worker
[Worker(host:aname.local pid:1400)] CountJob completed after 0.7893
[Worker(host:aname.local pid:1400)] 1 jobs processed at 1.1885 j/s, 0 failed ...
I am enqueuing the delayed_job from the model, trying to run a job that counts the Facebook likes. Here is my code.
[lib/count_job.rb]
require 'net/http'

class CountJob < Struct.new(:fbid)
  def perform
    uri = URI("http://graph.facebook.com/#{fbid}")
    data = Net::HTTP.get(uri)
    return JSON.parse(data)['likes']
  end
end
[Company model]
before_save :fb_likes

def fb_likes
  self.fbLikes = Delayed::Job.enqueue(CountJob.new(self.fbId))
end
The issue is coming from:
before_save :fb_likes

def fb_likes
  self.fbLikes = Delayed::Job.enqueue(CountJob.new(self.fbId))
end
The enqueue method will not return the result of running the CountJob. I believe it returns whether the job was successfully enqueued, and when you save that into the fbLikes attribute it evaluates to 1 when the job is enqueued successfully.
You should be setting fbLikes inside the job that delayed_job runs, not from the result of the enqueue call:
# after_create (rather than before_save) so the record already has an id
# to hand to the job, and so the save inside the job does not re-trigger
# this callback and enqueue another CountJob forever
after_create :enqueue_fb_likes

def enqueue_fb_likes
  Delayed::Job.enqueue(CountJob.new(self.id))
end
The perform method in the CountJob class should take the model id, so it can look the company up and access both the fbId and fbLikes attributes, instead of taking just the fbId:
class CountJob < Struct.new(:id)
  def perform
    company = Company.find(id)
    uri = URI("http://graph.facebook.com/#{company.fbId}")
    data = Net::HTTP.get(uri)
    company.fbLikes = JSON.parse(data)['likes']
    company.save
  end
end
In my app, delayed_job isn't running automatically on my server anymore. It used to.
When I manually SSH in and run rake jobs:work, I get this:
*** Starting job worker host:ip-(censored) pid:21458
* [Worker(host:ip-(censored) pid:21458)] acquired lock on PhotoJob
* [JOB] host:ip-(censored) pid:21458 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9237 - 4 failed attempts
This repeats roughly 20 times over what I think is several jobs. Then I get a few of these:
* [Worker(host:ip-(censored) pid:21458)] failed to acquire exclusive lock for PhotoJob
And then finally one of these :
12 jobs processed at 73.6807 j/s, 12 failed ...
Any ideas what I should be mulling over? Thanks so much!
Edit:
Here is the photo controller action that enqueues the delayed job:
def update
  @gallery = @organization.media.find_by_media_id(params[:gallery_id]).media
  @photo = @gallery.photos.find(params[:id])
  if @photo.update_attributes(params[:photo])
    @photo.update_attribute(:processing, true)
    logger.info "HERE IS PROCESSING #{@photo.processing}"
    Delayed::Job.enqueue PhotoJob.new(@photo.id)
    respond_to do |format|
      format.html do
        if params[:captions_screen] == 'true'
          logger.info "WE ARE GOING TO DO NOTHING AT ALL"
          render :text => ''
        else
          redirect_to organization_media_url(@organization)
        end
      end
      format.js { redirect_to organization_media_url(@organization) }
    end
  else
    render :action => 'edit'
  end
end
Open script/console and try Photo.find(9237). You will probably get the same error. This means something or someone is calling the controller action for a nonexistent record. You can avoid this by using find_by_id(params[:id]), which returns nil if there is no record with the given id. Also add one more condition to your if statement:
if @photo.present? && @photo.update_attributes(params[:photo])
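The failed jobs themselves are stale PhotoJob entries pointing at photos that no longer exist. PhotoJob's body is not shown in the question, but a sketch of a perform that tolerates deletion would look like:

class PhotoJob < Struct.new(:photo_id)
  def perform
    photo = Photo.find_by_id(photo_id)
    return if photo.nil? # the photo was deleted after the job was queued
    # ... existing processing ...
  end
end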
Many thanks to Tadas Tamosauskas for helping out, but after some research I found that the problem was actually with delayed_job. What happened was that when I deployed to a cluster server, the deploy overwrote the recipes on my ey-cloud that initialize delayed_job, so delayed_job never booted up and the jobs never ran. I updated the recipe, redeployed, and everything is hunky dory.