Delayed Jobs is not finding Records and failing - ruby-on-rails

In my app, delayed_job isn't running automatically on my server anymore. It used to.
When I manually SSH in and run rake jobs:work,
I get this:
*** Starting job worker host:ip-(censored) pid:21458
* [Worker(host:ip-(censored) pid:21458)] acquired lock on PhotoJob
* [JOB] host:ip-(censored) pid:21458 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9237 - 4 failed attempts
This repeats roughly 20 times, for what I think is several jobs. Then I get a few of these:
* [Worker(host:ip-(censored) pid:21458)] failed to acquire exclusive lock for PhotoJob
And then finally one of these:
12 jobs processed at 73.6807 j/s, 12 failed ...
Any ideas what I should be mulling over? Thanks so much!
Edit :
Here is the photo controller that calls delayed jobs:
def update
  @gallery = @organization.media.find_by_media_id(params[:gallery_id]).media
  @photo = @gallery.photos.find(params[:id])
  if @photo.update_attributes(params[:photo])
    @photo.update_attribute(:processing, true)
    logger.info "HERE IS PROCESSING #{@photo.processing}"
    Delayed::Job.enqueue PhotoJob.new(@photo.id)
    respond_to do |format|
      format.html do
        if params[:captions_screen] == 'true'
          logger.info "WE ARE GOING TO DO NOTHING AT ALL"
          render :text => ''
        else
          redirect_to organization_media_url(@organization)
        end
      end
      format.js { redirect_to organization_media_url(@organization) }
    end
  else
    render :action => 'edit'
  end
end

Open script/console and try Photo.find(9237). You will probably get the same error. This means something (or someone) is calling the controller action for a nonexistent record. You can avoid this by using find_by_id(params[:id]), which returns nil if there is no record with the given id. Also add one more condition to your if statement:
if @photo.present? && @photo.update_attributes(params[:photo])
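For example, a minimal sketch of the guarded action (adapted from the controller above, with the irrelevant parts omitted):
def update
  # find_by_id returns nil instead of raising ActiveRecord::RecordNotFound
  @photo = @gallery.photos.find_by_id(params[:id])
  if @photo.present? && @photo.update_attributes(params[:photo])
    Delayed::Job.enqueue PhotoJob.new(@photo.id)
    redirect_to organization_media_url(@organization)
  else
    render :action => 'edit'
  end
end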

Many, many thanks to Tadas Tamosauskas for helping out, but after some research I found that the problem was actually with delayed_job itself. When I deployed to a cluster server, the deploy overwrote the ey-cloud recipes that initialize delayed_job, so the workers never booted up and the jobs never ran. Updated the recipe, redeployed, and everything is hunky dory.

Related

How to save data using Redmine issue model

I need to modify an issue's start_date and due_date somehow, but I haven't used Rails before, so I don't know how it runs on the server, and I need to implement this in a short time.
So I added this code to the controller:
def date
  issue = Issue.find(params[:id])
  issue.start_date = params[:start_date]
  issue.due_date = params[:due_date]
  ntc_str = "Fail to save!" + params[:id]
  if issue.save
    ntc_str = 'Issue saved!'
  end
  flash[:notice] = ntc_str
  redirect_to :controller => 'gantts', :action => 'show', :project_id => params[:p_id]
end
It works when I access it myself, but it always fails (and ntc_str is always "Fail to save!") when I access it from JavaScript.
For example:
It works when I enter the URL "http://xxx.xxx.xxx.xxx/date?id=6&project_id=test&start_date=2016-06-08&due_date=2016-06-30" by hand,
but it fails when I use the JavaScript "window.location.href='/date?id=6&project_id=test&start_date=2016-06-08&due_date=2016-06-30'".
It works when I fill in the params in the form I created and click to submit it,
but it fails when I use the JavaScript "document.getElementById('start_date').value = '2016-06-30'; /..../ $('#test-form').submit()".
Could you tell me why it always fails and how I can use the Issue model? It's driving me crazy.
It would be useful if you provided some logs for each case you try.
Also, you can see what goes wrong with the issue when you try to save it:
if issue.save
  ntc_str = 'Issue saved!'
else
  Rails.logger.error(issue.errors.full_messages)
end
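Putting that logging into the original action gives a minimal sketch like this (same route and params as above):
def date
  issue = Issue.find(params[:id])
  issue.start_date = params[:start_date]
  issue.due_date = params[:due_date]
  if issue.save
    flash[:notice] = 'Issue saved!'
  else
    # Log the validation errors so the failure is no longer a mystery.
    Rails.logger.error(issue.errors.full_messages)
    flash[:notice] = "Fail to save! #{params[:id]}"
  end
  redirect_to :controller => 'gantts', :action => 'show', :project_id => params[:p_id]
end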

How can I make the transaction rollback if the API call fails and vice versa?

Two things I want:
a) I want to be able to save a record in the db only if the API call succeeds
b) I want to execute the API call only if the db record saves successfully
The goal, is to keep data stored locally (in the DB) consistent with that of the data on Stripe.
@payment = Payment.new(...)
begin
  Payment.transaction do
    @payment.save!
    stripe_customer = Stripe::Customer.retrieve(manager.customer_id)
    charge = Stripe::Charge.create(
      amount: @plan.amount_in_cents,
      currency: 'usd',
      customer: stripe_customer.id
    )
  end
# https://stripe.com/docs/api#errors
rescue Stripe::CardError, Stripe::InvalidRequestError, Stripe::APIError => error
  @payment.errors.add :base, 'There was a problem processing your credit card. Please try again.'
  render :new
rescue => error
  render :new
else
  redirect_to dashboard_root_path, notice: 'Thank you. Your payment is being processed.'
end
The above will work, because if the record (saved by @payment.save! at the top of the transaction) doesn't save, the rest of the code doesn't execute.
But what if I needed the @payment object saved after the API call, because I need to assign the @payment object values from the API results? Take for example:
@payment = Payment.new(...)
begin
  Payment.transaction do
    stripe_customer = Stripe::Customer.retrieve(manager.customer_id)
    charge = Stripe::Charge.create(
      amount: @plan.amount_in_cents,
      currency: 'usd',
      customer: stripe_customer.id
    )
    @payment.payment_id = charge[:id]
    @payment.activated_at = Time.now.utc
    @payment.save!
  end
# https://stripe.com/docs/api#errors
rescue Stripe::CardError, Stripe::InvalidRequestError, Stripe::APIError => error
  @payment.errors.add :base, 'There was a problem processing your credit card. Please try again.'
  render :new
rescue => error
  render :new
else
  redirect_to dashboard_root_path, notice: 'Thank you. Your payment is being processed.'
end
Notice that @payment.save! happens after the API call. This could be a problem, because the API call runs before the DB tries to save the record, which could mean a successful API call but a failed DB commit.
Any ideas / suggestions for this scenario?
You can't execute API => DB and DB => API at the same time (that sounds like a circular set of preconditions); at least I can't imagine how you would achieve this workflow. I understand your data-consistency needs, so I propose (a sketch of the first option appears after this list):
Check that the record is valid with @payment.valid? (probably via a custom method like valid_without_payment?)
Run the API call
Save the record (with payment_id) only if the API call succeeds
Alternatively:
Save the record without payment_id
Run the API call
Update the record with payment_id (from the API response) if the call succeeds
Run a task (script) periodically (cron) to find inconsistent records (where(payment_id: nil)) and delete them
I think both options are acceptable and your data will remain consistent.
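A minimal sketch of the first option, assuming a hypothetical valid_without_payment? method (running the validations that don't depend on the not-yet-known payment_id) and a hypothetical payment_params hash:
@payment = Payment.new(payment_params) # payment_params is hypothetical
if @payment.valid_without_payment?
  stripe_customer = Stripe::Customer.retrieve(manager.customer_id)
  charge = Stripe::Charge.create(
    amount: @plan.amount_in_cents,
    currency: 'usd',
    customer: stripe_customer.id
  )
  # The record is only written after the charge succeeded.
  @payment.payment_id = charge[:id]
  @payment.activated_at = Time.now.utc
  @payment.save!
  redirect_to dashboard_root_path, notice: 'Thank you. Your payment is being processed.'
else
  render :new
end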

How to delete a job in Sidekiq

I am using sidekiq in my rails app. Users of my app create reports that start a sidekiq job. However, sometimes users want to be able to cancel "processing" reports. Deleting the report is easy but I also need to be able to delete the sidekiq job as well.
So far I have been able to get a list of workers like so:
workers = Sidekiq::Workers.new
and each worker has args that include a report_id, so I can identify which job belongs to which report. However, I'm not sure how to actually delete the job. It should be noted that I want to delete the job whether it is currently busy or sitting in the retry set.
According to the Sidekiq API documentation, to delete a job by a single id you need to iterate over the queue and call .delete on the matching job.
queue = Sidekiq::Queue.new("mailer")
queue.each do |job|
  job.klass # => 'MyWorker'
  job.args  # => [1, 2, 3]
  job.delete if job.jid == 'abcdef1234567890'
end
There is also a plugin called sidekiq-status that provides the ability to cancel a single job:
scheduled_job_id = MyJob.perform_in 3600
Sidekiq::Status.cancel scheduled_job_id #=> true
The simplest way I found to do this is:
job = Sidekiq::ScheduledSet.new.find_job([job_id])
where [job_id] is the JID that pertains to the report. Followed by:
job.delete
I found no need to iterate through the entire queue as described by other answers here.
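Note that the question also mentions jobs that are set to retry; those live in the retry set rather than the queue. Sidekiq::RetrySet exposes the same sorted-set API, so a minimal sketch (report_jid stands in for the JID you stored for the report):
retry_job = Sidekiq::RetrySet.new.find_job(report_jid)
retry_job.delete if retry_job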
I had the same problem, but the difference is that I needed to cancel a scheduled job, and my solution is:
Sidekiq::ScheduledSet.new.each do |_job|
  next unless [online_jid, offline_jid].include? _job.jid
  status = _job.delete
end
If you want to cancel a scheduled job, I'm not sure about @KimiGao's answer, but this is what I adapted from Sidekiq's current API documentation:
jid = MyCustomWorker.perform_async
r = Sidekiq::ScheduledSet.new
jobs = r.select { |job| job.jid == jid }
jobs.each(&:delete)
Hope it helps.
You can delete a Sidekiq job by filtering on worker class and args:
class UserReportsWorker
  include Sidekiq::Worker

  def perform(report_id)
    # ...
  end
end

jobs = Sidekiq::ScheduledSet.new.select do |job|
  job.klass == "UserReportsWorker" && job.args == [42]
end
jobs.each(&:delete)
I had the same problem.
I solved it by recording the Sidekiq JID when the job is enqueued, and by creating another method, cancel!, to delete it.
Here is the code:
# Runs after the ActiveJob is enqueued: look up the matching Sidekiq job
# (first in the queue, then in the scheduled set) and store its JID.
after_enqueue do |job|
  sidekiq_job = nil
  queue = Sidekiq::Queue.new
  sidekiq_job = queue.detect do |j|
    j.item['args'][0]['job_id'] == job.job_id
  end
  if sidekiq_job.nil?
    scheduled = Sidekiq::ScheduledSet.new
    sidekiq_job = scheduled.detect do |j|
      j.item['args'][0]['job_id'] == job.job_id
    end
  end
  if sidekiq_job.present?
    booking = job.arguments.first
    booking.close_comments_jid = sidekiq_job.jid
    booking.save
  end
end

def perform(booking)
  # do something
end

# Look the job up by its stored JID and delete it.
def self.cancel!(booking)
  queue = Sidekiq::Queue.new
  sidekiq_job = queue.find_job(booking.close_comments_jid)
  if sidekiq_job.nil?
    scheduled = Sidekiq::ScheduledSet.new
    sidekiq_job = scheduled.find_job(booking.close_comments_jid)
  end
  if sidekiq_job.nil?
    # Report bug in my Bug Tracking tool
  else
    sidekiq_job.delete
  end
end
There is a simple way of deleting a job if you know the job_id:
job = Sidekiq::ScheduledSet.new.find_job(job_id)
begin
  job.delete
rescue
  Rails.logger.error "Job: (job_id: #{job_id}) not found while deleting jobs."
end
Or you can use the Sidekiq web UI mounted in your Rails server.
For example, at http://localhost:3000/sidekiq you can stop/remove Sidekiq jobs.
Before that, you have to update routes.rb:
require 'sidekiq/web'
mount Sidekiq::Web => '/sidekiq'
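For context, a minimal config/routes.rb sketch (in production you would typically wrap the mount in an authentication constraint):
# config/routes.rb
require 'sidekiq/web'

Rails.application.routes.draw do
  mount Sidekiq::Web => '/sidekiq'
end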

Rails Delayed Job Continuously Running

I created a batch email system for my website. The problem I have, which is terrible, is that it continuously sends out emails. It seems the job is stuck in an infinite loop. Please advise. It is crazy, because on my development server only one email is sent per account, but on my production server I received 5 emails, meaning all users of my site received multiple emails.
Controller:
class BatchEmailsController < ApplicationController
  before_filter :authenticate_admin_user!

  def deliver
    flash[:notice] = "Email Being Delivered"
    Delayed::Job.enqueue(BatchEmailJob.new(params[:batch_email_id]), 3, 10.seconds.from_now, :queue => 'batch-email', :attempts => 0)
    redirect_to admin_batch_emails_path
  end
end
Job in the lib folder:
class BatchEmailJob < Struct.new(:batch_email_id)
  def perform
    be = BatchEmail.find(batch_email_id)
    if be.to.eql?("Contractors")
      cs = Contractor.all
      cs.each do |c|
        begin
          BatchEmailMailer.batch_email(be.subject, be.message, be.link_name, be.link_path, be.to, c.id).deliver
        rescue Exception => e
          Rails.logger.warn "Batch Email Error: #{e.message}"
        end
      end
    else
      ps = Painter.all
      ps.each do |p|
        begin
          BatchEmailMailer.batch_email(be.subject, be.message, be.link_name, be.link_path, be.to, p.id).deliver
        rescue Exception => e
          Rails.logger.warn "Batch Email Error: #{e.message}"
        end
      end
    end
  end
end
Delayed Job Initializer:
Delayed::Worker.max_attempts = 0
Please provide feedback on this approach. I want to send the batch email to all users but avoid retrying multiple times if something goes wrong. I added the rescue blocks to catch email exceptions, in the hope that the batch will skip errors and continue processing. As a last resort, the job should not run again if something else goes wrong.
What one of my apps does, which seems to work flawlessly after millions of emails:
1) In an initializer, do NOT let DelayedJob re-attempt a failed job, and ALSO do not let DJ delete failed jobs:
Delayed::Worker.destroy_failed_jobs = false
Delayed::Worker.max_attempts = 1
2) Scheduling a mass email is 1 job, aka the "master job".
3) When THAT job runs, it spawns N jobs, where N is the number of emails being sent. So each email gets its own job. (Note: if you use a production email service with 'batch' capability, one "email" might actually be a batch of 100 or 1000 emails.)
4) We have an admin panel that shows us if we have any failed jobs, and because we don't delete them, we can inspect a failed job and see what happened (malformed email address, etc.)
If one email fails, the others are unaffected. And no email can ever be sent twice. A sketch of this pattern is below.
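A minimal sketch of the master-job pattern, reusing the mailer from the question (the class names MasterBatchEmailJob and SingleEmailJob are hypothetical):
# The master job only fans out: one DJ job per recipient.
class MasterBatchEmailJob < Struct.new(:batch_email_id)
  def perform
    be = BatchEmail.find(batch_email_id)
    recipients = be.to.eql?("Contractors") ? Contractor.all : Painter.all
    recipients.each do |r|
      # A failure in one of these jobs only affects that recipient.
      Delayed::Job.enqueue SingleEmailJob.new(be.id, r.id)
    end
  end
end

# Each per-email job sends exactly one email.
class SingleEmailJob < Struct.new(:batch_email_id, :recipient_id)
  def perform
    be = BatchEmail.find(batch_email_id)
    BatchEmailMailer.batch_email(be.subject, be.message, be.link_name,
                                 be.link_path, be.to, recipient_id).deliver
  end
end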

Passing a Block to a delayed_job

I have a function that is marked to be handled asynchronously by delayed_job:
class CapJobs
  def execute(params, id)
    begin
      unless Rails.env == "test"
        Capistrano::CLI.parse(params).execute!
      end
    rescue
      site = Site.find(id)
      site.records.create!(:date => DateTime.now, :action => "Task Failure: #{params[0]}", :type => :failure)
      site.save
    ensure
      yield id
    end
  end
  handle_asynchronously :execute
end
When I run this function I pass in a block:
capjobs = CapJobs.new
capjobs.execute(parameters, @site.id) do |id|
  asite = Site.find(id)
  asite.records.create!(:date => DateTime.now, :action => "Created", :type => :init)
  asite.status = "On Demo"
  asite.dev = true
  asite.save
end
This works fine when run without delayed_job, but when run with it I get the following error:
2012-08-13T09:24:36-0300: [Worker(delayed_job host:eagle pid:12089)] SitesHelper::CapJobs#execute_without_delay failed with LocalJumpError: no block given (yield) - 0 failed attempts
2012-08-13T09:24:36-0300: [Worker(delayed_job host:eagle pid:12089)] PERMANENTLY removing SitesHelper::CapJobs#execute_without_delay because of 1 consecutive failures.
2012-08-13T09:24:36-0300: [Worker(delayed_job host:eagle pid:12089)] 1 jobs processed at 0.0572 j/s, 1 failed ...
It seems not to pick up the block that was passed in. Is this not the correct way of doing this or should I find a different method?
delayed_job works by saving your jobs into a data store (most often your primary database) and then loading the jobs out of this data store in a background process, where they are handled/executed.
To save a job into the database, delayed_job needs to somehow save what method to call on which object with what arguments. This is done by serializing everything into a string (delayed_job uses yaml for that). Unfortunately, blocks cannot be serialized. So the background worker does not know about the block argument and calls the method without it. This results in the LocalJumpError when the method is trying to yield to the block.
I found a method of doing this. It is kind of hacky but works well. I found an article that talks about creating a SerializableProc class. If I pass one of these to the function, then everything works great.
Most people would treat this as an abstraction problem.
The proc's code probably doesn't change from run to run (except for variables), so you should turn the block's body into a class or instance method. Pass the name of that method, and then call it in your execute method, like:
@some_data = CapJobs.send(target_method)
or, perhaps better:
@some_data = DomainSpecificModel.send(target_method)
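Building on that suggestion, a minimal sketch of the refactor (the on_success method name is hypothetical; the key point is that a Symbol serializes to YAML, while a block does not):
class CapJobs
  # The callback is now a method name (Symbol/String), which YAML can
  # serialize, instead of a block, which it cannot.
  def execute(params, id, callback)
    unless Rails.env == "test"
      Capistrano::CLI.parse(params).execute!
    end
  rescue
    site = Site.find(id)
    site.records.create!(:date => DateTime.now, :action => "Task Failure: #{params[0]}", :type => :failure)
  ensure
    self.class.send(callback, id)
  end
  handle_asynchronously :execute

  # The former block body, extracted into a named class method.
  def self.on_success(id)
    asite = Site.find(id)
    asite.records.create!(:date => DateTime.now, :action => "Created", :type => :init)
    asite.status = "On Demo"
    asite.dev = true
    asite.save
  end
end

# Enqueue with a serializable callback name instead of a block:
CapJobs.new.execute(parameters, @site.id, :on_success)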
