I have the following worker:
class ImageWorker
  include Sidekiq::Worker

  def perform(tenant_id, id, key)
    tenant = Tenant.find(tenant_id)
    tenant.scope_schema do
      image = Image.find(id)
      unless image.image_processed?
        image.key = key
        image.remote_image_url = image.image.direct_fog_url(with_path: true)
        image.save!
        image.update_column(:image_processed, true)
      end
    end
  end
end
The Tenant#scope_schema method looks like this:
def scope_schema(*paths)
  original_search_path = ActiveRecord::Base.connection.schema_search_path
  paths << "extensions"
  ActiveRecord::Base.connection.schema_search_path = ["tenant#{id}", *paths].join(",")
  yield
ensure
  ActiveRecord::Base.connection.schema_search_path = original_search_path
end
When the ImageWorker job runs, it reports that it can't find an Image with id=7, so scope_schema doesn't appear to be working, even though the same code runs just fine outside of a Sidekiq worker class.
Use after_commit to ensure the database record is there when the job executes.
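A minimal sketch of that idea, assuming the job is currently enqueued from inside the transaction that creates the Image (the callback name is made up, and tenant_id/key stand in for whatever you pass to perform_async today):

class Image < ActiveRecord::Base
  # Enqueue only after the INSERT has committed, so the worker's
  # Image.find(id) can actually see the row in the tenant schema.
  after_commit :enqueue_image_worker, on: :create

  def enqueue_image_worker
    ImageWorker.perform_async(tenant_id, id, key)
  end
end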
ActiveRecord::RecordNotFound in a Sidekiq worker when I save an object. I don't use Rails callbacks;
I run the worker from a service and save the object in that service.
class LeadSmsSendingService < Rectify::Command
  # ...initialize params

  def send_sms_message
    sms_conversation = lead.sms_conversations.find_or_create_by(sms_number: sms_number)
    attrs = sms_form.to_hash.symbolize_keys.slice(:body, :direction, :from, :to)
                    .merge(campaign_id: campaign_id)
    sms_message = sms_conversation.sms_messages.build(attrs)
    sms_message.to ||= lead.phone
    sms_message.body = VariableReplacement.new(lead).render(sms_message.body)
    # #todo we need to raise an exception here
    return unless sms_message.save

    DeliverSmsMessageWorker.perform_in(3.seconds, sms_message.id, 'LeadSmsSendingService')
  end
end
class DeliverSmsMessageWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'priority'

  def perform(sms_message_id, from_where = "Unknown")
    sms_message = SmsMessage.find(sms_message_id)
    sms_message.deliver!
  rescue StandardError => e
    Bugsnag.notify(e) do |report|
      # Add information to this report
      report.add_tab(:worker, { from_where: from_where.to_s })
    end
  end
end
It seems that the record still has to be committed, even though that sounds strange given the 3-second delay. Does it work if you increase this delay?
This link could be useful: https://github.com/mperham/sidekiq/wiki/Problems-and-Troubleshooting#cannot-find-modelname-with-id12345
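Note also that the worker above rescues StandardError and only reports to Bugsnag, so Sidekiq never retries the job. A hedged variant that re-raises, letting Sidekiq's built-in retry pick the record up once it has been committed, could look like this:

class DeliverSmsMessageWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'priority'

  def perform(sms_message_id, from_where = "Unknown")
    sms_message = SmsMessage.find(sms_message_id)
    sms_message.deliver!
  rescue StandardError => e
    Bugsnag.notify(e) do |report|
      report.add_tab(:worker, { from_where: from_where.to_s })
    end
    raise # let Sidekiq retry, e.g. when the record had not been committed yet
  end
end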
I haven't had much experience with deadlocking issues in the past, but the more I work with ActiveJob and process those jobs concurrently, the more I run into this problem. An example of one job that creates it is shown below. The way it operates is that I start ImportGameParticipationsJob, and it queues up a bunch of CreateOrUpdateGameParticipationJobs.
When attempting to prevent my SQL Server from alerting me to a ton of deadlock errors, where is the cause likely to be in the code below? Can I get a deadlock from simply selecting records to populate an object? Or can it really only happen when I'm attempting to save/update the record within my process_records method below?
ImportGameParticipationsJob
class ImportGameParticipationsJob < ActiveJob::Base
  queue_as :default

  def perform(*args)
    import_participations(args.first.presence)
  end

  def import_participations(*args)
    games = Game.where(season: 2016)
    games.each do |extract_record|
      CreateOrUpdateGameParticipationJob.perform_later(extract_record.game_key)
    end
  end
end
CreateOrUpdateGameParticipationJob
class CreateOrUpdateGameParticipationJob < ActiveJob::Base
  queue_as :import_queue

  def perform(*args)
    if args.first.present?
      game_key = args.first
      # get all participations for a given game
      game_participations = GameRoster.where(game_key: game_key)
      process_records(game_participations)
    end
  end

  def process_records(participations)
    # Loop through participations and build record for saving...
    participations.each do |participation|
      if participation.try(:player_id)
        record = create_or_find(participation)
        record = update_record(record, participation)
      end
      begin
        if record.valid?
          record.save
        end
      rescue Exception => e
      end
    end
  end

  def create_or_find(participation)
    participation_record = GameParticipation.where(
      game_id: participation.game.try(:id),
      player_id: participation.player.try(:id))
      .first_or_initialize do |record|
        record.game   = Game.find_by(game_key: participation.game_key)
        record.player = Player.find_by(id: participation.player_id)
        record.club   = Club.find_by(club_id: participation.club_id)
        record.status = parse_status(participation.player_status)
      end
    return participation_record
  end

  def update_record(record, participation)
    old_status = record.status
    new_status = parse_status(participation.player_status)
    if old_status != new_status
      record.new_status = participation.player_status
      record.comment = "status was updated via participations import job"
    end
    return record
  end
end
The gem was recently updated with an additional option you can set that should help with the deadlocking. I had the same issue on 4.1; moving to 4.1.1 fixed it for me.
https://github.com/collectiveidea/delayed_job_active_record
https://rubygems.org/gems/delayed_job_active_record
Problems locking jobs
You can try using the legacy locking code. It is usually slower but works better for certain people.
Delayed::Backend::ActiveRecord.configuration.reserve_sql_strategy = :default_sql
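If you want to pin to the fixed release mentioned above, a Gemfile entry along these lines should do it (the constraint is simply the version from this answer):

# Gemfile
gem 'delayed_job_active_record', '>= 4.1.1'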
I have a Rails app in which I use delayed_job. I want to detect whether I am in a delayed_job process or not; something like
if in_delayed_job?
  # do something only if it is a delayed_job process...
else
  # do something only if it is not a delayed_job process...
end
But I can't figure out how. This is what I'm using now:
IN_DELAYED_JOB = begin
  basename = File.basename $0
  arguments = $*
  rake_args_regex = /\Ajobs:/
  ( basename == 'delayed_job' ) ||
    ( basename == 'rake' && arguments.find{ |v| v =~ rake_args_regex } )
end
Another solution is, as @MrDanA said:
$ DELAYED_JOB=true script/delayed_job start
# And in the app:
IN_DELAYED_JOB = ENV['DELAYED_JOB'].present?
but they are IMHO weak solutions. Can anyone suggest a better solution?
The way I handle these is through a paranoid worker. I use delayed_job to transcode videos uploaded to my site. The video model has a field called video_processing, which is null/0 by default. Whenever the video is being transcoded by the delayed_job (whether on create or update of the video file), the delayed_job hooks set video_processing when the job starts, and the completion hook resets the field to 0 once the job is done.
In my view/controller I can do video.video_processing? ? "Video Transcoding in Progress" : "Video Finished Transcoding"
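A minimal sketch of that hook-based approach; the class, column, and method names below are illustrative rather than the actual app code:

# Custom Delayed Job payload object using the before/success hooks
# to flip the video_processing flag described above.
class TranscodeVideoJob < Struct.new(:video_id)
  def before(job)
    Video.where(id: video_id).update_all(video_processing: 1)
  end

  def perform
    Video.find(video_id).transcode! # hypothetical transcoding call
  end

  def success(job)
    Video.where(id: video_id).update_all(video_processing: 0)
  end
end

# Enqueued wherever the video is created or its file is updated:
Delayed::Job.enqueue(TranscodeVideoJob.new(video.id))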
Maybe something like this. Add a field to your class and set it when you invoke the method that does all your work from Delayed Job:
class User < ActiveRecord::Base
  attr_accessor :in_delayed_job

  def queue_calculation_request
    Delayed::Job.enqueue(CalculationRequest.new(self.id))
  end

  def do_the_work
    if (in_delayed_job)
      puts "Im in delayed job"
    else
      puts "I was called directly"
    end
  end

  class CalculationRequest < Struct.new(:id)
    def perform
      user = User.find(id)
      user.in_delayed_job = true
      user.do_the_work
    end

    def display_name
      "Perform the needeful user Calculations"
    end
  end
end
Here is how it looks:
From Delayed Job:
[Worker(host:Johns-MacBook-Pro.local pid:67020)] Starting job worker
Im in delayed job
[Worker(host:Johns-MacBook-Pro.local pid:67020)] Perform the needeful user Calculations completed after 0.2787
[Worker(host:Johns-MacBook-Pro.local pid:67020)] 1 jobs processed at 1.5578 j/s, 0 failed ...
From the console
user = User.first.do_the_work
User Load (0.8ms) SELECT "users".* FROM "users" WHERE "users"."id" = $1 LIMIT 1 [["id", 101]]
I was called directly
This works for me:
def delayed_job_worker?
  (ENV["_"].include? "delayed_job")
end
Unix will set the "_" environment variable to the current command.
It'll be wrong if you have a bin script called "not_a_delayed_job", but don't do that.
How about ENV['PROC_TYPE']?
Speaking only of Heroku here, but when you're on a worker dyno this is set to 'worker'.
I use it as my "I'm in a DJ" check.
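Wrapped as a helper, that Heroku-specific check would look something like this (the name and exact comparison are just illustrative, based on the claim above):

def on_heroku_worker_dyno?
  ENV['PROC_TYPE'] == 'worker'
end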
You can create a plugin for delayed job, e.g. create the file is_dj_job_plugin.rb in the config/initializers directory.
class IsDjJobPlugin < Delayed::Plugin
  callbacks do |lifecycle|
    lifecycle.around(:invoke_job) do |job, *args, &block|
      begin
        old_is_dj_job = Thread.current[:is_dj_job]
        Thread.current[:is_dj_job] = true
        block.call(job, *args) # Forward the call to the next callback in the callback chain
        Thread.current[:is_dj_job] = old_is_dj_job
      end
    end
  end

  def self.is_dj_job?
    Thread.current[:is_dj_job] == true
  end
end

Delayed::Worker.plugins << IsDjJobPlugin
You can then test in the following way:
class PrintDelayedStatus
  def run
    puts IsDjJobPlugin.is_dj_job? ? 'delayed' : 'not delayed'
  end
end

PrintDelayedStatus.new.run
PrintDelayedStatus.new.delay.run
Looking to use em-mongo for a text analyzer script which loads text from db, analyzes it, flags keywords and updates the db.
Would love to see some examples of em-mongo in action. The only one I could find was in the em-mongo repo on GitHub.
require 'em-mongo'

EM.run do
  db = EM::Mongo::Connection.new.db('db')
  collection = db.collection('test')

  EM.next_tick do
    doc = {"hello" => "world"}
    id = collection.insert(doc)
    collection.find('_id' => id) do |res|
      puts res.inspect
      EM.stop
    end
    collection.remove(doc)
  end
end
You don't need the next_tick method; em-mongo does that for you. Define callbacks that are executed when the db actions are done. Here is a skeleton:
class NonBlockingFetcher
  include MongoConfig

  def initialize
    configure
    @connection = EM::Mongo::Connection.new(@server, @port)
    @collection = init_collection(@connection)
  end

  def fetch(value)
    mongo_cursor = @collection.find({KEY => value.to_s})
    response = mongo_cursor.defer_as_a

    response.callback do |documents|
      # foo
      # get one document
      doc = documents.first
    end

    response.errback do |err|
      # foo
    end
  end
end
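For completeness, a usage sketch; MongoConfig, KEY, @server and @port come from the answerer's own app, so the value passed below is only a placeholder:

EM.run do
  # Kick off a non-blocking fetch inside the reactor; the callback/errback
  # defined in NonBlockingFetcher#fetch fire when Mongo responds.
  NonBlockingFetcher.new.fetch(42)
end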
I have the following script which runs once a day on cron on heroku.
However, I would like to give the user the option to press a button on a web page to initiate this same process.
Is there a way to create a 'subroutine' that can be called either from cron or from a web request? I don't want to use a separate service that runs jobs.
I've just put in a snippet to illustrate...
letter_todos = Todo.current_date_lte(Date.today).asset_is("Letter").done_date_null

unless letter_todos.blank? # check if there are any ToDos
  # group by asset_id so that each batch is specific to the asset_id
  letter_todos.group_by(&:asset_id).each do |asset_id, letter_todos|
    # pdf = Prawn::Document.new(:margin => 100) # format the PDF document
    html_file = ''
    letter_todos.each do |todo| # loop through all Letter Todos
      contact = Contact.find(todo.contact_id) # get associated contact
      letter = Letter.find(todo.asset_id)     # get associated Letter
      redcloth_contact_letter = RedCloth.new(letter.substituted_message(contact, [])).to_html
      html_file = html_file + redcloth_contact_letter
      html_file = html_file + "<p style='display: none; page-break-after: always'><center> ... </center> </p>"
    end
    kit = PDFKit.new(html_file)
    kit.stylesheets << "#{RAILS_ROOT}/public/stylesheets/compiled/pdf.css"
    file = kit.to_pdf
    letter = Letter.find(asset_id)
    # OutboundMailer.deliver_pdf_email(file)
    kit.to_file("#{RAILS_ROOT}/tmp/PDF-#{letter.title}-#{Date.today}.pdf")
    # Create new BatchPrint record
    batch = BatchPrint.new
    batch.pdf = File.new("#{RAILS_ROOT}/tmp/PDF-#{letter.title}-#{Date.today}.pdf")
I've done this by putting the function in question in a file in lib (lib/tasks_n_stuff.rb, say):
module TasksNStuff
  def self.do_something
    # ...doing something...
  end
end
Then I can call it from a Rake task:
desc 'Make sure we depend on :environment, so we can get to the Railsy stuff...'
task :do_something => :environment do
  TasksNStuff.do_something
end
Or from a controller (or anywhere, really):
class WhateverController < ApplicationController
  def do_something
    TasksNStuff.do_something
  end
end
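If the button on the web page just hits that controller action, a route along these lines wires it up (Rails 3+ routing syntax; the path and controller name are the hypothetical ones from above):

# config/routes.rb
post 'do_something', to: 'whatever#do_something'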
And since you can run a rake task as a cron job (cd /my/rails/root; rake do_something), that should be all you need. Cheers!