Logging in delayed_job? - ruby-on-rails

I can't get any log output from delayed_job, and I'm not sure my jobs are starting.
Here's my Procfile:
web: bundle exec rails server
worker: bundle exec rake jobs:work
worker: bundle exec clockwork app/clock.rb
And here's the job:
class ScanningJob
  def perform
    logger.info "logging from delayed_job"
  end

  def after(job)
    Rails.logger.info "logging from after delayed_job"
  end
end
I see that clockwork writes to standard out, and I can see the worker starting, but my log statements never appear. I tried puts as well, to no avail.
My clock file is pretty simple:
every(3.seconds, 'refreshlistings') { Delayed::Job.enqueue ScanningJob.new }
I just want to see this working, and lack of logging means I can't. What's going on here?

When I needed log output from Delayed Jobs, I found this question to be fairly helpful.
In config/initializers/delayed_job.rb I add the line:
Delayed::Worker.logger = Logger.new(File.join(Rails.root, 'log', 'dj.log'))
Then, in my job, I output to this log with:
Delayed::Worker.logger.info("Log Entry")
This results in the file log/dj.log being written to. This works in development, staging, and production environments.
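As a self-contained illustration of the pattern (a StringIO stands in for log/dj.log here so the snippet runs anywhere):

```ruby
require 'logger'
require 'stringio'

# Point a Logger at a destination and write to it; in the initializer
# above, the destination is the log/dj.log file path instead.
buffer = StringIO.new
logger = Logger.new(buffer)
logger.info("Log Entry")
puts buffer.string # formatted line ending in "Log Entry"
```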

Related

Can I run a rake task to simulate sidekiq running?

I have a Rails app that I want to use Sidekiq on.
Before I figure out how to set up Sidekiq on the server etc., is it possible for me to create a rake task that will run the Sidekiq workers for any pending jobs?
This isn't a production solution, I know, but I just want to make sure everything else is working on the server, and for the time being running a rake task on the server is fine, as it's more for QA'ing at this point.
Log into the server, cd to the app's dir and run 'bundle exec sidekiq' manually.
Referencing the docs for SortedEntry and ScheduledSet, I managed to come up with the following:
# ScheduledSet returns "pending" jobs
scheduled_jobs = Sidekiq::ScheduledSet.new
# puts scheduled_jobs.class
# => Sidekiq::ScheduledSet

scheduled_jobs.each do |scheduled_job|
  # puts scheduled_job.class
  # => Sidekiq::SortedEntry
  job_class = scheduled_job.klass.constantize
  job_args = scheduled_job.args

  # you can also filter for jobs in a certain queue only, e.g.:
  # if scheduled_job.queue == 'mailers' ...

  # run the job inline
  job_class.new.perform(*job_args)
end
Then just wrap this into a rake task.
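Wrapped in a rake task, that could look something like this (the namespace, task name, and file path are my own invention):

```ruby
require 'rake'
extend Rake::DSL # only needed when defining tasks outside a Rakefile

# Hypothetical lib/tasks/sidekiq_inline.rake
namespace :sidekiq do
  desc 'Run all pending scheduled Sidekiq jobs inline'
  task :run_scheduled do
    Sidekiq::ScheduledSet.new.each do |entry|
      entry.klass.constantize.new.perform(*entry.args)
    end
  end
end
```

In a Rails app the task would also depend on `:environment` so that models and the Sidekiq connection are loaded before the jobs run.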

How to automate testing successful boot of Sidekiq Process

We've had a couple of issues recently in our Rails 4 app that could have been prevented with a simple check to see that both the main app and a worker process correctly boot up our application.
Our Procfile for Heroku looks like so:
web: bundle exec unicorn -p $PORT -c config/heroku/unicorn.rb
sidekiq: bundle exec sidekiq -q high -q medium -q low
Right now, we run a very large test suite that takes 30 mins to complete, even when bisecting the suite across 10+ containers.
We use CircleCI and are considering adding a test like the following, which simulates the Sidekiq bootup process.
require 'spec_helper'

RSpec.describe 'Booting processes' do
  context 'Sidekiq server' do
    it 'loads successfully' do
      boot_sidekiq_sequence = <<-SEQ
        require 'sidekiq/cli'
        Sidekiq::CLI.instance.environment = '#{ENV['RAILS_ENV']}'
        Sidekiq::CLI.instance.send(:boot_system)
      SEQ
      expect(system(%{echo "#{boot_sidekiq_sequence}" | ruby > /dev/null})).to be_truthy,
        "The Sidekiq process could not boot up properly. Run `sidekiq` to troubleshoot"
    end
  end
end
The problem with this is that:
It's not quite a complete test of our application boot process which would require reading from our Procfile (we also want to protect devs from making breaking changes there)
It's slow and would add approx 30 secs to our test runtime
It's a bit low-level for my liking.
What makes testing the Sidekiq boot process in particular challenging is that it will only exit if there is an exception on boot.
Can anyone recommend a better, faster, more thorough approach to automating this kind of test?
For integration testing, it's always best to run the process as close to production as possible, so bundle exec sidekiq .... I wouldn't use the CLI API to boot it in-process.
I would first enqueue a special job which just does this:
def perform
  puts "Sidekiq booted"
  Process.kill('TERM', $$) # terminate our own process
end
Then execute Sidekiq by running the binary and monitoring STDOUT. If you see /booted/ within 30 seconds, PASS. If you see nothing within 30 seconds, FAIL and kill the child.
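A rough sketch of that monitor loop (the helper name and the stand-in child command are mine, not Sidekiq API):

```ruby
require 'open3'

# Spawn a command and wait up to `timeout` seconds for a line matching
# /booted/ on its combined stdout/stderr; kill the child if it never shows.
def booted_within?(command, timeout: 30)
  Open3.popen2e(*command) do |_stdin, out, wait_thr|
    deadline = Time.now + timeout
    while (remaining = deadline - Time.now) > 0
      break unless IO.select([out], nil, nil, remaining)
      line = out.gets
      break if line.nil? # child closed its output without the marker
      return true if line =~ /booted/
    end
    Process.kill('TERM', wait_thr.pid) rescue nil
    false
  end
end

# A stand-in child that prints the marker, in place of `bundle exec sidekiq`:
booted_within?(['ruby', '-e', 'puts "Sidekiq booted"'], timeout: 10) # => true
```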
Thanks to the inspiration here I ended up with a test that executes the actual command from the Procfile:
require 'foreman/procfile'
require 'open3'

# Utilizes a Procfile like this:
# web: bundle exec puma -C config/puma.rb
# workers: bundle exec sidekiq
#
describe Sidekiq do
  let(:procfile_name){ 'Procfile' }
  let(:command_name){ 'workers' }
  let(:command) do
    Foreman::Procfile.new(procfile_name)[command_name] ||
      raise("'#{command_name}' not defined in '#{procfile_name}'")
  end

  it 'launches without errors' do
    expect(output_from(command)).to match(/Starting processing/)
  end

  def output_from(command)
    Open3.popen2e(*command.split) do |_, out, wait_thr|
      pid = wait_thr.pid
      output_pending = IO.select([out], nil, nil, 10.seconds)
      Process.kill 'TERM', pid
      if output_pending
        out.read
      else
        raise Timeout::Error
      end
    end
  end
end

Cron Job not working using whenever gem

I am trying to learn Cron Job and whenever gem.
In my app I have created a rake task.
require 'rubygems'

namespace :cron_job do
  desc "To Check Users Inactivity"
  task user_inactivity: :environment do
    p "Inactive Users..."
  end
end
and in schedule.rb I wrote this:
every 1.minute do
  rake "cron_job:user_inactivity", environment: "development"
end
and in my terminal I ran two commands:
whenever --update-crontab
and then
sudo /etc/init.d/cron restart
but nothing happens after 1 minute.
I checked my console for the p messages and saw nothing. Did I miss something?
The output won't show in your console.
You have to set the output log path in schedule.rb:
set :output, "/path/to/my/cron_log.log"
Then check that log. You can also verify that the crontab entry was actually installed with crontab -l.
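Putting the answer together with the schedule above, config/schedule.rb would look something like this (the log path is just an example):

```ruby
# config/schedule.rb
set :output, "log/cron_log.log" # cron's stdout/stderr are appended here

every 1.minute do
  rake "cron_job:user_inactivity", environment: "development"
end
```

After running whenever --update-crontab again, the p output from the task shows up in log/cron_log.log rather than in any console.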

Resque workers failing silently with no logs or no debug information

I have a piece of code which runs some method, and I have put that method inside self.perform, as below:
class StartWorker
  @queue = :worker

  def self.perform
    puts "hello"
    Parser.find_all_by_class_name("Parser").first.sites.each do |s|
      site = Site.find(s.id)
      parser = Parser.new(site)
      parser.run
    end
  end
end
If I comment out the def self.perform line and its matching end and run the file, it correctly shows the desired results. But when I uncomment them and run this as a background task, it fails silently: the Resque panel shows nothing, and there is no output in the command prompt either.
I am queuing this from resque-scheduler, inside my resque_schedule.yml
start_worker:
  cron: "55 6 * * *"
  class: StartWorker
  queue: worker
  args:
  description: "This job starts all the worker jobs"
I tried putting some simple puts statements inside the def self.perform method to check whether control even reaches it, but it seems it never does.
I also tried loading the Rails environment and requiring resque inside it, but to no avail.
I had run bundle exec rake resque:work QUEUE='*' for all the above before making any changes.
I tried Resque.info inside the rails console, and it shows this:
{:working=>0, :queues=>1, :failed=>1, :environment=>"development", :servers=>["redis://localhost:6379/0"], :pending=>0, :processed=>1, :workers=>1}
Any Ideas how to overcome this?
Have you checked the resque-web panel to see what that failed job is?
Make sure you restart workers after any changes.
If you're not seeing 'hello' in your logs anywhere (specifically log/worker.log, or wherever yours log to), then the function simply isn't happening.
bundle exec rake resque:work QUEUE='*' should be bundle exec rake resque:work QUEUE=*, just in case you really did quote the wildcard.
Worked for me:
Resque::Scheduler.configure do |c|
  c.verbose = Rails.env.development? # or just true
end
Ruby 3.0.3, Resque 2.2.0, Resque-Scheduler 4.5.0, Rails 6.1.4.4

Start or ensure that Delayed Job runs when an application/server restarts

We have to use delayed_job (or some other background-job processor) to run jobs in the background, but we're not allowed to change the boot scripts/boot-levels on the server. This means that the daemon is not guaranteed to remain available if the provider restarts the server (since the daemon would have been started by a capistrano recipe that is only run once per deployment).
Currently, the best way I can think of to ensure the delayed_job daemon is always running, is to add an initializer to our Rails application that checks if the daemon is running. If it's not running, then the initializer starts the daemon, otherwise, it just leaves it be.
The question, therefore, is: how do we detect that the delayed_job daemon is running from inside a script? (We should be able to start up a daemon fairly easily, but I don't know how to detect whether one is already active.)
Anyone have any ideas?
Regards,
Bernie
Based on the answer below, this is what I came up with. Just put it in config/initializers and you're all set:
# config/initializers/delayed_job.rb
DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

def start_delayed_job
  Thread.new do
    `ruby script/delayed_job start`
  end
end

def process_is_dead?
  begin
    pid = File.read(DELAYED_JOB_PID_PATH).strip
    Process.kill(0, pid.to_i)
    false
  rescue
    true
  end
end

if !File.exist?(DELAYED_JOB_PID_PATH) || process_is_dead?
  start_delayed_job
end
Some more cleanup ideas: the begin is not needed. You should rescue only "no such process", in order not to fire new processes when something else goes wrong, and rescue "no such file or directory" as well to simplify the condition.
DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

def start_delayed_job
  Thread.new do
    `ruby script/delayed_job start`
  end
end

def daemon_is_running?
  pid = File.read(DELAYED_JOB_PID_PATH).strip
  Process.kill(0, pid.to_i)
  true
rescue Errno::ENOENT, Errno::ESRCH # file or process not found
  false
end

start_delayed_job unless daemon_is_running?
Keep in mind that this code won't work if you start more than one worker. And check out the "-m" argument of script/delayed_job which spawns a monitor process along with the daemon(s).
Check for the existence of the daemons PID file (File.exist? ...). If it's there then assume it's running else start it up.
Thank you for the solution provided in the question (and the answer that inspired it :-) ), it works for me, even with multiple workers (Rails 3.2.9, Ruby 1.9.3p327).
It worries me that I might forget to restart delayed_job after making some changes to lib for example, causing me to debug for hours before realizing that.
I added the following to my script/rails file in order to allow the code provided in the question to execute every time we start rails but not every time a worker starts:
puts "cleaning up delayed job pid..."
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid', __FILE__)
begin
  File.delete(dj_pid_path)
rescue Errno::ENOENT # file does not exist
end
puts "delayed_job ready."
A little drawback that I'm facing with this, though, is that it also gets called with rails generate, for example. I did not spend much time looking for a solution to that, but suggestions are welcome :-)
Note that if you're using unicorn, you might want to add the same code to config/unicorn.rb before the before_fork call.
-- EDITED:
After playing around a little more with the solutions above, I ended up doing the following:
I created a file script/start_delayed_job.rb with the content:
puts "cleaning up delayed job pid..."
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid', __FILE__)

def kill_delayed(path)
  pid = File.read(path).strip
  Process.kill(0, pid.to_i) # signal 0 only probes the process; it does not kill it
  false
rescue
  true
end

kill_delayed(dj_pid_path)

begin
  File.delete(dj_pid_path)
rescue Errno::ENOENT # file does not exist
end

# spawn delayed job
env = ARGV[1]
puts "spawning delayed job in the same env: #{env}"
# Process.spawn("ruby script/delayed_job start") was replaced with the following
# in order to ensure delayed_job runs in the same environment that spawned it
system({ "RAILS_ENV" => env }, "ruby script/delayed_job start")
puts "delayed_job ready."
Now I can require this file anywhere I want, including 'script/rails' and 'config/unicorn.rb' by doing:
# at the top of script/rails
START_DELAYED_PATH = File.expand_path('../start_delayed_job', __FILE__)
require "#{START_DELAYED_PATH}"

# in config/unicorn.rb, before the before_fork call, with a different expand_path
START_DELAYED_PATH = File.expand_path('../../script/start_delayed_job', __FILE__)
require "#{START_DELAYED_PATH}"
not great, but works
Disclaimer: I say "not great" because this causes a periodic restart, which for many will not be desirable. And simply trying to start can cause problems, because DJ's implementation can lock up the queue if duplicate instances are created.
You could schedule cron tasks that run periodically to start the job(s) in question. Since DJ treats start commands as no-ops when the job is already running, it just works. This approach also takes care of the case where DJ dies for some reason other than a host restart.
# crontab example
0 * * * * /bin/bash -l -c 'cd /var/your-app/releases/20151207224034 && RAILS_ENV=production bundle exec script/delayed_job --queue=default -i=1 restart'
If you are using a gem like whenever this is pretty straightforward.
every 1.hour do
  script "delayed_job --queue=default -i=1 restart"
  script "delayed_job --queue=lowpri -i=2 restart"
end
