Rails delayed_job start and stop - ruby-on-rails

I'm using delayed_job in my project, and until now running rake jobs:work from the console has worked.
But now I'm trying to stop processing jobs after a specific event and resume processing after another event.
To simulate this behaviour I've created an initializer, config/initializers/delayed_jobs.rb:
puts "START DELAYED JOB PRE"
`script/delayed_job start`
puts "START DELAYED JOB POST"
Only the first puts is printed; the server gets stuck on that instruction and the web app is never served.
How can I start and stop delayed_job and get the correct behaviour?

You can move it into its own thread:
puts "START DELAYED JOB PRE"
Thread.new do
  `script/delayed_job start`
end
puts "START DELAYED JOB POST"

I would suggest using an external daemon that is responsible for keeping your workers up. I've had quite a good experience with resque-pool, which is designed for the Resque queue. Usually such a daemon lets you pause workers just by sending a signal to the master process. It also makes deployment restarts and management quite easy.
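For the pause/continue use case in the question, sending signals to the pool master would look roughly like this. This is only a sketch: the pidfile path is an assumption about your setup, and USR2/CONT as the pause/resume signals should be verified against the resque-pool README for the version you run.
master_pid = File.read("tmp/pids/resque-pool.pid").to_i  # hypothetical pidfile location
Process.kill("USR2", master_pid)  # assumption: USR2 tells the master to stop spawning workers
# ... later, when the "continue" event fires ...
Process.kill("CONT", master_pid)  # assumption: CONT resumes spawning workers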

Related

Sidekiq/Redis queuing a job that doesn't exist

I built a simple test job for Sidekiq and added it to my schedule.yml file for Sidekiq Cron.
Here's my test job:
module Slack
  class TestJob < ApplicationJob
    queue_as :default

    def perform(*args)
      begin
        SLACK_NOTIFIER.post(attachments: {"pretext": "test", "text": "hello"})
      rescue Exception => error
        puts error
      end
    end
  end
end
The SLACK_NOTIFIER here is a simple API client for Slack that I initialize on startup.
And in my schedule.yml:
test_job:
  cron: "* * * * *"
  class: "Slack::TestJob"
  queue: default
  description: "Test"
So I wanted to have it run every minute, and it worked exactly as I expected.
However, I've now deleted the job file and removed the job from schedule.yml, and it still tries to run the job every minute. I've gone into my sidekiq dashboard, and I see a bunch of retries for that job. No matter how many times I kill them all, they just keep coming.
I've tried shutting down both the redis server and sidekiq several times. I've tried turning off my computer (after killing the servers, of course). It still keeps scheduling these jobs and it's interrupting my other jobs because it raises the following exception:
NameError: uninitialized constant Slack::TestJob
I've done a project-wide search for "TestJob", but get no results.
I only had the redis server open with this job for roughly 10 minutes...
Is there maybe something lingering in the redis database? I've looked into the redis-cli documentation, but I don't think any of it helps me.
Try running FLUSHALL in redis-cli...
Other than that...
The sidekiq-cron documentation seems to expect that you check for the existence of schedule.yml explicitly...
# initializers/sidekiq.rb
schedule_file = "config/schedule.yml"

if File.exist?(schedule_file) && Sidekiq.server?
  Sidekiq::Cron::Job.load_from_hash YAML.load_file(schedule_file)
end
The default for Sidekiq is to retry a failed job 25 times. Because the job class no longer exists, every attempt raises an exception and fails, so Sidekiq flags the job for retry... 25 times. Because you were scheduling that job every minute, you probably had a ton of these failed jobs queued up for retry. You can:
Wait ~3 weeks to hit the maximum number of retries
If you have a UI page set up for Sidekiq, you can see and clear these jobs in the Retries tab
Dig through the Redis CLI documentation for a way to identify these specific jobs and delete them (or flush the whole thing if you're not worried about losing other jobs)
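If you would rather not dig through raw Redis keys, Sidekiq's own Ruby API can do the cleanup from a Rails console. A sketch, assuming sidekiq and sidekiq-cron are loaded and the cron entry is still registered under the name test_job from schedule.yml:
require "sidekiq/api"
require "sidekiq/cron/job"

# Remove the lingering cron entry; sidekiq-cron keeps its schedule in Redis,
# so deleting schedule.yml alone does not unregister the job.
Sidekiq::Cron::Job.destroy("test_job")

# Clear the retry set (same effect as clearing the Retries tab in the UI);
# note this drops *all* retries, not just the TestJob ones.
Sidekiq::RetrySet.new.clear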

delayed_job - Stop only one process

Using delayed_job gem, how can I stop only one process without stopping all the workers?
For example:
rake jobs:work                 # start workers
process1 = SomeClass.enqueue   # start process 1 in code
process2 = SomeClass.enqueue   # start process 2 in code
process1.stop                  # stops only process 1 and keeps process 2 running
I guess a similar question would be "How can I get the PID of a delayed job process?" because then I can kill the process using the PID.
Delayed Job as a whole is a single process. To find the pid, check the .pid file that is created in the tmp folder when delayed_job starts.
To stop one particular job from code, look it up via Delayed::Job.all (or a finder) and call destroy on it to remove it from the queue.
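A console sketch of that approach; matching on the serialized handler column is an assumption about how the job was enqueued, so adjust the pattern to your own job class:
# Find the queued job whose handler mentions the class, then remove it.
job = Delayed::Job.where("handler LIKE ?", "%SomeClass%").first
job.destroy if job && job.locked_at.nil?  # skip it if a worker has already locked it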

Rake Task Starts But Stops abruptly when executed via controller

I have a set of rake tasks that run on the production server; they are detached from the main thread and happen in the background.
Here is the code that executes it:
def vehicle
  @estate = Estate.find(@estate_id)
  @date_string = @login_month.strftime("%m%Y")
  system("rake udpms:process_only_vehicle[#{@date_string},#{@estate_id}] &")
  redirect_to :controller => "reports/error_messages", :message => "Processing will happen in the background and reports will be refreshed after two minutes", :target => "_blank"
end
When this code is executed via the URL route, it runs the rake task (I can see it if I check the active processes on the production machine), but it ends abruptly after about 10 seconds.
ps axl | grep rake
This is what it shows:
ruby /usr/local/rvm/gems/ruby-1.8.7-p352/bin/rake udpms:process_only_vehicle[082012,5]
If I execute the same rake task from the app folder in the terminal, it runs without any errors. It also runs without any issues on the dev machine (OS X). The server is Mint. The Rake version is the same on both, and there is only one version of the gem.
Since it's the production server there are no logs (other than production.log, and it's no help). Any help on how to go about debugging this issue would be much appreciated.
This is probably happening because your server software reaps requests that take longer than 10 seconds to respond. Despite the fact you're kicking off a rake task, the request still has to wait for that system call to execute: if it takes a while, the task is terminated and the server worker returned to the worker pool.
In a more general sense, this is not the appropriate way to make a task happen in the background. You probably want to use Resque or Delayed Job, which enqueue tasks and run them in the background for you.
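As a sketch of the Delayed Job route: wrap the work in a job object and enqueue it, so the web request returns immediately and a separate worker picks it up. ProcessOnlyVehicleJob and Udpms::VehicleProcessor are hypothetical names standing in for whatever the rake task actually calls.
class ProcessOnlyVehicleJob < Struct.new(:date_string, :estate_id)
  def perform
    # hypothetical: call the same code the udpms:process_only_vehicle rake task runs
    Udpms::VehicleProcessor.run(date_string, estate_id)
  end
end

def vehicle
  @estate = Estate.find(@estate_id)
  @date_string = @login_month.strftime("%m%Y")
  Delayed::Job.enqueue(ProcessOnlyVehicleJob.new(@date_string, @estate_id))
  redirect_to :controller => "reports/error_messages", :message => "Processing will happen in the background and reports will be refreshed after two minutes", :target => "_blank"
end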

Getting delayed_job to just work

I followed the railscast which uses CollectiveIdea's fork, but I'm not able to get it to work. I created a new file in my /lib folder and included this:
class Device
  def deliver
    # my long running method
  end
  handle_asynchronously :deliver
end

device = Device.new
device.deliver
I run script/delayed_job and that forks an app instance. Now:
There's no job activity going on. Nothing in the delayed_jobs table and nothing in the logs. Am I missing something here?
How do I set the interval at which the method should be run? (e.g. every 30 seconds)
I'm testing this in the development mode (Rails 2.3.2), and soon will be moving this into production.
Thanks!
Do you see a process for the script/delayed_job that you ran? Do a ps aux | grep delayed_job and see if there is a process running.
AFAIK, you cannot set any time intervals using Delayed Job.
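Later versions of the collectiveidea fork do accept options on handle_asynchronously, which lets you delay each run (a one-off delay, not a recurring interval); whether the version from the 2009 railscast supports this is an assumption to check against its README:
class Device
  def deliver
    # my long running method
  end
  # assumption: :run_at is supported in your gem version; delays each call by ~30 seconds
  handle_asynchronously :deliver, :run_at => Proc.new { 30.seconds.from_now }
end
For genuinely recurring runs, the usual approach is to enqueue from cron or have the job re-enqueue itself at the end of perform.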
As a first step to diagnose the problem:
Stop your job workers
Launch a delayed job
Check whether it is present in the database.
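Concretely, that check looks something like this from a Rails console with all workers stopped (a sketch using the ActiveRecord backend's Delayed::Job model):
Device.new.deliver           # with handle_asynchronously, this should enqueue rather than run
Delayed::Job.count           # should now be at least 1
Delayed::Job.last.handler    # YAML describing the queued deliver call
If the count stays at zero, the class in /lib is probably not being loaded at all, which would also explain the empty delayed_jobs table.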

Keeping a rake job running

I'm using delayed_job to run jobs, with new jobs being added every minute by a cronjob.
I have an issue where the rake jobs:work task, currently started manually with 'nohup rake jobs:work &', randomly exits.
While God seems to be a solution to some people, the extra memory overhead is rather annoying and I'd prefer a simpler solution that can be restarted by the deployment script (Capistrano).
Is there some bash/Ruby magic to make this happen, or am I destined to run a monitoring service on my server with some horrid hacks to allow the unprivileged account the site deploys under the ability to restart it?
For me the daemons gem was unreliable with delayed_job. It could have been a poorly written script (I was using the one on collectiveidea's delayed_job GitHub page) and not daemons' fault; I'm not really sure. But for whatever reason, it would restart inconsistently on deployments.
I read somewhere this was due to it not waiting for the process to actually exit, so the pid files would get overwritten or something. But I didn't really bother to investigate. I switched to the daemons-spawn gem using these instructions and it seems to be much more reliable now.
The delayed_job docs suggest that you use a monitoring service to manage the rake worker job(s). I use runit; it works well.
(You can install it in the mode where it does not replace init.)
Added:
Re: restart by Capistrano: yes, runit enables that. Just do a
sudo sv kill delayed_job
in your Capistrano recipe to kill the delayed_job worker. Runit will then restart it with your newly deployed code base.
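In a Capistrano 2-style recipe that might look like the following; the runit service name delayed_job is an assumption about how the server is configured:
namespace :delayed_job do
  desc "Kill the delayed_job worker; runit then restarts it with the new code"
  task :restart, :roles => :app do
    run "sudo sv kill delayed_job"
  end
end

after "deploy:restart", "delayed_job:restart"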
I have implemented a small rake task that restarts the jobs task over and over again:
desc "Start a delayed_job worker in an endless loop to prevent exits."
task :jobs => :environment do
  while true
    begin
      Delayed::Worker.new(:min_priority => ENV['MIN_PRIORITY'],
                          :max_priority => ENV['MAX_PRIORITY'],
                          :quiet => false).start
    rescue Exception => e
      puts "Exception occurred (#{e})"
    end
    puts "Task jobs:work exited, clearing queue and restarting"
    sleep 1
    Delayed::Job.delete_all
  end
end
Apparently it did not work, so I ended up with this simple solution:
for (( ;; )); do rake jobs:work --trace; done
Get rid of delayed_job and use either whenever or Resque.