Using the delayed_job gem, how can I stop only one process without stopping all the workers?
For example:
rake jobs:work                # start workers
process1 = SomeClass.enqueue  # start process 1 in code
process2 = SomeClass.enqueue  # start process 2 in code
process1.stop                 # stop only process 1 and keep process 2 running
I guess a similar question would be "How can I get the PID of a delayed job process?" because then I can kill the process using the PID.
Delayed Job as a whole runs as a single process. To find its pid, check the .pid file that is created in the tmp folder when delayed_job starts.
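For example, a minimal sketch of stopping one worker by its pid (assuming the default daemonized setup, where bin/delayed_job writes tmp/pids/delayed_job.pid in a Rails app; with multiple workers or an --identifier the filename differs):

# Read the pid the delayed_job daemon wrote and signal that worker process.
pid = File.read(Rails.root.join("tmp", "pids", "delayed_job.pid")).to_i
Process.kill("TERM", pid) # stops the whole worker process, not a single job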
To stop one particular job from code, look it up via Delayed::Job.all (or find your job with a more specific query) and call destroy on it to remove it from the queue.
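As a rough sketch, removing a single queued job from code could look like this (the SomeClass match is an assumption based on the example above; this only works before a worker has locked the job, it does not kill a job that is already running):

# Find the queued job whose serialized handler mentions SomeClass and delete it.
job = Delayed::Job.all.detect { |j| j.handler.include?("SomeClass") }
job&.destroy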
I'm using the delayed_job library as the adapter for Active Job in Rails:
config.active_job.queue_adapter = :delayed_job
My delayed_job worker is configured to sleep for 60 seconds before checking the queue for new jobs:
Delayed::Worker.sleep_delay = 60
In my Rails code I add a new job to the queue like this:
MyJob.perform_later(data)
However, the job will not be picked up by the delayed_job worker immediately, even if there are no jobs in the queue, because of the sleep_delay. Is there a way to tell the delayed_job worker to wake up and start processing the job queue if it's sleeping?
There is a MyJob.perform_now method, but it blocks the thread, so it's not what I want because I want to execute a job asynchronously.
Looking at the delayed_job code, it appears that there's no way to directly control or communicate with the workers after they are daemonized.
I think the best you can do would be to start a separate worker with a small sleep_delay that only reads a specific queue, then use that queue for these jobs. A separate command is necessary because you can't start a worker pool where the workers have different sleep delays:
Start your main worker: bin/delayed_job start
Start your fast worker: bin/delayed_job start --sleep-delay=5 --queue=fast --identifier=999 (the identifier is necessary to differentiate the worker daemons)
Update your job to use that queue:
class MyJob < ApplicationJob
  queue_as :fast

  def perform(data)
    # ...
  end
end
Notes:
When you want to stop the workers you'll also have to do it separately: bin/delayed_job stop and bin/delayed_job stop --identifier=999.
This introduces some potential parallelism and extra load on the server when both workers are working at the same time.
The main worker will process jobs from the fast queue too; it's just a matter of which worker grabs the job first. If you don't want that, you need to set up the main worker to read only from the other queue(s). By default that's just 'default', so: bin/delayed_job start --queue=default.
I have a Ruby on Rails app deployed with Dokku. During a deploy my old worker keeps running its tasks normally, but a few seconds after the deploy completes that worker is removed and a new one is created without finishing the tasks that were running. Does anyone know how to get around this? Either by letting the worker finish all of its tasks before it is removed, or by handing the old worker's tasks over to the new worker created after the deploy? Any pointers?
Dokku's zero-downtime deploy mechanism will wait a configurable amount of time before running docker container stop. This will send a SIGTERM to your application, and then a SIGKILL after a grace period.
Your application code should handle SIGTERM and gracefully stop accepting new work, finish old work, and then terminate. This is generally what background processing frameworks do by default, but you may need to configure this in yours or add the functionality if it is a custom framework you wrote.
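As a rough illustration, a hand-rolled worker loop that honours SIGTERM could look like the sketch below; fetch_next_job and process are hypothetical placeholders, and frameworks like delayed_job, Sidekiq and Resque already ship equivalent trap handling:

# Minimal graceful-shutdown sketch for a custom worker loop.
shutdown = false
Signal.trap("TERM") { shutdown = true } # Dokku sends TERM first

until shutdown
  job = fetch_next_job # hypothetical: dequeue one unit of work
  process(job) if job  # finish the job we already picked up
end
# The loop exits after the current job, before Dokku's SIGKILL grace period.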
I managed to solve it with a workaround. I created a check_worker.rb file that I call in the predeploy hook; it quiets the workers so they stop taking new tasks, and while any worker still has tasks running it holds up the deploy. Only after all tasks have finished does the deploy proceed.
app.json
{
  "scripts": {
    "dokku": {
      "predeploy": "ruby check_worker.rb"
    }
  }
}
check_worker.rb
#!/usr/bin/env ruby
require 'sidekiq'
require 'sidekiq/api'

# Quiet every Sidekiq process so no new jobs are picked up
Sidekiq::ProcessSet.new.each(&:quiet!)

# Block the predeploy step until all in-progress jobs have finished
loop do
  busy = Sidekiq::Workers.new.size # number of jobs currently being worked on
  break if busy.zero?
  sleep 2
end
I'm using delayed_job in my project, and until now running rake jobs:work in the console has worked.
But now I'm trying to stop processing jobs after a particular event, and resume processing after another event.
To simulate this behaviour I've created a script, config/initializers/delayed_jobs.rb:
puts "START DELAYED JOB PRE"
`script/delayed_job start`
puts "START DELAYED JOB POST"
Only the first puts is executed; the server gets stuck on that instruction, and the web app never comes up.
How can I stop and start delayed_job so that I get the correct behaviour?
You can move it into its own thread:
puts "START DELAYED JOB PRE"
Thread.new do
  `script/delayed_job start`
end
puts "START DELAYED JOB POST"
I would suggest using an external daemon that is responsible for keeping your workers up. I had quite a good experience with resque-pool, which is designed for the Resque queue. Such a daemon usually lets you pause workers just by sending a signal to the master process. It also makes deployment restarts and management quite easy.
Resque is currently showing that I have a worker doing work on a queue. I shut that worker down in the middle of the queue (it's just for testing), but it still shows as running. I've confirmed the process ID has been killed and bluepill is no longer monitoring it. I can't find any way in the UI to clear its "working" status.
What's the best way to correct the count of workers that are currently up? (I have 2; the web UI reports 3.)
You may have a lingering pid file. This file is independent of the process running; in other words, when you killed the process, it didn't delete the pid file.
If you're using a typical Rails and Resque setup, Resque will store the pid in the Rails ./tmp directory.
Some Resque start scripts specify the pid file in a different location, something like this:
PIDFILE=foo/bar/resque/pid bundle exec rake resque:work
Wherever the script puts the pid file, look there, then delete it, then restart.
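If you'd rather check before deleting, here is a hedged Ruby sketch; the tmp/pids/resque.pid path is an assumption, so use whatever path your start script sets:

pidfile = File.join("tmp", "pids", "resque.pid")
if File.exist?(pidfile)
  pid = File.read(pidfile).to_i
  begin
    Process.kill(0, pid) # signal 0 only tests whether the process exists
    puts "worker #{pid} is still running; leaving the pid file in place"
  rescue Errno::ESRCH
    File.delete(pidfile) # process is gone, so the pid file is stale
    puts "removed stale pid file for worker #{pid}"
  end
end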
Also on the command line, you can ask redis for the running workers:
redis-cli keys '*worker:*'
If there are workers that you don't expect, you can delete them with:
redis-cli del <keyname>
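You can do the same from a Rails console with the Resque Ruby API instead of raw redis-cli; a hedged sketch, where "myhost:12345" is a placeholder for the id of the worker you killed:

require 'resque'

Resque.workers.each do |worker|
  # Worker ids look like "hostname:pid:queue1,queue2"
  worker.unregister_worker if worker.to_s.start_with?("myhost:12345")
end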
Try restarting the application.
For future reference, also have a look at https://github.com/resque/resque/issues/299
I'm having a rough time with Resque. In development, running rake resque:work QUEUE='*' starts up fine and runs the perform method for my workers. The problem is that the workers don't seem to run my new application code: if I update a worker's perform method, Ctrl+C out of the rake resque:work QUEUE='*' process and start it up again, queuing new jobs still doesn't result in the worker running the updated code.
So my main question is: how do I safely kill the resque:work task and restart my workers with the new application code?
Resque workers respond to a few different signals:
QUIT - Wait for child to finish processing then exit
TERM / INT - Immediately kill child then exit
USR1 - Immediately kill child but don't exit
USR2 - Don't start to process any new jobs
CONT - Start to process new jobs again after a USR2
If you want to gracefully shutdown a Resque worker, use QUIT.
kill -s QUIT $(cat resque.pid)
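If you want to send QUIT to every worker on the current machine from a Rails console instead, here is a hedged sketch (it parses Resque's "host:pid:queues" worker id format):

require 'resque'

hostname = `hostname`.strip
Resque.workers.each do |worker|
  host, pid, _queues = worker.to_s.split(':')
  # QUIT lets the in-progress job finish before the worker exits
  Process.kill("QUIT", pid.to_i) if host == hostname
end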
If you want to set up a Resque restart with Capistrano, use this gist