Redis and Resque, workers not getting updated application code

I'm having a rough time here with Resque. In development, running rake resque:work QUEUE='*' starts up fine and runs the perform method for my workers. The problem is that the workers don't seem to pick up new application code: if I update the perform method in a worker, Ctrl+C out of the rake resque:work QUEUE='*' process, and start it up again, newly queued jobs are still worked with the old code.
So my main question is: how do I safely kill the resque:work task and restart my workers with the new application code?

Resque workers respond to a few different signals:
QUIT - Wait for child to finish processing then exit
TERM / INT - Immediately kill child then exit
USR1 - Immediately kill child but don't exit
USR2 - Don't start to process any new jobs
CONT - Start to process new jobs again after a USR2
If you want to gracefully shut down a Resque worker, use QUIT:
kill -s QUIT $(cat resque.pid)
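A minimal restart cycle that picks up new application code might then look like this (a sketch; it assumes the worker writes its pid to tmp/pids/resque.pid, and PIDFILE/BACKGROUND are options of Resque's rake task):
# ask the old worker to finish its current job and then exit
kill -s QUIT $(cat tmp/pids/resque.pid)
# once it has exited, start a fresh worker that loads the updated code
PIDFILE=tmp/pids/resque.pid BACKGROUND=yes QUEUE='*' bundle exec rake resque:work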
If you want to set up Resque restarts with Capistrano, use this gist.

Related

Is there a way to tell a sleeping delayed job worker to process the queue from Rails code?

I'm using the delayed_job library as the adapter for Active Job in Rails:
config.active_job.queue_adapter = :delayed_job
My delayed_job worker is configured to sleep for 60 seconds before checking the queue for new jobs:
Delayed::Worker.sleep_delay = 60
In my Rails code I add a new job to the queue like this:
MyJob.perform_later(data)
However, the job will not be picked up by the delayed_job worker immediately, even if there are no jobs in the queue, because of the sleep_delay. Is there a way to tell the delayed_job worker to wake up and start processing the job queue if it's sleeping?
There is a MyJob.perform_now method, but it blocks the thread, so it's not what I want because I want to execute a job asynchronously.
Looking at the delayed_job code, it appears that there's no way to directly control or communicate with the workers after they are daemonized.
I think the best you can do is to start a separate worker with a small sleep_delay that only reads a specific queue, and then use that queue for these jobs. A separate command is necessary because you can't start a worker pool in which the workers have different sleep delays:
Start your main worker: bin/delayed_job start
Start your fast worker: bin/delayed_job start --sleep-delay=5 --queue=fast --identifier=999 (the identifier is necessary to differentiate the worker daemons)
Update your job to use that queue:
class MyJob < ApplicationJob
  queue_as :fast

  def perform(data)
    # ...
  end
end
Notes:
When you want to stop the workers you'll also have to do it separately: bin/delayed_job stop and bin/delayed_job stop --identifier=999.
This introduces some potential parallelism and extra load on the server when both workers are working at the same time.
The main worker will process jobs from the fast queue too; it's just a matter of which worker grabs the job first. If you don't want that, you need to set up the main worker to read only from the other queue(s). By default that's only 'default', so: bin/delayed_job start --queue=default.
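If only some invocations need the fast path, you can also override the queue at enqueue time rather than in the class; set(queue: ...) is standard Active Job (a sketch reusing the names from above):
MyJob.set(queue: :fast).perform_later(data)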

Concurrency in delayed_job

I have a RoR application and one delayed_job process started using rake jobs:work.
The application adds jobs to multiple queues; let's say we have queue1 and queue2.
My question is: will a task in queue1 and a task in queue2 be executed concurrently?
Currently in my application, after running rake jobs:work, only one thread is spawned, which executes the queue1 task and then the queue2 task.
If I have to execute them in parallel, I have to run two rake jobs:work tasks.
Is this the correct behavior, or can they run concurrently in one rake jobs:work task?
Also, what is a worker in Delayed Job? Is "delayed job" used interchangeably with "worker"?
Thanks
Priyanka
No, one worker cannot run two jobs concurrently; you need more than one process running for that.
In the example you are describing, you are starting a worker that runs in the foreground (rake jobs:work). What you could do instead is start workers in the background by running bin/delayed_job (script/delayed_job in earlier versions). That command has multiple options that you can use to specify how you want delayed_job to function.
One of the options is -n or --number_of_workers=workers. That means you can start two workers by running the following command:
bundle exec bin/delayed_job --number_of_workers=2 start
It is also possible to dedicate certain workers to only run jobs from a specific queue, or only high priority jobs.
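For example (a sketch; 'mailers' is a hypothetical queue name, and --pool is a standard bin/delayed_job option):
# one worker dedicated to the 'mailers' queue, two workers for any queue
bundle exec bin/delayed_job --pool=mailers:1 --pool=*:2 start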

Rails delayed job and docker: adding more workers

I run my Rails app using Docker. Delayed jobs are processed by a single worker that runs in a separate container called worker; inside it, the worker runs with the command bundle exec rake jobs:work.
I have several types of jobs that I would like to move to a separate queue, and I'd like to create a separate worker for that, or at least have two workers processing tasks.
I tried to run my worker container with env QUEUE=default_queue bundle exec rake jobs:work && env QUEUE=another_queue bundle exec rake jobs:work, but that does not make any sense. It does not fail, it starts, but jobs aren't processed.
Is there any way to have separate workers in one container? And is that correct? Or should I create a separate container for each worker I would ever want to make?
Thanks in advance!
Running command1 && command2 results in command2 being executed only if command1 completes successfully. rake jobs:work never terminates, even when it has finished executing all the jobs in the queue, so the second command will never execute.
A single "&" is probably what you're looking for: command1 & command2.
This will run the commands independently in their own processes.
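For example, reusing the queue names from the question (a sketch):
# run both workers in the same container, each in its own process
env QUEUE=default_queue bundle exec rake jobs:work &
env QUEUE=another_queue bundle exec rake jobs:work &
# keep the container's main process alive until the workers exit
wait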
You should use the delayed_job script on production, and it's a good idea to put workers of different queues into different containers in case one of the queues contains jobs that use up a lot of resources.
This will start a delayed job worker for the default_queue:
bundle exec script/delayed_job --queue=default_queue start
For Rails 4, it is: bundle exec bin/delayed_job --queue=default_queue start
Check out this answer on the topic: https://stackoverflow.com/a/6814591/6006050
You can also start multiple workers in separate processes using the -n option. This will start 3 workers in separate processes, all picking jobs from the default_queue:
bundle exec script/delayed_job --queue=default_queue -n 3 start
Differences between rake jobs:work and the delayed_job script:
It appears that the only difference is that rake jobs:work starts processing jobs in the foreground, while the delayed_job script creates a daemon which processes jobs in the background. You can use whichever is more suited to your use case.
Check this github issue: https://github.com/collectiveidea/delayed_job/issues/659
Actually, I just came across this problem while scaling delayed_job on Docker.
See this gist for a script that starts delayed jobs with arbitrary arguments, listens for SIGTERM, and executes a graceful shutdown of the started jobs on container shutdown. This way you can run as many processes and queues as you want.
https://gist.github.com/jklimke/3fea1e5e7dd7cd8003de7500508364df
#!/bin/bash
# DELAYED_JOB_ARGS contains the arguments for delayed_job, e.g. for defining queues and worker pools.

# Called when the Docker container should stop: shuts down the delayed_job processes.
_term() {
  echo "Caught SIGTERM signal! Stopping delayed jobs!"
  # unbind traps
  trap - SIGTERM TERM SIGINT INT
  # stop the delayed_job daemons
  bundle exec "./bin/delayed_job ${DELAYED_JOB_ARGS} stop"
  exit
}

# register the handler for the relevant signals
trap _term SIGTERM TERM SIGINT INT

echo "Starting delayed jobs ... with ARGs \"${DELAYED_JOB_ARGS}\""

# (re)start the delayed_job daemons on script execution
bundle exec "./bin/delayed_job ${DELAYED_JOB_ARGS} restart"

echo "Finished starting delayed jobs... Waiting for SIGTERM / CTRL C"

# sleep forever; sleeping in the background and waiting on it lets
# the trap handler run as soon as a signal arrives, instead of only
# after the current sleep finishes
while true; do sleep 86400 & wait $!; done
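With this in place, each container run can choose its own worker layout through the environment, for example (a sketch, assuming the script above is saved as start_delayed_jobs.sh; the queue names are from the question):
DELAYED_JOB_ARGS="--pool=default_queue:1 --pool=another_queue:1" ./start_delayed_jobs.sh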

delayed_job - Stop only one process

Using delayed_job gem, how can I stop only one process without stopping all the workers?
For example:
rake jobs:work                 # start workers
process1 = SomeClass.enqueue   # start process 1 in code
process2 = SomeClass.enqueue   # start process 2 in code
process1.stop                  # should stop only process 1 and keep process 2 running
I guess a similar question would be "How can I get the PID of a delayed job process?" because then I can kill the process using the PID.
Delayed Job as a whole is one process. To find its pid, check the .pid file that is created in the tmp folder when delayed_job starts.
To stop one particular job from code, find it (for example via Delayed::Job.all or a query) and call destroy on it to remove it from the queue.
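For example, from a Rails console (a sketch; the handler LIKE pattern is illustrative, and the locked_at check avoids destroying a job that a worker has already picked up):
job = Delayed::Job.where("handler LIKE ?", "%SomeClass%").where(locked_at: nil).first
job.destroy if job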

Resque: worker status is not right

Resque is currently showing me that I have a worker doing work on a queue. That worker was shut down by me in the middle of the queue (it's just for testing), and it is still showing as running. I've confirmed the process ID has been killed and bluepill is no longer monitoring it. I can't find any way in the UI to force-clear its working status.
What's the best way to correct the number of workers reported as currently up? (I have 2; the web UI reports 3.)
You may have a lingering pid file. This file is independent of the process running; in other words, when you killed the process, it didn't delete the pid file.
If you're using a typical Rails and Resque setup, Resque will store the pid in the Rails ./tmp directory.
Some Resque start scripts specify the pid file in a different location, something like this:
PIDFILE=foo/bar/resque/pid bundle exec rake resque:work
Wherever the script puts the pid file, look there, then delete it, then restart.
Also on the command line, you can ask Redis for the registered workers (the pattern is quoted so the shell doesn't expand it):
redis-cli keys '*worker:*'
If there are workers that you don't expect, you can delete them with:
redis-cli del <keyname>
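Alternatively, from a Rails console, you can clear the stale records through Resque's own API (a sketch; unregister_worker removes a worker's Redis keys, so only run this while all real workers are stopped):
Resque.workers.each do |worker|
  worker.unregister_worker
end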
Try restarting the application.
For future reference: also have a look at https://github.com/resque/resque/issues/299
