Sidekiq worker not getting triggered

I am using Sidekiq for my background jobs.
I have a worker, app/workers/data_import_worker.rb:
class DataImportWorker
  include Sidekiq::Worker
  sidekiq_options retry: false

  def perform(job_id, file_name)
    # some logic in it...
  end
end
It is called from a file, lib/parse_excel.rb:
def parse_raw_data
  # job_id and filename are defined before this point
  DataImportWorker.perform_async(job_id, filename)
end
As soon as I trigger it from my action, the worker is not getting called. Redis is running on localhost:6379.
Any idea why this might be happening? The environment is Linux.

I had a similar problem: Sidekiq was running, but when I called perform_async it did nothing except return true.
The problem was that rspec-sidekiq was in my ":development, :test" Gemfile group. I fixed it by moving rspec-sidekiq to the ":test" group only.
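For example, a sketch of the Gemfile change (rspec-sidekiq swaps in a fake testing adapter, which is why perform_async appears to do nothing):
# Gemfile: keep rspec-sidekiq out of :development
group :test do
  gem 'rspec-sidekiq'
end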

Start sidekiq from the root directory of your Rails app. For example,
bundle exec sidekiq -e staging -C config/sidekiq.yml

I encountered the same problem; it turned out that the argument I passed to perform_async was not appropriate. You should not pass query results to perform_async; arguments are serialized, so do all the querying inside perform instead.
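For example, a sketch using the DataImportWorker from the question (Import is a hypothetical model):
# Bad: an ActiveRecord object is serialized to JSON and will not survive the round trip
DataImportWorker.perform_async(Import.find(job_id), filename)
# Good: pass plain identifiers and do the lookup inside perform
DataImportWorker.perform_async(job_id, filename)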

You need to specify the name of the queue that worker is for.
Example:
sidekiq_options retry: false, :queue => :data_import_worker
data_import_worker can be any name you want to give it.
Then when you go to the web interface: yoursite.com/sidekiq, you'll be able to see the current workers for the queue "data_import_worker"
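Note that the Sidekiq process itself must also watch that queue, for example:
bundle exec sidekiq -q data_import_worker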

For me, when calling perform_later, the job would enqueue but never be removed from the queue. I needed to add my queue name to the sidekiq.yml file:
---
:concurrency: 25
:pidfile: ./tmp/pids/sidekiq.pid
:logfile: ./log/sidekiq.log
:queues:
- default
- my_queue

Lost a good 15 minutes on this. To check whether Sidekiq is correctly loading your config file (with the queue names), go to the Busy tab in the web interface: you'll find your process ID and, below it, your queues.
In our case, we had misspelled the mailer queue (the correct ActiveJob queue for mailers is mailers, plural).
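For reference, a sketch of the corrected sidekiq.yml entry:
:queues:
  - default
  - mailers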

My issue was simply having the worker file in the wrong path.
It needs to be in "project_root/app/workers/worker.rb", not "project_root/worker/worker.rb".
Check the file path!

Is it really possible to run multiple workers on standalone Sidekiq?
For example, I have 2 workers:
ProccessWorker
CallbackWorker
When I run Sidekiq:
bundle exec sidekiq -r ./workers/proccess_worker.rb -C ./config/sidekiq.yml
only one worker is loaded at a time.
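One approach (a sketch, not from the thread; -r takes a single file to require) is a loader file that requires every worker, and pointing Sidekiq at that:
# workers/all.rb -- hypothetical loader that pulls in both workers
require_relative 'proccess_worker'
require_relative 'callback_worker'
Then a single Sidekiq process serves both:
bundle exec sidekiq -r ./workers/all.rb -C ./config/sidekiq.yml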

I was calling perform_async(23) in a production console, but my Sidekiq was started in staging mode.
After I started Sidekiq in production mode, things started working well.
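For example, a sketch (flags depend on your setup):
bundle exec sidekiq -e production -C config/sidekiq.yml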

Related

How can I test sidekiq-scheduler locally?

I am trying to add a couple of scheduled workers to my Rails application. These workers will crawl different sites at given intervals.
I want to test these workers but am not able to. I am starting Redis and my application. What should I do to see whether my scheduled jobs are working or not?
Here is my crawler class:
class AyedasCrawler
  include Sidekiq::Worker
  # ...
end
and my sidekiq.yml is:
:schedule:
  ayedas_crawler:
    cron: '0 * * * * *' # runs once per minute
    class: AyedasCrawler
Start the Sidekiq worker and the scheduler processes by running
bundle exec sidekiq (or just sidekiq) from your app root on the command line.
sidekiq-scheduler provides an extension to the Sidekiq web interface that adds a Recurring Jobs page.
There are two ways to do this:
In your routes.rb file, just below the require 'sidekiq/web', add require 'sidekiq-scheduler/web'
In your config.ru, just below the require 'sidekiq/web', add
require 'sidekiq-scheduler/web'
run Sidekiq::Web
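For a Rails app, the routes.rb variant might look like this (a sketch; the mount path is an assumption):
# config/routes.rb
require 'sidekiq/web'
require 'sidekiq-scheduler/web'

Rails.application.routes.draw do
  mount Sidekiq::Web => '/sidekiq'
end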
In the browser, go to http://localhost:{port}/sidekiq/recurring-jobs, where {port} is the port your application is running on.
You will see the list of scheduled jobs for your application, along with some other details.
Read more in the official documentation
You need to run the Sidekiq process as well:
bundle exec sidekiq
It will start both the worker(s) and the scheduler.
If you wish to test it using RSpec, you can do the following:
it 'spawns scheduled workers' do
  Sidekiq::Cron::Job.load_from_hash YAML.load_file('config/sidekiq.yml')[:schedule]
  Sidekiq::Cron::Job.all.each(&:enque!)
  expect(AyedasCrawler.jobs.size).to be(1)
end
It loads the YAML configuration, enqueues all the jobs, and asserts that the job has been enqueued.
Using this method you can validate whether your schedule YAML is correct. It will NOT test the cron syntax or the scheduled intervals.
I'm also using https://github.com/philostler/rspec-sidekiq to allow sidekiq testing without jobs actually being executed.

Sidekiq worker not working from Rails engine

I have a Rails app using an engine where Sidekiq workers are defined. The worker's perform_async is invoked in a controller within the engine, and its perform does the work on the arguments passed in through that controller. The worker-specific queue is defined in the worker class too. When a request comes in to that controller, the job gets pushed from perform_async to the Redis server, onto the right queue. A bundle exec sidekiq starts up Sidekiq, but the worker's perform never gets executed. Checking the Sidekiq UI, I can see that the job is in the right queue.
This is what my worker looks like:
require 'pando'

class PandoWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :pando, :backtrace => true

  def perform(*args)
    puts "in here"
    puts args
  end
end
So in this case the Sidekiq UI shows that the args are queued in 'pando', but the Sidekiq process never processes that queue, or even the default one.
You have to tell the Sidekiq process which queue to watch:
bundle exec sidekiq -q pando
Otherwise the process only watches the 'default' queue.
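Alternatively, if you start Sidekiq with -C config/sidekiq.yml, list the queue there (a sketch):
:queues:
  - default
  - pando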

Ruby on rails, delayed jobs with rails s

I want to start workers for the job directly after a certain method. I start the application with the usual rails s and upload some stuff, so the create method is invoked. After the create method, the :perform_analysis method is delayed and some data is inserted into the delayed_jobs table. Normally I start the workers by typing script/delayed_job start on the command line, but I would like the workers to start automatically, so that I type nothing.
model:
after_create :perform_analysis

def perform_analysis
  # bla
end
handle_asynchronously :perform_analysis, :run_at => Proc.new { 5.minutes.from_now }
So, I run the application with rails s, log in to my web page, and upload some files; after 5 minutes the jobs are delayed. Then the worker should start to work.
I have found this page that does almost what I want, but somehow the workers do not start at all, so the schedule.rb is not run. Should I do something more than what that page describes?
Is there any other possibility to do it?
I recommend you take a look at Foreman (http://ddollar.github.com/foreman/) and have your Procfile declare a worker process:
web: bundle exec rails s
worker: bundle exec rake jobs:work
This way, a single foreman start command will start both the server and the worker, with the output of both presented in the same window.

Resque: worker status is not right

Resque is currently showing that I have a worker doing work on a queue. That worker was shut down by me in the middle of the queue (it was just for testing), yet it still shows as running. I've confirmed the process ID has been killed and bluepill is no longer monitoring it. I can't find any way in the UI to force-clear its working status.
What's the best way to update the count of workers that are currently up? (I have 2; the web UI reports 3.)
You may have a lingering pid file. This file is independent of the process running; in other words, when you killed the process, it didn't delete the pid file.
If you're using a typical Rails and Resque setup, Resque will store the pid in the Rails ./tmp directory.
Some Resque start scripts specify the pid file in a different location, something like this:
PIDFILE=foo/bar/resque/pid bundle exec rake resque:work
Wherever the script puts the pid file, look there, then delete it, then restart.
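For example, with a typical Rails setup (a sketch; the exact file name is an assumption, check your start script):
rm ./tmp/pids/resque.pid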
Also, on the command line, you can ask Redis for the running workers:
redis-cli keys "*worker:*"
If there are workers that you don't expect, you can delete them with:
redis-cli del <keyname>
Try restarting the application.
For future reference, also have a look at https://github.com/resque/resque/issues/299

How do I clear stuck/stale Resque workers?

As you can see from the attached image, I've got a couple of workers that seem to be stuck. Those processes shouldn't take longer than a couple of seconds.
I'm not sure why they won't clear or how to manually remove them.
I'm on Heroku using Resque with Redis-to-Go and HireFire to automatically scale workers.
None of these solutions worked for me; I would still see this in resque-web:
0 out of 10 Workers Working
Finally, this worked for me to clear all the workers:
Resque.workers.each {|w| w.unregister_worker}
In your console:
queue_name = "process_numbers"
Resque.redis.del "queue:#{queue_name}"
Otherwise you can try to fake them as being done to remove them, with:
Resque::Worker.working.each {|w| w.done_working}
EDIT
A lot of people have been upvoting this answer, so I feel it's important to point out that hagope's solution unregisters workers from a queue, whereas the code above deletes queues. If you're happy to fake them, then cool.
You probably have the resque gem installed, so you can open the console and get the current workers:
Resque.workers
It returns a list of workers:
#=> [#<Worker infusion.local:40194-0:JAVA_DYNAMIC_QUEUES,index_migrator,converter,extractor>]
Pick a worker and call prune_dead_workers on it, for example the first one:
Resque.workers.first.prune_dead_workers
Adding to the answer by hagope: I wanted to be able to unregister only workers that had been running for a certain amount of time. The code below will unregister only workers running for over 300 seconds (5 minutes).
Resque.workers.each {|w| w.unregister_worker if w.processing['run_at'] && Time.now - w.processing['run_at'].to_time > 300}
I have an ongoing collection of Resque related Rake tasks that I have also added this to: https://gist.github.com/ewherrmann/8809350
Run this command wherever you ran the command to start the server:
$ ps -e -o pid,command | grep [r]esque
You should see something like this:
92102 resque: Processing ProcessNumbers since 1253142769
Make note of the PID (process ID); in my example it is 92102.
Then you can quit the process in one of two ways:
Gracefully: kill -QUIT 92102
Forcefully: kill -TERM 92102
Let me know if you have any trouble.
I just did:
% rails c production
irb(main):001:0> Resque.workers
Got the list of workers.
irb(main):002:0> Resque.remove_worker(Resque.workers[n].id)
...where n is the zero-based index of the unwanted worker.
I had a similar problem: Redis saved the DB to disk including invalid (non-running) workers, so each time Redis/Resque was started they appeared again.
Fix this using:
Resque::Worker.working.each {|w| w.done_working}
Resque.redis.save # save the DB to disk without ANY workers
Make sure you restart Redis and your Resque workers afterwards.
Started working on https://github.com/shaiguitar/resque_stuck_queue/ recently. It's not a solution for fixing stuck workers, but it addresses the issue of Resque hanging/being stuck, so I figured it could be helpful for people on this thread. From the README:
"If resque doesn't run jobs within a certain timeframe, it will trigger a pre-defined handler of your choice. You can use this to send an email, pager duty, add more resque workers, restart resque, send you a txt...whatever suits you."
Been used in production and works pretty well for me thus far.
Here's how you can purge them from Redis by hostname. This happens to me when I decommission a server and workers do not exit gracefully.
Resque.workers.each { |w| w.unregister_worker if w.id.start_with?(hostname) }
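Usage might look like this (a sketch; Resque worker IDs have the form hostname:pid:queues, so matching on the hostname prefix catches every worker from that machine):
hostname = 'decommissioned-host-01' # hypothetical hostname of the dead machine
Resque.workers.each { |w| w.unregister_worker if w.id.start_with?(hostname) }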
I ran into this issue and started down the path of implementing a lot of the suggestions here. However, I discovered the root cause that was creating this issue was that I was using the gem redis-rb 3.3.0. Downgrading to redis-rb 3.2.2 prevented these workers from getting stuck in the first place.
I've cleared them out from redis-cli directly. Luckily redistogo.com allows access from environments outside Heroku.
Get the dead worker ID from the list. Mine was:
55ba6f3b-9287-4f81-987a-4e8ae7f51210:2
Run this command in Redis directly:
del "resque:worker:55ba6f3b-9287-4f81-987a-4e8ae7f51210:2:*"
You can monitor the Redis DB to see what it's doing behind the scenes:
redis xxx.redistogo.com> MONITOR
OK
1380274567.540613 "MONITOR"
1380274568.345198 "incrby" "resque:stat:processed" "1"
1380274568.346898 "incrby" "resque:stat:processed:c65c8e2b-555a-4a57-aaa6-477b27d6452d:2:*" "1"
1380274568.346920 "del" "resque:worker:c65c8e2b-555a-4a57-aaa6-477b27d6452d:2:*"
1380274568.348803 "smembers" "resque:queues"
The second-to-last line deletes the worker.
In Resque 2.0.0, here's one way that seems to work to remove only the apparently-dead workers:
Resque::Worker.all_workers_with_expired_heartbeats.each { |w| w.unregister_worker }
I am not an expert in what's going on here; it's possible there's a better way to do this or that this will have problems. I'm just trying to figure it out too.
This seems to remove workers that haven't sent a "heartbeat" in much longer than expected from the Resque worker list.
If the phantom worker was in a "running" state, then a new entry in the "failed" job queue will be created corresponding to the phantom job.
I had stuck/stale Resque workers here too, or should I say 'jobs', because the worker is actually still there and running fine; it's the forked process that is stuck.
I chose the brutal solution of killing any forked process that had been "Processing" for more than 5 minutes, via a bash script; the worker then just spawns the next job in the queue, and everything keeps going.
Have a look at my script here: https://gist.github.com/jobwat/5712437
If you are using a newer version of Resque, you'll need to use the following command, as the internal APIs have changed:
Resque::WorkerRegistry.working.each {|work| Resque::WorkerRegistry.remove(work.id)}
This avoids the problem as long as you have a resque version newer than 1.26.0:
resque: env QUEUE=foo TERM_CHILD=1 bundle exec rake resque:work
Keep in mind that it does not let the currently running job finish.
If you are using Docker, you can also use these commands, where <id> is the worker's container ID:
docker stop <id>
docker start <id>
