Rufus-scheduler only running once on production - ruby-on-rails

I'm using rufus-scheduler to run a process every day from a rails server. For testing purposes, let's say every 5 minutes. My code looks like this:
In config/initializers/task_scheduler.rb:
scheduler = Rufus::Scheduler::PlainScheduler.start_new
scheduler.every "10m", :first_in => '30s' do
  # Do stuff
end
I've also tried the cron format:
scheduler.cron '50 * * * *' do
  # stuff
end
for example, to get the process to run every hour at 50 minutes after the hour.
The infuriating part is that it works on my local machine. The process will run regularly and just work. It's only on my deployed-to-production app that the process will run once, and not repeat.
ps faux reveals that cron is running, passenger is handling the spin-up of the rails process, the site has been pinged again so it knows it should refresh, and production shows the changes in the code. The only thing that's different is that, without a warning or error, the scheduled task doesn't repeat.
Help!

You probably shouldn't run rufus-scheduler in the rails server itself, especially not with a multi-process framework like passenger. Instead, you should run it in a daemon process.
My theory on what's happening:
Passenger starts up a ruby server process and uses it to fork off other servers to handle requests. But since rufus-scheduler runs its jobs in a separate thread from the main thread, the rufus thread is only alive in the original ruby process (ruby's fork only duplicates the thread that does the forking). This might seem like a good thing because it prevents multiple schedulers from running, but... Passenger may kill ruby processes under certain conditions - and if it kills the original, the scheduler thread is gone.
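If you do want to keep rufus-scheduler, one option is a small standalone script run as its own process (supervised by god, monit, or an init script) instead of inside Passenger. Here is a minimal sketch using the rufus-scheduler 3.x API; the file path and the environment loading are assumptions, not part of the original setup:
# lib/scheduler_daemon.rb (hypothetical) -- run with: ruby lib/scheduler_daemon.rb
require File.expand_path('../../config/environment', __FILE__) # load Rails so jobs can use models
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every '10m', first_in: '30s' do
  # Do stuff
end

scheduler.join # block the main thread so the process stays alive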

Add the lines below to your Apache config (/etc/apache2/apache2.conf) and restart your Apache server:
RailsAppSpawnerIdleTime 0
PassengerMinInstances 1

Kelvin is right.
Passenger kills 'unnecessary' threads.
http://groups.google.com/group/rufus-ruby/search?group=rufus-ruby&q=passenger

Related

Rails - Old cron job keeps running, can't delete it

So I'm using Rails and I have a few Sidekiq workers, but none are enabled. I'm using the sidekiq-cron gem, which requires you to put files in app/workers/, configure a sidekiq scheduler in config/sidekiq_schedule.yml, and also add a few lines in config/initializers/sidekiq.rb. However, I've commented everything out from sidekiq_schedule.yml and also commented the following lines out from sidekiq.rb:
# Sidekiq scheduler.
# schedule_file = 'config/sidekiq_schedule.yml'
# if File.exists?(schedule_file) && Sidekiq.server?
#   Sidekiq::Cron::Job.load_from_hash! YAML.load_file(schedule_file)
# end
However, if I launch Sidekiq, every minute (which is the old schedule), I see this in the prompt:
2018-01-19T02:54:04.156Z 22197 TID-ovsidcme8 ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-8609429b89db2a91793509ea INFO: start
2018-01-19T02:54:04.164Z 22197 TID-ovsidcme8 ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper JID-8609429b89db2a91793509ea INFO: fail: 0.008 sec
and it fails because it's trying to launch a job that's not supposed to be launching.
I went to the Rails console (rails c) and tried to find the job, but nothing's in there:
irb(main):001:0> Sidekiq::Cron::Job.all
=> []
so I'm not quite sure why it's constantly trying to launch a job. If I go to the rails interface on my application, I don't see anything in the queue, nothing being processed, busy, retries, enqueued, nothing.
Any suggestions would be greatly appreciated. I've been trying to hunt this down for like the last hour and have no success. I even removed ALL of the workers from the workers directory, and yet it's still trying to launch one of them.
Because you have already loaded the jobs, I think their configuration is still in Redis. Check this assumption by opening a new terminal tab and running redis-cli:
KEYS '*cron*'
If those keys exist in Redis, clearing them will fix your issue.
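For example, the cleanup can also be done from the rails console; a sketch assuming the leftover sidekiq-cron keys match the '*cron*' pattern used above:
Sidekiq.redis do |conn|
  keys = conn.keys('*cron*') # inspect this list before deleting anything
  keys.each { |key| conn.del(key) }
end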
Since you mentioned a cron job in your title but not in the question, I'm assuming there's a cronjob running the background sidekiq task.
Try running crontab -l in the terminal to see all your cron jobs. If you see something like "* * * * *", that means there's a job running every minute.
Then use crontab -r to clear your crontab and delete all scheduled tasks.

Is there something like cron for a Rails application on Windows?

I'm trying to use cron in my application to send mails every week, but I think it doesn't work on Windows.
Does anybody know of an equivalent to cron that works on Windows?
The Windows equivalent of Unix's cron is the Task Scheduler. You can configure your periodic task there.
Purely Ruby solution
If you want a purely Ruby solution, look into:
rufus-scheduler - a cron-like scheduler gem that works on Windows (see the sketch after this list).
crono - an in-Rails cron scheduler, so it should work anywhere.
Web services - there are plenty of free online services that would make a request to a given URL in specific time periods. This is basically a poor man's cronjob.
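For the weekly-mail use case, a minimal rufus-scheduler sketch could go in an initializer. The file name and WeeklyMailer are hypothetical, used only for illustration:
# config/initializers/mail_scheduler.rb (hypothetical path)
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

# 08:00 every Monday
scheduler.cron '0 8 * * mon' do
  WeeklyMailer.digest.deliver_now # hypothetical mailer and action
end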
I recommend taking a look at Resque and the extension Resque-scheduler gems. You will need to have a resque scheduler process running with bundle exec rake resque:scheduler and at least one worker process running with QUEUE=* bundle exec rake resque:work.
If you want these services to run in the background as a windows service, you can do it with srvany.exe as described in this SO question.
The above assumes you are ok with installing Redis - a key-value store that is very popular among the Rails community as it can be easily used to support other Rails components such as caching and ActionCable, and it is awesome by itself for many multi-process use cases.
Resque is a queue system on top of Redis that allows you to define jobs that can be executed asynchronously in the background. When you run QUEUE=* bundle exec rake resque:work, a worker process runs constantly and polls the queue. Once a job is enqueued, an available worker pops it from the queue and starts working on it. This architecture is quite scalable, as you can have multiple workers listening to the queues if you'd like.
To define a job, you do this:
class MyWeeklyEmailSenderJob
  def self.perform
    # Your code to send weekly emails
  end
end
While you can enqueue this job to the queue yourself from anywhere (e.g. from a controller as a response to an action), in your case you want it to automatically be placed into the queue once a week. This is what Resque-scheduler is for. It allows you to configure a file such as app/config/resque_schedule.yml in which you can define which jobs should be enqueued in which time interval. For example:
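For reference, enqueuing manually uses Resque's standard API; this assumes the job class also defines a queue (e.g. @queue = :email_sender_queue), which the snippet above omits:
Resque.enqueue(MyWeeklyEmailSenderJob) # pushes the job onto its @queue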
send_weekly_emails:
  cron: "0 8 * * Mon"
  class: MyWeeklyEmailSenderJob
  queue: email_sender_queue
  description: "Send weekly emails"
Remember that a scheduler process has to be running (bundle exec rake resque:scheduler) for the schedule to be enqueued.
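Wiring resque-scheduler to that YAML file is typically done in a rake task, along the lines of the resque-scheduler README; the file path matches the one above, the rest is an assumption:
# lib/tasks/resque.rake (hypothetical)
require 'resque/tasks'
require 'resque/scheduler/tasks'

namespace :resque do
  task :setup_schedule => :environment do
    require 'resque-scheduler'
    # load the schedule defined in the YAML file above
    Resque.schedule = YAML.load_file('app/config/resque_schedule.yml')
  end

  task :scheduler => :setup_schedule
end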
Thanks guys. Actually I tried the rufus-scheduler gem and it worked for me. I guess it's the best and easiest solution.

Number of delayed_job workers on system reboot

I'm using the delayed_job Ruby gem just fine.
It defaults to a single worker, so I've gone ahead and done:
script/delayed_job stop
script/delayed_job -n 5 start
to ensure there are 5 workers.
However, when I reboot (or when the system decides to reboot), the Rails app boots back up with only a single delayed_job worker.
How can I change the default number of workers? It doesn't seem to be listed at https://github.com/collectiveidea/delayed_job.
It turns out that, in my scenario, the mechanism that started delayed_job on reboot was controlled by the 'whenever' gem's config/schedule.rb.
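For completeness, here is a sketch of what such a schedule.rb entry might look like, assuming a whenever version that supports every :reboot; the application path and environment are assumptions:
# config/schedule.rb (whenever)
every :reboot do
  command "cd /var/www/my_app/current && RAILS_ENV=production script/delayed_job -n 5 start"
end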
The number of workers may also change if you're using Capistrano with the delayed_job recipes. You'll have to ensure the two numbers are kept in sync.

What's the best way to execute daemons when the Rails server is running

I have some gems in my Rails app, such as resque and sunspot. I run the following commands manually when the machine boots:
rake sunspot:solr:start
/usr/local/bin/redis-server /usr/local/etc/redis.conf
rake resque:work QUEUE='*'
Is there a better practice for running these daemons in the background? And are there any side effects when these tasks run in the background?
My solution is to use a mix of god, capistrano and whenever. A specific problem I have is that I want all app processes to run as my user, so init.d scripts are not an option (it could be done, but user switching and environment loading make it quite a pain).
God
The basic idea is to use god to start / restart / monitor processes. God may be difficult to get started with, but it is very powerful:
running god alone will start all your processes (webserver, bg jobs, whatever)
it can detect a process crashed and restart it
you can group processes and batch restart them (staging, production, background, devops, etc)
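A minimal god config along these lines; the paths, names and the resque command are assumptions adapted from typical setups:
# production/current/config/app.god (hypothetical)
God.watch do |w|
  w.name  = "resque-worker"
  w.group = "production"
  w.dir   = "/home/my_app/production/current"
  w.env   = { "RAILS_ENV" => "production", "QUEUE" => "*" }
  w.start = "bundle exec rake resque:work"
  w.keepalive # restart the process if it dies
end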
Whenever
You still have to start god on server restart. A good means to do so is the user crontab. Most cron implementations have a special directive called @reboot, which lets you run a specific command on server restart:
@reboot /bin/bash -l -c 'cd /home/my_app && SERVER=true god -c production/current/config/app.god'
Whenever is a gem that makes managing the crontab easy, including generating the reboot command. While it's not absolutely necessary for achieving what I describe, it's really useful for its capistrano integration.
Capistrano
You not only want to start your processes on server restart, you also want to restart them on deploy. If your background job code is not up to date, problems will arise.
Capistrano lets you handle that easily: just ask god to restart the whole group (e.g. god restart production) in a post-deploy capistrano task, and it will be handled seamlessly.
Whenever's capistrano integration also ensures your crontab is always up to date, updating it whenever you change your config/schedule.rb file.
You can use something like foreman to manage these processes. You define the process types in a Procfile and can then start and manage them together.
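A Procfile sketch built from the commands in the question; foreman needs foreground processes, so this assumes sunspot's rake sunspot:solr:run task instead of :start:
# Procfile (hypothetical)
solr:   bundle exec rake sunspot:solr:run
redis:  redis-server /usr/local/etc/redis.conf
worker: QUEUE=* bundle exec rake resque:work
Then foreman start runs all three in the foreground, and foreman export can generate init scripts for production.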

How do I clear stuck/stale Resque workers?

As you can see from the attached image, I've got a couple of workers that seem to be stuck. Those processes shouldn't take longer than a couple of seconds.
I'm not sure why they won't clear or how to manually remove them.
I'm on Heroku using Resque with Redis-to-Go and HireFire to automatically scale workers.
None of these solutions worked for me; I would still see this in resque-web:
0 out of 10 Workers Working
Finally, this worked for me to clear all the workers:
Resque.workers.each {|w| w.unregister_worker}
In your console:
queue_name = "process_numbers"
Resque.redis.del "queue:#{queue_name}"
Otherwise you can try to fake them as being done to remove them, with:
Resque::Worker.working.each {|w| w.done_working}
EDIT
A lot of people have been upvoting this answer and I feel that it's important that people try hagope's solution which unregisters workers off a queue, whereas the above code deletes queues. If you're happy to fake them, then cool.
You probably have the resque gem installed, so you can open the console and get current workers
Resque.workers
It returns a list of workers
#=> [#<Worker infusion.local:40194-0:JAVA_DYNAMIC_QUEUES,index_migrator,converter,extractor>]
Pick a worker and call prune_dead_workers on it, for example the first one:
Resque.workers.first.prune_dead_workers
Adding to answer by hagope, I wanted to be able to only unregister workers that had been running for a certain amount of time. The code below will only unregister workers running for over 300 seconds (5 minutes).
Resque.workers.each {|w| w.unregister_worker if w.processing['run_at'] && Time.now - w.processing['run_at'].to_time > 300}
I have an ongoing collection of Resque related Rake tasks that I have also added this to: https://gist.github.com/ewherrmann/8809350
Run this command wherever you ran the command to start the server
$ ps -e -o pid,command | grep [r]esque
you should see something like this:
92102 resque: Processing ProcessNumbers since 1253142769
Make note of the PID (process id); in my example it is 92102.
Then you can quit the process in one of two ways:
Gracefully: kill -QUIT 92102
Forcefully: kill -TERM 92102
Let me know if you have any trouble.
I just did:
% rails c production
irb(main):001:0>Resque.workers
Got the list of workers.
irb(main):002:0>Resque.remove_worker(Resque.workers[n].id)
... where n is the zero based index of the unwanted worker.
I had a similar problem: Redis saved the DB to disk with invalid (non-running) workers included, so each time Redis/Resque was started they appeared.
Fix this using:
Resque::Worker.working.each {|w| w.done_working}
Resque.redis.save # Save the DB to disk without ANY workers
Make sure you restart Redis and your Resque workers.
Started working on https://github.com/shaiguitar/resque_stuck_queue/ recently. It's not a solution to how to fix stuck workers but it addresses the issue of resque hanging/being stuck, so I figured it could be helpful for people on this thread. From README:
"If resque doesn't run jobs within a certain timeframe, it will trigger a pre-defined handler of your choice. You can use this to send an email, pager duty, add more resque workers, restart resque, send you a txt...whatever suits you."
Been used in production and works pretty well for me thus far.
Here's how you can purge them from Redis by hostname. This happens to me when I decommission a server and workers do not exit gracefully.
Resque.workers.each { |w| w.unregister_worker if w.id.start_with?(hostname) }
I ran into this issue and started down the path of implementing a lot of the suggestions here. However, I discovered the root cause that was creating this issue was that I was using the gem redis-rb 3.3.0. Downgrading to redis-rb 3.2.2 prevented these workers from getting stuck in the first place.
I've cleared them out from redis-cli directly. Luckily redistogo.com allows access from environments outside heroku.
Get dead worker ID from the list. Mine was
55ba6f3b-9287-4f81-987a-4e8ae7f51210:2
Run this command in redis directly.
del "resque:worker:55ba6f3b-9287-4f81-987a-4e8ae7f51210:2:*"
You can monitor redis db to see what it's doing behind the scenes.
redis xxx.redistogo.com> MONITOR
OK
1380274567.540613 "MONITOR"
1380274568.345198 "incrby" "resque:stat:processed" "1"
1380274568.346898 "incrby" "resque:stat:processed:c65c8e2b-555a-4a57-aaa6-477b27d6452d:2:*" "1"
1380274568.346920 "del" "resque:worker:c65c8e2b-555a-4a57-aaa6-477b27d6452d:2:*"
1380274568.348803 "smembers" "resque:queues"
The second-to-last line deletes the worker.
In Resque 2.0.0, here's one way that seems to work to remove only apparently-dead workers:
Resque::Worker.all_workers_with_expired_heartbeats.each { |w| w.unregister_worker }
I am not an expert in what's going on; it's possible there's a better way to do this or that this will have problems. I'm just trying to figure this out too.
This seems to remove workers that haven't sent a "heartbeat" in much longer than expected from the resque worker list.
If the phantom worker was in a "running" state, a new entry corresponding to the phantom job will be created in the "failed" job queue.
I had stuck/stale resque workers here too, or should I say 'jobs', because the worker is actually still there and running fine, it's the forked process that is stuck.
I chose the brutal solution of killing the forked process that had been "Processing" for more than 5 minutes, via a bash script; the worker then just spawns the next job in the queue and everything keeps going.
have a look at my script here: https://gist.github.com/jobwat/5712437
If you are using newer versions of Resque, you'll need to use the following command as the internal APIs have changed...
Resque::WorkerRegistry.working.each {|work| Resque::WorkerRegistry.remove(work.id)}
This avoids the problem as long as you have a resque version newer than 1.26.0:
resque: env QUEUE=foo TERM_CHILD=1 bundle exec rake resque:work
Keep in mind that it does not let the currently running job finish.
If you use Docker, you can also use these commands, where <id> is the container id of the worker:
docker stop <id>
docker start <id>
