Keeping a rake job running - ruby-on-rails

I'm using delayed_job to run jobs, with new jobs being added every minute by a cronjob.
Currently I have an issue where the rake jobs:work task, currently started with 'nohup rake jobs:work &' manually, is randomly exiting.
While God seems to be a solution to some people, the extra memory overhead is rather annoying and I'd prefer a simpler solution that can be restarted by the deployment script (Capistrano).
Is there some bash/Ruby magic to make this happen, or am I destined to run a monitoring service on my server, with some horrid hacks to give the unprivileged account the site deploys as the ability to restart it?

For me the daemons gem was unreliable with delayed_job. It could have been a poorly written script (I was using the one on collectiveidea's delayed_job GitHub page) rather than the daemons gem's fault, I'm not really sure. But for whatever reason, it would restart inconsistently on deployments.
I read somewhere this was because it doesn't wait for the process to actually exit, so the pid files get overwritten or something, but I didn't really investigate. I switched to the daemons-spawn gem using these instructions and it seems to be much more reliable now.

The delayed_job docs suggest that you use a monitoring service to manage the rake worker job(s). I use runit--works well.
(You can install it in the mode where it does not replace init.)
Added:
Re: restart by Capistrano: yes, runit enables that. Just do a
sudo sv kill delayed_job
in your Capistrano recipe to kill the delayed_job worker. Runit will then restart it with your newly deployed code base.
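A rough sketch of wiring that into a Capistrano (v2-style) recipe, mirroring the restart task shown further down; the role and the delayed_job service name are assumptions and should match your runit setup:

desc "Restart the delayed_job worker via runit"
task :restart_delayed_job, roles: :app do
  # runit's supervisor brings the worker back up after the kill
  run "sudo sv kill delayed_job"
end
after 'deploy:restart', 'deploy:restart_delayed_job'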

I have implemented a small rake task that restarts the jobs task over and over again:
desc "Start a delayed_job worker in a endless loop to prevent exits."
task :jobs => :environment do
while true
begin
Delayed::Worker.new(:min_priority => ENV['MIN_PRIORITY'],
:max_priority => ENV['MAX_PRIORITY'],
:quiet => false).start
rescue Exception => e
puts "Exception occured (#{e})"
end
puts "Task jobs:work exited, clearing queue and restarting"
sleep 1
Delayed::Job.delete_all
end
end
Apparently it did not work, so I ended up with this simple solution:
for (( ;; )); do rake jobs:work --trace; done
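A slightly safer variant of the same idea (just a sketch) pins the environment and pauses between restarts, so a worker that crashes at boot doesn't spin in a tight loop:

# restart the worker forever, pausing briefly between crashes
while true; do
  RAILS_ENV=production bundle exec rake jobs:work --trace
  sleep 5
done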

Get rid of delayed_job and use either whenever or resque.

Related

How to make sure resque background jobs are always up?

I use ActiveJob with a resque back-end and use capistrano-resque to (re)start my work processes on deploy.
What I have been struggling with is making sure those processes are always up. Can such a process crash, and should I put safeguards in place to make sure that my background jobs always get picked up by a worker?
I have searched far and wide but have not found any standard solution to this.
I am using god with resque. Here's an example script for it.
Capistrano
desc "Restart resque workers"
task :restart_workers, roles: :resque do
run "sudo god restart resque-production"
end
after 'deploy:restart', 'deploy:restart_workers'
where resque-production is the w.name from the script example.
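In case the linked script is unavailable, a minimal god watch along those lines might look like the sketch below; the paths and environment values are assumptions, chosen to match the resque-production name used in the Capistrano task above:

# config/resque.god (illustrative)
God.watch do |w|
  w.name  = "resque-production"
  w.dir   = "/var/www/app/current"                          # assumed deploy path
  w.env   = { "RAILS_ENV" => "production", "QUEUE" => "*" }
  w.start = "bundle exec rake resque:work"
  w.log   = "/var/www/app/shared/log/resque.log"
  w.keepalive                                               # restart the worker if it dies
end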

Is there something like cron for a Rails application on Windows?

I'm trying to use cron in my application to send mails every week, but I don't think it works on Windows.
Does anybody know an equivalent to cron that works on Windows?
The Windows equivalent of Unix's cron is the Task Scheduler. You can configure your periodic task there.
Purely Ruby solution
If you want a purely Ruby solution, look into:
rufus-scheduler - a pure-Ruby scheduling gem that also works on Windows (see the sketch after this list).
crono - an in-Rails cron scheduler, so it should work anywhere.
Web services - there are plenty of free online services that will make a request to a given URL at specified intervals. This is basically a poor man's cronjob.
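As a minimal rufus-scheduler sketch for the weekly-mail case: WeeklyMailer and its digest method are hypothetical stand-ins for your own mailer.

# e.g. in a long-running process, such as config/initializers/scheduler.rb
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

# every Monday at 08:00 (rufus-scheduler accepts cron strings)
scheduler.cron '0 8 * * mon' do
  WeeklyMailer.digest.deliver_now   # hypothetical mailer
end

# scheduler.join   # uncomment if this runs in a standalone script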
I recommend taking a look at Resque and the extension Resque-scheduler gems. You will need to have a resque scheduler process running with bundle exec rake resque:scheduler and at least one worker process running with QUEUE=* bundle exec rake resque:work.
If you want these services to run in the background as a windows service, you can do it with srvany.exe as described in this SO question.
The above assumes you are ok with installing Redis - a key-value store that is very popular among the Rails community as it can be easily used to support other Rails components such as caching and ActionCable, and it is awesome by itself for many multi-process use cases.
Resque is a queue system on top of Redis that allows you to define jobs that can be executed asynchronously in the background. When you run QUEUE=* bundle exec rake resque:work, a worker process runs constantly and polls the queue. Once a job is enqueued, an available worker pops it from the queue and starts working on it. This architecture is quite scalable, as you can have multiple workers listening to the queues if you'd like.
To define a job, you do this:
class MyWeeklyEmailSenderJob
  def self.perform
    # Your code to send weekly emails
  end
end
While you can enqueue this job yourself from anywhere (e.g. from a controller as a response to an action), in your case you want it to be placed into the queue automatically once a week. This is what Resque-scheduler is for. It allows you to configure a file such as app/config/resque_schedule.yml in which you define which jobs should be enqueued and at what intervals. For example:
send_weekly_emails:
  cron: "0 8 * * Mon"
  class: MyWeeklyEmailSenderJob
  queue: email_sender_queue
  description: "Send weekly emails"
Remember that a scheduler process has to be running (bundle exec rake resque:scheduler) for this to work.
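For the manual-enqueue path mentioned above (e.g. from a controller action), the call is a one-liner. Note that Resque.enqueue reads the target queue from an @queue ivar on the job class, so add one if you enqueue this way; this is just a sketch on top of the class above:

class MyWeeklyEmailSenderJob
  @queue = :email_sender_queue   # only needed when using Resque.enqueue

  def self.perform
    # Your code to send weekly emails
  end
end

# anywhere in the app, e.g. in a controller action
Resque.enqueue(MyWeeklyEmailSenderJob)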
Thanks guys, actually I tried the rufus-scheduler gem and it worked for me. I guess it's the best and easiest solution.

Rufus Scheduler not running when rails server runs as daemon

I have a Rails app that is using Rufus Scheduler. When I start the Rails server with:
rails s --port=4000
Rufus scheduler runs its tasks. If I run the rails server with:
rails s --port=4000 --daemon
Rufus no longer does its tasks. I added a couple of log messages. Here is the scheduler code:
class AtTaskScheduler
  def self.start
    scheduler = Rufus::Scheduler.new
    p "Starting Attask scheduler"
    scheduler.every('5m') do
      # test sending hip chat message
      issue = Issue.new
      issue.post_to_hipchat("Starting sync with AtTask", "SYNC")
      p "Launching Sync"
      Issue.synchronize
    end
  end
end
Hipchat never gets the message from the scheduler and the log never gets the statement "Launching Sync".
Any ideas on what may be causing this?
There is documentation of this issue in the rufus-scheduler docs:
There is the handy rails server -d that starts a development Rails as a daemon. The annoying thing is that the scheduler as seen above is started in the main process that then gets forked and daemonized. The rufus-scheduler thread (and any other thread) gets lost, no scheduling happens.
I avoid running -d in development mode and bother about daemonizing only for production deployment.
These are two well crafted articles on process daemonization, please read them:
http://www.mikeperham.com/2014/09/22/dont-daemonize-your-daemons/
http://www.mikeperham.com/2014/07/07/use-runit/
If anyway, you need something like rails server -d, why not try bundle exec unicorn -D instead? In my (limited) experience, it worked out of the box (well, had to add gem 'unicorn' to Gemfile first).
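For reference, the unicorn route mentioned in the quote amounts to roughly this (a sketch; the port and config file path are illustrative):

# Gemfile
gem 'unicorn'

# then, instead of `rails s --port=4000 --daemon`:
bundle exec unicorn -D -p 4000 -c config/unicorn.rb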

What's the best way to execute daemons when the Rails server is running

I have some gems in my Rails app, such as resque and sunspot. I run the following commands manually when the machine boots:
rake sunspot:solr:start
/usr/local/bin/redis-server /usr/local/etc/redis.conf
rake resque:work QUEUE='*'
Is there a better practice for running these daemons in the background? And are there any side effects when these tasks run in the background?
My solution to that is to use a mix of god, capistrano and whenever. A specific problem I have is that I want all app processes to run as a regular user, so init.d scripts are not an option (it could be done, but it's quite a pain with user switching / environment loading).
God
The basic idea is to use god to start / restart / monitor processes. God may be difficult to get started with, but is very powerful:
running god alone will start all your processes (webserver, bg jobs, whatever)
it can detect that a process has crashed and restart it
you can group processes and batch restart them (staging, production, background, devops, etc)
Whenever
You still have to start god on server restart. A good way to do so is to use the user crontab. Most cron implementations have a special directive called #reboot, which allows you to run a specific command on server restart:
#reboot /bin/bash -l -c 'cd /home/my_app && SERVER=true god -c production/current/config/app.god'
Whenever is a gem that allows easy management of the crontab, including generating the reboot entry (as sketched below). While it's not absolutely necessary for achieving what I describe, it's really useful for its capistrano integration.
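For instance, a config/schedule.rb along those lines might look like this sketch; the command simply mirrors the crontab line above, so the paths are assumptions:

# config/schedule.rb
every :reboot do
  command "cd /home/my_app && SERVER=true god -c production/current/config/app.god"
end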
Capistrano
You not only want to start your processes on server restart, you also want to restart them on deploy. If your background job code is not up to date, problems will arise.
Capistrano allows you to handle that easily: just ask god to restart the whole group (e.g. god restart production) in a post-deploy Capistrano task, and it will be handled seamlessly.
Whenever's capistrano integration also ensures your crontab is always up to date, updating it if you changed your config/schedule.rb file.
You can use something like foreman to manage these processes. You define process types in a Procfile and can then start and manage them from there.
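A Procfile for the three commands from the question might look roughly like this; note that foreman needs foreground processes, so the sketch assumes sunspot:solr:run (the non-daemonizing variant) rather than sunspot:solr:start:

# Procfile
solr:   bundle exec rake sunspot:solr:run
redis:  redis-server /usr/local/etc/redis.conf
worker: bundle exec rake resque:work QUEUE='*'

Then foreman start runs all three, and foreman start worker runs just the resque worker.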

Rake Task Starts But Stops abruptly when executed via controller

I have a set of rake tasks that run on the production server; they are detached from the main thread and happen in the background.
Here is the code that executes them:
def vehicle
  @estate = Estate.find(@estate_id)
  @date_string = @login_month.strftime("%m%Y")
  system("rake udpms:process_only_vehicle[#{@date_string},#{@estate_id}] &")
  redirect_to :controller => "reports/error_messages", :message => "Processing will happen in the background and reports will be refreshed after two minutes", :target => "_blank"
end
When this code is executed via the URL route, it runs the rake task (I can see it if I check the active processes on the production machine), but it ends abruptly after about 10 seconds.
ps axl | grep rake
This is what it shows:
ruby /usr/local/rvm/gems/ruby-1.8.7-p352/bin/rake udpms:process_only_vehicle[082012,5]
If I execute the same rake task in the app folder in the terminal, it runs without any errors. It also runs without any issues on the dev machine (OS X). The server is Mint. The rake version is the same on both, and there is only one version of the gem.
Since it's the production server there are no logs (other than production.log, and it's no help). Any help on how to go about debugging this issue would be much appreciated.
This is probably happening because your server software reaps requests that take longer than 10 seconds to respond. Despite the fact you're kicking off a rake task, it still has to wait for that system call to execute: if it takes a while, the task will be terminated and the server worker returned to the worker pool.
In a more general sense, this is not the appropriate way to make a task happen in the background. You probably want to use Resque or Delayed Job, which enqueue tasks and run them in the background for you.
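A rough sketch of the Delayed Job route, reusing the names from the question; the ProcessOnlyVehicleJob class and the Udpms.process_only_vehicle call are hypothetical stand-ins for whatever the rake task actually does:

# app/jobs/process_only_vehicle_job.rb
class ProcessOnlyVehicleJob < Struct.new(:date_string, :estate_id)
  def perform
    # run the same code the udpms:process_only_vehicle rake task runs
    Udpms.process_only_vehicle(date_string, estate_id)   # hypothetical
  end
end

# in the controller action, instead of the system() call
def vehicle
  @estate = Estate.find(@estate_id)
  @date_string = @login_month.strftime("%m%Y")
  Delayed::Job.enqueue(ProcessOnlyVehicleJob.new(@date_string, @estate_id))
  redirect_to :controller => "reports/error_messages",
              :message => "Processing will happen in the background and reports will be refreshed after two minutes",
              :target => "_blank"
end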
