I have a rake task in my Rails application, and when I execute this command in my Rails app path, /home/hxh/Share/ruby/sport/:
rake get_sportdata
it works fine.
Now I want to use crontab to turn this rake task into a scheduled job, so I added this entry:
* * * * * cd /home/hxh/Share/ruby/sport && /usr/local/bin/rake get_sportdata >/dev/null 2>&1
But this doesn't work. I get this in the cron.log file:
Job `cron.daily' terminated
I want to know where the error is.
Does the "cd /home/hxh/Share/ruby/sport && /usr/local/bin/rake get_sportdata >/dev/null 2>&1" can work in your terminal?
That said, using crontab with Rails is normally not a good idea: it loads the Rails environment on every run, which is slow.
I think whenever and rufus-scheduler are both good. Using rufus-scheduler, for example, is very easy. In config/initializers/schedule_task.rb:
require 'rubygems'
require 'rufus/scheduler'

scheduler = Rufus::Scheduler.start_new(:thread_name => "Check Resources Health")

scheduler.every '1d', :first_at => Time.now do |job|
  puts "###########RM Schedule Job - Check Resources Health: #{job.job_id}##########"
  begin
    HealthChecker.perform
  rescue Exception => e
    puts e.message
    puts e.backtrace
    raise "Error in RM Scheduler - Check Resources Health " + e.message
  end
end
Then implement perform (or some other class method) in your class; here that class is HealthChecker. Very easy, and no extra effort. Hope it helps.
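A minimal sketch of such a class (the name comes from the snippet above; the body is an assumption, put your real checks there):

class HealthChecker
  def self.perform
    # replace this with your actual resource checks
    Rails.logger.info "resources look healthy"
  end
end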
So that you can test better and get a handle on whether it works, I suggest:
Write a shell script in [app root]/script which sets up the right environment variables to point to Ruby (if necessary) and has the call to rake. E.g., something like script/get-sportdata.sh, as sketched below.
Test the script as root. E.g., first do sudo -s.
Call this script from cron. E.g., * cd [...] && script/get-sportdata.sh. If necessary, test that line as root too.
That's been my recipe for success for running rake tasks from cron on Ubuntu. The cron environment is a bit different from the usual shell setup, so limiting your actual cron jobs to simple commands that run a particular script is a good way to divide the configuration into smaller parts that can be tested individually.
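For example, a sketch of such a script (the PATH value is an assumption; adjust it to wherever your Ruby and rake live):

#!/bin/bash
# script/get-sportdata.sh
export PATH="/usr/local/bin:$PATH" # make sure the right ruby and rake are found
cd /home/hxh/Share/ruby/sport || exit 1
rake get_sportdata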
Related
I have a rails app that I want to use sidekiq on.
Before I figure out how to set up sidekiq on the server etc., is it possible for me to create a rake task that will run the sidekiq workers for any pending jobs?
I know this isn't a production solution, but I just want to make sure everything else is working on the server; for the time being, running a rake task on the server is fine, as this is more for QA at this point.
Log into the server, cd to the app's dir and run 'bundle exec sidekiq' manually.
Referencing SortedEntry and ScheduledSet, I managed to come up with the following:
# ScheduledSet returns "pending" jobs
scheduled_jobs = Sidekiq::ScheduledSet.new
# puts scheduled_jobs.class
# => Sidekiq::ScheduledSet

scheduled_jobs.each do |scheduled_job|
  # puts scheduled_job.class
  # => Sidekiq::SortedEntry
  job_class = scheduled_job.klass.constantize
  job_args = scheduled_job.args

  # you can also filter out only jobs in a certain queue, i.e.:
  # if scheduled_job.queue == 'mailers' ......

  # run the job inline
  job_class.new.perform(*job_args)
end
Then just wrap this into a rake task.
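For example, a minimal sketch (the namespace and task name are hypothetical):

namespace :sidekiq do
  desc "Run pending scheduled Sidekiq jobs inline (QA only)"
  task run_scheduled: :environment do
    Sidekiq::ScheduledSet.new.each do |scheduled_job|
      scheduled_job.klass.constantize.new.perform(*scheduled_job.args)
    end
  end
end

Then run it with bundle exec rake sidekiq:run_scheduled.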
I want to write a script under my Rails app that I can configure cron to run every 24 hours.
script.rb
User.all.each do |user|
  days = user[:days]
  if days >= 1
    days = days - 1
  end
  user.update_attribute(:days, days)
end
However, whenever I run this, I get this error:
uninitialized constant User (NameError)
What's going wrong?
If you are in your Rails app's home directory, then simply:
rails runner -e production script.rb
For cron (assuming script.rb is in the app's home directory again):
Find out the full path to your bundle executable (which bundle).
In your crontab, add (changing the bundle and project paths accordingly):
0 * * * * cd /project_home && /bundle_executable exec rails runner -e production script.rb
Another approach is to require_relative 'config/environment.rb' (adjust the path to your use case) at the top of the script. There's a slight performance hit, but if you are only running this once a day, a few seconds of startup penalty won't matter.
Also, you may need to set RAILS_ENV appropriately to access the proper database tables.
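A sketch of that approach (assuming script.rb sits in the app root, so the relative path resolves):

# top of script.rb
ENV['RAILS_ENV'] ||= 'production' # pick the environment before Rails boots
require_relative 'config/environment'
# ... the User loop from the question follows here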
I always use whenever to manage my cron jobs.
Install whenever, then add the following script to your config/schedule.rb:
every :day do
  runner 'User.all.each { |user| user.decrement!(:days) if user.days > 0 }'
end
Then run whenever -w from your terminal.
We're running several cron tasks in our server, and we start them all using rails runner, like this:
rails runner 'MyTask.run'
where MyTask is a class in the project. The thing is, we use Bugsnag to handle errors in case anything fails. When we run a rake task, Bugsnag saves the errors and lists them on their website, but this does not happen when using rails runner. How can I configure Rails to send errors to Bugsnag in this case?
Rails runner is very difficult to configure or customize, because at its core it is just a script with this main body:
if code_or_file.nil?
  $stderr.puts "Run '#{$0} -h' for help."
  exit 1
elsif File.exist?(code_or_file)
  $0 = code_or_file
  Kernel.load code_or_file
else
  eval(code_or_file, binding, __FILE__, __LINE__)
end
As you can see, it just does an eval of the code you sent, so there's no wrapper, no class you can extend, and basically nothing you can configure. It is better to create a rake task that performs things the same way as runner, but this time in an environment controlled by Rake, thereby allowing you to configure everything you need:
desc 'Wraps a runner command with rake'
task :runner, [:command] => :environment do |t, args|
  eval(args[:command])
end
Then, you call it using
rake 'runner["MyTask.run"]'
This will run the task in a very similar way to rails runner, but in the context of rake (which includes the Bugsnag integration).
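If you also want the error reported explicitly before the task fails, a sketch (assuming the bugsnag gem is installed and configured in the app):

desc 'Wraps a runner command with rake and reports failures'
task :runner, [:command] => :environment do |t, args|
  begin
    eval(args[:command])
  rescue Exception => e
    Bugsnag.notify(e) # explicit report, in case automatic rake integration is off
    raise
  end
end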
I deployed (with Capistrano) a Ruby on Rails project on an AWS micro server.
I'm on Ruby 1.9.2-290 and Rails 3.2.6, and I also use Bundler.
I wrote a rake task in /opt/rails-project/lib/tasks/tasks.rake:
namespace :myclass do
  task "my-task" => :environment do
    # do the stuff, which works nicely if I enter my command line manually
  end
end
This is how I call it in my crontab:
*/3 * * * * cd /opt/rails-project/current && /opt/rails-project/shared/bundle/ruby/1.9.1/gems/rake-0.9.2.2/bin/rake myclass:my-task RAILS_ENV=production >> ~/logs-my-task.txt
The file ~/logs-my-task.txt is created and updated every 3 minutes as expected, but it only contains the release info from Capistrano and nothing from my rake task.
As I said in the comment in my rake task, if I launch this command directly on the server via SSH, the task does its job...
I've searched the web all day and night and cannot figure it out.
I tried removing the http_basic auth from Rails, but the problem remains.
Hope you have an idea,
Thanks for your help!
Try putting this part:
cd /opt/rails-project/current && /opt/rails-project/shared/bundle/ruby/1.9.1/gems/rake-0.9.2.2/bin/rake myclass:my-task RAILS_ENV=production >> ~/logs-my-task.txt
into a file, say somescript.sh, and give it execute permission:
chmod +x somescript.sh
and try to run it manually:
/path/to/somescript.sh
If it works, try to put it into crontab:
*/3 * * * * /path/to/somescript.sh
It often helps to put complex commands inside a script and run that script from crontab.
Next, ensure that your PATH environment variable is the same for your shell and for cron. You can set it inside the crontab or inside your script, as sketched below.
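For example (the PATH value is an assumption; copy the output of echo $PATH from your shell):

# at the top of the crontab
PATH=/usr/local/bin:/usr/bin:/bin
*/3 * * * * /path/to/somescript.sh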
After I used a shell script as recommended by denis.peplin and launched it manually, I got the problem described here: Ruby on Rails and Rake problems: uninitialized constant Rake::DSL.
I included the following line in my Rakefile and left my crontab as it was before:
require 'rake/dsl_definition'
We have to use delayed_job (or some other background-job processor) to run jobs in the background, but we're not allowed to change the boot scripts/boot-levels on the server. This means that the daemon is not guaranteed to remain available if the provider restarts the server (since the daemon would have been started by a capistrano recipe that is only run once per deployment).
Currently, the best way I can think of to ensure the delayed_job daemon is always running, is to add an initializer to our Rails application that checks if the daemon is running. If it's not running, then the initializer starts the daemon, otherwise, it just leaves it be.
The question, therefore, is: how do we detect from inside a script that the delayed_job daemon is running? (We should be able to start a daemon fairly easily, but I don't know how to detect whether one is already active.)
Anyone have any ideas?
Regards,
Bernie
Based on the answer below, this is what I came up with. Just put it in config/initializers and you're all set:
# config/initializers/delayed_job.rb
DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

def start_delayed_job
  Thread.new do
    `ruby script/delayed_job start`
  end
end

def process_is_dead?
  begin
    pid = File.read(DELAYED_JOB_PID_PATH).strip
    Process.kill(0, pid.to_i)
    false
  rescue
    true
  end
end

if !File.exist?(DELAYED_JOB_PID_PATH) || process_is_dead?
  start_delayed_job
end
Some more cleanup ideas: the begin is not needed. You should rescue only "no such process", in order not to fire new processes when something else goes wrong. Rescue "no such file or directory" as well, to simplify the condition.
DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

def start_delayed_job
  Thread.new do
    `ruby script/delayed_job start`
  end
end

def daemon_is_running?
  pid = File.read(DELAYED_JOB_PID_PATH).strip
  Process.kill(0, pid.to_i)
  true
rescue Errno::ENOENT, Errno::ESRCH # file or process not found
  false
end

start_delayed_job unless daemon_is_running?
Keep in mind that this code won't work if you start more than one worker. Also check out the "-m" argument of script/delayed_job, which spawns a monitor process along with the daemon(s).
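For example, a sketch of starting the daemon with a monitor (same script/delayed_job as above):

ruby script/delayed_job -m start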
Check for the existence of the daemon's PID file (File.exist? ...). If it's there, assume it's running; otherwise start it up.
Thank you for the solution provided in the question (and the answer that inspired it :-) ), it works for me, even with multiple workers (Rails 3.2.9, Ruby 1.9.3p327).
It worries me that I might forget to restart delayed_job after making changes to lib, for example, and end up debugging for hours before realizing that.
I added the following to my script/rails file so that the code provided in the question executes every time we start Rails, but not every time a worker starts:
puts "cleaning up delayed job pid..."
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid', __FILE__)
begin
File.delete(dj_pid_path)
rescue Errno::ENOENT # file does not exist
end
puts "delayed_job ready."
A little drawback I'm facing with this, though, is that it also gets called by rails generate, for example. I did not spend much time looking for a solution to that, but suggestions are welcome :-)
Note that if you're using unicorn, you might want to add the same code to config/unicorn.rb before the before_fork call.
-- EDITED:
After playing around a little more with the solutions above, I ended up doing the following:
I created a file script/start_delayed_job.rb with the content:
puts "cleaning up delayed job pid..."
dj_pid_path = File.expand_path('../../tmp/pids/delayed_job.pid', __FILE__)
def kill_delayed(path)
begin
pid = File.read(path).strip
Process.kill(0, pid.to_i)
false
rescue
true
end
end
kill_delayed(dj_pid_path)
begin
File.delete(dj_pid_path)
rescue Errno::ENOENT # file does not exist
end
# spawn delayed
env = ARGV[1]
puts "spawing delayed job in the same env: #{env}"
# edited, next line has been replaced with the following on in order to ensure delayed job is running in the same environment as the one that spawned it
#Process.spawn("ruby script/delayed_job start")
system({ "RAILS_ENV" => env}, "ruby script/delayed_job start")
puts "delayed_job ready."
Now I can require this file anywhere I want, including script/rails and config/unicorn.rb, by doing:
# at the top of script/rails
START_DELAYED_PATH = File.expand_path('../start_delayed_job', __FILE__)
require "#{START_DELAYED_PATH}"

# in config/unicorn.rb, before the before_fork block (note the different expand_path)
START_DELAYED_PATH = File.expand_path('../../script/start_delayed_job', __FILE__)
require "#{START_DELAYED_PATH}"
not great, but works
disclaimer: I say "not great" because this causes a periodic restart, which many will find undesirable. And simply trying to start can cause problems, because the implementation of DJ can lock up the queue if duplicate instances are created.
You could schedule cron tasks that run periodically to start the job(s) in question. Since DJ treats start commands as no-ops when the job is already running, it just works. This approach also takes care of the case where DJ dies for some reason other than a host restart.
# crontab example
0 * * * * /bin/bash -l -c 'cd /var/your-app/releases/20151207224034 && RAILS_ENV=production bundle exec script/delayed_job --queue=default -i=1 restart'
If you are using a gem like whenever, this is pretty straightforward.
every 1.hour do
  script "delayed_job --queue=default -i=1 restart"
  script "delayed_job --queue=lowpri -i=2 restart"
end