How can I schedule a 'weekly' job on Heroku? - ruby-on-rails

I have a Rails app deployed on Heroku with the Heroku scheduler add-on successfully working for daily jobs.
Now I want a weekly job, but the scheduler add-on does not allow me a weekly option.
Any suggestions on how I could accomplish this?
I've tried using rufus-scheduler in the past, but it caused me some problems, so that's not an option. See here for details.
I'm thinking of something along the lines of checking for the current day within a daily job. Has anyone tried this approach, and do you have feedback or know of issues with it?
Other ideas are much appreciated.

One approach would be the one from your 2nd bullet point:
Activate the Heroku Scheduler add-on, and add a scheduler.rake task in lib/tasks:
task :your_weekly_task => :environment do
  if Time.now.friday?  # previous answer: Date.today.wday == 5
    # do something
  end
end
You even have the luxury of defining the day you want your task to run ;o)
(5 is for Friday)
EDIT: by the way, the Cron add-on is deprecated and Heroku recommends switching to their Scheduler add-on. This doesn't change the approach for a weekly job, though.
EDIT2: adjustments to my answer based on feedback from sunkencity.

An alternate option using only shell code: set up the Heroku Scheduler to run hourly and compare against the output of the date command:
# With the Scheduler set to run hourly at *:30, this is equivalent to the
# cron entry: 30 8 1 * * (08:30 on the 1st of each month; use date +%u instead of %d for a day-of-week check)
if [ "$(date +%H)" = 08 ] && [ "$(date +%d)" = 01 ]; then YOUR_COMMAND ; fi
I've used this code to mimic cron in my local time zone:
nz_hour="$(TZ=NZ date +%H)" ; nz_day="$(TZ=NZ date +%d)" ; if [ "$nz_hour" = 08 ] && [ "$nz_day" = 01 ]; then YOUR_COMMAND ; fi
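If you would rather keep the guard in Ruby than in shell, here is a minimal sketch of the same idea as a rake task invoked hourly by the Scheduler; the task name, time zone and target hour/day are placeholder assumptions, not something taken from the answers above.
task :weekly_report => :environment do
  # in_time_zone and monday? come from ActiveSupport, which the :environment task loads
  now = Time.now.in_time_zone("Wellington")  # example zone; pick your own
  # Roughly cron "0 8 * * 1" when the Scheduler invokes this task hourly
  if now.hour == 8 && now.monday?
    # do the actual weekly work here
  end
end
Point the Scheduler's hourly job at rake weekly_report and the guard takes care of the rest.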

It's not ideal, but I've taken to adding a RUN_IF environment variable to rake tasks run through the Heroku Scheduler, which lets me set up weekly and monthly schedules for jobs.
# lib/tasks/scheduler.rake
def run?
  eval ENV.fetch('RUN_IF', 'true')
end

def scheduled
  if run?
    yield
  else
    puts "RUN_IF #{ENV['RUN_IF'].inspect} eval'd to false: aborting job."
  end
end

# lib/tasks/job.rake
task :job do
  scheduled do
    # ...
  end
end
If a rake task is run without a RUN_IF variable, it will always run. Otherwise, the job is aborted when the value of RUN_IF evals to a falsey value.
$ rake job # => runs always
$ rake job RUN_IF='Date.today.monday?' # => only runs on Mondays
$ rake job RUN_IF='Date.today.day == 1' # => only runs on the 1st of the month
$ rake job RUN_IF='false' # => never runs (not practical, just demonstration)
Similar to other ideas above, but I prefer moving the scheduling details out of the application code.

As discussed over here, and using the logic from Rob above, here are the bash one-liners for a given day of the week, once a month, and on a specific date.
Run a task every Monday:
if [ "$(date +%u)" = 1 ]; then MY_COMMAND; fi
Run a task on the 1st day of every month:
if [ "$(date +%d)" = 01 ]; then MY_COMMAND; fi
You could also run a job every year on December 24th:
if [ "$(date +%m)" = 12 ] && [ "$(date +%d)" = 24 ]; then MY_COMMAND; fi

If you don't want to or cannot do this with Rake, another option is to do it with Ruby from bash:
#!/bin/bash
cmd="echo your scheduled job here"
# Only run on even days
ruby -e 'if Time.now.utc.day.odd?; puts "Not today!"; exit 1; end' && $cmd "$@"
This works even for non-Ruby projects (as in my case: Clojure).

Related

How to kill sidekiq job in Rails 4 with ActiveJob

Let's say we have a test job:
class TestJob < ActiveJob::Base
  queue_as :default

  def perform(video)
    video.process
  end
end
After that we start Sidekiq with bundle exec sidekiq, and then, in a new terminal window, run the following in a Rails console:
Video.first.pending!; TestJob.perform_later(Video.first)
The job is running, ffmpeg is working in the background, everything is fine. But, following the official Sidekiq wiki docs, I try:
require 'sidekiq/api'
Sidekiq::Queue.new
=> #<Sidekiq::Queue:0x00000006406978 @name="default", @rname="queue:default">
Sidekiq::Queue.new.each {|job| puts job}
=> nil
Sidekiq::Queue.new.size
=> 0
ss = Sidekiq::ScheduledSet.new
=> #<Sidekiq::ScheduledSet:0x000000064a4330 @_size=0, @name="schedule">
ss.size
=> 0
Why are there no jobs? The job runs successfully (I can also see it in the first window where Sidekiq was started), but I can't see it or delete it from the Rails console.
I am using Ubuntu 14, if that helps.
With best regards, Ruslan.
UPD:
It looks like
ps = Sidekiq::ProcessSet.new; ps.each(&:quiet!)
works, but it doesn't stop the ffmpeg process that this part of the job's code spawns:
cmd = "ffmpeg -i #{input_file.shellescape} #{options} -threads 0 -y #{self.path + outfile}"
pid = spawn(cmd, :out => output_file, :err => output_file)
Process.wait(pid)
How do I stop it?
If a job is processing, it's not enqueued anymore.
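To actually see in-flight work from the console, here is a small sketch using Sidekiq::Workers (the exact keys inside the work hash vary between Sidekiq versions, so treat the payload handling as an assumption to check against your version):
require 'sidekiq/api'
Sidekiq::Workers.new.each do |process_id, thread_id, work|
  # work is a Hash describing the in-progress job: its queue, run_at and payload
  puts "#{process_id} #{thread_id}: #{work['queue']} #{work['payload']}"
end
As far as I know, quiet! only tells a Sidekiq process to stop picking up new jobs; it does not terminate jobs that are already running, so to stop the ffmpeg child the job itself would have to keep track of the spawned pid and kill it.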
I use the Sidekiq web UI and separate queues for my jobs. If I realize that a particular job is failing, I can clear out the queue of subsequent jobs of the same class.
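For example, clearing a queue (or just the enqueued jobs of one class) from the console can look roughly like this; the queue name below is made up for illustration:
require 'sidekiq/api'
queue = Sidekiq::Queue.new('video_processing')  # hypothetical queue name
queue.each { |job| job.delete if job.klass == 'TestJob' }  # drop enqueued jobs of one class
queue.clear  # or wipe the whole queue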

Sidekiq jobs not persisting data on a rails app

I have the following setup: an Ubuntu server with nginx/Passenger, a Postgres database, Redis and Sidekiq.
I have set up an Upstart job to keep an eye on the Sidekiq process.
description "Sidekiq Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping workers or runlevel [06])
# change to match your deployment user
setuid deploy
setgid deploy
respawn
respawn limit 3 30
# TERM and USR1 are sent by sidekiqctl when stopping sidekiq. Without declaring these as normal exit codes, it just respawns.
normal exit 0 TERM USR1
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv
exec /bin/bash <<EOT
# use syslog for logging
exec &> /dev/kmsg
# pull in rvm for the deploy user
export HOME=/home/deploy
export RAILS_ENV=production
source /home/deploy/.rvm/scripts/rvm
cd /home/deploy/myapp/current
bundle exec sidekiq -c 25 -L log/sidekiq.log -P tmp/pids/sidekiq.pid -q default -q payment -e production
EOT
end script
This actually works: I can start, stop and restart the process, and I can see it in the Sidekiq web monitor. When I run the workers, they get processed. Here are both workers:
class EventFinishedWorker
  include Sidekiq::Worker

  def perform(event_id)
    event = Event.find(event_id)
    score_a = event.team_A_score
    score_b = event.team_B_score

    event.bets.each do |bet|
      if (score_a == bet.team_a_score) && (score_b == bet.team_b_score)
        bet.guessed = true
        bet.save
      end
    end
  end
end
and
class PayPlayersBetsWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :payment

  def perform(event_id)
    event = Event.find(event_id)
    bets = event.bets.where(guessed: true)
    total_winning_bets = bets.count

    unless total_winning_bets == 0
      pot = event.pot
      amount_to_pay = pot / total_winning_bets

      unless bets.empty?
        bets.each do |bet|
          account = bet.user.account
          user = User.find_by_id(bet.user.id)

          bet.payed = true
          bet.save

          account.wincoins += amount_to_pay
          account.accum_wincoins.increment(amount_to_pay)
          account.save

          user.set_score(user.current_score)
        end
      end
    end
  end
end
2014-05-27T18:48:34Z 5985 TID-7agy8 EventFinishedWorker JID-33ef5d7e7de51189d698c1e7 INFO: start
2014-05-27T18:48:34Z 5985 TID-5s4rg PayPlayersBetsWorker JID-e8473aa1bc59f0b958d23509 INFO: start
2014-05-27T18:48:34Z 5985 TID-5s4rg PayPlayersBetsWorker JID-e8473aa1bc59f0b958d23509 INFO: done: 0.07 sec
2014-05-27T18:48:35Z 5985 TID-7agy8 EventFinishedWorker JID-33ef5d7e7de51189d698c1e7 INFO: done: 0.112 sec
I get no errors running both worker processes, but...
Checking my data, EventFinishedWorker does its job, no problems whatsoever. The second job, PayPlayersBetsWorker, doesn't work. But when I go into the console and do something like
worker = PayPlayersBetsWorker.new
worker.perform(1)
it runs the job flawlessly. I have also noticed that if I run Sidekiq from the console directly, not using Upstart, it works too:
bundle exec sidekiq -P tmp/pids/sidekiq.pid -q default -q payment -e production
This works. The problem seems to be running Sidekiq with Upstart. Help would be appreciated :D
It looks like you have a race condition between the two jobs: EventFinishedWorker isn't finished before PayPlayersBetsWorker runs its query, so PayPlayersBetsWorker doesn't find anything and exits immediately.
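One way to avoid that race, as a sketch rather than part of the original answer, is to chain the jobs: enqueue the payout worker at the end of EventFinishedWorker instead of enqueueing both at the same time.
class EventFinishedWorker
  include Sidekiq::Worker

  def perform(event_id)
    event = Event.find(event_id)

    event.bets.each do |bet|
      if bet.team_a_score == event.team_A_score && bet.team_b_score == event.team_B_score
        bet.guessed = true
        bet.save
      end
    end

    # Only enqueue the payout once the guessed flags have been saved.
    PayPlayersBetsWorker.perform_async(event_id)
  end
end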

How long should "rake routes" run?

I just started out with Rails, so excuse my fairly basic question. I am already noticing that the rake routes command takes a while to execute every time I run it. I have about 20 routes across 3 controllers and it takes about 40 seconds to execute.
Is that normal? How could I speed this up?
P.S.: I am on Windows 7 with Rails 3.1.3 (set up with Rails Installer).
The rake routes task depends on the environment task, which loads your Rails environment and requires thousands of Ruby files.
The startup time of the Rails environment and the corresponding rake routes execution time are very close (measured on my on-steroids Linux laptop, with a Rails application that has ~50 routes):
$ time ruby -r./config/environment.rb -e ''
real 0m5.065s
user 0m4.552s
sys 0m0.456s
$ time rake routes
real 0m4.955s
user 0m4.580s
sys 0m0.344s
There is no easy way to decrease the startup time, as it depends on the way your interpreter requires script files: http://rhnh.net/2011/05/28/speeding-up-rails-startup-time
I came up with a solution for rake routes taking about 8 seconds to run every time. It's a simple file-based cache that runs bundle exec rake routes and stores the output in a file under tmp/. The filename is the MD5 hash of config/routes.rb, so if you make a change and then change it back, it will reuse the old cached file.
I put the following bash functions in an executable file I call fastroutes:
#!/bin/bash
if [ ! -f config/routes.rb ]; then
  echo "Not in root of rails app"
  exit 1
fi

cached_routes_filename="tmp/cached_routes_$(md5 -q config/routes.rb).txt"

function cache_routes {
  bundle exec rake routes > $cached_routes_filename
}

function clear_cache {
  for old_file in $(ls tmp/cached_routes_*.txt); do
    rm $old_file
  done
}

function show_cache {
  cat $cached_routes_filename
}

function show_current_filename {
  echo $cached_routes_filename
}

function main {
  if [ ! -f $cached_routes_filename ]; then
    cache_routes
  fi
  show_cache
}

if [[ "$1" == "-f" ]]
then
  show_current_filename
elif [[ "$1" == "-r" ]]
then
  rm $cached_routes_filename
  cache_routes
else
  main
fi
Here's a github link too.
This way, you only have to generate the routes once, and then fastroutes will use the cached values.
That seems a bit long, but do you really need to run rake routes that often? On my system, OSX Lion/Rails 3.2.0, rake routes takes ~10s.
In your Rakefile:
#Ouptut stored output of rake routes
task :fast_routes => 'tmp/routes_output' do |t|
sh 'cat', t.source
end
#Update tmp/routes_output if its older than 'config/routes.rb'
file 'tmp/routes_output' => 'config/routes.rb' do |t|
outputf = File.open(t.name, 'w')
begin
$stdout = outputf
Rake.application['routes'].invoke
ensure
outputf.close
$stdout = STDOUT
end
end
The Rails environment takes much longer to load on Windows. I recommend you give a Unix-like system such as Ubuntu a try, as Windows is the worst environment for running and developing Ruby on Rails applications. But if you are just trying out Rails, Windows is enough :)

Is there a way to use capistrano (or similar) to remotely interact with rails console

I'm loving how Capistrano has simplified my deployment workflow, but oftentimes a pushed change will run into issues that I need to log into the server to troubleshoot via the console.
Is there a way to use Capistrano or another remote administration tool to interact with the Rails console on a server from your local terminal?
Update:
cap shell seems promising, but it hangs when you try to start the console:
cap> cd /path/to/application/current
cap> pwd
** [out :: application.com] /path/to/application/current
cap> rails c production
** [out :: application.com] Loading production environment (Rails 3.0.0)
** [out :: application.com] Switch to inspect mode.
If you know a workaround for this, that'd be great.
I found a pretty nice solution based on https://github.com/codesnik/rails-recipes/blob/master/lib/rails-recipes/console.rb
desc "Remote console"
task :console, :roles => :app do
env = stage || "production"
server = find_servers(:roles => [:app]).first
run_with_tty server, %W( ./script/rails console #{env} )
end
desc "Remote dbconsole"
task :dbconsole, :roles => :app do
env = stage || "production"
server = find_servers(:roles => [:app]).first
run_with_tty server, %W( ./script/rails dbconsole #{env} )
end
def run_with_tty(server, cmd)
# looks like total pizdets
command = []
command += %W( ssh -t #{gateway} -l #{self[:gateway_user] || self[:user]} ) if self[:gateway]
command += %W( ssh -t )
command += %W( -p #{server.port}) if server.port
command += %W( -l #{user} #{server.host} )
command += %W( cd #{current_path} )
# have to escape this once if running via double ssh
command += [self[:gateway] ? '\&\&' : '&&']
command += Array(cmd)
system *command
end
This is how I do it without Capistrano: https://github.com/mcasimir/remoting (a deployment tool built on top of rake tasks). I've added a task to the README that opens a remote console on the server:
# remote.rake
namespace :remote do
  desc "Open rails console on server"
  task :console do
    require 'remoting/task'

    remote('console', config.login, :interactive => true) do
      cd config.dest
      source '$HOME/.rvm/scripts/rvm'
      bundle :exec, "rails c production"
    end
  end
end
Then I can run
$ rake remote:console
I really like the "just use the existing tools" approach shown in this gist. It simply shells out to the ssh command instead of implementing an interactive SSH shell yourself, which may break any time irb changes its default prompt, you need to switch users, or any other crazy thing happens.
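In Capistrano 2 terms, that approach can be as small as replacing the current process with ssh -t; a rough sketch (the task name and console command are assumptions, not taken from the gist):
desc "Open a Rails console on the first app server"
task :console, :roles => :app do
  server = find_servers(:roles => [:app]).first
  # exec hands the terminal over to ssh, so the irb prompt, arrow keys and Ctrl-D
  # behave exactly as they would in a normal SSH session.
  exec "ssh -t #{user}@#{server.host} 'cd #{current_path} && bundle exec rails console production'"
end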
Not necessarily the best option, but I hacked the following together for this problem in our project:
task :remote_cmd do
  cmd = fetch(:cmd)
  puts `#{current_path}/script/console << EOF\r\n#{cmd}\r\n EOF`
end
To use it, I just use:
cap remote_cmd -s cmd="a = 1; b = 2; puts a+b"
(note: if you use Rails 3, you will have to change script/console above to rails console; however, this has not been tested since I don't use Rails 3 on our project yet)
cap -T
cap invoke # Invoke a single command on the remote ser...
cap shell # Begin an interactive Capistrano session.
cap -e invoke
------------------------------------------------------------
cap invoke
------------------------------------------------------------
Invoke a single command on the remote servers. This is useful for performing
one-off commands that may not require a full task to be written for them. Simply
specify the command to execute via the COMMAND environment variable. To execute
the command only on certain roles, specify the ROLES environment variable as a
comma-delimited list of role names. Alternatively, you can specify the HOSTS
environment variable as a comma-delimited list of hostnames to execute the task
on those hosts, explicitly. Lastly, if you want to execute the command via sudo,
specify a non-empty value for the SUDO environment variable.
Sample usage:
$ cap COMMAND=uptime HOSTS=foo.capistano.test invoke
$ cap ROLES=app,web SUDO=1 COMMAND="tail -f /var/log/messages" invoke
The article http://errtheblog.com/posts/19-streaming-capistrano has a great solution for this. I just made a minor change so that it works in a multi-server setup.
desc "open remote console (only on the last machine from the :app roles)"
task :console, :roles => :app do
server = find_servers_for_task(current_task).last
input = ''
run "cd #{current_path} && ./script/console #{rails_env}", :hosts => server.host do |channel, stream, data|
next if data.chomp == input.chomp || data.chomp == ''
print data
channel.send_data(input = $stdin.gets) if data =~ /^(>|\?)>/
end
end
The terminal you get is not really great, though. If someone has an improvement that would make CTRL-D, CTRL-H or the arrow keys work, please post it.

unix at command pass variable to shell script?

I'm trying to set up a simple timer that gets started from a Rails application. This timer should wait out its duration and then start a shell script that will start up ./script/runner and complete the initial request. I need script/runner because I need access to ActiveRecord.
Here are my test lines in Rails:
output = `at #{(Time.now + 60).strftime("%H:%M")} < #{Rails.root}/lib/parking_timer.sh STRING_VARIABLE`
return render :text => output
Then my parking_timer.sh looks like this
#!/bin/sh
~/PATH_TO_APP/script/runner -e development ~/PATH_TO_APP/lib/ParkingTimer.rb $1
echo "All Done"
Finally, ParkingTimer.rb reads the passed variable with
ARGV.each do |a|
  puts "Argument: #{a}"
end
The problem is that the Unix at command doesn't seem to like variables and only wants to deal with filenames. I get one of two errors, depending on how I position the quotes.
If I put quotes around the right-hand side, like so:
... "~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE"
I get,
-bash: ~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE: No such file or directory
If I leave the quotes out, I get,
at: garbled time
This is all happening on a Mac OS X 10.6 box running Rails 2.3 and Ruby 1.8.6.
I've already messed around with BackgrounDrb and decided it's a total PITA. I need to be able to cancel the job at any time before it is due.
After playing around with irb a bit, here's what I found.
The backtick operator invokes the shell after Ruby has done any necessary interpretation. For my test case, the strace output looked something like this:
execve("/bin/sh", ["sh", "-c", "echo at 12:57 < /etc/fstab"], [/* 67 vars */]) = 0
Since we know what it's doing, let's take a look at how your command will be executed:
/bin/sh -c "at 12:57 < RAILS_ROOT/lib/parking_timer.sh STRING_VARIABLE"
That looks very odd. Do you really want to pipe parking_timer.sh, the script, as input into the at command?
What you probably ultimately want is something like this:
/bin/sh -c "RAILS_ROOT/lib/parking_timer.sh STRING_VARIABLE | at 12:57"
Thus, the output of the parking_timer.sh command will become the input to the at command.
So, try the following:
output = `#{Rails.root}/lib/parking_timer.sh STRING_VARIABLE | at #{(Time.now + 60).strftime("%H:%M")}`
return render :text => output
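Depending on what you want at to execute, another variant (an assumption on my part, not from the answer above) is to echo the command line itself into at, since at reads the commands to run from its stdin:
run_at  = (Time.now + 60).strftime("%H:%M")
command = "#{Rails.root}/lib/parking_timer.sh STRING_VARIABLE"
output  = `echo "#{command}" | at #{run_at}`
return render :text => output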
You can always use strace or truss to see what's happening. For example:
strace -o strace.out -f -ff -p $IRB_PID
Then grep '^exec' strace.out* to see where the command is being executed.
