How long should "rake routes" run? - ruby-on-rails

I just started out with Rails, so excuse my fairly basic question. I am already noticing that the rake routes command takes a while to execute every time I run it. I have about 20 routes for 3 controllers and it takes about 40 seconds to execute.
Is that normal? How could I speed this up?
P.S.: I am on Windows 7 with Rails 3.1.3 (set up with Rails Installer).

The rake routes task depends on the environment task, which loads your Rails environment and requires thousands of Ruby files.
The startup time of a Rails environment and the corresponding rake routes execution time are very close (on my beefed-up Linux laptop, with a Rails application with ~50 routes):
$ time ruby -r./config/environment.rb -e ''
real 0m5.065s
user 0m4.552s
sys 0m0.456s
$ time rake routes
real 0m4.955s
user 0m4.580s
sys 0m0.344s
There is no easy way to decrease startup time, as it relies on the way your interpreter requires script files: http://rhnh.net/2011/05/28/speeding-up-rails-startup-time

I came up with a solution to rake routes taking about 8 seconds to run every time. It's a simple file-based cache that runs bundle exec rake routes and stores the output in a file under tmp. The filename is the md5 hash of config/routes.rb, so if you make a change and change it back, it will use the old cached file.
I put the following bash functions in an executable file I call fastroutes:
#!/bin/bash

if [ ! -f config/routes.rb ]; then
  echo "Not in root of rails app"
  exit 1
fi

# Cache file is keyed on the md5 of config/routes.rb (md5 -q is the OS X form)
cached_routes_filename="tmp/cached_routes_$(md5 -q config/routes.rb).txt"

function cache_routes {
  bundle exec rake routes > $cached_routes_filename
}

function clear_cache {
  for old_file in $(ls tmp/cached_routes_*.txt); do
    rm $old_file
  done
}

function show_cache {
  cat $cached_routes_filename
}

function show_current_filename {
  echo $cached_routes_filename
}

function main {
  if [ ! -f $cached_routes_filename ]; then
    cache_routes
  fi
  show_cache
}

if [[ "$1" == "-f" ]]; then
  show_current_filename
elif [[ "$1" == "-r" ]]; then
  rm $cached_routes_filename
  cache_routes
else
  main
fi
Here's a github link too.
This way, you only have to generate the routes once, and then fastroutes will use the cached values.
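Assuming the script above is saved as fastroutes in the app root and made executable, usage looks roughly like:
$ chmod +x fastroutes
$ ./fastroutes        # prints routes, generating the cache on the first run
$ ./fastroutes -r     # force the cache to regenerate
$ ./fastroutes -f     # show the current cache filename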

That seems a bit long, but do you really need to run rake routes that often? On my system, OSX Lion/Rails 3.2.0, rake routes takes ~10s.

In your Rakefile:
# Output the stored output of rake routes
task :fast_routes => 'tmp/routes_output' do |t|
  sh 'cat', t.source
end

# Update tmp/routes_output if it's older than 'config/routes.rb'
file 'tmp/routes_output' => 'config/routes.rb' do |t|
  outputf = File.open(t.name, 'w')
  begin
    $stdout = outputf
    Rake.application['routes'].invoke
  ensure
    outputf.close
    $stdout = STDOUT
  end
end
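With that in place (and assuming the tmp directory exists and the standard Rails routes task is loaded), the cached output is only regenerated when config/routes.rb is newer than tmp/routes_output:
$ rake fast_routes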

The Rails environment takes much longer to load on Windows. I recommend giving a Unix-like system such as Ubuntu a try, since Windows is the slowest environment for running and developing Ruby on Rails applications. But if you are just trying out Rails, Windows is enough :)

Related

Crontab in Amazon Elastic Beanstalk

I am setting up a crontab on AWS Elastic Beanstalk with Ruby on Rails 3, but I don't know what is wrong.
I have this code in my .ebextensions/default.config
container_commands:
  01remove_old_cron_jobs:
    command: "crontab -r || exit 0"
  02send_test_email:
    command: crontab */2 * * * * rake send_email:test
    leader_only: true
I receive this error:
Failed on instance with return code: 1 Output: Error occurred during build: Command 02send_test_email failed .
UPDATE 1
I tried the following:
crontab.txt
*/2 * * * * rake send_email:test > /dev/null 2>&1
default.config
02_crontab:
  command: "cat .ebextensions/crontab.txt | crontab"
  leader_only: true
RESULT: No errors, but it does not work.
UPDATE 2
crontab.sh
crontab -l > /tmp/cronjob
#CRONJOB RULES
echo "*/2 * * * * /usr/bin/wget http://localhost/crontabs/send_test_email > /dev/null 2>&1" >> /tmp/cronjob
#echo "*/2 * * * * rake send_email:test > /dev/null 2>&1" >> /tmp/cronjob
crontab /tmp/cronjob
rm /tmp/cronjob
echo 'Script successfully executed, crontab updated.'
default.config
02_crontab:
  command: "/bin/bash .ebextensions/crontab.sh"
  leader_only: true
RESULT: Works with the URL, but not with the rake task.
Updated for 2018
In order to get this to work on the latest version of Elastic Beanstalk, I added the following to my .ebextensions:
.ebextensions/0005_cron.config
files:
  "/etc/cron.d/mycron":
    mode: "000644"
    owner: root
    group: root
    content: |
      56 11 * * * root . /opt/elasticbeanstalk/support/envvars && cd /var/app/current && /opt/rubies/ruby-2.3.4/bin/bundle exec /opt/rubies/ruby-2.3.4/bin/rake send_email:test >> /var/app/current/log/cron.log 2>&1
commands:
  remove_old_cron:
    command: "rm -f /etc/cron.d/*.bak"
How I got there:
There are four main issues to confront when trying to cron a rake task in AWS EB:
The first hurdle is making sure all of your EB and Rails environment variables are loaded. I beat my head against the wall for a while on this one, but then I discovered this AWS forum post (login may be required). Running . /opt/elasticbeanstalk/support/envvars loads all of your environment variables.
Then we need to make sure we cd into the current app directory using cd /var/app/current.
Next we need to know where to find the bundle and rake executables. They are not installed in the normal bin directories, but are located in a directory specific to your ruby version. To find out where your executables are located, ssh into your EB server (eb ssh) and then type the following:
$ cd /var/app/current
$ which bundle
/opt/rubies/ruby-2.3.4/bin/bundle
$ which rake
/opt/rubies/ruby-2.3.4/bin/rake
You could probably guess the directory based on your ruby version, but the above commands will let you know for sure. Based on the above, you can build your rake command as:
/opt/rubies/ruby-2.3.4/bin/bundle exec /opt/rubies/ruby-2.3.4/bin/rake send_email:test
NOTE: If you update your ruby version, you will likely need to update your cron config as well. This is a little brittle. I'd recommend making a note in your README on this. Trust me, six months from now, you will forget.
The fourth thing to consider is logging. I'd recommend logging to the same location as your other rails logs. We do this by tacking on >> /var/app/current/log/cron.log 2>&1 to the end of our command string.
Putting all of this together leads to a cron command string of:
. /opt/elasticbeanstalk/support/envvars && cd /var/app/current && /opt/rubies/ruby-2.3.4/bin/bundle exec /opt/rubies/ruby-2.3.4/bin/rake send_email:test >> /var/app/current/log/cron.log 2>&1
Finally, I referenced the latest AWS documentation to build an .ebextensions config file for my cron command. The result was the .ebextensions/0005_cron.config file displayed at the top of this answer.
I was having the same issue. I figured out that the reason the rake task doesn't run correctly on EB is the RACK_ENV, RAILS_ENV and BUNDLE_WITHOUT settings.
Defaults of eb:
RACK_ENV: production
RAILS_ENV: production
BUNDLE_WITHOUT: test:development
When cron runs the rake task, it runs in development mode and gives a "gem not found" error, as gems grouped under development are not installed.
You can see this by changing your cron a bit, from:
*/2 * * * * rake send_email:test > /dev/null 2>&1
to:
*/2 * * * * cd /var/app/current/ && /usr/bin/bundle exec /usr/bin/rake send_email:test > /tmp/cron_log 2>&1
and then checking the /tmp/cron_log file
To know the location of bundle and rake, run
which bundle
which rake
I tried setting RAILS_ENV in the cron command, but that didn't work either.
One quick fix is to set
BUNDLE_WITHOUT to null
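As a sketch, using the same bare option_settings style as the config below (whether an empty value is accepted can depend on your EB platform version), that would look something like:
option_settings:
  - option_name: BUNDLE_WITHOUT
    value: ""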
EDIT:
Finally I got it to work,
.ebextensions/.config
files:
  "/tmp/cron_jobs":
    mode: "000777"
    content: |
      1 10 * * * cd /var/app/current/ && RACK_ENV=production rake some:task >> /var/app/current/log/cron_log 2>&1
    encoding: plain

container_commands:
  01_delete_cron_jobs:
    command: "crontab -r -u webapp || exit 0"
  02_add_cron_jobs:
    command: "crontab /tmp/cron_jobs -u webapp"
    leader_only: true

option_settings:
  - option_name: RAILS_ENV
    value: production
  - option_name: RACK_ENV
    value: production
Notice the '-u webapp' when removing and adding the cron; this will run the cron under the webapp user. The above will also run in production mode, and the output will be dumped into the log/cron_log file.
If the above won't work, adding 'bundle exec' before 'rake some:task' might.
I've seen these used with separate files in .ebextensions, such as:
02send_test_email:
  command: "cat .ebextensions/crontab | crontab"
  leader_only: true
I haven't gotten around to it yet, but I took note of this along the way. Let us know if this works.
This stackoverflow post has much more information
After Update 1/2:
Cron doesn't know where rake is. Your application runs from /var/app/current, and you need to be running bundle exec rake from that directory.
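For example (paths here are hedged; run which bundle and which rake on the instance to confirm them), a crontab entry that addresses both points would look something like:
*/2 * * * * cd /var/app/current && /usr/bin/bundle exec /usr/bin/rake send_email:test >> /var/app/current/log/cron.log 2>&1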
Elastic Beanstalk is horrible at surfacing errors. To get this right, SSH to the machine and experiment until you have the commands right, then put them back into your cron script. You can even try re-running some of the EB scripts found in the logs, then work that back into your .ebextensions files.

whenever gem: I set :output but the logfile doesn't show up where I'd expect it to

In my schedule.rb file I have the following lines:
set :output, '/log/cron_log.log'
every 5.minutes do
  command 'echo "hello"'
end
I ran whenever -w as suggested in this question (Rails, using whenever gem in development), and I assume the crontab is written and running. (I tried restarting the Rails server as well.)
And when I run $ crontab -l I see the following:
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /bin/bash -l -c 'echo "hello" >> /log/cron_log.log 2>&1'
But I can't find the log file. I checked in rails_project/log, ~/log and /log to no avail. On OSX btw.
How can I get the log file to be written to the rails project log folder?
Where is your log?
You're putting the output file at the highest directory level:
$ cd /log
To see if the file exists and if it has data in it:
$ ls -la cron_log.log
To create the log file if needed:
$ touch cron_log.log
To open up permissions for your own local debugging (do NOT do this in production!)
$ chmod +rw cron_log.log
Is your command running?
To run the command manually to find out if it works as you expect:
$ /bin/bash -l -c 'echo "hello" >> /log/cron_log.log 2>&1'
To improve your security and protect your path, use full paths:
wrong: command 'echo "hello"'
right: command '/bin/echo "hello"'
To find the command full path:
$ which echo
To verify the cron is running as you expect:
$ sudo grep CRON /var/log/syslog
The grep result should have lines that look something like this:
Jan 1 12:00:00 example.com CRON[123]: (root) CMD (... your command here ...)
Are you on a Mac?
If you're not seeing output in the syslog, and you're on a Mac, you may want to read about Mac OS X switching from cron to launchd.
See the cron plist (/System/Library/LaunchDaemons/com.vix.cron.plist) and use a stdout/stderr path to debug cron itself. I don't recall if launchctl unloading and launchctl loading the plist is sufficient, or whether, since it's a system daemon, you'd have to restart entirely. (See "where is the cron log file in lion".)
How to log relative to Rails?
To put the log relative to a Rails app, omit the leading slash (and typically call it cron.log)
set :output, "log/cron.log"
To put the log in a specific fully-qualified directory:
set :output, '/abc/def/ghi/log/cron.log'
The Whenever wiki has some good examples about redirecting output:
https://github.com/javan/whenever/wiki/Output-redirection-aka-logging-your-cron-jobs
Examples:
every 3.hours do
  runner "MyModel.some_process", :output => 'cron.log'
  rake "my:rake:task", :output => {:error => 'error.log', :standard => 'cron.log'}
  command "/usr/bin/cmd"
end

How to run multiple Rails unit tests at once

I often run the various test groups like:
rake test:units
rake test:functionals
I also like to run individual test files or individual tests:
ruby -Itest test/unit/file_test.rb
ruby -Itest test/unit/file_test.rb -n '/some context Im working on/'
There's also:
rake test TEST=test/unit/file_test.rb
And I've even created custom groupings in my Rakefile:
Rake::TestTask.new(:ps3) do |t|
  t.libs << 'test'
  t.verbose = true
  t.test_files = FileList["test/unit/**/ps3_*_test.rb", "test/functional/services/ps3/*_test.rb"]
end
What I haven't figured out yet is how to run multiple ad-hoc tests at the command line. In other words, how can I inject test_files into the rake task. Something like:
rake test TEST=test/unit/file_test.rb,test/functional/files_controller_test.rb
Then I could write a shell function that takes arbitrary parameters and runs the fast ruby -Itest invocation for a single test, or a rake task if there's more than one file.
bundle exec ruby -I.:test -e "ARGV.each{|f| require f}" file1 file2
or:
find test -name '*_test.rb' | xargs -t bundle exec ruby -I.:test -e "ARGV.each{|f| require f}"
I ended up hacking this into my Rakefile myself like so:
Rake::TestTask.new(:fast) do |t|
  files = if ENV['TEST_FILES']
    ENV['TEST_FILES'].split(',')
  else
    FileList["test/unit/**/*_test.rb", "test/functional/**/*_test.rb", "test/integration/**/*_test.rb"]
  end
  t.libs << 'test'
  t.verbose = true
  t.test_files = files
end
Rake::Task['test:fast'].comment = "Runs unit/functional/integration tests (or a list of files in TEST_FILES) in one block"
Then I whipped up this bash function that allows you to call rt with an arbitrary list of test files. If there's just one file it runs it as ruby directly (this saves 8 seconds for my 50k loc app), otherwise it runs the rake task.
function rt {
  if [ $# -le 1 ]; then
    ruby -Itest $1
  else
    test_files=""
    while [ "$1" != "" ]; do
      if [ "$test_files" == "" ]; then
        test_files=$1
      else
        test_files="$test_files,$1"
      fi
      shift
    done
    rake test:fast TEST_FILES=$test_files
  fi
}
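For example (the file names here are just hypothetical), a single file goes straight through ruby, while several files go through the rake task:
$ rt test/unit/user_test.rb
$ rt test/unit/user_test.rb test/functional/users_controller_test.rb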
There's a parallel_tests gem that will let you run multiple tests in parallel. Once you have it in your Gemfile, you can run it as ...
bundle exec parallel_test integration/test_*.rb
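A minimal Gemfile entry would look something like this (the group names are my assumption; adjust for your app):
group :development, :test do
  gem 'parallel_tests'
end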
For me, I set up a short helper to run only the tests I want.
Bash Script
RUBY_MULTI_TEST="/tmp/ruby_multi_test.rb"

function suitup-multi-test-prepare {
  sudo rm $RUBY_MULTI_TEST 2> /dev/null
}

function suitup-multi-test-add {
  WORK_FOLDER=`pwd`
  echo "require '$WORK_FOLDER/$1'" >> $RUBY_MULTI_TEST
}

function suitup-multi-test-status {
  cat $RUBY_MULTI_TEST 2> /dev/null
}

function suitup-multi-test-run {
  suitup-multi-test-status
  ruby -I test/ $RUBY_MULTI_TEST
}
ery#tkpad:rails_app:$ suitup-multi-test-prepare
ery#tkpad:rails_app:$ suitup-multi-test-add test/functional/day_reports_controller_test.rb
ery#tkpad:rails_app:$ suitup-multi-test-add test/functional/month_reports_controller_test.rb
ery#tkpad:rails_app:$ suitup-multi-test-run

Is there a way to use capistrano (or similar) to remotely interact with rails console

I'm loving how capistrano has simplified my deployment workflow, but oftentimes a pushed change will run into issues that I need to log into the server to troubleshoot via the console.
Is there a way to use capistrano or another remote administration tool to interact with the rails console on a server from your local terminal?
Update:
cap shell seems promising, but it hangs when you try to start the console:
cap> cd /path/to/application/current
cap> pwd
** [out :: application.com] /path/to/application/current
cap> rails c production
** [out :: application.com] Loading production environment (Rails 3.0.0)
** [out :: application.com] Switch to inspect mode.
if you know a workaround for this, that'd be great
I found a pretty nice solution based on https://github.com/codesnik/rails-recipes/blob/master/lib/rails-recipes/console.rb
desc "Remote console"
task :console, :roles => :app do
env = stage || "production"
server = find_servers(:roles => [:app]).first
run_with_tty server, %W( ./script/rails console #{env} )
end
desc "Remote dbconsole"
task :dbconsole, :roles => :app do
env = stage || "production"
server = find_servers(:roles => [:app]).first
run_with_tty server, %W( ./script/rails dbconsole #{env} )
end
def run_with_tty(server, cmd)
# looks like total pizdets
command = []
command += %W( ssh -t #{gateway} -l #{self[:gateway_user] || self[:user]} ) if self[:gateway]
command += %W( ssh -t )
command += %W( -p #{server.port}) if server.port
command += %W( -l #{user} #{server.host} )
command += %W( cd #{current_path} )
# have to escape this once if running via double ssh
command += [self[:gateway] ? '\&\&' : '&&']
command += Array(cmd)
system *command
end
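With those tasks in your Capistrano config (and assuming a single-stage setup; with multistage you would prefix the stage name), usage is roughly:
$ cap console
$ cap dbconsole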
This is how I do it without Capistrano: https://github.com/mcasimir/remoting (a deployment tool built on top of rake tasks). I've added a task to the README to open a remote console on the server:
# remote.rake
namespace :remote do
  desc "Open rails console on server"
  task :console do
    require 'remoting/task'
    remote('console', config.login, :interactive => true) do
      cd config.dest
      source '$HOME/.rvm/scripts/rvm'
      bundle :exec, "rails c production"
    end
  end
end
Then I can run
$ rake remote:console
I really like the "just use the existing tools" approach displayed in this gist. It simply uses the ssh command instead of implementing an interactive SSH shell yourself, which may break any time irb changes its default prompt, you need to switch users, or any other crazy thing happens.
Not necessarily the best option, but I hacked the following together for this problem in our project:
task :remote_cmd do
  cmd = fetch(:cmd)
  puts `#{current_path}/script/console << EOF\r\n#{cmd}\r\n EOF`
end
To use it, I just use:
cap remote_cmd -s cmd="a = 1; b = 2; puts a+b"
(note: If you use Rails 3, you will have to change script/console above to rails console, however this has not been tested since I don't use Rails 3 on our project yet)
cap -T
cap invoke # Invoke a single command on the remote ser...
cap shell # Begin an interactive Capistrano session.
cap -e invoke
------------------------------------------------------------
cap invoke
------------------------------------------------------------
Invoke a single command on the remote servers. This is useful for performing
one-off commands that may not require a full task to be written for them. Simply
specify the command to execute via the COMMAND environment variable. To execute
the command only on certain roles, specify the ROLES environment variable as a
comma-delimited list of role names. Alternatively, you can specify the HOSTS
environment variable as a comma-delimited list of hostnames to execute the task
on those hosts, explicitly. Lastly, if you want to execute the command via sudo,
specify a non-empty value for the SUDO environment variable.
Sample usage:
$ cap COMMAND=uptime HOSTS=foo.capistano.test invoke
$ cap ROLES=app,web SUDO=1 COMMAND="tail -f /var/log/messages" invoke
The article http://errtheblog.com/posts/19-streaming-capistrano has a great solution for this. I just made a minor change so that it works in a multiple-server setup.
desc "open remote console (only on the last machine from the :app roles)"
task :console, :roles => :app do
server = find_servers_for_task(current_task).last
input = ''
run "cd #{current_path} && ./script/console #{rails_env}", :hosts => server.host do |channel, stream, data|
next if data.chomp == input.chomp || data.chomp == ''
print data
channel.send_data(input = $stdin.gets) if data =~ /^(>|\?)>/
end
end
The terminal you get is not really amazing, though. If someone has an improvement that would make CTRL-D, CTRL-H or the arrow keys work, please post it.

unix at command pass variable to shell script?

I'm trying to setup a simple timer that gets started from a Rails Application. This timer should wait out its duration and then start a shell script that will start up ./script/runner and complete the initial request. I need script/runner because I need access to ActiveRecord.
Here's my test lines in Rails
output = `at #{(Time.now + 60).strftime("%H:%M")} < #{Rails.root}/lib/parking_timer.sh STRING_VARIABLE`
return render :text => output
Then my parking_timer.sh looks like this
#!/bin/sh
~/PATH_TO_APP/script/runner -e development ~/PATH_TO_APP/lib/ParkingTimer.rb $1
echo "All Done"
Finally, ParkingTimer.rb reads the passed variable with
ARGV.each do|a|
puts "Argument: #{a}"
end
The problem is that the Unix command "at" doesn't seem to like variables and only wants to deal with filenames. I get one of two errors depending on how I position the quotes.
If I put quotes around the right-hand side, like so:
... "~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE"
I get,
-bash: ~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE: No such file or directory
If I leave the quotes out, I get:
at: garbled time
This is all happening on a Mac OS 10.6 box running Rails 2.3 & Ruby 1.8.6
I've already messed around with BackgrounDrb and decided it's a total PITA. I need to be able to cancel the job at any time before it is due.
After playing around with irb a bit, here's what I found.
The backtick operator invokes the shell after ruby has done any interpretation necessary. For my test case, the strace output looked something like this:
execve("/bin/sh", ["sh", "-c", "echo at 12:57 < /etc/fstab"], [/* 67 vars */]) = 0
Since we know what it's doing, let's take a look at how your command will be executed:
/bin/sh -c "at 12:57 < RAILS_ROOT/lib/parking_timer.sh STRING_VARIABLE"
That looks very odd. Do you really want to pipe parking_timer.sh, the script, as input into the at command?
What you probably ultimately want is something like this:
/bin/sh -c "RAILS_ROOT/lib/parking_timer.sh STRING_VARIABLE | at 12:57"
Thus, the output of the parking_timer.sh command will become the input to the at command.
So, try the following:
output = `#{Rails.root}/lib/parking_timer.sh STRING_VARIABLE | at #{(Time.now + 60).strftime("%H:%M")}`
return render :text => output
You can always use strace or truss to see what's happening. For example:
strace -o strace.out -f -ff -p $IRB_PID
Then grep '^exec' strace.out* to see where the command is being executed.
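On the cancellation requirement from the question: at jobs can be listed and removed before they run with the standard tools (the job number below is just an example):
$ atq       # list pending at jobs and their job numbers
$ atrm 42   # remove job 42 before it runs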
