How to update code at runtime in rails? - ruby-on-rails

I have a simple scenario (Rails 4 using Passenger):
1) One development machine.
2) Multiple customers of the system being developed on machine 1. The system runs at each customer's facilities in a virtual machine that is identical to the development machine.
In this system, we are trying to build a feature that shows (only to the administrator) a page with an "update code" button; clicking it makes the system:
Connect to git server.
Run git pull.
touch tmp/restart.txt.
We set up all certificates so no password is asked for, configured Passenger/Apache to run as the same user that owns the Rails app and, in the console, it works using this code:
....
item = "git pull"
result = %x[ #{item} ]
....
But when I run this inside my app, it doesn't do anything and produces no output at all.
One strange clue: when I swap in a command that doesn't have to access the git server (for instance, git status), it works flawlessly (remember that the code works in the console on the same virtual machine).
If anyone could help...

I don't want to assume too much, but it sounds like you need to implement a Continuous Integration (CI) strategy. I assume that the Admin is going to push this "button" when they are informed that there is new code, correct?
Have you guys attempted to use something like Capistrano to push updates to the customer system?
EDIT:
I suggest using IO.popen:
# Get new code
IO.popen "cd #{Rails.root} && git pull" do |io|
  io.each { |line| Rails.logger.info line }
end

# Bundle if necessary
IO.popen "cd #{Rails.root} && bundle" do |io|
  io.each { |line| Rails.logger.info line }
end

# Migrate if necessary
IO.popen "cd #{Rails.root} && rake db:migrate" do |io|
  io.each { |line| Rails.logger.info line }
end

# Restart Passenger
IO.popen "cd #{Rails.root} && touch tmp/restart.txt" do |io|
  io.each { |line| Rails.logger.info line }
end
You might also want to shove this in a shell script and call that.
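If it still fails silently, note that %x[] and IO.popen only capture stdout; git pull writes its errors (for example, a failed SSH handshake under the Passenger user) to stderr. A minimal diagnostic sketch using Ruby's standard Open3 library, assuming you just want the reason logged:
require 'open3'

# Run git pull and capture stdout, stderr, and the exit status separately,
# so a silent failure (e.g. SSH auth under the Passenger user) becomes visible.
out, err, status = Open3.capture3("git pull", chdir: Rails.root.to_s)
Rails.logger.info "git pull exited #{status.exitstatus}"
Rails.logger.info "stdout: #{out}"
Rails.logger.error "stderr: #{err}" unless err.empty?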

Related

crontab didn't work in Rails rake task

I have a rake task in my Rails application, and when I run the command in my rails app path /home/hxh/Share/ruby/sport/:
rake get_sportdata
This will work fine.
Now I want to use crontab to run this rake task on a schedule, so I added this entry:
* * * * * cd /home/hxh/Share/ruby/sport && /usr/local/bin/rake get_sportdata >/dev/null 2>&1
But this doesn't work. I get this in the cron.log file:
Job `cron.daily' terminated
I want to know where the error is.
Does the "cd /home/hxh/Share/ruby/sport && /usr/local/bin/rake get_sportdata >/dev/null 2>&1" can work in your terminal?
But using crontab with Rails is normally not a good idea: it loads the Rails environment on every run, which slows things down.
I think whenever and rufus-scheduler are both good. For example, rufus-scheduler is very easy to use. In config/initializers/schedule_task.rb:
require 'rubygems'
require 'rufus/scheduler'

scheduler = Rufus::Scheduler.start_new(:thread_name => "Check Resources Health")
scheduler.every '1d', :first_at => Time.now do |job|
  puts "###########RM Schedule Job - Check Resources Health: #{job.job_id}##########"
  begin
    HealthChecker.perform
  rescue Exception => e
    puts e.message
    puts e.backtrace
    raise "Error in RM Scheduler - Check Resources Health " + e.message
  end
end
And implement "perform" or some other class method in your controller, now the controller is "HealthChecker". Very easy and no extra effort. Hope it help.
So that you can test better and get a handle on whether it works, I suggest:
Write a shell script in [app root]/script which sets up the right environment variables to point to Ruby (if necessary) and makes the call to rake. E.g., something like script/get-sportdata.sh.
Test the script as root. E.g., first do sudo -s.
Call this script from cron. E.g., * cd [...] && script/get-sportdata.sh. If necessary, test that line as root too.
That's been my recipe for success, running rake tasks from cron on Ubuntu. The cron environment is a bit different from the usual shell setup, so limiting your actual cron jobs to simple commands that run a particular script is a good way to divide the configuration into smaller parts which can be individually tested.
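If you'd rather keep the wrapper in Ruby than shell, a minimal sketch of the same idea, assuming the paths from the question (the PATH value and the use of production mode are illustrative):
#!/usr/bin/env ruby
# script/get-sportdata.rb (hypothetical): pin down the environment
# explicitly instead of relying on whatever cron provides.
ENV['PATH'] = '/usr/local/bin:/usr/bin:/bin'
ENV['RAILS_ENV'] ||= 'production'

Dir.chdir('/home/hxh/Share/ruby/sport') do
  system('/usr/local/bin/rake get_sportdata') or abort('rake get_sportdata failed')
end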

Restart rails server on git commit

I'm setting up a shared development server for a Ruby on Rails project.
Is there a good way to set it up to restart, or reload the code every time someone pushes a commit to the master branch (for example)? I don't care about setting up gems etc every time, à la Heroku - I just want to run the new code.
If there are any problems, I can go in and restart the server manually - I just don't want to do it every time.
The post-receive hook runs after the entire process is completed and can be used to update other services or notify users.
In the post-receive hook, you'll most likely need to grep for the Ruby PID, kill that process and then restart the Rails server.
Git Hooks is what you're looking for.
Using these, you can run custom commands based on certain conditions.
Create a file named post-commit (or post-receive, on the server that receives the push) under your .git/hooks folder, make it executable, and fill it like so:
#!/bin/sh
exec rake deploy
and in your Rakefile,
task :deploy do
  # IO.popen (not IO.open) runs ps and lets us grep its output for the
  # running Rails server's PID.
  pid = IO.popen("ps").grep(/script\/rails/) { |x| x.split(" ").first }.first
  sh "kill -9 #{pid}"
  sh "rails s"
end
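Note that grepping ps is fragile (plain ps also only lists processes attached to the current terminal). An alternative sketch, assuming a default layout where the server writes its PID to tmp/pids/server.pid:
task :deploy do
  pid_file = 'tmp/pids/server.pid'
  if File.exist?(pid_file)
    # Stop the old server; if the PID file is stale, kill simply fails.
    system "kill #{File.read(pid_file).to_i}"
  end
  sh 'rails s -d' # -d daemonizes the server so the hook can return
end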

Rails + Capybara-webkit – javascript code coverage?

I am looking into using capybara-webkit to do somewhat close-to-reality tests of the app. This is absolutely necessary as the app features a very rich JS-based UI and the Rails part is mostly API calls.
The question is: are there any tools that integrate into the testing pipeline, instrument JavaScript code and report its coverage? The key here is the ability to integrate easily into the testing workflow (just like rcov/simplecov) – I don't like the idea of doing it myself with jscoverage or the like :)
Many thanks in advance.
This has now been added to JSCover (in trunk) - the related thread at JSCover is here.
I managed to get JSCover working in the Rails + Capybara pipeline, but it did take quite a bit of hacking to get it to work.
These changes are now in JSCover's trunk and will be part of version 1.0.5. There are working examples (including a Selenium IDE recorded example) and documentation in there too.
There is some additional work needed to get the branch detection to work, since that uses objects that cannot be easily serialized to JSON.
There is a function to do this, which is used in the new code.
Anyway, the end result works nicely.
I agree. This makes JSCover usable by higher-level tools that don't work well with iframes or multiple windows, which are avoided by this approach. It also means code coverage can be added to existing Selenium tests with two adjustments:
Make the tests run through the JSCover proxy
Save the coverage report at the end of the test suite
See JSCover's documentation for more information. Version 1.0.5 containing these changes should be released in a few days.
Update: Starting from JSCover version 1.0.5 the hacks I outlined in my previous answer are no longer needed. I've updated my answer to reflect this.
I managed to get JSCover working in the Rails + Capybara pipeline, but it did take some hacking to get it to work. I built a little rake task that:
uses the rails asset pipeline to generate the scripts
calls the java jar to instrument all the files and generate an empty report into a temp dir
patches the jscover.js script to operate in "report mode" (simply add jscoverage_isReport=true at the end)
copies the result to /public/assets so the tests pick it up without needing any changes and so the coverage report can be opened automatically in the browser
Then I added a setup task to clear out the browser's localStorage at the start of the tests and a teardown task that writes out the completed report at the end.
def setup
  unless $startup_once
    $startup_once = true
    puts 'Clearing localStorage'
    visit('/')
    page.execute_script('localStorage.removeItem("jscover");')
  end
end

def teardown
  out = page.evaluate_script("typeof(_$jscoverage)!='undefined' && jscoverage_serializeCoverageToJSON()")
  unless out.blank?
    File.open(File.join(Rails.root, "public/assets/jscoverage.json"), 'w') { |f| f.write(out) }
  end
end
Anyway, the end result works nicely; the advantage of doing it this way is that it also works on headless browsers, so it can be included in CI.
Update 2: Here is a rake task that automates the steps; drop it in /lib/tasks:
# Coverage testing for JavaScript
#
# Usage:
#   Download JSCover from http://tntim96.github.io/JSCover/ and move it to
#   ~/Applications/JSCover-1
#   First instrument the javascript files:
#     rake assets:coverage
#   Then run the browser tests:
#     rake test
#   See the results in the browser:
#     http://localhost:3000/assets/jscoverage.html
#   Don't forget to clean up the instrumented assets afterwards:
#     rake assets:clobber
#   Also don't forget to re-instrument after changing a JS file
namespace :assets do
  desc 'Instrument all the assets named in config.assets.precompile'
  task :coverage do
    Rake::Task["assets:coverage:primary"].execute
  end

  namespace :coverage do
    def jscoverage_loc
      Dir.home + '/Applications/JSCover-1/'
    end

    def internal_instrumentalize
      config = Rails.application.config
      target = File.join(Rails.public_path, config.assets.prefix)
      environment = Sprockets::Environment.new
      environment.append_path 'app/assets/javascripts'
      `rm -rf #{tmp = File.join(Rails.root, 'tmp', 'jscover')}`
      `mkdir #{tmp}`
      `rm -rf #{target}`
      `mkdir #{target}`
      print 'Generating assets'
      require File.join(Rails.root, 'config', 'initializers', 'assets.rb')
      (%w{application.js} + config.assets.precompile.select { |f| f.is_a?(String) && f =~ /\.js$/ }).each do |f|
        print '.'
        File.open(File.join(target, f), 'w') { |ff| ff.write(environment[f].to_s) }
      end
      puts "\nInstrumenting…"
      `java -Dfile.encoding=UTF-8 -jar #{jscoverage_loc}target/dist/JSCover-all.jar -fs #{target} #{tmp} #{'--no-branch' unless ENV['C1']} --local-storage`
      puts 'Copying into place…'
      `cp -R #{tmp}/ #{target}`
      `rm -rf #{tmp}`
      File.open("#{target}/jscoverage.js", 'a') { |f| f.puts 'jscoverage_isReport = true' }
    end

    task :primary => %w(assets:environment) do
      unless Dir.exist?(jscoverage_loc)
        abort "Cannot find JSCover! Download it from http://tntim96.github.io/JSCover/ and put it in #{jscoverage_loc}"
      end
      internal_instrumentalize
    end
  end
end

Clear Memcached on Heroku Deploy

What is the best way to automatically clear Memcached when I deploy my rails app to Heroku?
I'm caching the home page, and when I make changes and redeploy, the page is served from the cache, and the updates aren't incorporated.
I want to have this be totally automated. I don't want to have to clear the cache in the heroku console each time I deploy.
Thanks!
I deploy my applications using a bash script that automates the GitHub & Heroku pushes, database migration, application maintenance mode activation and cache clearing.
In this script, the command to clear the cache is:
heroku run --app YOUR_APP_NAME rails runner -e production Rails.cache.clear
This works on Celadon Cedar with the Heroku Toolbelt package. I know this is not a Rake-based solution; however, it's quite efficient.
Note: be sure to set the environment (-e option) of the runner command to production, as it will otherwise be executed in the development environment.
Edit: I have experienced issues with this command on Heroku over the last few days (Rails 3.2.21). I did not have time to track down the origin of the issue, but removing the -e production did the trick, so if the command does not succeed, run this one instead:
heroku run --app YOUR_APP_NAME rails runner Rails.cache.clear
[On the Celadon Cedar Stack]
-- [Update 18 June 2012 -- this no longer works, will see if I can find another workaround]
The cleanest way I have found to handle these post-deploy hooks is to latch onto the assets:precompile task that is already called during slug compilation. With a nod to the asset_sync gem for the idea:
Rake::Task["assets:precompile"].enhance do
  # How to invoke a task that exists elsewhere
  # Rake::Task["assets:environment"].invoke if Rake::Task.task_defined?("assets:environment")
  # Clear cache on deploy
  print "Clearing the rails memcached cache\n"
  Rails.cache.clear
end
I just put this in a lib/tasks/heroku_deploy.rake file and it gets picked up nicely.
What I ended up doing was creating a new rake task that deployed to heroku and then cleared the cache. I created a deploy.rake file and this is it:
namespace :deploy do
  task :production do
    puts "deploying to production"
    system "git push heroku"
    puts "clearing cache"
    system "heroku console Rails.cache.clear"
    puts "done"
  end
end
Now, instead of typing git push heroku, I just type rake deploy:production.
25 Jan 2013: this works for a Rails 3.2.11 app running on Ruby 1.9.3 on Cedar
In your Gemfile add the following line to force ruby 1.9.3:
ruby '1.9.3'
Create a file named lib/tasks/clear_cache.rake with this content:
if Rake::Task.task_defined?("assets:precompile:nondigest")
  Rake::Task["assets:precompile:nondigest"].enhance do
    Rails.cache.clear
  end
else
  Rake::Task["assets:precompile"].enhance do
    # rails 3.1.1 will clear out Rails.application.config if the env vars
    # RAILS_GROUP and RAILS_ENV are not defined. We need to reload the
    # assets environment in this case.
    # Rake::Task["assets:environment"].invoke if Rake::Task.task_defined?("assets:environment")
    Rails.cache.clear
  end
end
Finally, I also recommend running heroku labs:enable user-env-compile on your app so that its environment is available to you as part of the precompilation.
Aside from anything you can do inside your application that runs on 'application start', you could use the Heroku deploy hooks (http://devcenter.heroku.com/articles/deploy-hooks#http_post_hook) to hit a URL within your application that clears the cache.
I've added config/initializers/expire_cache.rb with
ActionController::Base.expire_page '/'
Works sweet!
Since the heroku gem is deprecated, an updated version of Solomon's very elegant answer would be to save the following code in lib/tasks/heroku_deploy.rake:
namespace :deploy do
  task :production do
    puts "deploying to production"
    system "git push heroku"
    puts "clearing cache"
    system "heroku run rake cache:clear"
    puts "done"
  end
end

namespace :cache do
  desc "Clears Rails cache"
  task :clear => :environment do
    Rails.cache.clear
  end
end
Then, instead of git push heroku master, you type rake deploy:production on the command line.
To just clear the cache you can run rake cache:clear
The solution I like to use is the following:
First, I implement a deploy_hook action that looks for a parameter that I set differently for each app. Typically I just do this on the "home" or "public" controller, since it doesn't take that much code.
### routes.rb ###
post 'deploy_hook' => 'home#deploy_hook'

### home_controller.rb ###
def deploy_hook
  Rails.cache.clear if params[:secret] == "a3ad3d3"
end
And, I simply tell heroku to setup a deploy hook to post to that action whenever I deploy!
heroku addons:add deployhooks:http \
--url=http://example.com/deploy_hook?secret=a3ad3d3
Now, every time I deploy, Heroku will do an HTTP POST back to the site to let me know that the deploy worked just fine.
Works like a charm for me. Of course, the secret token is not "high security", and this shouldn't be used if clearing the cache would give an attacker a good way to take your site down. But honestly, if the site is that critical to attack, don't host it on Heroku! However, if you want to increase security a bit, you could use a Heroku configuration variable and not have the 'token' in the source code at all.
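A minimal sketch of that config-variable variant, assuming you set the variable with heroku config:set DEPLOY_HOOK_SECRET=... (the variable name is illustrative):
### home_controller.rb ###
def deploy_hook
  # Compare against an environment variable instead of a hard-coded token.
  Rails.cache.clear if params[:secret] == ENV['DEPLOY_HOOK_SECRET']
  head :ok
end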
Hope people find this useful.
I just had this problem as well but wanted to stick to the git deployment without an additional script as a wrapper.
So my approach is to write a file during slug generation containing a UUID that marks the current precompilation. This is implemented as a hook on assets:precompile.
# /lib/tasks/store_asset_cacheversion.rake
# add uuidtools to Gemfile
require "uuidtools"

def storeCacheVersion
  cacheversion = UUIDTools::UUID.random_create
  File.open(".cacheversion", "w") { |file| file.write(cacheversion) }
end

Rake::Task["assets:precompile"].enhance do
  puts "Storing git hash in file for cache invalidation (assets:precompile)\n"
  storeCacheVersion
end

Rake::Task["assets:precompile:nondigest"].enhance do
  puts "Storing git hash in file for cache invalidation (assets:precompile:nondigest)\n"
  storeCacheVersion
end
The other is an initializer that checks this id against the cached version. If they differ, there has been another precompilation and the cache will be invalidated.
So it doesn't matter how often the application spins up or down, or across how many nodes the workers are distributed, because slug generation happens just once.
# /config/initializers/00_asset_cache_check.rb
currenthash = File.read ".cacheversion"
cachehash = Rails.cache.read "cacheversion"

puts "Checking cache version: #{cachehash} against slug version: #{currenthash}\n"

if currenthash != cachehash
  puts "flushing cache\n"
  Rails.cache.clear
  Rails.cache.write "cacheversion", currenthash
else
  puts "cache ok\n"
end
I needed to use a random ID because, as far as I know, there is no way of getting the git hash or any other useful ID. Perhaps ENV['REQUEST_ID'], but that is a random ID as well.
The good thing about the UUID is that it is independent from Heroku as well.

Why Doesn't My Cron Job Work Properly?

I have a cron job on an Ubuntu Hardy VPS that only half works and I can't work out why. The job is a Ruby script that uses mysqldump to back up a MySQL database used by a Rails application, which is then gzipped and uploaded to a remote server using SFTP.
The gzip file is created and copied successfully but it's always zero bytes. Yet if I run the cron command directly from the command line it works perfectly.
This is the cron job:
PATH=/usr/bin
10 3 * * * ruby /home/deploy/bin/datadump.rb
This is datadump.rb:
#!/usr/bin/ruby

require 'yaml'
require 'logger'
require 'rubygems'
require 'net/ssh'
require 'net/sftp'

APP = '/home/deploy/apps/myapp/current'
LOGFILE = '/home/deploy/log/data.log'
TIMESTAMP = '%Y%m%d-%H%M'
TABLES = 'table1 table2'

log = Logger.new(LOGFILE, 5, 10 * 1024)
dump = "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz"
ftpconfig = YAML::load(open('/home/deploy/apps/myapp/shared/config/sftp.yml'))
config = YAML::load(open(APP + '/config/database.yml'))['production']
cmd = "mysqldump -u #{config['username']} -p#{config['password']} -h #{config['host']} --add-drop-table --add-locks --extended-insert --lock-tables #{config['database']} #{TABLES} | gzip -cf9 > #{dump}"

log.info 'Getting ready to create a backup'
`#{cmd}`

# Strongspace
log.info 'Backup created, starting the transfer to Strongspace'
Net::SSH.start(ftpconfig['strongspace']['host'], ftpconfig['strongspace']['username'], ftpconfig['strongspace']['password']) do |ssh|
  ssh.sftp.connect do |sftp|
    sftp.open_handle("#{ftpconfig['strongspace']['dir']}/#{dump}", 'w') do |handle|
      sftp.write(handle, open("#{dump}").read)
    end
  end
end
log.info 'Finished transferring backup to Strongspace'

log.info 'Removing local file'
cmd = "rm -f #{dump}"
log.debug "Executing: #{cmd}"
`#{cmd}`
log.info 'Local file removed'
I've checked and double-checked all the paths and they're correct. Both sftp.yml (SFTP credentials) and database.yml (MySQL credentials) are owned by the executing user (deploy) with read-only permissions for that user (chmod 400). I'm using the 1.1.x versions of net-ssh and net-sftp. I know they're not the latest, but they're what I'm familiar with at the moment.
What could be causing the cron job to fail?
When scripts run correctly interactively but not when run by cron, the problem is usually the environment settings in place ... for example the PATH, as already mentioned by @Ted Percival, but it may be other environment variables.
This is because cron will not invoke .bash_profile, .bashrc or /etc/profile before executing.
The best way to avoid this is to ensure any scripts invoked by cron do not make any assumptions about the environment when executing. Overcoming this can be as simple as including a few lines in your script to make sure the environment is set up properly. For example, in my case I have all the significant settings in /etc/profile (for RHEL), so I include the following line in any script to be run under cron:
source /etc/profile
Looks like your PATH is missing a few directories, most importantly /bin (for /bin/rm). Here's what my system's /etc/crontab uses:
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
Are you sure the temporary file is being created correctly when running as a cron job? The working directory for your script will be either the directory given by the HOME environment variable or the one in the /etc/passwd entry for the user that installed the cron job. If deploy does not have write permissions for the directory in which the script executes, you could specify an absolute path for the dump file to fix the problem.
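For example, in datadump.rb the dump path could be anchored absolutely; a one-line sketch, assuming a /home/deploy/tmp directory writable by the deploy user (the directory is illustrative):
# Write the dump to a known-writable absolute path instead of the
# cron job's working directory.
dump = File.join('/home/deploy/tmp', "myapp-#{Time.now.strftime(TIMESTAMP)}.sql.gz")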
Is cron sending emails with logs?
If not, pipe the output of cron to a log file.
Make sure to redirect STDERR to the log.
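For example, the crontab entry from the question could capture everything the script prints, including errors, and pick up the wider PATH suggested above (the log path is illustrative):
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
10 3 * * * ruby /home/deploy/bin/datadump.rb >> /home/deploy/log/cron.log 2>&1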
