I'm currently working on a multistage Capistrano recipe that would, ideally, run the YUI compressor over all CSS and JS after deploy.
Here's what I've come up with so far:
after "deploy", "deploy:cleanup", "minifier:compress"
# Task to minify via Yui-compressor
# Uses compressor bundled with application in #{application}/lib/yuicompressor
namespace :minifier do
  def minify(files)
    files.each do |file|
      cmd = "java -jar lib/yuicompressor/build/yuicompressor-2.4.6.jar #{file} -o #{file}"
      puts cmd
      ret = system(cmd)
      raise "Minification failed for #{file}" if !ret
    end
  end

  desc "minify"
  task :compress do
    minify_js
    minify_css
  end

  desc "minify javascript"
  task :minify_js do
    minify(Filelist['public/js/**/*.js'])
  end

  desc "minify css"
  task :minify_css do
    minify(Filelist['public/css/**/*.css'])
  end
end
What's really puzzling me is the
uninitialized constant Capistrano::Configuration::Filelist (NameError)
I get as soon as Capistrano reaches that task.
As a total newbie to Ruby, Rails, and Capistrano, I gather that FileList is, for some reason, not available inside Capistrano, but I can't figure out what to replace it with.
Thanks for the help.
Your task is conceptually wrong: it will run on the local system (the one you're deploying from), because you call system. You should use the run method, which runs commands remotely.
def minify(files)
  files.each do |file|
    cmd = "java -jar lib/yuicompressor/build/yuicompressor-2.4.6.jar #{file} -o #{file}"
    puts cmd
    ret = system(cmd) # *** SYSTEM RUNS LOCAL COMMANDS ***
    raise "Minification failed for #{file}" if !ret
  end
end
That said, I would replace that code with shell scripting, something like (untested):
task :minify do
  cmd = "java -jar lib/yuicompressor/build/yuicompressor-2.4.6.jar"
  run "find #{current_path}/public/css/ -name '*.css' -print0 | xargs -0 -I file #{cmd} file -o file"
  run "find #{current_path}/public/js/ -name '*.js' -print0 | xargs -0 -I file #{cmd} file -o file"
end
Or, if you prefer to program it in Ruby, move the code into a rake task (which you can try and debug locally) and then invoke it with Capistrano: How do I run a rake task from Capistrano?
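For reference, here is a rough, untested sketch of that rake-task route. FileList actually comes from Rake, not Capistrano, so it is available inside a .rake file (or you can simply use Dir.glob); the Capistrano task then only has to run the rake task on the server. Task names and paths below are assumptions:

# lib/tasks/minify.rake -- plain Rake, runs on whatever machine invokes it
namespace :minifier do
  desc "Minify all JS and CSS under public/ with the YUI compressor"
  task :compress do
    jar = "lib/yuicompressor/build/yuicompressor-2.4.6.jar"
    (Dir.glob("public/js/**/*.js") + Dir.glob("public/css/**/*.css")).each do |file|
      system("java", "-jar", jar, file, "-o", file) or raise "Minification failed for #{file}"
    end
  end
end

# config/deploy.rb -- have Capistrano run it remotely, inside the released code
namespace :minifier do
  task :compress, :roles => :app do
    run "cd #{current_path} && rake minifier:compress"
  end
end
after "deploy", "deploy:cleanup", "minifier:compress"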
I have Capistrano deploying my app to an Ubuntu remote server on a cloud host. It works, except that Sidekiq does not get restarted. After a deploy, new Sidekiq jobs are stuck in the queue until it finally gets restarted. I currently SSH into the machine manually and run sudo initctl stop/start workers, which works. I am not super strong with Capistrano, and my research so far has failed to find a solution to this. I am hoping I am missing something obvious to someone more familiar than me. Here is the relevant portion of my /config/deploy.rb file:
namespace :deploy do
  namespace :sidekiq do
    task :quiet do
      on roles(:app) do
        puts capture("pgrep -f 'workers' | xargs kill -USR1")
      end
    end

    task :restart do
      on roles(:app) do
        execute :sudo, :initctl, :stop, :workers
        execute :sudo, :initctl, :start, :workers
      end
    end
  end

  after 'deploy:starting', 'sidekiq:quiet'
  after 'deploy:reverted', 'sidekiq:restart'
  after 'deploy:published', 'sidekiq:restart'
end
UPDATE
From my deploy logs:
DEBUG [268bc235] Running /usr/bin/env kill -0 $( cat /home/ubuntu/staging/shared/tmp/pids/sidekiq-0.pid ) as ubuntu#159.203.8.242
DEBUG [268bc235] Command: cd /home/ubuntu/staging/releases/20160806065537 && ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.2.3" ; /usr/bin/env kill -0 $( cat /home/ubuntu/staging/shared/tmp/pids/sidekiq-0.pid ) )
DEBUG [268bc235] Finished in 0.471 seconds with exit status 1 (failed).
I don't believe you need those configs in your deploy.rb if you have the capistrano-sidekiq gem installed and called in your Capfile.
Make sure you have require 'capistrano/sidekiq' in your Capfile or it won't know to call the default tasks.
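As a rough sketch (assuming capistrano-sidekiq is already in your Gemfile), the Capfile then only needs the requires below; depending on the gem version, it hooks its own quiet/restart tasks into the deploy flow for you:

# Capfile
require 'capistrano/setup'
require 'capistrano/deploy'
require 'capistrano/sidekiq'  # provides the sidekiq:quiet / sidekiq:restart tasks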
I am trying to write a rake task to rename all occurrences of a method in a ruby project. I have achieved this using the following command from the command line.
Basically
retention.group_by('bla').count
needs to be changed to
retention.group_by('bla').size
I managed to achieve this using the following from the command line:
find . -name \*.rb -exec ruby -i -p -e "gsub(/(group_by(\(([^\)]+)\))).count/, '\1.size')" {} \;
I am now trying to do this from a rake task to make it straightforward to apply in all our projects. What is the easiest / most elegant way to do this? I think I am close; it's just selecting all the files in the project directory that I am stuck on.
This did the trick
namespace :rename do
  task :gb_count_rename do
    Dir.glob("**/*.rb").each do |file_name|
      text = File.read(file_name)
      content = text.gsub(/(group_by(\(([^\)]+)\))).count/, '\1.size')
      File.open(file_name, "w") { |file| file << content }
    end
  end
end
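Dropped into a .rake file under lib/tasks (the file name is up to you), it can then be run from the project root with:

rake rename:gb_count_rename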
I often run the various test groups like:
rake test:units
rake test:functionals
I also like to run individual test files or individual tests:
ruby -Itest test/unit/file_test.rb
ruby -Itest test/unit/file_test.rb -n '/some context Im working on/'
There's also:
rake test TEST=test/unit/file_test.rb
And I've even created custom groupings in my Rakefile:
Rake::TestTask.new(:ps3) do |t|
  t.libs << 'test'
  t.verbose = true
  t.test_files = FileList["test/unit/**/ps3_*_test.rb", "test/functional/services/ps3/*_test.rb"]
end
What I haven't figured out yet is how to run multiple ad-hoc tests at the command line. In other words, how can I inject test_files into the rake task? Something like:
rake test TEST=test/unit/file_test.rb,test/functional/files_controller_test.rb
Then I could run a shell function taking arbitrary parameters and run the fast ruby -Itest single test, or a rake task if there's more than one file.
bundle exec ruby -I.:test -e "ARGV.each{|f| require f}" file1 file2
or:
find test -name '*_test.rb' | xargs -t bundle exec ruby -I.:test -e "ARGV.each{|f| require f}"
I ended up hacking this into my Rakefile myself, like so:
Rake::TestTask.new(:fast) do |t|
  files = if ENV['TEST_FILES']
    ENV['TEST_FILES'].split(',')
  else
    FileList["test/unit/**/*_test.rb", "test/functional/**/*_test.rb", "test/integration/**/*_test.rb"]
  end

  t.libs << 'test'
  t.verbose = true
  t.test_files = files
end
Rake::Task['test:fast'].comment = "Runs unit/functional/integration tests (or a list of files in TEST_FILES) in one block"
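With that in place, a comma-separated list of files goes straight on the command line, e.g. with the two files mentioned above:

rake test:fast TEST_FILES=test/unit/file_test.rb,test/functional/files_controller_test.rb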
Then I whipped up this bash function that allows you to call rt with an arbitrary list of test files. If there's just one file it runs it as ruby directly (this saves 8 seconds for my 50k loc app), otherwise it runs the rake task.
function rt {
  if [ $# -le 1 ] ; then
    ruby -Itest $1
  else
    test_files=""
    while [ "$1" != "" ]; do
      if [ "$test_files" == "" ]; then
        test_files=$1
      else
        test_files="$test_files,$1"
      fi
      shift
    done
    rake test:fast TEST_FILES=$test_files
  fi
}
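So a single file runs directly through ruby -Itest, and two or more files go through the rake task, e.g.:

rt test/unit/file_test.rb
rt test/unit/file_test.rb test/functional/files_controller_test.rb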
There's a parallel_tests gem that will let you run multiple tests in parallel. Once you have it in your Gemfile, you can just run as ...
bundle exec parallel_test integration/test_*.rb
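To get it, add the gem to your Gemfile; the group placement below is the usual convention (adjust as needed):

# Gemfile
gem 'parallel_tests', :group => [:development, :test]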
For me, I set up a few short shell helpers to run only the tests I want.
Bash Script
RUBY_MULTI_TEST="/tmp/ruby_multi_test.rb"

function suitup-multi-test-prepare {
  sudo rm $RUBY_MULTI_TEST 2> /dev/null
}

function suitup-multi-test-add {
  WORK_FOLDER=`pwd`
  echo "require '$WORK_FOLDER/$1' " >> $RUBY_MULTI_TEST
}

function suitup-multi-test-status {
  cat $RUBY_MULTI_TEST 2> /dev/null
}

function suitup-multi-test-run {
  suitup-multi-test-status
  ruby -I test/ $RUBY_MULTI_TEST
}
ery#tkpad:rails_app:$ suitup-multi-test-prepare
ery#tkpad:rails_app:$ suitup-multi-test-add test/functional/day_reports_controller_test.rb
ery#tkpad:rails_app:$ suitup-multi-test-add test/functional/month_reports_controller_test.rb
ery#tkpad:rails_app:$ suitup-multi-test-run
I'm loving how Capistrano has simplified my deployment workflow, but oftentimes a pushed change will run into issues that I need to log into the server to troubleshoot via the console.
Is there a way to use capistrano or another remote administration tool to interact with the rails console on a server from your local terminal?
Update:
cap shell seems promising, but it hangs when you try to start the console:
cap> cd /path/to/application/current
cap> pwd
** [out :: application.com] /path/to/application/current
cap> rails c production
** [out :: application.com] Loading production environment (Rails 3.0.0)
** [out :: application.com] Switch to inspect mode.
If you know a workaround for this, that'd be great.
I found a pretty nice solution based on https://github.com/codesnik/rails-recipes/blob/master/lib/rails-recipes/console.rb
desc "Remote console"
task :console, :roles => :app do
env = stage || "production"
server = find_servers(:roles => [:app]).first
run_with_tty server, %W( ./script/rails console #{env} )
end
desc "Remote dbconsole"
task :dbconsole, :roles => :app do
env = stage || "production"
server = find_servers(:roles => [:app]).first
run_with_tty server, %W( ./script/rails dbconsole #{env} )
end
def run_with_tty(server, cmd)
# looks like total pizdets
command = []
command += %W( ssh -t #{gateway} -l #{self[:gateway_user] || self[:user]} ) if self[:gateway]
command += %W( ssh -t )
command += %W( -p #{server.port}) if server.port
command += %W( -l #{user} #{server.host} )
command += %W( cd #{current_path} )
# have to escape this once if running via double ssh
command += [self[:gateway] ? '\&\&' : '&&']
command += Array(cmd)
system *command
end
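Since the tasks reference stage, this assumes the multistage extension; with that set up, opening a console is then just:

cap production console
cap production dbconsole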
This is how I do it without Capistrano: https://github.com/mcasimir/remoting (a deployment tool built on top of rake tasks). I've added a task to the README that opens a remote console on the server:
# remote.rake
namespace :remote do
  desc "Open rails console on server"
  task :console do
    require 'remoting/task'
    remote('console', config.login, :interactive => true) do
      cd config.dest
      source '$HOME/.rvm/scripts/rvm'
      bundle :exec, "rails c production"
    end
  end
end
Then I can run:
$ rake remote:console
I really like the "just use the existing tools" approach displayed in this gist. It simply uses the ssh shell command instead of implementing an interactive SSH shell yourself, which may break any time irb changes its default prompt, you need to switch users, or any other crazy thing happens.
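For illustration only, the idea boils down to something like this (a sketch in Capistrano 2 style, not the gist verbatim; user, find_servers and current_path are used the same way as elsewhere in this thread):

desc "Open a Rails console on the first app server via plain ssh"
task :console, :roles => :app do
  server = find_servers(:roles => [:app]).first
  # exec replaces the Capistrano process with ssh, so you get a real TTY
  exec "ssh -t #{user}@#{server.host} 'cd #{current_path} && bundle exec rails console production'"
end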
Not necessarily the best option, but I hacked the following together for this problem in our project:
task :remote_cmd do
  cmd = fetch(:cmd)
  puts `#{current_path}/script/console << EOF\r\n#{cmd}\r\n EOF`
end
To use it, I just use:
cap remote_cmd -s cmd="a = 1; b = 2; puts a+b"
(note: If you use Rails 3, you will have to change script/console above to rails console, however this has not been tested since I don't use Rails 3 on our project yet)
cap -T
cap invoke # Invoke a single command on the remote ser...
cap shell # Begin an interactive Capistrano session.
cap -e invoke
------------------------------------------------------------
cap invoke
------------------------------------------------------------
Invoke a single command on the remote servers. This is useful for performing
one-off commands that may not require a full task to be written for them. Simply
specify the command to execute via the COMMAND environment variable. To execute
the command only on certain roles, specify the ROLES environment variable as a
comma-delimited list of role names. Alternatively, you can specify the HOSTS
environment variable as a comma-delimited list of hostnames to execute the task
on those hosts, explicitly. Lastly, if you want to execute the command via sudo,
specify a non-empty value for the SUDO environment variable.
Sample usage:
$ cap COMMAND=uptime HOSTS=foo.capistano.test invoke
$ cap ROLES=app,web SUDO=1 COMMAND="tail -f /var/log/messages" invoke
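So a quick one-off check, without writing any task at all, could look like this (the host roles and log path are just examples):

$ cap COMMAND="tail -n 50 /var/www/app/current/log/production.log" ROLES=app invoke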
The article http://errtheblog.com/posts/19-streaming-capistrano has a great solution for this. I just made a minor change so that it works in a multiple-server setup.
desc "open remote console (only on the last machine from the :app roles)"
task :console, :roles => :app do
server = find_servers_for_task(current_task).last
input = ''
run "cd #{current_path} && ./script/console #{rails_env}", :hosts => server.host do |channel, stream, data|
next if data.chomp == input.chomp || data.chomp == ''
print data
channel.send_data(input = $stdin.gets) if data =~ /^(>|\?)>/
end
end
The terminal you get is not really amazing, though. If someone has an improvement that would make CTRL-D, CTRL-H, or the arrow keys work, please post it.
My run of cap deploy fails, and I think it's because of a formatting issue. Here's some output:
* executing "rm -rf /var/www/cap-deploy/socialmit/releases/20101215141011/log /var/www/cap-deploy/socialmit/releases/20101215141011/public/system /var/www/cap-deploy/socialmit/releases/20101215141011/tmp/pids &&\\\n mkdir -p /var/www/cap-deploy/socialmit/releases/20101215141011/public &&\\\n mkdir -p /var/www/cap-deploy/socialmit/releases/20101215141011/tmp &&\\\n ln -s /var/www/cap-deploy/socialmit/shared/log /var/www/cap-deploy/socialmit/releases/20101215141011/log &&\\\n ln -s /var/www/cap-deploy/socialmit/shared/system /var/www/cap-deploy/socialmit/releases/20101215141011/public/system &&\\\n ln -s /var/www/cap-deploy/socialmit/shared/pids /var/www/cap-deploy/socialmit/releases/20101215141011/tmp/pids"
(Sorry for the formatting.)
The &&\\\n things look really fishy, and indeed dumping them into my console causes an output of `\n: command not found.
WHere is cap deploy defined? It looks like the issue has something to do with it being defined as a list of commands that aren't properly formatted, leading to the extraneous newline that is throwing stuff off. But I can't find the actual code for cap deploy to fix it. It doesn't seem to be an app-specific thing since it's not in my Capfile or any of the files referenced by the Capfile.
The issue was that some user-defined tasks named after_symlink had to be renamed and invoked after the symlink using the after("deploy:symlink", "deploy:new_name") syntax:
problem:
namespace :deploy do
  desc "Symlink the upload directories"
  task :after_symlink do
    #run "mkdir -p #{shared_path}/uploads"
    run "ln -s #{deploy_to}/shared/db #{deploy_to}/#{current_dir}/db/link"
  end
end
error (actually a warning):
[Deprecation Warning] Naming tasks with before_ and after_ is deprecated, please see the new before() and after() methods. (Offending task name was after_update_code)
[Deprecation Warning] Naming tasks with before_ and after_ is deprecated, please see the new before() and after() methods. (Offending task name was after_symlink)
correct way of doing it:
namespace :deploy do
  desc "Symlink the upload directories"
  task :link_db do
    #run "mkdir -p #{shared_path}/uploads"
    run "ln -s #{deploy_to}/shared/db #{deploy_to}/#{current_dir}/db/link"
  end
end
after("deploy:symlink", "deploy:link_db")
The issue with the \\\n business was a misdiagnosis on my part. Apparently that is executed fine.
The deploy task is defined in the gem here.
I'd say that's most likely not the problem though. What error is it raising when the deploy fails?