execute bash script inside deploy.rb using capistrano - ruby-on-rails

I am learning (by doing) Rails and Capistrano.
How can I execute a script inside deploy.rb?
I came across run(command), exec(command), execute: and run:.
I don't have to specify :db or :web, so I have the following backbone:
task :myTask do
  on roles(:app) do
    execute "bash myScript.sh"
    puts "#{:server} reports: #{myTask}"
  end
end
Is this correct?
Is the SSH part of the whole process, or do I have to ssh in the command?
How do people develop deploy.rb without running cap deploy every time they make a change?
Thank you!

Ruby allows you to run a shell command using backticks,
for example:
output = `pwd`
puts "output is #{output}"
See https://ruby-doc.org/core-1.9.3/Kernel.html#method-i-60 for more.
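Backticks also set $? to the status of the child process, which is useful for checking whether the script succeeded. A minimal runnable sketch (using echo as a stand-in command):

```ruby
# Run a shell command with backticks; the child's exit status
# is available afterwards in the $? global.
output = `echo hello`
puts "output is #{output.strip}"       # prints "output is hello"
puts "exit status is #{$?.exitstatus}" # prints "exit status is 0"
```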

This is what worked for me:
role :app, 'user@domain1.com'
on roles(:app) do
  within 'remote_path' do
    execute 'bash', 'myScript.sh'
  end
end

Related

Capistrano deploy one server at a time

I am using Capistrano for our Rails deployment. We want to deploy to one server first and, after deployment has finished on the first server, start deployment on the second server. We do not want restarting in sequence with a delay; we want a complete deployment, one server at a time. So far I have this:
namespace :deploy do
  task :sequence do
    on roles(:app), in: :sequence do |host|
      invoke 'deploy'
    end
  end
end
The problem is with invoke 'deploy':
it calls deploy for all the app servers, which in turn deploy in parallel.
Finally, how do I invoke the deploy task for a specific host?
The following should help you run the deploy task sequentially (Capistrano 2 syntax):
task :my_task, roles: :web do
  find_servers_for_task(current_task).each do |server|
    run "YOUR_COMMAND", hosts: server.host
  end
end
If I had that requirement, I'd probably script it. You can run Capistrano with the --hosts parameter to define which of the servers you described in your stage file (config/deploy/dev|stage|prod|somethingelse.rb) you actually want to run the command against. This can take two forms. Let's say I have three servers, test1, test2, and prod1. I can run it with a list, like cap prod --hosts=test1,test2 deploy and only test1 and test2 will receive the deployment. You can also use a regular expression to achieve the same thing, like cap prod --hosts=^test deploy.
This is documented here: http://capistranorb.com/documentation/advanced-features/host-filtering/
With this in mind, I'd probably write a script (or Makefile) which runs Capistrano N times, for a different server each time.
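A minimal sketch of such a script, assuming a stage named prod and the hypothetical hosts test1, test2, and prod1 from above. The echo makes this a dry run that only prints each command; remove it to actually deploy:

```shell
#!/bin/sh
# Invoke Capistrano once per host, so each server is fully
# deployed before the next one starts.
HOSTS="test1 test2 prod1"
for host in $HOSTS; do
  echo cap prod --hosts="$host" deploy
done
```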

crontab didn't work in Rails rake task

I have a rake task in my Rails application, and when I execute this command in my Rails app path /home/hxh/Share/ruby/sport/:
rake get_sportdata
it works fine.
Now I want to use crontab to turn this rake task into a timed task, so I added:
* * * * * cd /home/hxh/Share/ruby/sport && /usr/local/bin/rake get_sportdata >/dev/null 2>&1
But this doesn't work. I get this in the cron.log file:
Job `cron.daily' terminated
I want to know where the error is.
Does "cd /home/hxh/Share/ruby/sport && /usr/local/bin/rake get_sportdata >/dev/null 2>&1" work in your terminal?
That said, using crontab with Rails is usually not a good idea: it loads the Rails environment every time, which slows things down.
I think whenever and rufus-scheduler are both good options. For example, rufus-scheduler is very easy to use. In config/initializers/schedule_task.rb:
require 'rubygems'
require 'rufus/scheduler'

scheduler = Rufus::Scheduler.start_new(:thread_name => "Check Resources Health")
scheduler.every '1d', :first_at => Time.now do |job|
  puts "###########RM Schedule Job - Check Resources Health: #{job.job_id}##########"
  begin
    HealthChecker.perform
  rescue Exception => e
    puts e.message
    puts e.backtrace
    raise "Error in RM Scheduler - Check Resources Health " + e.message
  end
end
And implement perform (or some other class method) in your class; here that class is HealthChecker. Very easy and no extra effort. Hope it helps.
So that you can test better and get a handle on whether it works, I suggest:
Write a shell script in [app root]/script which sets up the right environment variables to point to Ruby (if necessary) and makes the call to rake, e.g. something like script/get-sportdata.sh.
Test the script as root, e.g. by first doing sudo -s.
Call this script from cron, e.g. * cd [...] && script/get-sportdata.sh. If necessary, test that line as root too.
That's been my recipe for success running rake tasks from cron on Ubuntu. The cron environment is a bit different from the usual shell setup, so limiting your actual cron jobs to simple commands that run a particular script is a good way to divide the configuration into smaller parts which can be individually tested.
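For reference, such a wrapper might look like the sketch below. The PATH line and the app directory are examples taken from the question; adjust them for your own box:

```shell
#!/bin/sh
# script/get-sportdata.sh -- wrapper so the cron entry stays a one-liner.
# cron runs with a minimal environment, so set PATH explicitly.
PATH=/usr/local/bin:/usr/bin:/bin
export PATH
APP_DIR=/home/hxh/Share/ruby/sport
cd "$APP_DIR" 2>/dev/null && exec rake get_sportdata
```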

launching background process in capistrano task

Capistrano task:
namespace :service do
  desc "start daemontools (svscan/supervise/svscanboot)"
  task :start, :roles => :app do
    sudo "svscanboot&"
  end
end
Now this doesn't work: the svscanboot process simply doesn't run.
This helped me find sleep: https://github.com/defunkt/resque/issues/284
other sources pointed me to nohup, redirection, and pty => true, so I tried all these.
run "nohup svscanboot >/tmp/svscanboot.log 2>&1 &" # NO
run "(svscanboot&) && sleep 1" # NO
run "(nohup svscanboot&) && sleep 1" # YES!
Now, could anyone explain to me why I need the sleep statement, and what difference nohup makes?
For the record, all of the above run equally well from a user shell; the problem occurs only in the context of Capistrano.
Thanks!
Try forking the process as explained here: Spawn a background process in Ruby
You should be able to do something like this:
job1 = fork do
  run "svscanboot"
end
Process.detach(job1)
As well, check out this: Starting background tasks with Capistrano
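Outside of Capistrano, the fork-and-detach idea can be sketched in plain Ruby; here sleep 1 stands in for the real daemon command:

```ruby
# Start a command in a forked child and detach it, so the parent
# never blocks waiting for it (and the child is still reaped).
pid = fork do
  exec "sleep 1"   # replace with the real long-running command
end
Process.detach(pid)
puts "started background process #{pid}"
```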
My simple solution would be to make an svscanboot.sh file on the remote server with whatever code you want to run. In your case:
svscanboot >/tmp/svscanboot.log 2>&1
In the cap task, add this:
run "sh +x svscanboot.sh &"
This works well for me.
I think nohup just keeps the process running in the background, so you don't need to explicitly add the trailing &.
Did you try
run "nohup svscanboot >/tmp/svscanboot.log 2>&1"
(without the trailing & to send it to the background)?
That should work and remain running when your current Capistrano session is closed.
Try this
run "nohup svscanboot >/tmp/svscanboot.log 2>&1 & sleep 5", pty: false
I'd like to share my solution which also works when executing multiple commands. I tried many other variants found online, including the "sleep N" hack.
run("nohup sh -c 'cd #{release_path} && bundle exec rake task_namespace:task_name RAILS_ENV=production > ~/shared/log/<rakelog>.log &' > /dev/null 2>&1", :pty => true)

Capistrano tasks not performing within the given scope

I have built some Capistrano tasks which I need to run within the defined :app role. This is what I have so far:
desc "Stop unicorn"
task :stop, :roles => :app do
  logger.info "Stopping unicorn server(s).."
  run "touch #{unicorn_pid}"
  pid = capture("cat #{unicorn_pid}").to_i
  run "kill -s QUIT #{pid}" if pid > 0
end
As far as I know, this should run the given commands on the servers in the :app role, right? But the fact of the matter is that it's running the commands on the servers in the :db role.
Can anyone give some insight into this problem? Or is there a way to force Capistrano to adhere to the :roles flag?
Thanks in advance
// Emil
Using capture will cause the task to run only on the first server listed.
From the documentation:
The capture helper will execute the given command on the first matching server, and will return the output of the command as a string.
https://github.com/capistrano/capistrano/wiki/2.x-DSL-Action-Inspection-Capture
Unfortunately I am facing a similar issue; the find_servers solution may work, but it's hacky, and it runs N x N times, where N is the number of servers you have.

How do I "copy unless later version exists" in Capistrano?

I want to protect my database.yml file by keeping it out of version control. Thus, I have two tasks in my Capistrano deploy recipe:
task :copy_db_config do
  # copy local config file if it exists and is more
  # recent than the remote one
end

task :symlink_db_config do
  run "ln -nfs #{shared_path}/config/database.yml #{release_path}/config/database.yml"
end
Can you help fill in the first task?
I don't have functioning code for you here and now, but:
You can get the local timestamp using Ruby: the File class has a ctime function which lets you know when a file was changed.
Run the same command on the server's database.yml.
If the local one is newer, Capistrano has a method for secure upload:
upload("products.txt", "/home/medined", :via => :scp)
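The timestamp comparison itself can be sketched in plain Ruby. Note that ctime cannot be set programmatically, so this sketch compares mtime instead (which is usually what "more recent" means here); the two temp files stand in for the local and remote database.yml:

```ruby
require "tempfile"

local  = Tempfile.new("database.yml")  # stand-in for the local copy
remote = Tempfile.new("database.yml")  # stand-in for the remote copy

# Pretend the remote copy is an hour old.
an_hour_ago = Time.now - 3600
File.utime(an_hour_ago, an_hour_ago, remote.path)

if File.mtime(local.path) > File.mtime(remote.path)
  puts "local copy is newer -- upload it"  # prints "local copy is newer -- upload it"
end
```

In a real recipe, the "remote" timestamp would come from a capture of something like stat on the server, and the upload branch would call Capistrano's upload helper.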
I had the same problem, but I approached it differently. Maybe it will be helpful.
The setup task copies database.yml.example to database.yml. The deploy task does not touch database.yml. I have separate tasks for changing the database names, usernames, and passwords. Here's an example:
desc "Change the database name"
task :change_db_database, :roles => :app do
  database = prompt('Enter new database name: ')
  run <<-CMD
    cd #{shared_path}/config &&
    perl -i -pe '$env = $1 if /^(\\w+)/; s/database:.*/database: #{database}/ if $env eq "#{ENV['CONNECTION'] || ENV['TARGET']}"' database.yml
  CMD
end
I run these after setup but before the first deploy on new boxes. Then any time after that when I need to change database parameters, I use these tasks instead of copying in a new file.
