Capistrano tasks not performing within the given scope - ruby-on-rails

I have built some Capistrano tasks which I need to run within the defined :app role. This is what I have so far:
desc "Stop unicorn"
task :stop, :roles => :app do
logger.info "Stopping unicorn server(s).."
run "touch #{unicorn_pid}"
pid = capture("cat #{unicorn_pid}").to_i
run "kill -s QUIT #{pid}" if pid > 0
end
As far as I know, this should run the given commands on the servers given in the :app role, right? But the fact of the matter is that it's running the commands on the servers in the :db role.
Can anyone give some insight into this problem? Or is there a way to force Capistrano to adhere to the :roles flag?
Thanks in advance
// Emil

Using capture will cause the command to be run only on the first matching server.
From the documentation:
The capture helper will execute the given command on the first matching server, and will return the output of the command as a string.
https://github.com/capistrano/capistrano/wiki/2.x-DSL-Action-Inspection-Capture
Unfortunately I am facing a similar issue; the find_servers solution may work, but it's hacky, and runs N x N times, where N is the number of servers you have.
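If you do need a per-server value, one workaround is to loop over the matching servers yourself and pass each host explicitly. This is only a sketch built on the standard Capistrano 2 find_servers helper and the :hosts option, not something tested against the poster's setup:

desc "Stop unicorn"
task :stop, :roles => :app do
  # Run the capture/kill pair against every :app server explicitly,
  # instead of relying on capture's "first matching server" behaviour.
  find_servers(:roles => :app).each do |server|
    pid = capture("cat #{unicorn_pid}", :hosts => server.host).to_i
    run "kill -s QUIT #{pid}", :hosts => server.host if pid > 0
  end
end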

Related

execute bash script inside deploy.rb using capistrano

I am learning (by doing) Rails and Capistrano.
How can I execute a script inside deploy.rb?
I came across run(command), exec(command), execute: or run:.
I don't have to specify :db or :web, so I have the following backbone:
task :myTask do
  on roles(:app) do
    execute "bash myScript.sh"
    puts "#{:server} reports: #{myTask}"
  end
end
Is this correct?
Is the SSH part of the whole process, or do I have to ssh in the command?
How do people develop deploy.rb without running cap deploy every time they make a change?
Thank you!
Ruby allows you to run a shell command using backticks, for example:
output = `pwd`
puts "output is #{output}"
See https://ruby-doc.org/core-1.9.3/Kernel.html#method-i-60 for more.
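One caveat worth noting: backticks run the command on the machine that is executing cap, not on the remote server, so inside deploy.rb they are only useful for local steps. A quick sketch (the script name is just the one from the question, reused as a placeholder):

# Runs locally, on the machine invoking Capistrano -- not over SSH
output = `bash myScript.sh`
status = $?  # Process::Status of the last backtick command
puts "output: #{output}, exit code: #{status.exitstatus}"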
This is what worked for me:
role :app, 'user@domain1.com'

on roles(:app) do
  within 'remote_path' do
    execute 'bash', 'myScript.sh'
  end
end
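For context, a complete Capistrano 3 task around that snippet might look like the following. The task name and the 'remote_path' placeholder are assumptions, not part of the original answer:

namespace :deploy do
  desc 'Run myScript.sh on the app servers'
  task :run_script do
    on roles(:app) do              # Capistrano opens the SSH connections for you
      within 'remote_path' do      # cd into the directory holding the script
        execute 'bash', 'myScript.sh'
      end
    end
  end
end

It would be invoked with something like cap production deploy:run_script; the SSH is handled by Capistrano itself, so there is no need to ssh inside the command.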

Rails: managing multiple sidekiqs without upstart script

Once upon a time, I had one app - Cashyy. It used sidekiq. I deployed it and used this upstart script to manage sidekiq (start/restart).
I decided to deploy another app to the same server. The app (let's call it Giimee) also uses sidekiq.
And here is the issue. Sometimes I need to restart sidekiq for Cashyy, but not for Giimee. As I understand it, I will need to hack something together using the index option in the upstart script and when managing sidekiqs (sudo restart sidekiq index=1), if I understood it correctly.
BUT!
I have zero desire to dabble with these indexes (a nightmare to support? You need to know how many apps are using sidekiq, be sure to assign a unique index to each sidekiq, and know the assigned index if you want to restart a specific one).
So here is the question: how can I isolate each sidekiq (so I would not need to maintain the index) and still get the stability and usability of upstart (starting the process, restarting, etc.)?
Or maybe I don't understand something and the index approach is the state of the art?
You create two services:
cp sidekiq.conf /etc/init/cashyy.conf
cp sidekiq.conf /etc/init/giimee.conf
Edit each as necessary. sudo start cashyy, sudo stop giimee, etc. Now you'll have two completely separate Sidekiq processes running.
As an alternative to an upstart script, you can use Capistrano and Capistrano-Sidekiq to manage those Sidekiqs.
We have Sidekiq running on 3 machines and have had a good experience with these two libraries/tools.
Note: we currently use an older version of Capistrano (2.15.5)
In our architecture, the three machines are customized slightly on deploy. This led us to break up our capistrano deploy scripts by machine so that we could customize some classes, manage Sidekiq, etc. Our capistrano files are structured something like this:
- config/
  - deploy.rb
  - deploy/
    - gandalf.rb
    - gollum.rb
    - legolas.rb
With capistrano-sidekiq, we are able to control, well, Sidekiq :) at any time (during a deploy or otherwise). We set up the Sidekiq aspects of our deploy scripts in the following way:
# config/deploy.rb
# global sidekiq settings
set :sidekiq_default_hooks, false
set :sidekiq_cmd, "#{fetch(:bundle_cmd, 'bundle')} exec sidekiq"
set :sidekiqctl_cmd, "#{fetch(:bundle_cmd, 'bundle')} exec sidekiqctl"
set :sidekiq_role, :app
set :sidekiq_pid, "#{current_path}/tmp/pids/sidekiq.pid"
set :sidekiq_env, fetch(:rack_env, fetch(:rails_env, fetch(:default_stage)))
set :sidekiq_log, File.join(shared_path, 'log', 'sidekiq.log')
# config/deploy/gandalf.rb
# Custom Sidekiq settings
set :sidekiq_timeout, 30
set :sidekiq_processes, 1
namespace :sidekiq do
  # .. code omitted from methods and tasks for brevity
  def for_each_process(&block)
  end

  desc 'Quiet sidekiq (stop accepting new work)'
  task :quiet, :roles => lambda { fetch(:sidekiq_role) }, :on_no_matching_servers => :continue do
  end

  desc 'Stop sidekiq'
  task :stop, :roles => lambda { fetch(:sidekiq_role) }, :on_no_matching_servers => :continue do
  end

  desc 'Start sidekiq'
  task :start, :roles => lambda { fetch(:sidekiq_role) }, :on_no_matching_servers => :continue do
  end

  desc 'Restart sidekiq'
  task :restart, :roles => lambda { fetch(:sidekiq_role) }, :on_no_matching_servers => :continue do
  end
end
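Since sidekiq_default_hooks is disabled in deploy.rb above, the custom tasks still need to be triggered somewhere during a deploy. A minimal sketch using standard Capistrano 2 callbacks; the hook points here are an assumption, not the poster's actual configuration:

# Wire the custom tasks into the deploy flow (hook points are placeholders)
before 'deploy:update_code', 'sidekiq:quiet'    # stop accepting new work early
after  'deploy:restart',     'sidekiq:restart'  # bring workers back up on the new release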
When I need to restart one of my Sidekiq instances, I can just go to my terminal and execute the following:
$ bundle exec cap gandalf sidekiq:restart
$ bundle exec cap gollum sidekiq:stop
It's made Sidekiq management quite painless for our team, and we thought it would be worth sharing in case something similar could help you out.

launching background process in capistrano task

Capistrano task:
namespace :service do
  desc "start daemontools (svscan/supervise/svscanboot)"
  task :start, :roles => :app do
    sudo "svscanboot&"
  end
end
Now this doesn't work: the svscanboot process simply doesn't run.
This helped me find sleep: https://github.com/defunkt/resque/issues/284
Other sources pointed me to nohup, redirection, and :pty => true, so I tried all of these:
run "nohup svscanboot >/tmp/svscanboot.log 2>&1 &" # NO
run "(svscanboot&) && sleep 1" # NO
run "(nohup svscanboot&) && sleep 1" # YES!
Now, could anyone explain to me why I need the sleep statement and what difference nohup makes?
For the record, all of the above run equally well from a user shell; the problem only occurs in the context of Capistrano.
Thanks
Try forking the process as explained here: Spawn a background process in Ruby
You should be able to do something like this:
job1 = fork do
  run "svscanboot"
end
Process.detach(job1)
Also, check out this: Starting background tasks with Capistrano
My simple solution would be to make an svscanboot.sh file on the remote server with whatever code you want to run. In your case:
svscanboot >/tmp/svscanboot.log 2>&1
In the cap task, add this:
run "sh +x svscanboot.sh &"
This works well for me.
I think nohup keeps the process running after the session is closed, so you don't need to explicitly add the & at the end.
Did you try
run "nohup svscanboot >/tmp/svscanboot.log 2>&1"
(without the ending & to send it to the background).
That should work and remain running when your current capistrano session is closed.
Try this
run "nohup svscanboot >/tmp/svscanboot.log 2>&1 & sleep 5", pty: false
I'd like to share my solution which also works when executing multiple commands. I tried many other variants found online, including the "sleep N" hack.
run("nohup sh -c 'cd #{release_path} && bundle exec rake task_namespace:task_name RAILS_ENV=production > ~/shared/log/<rakelog>.log &' > /dev/null 2>&1", :pty => true)

Scaling out an app that hits external apis

I'm using beanstalkd to background-process API calls to the Facebook Graph API, and I want the app to stay updated, i.e. hit the Facebook API every 10 minutes to get the info. I thought about creating a simple script that loads the necessary info from the db (fb ids/urls), queues jobs in beanstalkd, and then sleeps for 9 minutes. Maybe use God to make sure the script keeps running and restarts it if memory consumption gets too big.
Then I started reading about DRb and wondered if there's a way/need to integrate the two.
I asked in #rubyonrails and got cron and a regular rb script as two options. Just wondering if there's a better way.
For simplicity of configuration, I would recommend using delayed_job and a cron job which calls a rake task that deals with queueing the jobs.
Monit is also a good alternative to God and seems to be more stable and less memory hungry for process monitoring.
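As a rough illustration of the cron + rake approach: the task name, model, and method below are hypothetical; only the delayed_job .delay API is real.

# lib/tasks/facebook_sync.rake (hypothetical names)
namespace :facebook do
  desc 'Queue background jobs that refresh data from the Graph API'
  task :queue_refresh => :environment do
    FbPage.find_each do |page|            # hypothetical model holding the fb ids/urls
      page.delay.refresh_from_graph_api   # delayed_job enqueues the method call
    end
  end
end

A crontab entry such as */10 * * * * cd /path/to/app && bundle exec rake facebook:queue_refresh RAILS_ENV=production would then enqueue the work every 10 minutes.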
For delayed_job, you need to add the following to your deploy script (assuming you plan to deploy with Capistrano):
namespace :delayed_job do
  def rails_env
    fetch(:rails_env, false) ? "RAILS_ENV=#{fetch(:rails_env)}" : ''
  end

  desc "Stop the delayed_job process"
  task :stop, :roles => :app do
    run "cd #{current_path};#{rails_env} script/delayed_job stop"
  end

  desc "Start the delayed_job process"
  task :start, :roles => :app do
    run "cd #{current_path};#{rails_env} script/delayed_job start"
  end

  desc "Restart the delayed_job process"
  task :restart, :roles => :app do
    run "cd #{current_path};#{rails_env} script/delayed_job stop"
    run "cd #{current_path};#{rails_env} script/delayed_job start"
  end
end
I had to extract these recipes from the delayed_job gem to get them to run.
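To have Capistrano manage the workers automatically during a deploy, the extracted tasks can be hooked into the standard callbacks. This is only a sketch; adjust the hook points to your own deploy flow:

# config/deploy.rb (sketch)
after 'deploy:stop',    'delayed_job:stop'
after 'deploy:start',   'delayed_job:start'
after 'deploy:restart', 'delayed_job:restart'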

Tail production log with Capistrano - how to stop it

I found this nifty code snippet on several sites, allowing me to analyze the production log via Capistrano:
desc "tail production log files"
task :tail_logs, :roles => :app do
run "tail -f #{shared_path}/log/production.log" do |channel, stream, data|
puts # for an extra line break before the host name
puts "#{channel[:host]}: #{data}"
break if stream == :err
end
end
It works perfectly well; however, when I'm finished reading the logs, I hit Ctrl+C and it produces a nasty error in my console. It's not a huge problem, but I find it annoying. What can I do so that no error is produced and the task/tail/log viewing just quietly ends?
Also, I'm not that familiar with how to analyze logs - is this really the best way to just have a quick look at the most recent events in your (remote production) log, or is there a better way? I know there are a gazillion tools for log analysis, but I want a dead-simple solution to see the last couple requests, not something bulky and complicated. I'm not sure if this Capistrano solution is really optimal though. Like, what's the solution most people use?
Try trap("INT") { puts 'Interupted'; exit 0; } like this:
desc "tail production log files"
task :tail_logs, :roles => :app do
trap("INT") { puts 'Interupted'; exit 0; }
run "tail -f #{shared_path}/log/production.log" do |channel, stream, data|
puts # for an extra line break before the host name
puts "#{channel[:host]}: #{data}"
break if stream == :err
end
end
I hope this helps.
This was pretty easy to find on a blog, but here is some code for Capistrano 3:
namespace :logs do
  desc "tail rails logs"
  task :tail_rails do
    on roles(:app) do
      execute "tail -f #{shared_path}/log/#{fetch(:rails_env)}.log"
    end
  end
end
I had issues with the rails_env variable, so I just replaced it, but it might be worth it to you to get it working, so I left it in.
I made one small change to Jeznet's great answer. If you run capistrano-ext with multiple environments like we do, you can have the RAILS_ENV automatically specified for you:
run "tail -f #{shared_path}/log/#{rails_env}.log" do |channel, stream, data|
I had a problem with the trap("INT") part. While it makes the script exit without errors, the tail processes were still running on the remote machines. I fixed it with this line:
trap("INT") { puts 'Interrupted'; run "killall -u myusername tail"; exit 0; }
Not elegant, but working for me.
I use the capistrano-rails-tail-log gem and everything is OK.
https://github.com/ayamomiji/capistrano-rails-tail-log
