Capistrano mystery! No error, but won't use latest release... - ruby-on-rails

I'm using Rails 3.2 with Capistrano 2.x, and I have a deploy script set up. All was working fine until a few days ago, when I noticed that our new code was not actually being used on production. It has definitely been merged into the correct branches on the project's GitHub repo, and in the output of cap deploy I even see it checking out the correct SHA reference for the latest commit. However, the files never actually update. My deploy script is below, along with some output from running cap deploy.
Deploy Script
require "bundler/capistrano"
# General
set :application, "foo"
set :domain, "foo.com"
set :user, "deploy"
set :runner, "deploy"
set :use_sudo, false
set :deploy_to, "/var/www/#{application}"
set :release_path, "/var/www/#{application}/current"
set :repository_cache, "#{application}_cache"
set :environment, "production"
# Roles
role :web, domain
role :app, domain
role :db, domain, :primary => true
# GIT
set :repository, "git#github.com:foo/bar.git"
set :branch, "release"
set :keep_releases, 3
set :deploy_via, :remote_cache
set :scm, :git
# SSH
default_run_options[:pty] = true
ssh_options[:forward_agent] = true
ssh_options[:paranoid] = true # comment out if it gives you trouble. newest net/ssh needs this set.
######## Callbacks - No More Config ########
before "deploy", "deploy:backup_db"
before "deploy", "deploy:copy_ck_editor_assets"
after 'deploy:create_symlink', 'deploy:cleanup' # makes sure there's only 3 deployments, deletes the extras
after "deploy", "deploy:migrate"
after "deploy", "deploy:put_back_ck_editor_assets"
# Custom Tasks
namespace :deploy do
  task(:start) {}
  task(:stop) {}

  desc "Restart Application"
  task :restart, :roles => :web, :except => { :no_release => true } do
    run "touch #{current_path}/tmp/restart.txt"
  end

  desc "Back up db"
  task :backup_db do
    puts "Backing up the database"
    run "cd #{release_path}; RAILS_ENV=production bundle exec rake db:data:backup"
    puts "Backed up DB"
  end

  desc "Copy CK Editor assets"
  task :copy_ck_editor_assets do
    puts "copying assets"
    run "cd #{release_path}; cp -R public/ckeditor_assets/ ~"
  end

  desc "Putting CK Editor assets back"
  task :put_back_ck_editor_assets do
    puts "putting assets back"
    run "cd #{release_path}; cp -R ~/ckeditor_assets public/"
  end
end
And I believe this is the relevant excerpt from the cap deploy output:
2014-01-09 23:14:11 executing `deploy:update_code'
updating the cached checkout on all servers
executing locally: "git ls-remote git#github.foo/bar.git release"
command finished in 1841ms
* executing "if [ -d /var/www/bar/shared/bar_cache ]; then cd /var/www/bar/shared/bar_cache && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard 4dbce9382fb4e9b532bf26c71a731e35b3970966 && git clean -q -d -x -f; else git clone -q -b release git#github.com:foo/bar.git /var/www/bar/shared/bar_cache && cd /var/www/bar/shared/bar_cache && git checkout -q -b deploy 4dbce9382fb4e9b532bf26c71a731e35b3970966; fi"
servers: ["bar.com"]
[bar.com] executing command
command finished in 3369ms
copying the cached version to /var/www/bar/current
* executing "cp -RPp /var/www/bar/shared/bar_cache /var/www/bar/current && (echo 4dbce9382fb4e9b532bf26c71a731e35b3970966 > /var/www/bar/current/REVISION)"
The SHA is correct, and no errors are being thrown anywhere in the deploy process. That SHA definitely has what I want on the git repo. But all I see in my releases folder is older stuff, and my current folder definitely has an old version (several days old).
Any thoughts here are much appreciated... I'm totally unsure how to go about solving this one, since there aren't really any errors, and it worked only several days ago with no changes to the deploy script.
Also, if it helps: I did chmod the main app directory (under which the releases and current folders live) to 755 a few days ago. That was to fix another issue we were having.
Thanks!!!

Related

Capistrano not restarting Sidekiq

I have Capistrano deploying my app to an Ubuntu remote server on a cloud host. It works, except that Sidekiq does not get restarted. After a deploy, new Sidekiq jobs are stuck in the queue until it does finally get restarted. I currently SSH into the machine manually and run sudo initctl stop/start workers, which works. I am not very strong with Capistrano, and my research so far has failed to find me a solution. I am hoping I am missing something obvious to someone more familiar with it than me. Here is the relevant portion of my /config/deploy.rb file:
namespace :deploy do
  namespace :sidekiq do
    task :quiet do
      on roles(:app) do
        puts capture("pgrep -f 'workers' | xargs kill -USR1")
      end
    end

    task :restart do
      on roles(:app) do
        execute :sudo, :initctl, :stop, :workers
        execute :sudo, :initctl, :start, :workers
      end
    end
  end

  after 'deploy:starting', 'sidekiq:quiet'
  after 'deploy:reverted', 'sidekiq:restart'
  after 'deploy:published', 'sidekiq:restart'
end
UPDATE
From my deploy logs:
DEBUG [268bc235] Running /usr/bin/env kill -0 $( cat /home/ubuntu/staging/shared/tmp/pids/sidekiq-0.pid ) as ubuntu@159.203.8.242
DEBUG [268bc235] Command: cd /home/ubuntu/staging/releases/20160806065537 && ( export RBENV_ROOT="$HOME/.rbenv" RBENV_VERSION="2.2.3" ; /usr/bin/env kill -0 $( cat /home/ubuntu/staging/shared/tmp/pids/sidekiq-0.pid ) )
DEBUG [268bc235] Finished in 0.471 seconds with exit status 1 (failed).
I don't believe you need those configs in your deploy.rb if you have the capistrano-sidekiq gem installed and called in your Capfile.
Make sure you have require 'capistrano/sidekiq' in your Capfile or it won't know to call the default tasks.
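For reference, a minimal Capfile along those lines might look like this (a sketch; exactly which hooks the gem wires up depends on the capistrano-sidekiq version you're on):
# Capfile (sketch) -- assumes the capistrano-sidekiq gem is in the Gemfile
require 'capistrano/setup'
require 'capistrano/deploy'
require 'capistrano/sidekiq' # registers the gem's sidekiq:quiet / sidekiq:restart tasks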

Capistrano 3 Deployment Initialization

I deploy Rails apps so infrequently that I always get into a head-butting contest with Capistrano when I do. Here, I have a repo on GitHub. I'm using Capistrano 3.2.1, and the relevant (i.e., non-boilerplate) part of my deploy.rb is this:
lock '3.2.1'
set :application, 'my_app'
set :scm, :git
set :repository, "git#github.com:my_github_user/my_app.git"
set :user, 'deploy'
set :deploy_to, "/home/deploy/rails_apps/my_app"
In config/deploy/production.rb I have this:
role :app, %w{deploy@my_domain.com}
role :web, %w{deploy@my_domain.com}
role :db, %w{deploy@my_domain.com}
I get hung up on the following error:
DEBUG[03378c05] Running /usr/bin/env git ls-remote -h on my_domain.com
DEBUG[03378c05] Command: ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/my_app/git-ssh.sh /usr/bin/env git ls-remote -h )
DEBUG[03378c05] usage: git ls-remote [--heads] [--tags] [-u <exec> | --upload-pack <exec>] <repository> <refs>...
DEBUG[03378c05] Finished in 0.165 seconds with exit status 129 (failed).
Note also that I am repeating strings like my_app. I used to be able to do:
set :repository, "git#github.com:my_github_user/#{application}.git"
but now I get an error that the property or method application is not found.
I know I am missing a step or steps. I have simply been unable to figure out what these steps are.
Any ideas?
Use set :repository, "git@github.com:my_github_user/#{fetch(:application)}.git".
Ok, I got it. Told you I always butt heads with Capistrano!
The :repository variable was changed to :repo_url (d'oh).
Using fetch as mentioned above works.
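Putting the two together, the working line in a Capistrano 3 deploy.rb would presumably be (sketch):
set :repo_url, "git@github.com:my_github_user/#{fetch(:application)}.git"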
You also need to add a line to your restart script as mentioned here:
task :restart do
  on roles(:app), in: :sequence, wait: 5 do
    # Your restart mechanism here, for calicowebdev:
    execute :mkdir, '-p', "#{release_path}/tmp"
  end
end
The mkdir part is what you need to add.

Chef deploy_resource private repo, ssh deploy keys and ssh_wrapper

I'm having loads of trouble getting my Chef recipe to clone a private repo. Well, I had it working yesterday but after 'cheffin' my Vagrant box half a dozen times, I've broken it. I'm a Chef newbie as you may guess.
Following the deploy_resource guide here, I've created my deploy.rb recipe (shortened):
deploy_branch "/var/www/html/ps" do
repo git#github.com:simonmorley/private-v2.git
ssh_wrapper "/tmp/.ssh/chef_ssh_deploy_wrapper.sh"
branch "rails4"
migrate false
environment "RAILS_ENV" => node[:ps][:rails_env]
purge_before_symlink %w{conf data log tmp public/system public/assets}
create_dirs_before_symlink []
symlinks( # the arrow is sort of reversed:
"conf" => "conf", # current/conf -> shared/conf
"data" => "data", # current/data -> shared/data
"log" => "log", # current/log -> shared/log
"tmp" => "tmp", # current/tmp -> shared/tmp
"system" => "public/system", # current/public/system -> shared/system
"assets" => "public/assets" # current/public/assets -> shared/assets
)
scm_provider Chef::Provider::Git # is the default, for svn: Chef::Provider::Subversion
notifies :restart, "service[ps]"
notifies :restart, "service[nginx]"
end
In defaults, I have the following to create the dirs etc.
directory "/tmp/.ssh" do
action :create
owner node[:base][:username]
group node[:base][:username]
recursive true
end
template "/tmp/.ssh/chef_ssh_deploy_wrapper.sh" do
source "chef_ssh_deploy_wrapper.sh.erb"
owner node[:base][:username]
mode 0770
end
# Put SSH private key to be used with SSH wrapper
template "/tmp/.ssh/id_deploy" do
source "id_rsa.pub.erb"
owner node[:base][:username]
mode 0600
end
And in the wrapper:
#!/bin/sh
exec ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i "/tmp/.ssh/id_deploy" "$@"
And I have created a public key and uploaded this to github.
When I deploy the recipe, it gives me an error:
deploy_branch[/var/www/html/ps] action deploy
Enter passphrase for key '/tmp/.ssh/id_deploy':
Obviously I don't have a passphrase set... so the private key must be missing.
Just by chance, I removed the id_deploy key from the recipe, deleted the folders, and ran it again. Lo and behold, it started working... the reason being that the id_rsa.pub and id_rsa files were still in /root/.ssh from when I had manually generated them to test.
I don't understand what I'm doing wrong here. My questions are therefore:
Do I need a private and public key on each node I deploy to? The docs don't mention this.
Should this not be deploying as a non-root user? I have set a user in my roles file.
Why is the ssh_wrapper not doing what it's supposed to?
It took a good couple of days to figure this out properly.
Just to clarify, this is what I did to fix it. I do not know if it's correct, but it works for me.
Generated a set of public and private keys, following this tutorial.
Added the public key to the GitHub repo that I wanted to clone.
Created a template in my default recipe which includes both the public and private keys. See below.
Created the relevant templates for the public and private keys.
Created the chef_ssh_deploy_wrapper.sh.erb file (see below).
Created a deploy.rb recipe (see below).
Uploaded and added the recipes to my role, then ran chef-client.
Hey presto! Sit back with a beer and watch your repo smartly cloned into your dir.
The templates are as follows:
Create the directories and templates:
template "/tmp/.ssh/chef_ssh_deploy_wrapper.sh" do
source "chef_ssh_deploy_wrapper.sh.erb"
owner node[:base][:username]
mode 0770
end
template "/home/#{node[:base][:username]}/.ssh/id_rsa.pub" do
source "id_rsa.pub.erb"
owner node[:base][:username]
mode 0600
end
template "/home/#{node[:base][:username]}/.ssh/id_rsa" do
source "id_rsa.erb"
owner node[:base][:username]
mode 0600
end
Create the SSH wrapper, chef_ssh_deploy_wrapper.sh.erb:
#!/bin/sh
exec ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -i "/home/<%= node[:base][:username] %>/.ssh/id_rsa" "$@"
(Make sure you use the private key here or it will fail)
Finally the deploy.rb recipe:
deploy_branch node[:my_app][:deploy_to] do
  repo node[:base][:repository]
  ssh_wrapper "/tmp/.ssh/chef_ssh_deploy_wrapper.sh"
  branch "rails4"
  user node[:base][:username]
  group node[:base][:username]
  rollback_on_error true
  migrate false
  environment "RAILS_ENV" => node[:my_app][:environment]
  purge_before_symlink %w{conf data log tmp public/system public/assets}
  create_dirs_before_symlink []
  symlinks(
    "config" => "config",
    "data"   => "data",
    "log"    => "log",
    "tmp"    => "tmp",
    "system" => "public/system",
    "assets" => "public/assets"
  )
  scm_provider Chef::Provider::Git # is the default; for svn: Chef::Provider::Subversion
  before_restart do
    system("su #{node[:base][:username]} -c 'cd #{node[:my_app][:deploy_to]}/current && /usr/bin/bundle install'") or raise "bundle install failed"
    system("su #{node[:base][:username]} -c 'RAILS_ENV=production /usr/local/bin/rake assets:precompile'")
  end
  notifies :restart, "service[my_app]"
  notifies :restart, "service[nginx]"
end
The before_restart block has since been replaced, as we were initially compiling Ruby from source but decided to use RVM in the end. Much easier for multi-user installations.
NB: I'm deploying as a sudo user; if you're deploying as root (avoid this), use the /root/.ssh path instead.
I took much inspiration from this article.
Good luck, I hope this helps someone.
Your question doesn't have a link to the deploy_resource source, so I can't be sure if this will apply, but if it uses a git resource underneath, the following might be helpful...
As described in this answer to a similar question, you can avoid creating extra script files to go with each SSH key by adding the SSH command as an "external transport" part of the repository URL:
git "/path/to/destination" do
repository "ext::ssh -i /path/to/.ssh/deployment_key -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no git#github.com %S /my_name/some_repo.git"
branch "master"
...
end

Capistrano deploy with gitosis and application server at the same server?

I set up a gitosis server and a staging server on the same VPS. Cloning the repository from gitosis works fine both on my local machine and on the staging server. But cap deploy from my local machine always asks me for a password, as shown below. I have no idea which password it wants, and every password I try fails.
I know I could copy the local repository with deploy_via: :copy, but I'd prefer to keep the gitosis server for the other projects too.
Any ideas? Thanks.
environment
gitosis and staging server IP: 106.187.xxx.xxx (some digits masked for security reasons)
log
* executing `deploy'
triggering before callbacks for `deploy'
* executing `check:revision'
* executing `deploy:update'
** transaction: start
* executing `deploy:update_code'
updating the cached checkout on all servers
executing locally: "git ls-remote gitosis#106.187.xxx.xxx:foo_project.git master"
command finished in 1105ms
* executing "if [ -d /home/deployer/apps/railsapp/shared/cached-copy ]; then cd /home/deployer/apps/railsapp/shared/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard 07827de89355c5366c4511ee22fdd9c68a31b0be && git clean -q -d -x -f; else git clone -q gitosis#106.187.xxx.xxx:foo_project.git /home/deployer/apps/railsapp/shared/cached-copy && cd /home/deployer/apps/railsapp/shared/cached-copy && git checkout -q -b deploy 07827de89355c5366c4511ee22fdd9c68a31b0be; fi"
servers: ["106.187.xxx.xxx"]
[106.187.xxx.xxx] executing command
** [106.187.xxx.xxx :: out] Password:
Password:
** [106.187.xxx.xxx :: out]
** [106.187.xxx.xxx :: out] Password:
Password:
** [106.187.xxx.xxx :: out]
** [106.187.xxx.xxx :: out] Password:
Password:
** [106.187.xxx.xxx :: out]
** [106.187.xxx.xxx :: out] Permission denied (publickey,keyboard-interactive).
** [106.187.xxx.xxx :: out] fatal: The remote end hung up unexpectedly
deploy.rb
server "106.187.xxx.xxx", :web, :app, :db, primary: true
set :application, "railsapp"
set :user, "deployer"
set :local_user, "joshchang"
set :deploy_to, "/home/#{user}/apps/#{application}"
set :use_sudo, false
set :rails_env, "stage"
set :scm, "git"
set :repository, "gitosis#106.187.xxx.xxx:foo_project.git"
set :deploy_via, :remote_cache
set :branch, "master"
default_run_options[:pty] = true
ssh_options[:forward_agent] = true
Sorry, a little hard to understand the question, but there are two ways to use git within capistrano. The first is to grant the server direct access to the git repository; on GitHub, for example, you have an option to install "deploy keys" -- the public key(s) of the server(s) that need access. So check if gitosis has this option.
But before you do, consider the other approach, which is to pass the git authorization of the user doing the deploy, so when you deploy, you pull as yourself, rather than instructing the server to do so. There are pros and cons to each method, but I think the second method is much easier to manage in the long run.
To use the second method, the machine of the person deploying (running Capistrano) needs to 1) have ssh-agent running, and 2) use ssh-add to load your key into the agent -- it's very secure, and once you have it set up, it's transparent.
When you use the second method, access to git will be the same as it is locally, so it shouldn't prompt for a password. Otherwise your settings in deploy.rb are fine as is.
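For the second method, the Capistrano side is just the PTY and agent-forwarding options the deploy.rb above already sets; the remaining step happens on the deploying machine. A minimal sketch, assuming OpenSSH locally (the key path is illustrative):
# deploy.rb -- reuse your local git credentials on the server during deploy
default_run_options[:pty]   = true
ssh_options[:forward_agent] = true
# Locally, before running `cap deploy` (shell steps shown as comments):
#   eval `ssh-agent`        # start the agent if it isn't already running
#   ssh-add ~/.ssh/id_rsa   # load the key that gitosis knows about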

Is there a way to use capistrano (or similar) to remotely interact with rails console

I'm loving how capistrano has simplified my deployment workflow, but often times a pushed change will run into issues that I need to log into the server to troubleshoot via the console.
Is there a way to use capistrano or another remote administration tool to interact with the rails console on a server from your local terminal?
Update:
cap shell seems promising, but it hangs when you try to start the console:
cap> cd /path/to/application/current
cap> pwd
** [out :: application.com] /path/to/application/current
cap> rails c production
** [out :: application.com] Loading production environment (Rails 3.0.0)
** [out :: application.com] Switch to inspect mode.
If you know a workaround for this, that'd be great.
I found a pretty nice solution, based on https://github.com/codesnik/rails-recipes/blob/master/lib/rails-recipes/console.rb
desc "Remote console"
task :console, :roles => :app do
env = stage || "production"
server = find_servers(:roles => [:app]).first
run_with_tty server, %W( ./script/rails console #{env} )
end
desc "Remote dbconsole"
task :dbconsole, :roles => :app do
env = stage || "production"
server = find_servers(:roles => [:app]).first
run_with_tty server, %W( ./script/rails dbconsole #{env} )
end
def run_with_tty(server, cmd)
# looks like total pizdets
command = []
command += %W( ssh -t #{gateway} -l #{self[:gateway_user] || self[:user]} ) if self[:gateway]
command += %W( ssh -t )
command += %W( -p #{server.port}) if server.port
command += %W( -l #{user} #{server.host} )
command += %W( cd #{current_path} )
# have to escape this once if running via double ssh
command += [self[:gateway] ? '\&\&' : '&&']
command += Array(cmd)
system *command
end
This is how I do it without Capistrano: https://github.com/mcasimir/remoting (a deployment tool built on top of rake tasks). I've added a task to the README to open a remote console on the server:
# remote.rake
namespace :remote do
  desc "Open rails console on server"
  task :console do
    require 'remoting/task'
    remote('console', config.login, :interactive => true) do
      cd config.dest
      source '$HOME/.rvm/scripts/rvm'
      bundle :exec, "rails c production"
    end
  end
end
Then I can run
$ rake remote:console
I really like the "just use the existing tools" approach displayed in this gist. It simply uses the SSH shell command instead of implementing an interactive SSH shell yourself, which may break any time irb changes it's default prompt, you need to switch users or any other crazy thing happens.
Not necessarily the best option, but I hacked the following together for this problem in our project:
task :remote_cmd do
  cmd = fetch(:cmd)
  puts `#{current_path}/script/console << EOF\r\n#{cmd}\r\n EOF`
end
To use it, I just use:
cap remote_cmd -s cmd="a = 1; b = 2; puts a+b"
(Note: if you use Rails 3, you will have to change script/console above to rails console; however, this has not been tested, since I don't use Rails 3 on our project yet.)
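For what it's worth, the Rails 3 flavour of that hack would presumably look like this (untested, per the note above):
task :remote_cmd do
  cmd = fetch(:cmd)
  # Rails 3 replaced script/console with `rails console`
  puts `cd #{current_path} && bundle exec rails console << EOF\n#{cmd}\nEOF`
end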
cap -T
cap invoke # Invoke a single command on the remote ser...
cap shell # Begin an interactive Capistrano session.
cap -e invoke
------------------------------------------------------------
cap invoke
------------------------------------------------------------
Invoke a single command on the remote servers. This is useful for performing
one-off commands that may not require a full task to be written for them. Simply
specify the command to execute via the COMMAND environment variable. To execute
the command only on certain roles, specify the ROLES environment variable as a
comma-delimited list of role names. Alternatively, you can specify the HOSTS
environment variable as a comma-delimited list of hostnames to execute the task
on those hosts, explicitly. Lastly, if you want to execute the command via sudo,
specify a non-empty value for the SUDO environment variable.
Sample usage:
$ cap COMMAND=uptime HOSTS=foo.capistano.test invoke
$ cap ROLES=app,web SUDO=1 COMMAND="tail -f /var/log/messages" invoke
The article http://errtheblog.com/posts/19-streaming-capistrano has a great solution for this. I just made a minor change so that it works in a multiple-server setup.
desc "open remote console (only on the last machine from the :app roles)"
task :console, :roles => :app do
server = find_servers_for_task(current_task).last
input = ''
run "cd #{current_path} && ./script/console #{rails_env}", :hosts => server.host do |channel, stream, data|
next if data.chomp == input.chomp || data.chomp == ''
print data
channel.send_data(input = $stdin.gets) if data =~ /^(>|\?)>/
end
end
The terminal you get is not really amazing, though. If someone has an improvement that would make CTRL-D, CTRL-H, or the arrow keys work, please post it.
