Capistrano error: "could not connect to ssh-agent: Agent not configured" - ruby-on-rails

I would like to deploy my app using Capistrano.
I'm able to connect to the server using ssh my_app_stag, but when I run docker-compose run web cap staging deploy I get this error:
D, [2018-09-21T11:46:40.858453 #1] DEBUG -- net.ssh.authentication.agent[ac989c]: connecting to ssh-agent
E, [2018-09-21T11:46:40.858662 #1] ERROR -- net.ssh.authentication.agent[ac989c]: could not connect to ssh-agent: Agent not configured
E, [2018-09-21T11:46:40.859037 #1] ERROR -- net.ssh.authentication.session[3f9c0e0cac1c]: all authorization methods failed (tried publickey)
(Backtrace restricted to imported tasks)
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as ubuntu@34.244.167.85: Authentication failed for user ubuntu@34.244.167.85
Caused by:
Net::SSH::AuthenticationFailed: Authentication failed for user ubuntu@34.244.167.85
Tasks: TOP => rvm:hook
(See full trace by running task with --trace)
I'm using a pem key to authenticate, located in .ssh/app_name.pem (this key was included in the repo when I cloned the app, so I did not generate it myself).
~/.ssh/config
Host my_app_stag
ForwardAgent yes
Hostname my_ip_address
User ubuntu
IdentityFile /Users/my_name/.ssh/app_name.pem
deploy.rb
lock '3.10.0'
set :rvm_ruby_version, '2.3.3'
set :default_stage, 'staging'
set :stages, %w(staging production)
set :application, 'app_name'
set :repo_url, 'git@github.com:my_account/app_name.git'
set :full_app_name, "#{fetch(:application)}"
set :user, 'ubuntu'
set :use_sudo, false
# Default branch is :master
set :branch, fetch(:branch, "master")
# Default deploy_to directory is /var/www/my_app_name
set :deploy_to, "/var/www/apps/#{fetch(:full_app_name)}"
# Default value for :scm is :git
set :ssh_options, {
auth_methods: %w[publickey],
keys: %w(~/.ssh/app_name.pem),
:verbose => :debug
}
set :use_agent, false
set :pty, true
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
set :linked_files, %w{config/database.yml config/unicorn_init.sh config/unicorn.rb log/session.secret}
set :linked_dirs, %w{log tmp/pids public/assets public/images/promotions public/images/logo public/images/offers public/images/vehicules public/import_logs sitemaps}
# Default value for keep_releases is 5
set :keep_releases, 5
set :bundle_bins, %w{gem rake ruby}
set(:config_files, %w(
nginx.conf
database.yml
unicorn.rb
unicorn_init.sh
))
set(:executable_config_files, %w(
unicorn_init.sh
))
set(:symlinks, [
{
source: "nginx.conf",
link: "/etc/nginx/sites-enabled/#{fetch(:full_app_name)}"
},
{
source: "unicorn_init.sh",
link: "/etc/init.d/unicorn_#{fetch(:full_app_name)}"
}
])
# set :linked_dirs, %w(public/system log tmp)
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp', 'vendor/bundle', 'public/system')
staging.rb
server 'ip_server', user: 'ubuntu', roles: %w(app db web), primary: true
set :stage, :staging
set :rails_env, 'staging'
set :branch, 'develop'
I tried this article but it still doesn't work.

Maybe add your SSH key to an ssh-agent instead of linking to the file?
ssh-add -K /Users/my_name/.ssh/app_name.pem
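Another option, since the log shows net-ssh falling back to an ssh-agent that does not exist inside the docker-compose container, is to tell net-ssh to use only the key file and never contact an agent. A minimal sketch for deploy.rb, assuming the pem key is readable at the given path inside the container (keys_only is a standard Net::SSH option that Capistrano forwards through ssh_options):
set :ssh_options, {
  keys: %w(/root/.ssh/app_name.pem), # assumption: path where the key is mounted inside the container
  keys_only: true,                   # use only the keys listed above, never an ssh-agent
  auth_methods: %w(publickey)
}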

Related

Rails app - Capistrano deployment failing

I have an existing Rails application that is set up to use Capistrano for deployments. I'm adding a Staging environment to it, but running bundle exec cap staging deploy returns an error:
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as deploy@[IP Redacted]: cat /var/www/staging-app-name/current/REVISION exit status: 1
cat /var/www/staging-app-name/current/REVISION stdout: Nothing written
cat /var/www/staging-app-name/current/REVISION stderr: cat: /var/www/staging-app-name/current/REVISION: No such file or directory
Versions:
Rails - 4.2.11
Ruby - 2.3.1
Capistrano - 3.15.0
deploy.rb:
set :stages, %w(production staging)
set :application, "application-name"
set :repo_url, "[Redacted]"
set :conditionally_migrate, !ENV['FIRST_RUN']
set :migration_role, :app
set :default_stage, "production"
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
set :linked_dirs, fetch(:linked_dirs, []).push(
'log',
'tmp/pids',
'tmp/cache',
'tmp/sockets',
'vendor/bundle',
'public/system',
'public/uploads',
'public/assets'
)
set :default_env, { path: "/opt/ruby_build/builds/2.3.1/bin:$PATH" }
deploy/staging.rb:
server '[IP redacted]', user: 'deploy', roles: %w{app db web}
set :branch, 'develop'
set :deploy_to, '/var/www/staging-app-name'
set :stage, 'staging'
set :rails_env, 'staging'
deploy/production.rb
server '[IP redacted]', user: 'deploy', roles: %w{app db web}
set :branch, 'master'
set :deploy_to, '/var/www/production-app-name'
set :stage, 'production'
set :rails_env, 'production'
Capfile
# Load DSL and set up stages
require "capistrano/setup"
# Include default deployment tasks
require "capistrano/deploy"
require "capistrano/scm/git"
install_plugin Capistrano::SCM::Git
require "capistrano/bundler"
require "capistrano/rails/migrations"
require "capistrano/conditional"
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }
This would be the first Staging deployment, so the staging-app-name directory is empty.
It's worth noting that Production deployments are working.
I've confirmed the directory/file permissions on the server are fine.
Any help would be appreciated!
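One thing that may help narrow it down: the failing command only reads current/REVISION, which does not exist until at least one release has been deployed and the current symlink created. A hedged diagnostic sketch (a hypothetical task, not part of the original setup) that reports whether each host already has a current release:
desc "Report whether a current release exists on each host"
task :check_current do
  on roles(:all) do |host|
    if test("[ -f #{current_path}/REVISION ]")
      info "#{host}: current revision #{capture(:cat, "#{current_path}/REVISION")}"
    else
      warn "#{host}: no #{current_path}/REVISION yet (first deploy?)"
    end
  end
end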

Capistrano/Rails not showing latest change

I am using Rails 4 with nginx and Passenger for my personal project. Today I decided to use Capistrano for deployment. My Capistrano config is working fine and I am able to deploy my application to production. After deploying I can see my changes in the current folder and the latest release folder. But I don't see the changes in the browser.
Let's say I have the following folder structure on my server after setting up Capistrano:
[1] app_name/app/views/finance/index.html
[2] app_name/releases/<latest_release>/app/views/finance/index.html
[3] app_name/current/app/views/finance/index.html
If I ssh into the server, I can see my code changes applied in [2] and [3], but the code is not updated in [1].
Below are snippets from my cap files:
production.rb
set :port, 22
set :user, 'deploy'
set :deploy_via, :remote_cache
set :use_sudo, false
server 'xx.xxx.x.xxx',
roles: [:web, :app, :db],
port: fetch(:port),
user: fetch(:user),
primary: true
set :deploy_to, "/var/www/app_name"
set :ssh_options, {
forward_agent: true,
auth_methods: %w(publickey),
user: 'deploy',
}
set :rails_env, :production
set :conditionally_migrate, true
deploy.rb
lock '3.4.0'
set :application, 'app_name'
set :repo_url, 'git@github.com:user_name/app_name.git'
# Default branch is :master
set :branch, 'master'
set :use_sudo, false
set :bundle_binstubs, nil
# Default value for :scm is :git
set :scm, :git
# Default value for :format is :pretty
set :format, :pretty
# Default value for :log_level is :debug
set :log_level, :debug
# Default value for :pty is false
set :pty, true
# Default value for :linked_files is []
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
# Default value for linked_dirs is []
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'vendor/bundle', 'public/system')
# Default value for keep_releases is 5
set :keep_releases, 5
set :keep_assets, 3
namespace :deploy do
task :restart do
on roles(:app) do
within release_path do
execute :touch, 'tmp/restart.txt'
end
end
end
end
Do I need to point my application server to the current directory?
I fixed the problem by telling nginx to point to the current/public folder:
root /var/www/app_name/current/public;

Authentication failed for user @domain.com - Capistrano & Rails

I am trying to set up Capistrano to deploy a website on a remote server. When I run the following command:
cap production deploy
I get the following error:
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing on host domain.com: Authentication failed for user @domain.com
Here is my deploy.rb file in the config directory of my Rails app:
...
set :application, 'my-app'
set :repo_url, 'git@bitbucket.org:karns/my-app.git'
set :deploy_to, "/var/www/my-app"
# Default value for :scm is :git
set :scm, :git
set :branch, 'master'
set :user, "deploy"
set :use_sudo, false
set :rails_env, "production"
set :deploy_via, :copy
server "domain.com", roles: [:app, :web, :db], :primary => true
...
To my understanding, deploy is supposed to be the user on the remote server.
What am I missing? Why does the error say "Authentication failed for user @domain.com"? Why wouldn't it say "deploy@domain.com"? If you need any other code, please let me know.
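For what it's worth, in Capistrano 3 the SSH username is normally taken from the server/role definition (or from :ssh_options), not from set :user on its own, which would explain the empty username in front of the @. A hedged sketch of the server line with the user attached, using the same syntax as the staging.rb examples elsewhere on this page:
# attach the user to the server definition so SSHKit connects as deploy@domain.com
server "domain.com", user: "deploy", roles: [:app, :web, :db], primary: true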

git exit status: 2 with capistrano

For a while I had exit status: 1 with Capistrano, and now I have it working. I deployed and things were going well, but then it got stuck as shown below.
Here is my deploy.rb file:
# config valid only for Capistrano 3.1
lock '3.2.1'
set :application, 'Joggleio'
set :repo_url, 'git@github.com:xxxxxx/xxxxxx.io.git'
# set :repo_url, 'git://github.com:xxxxxxxx/xxxxxxx.io.git'
# Default branch is :master
# ask :branch, proc { `git rev-parse --abbrev-ref HEAD`.chomp }.call
# Default deploy_to directory is /var/www/my_app
set :deploy_to, '/home/tristan/xxxxxxx'
# Default value for :scm is :git
set :scm, :git
set :branch, "production"
# Default value for :format is :pretty
set :format, :pretty
# Default value for :log_level is :debug
# set :log_level, :debug
# Default value for :pty is false
set :pty, true
# Default value for :linked_files is []
# set :linked_files, %w{config/database.yml}
# Default value for linked_dirs is []
# set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
# Default value for keep_releases is 5
set :keep_releases, 5
# User
set :user, "tristan"
set :use_sudo, false
# Rails env
set :rails_env, "production"
set :deploy_via, :remote_cache
set :ssh_options, { :forward_agent => true}
set :default_stage, "production"
# server "xx.xxx.xxx.xx", :app, :web, :db, :primary => true
namespace :deploy do
# before "deploy", "deploy:stop_dj"
desc 'Restart application'
task :restart do
sudo "service nginx restart"
run "RAILS_ENV=production rake assets:precompile"
run "RAILS_ENV=production bin/delayed_job -n2 restart"
# on roles(:app), in: :sequence, wait: 5 do
# # Your restart mechanism here, for example:
# # execute :touch, release_path.join('tmp/restart.txt')
# end
end
after :publishing, :restart
after :restart, "deploy:cleanup"
end
Not sure what is going wrong.
Just before that I get this:
Command: cd /home/tristan/xxxx/repo && ( GIT_ASKPASS=/bin/echo GIT_SSH=/tmp/xxxxx/git-ssh.sh /usr/bin/env git archive production | tar -x -C /home/tristan/xxxxx/releases/20140611234421 )
DEBUG[0dbe45c7] fatal: Not a valid object name
DEBUG[0dbe45c7] tar: This does not look like a tar archive
DEBUG[0dbe45c7] tar: Exiting with failure status due to previous errors
Any help is greatly appreciated!!
I just ran into the same problem. The root cause is that the branch you specified is not on the remote git repo yet. I'm using git flow, so I specify my branch as develop. After pushing this branch to the remote and deploying again, the problem is gone.
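In other words, git archive production fails with "Not a valid object name" because there is no production branch on the remote yet. A minimal sketch of the two usual fixes in deploy.rb, assuming you either push the branch first or deploy whatever is checked out locally (this mirrors the commented-out template line above):
# option 1: keep `set :branch, "production"` and run `git push origin production` first
# option 2: deploy the branch currently checked out locally
ask :branch, proc { `git rev-parse --abbrev-ref HEAD`.chomp }.call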

Why is Capistrano deploy giving No Matching Host for bundle exec rake db:migrate?

I am deploying with Capistrano and I am struggling to find out why it will not run migrations when I try to deploy the site.
Here is the whole error:
WARN [SKIPPING] No Matching Host for /usr/bin/env if test ! -d /home/deploy/forge_staging/releases/20140319132005; then echo "Directory does not exist '/home/deploy/forge_staging/releases/20140319132005'" 1>&2; false; fi
WARN [SKIPPING] No Matching Host for bundle exec rake db:migrate
Here's my setup:
deploy.rb
lock '3.1.0'
server "xxx.xxx.xxx.xxx"
set :application, "ForgeAndCo"
set :scm, "git"
set :repo_url, "my-repo"
# set :scm_passphrase, ""
set :user, "deploy"
set :use_sudo, false
set :ssh_options, {
forward_agent: true,
port: 14439
}
# files we want symlinking to specific entries in shared.
set :linked_files, %w{config/database.yml}
# dirs we want symlinking to shared
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
SSHKit.config.command_map[:rake] = "bundle exec rake" #8
SSHKit.config.command_map[:rails] = "bundle exec rails"
set :branch, ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"
set :keep_releases, 20
namespace :deploy do
desc 'Restart passenger without service interruption (keep requests in a queue while restarting)'
task :restart do
on roles(:app) do
execute :touch, release_path.join('tmp/restart.txt')
unless execute :curl, '-s -k --location localhost | grep "Forge" > /dev/null'
exit 1
end
end
end
after :finishing, "deploy:cleanup"
end
# start new deploy.rb stuff for the beanstalk repo
staging.rb
role :app, %w{deploy@xxx.xxx.xxx.xxx}
role :web, %w{deploy@xxx.xxx.xxx.xxx}
role :db, %w{deploy@xxx.xxx.xxx.xxx}
# Extended Server Syntax
# ======================
# This can be used to drop a more detailed server
# definition into the server list. The second argument
# something that quacks like a hash can be used to set
# extended properties on the server.
# server 'example.com', user: 'deploy', roles: %w{web app}, my_property: :my_value
set :stage, :staging
server "xxx.xxx.xxx.xxx", user: "deploy", roles: %w{web app db}
set :deploy_to, "/home/deploy/forge_staging"
set :rails_env, 'staging' # If the environment differs from the stage name
set :migration_role, 'migrator' # Defaults to 'db'
set :branch, ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"
Is this an issue with roles?
Remove from deploy.rb (add capistrano-bundler and require it in the Capfile):
server "xxx.xxx.xxx.xxx"
SSHKit.config.command_map[:rake] = "bundle exec rake"
SSHKit.config.command_map[:rails] = "bundle exec rails"
set :user, "deploy" # you have it in staging.rb
set :use_sudo, false # not used in cap3
Remove from staging.rb:
role :app, %w{deploy@xxx.xxx.xxx.xxx}
role :web, %w{deploy@xxx.xxx.xxx.xxx}
role :db, %w{deploy@xxx.xxx.xxx.xxx}
set :migration_role, 'migrator' # <= this is why you got the error
PS: edit your question and replace the IP with xxx.
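Conversely, if you do want to keep a dedicated migration role, at least one server definition has to carry that role, otherwise capistrano-rails finds no matching host for rake db:migrate and skips it with exactly that warning. A hedged sketch for staging.rb:
set :migration_role, 'migrator'
# the role must exist on a server, or the migration task has no matching host
server "xxx.xxx.xxx.xxx", user: "deploy", roles: %w{web app db migrator}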
