Trying to test Capistrano from scratch.
Capfile:
require 'capistrano/setup'
require 'capistrano/deploy'
I18n.enforce_available_locales = false
Dir.glob('lib/capistrano/tasks/*.rb').each { |r| import r }
deploy.rb:
role :testrole, 'x.x.x.x'
set :user, 'ubuntu'
The test.rb task:
namespace :test do
  desc "Uptime on servers"
  task :uptime do
    on roles(:testrole) do
      execute "uptime"
    end
  end
end
cap command:
cap production test:uptime
output:
INFO [c077da7f] Running /usr/bin/env uptime on x.x.x.x
DEBUG [c077da7f] Command: /usr/bin/env uptime
cap aborted!
Net::SSH::AuthenticationFailed
I don't have a problem logging in from the shell with the same user and key.
While logged in to the remote server, I can see in auth.log that an empty user is given when cap runs:
test-srv sshd[1459]: Invalid user from x.x.x.x
What am I missing?
Thanks!
If you take a look at their example code, supplied when you cap install your project, you'll see something like this in staging.rb and production.rb:
# Simple Role Syntax
# ==================
# Supports bulk-adding hosts to roles, the primary
# server in each group is considered to be the first
# unless any hosts have the primary property set.
# Don't declare `role :all`, it's a meta role
role :app, %w{deploy@example.com}
role :web, %w{deploy@example.com}
role :db, %w{deploy@example.com}
# Extended Server Syntax
# ======================
# This can be used to drop a more detailed server
# definition into the server list. The second argument,
# something that quacks like a hash, can be used to set
# extended properties on the server.
server 'example.com', user: 'deploy', roles: %w{web app}, my_property: :my_value
You'll either want to specify your user in one of those places, or use fetch(:user) to grab it programmatically at runtime. E.g.,
server 'example.com', user: fetch(:user), roles: %w{web app}, my_property: :my_value
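Applied to the original question, either of the following would put the user on the connection (a sketch, assuming the same ubuntu user and key that already work from the shell):
# in the stage file, e.g. config/deploy/production.rb
role :testrole, %w{ubuntu@x.x.x.x}
# or, keeping `set :user, 'ubuntu'` in deploy.rb:
server 'x.x.x.x', user: fetch(:user), roles: %w{testrole}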
I am preparing Capistrano to deploy a Ruby on Rails application to AWS. The application servers will be behind a bastion host.
I have two servers, server1 and server2. I want to deploy and run puma and nginx on server1, and run resque workers and the resque scheduler on server2. I know about roles, and here is my configuration so far:
# deploy/production.rb
web_instances = ['web-instance-ip']
worker_instances = ['worker-instance-ip']
role :app, web_instances
role :web, web_instances
role :worker, worker_instances
set :deploy_user, ENV['DEPLOY_USER'] || 'ubuntu'
set :branch, 'master'
set :ssh_options, {
  forward_agent: true,
  keys: ENV['SSH_KEY_PATH'],
  proxy: Net::SSH::Proxy::Command.new("ssh -i '#{ENV['SSH_KEY_PATH']}' #{fetch(:deploy_user)}@#{ENV['BASTIAN_PUBLIC_IP']} -W %h:%p"),
}
set :puma_role, :app
I am not sure how to write tasks so that puma start/restart happens only on server1 and resque and resque-scheduler start/restart happen only on server2, while common tasks such as pulling the latest code and bundle install run on every instance.
Let's assume you have defined the roles in the following manner:
role :puma_nginx_role, 'server1.com'
role :resque_role, 'server2.com'
Now define a task in your config/deploy.rb file, for example:
namespace :git do
  desc 'To push the code'
  task :push do
    on roles(:all) do
      execute :git, :push
    end
  end
end
Now, assuming the above example should run only on server1, all you have to do is:
namespace :git do
  desc 'To push the code'
  task :push do
    on roles(:puma_nginx_role) do
      execute :git, :push
    end
  end
end
This tells Capistrano that git:push should be executed on the :puma_nginx_role role, which in turn runs it on server1.com. Modify your puma/nginx/resque tasks the same way, scoping each one to the appropriate role.
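With that scoping in place, the task only ever connects to server1.com, whether you run cap production git:push from the shell or call it from another task (a sketch using Capistrano's invoke helper):
# inside some other Capistrano task
invoke 'git:push'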
This can be achieved by using roles to limit which servers each task runs on, plus hooks to trigger your custom tasks. Your deploy/production.rb file will look something like this:
web_instances = ['web-instance-ip']
worker_instances = ['worker-instance-ip']
role :app, web_instances
role :web, web_instances
role :worker, worker_instances
set :deploy_user, ENV['DEPLOY_USER'] || 'ubuntu'
set :branch, 'master'
set :ssh_options, {
  forward_agent: true,
  keys: ENV['SSH_KEY_PATH'],
  proxy: Net::SSH::Proxy::Command.new("ssh -i '#{ENV['SSH_KEY_PATH']}' #{fetch(:deploy_user)}@#{ENV['BASTIAN_PUBLIC_IP']} -W %h:%p"),
}
# This will run only on servers with the web role
namespace :puma do
  task :restart do
    on roles(:web) do |host|
      with rails_env: fetch(:rails_env) do
        # Your code to restart the puma server
      end
    end
  end
end
# This will run only on servers with the worker role
namespace :resque do
  task :restart do
    on roles(:worker) do |host|
      with rails_env: fetch(:rails_env) do
        # Your code to restart the resque workers
      end
    end
  end
end
after :deploy, 'puma:restart'
after :deploy, 'resque:restart'
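For the restart bodies themselves, a minimal sketch (assuming both processes run under systemd with hypothetical unit names, and that the deploy user may run systemctl via sudo) could replace the placeholder comments above:
# inside puma:restart
execute :sudo, :systemctl, :restart, 'puma.service'
# inside resque:restart
execute :sudo, :systemctl, :restart, 'resque.service'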
Check out the docs for more information about the commands and hooks you can use to set up your deployment.
I'm trying to deploy to our new production server. Capistrano can SSH into the server, yet the IP address listed below in the terminal output is an IP that we no longer use, and it appears nowhere in our Rails installation (we searched every file).
How do I get Capistrano to stop trying to access this IP? Where is it even coming from? Is there a Capistrano cache that exists that could be holding this address?
Terminal Output
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as vadmin@15.1.153.247: Net::SSH::ConnectionTimeout
Caused by:
Net::SSH::ConnectionTimeout: Net::SSH::ConnectionTimeout
Tasks: TOP => rvm:hook => passenger:rvm:hook => passenger:test_which_passenger
Capistrano Log Output
INFO START 2018-08-17 18:20:21 -0600 cap production doctor
INFO ---------------------------------------------------------------------------
DEBUG [0244ff8e] Running /usr/bin/env which passenger as vadmin@15.153.1.30
DEBUG [e9aebdc9] Running /usr/bin/env which passenger as vadmin@15.1.153.247
DEBUG [0244ff8e] Command: ( export RVM_BIN_PATH="~/.rvm/bin" ; /usr/bin/env which passenger )
DEBUG [e9aebdc9] Command: ( export RVM_BIN_PATH="~/.rvm/bin" ; /usr/bin/env which passenger )
deploy.rb
# config valid for current version and patch releases of Capistrano
lock "~> 3.10.0"
set :application, "<omitted>"
set :repo_url, "ssh://<omitted>"
# Default branch is :master
# ask :branch, `git rev-parse --abbrev-ref HEAD`.chomp
# Default deploy_to directory is /var/www/my_app_name
set :deploy_to, "/home/vadmin/<omitted>"
# Default value for :format is :airbrussh.
# set :format, :airbrussh
# You can configure the Airbrussh format using :format_options.
# These are the defaults.
# set :format_options, command_output: true, log_file: "log/capistrano.log", color: :auto, truncate: :auto
# Default value for :pty is false
# set :pty, true
# Default value for :linked_files is []
append :linked_files, "config/database.yml", "config/secrets.yml"
# Default value for linked_dirs is []
append :linked_dirs, 'log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'vendor/bundle', '.bundle', 'public/system', 'public/uploads'
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
set :default_env, { rvm_bin_path: '~/.rvm/bin' }
# Default value for local_user is ENV['USER']
# set :local_user, -> { `git config user.name`.chomp }
# Default value for keep_releases is 5
set :keep_releases, 10
# set migration role to :app instead of :db
set :migration_role, :app
# Uncomment the following to require manually verifying the host key before first deploy.
# set :ssh_options, verify_host_key: :secure
production.rb
# server-based syntax
# ======================
# Defines a single server with a list of roles and multiple properties.
# You can define all roles on a single server, or split them:
server "15.153.1.30", user: "vadmin", roles: %w{app db web}
# server "example.com", user: "deploy", roles: %w{app web}, other_property: :other_value
# server "db.example.com", user: "deploy", roles: %w{db}
set :stage, :production
# role-based syntax
# ==================
# Defines a role with one or multiple servers. The primary server in each
# group is considered to be the first unless any hosts have the primary
# property set. Specify the username and a domain or IP for the server.
# Don't use `:all`, it's a meta role.
# role :app, %w{deploy@example.com}, my_property: :my_value
# role :web, %w{user1@primary.com user2@additional.com}, other_property: :other_value
# role :db, %w{deploy@example.com}
# Configuration
# =============
# You can set any configuration variable like in config/deploy.rb
# These variables are then only loaded and set in this stage.
# For available Capistrano configuration variables see the documentation page.
# http://capistranorb.com/documentation/getting-started/configuration/
# Feel free to add new variables to customise your setup.
# Custom SSH Options
# ==================
# You may pass any option but keep in mind that net/ssh understands a
# limited set of options, consult the Net::SSH documentation.
# http://net-ssh.github.io/net-ssh/classes/Net/SSH.html#method-c-start
#
# Global options
# --------------
# set :ssh_options, {
# keys: %w(/home/rlisowski/.ssh/id_rsa),
# forward_agent: false,
# auth_methods: %w(password)
# }
#
# The server-based syntax can be used to override options:
# ------------------------------------
# server "example.com",
# user: "user_name",
# roles: %w{web app},
# ssh_options: {
# user: "user_name", # overrides user setting above
# keys: %w(/home/user_name/.ssh/id_rsa),
# forward_agent: false,
# auth_methods: %w(publickey password)
# # password: "please use keys"
# }
gems
gem 'capistrano', '~> 3.10'
gem 'capistrano-rails', '~> 1.3'
gem 'capistrano-rvm'
gem 'capistrano-passenger'
You need to find the cached repository directory on your server and delete it; it will be recreated on the next deployment.
In Capistrano 3 that directory is $deploy_to/repo, so remove it there.
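If you would rather do this through Capistrano than by hand, a throwaway task along these lines works (a sketch; in Capistrano 3 the DSL helper repo_path resolves to $deploy_to/repo):
namespace :deploy do
  desc 'Remove the cached bare repo so it is re-cloned on the next deploy'
  task :clear_repo_cache do
    on roles(:all) do
      execute :rm, '-rf', repo_path
    end
  end
end
Run it once (e.g. cap production deploy:clear_repo_cache), then deploy as usual.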
I can't figure out how to solve this problem. Capistrano isn't working correctly, so I can't deploy my app.
Here's the error.
$ bundle exec cap staging deploy
(Backtrace restricted to imported tasks)
cap aborted!
Net::SSH::AuthenticationFailed: Authentication failed for user ec2-user@13.112.91.105
Here's the config file, config/deploy.rb:
# config valid only for Capistrano 3.1
lock '3.5.0'
set :application, 'dola'
set :repo_url, 'git@ghe.intelligence-dev.com/inolab/eiicon-dola.git'
# Default branch is :master
# ask :branch, proc { `git rev-parse --abbrev-ref HEAD`.chomp }.call
set :branch, 'master'
# Default deploy_to directory is /var/www/my_app
set :deploy_to, '/var/www/dola'
# Default value for keep_releases is 5
# set :keep_releases, 5
set :rbenv_type, :user
set :rbenv_ruby, '2.3.2-p217'
set :rbenv_map_bins, %w{rake gem bundle ruby rails}
set :rbenv_roles, :all
set :linked_dirs, %w{bin log tmp/backup tmp/pids tmp/cache tmp/sockets vendor/bundle}
role :web, %w{13.112.91.105}
namespace :deploy do
  desc 'Restart application'
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      # Your restart mechanism here, for example:
      # execute :touch, release_path.join('tmp/restart.txt')
    end
  end
  after :publishing, :restart
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
      # Here we can do anything such as:
      # within release_path do
      #   execute :rake, 'cache:clear'
      # end
    end
  end
end
And here's config/deploy/staging.rb
# Simple Role Syntax
# ==================
# Supports bulk-adding hosts to roles, the primary server in each group
# is considered to be the first unless any hosts have the primary
# property set. Don't declare `role :all`, it's a meta role.
role :app, %w{ec2-user@13.112.91.105}
role :web, %w{ec2-user@13.112.91.105}
# Extended Server Syntax
# ======================
# This can be used to drop a more detailed server definition into the
# server list. The second argument is a, or duck-types, Hash and is
# used to set extended properties on the server.
server '13.112.91.105', user: 'ec2-user', roles: %w{web app}, my_property: :my_value
# Custom SSH Options
# ==================
set :stage, :staging
set :rails_env, 'staging'
server '13.112.91.105', user: 'ec2-user',
roles: %w{web app}
set :ssh_options, {
  keys: [File.expand_path('~/.ssh/id_rsa_ec2.pem')]
}
Anyone, please!
Capistrano is trying to establish an SSH session between your computer and the machine to which you are trying to deploy your application - 13.112.91.105 in this case. In order to do that, given your Capistrano configuration, you need to be able to authenticate to the SSH server that is running on 13.112.91.105 as the user ec2-user using your SSH private key, which I'm assuming is ~/.ssh/id_rsa_ec2.pem. For this to happen, your corresponding SSH public key must be listed in the authorized_keys file for ec2-user on the machine 13.112.91.105.
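You can reproduce exactly what Capistrano attempts, outside of Capistrano, with a few lines of Net::SSH (a sketch using the key path from your staging.rb):
require 'net/ssh'

# Prints "authenticated" on success; raises Net::SSH::AuthenticationFailed otherwise
Net::SSH.start('13.112.91.105', 'ec2-user',
               keys: [File.expand_path('~/.ssh/id_rsa_ec2.pem')],
               auth_methods: %w[publickey]) do |ssh|
  puts ssh.exec!('echo authenticated')
end
If this fails too, the problem is on the server side (the matching public key is missing from ~ec2-user/.ssh/authorized_keys, or the key or directory permissions are too open) rather than in the Capistrano configuration.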
I started using Capistrano to deploy my Rails application to different remote servers, however, deploying to a server using cap production deploy sets my RAILS_ENV to deployment instead of production. I have tried forcing the environment by adding ENV['RAILS_ENV'] ||= 'production' to the environment.rb, but that doesn't seem to fix the problem. I checked the production.log for Passenger, Apache, and Rails and nothing seems to be wrong, except for the incorrect environment deployment. What could be wrong with my Capistrano deployment?
production.rb
role :app, %w{deployer@*****}
role :web, %w{deployer@*****}
role :db, %w{deployer@*****}
# Define server(s)
server '*****', user: 'deployer', roles: %w{web}
# SSH Options
# See the example commented out section in the file
# for more options.
set :ssh_options, {
forward_agent: false,
auth_methods: %w(password),
password: '******',
user: 'deployer',
}
deploy.rb
# Define the name of the application
set :application, 'app_pro'
# Define where can Capistrano access the source repository
# set :repo_url, 'https://github.com/[user name]/[application name].git'
set :scm, :git
set :repo_url, 'https://github.com/awernick/app_pros.git'
# Define where to put your application code
set :deploy_to, "/var/sentora/hostdata/zadmin/public_html/app_dir"
set :pty, true
set :format, :pretty
# Set the post-deployment instructions here.
# Once the deployment is complete, Capistrano
# will begin performing them as described.
# To learn more about creating tasks,
# check out:
# http://capistranorb.com/
# namespace :deploy do
# desc 'Restart application'
# task :restart do
# on roles(:app), in: :sequence, wait: 5 do
# # Your restart mechanism here, for example:
# execute :touch, release_path.join('tmp/restart.txt')
# end
# end
# after :publishing, :restart
# after :restart, :clear_cache do
# on roles(:web), in: :groups, limit: 3, wait: 10 do
# # Here we can do anything such as:
# # within release_path do
# # execute :rake, 'cache:clear'
# # end
# end
# end
# end
Capfile
# Load DSL and set up stages
require 'capistrano/setup'
# Include default deployment tasks
require 'capistrano/deploy'
# Include tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
# https://github.com/capistrano/passenger
#
# require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
require 'capistrano/bundler'
require 'capistrano/rails'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
require 'capistrano/passenger'
# Load custom tasks from `lib/capistrano/tasks' if you have any defined
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
*The fields in the files are filled out with the correct information.
I could be wrong, but normally Capistrano, as long as it does not have any special plugins for Apache or Nginx, deploys the code as-is; your problem appears to come from the Passenger configuration. It could be that Passenger is running the app under the wrong environment. I don't remember how it is with Apache, but with nginx you have to make sure the line
...
passenger_app_env production;
...
is inside /opt/nginx/conf/nginx.conf.
Maybe this could help you with setting up Apache config:
https://www.phusionpassenger.com/documentation/Users%20guide%20Apache.html#PassengerAppEnv
In production.rb you should have:
set :stage, :production
Some say this option will not work in v3 (I am using v3 and set :stage works for me), but you might want to read this in case setting the stage doesn't work:
http://dylanmarkow.com/blog/2014/01/08/capistrano-3-setting-a-default-stage/
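If the stage name alone is not being picked up, you can also pin the Rails environment explicitly in the stage file (a sketch; capistrano-rails normally defaults :rails_env to the stage name, which is why the generated config only sets it when the two differ):
# config/deploy/production.rb
set :stage, :production
set :rails_env, 'production'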
I was able to resolve my problem. It was occurring because I forgot to add my production secret_key_base as an environment variable on my production server.
Having set up Capistrano on my Rails app, I was deploying fine. I then changed some CSS values on my local site. When I next deployed with cap staging deploy, the normal routine tasks ran just fine, but Capistrano's asset precompile task failed. I have managed to trace where it goes wrong in the Capistrano code, and it's here:
task :backup_manifest do
  on roles(fetch(:assets_roles)) do
    within release_path do
      execute :cp,
        release_path.join('public', fetch(:assets_prefix), 'manifest*'),
        release_path.join('assets_manifest_backup')
    end
  end
end
task :restore_manifest do
  on roles(fetch(:assets_roles)) do
    within release_path do
      source = release_path.join('assets_manifest_backup')
      target = capture(:ls, release_path.join('public', fetch(:assets_prefix),
        'manifest*')).strip
      if test "[[ -f #{source} && -f #{target} ]]"
        execute :cp, source, target
      else
        msg = 'Rails assets manifest file (or backup file) not found.'
        warn msg
        fail Capistrano::FileNotFound, msg
      end
    end
  end
end
It fails at the within release_path do line, as that's what's in the stack trace, but I do not know why, as I have not changed any tasks at all, just CSS tweaks.
Here are my deployment settings for capistrano:
deploy.rb
lock '3.1.0'
server "188.226.182.102"
set :application, "ForgeAndCo"
set :scm, "git"
set :repo_url, "git#made-by-mark.beanstalkapp.com:/made-by-mark/forge.git"
# set :scm_passphrase, ""
set :user, "deploy"
set :use_sudo, false
set :ssh_options, {
forward_agent: true,
port: 14439
}
set :assets_prefix, 'prepackaged-assets'
# files we want symlinking to specific entries in shared.
set :linked_files, %w{config/database.yml}
# dirs we want symlinking to shared
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
SSHKit.config.command_map[:rake] = "bundle exec rake" #8
SSHKit.config.command_map[:rails] = "bundle exec rails"
set :branch, ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"
set :keep_releases, 20
namespace :deploy do
  desc 'Restart passenger without service interruption (keep requests in a queue while restarting)'
  task :restart do
    on roles(:app) do
      execute :touch, release_path.join('tmp/restart.txt')
      unless execute :curl, '-s -k --location localhost | grep "Forge" > /dev/null'
        exit 1
      end
    end
  end
end
after 'deploy:publishing', 'deploy:restart'
deploy/staging.rb
role :app, %w{deploy@188.226.182.102}
role :web, %w{deploy@188.226.182.102}
role :db, %w{deploy@188.226.182.102}
# Extended Server Syntax
# ======================
# This can be used to drop a more detailed server
# definition into the server list. The second argument,
# something that quacks like a hash, can be used to set
# extended properties on the server.
# server 'example.com', user: 'deploy', roles: %w{web app}, my_property: :my_value
set :stage, :staging
server "188.226.182.102", user: "deploy", roles: %w{web app db}
set :deploy_to, "/home/deploy/forge_staging"
set :rails_env, 'staging' # If the environment differs from the stage name
set :migration_role, 'migrator' # Defaults to 'db'
set :assets_roles, [:web, :app] # Defaults to [:web]
set :assets_prefix, 'prepackaged-assets' # Defaults to 'assets' this should match config.assets.prefix in your rails config/application.rb
set :branch, ENV["REVISION"] || ENV["BRANCH_NAME"] || "master"
Does anyone know why it would fail at this task at all?