Rails app - Capistrano deployment failing - ruby-on-rails

I have an existing Rails application that is set up to use Capistrano for deployments. I'm adding a Staging environment to it, but running bundle exec cap staging deploy returns an error:
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as deploy@[IP Redacted]: cat /var/www/staging-app-name/current/REVISION exit status: 1
cat /var/www/staging-app-name/current/REVISION stdout: Nothing written
cat /var/www/staging-app-name/current/REVISION stderr: cat: /var/www/staging-app-name/current/REVISION: No such file or directory
Versions:
Rails - 4.2.11
Ruby - 2.3.1
Capistrano - 3.15.0
Deploy.rb:
set :stages, %w(production staging)
set :application, "application-name"
set :repo_url, "[Redacted]"
set :conditionally_migrate, !ENV['FIRST_RUN']
set :migration_role, :app
set :default_stage, "production"
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
set :linked_dirs, fetch(:linked_dirs, []).push(
  'log',
  'tmp/pids',
  'tmp/cache',
  'tmp/sockets',
  'vendor/bundle',
  'public/system',
  'public/uploads',
  'public/assets'
)
set :default_env, { path: "/opt/ruby_build/builds/2.3.1/bin:$PATH" }
deploy/staging.rb:
server '[IP redacted]', user: 'deploy', roles: %w{app db web}
set :branch, 'develop'
set :deploy_to, '/var/www/staging-app-name'
set :stage, 'staging'
set :rails_env, 'staging'
deploy/production.rb:
server '[IP redacted]', user: 'deploy', roles: %w{app db web}
set :branch, 'master'
set :deploy_to, '/var/www/production-app-name'
set :stage, 'production'
set :rails_env, 'production'
Capfile
# Load DSL and set up stages
require "capistrano/setup"
# Include default deployment tasks
require "capistrano/deploy"
require "capistrano/scm/git"
install_plugin Capistrano::SCM::Git
require "capistrano/bundler"
require "capistrano/rails/migrations"
require "capistrano/conditional"
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }
This would be the first Staging deployment, so the staging-app-name directory is empty.
It's worth noting that Production deployments are working.
I've confirmed the directory/file permissions on the server are fine.
Any help would be appreciated!
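For anyone comparing notes: the Capfile above pulls in capistrano/conditional, and that kind of conditional logic needs to read current/REVISION to learn which revision is currently deployed, which is impossible before the first release exists. A minimal sketch (an assumption drawn from the config above, not a confirmed diagnosis) is to switch the conditional checks off for the first staging run only, either through the existing FIRST_RUN guard in deploy.rb or directly in the stage file:
# deploy/staging.rb (sketch only; re-enable after the first successful deploy,
# or rely on the FIRST_RUN environment variable already used in deploy.rb)
set :conditionally_migrate, false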

Related

How to separate rake tasks between 2 servers

I have two different remote servers for my Rails application, and I deploy with Capistrano. The first is staging, the second is production. Each server starts the Puma server with its own command, so on my local machine I use the following:
STAGING:
run("bundle exec puma -C /var/www/snapship/shared/config/puma.rb --daemon")
PROD:
sudo :systemctl, "start puma.target"
When I deploy my application I run cap staging deploy or cap production deploy. How can I let the Rails app know which Puma start command it should use? I can't check Rails.env on my local machine (it is always development).
UPDATE: (Capfile)
require "capistrano/setup"
require "capistrano/deploy"
require "capistrano/rails/assets"
require "capistrano/rails/migrations"
require "capistrano/rbenv"
require "capistrano/bundler"
require "capistrano/puma"
require "capistrano/puma/nginx"
require "capistrano/puma/jungle"
require "sshkit/sudo"
require "whenever/capistrano"
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }
Deploy.rb
lock '3.5.0'
set :application, 'snapship'
set :pty, true
set :repo_url, 'git@gitlab.com:snapship/snapship-backend.git'
set :user, ENV['USER'] || 'deploy'
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml', 'config/puma.rb', '.env', '.rbenv-vars')
set :linked_dirs, %w(tmp/pids tmp/sockets log public/uploads)
# rbenv plugin setup
set :rbenv_type, :user
set :rbenv_ruby, File.read('.ruby-version').strip
set :rbenv_prefix, "RBENV_ROOT=#{fetch(:rbenv_path)} RBENV_VERSION=#{fetch(:rbenv_ruby)} #{fetch(:rbenv_path)}/bin/rbenv exec"
set :rbenv_map_bins, fetch(:rbenv_map_bins, []).push('foreman')
# bundle plugin setup
set :bundle_bins, fetch(:bundle_bins, []).push('foreman')
# puma plugin setup
set :puma_preload_app, true
set :puma_init_active_record, true
set :puma_conf, "#{shared_path}/config/puma.rb"
before :deploy, "deploy:run_tests"
after "deploy:publishing", "foreman:export"
after "deploy:publishing", "systemd:update"
after "deploy:publishing", "systemd:enable"
set :migration_servers, -> { release_roles(fetch(:migration_role)) }
Also 2 files staging and production for each deploy.
What worked for me:
# in deploy.rb
# define stages
set :stages, %w(staging production)
# then get the current stage
set :my_var_env, Proc.new { fetch :stage }
then run cap staging deploy
I also use the gem 'capistrano3-puma' which gives me tasks like
cap staging|production puma:[start|stop|restart|status]
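To make that concrete: because fetch(:stage) is available inside Capistrano tasks, the restart logic can branch on the stage name. A rough sketch (the task name, file name, and hook are illustrative, not taken from the original post):
# lib/capistrano/tasks/puma_control.rake (illustrative sketch)
namespace :app do
  desc 'Start or restart Puma with the per-stage command'
  task :restart_puma do
    on roles(:app) do
      if fetch(:stage).to_s == 'production'
        # the production box manages Puma through systemd
        # (the question uses `start`; `restart` is the usual choice after a deploy)
        execute :sudo, :systemctl, 'restart', 'puma.target'
      else
        # the staging box runs Puma directly from the shared config
        within current_path do
          execute :bundle, :exec, :puma, '-C', "#{shared_path}/config/puma.rb", '--daemon'
        end
      end
    end
  end
end
after 'deploy:publishing', 'app:restart_puma'
With something like this in place, cap staging deploy and cap production deploy each pick the right Puma command without the app having to inspect Rails.env locally.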

Gems are not being installed from capistrano deployment

I am using:
rails 4.2
unicorn server
nginx web server
capistrano for deployment.
When I add a new gem to the Gemfile, it is not reflected in the application. I tried to check for the gem using Gem.loaded_specs["koala"].full_gem_path, but it does not show up anywhere. I can see the gem being bundled in the log and the deployment completes successfully, but in between I can see one error in the Capistrano logs.
NOTE: Bundler is already installed.
cd /home/deploy/bloom/releases/20170516105043 && RAILS_ENV=dev bundle exec honeybadger deploy --environment dev --revision 08e4726 --repository git@bitbucket.org:appster/bloom-ruby.git --user arvindmehra
DEBUG[1450b9f0] **bash: bundle: command not found**
Here is my capfile:
require 'capistrano/setup'
require 'capistrano/deploy'
require 'capistrano/bundler'
require 'capistrano/honeybadger'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
require 'capistrano/rvm'
require 'whenever/capistrano'
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
Here is my deployment environment script from dev.rb
set :branch, 'dev'
set :keep_releases, 3
server '66.128.61.239',
  user: 'deploy',
  roles: %w{web app db},
  ssh_options: {
    user: 'deploy', # overrides user setting above
    keys: %w(~/.ssh/id_rsa),
    forward_agent: false,
    #auth_methods: %w(publickey)
    password: 'password'
  }
namespace :deploy do
  %w[start stop restart].each do |command|
    desc "#{command} unicorn server"
    task command do
      on roles(:app), in: :sequence, wait: 1 do
        execute "/etc/init.d/bloom-ruby #{command}"
      end
    end
  end
  after :publishing, :restart
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
      # Here we can do anything such as:
      # within release_path do
      #   execute :rake, 'cache:clear'
      # end
    end
  end
end
Here is my deploy.rb
# config valid only for current version of Capistrano
lock '3.3.3'
set :application, 'bloom'
set :repo_url, 'git@bitbucket.org:appster/bloom-ruby.git'
set :deploy_to, '/home/deploy/bloom'
#set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
# Define which type of RVM the server is using
set :rvm_type, :user
set :rvm_ruby_version, '2.2.2#bloom'
# Default value for :linked_files is []
set :linked_files, %w{config/database.yml config/secrets.yml config/settings.yml config/providers.yml config/stripe.yml}
# Default value for linked_dirs is []
set :linked_dirs, %w{log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system public/identicons public/uploads public/images}
Deployment is in production mode.
RAILS_ENV=production bundle install
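One thing worth checking (an assumption, not a confirmed diagnosis): the failing command is the Honeybadger deploy notification, and it is prefixed with bundle exec but apparently not with the RVM wrapper, so bundle is missing from that non-interactive shell's PATH. capistrano/rvm only wraps binaries listed in rvm_map_bins, so a sketch of a fix would be to map the honeybadger binary as well:
# deploy.rb (sketch; rvm_map_bins is provided by capistrano/rvm and
# bundle_bins by capistrano/bundler)
set :rvm_map_bins, fetch(:rvm_map_bins, []).push('honeybadger')
set :bundle_bins, fetch(:bundle_bins, []).push('honeybadger')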

how to start puma server after code is deployed using capistrano on ec2

I have set up the Capistrano script to deploy to staging. I can't find a way to restart the Puma server once the deployment completes, or to restart it if the server is rebooted for any reason.
I am using Rails 4.2 and Ubuntu 16.04 on an EC2 server. I tried an upstart script with puma-manager, but I don't think it is supported on Ubuntu 16.04.
I followed this link for puma-manager: http://blog.peterkw.me/automatic-start-for-puma-rails-and-postgresql/
my deploy.rb file is
lock "3.8.0"
set :application, 'pb-ruby'
set :repo_url, 'git@bitbucket.org:url/pb-ruby.git' # Edit this to match your repository
set :branch, :staging_new
set :stages, %w(staging dev_org)
set :default_stage, "dev_org"
set :deploy_to, '/home/pb/pb-ruby'
set :pty, true
set :linked_files, %w{config/database.yml}
set :linked_dirs, %w{log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system public/uploads}
set :bundle_binstubs, nil
set :keep_releases, 5
set :rvm_type, :user
set :rvm_ruby_version, '2.3.0' # Edit this if you are using MRI Ruby
set :puma_rackup, -> { File.join(current_path, 'config.ru') }
set :puma_state, "#{shared_path}/tmp/pids/puma.state"
set :puma_pid, "#{shared_path}/tmp/pids/puma.pid"
set :puma_bind, "unix://#{shared_path}/tmp/sockets/puma.sock" #accept array for multi-bind
set :puma_conf, "#{shared_path}/config/puma.rb"
set :puma_access_log, "#{shared_path}/log/puma_access.log"
set :puma_error_log, "#{shared_path}/log/puma_error.log"
set :puma_role, :app
set :puma_env, fetch(:rack_env, fetch(:rails_env, 'staging'))
set :puma_threads, [0, 8]
set :puma_workers, 0
set :puma_worker_timeout, nil
set :puma_init_active_record, true
set :puma_preload_app, false
Capfile is:
require 'capistrano/setup'
# Include default deployment tasks
require 'capistrano/deploy'
require 'capistrano/bundler'
require 'capistrano/rvm'
require 'capistrano/rails/assets' # for asset handling add
require 'capistrano/rails/migrations' # for running migrations
require 'capistrano/puma'
puma.rb file is
workers 1
# Min and Max threads per worker
threads 1, 6
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
# Default to staging
rails_env = ENV['RAILS_ENV'] || "staging"
environment rails_env
# Set up socket location
bind "unix:///home/pb/pb-ruby/shared/tmp/sockets/puma.sock"
# Logging
stdout_redirect "/home/pb/pb-ruby/shared/log/puma.stdout.log", "/home/pb/pb-ruby/shared/log/puma.stderr.log", true
# Set master PID and state locations
pidfile "/home/pb/pb-ruby/shared/tmp/pids/puma.pid"
state_path "/home/pb/pb-ruby/shared/tmp/pids/puma.state"
activate_control_app
on_worker_boot do
  require "active_record"
  ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
  ActiveRecord::Base.establish_connection(YAML.load_file("/home/pb/pb-ruby/shared/config/database.yml")[rails_env])
end
I had an issue like this once. I ended up starting the Rails server as a daemon with this command every time I deploy:
cd current/app_dir
rails s -d -p 3000 -e 'production'
PS: I kill the currently running Rails instance before doing this.
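Since the Capfile above already requires capistrano/puma (the capistrano3-puma gem), an alternative sketch is to lean on the tasks that plugin provides instead of daemonizing rails s by hand, for example hooking its restart task into the deploy flow:
# deploy.rb (sketch; puma:restart is provided by capistrano3-puma)
after 'deploy:publishing', 'puma:restart'
Surviving a reboot on Ubuntu 16.04 generally means a systemd unit rather than the upstart-based puma-manager; newer releases of capistrano3-puma ship a systemd integration for this, but whether it is available depends on the gem version installed.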

Sidekiq looking for en.yml file in older capistrano release

I am running Sidekiq in production and I deploy my app using Capistrano. While processing a background job, I get the following error:
I18n::InvalidLocaleData: can not load translations from /path/to/folder/releases/20160904153949/config/locales/en.yml: #<Errno::ENOENT: No such file or directory # rb_sysopen - /path/to/folder/releases/20160904153949/config/locales/en.yml>.
Release 20160904153949 is old and has been deleted. I am wondering why Sidekiq is still looking in the older release.
Below is what my deploy.rb file looks like:
# config valid only for current version of Capistrano
lock '3.4.0'
set :application, 'app_name'
set :repo_url, 'git@github.com:reboot/app_name.git'
# Default branch is :master
set :branch, 'master'
# Default deploy_to directory is /var/www/my_app_name
set :deploy_to, '/path/to/app'
set :use_sudo, false
set :bundle_binstubs, nil
# Default value for :scm is :git
set :scm, :git
# Default value for :format is :pretty
set :format, :pretty
# Default value for :log_level is :debug
set :log_level, :debug
# Default value for :pty is false
set :pty, true
# Default value for :linked_files is []
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
# Default value for linked_dirs is []
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'vendor/bundle', 'public/system', 'public/assets')
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
# Default value for keep_releases is 5
set :keep_releases, 5
set :keep_assets, 3
after 'deploy:publishing', 'deploy:restart'
namespace :deploy do
  task :restart do
    on roles(:app) do
      execute :touch, release_path.join('tmp/restart.txt')
    end
  end
end
Below is what my Capfile looks like:
require 'capistrano/setup'
require 'capistrano/deploy'
require 'capistrano/rvm'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
require 'capistrano/sidekiq'
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
Sidekiq stop/start is handled by the capistrano-sidekiq gem. My app deploys fine; the problem is that Sidekiq is looking in the wrong release for the required file.
Also, I don't have a sidekiq.yml file at this stage. My app is small, so I never created a YAML file for it.
Ruby: 2.3.0p0,
Rails: 4.2.5,
nginx/passenger combination,
Capistrano 3.4
Update:
Below is full error message:
2016-09-07T19:33:22.349Z 3262 TID-md1a4 ContactUsEmailJob JID-fb18ad450d73ed857fe66aee INFO: fail: 0.069 sec
2016-09-07T19:33:22.350Z 3262 TID-md1a4 WARN: {"class":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper","wrapped":"ContactUsEmailJob","queue":"default","args":[{"job_class":"ContactUsEmailJob","job_id":"e30dfe20-89b2-49a9-833c-8e479bdb8a2d","queue_name":"default","arguments":[{"utf8":"✓","authenticity_token":"i+OuDyC2c243UvC0FRWk1esASnUhQ2jKbfvZnoX2GZLela+mPCcOU6qtpU3OZhxr0wTCYUpmFXD6623Q==","name":"Test","message":"test","controller":"static_pages","action":"email_us","_aj_hash_with_indifferent_access":true}],"locale":"en"}],"retry":true,"jid":"fb18ad450d73ed857fe66aee","created_at":1473276802.277088,"enqueued_at":1473276802.2773373,"error_message":"can not load translations from /path/to/app/releases/20160904153949/config/locales/en.yml: #<Errno::ENOENT: No such file or directory # rb_sysopen - /path/to/app/releases/20160904153949/config/locales/en.yml>","error_class":"I18n::InvalidLocaleData","failed_at":1473276802.3480363,"retry_count":0}
2016-09-07T19:33:22.350Z 3262 TID-md1a4 WARN: I18n::InvalidLocaleData: can not load translations from /path/to/app/releases/20160904153949/config/locales/en.yml: #<Errno::ENOENT: No such file or directory # rb_sysopen - /path/to/app/releases/20160904153949/config/locales/en.yml>
2016-09-07T19:33:22.350Z 3262 TID-md1a4 WARN: /path/to/app/shared/bundle/ruby/2.3.0/gems/i18n-0.7.0/lib/i18n/backend/base.rb:184:in `rescue in load_yml'
OK, so I solved the problem by following these steps:
Kill the Sidekiq process: pkill sidekiq
Delete the Sidekiq pid file from /path/to/app/current/tmp/pid/
Start Sidekiq using cap sidekiq:start, which creates the pid file
Finally, deploy my app (this process restarts Sidekiq again)
Everything worked fine after that.
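As a related safeguard (a sketch, not part of the original answer): capistrano-sidekiq can restart the workers on every deploy, which keeps Sidekiq from holding onto code loaded from a release that has since been pruned. If the hooks are not being registered automatically by the version in use, they can be declared explicitly:
# deploy.rb (sketch; these tasks come from capistrano-sidekiq)
after 'deploy:starting', 'sidekiq:quiet'
after 'deploy:updated', 'sidekiq:stop'
after 'deploy:published', 'sidekiq:start'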

capistrano shared/bin folder is empty

I am using Capistrano for deployment, and for some reason my shared/bin folder is empty, even though it should contain rails, rake, bundle, setup, and spring. Obviously I did something wrong, but as I am new to Capistrano I have no idea what it is, because bin/ is in my git repository, and as far as I know Capistrano copies everything from the repository. Since I am not sure whether it is relevant or not, I will just include everything I changed regarding the Capistrano deployment.
here's my deploy.rb
lock '3.4.0'
# application settings
set :application, 'SomeApplication'
set :user, 'someuser'
#set :repo_url, 'git@bitbucket.org:someapp/someappserver.git'
set :rails_env, 'production'
set :use_sudo, false
set :keep_releases, 5
#git settings
set :scm, :git
set :branch, "master"
set :repo_url, "git#bitbucket.org:someapplication/someapplicationserver.git"
set :deploy_via, :remote_cache
set :rvm_ruby_version, '2.2.1'
set :default_env, { rvm_bin_path: '~/.rvm/bin' }
SSHKit.config.command_map[:rake] = "#{fetch(:default_env)[:rvm_bin_path]}/rvm ruby-#{fetch(:rvm_ruby_version)} do bundle exec rake"
# dirs we want symlinked to the shared folder
# during deployment
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
set :pg_database, "someapp_production"
set :pg_user, "someapp_production"
set :pg_ask_for_password, true
namespace :deploy do
  task :config_nginx do
    pre = File.basename(previous_release)
    cur = File.basename(release_path)
    run "#{sudo} sed 's/#{pre}/#{cur}/g' /etc/nginx/sites-available/default"
  end
  task :restart_thin_server do
    run "cd #{previous_release}; source $HOME/.bash_profile && thin stop -C thin_config.yml"
    run "cd #{release_path}; source $HOME/.bash_profile && thin start -C thin_config.yml"
  end
  task :restart_nginx do
    run "#{sudo} service nginx restart"
  end
  desc 'Restart application'
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      # Your restart mechanism here, for example:
      # execute :touch, release_path.join('tmp/restart.txt')
      #
      # The capistrano-unicorn-nginx gem handles all this
      # for this example
    end
  end
  after :publishing, :restart
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
      # Here we can do anything such as:
      # within release_path do
      #   execute :rake, 'cache:clear'
      # end
    end
  end
end
here is my deploy/production.rb
# production deployment
set :stage, :production
# use the master branch of the repository
set :branch, "master"
# the user login on the remote server
# used to connect and deploy
set :deploy_user, "someuser"
# the 'full name' of the application
set :full_app_name, "#{fetch(:application)}_#{fetch(:stage)}"
# the server(s) to deploy to
server 'someserver.cloudapp.net', user: 'someuser', roles: %w{web app db}, primary: true
# the path to deploy to
set :deploy_to, "/home/#{fetch(:deploy_user)}/apps/#{fetch(:full_app_name)}"
# set to production for Rails
set :rails_env, :production
And here is my Capfile:
require 'capistrano/setup'
# Include default deployment tasks
require 'capistrano/deploy'
# Include tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
# https://github.com/capistrano/passenger
#
require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
# require 'capistrano/passenger'
require 'capistrano/thin'
require 'capistrano/postgresql'
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
Like @emj365 said, just remove bin from your linked_dirs in config/deploy.rb, so it becomes:
set :linked_dirs, %w{log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
@gilmishal Kindly check this link: capistrano-deploy-configuration.
Also keep an eye on the deploy_to directory path, where I have made mistakes many times:
# Default deploy_to directory is /var/www/my_app_name
# set :deploy_to, '/var/www/my_app_name' # This is the default configuration
Hope it solves your problem.
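If the goal were instead to have shared/bin actually populated, another hedged option (assuming capistrano/bundler is loaded, as the Capfile above suggests) is to ask Bundler to generate binstubs into the shared path during bundle install:
# deploy.rb (sketch; :bundle_binstubs is a capistrano-bundler option)
set :bundle_binstubs, -> { shared_path.join('bin') }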
