I have two different remote servers for my Rails application: the first is staging, the second production. I deploy using Capistrano. Each server uses its own command to start the Puma server, so on my local machine I use the following:
STAGING:
run("bundle exec puma -C /var/www/snapship/shared/config/puma.rb --daemon")
PROD:
sudo :systemctl, "start puma.target"
When I deploy my application I run cap staging deploy or cap production deploy. How can I let the deploy know which Puma start command it should use? I can't check Rails.env on the local machine (it is always development).
UPDATE: (Capfile)
require "capistrano/setup"
require "capistrano/deploy"
require "capistrano/rails/assets"
require "capistrano/rails/migrations"
require "capistrano/rbenv"
require "capistrano/bundler"
require "capistrano/puma"
require "capistrano/puma/nginx"
require "capistrano/puma/jungle"
require "sshkit/sudo"
require "whenever/capistrano"
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }
deploy.rb:
lock '3.5.0'
set :application, 'snapship'
set :pty, true
set :repo_url, 'git@gitlab.com:snapship/snapship-backend.git'
set :user, ENV['USER'] || 'deploy'
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml', 'config/puma.rb', '.env', '.rbenv-vars')
set :linked_dirs, %w(tmp/pids tmp/sockets log public/uploads)
# rbenv plugin setup
set :rbenv_type, :user
set :rbenv_ruby, File.read('.ruby-version').strip
set :rbenv_prefix, "RBENV_ROOT=#{fetch(:rbenv_path)} RBENV_VERSION=#{fetch(:rbenv_ruby)} #{fetch(:rbenv_path)}/bin/rbenv exec"
set :rbenv_map_bins, fetch(:rbenv_map_bins, []).push('foreman')
# bundle plugin setup
set :bundle_bins, fetch(:bundle_bins, []).push('foreman')
# puma plugin setup
set :puma_preload_app, true
set :puma_init_active_record, true
set :puma_conf, "#{shared_path}/config/puma.rb"
before :deploy, "deploy:run_tests"
after "deploy:publishing", "foreman:export"
after "deploy:publishing", "systemd:update"
after "deploy:publishing", "systemd:enable"
set :migration_servers, -> { release_roles(fetch(:migration_role)) }
There are also two stage files, staging and production, one for each deploy target.
What worked for me:
# in deploy.rb
# define stages
set :stages, %w(staging production)
# then get the current stage
set :my_var_env, Proc.new { fetch :stage }
Then run cap staging deploy.
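For example, a custom task can branch on that stage value to pick the right start command (a sketch only; the puma.rb path and the puma.target unit are taken from the question above, and the task name is my own):
namespace :puma do
  desc 'Start Puma with the command appropriate for the current stage'
  task :start_for_stage do
    on roles(:app) do
      if fetch(:stage).to_s == 'production'
        # production boots Puma through systemd
        sudo :systemctl, :start, 'puma.target'
      else
        # staging daemonizes Puma directly
        execute :bundle, :exec, :puma, '-C', "#{shared_path}/config/puma.rb", '--daemon'
      end
    end
  end
end
after 'deploy:publishing', 'puma:start_for_stage'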
I also use the gem 'capistrano3-puma' which gives me tasks like
cap staging|production puma:[start|stop|restart|status]
I have an existing Rails application that is set up to use Capistrano for deployments. I'm adding a Staging environment to it, but running bundle exec cap staging deploy returns an error:
cap aborted!
SSHKit::Runner::ExecuteError: Exception while executing as deploy@[IP Redacted]: cat /var/www/staging-app-name/current/REVISION exit status: 1
cat /var/www/staging-app-name/current/REVISION stdout: Nothing written
cat /var/www/staging-app-name/current/REVISION stderr: cat: /var/www/staging-app-name/current/REVISION: No such file or directory
Versions:
Rails - 4.2.11
Ruby - 2.3.1
Capistrano - 3.15.0
deploy.rb:
set :stages, %w(production staging)
set :application, "application-name"
set :repo_url, "[Redacted]"
set :conditionally_migrate, !ENV['FIRST_RUN']
set :migration_role, :app
set :default_stage, "production"
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
set :linked_dirs, fetch(:linked_dirs, []).push(
'log',
'tmp/pids',
'tmp/cache',
'tmp/sockets',
'vendor/bundle',
'public/system',
'public/uploads',
'public/assets'
)
set :default_env, { path: "/opt/ruby_build/builds/2.3.1/bin:$PATH" }
deploy/staging.rb:
server '[IP redacted]', user: 'deploy', roles: %w{app db web}
set :branch, 'develop'
set :deploy_to, '/var/www/staging-app-name'
set :stage, 'staging'
set :rails_env, 'staging'
deploy/production.rb
server '[IP redacted]', user: 'deploy', roles: %w{app db web}
set :branch, 'master'
set :deploy_to, '/var/www/production-app-name'
set :stage, 'production'
set :rails_env, 'production'
Capfile
# Load DSL and set up stages
require "capistrano/setup"
# Include default deployment tasks
require "capistrano/deploy"
require "capistrano/scm/git"
install_plugin Capistrano::SCM::Git
require "capistrano/bundler"
require "capistrano/rails/migrations"
require "capistrano/conditional"
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob("lib/capistrano/tasks/*.rake").each { |r| import r }
This would be the first Staging deployment, so the staging-app-name directory is empty.
It's worth noting that Production deployments are working.
I've confirmed the directory/file permissions on the server are fine.
Any help would be appreciated!
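In case it helps with diagnosis, a small custom task (a sketch; the task name is my own, and it only reports state) can show what each server actually has before the deploy fails:
namespace :deploy do
  desc 'Report whether current/REVISION exists on each server'
  task :check_revision do
    on roles(:app) do
      if test("[ -f #{current_path}/REVISION ]")
        info "REVISION present: #{capture(:cat, "#{current_path}/REVISION")}"
      else
        # a stale `current` symlink left by an interrupted deploy can
        # produce exactly the `cat ... REVISION` failure above
        warn "No REVISION file under #{current_path}"
      end
    end
  end
end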
I have set up the Capistrano script to deploy to staging. I can't seem to find a way to restart the Puma server when a deployment completes, or to restart it if the server is rebooted for any reason.
I am using Rails 4.2 and Ubuntu 16.04 on an EC2 server. I tried an upstart script with puma-manager, but upstart is not supported on Ubuntu 16.04 (it was replaced by systemd).
I followed this link for puma-manager: http://blog.peterkw.me/automatic-start-for-puma-rails-and-postgresql/
my deploy.rb file is
lock "3.8.0"
set :application, 'pb-ruby'
set :repo_url, 'git@bitbucket.org:url/pb-ruby.git' # Edit this to match your repository
set :branch, :staging_new
set :stages, %w(staging dev_org) # note: %w splits on whitespace, not commas
set :default_stage, "dev_org"
set :deploy_to, '/home/pb/pb-ruby'
set :pty, true
set :linked_files, %w{config/database.yml}
set :linked_dirs, %w{log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system public/uploads}
set :bundle_binstubs, nil
set :keep_releases, 5
set :rvm_type, :user
set :rvm_ruby_version, '2.3.0' # Edit this if you are using MRI Ruby
set :puma_rackup, -> { File.join(current_path, 'config.ru') }
set :puma_state, "#{shared_path}/tmp/pids/puma.state"
set :puma_pid, "#{shared_path}/tmp/pids/puma.pid"
set :puma_bind, "unix://#{shared_path}/tmp/sockets/puma.sock" #accept array for multi-bind
set :puma_conf, "#{shared_path}/config/puma.rb"
set :puma_access_log, "#{shared_path}/log/puma_access.log"
set :puma_error_log, "#{shared_path}/log/puma_error.log"
set :puma_role, :app
set :puma_env, fetch(:rack_env, fetch(:rails_env, 'staging'))
set :puma_threads, [0, 8]
set :puma_workers, 0
set :puma_worker_timeout, nil
set :puma_init_active_record, true
set :puma_preload_app, false
Capfile is:
require 'capistrano/setup'
# Include default deployment tasks
require 'capistrano/deploy'
require 'capistrano/bundler'
require 'capistrano/rvm'
require 'capistrano/rails/assets' # for asset handling add
require 'capistrano/rails/migrations' # for running migrations
require 'capistrano/puma'
puma.rb file is
workers 1
# Min and Max threads per worker
threads 1, 6
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
# Default to staging when RAILS_ENV is unset
rails_env = ENV['RAILS_ENV'] || "staging"
environment rails_env
# Set up socket location
bind "unix:///home/pb/pb-ruby/shared/tmp/sockets/puma.sock"
# Logging
stdout_redirect "/home/pb/pb-ruby/shared/log/puma.stdout.log", "/home/pb/pb-ruby/shared/log/puma.stderr.log", true
# Set master PID and state locations
pidfile "/home/pb/pb-ruby/shared/tmp/pids/puma.pid"
state_path "/home/pb/pb-ruby/shared/tmp/pids/puma.state"
activate_control_app
on_worker_boot do
  require "active_record"
  begin
    ActiveRecord::Base.connection.disconnect!
  rescue ActiveRecord::ConnectionNotEstablished
    # no connection to close on first boot
  end
  ActiveRecord::Base.establish_connection(YAML.load_file("/home/pb/pb-ruby/shared/config/database.yml")[rails_env])
end
I had an issue like this once. I ended up adding the Rails server instance as a daemon using this command every time I deploy:
cd current/app_dir
rails s -d -p 3000 -e production
PS: I kill the currently running Rails instance before doing this.
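Since Ubuntu 16.04 replaced upstart with systemd, a more durable approach is a systemd unit plus a Capistrano hook. A sketch, assuming you have already written a puma.service unit on the server yourself (the unit name and the task are assumptions, not something capistrano3-puma generates for you here):
namespace :puma do
  desc 'Restart Puma through systemd after each deploy'
  task :systemd_restart do
    on roles(:app) do
      # `enable` makes the unit come back after a reboot,
      # `restart` picks up the newly published release
      sudo :systemctl, :enable, 'puma.service'
      sudo :systemctl, :restart, 'puma.service'
    end
  end
end
after 'deploy:publishing', 'puma:systemd_restart'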
I am running Sidekiq in production and I deploy my app using Capistrano. While processing a background job, I get the following error:
I18n::InvalidLocaleData: can not load translations from /path/to/folder/releases/20160904153949/config/locales/en.yml: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /path/to/folder/releases/20160904153949/config/locales/en.yml>.
Release 20160904153949 is old and has been deleted. I am wondering why Sidekiq is still looking in the older release.
Below is what my deploy.rb file looks like:
# config valid only for current version of Capistrano
lock '3.4.0'
set :application, 'app_name'
set :repo_url, 'git@github.com:reboot/app_name.git'
# Default branch is :master
set :branch, 'master'
# Default deploy_to directory is /var/www/my_app_name
set :deploy_to, '/path/to/app'
set :use_sudo, false
set :bundle_binstubs, nil
# Default value for :scm is :git
set :scm, :git
# Default value for :format is :pretty
set :format, :pretty
# Default value for :log_level is :debug
set :log_level, :debug
# Default value for :pty is false
set :pty, true
# Default value for :linked_files is []
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml')
# Default value for linked_dirs is []
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'vendor/bundle', 'public/system', 'public/assets')
# Default value for default_env is {}
# set :default_env, { path: "/opt/ruby/bin:$PATH" }
# Default value for keep_releases is 5
set :keep_releases, 5
set :keep_assets, 3
after 'deploy:publishing', 'deploy:restart'
namespace :deploy do
task :restart do
on roles(:app) do
execute :touch, release_path.join('tmp/restart.txt')
end
end
end
Below is what my Capfile looks like:
require 'capistrano/setup'
require 'capistrano/deploy'
require 'capistrano/rvm'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
require 'capistrano/sidekiq'
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
Sidekiq stop/start is handled by the capistrano-sidekiq gem. My app deploys fine; the problem is that Sidekiq looks in the wrong release for the required file.
Also, I don't have a sidekiq.yml file at this stage. My app is small, so I never created a yml file for it.
Ruby: 2.3.0p0,
Rails: 4.2.5,
nginx/passenger combination,
Capistrano 3.4
Update:
Below is the full error message:
2016-09-07T19:33:22.349Z 3262 TID-md1a4 ContactUsEmailJob JID-fb18ad450d73ed857fe66aee INFO: fail: 0.069 sec
2016-09-07T19:33:22.350Z 3262 TID-md1a4 WARN: {"class":"ActiveJob::QueueAdapters::SidekiqAdapter::JobWrapper","wrapped":"ContactUsEmailJob","queue":"default","args":[{"job_class":"ContactUsEmailJob","job_id":"e30dfe20-89b2-49a9-833c-8e479bdb8a2d","queue_name":"default","arguments":[{"utf8":"✓","authenticity_token":"i+OuDyC2c243UvC0FRWk1esASnUhQ2jKbfvZnoX2GZLela+mPCcOU6qtpU3OZhxr0wTCYUpmFXD6623Q==","name":"Test","message":"test","controller":"static_pages","action":"email_us","_aj_hash_with_indifferent_access":true}],"locale":"en"}],"retry":true,"jid":"fb18ad450d73ed857fe66aee","created_at":1473276802.277088,"enqueued_at":1473276802.2773373,"error_message":"can not load translations from /path/to/app/releases/20160904153949/config/locales/en.yml: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /path/to/app/releases/20160904153949/config/locales/en.yml>","error_class":"I18n::InvalidLocaleData","failed_at":1473276802.3480363,"retry_count":0}
2016-09-07T19:33:22.350Z 3262 TID-md1a4 WARN: I18n::InvalidLocaleData: can not load translations from /path/to/app/releases/20160904153949/config/locales/en.yml: #<Errno::ENOENT: No such file or directory @ rb_sysopen - /path/to/app/releases/20160904153949/config/locales/en.yml>
2016-09-07T19:33:22.350Z 3262 TID-md1a4 WARN: /path/to/app/shared/bundle/ruby/2.3.0/gems/i18n-0.7.0/lib/i18n/backend/base.rb:184:in `rescue in load_yml'
OK, so I solved the problem by following these steps:
1. Kill the Sidekiq process: pkill sidekiq
2. Delete the Sidekiq pid file from /path/to/app/current/tmp/pid/
3. Start Sidekiq with cap sidekiq:start, which creates a fresh pid file
4. Finally, deploy the app (this restarts Sidekiq again)
Everything worked fine after that.
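The underlying cause is that a Sidekiq daemon keeps serving code from the release it booted in until it is restarted, so once old releases are pruned its load paths point at deleted directories. capistrano-sidekiq is supposed to cycle the process around each deploy; if its hooks ever stop firing, you can wire them up explicitly in deploy.rb (a sketch using that gem's own task names):
after 'deploy:starting',  'sidekiq:quiet'   # stop accepting new jobs
after 'deploy:updated',   'sidekiq:stop'    # shut down the old-release process
after 'deploy:published', 'sidekiq:start'   # boot Sidekiq from the new current/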
I am using Capistrano for deployment, and for some reason my shared/bin folder is empty when it should contain rails, rake, bundle, setup, and spring. Obviously I did something wrong, but as I am new to Capistrano I have no idea what, because the bin directory is in my git repository and, as far as I know, Capistrano copies the entire thing from the repository. Since I am not sure what is relevant, I will include everything I changed regarding the Capistrano deployment.
here's my deploy.rb
lock '3.4.0'
# application settings
set :application, 'SomeApplication'
set :user, 'someuser'
#set :repo_url, 'git@bitbucket.org:someapp/someappserver.git'
set :rails_env, 'production'
set :use_sudo, false
set :keep_releases, 5
#git settings
set :scm, :git
set :branch, "master"
set :repo_url, "git#bitbucket.org:someapplication/someapplicationserver.git"
set :deploy_via, :remote_cache
set :rvm_ruby_version, '2.2.1'
set :default_env, { rvm_bin_path: '~/.rvm/bin' }
SSHKit.config.command_map[:rake] = "#{fetch(:default_env)[:rvm_bin_path]}/rvm ruby-#{fetch(:rvm_ruby_version)} do bundle exec rake"
# dirs we want symlinked to the shared folder
# during deployment
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
set :pg_database, "someapp_production"
set :pg_user, "someapp_production"
set :pg_ask_for_password, true
namespace :deploy do
task :config_nginx do
pre = File.basename(previous_release)
cur = File.basename(release_path)
run "#{sudo} sed 's/#{pre}/#{cur}/g' /etc/nginx/sites-available/default"
end
task :restart_thin_server do
run "cd #{previous_release}; source $HOME/.bash_profile && thin stop -C thin_config.yml"
run "cd #{release_path}; source $HOME/.bash_profile && thin start -C thin_config.yml"
end
task :restart_nginx do
run "#{sudo} service nginx restart"
end
desc 'Restart application'
task :restart do
on roles(:app), in: :sequence, wait: 5 do
# Your restart mechanism here, for example:
# execute :touch, release_path.join('tmp/restart.txt')
#
# The capistrano-unicorn-nginx gem handles all this
# for this example
end
end
after :publishing, :restart
after :restart, :clear_cache do
on roles(:web), in: :groups, limit: 3, wait: 10 do
# Here we can do anything such as:
# within release_path do
# execute :rake, 'cache:clear'
# end
end
end
end
here is my deploy/production.rb
# production deployment
set :stage, :production
# use the master branch of the repository
set :branch, "master"
# the user login on the remote server
# used to connect and deploy
set :deploy_user, "someuser"
# the 'full name' of the application
set :full_app_name, "#{fetch(:application)}_#{fetch(:stage)}"
# the server(s) to deploy to
server 'someserver.cloudapp.net', user: 'someuser', roles: %w{web app db}, primary: true
# the path to deploy to
set :deploy_to, "/home/#{fetch(:deploy_user)}/apps/#{fetch(:full_app_name)}"
# set to production for Rails
set :rails_env, :production
and here is my cap file
require 'capistrano/setup'
# Include default deployment tasks
require 'capistrano/deploy'
# Include tasks from other gems included in your Gemfile
#
# For documentation on these, see for example:
#
# https://github.com/capistrano/rvm
# https://github.com/capistrano/rbenv
# https://github.com/capistrano/chruby
# https://github.com/capistrano/bundler
# https://github.com/capistrano/rails
# https://github.com/capistrano/passenger
#
require 'capistrano/rvm'
# require 'capistrano/rbenv'
# require 'capistrano/chruby'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
# require 'capistrano/passenger'
require 'capistrano/thin'
require 'capistrano/postgresql'
# Load custom tasks from `lib/capistrano/tasks` if you have any defined
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
Like @emj365 said, just remove bin from your linked_dirs in config/deploy.rb:
set :linked_dirs, %w{log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
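This works because every entry in linked_dirs is replaced by a symlink into shared/, so a linked bin/ starts out empty and shadows the binstubs committed in your repository. If you would rather keep bin/ in linked_dirs, regenerate the binstubs during deploy instead (a sketch; the task name and hook placement are my own):
namespace :deploy do
  desc 'Regenerate binstubs into the shared bin/ after bundling'
  task :generate_binstubs do
    on roles(:app) do
      within release_path do
        # --force overwrites whatever is in the (shared, symlinked) bin/
        execute :bundle, :binstubs, 'bundler', '--force', '--path', 'bin'
      end
    end
  end
end
after 'bundler:install', 'deploy:generate_binstubs'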
@gilmishal Kindly check this link: capistrano-deploy-configuration
And keep an eye on the deploy_to directory path; that is where I made mistakes many times:
# Default deploy_to directory is /var/www/my_app_name
# set :deploy_to, '/var/www/my_app_name' # This conf is by default
Hope this solves your problem.
We have a production environment for a Rails 4 app with Apache, Phusion Passenger, Capistrano 3, and a remote Bitbucket repository. Capistrano's "cap production deploy" works well and executes without errors. But when we go to the "current" folder on the remote server and run "git log", the last commits from our remote repository aren't there.
We've tried "git log" in the main folder of our app: same problem.
Our question is, how can we load the last commits of our repo into the production environment? Isn't Capistrano supposed to do that by default?
Any idea where this could come from?
Here is the code of our Capfile, deploy.rb and deploy/production.rb files:
Capfile
# Load DSL and Setup Up Stages
require 'capistrano/setup'
# Includes default deployment tasks
require 'capistrano/deploy'
require 'rvm1/capistrano3'
require 'capistrano/bundler'
require 'capistrano/rails/assets'
require 'capistrano/rails/migrations'
# Loads custom tasks from `lib/capistrano/tasks' if you have any defined.
Dir.glob('lib/capistrano/tasks/*.cap').each { |r| import r }
Dir.glob('lib/capistrano/**/*.rb').each { |r| import r }
deploy.rb
lock '3.1.0'
set :application, 'XXXXXXX'
set :deploy_user, 'XXXXXXX'
set :repo_url, 'GIT_REPO_URL.XXXXXXX.git'
set :keep_releases, 5
set :rvm_type, :user
set :rvm_ruby_version, 'ruby-2.1.2'
set :default_env, { rvm_bin_path: '/usr/local/rvm/bin' }
set :bundle_dir, "/usr/local/bin"
set :ssh_options, {:forward_agent => true}
set :linked_files, %w{config/database.yml config/application.yml}
set :linked_dirs, %w{bin log tmp/pids tmp/cache tmp/sockets vendor/bundle public/system}
set :tests, []
set(:config_files, %w(
apache2.conf
database.example.yml
log_rotation
unicorn.rb
unicorn_init.sh
))
set :log_level, :debug
set :pty, true
set :assets_roles, [:app]
# which config files should be made executable after copying
# by deploy:setup_config
set(:executable_config_files, %w(
unicorn_init.sh
))
# files which need to be symlinked to other parts of the
# filesystem. For example nginx virtualhosts, log rotation
# init scripts etc.
set(:symlinks, [
{
source: "apache2.conf",
link: "/etc/apache2/sites-enabled/#{fetch(:full_app_name)}"
},
{
source: "unicorn_init.sh",
link: "/etc/init.d/unicorn_#{fetch(:full_app_name)}"
},
{
source: "log_rotation",
link: "/etc/logrotate.d/#{fetch(:full_app_name)}"
}
])
namespace :deploy do
task :start do ; end
task :stop do ; end
desc 'Restart application'
task :restart do
on roles(:all), in: :sequence, wait: 5 do
# Your restart mechanism here, for example:
execute :touch, release_path.join('tmp/restart.txt')
end
end
task :stop_node do
on roles(:all), in: :sequence do
# NOTE: despite the task name, this command starts the node server
execute "nohup node ./realtime/node_server.js &"
end
end
task :restart_node do
on roles(:all), in: :sequence do
#Restart the node_server
execute "nohup node ./realtime/node_server.js &"
end
end
end
# Bundle install configuration
set :bundle_without, %w{development test}.join(' ')
set :bundle_roles, :all
namespace :bundler do
desc "Install gems with bundler."
task :install do
on roles fetch(:bundle_roles) do
with RAILS_ENV: fetch(:environment) do
within release_path do
execute :bundle, "install", "--without #{fetch(:bundle_without)}"
end
end
end
end
end
before 'deploy:updated', 'bundler:install'
before 'deploy:restart', 'bundler:install'
after 'deploy:updated', 'deploy:publishing'
after 'deploy:restart','deploy:restart_node'
deploy/production.rb
set :stage, :production
set :branch, "REPO_BRANCH"
set :full_app_name, "#{fetch(:application)}_#{fetch(:stage)}"
set :server_name, "XXXXXXX.com www.XXXXXXXX.com"
set :password, ask('Server password', nil)
server 'XXXXXX.com', user: 'XXXXXX', password: fetch(:password), port: 22, roles: %w{web app}, primary: true
set :deploy_to, '/PATH/TO/APP'
set :rails_env, :production
set :environment, "production"
set :unicorn_worker_count, 5
set :enable_ssl, false
It looks like Capistrano keeps a repo/ directory (here /var/www/:appname/repo) that caches the git repository, so if you change the repo, Capistrano won't update the cache automatically.
Nuking the repo directory did the trick for me.
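If you would rather not shell in by hand, a one-off task can clear the cached mirror so the next deploy re-clones (a sketch; the task name is my own, while repo_path is Capistrano's built-in helper):
namespace :deploy do
  desc 'Remove the cached git mirror; the next deploy will clone afresh'
  task :nuke_repo_cache do
    on roles(:all) do
      execute :rm, '-rf', repo_path
    end
  end
end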
You have set a specific branch for deployment (set :branch, "REPO_BRANCH"), and this branch comes from the remote git repository. Make sure you have pushed your commits to the right branch of the Bitbucket repo.
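To confirm which commit actually landed, you can compare the server's REVISION file with the tip of the deployed branch (a sketch; the task name is my own, and running git ls-remote on the server relies on the agent forwarding already enabled in deploy.rb):
namespace :deploy do
  desc 'Show the deployed revision next to the remote branch tip'
  task :verify_revision do
    on roles(:app) do
      deployed = capture(:cat, "#{current_path}/REVISION")
      tip = capture(:git, 'ls-remote', repo_url, fetch(:branch)).split.first
      info "deployed: #{deployed} | #{fetch(:branch)} tip: #{tip}"
    end
  end
end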