Capistrano-unicorn gem getting wrong environment set

I've been using this gem for a while and just took the dive into deploying an actual staging environment to my staging server, and I ran into issues. Unicorn starts with the unicorn_rails command and -E production, despite all the settings being correct as far as I can tell.
I noticed that my unicorn_bin variable was set to unicorn_rails in deploy.rb, so I took that setting out. However, unicorn:duplicate still executes the unicorn_rails command, when the default should be unicorn.
My vars are all set to staging in the deploy/staging.rb, as outlined in the multistage setup wiki document, but I noticed -E is still getting set to production.
Relevant info:
Here's my output from my unicorn.log file after a deploy:
executing ["/var/www/apps/myapp/shared/bundle/ruby/2.0.0/bin/unicorn_rails", "-c", "/var/www/apps/bundio/current/config/unicorn.rb", "-E", "production", "-D", {12=>#<Kgio::UNIXServer:/tmp/bundio.socket>, 13=>#<Kgio::TCPServer:fd 13>}] (in /var/www/apps/bundio/current)
Here's the output from cap -T (defaults to staging)
# Environments
rails_env "staging"
unicorn_env "staging"
unicorn_rack_env "staging"
# Execution
unicorn_user nil
unicorn_bundle "/usr/local/rvm/gems/ruby-2.0.0-p247@global/bin/bundle"
unicorn_bin "unicorn"
unicorn_options ""
unicorn_restart_sleep_time 2
# Relative paths
app_subdir ""
unicorn_config_rel_path "config"
unicorn_config_filename "unicorn.rb"
unicorn_config_rel_file_path "config/unicorn.rb"
unicorn_config_stage_rel_file_path "config/unicorn/staging.rb"
# Absolute paths
app_path "/var/www/apps/myapp/current"
unicorn_pid "/var/www/apps/myapp/shared/pids/unicorn.myapp.pid"
bundle_gemfile "/var/www/apps/myapp/current/Gemfile"
unicorn_config_path "/var/www/apps/myapp/current/config"
unicorn_config_file_path "/var/www/apps/myapp/current/config/unicorn.rb"
unicorn_config_stage_file_path "/var/www/apps/myapp/current/config/unicorn/staging.rb"
And another curiosity: the unicorn_rails -E flag should reference the Rails environment, whereas the unicorn -E flag should reference the Rack env. The Rack env should only get the values development or deployment, but it gets set to production, which is a bit strange (see the Unicorn docs for how the RACK_ENV variable is set).
Any insight into this would be much appreciated. On my staging server, I've also set the RAILS_ENV to staging. I've set up the things for rails for another environment, like adding staging.rb in my environments folder, adding a staging section to database.yml, etc.
Important lines in lib/capistrano-unicorn/config.rb talking about unicorn_rack_env:
_cset(:unicorn_env) { fetch(:rails_env, 'production') }
_cset(:unicorn_rack_env) do
  # Following recommendations from http://unicorn.bogomips.org/unicorn_1.html
  fetch(:rails_env) == 'development' ? 'development' : 'deployment'
end
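Tracing those defaults against the staging values shown above (my reading of the gem's code, not verified output):
# With rails_env == "staging", the gem's defaults resolve to:
#   unicorn_env      -> "staging"      (falls back to rails_env)
#   unicorn_rack_env -> "deployment"   (anything other than development maps to deployment)
# Neither default yields "production", so whatever launched the process in the
# log above must have supplied -E production itself.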
Thanks in advance.

OK, after a long time of not having the correct environment, I have discovered the issue!
Basically, my init scripts were running BEFORE my capistrano-unicorn bin was doing its thing.
So, make sure that any init.d or upstart scripts that manage Unicorn and its workers are taken into account when capistrano-unicorn runs its restart / reload / duplication tasks.
I did not think to look at these scripts when I had to debug the stale pid file / already running / unable to listen on socket errors. But it makes sense, as upstart starts Unicorn when it is not running, and then capistrano-unicorn is also attempting to start Unicorn.
I have now combined these capistrano tasks and hooks with Monit and a Unicorn init script.
Capistrano tasks:
namespace :monit do
  desc 'wait 20 seconds'
  task :wait_20_seconds do
    sleep 20
  end
  task :monitor_all, :roles => :app do
    sudo "monit monitor all"
  end
  task :unmonitor_all, :roles => :app do
    sudo "monit unmonitor all"
  end
  desc 'monitor unicorn in the monit rc file'
  task :monitor_unicorn, :roles => :app do
    sudo "monit monitor unicorn"
  end
  desc 'unmonitor unicorn in the monit rc file'
  task :unmonitor_unicorn, :roles => :app do
    sudo "monit unmonitor unicorn"
  end
end
Capistrano hooks:
after 'deploy:restart', 'unicorn:duplicate' # app preloaded. check https://github.com/sosedoff/capistrano-unicorn section for zero downtime
before 'deploy', "monit:unmonitor_unicorn"
before 'deploy:migrations', "monit:unmonitor_unicorn"
after 'deploy', 'monit:wait_20_seconds'
after "deploy:migrations", "monit:wait_20_seconds"
after 'monit:wait_20_seconds', 'monit:monitor_unicorn'
I use Monit to monitor my unicorn process:
Within /etc/monit/monitrc:
check process unicorn
  with pidfile /var/www/apps/my_app/shared/pids/mypid.pid
  start program = "/usr/bin/sudo service unicorn start"
  stop program = "/usr/bin/sudo service unicorn stop"
Within your init script, you will start the unicorn process with something like:
unicorn_rails -c /var/www/apps/my_app/current/config/unicorn.rb -E staging -D
Make sure the -E flag is set to the correct environment. The capistrano-unicorn gem has variables you can configure with set in deploy.rb that let you specify the environment for that Unicorn process.
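For example (a sketch; the variable names are the ones listed in the cap -T output from the question, not the exact contents of my deploy file):
# deploy.rb or config/deploy/staging.rb -- a sketch
set :rails_env,   "staging"
set :unicorn_env, "staging"
set :unicorn_bin, "unicorn"   # the gem's default binary; omit unless you really need unicorn_rails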

Related

Puma unable to find production environment configuration

I have a rails (4.2.4) app running with puma. The documentation says:
If an environment is specified, either via the -e and --environment flags, or through the RACK_ENV environment variable, the default file location will be config/puma/environment_name.rb.
And so that's where my configuration files are.
In my development environment, puma starts up just fine. Here is the configuration:
workers 1
threads 1, 6
environment 'development'
activate_control_app
My production environment has problems however. Here's the configuration:
workers 1
threads 1, 6
environment 'production'
app_dir = File.expand_path('../../..', __FILE__)
bind "unix://#{app_dir}/tmp/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log",
                "#{app_dir}/log/puma.stderr.log",
                true
pidfile "#{app_dir}/tmp/pids/puma.pid"
state_path "#{app_dir}/tmp/pids/puma.state"
activate_control_app
daemonize true
This is obviously more complicated, mostly to make use of sockets, logging, and daemonization. I know it works however, because it starts just fine with the following command:
$ bundle exec pumactl start
So far so good. But if I want to stop or restart the server like the above command, I get the following message:
$ bundle exec pumactl stop
Neither pid nor control url available
If I specify the location of the configuration file it works:
$ bundle exec pumactl -F config/puma/production.rb stop
Why do I need to specify the configuration file for stop and restart but not start?

Sidekiq with Capistrano (2.x) - is it necessary to set up restarting Sidekiq after every deployment?

I am playing with Sidekiq and using Capistrano for deploying the application.
So far I am using gem 'capistrano-sidekiq', group: :development, and since I was previously using DelayedJob, I needed to remove these lines from my current deploy file:
set :delayed_job_command, "bin/delayed_job"
after "deploy:start", "delayed_job:start"
after "deploy:stop", "delayed_job:stop"
What do I need to add to Capistrano for Sidekiq to make sure Sidekiq will run all the time (meaning it will not be interrupted/stopped after I deploy some code to the server)?
Or does the gem automatically (re)start Sidekiq after every deployment?
Sidekiq is restarted automatically in Capistrano 2.x after deployment.
The capistrano-sidekiq gem which manages it says:
if fetch(:sidekiq_default_hooks)
  before 'deploy:update_code', 'sidekiq:quiet'
  after 'deploy:stop', 'sidekiq:stop'
  after 'deploy:start', 'sidekiq:start'
  before 'deploy:restart', 'sidekiq:restart'
end
This is on lines 26-31 of the gem file lib/capistrano/tasks/capistrano2.rb.
:sidekiq_default_hooks is set just above that block:
Capistrano::Configuration.instance.load do
  _cset(:sidekiq_default_hooks) { true }
so it is a Capistrano configuration variable, loaded by the configuration for this particular environment, and it defaults to true.
The sidekiq:quiet command runs
run_as "if [ -d #{current_path} ] && [ -f #{pid_file} ] && kill -0 `cat #{pid_file}`> /dev/null 2>&1; then cd #{current_path} && #{fetch(:sidekiqctl_cmd)} quiet #{pid_file} ; else echo 'Sidekiq is not running'; fi"
And I'm assuming that, with the default Capistrano 2 deploy configuration, your deployment will quiet Sidekiq before the code is updated and restart it around deploy:restart, per the hooks above.
So, based on that, you can confirm that it will restart every time. You can manually alter your deployment process in Capistrano 2 if you choose to, or you can override the :sidekiq_default_hooks variable and define your own Sidekiq hooks that way.
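For example, a sketch of taking over the hooks yourself (assuming capistrano-sidekiq on Capistrano 2.x; the hook and task names are the ones quoted above):
# deploy.rb -- a sketch; disables the gem's default hooks and wires up only the ones you want
set :sidekiq_default_hooks, false

before 'deploy:update_code', 'sidekiq:quiet'
before 'deploy:restart', 'sidekiq:restart'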

Upstart script using foreman export is using wrong ruby version

I have just recently deployed my application to the production server, but it looks like none of the processes I have added to my .procfile (foreman) are started at all. Details:
I am using Rails 4.0.12, foreman 0.78.0, sidekiq 3.4.2 and clockwork 1.2.0. I am using Capistrano, and I have defined a task to export the Procfile as an upstart service on Ubuntu 14.02. But when I start the service, no background jobs are processed. When I take a look at the log files of that upstart service, I just see the following:
Your ruby version is 1.9.3, but your Gemfile specified 2.1.2
The application is running, and I can see on the Sidekiq dashboard that nothing is processed. Based on the error message, it looks like I am executing my .procfile wrong somehow. I have tried multiple execution scenarios, but nothing seems to work. My .procfile currently looks like this:
worker: rbenv sudo bundle exec sidekiq -C config/sidekiq.yml -e production
clock: rbenv sudo bundle exec clockwork config/clock.rb
One part of exported upstart script looks like this for example:
start on starting aaa-clock
stop on stopping aaa-clock
respawn
env PORT=5100
setuid da_admin
chdir /var/www/aaa/releases/20150728172635
exec rbenv sudo bundle exec clockwork config/clock.rb
If I run the last two commands alone in bash, it works, but when I start the service with "sudo service aaa start" or "rbenv sudo service aaa start", it doesn't.
The part of deploy.rb where I export my upstart service:
namespace :foreman do
  desc "Export the Procfile to Ubuntu's upstart scripts"
  task :export do
    on roles(:app) do
      within release_path do
        execute :rbenv, "sudo bundle exec foreman export upstart /etc/init -a #{fetch(:application)} -u #{fetch(:user)} -l #{current_path}/log -f #{release_path}/Procfile"
      end
    end
  end

  desc "Start the application services"
  task :start do
    on roles(:app) do
      execute :rbenv, "sudo service #{fetch(:application)} start"
    end
  end

  desc "Stop the application services"
  task :stop do
    on roles(:app) do
      execute :rbenv, "sudo service #{fetch(:application)} stop"
    end
  end

  desc "Restart the application services"
  task :restart do
    on roles(:app) do
      execute :rbenv, "sudo service #{fetch(:application)} restart"
    end
  end
end
Does anybody have any idea what could be wrong? I suspect this is some mistake in the environment configuration. Thank you in advance for your time.
EDIT:
In the end, the problem was in the environment of the upstart script. Similar problems which pointed me in the right direction were:
foreman issue
foreman another issue
I had to create a .env file with the configuration of various environment variables. Now it at least starts (other bugs arose, but they are not related to this issue).
Example of the .env file in the root of project directory:
PATH=/home/user/.rbenv/versions/2.1.2/bin:/usr/bin
RAILS_ENV=production
HOME=/home/user
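If the .env file lives outside the repository, one way to keep it in place for the export step is to symlink it from the shared directory before running foreman:export. A sketch (the foreman:symlink_env task and the shared_path location are my assumptions, not part of the original setup):
# deploy.rb -- a sketch; assumes the .env file is kept at shared_path on the server
namespace :foreman do
  desc "Symlink the shared .env into the release before exporting the Procfile"
  task :symlink_env do
    on roles(:app) do
      execute :ln, "-nfs", "#{shared_path}/.env", "#{release_path}/.env"
    end
  end
end

before "foreman:export", "foreman:symlink_env"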

Puma phased-restart fails when Gemfile is changed

I'm using Puma as the application server for my Rails 4 project on MRI 2.1.0, and Capistrano 3 to handle deployments. Everything is working like a charm. But I recently noticed an issue with my deployment process: if I change my Gemfile, Puma fails to complete a phased-restart and eventually all workers get killed. I'm running Puma in cluster mode and preload_app! is set to true.
Here is my Capistrano recipe to handle phased-restart.
desc "Restart the application (phased restart)"
task :phased_restart do
on roles(:app) do |h|
execute "cd #{fetch(:current_path)} && bundle exec pumactl -S #{fetch(:puma_state)} phased-restart", :pty => true
end
end
This is truncated output of Capistrano log.
DEBUG [4790766f] Command: cd /home/app/current && bundle exec pumactl -S /home/app/shared/tmp/pids/puma.state phased-restart
DEBUG [de00176a] Command phased-restart sent success
INFO [de00176a] Finished in 0.909 seconds with exit status 0 (successful).
This is my config/puma.rb file.
#!/usr/bin/env puma
require 'active_support'
environment 'production'
daemonize
pidfile '/home/app/shared/tmp/pids/puma.pid'
state_path '/home/app/shared/tmp/pids/puma.state'
stdout_redirect 'log/puma_stdout.log', 'log/puma_stderr.log'
threads 100, 100
bind 'tcp://0.0.0.0:9292'
bind 'unix:////home/app/shared/tmp/pids/puma.sock'
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
    ActiveRecord::Base.establish_connection(YAML.load_file("#{Rails.root}/config/database.yml")[Rails.env])
  end
end
workers 4
preload_app!
Does anybody see anything wrong in my puma config file?
So, currently I do bundle exec cap production deploy:start to start Puma when this happens. But I want a zero-downtime deployment in every case.
Can Puma keep using old worker processes in case new spawned processes couldn't be started?
Do you know that preload_app! conflicts with phased restarts?
Proof: https://github.com/puma/puma/blob/0ea7af5e2cc8fa192ec82934a4a47880bdb592f8/lib/puma/configuration.rb#L333-L335
I think first you need to decide which to use.
For doing a phased restart you need to enable the prune_bundler option and disable preload_app!
See https://github.com/puma/puma/blob/master/DEPLOYMENT.md#restarting
To do zero-downtime deploys with Capistrano, you can use the capistrano3-puma gem with the following options:
set :puma_preload_app, false
set :puma_prune_bundler, true
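At the config/puma.rb level, the corresponding change would look roughly like this (a sketch; prune_bundler is the Puma config-file counterpart of the capistrano3-puma option above):
# config/puma.rb -- sketch of the directives relevant to phased restarts
workers 4
prune_bundler    # lets a phased restart pick up the new Gemfile/bundle
# preload_app!   # removed: preloading conflicts with phased restarts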

Unicorn + Capistrano zero-downtime deploy -- Not switching to new release

The app deploys fine if I'm willing to tolerate downtime from manually stopping and starting Unicorn after a deployment. However, I want to use the zero-downtime Unicorn settings, but it isn't working because the new Unicorn process that starts up is looking at the old deployment release path. Nothing special, just a simple cap restart task in deploy.rb:
desc "Zero-downtime restart of Unicorn"
task :restart, :except => { :no_release => true } do
run "cd #{current_path}; #{try_sudo} kill -s USR2 `cat /var/www/appname/shared/pids/unicorn.pid`"
end
I know that it's looking at the wrong directory because the views don't change after a deploy, and if I set keep_releases to 1 or 2, the unicorn logs show an error because the directory it's trying to start up in was deleted:
/var/www/appname/shared/bundle/ruby/1.9.1/gems/unicorn-4.4.0/lib/unicorn/http_server.rb:425:in `chdir': No such file or directory - /var/www/appname/releases/20130330104246 (Errno::ENOENT)
I've been trying to debug this on and off for several weeks now. Any help getting this working is greatly appreciated!
Set this environment variable when starting unicorn
BUNDLE_GEMFILE=$APP_PATH/current/Gemfile
Otherwise it will point at a specific release directory, which will cause the behaviour you describe.
eg.
cd $APP_PATH/current && BUNDLE_GEMFILE=$APP_PATH/current/Gemfile bundle exec unicorn_rails -c $APP_PATH/current/config/unicorn.rb -E $RAILS_ENV -D
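An equivalent way to pin this inside the Unicorn config itself is a before_exec hook, so the re-exec'd master picks up the current Gemfile rather than the release it was originally started from (a sketch, assuming the same /var/www/appname layout as above):
# config/unicorn.rb -- a sketch
before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = '/var/www/appname/current/Gemfile'
end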
