I'm using Puma as the application server for my Rails 4 project on MRI 2.1.0, with Capistrano 3 handling deployments. Everything works like a charm. But I recently noticed an issue with my deployment process: if I change my Gemfile, Puma fails to complete a phased restart and eventually all workers get killed. I'm running Puma in cluster mode with preload_app! set to true.
Here is my Capistrano recipe to handle phased-restart.
desc "Restart the application (phased restart)"
task :phased_restart do
on roles(:app) do |h|
execute "cd #{fetch(:current_path)} && bundle exec pumactl -S #{fetch(:puma_state)} phased-restart", :pty => true
end
end
This is the truncated output of the Capistrano log.
DEBUG [4790766f] Command: cd /home/app/current && bundle exec pumactl -S /home/app/shared/tmp/pids/puma.state phased-restart
DEBUG [de00176a] Command phased-restart sent success
INFO [de00176a] Finished in 0.909 seconds with exit status 0 (successful).
This is my config/puma.rb file.
#!/usr/bin/env puma
require 'active_support'
environment 'production'
daemonize
pidfile '/home/app/shared/tmp/pids/puma.pid'
state_path '/home/app/shared/tmp/pids/puma.state'
stdout_redirect 'log/puma_stdout.log', 'log/puma_stderr.log'
threads 100, 100
bind 'tcp://0.0.0.0:9292'
bind 'unix:////home/app/shared/tmp/pids/puma.sock'
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
    ActiveRecord::Base.establish_connection(YAML.load_file("#{Rails.root}/config/database.yml")[Rails.env])
  end
end
workers 4
preload_app!
Does anybody see anything wrong in my puma config file?
So currently, when this happens, I do bundle exec cap production deploy:start to start Puma. But I want zero-downtime deployment in every case.
Can Puma keep using old worker processes in case new spawned processes couldn't be started?
Did you know that preload_app! conflicts with phased restarts?
Proof: https://github.com/puma/puma/blob/0ea7af5e2cc8fa192ec82934a4a47880bdb592f8/lib/puma/configuration.rb#L333-L335
I think you first need to decide which one to use.
To do a phased restart you need to enable the prune_bundler option and disable preload_app!
See https://github.com/puma/puma/blob/master/DEPLOYMENT.md#restarting
To do zero-downtime deploys with Capistrano, you can use the capistrano3-puma gem with the following options:
set :puma_preload_app, false
set :puma_prune_bundler, true
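For illustration, here is a minimal sketch of how the puma.rb above might look once adjusted for phased restarts (same paths, worker count, and thread count as the original config; treat it as an untested assumption, not a drop-in replacement):
# config/puma.rb -- sketch for phased restarts
environment 'production'
daemonize
pidfile '/home/app/shared/tmp/pids/puma.pid'
state_path '/home/app/shared/tmp/pids/puma.state'
stdout_redirect 'log/puma_stdout.log', 'log/puma_stderr.log'
threads 100, 100
workers 4
bind 'tcp://0.0.0.0:9292'

# A phased restart re-execs each worker against the current Gemfile, so the
# app must not be preloaded into the master and Bundler must be pruned.
prune_bundler

# With preload_app! removed, every worker boots the app itself, so the
# ActiveRecord reconnection hook in on_worker_boot is no longer needed.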
Related
When I do rails s, it successfully starts my server and serves the page at http://localhost:3000, but the problem is that it doesn't show any log output when I access anything from this server.
I have accessed many pages, but it still doesn't print anything. I have restarted the Rails server and have killed all the Rails server processes using:
ps aux | grep 'rails' | grep -v 'grep' | awk '{ print $2 }' | xargs kill -9
I have quit and restarted iTerm, but nothing helps.
I'm using Rails 5.0.0 and Ruby 2.2.3.
There was a problem in my config/puma.rb file, and it took me a whole night to figure it out. Anyway, the correct Puma configuration for running your server in development is:
workers Integer(ENV['WEB_CONCURRENCY'] || 2)
threads_count = Integer(ENV['RAILS_MAX_THREADS'] || 5)
threads threads_count, threads_count
preload_app!
rackup DefaultRackup
port ENV['PORT'] || 3000
environment ENV['RACK_ENV'] || 'development'
on_worker_boot do
  # Worker specific setup for Rails 4.1+
  # See: https://devcenter.heroku.com/articles/deploying-rails-applications-with-the-puma-web-server#on-worker-boot
  ActiveRecord::Base.establish_connection
end
I had the same problem. There was an error in my puma.rb file, and I figured out the problem by copying the following content into puma.rb. After that, restart your server: you will now see logs for every request, and you can also stop at a breakpoint by placing one in your code, which was not possible with the previous configuration.
# Code goes here
workers 2
# Min and Max threads per worker
threads 1, 6
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
# Default to production
rails_env = ENV['RAILS_ENV'] || "production"
environment rails_env
# Set up socket location
bind "unix://#{shared_dir}/sockets/puma.sock"
# Logging
stdout_redirect "#{shared_dir}/log/puma.stdout.log", "#{shared_dir}/log/puma.stderr.log", true
# Set master PID and state locations
pidfile "#{shared_dir}/pids/puma.pid"
state_path "#{shared_dir}/pids/puma.state"
activate_control_app
Add this to your development.rb file. I have my app on Rails 6.
config.logger = Logger.new(STDOUT)
In my case, thanks to @mick's comment here, my logs started appearing when I ran the app through rails s (precisely bundle exec rails s).
I will update this answer after debugging why there is a difference, but my hunch is that running it through rails s routes the log output to STDOUT when the server is not running in daemonized mode.
I have a Rails (4.2.4) app running with Puma. The documentation says:
If an environment is specified, either via the -e and --environment flags, or through the RACK_ENV environment variable, the default file location will be config/puma/environment_name.rb.
And so that's where my configuration files are.
In my development environment, puma starts up just fine. Here is the configuration:
workers 1
threads 1, 6
environment 'development'
activate_control_app
My production environment has problems however. Here's the configuration:
workers 1
threads 1, 6
environment 'production'
app_dir = File.expand_path('../../..', __FILE__)
bind "unix://#{app_dir}/tmp/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log",
"#{app_dir}/log/puma.stderr.log",
true
pidfile "#{app_dir}/tmp/pids/puma.pid"
state_path "#{app_dir}/tmp/pids/puma.state"
activate_control_app
daemonize true
This is obviously more complicated, mostly to make use of sockets, logging, and daemonization. I know it works, though, because it starts just fine with the following command:
$ bundle exec pumactl start
So far so good. But if I want to stop or restart the server like the above command, I get the following message:
$ bundle exec pumactl stop
Neither pid nor control url available
If I specify the location of the configuration file it works:
$ bundle exec pumactl -F config/puma/production.rb stop
Why do I need to specify the configuration file for stop and restart but not start?
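(For what it's worth, pumactl can also locate a running server through its state file rather than the config file, the same way the Capistrano recipe at the top of this page does. Assuming the state_path from the production config above and running from the app root, that would be:
$ bundle exec pumactl -S tmp/pids/puma.state stop
)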
I have two Resque commands that I'd like to wire into Capistrano so I can run them successfully on the server. I've checked by running them manually that they both work; however, if I try to keep them running continuously this way, I end up with a broken pipe.
I'd like to be able to start resque:
queue=* rake environment resque:work
and start resque-scheduler:
rake environment resque:scheduler
Does anybody know how I can implement this in my deploy.rb file?
Try the capistrano-resque gem, which should do exactly this (it includes support for resque-scheduler).
After setting it up, you'll get these Capistrano tasks:
➔ cap -vT | grep resque
cap resque:status # Check workers status
cap resque:start # Start Resque workers
cap resque:stop # Quit running Resque workers
cap resque:restart # Restart running Resque workers
cap resque:scheduler:restart #
cap resque:scheduler:start # Starts Resque Scheduler with default configs
cap resque:scheduler:stop # Stops Resque Schedule
(I currently help maintain this gem, so if you have any trouble with it, just file an issue and I'll take a look.)
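For a rough idea of the setup (this is from memory of the gem's README for Capistrano 3, so double-check the exact option names there; the hostname and worker count below are placeholders):
# Capfile
require 'capistrano/resque'

# config/deploy.rb
role :resque_worker, 'app.example.com'
role :resque_scheduler, 'app.example.com'

set :workers, { '*' => 1 }           # queue name => number of workers
set :resque_environment_task, true   # run `rake environment resque:work` so Rails is loaded

# restart workers whenever you deploy
after 'deploy:restart', 'resque:restart'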
I've been using this gem for a while and just took the dive to try deploying an actual staging environment to my staging server, and I ran into issues. Unicorn starts with the unicorn_rails command and -E production despite all the settings being correct, as far as I know.
I noticed in deploy.rb that my unicorn_bin variable was set to unicorn_rails. I took this setting out of my deploy.rb. However, unicorn:duplicate still executes the unicorn_rails command, when the default should be unicorn.
My vars are all set to staging in the deploy/staging.rb, as outlined in the multistage setup wiki document, but I noticed -E is still getting set to production.
Relevant info:
Here's my output from my unicorn.log file after a deploy:
executing ["/var/www/apps/myapp/shared/bundle/ruby/2.0.0/bin/unicorn_rails", "-c", "/var/www/apps/bundio/current/config/unicorn.rb", "-E", "production", "-D", {12=>#<Kgio::UNIXServer:/tmp/bundio.socket>, 13=>#<Kgio::TCPServer:fd 13>}] (in /var/www/apps/bundio/current)
Here's the output from cap -T (defaults to staging)
# Environments
rails_env "staging"
unicorn_env "staging"
unicorn_rack_env "staging"
# Execution
unicorn_user nil
unicorn_bundle "/usr/local/rvm/gems/ruby-2.0.0-p247#global/bin/bundle"
unicorn_bin "unicorn"
unicorn_options ""
unicorn_restart_sleep_time 2
# Relative paths
app_subdir ""
unicorn_config_rel_path "config"
unicorn_config_filename "unicorn.rb"
unicorn_config_rel_file_path "config/unicorn.rb"
unicorn_config_stage_rel_file_path "config/unicorn/staging.rb"
# Absolute paths
app_path "/var/www/apps/myapp/current"
unicorn_pid "/var/www/apps/myapp/shared/pids/unicorn.myapp.pid"
bundle_gemfile "/var/www/apps/myapp/current/Gemfile"
unicorn_config_path "/var/www/apps/myapp/current/config"
unicorn_config_file_path "/var/www/apps/myapp/current/config/unicorn.rb"
unicorn_config_stage_file_path
-> "/var/www/apps/myapp/current/config/unicorn/staging.rb"
And another curiosity: the unicorn_rails -E flag should reference the Rails environment, whereas unicorn -E should reference the Rack env. The Rack env should only take the values development or deployment, but it gets set to production, which is a bit strange (see the Unicorn docs for the settings of the RACK_ENV variable).
Any insight into this would be much appreciated. On my staging server, I've also set RAILS_ENV to staging, and I've set up the usual Rails pieces for the extra environment, like adding staging.rb to my environments folder, adding a staging section to database.yml, etc.
Important lines in lib/capistrano-unicorn/config.rb talking about unicorn_rack_env:
_cset(:unicorn_env) { fetch(:rails_env, 'production') }
_cset(:unicorn_rack_env) do
  # Following recommendations from http://unicorn.bogomips.org/unicorn_1.html
  fetch(:rails_env) == 'development' ? 'development' : 'deployment'
end
Thanks in advance.
OK, after a long time without the correct environment, I have discovered the issue!
Basically, my init scripts were running BEFORE the capistrano-unicorn tasks were doing their thing.
So, make sure that any init.d or upstart scripts that manage Unicorn and its workers are taken into account when capistrano-unicorn runs its restart / reload / duplication tasks.
I did not think to look at these scripts when I had to debug the stale-pid-file / already-running / unable-to-listen-on-socket errors. But it makes sense: upstart starts Unicorn whenever it is not running, and then capistrano-unicorn is also attempting to start Unicorn.
I have now combined these capistrano tasks and hooks with Monit and a Unicorn init script.
Capistrano tasks:
namespace :monit do
  desc 'wait 20 seconds'
  task :wait_20_seconds do
    sleep 20
  end

  task :monitor_all, :roles => :app do
    sudo "monit monitor all"
  end

  task :unmonitor_all, :roles => :app do
    sudo "monit unmonitor all"
  end

  desc 'monitor unicorn in the monit rc file'
  task :monitor_unicorn, :roles => :app do
    sudo "monit monitor unicorn"
  end

  desc 'unmonitor unicorn in the monit rc file'
  task :unmonitor_unicorn, :roles => :app do
    sudo "monit unmonitor unicorn"
  end
end
Capistrano hooks:
after 'deploy:restart', 'unicorn:duplicate' # app preloaded. check https://github.com/sosedoff/capistrano-unicorn section for zero downtime
before 'deploy', "monit:unmonitor_unicorn"
before 'deploy:migrations', "monit:unmonitor_unicorn"
after 'deploy', 'monit:wait_20_seconds'
after "deploy:migrations", "monit:wait_20_seconds"
after 'monit:wait_20_seconds', 'monit:monitor_unicorn'
I use Monit to monitor my unicorn process:
Within /etc/monit/monitrc:
check process unicorn
  with pidfile /var/www/apps/my_app/shared/pids/mypid.pid
  start program = "/usr/bin/sudo service unicorn start"
  stop program = "/usr/bin/sudo service unicorn stop"
Within your init script, you will start the unicorn process with something like:
unicorn_rails -c /var/www/apps/my_app/current/config/unicorn.rb -E staging -D
Make sure the -E flag is set to the correct environment. The capistrano-unicorn gem has directives using :set within deploy.rb which allow you to specify the environment for that unicorn process.
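For example, pinning staging explicitly might look like this (a sketch based on the variables shown in the cap -T dump above; put it in deploy.rb or the relevant stage file):
# config/deploy/staging.rb
set :rails_env, 'staging'
set :unicorn_env, 'staging'
set :unicorn_rack_env, 'staging'   # the gem otherwise defaults this to 'deployment' for non-development
set :unicorn_bin, 'unicorn'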
I can't get my app to successfully start at Heroku. Here's the basics:
Ruby 1.9.3p392 (this is what ruby -v returns in my dev terminal; however, the Heroku logs seem to indicate Ruby 2.0.0)
Rails 3.2.13
Unicorn Web Server
Postgresql DB
I have deployed my app to Heroku, but I'm getting "An error occurred in the application and your page could not be served."
Here's the final entries in the Heroku log:
+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'
+00:00 heroku[web.1]: Process exited with status 1
+00:00 heroku[web.1]: State changed from starting to crashed
When I try to run heroku ps, I get:
=== web (1X): `bundle exec unicorn -p $PORT -c ./config/unicorn.rb`
web.1: crashed 2013/06/22 17:31:22 (~ 6m ago)
I think it's possible the problem stems from this line in my app/config/application.rb:
ENV.update YAML.load(File.read(File.expand_path('../application.yml', __FILE__)))
This line is useful in dev to read my environment variables from my application.yml file. However, for security purposes I gitignore it from my repo, and I can see the Heroku logs complain that this file is not found. For production, I have set my environment variables on Heroku via:
heroku config:add SECRET_TOKEN=a_really_long_number
Here's my app/config/unicorn.rb
# config/unicorn.rb
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true
before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
And here's my Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
Both my app/config/unicorn.rb and Procfile settings come from https://devcenter.heroku.com/articles/rails-unicorn
Based on some IRC guidance, I installed Figaro, but alas that did not resolve the issue.
If you want to see the full app, it's posted at: https://github.com/mxstrand/mxspro
If you have guidance on what might be wrong or how I might troubleshoot further I'd appreciate it. Thank you.
You're spot on with your analysis. I've just pulled your code, made some tweaks, and now have it running on Heroku.
My only changes:
config/application.rb - moved lines 12 & 13 to config/environments/development.rb. If you're using application.yml for development environment variables, then keep it that way. The other option is to make line 13 conditional on your development environment by adding if Rails.env.development? at the end (see the sketch below).
config/environments/production.rb - line 33 is missing a preceding # mark.
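A minimal sketch of that conditional-load option, assuming the line stays in config/application.rb (if you move it into config/environments/development.rb instead, the relative path to application.yml changes accordingly):
# config/application.rb -- only read application.yml in development
if Rails.env.development?
  ENV.update YAML.load(File.read(File.expand_path('../application.yml', __FILE__)))
end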