I have a Rails (4.2.4) app running with Puma. The documentation says:
If an environment is specified, either via the -e and --environment flags, or through the RACK_ENV environment variable, the default file location will be config/puma/environment_name.rb.
And so that's where my configuration files live: config/puma/development.rb and config/puma/production.rb.
In my development environment, puma starts up just fine. Here is the configuration:
workers 1
threads 1, 6
environment 'development'
activate_control_app
My production environment has problems however. Here's the configuration:
workers 1
threads 1, 6
environment 'production'
app_dir = File.expand_path('../../..', __FILE__)
bind "unix://#{app_dir}/tmp/sockets/puma.sock"
stdout_redirect "#{app_dir}/log/puma.stdout.log",
                "#{app_dir}/log/puma.stderr.log",
                true
pidfile "#{app_dir}/tmp/pids/puma.pid"
state_path "#{app_dir}/tmp/pids/puma.state"
activate_control_app
daemonize true
This is obviously more complicated, mostly to make use of sockets, logging, and daemonization. I know it works, though, because the server starts just fine with the following command:
$ bundle exec pumactl start
So far so good. But if I try to stop or restart the server the same way, I get the following message:
$ bundle exec pumactl stop
Neither pid nor control url available
If I specify the location of the configuration file, it works:
$ bundle exec pumactl -F config/puma/production.rb stop
Why do I need to specify the configuration file for stop and restart but not start?
I am trying to start clockwork in production with the following command: sudo bundle exec clockwork config/clock.rb. But it throws the following error:
ActiveRecord::AdapterNotSpecified: 'development' database is not configured. Available: ["production"].
It works correctly locally. We have Puma and JRuby set up on the server.
Add the RAILS_ENV variable:
RAILS_ENV=production sudo bundle exec clockwork config/clock.rb
Or, set the RAILS_ENV variable in your .bashrc:
echo 'export RAILS_ENV=production' >> ~/.bashrc
Then exit the shell and log in again. From then on, clockwork (and everything else Rails-related) should run in production mode, and you can use the command you were using originally.
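A quick way to confirm the variable is being picked up (a sketch; rails runner simply prints the active environment):
RAILS_ENV=production bundle exec rails runner 'puts Rails.env'
# => production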
I'm using Puma as the application server for my Rails 4 project on MRI 2.1.0, with Capistrano 3 handling deployments. Everything works like a charm, but I recently noticed an issue with my deployment process: if I change my Gemfile, Puma fails to complete a phased restart and eventually all workers get killed. I'm running Puma in cluster mode with preload_app! set to true.
Here is my Capistrano recipe to handle phased-restart.
desc "Restart the application (phased restart)"
task :phased_restart do
on roles(:app) do |h|
execute "cd #{fetch(:current_path)} && bundle exec pumactl -S #{fetch(:puma_state)} phased-restart", :pty => true
end
end
This is the truncated output of the Capistrano log.
DEBUG [4790766f] Command: cd /home/app/current && bundle exec pumactl -S /home/app/shared/tmp/pids/puma.state phased-restart
DEBUG [de00176a] Command phased-restart sent success
INFO [de00176a] Finished in 0.909 seconds with exit status 0 (successful).
This is my config/puma.rb file.
#!/usr/bin/env puma
require 'active_support'
environment 'production'
daemonize
pidfile '/home/app/shared/tmp/pids/puma.pid'
state_path '/home/app/shared/tmp/pids/puma.state'
stdout_redirect 'log/puma_stdout.log', 'log/puma_stderr.log'
threads 100, 100
bind 'tcp://0.0.0.0:9292'
bind 'unix:///home/app/shared/tmp/pids/puma.sock'
on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
    ActiveRecord::Base.establish_connection(YAML.load_file("#{Rails.root}/config/database.yml")[Rails.env])
  end
end
workers 4
preload_app!
Does anybody see anything wrong in my puma config file?
So currently, when this happens, I run bundle exec cap production deploy:start to start Puma again. But I want zero-downtime deployment in every case.
Can Puma keep using the old worker processes if the newly spawned processes can't be started?
Do you know that preload_app! conflicts with phased restarts?
Proof: https://github.com/puma/puma/blob/0ea7af5e2cc8fa192ec82934a4a47880bdb592f8/lib/puma/configuration.rb#L333-L335
I think you first need to decide which one to use. To do phased restarts, you need to enable the prune_bundler option and disable preload_app!
See https://github.com/puma/puma/blob/master/DEPLOYMENT.md#restarting
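As a sketch, the phased-restart-relevant parts of the question's config/puma.rb would become something like this (same paths and counts as above; prune_bundler added, preload_app! removed):
# config/puma.rb -- sketch for phased restarts
# prune_bundler cannot be combined with preload_app!;
# it lets phased-restarted workers pick up Gemfile changes.
prune_bundler
environment 'production'
pidfile '/home/app/shared/tmp/pids/puma.pid'
state_path '/home/app/shared/tmp/pids/puma.state'
workers 4
threads 100, 100
bind 'tcp://0.0.0.0:9292'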
To do zero-downtime deploys with Capistrano, you can use the capistrano3-puma gem with the following options in your config/deploy.rb:
set :puma_preload_app, false
set :puma_prune_bundler, true
I am running puma like this:
puma -e production -b unix://blahblah.sock
Setting up config files etc. is going to take a little more time, so I want to go to production with just this for now. Of course, I'm using nginx in front.
Where can I find the error logs? Or is there another option I should include to get error logging working?
It's good practice to create a config/puma.rb file where you can manage the related configuration settings.
Set up Puma
Puma can be configured just by providing command-line arguments, but it's more convenient to bake the configuration settings into a configuration file and then provide that file with a single -C command line directive. A simple puma.rb that works with MRI Ruby is:
#!/usr/bin/env puma
# start puma with:
# RAILS_ENV=production bundle exec puma -C ./config/puma.rb
# Rails isn't loaded when Puma reads this file, so derive the
# application path from the config file's location rather than Rails.root.
application_path = File.expand_path('../..', __FILE__)
railsenv = 'production'
directory application_path
environment railsenv
daemonize true
pidfile "#{application_path}/tmp/pids/puma-#{railsenv}.pid"
state_path "#{application_path}/tmp/pids/puma-#{railsenv}.state"
stdout_redirect "#{application_path}/log/puma-#{railsenv}.stdout.log",
                "#{application_path}/log/puma-#{railsenv}.stderr.log"
threads 0, 16
bind "unix://#{application_path}/tmp/sockets/#{railsenv}.socket"
With this configuration, Puma's output and errors are written to the puma-production.stdout.log and puma-production.stderr.log files set up by stdout_redirect above. For more info, please refer to Puma and Nginx production stack.
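To stop or restart the daemonized server later, you can point pumactl at the same config file (the same -F pattern used in the first question above), e.g.:
RAILS_ENV=production bundle exec pumactl -F ./config/puma.rb stop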
I've been using this gem for a while and just took the dive to try deploying an actual staging environment to my staging server, and I ran into issues. Unicorn starts with the unicorn_rails command and -E production, despite all the settings being correct as far as I can tell.
I noticed in deploy.rb that my unicorn_bin variable was set to unicorn_rails. I took that setting out of my deploy.rb. However, unicorn:duplicate still executes the unicorn_rails command, when the default should be unicorn.
My vars are all set to staging in the deploy/staging.rb, as outlined in the multistage setup wiki document, but I noticed -E is still getting set to production.
Relevant info:
Here's my output from my unicorn.log file after a deploy:
executing ["/var/www/apps/myapp/shared/bundle/ruby/2.0.0/bin/unicorn_rails", "-c", "/var/www/apps/bundio/current/config/unicorn.rb", "-E", "production", "-D", {12=>#<Kgio::UNIXServer:/tmp/bundio.socket>, 13=>#<Kgio::TCPServer:fd 13>}] (in /var/www/apps/bundio/current)
Here's the output from cap -T (defaults to staging)
# Environments
rails_env "staging"
unicorn_env "staging"
unicorn_rack_env "staging"
# Execution
unicorn_user nil
unicorn_bundle "/usr/local/rvm/gems/ruby-2.0.0-p247#global/bin/bundle"
unicorn_bin "unicorn"
unicorn_options ""
unicorn_restart_sleep_time 2
# Relative paths
app_subdir ""
unicorn_config_rel_path "config"
unicorn_config_filename "unicorn.rb"
unicorn_config_rel_file_path "config/unicorn.rb"
unicorn_config_stage_rel_file_path "config/unicorn/staging.rb"
# Absolute paths
app_path "/var/www/apps/myapp/current"
unicorn_pid "/var/www/apps/myapp/shared/pids/unicorn.myapp.pid"
bundle_gemfile "/var/www/apps/myapp/current/Gemfile"
unicorn_config_path "/var/www/apps/myapp/current/config"
unicorn_config_file_path "/var/www/apps/myapp/current/config/unicorn.rb"
unicorn_config_stage_file_path
-> "/var/www/apps/myapp/current/config/unicorn/staging.rb"
And another curiosity: the unicorn_rails -E flag should reference the Rails environment, whereas unicorn -E should reference the Rack environment. The Rack env should only take the values development and deployment, but here it gets set to production, which is a bit strange (see the Unicorn docs for the settings of the RACK_ENV variable).
Any insight into this would be much appreciated. On my staging server, I've also set RAILS_ENV to staging, and I've set up the Rails side for the extra environment: adding staging.rb to my environments folder, adding a staging section to database.yml, etc.
Important lines in lib/capistrano-unicorn/config.rb talking about unicorn_rack_env:
_cset(:unicorn_env) { fetch(:rails_env, 'production') }
_cset(:unicorn_rack_env) do
  # Following recommendations from http://unicorn.bogomips.org/unicorn_1.html
  fetch(:rails_env) == 'development' ? 'development' : 'deployment'
end
Thanks in advance.
Ok, after a long time not having the correct environment, I have discovered the issue!
Basically, my init scripts were running BEFORE my capistrano-unicorn bin was doing its thing.
So, make sure that any init.d or upstart scripts managing Unicorn and its workers are taken into account when capistrano-unicorn runs its restart / reload / duplicate tasks.
I did not think to look at these scripts when I had to debug the stale pid file / already running / unable to listen on socket errors. But it makes sense, as upstart starts Unicorn when it is not running, and then capistrano-unicorn is also attempting to start Unicorn.
I have now combined these capistrano tasks and hooks with Monit and a Unicorn init script.
Capistrano tasks:
namespace :monit do
  desc 'wait 20 seconds'
  task :wait_20_seconds do
    sleep 20
  end

  task :monitor_all, :roles => :app do
    sudo "monit monitor all"
  end

  task :unmonitor_all, :roles => :app do
    sudo "monit unmonitor all"
  end

  desc 'monitor unicorn in the monit rc file'
  task :monitor_unicorn, :roles => :app do
    sudo "monit monitor unicorn"
  end

  desc 'unmonitor unicorn in the monit rc file'
  task :unmonitor_unicorn, :roles => :app do
    sudo "monit unmonitor unicorn"
  end
end
Capistrano hooks:
after 'deploy:restart', 'unicorn:duplicate' # app preloaded. check https://github.com/sosedoff/capistrano-unicorn section for zero downtime
before 'deploy', "monit:unmonitor_unicorn"
before 'deploy:migrations', "monit:unmonitor_unicorn"
after 'deploy', 'monit:wait_20_seconds'
after "deploy:migrations", "monit:wait_20_seconds"
after 'monit:wait_20_seconds', 'monit:monitor_unicorn'
I use Monit to monitor my unicorn process:
Within /etc/monit/monitrc:
check process unicorn
  with pidfile /var/www/apps/my_app/shared/pids/mypid.pid
  start program = "/usr/bin/sudo service unicorn start"
  stop program = "/usr/bin/sudo service unicorn stop"
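After editing monitrc, reload Monit so it picks up the new check:
sudo monit reload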
Within your init script, you will start the unicorn process with something like:
unicorn_rails -c /var/www/apps/my_app/current/config/unicorn.rb -E staging -D
Make sure the -E flag is set to the correct environment. The capistrano-unicorn gem has directives you can set within deploy.rb that let you specify the environment for that Unicorn process.
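For example, a deploy/staging.rb could pin these explicitly (a sketch, using the variable names from the cap -T output above):
# config/deploy/staging.rb -- sketch
set :rails_env, 'staging'
set :unicorn_env, 'staging'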
what is the "foreman way" for behaving differently in production vs
development? That is we want foreman start to start up a bunch of
stuff in dev, however in heroku production we don't need it to start
(for example) solr.
I follow this convention:
Procfile defines all processes
.foreman sets specific foreman variables
Development:
.env sets environment variables for each developer
.env.example sets defaults for development
foreman start starts all processes
Production:
heroku config sets environment variables
heroku ps:scale turns on or off whichever processes are needed for production (see the scaling example after the files below)
Here's an example from a project.
Procfile:
web: bundle exec unicorn_rails -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
search: bundle exec rake sunspot:solr:run
.env.example:
# default S3 bucket
S3_KEY=keykeykeykeykeykey
S3_SECRET=secretsecretsecret
S3_BUCKET=myapp-development
.env:
# developer's private S3 bucket
S3_KEY=mememememememememe
S3_SECRET=mysecretmysecret
S3_BUCKET=myapp-development
.foreman:
# development port is 3000
port: 3000
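And the production scaling step mentioned in the list above, with hypothetical counts (search=0 keeps Solr from running on Heroku):
heroku ps:scale web=2 worker=1 search=0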
Foreman takes arguments to use a different Procfile (-f) and arguments to specify what to run. It also supports a .foreman file that allows those args to become defaults. See http://ddollar.github.com/foreman/ for more info.
I've used environment-specific Procfiles before, which is pretty simple and works fine.
Basically you have Procfile.development, Procfile.production, etc. In each you can customize the procs you want to start, then run them via foreman like so:
foreman start -f Procfile.development
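For instance (hypothetical contents, reusing the process definitions from the answer above):
# Procfile.development
web: bundle exec rails server -p 3000
search: bundle exec rake sunspot:solr:run

# Procfile.production
web: bundle exec unicorn_rails -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work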
Another approach is to reference scripts in your Procfile, and within each script start up the appropriate process based on the environment. The creator of Foreman does this and has an example from his Anvil project you can reference.
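A hypothetical sketch of that approach: the Procfile entry points at a small wrapper script, and the script decides what to start based on the environment.
# Procfile
web: bin/web

# bin/web -- hypothetical wrapper script (make it executable)
#!/bin/sh
if [ "$RAILS_ENV" = "production" ]; then
  exec bundle exec unicorn_rails -p $PORT -c ./config/unicorn.rb
else
  exec bundle exec rails server -p "${PORT:-3000}"
fi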
Our solution was to use a separate job type in our Procfile for dev vs. production. It is not the most DRY method, but it works...
sidekiq: bundle exec sidekiq
sidekiq-prod: bundle exec sidekiq -e production
Then we run foreman specifying the prod job on the production system:
foreman start -c sidekiq-prod=4