Rails Unicorn Web Server Won't Start on Heroku - ruby-on-rails

I can't get my app to successfully start at Heroku. Here's the basics:
Ruby 1.9.3p392 (when I run ruby -v in my dev terminal this is what is returned; however, the Heroku logs seem to indicate Ruby 2.0.0)
Rails 3.2.13
Unicorn Web Server
Postgresql DB
I have deployed my app to Heroku but am getting "An error occurred in the application and your page could not be served."
Here's the final entries in the Heroku log:
+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'
+00:00 heroku[web.1]: Process exited with status 1
+00:00 heroku[web.1]: State changed from starting to crashed
When I try to run heroku ps, I get:
=== web (1X): `bundle exec unicorn -p $PORT -c ./config/unicorn.rb`
web.1: crashed 2013/06/22 17:31:22 (~ 6m ago)
I think it's possible the problem stems from this line in my config/application.rb:
ENV.update YAML.load(File.read(File.expand_path('../application.yml', __FILE__)))
This line is useful in dev to read my environment variables from my application.yml file. However, for security purposes I gitignore it from my repo, and I can see the Heroku logs complain that this file is not found. For production, I have set my environment variables at Heroku via:
heroku config:add SECRET_TOKEN=a_really_long_number
Here's my config/unicorn.rb:
# config/unicorn.rb
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true
before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
And here's my Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
Both my config/unicorn.rb and Procfile settings come from https://devcenter.heroku.com/articles/rails-unicorn
Based on some IRC guidance, I installed Figaro, but alas that did not resolve the issue.
If you want to see the full app, it's posted at: https://github.com/mxstrand/mxspro
If you have guidance on what might be wrong or how I might troubleshoot further I'd appreciate it. Thank you.

You're spot on with your analysis. I've just pulled your code, made some tweaks, and now have it started on Heroku.
My only changes:
config/application.rb - moved lines 12 & 13 to config/environments/development.rb. If you're using application.yml for development environment variables, keep it that way. The other option is to make line 13 conditional on your development environment with if Rails.env.development? at the end.
config/environments/production.rb - line 33 is missing a preceding # mark
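For the conditional approach, a guard like the following would also work (the helper name is mine, not from the question): it merges application.yml into ENV only when the file actually exists, so the gitignored file's absence on Heroku no longer crashes boot.

```ruby
require 'yaml'

# Hypothetical guard for config/application.rb: merge application.yml into
# ENV only when the file exists. The file is gitignored, so it is absent on
# Heroku, where config vars set via `heroku config:add` populate ENV instead.
def load_env_from_yaml(path)
  return unless File.exist?(path)
  ENV.update(YAML.load(File.read(path)))
end
```

In config/application.rb the call would be load_env_from_yaml(File.expand_path('../application.yml', __FILE__)).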


Unicorn & Rails & Supervisor + Capistrano deploys: Stopping and starting unicorn gracefully

This caused me no end of pain due to Unicorn's unexpected behaviour, so I thought I'd leave my solution here for anyone else.
I'm running
- Ruby 2.3.1
- Ubuntu 16.04
- Unicorn 5.3.0
- Supervisor
The main issue I had was with the graceful restart. I was sending the following to the supervisor process:
supervisorctl signal USR2 unicorn
Unicorn would process the signal, and gracefully restart. However - every so often, developers would complain that their code hadn't loaded, and a little more infrequently, the entire restart process would fail.
What was happening was that Unicorn, when signalled, was not honouring the working_directory and was instead attempting to reload from the previous symlink target. This was despite the working directory in the Unicorn config being set to /path/to/app/current rather than the underlying /path/to/release/directory.
Unicorn would restart the workers from the previously deployed application, and once the initial release directory was purged from the system (i.e. Capistrano being set to keep X releases), the restart would fail, because the release directory Unicorn had originally been started in no longer existed.
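The symlink pitfall is easy to reproduce in plain Ruby (the helper and paths below are illustrative, not from the Unicorn source): chdir through a symlink records the resolved target, so re-pointing the link later has no effect on a long-running process.

```ruby
require 'tmpdir'
require 'fileutils'

# Returns the physical working directory seen after chdir-ing through a
# symlink. The kernel resolves the link at chdir time, which mirrors how a
# long-running Unicorn master keeps pointing at the original release dir.
def resolved_cwd_through(symlink)
  Dir.chdir(symlink) { Dir.pwd }
end

Dir.mktmpdir do |root|
  release = File.join(root, 'release_a')
  FileUtils.mkdir_p(release)
  current = File.join(root, 'current')
  File.symlink(release, current)
  puts resolved_cwd_through(current)  # the release directory, not .../current
end
```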
The key is to add the following in the unicorn.conf.rb file:
Unicorn::HttpServer::START_CTX[:cwd] = "/path/to/current"
Unicorn::HttpServer::START_CTX[0] = "/path/to/your/unicorn-rails/binary"
You also need to explicitly define the Bundler Gemfile path (as Unicorn on load was setting this to the initial release directory, again ignoring /path/to/current):
before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "/path/to/current/Gemfile"
end
I've posted my entire configuration below:
# Unicorn supervisor config
[program:unicorn]
command=/bin/start_unicorn.sh
stdout_logfile=/var/log/supervisor/program_supervise_unicorn.log
stderr_logfile=/var/log/supervisor/program_supervise_unicorn.error
priority=100
user=web_user
directory=/srv/applications/ewok/current
autostart=true
autorestart=true
stopsignal=QUIT
# Unicorn start script (start_unicorn.sh)
#!/bin/bash
source /etc/profile
cd /srv/applications/ewok/current
exec bundle exec unicornherder -u unicorn_rails -p /srv/applications/ewok/shared/pids/unicorn.pid -- -c config/unicorn.conf.rb -E $RAILS_ENV
exit 0
# My unicorn.conf.rb
APP_PATH = "/srv/applications/ewok/current"
SHARED_PATH = "/srv/applications/ewok/shared"
Unicorn::HttpServer::START_CTX[:cwd] = APP_PATH
Unicorn::HttpServer::START_CTX[0] = "/usr/local/rvm/gems/ruby-2.3.1@ewok/bin/unicorn_rails"
worker_processes 3
working_directory APP_PATH
listen "unix:#{SHARED_PATH}/pids/.unicorn.sock", :backlog => 64
listen 8080, :tcp_nopush => true
timeout 30
pid "#{SHARED_PATH}/pids/unicorn.pid"
old_pid = "#{SHARED_PATH}/pids/unicorn.pid.oldbin"
stderr_path "#{SHARED_PATH}/log/unicorn.stderr.log"
stdout_path "#{SHARED_PATH}/log/unicorn.stdout.log"
preload_app true
GC.respond_to?(:copy_on_write_friendly=) and
  GC.copy_on_write_friendly = true

#check_client_connection false

run_once = true

before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "#{APP_PATH}/Gemfile"
end

before_fork do |server, worker|
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!

  run_once = false if run_once # prevent from firing again

  # Before forking, kill the master process that belongs to the .oldbin PID.
  # This enables 0 downtime deploys.
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
Hope this helps somebody, as I was about to consider monkey-patching Unicorn itself before I fixed it.

Ruby on Rails: Heroku asset compilation times out when I add the unicorn gem

-----> Preparing app for Rails asset pipeline
Running: rake assets:precompile
! Timed out compiling Ruby app (15 minutes)
! See https://devcenter.heroku.com/articles/slug-compiler#time-limit
When I delete the unicorn gem from the Gemfile, assets:precompile works... How can I fix this?
Since this seems like a setup issue to me, I'll give you the guides I've been looking at.
The way I would solve this would be to google the error. When I typed in heroku unicorn setup I got this page: https://devcenter.heroku.com/articles/rails-unicorn
UPDATE: The error I was struggling with was an asset compilation issue. That was brought to my attention by using inspect element on my webpage. So I went through a few guides again to make sure the assets precompiled (I can only give the names since I already have two links):
Deploying Rails Applications with Unicorn
Getting Started with Rails 4.x on Heroku
SQLite on Heroku (I had to move my app off SQLite)
Rails 4 Asset Pipeline on Heroku
Otherwise, here's the guide I've been following:
below are some instructions for getting your app onto heroku...
Heroku specific instructions:
NOTE: everything from below this point will be for getting your app ready for deployment onto Heroku.
To enable features such as static asset serving and logging on Heroku add rails_12factor gem to the end of your Gemfile.
gem 'rails_12factor', group: :production
Specify a specific Ruby version. At the end of your Gemfile add...
ruby "2.1.2"
Add Unicorn Webserver to your Gemfile:
Inside your gemfile
gem 'unicorn'
Then run
$ bundle install
Now you are ready to configure your app to use Unicorn.
Create a configuration file for Unicorn at config/unicorn.rb:
$ touch config/unicorn.rb
Add Unicorn specific configuration options In file config/unicorn.rb:
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true
before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
This default configuration assumes a standard Rails app with Active Record. You should get acquainted with the different options in the official Unicorn documentation.
Finally you will need to tell Heroku how to run your Rails app by creating a Procfile in the root of your application directory.
Add a Procfile:
touch Procfile
(note: the case is important!)
in the Procfile write:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
Set RACK_ENV to development in your environment and a PORT to connect to. Before pushing to Heroku you'll want to test with RACK_ENV set to production, since this is the environment your Heroku app will run in.
$ echo "RACK_ENV=development" >> .env
$ echo "PORT=3000" >> .env
You'll also want to add .env to your .gitignore since this is for local environment setup.
$ echo ".env" >> .gitignore
$ git add .gitignore
$ git commit -m "add .env to .gitignore"
Test your Procfile locally using Foreman:
$ gem install foreman
You can now start your web server by running
$ foreman start
18:24:56 web.1 | I, [2013-03-13T18:24:56.885046 #18793] INFO -- : listening on addr=0.0.0.0:5000 fd=7
18:24:56 web.1 | I, [2013-03-13T18:24:56.885140 #18793] INFO -- : worker=0 spawning...
18:24:56 web.1 | I, [2013-03-13T18:24:56.885680 #18793] INFO -- : master process ready
18:24:56 web.1 | I, [2013-03-13T18:24:56.886145 #18795] INFO -- : worker=0 spawned pid=18795
18:24:56 web.1 | I, [2013-03-13T18:24:56.886272 #18795] INFO -- : Refreshing Gem list
18:24:57 web.1 | I, [2013-03-13T18:24:57.647574 #18795] INFO -- : worker=0 ready
Press Ctrl-C to exit and you can deploy your changes to Heroku:
$ git add .
$ git commit -m "use unicorn via procfile"
$ git push heroku master
Check ps; you'll see the web process uses your new command, specifying Unicorn as the web server:
$ heroku ps
=== web (1X): `bundle exec unicorn -p $PORT -c ./config/unicorn.rb`
web.1: starting 2014/04/17 12:55:33 (~ 1s ago)
At this point you can follow the normal procedure to get your app on Heroku:
git init
git add -A
git commit -m "initial commit"
get things set up on GitHub... then
heroku create
git push heroku master
Migrate the database on Heroku
heroku run rake db:migrate
make sure the app is set to run
heroku ps:scale web=1
open the app
heroku open

Puma phased-restart fails when Gemfile is changed

I'm using Puma as the application server for my Rails 4 project on MRI 2.1.0, and Capistrano 3 to handle deployments. Everything is working like a charm. But I recently noticed an issue with my deployment process: if I change my Gemfile, Puma fails to complete a phased-restart and eventually all workers get killed. I'm running Puma in cluster mode and preload_app! is set true.
Here is my Capistrano recipe to handle phased-restart.
desc "Restart the application (phased restart)"
task :phased_restart do
  on roles(:app) do |h|
    execute "cd #{fetch(:current_path)} && bundle exec pumactl -S #{fetch(:puma_state)} phased-restart", :pty => true
  end
end
This is truncated output of Capistrano log.
DEBUG [4790766f] Command: cd /home/app/current && bundle exec pumactl -S /home/app/shared/tmp/pids/puma.state phased-restart
DEBUG [de00176a] Command phased-restart sent success
INFO [de00176a] Finished in 0.909 seconds with exit status 0 (successful).
This is my config/puma.rb file.
#!/usr/bin/env puma
require 'active_support'
environment 'production'
daemonize
pidfile '/home/app/shared/tmp/pids/puma.pid'
state_path '/home/app/shared/tmp/pids/puma.state'
stdout_redirect 'log/puma_stdout.log', 'log/puma_stderr.log'
threads 100, 100
bind 'tcp://0.0.0.0:9292'
bind 'unix:///home/app/shared/tmp/pids/puma.sock'

on_worker_boot do
  ActiveSupport.on_load(:active_record) do
    ActiveRecord::Base.connection.disconnect! rescue ActiveRecord::ConnectionNotEstablished
    ActiveRecord::Base.establish_connection(YAML.load_file("#{Rails.root}/config/database.yml")[Rails.env])
  end
end

workers 4
preload_app!
Does anybody see anything wrong in my puma config file?
So currently, when this happens, I do bundle exec cap production deploy:start to start Puma. But I want a zero-downtime deployment in every case.
Can Puma keep using old worker processes in case new spawned processes couldn't be started?
Do you know that preload_app! conflicts with phased restarts?
Proof: https://github.com/puma/puma/blob/0ea7af5e2cc8fa192ec82934a4a47880bdb592f8/lib/puma/configuration.rb#L333-L335
I think first you need to decide which to use.
For doing a phased restart you need to enable the prune_bundler option and disable preload_app!
See https://github.com/puma/puma/blob/master/DEPLOYMENT.md#restarting
To do zero-downtime deploys with Capistrano, you can use the capistrano3-puma gem with the following options:
set :puma_preload_app, false
set :puma_prune_bundler, true
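Combined, a minimal config/puma.rb for phased restarts might look like this (a sketch, not the asker's full config; the worker and thread counts are illustrative, and the bind address is the one from the question):

```ruby
# config/puma.rb sketch for cluster-mode phased restarts.
workers 4
threads 5, 5
bind 'tcp://0.0.0.0:9292'
prune_bundler   # re-exec workers against the new Gemfile on phased-restart
# deliberately no preload_app! here: it disables phased restarts
```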

Rails 4: Specified Unicorn in Procfile, but WEBrick is executed

I am using Rails 4.0.1 and I want to run Unicorn as my web server, but when I execute rails s, WEBrick is used instead (the unicorn gem is in my Gemfile, so it can't be that).
This is my Procfile:
worker: bundle exec rake jobs:work
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
And this is the unicorn.rb file:
worker_processes 2
timeout 30
preload_app true
before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
What is going on? Thanks!
rails server doesn't use your Procfile; that's for foreman. Start your application with foreman instead:
bundle exec foreman start
If you want rails server to use Unicorn as well, you can include the unicorn-rails gem.
I am adding this as a general help article to those that have landed here because of a related search on Google.
If you want to run Unicorn then add this in your project
Gemfile
# Use unicorn as the app server
gem 'unicorn'
gem 'unicorn-rails'
Then in your terminal run bundle install
You will then see something like this in your terminal, showing that you are now using Unicorn:
=> Booting Unicorn
=> Rails 4.0.0 application starting in development on http://0.0.0.0:3000
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
I, [2014-10-24T18:39:41.074259 #32835] INFO -- : listening on addr=0.0.0.0:3000 fd=8
I, [2014-10-24T18:39:41.074399 #32835] INFO -- : worker=0 spawning...
I, [2014-10-24T18:39:41.075407 #32835] INFO -- : master process ready
I, [2014-10-24T18:39:41.076712 #32836] INFO -- : worker=0 spawned pid=32836
I, [2014-10-24T18:39:41.237335 #32836] INFO -- : worker=0 ready
Additional Reading
Unicorn Rails
Deploying to Heroku with Unicorn
You need to start everything by running foreman, e.g.,
$ foreman start
Otherwise you're just starting up Rails' default server.
See this Getting Started guide for further background info.

Unicorn.rb, Heroku, Delayed_Job config

I am successfully using the Unicorn server and Delayed_Job on Heroku for my site. However, I'm unsure if it's set up the best way and wanted to get more info on how to view worker processes, etc. My config/unicorn.rb file, which works, is pasted below:
worker_processes 3
preload_app true
timeout 30
# setting the below code because of the preload_app true setting above:
# http://unicorn.bogomips.org/Unicorn/Configurator.html#preload_app-method
@delayed_job_pid = nil

before_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
    Rails.logger.info('Disconnected from ActiveRecord')
    # start the delayed_job worker queue in Unicorn, use " -n 2 " to start 2 workers
    if Rails.env == "production"
      # @delayed_job_pid ||= spawn("RAILS_ENV=production ../script/delayed_job start")
      # @delayed_job_pid ||= spawn("RAILS_ENV=production #{Rails.root.join('script/delayed_job')} start")
      @delayed_job_pid ||= spawn("bundle exec rake jobs:work")
    elsif Rails.env == "development"
      @delayed_job_pid ||= spawn("script/delayed_job start")
      # @delayed_job_pid ||= spawn("rake jobs:work")
    end
  end
end
after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
    Rails.logger.info('Connected to ActiveRecord')
  end
end
The delayed_job docs say to use RAILS_ENV=production script/delayed_job start to start worker processes in production mode, but if I use this command I get "file not found" errors on Heroku. So for now I'm using bundle exec rake jobs:work in production, which seems to work, but is this correct?
How many processes are actually running in this setup, and could it be better optimized? My guess is that there is 1 Unicorn master process, 3 web workers, and 1 Delayed_Job worker, for a total of 5? When I run in dev mode locally I see 5 Ruby PIDs being spawned. Perhaps it would be better to use only 2 web workers and then give 2 workers to Delayed_Job (I have pretty low traffic).
All of this is run in a single Heroku dyno, so I have no idea how to check the status of the Unicorn workers, any idea how?
**Note: I've commented out the lines that break the site in production because Heroku says it "can't find the file".
Your config/unicorn.rb should not spawn DJ workers like this. You should specify a separate worker process in your Procfile like so:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
You can use foreman for local development to spin up both Unicorn and DJ. Your resulting config/unicorn.rb file would then be simpler:
worker_processes 3
preload_app true
timeout 30
# setting the below code because of the preload_app true setting above:
# http://unicorn.bogomips.org/Unicorn/Configurator.html#preload_app-method
before_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
    Rails.logger.info('Disconnected from ActiveRecord')
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
    Rails.logger.info('Connected to ActiveRecord')
  end
end
As I mentioned in the comments, you're spawning child processes that you never reap, and they will likely become zombies. Even if you added code to account for that, you're still trying to get a single dyno to perform multiple roles (web and background worker), which is likely to cause you problems down the road (memory errors, etc.).
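The reaping point can be shown in a few lines of plain Ruby (the child command is a stand-in, not the DJ worker): a spawned child that exits stays a zombie until the parent waits on it.

```ruby
require 'rbconfig'

# spawn returns immediately with the child's pid; Process.wait2 reaps the
# exited child. Without the wait, the dead child lingers as a zombie for
# the life of the parent, which is the risk the unicorn.rb above takes.
pid = spawn(RbConfig.ruby, '-e', 'exit 0')
reaped_pid, status = Process.wait2(pid)
puts status.success?  # true
```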
Foreman: https://devcenter.heroku.com/articles/procfile
DJ on Heroku: https://devcenter.heroku.com/articles/delayed-job
Spawn: http://www.ruby-doc.org/core-1.9.3/Process.html#method-c-spawn
