Unicorn.rb, Heroku, Delayed_Job config - ruby-on-rails

I am successfully using the Unicorn server and Delayed_Job on Heroku for my site. However, I'm unsure if it's set up the best way and want to get more info on how to view worker processes, etc. My config/unicorn.rb file, which works, is pasted below:
worker_processes 3
preload_app true
timeout 30

# setting the below code because of the preload_app true setting above:
# http://unicorn.bogomips.org/Unicorn/Configurator.html#preload_app-method
@delayed_job_pid = nil

before_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
    Rails.logger.info('Disconnected from ActiveRecord')
    # start the delayed_job worker queue in Unicorn, use " -n 2 " to start 2 workers
    if Rails.env == "production"
      # @delayed_job_pid ||= spawn("RAILS_ENV=production ../script/delayed_job start")
      # @delayed_job_pid ||= spawn("RAILS_ENV=production #{Rails.root.join('script/delayed_job')} start")
      @delayed_job_pid ||= spawn("bundle exec rake jobs:work")
    elsif Rails.env == "development"
      @delayed_job_pid ||= spawn("script/delayed_job start")
      # @delayed_job_pid ||= spawn("rake jobs:work")
    end
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
    Rails.logger.info('Connected to ActiveRecord')
  end
end
delayed_job's docs say to use RAILS_ENV=production script/delayed_job start to start worker processes in production mode, but if I use this command I get "file not found" errors on Heroku. So, for now, I'm using bundle exec rake jobs:work in production, which seems to work, but is this correct?
How many processes are actually running in this setup, and could it be better optimized? My guess is that there is 1 Unicorn master process, 3 web workers, and 1 Delayed Job worker, for a total of 5? When I run in dev mode locally, I see 5 ruby PIDs being spawned. Perhaps it would be better to use only 2 web workers and then give 2 workers to Delayed_Job (I have pretty low traffic).
All of this runs in a single Heroku dyno, so I have no idea how to check the status of the Unicorn workers. Any idea how?
**Note: I've commented out the lines that break the site in production because Heroku says it "can't find the file".

Your config/unicorn.rb should not spawn DJ workers like this. You should specify a separate worker process in your Procfile, like so:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
You can use foreman for local development to spin up both Unicorn and DJ. Your resulting config/unicorn.rb file would then be simpler:
worker_processes 3
preload_app true
timeout 30

# setting the below code because of the preload_app true setting above:
# http://unicorn.bogomips.org/Unicorn/Configurator.html#preload_app-method
before_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
    Rails.logger.info('Disconnected from ActiveRecord')
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
    Rails.logger.info('Connected to ActiveRecord')
  end
end
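Locally, foreman will start every entry in the Procfile; on Heroku the worker line only runs once you scale that process type. For example (the dyno count here is just an illustration, pick what fits your traffic):
foreman start                # local: starts both the web and worker Procfile entries
heroku ps:scale worker=1     # Heroku: run one Delayed Job worker dyno alongside the web dyno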
As I mentioned in the comments, you're spawning child processes that you never reap, and they will likely become zombies. Even if you added code to account for that, you're still trying to make a single dyno perform multiple roles (web and background worker), which is likely to cause you problems down the road (memory errors, etc.).
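As a minimal sketch of the reaping problem (the "sleep 1" here is just a stand-in for the spawned delayed_job command): Process.spawn returns as soon as the child exists, and unless the parent reaps it with Process.wait, or hands that job off with Process.detach, the exited child lingers in the process table as a zombie.
pid = Process.spawn("sleep 1")  # stand-in for the spawned delayed_job command
Process.detach(pid)             # without this (or a later Process.wait), the dead child stays a zombie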
Foreman: https://devcenter.heroku.com/articles/procfile
DJ on Heroku: https://devcenter.heroku.com/articles/delayed-job
Spawn: http://www.ruby-doc.org/core-1.9.3/Process.html#method-c-spawn


Unicorn & Rails & Supervisor + Capistrano deploys: Stopping and starting unicorn gracefully

This caused me no end of pain due to the unexpected unicorn requirements, so I thought I'd leave my solution here for anyone else:
I'm running
- Ruby 2.3.1
- Ubuntu 16.04
- Unicorn 5.3.0
- Supervisor
The main issue I had was with the graceful restart. I was sending the following to the supervisor process:
supervisorctl signal USR2 unicorn
Unicorn would process the signal and gracefully restart. However, every so often developers would complain that their code hadn't loaded, and less frequently the entire restart process would fail.
What was happening was that Unicorn, when signalled, was not honouring the working_directory and was instead attempting to reload from the previous symlink target. This was despite the working directory in the Unicorn config being set to /path/to/app/current rather than the underlying /path/to/release/directory.
Unicorn would restart the workers from the previously deployed application, and when the initial release directory was purged from the system (i.e. capistrano being set to keep X releases), the process would fail, because the release directory Unicorn had originally been started in had been removed from the system.
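A quick way to see this on a Linux host (a diagnostic sketch, using the pid file path from the full config below) is to check the master's current working directory after a restart:
readlink /proc/$(cat /srv/applications/ewok/shared/pids/unicorn.pid)/cwd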
The key is to add the following in the unicorn.conf.rb file:
Unicorn::HttpServer::START_CTX[:cwd] = "/path/to/current"
Unicorn::HttpServer::START_CTX[0] = "/path/to/your/unicorn-rails/binary"
You also need to explicitly define the Bundler Gemfile path (as Unicorn on load was setting this to the initial release directory, again ignoring /path/to/current).
before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "/path/to/current/Gemfile"
end
I've posted my entire configuration below:
# Unicorn supervisor config
[program:unicorn]
command=/bin/start_unicorn.sh
stdout_logfile=/var/log/supervisor/program_supervise_unicorn.log
stderr_logfile=/var/log/supervisor/program_supervise_unicorn.error
priority=100
user=web_user
directory=/srv/applications/ewok/current
autostart=true
autorestart=true
stopsignal=QUIT
# Unicorn start script (start_unicorn.sh)
#!/bin/bash
source /etc/profile
cd /srv/applications/ewok/current
exec bundle exec unicornherder -u unicorn_rails -p /srv/applications/ewok/shared/pids/unicorn.pid -- -c config/unicorn.conf.rb -E $RAILS_ENV
exit 0
# My unicorn.conf.rb
APP_PATH = "/srv/applications/ewok/current"
SHARED_PATH = "/srv/applications/ewok/shared"

Unicorn::HttpServer::START_CTX[:cwd] = APP_PATH
Unicorn::HttpServer::START_CTX[0] = "/usr/local/rvm/gems/ruby-2.3.1@ewok/bin/unicorn_rails"

worker_processes 3
working_directory APP_PATH

listen "unix:#{SHARED_PATH}/pids/.unicorn.sock", :backlog => 64
listen 8080, :tcp_nopush => true

timeout 30

pid "#{SHARED_PATH}/pids/unicorn.pid"
old_pid = "#{SHARED_PATH}/pids/unicorn.pid.oldbin"

stderr_path "#{SHARED_PATH}/log/unicorn.stderr.log"
stdout_path "#{SHARED_PATH}/log/unicorn.stdout.log"

preload_app true
GC.respond_to?(:copy_on_write_friendly=) and
  GC.copy_on_write_friendly = true

#check_client_connection false
run_once = true

before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "#{APP_PATH}/Gemfile"
end

before_fork do |server, worker|
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!

  run_once = false if run_once # prevent from firing again

  # Before forking, kill the master process that belongs to the .oldbin PID.
  # This enables 0 downtime deploys.
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
Hope this helps somebody; I was about to start considering monkey patching Unicorn itself before I fixed this.

Rails, Unicorn, Concurrent Requests Breaks app

I have a Ruby on Rails app that I deployed recently and ran into a bottleneck when too many users tried to use it. It's a simple game stats application: a user enters his name, the app makes an API call, and it returns the user's stats. It works perfectly when there are only a few users. When more users started using it, though, it created an insufferable lag, sometimes of up to 5 minutes per request. So I added unicorn to my Gemfile, set up a Procfile, and deployed it. Now, if there are two simultaneous requests, the app crashes. I thought unicorn was meant to handle concurrent requests, not destroy them? At least before, my requests were still processed, albeit with a delay. What am I doing wrong here?
Here is the Procfile I used:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
Here is my unicorn file:
worker_processes 3
timeout 30
preload_app true
before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end

Unicorn with Resque in Heroku - Should I be concerned?

So far I am using the Thin server. I am planning on switching to Unicorn to add some concurrency to the web dynos, and I am concerned because I read through this article and I found this code:
before_fork do |server, worker|
  # ...
  # If you are using Redis but not Resque, change this
  if defined?(Resque)
    Resque.redis.quit
    Rails.logger.info('Disconnected from Redis')
  end
end

after_fork do |server, worker|
  # ...
  # If you are using Redis but not Resque, change this
  if defined?(Resque)
    Resque.redis = ENV['REDIS_URI']
    Rails.logger.info('Connected to Redis')
  end
end
I don't really understand why this code is necessary, or whether I should add it when using Resque.
What do you guys think I should take into account when switching to Unicorn if I am using some Resque workers?
Unicorn is a forking, multi-process server. It loads your Rails environment in one process, then forks a number of workers. Using fork causes it to copy the entire parent process, including any open connections to databases, memcache, Redis, etc.
To fix this, you should re-connect any live connections in the after_fork block, as shown in the example. You only need to reconnect the connections/services you're actually using.
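As a minimal sketch of what that looks like when both ActiveRecord and Resque are in play (this just combines the blocks already shown above; REDIS_URI is the variable name from the question's example):
before_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
  Resque.redis.quit if defined?(Resque)
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
  Resque.redis = ENV['REDIS_URI'] if defined?(Resque)
end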

Rails Unicorn Web Server Won't Start on Heroku

I can't get my app to start successfully on Heroku. Here are the basics:
Ruby 1.9.3p392 (this is what ruby -v returns in my dev terminal; however, the Heroku logs seem to indicate Ruby 2.0.0)
Rails 3.2.13
Unicorn Web Server
Postgresql DB
I have deployed my app to Heroku but am getting "An error occurred in the application and your page could not be served."
Here are the final entries in the Heroku log:
+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'
+00:00 heroku[web.1]: Process exited with status 1
+00:00 heroku[web.1]: State changed from starting to crashed
When I try to run heroku ps, I get:
=== web (1X): `bundle exec unicorn -p $PORT -c ./config/unicorn.rb`
web.1: crashed 2013/06/22 17:31:22 (~ 6m ago)
I think it's possible the problem stems from this line in my app/config/application.rb:
ENV.update YAML.load(File.read(File.expand_path('../application.yml', __FILE__)))
This line is useful in dev to read my environment variables from my application.yml file. However, for security purposes I gitignore it from my repo, and I can see the Heroku logs complain that this file is not found. For production, I have set my environment variables on Heroku via:
heroku config:add SECRET_TOKEN=a_really_long_number
Here's my app/config/unicorn.rb
# config/unicorn.rb
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
And here's my Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
Both my app/config/unicorn.rb and Procfile settings come from https://devcenter.heroku.com/articles/rails-unicorn
Based on some IRC guidance, I installed Figaro, but alas that did not resolve the issue.
If you want to see the full app, it's posted at: https://github.com/mxstrand/mxspro
If you have guidance on what might be wrong or how I might troubleshoot further I'd appreciate it. Thank you.
You're spot on with your analysis. I've just pulled your code, made some tweaks, and now have it running on Heroku.
My only changes:
config/application.rb - moved lines 12 & 13 to config/environments/development.rb. If you're using application.yml for development environment variables, then keep it that way. The other option is to make line 13 conditional on your development environment with if Rails.env.development? at the end.
config/environments/production.rb - line 33 is missing a preceding # mark
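As a sketch of the conditional option (this reuses the exact line from the question, guarded so it only runs in development, where the gitignored application.yml actually exists):
# config/application.rb -- only load application.yml in development
ENV.update YAML.load(File.read(File.expand_path('../application.yml', __FILE__))) if Rails.env.development?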

Resque is not picking up Redis configuration settings

I am having unexpected and significant problems trying to get a Rails app, running under Unicorn, to connect to a password-protected Redis server.
Using bundle exec rails c production on the command line, I can issue commands through Resque.redis. However, it seems that my configuration is lost when it's forked under Unicorn.
Using a non-password-protected Redis server Just Works. However, I intend to run workers on servers other than the one where the Redis server lives, so I need this to be password protected.
I have also had success using a password-protected Redis (with the same technique) but with Passenger rather than Unicorn.
I have the following setup:
# config/resque.yml
development: localhost:6379
test: localhost:6379
production: redis://user:PASSWORD@oak.isc.org:6379
.
# config/initializers/redis.rb
rails_root = ENV['RAILS_ROOT'] || File.dirname(__FILE__) + '/../..'
rails_env = ENV['RAILS_ENV'] || 'development'
$resque_config = YAML.load_file(rails_root + '/config/resque.yml')
uri = URI.parse($resque_config[rails_env])
Resque.redis = Redis.new(host: uri.host, port: uri.port, password: uri.password)
.
# unicorn.rb bootup file
preload_app true
before_fork do |server, worker|
  Redis.current.quit
end

after_fork do |server, worker|
  Redis.current.quit
end
.
Ok, for the sake of other people who might be googling this problem, I've solved this for myself, at least.
The basic problem is calling Redis.new in other places in the code, e.g. in your geocoder setup or unicorn config file.
Just make sure that every time you initialize Redis you pass in the appropriate values,
e.g. something like
REDIS = Redis.connect(:url => ENV['REDISTOGO_URL'])
everywhere, and never a bare
Redis.new
as it will default to localhost and the default port.
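To make the difference concrete (a sketch using the same Redis.connect call and REDISTOGO_URL variable as above):
Redis.connect                                 # defaults to 127.0.0.1:6379, no password
Redis.connect(:url => ENV['REDISTOGO_URL'])   # uses the host, port and password from the URL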
UPDATE: A totally different idea, based on @lmarlow's comment on a resque issue.
I bet it breaks wherever you have redis ~>3 (I mean the ruby client version, not the server version).
As of this writing, Resque needs redis ~>2 but doesn't specify that in its gemspec, so you have to help it out by adding this to your Gemfile:
gem 'redis', '~>2' # until a new version of resque comes out
gem 'resque'
Also, make sure that Bundler is being used everywhere. Otherwise, if your system has a newer version of the redis gem, it will get used and Resque will fail as before.
Finally, a cosmetic note... you could simplify the config to:
# config/initializers/redis.rb
$resque_redis_url = uris_per_environment[rails_env] # note no URI.parse
Resque.redis = $resque_redis_url
and then
# unicorn.rb bootup file
after_fork do |server, worker|
  Resque.redis = $resque_redis_url
end
This was helpful to me:
source: https://github.com/redis/redis-rb/blob/master/examples/unicorn/unicorn.rb
require "redis"
worker_processes 3
# If you set the connection to Redis *before* forking,
# you will cause forks to share a file descriptor.
#
# This causes a concurrency problem by which one fork
# can read or write to the socket while others are
# performing other operations.
#
# Most likely you'll be getting ProtocolError exceptions
# mentioning a wrong initial byte in the reply.
#
# Thus we need to connect to Redis after forking the
# worker processes.
after_fork do |server, worker|
  Redis.current.quit
end
What worked for me was the unicorn config here: https://stackoverflow.com/a/14636024/18706
before_fork do |server, worker|
  if defined?(Resque)
    Resque.redis.quit
    Rails.logger.info("Disconnected from Redis")
  end
end

after_fork do |server, worker|
  if defined?(Resque)
    Resque.redis = REDIS_WORKER
    Rails.logger.info("Connected to Redis")
  end
end
I think the issue was with resque-web's config file; they have fixed it now. In version 0.0.11 they mention it in a comment as well: https://github.com/resque/resque-web/blob/master/config/initializers/resque_config.rb#L3
Earlier, their file looked like this: https://github.com/resque/resque-web/blob/v0.0.9/config/initializers/resque_config.rb
And if, for any reason, you cannot upgrade, then try setting the env variable RAILS_RESQUE_REDIS=<host>:<port> instead, since the initializers load after it tries to connect to redis (and fails).
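For example (a sketch; keep your own host and port, and set the variable wherever your app's environment is configured):
RAILS_RESQUE_REDIS=<host>:<port> bundle exec unicorn -c config/unicorn.rb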
