Rails Gemfile.lock

I added a section for development in my Gemfile:
group :development do
  gem 'thin'
end
and then ran bundle install on my local machine. This created a Gemfile.lock which contained thin. I checked this file into the repo and pushed to Heroku. Normally I use the unicorn server in production, but when this version of the Gemfile was pushed to Heroku, the app crashed with "command thin not found".
I don't understand why a gem included only in the development group affects my production deployment. What is the right way to include a gem only in development without affecting the Heroku production deployment?
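For reference, a development-only server gem is normally declared like this (a minimal Gemfile sketch; Heroku's Ruby buildpack installs with BUNDLE_WITHOUT=development:test, so gems in those groups are neither installed nor required in production):

```ruby
# Gemfile (sketch) - thin is confined to the development group
source 'https://rubygems.org'

gem 'rails'
gem 'unicorn'          # production web server

group :development do
  gem 'thin'           # local-only web server
end
```

If the app still crashes with "thin not found" after this, something outside Bundler usually references thin in production, e.g. a Procfile line such as web: thin start, which would need to point at unicorn instead.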

I use unicorn on Heroku and thin for development. Both unicorn and thin are included at the top of my Gemfile (thin is not in the development group) and they work fine. Check your unicorn.rb (mine is below) and update your Gemfile.
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end

Related

Rails, Unicorn, Concurrent Requests Breaks app

I have a Ruby on Rails app that I deployed recently and ran into a bottlenecking issue when too many users tried to use it. It's a simple game stats application: a user enters his name, the app makes an API call, and it returns the user's stats. It works perfectly when there are only a few users. When more users started using it, though, it created insufferable lag, sometimes of up to 5 minutes per request. Therefore, I added unicorn to my Gemfile, set up a Procfile, and deployed it. Now, if there are two simultaneous requests, the app crashes. I thought unicorn was meant to handle concurrent requests, not destroy them? At least before, my requests were still processed, albeit with a delay. What am I doing wrong here?
Here is the Procfile I used:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
Here is my unicorn file:
worker_processes 3
timeout 30
preload_app true

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end

Unicorn with Resque in Heroku - Should I be concerned?

So far I am using the Thin server. I am planning to switch to Unicorn to add some concurrency to the web dynos, and I am concerned because I read through this article and found this code:
before_fork do |server, worker|
  # ...
  # If you are using Redis but not Resque, change this
  if defined?(Resque)
    Resque.redis.quit
    Rails.logger.info('Disconnected from Redis')
  end
end

after_fork do |server, worker|
  # ...
  # If you are using Redis but not Resque, change this
  if defined?(Resque)
    Resque.redis = ENV['REDIS_URI']
    Rails.logger.info('Connected to Redis')
  end
end
I don't really understand why this code is necessary, or whether I should add it when using Resque.
What do you guys think I should take into account when switching to Unicorn if I am using some Resque workers?
Unicorn is a forking, multi-process server. It loads your Rails environment in one process and then forks a number of workers. Because fork copies the entire parent process, each worker inherits any open connections to databases, memcache, Redis, etc.
To fix this, re-establish any live connections in the after_fork block, as shown in the example. You only need to reconnect the connections/services you actually use.
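The inherited-state problem can be demonstrated in plain Ruby (a minimal sketch using POSIX fork, which is what Unicorn builds on; the Object here stands in for a real database or Redis connection):

```ruby
# After fork, the child starts with a copy of every object the parent
# created, including open sockets - so a "connection" made before the
# fork is visible, unchanged, inside the worker.
parent_conn = Object.new            # stand-in for a DB/Redis connection
parent_id   = parent_conn.object_id # recorded before forking

reader, writer = IO.pipe
pid = fork do
  reader.close
  writer.puts parent_conn.object_id # child reports the inherited object
  writer.close
  exit!(0)
end
writer.close
child_view = reader.read.strip
Process.wait(pid)

puts child_view == parent_id.to_s   # => true
```

This is exactly why each worker must open its own connection in after_fork: otherwise every worker would share (and corrupt) the single socket the master opened while preloading the app.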

Rails Unicorn Web Server Won't Start on Heroku

I can't get my app to successfully start at Heroku. Here's the basics:
Ruby 1.9.3p392 (this is what's returned when I run ruby -v in my dev terminal; however, the Heroku logs seem to indicate Ruby 2.0.0)
Rails 3.2.13
Unicorn Web Server
Postgresql DB
I have deployed my app to Heroku but am getting "An error occurred in the application and your page could not be served."
Here's the final entries in the Heroku log:
+00:00 app[web.1]: from /app/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'
+00:00 heroku[web.1]: Process exited with status 1
+00:00 heroku[web.1]: State changed from starting to crashed
When I try to run heroku ps, I get:
=== web (1X): `bundle exec unicorn -p $PORT -c ./config/unicorn.rb`
web.1: crashed 2013/06/22 17:31:22 (~ 6m ago)
I think it's possible the problem stems from this line in my app/config/application.rb:
ENV.update YAML.load(File.read(File.expand_path('../application.yml', __FILE__)))
This line is useful in dev for reading my environment variables from my application.yml file. However, for security purposes I gitignore it from my repo, and I can see the Heroku logs complaining that this file is not found. For production, I have set my environment variables on Heroku via:
heroku config:add SECRET_TOKEN=a_really_long_number
Here's my app/config/unicorn.rb
# config/unicorn.rb
worker_processes Integer(ENV["WEB_CONCURRENCY"] || 3)
timeout 15
preload_app true

before_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn master intercepting TERM and sending myself QUIT instead'
    Process.kill 'QUIT', Process.pid
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  Signal.trap 'TERM' do
    puts 'Unicorn worker intercepting TERM and doing nothing. Wait for master to send QUIT'
  end

  defined?(ActiveRecord::Base) and
    ActiveRecord::Base.establish_connection
end
And here's my Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
Both my app/config/unicorn.rb and Procfile settings come from https://devcenter.heroku.com/articles/rails-unicorn
Based on some IRC guidance, I installed Figaro, but alas that did not resolve the issue.
If you want to see the full app, it's posted at: https://github.com/mxstrand/mxspro
If you have guidance on what might be wrong or how I might troubleshoot further I'd appreciate it. Thank you.
You're spot on with your analysis. I've just pulled your code, made some tweaks, and now have it starting on Heroku.
My only changes:
config/application.rb - moved lines 12 & 13 to config/environments/development.rb. If you're using application.yml for development environment variables, keep it that way. The other option is to make line 13 conditional on the development environment with if Rails.env.development? at the end.
config/environments/production.rb - line 33 is missing a preceding # mark
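The conditional load described above could look like this sketch (assuming application.yml only exists in development; the File.exist? guard is an extra safety net that was not in the original line):

```ruby
# config/application.rb (sketch) - read application.yml only in development,
# where the file actually exists; production relies on heroku config vars
env_file = File.expand_path('../application.yml', __FILE__)
if Rails.env.development? && File.exist?(env_file)
  ENV.update YAML.load(File.read(env_file))
end
```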

Rails, Mongoid & Unicorn config for Heroku

I am using Mongoid 3, with Rails 3.2.9 and Unicorn for production. I would like to set up before_fork & after_fork hooks for the connection to MongoDB; I found the following code for ActiveRecord:
before_fork do |server, worker|
  # Replace with MongoDB or whatever
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
    Rails.logger.info('Disconnected from ActiveRecord')
  end
end

after_fork do |server, worker|
  # Replace with MongoDB or whatever
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
    Rails.logger.info('Connected to ActiveRecord')
  end
end
What is the relevant code for Mongoid (to connect and disconnect)?
Update:
You don't actually need to do this; for people coming to view this question, see:
http://mongoid.org/en/mongoid/docs/rails.html
"Unicorn and Passenger
When using Unicorn or Passenger, each time a child process is forked when using app preloading or smart spawning, Mongoid will automatically reconnect to the master database. If you are doing this in your application manually you may remove your code."
Though it would still be interesting to know what would be the equivalent Mongoid code.
What about
::Mongoid.default_session.connect
::Mongoid.default_session.disconnect
https://docs.mongodb.com/mongoid/current/tutorials/mongoid-configuration/#usage-with-forking-servers
The documentation on mongodb.com says that the before_fork and after_fork hooks for Unicorn or Passenger are required.
This probably changed recently; the above is from the Mongoid 7.0 documentation.
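Putting those calls into the Unicorn hooks, a hypothetical unicorn.rb fragment could look like this (method names follow the Moped-era Mongoid 3/4 API quoted above; later versions renamed default_session to default_client, so verify against your Mongoid version):

```ruby
# unicorn.rb (sketch) - manual Mongoid reconnect hooks, normally unnecessary
# per the Mongoid docs cited above
before_fork do |server, worker|
  ::Mongoid.default_session.disconnect if defined?(::Mongoid)
end

after_fork do |server, worker|
  ::Mongoid.default_session.connect if defined?(::Mongoid)
end
```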

Unicorn.rb, Heroku, Delayed_Job config

I am successfully using the Unicorn server and delayed_job on Heroku for my site. However, I'm unsure whether it's set up the best way and wanted to get more info on how to view worker processes, etc. My config/unicorn.rb file, which works, is pasted below:
worker_processes 3
preload_app true
timeout 30

# setting the below code because of the preload_app true setting above:
# http://unicorn.bogomips.org/Unicorn/Configurator.html#preload_app-method
# @delayed_job_pid = nil

before_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
    Rails.logger.info('Disconnected from ActiveRecord')
    # start the delayed_job worker queue in Unicorn, use " -n 2 " to start 2 workers
    if Rails.env == "production"
      # @delayed_job_pid ||= spawn("RAILS_ENV=production ../script/delayed_job start")
      # @delayed_job_pid ||= spawn("RAILS_ENV=production #{Rails.root.join('script/delayed_job')} start")
      @delayed_job_pid ||= spawn("bundle exec rake jobs:work")
    elsif Rails.env == "development"
      @delayed_job_pid ||= spawn("script/delayed_job start")
      # @delayed_job_pid ||= spawn("rake jobs:work")
    end
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
    Rails.logger.info('Connected to ActiveRecord')
  end
end
delayed_job's documentation says to use RAILS_ENV=production script/delayed_job start to start worker processes in production mode, but if I use this command I get "file not found" errors on Heroku. So for now I'm using bundle exec rake jobs:work in production, which seems to work, but is this correct?
How many processes are actually running in this setup, and could it be better optimized? My guess is that there is 1 Unicorn master process, 3 web workers, and 1 delayed_job worker, for a total of 5? When I run in dev mode locally I see 5 ruby pids being spawned. Perhaps it would be better to use only 2 web workers and then give 2 workers to delayed_job (I have pretty low traffic).
All of this runs in a single Heroku dyno, so I have no idea how to check the status of the Unicorn workers; any idea how?
Note: I've commented out the lines that break the site in production because Heroku says it "can't find the file".
Your config/unicorn.rb should not spawn DJ workers like this. You should specify a separate worker process in your Procfile, like so:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec rake jobs:work
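With a Procfile like the one above, the worker dyno is scaled and inspected separately from the web dyno (hypothetical commands; the process names must match your Procfile entries):

```shell
heroku ps:scale worker=1   # start one delayed_job dyno alongside web
heroku ps                  # list dynos and their current states
```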
You can use foreman for local development to spin up both Unicorn and DJ. Your resulting config/unicorn.rb file would then be simpler:
worker_processes 3
preload_app true
timeout 30

# setting the below code because of the preload_app true setting above:
# http://unicorn.bogomips.org/Unicorn/Configurator.html#preload_app-method

before_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
    Rails.logger.info('Disconnected from ActiveRecord')
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
    Rails.logger.info('Connected to ActiveRecord')
  end
end
As I mentioned in the comments, you're spawning child processes that you never reap, and they will likely become zombies. Even if you added code to try to account for that, you're still trying to get single dynos to perform multiple roles (web and background worker), which is likely to cause you problems down the road (memory errors, etc.).
Foreman: https://devcenter.heroku.com/articles/procfile
DJ on Heroku: https://devcenter.heroku.com/articles/delayed-job
Spawn: http://www.ruby-doc.org/core-1.9.3/Process.html#method-c-spawn
