Set unicorn timeout - ruby-on-rails

I'm using Rails 3.0.11, Ruby 1.9.3-p0, nginx 1.0.4, and Unicorn 3.6.2 for my project, and I've run into a problem.
I have to run a long operation on my server; it takes about 150 seconds, which is acceptable in this case.
In my nginx config I've set the following in the location block:
proxy_read_timeout 240;
proxy_send_timeout 240;
and in my unicorn.rb file I've set:
timeout 240
But I always get a 502 Bad Gateway error.
I think the problem is with Unicorn. These are the Unicorn logs I get:
E, [2012-05-21T11:52:21.052382 #30423] ERROR -- : worker=1 PID:30871 timeout (104.052329915s > 60s), killing
E, [2012-05-21T11:52:21.080378 #30423] ERROR -- : reaped #<Process::Status: pid 30871 SIGKILL (signal 9)> worker=1
I, [2012-05-21T11:52:21.105045 #30423] INFO -- : worker=1 spawning...
I, [2012-05-21T11:52:21.111148 #894] INFO -- : worker=1 spawned pid=894
I, [2012-05-21T11:52:21.111659 #894] INFO -- : Refreshing Gem list
Can you help me? Any help is appreciated. Thank you.

Copying the answer from the comments in order to remove this question from the "Unanswered" filter:
I have never used this gem, but if you're doing this after
'deploy:restart', 'unicorn:reload' you need to restart unicorn, not
only reload it. sudo /etc/init.d/unicorn restart and the timeout will
be set. Reload and restart are two different things in unicorn.
~ answer per Maurício Linhares
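To make the reload/restart distinction concrete, here is a minimal sketch of the two signals involved, per Unicorn's SIGNALS documentation (the pid file path below is only the conventional default):

master_pid = File.read("tmp/pids/unicorn.pid").to_i

# "reload": reloads the config file and gracefully restarts all workers; if
# preload_app is true, application code changes are not picked up.
Process.kill(:HUP, master_pid)

# "restart"/upgrade: re-executes the running unicorn binary, bringing up a brand
# new master (the old one is then sent QUIT), so everything is loaded fresh.
Process.kill(:USR2, master_pid)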

After changing the timeout in config/unicorn/production.rb
I had to run
cap deploy
and then stop and start the Unicorn master process to pick up the new config with:
cap unicorn:stop
cap unicorn:start
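For reference, a minimal sketch of the Unicorn side (paths and worker count are placeholders): the worker timeout has to be at least as long as the slow request, and nginx's proxy_read_timeout/proxy_send_timeout have to cover it too, otherwise the request is cut off before Unicorn finishes.

# config/unicorn/production.rb (sketch)
worker_processes 2   # placeholder
timeout 240          # must exceed the ~150 s request, matching the nginx proxy timeouts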

Related

Couldn't call app. Bad request to "curl 'http://localhost:3564/' -s --fail 2>&1" derailed_benchmarks gem

I am getting error messages as pasted below:
% USE_SERVER=puma bundle exec derailed exec perf:mem_over_time
Booting: production
docking_dev already exists
Endpoint: "/"
Port: 3857
Server: "puma"
[4990] Puma starting in cluster mode...
[4990] * Version 3.7.0 (ruby 2.3.3-p222), codename: Snowy Sagebrush
[4990] * Min threads: 5, max threads: 5
[4990] * Environment: production
[4990] * Process workers: 2
[4990] * Preloading application
[4990] * Listening on tcp://0.0.0.0:3000
[4990] Use Ctrl-C to stop
[4990] - Worker 0 (pid: 5013) booted, phase: 0
[4990] - Worker 1 (pid: 5014) booted, phase: 0
PID: 4990
149.67578125
Couldn't call app.
Bad request to "curl 'http://localhost:3857/' -s --fail 2>&1"
***RESPONSE***:
""
[5014] ! Detected parent died, dying
[5013] ! Detected parent died, dying
I checked that both RAILS_ENV=production rails server and RAILS_ENV=production rails console work as expected. What else do I need to check to make this work? Is it because my http://localhost:3000/ URL has authentication enabled? I have confirmed that force_ssl is set to false. I also checked this post, but what it suggested didn't help.
I also don't know why it picks a random port every time I run it (3857 in the paste above), while my app runs on port 3000 locally. Is there something I need to do so that it uses the same port 3000?
P.S. I found out about the random port from reading the gem's code.
OK, I fixed it. In my case the root URL redirects to the login page when users are not signed in, and that is what was causing the gem's bad requests; it doesn't seem to handle a 302 correctly at the moment. So I fixed it with:
PATH_TO_HIT="/login" bundle exec derailed exec perf:mem_over_time
I also had to remove USE_SERVER=puma, as it was causing errors too.

Unicorn workers timeout after "zero downtime" deploy with capistrano

I'm running a Rails 3.2.21 app and deploy to a Ubuntu 12.04.5 box using capistrano (nginx and unicorn).
I have my app set up for a zero-downtime deploy (or at least I thought so), with my config files looking more or less like these.
Here's the problem: When the deploy is nearly done and it restarts unicorn, when I watch my unicorn.log I see it fire up the new workers, reap the old ones... but then my app just hangs for 2-3 minutes. Any request to the app at this point hits the timeout window (which I set to 40 seconds) and returns my app's 500 error page.
Here is the first part of the output from unicorn.log as unicorn is restarting (I have 5 unicorn workers):
I, [2015-04-21T23:06:57.022492 #14347] INFO -- : master process ready
I, [2015-04-21T23:06:57.844273 #15378] INFO -- : worker=0 ready
I, [2015-04-21T23:06:57.944080 #15381] INFO -- : worker=1 ready
I, [2015-04-21T23:06:58.089655 #15390] INFO -- : worker=2 ready
I, [2015-04-21T23:06:58.230554 #14541] INFO -- : reaped #<Process::Status: pid 15551 exit 0> worker=4
I, [2015-04-21T23:06:58.231455 #14541] INFO -- : reaped #<Process::Status: pid 3644 exit 0> worker=0
I, [2015-04-21T23:06:58.249110 #15393] INFO -- : worker=3 ready
I, [2015-04-21T23:06:58.650007 #15396] INFO -- : worker=4 ready
I, [2015-04-21T23:07:01.246981 #14541] INFO -- : reaped #<Process::Status: pid 32645 exit 0> worker=1
I, [2015-04-21T23:07:01.561786 #14541] INFO -- : reaped #<Process::Status: pid 15534 exit 0> worker=2
I, [2015-04-21T23:07:06.657913 #14541] INFO -- : reaped #<Process::Status: pid 16821 exit 0> worker=3
I, [2015-04-21T23:07:06.658325 #14541] INFO -- : master complete
Afterwards, as the app hangs for those 2-3 minutes, here is what's happening:
E, [2015-04-21T23:07:38.069635 #14347] ERROR -- : worker=0 PID:15378 timeout (41s > 40s), killing
E, [2015-04-21T23:07:38.243005 #14347] ERROR -- : reaped #<Process::Status: pid 15378 SIGKILL (signal 9)> worker=0
E, [2015-04-21T23:07:39.647717 #14347] ERROR -- : worker=3 PID:15393 timeout (41s > 40s), killing
E, [2015-04-21T23:07:39.890543 #14347] ERROR -- : reaped #<Process::Status: pid 15393 SIGKILL (signal 9)> worker=3
I, [2015-04-21T23:07:40.727755 #16002] INFO -- : worker=0 ready
I, [2015-04-21T23:07:43.212395 #16022] INFO -- : worker=3 ready
E, [2015-04-21T23:08:24.511967 #14347] ERROR -- : worker=3 PID:16022 timeout (41s > 40s), killing
E, [2015-04-21T23:08:24.718512 #14347] ERROR -- : reaped #<Process::Status: pid 16022 SIGKILL (signal 9)> worker=3
I, [2015-04-21T23:08:28.010429 #16234] INFO -- : worker=3 ready
Eventually, after 2 or 3 minutes, the app starts being responsive again, but everything is more sluggish. You can see this very clearly in New Relic (the horizontal line marks the deploy, and the light blue area indicates Ruby).
I have an identical staging server, and I cannot replicate the issue in staging... granted, staging is under no load (I'm the only person trying to make page requests).
Here is my config/unicorn.rb file:
root = "/home/deployer/apps/myawesomeapp/current"
working_directory root
pid "#{root}/tmp/pids/unicorn.pid"
stderr_path "#{root}/log/unicorn.log"
stdout_path "#{root}/log/unicorn.log"
shared_path = "/home/deployer/apps/myawesomeapp/shared"
listen "/tmp/unicorn.myawesomeapp.sock"
worker_processes 5
timeout 40
preload_app true
before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "#{root}/Gemfile"
end

before_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.connection.disconnect!
  end

  old_pid = "#{root}/tmp/pids/unicorn.pid.oldbin"
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end
end

after_fork do |server, worker|
  if defined?(ActiveRecord::Base)
    ActiveRecord::Base.establish_connection
  end
end
And just to paint a complete picture, in my capistrano deploy.rb, the unicorn restart task looks like this:
namespace :deploy do
  task :restart, roles: :app, except: { no_release: true } do
    run "kill -s USR2 `cat #{release_path}/tmp/pids/unicorn.pid`"
  end
end
Any ideas why the unicorn workers timeout right after the deploy? I thought the point of a zero-downtime was to keep the old ones around until the new ones are spun up and ready to serve?
Thanks!
UPDATE
I did another deploy, and this time kept an eye on production.log to see what was going on there. The only suspicious thing was the following lines, which were mixed in with normal requests:
Dalli/SASL authenticating as 7510de
Dalli/SASL: 7510de
Dalli/SASL authenticating as 7510de
Dalli/SASL: 7510de
Dalli/SASL authenticating as 7510de
Dalli/SASL: 7510de
UPDATE #2
As suggested by some of the answers below, I changed the before_fork block to add sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU so the workers would be incrementally killed off.
Same result: a terribly slow deploy, with the same spike I illustrated in the graph above. Just for context, out of my 5 worker processes, the first 4 sent a TTOU signal and the 5th sent QUIT. Still, it does not seem to have made a difference.
I came across a similar problem recently while trying to set up Rails/Nginx/Unicorn on Digital Ocean. I was able to get zero-downtime deploys to work after tweaking a few things. Here are a few things to try:
Reduce the number of worker processes.
Increase the memory of your server. I was getting timeouts on the 512MB RAM droplet; the issue seemed fixed when I increased it to 1GB.
Use the "capistrano3-unicorn" gem.
If preload_app true, use restart (USR2). If false, use reload (HUP).
Ensure "tmp/pids" is included in linked_dirs in deploy.rb.
Use ps aux | grep unicorn to make sure the old processes are being removed.
Use kill [pid] to manually stop any Unicorn processes still running.
Here's my unicorn config for reference:
working_directory '/var/www/yourapp/current'
pid '/var/www/yourapp/current/tmp/pids/unicorn.pid'
stderr_path '/var/www/yourapp/log/unicorn.log'
stdout_path '/var/www/yourapp/log/unicorn.log'
listen '/tmp/unicorn.yourapp.sock'
worker_processes 2
timeout 30
preload_app true
before_fork do |server, worker|
  old_pid = "/var/www/yourapp/current/tmp/pids/unicorn.pid.oldbin"
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end
end
deploy.rb
lock '3.4.0'
set :application, 'yourapp'
set :repo_url, 'git@bitbucket.org:username/yourapp.git'
set :deploy_to, '/var/www/yourapp'
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml', 'config/application.yml')
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'vendor/bundle', 'public/system')
set :format, :pretty
set :log_level, :info
set :rbenv_ruby, '2.1.3'
namespace :deploy do
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
    end
  end
end

after 'deploy:publishing', 'deploy:restart'

namespace :deploy do
  task :restart do
    # invoke 'unicorn:reload'
    invoke 'unicorn:restart'
  end
end
Are you vendoring unicorn and having cap run a bundle install on deploy? If so this could be an executable issue.
When you do a Capistrano deploy, cap creates a new release directory for your revision and moves the current symlink to point to the new release. If you haven't told the running unicorn to gracefully update the path to its executable, it should work if you add this line:
Unicorn::HttpServer::START_CTX[0] = ::File.join(ENV['GEM_HOME'].gsub(/releases\/[^\/]+/, "current"),'bin','unicorn')
You can find some more information here. I think the before_fork block you have looks good, but I would add the sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU line from @travisluong's answer as well; that will incrementally kill off the workers as the new ones spawn.
I would not remove preload_app true, incidentally, as it greatly improves worker spawn time.
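A minimal sketch of how that line might sit in config/unicorn.rb next to the before_exec block from the question (the app path is the one already used above; this assumes GEM_HOME resolves inside a releases/<timestamp> directory on the server):

root = "/home/deployer/apps/myawesomeapp/current"

# Point the re-exec'ed master at the symlinked unicorn binary instead of a
# release-specific path that disappears on later deploys.
Unicorn::HttpServer::START_CTX[0] =
  ::File.join(ENV['GEM_HOME'].gsub(/releases\/[^\/]+/, "current"), 'bin', 'unicorn')

# ...and keep the Gemfile pointing at the current symlink as well, so a USR2
# restart does not load a stale bundle.
before_exec do |server|
  ENV['BUNDLE_GEMFILE'] = "#{root}/Gemfile"
end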

Deploying rails app with realtime-rails gem and redis on Heroku

I have built a rails app following the model presented here:
http://mikeatlas.github.io/realtime-rails/
I am using Rails with the realtime-rails gem and Redis. My application works in my development environment and I want to move it to Heroku. I have already set up a Redis database at Redis To Go, and I now want to make whatever changes are necessary for the realtime gem and the associated socket.io server setup. At a minimum, I will need to modify the production portion of my application_controller:
def realtime_server_url
  if Rails.env.development?
    return 'http://localhost:5001'
  end
  if Rails.env.production?
    return 'PRODUCTION-LOCATION'
  end
end
I have deployed the socket.io server (realtime-server) that is installed per the instructions in the link above to a separate Heroku instance. I then set PRODUCTION-LOCATION to the URL of that realtime-server dyno, trying port 5001 with both http and https. No joy.
Following the instructions, the realtime-server folder was created at the top level of my project folder, parallel to the app folder. Does this mean I should include it in the main repository and somehow have it run from the same dyno as the app? If so, how do I go about starting it? The instructions say to start it locally with:
cd realtime-server
foreman start
It's not clear whether I can do that through the Heroku CLI, whether it can run in the same instance, or how it would be started.
=============
Update
Found documentation on Heroku that made me realize I need to set the REDISCLOUD_URL for the Heroku dyno running the realtime-server, using:
heroku config:add REDISCLOUD_URL='redis_cloud_url'
and that in production it wasn't using port 5001:
if (process.env.NODE_ENV != 'production') {
  port = 5001; // run on a different port when in non-production mode.
}
I also found from the console log that the realtime-server was enforcing HTTPS.
Now, the blocking issue seems to be the request of /socket.io/socket.io.js from the realtime-server which returns:
503 (Service Unavailable)
So far, it seems that separating the realtime-server from the rails app repository was the right move.
Looking through the code for the realtime-server to determine how that is routed...
================
Update
I looked at the logs on your advice, thanks. I saw that the realtime-server was crashing because it wasn't liking the port, so I tried setting the PORT variable to 443, 3000, and 5001, to no avail, using:
heroku config:add PORT='443'
based on this code from the environment.js file of the realtime-server:
var port = process.env.PORT || 5001;
if (process.env.NODE_ENV != 'production') {
  port = 5001; // run on a different port when in non-production mode.
}
Here is an excerpt of the logs:
2015-02-22T19:26:45.512317+00:00 heroku[api]: Set PORT config vars by tmt@breakthroughtek.com
2015-02-22T19:26:45.512317+00:00 heroku[api]: Release v5 created by tmt@breakthroughtek.com
2015-02-22T19:26:45.754327+00:00 heroku[web.1]: State changed from crashed to starting
2015-02-22T19:26:48.318928+00:00 heroku[web.1]: Starting process with command `node forever.js`
2015-02-22T19:26:49.540190+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-02-22T19:26:49.540213+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-02-22T19:26:49.957808+00:00 app[web.1]: STARTING ON PORT: 5001
2015-02-22T19:27:44.163425+00:00 heroku[api]: Scale to web=1 by tmt@breakthroughtek.com
2015-02-22T19:27:48.811493+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2015-02-22T19:27:48.811551+00:00 heroku[web.1]: Stopping process with SIGKILL
2015-02-22T19:27:49.561043+00:00 heroku[web.1]: State changed from starting to crashed
2015-02-22T19:27:49.561720+00:00 heroku[web.1]: State changed from crashed to starting
2015-02-22T19:27:49.534451+00:00 heroku[web.1]: Process exited with status 137
2015-02-22T19:27:51.936860+00:00 heroku[web.1]: Starting process with command `node forever.js`
2015-02-22T19:27:53.289025+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-02-22T19:27:53.289046+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-02-22T19:27:53.703573+00:00 app[web.1]: STARTING ON PORT: 5001
2015-02-22T19:28:51.991836+00:00 heroku[web.1]: Stopping process with SIGKILL
2015-02-22T19:28:51.991836+00:00 heroku[web.1]: Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 60 seconds of launch
2015-02-22T19:28:52.758191+00:00 heroku[web.1]: Process exited with status 137
2015-02-22T19:28:52.764783+00:00 heroku[web.1]: State changed from starting to crashed
2015-02-22T19:31:22.240362+00:00 heroku[api]: Set PORT config vars by tmt@breakthroughtek.com
2015-02-22T19:31:22.240362+00:00 heroku[api]: Release v6 created by tmt@breakthroughtek.com
2015-02-22T19:31:22.378770+00:00 heroku[web.1]: State changed from crashed to starting
2015-02-22T19:31:24.766187+00:00 heroku[web.1]: Starting process with command `node forever.js`
2015-02-22T19:31:26.316332+00:00 app[web.1]: Detected 512 MB available memory, 512 MB limit per process (WEB_MEMORY)
2015-02-22T19:31:26.316353+00:00 app[web.1]: Recommending WEB_CONCURRENCY=1
2015-02-22T19:31:26.717561+00:00 app[web.1]: STARTING ON PORT: 5001
===========
Update
Looking at the logs and seeing that the system was still picking port 5001, I checked the Heroku environment via:
heroku config
and saw that the NODE_ENV variable was not set. I did:
heroku config:add NODE_ENV='production'
and now the js file is loading. YAY!!! Thanks, d.danailov :)
I still have to resolve an issue where Rails on Heroku errors with a missing template error on access to the admin area:
2015-02-22T20:04:36.492199+00:00 app[web.1]: Processing by LocationsController#index as HTML
2015-02-22T20:04:36.497050+00:00 app[web.1]: ActionView::MissingTemplate (Missing template locations/index, application/index with {:locale=>[:en], :formats=>[:html], :variants=>[], :handlers=>[:erb, :builder, :raw, :ruby, :coffee, :jbuilder]}. Searched in:
2015-02-22T20:04:36.497052+00:00 app[web.1]: * "/app/app/views"
2015-02-22T20:04:36.497054+00:00 app[web.1]: * "/app/vendor/bundle/ruby/2.0.0/gems/devise-3.4.1/app/views"
2015-02-22T20:04:36.497056+00:00 app[web.1]: * "/app/vendor/bundle/ruby/2.0.0/gems/realtime-0.0.12/app/views"
2015-02-22T20:04:36.497057+00:00 app[web.1]: ):
2015-02-22T20:04:36.497059+00:00 app[web.1]: app/controllers/locations_controller.rb:8:in `index'
And of course, the page works fine in development, so I am going to start with the assumption that it has something to do with Sprockets / the asset pipeline.
But I think this portion may be solved. I will close once I verify I can send realtime messages.
=============
Update
SOLVED: beware of using Redis elsewhere in your app.
HEADS UP: I had a sneaky little Redis.new in an initializer that was wiping out my Redis config settings.
Most of the solution was setting the NODE_ENV and REDISCLOUD_URL vars for the realtime-server running in a separate instance.
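To illustrate the Redis.new gotcha, a hypothetical initializer sketch (the global variable name is arbitrary; REDISCLOUD_URL is the same variable set earlier):

# config/initializers/redis.rb (sketch)
# A bare Redis.new defaults to localhost:6379 and silently bypasses the hosted
# Redis on Heroku; reading the URL from the environment keeps the right server.
$redis = Redis.new(url: ENV.fetch("REDISCLOUD_URL", "redis://localhost:6379"))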

Unicorn restarting on its own - No memory - gets killed

I am running two Rails apps on DigitalOcean with 512MB RAM and with 4 nginx processes.
The rails apps use Unicorn.
One has 2 workers and the other uses 1.
My problem is with the second app, the one with 1 Unicorn worker (the same problem was there when it had 2 workers as well). What happens is that suddenly my app throws a 500 error, and when I SSH into the server I find that the app's Unicorn process is not running!
When I start Unicorn again, everything is fine.
This is my log file. As you can see, the worker gets reaped, and then the master is unable to fork a new one; the reason given is no memory.
I, [2014-01-24T04:12:28.080716 #8820] INFO -- : master process ready
I, [2014-01-24T04:12:28.110834 #8824] INFO -- : worker=0 ready
E, [2014-01-24T06:45:08.423082 #8820] ERROR -- : reaped #<Process::Status: pid 8824 SIGKILL (signal 9)> worker=0
E, [2014-01-24T06:45:08.438352 #8820] ERROR -- : Cannot allocate memory - fork(2) (Errno::ENOMEM)
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:523:in `fork'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:523:in `spawn_missing_workers'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:538:in `maintain_worker_count'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:303:in `join'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/bin/unicorn:126:in `<top (required)>'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `load'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'
I, [2014-01-24T08:43:53.693228 #26868] INFO -- : Refreshing Gem list
I, [2014-01-24T08:43:56.283950 #26868] INFO -- : unlinking existing socket=/tmp/unicorn.hmd.sock
I, [2014-01-24T08:43:56.284840 #26868] INFO -- : listening on addr=/tmp/unicorn.hmd.sock fd=11
I, [2014-01-24T08:43:56.320075 #26868] INFO -- : master process ready
I, [2014-01-24T08:43:56.348648 #26872] INFO -- : worker=0 ready
E, [2014-01-24T09:10:07.251846 #26868] ERROR -- : reaped #<Process::Status: pid 26872 SIGKILL (signal 9)> worker=0
I, [2014-01-24T09:10:07.300339 #27743] INFO -- : worker=0 ready
I, [2014-01-24T09:18:09.992675 #28039] INFO -- : executing ["/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/bin/unicorn", "-D", "-c", "/home/vocp/projects/hmd/config/unicorn.rb", "-E", "production", {11=>#<Kgio::UNIXServer:/tmp/unicorn.hmd.sock>}] (in /home/vocp/projects/hmd)
I, [2014-01-24T09:18:10.426852 #28039] INFO -- : inherited addr=/tmp/unicorn.hmd.sock fd=11
I, [2014-01-24T09:18:10.427090 #28039] INFO -- : Refreshing Gem list
E, [2014-01-24T09:18:13.456986 #28039] ERROR -- : Cannot allocate memory - fork(2) (Errno::ENOMEM)
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:523:in `fork'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:523:in `spawn_missing_workers'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:153:in `start'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/bin/unicorn:126:in `<top (required)>'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `load'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'
E, [2014-01-24T09:18:13.464982 #26868] ERROR -- : reaped #<Process::Status: pid 28039 exit 1> exec()-ed
This is my unicorn.rb
root = "/home/vocp/projects/hmd"
working_directory root
pid "#{root}/tmp/pids/unicorn.pid"
stderr_path "#{root}/log/unicorn.log"
stdout_path "#{root}/log/unicorn.log"
listen "/tmp/unicorn.hmd.sock"
worker_processes 1
timeout 30
preload_app true
# Force the bundler gemfile environment variable to
# reference the capistrano "current" symlink
before_exec do |_|
  ENV["BUNDLE_GEMFILE"] = File.join(root, 'Gemfile')
end

before_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!

  old_pid = Rails.root + '/tmp/pids/unicorn.pid.oldbin'
  if File.exists?(old_pid) && server.pid != old_pid
    begin
      Process.kill("QUIT", File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
      puts "Old master already dead"
    end
  end
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
  child_pid = server.config[:pid].sub('.pid', ".#{worker.nr}.pid")
  system("echo #{Process.pid} > #{child_pid}")
end
I do not have monit or god or any other monitoring tools. I find it very odd, because the server memory usage is generally around 380/490 MB, and nobody uses these two apps apart from me! They are in development.
Have I misconfigured anything? Why is this happening? Please help. Should I configure god to restart Unicorn when it crashes?
For Unicorn memory usage, the only way is up, unfortunately. Unicorn will allocate more memory if your Rails app needs it, but it does not release it even when it no longer needs it. For example, if you load a lot of records for an index page at once, Unicorn's memory usage will grow. This is exacerbated by the fact that 512MB is not a huge amount of memory for two Rails apps with three workers between them.
Furthermore there are memory leaks that increase memory usage too. See this article
https://www.digitalocean.com/community/articles/how-to-optimize-unicorn-workers-in-a-ruby-on-rails-app
At the end of the article they refer to the unicorn-worker-killer gem, which restarts Unicorn workers based on either a maximum number of requests or a maximum memory size, and looks pretty straightforward.
Personally I have used the bluepill gem to monitor individual unicorn processes and restart them if needed.
In your case I would monitor all unicorn processes and restart them if they reach a certain memory size.
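As a rough sketch of the unicorn-worker-killer approach, following the gem's README and placed near the top of config.ru (the request and memory thresholds below are arbitrary examples, not recommendations):

# config.ru (sketch)
require 'unicorn/worker_killer'

# Restart a worker after it has served between 500 and 600 requests
# (randomised so the workers do not all restart at once)...
use Unicorn::WorkerKiller::MaxRequests, 500, 600

# ...or once its resident memory grows beyond roughly 240-260 MB.
use Unicorn::WorkerKiller::Oom, (240 * (1024 ** 2)), (260 * (1024 ** 2))

require ::File.expand_path('../config/environment', __FILE__)
run Rails.application  # or YourApp::Application on older Rails versions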
First check memory and disk usage on your server (free -m for memory, df -H for disk). If memory looks fine, reboot the system with sudo reboot and it should work fine again.

How to avoid refreshing gem list

I see the following output while starting a Rails app using Unicorn. What does it do, and how can I avoid it?
I, [2013-03-28T06:46:05.060194 #1762] INFO -- : worker=0 spawning...
I, [2013-03-28T06:46:05.066834 #2137] INFO -- : worker=0 spawned pid=2137
I, [2013-03-28T06:46:05.067210 #2137] INFO -- : Refreshing Gem list
The log you present us contains:
worker=0 spawning
The worker that will answer your HTTP requests is spawned as a separate process, with pid 2137.
Refreshing Gem list
According to the official Unicorn documentation (http://unicorn.bogomips.org/SIGNALS.html), the Gem set is reloaded so that "updated code for your application can pick up newly installed RubyGems".
Looking at the source code, the message "Refreshing Gem list" is logged whenever the application is built:
def build_app!
  if app.respond_to?(:arity) && app.arity == 0
    if defined?(Gem) && Gem.respond_to?(:refresh)
      logger.info "Refreshing Gem list"
      Gem.refresh
    end
    self.app = app.call
  end
end
The preload_app setting gives you some control over where this happens: with preload_app true the application is built (and the Gem list refreshed) once in the master before the workers are forked, rather than in every worker.
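A minimal sketch of the relevant line in config/unicorn.rb, everything else left as in your existing config:

# With preload_app true the master builds the app once before forking, so
# "Refreshing Gem list" appears once at boot rather than once per worker; the
# trade-off is that a plain HUP will no longer pick up new application code.
preload_app true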
