I see the following statements while starting a Rails app using Unicorn. What do they mean, and how can I avoid this:
I, [2013-03-28T06:46:05.060194 #1762] INFO -- : worker=0 spawning...
I, [2013-03-28T06:46:05.066834 #2137] INFO -- : worker=0 spawned pid=2137
I, [2013-03-28T06:46:05.067210 #2137] INFO -- : Refreshing Gem list
The log you posted contains:
worker=0 spawning
The worker that will answer your HTTP requests is spawned as a separate process, with the pid 2137.
Refreshing Gem list
According to the official Unicorn documentation (http://unicorn.bogomips.org/SIGNALS.html), the Gem set is reloaded so that "updated code for your application can pick up newly installed RubyGems".
Looking at the source code, the message "Refreshing Gem list" is logged whenever the application is built:
def build_app!
  if app.respond_to?(:arity) && app.arity == 0
    if defined?(Gem) && Gem.respond_to?(:refresh)
      logger.info "Refreshing Gem list"
      Gem.refresh
    end
    self.app = app.call
  end
end
Setting the preload_app config directive gives you some control over when this happens: with preload_app true, the application (and the Gem refresh) is loaded once in the master before forking, rather than separately in each worker.
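For illustration, a minimal config/unicorn.rb sketch (the worker count and timeout are just placeholder values):

# config/unicorn.rb -- minimal sketch, values are illustrative
# With preload_app true, the app is built (and "Refreshing Gem list" logged)
# once in the master before forking, instead of once per spawned worker.
preload_app true
worker_processes 2
timeout 30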
Related
I'm running a Rails 3.2.21 app and deploy to an Ubuntu 12.04.5 box using capistrano (nginx and unicorn).
I have my app set up for a zero-downtime deploy (or so I thought), with my config files looking more or less like these.
Here's the problem: when the deploy is nearly done and unicorn restarts, I watch my unicorn.log and see it fire up the new workers and reap the old ones... but then my app just hangs for 2-3 minutes. Any request to the app at this point hits the timeout window (which I set to 40 seconds) and returns my app's 500 error page.
Here is the first part of the output from unicorn.log as unicorn is restarting (I have 5 unicorn workers):
I, [2015-04-21T23:06:57.022492 #14347] INFO -- : master process ready
I, [2015-04-21T23:06:57.844273 #15378] INFO -- : worker=0 ready
I, [2015-04-21T23:06:57.944080 #15381] INFO -- : worker=1 ready
I, [2015-04-21T23:06:58.089655 #15390] INFO -- : worker=2 ready
I, [2015-04-21T23:06:58.230554 #14541] INFO -- : reaped #<Process::Status: pid 15551 exit 0> worker=4
I, [2015-04-21T23:06:58.231455 #14541] INFO -- : reaped #<Process::Status: pid 3644 exit 0> worker=0
I, [2015-04-21T23:06:58.249110 #15393] INFO -- : worker=3 ready
I, [2015-04-21T23:06:58.650007 #15396] INFO -- : worker=4 ready
I, [2015-04-21T23:07:01.246981 #14541] INFO -- : reaped #<Process::Status: pid 32645 exit 0> worker=1
I, [2015-04-21T23:07:01.561786 #14541] INFO -- : reaped #<Process::Status: pid 15534 exit 0> worker=2
I, [2015-04-21T23:07:06.657913 #14541] INFO -- : reaped #<Process::Status: pid 16821 exit 0> worker=3
I, [2015-04-21T23:07:06.658325 #14541] INFO -- : master complete
Afterwards, as the app hangs for those 2-3 minutes, here is what's happening:
E, [2015-04-21T23:07:38.069635 #14347] ERROR -- : worker=0 PID:15378 timeout (41s > 40s), killing
E, [2015-04-21T23:07:38.243005 #14347] ERROR -- : reaped #<Process::Status: pid 15378 SIGKILL (signal 9)> worker=0
E, [2015-04-21T23:07:39.647717 #14347] ERROR -- : worker=3 PID:15393 timeout (41s > 40s), killing
E, [2015-04-21T23:07:39.890543 #14347] ERROR -- : reaped #<Process::Status: pid 15393 SIGKILL (signal 9)> worker=3
I, [2015-04-21T23:07:40.727755 #16002] INFO -- : worker=0 ready
I, [2015-04-21T23:07:43.212395 #16022] INFO -- : worker=3 ready
E, [2015-04-21T23:08:24.511967 #14347] ERROR -- : worker=3 PID:16022 timeout (41s > 40s), killing
E, [2015-04-21T23:08:24.718512 #14347] ERROR -- : reaped #<Process::Status: pid 16022 SIGKILL (signal 9)> worker=3
I, [2015-04-21T23:08:28.010429 #16234] INFO -- : worker=3 ready
Eventually, after 2 or 3 minutes, the app starts being responsive again, but everything is more sluggish. You can see this very clearly in New Relic (the horizontal line marks the deploy, and the light blue area indicates time spent in Ruby).
I have an identical staging server, and I cannot replicate the issue in staging... granted, staging is under no load (I'm the only person trying to make page requests).
Here is my config/unicorn.rb file:
root = "/home/deployer/apps/myawesomeapp/current"
working_directory root
pid "#{root}/tmp/pids/unicorn.pid"
stderr_path "#{root}/log/unicorn.log"
stdout_path "#{root}/log/unicorn.log"
shared_path = "/home/deployer/apps/myawesomeapp/shared"
listen "/tmp/unicorn.myawesomeapp.sock"
worker_processes 5
timeout 40
preload_app true
before_exec do |server|
ENV['BUNDLE_GEMFILE'] = "#{root}/Gemfile"
end
before_fork do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.connection.disconnect!
end
old_pid = "#{root}/tmp/pids/unicorn.pid.oldbin"
if File.exists?(old_pid) && server.pid != old_pid
begin
Process.kill("QUIT", File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
end
end
end
after_fork do |server, worker|
if defined?(ActiveRecord::Base)
ActiveRecord::Base.establish_connection
end
end
And just to paint a complete picture, in my capistrano deploy.rb, the unicorn restart task looks like this:
namespace :deploy do
  task :restart, roles: :app, except: { no_release: true } do
    run "kill -s USR2 `cat #{release_path}/tmp/pids/unicorn.pid`"
  end
end
Any ideas why the unicorn workers time out right after the deploy? I thought the point of a zero-downtime deploy was to keep the old workers around until the new ones are spun up and ready to serve?
Thanks!
UPDATE
I did another deploy, and this time kept an eye on production.log to see what was going on there. The only suspicious thing was the following lines, which were mixed in with normal requests:
Dalli/SASL authenticating as 7510de
Dalli/SASL: 7510de
Dalli/SASL authenticating as 7510de
Dalli/SASL: 7510de
Dalli/SASL authenticating as 7510de
Dalli/SASL: 7510de
UPDATE #2
As suggested by some of the answers below, I changed the before_fork block to add sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU so the old master's workers would be incrementally killed off.
Same result: a terribly slow deploy, with the same spike I illustrated in the graph above. Just for context, out of my 5 worker processes, the first 4 sent a TTOU signal to the old master and the 5th sent QUIT. Still, it does not seem to have made a difference.
I came across a similar problem recently while trying to set up Rails/Nginx/Unicorn on Digital Ocean. I was able to get zero-downtime deploys to work after tweaking a few things. Here are a few things to try:
Reduce the number of worker processes.
Increase the memory of your server. I was getting timeouts on the 512MB RAM droplet. Increasing it to 1GB seemed to fix the issue.
Use the "capistrano3-unicorn" gem (see the sketch after this list).
If preload_app true, use restart (USR2). If false, use reload (HUP).
Ensure "tmp/pids" is listed in linked_dirs in deploy.rb.
Use ps aux | grep unicorn to make sure the old processes are being removed.
Use kill [pid] to manually stop any unicorn processes still running.
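If you go with the capistrano3-unicorn gem, the wiring is roughly as follows (a sketch based on the gem's README; details may vary by version):

# Capfile -- load the gem's tasks (the gem must be in your Gemfile)
require 'capistrano3/unicorn'

# deploy.rb -- optionally hook the restart into the deploy flow
after 'deploy:publishing', 'unicorn:restart'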
Here's my unicorn config for reference:
working_directory '/var/www/yourapp/current'
pid '/var/www/yourapp/current/tmp/pids/unicorn.pid'
stderr_path '/var/www/yourapp/log/unicorn.log'
stdout_path '/var/www/yourapp/log/unicorn.log'
listen '/tmp/unicorn.yourapp.sock'
worker_processes 2
timeout 30
preload_app true

before_fork do |server, worker|
  old_pid = "/var/www/yourapp/current/tmp/pids/unicorn.pid.oldbin"
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end
end
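Since this config uses preload_app true, if your app talks to a database through ActiveRecord you will typically also want the per-worker connection handling from the question's config; a sketch:

after_fork do |server, worker|
  # re-establish the ActiveRecord connection in each freshly forked worker
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end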
deploy.rb
lock '3.4.0'
set :application, 'yourapp'
set :repo_url, 'git@bitbucket.org:username/yourapp.git'
set :deploy_to, '/var/www/yourapp'
set :linked_files, fetch(:linked_files, []).push('config/database.yml', 'config/secrets.yml', 'config/application.yml')
set :linked_dirs, fetch(:linked_dirs, []).push('log', 'tmp/pids', 'tmp/cache', 'tmp/sockets', 'vendor/bundle', 'public/system')
set :format, :pretty
set :log_level, :info
set :rbenv_ruby, '2.1.3'

namespace :deploy do
  after :restart, :clear_cache do
    on roles(:web), in: :groups, limit: 3, wait: 10 do
    end
  end
end

after 'deploy:publishing', 'deploy:restart'

namespace :deploy do
  task :restart do
    # invoke 'unicorn:reload'
    invoke 'unicorn:restart'
  end
end
Are you vendoring unicorn and having cap run a bundle install on deploy? If so, this could be an executable path issue.
When you do a Capistrano deploy, cap creates a new release directory for your revision and moves the current symlink to point to the new release. If you haven't told the running unicorn master to gracefully update the path to its executable, adding this line should fix it:
Unicorn::HttpServer::START_CTX[0] = ::File.join(ENV['GEM_HOME'].gsub(/releases\/[^\/]+/, "current"),'bin','unicorn')
You can find some more information here. I think the before_fork block you have looks good, but I would also add the sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU line from #travisluong's answer; that will incrementally kill off the old master's workers as the new ones spawn.
I would not remove preload_app true, incidentally, as it greatly improves worker spawn time.
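Putting both suggestions together, the relevant part of config/unicorn.rb would look roughly like this (a sketch adapted from the question's own config; paths are illustrative):

# Point the re-exec'd master at the 'current' symlink instead of a release dir
Unicorn::HttpServer::START_CTX[0] = ::File.join(
  ENV['GEM_HOME'].gsub(/releases\/[^\/]+/, 'current'), 'bin', 'unicorn')

before_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!

  old_pid = "#{server.config[:pid]}.oldbin"
  if File.exist?(old_pid) && server.pid != old_pid
    begin
      # Wind the old master down one worker at a time; QUIT it with the last
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end
end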
I am running two Rails apps on DigitalOcean with 512MB RAM and with 4 nginx processes.
The rails apps use Unicorn.
One has 2 workers and the other uses 1.
My problem is with the second app, the one that has 1 Unicorn worker (the same problem was there when it had 2 workers as well). What happens is that suddenly my app throws a 500 error, and when I SSH into the server I find that the app's unicorn process is not running!
When I start unicorn again, everything is fine.
This is my log file. As you can see, the worker gets reaped and then the master is unable to fork a replacement, the reason given being "Cannot allocate memory".
I, [2014-01-24T04:12:28.080716 #8820] INFO -- : master process ready
I, [2014-01-24T04:12:28.110834 #8824] INFO -- : worker=0 ready
E, [2014-01-24T06:45:08.423082 #8820] ERROR -- : reaped #<Process::Status: pid 8824 SIGKILL (signal 9)> worker=0
E, [2014-01-24T06:45:08.438352 #8820] ERROR -- : Cannot allocate memory - fork(2) (Errno::ENOMEM)
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:523:in `fork'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:523:in `spawn_missing_workers'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:538:in `maintain_worker_count'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:303:in `join'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/bin/unicorn:126:in `<top (required)>'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `load'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'
I, [2014-01-24T08:43:53.693228 #26868] INFO -- : Refreshing Gem list
I, [2014-01-24T08:43:56.283950 #26868] INFO -- : unlinking existing socket=/tmp/unicorn.hmd.sock
I, [2014-01-24T08:43:56.284840 #26868] INFO -- : listening on addr=/tmp/unicorn.hmd.sock fd=11
I, [2014-01-24T08:43:56.320075 #26868] INFO -- : master process ready
I, [2014-01-24T08:43:56.348648 #26872] INFO -- : worker=0 ready
E, [2014-01-24T09:10:07.251846 #26868] ERROR -- : reaped #<Process::Status: pid 26872 SIGKILL (signal 9)> worker=0
I, [2014-01-24T09:10:07.300339 #27743] INFO -- : worker=0 ready
I, [2014-01-24T09:18:09.992675 #28039] INFO -- : executing ["/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/bin/unicorn", "-D", "-c", "/home/vocp/projects/hmd/config/unicorn.rb", "-E", "production", {11=>#<Kgio::UNIXServer:/tmp/unicorn.hmd.sock>}] (in /home/vocp/projects/hmd)
I, [2014-01-24T09:18:10.426852 #28039] INFO -- : inherited addr=/tmp/unicorn.hmd.sock fd=11
I, [2014-01-24T09:18:10.427090 #28039] INFO -- : Refreshing Gem list
E, [2014-01-24T09:18:13.456986 #28039] ERROR -- : Cannot allocate memory - fork(2) (Errno::ENOMEM)
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:523:in `fork'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:523:in `spawn_missing_workers'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/lib/unicorn/http_server.rb:153:in `start'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/gems/unicorn-4.7.0/bin/unicorn:126:in `<top (required)>'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `load'
/home/vocp/projects/hmd/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'
E, [2014-01-24T09:18:13.464982 #26868] ERROR -- : reaped #<Process::Status: pid 28039 exit 1> exec()-ed
This is my unicorn.rb
root = "/home/vocp/projects/hmd"
working_directory root
pid "#{root}/tmp/pids/unicorn.pid"
stderr_path "#{root}/log/unicorn.log"
stdout_path "#{root}/log/unicorn.log"
listen "/tmp/unicorn.hmd.sock"
worker_processes 1
timeout 30
preload_app true
# Force the bundler gemfile environment variable to
# reference the capistrano "current" symlink
before_exec do |_|
ENV["BUNDLE_GEMFILE"] = File.join(root, 'Gemfile')
end
before_fork do |server, worker|
defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
old_pid = Rails.root + '/tmp/pids/unicorn.pid.oldbin'
if File.exists?(old_pid) && server.pid != old_pid
begin
Process.kill("QUIT", File.read(old_pid).to_i)
rescue Errno::ENOENT, Errno::ESRCH
puts "Old master alerady dead"
end
end
end
after_fork do |server, worker|
defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
child_pid = server.config[:pid].sub('.pid', ".#{worker.nr}.pid")
system("echo #{Process.pid} > #{child_pid}")
end
I do not have monit or god or any monitoring tools. I find it very odd because the server's used memory is generally around 380 of its 490 MB, and nobody uses these two apps apart from me! They are in development.
Have I misconfigured anything? Why is this happening? Please help. Should I configure god to restart unicorn when it crashes?
For Unicorn memory usage, the only way is up, unfortunately. Unicorn will allocate more memory if your Rails app needs it, but it does not release it even once it is no longer needed. For example, if you load a lot of records at once for an index page, unicorn's memory usage will grow. This is exacerbated by the fact that 512MB is not a huge amount of memory for 2 Rails apps with 3 workers between them.
Furthermore, there are memory leaks that increase memory usage too. See this article:
https://www.digitalocean.com/community/articles/how-to-optimize-unicorn-workers-in-a-ruby-on-rails-app
At the end of the article they refer to the unicorn-worker-killer gem, which restarts unicorn workers based on either a maximum number of requests served or maximum memory, and which looks pretty straightforward.
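For reference, unicorn-worker-killer is wired into config.ru ahead of the app; a sketch (the thresholds here are illustrative, tune them to your memory budget):

# config.ru -- these lines must come before running the app
require 'unicorn/worker_killer'

# Restart each worker after it has served 3072-4096 requests
# (the range is randomized per worker so they don't all restart at once)
use Unicorn::WorkerKiller::MaxRequests, 3072, 4096

# ... and/or when a worker's memory crosses roughly 192-256 MB
use Unicorn::WorkerKiller::Oom, (192 * (1024**2)), (256 * (1024**2))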
Personally I have used the bluepill gem to monitor individual unicorn processes and restart them if needed.
In your case I would monitor all unicorn processes and restart them if they reach a certain memory size.
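A bluepill memory check might look roughly like this (a sketch only; the application name, commands, and thresholds are illustrative, see the bluepill README for the exact DSL):

# unicorn.pill -- sketch
Bluepill.application('myapp') do |app|
  app.process('unicorn') do |process|
    process.pid_file      = '/home/vocp/projects/hmd/tmp/pids/unicorn.pid'
    process.start_command = 'bundle exec unicorn -D -c config/unicorn.rb -E production'
    process.stop_command  = 'kill -QUIT {{PID}}'

    # Restart if memory stays above 150MB for 3 of the last 5 checks
    process.checks :mem_usage, every: 10.seconds, below: 150.megabytes, times: [3, 5]
  end
end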
First check the disk space by using the command "df -H" on your server (df reports disk usage, not memory; use something like free -m for memory). If that looks fine, reboot your system with
"sudo reboot" and it should work fine again.
I have followed this tutorial https://www.digitalocean.com/community/articles/how-to-1-click-install-ruby-on-rails-on-ubuntu-12-10-with-digitalocean
and I have already run into a wall...
I have followed the tutorial and installed all the gems, however I keep getting this whenever I try to run bundle install:
root#montrealfixed:~# bundle install
Bundler::GemfileNotFound
Here is the output of tail /home/unicorn/log/unicorn.log
I, [2013-11-01T16:37:37.842833 #3929] INFO -- : worker=0 spawning...
I, [2013-11-01T16:37:37.845118 #3929] INFO -- : worker=1 spawning...
I, [2013-11-01T16:37:37.847094 #3929] INFO -- : master process ready
I, [2013-11-01T16:37:37.852229 #3932] INFO -- : worker=0 spawned pid=3932
I, [2013-11-01T16:37:37.859884 #3934] INFO -- : worker=1 spawned pid=3934
I, [2013-11-01T16:37:37.869478 #3932] INFO -- : Refreshing Gem list
I, [2013-11-01T16:37:37.876426 #3934] INFO -- : Refreshing Gem list
I, [2013-11-01T16:37:41.489375 #3934] INFO -- : worker=1 ready
I, [2013-11-01T16:37:41.490536 #3932] INFO -- : worker=0 ready
96.127.229.50 - - [01/Nov/2013 16:38:35] "GET /rails.png HTTP/1.0" 304 - 0.4569
I even tried refreshing unicorn.
I'm not sure if this is appropriate for a ticket, but can anyone help me? I would really like to get my rails app launched with DO.
P.S.: I tried again and installed every gem separately, and now I am simply getting the following:
-bash: bundle: command not found
I use rails 3.0.11, ruby 1.9.3-p0, nginx 1.0.4 and unicorn 3.6.2 for my project, and I have a problem.
I have to run a long operation on my server; it takes about 150 seconds, and that's okay in this case.
I've set up my nginx config with the following in the location block:
proxy_read_timeout 240;
proxy_send_timeout 240;
And set the timeout in my unicorn.rb file with:
timeout 240
But I always get 502 bad gateway error.
I think the problem is with unicorn. I get these unicorn logs:
E, [2012-05-21T11:52:21.052382 #30423] ERROR -- : worker=1 PID:30871 timeout (104.052329915s > 60s), killing
E, [2012-05-21T11:52:21.080378 #30423] ERROR -- : reaped #<Process::Status: pid 30871 SIGKILL (signal 9)> worker=1
I, [2012-05-21T11:52:21.105045 #30423] INFO -- : worker=1 spawning...
I, [2012-05-21T11:52:21.111148 #894] INFO -- : worker=1 spawned pid=894
I, [2012-05-21T11:52:21.111659 #894] INFO -- : Refreshing Gem list
Can you help me? Any help is appreciated. Thank you.
Copying the answer from the comments in order to remove this question from the "Unanswered" filter:
I have never used this gem, but if you're doing this after 'deploy:restart', 'unicorn:reload' you need to restart unicorn, not only reload it. sudo /etc/init.d/unicorn restart and the timeout will be set. Reload and restart are two different things in unicorn.
~ answer per Maurício Linhares
After changing the timeout in config/unicorn/production.rb
I had to run
cap deploy
and then stop & start unicron master process to pick up new config with:
cap unicorn:stop
cap unicorn:start
I am trying to use foreman to start my rails app. Unfortunately, I am having difficulties connecting my IDE for debugging.
I read here about using
Debugger.wait_connection = true
Debugger.start_remote
to start a remote debugging session, but that does not really work out.
Question:
Is there a way to debug a rails (3.2) app started by foreman? If so, what is the approach?
If you use several workers with the full Rails environment, you could use the following initializer:
# Enable debugger with foreman, see https://github.com/ddollar/foreman/issues/58
if Rails.env.development?
  require 'socket'   # for TCPServer
  require 'debugger'
  Debugger.wait_connection = true

  def find_available_port
    server = TCPServer.new(nil, 0)
    server.addr[1]
  ensure
    server.close if server
  end

  port = find_available_port
  puts "Remote debugger on port #{port}"
  Debugger.start_remote(nil, port)
end
And in foreman's logs you'll be able to find the debugger ports:
$ foreman start
12:48:42 web.1 | started with pid 29916
12:48:42 worker.1 | started with pid 29921
12:48:44 web.1 | I, [2012-10-30T12:48:44.810464 #29916] INFO -- : listening on addr=0.0.0.0:5000 fd=10
12:48:44 web.1 | I, [2012-10-30T12:48:44.810636 #29916] INFO -- : Refreshing Gem list
12:48:47 web.1 | Remote debugger on port 59269
12:48:48 worker.1 | Remote debugger on port 41301
Now run debugger using:
rdebug -c -p [PORT]
One approach is to require debugger normally in your Gemfile, and add debugger calls in your code as needed. When the server hits such a line, it will stop, but foreman won't be verbose about it. In your foreman console you can blindly type irb, and only then will you see a prompt appear. Bad UX, right?
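In concrete terms, that looks roughly like this (a sketch; the controller action and model name are hypothetical):

# Gemfile
group :development do
  gem 'debugger'
end

# app/controllers/orders_controller.rb -- hypothetical example
def show
  debugger  # execution pauses here when the action is hit
  @order = Order.find(params[:id])
end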
Another (augmentative) approach is to tail your logs:
tail -f log/development.log
Hope this helps.