Unicorn failing to spawn workers on USR2 signal - ruby-on-rails

I'm sending a USR2 signal to the master process in order to achieve a zero-downtime deploy with Unicorn. After the old master is dead, I get the following error:
adding listener failed addr=/path/to/unix_socket (in use)
unicorn-4.3.1/lib/unicorn/socket_helper.rb:140:in `initialize':
Address already in use - /path/to/unix_socket (Errno::EADDRINUSE)
The old master is killed in the before_fork block in the unicorn.rb config file. The process is started via upstart without the daemon (-D) option.
Any idea what's going on?

Well, it turns out you have to run in daemonized mode (-D) if you want to be able to do zero-downtime deployment. I changed a few things in my upstart script and now it works fine:
setuid username
pre-start exec unicorn_rails -E production -c /path/to/app/config/unicorn.rb -D
post-stop exec kill `cat /path/to/app/tmp/pids/unicorn.pid`
respawn
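For reference, if you drive the handoff by hand rather than from before_fork, the signal sequence looks roughly like this (a sketch; the pid path is the one from the upstart script above, and Unicorn renames the old master's pid file to unicorn.pid.oldbin after USR2):
kill -USR2 `cat /path/to/app/tmp/pids/unicorn.pid`        # spawn a new master and workers alongside the old ones
# wait for the new workers to finish booting, then retire the old master gracefully
kill -QUIT `cat /path/to/app/tmp/pids/unicorn.pid.oldbin`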

Related

supervisor restart causes zombie uwsgi process

I have a python/Django project (myproject) running on nginx and uwsgi.
I am running the uwsgi command via supervisord. This works perfectly, but restarting supervisord leaves a zombie uwsgi process behind. What am I doing wrong? What am I overlooking to do this cleanly? Any advice?
Often the supervisor service takes too long to restart; at that point I have found the following in the supervisor.log file:
INFO waiting for stage2_BB_wsgi, stage3_BB_wsgi, stage4_BB_wsgi to die
Note: I am running multiple staging servers on one machine, namely stage2 .. stageN.
supervisor.conf file extract
[program:stage2_BB_wsgi]
command=uwsgi --close-on-exec -s /home/black/stage2/shared_locks/uwsgi_bb.sock --touch-reload=/home/black/stage2/shared_locks/reload_uwsgi --listen 10 --chdir /home/black/stage2/myproject/app/ --pp .. -w app.wsgi -C666 -H /home/black/stage2/myproject/venv/
user=black
numprocs=1
stdout_logfile=/home/black/stage2/logs/%(program_name)s.log
stderr_logfile=/home/black/stage2/logs/%(program_name)s.log
autostart=true
autorestart=true
startsecs=10
exitcodes=1
stopwaitsecs=600
killasgroup=true
priority=1000
Thanks in advance.
You will want to set your stopsignal to INT or QUIT.
By default supervisord sends a SIGTERM when restarting a program. That does not kill uwsgi; it only reloads it and its workers.
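For example, in the [program:stage2_BB_wsgi] block shown above you would add something like this (a sketch; supervisord's stopsignal option takes the signal name without the SIG prefix):
stopsignal=QUIT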

Doing Hartl's Rails Tutorial, but the server won't shut down so I can fire it up again after updates

I have made some changes to various files, and need to shut down and then restart the server to see them. I am using the Cloud9 railstutorial environment. But I keep getting the same error - "A server is already running". Please see below:
darrenbrett:~/workspace/sample_app (filling-in-layout) $ rails server -b $IP -p $PORT
=> Booting WEBrick
=> Rails 4.2.2 application starting in development on http://0.0.0.0:8080
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
A server is already running. Check /home/ubuntu/workspace/sample_app/tmp/pids/server.pid.
Exiting
darrenbrett:~/workspace/sample_app (filling-in-layout) $
Find out the process id (PID) first:
$ lsof -wni tcp:8080
This will give you something like this:
$ lsof -wni tcp:8080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 59656 rislam 14u IPv6 0xa86b8563672ef037 0t0 TCP [::1]:http-alt (LISTEN)
Then, kill the process with PID = 59656 (for example, it will be different for you):
$ kill -9 59656
This should solve your problem.
You can also use the following command to kill all running apps that have rails in the name:
killall -9 rails
Sometimes, this is very effective when the first command does not do the trick.
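The two steps can also be collapsed into one line (a sketch, assuming the same port 8080 as above; lsof's -t flag prints only the PIDs):
kill -9 $(lsof -ti tcp:8080)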
It sounds like you either have an orphaned server process running, or you have multiple applications in the same environment and you're running the server for another one and it's bound to the same port. I'm guessing the former.
Run ps -fe | grep rails. You should see something like 501 35861 2428 0 1:55PM ttys012 0:04.00 /your_file_path/ruby bin/rails server
Grab the process id (in the case of the example, it's 35861), and run kill -9 35861. Then try to start your server again.
The easiest way I found to kill that server is to click on the suggested file in the line:
A server is already running. Check /home/ubuntu/workspace/sample_app/tmp/pids/server.pid
That opens the file containing a number, the process id you're looking for.
Example
43029
In the terminal window, use kill with that same process id, then restart the server:
kill -15 43029
rails server -b $IP -p $PORT
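If the pid recorded in that file belongs to a process that is no longer running, you can also simply delete the stale pid file and start the server again (a sketch using the path from the error message above):
rm /home/ubuntu/workspace/sample_app/tmp/pids/server.pid
rails server -b $IP -p $PORT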

Can you bounce thin servers instead of restarting them all at once?

This sucks:
$ thin restart -p 4000 -s 3 -d;
Stopping server on 0.0.0.0:4000 ...
Sending QUIT signal to process 13375 ...
>> Exiting!
Stopping server on 0.0.0.0:4001 ...
Sending QUIT signal to process 13385 ...
>> Exiting!
Stopping server on 0.0.0.0:4002 ...
Sending QUIT signal to process 13397 ...
>> Exiting!
Starting server on 0.0.0.0:4000 ...
Starting server on 0.0.0.0:4001 ...
Starting server on 0.0.0.0:4002 ...
Is there a way to cycle through each server and restart one at a time?
RTM — from thin's own help output:
> thin
Command required
Usage: thin [options] start|stop|restart|config
[...]
Cluster options:
-s, --servers NUM Number of servers to start
-o, --only NUM Send command to only one server of the cluster
-C, --config FILE Load options from config file
-O, --onebyone Restart the cluster one by one (only works with restart command)
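Putting that together with the command from the question (a sketch; the same flags as before, plus the onebyone option from the help output above):
thin restart -p 4000 -s 3 -d -O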

How to restart individual servers in thin cluster in rails 3.1 app

I have a thin cluster set up to start 3 servers:
/etc/thin/myapp.yml
...
wait: 30
servers: 3
daemonize: true
...
and then I use thin restart -C /etc/thin/myapp.yml to restart. However, I would like to restart one server at a time, to reduce downtime.
Is there a way to restart each server by pid number or location for example?
There is something better for you: try the --onebyone option.
You may also add the following line to your config file:
onebyone: true
Afterwards you are able to restart your thin cluster without any downtime.
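In context, the config file from the question would then look something like this (a sketch; any settings elided in the question stay as they are):
# /etc/thin/myapp.yml
wait: 30
servers: 3
daemonize: true
onebyone: true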
I know the question has been answered, but I'd like to add the -o option to the mix.
So
thin restart -C /etc/thin/myapp.yml -o 3000
will restart only the server running on port 3000. If, say, you have two other servers running on 3001 and 3002, they'll be left untouched.
-o works with start and stop commands too.
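For instance, to take down just that one instance while leaving the rest of the cluster running (a sketch, reusing the config path from the question):
thin stop -C /etc/thin/myapp.yml -o 3000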

How do you restart Rails under Mongrel, without stopping and starting Mongrel

Is there a way to restart the Rails app (e.g. when you've changed a plugin/config file) while Mongrel is running? Or, alternatively, to quickly restart Mongrel? Mongrel gives these hints that you can, but how do you do it?
** Signals ready. TERM => stop. USR2 => restart. INT => stop (no restart).
** Rails signals registered. HUP => reload (without restart). It might not work well.
You can add the -c option if the config for your app's cluster is elsewhere:
mongrel_rails cluster::restart -c /path/to/config
First, discover the current mongrel pid file path with something like:
>ps axf | fgrep mongrel
You will see a process line like:
ruby /usr/lib64/ruby/gems/1.8/gems/swiftiply-0.6.1.1/bin/mongrel_rails start -p 3000 -a 0.0.0.0 -e development -P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid -d
Take the '-P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid' part and use it like this:
>mongrel_rails restart -P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid
Sending USR2 to Mongrel at PID 18481...Done.
I use this to recover from the dreaded "Broken pipe" to MySQL problem.
In your Rails home directory:
mongrel_rails cluster::restart
For example, to send the USR2 restart signal to all running mongrel_rails processes:
killall -USR2 mongrel_rails
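Along the same lines, using the pid file found earlier, the signals from Mongrel's startup banner can be sent to a single instance (a sketch; the pid path is the one from the ps output above):
kill -USR2 `cat /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid`   # stop and restart
kill -HUP `cat /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid`    # reload without a restart (may not work well, per the banner)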
