Can you bounce thin servers instead of restarting them all at once? - ruby-on-rails

This sucks:
$ thin restart -p 4000 -s 3 -d;
Stopping server on 0.0.0.0:4000 ...
Sending QUIT signal to process 13375 ...
>> Exiting!
Stopping server on 0.0.0.0:4001 ...
Sending QUIT signal to process 13385 ...
>> Exiting!
Stopping server on 0.0.0.0:4002 ...
Sending QUIT signal to process 13397 ...
>> Exiting!
Starting server on 0.0.0.0:4000 ...
Starting server on 0.0.0.0:4001 ...
Starting server on 0.0.0.0:4002 ...
Is there a way to cycle through each server and restart one at a time?

RTM (read the manual):
> thin
Command required
Usage: thin [options] start|stop|restart|config
[...]
Cluster options:
-s, --servers NUM Number of servers to start
-o, --only NUM Send command to only one server of the cluster
-C, --config FILE Load options from config file
-O, --onebyone Restart the cluster one by one (only works with restart command)
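In other words, adding -O (--onebyone) to the restart command from the question makes thin cycle the cluster one server at a time instead of stopping everything first:
$ thin restart -p 4000 -s 3 -d -O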

Related

How to stop a nohup process running rails server on port 3000

I started a Rails server (puma) using the following command:
nohup rails server &
Its output was [2] 22481, along with the following:
nohup: ignoring input and appending output to 'nohup.out'
But now I have forgotten the returned process ID, so how can I find it in order to kill the process on AWS?
To kill whatever is listening on port 3000 (the Rails server's default port), run the command below to get the process ID for port 3000:
$ lsof -wni tcp:3000
Then, use process id (PID) to kill the process:
$ kill -9 PID
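If your lsof supports -t (terse output: PIDs only, present on Linux and macOS), the two steps can be collapsed into a one-liner; a sketch:
$ kill -9 $(lsof -ti tcp:3000)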
The Rails server PID can also be found in this file:
tmp/pids/server.pid
then:
kill -9 <pid>
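Assuming you run this from the application root, you can also read that file directly instead of copying the PID by hand:
$ kill -9 $(cat tmp/pids/server.pid)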
The command
ps -ef
returns the full list of processes, one of which looked like this:
ec2-user 12992 1 0 Dec20 ? 00:00:57 puma 3.12.0 (tcp://0.0.0.0:3000) [tukatech_garmentstore_live]
so I force-killed the process with
kill -9 12992
and that did the job.
ps aux | grep 3000
This will show you the Rails server process running on port 3000.
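Note that grep will usually match its own command line as well, since it also contains 3000; a common trick is to bracket one character of the pattern so the grep process no longer matches itself:
$ ps aux | grep '[3]000'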

Doing Hartl's Rails Tutorial, but server won't shut down so I can fire it up after updates

I have made some changes to various files and need to shut down and then restart the server to see them. I am using the Cloud9 railstutorial environment, but I keep getting the same error: "A server is already running". Please see below:
darrenbrett:~/workspace/sample_app (filling-in-layout) $ rails server -b $IP -p $PORT
=> Booting WEBrick
=> Rails 4.2.2 application starting in development on http://0.0.0.0:8080
=> Run `rails server -h` for more startup options
=> Ctrl-C to shutdown server
A server is already running. Check /home/ubuntu/workspace/sample_app/tmp/pids/server.pid.
Exiting
darrenbrett:~/workspace/sample_app (filling-in-layout) $
Find out the process id (PID) first:
$ lsof -wni tcp:8080
This will give you something like this:
$ lsof -wni tcp:8080
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ruby 59656 rislam 14u IPv6 0xa86b8563672ef037 0t0 TCP [::1]:http-alt (LISTEN)
Then, kill the process with PID = 59656 (for example, it will be different for you):
$ kill -9 59656
This should solve your problem.
You can also use the following command to kill all running processes that have "rails" in the name:
killall -9 rails
Sometimes this is very effective when the first command does not do the trick.
It sounds like you either have an orphaned server process running, or you have multiple applications in the same environment and you're running the server for another one and it's bound to the same port. I'm guessing the former.
Run ps -fe | grep rails. You should see something like 501 35861 2428 0 1:55PM ttys012 0:04.00 /your_file_path/ruby bin/rails server
Grab the process id (in the case of the example, it's 35861), and run kill -9 35861. Then try to start your server again.
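If pgrep is available, the grep-and-copy step can be skipped entirely; a sketch, assuming the server's command line contains "rails server":
$ kill -9 $(pgrep -f "rails server")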
The easiest way I found to kill that server is to open the file suggested in the line:
A server is already running. Check /home/ubuntu/workspace/sample_app/tmp/pids/server.pid
That file contains a single number, the process ID you're looking for, for example:
43029
In the terminal window, use kill with that same process ID, then restart the server:
kill -15 43029
rails server -b $IP -p $PORT
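Put together, using the pid file path from the error message, the whole cycle is:
$ kill -15 $(cat /home/ubuntu/workspace/sample_app/tmp/pids/server.pid)
$ rails server -b $IP -p $PORT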

Sidekiq not running at startup of passenger server in Rails 4.1.6 app

I need Sidekiq to run once I start the server on our staging application. We moved to a different server instance on Rackspace to better mirror our production conditions.
The application is started with
passenger start --nginx-config-template nginx.conf.erb --address 127.0.0.1 -p 3002 --daemonize
The sidekiq files are as follows:
# /etc/init/sidekiq.conf - Sidekiq config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Sidekiq instances at once.
#
# Save this config as /etc/init/sidekiq.conf then manage sidekiq with:
# sudo start sidekiq index=0
# sudo stop sidekiq index=0
# sudo status sidekiq index=0
#
# or use the service command:
# sudo service sidekiq {start,stop,restart,status}
#
description "Sidekiq Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping workers or runlevel [06])
# change to match your deployment user
setuid root
setgid root
respawn
respawn limit 3 30
# TERM is sent by sidekiqctl when stopping sidekiq. Without declaring these as normal exit codes, it just respawns.
normal exit 0 TERM
instance $index
script
  # this script runs in /bin/sh by default
  # respawn as bash so we can source in rbenv
  exec /bin/bash <<EOT
    # use syslog for logging
    exec &> /dev/kmsg
    # pull in system rbenv
    export HOME=/root
    source /etc/profile.d/rbenv.sh
    cd /srv/monolith
    exec bin/sidekiq -i ${index} -e staging
EOT
end script
and workers.conf
# /etc/init/workers.conf - manage a set of Sidekiqs
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See sidekiq.conf for how to manage a single Sidekiq instance.
#
# Use "stop workers" to stop all Sidekiq instances.
# Use "start workers" to start all instances.
# Use "restart workers" to restart all instances.
# Crazy, right?
#
description "manages the set of sidekiq processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Sidekiq processes you want
# to run on this machine
env NUM_WORKERS=2
pre-start script
  for i in `seq 0 $((${NUM_WORKERS} - 1))`
  do
    start sidekiq index=$i
  done
end script
When I go into the server and try service sidekiq start index=0 or service sidekiq status index=0, it can't find the service, but if I run bundle exec sidekiq -e staging, Sidekiq starts up and works through the job queue without problems. Unfortunately, as soon as I close the SSH session, Sidekiq finds a way to kill itself.
How can I ensure Sidekiq runs when I start the server, and that it restarts itself if something goes wrong, as the Upstart setup above is supposed to provide?
Thanks.
In order to run Sidekiq as a service (one that the service command can find), you should put a script called "sidekiq" in /etc/init.d, not /etc/init.
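Note that the sidekiq.conf and workers.conf shown above are Upstart jobs, which do live in /etc/init; as their own comments say, those are driven with Upstart's native commands rather than through /etc/init.d:
$ sudo start sidekiq index=0
$ sudo status sidekiq index=0
$ sudo stop sidekiq index=0
$ sudo start workers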

Unicorn failing to spawn workers on USR2 signal

I'm sending a USR2 signal to the master process in order to achieve zero downtime deploy with unicorn. After the old master is dead, I'm getting the following error:
adding listener failed addr=/path/to/unix_socket (in use)
unicorn-4.3.1/lib/unicorn/socket_helper.rb:140:in `initialize':
Address already in use - /path/to/unix_socket (Errno::EADDRINUSE)
The old master is killed in the before_fork block of the unicorn.rb config file. The process is started via Upstart without the daemon (-D) option.
Any idea what's going on?
Well, turns out you have to run in daemonized mode (-D) if you want to be able to do zero downtime deployment. I changed a few things in my upstart script and now it works fine:
setuid username
pre-start exec unicorn_rails -E production -c /path/to/app/config/unicorn.rb -D
post-stop exec kill $(cat /path/to/app/tmp/pids/unicorn.pid)
respawn
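For reference, the zero-downtime reload itself is triggered by sending USR2 to the master process, reading its PID from the same file the post-stop stanza uses:
$ kill -USR2 $(cat /path/to/app/tmp/pids/unicorn.pid)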

How to start thin process at system boot

I am using a Debian-flavor Linux system. I am using the thin web server to get live call status in my application. This process gets started when I use /etc/init.d/thin start. I used update-rc.d -f thin defaults to make the thin process start at system boot. After adding the entry I rebooted the system, but the thin process does not get started. I checked apache2 and it starts properly at system boot. My thin script in init.d is as follows:
DAEMON=/usr/local/lib/ruby/gems/1.9.1/bin/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin

# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0

case "$1" in
  start)
    $DAEMON start --all $CONFIG_PATH
    ;;
  stop)
    $DAEMON stop --all $CONFIG_PATH
    ;;
  restart)
    $DAEMON restart --all $CONFIG_PATH
    ;;
  *)
    echo "Usage: $SCRIPT_NAME {start|stop|restart}" >&2
    exit 3
    ;;
esac
My configuration file in /etc/thin, user_status.yml, is as follows:
---
chdir: /FMS/src/FMS-Frontend
environment: production
address: localhost
port: 5000
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
rackup: user_status.ru
threaded: true
daemonize: false
You need a wrapper for 'thin'.
See https://rvm.io/integration/init-d.
The wrapper path then needs substituting for DAEMON in the init.d script.
I keep forgetting this and it has cost a good few hours!
Now that I've checked it out: as root, enter the two commands
rvm wrapper current bootup thin
which bootup_thin
The first creates the wrapper, and the second gives the path to it.
Edit the DAEMON line in /etc/init.d/thin to use this path, and finish off with
systemctl daemon-reload
service thin restart
I have assumed a multi-user installation of rvm; note also that you have to become root with
su -
to get the rvm environment right.
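With a multi-user RVM install the wrapper typically lands under /usr/local/rvm/bin, so the edited line might look like the following (the exact path is whatever which bootup_thin printed; this one is an assumption):
DAEMON=/usr/local/rvm/bin/bootup_thin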
