How to start thin process at system boot - ruby-on-rails

I am using a Debian-flavored Linux system and the thin web server to get live call status in my application. The process starts fine when I run /etc/init.d/thin start, and I ran update-rc.d -f thin defaults so that thin would start at system boot. After adding the entry I rebooted the system, but thin does not get started. I checked apache2 and it starts properly at boot. My thin script in init.d is as follows:
DAEMON=/usr/local/lib/ruby/gems/1.9.1/bin/thin
SCRIPT_NAME=/etc/init.d/thin
CONFIG_PATH=/etc/thin
# Exit if the package is not installed
[ -x "$DAEMON" ] || exit 0
case "$1" in
start)
$DAEMON start --all $CONFIG_PATH
;;
stop)
$DAEMON stop --all $CONFIG_PATH
;;
restart)
$DAEMON restart --all $CONFIG_PATH
;;
*)
echo "Usage: $SCRIPT_NAME {start|stop|restart}" >&2
exit 3
;;
esac
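As an aside, Debian init scripts are normally expected to carry an LSB header block so that update-rc.d/insserv can register and order them at boot; the script above does not show one. A minimal sketch of what it usually looks like, with placeholder dependencies and runlevels:
### BEGIN INIT INFO
# Provides:          thin
# Required-Start:    $local_fs $remote_fs $network
# Required-Stop:     $local_fs $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: thin application server
### END INIT INFO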
My configuration file in /etc/thin is as follows.
user_status.yml
---
chdir: /FMS/src/FMS-Frontend
environment: production
address: localhost
port: 5000
timeout: 30
log: log/thin.log
pid: tmp/pids/thin.pid
max_conns: 1024
max_persistent_conns: 512
require: []
wait: 30
servers: 1
rackup: user_status.ru
threaded: true
daemonize: false
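A side note on debugging: before looking at boot ordering, it can help to confirm that the config file itself starts cleanly when run by hand. Thin's -C flag (shown in its usage further down this page) loads options from a config file; the daemon path below is the one from the init script above:
/usr/local/lib/ruby/gems/1.9.1/bin/thin start -C /etc/thin/user_status.yml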

You need a wrapper for 'thin'.
See https://rvm.io/integration/init-d.
The wrapper path then needs substituting for DAEMON in the init.d script.
I keep forgetting this and it has cost a good few hours!
Now that I've checked it out: as root, enter these two commands:
rvm wrapper current bootup thin
which bootup_thin
The first creates the wrapper, and the second gives the path to it.
Edit the DAEMON line in /etc/init.d/thin to use this path, and finish off with
systemctl daemon-reload
service thin restart
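For reference, the edited line ends up looking something like this; the exact path is whatever which bootup_thin printed, and /usr/local/rvm/bin is only the usual multi-user default, so treat it as an assumption:
DAEMON=/usr/local/rvm/bin/bootup_thin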
I have assumed a multi-user installation of rvm; note that you also have to become root with
su -
so that the rvm environment is set up correctly.

Related

Monit bundle exec rails s

I have the following shell script that allows me to start my rails app, let's say it's called start-app.sh:
#!/bin/bash
cd /var/www/project/current
. /home/user/.rvm/environments/ruby-2.3.3
RAILS_SERVE_STATIC_FILES=true RAILS_ENV=production nohup bundle exec rails s -e production -p 4445 > /var/www/project/log/production.log 2>&1 &
The file above has the following permissions:
-rwxr-xr-x 1 user user 410 Mar 21 10:00 start-app.sh*
If I want to check the process, I do the following:
ps aux | grep -v grep | grep ":4445"
It gives me the following output:
user 2960 0.0 7.0 975160 144408 ? Sl 10:37 0:07 puma 3.12.0 (tcp://0.0.0.0:4445) [20180809094218]
P.S.: the reason I grep ":4445" is that I have a few processes running on different ports (for different projects).
Now, coming to monit: I installed it with apt-get, and the latest version in the repo is 5.16 since I'm running Ubuntu 16.04. Also note that monit runs as root, which is why I specified the uid and gid in the following (the start script is meant to be executed as "user", not "root").
Here's the configuration for monit:
set daemon 20                     # check services at 20 seconds interval
set logfile /var/log/monit.log
set idfile /var/lib/monit/id
set statefile /var/lib/monit/state
set eventqueue
    basedir /var/lib/monit/events # set the base directory where events will be stored
    slots 100                     # optionally limit the queue size
set mailserver xx.com port xxx
    username "xx#xx.com" password "xxxxxx"
    using tlsv12
    with timeout 20 seconds
set alert xx#xx.com
set mail-format {
  from: xx#xx.com
  subject: monit alert -- $EVENT $SERVICE
  message: $EVENT Service $SERVICE
    Date: $DATE
    Action: $ACTION
    Host: $HOST
    Description: $DESCRIPTION
}
set limits {
  programOutput: 51200 B
  sendExpectBuffer: 25600 B
  fileContentBuffer: 51200 B
  networktimeout: 10 s
}
check system $HOST
    if loadavg (1min) > 4 then alert
    if loadavg (5min) > 2 then alert
    if cpu usage > 90% for 10 cycles then alert
    if memory usage > 85% then alert
    if swap usage > 35% then alert
check process nginx with pidfile /var/run/nginx.pid
    start program = "/bin/systemctl start nginx"
    stop program = "/bin/systemctl stop nginx"
check process redis
    matching "redis"
    start program = "/bin/systemctl start redis"
    stop program = "/bin/systemctl stop redis"
check process myapp
    matching ":4445"
    start program = "/bin/bash -c '/home/user/start-app.sh'" as uid "user" and gid "user"
    stop program = "/bin/bash -c /home/user/stop-app.sh" as uid "user" and gid "user"
include /etc/monit/conf.d/*
include /etc/monit/conf-enabled/*
Now monit detects the process going down (if I kill it manually) and alerts me when it is manually recovered, but it won't start that shell script automatically. According to /var/log/monit.log, it shows the following:
[UTC Aug 13 10:16:41] info : Starting Monit 5.16 daemon
[UTC Aug 13 10:16:41] info : 'production-server' Monit 5.16 started
[UTC Aug 13 10:16:43] error : 'myapp' process is not running
[UTC Aug 13 10:16:46] info : 'myapp' trying to restart
[UTC Aug 13 10:16:46] info : 'myapp' start: /bin/bash
[UTC Aug 13 10:17:17] error : 'myapp' failed to start (exit status 0) -- no output
So far, what I see when monit tries to execute the script is that it tries to load it (I can see it for less than 3 seconds using ps aux | grep -v grep | grep ":4445"), but this output is different from the one I showed above: it shows the content of the shell script being executed, specifically this part:
blablalba... nohup bundle exec rails s -e production -p 4445
and then it disappears. Then it tries to re-execute the shell script, again and again.
What am I missing, and what is wrong with my configuration? Note that I can't change anything in start-app.sh because it's in production and working 100% (I just want to monitor it).
Edit: based on my understanding and experience, it seems to be an environment-variable or path issue, but I'm not sure how to solve it. It doesn't make sense to put the env variables inside monit; what if someone else wanted to edit that shell script or add something new? I hope you get my point.
As I expected, it was a user-environment issue, and I solved it by editing the monit configuration as below:
Before (not working)
check process myapp
    matching ":4445"
    start program = "/bin/bash -c '/home/user/start-app.sh'" as uid "user" and gid "user"
    stop program = "/bin/bash -c /home/user/stop-app.sh" as uid "user" and gid "user"
After (working)
check process myapp
    matching ":4445"
    start program = "/bin/su -s /bin/bash -c '/home/user/start-app.sh' user"
    stop program = "/bin/su -s /bin/bash -c '/home/user/stop-app.sh' user"
Explanation: I removed the uid and gid "user" options from monit because they only execute the shell script as "user"; they don't get/import/use the user's env path or env variables.
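After editing the monit control file, monit itself still has to pick the change up. A quick sequence to validate and apply it (standard monit commands, run as root):
monit -t            # syntax-check the control file
monit reload        # re-read the configuration
monit start myapp   # ask monit to run the start program for the service above
tail -f /var/log/monit.log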

Apache + passenger - /tmp permission denied

I am trying to run Ruby on Rails under Passenger with Apache 2 on Fedora 19, and I get this error in the log:
[Tue Feb 25 09:37:52.367683 2014] [passenger:error] [pid 2779] ***
Passenger could not be initialized because of this error: Unable to
start the Phusion Passenger watchdog because it encountered the
following error during startup: Cannot change the directory
'/tmp/passenger.1.0.2779/generation-1/buffered_uploads' its UID to 48
and GID to 48: Operation not permitted (errno=1)
That directory (/tmp/passenger.1.0.2779) doesn't even exist. I think the problem is with SELinux. I spent about 4 hours trying to solve it. Httpd is running under user apache and group apache. I tried:
cat /var/log/audit/audit.log | grep passenger | audit2allow -M passenger
semodule -i passenger.pp
but still nothing.
In your case, you should first switch SELinux into Permissive mode, then capture the audit log from starting Apache through running your application.
Once you get the home page of your application, you can build your custom policy from the logs.
Switch SELinux into Permissive mode and clean audit.log
]# setenforce 0
]# rm /var/log/audit/audit.log
]# service auditd restart
Restart Apache
]# service httpd restart
Try to open your application with a web browser
It might give more information about what is happening when your application is running.
Make a custom policy module to allow these actions
]# mkdir work
]# cd work
]# grep httpd /var/log/audit/audit.log | audit2allow -M passenger
]# ls
passenger.pp passenger.te
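Before loading it, it can be worth glancing at the rules audit2allow generated (an optional extra step, not part of the original answer):
]# cat passenger.te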
Load the passenger policy module into the current SELinux policy using the 'semodule' command:
]# semodule -i passenger.pp
]# setenforce 1
Restart Apache
]# service httpd restart
References:
http://wiki.centos.org/HowTos/SELinux#head-faa96b3fdd922004cdb988c1989e56191c257c01
I ran into a similar error, with a startup message about being unable to create a directory that did not exist (logs, not tmp, but the same sort of thing). I, too, battled with it for an hour and couldn't make sense of it. I created, deleted, and chmod-ed the directory in many ways without success.
The fix for me was to change the parameters to passenger-start. Initially, my Docker container started passenger with:
exec bundle exec passenger start --auto --disable-security-update-check --min-instances 20 --max-pool-size 20 --max-request-queue-size 500
I removed all parameters, leaving just this:
exec bundle exec passenger start
At this point, passenger could create the log folder and file, and all was well. I could have restored the params at this point, but we decided they were not needed for the development environment so left them out moving ahead.
In hindsight, I have a hunch that I deleted the log directory while a file in it was still open, and the file system persisted that condition in some way. But that's just a hunch. Perhaps simply rebooting my Mac would have fixed it...

Can you bounce thin servers instead of restarting them all at once?

This sucks:
$ thin restart -p 4000 -s 3 -d;
Stopping server on 0.0.0.0:4000 ...
Sending QUIT signal to process 13375 ...
>> Exiting!
Stopping server on 0.0.0.0:4001 ...
Sending QUIT signal to process 13385 ...
>> Exiting!
Stopping server on 0.0.0.0:4002 ...
Sending QUIT signal to process 13397 ...
>> Exiting!
Starting server on 0.0.0.0:4000 ...
Starting server on 0.0.0.0:4001 ...
Starting server on 0.0.0.0:4002 ...
Is there a way to cycle through each server and restart one at a time?
RTM (read the manual): it's in thin's own help output:
> thin
Command required
Usage: thin [options] start|stop|restart|config
[...]
Cluster options:
-s, --servers NUM Number of servers to start
-o, --only NUM Send command to only one server of the cluster
-C, --config FILE Load options from config file
-O, --onebyone Restart the cluster one by one (only works with restart command)
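So, combining the flags from the question with -O, a rolling restart looks something like this (same port and server count as the original command, which is an assumption about your setup):
thin restart -p 4000 -s 3 -d -O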

Unicorn failing to spawn workers on USR2 signal

I'm sending a USR2 signal to the master process in order to achieve zero downtime deploy with unicorn. After the old master is dead, I'm getting the following error:
adding listener failed addr=/path/to/unix_socket (in use)
unicorn-4.3.1/lib/unicorn/socket_helper.rb:140:in `initialize':
Address already in use - /path/to/unix_socket (Errno::EADDRINUSE)
The old master is killed in the before_fork block in the unicorn.rb config file. The process is started via upstart without the daemon (-D) option.
Any idea what's going on?
Well, it turns out you have to run in daemonized mode (-D) if you want to do zero-downtime deployment. I changed a few things in my upstart script and now it works fine:
setuid username
pre-start exec unicorn_rails -E production -c /path/to/app/config/unicorn.rb -D
post-stop exec kill `cat /path/to/app/tmp/pids/unicorn.pid`
respawn
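Once the master is daemonized and writing its pid file, the zero-downtime re-exec itself is just a USR2 sent to that pid, e.g. from a deploy script (a sketch; the pid path matches the upstart stanza above):
kill -USR2 `cat /path/to/app/tmp/pids/unicorn.pid`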

How do you restart Rails under Mongrel, without stopping and starting Mongrel

Is there a way to restart the Rails app (e.g. when you've changed a plugin/config file) while Mongrel is running, or alternatively to quickly restart Mongrel? Mongrel gives these hints that you can, but how do you do it?
** Signals ready. TERM => stop. USR2 => restart. INT => stop (no restart).
** Rails signals registered. HUP => reload (without restart). It might not work well.
You can add the -c option if the config for your app's cluster is elsewhere:
mongrel_rails cluster::restart -c /path/to/config
First, discover the current mongrel pid path with something like:
>ps axf | fgrep mongrel
you will see a process line like:
ruby /usr/lib64/ruby/gems/1.8/gems/swiftiply-0.6.1.1/bin/mongrel_rails start -p 3000 -a 0.0.0.0 -e development -P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid -d
Take the '-P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid' part and use it like this:
>mongrel_rails restart -P /home/xxyyzz/rails/myappname/tmp/pids/mongrel.pid
Sending USR2 to Mongrel at PID 18481...Done.
I use this to recover from the dreaded "Broken pipe" to MySQL problem.
In your Rails home directory:
mongrel_rails cluster::restart
Or, since USR2 means restart, send the signal to every mongrel_rails process at once, for example:
killall -USR2 mongrel_rails
