I'm trying to stop the Puma server with a script that I've found here --> script
#!/usr/bin/env bash
# Simply move this file into your Rails `script` folder. Also make sure you `chmod +x puma.sh`.
# Please modify the CONSTANT variables to fit your configuration.
# The script will start with the config set by $PUMA_CONFIG_FILE by default.

PUMA_CONFIG_FILE=config/puma.rb
PUMA_PID_FILE=tmp/pids/puma.pid
PUMA_SOCKET=tmp/sockets/puma.sock

# check if puma process is running
puma_is_running() {
  if [ -S $PUMA_SOCKET ] ; then
    if [ -e $PUMA_PID_FILE ] ; then
      if cat $PUMA_PID_FILE | xargs pgrep -P > /dev/null ; then
        return 0
      else
        echo "No puma process found"
      fi
    else
      echo "No puma pid file found"
    fi
  else
    echo "No puma socket found"
  fi
  return 1
}

case "$1" in
  start)
    echo "Starting puma..."
    rm -f $PUMA_SOCKET
    if [ -e $PUMA_CONFIG_FILE ] ; then
      bundle exec puma --config $PUMA_CONFIG_FILE
    else
      bundle exec puma --daemon --bind unix://$PUMA_SOCKET --pidfile $PUMA_PID_FILE
    fi
    echo "done"
    ;;
  stop)
    echo "Stopping puma..."
    kill -s SIGTERM `cat $PUMA_PID_FILE`
    rm -f $PUMA_PID_FILE
    rm -f $PUMA_SOCKET
    echo "done"
    ;;
  restart)
    if puma_is_running ; then
      echo "Hot-restarting puma..."
      kill -s SIGUSR2 `cat $PUMA_PID_FILE`
      echo "Doublechecking the process restart..."
      sleep 5
      if puma_is_running ; then
        echo "done"
        exit 0
      else
        echo "Puma restart failed :/"
      fi
    fi
    echo "Trying cold reboot"
    script/puma.sh start
    ;;
  *)
    echo "Usage: script/puma.sh {start|stop|restart}" >&2
    ;;
esac
When I try to stop it, it gives me this error:
/etc/init.d/puma: 54: kill: invalid signal number or name: SIGTERM
What am I missing here?
When this script is executed, it's using a different version of kill that doesn't support those arguments.
You should be able to change kill -s SIGTERM to just kill -15 (15 is the numeric code for SIGTERM).
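That is, the stop branch of the script above would become something like this (a sketch; kill -s TERM, without the SIG prefix, should also work in a POSIX sh):

stop)
  echo "Stopping puma..."
  # numeric signal works with any POSIX shell's kill builtin
  kill -15 `cat $PUMA_PID_FILE`
  rm -f $PUMA_PID_FILE
  rm -f $PUMA_SOCKET
  echo "done"
  ;;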
I'm trying to set up a Rails test server, deployed using Capistrano.
I know my Capistrano scripts are working, as they deploy to the production server using the same scripts without a problem.
During deployment, Unicorn should be started; to do this,
sudo service unicorn_appname start
is called.
This gives the following error:
Job for unicorn_appname.service failed because the control process exited with error code. See "systemctl status unicorn_appname.service" and "journalctl -xe" for details.
When I check sudo journalctl -u unicorn_appname I see
systemd[1]: Starting LSB: starts the unicorn web server...
su[3790]: Successful su for user by root
su[3790]: + ??? root:user
su[3790]: pam_unix(su:session): session opened for user user by (uid=0)
unicorn_appname[3787]: -su: bundle: command not found
systemd[1]: unicorn_appname.service: Control process exited, code=exited status=127
systemd[1]: Failed to start LSB: starts the unicorn web server.
systemd[1]: unicorn_appname.service: Unit entered failed state.
systemd[1]: unicorn_appname.service: Failed with result 'exit-code'.
/etc/init.d/unicorn_appname exists, and when in /etc/init.d,
./unicorn_appname start works.
sudo ./unicorn_appname start, however, gives -su: bundle: command not found,
even though which bundle and sudo which bundle both show the same path (/home/user/.rbenv/shims/bundle).
If possible I don't want to change the scripts, as they work on the other server.
So I think some setting is different or missing on the new server – but I have no idea where else to look.
This is the content of unicorn_appname:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          unicorn
# Required-Start:    $local_fs $remote_fs $network $syslog
# Required-Stop:     $local_fs $remote_fs $network $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: starts the unicorn web server
# Description:       starts unicorn
### END INIT INFO

set -e

# Feel free to change any of the following variables for your app:
TIMEOUT=${TIMEOUT-60}
APP_ROOT=/home/user/apps/appname/current
PID_DIR=$APP_ROOT/tmp/pids
PID=$PID_DIR/unicorn.pid
CMD="cd $APP_ROOT; bundle exec unicorn -D -c /home/user/apps/appname/shared/config/unicorn.rb -E production"
AS_USER=user
set -u

OLD_PIN="$PID.oldbin"

sig () {
  test -s "$PID" && kill -$1 `cat $PID`
}

oldsig () {
  test -s $OLD_PIN && kill -$1 `cat $OLD_PIN`
}

workersig () {
  workerpid="$APP_ROOT/tmp/pids/unicorn.$2.pid"
  test -s "$workerpid" && kill -$1 `cat $workerpid`
}

run () {
  if [ "$(id -un)" = "$AS_USER" ]; then
    eval $1
  else
    su -c "$1" - $AS_USER
  fi
}

case "$1" in
start)
  sig 0 && echo >&2 "Already running" && exit 0
  run "$CMD"
  ;;
stop)
  sig QUIT && exit 0
  echo >&2 "Not running"
  ;;
force-stop)
  sig TERM && exit 0
  echo >&2 "Not running"
  ;;
kill_worker)
  workersig QUIT $2 && exit 0
  echo >&2 "Worker not running"
  ;;
restart|reload)
  sig USR2 && echo reloaded OK && exit 0
  echo >&2 "Couldn't reload, starting '$CMD' instead"
  run "$CMD"
  ;;
upgrade)
  if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
  then
    n=$TIMEOUT
    while test -s $OLD_PIN && test $n -ge 0
    do
      printf '.' && sleep 1 && n=$(( $n - 1 ))
    done
    echo

    if test $n -lt 0 && test -s $OLD_PIN
    then
      echo >&2 "$OLD_PIN still exists after $TIMEOUT seconds"
      exit 1
    fi
    exit 0
  fi
  echo >&2 "Couldn't upgrade, starting '$CMD' instead"
  run "$CMD"
  ;;
reopen-logs)
  sig USR1
  ;;
*)
  echo >&2 "Usage: $0 <start|stop|restart|upgrade|force-stop|reopen-logs>"
  exit 1
  ;;
esac
Any more info you need?
EDIT:
user is the user I deploy with. For root, nothing is installed on this system. Can that be the problem?
I think your problem may be:
cd $APP_ROOT; bundle exec unicorn -D -c /home/user/apps/appname/shared/config/unicorn.rb -E production
Assuming that is the same command you're using on your test server, you'll need to change the environment to test.
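That is, assuming the CMD line otherwise stays the same, something like:

CMD="cd $APP_ROOT; bundle exec unicorn -D -c /home/user/apps/appname/shared/config/unicorn.rb -E test"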
I think I found the solution:
On the prod server I had an /etc/profile.d/rbenv.sh that's missing on the new server.
This is its content:
export RBENV_ROOT=/home/user/.rbenv
export PATH=$RBENV_ROOT/shims:$RBENV_ROOT/bin:$PATH
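That fits the symptom: su -c "..." - user starts a login shell, and on Debian/Ubuntu a login shell typically sources /etc/profile.d/*.sh, so without rbenv.sh the rbenv shims never end up on PATH when the init script switches users (your interactive shell works because it picks up rbenv from your own dotfiles instead). A quick check, reusing the same user as above:

# simulate what the init script does and confirm the shim is found
sudo su - user -c 'which bundle'   # expect /home/user/.rbenv/shims/bundle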
I have deployed my website on an EC2 Rails server running CentOS. How can I run Sidekiq when the EC2 server reboots? I followed this http://dxta.github.io/blog/2014/03/06/init-script-for-sidekiq-in-centos/ and wrote a bash script like the one below, but Sidekiq doesn't restart as expected.
"#! /bin/bash
#
# sidekiq Init script for sidekiq
#
# chkconfig: 345 99 1
# description: Starts and stops sidekiq message processor
# Source function library.
# . /etc/rc.d/init.d/functions
# You will need to modify these
APP=""sps_qa""
AS_USER=""ec2-user""
APP_DIR=""/home/ec2-user/www/sps_qa/current""
APP_CONFIG=""/home/ec2-user/www/sps_qa/current/config""
LOG_FILE=""/home/ec2-user/www/sps_qa/current/log/sidekiq.log""
LOCK_FILE=""$APP_DIR/${APP}-lock""
PID_FILE=""$APP_DIR/${APP}.pid""
GEMFILE=""$APP_DIR/Gemfile""
SIDEKIQ=""sidekiq""
APP_ENV=""qa""
BUNDLE=""bundle""
# [ -e /etc/sysconfig/sidekiq-your_app ] && . /etc/sysconfig/sidekiq- your_app
START_CMD=""exec ~/.rvm/bin/rvm-shell -c '$BUNDLE exec $SIDEKIQ -q mailer -q default -e $APP_ENV -P $PID_FILE'""
CMD=""source /home/ec2-user/.rvm/scripts/rvm; cd ${APP_DIR}; ${START_CMD} >> ${LOG_FILE} 2>&1 &""
RETVAL=0
start() {
status
if [ $? -eq 1 ]; then
[ `id -u` == '0' ] || (echo ""$SIDEKIQ runs as root only ..""; exit 5)
[ -d $APP_DIR ] || (echo ""$APP_DIR not found!.. Exiting""; exit 6)
cd $APP_DIR
echo ""Starting $SIDEKIQ message processor .. ""
su -c ""$CMD"" - $AS_USER
RETVAL=$?
#Sleeping for 8 seconds for process to be precisely visible in process table - See status ()
sleep 8
[ $RETVAL -eq 0 ] && touch $LOCK_FILEd
"
return $RETVAL
else
echo "$SIDEKIQ message processor is already running .. "
fi
}
stop() {
echo "Stopping $SIDEKIQ message processor .."
SIG="INT"
kill -$SIG `cat $PID_FILE`
RETVAL=$?
[ $RETVAL -eq 0 ] && rm -f $LOCK_FILE
return $RETVAL
}
status() {
ps -ef | grep 'sidekiq [0-9].[0-9].[0-9]' | grep -v grep
return $?
}
restart() {
stop
start
}
reload() {
restart
}
force_reload() {
case "$1" in
start)
stop)
restart)
;;
reload)
;;
force_reload)
;;
status)
status
if [ $? -eq 0 ]; then
echo "$SIDEKIQ message processor is running .."
RETVAL=0
else
echo "$SIDEKIQ message processor is stopped .."
RETVAL=1
fi
*)
exit 0
esac
exit $RETVAL
Currently I am running Sidekiq manually:
bundle exec sidekiq -q mailer -q default -e qa -d -L /home/ec2-user/www/sps_qa/current/log/sidekiq.log 2>&1
Create an Upstart job so that Sidekiq is fired off during boot. There's an example in the Sidekiq wiki; change the params to match yours: https://github.com/mperham/sidekiq/blob/master/examples/upstart/manage-one/sidekiq.conf
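A trimmed-down sketch of what that conf might look like with this question's paths and queues substituted in (setuid needs a reasonably recent Upstart, and the linked example handles RVM and user switching more robustly than this):

# /etc/init/sidekiq.conf
description "Sidekiq Background Worker"

start on runlevel [2345]
stop on runlevel [06]

respawn
setuid ec2-user
chdir /home/ec2-user/www/sps_qa/current

# same queues and environment as the manual command above,
# but without -d: Upstart manages the process in the foreground
exec bundle exec sidekiq -q mailer -q default -e qa -L log/sidekiq.log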
I'm trying to run celery as a daemon in the background on Ubuntu 14.04.
I've followed the instructions at http://celery.readthedocs.org/en/latest/tutorials/daemonizing.html and used the celeryd shell script:
#!/bin/sh -e
# ============================================
#  celeryd - Starts the Celery worker daemon.
# ============================================
#
# :Usage: /etc/init.d/celeryd {start|stop|force-reload|restart|try-restart|status}
# :Configuration file: /etc/default/celeryd
#
# See http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#generic-init-scripts

### BEGIN INIT INFO
# Provides:          celeryd
# Required-Start:    $network $local_fs $remote_fs
# Required-Stop:     $network $local_fs $remote_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: celery task worker daemon
### END INIT INFO
#
#
# To implement separate init scripts, copy this script and give it a different
# name:
# I.e., if my new application, "little-worker" needs an init, I
# should just use:
#
#   cp /etc/init.d/celeryd /etc/init.d/little-worker
#
# You can then configure this by manipulating /etc/default/little-worker.
#

VERSION=10.1
echo "celery init v${VERSION}."
if [ $(id -u) -ne 0 ]; then
    echo "Error: This program can only be used by the root user."
    echo "       Unprivileged users must use the 'celery multi' utility, "
    echo "       or 'celery worker --detach'."
    exit 1
fi

# Can be a runlevel symlink (e.g. S02celeryd)
if [ -L "$0" ]; then
    SCRIPT_FILE=$(readlink "$0")
else
    SCRIPT_FILE="$0"
fi
SCRIPT_NAME="$(basename "$SCRIPT_FILE")"

DEFAULT_USER="celery"
DEFAULT_PID_FILE="/var/run/celery/%n.pid"
DEFAULT_LOG_FILE="/var/log/celery/%n.log"
DEFAULT_LOG_LEVEL="INFO"
DEFAULT_NODES="celery"
DEFAULT_CELERYD="-m celery worker --detach"

CELERY_DEFAULTS=${CELERY_DEFAULTS:-"/etc/default/${SCRIPT_NAME}"}

# Make sure executable configuration script is owned by root
_config_sanity() {
    local path="$1"
    local owner=$(ls -ld "$path" | awk '{print $3}')
    local iwgrp=$(ls -ld "$path" | cut -b 6)
    local iwoth=$(ls -ld "$path" | cut -b 9)

    if [ "$(id -u $owner)" != "0" ]; then
        echo "Error: Config script '$path' must be owned by root!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent. When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change ownership of the script:"
        echo "    $ sudo chown root '$path'"
        exit 1
    fi

    if [ "$iwoth" != "-" ]; then  # S_IWOTH
        echo "Error: Config script '$path' cannot be writable by others!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent. When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change the scripts permissions:"
        echo "    $ sudo chmod 640 '$path'"
        exit 1
    fi
    if [ "$iwgrp" != "-" ]; then  # S_IWGRP
        echo "Error: Config script '$path' cannot be writable by group!"
        echo
        echo "Resolution:"
        echo "Review the file carefully and make sure it has not been "
        echo "modified with malicious intent. When sure the "
        echo "script is safe to execute with superuser privileges "
        echo "you can change the scripts permissions:"
        echo "    $ sudo chmod 640 '$path'"
        exit 1
    fi
}

if [ -f "$CELERY_DEFAULTS" ]; then
    _config_sanity "$CELERY_DEFAULTS"
    echo "Using config script: $CELERY_DEFAULTS"
    . "$CELERY_DEFAULTS"
fi

# Sets --app argument for CELERY_BIN
CELERY_APP_ARG=""
if [ ! -z "$CELERY_APP" ]; then
    CELERY_APP_ARG="--app=$CELERY_APP"
fi

CELERYD_USER=${CELERYD_USER:-$DEFAULT_USER}

# Set CELERY_CREATE_DIRS to always create log/pid dirs.
CELERY_CREATE_DIRS=${CELERY_CREATE_DIRS:-0}
CELERY_CREATE_RUNDIR=$CELERY_CREATE_DIRS
CELERY_CREATE_LOGDIR=$CELERY_CREATE_DIRS
if [ -z "$CELERYD_PID_FILE" ]; then
    CELERYD_PID_FILE="$DEFAULT_PID_FILE"
    CELERY_CREATE_RUNDIR=1
fi
if [ -z "$CELERYD_LOG_FILE" ]; then
    CELERYD_LOG_FILE="$DEFAULT_LOG_FILE"
    CELERY_CREATE_LOGDIR=1
fi

CELERYD_LOG_LEVEL=${CELERYD_LOG_LEVEL:-${CELERYD_LOGLEVEL:-$DEFAULT_LOG_LEVEL}}
CELERY_BIN=${CELERY_BIN:-"celery"}
CELERYD_MULTI=${CELERYD_MULTI:-"$CELERY_BIN multi"}
CELERYD_NODES=${CELERYD_NODES:-$DEFAULT_NODES}

export CELERY_LOADER

if [ -n "$2" ]; then
    CELERYD_OPTS="$CELERYD_OPTS $2"
fi

CELERYD_LOG_DIR=`dirname $CELERYD_LOG_FILE`
CELERYD_PID_DIR=`dirname $CELERYD_PID_FILE`

# Extra start-stop-daemon options, like user/group.
if [ -n "$CELERYD_CHDIR" ]; then
    DAEMON_OPTS="$DAEMON_OPTS --workdir=$CELERYD_CHDIR"
fi

check_dev_null() {
    if [ ! -c /dev/null ]; then
        echo "/dev/null is not a character device!"
        exit 75  # EX_TEMPFAIL
    fi
}

maybe_die() {
    if [ $? -ne 0 ]; then
        echo "Exiting: $* (errno $?)"
        exit 77  # EX_NOPERM
    fi
}

create_default_dir() {
    if [ ! -d "$1" ]; then
        echo "- Creating default directory: '$1'"
        mkdir -p "$1"
        maybe_die "Couldn't create directory $1"
        echo "- Changing permissions of '$1' to 02755"
        chmod 02755 "$1"
        maybe_die "Couldn't change permissions for $1"
        if [ -n "$CELERYD_USER" ]; then
            echo "- Changing owner of '$1' to '$CELERYD_USER'"
            chown "$CELERYD_USER" "$1"
            maybe_die "Couldn't change owner of $1"
        fi
        if [ -n "$CELERYD_GROUP" ]; then
            echo "- Changing group of '$1' to '$CELERYD_GROUP'"
            chgrp "$CELERYD_GROUP" "$1"
            maybe_die "Couldn't change group of $1"
        fi
    fi
}

check_paths() {
    if [ $CELERY_CREATE_LOGDIR -eq 1 ]; then
        create_default_dir "$CELERYD_LOG_DIR"
    fi
    if [ $CELERY_CREATE_RUNDIR -eq 1 ]; then
        create_default_dir "$CELERYD_PID_DIR"
    fi
}

create_paths() {
    create_default_dir "$CELERYD_LOG_DIR"
    create_default_dir "$CELERYD_PID_DIR"
}

export PATH="${PATH:+$PATH:}/usr/sbin:/sbin"

_get_pidfiles () {
    # note: multi < 3.1.14 output to stderr, not stdout, hence the redirect.
    ${CELERYD_MULTI} expand "${CELERYD_PID_FILE}" ${CELERYD_NODES} 2>&1
}

_get_pids() {
    found_pids=0
    my_exitcode=0

    for pidfile in $(_get_pidfiles); do
        local pid=`cat "$pidfile"`
        local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
        if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
            echo "bad pid file ($pidfile)"
            one_failed=true
            my_exitcode=1
        else
            found_pids=1
            echo "$pid"
        fi

        if [ $found_pids -eq 0 ]; then
            echo "${SCRIPT_NAME}: All nodes down"
            exit $my_exitcode
        fi
    done
}

_chuid () {
    su "$CELERYD_USER" -c "$CELERYD_MULTI $*"
}

start_workers () {
    if [ ! -z "$CELERYD_ULIMIT" ]; then
        ulimit $CELERYD_ULIMIT
    fi
    _chuid $* start $CELERYD_NODES $DAEMON_OPTS \
        --pidfile="$CELERYD_PID_FILE"           \
        --logfile="$CELERYD_LOG_FILE"           \
        --loglevel="$CELERYD_LOG_LEVEL"         \
        $CELERY_APP_ARG                         \
        $CELERYD_OPTS
}

dryrun () {
    (C_FAKEFORK=1 start_workers --verbose)
}

stop_workers () {
    _chuid stopwait $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}

restart_workers () {
    _chuid restart $CELERYD_NODES $DAEMON_OPTS \
        --pidfile="$CELERYD_PID_FILE"          \
        --logfile="$CELERYD_LOG_FILE"          \
        --loglevel="$CELERYD_LOG_LEVEL"        \
        $CELERY_APP_ARG                        \
        $CELERYD_OPTS
}

kill_workers() {
    _chuid kill $CELERYD_NODES --pidfile="$CELERYD_PID_FILE"
}

restart_workers_graceful () {
    echo "WARNING: Use with caution in production"
    echo "The workers will attempt to restart, but they may not be able to."
    local worker_pids=
    worker_pids=`_get_pids`
    [ "$one_failed" ] && exit 1

    for worker_pid in $worker_pids; do
        local failed=
        kill -HUP $worker_pid 2> /dev/null || failed=true
        if [ "$failed" ]; then
            echo "${SCRIPT_NAME} worker (pid $worker_pid) could not be restarted"
            one_failed=true
        else
            echo "${SCRIPT_NAME} worker (pid $worker_pid) received SIGHUP"
        fi
    done

    [ "$one_failed" ] && exit 1 || exit 0
}

check_status () {
    my_exitcode=0
    found_pids=0

    local one_failed=
    for pidfile in $(_get_pidfiles); do
        if [ ! -r $pidfile ]; then
            echo "${SCRIPT_NAME} down: no pidfiles found"
            one_failed=true
            break
        fi

        local node=`basename "$pidfile" .pid`
        local pid=`cat "$pidfile"`
        local cleaned_pid=`echo "$pid" | sed -e 's/[^0-9]//g'`
        if [ -z "$pid" ] || [ "$cleaned_pid" != "$pid" ]; then
            echo "bad pid file ($pidfile)"
            one_failed=true
        else
            local failed=
            kill -0 $pid 2> /dev/null || failed=true
            if [ "$failed" ]; then
                echo "${SCRIPT_NAME} (node $node) (pid $pid) is down, but pidfile exists!"
                one_failed=true
            else
                echo "${SCRIPT_NAME} (node $node) (pid $pid) is up..."
            fi
        fi
    done

    [ "$one_failed" ] && exit 1 || exit 0
}

case "$1" in
    start)
        check_dev_null
        check_paths
        start_workers
        ;;
    stop)
        check_dev_null
        check_paths
        stop_workers
        ;;
    reload|force-reload)
        echo "Use restart"
        ;;
    status)
        check_status
        ;;
    restart)
        check_dev_null
        check_paths
        restart_workers
        ;;
    graceful)
        check_dev_null
        restart_workers_graceful
        ;;
    kill)
        check_dev_null
        kill_workers
        ;;
    dryrun)
        check_dev_null
        dryrun
        ;;
    try-restart)
        check_dev_null
        check_paths
        restart_workers
        ;;
    create-paths)
        check_dev_null
        create_paths
        ;;
    check-paths)
        check_dev_null
        check_paths
        ;;
    *)
        echo "Usage: /etc/init.d/${SCRIPT_NAME} {start|stop|restart|graceful|kill|dryrun|create-paths}"
        exit 64  # EX_USAGE
        ;;
esac

exit 0
which I put in /etc/init.d/celeryd.
I've also got the following config file (also called celeryd), which lives in /etc/default/celeryd:
# Names of nodes to start
# most will only start one node:
CELERYD_NODES="worker"
# but you can also start multiple and configure settings
# for each in CELERYD_OPTS (see `celery multi --help` for examples).
# Absolute or relative path to the 'celery' command:
CELERY_BIN="/usr/local/bin/celery"
# App instance to use
# comment out this line if you don't use an app
CELERY_APP="proj"
# or fully qualified:
#CELERY_APP="proj.tasks:app"
# Where to chdir at start.
CELERYD_CHDIR="/home/drmclean/"
# Extra command-line arguments to the worker
CELERYD_OPTS="--time-limit=300 --concurrency=8"
# %N will be replaced with the first part of the nodename.
CELERYD_LOG_FILE="/var/log/celery/%N.log"
CELERYD_PID_FILE="/var/run/celery/%N.pid"
# Workers should run as an unprivileged user.
# You need to create this user manually (or you can choose
# a user/group combination that already exists, e.g. nobody).
CELERYD_USER="drmclean"
CELERYD_GROUP="drmclean"
# If enabled pid and log directories will be created if missing,
# and owned by the userid/group configured.
CELERY_CREATE_DIRS=1
I can easily start the celery service by using the command:
sudo /etc/init.d/celeryd start
and the service runs as I expect.
However, on start-up the service never runs. When I inspect the celery logfile, it says:
"[2014-09-17 16:27:41,541: ERROR/MainProcess] consumer: Cannot connect to amqp://guest:**#127.0.0.1:5672//: [Errno 111] Connection refused.
Trying again in 2.00 seconds..."
Can anyone help with this error? I can't see why the connection would be refused on start-up but not when using the sudo /etc/init.d/celeryd start command.
Are you connecting from a remote server? If so, the guest user is not allowed to connect from a remote server. See https://www.rabbitmq.com/access-control.html
No, the entire thing is running on my own machine. Actually, it appears that my celeryd script, which is in
/etc/init.d/celeryd
is never run on start-up. It's unclear why it isn't, though.
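One thing worth ruling out (an assumption, since the question doesn't say whether this step was done): on Ubuntu, a script in /etc/init.d only runs at boot once it has been registered with the rc system, e.g.:

sudo update-rc.d celeryd defaults   # create the /etc/rc*.d start/stop symlinks
ls /etc/rc2.d | grep celery         # an S??celeryd link should now exist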
I am using Monit for Sidekiq.
While watching the Monit log file, it shows this error:
monit: Start or stop method not defined -- process sidekiq_site
sidekiq.erb
check process sidekiq_site
with pidfile /var/www/project/shared/pids/sidekiq.pid
start program = "if [[ ! -f /var/www/project/shared/pids/sidekiq.pid ]]; then touch /var/www/project/shared/pids/sidekiq.pid; chmod 777 /var/www/project/shared/pids/sidekiq.pid; fi; cd /var/www/project/current ; bundle exec sidekiq --index 0 --pidfile /var/www/project/shared/pids/sidekiq.pid --environment production --logfile /var/www/project/shared/log/sidekiq.log --daemon" with timeout 90 seconds
stop program = "if [ -d /var/www/project/current ] && [ -f /var/www/project/shared/pids/sidekiq.pid ] && kill -0 `cat /var/www/project/shared/pids/sidekiq.pid`> /dev/null 2>&1; then cd /var/www/project/current && bundle exec sidekiqctl stop /var/www/project/shared/pids/sidekiq.pid 1 ; else echo 'Sidekiq is not running'; fi"
if totalmem is greater than 200 MB for 2 cycles then restart # eating up memory?
group site_sidekiq
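In case it helps: Monit's start program / stop program expect an executable rather than raw shell syntax, so the bare if ...; then ...; fi conditionals above are a plausible cause of the "method not defined" parse failure (an assumption about this setup, not a confirmed fix). The usual workaround is to wrap each command in an explicit shell, roughly:

start program = "/bin/bash -c 'cd /var/www/project/current && bundle exec sidekiq --index 0 --pidfile /var/www/project/shared/pids/sidekiq.pid --environment production --logfile /var/www/project/shared/log/sidekiq.log --daemon'" with timeout 90 seconds
stop program = "/bin/bash -c 'cd /var/www/project/current && bundle exec sidekiqctl stop /var/www/project/shared/pids/sidekiq.pid 1'"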
I'm a new Linode/Linux user running Debian 6. I'm trying to get my Unicorn server to start at boot, but for some reason it is not, and I'm not able to track down any error message. Nginx is starting fine, and I have a multi-user RVM install. My gut feeling is that it has something to do with RVM. This is my unicorn_init.sh file in /rails/todo, and there's a symlink to it at /etc/init.d/unicorn:
# unicorn_init.sh
#!/bin/sh
set -e
TIMEOUT=${TIMEOUT-60}
APP_ROOT=/rails/todo
PID=$APP_ROOT/tmp/pids/unicorn.pid
CMD="$APP_ROOT/bin/unicorn_rails -D -c $APP_ROOT/config/unicorn.rb -E production"
GEM_HOME="/usr/local/rvm/gems/ruby-1.9.2-p290@global"
action="$1"
set -u

old_id="$PID.oldbin"

cd $APP_ROOT || exit 1

export GEM_HOME=$GEM_HOME

sig () {
  test -s "$PID" && kill -$1 `cat $PID`
}

oldsig () {
  test -s $old_pid && kill -$1 `cat $old_pid`
}

case $action in
start)
  sig 0 && echo >&2 "Already running" && exit 0
  su -c "$CMD" - root
  ;;
stop)
  sig QUIT && exit 0
  echo >&2 "Not running"
  ;;
force-stop)
  sig TERM && exit 0
  echo >&2 "Not running"
  ;;
restart|reload)
  sig HUP && echo reloaded OK && exit 0
  echo >&2 "Couldn't reload, starting '$CMD' instead"
  su -c "$CMD" - root
  ;;
upgrade)
  if sig USR2 && sleep 2 && sig 0 && oldsig QUIT
  then
    n=$TIMEOUT
    while test -s $old_pid && test $n -ge 0
    do
      printf '.' && sleep 1 && n=$(( $n - 1 ))
    done
    echo

    if test $n -lt 0 && test -s $old_pid
    then
      echo >&2 "$old_pid still exists after $TIMEOUT seconds"
      exit 1
    fi
    exit 0
  fi
  echo >&2 "Couldn't upgrade, starting '$CMD' instead"
  su -c "$CMD" - root
  ;;
reopen-logs)
  sig USR1
  ;;
*)
  echo >&2 "Usage: $0 <start|stop|restart|upgrade|force-stop|reopen-logs>"
  exit 1
  ;;
esac
I'm 99% of the way to getting my setup working—any advice would be much appreciated.
Here is the output of $ update-rc.d unicorn defaults:
update-rc.d: using dependency based boot sequencing
insserv: warning: script 'unicorn' missing LSB tags and overrides
insserv: There is a loop between service nginx and unicorn if stopped
insserv: loop involving service unicorn at depth 2
insserv: loop involving service nginx at depth 1
insserv: Stopping unicorn depends on nginx and therefore on system facility `$all' which can not be true!
insserv: exiting now without changing boot order!
update-rc.d: error: insserv rejected the script header
update-rc.d: error: insserv rejected the script header
The start of your file looks like:
# unicorn_init.sh
#!/bin/sh
The shebang line (#!/bin/sh) MUST be the very first line of the file.
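In other words, swap the first two lines so the file begins:

#!/bin/sh
# unicorn_init.sh
set -e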
I can't comment on the loop-detection messages as I haven't ever seen 'em before. It's possible something in /etc/init.d/nginx expresses a dependency on unicorn, but I don't see anything in the unicorn init script that depends on nginx, so the loop isn't clear.
insserv: warning: script 'unicorn' missing LSB tags and overrides
You should add LSB info to your init script: http://wiki.debian.org/LSBInitScripts
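Something like the header block already used by the unicorn_appname script earlier on this page should satisfy insserv (a sketch; adjust the names and dependencies to your setup):

### BEGIN INIT INFO
# Provides:          unicorn
# Required-Start:    $local_fs $remote_fs $network
# Required-Stop:     $local_fs $remote_fs $network
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: starts the unicorn web server
### END INIT INFO

Placed directly under the shebang, that header also gives insserv the dependency information it needs to order unicorn relative to nginx, which may clear the loop warnings as well.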