Debian 8.11 init.d script won't run at startup

I've created the following init.d script per this guide, which is designed to start this branch of MaNGOS at boot:
#!/bin/sh
### BEGIN INIT INFO
# Provides: mangosd
# Should-Start: console-screen dbus network-manager
# Required-Start: $all
# Required-Stop: $remote_fs
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start mangosd at boot time
### END INIT INFO
#
set -e
/lib/lsb/init-functions
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin
SCRIPT="/usr/local/sbin/realmd.sh"
SCRIPT2="/usr/local/sbin/mangosd.sh"
PROGRAMNAME="realmd"
PROGRAMNAME2="mangosd"
case "$1" in
start)
$SCRIPT
$SCRIPT2
;;
stop)
pkill $PROGRAMNAME
pkill $PROGRAMNAME2
;;
esac
exit 0
I am able to run this script manually with sudo /etc/init.d/mangosd start, and it works as expected, running realmd.sh and mangosd.sh, which are as follows.
realmd.sh:
#!/bin/sh
# /usr/local/sbin/realmd.sh
/home/rebirth/MaNGOS/bin/realmd &
mangosd.sh:
#!/bin/sh
# /usr/local/sbin/mangosd.sh
cd /home/rebirth/MaNGOS/bin
./mangosd &
All three files have the same permissions, as follows:
-rwxr-xr-x 1 root root 80 Sep 2 20:33 /usr/local/sbin/mangosd.sh
The programs realmd and mangosd will then run as expected. Per the guide, I have run sudo insserv mangosd and verified the boot file was created:
$ ls -la /etc/rc2.d/S04mangosd
lrwxrwxrwx 1 root root 17 Sep 2 18:00 /etc/rc2.d/S04mangosd -> ../init.d/mangosd
I ran sudo reboot and neither realmd nor mangosd started automatically at boot. Running the init.d script manually at this point still works as expected.
I have viewed the following posts relating to this issue:
Init.d script to start Hudson doesn't run at boot on Ubuntu
debian init.d script not running after reboot
Neither provided a solution; however, the latter did mention another command I hadn't tried: sudo update-rc.d mangosd defaults. Unfortunately, after running this command and rebooting, realmd and mangosd were still not running automatically at boot.
If anyone has any suggestions, or is able to point me in the right direction, I'd really appreciate it. Thank you very much!

On Debian you can check a file called skeleton, located in the directory /etc/init.d/, which is meant to help people get started with custom init.d services.
This line is not mandatory, and you can remove it:
# Should-Start: console-screen dbus network-manager
Replace:
/lib/lsb/init-functions
with
. /lib/lsb/init-functions
You should also remove the following line; with set -e, the first command that fails aborts the entire script, so anything that only fails in the minimal boot environment kills the script silently:
set -e
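For context, the leading dot is the POSIX form of bash's source: it reads init-functions into the current shell, whereas without the dot the shell tries to run the file as a separate program, which typically fails because init-functions is not executable. A minimal sketch of the difference, using a hypothetical helper file funcs.sh that only defines a function:
# funcs.sh (hypothetical) contains one line:  say_hi() { echo "hi"; }
. ./funcs.sh   # sourced: say_hi is now defined in the current shell
say_hi         # prints "hi"
./funcs.sh     # executed: needs +x and runs in a child shell,
               # so nothing it defines survives into this shell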
If it's still not working, you can try setting the Required-Start defaults to this:
# Required-Start: $remote_fs $syslog
So the final file can look like this (not tested):
#!/bin/sh
### BEGIN INIT INFO
# Provides: mangosd
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop: $local_fs $remote_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: start mangosd at boot time
### END INIT INFO
#
. /lib/lsb/init-functions
PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/sbin
SCRIPT="/usr/local/sbin/realmd.sh"
SCRIPT2="/usr/local/sbin/mangosd.sh"
PROGRAMNAME="realmd"
PROGRAMNAME2="mangosd"
case "$1" in
start)
$SCRIPT
$SCRIPT2
;;
stop)
pkill $PROGRAMNAME
pkill $PROGRAMNAME2
;;
esac
exit 0
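If it still doesn't start after these changes, a cheap way to check whether init invokes the script at boot at all is to make the start branch log somewhere; this is a debugging sketch, not part of the tested file above (the log path is arbitrary):
start)
  echo "mangosd init: start called at $(date)" >> /var/tmp/mangosd-init.log
  $SCRIPT
  $SCRIPT2
  ;;
If the log line appears after a reboot but the daemons are not running, the problem is in the helper scripts or their environment rather than in the boot wiring.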
These links can help you:
Debian wiki: https://wiki.debian.org/LSBInitScripts/
initscript example: https://gist.github.com/gsf/6222405
another example: https://gist.github.com/wallyqs/c96d56e735c74ee4cc1f

Related

start rails from init.d service script and loads the correct version of ruby

I want to start the Rails server in production mode after installing, migrating, and running some scripts, so that this script can be attached as a pipeline deploy script.
The problem is that the same script doesn't work when run as a service.
ubuntu@ip-x-y-z-w:~/backend.rails.com$ sudo vim /etc/init.d/rails-start-backend
#! /bin/sh
# Start/stop the rails server daemon.
#
### BEGIN INIT INFO
# Provides: rails server start
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start daemon at boot time
# Description: Enable service provided by daemon.
### END INIT INFO
PATH=/bin:/usr/bin:/sbin:/usr/sbin
DESC="rails daemon"
NAME=rails
DAEMON=/home/ubuntu/backend.rails.com/gitlab-ci.sh
PIDFILE=/var/run/rails.pid
test -f $DAEMON || exit 0
. /lib/lsb/init-functions
case "$1" in
start)
log_daemon_msg "Starting rails"
/home/ubuntu/backend.rails.com/gitlab-ci.sh > /home/ubuntu/backend.rails.com/log/start_script.log
start_daemon -p $PIDFILE $DAEMON $EXTRA_OPTS
log_end_msg $?
;;
stop) log_daemon_msg "Stopping rails" "rails"
sudo kill -9 $(sudo lsof -t -i:3000)
killproc -p $PIDFILE $DAEMON
RETVAL=$?
[ $RETVAL -eq 0 ] && [ -e "$PIDFILE" ] && rm -f $PIDFILE
log_end_msg $RETVAL
;;
restart) log_daemon_msg "Restarting " "rails"
$0 stop
$0 start
;;
reload|force-reload) log_daemon_msg "Reloading rails" "rails"
# rails reloads automatically
log_end_msg 0
;;
*) log_action_msg "Usage: /etc/init.d/rails {start|stop|status|restart|reload|force-reload}"
exit 2
;;
esac
exit 0
And that's my gitlab-ci.sh script:
cd /home/ubuntu/backend.rails.com
sudo chmod +x gitlab-ci.sh
rm config/master.key
rm config/credentials.yml.enc
echo "credentials"
RAILS_ENV=production EDITOR="mate --wait" rails credentials:edit
export RAILS_ENV=production
export FRONTEND_BASE_URL=https://www.rails.com
echo "bundle install"
bundle install
echo "rails db:migrate"
bundle exec rails db:migrate
echo "rails rake application:initialize"
bundle exec rake application:initialize
echo "kill"
sudo kill -9 $(sudo lsof -t -i:3000)
echo "start"
rails s &
The problem comes when I restart the service with sudo service rails-start-backend restart. It seems that in that context, the bundle, rails, and ruby versions and settings are not the same as when I execute the same script manually over SSH.
The errors I get are:
/usr/bin/env: ‘ruby_executable_hooks2.6’: No such file or directory
bundle: not found
Here's my PATH when logged in via SSH:
/home/ubuntu/.rvm/gems/ruby-2.6.5/bin:/home/ubuntu/.rvm/gems/ruby-2.6.5@global/bin:/usr/share/rvm/rubies/ruby-2.6.5/bin:/usr/share/rvm/bin:/home/ubuntu/bin:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
and here it is when the script is executed as a service:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
It works after setting the right paths in the right variables (using Ruby version 2.6.5):
export PATH=/home/ubuntu/.rvm/gems/ruby-2.6.5/bin:/usr/share/rvm/bin:$PATH
export GEM_PATH=/home/ubuntu/.rvm/gems/ruby-2.6.5:/home/ubuntu/.rvm/gems/ruby-2.6.5@global:$GEM_PATH
source .bash_profile
source ~/.rvm/scripts/rvm
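A related approach that avoids hand-maintaining PATH and GEM_PATH is to let RVM set them itself inside the launcher script. A sketch, assuming the script runs as the ubuntu user with RVM installed under /home/ubuntu/.rvm (RVM requires bash, not sh):
#!/bin/bash
source /home/ubuntu/.rvm/scripts/rvm   # load the rvm shell function
rvm use ruby-2.6.5 > /dev/null         # sets PATH and GEM_PATH in one step
cd /home/ubuntu/backend.rails.com
RAILS_ENV=production bundle exec rails s &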

Puma Upstart not loading ENV variables

I've deployed an app in production in an Ubuntu Server VM. It uses Puma, so I've followed this guide: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-puma-and-nginx-on-ubuntu-14-04
to configure it there (the app currently works properly on Heroku; we are looking to migrate it to this new server).
This is my /etc/init/puma-manager.conf
# /etc/init/puma-manager.conf - manage a set of Pumas
description "Manages the set of puma processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Puma processes you want
# to run on this machine
env PUMA_CONF="/etc/puma.conf"
pre-start script
  for i in `cat $PUMA_CONF`; do
    app=`echo $i | cut -d , -f 1`
    logger -t "puma-manager" "Starting $app"
    start puma app=$app
  done
end script
And my /etc/init/puma.conf
description "Puma Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid user
setgid user
respawn
respawn limit 3 30
instance ${app}
script
# source ENV variables manually as Upstart doesn't, eg:
. /etc/server-vars
exec /bin/bash <<'EOT'
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
export HOME="$(eval echo ~$(id -un))"
if [ -d "/usr/local/rbenv/bin" ]; then
export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
elif [ -d "$HOME/.rbenv/bin" ]; then
export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
elif [ -f /etc/profile.d/rvm.sh ]; then
source /etc/profile.d/rvm.sh
elif [ -f /usr/local/rvm/scripts/rvm ]; then
source /usr/local/rvm/scripts/rvm
elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
source "$HOME/.rvm/scripts/rvm"
elif [ -f /usr/local/share/chruby/chruby.sh ]; then
source /usr/local/share/chruby/chruby.sh
if [ -f /usr/local/share/chruby/auto.sh ]; then
source /usr/local/share/chruby/auto.sh
fi
# if you aren't using auto, set your version here
# chruby 2.0.0
fi
cd $app
logger -t puma "Starting server: $app"
exec bundle exec puma -C config/puma.rb
EOT
end script
It works properly, BUT it is not setting the ENV variables I specify in /etc/server-vars.
I don't want to put all ENV vars directly into this script because they are many, and it limits the usability of the script.
The solution for me was to use "set -a" before sourcing the environment file. Here's the documentation describing what set -a does: The Set Builtin
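In short, set -a marks every variable assigned after it for export, so plain KEY=value lines in the sourced file become real environment variables that child processes inherit. A tiny illustration, assuming a hypothetical file /tmp/vars containing the single line GREETING=hello:
set -a                    # auto-export all assignments from here on
. /tmp/vars               # GREETING=hello becomes an exported variable
set +a                    # stop auto-exporting
sh -c 'echo "$GREETING"'  # a child process sees it and prints "hello"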
Try 'set -a' before sourcing your environment file, as in the following example:
# /etc/init/puma.conf - Puma config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma-manager.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
# sudo start puma app=PATH_TO_APP
# sudo stop puma app=PATH_TO_APP
# sudo status puma app=PATH_TO_APP
#
# or use the service command:
# sudo service puma {start,stop,restart,status}
#
description "Puma Background Worker"
# no "start on", we don't want to automatically start
start on runlevel [2345]
stop on runlevel [06]
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid deploy
setgid deploy
respawn
respawn limit 3 30
instance ${app}
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv/rvm
# quoted heredoc to tell /bin/sh not to interpret
# variables
# source ENV variables manually as Upstart doesn't, eg:
#. /etc/environment
exec /bin/bash <<'EOT'
set -a
. /etc/environment
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
export HOME="$(eval echo ~$(id -un))"
if [ -d "/usr/local/rbenv/bin" ]; then
export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
elif [ -d "$HOME/.rbenv/bin" ]; then
export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
elif [ -f /etc/profile.d/rvm.sh ]; then
source /etc/profile.d/rvm.sh
elif [ -f /usr/local/rvm/scripts/rvm ]; then
source /usr/local/rvm/scripts/rvm
elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
source "$HOME/.rvm/scripts/rvm"
elif [ -f /usr/local/share/chruby/chruby.sh ]; then
source /usr/local/share/chruby/chruby.sh
if [ -f /usr/local/share/chruby/auto.sh ]; then
source /usr/local/share/chruby/auto.sh
fi
# if you aren't using auto, set your version here
# chruby 2.0.0
fi
logger -t puma "Starting server: $app"
cd $app
exec bundle exec puma -C /home/deploy/brilliant/config/puma.rb
EOT
end script
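With set -a in place, /etc/server-vars can stay a plain list of assignments with no export keywords; the contents below are hypothetical:
# /etc/server-vars -- sourced by the upstart script while set -a is active
SECRET_KEY_BASE=change-me
DATABASE_URL=postgres://localhost/myapp_production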

Puma restart fails on reboot using EC2 + Rails + Nginx + Capistrano

I have successfully used Capistrano to deploy my Rails app to Ubuntu EC2. Everything works great on deploy. The Rails app name is deseov12.
My issue is that Puma does not start on boot, which will be necessary, as production EC2 instances will be instantiated on demand.
Puma will start when deploying via Capistrano; it will also start when running
cap production puma:start
on the local machine.
It will also start on the server after a reboot if I run the following commands:
su - deploy
[enter password]
cd /home/deploy/deseov12/current && ( export RACK_ENV="production" ; ~/.rvm/bin/rvm ruby-2.2.4 do bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon )
I have followed directions from the Puma jungle tool to make Puma start on boot by using upstart as follows:
Contents of /etc/puma.conf
/home/deploy/deseov12/current
Contents of /etc/init/puma.conf and /home/deploy/puma.conf
# /etc/init/puma.conf - Puma config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
# sudo start puma app=PATH_TO_APP
# sudo stop puma app=PATH_TO_APP
# sudo status puma app=PATH_TO_APP
#
# or use the service command:
# sudo service puma {start,stop,restart,status}
#
description "Puma Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid deploy
setgid deploy
respawn
respawn limit 3 30
instance ${app}
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv/rvm
# quoted heredoc to tell /bin/sh not to interpret
# variables
# source ENV variables manually as Upstart doesn't, eg:
#. /etc/environment
exec /bin/bash <<'EOT'
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
export HOME="$(eval echo ~$(id -un))"
if [ -d "/usr/local/rbenv/bin" ]; then
export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
elif [ -d "$HOME/.rbenv/bin" ]; then
export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
elif [ -f /etc/profile.d/rvm.sh ]; then
source /etc/profile.d/rvm.sh
elif [ -f /usr/local/rvm/scripts/rvm ]; then
source /usr/local/rvm/scripts/rvm
elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
source "$HOME/.rvm/scripts/rvm"
elif [ -f /usr/local/share/chruby/chruby.sh ]; then
source /usr/local/share/chruby/chruby.sh
if [ -f /usr/local/share/chruby/auto.sh ]; then
source /usr/local/share/chruby/auto.sh
fi
# if you aren't using auto, set your version here
# chruby 2.0.0
fi
cd $app
logger -t puma "Starting server: $app"
exec bundle exec puma -C current/config/puma.rb
EOT
end script
Contents of /etc/init/puma-manager.conf and /home/deploy/puma-manager.conf
# /etc/init/puma-manager.conf - manage a set of Pumas
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma.conf for how to manage a single Puma instance.
#
# Use "stop puma-manager" to stop all Puma instances.
# Use "start puma-manager" to start all instances.
# Use "restart puma-manager" to restart all instances.
# Crazy, right?
#
description "Manages the set of puma processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Puma processes you want
# to run on this machine
env PUMA_CONF="/etc/puma.conf"
pre-start script
  for i in `cat $PUMA_CONF`; do
    app=`echo $i | cut -d , -f 1`
    logger -t "puma-manager" "Starting $app"
    start puma app=$app
  done
end script
Contents of /home/deploy/deseov12/shared/puma.rb
#!/usr/bin/env puma
directory '/home/deploy/deseov12/current'
rackup "/home/deploy/deseov12/current/config.ru"
environment 'production'
pidfile "/home/deploy/deseov12/shared/tmp/pids/puma.pid"
state_path "/home/deploy/deseov12/shared/tmp/pids/puma.state"
stdout_redirect '/home/deploy/deseov12/shared/log/puma_error.log', '/home/deploy/deseov12/shar$
threads 0,8
bind 'unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock'
workers 0
activate_control_app
prune_bundler
on_restart do
puts 'Refreshing Gemfile'
ENV["BUNDLE_GEMFILE"] = "/home/deploy/deseov12/current/Gemfile"
end
However, I have not been able to make Puma start up automatically after a server reboot. It just does not start.
I would certainly appreciate some help.
EDIT: I just noticed something that could be a clue:
when running the following command as the deploy user:
sudo start puma app=/home/deploy/deseov12/current
ps aux will show a puma process for a few seconds before it disappears.
deploy 4312 103 7.7 183396 78488 ? Rsl 03:42 0:02 puma 2.15.3 (tcp://0.0.0.0:3000) [20160106224332]
This puma process is different from the working process launched by Capistrano:
deploy 5489 10.0 12.4 858088 126716 ? Sl 03:45 0:02 puma 2.15.3 (unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock) [20160106224332]
This was finally solved after a lot of research. It turns out the issue was threefold:
1) the proper environment was not being set when running the upstart script
2) when using Capistrano, the actual production puma.rb configuration file lives in the /home/deploy/deseov12/shared directory, not in the /current/ directory
3) the puma server was not being daemonized properly
To solve these issues:
1) This line should be added to the start of the script in /etc/init/puma.conf and /home/deploy/puma.conf:
env RACK_ENV="production"
2) and 3) this line
exec bundle exec puma -C current/config/puma.rb
should be replaced with this one
exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon
After doing this, the puma server starts properly on reboot or new instance generation. Hope this helps someone avoid hours of troubleshooting.
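As a quick sanity check after a reboot (a sketch; both paths come from the configs above), the upstart instance should report as running and the unix socket from puma.rb should exist:
sudo status puma app=/home/deploy/deseov12/current
ls -l /home/deploy/deseov12/shared/tmp/sockets/puma.sock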

Using supervisord and rvm to run rubyonrails

I have a Ruby on Rails 3 project and I'm using rvm. I want to switch from a sysvinit script to supervisord. The sysvinit script can only start the software; in case of an error it has to be killed and restarted by $something. Mostly me.
In the project folder there are a .ruby-version and a .ruby-gemset file, so the correct Ruby version and gemset get loaded automatically. Then the app is started with a shell script which looks like this:
#!/bin/bash
RAILS_ENV="production" rails server -d
My init script looks like this and works, except for restarting and stopping:
#!/bin/sh
### BEGIN INIT INFO
# Provides: myapp
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop: $local_fs $remote_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts myapp
# Description: starts the myapp software
### END INIT INFO
USER=myuser
PATH=$PATH
DAEMON=go.sh
DAEMON_OPTS=""
NAME=myapp
DESC="myapp for $USER"
PID=/home/$USER/myapp/tmp/pids/server.pid
case "$1" in
start)
CD_TO_APP_DIR="cd /home/$USER/myapp"
START_DAEMON_PROCESS="$DAEMON $DAEMON_OPTS"
echo -n "Starting $DESC: "
if [ $(whoami) = root ]; then
su - $USER -c "$CD_TO_APP_DIR > /dev/null 2>&1 && ./$START_DAEMON_PROCESS &"
else
$CD_TO_APP_DIR > /dev/null 2>&1 && ./$START_DAEMON_PROCESS &
fi
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "
kill -QUIT `cat $PID`
echo "$NAME."
;;
restart)
echo -n "Restarting $DESC: "
kill -USR2 `cat $PID`
echo "$NAME."
;;
reload)
echo -n "Reloading $DESC configuration: "
kill -HUP `cat $PID`
echo "$NAME."
;;
*)
echo "Usage: $NAME {start|stop|restart|reload}" >&2
exit 1
;;
esac
exit 0
My supervisor config looks like this:
[program:myapp]
directory=/home/myuser/myapp/
command=/home/myuser/.rvm/wrappers/ruby-2.1.5@myapp/rails server -d
environment=RAILS_ENV="production"
autostart=true
autorestart=true
The problem is that there is no rails binary in the wrappers directory, so the command fails. What is the correct way to do this? I'm out of ideas and would otherwise start putting together some really ugly bash script that does the job in a very wrong and bad way, but does it. By the way, I found rails in the gems folder:
$ ls /home/myuser/.rvm/wrappers/ruby-2.1.5@myapp/
bundle bundler erb executable-hooks-uninstaller gem irb rake rdoc ri ruby testrb
$ which rails
/home/ffwi/.rvm/gems/ruby-2.1.5@ffwi-extern/bin/rails
Try sourcing rvm in your script (this link describes use cases like yours).
You have to load RVM into the shell of your script manually:
source "$HOME/.rvm/scripts/rvm"
It is only loaded automatically for interactive login shells.
From this point on, you can cd into directories and rvm should do its business.
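Applied to the supervisord setup above, that could look like the following sketch; the launcher path and the name start-myapp.sh are assumptions, and the -d flag is dropped because supervisord expects its child to stay in the foreground:
#!/bin/bash
# /home/myuser/myapp/start-myapp.sh (hypothetical) -- make it executable
source "$HOME/.rvm/scripts/rvm"   # load RVM; .ruby-version/.ruby-gemset apply on cd
cd /home/myuser/myapp
exec rails server                 # foreground, so supervisord can supervise it
Then point the program entry at the launcher:
[program:myapp]
command=/home/myuser/myapp/start-myapp.sh
environment=RAILS_ENV="production"
user=myuser
autostart=true
autorestart=true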

How do I restart a Phusion Passenger Standalone?

Do I have to do passenger stop then start, or can I still do this by touching tmp/restart.txt?
Yes, you can restart it by touching tmp/restart.txt.
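For example (the app path is an assumption):
touch /var/www/myapp/current/tmp/restart.txt
Passenger notices the updated timestamp and restarts the application on the next request.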
To manage it as a service instead, create a script in /etc/init.d:
$ sudo nano /etc/init.d/YOUR_SERVICE_NAME
Then change the parameters according to your needs.
#!/bin/sh
### BEGIN INIT INFO
# Provides: <NAME>
# Required-Start: $local_fs $network $named $time $syslog
# Required-Stop: $local_fs $network $named $time $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Description: <DESCRIPTION>
### END INIT INFO
start() {
  echo 'Starting service…' >&2
  /bin/bash -l -c 'cd /var/www/myapp/current && passenger start --daemonize -e [staging | production | development ] --ruby path/to/your/bin/ruby'
  echo 'Service started' >&2
}
stop() {
  echo 'Stopping service…' >&2
  passenger stop /var/www/myapp/current
  echo 'Service stopped' >&2
}
status() {
  passenger status /var/www/myapp/current
}
case "$1" in
  start)
    start
    exit 0
    ;;
  stop)
    stop
    exit 0
    ;;
  status)
    status
    exit 0
    ;;
  restart)
    stop
    start
    exit 0
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}"
    ;;
esac
Make this file executable:
$ sudo chmod +x /etc/init.d/YOUR_SERVICE_NAME
Then test it:
/etc/init.d/YOUR_SERVICE_NAME start
You can set the service to start with the system:
$ sudo update-rc.d YOUR_SERVICE_NAME defaults
