Puma Upstart not loading ENV variables - ruby-on-rails

I've deployed a Rails app to production on an Ubuntu Server VM. It uses Puma, so I followed this guide to configure it there: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-puma-and-nginx-on-ubuntu-14-04
(the app currently runs fine on Heroku; we are looking to migrate it to this new server).
This is my /etc/init/puma-manager.conf:
# /etc/init/puma-manager.conf - manage a set of Pumas
description "Manages the set of puma processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Puma processes you want
# to run on this machine
env PUMA_CONF="/etc/puma.conf"
pre-start script
for i in `cat $PUMA_CONF`; do
app=`echo $i | cut -d , -f 1`
logger -t "puma-manager" "Starting $app"
start puma app=$app
done
end script
And my /etc/init/puma.conf:
description "Puma Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid user
setgid user
respawn
respawn limit 3 30
instance ${app}
script
# source ENV variables manually as Upstart doesn't, eg:
. /etc/server-vars
exec /bin/bash <<'EOT'
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
export HOME="$(eval echo ~$(id -un))"
if [ -d "/usr/local/rbenv/bin" ]; then
export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
elif [ -d "$HOME/.rbenv/bin" ]; then
export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
elif [ -f /etc/profile.d/rvm.sh ]; then
source /etc/profile.d/rvm.sh
elif [ -f /usr/local/rvm/scripts/rvm ]; then
source /usr/local/rvm/scripts/rvm
elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
source "$HOME/.rvm/scripts/rvm"
elif [ -f /usr/local/share/chruby/chruby.sh ]; then
source /usr/local/share/chruby/chruby.sh
if [ -f /usr/local/share/chruby/auto.sh ]; then
source /usr/local/share/chruby/auto.sh
fi
# if you aren't using auto, set your version here
# chruby 2.0.0
fi
cd $app
logger -t puma "Starting server: $app"
exec bundle exec puma -C config/puma.rb
EOT
end script
It works properly, BUT it is not setting the ENV variables I specify in:
/etc/server-vars
I don't want to put all the ENV vars directly into this script because there are many of them, and doing so limits the reusability of the script.

The solution for me was to use "set -a" before sourcing the environment file. The bash documentation describes what set -a does under "The Set Builtin": it marks every variable that is subsequently assigned for export.
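In a plain shell session the difference looks like this (a minimal sketch; the file contents and the SECRET_TOKEN variable are hypothetical placeholders):
# /etc/server-vars holds plain assignments, e.g.:
#   SECRET_TOKEN=abc123
#   DATABASE_URL=postgres://localhost/myapp_production

# Without set -a the variables become shell-local only,
# so child processes such as puma never see them:
. /etc/server-vars
printenv SECRET_TOKEN        # prints nothing

# With set -a every subsequent assignment is exported:
set -a
. /etc/server-vars
set +a
printenv SECRET_TOKEN        # prints abc123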
Try 'set -a' before sourcing your environment file, as you can see in the following full config:
# /etc/init/puma.conf - Puma config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma-manager.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
# sudo start puma app=PATH_TO_APP
# sudo stop puma app=PATH_TO_APP
# sudo status puma app=PATH_TO_APP
#
# or use the service command:
# sudo service puma {start,stop,restart,status}
#
description "Puma Background Worker"
# no "start on", we don't want to automatically start
start on runlevel [2345]
stop on runlevel [06]
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid deploy
setgid deploy
respawn
respawn limit 3 30
instance ${app}
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv/rvm
# quoted heredoc to tell /bin/sh not to interpret
# variables
# source ENV variables manually as Upstart doesn't, eg:
#. /etc/environment
exec /bin/bash <<'EOT'
set -a
. /etc/environment
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
export HOME="$(eval echo ~$(id -un))"
if [ -d "/usr/local/rbenv/bin" ]; then
export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
elif [ -d "$HOME/.rbenv/bin" ]; then
export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
elif [ -f /etc/profile.d/rvm.sh ]; then
source /etc/profile.d/rvm.sh
elif [ -f /usr/local/rvm/scripts/rvm ]; then
source /usr/local/rvm/scripts/rvm
elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
source "$HOME/.rvm/scripts/rvm"
elif [ -f /usr/local/share/chruby/chruby.sh ]; then
source /usr/local/share/chruby/chruby.sh
if [ -f /usr/local/share/chruby/auto.sh ]; then
source /usr/local/share/chruby/auto.sh
fi
# if you aren't using auto, set your version here
# chruby 2.0.0
fi
logger -t puma "Starting server: $app"
cd $app
exec bundle exec puma -C /home/deploy/brilliant/config/puma.rb
EOT
end script
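To double-check that the variables actually reach the running server, one quick way (a sketch; adjust the pgrep pattern to whatever uniquely matches your Puma master) is to dump the process environment from /proc:
# print the environment of the first matching puma process, one variable per line
sudo cat /proc/"$(pgrep -f puma | head -n 1)"/environ | tr '\0' '\n' | sort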

Related

start rails from an init.d service script and load the correct version of ruby

I want to start the Rails server in production mode after installing, migrating and running some scripts, so that this script can be attached as a pipeline deploy script.
The problem is that the same script doesn't work when run as a service.
ubuntu@ip-x-y-z-w:~/backend.rails.com$ sudo vim /etc/init.d/rails-start-backend
#! /bin/sh
# Start/stop the rails server daemon.
#
### BEGIN INIT INFO
# Provides: rails server start
# Required-Start: $remote_fs $syslog
# Required-Stop: $remote_fs $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: Start daemon at boot time
# Description: Enable service provided by daemon.
### END INIT INFO
PATH=/bin:/usr/bin:/sbin:/usr/sbin
DESC="rails daemon"
NAME=rails
DAEMON=/home/ubuntu/backend.rails.com/gitlab-ci.sh
PIDFILE=/var/run/rails.pid
test -f $DAEMON || exit 0
. /lib/lsb/init-functions
case "$1" in
start)
log_daemon_msg "Starting rails"
/home/ubuntu/backend.rails.com/gitlab-ci.sh > /home/ubuntu/backend.rails.com/log/start_script.log
start_daemon -p $PIDFILE $DAEMON $EXTRA_OPTS
log_end_msg $?
;;
stop) log_daemon_msg "Stopping rails" "rails"
sudo kill -9 $(sudo lsof -t -i:3000)
killproc -p $PIDFILE $DAEMON
RETVAL=$?
[ $RETVAL -eq 0 ] && [ -e "$PIDFILE" ] && rm -f $PIDFILE
log_end_msg $RETVAL
;;
restart) log_daemon_msg "Restarting " "rails"
$0 stop
$0 start
;;
reload|force-reload) log_daemon_msg "Reloading rails" "rails"
# rails reloads automatically
log_end_msg 0
;;
*) log_action_msg "Usage: /etc/init.d/rails {start|stop|status|restart|reload|force-reload}"
exit 2
;;
esac
exit 0
and that's my gitlab-ci.sh script:
cd /home/ubuntu/backend.rails.com
sudo chmod +x gitlab-ci.sh
rm config/master.key
rm config/credentials.yml.enc
echo "credentials"
RAILS_ENV=production EDITOR="mate --wait" rails credentials:edit
export RAILS_ENV=production
export FRONTEND_BASE_URL=https://www.rails.com
echo "bundle install"
bundle install
echo "rails db:migrate"
bundle exec rails db:migrate
echo "rails rake application:initialize"
bundle exec rake application:initialize
echo "kill"
sudo kill -9 $(sudo lsof -t -i:3000)
echo "start"
rails s &
The problem comes when I restart the service with sudo service rails-start-backend restart. It seems that in that context, the bundle, rails and ruby versions and settings are not the same as when I execute the same script manually over SSH.
The errors I get are:
/usr/bin/env: ‘ruby_executable_hooks2.6’: No such file or directory
bundle: not found
Here's my PATH when logged in via SSH:
/home/ubuntu/.rvm/gems/ruby-2.6.5/bin:/home/ubuntu/.rvm/gems/ruby-2.6.5@global/bin:/usr/share/rvm/rubies/ruby-2.6.5/bin:/usr/share/rvm/bin:/home/ubuntu/bin:/home/ubuntu/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
and here it is when the script is executed as a service:
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
It works after setting the right paths in the right variables (using Ruby 2.6.5):
export PATH=/home/ubuntu/.rvm/gems/ruby-2.6.5/bin:/usr/share/rvm/bin:$PATH
export GEM_PATH=/home/ubuntu/.rvm/gems/ruby-2.6.5:/home/ubuntu/.rvm/gems/ruby-2.6.5@global:$GEM_PATH
source .bash_profile
source ~/.rvm/scripts/rvm
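Putting that together, a minimal sketch of the top of gitlab-ci.sh (the paths assume the RVM layout shown above; whether ~/.rvm or /usr/share/rvm applies depends on how RVM was installed):
#!/bin/bash
# load the RVM environment explicitly, since the init.d context starts with a bare PATH
export PATH=/home/ubuntu/.rvm/gems/ruby-2.6.5/bin:/usr/share/rvm/bin:$PATH
export GEM_PATH=/home/ubuntu/.rvm/gems/ruby-2.6.5:/home/ubuntu/.rvm/gems/ruby-2.6.5@global:$GEM_PATH
source /home/ubuntu/.rvm/scripts/rvm   # absolute path, so it also works when $HOME is not /home/ubuntu

cd /home/ubuntu/backend.rails.com
# ... the rest of the deploy/start steps follow unchanged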

Configure unicorn service script using RVM

In my /etc/init.d/unicorn startup script I have hard-coded paths as follows:
export GEM_HOME=/usr/local/rvm/gems/ruby-2.2.0-dev
export GEM_PATH=/usr/local/rvm/gems/ruby-2.2.0-dev:/usr/local/rvm/gems/ruby-2.2.0-dev/gems:/usr/local/rvm/gems/ruby-2.2.0-dev@global/gems
DAEMON=/usr/local/rvm/gems/ruby-2.2.0-dev/bin/unicorn
UNICORN_OPTS="-D -c /home/unicorn/unicorn.conf -E production"
I am using RVM, and when I change the Ruby version, the currently selected Ruby should be used for unicorn.
Question 1
So how can I make sure that these variables always point to the proper ruby?
Question 2
In my bundle I am using rack 1.5.5. On my production server I had to install the unicorn gem as a "stand-alone" gem so that I can start my server using:
service unicorn start
Now the unicorn gem installed rack 1.6.x, and my Rails app crashes because rack is already loaded. Locally I would just execute it with bundle, but how can I do that when I am using this /etc/init.d script?
The part where the server is started looks as follows, and I don't know how I could "inject" the bundler call there, or whether that is good practice at all:
# ...
log_daemon_msg "Starting $DESC" $NAME || true
if start-stop-daemon --start --quiet --oknodo --pidfile $PID --exec $DAEMON -- $UNICORN_OPTS; then
The config: /home/unicorn/unicorn.conf
listen "127.0.0.1:8080"
worker_processes 2
user "rails"
working_directory "/home/myapp"
pid "/home/unicorn/pids/unicorn.pid"
stderr_path "/home/unicorn/log/unicorn.log"
stdout_path "/home/unicorn/log/unicorn.log"
Assuming rubyuser is the user that has RVM and the rubies installed, one can parse the output of rvm info, running it as that user:
`sudo -H -u rubyuser bash -c '[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" && rvm info | grep "^ \+GEM_"| sed -e "s/^ \+/export /" -e "s/: \+/=/"'`
The command above sets the GEM_HOME and GEM_PATH variables. To just print them instead, remove the backticks around the command.
For a system-wide RVM install under /usr/local, it's even simpler:
[[ -s "/usr/local/rvm/scripts/rvm" ]] && source "/usr/local/rvm/scripts/rvm" && rvm info | grep "^ \+GEM_"| sed -e "s/^ \+/export /" -e "s/: \+/=/"
# export GEM_HOME="/usr/local/rvm/gems/ruby-2.1.1"
# export GEM_PATH="/usr/local/rvm/gems/ruby-2.1.1:/usr/local/rvm/gems/ruby-2.1.1@global"
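To answer Question 1 more directly, one way (a sketch; it assumes rubyuser owns the RVM install and that unicorn lives in the resulting GEM_HOME) is to eval those generated export lines at the top of /etc/init.d/unicorn instead of hard-coding them:
# /etc/init.d/unicorn (excerpt) -- derive GEM_HOME/GEM_PATH from rubyuser's current ruby
eval "$(sudo -H -u rubyuser bash -c '[[ -s "$HOME/.rvm/scripts/rvm" ]] && source "$HOME/.rvm/scripts/rvm" && rvm info | grep "^ \+GEM_" | sed -e "s/^ \+/export /" -e "s/: \+/=/"')"

DAEMON="$GEM_HOME/bin/unicorn"
UNICORN_OPTS="-D -c /home/unicorn/unicorn.conf -E production"
That way a change of the default Ruby in RVM is picked up on the next service restart without editing the init script.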

Puma restart fails on reboot using EC2 + Rails + Nginx + Capistrano

I have successfully used Capistrano to deploy my Rails app to Ubuntu on EC2. Everything works great on deploy. The Rails app name is deseov12.
My issue is that Puma does not start on boot, which will be necessary as production EC2 instances will be instantiated on demand.
Puma will start when deploying via Capistrano; it will also start when running
cap production puma:start
on the local machine.
It will also start on the server after a reboot if I run the following commands:
su - deploy
[enter password]
cd /home/deploy/deseov12/current && ( export RACK_ENV="production" ; ~/.rvm/bin/rvm ruby-2.2.4 do bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon )
I have followed the directions from the Puma "jungle" tools to make Puma start on boot using Upstart, as follows:
Contents of /etc/puma.conf
/home/deploy/deseov12/current
Contents of /etc/init/puma.conf and /home/deploy/puma.conf
# /etc/init/puma.conf - Puma config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
# sudo start puma app=PATH_TO_APP
# sudo stop puma app=PATH_TO_APP
# sudo status puma app=PATH_TO_APP
#
# or use the service command:
# sudo service puma {start,stop,restart,status}
#
description "Puma Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid deploy
setgid deploy
respawn
respawn limit 3 30
instance ${app}
script
# this script runs in /bin/sh by default
# respawn as bash so we can source in rbenv/rvm
# quoted heredoc to tell /bin/sh not to interpret
# variables
# source ENV variables manually as Upstart doesn't, eg:
#. /etc/environment
exec /bin/bash <<'EOT'
# set HOME to the setuid user's home, there doesn't seem to be a better, portable way
export HOME="$(eval echo ~$(id -un))"
if [ -d "/usr/local/rbenv/bin" ]; then
export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
elif [ -d "$HOME/.rbenv/bin" ]; then
export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
elif [ -f /etc/profile.d/rvm.sh ]; then
source /etc/profile.d/rvm.sh
elif [ -f /usr/local/rvm/scripts/rvm ]; then
source /usr/local/rvm/scripts/rvm
elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
source "$HOME/.rvm/scripts/rvm"
elif [ -f /usr/local/share/chruby/chruby.sh ]; then
source /usr/local/share/chruby/chruby.sh
if [ -f /usr/local/share/chruby/auto.sh ]; then
source /usr/local/share/chruby/auto.sh
fi
# if you aren't using auto, set your version here
# chruby 2.0.0
fi
cd $app
logger -t puma "Starting server: $app"
exec bundle exec puma -C current/config/puma.rb
EOT
end script
Contents of /etc/init/puma-manager.conf and /home/deploy/puma-manager.conf
# /etc/init/puma-manager.conf - manage a set of Pumas
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma.conf for how to manage a single Puma instance.
#
# Use "stop puma-manager" to stop all Puma instances.
# Use "start puma-manager" to start all instances.
# Use "restart puma-manager" to restart all instances.
# Crazy, right?
#
description "Manages the set of puma processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Puma processes you want
# to run on this machine
env PUMA_CONF="/etc/puma.conf"
pre-start script
for i in `cat $PUMA_CONF`; do
app=`echo $i | cut -d , -f 1`
logger -t "puma-manager" "Starting $app"
start puma app=$app
done
end script
Contents of /home/deploy/deseov12/shared/puma.rb
#!/usr/bin/env puma
directory '/home/deploy/deseov12/current'
rackup "/home/deploy/deseov12/current/config.ru"
environment 'production'
pidfile "/home/deploy/deseov12/shared/tmp/pids/puma.pid"
state_path "/home/deploy/deseov12/shared/tmp/pids/puma.state"
stdout_redirect '/home/deploy/deseov12/shared/log/puma_error.log', '/home/deploy/deseov12/shar$
threads 0,8
bind 'unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock'
workers 0
activate_control_app
prune_bundler
on_restart do
puts 'Refreshing Gemfile'
ENV["BUNDLE_GEMFILE"] = "/home/deploy/deseov12/current/Gemfile"
end
However, I have not been able to make Puma start up automatically after a server reboot. It just does not start.
I would certainly appreciate some help
EDIT: I just noticed something that could be a clue:
when running the following command as the deploy user:
sudo start puma app=/home/deploy/deseov12/current
ps aux shows a puma process for a few seconds before it disappears:
deploy 4312 103 7.7 183396 78488 ? Rsl 03:42 0:02 puma 2.15.3 (tcp://0.0.0.0:3000) [20160106224332]
This puma process is different from the working process launched by Capistrano:
deploy 5489 10.0 12.4 858088 126716 ? Sl 03:45 0:02 puma 2.15.3 (unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock) [20160106224332]
This was finally solved after a lot of research. It turns out the issue was threefold:
1) the proper environment was not being set when running the Upstart script
2) with Capistrano, the actual production puma.rb configuration file lives in the /home/deploy/deseov12/shared directory, not in the /current/ directory
3) the puma server was not being daemonized properly
To solve these issues:
1) This line should be added near the top of /etc/init/puma.conf and /home/deploy/puma.conf (as an Upstart env stanza, outside the script block):
env RACK_ENV="production"
2) and 3) this line
exec bundle exec puma -C current/config/puma.rb
should be replaced with this one
exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon
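Concretely, the relevant part of /etc/init/puma.conf ends up looking roughly like this (a sketch; the rbenv/rvm detection block from the original script is omitted for brevity):
env RACK_ENV="production"

script
exec /bin/bash <<'EOT'
  # ... HOME / rbenv / rvm setup as in the original script ...
  cd $app
  logger -t puma "Starting server: $app"
  exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon
EOT
end script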
After doing this, the puma server starts properly on reboot or when a new instance is spun up. Hope this helps someone avoid hours of troubleshooting.

Using supervisord and rvm to run rubyonrails

I have a Ruby on Rails 3 project and I'm using RVM. I want to switch from a sysvinit script to supervisord. The sysvinit script can only start the software; in case of an error it has to be killed and restarted by $something. Mostly me.
In the project folder there are a .ruby-version and a .ruby-gemset file, so the correct Ruby version and gemset get loaded automatically. The app is then started with a shell script which looks like this:
#!/bin/bash
RAILS_ENV="production" rails server -d
My init script looks like this and works, apart from restarting and stopping:
#!/bin/sh
### BEGIN INIT INFO
# Provides: myapp
# Required-Start: $local_fs $remote_fs $network $syslog
# Required-Stop: $local_fs $remote_fs $network $syslog
# Default-Start: 2 3 4 5
# Default-Stop: 0 1 6
# Short-Description: starts myapp
# Description: starts the myapp software
### END INIT INFO
USER=myuser
PATH=$PATH
DAEMON=go.sh
DAEMON_OPTS=""
NAME=myapp
DESC="myapp for $USER"
PID=/home/$USER/myapp/tmp/pids/server.pid
case "$1" in
start)
CD_TO_APP_DIR="cd /home/$USER/myapp"
START_DAEMON_PROCESS="$DAEMON $DAEMON_OPTS"
echo -n "Starting $DESC: "
if [ $(whoami) = root ]; then
su - $USER -c "$CD_TO_APP_DIR > /dev/null 2>&1 && ./$START_DAEMON_PROCESS &"
else
$CD_TO_APP_DIR > /dev/null 2>&1 && ./$START_DAEMON_PROCESS &
fi
echo "$NAME."
;;
stop)
echo -n "Stopping $DESC: "
kill -QUIT `cat $PID`
echo "$NAME."
;;
restart)
echo -n "Restarting $DESC: "
kill -USR2 `cat $PID`
echo "$NAME."
;;
reload)
echo -n "Reloading $DESC configuration: "
kill -HUP `cat $PID`
echo "$NAME."
;;
*)
echo "Usage: $NAME {start|stop|restart|reload}" >&2
exit 1
;;
esac
exit 0
My supervisor config looks like this:
[program:myapp]
directory=/home/myuser/myapp/
command=/home/myuser/.rvm/wrappers/ruby-2.1.5@myapp/rails server -d
environment=RAILS_ENV="production"
autostart=true
autorestart=true
The problem is that there is no rails binary in the wrapper directory, so the command fails. What is the correct way to do this? I'm out of ideas and would otherwise start putting together a really ugly bash script that gets the job done in a very wrong way. By the way, I did find rails in the gems folder:
$ ls /home/myuser/.rvm/wrappers/ruby-2.1.5@myapp/
bundle bundler erb executable-hooks-uninstaller gem irb rake rdoc ri ruby testrb
$ which rails
/home/ffwi/.rvm/gems/ruby-2.1.5@ffwi-extern/bin/rails
Try sourcing RVM in your script (this link describes use cases like yours).
You have to load RVM into the shell of your script manually:
source "$HOME/.rvm/scripts/rvm"
RVM is only loaded automatically for interactive login shells.
From this point on, you can cd into directories and rvm should do its business.
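Following that advice, one option (a sketch; start_rails.sh is a made-up name) is to point supervisord at a small launcher script that loads RVM first and keeps the server in the foreground:
#!/bin/bash
# /home/myuser/myapp/start_rails.sh -- hypothetical launcher for supervisord
source /home/myuser/.rvm/scripts/rvm     # load RVM into this non-interactive shell
cd /home/myuser/myapp                    # lets .ruby-version / .ruby-gemset pick the ruby and gemset
exec bundle exec rails server
and the supervisord config then becomes:
[program:myapp]
directory=/home/myuser/myapp/
command=/home/myuser/myapp/start_rails.sh
environment=RAILS_ENV="production"
autostart=true
autorestart=true
Note that the -d flag is dropped: supervisord expects the process it starts to stay in the foreground, otherwise it assumes the program has exited.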

daemonizing sidekiq with upstart script - is not working

I'm trying to daemonize sidekiq using two upstart scripts following this example.
Basically the workers service starts a fixed number of sidekiq services.
The problem is that the sidekiq script fails at the line of code where I am starting sidekiq. I've tried to run the command directly in bash and it works fine.
I tried all the different commented-out variants and none of them works.
So my question is: what am I doing wrong? And where can I see the error messages?
This is my modified sidekiq script:
# /etc/init/sidekiq.conf - Sidekiq config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Sidekiq instances at once.
#
# Save this config as /etc/init/sidekiq.conf then manage sidekiq with:
# sudo start sidekiq index=0
# sudo stop sidekiq index=0
# sudo status sidekiq index=0
#
# or use the service command:
# sudo service sidekiq {start,stop,restart,status}
#
description "Sidekiq Background Worker"
respawn
respawn limit 15 5
# no "start on", we don't want to automatically start
stop on (stopping workers or runlevel [06])
# TERM and USR1 are sent by sidekiqctl when stopping sidekiq. Without declaring these as normal exit codes, it just respawns.
normal exit 0 TERM USR1
instance $index
script
exec /bin/bash <<EOT
# use syslog for logging
# exec &> /dev/kmsg
# pull in system rbenv
# export HOME=/home/deploy
# source /etc/profile.d/rbenv.sh
cd /home/rails
touch /root/sidekick_has_started
sidekiq -i ${index} -e production
# exec sidekiq -i ${index} -e production
# exec /usr/local/rvm/gems/ruby-2.0.0-p353/gems/sidekiq-3.1.3/bin/sidekiq -i ${index} -e production
touch /root/sidekick_has_started_2
EOT
end script
You are right, the RVM environment needs to be loaded in. Try this:
.....
.....
script
exec /bin/bash <<EOT
#export HOME=/home/deploy
source /usr/local/rvm/environments/ruby-2.0.0-p353@global
cd /home/rails
exec sidekiq -i ${index} -e production
.....
.....
Does it work?
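As for where to see the error messages: on Ubuntu 12.04+ Upstart normally captures each job's stdout/stderr under /var/log/upstart/, and the system log is also worth checking (the exact log file name for an instance job may vary):
sudo tail -n 50 /var/log/upstart/sidekiq-0.log    # output of "start sidekiq index=0"
sudo grep sidekiq /var/log/syslog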
