Puma restart fails on reboot using EC2 + Rails + Nginx + Capistrano

I have successfully used Capistrano to deploy my Rails app to an Ubuntu EC2 instance. Everything works great on deploy. The Rails app is named deseov12.
My issue is that Puma does not start on boot, which will be necessary because production EC2 instances will be instantiated on demand.
Puma starts when deploying via Capistrano, and it also starts when running
cap production puma:start
on my local machine.
It will also start on the server after a reboot if I run the following commands:
su - deploy
[enter password]
cd /home/deploy/deseov12/current && ( export RACK_ENV="production" ; ~/.rvm/bin/rvm ruby-2.2.4 do bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon )
I have followed the directions from Puma's "jungle" tool to make Puma start on boot using Upstart, as follows:
Contents of /etc/puma.conf
/home/deploy/deseov12/current
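For reference, each line of /etc/puma.conf is one app directory for puma-manager to start; the cut -d , -f 1 in puma-manager.conf below means only the first comma-separated field of each line is read. A file managing a second, hypothetical app (otherapp is made up here) would look like:
/home/deploy/deseov12/current
/home/deploy/otherapp/current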
Contents of /etc/init/puma.conf and /home/deploy/puma.conf
# /etc/init/puma.conf - Puma config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma-manager.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
# sudo start puma app=PATH_TO_APP
# sudo stop puma app=PATH_TO_APP
# sudo status puma app=PATH_TO_APP
#
# or use the service command:
# sudo service puma {start,stop,restart,status}
#
description "Puma Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid deploy
setgid deploy
respawn
respawn limit 3 30
instance ${app}
script
  # this script runs in /bin/sh by default
  # respawn as bash so we can source in rbenv/rvm
  # quoted heredoc to tell /bin/sh not to interpret
  # variables
  # source ENV variables manually as Upstart doesn't, eg:
  #. /etc/environment
  exec /bin/bash <<'EOT'
    # set HOME to the setuid user's home, there doesn't seem to be a better, portable way
    export HOME="$(eval echo ~$(id -un))"
    if [ -d "/usr/local/rbenv/bin" ]; then
      export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
    elif [ -d "$HOME/.rbenv/bin" ]; then
      export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
    elif [ -f /etc/profile.d/rvm.sh ]; then
      source /etc/profile.d/rvm.sh
    elif [ -f /usr/local/rvm/scripts/rvm ]; then
      # (fixed here: the stock jungle script mistakenly sources /etc/profile.d/rvm.sh in this branch)
      source /usr/local/rvm/scripts/rvm
    elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
      source "$HOME/.rvm/scripts/rvm"
    elif [ -f /usr/local/share/chruby/chruby.sh ]; then
      source /usr/local/share/chruby/chruby.sh
      if [ -f /usr/local/share/chruby/auto.sh ]; then
        source /usr/local/share/chruby/auto.sh
      fi
      # if you aren't using auto, set your version here
      # chruby 2.0.0
    fi
    cd $app
    logger -t puma "Starting server: $app"
    exec bundle exec puma -C current/config/puma.rb
EOT
end script
Contents of /etc/init/puma-manager.conf and /home/deploy/puma-manager.conf
# /etc/init/puma-manager.conf - manage a set of Pumas
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma.conf for how to manage a single Puma instance.
#
# Use "stop puma-manager" to stop all Puma instances.
# Use "start puma-manager" to start all instances.
# Use "restart puma-manager" to restart all instances.
# Crazy, right?
#
description "Manages the set of puma processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Puma processes you want
# to run on this machine
env PUMA_CONF="/etc/puma.conf"
pre-start script
  for i in `cat $PUMA_CONF`; do
    app=`echo $i | cut -d , -f 1`
    logger -t "puma-manager" "Starting $app"
    start puma app=$app
  done
end script
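With both job files installed under /etc/init/, every app listed in /etc/puma.conf can then be managed in one go, as the comments above describe:
sudo start puma-manager
sudo restart puma-manager
sudo stop puma-manager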
Contents of /home/deploy/deseov12/shared/puma.rb
#!/usr/bin/env puma
directory '/home/deploy/deseov12/current'
rackup "/home/deploy/deseov12/current/config.ru"
environment 'production'
pidfile "/home/deploy/deseov12/shared/tmp/pids/puma.pid"
state_path "/home/deploy/deseov12/shared/tmp/pids/puma.state"
stdout_redirect '/home/deploy/deseov12/shared/log/puma_error.log', '/home/deploy/deseov12/shar$
threads 0,8
bind 'unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock'
workers 0
activate_control_app
prune_bundler
on_restart do
  puts 'Refreshing Gemfile'
  ENV["BUNDLE_GEMFILE"] = "/home/deploy/deseov12/current/Gemfile"
end
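Since this puma.rb enables activate_control_app and writes a state file, a quick way to confirm Puma really came up (e.g. after a reboot) is to query it with pumactl from the application directory:
bundle exec pumactl -S /home/deploy/deseov12/shared/tmp/pids/puma.state status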
However, I have not been able to make Puma start automatically after a server reboot; it simply does not start.
I would certainly appreciate some help.
EDIT: I just noticed something that could be a clue.
When running the following command as the deploy user:
sudo start puma app=/home/deploy/deseov12/current
ps aux shows a Puma process for a few seconds before it disappears:
deploy 4312 103 7.7 183396 78488 ? Rsl 03:42 0:02 puma 2.15.3 (tcp://0.0.0.0:3000) [20160106224332]
This Puma process differs from the working process launched by Capistrano; note that it is bound to tcp://0.0.0.0:3000, Puma's default, rather than to the UNIX socket configured in puma.rb:
deploy 5489 10.0 12.4 858088 126716 ? Sl 03:45 0:02 puma 2.15.3 (unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock) [20160106224332]

This is finally solved after a lot of research. It turns out the issue was threefold:
1) the proper environment was not being set when running the Upstart script
2) with Capistrano, the actual production puma.rb configuration file lives in the /home/deploy/deseov12/shared directory, not in the /current/ directory
3) the Puma server was not being daemonized properly
To solve these issues:
1) Add this line near the top of /etc/init/puma.conf and /home/deploy/puma.conf (as a job-level Upstart stanza, outside the script block):
env RACK_ENV="production"
2) and 3) this line
exec bundle exec puma -C current/config/puma.rb
should be replaced with this one
exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon
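Putting 1) through 3) together, the relevant parts of the Upstart job end up looking like this (an excerpt; the rbenv/rvm/chruby detection is unchanged from the full script above):
env RACK_ENV="production"

script
  exec /bin/bash <<'EOT'
    # ... rbenv/rvm/chruby detection as above ...
    cd $app
    logger -t puma "Starting server: $app"
    exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon
EOT
end script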
After doing this, the Puma server starts properly on reboot or when a new instance is spun up. Hope this helps someone avoid hours of troubleshooting.

Related

Puma Upstart not loading ENV variables

I've deployed an app in production on an Ubuntu Server VM. It uses Puma, so I've followed this guide to configure it there: https://www.digitalocean.com/community/tutorials/how-to-deploy-a-rails-app-with-puma-and-nginx-on-ubuntu-14-04
(The app currently works properly on Heroku; we are looking to migrate it to this new server.)
This is my /etc/init/puma-manager.conf
# /etc/init/puma-manager.conf - manage a set of Pumas
description "Manages the set of puma processes"
# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]
# Set this to the number of Puma processes you want
# to run on this machine
env PUMA_CONF="/etc/puma.conf"
pre-start script
  for i in `cat $PUMA_CONF`; do
    app=`echo $i | cut -d , -f 1`
    logger -t "puma-manager" "Starting $app"
    start puma app=$app
  done
end script
And my /etc/init/puma.conf
description "Puma Background Worker"
# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid user
setgid user
respawn
respawn limit 3 30
instance ${app}
script
  # source ENV variables manually as Upstart doesn't, eg:
  . /etc/server-vars
  exec /bin/bash <<'EOT'
    # set HOME to the setuid user's home, there doesn't seem to be a better, portable way
    export HOME="$(eval echo ~$(id -un))"
    if [ -d "/usr/local/rbenv/bin" ]; then
      export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
    elif [ -d "$HOME/.rbenv/bin" ]; then
      export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
    elif [ -f /etc/profile.d/rvm.sh ]; then
      source /etc/profile.d/rvm.sh
    elif [ -f /usr/local/rvm/scripts/rvm ]; then
      source /usr/local/rvm/scripts/rvm
    elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
      source "$HOME/.rvm/scripts/rvm"
    elif [ -f /usr/local/share/chruby/chruby.sh ]; then
      source /usr/local/share/chruby/chruby.sh
      if [ -f /usr/local/share/chruby/auto.sh ]; then
        source /usr/local/share/chruby/auto.sh
      fi
      # if you aren't using auto, set your version here
      # chruby 2.0.0
    fi
    cd $app
    logger -t puma "Starting server: $app"
    exec bundle exec puma -C config/puma.rb
EOT
end script
It works properly, BUT it is not setting the ENV variables I specify in /etc/server-vars.
I don't want to put all the ENV vars directly into this script because there are many of them, and doing so limits the reusability of the script.
The solution for me was to use "set -a" before sourcing the environment file; see the bash documentation on The Set Builtin for what it does. In short, set -a marks every variable assigned afterwards for export, so plain KEY=value lines in the sourced file end up in the environment of child processes such as Puma.
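A minimal illustration of the difference, using a hypothetical /tmp/vars file:
$ echo 'FOO=bar' > /tmp/vars
$ sh -c '. /tmp/vars; env | grep FOO'          # no output: FOO stays a shell-local variable
$ sh -c 'set -a; . /tmp/vars; env | grep FOO'  # prints FOO=bar: set -a marked it for export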
Try 'set -a' before sourcing your environment file, as in the following example:
# /etc/init/puma.conf - Puma config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma-manager.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
# sudo start puma app=PATH_TO_APP
# sudo stop puma app=PATH_TO_APP
# sudo status puma app=PATH_TO_APP
#
# or use the service command:
# sudo service puma {start,stop,restart,status}
#
description "Puma Background Worker"
# no "start on", we don't want to automatically start
start on runlevel [2345]
stop on runlevel [06]
# change apps to match your deployment user if you want to use this as a less privileged user (recommended!)
setuid deploy
setgid deploy
respawn
respawn limit 3 30
instance ${app}
script
  # this script runs in /bin/sh by default
  # respawn as bash so we can source in rbenv/rvm
  # quoted heredoc to tell /bin/sh not to interpret
  # variables
  # source ENV variables manually as Upstart doesn't, eg:
  #. /etc/environment
  exec /bin/bash <<'EOT'
    set -a
    . /etc/environment
    # set HOME to the setuid user's home, there doesn't seem to be a better, portable way
    export HOME="$(eval echo ~$(id -un))"
    if [ -d "/usr/local/rbenv/bin" ]; then
      export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
    elif [ -d "$HOME/.rbenv/bin" ]; then
      export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
    elif [ -f /etc/profile.d/rvm.sh ]; then
      source /etc/profile.d/rvm.sh
    elif [ -f /usr/local/rvm/scripts/rvm ]; then
      source /usr/local/rvm/scripts/rvm
    elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
      source "$HOME/.rvm/scripts/rvm"
    elif [ -f /usr/local/share/chruby/chruby.sh ]; then
      source /usr/local/share/chruby/chruby.sh
      if [ -f /usr/local/share/chruby/auto.sh ]; then
        source /usr/local/share/chruby/auto.sh
      fi
      # if you aren't using auto, set your version here
      # chruby 2.0.0
    fi
    logger -t puma "Starting server: $app"
    cd $app
    exec bundle exec puma -C /home/deploy/brilliant/config/puma.rb
EOT
end script
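One Upstart gotcha worth knowing while iterating on this: a plain restart does not re-read a changed job file, so after editing /etc/init/puma.conf, stop and start the instance instead:
sudo stop puma app=PATH_TO_APP
sudo start puma app=PATH_TO_APP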

How to update ENV variable on Passenger standalone restart

I'm using Capistrano to deploy my application, which runs on Passenger standalone. When I redeploy the application, Passenger still uses the Gemfile from the old release because the BUNDLE_GEMFILE environment variable has not been updated.
Where I should put the updated path to Gemfile so that Passenger would pick it up on restart?
The server startup command is in monit, and I just call monit scripts from Capistrano tasks, except for restart, where I just touch restart.txt:
namespace :deploy do
  task :stop do
    run("sudo /usr/bin/monit stop my_app_#{rails_env}")
  end
  task :restart do
    run("cd #{current_path} && touch tmp/restart.txt")
  end
  task :start do
    run("sudo /usr/bin/monit start my_app_#{rails_env}")
  end
end
The startup command in monit is:
start program = "/bin/su - app_user -l -c 'cd /home/app_user/current && bundle exec passenger start -d -p 8504 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.8504.pid /home/app_user/current'"
I already tried to add the BUNDLE_GEMFILE into the startup command like this:
start program = "/bin/su - app_user -l -c 'cd /home/app_user/current && BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec passenger start -d -p 8504 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.8504.pid /home/app_user/current'"
But it didn't work since the path /home/app_user/current is a symlink to a release path and that release path was picked up instead.
Simple solution: define the Gemfile to be used in the server start command. For example:
BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec passenger start -d -p 9999 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.9999.pid /home/app_user/current
The earlier solution (setting the BUNDLE_GEMFILE env variable in .profile) is not good: when you deploy a new version of your application and there is a new gem in the bundle, the migrations etc. will fail, because it will still use the Gemfile defined in the env variable.

daemonizing sidekiq with upstart script - is not working

I'm trying to daemonize Sidekiq using two Upstart scripts, following this example.
Basically, the workers service starts a fixed number of sidekiq services.
The problem is that the sidekiq script fails at the line that starts Sidekiq. I've tried running the command directly in bash and it works fine.
I tried all the different commented variants and none works.
So my question is: what am I doing wrong? And where can I see the error messages?
This is my modified sidekiq script:
# /etc/init/sidekiq.conf - Sidekiq config
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Sidekiq instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Sidekiq instances at once.
#
# Save this config as /etc/init/sidekiq.conf then manage sidekiq with:
# sudo start sidekiq index=0
# sudo stop sidekiq index=0
# sudo status sidekiq index=0
#
# or use the service command:
# sudo service sidekiq {start,stop,restart,status}
#
description "Sidekiq Background Worker"
respawn
respawn limit 15 5
# no "start on", we don't want to automatically start
stop on (stopping workers or runlevel [06])
# TERM and USR1 are sent by sidekiqctl when stopping sidekiq. Without declaring these as normal exit codes, it just respawns.
normal exit 0 TERM USR1
instance $index
script
  exec /bin/bash <<EOT
    # use syslog for logging
    # exec &> /dev/kmsg
    # pull in system rbenv
    # export HOME=/home/deploy
    # source /etc/profile.d/rbenv.sh
    cd /home/rails
    touch /root/sidekick_has_started
    sidekiq -i ${index} -e production
    # exec sidekiq -i ${index} -e production
    # exec /usr/local/rvm/gems/ruby-2.0.0-p353/gems/sidekiq-3.1.3/bin/sidekiq -i ${index} -e production
    touch /root/sidekick_has_started_2
EOT
end script
You are right, the RVM environment needs to be loaded in. (As for where the error messages go: on Ubuntu 12.04+, Upstart writes each job's console output to /var/log/upstart/<job>.log by default.) Try this:
.....
.....
script
  exec /bin/bash <<EOT
    # export HOME=/home/deploy
    source /usr/local/rvm/environments/ruby-2.0.0-p353@global
    cd /home/rails
    exec sidekiq -i ${index} -e production
.....
.....
Does it work?
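If you're unsure of the exact environment name to source, list what RVM has generated; any file in that directory can be sourced the same way (the names below are just examples for this Ruby):
$ ls /usr/local/rvm/environments/
default  ruby-2.0.0-p353  ruby-2.0.0-p353@global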

How should I maintain my Puma application server?

I can successfully run a rails application on my server using Puma as the application server. I start Puma like this:
bundle exec puma -e production -b unix:///var/run/my_app.sock
That command starts Puma in production mode, bound to the specified UNIX socket. However, whenever I reboot my VPS, I'll need to go through all of my apps and run that command over and over to start the Puma server for each one.
What's the best way to go about doing this? I'm a bit of an Ubuntu noob, but would the best way be this:
Every time I install a new Rails application on my VPS, I
sudo vi /etc/rc.local
and append the command to rc.local? So that rc.local looks like this after a while:
#!/bin/sh -e
#
# rc.local
#
bundle exec puma -e production -b unix:///var/run/app_1.sock
bundle exec puma -e production -b unix:///var/run/app_2.sock
bundle exec puma -e production -b unix:///var/run/app_3.sock
bundle exec puma -e production -b unix:///var/run/app_4.sock
bundle exec puma -e production -b unix:///var/run/app_5.sock
exit 0
Ubuntu uses Upstart to manage services, and Puma actually provides Upstart scripts that make it incredibly easy to do what you want. Have a look at the scripts in their repo:
https://github.com/puma/puma/tree/master/tools/jungle/upstart
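In short, it's the same setup walked through in the first question above: copy the two job files into /etc/init/, list each app's directory in /etc/puma.conf, and let puma-manager boot them all (the app path here is illustrative):
$ sudo cp tools/jungle/upstart/puma.conf tools/jungle/upstart/puma-manager.conf /etc/init/
$ echo /var/www/app_1 | sudo tee -a /etc/puma.conf
$ sudo start puma-manager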
Ubuntu makes this very difficult. The simplest solution I've seen so far is with OpenBSD. To make sure your apps start on reboot, add this to your /etc/rc.conf.local:
pkg_scripts="myapp myapp2 myapp3"
Each app would need a startup script like this (/etc/rc.d/myapp):
#!/bin/sh
# OPENBSD PUMA STARTUP SCRIPT
# Remember to `chmod +x` this file
# http://www.openbsd.org/cgi-bin/cvsweb/ports/infrastructure/templates/rc.template?rev=1.5
puma="/usr/local/bin/puma"
pumactl="/usr/local/bin/pumactl"
puma_state="-S /home/myapp/tmp/puma.state"
puma_config="-C /home/myapp/config/puma.rb"
. /etc/rc.d/rc.subr
rc_start() {
  ${rcexec} "${pumactl} ${puma_state} start ${puma_config}"
}
rc_reload() {
  ${rcexec} "${pumactl} ${puma_state} restart ${puma_config}"
}
rc_stop() {
  ${rcexec} "${pumactl} ${puma_state} stop"
}
rc_check() {
  ${rcexec} "${pumactl} ${puma_state} status"
}
rc_cmd $1
Then use it like this:
% /etc/rc.d/myapp start
% /etc/rc.d/myapp reload
% /etc/rc.d/myapp stop
% /etc/rc.d/myapp status

Start Unicorn with Runit and User's RVM

I'm deploying my Rails app servers with Chef, and I've just swapped to RVM from a source install of Ruby (because I was having issues with my deploy user).
Now I have my deploy sorted, assets compiled and bundler's installed all my gems.
The problem I have is supervising Unicorn with Runit.
RVM is not installed for the root user; only my deploy user has it, as follows:
$ rvm list
rvm rubies
=* ruby-2.0.0-p247 [ x86_64 ]
I can manually start Unicorn successfully as my deploy user. However, it won't start as part of runit.
My run file looks like this. I have also tried the solution in this SO question, unsuccessfully:
#!/bin/bash
cd /var/www/html/deploy/production/current
exec 2>&1
exec chpst -u deploy:deploy /home/deploy/.rvm/gems/ruby-2.0.0-p247/bin/unicorn -E production -c config/unicorn_production.rb
If I run it manually, I get this error:
/usr/bin/env: ruby_noexec_wrapper: No such file or directory
I created a small script (gist here) which does run as root. However, if I call this from runit, I see the workers start, but I get two processes for runit and I can't stop or restart the service:
Output of ps:
1001 29062 1 0 00:08 ? 00:00:00 unicorn master -D -E production -c /var/www/html/deploy/production/current/config/unicorn_production.rb
1001 29065 29062 9 00:08 ? 00:00:12 unicorn worker[0] -D -E production -c /var/www/html/deploy/production/current/config/unicorn_production.rb
root 29076 920 0 00:08 ? 00:00:00 su - deploy -c cd /var/www/html/deploy/production/current; export GEM_HOME=/home/deploy/.rvm/gems/ruby-2.0.0-p247; /home/deploy/.rvm/gems/ruby-2.0.0-p247/bin/unicorn -D -E production -c /var/www/html/deploy/production/current/config/unicorn_production.rb
1001 29083 29076 0 00:08 ? 00:00:00 -su -c cd /var/www/html/deploy/production/current; export GEM_HOME=/home/deploy/.rvm/gems/ruby-2.0.0-p247; /home/deploy/.rvm/gems/ruby-2.0.0-p247/bin/unicorn -D -E production -c /var/www/html/deploy/production/current/config/unicorn_production.rb
What should I do here? Should I move back to monit, which worked nicely?
Your run file is doing it wrong: you are using the Unicorn binary without setting up the RVM environment. For that purpose you should use wrappers:
rvm wrapper ruby-2.0.0-p247 --no-links unicorn
To simplify the script, create an alias so the script does not need to change when you decide which Ruby should be used:
rvm alias create my_app_unicorn ruby-2.0.0-p247
And change the script to:
#!/bin/bash
cd /var/www/html/deploy/production/current
exec 2>&1
exec chpst -u deploy:deploy /home/deploy/.rvm/wrappers/my_app_unicorn/unicorn -E production -c config/unicorn_production.rb
This will ensure the proper environment is used to execute Unicorn, and any time you want to change the Ruby it runs under, you just recreate the alias against the new Ruby.
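Before wiring it into runit, you can sanity-check from a shell that the wrapper exists at the aliased path (my_app_unicorn being the alias created above):
$ ls /home/deploy/.rvm/wrappers/my_app_unicorn/
unicorn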
