Conditional server start on Heroku - ruby-on-rails

To get a static IP via QuotaGuard I need to wrap the Puma server in a SOCKS proxy, but in production only, to avoid issues with Foreman in development.
Following an article linked from another question, I implemented the following code in bin/web, which is called from the Procfile:
#!/bin/sh
if [ "$RACK_ENV" == "production" ]; then
  echo Using qgsocksify to wrap Puma
  bin/qgsocksify bundle exec puma -C config/puma.rb
else
  echo Using straight Puma
  bundle exec puma -C config/puma.rb
fi
Although RACK_ENV is set to production, the unwrapped "straight Puma" server launches. The script works when I test it locally but fails on Heroku.
What's the syntax error that would cause it to fail on Heroku?

Apparently the sh on Heroku doesn't accept == for string equality, while the sh on my Mac does (on macOS /bin/sh is bash in POSIX mode, whereas on Heroku's Ubuntu stack it is dash), which is why it works in development. Googling revealed that standard POSIX sh uses a single = for equality. In this case, however, I just changed #!/bin/sh to #!/bin/bash, since Heroku supports bash as well.
#!/bin/bash
if [ "$RACK_ENV" == "production" ]; then
  echo Using qgsocksify to wrap Puma
  bin/qgsocksify bundle exec puma -C config/puma.rb
else
  echo Using straight Puma
  bundle exec puma -C config/puma.rb
fi
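For what it's worth, the script could also have stayed on #!/bin/sh by switching to the POSIX single-= comparison, which both dash and bash accept; a minimal sketch of that variant:
#!/bin/sh
# POSIX-compliant test: a single '=' works in dash (Heroku's /bin/sh) and bash
if [ "$RACK_ENV" = "production" ]; then
  echo "Using qgsocksify to wrap Puma"
  bin/qgsocksify bundle exec puma -C config/puma.rb
else
  echo "Using straight Puma"
  bundle exec puma -C config/puma.rb
fi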

Related

Foreman not working with a rake task on Rails

So I have my Rails backend running on port 3001 and my React frontend running on port 3000.
I want to set up a simple rake start task to start both.
To do so, I use the foreman gem, which works perfectly when I run: foreman start -f Procfile.dev.
However, when I run my task, rake start, I get the following error:
Running via Spring preloader in process 36257
15:56:57 web.1 | started with pid 36258
15:56:57 api.1 | started with pid 36259
15:56:57 api.1 | /usr/local/opt/rbenv/versions/2.3.4/lib/ruby/gems/2.3.0/gems/foreman-0.64.0/bin/foreman-runner: line 41: exec: PORT=3001: not found
15:56:57 api.1 | exited with code 127
15:56:57 system | sending SIGTERM to all processes
15:56:57 web.1 | terminated by SIGTERM
Here is my start.rake file:
namespace :start do
  desc 'Start dev server'
  task :development do
    exec 'foreman start -f Procfile.dev'
  end

  desc 'Start production server'
  task :production do
    exec 'NPM_CONFIG_PRODUCTION=true npm run postinstall && foreman start'
  end
end

task :start => 'start:development'
and my Procfile.dev file:
web: cd client && PORT=3000 npm start
api: PORT=3001 && bundle exec rails s
Any idea?
I faced the same issue. I am not sure why, but when Foreman is run from rake it is unable to handle multiple commands on the same Procfile line, e.g.
web: cd client && PORT=3000 npm start
To solve the issue, I changed my Procfile.dev to
web: npm start --prefix client
api: bundle exec rails s -p 3001
and in my package.json, I changed
"scripts": {
  "start": "react-scripts start",
  ...
}
to
"scripts": {
  "start": "PORT=3000 react-scripts start",
  ...
}
This allows you to specify the ports for both the React and Rails servers, and it works with both foreman start -f Procfile.dev and rake start.
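As a quick check that both processes actually came up on their ports (a sketch; the ports come from the configuration above):
# expect an HTTP status code from each once `rake start` is running
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3000   # React dev server
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:3001   # Rails API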
I don't know Foreman, but every morning I start my dev environment with teamocil; an example layout file is sketched below.
Add an alias to your .bash_alias file:
alias s2="cd /home/manuel/chipotle/schnell && tmux new-session -d 'teamocil schnell' \; attach"
so you just need to type "s2" in the console and everything, including the database prompt, is up and ready.
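For reference, the teamocil layout file behind that alias might look like this (a hypothetical sketch, assuming teamocil's standard YAML format; the pane commands are examples, not the actual setup):
# ~/.teamocil/schnell.yml - hypothetical layout for the "s2" alias above
windows:
  - name: schnell
    root: ~/chipotle/schnell
    layout: main-vertical
    panes:
      - bundle exec rails server
      - bundle exec rails console
      - psql schnell_development   # the "database prompt"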
I know I'm a little late, but essentially you shouldn't chain multiple commands on one Procfile line; try to avoid that. You can change the syntax to a single command with flags, as follows:
web: npm start --port 3000 --prefix client
api: bundle exec rails s -p 3001
I hope this helps anyone facing the same issue.
Happy coding!

Sidekiq launches and mysteriously disappears when started by Capistrano

I'm struggling to start Sidekiq remotely with a custom Capistrano v2 task:
namespace :sidekiq do
  desc "Start sidekiq"
  task :start do
    run "cd #{current_path} && bundle exec sidekiq --version"
    run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
  end
end
Output:
* 2018-01-05 11:40:51 executing `sidekiq:start'
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --version"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] Sidekiq 5.0.5
** [out :: 198.58.110.211]
command finished in 1424ms
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] OK
command finished in 1128ms
I can confirm I'm getting the environment correctly (rbenv and bundler), as the first run command shows. But unexpectedly the Sidekiq task starts and then vanishes into oblivion: 1) tmp/pids/sidekiq.pid gets initialized but the process does not exist, and 2) logs/sidekiq.log gets created but contains only the header:
# Logfile created on 2018-01-05 11:34:09 -0300 by logger.rb/56438
If I remove the --daemon switch the process runs perfectly, but of course the Capistrano deploy task never ends, and when I hit CTRL+C Sidekiq closes.
If I just ssh into the remote and execute the command (replacing current_path obviously) it works perfectly.
I've tried almost everything I can imagine: not using a config file, using RAILS_ENV instead of --environment, etc.
As the "&& echo OK" shows, the command is not returning an error.
Capistrano is using "/bin/bash --login -c 'cd /home/deploy/applications/microgestion/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml'" as far as I can tell to run the command.
Ruby v2.3.3, Capistrano 2.15.5, Sidekiq 5.0.5, Rails 4.0.12
Solved it by adding && sleep 1 at the end, as explained here: http://blog.hartshorne.net/2013/03/capistrano-nohup-and-sleep.html. The sleep keeps the SSH session alive for a moment so the daemon can finish detaching; otherwise the session tears down as soon as the command returns and takes the freshly forked Sidekiq process with it.
desc "Start sidekiq"
task :start do
run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && sleep 1"
end
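To confirm the daemon now survives the Capistrano session, a quick check over SSH works (a sketch; the host is the one from the output above):
# list running sidekiq processes with their full command lines
ssh deploy@198.58.110.211 "pgrep -af sidekiq"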
Thanks @user3309314 for pointing me in the right direction.
If you use plain Capistrano to daemonize Sidekiq, any crash will lead to downtime. Don't do this. You need to use a process monitor that will restart the Sidekiq process if it dies. Use systemd, upstart and/or Foreman as explained in the docs.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
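For illustration, a minimal systemd unit in the spirit of that recommendation might look like the sketch below. This is not the wiki's exact unit; the paths, user, and app directory are assumptions carried over from the question.
# /etc/systemd/system/sidekiq.service - minimal sketch, not the official unit
[Unit]
Description=sidekiq
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/home/deploy/applications/xxx/current
# no --daemon here: systemd supervises the foreground process
# and restarts it if it crashes
ExecStart=/bin/bash -lc 'bundle exec sidekiq --environment production --config config/sidekiq.yml'
Restart=on-failure

[Install]
WantedBy=multi-user.target
Enable it with sudo systemctl enable --now sidekiq; systemd then restarts the worker when it dies, which is exactly what the daemonized Capistrano approach lacks.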

Puma restart fails on reboot using EC2 + Rails + Nginx + Capistrano

I have successfully used Capistrano to deploy my Rails app to Ubuntu on EC2. Everything works great on deploy. The Rails app name is deseov12.
My issue is that Puma does not start on boot; it will need to, as production EC2 instances will be instantiated on demand.
Puma starts when deploying via Capistrano; it also starts when running
cap production puma:start
from my local machine.
It will also start on the server after a reboot if I run the following commands:
su - deploy
[enter password]
cd /home/deploy/deseov12/current && ( export RACK_ENV="production" ; ~/.rvm/bin/rvm ruby-2.2.4 do bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon )
I have followed the directions from the Puma jungle tools to make Puma start on boot using upstart, as follows:
Contents of /etc/puma.conf
/home/deploy/deseov12/current
Contents of /etc/init/puma.conf and /home/deploy/puma.conf
# /etc/init/puma.conf - Puma config
#
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See workers.conf for how to manage all Puma instances at once.
#
# Save this config as /etc/init/puma.conf then manage puma with:
#   sudo start puma app=PATH_TO_APP
#   sudo stop puma app=PATH_TO_APP
#   sudo status puma app=PATH_TO_APP
#
# or use the service command:
#   sudo service puma {start,stop,restart,status}

description "Puma Background Worker"

# no "start on", we don't want to automatically start
stop on (stopping puma-manager or runlevel [06])

# change apps to match your deployment user if you want to use this as a less privileged user
setuid deploy
setgid deploy

respawn
respawn limit 3 30

instance ${app}

script
  # this script runs in /bin/sh by default
  # respawn as bash so we can source in rbenv/rvm
  # quoted heredoc to tell /bin/sh not to interpret variables
  # source ENV variables manually as Upstart doesn't, eg:
  #. /etc/environment
  exec /bin/bash <<'EOT'
    # set HOME to the setuid user's home, there doesn't seem to be a better, portable way
    export HOME="$(eval echo ~$(id -un))"

    if [ -d "/usr/local/rbenv/bin" ]; then
      export PATH="/usr/local/rbenv/bin:/usr/local/rbenv/shims:$PATH"
    elif [ -d "$HOME/.rbenv/bin" ]; then
      export PATH="$HOME/.rbenv/bin:$HOME/.rbenv/shims:$PATH"
    elif [ -f /etc/profile.d/rvm.sh ]; then
      source /etc/profile.d/rvm.sh
    elif [ -f /usr/local/rvm/scripts/rvm ]; then
      source /usr/local/rvm/scripts/rvm
    elif [ -f "$HOME/.rvm/scripts/rvm" ]; then
      source "$HOME/.rvm/scripts/rvm"
    elif [ -f /usr/local/share/chruby/chruby.sh ]; then
      source /usr/local/share/chruby/chruby.sh
      if [ -f /usr/local/share/chruby/auto.sh ]; then
        source /usr/local/share/chruby/auto.sh
      fi
      # if you aren't using auto, set your version here
      # chruby 2.0.0
    fi

    cd $app
    logger -t puma "Starting server: $app"
    exec bundle exec puma -C current/config/puma.rb
EOT
end script
Contents of /etc/init/puma-manager.conf and /home/deploy/puma-manager.conf
# /etc/init/puma-manager.conf - manage a set of Pumas
#
# This example config should work with Ubuntu 12.04+. It
# allows you to manage multiple Puma instances with
# Upstart, Ubuntu's native service management tool.
#
# See puma.conf for how to manage a single Puma instance.
#
# Use "stop puma-manager" to stop all Puma instances.
# Use "start puma-manager" to start all instances.
# Use "restart puma-manager" to restart all instances.
# Crazy, right?

description "Manages the set of puma processes"

# This starts upon bootup and stops on shutdown
start on runlevel [2345]
stop on runlevel [06]

# Path to the file listing the apps to manage (one directory per line)
env PUMA_CONF="/etc/puma.conf"

pre-start script
  for i in `cat $PUMA_CONF`; do
    app=`echo $i | cut -d , -f 1`
    logger -t "puma-manager" "Starting $app"
    start puma app=$app
  done
end script
Contents of /home/deploy/deseov12/shared/puma.rb
#!/usr/bin/env puma
directory '/home/deploy/deseov12/current'
rackup "/home/deploy/deseov12/current/config.ru"
environment 'production'
pidfile "/home/deploy/deseov12/shared/tmp/pids/puma.pid"
state_path "/home/deploy/deseov12/shared/tmp/pids/puma.state"
stdout_redirect '/home/deploy/deseov12/shared/log/puma_error.log', '/home/deploy/deseov12/shar$
threads 0,8
bind 'unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock'
workers 0
activate_control_app
prune_bundler
on_restart do
  puts 'Refreshing Gemfile'
  ENV["BUNDLE_GEMFILE"] = "/home/deploy/deseov12/current/Gemfile"
end
However, I have not been able to make Puma start automatically after a server reboot; it just does not start.
I would certainly appreciate some help.
EDIT: I just noticed something that could be a clue:
when running the following command as the deploy user:
sudo start puma app=/home/deploy/deseov12/current
ps aux will show a puma process for a few seconds before it disappears.
deploy 4312 103 7.7 183396 78488 ? Rsl 03:42 0:02 puma 2.15.3 (tcp://0.0.0.0:3000) [20160106224332]
This puma process is different from the working process launched by Capistrano:
deploy 5489 10.0 12.4 858088 126716 ? Sl 03:45 0:02 puma 2.15.3 (unix:///home/deploy/deseov12/shared/tmp/sockets/puma.sock) [20160106224332]
This was finally solved after a lot of research. It turns out the issue was threefold:
1) the proper environment was not being set when running the upstart script
2) when using Capistrano, the actual production puma.rb configuration file lives in the /home/deploy/deseov12/shared directory, not in the /current/ directory
3) the puma server was not being daemonized properly
To solve these issues:
1) This line should be added near the top of /etc/init/puma.conf and /home/deploy/puma.conf:
env RACK_ENV="production"
2) and 3) this line
exec bundle exec puma -C current/config/puma.rb
should be replaced with this one
exec bundle exec puma -C /home/deploy/deseov12/shared/puma.rb --daemon
After doing this, the puma server starts properly on reboot or new instance generation. Hope this helps someone avoid hours of troubleshooting.
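As a quick sanity check after applying the fixes (a sketch using the same paths as above), the job can be started by hand and the process inspected a few seconds later:
# start the upstart job manually, as before
sudo start puma app=/home/deploy/deseov12/current
sleep 5
# the [p]uma pattern excludes the grep itself; the process should now
# persist, bound to the unix socket from shared/puma.rb
ps aux | grep [p]uma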

How to update ENV variable on Passenger standalone restart

I'm using Capistrano to deploy my application. The application runs on Passenger standalone. When I redeploy the application, Passenger still uses the Gemfile from the old release because the BUNDLE_GEMFILE environment variable has not been updated.
Where should I put the updated path to the Gemfile so that Passenger picks it up on restart?
The server startup command lives in monit, and I call the monit scripts from Capistrano tasks, except for restart, where I just touch restart.txt.
namespace :deploy do
  task :stop do
    run("sudo /usr/bin/monit stop my_app_#{rails_env}")
  end

  task :restart do
    run("cd #{current_path} && touch tmp/restart.txt")
  end

  task :start do
    run("sudo /usr/bin/monit start my_app_#{rails_env}")
  end
end
The startup command in monit is:
start program = "/bin/su - app_user -l -c 'cd /home/app_user/current && bundle exec passenger start -d -p 8504 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.8504.pid /home/app_user/current'"
I already tried adding BUNDLE_GEMFILE to the startup command like this:
start program = "/bin/su - app_user -l -c 'cd /home/app_user/current && BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec passenger start -d -p 8504 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.8504.pid /home/app_user/current'"
But it didn't work since the path /home/app_user/current is a symlink to a release path and that release path was picked up instead.
Simple solution: define the Gemfile to be used in the server start command. For example:
BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec passenger start -d -p 9999 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.9999.pid /home/app_user/current
The earlier solution (setting the BUNDLE_GEMFILE env variable in .profile) is not good. When you deploy a new version of your application and there is a new gem in the bundle, the migrations etc. will fail because the old Gemfile defined in the env variable is still used.
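One way to sanity-check which Gemfile a given start command will resolve is to ask Bundler directly under the same environment (a hedged sketch; Bundler.default_gemfile prints the resolved path):
# run as the app user; prints the Gemfile bundler will use
cd /home/app_user/current
BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec ruby -e "puts Bundler.default_gemfile"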

How should I maintain my Puma application server?

I can successfully run a Rails application on my server using Puma as the application server. I start Puma like this:
bundle exec puma -e production -b unix:///var/run/my_app.sock
That is a Unix command that starts Puma in production mode at the specified socket. However, if I ever reboot my VPS, I'll have to go through all of my apps and run that command over and over to start the Puma server for each one.
What's the best way to go about doing this? I'm a bit of an Ubuntu noob, but would the best approach be this: every time I install a new Rails application on my VPS, I
sudo vi /etc/rc.local
and append the command to rc.local, so that rc.local looks like this after a while:
#!/bin/sh -e
#
# rc.local
#
bundle exec puma -e production -b unix:///var/run/app_1.sock
bundle exec puma -e production -b unix:///var/run/app_2.sock
bundle exec puma -e production -b unix:///var/run/app_3.sock
bundle exec puma -e production -b unix:///var/run/app_4.sock
bundle exec puma -e production -b unix:///var/run/app_5.sock
exit 0
Ubuntu uses upstart to manage services. Puma actually provides upstart scripts that make it incredibly easy to do what you want. Have a look at the scripts in their repo:
https://github.com/puma/puma/tree/master/tools/jungle/upstart
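Installation is roughly this (a sketch, assuming the scripts are still at that path in the repo; /var/www/my_app is a hypothetical app directory):
# copy the jungle upstart scripts into /etc/init
git clone https://github.com/puma/puma.git
sudo cp puma/tools/jungle/upstart/puma.conf /etc/init/
sudo cp puma/tools/jungle/upstart/puma-manager.conf /etc/init/
# register each app, one directory per line
echo "/var/www/my_app" | sudo tee -a /etc/puma.conf
# start everything now; puma-manager also starts on boot (runlevel [2345])
sudo start puma-manager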
Ubuntu makes this very difficult. The simplest solution I've seen so far is with OpenBSD. To make sure your apps start on reboot, add this to your /etc/rc.conf.local:
pkg_scripts="myapp myapp2 myapp3"
Each app would need a startup script like this (/etc/rc.d/myapp):
#!/bin/sh
# OPENBSD PUMA STARTUP SCRIPT
# Remember to `chmod +x` this file
# http://www.openbsd.org/cgi-bin/cvsweb/ports/infrastructure/templates/rc.template?rev=1.5

puma="/usr/local/bin/puma"
pumactl="/usr/local/bin/pumactl"
puma_state="-S /home/myapp/tmp/puma.state"
puma_config="-C /home/myapp/config/puma.rb"

. /etc/rc.d/rc.subr

rc_start() {
  ${rcexec} "${pumactl} ${puma_state} start ${puma_config}"
}

rc_reload() {
  ${rcexec} "${pumactl} ${puma_state} restart ${puma_config}"
}

rc_stop() {
  ${rcexec} "${pumactl} ${puma_state} stop"
}

rc_check() {
  ${rcexec} "${pumactl} ${puma_state} status"
}

rc_cmd $1
Then you can run:
% /etc/rc.d/myapp start
% /etc/rc.d/myapp reload
% /etc/rc.d/myapp stop
% /etc/rc.d/myapp status
