sudo password fail trying to setup monit for rails application - ruby-on-rails

I'm following along with Ryan Bates' Railscast on Monit (http://railscasts.com/episodes/375-monit?view=asciicast), the process monitoring tool, for my Rails application. In it, he creates a Capistrano recipe for Monit (see below) that runs with the command cap monit:setup. When I run it, it fails with this message:
command finished in 200ms
failed: "sh -c 'sudo -p '\\''sudo password: '\\'' service monit reload'" on 192.XXX.XXX.XXX
It's obviously a password-related error, but I can't figure out why it's happening, because the earlier commands the script defines (start, stop, restart, syntax, reload) seem to run successfully until it gets to reload.
Noting that the second-to-last line in the script uses root, I thought it might also be a root issue, since I'm logged into my server with a different user name (but one with sudo permissions). I therefore changed root in that line to my user name and ran cap monit:setup again, but I got the same error.
* 2013-07-10 05:52:01 executing `monit:syntax'
* executing "sudo -p 'sudo password: ' service monit syntax"
servers: ["192.XXX.XXX.XXX"]
[192.XXX.XXX.XXX] executing command
** [out :: 192.XXX.XXX.XXX] Control file syntax OK
command finished in 248ms
* 2013-07-10 05:52:02 executing `monit:reload'
* executing "sudo -p 'sudo password: ' service monit reload"
servers: ["192.XXX.XXX.XXX"]
[192.XXX.XXX.XXX] executing command
** [out :: 192.XXX.XXX.XXX] Usage: /etc/init.d/monit {start|stop|restart|force-reload|syntax}
command finished in 200ms
failed: "sh -c 'sudo -p '\\''sudo password: '\\'' service monit reload'" on 192.XXX.XXX.XXX
The monit.rb recipe:
namespace :monit do
  desc "Install Monit"
  task :install do
    run "#{sudo} apt-get -y install monit"
  end
  after "deploy:install", "monit:install"

  desc "Setup all Monit configuration"
  task :setup do
    monit_config "nginx"
    syntax
    reload
  end
  after "deploy:setup", "monit:setup"

  %w[start stop restart syntax reload].each do |command|
    desc "Run Monit #{command} script"
    task command do
      run "#{sudo} service monit #{command}"
    end
  end
end

def monit_config(name, destination = nil)
  destination ||= "/etc/monit/conf.d/#{name}.conf"
  template "monit/#{name}.erb", "/tmp/monit_#{name}"
  run "#{sudo} mv /tmp/monit_#{name} #{destination}"
  run "#{sudo} chown root #{destination}"
  run "#{sudo} chmod 600 #{destination}"
end
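For reference, the "Usage:" line in the output comes from the monit init script itself, so which arguments it actually accepts can be checked directly on the server. A quick sketch (the path is taken from the log output above):

# On the server: an unsupported argument makes the init script print its usage,
# which lists the arguments it accepts -- note there is no plain `reload` here.
sudo /etc/init.d/monit reload
# Usage: /etc/init.d/monit {start|stop|restart|force-reload|syntax}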

Related

Sidekiq launches and mysteriously disappears when started by Capistrano

I'm struggling to start Sidekiq remotely with a custom Capistrano v2 task:
namespace :sidekiq do
  desc "Start sidekiq"
  task :start do
    run "cd #{current_path} && bundle exec sidekiq --version"
    run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
  end
end
Output:
* 2018-01-05 11:40:51 executing `sidekiq:start'
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --version"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] Sidekiq 5.0.5
** [out :: 198.58.110.211]
command finished in 1424ms
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] OK
command finished in 1128ms
I can confirm I'm getting the environment (rbenv & Bundler) correctly, as the first run command shows. But unexpectedly the Sidekiq task starts and then disappears into oblivion: 1) tmp/pids/sidekiq.pid gets initialized but the process doesn't exist, and 2) logs/sidekiq.log gets created but contains only the header:
# Logfile created on 2018-01-05 11:34:09 -0300 by logger.rb/56438
If I remove the --daemon switch the process runs perfectly, but of course the Capistrano deploy task never ends, and when I press CTRL+C Sidekiq closes.
If I just ssh into the remote and execute the command (replacing current_path obviously) it works perfectly.
I've tried almost everything I can imagine: not using a config file, using RAILS_ENV instead of --environment, etc.
As the "&& echo OK" shows, the command is not returning an error.
Capistrano is using "/bin/bash --login -c 'cd /home/deploy/applications/microgestion/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml'" as far as I can tell to run the command.
Ruby v2.3.3, Capistrano 2.15.5, Sidekiq 5.0.5, Rails 4.0.12
Solved it by adding && sleep 1 at the end, as explained here: http://blog.hartshorne.net/2013/03/capistrano-nohup-and-sleep.html (the extra second gives the daemon time to detach before Capistrano tears down the SSH session).
desc "Start sidekiq"
task :start do
run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && sleep 1"
end
Thanks #user3309314 for pointing me in the correct direction.
If you use plain Capistrano to daemonize Sidekiq, any crash will lead to downtime. Don't do this. You need to use a process monitor that will restart the Sidekiq process if it dies. Use systemd, upstart and/or Foreman as explained in the docs.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
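For reference, a minimal sketch of what such a process-manager setup could look like with systemd, reusing the working directory and user from the question above; the unit name, paths, and options are illustrative assumptions rather than anything taken verbatim from the Sidekiq wiki:

# /etc/systemd/system/sidekiq.service (illustrative sketch -- adjust paths, user and options)
[Unit]
Description=sidekiq
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/home/deploy/applications/xxx/current
# run in the foreground (no --daemon); --login so rbenv shims end up on PATH
ExecStart=/bin/bash --login -c 'bundle exec sidekiq --environment production --config config/sidekiq.yml'
Restart=on-failure
RestartSec=1

[Install]
WantedBy=multi-user.target

With something like this in place, sudo systemctl enable --now sidekiq starts the worker and restarts it if it crashes, and the Capistrano restart task only needs to call sudo systemctl restart sidekiq.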

During cap deploy:cold - command not found for /etc/init.d/unicorn

I'm very close to having my first Rails app live on a Linode VPS, but I keep getting a strange error message near the end of cap deploy:cold. I've been following Railscast #335 to deploy my Rails app to a VPS using nginx, Unicorn, PostgreSQL, rbenv and more (unfortunately for me, from a Windows machine). I'm hosting on a Linode Ubuntu 10.04 LTS profile. Near the end of the deploy I get this error message:
* 2013-04-24 13:08:13 executing `deploy:start'
* executing "sudo -p 'sudo password: ' /etc/init.d/unicorn_wheretoski start"
servers: ["xxx.xx.xxx.242"]
[xxx.xx.xxx.242] executing command
** [out :: xxx.xx.xxx.242]
** [out :: xxx.xx.xxx.242] sudo: /etc/init.d/unicorn_wheretoski: command not found
** [out :: xxx.xx.xxx.242]
command finished in 309ms
failed: "env PATH=$HOME/.rbenv/shims:$HOME/.rbenv/bin:$PATH sh -c 'sudo -p '\\''sudo password: '\\'' /etc/init.d/unicorn_wheretoski start'" on xxx.xx.xxx.242
When I go to the server, the file is there:
:~/apps/wheretoski/current$ ls /etc/init.d/unicorn_wheretoski
/etc/init.d/unicorn_wheretoski
From deploy.rb
namespace :deploy do
  %w[start stop restart].each do |command|
    desc "#{command} unicorn server"
    task command, roles: :app, except: {no_release: true} do
      sudo "/etc/init.d/unicorn_#{application} #{command}"
    end
  end
......
And from unicorn_init.sh
#!/bin/sh
set -e

# Feel free to change any of the following variables for your app:
TIMEOUT=${TIMEOUT-60}
APP_ROOT=/home/deployer/apps/wheretoski/current
PID=$APP_ROOT/tmp/pids/unicorn.pid
CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"
AS_USER=deployer
set -u

OLD_PIN="$PID.oldbin"

sig () {
  test -s "$PID" && kill -$1 `cat $PID`
}

oldsig () {
  test -s $OLD_PIN && kill -$1 `cat $OLD_PIN`
}

run () {
  if [ "$(id -un)" = "$AS_USER" ]; then
    eval $1
  else
    su -c "$1" - $AS_USER
  fi
}

case "$1" in
start)
  sig 0 && echo >&2 "Already running" && exit 0
  run "$CMD"
  ;;
I then head over to the VPS and try to execute the various commands and I get an error when executing the following:
deployer#li543-242:~/apps/wheretoski/current$ bundle exec unicorn -D -c $/home/apps/wheretoski/current/config/unicorn.rb -E production
/home/deployer/.rbenv/versions/1.9.3-p125/lib/ruby/gems/1.9.1/gems/bundler-1.3.5/lib/bundler/rubygems_integration.rb:214:in `block in replace_gem': unicorn is not part of the bundle. Add it to Gemfile. (Gem::LoadError)
from /home/deployer/.rbenv/versions/1.9.3-p125/bin/unicorn:22:in `<main>'
Here is what I get for echo $PATH on the VPS:
/home/deployer/.rbenv/shims:/home/deployer/.rbenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deployer/.rbenv/versions/1.9.3-p125/bin
I have tried with the unicorn gem both under the production group and as part of the main gems; both produce this same error message. When I open the Gemfile.lock in the current folder on the server, unicorn only shows up under the dependencies, not under the specs.
Thanks for any help!!
Alright, there were a couple of issues here.
1 - I had different versions of Bundler on my local machine and the server.
2 - Developing on a Windows machine. I had to put the unicorn gem under a production group in my Gemfile, and for whatever reason the Gemfile.lock was not created correctly as a result. I had a buddy with a Mac pull my code, move unicorn to the main section of the Gemfile, and run bundle install. This created a good Gemfile.lock, which is in use now on the server.
Not sure if this will be helpful to others or not; it was quite a weird error.
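For reference, the Gemfile change described above amounts to something like the following (a sketch; version constraints and the rest of the Gemfile are omitted):

# Gemfile -- unicorn in the main section rather than inside a group
gem 'unicorn'

# then regenerate the lockfile and commit both files:
#   bundle install
#   git add Gemfile Gemfile.lock

After bundling, unicorn should appear both under DEPENDENCIES and in the GEM specs section of Gemfile.lock, which is what bundle exec checks against on the server.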

cap monit:setup failed

I'm using Monit to monitor my Rails app. This is my monit.rb file:
namespace :monit do
  desc "Install Monit"
  task :install do
    run "#{sudo} apt-get -y install monit"
  end
  after "deploy:install", "monit:install"

  desc "Setup all Monit configuration"
  task :setup do
    #monit_config "monitrc", "/etc/monit/monitrc"
    #nginx
    #unicorn
    monit_config "nginx"
    syntax
    reload
  end
  after "deploy:setup", "monit:setup"

  task(:nginx, roles: :web) { monit_config "nginx" }
  task(:unicorn, roles: :app) { monit_config "unicorn" }

  %w[start stop restart syntax reload].each do |command|
    desc "Run Monit #{command} script"
    task command do
      run "#{sudo} service monit #{command}"
    end
  end
end

def monit_config(name, destination = nil)
  destination ||= "/etc/monit/conf.d/#{name}.conf"
  template "monit/#{name}.erb", "/tmp/monit_#{name}"
  run "#{sudo} mv /tmp/monit_#{name} #{destination}"
  run "#{sudo} chown root #{destination}"
  run "#{sudo} chmod 600 #{destination}"
end
When I run cap monit:setup, at the end of the log, when Monit goes to reload, I get:
** [out :: 11.111.1.11] Control file syntax OK
command finished in 1073ms
* executing `monit:reload'
* executing "sudo -p 'sudo password: ' service monit reload"
servers: ["11.111.1.11"]
[50.116.2.102] executing command
** [out :: 11.111.1.11] Usage: /etc/init.d/monit {start|stop|restart|force-reload|syntax}
command finished in 987ms
failed: "sh -c 'sudo -p '\\''sudo password: '\\'' service monit reload'" on 11.111.1.11
failed: "sh -c 'sudo -p '\''sudo password: '\'' service monit reload'" on 11.111.1.11
Where is my error?
The monit init script does not have a reload command argument. You can see the hint in the error output here:
Usage: /etc/init.d/monit {start|stop|restart|force-reload|syntax}
So you have to choose one of the above arguments. reload does not exist.
You probably want force-reload or restart instead.
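Applied to the Railscasts-style recipe above, one way to keep the setup task working is to map the reload task onto an argument this init script does accept (a sketch; restart would work equally well):

%w[start stop restart syntax reload].each do |command|
  desc "Run Monit #{command} script"
  task command do
    # this init script has no plain `reload`; fall back to force-reload
    run "#{sudo} service monit #{command == 'reload' ? 'force-reload' : command}"
  end
end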

Rubber gem having issue with graphite server

I am having an issue using Rubber: whenever I try cap rubber:bootstrap while I have a staging instance, it always gets stuck on this error.
* executing "sudo -p 'sudo password: ' bash -l -c 'cd /mnt/localstore-production/releases/20120519213905 && RUBBER_ENV=production RAILS_ENV=production ./script/rubber config --force --file=\"role/graphite_server\"'"
servers: ["production.localstore.com"]
[production.localstore.com] executing command
** [out :: production.localstore.com] Instance not found for host: ip-10-2-118-252
** [out :: production.localstore.com]
command finished in 5849ms
failed: "/bin/bash -l -c 'sudo -p '\\''sudo password: '\\'' bash -l -c '\\''cd /mnt/localstore-production/releases/20120519213905 && RUBBER_ENV=production RAILS_ENV=production ./script/rubber config --force --file=\"role/graphite_server\"'\\'''" on production.localstore.com
Actually, the issue was that I had changed the static IP of the instance from the AWS Console, which is why the host info of the instance had changed.
So I used cap rubber:refresh to refresh everything and then bootstrapped the instance, and that solved my problem.

ferret_server start problem during the deploy

When I do cap deploy, everything works fine except ferret_server: while restarting the server it tries to stop ferret_server in production mode and then start it again, but this fails due to a permission problem. Here is the output from my deploy:
transaction: commit
executing `deploy:restart'
triggering before callbacks for `deploy:restart'
executing `ferret:stop'
executing "cd /home/sj/reelinfo/current; script/ferret_server -e production stop || true"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
** [out :: 67.23.28.171] sh: script/ferret_server: Permission denied
command finished
executing "chown www-data -R /home/sj/reelinfo/current/"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
command finished
executing "touch /home/sj/reelinfo/current/tmp/restart.txt"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
command finished
triggering after callbacks for `deploy:restart'
executing `ferret:start'
executing "cd /home/sj/reelinfo/current; script/ferret_server -e production start"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
** [out :: 67.23.28.171] sh: script/ferret_server: Permission denied
command finished
failed: "sh -c \"cd /home/sj/reelinfo/current; script/ferret_server -e production start\"" on 67.23.28.171
I've had this problem as well; the issue is that script/ferret_server did not have executable permissions.
I added the following deploy task to handle the permissions:
before "deploy:restart", "correct_ferret_server_permissions"
task :correct_ferret_server_permissions do
run "chmod a+x #{current_path}/script/ferret_server
end
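An alternative, assuming script/ferret_server is tracked in the repository, is to commit the executable bit once so every checkout (and therefore every deploy) gets it without an extra task (a sketch):

# run locally, then commit and deploy
git update-index --chmod=+x script/ferret_server
git commit -m "Make script/ferret_server executable"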
