cap monit:setup failed

I'm using monit for monitoring my rails app. This is my monit.rb file:
namespace :monit do
  desc "Install Monit"
  task :install do
    run "#{sudo} apt-get -y install monit"
  end
  after "deploy:install", "monit:install"

  desc "Setup all Monit configuration"
  task :setup do
    # monit_config "monitrc", "/etc/monit/monitrc"
    # nginx
    # unicorn
    monit_config "nginx"
    syntax
    reload
  end
  after "deploy:setup", "monit:setup"

  task(:nginx, roles: :web) { monit_config "nginx" }
  task(:unicorn, roles: :app) { monit_config "unicorn" }

  %w[start stop restart syntax reload].each do |command|
    desc "Run Monit #{command} script"
    task command do
      run "#{sudo} service monit #{command}"
    end
  end
end

def monit_config(name, destination = nil)
  destination ||= "/etc/monit/conf.d/#{name}.conf"
  template "monit/#{name}.erb", "/tmp/monit_#{name}"
  run "#{sudo} mv /tmp/monit_#{name} #{destination}"
  run "#{sudo} chown root #{destination}"
  run "#{sudo} chmod 600 #{destination}"
end
When I run cap monit:setup, everything works until monit gets to reload at the end of the log, where I get:
** [out :: 11.111.1.11] Control file syntax OK
command finished in 1073ms
* executing `monit:reload'
* executing "sudo -p 'sudo password: ' service monit reload"
servers: ["11.111.1.11"]
[11.111.1.11] executing command
** [out :: 11.111.1.11] Usage: /etc/init.d/monit {start|stop|restart|force-reload|syntax}
command finished in 987ms
failed: "sh -c 'sudo -p '\\''sudo password: '\\'' service monit reload'" on 11.111.1.11
failed: "sh -c 'sudo -p '\''sudo password: '\'' service monit reload'" on 11.111.1.11
Where is my error?

The monit init script does not accept a reload argument. You can see the hint in the error output:
Usage: /etc/init.d/monit {start|stop|restart|force-reload|syntax}
So you have to choose one of the above arguments. reload does not exist.
You probably want force-reload or restart instead.
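One way to patch the recipe, as a sketch against the task loop above: generate tasks only for the arguments the init script actually lists, and give monit:reload a body that calls force-reload so the setup task keeps working unchanged.

namespace :monit do
  # Only generate tasks for arguments the init script supports.
  %w[start stop restart syntax].each do |command|
    desc "Run Monit #{command} script"
    task command do
      run "#{sudo} service monit #{command}"
    end
  end

  desc "Reload Monit (this init script only understands force-reload)"
  task :reload do
    run "#{sudo} service monit force-reload"
  end
end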

Related

Sidekiq launches and mysteriously disappears when started by Capistrano

I'm struggling with starting sidekiq remotely with a custom v2 capistrano task:
namespace :sidekiq do
  desc "Start sidekiq"
  task :start do
    run "cd #{current_path} && bundle exec sidekiq --version"
    run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
  end
end
Output:
* 2018-01-05 11:40:51 executing `sidekiq:start'
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --version"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] Sidekiq 5.0.5
** [out :: 198.58.110.211]
command finished in 1424ms
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] OK
command finished in 1128ms
I can confirm I'm getting the environment (rbenv & bundler) correctly, as the first run command shows. But unexpectedly the sidekiq process starts and then vanishes into oblivion: 1) tmp/pids/sidekiq.pid gets initialized but the process does not exist, and 2) logs/sidekiq.log gets created but contains only the header:
# Logfile created on 2018-01-05 11:34:09 -0300 by logger.rb/56438
If I remove the --daemon switch the process runs perfectly, but then of course the Capistrano deploy task never ends, and when I hit CTRL+C sidekiq closes.
If I just ssh into the remote and execute the command (replacing current_path obviously) it works perfectly.
I've tried almost everything I can imagine: not using a config file, using RAILS_ENV instead of --environment, etc.
As the "&& echo OK" shows, the command is not returning an error.
As far as I can tell, Capistrano is using "/bin/bash --login -c 'cd /home/deploy/applications/microgestion/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml'" to run the command.
Ruby v2.3.3, Capistrano 2.15.5, Sidekiq 5.0.5, Rails 4.0.12
Solved it by adding && sleep 1 at the end, as explained here: http://blog.hartshorne.net/2013/03/capistrano-nohup-and-sleep.html. (Per that post, the remote shell exits before the daemonized process has finished detaching, taking it down with the session; the sleep gives it time to fork away cleanly.)
desc "Start sidekiq"
task :start do
run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && sleep 1"
end
Thanks @user3309314 for pointing me in the correct direction.
If you use plain Capistrano to daemonize Sidekiq, any crash will lead to downtime. Don't do this. You need to use a process monitor that will restart the Sidekiq process if it dies. Use systemd, upstart and/or Foreman as explained in the docs.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
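For reference, a minimal sketch of a systemd unit for this app; the unit name, user, and paths are assumptions taken from the deploy output above, and the --daemon flag is dropped because systemd supervises the foreground process itself:

# /etc/systemd/system/sidekiq.service (sketch; adjust user and paths)
[Unit]
Description=sidekiq
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/home/deploy/applications/xxx/current
# No --daemon: systemd wants the process in the foreground.
ExecStart=/bin/bash -lc 'bundle exec sidekiq --environment production --config config/sidekiq.yml'
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable and start it with sudo systemctl enable sidekiq and sudo systemctl start sidekiq; systemd will then restart Sidekiq if it ever dies.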

sudo password fail trying to setup monit for rails application

I'm following along with Ryan Bates' Railscast on Monit (http://railscasts.com/episodes/375-monit?view=asciicast), the monitoring system for Rails applications. In it, he creates a Capistrano recipe for Monit (see below) which runs with the command cap monit:setup. When I run it, it fails with this message:
command finished in 200ms
failed: "sh -c 'sudo -p '\\''sudo password: '\\'' service monit reload'" on 192.XXX.XXX.XXX
It's obviously a password-related error, but I can't figure out why it's happening, because the earlier commands the script runs (start, stop, restart, syntax) seem to run successfully until it gets to reload.
Noting that the second-to-last line in the script uses root, I thought it might also be a root issue, since I'm logged into my server with a different user name (but with sudo permissions). Therefore, I changed root in that line to my user name and ran cap monit:setup again, but I got the same error.
* 2013-07-10 05:52:01 executing `monit:syntax'
* executing "sudo -p 'sudo password: ' service monit syntax"
servers: ["192.XXX.XXX.XXX"]
[192.XXX.XXX.XXX] executing command
** [out :: 192.XXX.XXX.XXX] Control file syntax OK
command finished in 248ms
* 2013-07-10 05:52:02 executing `monit:reload'
* executing "sudo -p 'sudo password: ' service monit reload"
servers: ["192.XXX.XXX.XXX"]
[192.XXX.XXX.XXX] executing command
** [out :: 192.XXX.XXX.XXX] Usage: /etc/init.d/monit {start|stop|restart|force-reload|syntax}
command finished in 200ms
failed: "sh -c 'sudo -p '\\''sudo password: '\\'' service monit reload'" on 192.XXX.XXX.XXX
The monit.rb recipe:
namespace :monit do
  desc "Install Monit"
  task :install do
    run "#{sudo} apt-get -y install monit"
  end
  after "deploy:install", "monit:install"

  desc "Setup all Monit configuration"
  task :setup do
    monit_config "nginx"
    syntax
    reload
  end
  after "deploy:setup", "monit:setup"

  %w[start stop restart syntax reload].each do |command|
    desc "Run Monit #{command} script"
    task command do
      run "#{sudo} service monit #{command}"
    end
  end
end

def monit_config(name, destination = nil)
  destination ||= "/etc/monit/conf.d/#{name}.conf"
  template "monit/#{name}.erb", "/tmp/monit_#{name}"
  run "#{sudo} mv /tmp/monit_#{name} #{destination}"
  run "#{sudo} chown root #{destination}"
  run "#{sudo} chmod 600 #{destination}"
end
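Note the usage line in the log output: the init script accepts start|stop|restart|force-reload|syntax, so despite the sudo password prompt in the failed command, this is the same failure as in the first question above, not a password problem. A minimal override, as a sketch (a later task definition replaces the one generated by the loop):

namespace :monit do
  desc "Run Monit force-reload script"
  task :reload do
    run "#{sudo} service monit force-reload"
  end
end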

During cap deploy:cold - command not found for /etc/init.d/unicorn

I'm very close to having my first Rails app live on a Linode VPS, but I keep getting a strange error message near the end of cap deploy:cold. I've been following Railscast 335 to deploy my Rails app to a VPS using nginx, Unicorn, PostgreSQL, rbenv and more (unfortunately for me, from a Windows machine). I'm hosting on a Linode Ubuntu 10.04 LTS profile. Near the end of the deploy I get this error message:
* 2013-04-24 13:08:13 executing `deploy:start'
* executing "sudo -p 'sudo password: ' /etc/init.d/unicorn_wheretoski start"
servers: ["xxx.xx.xxx.242"]
[xxx.xx.xxx.242] executing command
** [out :: xxx.xx.xxx.242]
** [out :: xxx.xx.xxx.242] sudo: /etc/init.d/unicorn_wheretoski: command not found
** [out :: xxx.xx.xxx.242]
command finished in 309ms
failed: "env PATH=$HOME/.rbenv/shims:$HOME/.rbenv/bin:$PATH sh -c 'sudo -p '\\''sudo password: '\\'' /etc/init.d/unicorn_wheretoski start'" on xxx.xx.xxx.242
When I go to the server, it locates the file
:~/apps/wheretoski/current$ ls /etc/init.d/unicorn_wheretoski
/etc/init.d/unicorn_wheretoski
From deploy.rb
namespace :deploy do
  %w[start stop restart].each do |command|
    desc "#{command} unicorn server"
    task command, roles: :app, except: {no_release: true} do
      sudo "/etc/init.d/unicorn_#{application} #{command}"
    end
  end
......
And from unicorn_init.sh
#!/bin/sh
set -e

# Feel free to change any of the following variables for your app:
TIMEOUT=${TIMEOUT-60}
APP_ROOT=/home/deployer/apps/wheretoski/current
PID=$APP_ROOT/tmp/pids/unicorn.pid
CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"
AS_USER=deployer
set -u

OLD_PIN="$PID.oldbin"

sig () {
  test -s "$PID" && kill -$1 `cat $PID`
}

oldsig () {
  test -s $OLD_PIN && kill -$1 `cat $OLD_PIN`
}

run () {
  if [ "$(id -un)" = "$AS_USER" ]; then
    eval $1
  else
    su -c "$1" - $AS_USER
  fi
}

case "$1" in
start)
  sig 0 && echo >&2 "Already running" && exit 0
  run "$CMD"
  ;;
I then head over to the VPS and try to execute the various commands and I get an error when executing the following:
deployer#li543-242:~/apps/wheretoski/current$ bundle exec unicorn -D -c $/home/apps/wheretoski/current/config/unicorn.rb -E production
/home/deployer/.rbenv/versions/1.9.3-p125/lib/ruby/gems/1.9.1/gems/bundler-1.3.5/lib/bundler/rubygems_integration.rb:214:in `block in replace_gem': unicorn is not part of the bundle. Add it to Gemfile. (Gem::LoadError)
from /home/deployer/.rbenv/versions/1.9.3-p125/bin/unicorn:22:in `<main>'
Here is what I get for echo $PATH on the VPS:
/home/deployer/.rbenv/shims:/home/deployer/.rbenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deployer/.rbenv/versions/1.9.3-p125/bin
I have tried the unicorn gem both under the production group and as part of the main gems; both produce this same error message. When I open the Gemfile.lock in the current folder on the server, unicorn only shows up under the dependencies, not under the specs.
Thanks for any help!!
Alright, there were a couple of issues here.
1 - I had different versions of bundler on my local machine and the server.
2 - Developing on a Windows machine, I had to put the unicorn gem under a production group in my Gemfile, and for whatever reason the Gemfile.lock was not generated correctly as a result. I had a buddy with a Mac pull my code, move unicorn to the main section of the Gemfile, and bundle install it. This created a good Gemfile.lock, which is in use now on the server.
Not sure if this will be helpful to others or not; quite the weird error.
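A sketch of the resulting Gemfile arrangement, assuming nothing else pins unicorn to a platform or group; with the gem in the main section, bundling on a non-Windows machine records it under specs in Gemfile.lock:

# Gemfile
gem 'unicorn'   # main section, not inside `group :production do ... end`

After bundle install, commit the regenerated Gemfile.lock so the server deploys with it.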

Capistrano Attempting To Create /public Directory

I'm currently in the process of attempting my first Rails deployment using Capistrano, and I've run into a roadblock I haven't been able to overcome. During cap deploy I'm getting the error mkdir: cannot create directory `/public'.
Pertinent Details:
Rails Version: 3.2.6
Capistrano Version: 2.13.5
Running on Dreamhost
I'm precompiling my assets (I suspect this is part of the problem), so I've got load 'deploy/assets' in my Capfile.
I've followed the directions here: http://wiki.dreamhost.com/Capistrano for the most part, as well as scouring the web for a number of other articles on Capistrano.
deploy.rb
require "bundler/capistrano"
ssh_options[:forward_agent] = true
ssh_options[File.join(ENV["HOME"], ".ssh", "id_rsa-dreamhost")]
set :application, "<app>"
set :repository, "git#bitbucket.org:<gituser>/#{application}.git"
set :server_name, "<host>"
set :scm, :git # You can set :scm explicitly or Capistrano will make an intelligent guess based on known version control directory names
# Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none`
set :checkout, "export"
set :deploy_via, :remote_cache
set :branch, "master"
set :base_path, "/home/<user>/<domain>"
set :deploy_to, "#{base_path}/#{application}"
set :keep_releases, 3
set :user, '<user>'
set :runner, '<user>'
set :use_sudo, false
default_run_options[:pty] = true
set :shared_path, "/home/<user>/<shared_folder>"
set :release_path, "#{base_path}/#{application}"
role :web, "<host>" # Your HTTP server, Apache/etc
role :app, "<host>" # This may be the same as your `Web` server
role :db, "<host>", :primary => true # This is where Rails migrations will run
# if you want to clean up old releases on each deploy uncomment this:
# after "deploy:restart", "deploy:cleanup"
# if you're still using the script/reaper helper you will need
# these http://github.com/rails/irs_process_scripts
# If you are using Passenger mod_rails uncomment this:
namespace :deploy do
task :start do ; end
task :stop do ; end
desc "Restart the app by touching the restart.txt file."
task :restart, :roles => :app, :except => { :no_release => true } do
run "touch #{File.join(current_path,'tmp','restart.txt')}"
end
desc "Update the environment-specific files from the shared folder."
task :symlink_shared, :roles => [:app] do
run "ln -s #{shared_path}/app_config.yml #{release_path}/config/"
run "rm #{release_path}/config/database.yml"
run "ln -s #{shared_path}/database.yml #{release_path}/config/"
run "rm #{release_path}/public/.htaccess"
run "ln -s #{shared_path}/.htaccess #{release_path}/public/"
end
end
before "deploy:restart", "deploy:symlink_shared"
after "deploy:update_code", "deploy:migrate"
Output of cap deploy:setup
* 2012-12-23 16:49:27 executing `deploy:setup'
* executing "mkdir -p /home/<user>/<domain>/<app> /home/<user>/<domain>/<app>/releases /home/<user>/<shared_folder> /home/<user>/<shared_folder>/system /home/<user>/<shared_folder>/log /home/<user>/<shared_folder>/pids"
servers: ["<host>"]
[<host>] executing command
command finished in 263ms
* executing "chmod g+w /home/<user>/<domain>/<app> /home/<user>/<domain>/<app>/releases /home/<user>/<shared_folder> /home/<user>/<shared_folder>/system /home/<user>/<shared_folder>/log /home/<user>/<shared_folder>/pids"
servers: ["<host>"]
[<host>] executing command
command finished in 261ms
Output of cap deploy:check
* 2012-12-23 16:49:45 executing `deploy:check'
* executing "test -d /home/<user>/<domain>/<app>/releases"
servers: ["<host>"]
[<host>] executing command
command finished in 265ms
* executing "test -w /home/<user>/<domain>/<app>"
servers: ["<host>"]
[<host>] executing command
command finished in 256ms
* executing "test -w /home/<user>/<domain>/<app>/releases"
servers: ["<host>"]
[<host>] executing command
command finished in 256ms
* executing "which git"
servers: ["<host>"]
[<host>] executing command
command finished in 259ms
* executing "test -w /home/<user>/<shared_folder>"
servers: ["<host>"]
[<host>] executing command
command finished in 263ms
You appear to have all necessary dependencies installed
Output of cap deploy
* 2012-12-23 16:51:41 executing `deploy'
* 2012-12-23 16:51:41 executing `deploy:update'
** transaction: start
* 2012-12-23 16:51:41 executing `deploy:update_code'
updating the cached checkout on all servers
executing locally: "git ls-remote git@bitbucket.org:<gituser>/<app>.git master"
command finished in 1102ms
* executing "if [ -d /home/<user>/<shared_folder>/cached-copy ]; then cd /home/<user>/<shared_folder>/cached-copy && git fetch -q origin && git fetch --tags -q origin && git reset -q --hard 42dfb6a3f529e2293192f5e22c3214b7da55c9b4 && git clean -q -d -x -f; else git clone -q git#bitbucket.org:<gituser>/<app>.git /home/<user>/<shared_folder>/cached-copy && cd /home/<user>/<shared_folder>/cached-copy && git checkout -q -b deploy 42dfb6a3f529e2293192f5e22c3214b7da55c9b4; fi"
servers: ["<host>"]
[<host>] executing command
command finished in 3233ms
copying the cached version to /home/<user>/<domain>/<app>
* executing "cp -RPp /home/<user>/<shared_folder>/cached-copy /home/<user>/<domain>/<app> && (echo 42dfb6a3f529e2293192f5e22c3214b7da55c9b4 > /home/<user>/<domain>/<app>/REVISION)"
servers: ["<host>"]
[<host>] executing command
command finished in 338ms
* 2012-12-23 16:51:47 executing `deploy:finalize_update'
triggering before callbacks for `deploy:finalize_update'
* 2012-12-23 16:51:47 executing `deploy:assets:symlink'
* executing "ls -x /home/<user>/<domain>/<app>/releases"
servers: ["<host>"]
[<host>] executing command
command finished in 252ms
* executing "rm -rf /public/assets &&\\\n mkdir -p /public &&\\\n mkdir -p /home/<user>/<shared_folder>/assets &&\\\n ln -s /home/<user>/<shared_folder>/assets /public/assets"
servers: ["<host>"]
[<host>] executing command
** [out :: <host>] mkdir: cannot create directory `/public'
** [out :: <host>] : Permission denied
command finished in 268ms
*** [deploy:update_code] rolling back
* executing "rm -rf /home/<user>/<domain>/<app>; true"
servers: ["<host>"]
[<host>] executing command
command finished in 265ms
failed: "sh -c 'rm -rf /public/assets &&\\\n mkdir -p /public &&\\\n mkdir -p /home/<user>/<shared_folder>/assets &&\\\n ln -s /home/<user>/<shared_folder>/assets /public/assets'" on <host>
You can see the Permission denied error near the bottom of the output, as well as the failed shell command at the end. I can't figure out why it is attempting to do anything with /public; I would expect a path relative to the public directory in my web folder, rather than a reference to what appears to be a public folder at the filesystem root. I feel like I'm missing a variable reference somewhere that should get prepended to the mkdir command, but none of the documentation I've read indicates this.
I'd appreciate any help I can get, and thanks in advance.
Please don't set release_path and shared_path by yourself. Let Capistrano figure it out automatically.
But you should set deploy_to correctly:
set :deploy_to, "/home/<user>/<domain>/<application>"
Also be careful with variables: Capistrano's set syntax doesn't mean a variable is available for substitution wherever it's later used. Your problem occurs because "#{base_path}/" evaluates with base_path as nil, producing "/".
Find out more on the configuration wiki of Capistrano.
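A sketch of the corrected configuration, keeping the placeholder names from the question:

set :application, "<app>"
set :deploy_to, "/home/<user>/<domain>/#{application}"

# Delete these two lines; Capistrano derives both paths from deploy_to:
# set :shared_path, "/home/<user>/<shared_folder>"
# set :release_path, "#{base_path}/#{application}"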

ferret_server start problem during deploy

When I do cap deploy, everything works fine except ferret_server: while restarting the server it tries to stop ferret_server in production mode and then start it again, but this fails due to a permission problem. Here is the output from my deploy:
** transaction: commit
executing `deploy:restart'
triggering before callbacks for `deploy:restart'
executing `ferret:stop'
executing "cd /home/sj/reelinfo/current; script/ferret_server -e production stop || true"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
** [out :: 67.23.28.171] sh: script/ferret_server: Permission denied
command finished
executing "chown www-data -R /home/sj/reelinfo/current/"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
command finished
executing "touch /home/sj/reelinfo/current/tmp/restart.txt"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
command finished
triggering after callbacks for `deploy:restart'
executing `ferret:start'
executing "cd /home/sj/reelinfo/current; script/ferret_server -e production start"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
** [out :: 67.23.28.171] sh: script/ferret_server: Permission denied
command finished
failed: "sh -c \"cd /home/sj/reelinfo/current; script/ferret_server -e production start\"" on 67.23.28.171
I've had this problem as well; the issue is that script/ferret_server did not have executable permissions.
I added the following deploy task to handle the permissions:
before "deploy:restart", "correct_ferret_server_permissions"
task :correct_ferret_server_permissions do
run "chmod a+x #{current_path}/script/ferret_server
end
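Alternatively, the execute bit can be committed to the repository so every deployed release already has it; a sketch, run once on a local clone:

git update-index --chmod=+x script/ferret_server
git commit -m "Make script/ferret_server executable"
git push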
