Sidekiq launches and mysteriously disappears when started by Capistrano

I'm struggling to start Sidekiq remotely with a custom Capistrano v2 task:
namespace :sidekiq do
  desc "Start sidekiq"
  task :start do
    run "cd #{current_path} && bundle exec sidekiq --version"
    run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
  end
end
Output:
* 2018-01-05 11:40:51 executing `sidekiq:start'
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --version"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] Sidekiq 5.0.5
** [out :: 198.58.110.211]
command finished in 1424ms
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] OK
command finished in 1128ms
I can confirm the environment (rbenv and bundler) is picked up correctly, as the first run command shows. But unexpectedly the Sidekiq process starts and then vanishes into oblivion: 1) tmp/pids/sidekiq.pid gets created but no process with that PID exists, and 2) logs/sidekiq.log gets created but contains only the header:
# Logfile created on 2018-01-05 11:34:09 -0300 by logger.rb/56438
If I remove the --daemon switch the process runs perfectly, but of course the Capistrano deploy task never ends, and when I hit CTRL+C Sidekiq shuts down.
If I just SSH into the remote host and execute the command (replacing current_path, obviously) it works perfectly.
I've tried almost everything I can imagine: not using a config file, using RAILS_ENV instead of --environment, etc.
As the "&& echo OK" shows, the command is not returning an error.
As far as I can tell, Capistrano is using "/bin/bash --login -c 'cd /home/deploy/applications/microgestion/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml'" to run the command.
Ruby v2.3.3, Capistrano 2.15.5, Sidekiq 5.0.5, Rails 4.0.12

Solved it by adding && sleep 1 at the end, as explained here: http://blog.hartshorne.net/2013/03/capistrano-nohup-and-sleep.html. In short (per the linked post), Capistrano closes the SSH session as soon as the command returns, which can kill the daemon before it has finished detaching; the sleep keeps the session open long enough for the daemon to detach cleanly.
desc "Start sidekiq"
task :start do
run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && sleep 1"
end
Thanks @user3309314 for pointing me in the right direction.

If you use plain Capistrano to daemonize Sidekiq, any crash will lead to downtime. Don't do this. You need to use a process monitor that will restart the Sidekiq process if it dies. Use systemd, upstart and/or Foreman as explained in the docs.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
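For reference, a minimal systemd unit for Sidekiq might look like the sketch below. The unit name, WorkingDirectory, and User are assumptions that must match your deploy layout, and under systemd Sidekiq runs in the foreground, so the --daemon flag goes away:

# /etc/systemd/system/sidekiq.service -- a sketch, adjust paths and user
[Unit]
Description=sidekiq
After=syslog.target network.target

[Service]
Type=simple
WorkingDirectory=/home/deploy/applications/xxx/current
# Login shell so rbenv shims end up on PATH (assumes rbenv loads in the shell profile)
ExecStart=/bin/bash -lc 'bundle exec sidekiq --environment production --config config/sidekiq.yml'
User=deploy
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then systemctl enable sidekiq and systemctl start/stop/restart sidekiq replace the hand-rolled Capistrano tasks, and systemd restarts the process if it dies.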

Related

Sidekiq daemon process dies after some seconds

namespace :sidekiq do
  task :quiet do
    on roles(:app) do
      execute "pgrep -f 'sidekiq'| xargs kill -9 -USR1"
    end
  end
  task :restart do
    on roles(:app) do
      execute "cd #{current_path} bundle exec sidekiq -C config/sidekiq.yml -d"
    end
  end
end
after 'deploy:starting', 'sidekiq:quiet'
after 'deploy:reverted', 'sidekiq:restart'
after 'deploy:published', 'sidekiq:restart'
This is the script I am using in deploy.rb. It kills the process properly and it also creates a daemon process, but within a few seconds the daemon dies. I am using Capistrano for deployment.
I think the daemon is a child process and the Capistrano deployment is the parent process, so after the deployment completes it kills the child (daemon) process.
Please help me, I have been stuck on this for the last week.
Try the capistrano-sidekiq gem; it works great.
Also, you have two commands in one execute, but the && between them is missing:
execute "cd #{current_path} && bundle exec sidekiq -C config/sidekiq.yml -d"

How to update ENV variable on Passenger standalone restart

I'm using Capistrano to deploy my application, which runs on Passenger standalone. When I redeploy the application, Passenger still uses the Gemfile from the old release because the BUNDLE_GEMFILE environment variable has not been updated.
Where should I put the updated path to the Gemfile so that Passenger picks it up on restart?
The server startup command lives in monit, and I call the monit scripts from Capistrano tasks, except for restart, where I just touch restart.txt.
namespace :deploy do
  task :stop do
    run("sudo /usr/bin/monit stop my_app_#{rails_env}")
  end
  task :restart do
    run("cd #{current_path} && touch tmp/restart.txt")
  end
  task :start do
    run("sudo /usr/bin/monit start my_app_#{rails_env}")
  end
end
The startup command in monit is:
start program = "/bin/su - app_user -l -c 'cd /home/app_user/current && bundle exec passenger start -d -p 8504 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.8504.pid /home/app_user/current'"
I already tried to add the BUNDLE_GEMFILE into the startup command like this:
start program = "/bin/su - app_user -l -c 'cd /home/app_user/current && BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec passenger start -d -p 8504 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.8504.pid /home/app_user/current'"
But it didn't work since the path /home/app_user/current is a symlink to a release path and that release path was picked up instead.
Simple solution: define the Gemfile to be used in the server start command. For example:
BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec passenger start -d -p 9999 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.9999.pid /home/app_user/current
The earlier approach (setting the BUNDLE_GEMFILE env variable in .profile) is not good: when you deploy a new version of your application and there is a new gem in the bundle, the migrations etc. will fail, because Bundler will still use the Gemfile defined in the env variable.

During cap deploy:cold - command not found for /etc/init.d/unicorn

I'm very close to having my first Rails app live on a Linode VPS, but I keep getting a strange error message near the end of cap deploy:cold. I've been following RailsCasts episode 335 to deploy my Rails app to a VPS using nginx, Unicorn, PostgreSQL, rbenv and more (unfortunately for me, from a Windows machine). I'm hosting on a Linode Ubuntu 10.04 LTS profile. Near the end of the deploy I get this error message:
* 2013-04-24 13:08:13 executing `deploy:start'
* executing "sudo -p 'sudo password: ' /etc/init.d/unicorn_wheretoski start"
servers: ["xxx.xx.xxx.242"]
[xxx.xx.xxx.242] executing command
** [out :: xxx.xx.xxx.242]
** [out :: xxx.xx.xxx.242] sudo: /etc/init.d/unicorn_wheretoski: command not found
** [out :: xxx.xx.xxx.242]
command finished in 309ms
failed: "env PATH=$HOME/.rbenv/shims:$HOME/.rbenv/bin:$PATH sh -c 'sudo -p '\\''sudo password: '\\'' /etc/init.d/unicorn_wheretoski start'" on xxx.xx.xxx.242
When I go to the server, it locates the file
:~/apps/wheretoski/current$ ls /etc/init.d/unicorn_wheretoski
/etc/init.d/unicorn_wheretoski
From deploy.rb
namespace :deploy do
  %w[start stop restart].each do |command|
    desc "#{command} unicorn server"
    task command, roles: :app, except: {no_release: true} do
      sudo "/etc/init.d/unicorn_#{application} #{command}"
    end
  end
......
And from unicorn_init.sh
#!/bin/sh
set -e

# Feel free to change any of the following variables for your app:
TIMEOUT=${TIMEOUT-60}
APP_ROOT=/home/deployer/apps/wheretoski/current
PID=$APP_ROOT/tmp/pids/unicorn.pid
CMD="cd $APP_ROOT; bundle exec unicorn -D -c $APP_ROOT/config/unicorn.rb -E production"
AS_USER=deployer
set -u

OLD_PIN="$PID.oldbin"

sig () {
  test -s "$PID" && kill -$1 `cat $PID`
}

oldsig () {
  test -s $OLD_PIN && kill -$1 `cat $OLD_PIN`
}

run () {
  if [ "$(id -un)" = "$AS_USER" ]; then
    eval $1
  else
    su -c "$1" - $AS_USER
  fi
}

case "$1" in
start)
  sig 0 && echo >&2 "Already running" && exit 0
  run "$CMD"
  ;;
I then head over to the VPS, try to execute the various commands, and get an error when executing the following:
deployer#li543-242:~/apps/wheretoski/current$ bundle exec unicorn -D -c $/home/apps/wheretoski/current/config/unicorn.rb -E production
/home/deployer/.rbenv/versions/1.9.3-p125/lib/ruby/gems/1.9.1/gems/bundler-1.3.5/lib/bundler/rubygems_integration.rb:214:in `block in replace_gem': unicorn is not part of the bundle. Add it to Gemfile. (Gem::LoadError)
from /home/deployer/.rbenv/versions/1.9.3-p125/bin/unicorn:22:in `<main>'
Here is what I get for echo $PATH on the VPS:
/home/deployer/.rbenv/shims:/home/deployer/.rbenv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/home/deployer/.rbenv/versions/1.9.3-p125/bin
I have tried the unicorn gem both under the production group and as part of the main gems; both produce this same error message. When I open the Gemfile.lock in the current folder on the server, unicorn only shows up under DEPENDENCIES, not under specs.
Thanks for any help!!
Alright, there were a couple of issues here.
1 - I had different versions of bundler on my local machine and the server.
2 - I was developing on a Windows machine. I had to put the unicorn gem under a production group in my Gemfile, and for whatever reason Gemfile.lock was not generated correctly as a result. I had a buddy with a Mac pull my code, move unicorn to the main section of the Gemfile, and run bundle install. That produced a good Gemfile.lock, which is now in use on the server.
Not sure if this will be helpful to others or not; it was quite a weird error.
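For reference, the fix boils down to declaring unicorn in the main section of the Gemfile so that it lands in the specs section of Gemfile.lock:

# Gemfile -- unicorn outside any platform- or environment-restricted group
gem 'unicorn'

After a successful bundle install, unicorn should appear both under DEPENDENCIES and under GEM specs in Gemfile.lock; a lock file that lists it only under DEPENDENCIES (as in the question) is exactly what produces the "unicorn is not part of the bundle" error.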

Rails hourly cron jobs using Whenever fails to run hourly

On Red Hat, using Whenever. My cron jobs fail to run hourly, and I need help figuring out why.
config/schedule.rb
every 1.hours do
  rake "deathburrito:all", :environment => "development"
  rake "bamboo:all", :environment => "development"
  rake "jira:grab_data", :environment => "development"
end
crontab -l
0 * * * * /bin/bash -l -c 'cd /var/www/qadashboard && RAILS_ENV=production bundle exec rake deathburrito:all --silent'
0 * * * * /bin/bash -l -c 'cd /var/www/qadashboard && RAILS_ENV=development bundle exec rake bamboo:all --silent'
0 * * * * /bin/bash -l -c 'cd /var/www/qadashboard && RAILS_ENV=development bundle exec rake jira:grab_data --silent'
Can anyone help me? I am not even sure what else I should be checking.
Add
MAILTO=your@email.com
to your crontab, then enjoy the error reports from cron.
If that doesn't solve the issue, post the error report here.
bundle will have to be in that subshell's path. Try specifying a full-blown /usr/bin/bundle (or whatever it is).
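If PATH is the culprit, Whenever can also export variables into the generated crontab from schedule.rb; a minimal sketch (the value is an assumption, point it wherever bundle actually lives on your server):

# config/schedule.rb
env :PATH, ENV['PATH']   # embed the deploy user's PATH into the crontab

Whenever writes each env entry as a VARIABLE=value line at the top of the crontab, so cron's subshells can find bundle.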
Append the log setting to config/schedule.rb:
set :output, "/var/log/cron"
and create the file /var/log/cron, giving it write permission.
Then execute:
bundle exec whenever --update-crontab
sudo /etc/init.d/cron restart
To see the logs:
tail -f /var/log/cron
This will give you more insight into the error.
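For reference, with set :output in place each regenerated crontab entry should gain a redirection, roughly like this (a sketch of Whenever's generated output using the paths above):

0 * * * * /bin/bash -l -c 'cd /var/www/qadashboard && RAILS_ENV=development bundle exec rake bamboo:all --silent >> /var/log/cron 2>&1'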

ferret_server start problem during deploy

When I do cap deploy, everything works fine except ferret_server: while restarting the server it tries to stop ferret_server in production mode and then start it again, but this fails due to a permission problem. Here is the output from my deploy:
** transaction: commit
executing `deploy:restart'
triggering before callbacks for `deploy:restart'
executing `ferret:stop'
executing "cd /home/sj/reelinfo/current; script/ferret_server -e production stop || true"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
** [out :: 67.23.28.171] sh: script/ferret_server: Permission denied
command finished
executing "chown www-data -R /home/sj/reelinfo/current/"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
command finished
executing "touch /home/sj/reelinfo/current/tmp/restart.txt"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
command finished
triggering after callbacks for `deploy:restart'
executing `ferret:start'
executing "cd /home/sj/reelinfo/current; script/ferret_server -e production start"
servers: ["67.23.28.171"]
[67.23.28.171] executing command
** [out :: 67.23.28.171] sh: script/ferret_server: Permission denied
command finished
failed: "sh -c \"cd /home/sj/reelinfo/current; script/ferret_server -e production start\"" on 67.23.28.171
I've had this problem as well; the issue is that script/ferret_server did not have executable permissions.
I added the following deploy task to fix the permissions:
before "deploy:restart", "correct_ferret_server_permissions"
task :correct_ferret_server_permissions do
run "chmod a+x #{current_path}/script/ferret_server
end
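An alternative (my suggestion, not part of the original answer) is to fix the executable bit once in version control so every release ships with it, making the extra deploy task unnecessary; with git that would be:

chmod a+x script/ferret_server
git update-index --chmod=+x script/ferret_server
git commit -m "Make ferret_server executable"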
