Sidekiq daemon process dies after a few seconds - ruby-on-rails

namespace :sidekiq do
  task :quiet do
    on roles(:app) do
      execute "pgrep -f 'sidekiq' | xargs kill -9 -USR1"
    end
  end

  task :restart do
    on roles(:app) do
      execute "cd #{current_path} bundle exec sidekiq -C config/sidekiq.yml -d"
    end
  end
end
after 'deploy:starting', 'sidekiq:quiet'
after 'deploy:reverted', 'sidekiq:restart'
after 'deploy:published', 'sidekiq:restart'
This is the script I am using in deploy.rb.
It kills the process properly, and it also creates a daemon process, but within a few seconds that daemon is killed too. I am using Capistrano for deployment.
I think the daemon process is a child process and the Capistrano deployment is the parent process, so after the deployment completes it kills the child process (i.e. the daemon process).
Please help me, I have been stuck on this for a week.

Try using the capistrano-sidekiq gem. It works great.
Also, you have two commands in one execute call, but the && between them is missing:
execute "cd #{current_path} && bundle exec sidekiq -C config/sidekiq.yml -d"

Related

Sidekiq launches and mysteriously disappears when started by Capistrano

I'm struggling to start sidekiq remotely with a custom Capistrano v2 task:
namespace :sidekiq do
  desc "Start sidekiq"
  task :start do
    run "cd #{current_path} && bundle exec sidekiq --version"
    run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
  end
end
Output:
* 2018-01-05 11:40:51 executing `sidekiq:start'
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --version"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] Sidekiq 5.0.5
** [out :: 198.58.110.211]
command finished in 1424ms
* executing "cd /home/deploy/applications/xxx/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && echo OK"
servers: ["198.58.110.211"]
[198.58.110.211] executing command
** [out :: 198.58.110.211] OK
command finished in 1128ms
I can confirm I'm picking up the environment (rbenv & bundler) correctly, as the first run command shows. But unexpectedly the sidekiq process starts and then vanishes into oblivion: 1) tmp/pids/sidekiq.pid gets initialized but the process does not exist, and 2) logs/sidekiq.log gets created but contains only the header:
# Logfile created on 2018-01-05 11:34:09 -0300 by logger.rb/56438
If I remove the --daemon switch, the process runs perfectly, but of course the Capistrano deploy task never ends, and when I hit CTRL+C sidekiq closes.
If I just ssh into the remote server and execute the command (replacing current_path, obviously), it works perfectly.
I've tried almost everything I can imagine: not using a config file, using RAILS_ENV instead of --environment, etc.
As the "&& echo OK" shows, the command is not returning an error.
Capistrano is using "/bin/bash --login -c 'cd /home/deploy/applications/microgestion/current && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml'" as far as I can tell to run the command.
Ruby v2.3.3, Capistrano 2.15.5, Sidekiq 5.0.5, Rails 4.0.12
Solved it by adding && sleep 1 at the end of the command, as explained here: http://blog.hartshorne.net/2013/03/capistrano-nohup-and-sleep.html.
desc "Start sidekiq"
task :start do
run "cd #{current_path} && bundle exec sidekiq --environment production --daemon --config config/sidekiq.yml && sleep 1"
end
Thanks @user3309314 for pointing me in the right direction.
If you use plain Capistrano to daemonize Sidekiq, any crash will lead to downtime. Don't do this. You need to use a process monitor that will restart the Sidekiq process if it dies. Use systemd, upstart and/or Foreman as explained in the docs.
https://github.com/mperham/sidekiq/wiki/Deployment#running-your-own-process
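If you still want Capistrano to drive restarts, a minimal sketch along those lines (assuming a sidekiq.service systemd unit already exists on the server; the task below is hypothetical, not from the original post) is to let the supervisor own the process and have the deploy merely poke it:
# config/deploy.rb -- hypothetical tasks; assumes a systemd-managed
# sidekiq.service unit, so the supervisor (not Capistrano) handles crashes.
namespace :sidekiq do
  task :restart do
    on roles(:app) do
      execute :sudo, :systemctl, :restart, :sidekiq
    end
  end
end
after 'deploy:published', 'sidekiq:restart'
This way the deploy never daemonizes the process itself, and a crash gets restarted by systemd instead of causing downtime.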

Whenever does not trigger Sidekiq job

I am trying to set up a Sidekiq scheduler to run recurring jobs.
I can get the Sidekiq job to run fine from the Rails console or from the Rails app, and I can get Whenever to trigger a plain command. However, I CANNOT get Whenever to trigger the Sidekiq job.
I replicated my setup on different machines; on some machines it works, but on most others it does not.
Here is my setup:
# app/workers/create_random_product.rb
class CreateRandomProduct
  include Sidekiq::Worker

  def perform
    new_product = Product.new
    new_product.name = "Product #{Time.now}"
    new_product.price = 5.5
    new_product.save
  end
end

# config/schedule.rb
every 1.minute do
  runner "CreateRandomProduct.perform_async"
  command "echo 'hello' >> /home/vagrant/output.txt"
end
As you can see in schedule.rb, the second command runs (I can see the output file being updated), but the first runner command does not seem to do anything. The Sidekiq dashboard does not show any activity either.
Running whenever returns these cron jobs:
* * * * * /bin/bash -l -c 'cd /home/vagrant/sidekiqdemo && bin/rails runner -e production '\''CreateRandomProduct.perform_async'\'''
* * * * * /bin/bash -l -c 'echo '\''hello'\'' >> /home/vagrant/output.txt'
I ran the first command manually and it DOES trigger the Sidekiq job.
I did run whenever -i to update the crontab file. Checking the cron log, I can see that it tried to trigger the Sidekiq job:
Oct 16 16:15:01 vagrant-ubuntu-trusty-64 CRON[13062]: (vagrant) CMD (/bin/bash -l -c 'cd /home/vagrant/sidekiqdemo && bin/rails runner -e production '\''CreateRandomProduct.perform_async'\''')
Oct 16 16:15:01 vagrant-ubuntu-trusty-64 CRON[13063]: (vagrant) CMD (/bin/bash -l -c 'echo '\''hello'\'' >> /home/vagrant/output.txt')
Is there anything that I am missing?
I solved this by defining a new job_type, as suggested here.
For this to work you have to install sidekiq-client-cli as well.
Moreover, thanks to this post I discovered that when a command is executed from the cron process, only a subset of the environment variables is set; in my case the command was not working because my BUNDLE_PATH wasn't set. In the end my working job_type looks something like this:
job_type :sidekiq, "cd :path && BUNDLE_PATH=/bundle /usr/local/bin/bundle exec sidekiq-client :task :output"

every 1.minute do
  sidekiq "push GenerateExportsWorker"
end
This is also the reason why, in my case, a command like
every 1.minute do
  command "echo 'you can use raw cron syntax too' >> /home/log/myscript.log 2>&1"
end
worked while the runner did not.
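An alternative, hedged sketch of the same fix: instead of inlining BUNDLE_PATH in a custom job_type, Whenever's env helper can export the missing variables into the crontab (the values below are assumptions based on this setup, not from the original post):
# config/schedule.rb
env :BUNDLE_PATH, '/bundle'
env :PATH, "/usr/local/bin:#{ENV['PATH']}"

every 1.minute do
  runner "CreateRandomProduct.perform_async"
end
Either way, the underlying issue is the same: cron runs commands with a minimal environment, so anything Bundler needs has to be provided explicitly.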

How to update ENV variable on Passenger standalone restart

I'm using Capistrano to deploy my application. The application runs on Passenger standalone. When I redeploy the application, Passenger still uses the Gemfile from the old release because the BUNDLE_GEMFILE environment variable has not been updated.
Where should I put the updated path to the Gemfile so that Passenger picks it up on restart?
The server startup command is in monit, and I just call monit from my Capistrano tasks, except for restart, where I just touch restart.txt.
namespace :deploy do
  task :stop do
    run("sudo /usr/bin/monit stop my_app_#{rails_env}")
  end

  task :restart do
    run("cd #{current_path} && touch tmp/restart.txt")
  end

  task :start do
    run("sudo /usr/bin/monit start my_app_#{rails_env}")
  end
end
The startup command in monit is:
start program = "/bin/su - app_user -l -c 'cd /home/app_user/current && bundle exec passenger start -d -p 8504 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.8504.pid /home/app_user/current'"
I already tried adding BUNDLE_GEMFILE to the startup command like this:
start program = "/bin/su - app_user -l -c 'cd /home/app_user/current && BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec passenger start -d -p 8504 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.8504.pid /home/app_user/current'"
But it didn't work, since the path /home/app_user/current is a symlink to a release path, and that release path was picked up instead.
Simple solution: define the Gemfile to be used in the server start command. For example:
BUNDLE_GEMFILE=/home/app_user/current/Gemfile bundle exec passenger start -d -p 9999 -e production --pid-file=/home/app_user/current/tmp/pids/passenger.9999.pid /home/app_user/current
The earlier solution (setting the BUNDLE_GEMFILE env variable in .profile) is not good. When you deploy a new version of your application and there is a new gem in the bundle, the migrations etc. will fail because Bundler will still use the Gemfile defined in that env variable.
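A hedged variation on the same idea, if you want deploys to pick up Gemfile changes automatically: override Capistrano's restart to do a full stop/start through monit (so the start command, and the BUNDLE_GEMFILE it sets, is re-evaluated) instead of only touching tmp/restart.txt. The task below is hypothetical, not from the original post:
# config/deploy.rb (Capistrano 2) -- hypothetical full restart through monit
namespace :deploy do
  task :restart do
    run("sudo /usr/bin/monit restart my_app_#{rails_env}")
  end
end
The trade-off is a short outage on every deploy, since Passenger is fully stopped and started rather than hot-restarted.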

sudo password fail trying to setup monit for rails application

I'm following along with Ryan Bates' Railscast on Monit (http://railscasts.com/episodes/375-monit?view=asciicast), the monitoring system for Rails applications. In it, he creates a Capistrano recipe for monit (see below) which is run with the command cap monit:setup. When I run it, it fails with this message:
command finished in 200ms
failed: "sh -c 'sudo -p '\\''sudo password: '\\'' service monit reload'" on 192.XXX.XXX.XXX
It's obviously a password-related error, but I can't figure out why it's happening, because the other commands the script defines (start, stop, restart, syntax) seem to run successfully; it only fails when it gets to reload.
Noting that the second-to-last line in the script uses root, I thought it might also be a root issue, since I'm logged into my server with a different user name (but with sudo permissions). Therefore, I changed root in that line to my user name and ran cap monit:setup again, but I got the same error.
* 2013-07-10 05:52:01 executing `monit:syntax'
* executing "sudo -p 'sudo password: ' service monit syntax"
servers: ["192.XXX.XXX.XXX"]
[192.XXX.XXX.XXX] executing command
** [out :: 192.XXX.XXX.XXX] Control file syntax OK
command finished in 248ms
* 2013-07-10 05:52:02 executing `monit:reload'
* executing "sudo -p 'sudo password: ' service monit reload"
servers: ["192.XXX.XXX.XXX"]
[192.XXX.XXX.XXX] executing command
** [out :: 192.XXX.XXX.XXX] Usage: /etc/init.d/monit {start|stop|restart|force-reload|syntax}
command finished in 200ms
failed: "sh -c 'sudo -p '\\''sudo password: '\\'' service monit reload'" on 192.XXX.XXX.XXX
The monit.rb recipe:
namespace :monit do
  desc "Install Monit"
  task :install do
    run "#{sudo} apt-get -y install monit"
  end
  after "deploy:install", "monit:install"

  desc "Setup all Monit configuration"
  task :setup do
    monit_config "nginx"
    syntax
    reload
  end
  after "deploy:setup", "monit:setup"

  %w[start stop restart syntax reload].each do |command|
    desc "Run Monit #{command} script"
    task command do
      run "#{sudo} service monit #{command}"
    end
  end
end

def monit_config(name, destination = nil)
  destination ||= "/etc/monit/conf.d/#{name}.conf"
  template "monit/#{name}.erb", "/tmp/monit_#{name}"
  run "#{sudo} mv /tmp/monit_#{name} #{destination}"
  run "#{sudo} chown root #{destination}"
  run "#{sudo} chmod 600 #{destination}"
end
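Incidentally, the Usage line in the output above lists force-reload rather than reload, so a hedged tweak to the recipe would be to expose that action in the command loop and call it from :setup (the mapping below is hypothetical, not from the Railscast):
# Hypothetical variant of the command loop, using force-reload
{ "start" => "start", "stop" => "stop", "restart" => "restart",
  "syntax" => "syntax", "force_reload" => "force-reload" }.each do |name, cmd|
  desc "Run Monit #{cmd}"
  task name do
    run "#{sudo} service monit #{cmd}"
  end
end
and then have monit:setup call force_reload instead of reload.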

How do I stop delayed_job if I'm running it with the -m "monitor" option?

How do I stop delayed_job if I'm running it with the -m "monitor" option? The processes keep getting restarted!
The command I start delayed_job with is:
script/delayed_job -n 4 -m start
The -m flag runs a monitor process that spawns a new delayed_job process if one dies.
The command I'm using to stop is:
script/delayed_job stop
But that doesn't stop the monitor processes, which in turn start all the worker processes up again. I would just like them to go away. I can kill them manually, which I have done, but I was hoping there was a command-line option to shut the whole thing down.
In my capistrano deploy script I have this:
desc "Start workers"
task :start_workers do
run "cd #{release_path} && RAILS_ENV=production script/delayed_job -m -n 2 start"
end
desc "Stop workers"
task :stop_workers do
run "ps xu | grep delayed_job | grep monitor | grep -v grep | awk '{print $2}' | xargs -r kill"
run "cd #{current_path} && RAILS_ENV=production script/delayed_job stop"
end
To avoid any errors that might stop your deployment script:
ps xu only shows processes owned by the current user
xargs -r kill only invokes the kill command when there is something to kill
I only kill the delayed_job monitor, and stop the delayed_job daemon the normal way.
I had this same problem. Here's how I solved it:
# ps -ef | grep delay
root 8605 1 0 Jan03 ? 00:00:00 delayed_job_monitor
root 15704 1 0 14:29 ? 00:00:00 dashboard/delayed_job
root 15817 12026 0 14:31 pts/0 00:00:00 grep --color=auto delay
Here you can see the delayed_job process and the monitor. Next, I manually kill these processes and then delete the PID files. From the application's directory (/usr/share/puppet-dashboard in my case):
# ps -ef | grep delay | grep -v grep | awk '{print $2}' | xargs kill && rm tmp/pids/*
The direct answer is that you have to kill the monitor process first. However, AFAIK there isn't an easy way to do this; I don't think the monitor PIDs are stored anywhere, and the DJ start and stop script certainly doesn't do anything intelligent there, as you noticed.
I find it odd that the monitor feature was included at all; I guess Daemons has it, so whoever was writing the DJ script figured they would just pass that option down. But it's not really usable as it is.
I wrote an email to the list about this a while back, didn't get an answer: https://groups.google.com/d/msg/delayed_job/HerSuU97BOc/n4Ps430AI1UJ
You can see more about monitoring with Daemons here: http://daemons.rubyforge.org/classes/Daemons.html#M000004
If you come up with a better answer/solution, add it to the wiki here: https://github.com/collectiveidea/delayed_job/wiki/monitor-process
If you can access the server, you can try these commands:
ps -ef | grep delayed_job
kill -9 XXXX # XXXX is the process id
OR
cd path-to-app-folder/current
RAILS_ENV=production bin/delayed_job stop
RAILS_ENV=production bin/delayed_job start
You can also add these tasks to Capistrano 3 in config/deploy.rb:
namespace :jobworker do
  task :start do
    on roles(:all) do
      within "#{current_path}" do
        with rails_env: "#{fetch(:stage)}" do
          execute "bin/delayed_job start"
        end
      end
    end
  end

  task :stop do
    on roles(:all) do
      within "#{current_path}" do
        with rails_env: "#{fetch(:stage)}" do
          execute "bin/delayed_job stop"
        end
      end
    end
  end
end
Then run:
cap production jobworker:stop
cap production jobworker:start
