I want script/delayed_job start to run on my production server whenever I start my Rails server.
Is there any way I can do that?
EDIT:
I have added this line to my config/initializers/delayed_job.rb:
Delayed::Worker.new.start
But the delayed_job worker is not starting when I run my Rails application.
Is there any other solution?
I would recommend deploying your app with Capistrano and defining an after-deploy hook to start/restart DJ on every deploy.
I would also recommend using Resque over DelayedJob, as the latter has a tendency to die for no apparent reason and usually requires Monit/God to monitor and restart it.
namespace :delayed_job do
  desc "Start delayed_job process"
  task :start, :roles => :app do
    run "cd #{current_path}; script/delayed_job start #{rails_env}"
  end

  desc "Stop delayed_job process"
  task :stop, :roles => :app do
    run "cd #{current_path}; script/delayed_job stop #{rails_env}"
  end

  desc "Restart delayed_job process"
  task :restart, :roles => :app do
    run "cd #{current_path}; script/delayed_job restart #{rails_env}"
  end
end

after "deploy:start",   "delayed_job:start"
after "deploy:stop",    "delayed_job:stop"
after "deploy:restart", "delayed_job:restart"
You can set up an init.d file, but I would recommend either Monit or God. God is Ruby, so it is familiar, but that also means it leaks memory a bit. If you are going to run God, I recommend a cron job to restart it periodically. This is a very good post on configuring Monit on your server.
We went the God route, but if we had it to do over again, we would use Monit.
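If you do go the God route, here is a minimal sketch of a watch file for delayed_job (the paths and names are assumptions for a typical Capistrano layout, not taken from the posts above):

# config/delayed_job.god -- a minimal sketch; adjust paths for your app
RAILS_ROOT = "/var/www/myapp/current" # assumption: Capistrano-style current symlink

God.watch do |w|
  w.name     = "delayed_job"
  w.start    = "cd #{RAILS_ROOT} && RAILS_ENV=production script/delayed_job start"
  w.stop     = "cd #{RAILS_ROOT} && RAILS_ENV=production script/delayed_job stop"
  w.pid_file = "#{RAILS_ROOT}/tmp/pids/delayed_job.pid"
  w.behavior(:clean_pid_file) # remove a stale pid file before starting
  w.keepalive                 # restart the process if it dies
end

You would then load it with god -c config/delayed_job.god.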
You can put
Delayed::Worker.new.start
in a new .rb file in your config/initializers directory, and it will start with your app.
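One caveat worth knowing: Delayed::Worker#start runs a blocking poll loop, so calling it directly in an initializer will hang the Rails boot. A sketch of wrapping it in a background thread instead (with the trade-off that the worker then lives and dies with that particular server process):

# config/initializers/delayed_job.rb
# A sketch: run the worker in a background thread so it doesn't block boot.
Thread.new do
  Delayed::Worker.new.start
end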
Related
I have set up Capistrano and everything is working fine, except Capistrano is not restarting Passenger after deployment. Every time after deploying I have to SSH into the server and run touch tmp/restart.txt inside the current directory. I tried different ways to restart Passenger, but nothing is working for me.
first attempt:
namespace :deploy do
  task :restart do
    on roles(:app) do
      run "#{try_sudo} touch #{File.join(current_path,'tmp','restart.txt')}"
    end
  end
end
second attempt:
namespace :deploy do
  task :restart do
    on roles(:app) do
      within current_path do
        execute :touch, 'tmp/restart.txt'
      end
    end
  end
end
third attempt:
namespace :deploy do
  task :restart do
    run "touch #{current_path}/tmp/restart.txt"
  end
end
I found the above code snippets on Stack Overflow under similar problems, but none of them restarts the server.
I am using Capistrano 3.4.0 with Rails 4 (nginx + Passenger).
It could be that your deploy:restart task is not being executed.
Capistrano 3.1.0 and higher (as explained in the Capistrano CHANGELOG) does not automatically execute deploy:restart at the end of cap deploy.
You must therefore explicitly tell Capistrano to do so, by adding this to your deploy.rb:
after 'deploy:publishing', 'deploy:restart'
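With that hook in place, the second attempt from the question is the idiomatic Capistrano 3 form; a complete sketch for deploy.rb:

# deploy.rb: a sketch combining the restart task with the missing hook
namespace :deploy do
  desc 'Restart Passenger by touching tmp/restart.txt'
  task :restart do
    on roles(:app) do
      within current_path do
        execute :touch, 'tmp/restart.txt'
      end
    end
  end
end

after 'deploy:publishing', 'deploy:restart'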
I'm using Capistrano to deploy apps that I'm building in Sinatra and Rails. For a while now I've been writing all the stuff I need done during deployment right into config/deploy.rb. It feels like I'm just writing Rake there. I was wondering if I could get some advice on whether I'm putting these in the right place, or whether I could be more "Capistranorish" with my deployments.
Here are a few examples of things I'm doing there. I write pretty much everything my deployments need to do in this file.
# deploy.rb
task :initctl_reload_configuration do
  on roles(:app), in: :sequence do
    execute "sudo initctl reload-configuration"
  end
end

task :rebuild_sitemap_no_ping do
  on roles(:app), in: :sequence do
    execute "cd /srv/app/#{environment}/current && RAILS_ENV=#{environment} bundle exec rake sitemap:refresh:no_ping"
  end
end

task :rebuild_sitemap do
  on roles(:app), in: :sequence do
    execute "cd /srv/app/#{environment}/current && RAILS_ENV=#{environment} bundle exec rake sitemap:refresh"
  end
end

task :restart_services do
  on roles(:app), in: :sequence do
    execute "sudo service tomcat6 restart"
    execute "sudo service sunspot-solr restart"
    execute "sudo service app-#{environment} restart"
    execute "sudo service nginx restart"
  end
end
If that's all you've got, it might be just fine to leave it in deploy.rb.
If you really want to move those tasks somewhere, the following contents of your Capfile (you likely have it in the root of your project) should give you a hint:
# Load custom tasks from `lib/capistrano/tasks' if you have any defined
Dir.glob('lib/capistrano/tasks/*.rake').each { |r| import r }
So just create a file in lib/capistrano/tasks/ ending in .rake, and that should do it!
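For example, the sitemap task above could move to a file like lib/capistrano/tasks/sitemap.rake; this sketch swaps the hard-coded /srv/app path for Capistrano's current_path and fetch(:rails_env), which is an assumption about your setup:

# lib/capistrano/tasks/sitemap.rake
namespace :sitemap do
  desc 'Rebuild the sitemap without pinging search engines'
  task :refresh_no_ping do
    on roles(:app), in: :sequence do
      within current_path do
        with rails_env: fetch(:rails_env) do
          execute :bundle, :exec, :rake, 'sitemap:refresh:no_ping'
        end
      end
    end
  end
end

You can then run it with cap production sitemap:refresh_no_ping or hook it into the deploy flow.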
I have Bluepill setup to monitor my delayed_job processes.
On my production server, I use RVM installed in the user's home folder (username is deploy). My app's gems are installed in its own project-specific gemset. So, the bluepill gem and its corresponding binary are installed within the ~/.rvm/.... folder.
When I deploy my app using capistrano, I want bluepill to be stopped and started, so my DJs get restarted. I am looking at the instructions for the capistrano recipe here.
I think my RVM-compliant bluepill tasks have to be like the following:
# Bluepill related tasks
after 'deploy:start', 'bluepill:start'
after 'deploy:stop', 'bluepill:quit'
after 'deploy:restart', 'bluepill:quit', 'bluepill:start'
namespace :bluepill do
  desc 'Stop processes that bluepill is monitoring and quit bluepill'
  task :quit, :roles => [:app] do
    run "cd #{current_path}; sudo bluepill #{application}_#{rails_env} stop"
    run "cd #{current_path}; sudo bluepill #{application}_#{rails_env} quit"
    sleep 5
  end

  desc 'Load bluepill configuration and start it'
  task :start, :roles => [:app] do
    run "cd #{current_path}; sudo bluepill load #{current_path}/config/server/#{rails_env}/delayed_job.bluepill"
  end

  desc "Print the statuses of bluepill's monitored processes"
  task :status, :roles => [:app] do
    run "cd #{current_path}; sudo bluepill #{application}_#{rails_env} status"
  end
end
I haven't tested the above yet.
What I am wondering is: what should I put in my sudoers file to allow the deploy user to run just these bluepill-related commands as root without a password? On this page, they mention this:
deploy ALL=(ALL) NOPASSWD: /usr/local/bin/bluepill
But the path to the bluepill binary would be different in my case, and it would differ between projects because of the project-specific gemsets. Should I list each of the binary paths, or is there a better way of handling this?
Use wrappers and aliases:
namespace :bluepill do
  task :setup do
    run "rvm alias create #{application} #{rvm_ruby_name_evaluated}"
    run "rvm wrappers #{application} --no-links bluepill"
  end
end
After this task runs, bluepill is available via #{rvm_path}/wrappers/#{application}/bluepill, a path that stays the same even if you change the Ruby version, so it can be added to sudoers:
deploy ALL=(ALL) NOPASSWD: /home/my_user/.rvm/wrappers/my_app/bluepill
The tasks can then use:
sudo #{rvm_path}/wrappers/#{application}/bluepill ...
It is important to note that the wrapper takes care of loading the RVM environment, which would otherwise be lost by the invocation of sudo ... but this is just a detail ;)
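For example, the quit task from the question could be rewritten to go through the wrapper; a sketch:

# a sketch: the quit task using the sudo-able wrapper path
task :quit, :roles => [:app] do
  run "cd #{current_path}; sudo #{rvm_path}/wrappers/#{application}/bluepill #{application}_#{rails_env} stop"
  run "cd #{current_path}; sudo #{rvm_path}/wrappers/#{application}/bluepill #{application}_#{rails_env} quit"
  sleep 5
end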
I've got the following settings in deploy.rb to restart my server:
namespace :deploy do
  task :restart do
    run "if [ -f #{unicorn_pid} ] && [ -e /proc/$(cat #{unicorn_pid}) ]; then kill -USR2 \`cat #{unicorn_pid}\`; else cd #{deploy_to}/current && bundle exec unicorn -c #{unicorn_conf} -E #{rails_env} -D; fi"
  end
end
but it doesn't work. I mean that the command executes (it asks me for the password and gives no errors), but all changes in the config files are still ignored (e.g. the number of worker processes or the database settings).
Maybe this is because of the way Unicorn restarts. Not every worker is restarted immediately; this is what makes zero-downtime deploys with no lost requests possible. If you want to be sure you see your changes, try stopping and then starting your application instead. I have had to do this sometimes. Of course, you will potentially lose some requests.
The following tasks are what I use for restarting, stopping, and starting my Unicorn server.
desc "Zero-downtime restart of Unicorn"
task :restart, :except => { :no_release => true } do
run "kill -s USR2 `cat #{shared_path}/pids/unicorn.pid`"
end
desc "Start unicorn"
task :start, :except => { :no_release => true } do
run "cd #{current_path} ; bundle exec unicorn_rails -c config/unicorn.rb -D -E production"
end
desc "Stop unicorn"
task :stop, :except => { :no_release => true } do
run "kill -s QUIT `cat #{shared_path}/pids/unicorn.pid`"
end
Hope this helps you.
Maybe this article is of interest.
See this related question: Restarting Unicorn with USR2 doesn't seem to reload production.rb settings.
Keep in mind that the working directory in your unicorn.rb should be /your/cap/directory/current,
NOT File.expand_path("../..", __FILE__),
because of how Unicorn forks across the Capistrano symlink: the expanded path resolves to a specific release directory instead of the current symlink, so restarts keep running the old release.
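In config terms, that means hard-coding the current symlink instead of deriving the path from the config file's location; a sketch (the path is an assumption for a Capistrano layout):

# unicorn.rb
working_directory "/var/www/myapp/current" # NOT File.expand_path("../..", __FILE__)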
You should give capistrano-unicorn a try, that's what I currently use with the default hooks mentioned below.
Setup
Add the library to your Gemfile:
group :development do
  gem 'capistrano-unicorn', :require => false
end
And load it into your deployment script config/deploy.rb:
require 'capistrano-unicorn'
Add unicorn restart task hook:
after 'deploy:restart', 'unicorn:reload'    # app IS NOT preloaded
after 'deploy:restart', 'unicorn:restart'   # app preloaded
after 'deploy:restart', 'unicorn:duplicate' # before_fork hook implemented (zero-downtime deployments)
I cannot start delayed job process using a capistrano recipe. Here's the error I am getting.
/usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.1/lib/delayed/command.rb:62:in `mkdir': File exists - /my_app/server/releases/20101120001612/tmp/pids (Errno::EEXIST)
Here's the Capistrano code (note: I have tried both the start and restart commands):
after "deploy:restart", "delayed_job:start"
task :start, :roles => :app do
  run "cd #{current_path}; RAILS_ENV=#{rails_env} script/delayed_job -n 2 start"
end
More detailed errors from the deployment logs:
executing command
[err :: my_server] /usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.1/lib/delayed/command.rb:62:in `mkdir': File exists - /my_app/server/releases/20101120001612/tmp/pids (Errno::EEXIST)
[err :: my_server] from /usr/local/lib/ruby/gems/1.9.1/gems/delayed_job-2.1.1/lib/delayed/command.rb:62:in `daemonize'
[err :: my_server] from script/delayed_job:5:in `<main>'
command finished
failed: "sh -c 'cd /my_app/server/current; RAILS_ENV=production script/delayed_job -n 3 restart'" on myserevr
This is a Rails 3 app (v3.0.3)
I was seeing the same problem.
It turned out I was missing the ~/apps/application_name/shared/pids directory.
Creating it made the problem go away.
There is no need to set up a custom dj_pids directory.
I also got this error and found a couple of issues:
Ensure you have a shared/pids folder.
Ensure you have the correct hooks set up.
Your deploy.rb script should contain:
require "delayed/recipes"
after "deploy:stop", "delayed_job:stop"
after "deploy:start", "delayed_job:start"
after "deploy:restart", "delayed_job:restart"
I'd copied the hooks from an old post, and they appear to be incorrect now. The hooks above come from the comments in the actual delayed_job recipe file.
I believe cap deploy:setup should create the pids folder, but I set things up a different way and it was not created. app/current/tmp/pids is symlinked to app/shared/pids, and when the target is missing, the dangling symlink causes mkdir's misleading "directory exists" error.
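If your setup skipped it too, here is a Cap 2-style sketch to create the shared pids directory before the workers start (the task name is mine):

# a sketch: make sure the shared pids directory exists before starting DJ
task :create_pids_dir, :roles => :app do
  run "mkdir -p #{shared_path}/pids"
end
before "delayed_job:start", "create_pids_dir"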
This is how I fixed the issue: I passed an explicit pids directory using --pid-dir. Not sure if this is perfect, but it worked.
task :restart, :roles => :app do
  run "cd #{current_path}; RAILS_ENV=#{rails_env} script/delayed_job -n #{dj_proc_count} --pid-dir=#{app_root}/shared/dj_pids restart"
end
Add the creation of this directory before starting:
after "deploy:restart", "delayed_job:start"
task :start, :roles => :app do
run "mkdir #{current_path}/tmp/pids"
run "cd #{current_path}; RAILS_ENV=#{rails_env} script/delayed_job -n 2 start"
end
I had the same issue. It turned out there was an existing
application_name/shared/pids/delayed_job.main.pid
file with incorrect owner permissions, which was causing the deployment to fail. Fixing that file's permissions solved the issue for me.
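For reference, a sketch of a one-off task to reclaim ownership from a recipe (the deploy user and group are assumptions):

# a sketch: fix ownership of a stale delayed_job pid file
task :fix_dj_pid_permissions, :roles => :app do
  run "#{try_sudo} chown deploy:deploy #{shared_path}/pids/delayed_job.main.pid" # assumed user:group
end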
Since creating the directories is cheap and fast, use the following callback:
before 'deploy', 'deploy:setup'
This will ensure that the structure is always there before each deploy.