I recently changed machines and hit a few rough spots updating Rails. The server itself stayed as it was. Everything seemed to be fine except Capistrano. When I make changes, commit them to SVN, and run
cap deploy
the correct new version of the repository is placed on the server. The logging in the terminal running Capistrano shows nothing out of the ordinary, but evidently no restart actually takes place, because the server keeps serving the old code. Running
cap deploy:restart
produces:
Dans-iMac:rebuild apple$ cap deploy:restart
* executing `deploy:restart'
* executing `accelerator:smf_restart'
* executing `accelerator:smf_stop'
* executing "sudo -p 'sudo password: ' svcadm disable /network/mongrel/urbanistica-production"
servers: ["www.urbanisti.ca"]
Password:
[www.urbanisti.ca] executing command
command finished
* executing `accelerator:smf_start'
* executing "sudo -p 'sudo password: ' svcadm enable -r /network/mongrel/urbanistica-production"
servers: ["www.urbanisti.ca"]
[www.urbanisti.ca] executing command
command finished
* executing `accelerator:restart_apache'
* executing "sudo -p 'sudo password: ' svcadm refresh svc:/network/http:cswapache2"
servers: ["www.urbanisti.ca"]
[www.urbanisti.ca] executing command
command finished
But no evident change takes place. What might be going on? The Mongrel log on the server shows no changes whatever: it's still running the older version that predates the update.
The problem would seem to be in your custom (or at least non-built-in) restart task. The accelerator:smf_restart task, and the associated smf_stop and smf_start tasks it calls, are not part of the standard Capistrano suite. Did you write those tasks yourself, or do they come from a Capistrano extension? If the latter, which extension?
If you can post a link to that extension, or post your Capfile if you wrote the tasks yourself, that would help people figure out more specifically what's going wrong.
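For reference, a custom SMF restart recipe of the sort your output suggests usually looks something like the sketch below. This is a hypothetical reconstruction based only on the svcadm commands in your log (the task bodies and structure are assumptions), not your actual recipe:

# Hypothetical sketch of an SMF-based restart recipe, reconstructed from
# the svcadm commands in the output above; names and structure are guesses.
namespace :accelerator do
  desc "Stop the mongrel SMF service."
  task :smf_stop, :roles => :app do
    sudo "svcadm disable /network/mongrel/urbanistica-production"
  end

  desc "Start the mongrel SMF service and its dependencies."
  task :smf_start, :roles => :app do
    sudo "svcadm enable -r /network/mongrel/urbanistica-production"
  end

  desc "Tell Apache to refresh (reload) its configuration."
  task :restart_apache, :roles => :app do
    sudo "svcadm refresh svc:/network/http:cswapache2"
  end

  desc "Full restart: stop, start, then refresh Apache."
  task :smf_restart, :roles => :app do
    smf_stop
    smf_start
    restart_apache
  end
end

If your real recipe looks roughly like this, the svcadm calls are clearly returning success, so the next place to look is on the server itself (for example, whether the SMF service's start method still points at the old release) rather than at Capistrano.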
Related
I am running a Rails 3.2.18 application on AWS. This application is deployed using Capistrano, including starting Resque workers for the application.
My problem is that AWS can occasionally restart instances with little to no warning, or an instance can be restarted from the AWS console. When this happens, Resque is not started as it is during our normal deployment process.
I have tried to create a shell script in /etc/init.d to start Resque on boot, but this script continues to prompt for a password, and I'm not sure what I'm missing. The essence of the start script is:
/bin/su -l deploy-user -c "cd /www/apps/deploy-user/current && bundle exec cap <environment> resque:scheduler:start resque:start"
Obviously the above command works as expected when run as the "deploy" user from the bash prompt, but when run via sudo /etc/init.d/resque start, it prompts for a password upon running the first Capistrano command.
Is there something glaring that I am missing? Or perhaps is there a better way to accomplish this?
You should run su with the -c parameter to specify the command, and enclose the whole command in double quotes:
/bin/su -l deploy-user -c "cd /www/apps/deploy-user/current && bundle exec cap <environment> resque:scheduler:start resque:start"
Of course, you have other alternatives, like /etc/rc.local.
But if you're going to use an init.d script, I'd suggest creating it properly (at least start/stop actions and default runlevels). Otherwise I'd go with /etc/rc.local or even a cron job for the deploy-user:
@reboot cd /www/apps/deploy-user/current && bundle exec cap <environment> resque:scheduler:start resque:start
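For what it's worth, the password prompt usually comes from whatever the Capistrano tasks themselves do on the remote host (password-based SSH with no key agent available at boot, or a sudo inside a task). A hypothetical sketch of what hand-rolled resque start tasks often contain, just to show where that can happen (your app's actual tasks may differ):

# Hypothetical sketch of resque start tasks; current_path is Capistrano's
# symlink to the live release, and rails_env is assumed to be set elsewhere.
namespace :resque do
  desc "Start resque workers on the app servers."
  task :start, :roles => :app do
    run "cd #{current_path} && RAILS_ENV=#{rails_env} QUEUE=* " \
        "nohup bundle exec rake resque:work >> log/resque.log 2>&1 &"
  end

  namespace :scheduler do
    desc "Start the resque scheduler."
    task :start, :roles => :app do
      run "cd #{current_path} && RAILS_ENV=#{rails_env} " \
          "nohup bundle exec rake resque:scheduler >> log/resque_scheduler.log 2>&1 &"
    end
  end
end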
I'm trying to set up a daily cron job to update my site stats, but it looks like it doesn't work.
Cron entry (for deployer user):
0 0 * * * cd /var/www/my_site/current && rake RAILS_ENV=production stats:update
I'm running ubuntu server, with rbenv.
Any idea what's wrong?
Many times, $PATH is defined differently when cron runs than when you are working in your own shell. Run "whereis rake" to find the full path to rake, then replace "rake" with that full path. (I am assuming the "cd" command is working, so I am focusing on whether "rake" is being found and run properly.)
Has cron sent you any emails with error messages after you added your command to your crontab?
You might want to run "crontab -l" under the proper user account to make sure that your cron command is actually registered within the crontab, especially if you aren't receiving any emails.
The presence of a Gemfile can also affect the ability to properly run rake. See, for example, Error: "Could not find rake", yet Rake is installed
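Putting that together, the fixed crontab entry usually ends up looking something like this (the rbenv shim path below is an assumption; substitute whatever path which rake or whereis rake reports when run as the deployer user):

0 0 * * * cd /var/www/my_site/current && /home/deployer/.rbenv/shims/rake RAILS_ENV=production stats:update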
I am working on a God script to monitor my Unicorns. I started with GitHub's examples script and have been modifying it to match my server configuration. Once God is running, commands such as god stop unicorn and god restart unicorn work just fine.
However, god start unicorn results in WARN: unicorn start command exited with non-zero code = 1. The weird part is that if I copy the start script directly from the config file, it starts right up like a brand new mustang.
This is my start command:
/usr/local/bin/unicorn_rails -c /home/my-linux-user/my-rails-app/config/unicorn.rb -E production -D
I have declared all paths as absolute in the config file. Any ideas what might be preventing this script from working?
I haven't used unicorn as an app server, but I've used god for monitoring before.
If I remember rightly, when you start god and give it your config file, it automatically starts whatever you've told it to watch. Unicorn is probably already running, which is why it's throwing the error.
Check this by running god status once you've started god. If that's not the case, you can check the command's exit status on the command line:
/usr/local/bin/unicorn_rails -c /home/my-linux-user/my-rails-app/config/unicorn.rb -E production -D;
echo $?;
That echo will print the exit status of the last command. If it's zero, the last command reported no errors. Try starting unicorn twice in a row; I expect the second time it'll return 1, because it's already running.
EDIT:
Including the actual solution from the comments, as this seems to be a popular response:
You can set an explicit user and group if your process needs to run as a specific user.
God.watch do |w|
  w.uid = 'root'
  w.gid = 'root'
  # remainder of config
end
My problem was that I never bundled as root. Here is what I did:
sudo bash
cd RAILS_ROOT
bundle
You get a warning telling you to never do this:
Don't run Bundler as root. Bundler can ask for sudo if it is needed,
and installing your bundle as root will break this application for all
non-root users on this machine.
But it was the only way I could get resque or unicorn to run with god. This was on an EC2 instance, if that helps anyone.
Adding the log option has helped me greatly in debugging.
God.watch do |w|
  w.log = "#{RAILS_ROOT}/log/god.log"
  # remainder of config
end
In the end, my bug turned out to be that the start_script in God was executed in the development environment. I fixed this by adding RAILS_ENV to the start script.
start_script = "RAILS_ENV=#{ENV['RACK_ENV']} bundle exec sidekiq -P #{pid_file} -C #{config_file} -L #{log_file} -d"
I have a very simple task called update_feeds:
desc "Update feeds"
task :update_feeds do
run "cd #{release_path}"
run "script/console production"
run "FeedEntry.update_all"
end
Whenever I try to run this task, I get the following message:
[out :: mysite.com] sh: script/console: No such file or directory
I figured it's because I am not in the right directory, but trying
run "cd ~/user/mysite.com/current"
instead of
run "cd #{release_path}"
also fails. Running the exact same commands manually (over SSH) works perfectly.
Why can't capistrano properly cd (change directory) into the site directory to run the command?
Thanks!
Update: Picked an answer, and thank you so much to all who replied.
The best answer may actually be the one on Server Fault, though the gist of both (the one on Server Fault and the one on Stack Overflow) is the same.
You want to use script/runner. It starts an instance of the app to execute the method you want to call. It's slow, though, as it has to load all of your Rails app.
~/user/mysite.com/current/script/runner -e production FeedEntry.update_all 2>&1
You can run that from your Capistrano task.
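Wrapped up as a Capistrano task, that might look like the sketch below. Note that each run call gets its own shell on the server, which is why the cd in your original task has no effect on the commands that follow it; the cd and the runner invocation have to be chained into a single run:

# Sketch: chain the cd and script/runner into one command so they share a shell.
desc "Update feeds"
task :update_feeds, :roles => :app do
  run "cd #{current_path} && script/runner -e production 'FeedEntry.update_all' 2>&1"
end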
I cannot imagine that you would be able to remotely log into rails console from capistrano. I suggest you call your model method from a rake task.
How do I run a rake task from Capistrano?
As for the latter part of your question, are you logging into the server with the same user account as capistrano?
I have a cluster of three mongrels running under nginx, and I deploy the app using Capistrano 2.4.3. When I run "cap deploy" against a running system, the behavior is:
The app is deployed. The code is successfully updated.
In the cap deploy output, there is this:
executing "sudo -p 'sudo password: '
mongrel_rails cluster::restart -C
/var/www/rails/myapp/current/config/mongrel_cluster.yml"
servers: ["myip"]
[myip] executing command
** [out :: myip] stopping port 9096
** [out :: myip] stopping port 9097
** [out :: myip] stopping port 9098
** [out :: myip] already started port 9096
** [out :: myip] already started port 9097
** [out :: myip] already started port 9098
I check immediately on the server and find that Mongrel is still running, and the PID files are still present for the previous three instances.
A short time later (less than one minute), I find that Mongrel is no longer running, the PID files are gone, and it has failed to restart.
If I start mongrel on the server by hand, the app starts up just fine.
It seems like 'mongrel_rails cluster::restart' isn't properly waiting for a full stop
before attempting a restart of the cluster. How do I diagnose and fix this issue?
EDIT: Here's the answer:
mongrel_cluster, in the "restart" task, simply does this:
def run
  stop
  start
end
It doesn't do any waiting or checking to confirm that the processes have actually exited before invoking "start". This is a known bug with an outstanding patch submitted. I applied the patch to mongrel_cluster and the problem disappeared.
You can explicitly tell the mongrel_cluster recipes to remove the pid files before a start by adding the following in your capistrano recipes:
# helps keep mongrel pid files clean
set :mongrel_clean, true
This causes it to pass the --clean option to mongrel_cluster_ctl.
I went back and looked at one of my deployment recipes and noticed that I had also changed the way my restart task worked. Take a look at the following message in the mongrel users group:
mongrel users discussion of restart
The following is my deploy:restart task. I admit it's a bit of a hack.
namespace :deploy do
  desc "Restart the Mongrel processes on the app server."
  task :restart, :roles => :app do
    mongrel.cluster.stop
    sleep 2.5
    mongrel.cluster.start
  end
end
First, narrow the scope of what you're testing by calling only cap deploy:restart. You might want to pass the --debug option to prompt before remote execution, or the --dry-run option just to see what's going on as you tweak your settings.
At first glance, this sounds like a permissions issue on the pid files or mongrel processes, but it's difficult to know for sure. A couple of things that catch my eye:
The :runner variable is explicitly set to nil. Was there a specific reason for this?
Capistrano 2.4 introduced a new behavior for the :admin_runner variable. Without seeing the entire recipe, is this possibly related to your problem?
:runner vs. :admin_runner (from capistrano 2.4 release)
Some cappers have noted that having deploy:setup and deploy:cleanup run as the :runner user messed up their carefully crafted permissions. I agreed that this was a problem. With this release, deploy:start, deploy:stop, and deploy:restart all continue to use the :runner user when sudoing, but deploy:setup and deploy:cleanup will use the :admin_runner user. The :admin_runner variable is unset, by default, meaning those tasks will sudo as root, but if you want them to run as :runner, just do “set :admin_runner, runner”.
My recommendation for what to do next: manually stop the mongrels and clean up the PIDs, then start the mongrels by hand. Next, continue to run cap deploy:restart while debugging the problem. Repeat as necessary.
Either way, my mongrels are starting before the previous stop command has finished shutting 'em all down.
sleep 2.5 is not a good solution if it takes longer than 2.5 seconds to halt all running mongrels.
There seems to be a need for:
stop && start
vs.
stop; start
(this is how bash works: && runs the second command only if the first finishes without error, while ";" simply runs it regardless).
I wonder if there is a:
wait cluster_stop
then cluster_start
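One way to get that wait without a fixed sleep is to poll for the mongrel pid files after stopping, and only start once they're gone. A rough Capistrano sketch (the pid file glob is an assumption; point it at wherever your mongrel_cluster.yml writes its pid files):

namespace :deploy do
  desc "Restart mongrels, waiting for the old processes to exit first."
  task :restart, :roles => :app do
    mongrel.cluster.stop
    # Poll (up to ~30s) until the old pid files disappear before starting.
    run "i=0; while ls /var/www/rails/myapp/current/tmp/pids/mongrel.*.pid " \
        ">/dev/null 2>&1 && [ $i -lt 30 ]; do sleep 1; i=$((i+1)); done"
    mongrel.cluster.start
  end
end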
I hate to be so basic, but it sounds like the pid files are still hanging around when it is trying to start. Make sure that mongrel is stopped by hand. Clean up the pid files by hand. Then do a cap deploy.
Good discussion: http://www.ruby-forum.com/topic/139734#745030