Restarting Rails Server

I've inherited an existing Rails 2 application and am currently trying to deploy it on production servers.
As a Rails/Unix novice, what's the best way to find out which web server the Rails application is running on, and how can I restart the server? (From what I've read, Rails caches everything on production servers.)
The previous developer used Capistrano, but unfortunately I don't have access to the Git repository.
I noticed /configuration/deploy.rb has the following lines:
desc "Custom restart task for mongrel cluster"
task :restart, :roles => :app, :except => { :no_release => true } do
deploy.mongrel.restart
end
desc "Custom start task for mongrel cluster"
task :start, :roles => :app do
deploy.mongrel.start
end
desc "Custom stop task for mongrel cluster"
task :stop, :roles => :app do
deploy.mongrel.stop
end
Does this imply mongrel_rails is being used?
If so what's the best way to restart the application to pick up my changes?
Many thanks.

Does this imply mongrel_rails is being used?
Yes.
If so what's the best way to restart the application to pick up my changes?
It depends on which application server you currently use. Assuming the current recipe is OK, simply call the Capistrano restart task:
$ cap deploy:restart
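
If you don't have a working Capistrano setup locally (you mention lacking access to the repository), you can usually restart a mongrel cluster directly on the server instead; a sketch, assuming a standard mongrel_cluster configuration file:

$ mongrel_rails cluster::restart -C config/mongrel_cluster.yml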

Related

Rails: managing multiple sidekiqs without upstart script

Once upon a time, I had one app, Cashyy. It used Sidekiq. I deployed it and used this upstart script to manage Sidekiq (start/restart).
I decided to deploy another app to the same server. The app (let's call it Giimee) also uses Sidekiq.
And here is the issue. Sometimes I need to restart Sidekiq for Cashyy, but not for Giimee. Now, as I understand it, I will need to hack something together using the index option, both in the upstart script and when managing the Sidekiqs (sudo restart sidekiq index=1), if I understood it correctly.
BUT!
I have zero desire to dabble with these indexes (a nightmare to support: you need to know how many apps are using Sidekiq, be sure to assign a unique index to each one, and remember the assigned index when you want to restart a specific Sidekiq).
So here is the question: how can I isolate each Sidekiq (so I would not need to maintain the index) and still get the stability and usability of upstart (starting the process, restarting, etc.)?
Or maybe I don't understand something and the index approach is the state of the art?
You create two services:
cp sidekiq.conf /etc/init/cashyy.conf
cp sidekiq.conf /etc/init/giimee.conf
Edit each as necessary. sudo start cashyy, sudo stop giimee, etc. Now you'll have two completely separate Sidekiq processes running.
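A minimal sketch of what each edited conf might look like (the user, paths, and environment here are assumptions; adjust per app):

# /etc/init/cashyy.conf
description "Sidekiq for the Cashyy app"

start on runlevel [2345]
stop on runlevel [06]

respawn

# setuid requires Upstart 1.4+; on older versions wrap the exec in su instead
setuid deploy
chdir /var/www/cashyy/current

exec bundle exec sidekiq -e production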
As an alternative to an upstart script, you can use Capistrano and Capistrano-Sidekiq to manage those Sidekiqs.
We have Sidekiq running on 3 machines and have had a good experience with these two libraries/tools.
Note: we currently use an older version of Capistrano (2.15.5)
In our architecture, the three machines are customized slightly on deploy. This led us to break up our capistrano deploy scripts by machine so that we could customize some classes, manage Sidekiq, etc. Our capistrano files are structured something like this:
- config/
  - deploy.rb
  - deploy/
    - gandalf.rb
    - gollum.rb
    - legolas.rb
With capistrano-sidekiq, we are able to control, well, Sidekiq :) at any time (during a deploy or otherwise). We set up the Sidekiq aspects of our deploy scripts in the following way:
# config/deploy.rb
# global sidekiq settings
set :sidekiq_default_hooks, false
set :sidekiq_cmd, "#{fetch(:bundle_cmd, 'bundle')} exec sidekiq"
set :sidekiqctl_cmd, "#{fetch(:bundle_cmd, 'bundle')} exec sidekiqctl"
set :sidekiq_role, :app
set :sidekiq_pid, "#{current_path}/tmp/pids/sidekiq.pid"
set :sidekiq_env, fetch(:rack_env, fetch(:rails_env, fetch(:default_stage)))
set :sidekiq_log, File.join(shared_path, 'log', 'sidekiq.log')
# config/deploy/gandalf.rb
# Custom Sidekiq settings
set :sidekiq_timeout, 30
set :sidekiq_processes, 1
namespace :sidekiq do
  # .. code omitted from methods and tasks for brevity
  def for_each_process(&block)
  end

  desc 'Quiet sidekiq (stop accepting new work)'
  task :quiet, :roles => lambda { fetch(:sidekiq_role) }, :on_no_matching_servers => :continue do
  end

  desc 'Stop sidekiq'
  task :stop, :roles => lambda { fetch(:sidekiq_role) }, :on_no_matching_servers => :continue do
  end

  desc 'Start sidekiq'
  task :start, :roles => lambda { fetch(:sidekiq_role) }, :on_no_matching_servers => :continue do
  end

  desc 'Restart sidekiq'
  task :restart, :roles => lambda { fetch(:sidekiq_role) }, :on_no_matching_servers => :continue do
  end
end
When I need to restart one of my Sidekiq instances, I can just go to my terminal and execute the following:
$ bundle exec cap gandalf sidekiq:restart
$ bundle exec cap gollum sidekiq:stop
It's made Sidekiq management quite painless for our team, and we thought it would be worth sharing in the event something similar could help you out.

deploy similar ruby on rails code to multiple servers

I have a Rails app looking good on my localhost. Now I want to deploy it to multiple servers (one load balancer and two application servers to be exact, with a possible increase in the future), and somehow I'm lost. This would be my first time deploying a web app by myself, so I'm sorry for my lack of knowledge.
I want all application servers to run exactly the same code.
And when I create new content, I want it stored in each server's database instance (MySQL), so that when I take a server down for maintenance and updates, the remaining servers can serve users the exact same content. I've read that Capistrano could help me with this, but somehow I managed to get lost in learning how to do it. So, how should I proceed from here? What should the Capistrano recipe look like, and do I have to tweak database.yml in my Rails app as well?
Thank you very much for help.
You can use roles to deploy the same application to multiple servers.
Assuming you're using the multistage extension, define the roles in production.rb:
server1 = 'appserver1.tld'
server2 = 'appserver2.tld'
server3 = 'webserver1.tld'
role :app, server1, server2
role :web, server3
The web server will run on servers specified by the :web role.
The app layer will run on servers specified by the :app role.
If you run migrations or other DB operations during deploy, you should also specify a server under the :db role. For example:
role :db, 'dbserver.tld', :primary => true
You may have multiple DB servers, but by specifying one as the primary server capistrano will only run DB operations on that server.
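Since you asked about database.yml: rather than each app server writing to its own local MySQL instance, the usual setup is to point every app server at the same database host, so they all serve identical content. A sketch (hostnames and credentials are placeholders):

# config/database.yml on each app server
production:
  adapter: mysql2   # or mysql on older Rails
  host: dbserver.tld
  database: myapp_production
  username: myapp
  password: secret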
In your deploy.rb, you can also create tasks that run only for certain roles. For example:
task :restart, :roles => :app, :except => { :no_release => true } do
  run "touch #{current_path}/tmp/restart.txt"
end
In the above example, :except => { :no_release => true } means the task will be skipped on any server flagged with :no_release => true (that is, servers that never receive code releases).
This wiki article may be of further help to you.
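If the multistage extension isn't set up yet, a minimal sketch for Capistrano 2 (assuming the capistrano-ext gem):

# config/deploy.rb
set :stages, %w(staging production)
set :default_stage, "staging"
require 'capistrano/ext/multistage'

With that in place, cap production deploy loads config/deploy/production.rb (including the roles above) before deploying.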

Nginx rolling restart of Rails app with capistrano

For the life of me I can't figure out how to make this work properly.
The problem is similar to what others have, such as: How to do a rolling restart of a cluster of mongrels
We, however, are using Nginx/Passenger instead of Mongrel.
The issue is that on a deploy if we use this standard :restart task:
task :restart, :roles => [:app], :except => {:no_release => true} do
  run "cd #{deploy_to}/current && touch tmp/restart.txt"
end
It touches the restart.txt file on every web server, but any Passenger instances currently serving requests need to finish before new ones are spawned, it seems. This creates a serious delay and leaves our app unavailable for up to 2 minutes while everything comes back up.
In order to get around that the plan is to do the following:
1. deploy code
2. go to server 1, remove it from the load balancer
3. restart nginx-passenger on server 1
4. wait 60 seconds
5. add server 1 back to the load balancer
6. go to server 2 (repeat steps 3 - 5)
To accomplish this, I attempted the following:
(lb.txt is the file that the load balancer looks for)
task :restart, :roles => [:app], :except => {:no_release => true} do
  servers = find_servers_for_task(current_task)
  servers.map do |s|
    run "cd #{deploy_to}/current && echo '' > public/lb.txt", :host => s.host
    run %Q{rvmsudo /etc/init.d/nginx-passenger restart > /dev/null}, :host => s.host
    sleep 60
    run "cd #{deploy_to}/current && echo 'ok' > public/lb.txt", :host => s.host
  end
end
This almost works. However, during the deploy the loop ran once for every server listed in the :app role: we currently have 6 app servers, so the loop ran 6 times, restarting nginx-passenger 6 times on each server.
I just need this loop to run through one time.
I know it seems that eventually passenger will get rolling restarts, but they do not seem to exist yet.
If it helps, we are using Capistrano 2.x and Rails 3
Any help would be great.
Thanks.
run "cd #{deploy_to}/current && echo 'ok' > public/lb.txt", :host => s.host
should actually be:
run "cd #{deploy_to}/current && echo 'ok' > public/lb.txt", :hosts => s.host
I ran across the capify-ec2 gem, which has a rolling restart feature (capify-ec2 on GitHub).
I am about to install it and try it out.
Here's the description from the readme that describes what their rolling restart feature does:
"This feature allows you to deploy your code to instances one at a time, rather than simultaneously. This becomes useful for more complex applications that may take longer to startup after a deployment. Capistrano will perform a full deploy (including any custom hooks) against a single instance, optionally perform a HTTP healthcheck against the instance, then proceed to the next instance if deployment was successful.
After deployment a status report is displayed, indicating on which instances deployment succeeded, failed or did not begin. With some failures, further action may need to be taken manually; for example if an instance is removed from an ELB (see the 'Usage with Elastic Load Balancers' section below) and the deployment fails, the instance will not be reregistered with the ELB, for safety reasons."

Scaling out an app that hits external apis

I'm using beanstalkd to background-process API calls to the Facebook Graph API, and I want the app to stay current, i.e. hit the Facebook API every 10 minutes to get the info. I thought about creating a simple script that loads the necessary info from the DB (FB ids/urls), queues jobs in beanstalkd, and then sleeps for 9 minutes. Maybe use God to make sure the script keeps running, and to restart it if memory consumption gets too big.
Then I started reading about DRb and wondered if there's a way (or a need) to integrate the two.
I asked in #rubyonrails and got cron and a regular .rb script as the two options. Just wondering if there's a better way.
For simplicity of configuration, I would recommend using delayed_job and a cron job that calls a rake task to handle queueing the jobs.
Monit is also a good alternative to God and seems to be more stable and less memory-hungry for process monitoring.
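A sketch of what that rake task might look like (FacebookRefreshJob and Page are hypothetical names standing in for your own classes):

# lib/tasks/facebook.rake
namespace :facebook do
  desc "Enqueue delayed_job jobs that refresh data from the Facebook Graph API"
  task :refresh => :environment do
    Page.find_each do |page|
      Delayed::Job.enqueue(FacebookRefreshJob.new(page.id))
    end
  end
end

# the job object only needs a perform method:
# class FacebookRefreshJob < Struct.new(:page_id)
#   def perform
#     # call the Graph API here and persist the result
#   end
# end

A crontab entry then replaces the sleep loop, e.g. every 10 minutes:

*/10 * * * * cd /var/www/app/current && RAILS_ENV=production bundle exec rake facebook:refresh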
For delayed_job you need to add the following to your deploy script (assuming you plan to deploy with Capistrano):
namespace :delayed_job do
  def rails_env
    fetch(:rails_env, false) ? "RAILS_ENV=#{fetch(:rails_env)}" : ''
  end

  desc "Stop the delayed_job process"
  task :stop, :roles => :app do
    run "cd #{current_path};#{rails_env} script/delayed_job stop"
  end

  desc "Start the delayed_job process"
  task :start, :roles => :app do
    run "cd #{current_path};#{rails_env} script/delayed_job start"
  end

  desc "Restart the delayed_job process"
  task :restart, :roles => :app do
    run "cd #{current_path};#{rails_env} script/delayed_job stop"
    run "cd #{current_path};#{rails_env} script/delayed_job start"
  end
end
I had to extract these recipes from the delayed_job gem to get them to run.
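To have these fire automatically during deploys, wire them into the deploy flow with the usual hooks (as suggested in the delayed_job README):

after "deploy:stop",    "delayed_job:stop"
after "deploy:start",   "delayed_job:start"
after "deploy:restart", "delayed_job:restart"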

Deploy from Git using Capistrano without a hard reset?

I've an issue at the moment where we are running a CMS within a site (BrowserCMS) that lets the user upload files. However, every time I do a deploy, Capistrano runs a hard reset, thus nuking any uploaded files.
Does anyone have any suggestions as to how to prevent the hard reset and just do a pull, or a way of moving the uploaded files elsewhere, without having to change the application code?
This might not be the right approach.
You should include your 'images' folder in your .gitignore and symlink the $current_release/images folder to $shared/images.
This may be done automatically on every deployment if you put this in your deploy.rb:
task :link_imgs do
  run "ln -s #{shared_path}/images #{release_path}/images"
end
after "deploy:update_code", :link_imgs
I've done the same with my CMS and it works like a charm.
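One caveat: the shared directory must exist before the symlink will resolve. A sketch of a one-time setup task (assuming the standard Capistrano shared_path layout):

task :setup_uploads do
  run "mkdir -p #{shared_path}/images"
end
after "deploy:setup", :setup_uploads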
This doesn't quite meet your criteria of "without having to change the application code".
However, after running into a similar issue, I moved my uploaded images from /public/images to /public/system/images; the /public/system directory is symlinked to shared storage rather than 'versioned' by each Capistrano deployment, so the images survive.
Could it be the Capistrano 'versioning' causing the problem (instead of a git reset)?
cap deploy calls deploy:update and deploy:restart.
deploy:update does the versioning and copying of files.
deploy:restart does the actual restart; overload it at your convenience, usually in your config/deploy.rb file:
namespace :deploy do
  desc "Softly restart the server"
  task :restart, :roles => :app, :except => { :no_release => true } do
    my_own.restart_recipe
  end
end
