The GitHub guys recently released their background processing app which uses Redis:
http://github.com/defunkt/resque
http://github.com/blog/542-introducing-resque
I have it working locally, but I'm struggling to get it working in production. Has anyone:
Got a Capistrano recipe to deploy workers (control the number of workers, restart them, etc.)?
Deployed workers to machine(s) separate from where the main app is running? What settings were needed there?
Gotten Redis to survive a reboot on the server (I tried putting it in cron, but no luck)?
Worked resque-web (their excellent monitoring app) into your deploy?
Thanks!
P.S. I posted an issue on Github about this but no response yet. Hoping some SO gurus can help on this one as I'm not very experienced in deployments. Thank you!
I'm a little late to the party, but thought I'd post what worked for me. Essentially, I have god setup to monitor redis and resque. If they aren't running anymore, god starts them back up. Then, I have a rake task that gets run after a capistrano deploy that quits my resque workers. Once the workers are quit, god will start new workers up so that they're running the latest codebase.
Here is my full writeup of how I use resque in production:
http://thomasmango.com/2010/05/27/resque-in-production
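For reference, the worker-quitting piece can be a small rake task that signals each registered worker. Here is a rough sketch of my own (not taken verbatim from the writeup above); it assumes the workers run on the deploy host and relies on Resque's worker id format of hostname:pid:queues:

# lib/tasks/resque.rake -- rough sketch; assumes workers run on this machine
namespace :resque do
  desc "Gracefully quit all Resque workers so god can restart them on new code"
  task :quit_workers => :environment do
    pids = Resque.workers.map { |worker| worker.to_s.split(':')[1] }
    # QUIT lets each worker finish its current job before exiting
    system("kill -QUIT #{pids.join(' ')}") if pids.any?
  end
end

God then notices the dead processes and boots fresh workers against the newly deployed code.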
I just figured this out last night. For Capistrano you should use san_juan, and then I like using God to manage deployment of the workers. As for surviving a reboot, I am not sure; I reboot every 6 months, so I am not too worried.
Although he suggests different ways of starting it, this is what worked most easily for me (within your deploy.rb):
require 'san_juan'
after "deploy:symlink", "god:app:reload"
after "deploy:symlink", "god:app:start"
Managing where it runs (on another server, etc.) is covered in the configuration section of the README.
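If you end up writing the God config yourself rather than leaning on san_juan, a minimal watch file for the workers might look roughly like this (paths, queue names and worker count are placeholders for your setup, adapted from the style of Resque's example god config):

# config/resque.god -- a rough sketch, not san_juan's generated config
rails_root = "/var/www/server.com/current"

2.times do |num|
  God.watch do |w|
    w.name     = "resque-worker-#{num}"
    w.group    = "resque"
    w.interval = 30.seconds
    w.dir      = rails_root
    w.env      = { "RAILS_ENV" => "production", "QUEUE" => "*" }
    w.start    = "bundle exec rake -f #{rails_root}/Rakefile environment resque:work"

    # start the worker whenever it is not running
    w.start_if do |start|
      start.condition(:process_running) do |c|
        c.interval = 5.seconds
        c.running  = false
      end
    end
  end
end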
I use Passenger on my slice, so it was relatively easy; I just needed a config.ru file like so:
require 'resque/server'
run Rack::URLMap.new \
"/" => Resque::Server.new
For my VirtualHost file I have:
<VirtualHost *:80>
  ServerName resque.server.com
  DocumentRoot /var/www/server.com/current/resque/public
  <Location />
    AuthType Basic
    AuthName "Resque Workers"
    AuthUserFile /var/www/server.com/current/resque/.htpasswd
    Require valid-user
  </Location>
</VirtualHost>
Also, a quick note: make sure you override the resque:setup rake task; it will save you a lot of time when spawning new workers with God.
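For example, resque:setup is the hook Resque's rake tasks run before a worker starts, so overriding it to load the Rails environment keeps the worker start command that God runs simple. A minimal sketch:

# lib/tasks/resque.rake -- minimal sketch of overriding the resque:setup hook
require 'resque/tasks'

# Load the full Rails environment once per worker so jobs can see your models
task "resque:setup" => :environment do
  # put anything else your workers need (connections, constants) here
end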
I ran into a lot of trouble, so if you need any more help, just post a comment.
Garrett's answer really helped; I just wanted to post a few more details. It took a lot of tinkering to get it right...
I'm using passenger also, but nginx instead of apache.
First, don't forget that you need to install Sinatra; this threw me for a while.
sudo gem install sinatra
Then you need to make a directory for the app to run from, and it has to have a public and a tmp folder. They can be empty, but the problem is that git won't keep an empty directory in the repo; a directory has to contain at least one file, so I made some junk files as placeholders. This is a known quirk of git.
I'm using the resque plugin, so I made the directory there (where the default config.ru is). It looks like Garrett made a new 'resque' directory in his rails_root. Either one should work. For me...
cd MY_RAILS_APP/vendor/plugins/resque/
mkdir public
mkdir tmp
touch public/placeholder.txt
touch tmp/placeholder.txt
Then I edited MY_RAILS_APP/vendor/plugins/resque/config.ru so it looks like this:
#!/usr/bin/env ruby
require 'logger'

$LOAD_PATH.unshift File.expand_path(File.dirname(__FILE__) + '/lib')
require 'resque/server'

use Rack::ShowExceptions

# Set this to the basic auth password you want to protect Resque with.
AUTH_PASSWORD = "ADD_SOME_PASSWORD_HERE"

if AUTH_PASSWORD
  Resque::Server.use Rack::Auth::Basic do |username, password|
    password == AUTH_PASSWORD
  end
end

run Resque::Server.new
Don't forget to change ADD_SOME_PASSWORD_HERE to the password you want to use to protect the app.
Finally, I'm using Nginx so here is what I added to my nginx.conf
server {
  listen 80;
  server_name resque.seoaholic.com;
  root /home/admin/public_html/seoaholic/current/vendor/plugins/resque/public;
  passenger_enabled on;
}
And to get it restarted on your deploys, you probably want something like this in your deploy.rb:
run "touch #{current_path}/vendor/plugins/resque/tmp/restart.txt"
I'm not really sure if this is the best way, since I've never set up Rack/Sinatra apps before, but it works.
This is just to get the monitoring app going. Next I need to figure out the god part.
Use these steps instead of configuring things at the web server level and editing the plugin:
# The steps needed to use resque-web within your application
# In routes.rb
ApplicationName::Application.routes.draw do
  resources :some_controller_name
  mount Resque::Server, :at => "/resque"
end

# That's it. Now you can access it from within your application, i.e.
# http://localhost:3000/resque

# To ensure that Resque::Server is loaded, add the require option to your Gemfile:
gem 'resque', :require=>"resque/server"
# To add basic HTTP authentication, create a resque_auth.rb file in the initializers folder and add these lines for security:
Resque::Server.use(Rack::Auth::Basic) do |user, password|
  password == "secret"
end
#That's It !!!!! :)
#Thanks to Ryan from RailsCasts for this valuable information.
#http://railscasts.com/episodes/271-resque?autoplay=true
https://gist.github.com/1060167
Related
Does anyone have a good way to manage the app server with Capistrano? This seems to be a "left to your own devices" situation, and I've yet to see a good example of it.
There are basically two trains of thought that I see.
1) Daemonize it as the deploy user. Pros: no system service etc., so no permission issues. However, this is fragile: if the machine is rebooted, blam, the system goes down.
2) Init scripts. Install an init script and use that to manage the server. This would survive reboots and allow for, say, /etc/init.d/myapp restart/stop/start control if you ssh'd in. This is decent apart from two reasons:
Most people manage it from Capistrano with sudo (I feel like Capistrano 3 discourages this)
I've yet to see a good Upstart (or similar) script that works with Unicorn for this.
I'm experimenting with nginx + Unicorn. nginx I have set up nicely: I've added a site to sites-available and pointed the upstream to /appserver/public. This works great, asset precompilation works fine, and all is well; I can redeploy and be served new assets. It's simple and works with the OS init process. However, I've lucked out there, as the nginx config is basically static and nginx only has to serve static files.
The app server (Unicorn/Thin/Puma/whatever) is the part that's tripping me up. I would like it to reload the application on cap deploy, but I'm struggling to find a good enough example of this.
In summary: what is a simple way of having a Rails application survive reboots and reload when cap deploy is called?
If you use Passenger with your nginx, instead of Unicorn or Thin, you can restart after deployment by touching the tmp/restart.txt file:
task :restart do
  on roles(:app), in: :sequence, wait: 5 do
    execute :touch, release_path.join('tmp/restart.txt')
  end
end
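To run that on every deploy, you can hook it in (assuming the task above sits inside namespace :deploy in your config/deploy.rb):

# config/deploy.rb -- Capistrano 3 hook; fires after the new release is published
after 'deploy:publishing', 'deploy:restart'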
To reload a puma server after deploy use capistrano3-puma:
Gemfile:
gem 'capistrano3-puma'
Capfile:
require 'capistrano/puma'
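Depending on your capistrano3-puma version, newer releases also ask you to register the plugin in the Capfile (check the gem's README for your version):

# Capfile -- only needed on newer capistrano3-puma releases
require 'capistrano/puma'
install_plugin Capistrano::Puma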
I have an nginx + Passenger stack for my Rails app.
Now, after each server restart, I need to run this in a terminal in the project folder:
rake ts:start
But how can I automate it, so that after each server restart Thinking Sphinx is started automatically without my typing the command in a terminal?
I use Rails 3.2.8 and Ubuntu 12.04.
I can't figure out what to try; please help me. How can I do this? Any advice?
What I did to solve the same problem:
In config/application.rb, add:
module Rails
  def self.rake?
    !!@rake
  end

  def self.rake=(value)
    @rake = !!value
  end
end
In Rakefile, add this line:
Rails.rake = true
Finally, in config/initializers/start_thinking_sphinx.rb put:
unless Rails.rake?
  begin
    # Probe the Thinking Sphinx connection
    ThinkingSphinx.search "test", :populate => true
  rescue Mysql2::Error => err
    puts ">>> ThinkingSphinx is unavailable. Trying to start .."
    MyApp::Application.load_tasks
    Rake::Task['ts:start'].invoke
  end
end
(Replace MyApp above with your app's name)
Seems to work so far, but if I encounter any issues I'll post back here.
Obviously, the above doesn't take care of monitoring that the server stays up. You might want to do that separately. Or an alternative could be to manage the service with Upstart.
If you are using the excellent whenever gem to manage your crontab, you can just put
every :reboot do
  rake "ts:start"
end
in your schedule.rb and it seems to work great. I just tested on an EC2 instance running Ubuntu 14.04.
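If you deploy with Capistrano, whenever also ships an integration that rewrites the crontab on each deploy, so the reboot entry stays current (the exact require location depends on your whenever and Capistrano versions; check the whenever README):

# config/deploy.rb (Capistrano 2) or Capfile (Capistrano 3)
require 'whenever/capistrano'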
There are two options I can think of.
You could look at how Ubuntu manages start-up scripts and add one for this (perhaps in /etc/init?).
You could set up monit or another monitoring tool and have it keep Sphinx running. Monit should boot automatically when your server restarts, and so it should ensure Sphinx (and anything else it's tracking) is running.
The catch with Monit and other such tools is that when you deliberately stop Sphinx (say, to update configuration structure and corresponding index changes), it might start it up again before it's appropriate. So I think you should start with the first of these two options - I just don't know a great deal about the finer points of that approach.
I followed @pat's suggestion and wrote a script to start ThinkingSphinx whenever the server boots up. You can see it as a gist -
https://gist.github.com/declan/4b7cc4fb4926df16f54c
We're using Capistrano for deployment to Ubuntu 14.04, and you may need to modify the path and user name to match your server setup. Otherwise, all you need to do is
Put this script into /etc/init.d/thinking_sphinx
Confirm that the script works: calling /etc/init.d/thinking_sphinx start on the command line should start ThinkingSphinx for your app, and /etc/init.d/thinking_sphinx stop should stop it
Tell Ubuntu to run this script automatically on startup: update-rc.d thinking_sphinx defaults
There's a good post on debian-administration.org called making scripts run at boot time that has more details.
I'm using EventMachine and Momentarily to start a TCP server along with my Rails application. This is started from config/initializers/momentarily.rb.
My problem is that it also starts when I run rake tasks, like db:migrate. I only want it to start when I start the HTTP server. Environments won't help, since both the server start and the rake tasks run under the development environment. Is there a way of knowing that the application is running the HTTP server, as opposed to anything else? Note that it's not only rake tasks; EM also starts if I run the Rails console, which again is not desirable in my case.
unless File.basename($0) == "rake" && ARGV.include?("db:migrate")
# you are not in rake db:migrate
end
There's not a great way of doing this that I know of. You could copy newrelic's approach (check discover_dispatcher in local_environment.rb) which basically has a list of heuristics used to detect if it is running inside passenger, thin, etc.
For passenger it checks
defined?(::PhusionPassenger)
for thin it checks
if defined?(::Thin) && defined?(::Thin::Server)
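Putting those checks together, the initializer could guard the TCP server with something like this (a sketch only; the constant checks are heuristics, not a bulletproof test):

# config/initializers/momentarily.rb -- sketch; only start the TCP server
# when a known app server constant has been loaded
in_app_server = defined?(::PhusionPassenger) ||
                (defined?(::Thin) && defined?(::Thin::Server))

if in_app_server
  # ... start the EventMachine TCP server here ...
end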
Set an environment variable in the config.ru file, and use it anywhere in the code to detect whether the app was started with a rails server command.
For example:
File: config.ru
ENV['server_mode'] = '1'
And using it somewhere as:
File: config/environment.rb
Thread.new { infinite_loop! }.join if ENV['server_mode'] == '1'
Reference: Answer
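For context, the flag would sit near the top of a Rails 3-era config.ru, something like this (MyApp is a placeholder for your application's module name):

# config.ru -- sketch of where the flag goes
ENV['server_mode'] = '1'

require ::File.expand_path('../config/environment', __FILE__)
run MyApp::Application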
Maybe you can implement a switch in the initializer based on ARGV?
Something like:
if ARGV.join(' ').match /something/
# your initializer code here
end
Don't start that other server from an initializer. Create a daemon in script/momentarily and start it from within your app.
After your application launches, you could have it shell out to check ps. If ps shows that the HTTP server is running and the running HTTP server has the same pid as your application (check the pid by inspecting $$), then you could launch the TCP server.
In addition to the great answer by Frederick Cheung above, there can be some other "footprints" in the actual process environment. E.g., Phusion Passenger adds certain variables to ENV, such as:
PASSENGER_APP_ENV
IN_PASSENGER
PASSENGER_SPAWN_WORK_DIR
PASSENGER_USE_FEEDBACK_FD
Web servers can also typically set the SERVER_SOFTWARE variable, e.g.:
SERVER_SOFTWARE=nginx/1.15.8 Phusion_Passenger/6.0.2
I want to do something simple with the gem rufus-scheduler:
https://github.com/jmettraux/rufus-scheduler
but I can't get it to work.
I have a regular rails app. I created a .rb file:
# test_rufus_scheduler.rb
require 'rubygems'
require 'rufus/scheduler'
scheduler = Rufus::Scheduler.start_new
scheduler.in '1s' do
puts "hello world"
end
Then, when I run ruby test_rufus_scheduler.rb, nothing happens. Am I doing it right? gem list shows rufus-scheduler.
Thanks.
If your script exits right away, please try adding
scheduler.join
at the end. Please note that it's different when running the script standalone versus via Rails. See the README for detailed information.
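In other words, the standalone script from the question only needs the join so the main thread stays alive long enough for the job to fire:

# test_rufus_scheduler.rb
require 'rubygems'
require 'rufus/scheduler'

scheduler = Rufus::Scheduler.start_new

scheduler.in '1s' do
  puts "hello world"
end

scheduler.join  # blocks here; not needed when the scheduler lives inside a Rails process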
Add the lines below to your apache2 config and restart your apache2 server:
RailsAppSpawnerIdleTime 0
PassengerMinInstances 1
Found a solution to this here (which is also in sync with Ajet's answer above).
Production servers require a bit of additional setup. On most production web servers, idle Ruby processes are killed. In order for Rufus to work, you'll need to stop this from happening. For Passenger/nginx you can copy the code below into the nginx.conf config for your website, after the line that says passenger_enabled on;.
nginx.conf:
passenger_spawn_method direct;
passenger_min_instances 1;
passenger_pool_idle_time 0;
In Rails, what is the best strategy for restarting app servers like Thin after a code deployment through a Capistrano script? I would like to be able to deploy code to production servers without fearing that a user might see the 500.html page.
I found this question while looking for an answer. Because I wanted to stick with Thin, none of the answers here suited my needs. This fixed it for me:
thin restart -e production --servers 3 --onebyone --wait 30
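If you want that to run as part of the deploy, here is a sketch of wiring it into Capistrano (Capistrano 2 style; adjust the server count and wait time to your setup):

# config/deploy.rb -- sketch; overrides deploy:restart to do a rolling Thin restart
namespace :deploy do
  task :restart, :roles => :app do
    run "cd #{current_path} && bundle exec thin restart -e production --servers 3 --onebyone --wait 30"
  end
end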
Unicorn is supposed to have rolling restarts built in. I have not set up a Unicorn stack yet, but http://sirupsen.com/setting-up-unicorn-with-nginx/ looks like a good start.
The way I used to do production servers is with Apache and Passenger. That's an industry-standard setup and will allow you to deploy new versions without downtime.
Once everything is correctly set up, all you have to do is go to the app directory and create a file called restart.txt in the tmp dir.
Ex: touch tmp/restart.txt
read more here http://www.modrails.com/
http://jimneath.org/2008/05/10/using-capistrano-with-passenger-mod_rails.html
http://www.zorched.net/2008/06/17/capistrano-deploy-with-git-and-passenger/
http://snippets.dzone.com/posts/show/5466
HTH
sameera