What is the significance of the dispatch.fcgi file in Rails?

There is a file under the public folder in a Rails app called dispatch.fcgi. What is the significance of this particular file?
I opened the file and it contains this:
# # Default log path, 50 requests between GC.
# RailsFCGIHandler.process! nil, 50
#
# # Custom log path, normal GC behavior.
# RailsFCGIHandler.process! '/var/log/myapp_fcgi_crash.log'
#
require File.dirname(__FILE__) + "/../config/environment"
require 'fcgi_handler'
RailsFCGIHandler.process!
I cannot understand what this does. Can someone please explain?

That must be an old version of Rails, because this file is a relic from servers that start the Rails app via FastCGI (fcgi) in the HTTP server.
Apache and Nginx are now supported via Passenger, or you can use a proxy in front of a Mongrel cluster; none of these solutions need a dispatch.fcgi.
https://serverfault.com/questions/60222/apache-dispatch-fcgi-doesnt-get-interpreted-with-passenger
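For context, here is a rough sketch of how an old FastCGI deployment handed requests to this file; the handler setup is illustrative, though older Rails versions shipped a very similar rewrite rule in public/.htaccess for Apache with mod_fastcgi:
# public/.htaccess (assumed Apache + mod_fastcgi setup)
AddHandler fastcgi-script .fcgi
Options +FollowSymLinks +ExecCGI
RewriteEngine On
# Anything that isn't a static file gets handed to the Rails FCGI dispatcher
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ dispatch.fcgi [QSA,L]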

Related

Ruby on Rails header for sending files in NGINX

My application runs on an Nginx and Passenger server. Inside production.rb I see a line that says:
# Specifies the header that your server uses for sending files.
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # for NGINX
How does it specify the header used for sending files? How does Rails send files without this turned on?
Is it good practice to turn this on? Does it make my application run faster?
The behavior is explained in the send_file documentation.
You should use this option; it will make your application faster, and it is good practice to do so.
If you don't use this option, the file will be read by the Ruby process, sent to nginx, and only then to the client.
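For illustration, here is a minimal sketch of how the two pieces fit together; the controller name and file path are hypothetical:
# config/environments/production.rb
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # for NGINX

# app/controllers/downloads_controller.rb (hypothetical controller)
class DownloadsController < ApplicationController
  def show
    # With x_sendfile_header set, Rails only writes the X-Accel-Redirect
    # header; nginx then streams the file itself instead of the Ruby process.
    send_file Rails.root.join('private', 'report.pdf'),
              :type => 'application/pdf',
              :disposition => 'attachment'
  end
end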

How to enable compression in Ruby on Rails?

I posted a similar question here:
Serving Compressed Assets in Heroku with Rack-Zippy
but decided to give up on that approach, since I couldn't get it to work.
I ran PageSpeed Insights on my website to measure its speed.
The most important suggestion I received was to Enable Compression.
Compressing resources with gzip or deflate can reduce the number of bytes sent over the network.
Enable compression for the following resources to reduce their transfer size by 191.2KiB
(74% reduction).
I've followed the instructions on this website
https://developers.google.com/speed/docs/insights/EnableCompression
and it says to consult the documentation for your web server on how to enable compression:
I've used this website to find out my web server
http://browserspy.dk/webserver.php
It turns out that my web server is WEBrick.
The PageSpeed Insights page only lists the following 3 servers:
Apache: Use mod_deflate
Nginx: Use ngx_http_gzip_module
IIS: Configure HTTP Compression
I've searched for documentation on gzip compression for WEBrick but couldn't find anything.
I've searched for how to enable compression in Rails and couldn't find anything either. That's why I'm asking here.
I've tried using Rack Zippy but gave up on it.
Right now, I don't even know where to begin. My first step is finding out what I should do.
Edit
I followed Ahmed's suggestion of using Rack::Deflater.
I confirmed that I had it by running
rake middleware
=> use Rack::Deflater
and then
git add .
git commit -m '-'
git push heroku master
Unfortunately PageSpeed still says the resources need to be compressed. I confirmed that by going into Developer Tools > Network and refreshing the page. Size and content were virtually identical for every resource, meaning the files are not compressed.
Is there something wrong with one of my files?
Thank you for your help.
Here is my full config/application.rb file
require File.expand_path('../boot', __FILE__)
require 'rails/all'

Bundler.require(*Rails.groups)

module AppName
  class Application < Rails::Application
    config.middleware.use Rack::Deflater
    config.assets.precompile += %w(*.png *.jpg *.jpeg *.gif)
    config.exceptions_app = self.routes
    config.cache_store = :memory_store
  end
end
If there is a problem, the source is likely in there, right?
Do I need to install a separate gem for Rack::Deflater?
Enable compression
Add it to config/application.rb:
module YourApp
  class Application < Rails::Application
    config.middleware.use Rack::Deflater
  end
end
Source: http://robots.thoughtbot.com/content-compression-with-rack-deflater
Rack::Deflater should work if you use insert_before (instead of use) to place it near the top of the middleware stack, prior to any other middleware that might send a response. .use places it at the bottom of the stack. On my machine the topmost middleware is Rack::Sendfile, so I would use:
config.middleware.insert_before(Rack::Sendfile, Rack::Deflater)
You can get the list of middleware in order of loading by doing rake middleware from the command line.
Note: it is worth reading up on insert_before vs use in the Rack middleware stack.
As per the author of Rack::Deflater, it should be placed after ActionDispatch::Static in a Rails app. The reasoning is that if your app is also serving static assets (like on Heroku, for example), those assets are already compressed when served from disk. Inserting it before would only result in Rack::Deflater attempting to re-compress those assets. Therefore, as a performance optimisation:
# application.rb
config.middleware.insert_after ActionDispatch::Static, Rack::Deflater
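Whichever placement you choose, you can check that responses really are compressed; a quick sketch, assuming the app is running locally on port 3000:
curl -s -D - -o /dev/null -H 'Accept-Encoding: gzip' http://localhost:3000/ | grep -i content-encoding
# Expect to see: Content-Encoding: gzip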

Rufus Scheduler not running

I want to do something simple with the gem rufus-scheduler:
https://github.com/jmettraux/rufus-scheduler
but I can't get it to work.
I have a regular Rails app. I created a .rb file:
# test_rufus_scheduler.rb
require 'rubygems'
require 'rufus/scheduler'

scheduler = Rufus::Scheduler.start_new

scheduler.in '1s' do
  puts "hello world"
end
Then, when I run ruby test_rufus_scheduler.rb, nothing happens. Am I doing it right? gem list shows rufus-scheduler.
Thanks.
If your script exits right away, please try adding
scheduler.join
at the end. Please note that it behaves differently when running the script standalone versus via Rails. See the README for detailed information.
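For reference, here is the script from the question with the join added; this sketch covers the standalone case only, since under Rails you would not block the process this way:
# test_rufus_scheduler.rb
require 'rubygems'
require 'rufus/scheduler'

scheduler = Rufus::Scheduler.start_new

scheduler.in '1s' do
  puts "hello world"
end

# Keep the main thread alive until the scheduler is done,
# so the process doesn't exit before the job fires.
scheduler.join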
Add the lines below to your Apache config and restart the Apache server:
RailsAppSpawnerIdleTime 0
PassengerMinInstances 1
Found a solution to this here (which is also in sync with Ajet's answer above).
Production servers require a bit of additional setup. On most production web servers, idle Ruby processes are killed. In order for Rufus to work, you'll need to stop this from happening. For Passenger/Nginx, you can copy the code below into your site's nginx.conf, after the line that says passenger_enabled on;.
nginx.conf:
passenger_spawn_method direct;
passenger_min_instances 1;
passenger_pool_idle_time 0;

Rails 3, apache & passenger, send_file sends zero byte files

I'm struggling with send_file on Rails 3.0.9 running Ruby 1.9, Passenger 3.0.8 and Apache on Ubuntu Lucid.
The xsendfile module is installed and loaded into Apache:
root~# a2enmod xsendfile
Module xsendfile already enabled
It's symlinked correctly in mods-enabled:
lrwxrwxrwx 1 root root 32 Aug 8 11:20 xsendfile.load -> ../mods-available/xsendfile.load
config.action_dispatch.x_sendfile_header = "X-Sendfile" is set in my production.rb.
Using send_file results in zero byte files being sent to the browser:
filepath = Rails.root.join('export', "#{filename}.csv")
if File.exists?(filepath)
  send_file filepath, :type => 'text/csv'
end
I believe the previous answer isn't the right way to go because, as far as I can tell, Apache isn't handling the downloads at all when that solution is applied; the Rails process is. That's why the nginx directive, which shouldn't work, appears to. You get the same result by commenting out the config directive.
Another drawback (aside from tying up a Rails process for too long) is that when the data streaming is handled by the Rails process, the response doesn't seem to include the Content-Length header. So a user doesn't know how large the file they're downloading is, nor how long it will take (a usability problem).
I was able to get it to work by ensuring that mod_xsendfile was properly included and loaded in my Apache config, like so (this will depend on your Apache install, etc.):
LoadModule xsendfile_module /usr/lib64/httpd/modules/mod_xsendfile.so
...
# enable mod_x_sendfile for offloading zip file downloads from rails
XSendFile on
XSendFilePath /
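If you would rather not whitelist the whole filesystem, XSendFilePath can be scoped more tightly; the path below is illustrative and should point at wherever your app actually writes its exports:
# Only allow X-Sendfile offloading for files under the app's export directory
XSendFile on
XSendFilePath /var/www/myapp/export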

How to deploy resque workers in production?

The GitHub guys recently released their background processing app which uses Redis:
http://github.com/defunkt/resque
http://github.com/blog/542-introducing-resque
I have it working locally, but I'm struggling to get it working in production. Has anyone:
got a Capistrano recipe to deploy workers (controlling the number of workers, restarting them, etc.)?
deployed workers to machine(s) separate from where the main app is running, and what settings were needed there?
gotten redis to survive a reboot on the server (I tried putting it in cron, but no luck)?
worked resque-web (their excellent monitoring app) into your deploy?
Thanks!
P.S. I posted an issue on Github about this but no response yet. Hoping some SO gurus can help on this one as I'm not very experienced in deployments. Thank you!
I'm a little late to the party, but thought I'd post what worked for me. Essentially, I have god set up to monitor redis and resque. If they aren't running anymore, god starts them back up. Then I have a rake task, run after a Capistrano deploy, that quits my resque workers. Once the workers have quit, god starts new workers so that they're running the latest codebase.
Here is my full writeup of how I use resque in production:
http://thomasmango.com/2010/05/27/resque-in-production
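A minimal sketch of the deploy-time piece described above, assuming Capistrano 2 and that god (or similar) restarts workers once the old ones exit; the task name and pkill pattern are hypothetical and should match however your workers appear in ps:
# config/deploy.rb
namespace :resque do
  desc "Ask running resque workers to finish their current job and exit"
  task :quit_workers, :roles => :app do
    # QUIT tells a resque worker to finish its current job and shut down
    run "pkill -QUIT -f 'resque-' || true"
  end
end

after "deploy:restart", "resque:quit_workers"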
I just figured this out last night. For Capistrano you should use san_juan, and then I like using God to manage the workers. As for surviving a reboot, I am not sure, but I reboot every 6 months so I am not too worried.
Although he suggests different ways of starting it, this is what worked most easily for me (within your deploy.rb):
require 'san_juan'
after "deploy:symlink", "god:app:reload"
after "deploy:symlink", "god:app:start"
To manage where it runs, on another server, etc, he covers that in the configuration section of the README.
I use Passenger on my slice, so it was relatively easy; I just needed a config.ru file like so:
require 'resque/server'

run Rack::URLMap.new \
  "/" => Resque::Server.new
For my VirtualHost file I have:
<VirtualHost *:80>
  ServerName resque.server.com
  DocumentRoot /var/www/server.com/current/resque/public
  <Location />
    AuthType Basic
    AuthName "Resque Workers"
    AuthUserFile /var/www/server.com/current/resque/.htpasswd
    Require valid-user
  </Location>
</VirtualHost>
Also, a quick note: make sure you override the resque:setup rake task; it will save you lots of time when spawning new workers with God.
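A minimal sketch of such an override, assuming the workers need the Rails environment loaded; the file location is just a convention:
# lib/tasks/resque.rake
require 'resque/tasks'

# Hooking :environment here means each worker can see your models without
# extra requires; anything god-spawned workers need can go in the block.
task "resque:setup" => :environment do
end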
I ran into a lot of trouble, so if you need any more help, just post a comment.
Garrett's answer really helped; I just wanted to post a few more details. It took a lot of tinkering to get it right...
I'm using Passenger also, but with nginx instead of Apache.
First, don't forget you need to install Sinatra; this threw me for a while.
sudo gem install sinatra
Then you need to make a directory for the thing to run from, and it has to have a public and tmp folder. They can be empty, but the problem is that git won't save an empty directory in the repo. The directory has to have at least one file in it, so I made some junk files as placeholders. This is a weird feature/bug in git.
I'm using the resque plugin, so I made the directory there (where the default config.ru is). It looks like Garrett made a new 'resque' directory in his rails_root. Either one should work. For me...
cd MY_RAILS_APP/vendor/plugins/resque/
mkdir public
mkdir tmp
touch public/placeholder.txt
touch tmp/placeholder.txt
Then I edited MY_RAILS_APP/vendor/plugins/resque/config.ru so it looks like this:
#!/usr/bin/env ruby
require 'logger'

$LOAD_PATH.unshift File.expand_path(File.dirname(__FILE__) + '/lib')
require 'resque/server'

use Rack::ShowExceptions

# Set the AUTH env variable to your basic auth password to protect Resque.
AUTH_PASSWORD = "ADD_SOME_PASSWORD_HERE"
if AUTH_PASSWORD
  Resque::Server.use Rack::Auth::Basic do |username, password|
    password == AUTH_PASSWORD
  end
end

run Resque::Server.new
Don't forget to change ADD_SOME_PASSWORD_HERE to the password you want to use to protect the app.
Finally, I'm using Nginx so here is what I added to my nginx.conf
server {
  listen 80;
  server_name resque.seoaholic.com;
  root /home/admin/public_html/seoaholic/current/vendor/plugins/resque/public;
  passenger_enabled on;
}
And so that it gets restarted on your deploys, you probably want something like this in your deploy.rb:
run "touch #{current_path}/vendor/plugins/resque/tmp/restart.txt"
I'm not really sure if this is the best way; I've never set up Rack/Sinatra apps before. But it works.
This is just to get the monitoring app going. Next I need to figure out the god part.
Use these steps instead of configuring things at the web server level and editing the plugin:
#The steps needed to use resque-web from within your application
#In routes.rb
ApplicationName::Application.routes.draw do
  resources :some_controller_name
  mount Resque::Server, :at => "/resque"
end

#That's it, now you can access it from within your application, i.e.
#http://localhost:3000/resque

#To ensure that Resque::Server is loaded, add its require condition in the Gemfile
gem 'resque', :require => "resque/server"

#To add basic HTTP authentication, add a resque_auth.rb file in the initializers folder with these lines for security
Resque::Server.use(Rack::Auth::Basic) do |user, password|
  password == "secret"
end
#That's It !!!!! :)
#Thanks to Ryan from RailsCasts for this valuable information.
#http://railscasts.com/episodes/271-resque?autoplay=true
https://gist.github.com/1060167
