Caching: wrong paths and pages are not expired

I've got a problem with simple page caching (Ruby 1.9.2, Rails 3.1.3, development environment):
development.rb:
config.action_controller.perform_caching = true
config.action_controller.cache_store = :file_store, 'tmp/cache'
config.action_controller.page_cache_directory = 'public/cache'
sweeper:
class CacheSweeper < ActionController::Caching::Sweeper
  observe Article, Photo, Advertisement

  def after_save(record)
    expire_home
  end
  ...

  private
  ...

  def expire_home
    expire_page(:controller => '/homes', :action => 'index')
  end
end
controllers:
class HomeController < ApplicationController
  caches_page :index
  cache_sweeper :cache_sweeper

  def index
    ....
Pages are cached in the right directory, and actions trigger the sweeper as they should, but pages are not expired and the server is trying to get cached pages from the default place.
cache: [GET /] miss
Any ideas why? Is there something wrong with my configuration?

You have the wrong controller name and a stray leading slash. Try the following:
def expire_home
  expire_page(:controller => 'home', :action => 'index')
end

expire_page also accepts the path of the route, so for example for the root URL of a cached page you could do
expire_page "/"
Also, to get your web server to look in the right place, you need to configure a rewrite rule in Apache or nginx so it checks the cache directory before passing the request to Rails.
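
For what it's worth, here is a minimal sketch of how the pieces line up under the configuration above (the path mapping is the standard page-caching convention; the exact filenames are assumptions based on the question's config):
# With config.action_controller.page_cache_directory = 'public/cache',
# a cached GET / is written to public/cache/index.html.
# expire_page must resolve to that same path to delete the file:
expire_page('/')  # removes public/cache/index.html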

Related

Rails dynamic error pages (404, 422, 500) showing as blank

I'm implementing dynamic error pages into an app, and have a feeling it's looking in the public folder for (now non-existent) templates, rather than following the routes I've set up.
In config/application.rb I've added the line config.exceptions_app = self.routes to account for this.
I've then added the following to my routes:
get "/not-found", :to => "errors#not_found"
get "/unacceptable", :to => "errors#unacceptable"
get "/internal-error", :to => "errors#internal_error"
And the errors controller looks like so:
class ErrorsController < ApplicationController
  layout 'errors'

  def not_found
    render :status => 404
  end

  def unacceptable
    render :status => 422
  end

  def internal_error
    render :status => 500
  end
end
Going to /not-found shows the template as it should be, though visiting any non-existing URL (e.g. /i-dont-exist) renders an empty page.
The only reason I could see for this would be that the exception handling needs the routes to be, for example, get "/404", :to => "errors#not_found", though, ironically, it's not finding the route for /404 (and no, that's not just it working :) ).
Any advice, greatly appreciated. Thanks, Steve.
It seems some setting is wrong.
Try this in your routes:
match '/404', to: 'errors#not_found', via: :all (match instead of get)
You mention that in application.rb you have config.exceptions_app = self.routes; that is good. But make sure you restart the server before testing your changes.
And make sure your error view files have the same names as the actions in your ErrorsController.
If you are getting any kind of (haha) error in the console, could you post it?
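
For completeness, the same pattern applied to all three pages might look like this (a sketch; via: :all matters because the exceptions app re-dispatches the original request with its original HTTP method, so a POST that errors would never match a get route):
match '/404', to: 'errors#not_found', via: :all
match '/422', to: 'errors#unacceptable', via: :all
match '/500', to: 'errors#internal_error', via: :all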
Do this instead:
routes.rb
%w(404 422 500).each do |code|
  get code, :to => "errors#show", :code => code
end
errors_controller.rb
class ErrorsController < ApplicationController
  def show
    render status_code.to_s, :status => status_code
  end

  protected

  def status_code
    params[:code] || 500
  end
end
inside your config/application.rb ensure you have:
module YourWebsite
  class Application < Rails::Application
    config.exceptions_app = self.routes
    # .. more code ..
  end
end
Then you will need the views, obviously ;) Don't forget to remove the error pages in your public directory as well.
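
For reference, render status_code.to_s looks up templates named after the status codes, so assuming ERB the view files would be:
app/views/errors/404.html.erb
app/views/errors/422.html.erb
app/views/errors/500.html.erb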

About rails expire_page routes

In my Rails 4 app I use actionpack-page_caching.
I have a controller like this:
class CodeController < ApplicationController
  def home
  end
end
and the route:
get "/code/:order/(:page)" => "code#home"
Then when I try to clear the page cache with
expire_page(:controller => 'code', :action => 'home')
I get the error:
No route matches {:action=>"home", :controller=>"code"}
Why? And what should I do?
According to the Rails Guides:
Page Caching has been removed from Rails 4. See the actionpack-page_caching gem. See DHH's key-based cache expiration overview for the newly-preferred method.
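
The quote explains where page caching lives now, but the error itself is a routing one: the route /code/:order/(:page) declares a required :order segment, so Rails cannot generate a URL (and hence a cache path) without it. A sketch of what the call would need (the order value here is an assumption):
# Supply every required dynamic segment so the route can be generated:
expire_page(:controller => 'code', :action => 'home', :order => order)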

Rails routes: different domains to different places

I've got a Rails app up and running on a server. It's a big project, so there are lots of routes involved, and two domains point to the root at the moment. I'd like to somehow design my routes.rb to interpret one domain as if a certain part of the app were its root, and use the other for everywhere else.
Something like this (very pseudocode, hope you get the idea):
whole_app.com
whole_app.com/documents
whole_app.com/share
whole_app.com/users
partial_app.com, :points_to => 'whole_app.com/share'
Can Rails handle this? Thank you!
You can achieve this by overriding the default_url_options method in your ApplicationController. This will override the host for every generated URL.
class ApplicationController < ActionController::Base
  ....
  def default_url_options
    if some_condition
      {:host => "partial_app.com"}
    else
      {:host => "whole_app.com"}
    end
  end
  ....
end
And for pointing a route to some specific URL, you may use:
match "/my_url" => redirect("http://google.com/"), :as => :my_url_path
A better way is to configure the web server itself to redirect certain URLs to a specific location.
Is it going to /share based on some kind of criteria? If so, you can do this:
routes.rb
root :to => 'pages#home'
pages_controller.rb
def home
  if (some condition is met)
    redirect_to this_path
  else
    render :layout => 'that'
  end
end
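
Another option, if the split really is per-domain rather than per-request: Rails routing can match on the host directly with a constraint. A minimal sketch (the controller names and the :partial_root helper are hypothetical):
# routes.rb
constraints(:host => 'partial_app.com') do
  match '/' => 'share#index', :as => :partial_root
end
root :to => 'pages#home'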

rails expire_page is not deleting the cached file

I have a controller action that has page caching, and I made a sweeper that calls expire_page with the controller and the action specified...
The controller action renders a js.erb template, so I am trying to ensure that expire_page deletes the .js file in public/javascripts, which it is not doing.
class JavascriptsController < ApplicationController
  caches_page :lol

  def lol
    @lol = Lol.all
  end
end
class LolSweeper < ActionController::Caching::Sweeper
  observe Lol

  def after_create(lol)
    puts "lol!!!!!!!"
    expire_page(:controller => "javascripts", :action => "lol", :format => 'js')
  end
end
... So, I visit /javascripts/lol.js and I get my template rendered... I verified that public/javascripts/lol.js exists... I then create a new Lol record, and I see "lol!!!!!!!" meaning the after_create observer method is triggered, but expire_page is doing nothing...
According to the Rails Guides: 'Page caching ignores all parameters.' I think I had a similar problem while working on caching .xml responses: I would write the cache for /lol.xml, but was trying to expire the cache for /lol (write and expire operations can be seen in the server log). The way I made it work: I made the cache "format-agnostic" like this:
caches_page :lol, :cache_path => Proc.new { |controller| controller.params.delete_if {|k,v| k == "format"} }
and expire in the sweeper like this:
expire_page(:controller => "javascripts", :action => "lol")
It solved my problem. Also, as a note, shouldn't your lol action be called lols? Good luck.
I tried the solution from Simon's answer and it didn't work for me. The solution that worked was:
expire_page('javascripts/lol.js')
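
That presumably works because expire_page with a string skips route generation entirely and deletes the file at the cached path directly (the location assumes the default page_cache_directory of public):
expire_page('/javascripts/lol.js')  # deletes public/javascripts/lol.js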

Multiple robots.txt for subdomains in rails

I have a site with multiple subdomains, and I want the named subdomains' robots.txt to be different from the www one.
I tried to use .htaccess, but the FastCGI doesn't look at it.
So I was trying to set up routes, but it doesn't seem that you can do a direct rewrite, since every route needs a controller:
map.connect '/robots.txt', :controller => ?, :path => '/robots.www.txt', :conditions => { :subdomain => 'www' }
map.connect '/robots.txt', :controller => ?, :path => '/robots.club.txt'
What would be the best way to approach this problem?
(I am using the request_routing plugin for subdomains)
Actually, you probably want to set a mime type in mime_types.rb and do it in a respond_to block so it doesn't return it as 'text/html':
Mime::Type.register "text/plain", :txt
Then, your routes would look like this:
map.robots '/robots.txt', :controller => 'robots', :action => 'robots'
For Rails 3:
match '/robots.txt' => 'robots#robots'
and the controller something like this (put the file(s) wherever you like):
class RobotsController < ApplicationController
  def robots
    subdomain = # get subdomain, escape
    robots = File.read(RAILS_ROOT + "/config/robots.#{subdomain}.txt")
    respond_to do |format|
      format.txt { render :text => robots, :layout => false }
    end
  end
end
At the risk of overengineering it, I might even be tempted to cache the file read operation...
Oh, yeah, you'll almost certainly have to remove/move the existing 'public/robots.txt' file.
Astute readers will notice that you can easily substitute RAILS_ENV for subdomain...
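That is, for per-environment files instead of per-subdomain ones, the read becomes (same pattern as the controller above):
robots = File.read(RAILS_ROOT + "/config/robots.#{RAILS_ENV}.txt")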
Why not use Rails' built-in views?
In your controller add this method:
class StaticPagesController < ApplicationController
  def robots
    render :layout => false, :content_type => "text/plain", :formats => :txt
  end
end
In the views, create the file app/views/static_pages/robots.txt.erb with your robots.txt content.
In routes.rb place:
get '/robots.txt' => 'static_pages#robots'
Delete the file /public/robots.txt
You can add specific business logic as needed, but this way we don't read any custom files.
As of Rails 6.0 this has been greatly simplified.
By default, if you use the :plain option, the text is rendered without using the current layout. If you want Rails to put the text into the current layout, you need to add the layout: true option and use the .text.erb extension for the layout file. (Source: Rails Guides)
class RobotsController < ApplicationController
  def robots
    subdomain = request.subdomain # Whatever logic you need
    robots = File.read("#{Rails.root}/config/robots.#{subdomain}.txt")
    render plain: robots
  end
end
In routes.rb
get '/robots.txt', to: 'robots#robots'
For Rails 3:
Create a controller RobotsController:
class RobotsController < ApplicationController
  # This controller will render the correct 'robots' view depending on your subdomain.
  def robots
    subdomain = request.subdomain # you should also check for emptiness
    render "robots.#{subdomain}"
  end
end
Create robots views (1 per subdomain):
views/robots/robots.subdomain1.txt
views/robots/robots.subdomain2.txt
etc...
Add a new route in config/routes.rb: (note the :txt format option)
match '/robots.txt' => 'robots#robots', :format => :txt
And of course, you should declare the :txt format in config/initializers/mime_types.rb:
Mime::Type.register "text/plain", :txt
Hope it helps.
If you can't configure your HTTP server to do this before the request is sent to Rails, I would just set up a 'robots' controller that renders a template like:
def show_robot
  subdomain = # get subdomain, escape
  render :text => open("robots.#{subdomain}.txt").read, :layout => false
end
Depending on what you're trying to accomplish you could also use a single template instead of a bunch of different files.
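For example, a single template branching on the subdomain might look like this (a sketch; the path app/views/robots/show_robot.txt.erb and the rules are hypothetical, and it assumes the controller assigns @subdomain):
<% if @subdomain == 'www' %>
User-agent: *
Disallow:
<% else %>
User-agent: *
Disallow: /
<% end %>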
I liked TA Tyree's solution, but it is very Rails 2.x-centric, so here is what I came up with for Rails 3.1.x.
mime_types.rb
Mime::Type.register "text/plain", :txt
By adding the format in the routes you don't have to worry about using a respond_to block in the controller.
routes.rb
match '/robots.txt' => 'robots#robots', :format => "text"
I added a little something extra on this one. The SEO people were complaining about duplicated content both on subdomains and on SSL pages, so I created two robots files: one for production and one for non-production, the latter also being served for any SSL/HTTPS requests in production.
robots_controller.rb
class RobotsController < ApplicationController
  def robots
    site = request.host
    protocol = request.protocol
    production = (site.eql?("mysite.com") || site.eql?("www.mysite.com")) && protocol.eql?("http://")
    domain = production ? "production" : "nonproduction"
    robots = File.read("#{Rails.root}/config/robots-#{domain}.txt")
    render :text => robots, :layout => false
  end
end
