Rendering action based on (random) route name - ruby-on-rails

I'm trying to make a controller action that renders a random route from a set of given route names, without a redirect.
I know about the render controller: name, action: name form, but rendering fails because it tries to find a template on its own instead of letting the target action determine the template.
Here is my code:
def random
  # create basic route names
  route_names = %w(root route1 route2)
  # get route path
  path = Rails.application.routes.url_helpers.send("#{route_names.sample}_path")
  # {controller: name, action: name, param: val}
  action_config = Rails.application.routes.recognize_path(path, { :method => :get })

  # doesn't work
  # fails with Missing template application/*action name*
  return render action_config

  # doesn't work
  # require 'open-uri'
  # render html: open("http://localhost:3000/#{path}") { |io| io.read }

  # doesn't work
  # require 'net/http'
  # require 'uri'
  # render html: Net::HTTP.get(URI.parse("http://localhost:3000/#{path}"))

  # doesn't work
  # ctrl = (action_config[:controller].camelize + "Controller").constantize.new
  # ctrl.request = request
  # ctrl.response = response
  # ctrl.send(action_config[:action])

  # works, but not for Derailed
  # redirect_to path

  # works, but not for Derailed, since the server doesn't parse the <iframe>
  # render html: "
  #   <iframe
  #     src='#{path}'
  #     width='100%'
  #     height='100%'
  #     style='overflow: visible; border: 0;'></iframe>
  #   <style>body {margin: 0;}</style>".html_safe
end
Could anyone make the render work properly?
Background
I'm trying to debug a memory leak in my Rails app. I'm using the Derailed gem, which retrieves a path from my app 10,000 times. Derailed only supports hitting a single path, so to actually mimic site usage I'm trying to implement an action that renders a random route from a set of given routes. Derailed lets me use a real web server like Puma, but that configuration doesn't follow redirects, so I need Rails to render without a redirect.
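For reference, this is roughly how such an endpoint would then be exercised with Derailed (a sketch; the task name and environment variables are my recollection of the derailed_benchmarks README, so treat them as assumptions and check against your version):

# hit /random repeatedly through Puma and watch memory over time
TEST_COUNT=10000 USE_SERVER=puma PATH_TO_HIT=/random bundle exec derailed exec perf:mem_over_time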

You can write middleware for this:
class RandomMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    route_names = %w(/ /route1 /route2)
    if env['PATH_INFO'] == '/random'
      env['PATH_INFO'] = route_names.sample
    end
    @app.call(env)
  end
end
and then insert this middleware in the stack (in config/application.rb):
config.middleware.use RandomMiddleware
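With the middleware in place, Derailed can simply be pointed at /random. A quick sanity check from the Rails console might look like this (a sketch; the console's app helper drives the full middleware stack):

app.get '/random'    # RandomMiddleware rewrites PATH_INFO before routing kicks in
app.response.status  # => 200 for whichever route was sampled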

You can try opening a new app session inside the controller, rendering the action there, and returning the result:
session = ActionDispatch::Integration::Session.new(Rails.application)
session.get '/'
render html: session.body
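Combined with the route sampling from the question, the whole action might look like this (a sketch; note that the body needs html_safe, since render html: escapes plain strings):

def random
  route_names = %w(root route1 route2)
  path = Rails.application.routes.url_helpers.send("#{route_names.sample}_path")

  # drive a full in-process request through the app, then return its body
  app_session = ActionDispatch::Integration::Session.new(Rails.application)
  app_session.get path
  render html: app_session.body.html_safe, status: app_session.status
end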

Related

Rails routes for js file

I have a ServiceWorker built with a webpack plugin that is publicly available at the http://example.com/packs/sw.js URL. Since I want this Service Worker to control the whole website, I need to serve it from http://example.com/, so no redirect_to allowed.
How do I have to set up routes.rb for this?
What I've tried is:
Added get '/sw' => 'sw#show' to routes.rb
Created controller sw_controller.rb:
require 'net/http'

class SwController < ActionController::Base
  def show
    respond_to do |format|
      format.all {
        uri = URI('http://example.com/packs/sw.js')
        res = Net::HTTP.get(uri)
        render inline: res
      }
    end
  end
end
It's ugly, I know, but it should work. What I observe:
Visiting http://example.com/sw.js gives No route matches [GET] "/sw.js"
Visiting http://example.com/packs/sw.js prints out the service worker correctly
Visiting http://example.com/sw prints out the service worker correctly
Also, visiting http://example.com/sw.lol prints the service worker
In config/initializers/mime_types.rb try adding:
Mime::Type.register "text/javascript", :js
This should allow format.all to recognise the Javascript request, which it will otherwise ignore.
Source: http://api.rubyonrails.org/classes/ActionController/MimeResponds.html

No route matches in functional/controller test

I have the following controller test using Minitest::Rails and Rails 4. When I run it, I get ActionController::UrlGenerationError: No route matches {:action=>"/", :controller=>"foo"}, despite the route being defined.
The whole point of this is to test methods that are on ApplicationController (FooController exists only in this test and is not a placeholder for the question).
class FooController < ApplicationController
  def index
    render nothing: true
  end
end

class FooControllerTest < ActionController::TestCase
  it 'does something' do
    with_routing do |set|
      set.draw do
        root to: 'foo#index', via: :get
      end
      root_path.must_equal '/'    #=> 👍
      get(root_path).must_be true #=> No route matches error
    end
  end
end
There are a number of similar questions on Stack Overflow and elsewhere, but they all refer to the issue of route segments being left out (e.g. no ID specified on a PUT). This is a simple GET with no params, however.
I get the same result if the route is assembled differently, so I don't think it's the root_path bit doing it (e.g. controller :foo { get 'test/index' => :index }).
I did some searching based on the information you provided and found an open issue on the rspec-rails gem. The gem doesn't matter here; fundamentally they say it's a context problem: when you call with_routing it isn't executed in the correct context, so it gives the No route matches error.
To resolve the issue, I tried a different solution locally. Here is what I came up with:
require 'rails_helper'

RSpec.describe FooController, type: :controller do
  it 'does something' do
    Rails.application.routes.draw do
      root to: 'foo#index', via: :get
    end
    expect(get: root_path).to route_to("foo#index")
  end
end
In the above code, the major problem is that it overwrites the existing routes, but we can restore them with the Rails.application.reload_routes! method.
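A sketch of putting the real routes back after the example (wrapping it in an RSpec after hook is my assumption):

after do
  # restore the application's real routes, since the draw above replaced them
  Rails.application.reload_routes!
end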
I hope this helps!
UPDATE
I tried to understand your last comment and dug into the get method. When we call get, it takes as its argument an action of the controller we are testing. In our case, when we do get(root_path), it tries to find foo#/, which does not exist, and hence gives the no route matches error.
As our main goal is to check that the root_path route is generated correctly, we can use the assert_routing method to check it. Here is how I tested it, and it works:
assert_routing root_path, controller: "foo", action: "index"
Full code:
require 'test_helper'

class FooControllerTest < ActionController::TestCase
  it 'does something' do
    with_routing do |set|
      set.draw do
        root to: 'foo#index', via: :get
      end
      root_path.must_equal '/' #=> true
      assert_routing root_path, controller: "foo", action: "index" #=> true
      get :index
      response.body.must_equal ""
    end
  end
end
I read up on this in the official documentation: http://api.rubyonrails.org/classes/ActionDispatch/Assertions/RoutingAssertions.html
If you check the source of the ActionController::TestCase#get method, it expects an action name, e.g. :index, :create, 'edit', 'create'.
If you pass root_path to the #get method, it will definitely raise an error, because the root_path method returns '/'.
I also tried adding a / method to FooController:
class FooController
  def index
  end

  def /
  end
end

Rails.application.routes.draw do
  root 'foo#index'
  get 'foobar' => 'foo#/'
end
When I visited http://localhost:3000/foobar, Rails gave me
AbstractController::ActionNotFound (The action '/' could not be found for FooController): respond
I think '/' is not a permitted action name in Rails. I didn't research further, because that restriction seems very reasonable.
You may write
assert_routing '/', controller: "foo", action: "index"
for the current test; then you can write an integration test to check root_path and other features.
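Such an integration test could be as small as this (a sketch; unlike a controller test, ActionDispatch::IntegrationTest resolves paths rather than action names):

require 'test_helper'

class RootRouteTest < ActionDispatch::IntegrationTest
  test "root path responds successfully" do
    get root_path       # '/' is fine here
    assert_response :success
  end
end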
Following is the source code of some of the methods I've mentioned above (I'm using Rails 4.2.3 to test this interesting issue):
action_controller/test_case.rb
# Simulate a GET request with the given parameters.
#
# - +action+: The controller action to call.
# - +parameters+: The HTTP parameters that you want to pass. This may
# be +nil+, a hash, or a string that is appropriately encoded
# (<tt>application/x-www-form-urlencoded</tt> or <tt>multipart/form-data</tt>).
# - +session+: A hash of parameters to store in the session. This may be +nil+.
# - +flash+: A hash of parameters to store in the flash. This may be +nil+.
#
# You can also simulate POST, PATCH, PUT, DELETE, and HEAD requests with
# +post+, +patch+, +put+, +delete+, and +head+.
#
# Note that the request method is not verified. The different methods are
# available to make the tests more expressive.
def get(action, *args)
  process(action, "GET", *args)
end
# Simulate a HTTP request to +action+ by specifying request method,
# parameters and set/volley the response.
#
# - +action+: The controller action to call.
# - +http_method+: Request method used to send the http request. Possible values
# are +GET+, +POST+, +PATCH+, +PUT+, +DELETE+, +HEAD+. Defaults to +GET+.
# - +parameters+: The HTTP parameters. This may be +nil+, a hash, or a
# string that is appropriately encoded (+application/x-www-form-urlencoded+
# or +multipart/form-data+).
# - +session+: A hash of parameters to store in the session. This may be +nil+.
# - +flash+: A hash of parameters to store in the flash. This may be +nil+.
#
# Example calling +create+ action and sending two params:
#
# process :create, 'POST', user: { name: 'Gaurish Sharma', email: 'user@example.com' }
#
# Example sending parameters, +nil+ session and setting a flash message:
#
# process :view, 'GET', { id: 7 }, nil, { notice: 'This is flash message' }
#
# To simulate +GET+, +POST+, +PATCH+, +PUT+, +DELETE+ and +HEAD+ requests
# prefer using #get, #post, #patch, #put, #delete and #head methods
# respectively which will make tests more expressive.
#
# Note that the request method is not verified.
def process(action, http_method = 'GET', *args)
  # .....
end
The whole point of this is to test behavior inherited by the ApplicationController.
There is no behavior in your question related to ApplicationController other than the fact that FooController inherits from ApplicationController. And since there's no other behavior to test here in FooController related to something from ApplicationController ...
You can test this
class FooController < ApplicationController
  def index
    render nothing: true
  end
end
with this
describe FooController, type: :controller do
  describe '#index' do
    it 'does not render a template' do
      get :index
      expect(response).to render_template(nil)
    end
  end
end
The solution to this problem was much simpler than expected:
# change this:
get(root_path)
# to this
get(:index)
The with_routing method works fine to define the path in this context.
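For completeness, the passing version of the original test then reads (assembled from the question and the fix above):

class FooControllerTest < ActionController::TestCase
  it 'does something' do
    with_routing do |set|
      set.draw do
        root to: 'foo#index', via: :get
      end
      root_path.must_equal '/'
      get(:index)                  # pass the action name, not a path
      response.body.must_equal ""
    end
  end
end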

Ruby on Rails: Pass Parameters to View

When a user makes a request to the url /mobile in my Rails app, I would like a parameter to automatically be appended to the URL that gets loaded after the request (something like /mobile?tree_width=5)
I have tried a few things, none of which have worked.
The closest I have gotten is this in my controller:
def mobile
  respond_to do |format|
    format.html {
      # pass tree width here
      render mobile_project_steps_path(@project, :tree_width => @project.tree_width)
    }
  end
end
I am getting the error
Missing template /projects/8/steps/mobile?tree_width=5
But I think this path should exist according to my rake routes:
mobile_project_steps GET /projects/:project_id/steps/mobile(.:format) steps#mobile
How do I add a param to the URL from a controller?
You need to check if the param is missing and, if it is, redirect to the current action with the extra param. I would put it in a before_action:
before_action :check_tree_width, only: :mobile

def mobile
  # Your actual logic
end

private

def check_tree_width
  redirect_to(tree_width: @project.tree_width) unless params[:tree_width].present?
end

Rails: dynamic robots.txt with erb

I'm trying to render a dynamic text file (robots.txt) in my Rails (3.0.10) app, but it keeps rendering it as HTML (according to the console).
match 'robots.txt' => 'sites#robots'
Controller:
class SitesController < ApplicationController
  respond_to :html, :js, :xml, :css, :txt

  def robots
    @site = Site.find_by_subdomain # blah blah
  end
end
app/views/sites/robots.txt.erb:
Sitemap: <%= @site.url %>/sitemap.xml
But when I visit http://www.example.com/robots.txt I get a blank page/source, and the log says:
Started GET "/robots.txt" for 127.0.0.1 at 2011-11-21 11:22:13 -0500
Processing by SitesController#robots as HTML
Site Load (0.4ms) SELECT `sites`.* FROM `sites` WHERE (`sites`.`subdomain` = 'blah') ORDER BY created_at DESC LIMIT 1
Completed 406 Not Acceptable in 828ms
Any idea what I'm doing wrong?
Note: I added this to config/initializers/mime_types, because Rails was complaining about not knowing what the .txt MIME type was:
Mime::Type.register_alias "text/plain", :txt
Note 2: I did remove the stock robots.txt from the public directory.
NOTE: This is a repost from coderwall.
Following some advice from a similar answer on Stack Overflow, I currently use the following solution to render a dynamic robots.txt based on the request's host parameter.
Routing
# config/routes.rb
#
# Dynamic robots.txt
get 'robots.:format' => 'robots#index'
Controller
# app/controllers/robots_controller.rb
class RobotsController < ApplicationController
  # No layout
  layout false

  # Render a robots.txt file based on whether the request
  # is performed against a canonical url or not.
  # Prevents robots from indexing content served via a CDN twice.
  def index
    if canonical_host?
      render 'allow'
    else
      render 'disallow'
    end
  end

  private

  def canonical_host?
    request.host =~ /plugingeek\.com/
  end
end
Views
Based on the request.host we render one of two different .text.erb view files.
Allowing robots
# app/views/robots/allow.text.erb # Note the .text extension
# Allow robots to index the entire site except some specified routes
# rendered when site is visited with the default hostname
# http://www.robotstxt.org/
# ALLOW ROBOTS
User-agent: *
Disallow:
Banning spiders
# app/views/robots/disallow.text.erb # Note the .text extension
# Disallow robots to index any page on the site
# rendered when robot is visiting the site
# via the Cloudfront CDN URL
# to prevent duplicate indexing
# and search results referencing the Cloudfront URL
# DISALLOW ROBOTS
User-agent: *
Disallow: /
Specs
Testing the setup with RSpec and Capybara can be done quite easily, too.
# spec/features/robots_spec.rb
require 'spec_helper'

feature "Robots" do
  context "canonical host" do
    scenario "allow robots to index the site" do
      Capybara.app_host = 'http://www.plugingeek.com'
      visit '/robots.txt'
      Capybara.app_host = nil

      expect(page).to have_content('# ALLOW ROBOTS')
      expect(page).to have_content('User-agent: *')
      expect(page).to have_content('Disallow:')
      expect(page).to have_no_content('Disallow: /')
    end
  end

  context "non-canonical host" do
    scenario "deny robots to index the site" do
      visit '/robots.txt'

      expect(page).to have_content('# DISALLOW ROBOTS')
      expect(page).to have_content('User-agent: *')
      expect(page).to have_content('Disallow: /')
    end
  end
end
# This would be the resulting docs
# Robots
# canonical host
# allow robots to index the site
# non-canonical host
# deny robots to index the site
As a last step, you might need to remove the static public/robots.txt in the public folder if it's still present.
I hope you find this useful. Feel free to comment, helping to improve this technique even further.
One solution that works in Rails 3.2.3 (not sure about 3.0.10) is as follows:
1) Name your template file robots.text.erb # Emphasis on text vs. txt
2) Setup your route like this: match '/robots.:format' => 'sites#robots'
3) Leave your action as is (you can remove the respond_with in the controller)
def robots
  @site = Site.find_by_subdomain # blah blah
end
This solution also eliminates the need to explicitly specify txt.erb in the render call mentioned in the accepted answer.
For my Rails projects I usually have a separate controller for the robots.txt response:
class RobotsController < ApplicationController
  layout nil

  def index
    host = request.host
    if host == 'lawc.at' then # live server
      render 'allow.txt', :content_type => "text/plain"
    else # test server
      render 'disallow.txt', :content_type => "text/plain"
    end
  end
end
Then I have views named disallow.txt.erb and allow.txt.erb.
And in my routes.rb I have
get "robots.txt" => 'robots#index'
I don't like the idea of robots.txt requests reaching my Rails app.
If you are using Nginx/Apache as your reverse proxy, static files are much faster to serve from there than by letting the request reach Rails itself.
This is much cleaner, and I think it's faster too.
Try using the following setting.
nginx.conf - for production
location /robots.txt {
  alias /path-to-your-rails-public-directory/production-robots.txt;
}
nginx.conf - for stage
location /robots.txt {
  alias /path-to-your-rails-public-directory/stage-robots.txt;
}
I think the problem is that if you define respond_to in your controller, you have to use respond_with in the action:
def robots
  @site = Site.find_by_subdomain # blah blah
  respond_with @site
end
Also, try explicitly specifying the .erb file to be rendered:
def robots
  @site = Site.find_by_subdomain # blah blah
  render 'sites/robots.txt.erb'
  respond_with @site
end

switching rails controller

I have two separate models: nested sections and articles; a section has_many articles.
Both have a path attribute like aaa/bbb/ccc, for example:
movies # section
movies/popular # section
movies/popular/matrix # article
movies/popular/matrix-reloaded # article
...
movies/ratings # article
about # article
...
In routes I have:
map.path '*path', :controller => 'path', :action => 'show'
How can I create a show action like this?
def show
  if section = Section.find_by_path!(params[:path])
    # run SectionsController, :show
  elsif article = Article.find_by_path!(params[:path])
    # run ArticlesController, :show
  else
    raise ActiveRecord::RecordNotFound
  end
end
You should use Rack middleware to intercept the request and rewrite the URL for your proper Rails application. This way, your routes file remains very simple:
map.resources :section
map.resources :articles
In the middleware you look up the entity associated with the path and remap the URL to the simple internal URL, allowing Rails routing to dispatch to the correct controller and invoke the filter chain normally.
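A plain Rack middleware version of that idea might look like this (a sketch; the class name is made up, find_by_path mirrors the dynamic finders used in the Metal walkthrough below, and it would be added to the stack with config.middleware.use):

class PathRewriteMiddleware
  def initialize(app)
    @app = app
  end

  def call(env)
    path = env["PATH_INFO"]

    # remap the pretty path to the plain resource routes defined above
    if article = Article.find_by_path(path)
      env["PATH_INFO"] = "/articles/#{article.id}"
    elsif section = Section.find_by_path(path)
      env["PATH_INFO"] = "/sections/#{section.id}"
    end

    @app.call(env)
  end
end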
Update
Here's a simple walkthrough of adding this kind of functionality using a Rails Metal component and the code you provided. I suggest you look at simplifying how path segments are looked up, since you're duplicating a lot of database work with the current code.
$ script/generate metal path_rewriter
create app/metal
create app/metal/path_rewriter.rb
path_rewriter.rb
# Allow the metal piece to run in isolation
require(File.dirname(__FILE__) + "/../../config/environment") unless defined?(Rails)

class PathRewriter
  def self.call(env)
    path = env["PATH_INFO"]
    new_path = path

    if article = Article.find_by_path(path)
      new_path = "/articles/#{article.id}"
    elsif section = Section.find_by_path(path)
      new_path = "/sections/#{section.id}"
    end

    env["REQUEST_PATH"] =
      env["REQUEST_URI"] =
        env["PATH_INFO"] = new_path

    # Returning 404 makes Metal pass the (now rewritten) request on to the
    # rest of the stack, where normal routing dispatches it
    [404, {"Content-Type" => "text/html"}, [ ]]
  end
end
For a good intro to using Metal and Rack in general, check out Ryan Bates' Railscast episode on Metal, and episode on Rack.
Rather than instantiating the other controllers, I would just render a different template from PathController's show action depending on whether the path matches a section or an article, i.e.
def show
  if @section = Section.find_by_path!(params[:path])
    render :template => 'section/show'
  elsif @article = Article.find_by_path!(params[:path])
    render :template => 'article/show'
  else
    # raise exception
  end
end
The reason is that, whilst you could create instances of one controller within another, it wouldn't work the way you'd want: the second controller wouldn't have access to your params, session, etc., and the calling controller wouldn't have access to the instance variables and render calls made in the second controller.
