I am using RSpec and Capybara for my feature specs on my Rails 5.1 app. I want to fake the request IP to '1.2.3.4' for a single spec.
I've tried the following with Poltergeist...
before do
  page.driver.add_headers 'REMOTE_HOST' => '1.2.3.4'
end
However, placing a pry breakpoint in my controller, I see that request.headers['REMOTE_HOST'] is still 127.0.0.1.
I solved this by stubbing ActionDispatch::Request#remote_ip
allow_any_instance_of(ActionDispatch::Request).to receive(:remote_ip) { '1.2.3.4' }
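If you want to confine it to a single spec, the stub can sit in a before block; a minimal sketch (the example group and visit call are only illustrative):

# A sketch: scope the stub to one group so other specs keep the real remote_ip.
describe 'something that depends on the client IP', type: :feature do
  before do
    allow_any_instance_of(ActionDispatch::Request)
      .to receive(:remote_ip).and_return('1.2.3.4')
  end

  it 'sees the faked address' do
    visit root_path
    # In the controller, request.remote_ip now returns "1.2.3.4".
  end
end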
http://guides.rubyonrails.org/action_controller_overview.html#the-request-object
I would prefer altering the actual request if possible.
Related
I have a Rails 4.2 application. I was adding content compression via this thoughtbot blog post, but I get an error such as:
undefined method `get' for #<RSpec::ExampleGroups::Compression:0x00000009aa4cc8>
Perusing the Capybara docs, it seems like you shouldn't be using get. Any idea how to test the below in Rails 4, then?
# spec/integration/compression_spec.rb
require 'spec_helper'

feature 'Compression' do
  scenario "a visitor has a browser that supports compression" do
    ['deflate', 'gzip', 'deflate,gzip', 'gzip,deflate'].each do |compression_method|
      get root_path, {}, { 'HTTP_ACCEPT_ENCODING' => compression_method }
      response.headers['Content-Encoding'].should be
    end
  end

  scenario "a visitor's browser does not support compression" do
    get root_path
    response.headers['Content-Encoding'].should_not be
  end
end
In a Capybara test you would use visit, not get (as described here), but that answer won't actually help you, because the test you've written above is not an integration test; it's a controller test.
Move it to spec/controllers and use the controller-specific helpers (describe/context/it, etc.) to construct your tests. There you can set the headers and make the same sorts of checks you're making in the code you're showing.
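For comparison, the same assertions can also be kept almost verbatim in a request spec, which runs the full Rack middleware stack (where Rack::Deflater-style compression lives). A rough sketch, assuming the Rails 4 get(path, params, headers) signature and the question's root_path:

# spec/requests/compression_spec.rb -- a sketch, not a drop-in answer
require 'spec_helper'

describe 'Compression', type: :request do
  it 'compresses when the browser advertises support' do
    ['deflate', 'gzip', 'deflate,gzip', 'gzip,deflate'].each do |compression_method|
      get root_path, {}, { 'HTTP_ACCEPT_ENCODING' => compression_method }
      expect(response.headers['Content-Encoding']).to be_present
    end
  end

  it 'does not compress otherwise' do
    get root_path
    expect(response.headers['Content-Encoding']).to be_nil
  end
end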
I'm working on integrating my Rails application with Recurly.js.
Previously I was making requests to Recurly from my server-side application, so I was able to stub all of the integration with the excellent VCR gem (https://github.com/myronmarston/vcr), but Recurly.js makes requests directly to the service from JavaScript code using JSONP.
The question is: how to mock these jsonp calls in the integration test?
Currently I'm using rspec + capybara + phantomjs driver (https://github.com/jonleighton/poltergeist)
The only approach I came up with is on-the-fly JavaScript patching. Since the Poltergeist gem has a method to execute JavaScript right in the test browser, you can apply the following patch to put Recurly.js into test mode:
# The original 'save' function performs a JSONP request to Recurly.
# The token below was borrowed from a real API interaction.
page.driver.execute_script(<<-JS)
  Recurly.Subscription.save = function (options) {
    Recurly.postResult('/subscription', { token: 'afc58c4895354255a422cc0405a045b0' }, options);
  }
JS
Just wrap it in a Capybara macro, give it a fancy name like 'stub_recurly_js', and invoke it every time before submitting the Recurly.js forms.
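A minimal sketch of such a macro (the module name, file path, and included spec type are illustrative):

# spec/support/recurly_macros.rb -- a sketch; adjust names to taste
module RecurlyMacros
  def stub_recurly_js
    # Replace Recurly.js's JSONP call with an immediate fake result.
    page.driver.execute_script(<<-JS)
      Recurly.Subscription.save = function (options) {
        Recurly.postResult('/subscription', { token: 'afc58c4895354255a422cc0405a045b0' }, options);
      }
    JS
  end
end

RSpec.configure do |config|
  # Include it into whichever spec type runs your Capybara tests.
  config.include RecurlyMacros, type: :feature
end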
Here is also a link to the original post if you want to dig a little deeper: http://pieoneers.tumblr.com/post/32406386853/test-recurlyjs-in-ruby-using-rspec-capybara-phantomjs
Use puffing-billy. It injects a proxy server between your test browser and the outside world, and allows you to fake responses for specific URLs.
Example:
describe 'my recurly jsonp spec' do
  before do
    # call proxy.stub to setup a fake response
    proxy.stub 'https://api.recurly.com/v2/foo', :jsonp => { :bar => 'baz' }
  end

  it 'does something with recurly' do
    # ...
  end
end
I have a Rails application which acts differently depending on the domain it's accessed at (for example, www.myapp.com behaves differently from user.myapp.com). In production this all works fine, but my test code always sees a hostname of "www.example.com".
Is there a clean way of having a test specify the hostname it's pretending to access?
Integration/Request Specs (inheriting from ActionDispatch::IntegrationTest):
host! 'my.awesome.host'
See the docs, section 5.1 Helpers Available for Integration Tests.
Alternatively, configure it globally for request specs at the spec_helper.rb level:
RSpec.configure do |config|
  config.before(:each, type: :request) do
    host! 'my.awesome.host'
  end
end
Controller Specs (inheriting from ActionController::TestCase)
@request.host = 'my.awesome.host'
See the docs, section 4.4 Instance Variables Available.
Feature Specs (through Capybara)
Capybara.default_host = 'http://my.awesome.host'
# Or to configure domain for route helpers:
default_url_options[:host] = 'my.awesome.host'
From @AminAriana's answer
View Specs (inheriting from ActionView::TestCase)
@request.host = 'my.awesome.host'
...or through RSpec:
controller.request.host = 'my.awesome.host'
See the rspec-rails view spec docs.
@request.host = 'user.myapp.com'
Feature specs
In Feature specs, host! has been deprecated. Add these to your spec_helper.rb:
# Configure Capybara expected host
Capybara.app_host = "http://test.domain"
# Configure actual routes host during test
before(:each) do
  default_url_options[:host] = <myhost>
end
Request specs
In Request specs, keep using host! :
host! "test.domain"
Alternatively refactor it in before(:each) blocks, or configure it globally for request specs at spec_helper.rb level:
RSpec.configure do |config|
  config.before(:each, type: :request) do
    host! "test.domain"
  end
end
For RSpec request specs, use before(:each) { host! 'example.com' }
See more at:
https://relishapp.com/rspec/rspec-rails/v/3-6/docs/request-specs/request-spec
https://github.com/rspec/rspec-rails/issues/1662#issuecomment-241201056
I believe you can modify the HTTP_HOST or SERVER_NAME environment vars to change the request that goes to the router:
ENV['SERVER_NAME'] = "user.myapp.com"
See raw_host_with_port in actionpack/lib/action_controller/request.rb.
Another thing to remember is to make sure to use the correct session instance so that you can properly encapsulate the url helpers.
Integration tests provide you with a default session, and you can call all session methods directly from your tests:
test "should integrate well" do
https!
get users_path
assert_response :success
end
All these helpers use the default session instance, which, if not changed, points to "www.example.com". As has been mentioned, the host can be changed with host!("my.new.host").
If you create multiple sessions using the open_session method, you must ALWAYS use that instance to call the helper methods. This properly encapsulates the request. Otherwise Rails will call the default session instance, which may use a different host:
test "should integrate well" do
sess = open_session
sess.host! "my.awesome.host"
sess.get users_url #=> WRONG! will use default session object to build url.
sess.get sess.users_url #=> Correctly invoking url writer from my custom session with new host.
sess.assert_response :success
end
If you intended to use the default session object, then you'll have to alter that host as well:
test "should integrate well" do
sess = open_session
sess.host! "my.awesome.host"
host! sess.host #=> Set default session host to my custom session host.
sess.get users_url
end
@request.host = 'user.myapp.com' is not right. You should use host!('user.myapp.com') instead.
I tried many variations of @request.host, host!, and post path, args, {'SERVER_NAME' => my_secret_domain} without success, both as controller tests and feature tests. Very aggravating, as so many others reported success with those approaches.
The solution for me was:
request.headers["SERVER_NAME"] = my_secret_domain
post path, args
I'm running ruby 2.1.5p273, rspec 3.1.7 and Rails 4.2.0
None of the ways suggested in the other answers worked for me at the time. This did:
Capybara.configure { |config| config.default_host = "my.domain.com" }
Yet another answer:
request.host = "user.myapp.com"
I know it resembles the correct answer, but please bear with me. I don't like an assignment in a test just to set things up; I'd prefer an explicit stub. Interestingly, stubbing like this won't work:
allow(request).to receive(:host).and_return("user.myapp.com")
I personally prefer stubbing over assignment; that way I get two benefits: first, it is validated by RSpec's verifying doubles, and second, it says explicitly that this is a stub, not part of the test exercise.
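For context, the assignment from the answer above usually sits in a before block of a controller spec, along these lines (a sketch; the controller and host names are illustrative):

describe UsersController, type: :controller do
  # Set the host before the request is made; UsersController is hypothetical.
  before { request.host = 'user.myapp.com' }

  it 'routes by subdomain' do
    get :index
    expect(response).to have_http_status(:ok)
  end
end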
I would like all my unit tests to use www.test.host instead of the default test.host.
I tried setting ENV['HTTP_HOST'] in config/environments/test.rb, but that didn't get it.
My purpose is to avoid a redirect in my controller test, the output of inspecting the response object in my test is:
#<ActionController::TestResponse:0x000001059ed378, ..., #header={"Location"=>"http://www.test.host", ... , #status=301, #body=["<html><body>You are being redirected.</body></html>"], ... , "REQUEST_METHOD"=>"GET", "SERVER_NAME"=>"example.org", "SERVER_PORT"=>"80",... , "HTTP_HOST"=>"test.host", "REMOTE_ADDR"=>"0.0.0.0" ...>>
If it makes a difference, I'm using Rails 3 and RSpec 2.
You do it by setting request.host in your test. You can do it globally by adding this to your test_helper.rb (in Rails 3.1 anyway, not sure about previous versions, but I think it's similar):
class ActionController::TestCase
  setup do
    request.host = "www.test.host"
  end
end
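Since the question mentions RSpec, the equivalent global hook in spec_helper.rb would look something like this (a sketch for rspec-rails controller specs):

# spec_helper.rb -- a sketch of the same idea for rspec-rails controller specs
RSpec.configure do |config|
  config.before(:each, type: :controller) do
    request.host = "www.test.host"
  end
end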
I have an app that uses subdomains to switch databases (multi-tenancy). I'm trying to use Capybara for integration testing, and it really relies a lot on subdomains.
My understanding was that setting Capybara.default_host= to something would make all my requests come from this host. This doesn't seem to be the case. In this post, the author recommends just visiting the explicit url with a host, but this becomes a bit annoying if I'm navigating all over the place. I'd like to just set the host, then be able to use my rails paths as expected. Not sure what I'm doing wrong, but here's what I've tried:
# spec_helper.rb
RSpec.configure do |config|
  config.before(:each, :type => :request) do
    Capybara.default_host = 'http://app.mydomain.com'
  end
end
# in some_integration_spec.rb
before do
  puts "Capybara.default_host: #{Capybara.default_host}"
  puts "some_app_url: #{some_app_url}"
end
This yields the output:
Capybara.default_host: http://app.mydomain.com
some_app_url: http://www.example.com/some_path
What am I doing wrong? default_host appears to do nothing. As I say, I don't want to have to say visit(Capybara.default_host + some_app_path) as that's a bit annoying each time. Why else does this default_host option exist?
I'm not sure of the intended use of default_host, but app_host does what you need. I've found I first need to call the rails session method host! in order to set the host string that will be passed to controllers in the request object.
Then you need to set Capybara.app_host to tell Capybara to call your app via the web server instead of just making the calls in process. If you don't do that then Capybara wigs out when it encounters redirects and drops the host information in the second request.
I'm not sure why this doesn't take care of the Rails request end of things automatically, but I've found that unless I set the host in both places explicitly, then I get inconsistent results.
def set_host(host)
  host! host
  Capybara.app_host = "http://" + host
end

before(:each) do
  set_host "lvh.me:3000"
end
Then you can just use relative paths to access pages.
Update:
Capybara 2.x and rspec-rails 2.12.0 introduced "Feature" specs for running Capybara acceptance tests. The new FeatureExampleGroup module in rspec-rails is different from RequestExampleGroup and no longer has access to the rack-test host! method. Now you want to use default_url_options instead:
def set_host(host)
  # host! host
  default_url_options[:host] = host
  Capybara.app_host = "http://" + host
end
When you need to change the URL to include the subdomain, you can specify the app_host in your step definitions. Use a domain like lvh.me since it points to 127.0.0.1:
Capybara.app_host = "http://#{subdomain}.lvh.me"
Capybara assumes that when you specify an app_host you're testing a remote server running on port 80, but in our case we're testing a local app which is running on a random port chosen by Capybara. To fix this, add this line in your env.rb file:
Capybara.always_include_port = true
Now when you visit a page of your app...
visit '/page'
...the url will specify the subdomain as well as the port that the app is running on.
FYI: This worked for me using Capybara 2.0.2.
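Putting those pieces together, the setup might look like this (a sketch; the subdomain is illustrative):

# env.rb / spec_helper.rb -- a sketch; "myaccount" stands in for whatever subdomain you need
Capybara.app_host = "http://myaccount.lvh.me"  # lvh.me resolves to 127.0.0.1
Capybara.always_include_port = true            # append Capybara's random server port to URLs

# In a test:
# visit '/page'  #=> http://myaccount.lvh.me:<port>/page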
This guy has the right answer here:
http://zurb.com/forrst/posts/Testing_Subdomains_in_Capybara-g4M
You want to do
Capybara.current_session.driver.reset!
Capybara.default_host = 'http://app.mydomain.com'
As of capybara (2.4.1) and capybara-webkit (1.3.0):
Capybara.server_host = "example.com"
Capybara.server_port = 3050
Capybara.run_server = true
Capybara.javascript_driver = :webkit #requires capybara-webkit
This is not exactly the same situation as you but this might help some people:
For my current project, I'm using pow with many subdomains. The test suite also has to run on a different port.
The solution depends on which version of capybara you're running.
For the current latest release I put this in custom_env.rb:
Capybara.server_host = 'myapp.dev'
Capybara.server_port = 9887
Capybara.run_server = true
# I don't remember what this was for. Another team member wrote this part...
module ActionDispatch
  module Integration #:nodoc:
    class Session
      def host
        [Capybara.server_host, Capybara.server_port].join(':')
      end
    end
  end
end
With capybara 1.1.2, I had to make the above change (except that server_host becomes app_host) AND modify lib/capybara/server.rb in the gem like this:
def url(path)
  # ..
  if path =~ /^http/
    path
  else
    # Was this: (Capybara.app_host || "http://#{host}:#{port}") + path.to_s
    (Capybara.app_host || "http://#{host}") + ":#{port}" + path.to_s
  end
end