Stack Overflow has a number of answers for this, but none of them work. It seems like they're outdated.
I've tried code like
get :index, protocol: :https
What your code snippet suggests is a spec of type: :request. The thing is, HTTP is not really involved in those kinds of tests.
What happens is: when you call get :index, a request object is built and passed to Rails, the response object is captured, and you test against its state:
expect(response).to be_success
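Put together, a minimal spec of this kind never opens a network connection; a sketch (PagesController is a hypothetical example):

# spec/controllers/pages_controller_spec.rb -- hypothetical controller
require 'rails_helper'

RSpec.describe PagesController, type: :controller do
  it 'responds successfully' do
    get :index # builds a request object and dispatches it in-process
    expect(response).to be_success
  end
end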
So you'd need to specify what exactly you want to test:
is your app redirecting to https:// if someone uses http://?
are your certificates set up correctly, and does your production app work over https://? (hint: this can't really be tested in spec tests)
something else?
SSL encryption should be transparent to your web app; are you having problems with that? Is your production app failing with HTTPS while working properly with HTTP?
(Feel free to compose a new question when you give it some thought)
Three ways to simulate HTTPS in request specs, with caveats:
If you're trying to simulate the behavior of your non-HTTPS app behind an HTTPS proxy, the easiest thing is to pass an HTTP header such as X-Forwarded-Proto:
get :index, headers: { 'X-Forwarded-Proto' => 'https' }
This will cause request.base_url to begin with 'https', request.scheme to return 'https', and request.ssl? to return true.
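For example, a minimal request spec sketch verifying those effects ('/pages' is an assumed route standing in for one of yours):

# spec/requests/https_simulation_spec.rb -- '/pages' is hypothetical
require 'rails_helper'

RSpec.describe 'simulating HTTPS behind a proxy', type: :request do
  it 'reports the request as SSL' do
    get '/pages', headers: { 'X-Forwarded-Proto' => 'https' }

    expect(request.ssl?).to be(true)
    expect(request.scheme).to eq('https')
    expect(request.base_url).to start_with('https://')
  end
end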
However, under this approach you may still have issues with Secure cookies in request specs, as Rack::Test::Session and ActionDispatch::Integration::Session will still use http URLs internally.
A workaround for that is to instead set env['rack.url_scheme'] to https:
get :index, env: { 'rack.url_scheme' => 'https' }
This will cause ActionDispatch::Integration::Session#build_full_uri to translate request paths to https URIs (while still not making any actual HTTP/HTTPS calls).
A potential issue here, however, is that it only affects this specific request; in a complex setup with OmniAuth or other gems you may still end up with a mix of HTTP and HTTPS requests and more cookie issues.
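The same assertions as in the sketch above apply here; for instance (again assuming a hypothetical '/pages' route):

get '/pages', env: { 'rack.url_scheme' => 'https' }

expect(request.ssl?).to be(true)
expect(request.base_url).to start_with('https://')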
What I finally settled on, myself, is to brute-force configure all integration-test requests to HTTPS. In rails_helper.rb:
config.before(:example, type: :request) do
  integration_session.https!
end
This works for my tests, even with OmniAuth cookies in the mix, and it supports hard-coding secure: true and cookies_same_site_protection = :none in application.rb. It may have other negative side effects, though, that I just haven't run into.
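For reference, a sketch of the kind of application.rb settings this keeps testable; the app module name is hypothetical and the exact values are assumptions for your app:

# config/application.rb (excerpt)
module MyApp # hypothetical application name
  class Application < Rails::Application
    # Send the session cookie only over HTTPS connections
    config.session_store :cookie_store, key: '_myapp_session', secure: true

    # Allow cross-site cookie use (e.g. OmniAuth callbacks); browsers require
    # SameSite=None cookies to also be marked Secure
    config.action_dispatch.cookies_same_site_protection = :none
  end
end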
Related
TL;DR
Requests say they are sending in https
Payload is in clear text
force_ssl = true
Very lost
Detailed
I am running a React front end talking to a Rails back end via URLs provided by js-routes. The problem is that my requests claim to be sent over https, but the payload is clear text. I have been working on locking down my server for the past week but cannot seem to overcome this last hurdle.
Info
Site is secured with an SSL cert (I have a green lock throughout)
React form
Rails back end
Ruby 2.3.3
Rails 4.2.6
React 15
Valid cert with 300+ days before expiration
force_ssl config = true (see the snippet after this list)
Running server in production mode
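For reference, the force_ssl line above refers to the standard Rails setting:

# config/environments/production.rb
config.force_ssl = true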
js-routes config
JsRoutes.setup do |config|
  protocol = Rails.application.secrets.protocol
  config.default_url_options = {
    format: :json,
    trailing_slash: true,
    protocol: protocol,
  }
end
Request
Notice the https for the request but the clear text payload.
Am I just flat out missing something here?
After @Tony posted his comment (and I'd already started to think along those lines as well), I did some tests with Wireshark today, sniffing the traffic. The data is indeed encrypted as expected.
Thanks.
Is it possible to set an HTTP request header with Capybara? I have seen several posts suggesting things like
Capybara.current_session.driver.headers = { 'Accept-Language' => 'de' }
Capybara.current_session.driver.header('Accept-Language', 'de')
but they don't seem to work. I am trying to set the following header
X-TEST-IP: 127.0.0.1
so that when I visit my site, I am authenticated. Any ideas?
Thanks
You're using Selenium, which doesn't provide a way to set headers. It is possible through middleware or a programmable proxy (see "setting request headers in selenium"), although you're probably better off just using the test mode of whatever auth library you are using (Devise, etc.).
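If you do want to inject the header yourself, the middleware route could look like this minimal sketch (the middleware name and file locations are assumptions):

# lib/force_test_ip.rb -- hypothetical middleware
class ForceTestIp
  def initialize(app)
    @app = app
  end

  def call(env)
    # Rack exposes request headers as upper-cased, HTTP_-prefixed env keys
    env['HTTP_X_TEST_IP'] = '127.0.0.1'
    @app.call(env)
  end
end

# config/environments/test.rb:
#   config.middleware.use ForceTestIp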
I was wondering if there is a way to know whether an HTTP request made to a Rails app came from cURL.
I have code that does some front-end processing and applies only to HTTP requests made through a web browser. Now I want to be able to differentiate a normal browser request from a cURL request, so that I can run a separate process only for cURL requests.
Thanks in advance.
if request.env["HTTP_USER_AGENT"] =~ /curl/i
in your controller should do the trick. Or you can do this at the routing level with the user_agent constraint:
get '/resource' => 'controller#curl_logic', constraints: { user_agent: /curl/i }
get '/resource' => 'controller#view_logic' # everything else
You can look at the user agent in the HTTP request, but that will only work if the curl client doesn't override it, which is easy to do (curl's -A flag sets an arbitrary User-Agent). If you're only doing this for 'friendly' clients where you can trust the user agent, it's straightforward. See the first example in one of the better resources for Rails routing.
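Putting the controller-level check together, a sketch (controller, action, and responses are hypothetical):

class ResourcesController < ApplicationController
  def show
    # curl identifies itself with a User-Agent like "curl/8.4.0" unless overridden via -A
    if request.user_agent =~ /curl/i
      render plain: 'curl-specific response'
    else
      render :show # normal browser flow
    end
  end
end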
I have a problem creating an HTTP request inside my controller action. I used net/http and RestClient, but I can't get either to work with my local server URL, i.e. http://localhost:3000/engine/do_process; I always get a request timeout. However, it works with other valid URLs.
Hope you can enlighten me on this one. I did some research but couldn't find anything explaining why I get this timeout.
Sample controller code:
require 'rest_client'

class LgController < ApplicationController
  def get_lgjson
    response = RestClient.get("http://localhost:3000/engine/do_process_lg")
    @generated_json = response.to_str
  end
end
I encountered this problem today, too, in exactly the same context: using the Ruby RestClient to make an HTTP request inside a controller. It had worked earlier in a different project using OpenURI without problems, which was surprising because both HTTP libraries, RestClient and OpenURI, use the same underlying library, Net::HTTP.
It is the URL that makes the difference. We can make a connection to an external URL in the controller, but not to localhost. The problem is the duplicated connection to localhost: one connection (the one serving the current request) is already open, and we are trying to open a second one. This does not work in a single-threaded web server like Thin, for instance; the server cannot answer the second request until the first one finishes, so the first one times out. A multi-threaded web server such as Puma could help.
I think this is because you use a single-threaded web server. You have two ways to fix it:
use Passenger (or another multi-process/multi-threaded server)
reconsider whether it makes sense to make a net/http request to localhost at all (see the sketch below)
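If the call targets your own app, you can often skip HTTP entirely and invoke the shared logic directly; a sketch, where Engine::Processor is a hypothetical extraction of the do_process_lg logic:

class LgController < ApplicationController
  def get_lgjson
    # Instead of RestClient.get("http://localhost:3000/engine/do_process_lg"),
    # which leaves a single-threaded server waiting on its own response,
    # call the underlying logic in-process:
    @generated_json = Engine::Processor.new.do_process_lg.to_json
  end
end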
In Rails 2.3.4, the way Accept headers are handled has changed:
http://github.com/rails/rails/commit/1310231c15742bf7d99e2f143d88b383c32782d3
We won't Accept it
The way in which Rails handles incoming Accept headers has been updated. This was primarily due to the fact that web browsers do not always seem to know what they want ... let alone are able to consistently articulate it. So, Accept headers are now only used for XHR requests or single item headers - meaning they're not requesting everything. If that fails, we fall back to using the params[:format].
It's also worth noting that requests to an action in which you've only declared an XML template will no longer be automatically rendered for an HTML request (browser request). This had previously worked, not necessarily by design, but because most browsers send a catch-all Accept header ("*/*"). So, if you want to serve XML directly to a browser, be sure to provide the :xml format or explicitly specify the XML template (render "template.xml").
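For illustration, under the new behavior you make the format explicit rather than relying on Accept; a sketch in Rails 2.3-era syntax (controller and model are hypothetical):

class ItemsController < ApplicationController
  def index
    @items = Item.all
    respond_to do |format|
      format.html # index.html.erb for browser requests
      format.xml  { render :xml => @items } # reached via /items.xml, i.e. params[:format]
    end
  end
end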
I have an active API which is being used by many clients, all of whom send both a Content-Type and an Accept header, both set to application/xml. This works fine, but my testing under Rails 2.3.4 demonstrates that it no longer works; I get a 403 Unauthorised response. Removing the Accept header and just sending Content-Type works, but that clearly isn't an acceptable solution, since it would require all my clients to re-code their applications.
If I proceed to deploy Rails 2.3.4, all the client applications which use the API will break. How can I modify my Rails app so that I can continue to serve existing API requests on Rails 2.3.4 without the clients having to change their code?
If I understand correctly, the problem is in the request headers. You can simply add a custom Rack middleware that corrects it.
Quick idea:
class AcceptCompatibility
  def initialize(app)
    @app = app
  end

  def call(env)
    # Rack exposes request headers as upper-cased, HTTP_-prefixed env keys
    if env['HTTP_ACCEPT'] == "application/xml" && env['CONTENT_TYPE'] == "application/xml"
      # Probably an API call; drop the Accept header so Rails falls back to params[:format]
      env.delete('HTTP_ACCEPT')
    end
    @app.call(env)
  end
end
And then in your environment.rb
require 'accept_compatibility'
config.middleware.use AcceptCompatibility
Embarrassingly enough, this actually turned out to be an Apache configuration issue. Once I resolved this, everything worked as expected. Sorry about that.
As coderjoe correctly pointed out, setting the Content-Type header isn't necessary at all -- only setting the Accept header.