Rails 3.1: SSL is used even though I disabled SSL?

I added force_ssl in my ApplicationController and later deleted it, but now every request is still redirected to https. I have tried adding config.force_ssl = false to the configuration files (application.rb, environments/development.rb, etc.), but it doesn't work. Even after rebooting the server, requests are still redirected to https. Any clue?
Updates:
This happens only when I request the root of the application, e.g. http://localhost:3000/, even though my config/routes.rb clearly specifies the root URL: root :to => 'home#index'

You're seeing the effects of HTTP Strict Transport Security's max-age, which Rack::SSL (the middleware that config.force_ssl = true sets up) sets to a very high value.

In addition to rebooting your app, you also have to clear the browser cache.
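Concretely, the header the browser remembers looks something like this (the max-age value here is illustrative, not necessarily what Rack::SSL sends):

Strict-Transport-Security: max-age=31536000

Once a browser has seen this header for your origin, it rewrites http:// URLs to https:// on its own, before any request reaches the server, which is why flipping config.force_ssl to false alone doesn't stop the redirects.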

For those for whom it's still unclear, here is what did the trick for me.
In application_controller.rb:

before_filter :expire_hsts
[...]

private

def expire_hsts
  # Tell the browser to drop its cached HSTS policy immediately
  response.headers["Strict-Transport-Security"] = 'max-age=0'
end
In production.rb:

config.force_ssl = false

Clear your web browser's cache and that's it!

yfeldblum is absolutely correct. Disabling it and making Chrome forget the header can be a pain.
Here's what I ended up putting in my config/application.rb:

config.middleware.insert_before(Rack::Lock, Rack::SSL, hsts: false, exclude: proc { |env|
  !env['PATH_INFO'].start_with?('/manage')
})

** note A: hsts: false is the critical bit
** note B: I'm using Ruby 1.9, so my hash syntax might be different from yours.
Beyond that, I had to open chrome://net-internals/#hsts in Chrome and remove the domains that had this header set.
Thankfully this didn't make it to production, because Rack::SSL sets a very long expiry on this header.

If you are using nginx, look for this option:
proxy_set_header X-Forwarded-Proto https;
Hardcoded like this, it tells Rails that every proxied request arrived over SSL, so disable it (or forward the real scheme instead)!
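For example, a minimal sketch of the surrounding nginx location block, forwarding the real scheme instead of hardcoding https (the upstream name app_server is an assumption for illustration):

location / {
    # Pass the actual request scheme; Rails then only sees https when it really was https
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://app_server;
}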

Related

Rails application resources not getting rendered over https

I am using Ruby 2.4.0p0 and Rails 5.2.3.
In production.rb I have the following settings:

# Force all access to the app over SSL, use Strict-Transport-Security, and use secure cookies.
config.force_ssl = true
if Rails.application.config.force_ssl
  Rails.application.routes.default_url_options[:protocol] = 'https'
end
But the resources are still rendered over http rather than https. Do I need to do anything extra? Please advise what needs to be done so that all assets loaded from S3 are served over https.
The website is live here at: https://tukaweb.com/asset/garments
The S3 resources are served over http, for example:
ex: http://tukaweb.s3.amazonaws.com/uploads/three_d_garment/thumbnail/7/Womens_Dress_35-41_Thumbnail.png?X-Amz-Expires=600&X-Amz-Date=20200918T060705Z&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIRDA3IQIVTEPMN6Q%2F20200918%2Fus-west-2%2Fs3%2Faws4_request&X-Amz-SignedHeaders=host&X-Amz-Signature=1792bd4cc2437abd950b7d16d360d09e64423bdef89f41c24a5386d35e982dfa
I need them over https.
The required change should be made in carrierwave.rb inside the webapp's config/initializers directory. I modified the settings as follows:

CarrierWave.configure do |config|
  config.fog_provider = 'fog/aws'
  config.fog_credentials = {
    provider: 'AWS',
    aws_access_key_id: 'XXXXXXXXXX',
    aws_secret_access_key: 'xxxxxxxxxx',
    use_iam_profile: false,
    region: 'us-west-2', # optional, defaults to 'us-east-1'
    # host: 'ec2-xx-xxx-xx-xx.us-west-2.compute.amazonaws.com', # optional, defaults to nil
    :endpoint => 'https://s3.amazonaws.com'
  }
  config.fog_directory = 'tukaweb'  # required
  config.fog_public = false         # optional, defaults to true
  # config.fog_attributes = { cache_control: "public, max-age=#{365.days.to_i}" } # optional, defaults to {}
end
The line responsible for making S3 resources download over https instead of http:
:endpoint => 'https://s3.amazonaws.com' # earlier it was 'http://s3.amazonaws.com'
force_ssl only applies to incoming requests to the Rails app's own routes. If you have an image link set to http://image-domain.com/image, it's going to use http, and you'll get a mixed content warning. You need to ensure that anything external to the app's routes also uses SSL or another secure connection.
The first thing I do when I see a mixed content warning is a global search of the codebase for http:// to find everywhere that isn't using https://. I may or may not do a global find and replace depending on what I see; there are cases where it needs to be http:// or it won't work right (e.g. if the site doesn't have an https:// version).
The next step is to work out what is causing the insecure URL. Here it is S3, so I would look at what uses S3 and work out how to tell it to use SSL or a secure connection.
Note: The other answer explains your actual issue well, but this may be more useful to others for general troubleshooting of mixed content issues, and would lead to the same result.
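As a generic illustration (my addition, not from the answers above), here is a minimal view-helper sketch that coerces an absolute URL to https before rendering; the helper name secure_url is hypothetical:

require 'uri'

# Rewrite http:// URLs to https://, leaving every other URL untouched.
def secure_url(url)
  uri = URI.parse(url)
  uri.scheme = 'https' if uri.scheme == 'http'
  uri.to_s
end

secure_url('http://tukaweb.s3.amazonaws.com/uploads/example.png')
# => "https://tukaweb.s3.amazonaws.com/uploads/example.png"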

Running ActionCable behind Cloudfront

We've set up CloudFront in front of our application, but unfortunately it strips the Upgrade header required for ActionCable to run.
We'd like to have a different subdomain that points to the same servers but bypasses CloudFront (socket.site.com, for instance). We've done this and it's somewhat working, but it seems like a persistent connection can't be made: ActionCable keeps retrying the connection every 10s and seems unable to hold it open.
Any advice related to Cloudfront or different domains for ActionCable is appreciated.
To all who follow, hopefully this helps.
As of the time of writing (Oct. 2018), it doesn't appear that you can use ActionCable behind CloudFront at all. CloudFront discards the Upgrade header, which prevents a secure socket connection from ever being made.
Our setup was CloudFront -> Application Load Balancer (ALB) -> EC2. On the AWS side, we began by making a subdomain (socket.example.com) that pointed directly to the same ALB and bypassed CloudFront entirely. Note that Classic Load Balancers absolutely will not work; you can only use ALBs.
This alone did not fix the issue. In your Rails config, you have to add the following lines to production.rb:
config.action_cable.url = 'wss://socket.example.com:28080/cable'
config.action_cable.allowed_request_origins = ['https://example.com'] # Not the subdomain
You may also need to update your CSP to include wss://socket.example.com/cable for connect_src.
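A sketch of that CSP change using Rails' content security policy DSL (available since Rails 5.2; the initializer path and the socket host are assumptions):

# config/initializers/content_security_policy.rb
Rails.application.config.content_security_policy do |policy|
  # Allow websocket connections to the ActionCable subdomain
  policy.connect_src :self, 'wss://socket.example.com'
end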
If at this point you're getting a message about failing to upgrade, you need to ensure that your NGINX config is correct. This answer may help.
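For reference, a typical NGINX websocket-proxy stanza looks something like this (a sketch; the upstream name and the /cable path are assumptions):

location /cable {
    proxy_pass http://app_server;
    proxy_http_version 1.1;
    # These two headers are what let the protocol upgrade through the proxy
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}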
You will also need to reflect this change in your cable.js. The following snippet works for me in local development as well as production, but you may need to alter it. I wrote it with pre-ES6 in mind because this file never hit Babel in our configuration.
(function() {
  this.App || (this.App = {})
  var wsUrl
  if (location.host.indexOf('localhost') != -1) {
    wsUrl = '/cable'
  } else {
    var host = location.host
    var protocol = location.protocol
    wsUrl = protocol + '//socket.' + host + '/cable'
  }
  App.cable = ActionCable.createConsumer(wsUrl)
}).call(this)
That may be all you need, depending on your authentication scheme. However, I was using cookies shared between the main application and ActionCable, and this caused a difficult bug: the connection would appear to be made correctly, but it would actually fail, and ActionCable would retry every 10s. The final step was to ensure the auth cookies being set would work across the socket subdomain. I updated my cookie as such:
cookies.signed[:cookie_name] = {
  value: payload,
  domain: ['.socket.example.com', '.example.com']
  # Some people have to specify tld_length, but I was fine without it
}

Rails 5 config.force_ssl blocking access to subdomains that point to a different server

I am running a Rails 5 site with config.force_ssl set to true. The problem is that this sets an HSTS policy that forces SSL on all subdomains, even ones that are not part of the app or hosted on the same server. For example, I have mail.example.com with its DNS pointed at Google, and blog.example.com has its DNS pointed at NameCheap's servers. The HSTS policy set by config.force_ssl redirects both of these to https and thereby blocks access to those pages.
Based on the ActionDispatch::SSL documentation I can see that this is intended behavior and apparently there's a way to add exclusions to the ssl_options based on this example:
config.ssl_options = { redirect: { exclude: -> request { request.path =~ /healthcheck/ } } }
I'm trying to get this to work with the subdomains mentioned, but it's not working: the same policy is set and I am once again blocked from those subdomains. Maybe I'm not doing this right. Here is the line:
config.ssl_options = { redirect: { exclude: -> request { request.subdomain =~ /mail|link|blog/ } } }
Alternatively, is there another way I should be doing this?
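One possible direction, offered as a sketch rather than a confirmed fix: the redirect exclude only affects requests that actually reach this app, whereas the subdomain lock-out comes from the Strict-Transport-Security header, which Rails 5 sends with includeSubDomains by default. ActionDispatch::SSL accepts an hsts option to turn that off:

config.force_ssl = true
config.ssl_options = {
  # Stop asserting HSTS for subdomains that are hosted elsewhere
  hsts: { subdomains: false },
  redirect: { exclude: -> request { request.subdomain =~ /mail|link|blog/ } }
}

Browsers that already cached the old policy will keep redirecting until it expires or is overwritten by a header with a shorter max-age.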

send_file with X-Accel-Redirect won't return full file content

I'm doing send_file with Nginx using X-Accel-Redirect in a pretty straightforward way, but browsers won't download the full content: the download is always cut off partway through and the rest is truncated, e.g. at 40KB for a 4MB file.
Rails 4.2.1 / Nginx 1.6.2
What is interrupting the file download?
production.rb
config.action_dispatch.x_sendfile_header = 'X-Accel-Redirect' # for NGINX
download_controller.rb
class DownloadController < ApplicationController
  def download
    send_file '/full/path/to/file.csv'
  end
end
I have a couple of things for you to try:
Disable proxy buffering for that request by adding:
response.headers["X-Accel-Buffering"] = "no"
to your download action, like so:

def download
  # Tell nginx not to buffer this response
  response.headers["X-Accel-Buffering"] = "no"
  send_file '/full/path/to/file.csv'
end
Disable sendfile in the Nginx configuration.
This directive is known to cause trouble in virtual environments, according to this article:
http://www.conroyp.com/2013/04/25/css-javascript-truncated-by-nginx-sendfile/
While I am not sure whether this will impact performance in a way that matters to you, trying these could be worth a shot and might reveal more information that helps in solving the problem.
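For reference, disabling it globally is a one-line change in nginx.conf (a sketch; you could also scope it to a server or location block):

http {
    # Works around truncated responses seen with sendfile in some virtualized setups
    sendfile off;
}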
Do you have client_max_body_size in your nginx conf file?
From the docs:
Sets the maximum allowed size of the client request body, specified in the Content-Length request header field.
If the size in a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client. Please be aware that browsers cannot correctly display this error. Setting the size to 0 disables checking of the client request body size.
You can set it in your conf file:

server {
    ...
    client_max_body_size 4G;
}

or

location / {
    ...
    client_max_body_size 4G;
}

One SSL page assets

I have a problem with SSL. I need SSL on only one page, but some of the assets don't switch protocol to https and the browser shows a mixed-content warning.
Some fonts, SVG icons, and one background image are still loaded over http.
For assets I use a Proc in the environment config:
config.action_controller.asset_host = Proc.new { |source, request = nil, *_|
  if request && request.ssl?
    "#{request.protocol}#{request.host_with_port}"
  else
    'http://www.mybrandnew.com'
  end
}
Has anybody else had this problem?
P.S.: for partial SSL I already use ssl_requirement.
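One observation, offered as an assumption rather than a confirmed diagnosis: asset URLs inside precompiled CSS (fonts, SVG icons, background images) are resolved at precompile time, when no request object exists, so the Proc above always takes its http branch for them. A protocol-relative asset host sidesteps this by letting the browser reuse the page's own scheme:

# config/environments/production.rb, reusing the domain from the Proc above
config.action_controller.asset_host = '//www.mybrandnew.com'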
