TL;DR
Requests report they are being sent over HTTPS
Payload appears to be in clear text
force_ssl = true
Very lost
Details
I am running a React front end talking to a Rails back end via URLs provided by js-routes. The problem is that my requests state they are being sent over HTTPS, yet the payload appears to be in clear text. I have been working on locking down my server for the past week but cannot seem to overcome this last hurdle.
Info
Site is secured with an SSL cert (I have a green lock throughout)
React form
Rails back end
Ruby 2.3.3
Rails 4.2.6
React 15
Valid cert with 300+ days before expiration
force_ssl config = true (see the sketch after this list)
Running the server in production mode
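For reference, the force_ssl flag above normally lives in the production environment file; a minimal sketch (the path is the Rails default; adjust if your app differs):
# config/environments/production.rb
Rails.application.configure do
  # Redirect all plain HTTP requests to HTTPS, mark cookies as secure,
  # and send an HSTS header on responses.
  config.force_ssl = true
end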
js-routes config
JsRoutes.setup do |config|
  protocol = Rails.application.secrets.protocol
  config.default_url_options = {
    format: :json,
    trailing_slash: true,
    protocol: protocol,
  }
end
Request
Notice the https for the request but the clear text payload.
Am I just flat out missing something here?
After @Tony's comment (I'd already started to think along the same lines), I did some tests with Wireshark today, sniffing the traffic. The data is indeed encrypted on the wire as expected; the browser dev tools simply show the payload after TLS decryption, which is why it looks like clear text there.
Thanks.
Related
I'm trying to build a Rails app that will process inbound mail. I got the app to work on my localhost machine using the Rails conductor and Action Mailbox. When an email gets sent, I'm able to save the contents of the email. But I'm having difficulty getting it to work in a production environment… I'm not sure how to configure my domain and settings to get it to work.
I’ve been following the instructions here:
https://edgeguides.rubyonrails.org/action_mailbox_basics.html#sendgrid
and https://sendgrid.com/docs/for-developers/parsing-email/setting-up-the-inbound-parse-webhook/
I included this in my rails credentials:
action_mailbox:
  ingress_password: mypassword
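Per the Action Mailbox guide, the ingress also has to be selected in the environment config; a minimal sketch (the file path is the Rails default):
# config/environments/production.rb
config.action_mailbox.ingress = :sendgrid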
I have set up an MX record on Google Domains:
parse.[mydomain].com

In the SendGrid Inbound Parse settings I pointed the Hostname and URL at:
https://actionmailbox:mypassword@parse.[mydomain].com/rails/action_mailbox/sendgrid/inbound_emails
I send an email from my email account to
parse@parse.[mydomain].com
but I’m not able to test or track what is happening to this email. I don’t receive an error message back to my email as a reply, so I think that’s a good sign but I’m not sure whether it’s being processed or how to troubleshoot. I even put a puts ‘test’ in my replies_mailbox.rb file but I don’t see anything in the console when I tail logs on production.
Any advice on what next steps I can take?
When dealing with an integration issue like this it's useful to split it into smaller checks, in order along the email's path:
1. Check whether the MX DNS record has propagated. When you edit your zone, other DNS servers may still respond with the old records until the zone TTL passes (it is usually set to several hours); use a remote DNS checker.
2. Check the SendGrid settings (including "Post the raw, full MIME message", which Action Mailbox expects so that SendGrid posts the 'email' field).
3. Check whether the email is being dropped by the spam filter in SendGrid.
4. Check whether the request is present in your web server / reverse proxy logs (such as nginx, if you use one).
5. Try mimicking SendGrid's request to check whether your app is accepting it (and whether it shows up in the logs). Rails only reads params[:email]; the other fields are not necessary:
curl -X POST "https://actionmailbox:mypassword@parse.[mydomain].com/rails/action_mailbox/sendgrid/inbound_emails" \
  -F email=$'From: foo <abc@localhost>\nTo: bar <bca@localhost>\nSubject: test\nMIME-Version: 1.0\n\nTest!'
I'd start with #5, to be sure your app is accepting email correctly and writing logs, and then work back up the list.
PS. puts might not appear in the logs in production (or not where you expect it to appear) depending on your logging setup. A better way is to use Rails.logger.info.
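For instance, a minimal sketch of logging from the mailbox mentioned in the question (the class name matches replies_mailbox.rb; the log message itself is illustrative):
class RepliesMailbox < ApplicationMailbox
  def process
    # Rails.logger writes to the production log, unlike a bare puts
    Rails.logger.info "Inbound mail from #{mail.from.inspect} with subject #{mail.subject.inspect}"
  end
end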
I spent two weeks on what seems like this same issue and found one possible answer that worked for me, crossposted in SendGrid's GitHub issues (https://github.com/sendgrid/opensource/issues/22):
Problem:
Localhost: my endpoint route was working correctly. I was able to receive and parse both SendGrid POSTs (through a local tunnel -- both Cloudflare and localtunnel) and Postman POSTs.
Production: my endpoint route was working fine when tested with Postman, and with SendGrid POSTs sent to a Cloudflare tunnel that pointed at my live site. However, the SendGrid POSTs sent directly to my site seemed to fall into a black hole; they never made it. I did not have any agent blocks or any IPs blacklisted, so I wasn't sure what was going on.
Solution:
After a lot of back and forth with the support team, I learned that SendGrid Inbound Parse seems to only support TLS 1.2... My site was using TLS 1.3. The local tunnels presented fully backwards-compatible SSL setups, which is why the POSTs would work there but not directly against my site.
To identify whether this is an issue for you, you can test your site at https://www.ssllabs.com/ssltest/analyze.html ... once it is done, there will be a section that shows you which protocols your site supports.
If you don't have green for TLS 1.2, then you need to update your server to support it.
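If you'd prefer to check from a Ruby console rather than SSL Labs, a rough sketch along these lines also works (the hostname is a placeholder, and the max_version setter needs a reasonably recent Ruby/OpenSSL):
require 'socket'
require 'openssl'

host = 'parse.example.com'  # placeholder: use your own domain
ctx  = OpenSSL::SSL::SSLContext.new
ctx.max_version = OpenSSL::SSL::TLS1_2_VERSION  # behave like a TLS 1.2-only client

tcp = TCPSocket.new(host, 443)
ssl = OpenSSL::SSL::SSLSocket.new(tcp, ctx)
ssl.hostname = host   # SNI, required by most modern hosts
ssl.connect           # raises if the server refuses a TLS 1.2 handshake
puts ssl.ssl_version  # => "TLSv1.2" when the handshake succeeds
ssl.close
tcp.close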
I used NGINX and CertBot. To update them:
SSH into your server and use sudo nginx -T to see what your current configuration is and where it lives.
Open that config in your editor, e.g. sudo nano /etc/nginx/snippets/ssl-params.conf (or whatever your actual path and preferred editor are; make sure to use the path from the -T output, because otherwise you might end up editing the wrong config).
Look for the line that says ssl_protocols...; you need to update it to read ssl_protocols TLSv1.3 TLSv1.2;
You may also need to add specific ciphers and a path to a dhparam if you don't already have one generated and linked. This is what the relevant portion of my final file looks like:
ssl_protocols TLSv1.3 TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_dhparam /etc/nginx/dhparam.pem;
ssl_ciphers EECDH+AESGCM:EDH+AESGCM;
Exit out, make sure your new config is valid with sudo nginx -t (or sudo service nginx configtest), and then restart nginx with sudo service nginx restart.
Test your site again on SSLLabs and make sure it supports TLS 1.2
I then sent another inbound parse through SendGrid and was able to confirm that it hit my site, was logged, and was processed.
From the Electron renderer, I am accessing a local GraphQL endpoint served by a Django instance on my computer, which I'd like to do over HTTP, not HTTPS. But Electron's Chromium seems to intercept my fetch request and preemptively return a 307 redirect.
So if my fetch request is POST to http://local.myapp.com:3000/v1/graphql, then Chromium returns a 307 and forces a redirect to https://local.myapp.com:3000/v1/graphql, which fails because my server is listening on port 3000 and for my use case I can't do a local cert for local.myapp.com.
Theoretically the first insecure request should be hitting an nginx docker container listening on port 3000 without any SSL requirement. And nginx is proxying the request to a Hasura container. But I'm not even seeing the requests in the nginx access logs, so I'm pretty sure the request is being intercepted by Chromium.
I believe this StackOverflow comment summarizes well why this is happening: https://stackoverflow.com/a/34213531
Although I don't recall ever returning a Strict-Transport-Security header from my GraphQL endpoint or Django server.
I have tried the following code without success to turn off this Chromium behavior within my Electron app:
import { app } from 'electron'

app.commandLine.appendSwitch('ignore-certificate-errors')
app.commandLine.appendSwitch('allow-insecure-localhost')
app.commandLine.appendSwitch('ignore-urlfetcher-cert-requests')
app.commandLine.appendSwitch('allow-running-insecure-content')
I have also tried setting the fetch options to include {redirect: 'manual'} and {redirect: 'error'}. I can prevent the redirect but that doesn't do me any good because I need to make a successful request to the endpoint to get my data.
I tried replacing the native fetch with electron-fetch (link) and cross-fetch (link) but there seems to be no change in behavior when I swap either of those out.
Edit: Also, making the request to my GraphQL outside of Electron with the exact same header and body info works fine (via Insomnia).
So I have a couple of questions:
Is there a way to programmatically view/clear the list of HSTS domains that is being used by Chromium within Electron?
Is there a better way to accomplish what I'm trying to do?
I think the issue might be coming from the server. Many servers don't allow plain HTTP at all: they drop the transfer and redirect you to HTTPS, and there's a clear reason why they do that.
Imagine you have an app that connects over HTTPS to send your API key in return for some data. If someone simply changed the https:// to http://, the data would travel unencrypted, and no matter what you did with your API key it would be exposed. That's why such servers never accept an HTTP request, not even a single bit of data.
I can think of a couple of things to look at.
Chromium may not be the reason for the redirect; your Django instance might be configured for production or with HTTPS listeners.
Nginx might be the one doing the redirecting (a little bit of SSL configuration that forces HTTPS).
Last but not least, you could just generate a certificate with OpenSSL for the host local.myapp.com (note: include the port 3000 in the URLs you use) and use it on your Django instance. You can trust the certificate so that it works everywhere on your computer.
I have a rails app that is running on heroku and am using Cloudflare Pro with their Full SSL to encrypt traffic between: User <-SSL-> Cloudflare <-SSL-> Heroku, as detailed in: http://mikecoutermarsh.com/adding-ssl-to-heroku-with-cloudflare/ .
I am also using the rack-ssl-enforcer gem to force all http requests to go through https.
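For reference, rack-ssl-enforcer is typically enabled as a Rack middleware in the production config; a minimal sketch based on the gem's README (your placement may differ):
# config/environments/production.rb
config.middleware.use Rack::SslEnforcer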
This is working properly, except I have the following issues, by browser:
1) Firefox: I have to add a security exception on the first visit to the site, getting the "This site is not trusted" warning. Once on the site, I also have the warning in the address bar:
2) Chrome: the page loads the first time, but the lock in the address bar has a warning triangle on it which, when clicked, displays:
Your connection is encrypted with 128-bit encryption. However, this page includes other resources which are not secure. These resources can be viewed by others while in transit, and can be modified by an attacker to change the look of the page. The connection uses TLS 1.2. The connection is encrypted and authenticated using AES_128_GCM and uses ECDHE_RSA as the key exchange mechanism.
3) Safari: initially loads with the https badge, but it immediately drops off.
Is there a way to leverage Cloudflare SSL + piggyback of Heroku native SSL without running into these security warnings? If not, I don't see much value in the configuration.
My apologies for slinging erroneous accusations against Cloudflare and Heroku :-)
Turns out the issue was not the fault of either, but instead that images on the app (being served from AWS S3) were being served up without https.
If anyone runs into this situation, lessons learned across a wasted day:
S3 only lets you serve up content via https if you serve from your bucket's dedicated url: s3.amazonaws.com/your-bucket-name/etc..
a) I tried setting the bucket up for static website hosting, so I could use the url "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", and then set up a CNAME within my DNS that sends "your-bucket-name.your-url" to "your-bucket-name.your-url.s3-website-us-east-1.amazonaws.com/etc...", to pretty up urls
b) this works, but AWS only lets you serve via https from your full url (s3.amazonaws.com/your-bucket-name/etc..) or *.s3-website-us-east-1.amazonaws.com/etc..., which doesn't work if you have a dot in your bucket name (your-bucket-name.your-url), which was required for me to do the CNAME redirect
If you want to use AWS CDN with https, on your custom domain, AWS' only option is CloudFront with a SSL certificate, which they charge $600/mo, per region. No thanks!
In the end, I sucked it up and have ugly image URLs that look like https://s3-website-us-east-1.amazonaws.com/mybucketname..., and using Paperclip I specify https with ":s3_protocol => :https" in my model. Other than that, all is working properly now.
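For reference, a minimal sketch of that Paperclip option in a model (the model name, attachment name, and credentials path are illustrative, not from the original post):
class Photo < ActiveRecord::Base
  has_attached_file :image,
    :storage        => :s3,
    :s3_protocol    => :https,                       # force https:// asset URLs
    :s3_credentials => "#{Rails.root}/config/s3.yml" # illustrative credentials file

  # Paperclip 4+ requires a content type validation on every attachment
  validates_attachment_content_type :image, :content_type => /\Aimage/
end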
Does anyone know of a plugin or gem that will log any HTTP requests your Rails app makes while responding to a request? For example, if you are using HTTParty to hit an API, how can you see what outbound requests are coming out of your Rails app?
You have to tell the outbound HTTP client to use a proxy.
For HTTParty it's fairly simple (from the docs):
class Twitter
  include HTTParty
  # route all of this class's requests through the given proxy host and port
  http_proxy 'http://myProxy', 1080
end
If you're looking for a proxy to set up, personally I like Paros proxy (Java so cross platform and does SSL).
Also try the http_logger gem:
require 'http_logger'
Net::HTTP.logger = Logger.new(...) # defaults to Rails.logger if Rails is defined
Net::HTTP.colorize = true # Default: true
This will log all requests that go through the Net::HTTP library.
https://github.com/railsware/http_logger
If you're doing development on your own machine, Charles Proxy is a good option.
In production, you'd probably be better off creating your own logger.debug() messages.
The only way I got this to work was to specify only the IP as the first parameter to the http_proxy call:
http_proxy '10.2.2.1', 8888
The example above, with the http:// prefix, did not work; I got SocketError: getaddrinfo: nodename nor servname provided.
Try my httplog gem; you can customize it to log requests, responses, headers, etc.
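For example, a minimal configuration sketch assuming the option names from the gem's README (check the README for the full list and defaults):
require 'httplog'

HttpLog.configure do |config|
  config.logger       = Rails.logger  # send output to the Rails log
  config.log_headers  = true          # include request/response headers
  config.log_response = true          # include response bodies
end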
In Rails 2.3.4, the way Accept headers are handled has changed:
http://github.com/rails/rails/commit/1310231c15742bf7d99e2f143d88b383c32782d3
We won't Accept it
The way in which Rails handles incoming Accept headers has been updated. This was primarily due to the fact that web browsers do not always seem to know what they want ... let alone are able to consistently articulate it. So, Accept headers are now only used for XHR requests or single item headers - meaning they're not requesting everything. If that fails, we fall back to using the params[:format].
It's also worth noting that requests to an action in which you've only declared an XML template will no longer be automatically rendered for an HTML request (browser request). This had previously worked, not necessarily by design, but because most browsers send a catch-all Accept header ("/"). So, if you want to serve XML directly to a browser, be sure to provide the :xml format or explicitly specify the XML template (render "template.xml").
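Concretely, an action that previously relied on its lone XML template now needs the format spelled out; a rough sketch of what that can look like (controller and model names are illustrative):
class ReportsController < ApplicationController
  def show
    @report = Report.find(params[:id])
    respond_to do |format|
      format.xml  { render :xml => @report }  # explicit XML response
      format.html                             # renders show.html.erb if you add one
    end
  end
end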
I have an active API which is being used by many clients who are all sending both a Content-Type and an Accept header, both set to application/xml. This works fine, but my testing under Rails 2.3.4 demonstrates that this no longer works -- I get a 403 Unauthorised response. Remove the Accept header and just sending Content-Type works, but this clearly isn't an acceptable solution since it will require that all my clients re-code their applications.
If I proceed to deploy to Rails 2.3.4 all the client applications which use the API will break. How can I modify my Rails app such that I can continue to serve existing API requests on Rails 2.3.4 without the clients having to change their code?
If I understand correctly, the problem is in the request headers. You can simply add a custom Rack middleware that corrects it.
Quick idea:
class AcceptCompatibility
  def initialize(app)
    @app = app
  end

  def call(env)
    # Rack exposes the request headers as HTTP_ACCEPT / CONTENT_TYPE in env
    if env['HTTP_ACCEPT'] == "application/xml" && env['CONTENT_TYPE'] == "application/xml"
      # Probably an API call; drop the Accept header so Rails falls back to params[:format]
      env.delete('HTTP_ACCEPT')
    end
    @app.call(env)
  end
end
And then in your environment.rb
require 'accept_compatibility'
config.middleware.use AcceptCompatibility
Embarrassingly enough, this actually turned out to be an Apache configuration issue. Once I resolved this, everything worked as expected. Sorry about that.
As coderjoe correctly pointed out, setting the Content-Type header isn't necessary at all; only the Accept header needs to be set.