security of http to https redirect - asp.net-mvc

I have a website that is 100% https and will only work as https. My site is an asp.net mvc application running on IIS 7.5.
It is on multiple servers with traffic distributed via a load balancer.
I am not in control of the hardware.
For HTTP requests, I was hoping they could be stopped at the load balancer, with a redirect to HTTPS happening at that point.
However, the hardware company won't do this for me; instead, I need to do the redirect from HTTP to HTTPS within IIS on the server. This means unencrypted traffic can enter the inner network, with the redirect happening at the server level. I would feel more comfortable if the redirect happened at the load balancer.
Do I have valid concerns?

Threat model:

HTTP request:

           Attacker      Security Boundary
               |                 |
               v                 v
Client -- http request --> Load Balancer
                                 |
Client <----- redirect ---------+
Threats which occur from allowing HTTP redirects regardless of methodology:
Spoofing: Client could connect to MITM spoofed HTTP server which does not pass through the redirect, but instead proxies connections to the actual HTTPS server
Tampering: Client could receive redirect URL from MITM spoofed server which directs them to another action (E.g.: client receives redirect to https://yoursite.com/login.aspx?redirect=/deleteAllDocuments)
Information Disclosure: Initial HTTP request is disclosed, any information in POST or GET is available unencrypted to eavesdroppers.
Arguments for performing the redirect on a server other than the target server:
Firewall can be limited to HTTPS data, limiting risk of unencrypted data due to misconfiguration
Configuration and liability could become "Someone else's problem", from at least a political perspective
Vulnerabilities in HTTP server would be isolated and could not be used to attack HTTPS server or underlying application
Arguments for performing redirect on something other than the load balancer:
Load balancers are not servers, and therefore might have lightly used code paths when used as servers, which could be more prone to undiscovered bugs or performance problems
Configuration is not available to you, but you (or your company) are still probably liable for any misconfigurations that occur (from a legal perspective)
In light of the above analysis, for highest security with lowest risk:
I would not put the redirect on the target server, nor on the load balancer, but instead on a VM whose only job is to serve redirects. A minimal Linux or Windows box should be able to be locked down tightly to limit exposure.
I would not allow redirects that carry a query string or POST data (e.g.: return 404 for any request other than GET / HTTP/1.1; see the sketch below this list)
I would either call the possibility of spoofing and tampering an acceptable risk, or show a page to the user explaining that the site must be accessed using HTTPS, instead of using a redirect
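As a minimal sketch of that redirect-only box, assuming nginx (the host name is taken from the example above):

    server {
        listen 80 default_server;

        # Anything that is not a bare GET is refused outright.
        if ($request_method != GET) {
            return 404;
        }

        # Only "GET /" is redirected, and the redirect target carries no
        # path or query string, so no request data survives in the clear.
        location = / {
            return 301 https://yoursite.com/;
        }
        location / {
            return 404;
        }
    }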
But, if you can assume the following conditions are met, placing the HTTP server co-resident with the HTTPS server should not reduce security.
Any bug in the HTTP server is present in the HTTPS server
The HTTP server is correctly configured to disallow access to protected resources (set up as a separate site in IIS, for example, with the secure site having no HTTP binding; see the sketch after this list)
No other application is able to create a server on HTTP (netsh http show urlacl lists only IIS, for example)
Configuration is audited to ensure the above configurations are properly maintained (Periodic pen tests, manual configuration review, configuration change management, and an IDS or IPS system)
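For illustration, the HTTP-only redirect site's web.config could contain nothing but a rewrite rule. A sketch, assuming the IIS URL Rewrite module is installed:

    <configuration>
      <system.webServer>
        <rewrite>
          <rules>
            <rule name="Redirect to HTTPS" stopProcessing="true">
              <match url="(.*)" />
              <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
            </rule>
          </rules>
        </rewrite>
      </system.webServer>
    </configuration>

Because that site has only an HTTP binding, no HTTPS condition is needed: every request it can receive is, by definition, unencrypted.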
In some cases, the reduced complexity may even be easier to secure than a separate server. Additionally, an administrator who is unfamiliar with the load balancer's configuration may be more prone to making a critical configuration error there than when making the same change in a product they know well.

Do I have valid concerns?
Do you have a valid concern about the initial connection being over HTTP? Sure. The initial request can be intercepted and the response spoofed in a MitM attack. The attacker can then either keep the victim on plain HTTP (speaking SSL/TLS to your server while relaying to the victim in the clear) or create an imposter SSL session with the client that terminates at the attacker before being re-encrypted on its way to you (using various spoofing techniques to make the attack less obvious to the casual user).
However, if such an attack were launched, I would be far more worried about the transit from the client to your load balancer, not between your load balancer and IIS. If you suspect that you have malicious systems behind your load balancer, you have an entirely different set of problems.

See my answer over on Security.SE for some relevant information regarding redirects from HTTP to HTTPS.

Related

Nginx Auth Proxy

I have multiple services, each with its own web server listening on a different port, e.g.:
http://127.0.0.1:5000 (service A)
https://127.0.0.1:3000 (service B)
I need a way to restrict access to them without tweaking each of them individually. So I also have an OAuth server hosted (on port 2333). I have configured the OAuth server so that it can redirect you to a certain URL if you successfully authenticate through it. So, for example, if I access this URL:
https://127.0.0.1:2333/oauth/authorise?service=A&redirect_uri=http://127.0.0.1:5000
It will ask for authentication (or look for a cookie) and redirect me to the desired URL. This works fine if I manually access that URL, but I need it automated (every time someone accesses the initial URL, they should be redirected to the OAuth server).
I need the following scenario:
Insert URL http://127.0.0.1:5000 in browser
Get redirected to https://127.0.0.1:2333/oauth/authorise?service=A&redirect_uri=http://127.0.0.1:5000
The OAuth server takes care of the rest
For this, I was thinking of using nginx to redirect, but I don't know how to configure it. Any ideas?
How are those services hosted? You have a couple of options:
Doing a redirect to https://127.0.0.1:2333 on the / path of service A (in the source code).
Doing a redirect in the server configuration (which is lower level and should be faster; see the sketch below).
The first option gives you more control, and you can do other things easily (like checking whether the user is logged in). The second option is faster, but some things are harder to implement, since you are modifying the server configuration.
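As a sketch of that second option with nginx, assuming nginx fronts service A on an illustrative port (8080 here) and using the OAuth URL from the question:

    server {
        listen 8080;

        # Every request to the front door is bounced to the OAuth server,
        # which authenticates the user and redirects back to service A.
        location / {
            return 302 https://127.0.0.1:2333/oauth/authorise?service=A&redirect_uri=http://127.0.0.1:5000;
        }
    }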

Is it necessary to force_ssl? Or should the SSL terminate at the load balancer?

I'm on AWS OpsWorks, using an ELB which has my CA's SSL certificate.
The first point of access is always the load balancer (ELB). The ELB directs traffic to the instances. The instances each have a copy of the Rails app, Unicorn, etc.
One thing to note: the instances behind the ELB cannot be accessed directly.
At this point, do I need to force_ssl in Rails? I hear it's common enough to terminate SSL at the border (ELB).
As far as I've read, force_ssl gives the following:
Automatic redirection of traffic from HTTP to HTTPS.
Flagging cookies as secure, plus some added protection (e.g. against MITM attacks).
http://api.rubyonrails.org/classes/ActionController/ForceSSL/ClassMethods.html only indicates http to https redirection.
The second answer to "What does force_ssl do in Rails?" suggests that force_ssl does more than redirection.
If I decide not to use force_ssl, I can manage redirects by writing Nginx definitions.
Given the scenario, it feels like forcing SSL via Rails is redundant, since the SSL negotiation already happens at the ELB. Is it still necessary to force_ssl? Are there any added benefits?
If you're terminating SSL at the ELB level, you don't want it (your instances need to accept plain HTTP traffic from the ELB without redirecting it).
Bear in mind that in this case the traffic between the ELB and your backend instances travels over HTTP (i.e. not encrypted). This is fine for most cases.
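If you do drop force_ssl and manage the redirect with Nginx definitions instead (as the question mentions), a sketch along these lines should work; the ELB sets the X-Forwarded-Proto header on the requests it forwards:

    server {
        listen 80;

        # The ELB terminates SSL and forwards everything as plain HTTP,
        # recording the scheme the client originally used in a header.
        if ($http_x_forwarded_proto = "http") {
            return 301 https://$host$request_uri;
        }

        # ... the usual proxy_pass to Unicorn goes here ...
    }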

https URL redirecting to external site

Hi, I have a website that I will be developing in the future.
Looking at the current website, I noticed something weird that I have never seen before; I also Googled it and found nothing.
If you go to: http://www.smartrainer.com.au you get the normal site
But, if you go to: https://www.smartrainer.com.au you get redirected to another website and are also given an SSL warning beforehand (in Chrome)
The site is hosted on a UNIX / PHP server and the .htaccess file currently has nothing that would suggest that it's redirecting to this other website.
Any help or insight would be appreciated, because I've never heard of or seen this before. The client also has no idea why it would be redirecting to that company, which we've never heard of.
Thanks!
It sounds like you're using a shared hosting server.
In plain HTTP, the server can know which host the client is requesting using the Host header in the request (this is based on the URL). Apache Httpd supports this with what it calls Name-based virtual hosts.
The HTTPS configuration is separate from the HTTP configuration in Apache Httpd (and presumably a number of other servers). Having virtual hosts (typically on a shared host) for the HTTP configuration doesn't mean that the same configuration is replicated for HTTPS.
HTTPS presents another problem: the server has to choose which certificate to send before it can see the Host header. Indeed, the server needs to send the client a certificate with the correct name during the SSL/TLS handshake, which happens before any HTTP traffic is sent (so before the Host header can be read). To overcome this problem, some hosts will set up a certificate valid for multiple host names (typically multiple Subject Alternative Names, or sometimes wildcards), while others will use Server Name Indication (which isn't supported by all clients).
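You can see the difference yourself with the openssl command-line client, comparing the certificate the server presents with and without SNI:

    # With SNI: the server can pick a certificate for this particular host name
    openssl s_client -connect www.smartrainer.com.au:443 -servername www.smartrainer.com.au

    # Without SNI: you see whatever default certificate the server falls back to
    openssl s_client -connect www.smartrainer.com.au:443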
To get your server to host your site for HTTPS, you'd need:
To make sure the certificate it serves is valid for your host name (otherwise, there will be a warning message).
That the virtual hosts (or equivalent) it serves are configured for your host too.
In your case it seems that (a) your server is serving a single certificate that is not valid for your host and (b) your host isn't configured for HTTPS anyway, since you're falling back to what's probably the default host.
You may be able to work around this by redirecting the HTTPS URL to the HTTP URL from your .htaccess, although the certificate warning will still appear first, since the TLS handshake completes before the redirect is sent. This error is likely caused by the shared hosting setup. If you cannot solve the issue from your .htaccess, then you can also contact your hosting provider about it.
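If you try the redirect route, a minimal .htaccess sketch (assuming mod_rewrite is enabled, and noting that whether these rules are even consulted for HTTPS requests depends on which virtual host those requests land on) would be:

    RewriteEngine On
    # Send any request that arrived over HTTPS back to plain HTTP.
    RewriteCond %{HTTPS} on
    RewriteRule ^(.*)$ http://www.smartrainer.com.au/$1 [R=301,L]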

CloudFlare SSL compatibility with ASP.NET MVC RequireHttps

I am hosting an ASP.NET MVC 4 site on AppHarbor (which uses Amazon EC2), and I'm using CloudFlare for Flexible SSL. I'm having a problem with redirect loops (310) when trying to use RequireHttps. The problem is that, like EC2, CloudFlare terminates the SSL before forwarding the request onto the server. However, whereas Amazon sets the X-Forwarded-Proto header so that you can handle the request with a custom filter, CloudFlare does not appear to. Or if they do, I don't know how they are doing it, since I can't intercept traffic at that level. I've tried the solutions for Amazon EC2, but they don't seem to help with CloudFlare.
Has anyone experienced this issue, or know enough about CloudFlare to help?
The X-Forwarded-Proto header is intentionally overridden by AppHarbor's load balancers to the actual scheme of the request.
Note that while CloudFlare's Flexible SSL option may add slightly more security, there is still unencrypted traffic travelling over the public internet from CloudFlare to AppHarbor. This arguably defeats the purpose of SSL for anything other than appearances and reducing the number of attack vectors (like packet sniffing on the user's local network) - i.e. it may look "professional" to your users, but it is actually still insecure.
That's less than ideal particularly since AppHarbor supports both installing your own certificates and includes piggyback SSL out of the box. CloudFlare also recommends using "Full SSL" for scenarios where the origin servers/service support SSL. So you have a couple of options:
Continue to use the insecure "Flexible SSL" option, but instead of inspecting the X-Forwarded-Proto header in your custom RequireHttps filter, inspect the scheme attribute of the CF-Visitor header (see the sketch after this list). There are more details in this discussion.
Use "Full SSL" and point CloudFlare to your *.apphb.com hostname. This way you can use the complimentary piggyback SSL that is enabled by default with your AppHarbor app. You'll have to override the Host header on CloudFlare to make this work and here's a blog post on how to do that. This will of course make requests to your app appear like they were made to your *.apphb.com domain - so if for instance you automatically redirect requests to a "canonical" URL or generate absolute URLs you'll likely have to take this into consideration.
Upload your certificate and add a custom hostname to AppHarbor. Then turn on "Full SSL" on CloudFlare. This way the host header will remain the same and your application will continue to work without any modifications. You can read more about the SSL options offered by AppHarbor in this knowledge base article.
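Here is a sketch of that first option (the attribute name is mine; CloudFlare sends CF-Visitor as a small JSON value such as {"scheme":"https"}, so a simple substring check suffices for illustration):

    using System.Web.Mvc;

    // Drop-in replacement for [RequireHttps] that trusts CloudFlare's
    // CF-Visitor header instead of the connection's own scheme.
    public class RequireHttpsCloudFlareAttribute : RequireHttpsAttribute
    {
        protected override void HandleNonHttpsRequest(AuthorizationContext filterContext)
        {
            // Under Flexible SSL, CloudFlare talks plain HTTP to the origin, so
            // Request.IsSecureConnection is always false here; check which scheme
            // the visitor actually used at the CloudFlare edge instead.
            var cfVisitor = filterContext.HttpContext.Request.Headers["CF-Visitor"];
            if (cfVisitor != null && cfVisitor.Contains("\"scheme\":\"https\""))
            {
                return; // the visitor is already on HTTPS; no redirect needed
            }
            base.HandleNonHttpsRequest(filterContext);
        }
    }

Apply it in place of [RequireHttps] on the controllers or actions that must be served over HTTPS.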
This is interesting.
I recently had a discussion with one of our clients, who asked me about "flexible" SSL and suggested that we (Incapsula) also offer such an option.
After some discussion we both came to the conclusion that such a feature would be misleading, since it would provide the end user with a false sense of security while also exposing the site owner to liability claims.
Simply put, a visitor on one of these "flexible" SSL connections may feel absolutely safe behind the encryption and be willing to provide sensitive data, not knowing that the 'server to cloud' route is not encrypted at all and can be intercepted (e.g. by backdoor shells).
It was interesting to visit here and see others reach the same conclusion. +1
Please know that as a website owner you may be liable for any unwanted exposure such a setup may cause.
My suggestion is to do the responsible thing and invest in an SSL certificate, or even create a self-signed one (to use for encrypting the 'cloud to server' route).
Or you could just get a free one-year SSL cert signed by StartCom and upload that to AppHarbor.
Then you can call it a day and pat yourself on the back! That is, until future you, one year from now, has to purchase a cert =).

Rails' page caching vs. HTTP reverse proxy caches

I've been catching up with the Scaling Rails screencasts. In episode 11 which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering using a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc. but that's not relevant to this question).
What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here.
This is my understanding of how both techniques work (maybe I'm wrong):
With page caching the Rails process is hit initially and then generates a static HTML file that is served directly by the Web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired then Rails is hit again and the static file is regenerated with the updated content ready for the next request
With an HTTP reverse proxy cache the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale then Rails serves the updated content to the proxy which caches it and then serves it to the browser
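To make the second flow concrete, the revalidation round trip looks roughly like this (the ETag value is illustrative):

    Proxy -> Rails:   GET /posts/42 HTTP/1.1
                      If-None-Match: "abc123"

    Rails -> Proxy:   HTTP/1.1 304 Not Modified
                      ETag: "abc123"
                      (no body: the proxy re-serves its cached copy)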
If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine whether the content is stale, which would mean better performance than reverse proxy caching. Why might you use both techniques in conjunction?
You are right.
The only reason to consider it is if your Apache sets Expires headers. In that configuration, the proxy can take some of the load off Apache.
Having said this, Apache serving static files vs. a proxy cache is pretty much an irrelevancy in the Rails world. They are both astronomically fast.
The benefits you would get would be for your non-page-cacheable stuff.
I prefer using proxy caching over page caching (à la Heroku), but that's just me, and a digression.
A good proxy cache implementation (e.g., Squid, Traffic Server) is massively more scalable than Apache when using the prefork MPM. If you're using the worker MPM, Apache is OK, but a proxy will still be much more scalable at high loads (tens of thousands of requests / second).
Varnish, for example, has a feature where simultaneous requests for the same URL (one that is not yet in the cache) are queued and only the first request actually hits the back-end. That can prevent some nasty dog-pile cases which are nearly impossible to work around in a traditional page caching scenario.
Using a reverse proxy in a setup with only one app server seems a bit overkill IMO.
In a configuration with more than one app server, a reverse proxy (e.g. varnish, etc.) is the most effective way for page caching.
Think of a setup with 2 app servers:
User 'Bob'(redirected to node 'A') posts a new message, the page gets expired and recreated on node 'A'.
User 'Cindy' (redirected to node 'B') requests the page where the new message from 'Bob' should appear, but she can't see the new message, because the page on node 'B' wasn't expired and recreated.
This concurrency problem could be solved with a reverse proxy.
