DotNetOpenAuth Provider whitelisting/blacklisting hosts

I am developing a provider using DotNetOpenAuth, based on the samples, and I'm experimenting with whitelisting/blacklisting relying parties. It seems to be ignoring the blacklisted hosts and allowing the relying party in. I have verified that the UntrustedWebRequestHandler is loading the blacklisted host from the config file. Here's my config section:
<dotNetOpenAuth>
  <messaging>
    <untrustedWebRequest>
      <blacklistHosts>
        <add name="localhost" />
      </blacklistHosts>
    </untrustedWebRequest>
  </messaging>
</dotNetOpenAuth>
I also noticed that the OpenIdWebRingSsoProvider sample implements whitelists manually rather than relying on the UntrustedWebRequestHandler. Does the UntrustedWebRequestHandler only handle whitelisting and blacklisting when operating as a relying party? If not, what am I doing wrong?

The untrustedWebRequest section of your web config only limits outbound HTTP requests based on the host or IP address of the request. That's why setting it on an OpenID Provider does not (necessarily) block Relying Parties, since Providers don't strictly have to ever send a request to the Relying Party. This .config section is primarily to protect you from evil Internet servers that deliberately try to DoS attack your server. For example, if you're writing an RP, since OpenIDs can be entered directly by the user, they could enter a host that just accepts HTTP requests and lets them dangle there without responding or closing the connection. Enough of those and your server will run out of resources. If you found a few servers doing that to you, you could blacklist them here.
If you actually want to control which services to connect to (relying parties or providers) you should not use the above method. As you saw in the OpenIdWebRingSsoProvider sample, you should filter those yourself using the IAuthenticationRequest.Realm (if you're a Provider) or the IAuthenticationRequest.Provider.Uri (if you're a Relying Party). There are other ways to filter, of course. If you have a large SSO web ring in your org, you may want to filter on some discoverable certificate on the remote service rather than hard-coding URLs throughout your ring.
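To make the Provider-side filtering concrete, here is a minimal sketch (not the sample's exact code) that checks an authentication request's Realm against a whitelist. The TrustedRealmHosts appSetting is a hypothetical name used only for this illustration.

```csharp
// Minimal sketch: reject authentication requests whose Realm host is not on a
// whitelist you control. "TrustedRealmHosts" is a hypothetical appSetting, e.g.
//   <add key="TrustedRealmHosts" value="rp1.example.com;rp2.example.com" />
using System;
using System.Configuration;
using System.Linq;
using DotNetOpenAuth.OpenId.Provider;

public static class RealmWhitelist
{
    public static bool IsWhitelisted(IAuthenticationRequest authRequest)
    {
        string setting = ConfigurationManager.AppSettings["TrustedRealmHosts"] ?? string.Empty;
        string[] trustedHosts = setting.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);

        // The Realm identifies the relying party asserting the request.
        string realmHost = authRequest.Realm.Host;
        return trustedHosts.Contains(realmHost, StringComparer.OrdinalIgnoreCase);
    }
}
```

You would call something like this from your provider endpoint before deciding whether to set IsAuthenticated, and reject (or show an error page for) any realm that isn't on the list.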

Related

Real use of same origin policy

I just learned about the same-origin policy in Web API. Enabling CORS makes it possible to call a web service that lives on a different domain.
My understanding is that NOT enabling CORS only ensures that the web service cannot be called from a browser. But even if I cannot call it from a browser, I can still call it in other ways, e.g. with Fiddler.
So I was wondering what the real use of this functionality is. Can you please shed some light? Apologies if it's a trivial or stupid question.
Thanks and Regards,
Abhijit
It's not at all a stupid question; it's a very important aspect of dealing with web services from a different origin.
To get an idea of what CORS (Cross-Origin Resource Sharing) is, we have to start with the so-called Same-Origin Policy, a security concept for the web. It sounds sophisticated, but it simply means a web browser only permits scripts contained in one web page to access data on another web page if both pages have the same origin. In other words, requests for data must come from the same scheme, hostname, and port. If http://player.example tries to request data from http://content.example, the request will usually fail.
After taking a second look it becomes clear that this prevents the unauthorized leakage of data to a third-party server. Without this policy, a script could read, use and forward data hosted on any web page. Such cross-domain activity might be used to exploit cookies and authentication data. Therefore, this security mechanism is definitely needed.
If you want to store content on a different origin than the one the player requests, there is a solution – CORS. In the context of XMLHttpRequests, it defines a set of headers that allow the browser and server to communicate which requests are permitted/prohibited. It is a recommended standard of the W3C. In practice, for a CORS request, the server only needs to add the following header to its response:
Access-Control-Allow-Origin: *
For more information on settings (e.g. GET/POST, custom headers, authentication, etc.) and examples, refer to http://enable-cors.org.
For a more detailed read, see https://developer.mozilla.org/en/docs/Web/HTTP/Access_control_CORS
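Since the question mentions Web API, here is a minimal sketch of turning this on server-side, assuming the Microsoft.AspNet.WebApi.Cors NuGet package; the origin is just the example origin from above.

```csharp
// Minimal sketch: enabling CORS in ASP.NET Web API 2 using the
// Microsoft.AspNet.WebApi.Cors NuGet package.
using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Allow the example origin to call this API from the browser.
        // Using "*" for origins would emit "Access-Control-Allow-Origin: *".
        var cors = new EnableCorsAttribute("http://player.example", "*", "GET,POST");
        config.EnableCors(cors);

        config.MapHttpAttributeRoutes();
    }
}
```

Remember that CORS only relaxes the browser's same-origin policy; it is not authentication or authorization, so tools like Fiddler are unaffected either way.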

SEC7117 error when trying to load a JavaScript file in MS Edge

I'm getting this error when trying to load a JavaScript file from another server in Microsoft Edge. I have a feeling it's related to the server being http instead of https, but I'm not sure. It works in IE (after allowing unsecured content), but I can't find an option in Edge to allow unsecured content.
This is the error I'm receiving:
SEC7117: Network request to http://servername/whatever.js did not succeed. This Internet Explorer instance does not have the following capabilities: privateNetworkClientServer
Thanks in advance for your help!
It may have something to do with mixing the Internet/Intranet zones rather than with http/https.
See here: Understanding Enhanced Protected Mode
Private Network resources
Because EPM does not declare the privateNetworkClientServer capability, your Intranet resources are protected from many types of cross-zone attacks (usually called “Cross-Site-Request-Forgery (CSRF)” and “Intranet Port Scanning.”) Internet pages are not able to frame Intranet pages, load images or resources from them, send them CORS XHR requests, etc.
I know that this is an old post, but the info still seems to be relevant since Microsoft MSDN site still references it with regards to IE11 (e.g. here: Enhanced Protected Mode on desktop IE). I also know that IE11 is not Edge, but this info might apply to metro-style apps as well.
[UPDATE]
In my setup, Edge failed to load my page in an iframe. When I tried loading the page in a separate Edge tab, it loaded just fine.
It turns out Edge fails to load a private/local SSL-secured page (in an iframe) in conjunction with loading a public secured page. Both sites are secured using public SSL certificates to prevent mixed-content issues. The issue is that Edge security detects that the iframe site is located on the user's local network (private/domain network) and prevents the page from loading in an iframe. Edge reports the following security errors in the developer console:
SEC7117: Network request to https://my.company.com/default.html did not succeed. This Internet Explorer instance does not have the following capabilities: privateNetworkClientServer
SEC7111: HTTPS security is compromised by ms-appx-web://microsoft.microsoftedge/assets/errorpages/dnserror.html
To resolve the issue we moved the internal site to a non-local address space (a private network space using a different subnet from the local network) so that Edge detects the site as public network. Alternatively you could move the resources to a true public address.
Here are two alternatives to restructuring your network:
You may consider adding the externally hosted site to your "Local intranet" zone. E.g. if external.somedomain contains a reference to internal.mydomain/whatever.js, then add external.somedomain to the "Local intranet" zone in "Internet Options".
If possible, change the hostname of the externally hosted site to match your internally hosted site. E.g. if external.somedomain contains a reference to internal.mydomain/whatever.js, then change the external.somedomain hostname to external.mydomain.
Both of these options will essentially allow scripts on the external site to probe, to some extent, for HTTPS services on your internal network, which I assume is what this security feature is trying to prevent. The first option is the less secure of the two, since the second is limited to probing matching domain names.
During testing, I noticed that Edge seems to get network details from Active Directory when Windows is domain-joined. It likely prevents externally hosted sites from linking to resources hosted anywhere within your AD domain, not just the current subnet you are connected to. The one exception is if the externally hosted site shares the same base domain name. All this is apparently undocumented, which is why I'm posting this info here.

security of http to https redirect

I have a website that is 100% https and will only work as https. My site is an asp.net mvc application running on IIS 7.5.
It is on multiple servers with traffic distributed via a load balancer.
I am not in control of the hardware.
For http requests, I was hoping they could be stopped at the load balancer, with a redirect to https issued at that point.
However, the hardware company won't do this for me; instead I need to do the redirect from http to https within IIS on the server. Therefore unencrypted traffic can enter the inner network, with the redirect happening at the server level. I would feel more comfortable with such a transfer happening at the load balancer.
Do I have valid concerns?
Threat model:

HTTP request:

             Attacker
                 |       Security Boundary
                 v               v
Client --- http request ---> Load Balancer
                                   |
Client <-------- redirect ---------+
Threats which occur from allowing HTTP redirects regardless of methodology:
Spoofing: the client could connect to a MITM-spoofed HTTP server which does not pass through the redirect, but instead proxies connections to the actual HTTPS server
Tampering: the client could receive a redirect URL from a MITM-spoofed server which directs them to another action (e.g. the client receives a redirect to https://yoursite.com/login.aspx?redirect=/deleteAllDocuments)
Information Disclosure: the initial HTTP request is disclosed; any information in POST or GET is available unencrypted to eavesdroppers
Arguments for performing the redirect on a server other than the target server:
Firewall can be limited to HTTPS data, limiting risk of unencrypted data due to misconfiguration
Configuration and liability could become "Someone else's problem", from at least a political perspective
Vulnerabilities in HTTP server would be isolated and could not be used to attack HTTPS server or underlying application
Arguments for performing the redirect on something other than the load balancer:
Load balancers are not servers, and therefore might have lightly used code paths when used as servers, which could be more prone to undiscovered bugs or performance problems
Configuration is not available to you, but you (or your company) are still probably liable for any misconfigurations which occur (from a legal perspective)
In light of the above analysis, for the highest security with the lowest risk:
I would not put the redirect on the target server, nor on the load balancer, but instead on a VM which only serves to redirect pages. A minimal Linux or Windows box should be able to be tightly locked down to limit exposure.
I would not allow redirects with a query string or POST data (e.g. show a 404 for any request other than a bare GET / HTTP/1.1); a sketch of that rule follows below this list.
I would call the possibility of spoofing and tampering an acceptable risk, or show a page to the user explaining that the site must be accessed using HTTPS instead of issuing a redirect
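If you do end up doing the redirect inside the ASP.NET application on IIS, as the hosting company suggests, here is a minimal sketch of a global MVC filter implementing the "redirect only a bare GET /, 404 everything else" rule above; the attribute name is illustrative.

```csharp
// Minimal sketch: redirect to HTTPS only for a bare "GET /" with no query string;
// answer every other plain-HTTP request with a 404 so no POST/GET data is
// reflected into a redirect. Register it in GlobalFilters.Filters.
using System.Web.Mvc;

public class StrictHttpsRedirectAttribute : FilterAttribute, IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationContext filterContext)
    {
        var request = filterContext.HttpContext.Request;
        if (request.IsSecureConnection)
        {
            return; // already HTTPS, nothing to do
        }

        if (request.HttpMethod == "GET" && request.RawUrl == "/")
        {
            string httpsUrl = "https://" + request.Url.Host + "/";
            filterContext.Result = new RedirectResult(httpsUrl, permanent: true);
        }
        else
        {
            filterContext.Result = new HttpNotFoundResult();
        }
    }
}
```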
But, if you can assume the following conditions are met, placing the HTTP server co-resident with the HTTPS server should not reduce security.
Any bug in the HTTP server is present in the HTTPS server
The HTTP server is correctly configured to disallow access to protected resources (set up as a separate site in IIS, for example. Secure site still has no HTTP binding)
No other application is able to create a server on HTTP (netsh urlacl only has IIS, for example)
Configuration is audited to ensure the above configurations are properly maintained (Periodic pen tests, manual configuration review, configuration change management, and an IDS or IPS system)
In some cases, the reduced complexity may even be easier to secure than a separate server. Additionally, if the administrator is unfamiliar with the load balancer's configuration, they may be more prone to make a critical error in configuration than if they were to make the same configuration in a product they know well.
Do I have valid concerns?
Do you have a valid concern about the initial connection being over HTTP? Sure. The initial request can be intercepted and the response spoofed in a MitM attack. The attacker can then either keep the user on plain HTTP (adding SSL/TLS between the attacker and your server and relaying to the victim in the clear) or create an imposter SSL session with the client that terminates at the attacker before being re-encrypted on its way to you (using various spoofing techniques to make the attack less obvious to the casual user).
However, if such an attack were launched, I would be far more worried about the transit from the client to your load balancer, not between your load balancer and IIS. If you suspect that you have malicious systems behind your load balancer, you have an entirely different set of problems.
See my answer over on security.so for some relevant information regarding redirects from HTTP to HTTPS.

CloudFlare SSL compatibility with ASP.NET MVC RequireHttps

I am hosting an ASP.NET MVC 4 site on AppHarbor (which uses Amazon EC2), and I'm using CloudFlare for Flexible SSL. I'm having a problem with redirect loops (310) when trying to use RequireHttps. The problem is that, like EC2, CloudFlare terminates the SSL before forwarding the request onto the server. However, whereas Amazon sets the X-Forwarded-Proto header so that you can handle the request with a custom filter, CloudFlare does not appear to. Or if they do, I don't know how they are doing it, since I can't intercept traffic at that level. I've tried the solutions for Amazon EC2, but they don't seem to help with CloudFlare.
Has anyone experienced this issue, or know enough about CloudFlare to help?
The X-Forwarded-Proto header is intentionally overridden by AppHarbor's load balancers to the actual scheme of the request.
Note that while CloudFlare's Flexible SSL option may add slightly more security, there is still unencrypted traffic travelling over the public internet from CloudFlare to AppHarbor. This arguably defeats the purpose of SSL for anything other than appearances and reducing the number of attack vectors (like packet sniffing on the user's local network) - i.e. it may look "professional" to your users, but it is actually still insecure.
That's less than ideal, particularly since AppHarbor both supports installing your own certificates and includes piggyback SSL out of the box. CloudFlare also recommends using "Full SSL" for scenarios where the origin servers/services support SSL. So you have a couple of options:
Continue to use the insecure "Flexible SSL" option, but instead of inspecting the X-Forwarded-Proto header in your custom RequireHttps filter, inspect the scheme attribute of the CF-Visitor header (a sketch follows after this list of options). There are more details in this discussion.
Use "Full SSL" and point CloudFlare to your *.apphb.com hostname. This way you can use the complimentary piggyback SSL that is enabled by default with your AppHarbor app. You'll have to override the Host header on CloudFlare to make this work and here's a blog post on how to do that. This will of course make requests to your app appear like they were made to your *.apphb.com domain - so if for instance you automatically redirect requests to a "canonical" URL or generate absolute URLs you'll likely have to take this into consideration.
Upload your certificate and add a custom hostname to AppHarbor. Then turn on "Full SSL" on CloudFlare. This way the host header will remain the same and your application will continue to work without any modifications. You can read more about the SSL options offered by AppHarbor in this knowledge base article.
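For the first option, here is a minimal sketch of what such a filter could look like. CF-Visitor carries a small JSON value, and the naive string check below is purely illustrative.

```csharp
// Minimal sketch: a RequireHttps variant that also trusts CloudFlare's CF-Visitor
// header (a small JSON value such as {"scheme":"https"}) when Flexible SSL has
// already terminated TLS at CloudFlare's edge.
using System.Web.Mvc;

public class RequireHttpsBehindCloudFlareAttribute : RequireHttpsAttribute
{
    protected override void HandleNonHttpsRequest(AuthorizationContext filterContext)
    {
        string cfVisitor = filterContext.HttpContext.Request.Headers["CF-Visitor"];

        // If CloudFlare reports the visitor's connection as HTTPS, don't redirect
        // again; that repeated redirect is what causes the loop described above.
        if (cfVisitor != null && cfVisitor.Contains("\"scheme\":\"https\""))
        {
            return;
        }

        base.HandleNonHttpsRequest(filterContext);
    }
}
```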
This is interesting.
I recently had a discussion with one of our clients, who asked me about "flexible" SSL and suggested that we (Incapsula) also offer such an option.
After some discussion we both came to the conclusion that such a feature would be misleading, since it would provide the end user with a false sense of security while also exposing the site owner to liability claims.
Simply put, a visitor on one of these "flexible" SSL connections may feel absolutely safe behind the encryption and be willing to provide sensitive data, not knowing that the 'server to cloud' route is not encrypted at all and can be intercepted (e.g. by backdoor shells).
It was interesting to visit here and see others reach the same conclusion. +1
Please know that, as the website owner, you may be liable for any unwanted exposure such a setup may cause.
My suggestion is to do the responsible thing and invest in an SSL certificate, or even create a self-signed one (to use for encrypting the 'cloud to server' route).
Or you could just get a free one-year SSL cert signed by StartCom and upload that to AppHarbor.
Then you can call it a day and pat yourself on the back! That is, until future you, one year from now, has to purchase a cert =).

Multiple domains powered by one rails app

I am creating a blogging-like application where we allow our customers to use their own custom domain names such as domainexample.com, so each different domain serves the same application but with different content.
However, I am struggling to figure out how to set this up on a production server. If my production server has a static IP, then I can surely just set an A record on each domain pointing to the IP of the production server.
But what if the production server does not have a static IP, for example if we want to host it on Heroku or Engine Yard? I have seen a few solutions online that rely on rewrite rules, but they require server restarts and can't really add and remove new domains dynamically as new users sign up. Does anyone know any good solutions for letting multiple domains hit one Rails app?
Heroku isn't your only option. If you can anticipate your customers' domains, have a look at this. If you can't, Rails routing constraints and a combination of the accepted answer to the question linked above should get you where you need to go. It sounds like you wouldn't want to restart your server, so editing the routes is out. You might also make domains part of your models, distinguish at the controller level, or use URL rewriting in your web-server layer.
The problem, as I see it, is that Rails breaks its mantra of convention over configuration here. There are many ways of serving content for multiple domains. That might be an intrinsic complexity, but the Rails Guides could at least document one possible solution.
If your customers just CNAME to your domain or create the A record to your IP and you don't handle TLS termination for these custom domains, your app will not support HTTPS, and without it, your app won't work in modern browsers on these custom domains.
You need to set up a TLS termination reverse proxy in front of your webserver. This proxy can be run on a separate machine but you can run it on the same machine as the webserver.
CNAME vs A record
If your customers want to have your app on their subdomain, e.g. app.customer.com they can create a CNAME app.customer.com pointing to your proxy.
If they want to have your app on their root domain, e.g. customer.com then they'll have to create an A record on customer.com pointing to your proxy's IP. Make sure this IP doesn't change, ever!
How to handle TLS termination?
To make TLS termination work, you'll have to issue TLS certificates for these custom domains. You can use Let's Encrypt for that. Your proxy will see the Host header of the incoming request, e.g. app.customer1.com or customer2.com etc., and then it will decide which TLS certificate to use by checking the SNI.
The proxy can be set up to automatically issue and renew certificates for these custom domains. On the first request from a new custom domain, the proxy will see it doesn't have the appropriate certificate. It will ask Let's Encrypt for a new certificate. Let's Encrypt will first issue a challenge to see if you manage the domain, and since the customer already created a CNAME or A record pointing to your proxy, that tells Let's Encrypt you indeed manage the domain, and it will let you issue a certificate for it.
To issue and renew certificates automatically, I'd recommend using Caddyserver, greenlock.js, OpenResty (Nginx).
tl;dr of what happens here:
Caddyserver listens on 443 and 80, receives requests, issues and renews certificates automatically, and proxies traffic to your backend.
How to handle it on my backend
Your proxy is terminating TLS and proxying requests to your backend. However, your backend doesn't know who is the original customer behind the request. This is why you need to tell your proxy to include additional headers in proxied requests to identify the customer. Just add X-Serve-For: app.customer.com or X-Serve-For: customer2.com or whatever the Host header is of the original request.
Now when you receive the proxied request on the backend, you can read this custom header and know which customer is behind the request. You can implement your logic based on that, show data belonging to this customer, etc.
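The question is about Rails, so treat this purely as an illustration of that step in code; it's sketched as ASP.NET Core middleware, and X-Serve-For is the hypothetical header name suggested above. In Rails the same idea would live in a before_action.

```csharp
// Illustrative sketch only: resolve the tenant from the proxy-supplied
// X-Serve-For header (hypothetical name from the answer above), falling back
// to the Host header, and stash it for downstream handlers.
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public static class TenantResolution
{
    public static IApplicationBuilder UseTenantFromProxyHeader(this IApplicationBuilder app)
    {
        return app.Use(async (context, next) =>
        {
            string tenantHost = context.Request.Headers["X-Serve-For"].ToString();
            if (string.IsNullOrEmpty(tenantHost))
            {
                tenantHost = context.Request.Host.Host;
            }

            // Downstream code can look up this customer's content by this key.
            context.Items["TenantHost"] = tenantHost;
            await next();
        });
    }
}
```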
More
Put a load balancer in front of your fleet of proxies for higher availability. You'll also have to use distributed storage for certificates and Let's Encrypt challenges. Use AWS ECS or EBS for automated recovery if something fails; otherwise you may be waking up in the middle of the night restarting machines, or restarting your proxy manually.
Alternatively, there have been a few services like this recently that allow you to add custom domains to your app without running the infrastructure yourself.
If you need more detail you can DM me on Twitter #dragocrnjac
