I just learned about the same-origin policy in Web API. Enabling CORS makes it possible to call a web service that lives on a different domain.
My understanding is that NOT enabling CORS only ensures that the web service cannot be called from a browser. But if I cannot call it from a browser, I can still call it in other ways, e.g. with Fiddler.
So I was wondering what the use of this functionality is. Can you please throw some light on it? Apologies if it's a trivial or a stupid question.
Thanks and Regards,
Abhijit
It's not at all a stupid question; it's a very important aspect when you're dealing with web services from a different origin.
To get an idea of what CORS (Cross-Origin Resource Sharing) is, we have to start with the so-called Same-Origin Policy, which is a security concept for the web. It sounds sophisticated, but it only makes sure that a web browser permits scripts contained in one web page to access data on another web page if both pages have the same origin. In other words, requests for data must come from the same scheme, hostname, and port. If http://player.example tries to request data from http://content.example, the request will usually fail.
After taking a second look it becomes clear that this prevents the unauthorized leakage of data to a third-party server. Without this policy, a script could read, use and forward data hosted on any web page. Such cross-domain activity might be used to exploit cookies and authentication data. Therefore, this security mechanism is definitely needed.
If you want to store content on a different origin than the one the player requests, there is a solution – CORS. In the context of XMLHttpRequests, it defines a set of headers that allow the browser and server to communicate which requests are permitted/prohibited. It is a recommended standard of the W3C. In practice, for a CORS request, the server only needs to add the following header to its response:
Access-Control-Allow-Origin: *
For more information on settings (e.g. GET/POST, custom headers, authentication, etc.) and examples, refer to http://enable-cors.org.
For a more detailed read, see https://developer.mozilla.org/en/docs/Web/HTTP/Access_control_CORS
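For example, in ASP.NET Web API 2 (which the question is about) you can turn this on with the Microsoft.AspNet.WebApi.Cors package. A minimal sketch, reusing the hypothetical http://player.example origin from above:

using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Allow cross-origin calls from the player page only;
        // "*" for headers and methods keeps the sketch minimal.
        var cors = new EnableCorsAttribute(
            "http://player.example", headers: "*", methods: "*");
        config.EnableCors(cors);

        config.MapHttpAttributeRoutes();
    }
}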
I recently got my hands on a relatively old Cordova app for iOS (iPhones), built around a year ago, in order to debug it.
The app queries an API from a server. This server is built using Laravel and makes use of laravel-cors.
For a peculiar reason, the developers of this app have set up CORS server-side to accept requests only if the Origin header is missing.
I was told that the app was working just fine for the past year.
While debugging it, I noticed that the iOS browser adds origin => 'file://' to the request headers when the Cordova app uses $.ajax to make requests.
And now for my questions:
Are you aware of such a change in newer iOS versions?
I suppose I can't do anything client-side to bypass it?
How safe is it to add "file://" as an accepted origin, server-side?
Thanks a ton!
The reason the server accepts a null Origin isn't "peculiar" -- that is how CORS is defined to work. It is intended to protect against browser-based cross-site attacks: browsers send the Origin header automatically so the server can accept or reject the request based on which domain(s) it allows JavaScript calls from. It is intended as a safe, standards-based successor to the JSONP hack, allowing cross-origin server requests in a controlled way. By default, browsers require and allow only same-origin XHRs and other similar requests.
CORS is undefined for non-browser clients, since non-browser clients can set whatever Origin they want anyway (e.g. curl), so in those cases it makes sense to just leave off the Origin header completely.
To answer part of your question, it is not (very) safe to add file:// as an accepted origin server-side. The reason is that an attacker wishing to bypass CORS protections could trick a user into downloading a web page to their filesystem and then executing it in their browser -- thus bypassing any intended Origin restrictions since file:// is in the allowed list. There may also be other exploits, known and unknown, that could take advantage of servers that accept a file:// origin.
You'll have to evaluate the risks of adding this based on your own project requirements.
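The server in question is Laravel, but the shape of the check is the same everywhere. As a language-agnostic sketch (written here in C#; all names are hypothetical), an explicit allowlist that tolerates a missing Origin header but rejects file:// might look like this:

using System;
using System.Collections.Generic;

public static class OriginPolicy
{
    // Explicit allowlist; file:// is deliberately absent.
    private static readonly HashSet<string> AllowedOrigins =
        new HashSet<string>(StringComparer.OrdinalIgnoreCase)
        {
            "https://app.example.com",
        };

    public static bool IsAllowed(string originHeader)
    {
        // Non-browser clients (and, historically, this Cordova app)
        // send no Origin header at all; CORS does not apply to them.
        if (string.IsNullOrEmpty(originHeader))
            return true;

        return AllowedOrigins.Contains(originHeader);
    }
}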
I am making an API backend that makes use of another API, for example Twitter. (Please note the actual API isn't Twitter, but I am using it as an example.)
Let's say that Twitter has a limit on the number of calls that can be made to its API, and above this limit it starts to charge my credit card. This is why it is very important to me that no one misuses my API.
I want to prevent people from looking at my frontend code and seeing which endpoint it hits, because if a malicious person were to do this, I would very quickly go over the limit and have to pay $$$.
My frontend code uses a GET call to mybackend.com/twitter/api.
Is it enough to simply add an Access-Control-Allow-Origin header to my backend?
headers['Access-Control-Allow-Origin'] = 'myfrontend.com'
The reason I am asking is that I noticed that typing mybackend.com/twitter/api directly into the browser worked, which is not what I would expect if I had Access-Control-Allow-Origin set to a specific website.
Am I doing something wrong? How do I prevent someone from simply writing a script to hit my backend, since it is clear that just typing the URL into my browser works despite my having an Access-Control-Allow-Origin header?
There are two possible solutions for your problem. First, keep in mind that Access-Control-Allow-Origin is enforced by browsers only; it does nothing to stop non-browser clients, which is why typing the URL directly (or hitting it from a script) still works. To actually authenticate callers, you can implement a request signature for your API, so your backend knows exactly where each request came from. You can take a look at how this works here.
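As a rough illustration of the request-signature idea (not necessarily the linked article's exact scheme; the payload format and time window here are hypothetical), client and backend share a secret and the backend recomputes an HMAC over each request:

using System;
using System.Security.Cryptography;
using System.Text;

public static class RequestSigner
{
    // Shared secret. Caveat: anything shipped in frontend JavaScript
    // can be extracted, so this raises the bar rather than making
    // the endpoint bulletproof.
    private static readonly byte[] Secret =
        Encoding.UTF8.GetBytes("replace-with-a-real-secret");

    public static string Sign(string method, string path, long unixTime)
    {
        var payload = method + "\n" + path + "\n" + unixTime;
        using (var hmac = new HMACSHA256(Secret))
        {
            return Convert.ToBase64String(
                hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        }
    }

    public static bool Verify(string method, string path,
                              long unixTime, string signature)
    {
        // Reject stale timestamps to limit replay of captured requests.
        var now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
        if (Math.Abs(now - unixTime) > 300)
            return false;

        return Sign(method, path, unixTime) == signature;
    }
}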
The second option, and to me the one which fits your problem better, is to set up denial-of-service protection (rate limiting) on your server's load balancer to throttle repeated requests from the same source, so those kinds of malicious requests never hit your backend.
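Rate limiting is normally configured on the load balancer or reverse proxy itself, but as a sketch of the idea (naive in-memory fixed-window counters keyed by client IP; the limit and window are made-up numbers):

using System;
using System.Collections.Concurrent;

public class RateLimiter
{
    private readonly int _limit;
    private readonly TimeSpan _window;
    private readonly ConcurrentDictionary<string, (DateTime Start, int Count)> _counters =
        new ConcurrentDictionary<string, (DateTime, int)>();

    public RateLimiter(int limitPerWindow, TimeSpan window)
    {
        _limit = limitPerWindow;
        _window = window;
    }

    public bool Allow(string clientIp)
    {
        var now = DateTime.UtcNow;
        // Start a fresh window for this client, or bump its counter.
        var entry = _counters.AddOrUpdate(
            clientIp,
            _ => (now, 1),
            (_, e) => now - e.Start > _window ? (now, 1) : (e.Start, e.Count + 1));
        return entry.Count <= _limit;
    }
}

// Usage: var limiter = new RateLimiter(100, TimeSpan.FromMinutes(1));
// if (!limiter.Allow(clientIp)) { /* respond with 429 Too Many Requests */ }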
I'm working on a project to require HTTPS everywhere among a suite of MVC and Web API applications. I'm trying to understand the trade-offs between checking the "Require SSL" box in IIS and using a URL Rewrite module, versus using a RequireHttpsAttribute in my global filters and modifying my web.config.
I've found the following guides detailing each approach:
https://webmasters.stackexchange.com/questions/28057/iis-7-require-ssl-automatically-redirect-to-https
http://tech.trailmax.info/2014/02/implemnting-https-everywhere-in-asp-net-mvc-application/
Explaining the mechanisms could get lengthy, so I will just list the most significant differences in behaviour:
Using "Require SSL" in IIS:
The name basically explains what it does. It's "Require", not "Enforce", which means that if people try to access your website content through HTTP, the server will just respond with a 403 error. This is usually not the desired behaviour, but it may help in certain situations.
Using the URL Rewrite module:
The module itself can do quite a few different things, but I assume you are just going to do the regular HTTPS redirect. That means if a user tries to hit ANY content on the site through HTTP, the server will do a 301 or 302 redirect to the HTTPS version of the same URL. This is usually a good option since it doesn't affect the usability of the website.
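For reference, the typical redirect rule looks something like this in web.config (a standard pattern; redirectType="Permanent" gives a 301, "Found" gives a 302):

<system.webServer>
  <rewrite>
    <rules>
      <rule name="Redirect to HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTPS}" pattern="off" ignoreCase="true" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>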
Global RequireHttpsAttribute action filter:
This does a similar thing to option number 2: it will do a 302 redirect for any HTTP request that hits an ACTION. The main difference is that it applies only to actions in your controllers, which means that if someone tries to fetch just an image or a CSS file through HTTP, this option will let it through and not do any enforcement. This leaves you the ability to serve static content through HTTP, which can be useful in some specific circumstances.
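Registering it globally is a one-liner in the standard MVC FilterConfig:

using System.Web.Mvc;

public class FilterConfig
{
    public static void RegisterGlobalFilters(GlobalFilterCollection filters)
    {
        // Redirects any HTTP request that reaches a controller action
        // to HTTPS. Static files served outside MVC are unaffected.
        filters.Add(new RequireHttpsAttribute());
    }
}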
One extra thing worth mentioning: 301 and 302 redirects do not play well with HTTP POST, so if your user tries to POST through HTTP, the request body will be lost (thanks to @ChrisPratt for the info).
Typically the folks managing the infrastructure are responsible for making sure things are on HTTPS. Typically they aren't very good at this, so that is where RequireHttpsAttribute kicks in: it can enforce HTTPS requests at the code level.
In practice it isn't so great, as many production setups -- including stackoverflow.com's -- have HTTPS terminated at an edge device, unwrapped, and handed to the back-end apps as HTTP, and RequireHttpsAttribute isn't quite nuanced enough to understand this distinction.
The best bet in general is to configure the edge device providing the public HTTP interface to take HTTPS and only HTTPS. Then set up secondary virtual sites [or whatever is vendor appropriate] to redirect all traffic to the canonical HTTPS URL. I'd be a bit nervous about relying upon RequireHttpsAttribute unless it's a small app handling its own requests. That still leaves open holes in terms of static artifacts and other things that might not be coming off of a controller.
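If you do sit behind a TLS-terminating edge device, one common workaround is to subclass RequireHttpsAttribute so it also trusts the conventional X-Forwarded-Proto header. A sketch, assuming your edge device actually sets that header:

using System;
using System.Web.Mvc;

// Treats a request as secure if either the connection itself is HTTPS
// or the TLS-terminating proxy says the original request was.
public class RequireHttpsBehindProxyAttribute : RequireHttpsAttribute
{
    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        var request = filterContext.HttpContext.Request;
        if (string.Equals(request.Headers["X-Forwarded-Proto"], "https",
                          StringComparison.OrdinalIgnoreCase))
        {
            return; // already HTTPS at the edge; nothing to enforce
        }
        base.OnAuthorization(filterContext);
    }
}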
I am convinced that I want to use Glimpse for my project, but I would like to learn a bit more about the security model.
From what I can tell, when you turn Glimpse on, it simply writes a set of cookies to the client. When Glimpse sees these cookies on subsequent requests, it begins to record information for the request and then sends it to the client.
Seems like I could just set the cookies for a site I know uses Glimpse and I would then be able to see their information.
I highly doubt this is how it works, so I would like to know what features are in place to prevent exposing server information.
Glimpse uses a collection of configurable Runtime Policies (http://getglimpse.com/Help/Custom-Runtime-Policy) that dictate how Glimpse responds to any given HTTP request.
Glimpse already ships with some runtime policies out of the box that filter requests based on content types, HTTP status codes, remote or local access, URIs, and so on.
You can also build your own by implementing IRuntimePolicy and checking, for instance, whether a user is authenticated and a member of a specific group, and based on that allowing Glimpse to gather and return data or not. Such an example can be found at the link above.
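A minimal sketch of such a policy, based on the sample in the Glimpse docs (the role name is just a placeholder):

using Glimpse.AspNet.Extensions;
using Glimpse.Core.Extensibility;

public class GlimpseSecurityPolicy : IRuntimePolicy
{
    public RuntimePolicy Execute(IRuntimePolicyContext policyContext)
    {
        // Only expose Glimpse data to authenticated administrators.
        var httpContext = policyContext.GetHttpContext();
        if (!httpContext.User.IsInRole("Administrator"))
        {
            return RuntimePolicy.Off;
        }
        return RuntimePolicy.On;
    }

    public RuntimeEvent ExecuteOn
    {
        get { return RuntimeEvent.EndRequest; }
    }
}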
I am trying to reduce the load on my web servers by adding an "image server" (a dedicated server for handling image requests) and redirecting all requests for .gif, .jpg, .png, etc. to it.
My question is, what is the best way to handle the redirection?
At the firewall level? (can I do this using iptables?)
At the load balancer level? (can ldirectord handle this?)
At the apache level - using rewrite rules?
Thanks for any suggestions on the best way to do this.
--Update--
One thing I would add is that these are domains that are hosted for 3rd parties, so I can't expect all the developers to modify their code and point their images to another server.
The further up the chain you can do it, the better.
Ideally, do it at the DNS level by using a different domain for your images (e.g. imgs.example.com).
If you can afford it, get someone else to do it by using a CDN (Content delivery network).
-Update-
There are also two features of Apache's mod_rewrite that you might want to look at. They are both described well at http://httpd.apache.org/docs/1.3/misc/rewriteguide.html.
The first is under the heading "Dynamic Mirror" in the above document and uses the mod_rewrite proxy flag [P]. This lets your server silently fetch files from another domain and return them.
The second is to just redirect the request to the new domain. This second option puts less strain on your server, but requests still need to come in and it slows down the final rendering of the page, as each request needs to make an essentially redundant request to your server first.
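Roughly, the two variants look like this in Apache configuration (the image-server hostname is hypothetical, and the proxy flag requires mod_proxy to be enabled):

RewriteEngine On

# Variant 1: dynamic mirror -- silently proxy image requests through
# to the image server; the browser never sees the other hostname.
RewriteRule ^/images/(.+)$ http://imgs.example.com/images/$1 [P,L]

# Variant 2: redirect -- tell the browser to fetch the image
# from the image server itself.
# RewriteRule ^/images/(.+)$ http://imgs.example.com/images/$1 [R=302,L]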
I agree with rikh. If you want images to be served from a different web server, then serve them from a different web server. For example:
<IMG src="images/Brett.jpg">
becomes
<IMG src="http://brettnesbitt.akamia-technologies.com/images/Brett.jpg">
Any kind of load balancer will still feed the image from the web-server's pipe, which is what you're trying to avoid.
I, of course, know what you really want. What you really want is for any request like:
GET images/Brett.jpg HTTP/1.1
to automatically get converted into:
HTTP/1.1 307 Temporary Redirect
Location: http://brettnesbitt.akamia-technologies.com/images/Brett.jpg
This way you don't have to do any work, except copy the images to the other web server.
That, I really don't know how to do.
Your use of the phrase "NAT" implies that the firewall/router receives HTTP requests, and you want to forward the request to a different internal server if the HTTP request was for image files.
This then raises the question of what you're actually trying to save. No matter which internal web server services the HTTP request, the data is still going to have to flow through the firewall/router's pipe.
The reason I bring it up is that the common scenario when someone wants to serve images from a different server is that they want to split high-bandwidth, mostly static, low-CPU-cost content off from their actual logic.
Using NAT only to rewrite the packet and send it to a different server does nothing to address that common issue.
The other reason might be that images are not static content on your system, and a request to
GET images/Brett.jpg HTTP/1.1
actually builds an image on the fly, at a high CPU cost, or uses data available only to ServerB (e.g. a SQL Server database).
If this is the case, then I would still use a different server name for the image request:
GET http://www.brettsoft.com/default.aspx HTTP/1.1
GET http://imageserver.brettsoft.com/images/Brett.jpg HTTP/1.1
I understand what you're hoping for: network packet inspection that overrides the NAT rule and sends the request to another server. I've never seen anything that can do that.
It sounds more "proxy-ish", i.e. something a web proxy would do (pfSense and m0n0wall can't do it).
Which then leads to a kind of solution we used once: a custom web server that analyzes the request, makes the appropriate request of some internal server, and writes the binary response back to the client.
That pain-in-the-ass solution was insisted upon by a "security consultant", who apparently believes in security through obscurity.
I know IIS cannot do such things for you itself; I don't know about other web-server products.
I just asked around, and apparently if you wanted to write a custom kernel module for your Linux-based router, you could have it inspect packets and take appropriate action. Such a module might already exist, and there are apparently plenty of other open-source modules to use as a starting point.
But I'd rather shoot myself in the head.