I am looking at my asp.net mvc site and considering blocking the HEAD verb in IIS from accessing the site.
I don't see why such requests are needed or being used at present.
Why would HEAD requests be required on a site?
The comment posted above is correct. As far as I know, HEAD requests are made by the browser to check things like: do I need to download this again, is the page still there, etc. Basically, if the browser wants to know about a page without downloading the entire page, it will issue a HEAD request. So, in short, they are not a harmful thing.
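If you want to see for yourself what a HEAD response looks like, here is a minimal console sketch using HttpClient (the URL is just a placeholder; point it at any page on your own site):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class HeadRequestDemo
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Same target as a GET, but the server is asked for headers only, no body.
        var request = new HttpRequestMessage(HttpMethod.Head, "https://example.com/");
        var response = await client.SendAsync(request);

        Console.WriteLine($"Status: {(int)response.StatusCode}");

        // Headers such as Content-Length, Last-Modified and ETag are what a browser
        // or crawler inspects to decide whether the page needs to be downloaded again.
        foreach (var header in response.Content.Headers)
            Console.WriteLine($"{header.Key}: {string.Join(", ", header.Value)}");
    }
}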
However, if you want to block these, you can do so in your web.config by using the following syntax (taken from MSDN/IIS)
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <verbs applyToWebDAV="false">
          <add verb="HEAD" allowed="false" />
        </verbs>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
However, I think this is an atypical setup, and you may want to test your site for performance and breakage across multiple browsers before turning this on for a production-facing site.
There are malicious scanners that will issue a large volume of HEAD requests in an attempt to find known vulnerable resources, such as file upload components that may allow them to upload malicious files. They use a HEAD request as it is faster than a GET request because it has no response body, just headers.
Not only is their intent malicious, but by requesting large numbers of non-existent resources they can put load on your server.
On the flip side, Google also uses HEAD requests to save time and bandwidth when deciding whether to re-fetch a page (i.e. has it changed since the last crawl?).
Ideally, you would find a way to block the malicious requests and allow the legitimate ones from Google / Web Browsers.
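There is no single standard way to do that, but as a purely illustrative sketch (ASP.NET Core middleware, even though the original question is classic MVC; the allowed-path rule and the /content/ prefix are made-up examples), you could allow HEAD only for paths you expect legitimate crawlers to probe:

// Program.cs (ASP.NET Core minimal hosting) - illustrative only.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    if (HttpMethods.IsHead(context.Request.Method))
    {
        var path = context.Request.Path.Value ?? "/";

        // Made-up rule: only the home page and /content/* may be probed with HEAD.
        var looksLegitimate = path == "/" ||
            path.StartsWith("/content/", StringComparison.OrdinalIgnoreCase);

        if (!looksLegitimate)
        {
            // Scanners probing for vulnerable resources get a cheap 405
            // before the request reaches the rest of the pipeline.
            context.Response.StatusCode = StatusCodes.Status405MethodNotAllowed;
            return;
        }
    }

    await next();
});

app.MapGet("/", () => "Hello");
app.Run();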
I host a small web site at an external host provider. When I open it from my iPhone, I get different results depending on how my iPhone is connected to the internet:
When connection is made through WiFi, my page always opens and runs as expected
When connection is made through Cellular, my page always produces the following error message:
On mobile Safari:
Safari cannot open the page because too many redirects occurred.
On mobile Chrome:
This page isn't working / redirected you too many times.
On mobile Opera:
This site can't be reached / too many HTTP redirects.
As far as I can tell, the only difference that decides the outcome is the internet connection type - WiFi vs. Cellular. I cannot find any other differences.
Since the site works fine through WiFi network, I ruled out a redirect loop on my site (that is the most commonly mentioned cause of "too many redirects" error). I also tried turning off cross-site tracking prevention, but the results remained the same. Am I missing something? What could be the cause of this strange behavior?
In case it is relevant, here are a few things about the web site itself:
Web site is developed with ASP.NET Core
I access the site using https in both cases (via WiFi and via Cellular)
Site is on a subdomain, which uses a wildcard certificate from the "top" domain
Site uses ASP.NET Core "scaffolded" authentication, which uses redirects and cookies, and has "remember me" functionality.
I finally stumbled upon a fix, although I still do not know why the error does not manifest itself on desktops or over mobile WiFi connections. The issue has to do with hosting my web application on IIS in out-of-process mode while also calling UseHttpsRedirection() during setup.
What happens is described in this answer: IIS talks to my out-of-process host (Kestrel) over plain http, so the application issues an https redirect, and the browser on my phone somehow picks that up. There is also a second, legitimate redirect to the login page, which the phone browser counts as well. The phone browser now sees two redirects, so it displays an error, because at most one redirect is allowed.
The fix was simply to remove the call to UseHttpsRedirection(). It was unnecessary in the out-of-process hosting scenario: the IIS front end is configured to require https, so clients get redirected there anyway.
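For illustration, here is a minimal sketch of the resulting pipeline (not my exact Startup; it assumes the newer minimal-hosting style), with the redirect left out:

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllersWithViews();

var app = builder.Build();

// app.UseHttpsRedirection();   // removed: the IIS front end already forces https

app.UseStaticFiles();
app.UseRouting();
// ... authentication/authorization and the rest of the pipeline stay as they were ...
app.MapDefaultControllerRoute();

app.Run();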
Forcing https on the application side behind a reverse proxy is tricky. Generally it is better to let the reverse proxy do the forcing and, if the application supports https at all, have the proxy talk to the application over https only, so no application-side forcing is needed.
If you must do it from the application, then the proxy must include the necessary headers for the application to evaluate the original connection context. The application may also need to know the original hostname and base path if the proxy rewrites them.
Please review the instructions for the ASP.NET Core Forwarded Headers Middleware:
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/proxy-load-balancer
Why it behaves differently depending on your type of internet connection is a bit of a mystery, though.
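For reference, here is a minimal sketch of wiring up that middleware (ASP.NET Core minimal hosting; which X-Forwarded-* headers you trust depends on your proxy setup):

using Microsoft.AspNetCore.HttpOverrides;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Trust the scheme/host headers added by the reverse proxy, so the app knows the
// original request was already https and UseHttpsRedirection() does not redirect
// a second time.
app.UseForwardedHeaders(new ForwardedHeadersOptions
{
    ForwardedHeaders = ForwardedHeaders.XForwardedProto | ForwardedHeaders.XForwardedHost
});

app.UseHttpsRedirection();

app.MapGet("/", () => "Hello");
app.Run();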
It seems like it could have to do with how mobile networks try to compress your packets as they're sent back to the device. Try adding the following to your Web.config file:
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Cache-Control" value="no-transform" />
    </customHeaders>
  </httpProtocol>
</system.webServer>
which I believe is the equivalent of adding
Header set Cache-Control "no-transform"
to your .htaccess file.
If that doesn't work, try adding the following to all pages that are normally touched during the request.
<%@ Language="VBScript" %>
<% Response.CacheControl = "no-transform" %>
NOTE: This code must be inserted at the beginning of the page, unless buffering is enabled, because it is modifying the HTTP headers.
It might be caused by the browser not receiving the correct content type.
Can you add this to the head of your view, or of the layout (if you are using one)?
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
</head>
I think your service provider is using an Apache server.
If so, try resetting the .htaccess file (it is a server configuration file used to control server settings, including redirects).
I have defined a location for the error page in the XML:
<error-page>
  <error-code>404</error-code>
  <location>/faces/public/error-page-not-found.xhtml</location>
</error-page>
but I want the URL to look like this:
faces/{variable}/public/error-page-not-found.xhtml
where the value of {variable} changes depending on the situation.
This question is a bit subjective, though in general HTTP errors are handled by the server, most of the time by the scripting language running on it (and occasionally by the HTTP server software directly).
For example, the Apache HTTP server allows for rewrites. So you can request a page at example.com/123 even though there is no "123" file there. In the code that handles that request, you would determine whether a resource exists for it; if not, your server-side code (PHP, ColdFusion, Perl, ASP.NET, etc.) would return an HTTP 404, and in the response body it would include a small snippet such as the code you have above.
You would not need to redirect to an error page; you would simply respond with the HTTP 404 status and whatever XML you'd use to notify the visitor that there is nothing there. HTTP server software such as Apache can't really produce code (it can only reference or rewrite some file to be used for certain requests).
Generally speaking if you have a website that uses a database you'd do the following...
Parse the URL requested so you can determine what the visitor requested.
Determine if a resource should be retrieved for that request (e.g. make a query to the database).
Once you know whether a resource is available or not, you either show the resource (e.g. a member's profile) or serve the appropriate HTTP status (401: not signed in at all; 403: signed in but not authorized, where no increase in privileges will grant permission; 404: not found; etc.) and display the corresponding content.
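The same flow, expressed as a hedged ASP.NET Core sketch (the in-memory dictionary simply stands in for the database query in step 2):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Stand-in for the database: only member "123" exists.
var profiles = new Dictionary<string, string>
{
    ["123"] = "Profile for member 123"
};

// Step 1: parse the request; step 2: look the resource up; step 3: respond.
app.MapGet("/{id}", (string id) =>
    profiles.TryGetValue(id, out var profile)
        ? Results.Ok(profile)                                // resource exists: show it
        : Results.NotFound(new { error = "Nothing here" })); // otherwise: 404 with a small body

app.Run();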
I would highly recommend that you read about Apache rewrites and PHP, especially its $_SERVER array (e.g. <?php print_r($_SERVER);?>). You'd use Apache to rewrite all requests to a single file, so even if visitors request /1, /a, /about, /contact/, etc., they all get processed by one PHP file where you first determine what the requested URL is. There are plenty of questions here and elsewhere on the web that will give you a good jump start on handling all of that, such as this one: Redirect all traffic to index.php using mod_rewrite. If you do not know how to set up a local HTTP web server, I highly recommend looking into XAMPP; it's what I started out with years ago. Good luck!
I just got to know about the same-origin policy in Web API. Enabling CORS makes it possible to call a web service that lives on a different domain.
My understanding is that NOT enabling CORS only ensures that the web service cannot be called from a browser. But even if I cannot call it from a browser, I can still call it in other ways, e.g. with Fiddler.
So I was wondering what the use of this functionality is. Can you please throw some light on it? Apologies if it's a trivial or stupid question.
It's not at all a stupid question, it's a very important aspect when you're dealing with web services with different origin.
To get an idea of what CORS (Cross-Origin Resource Sharing) is, we have to start with the so-called Same-Origin Policy, which is a security concept for the web. It sounds sophisticated, but it simply means that a web browser permits scripts contained in one web page to access data on another web page only if both pages have the same origin. In other words, requests for data must come from the same scheme, hostname, and port. If http://player.example tries to request data from http://content.example, the request will usually fail.
After taking a second look it becomes clear that this prevents the unauthorized leakage of data to a third-party server. Without this policy, a script could read, use and forward data hosted on any web page. Such cross-domain activity might be used to exploit cookies and authentication data. Therefore, this security mechanism is definitely needed.
If you want to store content on a different origin than the one the player requests, there is a solution – CORS. In the context of XMLHttpRequests, it defines a set of headers that allow the browser and server to communicate which requests are permitted/prohibited. It is a recommended standard of the W3C. In practice, for a CORS request, the server only needs to add the following header to its response:
Access-Control-Allow-Origin: *
For more information on settings (e.g. GET/POST, custom headers, authentication, etc.) and examples, refer to http://enable-cors.org.
For a more detailed read, see https://developer.mozilla.org/en/docs/Web/HTTP/Access_control_CORS
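For reference, here is a minimal sketch of what enabling CORS can look like if you are on ASP.NET Web API 2 (this assumes the Microsoft.AspNet.WebApi.Cors package; the origin shown is a placeholder):

using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Responses to requests from this origin will carry the
        // Access-Control-Allow-Origin header; other origins stay blocked by the browser.
        var cors = new EnableCorsAttribute(
            origins: "http://player.example",
            headers: "*",
            methods: "*");

        config.EnableCors(cors);

        config.MapHttpAttributeRoutes();
    }
}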
I'm forcing https for access to my website, but some of the content must be loaded over http (for example, the video content cannot be served over https), and the browsers block those requests because of the mixed-content policy.
After hours of searching I found that I can use Content-Security-Policy, but I have no idea how to allow mixed content with it.
<meta http-equiv="Content-Security-Policy" content="????">
You can't.
CSP is there to restrict content on your website, not to loosen browser restrictions.
Secure https sites give users certain guarantees, and it's not really fair to then allow http content to be loaded on them (hence the mixed-content warnings), and it would be even less fair if you could hide those warnings without your users' consent.
You can use CSP for a couple of things to aid a migration to https, for example:
You can use it to automatically upgrade http requests to https (though browser support isn't universal). This helps in case you missed changing an http link to its https equivalent. However, it assumes the resource can be loaded over https, and it sounds like yours cannot, so that's not an option.
You can also use CSP to help you identify any http resources on your site you missed, by having the browser report back to a service you monitor whenever it attempts to load an http resource. This lets you identify and fix the http links so you don't have to depend on the automatic upgrade above.
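For the second option, a rough sketch of what that reporting header could look like when set from ASP.NET Core middleware (the directive string and the /csp-reports endpoint are made up for illustration):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.Use(async (context, next) =>
{
    // Report-only: nothing is blocked, but the browser posts a JSON report
    // whenever the page tries to load a non-https resource.
    context.Response.Headers["Content-Security-Policy-Report-Only"] =
        "default-src https:; report-uri /csp-reports";
    await next();
});

app.MapPost("/csp-reports", async (HttpContext context) =>
{
    // Log the violation report so you can find the http links you missed.
    using var reader = new StreamReader(context.Request.Body);
    var report = await reader.ReadToEndAsync();
    Console.WriteLine(report);
    return Results.NoContent();
});

app.MapGet("/", () => "Hello");
app.Run();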
But neither is what you are really looking for.
You shouldn't... but you CAN. The feature is demonstrated here: an HTTP PNG image upgraded on the fly to HTTPS.
<meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests">
There's also a new permissions API, described here, that allows a Web server to check the user's permissions for features like geolocation, push, notification and Web MIDI.
And how can the process be sped up from a developer's point of view?
There are a lot of things going on.
When you first type in an address, the browser will look up the hostname in DNS, if it is not already in the browser cache.
Then the browser sends an HTTP GET request to the remote server.
What happens on the server is really up to the server; but it should respond back with a HTTP response, that includes headers, which perhaps describe the content to the browser and how long it is allowed to be cached. The response might be a redirect, in which case the browser will send another request to the redirected page.
Obviously, server response time will be one of the critical points for perceived performance, but there are many other things to it.
When a response is returned from the server, the browser will do a few things. First it will parse the HTML returned and create its DOM (Document Object Model) from it. Then it will run any startup JavaScript on the page before the page is ready to be displayed. Remember that if the page contains any resources such as external stylesheets, scripts, images and so on, the browser will have to download those before it can display the page. Each resource is a separate HTTP GET, and there is some latency involved for each one. Therefore, one thing that can in some cases greatly reduce load times is to use as few external resources as possible and to make sure they are cached on the client (so the browser doesn't have to fetch them for each page view).
To summarize, to optimize performance for a web page, you want to look at, as a minimum:
Server response time
Bandwidth / content transfer time.
Make sure you have a small and simple DOM (especially if you need to support IE6).
Make sure you understand client side caching and the settings you need to set on the server.
Make sure you make the client download as little data as possible. Consider GZipping resources, and perhaps dynamic content too, depending on your situation (see the sketch after this list).
Make sure you don't have any CPU-intensive JavaScript running on page load.
You might want to read up on the HTTP protocol, as well as some of the best practices. A couple of tools you can use are YSlow and Google Page Speed.
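As a concrete illustration of the caching and GZip points above, here is a hedged ASP.NET Core sketch (the one-week max-age is an arbitrary example value):

using Microsoft.Net.Http.Headers;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddResponseCompression();   // gzip/brotli for dynamic responses

var app = builder.Build();

app.UseResponseCompression();

app.UseStaticFiles(new StaticFileOptions
{
    OnPrepareResponse = ctx =>
    {
        // Let the browser reuse stylesheets, scripts and images for a week
        // instead of fetching them again on every page view.
        ctx.Context.Response.Headers[HeaderNames.CacheControl] = "public,max-age=604800";
    }
});

app.MapGet("/", () => "Hello");
app.Run();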
What are the steps involved from entering a web site address to the page being displayed on the browser?
The steps are something like:
Get the IP address of the URL
Create a TCP (HTTP) connection to the IP address, and request the specified page
Receive/download the page via TCP/HTTP; the page may consist of several files/downloads: e.g. the HTML document, CSS files, javascript files, image files ...
Render the page
And how can the process be sped up from a developer's point of view?
Measure to discover which of these steps is slow:
It's only worth optimizing whichever step is the slow one (no point in optimizing steps which are already fast)
The answer to your question varies depending on which step it is.
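As a starting point for the measuring step, here is a small sketch that times the server-response portion in isolation with HttpClient (the URL is a placeholder; a browser's network tab gives the full DNS/connect/download/render breakdown):

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class ResponseTimeCheck
{
    static async Task Main()
    {
        using var client = new HttpClient();
        var sw = Stopwatch.StartNew();

        // Headers arrive before the body, so this approximates time-to-first-byte.
        using var response = await client.GetAsync(
            "https://example.com/", HttpCompletionOption.ResponseHeadersRead);
        var headersAt = sw.Elapsed;

        var body = await response.Content.ReadAsStringAsync();
        sw.Stop();

        Console.WriteLine($"Headers after:   {headersAt.TotalMilliseconds:F0} ms");
        Console.WriteLine($"Full body after: {sw.Elapsed.TotalMilliseconds:F0} ms ({body.Length} chars)");
    }
}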