Microsoft's Edge browser has a "friendly error page" feature similar to Internet Explorer's, where it masks a server's non-success responses (e.g. 400 Bad Request) with its own "friendly" pages.
You can observe the behaviour by using Edge to visit here: http://httpstat.us/400
Whilst this is a better user experience than being presented with a highly technical default server error page, it's unwanted when you want to return an actual page with a non-success status code.
In Internet Explorer, as long as the server returned more than 512 bytes of data in the response, it would display the response, but in Edge, that's not enough.
Is there a way to coerce Edge to display the "error" content returned by the server when a non-success status code is returned? I don't want to reconfigure the browser, just convince it that my response is "friendly" enough to present to the user.
There is no way to remove the "friendly errors" at the moment. You can use the F12 developer tools for debugging.
Nevertheless, the friendly page includes the actual HTTP response code, so you at least get an idea of what went wrong. If you need more, you can inspect the full request/response in the F12 Network tool to see the data the server is actually returning.
In Internet Explorer, as long as the server returned more than 512 bytes of data in the response, it would display the response, but in Edge, that's not enough.
In experimenting, I found that this is indeed enough (example here of a working 400 page).
I also found, though, that if you had previously sent a response of 512 characters or fewer and then fixed the response to be larger, refreshing the page or following a link to it would still get the "friendly" (with friends like these, who needs enemies?…) message. Entering the URI in the address bar again (i.e. just focusing the address bar and hitting Enter) was necessary to make the actual error page appear.
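For reference, here is a minimal sketch of the padding trick (a hypothetical ASP.NET MVC action; the controller name, padding size and markup are made up for the example): return the real 400 page and make sure the body clears the 512-byte threshold mentioned above.

```csharp
// Hedged sketch (hypothetical ASP.NET MVC action): return a real 400 page and
// pad the body past the 512-byte threshold so the browser shows our content
// instead of its own "friendly" error page.
using System.Text;
using System.Web.Mvc;

public class ErrorsController : Controller
{
    public ActionResult BadRequestPage()
    {
        Response.StatusCode = 400;
        Response.TrySkipIisCustomErrors = true;   // stop IIS substituting its own error page

        var html = new StringBuilder("<html><body><h1>Bad Request</h1><p>Please check your input.</p>");
        // Pad with HTML comments until the response body is comfortably over 512 bytes.
        while (html.Length < 600)
            html.Append("<!-- padding to defeat friendly error pages -->");
        html.Append("</body></html>");

        return Content(html.ToString(), "text/html");
    }
}
```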
I'm trying to reproduce an exception my rails site generates whenever a specific crawler hits a certain page:
ActionView::Template::Error: incompatible character encodings: ASCII-8BIT and UTF-8
The page takes GET parameters. When I visit the page with the same GET parameters with my browser, everything renders correctly.
The IP of the crawler is always EU-based (my site is US-based), and one of the user agents is:
Mozilla/5.0 (compatible; GrapeshotCrawler/2.0; +http://www.grapeshot.co.uk/crawler.php)
Looking at the HTTP headers sent, the only difference I see between my browser's requests and the crawler's is that the crawler includes HTTP_ACCEPT_CHARSET, whereas mine does not:
-- HTTP_ACCEPT_CHARSET: utf-8,iso-8859-1;q=0.7,*;q=0.6
I tried setting this header in my own request but I couldn't reproduce the error. Are there HTTP header parameters that can change how Rails renders? Are there any other settings I can try in order to reproduce this?
That's not a browser; it's an automated crawler. In fact, if you follow the link in the user agent you get the following explanation:
The Grapeshot crawler is an automated robot that visits pages to examine and analyse the content, in this sense it is somewhat similar to the robots used by the major search engine companies.
Unless the crawler is submitting a POST request (which is really unlikely, as crawlers tend to follow links via GET rather than issue POST requests), it means the crawler is somehow injecting information into the request that causes your controller to crash.
The most common cause is a malformed query string. Check the query string associated with the request: it likely contains a character that is not valid UTF-8, which your controller reads and which then crashes the rendering.
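If you want to try to reproduce it, something along these lines may help (a sketch only; the URL, parameter name and byte values are placeholders, and any HTTP client that lets you send raw bytes will do - here I use .NET's HttpClient): send the crawler's Accept-Charset and User-Agent headers plus a query-string value that is not valid UTF-8.

```csharp
// Hypothetical reproduction attempt: request the failing page with the crawler's
// headers and a query-string parameter containing bytes that are not valid UTF-8.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ReproduceEncodingError
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // %FE%FF is not a valid UTF-8 sequence; the page URL and "q" parameter are placeholders.
        var request = new HttpRequestMessage(HttpMethod.Get, "https://example.com/page?q=%FE%FF");
        request.Headers.TryAddWithoutValidation("Accept-Charset", "utf-8,iso-8859-1;q=0.7,*;q=0.6");
        request.Headers.TryAddWithoutValidation("User-Agent",
            "Mozilla/5.0 (compatible; GrapeshotCrawler/2.0; +http://www.grapeshot.co.uk/crawler.php)");

        using var response = await client.SendAsync(request);
        Console.WriteLine((int)response.StatusCode);   // a 500 here suggests the encoding bug reproduced
    }
}
```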
It's also worth inspecting the stack trace of the exception (either in the Rails logs or using a third-party app such as Bugsnag) to determine which component of your stack is raising it, then reproduce, test and fix it.
I am having some trouble with an ASP.NET MVC + IIS 7.5 page. I have a really large page that sometimes exceeds 15.0 MB uncompressed and 1.5 MB compressed.
When this happens, it looks like the connection never ends. The loading icon spins forever, and if I look at the Developer Tools, the connection shows as pending, even though the entire HTML has been received.
It happens in Chrome, Firefox and Internet Explorer, so I think the problem is in ASP.NET or IIS.
Do I need to do something special to handle such pages?
15 MB is going to be horribly slow and unresponsive - not something your users want - however much they want to "see all published files".
I would introduce paging into your web page, for example, so that not all the files are downloaded at once (a sketch follows below).
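A minimal paging sketch (the controller, model and page size are illustrative; hook it up to your own data source):

```csharp
// Hypothetical paging action: only one page of file records is queried and
// rendered per request, instead of all of them in a single 15 MB response.
using System.Linq;
using System.Web.Mvc;

public class PublishedFilesController : Controller
{
    private const int PageSize = 100;   // illustrative page size

    public ActionResult Index(int page = 1)
    {
        var files = GetPublishedFiles()                  // placeholder for your data access
            .OrderBy(f => f.Name)
            .Skip((page - 1) * PageSize)
            .Take(PageSize)
            .ToList();

        ViewBag.Page = page;
        return View(files);
    }

    private IQueryable<PublishedFile> GetPublishedFiles()
    {
        // Placeholder: return your EF/ORM query here.
        return Enumerable.Empty<PublishedFile>().AsQueryable();
    }
}

public class PublishedFile
{
    public string Name { get; set; }
}
```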
However, if you really want a 15 MB page, you may find the limits config can help.
You say that the entire HTML was provably received. How could IIS or ASP.NET be the problem then? Once the content is sent they are out of the loop.
The browser is probably the problem.
You could try setting Response.Buffer to false.
The Buffer property indicates whether to buffer page output. When page output is buffered, the server does not send a response to the client until all of the server scripts on the current page have been processed, or until the Flush or End method is called.
By default, Response.Buffer is set to true, so output will be buffered. Perhaps by feeding the response to the client as it is generated, the browsers will behave as you need them to.
You do need to set the value of Response.Buffer before any output is sent to the browser though.
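For example (a sketch only; the controller and action names are invented), in an MVC action that might look like:

```csharp
// Illustrative only: disable buffering for the action that returns the huge page,
// so output is streamed to the client as the view renders.
using System.Web.Mvc;

public class ReportsController : Controller
{
    public ActionResult AllPublishedFiles()
    {
        // Must be set before any output is written to the response.
        Response.Buffer = false;   // equivalent to Response.BufferOutput = false
        return View();
    }
}
```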
Maybe try IIS compression?
IIS provides the following compression options:
Static files only
Dynamic application responses only
Both static files and dynamic application responses
http://technet.microsoft.com/en-us/library/cc771003%28v=ws.10%29.aspx
I have a strange problem. I am trying to automate visiting a web site using WebRequest and WebClient. I have observed all the POST request header key-value pairs and the posted data string in Firebug (Request Headers and Post tabs). I then simulated that request with WebRequest, setting all the header parameters and posted data. However, when I call GetResponse() on this request instance, I always get an error page back saying that a sessionID is missing.
I have taken care to put the session cookie returned by the previous response (the first step, opening the logon page) into the request's Cookie header. I can get the correct response back when simulating the request for the logon page (the first page), but I cannot get through this authentication page. My POST data looks like userid=John&password=123456789&domain=highmark. And the same authentication page request, when carried out by the browser, succeeds every time.
Am I missing something in the request that may not be shown by Firebug? If so, can you recommend tools that can examine the entire request sent by the browser?
I have solved this issue. The problem was that I had set the HttpWebRequest instance's AllowAutoRedirect to true. The effect is that when the first response comes back from the server, HttpWebRequest automatically makes another request for the different URL given in the response header's Location field.
The catch with the HttpWebRequest class is that when it follows a redirect, it does not carry the cookies from the Set-Cookie response header into the next request's headers, so the server denies that page request and may redirect again to yet another page.
With AllowAutoRedirect=true, the HttpWebRequest.GetResponse() method only returns the last page in that redirect chain, which is why I got a totally different response than I expected.
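For anyone hitting the same thing, here is a rough sketch of the approach (the URLs and form fields are placeholders, not my exact code): turn off AllowAutoRedirect, collect cookies in a CookieContainer, and follow the Location header yourself so the session cookie travels with the next request.

```csharp
// Sketch only: disable automatic redirects, keep cookies in a CookieContainer,
// and follow the Location header manually so the session cookie is not lost.
using System;
using System.IO;
using System.Net;
using System.Text;

class LoginAutomation
{
    static void Main()
    {
        var cookies = new CookieContainer();

        // 1. POST the credentials, but do not let HttpWebRequest follow the redirect itself.
        var login = (HttpWebRequest)WebRequest.Create("https://example.com/logon");   // placeholder URL
        login.Method = "POST";
        login.ContentType = "application/x-www-form-urlencoded";
        login.CookieContainer = cookies;        // Set-Cookie values are captured here
        login.AllowAutoRedirect = false;

        byte[] body = Encoding.ASCII.GetBytes("userid=John&password=123456789&domain=highmark");
        using (var stream = login.GetRequestStream())
            stream.Write(body, 0, body.Length);

        string location;
        using (var response = (HttpWebResponse)login.GetResponse())
            location = response.Headers["Location"];    // where the server redirects us

        // 2. Follow the redirect ourselves, reusing the same CookieContainer.
        var next = (HttpWebRequest)WebRequest.Create(new Uri(new Uri("https://example.com/"), location));
        next.CookieContainer = cookies;
        using (var response = (HttpWebResponse)next.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
            Console.WriteLine(reader.ReadToEnd());
    }
}
```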
While solving this, I also have to thank an excellent HTTP traffic examination tool: IEInspector HTTP Analyzer (http://www.ieinspector.com/httpanalyzer/). The great feature of this tool is that it can examine not only the HTTP traffic from the browser but also the requests your own process's HttpWebRequest makes, and it can display the raw stream of those requests and responses in text format. Although it is commercial software, you can try it for 15 days. I am quite happy with what it tells me (in well-formatted detail) and I would be happy to buy it as well.
I have an https page which launches a shadowbox which itself has a page inside it. The shadowbox content is fetched with whatever protocol the parent page uses, so it loads over https as well.
When the shadowbox page loads I get a mixed content warning, saying that some parts are not encrypted. Usually this happens when some of the page contents are fetched with https requests and some with http. However, according to the "Live HTTP Headers" plugin for Firefox, every request for the parent page and the page loaded into the shadowbox is https: there's no sign of an http request anywhere.
I'm a bit stumped by this and want to get rid of the mixed content warning, because it's really intrusive in some browsers and generally makes people panic a bit.
Can anyone else suggest why this might be happening, or a way to diagnose it further?
Grateful for any advice - max
And how can the process be speeded up from a developer point of view?
There are a lot of things going on.
When you first type in an address, the browser will look up the hostname in DNS, if it is not already in the browser's cache.
Then the browser sends an HTTP GET request to the remote server.
What happens on the server is really up to the server, but it should respond with an HTTP response that includes headers, which describe the content to the browser and say how long it is allowed to be cached. The response might be a redirect, in which case the browser will send another request to the redirect target.
Obviously, server response time will be one of the critical points for perceived performance, but there are many other things to it.
When a response is returned from the server, the browser will do a few things. First it will parse the HTML returned and build its DOM (Document Object Model) from it. Then it will run any startup JavaScript on the page before the page is ready to be displayed in the browser. Remember that if the page contains any resources such as external stylesheets, scripts, images and so on, the browser will have to download those before it can display the page. Each resource is a separate HTTP GET, and there is some latency involved for each one. Therefore, one thing that can greatly reduce load times in some cases is to use as few external resources as possible and make sure they are cached on the client (so the browser doesn't have to fetch them for each page view).
To summarize, to optimize performance for a web page, you want to look at, as a minimum:
Server response time
Bandwidth / content transfer time.
Make sure you have a small and simple DOM (especially if you need to support IE6).
Make sure you understand client-side caching and the settings you need to set on the server (see the sketch after this list).
Make sure the client downloads as little data as possible. Consider gzipping resources, and perhaps dynamic content too (depending on your situation).
Make sure you don't have any CPU-intensive JavaScript running on page load.
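To illustrate the caching point above (a sketch only, assuming an ASP.NET MVC action; the controller name and cache duration are arbitrary), you can tell the client how long it may reuse a response:

```csharp
// Illustrative only: set client-side cache headers so the browser can reuse the
// response instead of re-downloading it on every page view.
using System;
using System.Web;
using System.Web.Mvc;

public class AssetsController : Controller
{
    public ActionResult SiteData()
    {
        // Allow the browser (and proxies) to cache this response for one hour.
        Response.Cache.SetCacheability(HttpCacheability.Public);
        Response.Cache.SetExpires(DateTime.UtcNow.AddHours(1));
        Response.Cache.SetMaxAge(TimeSpan.FromHours(1));

        return Content("{ \"version\": 1 }", "application/json");
    }
}
```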
You might want to read up on the HTTP Protocol, as well as some of the Best Practices. A couple of tools you can use are YSlow and Google Page Speed.
What are the steps involved from entering a web site address to the page being displayed on the browser?
The steps are something like:
Get the IP address of the URL
Create a TCP (HTTP) connection to the IP address, and request the specified page
Receive/download the page via TCP/HTTP; the page may consist of several files/downloads: e.g. the HTML document, CSS files, javascript files, image files ...
Render the page
And how can the process be speeded up from a developer point of view?
Measure to discover which of these steps is slow (a timing sketch follows below):
It's only worth optimizing whichever step is the slow one (no point in optimizing steps which are already fast)
The answer to your question varies depending on which step it is.
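As a rough way to measure some of these steps from code (a sketch only; the host and URL are placeholders, and real profiling is usually better done with the browser's own network tools), you can time the DNS lookup and the download separately:

```csharp
// Sketch: time DNS resolution and page download separately to see which step dominates.
using System;
using System.Diagnostics;
using System.Net;

class PageTiming
{
    static void Main()
    {
        const string host = "example.com";                   // placeholder host
        const string url = "http://example.com/";            // placeholder URL

        var sw = Stopwatch.StartNew();
        Dns.GetHostAddresses(host);                          // step 1: DNS lookup
        Console.WriteLine($"DNS lookup: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        using (var client = new WebClient())
        {
            string html = client.DownloadString(url);        // steps 2-3: connect and download the HTML
            Console.WriteLine($"Download:   {sw.ElapsedMilliseconds} ms ({html.Length} chars)");
        }
        // Step 4 (rendering) happens in the browser and is best measured with its dev tools.
    }
}
```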