Large Pages on IIS + ASP.NET MVC - asp.net-mvc

I am having some trouble with an ASP.NET MVC + IIS 7.5 page. I have a really large page that sometimes exceeds 15.0MB uncompressed and 1.5MB compressed.
When it happens, the connection seems to never finish. The loading icon spins forever and, if I look in Developer Tools, the request is still pending even though the entire HTML has been received.
It happens in Chrome, Firefox and Internet Explorer, so I think the problem is in ASP.NET or IIS.
Do I need to do something special to handle such large pages?

15MB is going to be horribly slow and unresponsive - not something your users want - however much they want to "see all published files".
I would introduce paging into your webpage, for example, so that not all the files are downloaded at once.
However, if you really want a 15MB page, you may find the limits config can help.

You say that the entire HTML was provably received. How could IIS or ASP.NET be the problem then? Once the content is sent they are out of the loop.
The browser is probably the problem.

You could try setting Response.Buffer to false.
The Buffer property indicates whether to buffer page output. When page output is buffered, the server does not send a response to the client until all of the server scripts on the current page have been processed, or until the Flush or End method is called.
By default, Response.Buffer is set to true, so output will be buffered. Perhaps by feeding the response to the client as it is generated, the browsers will behave as you need them to.
You do need to set the value of Response.Buffer before any output is sent to the browser, though.
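In an MVC controller that might look roughly like this (a minimal sketch; the controller and action names are made up):

using System.Web.Mvc;

namespace SampleApp.Controllers
{
    // Hypothetical controller, for illustration only.
    public class PublishedFilesController : Controller
    {
        public ActionResult Index()
        {
            // Must be set before anything is written to the response.
            // With buffering off, ASP.NET flushes HTML to the client as the
            // view renders instead of holding the whole page in memory first.
            // (Response.BufferOutput = false is the same setting.)
            Response.Buffer = false;

            return View();
        }
    }
}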

Maybe try IIS compression?
IIS provides the following compression options:
Static files only
Dynamic application responses only
Both static files and dynamic application responses
http://technet.microsoft.com/en-us/library/cc771003%28v=ws.10%29.aspx
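One way to turn these on for a site is in web.config, roughly like this (a sketch; it assumes the static and dynamic compression modules are installed on the server):

<system.webServer>
  <!-- doStaticCompression covers files served from disk; doDynamicCompression covers MVC responses -->
  <urlCompression doStaticCompression="true" doDynamicCompression="true" />
</system.webServer>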

Related

Google Chrome is not caching images

I'm trying to optimize my Ruby on Rails application, and I realized that the images are what take the longest to load, but I also noticed another problem, which is that Google Chrome isn't caching the images.
I noticed this because in the Google Developer Tools you can see that Chrome makes requests to load the images that are cancelled before the images are actually loaded.
This can be seen here: first I open the Developer Tools, then refresh the page, and among the first requests you can see the ones for the images, but they are cancelled immediately.
After that you can see the requests that actually loaded the images.
I don't understand why this is happening if in the response headers you can see that Cache-Control is set to public with max-age = 31536...
I put the images in my application this way:
<div class="col-xs-3"><%= image_tag "#{@hero.id}/ability_1.png", class: "center-block" %></div>
And the images are organized in folders in app/assets/images
Is there a RoR way to fix this?
Edit: Now, testing my app (which is on Heroku) on Windows, I noticed that Google Chrome does in fact cache the images sometimes, but only about 50% of the time (and when I was developing on Ubuntu it didn't work a single time), while in Firefox the images are loaded the first time, but on subsequent loads of the same view I can't even notice the reload; it's beautiful. Why is Google Chrome not like that? Is it normal for Google Chrome to act so strangely?
The most important thing to realize when analyzing browser caching is the "Status Code". In your example, you can see you got a "304", which stands for "Not Modified", which means the browser "could potentially use its cache". So you ARE in fact caching. Caching != not hitting your web server.
The definition according to Mozilla:
This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response.
The browser sends the ETag and Last-Modified values to your web server, and your web server then looks at that metadata and says "Nope, this file hasn't changed, so feel free to use your cache", and that's it. It does not actually send the file again. You can see that the "Size" is much smaller than when it's a "200" status code, where the web server IS sending the file, and the timing should be much shorter as well.
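In other words, a conditional request and its 304 response look roughly like this (all values here are made up for illustration):

GET /1/ability_1.png HTTP/1.1
Host: myapp.example.com
If-None-Match: "5f2a8c1e9d4b"
If-Modified-Since: Tue, 01 Sep 2015 10:00:00 GMT

HTTP/1.1 304 Not Modified
ETag: "5f2a8c1e9d4b"
Cache-Control: public, max-age=31536000

The body is empty; the browser reuses the bytes it already has on disk.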
In Chrome you can force "non-caching" by checking the "Disable cache" option in the Network tab.
Hope that helps!
It looks like Chrome does handle image caching differently. What type of reload are you doing (following links, pressing Enter in the address bar, Ctrl+R)? It looks like if you press Enter in the address bar it will respect max-age, but if you use Ctrl+R Chrome sets max-age to 0.
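For example, on a Ctrl+R reload Chrome adds a Cache-Control: max-age=0 request header (illustrative values below), which forces a revalidation round trip to the server and typically produces the 304 responses you are seeing:

GET /1/ability_1.png HTTP/1.1
Host: myapp.example.com
Cache-Control: max-age=0
If-None-Match: "5f2a8c1e9d4b"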
expires_in max-age cache control doesn't work
Chrome doesn't cache images/js/css
You can force caching with a manifest file. There are plenty of docs on the web about the topic. Here's a starter: http://www.w3schools.com/html/html5_app_cache.asp
the request headers contain max-age=0. Try setting that to a big number!

What Is A Browser Cache? What does it store from a webpage's data?

Whenever I have an issue with a website, one of the first suggestions I will hear is “try to clear your browser cache” along with “and delete your cookies“. So what is this browser cache? What does it store and what is it good for?
I have googled, but didn't find a proper answer. I would appreciate it if anyone could help with this.
A browser cache "caches" (as in, keeps local copies of) data downloaded from the internet. The next time your browser needs the same data it can get it from the cache (fast) instead of downloading it over the internet (slow).
The problem is that data can be old. For example, imagine the browser cached www.nytimes.com today and 24 hours later you visited www.nytimes.com again. If the browser loaded the cached data, it would be showing old news.
So there are headers (metadata) that servers send to browsers telling them how long they should cache something (if at all).
The data the browser generally caches are the responses to "requests". In other words, if your browser asks for "http://foo.com/bar.html" for the first time, the browser will "request" that "foo.com" send it "bar.html". If the headers from "foo.com" are set a certain way, the browser will then save a local copy of "bar.html". If you request the same thing again, the browser may load "bar.html" from its cache. I say "may" because it depends on the headers sent from the server. The server can say how long to cache something (say 10 minutes, 10 hours, 10 days, etc.) or it can say "don't cache this at all, always download the newest version".
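For example, the response headers for "bar.html" might look roughly like this (illustrative values), telling the browser it may reuse its copy for 10 minutes:

HTTP/1.1 200 OK
Content-Type: text/html
Cache-Control: max-age=600

A header like Cache-Control: no-store would instead tell the browser never to keep a copy at all.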
If you go to your browser's dev tools (Chrome, for example) and look at the Network tab (not sure what it's called in other browsers), load the page again and you can see all the requests. You'll also notice which ones were loaded from the cache.
If you click on a request you can see the metadata from both the browser (request headers) and the server (response headers).
The reason clearing the cache often fixes things is that, for some reason (a bug?), the server said it was OK to cache or to use the cached version, but the data on the server has actually been updated. The browser, doing what the server told it to do, is using its copy from the cache, not the newer version which is actually needed. There might also, from time to time, be bugs in the browser itself related to caching.
When everything is working correctly it's great, but if one thing or another is misconfigured or sending the wrong headers, the browser can end up loading old data from the cache instead of downloading the newest data. Clearing your cache effectively forces the browser to download the data again.
You can find out the details of what the various headers do here.
Browser caches are not mere rubbish bins but a mechanism to speed up the way we browse the web. Each website we visit has certain common elements like logos, navigation buttons, GIF animation files, script files, etc. It doesn't make sense for the browser to download each element (also commonly called Temporary Internet Files) again when we hop from one page to another and back.
The page elements are downloaded when we first visit a website, and the browser checks its cache folder for copies when we browse the site again. If a copy exists, the browser doesn't download the same file again, significantly speeding up browsing.
For more info:
http://www.guidingtech.com/8925/what-are-browser-cache-cookies-does-clearing-them-help/
https://en.wikipedia.org/wiki/Cache_(computing)
First result in Google, this is the proper answer, but I will summarize =]
1) What is Browser Cache?
Cache is a component that stores data so future requests for that data can be served faster; the data stored in a cache might be the results of an earlier computation, or the duplicates of data stored elsewhere.
2) What does it store?
Web browsers and web proxy servers employ web caches to store previous responses from web servers, such as web pages and images.
3) What is it good for?
Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improve responsiveness for users of the web.

Request Length ERR_CONNECTION_RESET in MVC 4 Application on IIS 7.5

I am having an issue with my application using MVC 4 and IIS 7.5 getting a net::ERR_CONNECTION_RESET error. The page request size is 4.9MB. All of the content loads, but the request says that it has not finished yet, and none of my JavaScript is applied. I have other pages in the application that all load fine and the JavaScript is applied with no issues. There seems to be something going on with this particular page.
Checking around, I discovered I needed to set the maxRequestLength and the maxAllowedContentLength in the web.config. I set both to 8 MB, with maxRequestLength being in kilobytes and maxAllowedContentLength being in bytes. This still resulted in the same thing; I double-checked in IIS to make sure that the maxRequestLength and maxAllowedContentLength were both being set correctly, which they were.
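For reference, the two settings described above look roughly like this in web.config (a sketch using the 8 MB values mentioned; maxRequestLength is in kilobytes, maxAllowedContentLength in bytes):

<configuration>
  <system.web>
    <httpRuntime maxRequestLength="8192" />
  </system.web>
  <system.webServer>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="8388608" />
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>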
Next I adjusted my query to bring back a smaller amount of data, and the page request size was well under 900KB and everything seemed to work fine. I kept modifying my query to bring back more results to see what was the maximum request size I could reach with the page still loading fine. To my surprise, once the page request length reached 918KB or greater, the request would keep going for about 2 minutes before resulting in a net::ERR_CONNECTION_RESET error. Keep in mind this error only shows in Firebug, as the page seems to display all the data fine except that none of the JavaScript is applied.
I only discovered this issue when putting my application on the production server. Everything worked fine on localhost. I believe this to be something going on with the server and IIS 7.5, as even with ELMAH in the application I was unable to capture any errors.
At this point I have run out of ideas and things to try. Any additional help would be great.
I had the same problem.
That error is usually because there is a problem with the database connection.
Try this:
Firewall permissions
If your connection string uses a Trusted Connection and your user doesn't have a password, that can be a problem; you must set a password or change the connection string to use sa.
I really hope that it works

VS2013 RTM making once-per-second SignalR requests when I check with Fiddler

When I check with Fiddler I see my new install of VS2013 is making continuous SignalR requests. I don't use anything related to SignalR in my application. How can I stop these requests, which I assume are part of VS2013 trying to sync something up?
It's probably due to the BrowserLink feature mentioned here. BrowserLink uses SignalR to communicate between VS and your browsers.
Maybe log out of Visual Studio? It might also help if you tell us where VS is communicating to...
There seem to be two additional kinds of calls made by Browser Link:
a one-off call made on page render, on the same port as the site, exchanging information about the mapping between Razor views and the page elements. This call has __browserlink at the start of the URL.
a periodic call as part of the SignalR synchronisation. This has SignalR about 30 characters into the URL.
The former, a single call, I can live with. The latter fills up my capture history.
To avoid this, in Fiddler I've used the 'Hide the following Hosts' option in the Filters tab and put something like localhost:62533 in the text box. Note that the port number seems to change with each restart of VS2013.
As long as 'Use Filters' is checked, I still see the traffic I want (plus a one-off call for __browserlink).

What are the steps involved from entering a web site address to the page being displayed on the browser?

And how can the process be sped up from a developer's point of view?
There are a lot of things going on.
When you first type in an address, the browser will look up the hostname in DNS, if it is not already in the browser cache.
Then the browser sends an HTTP GET request to the remote server.
What happens on the server is really up to the server; but it should respond with an HTTP response that includes headers, which perhaps describe the content to the browser and how long it is allowed to be cached. The response might be a redirect, in which case the browser will send another request to the redirected page.
Obviously, server response time will be one of the critical points for perceived performance, but there are many other things to it.
When a response is returned from the server, the browser will do a few things. First it will parse the HTML returned and create its DOM (Document Object Model) from that. Then it will run any startup JavaScript on the page, before the page is ready to be displayed in the browser. Remember that if the page contains any resources such as external stylesheets, scripts, images and so on, the browser will have to download those before it can display the page. Each resource is a separate HTTP GET, and there is some latency involved for each one. Therefore, one thing that in some cases can greatly reduce load times is to use as few external resources as possible, and make sure they are cached on the client (so the browser doesn't have to fetch them for each page view).
To summarize, to optimize performance for a web page, you want to look at, as a minimum:
Server response time
Bandwidth / content transfer time.
Make sure you have a small and simple DOM (especially if you need to support IE6).
Make sure you understand client side caching and the settings you need to set on the server.
Make sure you make the client download as little data as possible. Consider GZipping resources and perhaps dynamic content also (depending on your situation).
Make sure you don't have any CPU-intensive JavaScript on page load.
You might want to read up on the HTTP Protocol, as well as some of the Best Practices. A couple of tools you can use are YSlow and Google Page Speed
What are the steps involved from entering a web site address to the page being displayed on the browser?
The steps are something like:
Get the IP address of the URL
Create a TCP (HTTP) connection to the IP address, and request the specified page
Receive/download the page via TCP/HTTP; the page may consist of several files/downloads: e.g. the HTML document, CSS files, JavaScript files, image files, and so on (a minimal example exchange follows this list)
Render the page
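To make steps 2 and 3 concrete, the exchange for the HTML document alone looks roughly like this (all values are illustrative); each stylesheet, script and image referenced by the page then triggers a similar request:

GET /index.html HTTP/1.1
Host: www.example.com

HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 5120

<!DOCTYPE html> ...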
And how can the process be speeded up from a developer point of view?
Measure to discover which of these steps is slow:
It's only worth optimizing whichever step is the slow one (no point in optimizing steps which are already fast)
The answer to your question varies depending on which step it is.
