I am using the code from this article: http://weblogs.asp.net/jeff/archive/2009/07/01/304-your-images-from-a-database.aspx to cache the images I return from the database, but I am having a problem: when the user changes the image, the browser does not go back to the server to check the timestamp.
I have tested this in Chrome and IE9. Chrome almost always goes back to the server to check the timestamp and, if the image has been edited, returns the new image; IE9 never does unless I press Ctrl+F5 to refresh the page.
Is there a cross-browser solution that anyone knows of to make sure the browser always makes a call to the server so I can check the timestamp?
Many thanks for any help.
Setting the expiration date to the current date/time forced it to always check with the server.
Response.Cache.SetExpires(DateTime.Now);
The same solution as user351711's, with a different syntax:
Response.Cache.SetCacheability(System.Web.HttpCacheability.NoCache);
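For reference, here is a rough sketch of how those directives can sit alongside the conditional-GET check described in the linked article. This is only an illustration, not the article's actual code; the handler name and the data-access helpers (GetImageTimestamp, GetImageBytes) are hypothetical placeholders:

using System;
using System.Web;

public class CachedImageHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string id = context.Request.QueryString["id"];

        // Hypothetical lookup of the image's last-modified timestamp in the database.
        DateTime lastModified = GetImageTimestamp(id);

        // Ask the browser to revalidate on every request, while still allowing 304 replies.
        context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
        context.Response.Cache.SetLastModified(lastModified);

        string ifModifiedSince = context.Request.Headers["If-Modified-Since"];
        DateTime since;
        if (ifModifiedSince != null
            && DateTime.TryParse(ifModifiedSince, out since)
            && since.ToUniversalTime().AddSeconds(1) >= lastModified.ToUniversalTime())
        {
            // The stored timestamp has not changed: answer 304 with no body.
            context.Response.StatusCode = 304;
            context.Response.SuppressContent = true;
            return;
        }

        context.Response.ContentType = "image/png";
        context.Response.BinaryWrite(GetImageBytes(id));
    }

    public bool IsReusable { get { return false; } }

    // Placeholder data-access helpers; the real queries depend on your schema.
    private DateTime GetImageTimestamp(string id) { return DateTime.UtcNow; }
    private byte[] GetImageBytes(string id) { return new byte[0]; }
}

The idea is that NoCache forces a revalidation request on every load, so an edited image should be picked up as soon as the stored timestamp changes.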
I'm trying to optimize my application in Ruby on Rails, and I realized that the images are what take the longest to load, but I also noticed another problem: Google Chrome isn't caching the images.
I noticed this because in the Chrome Developer Tools you can see that Chrome makes requests to load the images, which are cancelled before the images are actually loaded.
This can be seen by opening the Developer Tools and refreshing the page: among the first requests you can see the ones for the images, but they are cancelled immediately.
After that you can see the requests that actually load the images.
I don't understand why this is happening when the response headers show that Cache-Control is set to public with max-age=31536...
I put the images in my application this way:
<div class="col-xs-3"><%= image_tag "#{@hero.id}/ability_1.png", class: "center-block" %></div>
And the images are organized in folders in app/assets/images
Is there a RoR way to fix this?
Edit: Now, testing my app (which is on Heroku) on Windows, I noticed that Google Chrome does in fact cache the images sometimes, but only about 50% of the time (and when I was developing on Ubuntu it didn't work a single time). In Firefox the images are loaded the first time, and on subsequent loads of the same view I can't even notice the reload, it's beautiful. Why isn't Google Chrome like that? Is it normal for Chrome to act so strangely?
The most important thing to realize when analyzing browser caching is the status code. In your example you can see you got a "304", which stands for "Not Modified" and means the browser "could potentially use its cache". So you ARE in fact caching. Caching != not hitting your web server.
The definition according to Mozilla:
This is used for caching purposes. It tells the client that the response has not been modified, so the client can continue to use the same cached version of the response.
The browser sends the ETag and Last-Modified values to your web server, the web server looks at that metadata and says "Nope, this file hasn't changed, so feel free to use your cache", and that's it. It does not actually send the file again. You can see that the "Size" is much smaller than when it's a "200" status code, where the web server IS sending the file, and the timing should be much shorter as well.
In Chrome you can force "non-caching" by checking the "Disable cache" option in the Network tab.
Hope that helps!
It looks like Chrome does handle image caching differently. What type of reload are you doing (following links, pressing Enter in the address bar, Ctrl+r)? It looks like if you press Enter in the address bar it will respect max-age, but if you use Ctrl+r Chrome sets max-age to 0.
See also: "expires_in max-age cache control doesn't work" and "Chrome doesn't cache images/js/css".
You can force caching with a manifest file. There's plenty of documentation on the web about the topic. Here's a starter: http://www.w3schools.com/html/html5_app_cache.asp
The request headers contain max-age=0. Try setting that to a big number!
I am having some trouble with an ASP.NET MVC + IIS 7.5 page. I have a really large page that sometimes exceeds 15.0 MB uncompressed and 1.5 MB compressed.
When that happens, it looks like the connection never ends. The loading icon spins forever, and if I look at the Developer Tools the connection is pending, even though the entire HTML has been received.
It happens in Chrome, Firefox and Internet Explorer, so I think the problem is ASP.NET or IIS.
Do I need to do something special to handle such pages?
15MB is going to be horribly slow and unresponsive - not something your users want - however much they want to "see all published files".
I would introduce, for example, paging into your webpage so not all the files are downloaded at once.
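For illustration, a minimal paging sketch for an MVC controller; the controller, repository, and model names here are made up, not taken from the question:

using System.Linq;
using System.Web.Mvc;

public class PublishedFile
{
    public string Name { get; set; }
}

public interface IFileRepository
{
    // Hypothetical data source; substitute your own repository or DbContext.
    IQueryable<PublishedFile> GetPublishedFiles();
}

public class PublishedFilesController : Controller
{
    private readonly IFileRepository _repository;

    public PublishedFilesController(IFileRepository repository)
    {
        _repository = repository;
    }

    // Renders one slice of the list per request instead of the full 15 MB page.
    public ActionResult Index(int page = 1, int pageSize = 100)
    {
        var files = _repository.GetPublishedFiles()
                               .OrderBy(f => f.Name)
                               .Skip((page - 1) * pageSize)
                               .Take(pageSize)
                               .ToList();
        return View(files);
    }
}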
However, if you really want a 15MB page, you may find the limits config can help.
You say that the entire HTML was provably received. How could IIS or ASP.NET be the problem then? Once the content is sent they are out of the loop.
The browser is probably the problem.
You could try setting Response.Buffer to false.
The Buffer property indicates whether to buffer page output. When page output is buffered, the server does not send a response to the client until all of the server scripts on the current page have been processed, or until the Flush or End method is called.
By default, Response.Buffer is set to true, so output will be buffered. Perhaps by feeding the response to the client as it comes the browsers will behave as you need them to.
You do need to set the value of Response.Buffer before any output is sent to the browser though.
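A minimal sketch of what that might look like in an MVC action (the controller and action names are invented for the example):

using System.Web.Mvc;

public class FilesController : Controller
{
    public ActionResult AllPublished()
    {
        // Must be done before anything is written to the response.
        Response.Buffer = false;

        // With buffering disabled, the rendered HTML is streamed to the client
        // as it is produced instead of being held until the whole page is finished.
        return View();
    }
}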
Maybe try IIS compression?
IIS provides the following compression options:
Static files only
Dynamic application responses only
Both static files and dynamic application responses
http://technet.microsoft.com/en-us/library/cc771003%28v=ws.10%29.aspx
I had to disable cookies for some testing in a web application. Now, for some reason, I cannot get cookies working on localhost in IE any more. They work as expected in Safari, Firefox, and Chrome, but for some unknown reason I cannot for the life of me get cookies working on localhost. I have tried literally every setting imaginable with absolutely no luck.
If I change the URL to "localhost." it works as expected, but when I just use "localhost", without the trailing "." period, cookies are absolutely not written. What the heck did I do? I tried upgrading to IE 9 and that didn't work. I reverted back to IE 8 and still have the same problem. I'm going absolutely mad trying to figure out what is causing this.
I tried Tools, Internet Options, Privacy, Advanced, and explicitly told the browser to accept all 1st and 3rd party cookies, and I'll be damned if, when I'm on a localhost site, the cookies still are not written. This has worked perfectly in the past, so it's no doubt some setting I changed, but I cannot for the life of me figure out what the hell is going on. If anyone has any idea of how I can remedy this, please do let me know. I hate Internet Explorer, but that's a conversation for a different day.
Go into Tools, Internet Options, Advanced, and hit the Reset button. Put everything back to factory defaults :)
At my wit's end, I just decided to try using http://127.0.0.1/... instead of http://localhost/.... It works. I had a similar problem with Safari and the same solution worked there. Hope it works for you.
Were you by chance using a tool like Fiddler2? Check your connection settings etc... I have had IE get hung in a weird state after using web proxy tools.
@Hcabnettek, try setting IE's caching setting to "Always Refresh from Server" in the Developer Tools.
That might be the problem. Also try adding an extra query string containing some random value to your page URL every time, because you can never be sure whether the cache is enabled or disabled on the client side; random values in the URL's query string will force IE to load a fresh copy for that different page URL.
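As an illustration only (the helper below is made up, and it assumes an ASP.NET page on the server side), appending a random value to the query string looks something like this:

using System;

public static class CacheBusting
{
    // Appends a unique query-string value so IE treats every request as a new URL
    // and cannot satisfy it from its cache.
    public static string WithRandomToken(string url)
    {
        string separator = url.Contains("?") ? "&" : "?";
        return url + separator + "v=" + Guid.NewGuid().ToString("N");
    }
}

// Usage, e.g. in a redirect: Response.Redirect(CacheBusting.WithRandomToken("/login.aspx"));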
Hope that helps you, because it helped me also.
I just discovered a bug in the TIdHTTP component. The scenario is this: I'm creating a small app to fetch the pages of a website using TIdHTTP.Get. I tried it with eBay and all was OK; after eBay I tried Amazon, and that's where I encountered a problem. I searched for "lenovo laptop" on Amazon, copied the URL of the second page of results, and pasted it into my small app, but it always gets the first page even though the URL I used is for the second page. Has anyone else encountered this? Please see the source code I used in the link below; it is defaulted to the second page of Amazon. Thank you guys in advance.
http://www.yourfilelink.com/get.php?fid=577209
What version of Indy are you using? Your code works fine for me as-is when I try it with the current Indy 10.5.8 snapshot release.
If the server returns a success reply, TIdHTTP.Get() saves whatever data the server decides to send. If you are not seeing the data you are expecting, chances are that Amazon is either redirecting TIdHTTP back to the first page when you try to access the second page directly, or it is sending the first page's data by accident. Either way, I seriously doubt this is a bug in TIdHTTP itself.
I'm implementing a simple web site based on Python's simple web server, which I'm planning to use through mobile Safari (iPad).
The web page issues events to the webserver in the form of queries.
So page.html has links of the form:
<a href="page.html?event1">generate event 1</a>
<a href="page.html?event2">generate event 2</a>
The problem I'm having is that Safari seems to be caching "page.html?eventn", so after the first time it does not issue the query to the server at all.
How do I force the web client to issue the query every time?
You need to set the 'Expires' header on the web server to a date in the past. The browser will then make a request to the server every single time rather than trying to serve a copy out of the local cache.
Hope this helps.