I notice that in my production environment (where I have memcached implemented) I see a Cache-Control max-age header in Firebug any time I am looking at an index page (posts, for example).
Cache-Control: max-age=315360000
In my dev environment that header looks like the following:
Cache-Control: private, max-age=0, must-revalidate
As far as I know I have not done anything special in my nginx.conf file to specify a max-age for regular content; I do have expires max set for css, jpg, etc. Here is my nginx.conf file:
http://pastie.org/1167080
So why is this Cache-Control being set? How can I control it? The side effect is pretty bad. This is what happens:
1 - User requests the all_posts listing and gets a list of 10 pages (paginated).
2 - User views pages 1, 2, and 3, and the respective caches are created.
3 - User goes back to page 1 and Firefox does not even make a request to the server. Normally I would expect it to make a request and hit the cache created in step #2.
The other issue is that if a new post has been created, the cache is refreshed and the new post should be at the top of page 1, but the user does not get to see it, because the browser isn't hitting the server.
Please help!
Thanks
Update:
I tried setting expires_now in my index action. No difference; the max-age is still the same large value.
Could this be a problem with my max-age regex? I basically want it to match only asset files (images, js, css, etc.).
I think you're right that this is a problem with your max-age regex.
You're matching against this pattern: ^.+.(jpg|jpeg|gif|png|css|js|swf)?([0-9]+)?$
Because you have question marks ("this part is optional") after both parenthesized sections, the only mandatory part of the regex is that the request URI have at least two characters (.+.). In other words, it matches pretty nearly every request to your site.
This is the pattern we use: \.(js|css|jpg|jpeg|gif|png|swf)$
That will match only requests for paths ending with a dot followed by one of those seven extensions.
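For reference, a minimal sketch of how that tighter pattern might sit in nginx.conf (the expires value here is an assumption; use whatever lifetime you want for your assets):

location ~* \.(js|css|jpg|jpeg|gif|png|swf)$ {
    # Long-lived caching for static assets only; everything else falls
    # through to your other locations with no forced max-age.
    expires max;
}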
My Rails code:
def index
  battles = Battle.feed(current_user, params[:category_name], params[:future_time])
  @battles = paginate battles, per_page: 50
  # stale? computes an ETag from @battles and the current user; it returns
  # true (and renders) only when the client's cached copy is out of date.
  if stale?([@battles, current_user.id], template: false)
    render 'index'
  end
end
If I manually send the If-None-Match header with the last ETag, I get a 304 status code in return. If I don't send it manually (the header is sent automatically with the same If-None-Match value), I get a 200 status code...
I'm checking the server using the Postman REST client (cache enabled).
I cannot comment on the Rails side of things here, but this is correct behaviour if I'm following you.
When sending If-None-Match you get a 304 if the content has not changed (same ETag). This basically says that either you, or something in between such as a proxy, already has the content, so the body does not need to be transferred again.
If you omit the header then you see a 200. Postman by default will send a set of headers, but it's also pretty lean in the sense that it strips a lot away. Try the same request in your browser and you'll get a 304; you'll see your browser is set to use caching where possible.
Things may look different if you are relying on server-side caching: you may be seeing what looks like a new response, yet the server is actually doing very little and still yielding a 200 response.
To summarise: from your description, the header is doing the right job.
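If you want to reproduce both cases outside Postman, here is a rough sketch in Python (the URL is a placeholder, and it assumes the requests library is installed):

import requests

url = 'http://example.com/battles'  # placeholder endpoint

# First request carries no validator, so the server replies 200 with an ETag.
first = requests.get(url)
etag = first.headers.get('ETag')

# Replaying with If-None-Match: an unchanged resource yields a 304 with no body.
second = requests.get(url, headers={'If-None-Match': etag})
print(first.status_code, second.status_code)  # typically 200 304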
I plan on using the Amazon CloudFront CDN, and there is a URL that I need to exclude. It's a dynamic URL (it contains a query string).
For example, I want to store/cache every page on my site, except:
http://www.mypage.com/do-not-cache/?query1=true&query2=5
EDIT - Is this what invalidating 'objects' does? If so, please provide an example.
Thanks
At the risk of helping you solve the wrong problem, I should point out that if you configure CloudFront to forward query strings to the origin server, then the response will be cached against the entire URI -- that is, against the path + query string -- not against the path only, so...
/dynamic-page?r=1
/dynamic-page?r=2
/dynamic-page?r=2&foo=bar
...would be three different "pages" as far as CloudFront is concerned. It would never serve a request from the cache unless the query string were identical.
If you configure CloudFront to forward query strings to your origin, CloudFront will include the query string portion of the URL when caching the object. [...]
This is true even if your origin always returns the same [content] regardless of the query string.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
So, it shouldn't really be necessary to explicitly and deliberately avoid caching on this page. The correct behavior should be automatic, if CloudFront is configured to forward the query string.
Additionally, of course, if your origin server sets Cache-Control: private, no-cache, no-store or similar in the response headers, neither CloudFront nor the browser should cache the response.
But if you're very insistent that CloudFront be explicitly configured not to cache this page, create a new cache behavior, with the path pattern matching /do-not-cache*, and configure CloudFront to forward all of the request headers to the origin, which disables caching for page requests matching that path pattern.
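If you prefer to handle it at the origin instead, here is a minimal sketch (Flask is used purely for illustration; the framework and route are assumptions, not part of your setup):

from flask import Flask, make_response

app = Flask(__name__)

@app.route('/do-not-cache/')
def do_not_cache():
    resp = make_response('always-fresh content')
    # Tell CloudFront and browsers not to store this response.
    resp.headers['Cache-Control'] = 'private, no-cache, no-store'
    return resp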
What's the difference between reload and reloadFromOrigin in WKWebView? Apple's documentation says that reloadFromOrigin:
Reloads the current page, performing end-to-end revalidation using cache-validating conditionals if possible.
But I am not sure what that really means.
I was interested in this too. Looking at WebKit's source (Source/WebCore/loader/FrameLoader.cpp, FrameLoader::addExtraFieldsToRequest(...) around the if (loadType == FrameLoadType::Reload) conditional) it looks like the key difference is in what extra HTTP header request fields are specified.
reloadFromOrigin() sets the Cache-Control and Pragma fields to no-cache, while a simple reload() only results in the Cache-Control header field having max-age=0 set.
To work out what this means I looked at the Header Field Definitions section of the HTTP 1.1 spec. Section 14.9.4 'Cache Revalidation and Reload Controls' states:
The client can specify these three kinds of action using Cache-Control request directives:
End-to-end reload
The request includes a "no-cache" cache-control directive or, for compatibility with HTTP/1.0 clients, "Pragma: no-cache". Field names MUST NOT be included with the no-cache directive in a request. The server MUST NOT use a cached copy when responding to such a request.
Specific end-to-end revalidation
The request includes a "max-age=0" cache-control directive, which forces each cache along the path to the origin server to revalidate its own entry, if any, with the next cache or server. The initial request includes a cache-validating conditional with the client's current validator.
Unspecified end-to-end revalidation
The request includes a "max-age=0" cache-control directive, which forces each cache along the path to the origin server to revalidate its own entry, if any, with the next cache or server. The initial request does not include a cache-validating conditional; the first cache along the path (if any) that holds a cache entry for this resource includes a cache-validating conditional with its current validator.
From my reading of the spec, it looks like reload() uses only max-age=0 so may result in you getting a cached, but validated, copy of the requested data, whereas reloadFromOrigin() will force a fresh copy to be obtained from the origin server.
(This seems to contradict Apple's header/Class Reference documentation for the two functions in WKWebView. I think the descriptions for the two should be swapped over - I have lodged Bug Report/Radar 27020398 with Apple and will update this answer either way if I hear back from them...)
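If it helps to see the difference outside WebKit, the two header combinations can be mimicked with plain HTTP requests; a rough sketch in Python (the URL is a placeholder):

import requests

url = 'https://example.com/page'  # placeholder

# Roughly what reload() sends: revalidate along the path, but reuse
# cached copies that are still valid.
soft = requests.get(url, headers={'Cache-Control': 'max-age=0'})

# Roughly what reloadFromOrigin() sends: bypass every cache on the path.
hard = requests.get(url, headers={'Cache-Control': 'no-cache', 'Pragma': 'no-cache'})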
Hi, is there a way to load a full URL?
url= 'http://www.example.com/whatever.php'
$('#selector').load(url); // this way returns null (empty result)
instead of :
url = 'whatever.php'
$('#selector').load(url); // works fine
Some may think: what's the difference? I want to use this because I'm using multiple directories, so I could be on a page like...
example.com/dir/
but the dir folder will not have the whatever.php.
So does anyone have a fix for this, so that I can always use the full URL?
Thank you.
You could always use relative paths.
Putting / before the path will tell the browser to go to the root of the site. For your example you could call /whatever.php.
You can also move up one directory at a time. Let's say you are on a page at http://www.example.com/dir/foo/bar.php and want to access something in the dir folder; you could specify ../inTheDir.php to move up one directory, or ../../inTheRoot.php to move up two.
This should work for you, but based on your comment it sounds like you have a problem somewhere else, since your www. page doesn't seem to respond correctly.
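For example (paths here are hypothetical), a root-relative path resolves against the domain root no matter where the current page lives:

$('#selector').load('/whatever.php');    // always fetches example.com/whatever.php
$('#selector').load('../whatever.php');  // from example.com/dir/foo/, fetches example.com/dir/whatever.php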
No, there isn't.
If http://www.example.com/ takes longer to load than http://example.com/ then it is probably because you have the DNS record for example.com cached but not the record for www.example.com.
Corrected after realizing that a typo changed the meaning of the question:
This is a case of having a mismatch between the host name the page is loaded from and the host name the Ajaxed resource is requested from. i.e. The Same Origin Policy.
Pick a host name to be canonical, use that one in your requests, and redirect (with a 301 status code) from the other so that people don't go to the wrong one by mistake.
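For instance, a minimal Apache sketch of that canonical redirect (host names are placeholders):

RewriteEngine On
# Send bare-domain traffic to the www host with a permanent redirect,
# so pages and their Ajax requests always share one origin.
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]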
One of our websites has a URL like this: example.oursite.com. We decided to move the site to a URL like this: www.oursite.com/example. To do this, we wrote a rewrite rule on our Apache server that redirects to our new URL with a 301 code.
Many websites link to us with URLs of the form example.oursite.com/#id=23. The problem is that the redirection erases the hash part of the URL in IE. As far as I know, the hash part is never sent to the server.
I wanted to implement the redirection with JavaScript to keep the hash part, but then the search engines would not be aware that our URL changed (no 301 code returned).
I want the search engines to be notified of our new URL (301) because we need to transfer the page rank to our new URL.
Is there a way to redirect with a 301 code and keep the hash part (#id=23) of the URL?
Search engines do in fact care about hash tags; they frequently use them to highlight specific content on a page.
To the question, however: anchor locations are unfortunately not sent to the server as part of the HTTP request. If you want to redirect a user, you will need to do this in JavaScript on the client side.
Good article: http://web.archive.org/web/20090508005814/http://www.mikeduncan.com/named-anchors-are-not-sent/
Seeing as the server will never see the # (ruling out 301 redirects) and Google has deprecated its AJAX crawling scheme, it seems that a front-end solution is the only way!
How I did it:
(function() {
  // Map old hash-bang URLs to their new locations.
  var redirects = [
    ['#!/about', '/about'],
    ['#!/contact', '/contact'],
    ['#!/page-x', '/pageX']
  ];
  for (var i = 0; i < redirects.length; i++) {
    if (window.location.hash == redirects[i][0]) {
      // replace() keeps the old hash URL out of the browser history.
      window.location.replace(redirects[i][1]);
    }
  }
})();
I'm assuming that because Google crawlers do indeed execute JavaScript, the new pages will be indexed properly.
I've put it in a <script> tag directly underneath the <title> tag, so that it gets executed before any other JS/CSS. Note that this script should only be required for your index file.
I am fairly certain that the hash/page anchor/bookmark part of a URL is not indexed by search engines, and therefore has no effect on your page ranking. Doing a Google search for "inurl:#" returns zero documents, which backs up my assumption. Links from external sites will be indexed without the hash.
You are right that the hash part isn't sent to the server, so as far as I am aware there isn't a good way to create a redirection URL with the hash in it.
Because of this, it's up to the browser to correctly manage the hash during a redirect. Firefox 3.5 appears to do this successfully. If you append a hash to a URL that has a known redirect, you will see the URL change in the address bar to the new location, but the hash stays on there successfully.
Edit: In response to the comment below, if there isn't a hash sign in the external URL for the part you need, then it is entirely possible to rewrite the URL. An Apache rewrite rule would take care of it:
# Redirect requests that arrive on the old host to the new path, with a 301.
RewriteCond %{HTTP_HOST} ^exemple\.oursite\.com [NC]
RewriteRule ^/(.*) http://www.oursite.com/exemple/$1 [L,R=301]
If you're not using Apache, then you'll have to look into the server docs for something similar.
Google has a special syntax for AJAX applications that is based on hash URLs: http://code.google.com/web/ajaxcrawling/docs/getting-started.html
You could create a page on the old address that catches all requests and redirects to the new site with the correct address and code.
I did something like that, but it was in ASP.NET, which I guess is not the language you use. Anyway, there should be a way to do this in any language.
When returning status 301, your server is supposed to return a 'Location:' header which points to the new location. In practice, the way this is implemented varies; some servers provide the full URL (netloc and path), some just provide the new path and expect the browser to look for that path on the original netloc. It sounds like your rewrite rule is stripping the path.
An easy way to see what the returned Location header is, in the python shell:
>>> import http.client
>>> conn = http.client.HTTPConnection('exemple.oursite.com')
>>> conn.request('HEAD', '/')
>>> res = conn.getresponse()
>>> print(res.getheader('location'))
I'm afraid I don't know enough about mod_rewrite to tell you how to do the rewrite rule correctly, but this should give you an idea of what your server is actually telling clients to do.
The search bots don't care about hash tags. And if you are using them for some kind of Flash or AJAX calls, you have more serious problems than your 301 redirects not working: unless you have the content in an alternate form, the search engines are not indexing your site, and you are definitely suffering as far as SEO goes.
I registered my account, so I can't edit.
zombat: I'm sorry, I made a mistake in my comment. The link to our video is exemple.oursite.com/#video_id=233. In this case, my rewrite rule in Apache doesn't work.
Nick Berardi: We changed the way our links work. We don't use # anymore, except for backward compatibility.