What's the difference between reload and reloadFromOrigin in WKWebView? Apple's documentation says that reloadFromOrigin:
Reloads the current page, performing end-to-end revalidation using
cache-validating conditionals if possible.
But I am not sure what that really means.
I was interested in this too. Looking at WebKit's source (Source/WebCore/loader/FrameLoader.cpp, FrameLoader::addExtraFieldsToRequest(...), around the if (loadType == FrameLoadType::Reload) conditional), it looks like the key difference is in what extra HTTP request header fields are specified.
reloadFromOrigin() sets the Cache-Control and Pragma header fields to no-cache, while a simple reload() only results in the Cache-Control header field being set to max-age=0.
To work out what this means I looked at the Header Field Definitions section of the HTTP 1.1 spec. Section 14.9.4 'Cache Revalidation and Reload Controls' states:
The client can specify these three kinds of action using Cache-Control request directives:
End-to-end reload
The request includes a "no-cache" cache-control directive or, for compatibility with HTTP/1.0 clients, "Pragma: no-cache". Field names MUST NOT be included with the no-cache directive in a request. The server MUST NOT use a cached copy when responding to such a request.
Specific end-to-end revalidation
The request includes a "max-age=0" cache-control directive, which forces each cache along the path to the origin server to revalidate its own entry, if any, with the next cache or server. The initial request includes a cache-validating conditional with the client's current validator.
Unspecified end-to-end revalidation
The request includes a "max-age=0" cache-control directive, which forces each cache along the path to the origin server to revalidate its own entry, if any, with the next cache or server. The initial request does not include a cache-validating conditional; the first cache along the path (if any) that holds a cache entry for this resource includes a cache-validating conditional with its current validator.
From my reading of the spec, it looks like reload() uses only max-age=0 so may result in you getting a cached, but validated, copy of the requested data, whereas reloadFromOrigin() will force a fresh copy to be obtained from the origin server.
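To make that distinction concrete, here is a toy sketch of the two behaviors in Python. This is my own illustration, not WebKit code: the ToyCache class, the origin callable, and the "v1" validator are all made-up names, and real caches obey many more rules than this.

```python
# A toy model (my own illustration, not WebKit's actual logic) of how a
# cache treats the two request directives from RFC 2616 section 14.9.4.

class ToyCache:
    def __init__(self, origin):
        self.origin = origin      # callable taking a validator (etag or None)
        self.entry = None         # the cached (etag, body) pair, if any

    def fetch(self, cache_control):
        if cache_control == "no-cache" or self.entry is None:
            # End-to-end reload (or cold cache): never reuse the cached
            # copy; fetch unconditionally from the origin.
            status, body = self.origin(None)
            self.entry = ("v1", body)
            return status, body
        # "max-age=0": end-to-end revalidation. Send a conditional
        # request carrying our stored validator.
        etag, body = self.entry
        status, new_body = self.origin(etag)
        if status == 304:         # origin says the cached copy is still valid
            return 200, body
        self.entry = (etag, new_body)
        return status, new_body

validators_seen = []

def origin(etag):
    # A stand-in origin server that honors conditional requests via the etag.
    validators_seen.append(etag)
    return (304, None) if etag == "v1" else (200, "payload")
```

With max-age=0 the second fetch goes to the origin only to revalidate (and can be answered with a cheap 304), whereas no-cache always transfers the full response, which matches the "fresh copy from the origin server" reading above.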
(This seems to contradict Apple's header/Class Reference documentation for the two functions in WKWebView. I think the descriptions for the two should be swapped over - I have lodged Bug Report/Radar 27020398 with Apple and will update this answer either way if I hear back from them...)
Apparently I was under the misconception that the GET and POST methods differ in that GET puts the query parameters in plaintext as part of the URL, while POST puts them in the URL in some encoded (encrypted) form.
However, I realize that this was a grave misconception. And after going through:
https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9
and after writing a simple socket server in Python, sending it both GET and POST requests (through form submission), and printing the request on the server side,
I got to know that only in GET are the parameters in the URL; in POST the parameters are in the request body.
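For anyone repeating the experiment, here is a minimal sketch of the parsing side. The parse_request function and the hand-written raw requests are my own illustrations, not a complete HTTP parser:

```python
# Minimal parser for a raw HTTP/1.1 request, illustrating where the
# parameters travel: in the request line for GET, in the body for POST.
from urllib.parse import parse_qs, urlsplit

def parse_request(raw: bytes):
    head, _, body = raw.partition(b"\r\n\r\n")
    request_line = head.split(b"\r\n")[0].decode()
    method, target, _ = request_line.split(" ")
    if method == "GET":
        # Parameters arrive in the request-line target's query string.
        params = parse_qs(urlsplit(target).query)
    else:
        # For a form POST, parameters arrive URL-encoded in the body.
        params = parse_qs(body.decode())
    return method, params

get_req = b"GET /a.page?param=value HTTP/1.1\r\nHost: myhost.com\r\n\r\n"
post_req = (b"POST /a.page HTTP/1.1\r\nHost: myhost.com\r\n"
            b"Content-Type: application/x-www-form-urlencoded\r\n"
            b"Content-Length: 11\r\n\r\nparam=value")
```

Both requests carry param=value as plain application/x-www-form-urlencoded text; the only difference is which part of the message it sits in.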
I went through the following question as well so as to see if there is any difference in sending a GET and POST at lower level (C-Level) :
Simple C example of doing an HTTP POST and consuming the response
As in the question above, I saw that there is no special encryption being applied to the POST request.
As such I would like to confirm the following :
1. The insecurities associated with GET and POST exist only because the GET method attaches the parameters to the URL.
2. For somebody who can read the whole request, both the GET and POST methods are equally vulnerable.
3. Over the network, both the GET and POST requests are sent with the same degree of encryption applied to them.
Looking forward to comments and explanations.
Yes. The server only gets to know about the URL the user entered or clicked on because it's sent as the data of the request, after (transport) security has been negotiated, so it's not inherently insecure:
you type into a browser: https://myhost.com/a.page?param=value
browser does DNS lookup of myhost.com
browser connects to https port 443 of retrieved ip
browser negotiates security, possibly including myhost.com if the server is using SNI certificates
connection is now encrypted, browser sends request data over the link:
GET /a.page?param=value HTTP/1.1
Host: myhost.com
(other headers)
//Probably no body data
---- or ----
POST /a.page HTTP/1.1
Host: myhost.com
(other headers)
param=value //body data
You can see it's all just data sent over an encrypted connection; the headers and the body are separated by a blank line. A GET doesn't have to have a body, but it is not prevented from having one, and a POST usually does have one. The point I'm making is that the data sent (param=value, the stuff the user typed in, potentially sensitive info) that is relevant to the request is included somewhere in the request, either in the request line and headers or in the body, but all of it is encrypted.
The only real difference from a security perspective is that the browser history tends to retain the full URL, so in the case of a GET request it would show param=value in the history to the next person reading it. The data in transit is secure for either GET or POST, but the tendency to put sensitive data in a POST centres on the "data at rest" concept in the context of the client browser's history. If the browser kept no history (and the address bar didn't show the parameters to shoulder surfers), either method would be approximately equivalent to the other.
Securing the connection between browser and server is quite simple and then means the existing send/receive data facilities all work without individual attention, but it's by no means the only way of securing the connection. It would be conceivably possible not to have the transport do it, but instead for the server to send a piece of JavaScript and the public part of a public/private key pair somewhere on the page; then every request the page (that is, the script it causes the browser to run) makes could have its data individually encrypted. Even though an interim observer could see most of the request, the data would be secured that way: it is only decryptable by the server, because the server retains the private part of the key pair.
I plan on using Amazon Cloudfront CDN, and there is a URL that I need to exclude. It's a dynamic URL (contains a querystring).
For example, I want to store/cache every page on my site, except:
http://www.mypage.com/do-not-cache/?query1=true&query2=5
EDIT - Is this what invalidating 'objects' does? If so, please provide an example.
Thanks
At the risk of helping you solve the wrong problem, I should point out that if you configure CloudFront to forward query strings to the origin server, then the response will be cached against the entire URI -- that is, against the path + query-string -- not against the path, only, so...
/dynamic-page?r=1
/dynamic-page?r=2
/dynamic-page?r=2&foo=bar
...would be three different "pages" as far as CloudFront is concerned. It would never serve a request from the cache unless the query string were identical.
If you configure CloudFront to forward query strings to your origin, CloudFront will include the query string portion of the URL when caching the object. [...]
This is true even if your origin always returns the same [content] regardless of the query string.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
So, it shouldn't really be necessary to explicitly and deliberately avoid caching on this page. The correct behavior should be automatic, if CloudFront is configured to forward the query string.
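As a rough mental model (an assumption about the observable behavior, not CloudFront's actual implementation), the cache key with query-string forwarding enabled behaves like path plus full query string:

```python
# A simplified model of CloudFront's cache key: with query-string
# forwarding enabled, the entire path + query identifies the cached
# object; without it, only the path does. (Illustration only.)

def cache_key(uri: str, forward_query: bool) -> str:
    path, _, query = uri.partition("?")
    return f"{path}?{query}" if forward_query and query else path

uris = ["/dynamic-page?r=1", "/dynamic-page?r=2", "/dynamic-page?r=2&foo=bar"]
```

Under this model the three example URIs produce three distinct keys when forwarding is on, so a cached copy is only ever served for an identical query string, which is why the page effectively caches itself correctly.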
Additionally, of course, if your origin server sets Cache-Control: private, no-cache, no-store or similar in the response headers, neither CloudFront nor the browser should cache the response.
But if you're very insistent that CloudFront be explicitly configured not to cache this page, create a new cache behavior, with the path pattern matching /do-not-cache*, and configure CloudFront to forward all of the request headers to the origin, which disables caching for page requests matching that path pattern.
I have code that is cached this way:
[OutputCache(Duration = 3600, Location = OutputCacheLocation.Client)]
Now, I don't exactly know how this output cache works. Where exactly does it keep the copy of a page? And what are the differences between OutputCacheLocation.Client and OutputCacheLocation.Browser?
Where exactly does it keep the copy of a page?
The location of where the cache is stored is determined by Location property of the OutputCacheAttribute. In your case you set Location=OutputCacheLocation.Client so it will keep the cache on the client browser.
And what are the differences between OutputCacheLocation.Client and
OutputCacheLocation.Browser?
OutputCacheLocation.Browser doesn't exist. It's an invalid value. The documentation of the OutputCacheLocation enumeration type contains the possible values along with a description of each one's usage:
Any - The output cache can be located on the browser client (where the request originated), on a proxy server (or any other server) participating in the request, or on the server where the request was processed. This value corresponds to the HttpCacheability.Public enumeration value.

Client - The output cache is located on the browser client where the request originated. This value corresponds to the HttpCacheability.Private enumeration value.

Downstream - The output cache can be stored in any HTTP 1.1 cache-capable devices other than the origin server. This includes proxy servers and the client that made the request.

Server - The output cache is located on the Web server where the request was processed. This value corresponds to the HttpCacheability.Server enumeration value.

None - The output cache is disabled for the requested page. This value corresponds to the HttpCacheability.NoCache enumeration value.

ServerAndClient - The output cache can be stored only at the origin server or at the requesting client. Proxy servers are not allowed to cache the response. This value corresponds to the combination of the HttpCacheability.Private and HttpCacheability.Server enumeration values.
I'm experiencing an odd situation where I have some JavaScript code running in a UIWebView. That JS code creates and fires off an XMLHttpRequest to pull some data off a server.
I listen for onreadystatechange and I have an event listener for the "progress" event.
For some reason this XMLHttpRequest returns only a subset of the response headers when queried using request.getAllResponseHeaders() or request.getResponseHeader('whatever'). Note that I've tried GETting any number of different URLs and they all exhibit the same partial lack of headers.
The only headers returned are Pragma, Cache-Control, Expires & Content-Type; nothing else is returned (Content-Length, encoding, various custom headers, etc.).
The odd thing is that the same JS code executed in Mobile Safari will show all the headers, in stark contrast to how that code behaves when inside a UIWebView.
Any idea what's going on here?
I notice that in my production environment (where I have memcached implemented) I see a Cache-Control max-age header in Firebug any time I am looking at an index page (posts, for example).
Cache-Control: max-age=315360000
In my dev environment that header looks like the following.
Cache-Control: private, max-age=0, must-revalidate
As far as I know I have not done anything special with my nginx.conf file to specify max-age for regular content; I do have expires max set for css, jpg, etc. Here is my nginx.conf file:
http://pastie.org/1167080
So why is this Cache-Control being set? How can I control this Cache-Control? The side effect of this is kinda bad. This is what happens:
1 - User requests the all_posts listing and gets a list of 10 pages (paginated).
2 - User views pages 1, 2, and 3, and the respective caches are created.
3 - User goes back to page 1 and Firefox does not even make a request to the server. Normally I would expect it to make the request and hit the cache created in step #2.
The other issue is that if a new post has been created, the cache is refreshed and the post should be at the top of page 1, but the user does not get to see it, because the browser isn't hitting the server.
Please help!
Thanks
Update :
I tried setting expires_now in my index action. No difference; the max-age is still the same large value.
Could this be a problem with my max-age regex? I basically want it to match only asset files(images, js, css etc)
I think you're right that this is a problem with your max-age regex.
You're matching against this pattern: ^.+.(jpg|jpeg|gif|png|css|js|swf)?([0-9]+)?$
Because you have question marks ("this part is optional") after both parenthesized sections, the only mandatory part of the regex is that the request URI have at least two characters (.+. , where the unescaped dot matches any character, not just a literal period). In other words, it's matching pretty near every request to your site.
This is the pattern we use: \.(js|css|jpg|jpeg|gif|png|swf)$
That will match only requests for paths ending with a dot and then one of those seven patterns.
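You can check the two patterns side by side with a quick sketch. Python's re is used here purely to illustrate the matching behavior (nginx uses PCRE, but these particular patterns behave the same way); search is used to mimic nginx's unanchored regex-location matching:

```python
# Side-by-side comparison of the broken pattern (optional groups make
# almost everything match) and the corrected asset-only pattern.
import re

old = re.compile(r"^.+.(jpg|jpeg|gif|png|css|js|swf)?([0-9]+)?$")
fixed = re.compile(r"\.(js|css|jpg|jpeg|gif|png|swf)$")

def old_matches(path):
    # Broken behavior: both groups are optional, so any URI of two or
    # more characters matches.
    return old.search(path) is not None

def fixed_matches(path):
    # Fixed behavior: only paths ending in ".<asset extension>" match.
    return fixed.search(path) is not None
```

With the old pattern, an HTML page like /posts gets the long max-age header just like /logo.png does; with the fixed pattern, only the asset paths match, so regular pages keep their default Cache-Control.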