We are trying to create a WKWebView in the background on application launch so that we can improve the speed and load times of WKWebViews when the user accesses a feature/URL later on. Basically, we are trying to pre-fill the local cache with a few larger CSS and JS files so that later we get them from the cache, improving speed.
Even though we carefully use the same WKProcessPool for both the invisible pre-load WKWebView and the later WKWebViews, we still miss the cache when subsequently accessing a real web page in our app. The pre-load WKWebView is definitely downloading the resources (we are checking traffic via a web proxy), and the resources have the correct "Cache-Control: max-age=" HTTP header. The pre-load WKWebView seems to operate normally and downloads everything ... we just can't get later WKWebViews to locate anything in the cache. Both the dummy pre-load endpoint and the application URLs are on the same domain (www.domain.com).
Anyone else have success pre-fetching in this manner? Is there some key we are missing in configuring/creating WKWebViews and how they use a common cache?
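For reference, here is a stripped-down sketch of what we are doing; the WebViewPrewarmer name and the warm-up URL are simplified placeholders for our real code:

    import WebKit

    final class WebViewPrewarmer {
        // One shared process pool, reused by every WKWebView we create.
        static let sharedPool = WKProcessPool()

        private var preloadWebView: WKWebView?

        // Called from application(_:didFinishLaunchingWithOptions:).
        func prewarm() {
            let config = WKWebViewConfiguration()
            config.processPool = WebViewPrewarmer.sharedPool

            // Off-screen web view; never added to the view hierarchy.
            let webView = WKWebView(frame: .zero, configuration: config)
            webView.load(URLRequest(url: URL(string: "https://www.domain.com/preload")!))
            preloadWebView = webView // keep a strong reference or the load is cancelled
        }

        // Later, visible web views are created against the same pool.
        func makeWebView() -> WKWebView {
            let config = WKWebViewConfiguration()
            config.processPool = WebViewPrewarmer.sharedPool
            return WKWebView(frame: .zero, configuration: config)
        }
    }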
Related
I have a web app that wants to load bootstrap.min.js
It's on these two CDNs (among others):
https://cdnjs.cloudflare.com/ajax/libs/twitter-bootstrap/4.3.1/js/bootstrap.min.js
https://stackpath.bootstrapcdn.com/bootstrap/4.3.1/js/bootstrap.min.js
The odds of a cache hit from some other app using these CDNs is relatively high.
How can I tell the browser to check if they are cached and load from browser cache?
Can a service worker do this?
I believe that there are privacy/security restrictions in place that make it difficult to determine, using JavaScript, whether a third-party URL is present in the browser's cache.
Adding a service worker into the mix will not get around those restrictions.
It's possible to use the Fetch API to create a Request whose cache option is 'only-if-cached', which will behave more or less in the way you describe, but that will only work if the request's mode is 'same-origin'. In other words, it only works if the Request is for a first-party URL, not a third-party CDN URL as in your example.
In my app users can open a website. The website is heavy with content (20 MB on average). A lot of my users don't have access to the internet at all times, so it's essential that we save any content they open (at the same time they view it), and then the next time they open it, it's loaded completely from disk.
Here's what I'm thinking:
1) Use UIWebView and cache everything. That's working, but UIWebView is deprecated.
2) Use WKWebView, but I cannot cache everything :( I'm not even sure I can intercept all the requests and responses.
3) Use a proxy server inside the app and save all data as it's being loaded from the remote server. I have no idea how to start this.
Basically, I want something similar to https://github.com/evermeer/EVURLCache but for WKWebView.
The question:
Which approach do you recommend? And how can I get around the downsides mentioned?
With WKWebView you can use URLRequests to load your website data and cache it. A URLRequest's cache policy can be configured during initialization:
    init(url URL: URL,
         cachePolicy: NSURLRequest.CachePolicy,
         timeoutInterval: TimeInterval)
You could try going forward with returnCacheDataElseLoad. However, this probably would not cover your particular case entirely, even though the cache would be consulted first. Have a look at this answer regarding application cache and WKWebView.
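As a minimal sketch of that approach (the URL is a placeholder):

    import WebKit

    // Load a page, preferring whatever URLCache already holds.
    let url = URL(string: "https://www.example.com/heavy-page")!
    let request = URLRequest(url: url,
                             cachePolicy: .returnCacheDataElseLoad,
                             timeoutInterval: 30)

    let webView = WKWebView(frame: .zero, configuration: WKWebViewConfiguration())
    webView.load(request)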
I've just come across something that has entirely changed my mental image of URLSession caching in iOS.
We were hitting an endpoint that only ever got hit once.
Restarting the app wouldn't hit the endpoint again.
Deleting the app would cause it to hit the endpoint again... but only once.
The header of the response contains...
Cache-Control: public, max-age=1800
So it comes down to caching. By manually telling the URLSession to ignore the cache, it would hit the endpoint again.
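For reference, "manually ignoring the cache" here means something along these lines (the endpoint URL is a placeholder):

    import Foundation

    var request = URLRequest(url: URL(string: "https://api.example.com/endpoint")!)
    request.cachePolicy = .reloadIgnoringLocalCacheData // always go to the network

    URLSession.shared.dataTask(with: request) { data, response, error in
        // With this policy the request is sent over the wire every time.
        print(response ?? error ?? "no response")
    }.resume()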
The docs show the caching policy and how it works as a workflow diagram:
https://developer.apple.com/documentation/foundation/nsurlrequestcachepolicy/nsurlrequestuseprotocolcachepolicy
But where is the cached data stored once the app is terminated? Surely the app and everything to do with it is removed from memory?
URLSession uses URLCache for its caching system, and it's used for all network resources. You can access it directly or set your own through URLSessionConfiguration. The underlying storage for URLCache is on the file system rather than only in memory, which is why cached responses survive app termination. There is also a way to manage the cache yourself. Say, for instance, your responses should be encrypted on the device. Slightly bad example, but you get the point. ;)
Here's an article on how to manage the cache programmatically if you need more control over caching.
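As a quick, illustrative sketch of installing your own cache through URLSessionConfiguration (the capacities and path are arbitrary example values):

    import Foundation

    let cache = URLCache(memoryCapacity: 20 * 1024 * 1024,  // 20 MB in RAM
                         diskCapacity: 100 * 1024 * 1024,   // 100 MB on disk
                         diskPath: "myAppCache")            // subdirectory under Caches/

    let configuration = URLSessionConfiguration.default
    configuration.urlCache = cache
    configuration.requestCachePolicy = .useProtocolCachePolicy

    let session = URLSession(configuration: configuration)
    // Responses fetched through `session` are now written to disk under
    // "myAppCache", which is why they outlive the app process.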
Whenever I have an issue with a website, one of the first suggestions I will hear is "try to clear your browser cache" along with "delete your cookies". So what is this browser cache? What does it store and what is it good for?
I have googled but didn't find a proper answer. I'd appreciate it if anyone could help with this.
A browser cache "caches" (as in, keeps local copies of) data downloaded from the internet. The next time your browser needs the same data, it can get it from the cache (fast) instead of downloading it over the internet (slow).
The problem is that data can be old. For example, imagine the browser cached www.nytimes.com today and 24 hours later you visited www.nytimes.com again. If the browser loaded the cached data, it would be old news.
So there are headers (metadata) that servers send to browsers telling them how long they should cache something (if at all).
What the browser generally caches are the responses to "requests". In other words, if your browser asks for "http://foo.com/bar.html" for the first time, the browser "requests" that "foo.com" send it "bar.html". If the headers from "foo.com" are set a certain way, the browser will then save a local copy of "bar.html". If you request the same thing again, the browser may load "bar.html" from its cache. I say "may" because it depends on the headers sent from the server. The server can say how long to cache (say 10 minutes, 10 hours, 10 days, etc.), or it can say "don't cache this at all, always download the newest version".
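If you'd rather check this from code than from dev tools, here is a small illustrative Swift sketch (reusing the hypothetical URL above) that prints the server's caching instructions and whether a local copy now exists:

    import Foundation

    let url = URL(string: "http://foo.com/bar.html")!
    let request = URLRequest(url: url)

    URLSession.shared.dataTask(with: request) { _, response, _ in
        if let http = response as? HTTPURLResponse {
            // e.g. "max-age=600" = keep for 10 minutes; "no-store" = never cache
            print("Cache-Control:", http.allHeaderFields["Cache-Control"] ?? "none")
        }
        // Did the shared cache keep a local copy?
        print("Cached locally:", URLCache.shared.cachedResponse(for: request) != nil)
    }.resume()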
If you open your browser's dev tools (Chrome, for example) and look at the Network tab (the name may differ in other browsers), then load the page again, you can see all the requests. You'll also notice which ones were loaded from the cache.
If you click on a request, you can see the metadata from both the browser (request headers) and the server (response headers).
The reason clearing the cache often fixes things is that the server, for some reason (a bug?), said it was OK to cache something, but the data on the server has actually been updated. The browser, doing what the server told it to do, is using its copy from the cache, not the newer version that is actually needed. There might also, from time to time, be bugs in the browser itself related to caching.
When everything is working correctly it's great, but if one thing or another is misconfigured or sending the wrong headers, the browser can end up loading old data from the cache instead of downloading the newest data. Clearing your cache effectively forces the browser to download the data again.
You can find out the details of what the various headers do here.
Browser caches are not mere rubbish bins but a mechanism to speed up the way we browse the web. Each website we visit has certain common elements such as logos, navigation buttons, GIF animation files, script files, etc. It doesn't make sense for the browser to re-download each element (also commonly called Temporary Internet Files) as we hop from one page to another and back.
The page elements are downloaded when we first visit a website, and the browser checks its cache folder for copies as we browse the site. If a copy exists, the browser doesn't download the same file again, which significantly speeds up browsing.
For more info:
http://www.guidingtech.com/8925/what-are-browser-cache-cookies-does-clearing-them-help/
https://en.wikipedia.org/wiki/Cache_(computing)
This is the first result in Google and the proper answer, but I will summarize =]
1) What is Browser Cache?
A cache is a component that stores data so future requests for that data can be served faster; the data stored in a cache might be the result of an earlier computation, or a duplicate of data stored elsewhere.
2) What does it store?
Web browsers and web proxy servers employ web caches to store previous responses from web servers, such as web pages and images.
3) What is it good for?
Web caches reduce the amount of information that needs to be transmitted across the network, as information previously stored in the cache can often be re-used. This reduces bandwidth and processing requirements of the web server, and helps to improve responsiveness for users of the web.
I have a rails application where I would like to use both memcached and the file store cache, for different purposes.
I want to use the file store cache to keep a large number of pages that don't change often (some not at all) - i.e. page caching - and use memcached for everything else (action and DB caching etc.). The reason is that the pages stored in the file store cache are likely to require a large amount of storage, but individually most will be accessed infrequently.
Is this possible to do or will configuring memcached as the cache mean that it is also used for page caching?
As a secondary question, what is a safe way to remove pages from the file store cache in some form of cron job, given there does not seem to be an option to specify a TTL for this cache? For example, a UNIX find command would quickly find and remove all old pages, or pages that haven't been accessed in a long time. Is this safe to do, given the app server might potentially try to serve one of those pages at that moment (though this is very unlikely)? If not, what is the best way to do this?
If you want to use the filesystem only for page caching and memcached for action and fragment caching, you're fine. Page caching always uses the filesystem. Just remember that page caching bypasses your Rails application, so you can't use it for pages that include content that changes from user to user or for pages that are access controlled with filters.
Regarding the removal of pages: on Unix, a file can be deleted, but it is not actually removed from disk until all open file handles are closed. If the app server has opened the file to serve a request and the find command deletes it a split second later, the app server doesn't suddenly get an error when it tries to read it.
You could also consider having find delete files based on their last access time, instead of creation or modification, and using a sweeper in your Rails app to delete the cached page when its content is out of date.
A simpler approach may be to use an HTTP cache upstream of your application as your page cache, rather than two stores within Rails. This way you can use HTTP headers to control the cache behavior, including TTLs. These same limits will also apply to browsers' local caches as a nice bonus.
Varnish is about as high-performance as it gets, but it would require setting up another moving piece in your hosting environment as a proxy. That may still be worthwhile, depending on what you're doing.
A simpler option might be Rack::Cache, which is easy to set up provided you're using a Rack-enabled version of Rails.