Workbox: clean up cache - service-worker

It doesn't seem like workbox cleans up old caches. For example, if I specify a cache version like this:
var version = 'v2';
workbox.core.setCacheNameDetails({
  suffix: version
});
...I would have expected Workbox to clean up older versions of the cache once the new service worker activates, but my cache storage still contains both the old and the new caches.
Is it safe to manually clean up caches myself? For example in my service worker:
self.addEventListener('activate', function(event) {
  event.waitUntil(
    caches
      .keys()
      .then(keys => keys.filter(key => !key.endsWith(version)))
      .then(keys => Promise.all(keys.map(key => caches.delete(key))))
  );
});

You are changing the suffix property to a string that you are using as a version, but Workbox only uses it to name the bucket it caches into.
From the Workbox docs:
The main use case for the prefix and suffix is that if you use Workbox for multiple projects and use the same localhost for each project, setting a custom prefix for each module will prevent the caches from conflicting with each other.
Workbox doesn't treat x-v2 as the replacement for x-v1.
Your manual cache eviction is safe to use, since Workbox will no longer touch the previously named caches.
However, you shouldn't need suffix to version your assets. Workbox has a number of tools to make sure assets are updated correctly, and your suffix approach will always start out with a fresh cache and re-download everything.
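If you do keep the manual cleanup, the key-filtering step can be pulled out as a pure function and unit-tested outside the service worker. A minimal sketch, assuming cache names end with the version suffix (cacheNamesToDelete is a hypothetical helper name, not a Workbox API):

```javascript
// Decide which caches to delete: everything that does not end with the
// current version suffix. Note this matches EVERY cache on the origin,
// including non-Workbox caches, so name your caches consistently.
function cacheNamesToDelete(allKeys, version) {
  return allKeys.filter(key => !key.endsWith(version));
}

// In the service worker this slots into the activate handler:
// event.waitUntil(
//   caches.keys()
//     .then(keys => cacheNamesToDelete(keys, version))
//     .then(keys => Promise.all(keys.map(key => caches.delete(key))))
// );

const keys = ['workbox-precache-v1', 'workbox-runtime-v1', 'workbox-precache-v2'];
console.log(cacheNamesToDelete(keys, 'v2')); // ['workbox-precache-v1', 'workbox-runtime-v1']
```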
precache
Precaching stores a revision for each asset, so when assets change and you generate and deploy a new build, the changed assets are updated and the unchanged assets are left alone.
strategies
Strategies are where most of the work is done. When you define routes, you choose the caching strategy that fits best for that type of asset. staleWhileRevalidate is a great approach: the on-device cache is used, but Workbox also hits the network in parallel and checks whether that resource has been updated.
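The stale-while-revalidate idea can be illustrated with a small synchronous simulation. This is not the workbox-strategies implementation: a Map stands in for the Cache Storage API and fetchFn stands in for the network, both hypothetical stand-ins:

```javascript
// Simplified stale-while-revalidate: serve from cache when possible,
// and refresh the cache from the "network" on every request.
function staleWhileRevalidate(cache, key, fetchFn) {
  const cached = cache.get(key);   // cache lookup
  const fresh = fetchFn(key);      // in a real SW this runs in parallel
  cache.set(key, fresh);           // revalidate: store the fresh copy
  return cached !== undefined ? cached : fresh; // serve stale if we have it
}

const cache = new Map();
const network = key => `body-of-${key}`;
staleWhileRevalidate(cache, '/app.js', network); // first hit comes from the network
// The cache is now warm: later hits are answered from cache while the
// cache is refreshed in the background.
```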
expiration
You can also make sure that old assets are purged once they become older than the defined expiration length.
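The expiration behaviour amounts to a purge rule over timestamped cache entries. A sketch mirroring the maxAgeSeconds idea from workbox-expiration (expiredUrls is a hypothetical helper, not a Workbox export):

```javascript
// Given entries of {url, timestamp} (ms since epoch), return the URLs
// that have outlived maxAgeSeconds and should be purged.
function expiredUrls(entries, maxAgeSeconds, now) {
  const cutoff = now - maxAgeSeconds * 1000;
  return entries.filter(e => e.timestamp < cutoff).map(e => e.url);
}

const now = Date.now();
const entries = [
  { url: '/old.css',  timestamp: now - 8 * 24 * 3600 * 1000 }, // 8 days old
  { url: '/fresh.js', timestamp: now - 3600 * 1000 },          // 1 hour old
];
console.log(expiredUrls(entries, 7 * 24 * 3600, now)); // ['/old.css']
```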

Related

Workbox's service worker not updating when changed

Explanation
I'm having an issue with Workbox where my website doesn't update when a file's content is changed, unless I manually clear storage/site data in my browser.
Since the v4 release this year, cleanupOutdatedCaches, which is in my code, should take care of this, but the problem persists.
Example
I created this website to exemplify. Once you access it, Workbox will install the service worker, but if I change, for example, test1 to test2, you won't see the change, unless you clear the site data in your browser and refresh.
I also tried just unregistering the service worker; it then shows the updated version (test2), but after refreshing twice it goes back to the old version (test1).
You can see the website's code in GitHub here.
Thanks in advance,
Luiz.
cleanupOutdatedCaches will only clean caches created by older versions of the Workbox library. In this case, since you are using the same version of Workbox, the call to this method does nothing.
https://developers.google.com/web/tools/workbox/reference-docs/latest/workbox.precaching#.cleanupOutdatedCaches
Once a particular file is precached by Workbox, it will never attempt to retrieve that file from the network unless the revision you specified in the precacheAndRoute call differs from what was previously cached.
Since you changed index.html but not its revision in precacheAndRoute, Workbox assumes the file is unchanged. So, what you need to do is update the precacheAndRoute entry with a new hash that corresponds to the new version of index.html.
You can achieve this by either using injectManifest
https://developers.google.com/web/tools/workbox/modules/workbox-build
or any other build tooling you use.
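The revision check described above amounts to diffing the old and new precache manifests. A sketch of that logic (urlsToRefetch is a hypothetical helper, not a Workbox export; the manifest shape matches precacheAndRoute's input):

```javascript
// Each manifest entry is {url, revision}. A URL is re-fetched only when
// it is new or its revision changed; unchanged entries are left alone.
function urlsToRefetch(oldManifest, newManifest) {
  const old = new Map(oldManifest.map(e => [e.url, e.revision]));
  return newManifest
    .filter(e => old.get(e.url) !== e.revision)
    .map(e => e.url);
}

const before = [{ url: '/index.html', revision: 'abc111' }];
const after  = [{ url: '/index.html', revision: 'def222' },
                { url: '/app.js',     revision: 'aaa333' }];
console.log(urlsToRefetch(before, after)); // ['/index.html', '/app.js']
```

This is why regenerating the manifest on every build (e.g. with injectManifest) matters: without a new revision hash, the diff is empty and the stale copy keeps being served.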
Edit:
You can invoke skipWaiting programmatically as well
https://developers.google.com/web/tools/workbox/modules/workbox-core#skip_waiting_and_clients_claim
But you do need to use it with caution. Here is one way to do it:
https://developers.google.com/web/tools/workbox/guides/advanced-recipes

How to configure Service Worker with React to prevent Uncaught Syntax Error on new deployments at AWS CloudFront

I've been reading SO posts and Github issues for the past few days on this topic and I still can't seem to find a combination that works for my setup. Hoping that someone can point me to a specific solution. Here's what's going on.
I'm using create-react-app v2 with sw-precache. Builds go through CircleCI to push new files to S3, which are in turn served from CloudFront. The final step of the build is to invalidate the previous CloudFront distribution.
The problem I'm running into is when a new build is deployed and the service worker is updated, the next page load throws an error because the bundle files that are served by the service worker no longer exist. To say that again, I can load the site at www.example.com but if I then open a new tab and load www.example.com/test I get the Uncaught Syntax Error.
A few notes about my setup.
I created a behavior in CloudFront to set Minimum and Maximum TTL for service-worker.js to 0.
For my sw-precache-config.js, here's my setup:
module.exports = {
  staticFileGlobs: [
    'build/index.html',
    'build/static/css/**.css',
    'build/static/js/**.js'
  ],
  swFilePath: './build/service-worker.js',
  stripPrefix: 'build/',
  handleFetch: false,
  runtimeCaching: [...]
};
When I load the first page after a build, for the index.html I see the following response: Status Code: 200 OK (from ServiceWorker)
My assumption is that I don't want index.html to be cached by the service worker. But if I do that, won't I lose my offline capabilities? Should I set up a runtimeCaching entry for index.html with networkFirst, so that I always ask the network but fall back to the cache when offline? If so, would that look like this:
runtimeCaching: [{
  urlPattern: '/index.html',
  handler: 'networkFirst'
}]
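The networkFirst behaviour that config asks for can be sketched as a synchronous simulation; this is not the sw-precache implementation, and the Map cache and fetchFn network are hypothetical stand-ins:

```javascript
// Simplified network-first: try the "network" first and cache the result;
// if the network throws (offline), fall back to the cached copy.
function networkFirst(cache, key, fetchFn) {
  try {
    const fresh = fetchFn(key);
    cache.set(key, fresh);
    return fresh;
  } catch (err) {
    const cached = cache.get(key);
    if (cached === undefined) throw err; // offline and nothing cached
    return cached;
  }
}

const cache = new Map();
const online  = key => '<html>build-42</html>';
const offline = key => { throw new Error('offline'); };
networkFirst(cache, '/index.html', online);  // fills the cache
networkFirst(cache, '/index.html', offline); // falls back to the cached copy
```

With this handler, index.html always tracks the latest deployment when online, while the cached copy keeps the site working offline.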
Some of the things I've tried that haven't worked include showing users a message asking them to reload when this happens (it technically works, but the user experience is not optimal). I also looked at the skipWaiting: false option, but kept seeing the same error on new builds.
Your problem might be that service-worker.js is cached in users' browsers, so when you deploy a new version, the old service worker is still cached there.
The minimum and maximum TTL for the service worker file should not be 0. You should always cache it at the CloudFront edge, since you can run invalidation requests to purge the edge cache after each deployment.
The problem is more likely the cache in users' browsers: try removing the cache headers for service-worker.js when uploading the files to the bucket. It helps to understand the difference between caching at the CloudFront edge and caching in users' browsers.
I wrote an article about this and about how to deploy single-page applications and troubleshoot cache issues. Please have a look: https://lucasfsantos.com/posts/deploy-react-angular-cloudfront/
I also talk about sw-precache on single-page apps in the article:
Service worker JavaScript update frequency (every 24 hours?), https://github.com/GoogleChromeLabs/sw-precache/issues/332

bundling in mvc to load new set of files for every release

I need to bundle the .js and lib files in my MVC project. I am looking for a version number for the bundle, since the browser needs to reload all .js and lib files whenever a new version is added or a build is pushed to the dev/testing/prod environments.
Bundling in ASP.NET MVC already has a built-in mechanism for handling this type of cache-busting scenario for release builds, as described in the "Bundle Caching" section of this documentation:
As long as the bundle doesn't change, the ASP.NET application will request the AllMyScripts bundle using this token. If any file in the bundle changes, the ASP.NET optimization framework will generate a new token, guaranteeing that browser requests for the bundle will get the latest bundle.
This feature uses a querystring token called v to indicate the current "version" of the bundle, which looks like:
v=r0sLDicvP58AIXN_mc3QdyVvVj5euZNzdsa2N1PKvb81
This functions as a unique identifier for the particular build; as long as nothing in the bundle changes, it continues to be used, and otherwise a new one is generated to "bust" any existing caching.
For Non-Release Builds
If you need to handle this in non-release builds as well, you can set the EnableOptimizations property to true:
BundleTable.EnableOptimizations = true;
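The v token is essentially a hash over the bundle contents. A toy JavaScript sketch of the idea, keeping to this document's language (djb2 is a stand-in hash; this is not ASP.NET's actual algorithm):

```javascript
// Toy string hash (djb2 variant) standing in for the real hash function.
function djb2(str) {
  let h = 5381;
  for (let i = 0; i < str.length; i++) {
    h = ((h * 33) ^ str.charCodeAt(i)) >>> 0;
  }
  return h.toString(16);
}

// Cache-busting token derived from the bundle contents: stable while the
// files are unchanged, different as soon as any file changes.
function bundleToken(files) {
  return djb2(files.join('\n'));
}

const t1 = bundleToken(['console.log(1)', 'var x = 2;']);
const t2 = bundleToken(['console.log(1)', 'var x = 3;']); // one file edited
console.log(t1 === t2); // false: the edited bundle gets a new token
```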

How do I put #Cacheable annotation on Grails asset pipeline controller action?

I have a Grails 2.4.4 project configured with the default ':cache:1.1.8' plugin. It also uses the default ":asset-pipeline:1.9.9" plugin.
When running the application, I'm seeing this DEBUG message in the logs:
DEBUG simple.MemoryPageFragmentCachingFilter - No cacheable annotation found for GET:/PROJECTNAME/grails/assets/index.dispatch [controller=assets, action=index]
How do I make this message go away? I don't mean by filtering the log file, I mean by putting a cacheable annotation for the asset pipeline controller, or something like that.
UPDATE: It turns out that I was getting dozens of those DEBUG log messages instead of just one, because of a flaw in sass-asset-pipeline:1.9.0.
I updated to sass-asset-pipeline:1.9.1, because they said they fixed some caching issues in 1.9.1 here:
https://github.com/bertramdev/sass-grails-asset-pipeline/issues/11
You don't want to. Caching responses and method calls should use very different logic from caching static resources.
Typically static resources change rarely and are cached forever, but use a unique name or some other mechanism so if you do change the CSS/JS/etc. file, you can get clients to use the new version.
But caching service method calls and controller responses is typically much more short-lived, since database updates often trigger cache invalidation and flushing to ensure that the correct data is used.
The asset-pipeline plugin and its addon plugins have great support for smart caching, and you should manage caching there, not by misusing the cache plugin(s).
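The split this answer describes, long-lived caching for fingerprinted static assets versus short-lived caching for responses, can be sketched as a header-selection rule. cacheControlFor is a hypothetical helper and the header values are illustrative, not what asset-pipeline emits:

```javascript
// Fingerprinted static assets (a content digest in the file name) can be
// cached for a long time, because a changed file gets a new name. Dynamic
// responses are revalidated on every request instead.
function cacheControlFor(path) {
  const fingerprinted = /-[0-9a-f]{8,}\.(js|css|png|woff2?)$/.test(path);
  return fingerprinted
    ? 'public, max-age=31536000, immutable' // digest in name => safe to cache long
    : 'no-cache';                           // revalidate dynamic responses
}

console.log(cacheControlFor('/assets/app-9f86d081e4.js')); // long-lived policy
console.log(cacheControlFor('/api/orders'));               // no-cache
```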

Uglifying application.js timeout when generating war

I have a Grails 2.4.3 project using Angular and a good number of external JS libraries. Whenever I try to create a war, it times out while attempting to minify the JS. I assumed this was because there are too many libraries, so I extended the timeout in GGTS, but it still timed out. I also tried excluding some of the assets, but that gave me a NullPointerException at the start of asset compilation. For now, I'm skipping minification by setting it to false. Here are my questions:
Is there a problem with having already-minified JS libraries in the manifest?
Can I add the required libraries outside the manifest and still have them included in the war, e.g. in web-app/js?
I believe I could reference a CDN in the HTML rather than having the libraries in the manifest and copied into my project, but I sometimes work without internet access. Is there a way to configure asset-pipeline so that production uses a CDN for certain assets?
The documentation covers most, if not all, of your questions.
config section
This section covers CDN configuration and control of minification.
For instance
grails.assets.url = "http://cdn.example.com/"
will set the CDN base URL.
