Service worker to save form data when browser is offline

I am new to Service Workers, and have had a look through the various bits of documentation (Google, Mozilla, serviceworke.rs, Github, StackOverflow questions). The most helpful is the ServiceWorkers cookbook.
Most of the documentation seems to point to caching entire pages so that the app works completely offline, or redirecting the user to an offline page until the connection is restored.
What I want to do, however, is store my form data locally so my web app can upload it to the server when the user's connection is restored. Which "recipe" should I use? I think it is Request Deferrer. Do I need anything else to ensure that Request Deferrer will work (apart from the service worker detector script in my web page)? Any hints and tips much appreciated.
Console errors
The Request Deferrer recipe and code don't seem to work on their own, as they don't include file caching. I have added some caching for the service worker library files, but I am still getting this error when I submit the form while offline:
Console: {"lineNumber":0,"message":"The FetchEvent for [the form URL] resulted in a network error response: the promise was rejected.","message_level":2,"sourceIdentifier":1,"sourceURL":""}
My Service Worker
/* eslint-env es6 */
/* eslint no-unused-vars: 0 */
/* global importScripts, ServiceWorkerWare, localforage */
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');
//Determine the root for the routes. I.e., if the Service Worker URL is http://example.com/path/to/sw.js, then the root is http://example.com/path/to/
var root = (function() {
  var tokens = (self.location + '').split('/');
  tokens[tokens.length - 1] = '';
  return tokens.join('/');
})();
//By using Mozilla’s ServiceWorkerWare we can quickly set up some routes for a virtual server. It is worth reviewing the virtual server recipe before reading this.
var worker = new ServiceWorkerWare();
//So here is the idea. We will check if we are online or not. In case we are not online, enqueue the request and provide a fake response.
//Else, flush the queue and let the new request reach the network.
//This function factory does exactly that.
function tryOrFallback(fakeResponse) {
  //Return a handler that…
  return function(req, res) {
    //If offline, enqueue and answer with the fake response.
    if (!navigator.onLine) {
      console.log('No network availability, enqueuing');
      return enqueue(req).then(function() {
        //As the fake response will be reused but Response objects are one use only, we need to clone it each time we use it.
        return fakeResponse.clone();
      });
    }
    //If online, flush the queue and answer from network.
    console.log('Network available! Flushing queue.');
    return flushQueue().then(function() {
      return fetch(req);
    });
  };
}
//A fake response for when there is no connection. A real implementation could have cached the last collection of updates and kept a local model. For simplicity, that is not implemented here.
worker.get(root + 'api/updates?*', tryOrFallback(new Response(
  JSON.stringify([{
    text: 'You are offline.',
    author: 'Oxford Brookes University',
    id: 1,
    isSticky: true
  }]),
  { headers: { 'Content-Type': 'application/json' } }
)));
//For deletion, let’s simulate that all went OK. Notice we pass null as the body (the init object must be the second argument): trying to add a body to a 204 response throws an error.
worker.delete(root + 'api/updates/:id?*', tryOrFallback(new Response(null, {
  status: 204
})));
//Creation is another story. We cannot reach the server, so we cannot get the id for the new updates.
//No problem: just say we accept the creation and we will process it later, as soon as we recover connectivity.
worker.post(root + 'api/updates?*', tryOrFallback(new Response(null, {
  status: 202
})));
//Start the service worker.
worker.init();
//By using Mozilla’s localforage db wrapper, we can count on a fast setup for a versatile key-value database. We use it to store the queue of deferred requests.
//Enqueue consists of adding a request to the list. Due to the limitations of IndexedDB, Request and Response objects cannot be saved directly, so we need an alternative representation.
//This is why we call serialize().
function enqueue(request) {
  return serialize(request).then(function(serialized) {
    //Return the storage promise so callers wait until the item is actually saved.
    return localforage.getItem('queue').then(function(queue) {
      /* eslint no-param-reassign: 0 */
      queue = queue || [];
      queue.push(serialized);
      return localforage.setItem('queue', queue).then(function() {
        console.log(serialized.method, serialized.url, 'enqueued!');
      });
    });
  });
}
//Flush is a little more complicated. It consists of getting the elements of the queue in order and sending each one, keeping track of the requests not yet sent.
//Before sending a request we need to recreate it from the alternative representation stored in IndexedDB.
function flushQueue() {
  //Get the queue
  return localforage.getItem('queue').then(function(queue) {
    /* eslint no-param-reassign: 0 */
    queue = queue || [];
    //If empty, nothing to do!
    if (!queue.length) {
      return Promise.resolve();
    }
    //Else, send the requests in order…
    console.log('Sending ', queue.length, ' requests...');
    return sendInOrder(queue).then(function() {
      //Requires error handling: this assumes every request in the queue succeeds once it reaches the network.
      //It should really empty the queue step by step, popping an item only when its request completes successfully.
      return localforage.setItem('queue', []);
    });
  });
}
//Send the requests inside the queue in order, waiting for the current one to finish before sending the next.
function sendInOrder(requests) {
  //The reduce() chains one promise per serialized request, so the next one cannot start until the current one completes.
  var sending = requests.reduce(function(prevPromise, serialized) {
    console.log('Sending', serialized.method, serialized.url);
    return prevPromise.then(function() {
      return deserialize(serialized).then(function(request) {
        return fetch(request);
      });
    });
  }, Promise.resolve());
  return sending;
}
//Serialize is a little bit convoluted because headers is not a simple object.
function serialize(request) {
  var headers = {};
  //for(... of ...) is ES6 notation, but browsers that support Service Workers support it too, and it is the only way of retrieving all the headers.
  for (var entry of request.headers.entries()) {
    headers[entry[0]] = entry[1];
  }
  var serialized = {
    url: request.url,
    headers: headers,
    method: request.method,
    mode: request.mode,
    credentials: request.credentials,
    cache: request.cache,
    redirect: request.redirect,
    referrer: request.referrer
  };
  //Only if the method is not GET or HEAD is the request allowed to have a body.
  if (request.method !== 'GET' && request.method !== 'HEAD') {
    return request.clone().text().then(function(body) {
      serialized.body = body;
      return serialized;
    });
  }
  return Promise.resolve(serialized);
}
//By comparison, deserialize is pretty simple.
function deserialize(data) {
  return Promise.resolve(new Request(data.url, data));
}
var CACHE = 'cache-only';
// On install, cache some resources.
self.addEventListener('install', function(evt) {
  console.log('The service worker is being installed.');
  // Ask the service worker to keep installing until the returned promise resolves.
  evt.waitUntil(precache());
});
// On fetch, use a cache-only strategy.
self.addEventListener('fetch', function(evt) {
  console.log('The service worker is serving the asset.');
  evt.respondWith(fromCache(evt.request));
});
// Open a cache and use `addAll()` with an array of assets to add all of them
// to the cache. Return a promise that resolves when all the assets are added.
function precache() {
  return caches.open(CACHE).then(function (cache) {
    return cache.addAll([
      '/js/lib/ServiceWorkerWare.js',
      '/js/lib/localforage.js',
      '/js/settings.js'
    ]);
  });
}
// Open the cache where the assets were stored and search for the requested
// resource. Note that `cache.match()` resolves with `undefined` when there is
// no match; we turn that into a rejection here.
function fromCache(request) {
  return caches.open(CACHE).then(function (cache) {
    return cache.match(request).then(function (matching) {
      return matching || Promise.reject('no-match');
    });
  });
}
Here is the error message I am getting in Chrome when I go offline:
(A similar error occurred in Firefox - it falls over at line 409 of ServiceWorkerWare.js)
ServiceWorkerWare.prototype.executeMiddleware = function (middleware, request) {
  var response = this.runMiddleware(middleware, 0, request, null);
  response.catch(function (error) { console.error(error); });
  return response;
};

This is a little more advanced than beginner level, but you will need to detect when you are offline or in a "Lie-Fi" state (a connection that is nominally up but too poor to use). Instead of POSTing data to an API endpoint, you need to queue that data to be synced when you are back online.
This is what the Background Sync API should help with. However, it is not supported across the board just yet. Plus there is Safari...
So maybe a good strategy is to persist your data in IndexedDB and, when you can connect (Background Sync fires an event for this), POST the data; a sketch of this follows below. It gets a little more complex for browsers that don't support service workers (Safari) or don't yet have Background Sync (that will level out very soon).
As always, design your code as a progressive enhancement, which can be tricky, but worth it in the end.
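A minimal sketch of that strategy, assuming hypothetical saveToOutbox()/readOutbox() IndexedDB helpers and a placeholder /api/forms endpoint:
// In the page: store the submission locally, then ask for a one-off sync.
async function submitForm(data) {
  if ('serviceWorker' in navigator && 'SyncManager' in window) {
    await saveToOutbox(data); // hypothetical IndexedDB write
    const reg = await navigator.serviceWorker.ready;
    await reg.sync.register('sync-forms'); // fires a 'sync' event when back online
  } else {
    // fallback for browsers without Background Sync
    await fetch('/api/forms', { method: 'POST', body: JSON.stringify(data) });
  }
}

// In the service worker: replay the outbox when the 'sync' event fires.
self.addEventListener('sync', (event) => {
  if (event.tag === 'sync-forms') {
    event.waitUntil(
      readOutbox().then((items) => // hypothetical IndexedDB read
        Promise.all(items.map((item) =>
          fetch('/api/forms', { method: 'POST', body: JSON.stringify(item) })
        ))
      )
    );
  }
});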

Service Workers tend to cache the static HTML, CSS, JavaScript, and image files.
I need to use PouchDB and sync it with CouchDB
Why CouchDB?
- CouchDB is a NoSQL database consisting of a number of documents created with JSON.
- It has versioning (each document has a _rev property that changes with every update).
- It can be synchronised with PouchDB, a local JavaScript implementation of CouchDB that stores data in the browser using IndexedDB. This allows us to create offline applications.
- The two databases are both “master” copies of the data.
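A minimal sketch of that sync, assuming the PouchDB script is loaded, a local database named 'forms', and a placeholder remote CouchDB URL:
// The local database lives in the browser (IndexedDB under the hood).
var local = new PouchDB('forms');
// The remote CouchDB endpoint is a placeholder for illustration.
var remote = new PouchDB('https://couch.example.com/forms');

// Writes go to the local copy, even while offline...
local.put({ _id: new Date().toISOString(), name: 'Alice' });

// ...and live two-way replication pushes them to CouchDB whenever a connection exists.
local.sync(remote, { live: true, retry: true })
  .on('change', function (info) { console.log('synced', info); })
  .on('error', function (err) { console.error('sync error', err); });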

I still need a better answer than my partial notes towards a solution!
Yes, this type of service worker is the correct one to use for saving form data offline.
I have now edited it and understood it better. It caches the form data, and loads it on the page for the user to see what they have entered.
It is worth noting that the paths to the library files will need editing to reflect your local directory structure, e.g. in my setup:
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');
The script is still failing when offline, however, as it isn't caching the library files. (Update to follow when I figure out caching)
Just discovered an extra debugging tool for service workers (apart from the console): chrome://serviceworker-internals/. In it, you can start or stop service workers, view console messages, and see the resources used by the service worker.

Related

Notify a web page served from service-worker cache that it has been served from SW cache

I developed a service worker which serves pages network-first (and caches them) or, when offline, serves them from the cache.
Ideally, I would like to inform the user (with a banner, or something like that) that the page has been served from the cache because we detected they were offline.
Do you have an idea of how to implement this?
Some ideas I had (but didn't really succeed in implementing):
Inject some code in the cached response body (for example, some JS code triggering an offline-detected event, which may or may not have an event listener on the web page, depending on whether I want to do something on that page while offline).
=> I didn't find a way to append content to a response body coming from the cache.
Send a postMessage from the service worker to the web page saying the page was rendered from cached content.
=> This doesn't seem possible, as there is no MessagePort available in the service worker's fetch event, making it impossible to postMessage() to the page.
If you have any idea how to implement this, I would be very happy to discuss it.
Thanks in advance :)
I looked at different solutions:
Use the navigator.onLine flag on the HTML page: this is not reliable in my case, as my service worker might serve a cached page (because of, let's say, a timeout) while the browser still considers itself online.
Inject custom headers when serving the HTML response from cache: I don't see how this could work, since response headers are generally not accessible on the client side.
Call the service worker's postMessage() a few seconds after content is served (see the sketch after this list): the problem here is that the fetch event for the "starting" HTML page doesn't have a clientId yet (a chicken-and-egg problem, since the service worker is not yet attached to the client at the moment the root HTML page is served from cache).
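For reference, that postMessage variant would look something like the following broadcast from the service worker; as noted above, it cannot target the navigating page itself, since that client does not exist yet:
// Hypothetical sketch: broadcast to every window client after serving from cache.
self.clients.matchAll({ type: 'window' }).then(function (clients) {
  clients.forEach(function (client) {
    client.postMessage({ type: 'served-offline-page' });
  });
});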
The only remaining solution I found was to inject some code into the cached response.
Something like this on the service worker side (the main idea is to inject some code into the cached response body that triggers a served-offline-page event once the DOM content has loaded):
async function createClonedResponseWithOfflinePageInjectedEvent(response) {
  const contentReader = response.body.getReader();
  let content = '', readResult = { done: false, value: undefined };
  while (!readResult.done) {
    readResult = await contentReader.read();
    content += readResult.value ? new TextDecoder().decode(readResult.value) : '';
  }
  // Important part here, as we're "cloning" the response by injecting some JS code into it
  content = content.replace("</body>", `
    <script type='text/javascript'>
      document.addEventListener('DOMContentLoaded', () => document.dispatchEvent(new Event('served-offline-page')));
    </script>
  </body>
  `);
  return new Response(content, {
    headers: response.headers,
    status: response.status,
    statusText: response.statusText
  });
}
async function serveResponseFromCache(cache, request) {
  try {
    const response = await cache.match(request);
    if (response) {
      console.info("Used cached asset for : " + request.url);
      // isCacheableHTMLPage() will return true on HTML pages where we're OK to "inject" some JS code to notify about offline serving
      if (isCacheableHTMLPage(request.url)) {
        return createClonedResponseWithOfflinePageInjectedEvent(response);
      } else {
        return response;
      }
    } else {
      console.error("Asset neither available from network nor from cache : " + request.url);
      // Redirecting to the offline page
      return new Response(null, {
        headers: {
          'Location': '/offline.html'
        },
        status: 307
      });
    }
  } catch (error) {
    console.error("Error while matching request " + request.url + " from cache : " + error);
  }
}
On the HTML page this is simple; we only have to write:
document.addEventListener('served-offline-page', () => {
  console.log("It seems like this page has been served as some offline content!");
});

Protocol error when calling puppeteer.connect()

I am using the basic approach as set out in this post to connect from a client docker container to any one of a number of chrome docker containers (in a docker swarm/service, potentially across several servers behind nginx, deployed using CapRover).
In each chrome container I maintain a pool (just a simple array) of browser objects, and direct incoming requests to an appropriate browser as follows (very similar to the linked post):
import http from 'node:http'; // https://nodejs.org/api/http.html
import httpProxy from 'http-proxy'; // https://www.npmjs.com/package/http-proxy
const proxy = new httpProxy.createProxyServer({ ws: true });
// an array (pool) of pre-launched and managed browser objects...
const browsers = [ ... ];
http
  .createServer()
  .on('upgrade', (req, socket, head) => {
    const browser = browsers[Math.floor(Math.random() * browsers.length)]; // in reality I don't just pick a browser at random
    const target = browser.wsEndpoint();
    proxy.ws(req, socket, head, { target });
  })
  .listen(3222);
The above is listening at ws://srv-captain--chrome:3222 (communication is "internal" over the docker network between containers).
Then, in my client container, I connect to the common endpoint ws://srv-captain--chrome:3222 as follows:
import puppeteer from 'puppeteer'; // https://www.npmjs.com/package/puppeteer (using version 17.1.3 at time of posting this)
try {
  const browser = await puppeteer.connect({ browserWSEndpoint: 'ws://srv-captain--chrome:3222' });
} catch (err) {
  console.error('error connecting to browser', err);
}
This works really well, except that I am getting occasional/inconsistent errors like these when calling puppeteer.connect() in the client container above:
Protocol error (Emulation.setDeviceMetricsOverride): Session closed. Most likely the page has been closed.
Protocol error (Performance.enable): Target closed.
Almost always, if I simply try to connect again, the connection is made without further error, and at the first attempt.
I have no idea why the error is complaining that the page has been closed or Target closed since, at this point in the process, I'm not attempting to interact with any page, and I know from listening for browser.on('disconnected'...), and also monitoring the chromium processes themselves, that each browser in the array is still working fine... none has crashed.
Any idea what's going on here?
UPDATE after further testing
Of course, in the client container we don't connect to a browser just for the sake of it, like in the above snippet, but to open a page and do some stuff with the page. In practice, in the client container it's more like the following test snippet:
const doIteration = function (i) {
  return new Promise(async (resolve, reject) => {
    // mimic incoming requests coming in at random times over a short period by introducing a random initial delay...
    await new Promise(resolve => setTimeout(resolve, Math.random() * 5000));
    // now actually connect...
    let browser;
    try {
      browser = await puppeteer.connect({ browserWSEndpoint: `ws://srv-captain--chrome:3222?queryParam=loop_${i}` });
    } catch (err) {
      reject(err);
      return;
    }
    // now that we have a browser, open a new page...
    const page = await browser.newPage();
    // do something useful with the page (not shown here) and then close it...
    await page.close();
    // now disconnect (but don't close) the browser...
    browser.disconnect();
    resolve();
  });
};
const promises = [];
for (let i = 0; i < 15; i++) {
  promises.push(doIteration(i));
}
try {
  await Promise.all(promises);
} catch (err) {
  console.error(`error doing stuff`, err);
}
Each iteration above is being performed multiple times concurrently... I am using Promise.all() on an array of iteration promises to mimic multiple concurrent incoming requests in my production code. The above is enough to reproduce the problem... the error doesn't happen on calling puppeteer.connect() with every iteration, just some.
So there seems to be some sort of interplay between opening/closing a page in one iteration, and calling puppeteer.connect() in another, despite closing the page and disconnecting the browser properly in each iteration? This probably also explains the Most likely the page has been closed error message when calling puppeteer.connect() if there is some hangover relating to a page closed in another iteration... though for some reason this error occurs when calling puppeteer.connect()?
With the use of a pool of browser objects in the browsers array, and a docker swarm having multiple containers on multiple servers, each upgrade message could be received at a different container (which could even be on a different server) and could be routed to a different browser in the browsers array. But I now think that this is a red herring, because in the further testing I narrowed the problem down by routing all requests to browsers[0] and also scaling the service down to just one container... so that the upgrade messages are always handled by the same container on the same server and routed to the same browser... and the problem still occurs.
Full stacktrace for the above-mentioned error:
Error: Protocol error (Emulation.setDeviceMetricsOverride): Session closed. Most likely the page has been closed.
at CDPSession.send (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Connection.js:281:35)
at EmulationManager.emulateViewport (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/EmulationManager.js:33:73)
at Page.setViewport (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Page.js:1776:93)
at Function._create (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Page.js:242:24)
at runMicrotasks (<anonymous>)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Target.page (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Target.js:123:23)
at async Promise.all (index 0)
at async BrowserContext.pages (file:///root/workspace/myclientapp/node_modules/puppeteer/lib/esm/puppeteer/common/Browser.js:577:23)
at async Promise.all (index 0)
As I dug deeper and deeper into this problem, it become more and more apparent that I might not actually be doing anything fundamentally wrong, and that this might just be a bug in puppeteer itself. So I reported those as an issue over on puppeteer... and indeed, it is acknowledged as a bug for any version later than 15.5.0, and is being fixed. In the meantime, the workaround is to revert to puppeteer version 15.5.0 and to be careful when calling browser.pages() when concurrent connections are being used, because that might itself throw an error... but I understand that this too might be something that they can/will fix so that browser.pages() is more resilient to the presence of concurrent connections.
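Given the observation above that a second attempt almost always succeeds, a simple retry wrapper is another possible stopgap; the attempt count and delay here are arbitrary assumptions:
// Retry puppeteer.connect() a few times before giving up.
async function connectWithRetry(browserWSEndpoint, attempts = 3, delayMs = 250) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await puppeteer.connect({ browserWSEndpoint });
    } catch (err) {
      if (i === attempts - 1) throw err; // out of attempts, rethrow
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}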

Service-Worker stays in waiting state in Chrome

I'm working on a SPA with Vue. I'd like to update to a new service worker when the user navigates to a specific page. That's a safe moment to refresh, because the user's view is already changing (a pattern discussed in this video: https://youtu.be/cElAoxhQz6w).
I have an issue where sometimes (infrequently) the service worker won't activate when calling skipWaiting(). The call is made correctly, and in Chrome I even get a response that the current service worker has stopped (see animated GIF), however the same service worker starts running again instead of the waiting one.
After a while (1-2 minutes) the service worker is suddenly activated. Not a situation you want, because it happens out of the blue when the user might be in the middle of an activity.
Also, when I am in this situation I can't activate the service worker by calling skipWaiting() again (by doing multiple navigations). It's received by the service worker but nothing happens; it stays in "waiting to activate". When I press skipWaiting in Chrome itself, it works.
I have no clue what goes wrong. Is this an issue with Chrome, workbox or something else?
The closest I found is this topic: self.skipWaiting() not working in Service Worker.
I use Vue.js, but I don't depend on the PWA plugin for the service worker; I use the workbox webpack plugin.
I've edited the example code below; the minimal code probably didn't show the problem well.
In main.js:
let sw = await navigator.serviceWorker.register("/service-worker.js", {
  updateViaCache: "none",
});
let firstSw = false;
navigator.serviceWorker.addEventListener("controllerchange", () => {
  // no need to refresh when the first sw controls the page; we solve this with clientsClaim
  // this makes sure that when multiple tabs are open, they all refresh
  if (!firstSw) {
    window.location.reload();
  }
});
sw.onupdatefound = () => {
  const installingWorker = sw.installing;
  installingWorker.onstatechange = async () => {
    console.log("installing worker state-change: " + installingWorker.state);
    if (installingWorker.state === "installed") {
      if (navigator.serviceWorker.controller) {
        firstSw = false;
        // set the waiting service-worker in the store
        // so we can update it and refresh the page on navigation
        await store.dispatch("setWaitingSW", sw.waiting);
      } else {
        console.log("First sw available");
        firstSw = true;
      }
    }
  };
};
In router.js:
// after navigation to specific routes we check for a waiting service-worker.
router.afterEach(async (to) => {
if (to.name == "specificpage") {
let waitingSw = store.getters["getWaitingSW"];
if (waitingSw) {
waitingSw.postMessage("SKIP_WAITING");
// clean the store, because we might have changed our data model
await store.dispatch("cleanLocalForage");
}
}
});
In service-worker.js:
self.addEventListener("message", event => {
if (event.data === "SKIP_WAITING") {
console.log("sw received skip waiting");
self.skipWaiting();
}
});
skipWaiting() isn't instant. If there are active fetches going through the current service worker, it won't break those. If you're seeing skipWaiting() taking a long time, I'd guess you have some long-running HTTP connections holding the old service worker in place.
I'm not sure that
let sw = await navigator.serviceWorker.register("/service-worker.js", {
  updateViaCache: "none",
});
if (sw.waiting) {
  sw.waiting.postMessage("SKIP_WAITING");
}
is the code that you want in this case. Your if (sw.waiting) check is only evaluated once, and the newly registered service worker might still be in the installing state when it's evaluated. If that's the case, then sw.waiting will be falsy at the time of initial evaluation, though it may be truthy after a small period of time.
Instead, I'd recommend following a pattern like what's demonstrated in this recipe, where you explicitly listen for a service worker to enter waiting on the registration. That example uses the workbox-window library to paper over some of the details.
If you don't want to use workbox-window, you should follow this guidance: check to see if sw.installing is set after registration; if it is, listen to the statechange event on sw.installing to detect when it reaches 'installed'. Once that happens, sw.waiting should be set to the newly installed service worker, and at that point you could postMessage() to it. A sketch of that flow follows.
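Roughly like this, reusing the same 'SKIP_WAITING' message the service worker in the question listens for (and assuming an async context for the await):
const reg = await navigator.serviceWorker.register("/service-worker.js", {
  updateViaCache: "none",
});

if (reg.waiting) {
  // An updated worker was already waiting when we registered.
  reg.waiting.postMessage("SKIP_WAITING");
} else if (reg.installing) {
  // A worker is still installing; once it reaches 'installed',
  // reg.waiting will point at it.
  reg.installing.addEventListener("statechange", (event) => {
    if (event.target.state === "installed" && reg.waiting) {
      reg.waiting.postMessage("SKIP_WAITING");
    }
  });
}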
OK, I had a similar issue and it took me two days to find the cause.
There is a scenario where you can cause a race condition between the new service worker and the old one if you request a precached asset at the exact moment you call skipWaiting().
For me, I was prompting the user to update to a new version and, upon their confirmation, showing a loading spinner that was a Vue SFC dynamic import. This kicked off a network request to the old service worker for the precached JS file, which basically caused both workers to hang and get very confused.
You can check whether you're having a similar issue by looking at the service-worker-specific network requests in DevTools, making sure none of them happen at the instant you're trying to skip waiting on your newer service worker.

Downloading whole websites with k6

I'm currently evaluating whether k6 fits our load testing needs. We have a fairly traditional website architecture that uses Apache web servers with PHP and a MySQL database. Sending simple HTTP requests with k6 looks simple enough, and I think we will be able to test all major functionality with it, as we don't rely on JavaScript that much and most pages are static.
However, I'm unsure how to deal with resources (stylesheets, images, etc.) that are referenced in the HTML that is returned in the requests. We need to load them as well, as this sometimes leads to database requests, which must be part of the load test.
Is there some out-of-the-box functionality in k6 that allows you to load all the resources like a browser would? I'm aware that k6 does NOT render the page and I don't need it to. I only need to request all the resources inside the HTML.
You basically have two options, both with their caveats:
Record your session - you can either export a HAR directly from the browser or use an extension made for your browser (there are extensions for both Firefox and Chrome). Both should be usable without a k6 cloud account; you just need to set them to download the HAR, and it will automatically (and somewhat silently) download when you hit stop. Then use either the built-in k6 HAR converter (which is deprecated, but still works) or the newer har-to-k6 one.
This method is particularly good if you have a lot of pages and/or resources, and it even works for a single-page style of application, as it just records what the browser requested as a HAR and then transforms it into a script. If there are no dynamic things that need to be input (username/password), the final script can be used as-is most of the time.
The biggest problem with this approach is that if you add a CSS file you need to redo the whole exercise. This is even more problematic if your CSS/JS file names change on every build or something like that. That is what the next method is good for:
Use parseHTML, then find the elements you care about and make requests for them:
import http from "k6/http";
import {parseHTML} from "k6/html";
export default function() {
const res = http.get("https://stackoverflow.com");
const doc = parseHTML(res.body);
doc.find("link").toArray().forEach(function (item) {
console.log(item.attr("href"));
// make http gets for it
// or added them to an array and make one batch request
});
}
This will produce:
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico?v=4f32ecc8f43d
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] /opensearch.xml
INFO[0001] https://cdn.sstatic.net/Shared/stacks.css?v=53507c7c6e93
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/primary.css?v=d3fa9a72fd53
INFO[0001] https://cdn.sstatic.net/Shared/Product/product.css?v=c9b2e1772562
INFO[0001] /feeds
INFO[0001] https://cdn.sstatic.net/Shared/Channels/channels.css?v=f9809e9ffa90
As you can see, some of the URLs are relative rather than absolute, so you will need to handle that (see the sketch below), and in this example only some are CSS, so more filtering is probably needed.
The problem here is that you need to write the code, and if you add a relative link or something else, you need to handle it. Luckily k6 is scriptable, so you can reuse the code :D.
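One way of handling those relative URLs, as a sketch only: it resolves root-relative paths against the page's origin and batches the GETs (a more robust approach would use a proper URL library):
import http from "k6/http";
import { parseHTML } from "k6/html";

// crude absolute-URL resolution for the href values logged above
function toAbsolute(href, pageUrl) {
  if (href.indexOf("http") === 0) return href; // already absolute
  var origin = pageUrl.split("/").slice(0, 3).join("/"); // e.g. https://stackoverflow.com
  return origin + (href.indexOf("/") === 0 ? href : "/" + href);
}

export default function () {
  const res = http.get("https://stackoverflow.com");
  const doc = parseHTML(res.body);
  const urls = doc.find("link").toArray().map(function (item) {
    return toAbsolute(item.attr("href"), res.url);
  });
  // one batch request for all of the discovered resources
  http.batch(urls.map(function (u) { return ["GET", u]; }));
}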
I've followed Михаил Стойков's suggestion and written my own function to load resources. You can set the way resources are loaded (batch or sequential GETs with options.concurrentResourceLoading):
import http from 'k6/http';
import { check } from 'k6';

// Note: options, resolveUrl() and createHeader() are the author's own helpers, defined elsewhere.

/**
 * @param {http.RefinedResponse<http.ResponseType>} response
 */
export function getResources(response) {
  const resources = [];
  response
    .html()
    .find('*[href]:not(a)')
    .each((index, element) => {
      resources.push(element.attributes().href.value);
    });
  response
    .html()
    .find('*[src]:not(a)')
    .each((index, element) => {
      resources.push(element.attributes().src.value);
    });
  if (options.concurrentResourceLoading) {
    const responses = http.batch(
      resources.map((r) => {
        return ['GET', resolveUrl(r, response.url), null, {
          headers: createHeader()
        }];
      })
    );
    responses.forEach((res) => {
      check(res, {
        'resource returns status 200': (r) => r.status === 200,
      });
    });
  } else {
    resources.forEach((r) => {
      const res = http.get(resolveUrl(r, response.url), {
        headers: createHeader(),
      });
      check(res, {
        'resource returns status 200': (r) => r.status === 200,
      });
    });
  }
}
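A possible way to call it from a scenario, assuming the helpers it references are defined in the same module (the target URL is a placeholder):
export default function () {
  const response = http.get('https://example.com/');
  getResources(response); // also fetch everything the page references
}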

Service Worker and transparent cache updates

I am trying to install a ServiceWorker for a simple, yet old, Django web app. I started with the read-through caching example from the Chrome team.
This works well but isn't ideal because I want to update the cache, if needed. There are two recommended ways to do this based on reading all the other service-worker answers here.
Use some server-side logic to know when the stuff you show has updated and then update your service worker to change what is precached. This is what sw-precache does, for example.
Just update the cache version in the service worker JS file (see comments in the JS file on the caching example above) whenever resources you depend on update.
Neither are great solutions for me. First, this is a dumb, legacy app. I don't have the application stack that sw-precache relies on. Second, someone else updates the data that will be shown (it is basically a list of things with a details page).
I wanted to try out the "use cache, but update the cache from network" that Jake Archibald suggested in his offline cookbook but I can't quite get it to work.
My original thinking was that I should just be able to return the cached version in my service worker, but queue a function that would update the cache if the network is available. For example, something like this in the fetch event listener:
// If there is an entry in cache, return it after queueing an update
console.log(' Found response in cache:', response);
setTimeout(function(request, cache) {
  fetch(request).then(function(response) {
    if (response.status < 400 && response.type == 'basic') {
      console.log("putting a new response into cache");
      cache.put(request, response);
    }
  });
}, 10, request.clone(), cache);
return response;
But this doesn't work. The page gets stuck loading.
What's wrong with the code above? What's the right way to get to my target design?
It sounds like https://jakearchibald.com/2014/offline-cookbook/#stale-while-revalidate is very close to what you're looking for:
self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.open('mysite-dynamic').then(function(cache) {
      return cache.match(event.request).then(function(response) {
        var fetchPromise = fetch(event.request).then(function(networkResponse) {
          // if we got a response from the cache, update the cache
          if (response) {
            cache.put(event.request, networkResponse.clone());
          }
          return networkResponse;
        });
        // respond from the cache, or the network
        return response || fetchPromise;
      });
    })
  );
});
On page reload you can refresh your service worker with a new version; meanwhile, the old one will take care of requests.
Once everything is done and no page is using the old service worker, the browser will start using the newer version.
this.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request).then(function(response) {
      return response || fetch(event.request).then(function(resp) {
        return caches.open('v1').then(function(cache) {
          cache.put(event.request, resp.clone());
          return resp;
        });
      }).catch(function() {
        return caches.match('/sw/images/myLittleVader.jpg');
      });
    })
  );
});
I recommend walking through the link below for a detailed explanation of this functionality:
https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API/Using_Service_Workers
