The screenshot summarizes what happens to a Chrome extension Manifest V3 service worker. The service worker does not update when clicking Update, or when refreshing the page with "Update on reload" checked.
Instead, the current service worker never unloads and the updated service worker never properly starts.
Is there any way to prevent this, or to always have it skip waiting?
I am serving my service worker from /worker.js and want it to intercept fetches to /localzip/*, but the fetch event is never fired.
I register it like this:
navigator.serviceWorker.register(
  "worker.js",
  { scope: "/localzip/" }
);
And I claim all clients when it activates, so that I can start intercepting fetches from the current page immediately. I am sure that the service worker is activating and that clients.claim() is succeeding.
self.addEventListener("activate", (e) => {
// Important! Start processing fetches for all clients immediately.
//
// MDN: "When a service worker is initially registered, pages won't use it
// until they next load. The claim() method causes those pages to be
// controlled immediately."
e.waitUntil(clients.claim());
});
Chrome seems happy with it and the scope appears correct:
My fetch event handler is very simple:
self.addEventListener("fetch", (e) => {
console.log("Trying to make fetch happen!");
});
From my application, after the worker is active, I try to make a request, e.g.,
const response = await fetch("/localzip/lol.jpg");
The fetch does not appear to trigger the above event handler, and the browser instead tries to make a request directly to the server and logs GET http://localhost:3000/localzip/lol.jpg 404 (Not Found).
I have tried:
Making sure the latest version of my worker code is running.
Disabling / clearing caches to make sure the fetch isn't being handled by the browser's cache.
Hosting from an HTTPS server. (Chrome is supposed to support service workers on plaintext localhost for development.)
What more does it want from me?
Live demo: https://rgov.github.io/service-worker-zip-experiment/
Note that the scope is slightly different, and the fetch is performed by creating an <img> tag.
First, let's confirm you are not using hard reload while testing your code. If you use hard reload, no requests will go through the service worker.
See https://web.dev/service-worker-lifecycle/#shift-reload
I also checked chrome://serviceworker-internals/ in Chrome, and your service worker has a fetch handler.
Then, let's check the code in detail.
After trying your demo page, I found that a network request is handled by the service worker after clicking the "Display image from zip archive" button, since I can see this log:
Service Worker: Serving archive/maeby.jpg from zip archive
Then this error is thrown:
Failed to load ‘https://rgov.github.io/localzip/maeby.jpg’. A ServiceWorker passed a promise to FetchEvent.respondWith() that rejected with ‘TypeError: db is undefined’.
This is caused by the db object not being initialized properly. It would be worth confirming whether you see the same DB-related issue in your demo as I do. If not, my following explanation might not apply.
Let me explain some of the service worker mechanics alongside my understanding of your code:
Timing of install handler
Your DB open code runs only in the install handler, which means the db object is assigned only when the install handler is executed.
Note that the install handler runs only when necessary. If a service worker already exists and does not need to update, the install handler won't be called. Hence, the db object in your code is not always available.
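To make the failure mode concrete, here is a hedged sketch (not the demo's actual code, and with a hypothetical database name) of the pattern described above: db is assigned only during install, then used in fetch, where it can be undefined after a restart.

let db; // module-scope state; reset to undefined every time the worker restarts

self.addEventListener("install", (e) => {
  // Runs only when a new/updated worker is installed, not on every wake-up.
  e.waitUntil(
    new Promise((resolve, reject) => {
      const request = indexedDB.open("demo-db"); // hypothetical database name
      request.onsuccess = () => { db = request.result; resolve(); };
      request.onerror = () => reject(request.error);
    })
  );
});

self.addEventListener("fetch", (e) => {
  // After the worker is stopped and later restarted, install does not run
  // again, so db is undefined here and anything built on it fails.
  console.log("db is", db);
});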
Stop/Start Status
When the service worker has not handled events for a while (how long depends on the browser's design), it goes into a stopped/idle state.
When the service worker is stopped/idle (you can check the state in the devtools) and then starts again, the global db object will be undefined.
In my understanding, this is why I see the error TypeError: db is undefined.
Whenever the service worker wakes up, the whole worker script is executed again, but the event handlers only run when their events actually arrive.
How to prevent stop/idle while debugging?
Keep the devtools open for the page and the browser will keep the service worker alive.
Once you close the devtools, the service worker may go back to the "stop" state soon.
Why does the service worker stop?
Service workers are designed for handling requests. If there is no request/event for a service worker to handle, there is no need to keep the worker thread running, so stopping it saves resources.
Note that fetch and message events (among others) will wake the service worker.
See Prevent Service Worker from automatically stopping
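The linked question covers keep-alive tricks in detail; one common (if hacky) sketch is to ping the worker from the page so that an event keeps arriving. The interval and message shape below are arbitrary assumptions, not something from the demo.

// In the page: ping the active worker periodically.
setInterval(() => {
  if (navigator.serviceWorker.controller) {
    navigator.serviceWorker.controller.postMessage({ type: "keepalive" }); // arbitrary message
  }
}, 20000); // arbitrary interval

// In the worker: handling any event wakes it / resets the idle timer.
self.addEventListener("message", (e) => {
  if (e.data && e.data.type === "keepalive") {
    // Nothing to do; receiving the message is the point.
  }
});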
Solution for the demo page
If the error is from the DB, it means the getFromZip function should open the DB whenever db is unavailable.
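A minimal sketch of that lazy-open idea, assuming a hypothetical database name and ignoring upgrade handling; the demo's getFromZip would go through something like getDb() instead of touching the global directly.

let db;

function openDatabase() {
  return new Promise((resolve, reject) => {
    const request = indexedDB.open("zip-demo"); // hypothetical name
    request.onsuccess = () => resolve(request.result);
    request.onerror = () => reject(request.error);
  });
}

async function getDb() {
  // Re-open the database on demand so a restarted worker still works.
  if (!db) {
    db = await openDatabase();
  }
  return db;
}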
By the way, even without any change, your demo code works well with the following steps:
As a user, I open the demo page for the first time. (This ensures the install handler is called.)
I open the devtools as soon as I see the page content. (This prevents the service worker from going to the "stop" state.)
I click "Download zip archive to IndexedDB" button.
I click "Display image from zip archive" button.
Then the image is displayed properly.
Jake Archibald pointed out that I was confused about the meaning of the service worker's scope, explained here:
We call pages, workers, and shared workers clients. Your service worker can only control clients that are in-scope. Once a client is "controlled", its fetches go through the in-scope service worker.
In other words:
The scope affects which pages (clients) get placed under the service worker's control and not which fetched URLs get intercepted.
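In practice this means registering with a scope that covers the page itself, so the page becomes a controlled client, and then filtering by URL inside the fetch handler. A rough sketch (the placeholder response is obviously not the real zip-serving logic):

// Page: the scope must cover the page, not the URLs you want to intercept.
navigator.serviceWorker.register("worker.js", { scope: "/" });

// worker.js: decide per request whether to intercept.
self.addEventListener("fetch", (e) => {
  const url = new URL(e.request.url);
  if (url.pathname.startsWith("/localzip/")) {
    // Placeholder; the real handler would serve bytes from the zip archive.
    e.respondWith(new Response("intercepted by the service worker"));
  }
  // Everything else falls through to the network untouched.
});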
In my React application I have a service worker working in the browser as expected. Unfortunately, when the application is built and packaged with Electron, the service worker does not work. It seems to register, but the service worker throws an error: Uncaught (in promise) TypeError: Failed to fetch.
The only real difference I can see is the origin of the worker. In Electron it reports a file path, whereas the one in the browser reports the http://localhost path. This is using Electron 11. The application is written in React with Create React App, which has not been ejected.
This error originates from CORS (Cross-Origin Resource Sharing). The thing is, if you delete your service worker's content and leave only a simple console.log statement, the app works. That is because in your service worker's fetch handler you are manually sending requests with the Fetch API.
Unfortunately, the CORS rules that are set inside your application are lost inside your service worker if you use the Fetch API.
If you really want to have that fetch handler in your service worker, you have two choices:
You can customize the fetch handler so it doesn't do anything for index.html (which is the most important file for you). In that case, if you don't do anything and don't call event.respondWith, the request will not be intercepted by the service worker and your CORS headers will not be lost (see the sketch below).
This is the tougher option: set your CORS headers manually in the service worker's fetch handler.
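A hedged sketch of the first option: return early for navigations / index.html so the browser handles them natively, and only call event.respondWith for everything else. The exact matching condition here is an assumption.

self.addEventListener("fetch", (event) => {
  const url = new URL(event.request.url);
  // Assumed matching rule: navigations and index.html pass through untouched.
  if (event.request.mode === "navigate" || url.pathname.endsWith("/index.html")) {
    // Do nothing: without event.respondWith, the browser handles the request
    // itself and no headers are lost.
    return;
  }
  event.respondWith(fetch(event.request));
});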
Let's say I register my service worker directly in my main.js (I am not waiting for the load event or anything).
Let's say I have a file called login.js which I change often, and I don't use a hash fingerprint for it in the build process.
When I change the code in login.js and build, my service worker also changes, because I use Workbox and its injectManifest works with file contents: if something changes, it regenerates the manifest, so the service worker changes too.
The bad thing is that when a user refreshes the page, the new changes don't show up. I use skipWaiting. The reasons the changes don't show up are:
there might be an old service worker returning the old file before installation of the new service worker takes place.
the new service worker gets activated immediately, but its fetch listener won't be called for that load. (I don't want to use clients.claim() as it's kind of tricky.)
I saw how Jake Archibald does it: he programmatically refreshes the page after the user refreshes, which means two refreshes take place. I only want one refresh.
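For reference, the pattern being referred to is (roughly) a page-side listener that reloads once when a new worker takes control, with a guard against reload loops:

// In the page: reload once when the new service worker takes control.
let refreshing = false;
navigator.serviceWorker.addEventListener("controllerchange", () => {
  if (refreshing) return; // guard against reload loops
  refreshing = true;
  window.location.reload();
});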
When I visit https://vuejs.org/js/vue.min.js, I find it's loaded from a ServiceWorker. How does that work?
Screenshot of Chrome DevTools
A service worker is a piece of JavaScript code that your browser runs in the background. Service workers are given some special privileges over normal JavaScript code running in a browser, a commonly used privilege being their ability to intercept fetch events.
A fetch event fires any time the client requests a file. Many service workers download all requested files into a cache. The second time the same file is requested, the service worker steps in and returns the cached file without sending any HTTP request.
In this particular case, a service worker is storing the contents of the Vue file in a cache the first time it is downloaded, removing the need for an expensive network request on future page loads.
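A typical cache-first handler along these lines might look like the following sketch (the cache name is arbitrary):

self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) {
        // Second and later requests: answer from the cache, no network.
        return cached;
      }
      // First request: go to the network and store a copy for next time.
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open("v1").then((cache) => cache.put(event.request, copy)); // "v1" is an arbitrary cache name
        return response;
      });
    })
  );
});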
I am using a service worker to implement web push notifications. Whenever I change some code in the service worker, that change is not reflected in the service worker in the browser unless I delete cookies/cache.
Is this normal behaviour, or do I have to add some function to update the service worker?
Service worker files are cached for a maximum of 24 hours if a cache header is sent with the service worker file.
The first step is to set the cache headers so the service worker file is not cached (e.g. Cache-Control: max-age=0).
When a browser finds a new service worker, it will download and install it. The new worker won't take effect until all pages currently controlled by the old service worker are closed. For a normal user this isn't a problem. During development in Chrome you can use Ctrl+Shift+R to do a hard refresh, which forces the page not to be controlled by the service worker, allowing your new service worker to take control on the next refresh.
The final option is to use skipWaiting() in the install step and clients.claim() in the activate step to force a new service worker to instantly activate and take control of any open pages. I'd warn against this, as it's easy to get into weird scenarios.
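For completeness, a minimal sketch of what that last option looks like (again, use with care):

self.addEventListener("install", (event) => {
  // Activate the new worker without waiting for old tabs to close.
  self.skipWaiting();
});

self.addEventListener("activate", (event) => {
  // Take control of all currently open pages immediately.
  event.waitUntil(self.clients.claim());
});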
Update: Browsers are changing this default behaviour. Firefox will now ignore the cache header, and other browsers are likely to implement the same behaviour.
To answer your specific question: yes, the behaviour is intentional, and yes, you can call an update function. Use the update() method on the service worker registration. From MDN:
The update method of the ServiceWorkerRegistration interface attempts to update the service worker. It fetches the worker's script URL, and if the new worker is not byte-by-byte identical to the current worker, it installs the new worker. The fetch of the worker bypasses any browser caches if the previous fetch occurred over 24 hours ago.
Notice it says the fetch of the worker bypasses browser caches only if the previous fetch occurred over 24 hours ago, so you should disable caches while developing service workers.
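A small sketch of triggering that check manually from the page, assuming the worker lives at /service-worker.js:

navigator.serviceWorker.register("/service-worker.js").then((registration) => { // assumed script path
  // Ask the browser to re-fetch the worker script and install it if it changed.
  registration.update();
});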