We are building a progressive web application using a service worker. I'm facing a weird issue in the service worker's fetch event: I can see all my URLs in the console, but a few of them are not recognized by the service worker. I have verified that the service worker installed successfully and is responding to requests.
self.addEventListener('fetch', function(event) {
  console.log(event.request.url);
});
Any idea why this is happening?
Note: the URLs that are not being recognized are Ajax calls made using XMLHttpRequest.
I am serving my service worker from /worker.js and want it to intercept fetches to /localzip/*, but the fetch event is never fired.
I register it like this:
navigator.serviceWorker.register(
"worker.js",
{ scope: "/localzip/" }
);
And I claim all clients when it activates, so that I can start intercepting fetches from the current page immediately. I am sure that the service worker is activating and that clients.claim() is succeeding.
self.addEventListener("activate", (e) => {
// Important! Start processing fetches for all clients immediately.
//
// MDN: "When a service worker is initially registered, pages won't use it
// until they next load. The claim() method causes those pages to be
// controlled immediately."
e.waitUntil(clients.claim());
});
Chrome seems happy with it, and the scope appears correct.
My fetch event handler is very simple:
self.addEventListener("fetch", (e) => {
console.log("Trying to make fetch happen!");
});
From my application, after the worker is active, I try to make a request, e.g.,
const response = await fetch("/localzip/lol.jpg");
The fetch does not appear to trigger the above event handler, and the browser instead tries to make a request directly to the server and logs GET http://localhost:3000/localzip/lol.jpg 404 (Not Found).
I have tried:
Making sure the latest version of my worker code is running.
Disabling / clearing caches to make sure the fetch isn't being handled by the browser's cache.
Hosting from an HTTPS server. (Chrome is supposed to support service workers on plaintext localhost for development.)
What more does it want from me?
Live demo: https://rgov.github.io/service-worker-zip-experiment/
Note that the scope is slightly different, and the fetch is performed by creating an <img> tag.
First, let's confirm you are not using hard-reload while testing your code. If you use hard-reload, no requests will go through the service worker.
See https://web.dev/service-worker-lifecycle/#shift-reload
I also checked chrome://serviceworker-internals/ in Chrome, and your service worker has a fetch handler.
Then, let's check the code in detail.
After trying your demo page, I found that a network request is handled by the service worker after clicking the "Display image from zip archive" button, since I can see this log:
Service Worker: Serving archive/maeby.jpg from zip archive
Then, the error is thrown:
Failed to load ‘https://rgov.github.io/localzip/maeby.jpg’. A ServiceWorker passed a promise to FetchEvent.respondWith() that rejected with ‘TypeError: db is undefined’.
This is caused by the db object not being initialized properly. It would be worth confirming whether you see the same DB-related issue in your demo. If not, my following statements might be incorrect.
I try to explain some service worker mechanism alongside my understanding of your code:
Timing of install handler
Your DB-open code runs only in the install handler. This means the db object is assigned only when the install handler executes.
Please note that the install handler runs only when necessary: if a service worker already exists and does not need to update, the install handler won't be called. Hence, the db object in your code might not always be available.
Stop/Start Status
When the service worker has not handled events for a while (how long depends on the browser's design), it goes into a stopped/idle state.
When a stopped/idle service worker (you can check the state in the devtools) is started again, the global db object will be undefined.
In my understanding, this is why I see the error TypeError: db is undefined.
Whenever the service worker wakes up, the whole worker script is executed again; however, each event handler runs only when its event actually arrives.
How to prevent stop/idle for debugging?
Keep the devtools open for the page and the browser will keep the service worker alive.
Once you close the devtools, the service worker may go to the "stop" state soon after.
Why does the service worker stop?
Service workers are designed for handling requests. If no request/event needs to be handled, the service worker thread does not need to run, which saves resources.
Please note that both fetch and message events (among others) will wake the service worker.
See Prevent Service Worker from automatically stopping
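For example, a page can "ping" the worker with a message to wake it. A sketch (the "ping"/"pong" payload names here are made up for illustration):

```javascript
// Pure helper so the reply logic is easy to follow: only a "ping" gets a "pong".
function replyFor(data) {
  return data && data.type === "ping" ? { type: "pong" } : null;
}

// Page side (hypothetical):
// navigator.serviceWorker.controller?.postMessage({ type: "ping" });

// Worker side: receiving the message starts a stopped worker.
if (typeof self !== "undefined" && typeof clients !== "undefined") {
  self.addEventListener("message", (event) => {
    const reply = replyFor(event.data);
    if (reply && event.source) event.source.postMessage(reply);
  });
}
```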
Solution for the demo page
If the error is from the DB, the getFromZip function should open the DB whenever db is unavailable.
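A sketch of that lazy-open pattern (openDatabase stands in for your own IndexedDB-opening code; the name is hypothetical). Instead of assigning a global db only in the install handler, cache a promise and (re)open on demand:

```javascript
let dbPromise = null;

function getDb(openFn) {
  // Reuses an open/in-flight connection; opens a fresh one after the worker
  // restarts (when dbPromise is null again).
  if (!dbPromise) dbPromise = openFn();
  return dbPromise;
}

// Inside getFromZip / the fetch handler, instead of reading the global db:
// const db = await getDb(openDatabase);
```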
By the way, even without any changes, your demo code works well with the following steps:
As a user, I open the demo page for the first time. (This ensures the install handler is called.)
I open the devtools as soon as I see the page content. (This prevents the service worker from going to the "stop" state.)
I click "Download zip archive to IndexedDB" button.
I click "Display image from zip archive" button.
Then I can see the image is shown properly.
Jake Archibald pointed out that I was confused about the meaning of the service worker's scope, explained here:
We call pages, workers, and shared workers clients. Your service worker can only control clients that are in-scope. Once a client is "controlled", its fetches go through the in-scope service worker.
In other words:
The scope affects which pages (clients) get placed under the service worker's control, not which fetched URLs get intercepted.
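So one fix is to register the worker with a scope that covers the page itself, then filter by URL inside the fetch handler. A sketch, assuming worker.js is served from the site root (handleZipRequest is a hypothetical helper):

```javascript
// Page:
// navigator.serviceWorker.register("/worker.js", { scope: "/" });

// worker.js:
function isZipRequest(urlString) {
  return new URL(urlString).pathname.startsWith("/localzip/");
}

if (typeof self !== "undefined" && typeof clients !== "undefined") {
  self.addEventListener("fetch", (e) => {
    if (isZipRequest(e.request.url)) {
      e.respondWith(handleZipRequest(e.request)); // hypothetical handler
    }
    // Other requests fall through to the network as usual.
  });
}
```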
I'm looking to make a web extension for Firefox that stores HTML pages and other resources in local storage and serves them for offline viewing. To do that, I need to intercept requests that the browser makes for the pages and the content in them.
Problem is, I can't figure out how to do that. I've tried several approaches:
The webRequest API doesn't allow fulfilling a request entirely: it can only block or redirect a request, or edit the response after it has completed.
Service Workers can listen to the fetch event, which can do what I want, but calling navigator.serviceWorker.register in an addon page (the moz-extension://<id> domain) results in an error: DOMException: The operation is insecure. Relevant Firefox bug
I could possibly set up the service worker on a self hosted domain with a content script, but then it won't be completely offline.
Is there an API that I missed that can intercept requests from inside a web extension?
In my React application I have a service worker working in the browser as expected. Unfortunately, when the application is built and packaged with Electron, the service worker does not work. It seems to register, but then it throws an error: uncaught in promise typeerror failed to fetch.
The only real difference I can see is the origin of the worker: in Electron it is a file path, whereas in the browser it is the http://localhost path. This is Electron 11, and the application is written in React with create-react-app, which has not been ejected.
This error originates from CORS (Cross-Origin Resource Sharing). The thing is, if you delete your service worker's content and leave just a simple console.log statement, the app works. That is because the fetch handler in your service worker manually sends requests with the Fetch API.
Unfortunately, the CORS rules set inside your application are lost inside your service worker when you use the Fetch API.
If you really want to have that fetch handler in your service worker you have two choices:
1. You can customize the fetch handler so it doesn't do anything for index.html (the most important file for you). If you don't call event.respondWith, the request will not be intercepted by the service worker and your CORS headers will not be lost.
2. The tougher option: set your CORS headers manually in the service worker's fetch handler.
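A sketch of the first option: decide per request, and simply return (no respondWith) for navigation requests, so the browser fetches index.html itself and the CORS behavior is preserved:

```javascript
// Skip navigation requests (index.html) entirely; intercept everything else.
function shouldIntercept(request) {
  return request.mode !== "navigate";
}

if (typeof self !== "undefined" && typeof clients !== "undefined") {
  self.addEventListener("fetch", (event) => {
    if (!shouldIntercept(event.request)) return; // default browser handling
    event.respondWith(fetch(event.request));
  });
}
```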
I am looking to synchronize an IndexedDB in the background for offline access. I want to do this while the application is online, and have it run in the background where the user doesn't even know it is running.
I looked at backgroundSync with service workers, but that appears to be for offline usage.
What I am really looking for is something like a cron task in the browser, so I can synchronize data from a remote server to a local in-browser database.
Here's a different approach: fetch the JSON results from the backend's API, store them in localStorage, and pass the results array to a custom function to render it.
If localStorage is not available in the browser, it fetches the results every time the function is called, as it also does when the "force" parameter is set to true.
It also uses a cookie to store the timestamp of the last data retrieval. The code below is set for a duration of 15 minutes (900,000 milliseconds).
It also assumes that the API's JSON result has a .data member containing the array of data to be cached/updated.
It requires jQuery for $.ajax (and the jquery-cookie plugin for $.cookie), but I'm sure it can easily be refactored to use fetch, axios, or any other approach:
function getTrans(force = false) {
  var TRANS = undefined;
  if (force || window.localStorage === undefined ||
      (window.localStorage !== undefined && localStorage.getItem("TRANS") === null) ||
      $.cookie('L_getTrans') === undefined ||
      ($.cookie('L_getTrans') !== undefined && Date.now() - $.cookie('L_getTrans') > 900000)) {
    $.ajax({
      url: 'api/',
      type: 'post',
      data: { 'OP': 'getTrans' },
      success: function (result) {
        TRANS = result.data ?? [];
        if (window.localStorage !== undefined) {
          localStorage.setItem('TRANS', JSON.stringify(TRANS));
          $.cookie('L_getTrans', Date.now());
        }
        renderTransactionList(TRANS);
      },
      error: function (error) { console.error(error); }
    });
  } else {
    TRANS = JSON.parse(localStorage.getItem('TRANS'));
    renderTransactionList(TRANS);
  }
}
Hope it helps some of you, or even amuse.
For your purpose you probably need a web worker instead of a service worker.
While a service worker acts as a proxy for connections, a web worker can be more generic.
https://www.quora.com/Whats-the-difference-between-service-workers-and-web-workers-in-JavaScript
It has some limitations interacting with browser objects, but HTTP connections and IndexedDB are allowed.
Pay particular attention to the browser's cache during development: even Ctrl+F5 does not reload web worker code.
Force reload/prevent cache of Web Workers
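A minimal sketch of that idea: a dedicated web worker that polls the server on a timer while the page is open. The /api/changes endpoint and the saveToIndexedDB helper are hypothetical placeholders for your own API and persistence code.

```javascript
// main.js:
// const worker = new Worker("sync-worker.js");

// sync-worker.js:
const SYNC_INTERVAL_MS = 15 * 60 * 1000; // poll every 15 minutes

async function syncOnce() {
  const res = await fetch("/api/changes"); // hypothetical endpoint
  const data = await res.json();
  await saveToIndexedDB(data);             // hypothetical helper
}

if (typeof importScripts === "function") {
  // Only start the timer when actually running inside a worker.
  syncOnce();
  setInterval(syncOnce, SYNC_INTERVAL_MS);
}
```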
I believe what you're going for is a Progressive Web Application (PWA).
To build on Claudio's answer, performing background fetches is best done with a web worker. Web workers are typically stateless, so you would need to adapt your project to note what data was loaded last. Using the History API and (lazily) loading other page contents via JavaScript means that the user wouldn't have to exit the page.
A service worker can monitor when your application is online or offline, and can call methods to pause or continue downloads to the IndexedDB.
As a side note, it is advisable to load only what your users need, as excessive background loading may put off some users.
Further Reading.
Mozilla's PWA Documentation
An example of Ajax loading and the History API from Mozilla
I looked at backgroundSync with service workers but that appears to be for offline usage.
No, it is not just for offline usage! And the answer above about PWAs and service workers is also right!
A solution in your case:
You can use navigator.onLine to check the internet connection, like this:
window.addEventListener('offline', function() {
alert('You have lost internet access!');
});
If the user loses their internet connection, they'll see an alert. Next, we'll add an event listener watching for the user to come back online.
window.addEventListener('online', function() {
if(!navigator.serviceWorker && !window.SyncManager) {
fetchData().then(function(response) {
if(response.length > 0) {
return sendData();
}
});
}
});
A good pattern for detection:
if(registration.sync) {
registration.sync.register('example-sync')
.catch(function(err) {
return err;
})
} else {
if(navigator.onLine) {
requestSync();
} else {
alert("You are offline! When your internet returns, we'll finish up your request.");
}
}
Note: you may need to limit your application's activity while offline.
I'm trying to send (push to the DB) data with the sync registration below, but things like Ajax, sockets, or the Cache API can also be used.
function requestSync() {
  navigator.serviceWorker.ready.then(swRegistration => swRegistration.sync.register('todo_updated'));
}
Maybe try a socket (PHP) together with a setTimeout (JS).
Let me explain: when you enter the page, it uses IndexedDB, and at the same moment starts a setTimeout (for example, every 30 seconds) that tries to communicate with the socket. If that succeeds, the web page synchronizes with IndexedDB.
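The polling idea above, sketched in JS only (the sync endpoint is hypothetical): poll every 30 seconds and resync IndexedDB whenever the server is reachable.

```javascript
const POLL_MS = 30 * 1000;

function schedulePoll(pollFn, delay = POLL_MS) {
  return setTimeout(async () => {
    try {
      await pollFn(); // e.g. fetch("/sync") then write the changes to IndexedDB
    } catch (err) {
      // Server unreachable / offline: ignore and retry on the next tick.
    }
    schedulePoll(pollFn, delay); // keep polling
  }, delay);
}
```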
Service Worker Push Notifications
Given your reference to cron, it sounds like you want to sync a remote client at times when they are not necessarily visiting your site. Service workers are the correct answer for running an operation in a single context regardless of how many tabs the client might have open, and, more specifically, service workers with push notifications are necessary if the client might have no tabs open to the origin site at the time the sync should occur.
Resources to setup web push notifications:
There are many guides for setting up push notifications, e.g.:
https://developers.google.com/web/fundamentals/codelabs/push-notifications/
which links to test push services like:
https://web-push-codelab.glitch.me
so you can test sending a push before you have configured your own server to send pushes.
Once you have your test service worker for a basic push notification, you can modify the service worker's push handler to call back to your site and do the necessary DB sync.
Example that triggers a sync into an IndexedDB via a push
Here is a PouchDB example from a few years back, used to show that PouchDB could use its plugins for HTTP and IndexedDB from within the push handler:
self.addEventListener('push', (event) => {
if (event.data) {
let pushData = event.data.json();
if (pushData.type === 'couchDBChange') {
logDebug(`Got couchDBChange pushed, attempting to sync to: ${pushData.seq}`);
event.waitUntil(
remoteSync(pushData.seq).then((resp) => {
logDebug(`Response from sync ${JSON.stringify(resp, null, 2)}`);
})
);
} else {
logDebug('Unknown push event has data and here it is: ', pushData);
}
}
});
Here:
PouchDB inside a service worker receives only a reference to a sync point in the push itself.
It now has a background browser context whose origin is the server that registered the service worker.
Therefore, it can use its HTTP sync wrapper to contact the DB provided over HTTP by the origin server.
This is used to sync its contents, which are stored in IndexedDB via its IndexedDB wrapper.
So:
Waking on a push and contacting the server over HTTP to sync to a reference can be done by a service worker to update an IndexedDB implementation, as long as the client agreed to receive pushes and has a browser with internet connectivity.
A client that does not agree to pushes, but has service workers, can still centralize/background its sync operations in the service worker while tabs are visiting the site, either with messages or with chosen URLs that represent cached sync results. (For a single-tab SPA visitor, the backgrounding performance benefits are similar to a web worker's DB performance.) But pushes, messages, and fetches from the origin are all brought together in one context, which can then provide synchronization to prevent repeated work.
Safari appears to have fixed its IndexedDB-in-service-workers bug, but some browsers may still be a bit unreliable or have unfortunate edge cases. (For example, hitting DB quotas should be tested carefully, as they can cause problems when a quota is hit in an unexpected context.)
Still, this is the only reliable way to get multiple browsers to call back and perform a sync to your server without building custom extensions, etc., for each vendor separately.
When I visit https://vuejs.org/js/vue.min.js, I find it's loaded from a ServiceWorker. How does that work?
[Screenshot of Chrome DevTools]
A service worker is a piece of JavaScript code that your browser runs in the background. Service workers are given some special privileges over normal JavaScript code running in a browser; a commonly used privilege is their ability to intercept fetch events.
A fetch event fires any time the client requests a file. Many service workers download all requested files into a cache; the second time the same file is requested, the service worker steps in and returns the cached file without sending any HTTP request.
In this particular case, a service worker is storing the contents of the vue file into a cache the first time it is downloaded, removing the need for an expensive network request on future page loads.
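A minimal cache-first handler of the kind described above ("my-cache" is an arbitrary cache name, not necessarily what vuejs.org uses):

```javascript
async function cacheFirst(request, cache) {
  const cached = await cache.match(request);
  if (cached) return cached;                  // repeat visit: serve from cache
  const response = await fetch(request);
  await cache.put(request, response.clone()); // first visit: fill the cache
  return response;
}

if (typeof self !== "undefined" && typeof caches !== "undefined") {
  self.addEventListener("fetch", (event) => {
    event.respondWith(
      caches.open("my-cache").then((cache) => cacheFirst(event.request, cache))
    );
  });
}
```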