Service worker - Failed to fetch data (from dropdown list)

I have an app where I need to select an option from a dropdown list and then send it via a button, which fills some data into a table. These options come from a hosted SQL database.
I'm trying to make the app work offline, and I'm using a service worker that caches all pages without problems, but the issue here is the dropdown options: whenever I select one and push the send button, I get the connection error.
This is the console message:
Uncaught (in promise) TypeError: Failed to fetch
at onFetch (service-worker.js:77:30)
This is the code; line 77 is the last one, the return cachedResponse line:
async function onFetch(event) {
    const cache = await caches.open(cacheName);
    let cachedResponse = await cache.match(event.request);
    return cachedResponse || fetch(event.request); // here it fails to fetch
}
What I have tried/observed
All the options from the SQL db seem to be cached offline, as they still appear in the dropdown.
I used an IndexedDB to store the SQL data, just in case I have to use it for this.
From searching around, I suspect the issue may be that when an option is selected and sent, the original url changes to carry this information, so the fetch doesn't match the cached request exactly, but I'm not sure (see the sketch after this list).
I suspect I will have to change all the queries against the SQL host into queries against the IndexedDB, even when online.
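If the query string really is the culprit, one variant I could try is matching the cache while ignoring the search part of the url. This is only a sketch based on that assumption, and it only helps if the cached body is valid for every option (otherwise the option data belongs in the IndexedDB):
async function onFetch(event) {
    const cache = await caches.open(cacheName);
    // ignoreSearch makes cache.match treat /data?option=1 and /data?option=2
    // as the same cached entry.
    const cachedResponse = await cache.match(event.request, { ignoreSearch: true });
    try {
        return cachedResponse || await fetch(event.request);
    } catch (err) {
        // Offline and nothing cached: return a controlled response instead of
        // letting the promise reject with "Failed to fetch".
        return new Response('Offline and not cached', { status: 503 });
    }
}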
What I'm expecting
When offline, I'm expecting to be able to select an option from the dropdown, push the send button, and see the data filled into the table (all on the same page).
I'm pretty lost here on how to proceed, and some enlightenment would be very helpful.

Related

CacheDidUpdate not called after BackgroundSync Sync completed event

This happens on Chrome v. 100 on a MacBook.
I'm using Workbox's StaleWhileRevalidate together with the BackgroundSyncPlugin and the BroadcastUpdatePlugin. I have the following scenario:
1. User opens the app offline
2. App loads the cached response
3. User gets an internet connection
4. BackgroundSyncPlugin retries the failed StaleWhileRevalidate request and succeeds
5. BackgroundSyncPlugin shows a Sync completed event in Chrome's dev tools background-sync panel
6. Cache is updated
7. BroadcastUpdatePlugin should broadcast that the cache was updated
However, I never receive the broadcast. I do receive the broadcast if I refresh the page with an internet connection.
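For reference, the page listens for the broadcast roughly like this (a sketch assuming Workbox v6's default CACHE_UPDATED message format):
navigator.serviceWorker.addEventListener('message', (event) => {
    // BroadcastUpdatePlugin posts a CACHE_UPDATED message to window clients
    // whenever it sees the cached response change.
    if (event.data && event.data.type === 'CACHE_UPDATED') {
        const {updatedURL} = event.data.payload;
        console.log(`A newer version of ${updatedURL} is available.`);
    }
});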
Code example:
import {registerRoute} from 'workbox-routing';
import {StaleWhileRevalidate} from 'workbox-strategies';
import {ExpirationPlugin} from 'workbox-expiration';
import {BackgroundSyncPlugin} from 'workbox-background-sync';
import {BroadcastUpdatePlugin} from 'workbox-broadcast-update';

registerRoute(
    /^some-working-regex/,
    new StaleWhileRevalidate({
        cacheName: 'some-cache-name',
        matchOptions: {ignoreVary: true},
        plugins: [
            new ExpirationPlugin({maxEntries: 20, purgeOnQuotaError: false}),
            new BackgroundSyncPlugin('some-queue-name', {maxRetentionTime: 24 * 60}),
            new BroadcastUpdatePlugin(),
        ],
    }),
    'GET'
);
And I do see the sync event in the dev tools after turning the internet back on.
Expected outcome:
I expect BroadcastUpdatePlugin to fire when the cache is updated successfully by the BackgroundSyncPlugin after the user obtains connectivity again.
Am I missing a step here?
I've also created a custom plugin with a cacheDidUpdate handler, and that handler also doesn't get called (so the problem doesn't seem to be specific to the BroadcastUpdatePlugin).
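The custom plugin was essentially just a logger along these lines (a reconstructed sketch, not the exact code):
const logCacheUpdatesPlugin = {
    // Workbox calls cacheDidUpdate after it writes a changed response into
    // the cache; this handler only logs that it fired.
    cacheDidUpdate: async ({cacheName, request, oldResponse, newResponse}) => {
        console.log(`cacheDidUpdate fired for ${request.url} in ${cacheName}`);
    },
};
It was added to the plugins array in the registerRoute() call above.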
Thanks!

Managing Server-Sent Events with a Service Worker

I am building a web app to display on my iPad to control my Raspberry Pi, which acts as an audio recorder. Part of the need is to keep an EventSource open so that the server can send Server-Sent Events. A specific instance of the app can grab control of the recording process, but will lose control if the server sees the SSE link close. This is just protection against a client disappearing and leaving control held (control of the process does need to be renewed at least every 5 minutes, but I don't really want to wait that long in the normal case of someone just closing the browser tab).
Part of my need is to push the browser to the background so I can then open up the camera and record a video.
I built this app and had it almost working see https://github.com/akc42/pi_record.git (master branch).
Until I pushed the browser to the background and found iOS shut down the page and broke the SSE link.
I tried restructuring to use a private web worker to manage the SSE link, passing messages between the web worker and the main JavaScript thread. Again it was almost working (see the workers branch of the above repository), but that got shut down too!
My last thought is to use a service worker, but how to structure the app?
Clearly the service worker must act as a client to the server for the Server-Sent Events. It must keep the connection open, but it also needs to keep track of multiple tabs in the browser, which may or may not try to grab control of the interface, and only allow one tab to do so.
I can think of three approaches, but it's difficult to see which is better. I have never even seen any mention of approaches 2 and 3 below, yet it seems to me that one of those two might actually be the simplest.
Approach 1
Move the code I have now for separate web workers into the service worker. However, we will need to add some form of ID to the message passing between window and service worker, so I can record which tab actually grabbed control of the interface and therefore exclude other tabs from doing so (i.e. simulate a failed attempt to take control).
As far as I can work out, MessageEvent.ports[0] could be a unique object which I could store in a Map somewhere, but I am not entirely convinced that the MessageChannel wouldn't close if the browser moved to the background. A sketch of this idea follows.
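The bookkeeping I have in mind would look roughly like this (a sketch; the message names are invented, and each tab is assumed to transfer one MessageChannel port to the service worker up front):
// service-worker.js -- sketch only
let controllingPort = null; // the port of whichever tab currently holds control

self.addEventListener('message', (event) => {
    const port = event.ports[0];
    if (!port) return;
    port.onmessage = ({data}) => {
        if (data.type === 'take-control') {
            // Grant control only if no other tab currently holds it
            // (i.e. simulate a failed attempt to take control).
            const granted = controllingPort === null || controllingPort === port;
            if (granted) controllingPort = port;
            port.postMessage({type: 'control-result', granted});
        } else if (data.type === 'release-control' && controllingPort === port) {
            controllingPort = null;
        }
    };
});
Of course, controllingPort only lives as long as the service worker itself does, which is part of what worries me.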
Approach 2
Have a set of phantom urls in the service worker that simulate all the different message types (and parameters) that were previously sent by the tab to its private web worker.
The fetch event provides a clientId (which I can use to distinguish which tab actually grabbed control), and which I can use to call clients.get(clientId).postMessage() (or clients.matchAll() when a broadcast response is needed).
The code would be something like:
self.addEventListener('fetch', (event) => {
    const requestURL = new URL(event.request.url);
    if (/^\/api\//.test(requestURL.pathname)) {
        event.respondWith(fetch(event.request)); // all api requests are a direct pass through
    } else if (/^\/service\//.test(requestURL.pathname)) {
        /*
            process these like message passing, with one extra url to say
            the client is going away.
        */
        if (urlRecognised) {
            event.respondWith(new Response('OK', {status: 200}));
        } else {
            event.respondWith(new Response(`Unknown request ${requestURL.pathname}`, {status: 404}));
        }
    } else {
        // respondWith needs a promise, so the async function is invoked immediately
        event.respondWith((async () => {
            const cache = await caches.open('recorder');
            const cachedResponse = await cache.match(event.request);
            const networkResponsePromise = fetch(event.request);
            event.waitUntil((async () => {
                const networkResponse = await networkResponsePromise;
                await cache.put(event.request, networkResponse.clone());
            })());
            // Return the cached response if we have one, otherwise the network response.
            return cachedResponse || networkResponsePromise;
        })());
    }
});
The top of the fetch event just passes the standard api requests made by the client straight through. I can't cache these (although I could be more sophisticated and perhaps pre-reject those not supported).
The second section matches the phantom urls /service/something.
The last section is taken from Jake Archibald's offline cookbook: it tries to use the cache, but updates the cache in the background if any of the static files have changed.
Approach 3
Similar to the approach above, in that we would have phantom urls and use the clientId as a unique marker, but actually try to simulate a server-sent event stream with one url.
I'm thinking the code would be more like:
...
} else if (/^\/service\//.test(requestURL.pathname)) {
    const stream = new TransformStream();
    const writer = stream.writable.getWriter();
    event.respondWith((async () => {
        let streamFinished;
        const streamFinishedPromise = new Promise((resolve) => { streamFinished = resolve; });
        // keep the service worker alive until the link is eventually closed
        event.waitUntil(streamFinishedPromise);
        // pump messages from the server-sent event stream into the response
        (async () => {
            try {
                while (true) writer.write(await nextMessageFromServerSideEventStream());
            } catch (e) {
                writer.close();
                streamFinished();
            }
        })();
        return new Response(stream.readable, {status: 200}); // probably need event-stream headers too
    })());
}
I am thinking that approach 2 could be the simplest given where I am now, but I am concerned that I can find nothing, when searching for how to use service workers, that discusses this phantom url approach.
Can anyone comment on any of these approaches and provide guidance on how best to program the tricky bits? For instance, does the Approach 1 message channel close when the browser is moved to the background on an iPad? How do you really keep a response channel open, and does it get closed when the browser moves to the background in Approach 3?
The simple truth is that none of these approaches will work. What I didn't realise when I asked the question is that a service worker is re-run by the browser whenever there is something to do, and that run only lasts for the length of time needed to process an event. Although event.waitUntil() can prolong that, the only reference I can find to how long says the browser is still at liberty to cancel it if it appears it might never settle. I can't imagine that over a period of several hours it won't get cancelled, so an EventSource held in the service worker will effectively have its link to the server terminated.
So my only option to achieve what I want is to have the server carry on when the EventSource closes, and find some other mechanism to release resources held on behalf of the client.

Bypass Service-Worker caching

I have a progressive web app which speaks to an API. The calls to this API get cached by a service worker, which works great.
But now I want to add a reload button, which ideally forces the service worker to bypass the cache, update it if the request succeeds, and not return the cached result if a connection could not be made.
I am a bit unsure how to solve this. I am using sw-toolbox.
All requests go through the fetch callback, which receives a request object. Thus, before returning a cached response, you can look for an additional header (you need to include it in your request to the API) and skip the logic that returns the cached response.
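For instance (the header name here is an arbitrary choice, not part of sw-toolbox):
self.addEventListener('fetch', (event) => {
    // 'x-bypass-cache' is an invented header; any custom header works.
    if (event.request.headers.get('x-bypass-cache')) {
        // Go straight to the network and let the request fail when offline,
        // rather than falling back to the cache.
        event.respondWith(fetch(event.request));
        return;
    }
    // ... normal cached-response handling here ...
});
The reload button would then issue its request with fetch(url, {headers: {'x-bypass-cache': '1'}}).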
Based on your description, you are using the Cache Storage API. It can be accessed from the app frontend independently of sw-toolbox:
function onReloadButtonClicked(event) {
    // Check for Cache Storage support
    if ('caches' in window) {
        // Update the cache if the network query is successful
        caches.open('your_cache_name')
            .then(function(cache) {
                return cache.add('your_url');
            }).catch(function(err) {
                // Do something with the error
            });
    }
}

RestKit: How to reject an entire mapping when a value is not valid

So I'm trying to support offline usage of an iOS application I'm making that uses a REST API. Here's what I have so far:
A server running with a REST interface to manipulate my data model.
An iOS application that uses RestKit to retrieve the data stored on my server.
RestKit stores server responses locally in Core Data.
When the server is unavailable, I still want users to be able to update the data model, and when the server becomes available again I want those updates to be pushed to the server.
The issue I've run into is that when a value that has been updated locally (but not yet pushed to the server) is received from the server, it is overwritten with the contents of the server's response. To prevent this, I am trying to cancel the save to my local storage if the new 'updatedAt' date is before the current 'updatedAt' date.
This is my current validation function:
- (BOOL)validateUpdatedAt:(id *)ioValue error:(NSError **)outError
{
    if (self.updatedAt && [self.updatedAt compare:((NSDate *)*ioValue)] == NSOrderedDescending) {
        *outError = [NSError errorWithDomain:RKErrorDomain code:RKMappingErrorMappingDeclined userInfo:nil];
        return NO;
    }
    return YES;
}
This works, but only prevents this individual value from being changed. I want the entire update of that object to be canceled if this one field is invalid. How do I do this?
The support available from RestKit is to set discardsInvalidObjectsOnInsert so that the whole object is discarded when validation fails. But this won't work for you, as it only applies to NSManagedObject instances that are being inserted, because it uses validateForInsert:.
You could look at using validateForUpdate: to perform a similar check and then revert the changes, but RestKit isn't really offering you anything in terms of aborting the update in your case.

How SignalR manages connection between Postbacks

1> Just want to understand how SignalR 1.x functions in a particular scenario.
Let's say we have 10 clients connected to a Hub, and one of the connected clients, say client-1, performs a postback, so OnDisconnected is called and then OnConnected is called, right?
What happens if client-2 tries to send a message to client-1 exactly in between, i.e. the message is sent after client-1 has disconnected and before it has connected again? Will client-1 miss the message, or is there an internal mechanism which makes sure client-1 does not miss the message sent by client-2?
2> The second query I have is that I'm trying to pass a query string using the following code:
var chat = $.connection.myHub;
$.connection.myHub.qs = { "token": "hello" };
but I am not able to retrieve it on the server side from the Context object
using
Context.QueryString.AllKeys
I even tried
var chat = $.connection.myHub;
$.connection.myHub.qs = "token=hello" ;
But it does not work either; when I check the keys, token is not present in AllKeys.
I would appreciate it if someone could help me out.
1: If a postback occurs, the client will disconnect and then connect again. However, when the client reconnects it will have a different Connection Id than it had prior to the postback. Therefore, any message sent to the old connection id will be missed, because when the user's browser connects again it will be known as a different client.
2: You're trying to set the query string on the hub proxy, not the connection. What you should be doing is:
$.connection.hub.qs = { foo: "bar" };
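Note also that qs must be set before start() is called, because the values are sent with the connection's negotiate request. A fuller sketch:
var chat = $.connection.myHub;
// Set the query string on the connection, before starting it.
$.connection.hub.qs = { token: "hello" };
$.connection.hub.start().done(function () {
    // Server side, Context.QueryString["token"] should now return "hello".
    console.log("connected");
});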
