Service worker sync event fires immediately

I am using Chrome 66.0.3359.181 (64-bit). I am running the following code:
navigator.serviceWorker.ready
  .then(sw => {
    addData('sync-posts', post)
      .then(() => {
        return sw.sync.register('sync-new-posts');
      })
      .then(() => {
        const snackbarContainer = document.querySelector('#confirmation-toast');
        const data = {message: 'Your post was saved for syncing!'};
        snackbarContainer.MaterialSnackbar.showSnackbar(data);
      })
      .catch(err => {
        console.log(err);
      });
  });
However, even when I have disconnected my Wi-Fi, the sync event still fires immediately.
ANSWER: I actually figured this out while I was writing the question, but since I didn't find anyone else answering this type of question, and it took an hour or so of my time, I thought I'd post it anyway.
I had a VM network adapter enabled (for Docker), and that was causing Chrome to attempt the sync even though that connection didn't go anywhere useful.
I then also discovered that, provided I threw an error from the sync event handler when it failed, the browser would retry the sync. Originally I was catching errors and just logging them to the console, which made the sync manager think the sync had completed.
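For illustration, a minimal sketch of a sync handler that propagates failure so the browser retries (sendPosts() is a hypothetical function returning a promise that rejects on network failure):

self.addEventListener('sync', event => {
  if (event.tag === 'sync-new-posts') {
    event.waitUntil(
      sendPosts().catch(err => {
        console.log(err);
        // Re-throw so the sync is marked as failed and the browser retries;
        // swallowing the error here would make the sync count as completed.
        throw err;
      })
    );
  }
});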

Related

Twilio worker client disconnect reason

I'm trying to ensure a single worker session/window at a time.
To achieve this I have added the parameter closeExistingSessions to createWorker, and it disconnects the other workerClient's websocket as expected.
I'm just wondering if there is a way to know the disconnect reason from the disconnected event listener, so that I can show a relevant message to the end user.
const worker = new Twilio.TaskRouter.Worker(WORKER_TOKEN);
worker.on("disconnected", function(<ANY_ERROR_CODE_OR_SOMETHING_HERE?!>) {
  console.log("Websocket has disconnected");
});
We do get the reason (why the Worker websocket disconnected) as a parameter to the disconnected callback.
const worker = new Twilio.TaskRouter.Worker(WORKER_TOKEN);
worker.on("disconnected", function(reason) {
  console.log(reason.message);
});
And the reason given for disconnecting due to existing sessions is 'Websocket disconnected due to new connection being registered'.
Hope Twilio will keep their docs up to date.

CacheDidUpdate not called after BackgroundSync Sync completed event

This happens on Chrome v100 on a MacBook.
I'm using Workbox's StaleWhileRevalidate strategy together with the BackgroundSyncPlugin and the BroadcastUpdatePlugin. I have the following scenario:
1. User opens the app offline
2. App loads the cached response
3. User gets an internet connection
4. BackgroundSyncPlugin retries the failed StaleWhileRevalidate request and succeeds
5. BackgroundSyncPlugin shows a "Sync completed" event in Chrome's dev tools background-sync panel
6. Cache is updated
7. BroadcastUpdatePlugin should broadcast that the cache was updated
However, I never receive the broadcast. I do receive the broadcast if I refresh the page with internet connection.
Code example:
registerRoute(
  /^some-working-regex/,
  new StaleWhileRevalidate({
    cacheName: 'some-cache-name',
    matchOptions: {ignoreVary: true},
    plugins: [
      new ExpirationPlugin({maxEntries: 20, purgeOnQuotaError: false}),
      new BackgroundSyncPlugin('some-queue-name', {maxRetentionTime: 24 * 60}),
      new BroadcastUpdatePlugin()
    ]
  }),
  "GET"
)
And I do see the sync event after turning on the internet.
Expected outcome:
I expect BroadcastUpdatePlugin to fire when the cache is updated successfully by the BackgroundSyncPlugin after the user obtains connectivity again.
Am I missing a step here?
I've also created a custom plugin with a cacheDidUpdate handler, and that handler doesn't get called either (so the problem doesn't seem to be specific to the BroadcastUpdatePlugin).
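For reference, a minimal sketch of the kind of custom plugin meant here (the object shape follows Workbox's plugin interface; the logging is just illustrative):

const loggingPlugin = {
  // Workbox invokes cacheDidUpdate after it writes an updated response to the
  // cache; in the scenario above this never fires after the background-sync replay.
  cacheDidUpdate: async ({cacheName, request}) => {
    console.log(`cache ${cacheName} updated for ${request.url}`);
  }
};

This plugin would be added alongside the others in the plugins array above.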
Thanks!

Managing Server-Sent Events with a Service Worker

I am building a web app to display on my iPad to control my Raspberry Pi, which acts as an audio recorder. Part of the need is to keep an EventSource open so that the server can send Server-Sent Events. A specific instance of the app can grab control of the recording process, but will lose control if the server sees the SSE link close. This is just protection against a client disappearing and leaving control held (control of the process does need to be renewed at least every 5 minutes, but I don't really want to wait that long in the normal case of someone just closing the browser tab).
Part of my need is to push the browser to the background so I can then open up the camera and record a video.
I built this app and had it almost working; see https://github.com/akc42/pi_record.git (master branch).
That was until I pushed the browser to the background and found iOS shut down the page and broke the SSE link.
I tried restructuring to use a private web worker to manage the SSE link, passing messages between the web worker and the main JavaScript thread. Again it was almost working (see the workers branch of the above repository), but that got shut down too!
My last thought is to use a service worker, but how to structure the app?
Clearly the service worker must act as a client to the server for the Server-Sent Events. It must keep the connection open, but it also needs to keep track of multiple tabs in the browser which may or may not try to grab control of the interface, and only allow one tab to do so.
I can think of three approaches, but it's difficult to see which is better. I have never even seen any mention of approaches 2 and 3 below, but it seems to me that one of these two might actually be the simplest.
Approach 1
Move the code I have now for the separate web workers into the service worker. However, we will need to add some form of ID to the message passing between window and service worker, so I can record which tab actually grabbed control of the interface and therefore exclude other tabs from doing so (i.e. simulate a failed attempt to take control).
As far as I can work out, MessageEvent.ports[0] could be a unique object which I could store in a Map somewhere, but I am not entirely convinced that the MessageChannel wouldn't close if the browser moved to the background.
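For what it's worth, a minimal sketch of that bookkeeping on the service worker side, assuming each tab transfers a MessagePort and sends a hypothetical 'take-control' message:

let controllingPort = null; // the port of whichever tab currently holds control

self.addEventListener('message', (event) => {
  const port = event.ports[0];
  if (event.data && event.data.type === 'take-control') {
    if (controllingPort === null) {
      controllingPort = port; // remember which tab grabbed control
      port.postMessage({granted: true});
    } else {
      port.postMessage({granted: false}); // simulate a failed attempt to take control
    }
  }
});

One caveat: module-level state like controllingPort is lost whenever the browser stops the idle service worker, which already hints at the lifetime problem discussed in the answer below.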
Approach 2
Have a set of phantom URLs in the service worker that simulate all the different message types (and parameters) that were previously sent by the tab to its private web worker.
The fetch event provides a clientId (which I can use to distinguish who actually grabbed control), which I can then use to do Clients.get(clientId).postMessage() (or Clients.matchAll() when a broadcast response is needed).
Code would be something like
self.addEventListener('fetch', (event) => {
  const requestURL = new URL(event.request.url);
  if (/^\/api\//.test(requestURL.pathname)) {
    event.respondWith(fetch(event.request)); // all api requests are a direct pass-through
  } else if (/^\/service\//.test(requestURL.pathname)) {
    /*
      process these like message passing, with one extra url to say the client
      is going away. urlRecognised is a placeholder for matching the phantom url set.
    */
    if (urlRecognised) {
      event.respondWith(new Response('OK', {status: 200}));
    } else {
      event.respondWith(new Response(`Unknown request ${requestURL.pathname}`, {status: 404}));
    }
  } else {
    event.respondWith((async () => {
      const cache = await caches.open('recorder');
      const cachedResponse = await cache.match(event.request);
      const networkResponsePromise = fetch(event.request);
      event.waitUntil((async () => {
        const networkResponse = await networkResponsePromise;
        await cache.put(event.request, networkResponse.clone());
      })());
      // Return the cached response if we have one, otherwise return the network response.
      return cachedResponse || networkResponsePromise;
    })());
  }
});
The top of the fetch handler just passes the standard api requests made by the client straight through. I can't cache these (although I could be more sophisticated and perhaps pre-reject those not supported).
The second section matches the phantom URLs /service/something.
The last section is taken from Jake Archibald's offline cookbook: it tries to use the cache, but updates the cache in the background if any of the static files have changed.
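One piece the sketch above leaves out is the reply path via clientId mentioned earlier; a minimal helper, as an assumption of how it could look:

async function replyToClient(clientId, message) {
  // resolve the window client that issued the phantom request and message it
  const client = await self.clients.get(clientId);
  if (client) client.postMessage(message);
}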
Approach 3
Similar to the approach above, in that we would have phantom URLs and use the clientId as a unique marker, but actually try to simulate a server-sent event stream with a single URL.
I'm thinking the code would be more like
...
} else if (/^\/service\//.test(requestURL.pathname)) {
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();
  event.respondWith((async () => {
    const streamFinishedPromise = new Promise(async (resolve, reject) => {
      try {
        while (true) await writer.write(await nextMessageFromServerSideEventStream());
      } catch (e) {
        await writer.close();
        resolve(); // eventually close the link
      }
    });
    event.waitUntil(streamFinishedPromise); // hold the worker open until the stream ends
    return new Response(stream.readable, {status: 200}); // probably need event-stream headers too
  })());
}
I am thinking that Approach 2 could be the simplest given where I am now, but I am concerned that I can find nothing, when searching for how to use service workers, that discusses this phantom URL approach.
Can anyone comment on any of these approaches and provide guidance on how best to program the tricky bits? For instance, does the Approach 1 MessageChannel close when the browser is moved to the background on an iPad? How do you really keep a response channel open, and does it get closed when the browser moves to the background in Approach 3?
The simple truth is that none of these approaches will work. What I didn't realise when I asked the question is that a service worker is re-run by the browser whenever there is something to do, and that run only lasts for the length of the processing of an event. Although event.waitUntil can prolong that, the only reference to how long that I can find says the browser is still at liberty to cancel it if it appears it might never complete. I can't imagine that over a period of several hours it won't get cancelled. So an EventSource will close, effectively terminating its link to the server.
So my only option to achieve what I want is to have the server carry on when the EventSource closes, and find some other mechanism to release resources held on behalf of the client.

"Internal" Error When Submitting Form With Firebase onCall Function on IOS Safari

I am trying to submit a form but I get an "internal" error after submitting on iOS Safari. It happened on two separate devices. I'm using Firebase's onCall functions. Client code:
var contactForm = window.firebase.functions().httpsCallable('contactForm');
let result = await contactForm({
  accountUID, foldersFilter, firstName, lastName,
  email, cellNumber, dobDay, dobMonth
});
And server code:
exports.contactForm = functions.https.onCall((data, context) => {
  return contactForm.contactForm(data, context);
});
This function is called via a form. The form works great on Chrome and desktop Safari, but for some reason it sometimes gets an internal error when testing on an iOS device. At first I thought it only happened when I was using autofill, but I've tested more and I get the same error when not using autofill too.
The confusing thing is that my function code is never actually called (I don't see any Firebase function logs). Here is my console in Safari:
The network connection was lost.
Fetch API cannot load https://us-central1-projectId.cloudfunctions.net/contactForm due to access control checks
Failed to load resource: The network connection was lost.
internal
Why won't this form submit on ios safari?
I fixed the issue. It turns out it has something to do with Google Cloud Functions being IPv4-only while Safari requires IPv6 support. I suspect this will become a bigger issue moving forward. I'm having to move all my onCall Firebase functions to https triggers. To make https triggers work, you have to use a custom domain in Firebase Hosting and rewrite to your function endpoint:
{
  "hosting": {
    ...
    "rewrites": [
      {
        "source": "/api/contactForm",
        "function": "contactForm"
      }
    ]
  }
}
So now, instead of calling https://us-central1-projectId.cloudfunctions.net/contactForm to trigger my API, I call https://customdomain.com/api/contactForm.
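For illustration, a rough sketch of what that migration can look like, assuming the existing contactForm.contactForm handler; the CORS handling and the empty context are assumptions, not from the original post:

// server: an onRequest (https) trigger replacing the onCall version
exports.contactForm = functions.https.onRequest(async (req, res) => {
  res.set('Access-Control-Allow-Origin', '*'); // tighten for production; preflight handling omitted
  const result = await contactForm.contactForm(req.body, {});
  res.json({result});
});

// client: plain fetch against the rewritten Hosting URL
let response = await fetch('https://customdomain.com/api/contactForm', {
  method: 'POST',
  headers: {'Content-Type': 'application/json'},
  body: JSON.stringify({accountUID, foldersFilter, firstName, lastName,
                        email, cellNumber, dobDay, dobMonth})
});
let result = await response.json();

Note that an onRequest trigger does not build the context (auth, etc.) that onCall provides, so any auth checks have to be reimplemented by hand.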

mqtt.client:connect() "not established" callback is unexpectedly called after a disconnect

The documentation for mqtt.client:connect() states that the last argument is a "callback function for when the connection could not be established".
I have a case where mqtt.client:connect() succeeds, so the "not established" callback is not called (correct behavior). But later, when my mqtt broker goes down, the "not established" callback function gets unexpectedly activated.
I have the following code:
function handle_mqtt_error(client, reason)
  print("mqtt connect failed, reason = "..reason..". Trying again shortly.")
  tmr.create():alarm(10 * 1000, tmr.ALARM_SINGLE, do_mqtt_connect)
end

function do_mqtt_connect()
  print("connecting---")
  m:connect(MQTT_HOST, MQTT_PORT, 1,
    function(client)
      print("mqtt connected")
      client:publish("topic/status", "online", 1, 1)
    end,
    handle_mqtt_error)
  print("returning---")
end

-- init mqtt client
m = mqtt.Client(MQTT_CLIENT_ID, 120, MQTT_USER, MQTT_PASS)

-- connect to mqtt
print("Starting Test")
do_mqtt_connect()
I see the output from the test begin, as expected, with:
Starting Test
connecting---
returning---
mqtt connected
At this point, I kill my mqtt broker, and I unexpectedly see:
mqtt connect failed, reason = -5. Trying again shortly.
connecting---
returning---
mqtt connect failed, reason = -5. Trying again shortly.
connecting---
returning---
And, happily, but unexpectedly, when I restart my broker, I see:
mqtt connected
So it appears that handle_mqtt_error() is not only called "when the connection could not be established". It is also called if mqtt.client:connect() successfully establishes a connection and the connection is later broken.
======= New Information =======
I downloaded the "dev" tree and used the Docker image to build the firmware. Within mqtt.c, I enabled NODE_DBG. The interesting lines are:
enter mqtt_socket_reconnected.
mqtt connect failed, reason = -5. Trying again shortly.
enter mqtt_socket_disconnected.
leave mqtt_socket_disconnected.
leave mqtt_socket_reconnected.
The "mqtt connect failed..." message is printed by handle_mqtt_error(), which is my "connect failed" callback.
Here's my theory. When my test starts, do_mqtt_connect() calls mqtt_socket_connect(), which does this:
espconn_regist_reconcb(pesp_conn, mqtt_socket_reconnected);
This sets reconnect_callback (in app/lwip/app/espconn.c). Later, after my broker goes down and comes back up, espconn_tcp_reconnect() is called (in app/lwip/app/espconn_tcp.c). It calls the reconnect_callback, which is mqtt_socket_reconnected(), which calls handle_mqtt_error().
So I think the end result doesn't match the documentation, but it works out okay for me. If the behavior did match the documentation, I would just add some Lua code to handle the "offline" event and try to re-establish the mqtt connection. I just thought someone might be interested that the behavior doesn't match the documentation.
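For completeness, the workaround alluded to above might look something like this sketch, using the NodeMCU mqtt client's "offline" event and reusing do_mqtt_connect from the question:

-- reconnect from the "offline" event instead of relying on the
-- "not established" callback firing after a broken connection
m:on("offline", function(client)
  print("mqtt went offline. Trying again shortly.")
  tmr.create():alarm(10 * 1000, tmr.ALARM_SINGLE, do_mqtt_connect)
end)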
