Sharing code between service worker & preact app

I am building a web app using preact+redux. Some of the heavy lifting is being done in the service worker. As part of this logic I want to generate redux actions from the service worker. How can I share the common action creator code between the app & the service worker?

I think you can try this workaround:
Create an event emitter client (https://github.com/facebook/emitter)
Connect the service worker with your event emitter client
Create your component and subscribe to the right events
Create your own event emitter:
http://craig-russell.co.uk/2016/01/29/service-worker-messaging.html#.Wxpc_jNKhsM
Listen for events from components:
https://medium.com/netscape/buid-simple-react-apps-using-event-emitters-7a46554f56cd
Post messages between the page and the service worker:
https://developer.mozilla.org/en-US/docs/Web/API/Window/postMessage
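For illustration, here is a minimal sketch of the postMessage half, assuming a shared actionCreators.js module and a hypothetical SOMETHING_HAPPENED action (the names, and bundling the worker so it can use imports, are assumptions, not part of the question):

// actionCreators.js - shared by the preact app and the service worker
export function somethingHappened(payload) {
  return { type: "SOMETHING_HAPPENED", payload };
}

// service-worker.js: build the action with the shared creator and broadcast it
// to every controlled page. (Assumes the worker is bundled or registered with
// { type: "module" } so the import works.)
import { somethingHappened } from "./actionCreators.js";

async function notifyClients(payload) {
  const allClients = await self.clients.matchAll();
  for (const client of allClients) {
    client.postMessage(somethingHappened(payload));
  }
}

// app.js: forward messages from the worker straight into the redux store.
navigator.serviceWorker.addEventListener("message", (event) => {
  store.dispatch(event.data); // `store` is your redux store
});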

Related

Service worker will not intercept fetches

I am serving my service worker from /worker.js and want it to intercept fetches to /localzip/*, but the fetch event is never fired.
I register it like this:
navigator.serviceWorker.register(
  "worker.js",
  { scope: "/localzip/" }
);
And I claim all clients when it activates, so that I can start intercepting fetches from the current page immediately. I am sure that the service worker is activating and that clients.claim() is succeeding.
self.addEventListener("activate", (e) => {
// Important! Start processing fetches for all clients immediately.
//
// MDN: "When a service worker is initially registered, pages won't use it
// until they next load. The claim() method causes those pages to be
// controlled immediately."
e.waitUntil(clients.claim());
});
Chrome seems happy with it and the scope appears correct.
My fetch event handler is very simple:
self.addEventListener("fetch", (e) => {
console.log("Trying to make fetch happen!");
});
From my application, after the worker is active, I try to make a request, e.g.,
const response = await fetch("/localzip/lol.jpg");
The fetch does not appear to trigger the above event handler, and the browser instead tries to make a request directly to the server and logs GET http://localhost:3000/localzip/lol.jpg 404 (Not Found).
I have tried:
Making sure the latest version of my worker code is running.
Disabling / clearing caches to make sure the fetch isn't being handled by the browser's cache.
Hosting from an HTTPS server. (Chrome is supposed to support service workers on plaintext localhost for development.)
What more does it want from me?
Live demo: https://rgov.github.io/service-worker-zip-experiment/
Note that the scope is slightly different, and the fetch is performed by creating an <img> tag.
First, let's confirm you are not using hard reload while testing your code. If you use hard reload, no requests will go through the service worker.
See https://web.dev/service-worker-lifecycle/#shift-reload
I also checked chrome://serviceworker-internals/ in Chrome, and your service worker does have a fetch handler.
Then, let's check the code in detail.
After trying your demo page, I found that a network request is handled by the service worker after clicking the "Display image from zip archive" button, since I can see this log:
Service Worker: Serving archive/maeby.jpg from zip archive
Then, the error is thrown:
Failed to load ‘https://rgov.github.io/localzip/maeby.jpg’. A ServiceWorker passed a promise to FetchEvent.respondWith() that rejected with ‘TypeError: db is undefined’.
This is caused by the db object not being initialized properly. It would be worth confirming whether you see the same DB-related issue in your demo as I do; if not, the following explanation might be incorrect.
Let me explain some of the service worker mechanics alongside my understanding of your code:
Timing of install handler
Your DB open code runs in the install handler only. This means the db object will be assigned only when the install handler is executed.
Please notice that the install handler is executed only when necessary. If a service worker already exists and does not need to update, the install handler won't be called. Hence, the db object in your code might not always be available.
Stop/Start Status
When the service worker does not handle events for a while (how long depends on the browser's design), the service worker goes into a stopped/idle state.
When the service worker is stopped/idle (you can check the state in the devtools) and started again, the global db object will be undefined.
In my understanding, this is why I see the error TypeError: db is undefined.
Whenever the service worker wakes up, the whole worker script is executed again. However, whether the event handlers run depends on whether the corresponding events arrive.
How to prevent stop/idle for debugging?
Open your devtools for the page and the browser will keep the worker alive.
Once you close the devtools, the service worker might go to the "stopped" state soon after.
Why does the service worker stop?
The service worker is designed for handling requests. If no request/event needs to be handled by a service worker, the service worker thread does not need to run, which saves resources.
Please notice that both fetch and message events (among others) will wake the service worker.
See Prevent Service Worker from automatically stopping
Solution for the demo page
If the error is from the DB, this means the getFromZip function should open the DB itself when db is unavailable.
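A minimal sketch of that fix, assuming IndexedDB is used; the database name, version, and the getFromZip signature are assumptions based on the description above:

let db;

// Open (or reuse) the connection on demand instead of only in "install",
// so it is re-established after the worker has been stopped and restarted.
function openDb() {
  if (db) return Promise.resolve(db);
  return new Promise((resolve, reject) => {
    const req = indexedDB.open("zip-archive", 1); // name/version are placeholders
    req.onsuccess = () => { db = req.result; resolve(db); };
    req.onerror = () => reject(req.error);
  });
}

async function getFromZip(path) {
  const database = await openDb(); // lazily ensures db is available
  // ... read the archive entry from `database` as before ...
}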
By the way, even without any change, your demo code works well with the following steps:
As a user, I open the demo page for the first time. (This ensures the install handler is called.)
I open the devtools as soon as I see the page content. (This prevents the service worker from going to the "stopped" state.)
I click "Download zip archive to IndexedDB" button.
I click "Display image from zip archive" button.
Then I can see the image is shown properly.
Jake Archibald pointed out that I was confused about the meaning of the service worker's scope, explained here:
We call pages, workers, and shared workers clients. Your service worker can only control clients that are in-scope. Once a client is "controlled", its fetches go through the in-scope service worker.
In other words:
The scope affects which pages (clients) get placed under the service worker's control and not which fetched URLs get intercepted.
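So a minimal sketch of a fix under that understanding is to register the worker with a scope that covers the page itself (making the page a controlled client) and to filter by URL inside the fetch handler; the scope and the serveFromZip helper below are assumptions for illustration:

// Register with a scope that includes the page, so the page is controlled.
navigator.serviceWorker.register("/worker.js", { scope: "/" });

// worker.js: decide what to intercept by inspecting the request URL.
self.addEventListener("fetch", (e) => {
  const url = new URL(e.request.url);
  if (url.pathname.startsWith("/localzip/")) {
    e.respondWith(serveFromZip(url.pathname)); // serveFromZip is hypothetical
  }
  // Anything else falls through to the network as usual.
});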

MVC.NET: How to update UI on event trigger in backend?

I want to update the browser UI on an event at the controller level, for example when some service is updating the database: if the database gets updated, the user should be notified about it.
I don't want the browser to keep pinging the server every few seconds.
If you can share a link to a working example, it would be helpful.
I read about SignalR but I'm not sure how to use it.
Thanks
Options:
1. You can write a JavaScript script that polls an MVC action.
2. You can use SignalR to push a message from the backend over WebSocket: http://www.asp.net/signalr
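For option 2, a minimal sketch of the browser side using the classic ASP.NET SignalR jQuery client; the hub name NotificationsHub and the databaseUpdated client method are assumptions for illustration:

// Requires jquery, jquery.signalR, and the generated /signalr/hubs proxy script.
var hub = $.connection.notificationsHub; // proxy for the assumed NotificationsHub

// The server would call Clients.All.databaseUpdated(message) when the DB changes.
hub.client.databaseUpdated = function (message) {
  // Update the UI here, e.g. show a toast or refresh a widget.
  console.log("Database updated:", message);
};

$.connection.hub.start().done(function () {
  console.log("Connected to the SignalR hub");
});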

Can I have multiple service workers both intercept the same fetch request?

I'm writing a library which I will provide to 3rd parties to run as a service worker on their site. It needs to intercept all network requests, but I want to allow them to build their own service worker if they like.
Can I have both service workers intercept the same fetches and provide some kind of priority/ordering between them? Alternatively is there some other pattern I should be using?
Thanks
No, you cannot. Only one service worker per scope is allowed to be registered, so the latest one kicks the previous one out, unless the new scope is more specific, in which case requests are handled by the most specific worker only.
Nevertheless, you can attach multiple fetch handlers and they will all see the request, so you could write your functionality in a separate script and let the user's service worker include your file via importScripts(), as sketched below.
The first handler that calls event.respondWith() synchronously (you cannot call this method asynchronously anyway) wins, and any remaining handlers that try to call it will throw.
Prioritization and coordination require middleware. You can check ServiceWorkerWare or sw-toolbox.
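A rough sketch of the importScripts approach, assuming your library ships a file called library-sw.js; the file and helper names are made up for illustration:

// library-sw.js (provided by your library): registers its own fetch handler.
self.addEventListener("fetch", (event) => {
  if (shouldHandle(event.request)) { // shouldHandle is a hypothetical helper
    event.respondWith(handleRequest(event.request)); // handleRequest is hypothetical
  }
  // If respondWith() is not called here, later handlers still get a chance.
});

// sw.js (written by the 3rd party): pulls your library in first, then adds
// its own handler, which only wins for requests your library left alone.
importScripts("/library-sw.js");

self.addEventListener("fetch", (event) => {
  event.respondWith(fetch(event.request));
});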

Triggering a SWF Workflow based on SQS messages

Preamble: I'm trying to put together a proposal for what I assume to be a very common use-case, and I'd like to use Amazon's SWF and SQS to accomplish my goals. There may be other services that will better match what I'm trying to do, so if you have suggestions please feel free to throw them out.
Problem: The need at its most basic is for a client (mobile device, web server, etc.) to post a message that will be processed asynchronously without a response to the client - very basic.
The intended implementation is for the client to post a message to a pre-determined SQS queue. At that point, the client is done. We would also have a defined SWF workflow responsible for picking the message up off the queue and (after some manipulation) placing it in a DynamoDB table - again, all fairly straightforward.
What I can't seem to figure out though, is how to trigger the workflow to start. From what I've been reading a workflow isn't meant to be an indefinite process. It has a start, a middle, and an end. According to the SWF documentation, a workflow can run for no longer than a year (Setting Timeout Values in SWF).
So, my question is: If I assume that a workflow represents one message-processing flow, how can I start the workflow whenever a message is posted to the SQS?
Caveat: I've looked into using SNS instead of SQS as well. This would allow me to run a server that could subscribe to SNS, and then start the workflow whenever a notification is posted. That is certainly one solution, but I'd like to avoid setting up a server for a single web service which I would then have to manage / scale according to the number of messages being processed. The reason I'm looking into using SQS/SWF in the first place is to have an auto-scaling system that I don't have to worry about.
Thank you in advance.
I would create a worker process that listens to the SQS queue. Upon receiving a message, it calls into the SWF API to start a workflow execution. The workflow execution id should be generated based on the message content to ensure that duplicated messages do not result in duplicated workflows.
You can use AWS Lambda for this purpose. A Lambda function can be invoked by an SQS event, so you don't have to write a queue poller explicitly. The Lambda function can then call the SWF API to initiate the workflow.
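A rough sketch of such a Lambda handler in Node.js with the AWS SDK; the domain, workflow type, and task list names are placeholders, and deriving the workflowId from the message body gives the de-duplication mentioned in the previous answer:

const AWS = require("aws-sdk");
const crypto = require("crypto");
const swf = new AWS.SWF();

// Invoked by the SQS event source mapping; event.Records holds the messages.
exports.handler = async (event) => {
  for (const record of event.Records) {
    // A deterministic id derived from the message body, so duplicates are rejected.
    const workflowId = crypto.createHash("sha256").update(record.body).digest("hex");
    try {
      await swf.startWorkflowExecution({
        domain: "my-domain",                                      // placeholder
        workflowId,
        workflowType: { name: "ProcessMessage", version: "1.0" }, // placeholder
        taskList: { name: "main-task-list" },                     // placeholder
        input: record.body,
      }).promise();
    } catch (err) {
      // The same message delivered twice tries to reuse the same workflowId.
      if (err.code !== "WorkflowExecutionAlreadyStartedFault") throw err;
    }
  }
};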

Asyncronously send messages to third party systems - queue?

We have an ASP.NET MVC web app which allows users to publish messages onto a web site. Alongside this, the user is also able to syndicate that message content to other 3rd party systems when they post the message.
At present, this is done synchronously, so when they click the 'Post' button, we persist their message to the database and then notify each 3rd party systems in turn. We need to improve the scalability and durability of this operation so I would like to make the notification aspect of the action asynchronous in some way.
I can think of the following possibilities:
Save the 3rd party messages into a database table and have some worker process read items from the table and post to the 3rd party systems.
Use a "proper" message queue of some sort like nServiceBus or RabbitMQ (I have no experience with either of these)
Is there a better way to do this? I'm particularly interested in how to notify the user that the message has been syndicated correctly (since it's asynchronous) and also how to handle multiple retry failures, at which point the sender should just give up.
Thanks
James
NServiceBus is a great framework for implementing asynchronous communication. If you use it for this use case, you will see many other opportunities for applying messaging to improve the scalability and reliability of your system.
Create a MessagePosted event message that is published after a message is persisted to the database. For each third party system that might be notified of the message, create an event handler class that implements IHandleMessages<MessagePosted>.
Multiple retry failures are handled by NServiceBus: just throw an exception within the event handler if something goes wrong. The event will be resubmitted to the event handler for a configurable number of retries before it is moved to the error queue.
To notify the user you can, for instance, create a status view or widget that shows the notification results of the latest messages. If a third party system cannot be notified, you can consider sending the user an e-mail so that they can take action.
Use this publish subscribe sample to get up to speed quickly: http://docs.particular.net/samples/pubsub/
You should also read this; it explains how to use pub/sub with RabbitMQ: http://www.rabbitmq.com/tutorials/tutorial-three-java.html
