Service Worker file and an offline mode

Do I understand correctly that a service worker file in a PWA should not be cached by the PWA? As I understand it, once registered, installed, and activated, it exists as an entity separate from the page in the browser environment and gets reloaded by the browser once a new version is found (I am omitting details that are not important here). So I see no reason to cache a service worker file. The browser caches it anyway by storing it in memory (or somewhere). I think caching the service worker file would complicate discovery of its code updates and bring no benefit.
However, if the service worker file is not cached, refreshing a page that registers it while offline produces an error, because the file cannot be retrieved when the network is down.
What's the best way to eliminate this error? Or should I cache the service worker file after all? What's the most efficient strategy here?
I have done some reading on PWAs but found no clear explanation of this. Please help me with your advice if possible.

You're correct. Never cache service-worker.js.
To avoid the error that comes from trying to register without connectivity, simply check the connection state with window.navigator.onLine and skip calling register() while offline.
You can then listen for network state changes and register at a later point, once the connection returns.
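A minimal sketch of that approach (the path /sw.js is an assumption; use your own service worker file):

    // Register the service worker only when the network is up; otherwise
    // wait for the 'online' event and register then.
    function registerServiceWorker() {
      if ('serviceWorker' in navigator) {
        navigator.serviceWorker.register('/sw.js')
          .catch((err) => console.error('SW registration failed:', err));
      }
    }

    if (navigator.onLine) {
      registerServiceWorker();
    } else {
      window.addEventListener('online', registerServiceWorker, { once: true });
    }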

Difference between clientsClaim and skipWaiting

I'm trying to understand the difference between skipWaiting and clientsClaim. In my understanding: calling skipWaiting will cause the new service worker to skip the waiting phase and become active right away, and clientsClaim can then 'claim' any other open tabs as well.
What I gather from documentation online:
skipWaiting skips the waiting phase, and the service worker becomes active right away
clientsClaim makes the service worker immediately start controlling pages
In every post I find online, I almost always see clientsClaim and skipWaiting used together.
However, I recently found a service worker that only uses clientsClaim, and I'm having a hard time wrapping my head around what the difference between clientsClaim and skipWaiting actually is, and in what scenario you would use clientsClaim but not skipWaiting.
My thinking on this, and this may be where I'm wrong, is that calling clientsClaim but not skipWaiting is redundant. Consider:
The new service worker will become active when all open pages are closed (because we're not using skipWaiting)
When our new service worker is activated, we call clientsClaim, even though we just closed all open pages to even activate the new service worker. There should be no other pages to control, because we just closed them.
Could someone help me understand?
What I've already done:
Read documentation on skipWaiting
Read documentation on clientsClaim
Read about service worker lifecycle by Jake Archibald, and played around with this demo
Read a bunch of stackoverflow posts, offline cookbook, different blog posts, etc.
self.skipWaiting() does exactly what you described:
forces the waiting service worker to become the active service worker
"Active" in this sense does not mean any currently loaded clients are now talking to that worker. It instead means that worker is now the one to be used whenever a new client requests it.
This is where Clients.claim() comes in:
When a service worker is initially registered, pages won't use it until they next load.
Without calling claim, any existing clients will still continue to talk to the older service worker until a full page load.
While most of the time it makes sense to use skipWaiting and Clients.claim in conjunction, that is not always the case. If a service worker that is not backwards compatible risks a poor experience for the user, Clients.claim should not be called. Instead, the next time a client is refreshed or loaded, it will get the new service worker without any risk from the breaking change.
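As a rough sketch of how the two calls are typically wired up inside the service worker (whether to keep the claim() line is exactly the judgment call described above):

    // Inside the service worker file.
    self.addEventListener('install', (event) => {
      // Don't wait for tabs using the old worker to close.
      self.skipWaiting();
    });

    self.addEventListener('activate', (event) => {
      // Take control of currently uncontrolled clients right away.
      // Omit this if the new worker is not backwards compatible with
      // pages that loaded under the old one.
      event.waitUntil(self.clients.claim());
    });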
The difference between skipWaiting() and Clients.claim() in Service Workers
An important concept to understand is that for a service worker to become operational on a page it must be the controller of the page. (You can actually see this property in Navigator.serviceWorker.controller.) To become the controller, the service worker must first be activated, but that's not enough in itself. A page can only be controlled if it has also been requested through a service worker.
Normally, this is the case, particularly if you're just updating a service worker. If, on the other hand, you're registering a service worker for the first time on a page, then the service worker will be installed and activated but it will not become the controller of the page because the page was not requested through a service worker.
You can fix this by calling Clients.claim() somewhere in the activate handler. This simply means that you won't have to refresh the page before you see the effects of the service worker.
There's some question as to how useful this actually is. Jake Archibald, one of the authors of the spec, has this to say about it:
I see a lot of people including clients.claim() as boilerplate, but I rarely do so myself. It only really matters on the very first load, and due to progressive enhancement the page is usually working happily without service worker anyway.
As regards its use with other tabs, it will again only have an effect on tabs that were not requested through a service worker. It's possible to have a scenario where a user has the same page open in different tabs and keeps them open for a long period, during which the developer introduces a service worker. If the user refreshes one tab but not the other, one tab will have the service worker and the other will not. But this scenario seems somewhat uncommon.
skipWaiting()
A service worker is activated after it is installed, and if there is no other service worker that is currently controlling pages within the scope. In other words, if you have any number of tabs open for a page that is being controlled by the old service worker, then the new service worker will not activate. You can therefore activate the new service worker by closing all open tabs. After this, the old service worker is controlling zero pages, and so the new service worker can become active.
If you don’t want to wait for the old service worker to be killed, you can call skipWaiting(). Normally, this is done within the install event handler. Even if the old service worker is controlling pages, it is killed anyway and this allows the new service worker to be activated.

Service Worker: pitfalls of self.skipWaiting() and self.clients.claim()

To immediately activate a service worker after it's installed, I use self.skipWaiting() in the install listener. To immediately take control of a page (without the need for a page navigation, e.g. page load), I use self.clients.claim(). I understand that doing such things means:
A page could first load without being under the control of a service worker, but then be taken over by a service worker during its lifespan.
A page could start under the control of version 1 of Service Worker but then be taken over by version 2 during its lifespan.
There are all kinds of warnings online about doing such things, but I don't see the pitfalls. Perhaps one potential problem is if the controlled page does some initial handshake or setup with a Service Worker when it first loads. That obviously will be missed when the new Service Worker activates in the background, but even then, the Service Worker could message its controlling pages to notify them of the change.
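For example, something along these lines (the SW_UPDATED message shape is made up):

    // Service worker side: notify every controlled page of the change.
    self.clients.matchAll({ type: 'window' }).then((clients) => {
      clients.forEach((client) => client.postMessage({ type: 'SW_UPDATED' }));
    });

    // Page side: redo any handshake/setup when notified.
    navigator.serviceWorker.addEventListener('message', (event) => {
      if (event.data && event.data.type === 'SW_UPDATED') {
        // re-run initial setup here
      }
    });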
It seems to me that most applications, under most scenarios, would benefit significantly from using both self.skipWaiting() and self.clients.claim() without any downside. Did I miss something?
The pitfalls of self.skipWaiting() are described really well here (thanks @RobertRowntree for the link):
https://redfin.engineering/how-to-fix-the-refresh-button-when-using-service-workers-a8e27af6df68
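The mitigation that article describes boils down to reloading the page once when a new worker takes control; a sketch of that pattern (not the article's exact code):

    // Page side: when a new service worker takes over mid-session,
    // reload once so the page and its cached assets match the new worker.
    let refreshing = false;
    navigator.serviceWorker.addEventListener('controllerchange', () => {
      if (refreshing) return; // guard against reload loops
      refreshing = true;
      window.location.reload();
    });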
As for self.clients.claim(), I still haven't seen a compelling argument against it, but when I do, I'll update my answer.

Redis flushall command is randomly being called

I have a Ruby app in production that uses Sidekiq (which uses Redis), and I have discovered that FLUSHALL commands are being called, which wipe the database (removing all the processed and scheduled jobs).
I don't know or understand what could be causing this.
Does anyone know how I can begin to trace the call to flushall?
Thanks,
It is most likely that your Redis server is open to the public network without any protection - that is just asking for trouble, because anyone can connect and do much more damage than just a FLUSHALL. If that is the case, use password authentication at the very least, after burning the compromised server - the attacker may have gained access to your server's operating system and from there, who knows where. More information at: http://antirez.com/news/96
If that isn't the case and you have a rogue application somewhere that randomly issues unwanted commands, you can try tracking it down by combining the MONITOR and CLIENT LIST commands.
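For example, from redis-cli (a sketch; MONITOR is expensive, so run it only briefly):

    # Stream every command the server receives and watch for the culprit.
    redis-cli MONITOR | grep -i flushall

    # In another session, list connected clients so you can match the
    # offending connection's addr against the MONITOR output.
    redis-cli CLIENT LIST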
Lastly, you can consider renaming/disabling the FLUSHALL command, at least temporarily, until you get to the bottom of this.
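Renaming is done in redis.conf and requires a restart; the replacement name below is just an example:

    # redis.conf: disable FLUSHALL entirely by renaming it to the empty
    # string, or rename it to something hard to guess.
    rename-command FLUSHALL ""
    # rename-command FLUSHALL "FLUSHALL_d2f1a9"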

Does editing a Web.config file trigger an overlapping recycle or a start+stop of the application pool?

I have Overlapping Recycling configured for my ASP.NET MVC site.
As I understand it (from this SO question), if I Recycle the Application Pool, this will spin up a new w3wp.exe process to take up the load of the one being recycled, and only when the new process is initialised and taking the load, will the old process be shut down. And if I stop/start the Application Pool, it does an immediate kill without letting the process quit gracefully or letting a replacement process spin up first.
Question: when I edit my Web.config file, it will restart the associated IIS Application Pool. Is this going to trigger the nice overlapping recycling behaviour, or the brutal stop/start behaviour?
I'm trying to decide if I need to take a server out of a load-balanced farm and do a drain-stop on the server traffic in order to edit configuration settings.
I decided to use science. I ran an experiment with my server under a load-testing tool, repeatedly editing and saving my Web.config file while requests poured in.
No requests were dropped when changes were saved to the Web.config file.
I did observe a short spike in CPU activity while the new settings were loaded, but really barely noticeable at all.
I was able to prove that the settings were getting loaded by making the Web.config file badly formatted and saving it. This immediately caused requests to start failing.
What surprised me is that the process id did not change with a Web.config edit. My understanding of IIS overlapping recycle was that IIS starts a new w3wp.exe process and then winds down the old one, which would have meant a different process id. So I don't think overlapping recycle is kicking in here.
Since the process id is the same, I reason that it's a totally separate mechanism that loads/unloads the AppDomain. This seems to be supported by this document; the relevant bit is reproduced below:
Configuration Changes Cause a Restart of the Application Domain
Changes to configuration settings in Web.config files indirectly cause the application domain to restart. This behavior occurs by design. You can optionally use the configSource attribute to reference external configuration files that do not cause a restart when a change is made. For more information, see configSource in General Attributes Inherited by Section Elements.
TLDR
Neither overlapping recycle nor the brutal stop/start behaviour is triggered by a Web.config edit. The AppDomain is reloaded with the new settings without interruption to request processing.
http://msdn.microsoft.com/en-us/library/ackhksh7.aspx
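For completeness, a minimal sketch of the configSource approach mentioned in the quoted passage (appSettings.config is a made-up file name; whether edits to the external file avoid a restart depends on the section's restartOnExternalChanges declaration):

    <!-- Web.config: move the section's contents into an external file. -->
    <configuration>
      <appSettings configSource="appSettings.config" />
    </configuration>

    <!-- appSettings.config -->
    <appSettings>
      <add key="SomeKey" value="SomeValue" />
    </appSettings>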
Editing (or touching) the Web.config will not trigger a 'nice overlapping recycle'.
As described above, requests in progress will not be 'interrupted', but new incoming requests have to wait until the new worker process has finished its initialization. So depending on the time needed for initialization, there can be a noticeable pause.
I noticed that on a WCF service application hosted in IIS 7.5, where I implemented IProcessHostPreloadClient to do some time-expensive preload work.
On the other hand, a 'recycle' triggered from the app pool's context menu in IIS Manager does do the nice overlapping: new incoming requests are processed by the old worker process for as long as the new one is working on the preload method.

Rescue rails app from server failure

I have a Rails app which is hosted on a dedicated server. Today something happened: the app doesn't respond, I have no SSH access, restarting doesn't help, and I am waiting for tech support to respond. But that is not the question; I just need this app to stay online even if the server fails. What is the easiest option? Can I set up a second server on different hosting and serve from there in case of failure? If so, how do I sync the db and files? The application is not heavily loaded; I just need it to be available.
This is a difficult problem to solve. There's no one proven way to make it happen, but in general you need "no single point of failure".
There's an entire science devoted to reliability in web applications - no way can you get that answered in an SO question.
You can take frequent backups of your database and store them on S3 (and/or somewhere else). You can then (a rough sketch follows this list):
have an image of your application server at your host
spin it up when your server dies
restore the database
have the new application server take over responsibility (easiest way: assume the old server's IP address)
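As a rough sketch of the backup half, assuming PostgreSQL and the AWS CLI (database name, bucket, and paths are made up):

    # Cron job on the primary: dump the database and push it to S3.
    # Restore on the standby with pg_restore.
    pg_dump -Fc myapp_production > /tmp/myapp_$(date +%F).dump
    aws s3 cp /tmp/myapp_$(date +%F).dump s3://my-backup-bucket/db/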
