Progressive web app launched in standalone does not detect site updates - service-worker

I have a Progressive Web App added to my home screen, running in standalone mode with a service worker.
Everything works perfectly, with one doubt: when I update my site (and its service worker), I can see the updates if I load the site directly in the browser, but if I launch it from the home screen link I always see the old site.
Is there a way to request updates when launching my site in standalone mode?

I think you are asking "is there a way to dynamically update my precached assets without updating my service worker?"
Yes!
I have been working on an upgrade to the JSON cache strategy here: https://serviceworke.rs/json-cache.html. We will publish it soon!
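The underlying idea can be sketched as follows: respond from the cache immediately, and refresh the cached copy from the network in the background, so the next launch picks up the update. A minimal sketch (the cache name json-cache is an assumption):

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.open('json-cache').then(function (cache) {
      return cache.match(event.request).then(function (cached) {
        // Always start a network fetch so the cache gets refreshed
        var fresh = fetch(event.request).then(function (response) {
          cache.put(event.request, response.clone());
          return response;
        });
        // Serve the cached copy if present, otherwise wait for the network
        return cached || fresh;
      });
    })
  );
});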

After digging, I've discovered a simple solution, which I report for others.
iOS does not support service workers, so they are not the problem here.
iOS keeps many resources in its cache, so the solution is to add a cache-busting parameter to the various imports.
The best way to ensure updates is to use a hash of the imported file as the parameter (a sketch of this variant follows the snippet below).
Alternatively, we can use a timestamp to ensure that resources are ALWAYS refetched.
To attach this parameter, we can inject these resources with JavaScript, like so:
/**
 * Injects a script into the page.
 */
function appendScript(url) {
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.src = url;
  script.async = false; // async false so scripts execute in order
  head.appendChild(script);
}

// Build the cache-busting parameter from the current timestamp
var currVersion = '?v=' + new Date().getTime();
// Get the <head> element
var head = document.getElementsByTagName("head")[0];
// Append the script
appendScript('app.js' + currVersion);
// Append the CSS
head.insertAdjacentHTML('beforeend', '<link rel="stylesheet" type="text/css" href="style.css' + currVersion + '">');
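For the hash-based variant, the version string should come from the build rather than from the clock, so clients refetch only when content actually changes. A minimal sketch, assuming a hypothetical build step that emits an ASSET_VERSIONS map of filenames to content hashes:

// ASSET_VERSIONS is an assumed, build-generated map, e.g. {'app.js': 'a1b2c3'}
appendScript('app.js?v=' + ASSET_VERSIONS['app.js']);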


Export NextJS project as a module

I'm looking for a little guidance and suggestions here. My attempts and theories will be at the bottom.
I have a NextJS project from which I want to export the top level component (essentially the entry file) so that I can use it as a preview in my dashboard.
The NextJS project is very simple. For the sake of simplicity, let's imagine that all it renders is a colored <h1>Hello world</h1>. Then in my dashboard, I want to render a cellphone with my NextJS component embedded, and from the dashboard change the color of the text, as a way to preview how it would look. I hope this makes sense.
I'm lost as to how I could export this component from NextJS and import it into my dashboard. The dashboard is rendered in Ruby on Rails. It would be simple enough to just import the repo from git and access the file directly from node_modules, but I'm looking for a solution that doesn't require installing npm on our Rails project.
Paths I have thought about:
1 - Install npm on Rails and just import the source code from the NextJS repo, access the file, and render it with React (simple, but we're looking for a non-npm solution)
2 - Bundle the component with webpack and load it directly into Rails (does this even work?). I exported the JS and all it did was freeze everything :P Still trying this path for now
3 - Using an iframe and just accessing the page (then I can't pass any callbacks into the iframe to change the color directly from the dashboard)
4 - I cannot separate this component from NextJS to use as a library in both repos. The component we are exporting is the "ENTIRE" NextJS app JSX, and it wouldn't make sense to separate it into a different repo
Does anyone have a suggestion on how I could achieve this?
I think you could use an iframe with the NextJS app URL. Then, if you want to change the color, simply add the color as a query parameter of the iframe and handle it in the NextJS app.
Simple example
Rails view (erb)
<iframe src="<%= @nextjs_url %>?color=<%= @color %>" />
NextJS
// Get the query param of the page and set it as a prop of the component
const YourComponent = ({color}) => {
  return <h1 style={{color}}>Lorem</h1>;
};
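A minimal sketch of that wiring on the NextJS side, assuming the built-in next/router is used to read the query param (the page path is hypothetical):

// pages/preview.js (hypothetical page)
import { useRouter } from 'next/router';

const YourComponent = ({ color }) => <h1 style={{ color }}>Lorem</h1>;

export default function Preview() {
  const { query } = useRouter();
  // query.color comes from the ?color=... parameter set by the Rails view
  return <YourComponent color={query.color || 'black'} />;
}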
While trying Hoang's solution, I decided to dive deeper into how to communicate with an iframe and the solution actually feels quite good.
You can set up listeners on either side and post messages in between the projects.
So in my dashboard:
function handleEvent(e) {
  const data = JSON.parse(e.data)
  if (data.type === "card_click") {
    // If the type is what we want from this event, handle it
  }
}

// Set up a listener with a handler.
// This will run every time a message is posted from my app.
window.addEventListener("message", handleEvent, false)

const postMessage = (color) => {
  const event = JSON.stringify({
    type: "color_update",
    color,
  })
  // Find the iframe and post a message to it.
  // This will be picked up by the listener on the other side.
  document.getElementById("my-iframe-id").contentWindow.postMessage(event, "*")
}
And on my app:
function handleEvent(e) {
  const data = JSON.parse(e.data)
  if (data.type === "color_update") {
    // Do whatever is necessary with the data
  }
}

// Set up the listener.
// This will fire with every message posted from my dashboard.
window.addEventListener("message", handleEvent, false)

const handleCardClick = (cardIndex) => {
  const event = JSON.stringify({
    type: "card_click",
    cardIndex,
  })
  // Post a message to the parent; it will be picked up by the listener
  // on the other side.
  window.parent.postMessage(event, "*")
}
Communicating with an iframe this way feels pretty straightforward.
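One caveat: posting with "*" as the target origin lets any embedding page send and receive these messages. A minimal hardening sketch, assuming a hypothetical DASHBOARD_ORIGIN constant:

const DASHBOARD_ORIGIN = "https://dashboard.example.com" // assumed origin

function handleEvent(e) {
  // Ignore messages from unexpected senders
  if (e.origin !== DASHBOARD_ORIGIN) return
  const data = JSON.parse(e.data)
  // ... handle trusted data ...
}

const handleCardClick = (cardIndex) => {
  const event = JSON.stringify({ type: "card_click", cardIndex })
  // Target only the dashboard instead of "*"
  window.parent.postMessage(event, DASHBOARD_ORIGIN)
}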

How to make sure a cached page has its corresponding assets cached?

tl;dr. My Service Worker is caching HTML pages and CSS files in different versions. Going offline: since I have to limit the number of files I’m caching, how can I make sure that, for each HTML page in the cache, the versioned CSS files it needs are also in the cache? I need to delete old CSS files at some point, and they have no direct relation with the HTML files.
I’m trying to turn a traditional website into a PWA (sort of), implementing caching strategies with a Service Worker (I’m using Workbox, but the question is meant to be more general).
I’m caching pages as I navigate through the website (network-first strategy), in order to make them available offline.
I’m also caching a limited number of CSS and JS assets with a cache-first strategy. The URLs pointing to them are already "cachebusted" using a timestamp embedded in the filename: front.320472502.css for instance. Because of the cachebusting technique already in place, I only need/want to keep a small number of assets in this cache.
Now here’s the issue I’m having. Let’s suppose I cached page /contact which referenced /front.123.css (hence was also cached). As I navigate to other pages, CSS has changed several times in the meantime, and my CSS cache now might contain only /front.455.css and /front.456.css. If I’m going offline now, trying to load /contact will successfully retrieve the contents of the page, but the CSS will fail to load because it’s not in the cache anymore, and it will render an unstyled content page.
Either I keep versions of my CSS in cache for a long time, which is not ideal, or I try to purge cached CSS only if it is not required by any of the cached pages. But how would you go about that? Looping through the cached pages, looking for the front.123.css string?
Another solution might be to give back an offline page rather than an unstyled content page, but I’m not sure if it is doable, since the worker responds with the HTML before knowing what assets it will need.
The "best" solution here is to use precaching (either via Workbox, or via some other way of generating a build-time manifest), and making sure that all of your HTML and subresources are cached and expired atomically. You don't have to worry about version mismatches or cache misses if you can precache everything.
That being said, precaching everything isn't always a viable option, if your site relies on a lot of dynamic, server-rendered content, or if you have a lot of distinct HTML pages, or if you have a larger variety of subresources, many of which are only required on a subset of pages.
If you want to go with the runtime caching approach, I'd recommend a technique along the lines of what's described in "Smarter runtime caching of hashed assets". That uses a custom Workbox plugin to handle cache expiration and finding a "best-effort" cache match for a given subresource when the network is unavailable. The main difficulty in generalizing that code is that you need to use a consistent naming scheme for your hashes, and write some utility functions to programmatically translate a hashed URL into the "base" URL.
In the interest of providing some code along with this answer, here's a version of the plugin that I currently use. You'll need to customize it as described above for your hashing scheme, though.
import {WorkboxPlugin} from 'workbox-core';
import {HASH_CHARS} from './path/to/constants';

// Strip the leading hash (plus its separator) to recover the base filename.
function getOriginalFilename(hashedFilename: string): string {
  return hashedFilename.substring(HASH_CHARS + 1);
}

function parseFilenameFromURL(url: string): string {
  const urlObject = new URL(url);
  return urlObject.pathname.split('/').pop();
}

// Two URLs match if they share the same base filename once hashes are removed.
function filterPredicate(
  hashedURL: string,
  potentialMatchURL: string,
): boolean {
  const hashedFilename = parseFilenameFromURL(hashedURL);
  const hashedFilenameOfPotentialMatch = parseFilenameFromURL(potentialMatchURL);

  return (
    getOriginalFilename(hashedFilename) ===
    getOriginalFilename(hashedFilenameOfPotentialMatch)
  );
}

export const revisionedAssetsPlugin: WorkboxPlugin = {
  cachedResponseWillBeUsed: async ({cacheName, cachedResponse, state}) => {
    // Remember the cache name so handlerDidError can search it later.
    state.cacheName = cacheName;
    return cachedResponse;
  },

  cacheDidUpdate: async ({cacheName, request}) => {
    // A new revision was just cached; expire older revisions of the same asset.
    const cache = await caches.open(cacheName);
    const keys = await cache.keys();
    for (const key of keys) {
      if (filterPredicate(request.url, key.url) && request.url !== key.url) {
        await cache.delete(key);
      }
    }
  },

  handlerDidError: async ({request, state}) => {
    // The network failed and the exact revision isn't cached: fall back to
    // any cached revision of the same base asset (a "best-effort" match).
    if (state.cacheName) {
      const cache = await caches.open(state.cacheName);
      const keys = await cache.keys();
      for (const key of keys) {
        if (filterPredicate(request.url, key.url)) {
          return cache.match(key);
        }
      }
    }
  },
};
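For context, here is a sketch of how such a plugin might be wired into a runtime caching route; registerRoute and CacheFirst are standard Workbox exports, but the cache name and match logic here are assumptions:

import {registerRoute} from 'workbox-routing';
import {CacheFirst} from 'workbox-strategies';
import {revisionedAssetsPlugin} from './revisioned-assets-plugin';

// Apply the plugin to hashed CSS/JS subresources
registerRoute(
  ({request}) => request.destination === 'style' || request.destination === 'script',
  new CacheFirst({
    cacheName: 'revisioned-assets',
    plugins: [revisionedAssetsPlugin],
  }),
);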

Override web page's javascript function using firefox addon sdk

I'm trying to override a JS function named replaceMe in the web page from my add-on's content script, but I see that the original function implementation always gets executed.
Original HTML contains the following function definition:
function replaceMe()
{
  alert('original');
}
I'm trying to override it in my add-on (main.js) like this:
tabs.activeTab.attach({
  contentScriptFile: self.data.url("replacerContent.js")
});
Here's what my replacerContent.js looks like:
this.replaceMe = function()
{
  alert('overridden');
};
However, when I run my add-on, I always see the text original being alerted, meaning the redefinition in replacerContent.js never took effect. Can you let me know why? Since replaceMe is not a privileged method, I should be allowed to override it, eh?
This is because there is an intentional security boundary between web content and content scripts. If you want to communicate with web content, and you have control over the web page as well, you should use postMessage.
If you don't have control over the web page, there is a hacky workaround. In your content script you can access the window object of the page directly via the global variable unsafeWindow:
var aliased = unsafeWindow.somefunction;
unsafeWindow.somefunction = function(args) {
  // do stuff
  aliased(args);
};
There are two main caveats to this:
this is unsafe, so you should never trust data that comes from the page.
the unsafeWindow hack has never been considered safe, and there are plans to remove it and replace it with a safer API.
Rather than relying on unsafeWindow hack, consider using the DOM.
You can create a page script from a content script:
var script = 'rwt = function() {};';

document.addEventListener('DOMContentLoaded', function() {
  var scriptEl = document.createElement('script');
  scriptEl.textContent = script;
  document.head.appendChild(scriptEl);
});
The benefit of this approach is that you can use it in environments without unsafeWindow, e.g. Chrome extensions.
You can then use postMessage or DOM events to communicate between the page script and the content script.
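As a sketch of the DOM-events option (the event name my-addon-request is an assumption; plain strings in detail cross the boundary safely, while complex objects may need extra care):

// In the injected page script: listen for messages from the content script
document.addEventListener('my-addon-request', function (e) {
  console.log('page script received: ' + e.detail);
});

// In the content script: send a message to the page script
document.dispatchEvent(new CustomEvent('my-addon-request', {detail: 'ping'}));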

call custom js function in UIWebView

I'm trying to pre-fill the login fields of a forum. However, I don't own the forum. So how do I link my own .js file so that I can fire a function that will pre-fill the login fields?
(Remember, I don't own the servers that host the HTML files, so I cannot hook it up via HTML.)
You can inject your own JavaScript into a page being displayed by a UIWebView:
1) Put your JavaScript into a file in your app bundle; for example, something like this will inject myFunction():
var script = document.createElement('script');
script.type = 'text/javascript';
script.text = "function myFunction() { alert('my function'); }";
document.getElementsByTagName('head')[0].appendChild(script);
2) Load the .js file and run it using stringByEvaluatingJavaScriptFromString:.
3) If it's important that myFunction() doesn't get added until the DOM has loaded, then within the same .js file add some other JavaScript that ensures the code in part 1) doesn't run until you get a DOM loaded event.
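A minimal sketch of part 3, with the injection from part 1 wrapped in a function (injectMyFunction is a hypothetical name):

function injectMyFunction() {
  var script = document.createElement('script');
  script.type = 'text/javascript';
  script.text = "function myFunction() { alert('my function'); }";
  document.getElementsByTagName('head')[0].appendChild(script);
}

if (document.readyState === 'loading') {
  // DOM not parsed yet: wait for it
  document.addEventListener('DOMContentLoaded', injectMyFunction);
} else {
  // DOM already loaded: inject immediately
  injectMyFunction();
}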
Cross-domain JavaScript fails every time.
Use AJAX to retrieve the page you wish:
$(document).ready(function() {
  jQuery.ajax('forumlogin').done(function(data) {
    $('body').html(data);
  });
});
Then fill in the form using its form elements.
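For the final step, a sketch of the pre-fill itself; the field names username and password are assumptions about the forum's markup:

// Pre-fill the login form once its markup is in the page
var form = document.forms[0];
form.elements['username'].value = 'myUser';
form.elements['password'].value = 'myPass';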

can't get a ff extension to work in v3.0.5

Does anyone know what might have changed since v3.0.5 that would enable extensions to work? Or, maybe I'm missing a setting somewhere? I wrote this add-on that works fine with newer versions, but I can't get it to launch in older ones. Specifically, I can't even get this part to work (this is in my browser overlay.xul):
<html:script>
<![CDATA[
var Cc = Components.classes;
var Ci = Components.interfaces;
var obSvc = Cc["@mozilla.org/observer-service;1"].getService(Ci.nsIObserverService);

gBrowser.consoleService = Cc["@mozilla.org/consoleservice;1"].getService(Ci.nsIConsoleService);

gBrowser.log = function(msg) {
  this.consoleService.logStringMessage(msg);
}

gBrowser.newObj = new MyAddOn();
gBrowser.log("initializing...");

function regListener() {
  obSvc.addObserver(gBrowser.newObj, "http-on-modify-request", false);
}

function unregListener() {
  obSvc.removeObserver(gBrowser.newObj, "http-on-modify-request");
}

window.addEventListener("load", regListener, false);
window.addEventListener("unload", unregListener, false);
]]>
</html:script>
This should attach listeners to the new object (defined by a linked .js file). However, I'm not even getting the "initializing..." message in the console. Any ideas?
Don't use <html:script>, use <script> (assuming you have xmlns="http://www.mozilla.org/keymaster/gatekeeper/there.is.only.xul" on your root <overlay> element).
Don't register an application-global listener (http-on-modify-request) from a window overlay. Doing so will make your code run one time in each window the user may have open. Use an XPCOM component instead - https://developer.mozilla.org/en/Setting_HTTP_request_headers
Don't pollute common objects (like gBrowser or the global object, with var Cc) with your own properties. If everyone did that, no two extensions would work together. Put all your code properties on your own object with a unique name (see the sketch after this list).
Accessing gBrowser before the load event is probably what's causing your specific problem.
Set up your environment and check the Error Console to debug problems.
Don't waste time trying to support Firefox 3. It hasn't been supported by Mozilla itself for over a year and shouldn't be used to access the web.
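A minimal sketch of the namespacing advice above (MyAddOnNS is a placeholder name):

// Keep everything the add-on defines under one uniquely named object
var MyAddOnNS = {
  Cc: Components.classes,
  Ci: Components.interfaces,
  log: function(msg) {
    var consoleService = this.Cc["@mozilla.org/consoleservice;1"]
      .getService(this.Ci.nsIConsoleService);
    consoleService.logStringMessage(msg);
  }
};

MyAddOnNS.log("initializing...");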
It looks like gBrowser.log is not defined, or at least is not a function, as the error console will probably tell you. I've never heard of it either. Maybe it was added in Fx 3.5?
