Something I'm running into with a project I'm working on: I have a single page application, and I'm handling browser navigation routing on the client side, which lets me dynamically import some modules whenever a route is matched. My routing setup looks a bit like this:
router.setRoutes([
  {
    path: '/',
    component: 'app-main', // statically loaded
  },
  {
    path: '/posts',
    component: 'app-posts',
    action: () => { import('./app-posts.js').catch(() => Router.go('/offline')); } // dynamically loaded
  },
  {
    path: '/offline',
    component: 'app-offline', // also statically loaded
  }
]);
I'm caching the app shell in my service worker, which means the main page and the offline page get precached, and the posts page should get cached at runtime (once requested, i.e. when the user clicks the posts link).
So my precache manifest caches: main.js, offline.js, and my index.html.
Where I'm hitting a bump is:
The user loses network connection
The user tries to go to the posts page
The dynamic import for this may fail if it hasn't been requested and cached before (and the user will be redirected to the offline page)
But when my user regains network connectivity and clicks the posts link, the dynamic import will still fail; I'm guessing because the browser dedupes dynamic imports.
Which is a huge shame, because my user has a network connection, so this request should succeed! The only way I can figure out to deal with this is to have the user reload the page and request the posts page again.
So my question is, how should I go about this?
I solved it by checking whether the user has a network connection before trying to do the import, like this:
function handleRoute(url) {
  if ('onLine' in navigator) {
    if (navigator.onLine) {
      // user is online, safe to import
      import(url);
    } else {
      // user is offline, don't even try to import -> straight to `/offline`
      Router.go('/offline');
    }
  } else {
    // in case the browser doesn't support `navigator.onLine`, try to import anyway
    import(url);
  }
}
It feels like a slightly hacky way to do it, but then again browser support for navigator.onLine is pretty good.
For more information on retrying failed dynamic imports, I found this issue in the tc39/proposal-dynamic-import GitHub repo.
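One workaround that comes up in that thread is retrying with a cache-busting query string, so the browser sees a fresh module specifier instead of replaying the earlier failed import. A minimal sketch, assuming your bundler tolerates dynamic specifiers:

// Sketch only: retry a failed dynamic import with a cache-busting query.
async function importWithRetry(url, retries = 2) {
  for (let i = 0; i <= retries; i++) {
    try {
      // On retries, append a throwaway query so the browser treats the
      // specifier as a new module rather than reusing the failed one.
      return await import(i === 0 ? url : `${url}?t=${Date.now()}`);
    } catch (error) {
      if (i === retries) throw error;
    }
  }
}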
The browser is not doing that sort of caching/deduping/whatever. You can verify this easily by calling import from the console and toggling your network online/offline.
So the problem is most likely with the framework you're using. The caching of the first call to the route action happens somewhere in the framework. Maybe consult the documentation?
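For example, with the DevTools Network panel set to "Offline" you can run something like this from the console, then flip back to "Online" and run it again; the second call succeeds, which shows the browser is not caching the failure:

// rejects while offline, resolves once you're back online:
import('./app-posts.js').then(m => console.log('loaded', m), e => console.warn('failed', e));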
I'm currently evaluating whether k6 fits our load-testing needs. We have a fairly traditional website architecture that uses Apache webservers with PHP and a MySQL database. Sending simple HTTP requests with k6 looks simple enough, and I think we will be able to test all major functionality with it, as we don't rely on JavaScript that much and most pages are static.
However, I'm unsure how to deal with resources (stylesheets, images, etc.) that are referenced in the HTML that is returned in the requests. We need to load them as well, as this sometimes leads to database requests, which must be part of the load test.
Is there some out-of-the-box functionality in k6 that allows you to load all the resources like a browser would? I'm aware that k6 does NOT render the page and I don't need it to. I only need to request all the resources inside the HTML.
You basically have two options, both with their caveats:
Record your session: you can either export a HAR directly from the browser, or use an extension made for your browser (there are ones for both Firefox and Chrome). Both should be usable without a k6 Cloud account; you just need to set them to download the HAR, and they will automatically (and somewhat silently) download it when you hit stop. Then use either the built-in k6 HAR converter (which is deprecated, but still works) or the newer har-to-k6 one.
This method is particularly good if you have a lot of pages and/or resources, and it even works for single-page-style applications, since it just captures what the browser requested as a HAR and transforms it into a script. If there is nothing dynamic that needs to be entered (username/password), the final script can usually be used as is.
The biggest problem with this approach is that if you add a CSS file you need to redo the whole exercise. This is even more problematic if your CSS/JS file names change on every build or something like that, which is what the next method is good for:
Use parseHTML and then find the elements you care about and make a request for them.
import http from "k6/http";
import { parseHTML } from "k6/html";

export default function () {
  const res = http.get("https://stackoverflow.com");
  const doc = parseHTML(res.body);
  doc.find("link").toArray().forEach(function (item) {
    console.log(item.attr("href"));
    // make an http get for it,
    // or add them to an array and make one batch request
  });
}
will produce
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/favicon.ico?v=4f32ecc8f43d
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/img/apple-touch-icon.png?v=c78bd457575a
INFO[0001] /opensearch.xml
INFO[0001] https://cdn.sstatic.net/Shared/stacks.css?v=53507c7c6e93
INFO[0001] https://cdn.sstatic.net/Sites/stackoverflow/primary.css?v=d3fa9a72fd53
INFO[0001] https://cdn.sstatic.net/Shared/Product/product.css?v=c9b2e1772562
INFO[0001] /feeds
INFO[0001] https://cdn.sstatic.net/Shared/Channels/channels.css?v=f9809e9ffa90
As you can see, some of the URLs are relative rather than absolute, so you will need to handle that. And in this example only some of them are CSS, so more filtering is probably needed.
The problem here is that you need to write the code, and if you later add a relative link or something else new, you need to handle that too. Luckily k6 is scriptable, so you can reuse the code :D.
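To illustrate, here is a rough sketch of the resolving and filtering, with deliberately naive URL handling (it ignores "../" paths and the like):

import http from "k6/http";
import { parseHTML } from "k6/html";

// naive relative-to-absolute resolution; good enough for a sketch
function toAbsolute(href, base) {
  if (href.startsWith("http://") || href.startsWith("https://")) return href;
  if (href.startsWith("//")) return "https:" + href; // protocol-relative
  if (href.startsWith("/")) return base + href;      // root-relative
  return base + "/" + href;                          // naive fallback
}

export default function () {
  const base = "https://stackoverflow.com";
  const doc = parseHTML(http.get(base).body);
  // filter down to stylesheets only, then resolve and batch
  const urls = doc.find('link[rel="stylesheet"]').toArray()
    .map((item) => toAbsolute(item.attr("href"), base));
  http.batch(urls.map((u) => ["GET", u]));
}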
I've followed Михаил Стойков's suggestion and written my own function to load resources. You can set the way resources are loaded (batch or sequential gets) with options.concurrentResourceLoading.
import http from 'k6/http';
import { check } from 'k6';

// Note: `options`, `resolveUrl` and `createHeader` are my own helpers/config,
// defined elsewhere in the script.

/**
 * @param {http.RefinedResponse<http.ResponseType>} response
 */
export function getResources(response) {
  const resources = [];
  response
    .html()
    .find('*[href]:not(a)')
    .each((index, element) => {
      resources.push(element.attributes().href.value);
    });
  response
    .html()
    .find('*[src]:not(a)')
    .each((index, element) => {
      resources.push(element.attributes().src.value);
    });
  if (options.concurrentResourceLoading) {
    const responses = http.batch(
      resources.map((r) => {
        return ['GET', resolveUrl(r, response.url), null, {
          headers: createHeader()
        }];
      })
    );
    responses.forEach((res) => {
      check(res, {
        'resource returns status 200': (r) => r.status === 200,
      });
    });
  } else {
    resources.forEach((r) => {
      const res = http.get(resolveUrl(r, response.url), {
        headers: createHeader(),
      });
      check(res, {
        'resource returns status 200': (r) => r.status === 200,
      });
    });
  }
}
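A usage sketch; the module path is hypothetical, and options, resolveUrl and createHeader still have to be defined in your own script:

import http from 'k6/http';
import { getResources } from './getResources.js'; // hypothetical path

export default function () {
  const res = http.get('https://example.com/');
  // fetch the page's subresources the way a browser would
  getResources(res);
}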
Usually whenever I read a blog post about PWAs, the tutorial seems to just precache every single asset. But this seems to go against the app shell pattern a bit, which as I understand it is: cache the bare necessities (only the app shell), and runtime-cache as you go. (Please correct me if I've understood this incorrectly.)
Imagine I have this single page application; it's a simple index.html with a web component: <my-app>. That <my-app> component sets up some routes which look a little bit like this. I'm using Vaadin Router and web components, but I imagine the problem would be the same using React with React Router or something similar.
router.setRoutes([
  {
    path: '/',
    component: 'app-main', // statically loaded
  },
  {
    path: '/posts',
    component: 'app-posts',
    action: () => { import('./app-posts.js'); } // dynamically loaded
  },
  /* many, many, many more routes */
  {
    path: '/offline', // redirect here when a resource is not cached and failed to get from network
    component: 'app-offline', // also statically loaded
  }
]);
My app may have many, many routes and may get very large. I don't want to precache all those resources straight away, but only the stuff I absolutely need, so in this case: my index.html, my-app.js, app-main.js, and app-offline.js. I want to cache app-posts.js at runtime, when it's requested.
Setting up runtime caching is simple enough, but my problem arises when my user visits one of the potentially many routes that is not cached yet (maybe the user hasn't visited that route before, so the JS file may not have been loaded/cached yet) and the user has no internet connection.
What I want to happen, in that case (when a route is not cached yet and there is no network), is for the user to be redirected to the /offline route, which is handled by my client side router. I could easily do something like: import('./app-posts.js').catch(() => /* redirect user to /offline */), but I'm wondering if there is a way to achieve this from workbox itself.
So in a nutshell:
When a js file hasn't been cached yet, and the user has no network, and so the request for the file fails: let workbox redirect the page to the /offline route.
Option 1 (not always useful):
As far as I can see, and according to this answer, you cannot open a new window or change the URL of the browser from within the service worker. You can open a new window only if clients.openWindow() is called from within the notificationclick event.
Option 2 (hardest):
You could use the WindowClient.navigate method within the activate event of the service worker; however, it is a bit trickier, as you still need to check whether the requested file exists in the cache or not.
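A rough sketch of the idea; note that this version does the check in the fetch handler rather than activate, and assumes the requesting window is reachable via event.clientId:

self.addEventListener('fetch', (event) => {
  event.respondWith(
    fetch(event.request).catch(async () => {
      // no network: serve from cache if we have it
      const cached = await caches.match(event.request);
      if (cached) return cached;
      // not cached either: steer the open window to the offline route
      const client = await self.clients.get(event.clientId);
      if (client && client.navigate) client.navigate('/offline');
      return Response.error();
    })
  );
});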
Option 3 (easiest & hackiest):
Otherwise, you could respond with a new Request object to the offline page:
const cacheOnly = new workbox.strategies.CacheOnly();
const networkFirst = new workbox.strategies.NetworkFirst();

workbox.routing.registerRoute(
  /\/posts.|\/articles/,
  async args => {
    const offlineRequest = new Request('/offline.html');
    try {
      const response = await networkFirst.handle(args);
      return response || await cacheOnly.handle({request: offlineRequest});
    } catch (error) {
      return await cacheOnly.handle({request: offlineRequest});
    }
  }
);
and then rewrite the URL of the browser in your offline.html file:
<head>
  <script>
    window.history.replaceState({}, 'You are offline', '/offline');
  </script>
</head>
The logic in Option 3 will respond to the requested URL by trying the network first. If the network is not available, it will fall back to the cache, and if the request is not found in the cache either, it will serve the offline.html file instead. Once offline.html is parsed, the browser URL will be replaced with /offline.
Most examples of registering a Service worker do so through JavaScript. For example (From MDN):
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('service-worker.js', {
    scope: './'
  }).then(function (registration) {
    var serviceWorker;
    if (registration.installing) {
      serviceWorker = registration.installing;
      document.querySelector('#kind').textContent = 'installing';
    } else if (registration.waiting) {
      serviceWorker = registration.waiting;
      document.querySelector('#kind').textContent = 'waiting';
    } else if (registration.active) {
      serviceWorker = registration.active;
      document.querySelector('#kind').textContent = 'active';
    }
    if (serviceWorker) {
      // logState(serviceWorker.state);
      serviceWorker.addEventListener('statechange', function (e) {
        // logState(e.target.state);
      });
    }
  }).catch(function (error) {
    // Something went wrong during registration. The service-worker.js file
    // might be unavailable or contain a syntax error.
  });
} else {
  // The current browser doesn't support service workers.
}
But I noticed in the Web App Manifest standard that there is a serviceworker member:
"serviceworker": {
"src": "sw.js",
"scope": "/",
"update_via_cache": "none"
}
This is the only place I've seen this referred to.
This raises two questions for me:
1. Which approach SHOULD I use? What are the trade-offs?
The declarative benefit of the manifest approach is obvious, but if I use it, how do I reference the registration object in order to track events the way the script approach does (installing | waiting | active | failed)?
Assuming it IS possible to reference the registration object appropriately, can it miss events? For example, could installation finish before I manage to attach an event listener?
2. What are the caching implications?
Since the manifest would be saved in the offline cache, and the manifest references the service worker script, what are the cache implications? Does the 24-hour rule still apply, assuming I do NOT store the script in the offline cache? The update_via_cache member is not a simple thing to read in the spec.
It looks like it was added to the spec back in October of 2016, and there is some background discussion in the issue tracker.
My interpretation is that the use case is providing service worker bootstrapping metadata that is relevant when installing a web app via a non-browser mechanism, e.g. via an app store. I don't see any mention of that field in the guidance about Microsoft Store ingestion, though.
So, as of right now, I am not aware of any browser that honors the serviceworker field in the web app manifest, and if your concern is having a functional service worker registration for "browser" use cases, do it using JavaScript.
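Note that the update_via_cache half of that manifest member does have a JavaScript counterpart that browsers ship today: register() accepts an updateViaCache option, so for that particular behavior you don't need the manifest field:

if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('sw.js', {
    scope: '/',
    // 'none' bypasses the HTTP cache when checking for updates to the
    // service worker script and its importScripts() dependencies
    updateViaCache: 'none'
  });
}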
Your best bet for follow ups would be to ask on the web app manifest's GitHub issue tracker.
I have an Ember CLI app with a Rails back-end API. I am trying to set up end-to-end testing by configuring the Ember app test suite to send requests to a copy of the Rails API. My tests are working, but I am getting the following strange error frequently:
{}
Expected: true
Result: false
at http://localhost:7357/assets/test-support.js:4519:13
at exports.default._emberTestingAdaptersAdapter.default.extend.exception (http://localhost:7357/assets/vendor.js:52144:7)
at onerrorDefault (http://localhost:7357/assets/vendor.js:42846:24)
at Object.exports.default.trigger (http://localhost:7357/assets/vendor.js:67064:11)
at Promise._onerror (http://localhost:7357/assets/vendor.js:68030:22)
at publishRejection (http://localhost:7357/assets/vendor.js:66337:15)
This seems to occur whenever a request is made to the server. An example test script which recreates this is below. It is a simple test which checks that if a user clicks the 'login' button without entering any email/password information, they are not logged in. The test passes, but I also get the above error before it passes. I think this is something to do with connecting to the Rails server, but I have no idea how to investigate or fix it, so I'd be very grateful for any help.
Many thanks.
import Ember from 'ember';
import { module, test } from 'qunit';
import startApp from 'mercury-ember/tests/helpers/start-app';
module('Acceptance | login test', {
  beforeEach: function() {
    this.application = startApp();
  },

  afterEach: function() {
    Ember.run(this.application, 'destroy');
  }
});
test('Initial Login Test', function(assert) {
  visit('/');

  andThen(function() {
    // Leaving identification and password fields blank
    click(".btn.login-submit");

    andThen(function() {
      assert.equal(currentSession().get('user_email'), null,
        "User fails to login when identification and password fields left blank");
    });
  });
});
You can check in the Network panel of Chrome or Firefox developer tools that the request is being made. At least with ember-qunit you can do this by getting ember-cli to run the tests within the browser rather than with Phantom.js/command-line.
That would help you figure out if it's hitting the Rails server at all (the URL could be incorrect or using the wrong port number?)
You may also want to check whether there is code that needs to be torn down. Remember that in a test environment the same browser instance is used for every test, so all objects need to be torn down: timeouts/intervals need to be stopped, events need to be unbound, and so on.
We ran into this a few times: a utility that sent an AJAX request every 30 seconds caused no errors in production, but in testing it was a problem, because it bound itself to the window (outside of the iframe) and kept making requests even after the tests were torn down.
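As an illustration (the names here are made up), giving that kind of utility an explicit teardown hook and calling it from afterEach is usually enough:

// illustrative sketch of a polling utility that can actually be torn down
const poller = {
  start() {
    this._id = window.setInterval(() => this.ping(), 30000);
  },
  stop() {
    // without this, the interval outlives the test and keeps firing requests
    window.clearInterval(this._id);
  },
  ping() { /* send the AJAX request */ }
};

// in the test module:
// afterEach: function() { poller.stop(); Ember.run(this.application, 'destroy'); }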
I want to prefix URLs which match my patterns. When I open a new tab in Firefox and enter a matching URL, the page should not be loaded normally; the URL should first be modified, and only then should the page start loading.
Is it possible to modify a URL through a Mozilla Firefox add-on before the page starts loading?
Browsing the HTTPS Everywhere add-on suggests the following steps:
Register an observer for the "http-on-modify-request" observer topic with nsIObserverService
Proceed if the subject of your observer notification is an instance of nsIHttpChannel and subject.URI.spec (the URL) matches your criteria
Create a new nsIStandardURL
Create a new nsIHttpChannel
Replace the old channel with the new. The code for doing this in HTTPS Everywhere is quite dense and probably much more than you need. I'd suggest starting with chrome/content/IOUtils.js.
Note that you should register a single "http-on-modify-request" observer for your entire application, which means you should put it in an XPCOM component (see HTTPS Everywhere for an example).
The following articles do not solve your problem directly, but they do contain a lot of sample code that you might find helpful:
https://developer.mozilla.org/en/Setting_HTTP_request_headers
https://developer.mozilla.org/en/XUL_School/Intercepting_Page_Loads
Thanks to Iwburk, I have been able to do this.
We can do this by overriding the nsIHttpChannel with a new one. Doing this is slightly complicated, but luckily the add-on HTTPS Everywhere implements it to force an HTTPS connection.
https-everywhere's source code is available here
Most of the code needed for this is in the files
IOUtils.js
ChannelReplacement.js
We can work with the above files alone, provided we have the basic variables like Cc, Ci set up and the function xpcom_generateQI defined.
var httpRequestObserver = {
  observe: function(subject, topic, data) {
    if (topic == "http-on-modify-request") {
      var httpChannel = subject.QueryInterface(Components.interfaces.nsIHttpChannel);
      var requestURL = subject.URI.spec;
      // isToBeReplaced() and getURL() are my own matching/rewriting helpers
      if (isToBeReplaced(requestURL)) {
        var newURL = getURL(requestURL);
        ChannelReplacement.runWhenPending(subject, function() {
          var cr = new ChannelReplacement(subject, newURL);
          cr.replace(true, null);
          cr.open();
        });
      }
    }
  },

  get observerService() {
    return Components.classes["@mozilla.org/observer-service;1"]
      .getService(Components.interfaces.nsIObserverService);
  },

  register: function() {
    this.observerService.addObserver(this, "http-on-modify-request", false);
  },

  unregister: function() {
    this.observerService.removeObserver(this, "http-on-modify-request");
  }
};

httpRequestObserver.register();
The code will replace the request, not redirect it.
While I have tested the above code well enough, I am not sure about its implementation details. As far as I can make out, it copies all the attributes of the requested channel and sets them on the replacement channel, after which the output requested by the original request is somehow supplied through the new channel.
P.S. I had seen an SO post in which this approach was suggested.
You could listen for the page load event or maybe the DOMContentLoaded event instead. Or you can make an nsIURIContentListener but that's probably more complicated.
Is it possible to modify a URL through a Mozilla Firefox add-on before the page starts loading?
YES it is possible.
Use page-mod of the Addon-SDK by setting contentScriptWhen: "start"
Then, after completely preventing the document from being parsed, you can either:
1. fetch a different document from the same domain and inject it into the page, or
2. do a location.replace() call after some document.URL processing.
Here is an example of doing 1: https://stackoverflow.com/a/36097573/6085033
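And a minimal sketch of approach 2 with page-mod; the include pattern and the http-to-https rewrite are placeholders for your own logic:

// Addon SDK sketch: run before parsing starts, then swap the URL
var pageMod = require("sdk/page-mod");

pageMod.PageMod({
  include: "*.example.com",       // your URL pattern here
  contentScriptWhen: "start",     // before the document is parsed
  contentScript: 'window.stop();' +
    'location.replace(document.URL.replace("http://", "https://"));'
});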