Wait Until OData Request Completed - odata

I have a requirement to call an OData service inside the onExit hook method. I have written the code below, but unfortunately the OData service is not getting called, as the view is destroyed immediately. Is there any way to wait, or to delay the destruction of the view, until the OData read request is complete?
window.addEventListener("beforeunload", (event) => {
this.fnUnlockGremienVersion();
var s;
event = event || window.event;
if (this._PreviousGreVersion) {
s = "Your most recent changes are still being saved. " +
"If you close the window now, they may not be saved.";
event.returnValue = s;
return s;
}
});

UI5 has no support for such a workflow, and it's not just UI5: web pages in general can't block or trap a user within a page, for good reasons. Just remember the annoying un-closable web pages of the 2000s. Browsers dropped support for such things, except for a very simple pop-up API, which UI5 supports.
I assume you are running in the Fiori launchpad shell.
Use the dirty flag properly (set it as long as there are unsaved changes or a request is still running) and the user will get a confirmation popup.
https://sapui5.hana.ondemand.com/sdk/#/api/sap.ushell.services.Container%23methods/setDirtyFlag
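A minimal sketch of that approach, assuming an OData v2 model named oModel and a hypothetical entity set; setDirtyFlag is the documented ushell API linked above:
sap.ushell.Container.setDirtyFlag(true); // unsaved work / request still in flight
oModel.read("/GremienVersionSet", { // hypothetical entity set
    success: function () {
        sap.ushell.Container.setDirtyFlag(false); // safe to navigate away now
    },
    error: function () {
        sap.ushell.Container.setDirtyFlag(false); // don't trap the user on failure
    }
});
While the flag is set, the shell asks the user for confirmation before leaving the app, instead of destroying the view silently.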

Related

Service-Worker stays in waiting state in Chrome

I'm working on an SPA with Vue. I'd like to update to a new service-worker when the user navigates to a specific page. That is a safe moment to refresh, because the view of the user already changes (a pattern discussed in this video: https://youtu.be/cElAoxhQz6w).
I have an issue where sometimes (infrequently) the service-worker won't activate when calling skipWaiting. The call is made correctly, and in Chrome I even get a response that the current service-worker stops (see animated GIF), but then the same service-worker starts running again instead of the waiting one.
After a while (1-2 minutes) the service-worker is suddenly activated. Not a situation you want, because it happens out of the blue when the user might be in the middle of an activity.
Also, once I am in this situation, I can't activate the service-worker by calling skipWaiting again (by doing multiple navigations). The message is received by the service-worker, but nothing happens; it stays in "waiting to activate". When I press skipWaiting in Chrome itself, it works.
I have no clue what goes wrong. Is this an issue with Chrome, Workbox, or something else?
The closest match is this topic: self.skipWaiting() not working in Service Worker
I use Vue.js, but I don't depend on the PWA plugin for the service-worker; I use the Workbox webpack plugin.
I've edited the example code below; the minimal code probably didn't show the problem well.
In main.js:
let sw = await navigator.serviceWorker.register("/service-worker.js", {
    updateViaCache: "none",
});
let firstSw = false;
navigator.serviceWorker.addEventListener("controllerchange", () => {
    // no need to refresh when the first sw controls the page, we solve this with clientsClaim
    // this makes sure that when multiple tabs are open, all of them refresh
    if (!firstSw) {
        window.location.reload();
    }
});
sw.onupdatefound = () => {
    const installingWorker = sw.installing;
    installingWorker.onstatechange = async () => {
        console.log("installing worker state-change: " + installingWorker.state);
        if (installingWorker.state === "installed") {
            if (navigator.serviceWorker.controller) {
                firstSw = false;
                // set the waiting service-worker in the store
                // so we can update it and refresh the page on navigation
                await store.dispatch("setWaitingSW", sw.waiting);
            } else {
                console.log("First sw available");
                firstSw = true;
            }
        }
    };
};
In router.js:
// after navigation to specific routes we check for a waiting service-worker.
router.afterEach(async (to) => {
    if (to.name == "specificpage") {
        let waitingSw = store.getters["getWaitingSW"];
        if (waitingSw) {
            waitingSw.postMessage("SKIP_WAITING");
            // clean the store, because we might have changed our data model
            await store.dispatch("cleanLocalForage");
        }
    }
});
In service-worker.js:
self.addEventListener("message", event => {
    if (event.data === "SKIP_WAITING") {
        console.log("sw received skip waiting");
        self.skipWaiting();
    }
});
skipWaiting() isn't instant. If there are active fetches going through the current service worker, it won't break those. If you're seeing skipWaiting() taking a long time, I'd guess you have some long-running HTTP connections holding the old service worker in place.
I'm not sure that
let sw = await navigator.serviceWorker.register("/service-worker.js", {updateViaCache: "none"});
if (sw.waiting) {
    sw.waiting.postMessage("SKIP_WAITING");
}
is the code that you want in this case. Your if (sw.waiting) check is only evaluated once, and the newly registered service worker might still be in the installing state when it's evaluated. If that's the case, then sw.waiting will be falsy at the time of the initial evaluation, though it may become truthy after a short period of time.
Instead, I'd recommend following a pattern like what's demonstrated in this recipe, where you explicitly listen for a service worker to enter the waiting state on the registration. That example uses the workbox-window library to paper over some of the details.
If you don't want to use workbox-window, you should follow this guidance: check to see if sw.installing is set after registration; if it is, listen to the statechange event on sw.installing to detect when it's 'installed'. Once that happens, sw.waiting should be set to the newly installed service worker, and at that point you could postMessage() to it.
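A minimal sketch of that guidance, inside an async function (the "SKIP_WAITING" message value matches the listener in the question's service-worker.js):
const reg = await navigator.serviceWorker.register("/service-worker.js", {
    updateViaCache: "none",
});
if (reg.waiting) {
    // An updated worker is already waiting.
    reg.waiting.postMessage("SKIP_WAITING");
} else if (reg.installing) {
    // Wait for the installing worker to reach 'installed'.
    reg.installing.addEventListener("statechange", (event) => {
        if (event.target.state === "installed" && reg.waiting) {
            reg.waiting.postMessage("SKIP_WAITING");
        }
    });
}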
OK, I had a similar issue and it took me two days to find the cause.
There is a scenario where you can cause a race condition between the new service worker and the old one if you request a precached asset at the exact same time you call skipWaiting.
For me, I was prompting the user to update to a new version, and upon their confirmation I was showing a loading spinner, a Vue SFC dynamic import, which kicked off a network request to the old service worker to fetch the precached js file. That basically caused both workers to hang and get very confused.
You can check whether you're having a similar issue by looking at the service-worker-specific network requests (the "Network requests" button in Chrome's service worker panel) and making sure they aren't happening the instant you're trying to skip waiting on your newer service worker.
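A hedged sketch of one way to avoid that race: load whatever the update UI needs before telling the new worker to skip waiting (the spinner path is hypothetical):
async function promptAndUpdate(waitingSw) {
    // Warm the module through the OLD worker first, so no precache fetch
    // is in flight when the new worker takes over.
    await import("./components/UpdateSpinner.vue"); // hypothetical path
    waitingSw.postMessage("SKIP_WAITING");
}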

Service worker to save form data when browser is offline

I am new to Service Workers, and have had a look through the various bits of documentation (Google, Mozilla, serviceworke.rs, GitHub, StackOverflow questions). The most helpful is the Service Worker Cookbook.
Most of the documentation seems to point to caching entire pages so that the app works completely offline, or redirecting the user to an offline page until the browser can reconnect to the internet.
What I want to do, however, is store my form data locally so my web app can upload it to the server when the user's connection is restored. Which "recipe" should I use? I think it is Request Deferrer. Do I need anything else to ensure that Request Deferrer will work (apart from the service worker detector script in my web page)? Any hints and tips much appreciated.
Console errors
The Request Deferrer recipe and code don't seem to work on their own, as they don't include file caching. I have added some caching for the service worker library files, but I am still getting this error when I submit the form while offline:
Console: {"lineNumber":0,"message":
"The FetchEvent for [the form URL] resulted in a network error response:
the promise was rejected.","message_level":2,"sourceIdentifier":1,"sourceURL":""}
My Service Worker
/* eslint-env es6 */
/* eslint no-unused-vars: 0 */
/* global importScripts, ServiceWorkerWare, localforage */
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');
// Determine the root for the routes. I.e., if the Service Worker URL is
// http://example.com/path/to/sw.js, then the root is http://example.com/path/to/
var root = (function() {
    var tokens = (self.location + '').split('/');
    tokens[tokens.length - 1] = '';
    return tokens.join('/');
})();
// By using Mozilla’s ServiceWorkerWare we can quickly set up some routes for a
// virtual server. It is worth reviewing the virtual server recipe before reading this.
var worker = new ServiceWorkerWare();
// So here is the idea: check whether we are online or not. If we are not online,
// enqueue the request and provide a fake response.
// Otherwise, flush the queue and let the new request reach the network.
// This function factory does exactly that.
function tryOrFallback(fakeResponse) {
    // Return a handler that…
    return function(req, res) {
        // If offline, enqueue and answer with the fake response.
        if (!navigator.onLine) {
            console.log('No network availability, enqueuing');
            return enqueue(req).then(function() {
                // As the fake response will be reused but Response objects are
                // one use only, we need to clone it each time we use it.
                return fakeResponse.clone();
            });
        }
        // If online, flush the queue and answer from the network.
        console.log('Network available! Flushing queue.');
        return flushQueue().then(function() {
            return fetch(req);
        });
    };
}
// A fake response with a joke for when there is no connection. A real
// implementation could have cached the last collection of updates and kept a
// local model. For simplicity, that is not implemented here.
worker.get(root + 'api/updates?*', tryOrFallback(new Response(
    JSON.stringify([{
        text: 'You are offline.',
        author: 'Oxford Brookes University',
        id: 1,
        isSticky: true
    }]),
    { headers: { 'Content-Type': 'application/json' } }
)));
// For deletion, let's simulate that all went OK. Notice we are omitting the body
// of the response: trying to add a body to a 204 No Content response throws an
// error. The status options must go in the init object (second argument) of the
// Response constructor, with null as the body.
worker.delete(root + 'api/updates/:id?*', tryOrFallback(new Response(null, {
    status: 204
})));
// Creation is another story. We cannot reach the server, so we cannot get the id
// for the new updates. No problem: just say we accept the creation and we will
// process it later, as soon as we recover connectivity.
worker.post(root + 'api/updates?*', tryOrFallback(new Response(null, {
    status: 202
})));
// Start the service worker.
worker.init();
// By using Mozilla’s localforage db wrapper, we can count on a fast setup for a
// versatile key-value database. We use it to store the queue of deferred requests.
// Enqueuing consists of adding a request to the list. Due to the limitations of
// IndexedDB, Request and Response objects cannot be saved directly, so we need an
// alternative representation. This is why we call serialize().
function enqueue(request) {
    return serialize(request).then(function(serialized) {
        // The inner promise chain must be returned; otherwise enqueue() would
        // resolve before the queue is actually persisted.
        return localforage.getItem('queue').then(function(queue) {
            /* eslint no-param-reassign: 0 */
            queue = queue || [];
            queue.push(serialized);
            return localforage.setItem('queue', queue).then(function() {
                console.log(serialized.method, serialized.url, 'enqueued!');
            });
        });
    });
}
// Flushing is a little more complicated. It consists of getting the elements of
// the queue in order and sending each one, keeping track of the requests not yet
// sent. Before sending a request we need to recreate it from the alternative
// representation stored in IndexedDB.
function flushQueue() {
    // Get the queue.
    return localforage.getItem('queue').then(function(queue) {
        /* eslint no-param-reassign: 0 */
        queue = queue || [];
        // If empty, nothing to do!
        if (!queue.length) {
            return Promise.resolve();
        }
        // Else, send the requests in order…
        console.log('Sending ', queue.length, ' requests...');
        return sendInOrder(queue).then(function() {
            // Requires error handling: this assumes all the requests in the queue
            // succeed when reaching the network. It should instead empty the queue
            // step by step, only popping a request once it completes successfully.
            return localforage.setItem('queue', []);
        });
    });
}
// Send the requests inside the queue in order, waiting for the current one to
// finish before sending the next one.
function sendInOrder(requests) {
    // The reduce() chains one promise per serialized request, not allowing
    // progress to the next one until the current one completes.
    var sending = requests.reduce(function(prevPromise, serialized) {
        console.log('Sending', serialized.method, serialized.url);
        return prevPromise.then(function() {
            return deserialize(serialized).then(function(request) {
                return fetch(request);
            });
        });
    }, Promise.resolve());
    return sending;
}
// Serializing is a little bit convoluted because headers are not a simple object.
function serialize(request) {
    var headers = {};
    // for(... of ...) is ES6 notation, but the browsers that currently support
    // service workers support it as well, and it is the only way of retrieving
    // all the headers.
    for (var entry of request.headers.entries()) {
        headers[entry[0]] = entry[1];
    }
    var serialized = {
        url: request.url,
        headers: headers,
        method: request.method,
        mode: request.mode,
        credentials: request.credentials,
        cache: request.cache,
        redirect: request.redirect,
        referrer: request.referrer
    };
    // Only if the method is not GET or HEAD is the request allowed to have a body.
    if (request.method !== 'GET' && request.method !== 'HEAD') {
        return request.clone().text().then(function(body) {
            serialized.body = body;
            return Promise.resolve(serialized);
        });
    }
    return Promise.resolve(serialized);
}
// Compared to serialize(), deserialize() is pretty simple.
function deserialize(data) {
    return Promise.resolve(new Request(data.url, data));
}
var CACHE = 'cache-only';
// On install, cache some resources.
self.addEventListener('install', function(evt) {
    console.log('The service worker is being installed.');
    // Ask the service worker to keep installing until the returned promise resolves.
    evt.waitUntil(precache());
});
// On fetch, use a cache-only strategy.
self.addEventListener('fetch', function(evt) {
    console.log('The service worker is serving the asset.');
    evt.respondWith(fromCache(evt.request));
});
// Open a cache and use `addAll()` with an array of assets to add all of them to
// the cache. Return a promise that resolves when all the assets are added.
function precache() {
    return caches.open(CACHE).then(function (cache) {
        return cache.addAll([
            '/js/lib/ServiceWorkerWare.js',
            '/js/lib/localforage.js',
            '/js/settings.js'
        ]);
    });
}
// Open the cache where the assets were stored and search for the requested
// resource. Notice that in case of no match, cache.match() still resolves,
// but with `undefined` as its value, so we turn that into a rejection.
function fromCache(request) {
    return caches.open(CACHE).then(function (cache) {
        return cache.match(request).then(function (matching) {
            return matching || Promise.reject('no-match');
        });
    });
}
Here is the error message I am getting in Chrome when I go offline:
(A similar error occurred in Firefox - it falls over at line 409 of ServiceWorkerWare.js)
ServiceWorkerWare.prototype.executeMiddleware = function (middleware, request) {
    var response = this.runMiddleware(middleware, 0, request, null);
    response.catch(function (error) { console.error(error); });
    return response;
};
This is a little more advanced than beginner level, but you will need to detect when you are offline or in a "lie-fi" state (a connection that is technically there but barely usable). Instead of POSTing data to an API or endpoint right away, you need to queue that data to be synced when you are back online.
This is what the Background Sync API should help with. However, it is not supported across the board just yet. Plus Safari...
So a good strategy may be to persist your data in IndexedDB, and when you can connect (Background Sync fires an event for this) you then POST the data, as sketched below. It gets a little more complex for browsers that don't support service workers (Safari) or don't yet have Background Sync (which should level out very soon).
As always, design your code to be a progressive enhancement, which can be tricky, but worth it in the end.
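A rough sketch of that queue-then-sync strategy; the store key, sync tag, and endpoint are hypothetical, and localforage stands in for raw IndexedDB:
// In the page: save the form data locally, then request a background sync.
async function queueFormData(data) {
    await localforage.setItem('pending-form', data);
    const reg = await navigator.serviceWorker.ready;
    if ('sync' in reg) {
        await reg.sync.register('submit-form'); // fires once connectivity returns
    } else {
        await submitNow(data); // hypothetical fallback when Background Sync is missing
    }
}
// In the service worker: replay the queued data when the sync event fires.
self.addEventListener('sync', function(event) {
    if (event.tag === 'submit-form') {
        event.waitUntil(
            localforage.getItem('pending-form').then(function(data) {
                return fetch('/api/form', { // hypothetical endpoint
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify(data)
                });
            }).then(function() {
                return localforage.removeItem('pending-form');
            })
        );
    }
});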
Service Workers tend to cache the static HTML, CSS, JavaScript, and image files.
I need to use PouchDB and sync it with CouchDB.
Why CouchDB?
- CouchDB is a NoSQL database consisting of a number of documents created with JSON.
- It has versioning (each document has a _rev property with the last modified date).
- It can be synchronised with PouchDB, a local JavaScript application that stores data in the browser using IndexedDB. This allows us to create offline applications.
- The two databases are both "master" copies of the data.
PouchDB is a local JavaScript implementation of CouchDB.
I still need a better answer than my partial notes towards a solution! (A minimal sync sketch follows below.)
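A minimal PouchDB sync sketch along the lines of these notes; the remote CouchDB URL and database name are hypothetical:
var localDB = new PouchDB('form-data');
var remoteDB = new PouchDB('https://example.com/couchdb/form-data'); // hypothetical URL
// Live, retrying two-way replication: both copies act as masters.
localDB.sync(remoteDB, { live: true, retry: true })
    .on('error', function (err) { console.error('sync error', err); });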
Yes, this type of service worker is the correct one to use for saving form data offline.
I have now edited it and understand it better. It caches the form data and loads it on the page for the user to see what they have entered.
It is worth noting that the paths to the library files will need editing to reflect your local directory structure, e.g. in my setup:
importScripts('/js/lib/ServiceWorkerWare.js');
importScripts('/js/lib/localforage.js');
The script is still failing when offline, however, as it isn't caching the library files. (Update to follow when I figure out caching.)
I just discovered an extra debugging tool for service workers (apart from the console): chrome://serviceworker-internals/. There you can start or stop service workers, view console messages, and see the resources used by the service worker.

Google realtime Model update onFileLoad

I see that the workflow is to start the authorizer, giving it a file loader. So we have a sequence of callbacks: onAuthorized => start loading the file => doc.getModel() on file load. Here they say how you get the model. But I also see that gapi.drive.realtime.load(fileId, onFileLoaded, initializeModel, handleErrors) can also end up with TOKEN_REFRESH_REQUIRED, and it seems that TOKEN_REFRESH_REQUIRED can fire after the document is loaded, after some period of user inactivity, which seems to be related to token expiration. How should re-authorization go? Should I tell the client that the current model they are connected to is invalid? Please note that my app starts on file load. So if I go through the whole re-authorization stack, which calls another file load, which calls another document-loaded handler, it will restart my application. Is that the intended way to go? To put it in other words, is there a way to refresh the token without losing the existing connection?
Where is the token actually stored? I do not see that I receive it on authorization, and it is not passed to realtime.load. How does realtime.load know about the token? And how can I speed up the token expiration for debugging?
I am still not sure that this is the right answer, but it is what I have gathered from looking at the code here, which suggests we should provide an empty callback to re-authorize:
/**
 * Reauthorize the client with no callback (used for authorization failure).
 * @param onAuthComplete {Function} to call once authorization has completed.
 */
rtclient.Authorizer.prototype.authorize = function(onAuthComplete) {
    function authorize() {
        gapi.auth.authorize({client_id: rtclient.id, scope: ['install', 'file']}, handleAuthResult)
    }
    function handleAuthResult(authResult) {
        if (authResult && !authResult.error) {
            hideAuthorizationButton() && onAuthComplete()
        } else with (authorizationButton) {
            display = 'block';
            onclick = authorize;
        }
    }
}
You first call it in a function to load your document:
(rtclient.authorizer ? rtclient.authorizer = identity : rtclient.authorize)(proceedToLoadingTheFile)
But later, on timeout, we have this code:
function handleErrors(e) { with (gapi.drive.realtime.ErrorType) { switch (e.type) {
    case TOKEN_REFRESH_REQUIRED: rtclient.authorizer.authorize(); break
    case CLIENT_ERROR: ...
Note that there are no arguments in the latter call, so the Authorizer won't reload the document. I think this explains the logic being asked about. However, it does not answer the question about the internals: how is it possible that the loader picks up an existing authorizer or switches to a new one?

IE 11 + SignalR not working

Strange behavior is happening when using SignalR with IE 11. The scenario:
We have some dispatcher-type functionality where the dispatcher performs some actions and the other user can see the updates live (querying). The parameters that are sent come through fine and cause updates on the IE client side without having to open the developer console.
BUT the one method that does not work (performUpdate - to get the query results - this is a server > client call, not client > server > client) never gets called. IT ONLY GETS CALLED WHEN THE DEVELOPER CONSOLE IS OPEN.
Here's what I've tried:
Why JavaScript only works after opening developer tools in IE once?
SignalR : Under IE9, messages can't be received by client until I hit F12 !!!!
SignalR client doesn't work inside AngularJs controller
Some code snippets
Dispatcher side
On dropdown change, we get the currently selected values and send updates across the wire. (This works fine).
$('#Selector').on('change', function(){
    var variable = $('#SomeField').val();
    ...
    liveBatchHub.server.updateParameters(variable, ....);
});
Server Side
When the dispatcher searches, we have some server side code that sends out notifications that a search has been ran, and to tell the client to pull results.
public void Update(string userId, Guid bId)
{
    var context = GlobalHost.ConnectionManager.GetHubContext<LiveBatchViewHub>();
    context.Clients.User(userId).performUpdate(bId);
}
Client side (viewer of live updates)
This never gets called unless developer tools is open
liveBatchHub.client.performUpdate = function (id) {
    // perform update here
    update(id);
};
Edit
A little more information which might be useful (I am not sure why it makes a difference): this ONLY seems to happen when I am doing server > client calls. When the dispatcher is changing the search parameters, the update is client > server > client, or dispatcher-client > server > viewer-client, which seems to work. After they click search, a service in the search pipeline calls performUpdate on the server side (server > viewer-client). Not sure if this matters?
Edit 2 & Final Solution
Eyes bloodshot, I realize I left out one key part of this question: we are also using Angular on this page. Guess I've been staring at it too long and left this out, sorry. I awarded JDupont the answer because he was on the right track: caching. But not jQuery's ajax caching; Angular's $http caching.
Just so no one else has to spend days and nights banging their heads against the desk: the final solution was to disable caching on ajax calls using Angular's $http.
Taken from here:
myModule.config(['$httpProvider', function($httpProvider) {
    // initialize get if not there
    if (!$httpProvider.defaults.headers.get) {
        $httpProvider.defaults.headers.get = {};
    }
    // Answer edited to include suggestions from comments,
    // because the previous version of the code introduced browser-related errors
    // disable IE ajax request caching
    $httpProvider.defaults.headers.get['If-Modified-Since'] = 'Mon, 26 Jul 1997 05:00:00 GMT';
    // extra
    $httpProvider.defaults.headers.get['Cache-Control'] = 'no-cache';
    $httpProvider.defaults.headers.get['Pragma'] = 'no-cache';
}]);
I have experienced similar behavior in IE in the past. I may know of a solution to your problem.
IE caches some ajax requests by default. You may want to try turning this off globally. Check this out: How to prevent IE from caching Ajax with jQuery
Basically you would globally switch this off like this:
$.ajaxSetup({ cache: false });
or for a specific ajax request like this:
$.ajax({
    cache: false,
    // other options...
});
I had a similar issue with my GET requests caching. My update function would only fire off once unless dev tools was open. When it was open, no caching would occur.
If your code works properly in other browsers, the problem probably comes from the transport method SignalR is using. The options are WebSocket, Server-Sent Events, Forever Frame, and Long Polling, chosen based on browser support.
Forever Frame is for Internet Explorer only. See the Introduction to SignalR to learn which transport method is used in various cases (note that not every transport works in every browser; for example, IE doesn't support Server-Sent Events).
You can find out which transport method is being used inside a Hub just by looking at the request's query string, which can be useful for logging:
Context.QueryString["transport"];
The issue likely comes from IE using Forever Frame, since it sometimes causes SignalR to crash on Ajax calls. You can try removing Forever Frame support and force SignalR to use the remaining transports the browser supports with the following client-side code:
$.connection.hub.start({ transport: ['webSockets', 'serverSentEvents', 'longPolling'] });
I have outlined how SignalR picks its transport and pointed you to some logging/tracing tools to narrow down your problem. For more help, add more details :)
Update:
Since your problem seems to be very strange and I don't have enough visibility into your code, here are some suggestions based on my experience that I hope will be useful:
- Set up Browser Link in a suitable IDE.
- Check the Network tab's request/response data while the call happens.
- Make sure you haven't used reserved names in your server/client-side code (perhaps by renaming methods and variables).
Also, I think you need to use liveBatchHub.server.update(variable, ....); instead of liveBatchHub.server.updateParameters(variable, ....); on the dispatcher side to make the server call, since you should use the server-side method name after server.

FacebookRealtimeUpdateController Override Post and return 'OK'

Using Mvc.Facebook.Realtime, the FacebookRealtimeUpdateController provides a process for handling user events (HandleUpdateAsync), but not for page events.
Microsoft.AspNet.Mvc.Facebook.Realtime Namespace
I have managed to process page events by overriding the 'POST'
Public Overrides Function Post() As Task(Of Net.Http.HttpResponseMessage)
    Dim content = Request.Content
    Dim jsonContent As String = content.ReadAsStringAsync().Result
    Dim ConvertedJson As RealTimeEvent = JsonConvert.DeserializeObject(Of RealTimeEvent)(jsonContent)
    ' Do something with the page events
    Return MyBase.Post
End Function
However, Facebook resends all events immediately, which I believe is because I am not returning a '200 OK' back to Facebook. (See quote.)
First you'll need to prepare the page that will act as your callback URL. This URL will need to be accessible by Facebook servers, and be able to receive both the POST data that is sent when an update happens, but also accept GET requests in order to verify subscriptions.
This URL should always return a 200 OK HTTP response when invoked by Facebook.
I wiresharked my server and I do not see a 200 OK HTTP response, so I believe this has something to do with the way I am overriding the Post.
Can I somehow return an OK response from my overridden function, or would it be better to drop the whole Microsoft.AspNet.Mvc.Facebook.Realtime solution and just handle the subscription GETs and POSTs from Facebook myself?
Update: I turned off "only my own code" and I can see an exception occurring in the AspNet.MVC.Facebook.dll.
So, new question: how do I isolate this exception?
For others looking:
The issue was actually the HandleUpdateAsync function, which must be overridden. This is fired after the 'Post'.
If you don't return a valid Task, the exception is thrown inside AspNet.MVC.Facebook.dll and Facebook is never given a 200 OK.
I was using this blog on how to use the FacebookRealtimeUpdateController, but in that version 'HandleUpdateAsync' does not return a Task and the processing is done in the function.
So by creating a task that does nothing, everything appears to work fine.
Public Overrides Function HandleUpdateAsync(notification As ChangeNotification) As Task
    Dim newtask As New Task(New System.Action(Sub()
                                                  Dim x As String = ""
                                              End Sub))
    Return newtask
End Function
Edit: But it creates a massive memory leak which cannot be cleared, even with recycling.
So the real solution is to just create your own controller and forget the FacebookRealtimeUpdateController completely.
Very easy to do, and it saves a lot of hassle!
