Ways to delete IndexedDB on beforeunload

I am saving data from the user in an IndexedDB database. When the user closes my app (i.e. the browser), I want to delete this storage. However, since IndexedDB requests are asynchronous, they won't be executed on beforeunload. I am therefore curious how other people have solved this issue. Is there any way at all?
Merry Christmas!

An answer taken from MDN:
The synchronous API was intended for use only with Web Workers. The synchronous version was removed from the spec because its need was questionable, but it may be reintroduced in the future if there is enough demand from web developers.
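A common workaround, since asynchronous work is not guaranteed to finish during beforeunload, is to set a synchronous flag on unload and delete the database on the next page load instead. A minimal sketch, assuming a database named "myAppDb" and a flag key "dbStale" (both names are placeholders):

    // beforeunload can only do synchronous work reliably, so just set a flag:
    window.addEventListener("beforeunload", function () {
      localStorage.setItem("dbStale", "1");
    });

    // On the next page load, before opening the database, delete it:
    if (localStorage.getItem("dbStale") === "1") {
      const request = indexedDB.deleteDatabase("myAppDb");
      request.onsuccess = function () {
        localStorage.removeItem("dbStale");
      };
      request.onerror = function (event) {
        console.error("Could not delete database", event);
      };
    }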

Related

Aren't PWAs user-unfriendly if the service worker is not immediately active?

I posted another question as a brute-force solution to this one (Angular: fully install service worker before anything else) but I thought I'd make a separate one to discuss the use case for when a service worker is used as intended.
According to the service worker life cycle (https://developers.google.com/web/fundamentals/primers/service-workers/lifecycle), the SW is installed but it's only active once you then reload the page (you can claim() the page, but that only applies to calls that happen after the service worker is installed). The reasoning is that if an existing version is updated, the old one and the new one do not mix states and caches. I can agree with that decision.
What I have trouble understanding is why it is not immediately active once it is initially installed. Instead, it requires a page reload unless you explicitly define precaching rules in the SW. If you define caching rules with wildcards, it's not possible to precache those so you need the reload.
Given a single-page PWA (like Angular), a user will discover the site and browse around on it, but the page will never be reloaded during that session. If they then want to use the site offline later, they need to have refreshed or re-opened the tab at least one other time. That seems like a pretty big pitfall to me.
Am I missing something here?
Your understanding of the service worker lifecycle is correct, but I do not think the pitfall you mentioned is as severe as you think it is.
If I understand you correctly, the user experience is only negatively affected if the user loses connectivity during the initial browsing of the page (before the service worker is active) and is missing an offline asset. If this is truly a scenario you want to account for, then that offline asset can be pre-cached in the browser-side JavaScript. Alternatively, as you mentioned, you can skipWaiting() and claim() to make the service worker active without the user refreshing the page.
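For reference, activating immediately looks roughly like this inside the service worker (a minimal sketch; whether skipping the waiting phase is safe depends on the version-mixing concerns discussed above):

    // Minimal service worker sketch: activate immediately instead of
    // waiting for the next page reload.
    self.addEventListener("install", function (event) {
      self.skipWaiting(); // do not wait for the old worker to be released
    });

    self.addEventListener("activate", function (event) {
      event.waitUntil(self.clients.claim()); // take control of open pages
    });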

Accounting for users that have left the website without using onunload

I have a web service with very limited resources (I will be able to handle about 3 simultaneous users).
When users interact with my website they start a complex process server-side. (This process is the limiting factor, as my server machine will not be able to handle many in parallel, and clients cannot run this on their side.)
My question is how to make sure to end the process for users that leave, for example by closing the window.
I have considered onunload and onbeforeunload, but they are also triggered by links within the website (which I need for users to be able to interact with the process), so that does not seem like an option.
This approach seems problematic according to other questions (see this, for example), but it could work if there were a way to check whether the user is still an active user when performing the action triggered by onunload (even if on a different page of the website). However, I don't know how to do this.
I have also considered periodically checking the list of active users and cancelling the process for users that have left, but I don't know if this is even possible.
I have zero experience with cookies, but could this be a place to use them? Can the server access the (still living) cookies of disconnected users?
Which of these sounds like a reasonable approach for this problem?
Cases such as these are generally handled by heartbeats. Have your client send periodic heartbeats (essentially pings) to the server, notifying it that the client is still alive and interested in the process's results. The server then automatically kills any process for which it hasn't received a client heartbeat within a configured amount of time.
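A minimal client-side sketch of that idea (the "/heartbeat" endpoint and the 15-second interval are assumptions, not something from this thread; the server would kill the process after missing, say, three consecutive heartbeats):

    // Ping the server periodically so it knows this client is still alive.
    const HEARTBEAT_INTERVAL_MS = 15000;

    setInterval(function () {
      // Fire-and-forget; the server resets its timeout for this session.
      fetch("/heartbeat", { method: "POST", credentials: "include" })
        .catch(function () { /* ignore transient network errors */ });
    }, HEARTBEAT_INTERVAL_MS);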
I have considered onunload and onbeforeunload
You are right: you can't rely on them.
I have zero experience with cookies, but could this be a place to use them?
No. Cookies maintain client-side state that is sent to the server on HTTP calls, so servers don't manage cookies; they only look at them to identify state.

Solution for Web Application with Unreliable Internet Connection

We've developed a web application which is hosted on premises and available to people on the shop floor via Wi-Fi. However, the Wi-Fi signal is not reliable, and it's not possible to use a wired network or improve the signal.
I am looking for a solution to handle this issue. Is there a way to put the HTTP requests into a local queue and process them asynchronously in the background? If so, how? Or is there any other alternative approach?
Any thoughts are greatly appreciated.
I have the same problem in the company where I work: there are certain places the WiFi cannot reach, and the system needs to get information from the DB in order to show it to the user and then upload some new info.
Part of this system is done with iPads, so to solve the problem I use localStorage to store a JSON object that contains the info the user needs to work. I store the info that is going to be uploaded in another JSON object, and when a connection is available that info is uploaded.
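A minimal sketch of that queue-and-flush idea (the localStorage key "uploadQueue" and the endpoint "/api/upload" are assumed names):

    // Queue outgoing payloads locally and flush them when connectivity returns.
    function enqueue(payload) {
      const queue = JSON.parse(localStorage.getItem("uploadQueue") || "[]");
      queue.push(payload);
      localStorage.setItem("uploadQueue", JSON.stringify(queue));
    }

    async function flushQueue() {
      const queue = JSON.parse(localStorage.getItem("uploadQueue") || "[]");
      while (queue.length > 0) {
        try {
          const res = await fetch("/api/upload", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(queue[0]),
          });
          if (!res.ok) break; // server rejected it; try again later
          queue.shift();      // drop the item only after a successful upload
          localStorage.setItem("uploadQueue", JSON.stringify(queue));
        } catch (e) {
          break;              // still offline; retry on the next "online" event
        }
      }
    }

    window.addEventListener("online", flushQueue);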
Hope it helps
I would recommend building the web app with AngularJS or another JavaScript framework of your choice. Once the user has loaded the site, you can perform asynchronous AJAX/HTTP requests to load the required data, and the web app never reloads the entire page.
If an HTTP request fails, you can have the web app retry it, with whatever retry policy suits you :)
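For instance, a small retry helper might look like this (a sketch; the retry count and the linear backoff are arbitrary choices):

    // Retry a request a limited number of times with a simple backoff.
    async function fetchWithRetry(url, options, retries = 3) {
      for (let attempt = 0; attempt <= retries; attempt++) {
        try {
          const response = await fetch(url, options);
          if (response.ok) return response;
        } catch (e) {
          // network error; fall through and retry
        }
        // wait a bit longer before each subsequent attempt
        await new Promise((resolve) => setTimeout(resolve, 1000 * (attempt + 1)));
      }
      throw new Error("Request to " + url + " failed after " + (retries + 1) + " attempts");
    }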

Protect against 3rd party callers of document.execCommand("ClearAuthenticationCache")? Clears our session cookies

We have a J2EE application (running on -cough- IE only) that uses JSESSIONID to manage session state between client and server. Some of our customers use a third-party web application (https://mdoffice.sentara.com/) in which the client Javascript onload method calls:
document.execCommand("ClearAuthenticationCache");
This smashes our JSESSIONID cookie in the browser and hence causes the app server to see subsequent requests from our IE client window as an invalid or timed-out session, and the user gets kicked out. Our server is OAS/OC4J, but I doubt it matters: the same basic behavior occurs if I hit the above URL while logged into my online banking.
In trying to research this, I found that most folks are interested in duplicating this behavior in non-IE browsers. I'm interested in how to protect against it. We verified that our session cookies have domain scoping, but the above command doesn't seem to honor that. We have a lame workaround by which we launch IE with a -noframemerging argument. That's ugly, and it also ends up messing with our logic that tries to limit the client to a single login.
I can't find much useful on MSDN, but this article (http://blogs.msdn.com/b/ieinternals/archive/2010/04/05/understanding-browser-session-lifetime.aspx) does make it clear that the above command "...clears session cookies ... for ALL sites running in the current session".
Does anyone know of either:
Obviously preferable: a way to protect our precious session cookies from ClearAuthenticationCache?
Vain hope: a less aggressive alternative to ClearAuthenticationCache that we might tell our customers to communicate to the 3rd party? (Of course, they'd have to do this with any 3rd party that causes this problem. Currently there's just the one.)
Thanks for any help!

Ideas for web application with external input and realtime notification

I am to build a web application which will accept different events from external sources and present them quickly to the user for further actions. I want to use Ruby on Rails for the web application. This is an internal development project. I would prefer simple, easy-to-use solutions for rapid development over highly reliable but complex systems.
What it should do
The user has the web application open in his browser. Now a phone call comes in. The phone call is registered by a PBX monitoring daemon, in this case via the Asterisk Manager Interface. The daemon sends the available information (remote extension, local extension, call direction, channel status, start time, end time) somehow to the web application. Next, the user is notified about the phone call event. The user can now work with this, for example by entering a summary or by matching the call to a customer profile.
The duration from the first event on the PBX (e.g. the creation of a new channel) to the popup notification in the browser should be short. Given a fast network, I would like it to be within two seconds. The individual pieces of information about an event are created asynchronously: the local extension may be supplied separately from the remote extension, and the user can enter a summary before the call has ended. The end time, new status, etc. will show up on the interface as soon as one party has hung up.
The PBX monitor is just one data source; there will be more monitors, like email or a request via a web form. The monitoring daemons will not necessarily run on the same host as the database or web server. I do not imagine the application will serve thousands of logged-in users or concurrent requests soon, but by design, 200 users with maybe about the same number of events per minute should not be a scalability issue.
How should I do this?
I am interested to know how you would design such an application. What technologies would you suggest? How do the daemons communicate their information? When and by whom is the data about an event stored into the main database? How does the user get notified? Should the browser receive a complete dataset on behalf of a daemon or just a short note that new data is available? Which JS library to use and how to create the necessary code on the server side?
On my research I came across a lot of possibilities: Message brokers, queue services, some rails background task solutions, HTTP Push services, XMPP and so on. Some products I am going to look into: ActiveMQ, Starling and Workling, Juggernaut and Bosh.
Maybe I am aiming too high? If there is a simpler or easier way, like just using the XML or JSON interface of Rails, I would like to read about that even more.
I hope the text is not too long :)
Thanks.
If you want to skip Java and Flash, perhaps it makes sense to use a technology in the Comet family to do the push from the server to the browser?
http://en.wikipedia.org/wiki/Comet_%28programming%29
For the sake of simplicity, for notifications from the daemons to the web browser, I'd leave Rails in the middle, create a RESTful interface to that Rails application, and have all of the daemons report to it. Then in your daemons you can do something as simple as using curl or libcurl to post the notifications. The Rails app would then be responsible for collecting the incoming notifications from the various sources and reporting them to the browser, either via JavaScript using a Comet solution or via some kind of fatter client implemented using Flash or Java.
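As a sketch of what such a daemon report could look like (the endpoint "/events" and the payload shape are assumptions; this uses JavaScript with Node's built-in fetch as a stand-in for curl/libcurl):

    // A daemon posts an event to the Rails app's REST endpoint.
    async function reportEvent(event) {
      await fetch("http://rails-app.internal/events", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(event),
      });
    }

    reportEvent({
      source: "asterisk",
      remoteExtension: "1004",
      localExtension: "22",
      direction: "inbound",
      startedAt: new Date().toISOString(),
    });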
You could approach this a number of ways, but my only comment would be: push, don't pull. For low latency it's not only quicker, it's more efficient, as your server no longer has to handle n clients polling the db/queue once a second. ActiveMQ is OK, but Starling will probably serve you better if you're not looking for insane levels of persistence.
You'll almost certainly end up using Flash on the client side (Juggernaut used it last time I checked) or Java. This may be an issue for your clients if they don't have Flash/Java installed, but for most people it's not; still, it might be prudent to implement a fallback onto a pull-based notification system, as sketched below.
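Such a pull fallback could be as simple as periodic polling (a sketch; the "/events?since=..." endpoint and the 5-second interval are assumptions):

    // Poll for new events when no push channel is available.
    let lastSeenId = 0;

    async function poll() {
      const res = await fetch("/events?since=" + lastSeenId);
      const events = await res.json();
      for (const ev of events) {
        lastSeenId = Math.max(lastSeenId, ev.id);
        console.log("new event", ev); // render a notification here instead
      }
    }

    setInterval(poll, 5000); // tolerable latency without hammering the server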
Perhaps http://goldfishserver.com might be of some use to you. It provides a simple API to allow push notifications to your web pages. In short, when your data updates, send it (some payload data) to the Goldfish servers and your client browsers will be notified, with the same data.
Disclaimer: I am a developer working on goldfish.
The problem
There is an event, either external or perhaps internal to your app.
Users should be notified.
One solution
I am myself facing this problem. I haven't solved it yet, but this is how I intend to do it. It may help you too:
(A) The app must learn about the event (via an exposed endpoint)
Expose an endpoint by which your app can be notified about external events.
When the endpoint is hit (and after authentication), users need to be notified.
(B) Notification
You can notify the user directly by changing the DOM on the current web page they are on.
You can notify users by using the Push API (but you need to make sure your target browsers support it).
All of these notification features can be handled via Action Cable: (i) either by updating the DOM to notify you when a phone call comes in, or (ii) via a push notification that pops up in your browser.
Summary: use Action Cable.
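On the browser side, an Action Cable subscription looks roughly like this (a sketch; the channel name "NotificationsChannel" and the payload field are assumptions):

    // Subscribe to a Rails Action Cable channel and update the DOM on events.
    import { createConsumer } from "@rails/actioncable";

    const consumer = createConsumer(); // connects to /cable by default

    consumer.subscriptions.create("NotificationsChannel", {
      received(data) {
        // e.g. pop up a note about the incoming phone call
        const note = document.createElement("div");
        note.textContent = "Incoming call from " + data.remoteExtension;
        document.body.appendChild(note);
      },
    });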
(Also: why use an external service like Pusher when you have Action Cable at your disposal? Some people say scalability and infrastructure management, but I do not know enough to comment on those issues.)
