Accounting for users that have left the website without using onunload - asp.net-mvc

I have a webservice with very limited resources (I will be able to handle about 3 simultaneous users).
When users interact with my website they start a complex process server-side. (This process is the limiting factor, as my server machine will not be able to handle many in parallel, and clients cannot run this on their side.)
My question is how to make sure to end the process for users that leave, for example by closing the window.
I have considered onunload and onbeforeunload, but they are also triggered by links within the website (which I need for users to be able to interact with the process) so that does not seem like an option.
This approach seems problematic according to other questions (see this, for example). It could work, though, if there were a way to check whether the user is still active on the site (even on a different page of the website) when the action triggered by onunload runs, but I don't know how to do this.
I have also considered periodically checking the list of active users and cancelling the process for users that have left, but I don't know if this is even possible.
I have zero experience with cookies, but could this be a place to use them? Can the server access the (still living) cookies of disconnected users?
Which of these sounds like a reasonable approach to this problem?

Cases such as these are generally handled by heartbeats. Have your client send periodic heartbeats (essentially pings) to the server to signal that it is still alive and interested in the process's results, and have the server automatically kill any process for which it has not received a heartbeat within a configured amount of time. (A minimal client-side sketch follows at the end of this answer.)
I have considered onunload and onbeforeunload
You are right: you can't rely on them.
I have zero experience with cookies, but could this be a place to use them?
No. Cookies hold client-side state that the browser sends to the server with each HTTP request. Servers don't manage cookies; they only read them to identify state, so there is no way for the server to access the cookies of a user who is no longer sending requests.
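For reference, a minimal client-side sketch of such a heartbeat in TypeScript; the /api/heartbeat endpoint, the processId parameter, and the 15-second interval are assumptions, not anything from the question:

```typescript
// Ping the server every 15 seconds while the tab is open. If the pings stop
// (tab closed, browser crashed, laptop lid shut), a server-side sweep can kill
// the process after a grace period.
const HEARTBEAT_INTERVAL_MS = 15_000;

function startHeartbeat(processId: string): number {
  return window.setInterval(() => {
    fetch(`/api/heartbeat?processId=${encodeURIComponent(processId)}`, {
      method: "POST",
      // keepalive lets an in-flight ping survive navigation between pages of the site
      keepalive: true,
    }).catch(() => {
      // Ignore transient network errors; the server should only cancel the
      // process after several missed heartbeats.
    });
  }, HEARTBEAT_INTERVAL_MS);
}

startHeartbeat("expensive-server-side-process");
```

Because the heartbeat keeps running on every page of the site, internal navigation (the problem with onunload) never looks like a departure; only a client that has genuinely gone away stops pinging.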

Related

Aren't PWAs user unfriendly if the service worker is not immediately active?

I posted another question as a brute-force solution to this one (Angular: fully install service worker before anything else) but I thought I'd make a separate one to discuss the use case for when a service worker is used as intended.
According to the service worker lifecycle (https://developers.google.com/web/fundamentals/primers/service-workers/lifecycle), the SW is installed but it's only active once you then reload the page (you can claim() the page, but that only applies to calls that happen after the service worker is installed). The reasoning is that if an existing version is updated, the old one and the new one do not mix states and caches. I can agree with that decision.
What I have trouble understanding is why it is not immediately active once it is initially installed. Instead, it requires a page reload unless you explicitly define precaching rules in the SW. If you define caching rules with wildcards, it's not possible to precache those, so you need the reload.
Given a single-page PWA (like Angular), a user will discover the site and browse around on it, but the page will never be reloaded during that session. If they then want to use the site offline later, they need to have refreshed or re-opened the tab at least one other time. That seems like a pretty big pitfall to me.
Am I missing something here?
Your understanding of the service worker lifecycle is correct but I do not think the pitfall you mentioned is as severe as you think it is.
If I understand you correctly, the user experience will only be negatively affected if the user loses connectivity during the initial browsing of the page (before the service worker is active) and an offline asset is missing. If this is truly a scenario you want to account for, then that offline asset can be pre-cached in the browser-side JavaScript. Alternatively, as you mentioned, you can skipWaiting() and claim() to make the service worker active without the user refreshing the page.
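If you do go the skipWaiting()/claim() route, a minimal sketch of the service worker side might look like this (assuming TypeScript compiled with "lib": ["webworker"]):

```typescript
// sw.ts: activate the freshly installed service worker immediately and take
// control of already-open pages, instead of waiting for a reload.
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("install", (event) => {
  // Skip the "waiting" phase for this version.
  event.waitUntil(self.skipWaiting());
});

self.addEventListener("activate", (event) => {
  // Claim pages that were loaded before this worker became active.
  event.waitUntil(self.clients.claim());
});
```

Note that this accepts exactly the state-mixing trade-off the lifecycle article warns about, so it is least risky for the very first version you ship rather than for later updates.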

Tracking time online in MVC4

I have a website built in MVC4 .NET. I want to track the time a user spends online on my website. Example: a user opens the browser, logs in to my website, is active on it for about 30 minutes, then closes the browser. I want to store those 30 minutes in the database, but I don't know how to implement it. Please help me; I really need to do this now. Thank you so much.
Here is a script that tracks user login/logout times on a website. It's a simple script that I have used on some sites. With this script you can also see how many users are online on your site.
But the problem is that when the user closes the browser, he does not log out; his session just expires.
One of the other ways is a global action filter that intercepts requests to all actions on all controllers; you can then record the time of each action in the database for the current user and page. To avoid hitting the database too hard, you could cache these values and invalidate them every few minutes, depending on how much traffic you're dealing with.
UPDATE
About closing the browser: this is not something that's provided for in the normal web HTTP protocol. There's no real way to know for sure when the browser closes; you can only sort of know. You have to throw together an ugly hack to get any level of certainty, and even then it's bound to fail in plenty of edge cases or cause nasty side effects.
The normal work-around is to send AJAX requests at intervals from the browser to your server to set up a sort of heartbeat. When the heartbeat stops, you assume the browser has closed and kill the session. But again: this is a horrible hack. In this case, the main problems are that it's easy to get false positives for a failed heartbeat if the server and client get out of sync or there's a JavaScript error, and there's a side effect that your session will never expire on its own.
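The question is about ASP.NET MVC, but the pattern itself (a global filter or heartbeat endpoint stamps a last-seen time per user, and a periodic sweep writes the session length once activity stops) is framework-agnostic. Here is a rough TypeScript/Express sketch of the idea; the x-user-id header, the timeouts, and the in-memory Map are all stand-ins for whatever your real app uses:

```typescript
import express from "express";

// First/last activity per user; sessions.size doubles as a "users online" count.
// Use a cache or database table instead of process memory in production.
const sessions = new Map<string, { firstSeen: number; lastSeen: number }>();
const TIMEOUT_MS = 2 * 60_000; // treat a user as gone after 2 minutes of silence

const app = express();

// "Global action filter": every request (or a dedicated heartbeat ping)
// refreshes the current user's last-seen timestamp.
app.use((req, _res, next) => {
  const userId = req.header("x-user-id"); // stand-in for your real session/auth lookup
  if (userId) {
    const now = Date.now();
    const s = sessions.get(userId) ?? { firstSeen: now, lastSeen: now };
    s.lastSeen = now;
    sessions.set(userId, s);
  }
  next();
});

// Sweep: when a user times out, persist how long they were online.
setInterval(() => {
  const now = Date.now();
  for (const [userId, s] of sessions) {
    if (now - s.lastSeen > TIMEOUT_MS) {
      const minutesOnline = Math.round((s.lastSeen - s.firstSeen) / 60_000);
      console.log(`user ${userId} was online for ~${minutesOnline} minutes`); // write to DB here
      sessions.delete(userId);
    }
  }
}, 30_000);

app.listen(3000);
```

As the answer notes, the number you store is only approximate: you are measuring "time between first and last observed activity plus a timeout", not the moment the browser window actually closed.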

How to properly handle asynchronous database replication?

I'm considering using Amazon RDS with read replicas to scale our database.
Some of our controllers in our web application are read/write, some of them are read-only. We already have an automated way for identifying which controllers are read-only, so my first approach would have been to open a connection to the master when requesting a read/write controller, else open a connection to a read replica when requesting a read-only controller.
In theory, that sounds good. But then I stumbled upon the replication lag concept, which basically says that a replica can be several seconds behind the master.
Let's imagine the following use case then:
The browser posts to /create-account, which is read/write, thus connecting to the master
The account is created, transaction committed, and the browser gets redirected to /member-area
The browser opens /member-area, which is read-only, thus connecting to a replica. If the replica is even slightly behind the master, the user account might not exist yet on the replica, thus resulting in an error.
How do you realistically use read replicas in your application, to avoid these potential issues?
I worked with an application which used pseudo-vertical partitioning. Since only a handful of the data was time-sensitive, the application usually fetched from the slaves, and from the master only in selected cases.
As an example: when the user updated their password, the application would always ask the master during the authentication prompt. When changing non-time-sensitive data (like user preferences), it would display a success dialog along with a note that it might take a while until everything is updated.
Some other ideas which might or might not work depending on the environment:
After an update, compute an entity checksum, store it in the application cache, and when fetching the data always check it against the checksum
Use browser storage or a cookie for storing the delta, ensuring the user always sees the latest version
Add an "up-to-date" flag and invalidate it synchronously on every slave node before/after an update
Whatever solution you choose, keep in mind that it's subject to the CAP theorem.
This is a hard problem, and there are lots of potential solutions. One potential solution is to look at what Facebook did.
TL;DR: read requests get routed to the read-only copy, but if you do a write, then for the next 20 seconds all your reads go to the writable master.
The other main problem we had to address was that only our master databases in California could accept write operations. This fact meant we needed to avoid serving pages that did database writes from Virginia because each one would have to cross the country to our master databases in California. Fortunately, our most frequently accessed pages (home page, profiles, photo pages) don't do any writes under normal operation. The problem thus boiled down to, when a user makes a request for a page, how do we decide if it is "safe" to send to Virginia or if it must be routed to California?
This question turned out to have a relatively straightforward answer. One of the first servers a user request to Facebook hits is called a load balancer; this machine's primary responsibility is picking a web server to handle the request but it also serves a number of other purposes: protecting against denial of service attacks and multiplexing user connections to name a few. This load balancer has the capability to run in Layer 7 mode where it can examine the URI a user is requesting and make routing decisions based on that information. This feature meant it was easy to tell the load balancer about our "safe" pages and it could decide whether to send the request to Virginia or California based on the page name and the user's location.
There is another wrinkle to this problem, however. Let's say you go to editprofile.php to change your hometown. This page isn't marked as safe so it gets routed to California and you make the change. Then you go to view your profile and, since it is a safe page, we send you to Virginia. Because of the replication lag we mentioned earlier, however, you might not see the change you just made! This experience is very confusing for a user and also leads to double posting. We got around this concern by setting a cookie in your browser with the current time whenever you write something to our databases. The load balancer also looks for that cookie and, if it notices that you wrote something within 20 seconds, will unconditionally send you to California. Then when 20 seconds have passed and we're certain the data has replicated to Virginia, we'll allow you to go back for safe pages.
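A minimal sketch of that cookie rule at the application level, written as TypeScript/Express middleware; the cookie name, routes, and 20-second window are illustrative, and the actual reads and writes are left out:

```typescript
import express from "express";
import cookieParser from "cookie-parser";

const STICKY_WINDOW_MS = 20_000; // mirror the 20-second rule from the quote above

const app = express();
app.use(cookieParser());

// Per request, decide whether reads may use a replica or must hit the master.
app.use((req, res, next) => {
  const lastWrite = Number(req.cookies.lastWrite ?? 0);
  const wroteRecently = Date.now() - lastWrite < STICKY_WINDOW_MS;
  // Writes always go to the master; reads do too if this user wrote recently.
  res.locals.dbTarget = req.method === "GET" && !wroteRecently ? "replica" : "master";
  next();
});

// After a successful write, remember when it happened so the next few reads
// (e.g. the redirect to /member-area) stay on the master.
app.post("/create-account", (req, res) => {
  // ... create the account on the master here ...
  res.cookie("lastWrite", String(Date.now()), { httpOnly: true });
  res.redirect("/member-area");
});

app.listen(3000);
```

This trades a little extra load on the master for read-your-writes consistency right after a write, which is exactly the /create-account then /member-area scenario from the question.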

Ideas for web application with external input and realtime notification

I am to build a web application which will accept different events from external sources and present them quickly to the user for further action. I want to use Ruby on Rails for the web application. This is an internal development project. I would prefer simple, easy-to-use solutions for rapid development over highly reliable but complex systems.
What it should do
The user has the web application open in his browser. Now a phone call comes in. The phone call is registered by a PBX monitoring daemon, in this case via the Asterisk Manager Interface. The daemon sends the available information (remote extension, local extension, call direction, channel status, start time, end time) somehow to the web application. Next, the user is notified about the phone call event. The user can now work with this, for example by entering a summary or by matching the call to a customer profile.
The duration from the first event on the PBX (e.g. the creation of a new channel) to the popup notification in the browser should be short. Given a fast network, I would like it to be within two seconds. The individual pieces of information about an event are created asynchronously. The local extension may be supplied separately from the remote extension. The user can enter a summary before the call has ended. The end time, new status etc. will show up on the interface as soon as one party has hung up.
The PBX monitor is just one data source. There will be more monitors, like email or a request via a web form. The monitoring daemons will not necessarily run on the same host as the database or web server. I do not imagine the application will serve thousands of logged-in users or concurrent requests soon. But by design, 200 users with maybe about the same number of events per minute should not be a scalability issue.
How should I do it?
I am interested to know how you would design such an application. What technologies would you suggest? How do the daemons communicate their information? When and by whom is the data about an event stored into the main database? How does the user get notified? Should the browser receive a complete dataset on behalf of a daemon or just a short note that new data is available? Which JS library to use and how to create the necessary code on the server side?
On my research I came across a lot of possibilities: Message brokers, queue services, some rails background task solutions, HTTP Push services, XMPP and so on. Some products I am going to look into: ActiveMQ, Starling and Workling, Juggernaut and Bosh.
Maybe I am aiming too high? If there is a simpler or easier way, like just using the XML or JSON interface of Rails, I would like to read about that even more.
I hope the text is not too long :)
Thanks.
If you want to skip Java and Flash, perhaps it makes sense to use a technology in the Comet family to do the push from the server to the browser?
http://en.wikipedia.org/wiki/Comet_%28programming%29
For the sake of simplicity, for notifications from daemons to the Web browser, I'd leave Rails in the middle, create a RESTful interface to that Rails application, and have all of the daemons report to it. Then in your daemons you can do something as simple as use curl or libcurl to post the notifications. The Rails app would then be responsible for collecting the incoming notifications from the various sources and reporting them to the browser, either via JavaScript using a Comet solution or via some kind of fatter client implemented using Flash or Java.
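For example, the PBX monitoring daemon could report each event with a single HTTP POST to that RESTful endpoint; here is a sketch in TypeScript (the /events URL and the payload shape are made up for illustration):

```typescript
// Called by the PBX monitor whenever the Asterisk Manager Interface reports
// something; pushes the event to the Rails app, which then notifies browsers.
interface CallEvent {
  remoteExtension: string;
  localExtension: string;
  direction: "inbound" | "outbound";
  channelStatus: string;
  startTime: string;
}

async function reportEvent(event: CallEvent): Promise<void> {
  const res = await fetch("http://rails-app.internal/events", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
  if (!res.ok) throw new Error(`event rejected: ${res.status}`);
}

reportEvent({
  remoteExtension: "+15551230000",
  localExtension: "204",
  direction: "inbound",
  channelStatus: "ringing",
  startTime: new Date().toISOString(),
}).catch(console.error);
```

Partial updates (end time, new channel status) can be sent the same way as separate POSTs, and the Rails app merges them into the event record before pushing to the browser.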
You could approach this a number of ways, but my only comment would be: push, don't pull. For low latency it's not only quicker, it's more efficient, as your server no longer has to handle n clients polling the db/queue once a second. ActiveMQ is OK, but Starling will probably serve you better if you're not looking for insane levels of persistence.
You'll almost certainly end up using Flash on the client side (Juggernaut used it, last time I checked) or Java. This may be an issue for your clients (if they don't have Flash/Java installed), but for most people it's not an issue; still, a fallback to a pull-based notification system might be prudent to implement.
Perhaps http://goldfishserver.com might be of some use to you. It provides a simple API to allow push notifications to your web pages. In short, when your data updates, send it (some payload data) to the Goldfish servers and your client browsers will be notified, with the same data.
Disclaimer: I am a developer working on goldfish.
The problem
There is an event, either external or perhaps internal to your app.
Users should be notified.
One solution
I am myself facing this problem. I haven't solved it yet, but this is how I intend to do it. It may help you too:
(A) The app must learn about the event (via an exposed endpoint)
Expose an endpoint by which your app can be notified about external events.
When the endpoint is hit (and after authentication), users then need to be notified.
(B) Notification
You can notify the user directly by changing the DOM on the current web page they are on.
You can notify users by using the Push API (but you need to make sure your target browsers support it).
All of these notification features should be able to be handled via Action Cable: (i) either by updating the DOM to notify you when a phone call comes in, or (ii) via a push notification that pops up in your browser.
Summary: use Action Cable.
(Also: why use an external service like Pusher when you have Action Cable at your disposal? Some people say scalability and infrastructure management, but I do not know enough to comment on these issues.)
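A minimal sketch of the browser side of that, using Action Cable's JavaScript client (@rails/actioncable); the channel name and payload fields are hypothetical:

```typescript
import { createConsumer } from "@rails/actioncable";

// Connects to the Rails app's /cable endpoint by default.
const consumer = createConsumer();

consumer.subscriptions.create("PhoneCallsChannel", {
  received(data: { remoteExtension: string; status: string }) {
    // (i) update the DOM in place...
    console.log(`call from ${data.remoteExtension}: ${data.status}`);
    // ...and/or (ii) show a browser notification if the user granted permission.
    if (typeof Notification !== "undefined" && Notification.permission === "granted") {
      new Notification(`Incoming call from ${data.remoteExtension}`);
    }
  },
});
```

The corresponding Rails channel would broadcast whatever the monitoring daemons POST to the app, so the browser only ever receives pushed data and never polls.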

Is it okay for my online store to send order confirmation emails via Gmail synchronously?

When a user completes an order at my online store, he gets an email confirmation.
Currently we're sending this email via Gmail (which we chose over sendmail for greater portability) after we authorize the user's credit card and before we show him a confirmation message (i.e., synchronously).
It's working fine in development, but I'm wondering if this will cause a problem in production. Will it require making the user wait too long? Will many simultaneous Gmail connections get us in trouble? Any other general caveats?
If sending the emails synchronously will be a problem, could someone recommend an asynchronous solution? (Is ar_mailer any good?)
The main issue I can think of is that Gmail limits the amount of email you can send daily, so if you get too many orders a day it might break.
As they say:
"In an effort to fight spam and prevent abuse, Google will temporarily disable your account if you send a message to more than 500 recipients or if you send a large number of undeliverable messages. If you use a POP or IMAP client (Microsoft Outlook or Apple Mail, e.g.), you may only send a message to 100 people at a time. Your account should be re-enabled within 24 hours."
http://mail.google.com/support/bin/answer.py?hl=en&answer=22839
I would recommend using sendmail on your server in order to have greater control over what's going on and not depend on another service, especially since sendmail is not really complicated to set up.
The internet is not as resilient as some people would have you believe: the link between you and Gmail will break at some point, or Gmail will go offline, causing the user to think that they have not paid successfully.
I would put some other queue in place; sendmail sounds acceptable, and you can't build your site now around where it 'might' be hosted in the future.
Ryan
If the server waits for the email to be sent before giving the user any feedback, then if there were problems connecting to the mail server (timeouts, server down, etc.) the user's request would time out too and he wouldn't be told anything about the status of his order, so I believe you should really do this asynchronously.
Also, you should check whether doing that is even allowed by Gmail's TOS; if it isn't, you may check whether it's allowed under one of their paid subscriptions. There's also surely a limit to the number of outgoing emails you may send within a given timeframe, so if you're expecting your online store to be successful, you may hit that limit and bump into some nasty issues. If you're not self-hosting the site, you should check whether your host offers email servers (several plans include them for free), as then using your host's mail servers would be the most obvious choice.
FACT: Gmail crashes. Not often, but it happens, and you can't control it or test it.
The simplest quick fix is to start a separate thread or fork a subprocess to send the email. Yes, problems will likely arise from using Gmail, and I really have no input on that vs. the alternatives. But from a design perspective, there's just no reason to make the user wait for that process to complete.
From a testing perspective, this might be where a proxy pattern comes in handy. It might be easy for you to directly invoke Gmail to send a message. Make it harder. Put in a proxy object that does the mailing for you, one that you can turn off (because heaven knows you can't make Gmail crash for testing purposes). Just make your team follow what happens in the event of an email malfunction by turning off the proxy and trying to complete an order. If you are doing it synchronously, then all the plagues mentioned here by other posters will rear their heads. If you are doing it asynchronously, you should be able to allow it to fail silently (from the user's perspective; from your perspective there should be enormous logging statements and text messages in the middle of the night and possibly a mild electric current arcing across the surface of someone's skin).
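A rough sketch of that proxy idea in TypeScript (the Mailer interface and class names are invented for illustration): the order flow talks to the proxy, which fires the real send asynchronously and can be switched off in tests to simulate Gmail being unreachable.

```typescript
// The real mailer talks to Gmail (or any SMTP relay); the proxy decides whether
// and how to call it, so completing an order never waits on, or fails with, email.
interface Mailer {
  send(to: string, subject: string, body: string): Promise<void>;
}

class MailerProxy implements Mailer {
  constructor(private real: Mailer, private enabled = true) {}

  async send(to: string, subject: string, body: string): Promise<void> {
    if (!this.enabled) return; // test mode: pretend the mail service is down

    // Fire-and-forget: don't await the real send; log failures loudly instead of
    // surfacing them to the customer whose order has already succeeded.
    this.real.send(to, subject, body).catch((err) => {
      console.error("order confirmation email failed", err); // alert/page someone here
    });
  }
}
```

Turning enabled off and walking through an order is exactly the drill described above: the confirmation page should still appear, and only the logging and alerting path should notice that anything went wrong.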
