I have a website built in MVC4 .NET. I want to track how long a user has been online on my website. Example: a user opens the browser, logs in to my website, is active on the site for about 30 minutes, then closes the browser. I want to store those 30 minutes in the database, but I don't know how to implement it. Please help me; I really need to get this done. Thank you so much.
Here is a script that tracks user login/logout times on a website. It's a simple script that I have used on some of my sites. With it you can also see how many users are online at your site.
But the problem is that when the user closes the browser he does not log out; his session just sits there until it expires.
Another option is a global action filter that intercepts requests to all actions on all controllers; it can then record the time of each action in the database for the current user and page. To avoid hitting the database too hard, you could cache these values in memory and flush/invalidate them every few minutes, depending on how much traffic you're dealing with.
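For illustration, here is a minimal sketch of such a filter, assuming all you need is a "last seen" time per authenticated user; the in-memory cache and the flush hook are placeholders you would wire to your own persistence and to a periodic task:

```csharp
using System;
using System.Collections.Concurrent;
using System.Web.Mvc;

// Sketch: a global action filter that notes the last request time per
// authenticated user in memory, so the totals can be flushed to the database
// in batches every few minutes instead of on every request.
public class TrackActivityAttribute : ActionFilterAttribute
{
    // userName -> last seen time (UTC); kept in memory to avoid a DB hit per request
    private static readonly ConcurrentDictionary<string, DateTime> LastSeen =
        new ConcurrentDictionary<string, DateTime>();

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var user = filterContext.HttpContext.User;
        if (user != null && user.Identity.IsAuthenticated)
        {
            LastSeen[user.Identity.Name] = DateTime.UtcNow;
        }
        base.OnActionExecuting(filterContext);
    }

    // Call this from a timer or scheduled task to persist and clear the cache.
    public static void FlushTo(Action<string, DateTime> persist)
    {
        foreach (var entry in LastSeen)
        {
            persist(entry.Key, entry.Value);   // e.g. update a LastSeenUtc column
        }
        LastSeen.Clear();
    }
}

// Registration in Global.asax.cs (Application_Start):
//   GlobalFilters.Filters.Add(new TrackActivityAttribute());
```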
UPDATE
About closing the browser: this is not something the normal web/HTTP protocol provides for. There's no reliable way to know when the browser closes; you can only sort of know. You have to throw together an ugly hack to get any level of certainty, and even then it's bound to fail in plenty of edge cases or cause nasty side effects.
The usual work-around is to send AJAX requests at intervals from the browser to your server to set up a sort of heartbeat. When the heartbeat stops, you assume the browser closed and kill the session. But again: this is a horrible hack. The main problems are that it's easy to get false positives for a failed heartbeat if the server and client get out of sync or there's a JavaScript error, and there's the side effect that your session will never expire on its own.
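If you do go that route, the server side can be as small as this hypothetical MVC controller; the client would just call it with a periodic AJAX POST (e.g. a setInterval loop), and a separate sweep would treat users whose last ping is a couple of intervals old as having closed the browser:

```csharp
using System;
using System.Collections.Concurrent;
using System.Web.Mvc;

// Sketch of the receiving end of the heartbeat. The in-memory dictionary is
// for illustration only; the sweep that closes "dead" sessions and writes the
// total online time to the database is assumed to live elsewhere.
public class HeartbeatController : Controller
{
    // userName -> time of last ping (UTC)
    internal static readonly ConcurrentDictionary<string, DateTime> LastPing =
        new ConcurrentDictionary<string, DateTime>();

    [HttpPost]
    public ActionResult Ping()
    {
        if (User != null && User.Identity.IsAuthenticated)
        {
            LastPing[User.Identity.Name] = DateTime.UtcNow;
        }
        return new HttpStatusCodeResult(204); // no body needed, keeps the ping cheap
    }
}
```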
Related
I have a webservice with very limited resources (I will be able to handle about 3 simultaneous users).
When users interact with my website they start a complex process server-side. (This process is the limiting factor, as my server machine will not be able to handle many in parallel, and clients cannot run this on their side.)
My question is how to make sure to end the process for users that leave, for example by closing the window.
I have considered onunload and onbeforeunload, but they are also triggered by links within the website (which I need for users to be able to interact with the process) so that does not seem like an option.
This approach seems problematic according to other questions (see this, for example), but it could work if there were a way to check whether the user is still active when handling the action triggered by onunload (even if they have moved to a different page of the website); I don't know how to do that, though.
I have also considered periodically checking the list of active users and cancelling the process for users that have left, but I don't know if this is even possible.
I have zero experience with cookies, but could this be a place to use them? Can the server access the (still living) cookies of disconnected users?
Which sounds like a reasonable approach for this problem?
Cases such as these are generally handled by heartbeats. Have your client send periodic heartbeats (which are essentially pings) to the server, notifying it that the client is still alive and interested in the process's results. The server then automatically kills those processes for which it hasn't received a client heartbeat for a configured amount of time.
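As a hedged sketch of the server-side half (the job registry, cancellation mechanism, and timeouts below are assumptions to adapt to however your process is actually started):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

// Each long-running job registers here, heartbeat requests call Beat(jobId),
// and a timer cancels jobs whose last heartbeat is older than the window.
public static class HeartbeatMonitor
{
    private static readonly TimeSpan Timeout = TimeSpan.FromSeconds(90);

    private class Entry
    {
        public DateTime LastBeatUtc;
        public CancellationTokenSource Cancellation;
    }

    private static readonly ConcurrentDictionary<string, Entry> Jobs =
        new ConcurrentDictionary<string, Entry>();

    // Sweep every 30 seconds for jobs whose client has gone silent.
    private static readonly Timer Sweep = new Timer(_ => KillStaleJobs(), null,
        TimeSpan.FromSeconds(30), TimeSpan.FromSeconds(30));

    public static CancellationToken Register(string jobId)
    {
        var entry = new Entry { LastBeatUtc = DateTime.UtcNow,
                                Cancellation = new CancellationTokenSource() };
        Jobs[jobId] = entry;
        return entry.Cancellation.Token;   // the worker should observe this token
    }

    public static void Beat(string jobId)   // called by the heartbeat endpoint
    {
        Entry entry;
        if (Jobs.TryGetValue(jobId, out entry))
            entry.LastBeatUtc = DateTime.UtcNow;
    }

    private static void KillStaleJobs()
    {
        foreach (var pair in Jobs)
        {
            if (DateTime.UtcNow - pair.Value.LastBeatUtc > Timeout)
            {
                pair.Value.Cancellation.Cancel();   // worker stops at its next check
                Entry removed;
                Jobs.TryRemove(pair.Key, out removed);
            }
        }
    }
}
```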
I have considered onunload and onbeforeunload
You are right- you can't rely on them.
I have zero experience with cookies, but could this be a place to use them?
No. Cookies maintain client-side state that is sent to a server on HTTP calls. So, servers don't manage cookies. Instead, they only look at them to identify state.
tl;dr - Is there a way to ensure that a given Rails controller action stops executing when another simultaneous request from the same user comes in?
In my Rails/Angular app, I make requests to the Foursquare API from the client-side. Because they need to be authenticated, and my authentication information should remain secure, I pass these requests through a Rails controller in my own app.
For a more in-depth description of the architecture of this, check out this semi-related question.
My concern, as elaborated there, is that each request to this internal controller takes up server time (and on Heroku, ties up a dyno). I've tried to make the action as fast as possible, but I'd still like to reduce the amount of time the server is tied up.
The amount the server is tied up is exacerbated by the real-time nature of the search I'm doing. The request is sent out to my server as a user types, not on enter or anything, because I wanted to allow for auto-suggestion.
I'm debouncing the user input (0.4 seconds), so the request isn't made until the user briefly stops typing. But if a user pauses a few times while typing, and a request goes out each time, this can quickly cause multiple dynos to get used.
More concretely, assuming a roughly ~1.3s API response time from Foursquare, imagine this scenario: A user types "ameri", then waits 0.4 seconds, then types "can", then waits 0.4 seconds, then types " beauty", completing their query. This would send three separate requests, all of which would need to be handled by different dynos, because none of the requests have a chance to return before the next comes in.
This would either cost me a ton of money (if I have a bunch of users, that means a large number of dynos to protect against concurrency timeouts) or cause really annoying waits on the user front.
So my thought would be that it would be awesome if I could essentially do a retroactive debounce on the server side, by terminating any running requests to Foursquare coming from that user before sending a new request out. That would mean that in the above concrete example, while 3 requests started, only the last request would come back, because the first two would be dropped midstream when a new one came in.
I was thinking of storing a variable in the session for each user that would be true while a request is executing. Then the next request wouldn't go out if that flag was set. But that's actually sort of the opposite of what I want, because I want the original request to get canceled when the new one comes in; I just don't know how to access that request from within the later one.
This feels complicated, so I'm guessing it may be impossible (particularly as each controller action is responded to by a new controller instance), but does anyone know a way to cancel controller actions if the same action is hit by the same user again while the first request is getting resolved?
Thanks!
(APEX 4.1.1.00.23)
I have two applications, A and B, that share the same session (because they use the same session cookie), and each has Maximum Session Idle Time set to the same value N. Having established a session and visited both applications, if I then spend more than N seconds working in application A (doing lots of page loads, so not timing out) and then navigate to application B, it immediately times out and sends me to its login page.
I also tried calling APEX_UTIL.SET_SESSION_MAX_IDLE_SECONDS(N) in both applications, with p_scope defaulting to 'SESSION', noting that the API docs say
This would be the most common use case when multiple Application
Express applications use a common authentication scheme and are
designed to operate as a suite in a common session.
However the same thing happens.
I want the timeout to apply to the session as a whole, not to each application independently. Is this not what the above is supposed to achieve, or am I doing something wrong?
I got the answer to this from Christian Neumueller on the Oracle APEX forum:
... it's no issue anymore in 4.2. Looking at the 4.1.1
code, it seems that the problem is how we stored the last access time.
While the APEX_UTIL call with SESSION scope would set the idle timeout
for both apps, we maintained a timer (FSP_LAST_REQUEST_TIME) for each
app. Working in TIMTEST1 only updated the timer for TIMTEST1, not for
TIMTEST2. After working with one app and switching back to the other
app, Apex sees the stale timer and decides that the session expired.
This is clearly a bug. The bad news is that a backport is not
feasible, because so much has changed in session state management.
I'm aware of the model that involves a scheduled task running in the background which runs jobs registered with a web request, but how about this for an idea that keeps everything within ASP.NET...
User uploads a CSV file with, perhaps, several thousand rows. The rows are persisted to the database. I think this would take maybe a minute or so, which would be an acceptable wait.
The request returns to the browser, and then an automatic Ajax request goes back to the server to fetch and process, say, ten rows at a time. (Each row requires a number of web service requests.)
The Ajax call returns, the display is updated, and then another automatic Ajax request goes back for more rows. This repeats until all rows are completed.
If the user leaves the web page, they could return later and restart the job.
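A rough sketch of the batch endpoint I have in mind (the entity class and EF context below are just placeholders, not code from my app):

```csharp
using System.Data.Entity;   // EF 5/6 era, matching MVC4
using System.Linq;
using System.Web.Mvc;

// Placeholder row/context types for the uploaded CSV data.
public class ImportRow
{
    public int Id { get; set; }
    public int ImportId { get; set; }
    public string Data { get; set; }
    public bool Processed { get; set; }
}

public class ImportContext : DbContext
{
    public DbSet<ImportRow> ImportRows { get; set; }
}

public class ImportController : Controller
{
    // Each ajax call processes the next ten unprocessed rows and reports progress,
    // so the browser keeps looping until 'remaining' hits zero.
    [HttpPost]
    public JsonResult ProcessNextBatch(int importId)
    {
        using (var db = new ImportContext())
        {
            var batch = db.ImportRows
                          .Where(r => r.ImportId == importId && !r.Processed)
                          .OrderBy(r => r.Id)
                          .Take(10)
                          .ToList();

            foreach (var row in batch)
            {
                // The handful of web service calls needed per row would go here.
                row.Processed = true;
            }
            db.SaveChanges();

            var remaining = db.ImportRows.Count(r => r.ImportId == importId && !r.Processed);
            return Json(new { processed = batch.Count, remaining });
        }
    }
}
```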
Any thoughts?
Cheers, Ian.
If I understand you right, you don't actually need any "interaction" between background jobs and the long-running request; you just want to launch background jobs from incoming requests? That's not such a good idea. Take a look at the Quartz.NET project: it is a scheduler that can be embedded in an ASP.NET application, and it will handle this for you without relying on requests at all. Of course, if the app pool shuts down, your scheduler goes down with it, but you can't guarantee that won't happen even with your long-running-request solution, which depends on a browser waiting on the other side.
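As a rough illustration, here is what the wiring could look like with the Quartz.NET 2.x-style API; the job class and interval are placeholders:

```csharp
using Quartz;
using Quartz.Impl;

// A job that picks up the uploaded rows still waiting to be processed; scheduled
// from Application_Start so it runs inside the ASP.NET app without any request.
public class ProcessPendingRowsJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // Look for unprocessed rows here and run the web service calls for them.
    }
}

public static class SchedulerBootstrap
{
    // Call this once from Application_Start in Global.asax.cs.
    public static void Start()
    {
        ISchedulerFactory factory = new StdSchedulerFactory();
        IScheduler scheduler = factory.GetScheduler();
        scheduler.Start();

        IJobDetail job = JobBuilder.Create<ProcessPendingRowsJob>().Build();
        ITrigger trigger = TriggerBuilder.Create()
            .StartNow()
            .WithSimpleSchedule(s => s.WithIntervalInMinutes(1).RepeatForever())
            .Build();

        scheduler.ScheduleJob(job, trigger);
    }
}
```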
Also take a look at this interesting article from Phil Haack on this topic, with his own little scheduler library specifically for ASP.NET:
http://haacked.com/archive/2011/10/16/the-dangers-of-implementing-recurring-background-tasks-in-asp-net.aspx
A server-side program (or ideally a service) could still be quick and dirty and would be more reliable. You could still do step 1 as you have proposed: upload the file and insert the data (don't forget to increase the maxRequestLength and executionTimeout values in web.config). Then have a program running on the server that checks for new records and processes them.
If the user needs status you could store an entry in the database for each file and update the database record when the import is complete.
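A very rough sketch of that idea (all names are placeholders): a small worker that wakes up periodically, processes pending imports, and updates the status record the website reads:

```csharp
using System;
using System.Threading;

// Console app or Windows service: polls the database for newly uploaded files
// flagged as pending, processes their rows, and marks the import complete so
// the website can show status. Data-access details are left out on purpose.
public class ImportWorker
{
    private readonly Timer _timer;

    public ImportWorker()
    {
        _timer = new Timer(_ => ProcessPendingImports(), null,
                           TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    private void ProcessPendingImports()
    {
        // SELECT pending imports, run the web service calls for each row,
        // then UPDATE the import record's status so the site can report progress.
    }
}
```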
Maybe I'm reading the question and interpreting it in a weird way, but why couldn't you read the file into a database and store in a table the current line of the file that you've processed up to? You could then track your progress via the DB and just send small JSON objects telling the user how far along you are. That way, if their connection drops you can keep processing their request, and if they return later you can tell them how far along the job is. Also, if multiple clients are connecting, you can use the DB to queue and throttle (by serializing) the workload. And if a user connects mid-job with another file, their new request will simply be queued up after their current job.
I am working on a .jsp page in an application that has a timeout for every page in that application. Is there any way around this, or would the page have to live in a separate application? I know the timeout is usually configured in the web.xml file, but I was curious whether there was any other option.
One option would be to poll back to the server and hit a page that resets the timeout timer. This could be done with some fairly simple JavaScript, and jQuery makes it even simpler.