(APEX 4.1.1.00.23)
I have two applications A and B that share the same session (because they use the same session cookie), and each has Maximum Session Idle Time set to the same value N. Having established a session and visited both applications, if I spend more than N seconds working in application A (with plenty of page loads, so A itself never times out) and then navigate to application B, B immediately times out and sends me to its login page.
I also tried calling APEX_UTIL.SET_SESSION_MAX_IDLE_SECONDS(N) in both applications, with p_scope left at its default of 'SESSION', noting that the API docs say
This would be the most common use case when multiple Application
Express applications use a common authentication scheme and are
designed to operate as a suite in a common session.
However, the same thing happens.
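For reference, the call in each application looks roughly like this (run e.g. from an application process; 1800 is just an example value for N):

    begin
      -- p_scope defaults to 'SESSION', which per the docs quoted above
      -- should make the limit apply to the whole session
      apex_util.set_session_max_idle_seconds(p_seconds => 1800);
    end;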
I want the timeout to apply to the session as a whole, not to each application independently. Is this not what the above is supposed to achieve, or am I doing something wrong?
I got the answer to this from Christian Neumueller on the Oracle APEX forum:
... it's no issue anymore in 4.2. Looking at the 4.1.1
code, it seems that the problem is how we stored the last access time.
While the APEX_UTIL call with SESSION scope would set the idle timeout
for both apps, we maintained a timer (FSP_LAST_REQUEST_TIME) for each
app. Working in TIMTEST1 only updated the timer for TIMTEST1, not for
TIMTEST2. After working with one app and switching back to the other
app, Apex sees the stale timer and decides that the session expired.
This is clearly a bug. The bad news is that a backport is not
feasible, because so much has changed in session state management.
Related
I have a logging query (a simple INSERT) that happens on every single request.
For this query only (the one that runs on every page load), I want to set the timeout to 500ms, so that if the database is locked, slow, or down, it won't affect the site by making it hang while it waits to connect or write.
Is there a way to specify a timeout on a per-query basis, so I can abort the LoggedRequest.create! if it's taking too long?
I don't want to set it in my config because I have many other queries that shouldn't have timeouts that low.
I'm using Postgres 11.7
I'm also not sure how I feel about setting a timeout for the entire session, because I don't want that connection to be handed back to the pool and shared with other queries that can't have a timeout that low.
Rails 6 introduces event-based triggers for notifications, logging etc. that come in very handy, provided you are using Rails 6 or can afford to migrate to it. Here's a useful post that demonstrates creating event-based triggers for notifications/logging: https://pramodbshinde.wordpress.com/2020/03/20/custom-events-tracking-with-activesupportnotifications-and-audited/
If, for some reason, you cannot use Rails 6, perhaps this article might help you find some answers: https://evilmartians.com/chronicles/the-silence-of-the-ruby-exceptions-a-rails-postgresql-database-transaction-thriller
If I were you, I would also consider using AJAX with a fire-and-forget API request to the server for logging or anything else that is not critical to the normal functioning of the application.
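Another option worth a look, if a hard cap per statement is all you need: Postgres lets you scope statement_timeout to a single transaction with SET LOCAL, so the pooled connection keeps its normal settings afterwards. A minimal sketch (log_request is a hypothetical wrapper around your LoggedRequest.create!):

    # Cap only the logging INSERT at 500ms via SET LOCAL, which expires
    # with the transaction, so other queries on this pooled connection
    # are unaffected.
    def log_request(attrs)
      LoggedRequest.transaction do
        LoggedRequest.connection.execute("SET LOCAL statement_timeout = '500ms'")
        LoggedRequest.create!(attrs)
      end
    rescue ActiveRecord::StatementInvalid => e
      # A timeout surfaces as ActiveRecord::QueryCanceled, a subclass of
      # StatementInvalid; swallow it so logging never takes the page down.
      Rails.logger.warn("request logging skipped: #{e.class}")
    end

Note this only caps statement execution; it won't help if establishing the connection itself hangs.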
I posted another question as a brute-force solution to this one (Angular: fully install service worker before anything else) but I thought I'd make a separate one to discuss the use case for when a service worker is used as intended.
According to the service worker lifecycle (https://developers.google.com/web/fundamentals/primers/service-workers/lifecycle), the SW is installed but only becomes active once you reload the page (you can claim() the page, but that only applies to calls that happen after the service worker is installed). The reasoning is that if an existing version is updated, the old one and the new one must not mix states and caches. I can agree with that decision.
What I have trouble understanding is why it is not immediately active once it is first installed. Instead, it requires a page reload unless you explicitly define precaching rules in the SW. If you define caching rules with wildcards, it's not possible to precache them, so you need the reload.
Given a single-page PWA (like Angular), a user will discover the site and browse around on it, but the page will never be reloaded during that session. If they then want to use the site offline later, they need to have refreshed or re-opened the tab at least one other time. That seems like a pretty big pitfall to me.
Am I missing something here?
Your understanding of the service worker lifecycle is correct, but I do not think the pitfall you mentioned is as severe as you think it is.
If I understand you correctly, the user experience will only be negatively affected if the user loses connectivity during the initial browsing of the page (before the service worker is active) and is missing an offline asset. If this is truly a scenario you want to account for, then that offline asset can be pre-cached in the browser-side JavaScript. Alternatively, as you mentioned, you can skipWaiting() and claim() to make the service worker active without the user refreshing the page.
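For completeness, a minimal sketch of that combination for a hand-written service worker (Angular's @angular/service-worker generates its own worker file, so there you would not edit this directly):

    // sw.js
    self.addEventListener('install', function (event) {
      // Don't sit in the "waiting" state; activate as soon as installed.
      self.skipWaiting();
    });

    self.addEventListener('activate', function (event) {
      // Take control of already-open pages without waiting for a reload.
      event.waitUntil(self.clients.claim());
    });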
The scenario is as follows: I start an instance of the MVC app to debug it. The app uses simple membership, and I log in during this run. Then I go back to VS, change something, and start the instance again. It doesn't happen really often, but sometimes at this moment membership starts acting odd. As the app starts, an action behind an [Authorize] attribute (to be exact, the attribute is on the controller) is called. However, the action fails because WebSecurity.CurrentUserId equals -1 (the action in question just loads some user information based on WebSecurity.CurrentUserId).
If I clear cookies in browser, everything is fine, but I can't expect users to do the same when they encounter the problem.
My colleague explained to me that it's (probably) happening because my local IIS decided to restart and some of the session cookies became invalid, but if this can happen on a local instance of IIS, couldn't it also happen on the remote server?
Another important fact: the action that fails is called (more precisely, redirected to) by a custom filter that we wrote. This filter is applied to all actions (but doesn't affect the one mentioned). Can this filter somehow make MVC ignore the [Authorize] attribute?
I have a dirty workaround for this problem that should work (with this specific app), but I would prefer to prevent the problem from appearing in the first place.
I think this is related to this. Basically, when the server gets reset, the authentication cookies die. They get recreated right away, except my app doesn't really have access to them until the page is reloaded (just like with logging in).
I partially solved the problem described above (a redirect is performed somewhere along the way), so the application no longer gets stuck. However, if someone was logged in when the server restarted and tries to perform a POST after that, the POST will not work and he will be redirected to a GET action with the same name as the POST action (our custom filter is to blame for that). Unfortunately I cannot fix the filter, because I would need the user id for that, and at the point at which the filter is called it's still -1.
I guess my question is not too well written and rather localized (I should probably rewrite it or re-ask it), but the underlying problem is more general than it seems, so let me salvage all the useful information into this answer.
Question 1: There is nothing preventing IIS from having a hiccup on a remote server and restarting the app, so yes, this can happen (and does happen) on the remote server; the frequency will depend on the app itself and the IIS configuration. The problem of disappearing session data seems to be related to restarts of the app pool rather than of the app itself.
Question 2: The custom filter has little to do with the situation. As pointed out by Larry, in simple membership, authorization is kind of unrelated to session data. If your session data is lost, the user does not stop being authorized; however, user data is stored in the session, and without the session you don't know who the user is. That information only becomes available again one action after the session data was lost. So losing session data can lead to a crash of the application or, as in my case (where a custom filter depends on user data), to even weirder results.
So if you encounter unexpected disappearance of user data in your app (such as WebSecurity.CurrentUserId becoming -1), it might be worth investigating if your app pool is getting restarted (and why). Setting memory limits for an app pool seems to increase the likelihood of those restarts.
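If you want to fail gracefully rather than crash while you investigate, one defensive option (a sketch of an assumed workaround, not the one from the question) is a global authorization filter that spots the stale state, where the forms-auth cookie still says "authenticated" but simple membership has lost the user, and forces a clean re-login:

    using System.Web.Mvc;
    using System.Web.Security;
    using WebMatrix.WebData;

    public class StaleSessionFilter : IAuthorizationFilter
    {
        public void OnAuthorization(AuthorizationContext filterContext)
        {
            // Assumes WebSecurity.InitializeDatabaseConnection has already run.
            if (filterContext.HttpContext.Request.IsAuthenticated
                && WebSecurity.CurrentUserId == -1)
            {
                FormsAuthentication.SignOut();
                filterContext.Result = new RedirectResult(FormsAuthentication.LoginUrl);
            }
        }
    }

    // In Global.asax Application_Start:
    // GlobalFilters.Filters.Add(new StaleSessionFilter());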
I have a website built in MVC4 .NET. I want to track the time a user spends online on my website. Example: a user opens the browser, logs in to my website, is active on my website for about 30 minutes, then closes the browser. I want to store those 30 minutes in the database, but I don't know how to implement it. Please help me, because I really need to do this now. Thank you so much.
Here is a script that tracks user login/logout times on a website. It's a simple script that I have used on some sites. With this script you can also see how many users are online at your site.
But the problem is that when the user closes the browser, he does not log out; his session just expires.
Another way is a global action filter that intercepts requests to all actions on all controllers; you can then record the time of each action in the database for the current user and page. To avoid hitting the database too hard, you could cache these values and flush them every few minutes, depending on how much traffic you're dealing with.
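A sketch of that filter (ActivityLog.UpdateLastSeen is a hypothetical data-access helper):

    using System;
    using System.Web.Mvc;

    public class TrackActivityFilter : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var user = filterContext.HttpContext.User;
            if (user != null && user.Identity.IsAuthenticated)
            {
                // Record a "last seen" timestamp for the current user; the
                // gaps between timestamps give you the time spent online.
                // Batch or cache these writes if traffic is high.
                ActivityLog.UpdateLastSeen(user.Identity.Name, DateTime.UtcNow);
            }
            base.OnActionExecuting(filterContext);
        }
    }

    // In Global.asax Application_Start:
    // GlobalFilters.Filters.Add(new TrackActivityFilter());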
UPDATE
About closing the browser: this is not something that's provided for in the normal web HTTP protocol. There's no real way to know for sure when the browser closes; you can only sort of know. You have to throw together an ugly hack to get any level of certainty, and even then it's bound to fail in plenty of edge cases or cause nasty side effects.
The normal work-around is to send AJAX requests at intervals from the browser to your server to set up a sort of heartbeat. When the heartbeat stops, the browser has closed, so you kill the session. But again: this is a horrible hack. In this case, the main problems are that it's easy to get false positives for a failed heartbeat if the server and client get out of sync or there's a JavaScript error, and there's a side effect that your session will never expire on its own.
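A minimal sketch of the browser side of that heartbeat (the endpoint name is illustrative; server-side, you would end the session when pings stop arriving):

    // Runs in the browser: POST a ping every 30 seconds.
    setInterval(function () {
        fetch('/session/heartbeat', { method: 'POST' })
            .catch(function () { /* ignore transient network errors */ });
    }, 30000);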
Does anybody have any quick and clever ways to flip an MVC app running on Windows Azure into a "maintenance mode"?
I don't have a huge need for this because I use the Azure staging environment a lot, but occasionally I do need to make sure there are no users in the production instance of the application (mainly for database updates).
I'd like to be able to do this on the fly without uploading new code or swapping deployment slots. Any suggestions?
The friendliest way to do it is on login. When a user authenticates, check a maintenance-mode flag in the database and don't let them log in. Let active users continue to use the application until they log out or their session times out. Keep an activity log so you can tell when all user sessions have expired.
Of course this means there will be a delay between putting the app into maintenance mode and it actually being clear of users, but it's not nice to boot out an active user.
If the usage pattern of your app means this methodology will not ensure no activity within a reasonable time, you can add a timeout on top of it. Check the same maintenance flag on requests every so often; it doesn't have to be on every request, but every five minutes or so. If necessary, you can also cache the maintenance-mode value locally for a reasonable period of time (a few minutes).
I would use routing for this. Have the flag be inspected during routing configuration; if it is on, route to "Maintenance" screens.
I would suggest adding a global action filter that respects your maintenance-mode flag.
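A sketch of such a filter (MaintenanceMode.IsOn is a hypothetical flag lookup, e.g. a cached database value or a config setting):

    using System.Web.Mvc;
    using System.Web.Routing;

    public class MaintenanceModeFilter : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var controller = (string)filterContext.RouteData.Values["controller"];
            // Redirect everything except the maintenance page itself.
            if (MaintenanceMode.IsOn && controller != "Maintenance")
            {
                filterContext.Result = new RedirectToRouteResult(
                    new RouteValueDictionary(new { controller = "Maintenance", action = "Index" }));
            }
            base.OnActionExecuting(filterContext);
        }
    }

    // In Global.asax Application_Start:
    // GlobalFilters.Filters.Add(new MaintenanceModeFilter());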