D2L to our SIS through REST: occasionally the browser gets lost - desire2learn

Occasionally, when we follow the "Export Grades to SIS" link in D2L back to our SIS (which uses the REST API to pull the grades from D2L), the browser gets lost somewhere in the hand-off between the two systems and simply goes blank. I think the problem is in the final hand-off from D2L to our SIS; that seems to be the point where the browser loses track of where it is. When this happens, our SIS carries on as if everything is still working correctly: it pulls the grades through the REST API and does everything it is supposed to do. But the browser session has gone blank and never shows anything from either D2L or our SIS after that, so the user has no idea what is going on.
This happens roughly one out of every 6-8 times we test the link between D2L and our SIS. I don't see a definite pattern, at least not yet.
Any ideas what we are doing wrong? What sorts of things should we be looking at to try to determine where the browser is getting lost?

If it is an operational issue with a server, it is probably best to open a support ticket with Desire2Learn.

Related

Is AmazonAWS creating non-genuine hits? How can I verify?

I have a site that logs a "hit" (by saving a record to a Hits table that captures the date/time and IP of the machine) whenever a user brings up the detail page for a particular item, so that admins can see how many hits that item gets. We get random instances where items are being hit multiple times a day in twos. In the data it looks like a user viewed an item, but the site logged their hit twice (same item, same date/time, same IP address, etc.). Most hits are only recorded once, and all my testing has led me to believe the site is working correctly.
I've noticed that particular IP addresses are causing the double hits. When I do reverse IP lookups, all the "double hits" are tied to IP addresses that trace back to Amazon AWS in northern Virginia, on the other side of the country. Our site is used locally, and the single hits come from IPs that trace back to local areas. Is there a bot hitting my site from afar? Should I block Amazon AWS in Azure (which is where my site is hosted), or is that going to lock out genuine users? Is there a way I can detect whether a hit is genuine in my code (my site is in .NET MVC)? Has anyone faced a similar situation in the past?
Note: this IS relevant to software engineering because part of the question is asking how I can verify in my code that a hit is genuine!
Basically, what I found out (no thanks to the elitist user who downvoted my question and offered no contribution) is that my hit counter is being inflated by web crawlers. The quick and dirty solution is to implement a robots.txt file to block crawlers from hitting that page. Of course, that comes with the sacrifice that my client's site will no longer come up should the public do a Google search for the product being offered.
One alternative is the hidden-link method, in which we put a hidden page on the site that no human user would ever access. When a bot hits that page, we record its IP in a "blacklist" table. Then, before the real hit counter logs a hit, it checks the visitor's IP against the blacklist.
Another alternative is to maintain a blacklist of User-Agent strings known to be used by bots, and check each visitor's User-Agent against that list to decide whether it is a bot.
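For illustration, here is a rough sketch of how both checks might sit in front of the hit counter in an ASP.NET MVC controller. The HitsRepository class, the BotTrap action, and the specific user-agent fragments are all hypothetical names assumed for the sketch, not anything from the actual site.

    // Hypothetical sketch: honeypot page plus User-Agent blacklist, checked before logging a hit.
    // HitsRepository, BotTrap, and the user-agent fragments are assumptions, not real project names.
    using System;
    using System.Linq;
    using System.Web.Mvc;

    public class ItemsController : Controller
    {
        // Fragments that commonly appear in crawler User-Agent strings; deliberately incomplete.
        private static readonly string[] BotUserAgents = { "bot", "crawler", "spider", "slurp" };

        private readonly HitsRepository _hits = new HitsRepository(); // hypothetical data-access class

        // "Honeypot" action linked only from an invisible anchor, so no human ever requests it.
        // Any IP that does request it gets recorded in the blacklist table.
        public ActionResult BotTrap()
        {
            _hits.AddToBlacklist(Request.UserHostAddress);
            return new HttpStatusCodeResult(404);
        }

        public ActionResult Detail(int id)
        {
            string ip = Request.UserHostAddress;
            string userAgent = (Request.UserAgent ?? string.Empty).ToLowerInvariant();

            bool looksLikeBot = BotUserAgents.Any(userAgent.Contains)
                                || _hits.IsBlacklisted(ip);

            // Only record the hit when the visitor does not look like a bot.
            if (!looksLikeBot)
                _hits.LogHit(id, ip, DateTime.UtcNow);

            return View(); // loading and returning the item model is elided here
        }
    }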
Neither of these solutions is 100% reliable, though.
These are fairly adequate responses to my question. Of course, since this is StackExchange (or StackOverflow or StackYourMomma or whatever it is), people are just going to downvote your question and act like you're beneath a response because you didn't follow all the little bull crap rules that come along with being a member of the SE/SO/SYM community.

PayPal Payments Standard not returning payment variables on iPad

I have just finished developing a website using PayPal Payments Standard, and everything works just fine on most computers, but PayPal does not return any payment variables on the iPad (and maybe other devices).
I created my own cart, and use the Buy Now functionality to pay for the entire order.
The Buy Now form sets the RETURN variable to the cart page, and the RM variable is set to 2, which should post the variables back to the cart page.
When the payment is complete, the cart page checks for posted payment variables and logs them into the database.
As said, this works perfectly on most computers.
On the iPad, though, the user returns to the cart page but no payment variables are posted. The same thing happens when changing the RM variable to 1, which should send the variables as GET parameters.
You can see the code and working website at: http://unameit.ch/
This isn't exactly an answer to your problem, but it would solve the issue and would be better for you anyway.
Using your return URL is not a good way to get data into your database because, even with Auto-Return enabled, there is no guarantee the user will make it back to your return URL. In such cases, that code would never run and the data would never make it into your database.
Instead, what you should use is IPN. IPN will be triggered regardless of whether or not the user makes it back to your site. It's very similar to PDT except that instead of POSTing data back to the return URL it POSTs it to a separate listener script apart from your checkout pages. It happens in real-time so the result would be the same as what you're trying to do now, but it would always work regardless of whether or not the user made it back to your site, and you wouldn't have to worry about issues like you're running into here with the iPad transactions.
I highly recommend you do it this way or you'll find that you're missing order data in your database even if you end up getting this particular problem resolved.
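To make the IPN suggestion concrete, here is a minimal sketch of a listener. IPN can be implemented in any server-side language; this one uses ASP.NET MVC to stay consistent with the other examples on this page, and the controller name, the sandbox endpoint choice, and the SaveOrder helper are assumptions for illustration. The verify-by-echo flow (posting the message back with cmd=_notify-validate and checking for VERIFIED) follows PayPal's IPN documentation.

    // Minimal IPN listener sketch (ASP.NET MVC). SaveOrder is a hypothetical stand-in
    // for your own persistence code; the sandbox URL is used here for testing.
    using System.IO;
    using System.Net;
    using System.Text;
    using System.Web.Mvc;

    public class IpnController : Controller
    {
        // Switch to the live endpoint ("https://ipnpb.paypal.com/cgi-bin/webscr") in production.
        private const string PayPalUrl = "https://ipnpb.sandbox.paypal.com/cgi-bin/webscr";

        [HttpPost]
        public ActionResult Listener()
        {
            // Read the raw POST body exactly as PayPal sent it.
            Request.InputStream.Position = 0;
            string ipnMessage = new StreamReader(Request.InputStream).ReadToEnd();

            // Echo the message back to PayPal, prefixed with _notify-validate, to verify it.
            var verifyRequest = (HttpWebRequest)WebRequest.Create(PayPalUrl);
            verifyRequest.Method = "POST";
            verifyRequest.ContentType = "application/x-www-form-urlencoded";
            byte[] payload = Encoding.ASCII.GetBytes("cmd=_notify-validate&" + ipnMessage);
            using (var stream = verifyRequest.GetRequestStream())
                stream.Write(payload, 0, payload.Length);

            string verdict;
            using (var response = verifyRequest.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
                verdict = reader.ReadToEnd();

            if (verdict == "VERIFIED")
            {
                // Parse the original variables and record the completed payment.
                var fields = System.Web.HttpUtility.ParseQueryString(ipnMessage);
                if (fields["payment_status"] == "Completed")
                    SaveOrder(fields); // hypothetical: insert the transaction into your database
            }

            // Return 200 so PayPal stops retrying the notification.
            return new HttpStatusCodeResult(200);
        }

        private void SaveOrder(System.Collections.Specialized.NameValueCollection fields)
        {
            // Placeholder for your own database logging, mirroring what the return page does today.
        }
    }

Because IPN fires server-to-server, this records the payment even when the iPad never makes it back to the cart page.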
I have had the same issue with the manual callback and have been speaking to PayPal tech support. They have agreed that there is a bug with this working on mobile/tablet devices. Basically, if you go to the mobile PayPal site to make the payment, you won't get any data POSTed back to your return URL.
They have told us to use the Express Checkout API instead:
"Yes the ExpressCheckout works without any issues on all platforms.
As a mater of fact I found that the mobileWPS checkout is Wraped around the
ExpressCheckout and this is the reason why your data is chopped off.
Some of the data is lost in translation from WPS to EC."
Sorry that this isn't an answer, but at least we know that PayPal acknowledges it's a bug.

Broadcast admin message to each session user

I have a requirement to inform every user to save their work and log out so that an admin can reset IIS or make changes on the ASP.NET MVC application server.
Looping through the session object collection is not thread safe, from what I have learned.
Any other ideas?
And even if I can get hold of the active sessions, how do I send a message to those clients?
Thanks in advance.
Save the message in a database and query the database on every request to see if a message exists.
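As a rough sketch of that approach, a global action filter could check for an active message on each request and hand it to the views. The AdminMessageRepository class and the ViewBag key are hypothetical names assumed for the sketch.

    // Sketch: surface an active admin broadcast on every request.
    // AdminMessageRepository and the ViewBag key are hypothetical names.
    using System.Web.Mvc;

    public class BroadcastMessageFilter : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // In practice you would cache this lookup rather than hit the database on every request.
            string message = AdminMessageRepository.GetActiveMessage();

            if (!string.IsNullOrEmpty(message))
            {
                // The layout view can render this as a banner whenever it is present.
                filterContext.Controller.ViewBag.BroadcastMessage = message;
            }

            base.OnActionExecuting(filterContext);
        }
    }

    // Registered once at startup, e.g. in FilterConfig:
    // GlobalFilters.Filters.Add(new BroadcastMessageFilter());

The admin simply inserts or clears the message row; every user sees the banner on their next page load, and no session enumeration is required.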
This seems like a poorly-defined requirement.
Serious maintenance should be done at a specific time, and users should be alerted to that time window well in advance.
Simply restarting IIS is a pretty quick procedure... is there any reason users would lose their work over a simple IIS restart? While I've been filling out this Stack Overflow answer, for instance, they could have restarted the server a dozen times. Once I hit Post, if the server is down, the request will either time out and leave my work in the textarea, or it will connect successfully if the server is back in time.
If I'm not submitting data, but just clicking a link, the same applies: either the browser times out, in which case a simple refresh is enough once the server is back up, or it eventually takes the user where they want to go.
If you're doing pure AJAX requests you will need to handle a missing server yourself, rather than relying on the browser to do it, but you'd need to work that out anyway because of the Eight Fallacies of Distributed Computing #1: "The network is reliable." (see http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing)
So, I'd actually push back on that requirement. They're asking you to do something that won't really meet the need (users don't lose data, have a reasonably good experience), that will become complicated, and that will be a brittle solution in the end.
Sounds like a case for SignalR!
https://github.com/SignalR/SignalR
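If SignalR is an option, the server can push the warning to every connected browser immediately instead of waiting for the next page load. Below is a minimal sketch using classic SignalR (Microsoft.AspNet.SignalR 2.x); the hub name, the showAdminMessage client method, and the admin action are assumptions for illustration, and the JavaScript client that subscribes to the hub is not shown.

    // Sketch: broadcast an admin message to all connected clients with SignalR 2.x.
    // Requires app.MapSignalR() in the OWIN Startup class; names here are assumptions.
    using System.Web.Mvc;
    using Microsoft.AspNet.SignalR;

    // Browsers connect to this hub and handle the "showAdminMessage" client call in JavaScript.
    public class BroadcastHub : Hub
    {
    }

    public class AdminController : Controller
    {
        [HttpPost]
        public ActionResult Broadcast(string message)
        {
            // Push the message to every connected browser immediately.
            var hubContext = GlobalHost.ConnectionManager.GetHubContext<BroadcastHub>();
            hubContext.Clients.All.showAdminMessage(message);
            return new HttpStatusCodeResult(204);
        }
    }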

A web app with logins in a kiosk situation

We are looking at setting up an internal web application (ASP.NET MVC) as a kiosk for the employees that don't have a dedicated computer. We currently do not have this kiosk setup. Each employee will have their own login to look at some basic payroll information and request leaves of absence. This same web application will be used by the office workers with a dedicated PC at their desk.
I am going to go out on a limb and say that no matter how many times we tell them, employees will not click Log Off when they walk away from the kiosk. What would you do to help prevent this from happening?
Let's try to fix the users instead of the code. :) I'm guessing your logout button is like the one here on Stack Overflow: a little "log out" text link somewhere in the upper right corner. That's perfect for people who use web apps day in and day out and know they need to log out before someone comes along and wreaks havoc on their Facebook profile, but less tech-savvy users won't think of it and will just walk away.
You need to draw your users' attention to the logout button and teach them that logging out is a good thing.
Try the following:
Give the logout button more visual weight than usual: make it bigger, make it a real button instead of a text link, and change its color to something more alerting (red, orange, or whatever fits your CI).
If they don't log out, use the session timeout and some JavaScript to refresh the page after a period of inactivity, but also set a flag recording that the user did not log out after their last visit. That way you can greet them on their next login with a friendly confirmation dialog, telling them once again why logging out is important and where your logout button is located.
The naive solution would be to enforce a timeout. If there's no activity from the user within a certain time limit (say, a minute or so), log them out. Of course, this won't prevent someone from walking up immediately after an employee is done and seeing how much money they make.
ATMs handle this, I think, by timing out after a minute or two, which isn't super-secure but at least offers some minimal security.
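For what it's worth, here is a minimal sketch of the server-side half of that timeout, as an ASP.NET MVC action filter that signs the user out if too much time has passed since their last request. The one-minute window, the session key, and the use of forms authentication are assumptions for the sketch; you would still pair it with the client-side redirect mentioned in the other answers, since a kiosk browser sitting idle never triggers a new request on its own.

    // Sketch: force a sign-out when the gap since the last request exceeds the kiosk idle limit.
    // The idle limit, session key, and forms-authentication setup are assumptions.
    using System;
    using System.Web.Mvc;
    using System.Web.Security;

    public class KioskIdleTimeoutAttribute : ActionFilterAttribute
    {
        private static readonly TimeSpan IdleLimit = TimeSpan.FromMinutes(1);

        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var session = filterContext.HttpContext.Session;
            var lastActivity = session["LastActivityUtc"] as DateTime?;

            if (lastActivity.HasValue && DateTime.UtcNow - lastActivity.Value > IdleLimit)
            {
                // Too long since the last request: end the session and bounce to the login page.
                FormsAuthentication.SignOut();
                session.Abandon();
                filterContext.Result = new RedirectResult(FormsAuthentication.LoginUrl);
                return;
            }

            session["LastActivityUtc"] = DateTime.UtcNow;
            base.OnActionExecuting(filterContext);
        }
    }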
If the employees have any kind of RFID card or other security token, you could require them to put it in a reader slot, and log them out whenever the card disappears. Handling this within a web app, though, could get complicated.
The simple way is to use a little JavaScript.
Just set it to something like 30 seconds of inactivity; if the user hasn't clicked on anything, have the JavaScript send them back to a login page.
Here's a link to get you started.
Assuming you've already thought of the obvious (aggressive session timeouts, non-persistent authentication cookies, etc); how about a bit of an "out there" suggestion?
I'm not sure how doable this would be with a web-based interface, but what about using some form of IR sensor with a USB/serial interface and an API you can tie into? That might make it possible to invoke some form of "logout" operation when someone walks away from the kiosk.
Perhaps someone has a better suggestion for external hardware, but this was the first thing that leapt to my mind as an out-of-the-box approach.
I found a jQuery version that seems to work quite well. I'll start by using that and see how that goes.

Auto-Logout with multiple tabs open

We've implemented a system similar to the one described in this other SO post. Basically, if the user doesn't do anything for 14 minutes, we prompt them that they will be logged out. If they click "keep me logged in" we make an Ajax request to keep their session alive; otherwise, they are redirected to the logout page after a minute.
It works pretty well and is in line with similar systems employed at sites like mint.com and bankofamerica.com. The only problem is that our application's users tend to have multiple tabs open to refer back and forth to different pieces of data. So they may be actively working in one tab while another tab times out and logs them out, which causes an abrupt session timeout the user was not expecting. (mint.com has this same issue, by the way.)
So I was wondering if anyone had any ideas to combat this?
I have one idea: each request could set a "last active time" cookie. Upon auto-logout, the server could check this last-active time and, if it's relatively recent, avoid logging the user out. The manual logout would of course ignore this cookie, so the user can still log out at any time. However, I'm afraid this may be exposing some sort of security risk that I'm not able to see at this point. Thoughts?
Before showing the pop-up, ask the server how long ago the user made their last request.
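A minimal sketch of that check, again in ASP.NET MVC for consistency with the rest of this page: an action filter stamps the last-activity time on every normal request (which all tabs share through the session), and a small endpoint, deliberately left out of that tracking, reports how long the user has been idle so the timeout popup can consult it first. The key names and JSON shape are assumptions.

    // Sketch: server-side "last activity" stamp plus an idle-time endpoint the warning popup can poll.
    // The session key and response shape are assumptions for illustration.
    using System;
    using System.Web.Mvc;

    public class TrackActivityAttribute : ActionFilterAttribute
    {
        public override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            // Every tab shares the same session, so activity in any tab refreshes this stamp.
            filterContext.HttpContext.Session["LastActivityUtc"] = DateTime.UtcNow;
            base.OnActionExecuting(filterContext);
        }
    }

    public class SessionController : Controller
    {
        // Deliberately not decorated with TrackActivity, so polling does not count as activity.
        [HttpGet]
        public ActionResult IdleSeconds()
        {
            var last = Session["LastActivityUtc"] as DateTime?;
            double idle = last.HasValue
                ? (DateTime.UtcNow - last.Value).TotalSeconds
                : double.MaxValue;

            return Json(new { idleSeconds = idle }, JsonRequestBehavior.AllowGet);
        }
    }

Each tab's timeout script calls IdleSeconds before showing the warning and simply resets its own timer if another tab has been active recently.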
