Google Analytics causing website to duplicate POSTs - asp.net-mvc

We've noticed a very strange behavior change on our website (ASP.NET MVC) starting in the early morning (GMT) of the 12th of January this year (2018).
HTTP POSTs from the site started firing twice (unconfirmed, but we suspect sometimes more than twice), and despite scouring high and low we couldn't find anything we'd changed.
One of the few things we load dynamically is Google Analytics (specifically Google Tag Manager). In the course of trial and error we tried disabling everything external (which made the phenomenon disappear) and then re-enabling things one by one; as soon as we re-enabled GA the problem reappeared.
We also tried removing everything except GA and the problem persisted.
Searching around, we can't find that anything has been updated in GA, so it's very unclear why this suddenly started, and we have also been unable to find anyone else reporting the same problem (either historically or currently).
Our current best guess is that one of GA's dependencies has been updated, and either it contains a bug or it exposes an already existing fault in our code.
Has anyone else noticed the same problem? Anyone find something in their code that caused the strange behavior of GA?

I found the error: it was caused by two erroneously set-up triggers.
Both triggers were of the form submit type, and both had two activators: one "Activate trigger when all conditions are met" and one "Activate trigger on specific forms".
The problem was that both "all conditions" activators were set to "url matches regular expression .*", whereas the second activator of each trigger targeted the correct Form Path for its respective form.
Whoever set it up must have assumed that Google Tag Manager uses a logical "and" between the two activators (not an unreasonable assumption), but based on my testing at least, the trigger activates when either activator matches.
I couldn't see any reason for the ".*" regex match, so the fix was simply to supply a unique URL expression for each trigger.
There is no explanation yet as to why it suddenly became a problem, because the configuration had been wrong for at least a couple of months.
P.S. For whatever reason our GTM is not in English, so take my quoted names of fields etc. with a grain of salt, as they are translated.
Update
The website uses AJAX to post the forms; the combination of that and the "Await tags" flag on the triggers looks like the likely reason the combined conditions were not acting as expected.
Which means an unannounced performance update to GTM regarding "Await tags" could have been the catalyst that caused the problem to start occurring with alarming frequency.

Related

Using SpecFlow Features as subroutines in other features

I may have this completely wrong, but I've been searching available documentation and googling for 2 weeks now, and have my head completely wrapped around the axle.
I am trying to use SpecFlow to write a regression test for our site. This means that I want to exercise all the features so that if we inadvertently broke something, it will catch it.
The site is basically an incident reporting portal. The home page has about 50 different buttons, each of which opens up the data entry pages for a different class of incident.
The data entry pages are arranged in a "wizard" fashion, where it starts with a page of general questions, then moves on to a page of more specific questions and so on. The questions are more or less grouped in the classic "who/what/when/where/why" grouping, with one wizard page for each group, so that we don't overwhelm the user with 100 questions presented all at once.
Exactly which pages are needed depends on the particular type of incident. Some incident types have as many as 8 pages, some as few as 3.
Our specifications for each page are framed in BDD style - Given/When/Then. So it is very natural to translate those specifications into SpecFlow features, and I have done that, at least for the first page of general information questions. But the resulting Scenario had 30+ steps in it.
I have also written another Feature for testing from the home page -
Given I'm logged in on the home page
When I clicked the button for XYZ ticket
Then it opens XYZ ticket
And the General Information page is displayed.
And I can drive that scenario from a table so that I can test as many different incident types as I want.
So far so good.
But now I want to add
And the General Information page requirements are verified
Where the step definition for that last clause would run the whole scenario for the general information page. In other words, I want to use that other scenario that I have written as a subroutine in this one.
(And then I want to go on and do the same for each of the other wizard pages. But let's get the first one first!)
I can't figure out a way to do that. I tried writing the step definition for the above clause to invoke the step definitions of the General Information scenario, e.g.
Given("I am on the General Information page")
When ("I click this checkbox")
Then ("This happens")
You used to be able to do that (although it would still involve a lot of repetition). But now that gives a warning message that the function is deprecated and will be removed (and since I've now upgraded, it may already have been removed - I haven't tried it since upgrading). The GitHub issue page (https://github.com/SpecFlowOSS/SpecFlow/issues/1733) has a lot of discussion on it, none of which sheds any light on how to do what I'm trying to do. The primary author (SabotageAndi) seemed to be saying "That's a bad thing; don't do that" without really giving an alternative, at least none that I was able to understand.
Can anyone give me a direction for how to accomplish what I'm trying to do?
I want to use that other scenario that I have written as a subroutine in this one.
You can't reuse scenarios defined in feature files.
The best you can do is create a new step that reuses already defined steps by calling them directly (see jameswtelfer's comment from 31 Jan in the GitHub issue you linked).
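One way to do that is to keep the reusable step logic in ordinary public methods on a binding class and let SpecFlow's context injection hand that class to the composite step. A minimal sketch, with hypothetical class, method and step names (the pattern is the point, not the exact steps):

using TechTalk.SpecFlow;

[Binding]
public class GeneralInformationSteps
{
    // Existing step definitions, written as ordinary public methods
    [Given(@"I am on the General Information page")]
    public void GivenIAmOnTheGeneralInformationPage() { /* navigate to the page */ }

    [When(@"I click this checkbox")]
    public void WhenIClickThisCheckbox() { /* interact with the page */ }

    [Then(@"this happens")]
    public void ThenThisHappens() { /* assert the expected outcome */ }
}

[Binding]
public class HomePageSteps
{
    private readonly GeneralInformationSteps _generalInfo;

    // SpecFlow's built-in context injection supplies the other binding class
    public HomePageSteps(GeneralInformationSteps generalInfo)
    {
        _generalInfo = generalInfo;
    }

    [Then(@"the General Information page requirements are verified")]
    public void ThenTheGeneralInformationPageRequirementsAreVerified()
    {
        // Reuse the existing steps by calling their methods directly,
        // instead of the deprecated Given()/When()/Then() string calls
        _generalInfo.GivenIAmOnTheGeneralInformationPage();
        _generalInfo.WhenIClickThisCheckbox();
        _generalInfo.ThenThisHappens();
    }
}

That keeps the reuse in code rather than in the feature file, which matches the point above: the scenarios themselves can't be reused, but the methods behind them can.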

Alternative to custom protocols (URI schemes)

I have been extensively using a custom protocol on all our internal apps to open any type of document (CAD, CAM, PDF, etc.), to open File Explorer and select a specific file, and to run other applications.
Years ago I defined a myprotocol protocol that executes C:\Windows\System32\wscript.exe, passing the name of my VBScript and whatever arguments each request has. The first argument passed to the script describes the type of action (OpenDocument, ShowFileInFileExplorer, ExportBOM, etc.); the following arguments are passed to the action.
Everything worked well until last year, when wscript.exe stopped working (see here for details). I fixed that problem by copying it to wscript2.exe. Creating a copy is now a step in the standard configuration of all our computers and using wscript2.exe is now the official configuration of our custom protocol. (Our anti-virus customer support couldn't find anything that interacts with wscript.exe).
Today, after building a new computer, we found out that:
Firefox doesn't see wscript2.exe. If I click on a custom protocol link, then click on the browse button and open the folder, I only see a small subset of .exe files, which includes wscript.exe but doesn't include wscript2.exe (I don't know how recent this problem is because I don't personally use Firefox).
Firefox sees wscript.exe, but it still doesn't work (same behavior as described in my previous post linked above)
Chrome works with wscript2.exe, but now it always asks for confirmation. According to this article this seems to be the new approach, and things could change again soon. Clicking a confirmation box every time is a big no-no with my users. It would slow down many workflows that require quickly clicking hundreds of links on a page to, for example, look at a CAD application zooming to one geometry in a large drawing.
I already fixed one problem last year, I am dealing with another one now, and reading that article scares me and makes me think that more problems will arise soon.
So here is the question: is there an alternative to using custom protocols?
I am not working on a web app for public consumption. My custom protocol requires the VBScript file, the applications that the script uses and tons of network shared folders. They are only used in our internal network and the computers that use them are manually configured.
First of all, that's super risky even if it's on the internal network only. Unless computers/users/browsers are locked out of the internet, it is possible that someone guesses or finds out your protocol's name, sends a link to someone in your company and causes a lot of trouble (possibly losses too).
Anyway...
Since you control the software on all of the computers, you could add a mini-server on every machine, listening on localhost only, that simply calls your script. Then define a host like secret.myprotocol that points to that server, e.g., localhost:1234.
Just to lessen potential problems a bit, the local server would use HTTPS only, with a proper certificate, and HSTS and HPKP set to a very long time (since you control the software, you can refresh those when needed). The last two are just in case someone tries to set up the same domain and, for whatever reason, the host override doesn't work and the user ends up calling a hostile server.
So, links would have to change from myprotocol://whatever to https://secret.myprotocol/whatever.
It does introduce a new attack surface (the "mini-server"), but it should be easy enough to implement in a way that minimizes the size of that surface :). The "mini-server" does not even have to be a real web server; a simple script that can listen on a socket and call wscript.exe would do (unless you need to pass more info to it).
A real server has more code that can contain bugs, but it also lets you add more things, for example a "pass-through" page that shows "Opening document X in 3 seconds..." and a "cancel" button.
It could also require a session login of some kind (just to be sure it's the user who requests the action, and not something else).
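A minimal sketch of such a listener in C#, just to show the shape of the idea. The port, the script path and the URL-to-arguments mapping are all hypothetical, and the HTTPS/HSTS/certificate setup and input validation described above are deliberately left out:

using System;
using System.Diagnostics;
using System.Net;

class ProtocolListener
{
    static void Main()
    {
        // Listen on localhost only; in the real setup this endpoint would sit
        // behind the secret.myprotocol host entry and be served over HTTPS.
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:1234/");
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext();

            // e.g. /OpenDocument/12345 becomes "OpenDocument 12345",
            // mirroring the arguments the custom protocol used to pass.
            string arguments = context.Request.Url.AbsolutePath.Trim('/').Replace("/", " ");

            // Hypothetical script path; argument validation is omitted in this sketch.
            Process.Start(@"C:\Windows\System32\wscript.exe",
                          @"C:\Scripts\myprotocol.vbs " + arguments);

            context.Response.StatusCode = 200;
            context.Response.Close();
        }
    }
}

Since the browser only ever talks to localhost here, the wscript.exe-vs-wscript2.exe issue with protocol handlers shouldn't come into play, but that is an assumption worth verifying in your environment.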
The title of this blog post says it all: Browser Architecture: Web-to-App Communication Overview.
It describes a list of Web-to-App Communication techniques and links to dedicated posts for some of them.
The first in the list is Application Protocols, which I have been using for years already, and it started to crumble in the last year or so (hence my question).
The fifth is Local Web Server, which is the one described by ahwayakchih.
UPDATE (this update follows the update on the blog post mentioned above)
Apparently I wasn't the only one thinking that this change in behavior was a regression, so a workaround has been issued: the old behavior (showing a checkbox that allows the answer to be remembered) can be restored by adding these keys to the registry:
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Chromium]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001

Disable multi-tab browsing for single session/user

[Disclaimer: I'm not sure if this kind of question is accepted here as it is about a piece of software deployed already. Rest assured I didn't drop any confidential information. Also do tell me if I violated any rules in SO by posting this so I can take it down immediately]
I have a working Learning Management System web application and I recently received a bug report about a button not showing. After investigating, I have proved that the user was not using the web app as intended. When taking an exam, he was opening multiple tabs to exploit the feature that informs him whether an answer was correct or not. He would then use this information to eliminate the wrong answers and submit all the right answers in another tab/window.
I'm using Rails 4.2. Is there a way to prevent multi-tab browsing? I'm thinking something like: if a user is signed in and attempts to open a new tab of the web app, he should see something like "Please use one tab" and have all the features/hyperlinks/buttons disabled.
Here's a screenshot of how I proved he was using multiple tabs. Notice that there are multiple logs for the same attempt # because the current implementation allows saving a study session and resuming it later (this is the part that's being exploited): opening a new tab looks up the most recent attempt session and continues from there. This is also the reason most of the sessions don't have a duration value: the user only finishes a study session in one tab (by clicking a button that ends the study session), and the system cannot compute a duration for the others because they have no end timestamp.
This is what a single-tab user looks like:
This is more of an application misuse issue than a bug.
You should add protection not only against multiple tabs, but against multiple browsers as well, so it can't be a purely front-end check.
One solution could be to use ActionCable to check whether the user already has an active connection, and then act accordingly.
Another, for example, is to generate a GUID in JS and pass it with every answer. If it differs from the one sent with the previous answer, it means the user opened a new window.
But of course the solution depends on your current architecture; without knowing how you currently organise client-server communication it's hard to give an exact and optimal solution.
I found an answer here. I just placed this js in the application view to prevent any extra instance of the website.
Thanks for everyone who pitched in.

Revulytics data not showing in Dashboard

I am using Revulytics SDK to track feature usage and came across the below problem.
I am sending feature usage after properly setting up the SDK configuration etc, using the EventTrack() method like this:
GenericReturn grTest = telemetryObj.EventTrack("FeatureUsage", textBoxName.Text.ToString(), null, false);
This returns OK, and usually I can see the usage data in the dashboard. However, after multiple tests, the data I am sending does not show up on the dashboard.
Can anyone give me a hint on how to debug this? Thanks for any help!
I hit a similar issue when first working with this SDK.
I was able to address this as soon as I understood the following:
There are event quotas for the incoming events;
Event names are used for making the distinction.
So when I was sending dummy test data, it made it there, but when I sent some demo data for stakeholders, it was not showing up.
I think the same is happening here. You're getting the event name from textBoxName.Text... pretty sure that varies every time you run the code.
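A quick check would be to keep the same call shape as your snippet and just swap in a constant event name ("SearchUsed" below is only a hypothetical example):

// Fixed event name, so repeated test runs all report against the same event
GenericReturn grTest = telemetryObj.EventTrack("FeatureUsage", "SearchUsed", null, false);

If that shows up consistently while the textbox-based calls don't, the varying names combined with the quotas described below are the likely culprit.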
Here are the things to keep in mind when testing your code:
the server has a mechanism to discard / consider events;
implicitly, it allows the first xx events, depending on the quota;
if you are sending more than xx events, they will not show up in reports.
So, you must control which events to discard and which to consider (there are a couple of levels you can configure, and based on them you can get the events in various types of reports).
Find the "Tracked Events Whitelist Management" section. You will be able to control these things from there.
This blog helped me (it is not SDK documentation): https://www.revulytics.com/blog/getting-started-with-usage-intelligence-part2-event-tracking
Good luck!

Web Load Test with MVC route parameters creates many instance URLs

I have a web test for the following request:
{{WebServer1}}/Listing/Details/{{Active Listing IDs.AWE2_RandomActiveAuctionIds#csv.ListingID}}
And I receive the error message:
The maximum number of unique Web test request URLs to report on has been exceeded; performance data for other requests will not be available
because there are thousands of unique URLs (I'm testing with different values in the URL). Does anyone know how to fix this?
There are a few features within Visual Studio (2010 and above) that will help with this problem. I am speaking for 2010 specifically, and assuming that the same or similar options are available in later versions as well.
1. Maximum Request URLs Reported:
This is a General Option available in the Run Setting Properties of the load test. The default value set here is 1,000. This default value is usually sufficient... but not always high enough. Depending on the load test size, it may be necessary to increase this. Based on personal experience, if you are thinking about adjusting this value, go through your tests first and get an idea of how many requests you are expecting to see reported. For me, a rough guideline that is helpful:
number_of_requests_in_webtest * number_of_users_in_load_test = total_estimated_requests
If your load test has multiple web tests in it, adjust the above accordingly by figuring out the number of requests in each individual test, summing those values up, and multiplying by the number of users.
This option is more appropriate for large load tests that reference several web tests and/or have a very high user count. One reference for a problem/resolution use case of this option is here:
https://social.msdn.microsoft.com/Forums/en-US/ffc16064-c6fc-4ef7-93b0-4decbd72167e/maximum-number-of-unique-web-test-requests-exceeded?forum=vstswebtest
In the specific case mentioned in the originally posted question, this would not resolve the problem entirely. In fact, it could create a new one, where you end up running out of virtual memory. Visual Studio will continue to create new request metrics and counters for every single request that has a unique AWE2_RandomActiveAuctionIds.
2. Reporting Name:
Another option, which AdrianHHH already touched on, is the "Reporting Names" option. This option is found in the request properties, inside the web test. It defaults to nothing, which in turn results in Visual Studio trying to create the name it will use from the request itself. This behavior creates the issue you are experiencing.
This option is the one that will directly resolve the issue of a new request being reported for every unique request report.
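If you ever work with the coded version of the web test, the same setting is exposed as the ReportingName property on WebTestRequest. A rough sketch, with hypothetical class and context-parameter names, assuming the data binding for the listing IDs is already configured on the test:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class ListingDetailsWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // WebServer1 and ListingID are assumed to come from the test's
        // context parameters / data source binding.
        string server = this.Context["WebServer1"].ToString();
        string listingId = this.Context["ListingID"].ToString();

        WebTestRequest request = new WebTestRequest(server + "/Listing/Details/" + listingId);

        // One fixed reporting name, so every data-bound URL is grouped under a
        // single entry in the results instead of thousands of unique ones.
        request.ReportingName = "Listing Details";

        yield return request;
    }
}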
If you have a good idea of the expected number of requests to be seen in the load test (and I think it is a good idea to know this information, for debugging purposes, when running into this exception) a debugging step would be to set the "Maximum Request URLs Reported" DOWN to that value. This would force the exception you are seeing to pop up more quickly. If you see it after adjusting this value, then there is likely a request that is having a new reported value generated each time a virtual user executes the test.
Speaking from experience, this debugging step can save you some time and hair when dealing with requests that contain sessionId, GUID, and other similar types of information in them. Unless you are explicitly defining a Reporting Name for every single request... it can be easy to miss a request that has dynamic data in it.
3. Record Results:
Depending on the request, and its importance to your test, you can opt to remove it from your test results by setting this value to false. It is accessed under the request properties, within the web test itself. I personally do not use this option, but it may also be used to directly resolve the issue you are experiencing, given that it would just remove the request from the report all together.
A holy-grail-of-sorts document can be found on Codeplex that covers the Reporting Name option in a small amount of detail. At the time of writing this answer, the following link can be used to get to it:
http://vsptqrg.codeplex.com/releases/view/42484
I hope this information helps.
