Web Load Test with MVC route parameters creates many instance URLs - asp.net-mvc

I have a web test for the following request:
{{WebServer1}}/Listing/Details/{{Active Listing IDs.AWE2_RandomActiveAuctionIds#csv.ListingID}}
And I receive the error message:
The maximum number of unique Web test request URLs to report on has been exceeded; performance data for other requests will not be available
because there are thousands of unique URLs (since I'm testing with different values in the URL). Does anyone know how to fix this?

There are a few features within Visual Studio (2010 and above) that will help with this problem. I am speaking about 2010 specifically, and assuming that the same or similar options are available in later versions as well.
1. Maximum Request URLs Reported:
This is a General Option available in the Run Setting Properties of the load test. The default value set here is 1,000. This default is usually sufficient... but not always high enough. Depending on the load test size, it may be necessary to increase it. Based on personal experience, if you are thinking about adjusting this value, go through your tests first and get an idea of how many requests you are expecting to see reported. A rough guideline I find helpful:
number_of_requests_in_webtest * number_of_users_in_load_test = total_estimated_requests
If your load test has multiple web tests in it, adjust the above accordingly by figuring out the number of requests in each individual test, summing those values, and multiplying by the number of users.
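For example, a hypothetical load test that runs two web tests of 10 and 15 requests with 50 virtual users would need roughly (10 + 15) * 50 = 1,250 reported request URLs, which is already above the 1,000 default.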
This option is more appropriate for large load tests that reference several web tests and/or have a very high user count. One reference for a problem/resolution use case of this option is here:
https://social.msdn.microsoft.com/Forums/en-US/ffc16064-c6fc-4ef7-93b0-4decbd72167e/maximum-number-of-unique-web-test-requests-exceeded?forum=vstswebtest
In the specific case mentioned in the originally posted question, this would not resolve the problem entirely. In fact, it could create a new one, where you end up running out of virtual memory. Visual Studio will continue to create new request metrics and counters for every single request that has a unique AWE2_RandomActiveAuctionIds value.
2. Reporting Name:
Another option, which @AdrianHHH already touched on, is the "Reporting Names" option. This option is found in the request properties, inside the web test. It defaults to empty, which in turn results in Visual Studio generating the name it will use from the request itself. This behavior creates the issue you are experiencing.
This is the option that directly resolves the issue of a new entry being reported for every unique request URL (see the coded sketch after these options).
If you have a good idea of the expected number of requests to be seen in the load test (and I think it is a good idea to know this information, for debugging purposes, when running into this exception), a debugging step would be to set the "Maximum Request URLs Reported" value DOWN to that number. This would force the exception you are seeing to pop up more quickly. If you see it after adjusting this value, then there is likely a request that has a new reported value generated each time a virtual user executes the test.
Speaking from experience, this debugging step can save you some time and hair when dealing with requests that contain sessionId, GUID, and other similar types of information in them. Unless you are explicitly defining a Reporting Name for every single request... it can be easy to miss a request that has dynamic data in it.
3. Record Results:
Depending on the request, and its importance to your test, you can opt to remove it from your test results by setting this value to false. It is accessed under the request properties, within the web test itself. I personally do not use this option, but it may also be used to directly resolve the issue you are experiencing, given that it would simply remove the request from the report altogether.
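If you drive the test from a coded web test rather than the declarative editor, the Reporting Name option can also be set in code. Below is a minimal sketch assuming the standard Microsoft.VisualStudio.TestTools.WebTesting API; the data-binding details are simplified to context parameters, and the ReportingName property name should be verified against your Visual Studio version:

using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;

public class ListingDetailsWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Context parameters standing in for the question's data source binding.
        string server = this.Context["WebServer1"].ToString();
        string listingId = this.Context["ListingID"].ToString();

        WebTestRequest request = new WebTestRequest(server + "/Listing/Details/" + listingId);

        // One fixed reporting name for every data-bound variation of this request,
        // so the load test reports a single entry instead of thousands of unique URLs.
        request.ReportingName = "Listing Details";

        yield return request;
    }
}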
A holy-grail-of-sorts document covering the Reporting Name option in a small amount of detail can be found on CodePlex. At the time of writing this answer, the following link can be used to get to it:
http://vsptqrg.codeplex.com/releases/view/42484
I hope this information helps.


Alternative to custom protocols (URI schemes)

I have been extensively using a custom protocol on all our internal apps to open any type of document (CAD, CAM, PDF, etc.), to open File Explorer and select a specific file, and to run other applications.
Years ago I defined a single myprotocol protocol that executes C:\Windows\System32\wscript.exe, passing the name of my VBScript and whatever arguments each request has. The first argument passed to the script describes the type of action (OpenDocument, ShowFileInFileExplorer, ExportBOM, etc.); the following arguments are passed to the action.
Everything worked well until last year, when wscript.exe stopped working (see here for details). I fixed that problem by copying it to wscript2.exe. Creating a copy is now a step in the standard configuration of all our computers and using wscript2.exe is now the official configuration of our custom protocol. (Our anti-virus customer support couldn't find anything that interacts with wscript.exe).
Today, after building a new computer, we found out that:
Firefox doesn't see wscript2.exe. If I click on a custom protocol link, then click on the browse button and open the folder, I only see a small subset of .exe files, which includes wscript.exe but doesn't include wscript2.exe (I don't know how recent this problem is because I don't personally use Firefox).
Firefox sees wscript.exe, but it still doesn't work (same behavior as described in my previous post linked above).
Chrome works with wscript2.exe, but now it always asks for confirmation. According to this article this seems to be the new approach, and things could change again soon. Clicking on a confirmation box every time is a big no-no with my users. It would slow down many workflows that require quickly clicking hundreds of links on a page to, for example, watch a CAD application zoom to one geometry in a large drawing.
I already fixed one problem last year, I am dealing with another one now, and reading that article scares me and makes me think that more problems will arise soon.
So here is the question: is there an alternative to using custom protocols?
I am not working on a web app for public consumption. My custom protocol requires the VBScript file, the applications that the script uses and tons of network shared folders. They are only used in our internal network and the computers that use them are manually configured.
First of all, that's super risky even if it's on the internal network only. Unless the computers/users/browsers are locked out of the internet, it is possible that someone guesses or finds out your protocol's name, sends a link to someone in your company, and causes a lot of trouble (possibly losses too).
Anyway...
Since you control the software on all of the computers, you could add a mini-server on every machine, listening on localhost only, that simply calls your script. Then define a host like secret.myprotocol that points to that server, e.g., localhost:1234.
Just to lessen potential problems a bit, the local server would use HTTPS only, with a proper certificate, and HSTS and HPKP set to a very long time (since you control the software, you can refresh those when needed). The last two are just in case someone tries to set up the same domain and, for whatever reason, the host override doesn't work and the user ends up calling a hostile server.
So, links would have to change from myprotocol://whatever to https://secret.myprotocol/whatever.
It does introduce a new attack surface (the "mini-server"), but it should be easy enough to implement in a way that keeps that surface small :). The "mini-server" does not even have to be a real web server; a simple script that listens on a socket and calls wscript.exe would do (unless you need to pass more info to it).
A real server has more code that can have bugs in it, but it also lets you add more things, for example a "pass-through" page that shows "Opening document X in 3 seconds..." and a "Cancel" button.
It could also require a session login of some kind (just to be sure it's the user who requests the action, and not something else).
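A minimal sketch of such a listener in C#, using plain HTTP on localhost for brevity (the HTTPS/HSTS/HPKP hardening described above would still need to be added); the port, script path and query parameter name are placeholders:

using System;
using System.Diagnostics;
using System.Net;

class ProtocolBridge
{
    static void Main()
    {
        var listener = new HttpListener();
        listener.Prefixes.Add("http://localhost:1234/"); // arbitrary port, localhost only
        listener.Start();

        while (true)
        {
            HttpListenerContext context = listener.GetContext();

            // e.g. https://secret.myprotocol/OpenDocument?file=... becomes
            // wscript.exe actions.vbs OpenDocument "..."
            string action = context.Request.Url.AbsolutePath.Trim('/');
            string argument = context.Request.QueryString["file"] ?? ""; // placeholder parameter name

            Process.Start("wscript.exe",
                "\"C:\\Scripts\\actions.vbs\" " + action + " \"" + argument + "\"");

            context.Response.StatusCode = 204; // nothing to render in the browser
            context.Response.Close();
        }
    }
}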
The title of this blog post says it all: Browser Architecture: Web-to-App Communication Overview.
It describes a list of Web-to-App Communication techniques and links to dedicated posts for some of them.
The first in the list is Application Protocols, which I have been using for years already, and it started to crumble in the last year or so (hence my question).
The fifth is Local Web Server, which is the one described by ahwayakchih.
UPDATE (this update follows the update on the blog post mentioned above)
Apparently I wasn't the only one thinking that this change in behavior was a regression, so a workaround has been issued: the old behavior (showing a checkbox that allows the answer to be remembered) can be restored by adding these keys to the registry:
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Edge]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Google\Chrome]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Chromium]
"ExternalProtocolDialogShowAlwaysOpenCheckbox"=dword:00000001

Get new emails with Microsoft Graph

I am trying to get only new emails with Microsoft Graph.
I am doing this by checking the date, like:
GET https://graph.microsoft.com/v1.0/me/messages?$filter=receivedDateTime+gt+2016-06-06T08:08:08Z
Is there any possibility to build a query to get new messages based on id instead of receivedDateTime? Something like: get messages until you find id=....?
I think the delta query solution is pretty good (as suggested in a different answer). However, for my purposes, there were two major drawbacks: 1) it's in preview (beta) right now, which makes it less than ideal for production code, and 2) it doesn't seem to support monitoring all messages, just those in a particular folder.
I actually prefer the solution you're working with. The timestamp in the header of the response can be used to reset the time field in your query, so that if you have "receivedDateTime gt 12:00:00" and get back the server time of 12:01:00 for your request, you can use "receivedDateTime gt 12:01:00" next time.
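A rough sketch of that polling loop in C# (token acquisition and JSON handling are left as placeholders; the filter format mirrors the question):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class NewMailPoller
{
    // Returns the next watermark to poll from, taken from the server's own clock.
    static async Task<DateTimeOffset> PollAsync(HttpClient client, string accessToken, DateTimeOffset since)
    {
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken); // assumed valid Graph token

        string url = "https://graph.microsoft.com/v1.0/me/messages"
                   + "?$filter=receivedDateTime+gt+" + since.UtcDateTime.ToString("s") + "Z";

        HttpResponseMessage response = await client.GetAsync(url);
        response.EnsureSuccessStatusCode();

        string json = await response.Content.ReadAsStringAsync();
        // ... deserialize the "value" array and process the new messages here ...

        // Advance the watermark using the server's Date header, not the local clock.
        return response.Headers.Date ?? DateTimeOffset.UtcNow;
    }
}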
The scenario you're looking for is specifically what the new Delta query is designed to support. Deltas allow you to retrieve changes to a given folder (i.e. Inbox) since you last polled that folder. Message IDs are not static or consecutive, so they're not a suitable property for determining new vs. old messages.
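For reference, a minimal sketch of the delta flow in C# (using Json.NET for parsing; the client is assumed to already carry a valid Authorization header, and the endpoint was still under /beta at the time of this answer):

using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

class InboxDeltaPoller
{
    // Pass null on the first call; afterwards pass the deltaLink saved from the previous call.
    static async Task<string> SyncAsync(HttpClient client, string savedDeltaLink)
    {
        string url = savedDeltaLink
            ?? "https://graph.microsoft.com/beta/me/mailFolders/inbox/messages/delta";

        while (true)
        {
            JObject page = JObject.Parse(await client.GetStringAsync(url));
            // ... process page["value"] (the messages added/changed since the last sync) ...

            string nextLink = (string)page["@odata.nextLink"];
            if (nextLink != null) { url = nextLink; continue; }

            // No more pages: save this deltaLink and use it for the next poll.
            return (string)page["@odata.deltaLink"];
        }
    }
}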

Check Site URL which fills data in Report Suite in SiteCatalyst (Omniture)

This question may seem odd, but we have a slight mix-up within our Report Suites in Omniture (SiteCatalyst). Multiple Report Suites are generating analytics, and it's hard for us to find which site URLs are contributing the results.
Hence my question is: is there any way we can find which site is filling data within a certain Report Suite?
Through the following JS, I am able to find which report suite is being used by a certain site:
javascript:void(window.open("","dp_debugger","width=600,height=600,location=0,menubar=0,status=1,toolbar=0,resizable=1,scrollbars=1").document.write("<script language=\"JavaScript\" id=dbg src=\"https://www.adobetag.com/d1/digitalpulsedebugger/live/DPD.js\"></"+"script>"));
But I am hoping to find the other way around: where a Report Suite gets its data from, within the SiteCatalyst admin.
Any assistance?
Thanks
Adobe Analytics (formerly SiteCatalyst) does not have anything native or built in that globally looks at all incoming data to see which page/site is sending data to which report suite. However, you can contact Adobe ClientCare and request raw hit logs for a date range, and you can parse those logs yourself, if you really want.
Alternatively, if you have Data Warehouse access, you can export urls and domains from there for a given date range. You can only select one report suite at a time but that's also better than nothing, if you really need the historical data now.
Another alternative: if your sites are NOT currently setting s.pageName, then you may be in some measure of luck for your historical data. The pages report is populated from the s.pageName value. If you do not set that variable, it will default to the URL of the web page that made the request. So, at a minimum, you will be able to see your URLs in that report right now, which should help you out. And if you define "site" as equivalent to "domain" (location.hostname), you can also set up a classification level for pages for domain and then use the Classification Rule Builder and a regular expression to pop the classification with the domain, which will give you some aggregated numbers.
Some suggestions moving forward...
A good strategy moving forward is to have all of your sites report to a global report suite. Then, you can have each site also send data to a site-level report suite (warning: make sure you have enough server calls in your contract to cover this, since AA does not have unlimited server calls). Alternatively, you can stick with one global report suite and set up segments for each site. Another alternative is to create a rollup report suite that all data from your other report suites also goes to. Rollup report suites do not have as many features as standard report suites, but for basic things such as pages and page views, they work.
The overall point though is that one way or the other, you should have all of your data go into one report suite as the first step.
Then, you should also assign a few custom variables to be output on the pages of all your sites. These are the 4 main things I always try to include in an implementation to make it easier to find out which sites/pages are reporting to what.
1. A custom variable to identify the site. Some people use s.server for this. However, you may also want to pop a prop or eVar with the value as well, depending on how you'd like to be able to break data down. The big question here is: how do you define "site"? I have seen it defined many different ways.
2. If you do NOT define "site" as domain (e.g. location.hostname), then I suggest you pop a prop and eVar with the domain, because AA does not have a native report for this. But if you do, then you can skip this, since it's the same thing as point 1.
3. A custom prop and eVar with the report suite(s). Unless you have a super old version of legacy code, just set it with s.sa(). This will ensure you get the final report suite(s), in case you happen to use a version that uses Dynamic Account variables (e.g. s.dynamicAccountList).
4. If you set s.pageName with a custom value, then I suggest you pop a prop and eVar with the URL. Tip: to save on request URL length to AA, you can use dynamic variable syntax to copy the g parameter already in a given AA request. For example (assuming you don't have code that changes the dynamic variable prefix): s.prop1='D=g'; Or, you can pop this with a processing rule if you have the access.
You can normally find this sort of information in the Site Content -> Servers report. There will be information in there that indicates which sites are sending in the hits. Your mileage may vary based on the actual tagging implementation; it is not common for anyone to explicitly set the server, so the implicit value is the domain the hit is coming in from.
Thanks C.

ASP.Net MVC: "The given key was not present in the dictionary"

I have a set of websites built in MVC (each, essentially, is a clone of one original site). On one of the sites I'm getting the error from the title of this post ("The given key was not present in the dictionary") - it only occurs on a single page. The code is identical across all of the sites, including the one which is acting up. Each of the sites is installed with the same setup (most of them are parallel to each other on a single server). I'm unable to replicate this in my dev environment, so I have no idea how to debug (our sites are compiled via a NAnt build process and are all set to Release mode - so no debug info is available).
When I look into the stack trace of the error I notice that at no point does our code get called - it's all ASP.Net page-lifecycle calls (specifically, the last significant function is a method called "__RenderSearchContent" in the compiled page). So far as I can tell from digging through the relevant controller action, there are no instances where the code is using a Dictionary object.
It seems to me that I'm missing something but I'm not sure where to look - the code doesn't seem to be the problem, there shouldn't be any environmental differences (though, it isn't impossible - the database, for example, is a different install but has an identical schema and isn't being used yet according to the stack trace).
One area I have suspicion of is the routing which I know uses a dictionary - but surely, if this was the case, the other sites would suffer from the same problem?
Are there any suggestions as to where I might look to find the cause of this problem?
Any help will be much appreciated.
Cheers
Ok, so the problem was not where I was expecting it to be. If I had a dollar for every time that happened...
Basically, the problem was located in the view that was throwing the error. There was a data item which was present in the other websites but not this one. I guess this type of problem is where strongly-typed objects are ideal.
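For illustration only (the member names here are hypothetical, not the original code): a loosely typed dictionary lookup in the view is exactly how a missing data item surfaces as "The given key was not present in the dictionary" on just one site, whereas a strongly typed property makes the gap visible much earlier.

using System.Collections.Generic;

public class SiteViewModel
{
    public Dictionary<string, string> Settings = new Dictionary<string, string>();

    // Throws KeyNotFoundException at render time on any site where
    // "SearchTitle" was never seeded into the dictionary.
    public string SearchTitleLoose
    {
        get { return Settings["SearchTitle"]; }
    }

    // A dedicated, explicitly populated property turns the missing data
    // into an obvious null (or compile-time) issue instead of a per-request exception.
    public string SearchTitle { get; set; }
}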
Check whether the query (or form) parameters for the request that causes the problem contain a parameter (or parameters) corresponding to the names of the controller action's arguments. When the action is invoked, MVC looks in the parameters of the request for named values matching the names of the action's arguments.
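A contrived example of that matching (the controller and argument names are hypothetical): /Listing/Details/42 and /Listing/Details?id=42 both supply a value named id, which MVC binds to the action argument of the same name.

using System.Web.Mvc;

public class ListingController : Controller
{
    // "id" is bound from the route value or from a query/form parameter of the same name;
    // a request that supplies no matching value leaves the argument null.
    public ActionResult Details(int? id)
    {
        if (id == null)
            return HttpNotFound();

        return View();
    }
}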

Is there a tool to automate/stress POST calls to my site for testing?

I would like to stress (not sure this is the right word, but keep reading) the [POST] actions of my controllers. Does a tool exist that would generate many scenarios, like omitting fields, adding some, generating valid and invalid values, injecting attacks, and so on? Thanks
Update: I don't want to benchmark/performance test my site. I just want to automatically fill/tamper with forms and see what happens.
WebInspect from SPI Dynamics (HP bought them).
I've used this one in my previous job (I recommended it to my employer at the time) and I was overwhelmed with the amount of info and testing I could do with it.
https://download.spidynamics.com/webinspect/default.htm
Apache JMeter is more likely to benchmark/stress itself rather than your site. I was recently pointed towards Faban, which can be used for both very simple and more complex tests and scenarios; it's very performant. Also, take a look at OpenSTA and WebLoad, both free and powerful, with capabilities to record and replay complex scenarios.
Apache JMeter might fit the bill?
Have you seen CrossBow Web Stress Tester over at CodePlex?
supports GET and POST operations
you specify the number of threads, requests, waits, and timeouts
reads a txt file with name/value pairs for posting values
You'd have to download & modify the source if you wanted to generate random data for your Post variables.
Build one yourself using WebClient with HtmlAgilityPack. Use HtmlAgilityPack to parse your HTML and get the form, then just randomly fill the fields and make POSTs.
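A rough sketch of that idea (the target URL and payload pool are placeholders; a real version would also honor the form's action attribute and hidden fields such as the anti-forgery token):

using System;
using System.Collections.Specialized;
using System.Net;
using HtmlAgilityPack;

class FormFuzzer
{
    static void Main()
    {
        string formUrl = "http://localhost/MyController/Create"; // placeholder
        string[] payloads = { "", "42", new string('x', 5000), "<script>alert(1)</script>", "' OR 1=1 --" };
        var random = new Random();

        HtmlDocument page = new HtmlWeb().Load(formUrl);
        var fields = new NameValueCollection();

        HtmlNodeCollection inputs = page.DocumentNode.SelectNodes("//form//input");
        if (inputs != null)
        {
            foreach (HtmlNode input in inputs)
            {
                string name = input.GetAttributeValue("name", null);
                if (name != null)
                    fields[name] = payloads[random.Next(payloads.Length)]; // random benign/hostile value
            }
        }

        using (var client = new WebClient())
        {
            byte[] response = client.UploadValues(formUrl, "POST", fields);
            Console.WriteLine("Response length: " + response.Length);
        }
    }
}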
