ASP.Net MVC: "The given key was not present in the dictionary" - asp.net-mvc

I have a set of websites built in MVC; each is essentially a clone of one original site. On one of the sites I'm getting the error from the title of this post ("The given key was not present in the dictionary"), and it only occurs on a single page. The code is identical across all of the sites, including the one that is acting up. Each of the sites is installed with the same setup (most of them sit parallel to each other on a single server). I'm unable to replicate this in my dev environment, so I have no idea how to debug it (our sites are compiled via a NAnt build process and are all set to Release mode, so no debug info is available).
When I look into the stack trace of the error I notice that at no point does our code get called; it's all ASP.NET page-lifecycle calls (specifically, the last significant function is a method called "__RenderSearchContent" in the compiled page). As far as I can tell from digging through the relevant controller action, there are no instances where the code is using a Dictionary object.
It seems to me that I'm missing something, but I'm not sure where to look. The code doesn't seem to be the problem, and there shouldn't be any environmental differences (though it isn't impossible: the database, for example, is a different install, but it has an identical schema and isn't even being touched yet according to the stack trace).
One area I'm suspicious of is the routing, which I know uses a dictionary internally, but surely, if that were the case, the other sites would suffer from the same problem?
Are there any suggestions as to where I might look to find the cause of this problem?
Any help will be much appreciated.
Cheers

Ok, so the problem was not where I was expecting it to be. If I had a dollar for every time that happened...
Basically, the problem was located in the view that was throwing the error. The view referenced a data item that was present for the other websites but not for this one, so the lookup failed at render time. I guess this type of problem is where strongly-typed objects are ideal.
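For anyone hitting the same thing, here is a minimal sketch (in C#, with made-up names, not code from the actual site) of the kind of lookup that produces this error in a weakly-typed view:

// A dictionary stashed in ViewData is indexed with a key that this one
// site's data never supplies, so the indexer throws KeyNotFoundException
// ("The given key was not present in the dictionary").
var settings = (IDictionary<string, string>)ViewData["SiteSettings"];
var banner = settings["PromoBanner"];

// Safer: fail softly with TryGetValue, or better still bind the view to a
// strongly-typed model so the missing value is caught at compile time.
string safeBanner;
settings.TryGetValue("PromoBanner", out safeBanner);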

Check whether the query string (or form) parameters of the request that causes the problem contain a parameter (or parameters) corresponding to the names of the controller action's arguments. When the action is invoked, the model binder looks through the request's parameters for values matching the names of the action's arguments.
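For example (a hypothetical controller, not the asker's code), the default model binder fills each action argument from a request value with the same name:

using System.Web.Mvc;

public class ListingController : Controller
{
    // GET /Listing/Details?id=42 binds 42 to 'id'. If the request carries no
    // parameter named 'id' (or it is spelled differently), the binder has
    // nothing to bind and value-type arguments like this one cause an error.
    public ActionResult Details(int id)
    {
        ViewData["ListingId"] = id;
        return View();
    }
}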

Related

Google Analytics causing website to duplicate POSTs

We've noticed a very strange behavior change on our website (ASP.NET MVC) starting early in the morning (GMT) on the 12th of January this year (2018).
HTTP POSTs from the site started firing twice (unconfirmed, but we suspect sometimes more than twice), and despite scouring high and low we couldn't find anything we had changed.
One of the few things we load dynamically is Google Analytics (specifically Google Tag Manager), and in the course of trial and error we tried disabling everything external (which made the phenomenon disappear) and then re-enabling things one by one; once we came to re-enabling GA, the problem reappeared.
We also tried removing everything except GA and the problem persisted.
When searching we can't find that anything has been updated in GA, so it's very unclear why this suddenly started, and we have also been unable to find anyone else reporting the same problem (either historically or currently).
Our current best guess is that one of GA's dependencies has been updated and either contains a bug or is exposing an already existing fault in our code.
Has anyone else noticed the same problem? Anyone find something in their code that caused the strange behavior of GA?
I found the error: it was caused by two erroneously set-up triggers.
Both triggers were of the form-submit type, and both had two activation conditions: one "Activate trigger when all conditions are met" and one "Activate trigger on specific forms".
The problem was that both "all conditions" activators were set to "url matches regular expression .*", whereas the second activator on each trigger targeted the correct Form Path for its respective form.
Whoever set it up must have assumed that Google Tag Manager uses a logical "and" between the two activators (not an unrealistic assumption), but based on my testing, at least, it seems that the trigger fires when either activator matches.
I couldn't see any reason for the ".*" regex match in the first place, so the fix was simply to supply a unique URL expression for each trigger.
There is no explanation yet as to why it suddenly became a problem, because the configuration had been wrong for a couple of months at least.
P.S. For whatever reason our GTM is not in English, so take my quoted names on fields/etc with a grain of salt as they are translated.
Update
The website uses AJAX to post the forms; the combination of that and the "Await tags" flag on the triggers looks like the most likely reason the combined conditions were not acting as expected.
That means an unannounced performance update to GTM's "Await tags" handling could have been the catalyst that caused the problem to start occurring with alarming frequency.

Check Site URL which fills data in Report Suite in SiteCatalyst (Omniture)

This question may seem odd, but we have a slight mix-up within our report suites in Omniture (SiteCatalyst). Multiple report suites are generating analytics, and it's hard for us to tell which site URLs are contributing to the results.
Hence my question: is there any way we can find which site is feeding data into a certain report suite?
Through the following JS I am able to find which report suite is being used by a certain site, though:
javascript:void(window.open("","dp_debugger","width=600,height=600,location=0,menubar=0,status=1,toolbar=0,resizable=1,scrollbars=1").document.write("<script language=\"JavaScript\" id=dbg src=\"https://www.adobetag.com/d1/digitalpulsedebugger/live/DPD.js\"></"+"script>"));
But I am hoping to find the other way around, i.e. to see where a report suite gets its data from, from within the SiteCatalyst admin.
Any assistance?
Thanks
Adobe Analytics (formerly SiteCatalyst) does not have anything native or built in that lets you globally look at all incoming data to see which page/site is sending data to which report suite. However, you can contact Adobe ClientCare and request raw hit logs for a date range, and you can parse those logs yourself, if you really want.
Alternatively, if you have Data Warehouse access, you can export URLs and domains from there for a given date range. You can only select one report suite at a time, but that's still better than nothing if you really need the historical data now.
Another alternative: if your sites are NOT currently setting s.pageName, then you may be in some measure of luck for your historical data. The Pages report is populated from the s.pageName value; if you do not set that variable, it defaults to the URL of the web page that made the request. So, at a minimum, you will be able to see your URLs in that report right now, which should help you out. And if you define "site" as equivalent to "domain" (location.hostname), you can also set up a classification level on pages for the domain and then use the Classification Rule Builder and a regular expression to populate the classification with the domain, which will give you some aggregated numbers.
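As an illustration (the exact Classification Rule Builder syntax may differ from this), a regular expression along the lines of ^https?://([^/]+)/ applied to the page URL captures the location.hostname portion, which is the value you would write into the domain classification.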
Some suggestions moving forward...
A good strategy moving forward is to have all of your sites report to a global report suite. Then you can have each site also send data to a site-level report suite (warning: make sure you have enough server calls in your contract to cover this, since AA does not have unlimited server calls). Alternatively, you can stick with one global report suite and set up segments for each site. Another alternative is to create a rollup report suite that all data from your other report suites also goes to. Rollup report suites do not have as many features as standard report suites, but for basic things such as pages and page views, they work.
The overall point though is that one way or the other, you should have all of your data go into one report suite as the first step.
Then, you should also assign a few custom variables to be output on the pages of all your sites. These are the 4 main things I always try to include in an implementation to make it easier to find out which sites/pages are reporting to what:
1) A custom variable to identify the site. Some people use s.server for this. However, you may also want to pop a prop or eVar with the value as well, depending on how you'd like to be able to break data down. The big question here is: how do you define "site"? I have seen it defined many different ways.
2) If you do NOT define "site" as the domain (e.g. location.hostname), then I suggest you pop a prop and eVar with the domain, because AA does not have a native report for this. But if you do, then you can skip this, since it's the same thing as point #1.
3) A custom prop and eVar with the report suite(s). Unless you have a super old version of legacy code, just set it with s.sa(). This will ensure you get the final report suite(s), in case you happen to use a version that uses Dynamic Account variables (e.g. s.dynamicAccountList).
4) If you set s.pageName to a custom value, then I suggest you pop a prop and eVar with the URL. Tip: to save on request URL length to AA, you can use dynamic variable syntax to copy the g parameter already present in a given AA request. For example (assuming you don't have code that changes the dynamic variable prefix): s.prop1='D=g'; Or, you can pop this with a processing rule if you have the access.
You can normally find this sort of information in the Site Content -> Servers report. There will be information in there that indicates which sites are sending in the hits. Your mileage may vary based on the actual tagging implementation; it is not common for anyone to explicitly set the server variable, so the implicit value is the domain the hit is coming from.
Thanks C.

Web Load Test with MVC route parameters creates many instance URL's

I have a web test for the following request:
{{WebServer1}}/Listing/Details/{{Active Listing IDs.AWE2_RandomActiveAuctionIds#csv.ListingID}}
And I receive the error message:
The maximum number of unique Web test request URLs to report on has been exceeded; performance data for other requests will not be available
because there are thousands of unique URLs (I'm testing with different values in the URL). Does anyone know how to fix this?
There are a few features within Visual Studio (2010 and above) that will help with this problem. I am speaking for 2010 specifically, and assuming that the same or similar options are available in later versions as well.
1. Maximum Request URLs Reported:
This is a General Option available in the Run Setting Properties of the load test. The default value set here is 1,000. This default value is usually sufficient... but not always high enough. Depending on the load test size, it may be necessary to increase this. Based on personal experience, if you are thinking about adjusting this value, go through your tests first and get an idea of how many requests you are expecting to see reported. For me, a rough guideline that is helpful:
number_of_requests_in_webtest * number_of_users_in_load_test = total_estimated_requests
If your load test has multiple web tests in it, adjust the above accordingly by figuring out the number of requests in each individual test, summing those values, and multiplying by the number of users.
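As a purely illustrative application of that guideline: a single web test containing 25 requests, run in a load test with 200 virtual users, gives 25 * 200 = 5,000 estimated requests, well above the default cap of 1,000.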
This option is more appropriate for large load tests that reference several web tests and/or have a very high user count. One reference for a problem/resolution use case of this option is here:
https://social.msdn.microsoft.com/Forums/en-US/ffc16064-c6fc-4ef7-93b0-4decbd72167e/maximum-number-of-unique-web-test-requests-exceeded?forum=vstswebtest
In the specific case mentioned in the originally posted question, this would not resolve the problem entirely. In fact, it could create a new one, where you end up running out of virtual memory. Visual Studio will continue to create new request metrics and counters for every single request that has a unique AWE2_RandomActiveAuctionIds.
2. Reporting Name:
Another option, which @AdrianHHH already touched on, is the "Reporting Names" option. This option is found in the request properties, inside the web test. It defaults to empty, which in turn results in Visual Studio generating the reporting name from the request itself. That behavior is what creates the issue you are experiencing.
This is the option that will directly resolve the issue of a new entry being reported for every unique request URL.
If you have a good idea of the expected number of requests in the load test (and I think it is a good idea to know this, for debugging purposes, when running into this exception), a useful debugging step is to set "Maximum Request URLs Reported" DOWN to that value. This forces the exception you are seeing to appear more quickly. If you still see it after adjusting the value, then there is likely a request that generates a new reported value each time a virtual user executes the test.
Speaking from experience, this debugging step can save you some time and hair when dealing with requests that contain session IDs, GUIDs, and other similar kinds of dynamic data. Unless you are explicitly defining a Reporting Name for every single request... it can be easy to miss a request that has dynamic data in it.
3. Record Results:
Depending on the request and its importance to your test, you can opt to remove it from your test results by setting this value to false. It is accessed under the request properties, within the web test itself. I personally do not use this option, but it may also be used to directly resolve the issue you are experiencing, given that it would simply remove the request from the report altogether.
A holy-grail-of-sorts document can be found on Codeplex that covers the Reporting Name option in a small amount of detail. At the time of writing this answer, the following link can be used to get to it:
http://vsptqrg.codeplex.com/releases/view/42484
I hope this information helps.

A public action method pagerror.gif / refresh.gif could not be found on controller - who is calling that gif?

Hello stackoverflow world.
(This is the first time I actually post a question here. Exciting)
A while ago I inherited a two-year-old MVC website from one of the teams within my corporation. I know most of the ins and outs of this solution now, but there is something strange cropping up in my error logs which I do not understand.
Every now and then I will get an error message like this one:
A public action method 'xyz.gif' could not be found on controller MyNamespace.MyController
What I don't understand is WHY is this action (a gif image) being called in the first place?
I've seen 2 different GIFs in the error logs: pagerror.gif and refresh.gif.
As this is an inherited solution I double-checked everything and made sure that there are in fact no images like that in the project and no references anywhere to those names in the controllers, views, style sheets, or even in the source of pages within the same controller.
I seriously doubt that the users are playing around with the URLs and adding random gif names to them to see what happens.
I'm all out of ideas. Anyone out there who can suggest more places to look for the culprit?
Ta!
As Tchami pointed out in a comment on the original question, this is related to Internet Explorer's default error pages.
As I have set up custom error pages, I believe this is due either to an internal server error or possibly to some kind of action-cancelled error on the client side. I can't be 100% sure at this point.
The question is not fully answered but I mostly know what the cause is now.
From my point of view I've identified that I need to improve this ASP.NET MVC application so that
1) it doesn't report/log errors when someone tries to navigate to a non-existing controller action (e.g. these refresh.gif actions or any other); a sketch of filtering these out is below,
and 2) it handles the situation better for the client, so that they don't end up clicking from one error page (the default IE error page) to another (my custom error page, when clicking the refresh icon on the IE page).
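As a rough sketch of point 1 (the Logger call is a placeholder, and this is not the code from my actual application), something like the following in Global.asax.cs stops not-found actions from being logged as errors while still letting the normal custom error handling run:

protected void Application_Error(object sender, EventArgs e)
{
    var error = Server.GetLastError();
    var httpError = error as HttpException;

    // Requests for actions that simply don't exist (IE's pagerror.gif,
    // refresh.gif, bots probing URLs, ...) surface as 404 HttpExceptions.
    // Skip logging those and fall through to the configured error pages.
    if (httpError != null && httpError.GetHttpCode() == 404)
        return;

    Logger.Error(error); // placeholder for whatever logging this app uses
}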
Another Stack Overflow thread on a related topic:
Significance of 'pagerror.gif'?
(I can't post more links as I'm a new user.)
CHEERS!
Solveig
Can you get the error to show up in the logs when you use the site yourself? If so, an add-in such as HttpWatch might help you see those .gif requests. If you can understand more about when they happen you might be able to figure out what's going on.
Pagerror.gif and refresh.gif are default images from the IE browser/IIS server. Normally these images are shown when the browser is not able to retrieve the content.
If you see these errors in your logs, check the IIS log to get more information.
For example, in the IIS log look for the cs(User-Agent) field, which will contain something like:
compatible;+MSIE+6.0;+Windows
Here are a few things to be noted:
1) IE 6.0 apparently makes these requests on its own. I am not sure whether any other IE 6+ browser shows similar behavior.
2) All this does is generate a "bogus" event log entry, because a null reference exception can happen when you request a non-existing GIF and that request goes through the MVC pipeline.
3) Technically, this can simply be ignored.
4) Optionally, we could check whether, through routing, we could stop ".gif" files from being processed by the MVC pipeline (see the sketch after this list).
All I would suggest is to handle it gracefully.
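A minimal sketch of point 4 (the route pattern here is an assumption; adjust it to your URLs): an IgnoreRoute entry in RegisterRoutes (Global.asax), placed before the default MapRoute, keeps .gif requests out of MVC routing entirely so they never reach a controller.

public static void RegisterRoutes(RouteCollection routes)
{
    routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

    // Let IIS/static file handling deal with any .gif request instead of MVC.
    routes.IgnoreRoute("{*anygif}", new { anygif = @".*\.gif(/.*)?" });

    routes.MapRoute(
        "Default",
        "{controller}/{action}/{id}",
        new { controller = "Home", action = "Index", id = UrlParameter.Optional });
}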
It may be difficult to pinpoint this, but my thought would be: check the JavaScript. If the image name is being dynamically generated somewhere and then requested, a simple find-and-replace may miss the reference.

Sticky notes associated with web page - how to?

I have this idea for a project: associated with any web page, I want to create notes that will be saved locally in a database; the notes will then be reloaded automatically from that database the next time I visit the same page.
Creating the note is easy, but I'm looking for how to link the notes to the web page's URL and how to keep track of the active web page. Any ideas?
(Note: I came across this while searching the internet: http://webkit.org/demos/sticky-notes/, which is part of the WebKit open source project. This is about what I'm looking for.)
Thanks.
Browser-dependent, probably. You'll have to have a plugin for every browser type.
IE might be doable via the COM interface, but that would probably require starting IE in a way you control, so that will probably have to be a plugin too.
For browser independence, there are quite a few challenges in this one. One way would be to implement a proxy server and watch for text/html content. This will work for most general cases, but not every case. Handling frames, for instance: which resource is the "parent" and which is the "child"? Which one contains the sticky note? I think you would have to inject some client-side JavaScript to keep track of things, and that might break some websites.
protonotes.com is a web service version of this. Not sure how they do it though.
Actually, Daniel H hit the nail on the head mate: http://www.protonotes.com
It does exactly what you want. In fact, it gives you two options for storing your data: the first is hosted, the second is your own MySQL db, with Protonotes piping the data from the tack-on style notes to your own db if you prefer. This also means that you're not the only person who can see the notes; access is granted by a unique 'group' key.
I've just deployed Protonotes as our main online review tool for two reasons: we can save our own data, and it lacks some features which I generally label "dubious" anyway.
Its simplicity is great; the only thing I'm aware of that could cause a problem is that it dumps a bunch of stuff in the global namespace, if that's an issue for you.
d
