Cassini much slower than IIS for MVC RenderPartial - asp.net-mvc

I have an MVC view with a partial view recursive call that displays hierarchical data.
The complete tree typically includes on the order of 500 items.
The data is all included in the model, and the model's a trivial record class - nothing in it except auto-properties.
In IIS this works fine.
However in Cassini/WebDev (Visual Studio's built in web server) this page runs painfully slow and often times out.
A little digging shows that this is due to each call to Html.RenderPartial taking around 200ms (a fifth of a second), while the partial view itself seems to execute in under a millisecond.
Anyone got any ideas why this is so slow?
Why would it be different between IIS and Cassini? The IIS application is pointed at my development directory; they're running exactly the same code, build and config.

I think this could be related to the caching of resolved view paths. The article here explains the issue I'm referring to.
Do you notice the same behaviour if you pass the full path of the view, like:
RenderPartial("~/Views/MyView.ascx")
Kindness,
Dan

Related

AspNet MVC 5 too slow - idle time between pipeline methods

I'm using Glimpse to debug some performance problems in my website, and it seems that the server/framework sits idle for too long between method calls.
This picture shows 320 ms of server time;
This second picture reveals that 125.29 ms are used by ViewResult.ExecuteResult (I understand that as "rendering", which seems pretty slow to me, considering that my views are pre-compiled - more on that below);
But the really odd thing here is that more than 100 ms are pretty much wasted as idle time, as you can verify in this picture.
Those little blocks representing server work sometimes account for 0 ms! But then there's a lapse of about 15 ms before the next block.
Is it really idle time? Do you have any tips for where to look next, or how to optimize this?
Disclaimer: I've been looking into this for a week or so, and I have already found and applied the general performance recommendations, such as:
Only one View Engine is active (RazorViewEngine) - see the sketch below;
Run in Release mode;
Specify full view paths, like "~/Views/Folder/ActionName.cshtml".
Besides that, Donut Caching is active, views are pre-compiled with Razor Generator, and I'm using Glimpse for diagnostics. Anyway, I've tried disabling each of these to make sure they weren't the offenders, and verified that they actually improve the times.
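For reference, trimming the view engines down to Razor only (the first item in the list above) is typically done in Application_Start; a minimal sketch, assuming the standard Global.asax.cs and System.Web.Mvc:

protected void Application_Start()
{
    // Remove WebForms and any other registered engines, then register Razor only
    ViewEngines.Engines.Clear();
    ViewEngines.Engines.Add(new RazorViewEngine());

    // ... the usual route, bundle and filter registration goes here
}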
Thanks in advance.

Am I missing potential problems with custom page caching in Rails 3?

I use Rails to present automated hardware testing results; our tests are run mainly via TCL. Recently, we have implemented a "log4TCL", which is basically a translated version of log4j. The log files have upwards of 40,000 lines, each of which is written to the database as a logline record, and load time for the view is too long to be considered usable. I have tried using Ajax requests to speed things up, but the initial query/page load accounts for ~75% of the full page load.
My solution is page caching. I cannot use the rails included page caching because each log report is a different instance of "log_viewer". The report is generated using a test_run_id parameter. Rails-included page caching only caches one instance of "log_viewer.html". What I need is "log_viewer_#{test_run_id}.html". I have implemented a way of doing this. The reports age out after one week and are purged from the test_runs/log_viewer_cache directory to save disk space. If an older report is needed, loading the page re-generates the report with a fresh age-out timer.
I have come to the conclusion that this is the way to go. My concern is that I have not found any other implementations such as this anywhere which leads me to believe that I have missed an inherent flaw in my design. Any input would be much appreciated.
EDIT: For clarification, the "Dynamic" content of this report is what takes too long to load. I need to cache multiple instances of what action/fragment caching is not concerned with.

IIS 7 over time performance degrading when rendering partial views

I have several websites that are currently experiencing the following problem. Over time, rendering of a specific partial view (ASP.NET MVC 1) degrades and takes around ten times longer than normal. I currently have a workaround, but it's far from ideal.
Take this node off our load balancer
Stop IIS
Delete all temporary asp.net files
Start IIS
Hit the site to get caches populated and views compiled
Put the node back on the load balancer's rotation.
I know that it's not the restarting of IIS fixing it, it seems that the temp asp.net files have to be deleted for this to work properly. After those steps are completed, performance on the site is much, much better for around three to six hours. After that, it goes back to being terrible. The partial view that's having issues pretty much just renders out some html with cached data. We have not been able to reproduce this issue in our dev environment at all, so we're pretty stumped. We're going to be upgrading our live environment shortly, so I'd just like to know what's causing this problem. If it's configuration related at all, I want to make sure it's fixed with our new setup. Anyone ever seen this before?
There could be many things at play here; an initial checklist:
confirm the app is not deployed in debug mode (see the web.config sketch below)
what logging do you use and is it being done excessively?
what is the bottleneck on the server when this happens? If it's memory, you might have to check for a leak
do you regularly recycle your app pools?
Can you give some more details on what this partial view actually does?
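For the first item on that checklist, the quickest thing to verify is the compilation element in the site's web.config; debug="true" disables batch compilation and some caching. A minimal sketch of the relevant fragment:

<system.web>
  <compilation debug="false" />
</system.web>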
The solution for this problem was to clean up the temporary asp.net files. We integrated this step into our deploy process, and the site overall has been running faster.
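As a hedged sketch, that deploy step can be as simple as the following (the framework folder - e.g. v2.0.50727 vs v4.0.30319, Framework vs Framework64 - depends on the server, and the app pool should be stopped before deleting):

rem Clear compiled ASP.NET artifacts so views are freshly compiled on the next hit
rd /s /q "%SystemRoot%\Microsoft.NET\Framework64\v4.0.30319\Temporary ASP.NET Files"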

ASP.NET MVC BuildManager.GetAssemblies()

I'm working in an ASP.NET MVC 4 application, and on app startup I'm using BuildManager to get all referenced assemblies. I'm looking through all types in the application to find a few that I want (it's dynamic, so I don't know what types I need until startup).
Essentially my code looks like this:
// Requires System.Linq, System.Reflection and System.Web.Compilation
var allTypes = BuildManager.GetReferencedAssemblies()
    .Cast<Assembly>()
    .SelectMany(a => a.GetTypes());
I'm calling this on app startup but also at the beginning of each new request in order to dynamically find types.
So my questions are:
Since ASP.NET doesn't load assemblies until they're needed, by calling BuildManager.GetReferencedAssemblies() am I loading ALL assemblies before they're needed and causing a performance issue?
Is iterating through all types a bad idea for each request? I could cache the types but ASP.NET has the option of loading assemblies dynamically after I've cached them, right? If so I may miss some types which are indeed there.
Thanks!
Don't do it every request: do cache as early as possible; reflection is slow.
Pre-load all the assemblies and do it on app-startup; I have a system that I use in a lot of our websites which has to do a lot of dynamic stuff based on deployed assemblies, and I do all the work on startup.
Yes startup is therefore slower - but that's less of a problem than each request taking longer.
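As a rough illustration of that caching approach (the class and member names here are hypothetical):

using System;
using System.Linq;
using System.Reflection;
using System.Web.Compilation;

public static class TypeCache
{
    // Resolve the type list once, lazily and thread-safely, then reuse it on every request
    private static readonly Lazy<Type[]> _allTypes = new Lazy<Type[]>(() =>
        BuildManager.GetReferencedAssemblies()
                    .Cast<Assembly>()
                    .SelectMany(a => a.GetTypes())
                    .ToArray());

    public static Type[] AllTypes
    {
        get { return _allTypes.Value; }
    }
}

Touching TypeCache.AllTypes from Application_Start pays the reflection cost once at startup instead of on the first request.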
You will then most likely be interested in a question I asked and answered a while ago about how to preload all deployed assemblies reliably: How to pre-load all deployed assemblies for an AppDomain.
I still use the same process to this day and it works like a charm.

How do I improve ASP.Net MVC3 performance on each page's first hit?

If I make a change to a razor view, recompile, or wait 15-20 minutes, a page might take anywhere from 3-20 seconds to render on that first hit. I understand that the view needs to be recompiled after a change. I also understand that the application will be unloaded after a period of inactivity, but I thought that would be a one time penalty on the very first hit. But for me it seems to apply to every single page.
Take, as an example, my homepage. According to YSlow it's a "B" with 15 components and weighing 250K (that's including MiniProfiler's extra jquery reference). From MiniProfiler I see about 500ms on the first line (http://localhost:80). I'm assuming this includes the view compilation. But then I see 1200ms for Find:Index. There are no SQL calls. Total load time on the first hit is about 3000ms, subsequent hits are about 40ms.
On another page with a couple of partial views, the parent view takes 2400ms to "Find", one of the partial views takes 1000ms to find. The parent view also takes 3200ms to "Render". And the biggest impact is on the first line (http://localhost:80/User/Dashboard) which was a whopping 7000ms. This page has only 3 queries with a total query time of 100ms. The total time to load was more than 15000ms. Subsequent hits are about 250ms.
Our setup is ASP.Net MVC 3, Ninject, EF4.2, Razor view engine, ELMAH, NLog, Html5Boilerplate, and MvcMiniProfiler. I created a duplicate project and removed Ninject, ELMAH, NLog, and MvcMiniProfiler. Performance was only marginally faster. We have about 15 controllers and about 40 views, all in one area.
Is this normal performance? When we deploy to Azure, it's even worse (naturally) than testing locally. Are there suggestions for improvement?
Edit:
A first hit after compile on IIS/localhost (in release mode and with compilation debug=false) can be about 15 seconds. The Azure deployment, running in release, has a faster first hit, but still in the range of 5-10 seconds. I tried David Ebbo's project but didn't see anything dramatic.
Do you deploy this application frequently? If so, then I can see why the first hit performance can be of concern.
We deploy often, and have created a separate project to "warm up" our deployments. It is a unit test project that uses WebDriver to hit each uncompiled view in our app after it is deployed.
Basically, you just use the WebDriver API to fire up a browser, then Navigate() to each URL that needs to be compiled. Run it once, and the deployment is warm.
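A minimal sketch of such a warm-up routine (the driver choice and URLs are placeholders, not taken from the answer above):

using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

public static class SiteWarmUp
{
    public static void WarmAll()
    {
        // Placeholder list of views to hit; in practice this would cover every uncompiled view
        var urls = new[]
        {
            "http://myapp.example.com/",
            "http://myapp.example.com/User/Dashboard"
        };

        IWebDriver driver = new FirefoxDriver();
        try
        {
            foreach (var url in urls)
            {
                driver.Navigate().GoToUrl(url);   // the first hit triggers view compilation
            }
        }
        finally
        {
            driver.Quit();
        }
    }
}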
Also, in Azure, you can turn off the idle timeout, so that your app never gets idled. We use this script:
%windir%\system32\inetsrv\appcmd set config -section:applicationPools -applicationPoolDefaults.processModel.idleTimeout:00:00:00
... and run it during the Azure deployment like so:
<Task commandLine="startup\disableTimeout.cmd" executionContext="elevated" taskType="simple" />
