EF4 LINQ statement times out when using MEF - asp.net-mvc

We are using MEF to customize certain functionality of our ASP.NET MVC application. We are using .NET 4.0 and SQL Server 2008. We have run into some very strange behavior when using EF4 and a LINQ statement against a view inside a MEF assembly.
The code below will time out when executed as part of the imported assembly using MEF.
It will run in less than a second when it is pulled out and executed in the main assembly.
If the date range is set to one day, the MEF implementation will eventually finish in about 15 seconds.
If we use a table instead of a view, it works fine in the MEF implementation (under one second).
The code below and the view work flawlessly everywhere except when executed inside the MEF code.
Does MEF introduce any special limitations or problems when EF4 is used in an MVC web application? We can find a workaround for this problem, but I'd like to know whether this is a symptom of using MEF incorrectly.
// Query the view over a 30-day window; Count() is what actually executes it.
DateTime startDate = DateTime.Now.AddDays(-30);
DateTime endDate = DateTime.Now;
var _Items = dto.ClientContext.vwItemLists.Where(c => c.CreatedOn > startDate && c.CreatedOn <= endDate);
_context.TotalItemsMatched = _Items.Count();

The problem is likely occurring due to a bad cached execution plan. See this similar issue.
This answer has worked for me in the past, when I'm pretty sure some sort of one-time corruption was the culprit, but note the comment that this is a short-term fix and in many cases the issue might crop up again if you don't resolve the root of the problem.
He refers to this article, which is unfortunately pretty heavy reading, to help you try to identify why a bad execution plan was cached in the first place.
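If you just need to get unblocked, one short-term mitigation is to flush the cached plans so SQL Server compiles a fresh one on the next run. This is only a sketch: it assumes dto.ClientContext is an EF4 ObjectContext as in the code above, that you have the necessary database rights, and the table name is illustrative.
// Server-wide: evicts all cached plans, so treat this as a diagnostic step, not a fix.
dto.ClientContext.ExecuteStoreCommand("DBCC FREEPROCCACHE");

// Gentler alternative: refresh statistics on the tables behind the view
// so a better plan is chosen next time (table name is illustrative).
dto.ClientContext.ExecuteStoreCommand("UPDATE STATISTICS dbo.ItemList");
Either way, the root cause still needs to be identified, as described in the article above.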

Related

How to do Parallel operations with Thread Scope in Web app using Ninject

I've run into the same question repeatedly whenever using a new DI framework... how do you run a massively parallel operation, kicked off from an HttpRequest, where each thread needs its own unique copy of the dependencies? In my case, I'm using Ninject.
The specific case I always run into is a CPU-intensive report, using Parallel.ForEach, that needs an Entity Framework DbContext; the EF context must be unique to the thread, but outside of these special reports the EF context must be InRequestScope.
How do you achieve this with Ninject? Preferably, the EF context should be disposed with each task in the Parallel.ForEach, since the data loaded into the context would otherwise just sit there and consume memory.
Note that this report is big enough to warrant Parallel.ForEach but small enough that it can run synchronously on a web request and not time out the browser (<60 seconds). Maybe I'm weird, but I run into this need a lot.
The solution has several moving parts that, IMO, aren't terribly well documented in Ninject. The upside is that after implementing something like this, you should start feeling comfortable with Ninject in a hurry!
First, you need to change the scope for your objects so they use the HttpContext if it exists, and if not, use the current thread as a fallback. There is no documentation for this, but there is a DefaultScopeCallback that was added to the settings a while back. Set that property to your own scope callback which uses the same code in the Ninject.Web.Common source to get the HttpContext, but then use "?? Thread.CurrentThread" as the fallback. Do that in the CreateKernel code that should have been created automatically when you installed the NuGet package.
(I have substituted the StandardScopeCallbacks.Thread(ctx) where I used to have Thread.CurrentThread, since the former could conceivably change at some point. Currently those two are identical in what they do.)
private static IKernel CreateKernel()
{
    var settings = new NinjectSettings { DefaultScopeCallback = DefaultScopeCallback };
    var kernel = new StandardKernel(settings);
    // The rest of the default implementation of CreateKernel is left out for brevity.
    return kernel;
}

private static Object DefaultScopeCallback(Ninject.Activation.IContext ctx)
{
    // Use the request scope when an HttpContext exists...
    var scope = ctx.Kernel.Components.GetAll<INinjectHttpApplicationPlugin>()
        .Select(c => c.GetRequestScope(ctx)).FirstOrDefault(s => s != null);

    // ...otherwise fall back to the current thread.
    return scope ?? Ninject.Infrastructure.StandardScopeCallbacks.Thread(ctx);
}
Also, don't forget that the Kernel needs to be set aside as a static object for access later. You don't want to new-up a new Kernel every time you need it; I make mine accessible via "MyConfig.ObjectFactory". While this is a code smell of the service locator anti-pattern, we're going to great lengths here to avoid the anti-pattern as much as possible.
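For reference, a minimal sketch of that static accessor (hypothetical, but matching the MyConfig.ObjectFactory usage in the code further down; assign the kernel once from CreateKernel at application start):
public static class MyConfig
{
    // Set once from CreateKernel() during application start,
    // then used for thread-scoped resolution later.
    public static IKernel ObjectFactory { get; set; }
}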
Second, according to the commit description, the DefaultScopeCallback only affects explicit bindings with no explicit scope. So if, like me, you were depending on a bunch of implicit bindings that you hadn't added, you now need to configure them:
kernel.Bind(i => i.From(Assembly.GetExecutingAssembly(), Assembly.GetAssembly(typeof(Bll.MyConfig)))
    .SelectAllClasses()
    .BindToSelf());
If you don't like doing the above, there's another way of setting the default scope for all implicit bindings that is arguably more elegant: Changing default object scope with Ninject 2.2
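For completeness, here's a sketch of what applying the scope explicitly through the conventions API might look like (this assumes Ninject.Extensions.Conventions, which exposes a Configure call on the convention builder; treat it as a sketch, not a verified recipe):
kernel.Bind(i => i.From(Assembly.GetExecutingAssembly())
    .SelectAllClasses()
    .BindToSelf()
    // Apply the hybrid request-or-thread scope explicitly rather than
    // relying on DefaultScopeCallback.
    .Configure(b => b.InScope(DefaultScopeCallback)));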
Third, if you'd like to clear all cached objects from the scope at the end of each Parallel operation, so that memory usage doesn't skyrocket due to EF caching or whatnot, here's how to clear the Ninject cache scoped to the current thread:
Parallel.ForEach(myList, i =>
{
    // Each worker thread resolves its own context via the thread-scope fallback.
    var threadDb = MyConfig.ObjectFactory.Get<MyContext>();
    CreateModelsForItem(i, threadDb);

    // Evict everything Ninject cached for this thread so the context can be collected.
    MyConfig.ObjectFactory.Components.Get<Ninject.Activation.Caching.ICache>().Clear(Thread.CurrentThread);
});
Note that I did some testing without that Clear line at the end, and it seemed like the EF context was getting re-used even after the HttpRequest finished and I generated the report several more times. This was not what I wanted, so the Clear operation was important. Really, the behavior I want is closer to InCallScope, but trying to get InRequestScope with InCallScope as a fallback is a can of worms I'll open another day.

Implementing unit of work using DI and without EF

I have an ASP.NET MVC application with Castle Windsor working as the DI container and interceptor.
Currently Dapper is used for ORM mapping; Dapper is a simple object-to-entity mapping provider that works with common ADO.NET objects such as connections.
We already have repository and business service layers in place; now I want to implement a unit of work.
I know EF has automatic transaction capability, and SaveChanges() persists all entity changes.
Is there any way to have a unit of work in a non-EF, multilayer application?
I think you are looking for some sort of AOP.
Look at this example http://www.sharpcrafters.com/solutions/transaction.
You can wrap your method in a transaction using the attribute, or even better, you can declare that it applies to all methods in your BAL like this:
[assembly: TransactionScope(AttributeTargetTypes = "MyBALNamespace.*",
    AttributeTargetTypeAttributes = MulticastAttributes.Public,
    AttributeTargetMemberAttributes = MulticastAttributes.Public)]
as stated here: http://www.sharpcrafters.com/solutions/logging
Update
When you are using TransactionScope, try to avoid unnecessary escalation of the transaction to MSDTC. If you have two connections open (even to the same database) during the TransactionScope, it immediately escalates to DTC, which has an impact on performance (and can sometimes lead to unwanted exceptions on scope disposal); this is the behavior of SQL Server 2008. You should use only one connection during the scope, or close the first before opening another.
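To answer the original question more directly: without EF, a unit of work can be hand-rolled over ADO.NET, which works fine with Dapper. A minimal sketch (all names here are illustrative; note it deliberately uses a single connection, which also sidesteps the DTC escalation described above):
using System;
using System.Data;
using System.Data.SqlClient;

public class UnitOfWork : IDisposable
{
    private readonly IDbConnection _connection;
    private IDbTransaction _transaction;

    public UnitOfWork(string connectionString)
    {
        _connection = new SqlConnection(connectionString);
        _connection.Open();
        _transaction = _connection.BeginTransaction();
    }

    // Repositories run their Dapper queries against this transaction.
    public IDbTransaction Transaction
    {
        get { return _transaction; }
    }

    public void Commit()
    {
        _transaction.Commit();
        _transaction = null;
    }

    public void Dispose()
    {
        // Roll back implicitly if Commit() was never called.
        if (_transaction != null)
        {
            _transaction.Rollback();
            _transaction = null;
        }
        _connection.Dispose();
    }
}
Registering it per web request in Windsor (e.g. with the PerWebRequest lifestyle) gives each request one transaction that the repositories share.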

How can I track the time it takes a block of my MVC3 code to execute?

My MVC3 application is not running as fast as I hoped. I'd like to find out if some of the database lookups are taking too long. Is there a way I can time part of a block of code within a controller? For example:
var a = the_time_now;
...
...
var b = the_time_now;
// log the code area name and the elapsed time (b - a)
I am fairly new to MVC and would appreciate any advice. Note that I currently have no users on my system, so if I were logging the time it would not load things down too much :-)
Use the Stopwatch class to track code execution time. It is not tightly bound to MVC; it is used generally across the .NET Framework.
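For example (a minimal sketch; the repository call and the logging target are just placeholders for your own code):
using System.Diagnostics;

var sw = Stopwatch.StartNew();

var customers = _repository.GetCustomers(); // the block being timed (illustrative)

sw.Stop();
// Swap in whatever logging you use; Trace is built into the framework.
Trace.WriteLine(string.Format("GetCustomers: {0} ms", sw.ElapsedMilliseconds));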

Code analysis in ASP.NET web application

I am using VS 2010; so far I have been comfortable running code analysis on class libraries.
But for a web application, UI control names with prefixes like ddl, pnl, etc. are causing code analysis warnings such as "Correct the spelling...". I googled and think this can be addressed using rule sets, but didn't find a way to suppress these. Any pointers?
You could add them to a custom dictionary.
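If you'd rather suppress individual occurrences in code instead of (or in addition to) maintaining a custom dictionary, the SuppressMessage attribute works too. A sketch (the category/rule strings should match what the warning actually reports, and the class and field here are purely illustrative):
using System.Diagnostics.CodeAnalysis;
using System.Web.UI.WebControls;

public partial class CustomerPage : System.Web.UI.Page
{
    // Suppress the spelling rule for the "ddl" token on this field only.
    [SuppressMessage("Microsoft.Naming",
        "CA1704:IdentifiersShouldBeSpelledCorrectly", MessageId = "ddl")]
    protected DropDownList ddlCountry;
}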
Why are you using those prefixes?
The most common reason I've seen people give for this is "Hungarian Notation." However, as I've tried to point out in a number of jobs over the years, "you're doing it wrong." Within an IDE like VS 2010, is there any real reason to prefix every DropDownList instance with "ddl"? Both you and the IDE know it's a DropDownList. There's no question or confusion about that.
The idea behind Hungarian Notation isn't to "prefix every variable with a shorthand of its class" but rather to "prefix every variable with what it is." What something is doesn't have to mean its class or type; it's more an idea of what that object intuitively represents. Sure, it's a DropDownList. But what does that DropDownList mean? Is it part of a particular grouping of elements in the UI? That grouping's designation would provide a lot more information than "ddl" ever would.
As an example, say I have an application with various connection strings. In a given method in my DAL, I store one of them in a variable called strConnection. Well, that adheres to the notation, but it's not telling me anything important. I know it's a string. (I can see its declaration, I can mouse over it in the IDE, etc.) And I know it's a connection string based on its usage. But which one is it? What part of the business does it serve? If instead it were called connstringHR, I could immediately infer that it's the connection string for the HR database.

How do programmers practice code reuse

I've been a bad programmer because I've been copying and pasting. An example: every time I connect to a database and retrieve a recordset, I copy the previous code and edit it, then copy the code that sets up the DataGridView and edit it. I am aware of the phrase code reuse, but I have not actually used it. How can I utilize code reuse so that I don't have to copy and paste the database code and the DataGridView code?
The essence of code reuse is to take a common operation and parameterize it so it can accept a variety of inputs.
Take humble printf, for example. Imagine if you did not have printf, and only had write, or something similar:
//convert theInt to a string and write it out.
char c[24];
itoa(theInt, c, 10);
puts(c);
Now this sucks to have to write every time, and is actually kind of buggy. So some smart programmer decided he was tired of this and wrote a better function that, in one fell swoop, prints stuff to stdout.
printf("%d", theInt);
You don't need to get as fancy as printf with its variadic arguments and format string. Even just a simple routine such as:
void print_int(int theInt)
{
char c[24];
itoa(theInt, c, 10);
puts(c);
}
would do the trick nicely. This way, if you want to change print_int to always print to stderr, you could update it to:
void print_int(int theInt)
{
fprintf(stderr, "%d", theInt);
}
and all your integers would now magically be printed to standard error.
You could even then bundle that function and others you write up into a library, which is just a collection of code you can load in to your program.
Following the practice of code reuse is why you even have a database to connect to: someone created some code to store records on disk, reworked it until it was usable by others, and decided to call it a database.
Libraries do not magically appear. They are created by programmers to make their lives easier and to allow them to work faster.
Put the code into a routine and call the routine whenever you want that code to be executed.
Check out Martin Fowler's book on refactoring, or some of the numerous refactoring related internet resources (also on stackoverflow), to find out how you could improve code that has smells of duplication.
First, create a library with reusable functions. They can be linked with different applications; this saves a lot of time and encourages reuse.
Also be sure the library is unit tested and documented, so it is easy to find the right class/function/variable/constant.
A good rule of thumb: if you use the same piece of code three times, and it's obviously possible to generalize it, then make it a procedure/function/library.
However, as I am getting older, and also more experienced as a professional developer, I am more inclined to see code reuse as not always the best idea, for two reasons:
It's difficult to anticipate future needs, so it's very hard to define APIs such that you would really use them next time. It can cost you twice as much time: first you make the code more general, and then the second time around you end up rewriting it anyway. It seems to me that Java projects in particular have been prone to this of late; they seem to be perpetually rewritten in the framework du jour, just to be "easier to integrate" or whatever in the future.
In a larger organization (I am a member of one), if you have to rely on some external team (either in-house or 3rd-party), you can have a problem. Your future then depends on their funding and their resources, so it can be a big burden to use foreign code or libraries. In a similar fashion, if you share a piece of code with some other team, they can then expect that you will maintain it.
Note, however, that these are more like business reasons, so in open source it's almost invariably a good thing to be reusable.
To get code reuse you need to become a master of...
Giving things names that capture their essence. This is really, really important.
Making sure that it only does one thing. This really comes back to the first point: if you can't name it by its essence, it's often doing too much.
Locating the thing somewhere logical. Again, this comes back to being able to name things well and capturing their essence...
Grouping it with things that build on a central concept. Same as above, but said differently :-)
The first thing to note is that by using copy-and-paste, you are reusing code, albeit not in the most efficient way.
You have recognised a situation where you have come up with a solution previously.
There are two main scopes that you need to be aware of when thinking about code reuse. Firstly, code reuse within a project and, secondly, code reuse between projects.
The fact that you have a piece of code that you can copy and paste within a project should be a cue that the piece of code that you're looking at is useful elsewhere. That is the time to make it into a function, and make it available within the project.
Ideally you should replace all occurrences of that code with your new function, so that it (a) reduces redundant code and (b) ensures that any bugs in that chunk of code only need to be fixed in one function instead of many.
The second scope, code reuse across projects, requires some more organisation to get the maximum benefit. This issue has been addressed in a couple of other SO questions, e.g. here and here.
A good start is to organise code that is likely to be reused across projects into source files that are as self-contained as possible. Minimise the amount of supporting, project-specific code that is required, as this will make it easier to reuse entire files in a new project. This means minimising the use of project-specific data types, project-specific global variables, etc.
This may mean creating utility files that contain functions that you know are going to be useful in your environment, e.g. common database functions if you often develop projects that depend on databases.
I think the best way to address your problem is to create a separate assembly for your important functions. That way you can create extension methods or modify the helper assembly itself. Think of this function:
ExportToExcel(List data, string filename)
This method can be used for your future Excel export functions, so why not store it in your own helper assembly? That way you just add a reference to these assemblies.
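A sketch of what that might look like as an extension method in its own assembly (the CSV-style body is just a placeholder for whatever Excel generation you actually use, and all names are illustrative):
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

public static class ExportExtensions
{
    public static void ExportToExcel<T>(this IEnumerable<T> data, string filename)
    {
        var props = typeof(T).GetProperties();
        using (var writer = new StreamWriter(filename))
        {
            // Header row from the property names.
            writer.WriteLine(string.Join(",", props.Select(p => p.Name).ToArray()));

            // One row per item; stand-in for real Excel output.
            foreach (var item in data)
                writer.WriteLine(string.Join(",",
                    props.Select(p => Convert.ToString(p.GetValue(item, null))).ToArray()));
        }
    }
}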
The answer can change depending on the size of the project.
For a smaller project I would recommend setting up a DatabaseHelper class that does all your DB access. It would just be a wrapper around opening/closing connections and execution of the DB code. Then at a higher level you can just write the DBCommands that will be executed.
A similar technique could be used for a larger project, but it would need some additional work: interfaces need to be added, along with DI, as well as abstracting out what you need to know about the database.
You might also try looking into ORM, DAAB, or over to the Patterns and Practices Group
As for how to prevent the ole C&P: as you write your code, you need to periodically review it. If you have similar blocks of code that vary only by a parameter or two, they are always good candidates for refactoring into their own method.
Now for my pseudocode example:
Function GetCustomer(ID) As Customer
    Dim CMD As New DBCmd("SQL or Stored Proc")
    CMD.Parameters.Add("CustID", DBType, Length).Value = ID
    Dim DHelper As New DatabaseHelper
    Dim DR = DHelper.GetDataReader(CMD)
    Dim RtnCust As New Customer(DR)
    Return RtnCust
End Function

Class DatabaseHelper
    Public Function GetDataTable(cmd) As DataTable
        ' Write the DB access code here:
        ' get the connection string, open the connection,
        ' do the DB operation, close the connection.
    End Function
    Public Function GetDataReader(cmd) As DataReader
    Public Function GetDataSet(cmd) As DataSet
    ' ... and so on ...
End Class
For the example you give, the appropriate solution is to write a function that takes as parameters whatever it is that you edit whenever you paste the block, then call that function with the appropriate data as parameters.
Try and get into the habit of using other people's functions and libraries.
You'll usually find that your particular problem has a well-tested, elegant solution.
Even if the solutions you find aren't a perfect fit, you'll probably gain a lot of insight into the problem by seeing how other people have tackled it.
I'll do this at two levels. First, within a class or namespace, put the piece of code that is reused in that scope into a separate method and make sure it is being called.
Second is something similar to the case that you are describing. That is a good candidate to be put in a library or a helper/utility class that can be reused more broadly.
It is important to evaluate everything you are doing from the perspective of whether it can be made available to others for reuse. This should be a fundamental approach to programming, yet one most of us don't realize.
Note that anything that is to be reused needs to be documented in more detail: its naming convention should be distinct, and all the parameters, return values, and any constraints/limitations/prerequisites should be clearly documented (in code or help files).
It depends somewhat on what programming language you're using. In most languages you can
Write a function, parameterize it to allow variations
Write a function object, with members to hold the varying data (see the sketch after this list)
Develop a hierarchy of (function object?) classes that implement even more complicated variations
In C++ you could also develop templates to generate the various functions or classes at compile time
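To make the first two options above concrete, here's a small C# sketch (the names are purely illustrative):
// A parameterized function: the varying value is an argument.
public static class ReuseExamples
{
    public static decimal ApplyRate(decimal amount, decimal rate)
    {
        return amount * rate;
    }
}

// A function object: the varying data lives in a member,
// set once in the constructor and reused across calls.
public class RateCalculator
{
    private readonly decimal _rate;

    public RateCalculator(decimal rate) { _rate = rate; }

    public decimal Apply(decimal amount) { return amount * _rate; }
}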
Easy: whenever you catch yourself copy-pasting code, factor it out into a new function immediately (i.e., don't wait until you've already copied and pasted it several times).
