Entity Framework context - entity-framework-4

I have an application using Entity Framework Code First. My setup is that I have a core service which all other services inherit from. The core service contains the following code:
public static DatabaseContext db = new DatabaseContext();

public CoreService()
{
    db.Database.Initialize(force: false);
}
Then another class will inherit from CoreService, and when it needs to query the database it will just run some code such as:
db.Products.Where(blah => blah.IsEnabled);
However, I seem to be getting conflicting stories as to which is best.
Some people advise NOT to do what I'm doing.
Other people say that you should define the context for each class (rather than use a base class to instantiate it).
Others say that for EVERY database call, I should wrap it in a using block. I've never seen this in any of the examples from Microsoft.
Can anyone clarify?
I'm currently at a point where refactoring is possible and quite quick, so I'd like some general advice if possible.

You should use one context per web request. Hold it open for as long as you need it, then get rid of it when you are finished. That's what the using block is for.
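For example, a per-request pattern in a service method might look like this (a minimal sketch; DatabaseContext and the Products query come from the question, the method name is made up):
public IList<Product> GetEnabledProducts()
{
    // One context per request/operation: create it, use it, dispose it.
    using (var db = new DatabaseContext())
    {
        // ToList() materialises the results once, so the query only
        // executes against the database a single time.
        return db.Products
                 .Where(p => p.IsEnabled)
                 .ToList();
    }
}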
Do NOT wrap up your context in a Singleton. That is not a good idea.
If you are working with clients like WinForms then I think you would scope the context to each form, but that's not my area.
Also, make sure you know when you are actually going to execute against your data source, so you don't end up enumerating the query multiple times when once would be enough to work with the results.
Lastly, you have in fact seen this practice from MS: a lot of the ADO.NET types support being wrapped in a using, but hardly anyone realises it.

I suggest using the design principle "prefer composition over inheritance".
You can hold a reference to the database context in your base class.
Implement a singleton for getting the DataContext and assign the DataContext to this reference.
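A rough sketch of what this answer describes (the provider and member names are illustrative assumptions):
// Singleton responsible for handing out the DataContext.
public static class DataContextProvider
{
    private static readonly DatabaseContext _context = new DatabaseContext();

    public static DatabaseContext Current
    {
        get { return _context; }
    }
}

public class CoreService
{
    // Composition: the service holds a reference rather than owning a static context.
    protected DatabaseContext Db { get; private set; }

    public CoreService()
    {
        Db = DataContextProvider.Current;
    }
}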

The conflicts you get are not related to sharing the context between classes but are caused by the static declaration of your context. If you make the context an instance field of your service class, so that every service instance gets its own context, there should be no issues.
The using pattern you mention is not strictly required; instead, you should make sure that context.Dispose() is called when the service is disposed.
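A minimal sketch of that arrangement (the service and entity names besides DatabaseContext are assumptions):
public class CoreService : IDisposable
{
    // Instance field: every service instance gets its own context.
    protected readonly DatabaseContext Db = new DatabaseContext();

    public void Dispose()
    {
        // Dispose the context together with the service.
        Db.Dispose();
    }
}

public class ProductService : CoreService
{
    public IList<Product> GetEnabledProducts()
    {
        return Db.Products.Where(p => p.IsEnabled).ToList();
    }
}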

Related

Getting Injectee from Jersey vs HK2

Short story: I want to get the Injectee from within a Supplier that is bound using Jersey's AbstractBinder.bindFactory(Class) method.
Basically to work around JERSEY-3675
Long story: I was using the org.glassfish.hk2.api.Factory method to create an instance of my RequestScoped Object and life was good.
I moved my registration into a Feature and then life was not good because of JERSEY-3675.
Long story short, org.glassfish.hk2.utilities.binding.AbstractBinder does not work inside Features. No problem, I thought, I will use org.glassfish.jersey.internal.inject.AbstractBinder.
Slight problem I encountered: Jersey's AbstractBinder.bindFactory() method takes in Supplier not Factory. No problem, I thought, I will use Supplier (I like it better, anyway).
Bigger problem I encountered: I had been using org.glassfish.hk2.api.Injectee to get the InstantiationData about who was calling the injection. This does not get injected if I do not use HK2's Factory. The javadoc even says the method is 'indeterminate' if not called from Factory.provide().
Even though there is a Jersey Injectee (org.glassfish.jersey.internal.inject.Injectee), this seems to only be available when using Jersey's InjectionResolver. I do not want to use InjectionResolver because
HK2's InjectionResolver must be Singleton but I want to have RequestScoped stuff in my injected object.
On second read, though, Jersey's InjectionResolver does not say anything about needing to be Singleton. Can anyone confirm?
I do not want to create my own annotation for this (I have created my own annotations and InjectionResolvers for other cases)
I do not feel confident overriding @Inject with an InjectionResolver. Not sure what that means or how I would be able to register multiple of those and have them work together (one for each Feature).
In the meantime, I am using the workaround mentioned in the bug.
I am new to DI scene, so if something (or all of it) does not make sense, please let me know.

Best approach for dependency injection of data connections in singletons

I have some "caching" objects in my application, that get a IRepository (custom repository pattern contract) by dependency injection (Ninject). Those objects only uses the repository once, but they have a Refresh function that forces the owner to refresh itself. They are singletons, are created only once, and a ManualResetEvent ensures that all requests are blocked till it is loaded.
The IRepositories are EF CodeFirst based, so is it OK just to simply ensure the connection is closed and keep the reference to the DbContext there forever?
I have disabled the proxies and the lazy loading, so... is OK to have long references from the root of the caching object to hundreds of these cached POCO entities?
Cheers.
We refer to comments from Julie Lerman:
http://msdn.microsoft.com/en-us/magazine/ee532098.aspx?sdmr=JulieLerman
Although she writes about second-level caching in the Entity Framework and AppFabric, the recommendation is to have several/many smaller contexts and, in web scenarios, to create a new context for each call.
Over time a long-lived context would come to contain many objects, and its performance would decline accordingly.
I think this site has some good tips on EF performance, e.g. generated views:
http://msdn.microsoft.com/en-us/data/hh949853
My personal recommendation, which I can't claim is best practice but comes from someone concerned about performance, is that a small bounded context per call is a solid long-term compromise.
Use generated views to keep the initial load time as small as possible.
You could potentially manage a permanent DbContext in such a way as to drop unused objects from the context, or use a caching library with events to do so. Not a small task.
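For what it's worth, a very rough sketch of the "drop unused objects" idea using the DbContext change tracker (the Product type and IsStillNeeded helper are placeholders made up for illustration):
// Detach entities that are no longer needed so a long-lived context
// does not keep growing its change tracker.
foreach (var entry in context.ChangeTracker.Entries<Product>()
                             .Where(e => !IsStillNeeded(e.Entity))
                             .ToList())
{
    entry.State = EntityState.Detached;
}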
I would be interested in the solution you finally select. Please post it.
Finally, the best solution I found was to create a new kind of wrapper:
public class Generator<T> where T : IDisposable
{
    private readonly Func<T> _generate;

    public Generator(Func<T> generate)
    {
        _generate = generate;
    }

    public T Generate()
    {
        return _generate();
    }
}
And create a binding more or less this way:
// Dependency Injection bindings declaration section
DI.Bind<Generator<IRepository>>()
    .To(() => new Generator<IRepository>(() => DI.Get<IRepository>()));
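Since the question mentions Ninject, the equivalent binding there might look roughly like this (a sketch; the kernel variable and the IRepository registration are assumed to exist elsewhere):
// Ninject flavour of the same idea: the Generator defers resolution
// of IRepository until Generate() is called.
kernel.Bind<Generator<IRepository>>()
      .ToMethod(ctx => new Generator<IRepository>(
          () => ctx.Kernel.Get<IRepository>()));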
Therefore, in long-lived objects that just need to create and destroy the element, I can ask for a Generator<IRepository> service rather than an IRepository. Every time I need to refresh, I just create a new IRepository, without knowing how it is built under the hood:
using (var repository = repositoryGenerator.Generate())
{
    repository.DoStuff();
}
It works like a charm so far.
Actually, I have added this feature to my DI framework. I can now bind IAnything and later request a Generator<IAnything>, and the framework will give me the fully ready object using this technique: How to create a Func<> delegate programmatically.
Cheers.

Object lifecycle management and IoC containers

I'm updating a game from single player to multiplayer. In this case the game was originally written with most classes being single instances, e.g. there was a single Player object, a single GameState object, etc. That is, each of these objects lived as long as the application.
Now that more than one player can play at once I obviously need to support creating more than one Player object, GameState object, etc. Over the course of working on this I have come to realize that most objects have one of three lifespans:
App's lifespan, e.g. a Conductor to handle navigation
Player's lifespan, e.g. the SettingsViewModel for the current player
Game's lifespan, e.g. the GameState for the current game
I'm curious how others deal with the creation of these different objects using an IoC container. I want to avoid creating factory classes for each class with a player or game lifespan.
Here is an example of IOC that may help. The project is called IOC-with-Ninject. It uses Ninject plus an IOC container class to manage all object life spans. You will need to do a little research on Ninject to customize it to your specific needs, but this is your IOC container solution (IMHO) if you are using .NET and will help you organize your code base. This is a personal choice, but I swear by it. If you are not using .NET it will still give you an easy pattern to follow. Cheers.
Many IoC containers have custom life-cycle scopes which you can manage as you wish. For example, in Ninject you can define your custom life-cycle scope as follows:
kernel.Bind<IService>().To<Service>().InScope(ctx => yourCustomScope);
As long as the yourCustomScope variable has not changed, a single instance of the Service object is returned each time the kernel receives a request for IService. As soon as the yourCustomScope variable changes, a new instance of Service will be created on the next request for IService. yourCustomScope can be the current player instance, the game object, or anything else whose reference change should define the lifetime of the Service object.
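For instance, a sketch of scoping a service to the current game (GameContext.CurrentGame and the service types are assumptions made up for illustration):
// One GameService instance per game: as long as GameContext.CurrentGame
// references the same object, Ninject reuses the same instance.
kernel.Bind<IGameService>()
      .To<GameService>()
      .InScope(ctx => GameContext.CurrentGame);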
However, the objects that you just mentioned are more likely to be entities rather than services for which I don't think injection is a good idea.
From my experience the factories approach works the best.
Controlling the lifespan of instances through the container is clunky to support and requires effort: knowledge of every class's lifespan requirements and dependencies, plus time for configuration and for managing that configuration. The use of factories, by contrast, is natural and specific to the code.
Creating factory implementations can be avoided by using proxy factories (see the sketch below). You can also have factories return generic arguments to further reduce the number of factory interfaces you need to create.
If too many factories are still required, I suggest reviewing the code flow.
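To illustrate the proxy-factory idea: with Ninject.Extensions.Factory the container can generate the factory implementation from an interface (a sketch; the Player type and the factory interface are assumptions):
// Requires Ninject.Extensions.Factory.
// No hand-written implementation: ToFactory() generates a proxy that
// resolves a new Player from the kernel on every CreatePlayer call.
public interface IPlayerFactory
{
    Player CreatePlayer(string name);
}

// In the composition root:
kernel.Bind<IPlayerFactory>().ToFactory();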
I think this is in part a rehash of some of the comments in the previous answers, but I have tried to expand a little on some of the reasoning and give an example.
Once you get into the business of managing the lifespans of injected objects, you should probably be creating factories for those objects.
The underlying problem is that the composition root cannot know the environmental context of the call that will eventually need to create the object.
I think I should take a step back and explain at this point.
Received wisdom on dependency injection is to have a composition root somewhere near the entry point of the code. There are many good reasons for this that are not difficult to find on the web, so I won't go into that here.
The composition root is where you map your interfaces (usually, but possibly concrete objects) to their implementations. You can pass information that is available at this point into the constructor. So you can pass in a reference to an object whose lifetime is current at the time the composition root executes.
However, if the lifetime of the composition root does not overlap with the lifetime of the object you want to create, you have to defer the execution of the constructor until the object needs to be created. This is why you need a factory. You can pass a factory method into your mapping at this point, and thus pass in the information needed to generate the object, but allow the creation to happen at the time it is required rather than when the composition root is executed.
You do not need a factory class to do this; factory methods are fine. Moreover, the factory method can be inlined, so the code overhead is not much more than if we were creating the objects in the composition root.
If we have a project with two services, where the first service depends on the second and we only want the lifetime of the second service to start when the first service actually uses it, we might have something like the following. (I am using Ninject to give a code example, but I expect that other IoC containers work similarly in this respect.)
public class Service1 : IService1
{
    private readonly Func<IService2> _service2Factory;

    public Service1(Func<IService2> service2Factory)
    {
        _service2Factory = service2Factory;
    }

    public void DoSomethingUsingService2()
    {
        // Service2 is only created here, when it is actually needed.
        var service2 = _service2Factory();
        service2.DoSomething();
    }
}

public class MainClass
{
    public void CompositionRoot()
    {
        var kernel = new StandardKernel();

        // The factory method is inlined: resolution of IService2 is
        // deferred until Service1 invokes the delegate.
        kernel.Bind<IService1>().ToMethod(ctx =>
            new Service1(() => ctx.Kernel.Get<IService2>()));
    }
}
This example does not address how you would manage the App, Player and Game lifespans, but hopefully it gives sufficient clues as to how to remove lifetime issues related to dependency injection.
Side note: using Ninject you would be able to change the scope of Service2 in order to extend its lifetime beyond the lifetime of Service1. For example, if you knew each instance of a game were to run on its own thread (OK, this may be somewhat unlikely), you might use InThreadScope for the game.

ASP.NET MVC and IoC - Chaining Injection

Please be gentle, I'm a newb to this IoC/MVC thing but I am trying. I understand the value of DI for testing purposes and how IoC resolves dependencies at run-time and have been through several examples that make sense for your standard CRUD operations...
I'm starting a new project and cannot come up with a clean way to accomplish user permissions. My website is mostly secured, with any pages that have functionality (except signup, FAQ, about us, etc.) behind a login. I have a custom identity that has several extra properties which control access to data... So....
Using Ninject, I've bound a concrete type to a method (Bind<MyIdentity>().ToMethod(c => MyIdentity.GetIdentity())) so that when I add MyIdentity to a constructor, it is injected based on the results of the method call.
That all works well. Is it appropriate to (from the GetIdentity() method) directly query the request cookies object (via FormsAuthentication)? In testing the controllers, I can pass in an identity, but the GetIdentity() method will be essentially untestable...
Also, in the GetIdentity() method, I will query the database. Should I manually create a concrete instance of a repository?
Or is there a better way all together?
I think you are reasonably on the right track, since you abstracted away database communication and ASP.NET dependencies from your unit tests. Don't worry that you can't test everything. There will always be lines of code in your application that are untestable; GetIdentity is a good example. Somewhere in your application you need to communicate with a framework-specific API, and that code cannot be covered by your unit tests.
There might still be room for improvement though. While an untested GetIdentity isn't a problem in itself, the fact that it is callable from anywhere in the application is. It just hangs there, waiting for someone to accidentally call it. So why not abstract the creation of identities? For instance, create an abstract factory that knows how to get the right identity for the current context. You can inject this factory instead of injecting the identity itself. This allows you to have an implementation defined near the application's composition root and out of reach of the rest of the application. Besides that, the code communicates more clearly what is happening. Nobody has to ask "which identity do I actually get?", because it will be clear from the method on the factory they call.
Here's an example:
public interface IIdentityProvider
{
    // Bit verbose, but veeeery clear;
    // pick another name if you like.
    MyIdentity GetIdentityForCurrentUser();
}
In your composition root you can have an implementation of this:
private sealed class AspNetIdentityProvider : IIdentityProvider
{
    public MyIdentity GetIdentityForCurrentUser()
    {
        // Here goes the code of the MyIdentity.GetIdentity() method.
    }
}
As a trick I sometimes have my test objects implement both the factory and the product, just for convenience during unit testing. For instance:
private sealed class FakeMyIdentity : MyIdentity, IIdentityProvider
{
    public MyIdentity GetIdentityForCurrentUser()
    {
        // Just return itself.
        return this;
    }
}
This way you can just inject a FakeMyIdentity in a constructor that expects an IIdentityProvider. I found out that this doesn’t sacrifice readability of the tests (which is important).
Of course you want to have as little code as possible in the AspNetIdentityProvider, because you can't test it (automatically). Also make sure that your MyIdentity class doesn't have any dependency on any framework specific parts. If so you need to abstract that as well.
I hope this makes sense.
There are two things I'd kinda do differently here...
I'd use a custom IPrincipal object with all the properties required for your authentication needs. Then I'd use that in conjunction with custom cookie creation and the AuthenticateRequest event to avoid database calls on every request.
If my IPrincipal / Identity was required inside another class, I'd pass it as a method parameter rather than have it as a dependency on the class itself.
When going down this route I use custom model binders so they are then parameters to my actions rather than magically appearing inside my action methods.
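For example, a minimal sketch of such a model binder in ASP.NET MVC (MyIdentity is from the question; the binder, its registration, and the action signature are assumptions about how this could be wired up):
public class MyIdentityModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext,
                            ModelBindingContext bindingContext)
    {
        // Pull the custom identity off the current request and hand it
        // to the action as an ordinary parameter.
        return controllerContext.HttpContext.User.Identity as MyIdentity;
    }
}

// In Global.asax Application_Start:
ModelBinders.Binders.Add(typeof(MyIdentity), new MyIdentityModelBinder());

// An action can then simply declare the parameter:
// public ActionResult Dashboard(MyIdentity identity) { ... }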
NOTE: This is just the way I've been doing things, so take with a grain of salt.
Sorry, this probably throws up more questions than answers. Feel free to ask more questions about my approach.

Access to Entity Manager in ASP .NET MVC

Greetings,
Trying to sort through the best way to provide access to my Entity Manager while keeping the context open through the request to permit late loading. I am seeing a lot of examples like the following:
public class SomeController
{
    MyEntities entities = new MyEntities();
}
The problem I see with this setup is that if you have a layer of business classes that you want to make calls into, you end up having to pass the manager as a parameter to these methods, like so:
public static Series GetEntity(MyEntities entityManager, int id)
{
    return entityManager.Series.FirstOrDefault(s => s.SeriesId == id);
}
Obviously I am looking for a good, thread-safe way to provide the entityManager to the method without passing it. The approach also needs to be unit testable; my previous attempts at putting it in Session did not work for unit tests.
I am actually looking for the recommended way of dealing with the Entity Framework in ASP .NET MVC for an enterprise level application.
Thanks in advance
Entity Framework v1.0 excels in Windows Forms applications where you can use the object context for as long as you like. In asp.net and mvc in particular it's a bit harder. My solution to this was to make the repositories or entity managers more like services that MVC could communicate with. I created a sort of generic all purpose base repository I could use whenever I felt like it and just stopped bothering too much about doing it right. I would try to avoid leaving the object context open for even a ms longer than is absolutely needed in a web application.
Have a look at EF4. I started using EF in a production environment when it was in beta 0.75 or something similar and had no real issues with it, except for it being "hard work" sometimes.
You might want to look at the Repository pattern (here's a write up of Repository with Linq to SQL).
The basic idea would be that instead of creating a static class, you instantiate a version of the Repository. You can pass your EntityManager into the class as a constructor parameter -- or better yet, pass a factory that can create the EntityManager for the class, so that it can instantiate the manager per unit of work.
For MVC I use a base controller class. In this class you could create your entity manager factory and make it a property of the class so deriving classes have access to it. Allow it to be injected from a constructor but created with the proper default if the instance passed in is null. Whenever a controller method needs to create a repository, it can use this instance to pass into the Repository so that it can create the manager required.
In this way, you get rid of the static methods and allow mock instances to be used in your unit tests. By passing in a factory -- which ought to create instances that implement interfaces, btw -- you decouple your repository from the actual manager class.
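A rough sketch of the shape this could take (all the names here are illustrative assumptions, not a prescribed API):
public abstract class BaseController : Controller
{
    // Factory is injectable for unit tests, with a sensible default otherwise.
    protected Func<MyEntities> EntityManagerFactory { get; private set; }

    // Parameterless constructor used by MVC at runtime.
    protected BaseController() : this(null) { }

    protected BaseController(Func<MyEntities> entityManagerFactory)
    {
        EntityManagerFactory = entityManagerFactory
            ?? new Func<MyEntities>(() => new MyEntities());
    }
}

public class SeriesRepository
{
    private readonly Func<MyEntities> _contextFactory;

    public SeriesRepository(Func<MyEntities> contextFactory)
    {
        _contextFactory = contextFactory;
    }

    public Series GetEntity(int id)
    {
        // Unit-of-work style: a fresh context per operation, disposed when done.
        using (var context = _contextFactory())
        {
            return context.Series.FirstOrDefault(s => s.SeriesId == id);
        }
    }
}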
Don't lazy load entities in the view. Don't make business layer calls in the view. Load all the entities the view will need up front in the controller, compute all the sums and averages the view will need up front in the controller, etc. After all, that's what the controller is for.
