I have a .net 4 web api with many controllers that all inherit from the same base controller. This base controller has a method that gets the database connection string and uses it to instantiate a new instance of DbContext. The DbContext is then passed to 10+ repositories which all use that instance of DBContext.
The base controller is doing the following:
public IRepository Repository { get; set; }

public BaseController(IRepository repository)
{
    this.Repository = repository;
}

protected override void Initialize(HttpControllerContext controllerContext)
{
    base.Initialize(controllerContext);
    this.Repository = IRepositoryFactory.GetRepository();
}

public static class IRepositoryFactory
{
    public static IRepository GetRepository()
    {
        // **gets database connectionString**
        var context = new DbContext(connectionString);

        // this passes context to all the 10+ repositories using the same instance
        return new IDataRepository(context);
    }
}
Is having the base controller instantiate a new DbContext on every API call causing the DbContext to never be reused between calls? In .NET Core you use the Startup class to tell DbContext which database to use, but I don't believe you can do that in .NET 4. I am afraid that these new DbContext instances aren't being reused like a singleton, so there is no caching going on, causing a lot of unnecessary database hits. Is there a better way to give DbContext its connection string and use it without creating new instances of DbContext in the BaseController, which gets hit on every call?
For web applications you should leverage an IoC container to manage dependencies and lifetime scoping. Most of these can be wired directly into MVC / Web API to instantiate controllers and resolve references to dependencies such as repositories, and their dependencies, including the DbContext or wrappers like Unit of Work implementations. By default the IoC container should scope dependency lifetimes to the request, so if you have 3 repositories spun up to serve a controller call, they all receive the same DbContext reference for that request.
If you haven't worked with an IoC container / Dependency Injection, I would recommend having a look at Autofac or Unity. Other common, though older, ones include Castle Windsor and Ninject. ASP.NET Core has its own DI extensions as well. I personally recommend Autofac: it is mature yet designed around modern paradigms such as generics and a fluent configuration API, and it has quite good documentation.
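As a rough sketch of that wiring with Autofac's Web API integration (the Autofac.Integration.WebApi package), where AppDbContext, DataRepository and connectionString are placeholders for your own types, not names from the question:

// Composition root, e.g. in WebApiConfig.Register(HttpConfiguration config).
var builder = new ContainerBuilder();

// Let Autofac construct the controllers.
builder.RegisterApiControllers(Assembly.GetExecutingAssembly());

// One DbContext per HTTP request, shared by every repository in that request.
builder.Register(c => new AppDbContext(connectionString))
       .InstancePerRequest();

builder.RegisterType<DataRepository>()
       .As<IRepository>()
       .InstancePerRequest();

var container = builder.Build();
config.DependencyResolver = new AutofacWebApiDependencyResolver(container);

With that in place, the Initialize override and the static factory disappear: the container hands each controller a repository that already shares the request's DbContext.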
DbContexts should be short-lived, but if you are using several repositories, sharing a single instance across all repository calls is almost always advisable so that entity references can be shared between the repositories and changes are saved as part of one operation (committing or rolling back all together, or not at all). Separate DbContext instances require complicated code to detach and attach entities in order to share them between repositories, plus explicit transaction handling to guard operations. Personally I use the repository pattern as a separation for testing in isolation, though I avoid the generic repository pattern. I use Mehdime's DbContextScope pattern for Unit of Work, which gives me more control over the unit of work than letting the DI container manage the DbContext lifetime.
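For reference, consuming code with DbContextScope looks roughly like this (a sketch based on the mehdime/DbContextScope library; AppDbContext, OrderService and the order lookup are illustrative, not from the question):

public class OrderService
{
    private readonly IDbContextScopeFactory _scopeFactory;

    public OrderService(IDbContextScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public void MarkAsShipped(int orderId)
    {
        using (var scope = _scopeFactory.Create())
        {
            // Repositories called inside the scope resolve the same ambient context.
            var context = scope.DbContexts.Get<AppDbContext>();
            var order = context.Orders.Find(orderId);
            order.Status = OrderStatus.Shipped;

            // One SaveChanges for the whole unit of work.
            scope.SaveChanges();
        }
    }
}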
Singleton DbContexts should be avoided in web applications, or any application really. Firstly, EF does not synchronize data with the database: cached entities easily become stale, so the longer a context lives, the more stale overwrites, or exceptions from row-version checks (if you use them), will occur. EF's caching/tracking features also increase memory usage and degrade performance, since EF checks its caches for object references and related dependencies on every query. This can lead to some odd behaviour when lazy loading is disabled, where EF populates references that happen to be cached, which might not reflect the complete set of data.
Related
Over time controllers develop a lot of dependencies, and creating an instance of controller becomes too expensive for each request (especially with DI). Is there any solution to make controllers singletons?
Creating an instance of a controller is a pretty fast and simple operation. What becomes too expensive is creating its dependencies for each request. So, what you really need is many controllers which share the same instances of their dependencies.
E.g. you have the following controller:
public class SalesController : Controller
{
    private readonly IProductRepository productRepository;
    private readonly IOrderRepository orderRepository;

    public SalesController(IProductRepository productRepository,
                           IOrderRepository orderRepository)
    {
        this.productRepository = productRepository;
        this.orderRepository = orderRepository;
    }

    // ...
}
You should configure your dependency injection framework to use the same repository instances for the whole application (keep in mind that you can then have synchronization problems). Now creating dependencies is not expensive any more: all dependencies are instantiated only once and reused for all requests.
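With the built-in ASP.NET Core container, for example, that is one singleton registration per repository (a sketch; ProductRepository and OrderRepository are placeholder implementations):

public void ConfigureServices(IServiceCollection services)
{
    // The same instance is served to every controller for the lifetime of the app.
    services.AddSingleton<IProductRepository, ProductRepository>();
    services.AddSingleton<IOrderRepository, OrderRepository>();
    services.AddMvc();
}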
If you have many dependencies and you are worried about the cost of obtaining a reference to each dependency and handing those references to the controller instance (which I don't think will be very expensive), then you can group your dependencies (something like the Introduce Parameter Object refactoring):
public class SalesController : Controller
{
    private readonly ISalesService salesService;

    public SalesController(ISalesService salesService)
    {
        this.salesService = salesService;
    }

    // ...
}

public class SalesService : ISalesService
{
    private readonly IProductRepository productRepository;
    private readonly IOrderRepository orderRepository;

    public SalesService(IProductRepository productRepository,
                        IOrderRepository orderRepository)
    {
        this.productRepository = productRepository;
        this.orderRepository = orderRepository;
    }

    // ...
}
Now you have a single dependency, which will be injected very quickly. If you configure your dependency injection framework to use a singleton SalesService, all SalesControllers will reuse the same service instance. Creation of controllers and provision of dependencies will be very fast.
So first an answer to the original question:
public void ConfigureServices(IServiceCollection services)
{
    // put other services bindings here

    // bind all Controller classes as singletons
    services.AddSingleton<HomeController, HomeController>();

    // tell framework to obtain Controller instances from ServiceProvider
    services.AddMvc().AddControllersAsServices();
}
As stated in the original question, if controllers have big dependency trees consisting mainly of request-scoped or transient dependencies, then creating them separately for each request may have some footprint on the scalability of your application (in Java, for example, Servlet instances are singletons by default for exactly this reason). While the CPU and wall-clock time needed to create even a big dependency tree is usually negligible (unless you have heavy computations or network communication in the constructors of your components, which is almost never a good idea for transient or request-scoped components), the memory usage footprint is something to reckon with. In a typical DB-backed web app, memory is the main factor limiting the number of concurrent requests a single machine/node can handle. If every request holds a separate copy of a big dependency tree, together they may consume a significant amount of memory (the other thing to watch for is the initial stack size for a new thread, by the way).
The accepted answer #1220560 solves the problem as well, but I would consider it an ugly hack with some drawbacks: you need to create an artificial singleton service that your controllers use either as a service locator or as a proxy for other services. If you have just one such singleton for all your controllers, then you are effectively hiding the real dependencies of each controller: for example, anyone writing a unit test for a controller has to analyse its implementation carefully to see which dependencies it actually uses, in order to know which mocks/fakes to provide in the test setup. If you later change the controller so that the subset of services it uses changes too, it is very easy to forget to update the test setup as well, which can lead to bugs that are hard to track down. By contrast, if dependencies are declared explicitly as constructor parameters, you get a compiler error in the test setup right away. You could instead have a separate singleton proxy/service locator per controller, but then it's a lot of hassle, basically.
Regardless of whether you use the solution proposed by me or the one from answer #1220560, you must be careful when injecting request-scoped dependencies into singleton objects, as described in https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection#registering-your-own-services right at the end of the "registering-your-own-services" section. You can find possible solutions to this problem here: how to use scoped dependency in a singleton in C# / ASP.
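The usual approach is to create a scope by hand inside the singleton (a sketch using IServiceScopeFactory from Microsoft.Extensions.DependencyInjection; SingletonWorker and AppDbContext are placeholders for your own singleton and any request-scoped dependency):

public class SingletonWorker
{
    private readonly IServiceScopeFactory _scopeFactory;

    public SingletonWorker(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public void DoWork()
    {
        using (var scope = _scopeFactory.CreateScope())
        {
            // Resolved fresh per call and disposed together with the scope.
            var db = scope.ServiceProvider.GetRequiredService<AppDbContext>();
            // ... use db ...
        }
    }
}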
Another thing to watch for is concurrency: singleton objects may be accessed concurrently by several threads handling different requests, so make sure to add proper synchronization to any non-thread-safe resources your singleton uses.
edit:
I've just realized the original question was about ASP.NET and this answer is for ASP.NET Core, so it probably won't work for "non-Core".
We have a project written in ASP.NET MVC and we use Ninject to inject the repositories into the controllers. Currently we are using properties and the Inject attribute to inject the repositories, which works well enough:
[Inject]
public IMyRepository MyRepos { get; set; }
An alternative way of injecting would be to do it "manually" using the Ninject service locator:
var myRepos = NInjectServiceLocatorInstance.Resolve<IMyRepository>();
Now I was wondering about the following: the first method requires all repositories to be listed at the top of a controller (not necessarily at the top, of course, but it's the most logical place). Whenever a request is made, Ninject instantiates each and every repository, regardless of whether all of them are actually needed by the specific action.
With the second method you can control more precisely which repositories are actually necessary, which might save some overhead when the controller is created. But you probably also end up with code that retrieves the same repository in multiple places.
So which one is better? Is it better to just have a bunch of repository properties, or to resolve the repositories an action actually needs when and where you need them? Is there a performance penalty involved in injecting "useless" repositories? Are there (even ;-) better solutions out there?
I prefer constructor injection:
private readonly IMyRepository _repository;

public MyController(IMyRepository repository)
{
    _repository = repository;
}
All your dependencies are listed in one place
Your controller does not need to know anything about Ninject
You can unit-test your controller without Ninject's involvement by stubbing interfaces straight into the constructor
The controller has cleaner code
Ninject, or any other DI framework, does the work behind the scenes and leaves you concentrating on the actual problem, not on DI.
Constructor Injection should be your default choice when using DI.
You should ask yourself whether the controller really depends on that specific class to work at all.
Method injection could also be a solution for specific scenarios, if only certain methods need the dependency.
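A sketch of method injection, where the dependency is a parameter of the method that needs it rather than a field of the class (OrderProcessor, Order and INotificationService are hypothetical names for illustration):

public class OrderProcessor
{
    public void Process(Order order, INotificationService notifier)
    {
        // ... process the order ...

        // The dependency is supplied per call instead of being stored in a field.
        notifier.NotifyProcessed(order);
    }
}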
I've never used Property Injection, but Mark Seemann describes it in his book (Dependency Injection in .NET):
PROPERTY INJECTION should only be used when the class you're developing has a good LOCAL DEFAULT and you still want to enable callers to provide different implementations of the class's DEPENDENCY.
PROPERTY INJECTION is best used when the DEPENDENCY is optional.
NOTE There's some controversy around the issue of whether PROPERTY INJECTION indicates an optional DEPENDENCY. As a general API design principle, I consider properties to be optional because you can easily forget to assign them and the compiler doesn't complain. If you accept this principle in the general case, you must also accept it in the special case of DI.
A local default is described as:
A default implementation of an ABSTRACTION that's defined in the same assembly as the consumer.
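For illustration, a property with a local default might look like this (a sketch; ILogger and ConsoleLogger are hypothetical, with ConsoleLogger defined in the consumer's own assembly):

public class Reporter
{
    // LOCAL DEFAULT: the class works out of the box...
    private ILogger _logger = new ConsoleLogger();

    // ...but PROPERTY INJECTION lets callers swap in another implementation.
    public ILogger Logger
    {
        get { return _logger; }
        set
        {
            if (value == null) throw new ArgumentNullException("value");
            _logger = value;
        }
    }
}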
Unless you're building an API, I would suggest not using Property Injection.
Whenever a request is made, Ninject instantiates each and every repository. This happens regardless of whether all of the repositories are actually needed inside a specific action.
I don't think you should worry too much about performance when using constructor injection.
By far my favorite method is:
public class MyController : Controller
{
    public IMyRepository MyRepos { get; set; }

    public MyController(IMyRepository repo)
    {
        MyRepos = repo;
    }
}
You can then use a NuGet package such as Ninject.MVC3 (or MVC4), which has specific support for hooking the Ninject kernel into MVC's own IoC classes:
https://github.com/ninject/ninject.web.mvc/wiki/MVC3
Once the Ninject hooks are in, you can let it do the work of injecting instances into the controller's constructor, which I think is a lot cleaner.
EDIT:
Ahh, OK. Having read your question a bit more thoroughly, I see where you're going with this. In short, if you want to pick and choose which repository classes are instantiated then you will need to resolve them manually, for example:
var myRepos = NInjectServiceLocatorInstance.Resolve<IMyRepository>();
You cannot configure Ninject (or any other IoC container, AFAIK) to selectively create object instances based on the currently executing method. That level of granularity is a real edge case, I feel; it might be solvable by writing your own controller factory class, but that would be overkill.
I'm building a relatively simple webapp in ASP.NET MVC 4, using Entity Framework to talk to MS SQL Server. There's lots of scope to expand the application in future, so I'm aiming for a pattern that maximises reusability and adaptability in the code, to save work later on. The idea is:
Unit of Work pattern, to avoid problems with the database by committing changes only at the end of each set of actions.
Generic repository using BaseRepository<T> because the repositories will be mostly the same; the odd exception can extend and add its additional methods.
Dependency injection to bind those repositories to the IRepository<T> that the controllers will be using, so that I can switch data storage methods and such with minimal fuss (not just for best practice; there is a real chance of this happening). I'm using Ninject for this.
I haven't really attempted something like this from scratch before, so I've been reading up, and I think I've got myself muddled somewhere. So far, I have an interface IRepository<T>, implemented by BaseRepository<T>, which holds an instance of the DataContext that is passed into its constructor. The interface has methods for Add, Update, Delete, and various kinds of Get (single by ID, single by predicate, group by predicate, all). The only repository that doesn't fit this interface (so far) is the Users repository, which adds User Login(string username, string password) to allow login (the implementation of which handles all the salting, hashing, checking, etc.).
From what I've read, I now need a UnitOfWork class that contains instances of all the repositories. This unit of work will expose the repositories, as well as a SaveChanges() method. When I want to manipulate data, I instantiate a unit of work, access the repositories on it (which are instantiated as needed), and then save. If anything fails, nothing changes in the database because it won't reach the single save at the end. This is all fine. My problem is that all the examples I can find seem to do one of two things:
Some pass a data context into the unit of work, from which they retrieve the various repositories. This negates the point of DI by having my Entity-Framework-specific DbContext (or a class inheriting from it) in my unit of work.
Some call a Get method to request a repository, which is the service locator pattern, which is at least unpopular, if not an antipattern, and either way I'd like to avoid it here.
Do I need to create an interface for my data source and inject that into the unit of work as well? I can't find any documentation on this that's clear and/or complete enough to explain.
EDIT
I think I've been overcomplicating it; I'm now folding my repository and unit of work into one. My repository is entirely generic, so this just gives me a handful of generic methods (Add, Remove, Update, and a few kinds of Get) plus a SaveChanges method. This gives me a worker class interface; I can then have a factory class that provides instances of it (also interfaced). If I also have this worker implement IDisposable then I can use it in a scoped block. So now my controllers can do something like this:
using (var worker = DataAccess.BeginTransaction())
{
    Product item = worker.Get<Product>(p => p.ID == prodName);
    // stuff...
    worker.SaveChanges();
}
If something goes wrong before SaveChanges(), all changes are discarded when execution leaves the scope block and the worker is disposed. I can use dependency injection to provide a concrete implementation for the DataAccess field, which is passed into the base controller constructor. Business logic is all in the controller and works with IQueryable objects, so I can switch out the DataAccess provider for anything I like as long as it implements the IRepository interface; there's nothing specific to Entity Framework anywhere.
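Roughly, the combined worker interface I have in mind looks like this (a sketch; the names and member list are illustrative, not final):

public interface IRepository : IDisposable
{
    T Get<T>(Expression<Func<T, bool>> predicate) where T : class;
    IQueryable<T> GetAll<T>() where T : class;
    void Add<T>(T entity) where T : class;
    void Update<T>(T entity) where T : class;
    void Remove<T>(T entity) where T : class;
    void SaveChanges();
}

public interface IDataAccessFactory
{
    // BeginTransaction() hands out a disposable worker, as used above.
    IRepository BeginTransaction();
}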
So, any thoughts on this implementation? Is this on the right track?
I prefer to have a UnitOfWork or a UnitOfWorkFactory injected into the repositories; that way I don't need to touch it every time a new repository is added. The responsibility of the UnitOfWork is just to manage the transaction.
Here is an example of what I mean.
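A minimal sketch of that shape (all names here are hypothetical):

public interface IUnitOfWork : IDisposable
{
    // The unit of work only manages the transaction.
    void Commit();
}

public interface IUnitOfWorkFactory
{
    IUnitOfWork Begin();
}

public class OrderRepository
{
    private readonly IUnitOfWorkFactory _uowFactory;

    // New repositories just take the factory; the UnitOfWork itself is untouched.
    public OrderRepository(IUnitOfWorkFactory uowFactory)
    {
        _uowFactory = uowFactory;
    }

    public void Save(Order order)
    {
        using (var uow = _uowFactory.Begin())
        {
            // ... persist the order inside the transaction ...
            uow.Commit();
        }
    }
}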
I have some "caching" objects in my application that get an IRepository (a custom repository pattern contract) by dependency injection (Ninject). Those objects use the repository only once, but they have a Refresh function that forces the owner to refresh itself. They are singletons, created only once, and a ManualResetEvent ensures that all requests are blocked until the data is loaded.
The IRepositories are EF Code First based, so is it OK to simply ensure the connection is closed and keep the reference to the DbContext there forever?
I have disabled proxies and lazy loading, so... is it OK to have long-lived references from the root of the caching object to hundreds of these cached POCO entities?
Cheers.
With reference to comments from Julie Lerman,
http://msdn.microsoft.com/en-us/magazine/ee532098.aspx?sdmr=JulieLerman
the recommendation is to have several/many smaller contexts and, in web scenarios, to create a new context for each call.
(Although that article is about Second-Level Caching in the Entity Framework and AppFabric.)
Over time the context would come to contain many objects, and performance would decline accordingly.
I think this site has some good tips on EF performance, e.g. pre-generated views:
http://msdn.microsoft.com/en-us/data/hh949853
My personal recommendation, which I can't claim is best practice but comes from someone concerned about performance, is that a small bounded context per call is a solid long-term compromise.
Use pre-generated views to keep the initial load time as small as possible.
You could potentially manage a permanent DbContext in such a way as to drop unused objects from the context, or use a caching library with events to do so. Not a small task.
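For instance, evicting idle entities by hand looks something like this in EF6 (a sketch, and not something I can claim is best practice either; context is an existing DbContext):

// Detach everything the context is tracking that has no pending changes.
var idleEntries = context.ChangeTracker.Entries()
    .Where(e => e.State == EntityState.Unchanged)
    .ToList();

foreach (var entry in idleEntries)
{
    entry.State = EntityState.Detached; // drops the entity from the context's cache
}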
I would be interested in the solution you finally select. Please post it.
Finally, the best solution I found was to create a new kind of wrapper:
public class Generator<T> where T : IDisposable
{
    private readonly Func<T> _generate;

    public Generator(Func<T> generate)
    {
        _generate = generate;
    }

    public T Generate()
    {
        return _generate();
    }
}
And create a binding more or less this way:
// Dependency Injection bindings declaration section
DI.Bind<Generator<IRepository>>()
  .To(() => new Generator<IRepository>(() => DI.Get<IRepository>()));
Therefore, in long-lived objects that just need to create and destroy the element, I can ask for a Generator<IRepository> service rather than an IRepository. Every time I need to refresh, I just create a new IRepository, without knowing how it is built under the hood:
using (var repository = repositoryGenerator.Generate())
{
    repository.DoStuff();
}
It works like a charm so far.
Actually, I have added this feature to my DI framework. I can now bind IAnything and later request a Generator<IAnything>, and the framework will give me the fully ready object using this technique: How to create a Func<> delegate programmatically.
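If you happen to be on Ninject, the factory extension gives you much the same thing without a custom wrapper (a sketch, assuming the Ninject.Extensions.Factory package, which makes Func<T> injectable; DataRepository is a placeholder implementation):

// Registration, e.g. inside a NinjectModule's Load method:
//     Bind<IRepository>().To<DataRepository>();

public class CacheOwner
{
    private readonly Func<IRepository> _repositoryFactory;

    // Func<IRepository> is provided by the factory extension.
    public CacheOwner(Func<IRepository> repositoryFactory)
    {
        _repositoryFactory = repositoryFactory;
    }

    public void Refresh()
    {
        // Each call builds a fresh repository and disposes it afterwards.
        using (var repository = _repositoryFactory())
        {
            // ... reload the cached data ...
        }
    }
}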
Cheers.
How often do you use IoC for controllers/DAL in real projects?
IoC lets you abstract the application from concrete implementations with an additional layer of interfaces that must be implemented. But how often does a concrete implementation change? Should we really do the job twice, adding a method to the interface and then to the implementation, if the implementation will hardly ever be changed? I took part in about 10 ASP.NET projects, and the DAL (ORM-like and not) was never rewritten completely.
Watching lots of videos, I clearly understand that IoC "is cool" and a really nice way to program, but is it really needed?
Added a bit later:
Yes, IoC makes it easier to prepare a testing environment, but we also have a nice way to test the DAL without IoC: we wrap DAL calls to the database in uncommitted transactions, without the risk of making the data unstable.
IoC isn't only a pattern for writing modular programs; it also allows for easier testing, by being able to swap in mock objects that implement the same interface as the components they stand in for.
Plus, it actually makes code much easier to maintain down the road.
It's not IoC that lets you abstract your application from the concrete implementation with an additional layer of interfaces; that is simply how you should design your application to make it more modular and reusable. Another important benefit is that once you've designed your application this way, it becomes much easier to test the different parts in isolation, without depending on concrete database access, for example.
There's much more to IoC than the ability to change implementations:
testing
explicit dependencies - not hidden inside a private DataContext
automatic instantiation - you declare in the constructor that you need something, and you get it, with all deeply nested dependencies resolved
separation of assemblies - take a look at S#arp Architecture to see how IoC lets you avoid referencing NHibernate and other specific assemblies, which you would otherwise have to reference
management of lifetime - the ability to specify per-request / singleton / transient lifetimes for objects, and to change them in one place instead of in dozens of controllers
the ability to do dynamic stuff, like getting the correct data context in model binders, because with IoC you now have metadata about your dependencies; this shows that maybe IoC does for your object dependencies what reflection does for C# programming - it opens up a lot of new possibilities that you never even thought about
And so on; I'm sure I've missed a lot of positives. The only "bad" thing I can think of (and that you mentioned) is the duplication of interfaces, which is a non-issue with modern IDEs' support for refactoring.
Well, if your data interfaces change every day, and you have hundreds of them, you may want to avoid IoC.
But do you avoid good design practices just because it's harder to follow them? Do you copy and paste code instead of extracting a method or class just because it takes more time and code to do so? Do you place business logic in views just because it's harder to create view models and sync them with domain models? If yes, then you can avoid IoC, no problem.
You're arguing that using IoC takes MORE code than not using it. I disagree.
Here is the entire DAL IoC configuration for one of my projects using LINQ to SQL. The ContextProvider class is simply a thread-safe LINQ to SQL context factory.
container.Register(Component.For<IContextProvider<LSDataContext>, IContextProvider>()
    .LifeStyle.PerWebRequest
    .ImplementedBy<ContextProvider<LSDataContext>>());

container.Register(Component.For<IContextProvider<WorkSheetDataContext>, IContextProvider>()
    .LifeStyle.PerWebRequest
    .ImplementedBy<ContextProvider<WorkSheetDataContext>>());

container.Register(Component.For<IContextProvider<OffersReadContext>, IContextProvider>()
    .LifeStyle.PerWebRequest
    .ImplementedBy<ContextProvider<OffersReadContext>>());
Here is the entire DAL configuration for one of my projects using NHibernate and the repository pattern:
container.Register(Component.For<NHSessionBuilder>().LifeStyle.Singleton);
container.Register(Component.For(typeof(IRepository<>)).ImplementedBy(typeof(NHRepositoryBase<>)));
Here is how I consume the DAL in my BLL (with dependency injection):
public class ClientService
{
    private readonly IRepository<Client> _Clients;

    public ClientService(IRepository<Client> clients)
    {
        _Clients = clients;
    }

    public IEnumerable<Client> GetClientsWithGoodCredit()
    {
        return _Clients.Where(c => c.HasGoodCredit);
    }
}
Note that my IRepository<> interface inherits IQueryable<>, so this code is very trivial!
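That interface might look something like this (a sketch; the exact member list is my assumption, the key point being the IQueryable<> inheritance):

public interface IRepository<T> : IQueryable<T> where T : class
{
    void Add(T entity);
    void Remove(T entity);
}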
Here's how I can test my BLL without connecting to a DB:
public void GetClientsWithGoodCredit_ReturnsClientWithGoodCredit()
{
    var clientWithGoodCredit = new Client() { HasGoodCredit = true };
    var clientWithBadCredit = new Client() { HasGoodCredit = false };

    var clients = new List<Client>() { clientWithGoodCredit, clientWithBadCredit }.ToTestRepository();
    var service = new ClientService(clients);

    var clientsWithGoodCredit = service.GetClientsWithGoodCredit();

    Assert(clientsWithGoodCredit.Count() == 1);
    Assert(clientsWithGoodCredit.First() == clientWithGoodCredit);
}
ToTestRepository() is an extension method that returns a fake IRepository<> backed by an in-memory list.
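One possible shape for that fake, matching the IRepository<> sketch above (my assumption, not the author's actual code; usings for System, System.Collections.Generic and System.Linq assumed):

public static class TestRepositoryExtensions
{
    public static IRepository<T> ToTestRepository<T>(this IEnumerable<T> items) where T : class
    {
        return new TestRepository<T>(items);
    }
}

public class TestRepository<T> : IRepository<T> where T : class
{
    private readonly List<T> _items;
    private readonly IQueryable<T> _queryable;

    public TestRepository(IEnumerable<T> items)
    {
        _items = new List<T>(items);
        _queryable = _items.AsQueryable(); // wraps the live list, so Add/Remove are visible
    }

    public void Add(T entity) { _items.Add(entity); }
    public void Remove(T entity) { _items.Remove(entity); }

    // Delegate the IQueryable<T> surface to the in-memory queryable.
    public Type ElementType { get { return _queryable.ElementType; } }
    public System.Linq.Expressions.Expression Expression { get { return _queryable.Expression; } }
    public IQueryProvider Provider { get { return _queryable.Provider; } }

    public IEnumerator<T> GetEnumerator() { return _queryable.GetEnumerator(); }

    System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}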
There is no possible way you can argue that this is more complicated than newing up your DAL all over your BLL.
The only way you could ever have written the above test otherwise is by connecting to a DB, saving some test clients, and then querying. I guarantee that takes 100+ times longer to execute than this did. (Multiply that by 1000 tests and you can go get some coffee while you wait.)
Also, by using uncommitted transactions for testing, you introduce debugging nightmares resulting from ORMs that don't query over uncommitted entities.