Best approach for dependency injection of data connections in singletons - dependency-injection

I have some "caching" objects in my application, that get a IRepository (custom repository pattern contract) by dependency injection (Ninject). Those objects only uses the repository once, but they have a Refresh function that forces the owner to refresh itself. They are singletons, are created only once, and a ManualResetEvent ensures that all requests are blocked till it is loaded.
The IRepositories are EF Code First based, so is it OK to simply ensure the connection is closed and keep the reference to the DbContext around forever?
I have disabled proxies and lazy loading, so... is it OK to have long-lived references from the root of the caching object to hundreds of these cached POCO entities?
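For illustration, a stripped-down sketch of the shape of these objects (hypothetical names and repository API, not my actual code):
// Stripped-down illustration: a singleton cache whose readers block on a
// ManualResetEvent until the first load has finished.
public class ProductCache
{
    private readonly ManualResetEvent _loaded = new ManualResetEvent(false);
    private readonly IRepository _repository;
    private IList<Product> _items;

    public ProductCache(IRepository repository) // injected by Ninject
    {
        _repository = repository;
        Refresh();
    }

    public void Refresh()
    {
        _loaded.Reset();
        _items = _repository.Query<Product>().ToList(); // hypothetical repository API
        _loaded.Set();
    }

    public IList<Product> Items
    {
        get
        {
            _loaded.WaitOne(); // all requests block until the cache is loaded
            return _items;
        }
    }
}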
Cheers.

We can refer to the comments from Julie Lerman:
http://msdn.microsoft.com/en-us/magazine/ee532098.aspx?sdmr=JulieLerman
The recommendation is to have several/many smaller contexts and, in web scenarios, to create a new context for each call.
(She writes about Second-Level Caching in the Entity Framework and AppFabric, but the advice applies here too.)
Over time, the context would come to contain many objects and performance would decline accordingly.
I think this site has some good tips on EF performance, e.g. generated views:
http://msdn.microsoft.com/en-us/data/hh949853
My personal recommendation, which I can't claim is best practice but comes from someone concerned about performance, is that a small bounded context per call is a solid long-term compromise.
Use generated views to keep the initial load time as small as possible.
You could potentially manage a permanent DbContext in such a way as to drop unused objects from the context, or use a caching library with events to do so. Not a small task.
I would be interested in the solution you finally select; please post it.

Finally the best solution I found is to create a new kind of wrapper:
public class Generator<T> where T : IDisposable
{
    private readonly Func<T> _generate;

    public Generator(Func<T> generate)
    {
        _generate = generate;
    }

    public T Generate()
    {
        return _generate();
    }
}
And create a binding more or less this way:
// Dependency Injection bindings declaration section
DI.Bind<Generator<IRepository>>()
    .To(() => new Generator<IRepository>(() => DI.Get<IRepository>()));
So, in long-lived objects that just need to create and destroy the element, I can ask for a Generator<IRepository> service rather than an IRepository. Then, every time I need to refresh, I simply create a new IRepository without knowing how it is built under the hood:
using (var repository = repositoryGenerator.Generate())
{
    repository.DoStuff();
}
It works like a charm so far.
Actually, I have added this feature to my DI framework: I can now bind IAnything and later request a Generator<IAnything>, and the framework will give me the fully ready object using the technique from How to create a Func<> delegate programmatically.
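For reference, a hedged sketch of what that generic plumbing might look like with Ninject (my reconstruction under assumptions, not the framework's actual code):
// Builds a Generator<TService> for a service type known only at runtime,
// by compiling a Func<TService> expression around kernel.Get<TService>().
// Note: TService must implement IDisposable to satisfy Generator<T>'s constraint.
static object CreateGenerator(IKernel kernel, Type serviceType)
{
    // Equivalent to: () => kernel.Get<TService>()
    var getCall = Expression.Call(
        typeof(ResolutionExtensions),
        "Get",
        new[] { serviceType },
        Expression.Constant(kernel, typeof(IResolutionRoot)),
        Expression.Constant(new IParameter[0]));

    var funcType = typeof(Func<>).MakeGenericType(serviceType);
    var factory = Expression.Lambda(funcType, getCall).Compile();

    // new Generator<TService>(factory)
    var generatorType = typeof(Generator<>).MakeGenericType(serviceType);
    return Activator.CreateInstance(generatorType, factory);
}
(Uses System.Linq.Expressions plus Ninject's IResolutionRoot/ResolutionExtensions and Ninject.Parameters.IParameter.)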
Cheers.

Related

Dependency injection: Is it ok to instantiate a concrete object from a concrete factory

I am fairly new to Dependency Injection, and I wrote a great little app that worked exactly as Mark Seemann told me it would, and the world was great. I even added some extra complexity to it just to see if I could handle that using DI. And I could, happy days.
Then I took it to a real-world application and spent a long time scratching my head. Mark tells me that I am not allowed to use the 'new' keyword to instantiate objects, and that I should instead let the IoC container do this for me.
However, say that I have a repository and I want it to be able to return me a list of things, thusly:
public interface IThingRepository
{
    IEnumerable<IThing> GetThings();
}
Surely at least one implementation of this interface will have to instantiate some Things? And it doesn't seem so bad to allow ThingRepository to new up some Things, as they are related anyway.
I could pass a POCO around instead, but at some point I'm going to have to convert the POCO into a business object, which would require me to new something up.
This situation seems to occur every time I want a number of things which is not knowable in the Composition Root (ie we only find out this information later - for example when querying the database).
Does anyone know what the best practice is in these kinds of situations?
In addition to Steven's answer, I think it is OK for a specific factory to new up the specific matching implementation that it was created for.
Update
Also, check this answer, specifically the comments, which say something about new-ing up instances.
Example:
public interface IContext {
    T GetById<T>(int id);
}

public interface IContextFactory {
    IContext Create();
}

public class EntityContext : DbContext, IContext {
    public T GetById<T>(int id) {
        var entity = ...; // Retrieve from db
        return entity;
    }
}

public class EntityContextFactory : IContextFactory {
    public IContext Create() {
        // I think this is ok, since the factory was specifically created
        // to return the matching implementation of IContext.
        return new EntityContext();
    }
}
Mark tells me that I am not allowed to use the 'new' keyword to instantiate objects
That's not what Mark Seemann tells you, or what he means. You must make a clear separation between services (controlled by your composition root) on one side, and primitives, entities, DTOs, view models and messages on the other side. Services are injectables; all other types are newables. You should only prevent using new on service types. It would be silly to prevent newing up strings, for instance.
Since in your example the service is a repository, it seems reasonable to assume that the repository returns domain objects. Domain objects are newables and there's no reason not to new them manually.
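To make that concrete, here is a minimal sketch (with a hypothetical Thing shape) of a repository that news up the domain objects it returns:
// The domain object is a newable: constructing it with 'new' is fine.
public class Thing : IThing
{
    public int Id { get; private set; }
    public string Name { get; private set; }

    public Thing(int id, string name)
    {
        Id = id;
        Name = name;
    }
}

// The repository is an injectable service, yet it freely news up Things.
public class ThingRepository : IThingRepository
{
    public IEnumerable<IThing> GetThings()
    {
        // In a real implementation these values would come from the database.
        yield return new Thing(1, "widget");
        yield return new Thing(2, "gadget");
    }
}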
Thanks for the answers everybody, they led me to the following conclusions.
Mark makes a distinction between stable and unstable dependencies in the book I am reading ("Dependency Injection in .NET"). Stable dependencies (e.g. strings) can be created at will. Unstable dependencies should be moved behind a seam/interface.
A dependency is anything that is in a different assembly from the one that we are writing.
An unstable dependency is any of the following:
It requires a run time environment to be set up such as a database, web server, maybe even the file system (otherwise it won't be extensible or testable, and it means we couldn't do late binding if we wanted to)
It doesn't exist yet (otherwise we can't do parallel development)
It requires something that isn't installed on all machines (otherwise it can cause test difficulties)
It contains non deterministic behaviour (otherwise impossible to test well)
So this is all well and good.
However, I often hide things behind seams within the same assembly. I find this extremely helpful for testing. For example, if I am doing a complex calculation it is impossible to test the entire calculation well in one go. If I split the calculation up into lots of smaller classes and hide these behind seams, then I can easily inject any arbitrary intermediate results into a calculating class, as sketched below.
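A toy sketch of that kind of seam (hypothetical names, all in one assembly):
// The tax step hides behind an interface, so a test can inject an arbitrary
// intermediate result instead of running the real calculation.
public interface ITaxCalculator
{
    decimal TaxFor(decimal net);
}

public class InvoiceTotaler
{
    private readonly ITaxCalculator _tax;

    public InvoiceTotaler(ITaxCalculator tax)
    {
        _tax = tax;
    }

    public decimal Total(decimal net)
    {
        return net + _tax.TaxFor(net); // the intermediate result arrives through the seam
    }
}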
So, having had a good old think about it, these are my conclusions:
It is always OK to create a stable dependency.
You should never create unstable dependencies directly.
It can be useful to use seams within an assembly, particularly to break up big classes and make them more easily testable.
And in answer to my original question: it is OK to instantiate a concrete object from a concrete factory.

How to make the Controller a single instance per application in ASP.NET MVC?

Over time controllers develop a lot of dependencies, and creating an instance of a controller becomes too expensive for each request (especially with DI). Is there any solution to make controllers singletons?
Creating instances of controllers is a pretty fast and simple operation. What becomes too expensive is creating their dependencies for each request. So what you really need is many controllers which share the same instances of their dependencies.
E.g. you have the following controller:
public class SalesController : Controller
{
    private IProductRepository productRepository;
    private IOrderRepository orderRepository;

    public SalesController(IProductRepository productRepository,
                           IOrderRepository orderRepository)
    {
        this.productRepository = productRepository;
        this.orderRepository = orderRepository;
    }

    // ...
}
You should configure your dependency injection framework to use the same instances of the repositories for the whole application (keep in mind, you can have synchronization problems). Now creating dependencies is not expensive any more. All dependencies are instantiated only once and reused for all requests.
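For example, with Ninject (a sketch; your implementation class names will differ):
// Bind each repository once for the whole application; every controller
// created afterwards receives these same instances.
kernel.Bind<IProductRepository>().To<ProductRepository>().InSingletonScope();
kernel.Bind<IOrderRepository>().To<OrderRepository>().InSingletonScope();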
If you have many dependencies and you are worried about the cost of obtaining a reference to each one and handing those references to the controller instance (which I don't think will be very expensive), then you can group your dependencies (something like the Introduce Parameter Object refactoring):
public class SalesController : Controller
{
    private ISalesService salesService;

    public SalesController(ISalesService salesService)
    {
        this.salesService = salesService;
    }

    // ...
}

public class SalesService : ISalesService
{
    private IProductRepository productRepository;
    private IOrderRepository orderRepository;

    public SalesService(IProductRepository productRepository,
                        IOrderRepository orderRepository)
    {
        this.productRepository = productRepository;
        this.orderRepository = orderRepository;
    }

    // ...
}
Now you have a single dependency, which will be injected very quickly. If you configure your dependency injection framework to use a singleton SalesService, all SalesControllers will reuse the same instance of the service. Creation of controllers and provision of their dependencies will be very fast.
So first an answer to the original question:
public void ConfigureServices(IServiceCollection services) {
    // put other services bindings here

    // bind all Controller classes as singletons
    services.AddSingleton<HomeController, HomeController>();

    // tell framework to obtain Controller instances from ServiceProvider
    services.AddMvc().AddControllersAsServices();
}
As stated in the original question, if controllers have big dependency trees consisting mainly of request-scoped or transient dependencies, then creating them separately for each request may have some footprint on the scalability of your application (in Java, for example, Servlet instances are singletons by default for exactly this reason). While the CPU and real time needed to create even a big dependency tree are usually negligible (unless you have heavy computations or network communication in the constructors of your components, which is almost never a good idea for transient or request-scoped components), the memory usage footprint is something to reckon with. In the case of common DB-backed web apps, memory is the main factor limiting the number of concurrent requests a single machine/node can handle. If every request has a separate copy of a big dependency tree, together they may consume a significant amount of memory (the other thing to watch for is the initial stack size for a new thread, by the way).
The accepted answer 1220560 solves the problem as well, but I would consider it an ugly hack and it has some drawbacks: you need to create this artificial singleton service that will be used by your Controllers, either as a service locator or as a proxy for other services. If you have just one such singleton object for all your controllers, then you are effectively hiding the real dependencies of your Controller: for example, if someone wants to write a unit test for your Controller, they need to analyse its implementation carefully to see which dependencies it actually uses, so that they know what mocks/fakes to provide in the test setup. If you later change your Controller and, as a result, the subset of services it uses changes as well, it is very easy to forget to update the test setup too. This can sometimes lead to bugs that are hard to track down. By contrast, if your dependencies are declared explicitly as constructor params, you will get a compiler error in the test setup right away. You could also have a separate singleton proxy/service locator for each controller, but then it's a lot of hassle, basically.
Regardless of whether you use the solution proposed by me or the one from answer #1220560, you must be careful when injecting request-scoped dependencies into singleton objects, as described in https://learn.microsoft.com/en-us/aspnet/core/fundamentals/dependency-injection#registering-your-own-services right at the end of the "registering-your-own-services" section. You can find possible solutions to this problem here: how to use scoped dependency in a singleton in C# / ASP.
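The usual fix is to have the singleton create its own scope per operation; a minimal sketch, assuming a scoped MyDbContext registration:
// The singleton resolves the scoped service inside a scope it owns, instead
// of capturing a scoped instance for its whole (application-long) lifetime.
public class SingletonWorker
{
    private readonly IServiceScopeFactory _scopeFactory;

    public SingletonWorker(IServiceScopeFactory scopeFactory)
    {
        _scopeFactory = scopeFactory;
    }

    public void DoWork()
    {
        using (var scope = _scopeFactory.CreateScope())
        {
            // hypothetical scoped service; disposed when the scope ends
            var db = scope.ServiceProvider.GetRequiredService<MyDbContext>();
            // ... use db ...
        }
    }
}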
Another thing to watch for is concurrency: singleton objects may be accessed concurrently by several threads handling different requests, so make sure to add proper synchronization to any non-thread-safe resources your singleton uses.
edit:
I've just realized the original question was about ASP.NET and this answer is for ASP.NET Core, so it probably won't work for "non-Core".

Is it considered bad design to pass a repository interface as an argument to a method on a domain class?

Our domain model is very anemic right now. Our entities are mostly empty shells, almost purely designed for holding values and navigating to collections.
We are using EF 4.1 code-first ORM, and the design so far has been to shield our novice developers against the dreaded "LINQ to Entities cannot translate blablabla to a store expression" exception when querying against the context during early iterations.
We have various aggregate root repository interfaces over EF. However, some blocks of code in the impls seem like they should be the domain's responsibility. As long as the repository interface is declared in the domain, and the impl is in the infrastructure (dependency injected), is it considered bad design to pass a repository interface as an argument to a method on an entity (or other domain) class?
For example, would this be bad?
public class EntityAbc {
    public void SaveTo(IEntityAbcRepository repos) {...}
    public void DeleteFrom(IEntityAbcRepository repos) {...}
}
What if a particular entity needed access to other aggregate root repositories? Would this be ok or not, and why?
public void Save() {
    var abcRepos = DependencyInjector.Current.GetService<IEntityAbcRepository>();
    var xyzRepos = DependencyInjector.Current.GetService<IEntityXyzRepository>();
    // work with repositories
}
Update 1
I did not mention moving code to an application layer because I consider some of the code that uses IEntityAbcRepository to involve business rule enforcement. The repository impl should be as vanilla as possible, right? Its main responsibility should just be a simple abstraction over the ORM, allowing you to find / add / update / delete entities. Wrong?
Also, this question applies to methods on other, non-entity domain classes: factories, services, whatever pattern may be appropriate. Point being, I'm asking the question about any method on a domain class, not just an entity class. @Eranga, this is one place where you can use constructor injection, because factories and services are not part of the ORM.
The application layer could then coordinate flow by injecting a repository impl into its constructor, and passing it as an argument to a domain service or factory. Is this bad practice?
Update 2
Adding another clarification here. What if the domain only needs access to the IEntityAbcRepository in order to execute its Find() method(s)? In the example above, the SaveTo and DeleteFrom methods would not invoke any add / update / delete methods on the repository interface.
So far we've combined the find / add / update / delete methods on a single aggregate root repository interface for simplicity. But I suppose there's nothing stopping us from separating them out into 2 interfaces, like so:
IEntityAbcReadRepository <-- defines all find method signatures
IEntityAbcWriteRepository <-- defines all add / update / delete method sigs
In this case, would it be bad practice to pass IEntityAbcReadRepository as a parameter to a domain method?
Your first approach is better than the second approach, which uses the "Service Locator" pattern. Dependencies are more obvious in the first approach.
Here are some links that explain why "Service Locator" is a bad choice:
Is it bad to use servicelocation instead of constructor injection
...
Singleton Vs ServiceLocator
Say no to ServiceLocator
Both of these solutions stem from the fact that EF does not allow you to use constructor injection. However you can use property injection as explained in this answer. But that does not guarantee that mandatory dependencies are present.
So your first approach is the better solution.
Short answer: Yes!
Long answer:
Consider creating an AbcService in your application service layer. This service layer sits between your domain and your infrastructure. You can inject as many repositories into AbcService as you want. Then let the service handle SaveTo and DeleteFrom.
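As a rough sketch of that layering (hypothetical names and repository methods), the service takes the repositories in its constructor and carries the persistence operations, keeping the entity itself persistence-ignorant:
// Application-layer service: repositories are constructor-injected here,
// so the EntityAbc domain class never needs to see a repository.
public class AbcService
{
    private readonly IEntityAbcRepository _abcRepos;
    private readonly IEntityXyzRepository _xyzRepos;

    public AbcService(IEntityAbcRepository abcRepos, IEntityXyzRepository xyzRepos)
    {
        _abcRepos = abcRepos;
        _xyzRepos = xyzRepos;
    }

    public void Save(EntityAbc entity)
    {
        // business-rule checks can run here or inside the entity itself
        _abcRepos.Add(entity); // hypothetical repository method
    }
}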
SaveTo and DeleteFrom, unless you are saving to and deleting from another entity, i.e. no data access is involved, are methods that sound like they shouldn't be on a domain entity, IMO.
Having persistence logic in your domain entities is IMO bad design in the first place. Good separation of concerns should mean that domain/business logic is separated from persistence logic, so your domain classes should be persistence ignorant.
Previous Entity Framework versions might not have allowed such a separation, but I think the most recent versions have solved that problem. I'm not that familiar with EF though, so I might be wrong.
With that said, where can you put methods such as Save() and Delete() ?
If you want to add to/remove your entity from its repository, Repository.Add() and Repository.Remove() are good choices. A repository basically serves as an illusion of an in-memory collection of your entities, so it makes sense for it to behave just like a collection or a list with the appropriate methods.
If you want to persist changes made to an existing entity, there are other ways to do that. You could have a Repository.Save() method but some consider it bad practice. Oftentimes the changes are part of a higher level operation handled in a transaction-like context such as a Unit of Work, in that case you can let the operation persist all the objects in its scope when it finishes. For instance, if you use an Open Session in View approach for your web application, changes are automatically persisted when the request ends.
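To illustrate the Unit of Work idea (hypothetical interfaces, not tied to any particular ORM):
// The operation mutates domain objects freely; the unit of work persists
// every tracked change once, when the operation completes.
public void ShipOrder(int orderId)
{
    using (var uow = _unitOfWorkFactory.Begin()) // hypothetical UoW factory
    {
        var order = _orderRepository.GetById(orderId);
        order.MarkShipped(); // pure domain behaviour, no persistence call
        uow.Commit();        // all changes in this scope are saved here
    }
}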
Or you can rely on an ad-hoc call of your ORM's Save() method for your particular entity which hopefully shouldn't be grafted onto the entity code itself (with NHibernate, for instance, it's available at runtime on the proxied entity).
[Update]
Putting that in perspective with your subsequent questions (though I'm not sure I understand all of them well) :
I see no value in splitting your repository into a ReadRepository and a WriteRepository. In DDD, a repository's responsibility is clearly to provide a collection to query from as well as add to or remove from. It's still quite cohesive that way.
It's not an entity's responsibility to fiddle with its own persistence, so it shouldn't be aware of its own repository for that precise purpose. Otherwise, it's pretty rare that an entity rightfully needs to have knowledge of its own repository (usually it means that the entity has a relationship to another entity of the same type, like parent/child, and you want to get the other entity from the repository)
However, entities and other domain objects obviously do need to obtain references to other entities at times. In that case, try to get these references through traversal of other objects within the boundary of your aggregate first before looking for a repository. If you absolutely need a repository to get the object you want, it's a good idea to inject the repository through any flavour of injection you like. As Eranga pointed out, service locator might turn out to be a sub-par dependency injection ersatz though.
Last thing, the kind of injection you mentioned - SaveTo(IEntityAbcRepository repos) - is peculiar because it is neither constructor nor setter injection, but rather an ephemeral injection lasting just for the duration of a method call (sometimes called method injection). It implies that whoever calls your method must know what repository to pass at that precise moment, which is not obvious. It might be useful, but I'd say it's not the form of injection you would typically use.

Entity Framework context

I have an application using the Entity Framework code first. My setup is that I have a core service which all other services inherit from. The core service contains the following code:
public static DatabaseContext db = new DatabaseContext();

public CoreService()
{
    db.Database.Initialize(force: false);
}
Then, another class will inherit from CoreService and when it needs to query the database will just run some code such as:
db.Products.Where(blah => blah.IsEnabled);
However, I seem to be getting conflicting stories as to which is best.
Some people advise NOT to do what I'm doing.
Other people say that you should define the context for each class (rather than use a base class to instantiate it)
Others say that for EVERY database call, I should wrap it in a using block. I've never seen this in any of the examples from Microsoft.
Can anyone clarify?
I'm currently at a point where refactoring is possible and quite quick, so I'd like some general advice if possible.
You should use one context per web request. Hold it open for as long as you need it, then get rid of it when you are finished. That's what the using is for.
Do NOT wrap up your context in a Singleton. That is not a good idea.
If you are working with clients like WinForms then I think you would wrap the context around each form but that's not my area.
Also, make sure you know when you are actually going to execute against your data source, so you don't end up enumerating multiple times when you might only need to do so once to work with the results.
Lastly, you have in fact seen this practice from MS: lots of the ADO stuff supports being wrapped in a using, but hardly anyone realises this.
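As a minimal sketch of that shape (reusing the DatabaseContext and Products set from the question):
// One short-lived context per operation; ToList() executes the query before
// the context is disposed, so nothing enumerates a dead context later.
public IList<Product> GetEnabledProducts()
{
    using (var db = new DatabaseContext())
    {
        return db.Products.Where(p => p.IsEnabled).ToList();
    }
}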
I suggest using the design principle "prefer composition over inheritance".
You can hold a reference to the database context in your base class.
Implement a singleton for getting the DataContext and assign the DataContext to this reference.
The conflicts you get are not related to sharing the context between classes, but are caused by the static declaration of your context. If you make the context an instance field of your service class, so that every service instance gets its own context, there should be no issues.
The using pattern you mention is not required; instead, you should make sure that context.Dispose() is called when the service itself is disposed.
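A minimal sketch of that instance-field approach (reusing the CoreService name from the question):
// Each service instance owns its own context and disposes it along with
// itself, instead of sharing one static context across the application.
public class CoreService : IDisposable
{
    protected readonly DatabaseContext Db = new DatabaseContext();

    public CoreService()
    {
        Db.Database.Initialize(force: false);
    }

    public void Dispose()
    {
        Db.Dispose();
    }
}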

ASP.NET MVC IoC usability

How often do you use IoC for controllers/DAL in real projects?
IoC allows you to abstract the application from concrete implementations with an additional layer of interfaces that must be implemented. But how often does the concrete implementation change? Should we really have to do the job twice, adding a method to the interface and then to the implementation, if the implementation will hardly ever change? I took part in about 10 ASP.NET projects and the DAL (ORM-like and not) was never rewritten completely.
Watching lots of videos I clearly understand that IoC "is cool" and a really nice way to program, but is it really needed?
Added a bit later:
Yes, IoC allows you to prepare a better testing environment, but we also have a nice way to test the DAL without IoC: we wrap DAL calls to the database in uncommitted transactions, without the risk of making the data unstable.
IoC isn't a pattern only for writing modular programs; it also allows for easier testing, by being able to swap in mock objects that implement the same interface as the components they stand in for.
Plus, it actually makes code much easier to maintain down the road.
It's not IoC that allows you to abstract the application from the concrete implementation with an additional layer of interfaces; this is how you should design your application in order to make it more modular and reusable. Another important benefit is that once you've designed your application this way, it will be much easier to test the different parts in isolation without depending on concrete database access, for example.
There's much more to IoC than the ability to change implementations:
testing
explicit dependencies - not hidden inside a private DataContext
automatic instantiation - you declare in the constructor that you need something, and you get it, with all deeply nested dependencies resolved
separation of assemblies - take a look at S#arp Architecture to see how IoC lets you avoid referencing NHibernate and other specific assemblies, which you would otherwise have to reference
management of lifetime - the ability to give objects a per-request / singleton / transient lifetime and change it in one place, instead of in dozens of controllers
the ability to do dynamic stuff, like getting the correct data context in model binders, because with IoC you now have metadata about your dependencies; this suggests that IoC does for your object dependencies what reflection does for C# programming - it opens up a lot of new possibilities you never even thought about
And so on; I'm sure I missed a lot of the positives. The only "bad" thing I can think of (and that you mentioned) is the duplication of interfaces, which is a non-issue given modern IDEs' support for refactoring.
Well, if your data interfaces change every day, and you have hundreds of them - you may want to avoid IoC.
But, do you avoid good design practices just because it's harder to follow them? Do you copy and paste code instead of extracting a method/class, just because it takes more time and more code to do so? Do you place business logic in views just because it's harder to create view models and sync them with domain models? If yes, then you can avoid IoC, no problem.
You're arguing that using IoC takes MORE code than not using it. I disagree.
Here is the entire DAL IoC configuration for one of my projects using LinqToSql. The ContextProvider class is simply a thread-safe LinqToSql context factory.
container.Register(Component.For<IContextProvider<LSDataContext>, IContextProvider>()
    .LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<LSDataContext>>());
container.Register(Component.For<IContextProvider<WorkSheetDataContext>, IContextProvider>()
    .LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<WorkSheetDataContext>>());
container.Register(Component.For<IContextProvider<OffersReadContext>, IContextProvider>()
    .LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<OffersReadContext>>());
Here is the entire DAL configuration for one of my projects using NHibernate and the repository pattern:
container.Register(Component.For<NHSessionBuilder>().LifeStyle.Singleton);
container.Register(Component.For(typeof(IRepository<>)).ImplementedBy(typeof(NHRepositoryBase<>)));
Here is how I consume the DAL in my BLL (w/ dependency injection):
public class ClientService
{
    private readonly IRepository<Client> _Clients;

    public ClientService(IRepository<Client> clients)
    {
        _Clients = clients;
    }

    public IEnumerable<Client> GetClientsWithGoodCredit()
    {
        return _Clients.Where(c => c.HasGoodCredit);
    }
}
Note that my IRepository<> interface inherits IQueryable<> so this code is very trivial!
Here's how I can test my BLL without connecting to a DB:
public void GetClientsWithGoodCredit_ReturnsClientWithGoodCredit()
{
    var clientWithGoodCredit = new Client() { HasGoodCredit = true };
    var clientWithBadCredit = new Client() { HasGoodCredit = false };
    var clients = new List<Client>() { clientWithGoodCredit, clientWithBadCredit }.ToTestRepository();
    var service = new ClientService(clients);

    var clientsWithGoodCredit = service.GetClientsWithGoodCredit();

    Assert(clientsWithGoodCredit.Count() == 1);
    Assert(clientsWithGoodCredit.First() == clientWithGoodCredit);
}
ToTestRepository() is an extension method that returns a fake IRepository<> that uses an in-memory list.
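For completeness, a sketch of what such a fake could look like, assuming IRepository<T> adds nothing beyond IQueryable<T> for the purposes of the test:
// In-memory fake: since IRepository<T> inherits IQueryable<T>, the fake can
// delegate every IQueryable member to a LINQ-to-Objects queryable.
public class TestRepository<T> : IRepository<T>
{
    private readonly IQueryable<T> _items;

    public TestRepository(IEnumerable<T> items)
    {
        _items = items.AsQueryable();
    }

    public Type ElementType { get { return _items.ElementType; } }
    public Expression Expression { get { return _items.Expression; } }
    public IQueryProvider Provider { get { return _items.Provider; } }
    public IEnumerator<T> GetEnumerator() { return _items.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

public static class TestRepositoryExtensions
{
    public static IRepository<T> ToTestRepository<T>(this IEnumerable<T> items)
    {
        return new TestRepository<T>(items);
    }
}
(Requires System.Collections, System.Linq and System.Linq.Expressions.)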
There is no possible way you can argue that this is more complicated than newing up your DAL all over your BLL.
The only other way you could have written the above test is by connecting to a DB, saving some test clients, and then querying. I guarantee that takes 100+ times longer to execute than this did. (Multiply that by 1000 tests and you can go get some coffee while you're waiting.)
Also, by using uncommitted transactions for testing you introduce debugging nightmares resulting from ORMs that don't query over uncommitted entities.
