When using inversion of control in a .Net application, is it acceptable to add a reference to your data layer in your host application?
Say I have the following individual projects:
MyApp.Data (EF classes)
MyApp.Business (Service factory / repository)
MyApp.Services.MyWCFService (Host)
MyApp.Presentation.MVC (Host)
MyApp.Business.Tests (Host)
In this situation, I have historically used IoC between MyApp.Business and the host apps - creating interfaces for each service factory / repository, and using DI in the host application. Each application then has the choice to inject its own implementation of my business factories. I've never had an issue with this, as my host apps only ever rely on the business layer, and I never have to reference the MyApp.Data assembly (MyApp.Business generally deals with all calls to the MyApp.Data assembly, and renders the results into composite business objects).
What I'm trying to achieve with my latest project is to use IoC at every level - i.e. creating interfaces in MyApp.Data, so I can apply mocking and proper unit tests to MyApp.Business. It seems to me that the only way to achieve this is to create an assembly reference to both MyApp.Business and MyApp.Data in the host application, then use DI to inject both the MyApp.Data and MyApp.Business implementations.
This is contrary to everything I've been taught with conventional nTier applications, though I understand that it's DI that's doing all the work, and the reference is basically for resolution only. Am I right in assuming this is the right way to approach it? Is there a better way?
In short: Yes, it is acceptable to reference each and any part of your app from the main entry point of your application.
The concept is called Composition Root.
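For illustration, here is a minimal sketch of what that composition root could look like in the MVC host, assuming Ninject and made-up type names (ICustomerRepository, CustomerService and friends are illustrative, not from your projects):

using Ninject;

public static class CompositionRoot
{
    public static IKernel BuildKernel()
    {
        var kernel = new StandardKernel();

        // The host references MyApp.Data and MyApp.Business purely to wire
        // them up here; the rest of the host only ever sees the interfaces.
        kernel.Bind<ICustomerRepository>().To<MyApp.Data.CustomerRepository>();   // hypothetical names
        kernel.Bind<ICustomerService>().To<MyApp.Business.CustomerService>();     // hypothetical names

        return kernel;
    }
}

The host is the only project that knows about every assembly; everything beneath it keeps depending on interfaces only.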
I'm still gaining experience in dependency injection. I created a new console app project and added two other projects to mimic a real world app. So the three projects in my app are:
MyConsoleApp
MyBusinessService
MyDataRepository
I created all my interfaces so that MyBusinessService is only using interfaces to get data from the repository.
My question is about MyConsoleApp. As I understand it, this is where Ninject will resolve all the dependencies.
Two questions:
I think this means MyConsoleApp will have to reference both MyBusinessService and MyDataRepository. Is this correct?
I think, in MyConsoleApp, I'll have to "manually" bind the IMyDataRepository interface to MyDataRepository concrete class -- see code below. Is this correct? I get a bit confused here because in some tutorials, they're mentioning that Ninject will resolve dependencies "automatically".
I think my code will look like this:
using Ninject;

static void Main()
{
    // Get Ninject going
    var kernel = new StandardKernel();

    // Bindings
    kernel.Bind<IMyDataRepository>().To<MyDataRepository>();

    // Some business logic code my console app will process
}
Disclaimer: I am by no means a NInject expert (projects I've been on have tended to use StructureMap).
To answer question 1:
It depends. If your app code has direct dependencies on the data repository, then yes. However, it's considered by many as better practice to have the business layer wrap the data-access layer, and have the main application code depend only on the business layer (this way your application is persistence-ignorant). Your data-access layer would perform only I/O, any validation would be done in the business layer wrapper.
To answer question 2:
Any code that I've seen using any DI/IoC container has calls to Bind() or an equivalent method. IoC containers do resolve dependencies "automatically" in the sense that if you inject an IFoo into a class, the container will automatically construct an appropriate instance of a class that implements IFoo. But in order for the container to do that, you have to instruct the container which class to use.
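To make that concrete, here is a hedged sketch: once both bindings below are registered, asking the kernel for the top-level service builds the whole chain for you (IMyBusinessService / MyBusinessService are assumed names - your question only shows IMyDataRepository):

using Ninject;

var kernel = new StandardKernel();
kernel.Bind<IMyDataRepository>().To<MyDataRepository>();
kernel.Bind<IMyBusinessService>().To<MyBusinessService>();   // assumed binding

// Ninject sees that MyBusinessService's constructor takes an IMyDataRepository
// and injects the bound MyDataRepository automatically.
IMyBusinessService service = kernel.Get<IMyBusinessService>();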
Consider the scenario where you've got a web app and occasionally connected mobile app that perform (mostly) the same function, which is managing widgets. The web app gets its data from web services. The mobile app uses a SQLite database.
You'd likely have an IWidgetRepository interface to represent data-access operations. But you'd have two implementations, one to interact with the web service, the other to interact with the SQLite DB. And both these implementations would (most likely) reside in your solution or a shared package.
In the web app, you'd bind IWidgetRepository to the web-service implementation; in the mobile app you'd bind it to the SQLite implementation. Since there are multiple implementations of the interface in question -- or, more generally, because we can't assume there will only ever be one implementation -- the container needs to be instructed as to which class/implementation to use.
Incidentally, most applications place this registration code into a separate module, frequently referred to as a "bootstrapper" (NInject actually has a class by that name), and call the bootstrapper from your startup code.
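As a rough sketch of that idea, assuming Ninject's NinjectModule and the illustrative widget types from the scenario above:

using Ninject;
using Ninject.Modules;

// Loaded by the web app's composition root:
public class WebAppModule : NinjectModule
{
    public override void Load()
    {
        Bind<IWidgetRepository>().To<WebServiceWidgetRepository>();
    }
}

// Loaded by the mobile app's composition root:
public class MobileAppModule : NinjectModule
{
    public override void Load()
    {
        Bind<IWidgetRepository>().To<SqliteWidgetRepository>();
    }
}

// Each application's startup code picks its own module:
// var kernel = new StandardKernel(new WebAppModule());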
Suppose I have a BaseForm which depends on an ILogger or IResourceManager or something like that. Currently it resolves the correct implementation of the required service using the service locator which I know is an anti-pattern.
Is using the constructor injection the right way to resolve this kind of dependency?
Do I have to register my BaseForm (and its derived types) in the container in order to create instances of them with resolved dependencies? Doesn't that complicate everything?
Is it bad to use a static factory wrapped around a service locator?
Unit-testing aside, will I really be punished because of using service locator anti-pattern?
Sorry about asking many questions at once. I've read the following SO questions and many others but reading them only added to my confusion:
How to use Dependency Injection and not Service Locator
What's the difference between the Dependency Injection and Service Locator patterns?
How to avoid Service Locator Anti-Pattern?
If possible, you should always go with dependency injection, since it has a few clear strengths. With UI technologies, however, it is not always possible to use dependency injection, since some UI technologies (in the .NET space, Win Forms and Web Forms, for instance) only allow your UI classes (forms, pages, controls, etc.) to have a default constructor. In that case you will have to fall back to something else, which is Service Locator.
In that case I can give you the following advice:
Only fall back to Service Locator for UI classes that can't be created by your container using dependency injection, and for stuff that you aren't unit testing anyway.
Try to implement as little logic as possible in those UI classes (treat them as Humble Objects containing only view-related code). This allows you to unit test as much as possible.
Wrap the container in a static method to hide it from the rest of the application. Make sure that a call to this static method fails when the dependency cannot be resolved (see the sketch after this list).
Resolve all dependencies in the (default) constructor of that type. This allows the application to fail fast when one of its dependencies cannot be resolved when that type is created, instead of later on when some button is clicked.
Check during app start-up (or using a unit test), if all those UI types can be created. This saves you from having to go through the whole application (by opening all forms) to see if there is an error in the DI configuration.
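Here is a minimal sketch of the static wrapper and fail-fast constructor advice, assuming Ninject as the container behind it; ApplicationServices, ILogger and BaseForm are illustrative names:

using Ninject;
using System.Windows.Forms;

public static class ApplicationServices
{
    private static IKernel kernel;

    public static void SetKernel(IKernel k) { kernel = k; }

    public static T Resolve<T>()
    {
        // Ninject's Get<T>() throws when no binding exists, so a wrongly
        // configured dependency fails here rather than on a button click.
        return kernel.Get<T>();
    }
}

public class BaseForm : Form
{
    private readonly ILogger logger;   // ILogger is illustrative

    public BaseForm()
    {
        // Resolve all dependencies in the default constructor so the form
        // fails fast when the DI configuration is wrong.
        this.logger = ApplicationServices.Resolve<ILogger>();
    }
}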
When types cannot be built by the container, there is no reason to register them in the container. If they can be created by the container (such as with ASP.NET MVC Controller classes), it can be useful to register them explicitly, because some containers allow you to verify the configuration up front, which will detect configuration errors in those types right away.
Besides unit testing, there are two other important arguments against the use of the Service Locator, which are given by Mark Seemann in his famous blog post Service Locator is an Anti-Pattern:
Service Locator "hides a class’ dependencies, causing run-time errors instead of compile-time errors"
Service Locator is "making the code more difficult to maintain"
Is anyone aware of a set of classes to abstract away the specific Dependency injection library (Spring, Castle, StructureMap, Ninject... etc.) ?
We all use a DI container to abstract away a specific implementation of our code, but we could also use the same interface / strategy pattern to write a generic interface based DI container using specific implementations such as Castle.Windsor, Unity... etc.
In general the basic pattern of "getting an object" from a container is pretty universal. For example:
IService service = IocContainer.Get<IService>();
Where IocContainer is a generic wrapper class around a specific library implementation such as Castle.Windsor, Unity... etc.
Of course, in addition to writing specific implementations that you could "plug in", each implementation would have its own configuration file format.
Anyone have suggestions on existing well tested wrapper classes for DI containers?
The problem with all these wrappers and container abstractions is that they restrict you to the common subset of the functionality that all the containers share. That's why you shouldn't do it. Instead, use the container of your choice correctly:
Have just one "Get" in your application - in the composition root
Configure the container with conventions rather than using one-by-one service registration. This keeps the configuration small (see the sketch below).
Use factory interfaces wherever you have to create instances after the initial composition of your application, and implement them as part of the container configuration (some containers do this implementation automatically for you).
With these simple rules your container is known in one place only - the composition root - which makes any abstraction of the container obsolete. That way you can use the full power of the container.
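As a hedged sketch of the convention and factory points above, assuming Ninject plus the Ninject.Extensions.Conventions and Ninject.Extensions.Factory packages (IWidgetFactory is an illustrative name):

using Ninject;
using Ninject.Extensions.Conventions;
using Ninject.Extensions.Factory;

var kernel = new StandardKernel();

// One convention instead of one Bind<>() per service: every class in this
// assembly is bound to its matching interface (FooService -> IFooService).
kernel.Bind(x => x.FromThisAssembly()
                  .SelectAllClasses()
                  .BindDefaultInterface());

// The factory interface is implemented by the container itself; resolving
// IWidgetFactory gives you an object whose methods call back into the kernel.
kernel.Bind<IWidgetFactory>().ToFactory();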
I am starting with ASP.NET MVC and trying to learn DI and dependency inversion at the same time. I am setting up an MVC project where the controllers and views reside in one assembly, and I also have a couple more assemblies for domain models and services that do most of the actual business logic.
The plan is to have all my services implement interfaces. The controllers that call the services access them through these interfaces. Instantiation is done using the Ninject DI framework.
Now the actual question; who "owns" the interfaces? From my understanding of dependency inversion, the service interfaces would be owned by the controllers and therefore reside in that assembly.
None of your components have to own the interfaces. Good interfaces are their own animal - they just have to be visible to both the controllers and services.
Physically segregating them creates headaches that may be unnecessary - I recommend that you do not use multiple assemblies without a good reason.
The interfaces must be visible to the implementors. The implementation may be visible to consumers. If you want to separate the implementation from interface such that they can be deployed separately, then the interfaces should reside in their own assembly.
If you are disciplined enough to organize your code based on namespaces and have no special deployment requirements, the interfaces can reside in the same assembly as the implementation.
Interfaces to services (which normally provide data and are implemented as WCF) need to reside in a separate DLL that the MVC project will have a reference to.
So here is a typical (and basic) splitting of the classes and interfaces into projects (and assemblies):
Common
Entity (references Common)
Service Interface (references Common and Entity)
Service Implementation (references all above)
Presentation (references all above except Implementation): contains WCF proxies and view-specific logic
MVC project (references all above except Implementation)
I have looked at simpler applications like Nerddinner and ContactManager as well as more complicated ones like Kigg. I understand the simpler ones and now I would like to understand the more complex ones.
Usually the simpler applications have repository classes and interfaces (as loosely coupled as they can get) on top of either LINQtoSQL or the Entity Framework. The repositories are called from the controllers to do the necessary data operations.
One common pattern I see when I examine more complicated applications like Kigg or Oxite is the introduction of (I am only scratching the surface here but I have to start somewhere):
IoC / DI (in Kigg's case, Unity)
Web Request Lifetime manager
Unit of Work
Here are my questions:
I understand that in order to truly have a loosely coupled application you have to use something like Unity. But it also seems like the moment you introduce Unity to the mix you also have to introduce a Web Request Lifetime Manager. Why is that? Why is it that sample applications like Nerddinner do not have a Web Request Lifetime Manager? What exactly does it do? Is it a Unity specific thing?
A second pattern I notice is the introduction of Unit of Work. Again, same question: Why does Nerddinner or ContactManager not use Unit of Work? Instead these applications use the repository classes on top of Linq2Sql or Entity Framework to do the data manipulation. No sign of any Unit of Work. What exactly is it and why should it be used?
Thanks
Below is an example of DI in NerdDinner at the DinnersController level:
public DinnersController()
    : this(new DinnerRepository()) {
}

public DinnersController(IDinnerRepository repository) {
    dinnerRepository = repository;
}
So am I right to assume that because of the first constructor the controller "owns" the DinnerRepository and it will therefore depend on the lifetime of the controller since it is declared there?
When Linq-to-SQL is used directly, your controller owns the reference to the data context. It's usually a private reference inside the controller, and so is created as part of its construction. There's no need for lifetime management, since it's all in one place.
However, when you use an IoC container, your data repositories are created outside your controller. Since the IoC container that creates them for you doesn't know how, and for how long, you're going to use the created objects, a lifetime strategy is introduced.
For example, a data context (repository) is usually created at the beginning of the web request and destroyed at the end. However, for components that work with an external web service, or some static mapper (e.g. a logger), there's no need to create them each time. So you may want to tell the container to create them only once (i.e. a singleton lifestyle).
All this happens because IoC containers (like Unity) are designed to handle many situations, and they don't know your specific needs. For example, some applications use "conversation" transactions where an NHibernate session (or Entity Framework context, maybe) may last across several pages / web requests. IoC containers allow you to tweak object lifetimes to suit your needs. But as said, this comes at a price - since there's no single predefined strategy, you have to select one yourself.
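For instance, here is a hedged sketch of what such lifetime choices look like with Ninject's scoping API (Unity's lifetime managers express the same idea); the type names are illustrative:

using Ninject;
using Ninject.Web.Common;   // provides InRequestScope()

var kernel = new StandardKernel();

// A new repository (and data context) per web request, disposed at request end.
kernel.Bind<IDinnerRepository>().To<DinnerRepository>().InRequestScope();

// One logger instance shared by the whole application.
kernel.Bind<ILogger>().To<FileLogger>().InSingletonScope();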
Why NerdDinner and other sample applications do not use more advanced techniques is simply because they are intended to demonstrate MVC features, not advanced usage of some other libraries. I remember an article written to demonstrate one IoC container's advanced functionality - it broke some accepted design principles like separation of concerns - but that wasn't important, because design patterns were not the goal of the article. The same goes for simple MVC demonstration applications - they do not want you, the MVC newcomer, to get lost in IoC labyrinths.
And I would not recommend looking at Oxite as a design reference example:
http://codebetter.com/blogs/karlseguin/archive/2008/12/15/oxite-oh-dear-lord-why.aspx
http://ayende.com/Blog/archive/2008/12/19/oxite-open-exchangable-informative-troubled-engine.aspx
Most if not all of the DI containers deal with the concept of lifetimes, I believe. Depending on the scenario involved, you may want the container to always return the same instance of a registered component, while for another component, you may want it to always return a new instance. Most containers also allow you to specify that within a particular context, you want it to return the same instance, etc.
I don't know Unity very well (so far I have used Windsor and Autofac), but I suspect the web request lifetime manager to be an implementation of lifetime strategies where the same instance is provided by the container during the lifetime of a single web request. You will find similar strategies in containers like Windsor.
Finally, I suppose you are referring to Unit of Work. A Unit of Work is in essence a group of actions that you want to succeed or fail as one atomic business transaction. For a more formal description, look at Martin Fowler's definition. It is a concept that has gained more popularity in the context of Domain Driven Design. A unit of work keeps track of the changes you apply in such a transaction, and when the time is right, it commits these changes in one ACID transaction. In NHibernate e.g., the session supports the notion of unit of work and more specifically the change tracking, while in Linq2SQL it is the Context ...
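To make the idea a bit more tangible, here is a minimal hedged sketch of a unit-of-work wrapper over a Linq2SQL DataContext (the interface and class names are illustrative, not taken from any of the sample applications):

using System.Data.Linq;

// A group of changes that should succeed or fail as one atomic transaction.
public interface IUnitOfWork
{
    void Commit();
}

// With Linq-to-SQL the DataContext already does the change tracking,
// so the unit of work only has to flush it.
public class LinqToSqlUnitOfWork : IUnitOfWork
{
    private readonly DataContext context;

    public LinqToSqlUnitOfWork(DataContext context)
    {
        this.context = context;
    }

    public void Commit()
    {
        // Persists all tracked inserts, updates and deletes in one transaction.
        context.SubmitChanges();
    }
}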