I've just started using Inversion of Control containers and I'm having a difficult time understanding when to use the fluent API or XML when configuring and registering components.
Are there any best practices around when you should prefer one over the other? Or is this simply developer preference? Would it be considered bad practice to mix them in a simple application?
Thanks!
The sweet spot for me is a combination of the two. XML for pulling together large units of functionality and possibly configuring them at deployment time; fluent for setting up the individual components within those units. See http://code.google.com/p/autofac/wiki/StructuringWithModules for an Autofac example. Other containers often have similar capabilities.
Container configuration in code (what you call "the fluent API") is more maintainable, because the code is compiled and therefore the compiler will find many errors for you. However, it requires you to recompile if you want to make changes.
Container configuration in XML is the opposite: the compiler cannot help you find errors, but you can make changes without recompiling.
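For illustration, here is a rough sketch of what XML-based registration can look like, using Unity's configuration schema (the element names and namespace are from memory and may differ between container versions, so treat this as an assumption and check your container's documentation). The same mapping that fluent code expresses as RegisterType&lt;ILogger, ConsoleLogger&gt;() becomes data:

```xml
<!-- Sketch of container configuration in XML (Unity-style).
     Type names here are illustrative. -->
<unity xmlns="http://schemas.microsoft.com/practices/2010/unity">
  <container>
    <register type="MyApp.ILogger, MyApp" mapTo="MyApp.ConsoleLogger, MyApp" />
  </container>
</unity>
```

Because this is plain data, swapping ConsoleLogger for another implementation is a config change and a restart, not a recompile; the trade-off is that a typo in a type name only surfaces at runtime.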
If you are just starting out with dependency injection, I would stick to configuration in code to keep things simple.
I noticed that the other answers don't touch on a very important problem that fluent configuration brings to your DI implementation: hard-wiring concrete implementations into your code base, thus creating tightly coupled dependencies...which, ironically, is what IoC containers are trying to solve. You will find yourself contaminating all your applications with component registration code, and that doesn't sound good.
Why doesn't it sound good? Because when you have a distributed system with many applications that use different components you basically have two options to register your components:
1. Add DI code on all your applications.
Something like this...
var builder = new ContainerBuilder();
builder.RegisterType<ConsoleLogger>().As<ILogger>();
builder.RegisterType<EmailNotificationDispatcher>().As<INotificationDispatcher>();
builder.RegisterType<DefaultMessageHandler>().As<IMessageHandler>();
//many more...
and of course, ALL the applications in the system will need this sort of logic with different components.
Advantage
You're using Fluent Configuration because it is pretty...that's about it :)
Disadvantage
You are contaminating all your applications with this fluent component registration, including CONCRETE types
You will need to recompile your application(s) if you need to swap out a dependency/component
If you need to migrate to a different IoC container tool...the same
2. Wrap your DI code inside a wrapper component to make your applications IoC-container-agnostic.
Something like this...
public sealed class ComponentFactory
{
    private static IContainer _container;

    public static void Init()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<ConsoleLogger>().As<ILogger>();
        builder.RegisterType<EmailNotificationDispatcher>().As<INotificationDispatcher>();
        builder.RegisterType<DefaultMessageHandler>().As<IMessageHandler>();
        //many more...
        _container = builder.Build();
    }

    public static T Resolve<T>()
    {
        return _container.Resolve<T>();
    }
}
With this approach, your applications effectively won't be aware of what concrete classes they're using at all.
You don't need to re-compile and redeploy all applications if you need to swap components.
Furthermore, if you need to migrate to a different IoC Container, the changes will be done in one place...the wrapper class.
The huge disadvantage of this approach is that you will have to deploy all components to every application's bin folder, even if there are components (or binary assemblies) an application might not need. So your lightweight applications will become fat, heavy clients.
XML (or json) Configuration
With xml configuration none of the above will happen...
you deploy what your applications need...on a per-application basis
you register the components your applications need...on a per-app basis
your applications stay light and have no idea what concrete implementations they're using, which keeps them scalable
there's more but hopefully you're getting the idea
Now, I'm not an advocate of XML configuration; I hate it. I'm a coder and prefer the fluent way over XML (or JSON), but in my experience XML/JSON configuration proves to be more scalable and maintainable.
Now, shoot me!!!
Related
I have used Unity for my last project and was generally pleased. But benchmarks have me thinking I may go with Simple Injector for my next project.
However, Simple Injector does not seem to have an interface for its Container class. This means that anytime I want to use the container in a method, I cannot mock the container for unit testing.
I am confused how a tool that really functions based on interfaces would not itself expose an interface to the container. I know that the classic methods of dependency injection do not need the container anywhere other than at startup. (The rest uses constructor injection.) But I have found that when the rubber hits the road, that cannot always be true. Sometimes you just need the container in order to do a "resolve" in the code.
If I go with Simple Injector, then that code seems to get harder to unit test.
Am I right? Or am I missing something?
Simple Injector does not contain an IContainer abstraction, because:
It would be useless for Simple Injector to define it, because code that depends on IContainer instead of Container would still depend on Simple Injector, and this causes a vendor lock-in, which Simple Injector tries to prevent.
Any code you write, apart from the application's Composition Root, should not depend on the container, nor on an abstraction over the container. Both are implementations of the Service Locator anti-pattern.
You should NOT use a DI library when unit testing. When unit testing, you should manually inject all fake or mock objects in the class under test. Using a container only complicates things. Perhaps you are using a container, because manually creating those classes is too cumbersome for you. This might indicate problems with your code (you might be violating the Single Responsibility Principle) or your tests (you might be missing a factory method to create the class under test).
You might use the container for your integration tests, but you shouldn't have that many integration tests in the first place. The focus should be on unit tests, and that should be easy when applying the dependency injection pattern. On top of that, there are better ways of hiding the container from your integration tests than depending on a very wide library-defined interface.
It is trivial to define such interface (plus an adapter) yourself, which justifies not having it in the library. It is your job as application developer to define the right abstractions for your application as stated by the Dependency Inversion Principle. Libraries and frameworks that tend to do this will fail most of the time in providing an abstraction that works for everyone.
The library itself does not use that abstraction and a library should, according to the Framework Design Guidelines, in that case not define such abstraction for you. As stated in the previous point, Simple Injector would get the abstraction wrong anyway.
Last but not least, the Simple Injector container does actually implement System.IServiceProvider which is defined in mscorlib.dll and can be used for retrieving service objects.
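As a sketch of the "define it yourself" point above: an application-owned abstraction plus an adapter. The names here are illustrative, and only the adapter, which lives next to the Composition Root, references Simple Injector:

```csharp
// The abstraction your own code (e.g. integration-test glue) depends on:
public interface IObjectResolver
{
    T Resolve<T>() where T : class;
}

// The adapter: the only piece of code that knows about Simple Injector.
public sealed class SimpleInjectorResolverAdapter : IObjectResolver
{
    private readonly SimpleInjector.Container container;

    public SimpleInjectorResolverAdapter(SimpleInjector.Container container)
    {
        this.container = container;
    }

    public T Resolve<T>() where T : class
    {
        return this.container.GetInstance<T>();
    }
}
```

Migrating to a different container later means rewriting only this adapter, not the abstraction or its consumers.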
I think the answer given here is entirely founded upon accepting that ServiceLocator is an anti-pattern, which in turn I don't believe is globally accepted as true. See Windows Workflow Foundation's Extensions support.
The anti-pattern link (and its two updates) may also be weak... the latest update claims a violation of encapsulation ("relieving you of the burden of having to understand every implementation detail of every piece of code in your code base.") while at the same time claiming that up-front knowledge of dependencies is somehow different for that claim than discovering them via unit tests. Either way, you're going to need to know what to give it.
All in all, if you want to follow the Locator pattern, either leverage its IServiceProvider, or simplify your container population (to a singleton) and create a static wrapper for it.
I am building a WPF application that uses an IoC container for dependency injection (MEF in my case). The application contains several detailed processes that we are modeling as WF workflows. However, some (not all) of the activities rely on services and other components that are managed by the IoC container. I see a few possible ways to accomplish this but none of them seem to follow best-practices. They are:
Use a service-locator in the constructor or Execute method of each activity to locate
and set the dependencies. Personally, I don't like service locators, as I believe they violate one of the tenets of DI: code shouldn't know where or how a dependency is created. It also makes the activities less testable (or at least adds a couple of steps to the testing process). I've seen some examples on StackOverflow and CodePlex that use a WF Services extension that basically works the same way. I'm not using WF Services, so that isn't an option.
Export each of the activities and have the workflow import them.
This would ensure that the container has satisfied all of the
dependencies before we need them but means we aren't building the
workflow in XAML.
Export the workflow and have it import the dependencies needed by
the activities. Then I would have to set the dependencies as
parameters for the activities to consume. Not only will this result
in a lot of overhead code in the workflow, but it now means that the
workflow requires knowledge of the dependencies for all of the
activities. If an activity changes, is added or removed, I now have
to make changes to the workflow to accommodate any changes to the
dependencies.
Take the same approach as #3 except instead of exporting the
workflow, have a controlling class that is exported, imports all of
the dependencies and sets them as input parameters for the workflow
itself. Each activity would pull the dependencies it needs. This
has all of the same problems as #3 with more code to maintain.
So, my question is, what approach should I take? (I.e. what approach have you taken?)
I am also assuming the above list is not comprehensive and hope someone will suggest a better option, if one exists.
Thx!
Approach two seems the most adequate. You could use an activity declaration in XAML that is later used to import the real activity.
EDIT:
<wf:Workflow.Activities><activities:PassThrough UserId="mstewart" /></wf:Workflow.Activities>
and then you could have something along those lines
interface IActivityInfo
{
    IActivity ImportActivity();
}

interface IActivity
{
    IActivityInfo Info { get; }
}

interface IActivity<TActivityInfo> : IActivity where TActivityInfo : IActivityInfo
{
}

class PassThrough : IActivityInfo
{
    public IActivity ImportActivity()
    {
        return ServiceLocator.Current.GetInstance<IActivity<PassThrough>>();
    }
}

[Export(typeof(IActivity<PassThrough>))]
class PassThroughActivity : IActivity<PassThrough>
{
    public IActivityInfo Info
    {
        get { return new PassThrough(); }
    }
}
This approach would let you easily separate xaml design process from underlying activity.
I have looked at simpler applications like Nerddinner and ContactManager as well as more complicated ones like Kigg. I understand the simpler ones and now I would like to understand the more complex ones.
Usually the simpler applications have repository classes and interfaces (as loosely coupled as they can get) on top of either LINQtoSQL or the Entity Framework. The repositories are called from the controllers to do the necessary data operations.
One common pattern I see when I examine more complicated applications like Kigg or Oxite is the introduction of (I am only scratching the surface here but I have to start somewhere):
IOC DI (in Kigg's case Unity)
Web Request Lifetime manager
Unit of Work
Here are my questions:
I understand that in order to truly have a loosely coupled application you have to use something like Unity. But it also seems like the moment you introduce Unity to the mix you also have to introduce a Web Request Lifetime Manager. Why is that? Why is it that sample applications like Nerddinner do not have a Web Request Lifetime Manager? What exactly does it do? Is it a Unity specific thing?
A second pattern I notice is the introduction of Unit of Work. Again, same question: Why does Nerddinner or ContactManager not use Unit of Work? Instead these applications use the repository classes on top of Linq2Sql or Entity Framework to do the data manipulation. No sign of any Unit of Work. What exactly is it and why should it be used?
Thanks
Below is a example of DI in Nerddiner at the DinnersController level:
public DinnersController()
: this(new DinnerRepository()) {
}
public DinnersController(IDinnerRepository repository) {
dinnerRepository = repository;
}
So am I right to assume that because of the first constructor the controller "owns" the DinnerRepository and it will therefore depend on the lifetime of the controller since it is declared there?
When Linq-to-SQL is used directly, your controller owns the reference to the data context. It's usually a private reference inside the controller, and so is created as part of its construction. There's no need for lifetime management, since everything is in one place.
However, when you use an IoC container, your data repositories are created outside your controller. Since the IoC container that creates them doesn't know how, or for how long, you're going to use the created objects, a lifetime strategy is introduced.
For example, a data context (repository) is usually created at the beginning of the web request and destroyed at the end. However, for components that work with an external web service, or some static mapper (e.g. a logger), there's no need to create them each time. So you may want the container to create them once (i.e. a singleton lifestyle).
All this happens because IoC containers (like Unity) are designed to handle many situations, and they don't know your specific needs. For example, some applications use "conversation" transactions, where an NHibernate (or maybe Entity Framework) session may last across several pages / web requests. IoC containers allow you to tweak object lifetimes to suit your needs. But as said, this comes at a price: since there's no single predefined strategy, you have to select one yourself.
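As a sketch of what picking a lifetime per registration looks like in Unity (the interfaces and implementation names here are illustrative; ContainerControlledLifetimeManager is Unity's singleton lifetime):

```csharp
var container = new UnityContainer();

// Default: a new instance is created on every resolve (transient).
container.RegisterType<IDinnerRepository, DinnerRepository>();

// One shared instance for the container's lifetime (singleton).
container.RegisterType<ILogger, FileLogger>(
    new ContainerControlledLifetimeManager());

// A per-web-request lifetime is typically a custom lifetime manager
// backed by HttpContext.Current.Items, so the instance is created at
// the start of the request and disposed at its end.
```

This is why introducing a container drags lifetime management in with it: each registration forces you to choose (or accept the default for) one of these strategies.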
Why NerdDinner and other applications do not use more advanced techniques is simply because they are intended to demonstrate MVC features, not advanced usages of some other libraries. I remember an article written to demonstrate one IoC container advanced functionality - this article broke some approved design patterns like separation of concerns - but this wasn't that important because design patterns were not the goal of the article. Same with simple MVC-demonstration-applications - they do not want you, the MVC newcomer, to be lost in IoC labyrinths.
And I would not recommend to look at Oxite as a design reference example:
http://codebetter.com/blogs/karlseguin/archive/2008/12/15/oxite-oh-dear-lord-why.aspx
http://ayende.com/Blog/archive/2008/12/19/oxite-open-exchangable-informative-troubled-engine.aspx
Most if not all of the DI containers touch the concept of life times, I believe. Depending on the scenario involved, you may want the container to always return the same instance of a registered component, while for another component, you may want it to always return a new instance. Most containers also allow you to specify that within a particular context, you want it to return the same instance, etc..
I don't know Unity very well (so far I have used Windsor and Autofac), but I suspect the web request lifetime manager to be an implementation of lifetime strategies where the same instance is provided by the container during the lifetime of a single web request. You will find similar strategies in containers like Windsor.
Finally, I suppose you are referring to Unit of Work. A Unit of Work is in essence a group of actions that you want to succeed or fail as one atomic business transaction. For a more formal description, look at Martin Fowler's definition. It is a concept that has gained more popularity in the context of Domain Driven Design. A unit of work keeps track of the changes you apply in such a transaction, and when the time is right, it commits these changes in one ACID transaction. In NHibernate e.g., the session supports the notion of unit of work and more specifically the change tracking, while in Linq2SQL it is the Context ...
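A minimal sketch of what such an abstraction can look like (the names are illustrative, not from any particular library):

```csharp
// Illustrative unit-of-work abstraction: a group of changes that
// succeed or fail together. In NHibernate the ISession plays this role;
// in Linq2Sql the DataContext does (SubmitChanges commits the batch).
public interface IUnitOfWork : IDisposable
{
    void Commit(); // flush all tracked changes in one atomic transaction
}

public class OrderService
{
    private readonly IUnitOfWork unitOfWork;

    public OrderService(IUnitOfWork unitOfWork)
    {
        this.unitOfWork = unitOfWork;
    }

    public void PlaceOrder(/* ... */)
    {
        // ...mutate several entities via repositories...
        this.unitOfWork.Commit(); // all changes hit the database together
    }
}
```

The point is that individual repository calls only record changes; nothing is persisted until Commit, so the whole business operation is one ACID transaction.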
In most samples I have seen on the web, DI in MVC Controllers is done like this
public ProductController(IProductRepository Rep)
{
this._rep = Rep;
}
A custom ControllerFactory is used and it utilizes the DI framework of choice and the repository is injected.
Why is the above considered better than
public ProductController()
{
this._rep = ObjectFactory.GetInstance<IProductRepository>();
}
This will get the same results but doesn't require a custom controller factory.
As far as testing is concerned the Test App can have a separate BootStrapper. That way when the controllers are being tested they can get the fake repositories and when they are used for real they will get the real ones.
Constructor injection (the first approach) is better than the service locator pattern (the second approach) for several reasons.
First, service locator hides dependencies. In your second example, looking at the public interface alone, there's no way to know that ProductControllers need repositories.
What's more, I've got to echo OdeToCode. I think
IProductRepository repository = Mockery.NewMock<IProductRepository>();
IProductController controller = new ProductController(repository);
is clearer than
ObjectFactory.SetFactory(typeof(IProductRepository), new MockRepositoryFactory());
IProductController controller = new ProductController();
Especially if the ObjectFactory is configured in a test fixture's SetUp method.
Finally, the service locator pattern is demonstrably sub-optimal in at least one particular case: when you're writing code that will be consumed by people writing applications outside of your control. I wager that people generally prefer constructor injection (or one of the other DI methods) because it's applicable for every scenario. Why not use the method that covers all cases?
(Martin Fowler offers a much more thorough analysis in "Inversion of Control Containers and the Dependency Injection Pattern", particularly the section "Service Locator vs Dependency Injection").
The primary drawback to the second constructor is now your IoC container has to be properly configured for each test. This setup can become a real burden as the code base grows and the test scenarios become more varied. The tests are generally easier to read and maintain when you explicitly pass in a test double.
Another concern is coupling a huge number of classes to a specific DI/IoC framework. There are ways to abstract it away, of course, but you still have code littered throughout your classes to retrieve dependencies. Since all the good frameworks can figure out what dependencies you need by looking at the constructor, it’s a lot of wasted effort and duplicated code.
When you use the second approach, disadvantages are:
Huge and unreadable test setup/context methods are needed
The container is coupled to the controller
You will need to write a lot more code
Why would you want to use an IoC container at all if you don't want dependency injection?
I'm relatively familiar with the concepts of DI/IOC containers having worked on projects previously where their use were already in place. However, for this new project, there is no existing framework and I'm having to pick one.
Long story short, there are some scenarios where we'll be configuring several implementations for a given interface. Glancing around the web, it seems like using any of the mainstream frameworks to selectively bind to one of the implementations is quite simple.
There are however contexts where we'll need to run ALL the configured implementations. I've scoured all the IOC tagged posts here and I'm trying to pour through documentation of the major frameworks (so far looking at Unity, Ninject, and Windsor), but docs are often sparse and I've not the time to inspect source for all the packages.
So, are there any mainstream IOC containers that will allow me to bind to all the configured concrete types for one of my services?
One thing that caught me the first time I was trying to resolve all implementations of a registered type was that un-named (default) type registrations will not be returned when you call ResolveAll(). Only named instances are returned.
So:
IUnityContainer container = new UnityContainer();
container.RegisterType<IMyInterface, MyFirstClass>();
container.RegisterType<IMyInterface, MySecondClass>("Two");
container.RegisterType<IMyInterface, MyThirdClass>("Three");
var instances = container.ResolveAll<IMyInterface>();
Assert.AreEqual(2, instances.Count, "MyFirstClass doesn't get constructed");
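One workaround, assuming you do want all three instances back, is simply to name every registration, including the one you would otherwise leave as the default:

```csharp
// Sketch: name all registrations so ResolveAll returns each of them.
IUnityContainer container = new UnityContainer();
container.RegisterType<IMyInterface, MyFirstClass>("One");
container.RegisterType<IMyInterface, MySecondClass>("Two");
container.RegisterType<IMyInterface, MyThirdClass>("Three");

var instances = container.ResolveAll<IMyInterface>(); // now all three
```

The trade-off is that a plain Resolve&lt;IMyInterface&gt;() (with no name) will then fail, since there is no longer a default registration.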
So I somehow missed this on my first pass through Unity...but I'll answer my own question.
Unity has precisely what I wanted.
http://msdn.microsoft.com/en-us/library/cc440943.aspx
Also, for anyone else doing the IOC hunt and dance like me, this link proved to be invaluable.
http://blog.ashmind.com/index.php/2008/09/08/comparing-net-di-ioc-frameworks-part-2/