I'm relatively familiar with the concepts of DI/IoC containers, having previously worked on projects where their use was already in place. However, for this new project there is no existing framework, and I'm having to pick one.
Long story short, there are some scenarios where we'll be configuring several implementations for a given interface. Glancing around the web, it seems like using any of the mainstream frameworks to selectively bind to one of the implementations is quite simple.
There are, however, contexts where we'll need to run ALL the configured implementations. I've scoured all the IoC-tagged posts here and I'm trying to pore through the documentation of the major frameworks (so far looking at Unity, Ninject, and Windsor), but docs are often sparse and I don't have time to inspect the source for all the packages.
So, are there any mainstream IOC containers that will allow me to bind to all the configured concrete types for one of my services?
One thing that caught me out the first time I tried to resolve all implementations of a registered type was that unnamed (default) type registrations are not returned when you call ResolveAll(); only named registrations are returned.
So:
IUnityContainer container = new UnityContainer();
container.RegisterType<IMyInterface, MyFirstClass>();
container.RegisterType<IMyInterface, MySecondClass>("Two");
container.RegisterType<IMyInterface, MyThirdClass>("Three");
var instances = container.ResolveAll<IMyInterface>();
// Only the two named registrations come back; the default one is excluded.
Assert.AreEqual(2, instances.Count(), "MyFirstClass doesn't get constructed");
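If you do need the default registration to take part, one workaround (a sketch, using the same Unity setup as above) is simply to give every registration a name:

container.RegisterType<IMyInterface, MyFirstClass>("One");
container.RegisterType<IMyInterface, MySecondClass>("Two");
container.RegisterType<IMyInterface, MyThirdClass>("Three");

var instances = container.ResolveAll<IMyInterface>(); // now yields all three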
So I somehow missed this on my first pass through Unity...but I'll answer my own question.
Unity has precisely what I wanted.
http://msdn.microsoft.com/en-us/library/cc440943.aspx
Also, for anyone else doing the IOC hunt and dance like me, this link proved to be invaluable.
http://blog.ashmind.com/index.php/2008/09/08/comparing-net-di-ioc-frameworks-part-2/
I've spent a lot of time reading these articles (along with many others):
Mark Seemann - Pure DI
Mark Seemann - When to use a DI Container
Mark Seemann - Compose object graphs with confidence
Mark Seemann - Don't call the container; it'll call you
Mark Seemann - Understanding the Composition Root
and I'm still trying to wrap my head around DI and the concept of "wiring up the dependencies" and the "auto wiring" functionality of an IoC container.
I think I understand the theory of Dependency Injection and Inversion of Control and I've implemented the example shown here from 2016 (I updated the code to use PSR-11 and eliminate the need for the container-interop package):
https://www.sitepoint.com/how-to-build-your-own-dependency-injection-container/
The application of the container example is shown at the GitHub link: https://github.com/sitepoint-editors/Container .
Note that while this example uses PHP, I'm trying to understand the details of DI independently from language, so any language is welcome.
Can someone explain the difference between manually wiring up dependencies and using a container's auto-wiring functionality? The SitePoint article briefly mentions that more advanced containers add automatic wiring functionality, implying that the example doesn't contain this function already. And can someone explain the application shown on the GitHub page and how it relates to core DI and IoC concepts, like the Composition Root?
Can someone explain the difference between manually wiring up dependencies, and using a container's auto wiring functionality?
Pure DI is the practice of applying DI without using a DI Container. This means that you build a graph of objects by newing up objects using the new construct of your programming language. See for instance this example in C# (from listing 12.2 of Mark's book Dependency Injection Principles, Practices, and Patterns):
var controller = new HomeController(
    new ProductService(
        new SqlProductRepository(
            new CommerceContext(connectionString)),
        new AspNetUserContextAdapter()));
According to that book, Auto-Wiring is:
the ability to automatically compose an object graph from maps between Abstractions and concrete types by making use of type information supplied by the compiler and the [runtime environment]. (see 12.1.2)
In other words, with a DI Container, you will be able to 'just' tell the container about your types, and it will figure out which dependencies a type has and will be able to 'wire' that type with its dependencies.
Considering the previous example, listing 12.3 shows how you only have to specify the mappings between abstractions and concrete types in a container:
var container = new AutoWireContainer();
container.Register(typeof(IUserContext), typeof(AspNetUserContextAdapter));
container.Register(typeof(IProductRepository), typeof(SqlProductRepository));
container.Register(typeof(IProductService), typeof(ProductService));
container.Register(typeof(CommerceContext), () => new CommerceContext(connectionString));
And when you ask for a HomeController, the container knows how to construct the entire graph.
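Under the hood, auto-wiring is little more than reflection over constructor parameters. Here is a minimal sketch of the idea in C# (a hypothetical toy container, not the book's actual implementation; it omits the factory-delegate overload shown above):

using System;
using System.Collections.Generic;
using System.Linq;

public class TinyAutoWireContainer
{
    private readonly Dictionary<Type, Type> map = new Dictionary<Type, Type>();

    public void Register(Type abstraction, Type implementation) =>
        map[abstraction] = implementation;

    public object Resolve(Type type)
    {
        // Map the abstraction to its registered implementation, if any.
        var concrete = map.TryGetValue(type, out var impl) ? impl : type;

        // Use the type information supplied by the compiler: inspect the
        // constructor and recursively resolve each of its parameters.
        var ctor = concrete.GetConstructors().Single();
        var args = ctor.GetParameters()
                       .Select(p => Resolve(p.ParameterType))
                       .ToArray();

        return ctor.Invoke(args);
    }
}

Asking such a container for a HomeController recursively resolves ProductService, SqlProductRepository, and so on, which is exactly the 'wiring' done by hand in the Pure DI example.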
The SitePoint article briefly mentions that more advanced containers add the automatic wiring functionality
To me, Auto-Wiring is what makes a library into a DI Container. Something can't be called a DI Container if it doesn't, at the very least, support Auto-Wiring.
I'm starting a new MVC project and have (almost) decided to give the Repository Pattern and Dependency Injection a go. It has taken a while to sift through the variations but I came up with the following structure for my application:
Presentation Layer: ASP.Net MVC front end (views/controllers, etc.)
Services Layer (Business Layer, if you prefer): interfaces and DTOs.
Data Layer: interface implementations and Entity Framework classes.
They are 3 separate projects in my solution. The Presentation Layer only has a reference to the Services Layer. The Data Layer also only has a reference to the Services Layer - so this is basically following Domain Driven Design.
The point of structuring things in this fashion is separation of concerns, loose coupling, and testability. I'm happy to take advice on improvements if any of this seems unreasonable.
The part I am having difficulty with is injecting an interface-implementing object from the Data Layer into the Presentation Layer, which is only aware of the interfaces in the Services Layer. This seems to be exactly what DI is for, and IoC frameworks (allegedly!) make this easier, so I thought I'd try MEF2. But of the dozens of articles and questions and answers I've read over the last few days, nothing seems to actually address this in a way that fits my structure. Almost all are deprecated and/or are simple console application examples that have all the interfaces and classes in the same assembly, knowing all about one another and entirely defying the point of loose coupling and DI. I have also seen others that require the Data Layer DLL to be put in the Presentation Layer's bin folder, with other classes configured to look there - again hampering the idea of loose coupling.
There are some solutions that explore attribute-based registration, but that has supposedly been superseded by convention-based registration. I also see a lot of examples injecting an object into a controller constructor, which introduces its own set of problems to solve. I'm not convinced the controller should know about this, actually, and would rather have the object injected into the model, but there may be reasons for this as so many examples seem to follow that path. I haven't looked too deeply into this yet as I'm still stuck trying to get the Data Layer object up into the Presentation Layer anywhere at all.
I believe one of my main problems is not understanding in which layer the various MEF2 things need to go, since every example I've found only uses one layer. There are containers and registrations and catalogues and exporting and importing configurations, and I've been unable to figure out exactly where all this code should go.
The irony is that modern design patterns are supposed to abstract complexity and simplify our task, but I'd be half finished by now if I'd just referenced the DAL from the PL and got to work on the actual functionality of the application. I'd really appreciate it if someone could say, 'Yep, I get what you're doing but you're missing xyz. What you need to do is abc'.
Thanks.
Yep, I get what you're doing (more or less) but (as far as I can tell) you're missing a) the separation of contracts and implementation types into their own projects/assemblies and b) a concept for configuring the DI container, i.e. configuring which implementations shall be used for the interfaces.
There are unlimited ways of dealing with this, so what I give you is my personal best practice. I've been working that way for quite a bit now and am still happy with it, so I consider it worth sharing.
a. Always have two projects: MyNamespace.Something and MyNamespace.Something.Contracts
In general, for DI, I have two assemblies: One for contracts which holds only interfaces and one for the implementation of these interfaces. In your case, I would probably have five assemblies: Presentation.dll, Services.dll, Services.Contracts.dll, DataAccess.dll and DataAccess.Contracts.dll.
(Another valid option is to put all contracts in one assembly; let's call it Commons.dll.)
Obviously, DataAccess.dll references DataAccess.Contracts.dll, as the classes inside DataAccess.dll implement the interfaces inside DataAccess.Contracts.dll. Same for Services.dll and Services.Contracts.dll.
Now, the decoupling part: Presentation references Services.Contracts and DataAccess.Contracts. Services references DataAccess.Contracts. As you see, there is no dependency on concrete implementations. This is what the whole DI thing is about. If you decide to exchange your data access layer, you can swap DataAccess.dll while DataAccess.Contracts.dll stays the same. None of your other assemblies references DataAccess.dll directly, so there are no broken links, version conflicts, etc. If this is not clear, try to draw a little dependency diagram. You will see that there are no arrows pointing to any assemblies which don't have .Contracts in their name.
Does this make sense to you? Please ask, if there is something unclear.
b. Choose how to configure the container
You can choose between explicit configuration (XML, etc.), attribute-based configuration, and convention-based registration. While the first is a pain for obvious reasons, I am a fan of the second. I think it is more readable and easier to debug than convention-based config, but that is a matter of taste.
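For illustration, attribute-based registration in MEF looks roughly like this (a sketch; the interface and class names are made up):

using System.ComponentModel.Composition;

// The contract lives in DataAccess.Contracts.dll...
public interface IProductRepository { }

// ...and the implementation in DataAccess.dll marks itself for export,
// so the container can discover it without central registration code.
[Export(typeof(IProductRepository))]
public class SqlProductRepository : IProductRepository { }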
Of course, the container kind of bundles all the dependencies which you have spared in your application architecture. To make clear what I mean, consider an XML config for your case: it will contain 'links' to all of the implementation assemblies (DataAccess.dll, ...). Still, this doesn't undermine the idea of decoupling. It is clear that you need to modify the configuration when an implementation assembly is exchanged.
However, working with attribute- or convention-based configs, you generally work with the autodiscovery mechanisms you mention: 'search in all assemblies located in xyz'. This does require placing all assemblies in the application's bin directory. There is nothing wrong with that, as the code needs to be somewhere, right?
What do you gain? Consider you've deployed your application and decide to swap the DataAccess layer. Say, you've chosen convention based config of your DI container. What you can do now is to open a new project in VS, reference the existing DataAccess.Contracts.dll and implement all the interfaces in whatever way you like, as long as you follow the conventions. Then you build the library, call it DataAccess.dll and copy and paste it to your original application's program folder, replacing the old DataAccess.dll. Done, you've swapped the whole implementation without any of the other assemblies even noticing.
I think you get the idea. Using IoC and DI really is a tradeoff. I highly recommend being pragmatic in your design decisions. Don't interface everything; it just gets messy. Decide for yourself where DI and IoC really make sense, and don't get too influenced by the community's religious discussions. Still, used wisely, IoC and DI are really, really, really powerful!
Well I've spent a couple more days on this (which is now around a week in total) and made little further progress. I am fairly sure I had the container set up correctly with my conventions discovering the correct parts to be mapped etc., but I couldn't figure out what seemed to be the missing link to get the controller DI to activate - I constantly received the error message stating that I hadn't provided a parameterless constructor. So I'm done with it.
I did, however, manage to move forward with my structure and intention to use DI with an IoC. If anyone hits the same wall I did and wants an alternative solution: ditch MEF 2 and go with Unity. The latest version (3.5 at time of writing) has discovery by convention baked in and just works like a treat out of the box - it even has a fairly thorough manual with worked examples. There are other IoC frameworks, but I chose Unity since it's MS supported and fares well in performance benchmarks. Install the bootstrapper package from NuGet and most of the work is done for you. In the end I only had to write one line of code to map my entire DAL (they even create a stub for you so you know where to insert it):
// Map every class in the DAL repository namespace to its matching interface
// (e.g. ProductRepository -> IProductRepository) as the default registration.
container.RegisterTypes(
    AllClasses.FromLoadedAssemblies().Where(t => t.Namespace == "xxx.DAL.Repository"),
    WithMappings.FromMatchingInterface,
    WithName.Default);
I'm starting to work with StructureMap on a windows application project. In working on learning the basics, I found 2 ways to arrange my solution that accomplish the same goal, and I'm wondering if anyone can comment on if one of these 2 seems like a better option, and why.
The goal here was to use IoC so that I could use two services without taking dependencies on them. So I created interfaces in my Business Layer, and then implemented those interfaces in my Infrastructure project, wrapping the actual services at that point.
In my first attempt at this, I created a project, DependencyResolver, which has code to initialize StructureMap using the fluent interface (when someone wants IServiceA, give them an instance of ServiceX). Because the initialization of DependencyResolver needed to be kicked off from my app, I have a reference from the app to DependencyResolver, like this:
So then I found that I could remove my reference to DependencyResolver and rely on the StructureMap scanner and naming conventions to get that reference at runtime, so then my setup looks like this:
So then, I took the naming conventions further, down into the services I was using, and was able to do away with DependencyResolver altogether. At this point, I am totally relying on the StructureMap scanner and naming conventions to get things set up correctly:
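For reference, the convention-based setup described above looks something like this in StructureMap's fluent API (a sketch; which assemblies to scan is an assumption):

var container = new Container(x =>
{
    x.Scan(scan =>
    {
        scan.AssembliesFromApplicationBaseDirectory();
        scan.WithDefaultConventions(); // maps IServiceAbc to ServiceAbc
    });
});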
So. Here I am, not quite sure how I should feel about these 3 options. Option 1 seems good, except I'm left with the UI indirectly referencing all the things that it shouldn't be referencing (directly) because of the use of StructureMap. However, I'm not sure if this really matters.
Option 2 removes the need for a reference from the app to DependencyResolver and relies on naming conventions to access classes in that project, and I still have a high level of control over all the remaining setup (but I have now taken a dependency on StructureMap directly from my application).
Option 3 seems the easiest (just name everything a certain way and scan your assemblies), but it also seems more error-prone and brittle, especially if I wanted to do something a little more complex than IServiceAbc => ServiceAbc.
So, can anyone who has a lot more experience with this stuff than I do give me some advice?
Should I be avoiding the indirect references from my App to my services, and if so, what are the real benefits of doing that?
Am I right that trying to do everything with naming conventions is only wise on simple projects?
Is there a standard pattern for doing what I'm trying to do here?
Sorry for the long post..
Encapsulate all usage of StructureMap in a Composition Root and use Constructor Injection throughout the rest of your code base.
You can implement the Composition Root in a separate assembly if you'd like, but I usually prefer placing it directly in the executable itself, and then implement all of the application logic in separate libraries.
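A minimal sketch of that arrangement (hypothetical types, reusing the IServiceA/ServiceX names from the question):

// In the executable: the Composition Root is the only code that
// references StructureMap.
var container = new Container(x =>
{
    x.For<IServiceA>().Use<ServiceX>();
});
var presenter = container.GetInstance<MainPresenter>();

// In a class library: plain Constructor Injection, no container in sight.
public class MainPresenter
{
    private readonly IServiceA service;

    public MainPresenter(IServiceA service)
    {
        this.service = service;
    }
}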
I have used the first design on projects and it works extremely well.
Dependency resolvers are more or less factories that return interface instances; isn't StructureMap just one way to implement this? In that case, I would request any item via the dependency resolver in one central place. Then it is also possible to remove StructureMap and drop in another service locator (Unity, Castle Windsor, etc.) without changing anything else about your app.
Dependencies should not be resolved from two places, as in your second option, and not only via the UI project, as in your third option (what happens if you swap out your UI project and put a different one in?).
I've just started using Inversion of Control containers and I'm having a difficult time understanding when to use the fluent API or XML when configuring and registering components.
Are there any best practices around when you should prefer one over the other? Or is this simply developer preference? Would it be considered bad practice to mix them in a simple application?
Thanks!
The sweet spot for me is a combination of the two. XML for pulling together large units of functionality and possibly configuring them at deployment time; fluent for setting up the individual components within those units. See http://code.google.com/p/autofac/wiki/StructuringWithModules for an Autofac example. Other containers often have similar capabilities.
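For instance, an Autofac module (a sketch with made-up component names) groups the fluent registrations for one unit of functionality; the XML configuration can then choose which modules to load per deployment:

using Autofac;

// One unit's registrations, bundled as a module.
public class PersistenceModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterType<SqlProductRepository>().As<IProductRepository>();
    }
}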
Container configuration in code (what you call "the fluent API") is more maintainable, because the code is compiled and therefore the compiler will find many errors for you. However, it requires you to recompile if you want to make changes.
Container configuration in XML is the opposite: the compiler cannot help you find errors, but you can make changes without recompiling.
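For example, a Castle Windsor-style XML registration looks roughly like this (a sketch; the type names are made up). Note that a typo in the type attribute only surfaces at runtime:

<configuration>
  <components>
    <component id="logger"
               service="MyApp.ILogger, MyApp"
               type="MyApp.ConsoleLogger, MyApp" />
  </components>
</configuration>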
If you are just starting out with dependency injection, I would stick to configuration in code to keep things simple.
I noticed that the other answers don't touch on a very important problem that fluent configuration brings to your DI implementation: it hard-wires concrete implementations into your code base, creating tightly-coupled dependencies, which is ironically what IoC containers are trying to solve. You will find yourself contaminating all your applications with component registration code, and that doesn't sound good.
Why doesn't it sound good? Because when you have a distributed system with many applications that use different components you basically have two options to register your components:
1. Add DI code in all your applications.
Something like this...
var builder = new ContainerBuilder();
builder.RegisterType<ConsoleLogger>().As<ILogger>();
builder.RegisterType<EmailNotificationDispatcher>().As<INotificationDispatcher>();
builder.RegisterType<DefaultMessageHandler>().As<IMessageHandler>();
//many more...
and of course, ALL the applications in the system will need this sort of logic with different components.
Advantage
You're using Fluent Configuration because it is pretty...that's about it :)
Disadvantage
You are contaminating all your applications with this fluent component registration, including CONCRETE types
You will need to recompile your application(s) if you need to swap out a dependency/component
If you need to migrate to a different IoC container tool, the same applies
2. Wrap your DI code inside a wrapper component to make your applications IoC-container-agnostic.
Something like this...
using Autofac;

public sealed class ComponentFactory
{
    private static IContainer _container;

    public static void Init()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<ConsoleLogger>().As<ILogger>();
        builder.RegisterType<EmailNotificationDispatcher>().As<INotificationDispatcher>();
        builder.RegisterType<DefaultMessageHandler>().As<IMessageHandler>();
        //many more...
        _container = builder.Build();
    }

    // The only container call the rest of the system ever sees.
    public static T Resolve<T>()
    {
        return _container.Resolve<T>();
    }
}
With this approach, your applications effectively won't be aware of what concrete classes they're using at all.
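Consuming code in any application then reduces to something like this (same hypothetical wrapper as above):

// No Autofac reference and no concrete types in application code.
ComponentFactory.Init();
var logger = ComponentFactory.Resolve<ILogger>();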
You don't need to re-compile and redeploy all applications if you need to swap components.
Furthermore, if you need to migrate to a different IoC Container, the changes will be done in one place...the wrapper class.
The huge disadvantage of this approach is that you will have to deploy all components to every application's bin folder, even if there are components (or binary assemblies) an application might not need. So your lightweight applications will become fat, heavy clients.
XML (or JSON) Configuration
With XML configuration, none of the above happens...
you deploy what your applications need...on a per-application basis
you register the components your applications need...on a per-app basis
your applications are light and have no idea of what concrete implementation they're using. They become very scalable
there's more but hopefully you're getting the idea
Now, I'm not an advocate of XML configuration - I hate it; I'm a coder and prefer the fluent way over XML (or JSON) - but in my experience XML/JSON configuration proves to be more scalable and maintainable.
Now, shoot me!!!
I'm fairly new to the DI concept, but I have been using it to some extent in my designs - mainly by 'injecting' interfaces into constructors and having factories create my concrete classes. Okay, it's not configuration-based - but it's never NEEDED to be.
I started to look at DI frameworks such as Spring.NET and Castle Windsor, and stumbled across this blog by Ayende.
What I got from this is
A) DI frameworks are awesome, but
B) It means we don't have to worry about how our system is designed in terms of dependencies.
For me, I'm used to thinking hard about how to loosely-couple my system but at the same time have some sort of control over dependencies.
I'm a bit scared of losing this control, and it being just a free-for-all. ClassA needs ClassB = no problem, just ask and ye shall receive! Hmmm.
Or is that just the point and this is the future and I should just go with it?
Thoughts?
One basic OO principle is that you want your code to depend on interfaces and not implementations; DI is how we do that. Historically, here is how it evolved:
People initially created classes they depended upon by "new'ing" them:
IMyClass myClass = new MyClass();
Then we wanted to remove the direct instantiation, so we used static creation methods:
IMyClass myClass = MyClass.Create();
Then we no longer depended on the lifecycle of the class, but still depended on it for instantiation, so then we used the factory:
IMyClass myClass = MyClassFactory.Create();
This moved the direct dependency from the consuming code to the factory, but we still had the dependency on MyClass indirectly, so we used the service locator pattern like this:
IMyClass myClass = (IMyClass)Context.Find("MyClass");
That way we were only dependent on an interface and the name of a class in our code. But it can be made better: why not depend simply on an interface in our code? We can, with dependency injection. If you use property injection, you simply put a property setter for the interface you want in your code. You then configure what the actual dependency is outside of your code, and the container manages the lifecycle of that class and your class.
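A sketch of property injection, reusing the IMyClass name from the snippets above:

public class Consumer
{
    // The container (or a test) assigns this dependency from outside;
    // Consumer itself never names a concrete type.
    public IMyClass MyClass { get; set; }
}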
I wouldn't say that you don't have to think about dependencies, but using an IoC framework allows you to change the types which fulfill the dependencies with little or no hassle, since all the wiring is done in a central place.
You still have to think about what interfaces you need and getting them right is not always a trivial matter.
I don't see how a loosely coupled system could be considered lazily designed. If you go through all the trouble of getting to know an IoC framework, you're certainly not taking the shortcut.
I think that, ideally, if you already have a loosely coupled system, using a container will only move the place where you wire up the dependencies out of your code, making them softer and letting your system depend on the container to build your object graph.
In reality, attempting to use the container will probably show you that your system is not as loosely coupled as you thought it was... so in this way, it may help you to create a better design.
Well, I'm a newbie at this subject, so maybe I'm not entirely right.
Cheers.
I must be high, because I thought the whole point of dependency injection is that the code that does stuff simply declares its dependencies so that someone who's creating it will know what to create with it for it to operate correctly.
If dependency injection makes you lazy, it's maybe because it forces someone else to deal with dependencies? That's the whole point! That someone else doesn't really need to be someone else; it just means the code you write doesn't need to be concerned with dependencies because it declares them upfront. And they can be managed because they are explicit.
Dependency injection can be a bit difficult to get used to - instead of a direct path through your code, you end up looking at seemingly unconnected objects, and a given action traces its path through a series of these objects whose coupling seems, to be kind, abstract.
It's a paradigm shift similar to getting used to OO. The intention is that your objects are written to have a focused, single responsibility, using the dependent objects as they're declared by the interface and handled by the framework.
This not only makes loose coupling easier, it makes it almost unavoidable, which in turn makes it much simpler to do things like run your object in a mock environment - the IoC container takes the place of the run environment.
I would disagree and say they lead to better design in many cases. Too often devs create components that do too much and have too many dependencies. With IoC, I find developers tend to migrate to a better way of thinking and produce smaller, simpler components that can be assembled together into an app.
If they follow the spirit and write tests, they will refine their components further. Both exercises force you to write better, testable components, which fits very well with how IoC containers work.
You still have to worry. My team use Castle Windsor in our current project. It annoys me that it delays dependency lookup from compile time to runtime.
Without Castle Windsor, you write code, and if you haven't sorted your dependencies out - bang, the compiler will complain. With Castle Windsor, you configure the dependencies in an XML file. They're still there, just separated out from the bulk of your code. The problem is, your code can compile fine even if you make a mess of defining the dependencies. At runtime, Castle Windsor looks up a concrete class to service each request for an interface by using reflection. If the dependency can't be found, you get an error at runtime.
I think Castle Windsor does check that the dependencies exist when it's initialized, so that it can throw an error pretty quickly. But it's still annoying, when using a strongly typed language, that this fuss can't be sorted out at compile time.
So... anyway. Dependencies still seriously matter. You'll almost certainly pay more attention to them using DI than before.
We wrote a custom DI framework; though it took some time to get right, it was all worth the effort. We have divided the whole system into layers, and the dependency injection in each layer is bound by rules. E.g. in the Log layer, CUD and BO interfaces cannot be injected.
We are still contemplating the rules; some of them change every week while others remain the same.