I understand the benefit of DI from the singleton point of view and the reduction of boilerplate code. But I also found this on Wikipedia:
Another benefit is that it offers configuration flexibility because alternative implementations of a given service can be used without recompiling code.
When I use Spring or Guice, it always makes a contract between one service and one implementation of it. Am I missing a feature, or am I understanding the statement incorrectly?
You would normally have to recompile the part of the application that contains the configuration, but the rest of the application can stay the same. When the configuration is put in a separate module / assembly, only that module needs to be recompiled. When you configure the container using XML, (in theory) nothing needs to be recompiled.
You could even go one step further and change behavior at runtime (using decorators for instance) if you wish.
If the decision of which implementation to use for a given service is in configuration, you can change that decision purely in configuration with no code changes, as long as the alternative implementation you want to use already exists.
This is largely the case quite naturally with Spring, where this decision is usually in XML configuration files read at application startup.
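To make that concrete: the decision can be as small as a type name in a config file. Below is a minimal sketch in C# of the same idea that Spring's XML achieves for you; INotifier, SmsNotifier and the "notifierType" setting are hypothetical names for illustration.

using System;
using System.Configuration;

public interface INotifier { void Send(string message); }

public static class NotifierFactory
{
    public static INotifier Create()
    {
        // e.g. "MyApp.SmsNotifier, MyApp.Notifiers" in the app config;
        // changing this string swaps the implementation without
        // recompiling any of the consuming code
        string typeName = ConfigurationManager.AppSettings["notifierType"];
        Type type = Type.GetType(typeName, throwOnError: true);
        return (INotifier)Activator.CreateInstance(type);
    }
}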
I'm starting a new MVC project and have (almost) decided to give the Repository Pattern and Dependency Injection a go. It has taken a while to sift through the variations but I came up with the following structure for my application:
Presentation Layer: ASP.Net MVC front end (views/controllers, etc.)
Services Layer (Business Layer, if you prefer): interfaces and DTOs.
Data Layer: interface implementations and Entity Framework classes.
They are 3 separate projects in my solution. The Presentation Layer only has a reference to the Services Layer. The Data Layer also only has a reference to the Services Layer - so this is basically following Domain Driven Design.
The point of structuring things in this fashion is separation of concerns, loose coupling and testability. I'm happy to take advice on improvements if any of this is unreasonable.
The part I am having difficulty with is injecting an interface-implementing object from the Data Layer into the Presentation Layer, which is only aware of the interfaces in the Services Layer. This seems to be exactly what DI is for, and IoC frameworks (allegedly!) make this easier, so I thought I'd try MEF2. But of the dozens of articles, questions and answers I've read over the last few days, nothing seems to actually address this in a way that fits my structure. Almost all of them are outdated and/or are simple console application examples that have all the interfaces and classes in the same assembly, knowing all about one another and entirely defeating the point of loose coupling and DI. I have also seen others that require putting the Data Layer DLL in the Presentation Layer's bin folder and configuring other classes to look there - again hampering the idea of loose coupling.
There are some solutions that explore attribute-based registration, but that has supposedly been superseded by convention-based registration. I also see a lot of examples injecting an object into a controller constructor, which introduces its own set of problems to solve. I'm not convinced the controller should actually know about this, and would rather have the object injected into the model, but there may be reasons for it, as so many examples seem to follow that path. I haven't looked too deeply into this yet, as I'm still stuck trying to get the Data Layer object up into the Presentation Layer at all.
I believe one of my main problems is not understanding in which layer the various MEF2 pieces need to go, since every example I've found uses only one layer. There are containers and registrations and catalogues and exporting and importing configurations, and I've been unable to figure out exactly where all this code should go.
The irony is that modern design patterns are supposed to abstract complexity and simplify our task, but I'd be half finished by now if I'd just referenced the DAL from the PL and got to work on the actual functionality of the application. I'd really appreciate it if someone could say, 'Yep, I get what you're doing, but you're missing xyz. What you need to do is abc.'
Thanks.
Yep, I get what you're doing (more or less) but (as far as I can tell) you're missing a) the separation of contracts and implementation types into their own projects/assemblies, and b) a concept for configuring the DI container, i.e. configuring which implementations shall be used for the interfaces.
There are countless ways of dealing with this, so what I give you is my personal best practice. I've been working this way for quite a while now and am still happy with it, so I consider it worth sharing.
a. Always have two projects: MyNamespace.Something and MyNamespace.Something.Contracts
In general, for DI, I have two assemblies: One for contracts which holds only interfaces and one for the implementation of these interfaces. In your case, I would probably have five assemblies: Presentation.dll, Services.dll, Services.Contracts.dll, DataAccess.dll and DataAccess.Contracts.dll.
(Another valid option is to put all contracts in one assembly; let's call it Commons.dll.)
Obviously, DataAccess.dll references DataAccess.Contracts.dll, as the classes inside DataAccess.dll implement the interfaces inside DataAccess.Contracts.dll. Same for Services.dll and Services.Contracts.dll.
Now, the decoupling part: Presentation references Services.Contracts and DataAccess.Contracts. Services references DataAccess.Contracts. As you see, there is no dependency on concrete implementations. This is what the whole DI thing is about. If you decide to exchange your data access layer, you can swap DataAccess.dll while DataAccess.Contracts.dll stays the same. None of your other assemblies reference DataAccess.dll directly, so there are no broken links, version conflicts, etc. If this is not clear, try to draw a little dependency diagram. You will see that there are no arrows pointing to any assemblies which don't have .Contracts in their name.
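Here is a minimal sketch of what that split looks like in code, using hypothetical types (Customer, ICustomerRepository, EfCustomerRepository, CustomerService):

// DataAccess.Contracts.dll - interfaces (and DTOs) only
public class Customer { public int Id { get; set; } }

public interface ICustomerRepository
{
    Customer GetById(int id);
}

// DataAccess.dll - references only DataAccess.Contracts.dll
public class EfCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // a real implementation would query Entity Framework here
        return new Customer { Id = id };
    }
}

// Services.dll - also references only DataAccess.Contracts.dll;
// the concrete repository arrives through the constructor
public class CustomerService
{
    private readonly ICustomerRepository repository;

    public CustomerService(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public Customer LoadCustomer(int id)
    {
        return repository.GetById(id);
    }
}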
Does this make sense to you? Please ask if anything is unclear.
b. Choose how to configure the container
You can choose between explicit configuration (XML, etc.), attribute-based configuration and convention-based registration. While the first is a pain for obvious reasons, I am a fan of the second. I think it is more readable and easier to debug than convention-based config, but that is a matter of taste.
Of course, the container kind of bundles all the dependencies which you have avoided in your application architecture. To make clear what I mean, consider an XML config for your case: it will contain 'links' to all of the implementation assemblies DataAccess.dll, .... Still, this doesn't undermine the idea of decoupling. It is clear that you need to modify the configuration when an implementation assembly is exchanged.
However, when working with attribute- or convention-based configs, you generally work with the autodiscovery mechanisms you mention: 'Search in all assemblies located in xyz'. This does require placing all assemblies in the application's bin directory. There is nothing wrong with that, as the code needs to be somewhere, right?
What do you gain? Consider you've deployed your application and decide to swap the data access layer. Say you've chosen convention-based config of your DI container. What you can do now is open a new project in VS, reference the existing DataAccess.Contracts.dll and implement all the interfaces in whatever way you like, as long as you follow the conventions. Then you build the library, call it DataAccess.dll and copy it into your original application's program folder, replacing the old DataAccess.dll. Done: you've swapped the whole implementation without any of the other assemblies even noticing. A sketch of such a drop-in replacement follows.
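Sticking with the hypothetical ICustomerRepository contract from the sketch above, the replacement project would contain nothing but new implementations of the existing interfaces:

// NewDataAccess project - references only DataAccess.Contracts.dll.
// Build it, name the output DataAccess.dll and drop it into the
// application folder; no other assembly needs to be rebuilt.
public class WebServiceCustomerRepository : ICustomerRepository
{
    public Customer GetById(int id)
    {
        // would call a remote service here; stubbed for the sketch
        return new Customer { Id = id };
    }
}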
I think you get the idea. Using IoC and DI really is a tradeoff. I highly recommend being pragmatic in your design decisions. Don't interface everything; it just gets messy. Decide for yourself where DI and IoC really make sense, and don't get too influenced by the community's religious discussions. Still, used wisely, IoC and DI are really, really, really powerful!
Well, I've spent a couple more days on this (which is now around a week in total) and made little further progress. I am fairly sure I had the container set up correctly, with my conventions discovering the correct parts to be mapped etc., but I couldn't figure out what seemed to be the missing link to get the controller DI to activate - I constantly received the error message stating that I hadn't provided a parameterless constructor. So I'm done with it.
I did, however, manage to move forward with my structure and intention to use DI with an IoC container. If anyone hits the same wall I did and wants an alternative solution: ditch MEF2 and go with Unity. The latest version (3.5 at the time of writing) has discovery by convention baked in and works a treat out of the box - it even has a fairly thorough manual with worked examples. There are other IoC frameworks, but I chose Unity since it's Microsoft-supported and fares well in performance benchmarks. Install the bootstrapper package from NuGet and most of the work is done for you. In the end I only had to write one line of code to map my entire DAL (they even create a stub for you so you know where to insert it):
container.RegisterTypes(
    AllClasses.FromLoadedAssemblies().Where(t => t.Namespace == "xxx.DAL.Repository"),
    WithMappings.FromMatchingInterface,
    WithName.Default);
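With that registration in place (and assuming the Unity bootstrapper has hooked up MVC's DependencyResolver, which the NuGet package does for you), a controller just declares what it needs; IProductRepository here is a hypothetical interface from the Services Layer:

using System.Collections.Generic;
using System.Web.Mvc;

public interface IProductRepository
{
    IEnumerable<string> GetAll();
}

public class ProductController : Controller
{
    private readonly IProductRepository repository;

    // Unity resolves the mapped implementation and passes it in here
    public ProductController(IProductRepository repository)
    {
        this.repository = repository;
    }

    public ActionResult Index()
    {
        return View(repository.GetAll());
    }
}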
I have used Unity for my last project and was generally pleased. But benchmarks have me thinking I may go with Simple Injector for my next project.
However, Simple Injector does not seem to have an interface for its Container class. This means that anytime I want to use the container in a method, I cannot mock the container for unit testing.
I am confused how a tool that really functions based on interfaces would not itself provide an interface to the container. I know that the classic methods of dependency injection do not need the container anywhere other than at startup. (The rest uses constructor injection.) But I have found that when the rubber hits the road, that cannot always be true. Sometimes you just need the container in order to do a "resolve" in the code.
If I go with Simple Injector, that code seems to get harder to unit test.
Am I right? Or am I missing something?
Simple Injector does not contain an IContainer abstraction, because:
It would be useless for Simple Injector to define it, because when depending on IContainer instead of Container, your code would still depend on Simple Injector, and this causes a vendor lock-in, which Simple Injector tries to prevent.
Any code you write, apart from the application's Composition Root, should not depend on the container, nor on an abstraction over the container. Both are implementations of the Service Locator anti-pattern.
You should NOT use a DI library when unit testing. When unit testing, you should manually inject all fake or mock objects in the class under test. Using a container only complicates things. Perhaps you are using a container, because manually creating those classes is too cumbersome for you. This might indicate problems with your code (you might be violating the Single Responsibility Principle) or your tests (you might be missing a factory method to create the class under test).
You might use the container for your integration tests, but you shouldn't have that many integration tests in the first place. The focus should be on unit tests, and this should be easy when applying the dependency injection pattern. On top of that, there are better ways of hiding the container from your integration tests, compared to depending on a very wide library-defined interface.
It is trivial to define such an interface (plus an adapter) yourself, which justifies not having it in the library. It is your job as application developer to define the right abstractions for your application, as stated by the Dependency Inversion Principle. Libraries and frameworks that try to do this for you fail most of the time to provide an abstraction that works for everyone.
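For illustration, a minimal sketch of such a self-defined abstraction plus an adapter over Simple Injector (the IContainer name and Resolve method are your own to choose):

public interface IContainer
{
    T Resolve<T>() where T : class;
}

// Adapter that hides Simple Injector behind the application-owned interface
public class SimpleInjectorAdapter : IContainer
{
    private readonly SimpleInjector.Container container;

    public SimpleInjectorAdapter(SimpleInjector.Container container)
    {
        this.container = container;
    }

    public T Resolve<T>() where T : class
    {
        return container.GetInstance<T>();
    }
}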
The library itself does not use that abstraction, and according to the Framework Design Guidelines, a library should in that case not define such an abstraction for you. As stated in the previous point, Simple Injector would get the abstraction wrong anyway.
Last but not least, the Simple Injector container does actually implement System.IServiceProvider, which is defined in mscorlib.dll and can be used for retrieving service objects.
I think the answer given here is entirely founded upon accepting that Service Locator is an anti-pattern, which I don't believe is globally accepted as true. See Windows Workflow Foundation's Extensions support.
The anti-pattern link (and its two updates) may also be weak: the latest update claims a violation of encapsulation ("relieving you of the burden of having to understand every implementation detail of every piece of code in your code base") while at the same time claiming that up-front knowledge of dependencies is somehow different for that claim than discovering them via unit tests. Either way, you're going to need to know what to give it.
All in all, if you want to follow the Locator pattern, either leverage Simple Injector's IServiceProvider implementation, or simplify your container population (to a singleton) and create a static wrapper for it, as sketched below.
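A minimal sketch of that static wrapper option (with the caveats from the answer above in mind):

// A service-locator style facade over a single container instance;
// populated once at startup, used sparingly elsewhere
public static class ServiceLocator
{
    private static SimpleInjector.Container container;

    public static void SetContainer(SimpleInjector.Container c)
    {
        container = c;
    }

    public static T Resolve<T>() where T : class
    {
        return container.GetInstance<T>();
    }
}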
I have been reading a lot about dependency injection, thinking that it may be some really advanced way to program, but I can't see the difference between it and simply avoiding global state: when there is no global state, you are forced to pass all dependencies in to objects anyway.
Can someone please explain, as I think I may be missing the point about what dependency injection is?
Dependency injection is about decoupling code.
When you avoid the use of globals by passing arguments you are decoupling code. You are removing the dependency the code has on the globals.
You can generalize this decoupling to more than just the avoidance of globals. Take the following code:
def foo(arg):
    return ClassBar(arg).attr

foo(1)
The function foo is dependent on, or tightly coupled to, ClassBar. The reason this is not good is that you will be forced to update foo when:
the arguments to constructing ClassBar change
you want to change ClassBar to something else
another piece of code wants to access attr from a different object
If the code was rewritten:
def foo(instanceBar):
    return instanceBar.attr

foo(ClassBar(1))
You've pushed the coupling up to the caller. This removed the dependency from the definition of foo. This frees you from having to update foo in the cases outlined above. The more of your code that is decoupled, the fewer code changes you'll need to make.
What I understand about dependency injection is that you leave out the details of creating an object and only declare that such an object is needed. A framework, for example, will set this object later on, before it's needed.
So the value here is the separation of concerns. This is useful for testing when you will inject a mockup of the real object.
Dependency injection is a way of implementing the inversion of control pattern to avoid global state for dependency resolution. You can use the inversion of control pattern with or without dependency injection. And yes, not using global variables is an important part of the equation, whether you use dependency injection or not.
Dependency injection is really nothing more than what you'd get if you wrote a program in a bottom-up style where the top-most portion of the application resolved the dependencies for all subsystems. Say I had a program with a dependency injection configuration like:
<bean id="world" class="com.game.World" start-method="play">
<property name="player1" ref="player1"/>
<property name="player2" ref="player2"/>
</bean>
<bean id="player1" class="com.game.LocalPlayer"/>
<bean id="player2" class="com.game.NetworkPlayer/>
That would really be no different than if you created the objects by hand:
public static void main(String[] args) {
    World world = new World();
    world.player1 = new LocalPlayer();
    world.player2 = new NetworkPlayer();
    world.play();
}
Using dependency injection simply means writing code like the above is handled for you. In this simple example you can't make much of a case for using it over just using code, but in larger programs it does save you a lot of time. It also prevents you or team members from taking shortcuts because it's not as wide open as when you write code.
Dependency injection frameworks change your program from imperative-style code to a declarative-style language for dependencies. So you're writing a program through this declarative language, and you can augment it with lots of other features.
Features like having the framework resolve construction order and cyclic dependencies for you, or declaring external configuration and injecting those values into your declared objects (i.e. property files, XML configuration, etc.), which is really nice. All of these together make dependency injection frameworks quite compelling compared to doing all of this on your own.
Another thing is that dependency injection generally creates singleton objects. For things like services and DAOs, you would never want to have more than one instance. It's also nice to have them instantiated already (generally at app startup, in Spring), so you can use them whenever the need arises.
Assuming you are a good coder, you want to be able to test and replace the parts of your system easily. The best way to do this is with a modular design. That is to say, you want to break down your problem into smaller problems that you can solve and keep bug-free.
By using dependency injection, you are able to come up with these smaller components, test them and link them together in a standard way. This in turn results in a slicker, decoupled design, and your project doesn't begin to grind to a halt, because you are never working in highly complex code (at least in theory), so productivity remains high.
If you are a skilled developer, you can make use of the singleton pattern (and others) to get most of the same benefits. However, your entire team needs to have the same skill, or once again you get a coupled design and low throughput.
Using dependency injection looks to me like using the Windows registry: you load up the registry with things you want and then pull them out and use them in some module.
However, it breaks object-oriented code.
Say you have 20 items in your dependency registry: a database, a logger, an exception handler and so on.
Now in a given module you have NO IDEA which of these dependency services your module uses. Your context is further lost because you don't know what will be in the dependency registry at the time you run the code!
I cannot see any benefit here. It simply makes debugging impossible.
In what concrete web project(s) (you don't have to name them, of course), and specifically in what part of the web application/website that you have worked on, has dependency injection proven to be a good choice? Can you give concrete examples where you actually substituted one component for another with DI during the lifespan of the project, excluding cases for mock/unit testing?
Dependency injection is not about substituting components. It's about decoupling code, it helps keep cohesion high and coupling low.
Substituting components is just one of the things you can do with DI (and, in my experience, not too common a one).
If you really want examples of substituting components:
I had a faxing service that connected to a remote Windows Fax Server to send faxes. I replaced that with a service that sends faxes via j2.com instead.
I had a service to search "stuff". This service was first implemented against an RDBMS and later replaced with a search against a Solr instance.
The application cache was abstracted as a component. First it was implemented using the ASP.NET built-in cache; later it was replaced with memcached. A sketch of this one follows.
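Both implementations in this sketch satisfy the same hypothetical ICache contract, so the rest of the application never changes when one replaces the other (the memcached side is stubbed here):

public interface ICache
{
    object Get(string key);
    void Set(string key, object value);
}

// First implementation: the built-in ASP.NET cache
public class AspNetCache : ICache
{
    public object Get(string key)
    {
        return System.Web.HttpRuntime.Cache[key];
    }

    public void Set(string key, object value)
    {
        System.Web.HttpRuntime.Cache[key] = value;
    }
}

// Later replacement: memcached (delegation to a client library omitted)
public class MemcachedCache : ICache
{
    public object Get(string key) { return null; /* client call here */ }
    public void Set(string key, object value) { /* client call here */ }
}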
I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take a boatload of interfaces in their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels like there are certain properties that really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really doesn't.
What you're really doing is moving that coupling from compile time to runtime; if class A needs some interface B to work, an instance of a class which implements interface B still needs to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses that I've found useful for an inversion of control pattern:
A plugin architecture. By exposing the right entry points you can define the contract for the service that must be provided (see the sketch after this list).
Workflow-like architecture, where you can connect several components dynamically, wiring the output of one component to the input of another.
Per-client applications. Say you have various clients who pay for a set of "features" in your project. By using dependency injection you can easily provide just the core components plus the "added" components that supply only the features each client has paid for.
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as needed.
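A rough sketch of the plugin idea from the first item, with hypothetical names: the host defines the contract and discovers implementations in external assemblies at runtime, never referencing them at compile time.

using System;
using System.Collections.Generic;
using System.IO;
using System.Reflection;

public interface IPlugin
{
    string Name { get; }
    void Execute();
}

public static class PluginLoader
{
    public static IEnumerable<IPlugin> Load(string pluginDirectory)
    {
        foreach (string file in Directory.GetFiles(pluginDirectory, "*.dll"))
        {
            Assembly assembly = Assembly.LoadFrom(file);
            foreach (Type type in assembly.GetTypes())
            {
                // instantiate every concrete type that honors the contract
                if (typeof(IPlugin).IsAssignableFrom(type) && !type.IsAbstract)
                    yield return (IPlugin)Activator.CreateInstance(type);
            }
        }
    }
}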
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got some issues with single responsibility and static dependencies. Isolate code that performs a single function into a class and break the static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime. By doing this, you make it trivial to test the two parts separately, as the sketch below shows.
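A minimal sketch of that refactoring, with hypothetical types: the mail dependency is extracted behind an interface and injected, so a unit test can pass a fake IMailer and assert on what Process sends.

public class Order { }

public interface IMailer
{
    void Send(string to, string body);
}

public class OrderProcessor
{
    private readonly IMailer mailer;

    // the static dependency is gone; any IMailer can be supplied
    public OrderProcessor(IMailer mailer)
    {
        this.mailer = mailer;
    }

    public void Process(Order order)
    {
        // ... business logic ...
        mailer.Send("orders@example.com", "Order received");
    }
}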
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code
DI should be used to isolate your code from external resources (databases, web services, XML files, plugin architectures). The amount of time it would take to test your logic in code would almost be prohibitive at a lot of companies if you are testing components that DEPEND on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The amount of work involved in changing resources should be low (data access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to a class"?
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run.
I do it only when it helps with separation of concerns.
For example, cross-project I would provide an interface for implementers in one of my library projects, and the implementing project would inject whatever specific implementation it wants.
But that's about it... in all other cases it would just make the system unnecessarily complex.
Even with all the facts and processes in the world.. every decision boils down to a judgment call - Forgot where I read that
I think it's more of an experience / flight-time call.
Basically, if you see the dependency as a candidate object that may be replaced in the near future, use dependency injection. If I see 'classA and its dependencies' as one block for substitution, then I probably won't use DI for A's dependencies.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway, but if there's any doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another item I wrestle with is where should I use dependency injection? Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/Microkernel, I have no experience with anything else but I like it a lot.
As for how you decide what to inject: so far the following rule of thumb has served me well: if the class is so simple that it doesn't need unit tests, you can feel free to instantiate it in the class; otherwise you probably want to have the dependency come in through the constructor.
As for whether you should create an interface vs. just making your methods and properties virtual, I think you should go the interface route if either a) you can see the class having some level of reusability in a different application (e.g. a logger), or b) the class is otherwise difficult to mock, whether because of the number of constructor parameters or because there is a significant amount of logic in the constructor. A sketch of the rule of thumb follows.
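A small sketch of that rule of thumb, with hypothetical types: the collaborator worth faking in tests is injected, while the trivial class is instantiated in place.

public interface ILogger { void Info(string message); }

public class ReportHeader
{
    public string Title { get; private set; }
    public ReportHeader(string title) { Title = title; }
}

public class ReportBuilder
{
    // complex enough to fake in tests -> injected through the constructor
    private readonly ILogger logger;

    public ReportBuilder(ILogger logger)
    {
        this.logger = logger;
    }

    public ReportHeader Build()
    {
        // simple enough to not need its own test -> just new it up
        var header = new ReportHeader("Monthly");
        logger.Info("Building report");
        return header;
    }
}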