I was reading over Injection by Hand and Ninjection (as well as Why use Ninject). I ran into two points of confusion:
The inject-by-hand technique I am already familiar with, but I am not familiar with Ninjection, and thus am not sure how the complete program would work. Perhaps it would help to provide a complete program rather than, as is done on that page, a program broken up into pieces.
I still don't really get how this makes things easier. I think I'm missing something important. I can kind of see how an injection framework would be helpful if you were creating a group of injections and then switching between two large groups all at once (this is useful for mocking, among other things), but I think there is more to it than that. But I'm not sure what. Or maybe I just need more examples of why this is exciting to drive home the point.
When injecting your dependencies without a DI framework you end up with arrow code all over your application telling classes how to build their dependencies.
public Contact()
: this(new DataGateWay())
{
}
But if you use something like Ninject, all the arrow code is in one spot making it easier to change a dependency for all the classes using it.
internal class ProductionModule : StandardModule
{
public override void Load()
{
Bind<IDataGateway>().To<DataGateWay>();
}
}
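The same "one spot" idea can be sketched without any framework at all. Below is a minimal, hand-rolled registry in Java for illustration (the names Bindings, DataGateway and SqlDataGateway are made up for this sketch, not Ninject's API): every consumer asks the registry for an interface, and changing a single bind line swaps the implementation for all of them.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

public class Bindings {
    // One place that maps an interface to a factory for its implementation,
    // analogous to the Bind<IDataGateway>().To<DataGateWay>() registration above.
    private final Map<Class<?>, Supplier<?>> map = new HashMap<>();

    public <T> void bind(Class<T> iface, Supplier<? extends T> factory) {
        map.put(iface, factory);
    }

    @SuppressWarnings("unchecked")
    public <T> T resolve(Class<T> iface) {
        Supplier<?> factory = map.get(iface);
        if (factory == null) throw new IllegalStateException("No binding for " + iface);
        return (T) factory.get();
    }

    // Example types standing in for IDataGateway / DataGateWay.
    public interface DataGateway { String name(); }
    public static class SqlDataGateway implements DataGateway {
        public String name() { return "sql"; }
    }

    public static void main(String[] args) {
        Bindings bindings = new Bindings();
        bindings.bind(DataGateway.class, SqlDataGateway::new);
        // Changing the line above swaps the dependency for every consumer.
        System.out.println(bindings.resolve(DataGateway.class).name());
    }
}
```

This is, of course, only the registration half of what a container does; the point is that the wiring lives in one file instead of being scattered through constructors.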
I still don't really get how this makes things easier. I think I'm missing something important.
Wouldn't it be great if we only had to develop discrete components, each providing distinct functionality we could easily understand, re-use and maintain? Where we only worked on components?
What prevents us from doing so is that we need some infrastructure that can combine and manage these components into a working application automatically. Infrastructure that does this is available to us: an IoC framework.
So an IoC framework isn't about managing dependencies or testing or configuration. Instead it is about managing complexity, by enabling you to work on, and think about, nothing but components.
It allows you to easily test your code by mocking the interfaces that you need for a particular code block. It also allows you to easily swap functionality without breaking other parts of the code.
It's all about cohesion and coupling.
You probably won't see the benefit on small projects, but once you get past small it becomes really apparent when you have to make changes to the system. It's a breeze when you've used DI.
I really like the autowiring aspect of some frameworks ... when you do not have to care about what your types need to be instantiated.
EDIT:
I read this article by Ayende @ Rahien, and I really support his point.
Dependency injection using most frameworks can be configured at runtime, without requiring a recompile.
Dependency injection can get really interesting if you get your code to the point where there are very few dependencies in the code at all. Some dependency injection frameworks will allow you define your dependencies in a configuration file. This can be very useful if you need a really flexible piece of software that needs to be changed without modifying the code. For example, workflow software is a prime candidate for this type of solution.
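A minimal sketch of that configuration-file idea in Java (the workflow.impl key and the Workflow types are hypothetical, invented for this example): the implementation class is named in configuration, so changing the behaviour means editing a properties file rather than recompiling.

```java
import java.io.StringReader;
import java.util.Properties;

public class ConfigDriven {
    public interface Workflow { String run(); }

    public static class DefaultWorkflow implements Workflow {
        public String run() { return "default"; }
    }

    public static void main(String[] args) throws Exception {
        // In a real application this would be loaded from a .properties file
        // on disk; inlined here so the sketch is self-contained.
        Properties config = new Properties();
        config.load(new StringReader("workflow.impl=ConfigDriven$DefaultWorkflow"));

        // The class to instantiate is decided by the config value, not the code.
        Class<?> impl = Class.forName(config.getProperty("workflow.impl"));
        Workflow w = (Workflow) impl.getDeclaredConstructor().newInstance();
        System.out.println(w.run());
    }
}
```

Swapping in another Workflow implementation is then a one-line config change, which is exactly the flexibility the workflow-software example calls for.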
Dependency Injection is essential for Component-Driven Development, which allows you to build really complex applications in a much more efficient and reliable manner.
It also lets you cleanly separate common cross-cutting concerns from the rest of the code (which results in a more reusable and flexible codebase).
Related links:
Inversion of Control and Dependency Injection - Wiki
Component-Driven Development - Wiki
Cross-cutting concerns - Wiki
Related
I know it is recommended to use DI when we have multiple implementations of an interface. But is there any other benefit that would recommend using DI when there is only a single implementation?
I've often found that the bigger the solution, the smaller the percentage of interfaces with multiple implementations. But as @Mikhail pointed out, it's certainly a lot easier to plug in newer implementations should they arise.
However, the strongest benefit of dependency injection is that it can make testing a lot easier: by injecting interfaces in the unit under test, you're able to mock those interfaces so that they return some dummy objects that can help you reach certain code paths.
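That testing benefit is easy to see in miniature. In this hand-rolled Java sketch (OrderService and PaymentGateway are hypothetical names invented for the example), the dependency arrives through the constructor, so a test can hand in a canned fake and steer the code down whichever path it wants:

```java
public class OrderService {
    // The dependency is an interface, so a test can supply any implementation.
    public interface PaymentGateway { boolean charge(int cents); }

    private final PaymentGateway gateway;

    public OrderService(PaymentGateway gateway) { this.gateway = gateway; }

    public String checkout(int cents) {
        return gateway.charge(cents) ? "paid" : "declined";
    }

    public static void main(String[] args) {
        // In a unit test, inject fakes that force each code path:
        OrderService approves = new OrderService(cents -> true);
        OrderService declines = new OrderService(cents -> false);
        System.out.println(approves.checkout(500)); // paid
        System.out.println(declines.checkout(500)); // declined
    }
}
```

No mocking library is required for a one-method interface like this; a lambda is enough, which is part of why constructor injection makes tests so cheap to write.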
I also think that's easier and more elegant/readable to scale up a project through this inversion-of-control concept, and it's also pretty handy for following a SOLID design.
Just that other implementation(s) might appear in the future.
What is DI for and what is its use case, when we have ServiceManager?
They appear to be similar since in configuration files for both zend-di and zend-servicemanager we can set up some options such as aliases and invokables.
I am trying to get a better understanding of what is happening behind the scenes with these components, and documentation did not give me enough info.
Could you please tell me what the difference is and when I should use Di instead of ServiceManager?
Zend\Di relies on magic, like reflection, to detect and inject dependencies, while the service manager uses user-provided factories. That is the main difference.
Di is somewhat deprecated in the community in favor of SM due to complexity, debugging and performance issues.
It is supposed to be good for RAD, but you need above-average knowledge to use it properly.
On the other hand, SM has pretty verbose and explicit wiring; you can open your code a year later and easily figure out what is going on.
Zend\Di takes care of wiring your classes together, whereas with Zend\ServiceManager you have to wire things manually and write a factory closure for every class you want to instantiate.
Zend\ServiceManager is much faster since it does not rely on the slow Reflection API. On the other hand, writing closures for large applications with hundreds of classes becomes very tedious. Keeping your closures up-to-date will get trickier as your application grows.
To address this problem, I have written a Zend Framework 2 module called ZendDiCompiler. It relies on Zend\Di to scan your code and auto-generates factory code to instantiate your classes. You get the best of both components: the power of Zend\Di and the performance of Zend\ServiceManager.
I have put quite a bit of work into the documentation of ZendDiCompiler and some easy and more advanced usage examples are provided, too.
Basically the difference is as follows:
Zend\ServiceManager = Factory-driven IoC Container
Zend\Di = Autowiring IoC implementation
Zend\Di was refactored for version 3. Its behaviour is now more solid and predictable than v2's, and it is designed to integrate seamlessly into zend-servicemanager to provide auto-wiring capabilities (no more odd magic). Since it uses PHP's reflection API to resolve dependencies, it is slower than a factory-driven approach. Therefore version 3 comes with an AoT compiler to create a pre-resolved injector that omits the use of reflection. An additional benefit: the generated factories can also be used with Zend\ServiceManager directly.
There is a guide for using AoT with both components: https://zendframework.github.io/zend-di/cookbook/aot-guide/
DI/IOC: There are so many frameworks and the examples quickly get sticky with details particular to that framework. I find that I often learn a new technology best if I can learn its principles outside of a framework (the forest obscuring the trees).
My question: What are the simple principles of DI/IOC? I am looking for a (framework agnostic) building approach that outlines what is gained, what the options are, what the costs are, for each simple principle. Please don't just paste links unless they actually address my question. I keep getting lost in those forests ;}
Second question: Once I understand the core principles, would it be worth building my own simple framework? IE, would the insight gained be valuable relative to the effort expended?
Thanks in advance!
Robert
My very quick and dirty take on it
Dependency Injection and Inversion of Control are not the same thing. Inversion of Control uses DI.
IoC is a way of snapping your application together at run time rather than compile time.
Instead of 'newing up' a type in the code, it is injected at run time by the IoC container.
The IoC container knows what to inject into your class because: a) it looks at the constructor of that class, where all the parameters are interfaces; and b) it looks at its configuration file to see which class, implementing each of those interfaces, you have chosen to represent that interface in your application.
Here's a very basic example
Let's say you have an interface IEmailer for sending emails:
public interface IEmailer
{
void SendEmail();
}
And you have at least one implementation of this interface:
public class IainsEmailer : IEmailer
{
public void SendEmail()
{
// Send email
}
}
You define in the IoC container's config file (somehow):
IainsEmailer is my choice for IEmailer
Then in your code you can have the following and the IoC container will inject an IainsEmailer into any constructor that needs an IEmailer.
public class MyClass
{
    private readonly IEmailer _emailer;

    public MyClass(IEmailer emailer)
    {
        _emailer = emailer;
    }

    public void DoWork()
    {
        // You can now use the emailer as if you had created it yourself
        _emailer.SendEmail();
    }
}
I could go on. And on. But this really is the whole idea of IoC in a nutshell.
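That nutshell can be made concrete in a few dozen lines. The sketch below (Java, with made-up names; this is not any real framework's API) does exactly the two steps described above: it consults a registration map standing in for the config file, then reflects over the chosen class's constructor and recursively resolves each parameter.

```java
import java.lang.reflect.Constructor;
import java.util.HashMap;
import java.util.Map;

public class TinyContainer {
    // Stand-in for the config file: interface -> chosen implementation.
    private final Map<Class<?>, Class<?>> config = new HashMap<>();

    public <T> void register(Class<T> iface, Class<? extends T> impl) {
        config.put(iface, impl);
    }

    @SuppressWarnings("unchecked")
    public <T> T resolve(Class<T> type) throws Exception {
        Class<?> impl = config.getOrDefault(type, type);
        // Look at the constructor and recursively build each parameter.
        Constructor<?> ctor = impl.getConstructors()[0];
        Class<?>[] params = ctor.getParameterTypes();
        Object[] args = new Object[params.length];
        for (int i = 0; i < params.length; i++) {
            args[i] = resolve(params[i]);
        }
        return (T) ctor.newInstance(args);
    }

    // The IEmailer example from above, translated to Java.
    public interface Emailer { String send(); }
    public static class IainsEmailer implements Emailer {
        public String send() { return "sent"; }
    }
    public static class MyClass {
        public final Emailer emailer;
        public MyClass(Emailer emailer) { this.emailer = emailer; }
    }

    public static void main(String[] args) throws Exception {
        TinyContainer c = new TinyContainer();
        c.register(Emailer.class, IainsEmailer.class);
        MyClass mc = c.resolve(MyClass.class); // the Emailer is injected automatically
        System.out.println(mc.emailer.send());
    }
}
```

Real containers add lifetimes, scopes, cycle detection and much more, but the resolve-by-constructor loop is the heart of it.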
This is a somewhat dated but good overview: Inversion of Control
To me the underlying principles that drive IoC are the concepts of components, modularity, loose coupling and cohesion. This means that you design software that is broken up into cohesive, modular units with clear dependencies, or links, to other units. This is a basic principle of engineering in any field, not just programming.
Once you have modularity, to create a functional system you need a way to link these components so that they form a functional whole. IoC/DI frameworks are components that abstract this concept of linking: they allow you to write code that declares links between components, and then have the ability to execute those links. The "inversion" part comes from the fact that the components themselves no longer link to their dependencies; instead the linking is performed outside, by a linking component. This, in turn, is called dependency injection.
A typical IoC/DI framework allows you to write code that specifies a particular implementation of an interface that another component depends on, and then allows you to instantiate that component; upon creation, the required implementation will be provided by the IoC container. This abstracts the concept of object creation.
Your questions warrant a fully fledged Wikipedia article. I assume you've already read up on the actual article so here's my shorter take on your question: Dependency Injection is all about instantiation.
The main idea is that classes should not be responsible for instantiating their dependencies. The DI framework takes over instantiation because it can do it flexibly, via a configuration mechanism that is (typically) external to your code.
This enables you to write classes that are:
Decoupled from their dependencies.
Better focused on their actual responsibility (rather than on their dependencies).
Changeable through configuration, without the need for recompilation.
More easily testable.
Arguably, better designed.
To answer your second question: if you're trying to learn about DI then writing a framework yourself is clearly overkill. I would recommend that you choose one of the popular open source frameworks and write code that consumes it - most have good tutorials just for that purpose.
Taking it to the next level, you can take the source code of the framework you were using and start digging into it. The good frameworks are very well written and relatively easy to read.
Good Luck!
urig
PS - If you're in .net, don't start with MEF. Start with Windsor, Unity or Spring.Net. MEF is both more and less than a DI framework so leave it for a little later.
Is there a down side? I feel almost dependent on it now. Whenever a project gets past a certain size, I almost feel an allergic reaction to standard patterns and immediately re-wire it with a dependency injection framework.
The largest issue I've found is it can be confusing for other developers who are just learning it.
Also, I'd feel much better if it were a part of the language I was using. Though, for Java at least, there are a couple very lightweight libraries which are quite good.
Thoughts? Bad experiences? Or just stop worrying about it?
[EDIT] Re: Description of Dependency Injection itself
Sorry for being vague. Martin Fowler probably describes it FAR better than I ever could... no need to waste the effort.
Coincidentally, this confirms one point about it, that it's still not widely practiced and might tend to be a barrier when working with teams if everyone is not up to speed on it.
I've taken a stab at describing some of the possible downsides in a blog post here: http://kevin-berridge.blogspot.com/2008/06/ioc-and-di-complexity.html
The problem I have with DI is the same problem I have with COM and with any code that looks something like:
i = GetServiceOrInterfaceOrObject(...)
The problem is that such a system cannot be understood from the code. There must be documentation somewhere [else] that defines what service/interface/object can be requested by service/interface/object X. This documentation must not only be maintained, but be available as easily as the source.
Unless the document is very well written, it's often still not easy to see the relationships between objects. Sometimes relationships are temporal which makes them even harder to discover.
I like the KISS principle, and I'm a strong believer in using the right tool for the job. If the benefit of DI, for a given project, outweighs the need to write comprehensible code, then use it.
Also, I'd feel much better if it were a part of the language I was using.
FYI, there is a very simple and functional dependency injection mechanism included as part of JDK 6. If you need lightweight, straightforward dependency injection, then use it.
Using the ServiceLoader class you can request a service (or many implementations of the service) based on a class:
package dependencyinjection;

import java.util.ServiceLoader;

public abstract class FooService {
    public static FooService getService() {
        ServiceLoader<FooService> loader = ServiceLoader.load(FooService.class);
        for (FooService service : loader) {
            return service; // return the first implementation found on the classpath
        }
        throw new IllegalStateException("No FooService implementation found");
    }

    public abstract int fooOperation();
}
package dependencyinjection;

public class FooImpl extends FooService {
    @Override
    public int fooOperation() {
        return 2;
    }
}
How does ServiceLoader define which service implementations are returned?
In your project folder, create a folder named META-INF/services and inside it create a file named dependencyinjection.FooService. This file contains a line pointing to the service implementation, in this case: dependencyinjection.FooImpl
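As a rough sketch, assuming the package is spelled dependencyinjection, the layout would look like this (the META-INF/services folder must end up on the runtime classpath):

```
project/
  dependencyinjection/
    FooService.java
    FooImpl.java
  META-INF/
    services/
      dependencyinjection.FooService   (one line: dependencyinjection.FooImpl)
```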
This is not widely known yet.
I am a big believer in IoC; however, I have seen some projects with huge XML configuration files that no one understands. So beware of programming in XML.
In my opinion, the major drawbacks are the learning curve (as you point out) and the potential for the added abstraction to make it more difficult to debug (which is really part of the learning curve as well).
To me, DI seems to me to be more appropriate for larger, complex systems -- for small one-off apps, it may result in the app being over-architected, basically, having the architecture take more development time to adhere to than it can ever make up for in the value it provides.
Just stop worrying about it. It's my opinion that, in time, IoC techniques will be second nature to most developers. I'm trying to teach devs here at work about it, and I find it difficult to get the message across because it feels so unnatural compared to the way we've always done things... which just so happened to be the wrong way. Also, I find that developers who are both new to IoC and new to a project have an even harder time. They're used to using the IDE to follow the trail of dependencies to gain an understanding of how the whole thing "hangs together". That information is often written into arcane XML.
Could you add a link or two to explain what Dependency Injection actually is, for those of us playing along at home? The wikipedia article is entertaining, but not very enlightening.
The only down side I can think of is tiny performance decrease through constant virtual calls :)
@Blorgbeard: http://www.martinfowler.com/articles/injection.html is probably one of the best articles on the subject
I absolutely need to use an IoC container for decoupling dependencies in an ever increasingly complex system of enterprise services. The issue I am facing is one related to configuration (a.k.a. registration). We currently have 4 different environments -- development to production and in between. These environments have numerous configurations that slightly vary from environment to environment; however, in all cases that I can currently think of, dependencies between components do not differ from environment to environment, though I could have missed something and/or this could obviously change.
So, the ultimate question is, does anybody have a similar experience using an IoC framework? Or, can anybody recommend one framework over another that would provide flexible registration be it through some sort of convention or simplified configuration information? Would I still be able to benefit from a fluent interface or am I stuck with XML -- I'd like to avoid XML-hell.
Edit: This is a .Net environment and I have been looking at Windsor, Ninject and Autofac. They all seem to now support both methods of registration (fluent and XML), though Autofac's support for lambda expressions seems to be a little different than the others. Anybody use that in a similar multi-deployment environment?
If you want to abstract your container and be able to use different ones, look into making it injectable, the way I tried to do it here.
I use Ninject. I like the fact that I don't have to use XML to configure the dependencies; I can just use straight-up C# code, and there are multiple ways of doing it. I know other libraries have that feature, but Ninject offers fast instantiation, it is pretty lightweight, it has conditional binding, and it supports the Compact Framework and Silverlight 2.0. I also use a wrapper on top of it, in case I do switch it out for another framework in the future. You should definitely try Ninject when deciding on a framework.
I'm not sure whether it will suit your particular case, and you didn't mention what platform you're working in, but I've had great success with Castle Windsor's IoC framework. The dependencies are set up in the config file (it's a .NET framework).
Look at Ayende's Rhino Commons. He uses an abstraction over the IoC container so that you can switch containers whenever you want. Something like container.Resolve is always there in every container.
I use StructureMap to do the dirty work; it has a fluent interface and the XML option, and it is powerful enough for most things you want to do. Each one has its own pros and cons, so a little abstraction so you can easily switch (you never know how long they are going to be around) is good. For the rest, I think Spring.Net, Castle Windsor, Ninject and StructureMap aren't that far apart anymore.