I'm getting my feet wet with DI/IoC and MEF in particular.
I have a web application that has two types of parts (maybe more someday), defined by interfaces, which need access to the whole environment. The application keeps a list of concrete implementations for each type, composed by MEF.
The environment consists of:
several repositories
current application request
render engine
navigation engine
plus some static utility classes
How can I put the interface definitions in a separate assembly and at the same time specify the environment injection?
Obviously, I can't just reference the main assembly because that needs to reference the contract assembly and I can't create a circular reference.
It seems that I need to create an interface for each of the environment classes, and for each of their publicly exposed types, and so on... There has to be a better way?!
Maybe I'm also missing the obvious bigger flaw here, if anyone could point it out for me?
If you want to decouple your abstractions from their implementations (always a worthy goal), you should define those abstractions in their own assembly.
From the implementation side, this is easy to deal with, because you need to reference the abstractions to implement them. There's no way around that whether you use MEF or not, so that's as it has always been:
// The implementation exports itself under the contract type defined in the
// abstractions assembly (note Export, not Import, on the providing side):
[Export(typeof(IFoo))]
public class MyFoo : IFoo { }
As you say, this means that you can't reference your Composition Root from the abstraction library. That is as it should be, though, because the abstractions shouldn't worry about how they get composed.
In other words, you must implement the composition of the dependencies outside of the abstraction library. A good candidate for that is the executable itself, while you keep all your concrete implementations in one or more separate libraries.
The abstraction library will have no references, while both consumers and implementers would need to reference it, so a dependency graph might look like this:
Composition Root --> Abstractions <-- Implementations
Where the arrows denote a reference.
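For example, a Composition Root in the executable might assemble the MEF container along these lines (a minimal sketch; the directory scan is an assumption about where your implementation assemblies live):

using System.ComponentModel.Composition.Hosting;

// Scan the application folder for implementation assemblies; neither the
// abstractions nor the implementations need to reference this code.
var catalog = new DirectoryCatalog(".");
using var container = new CompositionContainer(catalog);
var foos = container.GetExportedValues<IFoo>(); // finds every [Export(typeof(IFoo))] part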
Related
I’ve been reading about DI and the composition root. I’ve read in the article that only the application should have a composition root, not the libraries.
But let’s assume I have a reusable package with some interfaces and their implementations. I would like to bind those interfaces to the implementations. I think it would be cumbersome if users had to do all this themselves.
Would it make sense to include an XML DI configuration file in the reusable module, which would be consumed and processed in the composition root?
Although class libraries should not contain composition roots, you can always include a factory in your library that creates a default graph for simple use cases. Your types in the library will still be public so that advanced users can compose the types in a custom way (e.g. decorate some types with their special decorator). The factory you include can also be parameterized to support multiple basic use cases.
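For instance, such a default factory might look like this sketch (all type names here are illustrative, not from the question):

public interface IMessageHandler { void Handle(string message); }

public class CoreMessageHandler : IMessageHandler
{
    public void Handle(string message) { /* core behaviour */ }
}

public class LoggingMessageHandler : IMessageHandler
{
    private readonly IMessageHandler inner;
    public LoggingMessageHandler(IMessageHandler inner) { this.inner = inner; }

    public void Handle(string message)
    {
        System.Console.WriteLine($"Handling: {message}"); // decorator adds logging
        inner.Handle(message);
    }
}

// The parameterized factory covers the simple use cases; the public types
// above remain available for advanced users to compose by hand.
public static class DefaultHandlerFactory
{
    public static IMessageHandler Create(bool withLogging = false)
    {
        IMessageHandler handler = new CoreMessageHandler();
        if (withLogging)
            handler = new LoggingMessageHandler(handler);
        return handler;
    }
}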
Regarding XML configuration, although it works, maintaining an application that uses XML for DI configuration is very hard in most cases because once a type has been renamed in the code, the type name in the XML will not be renamed automatically.
But let’s assume I have a reusable package with some interfaces and their implementations. I would like to bind those interfaces to the implementations.
Why does your reusable package provide an interface and an implementation?
You can provide concrete classes if your reusable package is a library. As I write in DI-Friendly Framework:
A Library is a reusable set of types or functions you can use from a wide variety of applications. The application code initiates communication with the library and invokes it.
According to the Dependency Inversion Principle (DIP), the client defines the abstract interface of its dependencies. Any interface provided by the library would violate the DIP.
On the other hand, your reusable package can provide an abstraction (interface or base class) if you expect client code to supply an implementation. In that case, the package begins to look more like a framework, although there's probably a gray area there. Usually, in such cases, the package doesn't have to supply any implementations of the interface, or it can supply some for optional use.
There are probably some edge cases where both an interface and a (default) implementation shipping in the same package might make sense, but I don't see how it warrants more than the sort of default factory that Yacoub Massad recommends. You could make that API a Fluent Builder; that's a common pattern for this sort of scenario.
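A Fluent Builder for that scenario might look like this sketch (all names are illustrative; defaults baked in, everything overridable):

public interface IReportWriter { void Write(string report); }

public class ConsoleReportWriter : IReportWriter
{
    public void Write(string report) => System.Console.WriteLine(report);
}

public class ReportGenerator
{
    private readonly IReportWriter writer;
    public ReportGenerator(IReportWriter writer) { this.writer = writer; }
    public void Run() => writer.Write("report body");
}

// The builder bakes in a default, but lets advanced users override parts:
public class ReportGeneratorBuilder
{
    private IReportWriter writer = new ConsoleReportWriter();

    public ReportGeneratorBuilder WithWriter(IReportWriter writer)
    {
        this.writer = writer;
        return this;
    }

    public ReportGenerator Build() => new ReportGenerator(this.writer);
}

// Simple case:  new ReportGeneratorBuilder().Build();
// Custom case:  new ReportGeneratorBuilder().WithWriter(myWriter).Build();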
You can supply all the XML configuration files you wish if you don't want anyone to use your package.
Wikipedia on Inversion of Control:
In IoC, custom-written portions of a computer program receive the flow of control from a generic framework. A software architecture with this design inverts control as compared to traditional procedural programming: in traditional programming, the custom code that expresses the purpose of the program calls into reusable libraries to take care of generic tasks, but with inversion of control, it is the framework that calls into the custom, or task-specific, code.
Spring's ApplicationContext is called an IoC container.
When I write, in my main() or some other method, ApplicationContext ctx = new ClassPathXmlApplicationContext("config.xml"); MyCls cl = (MyCls) ctx.getBean("name"); I manage the control flow myself. Where is "Inversion of Control" in Wikipedia's terms? I call what I need myself(!); no (Spring?) framework gives me any callbacks to fill with my code.
Spring uses "Inversion of Control" as exact synonym for "Dependency injection" and Wikipedia's article is about something else?
But here is Wikipedia on "Dependency injection":
The "injection" refers to the passing of a dependency (a service)
into the object (a client) that would use it. Passing the service to
the client, rather than allowing a client to build or find the
service, is the fundamental requirement of the pattern. Dependency
injection is one form of the broader technique of inversion of
control. The client delegates the responsibility of providing its
dependencies to external code (the injector). the client only needs to
know about the intrinsic interfaces of the services because these
define how the client may use the services. This separates the
responsibilities of use and construction."
Given the above I don't understand how dependency injection can be a form of Inversion of Control, especially in terms of (inversion of) control flow.
This answer (Inversion of Control vs Dependency Injection) is related, but it is not an answer to this question, nor does it directly address the control-flow issue.
P.S. My guess is that I need to look from the "reference frame" of the object itself: normal control flow is when this code (= this object and its methods) creates its dependencies itself, and inverted control flow is when somebody calls methods of that object, passing already-created dependencies into it. Then that somebody (even just my own code in main()) acts like a magical framework, and the code inside that object is like code within a callback method. That's just my mental view of the process.
There's a lot of confusion about the terminology in this space, but what some people call IoC I call Dependency Injection (DI), exactly for the reason that Inversion of Control is a much broader concept.
When applying DI to a code base, however, you're not supposed to write MyCls cl = ctx.getBean("name"); outside of your Composition Root. This means that most of your code should just use the Constructor Injection pattern, like this:
public class MyCls {
    public MyCls(MyDependency dep) {
        // Assign dep to a class field here...
        // That makes dep available to all methods of MyCls.
    }
}
This inverts the control in the sense that MyCls never gets to control MyDependency. This enables the composing code to share instances of MyDependency if that's desired, or to create a new instance for each consuming class if that's safer. Which option is best often depends on implementation details of the concrete class that implements the polymorphic dependency. Following the Liskov Substitution Principle, the consuming class (MyCls) should have no knowledge of the implementation details of its dependencies, including which lifetime is best. Inverting control over dependencies nicely addresses that sort of issue.
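To make that concrete, here is a sketch of the composing code exercising that control (MyCls and MyDependency as above; whether instances are shared is entirely the composer's decision):

MyDependency shared = new MyDependency();
MyCls a = new MyCls(shared);             // two consumers sharing one instance...
MyCls b = new MyCls(shared);
MyCls c = new MyCls(new MyDependency()); // ...or a fresh instance per consumer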
I've tried to provide detailed descriptions, as well as a consistent pattern language, in my book on DI. While I'm a C# developer, I've learned a lot from many books that have examples in Java, so I've tried to perform the reverse manoeuvre and write a book with examples in C# (but on .NET Core, so that it compiles on Linux, Mac, and Windows) that should also be readable for Java developers.
There are some cases in which unit tests don't work for the project.
I'm studying the Inversion of Control and Dependency Injection utilities, and I would like to know whether there are better reasons to use them than making unit tests easier.
--update
Ok, let's analyze one of the cited advantages: less coupling.
You take the coupling out of the child type, but you add coupling to the handler type which needs to create the objects to inject.
Without unit testing, what's the advantage of this coupling transfer (it's a transfer of coupling, not an elimination)?
IoC/DI brings some very important features to your application:
Pluggability: with DI you can inject a dependency into the code without explicitly knowing how the functionality actually works.
For example: your class might get an ILog interface injected so that it can write logs. Since the class works against the ILog interface, you could implement a FileLog, MemoryLog or DatabaseLog and inject any of them into your class. Each of these implementations will work fine as long as it implements the ILog interface.
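A sketch of that example (ILog and the implementation names come from the paragraph above; the member signatures are assumptions):

public interface ILog { void Write(string message); }

public class FileLog : ILog
{
    public void Write(string message) =>
        System.IO.File.AppendAllText("app.log", message + System.Environment.NewLine);
}

public class MemoryLog : ILog
{
    public System.Collections.Generic.List<string> Entries { get; } = new();
    public void Write(string message) => Entries.Add(message);
}

// The consuming class only knows ILog, so any implementation plugs in:
public class OrderProcessor
{
    private readonly ILog log;
    public OrderProcessor(ILog log) { this.log = log; }
    public void Process() => log.Write("Order processed.");
}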
Testability: With DI in your class, you can inject mock objects to test the behaviour of your class without actually needing the concrete implementation.
For example: consider a Controller class which needs a Repository to perform data operations. In this case, the repository can be injected into the controller. If you need to write tests for the Controller class, you can inject a mock version of the repository without having to work with the actual repository class.
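As a sketch of that test scenario (Controller and Repository per the example; the fake is a hand-rolled stand-in):

public interface IRepository { int CustomerCount(); }

public class CustomerController
{
    private readonly IRepository repository;
    public CustomerController(IRepository repository) { this.repository = repository; }
    public string Summary() => $"{repository.CustomerCount()} customers";
}

// In a unit test, the fake replaces the real data access entirely:
public class FakeRepository : IRepository
{
    public int CustomerCount() => 3; // canned data, no database involved
}

// new CustomerController(new FakeRepository()).Summary() returns "3 customers"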
Configurability: Some of the common DI frameworks like Castle Windsor, Unity, Spring etc. allow lifetime management of the created objects alongside DI. This is a very powerful feature and lets you manage the dependencies and their lifetimes via configuration. For example, suppose your application needs an ICache dependency. Via the lifetime and object management configuration, you can set the cache up to be per-application, per-session or per-request, without having to explicitly bake the implementation into your code.
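With Castle Windsor, for instance, the ICache lifetime could be configured like this (a sketch; ICache and MemoryCache are illustrative names, the registration API is Castle Windsor's fluent interface):

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface ICache { }
public class MemoryCache : ICache { }

public static class CacheConfiguration
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        container.Register(
            Component.For<ICache>()
                     .ImplementedBy<MemoryCache>()
                     .LifestyleSingleton()); // or .LifestyleTransient(), .LifestylePerWebRequest()
        return container;
    }
}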
HTH
IoC reduces coupling, which has a correlation with defect rates in some studies. (If that really long link doesn't work, it's to Software Engineering Quality Practices by Ronald Kirk Kandt.)
Sure, here are a few reasons:
Dynamic generation of proxies for remoting and transactions
Aspect-oriented programming
Layering using interfaces and separation of implementation
Enough?
From the IoC wikipedia article:
There is a decoupling of the execution of a certain task from implementation.
Every system can focus on what it is designed for.
Every system does not make assumptions about what other systems do or should do.
Replacing systems will have no side effect on other systems.
While I would call that feature list a bit vague, you see most of those benefits even without testing.
If I had to say it in a nutshell, I would say that IoC significantly improves separation of concerns which is a valuable goal in software development.
Yes, dependency injection helps you make your classes more focused, clearer*, and easier to change, because it makes it easier to adhere to the single-responsibility principle.
It also makes it easier to vary parts of your application independently of one another.
When you use constructor injection in particular, it's easier to tell what your code needs to do its job. If the WeatherUpdater class requires an IWeatherRepository in its constructor, no one is surprised that it uses a database.
* Again, constructor injection only.
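As a sketch of that point (WeatherUpdater and IWeatherRepository as named above; the members are assumptions):

public interface IWeatherRepository { void Save(decimal temperature); }

public class WeatherUpdater
{
    private readonly IWeatherRepository repository;

    // The constructor signature alone announces that this class persists data.
    public WeatherUpdater(IWeatherRepository repository)
    {
        this.repository = repository;
    }

    public void Update(decimal temperature) => repository.Save(temperature);
}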
I'm fairly new to the DI concept, but I have been using it to some extent in my designs - mainly by 'injecting' interfaces into constructors and having factories create my concrete classes. Okay, it's not configuration-based - but it's never NEEDED to be.
I started to look at DI frameworks such as Spring.NET and Castle Windsor, and stumbled across this blog by Ayende.
What I got from this is:
A) DI frameworks are awesome, but
B) It means we don't have to worry about how our system is designed in terms of dependencies.
For me, I'm used to thinking hard about how to loosely-couple my system but at the same time have some sort of control over dependencies.
I'm a bit scared of losing this control, and it being just a free-for-all. ClassA needs ClassB = no problem, just ask and ye shall receive! Hmmm.
Or is that just the point and this is the future and I should just go with it?
Thoughts?
One basic OO principle is that you want your code to depend on interfaces, not implementations; DI is how we do that. Historically, here is how it evolved:
People initially created classes they depended upon by "new'ing" them:
IMyClass myClass = new MyClass();
Then we wanted to hide the instantiation, so we used static factory methods:
IMyClass myClass = MyClass.Create();
Then we no longer depended on the lifecycle of the class, but we still depended on it for instantiation, so we used a factory:
IMyClass myClass = MyClassFactory.Create();
This moved the direct dependency from the consuming code to the factory, but we still had the dependency on MyClass indirectly, so we used the service locator pattern like this:
IMyClass myClass = (IMyClass)Context.Find("MyClass");
That way we were only dependent on an interface and the name of a class in our code. But it can be made better: why not depend simply on an interface in our code? We can, with dependency injection. If you use property injection, you simply put a property setter of the interface type you want in your code. You then configure what the actual dependency is outside of your code, and the container manages the lifecycle of that class and of yours.
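A sketch of that property injection style, using the IMyClass interface from the snippets above (the consumer class is illustrative):

public class Consumer
{
    // The container (or composing code) assigns this property; the class
    // itself never instantiates or locates an IMyClass.
    public IMyClass MyClass { get; set; }
}

// Composed outside the class, e.g.:
// var consumer = new Consumer { MyClass = new MyClass() };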
I wouldn't say that you don't have to think about dependencies, but using an IoC framework allows you to change the types which fulfill the dependencies with little or no hassle, since all the wiring is done in a central place.
You still have to think about what interfaces you need and getting them right is not always a trivial matter.
I don't see how a loosely coupled system could be considered lazily designed. If you go through all the trouble of getting to know an IoC framework, you're certainly not taking the shortcut.
I think that ideally, if you already have a loosely coupled system, using a container will only move the place where you wire up the dependencies out of your code, making the coupling softer and letting your system depend on the container to build your object graph.
In reality, attempting to use the container will probably show you that your system is not as loosely coupled as you thought it was, so in this way it may help you to create a better design.
Well, I'm a newbie at this subject, so maybe I'm not quite right.
Cheers.
I must be high, because I thought the whole point of dependency injection is that the code that does stuff simply declares its dependencies, so that whoever creates it knows what to supply for it to operate correctly.
How does dependency injection make you lazy? Maybe in that it forces someone else to deal with the dependencies? That's the whole point! That someone else doesn't really need to be someone else; it just means the code you write doesn't need to be concerned with dependencies, because it declares them upfront. And they can be managed because they are explicit.
Edit: Added the last sentence above.
Dependency injection can be a bit difficult to get used to: instead of a direct path through your code, you end up looking at seemingly unconnected objects, and a given action traces its path through a series of these objects whose coupling seems, to be kind, abstract.
It's a paradigm shift similar to getting used to OO. The intention is that your objects are written to have a focused, single responsibility, using the dependent objects as they're declared by the interface and handled by the framework.
This not only makes loose coupling easier, it makes it almost unavoidable, which in turn makes it much simpler to do things like run your object in a mock environment: the IoC container takes the place of the run environment.
I would disagree and say they lead to better design in many cases. Too often devs create components that do too much and have too many dependencies. With IoC, I find developers tend to migrate to a better way of thinking and produce smaller, simpler components that can be assembled together into an app.
If they follow the spirit and write tests, they will refine their components further. Both exercises force you to write better, testable components, which fits very well with how IoC containers work.
You still have to worry. My team use Castle Windsor in our current project. It annoys me that it delays dependency lookup from compile time to runtime.
Without Castle Windsor, you write code, and if you haven't sorted your dependencies out, bang, the compiler complains. With Castle Windsor, you configure the dependencies in an XML file. They're still there, just separated out from the bulk of your code. The problem is that your code can compile fine even if you make a mess of defining the dependencies. At runtime, Castle Windsor looks up a concrete class to service each request for an interface by using reflection. If the dependency can't be found, you get an error at runtime.
I think Castle Windsor does check that the dependencies exist when it's initialized, so that it can throw an error pretty quickly. But it's still annoying that, in a strongly typed language, this fuss can't be sorted out at compile time.
So... anyway. Dependencies still seriously matter. You'll almost certainly pay more attention to them using DI than before.
We wrote a custom DI framework; though it took some time to get right, it was all worth the effort. We divided the whole system into layers, and the dependency injection in each layer is bound by rules. E.g. in the Log layer, CUD and BO interfaces cannot be injected.
We are still contemplating the rules; some of them change every week, while others remain the same.
I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take a boatload of interfaces in their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels like certain properties really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really doesn't.
What you're really doing is moving that coupling from compile time to runtime; if class A needs some interface B to work, an instance of a class which implements interface B still needs to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses I've found for the Inversion of Control pattern:
A plugin architecture. By making the right entry points you can define the contract for the service that must be provided (see the sketch after this list).
Workflow-like architecture, where you can dynamically connect several components, wiring the output of one component to the input of another.
Per-client application. Let's say you have various clients who pay for a set of "features" of your product. By using dependency injection you can easily provide just the core components plus some "added" components providing only the features the client has paid for.
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as needed.
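As a sketch of the plugin entry point mentioned in the list above (all names are illustrative):

// The host owns the contract; plugins implement it and are injected in.
public interface IPlugin
{
    string Name { get; }
    void Execute();
}

public class PluginHost
{
    private readonly System.Collections.Generic.IEnumerable<IPlugin> plugins;

    public PluginHost(System.Collections.Generic.IEnumerable<IPlugin> plugins)
    {
        this.plugins = plugins; // supplied by the container or composing code
    }

    public void RunAll()
    {
        foreach (var plugin in plugins)
            plugin.Execute();
    }
}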
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got issues with single responsibility and static dependencies. Isolate code that performs a single function into a class, break the static dependency by extracting an interface, and use a DI framework to inject the correct instance at runtime. By doing this, you make it trivial to test the two parts separately.
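As a before/after sketch of that refactoring (TaxTable and ITaxRates are illustrative names):

// Before: the static dependency can't be replaced in a test.
public static class TaxTable
{
    public static decimal RateFor(string region) => 0.08m; // imagine a slow DB call here
}

public class PriceCalculator
{
    public decimal Total(decimal net) => net * (1 + TaxTable.RateFor("US"));
}

// After: the lookup sits behind an interface and is injected, so a test
// can supply a stub and verify the calculation in isolation.
public interface ITaxRates { decimal RateFor(string region); }

public class InjectedPriceCalculator
{
    private readonly ITaxRates rates;
    public InjectedPriceCalculator(ITaxRates rates) { this.rates = rates; }
    public decimal Total(decimal net) => net * (1 + rates.RateFor("US"));
}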
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
DI should be used to isolate your code from external resources (databases, web services, XML files, plugin architectures). The amount of time it would take to test your logic would be almost prohibitive at a lot of companies if you are testing components that DEPEND on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The amount of work involved in changing resources should be low (data access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to the class"?
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run.
I do it only when it helps with separation of concerns.
For example, cross-project, I would provide an interface for implementers in one of my library projects, and the implementing project would inject whatever specific implementation it wants.
But that's about it... in all other cases it would just make the system unnecessarily complex.
Even with all the facts and processes in the world... every decision boils down to a judgment call. (Forgot where I read that.)
I think it's more of an experience / flight-time call.
Basically, if you see the dependency as a candidate for replacement in the near future, use dependency injection. If I see 'classA and its dependencies' as one block for substitution, then I probably won't use DI for A's dependencies.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway, but if there's any doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another item I wrestle with is where should I use dependency injection? Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/MicroKernel; I have no experience with anything else, but I like it a lot.
As for how you decide what to inject: so far the following rule of thumb has served me well: if the class is so simple that it doesn't need unit tests, feel free to instantiate it in the class; otherwise you probably want to take it as a dependency through the constructor.
As for whether you should create an interface or just make your methods and properties virtual, I think you should go the interface route if you either a) can see the class having some level of reusability in a different application (e.g. a logger), or b) find the class otherwise difficult to mock, whether because of the number of constructor parameters or a significant amount of logic in the constructor.