I am curious about the dependency inversion principle in general, and whether it should be strictly enforced all the time.
I know that injecting interfaces generally promotes loose coupling, which has a positive impact.
However, there are certain types of classes that will most likely always have only one implementation and are unlikely to change over time. I really question having every object backed by an interface, e.g. FooService with FooServiceImpl.
I am in a dilemma because I think concrete class injection is generally frowned upon by many folks.
tl;dr
Should dependency injection always be done with interfaces only, even when certain classes are unlikely to change, and hence backing them with an interface seems to add unwanted complexity?
You're right that Dependency Injection starts to be useful as soon as there's more than one implementation. I understand that you have only one concrete implementation, but if you follow development best practices you'll want to unit test every class. And if a BarService depends on your FooService, you will write a unit test BarServiceTest that relies on a fake implementation of FooService (a mock or something like that).
In other words, as soon as you unit test your app, you end up with two implementations of your service: the real one used at runtime and the fake one used for unit testing (or a single implementation configured differently depending on the context).
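To make that concrete, here is a minimal Java sketch of the situation (FooService, FooServiceImpl and BarService come from the question; the method names are invented for illustration):

interface FooService {
    String fetchGreeting();
}

// The single "real" implementation used at runtime.
class FooServiceImpl implements FooService {
    public String fetchGreeting() {
        return "hello from the real service";
    }
}

// The class under test receives its dependency via the constructor.
class BarService {
    private final FooService fooService;

    BarService(FooService fooService) {
        this.fooService = fooService;
    }

    String shout() {
        return fooService.fetchGreeting().toUpperCase();
    }
}

// The "second implementation" that exists only for BarServiceTest.
class FakeFooService implements FooService {
    public String fetchGreeting() {
        return "canned answer";
    }
}

In a test you would assert against new BarService(new FakeFooService()).shout() without ever touching FooServiceImpl.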
Just a couple of things.
Dependency injection (DI) !== Inversion of Control (IoC) !== Dependency Inversion Principle (DIP)
I can't recall a better (equally brief and explanatory) read on the topic than this.
The second aspect is language- and/or tooling-dependent. Certainly I can't speak for all languages and their available tool stacks, but in some of them test-doubling an interface may be a better approach than generating a double as a subclass of the class being mocked.
So my personal (and subjective) answer for
Should dependency injection always be done with interfaces only, even when certain classes are unlikely to change, and hence backing them with an interface seems to add unwanted complexity?
is definitely 'no', because of the unneeded complexity in the class structure. Besides, doing DI usually means some kind of dependency injection container is used to manage dependencies, which in turn means extra configuration effort.
Personally, I introduce interfaces by intent only when I really do have such an intent explicitly. All other interfaces that appear in my code are just extractions made during development/refactoring (in other words, I create such abstractions only when I or my code actually need them).
Related
If I want my code to follow SOLID principles, specifically the dependency inversion principle, does that mean I have to create an interface (abstraction) for every module, even if it has only one implementation?
In my opinion, and according to these posts:
https://josdejong.com/blog/2015/01/06/code-reuse/
https://blog.ploeh.dk/2010/12/02/Interfacesarenotabstractions/
creating an "abstraction" for every module is a code clutter and violates the YAGNI principle.
My rule of thumb is: do not use dependency injection, or create an interface for a module unless it has more than one implementation (the second implementation could be a mock class for unit testing in case of database/server/file modules).
Can somebody clear this up for me ? Does SOLID mean that I have to inject every module and abstract it ? If yes, isn't it just a lot of clutter we're simply not going to use most of the times ?
The Dependency Inversion Principle states:
High-level modules should not depend on low-level modules. Both should
depend on abstractions.
In other words, each module that is depended upon (so that's everything except the entry point modules in your application) should be abstracted. Otherwise a high-level module will have to depend upon the low-level module directly, causing a violation of the DIP.
The number of implementations an abstraction has is irrelevant to the DIP, because its goal is to make modules resistant to change. Without abstractions, it would be impossible to easily change implementations or add cross-cutting concerns without having to change or recompile a high-level component.
If, however, you find yourself defining many abstractions with just one implementation, you are violating the Reused Abstraction Principle, as Mark Seemann already stated in the article you are referencing:
Having only one implementation of a given interface is a code smell.
What this means, though, is not that you shouldn't define interfaces at all, but that you need to take a good look at your design and spot the classes that are related in behavior. Those related classes can often be placed behind the same generic abstraction (generic interface), which not only allows abstraction reuse but makes applying cross-cutting concerns child's play.
Here are a few suggestions of functionality that you can place behind the same generic abstraction:
ICommandHandler<TCommand> for classes that make changes to the system on behalf of the user (use cases).
IQueryHandler<TQuery, TResult> as an abstraction for classes that query the database (or file system, web service, what ever) and return data.
IValidator<T> for classes that check input and report validation errors back to the user.
ISecurityValidator<T> for classes that verify whether a user is allowed to execute a certain operation.
IAuthorizationFilter<T> for classes that allow applying row based security based on the user's permissions and roles.
IEventHandler<T> for classes that respond to a certain business event that has occurred.
These are just a few examples of abstractions. It highly depends on the application and design which generic abstractions you will get.
The applications I write make use of these generic abstractions and those applications just have a few interfaces with just one implementation. About 90% to 98% of the modules in the system implement one of those generic abstractions (depending a bit on the size of the application; the bigger the application, the higher the percentage).
These generic abstractions make it really easy to register all implementations with one line of code in your DI library (at least if you are using .NET), but more importantly, as I said before, applying cross-cutting concerns becomes really easy. For instance, without having to make sweeping changes to your application, you can run use cases in a database transaction or apply a deadlock retry mechanism. Or you can apply caching of queries, again without sweeping changes throughout your application.
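As a rough Java sketch of how one such generic abstraction enables a cross-cutting concern (ICommandHandler comes from the list above; ShipOrderCommand and the transaction pseudo-calls are invented for the example):

// The generic abstraction from the list above.
interface ICommandHandler<TCommand> {
    void handle(TCommand command);
}

// A hypothetical use case with its own handler.
class ShipOrderCommand {
    long orderId;
}

class ShipOrderHandler implements ICommandHandler<ShipOrderCommand> {
    public void handle(ShipOrderCommand command) {
        // business logic for shipping the order goes here
    }
}

// The cross-cutting concern as a decorator: any handler in the system
// can be wrapped in a transaction without touching its code.
class TransactionalHandler<T> implements ICommandHandler<T> {
    private final ICommandHandler<T> decorated;

    TransactionalHandler(ICommandHandler<T> decorated) {
        this.decorated = decorated;
    }

    public void handle(T command) {
        // begin transaction (real code would use JDBC/JPA here)
        try {
            decorated.handle(command);
            // commit
        } catch (RuntimeException e) {
            // roll back and rethrow
            throw e;
        }
    }
}

Registering the decorator in the container is then the "one line" that applies the concern to every use case at once.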
There are some cases in which unit tests don't work for the project.
I'm studying the Inversion of Control and Dependency Injection utilities, and I would like to know if there are good reasons to use them other than making unit tests easier.
--update
OK, let's analyze one of the cited advantages: less coupling.
You take the coupling out of the child type, and you add coupling to the handler type that needs to create the objects to inject.
Without unit testing, what's the advantage of this coupling transfer (it's not coupling elimination)?
IoC/DI brings some very important features to your application.
Pluggability: with DI you can inject a dependency into the code without the code explicitly knowing how the functionality actually works.
For example: your class might get an ILog interface injected so that it can write logs. Since the class works with the ILog interface, it would be possible to implement a FileLog, MemoryLog or DatabaseLog and inject it into your class. Any of these implementations will work fine as long as they implement the ILog interface.
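A minimal Java sketch of that pluggability (ILog, FileLog and MemoryLog come from the example above; the member names are invented):

interface ILog {
    void write(String message);
}

class FileLog implements ILog {
    public void write(String message) {
        // append to a log file (details omitted)
    }
}

class MemoryLog implements ILog {
    private final java.util.List<String> entries = new java.util.ArrayList<>();

    public void write(String message) {
        entries.add(message);
    }
}

// The consumer only knows about ILog; any implementation plugs in.
class OrderProcessor {
    private final ILog log;

    OrderProcessor(ILog log) {
        this.log = log;
    }

    void process() {
        log.write("order processed");
    }
}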
Testability: with DI in your class, you can inject mock objects to test the behaviour of your class without actually needing the concrete implementation.
For example: consider a Controller class which needs a Repository to perform data operations. Here the repository can be injected into the controller. If you need to write tests for the Controller class, you can pass in a mock version of the repository without having to work with the actual repository class.
Configurability: some of the common DI frameworks like Castle Windsor, Unity, Spring etc. allow doing DI along with lifetime management of the created objects. This is a very powerful feature and allows you to manage dependencies and their lifetimes via configuration. For example, consider that your application needs an ICache dependency. Via the configuration for lifetime and object management, you will be able to configure the cache to be per-application, per-session or per-request etc. without having to explicitly bake the implementation into your code.
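For instance, here is a Spring-flavored sketch of lifetime configuration (the Cache and InMemoryCache types are hypothetical; Spring beans are application-wide singletons by default, and web scopes such as "request" require a web-aware context):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

interface Cache {}

class InMemoryCache implements Cache {}

@Configuration
class CacheConfig {

    // One shared instance for the whole application (Spring's default scope).
    @Bean
    public Cache applicationCache() {
        return new InMemoryCache();
    }

    // A fresh instance per HTTP request, purely via configuration.
    @Bean
    @Scope("request")
    public Cache requestCache() {
        return new InMemoryCache();
    }
}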
HTH
IoC reduces coupling, which has a correlation with defect rates in some studies. (If that really long link doesn't work, it's to Software Engineering Quality Practices by Ronald Kirk Kandt.)
Sure, here are a few reasons:
Dynamic generation of proxies for remoting and transactions
Aspect-oriented programming
Layering using interfaces and separation of implementation
Enough?
From the IoC wikipedia article:
There is a decoupling of the execution of a certain task from implementation.
Every system can focus on what it is designed for.
Every system does not make assumptions about what other systems do or should do.
Replacing systems will have no side effect on other systems.
While I would call the above feature list a bit vague, you can see most of the above benefits even without testing.
If I had to say it in a nutshell, I would say that IoC significantly improves separation of concerns which is a valuable goal in software development.
Yes, dependency injection helps you make your classes more focused, clearer*, and easier to change, because it makes it easier to adhere to the single-responsibility principle.
It also makes it easier to vary parts of your application independently of one another.
When you use constructor injection in particular, it's easier to tell what your code needs to do its job. If the WeatherUpdater class requires an IWeatherRepository in its constructor, no one is surprised that it uses a database.
* Again, constructor injection only.
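A quick Java sketch of that point (WeatherUpdater and IWeatherRepository come from the example above; the members are invented):

interface IWeatherRepository {
    void save(String reading);
}

class WeatherUpdater {
    private final IWeatherRepository repository;

    // The constructor advertises exactly what this class needs to do its job.
    WeatherUpdater(IWeatherRepository repository) {
        this.repository = repository;
    }

    void update(String reading) {
        repository.save(reading);
    }
}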
I've read a lot about IoC and DI, but I'm not really convinced that you gain a lot by using them in most situations.
If you are writing code that needs pluggable components, then yes, I see the value. But if you are not, then I question whether changing a dependency from a class to an interface really gains you anything other than more typing.
In some cases I can see where IoC and DI help with mocking, but if you're not using mocking or TDD, then what's the value? Is this a case of YAGNI?
I doubt you will have any hard data on it, so I will add some thoughts on it.
First, you don't use DI (or other SOLID principles) because it helps you do TDD. It's the other way around: you do TDD because it helps you with the design, which usually means you end up with code that follows those principles.
Discussing why to use interfaces is a different matter, see: https://stackoverflow.com/questions/667139/what-is-the-purpose-of-interfaces.
I will assume you agree that having your classes do many different things results in messy code. Thus, I am assuming you are already going for SRP.
Because you have different classes that do specific things, you need a way to relate them. If you relate them inside the classes (i.e. in the constructors), you get plenty of code that uses specific versions of the classes. This means that making changes to the system will be hard.
You are going to need to change the system; that's a fact of software development. You can call YAGNI on adding specific extra features, but not on the claim that you won't need to change the system. In my case that's really important, as I do weekly sprints.
I use a DI framework where configuration is done through code. With a really small amount of configuration code, you hook up lots of different relations. So, once you set aside the interface-vs-concrete-class discussion, you are actually saving typing, not the other way around. Also, for the cases where a concrete class appears in a constructor, the framework hooks it up automatically (I don't have to configure anything) while building the rest of the relations. It also allows me to control the lifetime of some objects; in particular, I can configure an object to be a singleton, and it hands out a single instance all the time.
Also note that just using these practices isn't more overhead. Using them for the first few times is what causes the overhead (because of the learning process, plus in some cases a mindset change).
Bottom line: you ain't gonna need to put all those constructor calls all over the place to go faster.
The most significant gains from DI are not necessarily due to the use of interfaces. You do not actually need to use interfaces to get the beneficial effects of dependency injection. If there's only one implementation, you can probably inject it directly, and you can use a mix of classes and interfaces.
You still get loose coupling, and in quite a few development environments you can introduce that interface with a few keypresses if needed.
Hard data on the value of loose coupling I cannot give, but it's been a vision in textbooks for as long as I can remember. Now it's real.
DI frameworks also give you some quite amazing features when it comes to hierarchical construction of large structures. Instead of looking for the leanest DI framework around, I'd recommend you look for a full-featured one. Less isn't always more, at least when it comes to learning about new ways of programming. Then you can go for less.
Apart from testing, the loose coupling is also worth it.
I've worked on components for an embedded Java system, which had a fixed configuration of objects after startup (about 50 mostly different objects).
The first component was legacy code without dependency injection, and the subobjects were created all over the place. It happened several times that for some modification, some code needed to talk to an object which was only available three constructors away. So what can you do but add another parameter to the constructor and pass it through, or even store it in a field to pass it on later? In the long run things became even more tangled than they already were.
The second component I developed from scratch, and I used dependency injection (without knowing it at the time). That is, I had one factory which constructed all the objects and injected them on a need-to-know basis. Adding another dependency was easy: just add it to the factory and to the object's constructor (or add a setter to avoid cycles). No unrelated code needed to be touched.
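A hand-rolled version of such a factory might look like this Java sketch (all the type names are invented; the point is that the wiring lives in one place):

class Bus {}

class Display {
    Display(Bus bus) {}
}

class Sensor {
    Sensor(Bus bus) {}
}

class Controller {
    Controller(Sensor sensor, Display display) {}
}

// All wiring lives in one factory (a "composition root"): adding a new
// dependency means touching only this class and the target's constructor.
class ComponentFactory {
    private final Bus bus = new Bus();
    private final Display display = new Display(bus);
    private final Sensor sensor = new Sensor(bus);
    private final Controller controller = new Controller(sensor, display);

    Controller controller() {
        return controller;
    }
}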
I'm fairly new to the DI concept, but I have been using it to some extent in my designs - mainly by 'injecting' interfaces into constructors and having factories create my concrete classes. Okay, it's not configuration-based - but it's never NEEDED to be.
I started to look at DI frameworks such as Spring.NET and Castle Windsor, and stumbled across this blog by Ayende.
What I got from this is:
A) DI frameworks are awesome, but
B) It means we don't have to worry about how our system is designed in terms of dependencies.
For me, I'm used to thinking hard about how to loosely couple my system while at the same time keeping some sort of control over dependencies.
I'm a bit scared of losing this control, and it being just a free-for-all. ClassA needs ClassB = no problem, just ask and ye shall receive! Hmmm.
Or is that just the point and this is the future and I should just go with it?
Thoughts?
One basic OO principle is that you want your code to depend on interfaces and not on implementations; DI is how we do that. Historically, here is how it evolved:
People initially created the classes they depended upon by "new'ing" them:
IMyClass myClass = new MyClass();
Then we wanted to remove the instantiation, so there were static methods to create them:
IMyClass myClass = MyClass.Create();
Then we no longer depended on the lifecycle of the class, but we still depended on it for instantiation, so we used a factory:
IMyClass myClass = MyClassFactory.Create();
This moved the direct dependency from the consuming code to the factory, but we still depended on MyClass indirectly, so we used the service locator pattern, like this:
IMyClass myClass = (IMyClass)Context.Find("MyClass");
That way we were only dependent on an interface and the name of a class in our code. But it can be made better: why not depend simply on an interface in our code? We can, with dependency injection. If you use property injection, you simply put a property setter for the interface you want in your code. You then configure what the actual dependency is outside of your code, and the container manages the lifecycle of that class and of your class.
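A minimal Java sketch of that last step, using setter ("property") injection (the IMyClass/MyClass names follow the snippets above; the container's wiring is shown by hand for clarity):

interface IMyClass {}

class MyClass implements IMyClass {}

class Consumer {
    private IMyClass myClass;

    // The property setter the container calls; Consumer never names
    // MyClass or a factory, only the interface.
    public void setMyClass(IMyClass myClass) {
        this.myClass = myClass;
    }
}

// What the container effectively does at startup, driven by configuration:
// Consumer consumer = new Consumer();
// consumer.setMyClass(new MyClass());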
I wouldn't say that you don't have to think about dependencies, but using an IoC framework allows you to change the types which fulfill the dependencies with little or no hassle, since all the wiring is done in a central place.
You still have to think about what interfaces you need and getting them right is not always a trivial matter.
I don't see how a loosely coupled system could be considered lazily designed. If you go through all the trouble of getting to know an IoC framework, you're certainly not taking the shortcut.
I think that, ideally, if you already have a loosely coupled system, using a container will only move the place where you take the dependencies out of your code, making them softer, and let your system depend on the container building your object graph.
In reality, attempting to use the container will probably show you that your system is not as loosely coupled as you thought it was, so in this way it may help you create a better design.
Well, I'm a newbie at this subject, so maybe I'm not quite right.
Cheers.
I must be high, because I thought the whole point of dependency injection is that the code that does stuff simply declares its dependencies, so that whoever creates it knows what to create it with for it to operate correctly.
How does dependency injection make you lazy? Maybe in that it forces someone else to deal with the dependencies? But that's the whole point! That someone else doesn't really need to be someone else; it just means the code you write doesn't need to be concerned with its dependencies, because it declares them upfront. And they can be managed because they are explicit.
Edit: Added the last sentence above.
Dependency injection can be a bit difficult to get used to - instead of a direct path through your code, you end up looking at seemingly unconnected objects, and a given action traces its path through a series of these objects whose coupling seems, to be kind, abstract.
It's a paradigm shift similar to getting used to OO. The intention is that your objects are written to have a focused, single responsibility, using the dependent objects as they're declared by the interface and handled by the framework.
This not only makes loose coupling easier, it makes it almost unavoidable, which in turn makes it much simpler to do things like run your object in a mock environment - the IoC container takes the place of the run environment.
I would disagree and say they lead to better design in many cases. Too often devs create components that do too much and have too many dependencies. With IoC, I find developers tend to migrate to a better way of thinking and produce smaller, simpler components that can be assembled together into an app.
If they follow the spirit and write tests, they will further refine their components. Both exercises force you to write better, more testable components, which fits very well with how IoC containers work.
You still have to worry. My team uses Castle Windsor in our current project. It annoys me that it delays dependency lookup from compile time to runtime.
Without Castle Windsor, you write code, and if you haven't sorted your dependencies out - bang, the compiler will complain. With Castle Windsor, you configure the dependencies in an XML file. They're still there, just separated out from the bulk of your code. The problem is that your code can compile fine even if you make a mess of defining the dependencies. At runtime, Castle Windsor looks up concrete classes to service requests for an interface by using reflection. If a dependency can't be found, you get an error at runtime.
I think Castle Windsor does check that the dependencies exist when it's initialized, so it can throw an error pretty quickly. But it's still annoying, when using a strongly typed language, that this fuss can't be sorted out at compile time.
So... anyway. Dependencies still seriously matter. You'll almost certainly pay more attention to them using DI than before.
We wrote a custom DI framework; though it took some time to get right, it was all worth the effort. We divided the whole system into layers, and the dependency injection in each layer is bound by rules. E.g. in the Log layer, CUD and BO interfaces cannot be injected.
We are still contemplating the rules, and some of them change every week while the others remain the same.
I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take a boatload of interfaces in their constructors. Even though that isn't a huge problem when you're using a dependency injection framework, it still feels like there are certain properties that really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really isn't one.
What you're really doing is moving that coupling from compile time to runtime; if class A needs some interface B to work, an instance of a class which implements interface B still needs to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses where I've found an Inversion of Control pattern valuable:
A plugin architecture. By making the right entry points you can define the contract for the service that must be provided.
A workflow-like architecture, where you can connect several components dynamically, wiring the output of one component to the input of another.
Per-client applications. Say you have various clients who pay for a set of "features" of your product. By using dependency injection you can easily provide just the core components plus some "added" components that provide only the features the client has paid for.
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as needed.
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects more easily. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got some issues with single responsibility and static dependencies. Isolate code that performs a single function into a class, and break the static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime. By doing this, you make it trivial to test the two parts separately.
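A small Java sketch of that refactoring (the ReportService/Mailer names are invented for the example):

// Before: a hard, static dependency makes ReportService impossible to
// test in isolation.
// class ReportService {
//     void send(String report) { Mailer.send(report); }
// }

// After: the single-purpose code sits behind an interface...
interface Mailer {
    void send(String report);
}

class SmtpMailer implements Mailer {
    public void send(String report) {
        // real SMTP delivery (details omitted)
    }
}

// ...and ReportService receives it at runtime, so each part can be
// tested on its own (e.g. with a fake Mailer).
class ReportService {
    private final Mailer mailer;

    ReportService(Mailer mailer) {
        this.mailer = mailer;
    }

    void send(String report) {
        mailer.send(report);
    }
}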
Dependency injection should only be used for the parts of the
application that need to be changed dynamically without recompiling
the base code
DI should be used to isolate your code from external resources (databases, web services, XML files, plugin architecture). At a lot of companies, the amount of time it would take to test your logic would be almost prohibitive if you are testing components that DEPEND on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The amount of work involved in changing resources should be low (data access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to a class?"
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run.
I do it only when it helps with separation of concerns.
For example, cross-project: I would provide an interface for implementers in one of my library projects, and the implementing project would inject whatever specific implementation it wants.
But that's about it... in all the other cases it would just make the system unnecessarily complex.
Even with all the facts and processes in the world.. every decision boils down to a judgment call - Forgot where I read that
I think it's more of an experience / flight-time call.
Basically, if you see the dependency as a candidate object that may be replaced in the near future, use dependency injection. If I see 'classA and its dependencies' as one block for substitution, then I probably won't use DI for A's dependencies.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway, but if there's any doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another item I wrestle with is where should I use dependency injection? Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/Microkernel; I have no experience with anything else, but I like it a lot.
As for how you decide what to inject: so far the following rule of thumb has served me well: if the class is so simple that it doesn't need unit tests, feel free to instantiate it inside the class; otherwise you probably want to pass the dependency in through the constructor.
As for whether you should create an interface vs. just making your methods and properties virtual, I think you should go the interface route either a) if you can see the class having some level of reusability in a different application (e.g. a logger), or b) if the class is otherwise difficult to mock, whether because of the number of constructor parameters or because there is a significant amount of logic in the constructor.