Which dependencies should I inject?

When using dependency injection, which dependencies do you inject?
I have previously injected all dependencies, but have found when doing TDD that there are typically two types of dependency:
Those which are genuine external dependencies that may change, e.g. ProductRepository
Those which exist purely for testability, e.g. part of the behaviour of the class that has been extracted and injected just for testability
One approach is to inject ALL dependencies like this
public ClassWithExternalDependency(IExternalDependency external,
                                   IExtractedForTestabilityDependency extracted)
{
    // assign dependencies ...
}
but I've found this can cause dependency bloat in the DI registry.
Another approach is to hide the "testability dependency" like this
public ClassWithExternalDependency(IExternalDependency external)
    : this(external, new ConcreteClassOfInternalDependency())
{}

internal ClassWithExternalDependency(IExternalDependency external,
                                     IExtractedForTestabilityDependency extracted)
{
    // assign dependencies ...
}
This is more effort, but seems to make a lot more sense. The downside is that not all objects are configured in the DI framework, which breaks a "best practice" I've heard about.
Which approach would you advocate and why?

I believe you're better off injecting all of your dependencies. If it starts to get a little unwieldy, that's probably an indication that you need to simplify things a bit or move the dependencies into another object. Feeling the "pain" of your design as you go can be really enlightening.
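For instance, here is a minimal sketch (all type names are hypothetical, not from the question) of moving two closely related dependencies behind a single, more meaningful abstraction so the consuming constructor shrinks:
public class Product
{
    public string Sku { get; set; }
    public decimal BasePrice { get; set; }
}

public interface IProductRepository { Product Find(string sku); }
public interface IDiscountPolicy { decimal Apply(decimal price); }

// One intention-revealing interface replaces two constructor parameters
// in every class that needs a price.
public interface IPricingService { decimal PriceFor(string sku); }

public class PricingService : IPricingService
{
    private readonly IProductRepository _products;
    private readonly IDiscountPolicy _discounts;

    public PricingService(IProductRepository products, IDiscountPolicy discounts)
    {
        _products = products;
        _discounts = discounts;
    }

    public decimal PriceFor(string sku)
    {
        return _discounts.Apply(_products.Find(sku).BasePrice);
    }
}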
As for dependency bloat in the registry, you might consider using some sort of conventional binding technique, rather than registering each dependency by hand. Some IoC containers have convention-based type-scanning bindings built into them. For example, here's part of a module I use in a Caliburn WPF application that uses Ninject:
public class AppModule : NinjectModule
{
    public override void Load()
    {
        Bind<IShellPresenter>().To<ShellPresenter>().InSingletonScope();
        BindAllResults();
        BindAllPresenters();
    }

    /// <summary>
    /// Automatically bind all presenters that haven't already been manually bound
    /// </summary>
    public void BindAllPresenters()
    {
        Type[] types = Assembly.GetExecutingAssembly().GetTypes();
        IEnumerable<Type> presenterImplementors =
            from t in types
            where !t.IsInterface
                  && t.Name.EndsWith("Presenter")
            select t;

        presenterImplementors.Run(
            implementationType =>
            {
                if (!Kernel.GetBindings(implementationType).Any())
                    Bind(implementationType).ToSelf();
            });
    }

    // BindAllResults() follows the same pattern for result types and is omitted here
}
Even though I have dozens of results and presenters running around, I don't have to register them explicitly.

I certainly won't inject all dependencies, because where would you stop? Do you want to inject your string dependencies? I only invert the dependencies that I need for unit testing. I want to stub my database (see this example for instance). I want to stub the sending of e-mail messages. I want to stub the system clock. I want to stub writing to the file system.
The thing about inverting as many dependencies as you can, even those you don't need for testing, is that it makes unit testing a lot harder, and the more you stub out, the less you really test how the system actually behaves. This makes your tests much less reliable. It also complicates your DI configuration in the application root.
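Taking the system clock as an example, a minimal sketch (the names are mine, not the answerer's) of inverting just that one dependency:
public interface IClock
{
    DateTime UtcNow { get; }
}

// Production implementation: delegates to the real clock.
public class SystemClock : IClock
{
    public DateTime UtcNow { get { return DateTime.UtcNow; } }
}

// Test implementation: a frozen clock makes time-dependent logic deterministic.
public class FixedClock : IClock
{
    private readonly DateTime _value;
    public FixedClock(DateTime value) { _value = value; }
    public DateTime UtcNow { get { return _value; } }
}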

I would wire all my non-external dependencies by hand and 'register' only external dependencies. By non-external, I mean the objects which belong to my component and which were extracted out to interfaces purely for the sake of single responsibility/testability; I would never have any other implementations of those interfaces. External dependencies are things like DB connections, web services, and interfaces that don't belong to my component. I would register those as interfaces because their implementations can be switched to stubbed ones for integration testing. Having a small number of components registered in a DI container keeps the DI code easy to read and free of bloat.
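A rough sketch of that split (hypothetical names; the container is whichever one you use): only the external dependency is resolved from the container, and the internal seams are newed up in one place.
public interface IProductRepository { }          // external: talks to the database
public class SqlProductRepository : IProductRepository { }

public class OrderParser { }                     // internal seam, extracted for testability
public class OrderValidator { }                  // internal seam, never swapped in production

public class OrderService
{
    private readonly IProductRepository _repository;
    private readonly OrderParser _parser;
    private readonly OrderValidator _validator;

    public OrderService(IProductRepository repository,
                        OrderParser parser,
                        OrderValidator validator)
    {
        _repository = repository;
        _parser = parser;
        _validator = validator;
    }
}

public static class CompositionRoot
{
    // The repository comes from the container; the rest is wired by hand.
    public static OrderService CreateOrderService(IProductRepository repositoryFromContainer)
    {
        return new OrderService(repositoryFromContainer,
                                new OrderParser(),
                                new OrderValidator());
    }
}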


Laravel 4: Facade vs DI (when to use)

My understanding is that a facade is used as an alternative to dependency injection. Please correct me if I'm mistaken. What is not clear is when one should use one or the other.
What are the advantages/disadvantages of each approach? How should I determine when to use one or the other?
Lastly, why not use both? I can create a facade that references an interface. It seems Sentry 2 is written this way. Is there a best practice?
FACADES
Facades are not an alternative to dependency injection.
Laravel Facade is an implementation of the Service Locator Pattern, creating a clean and beautiful way of accessing objects:
MyClass::doSomething();
This is the PHP syntax for a static method call, but Laravel changes the game and makes those calls non-static behind the scenes, giving you a beautiful, enjoyable and testable way of writing your applications.
DEPENDENCY INJECTION
Dependency Injection is, basically, a way of passing parameters to your constructors and methods while automatically instantiating them.
class MyClass {

    private $property;

    public function __construct(MyOtherClass $property)
    {
        /// Here you can use the magic of Dependency Injection
        $this->property = $property;
        /// $property already is an object of MyOtherClass
    }

}
A better construction would be to use an interface in your dependency-injected constructor:
class MyClass {

    private $property;

    public function __construct(MyInterface $property)
    {
        /// Here you can use the magic of Dependency Injection
        $this->property = $property;
        /// $property will receive an object of a concrete class that implements MyInterface.
        /// That binding is defined elsewhere in Laravel, and it also makes your application
        /// easier to maintain, because you can swap implementations of your interfaces easily.
    }

}
But note that in Laravel you can inject classes and interfaces the same way. To inject an interface you just have to tell Laravel which implementation to use:
App::bind('MyInterface', 'MyOtherClass');
This will tell Laravel that every time one of your methods needs an instance of MyInterface it should give it one of MyOtherClass.
What happens here is that the constructor has a "dependency": MyOtherClass, which will be automatically injected by Laravel using the IoC container. So, when you create an instance of MyClass, Laravel automatically creates an instance of MyOtherClass and puts it in the variable $property.
Dependency Injection is just an odd piece of jargon developers created for something as simple as "automatic generation of parameters".
WHEN TO USE ONE OR THE OTHER?
As you can see, they are completely different things, so you won't ever need to decide between them, but you will have to decide where to go with one or the other in different parts of your application.
Use Facades to ease the way you write your code. For example, it's good practice to create packages for your application modules, and creating Facades for those packages is a way to make them feel like native Laravel classes, accessed using the static syntax.
Use Dependency Injection every time your class needs to use data or processing from another class. It will make your code testable, because you will be able to "inject" a mock of those dependencies into your class, and you will also be exercising the single responsibility principle (take a look at the SOLID principles).
Facades, as noted, are intended to simplify a potentially complicated interface.
Facades are still testable
Laravel's implementation goes a step further and allows you to define the base-class that the Facade "points" to.
This gives a developer the ability to "mock" a Facade - by switching the base-class out with a mock object.
In that sense, you can use them and still have testable code. This is where some confusion lies within the PHP community.
DI is often cited as making your code testable - they make mocking class dependencies easy. (Sidenote: Interfaces and DI have other important reasons for existing!)
Facades, on the other hand, are often cited as making testing harder because you can't "simply inject a mock object" into whatever code you're testing. However, as noted, you can in fact "mock" them.
Facade vs DI
This is where people get confused regarding whether Facades are an alternative to DI or not.
In a sense, they both add a dependency to your class - You can either use DI to add a dependency or you can use a Facade directly - FacadeName::method($param);. (Hopefully you are not instantiating any class directly within another :D ).
This does not make Facades an alternative to DI, but instead, within Laravel, does create a situation where you may decide to add class dependencies one of 2 ways - either using DI or by using a Facade. (You can, of course, use other ways. These "2 ways" are just the most-often used "testable way").
Laravel's Facades are an implementation of the Service Locator pattern, not the Facade pattern.
In my opinion you should avoid service locator within your domain, opting to only use it in your service and web transport layers.
http://martinfowler.com/articles/injection.html#UsingAServiceLocator
I think that, in terms of Laravel, Facades help you keep your code simple and still testable, since you can mock them. However, it might be a bit harder to tell what a controller's dependencies are if you use Facades, since they are probably scattered all over your code.
With dependency injection you need to write a bit more code, since you have to create interfaces and services to handle the dependencies. However, it is much clearer later on what a controller depends on, since these dependencies are explicitly listed in the controller's constructor.
I guess it's a matter of deciding which method you prefer using.

DI Container and custom-scoped state in legacy system

I believe I understand the basic concepts of DI / IoC containers, having written a couple of applications using them and read a lot of Stack Overflow answers as well as Mark Seemann's book. There are still some cases I have trouble with, especially when it comes to integrating a DI container into a large existing architecture where the DI principle hasn't really been used (think big ball of mud).
I know the ideal scenario is to have a single composition root / object graph per operation, but in a legacy system this might not be possible without major refactoring (only the new parts, and some select refactored old parts, of the code could have dependencies injected through the constructor, with the rest of the system using the container as a service locator to interact with the new parts). This effectively means that a stack trace deep within an operation might include several object graphs, with calls being made back and forth between new subsystems (a single object graph until exiting into an old segment) and traditional subsystems (a service locator call at some point into code under the DI container).
With the (potentially faulty, I might be overthinking this or be completely wrong in assuming this kind of hybrid architecture is a good idea) assumptions out of the way, here's the actual problem:
Let's say we have a thread pool executing scheduled jobs of various types defined in a database (or any external place). Each separate type of scheduled job is implemented as a class inheriting from a common base class. When a job is started, it gets fed information about which targets it should write its log messages to and which configuration it should use. The configuration could probably be handled by just passing the values as method parameters to whatever class needs them, but if the job implementation grows beyond, say, 10-20 classes, that doesn't seem very manageable.
Logging is the larger problem. Subsystems the job calls probably also need to write to the log, and usually in examples this is done by simply requesting an instance of ILog in the constructor. But how does that work in this case, when we don't know the details / implementation until runtime? Since:
Due to (non-DI-container-controlled) legacy system segments in the call chain (meaning there are potentially multiple separate object graphs), a child container cannot be used to inject the custom logger for a specific sub-scope
Manual property injection would basically require the complete call chain (including all legacy subsystems) to be updated
A simplified example to illustrate the problem:
class JobXImplementation : JobBase
{
    // injected through the constructor
    ILoggerFactory _loggerFactory;
    JobXExtraLogic _jobXExtras;

    public void Run(JobConfig configurationFromDatabase)
    {
        ILog log = _loggerFactory.Create(configurationFromDatabase.targets);
        // If there were no legacy parts in the call chain, I would register log
        // as an instance in a child container, Resolve the next part of the call
        // chain from it, and everyone requesting ILog would get the correct
        // logging targets.

        // do stuff
        _jobXExtras.DoStuff(configurationFromDatabase, log);
    }
}

class JobXExtraLogic
{
    public void DoStuff(JobConfig configurationFromDatabase, ILog log)
    {
        // call into a legacy sub-system
        var old = new OldClass(log, configurationFromDatabase.SomeRandomSetting);
        old.DoOldStuff();
    }
}

class OldClass
{
    public void DoOldStuff()
    {
        // moar stuff
        var old = new AnotherOldClass();
        old.DoMoreOldStuff();
    }
}

class AnotherOldClass
{
    public void DoMoreOldStuff()
    {
        // call into a new subsystem
        var newSystemEntryPoint = DIContainerAsServiceLocator.Resolve<INewSubsystemEntryPoint>();
        newSystemEntryPoint.DoNewStuff();
    }
}

class NewSubsystemEntryPoint : INewSubsystemEntryPoint
{
    public void DoNewStuff()
    {
        // want to log something...
    }
}
I'm sure you get the picture by this point.
Instantiating old classes through DI is a non-starter, since many of them use (often multiple) constructors to inject values instead of dependencies, and they would have to be refactored one by one. The caller basically controls the lifetime of the object implicitly, and the implementations assume this (in the way they handle internal object state).
What are my options? What other kinds of problems could you possibly see in a situation like this? Is trying to only use constructor injection in this kind of environment even feasible?
Great question. In general, I would say that an IoC container loses a lot of its effectiveness when only a portion of the code is DI-friendly.
Books like Working Effectively with Legacy Code and Dependency Injection in .NET both talk about ways to tease apart objects and classes to make DI viable in code bases like the one you described.
Getting the system under test would be my first priority. I'd pick a functional area to start with, one with few dependencies on other functional areas.
I don't see a problem with moving beyond constructor injection to setter injection where it makes sense, and it might offer you a stepping stone to constructor injection. Adding a property is usually less invasive than changing an object's constructor.
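As a sketch (hypothetical names, and assuming a null-object default so existing callers keep working), property injection can open a testing seam without touching a legacy constructor:
public interface ILog
{
    void Write(string message);
}

public class NullLog : ILog
{
    public void Write(string message) { }
}

public class LegacyReportJob
{
    // New seam: a test (or the container) can overwrite this property,
    // while untouched legacy callers silently get the null logger.
    public ILog Log { get; set; }

    public LegacyReportJob()
    {
        Log = new NullLog();
    }

    public void Run()
    {
        Log.Write("starting report run");
        // ... existing legacy logic stays as-is ...
    }
}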

Dependency injection question

I have a question regarding the dependency injection pattern.
My question is...
If I go for constructor injection, injecting all the dependencies for my class, what I get is a "big" constructor with many parameters.
What if, for example, some methods don't use some of those parameters?
Say I have a service that exposes many methods, and a constructor with 10 parameters (all dependencies). Not all the methods use all the dependencies: one method will use only one dependency, another will use three. But the DI container will resolve them all, even if none are used.
To me this looks like a performance penalty of using a DI container. Is this true?
It seems your class is doing too much, so it does not comply with the S in SOLID (the Single Responsibility Principle); maybe you could split the class into multiple smaller classes with fewer dependencies. The fact that not all dependencies are used by all methods suggests this.
Normally the performance penalty of injecting many dependencies is low, but it depends on the framework you pick; some compile their object-creation code on the fly, so you will have to test this. Many dependencies do indicate that your class is doing too much (as Ruben said), so you might want to take a look at that. If creating an instance of a dependency that you often don't use causes performance problems, you might want to introduce a factory as the dependency. I have found that the use of factories can solve many problems regarding the use of dependency injection frameworks.
// Constructor
public Consumer(IContextFactory contextFactory)
{
    this.contextFactory = contextFactory;
}

public void DoSomething()
{
    var context = this.contextFactory.CreateNew();
    try
    {
        // use context here
        context.Commit();
    }
    finally
    {
        context.Dispose();
    }
}
You can also hide not-yet-needed dependencies behind lazy providers. For instance:
public class DataSourceProvider implements Provider<DataSource> {

    public DataSource get() {
        return lazyGetDataSource();
    }
}
The Provider interface is part of the javax.inject package.
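In .NET, the same idea can be sketched with Lazy<T> (some containers, Autofac for instance, can supply Lazy<T> automatically; with others you may need to wire the delegate up yourself; IDataSource and ReportService are hypothetical names):
public interface IDataSource { }

public class ReportService
{
    private readonly Lazy<IDataSource> _dataSource;

    public ReportService(Lazy<IDataSource> dataSource)
    {
        _dataSource = dataSource;
    }

    public void Generate()
    {
        // The data source is only created on first access to .Value,
        // so constructing ReportService itself stays cheap.
        var source = _dataSource.Value;
        // ... use source ...
    }
}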
Actually, you can't know which methods will be used at runtime when you build your DI container. You would have to live with that performance penalty, or, if you know there are many cases where only a few dependencies are used, you could split your container into several small containers with fewer injected dependencies.
As Ruben says, you should probably review the design of your class to stick to SOLID principles.
Anyway, when it is not strictly necessary, I tend to go for property (setter) injection instead of constructor injection: you create a property for each dependency you need. That also helps when testing the class, because you can inject only the dependency you need in the context of the test at hand, instead of stubbing out every dependency even when you don't need it.

ASP.NET MVC IoC usability

How often do you use IoC for controllers/DAL in real projects?
IoC allows you to abstract the application from concrete implementations with an additional layer of interfaces that must be implemented. But how often does a concrete implementation change? Should we really do the job twice, adding a method to the interface and then to the implementation, if the implementation will hardly ever be changed? I took part in about 10 ASP.NET projects, and the DAL (ORM-like and not) was never rewritten completely.
Watching lots of videos, I clearly understand that IoC "is cool" and a really nice way to program, but is it really needed?
Added a bit later:
Yes, IoC makes it easier to prepare a testing environment, but we also have a nice way to test the DAL without IoC: we wrap DAL calls to the database in uncommitted transactions, with no risk of making the data unstable.
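For reference, a minimal sketch of the transaction-wrapping test style described above (assuming System.Transactions; the names are hypothetical): the scope is disposed without Complete(), so every change rolls back.
using System.Transactions;

public class DalTests
{
    public void InsertClient_CanBeReadBack()
    {
        using (var scope = new TransactionScope())
        {
            // insert test rows and exercise the DAL here...
            // assert on the results...

            // no scope.Complete() call, so Dispose() rolls everything back
        }
    }
}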
IoC isn't a pattern only for writing modular programs; it also allows for easier testing, by being able to swap in mock objects that implement the same interface as the components they stand in for.
Plus, it actually makes code much easier to maintain down the road.
It's not IoC that allows you to abstract the application from concrete implementations with an additional layer of interfaces; that is how you should design your application in order for it to be more modular and reusable. Another important benefit is that, once you've designed your application this way, it becomes much easier to test the different parts in isolation, without depending on concrete database access, for example.
There's much more to IoC than the ability to change implementations:
testing
explicit dependencies - not hidden inside a private DataContext
automatic instantiation - you declare in the constructor that you need something, and you get it, with all deeply nested dependencies resolved
separation of assemblies - take a look at S#arp Architecture to see how IoC lets you avoid referencing NHibernate and other specific assemblies that you would otherwise have to reference
management of lifetime - the ability to specify a per-request / singleton / transient lifetime for objects and change it in one place, instead of in dozens of controllers (see the sketch below)
the ability to do dynamic stuff, like getting the correct data context in model binders, because with IoC you now have metadata about your dependencies; this suggests that IoC does for your object dependencies what reflection does for C# programming - it opens up a lot of new possibilities you never even thought about
And so on; I'm sure I missed a lot of positive stuff. The only "bad" thing I can think of (and that you mentioned) is duplication of interfaces, which is a non-issue with modern IDEs' support for refactoring.
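To sketch the lifetime point (a Windsor-flavoured API, matching the registrations shown in a later answer; the types here are hypothetical):
using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IUnitOfWork { }
public class EfUnitOfWork : IUnitOfWork { }

public static class ContainerConfig
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();

        // The lifetime lives in exactly one place; changing PerWebRequest
        // to Singleton here changes it for every consumer at once.
        container.Register(Component.For<IUnitOfWork>()
                                    .ImplementedBy<EfUnitOfWork>()
                                    .LifeStyle.PerWebRequest);
        return container;
    }
}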
Well, if your data interfaces change every day and you have hundreds of them, you may want to avoid IoC.
But do you avoid good design practices just because it's harder to follow them? Do you copy and paste code instead of extracting a method or class, just because that takes more time and more code? Do you place business logic in views just because it's harder to create view models and sync them with domain models? If yes, then you can avoid IoC, no problem.
You're arguing that using IOC takes MORE code than not using it. I disagree.
Here is the entire DAL IOC configuration for one of my projects using LinqToSql. The ContextProvider class is simply a thread safe LinqToSql context factory.
container.Register(Component.For<IContextProvider<LSDataContext>, IContextProvider>().LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<LSDataContext>>());
container.Register(Component.For<IContextProvider<WorkSheetDataContext>, IContextProvider>().LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<WorkSheetDataContext>>());
container.Register(Component.For<IContextProvider<OffersReadContext>, IContextProvider>().LifeStyle.PerWebRequest.ImplementedBy<ContextProvider<OffersReadContext>>());
Here is the entire DAL configuration for one of my projects using NHibernate and the repository pattern:
container.Register(Component.For<NHSessionBuilder>().LifeStyle.Singleton);
container.Register(Component.For(typeof(IRepository<>)).ImplementedBy(typeof(NHRepositoryBase<>)));
Here is how I consume the DAL in my BLL (w/ dependency injection):
public class ClientService
{
    private readonly IRepository<Client> _Clients;

    public ClientService(IRepository<Client> clients)
    {
        _Clients = clients;
    }

    public IEnumerable<Client> GetClientsWithGoodCredit()
    {
        return _Clients.Where(c => c.HasGoodCredit);
    }
}
Note that my IRepository<> interface inherits IQueryable<> so this code is very trivial!
Here's how I can test my BLL without connecting to a DB:
public void GetClientsWithGoodCredit_ReturnsClientWithGoodCredit()
{
    var clientWithGoodCredit = new Client() { HasGoodCredit = true };
    var clientWithBadCredit = new Client() { HasGoodCredit = false };
    var clients = new List<Client>() { clientWithGoodCredit, clientWithBadCredit }.ToTestRepository();
    var service = new ClientService(clients);

    var clientsWithGoodCredit = service.GetClientsWithGoodCredit();

    Assert(clientsWithGoodCredit.Count() == 1);
    Assert(clientsWithGoodCredit.First() == clientWithGoodCredit);
}
ToTestRepository() is an extension method that returns a fake IRepository<> that uses an in-memory list.
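The helper itself isn't shown in the answer, but a plausible sketch (my guess at an implementation, relying on the stated fact that IRepository<> inherits IQueryable<>) could look like this:
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public interface IRepository<T> : IQueryable<T> { }

public class TestRepository<T> : IRepository<T>
{
    private readonly IQueryable<T> _inner;

    public TestRepository(IEnumerable<T> items)
    {
        _inner = items.AsQueryable();
    }

    public Type ElementType { get { return _inner.ElementType; } }
    public Expression Expression { get { return _inner.Expression; } }
    public IQueryProvider Provider { get { return _inner.Provider; } }

    public IEnumerator<T> GetEnumerator() { return _inner.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}

public static class TestRepositoryExtensions
{
    // Adapts an in-memory list so it can stand in for the real repository.
    public static IRepository<T> ToTestRepository<T>(this IEnumerable<T> items)
    {
        return new TestRepository<T>(items);
    }
}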
There is no possible way you can argue that this is more complicated than newing up your DAL all over your BLL.
The only way you could otherwise have written the above test is by connecting to a DB, saving some test clients, and then querying. I guarantee that takes 100+ times longer to execute than this did. (Multiply that by 1000 tests and you can go get some coffee while you wait.)
Also, by using uncommitted transactions for testing you introduce debugging nightmares resulting from ORMs that don't query over uncommitted entities.

Why not pass your IoC container around?

On this AutoFac "Best Practices" page (http://code.google.com/p/autofac/wiki/BestPractices), they say:
Don't Pass the Container Around
Giving components access to the container, or storing it in a public static property, or making functions like Resolve() available on a global 'IoC' class defeats the purpose of using dependency injection. Such designs have more in common with the Service Locator pattern.
If components have a dependency on the container, look at how they're using the container to retrieve services, and add those services to the component's (dependency injected) constructor arguments instead.
So what would be a better way to have one component "dynamically" instantiate another? Their second paragraph doesn't cover the case where the component that "may" need to be created will depend on the state of the system. Or when component A needs to create X number of component B.
To abstract away the instantiation of another component, you can use the Factory pattern:
public interface IComponentBFactory
{
    IComponentB CreateComponentB();
}

public class ComponentA : IComponentA
{
    private IComponentBFactory _componentBFactory;

    public ComponentA(IComponentBFactory componentBFactory)
    {
        _componentBFactory = componentBFactory;
    }

    public void Foo()
    {
        var componentB = _componentBFactory.CreateComponentB();
        ...
    }
}
Then the implementation can be registered with the IoC container.
A container is one way of assembling an object graph, but it certainly isn't the only way. It is an implementation detail. Keeping the objects free of this knowledge decouples them from infrastructure concerns. It also keeps them from having to know which version of a dependency to resolve.
Autofac actually has some special functionality for exactly this scenario - the details are on the wiki here: http://code.google.com/p/autofac/wiki/DelegateFactories.
In essence, if A needs to create multiple instances of B, A can take a dependency on Func<B> and Autofac will generate an implementation that returns new Bs out of the container.
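A small sketch of that idea (A and B are hypothetical; the registration uses Autofac's ContainerBuilder):
public class B { }

public class A
{
    private readonly Func<B> _createB;

    // Autofac sees the Func<B> parameter and injects a delegate that
    // resolves a fresh B from the container on each call.
    public A(Func<B> createB)
    {
        _createB = createB;
    }

    public void UseSeveral()
    {
        var first = _createB();
        var second = _createB();   // a second, separate instance
    }
}

// Composition root:
// var builder = new ContainerBuilder();
// builder.RegisterType<A>();
// builder.RegisterType<B>();
// var container = builder.Build();
// var a = container.Resolve<A>();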
The other suggestions above are of course valid - Autofac's approach has a couple of differences:
It avoids the need for a large number of factory interfaces
B (the product of the factory) can still have dependencies injected by the container
Hope this helps!
Nick
An IoC container takes on the responsibility of determining which version of a dependency a given object should use. This is useful for doing things like creating chains of objects that implement an interface as well as having a dependency on that interface (similar to a chain of command or decorator pattern).
By passing your container around, you put the onus on the individual object to get its appropriate dependencies, so it has to know how to. With typical IoC usage, the object only needs to declare that it has a dependency, not think about selecting between multiple available implementations of that dependency.
Service Locator patterns are more difficult to test and it certainly is more difficult to control dependencies, which may lead to more coupling in your system than you really want.
If you really want something like lazy instantiation, you may still opt for the Service Locator style (it doesn't kill you straight away, and if you stick to the container's interface it is not too hard to test with a mocking framework). Bear in mind, though, that instantiating a class that doesn't do much (or anything) in its constructor is immensely cheap.
The containers I have come to know (not Autofac so far) will let you modify which dependencies should be injected into which instance depending on the state of the system, such that even those decisions can be externalized into the configuration of the container.
This can provide you plenty of flexibility without resorting to implementing interaction with the container based on some state you access in the instance consuming dependencies.
