Dependency injection and state setup

I'd like to ask: who is responsible for setting up an injected dependency's state?
For example, when class A depends on class B, is it instance A's responsibility to set up instance B, or should that be done somewhere else? Why?
My question is really a general one, but here is the concrete situation:
I have a Context class which handles interactions with, and state changes to, a given chart; e.g. you can switch between two series. However, this class also sets up the chart's look and displayed data by delegating to other (injected) classes. Currently the Context constructor sets up its dependencies' state based on its own constructor parameters (e.g. highlighting one point on the chart, telling which series to display, etc.). I'm not sure this design is good and would like to get a deeper understanding of the right way. The programming language is JavaScript (if it matters).
Thanks,
Peter

when class A depends on class B, is instance A's responsibility to setup instance B or should it be done somewhere else?
If A were responsible for setting up B, A would violate the Dependency Inversion Principle (DIP), which says:
High-level modules should not depend on low-level modules. Both should
depend on abstractions.
The DIP is the driving force behind the Dependency Injection pattern.
But if A can't be responsible for creating B, who is? The answer to this question is the Composition Root:
A Composition Root is a (preferably) unique location in an application
where modules are composed together.
Using a Composition Root is the only way that you can keep both A and B (and everything else in the graph) free from having a dependency on a different module.
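For illustration, a minimal sketch of this idea in C# (the chart-related names are hypothetical stand-ins for A and its dependencies, not taken from the question's code):
```
public class SeriesRenderer
{
    // Hypothetical dependency; its initial state is decided by the caller.
    public SeriesRenderer(int visibleSeries) { /* ... */ }
}

public class PointHighlighter
{
    public PointHighlighter(int highlightedPoint) { /* ... */ }
}

public class ChartContext
{
    private readonly SeriesRenderer _renderer;
    private readonly PointHighlighter _highlighter;

    // ChartContext receives ready-to-use collaborators; it does not configure them.
    public ChartContext(SeriesRenderer renderer, PointHighlighter highlighter)
    {
        _renderer = renderer;
        _highlighter = highlighter;
    }
}

public static class CompositionRoot
{
    public static ChartContext Compose()
    {
        // All state setup happens here, near the application's entry point.
        var renderer = new SeriesRenderer(visibleSeries: 1);
        var highlighter = new PointHighlighter(highlightedPoint: 0);
        return new ChartContext(renderer, highlighter);
    }
}
```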

Related

Object lifecycle management and IoC containers

I'm updating a game from single player to multiplayer. In this case the game was originally written with most classes being single instanced. e.g. there was a single Player object, a single GameState object, etc. That is, each of these objects lived as long as the application.
Now that more than one player can play at once I obviously need to support creating more than one Player object, GameState object, etc. Over the course of working on this I have come to realize that most objects have one of three lifespans:
App's lifespan, e.g. a Conductor to handle navigation
Player's lifespan, e.g. the SettingsViewModel for the current player
Game's lifespan, e.g. the GameState for the current game
I'm curious how others deal with the creation of these different objects using an IoC container. I want to avoid creating factory classes for each class with a player or game lifespan.
Here is an example of IoC that may help. The project is called IOC-with-Ninject. It uses Ninject plus an IoC container class to manage all object lifespans. You will need to do a little research on Ninject to customize it to your specific needs, but this is your IoC container solution (IMHO) if you are using .NET, and it will help you organize your code base. This is a personal choice, but I swear by it. If you are not using .NET, it will still give you an easy pattern to follow. Cheers.
Many IoC containers have custom life-cycle scopes which you can manage as you wish. For example, in Ninject you can define your own life-cycle scope as follows:
kernel.Bind<IService>().To<Service>().InScope(c => yourCustomScope);
As long as the yourCustomScope variable has not changed, a single instance of the Service object is returned each time the kernel receives a request for IService. As soon as the yourCustomScope variable changes, a new instance of Service will be created on the next request for IService. yourCustomScope can be the current player instance, the game object, or anything else whose reference change should reset the lifetime of the Service object.
However, the objects you just mentioned are more likely to be entities than services, and I don't think injection is a good idea for entities.
From my experience the factories approach works the best.
Controlling instance lifespans through the container is clunky to maintain: it requires knowledge of every class's lifespan requirements and dependencies, plus time spent configuring and then managing that configuration. Using factories, in contrast, is natural and specific to the code at hand.
Writing factory implementations can be avoided by using proxy factories. You can also have factories return generic arguments to further reduce the number of factory interfaces that need to be created.
If still too many factories are required I suggest reviewing the code flow.
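As an example of the generic approach, a single generic factory interface (a sketch, not tied to any particular container) can stand in for many specific factory interfaces:
```
public interface IFactory<T>
{
    T Create();
}

public class Player { }

// Hypothetical consumer: it can request new players without knowing how they are built.
public class GameSession
{
    private readonly IFactory<Player> _playerFactory;

    public GameSession(IFactory<Player> playerFactory)
    {
        _playerFactory = playerFactory;
    }

    public Player AddPlayer()
    {
        return _playerFactory.Create();
    }
}
```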
I think this is in part a rehash of some of the comments on the previous answers, but I have tried to expand a little on some of the reasoning and illustrate it with an example.
Once you get into the domain of managing injected objects lifespan, you probably should be creating factories for these objects.
The underlying problem is that the composition root is not aware of the environmental context of the call that will eventually need to create the object.
I think I should take a step back and explain at this point.
Received wisdom on dependency injection is to have a composition root somewhere near the entry point of the code. There are many good reasons for this that are not difficult to find on the web, so I won't go into them here.
The composition root is where you map your interfaces (usually, but possibly objects) to their implementations. You can pass information that is available at this point into the constructor, so you can pass in a reference to an object whose lifetime is current at the time the composition root executes.
However, if the lifetime of the composition root does not overlap with the lifetime of the object you want to create, you have to defer the execution of the constructor until the object needs to be created. This is why you need a factory: you can pass a factory method into your mapping at this point, and thus pass in the information needed to generate the object, but allow the creation to happen at the time it is required rather than when the composition root is executed.
You do not need a factory class to do this; factory methods are fine. Moreover, the factory method can be inlined, so the code overhead is not much more than if we were creating the objects in the composition root.
If we have a project with two services, where the first service depends on the second, and we only want the lifetime of the second service to start when we create the first service, we might have something like the following. (I am using Ninject for the code example, but I expect that other IoC containers work similarly in this respect.)
```
using System;
using Ninject;

public interface IService1 { void DoSomethingUsingService2(); }
public interface IService2 { void DoSomething(); }

public class Service2 : IService2
{
    public void DoSomething() { }
}

public class Service1 : IService1
{
    // The factory method defers the creation of Service2 until it is needed.
    private readonly Func<IService2> _service2Factory;

    public Service1(Func<IService2> service2Factory)
    {
        _service2Factory = service2Factory;
    }

    public void DoSomethingUsingService2()
    {
        var service2 = _service2Factory();
        service2.DoSomething();
    }
}

public class MainClass
{
    public void CompositionRoot()
    {
        var kernel = new StandardKernel();
        kernel.Bind<IService2>().To<Service2>();
        // The factory method is inlined in the mapping; Service2 is only
        // resolved when Service1 actually invokes the factory.
        kernel.Bind<IService1>().ToMethod(m => new Service1(() => m.Kernel.Get<IService2>()));
    }
}
```
This example does not address how you would manage the app, player, and game lifespans, but hopefully it gives sufficient clues as to how to remove the lifetime issues related to dependency injection.
Side note: using Ninject you would be able to change the scope of Service2 in order to extend its lifetime beyond the lifetime of Service1. For example, if you knew each instance of a game were to run on its own thread (OK, this may be somewhat unlikely), you might use InThreadScope for the game.

Inverting the dependency in inheritance

I have two classes, A and B. B inherits from A, and I want to invert the dependency.
class A { }
class B : A { }
Class B inherits from A, which means B has a dependency on A.
What would be the correct way to invert the dependency?
Inheritance is a concept implying tight coupling between classes.
In order to use Dependency Injection you need to create some "Seams", as Michael Feathers calls them in Working Effectively with Legacy Code. Here you can find a definition of Seam:
A seam is a place where you can alter behavior in your program without
editing in that place.
When you have a seam, you have a place where behavior can change.
By this definition, there is no Seam in your example, which is not necessarily a bad thing. The question is now, why do you feel the need to do Dependency Injection in this place?
If it's for the sake of example, don't do Dependency Injection here. There are places where it does not really make sense to apply it: if you have no volatility, why would you do it?
If you do really feel the need to do something similar in your project though, you probably want to decouple the volatile concepts out of your inheritance hierarchy and create a Seam for these parts: you could have an interface to abstract these concepts, which at this point can be effectively injected into your client class.
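A minimal sketch of that idea (the interface and behavior names are made up for illustration):
```
// The volatile behavior is pulled out of the hierarchy and hidden
// behind an interface: this is the Seam.
public interface IVolatileBehavior
{
    void Execute();
}

public class A
{
    private readonly IVolatileBehavior _behavior;

    // The behavior is injected, so it can vary without editing A or B.
    public A(IVolatileBehavior behavior)
    {
        _behavior = behavior;
    }

    public void DoWork()
    {
        _behavior.Execute();
    }
}

public class B : A
{
    public B(IVolatileBehavior behavior) : base(behavior) { }
}
```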

Why not use an IoC container to resolve dependencies for entities/business objects?

I understand the concept behind DI, but I'm just learning what different IoC containers can do. It seems that most people advocate using IoC containers to wire up stateless services, but what about using them for stateful objects like entities?
Whether it's right or wrong, I normally stuff my entities with behavior, even if that behavior requires an outside class. Example:
public class Order : IOrder
{
    private string _ShipAddress;
    private IShipQuoter _ShipQuoter;

    public Order(IOrderData OrderData, IShipQuoter ShipQuoter)
    {
        // OrderData comes from a repository and has the data needed
        // to construct the order
        _ShipAddress = OrderData.ShipAddress; // etc.
        _ShipQuoter = ShipQuoter;
    }

    private decimal GetShippingRate()
    {
        return _ShipQuoter.GetRate(this);
    }
}
As you can see, the dependencies are Constructor Injected. Now for a couple of questions.
Is it considered bad practice to have your entities depend on outside classes such as the ShipQuoter? Eliminating these dependencies seems to lead me towards an anemic domain, if I understand the definition correctly.
Is it bad practice to use an IoC container to resolve these dependencies and construct an entity when needed? Is it possible to do this?
Thanks for any insight.
The first question is the most difficult to answer. Is it bad practice to have Entities depend on outside classes? It's certainly not the most common thing to do.
If, for example, you inject a Repository into your Entities you effectively have an implementation of the Active Record pattern. Some people like this pattern for the convenience it provides, while others (like me) consider it a code smell or anti-pattern because it violates the Single Responsibility Principle (SRP).
You could argue that injecting other dependencies into Entities would pull you in the same direction (away from SRP). On the other hand you are certainly correct that if you don't do this, the pull is towards an Anemic Domain Model.
I struggled with all of this for a long time until I came across Greg Young's (abandoned) paper on DDDD, where he explains why the stereotypical n-tier/n-layer architecture will always be CRUDy (and thus rather anemic).
Moving our focus to modeling Domain objects as Commands and Events instead of Nouns seems to enable us to build a proper object-oriented domain model.
The second question is easier to answer. You can always use an Abstract Factory to create instances at run-time. With Castle Windsor you can even use the Typed Factory Facility, relieving you of the burden of implementing the factories manually.
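As a rough sketch of that Abstract Factory idea applied to the Order example above (the factory interface is made up, not part of Castle Windsor):
```
// The factory holds the injected service; the run-time data arrives per call.
public interface IOrderFactory
{
    IOrder Create(IOrderData orderData);
}

public class OrderFactory : IOrderFactory
{
    private readonly IShipQuoter _shipQuoter;

    public OrderFactory(IShipQuoter shipQuoter)
    {
        _shipQuoter = shipQuoter;
    }

    public IOrder Create(IOrderData orderData)
    {
        // Combines run-time data with the container-supplied dependency.
        return new Order(orderData, _shipQuoter);
    }
}
```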
I know this is an old post, but I wanted to add something: the domain entity should not persist itself, even if you pass an abstracted repository into its constructor. The reason I suggest this is not merely that it violates the SRP; it is also contrary to DDD's notion of aggregation. Let me explain: DDD is suited for complex apps with inherently deep graphs, so we use aggregate (composite) roots to persist changes to the underlying "children". When we inject persistence into the individual children, we violate the relationship those children have to the composite or aggregate root, which should be "in charge" of their life cycle. Of course, the composite root or aggregate does not persist its own graph either.
Another issue with injecting dependencies into DDD objects is that an injected domain object effectively has no state until some other event takes place to hydrate its state. Any consumer of the code is forced to initialize or set up the domain object before they can invoke its business behavior, which violates encapsulation.
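To illustrate the aggregation point, a rough sketch (the types are made up): only the aggregate root crosses the repository boundary, and the children never persist themselves.
```
using System.Collections.Generic;

public class OrderLine
{
    public string Sku { get; private set; }
    public int Quantity { get; private set; }

    public OrderLine(string sku, int quantity)
    {
        Sku = sku;
        Quantity = quantity;
    }
}

// The aggregate root owns the life cycle of its children.
public class OrderAggregate
{
    private readonly List<OrderLine> _lines = new List<OrderLine>();

    public IEnumerable<OrderLine> Lines
    {
        get { return _lines; }
    }

    public void AddLine(string sku, int quantity)
    {
        _lines.Add(new OrderLine(sku, quantity));
    }
}

// Persistence is applied to the whole aggregate from the outside,
// not injected into the root or its children.
public interface IOrderRepository
{
    void Save(OrderAggregate order);
}
```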

Avoiding dependency carrying

When coding, I often come across the following pattern:
A method calls another method (fine), but the callee takes parameters, so the wrapping method must also accept those parameters just to pass them along. The problem is that this dependency carrying can go on and on. How can I avoid this (any sample code appreciated)?
Thanks
Passing a parameter along just because a lower-layer component needs it is a sign of a Leaky Abstraction. It can often be more effective to refactor dependencies to aggregate services and hide each dependency behind an interface.
Cross-cutting concerns (which are often the most common reason to pass along parameters) are best addressed by Decorators.
If you use a DI Container with interception capabilities, you can take advantage of those to implement Decorators very efficiently (some people refer to this as a container's AOP capabilities).
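For example, a sketch of a Decorator that handles a cross-cutting concern (logging) so the extra parameter never has to travel through the call chain (the interface and names are illustrative):
```
using System;

public interface IOrderProcessor
{
    void Process(int orderId);
}

public class OrderProcessor : IOrderProcessor
{
    public void Process(int orderId)
    {
        // Core business logic only; no logging parameters passed through.
    }
}

// The cross-cutting concern lives in a Decorator around the same interface.
public class LoggingOrderProcessor : IOrderProcessor
{
    private readonly IOrderProcessor _inner;
    private readonly Action<string> _log;

    public LoggingOrderProcessor(IOrderProcessor inner, Action<string> log)
    {
        _inner = inner;
        _log = log;
    }

    public void Process(int orderId)
    {
        _log("Processing order " + orderId);
        _inner.Process(orderId);
    }
}
```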
You can use a dependency injection framework. One such is Guice: see http://code.google.com/p/google-guice/
Step 1: Instead of passing everything as separate arguments, group the arguments into a class, let's say X.
Step 2: Add getters to class X for the relevant information. The callee should use the getters to get the information instead of relying on parameters.
Step 3: Create an interface class from which class X inherits. Put all the getters in the interface (in C++ these are pure virtual methods).
Step 4: Make the called methods depend only on the interface, as in the sketch below.
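A sketch of those steps (the names are hypothetical):
```
// Step 3: the interface containing the getters; Step 4: callees depend only on this.
public interface IRenderContext
{
    int Width { get; }
    int Height { get; }
    string Title { get; }
}

// Steps 1 and 2: the class grouping the arguments, exposing them via getters.
public class RenderContext : IRenderContext
{
    public int Width { get; private set; }
    public int Height { get; private set; }
    public string Title { get; private set; }

    public RenderContext(int width, int height, string title)
    {
        Width = width;
        Height = height;
        Title = title;
    }
}

public class Renderer
{
    // The callee takes the interface instead of a growing parameter list.
    public void Draw(IRenderContext context)
    {
        // ... use context.Width, context.Height and context.Title ...
    }
}
```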
Refactoring: Introduce Parameter Object
You have a group of parameters that naturally go together?
Replace them with an object.
http://www.refactoring.com/catalog/introduceParameterObject.html
The advantage of the parameter object is that the calls passing them around don't need to change if you add/remove parameters.
(given the context of your answers, I don't think that an IoC library or dependency injection patterns are really what you're after)
Since they cannot be (easily) unit tested, most developers choose to inject objects into Views. Since the Views are not (normally) used to construct other Views, that is where your DI chain ends. You may run into the issue (which I have hit every once in a while) where you need to construct objects in the correct order, especially when using a DI framework like Unity, where an attempt to resolve the object will deadlock. The main thing you need to worry about is circular dependency. To deal with that, read the following article:
Can dependency injection prevent a circular dependency?

Why not pass your IoC container around?

On this AutoFac "Best Practices" page (http://code.google.com/p/autofac/wiki/BestPractices), they say:
Don't Pass the Container Around
Giving components access to the container, or storing it in a public static property, or making functions like Resolve() available on a global 'IoC' class defeats the purpose of using dependency injection. Such designs have more in common with the Service Locator pattern.
If components have a dependency on the container, look at how they're using the container to retrieve services, and add those services to the component's (dependency injected) constructor arguments instead.
So what would be a better way to have one component "dynamically" instantiate another? Their second paragraph doesn't cover the case where the component that "may" need to be created depends on the state of the system, or the case where component A needs to create X number of component B instances.
To abstract away the instantiation of another component, you can use the Factory pattern:
public interface IComponentBFactory
{
    IComponentB CreateComponentB();
}

public class ComponentA : IComponentA
{
    private IComponentBFactory _componentBFactory;

    public ComponentA(IComponentBFactory componentBFactory)
    {
        _componentBFactory = componentBFactory;
    }

    public void Foo()
    {
        var componentB = _componentBFactory.CreateComponentB();
        // ...
    }
}
Then the implementation can be registered with the IoC container.
A container is one way of assembling an object graph, but it certainly isn't the only way. It is an implementation detail. Keeping the objects free of this knowledge decouples them from infrastructure concerns. It also keeps them from having to know which version of a dependency to resolve.
Autofac actually has some special functionality for exactly this scenario - the details are on the wiki here: http://code.google.com/p/autofac/wiki/DelegateFactories.
In essence, if A needs to create multiple instances of B, A can take a dependency on Func<B> and Autofac will generate an implementation that returns new Bs out of the container.
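A minimal sketch of that Func<B> approach (assuming standard Autofac registrations; this is not code taken from the Autofac wiki):
```
using System;
using System.Collections.Generic;
using Autofac;

public class B { }

public class A
{
    private readonly Func<B> _bFactory;

    // Autofac supplies this Func<B> automatically once B is registered.
    public A(Func<B> bFactory)
    {
        _bFactory = bFactory;
    }

    public List<B> CreateBs(int count)
    {
        var result = new List<B>();
        for (var i = 0; i < count; i++)
        {
            // Each call resolves a fresh B from the container
            // (with the default instance-per-dependency lifetime).
            result.Add(_bFactory());
        }
        return result;
    }
}

public static class Program
{
    public static void Main()
    {
        var builder = new ContainerBuilder();
        builder.RegisterType<A>();
        builder.RegisterType<B>();

        using (var container = builder.Build())
        {
            var a = container.Resolve<A>();
            var bs = a.CreateBs(3);
        }
    }
}
```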
The other suggestions above are of course valid - Autofac's approach has a couple of differences:
It avoids the need for a large number of factory interfaces
B (the product of the factory) can still have dependencies injected by the container
Hope this helps!
Nick
An IoC takes the responsibility for determining which version of a dependency a given object should use. This is useful for doing things like creating chains of objects that implement an interface as well as having a dependency on that interface (similar to a chain of command or decorator pattern).
By passing your container, you are putting the onus on the individual object to get the appropriate dependency, so it has to know how to. With typical IoC usage, the object only needs to declare that it has a dependency, not think about selecting between multiple available implementations of that dependency.
Service Locator patterns are more difficult to test and it certainly is more difficult to control dependencies, which may lead to more coupling in your system than you really want.
If you really want something like lazy instantiation, you may still opt for the Service Locator style (it doesn't kill you straight away, and if you stick to the container's interface it is not too hard to test with a mocking framework). Bear in mind, though, that instantiating a class that doesn't do much (or anything) in its constructor is immensely cheap.
The containers I have come to know (not Autofac so far) will let you modify which dependencies should be injected into which instance depending on the state of the system, such that even those decisions can be externalized into the configuration of the container.
This can provide you plenty of flexibility without resorting to implementing interaction with the container based on some state you access in the instance consuming dependencies.
