Dependency Injection and/vs Global Singleton

I am new to the dependency injection pattern. I love the idea, but I struggle to apply it to my case. I have a singleton object, let's call it X, which I need often in many parts of my program, in many different classes, sometimes deep in the call stack. Usually I would implement this as a globally available singleton. How is this done within the DI pattern, specifically with the .NET Core DI container? I understand I need to register X with the DI container as a singleton, but how do I then get access to it? DI will instantiate classes whose constructors take a reference to X, and that's great, but I need X deep within the call hierarchy, within my own objects which .NET Core or the DI container know nothing about, in objects that were created with new rather than instantiated by the DI container.
I guess my question is: how is the global singleton pattern aligned with, implemented by, replaced by, or avoided with the DI pattern?

Well, "new is glue" (Link). That means if you have new'ed an instance, it is glued to your implementation. You cannot easily exchange it with a different implementation, for example a mock for testing. Like gluing together Lego bricks.
I you want to use proper dependency injection (using a container/framework or not) you need to structure your program in a way that you don't glue your components together, but instead inject them.
Every class is basically at hierarchy level 1 then. You need an instance of your logger? You inject it. You need an instance of a class that needs a logger? You inject it. You want to test your logging mechanism? Easy, you just inject something that conforms to your logger interface that logs into a list and the at the end of your test you can check your list and see if all the required logs are there. That is something you can automate (in contrast to using your normal logging mechanism and checking the logfiles by hand).
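As a minimal sketch, assuming a hypothetical one-method logger interface (not any particular framework's ILogger):

using System.Collections.Generic;

public interface ILogger {
    void Log(string message);
}

// Test double: records every message so the test can assert on it later.
public class ListLogger : ILogger {
    public List<string> Messages { get; } = new List<string>();

    public void Log(string message) {
        Messages.Add(message);
    }
}

In a test you inject a ListLogger, run the code under test, and then assert that Messages contains the expected entries.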
That means that in the end you don't really have a hierarchy: every class just gets its dependencies injected, and it is the container/framework or your controlling code that determines what that means for the order in which objects are instantiated.
As far as design patterns go, allow me an observation: even now, you don't need a singleton. Right now, your program would work just as well with a plain global variable. But I guess you read that global variables are "bad" and design patterns are "good". And since you need a global variable and a singleton delivers a global variable, why use the "bad" when you can use the "good", right? Well, the problem is that even with a singleton, the global variable is bad. It's a drawback of the pattern, a toad you have to swallow for the singleton logic to work. In your case, you don't need the singleton logic; you just like the taste of toads. So you created a singleton. Don't do that with design patterns. Read them very carefully and make sure you use them for their intended purpose, not because you like their side effects or because it feels good to use a design pattern.

Just an idea, and maybe I need your thoughts:

public static class DependencyResolver
{
    public static Func<IServiceProvider> GetServiceProvider;
}

Then in Startup:

public void Configure(IApplicationBuilder app, IServiceProvider serviceProvider)
{
    DependencyResolver.GetServiceProvider = () => serviceProvider;
}

And now in any deeply nested class:

// GetService<T> is an extension method from Microsoft.Extensions.DependencyInjection
DependencyResolver.GetServiceProvider().GetService<IService>();

Here's a simplified example of how this would work without a singleton.
This example assumes that your project is built in the following way:
the entry point is main
main creates an instance of class GuiCreator, then calls the method createAndRunGUI()
everything else is handled by that method
So your simplified code looks like this:
// main
// ... (boilerplate)
container = new Container();
gui = new GuiCreator(container.getDatabase(), container.getLogger(), container.getOtherDependency());
gui.createAndRunGUI();
// ... (boilerplate)

// GuiCreator
public class GuiCreator {
    private IDatabase db;
    private ILogger log;
    private IOtherDependency other;

    public GuiCreator(IDatabase newdb, ILogger newlog, IOtherDependency newother) {
        db = newdb;
        log = newlog;
        other = newother;
    }

    public void createAndRunGUI() {
        // do stuff
    }
}
The Container class is where you actually define which implementations will be used, while the GuiCreator constructor takes interfaces as arguments. Now let's say the implementation of ILogger you choose itself has a dependency, defined by an interface its constructor takes as an argument. The Container knows this and resolves it accordingly by instantiating the Logger as new LoggerImplementation(getLoggerDependency());. This goes on for the entire dependency chain.
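A hand-rolled Container for this example might look roughly like the following sketch; the concrete class names (SqlDatabase, OtherDependencyImpl, LoggerDependencyImpl and the ILoggerDependency interface) are placeholders for whatever implementations you actually choose:

public class Container {
    // Each getter picks the concrete implementation and recursively
    // resolves that implementation's own dependencies.
    public IDatabase getDatabase() { return new SqlDatabase(); }

    public ILogger getLogger() { return new LoggerImplementation(getLoggerDependency()); }

    public IOtherDependency getOtherDependency() { return new OtherDependencyImpl(); }

    private ILoggerDependency getLoggerDependency() { return new LoggerDependencyImpl(); }
}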
So in essence:
All classes keep instances of interfaces they depend upon as members.
These members are set in the respective constructor.
The entire dependency chain is thus resolved when the first object is instantiated. Note that there might/should be some lazy loading involved here.
The only places where the container's methods are accessed to create instances are in main and inside the container itself:
Any class used in main receives its dependencies from main's container instance.
Any class not used in main, but rather used only as a dependency, is instantiated by the container and receives its dependencies from within there.
Any class used neither in main nor indirectly as a dependency somewhere below the classes used in main will obviously never be instantiated.
Thus, no class actually needs a reference to the container. In fact, no class needs to know there even is a container in your project. All they know is which interfaces they personally need.
The Container can either be provided by some third party library/framework or you can code it yourself. Typically, it will use some configuration file to determine which implementations are actually supposed to be used for the various interfaces. Third party containers will usually perform some sort of code analysis supported by annotations to "autowire" implementations, so if you go with a ready-made tool, make sure you read up on how that part works because it will generally make your life easier down the road.
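If you go with the .NET Core container (Microsoft.Extensions.DependencyInjection) instead of a hand-rolled one, the wiring for the example above might look roughly like this sketch; the concrete implementation class names are placeholders:

using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();

// Registration phase: map each interface to a (placeholder) implementation.
services.AddSingleton<IDatabase, SqlDatabase>();
services.AddSingleton<ILogger, FileLogger>();
services.AddSingleton<IOtherDependency, OtherDependencyImpl>();
services.AddTransient<GuiCreator>();

// Resolve phase: the container builds the whole constructor chain for you.
using var provider = services.BuildServiceProvider();
var gui = provider.GetRequiredService<GuiCreator>();
gui.createAndRunGUI();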

Related

The benefits and correct usage of a DI Container

I'm having trouble seeing the advantage of an IoC (DI) container like Ninject, Unity or whatever. I understand the concepts as follows:
DI: injecting a dependency into the class that requires it (preferably via constructor injection). I totally see why less tight coupling is a good thing.
public class MyClass {
    ISomeService svc;

    public MyClass(ISomeService svc) {
        this.svc = svc;
    }

    public void doSomething() {
        svc.doSomething();
    }
}
Service Locator: when a "container" is used directly inside the class that requires a dependency, to resolve that dependency. I do get the point that this introduces another dependency, and I also see that basically nothing is being injected.
public class MyClass {
    public MyClass() {}

    public void doSomething() {
        ServiceLocator.resolve<ISomeService>().doSomething();
    }
}
Now, what confuses me is the concept of a "DI container". To me, it looks exactly like a service locator, which, as far as I have read, should only be used in the entry point / startup method of an application to register and resolve the dependencies and inject them into the constructors of other classes, and not within a concrete class that needs the dependency (probably for the same reason service locators are considered "bad").
What is the purpose of using the container when I could just create the dependency and pass it to the constructor?
public static void Main() {
    DIContainer.register<ISomeService>(new SomeService());
    // ...
    var myclass = new MyClass(DIContainer.resolve<ISomeService>());
    myclass.doSomething();
}
Does it really make sense to pass all the dependencies to all classes in the application initialization method? There might be 100 dependencies which will eventually be needed (or not), and just because it's considered good practice you create them all in the init method?
What is the purpose of using the container when I could just create the dependency and pass it to the constructor?
DI containers are supposed to help you create an object graph quickly. You just tell the container which concrete implementations you want to use for which abstractions (the registration phase), and then it can create any object you want (the resolve phase).
If you create the dependencies and pass them to the constructor (in the application initialization code), then you are actually doing Pure DI.
I would argue that Pure DI is a better approach in many cases. See my article here
Does it really make sense to pass all the dependencies to all classes in the application initialization method? There might be 100 dependencies which will eventually be needed (or not), and just because it's considered good practice you create them all in the init method?
I would say yes. You should create the object graph when your application starts up. This is called the composition root.
If you need to create objects after your application has started, then you should use factories (mainly abstract factories). Such factories will be created along with the other objects in the composition root.
Your classes shouldn't do much work in their constructors; this keeps the cost of creating all the dependencies at the composition root low.
However, I would say that it is OK to create some kinds of objects using the new keyword in special cases, like when the object is a simple Data Transfer Object (DTO).
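To make that concrete, here is a minimal Pure DI sketch (all names invented for illustration): the composition root wires the graph by hand, and an abstract factory covers objects that must be created after startup:

public interface IGreeter { void Greet(string name); }

public class ConsoleGreeter : IGreeter
{
    public void Greet(string name) => System.Console.WriteLine("Hello, " + name);
}

public class Session
{
    private readonly IGreeter _greeter;
    private readonly string _user;

    public Session(IGreeter greeter, string user)
    {
        _greeter = greeter;
        _user = user;
    }

    public void Start() => _greeter.Greet(_user);
}

// Abstract factory: lets the running app create sessions on demand
// without ever touching a container or a service locator.
public interface ISessionFactory { Session Create(string user); }

public class SessionFactory : ISessionFactory
{
    private readonly IGreeter _greeter;
    public SessionFactory(IGreeter greeter) { _greeter = greeter; }
    public Session Create(string user) => new Session(_greeter, user);
}

public static class Program
{
    // Composition root: the only place where concrete types are chosen.
    public static void Main()
    {
        IGreeter greeter = new ConsoleGreeter();
        ISessionFactory factory = new SessionFactory(greeter);
        factory.Create("alice").Start();
    }
}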

DDD: Service and Repositories Instances Injected with DI as Singletons

I've recently been challenged on my view that singletons are only good for logging and configuration. And with dependency injection, I now don't see a reason why you can't use your services or repositories as singletons.
There is no coupling, because DI injects a singleton instance through an interface. The only reasonable argument is that your services might have shared state, but if you think about it, services should be standalone units without any shared state. Yes, they do get injected with the repositories, but there is only one way a repository instance is created and passed to the service. Since repositories should never have shared state, I don't see any reason why they cannot be singletons as well.
So for example a simple service class would look like this:
public class GraphicService : IGraphicService
{
    private IGraphicRepository _rep;

    public GraphicService(IGraphicRepository rep)
    {
        _rep = rep;
    }

    public void CreateGraphic()
    {
        ...
        _rep.SaveGraphic(graphic);
    }
}
No state is shared in the service, except the repository, which also doesn't change or have its own state.
So the question is: if your services and repositories don't have any state and are only passed in, through interfaces, configuration or anything else that's instantiated the same way, why wouldn't you have them as singletons?
If you're using the Singleton pattern, i.e. a static property of a class, then you have the tight coupling problem.
If you need just a single instance of a class and you're using a DI container to control its lifetime, then it's not a problem, as there are none of the drawbacks. The app doesn't know there is a singleton in place; only the DI container knows about it.
Bottom line: a single instance of a class is a valid coding requirement, and the only issue is how you implement it. The DI container is the best approach; the Singleton pattern is for quick'n'dirty apps where you don't care enough about maintainability and testing.
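In .NET Core terms, that is the difference between exposing a static GraphicRepository.Instance property and a registration like this minimal sketch (the concrete class names are assumptions):

// In ConfigureServices / the composition root: one shared instance per
// interface, but consumers only ever see the interfaces they ask for.
services.AddSingleton<IGraphicRepository, GraphicRepository>();
services.AddSingleton<IGraphicService, GraphicService>();

Consumers keep requesting IGraphicService through their constructors; only the container knows there is a single shared instance behind it.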
Some projects used singletons for dependency lookup before dependency injection became popular, for example iBATIS JPetStore if I'm not mistaken. It's convenient that you can access your dependency globally, like this:
public class GraphicService : IGraphicService
{
    private IGraphicRepository _rep = GraphicRepository.getInstance();

    public void CreateGraphic()
    {
        ...
        _rep.SaveGraphic(graphic);
    }
}
But this harms testability (it is not easy to replace the dependency with a test double) and introduces an implicit strong dependency (GraphicService depends on both the abstraction and the implementation).
Dependency injection solves these problems. I don't see why you can't use a singleton in your case, but it doesn't add much value when you're using DI.

Why does one use dependency injection?

I'm trying to understand dependency injection (DI), and once again I have failed. It just seems silly. My code is never a mess; I hardly write virtual functions and interfaces (although I do once in a blue moon), and all my configuration is magically serialized into a class using json.net (sometimes using an XML serializer).
I don't quite understand what problem it solves. It looks like a way to say: "hi. When you run into this function, return an object that is of this type and uses these parameters/data."
But... why would I ever use that? Note that I have never needed to use object either, but I understand what it is for.
What are some real situations, in either building a website or a desktop application, where one would use DI? I can easily come up with cases for why someone may want to use interfaces/virtual functions in a game, but it's extremely rare (rare enough that I can't remember a single instance) to use that in non-game code.
First, I want to explain an assumption that I make for this answer. It is not always true, but quite often:
Interfaces are adjectives; classes are nouns.
(Actually, there are interfaces that are nouns as well, but I want to generalize here.)
So, e.g. an interface may be something such as IDisposable, IEnumerable or IPrintable. A class is an actual implementation of one or more of these interfaces: List or Map may both be implementations of IEnumerable.
To get to the point: often your classes depend on each other. E.g. you could have a Database class which accesses your database (hah, surprise! ;-)), but you also want this class to log its database accesses. Suppose you have another class Logger; then Database has a dependency on Logger.
So far, so good.
You can model this dependency inside your Database class with the following line:
var logger = new Logger();
and everything is fine. It is fine up to the day when you realize that you need a bunch of loggers: Sometimes you want to log to the console, sometimes to the file system, sometimes using TCP/IP and a remote logging server, and so on ...
And of course you do NOT want to change all your code (meanwhile you have gazillions of it) and replace all lines
var logger = new Logger();
by:
var logger = new TcpLogger();
First, this is no fun. Second, this is error-prone. Third, this is stupid, repetitive work for a trained monkey. So what do you do?
Obviously it's a quite good idea to introduce an interface ICanLog (or similar) that is implemented by all the various loggers. So step 1 in your code is that you do:
ICanLog logger = new Logger();
Now the declared type doesn't change any more; you always have one single interface to develop against. The next step is that you do not want to have new Logger() over and over again. So you move the responsibility for creating new instances into a single, central factory class, and you get code such as:
ICanLog logger = LoggerFactory.Create();
The factory itself decides what kind of logger to create. Your code doesn't care any longer, and if you want to change the type of logger being used, you change it once: Inside the factory.
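Such a factory can be as small as the following sketch (using the TcpLogger from above as the chosen implementation):

public static class LoggerFactory
{
    // The single, central place that decides which logger the application uses.
    public static ICanLog Create()
    {
        return new TcpLogger(); // to switch loggers, change this one line
    }
}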
Now, of course, you can generalize this factory, and make it work for any type:
ICanLog logger = TypeFactory.Create<ICanLog>();
Somewhere this TypeFactory needs configuration data specifying which actual class to instantiate when a specific interface type is requested, so you need a mapping. Of course you could do this mapping inside your code, but then a type change means recompiling. You could also put this mapping inside an XML file, for example. This allows you to change the actually used class even after compile time, that is, dynamically, without recompiling!
To give you a useful example for this: think of software that does not log normally, but when your customer calls and asks for help because he has a problem, all you send him is an updated XML config file, and now he has logging enabled, and your support can use the log files to help your customer.
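As a rough sketch of what such a generic factory could look like, with the mapping held in a dictionary that could just as well be filled from that XML file at startup:

using System;
using System.Collections.Generic;

public static class TypeFactory
{
    // Interface type -> concrete type. In a real setup this mapping would be
    // read from a configuration file, so it can change without recompiling.
    private static readonly Dictionary<Type, Type> Map = new Dictionary<Type, Type>
    {
        { typeof(ICanLog), typeof(TcpLogger) }
    };

    public static T Create<T>()
    {
        // Assumes the mapped class has a parameterless constructor.
        return (T)Activator.CreateInstance(Map[typeof(T)]);
    }
}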
And now, when you rename things a little bit, you end up with a simple implementation of a service locator, which is one of two patterns for Inversion of Control (since you invert control over who decides which exact class to instantiate).
All in all, this reduces the dependencies in your code, but now all of your code has a dependency on the central, single service locator.
Dependency injection is now the next step in this line: just get rid of this single dependency on the service locator. Instead of various classes asking the service locator for an implementation of a specific interface, you, once again, invert control over who instantiates what.
With dependency injection, your Database class now has a constructor that requires a parameter of type ICanLog:
public Database(ICanLog logger) { ... }
Now your database always has a logger to use, but it does not know any more where this logger comes from.
And this is where a DI framework comes into play: You configure your mappings once again, and then ask your DI framework to instantiate your application for you. As the Application class requires an ICanPersistData implementation, an instance of Database is injected - but for that it must first create an instance of the kind of logger which is configured for ICanLog. And so on ...
So, to cut a long story short: Dependency injection is one of two ways of how to remove dependencies in your code. It is very useful for configuration changes after compile-time, and it is a great thing for unit testing (as it makes it very easy to inject stubs and / or mocks).
In practice, there are things you cannot do without a service locator (e.g., if you do not know in advance how many instances of a specific interface you need: a DI framework always injects exactly one instance per parameter, but you can, of course, call a service locator inside a loop), hence most DI frameworks also provide a service locator.
But basically, that's it.
P.S.: What I described here is a technique called constructor injection. There is also property injection, where not constructor parameters but properties are used for defining and resolving dependencies. Think of property injection as optional dependencies and of constructor injection as mandatory dependencies. But a discussion of this is beyond the scope of this question.
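A rough sketch of that distinction, reusing the Database example (IMetrics and NullMetrics are hypothetical names for an optional concern):

public class Database : ICanPersistData
{
    // Constructor injection: a mandatory dependency (as above);
    // the object cannot even be constructed without it.
    private readonly ICanLog _logger;

    public Database(ICanLog logger)
    {
        _logger = logger;
    }

    // Property injection: an optional dependency with a safe default;
    // the container (or a test) may overwrite it later.
    public IMetrics Metrics { get; set; } = new NullMetrics();
}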
I think a lot of times people get confused about the difference between dependency injection and a dependency injection framework (or a container as it is often called).
Dependency injection is a very simple concept. Instead of this code:
public class A {
    private B b;

    public A() {
        this.b = new B(); // A *depends on* B
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    A a = new A();
    a.DoSomeStuff();
}
you write code like this:
public class A {
    private B b;

    public A(B b) { // A now takes its dependencies as arguments
        this.b = b; // look ma, no "new"!
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    B b = new B(); // B is constructed here instead
    A a = new A(b);
    a.DoSomeStuff();
}
And that's it. Seriously. This gives you a ton of advantages. Two important ones are the ability to control functionality from a central place (the Main() function) instead of spreading it throughout your program, and the ability to more easily test each class in isolation (because you can pass mocks or other faked objects into its constructor instead of a real value).
The drawback, of course, is that you now have one mega-function that knows about all the classes used by your program. That's what DI frameworks can help with. But if you're having trouble understanding why this approach is valuable, I'd recommend starting with manual dependency injection first, so you can better appreciate what the various frameworks out there can do for you.
As the other answers stated, dependency injection is a way to create your dependencies outside of the class that uses them. You inject them from the outside and take control over their creation away from the inside of your class. This is also why dependency injection is a realization of the Inversion of Control (IoC) principle.
IoC is the principle; DI is the pattern. The case where you might "need more than one logger" never actually arises, as far as my experience goes, but the actual reason is that you really need it whenever you test something. An example:
My Feature:
When I look at an offer, I want to mark that I looked at it automatically, so that I don't forget to do so.
You might test this like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var formdata = { . . . }

    // System under test
    var weasel = new OfferWeasel();

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(new DateTime(2013, 01, 13, 13, 01, 0, 0));
}
So somewhere in the OfferWeasel, it builds an offer object for you like this:

public class OfferWeasel
{
    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = DateTime.Now;
        return offer;
    }
}
The problem here is that this test will most likely always fail, since the date being set will differ from the date being asserted. Even if you put DateTime.Now into the test code, it might be off by a couple of milliseconds and will therefore always fail. A better solution now would be to create an interface for this that allows you to control what time will be set:
public interface IGotTheTime
{
    DateTime Now { get; }
}

public class CannedTime : IGotTheTime
{
    public DateTime Now { get; set; }
}

public class ActualTime : IGotTheTime
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class OfferWeasel
{
    private readonly IGotTheTime _time;

    public OfferWeasel(IGotTheTime time)
    {
        _time = time;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _time.Now;
        return offer;
    }
}
The interface is the abstraction. One implementation is the REAL thing, and the other one allows you to fake some time where it is needed. The test can then be changed like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var date = new DateTime(2013, 01, 13, 13, 01, 0, 0);
    var formdata = { . . . }
    var time = new CannedTime { Now = date };

    // System under test
    var weasel = new OfferWeasel(time);

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(date);
}
Like this, you have applied the "inversion of control" principle by injecting a dependency (getting the current time). The main reason to do this is easier isolated unit testing; there are other ways of achieving it. For example, an interface and a class are unnecessary here, since in C# functions can be passed around as variables, so instead of an interface you could use a Func<DateTime> to achieve the same. Or, if you take a dynamic approach, you just pass any object that has the equivalent method (duck typing), and you don't need an interface at all.
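For example, the Func<DateTime> variant might look like this sketch (Offer and Formdata as above):

using System;

public class OfferWeasel
{
    private readonly Func<DateTime> _now;

    // Production code passes () => DateTime.Now;
    // a test passes () => new DateTime(2013, 01, 13, 13, 01, 0, 0).
    public OfferWeasel(Func<DateTime> now)
    {
        _now = now;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _now();
        return offer;
    }
}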
You will hardly ever need more than one logger. Nonetheless, dependency injection is essential for statically typed code such as Java or C#.
And...
It should also be noted that an object can only properly fulfill its purpose at runtime if all its dependencies are available, so there is not much use in property injection. In my opinion, all dependencies should be satisfied when the constructor is called, so constructor injection is the way to go.
I think the classic answer is to create a more decoupled application, which has no knowledge of which implementation will be used during runtime.
For example, we're a central payment provider, working with many payment providers around the world. However, when a request is made, I have no idea which payment processor I'm going to call. I could program one class with a ton of switch cases, such as:
class PaymentProcessor {
    private String type;

    public PaymentProcessor(String type) {
        this.type = type;
    }

    public void authorize() {
        if (type.equals(Consts.PAYPAL)) {
            // Do this;
        }
        else if (type.equals(Consts.OTHER_PROCESSOR)) {
            // Do that;
        }
    }
}
Now imagine that you'll need to maintain all this code in a single class because it's not decoupled properly. You can imagine that for every new processor you support, you'll need to add a new if / switch case to every method; this only gets more complicated. However, by using Dependency Injection (or Inversion of Control, as it's sometimes called, meaning that whoever controls the running of the program is known only at runtime, not at compilation), you can achieve something very neat and maintainable.
class PaypalProcessor implements PaymentProcessor {
    public void authorize() {
        // Do PayPal authorization
    }
}

class OtherProcessor implements PaymentProcessor {
    public void authorize() {
        // Do other processor authorization
    }
}

class PaymentFactory {
    public static PaymentProcessor create(String type) {
        switch (type) {
            case Consts.PAYPAL:
                return new PaypalProcessor();
            case Consts.OTHER_PROCESSOR:
                return new OtherProcessor();
            default:
                throw new IllegalArgumentException("Unknown processor: " + type);
        }
    }
}

interface PaymentProcessor {
    void authorize();
}
** The code is schematic and won't compile as-is (Consts and the wiring are elided), I know :)
The main reason to use DI is that you want to put the responsibility for the knowledge of the implementation where that knowledge is. The idea of DI is very much in line with encapsulation and design by interface.
If the front end asks the back end for some data, then it is unimportant to the front end how the back end resolves that question. That is up to the request handler.
That has already been common in OOP for a long time, with code pieces like:
I_Dosomething x = new Impl_Dosomething();
The drawback is that the implementation class is still hardcoded, hence the front end has knowledge of which implementation is used. DI takes design by interface one step further: the only thing the front end needs to know is the interface.
In between design by interface (DbI) and DI sits the service locator pattern, because the front end has to provide a key (present in the registry of the service locator) to have its request resolved.
Service locator example:
I_Dosomething x = ServiceLocator.returnDoing(pKey);
DI example:
I_Dosomething x = DIContainer.returnThat();
One of the requirements of DI is that the container must be able to find out which class is the implementation of which interface. Hence a DI container requires a strongly typed design and only one implementation per interface at any one time. If you need several implementations of an interface at the same time (like a calculator), you need the service locator or the factory design pattern.
D(b)I: Dependency Injection and Design by Interface.
This restriction is not a very big practical problem, though. The benefit of using D(b)I is that it serves the communication between client and provider. An interface is a perspective on an object or a set of behaviours. The latter is crucial here.
I prefer the administration of service contracts together with D(b)I in coding. They should go together. Using D(b)I as a technical solution without organizational administration of service contracts is not very beneficial in my view, because DI is then just an extra layer of encapsulation. But when you can use it together with organizational administration, you can really make use of the organizing principle D(b)I offers.
It can help you in the long run to structure communication with the client and other technical departments on topics such as testing, versioning and the development of alternatives. When you have an implicit interface, as in a hardcoded class, it is much less communicable over time than when you make it explicit using D(b)I. It all boils down to maintenance, which happens over time and not all at once. :-)
Quite frankly, I believe people use these dependency injection libraries/frameworks because they only know how to do things at runtime, as opposed to load time. All this crazy machinery can be substituted by setting your CLASSPATH environment variable (or other language equivalent, like PYTHONPATH or LD_LIBRARY_PATH) to point to your alternative implementations (all with the same name) of a particular class. So in the accepted answer you'd just leave your code like:

var logger = new Logger(); // sane, simple code

and the appropriate logger will be instantiated, because the JVM (or whatever other runtime or .so loader you have) will fetch it from the class path configured via the environment variable mentioned above.
No need to make everything an interface, no need to have the insanity of spawning broken objects to have stuff injected into them, no need to have insane constructors with every piece of internal machinery exposed to the world. Just use the native functionality of whatever language you're using instead of coming up with dialects that won't work in any other project.
P.S.: This is also true for testing/mocking. You can very well just set your environment to load the appropriate mock class, in load time, and skip the mocking framework madness.

DI Container and custom-scoped state in legacy system

I believe I understand the basic concepts of DI / IoC containers, having written a couple of applications using them and having read a lot of Stack Overflow answers as well as Mark Seemann's book. There are still some cases that I have trouble with, especially when it comes to integrating a DI container into a large existing architecture where the DI principle hasn't really been used (think big ball of mud).
I know the ideal scenario is to have a single composition root / object graph per operation, but in a legacy system this might not be possible without major refactoring (only the new and some select refactored old parts of the code could have dependencies injected through constructors, with the rest of the system using the container as a service locator to interact with the new parts). This effectively means that a stack trace deep within an operation might include several object graphs, with calls being made back and forth between new subsystems (a single object graph until exiting into an old segment) and traditional subsystems (a service locator call at some point to code under the DI container).
With the (potentially faulty, I might be overthinking this or be completely wrong in assuming this kind of hybrid architecture is a good idea) assumptions out of the way, here's the actual problem:
Let's say we have a thread pool executing scheduled jobs of various types defined in a database (or any other external place). Each separate type of scheduled job is implemented as a class inheriting from a common base class. When a job is started, it gets fed the information about which targets it should write its log messages to and the configuration it should use. The configuration could probably be handled by just passing the values as method parameters to whatever class needs them, but if the job implementation grows beyond, say, 10-20 classes, that doesn't seem very handy.
Logging is the larger problem. Subsystems the job calls probably also need to write things to the log, and usually in examples this is done by just requesting an instance of ILog in the constructor. But how does that work in this case, when we don't know the details / implementation until runtime? Since:
Due to (non-DI-container-controlled) legacy system segments in the call chain (and there thus potentially being multiple separate object graphs), a child container cannot be used to inject the custom logger for a specific sub-scope
Manual property injection would basically require the complete call chain (including all legacy subsystems) to be updated
A simplified example to help better perceive the problem:
class JobXImplementation : JobBase {
    // through constructor injection
    ILoggerFactory _loggerFactory;
    JobXExtraLogic _jobXExtras;

    public void Run(JobConfig configurationFromDatabase)
    {
        ILog log = _loggerFactory.Create(configurationFromDatabase.targets);

        // if there were no legacy parts in the call chain, I would register
        // log as an instance in a child container, Resolve the next part of
        // the call chain, and everyone requesting ILog would get the correct
        // logging targets

        // do stuff
        _jobXExtras.DoStuff(configurationFromDatabase, log);
    }
}

class JobXExtraLogic {
    public void DoStuff(JobConfig configurationFromDatabase, ILog log) {
        // call to legacy sub-system
        var old = new OldClass(log, configurationFromDatabase.SomeRandomSetting);
        old.DoOldStuff();
    }
}

class OldClass {
    public void DoOldStuff() {
        // moar stuff
        var old = new AnotherOldClass();
        old.DoMoreOldStuff();
    }
}

class AnotherOldClass {
    public void DoMoreOldStuff() {
        // call to a new subsystem
        var newSystemEntryPoint = DIContainerAsServiceLocator.Resolve<INewSubsystemEntryPoint>();
        newSystemEntryPoint.DoNewStuff();
    }
}

class NewSubsystemEntryPoint : INewSubsystemEntryPoint {
    public void DoNewStuff() {
        // want to log something...
    }
}
I'm sure you get the picture by this point.
Instantiating old classes through DI is a non-starter, since many of them use (often multiple) constructors to inject values instead of dependencies and would have to be refactored one by one. The caller basically controls the lifetime of the object implicitly, and the implementations assume this (in the way they handle internal object state).
What are my options? What other kinds of problems could you possibly see in a situation like this? Is trying to only use constructor injection in this kind of environment even feasible?
Great question. In general, I would say that an IoC container loses a lot of its effectiveness when only a portion of the code is DI-friendly.
Books like Working Effectively with Legacy Code and Dependency Injection in .NET both talk about ways to tease apart objects and classes to make DI viable in code bases like the one you described.
Getting the system under test would be my first priority. I'd pick a functional area to start with, one with few dependencies on other functional areas.
I don't see a problem with moving beyond constructor injection to setter injection where it makes sense, and it might offer you a stepping stone to constructor injection. Adding a property is usually less invasive than changing an object's constructor.
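A sketch of that stepping stone, with illustrative class names: the legacy constructor stays untouched, and the new dependency arrives through a property whose default preserves the old behaviour:

public class LegacyReportJob
{
    // Existing constructor, left as-is so all current callers keep compiling.
    public LegacyReportJob(string connectionString, int batchSize) { /* ... */ }

    // Setter injection: the container or a test can swap in another ILog;
    // untouched legacy callers silently keep the old behaviour.
    public ILog Log { get; set; } = new LegacyFileLog();
}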

Why not pass your IoC container around?

On this AutoFac "Best Practices" page (http://code.google.com/p/autofac/wiki/BestPractices), they say:
Don't Pass the Container Around
Giving components access to the container, or storing it in a public static property, or making functions like Resolve() available on a global 'IoC' class defeats the purpose of using dependency injection. Such designs have more in common with the Service Locator pattern.
If components have a dependency on the container, look at how they're using the container to retrieve services, and add those services to the component's (dependency injected) constructor arguments instead.
So what would be a better way to have one component "dynamically" instantiate another? Their second paragraph doesn't cover the case where the component that "may" need to be created depends on the state of the system, or where component A needs to create X instances of component B.
To abstract away the instantiation of another component, you can use the Factory pattern:
public interface IComponentBFactory
{
    IComponentB CreateComponentB();
}

public class ComponentA : IComponentA
{
    private IComponentBFactory _componentBFactory;

    public ComponentA(IComponentBFactory componentBFactory)
    {
        _componentBFactory = componentBFactory;
    }

    public void Foo()
    {
        var componentB = _componentBFactory.CreateComponentB();
        ...
    }
}
Then the implementation can be registered with the IoC container.
A container is one way of assembling an object graph, but it certainly isn't the only way. It is an implementation detail. Keeping the objects free of this knowledge decouples them from infrastructure concerns. It also keeps them from having to know which version of a dependency to resolve.
Autofac actually has some special functionality for exactly this scenario - the details are on the wiki here: http://code.google.com/p/autofac/wiki/DelegateFactories.
In essence, if A needs to create multiple instances of B, A can take a dependency on Func<B> and Autofac will generate an implementation that returns new Bs out of the container.
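Sketched out (assuming IComponentB is registered with the container as usual), that looks roughly like this:

public class ComponentA : IComponentA
{
    private readonly Func<IComponentB> _createB;

    // Autofac sees the Func<IComponentB> parameter and auto-generates a
    // delegate that resolves an IComponentB from the container.
    public ComponentA(Func<IComponentB> createB)
    {
        _createB = createB;
    }

    public void Foo()
    {
        // With the default (per-dependency) lifetime, each call yields a fresh instance.
        var first = _createB();
        var second = _createB();
        // ... use them ...
    }
}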
The other suggestions above are of course valid - Autofac's approach has a couple of differences:
It avoids the need for a large number of factory interfaces
B (the product of the factory) can still have dependencies injected by the container
Hope this helps!
Nick
An IoC container takes on the responsibility of determining which version of a dependency a given object should use. This is useful for things like creating chains of objects that implement an interface as well as having a dependency on that interface (similar to a chain of command or the decorator pattern).
By passing your container around, you are putting the onus on the individual object to get its appropriate dependency, so it has to know how to do that. With typical IoC usage, the object only needs to declare that it has a dependency, not think about selecting between multiple available implementations of that dependency.
Service locator patterns are more difficult to test, and it is certainly more difficult to control dependencies with them, which may lead to more coupling in your system than you really want.
If you really want something like lazy instantiation, you may still opt for the service locator style (it doesn't kill you straight away, and if you stick to the container's interface it is not too hard to test with some mocking framework). Bear in mind, though, that the instantiation of a class that doesn't do much (or anything) in the constructor is immensely cheap.
The containers I have come to know (not Autofac so far) will let you modify which dependencies should be injected into which instance depending on the state of the system, such that even those decisions can be externalized into the configuration of the container.
This can provide you plenty of flexibility without resorting to implementing interaction with the container based on some state you access in the instance consuming dependencies.
