Dependency Injection is certainly one of the most important concepts when trying to write testable code. But while Java and C# have garbage collection, Delphi does not; normally, object disposal is managed using the ownership principle (the one who creates the object destroys it). This is nicely supported by the try..finally construct:
Obj := TObject.Create;
try
  ...
finally
  Obj.Free;
end;
Now what if one uses dependency injection:
constructor TFileLister.Create(FileSystem: TFileSystem);
Who should now be responsible for destroying the FileSystem object? Does the ownership-principle still work here?
I know that interfaces are a solution to this problem (thanks to the fact that they are reference-counted). But what if there are no interfaces (say in some legacy code)? What other approaches or best practices are there to handle memory management when using dependency injection?
You have to come up with an owner for the FileSystem object. This can be either the entity that creates the TFileLister instances, or you could pass ownership to the file lister, documenting that it will free the file system that was passed to the constructor.
The right approach depends of course on your particular application. For example, if other objects also use the same file system object, it shouldn't be owned by one of them (such as the file lister) but by the object that ties it all together. You could even make the file system object global if it only makes sense to have one of it.
In short, you'll have to do a little more thinking than in Java but that's not necessarily a bad thing.
It's almost always preferable to regard the entity that creates an object as also being its owner (i.e. responsible for destroying it).
To understand why I say this, consider the alternative. Suppose that object A creates object B. At some point later it passes B to object C, which becomes the owner.
In the period between creating B and handing it over to C, A is responsible for destruction in case of exceptions, or if a branch is taken that bypasses C altogether. On the other hand, once it has handed off B, A must not attempt to destroy B.
All this can be handled with sufficient care. One approach is that taken by the VCL with TComponent.Owner.
However, if you can find a way to stick to the two standard patterns of ownership then do so.
What are the two standard patterns?
Create in a constructor and assign to a field; destroy in the matching destructor.
Create and destroy inside a single method, with protection provided by try / finally.
I would strongly recommend that you try to shape your code so that all resource acquisition uses one of these two options.
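The first pattern, as a minimal sketch (TReport and its string list are purely illustrative; TStringList lives in the Classes unit):

type
  TReport = class
  private
    FLines: TStringList;
  public
    constructor Create;
    destructor Destroy; override;
  end;

constructor TReport.Create;
begin
  inherited Create;
  FLines := TStringList.Create; // acquired in the constructor...
end;

destructor TReport.Destroy;
begin
  FLines.Free; // ...released in the matching destructor
  inherited Destroy;
end;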
How can you do so in your example? The option that leaps out at me is to use a factory to create your FileSystem object. This allows TFileLister to manage the lifetime of the FileSystem object, but gives you the flexibility of injecting different behaviour into TFileLister.
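A minimal sketch of that factory idea, under the assumption that TFileLister keeps a private FFileSystem: TFileSystem field (declarations are omitted and the names invented):

type
  TFileSystemFactory = function: TFileSystem;

constructor TFileLister.Create(Factory: TFileSystemFactory);
begin
  inherited Create;
  FFileSystem := Factory(); // created here, so TFileLister owns it...
end;

destructor TFileLister.Destroy;
begin
  FFileSystem.Free;         // ...and frees it, per the first pattern
  inherited Destroy;
end;

function CreateRealFileSystem: TFileSystem;
begin
  Result := TFileSystem.Create;
end;

Production code passes CreateRealFileSystem; a test passes a factory returning a fake, and ownership stays with the lister in both cases.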
I disagree with the notion that an object must be destroyed by the one that created it. Many times this is the natural choice, but it's certainly not the only way of managing memory. A better way to look at it is that an object's lifetime should end when it is no longer needed.
So what options do you have?
Use Interface reference counting
In many cases it is trivial to extract an interface from an existing class, so don't shelve this idea just because you're working with legacy code (see the sketch after this list).
Use an IoC container that supports lifetime management.
There are a number of IoC containers for Delphi and more are popping up all the time. Spring for Delphi is one that I know of that supports lifetime management. Note: most of these containers target Delphi 2010 or newer so it may be difficult to find one for legacy code.
Use a garbage collector.
The Boehm GC memory manager is the only one I'm aware of.
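As promised above, a sketch of what extracting an interface from the question's TFileSystem could look like (the method is invented for illustration):

type
  IFileSystem = interface
    ['{D6C51A8E-3F7B-4C2A-9E0D-5B1F8A24C7E3}'] // generate your own GUID with Ctrl+Shift+G
    procedure ListFiles(const Path: string; Results: TStrings);
  end;

  // TInterfacedObject supplies the reference counting. Note that this
  // changes destruction semantics: code that holds the object through
  // the interface must no longer call Free on it.
  TFileSystem = class(TInterfacedObject, IFileSystem)
  public
    procedure ListFiles(const Path: string; Results: TStrings);
  end;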
All three of these can be combined with poor man's dependency injection to get the testing benefits while making minimal changes to your legacy code. For those unfamiliar with the term: you use constructor chaining to instantiate a default dependency.
// Both constructors must be marked "overload" in the class declaration:
//   constructor Create; overload;
//   constructor Create(AMyDependency: TMyDependency); overload;

constructor TMyClass.Create;
begin
  Create(TMyDependency.Create); // chain to the injecting constructor
  // Create(IoCContainer.Resolve(TMyDependency));
end;

constructor TMyClass.Create(AMyDependency: TMyDependency);
begin
  FMyDependency := AMyDependency;
end;
Your production code continues to use the default constructor with the real object, while your tests can inject a fake/mock/stub to sense whether the class being exercised is playing nicely. Once your test coverage is high enough you can remove the default constructor.
When should I use property injection?
Should I use constructor injection by default if instance creation is fully controlled?
Am I right that using constructor injection I write container-agnostic code?
When should I use property injection?
You should use property injection when the dependency is truly optional, when you have a Local Default, or when your object graph contains a cyclic dependency.
Property Injection, however, causes Temporal Coupling, and when writing Line of Business applications your dependencies should never be optional: you should instead apply the Null Object pattern.
Neither should you use a Local Default, which is:
"A default implementation of an abstraction that's defined in the same assembly as the consumer." [DIPP&P, Section 4.2.2]
Local Defaults should not be used in Line of Business applications, because they complicate testing, hide the dependency, and make it easy to forget to configure the dependency.
Neither should object graphs have cyclic dependencies. This is an indication of a problem in your application design.
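To make the Null Object suggestion concrete, here is a minimal sketch, written in Delphi to match the rest of this page (ILogger and TNullLogger are assumed names, not from the book):

type
  ILogger = interface
    procedure Log(const Msg: string);
  end;

  // Null Object: satisfies the contract, does nothing, and is never nil,
  // so the dependency stays required while its behaviour is "optional".
  TNullLogger = class(TInterfacedObject, ILogger)
  public
    procedure Log(const Msg: string);
  end;

procedure TNullLogger.Log(const Msg: string);
begin
  // intentionally empty
end;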
Should I use constructor injection by default if instance creation is fully controlled?
Yes. Constructor injection is the best way. It makes it very easy to see which dependencies a class has, makes it possible to make dependencies required, and prevents Temporal Coupling.
Am I right that using constructor injection I write container-agnostic code?
This is correct. Constructor injection allows you to delay the decision of which DI library to use, and whether at all you use a DI library.
For a more detailed explanation of the above, and much more, read the book Dependency Injection Principles, Practices, and Patterns (DIPP&P) by Mark Seemann and myself.
The usefulness of property injection is so limited that, while working on the book, Mark and I even discussed labeling Property Injection an anti-pattern. In the end we felt we couldn't make a case as strong as, for instance, the one for Ambient Context, which we did decide to describe as an anti-pattern in that edition. The case for Ambient Context was crystal clear, while being much muddier for Property Injection. This is also why we added many warning notes to the Property Injection section (4.4): we feel strongly that Property Injection is not a good solution for the majority of cases.
There are several problems, though, that Property Injection seems to solve at first, such as the problem of Constructor Over-Injection (where a constructor contains many dependencies). Constructor Over-Injection, however, is almost always caused by design deficits, such as:
A class having too many responsibilities
Lack of natural clusters
Choosing inheritance over composition
Application of Cross-Cutting Concerns through base classes instead of Decorators or Interceptors
The accepted answer argues in favor of constructor injection and takes a rather critical stance toward property injection. As such, it does not put focus on addressing the problems that property injection actually solves, if used correctly. Therefore I want to take the opportunity to address some of these points and also to provide some counter arguments to the accepted answer.
When should I use property injection?
Imagine you have a project with 100+ controllers, and all those controllers extend a custom base controller (parent-service). In such a situation, where a service is extended by several child-services, employing constructor injection is a burden: for every constructor you create, you need to relay arguments to your parent-service's constructor. If you decide to extend your parent-service's constructor signature, you will also be forced to extend the signatures of all child-services' constructors.
To make this example more vivid, say you start your project with a base controller having a parameterless constructor.
Now after a month you decide you want a logger service in your base controller. → Not only will you have to change the base controller constructor's signature but also those of your 100+ child controllers.
Now again after a month you need access to a user service in your base controller → Again, you'll have to change the constructor signature of your 100+ child controllers.
you get the point...
With property injection you can easily circumvent this whole inconvenience by simply adding the necessary properties to your parent service and letting your DI mechanism handle the injection via reflection. As a side effect, this also greatly reduces the risk of merge conflicts (since the number of files touched is kept to a minimum).
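In Delphi terms, one way to get such attribute-driven wiring is Spring4D's [Inject] attribute; a rough sketch (ILogger, IUserService and the controller names are illustrative, and the details vary by container):

uses
  Spring.Container.Common; // declares the [Inject] attribute in Spring4D

type
  TBaseController = class
  private
    [Inject]
    FLogger: ILogger;           // wired by the container via RTTI;
    [Inject]
    FUserService: IUserService; // no constructor signature to maintain
  end;

  TCustomerController = class(TBaseController)
    // the 100+ descendants stay untouched when the base gains a dependency
  end;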
So far I have been talking mostly about controllers, but this example applies to any situation in which you have a service hierarchy – the deeper or broader this hierarchy gets, the greater the burden of constructor injection. However, avoiding service hierarchies altogether may not always be a reasonable choice in a project.
One could say the decision between property and constructor injection is really a decision between pragmatism and OOP "purism".
From a "purist" OOP perspective, the rule is (as stated in the accepted answer) to initialize all required fields of a class via its constructor in order to avoid granting any access to the newly created instance in an "unfinished" state (which may result in an exception being later thrown).
With reference to this rule, an OOP-purist has a valid point in saying that property injection (temporarily) leaves your services in an "unfinished" state (the timespan between when your constructor has returned and the moment your property is injected) and that this increases the risk that your application may break.
However, when talking about services managed by an IoC/DI container, this risk is practically reduced to zero: your DI mechanism is responsible for resolving the dependency graph and wiring everything up before any user action or API request actually makes it into your system or needs to be processed. For example, at the time a controller's action is invoked, you can be sure that your services were properly wired up and injected into your controller's properties (given, of course, that you configured them correctly in advance).
Also, the argument that only constructor injection lets you make your dependencies "required" is rather weak in a world where you are not responsible for manually injecting services into your classes but delegate this task to your IoC mechanism. Even worse, you may get a false sense of security because you stated via the constructor that a ServiceX requires a ServiceY – but if you forgot to register ServiceY with your DI mechanism, you may merely get null injected into ServiceX's constructor.
Another "argument" against property injection is that it becomes harder for your fellow programmers to distinguish between properties that are managed by the DI mechanism and those which are simply non-DI-related. However, in this case you can just use a marker attribute to "opt-in" for DI or add short comments above your properties to clear things up, when the case is not clear. Also, in a service-class, it is rather unusual to have properties referring to other services that are not supposed to be manged by your DI mechanism.
Finally, as for saying that constructor injection makes unit testing easier (since you know which dependencies a class requires), I would simply argue that with property injection you will notice soon enough that you forgot to include a dependency, when your tests begin to fail due to a certain service being undefined.
Should I use constructor injection by default if instance creation is fully controlled?
With all the above being said, I think I can answer your second question with: Not necessarily.
It depends on your project's size, the kind of service-hierarchy you employ, how often your parent-services' dependencies change and how much time and resources you are willing to invest in managing parameters and passing arguments up the service-hierarchy.
Am I right that using constructor injection I write container-agnostic code?
Yes! – Under the premise that you are not injecting the container itself... which you shouldn't! ;)
All the above being said, here are some quotes from Martin Fowler's great discussion of dependency injection that directly address the question of constructor vs. setter/property injection – I can fully subscribe to the last quote :)
If you have multiple constructors and inheritance, then things can get particularly awkward. In order to initialize everything you have to provide constructors to forward to each superclass constructor, while also adding your own arguments. This can lead to an even bigger explosion of constructors.
Despite the disadvantages my preference is to start with constructor injection, but be ready to switch to setter injection as soon as the problems I've outlined above start to become a problem.
This issue has led to a lot of debate between the various teams who provide dependency injectors as part of their frameworks. However it seems that most people who build these frameworks have realized that it's important to support both mechanisms, even if there's a preference for one of them.
A final remark: If you, for some reason, want to switch back from property injection to constructor injection, no problem, you can always add a constructor with the parameters to be injected and assign the properties via your constructor – dead-simple.
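Continuing the illustrative names from the sketch above, that switch amounts to no more than this:

constructor TBaseController.Create(ALogger: ILogger; AUserService: IUserService);
begin
  inherited Create;
  FLogger := ALogger;           // the same fields as before,
  FUserService := AUserService; // now assigned via the constructor
end;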
If it was created by the class that is injecting it into the object that uses it, then isn't it just moving object creation one step up the stack? And wouldn't this mean that all objects needed by lower-level classes would need to be passed through each object, one at a time, until they reached the object that needed them?
All objects and their dependencies could be set up at the very beginning, but wouldn't this dent performance, as objects will be hanging around until needed?
Yes, it is moving the object instantiation up the stack. But it is moving it up the stack to a place where you can make a better decision as to which implementation to actually use. If I want to replace my data access layer with a stubbed version to do performance testing of the business logic, I can without changing a single line of the business logic code.
There are many ways to inject your dependencies. In my case, I use constructor injection everywhere. Using this method, if a lower-level class needs a dependency, it just puts the interface for that dependency in its constructor. There is no need to pass it down from a class higher up in the stack. If you need the same instance in both classes, look at the lifestyle/scope when registering your dependency into the container, so that both classes happen to get passed the same instance.
Some DI implementations use lazy loading to instantiate their objects (i.e. the object is not actually instantiated until it is first used); some do not. Also, you would need quite a large dependency graph to make a dent in performance. Keep your constructors simple and fast (a good practice anyway) and this will not be a problem, I assure you. And DI containers are smart about releasing objects that are no longer in use (again, pay special attention to lifestyle/scope).
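As an illustration of the lifestyle/scope point, a registration sketch using Spring4D's container (the fluent calls shown are approximate; check your container's documentation):

uses
  Spring.Container;

procedure BuildContainer(const Container: TContainer);
begin
  // Singleton lifestyle: every class that asks for IFileSystem
  // receives the same instance.
  Container.RegisterType<TFileSystem>.Implements<IFileSystem>.AsSingleton;
  Container.Build;
end;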
I hope this helps.
I'm trying to divide my monolithic Delphi-Win32 app into libraries, so I have some questions about how to share global variables and objects among my libraries using Delphi 2009. For example, I have 3 global objects (derived from TObject): one for user info, one for current session info, and one for storing the active database connection and managing operations with this database. My libraries need to work with these objects. Moreover, certain libraries would provide an object derived from TForm to be hosted by a parent control in the main form. Every object derived from TForm passed to the main form has its own methods and properties; that is, their classes differ from each other.
I'm thinking of putting the global objects into a separate library, though I suspect that would make things more difficult; please consider it anyway.
How can I get this situation to work?
One more question: which is better to use, static or dynamic loading for libraries?
Can you recommend some books or sites to learn more about this?
Thanks in advance.
What we have done in the past to share variables between modules (we used BPLs) was to pass them through a shared TStringList. Generally speaking it is best to have a global shared object with all your shared variables in it.
Anything that is going to be referenced by more than one library must be in its own library. Mason's advice was sound.
Go with static loading, unless you really need dynamic loading for some specific reason (which it doesn't sound like you do). Let the Windows memory manager swap unneeded libraries out of memory.
One tip from someone who managed a large application split into multiple libraries: we had our components in packages, the VCL, some application-common routines, and then a library for each "screen" or segment of the application. For changes to the screens it was possible to release just that one updated library, but for changes to any of the other types of libraries we found we usually had to redeploy everything. So it was rare that we enjoyed an advantage from this configuration.
It seems that by "libraries" you mean BPL Packages, so here are the guidelines:
Each BPL, when it gets loaded, loads all the units in it. No unit can be loaded more than once. That means that if more than one package needs access to one of the globals, then it has to either be in one package that the other(s) have in their Requires list, or in a separate package that all the others Require.
As for static vs. dynamic loading, if your program absolutely needs it, make it statically linked. Dynamic loading is for optional features, such as plug-ins. (If you want to go that route, take a look at JVPlugin in the JVCL. It's a very useful system.)
I do NOT understand why people are mentioning the Delphi version for a simple question like this. The answer is: yes, it is better to put shared variables in a separate unit, declared before the implementation keyword.
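A sketch of such a unit (all names hypothetical, mirroring the question's three globals):

unit SharedGlobals;

interface

uses
  UserInfoUnit, SessionInfoUnit, DBConnectionUnit; // assumed unit names

var
  // declared in the interface section, i.e. before the implementation
  // keyword, so every module that uses this unit sees the same variables
  GlobalUser: TUserInfo;
  GlobalSession: TSessionInfo;
  GlobalConnection: TDBConnection;

implementation

end.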
Every object (form, class, control) is derived from TObject, even if you define a class like
type
  TMyClass = class
    // no inheritance?
  end;
the above class is still derived from TObject (read the Delphi help).
Your global variables could be declared of type TObject or Pointer, and when you access them, use a hard cast such as TForm(MyPointerVariable).Method, i.e.:
var
  MyPointerVariable: Pointer; // I presume it is already initialized and points to a TForm descendant
...
begin
  TForm(MyPointerVariable).Caption := 'Stack Overflow';
end;
For more information read the Delphi tutorial on my blog; it should be very simple to understand.
I've had a certain feeling these last couple of days that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). Often it is said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words, unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
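For instance, a hand-rolled fake lets a test run without any database at all; here sketched in Delphi, the language used elsewhere on this page (IUserRepository and the names are invented for illustration):

type
  IUserRepository = interface
    function CountActiveUsers: Integer;
  end;

  // fake used only by the tests: canned data, no database connection
  TFakeUserRepository = class(TInterfacedObject, IUserRepository)
  public
    function CountActiveUsers: Integer;
  end;

function TFakeUserRepository.CountActiveUsers: Integer;
begin
  Result := 3; // exactly the 'test data' we want, with zero setup
end;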
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple or you need to do a lot more analysis on the problem domain and discover what components within your problem domain are the most likely to change and the components within your system that have a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing zero reuse from one project to the next for the majority of code you're writing might actually be quite acceptable.
An example of the use of DI is in creating an application that can be deployed for several clients, using DI to inject customisations for each client – which could also be described as the GOF Strategy pattern. Many of the GOF patterns can be facilitated with the use of a DI framework.
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
Even if you don't change the structure of your program during development phases, you will find you need to access several subsystems from different parts of your program. With DI, each of your classes just asks for services, and you're freed from having to provide all the wiring manually.
This really helps me on concentrating on the interaction of things in the software design and not on "who needs to carry what around because someone else needs it later".
Additionally it also just saves a LOT of work writing boilerplate code. Do I need a singleton? I just configure a class to be one. Can I test with such a "singleton"? Yes, I still can (since I just CONFIGURED it to exist only once, but the test can instantiate an alternative implementation).
By the way, before I was using DI I didn't really understand its worth, but trying it was a real eye-opener to me: my designs are a lot more object-oriented than they were before.
By the way, with the current application I DON'T unit-test (bad, bad me) but I STILL couldn't live without DI anymore. It is so much easier moving things around and keeping classes small and simple.
While I semi-agree with you on the DB example, one of the big ways I found DI helpful is in testing the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing, or possibly for demonstrations (I once ran a demo where some data was pulled in from XML instead of a database).
Abstracting 3rd party components and frameworks like this would also help you.
Aside from the testing example, there are a few places where DI can be used through a Design by Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods on the objects you inject into it. While it may not truly "process" them, it runs the methods that have different implementations in each object you provide.
I saw an example of this where every business domain object had a "Save" function that was called after it was injected into the processor. The processor modified the component with configuration information, and Save handled the object's primary state. In essence, DI supplemented the polymorphic method implementations of the objects that conformed to the interface, as sketched below.
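A rough Delphi sketch of that processor idea (ISavable and TProcessor are invented names):

type
  ISavable = interface
    procedure Save;
  end;

  TProcessor = class
  public
    procedure ProcessAll(const Items: array of ISavable);
  end;

procedure TProcessor.ProcessAll(const Items: array of ISavable);
var
  I: Integer;
begin
  for I := Low(Items) to High(Items) do
    Items[I].Save; // each injected object supplies its own Save implementation
end;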
Dependency Injection gives you the ability to test specific units of code in isolation.
Say I have a class Foo, for example, that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a property value of Bar is one which allows some other processing of Bar to take place.
public class Foo
{
    private Bar _bar;

    public Foo(Bar bar)
    {
        _bar = bar;
    }

    public bool IsPropertyOfBarValid()
    {
        return _bar.SomeProperty == PropertyEnum.ValidProperty;
    }
}
Now let's say that Bar is instantiated and its properties are set to data from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on the data from the datasource that its properties are set to. What we would like is some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is some way of faking an instance of Bar passed to Foo, such that we can control the properties set on this fake Bar and achieve what we set out to do: test that the implementation of IsPropertyOfBarValid() does what we expect it to do, i.e. returns true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
There are two types of fake object, Mocks and Stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test to decide on pass/fail.
Martin Fowler has a great article on the difference between Mocks and Stubs
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out if required.
I just understood it tonight.
For me, dependency injection is a method for instantiating objects which require a lot of parameters to work in a specific context.
When should you use dependency injection?
You can use dependency injection if you instantiate an object in a static way. For example, if you use a class which can convert objects into an XML file or a JSON file and you need only the XML file: you would have to instantiate the object and configure a lot of things if you didn't use dependency injection.
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection, because the object is not instantiated in a static way.
I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take in a boatload of interfaces into their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels that there are certain properties that really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really doesn't.
What you're really doing is moving that coupling from compile time to runtime; if class A needs some interface B to work, an instance of a class which implements interface B still needs to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses that I've found helpful for an Inversion of Control pattern:
A plugin architecture. By exposing the right entry points you can define the contract for the service that must be provided.
A workflow-like architecture, where you can connect several components dynamically, wiring the output of one component to the input of another.
Per-client applications. Let's say you have various clients who pay for a set of "features" of your product. By using dependency injection you can easily provide just the core components plus some "added" components providing just the features each client has paid for.
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as needed.
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got some issues with single responsibility and static dependencies. Isolate code that performs a single function into a class, and break the static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime. By doing this, you make it trivial to test the two parts separately.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code
DI should be used to isolate your code from external resources (databases, web services, XML files, plugin architectures). The amount of time it would take to test your logic in code would be almost prohibitive at a lot of companies if you are testing components that DEPEND on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The effort involved in changing resources should be low (data access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to a class?"
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that this makes things much easier to maintain in the long run.
I do it only when it helps with separation of concerns.
For example, across projects I would provide an interface for implementers in one of my library projects, and the implementing project would inject whatever specific implementation it wants.
But that's about it... in all the other cases it would just make the system unnecessarily complex.
Even with all the facts and processes in the world... every decision boils down to a judgment call – forgot where I read that.
I think it's more of an experience / flight-time call.
Basically, if you see the dependency as a candidate for replacement in the near future, use dependency injection. If I see 'class A and its dependencies' as one block for substitution, then I probably won't use DI for A's dependencies.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway, but if there's any doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another item I wrestle with is where I should use dependency injection. Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/MicroKernel; I have no experience with anything else, but I like it a lot.
As for how you decide what to inject: so far the following rule of thumb has served me well: if the class is so simple that it doesn't need unit tests, you can feel free to instantiate it in the class; otherwise you probably want to pass the dependency through the constructor.
As for whether you should create an interface vs. just making your methods and properties virtual, I think you should go the interface route if you either a) can see the class having some level of reusability in a different application (e.g. a logger) or b) find the class otherwise difficult to mock, whether because of the number of constructor parameters or because there is a significant amount of logic in the constructor.