Removing singleton instances from a Delphi project

I am trying to make my first project that can be unit tested, and it is amazing how much I have to rewire my brain away from some vicious coding habits.
This article brought to my attention that singletons are pathological liars.
I am not trying to be radical about it, but I am used to one construct that I am not sure how to get rid of.
example:
initialization
  ModelFactory.RegisterFactoryMethod('standard.contasmovimento',
    function(AParam: Variant): TModel
    begin
      Result := TModelContasMovimento.Create(AParam);
    end);
end.
ModelFactory is a singleton, defined in its own unit, which is part of the uses clause of this unit.
In my MVP structure I define each of the Models, Views and Presenters in its own unit (1 class, 1 unit). All these units are available to be used according to the needs of each project. So I use them like a parts catalog: depending on the project, I add the units to the project and each one gets automatically registered and is ready to be used from the factories.
To solve the singleton problem I was thinking of moving them into a framework class, so I could create the object at one point and then use dependency injection to pass the framework object around. All the factories and other environment stuff would sit there:
TMyFramework = class
  FModelFactory: IModelFactory;
  FViewFactory: IViewFactory;
  FPresenterFactory: IPresenterFactory;
  property ModelFactory: IModelFactory read FModelFactory;
  ...
My idea is to remove the singletons in a way that lets me mock them in a unit test. With the singletons in place I can't swap them out easily for testing.
But that will make me lose the automatic initialization of each unit, and I rely on that when adding the units. I don't want to manually maintain a list of available classes.
Is there a way to solve this situation?
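One way to keep the per-unit auto-registration while still being able to inject (or mock) the factory is to let each unit register a callback instead of touching a singleton directly. A minimal sketch, with hypothetical unit and type names (ModelRegistry here is nothing more than a plain global list of registration procedures; it never holds the factory itself):
unit Model.ContasMovimento.Registration;

interface

uses
  ModelFactoryIntf; // hypothetical unit declaring IModelFactory and TModel

procedure RegisterContasMovimento(const AFactory: IModelFactory);

implementation

uses
  Model.ContasMovimento, // declares TModelContasMovimento
  ModelRegistry;         // hypothetical list of registration callbacks

procedure RegisterContasMovimento(const AFactory: IModelFactory);
begin
  AFactory.RegisterFactoryMethod('standard.contasmovimento',
    function(AParam: Variant): TModel
    begin
      Result := TModelContasMovimento.Create(AParam);
    end);
end;

initialization
  // The unit still adds itself to the catalog automatically, but only by
  // queuing a callback; no factory singleton is touched here.
  ModelRegistry.Add(RegisterContasMovimento);
end.
TMyFramework would then replay the queued callbacks into whichever IModelFactory instance it owns, while a test fixture can replay them into a mock factory, or skip them entirely.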

For unit testing to get off the ground without a complete rewrite of all the singletons you have, I have found that just adding the ability to Free and re-Create the singletons is usually more than enough to get you started.
Assuming the singletons are instantiated in an initialization section (Ban those. They are the bane of any unit test. Go for separate registration and initialization units.), you can simply add two procedures to the interface section. Put them between conditional defines if you don't want other units in your normal project to use them:
{$IFDEF DUNIT}
procedure InstantiateMySingleton;
procedure FreeMySingleton;
{$ENDIF}
Move the code that you now have in your initialization and finalization sections to the implementation of these procedures and just call them from the initialization and finalization.
procedure InstantiateMySingleton;
begin
  // ...
end;

procedure FreeMySingleton;
begin
  // ...
end;

initialization
  InstantiateMySingleton;
finalization
  FreeMySingleton;
With this done you can start to use the InstantiateMySingleton and FreeMySingleton in your unit tests' setup and teardown methods.
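A minimal sketch of how the DUnit side might look, assuming the two procedures live in a unit here called MySingletonUnit and that the test project defines DUNIT:
uses
  TestFramework, MySingletonUnit;

type
  TMySingletonTests = class(TTestCase)
  protected
    procedure SetUp; override;
    procedure TearDown; override;
  published
    procedure TestSomething;
  end;

procedure TMySingletonTests.SetUp;
begin
  inherited;
  InstantiateMySingleton; // every test starts from a fresh instance
end;

procedure TMySingletonTests.TearDown;
begin
  FreeMySingleton; // nothing may leak from one test into the next
  inherited;
end;

procedure TMySingletonTests.TestSomething;
begin
  // exercise the singleton here
end;

initialization
  RegisterTest(TMySingletonTests.Suite);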
All you have then left to do is to make sure that creating and destroying the singleton doesn't leak memory and is something that can actually be repeated with the same functional results every single time. One thing I have found that helps in ensuring this is to use the GUI runner and run the whole test suite twice (without exiting the GUI of course!). If there are tests that succeed on the first run and fail on the second run, you have problems in the initialization or finalization of your singleton.

It depends on whether you need the instance in another class, or need the factory so it can instantiate the class at a later point. In the first case you can simply pass the constructed instance as a dependency. In the second case you pass the factory. In both cases you are using dependency injection instead of looking things up in some other unit ("Don't look for things but ask for things").
Of course this requires some wire up code for the dependency ("poor man's DI") or the use of a DI container.
Dependency injection really is the only way to make things testable in a clean way because you just simply pass mocks to the SUT.
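In the context of the question, a hedged sketch of what that looks like for one presenter (the presenter and mock class names are made up):
type
  TContasMovimentoPresenter = class
  private
    FModelFactory: IModelFactory;
  public
    // The presenter asks for what it needs instead of looking it up.
    constructor Create(const AModelFactory: IModelFactory);
  end;

constructor TContasMovimentoPresenter.Create(const AModelFactory: IModelFactory);
begin
  inherited Create;
  FModelFactory := AModelFactory;
end;

// Composition root ("poor man's DI"), e.g. in the .dpr or in TMyFramework:
//   Presenter := TContasMovimentoPresenter.Create(Framework.ModelFactory);
// In a unit test you hand in a mock instead:
//   Presenter := TContasMovimentoPresenter.Create(TMockModelFactory.Create);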

In general, Miško (Hevery) is making the case about (not) using singletons in other objects. It's a bit of a stretch, but one could view a Delphi unit as an object, so the question is what to do to remove the singleton dependency in the unit.
The easiest solution, and one I have been using for some time, is to move all initialization code for a given project into one (or several) units whose only purpose is to do the initialization for your application.
As for your test cases, you are now free to omit that unit and initialize (or mock) whatever you want.
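A sketch of what such a unit could look like for the MVP catalog from the question (all names are hypothetical); the .dpr of the production project calls RegisterEverything, and the test project simply leaves this unit out:
unit AppRegistrations;

interface

procedure RegisterEverything;

implementation

uses
  Model.ContasMovimento,
  View.ContasMovimento,
  Presenter.ContasMovimento; // the "parts catalog" units chosen for this project

procedure RegisterEverything;
begin
  // Each catalog unit exposes a plain registration procedure instead of an
  // initialization section, so nothing runs until it is asked for.
  Model.ContasMovimento.RegisterModels;
  View.ContasMovimento.RegisterViews;
  Presenter.ContasMovimento.RegisterPresenters;
end;

end.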

If I look hard enough I am sure I can find an article describing each and every design pattern as either an anti-pattern or the devil incarnate. For example, here is an article that describes the Service Locator pattern as an anti-pattern:
http://blog.ploeh.dk/2010/02/03/ServiceLocatorisanAnti-Pattern/
I have seen comments describing factory patterns as anti-patterns.
I have often used singletons in factories in the past and I have no problem testing the adapters/plugins/etc that I "load" with these factories in unit tests. Typically the code in singletons is simple and not something that I would test in itself. The code is tested enough just through normal use. All of the Dependency Injection libraries that I have seen include a singleton for their global container.
Where possible I like to configure my adapters in XML. The system itself is then completely modular and easy to test.
I am not saying that dependency injection is bad. Far from it. I like it. If you are building a system from scratch I would certainly advocate it as a good framework to design around and Spring4D is a good implementation. I am putting forward a different view point. I think that rewriting your system to try and take advantage of a newer or different technology should be your last port of call. It reminds me of the Joel Spolsky article about the Netscape rewrite of Navigator (http://www.joelonsoftware.com/articles/fog0000000069.html).

Related

Ways of using Ninject properly

One definition of Ninject found on the internet is:
"Somewhere in the middle of your application, you're creating a class
inside another class. That means you're creating a dependency.
Dependency Injection is about passing in those dependencies, usually
through the constructor, instead of embedding them."
What I want to learn is: wherever we see a class being created inside another class, should we use Ninject, or should we only use it in the parts of the program where we want/need loose coupling for design purposes, because we might want to use a different approach in the future?
Sorry if this is a silly question.
It's a perfectly valid question, and there are no absolute right or wrong answers. Ninject and other IoC frameworks are designed to de-couple dependencies.
So the moment you do this:
public class MyClass1
{
    public MyClass1()
    {
        MyClass2 mc2 = new MyClass2();
    }
}
You can categorically say that MyClass1 has a dependency on MyClass2.
For me, my rule is this: do I need to unit test MyClass1, or is it likely I'll need to unit test MyClass1?
If I don't need to unit test it, then I don't find much value in decoupling the two classes.
However, if I do need to unit test MyClass1, then injecting in MyClass2 gives you much better control over your unit tests (and allows you to test MyClass1 in isolation).
You do need to evaluate each case separately though. In the above example, if I need to unit test MyClass1, and MyClass2 is just a basic string formatting class, then I probably wouldn't decouple it. However, if MyClass2 was an email sending class then I would de-couple it. I don't want my unit tests actually sending emails, so I would feed in a fake for my tests instead.
So I don't believe there are any solid rules, but hopefully the above gives you a better idea of when you might decouple, and when you might not decouple.

Best practice for resolving in Spring4D?

In the Spring4D demos, ServiceLocator.GetService<MyType>('Name') is used to resolve the types. But why not use GlobalContainer.Resolve<MyType>('Name')? I don't see any advantage in this approach...
There is one use case, where I use ServiceLocator:
when coding to make legacy code projects unit-testable...
There is an old project where, in multiple places, there are constructor calls for objects that I write tests for (new and changed methods only, in classes where injection is not possible, e.g. when a form is created and destroyed in a button event).
In unit tests, Spring4D is helpful to instantiate the class under test:
I can use the GlobalContainer in the dpr for the production project and a special (test-only) TContainer object which is constructed in TestFixture.Setup and destroyed in TestFixture.TearDown. I also re-initialize the global ServiceLocator to use my test container (reason: I have had bad experiences using GlobalContainer in tests; you cannot un-register a type from GlobalContainer in TestFixture.TearDown).
So now I have a big method in the dpr where I register all types with GlobalContainer in the production-code project. In the Setup method of my test fixture class, I register all types needed for the test with my testing container. And in the methods that I changed to make them unit-testable, I construct the classes under test with ServiceLocator where constructor calls were formerly used.
For me, it is the only way to make such a legacy-code project unit-testable... But my strategic goal is to replace most of this code (part-by-part, including the re-initialized ServiceLocators) one day. It is just not possible to replace it now (too much costs, too much risk...).
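A hedged sketch of that fixture layout with DUnitX and Spring4D (the interface, fake class and test names are invented; the call that re-points the global ServiceLocator at the test container is left out here because it varies between Spring4D versions):
uses
  DUnitX.TestFramework,
  Spring.Container;

type
  [TestFixture]
  TLegacyServiceTests = class
  private
    FContainer: TContainer;
  public
    [Setup]
    procedure Setup;
    [TearDown]
    procedure TearDown;
    [Test]
    procedure ResolvesFakeRepository;
  end;

procedure TLegacyServiceTests.Setup;
begin
  // A private, test-only container: registrations made here never pollute
  // GlobalContainer and are thrown away after every test.
  FContainer := TContainer.Create;
  FContainer.RegisterType<TFakeOrderRepository>.Implements<IOrderRepository>;
  FContainer.Build;
end;

procedure TLegacyServiceTests.TearDown;
begin
  FContainer.Free;
end;

procedure TLegacyServiceTests.ResolvesFakeRepository;
begin
  Assert.IsNotNull(FContainer.Resolve<IOrderRepository>);
end;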

How to effectively use interfaces for memory management in Delphi

I'm fairly new to Delphi and have been doing all my memory management manually, but have heard references to Delphi being able to use interfaces to do reference counting and providing some memory management that way. I want to get started with that, but have a few questions.
Just generally, how do I use it? Create the interface and the class implementing it. Then any time I need that object, have the variable actually be of the interface type, but instantiate the object and presto? No need to think about freeing it? No more try-finallys?
It seems very cumbersome to create a bunch of interfaces for classes that really don't need them. Any tips on auto-generating those? How do I best organize that? Interface and class in the same file?
What are common pitfalls that might cause me grief? E.g.: does casting the interfaced object to an object of its class break my reference counting? Or are there any non-obvious ways Delphi would create reference loops? (meaning besides A uses B uses C uses A)
If there are tutorials that cover any of this, that would be great, but I didn't come up with anything in my searches. Thanks.
I am currently working with a very large project that takes advantage of the "side effect" of interface reference counting for the purpose of memory management.
My own personal conclusion is that you end up with a lot of code that is overly complex for no better reason than, "I don't have to worry about calling free"
I would strongly advise against this course of action for some very basic reasons:
1) You are using a side effect that exists for the purpose of COM compatibility.
2) You are making your object footprint heavier and less efficient. Every implemented interface adds a pointer to a method table to each instance, and every reference assignment costs _AddRef/_Release calls.
3) Like you stated... you now have to make piles of interfaces for the sole purpose of avoiding freeing memory yourself... this causes more trouble than it's worth in my opinion.
4) The most common bug, and a HUGE pain to debug, is when an object gets freed before its last reference is released. We have special code in our own reference counting to try and test for this problem before software goes out the door.
Now to answer your questions.
1) Given TFoo and interface IFoo you can have a method like the following
function GetFoo: IFoo;
begin
  Result := (TFoo.Create as IFoo);
end;
...and presto, you don't need the finally to free it.
2) Yes like I said, you think it's a great idea, but it turns into a huge pain in the bupkis
3) 2 problems.
A) You have Object1 holding Interface2 and Object2 holding Interface1... these objects will never be freed due to the circular reference.
B) Freeing the object before all the references are released; I cannot stress how difficult these bugs are to track down...
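To make problem A concrete, here is a minimal, self-contained sketch of such a leak (the types are invented purely for illustration):
type
  IChild = interface;  // forward declaration

  IParent = interface
    procedure SetChild(const AChild: IChild);
  end;

  IChild = interface
    procedure SetParent(const AParent: IParent);
  end;

  TParent = class(TInterfacedObject, IParent)
  private
    FChild: IChild;   // strong reference
  public
    procedure SetChild(const AChild: IChild);
  end;

  TChild = class(TInterfacedObject, IChild)
  private
    FParent: IParent; // strong reference back - this closes the cycle
  public
    procedure SetParent(const AParent: IParent);
  end;

procedure TParent.SetChild(const AChild: IChild);
begin
  FChild := AChild;
end;

procedure TChild.SetParent(const AParent: IParent);
begin
  FParent := AParent;
end;

procedure LeakDemo;
var
  Parent: IParent;
  Child: IChild;
begin
  Parent := TParent.Create;
  Child := TChild.Create;
  Parent.SetChild(Child);
  Child.SetParent(Parent);
end; // both local references are released here, but each object still holds
     // the other, so the counts never reach zero and neither destructor runs
Breaking the cycle means making one side weak: store a raw object reference or pointer instead, or use the [weak] attribute available in recent Delphi versions.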
The most common complaint leading to the desire for "automatic garbage collection" in Delphi is the way that even short-lived temporary objects have to be disposed of manually and that you have to write a fair amount of "boiler-plate" code to ensure that this takes place when exceptions occur.
For example, creating a TStringList for some temporary sorting or other algorithmic purpose within a procedure:
procedure SomeStringsOperation(const aStrings: TStrings);
var
  list: TStringList;
begin
  list := TStringList.Create;
  try
    :
    // do some work with "list"
    :
  finally
    list.Free;
  end;
end;
As you mentioned, objects that implement the COM protocol of reference counted lifetime management avoid this by cleaning themselves up when all references to them have been released.
But since TStringList isn't a COM object, you cannot enjoy the convenience this offers.
Fortunately there is a way to use COM reference counting to take care of these things without have to create all new, COM versions of the classes you wish to use. You don't even need to switch to an entirely COM based model.
I created a very simple utility class to allow me to "wrap" ANY object inside a lightweight COM container specifically for the purpose of getting this automatic cleanup behaviour. Using this technique you can replace the above example with:
procedure SomeStringsOperation(const aStrings: TStrings);
var
  list: TStringList;
begin
  AutoFree(@list);
  list := TStringList.Create;
  :
  // do some work with "list"
  :
end;
The AutoFree() function call creates an "anonymous" interfaced object that is Release()'d in the exit code generated by the compiler for the procedure. This autofree object is passed a pointer to the variable that references the object you wish to be free'd. Among other things this allows us to use the AutoFree() function as a pseudo-"declaration", placing any and ALL AutoFree() calls at the top of the method, as close as possible to the variable declarations that they reference, before we have even created any objects.
Full details of the implementation, including source code and further examples, are on my blog in this post.
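For reference, a minimal sketch of one way such a wrapper can be built (my own illustration, not the implementation from the linked post): an interfaced guard object remembers the address of the variable and frees whatever it points to when the guard itself is released in the routine's exit code.
type
  PObject = ^TObject;

  TAutoFreeGuard = class(TInterfacedObject)
  private
    FVarRef: PObject;
  public
    constructor Create(AVarRef: Pointer);
    destructor Destroy; override;
  end;

constructor TAutoFreeGuard.Create(AVarRef: Pointer);
begin
  inherited Create;
  FVarRef := AVarRef;
  FVarRef^ := nil; // the watched variable starts out well defined
end;

destructor TAutoFreeGuard.Destroy;
begin
  FVarRef^.Free; // calling Free on a nil reference is safe
  inherited;
end;

function AutoFree(AVarRef: Pointer): IInterface;
begin
  // The caller ignores the result, but the hidden temporary the compiler
  // creates for it keeps the guard alive until the routine exits; its
  // final _Release then triggers the destructor above.
  Result := TAutoFreeGuard.Create(AVarRef);
end;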
The memory management of interfaces is done through implementation of _AddRef and _Release which are implemented by TInterfacedObject.
In general using interfaces to make memory management less cumbersome can be a nice idea, but you need to take care of these things:
Make sure the classes that implement interfaces derive from TInterfacedObject, or roll your own ancestor class that provides good implementations of _AddRef and _Release.
Use either/or: either use interface references, or use object instance references; don't mix them. That can be problematic when implementing interfaces in components (as those derive from TComponent, not TInterfacedObject).
Don't go the TInterfacedComponent way, as that mixes Owner-based memory management with _AddRef/_Release-based memory management.
Watch circular interface references (you can get around them by implementing the "weak interface references" mentioned here and implemented here).
You need to maintain extra code, as you need to define interfaces for the parts of your classes that you want to expose and keep the two in sync (you could use ModelMaker Code Explorer for this; it allows you to extract interfaces and in general boost your development because it manages the interface/implementation parts of code as single actions).
You need some extra plumbing to create instances of the underlying classes. You can use the factory pattern for that.
That is not always effective, but it does answer a few of your underlying questions.
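A small sketch that ties several of these points together: the implementing class stays hidden in the implementation section and a factory function hands out the interface, so callers can never mix object and interface references for it (unit and type names are invented):
unit OrderService;

interface

type
  IOrderService = interface
    // add a GUID (Ctrl+Shift+G) if you need Supports/QueryInterface
    procedure PlaceOrder(const ACustomerId: string);
  end;

// Factory function: callers only ever see the interface.
function CreateOrderService: IOrderService;

implementation

type
  TOrderService = class(TInterfacedObject, IOrderService)
  public
    procedure PlaceOrder(const ACustomerId: string);
  end;

procedure TOrderService.PlaceOrder(const ACustomerId: string);
begin
  // ...
end;

function CreateOrderService: IOrderService;
begin
  Result := TOrderService.Create;
end;

end.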
Shortest possible answer: The default delphi memory model is that owners free the objects they own. All other references are weak references and must let go before the owner does. "Sharing" an object that has a lifetime shorter than the entire lifetime of the app is rarely done. Reference counting is rarely done, and when it is done, it is only done by experts, or else it adds more bugs and crashes than it solves.
Learn idiomatic Delphi style and try to imitate it; don't fight the grain. Sadly, people think that "program against interfaces, not implementations" means "use IUnknown everywhere". That's not true. I recommend you don't use COM IUnknown interfaces, and use abstract base classes instead. The only thing you can't do is implement two abstract base classes in a single class, and the need for that is rare.
Update: I've recently found it helpful to use COM Interfaces (IUnknown based) to help me separate out my model and controller implementations from my UI classes. So I do find using IUnknown based interfaces useful. But there is not a lot of documentation and prior art out there to base your efforts on. I'd like to see a "cookbook" style recipe that lays all this out for people, so they can work without the usual problem of combining interface and non-interface based lifetime management, and all the trouble that comes while you get used to that extra complexity.
Switching to interfaces only to avoid manual Frees is senseless. The small economy in Free/try-finally lines will hardly compensate for the necessity of declaring both getters/setters and properties in the interface, not to mention keeping the interface and class declarations in sync. Interfaces also bring a performance loss due to implicit finalization code and reference counting. If performance is not the main point and all you want to achieve is auto-freeing, I'd recommend using a universal interface wrapper like the one Deltics suggested.

What are the pros and cons of using interfaces in Delphi?

I have used Delphi classes for a while now but never really got into using interfaces. I already have read a bit about them but want to learn more.
I would like to hear which pros and cons you have encountered when using interfaces in Delphi regarding coding, performance, maintainability, code clearness, layer separation and generally speaking any regard you can think of.
All I can think of for now:
Pros:
Clear separation between interface and implementation
Reduced unit dependencies
Multiple inheritance
Reference counting (if desired, can be disabled)
Cons:
Class and interface references cannot be mixed (at least with reference counting)
Getter and setter functions required for all properties
Reference counting does not work with circular references
Debugging difficulties (thanks to gabr and Warren for pointing that out)
Adding a few more advantages to the answers:
Use interfaces to represent the behavior and each implementation of a behavior will implement the interface.
API publishing: interfaces are great to use when publishing APIs. You can publish an interface without giving out the actual implementation. So you are free to make internal structural changes without causing any problems for the clients.
All I say is that interfaces WITHOUT reference counting are VERY HIGH on my wishlist for delphi!!!
--> The real use of interfaces is the declaration of an interface. Not the ability for reference counting!
There are some SUBTLE downsides to interfaces that I don't know if people consider when using them:
Debugging becomes more difficult. I have seen a lot of strange difficulties stepping into interfaced method calls, in the debugger.
Interfaces in Delphi come with IUnknown semantics; whether you like it or not, you're stuck with reference counting being a supported part of the interface. And thus, with any interface created in Delphi's world, you have to be sure you handle reference counting correctly, and if you don't, you'll end up with leaks. When you want to avoid reference counting, your only choice is to override _AddRef/_Release and not actually free anything, but this is not without its own problems. I find that the more heavily interface-laden codebases have some of the hardest-to-find access violations and memory leaks, and this is, I think, because it is very difficult to combine the refcount semantics with the default Delphi semantics (the owner frees objects, nobody else does, and most objects live for the entire life of their parents).
Badly-done implementations using Interfaces can contribute some nasty code-smells. For example, Interfaces defined in the same unit that defines the initial concrete implementation of a class, add all the weight of interfaces, without really providing proper separation between the users of the interfaces and the implementors. I know this isn't a problem with interfaces themselves, but more of a quibble with those who write interface-based code. Please put your interface declarations in units that only have those interface declarations in them, and avoid unit-to-unit dependency hell caused by glomming your interface declarations into the same units as your implementor classes.
I mostly use interfaces when I want objects with different ancestry to offer a common service. The best example I can think of from my own experience is an interface called IClipboard:
IClipboard = interface
  function CopyAvailable: Boolean;
  function PasteAvailable(const Value: string): Boolean;
  function CutAvailable: Boolean;
  function SelectAllAvailable: Boolean;
  procedure Copy;
  procedure Paste(const Value: string);
  procedure Cut;
  procedure SelectAll;
end;
I have a bunch of custom controls derived from standard VCL controls. They each implement this interface. When a clipboard operation reaches one of my forms it looks to see if the active control supports this interface and, if so, dispatches the appropriate method.
For a very simple interface you can do this with an of object event handler, but once it gets sufficiently complex an interface works well. In fact I think that is a very good analogue: use an interface where a single of object event won't cover the functionality.
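The dispatch described above might look roughly like this (the form, action handler and control names are placeholders; note that IClipboard needs a GUID for Supports to find it):
uses
  System.SysUtils, // Supports(); just SysUtils in older Delphi versions
  Vcl.Forms;

procedure TMyForm.CopyActionExecute(Sender: TObject);
var
  Clip: IClipboard;
begin
  // Hand the clipboard command to whichever control has focus,
  // but only if that control implements IClipboard.
  if Supports(ActiveControl, IClipboard, Clip) then
    if Clip.CopyAvailable then
      Clip.Copy;
end;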
Interfaces solves a certain kind of issues. The primary function is to... well, ...define interfaces. To distinguish between definition and implementation.
When you want to specify or check if a class supports a set of methods - use interfaces.
You cannot do that in any other way.
(If all classes inherit from the same base class, then an abstract class will define the interface. But when you are dealing with different class hierarchies, you need interfaces to define the methods they have in common...)
Extra note on
Cons: Performance
I think many people too blithely dismiss the performance penalty of interfaces. (Not that I don't like and use interfaces, but you should be aware of what you are getting into.) Interfaces can be expensive not just for the _AddRef / _Release hit (even if you are just returning -1) but also because properties are REQUIRED to have a Get method. In my experience, most properties in a class have direct access for the read accessor (e.g., property Prop1: Integer read FProp1 write SetProp1). Changing that direct, no-penalty access into a function call can be a significant hit on your speed (especially when you start adding tens of property calls inside a loop).
For example, a simple loop using a class
for i := 0 to 99 do
begin
  j := (MyClass.Prop1 + MyClass.Prop2 + MyClass.Prop3) / MyClass.Prop4;
  MyClass.Update;
  // do something with j
end;
goes from 0 function calls to 400 function calls when the class becomes an interface. Add more properties in that loop and it quickly gets worse.
The _AddRef / _Release penalty you can ameliorate with some tips (I am sure there are other tips. This is off the top of my head):
Use WITH or assign to a temp variable to only incur the penalty of one _AddRef / _Release per code block
Always pass interfaces into a function using the const keyword (otherwise an extra _AddRef / _Release pair occurs every time that function is called).
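A sketch of both tips applied to a loop like the one above (IMyData/IStats and their members are invented):
// const avoids an _AddRef/_Release pair for the parameter on every call.
procedure ProcessAll(const AData: IMyData);
var
  i: Integer;
  Stats: IStats;
  Total: Double;
begin
  // Fetch the sub-interface once; only one _AddRef/_Release pair is paid
  // for the whole block instead of one per loop iteration.
  Stats := AData.Stats;
  Total := 0;
  for i := 0 to 99 do
    Total := Total + Stats.ValueAt(i);
end;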
The only case when we had to use interfaces (besides COM/ActiveX stuff) was when we needed multiple inheritance and interfaces were the only way to get it. In several other cases when we attempted to use interfaces, we had various kinds of problems, mainly with reference counting (when the object was accessed both as a class instance and via interface).
So my advice would be to use them only when you know that you need them, not when you think that it can make your life easier in some aspect.
Update: As David reminded, with interfaces you get multiple inheritance of interfaces only, not of implementation. But that was fine for our needs.
Beyond what others already listed, a big pro of interfaces is the ability of aggregating them.
I wrote a blog post on that topic a while ago which can be found here: http://www.nexusdb.com/support/index.php?q=intf-aggregation (tl;dr: you can have multiple objects each implementing an interface and then assemble them into an aggregate which to the outside world looks like a single object implementing all these interfaces)
You might also want to have a look at the "Interface Fundamentals" and "Advanced Interface Usage and Patterns" posts linked there.

Dependency injection through constructors or property setters?

I'm refactoring a class and adding a new dependency to it. The class is currently taking its existing dependencies in the constructor. So for consistency, I add the parameter to the constructor.
Of course, there are a few subclasses plus even more for unit tests, so now I am playing the game of going around altering all the constructors to match, and it's taking ages.
It makes me think that using properties with setters is a better way of getting dependencies. I don't think injected dependencies should be part of the interface to constructing an instance of a class. You add a dependency and now all your users (subclasses and anyone instantiating you directly) suddenly know about it. That feels like a break of encapsulation.
This doesn't seem to be the pattern with the existing code here, so I am looking to find out what the general consensus is, pros and cons of constructors versus properties. Is using property setters better?
Well, it depends :-).
If the class cannot do its job without the dependency, then add it to the constructor. The class needs the new dependency, so you want your change to break things. Also, creating a class that is not fully initialized ("two-step construction") is an anti-pattern (IMHO).
If the class can work without the dependency, a setter is fine.
The users of a class are supposed to know about the dependencies of a given class. If I had a class that, for example, connected to a database, and didn't provide a means to inject a persistence layer dependency, a user would never know that a connection to the database would have to be available. However, if I alter the constructor I let the users know that there is a dependency on the persistence layer.
Also, to prevent yourself from having to alter every use of the old constructor, simply apply constructor chaining as a temporary bridge between the old and new constructor.
public class ClassExample
{
    public ClassExample(IDependencyOne dependencyOne, IDependencyTwo dependencyTwo)
        : this(dependencyOne, dependencyTwo, new DependencyThreeConcreteImpl())
    { }

    public ClassExample(IDependencyOne dependencyOne, IDependencyTwo dependencyTwo, IDependencyThree dependencyThree)
    {
        // Set the properties here.
    }
}
One of the points of dependency injection is to reveal what dependencies the class has. If the class has too many dependencies, then it may be time for some refactoring to take place: Does every method of the class use all the dependencies? If not, then that's a good starting point to see where the class could be split up.
Of course, putting on the constructor means that you can validate all at once. If you assign things into read-only fields then you have some guarantees about your object's dependencies right from construction time.
It is a real pain adding new dependencies, but at least this way the compiler keeps complaining until it's correct. Which is a good thing, I think.
If you have a large number of optional dependencies (which is already a smell) then setter injection is probably the way to go. Constructor injection better reveals your dependencies, though.
The general preferred approach is to use constructor injection as much as possible.
Constructor injection states exactly which dependencies are required for the object to function properly - nothing is more annoying than newing up an object and having it crash when calling a method on it because some dependency is not set. The object returned by a constructor should be in a working state.
Try to have only one constructor, it keeps the design simple and avoids ambiguity (if not for humans, for the DI container).
You can use property injection when you have what Mark Seemann calls a local default in his book "Dependency Injection in .NET": the dependency is optional because you can provide a fine working implementation but want to allow the caller to specify a different one if needed.
(Former answer below)
I think that constructor injection are better if the injection is mandatory. If this adds too many constructors, consider using factories instead of constructors.
Setter injection is nice if the injection is optional, or if you want to change it halfway through. I generally don't like setters, but it's a matter of taste.
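Since the parent question is about Delphi, here is a hedged Delphi sketch of the two styles side by side (all names invented, and EArgumentNilException assumed from SysUtils): a mandatory dependency goes through the constructor, an optional one with a local default goes through a property.
type
  TReportBuilder = class
  private
    FData: IReportData;  // required: the class cannot work without it
    FLogger: ILogger;    // optional: has a sensible local default
  public
    constructor Create(const AData: IReportData);
    // Setter/property injection for the optional dependency.
    property Logger: ILogger read FLogger write FLogger;
  end;

constructor TReportBuilder.Create(const AData: IReportData);
begin
  inherited Create;
  if AData = nil then
    raise EArgumentNilException.Create('AData'); // fail fast on a missing dependency
  FData := AData;
  FLogger := TNullLogger.Create; // local default; the caller may overwrite it
end;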
It's largely a matter of personal taste.
Personally I tend to prefer the setter injection, because I believe it gives you more flexibility in the way that you can substitute implementations at runtime.
Furthermore, constructors with a lot of arguments are not clean in my opinion, and the arguments provided in a constructor should be limited to non-optional arguments.
As long as the classes interface (API) is clear in what it needs to perform its task,
you're good.
I prefer constructor injection because it helps "enforce" a class's dependency requirements. If it's in the c'tor, a consumer has to set the objects to get the app to compile. If you use setter injection they may not know they have a problem until run time - and depending on the object, it might be late in run time.
I still use setter injection from time to time when the injected object maybe needs a bunch of work itself, like initialization.
I personally prefer the Extract and Override "pattern" over injecting dependencies in the constructor, largely for the reason outlined in your question. You can set the properties as virtual and then override the implementation in a derived testable class.
I prefer constructor injection, because this seems most logical. It's like saying my class requires these dependencies to do its job. If it's an optional dependency then properties seem reasonable.
I also use property injection for setting things that the container does not have a reference to, such as an ASP.NET View on a presenter created using the container.
I don't think it breaks encapsulation. The inner workings should remain internal and the dependencies deal with a different concern.
One option that might be worth considering is composing complex multiple-dependencies out of simple single dependencies. That is, define extra classes for compound dependencies. This makes things a little easier WRT constructor injection - fewer parameters per call - while still maintaining the must-supply-all-dependencies-to-instantiate thing.
Of course it makes most sense if there's some kind of logical grouping of dependencies, so the compound is more than an arbitrary aggregate, and it makes most sense if there are multiple dependents for a single compound dependency - but the parameter block "pattern" has been around for a long time, and most of those that I've seen have been pretty arbitrary.
Personally, though, I'm more a fan of using methods/property-setters to specify dependencies, options etc. The call names help describe what is going on. It's a good idea to provide example this-is-how-to-set-it-up snippets, though, and make sure the dependent class does enough error checks. You might want to use a finite state model for the setup.
I recently ran into a situation where I had multiple dependencies in a class, but only one of the dependencies was necessarily going to change in each implementation. Since the data access and error logging dependencies would likely only be changed for testing purposes, I added optional parameters for those dependencies and provided default implementations of those dependencies in my constructor code. In this way, the class maintains its default behavior unless overridden by the consumer of the class.
Using optional parameters can only be accomplished in frameworks that support them, such as .NET 4 (for both C# and VB.NET, though VB.NET has always had them). Of course, you can accomplish similar functionality by simply using a property that can be reassigned by the consumer of your class, but you don't get the advantage of immutability provided by having a private interface object assigned to a parameter of the constructor.
All of this being said, if you are introducing a new dependency that must be provided by every consumer, you're going to have to refactor your constructor and all code that consumers your class. My suggestions above really only apply if you have the luxury of being able to provide a default implementation for all of your current code but still provide the ability to override the default implementation if necessary.
Constructor injection does explicitly reveal the dependencies, making code more readable and less prone to unhandled run-time errors if arguments are checked in the constructor, but it really does come down to personal opinion, and the more you use DI the more you'll tend to sway back and forth one way or the other depending on the project. I personally have issues with code smells like constructors with a long list of arguments, and I feel that the consumer of an object should know the dependencies in order to use the object anyway, so this makes a case for using property injection. I don't like the implicit nature of property injection, but I find it more elegant, resulting in cleaner-looking code. But on the other hand, constructor injection does offer a higher degree of encapsulation, and in my experience I try to avoid default constructors, as they can have an ill effect on the integrity of the encapsulated data if one is not careful.
Choose injection by constructor or by property wisely based on your specific scenario. And don't feel that you have to use DI just because it seems necessary and it will prevent bad design and code smells. Sometimes it's not worth the effort to use a pattern if the effort and complexity outweighs the benefit. Keep it simple.
This is an old post, but if it is needed in future maybe this is of any use:
https://github.com/omegamit6zeichen/prinject
I had a similar idea and came up with this framework. It is probably far from complete, but it is an idea of a framework focusing on property injection
It depends on how you want to implement.
I prefer constructor injection wherever I feel the values that go into the implementation don't change often. E.g., if the company strategy is to go with an Oracle server, I will configure my datasource values for a bean acquiring connections via constructor injection.
Otherwise, if my app is a product and chances are it can connect to any DB of the customer, I would implement such DB configuration and multi-brand support through setter injection. I have just taken an example, but there are better ways of implementing the scenarios I mentioned above.
When to use Constructor injection?
When we want to make sure that the Object is created with all of its dependencies and to ensure that required dependencies are not null.
When to use Setter injection?
When we are working with optional dependencies that can be assigned reasonable default values within the class. Otherwise, not-null checks must be performed everywhere the code uses the dependency.
Additionally, setter methods make objects of that class open to reconfiguration or re-injection at a later time.
Sources:
Spring documentation,
Java Revisited
