ServiceManager Advice - zend-framework2

I'm simply looking for advice on the best way I should handle this situation.
Right now I've got several files in a folder called Service. The files contain several functions which do various things. Each of these files needs access to the SM Adapter.
My question is: should I implement the ServiceManagerAwareInterface in each of these files, OR should I make a new class which implements the ServiceManagerAwareInterface and have my classes extend it?
Both ways work as they should; I'm just not sure which way is more proper.

If you think that your system will always rely on ZF2, both approaches are equivalent.
Now, from an OO design perspective, I personally prefer the approach where you create a base class that implements the ServiceManagerAwareInterface and extend your services from it. I would even depend on an interface of my own rather than on the ServiceLocator, to protect my classes even more. Why?
Extending your classes does not cost you much, and neither does making your classes depend on interfaces.
Let's take an example. Imagine you did not use this approach during a ZF1 project, in which you probably resolved your dependencies with the Zend_Registry.
Now let's assume you moved to a ZF2 implementation: how much time do you think you'd spend refactoring your Service layer from something like Zend_Registry::get($serviceX) to $this->getServiceManager()->get($serviceX)?
Now assume you had made the choice of protecting your classes, first by creating your own service locator interface, as simple as:
interface MyOwnServiceLocatorInterface
{
    public function get($service);
}
Under ZF1 you would have created an adapter class using the Zend_Registry:
class MyZF1ServiceLocator implements MyOwnServiceLocatorInterface
{
    public function get($service)
    {
        return Zend_Registry::get($service);
    }
}
Your Service classes are not coupled to the Zend_Registry, which makes the refactoring much easier.
Now you decide to move to ZF2, so you'll logically use the ServiceManager. You then create this new adapter class:
class MyZF2ServiceLocator implements ServiceManagerAwareInterface, MyOwnServiceLocatorInterface
{
    private $_sm;

    public function get($service)
    {
        return $this->_sm->get($service);
    }

    public function setServiceManager(ServiceManager $serviceManager)
    {
        $this->_sm = $serviceManager;
    }
}
Again, your Service classes are not coupled to the ZF2 ServiceManager.
Now, what would the configuration/registration of your Service layer look like on the ServiceManager? You'll use your Module::getServiceConfig() method for that:
// Module.php
public function getServiceConfig()
{
    return array(
        'factories' => array(
            'My\ServiceA' => function ($sm) {
                return new My\ServiceA($sm->get('My\Service\Name\Space\MyZF2ServiceLocator'));
            },
            // Some other config
        ),
    );
}
As you can see, no refactoring is needed within your Service classes, since we protected them by relying on an interface and using adapters. Because we used a closure factory, we don't even need to extend our Service classes and implement the ServiceLocatorAwareInterface.
Before concluding, I should note that my example did not cover the case where classes are constructed via factories. However, you can check one of my previous answers that addresses the factory topic as well as the importance of loose coupling between application layers.

You can add initializers to do that; they reduce the repetitive injection needed when fetching services that require the DB adapter. Alternatively, you can register abstract_factories, which reduce repetitive ServiceManager registration. I just posted a ServiceManager cheatsheet here; hope it helps :)
https://samsonasik.wordpress.com/2013/01/02/zend-framework-2-cheat-sheet-service-manager/

What is the main technical purpose of abstraction in programming, using DI, interfaces, and abstract classes?

I viewed a couple of answers online regarding abstractions, abstract classes, interfaces, DI, and loose coupling, but none of them answers my question. I grouped these topics because they are all related to achieving abstraction. I have a good general understanding of the mentioned topics, but I do not yet fully understand them in detail or how they relate to each other.
Generally speaking, interfaces are used to make classes loosely coupled: they define a set of functions and fields to be implemented. The idea of making classes loosely coupled is to remove dependencies between several classes.
For instance, if we make a change to one of these classes, we do not need to change other places, which keeps the code maintainable. The only good example I can think of for achieving loose coupling is DI. So when we say interfaces make classes loosely coupled, do we mean by passing an interface as a dependency?
(Please continue reading; the rest will clarify further.)
A question here is: if we are going to use DI and pass interfaces as dependencies, then why not pass a class as a dependency instead? Maybe I need further clarification about interfaces before answering that question, so I will explain further.
The main idea of interfaces is to establish a contract with the classes that implement them, meaning we define functions and fields and force implementers to provide them. But the idea of interfaces as contracts is still not clear to me, because if we force a developer to implement an interface called Server that has methods to turn the server on and off, but the developer forgets to actually turn the server off programmatically, then what is the point of this contract?
Further, my understanding is that this all falls under the concept of abstraction, which means we do not need to worry about details. Does that mean that when building an application we first need to create classes/structures without code, for example using UML?
Further, why would we use an abstract class over an interface, given that an abstract class is similar to an interface in that it can declare a function without a body?
Coming back to interfaces and DI: we can inject an interface as a dependency, but why? Can we not inject a class itself? Is it not easier to use a class as a dependency, where we can access all of its functions, or is that not the idea of an interface? Can somebody help with this? I only understand one use case for why we should use DI. Example:
// Class1 creates its own Class2 inside its constructor
public class Class1 {
    public Class1() {
        Class2 class2 = new Class2(1, 1, 1);
    }
}
The above example is not maintainable, because if we add a new parameter to Class2's constructor then we need to modify every place that constructs it; if we use DI, we won't. Are there any other reasons?
Also, DI can be useful to create one instance and use that instance across the whole application. Does that save some memory by not creating multiple instances, or save some setup time?
Another question: should we use abstractions at the very early coding stage, where we create classes without code?
Further, do we use interfaces to make developers aware that they need to implement a certain set of functions? But why?
Do I predict that we need an interface by creating UML diagrams, to see whether there would be different classes with similar functionality that should share an interface?
"Can we not just create a superclass and override methods?"
Can somebody explain when to use a superclass with overridden methods rather than an interface, and provide an implementation?
Also, when should one pass an interface as a dependency, and when a class? One advantage I can think of when using interfaces is polymorphism, where we can make an interface of any implementing type and then access functions through the interface type. Example:
Class1 class = new Interface1();
Can this be possible?
The bottom line is that we always want to make our classes loosely coupled, meaning we decouple classes to achieve maintainability. Loosely coupled classes give us late binding, extensibility, maintainability, and easy testing (refer to reference 1). We use interfaces to make classes loosely coupled as well, but before explaining how, we need to understand why we use interfaces and how they differ from abstract classes. Interfaces are mainly used as contracts: when we create multiple classes sharing the same behaviour but with different implementations, we use an interface. An interface is a piece of infrastructure that tells developers what methods to implement; it only includes function and field signatures, with no implementation.
The way we use DI to achieve loosely coupled classes is by injecting dependencies. Suppose the following classes implement an interface called Database:
public interface Database
{
    void Save();
}

class SqlServer : Database
{
    public void Save()
    {
        Console.WriteLine("Saving...");
    }
}

class Oracle : Database
{
    public void Save()
    {
        Console.WriteLine("Saving...");
    }
}
Then we can easily inject dependencies as follows:
class Library
{
    private Database _database;

    // The constructor takes the interface, so any implementation can be injected.
    public Library(Database database)
    {
        this._database = database;
    }

    public void SaveBook()
    {
        _database.Save();
    }
}
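To make the decoupling concrete, here is a brief usage sketch (the Program/Main wrapper is an illustrative addition, not part of the original answer):
class Program
{
    static void Main()
    {
        // The same Library works with either implementation of Database.
        var library = new Library(new SqlServer());
        library.SaveBook(); // prints "Saving..."

        library = new Library(new Oracle());
        library.SaveBook(); // same call, different injected implementation
    }
}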
Using the above approach we are injecting the dependency, meaning the classes are now not tightly coupled: if any change is made to SqlServer, Library does not have to change. To achieve fully decoupled classes, use a DI container (refer to reference 1).
The difference between abstract classes and interfaces is that we use an interface to define a contract, whereas we use an abstract class when we want partial implementation: in an abstract class you can implement some methods while leaving others abstract.
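As a minimal sketch of that difference in C# (BaseDatabase and ConnectAndSave are illustrative names, reusing the Database interface from above):
using System;

// Interface: a pure contract, no implementation.
public interface Database
{
    void Save();
}

// Abstract class: partial implementation shared by all subclasses.
public abstract class BaseDatabase
{
    // Implemented once here for every subclass...
    public void ConnectAndSave()
    {
        Console.WriteLine("Connecting...");
        Save();
    }

    // ...while each concrete database must still provide this part.
    public abstract void Save();
}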
You may create UML class diagrams to represent class relationships without needing to worry about the coding side yet.
As I am replying to my own question, I think it is good to create the classes and their relationships first (I will call it the class structure) and do all the coding later, in case a UML class diagram is not going to be used. I guess this falls under the technique/concept called abstraction, where we do not worry about the details yet, so we can have a picture of how the application is structured without using UML.
Hope this makes sense.
References:
1. Dependency Injection in .Net: https://findnerd.com/account/#url=/list/view/Dependency-Injection-in--Net/24098/

Dependency Injection and/vs Global Singleton

I am new to the dependency injection pattern. I love the idea, but I struggle to apply it to my case. I have a singleton object, let's call it X, which I need often in many parts of my program, in many different classes, sometimes deep in the call stack. Usually I would implement this as a globally available singleton. How is this implemented within the DI pattern, specifically with the .NET Core DI container? I understand I need to register X with the DI container as a singleton, but how do I then get access to it? DI will instantiate classes whose constructors take a reference to X, and that's great, but I need X deep within the call hierarchy, within my own objects which .NET Core or the DI container knows nothing about, in objects that were created using new rather than instantiated by the DI container.
I guess my question is: how is the global singleton pattern aligned with, implemented by, replaced by, or avoided with the DI pattern?
Well, "new is glue" (Link). That means if you have new'ed an instance, it is glued to your implementation. You cannot easily exchange it with a different implementation, for example a mock for testing. Like gluing together Lego bricks.
I you want to use proper dependency injection (using a container/framework or not) you need to structure your program in a way that you don't glue your components together, but instead inject them.
Every class is basically at hierarchy level 1 then. You need an instance of your logger? You inject it. You need an instance of a class that needs a logger? You inject it. You want to test your logging mechanism? Easy, you just inject something that conforms to your logger interface that logs into a list and the at the end of your test you can check your list and see if all the required logs are there. That is something you can automate (in contrast to using your normal logging mechanism and checking the logfiles by hand).
That means in the end, you don't really have a hierarchy, because every class you have just gets their dependencies injected and it will be the container/framework or your controlling code that determines what that means for the order of instantiation of objects.
As far as design patterns go, allow me an observation: even now, you don't need a singleton. Right now, your program would work with a plain global variable. But I guess you read that global variables are "bad" and design patterns are "good", and since you need a global variable and a singleton delivers a global variable, why use the "bad" when you can use the "good", right? Well, the problem is that even with a singleton, the global variable is bad. It's a drawback of the pattern, a toad you have to swallow for the singleton logic to work. In your case, you don't need the singleton logic, but you like the taste of toads, so you created a singleton. Don't do that with design patterns. Read them very carefully and make sure you use them for their intended purpose, not because you like their side effects or because it feels good to use a design pattern.
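To make the "log into a list" testing idea from above concrete, here is a minimal sketch (ILogger and ListLogger are illustrative names, not from any specific framework):
using System.Collections.Generic;

public interface ILogger
{
    void Log(string message);
}

// A test double that records messages instead of writing them anywhere.
public sealed class ListLogger : ILogger
{
    public List<string> Messages { get; } = new List<string>();

    public void Log(string message)
    {
        Messages.Add(message);
    }
}

In a test, you inject a ListLogger, run the code under test, and then assert that Messages contains the expected entries.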
Just an idea, and maybe I need your thoughts:
public static class DependencyResolver
{
    public static Func<IServiceProvider> GetServiceProvider;
}
Then in Startup:
public void Configure(IApplicationBuilder app, IServiceProvider serviceProvider)
{
    DependencyResolver.GetServiceProvider = () => { return serviceProvider; };
}
And now in any deep class:
DependencyResolver.GetServiceProvider().GetService<IService>();
Here's a simplified example of how this would work without a singleton.
This example assumes that your project is built in the following way:
the entry point is main
main creates an instance of class GuiCreator, then calls the method createAndRunGUI()
everything else is handled by that method
So your simplified code looks like this:
// main
// ... (boilerplate)
container = new Container();
gui = new GuiCreator(container.getDatabase(), container.getLogger(), container.getOtherDependency());
gui.createAndRunGUI();
// ... (boilerplate)

// GuiCreator
public class GuiCreator {
    private IDatabase db;
    private ILogger log;
    private IOtherDependency other;

    public GuiCreator(IDatabase newdb, ILogger newlog, IOtherDependency newother) {
        db = newdb;
        log = newlog;
        other = newother;
    }

    public void createAndRunGUI() {
        // do stuff
    }
}
The Container class is where you actually define which implementations will be used, while the GuiCreator constructor takes interfaces as arguments. Now let's say the implementation of ILogger you choose has itself a dependency, defined by an interface its constructor takes as an argument. The Container knows this and resolves it accordingly by instantiating the Logger as new LoggerImplementation(getLoggerDependency());. This goes on for the entire dependency chain.
So in essence:
All classes keep instances of interfaces they depend upon as members.
These members are set in the respective constructor.
The entire dependency chain is thus resolved when the first object is instantiated. Note that there might/should be some lazy loading involved here.
The only places where the container's methods are accessed to create instances are in main and inside the container itself:
Any class used in main receives its dependencies from main's container instance.
Any class not used in main, but rather used only as a dependency, is instantiated by the container and receives its dependencies from within there.
Any class used neither in main nor indirectly as a dependency somewhere below the classes used in main will obviously never be instantiated.
Thus, no class actually needs a reference to the container. In fact, no class needs to know there even is a container in your project. All they know is which interfaces they personally need.
The Container can either be provided by some third party library/framework or you can code it yourself. Typically, it will use some configuration file to determine which implementations are actually supposed to be used for the various interfaces. Third party containers will usually perform some sort of code analysis supported by annotations to "autowire" implementations, so if you go with a ready-made tool, make sure you read up on how that part works because it will generally make your life easier down the road.
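For intuition about what such a container does internally, here is a deliberately tiny hand-rolled sketch (illustrative only; real containers add configuration files, scopes, and autowiring on top of this idea):
using System;
using System.Collections.Generic;

public sealed class Container
{
    private readonly Dictionary<Type, Func<object>> factories = new Dictionary<Type, Func<object>>();
    private readonly Dictionary<Type, object> singletons = new Dictionary<Type, object>();

    // Register one factory per interface; the factory may resolve further dependencies.
    public void Register<TInterface>(Func<Container, TInterface> factory)
    {
        factories[typeof(TInterface)] = () => factory(this);
    }

    public T Resolve<T>()
    {
        if (!singletons.TryGetValue(typeof(T), out var instance))
        {
            instance = factories[typeof(T)]();
            singletons[typeof(T)] = instance; // cache so the whole graph shares one instance
        }
        return (T)instance;
    }
}

Registration then wires the dependency chain, mirroring the GuiCreator example (ILoggerDependency and the implementation names are hypothetical):
container.Register<ILoggerDependency>(c => new LoggerDependencyImplementation());
container.Register<ILogger>(c => new LoggerImplementation(c.Resolve<ILoggerDependency>()));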

The benefits and correct usage of a DI Container

I'm having trouble seeing the advantage of an IoC (DI) container like Ninject, Unity, or whatever. I understand the concepts as follows:
DI: injecting a dependency into the class that requires it (preferably via constructor injection). I totally see why less tight coupling is a good thing.
public class MyClass {
    ISomeService svc;

    public MyClass(ISomeService svc) {
        this.svc = svc; // assign to the field, not back to the parameter
    }

    public void doSomething() {
        svc.doSomething();
    }
}
Service Locator: when a "container" is used directly inside the class that requires a dependency, to resolve that dependency. I do get the point that this introduces another dependency, and I also see that basically nothing is being injected.
public class MyClass {
    public MyClass() {}

    public void doSomething() {
        ServiceLocator.resolve<ISomeService>().doSomething();
    }
}
Now, what confuses me is the concept of a "DI container". To me, it looks exactly like a service locator, which (as far as I have read) should only be used in the entry point/startup method of an application, to register and resolve the dependencies and inject them into the constructors of other classes, and not within a concrete class that needs the dependency (probably for the same reason service locators are considered "bad").
What is the purpose of using the container when I could just create the dependency and pass it to the constructor?
public void main() {
    DIContainer.register<ISomeService>(new SomeService());
    // ...
    var myclass = new MyClass(DIContainer.resolve<ISomeService>());
    myclass.doSomething();
}
Does it really make sense to pass all the dependencies to all classes in the application initialization method? There might be 100 dependencies which will eventually be needed (or not); do you really create them all in the init method just because it's considered good practice?
What is the purpose of using the container when I could just create the dependency and pass it to the constructor?
DI containers are supposed to help you create an object graph quickly. You just tell it which concrete implementations you want to use for which abstractions (the registration phase), and then it can create any object you want (the resolve phase).
If you create the dependencies and pass them to the constructor (in the application initialization code), then you are actually doing Pure DI.
I would argue that Pure DI is a better approach in many cases. See my article here
Does it really make sense to pass all the dependencies to all classes in the application initialization method? There might be 100 dependencies which will eventually be needed (or not); do you really create them all in the init method just because it's considered good practice?
I would say yes. You should create the object graph when your application starts up. This is called the composition root.
If you need to create objects after your application has started, then you should use factories (mainly abstract factories), as sketched below, and such factories will be created along with the other objects in the composition root.
Your classes shouldn't do much in their constructors; this keeps the cost of creating all the dependencies at the composition root low.
However, I would say that it is OK to create some types of objects using the new keyword in special cases, like when the object is a simple Data Transfer Object (DTO).
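As a sketch of the abstract factory idea mentioned above (all names here are illustrative assumptions, reusing ISomeService from the question):
// A factory is composed once at the composition root,
// but can create objects on demand after startup.
public interface IReportFactory
{
    Report Create(string title);
}

public sealed class ReportFactory : IReportFactory
{
    private readonly ISomeService svc; // dependency captured at startup

    public ReportFactory(ISomeService svc)
    {
        this.svc = svc;
    }

    public Report Create(string title) // called whenever a new Report is needed
    {
        return new Report(title, svc);
    }
}

public sealed class Report
{
    public Report(string title, ISomeService svc) { /* ... */ }
}

Consumers receive an IReportFactory through their constructors (wired up in the composition root) and call Create(...) later, whenever a new object is actually needed.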

Laravel 4: Facade vs DI (when to use)

My understanding is that a facade is used as an alternative to dependency injection. Please correct me if I'm mistaken. What is not clear is when one should use one or the other.
What are the advantages/disadvantages of each approach? How should I determine when to use one or the other?
Lastly, why not use both? I can create a facade that references an interface. It seems Sentry 2 is written this way. Is there a best practice?
FACADES
Facades are not an alternative to dependency injection.
A Laravel Facade is an implementation of the Service Locator pattern, creating a clean and beautiful way of accessing objects:
MyClass::doSomething();
This is the PHP syntax for a static method call, but Laravel changes the game and makes them non-static behind the scenes, giving you a beautiful, enjoyable, and testable way of writing your applications.
DEPENDENCY INJECTION
Dependency Injection is, basically, a way of passing parameters to your constructors and methods while automatically instantiating them.
class MyClass {
    private $property;

    public function __construct(MyOtherClass $property)
    {
        /// Here you can use the magic of Dependency Injection
        $this->property = $property;
        /// $property already is an object of MyOtherClass
    }
}
A better construction would be to use interfaces in your dependency-injected constructors:
class MyClass {
    private $property;

    public function __construct(MyInterface $property)
    {
        /// Here you can use the magic of Dependency Injection
        $this->property = $property;
        /// $property will receive an object of a concrete class that implements MyInterface.
        /// That concrete class is bound elsewhere in Laravel, which is also a way to make
        /// your application easy to maintain, because you can swap implementations of your
        /// interfaces easily.
    }
}
But note that in Laravel you can inject classes and interfaces the same way. To inject an interface, you just have to tell Laravel which concrete class to use, like this:
App::bind('MyInterface', 'MyOtherClass');
This will tell Laravel that every time one of your methods needs an instance of MyInterface it should give it one of MyOtherClass.
What happens here is that this constructor has a "dependency": MyOtherClass, which will be automatically injected by Laravel using the IoC container. So, when you create an instance of MyClass, Laravel automatically creates an instance of MyOtherClass and puts it in the variable $property.
Dependency Injection is just an odd piece of jargon developers created for something as simple as "automatic instantiation of constructor parameters".
WHEN TO USE ONE OR THE OTHER?
As you can see, they are completely different things, so you won't ever need to decide between them; rather, you will have to decide where to go with one or the other in different parts of your application.
Use Facades to ease the way you write your code. For example, it's good practice to create packages for your application modules; creating Facades for those packages is also a way to make them seem like a Laravel public class, accessed using the static syntax.
Use Dependency Injection every time your class needs to use data or processing from another class. It will make your code testable, because you will be able to "inject" a mock of those dependencies into your class, and you will also be exercising the single responsibility principle (take a look at the SOLID principles).
Facades, as noted, are intended to simplify a potentially complicated interface.
Facades are still testable
Laravel's implementation goes a step further and allows you to define the base-class that the Facade "points" to.
This gives a developer the ability to "mock" a Facade - by switching the base-class out with a mock object.
In that sense, you can use them and still have testable code. This is where some confusion lies within the PHP community.
DI is often cited as making your code testable - they make mocking class dependencies easy. (Sidenote: Interfaces and DI have other important reasons for existing!)
Facades, on the other hand, are often cited as making testing harder because you can't "simply inject a mock object" into whatever code you're testing. However, as noted, you can in fact "mock" them.
Facade vs DI
This is where people get confused regarding whether Facades are an alternative to DI or not.
In a sense, they both add a dependency to your class - You can either use DI to add a dependency or you can use a Facade directly - FacadeName::method($param);. (Hopefully you are not instantiating any class directly within another :D ).
This does not make Facades an alternative to DI; instead, within Laravel, it creates a situation where you may decide to add class dependencies in one of two ways: either using DI or using a Facade. (You can, of course, use other ways; these two are just the most often used "testable" ways.)
Laravel's Facades are an implementation of the Service Locator pattern, not the Facade pattern.
In my opinion you should avoid service locator within your domain, opting to only use it in your service and web transport layers.
http://martinfowler.com/articles/injection.html#UsingAServiceLocator
I think that, in terms of Laravel, Facades help you keep your code simple and still testable, since you can mock Facades. However, it might be a bit harder to tell a controller's dependencies if you use Facades, since they are probably spread all over your code.
With dependency injection you need to write a bit more code, since you have to create interfaces and services to handle the dependencies; however, it is much clearer later on what a controller depends on, since its dependencies are explicitly listed in the controller constructor.
I guess it's a matter of deciding which method you prefer using.

Why does one use dependency injection?

I'm trying to understand dependency injections (DI), and once again I failed. It just seems silly. My code is never a mess; I hardly write virtual functions and interfaces (although I do once in a blue moon) and all my configuration is magically serialized into a class using json.net (sometimes using an XML serializer).
I don't quite understand what problem it solves. It looks like a way to say: "hi. When you run into this function, return an object that is of this type and uses these parameters/data."
But... why would I ever use that? Note I have never needed to use object as well, but I understand what that is for.
What are some real situations in either building a website or desktop application where one would use DI? I can come up with cases easily for why someone may want to use interfaces/virtual functions in a game, but it's extremely rare (rare enough that I can't remember a single instance) to use that in non-game code.
First, I want to explain an assumption that I make for this answer. It is not always true, but quite often:
Interfaces are adjectives; classes are nouns.
(Actually, there are interfaces that are nouns as well, but I want to generalize here.)
So, e.g. an interface may be something such as IDisposable, IEnumerable or IPrintable. A class is an actual implementation of one or more of these interfaces: List or Map may both be implementations of IEnumerable.
To get to the point: often your classes depend on each other. E.g., you could have a Database class which accesses your database (hah, surprise! ;-)), but you also want this class to do logging about accessing the database. Suppose you have another class Logger; then Database has a dependency on Logger.
So far, so good.
You can model this dependency inside your Database class with the following line:
var logger = new Logger();
and everything is fine. It is fine up to the day when you realize that you need a bunch of loggers: Sometimes you want to log to the console, sometimes to the file system, sometimes using TCP/IP and a remote logging server, and so on ...
And of course you do NOT want to change all your code (meanwhile you have gazillions of it) and replace all lines
var logger = new Logger();
by:
var logger = new TcpLogger();
First, this is no fun. Second, this is error-prone. Third, this is stupid, repetitive work for a trained monkey. So what do you do?
Obviously it's a quite good idea to introduce an interface ICanLog (or similar) that is implemented by all the various loggers. So step 1 in your code is that you do:
ICanLog logger = new Logger();
Now the inferred type doesn't change any more; you always have one single interface to develop against. The next step is that you do not want to have new Logger() over and over again, so you put the responsibility for creating new instances into a single, central factory class, and you get code such as:
ICanLog logger = LoggerFactory.Create();
The factory itself decides what kind of logger to create. Your code doesn't care any longer, and if you want to change the type of logger being used, you change it once: Inside the factory.
Now, of course, you can generalize this factory, and make it work for any type:
ICanLog logger = TypeFactory.Create<ICanLog>();
Somewhere this TypeFactory needs configuration data describing which actual class to instantiate when a specific interface type is requested, so you need a mapping. Of course you could do this mapping inside your code, but then a type change would mean recompiling. You could instead put this mapping inside an XML file, for example. This allows you to change the actually used class even after compile time, that is, dynamically, without recompiling!
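A minimal sketch of what such a mapping-driven TypeFactory could look like (the config-loading part is left as an assumption; Type.GetType and Activator.CreateInstance are standard .NET reflection calls):
using System;
using System.Collections.Generic;

public static class TypeFactory
{
    // Loaded once from an XML (or similar) file at startup,
    // e.g. "ICanLog" -> "MyApp.TcpLogger, MyApp" (illustrative names).
    private static readonly Dictionary<string, string> Mapping = LoadMappingFromConfig();

    public static T Create<T>()
    {
        string typeName = Mapping[typeof(T).Name];
        Type type = Type.GetType(typeName, throwOnError: true);
        return (T)Activator.CreateInstance(type);
    }

    private static Dictionary<string, string> LoadMappingFromConfig()
    {
        // Parse the interface-to-class pairs from the config file here.
        return new Dictionary<string, string>();
    }
}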
To give you a useful example of this: think of software that does not log normally, but when your customer calls and asks for help because he has a problem, all you send him is an updated XML config file, and now he has logging enabled, and your support can use the log files to help your customer.
And now, when you replace names a little bit, you end up with a simple implementation of a Service Locator, which is one of two patterns for Inversion of Control (since you invert control over who decides what exact class to instantiate).
All in all this reduces the dependencies in your code, but now all your code has a dependency on the central, single service locator.
Dependency injection is now the next step in this line: just get rid of this single dependency on the service locator. Instead of various classes asking the service locator for an implementation of a specific interface, you, once again, invert control over who instantiates what.
With dependency injection, your Database class now has a constructor that requires a parameter of type ICanLog:
public Database(ICanLog logger) { ... }
Now your database always has a logger to use, but it does not know any more where this logger comes from.
And this is where a DI framework comes into play: You configure your mappings once again, and then ask your DI framework to instantiate your application for you. As the Application class requires an ICanPersistData implementation, an instance of Database is injected - but for that it must first create an instance of the kind of logger which is configured for ICanLog. And so on ...
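For example, with one concrete DI container, Microsoft.Extensions.DependencyInjection, the wiring sketched above might look like this (Application, Database, and TcpLogger are assumed from the running example; this is a sketch, not the only way to do it):
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<ICanLog, TcpLogger>();
services.AddSingleton<ICanPersistData, Database>();
services.AddSingleton<Application>();

var provider = services.BuildServiceProvider();

// The container builds Application, injecting a Database,
// which in turn receives the configured ICanLog implementation.
var app = provider.GetRequiredService<Application>();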
So, to cut a long story short: Dependency injection is one of two ways of how to remove dependencies in your code. It is very useful for configuration changes after compile-time, and it is a great thing for unit testing (as it makes it very easy to inject stubs and / or mocks).
In practice, there are things you cannot do without a service locator (e.g., if you do not know in advance how many instances of a specific interface you need: a DI framework always injects exactly one instance per parameter, but you can of course call a service locator inside a loop); hence most DI frameworks also provide a service locator.
But basically, that's it.
P.S.: What I described here is a technique called constructor injection. There is also property injection, where properties rather than constructor parameters are used for defining and resolving dependencies. Think of property injection as optional dependencies, and of constructor injection as mandatory dependencies. A full discussion of this is beyond the scope of this question, but a short sketch of the two styles follows.
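A sketch of the two styles, reusing ICanLog from above (OptionalReport is an illustrative name):
// Constructor injection: the dependency is mandatory;
// the object cannot be constructed without it.
public class Database
{
    private readonly ICanLog logger;

    public Database(ICanLog logger)
    {
        this.logger = logger;
    }
}

// Property injection: the dependency is optional
// and can be set (or swapped) after construction.
public class OptionalReport
{
    public ICanLog Logger { get; set; } // may remain null if never assigned
}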
I think a lot of times people get confused about the difference between dependency injection and a dependency injection framework (or a container as it is often called).
Dependency injection is a very simple concept. Instead of this code:
public class A {
    private B b;

    public A() {
        this.b = new B(); // A *depends on* B
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    A a = new A();
    a.DoSomeStuff();
}
you write code like this:
public class A {
    private B b;

    public A(B b) { // A now takes its dependencies as arguments
        this.b = b; // look ma, no "new"!
    }

    public void DoSomeStuff() {
        // Do something with B here
    }
}

public static void Main(string[] args) {
    B b = new B(); // B is constructed here instead
    A a = new A(b);
    a.DoSomeStuff();
}
And that's it. Seriously. This gives you a ton of advantages. Two important ones are the ability to control functionality from a central place (the Main() function) instead of spreading it throughout your program, and the ability to more easily test each class in isolation (because you can pass mocks or other faked objects into its constructor instead of a real value).
The drawback, of course, is that you now have one mega-function that knows about all the classes used by your program. That's what DI frameworks can help with. But if you're having trouble understanding why this approach is valuable, I'd recommend starting with manual dependency injection first, so you can better appreciate what the various frameworks out there can do for you.
As the other answers stated, dependency injection is a way to create your dependencies outside of the class that uses them. You inject them from the outside and take control of their creation away from the inside of your class. This is also why dependency injection is a realization of the Inversion of Control (IoC) principle.
IoC is the principle; DI is the pattern. The reason that "you might need more than one logger" is never actually met, as far as my experience goes; the actual reason is that you really need it whenever you test something. An example:
My Feature:
When I look at an offer, I want to mark that I looked at it automatically, so that I don't forget to do so.
You might test this like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var formdata = { . . . }

    // System under test
    var weasel = new OfferWeasel();

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(new DateTime(2013, 01, 13, 13, 01, 0, 0));
}
So somewhere in the OfferWeasel, it builds an Offer object like this:
public class OfferWeasel
{
    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = DateTime.Now;
        return offer;
    }
}
The problem here is that this test will most likely always fail, since the date being set will differ from the date being asserted; even if you put DateTime.Now in the test code, it might be off by a couple of milliseconds and will therefore always fail. A better solution is to create an interface for this that allows you to control what time will be set:
public interface IGotTheTime
{
    DateTime Now { get; }
}

public class CannedTime : IGotTheTime
{
    public DateTime Now { get; set; }
}

public class ActualTime : IGotTheTime
{
    public DateTime Now { get { return DateTime.Now; } }
}

public class OfferWeasel
{
    private readonly IGotTheTime _time;

    public OfferWeasel(IGotTheTime time)
    {
        _time = time;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _time.Now;
        return offer;
    }
}
The interface is the abstraction. One implementation is the REAL thing, and the other one allows you to fake the time where it is needed. The test can then be changed like this:
[Test]
public void ShouldUpdateTimeStamp()
{
    // Arrange
    var date = new DateTime(2013, 01, 13, 13, 01, 0, 0);
    var formdata = { . . . }
    var time = new CannedTime { Now = date };

    // System under test
    var weasel = new OfferWeasel(time);

    // Act
    var offer = weasel.Create(formdata);

    // Assert
    offer.LastUpdated.Should().Be(date);
}
Like this, you applied the "inversion of control" principle by injecting a dependency (getting the current time). The main reason to do this is easier isolated unit testing; there are other ways of achieving it. For example, an interface and a class are unnecessary here, since in C# functions can be passed around as variables, so instead of an interface you could use a Func<DateTime> to achieve the same. Or, if you take a dynamic approach, you just pass any object that has the equivalent method (duck typing), and you don't need an interface at all.
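That Func<DateTime> variant might look like this (a sketch reusing the names from the example above):
public class OfferWeasel
{
    private readonly Func<DateTime> _now;

    public OfferWeasel(Func<DateTime> now)
    {
        _now = now;
    }

    public Offer Create(Formdata formdata)
    {
        var offer = new Offer();
        offer.LastUpdated = _now();
        return offer;
    }
}

In production you would construct it as new OfferWeasel(() => DateTime.Now), and in a test as new OfferWeasel(() => date).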
You will hardly ever need more than one logger. Nonetheless, dependency injection is essential for statically typed code such as Java or C#.
And...
It should also be noted that an object can only properly fulfill its purpose at runtime if all its dependencies are available, so there is not much use in setting up property injection. In my opinion, all dependencies should be satisfied when the constructor is called, so constructor injection is the thing to go with.
I think the classic answer is to create a more decoupled application, which has no knowledge of which implementation will be used during runtime.
For example, we're a central payment provider, working with many payment providers around the world. However, when a request is made, I have no idea which payment processor I'm going to call. I could program one class with a ton of switch cases, such as:
class PaymentProcessor {
    private String type;

    public PaymentProcessor(String type) {
        this.type = type;
    }

    public void authorize() {
        if (type.equals(Consts.PAYPAL)) {
            // Do this;
        } else if (type.equals(Consts.OTHER_PROCESSOR)) {
            // Do that;
        }
    }
}
Now imagine that you'll need to maintain all this code in a single class because it is not decoupled properly; for every new processor you support, you'll need to add a new if/switch case to every method, and this only gets more complicated. However, by using Dependency Injection (or Inversion of Control, as it's sometimes called, meaning that whoever controls the running of the program is known only at runtime, not at compilation), you can achieve something very neat and maintainable.
class PaypalProcessor implements PaymentProcessor {
    public void authorize() {
        // Do PayPal authorization
    }
}

class OtherProcessor implements PaymentProcessor {
    public void authorize() {
        // Do other processor authorization
    }
}

class PaymentFactory {
    public static PaymentProcessor create(String type) {
        switch (type) {
            case Consts.PAYPAL:
                return new PaypalProcessor();
            case Consts.OTHER_PROCESSOR:
                return new OtherProcessor();
            default:
                throw new IllegalArgumentException("Unknown processor type: " + type);
        }
    }
}

interface PaymentProcessor {
    void authorize();
}
** The code won't compile, I know :)
The main reason to use DI is that you want to put the responsibility for knowledge of the implementation where that knowledge is. The idea of DI is very much in line with encapsulation and design by interface.
If the front end asks the back end for some data, then it is unimportant to the front end how the back end resolves that question. That is up to the request handler.
That has already been common in OOP for a long time, with many code pieces like:
I_Dosomething x = new Impl_Dosomething();
The drawback is that the implementation class is still hardcoded, so the front end has knowledge of which implementation is used. DI takes design by interface one step further: the only thing the front end needs to know is the knowledge of the interface.
In between DIY and DI is the pattern of a service locator, where the front end has to provide a key (present in the registry of the service locator) to have its request resolved.
Service locator example:
I_Dosomething x = ServiceLocator.returnDoing(pKey);
DI example:
I_Dosomething x = DIContainer.returnThat();
One of the requirements of DI is that the container must be able to find out which class is the implementation of which interface. Hence a DI container requires strongly typed design and only one implementation of each interface at a time. If you need more implementations of an interface at the same time (like a calculator), you need the service locator or the factory design pattern.
D(b)I: Dependency Injection and Design by Interface.
This restriction is not a very big practical problem, though. The benefit of using D(b)I is that it serves communication between the client and the provider. An interface is a perspective on an object or a set of behaviours. The latter is crucial here.
I prefer the administration of service contracts together with D(b)I in coding; they should go together. Using D(b)I as a technical solution without organizational administration of service contracts is not very beneficial in my point of view, because DI is then just an extra layer of encapsulation. But when you can use it together with organizational administration, you can really make use of the organizing principle D(b)I offers.
It can help you in the long run to structure communication with the client and other technical departments on topics such as testing, versioning, and the development of alternatives. When you have an implicit interface, as in a hardcoded class, it is much less communicable over time than when you make it explicit using D(b)I. It all boils down to maintenance, which happens over time, not at a single point in time. :-)
Quite frankly, I believe people use these dependency injection libraries/frameworks because they only know how to do things at runtime, as opposed to load time. All this crazy machinery can be substituted by setting your CLASSPATH environment variable (or another language's equivalent, like PYTHONPATH or LD_LIBRARY_PATH) to point to your alternative implementations (all with the same name) of a particular class. So in the accepted answer you'd just leave your code like:
var logger = new Logger() //sane, simple code
And the appropriate logger will be instantiated, because the JVM (or whatever other runtime or .so loader you have) will fetch it from the class path configured via the environment variable mentioned above.
No need to make everything an interface, no need to have the insanity of spawning broken objects to have stuff injected into them, no need to have insane constructors with every piece of internal machinery exposed to the world. Just use the native functionality of whatever language you're using instead of coming up with dialects that won't work in any other project.
P.S.: This is also true for testing/mocking. You can very well just set your environment to load the appropriate mock class at load time, and skip the mocking framework madness.
