Dependency Injection, injecting an "injectable" object (service) into a newable (entity) - dependency-injection

When writing code we should be able to identify two big groups of objects:
Injectables
Newables
http://www.loosecouplings.com/2011/01/how-to-write-testable-code-overview.html
http://misko.hevery.com/2008/09/30/to-new-or-not-to-new/
Injectable objects are objects (services) that expose dependencies in their constructors. These dependencies are usually resolved using an IoC container, and injectables can only ask for other injectables in their constructors.
Newables are objects that also expose dependencies in their constructors, but newables can only ask for other newable objects (Entities, Value Objects). Another characteristic of newable objects is that they should not hold a reference to an injectable object.
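For example, a minimal C# sketch of the distinction (all of the type names below are made up purely for illustration):
// Injectable (service): asks only for other injectables in its constructor
// and is typically wired up by the IoC container.
public interface IPaymentGateway { void Charge(decimal amount); }

public class OrderProcessor
{
    private readonly IPaymentGateway _gateway;   // another injectable

    public OrderProcessor(IPaymentGateway gateway)
    {
        _gateway = gateway;
    }
}

// Newables (entity and value-like object): ask only for other newables,
// hold no reference to a service, and are created with "new".
public class Customer { public string Name { get; set; } }

public class Order
{
    public Order(Customer customer, decimal total)
    {
        Customer = customer;
        Total = total;
    }

    public Customer Customer { get; private set; }
    public decimal Total { get; private set; }
}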
But when writing code, we often need to "inject" a service (injectable) into an Entity (newable)
I have been thinking that maybe a newable object should expose a service dependency at the method level instead, but this sounds like a lot of work... just thinking about resolving the dependencies every time a method is called... well, this smells like we would have to use the Service Locator anti-pattern.
The way I have solved this is:
Create an interface with a method exposing the dependency (the service will be used in this method)
Create an Extension Method for the interface and place it in a different namespace, perhaps in another assembly, and just wrap the call to the original method resolving the dependency using a service locator
Doing this we have a consistent separation between newable and injectable objects with the ability to use services in our newables easily
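For illustration, a rough sketch of this arrangement might look like the following (IPriceCalculator, IDiscountable, Order and the minimal ServiceLocator below are hypothetical stand-ins, not an existing API):
using System;
using System.Collections.Generic;

// Minimal stand-in for whatever container-backed locator is actually used (hypothetical).
public static class ServiceLocator
{
    private static readonly Dictionary<Type, object> Services = new Dictionary<Type, object>();
    public static void Register<T>(T instance) { Services[typeof(T)] = instance; }
    public static T Resolve<T>() { return (T)Services[typeof(T)]; }
}

// The injectable we want the newable to use.
public interface IPriceCalculator
{
    decimal DiscountFor(decimal total);
}

// The interface exposing the dependency at method level.
public interface IDiscountable
{
    decimal CalculateDiscount(IPriceCalculator calculator);
}

// The newable only ever sees the service as a method argument.
public class Order : IDiscountable
{
    public decimal Total { get; set; }

    public decimal CalculateDiscount(IPriceCalculator calculator)
    {
        return calculator.DiscountFor(Total);
    }
}

// Placed in a different namespace (perhaps another assembly): the extension
// method wraps the call, resolving the dependency through the service locator.
namespace MyApp.Extensions
{
    public static class DiscountableExtensions
    {
        public static decimal CalculateDiscount(this IDiscountable item)
        {
            var calculator = ServiceLocator.Resolve<IPriceCalculator>();
            return item.CalculateDiscount(calculator);
        }
    }
}
Calling code that imports MyApp.Extensions can then write order.CalculateDiscount() with no arguments, while unit tests can bypass the extension entirely and pass a mock IPriceCalculator to the interface method directly.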
What do you think?
Is using the service locator in an extension method considered a bad practice?
How would you unit-test the extension method call?

But when writing code, we often need to "inject" a service
(injectable) into an Entity (newable)
This is not the case. If you find the need to do this, then there is some functionality in the Entity which should be in a service.
Let's say your newable is ShoppingCart and your injectable is a database repository. You want to be able to do this:
// somehow cart already got the repository
cart.save();
Well, you're doing it wrong. Instead you need to switch things around and do:
repository.save( cart );
If you could provide a situation where you feel the need to do this, we could discuss the specifics of that situation.

Related

How do you deal with shared objects across the app (Service Locator and/or Dependency Injection)?

How do you deal with components shared across the app (Service Locator and/or DI)?
Every application has common components that we meet over and over again. These might be DatabaseManager, CacheManager, UserManager, ReportingManager, SettingsManager, BackendAPIManager, CrashManager, ...
Typically these components are classes in my case, because I like encapsulating related things into components that provide nice and easy access to other parts of the code. As a general rule of thumb, we can encapsulate the code into one class as long as the class follows the single responsibility principle and is not too big.
Some of these classes are wrappers for 3rd party libraries - ReportingManager, CrashManager (Google Analytics, Crashlytics, ...), some encapsulate the web service calling layer - BackendAPIManager (for AFNetworking). Others are also wrappers - DatabaseManager (Core Data, FMDB), CacheManager (serializing to file), SettingsManager (NSUserDefaults).
Most of the mentioned components/classes don't store state or anything else - their implementations just encapsulate behavioural code. But sometimes a class encapsulates both behaviour and state; for example UserManager interacts with BackendAPIManager, stores the retrieved user, stores the login state and has nice API methods: login, logout, register, resetPassword, isLoggedIn.
Now, there's a high temptation to create singletons for that, but we all know singletons are bad for unit testing. So, one of the possible ways is to create a sort of singleton Registry (also known as the Service Locator design pattern) with registerObject and getObject methods, create and register all these objects during app launch, and then always access them via:
[[Registry shared] getObject:@protocol(UserManagerProtocol)];
Then, in your tests, you unregister the managers you don't need and register mocks via Registry's registerObject. The Service Locator pattern is described by Martin Fowler and, according to him, it's an alternative to Dependency Injection. However, another remarkable author, Mark Seemann from the .NET world, claims Service Locator is an anti-pattern, mostly because you have to remember to register new shared objects, you need to know how the application works because the Registry hides all the dependencies, and devs don't like resetting and registering these shared objects for unit tests.
While both Service Locator and Dependency Injection help to separate dependent objects from the code that uses them, Dependency Injection is more popular in various discussions and forums and seems to have gained more traction. Though I previously used Service Locator for shared components, and sometimes DI for injecting non-shared objects, I'm interested in an alternative way to deal with shared objects. Some might suggest not caring about the singleness of the objects - just create new instances and inject them every time using dependency injection - but duplicating these objects is a bad approach and will sometimes lead to issues if an object stores state.
I'm using and extending from BaseModel and BaseViewController, so it's very easy for me to add related manager properties on those two classes for dependency injection. I've used the "property with a default value" type of DI implementation approach described here:
- (UserManager *)userManager {
    // Lazy instantiation of the default value in case the dependency wasn't injected explicitly
    if (!_userManager) {
        _userManager = [[Registry shared] getObject:@protocol(UserManager)];
    }
    return _userManager;
}
This way I don't need to explicitly pass dependencies to subclasses of BaseModel and BaseViewController over and over again. The default value will be taken from the Registry. All the shared objects will be registered on app launch (ServiceLocator):
[[Registry shared] register:[UserManager new]];
[[Registry shared] register:[SettingsManager new]];
[[Registry shared] register:[BackendAPIManager new]];
...
And then use self.userManager, self.settingsManager, self.backendAPIManager inside other code for accessing them.
So the proposed solution uses Dependency Injection + Service Locator, and it's very easy to write unit tests and explicitly inject and set dependencies via the properties that we exposed on BaseModel and BaseViewController (the userManager, settingsManager, backendAPIManager properties).
What do you think about this approach? I'm just trying to avoid using a DI framework, as I always try to keep my code clean and easy to debug.

Laravel 4: Facade vs DI (when to use)

My understanding is that a facade is used as an alternative to dependency injection. Please correct me if I'm mistaken. What is not clear is when one should use one or the other.
What are the advantages/disadvantages of each approach? How should I determine when to use one or the other?
Lastly, why not use both? I can create a facade that references an interface. It seems Sentry 2 is written this way. Is there a best practice?
FACADES
Facades are not an alternative to dependency injection.
Laravel Facade is an implementation of the Service Locator Pattern, creating a clean and beautiful way of accessing objects:
MyClass::doSomething();
This is the PHP syntax for a static method call, but Laravel changes the game and makes them non-static behind the scenes, giving you a beautiful, enjoyable and testable way of writing your applications.
DEPENDENCY INJECTION
Dependency Injection is, basically, a way of passing parameters to your constructors and methods while automatically instantiating them.
class MyClass {
    private $property;

    public function __construct(MyOtherClass $property)
    {
        /// Here you can use the magic of Dependency Injection
        $this->property = $property;
        /// $property already is an object of MyOtherClass
    }
}
A better construction would be to use interfaces in your dependency-injected constructors:
class MyClass {
    private $property;

    public function __construct(MyInterface $property)
    {
        /// Here you can use the magic of Dependency Injection
        $this->property = $property;
        /// $property will receive an object of a concrete class that implements MyInterface.
        /// That binding should be defined elsewhere in Laravel, but this also makes
        /// your application easier to maintain, because you can swap implementations
        /// of your interfaces easily.
    }
}
But note that in Laravel you can inject classes and interfaces the same way. To inject interfaces you just have to tell Laravel which concrete class to use, like this:
App::bind('MyInterface', 'MyOtherClass');
This will tell Laravel that every time one of your methods needs an instance of MyInterface it should give it one of MyOtherClass.
What happens here is that this constructor has a "dependency": MyOtherClass, which will be automatically injected by Laravel using the IoC container. So, when you create an instance of MyClass, Laravel automatically creates an instance of MyOtherClass and puts it in the variable $property.
Dependency Injection is just odd jargon developers created for something as simple as "automatic generation of parameters".
WHEN TO USE ONE OR THE OTHER?
As you can see, they are completely different things, so you won't ever need to decide between them, but you will have to decide where to go with one or the other in different parts of your application.
Use Facades to ease the way you write your code. For example: it's a good practice to create packages for your application modules, so creating Facades for those packages is also a way to make them seem like Laravel public classes and to access them using the static syntax.
Use Dependency Injection every time your class needs to use data or processing from another class. It will make your code testable, because you will be able to "inject" a mock of those dependencies into your class and you will be also exercising the single responsibility principle (take a look at the SOLID principles).
Facades, as noted, are intended to simplify a potentially complicated interface.
Facades are still testable
Laravel's implementation goes a step further and allows you to define the base-class that the Facade "points" to.
This gives a developer the ability to "mock" a Facade - by switching the base-class out with a mock object.
In that sense, you can use them and still have testable code. This is where some confusion lies within the PHP community.
DI is often cited as making your code testable - they make mocking class dependencies easy. (Sidenote: Interfaces and DI have other important reasons for existing!)
Facades, on the other hand, are often cited as making testing harder because you can't "simply inject a mock object" into whatever code you're testing. However, as noted, you can in fact "mock" them.
Facade vs DI
This is where people get confused regarding whether Facades are an alternative to DI or not.
In a sense, they both add a dependency to your class - You can either use DI to add a dependency or you can use a Facade directly - FacadeName::method($param);. (Hopefully you are not instantiating any class directly within another :D ).
This does not make Facades an alternative to DI, but instead, within Laravel, does create a situation where you may decide to add class dependencies one of 2 ways - either using DI or by using a Facade. (You can, of course, use other ways. These "2 ways" are just the most-often used "testable way").
Laravel's Facades are an implementation of the Service Locator pattern, not the Facade pattern.
In my opinion you should avoid service locator within your domain, opting to only use it in your service and web transport layers.
http://martinfowler.com/articles/injection.html#UsingAServiceLocator
I think that in terms of Laravel, Facades help you keep your code simple and still testable, since you can mock facades. However, it might be a bit harder to tell a controller's dependencies if you use facades, since they are probably all over the place in your code.
With dependency injection you need to write a bit more code, since you need to deal with creating interfaces and services to handle the dependencies. However, it's a lot clearer later on what a controller depends on, since these are clearly mentioned in the controller constructor.
I guess it's a matter of deciding which method you prefer using.

Unit of Work with Dependency Injection

I'm building a relatively simple webapp in ASP.NET MVC 4, using Entity Framework to talk to MS SQL Server. There's lots of scope to expand the application in future, so I'm aiming for a pattern that maximises reusability and adaptability in the code, to save work later on. The idea is:
Unit of Work pattern, to save problems with the database by only committing changes at the end of each set of actions.
Generic repository using BaseRepository<T> because the repositories will be mostly the same; the odd exception can extend and add its additional methods.
Dependency injection to bind those repositories to the IRepository<T> that the controllers will be using, so that I can switch data storage methods and such with minimal fuss (not just for best practice; there is a real chance of this happening). I'm using Ninject for this.
I haven't really attempted something like this from scratch before, so I've been reading up and I think I've got myself muddled somewhere. So far, I have an interface IRepository<T> which is implemented by BaseRepository<T>, which contains an instance of the DataContext which is passed into its constructor. This interface has methods for Add, Update, Delete, and various types of Get (single by ID, single by predicate, group by predicate, all). The only repository that doesn't fit this interface (so far) is the Users repository, which adds User Login(string username, string password) to allow login (the implementation of which handles all the salting, hashing, checking etc).
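For reference, the shape being described might look roughly like this (the exact signatures below are assumptions, not the actual code):
using System;
using System.Linq;
using System.Linq.Expressions;

public class User { public int ID { get; set; } }

public interface IRepository<T> where T : class
{
    T GetById(int id);
    T GetSingle(Expression<Func<T, bool>> predicate);
    IQueryable<T> Get(Expression<Func<T, bool>> predicate);
    IQueryable<T> GetAll();
    void Add(T entity);
    void Update(T entity);
    void Delete(T entity);
}

// The one repository (so far) that needs more than the generic contract.
public interface IUserRepository : IRepository<User>
{
    User Login(string username, string password);
}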
From what I've read, I now need a UnitOfWork class that contains instances of all the repositories. This unit of work will expose the repositories, as well as a SaveChanges() method. When I want to manipulate data, I instantiate a unit of work, access the repositories on it (which are instantiated as needed), and then save. If anything fails, nothing changes in the database because it won't reach the single save at the end. This is all fine. My problem is that all the examples I can find seem to do one of two things:
Some pass a data context into the unit of work, from which they retrieve the various repositories. This negates the point of DI by having my Entity-Framework-specific DbContext (or a class inherited from it) in my unit of work.
Some call a Get method to request a repository, which is the service locator pattern, which is at least unpopular, if not an antipattern, and either way I'd like to avoid it here.
Do I need to create an interface for my data source and inject that into the unit of work as well? I can't find any documentation on this that's clear and/or complete enough to explain.
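One way to keep EF out of the unit of work is indeed to put the data source behind its own interface and inject that. A rough sketch, where the interface and member names are assumptions rather than a prescribed API:
using System;
using System.Linq;

// Abstraction over whatever backs the repositories (an EF DbContext today,
// something else tomorrow). Only this interface's EF implementation knows about EF.
public interface IDataContext : IDisposable
{
    IQueryable<T> Query<T>() where T : class;
    void Add<T>(T entity) where T : class;
    void Remove<T>(T entity) where T : class;
    void SaveChanges();
}

public interface IUnitOfWork : IDisposable
{
    void SaveChanges();
}

public class UnitOfWork : IUnitOfWork
{
    private readonly IDataContext _context;

    // The data source arrives through the constructor (bound in Ninject),
    // so the unit of work never news up a DbContext itself.
    public UnitOfWork(IDataContext context)
    {
        _context = context;
    }

    // Typed repository properties would be lazily created here, each wrapping
    // _context.Query<T>() / Add / Remove and shared for the lifetime of this unit of work.

    public void SaveChanges() { _context.SaveChanges(); }
    public void Dispose() { _context.Dispose(); }
}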
EDIT
I think I've been overcomplicating it; I'm now folding my repository and unit of work into one - my repository is entirely generic so this just gives me a handful of generic methods (Add, Remove, Update, and a few kinds of Get) plus a SaveChanges method. This gives me a worker class interface; I can then have a factory class that provides instances of it (also interfaced). If I also have this worker implement IDisposable then I can use it in a scoped block. So now my controllers can do something like this:
using (var worker = DataAccess.BeginTransaction())
{
    Product item = worker.Get<Product>(p => p.ID == prodName);
    //stuff...
    worker.SaveChanges();
}
If something goes wrong before the SaveChanges(), then all changes are discarded when it exits the scope block and the worker is disposed. I can use dependency injection to provide concrete implementations to the DataAccess field, which is passed into the base controller constructor. Business logic is all in the controller and works with IQueryable objects, so I can switch out the DataAccess provider object for anything I like as long as it implements the IRepository interface; there's nothing specific to Entity Framework anywhere.
So, any thoughts on this implementation? Is this on the right track?
I prefer to have a UnitOfWork or a UnitOfWorkFactory injected into the repositories; that way I don't need to touch it every time a new repository is added. The responsibility of the UnitOfWork would be just to manage the transaction.
Here is an example of what I mean.
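Sketched roughly (with assumed interface names, not the linked example's actual code), the idea is something like:
using System;

// The unit of work's only job is to track changes and manage the transaction.
public interface IUnitOfWork : IDisposable
{
    void RegisterNew(object entity);
    void RegisterRemoved(object entity);
    void Commit();
}

// Each repository gets the unit of work injected; adding a new repository
// never requires touching the unit of work itself.
public class ProductRepository
{
    private readonly IUnitOfWork _unitOfWork;

    public ProductRepository(IUnitOfWork unitOfWork)
    {
        _unitOfWork = unitOfWork;
    }

    public void Add(Product product)
    {
        // Committed later by whoever owns the unit of work (e.g. at the end of the request).
        _unitOfWork.RegisterNew(product);
    }
}

public class Product { public int ID { get; set; } }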

Is it considered bad design to pass a repository interface as an argument to a method on a domain class?

Our domain model is very anemic right now. Our entities are mostly empty shells, almost purely designed for holding values and navigating to collections.
We are using EF 4.1 code-first ORM, and the design so far has been to shield our novice developers against the dreaded "LINQ to Entities cannot translate blablabla to a store expression" exception when querying against the context during early iterations.
We have various aggregate root repository interfaces over EF. However, some blocks of code in the impls seem like they should be the domain's responsibility. As long as the repository interface is declared in the domain, and the impl is in the infrastructure (dependency injected), is it considered bad design to pass a repository interface as an argument to a method on an entity (or other domain) class?
For example, would this be bad?
public class EntityAbc {
    public void SaveTo(IEntityAbcRepository repos) {...}
    public void DeleteFrom(IEntityAbcRepository repos) {...}
}
What if a particular entity needed access to other aggregate root repositories? Would this be ok or not, and why?
public void Save() {
    var abcRepos = DependencyInjector.Current.GetService<IEntityAbcRepository>();
    var xyzRepos = DependencyInjector.Current.GetService<IEntityXyzRepository>();
    // work with repositories
}
Update 1
I did not mention moving code to an application layer because I consider some of the code that uses IEntityAbcRepository to involve business rule enforcement. The repository impl should be as vanilla as possible, right? Its main responsibility should just be a simple abstraction over the ORM, allowing you to find / add / update / delete entities. Wrong?
Also, this question applies to methods on other non-entity domain classes -- factories, services, whatever pattern may be appropriate. The point being, I'm asking the question about any method on a domain class, not just an entity class. @Eranga, this is one place where you can use constructor injection, because factories & services are not part of the ORM.
The application layer could then coordinate flow by injecting a repository impl into its constructor, and passing it as an argument to a domain service or factory. Is this bad practice?
Update 2
Adding another clarification here. What if the domain only needs access to the IEntityAbcRepository in order to execute its Find() method(s)? In the example above, the SaveTo and DeleteFrom methods would not invoke any add / update / delete methods on the repository interface.
So far we've combined the find / add / update / delete methods on a single aggregate root repository interface for simplicity. But I suppose there's nothing stopping us from separating them out into 2 interfaces, like so:
IEntityAbcReadRepository <-- defines all find method signatures
IEntityAbcWriteRepository <-- defines all add / update / delete method sigs
In this case, would it be bad practice to pass IEntityAbcReadRepository as a parameter to a domain method?
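Concretely, the split and the kind of domain method in question might look something like this (the ConflictsWith rule and the Code property are placeholders invented for illustration):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

public interface IEntityAbcReadRepository
{
    EntityAbc FindById(int id);
    IEnumerable<EntityAbc> Find(Expression<Func<EntityAbc, bool>> predicate);
}

public interface IEntityAbcWriteRepository
{
    void Add(EntityAbc entity);
    void Update(EntityAbc entity);
    void Delete(EntityAbc entity);
}

public class EntityAbc
{
    public int Id { get; set; }
    public string Code { get; set; }

    // The domain method only ever queries; it never adds, updates or deletes.
    public bool ConflictsWith(IEntityAbcReadRepository readRepos)
    {
        return readRepos.Find(e => e.Code == Code && e.Id != Id).Any();
    }
}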
Your first approach is better than the second approach, which uses the "Service Locator" pattern. Dependencies are more obvious in the first approach.
Here are some links that explains why "Service Locator" is a bad choice
Is it bad to use servicelocation instead of constructor injection
...
Singleton Vs ServiceLocator
Say no to ServiceLocator
Both of these solutions stem from the fact that EF does not allow you to use constructor injection. However you can use property injection as explained in this answer. But that does not guarantee that mandatory dependencies are present.
So your first approach is the better solution.
Short answer: Yes!
Long answer:
Consider creating an AbcService in your application service layer. This service layer sits between your domain and your infrastructure. You can inject as many repositories into AbcService as you want. Then let the service handle SaveTo and DeleteFrom.
Unless you are saving to and deleting from another entity (i.e. no data access is involved), SaveTo and DeleteFrom sound like methods that shouldn't be on a domain entity, IMO.
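As a sketch of that layering (the repository method names here are assumed, not taken from the question):
public class EntityAbc { public int Id { get; set; } }

public interface IEntityAbcRepository
{
    void Add(EntityAbc entity);
    void Remove(EntityAbc entity);
}

public interface IEntityXyzRepository { /* find methods, etc. */ }

// Application service: sits between the domain and the infrastructure,
// taking as many repositories as it needs through its constructor.
public class AbcService
{
    private readonly IEntityAbcRepository _abcRepos;
    private readonly IEntityXyzRepository _xyzRepos;

    public AbcService(IEntityAbcRepository abcRepos, IEntityXyzRepository xyzRepos)
    {
        _abcRepos = abcRepos;
        _xyzRepos = xyzRepos;
    }

    public void Save(EntityAbc entity)
    {
        // coordinate any cross-aggregate checks via _xyzRepos here, then persist
        _abcRepos.Add(entity);
    }

    public void Delete(EntityAbc entity)
    {
        _abcRepos.Remove(entity);
    }
}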
Having persistence logic in your domain entities is IMO bad design in the first place. Good separation of concerns should mean that domain/business logic is separated from persistence logic, so your domain classes should be persistence ignorant.
Previous Entity Framework versions might not have allowed such a separation, but I think the most recent versions solved that problem. I'm not that familiar with EF though, so I might be wrong.
With that said, where can you put methods such as Save() and Delete() ?
If you want to add to/remove your entity from its repository, Repository.Add() and Repository.Remove() are good choices. A repository basically serves as an illusion of an in-memory collection of your entities, so it makes sense for it to behave just like a collection or a list with the appropriate methods.
If you want to persist changes made to an existing entity, there are other ways to do that. You could have a Repository.Save() method, but some consider it bad practice. Oftentimes the changes are part of a higher-level operation handled in a transaction-like context such as a Unit of Work; in that case you can let the operation persist all the objects in its scope when it finishes. For instance, if you use an Open Session in View approach for your web application, changes are automatically persisted when the request ends.
Or you can rely on an ad-hoc call of your ORM's Save() method for your particular entity which hopefully shouldn't be grafted onto the entity code itself (with NHibernate, for instance, it's available at runtime on the proxied entity).
[Update]
Putting that in perspective with your subsequent questions (though I'm not sure I understand all of them well):
I see no value in splitting your repository into a ReadRepository and a WriteRepository. In DDD, a repository's responsibility is clearly to provide a collection to query from as well as add to or remove from. It's still quite cohesive that way.
It's not an entity's responsibility to fiddle with its own persistence, so it shouldn't be aware of its own repository for that precise purpose. Otherwise, it's pretty rare that an entity rightfully needs to have knowledge of its own repository (usually it means that the entity has a relationship to another entity of the same type, like parent/child, and you want to get the other entity from the repository)
However, entities and other domain objects obviously do need to obtain references to other entities at times. In that case, try to get these references through traversal of other objects within the boundary of your aggregate first before looking for a repository. If you absolutely need a repository to get the object you want, it's a good idea to inject the repository through any flavour of injection you like. As Eranga pointed out, service locator might turn out to be a sub-par dependency injection ersatz though.
Last thing, the kind of injection you mentioned - SaveTo(IEntityAbcRepository repos) - is peculiar because it is neither constructor nor setter injection, but rather an ephemeral injection lasting just the time of a method. It implies that whoever calls your method must know what repository to pass at that precise moment, which is not obvious. It might be useful, but I'd say it's not the form of injection you would typically mainly use.

IoC Dependency Injection for stateful objects (not global)

I'm new to this IoC and DI business. I feel like I get the concept if you are passing along objects that have a global scope, but I don't get how it works when you need to pass around an object that has a specific logical state. So, for instance, if I wanted to inject a person object into a write-file command object, how would I be able to choose the correct person object dynamically? From what I have seen, I could default-construct the object, but my disconnect is that you wouldn't use a default person object; it would need to be dynamic. I assume that the IoC container may just maintain the state of the object for you as it gets passed around, but then that assumes you are dealing with only one person object, because there would be no thread safety, right? I know I am missing something (maybe something like a factory class), but I need a little more information about how that would work.
Well, you can always inject an Abstract Factory into your consumer and use it to create the locally scoped objects.
This is sometimes necessary. See these examples:
MVC, DI (dependency injection) and creating Model instance from Controller
Is there a pattern for initializing objects created via a DI container
Can't combine Factory / DI
However, in general we tend to not use DI for Entities, but mostly for Services. Instead, Entities are usually created through some sort of Repository.
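For example, a minimal sketch of the abstract-factory approach mentioned above (all names invented for illustration): the factory is the injectable, and the stateful object is created per call.
public class Person
{
    public Person(string name) { Name = name; }
    public string Name { get; private set; }
}

public interface IPersonFactory
{
    Person Create(string name);
}

public class PersonGreeter
{
    private readonly IPersonFactory _factory;

    public PersonGreeter(IPersonFactory factory)   // the factory is what gets injected
    {
        _factory = factory;
    }

    public string Greet(string name)
    {
        var person = _factory.Create(name);        // locally scoped, stateful object
        return "Hello, " + person.Name;
    }
}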
When you construct a service object (e.g. WriteFileService), you inject into it the things it needs internally to complete its job. Perhaps it needs a filesystem object or something.
The Person object in your example should be passed to the service object as a parameter to a method call, e.g. writeFileService.write(person).
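A small sketch of that split, with a hypothetical IFileSystem standing in for whatever the service needs internally:
public class Person
{
    public string Name { get; set; }
}

// What the service needs internally (infrastructure) is injected...
public interface IFileSystem
{
    void WriteAllText(string path, string contents);
}

public class WriteFileService
{
    private readonly IFileSystem _fileSystem;

    public WriteFileService(IFileSystem fileSystem)
    {
        _fileSystem = fileSystem;
    }

    // ...while the stateful Person travels in as an ordinary method argument.
    public void Write(string path, Person person)
    {
        _fileSystem.WriteAllText(path, person.Name);
    }
}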

Resources