Should domain objects have dependencies injected into them? - dependency-injection

I'm specifically referring to this question:
DDD - How to implement factories
The selected answer has stated:
"factories should not be tied with dependency injection because domain objects shouldn't have dependencies injected into them."
My question is: what is the reasoning behind not injecting dependencies into your entities? Or am I just misunderstanding the statement? Can someone please clarify?

This is kind of old, but I've been running into it a lot, so I really want to address it and express my opinion:
I often hear that Domain Objects should not "depend" on things, and this is true. I often see people extrapolate this to mean we should not inject things into domain objects.
That's the opposite of what the rule means. The domain shouldn't depend on other projects, true. But the domain can define its own interfaces, which other projects may then implement, and those implementations can then be injected back into the domain. As we all know, that is called Dependency Inversion.
That is literally the opposite of having a dependency. Not allowing injection into the domain completely hamstrings your ability to accurately model the domain, forces odd SRP violations, and pretty much kills the usability of domain services.
I really feel like I must be the crazy one here, because everyone seems to read "the Domain must not have any dependencies", then thinks "being injected with something means you are dependent on it", and comes to the conclusion "therefore we can't inject dependencies into the domain."
That leaves us with the wonderful logic:
Dependency Inversion == Dependency
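To make the distinction concrete, here is a minimal C# sketch of the pattern this answer describes (all names are illustrative, not from the original question): the domain defines the contract it needs, an outer layer implements it, and the implementation is injected back into the domain.

```csharp
// Defined INSIDE the domain project: the domain owns this contract.
public interface IExchangeRateProvider
{
    decimal GetRate(string fromCurrency, string toCurrency);
}

// A domain service injected with the abstraction. It depends only on an
// interface the domain itself defines, never on infrastructure code.
public class PriceConverter
{
    private readonly IExchangeRateProvider _rates;

    public PriceConverter(IExchangeRateProvider rates) => _rates = rates;

    public decimal Convert(decimal amount, string from, string to)
        => amount * _rates.GetRate(from, to);
}

// Defined OUTSIDE the domain, e.g. in an infrastructure project. The
// dependency arrow points inward: infrastructure depends on the domain.
public class StubExchangeRateProvider : IExchangeRateProvider
{
    public decimal GetRate(string from, string to) => 1.0m; // stand-in rate
}
```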

Domain Objects aren't Factories, Repositories, etc. They are only Entities, Value Objects, Domain Services and Aggregate Roots. That is, they must be classes which encapsulate the data your business domain uses, the relationships between that data, and the behaviour (read: modifications) that the domain can perform on it.
The Repository is a pattern to abstract away the persistence infrastructure you are using. It's in DDD because it decouples your app from your database, but not every DDD app needs, or even should use, a repository.
The Factory is a pattern to isolate the construction logic of objects. It's also just a good practice that DDD recommends, but it is not really needed in every scenario.
Domain Objects shouldn't depend on anything else, because they are the core of your app; everything else will depend on them. Keeping them free of other dependencies makes a clear one-way dependency chain and reduces the dependency graph. They are the invariants, the model, the foundation. Change them, and you probably need to change a lot of other things, so changing other things shouldn't force them to change.
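For contrast, a minimal sketch of the kind of dependency-free domain object this answer is arguing for (the Order/OrderLine names are made up): data, relationships and behaviour are encapsulated, and no collaborator ever needs to be injected.

```csharp
using System;
using System.Collections.Generic;

public record OrderLine(string Sku, int Quantity);

public class Order
{
    private readonly List<OrderLine> _lines = new();

    public IReadOnlyList<OrderLine> Lines => _lines;

    // Behaviour lives on the entity and enforces the invariant; no
    // repository, logger or other service is required to do so.
    public void AddLine(string sku, int quantity)
    {
        if (quantity <= 0)
            throw new ArgumentException("Quantity must be positive.", nameof(quantity));
        _lines.Add(new OrderLine(sku, quantity));
    }
}
```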

Domain objects should not have many dependencies.
By Fowler's Tell-Don't-Ask principle (https://martinfowler.com/bliki/TellDontAsk.html), you would want the domain objects to do as much as possible, including having dependencies. But in Clean Code (Uncle Bob), Chapter 6, it is suggested that it can be a good design to have data structures operated on by procedure/function classes (services), as long as you don't have hybrid objects that combine simple getters/setters with more complex tell-don't-ask operations.
Fowler disagreed with thin models and called it an antipattern - AnemicDomainModel. https://www.martinfowler.com/bliki/AnemicDomainModel.html
I disagree with Fowler. I strongly agree with the following quote from another article about this Fat-Models problem: "Following this logic basically every behaviour would end up in the model classes. This is something that we know (by experience) is a bad idea. Hundreds or thousands of lines of code crammed into a single class is a recipe for disaster. Service Objects grew out of this frustration." - https://tmichel.github.io/2015/09/14/oo-controversies-tell-dont-ask-vs-the-web/
We actually have a project with fat domain models that has this exact problem. As requirements change over time and the code gets more complex, a huge, fat model is quite inflexible for performing different operations and handling new requirements. Instead of adding new service workflow paths (classes) that act differently on the same simple data model, you have to make expensive, difficult refactors of the enormous, complicated domain model. It encapsulates the data and prevents anyone from modifying it in unexpected ways, but at the same time it makes it really difficult for new workflows to manipulate the data in new ways.
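A rough sketch of the service-object style described above (hypothetical classes): the model stays a simple data holder, and each workflow becomes its own small service, so a new requirement adds a class instead of forcing a refactor of one fat model.

```csharp
using System;

// The model stays a thin data holder.
public class Invoice
{
    public decimal Amount { get; set; }
    public bool IsPaid { get; set; }
}

// Each workflow is its own small service acting on that data.
public class PayInvoiceService
{
    public void Pay(Invoice invoice)
    {
        if (invoice.IsPaid) throw new InvalidOperationException("Already paid.");
        invoice.IsPaid = true;
    }
}

public class RefundInvoiceService
{
    public void Refund(Invoice invoice)
    {
        if (!invoice.IsPaid) throw new InvalidOperationException("Nothing to refund.");
        invoice.IsPaid = false;
    }
}
```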

How can I split domain logic and data access in Grails (and is it a good idea)?

Many software applications we write are rather data(base)-centered, and in Grails one often persists from service classes or controllers directly to a database configured in DataSource.groovy. It is easy to change the database, but the code is not really independent of the persistence implementation.
I am trying to write an application that is open to different persistence and data source (not only database) implementations, and that focuses on the business domain instead of database entities. This is also a plus when testing (it is easy to write fake/mock persistence).
Initially I have only one persistence implementation: Grails domain classes, using GORM. But it is possible that in the future I will want data sources other than a database, for example REST services or something else.
For now, I only have the database as a data source and do mostly CRUD stuff (and some domain logic). I think I am still stuck in "old", database-focused thinking, because most of my business domain classes have a Grails domain class equivalent that is a copy of it. When domain classes are to be persisted, I just copy the properties to the Grails domain class.
I am not very happy with this solution. I can think of at least two possible improvements/changes:
My Grails domain classes could be organised differently from the business domain classes, so I don't just copy properties from one class to the other. This would still involve a lot of property mapping from one class to the other when reading from or writing to the database, though.
Maybe there is a way to use business domain classes from a regular src/main/groovy package and decorate them with GORM stuff? Or to split the domain logic and persistence in some other way? I have seen that it is possible to do this by putting Hibernate configuration over the domain classes. Is this the only way?
I have seen some interesting discussions of Grails architecture, including clean architecture, hexagonal architecture and DDD, but I have not found any examples yet. Are there any?
At this point, as I said, much of the functionality is CRUD stuff, but not everything. Further on, the application may have more business logic, so I would prefer not to use the "default" Grails architecture of views, controllers, services and domain classes. I want a "core" application that is independent of the Grails views/controllers and domain/GORM.
It's been some time since you posted your question, but this is a very interesting topic for me...
I currently work on big-ish Java 8 projects that implement principles of clean architecture, DDD, CQRS and hexagonal architecture, among others. I also have limited experience with Grails 1.x projects, and I remember asking the same questions you are asking now.
Now that I have a broader perspective, I honestly think that it doesn't make sense to force Grails into a clean architecture. You're going to have a very painful time trying to achieve it and you probably won't be pleased with the result.
Everything in Grails is designed to be used in an opinionated, convention-based way, starting with GORM being an ActiveRecord implementation and continuing with every little decision they've made about directory structure, the semantics of the artifacts you need to define (controllers, services, models...), etc. I'm not saying this is bad. In fact, it is great when you're developing something that fits into this scheme of things.
This coupling and implicit behavior between your artifacts makes it really hard to model your business logic apart from your data access (or your HTTP interaction, or any other interaction with third parties, for that matter).
From a DDD point of view you should favor data- or collection-based Repositories over ActiveRecord implementations. Then you can start separating your persistence logic from your Domain model. Trying to do this while maintaining ActiveRecord-like interaction with your persistence layer is going to produce a very "dirty" adaptation layer with lots of repetition.
You will have a really hard time, especially when trying to adapt a complex Domain with aggregate objects that should go into different database tables, for example.
Now, addressing the two improvements that you suggest, this is what I can tell you about them:
My Grails domain classes could be organised more differently from the business domain classes, so I don't just copy properties from one class to the other. This will still involve a lot of property mapping from one class to the other when reading or writing from/to the database though.
You can indeed do what you say; just place some code in the src/groovy folder. The main problem you will face here is dependency injection. Grails automagically injects dependencies into your services and controllers when they're defined in the standard directories. For everything else, you need to explicitly tell Grails how to take dependencies and pass them to your custom artifacts.
Maybe there is a way to use business domain classes, from a regular src/main/groovy package and decorate with GORM stuff? Or in some other way split the domain logic and persistence? I have seen it is possible to do this by using hibernate conf over the domain classes. Is this the only way?
If you decorate your Domain objects defined in src/groovy with GORM (if that is even possible), you will have the same problem. Your mission here is to isolate your Domain from the persistence logic; keeping any GORM in it defeats the purpose.
Everything considered, my advice here would be to:
switch to other, less coupled libraries that let you design your own architecture (e.g. Ratpack, jOOQ), or
if that is not an option, just embrace the Grails way of doing things completely.
There is a very comprehensive list of libraries that you can browse for inspiration: Awesome Java.

Unit Of Work for each module

I am currently building an ASP.net MVC application, which has been broken down into multiple modules (as well as a generic class library).
I have implemented a Unit Of Work pattern for my first module. This unit of work class contains a number of different repositories.
However, I was wondering whether or not it is a good idea to have a separate Unit Of Work class for each module?
Well, EF itself supplies you with implementations of the UnitOfWork and Repository patterns. Usually they are not exactly what you want, and it seems nice to add some methods to those native EF repositories, but in most cases it isn't worth the trouble.
Implementing your own Repository on top of EF is not a good idea if your project is simple. It adds a lot of work but not much value.
Implementing a UnitOfWork based on EF is a completely different story. The only reason I can see to do it is "to have a different UoW for different parts of the solution". Avoid it otherwise, really.
We tried adding both of these approaches, ignoring the prebuilt ones, in our project. It seemed completely reasonable, because we were designing a modular solution and we didn't even know how many modules we would have at the end. We expected to add new modules to the system when it was already running and heavily loaded. And I can say that it took a lot of time to develop such an application. Realizing that you need access to one more entity from some module leads to changes in several places - the first evidence of an inefficient design.
So, KISS and YAGNI are against it. If you are tangled up in the question "should I add this stuff to my project" - just don't. You need a good reason to implement these parts yourself, not just some "nice design" bias, because it adds lots of complexity. Even if you think you will need it some day - wait until that day. If you try to estimate which miscalculation would be more disastrous, I am pretty sure it is much easier to add something new to your project than to remove something that already exists.
Please see this and this
A unit of work is really just a way of keeping track of a set of entities that have been loaded into memory. Once loaded, we can work with the entities in the normal way: changing state, adding new entities and removing other entities. When we are ready to save our changes, we ask the unit of work to commit, and it takes care of “flushing” the pending changes to the underlying database.
Is it a good idea to have a separate Unit Of Work class for each module?
My first thought is: how would a unit of work for one module differ from that of another? If they do, they probably shouldn't, because the domain should be persistence ignorant and the data layer should be business logic ignorant.
Take for instance the UoW that comes with Entity Framework itself: the context. When you create a context, do stuff, call SaveChanges() and dispose of it, it acts as a UoW. You can use one context class for your whole application. You're not going to program any business logic in your context class, so there is no reason to have a context class per module, unless each module uses a really distinct part of the database (which is hardly ever true). The same holds for a UoW you create yourself.
It's a bit beyond the scope of your question, but you could ask yourself whether you need your own UoW and repository classes as EF offers basic implementations of both (context and DbSets).
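To illustrate the point, a sketch assuming the DbContext API of EF 4.1+ (ShopContext and Order are hypothetical names):

```csharp
// DbContext is the unit of work, DbSet<Order> the repository.
int orderId = 42;

using (var context = new ShopContext())
{
    var order = context.Orders.Find(orderId); // load; now tracked in memory
    order.Shipped = true;                     // change is tracked, not yet saved
    context.Orders.Add(new Order());          // stage an insert

    // Commit the unit of work: all pending changes are flushed to the
    // database together, in a single transaction.
    context.SaveChanges();
}
```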

Designing repositories for DI (constructor injection) for service layer

I'm building an MVC3 app, trying to use IoC and constructor injection. My database has (so far) about 50 tables. I am using EF4 (with the POCO T4 template) for my DAC code. I am using the repository pattern, and each table has its own repository. My service classes in my service layer are injected with these repositories.
Problem: My service classes are growing in the number of repositories they need. In some cases, I am approaching 10 repositories, and it's starting to smell.
Is there a common approach for designing repositories and service classes such that the services don't require so many repositories?
Here are my thoughts, I'm just not sure which one is right:
1) This is a sign I should consider combining/grouping my repositories into related sections of tables, reducing the number of dependent repositories per service class. The problem with this approach, though, is that it will bloat and complicate my repositories, and will keep me from being able to use a common interface for all repositories (standard methods for data retrieval/update).
2) This is a sign I should consider breaking my services into groups based on my repositories (tables). The problem with this is that some of my service methods share a common implementation, and breaking these across classes may complicate my dependencies.
3) This is a sign that I don't know what I'm doing, and have something fundamentally wrong that I'm not even able to see.
UPDATE: For an idea of how I'm implementing EF4 and repositories, check out this sample app on codeplex (I used version 1). However, looking at some of the comments there (and here), looks like I need to do a bit more reading to make sure this is the route I want to take -- sounds like it may not be.
Chandermani is right that some of your tables might not be core domain classes. This means you would never search for that data except in terms of a single type of parent entity. In those cases you can reference them as "complex types" rather than full-blown entities, and EF will still take care of you.
I am using the repository pattern, and each table has its own repository
I hope you're not writing these yourself from scratch.
EF 4.1 already implements the Repository pattern (DbSet) and the Unit of Work pattern (DbContext). The older versions do too, and the DbContext template can easily be tweaked to provide a clean, mockable implementation by changing those properties to IDbSet.
I've seen several tutorial articles where people still write their own, though. It is strange to me, because they usually don't provide a justification, other than the fact that they are "implementing the Repository Pattern".
Writing wrappers around these repositories for access methods like FindById makes access slightly easier, but as you've seen, it is a big amount of effort for potentially little payback. Personally, unless I find that there is interesting domain logic or complex queries to be encapsulated, I don't even bother, and just use LINQ directly against the IDbSet.
My service classes in my service layer are injected w/ these repositories.
Even if you choose to use custom query wrappers, you might choose to simply inject the DbContext and let the service code instantiate the wrappers it needs. You'd still be able to mock your data access layer; you just wouldn't be able to mock the wrapper code. I'd still recommend you inject the less generic ones, though, because a complex implementation is exactly the kind of thing you'd like to be able to factor out in maintenance or replace with mocks.
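A sketch of that suggestion (ShopContext and Order are hypothetical): the context itself is injected, and the query is plain LINQ against its set, with no hand-rolled repository in between.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class OrderService
{
    private readonly ShopContext _context; // the EF context, injected directly

    public OrderService(ShopContext context)
    {
        _context = context;
    }

    public List<Order> GetOverdueOrders(DateTime asOf)
    {
        // Plain LINQ against the IDbSet; no wrapper class needed.
        return _context.Orders
                       .Where(o => o.DueDate < asOf && !o.IsPaid)
                       .ToList();
    }
}
```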
If you look at the DDD Aggregate Root pattern and try to see your data from this perspective, you will realize that many of the tables do not have an independent existence at all. Their data is only valid in the context of their parent, and most operations on them require you to fetch the parent as well. If you can group such tables and find the parent entity/repository, all the child repositories can be removed. The complexity of associating parent and child, which until now you were handling in your business layer (assuming you retrieve parent and child using independent repositories), would now shift to the DAL.
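In code, the idea might look like this (hypothetical Order aggregate): a single repository serves the aggregate root, and the child tables never get repositories of their own.

```csharp
// Order is the aggregate root; its OrderLine rows live and die with it.
public class Order { /* ... owns its OrderLines ... */ }

public interface IOrderRepository
{
    Order FindById(int id); // loads the Order together with its children
    void Save(Order order); // persists the whole aggregate atomically
}
```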
Refactoring the service interface is also a viable option; any common functionality can be moved into a base class and/or defined as a service of its own which is consumed by all your existing services (Is-A vs Has-A).
@Chandermani has a good point about aggregate roots. Repositories should not necessarily have a 1:1 mapping to tables.
Getting large numbers of dependencies injected in is a good sign your services are doing too much. Follow the Single Responsibility Principle, and refactor them into more manageable pieces.
Are your services writing to all of the repositories? I find that my services line up pretty closely with repositories, in that they provide the business logic around the CRUD operations that the repositories expose.

ASP.NET MVC DDD: DRY vs loose coupling and the persistence/data access layer

So as I understand it, with good loose coupling I should be able to swap out my DAL with a couple of lines of code at the application root.
I have two DALs written: Linq-to-SQL, and a JSON file repository (for testing, and because I wanted to try out System.Web.Script.Serialization.JavaScriptSerializer).
Linq-to-SQL will create entities instead of my business models and feed them upwards through an IRepository, which is constructor-injected at the application root.
My JSON layer doesn't have any autogenerated classes from which to deserialize, so I'm lost as to a simple way to have it depend on an interface or abstract class and still function.
This question is based on the following assumptions/understandings:
I believe I would need the Linq-to-SQL layer to implement an interface, so that at compile time the application domain can dictate that the entity classes have a place to read/write all the current model's fields.
Any business logic dictates a need for another set of classes, with almost the same names and the same properties, in the model layer.
Conversion methods would then be needed to take the DAL's objects and translate them to business objects and back. (Even if both sides implement the same interface, this seems very inefficient.)
This code is yet another place that has to change if the model class or interface changes (interface, business class, view, DAL entity).
Deserialization for any alternative DAL requires that I create "entities" with the same properties and fields in that layer (more duplication).
So to meet all of the flexibility/agility goals, it appears I need an interface for each application domain/business object, a concrete class where business logic can live, and DAL objects that implement the interface (which means layers that don't autogenerate entities would have to be hand-coded: pure duplication).
How would I use loose coupling without a ton of duplication and loss of DRY?
Welcome to the beautiful and exciting world of loosely coupled code :)
You understand the problem correctly, but let me first reiterate what you are already implying: The Domain Model (that is, all Domain classes) must be defined independently of any specific Data Access technology, so you can't use auto-generated LINQ to SQL (L2S) classes as a basis for your Domain classes for the simple reason that you can't really reuse those together with other technologies (as you have found out with your JSON-based Repository).
Interfaces for each Domain object is not even going to help you, because to avoid Anemic Domain Models you will need to implement behavior in the Domain classes (and you can't put behavior into interfaces).
This means that to hydrate and dehydrate Domain objects you must have some mapping code. It has always been like this: in the old days we had to map from IDataReader instances to Domain classes, while now we need to map from Data (L2S) classes to Domain classes.
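A minimal sketch of such mapping code, with an illustrative OrderRecord/Order pair standing in for the generated data class and the hand-written domain class:

```csharp
// Data-side class, e.g. what L2S or another ORM hands you.
public class OrderRecord
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

// Domain-side class: same data, but it carries behaviour.
public class Order
{
    public int Id { get; }
    public decimal Total { get; private set; }

    public Order(int id, decimal total) { Id = id; Total = total; }

    public void ApplyDiscount(decimal fraction) => Total -= Total * fraction;
}

// The mapping code referred to above: tedious, but it is what keeps the
// domain model ignorant of the data access technology.
public static class OrderMapper
{
    public static Order ToDomain(OrderRecord r) => new Order(r.Id, r.Total);

    public static OrderRecord ToRecord(Order o) =>
        new OrderRecord { Id = o.Id, Total = o.Total };
}
```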
Could we wish for something better? Yes. Can we get something better? Probably. The next version of the Entity Framework will support Persistence Ignorance for exactly this reason: you should be able to define your Domain model as POCOs and if you provide a map and a database schema, EF will take care of the rest.
Until that arrives Microsoft doesn't have anything that offers this kind of functionality, but NHibernate does (caveat: I have zero experience with NHibernate, but lots of smart people say that this is true, and I trust them on that). This is a major reason that so many people prefer NHibernate over EF.
Loose coupling requires lots of mapping, so I can only second queen3's suggestion of employing AutoMapper for this kind of tedious work.
As a closing note I do want to point out a related issue: Mapping doesn't necessarily imply a violation of DRY. The best example is when it comes to strongly typed ViewModels that correspond to a given Domain object. Don't be fooled by the semantic similarity. They may have more or less the same properties with the same values, but their responsibilities differ vastly. As an application grows, you will likely experience that little divergences sneak in here and there, and you will be glad you have that separation of concerns - even if it initially looked like a lot of repetitious work.
In any case: Loose coupling is more work at the beginning, but it will enable you to keep on evolving an application where a tightly coupled application would freeze in maintenance hell long before. You are in for the long haul, but instant gratification it ain't.
I'm not sure I understand the problem correctly, but to solve the duplicated classes you may use AutoMapper.
Note that you may apply the mapping declaratively or by using reflection, i.e. semi-automatically. For example, see here - this is not about the data layer, but it shows how simple attributes can help to automate mapping. In that case MVC applies the attributes, but you may invent your own engine that looks for an [Entity("Order")] attribute and applies AutoMapper.
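For a flavor of what AutoMapper buys you, here is a small sketch (the MapperConfiguration API shown is from later AutoMapper versions than existed when this was written, which used the static Mapper.CreateMap instead; the classes are hypothetical):

```csharp
using AutoMapper;

// A hypothetical pair of classes with matching property names.
public class OrderRecord { public int Id { get; set; } public decimal Total { get; set; } }
public class Order { public int Id { get; set; } public decimal Total { get; set; } }

public static class OrderMapping
{
    // Configure once at startup; properties are matched by name.
    private static readonly IMapper Mapper = new MapperConfiguration(cfg =>
    {
        cfg.CreateMap<OrderRecord, Order>();
        cfg.CreateMap<Order, OrderRecord>();
    }).CreateMapper();

    // Each conversion then collapses to a single call.
    public static Order ToDomain(OrderRecord record) => Mapper.Map<Order>(record);
    public static OrderRecord ToRecord(Order order) => Mapper.Map<OrderRecord>(order);
}
```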
Also, you cannot get 100% persistence independence with just a "couple of lines". The ORM selection plays a big role here. For example, Linq-To-SQL cannot use plain classes (POCOs), so it's not as easy to re-use them as it is with NHibernate, for example. And with a Repository you're going to have many queries in the data layer; different ORMs usually have different query syntax or implementations (even LINQ is not always compatible between ORMs), so switching data access can be a matter of replacing the data layer completely, which is not a couple of lines (unless your app is "Hello, world!").
The solution with AutoMapper above is actually a kind of self-baked ORM... so maybe you need to consider a better ORM that suits your requirements? Why don't you use EF4, especially given that it now supports POCOs and is very similar to Linq-To-SQL, at least in query language?

Is there any hard data on the value of Inversion of Control or dependency injection?

I've read a lot about IoC and DI, but I'm not really convinced that you gain a lot by using them in most situations.
If you are writing code that needs pluggable components, then yes, I see the value. But if you are not, then I question whether changing a dependency from a class to an interface really gains you anything other than more typing.
In some cases, I can see where IoC and DI help with mocking, but if you're not using mocking or TDD, then what's the value? Is this a case of YAGNI?
I doubt you will find any hard data on it, so I will add some thoughts.
First, you don't use DI (or other SOLID principles) because it helps you do TDD. It's the other way around: you do TDD because it helps you with the design - which usually means you end up with code that follows those principles.
Discussing why to use interfaces is a different matter, see: https://stackoverflow.com/questions/667139/what-is-the-purpose-of-interfaces.
I will assume you agree that having your classes do many different things results in messy code. Thus, I am assuming you are already going for SRP.
Because you have different classes that do specific things, you need a way to relate them. If you relate them inside the classes (i.e. in the constructors), you get plenty of code that depends on specific versions of the classes. This means that making changes to the system will be hard.
You are going to need to change the system; that's a fact of software development. You can invoke YAGNI about not adding specific extra features, but not about never needing to change the system. In my case that's really important, as I do weekly sprints.
I use a DI framework where configuration is done through code. With a really small amount of configuration code, you hook up lots of different relations, so once you set aside the interface-vs-concrete-class discussion, you are actually saving typing, not the other way around. Also, when a concrete class appears in a constructor, the framework hooks it up automatically (I don't have to configure it) and builds the rest of the relations. It also allows me to control some objects' lifetimes; in particular, I can configure an object to be a singleton, and it hands out the single instance every time.
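The answer doesn't say which DI framework was used; as an illustration only, here is the same idea sketched with Microsoft.Extensions.DependencyInjection and made-up types:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IClock { DateTime Now { get; } }
public class SystemClock : IClock { public DateTime Now => DateTime.UtcNow; }

// A concrete class with a constructor dependency: the container wires it
// up automatically, with no per-class configuration.
public class OrderProcessor
{
    public OrderProcessor(IClock clock) { }
}

public static class CompositionRoot
{
    public static OrderProcessor Build()
    {
        var services = new ServiceCollection();
        services.AddSingleton<IClock, SystemClock>(); // one shared instance, always
        services.AddTransient<OrderProcessor>();      // new instance per resolve
        return services.BuildServiceProvider()
                       .GetRequiredService<OrderProcessor>();
    }
}
```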
Also note that just using these practices isn't extra overhead. Using them for the first time is what causes the overhead (because of the learning process, plus in some cases a change of mindset).
Bottom line: you ain't gonna need to put all those constructor calls all over the place to go faster.
The most significant gains from DI are not necessarily due to the use of interfaces. You do not actually need interfaces to get the beneficial effects of dependency injection. If there's only one implementation, you can probably inject it directly, and you can use a mix of classes and interfaces.
You still get loose coupling, and in quite a few development environments you can introduce that interface with a few keypresses if needed.
Hard data on the value of loose coupling I cannot give, but it's been a vision in textbooks for as long as I can remember. Now it's real.
DI frameworks also give you some quite amazing features when it comes to hierarchical construction of large structures. Instead of looking for the leanest DI framework around, I'd recommend you look for a full-featured one. Less isn't always more, at least when it comes to learning about new ways of programming. Then you can go for less.
Apart from testing, the loose coupling is also worth it.
I've worked on components for an embedded Java system, which had a fixed configuration of objects after startup (about 50 mostly different objects).
The first component was legacy code without dependency injection, and the sub-objects were created all over the place. It happened several times that, for some modification, some code needed to talk to an object which was only available three constructors away. So what can you do but add another parameter to the constructor and pass it through, or even store it in a field to pass it on later? In the long run, things became even more tangled than they already were.
The second component I developed from scratch, and I used dependency injection (without knowing it at the time). That is, I had one factory which constructed all objects and injected them on a need-to-know basis. Adding another dependency was easy: just add it to the factory and to the object's constructor (or add a setter to avoid loops). No unrelated code needed to be touched.
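A sketch of that factory style, written in C# for consistency with the other examples here (the original component was Java, and all the types below are made up): one composition root constructs the whole object graph, so a new dependency touches only the factory and the receiving constructor.

```csharp
public class MessageBus { }

public class Sensor
{
    public Sensor(MessageBus bus) { }
}

public class Display
{
    public Display(MessageBus bus) { }
}

public class Controller
{
    public Controller(Sensor sensor, Display display) { }
}

// The single factory / composition root: every object is constructed here
// and handed its collaborators on a need-to-know basis.
public static class ComponentFactory
{
    public static Controller Build()
    {
        var bus = new MessageBus();
        return new Controller(new Sensor(bus), new Display(bus));
    }
}
```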
