How can I split domain logic and data access in Grails (and is it a good idea)?
Many software applications we write are rather data(base) centered, and in Grails one often persists from service classes or controllers directly to a database configured in DataSource.groovy. It is easy to change databases, but the code is not really independent of the persistence implementation.
I am trying to write an application that is open to different persistence and data source (not only database) implementations and that focuses on the business domain instead of database entities. This is also a plus when testing (it is easy to write fake/mock persistence).
Initially I have only one persistence implementation: Grails domain classes, using GORM. But it is possible that in the future I will want data sources other than a database, for example REST services or something else.
For now, I only have the database as a data source, and the application does mostly CRUD (and some domain logic). I think I am still stuck in "old", database-focused thinking, because most of my business domain classes have a Grails domain class equivalent that is a copy of it. When domain objects are to be persisted, I just copy the properties over to the Grails domain class.
I am not very happy with this solution. I can think of at least two possible improvements/changes:
My Grails domain classes could be organised differently from the business domain classes, so that I don't just copy properties from one class to the other. This would still involve a lot of property mapping when reading from or writing to the database, though.
Maybe there is a way to use business domain classes from a regular src/main/groovy package and decorate them with GORM? Or to split the domain logic and persistence in some other way? I have seen that it is possible to do this by putting Hibernate configuration over the domain classes. Is this the only way?
I have seen some interesting discussions of Grails architecture, including clean architecture, hexagonal architecture and DDD, but I have not found any examples yet. Are there any?
At this point, as I said, much of the functionality is CRUD, but not everything. And further on, the application may gain more business logic, so I would prefer not to use the "default" Grails architecture of views, controllers, services and domain classes. I want a "core" application that is independent of Grails views/controllers and domain/GORM.
It's been some time since you posted your question, but this is a very interesting topic for me...
I currently work on big-ish Java 8 projects that implement principles of clean architecture, DDD, CQRS and hexagonal architecture, among others. I also have limited experience with Grails 1.x projects, and I remember asking the same questions you are asking now.
Now that I have a broader perspective, I honestly think that it doesn't make sense to force Grails into a clean architecture. You're going to have a very painful time trying to achieve it and you probably won't be pleased with the result.
Everything in Grails is designed to be used in an opinionated, convention-based way, starting with GORM being an ActiveRecord implementation and continuing with every little decision they've made about directory structure, the semantics of the artifacts you define (controllers, services, models...), etc. I'm not saying this is bad. In fact, it is great when you're developing something that fits into this scheme of things.
This coupling and implicit behavior between your artifacts makes it really hard to model your business logic apart from your data access (or your HTTP interaction, or any other interaction with third parties, for that matter).
From a DDD point of view you should favor data- or collection-based Repositories over ActiveRecord implementations. Then you can start separating your persistence logic from your domain model. Trying to do this while maintaining ActiveRecord-like interaction with your persistence layer is going to produce a very "dirty" adaptation layer with lots of repetition.
You will have a really hard time especially when trying to adapt a complex domain with aggregates whose objects should go into different database tables, for example.
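To make the Repository idea concrete, here is a rough sketch (I'll write it in C#-style code just for shape - the names are invented, and the idea maps directly to Groovy):

    // Collection-like Repository: the domain sees only this interface.
    public class Order
    {
        public long Id;
        public decimal Total;
    }

    public interface IOrderRepository
    {
        Order FindById(long id);
        void Add(Order order);
    }

    // The persistence detail (GORM, JDBC, a REST client...) lives in the
    // implementation behind the interface; the domain model never sees it.
    //
    // Contrast with ActiveRecord style, where persistence is baked into the
    // entity itself: order.save(), order.delete(), Order.findById(42).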
Now, addressing the two improvements that you suggest, this is what I can tell you about them:
My Grails domain classes could be organised differently from the business domain classes, so that I don't just copy properties from one class to the other. This would still involve a lot of property mapping when reading from or writing to the database, though.
You can indeed do what you describe. Just place some code in the src/groovy folder. The main problem you will face is dependency injection: Grails automagically injects dependencies into your services and controllers when they're defined in the standard directories. For everything else, you need to explicitly tell Grails how to obtain dependencies and pass them to your custom artifacts (for example by defining them as Spring beans in resources.groovy).
Maybe there is a way to use business domain classes from a regular src/main/groovy package and decorate them with GORM? Or to split the domain logic and persistence in some other way? I have seen that it is possible to do this by putting Hibernate configuration over the domain classes. Is this the only way?
If you decorate your domain objects defined in src/groovy with GORM (if that is even possible), you will have the same problem. Your mission here is to isolate your domain from the persistence logic; having any GORM in it defeats that purpose.
Everything considered, my advice here would be to:
switch to other, less coupled libraries that let you design your own architecture (e.g. Ratpack, jOOQ), or
if that is not an option, just embrace the Grails way of doing things completely.
There is a very comprehensive list of libraries that you can browse for inspiration: Awesome Java
I'm starting a new MVC project and have (almost) decided to give the Repository Pattern and Dependency Injection a go. It has taken a while to sift through the variations but I came up with the following structure for my application:
Presentation Layer: ASP.Net MVC front end (views/controllers, etc.)
Services Layer (Business Layer, if you prefer): interfaces and DTOs.
Data Layer: interface implementations and Entity Framework classes.
They are 3 separate projects in my solution. The Presentation Layer only has a reference to the Services Layer. The Data Layer also only has a reference to the Services Layer - so this is basically following Domain Driven Design.
The point of structuring things in this fashion is separation of concerns, loose coupling and testability. I'm happy to take advice on improvements if any of this is unreasonable.
The part I am having difficulty with is injecting an interface-implementing object from the Data Layer into the Presentation Layer, which is only aware of the interfaces in the Services Layer. This seems to be exactly what DI is for, and IoC frameworks (allegedly!) make it easier, so I thought I'd try MEF2. But of the dozens of articles, questions and answers I've read over the last few days, nothing seems to actually address this in a way that fits my structure. Almost all are deprecated and/or simple console-application examples that have all the interfaces and classes in the same assembly, knowing all about one another and entirely defeating the point of loose coupling and DI. I have also seen others that require the Data Layer DLL to be put in the Presentation Layer's bin folder, with other classes configured to look there - again hampering the idea of loose coupling.
There are some solutions that explore attribute-based registration, but that has supposedly been superseded by convention-based registration. I also see a lot of examples injecting an object into a controller constructor, which introduces its own set of problems to solve. I'm not convinced the controller should know about this at all, and would rather have the object injected into the model, but there may be reasons for this as so many examples seem to follow that path. I haven't looked too deeply into this yet as I'm still stuck trying to get the Data Layer object up into the Presentation Layer anywhere at all.
I believe one of my main problems is not understanding in which layer the various MEF2 things need to go, since every example I've found only uses one layer. There are containers and registrations and catalogues and exporting and importing configurations, and I've been unable to figure out exactly where all this code should go.
The irony is that modern design patterns are supposed to abstract complexity and simplify our task, but I'd be half finished by now if I'd just referenced the DAL from the PL and got to work on the actual functionality of the application. I'd really appreciate it if someone could say, 'Yep, I get what you're doing but you're missing xyz. What you need to do is abc'.
Thanks.
Yep, I get what you're doing (more or less), but (as far as I can tell) you're missing a) the separation of contracts and implementation types into their own projects/assemblies, and b) a concept for configuring the DI container, i.e. configuring which implementations shall be used for which interfaces.
There are unlimited ways of dealing with this, so what I give you is my personal best practice. I've been working that way for quite a bit now and am still happy with it, so I consider it worth sharing.
a. Always have two projects: MyNamespace.Something and MyNamespace.Something.Contracts
In general, for DI, I have two assemblies: One for contracts which holds only interfaces and one for the implementation of these interfaces. In your case, I would probably have five assemblies: Presentation.dll, Services.dll, Services.Contracts.dll, DataAccess.dll and DataAccess.Contracts.dll.
(Another valid option is to put all contracts in one assembly, let's call it Commons.dll.)
Obviously, DataAccess.dll references DataAccess.Contracts.dll, as the classes inside DataAccess.dll implement the interfaces inside DataAccess.Contracts.dll. Same for Services.dll and Services.Contracts.dll.
Now, the decoupling part: Presentation references Services.Contracts and DataAccess.Contracts. Services references DataAccess.Contracts. As you can see, there is no dependency on concrete implementations. This is what the whole DI thing is about. If you decide to exchange your data access layer, you can swap DataAccess.dll while DataAccess.Contracts.dll stays the same. None of your other assemblies reference DataAccess.dll directly, so there are no broken links, version conflicts, etc. If this is not clear, try to draw a little dependency diagram. You will see that there are no arrows pointing to any assembly which doesn't have .Contracts in its name.
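To illustrate the split, a minimal sketch (ICustomerRepository and friends are invented names, not from your question):

    // In DataAccess.Contracts.dll - interfaces and plain types only.
    public class Customer { public int Id; public string Name; }

    public interface ICustomerRepository
    {
        Customer GetById(int id);
    }

    // In DataAccess.dll - references DataAccess.Contracts.dll.
    public class EfCustomerRepository : ICustomerRepository
    {
        public Customer GetById(int id)
        {
            // ... the actual Entity Framework code would live here ...
            return new Customer { Id = id, Name = "stub" };
        }
    }

    // In Services.dll - references only DataAccess.Contracts.dll,
    // never DataAccess.dll; the container supplies the implementation.
    public class CustomerService
    {
        private readonly ICustomerRepository _repository;
        public CustomerService(ICustomerRepository repository) { _repository = repository; }
        public string DisplayName(int id) { return _repository.GetById(id).Name; }
    }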
Does this make sense to you? Please ask, if there is something unclear.
b. Choose how to configure the container
You can choose between explicit configuration (XML, etc.), attribute-based configuration and convention-based registration. While the former is a pain for obvious reasons, I am a fan of the second. I think it is more readable and easier to debug than convention-based config, but that is a matter of taste.
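Just to illustrate what attribute-based config looks like, here is a rough MEF-style sketch (the formatter is an invented example):

    using System.ComponentModel.Composition;

    public interface IMessageFormatter
    {
        string Format(string message);
    }

    // The class itself declares which contract it fulfils; a catalog/container
    // discovers it by scanning the assembly - no central registration code.
    [Export(typeof(IMessageFormatter))]
    public class UppercaseFormatter : IMessageFormatter
    {
        public string Format(string message) { return message.ToUpperInvariant(); }
    }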
Of course, the container in some sense bundles all the dependencies that you have kept out of your application architecture. To make clear what I mean, consider an XML config for your case: it will contain 'links' to all of the implementation assemblies (DataAccess.dll, ...). Still, this doesn't undermine the idea of decoupling; it is clear that you need to modify the configuration when an implementation assembly is exchanged.
However, working with attribute- or convention-based configs, you generally work with the autodiscovery mechanisms you mention: 'search in all assemblies located in xyz'. This does require placing all assemblies in the application's bin directory. There is nothing wrong with that, as the code needs to be somewhere, right?
What do you gain? Consider: you've deployed your application and decide to swap the DataAccess layer. Say you've chosen convention-based config for your DI container. What you can do now is open a new project in VS, reference the existing DataAccess.Contracts.dll and implement all the interfaces in whatever way you like, as long as you follow the conventions. Then you build the library, call it DataAccess.dll, and copy it into your original application's program folder, replacing the old DataAccess.dll. Done - you've swapped the whole implementation without any of the other assemblies even noticing.
I think you get the idea. Using IoC and DI really is a tradeoff. I highly recommend being pragmatic in your design decisions. Don't interface everything out; it just gets messy. Decide for yourself where DI and IoC really make sense, and don't get too influenced by the community's religious discussions. Still, used wisely, IoC and DI are really, really, really powerful!
Well, I've spent a couple more days on this (now around a week in total) and made little further progress. I am fairly sure I had the container set up correctly, with my conventions discovering the correct parts to be mapped etc., but I couldn't figure out the missing link needed to get the controller DI to activate - I constantly received the error message stating that I hadn't provided a parameterless constructor. So I'm done with it.
I did, however, manage to move forward with my structure and intention to use DI with an IoC container. If anyone hits the same wall I did and wants an alternative solution: ditch MEF 2 and go with Unity. The latest version (3.5 at the time of writing) has discovery by convention baked in and works like a treat out of the box - it even has a fairly thorough manual with worked examples. There are other IoC frameworks, but I chose Unity since it's MS-supported and fares well in performance benchmarks. Install the bootstrapper package from NuGet and most of the work is done for you. In the end I only had to write one line of code to map my entire DAL (they even create a stub for you so you know where to insert it):
    // Convention-based registration: map every class in the DAL's repository
    // namespace to the interface whose name matches it (Foo -> IFoo).
    container.RegisterTypes(
        AllClasses.FromLoadedAssemblies().Where(t => t.Namespace == "xxx.DAL.Repository"),
        WithMappings.FromMatchingInterface,
        WithName.Default);
I understand the benefit of DI from the Singleton point of view and the reduction of boilerplate code. But I found this on Wikipedia too:
Another benefit is that it offers configuration flexibility because alternative implementations of a given service can be used without recompiling code
When I use Spring or Guice, there is always a contract between one service and one implementation of it. Am I missing a feature, or am I understanding the statement incorrectly?
You would normally have to recompile the part of the application that contains the configuration, but the rest of the application can stay the same. When those parts are put in separate modules/assemblies, recompiling the rest is not needed at all. And when you configure the container using XML, (in theory) nothing needs to be recompiled.
You could even go one step further and change behavior at runtime (using decorators for instance) if you wish.
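A minimal sketch of the idea (invented names; plain reflection stands in for a full container here): the implementation type is named in a config file, so swapping it means editing text, not recompiling the consuming code.

    using System;
    using System.IO;

    public interface IGreetingService { string Greet(string name); }

    public class PoliteGreetingService : IGreetingService
    {
        public string Greet(string name) { return "Good day, " + name + "."; }
    }

    public static class Program
    {
        public static void Main()
        {
            // service.config holds an assembly-qualified type name; editing that
            // file swaps the implementation without recompiling this code.
            string typeName = File.ReadAllText("service.config").Trim();
            var service = (IGreetingService)Activator.CreateInstance(Type.GetType(typeName));
            Console.WriteLine(service.Greet("world"));
        }
    }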
If the decision of which implementation to use for a given service is in configuration, you can change that decision purely in configuration with no code changes, as long as the alternative implementation you want to use already exists.
This is largely the case quite naturally with Spring, where this decision is usually in XML configuration files read at application startup.
I've had a certain feeling these last couple of days that dependency injection should really be called the "I can't make up my mind" pattern. I know this might sound silly, but really it's about the reasoning behind why I should use Dependency Injection (DI). It is often said that I should use DI to achieve a higher level of loose coupling, and I get that part. But really... how often do I change my database once my choice has fallen on MS SQL or MySQL? Very rarely, right?
Does anyone have some very compelling reasons why DI is the way to go?
Two words: unit testing.
One of the most compelling reasons for DI is to allow easier unit testing without having to hit a database and worry about setting up 'test' data.
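A minimal sketch of what that buys you (names invented for illustration): the class under test takes its repository through the constructor, so the test hands it a hand-rolled fake instead of anything database-backed.

    public interface IOrderRepository { int CountOrders(string customer); }

    // Class under test: depends only on the interface.
    public class OrderStats
    {
        private readonly IOrderRepository _repository;
        public OrderStats(IOrderRepository repository) { _repository = repository; }
        public bool IsFrequentBuyer(string customer) { return _repository.CountOrders(customer) > 10; }
    }

    // Hand-rolled fake for the test project: no database, no test-data scripts.
    public class FakeOrderRepository : IOrderRepository
    {
        public int Count;
        public int CountOrders(string customer) { return Count; }
    }

    // In a test (xUnit/NUnit/MSTest - pick your poison):
    // var stats = new OrderStats(new FakeOrderRepository { Count = 11 });
    // Assert.True(stats.IsFrequentBuyer("alice"));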
DI is very useful for decoupling your system. If all you're using it for is to decouple the database implementation from the rest of your application, then either your application is pretty simple or you need to do a lot more analysis on the problem domain and discover what components within your problem domain are the most likely to change and the components within your system that have a large amount of coupling.
DI is most useful when you're aiming for code reuse, versatility and robustness to changes in your problem domain.
How relevant it is to your project depends upon the expected lifespan of your code. Depending on the type of work you're doing, zero reuse from one project to the next for the majority of the code you're writing might actually be quite acceptable.
An example of the use of DI is creating an application that can be deployed for several clients, using DI to inject per-client customisations - which could also be described as the GoF Strategy pattern. Many of the GoF patterns can be facilitated with the use of a DI framework.
DI is more relevant to Enterprise application development in which you have a large amount of code, complicated business requirements and an expectation (or hope) that the system will be maintained for many years or decades.
Even if you don't change the structure of your program during development, you will find that you need to access several subsystems from different parts of your program. With DI, each of your classes just asks for its services, and you're freed from having to provide all the wiring manually.
This really helps me concentrate on the interaction of things in the software design rather than on "who needs to carry what around because someone else needs it later".
Additionally, it just saves a LOT of boilerplate code. Do I need a singleton? I just configure a class to be one. Can I test with such a "singleton"? Yes, I still can (since I merely CONFIGURED it to exist only once, the test can instantiate an alternative implementation).
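To show what I mean by "configured to exist only once", here is a rough Unity sketch (IClock/SystemClock are invented names; other containers have an equivalent lifetime setting):

    using Microsoft.Practices.Unity;

    public interface IClock { System.DateTime Now { get; } }

    public class SystemClock : IClock
    {
        public System.DateTime Now { get { return System.DateTime.UtcNow; } }
    }

    public static class Demo
    {
        public static void Main()
        {
            var container = new UnityContainer();
            // ContainerControlledLifetimeManager = "singleton within this container":
            // no static Instance property, no private constructor, nothing special
            // in the class itself.
            container.RegisterType<IClock, SystemClock>(new ContainerControlledLifetimeManager());

            var a = container.Resolve<IClock>();
            var b = container.Resolve<IClock>();
            // a and b are the same instance - yet a test can skip the container
            // entirely and hand the class under test any IClock it likes.
            System.Console.WriteLine(object.ReferenceEquals(a, b)); // True
        }
    }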
By the way, before I used DI I didn't really understand its worth, but trying it was a real eye-opener for me: my designs are a lot more object-oriented than they were before.
And with the current application I DON'T unit-test (bad, bad me), but I STILL couldn't live without DI anymore. It is so much easier to move things around and keep classes small and simple.
While I semi-agree with you on the DB example, one of the big places I found DI helpful is testing the layer I build on top of the database.
Here's an example...
You have your database.
You have your code that accesses the database and returns objects
You have business domain objects that take the previous item's objects and do some logic with them.
If you merge the data access with your business domain logic, your domain objects can become difficult to test. DI allows you to inject your own data access objects into your domain so that you don't depend on the database for testing or demonstrations (I once ran a demo where some data was pulled in from XML instead of a database).
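A rough sketch of that demo setup (all types invented for illustration):

    using System.Collections.Generic;
    using System.Linq;
    using System.Xml.Linq;

    public class Product { public string Name; public decimal Price; }

    public interface IProductSource { IEnumerable<Product> GetProducts(); }

    // Domain logic: knows nothing about where the products come from.
    public class PricingService
    {
        private readonly IProductSource _source;
        public PricingService(IProductSource source) { _source = source; }
        public decimal TotalValue() { return _source.GetProducts().Sum(p => p.Price); }
    }

    // Demo/test implementation reading from XML instead of a database.
    public class XmlProductSource : IProductSource
    {
        private readonly string _path;
        public XmlProductSource(string path) { _path = path; }

        public IEnumerable<Product> GetProducts()
        {
            return XDocument.Load(_path).Descendants("product")
                .Select(x => new Product
                {
                    Name = (string)x.Attribute("name"),
                    Price = (decimal)x.Attribute("price")
                });
        }
    }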
Abstracting 3rd party components and frameworks like this would also help you.
Aside from the testing example, there are a few places where DI can be used through a Design-by-Contract approach. You may find it appropriate to create a processing engine of sorts that calls methods on the objects you inject into it. While it may not truly "process" them, it runs methods that have a different implementation in each object you provide.
I saw an example of this where every business domain object had a "Save" method that was called after it was injected into the processor. The processor modified the component with configuration information, and Save handled the object's primary state. In essence, DI supplemented the polymorphic method implementations of the objects that conformed to the interface.
Dependency Injection gives you the ability to test specific units of code in isolation.
Say I have a class Foo, for example, that takes an instance of a class Bar in its constructor. One of the methods on Foo might check that a property value of Bar is one which allows some other processing of Bar to take place.
    public class Foo
    {
        private Bar _bar;

        public Foo(Bar bar)
        {
            _bar = bar;
        }

        // Foo's result depends entirely on the state of the Bar handed to it.
        public bool IsPropertyOfBarValid()
        {
            return _bar.SomeProperty == PropertyEnum.ValidProperty;
        }
    }
Now let's say that Bar is instantiated with its properties set from some datasource in its constructor. How might I go about testing the IsPropertyOfBarValid() method of Foo (ignoring the fact that this is an incredibly simple example)? Well, Foo is dependent on the instance of Bar passed into the constructor, which in turn is dependent on the data from the datasource that its properties are set from. What we would like is some way of isolating Foo from the resources it depends upon so that we can test it in isolation.
This is where Dependency Injection comes in. What we want is some way of faking an instance of Bar passed to Foo, such that we can control the properties set on this fake Bar and achieve what we set out to do: test that the implementation of IsPropertyOfBarValid() does what we expect, i.e. returns true when Bar.SomeProperty == PropertyEnum.ValidProperty and false for any other value.
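One common way to get there - sketched by hand here, though a mocking framework can generate the fake for you - is to extract an interface from Bar and have Foo depend on that:

    public enum PropertyEnum { ValidProperty, InvalidProperty }

    public interface IBar { PropertyEnum SomeProperty { get; } }

    // Foo now depends on the abstraction instead of the concrete Bar.
    public class Foo
    {
        private readonly IBar _bar;
        public Foo(IBar bar) { _bar = bar; }
        public bool IsPropertyOfBarValid() { return _bar.SomeProperty == PropertyEnum.ValidProperty; }
    }

    // Hand-written stub, used only by tests; no datasource involved.
    public class StubBar : IBar
    {
        public PropertyEnum SomeProperty { get; set; }
    }

    // var foo = new Foo(new StubBar { SomeProperty = PropertyEnum.ValidProperty });
    // foo.IsPropertyOfBarValid() returns true.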
There are two types of fake object: mocks and stubs. Stubs provide input for the application under test so that the test can be performed on something else. Mocks, on the other hand, provide input to the test to decide on pass/fail.
Martin Fowler has a great article on the difference between Mocks and Stubs
I think that DI is worth using when you have many services/components whose implementations must be selected at runtime based on external configuration. (Note that such configuration can take the form of an XML file or a combination of code annotations and separate classes; choose what is more convenient.)
Otherwise, I would simply use a ServiceLocator, which is much "lighter" and easier to understand than a whole DI framework.
For unit testing, I prefer to use a mocking API that can mock objects on demand, instead of requiring them to be "injected" into the tested unit from a test. For Java, one such library is my own, JMockit.
Aside from loose coupling, testing of any type is achieved with much greater ease thanks to DI. You can replace an existing dependency of a class under test with a mock, a dummy or even another version. If a class is created with its dependencies directly instantiated, it can often be difficult or even impossible to "stub" them out when required.
I just understood it tonight.
For me, dependency injection is a method for instantiating objects that require a lot of parameters to work in a specific context.
When should you use dependency injection?
You can use dependency injection if you instantiate an object in a static way. For example, if you use a class that can convert objects into XML or JSON files and you need only the XML output, you would otherwise have to instantiate the object and configure a lot of things yourself.
When should you not use dependency injection?
If an object is instantiated with request parameters (after a form submission), you should not use dependency injection, because the object is not instantiated in a static way.
I've been using StructureMap recently and have enjoyed the experience thoroughly. However, I can see how one can easily get carried away with interfacing everything out and end up with classes that take a boatload of interfaces in their constructors. Even though that really isn't a huge problem when you're using a dependency injection framework, it still feels like there are certain properties that really don't need to be interfaced out just for the sake of interfacing them.
Where do you draw the line on what to interface out vs just adding a property to the class?
The main problem with dependency injection is that, while it gives the appearance of a loosely coupled architecture, it really doesn't.
What you're really doing is moving that coupling from compile time to runtime - if class A needs some interface B to work, an instance of a class which implements interface B still needs to be provided.
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code.
Uses I've found valuable for an Inversion of Control pattern:
A plugin architecture. By making the right entry points you can define the contract for the service that must be provided.
A workflow-like architecture, where you can dynamically connect several components, wiring the output of one component to the input of another.
A per-client application. Say you have various clients, each paying for a set of "features" of your product. Using dependency injection you can easily provide just the core components plus the "added" components for exactly the features each client has paid for (see the sketch after this list).
Translation. Although this is not usually done for translation purposes, you can "inject" different language files as needed by the application. That includes RTL or LTR user interfaces as needed.
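A minimal sketch of the per-client case (invented names; a container would normally do the wiring, but the composition is the point):

    using System;
    using System.Collections.Generic;

    public interface IFeature { string Name { get; } void Run(); }

    public class CoreReports : IFeature
    {
        public string Name { get { return "Reports"; } }
        public void Run() { Console.WriteLine("basic reports"); }
    }

    public class PremiumExport : IFeature
    {
        public string Name { get { return "Export"; } }
        public void Run() { Console.WriteLine("premium export"); }
    }

    public class App
    {
        private readonly IReadOnlyList<IFeature> _features;
        // The feature set is injected; App itself never decides what a client gets.
        public App(IReadOnlyList<IFeature> features) { _features = features; }
        public void RunAll() { foreach (var f in _features) f.Run(); }
    }

    // Composition per client:
    // var basicClient   = new App(new IFeature[] { new CoreReports() });
    // var premiumClient = new App(new IFeature[] { new CoreReports(), new PremiumExport() });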
Think about your design. DI allows you to change how your code functions via configuration changes. It also allows you to break dependencies between classes so that you can isolate and test objects easier. You have to determine where this makes sense and where it doesn't. There's no pat answer.
A good rule of thumb is that if it's too hard to test, you've got some issues with single responsibility and static dependencies. Isolate code that performs a single function into a class and break that static dependency by extracting an interface and using a DI framework to inject the correct instance at runtime. By doing this, you make it trivial to test the two parts separately.
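A tiny before/after sketch of that refactoring (all names invented):

    // Before: a static dependency hidden inside the method - hard to test
    // without actually sending mail.
    public static class EmailGateway
    {
        public static void Send(string to, string body) { /* talks to SMTP */ }
    }

    public class ReminderServiceBefore
    {
        public void SendReminder(string user) { EmailGateway.Send(user, "reminder"); }
    }

    // After: the dependency is extracted behind an interface and injected,
    // so tests can pass a fake and assert on what was sent.
    public interface IEmailGateway { void Send(string to, string body); }

    public class ReminderService
    {
        private readonly IEmailGateway _email;
        public ReminderService(IEmailGateway email) { _email = email; }
        public void SendReminder(string user) { _email.Send(user, "reminder"); }
    }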
Dependency injection should only be used for the parts of the application that need to be changed dynamically without recompiling the base code
DI should be used to isolate your code from external resources (databases, web services, XML files, plugin architectures). The amount of time it would take to test your logic in code would be almost prohibitive at a lot of companies if you are testing components that DEPEND on a database.
In most applications the database isn't going to change dynamically (although it could), but generally speaking it's almost always good practice NOT to bind your application to a particular external resource. The effort involved in changing resources should be low (data access classes should rarely have a cyclomatic complexity above one in their methods).
What do you mean by "just adding a property to a class?"
My rule of thumb is to make the class unit testable. If your class relies on the implementation details of another class, that needs to be refactored/abstracted to the point that the classes can be tested in isolation.
EDIT: You mention a boatload of interfaces in the constructor. I would advise using setters/getters instead. I find that it makes things much easier to maintain in the long run.
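For illustration, setter/property injection looks roughly like this (invented names; most containers, StructureMap included, support this style):

    public interface ILogger { void Log(string message); }
    public interface ICache { object Get(string key); }

    public class ReportBuilder
    {
        // Setter (property) injection: the container - or a test - assigns these
        // after construction, instead of a constructor taking a boatload of
        // parameters up front.
        public ILogger Logger { get; set; }
        public ICache Cache { get; set; }

        public void Build()
        {
            if (Logger != null) Logger.Log("building report");
            // ...
        }
    }

    // var builder = new ReportBuilder { Logger = myLogger, Cache = myCache };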
I do it only when it helps with separation of concerns.
For example, cross-project I might provide an interface for implementers in one of my library projects, and the implementing project would inject whatever specific implementation it wants.
But that's about it... in all the other cases it would just make the system unnecessarily complex.
Even with all the facts and processes in the world.. every decision boils down to a judgment call - Forgot where I read that
I think it's more of an experience / flight-time call.
Basically, if you see the dependency as a candidate object that may be replaced in the near future, use dependency injection. If I see "class A and its dependencies" as one block for substitution, then I probably won't use DI for A's dependencies.
The biggest benefit is that it will help you understand or even uncover the architecture of your application. You'll be able to see very clearly how your dependency chains work and be able to make changes to individual parts without requiring you to change things that are unrelated. You'll end up with a loosely coupled application. This will push you into a better design and you'll be surprised when you can keep making improvements because your design will help you keep separating and organizing code going forward. It can also facilitate unit testing because you now have a natural way to substitute implementations of particular interfaces.
There are some applications that are just throwaway, but if there's any doubt I would go ahead and create the interfaces. After some practice it's not much of a burden.
Another item I wrestle with is where I should use dependency injection. Where do you take your dependency on StructureMap? Only in the startup application? Does that mean all the implementations have to be handed all the way down from the top-most layer to the bottom-most layer?
I use Castle Windsor/Microkernel, I have no experience with anything else but I like it a lot.
As for how you decide what to inject: so far the following rule of thumb has served me well: if a class is so simple that it doesn't need unit tests, you can feel free to instantiate it in the class; otherwise you probably want to take it as a dependency through the constructor.
As for whether you should create an interface vs. just making your methods and properties virtual, I think you should go the interface route if either a) you can see the class having some level of reusability in a different application (e.g. a logger), or b) the class is otherwise difficult to mock, whether because of the number of constructor parameters or because there is a significant amount of logic in the constructor.