Two-part question
I have a Product aggregate that has:
Prices
PackagingOptions
ProductDescriptions
ProductImages
etc
I have modeled a single Product repository and did not create individual repositories for any of the child classes. All DB operations are handled through the Product repository.
Am I understanding the DDD concept correctly so far? Sometimes I wonder whether having a repository for, let's say, packaging options would make my life easier by fetching a packaging option directly from the DB by its ID, instead of asking the product repository to find it in its PackagingOptions collection and hand it to me.
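For reference, the single-repository shape I have in mind looks roughly like this (the names below are simplified and illustrative, not my exact code):

    using System.Collections.Generic;
    using System.Linq;

    // Illustrative sketch only: child entities are reached through the aggregate root,
    // and a single repository handles the whole aggregate.
    public class PackagingOption
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class Product
    {
        public int Id { get; set; }
        public List<PackagingOption> PackagingOptions { get; } = new List<PackagingOption>();

        public PackagingOption GetPackagingOption(int packagingOptionId)
        {
            // navigate the collection instead of hitting a PackagingOption repository
            return PackagingOptions.Single(po => po.Id == packagingOptionId);
        }
    }

    public interface IProductRepository
    {
        Product GetById(int productId); // loads the whole aggregate
        void Save(Product product);
    }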
The second part is about managing the create/edit operations using the ASP.NET MVC framework.
I am currently trying to manage all add/edit/remove operations on these child collections through the Product controller (does that sound right?).
One challenge I am now facing:
If I edit a specific packaging option of a product through
mydomain/product/editpackagingoption/10
I have access to the ID of the packaging option,
but I don't have the ID of the product itself. This forces me to write a query that first finds the product that owns this specific packaging option, then edits that product and the relevant packaging option. I can do this because all packaging options have their own unique IDs, but it would fail for collections whose items don't have unique IDs.
That feels very wrong.
The next option I thought of is sending both the product and packaging option IDs in the URL, like:
mydomain/product/editpackagingoption/3/10
But I am not sure if that is a good design either.
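Just to make that option concrete, I imagine it would be wired up with a route carrying both IDs, something like the following (route and parameter names are my own guesses, not an established design):

    using System.Web.Mvc;
    using System.Web.Routing;

    // Hypothetical route registration carrying both IDs (classic ASP.NET MVC).
    public class RouteConfig
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.MapRoute(
                name: "EditPackagingOption",
                url: "product/editpackagingoption/{productId}/{packagingOptionId}",
                defaults: new { controller = "Product", action = "EditPackagingOption" });
        }
    }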
So I am at a point where I am a bit confused; I might have some fundamental misunderstandings around all of this.
I would appreciate it if you could bear with the long question and help me put this together. Thanks!
In my mind, this is one of those muddy areas that pops up in DDD.
In code, I treat an aggregate root as a container for any "relationships" it has and any Entity Objects that cannot exist without the Aggregate root.
For instance, let's take the Customer->Order->LineItem->Product example that's been bludgeoned to death by now. The aggregate root as I've displayed it is customer in this scenario. That stated, you don't always want to get to the order through the customer. You might want to find orders on a specific date.
Turning it on its side, you also wouldn't have a Customer that doesn't have an Order. The two are in a somewhat symbiotic relationship, so one isn't the aggregate root of the other.
The point is that you don't want to have to load a customer through an order, but you don't necessarily want to load an order through the customer either.
Starting at Order, however, it's unlikely that you'd want to just retrieve a LineItem and you're certainly not going to be creating them w/o an order. To that end, the Order serves as the gateway to LineItems. LineItems wouldn't need their own controller or repository. They only exist within the Order itself and, as such, are part of the Order (in this case, Order becomes the aggregate root) and are managed by the Order Entity.
But, a LineItem would likely have a relationship to a Product within the system. Products would have their own controllers, repositories, etc because they can exist outside of the Aggregate root.
In summary to my rambling, I tend to look at it this way: if an Entity can exist by itself, it should have a controller. Entities that cannot exist on their own (LineItems in this case) should be managed solely by their container (aggregate root).
Will some DDD purist please correct me if/where I'm wrong?
As to the second part of your question, I would need some more details about how you envision these other Entities working. With what you've put here, I'd imagine that PackagingOptions are related to a product and would be part of a Product aggregate root. Now, the fact that you're editing them raises the question of whether they are a lookup table in the system or one-off values that should be treated as Value Objects.
Kaivalya,
Regarding your last comment (stateless HTTP):
It depends on the context. Before getting into the details, I should tell you a basic principle about aggregates:
Aggregates define a group of related objects that should be treated as a single unit for the purpose of data change.
This is extremely important. The purpose of having Aggregates is to enforce invariants. For example, you may have a policy like "An Order cannot exceed $500". Then, to enforce this policy, you put Order and OrderItem together in the Order Aggregate. This way, any time you add a new OrderItem, it should be added via Order object. There, you can check the total price and make sure it does not exceed $500. If you don't have such invariants in your domain, then there is no point loading all these objects together.
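As a minimal sketch of that idea (class and method names here are invented for illustration, not a prescribed design):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustrative sketch: the "Order cannot exceed $500" invariant lives inside the root.
    public class OrderItem
    {
        public string ProductName { get; set; }
        public decimal Price { get; set; }
    }

    public class Order
    {
        private readonly List<OrderItem> _items = new List<OrderItem>();

        public IReadOnlyCollection<OrderItem> Items { get { return _items; } }
        public decimal Total { get { return _items.Sum(i => i.Price); } }

        // Items can only be added through the root, so the rule cannot be bypassed.
        public void AddItem(OrderItem item)
        {
            if (Total + item.Price > 500m)
                throw new InvalidOperationException("An Order cannot exceed $500.");
            _items.Add(item);
        }
    }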
Now, getting back to your comment:
If you do have invariants that should be enforced, then it is okay to load the entire aggregate even though it may have some overhead. Yes, HTTP is stateless, and you load your whole aggregate just to modify one of its child objects and then throw it away. That is okay. What matters most here is that you are enforcing your invariants. This is what DDD is for.
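For what it's worth, that "load the whole aggregate, change the child through the root, save, and discard" flow could look roughly like this in an MVC action. This builds on the hypothetical Product/IProductRepository sketch above, and RenamePackagingOption is an assumed method on the aggregate that would apply any invariants:

    using System.Web.Mvc;

    // Hypothetical controller action: both IDs come from the URL; the whole Product
    // aggregate is loaded, modified through the root, saved, and discarded.
    public class ProductController : Controller
    {
        private readonly IProductRepository _products;

        public ProductController(IProductRepository products)
        {
            _products = products;
        }

        [HttpPost]
        public ActionResult EditPackagingOption(int productId, int packagingOptionId, string newName)
        {
            var product = _products.GetById(productId);                 // load the whole aggregate
            product.RenamePackagingOption(packagingOptionId, newName);  // assumed method on the root
            _products.Save(product);
            return RedirectToAction("Details", new { id = productId });
        }
    }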
The purpose of DDD is to capture all the business logic in your domain. You could definitely achieve better performance if you didn't have to load the entire aggregate, but how would you enforce your invariants? You'd most likely have to do it in your stored procedures. Yes, that works and it is fast, but dealing with business logic in stored procedures during maintenance is a nightmare. That is why DDD evolved: so you can model your business requirements using object-oriented languages/tools, where they are easier to understand and modify.
Just remember, DDD is a great approach, but not for all types of projects. If you are dealing with a project that has a lot of business logic and a high chance of it changing due to the nature of the business, then you should use DDD. However, if your project is more of a "read something/write something" affair without much business logic involved, using DDD is a headache. You could simply use LINQ to SQL (or SqlDataAdapters) and send your objects to your Views. You don't even have to worry about identifying Entities, Value Objects, Aggregates, Repositories, etc.
Hope this helps,
Mosh
Related
We have a large application that allows the user to switch between different modules within the application. Each module needs to be able to save separately, so each module has its own EntityManager.
There are some lookup tables, though, that we would like to use across the application. If we load the lookup tables at the application level, using a different EntityManager, they are then not very usable within the modules.
For example, if I want to load a 'Countries' lookup table at the application level, I then can't do something as simple as:
Person.Country = lookupDataContext.getCountry('Norway')
if Person is within a module's EntityManager. Instead I will get something like:
"An Entity cannot be attached to an entity in another EntityManager. One of the two entities must be detached first."
Am I understanding BreezeJS correctly? If so, does that mean I need to have the Countries lookup within each module's EntityManager? This seems very limiting.
I believe this question is related to your other question about having multiple EntityManagers. Check out my general thoughts there which cover the scenario you describe here.
To be slightly more specific:
Breeze entities cannot navigate to related entities in a different EntityManager; that's what the error message is telling you.
You probably do want separate instances of the reference entities (such as Countries) in both managers.
You can easily copy any set of entities from one manager to another with export and import methods. You don't have to go back to the server.
Now, instead of a lookupDataContext, each sandbox datacontext can have a sandboxContext.lookups.countries method that delivers the appropriate entities from the proper sandbox manager.
Is this limiting? I don't think it's so bad as long as
you aren't duplicating an enormous amount of data across managers.
the reference entities are essentially immutable during a user session.
a user session doesn't keep too many sandbox managers alive at the same time.
you dispose of or re-cycle the sandbox managers (and their datacontexts) when you're done with them.
You should be able to achieve your goal of loading lookups from the server once and managing them centrally in the master manager.
This approach has been very successful in a great number of applications over the last decade (pre-Breeze obviously).
HTH.
In a complicated system you may have business logic related to what a user can see in a given context that you want to re-use across your system.
For example, amazon.com offers different prices to different users depending on a bunch of different rules. Those prices have to be shown consistently in search, product detail pages, email ads, etc.
If you're not yet at the place where it makes sense to extract out internal service APIs, where does this kind of user-specific model logic go in Rails-style MVC? It doesn't belong in the Model (requires too much context), but also doesn't belong in the Controller (needs to be re-used across many views in many controllers).
What are the leading design patterns for this type of problem?
I guess such complex decisions call for introducing a ProductService into your system with some PricePipeline inside it. When any part of the system requests a list of products, passing some filters, the ProductService fetches the products from the DB, instantiates the PricePipeline, and passes the list of products with their initial, DB-loaded prices through each member of the pipeline.
Some caching is obviously required, but... that's another question, isn't it? ;)
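The idea is language-agnostic; purely as an illustration of the shape (sketched here in C# for convenience, with every name invented):

    using System.Collections.Generic;

    // Illustration only: a pipeline of pricing rules applied in sequence to DB-loaded
    // base prices, reusable by search, product detail pages, email ads, etc.
    public class User
    {
        public int Id { get; set; }
    }

    public class PricedProduct
    {
        public int ProductId { get; set; }
        public decimal Price { get; set; }   // starts as the base price loaded from the DB
    }

    public interface IPriceRule
    {
        void Apply(User user, IList<PricedProduct> products);
    }

    public class PricePipeline
    {
        private readonly IEnumerable<IPriceRule> _rules;

        public PricePipeline(IEnumerable<IPriceRule> rules) { _rules = rules; }

        public void Apply(User user, IList<PricedProduct> products)
        {
            foreach (var rule in _rules)
                rule.Apply(user, products);   // each rule may adjust prices for this user
        }
    }

    public class ProductService
    {
        private readonly PricePipeline _pipeline;

        public ProductService(PricePipeline pipeline) { _pipeline = pipeline; }

        public IList<PricedProduct> GetPricedProducts(User user, IList<PricedProduct> productsFromDb)
        {
            _pipeline.Apply(user, productsFromDb);   // fetching from the DB is elided here
            return productsFromDb;
        }
    }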
At work last week we had a meeting / presentation about rethinking how we do MVC, based on what's probably a lot of research by our boss and some reading into other SO questions. One takeaway for me was that when people say "separate logic from data" it would probably be more accurate to say "separate logic from your data source". If you do the first, you might fall prey to an anemic domain model. Am I correct in this?
Secondly, we learned that MVC doesn't contain your business logic anywhere. This should live in a separate service layer or BLL apart from the web app. Reconciling these two points seems a bit tricky: does a particular piece of logic go with the data objects, as basic OOP principles dictate, or in a separate layer?
Here's a specific example that I need help with right now. I'm pretty convinced that this would belong in the service layer but I still have other questions. Let's say I have some behavior that takes as input multiple different entities of different types. It runs, and then as output, it can modify the input entities, and generate new entities as records. In my case it's for a game, but you could say it's like a transaction. There are multiple people involved, some products, and a receipt generated.
The easy question: where would you put this logic? Is it a separate class that gets instantiated?
The hard question (for me) is who is responsible for calling this code? It would feel wrong to have the controller do it. Or is that exactly its job? What if it doesn't get run on any one particular page, but whenever the user accesses the site after a particular time? Base controller?
In general, how do you decide between "this belongs in my entity class so that it isn't just a pile of getters and setters" and "this belongs in my service layer"? Or am I mixing things up... do the entity classes belong in the service layer?
Let's sort out some terminology first. What you're referring to as the "service layer" is more commonly called the "domain model", which, as you say, is not to be confused with an MVC model. At the most basic level, the MVC model encapsulates the domain model. How the two interact isn't defined by the pattern itself, but since the models store state, the logical way is to realise there are two different kinds of state:
The domain state: in the real world this will more often than not be stored in a database somehow, but the domain model should not expose any data source to the MVC model. This allows domain models to retain proper encapsulation. Any logic which mutates or accesses this data should live here, with relevant but abstract accessors for the MVC model to use.
The application state: things like "Which record is being edited at the moment?"
To answer your question it really depends on what you are doing with the data. If you're doing any kind of processing then this should be done in the domain model. If you're just fetching collections of data which are needed for display purposes then the MVC model should query the domain model(s) to retrieve the relevant data. The view should then inquire the model for this data.
So to answer your questions:
In this specific case: the transaction which processes the data should be inside the domain model. It has direct access to all the relevant data and should just be called with any required parameters. This promotes reusability, because it's not directly tied to the MVC model.
Technically, if your controller is accessing the domain model directly, it's closer to an MVVM implementation than an MVC one. However, this is not a bad thing; provided your domain models take arguments which aren't tied to domain logic, there's no real issue. A controller should not, however, be constructing a domain object (e.g. creating a user account and passing it to the model). The reason for having the domain model sit outside the MVC triad is exactly that: it doesn't matter what calls the code, the domain logic is agnostic about the architecture it's running in. This is a good thing. MVC is presentational; sometimes domain logic is just data processing. In regard to "whenever the user accesses the site after a particular time", this is domain logic, so it should certainly go in the domain model. Where exactly depends on what triggers the event, but in this case it could be part of the login routine or similar.
Indeed. The entities (by which I'm assuming you mean objects which refer to a single domain object, a user, a product, a blog, etc?) will probably not contain much logic themselves as they are mostly data structures. An order may have a "getProducts()" or "getDeliveryAddress()" which fetches related entities but the domain model would do any processing on the data itself.
As a rule of thumb, almost any logic that mutates data or processes data that comes from multiple entities should happen in the domain model. There are two main reasons for this: 1. Reusability: that logic can be reused from anywhere. 2. Encapsulation: once you start putting this logic inside entities, you end up with a situation where domain entities have dependencies on other domain entities. This leads to very brittle code in the real world, as you end up with arbitrary rules being introduced at a later date, such as "These customers don't have to enter payment details" or "This is a corporate customer and they don't have a billing address". If you've modelled your "Order" class to be constructed with dependencies on a set of products, a user, and a billing address, this becomes a larger task than dealing with it at an earlier stage in the domain model.
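To make the first point concrete, here is a minimal sketch (all class and method names invented) of a piece of domain logic that takes several entities, mutates them, and produces a new record; the controller, or whatever else triggers it, just passes the inputs in:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Illustration only: the "transaction" lives in the domain model;
    // a controller, scheduled job, or anything else simply calls it.
    public class Player
    {
        public string Name { get; set; }
        public int Score { get; set; }
    }

    public class Receipt
    {
        public DateTime CreatedAt { get; set; }
        public string Summary { get; set; }
    }

    public class GameRound
    {
        public Receipt Play(IList<Player> players, int pointsAtStake)
        {
            var winner = players.OrderByDescending(p => p.Score).First();
            winner.Score += pointsAtStake;                // mutate the input entities

            return new Receipt                            // generate a new record as output
            {
                CreatedAt = DateTime.UtcNow,
                Summary = winner.Name + " gained " + pointsAtStake + " points"
            };
        }
    }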
In my asp.net mvc 3 application, I'm using the repository pattern.
I have 3 entities, Company, Country, City. Each of them has their own repository. Company entity has FoundedCountry and FoundedCity foreign keys.
Now, in a view, I want to show the company details. In this view I want to show the Company details as well as the FoundedCountry name and the FoundedCity name. In my opinion I have to handle this with a kind of JOIN query, but I'm stuck on how to achieve this with the repository pattern. How can I handle this JOIN in the repository pattern?
Thank you.
The repository should have a task-based interface. This means that ORMs, joins, etc. are inside the repository. The app just sees an interface which returns an object that it can use.
This means you don't create a repository around a table (it pretty much defeats the purpose). In your scenario I suggest you have (at least) 2 repositories: one will handle everything related to updating the model and the other will serve only reads (queries).
This means the query repository will return only the data you want (it basically returns view model bits). Of course, the actual tables and joins are an implementation detail of the repository.
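As a hedged sketch of what the read side could look like (the interface, class, and property names below are assumptions, and the actual JOIN stays inside whatever implementation you write against your ORM or raw SQL):

    // Illustration only: the read-side repository returns a flat view model;
    // the JOIN is hidden inside the implementation.
    public class CompanyDetailsViewModel
    {
        public string CompanyName { get; set; }
        public string FoundedCountryName { get; set; }
        public string FoundedCityName { get; set; }
    }

    public interface ICompanyQueryRepository
    {
        CompanyDetailsViewModel GetCompanyDetails(int companyId);
    }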
Don't construct your repository pattern in a way that prevents joins! This usually means using the same ORM context (DataContext/ObjectContext) for all instances associated with the current HTTP request.
I consider it to be an anti-pattern to have a generic IRepository because database access is rarely constrained to a single type of entity at the same time.
You could consider the DataContext/ObjectContext to be a repository by itself.
A last advice: If you don't know what a repository abstraction is good for - don't use one.
I'm creating an inventory system with Ruby on Rails as the application server and Java clients as the frontend.
Part of the project mandates that we create an integrated class diagram (a class diagram that includes all classes and shows the relationships). The way the class has been designed, and what we've been taught before, was to use the Boundary-Control-Entity (BCE) pattern to create appropriate classes. However, since we're using Rails, which uses an MVC architecture, the two directly conflict: there is not a 1:1 correlation between the patterns, especially considering that the 'views' in our case are just XML, so there will be no class diagram for the views, and a Boundary class covers both the input of the controller and the output of a view.
So far, our class diagram just features the Rails-related classes (since the client classes are mostly just UI). Here is the result of what we've done so far (ignore the fact that we have a million getters and setters; it's a requirement for the project that we won't actually implement that way, as we'll use attr_accessor):
So, are we on the right track? Anything to add/edit/move? And how exactly do we correctly model the built-in ActiveRecord validation methods that we'll be using (such as validates_numericality_of :price)?
Any help is greatly appreciated! Thanks.
It seems like you are given several constraints. If I understand it correctly, you used BCE in the analysis and MVC for the architecture. In RUP there are two models for these purposes - the analysis model and the design model - both expressed through class diagrams. So if you want to show that you used the BCE approach as well as the MVC architecture in one monstrous diagram, you can draw the Boundaries, Controls, and Entities from the analysis and your RoR-based solution classes for the design, and connect them using dependencies with the «trace» stereotype.
I am not entirely sure how the validation methods are implemented in RoR. My guess is that when you call a validates... method in a model class definition, the particular model class is enhanced through metaprogramming with a new private method, which will serve as a callback for the validation phase. I am really not sure about this, but if it is true and there is metaprogramming involved, you have a problem. AFAIK, you can either draw a diagram which shows the classes after adding the methods (something like an object diagram on the class level...) or you could model the metaprogramming through package merge, which is not easy either.
You have correctly analyzed the conflict between BCE and MVC. So let's try to map your classes:
Employee, Store, Product, Location are clearly «entity»
EmployeeController, StoreController, ProductController, LocationController are clearly «control» classes that correspond to the simple management of the individual entities.
ActiveRecord is not really an entity. This shows that you are no longer in an analysis model, but already in a design model that is more refined. You could then nevertheless use «entity» since this class contributes only to their implementation.
Manager and Receiver are somewhat too ambiguous for me to categorize properly. If, however, they are supposed to represent a special role of an Employee, it would be better to use composition over inheritance, because an Employee may start as a plain Employee and then one day become a Receiver, and later a Manager. The generalization/specialization relationship does not allow this flexibility: once an Employee is created, it will be either a Manager or a Receiver all its life.
What is not so clear is whether your XxxControllers really correspond to use cases and really do the coordination between the contributing classes:
Use-cases typically are described by verbs, such as Maintain employee records instead of EmployeeController.
Use-cases may need to access several entities. For example, one part of maintaining employee records is certainly to assign an employee to a store. Since the controller will be responsible for coordinating all the contributing objects, it should also have access to Store, since it needs to make sure that the store assigned to an employee really exists and is in a status that allows assignment (e.g. not in status "StoreShutDownDefinitively"). And this is absolutely not clear in your current diagram.
Last but not least, a «boundary» is in principle expected for every link between an actor (user or remote system) and a use case. Making the XML is not sufficient: you need either to send the XML to another system or to display the XML on the screen, with some scrolling if needed. And maybe you'll have to react to requests or give the user the opportunity to query for another record:
In an analysis model you would have as many «boundary» classes as linked actors.
But in a design model, you could decide to regroup several boundaries together into one class that covers them all. You need at least one boundary class, though.
I don't know RoR, but if I understand the diagram in that article correctly, your boundary would correspond to the view and the routing. In a classical MVC, you'd also have the controller in the boundary. But looking at the details of the article, I have the impression that RoR ActionControllers are in fact closer to use cases (i.e. «control») than to an MVC controller.