The best place for mapping M->VM in MVC? - asp.net-mvc

I use ASP.NET MVC 3.
I encountered at least 2 approaches for mapping Model->ViewModel on the server side:
inside ViewModel class constructor
inside Controller or designated mapper class
I like the first approach the most, as the ViewModel property declarations and their mapping live in the same place, which makes them easy to maintain and unit-test. Can anybody give more pros and cons, or other better practices?

ViewModels can exist independently of any database-originated model classes.
I don't recommend putting ViewModel population code inside the Controller, as this is not the responsibility of the controller (and is also a maintenance nightmare).
My opinion is that mapping from ViewModel to DBModel (and vice-versa) is the responsibility of the ViewModel, so all of my ViewModel classes implement two members:
public static TViewModel FromDBModel(TDBModel dbModel);
public void ToDBModel(TDBModel dbModel);
The first is a static method that the Controller calls when returning a View. The static method constructs an instance of the ViewModel and sets its members accordingly.
The instance ToDBModel method is passed a constructed DBModel instance (either constructed by the Repository when retrieving or updating data, or constructed by the controller when inserting new data).
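To make that concrete, here is a minimal sketch of the shape these two members take. The Customer/CustomerViewModel names and their properties are invented for illustration; they are not from the question.

public class Customer
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

public class CustomerViewModel
{
    public int Id { get; set; }
    public string FullName { get; set; }

    // Called by the controller when returning a View.
    public static CustomerViewModel FromDBModel(Customer dbModel)
    {
        return new CustomerViewModel
        {
            Id = dbModel.Id,
            FullName = dbModel.FirstName + " " + dbModel.LastName
        };
    }

    // The caller passes in a Customer constructed by the repository
    // (when updating) or by the controller (when inserting).
    public void ToDBModel(Customer dbModel)
    {
        var parts = (FullName ?? string.Empty).Split(' ');
        dbModel.FirstName = parts.Length > 0 ? parts[0] : string.Empty;
        dbModel.LastName = parts.Length > 1 ? parts[1] : string.Empty;
    }
}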
HTH.
EDIT: Note that many people swear by libraries such as AutoMapper (which uses reflection and other tricks to automate the DBModel<->ViewModel mapping process). I'm not a fan of auto-mapping because it takes control away from the developer and I don't see it buying me time when I have to learn how the mapper works and how to get it to map non-trivial operations. YMMV.

I tend to keep entities and view models separate, such that they are unaware of each other. This improves encapsulation and minimizes dependencies when testing the controllers and the mapping itself. See Separation of concerns.
Instead I'd write classes to perform the mappings myself (if it's simple) or use AutoMapper, and call that mapping from within the controller. For larger systems with tens or hundreds of database entities and views, I tend to lean towards AutoMapper; writing the mapping yourself can become very tedious and error-prone. You have to balance the cost of writing it yourself against the value such an implementation gives to the business. After all, if we wanted to know everything about every framework, we'd each be writing our own version of the .NET framework. :)
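For the hand-written route, a small dedicated mapper class that the controller calls might look roughly like this. All of the type names here (Order, OrderViewModel, OrderMapper) are illustrative assumptions, not from the answer above.

public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
}

public class OrderViewModel
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
    public string TotalDisplay { get; set; }
}

public static class OrderMapper
{
    // One obvious, testable place where the shaping for the view happens.
    public static OrderViewModel ToViewModel(Order order)
    {
        return new OrderViewModel
        {
            Id = order.Id,
            CustomerName = order.CustomerName,
            TotalDisplay = order.Total.ToString("C")
        };
    }
}

The controller then just returns View(OrderMapper.ToViewModel(order)) and stays free of mapping details.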
That said, there may be little benefit to using view models for some systems, especially those where there is a one-to-one mapping between "fields" in a view and database entities [aka typical CRUD]. I usually cringe when I see that, but it is always an option given the time frame and complexity of the system.
Then there is the case where you use ASP.NET MVC to expose an API. In this case the "application/json" and "text/xml" representations of your entities are just "views". View models are often used to filter sensitive and unnecessary data out of that external presentation. In this case mapping becomes rather complex because there may be several representations (and versions thereof) of the same entity. However, this seems outside the scope of the OP's question.

Related

Project Structure in MVC design Pattern iOS

First of all, I know MVC well and have been using it in projects, but when it comes to organizing classes and their roles I am not quite sure of their proper implementation. Let's take a scenario to proceed with:
A sample app that will display all Employees and Departments. Data will be fetched from web services (JSON) and stored offline (Core Data).
So MVC pattern would be:
View will be my storyboard with Employee and Department UIViewController.
Controller will be EmployeeViewController.swift and DepartmentViewController.swift
Model will be Employee.swift and Department.swift
class Employee: NSObject {
    var name: String?
}

class Department: NSObject {
    var departmentName: String?
}
ServiceManager which will make calls to the web service.
ParseData which will parse the web service response and convert it into Employee and Department objects
CoreDataManager is a singleton class to manage CRUD operations on the offline DB.
Here is a series of questions I have about the above scenario:
Is my understanding correct? Does the structure I am trying to build follow proper MVC?
How will the controller interact with these components (ServiceManager, ParseData, CoreDataManager)? Should there be another class to facilitate the communication between the controller and data management? (If the controller does this, it will become a tightly coupled and massive structure.)
Should the Model have any code other than properties and initialization methods? Most of the models I have seen only have property declarations.
Should there be separate UIView classes instead of a storyboard to create a proper MVC structure?
Is my understanding correct? Does the structure I am trying to build follow proper MVC?
First I will say that "proper" MVC will depend on who you're asking. Its origin is commonly attributed to Trygve Reenskaug when he introduced this into Smalltalk in the 70's. However, his type of MVC was widely different from the bloated versions most commonly used today. The modern way of thinking about MVC is
Model = mostly a dumb class which primarily encapsulates data
View = whatever we present on the screen
Controller = the big lump of code that does almost everything, sometimes offloaded to a manager class or two of some sort
Reenskaug, however, would have a model and a view and a controller for a button. For a label. For a field. I'm not saying that is what we should strive for, but there should be better ways to structure a project than using the Massive ViewController pattern (as it is jokingly referred to in the iOS community).
Luckily, there are.
Uncle Bob is preaching Clean Architecture. There are several implementations of this, and various people have made their own implementations of this for iOS, like VIPER and Clean Swift.
How will the controller interact with these components (ServiceManager, ParseData, CoreDataManager)? Should there be another class to facilitate the communication between the controller and data management? (If the controller does this, it will become a tightly coupled and massive structure.)
Following the principles of Clean Architecture, you should encapsulate these functionalities into layers, in a way that enables you not just to split the code into multiple components, but also enables you to replace them with other components when that is needed. (But yes, at the very least avoid putting all of this in your controller!)
Should the Model have any code other than properties and initialization methods? Most of the models I have seen only have property declarations.
Again, there is not a single answer here. Some proponents of "real" OOP will say that each object should be self-sufficient (i.e. a model object should know how to persist itself), while others extract the knowledge of such operations into "managers". Putting persistence code into the object can mean littering persistence functionality across many objects, or require you to rely on subclassing or other solutions to avoid this coupling.
Should there be separate UIView classes instead of a storyboard to create a proper MVC structure?
Storyboard or not does not determine whether you're using "proper" MVC. Also, what kind of class you're choosing (UIView or UIViewController) to represent the View is also not important. Your ViewController can be dumbed down to such a degree that it contains no logic (forwarding the logic that it DOES have to another class, i.e. the Presenter in VIPER).
I would recommend reading about the Clean Architecture and maybe watch a video of Uncle Bob explaining it, read other people's reports on implementing it, and then consider whether MVC is the correct pattern for your iOS project.

MVC thin controller architecture

Lately I've been toying with the idea of placing ViewModels in a separate project and populating them in repositories, then handing them to the controller. This could make for really thin controllers.
What is this pattern called?
Hexagonal Architecture has this notion of Adapters, in this case you're adapting from business objects to presentation objects.
However :
If you mean repositories as in persistence layer repositories, it's typically not their responsibility to populate presentation-specific data structures. The persistence layer shouldn't know about the UI.
"Thin controller" doesn't mean you have to place the ViewModels or ViewModel population logic in a separate project. Besides, just because a controller shouldn't contain this logic doesn't mean it can't invoke it. Your controller can call an Adapter object from the same MVC project to convert from whatever it receives to ViewModels, or you could just do the conversion in the ViewModel's constructor.
While @guillauem31's answer is useful, I think it was missing a bit, and was a bit misleading.
In short, an adapter is what the 'Design Patterns' book describes with the generic 'Adapter' pattern:
"Convert the interface of a class into another interface clients expect."
In my mind, I'd like to place an adapter between the controller and repository.
He usefully suggests that the adapter can be in a constructor of the viewmodel. I'm not sure I like this, but it seems okay.
I'd really like to keep my models as simple class objects if possible.
So I'd be equally okay with populating the viewmodels in a service layer.
and I guess that's where this question comes in...
Fat model / thin controller vs. Service layer
and here is an approach where the viewmodels are populated using an adapter of sorts
http://paulstovell.com/blog/clean-aspnet-mvc-controllers

Is it considered bad design to pass a repository interface as an argument to a method on a domain class?

Our domain model is very anemic right now. Our entities are mostly empty shells, almost purely designed for holding values and navigating to collections.
We are using EF 4.1 code-first ORM, and the design so far has been to shield our novice developers against the dreaded "LINQ to Entities cannot translate blablabla to a store expression" exception when querying against the context during early iterations.
We have various aggregate root repository interfaces over EF. However, some blocks of code in the impls seem like they should be the domain's responsibility. As long as the repository interface is declared in the domain, and the impl is in the infrastructure (dependency injected), is it considered bad design to pass a repository interface as an argument to a method on an entity (or other domain) class?
For example, would this be bad?
public class EntityAbc {
    public void SaveTo(IEntityAbcRepository repos) {...}
    public void DeleteFrom(IEntityAbcRepository repos) {...}
}
What if a particular entity needed access to other aggregate root repositories? Would this be ok or not, and why?
public void Save() {
    var abcRepos = DependencyInjector.Current.GetService<IEntityAbcRepository>();
    var xyzRepos = DependencyInjector.Current.GetService<IEntityXyzRepository>();
    // work with repositories
}
Update 1
I did not mention moving code to an application layer because I consider some of the code that uses IEntityAbcRepository to involve business rule enforcement. The repository impl should be as vanilla as possible, right? Its main responsibility should just be a simple abstraction over the ORM, allowing you to find / add / update / delete entities. Wrong?
Also, this question applies to methods on other non-entity domain classes -- factories, services, whatever pattern may be appropriate. Point being, I'm asking the question about any method on a domain class, not just an entity class. @Eranga, this is one place where you can use constructor injection, because factories & services are not part of the ORM.
The application layer could then coordinate flow by injecting a repository impl into its constructor, and passing it as an argument to a domain service or factory. Is this bad practice?
Update 2
Adding another clarification here. What if the domain only needs access to the IEntityAbcRepository in order to execute its Find() method(s)? In the example above, the SaveTo and DeleteFrom methods would not invoke any add / update / delete methods on the repository interface.
So far we've combined the find / add / update / delete methods on a single aggregate root repository interface for simplicity. But I suppose there's nothing stopping us from separating them out into 2 interfaces, like so:
IEntityAbcReadRepository <-- defines all find method signatures
IEntityAbcWriteRepository <-- defines all add / update / delete method sigs
In this case, would it be bad practice to pass IEntityAbcReadRepository as a parameter to a domain method?
Your first approach is better than the second approach, which uses the "Service Locator" pattern. Dependencies are more obvious in the first approach.
Here are some links that explain why "Service Locator" is a bad choice:
Is it bad to use servicelocation instead of constructor injection
...
Singleton Vs ServiceLocator
Say no to ServiceLocator
Both of these solutions stem from the fact that EF does not allow you to use constructor injection. However you can use property injection as explained in this answer. But that does not guarantee that mandatory dependencies are present.
So your first approach is the better solution.
Short answer: Yes!
Long answer:
Consider creating an AbcService in your application service layer. This service layer sits between your domain and your infrastructure. You can inject as many repositories into AbcService as you want. Then let the service handle SaveTo and DeleteFrom.
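A rough sketch of what that service might look like. The repository method names (Add, Remove), the stub interfaces, and the EntityAbc shape are assumptions for illustration only:

public class EntityAbc
{
    public int Id { get; set; }
}

public interface IEntityAbcRepository
{
    void Add(EntityAbc entity);
    void Remove(EntityAbc entity);
}

public interface IEntityXyzRepository { /* another aggregate root's repository */ }

public class AbcService
{
    private readonly IEntityAbcRepository _abcRepos;
    private readonly IEntityXyzRepository _xyzRepos;

    public AbcService(IEntityAbcRepository abcRepos, IEntityXyzRepository xyzRepos)
    {
        _abcRepos = abcRepos;
        _xyzRepos = xyzRepos;
    }

    public void Save(EntityAbc entity)
    {
        // Enforce business rules here (or better, on the entity itself),
        // then hand the entity to the repository for persistence.
        _abcRepos.Add(entity);
    }

    public void Delete(EntityAbc entity)
    {
        _abcRepos.Remove(entity);
    }
}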
SaveTo and DeleteFrom, unless you are saving to and deleting from another entity, i.e. no data access is involved, are methods that sound like they shouldn't be on a domain entity, IMO.
Having persistence logic in your domain entities is IMO bad design in the first place. Good separation of concerns should mean that domain/business logic is separated from persistence logic, so your domain classes should be persistence ignorant.
Previous Entity Framework versions might not have allowed such a separation, but I think the most recent versions have solved that problem. I'm not that familiar with EF though, so I might be wrong.
With that said, where can you put methods such as Save() and Delete() ?
If you want to add to/remove your entity from its repository, Repository.Add() and Repository.Remove() are good choices. A repository basically serves as an illusion of an in-memory collection of your entities, so it makes sense for it to behave just like a collection or a list with the appropriate methods.
If you want to persist changes made to an existing entity, there are other ways to do that. You could have a Repository.Save() method but some consider it bad practice. Oftentimes the changes are part of a higher level operation handled in a transaction-like context such as a Unit of Work, in that case you can let the operation persist all the objects in its scope when it finishes. For instance, if you use an Open Session in View approach for your web application, changes are automatically persisted when the request ends.
Or you can rely on an ad-hoc call of your ORM's Save() method for your particular entity which hopefully shouldn't be grafted onto the entity code itself (with NHibernate, for instance, it's available at runtime on the proxied entity).
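As a minimal sketch of the Unit of Work option mentioned above (all names invented for illustration): the repository only mutates a tracked in-memory set, and the unit of work commits everything when the higher-level operation finishes.

public class Order
{
    public int Id { get; set; }
}

public interface IOrderRepository
{
    void Add(Order order);   // tracked in memory only
}

public interface IUnitOfWork
{
    void Commit();           // persists all tracked changes
}

public class PlaceOrderHandler
{
    private readonly IOrderRepository _orders;
    private readonly IUnitOfWork _unitOfWork;

    public PlaceOrderHandler(IOrderRepository orders, IUnitOfWork unitOfWork)
    {
        _orders = orders;
        _unitOfWork = unitOfWork;
    }

    public void Handle(Order order)
    {
        _orders.Add(order);
        // ... other work within the same business transaction ...
        _unitOfWork.Commit();
    }
}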
[Update]
Putting that in perspective with your subsequent questions (though I'm not sure I understand all of them well) :
I see no value in splitting your repository into a ReadRepository and a WriteRepository. In DDD, a repository's responsibility is clearly to provide a collection to query from as well as add to or remove from. It's still quite cohesive that way.
It's not an entity's responsibility to fiddle with its own persistence, so it shouldn't be aware of its own repository for that precise purpose. Otherwise, it's pretty rare that an entity rightfully needs to have knowledge of its own repository (usually it means that the entity has a relationship to another entity of the same type, like parent/child, and you want to get the other entity from the repository)
However, entities and other domain objects obviously do need to obtain references to other entities at times. In that case, try to get these references through traversal of other objects within the boundary of your aggregate first before looking for a repository. If you absolutely need a repository to get the object you want, it's a good idea to inject the repository through any flavour of injection you like. As Eranga pointed out, service locator might turn out to be a sub-par dependency injection ersatz though.
Last thing, the kind of injection you mentioned - SaveTo(IEntityAbcRepository repos) - is peculiar because it is neither constructor nor setter injection, but rather an ephemeral injection lasting just the time of a method. It implies that whoever calls your method must know what repository to pass at that precise moment, which is not obvious. It might be useful, but I'd say it's not the form of injection you would typically mainly use.

Why not use an IoC container to resolve dependencies for entities/business objects?

I understand the concept behind DI, but I'm just learning what different IoC containers can do. It seems that most people advocate using IoC containers to wire up stateless services, but what about using them for stateful objects like entities?
Whether it's right or wrong, I normally stuff my entities with behavior, even if that behavior requires an outside class. Example:
public class Order : IOrder
{
    private string _ShipAddress;
    private IShipQuoter _ShipQuoter;

    public Order(IOrderData OrderData, IShipQuoter ShipQuoter)
    {
        // OrderData comes from a repository and has the data needed
        // to construct order
        _ShipAddress = OrderData.ShipAddress; // etc.
        _ShipQuoter = ShipQuoter;
    }

    private decimal GetShippingRate()
    {
        return _ShipQuoter.GetRate(this);
    }
}
As you can see, the dependencies are Constructor Injected. Now for a couple of questions.
Is it considered bad practice to have your entities depend on outside classes such as the ShipQuoter? Eliminating these dependencies seems to lead me towards an anemic domain, if I understand the definition correctly.
Is it bad practice to use an IoC container to resolve these dependencies and construct an entity when needed? Is it possible to do this?
Thanks for any insight.
The first question is the most difficult to answer. Is it bad practice to have Entities depend on outside classes? It's certainly not the most common thing to do.
If, for example, you inject a Repository into your Entities you effectively have an implementation of the Active Record pattern. Some people like this pattern for the convenience it provides, while others (like me) consider it a code smell or anti-pattern because it violates the Single Responsibility Principle (SRP).
You could argue that injecting other dependencies into Entities would pull you in the same direction (away from SRP). On the other hand you are certainly correct that if you don't do this, the pull is towards an Anemic Domain Model.
I struggled with all of this for a long time until I came across Greg Young's (abandoned) paper on DDDD where he explains why the stereotypical n-tier/n-layer architecture will always be CRUDy (and thus rather anemic).
Moving our focus to modeling Domain objects as Commands and Events instead of Nouns seems to enable us to build a proper object-oriented domain model.
The second question is easier to answer. You can always use an Abstract Factory to create instances at run-time. With Castle Windsor you can even use the Typed Factory Facility, relieving you of the burden of implementing the factories manually.
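For the second point, a hand-rolled abstract factory (roughly what Windsor's Typed Factory Facility automates for you) might look like this, reusing the IOrder/IOrderData/IShipQuoter types from the question; the factory itself is an illustrative sketch:

public interface IOrderFactory
{
    IOrder Create(IOrderData orderData);
}

public class OrderFactory : IOrderFactory
{
    private readonly IShipQuoter _shipQuoter;

    // The container resolves the factory once, supplying the stateless
    // dependency; the caller supplies the per-instance state at run-time.
    public OrderFactory(IShipQuoter shipQuoter)
    {
        _shipQuoter = shipQuoter;
    }

    public IOrder Create(IOrderData orderData)
    {
        return new Order(orderData, _shipQuoter);
    }
}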
I know this is an old post but wanted to add: the domain entity should not persist itself, even if you pass an abstracted repository into the ctor. The reason I am suggesting this is not merely that it violates SRP; it is also contrary to DDD's notion of aggregation. Let me explain: DDD is suited to complex apps with inherently deep graphs, so we use aggregate or composite roots to persist changes to the underlying "children". When we inject persistence into the individual children, we violate the relationship the children have to the composite or aggregate root that should be "in charge" of their life cycle. Of course, the composite root or aggregate does not persist its own graph either.
Another issue with injecting dependencies into DDD objects is that an injected domain object effectively has no state until some other event takes place to hydrate its state. Any consumer of the code will be forced to init or set up the domain object before they can invoke business behavior, which violates encapsulation.

Repository Pattern vs DAL

Are they the same thing? I just finished watching Rob Connery's Storefront tutorial and they seem to be similar techniques. I mean, when I implement a DAL object I have GetStuff, Add/Delete, etc. methods, and I always write the interface first so that I can switch the db later.
Am I confusing things?
You're definitely not the one who confuses things. :-)
I think the answer to the question depends on how much of a purist you want to be.
If you want a strict DDD point of view, that will take you down one path. If you look at the repository as a pattern that has helped us standardize the interface of the layer that separates between the services and the database it will take you down another.
The repository, from my perspective, is just a clearly specified layer of access to data. Or, in other words, a standardized way to implement your Data Access Layer. There are some differences between different repository implementations, but the concept is the same.
Some people will put more DDD constraints on the repository while others will use the repository as a convenient mediator between the database and the service layer. A repository like a DAL isolates the service layer from data access specifics.
One implementation issue that seems to make them different, is that a repository is often created with methods that take a specification. The repository will return data that satisfies that specification. Most traditional DALs that I have seen, will have a larger set of methods where the method will take any number of parameters. While this may sound like a small difference, it is a big issue when you enter the realms of Linq and Expressions.
Our default repository interface looks like this:
public interface IRepository : IDisposable
{
    T[] GetAll<T>();
    T[] GetAll<T>(Expression<Func<T, bool>> filter);
    T GetSingle<T>(Expression<Func<T, bool>> filter);
    T GetSingle<T>(Expression<Func<T, bool>> filter, List<Expression<Func<T, object>>> subSelectors);
    void Delete<T>(T entity);
    void Add<T>(T entity);
    int SaveChanges();
    DbTransaction BeginTransaction();
}
Is this a DAL or a repository? In this case I guess it's both.
Kim
A repository is a pattern that can be applied in many different ways, while the data access layer has a very clear responsibility: the DAL must know how to connect to your data storage to perform CRUD operations.
A repository can be a DAL, but it can also sit in front of the DAL and act as a bridge between the business object layer and the data layer. Which implementation is used is going to vary from project to project.
One large difference is that a DAO is a generic way to deal with persistence for any entity in your domain. A repository on the other hand only deals with aggregate roots.
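To illustrate that distinction with made-up names: a DAO tends to be generic over any entity, while a repository exists only per aggregate root.

public class Order
{
    public int Id { get; set; }
}

// Generic DAO: one interface reused for every entity in the domain.
public interface IDao<T>
{
    T GetById(int id);
    void Save(T entity);
    void Delete(T entity);
}

// Repository: defined only for the aggregate root. Child entities such
// as order lines are reached and persisted through their Order.
public interface IOrderRepository
{
    Order GetById(int id);
    void Add(Order order);
    void Remove(Order order);
}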
I was looking for an answer to a similar question and agree with the two highest-ranked answers. Trying to clarify this for myself, I found that if Specifications, which go hand-in-hand with the Repository pattern, are implemented as first-class members of the domain model, then I can
reuse Specification definitions with different parameters,
manipulate existing Specification instances' parameters (e.g. to specialize),
combine them,
perform business logic on them without ever having to do any database access,
and, of course, unit-test them independent of actual Repository implementations.
I may even go so far and state that unless the Repository pattern is used together with the Specification pattern, it's not really "Repository," but a DAL. A contrived example in pseudo-code:
specification100 = new AccountHasMoreOrdersThan(100)
specification200 = new AccountHasMoreOrdersThan(200)
assert that specification200.isSpecialCaseOf(specification100)
specificationAge = new AccountIsOlderThan('2000-01-01')
combinedSpec = new CompositeSpecification(
SpecificationOperator.And, specification200, specificationAge)
for each account in Repository<Account>.GetAllSatisfying(combinedSpec)
    assert that account.Created < '2000-01-01'
    assert that account.Orders.Count > 200
See Fowler's Specification Essay for details (that's what I based the above on).
A DAL would have specialized methods like
IoCManager.InstanceFor<IAccountDAO>()
.GetAccountsWithAtLeastOrdersAndCreatedBefore(200, '2000-01-01')
You can see how this can quickly become cumbersome, especially since you have to define each of the DAL/DAO interfaces with this approach and implement the DAL query method.
In .NET, LINQ queries can be one way to implement specifications, but combining Specification (expressions) may not be as smooth as with a home-grown solution. Some ideas for that are described in this SO Question.
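As a sketch of one home-grown way to combine specifications expressed as Expression<Func<T, bool>> while keeping the result translatable by a LINQ provider. The Specification<T> type and its members are assumptions for illustration, not from any particular library:

using System;
using System.Linq.Expressions;

public class Specification<T>
{
    public Expression<Func<T, bool>> Criteria { get; private set; }

    public Specification(Expression<Func<T, bool>> criteria)
    {
        Criteria = criteria;
    }

    public bool IsSatisfiedBy(T candidate)
    {
        // Compiled on each call for brevity; cache the delegate in real code.
        return Criteria.Compile()(candidate);
    }

    public Specification<T> And(Specification<T> other)
    {
        // Rebind both bodies to a single parameter so the combined
        // expression tree stays translatable by LINQ providers.
        var parameter = Expression.Parameter(typeof(T), "x");
        var left = new ReplaceParameterVisitor(Criteria.Parameters[0], parameter).Visit(Criteria.Body);
        var right = new ReplaceParameterVisitor(other.Criteria.Parameters[0], parameter).Visit(other.Criteria.Body);
        var body = Expression.AndAlso(left, right);
        return new Specification<T>(Expression.Lambda<Func<T, bool>>(body, parameter));
    }

    private class ReplaceParameterVisitor : ExpressionVisitor
    {
        private readonly ParameterExpression _from;
        private readonly ParameterExpression _to;

        public ReplaceParameterVisitor(ParameterExpression from, ParameterExpression to)
        {
            _from = from;
            _to = to;
        }

        protected override Expression VisitParameter(ParameterExpression node)
        {
            return node == _from ? _to : base.VisitParameter(node);
        }
    }
}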
My personal opinion is that it is all about mapping, see: http://www.martinfowler.com/eaaCatalog/repository.html. So the output/input from the repository are domain objects, which on the DAL could be anything. For me that is an important addition/restriction, as you can add a repository implementation for a database/service/whatever with a different layout, and you have a clear place to concentrate on doing the mapping. If you were not to use that restriction and have the mapping elsewhere, then having different ways to represent data can impact the code in places it shouldn't be changing.
It's all about interpretation and context. They can be very similar or indeed very different, but as long as the solution does the job, what is in a name!
To the external world (i.e. client code), a repository is the same as a DAL, except:
(1) its insert/update/delete methods are restricted to take the data container object as the parameter.
(2) for read operations it may take a simple specification like a DAL does (for instance GetByPK) or an advanced specification.
Internally it works with a data mapper layer (for instance an Entity Framework context) to perform the actual CRUD operations.
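A minimal sketch of that internal arrangement, with an EF 4.1 DbContext acting as the data mapper; GenericRepository and its members are illustrative names:

using System.Data.Entity;

public class GenericRepository<T> where T : class
{
    private readonly DbContext _context;

    public GenericRepository(DbContext context)
    {
        _context = context;
    }

    // Simple read by primary key.
    public T GetByPK(params object[] keyValues)
    {
        return _context.Set<T>().Find(keyValues);
    }

    // Insert/update/delete only take the data container object.
    public void Insert(T entity) { _context.Set<T>().Add(entity); }

    public void Delete(T entity) { _context.Set<T>().Remove(entity); }

    // Committing is really the Unit of Work's job (see below); it is
    // exposed here only for convenience.
    public int SaveChanges() { return _context.SaveChanges(); }
}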
What the Repository pattern doesn't mean:
I've also often seen people get confused by sample repository implementations that have a separate Save method besides the Insert/Update/Delete methods, one that commits all the in-memory changes performed by those methods to the database. We can certainly have a Save method in a repository, but isolating the in-memory CUD (Create, Update, Delete) methods from the persistence method (the one that performs the actual write to the database) is not the responsibility of the repository; it is the responsibility of the Unit of Work pattern.
Hope this helps!
Repository is a pattern: a standardized way to implement things so that code can be reused as much as possible.
The advantage of using the repository pattern is that you can mock your data access layer and test your business layer code without calling DAL code. There are other big advantages, but this one seems vital to me.
From what I understand they can mean basically the same thing - but the naming varies based on context.
For example, you might have a Dal/Dao class that implements an IRepository interface.
Dal/Dao is a data layer term; the higher tiers of your application think in terms of Repositories.
So in most of the (simple) cases a DAO is an implementation of Repository?
As far as I understand, it seems that a DAO deals precisely with db access (CRUD - no selects though?!) while a Repository allows you to abstract the whole data access, perhaps acting as a facade for multiple DAOs (maybe different data sources).
Am I on the right path?
One could argue that a "repository" is a specific class and a "DAL" is the entire layer consisting of the repositories, DTOs, utility classes, and anything else that is required.
