I have been working on a new MVC application that utilizes EF4, POCO domain objects, and the Repository <--> Service Layer.
I see a lot of talk about using AutoMapper to map the EF4 classes to the DTO for the View Models. I was under the impression that this was to get rid of the tightly bound EF4 classes. So my question is since I am using the POCO classes, can't I just use those in the View Models? Or is there still a need for AutoMapper?
The argument is that your POCOs are your domain models, and your Views shouldn't be concerned with domain models.
Think about it this way: if you want data annotations for validation, you would have to put them on your POCOs - but input validation (this field is required, etc.) isn't really a domain concern, it's a UI concern - hence the use of ViewModels for data annotations and AutoMapper.
Of course, it's not cut and dried; it's a matter of preference.
I also use MVC/EF4/POCO/AutoMapper/Service Layer, and I never bind to the POCOs - I always use a ViewModel per View.
This way, you have a good level of consistency:
All Views have a ViewModel
POCOs have nothing but business/domain logic
ViewModels have basic input validation
They are then mapped to the POCOs, which invoke domain/business validation (a rough sketch follows below)
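For illustration, a minimal sketch of that split (Customer and CustomerEditViewModel are made-up names, not from the original post):

using System.ComponentModel.DataAnnotations;

// Domain POCO: carries business/domain logic only, no UI validation attributes.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

// ViewModel: one per View, carrying input validation via Data Annotations.
public class CustomerEditViewModel
{
    [Required]
    [StringLength(100)]
    public string Name { get; set; }

    [Required]
    public string Email { get; set; }
}

// Registered once at startup with the classic AutoMapper static API:
// Mapper.CreateMap<CustomerEditViewModel, Customer>();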
**Edit - in response to comments:**
Do your Repositories return IQueryable? If so, how do you handle the context? I mean, do your Repositories implement IDisposable, and do you then dispose of them in the controllers?
Yes - my Repositories return IQueryable<T>, where T is an aggregate root. My Repositories get passed a Unit of Work (which implements IDisposable). The Unit of Work is a wrapper around the EF4 context. StructureMap (the DI container) is responsible for the lifetime of the components (including the UoW, aka the context). I new up a UoW per HTTP request and Dispose of it when the request finishes. My "Service" calls methods on my IQueryable repository and returns collections (i.e. it materializes the query before it is passed back to the Controller).
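Roughly, the moving parts look like this (a sketch with illustrative type names, not my actual code):

using System;
using System.Collections.Generic;
using System.Data.Objects;   // EF4 ObjectContext
using System.Linq;

public interface IUnitOfWork : IDisposable
{
    ObjectContext Context { get; }   // thin wrapper around the EF4 context
    void Commit();
}

public class CustomerRepository
{
    private readonly IUnitOfWork _uow;
    public CustomerRepository(IUnitOfWork uow) { _uow = uow; }

    // Returns IQueryable<T> for the aggregate root; nothing is executed yet.
    public IQueryable<Customer> Find()
    {
        return _uow.Context.CreateObjectSet<Customer>();
    }
}

public class CustomerService
{
    private readonly CustomerRepository _repository;
    public CustomerService(CustomerRepository repository) { _repository = repository; }

    // The service materializes the query before the results reach the controller.
    public IList<Customer> GetCustomersSortedByName()
    {
        return _repository.Find().OrderBy(c => c.Name).ToList();
    }
}

// StructureMap 2.x can scope the UoW per HTTP request, e.g. in a registry:
// For<IUnitOfWork>().HttpContextScoped().Use<UnitOfWork>();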
Where do you do your mapping at? Do you do it in the controller?
Up to you. Personally, I would create a static "bootstrapper" class that has a single method, e.g. "Configure". Call this once in your Application_Start event (Global.asax). This technique is described here.
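A minimal sketch of such a bootstrapper (the map shown is just an example):

using AutoMapper;

public static class AutoMapperBootstrapper
{
    public static void Configure()
    {
        // Register every map once at startup (classic AutoMapper static API).
        Mapper.CreateMap<Customer, CustomerEditViewModel>();
        // ... other maps ...
    }
}

// In Global.asax.cs:
protected void Application_Start()
{
    AutoMapperBootstrapper.Configure();
    // existing route/area registration ...
}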
Good luck!
Related
Lately I've been toying with the idea of placing ViewModels in a separate project and populating them in repositories, then handing them to the controller. This could make for really thin controllers.
What is this pattern called?
Hexagonal Architecture has this notion of Adapters, in this case you're adapting from business objects to presentation objects.
However:
If you mean repositories as in persistence layer repositories, it's typically not their responsibility to populate presentation-specific data structures. The persistence layer shouldn't know about the UI.
"Thin controller" doesn't mean you have to place the ViewModels or ViewModel population logic in a separate project. Besides, just because a controller shouldn't contain this logic doesn't mean it can't invoke it. Your controller can call an Adapter object from the same MVC project to convert from whatever it receives to ViewModels, or you could just do the conversion in the ViewModel's constructor.
While #guillauem31's answer is useful, I think it was missing a bit, and a bit misleading.
In short, an adapter is:
Adapter
The ‘Design Patterns’ book contains a description of the generic ‘Adapter’ pattern:
“Convert the interface of a class into another interface clients expect.”
In my mind, I'd like to place an adapter between the controller and repository.
He usefully suggests that the adapter can be in a constructor of the viewmodel. I'm not sure I like this, but it seems okay.
I'd really like to keep my models as simple class objects if possible.
So I'd be equally okay with populating the viewmodels in a service layer.
And I guess that's where this question comes in...
Fat model / thin controller vs. Service layer
and here is an approach where the viewmodels are populated using an adapter of sorts
http://paulstovell.com/blog/clean-aspnet-mvc-controllers
I use ASP.NET MVC 3.
I encountered at least 2 approaches for mapping Model->ViewModel on the server side:
inside ViewModel class constructor
inside Controller or designated mapper class
I like the first approach the most, as the ViewModel property declarations and their mapping are in the same place, which makes them easy to maintain and unit-test. Can anybody specify more pros and cons, or other better practices?
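To make the first approach concrete, here is a rough sketch (Product and ProductViewModel are just example names):

public class ProductViewModel
{
    public ProductViewModel() { }                // parameterless ctor kept for model binding

    public ProductViewModel(Product product)     // mapping sits next to the property declarations
    {
        Name = product.Name;
        PriceDisplay = product.Price.ToString("C");
    }

    public string Name { get; set; }

    public string PriceDisplay { get; set; }
}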
ViewModels can exist independently of any database-originated model classes.
I don't recommend putting ViewModel population code inside the Controller, as this is not the responsibility of the controller (and is also a maintenance nightmare).
My opinion is that mapping from ViewModel to DBModel (and vice-versa) is the responsibility of the ViewModel, so all of my ViewModel classes implement two members:
public static TViewModel FromDBModel(TDBModel dbModel);
public void ToDBModel(TDBModel dbModel);
The first is a static method that the Controller calls when returning a View. The static method constructs an instance of the ViewModel and sets its members accordingly.
The instance ToDBModel method is passed a constructed DBModel instance (either constructed by the Repository when retrieving or updating data, or constructed by the controller when inserting new data).
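For example, a hypothetical ViewModel implementing both members (Order and OrderViewModel are made-up names):

public class OrderViewModel
{
    public int Id { get; set; }
    public string CustomerName { get; set; }

    // Called by the controller when returning a View.
    public static OrderViewModel FromDBModel(Order dbModel)
    {
        return new OrderViewModel
        {
            Id = dbModel.Id,
            CustomerName = dbModel.CustomerName
        };
    }

    // Receives a DBModel instance constructed by the repository (update) or the controller (insert).
    public void ToDBModel(Order dbModel)
    {
        dbModel.CustomerName = CustomerName;
    }
}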
HTH.
EDIT: Note that many people swear by libraries such as AutoMapper (which uses reflection and other tricks to automate the DBModel<->ViewModel mapping process). I'm not a fan of auto-mapping because it takes control away from the developer and I don't see it buying me time when I have to learn how the mapper works and how to get it to map non-trivial operations. YMMV.
I tend to keep entities and view models separate so that they are unaware of each other. This improves encapsulation and minimizes dependencies when testing the controllers and the mapping itself. See Separation of concerns.
Instead, I'd write classes to perform the mappings myself (if the mapping is simple) or use AutoMapper, and call that from within the controller. For larger systems with tens or hundreds of database entities and views, I tend to lean towards AutoMapper. Writing the mapping yourself can become very tedious and error-prone. You have to balance the value of writing it yourself against the value such an implementation gives to the business. After all, if we wanted to know everything about every framework, we'd each be writing our own version of the .NET framework. :)
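For example, calling AutoMapper from within the controller might look like this (the Product/ProductViewModel pair and the repository call are made-up names, and the map is assumed to have been registered at startup):

public ActionResult Details(int id)
{
    var product = _repository.GetById(id);                            // hypothetical repository call
    var viewModel = Mapper.Map<Product, ProductViewModel>(product);   // classic AutoMapper static API
    return View(viewModel);
}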
That said, there may be little benefit to using view models for some systems, especially those where there is a one-to-one mapping between "fields" in a view and database entities [aka typical CRUD]. I usually cringe when I see that, but it is always an option given the time frame and complexity of the system.
Then there is the case where you use ASP.NET MVC to expose an API. In that case, "application/json" and "text/xml" representations of your entities are just "views". View models are often used to filter sensitive and unnecessary data out of that external presentation. Here the mapping becomes rather complex, because there may be several representations (and versions thereof) of the same entity. However, this seems outside the scope of the OP.
Our domain model is very anemic right now. Our entities are mostly empty shells, almost purely designed for holding values and navigating to collections.
We are using EF 4.1 code-first ORM, and the design so far has been to shield our novice developers against the dreaded "LINQ to Entities cannot translate blablabla to a store expression" exception when querying against the context during early iterations.
We have various aggregate root repository interfaces over EF. However, some blocks of code in the implementations seem like they should be the domain's responsibility. As long as the repository interface is declared in the domain, and the implementation is in the infrastructure (dependency injected), is it considered bad design to pass a repository interface as an argument to a method on an entity (or other domain) class?
For example, would this be bad?
public class EntityAbc {
    public void SaveTo(IEntityAbcRepository repos) {...}
    public void DeleteFrom(IEntityAbcRepository repos) {...}
}
What if a particular entity needed access to other aggregate root repositories? Would this be ok or not, and why?
public void Save() {
    var abcRepos = DependencyInjector.Current.GetService<IEntityAbcRepository>();
    var xyzRepos = DependencyInjector.Current.GetService<IEntityXyzRepository>();
    // work with repositories
}
Update 1
I did not mention moving code to an application layer because I consider some of the code that uses IEntityAbcRepository to involve business rule enforcement. The repository implementation should be as vanilla as possible, right? Its main responsibility should just be a simple abstraction over the ORM, allowing you to find / add / update / delete entities. Wrong?
Also, this question applies to methods on other non-entity domain classes -- factories, services, whatever pattern may be appropriate. The point is, I'm asking about any method on a domain class, not just an entity class. #Eranga, this is one place where you can use constructor injection, because factories & services are not part of the ORM.
The application layer could then coordinate flow by injecting a repository implementation into its constructor and passing it as an argument to a domain service or factory. Is this bad practice?
Update 2
Adding another clarification here. What if the domain only needs access to the IEntityAbcRepository in order to execute its Find() method(s)? In the example above, the SaveTo and DeleteFrom methods would not invoke any add / update / delete methods on the repository interface.
So far we've combined the find / add / update / delete methods on a single aggregate root repository interface for simplicity. But I suppose there's nothing stopping us from separating them out into 2 interfaces, like so:
IEntityAbcReadRepository <-- defines all find method signatures
IEntityAbcWriteRepository <-- defines all add / update / delete method sigs
In this case, would it be bad practice to pass IEntityAbcReadRepository as a parameter to a domain method?
Your first approach is better than the second approach, which uses the "Service Locator" pattern. Dependencies are more obvious in the first approach.
Here are some links that explain why "Service Locator" is a bad choice
Is it bad to use servicelocation instead of constructor injection
...
Singleton Vs ServiceLocator
Say no to ServiceLocator
Both of these solutions stem from the fact that EF does not allow you to use constructor injection. However, you can use property injection, as explained in this answer. But that does not guarantee that mandatory dependencies are present.
So your first approach is the better solution.
Short answer: Yes!
Long answer:
Consider creating an AbcService in your application service layer. This service layer sits between your domain and your infrastructure. You can inject as many repositories into AbcService as you want. Then let the service handle SaveTo and DeleteFrom.
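A rough sketch of what such a service could look like (the Add/Remove repository methods are assumed here, not taken from your interfaces):

public class AbcService
{
    private readonly IEntityAbcRepository _abcRepository;
    private readonly IEntityXyzRepository _xyzRepository;

    // Repositories are constructor-injected here instead of being passed into entity methods.
    public AbcService(IEntityAbcRepository abcRepository, IEntityXyzRepository xyzRepository)
    {
        _abcRepository = abcRepository;
        _xyzRepository = xyzRepository;
    }

    public void Save(EntityAbc entity)
    {
        // enforce business rules (possibly by calling methods on the entity), then persist
        _abcRepository.Add(entity);
    }

    public void Delete(EntityAbc entity)
    {
        _abcRepository.Remove(entity);
    }
}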
Unless you are saving to and deleting from another entity (i.e. no data access is involved), SaveTo and DeleteFrom sound like methods that shouldn't be on a domain entity, IMO.
Having persistence logic in your domain entities is IMO bad design in the first place. Good separation of concerns should mean that domain/business logic is separated from persistence logic, so your domain classes should be persistence ignorant.
Previous Entity Framework versions might not have allowed such a separation, but I think the most recent versions have solved that problem. I'm not that familiar with EF, though, so I might be wrong.
With that said, where can you put methods such as Save() and Delete()?
If you want to add to/remove your entity from its repository, Repository.Add() and Repository.Remove() are good choices. A repository basically serves as an illusion of an in-memory collection of your entities, so it makes sense for it to behave just like a collection or a list with the appropriate methods.
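For example, a collection-like repository contract might look like this (just a sketch with made-up method names):

public interface IEntityAbcRepository
{
    EntityAbc FindById(int id);
    IEnumerable<EntityAbc> FindAll();
    void Add(EntityAbc entity);      // like adding to an in-memory collection
    void Remove(EntityAbc entity);   // like removing from one
}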
If you want to persist changes made to an existing entity, there are other ways to do that. You could have a Repository.Save() method, but some consider it bad practice. Often the changes are part of a higher-level operation handled in a transaction-like context such as a Unit of Work; in that case you can let the operation persist all the objects in its scope when it finishes. For instance, if you use an Open Session in View approach for your web application, changes are automatically persisted when the request ends.
Or you can rely on an ad-hoc call of your ORM's Save() method for your particular entity which hopefully shouldn't be grafted onto the entity code itself (with NHibernate, for instance, it's available at runtime on the proxied entity).
[Update]
Putting that in perspective with your subsequent questions (though I'm not sure I understand all of them well):
I see no value in splitting your repository into a ReadRepository and a WriteRepository. In DDD, a repository's responsibility is clearly to provide a collection to query from as well as add to or remove from. It's still quite cohesive that way.
It's not an entity's responsibility to fiddle with its own persistence, so it shouldn't be aware of its own repository for that precise purpose. Otherwise, it's pretty rare that an entity rightfully needs to have knowledge of its own repository (usually it means that the entity has a relationship to another entity of the same type, like parent/child, and you want to get the other entity from the repository).
However, entities and other domain objects obviously do need to obtain references to other entities at times. In that case, try to get these references through traversal of other objects within the boundary of your aggregate first before looking for a repository. If you absolutely need a repository to get the object you want, it's a good idea to inject the repository through any flavour of injection you like. As Eranga pointed out, service locator might turn out to be a sub-par dependency injection ersatz though.
Last thing: the kind of injection you mentioned - SaveTo(IEntityAbcRepository repos) - is peculiar because it is neither constructor nor setter injection, but rather an ephemeral injection lasting just for the duration of a method call. It implies that whoever calls your method must know which repository to pass at that precise moment, which is not obvious. It might be useful, but I'd say it's not the form of injection you would typically rely on.
Our current MVC project is set up to have ViewModels that encapsulate the data from the repository and pass it to the view.
When doing the mapping (in the controller) from data object to view model, what is the best way to achieve this?
I've seen AutoMapper (http://www.codeplex.com/AutoMapper), but wondered if there was an out of the box solution?
AutoMapper seems to be the accepted (by many) solution.
And I would say there's no such thing as an "out of the box" solution in the MVC world - unlike in Ruby on Rails, for example. The framework is highly extensible but very thin at the same time, so in lots of areas you have to invent your own "opinionated" way of doing things. As an example from your situation, I personally have my view models:
Declare a static ConfigureAutoMapper() method
Have either an optional Setup(realmodel) method or an optional constructor
A ViewModel(destinationViewModelType) attribute is used on actions, and it performs the conversion automatically - creating the view model and calling Setup or the constructor, or invoking AutoMapper
ViewModel maps are created with a predefined ConstructUsing that uses the IoC container to instantiate them, so that view models get their IoC dependencies if needed
None of the above exist in MVC out of the box. I'd say that MVC only supports ViewData-like usage "out of the box".
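To give an idea, a conversion attribute along those lines could look roughly like this (purely illustrative; it assumes AutoMapper and is not something MVC provides):

using System;
using System.Web.Mvc;
using AutoMapper;

// Usage on an action: [ViewModel(typeof(SomeViewModel))]
public class ViewModelAttribute : ActionFilterAttribute
{
    private readonly Type _viewModelType;

    public ViewModelAttribute(Type viewModelType)
    {
        _viewModelType = viewModelType;
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var viewResult = filterContext.Result as ViewResult;
        if (viewResult == null || viewResult.ViewData.Model == null)
            return;

        // Swap the action's model for the mapped view model
        // (shown with AutoMapper here; calling a Setup() method or a constructor would work similarly).
        viewResult.ViewData.Model = Mapper.Map(
            viewResult.ViewData.Model,
            viewResult.ViewData.Model.GetType(),
            _viewModelType);
    }
}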
Greetings,
Trying to sort through the best way to provide access to my Entity Manager while keeping the context open through the request to permit late loading. I am seeing a lot of examples like the following:
public class SomeController
{
    MyEntities entities = new MyEntities();
}
The problem I see with this setup is that if you have a layer of business classes that you want to make calls into, you end up having to pass the manager as a parameter to these methods, like so:
public static Series GetEntity(MyEntities entityManager, int id)
{
    return entityManager.Series.FirstOrDefault(s => s.SeriesId == id);
}
Obviously I am looking for a good, thread-safe way to provide the entityManager to the method without passing it. The approach also needs to be unit-testable; my previous attempts putting it in Session did not work for unit tests.
I am actually looking for the recommended way of dealing with the Entity Framework in ASP .NET MVC for an enterprise level application.
Thanks in advance
Entity Framework v1.0 excels in Windows Forms applications, where you can keep the object context open for as long as you like. In ASP.NET, and MVC in particular, it's a bit harder. My solution to this was to make the repositories or entity managers more like services that MVC could communicate with. I created a sort of generic, all-purpose base repository I could use whenever I felt like it and just stopped bothering too much about doing it "right". I would try to avoid leaving the object context open for even a millisecond longer than is absolutely needed in a web application.
Have a look at EF4. I started using EF in production environment when that was in beta 0.75 or something similar and had no real issues with it except for it being "hard work" sometimes.
You might want to look at the Repository pattern (here's a write up of Repository with Linq to SQL).
The basic idea would be that instead of creating a static class, you instantiate an instance of the Repository. You can pass your EntityManager into the class as a constructor parameter -- or better yet, a factory that can create the EntityManager for the class, so that it can do unit-of-work instantiation of the manager.
For MVC I use a base controller class. In this class you could create your entity manager factory and make it a property, so that deriving classes have access to it. Allow it to be injected via a constructor, but create it with a proper default if the instance passed in is null. Whenever a controller method needs to create a repository, it can pass this instance into the Repository so that it can create the manager it requires.
In this way, you get rid of the static methods and allow mock instances to be used in your unit tests. By passing in a factory -- which ought to create instances that implement interfaces, btw -- you decouple your repository from the actual manager class.
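A sketch of that arrangement (the Func-based factory and SeriesRepository are hypothetical, just to show the shape):

public abstract class BaseController : Controller
{
    protected Func<MyEntities> EntityManagerFactory { get; private set; }

    protected BaseController() : this(null) { }

    // Allow the factory to be injected for tests; fall back to a sensible default when it is null.
    protected BaseController(Func<MyEntities> entityManagerFactory)
    {
        EntityManagerFactory = entityManagerFactory ?? new Func<MyEntities>(() => new MyEntities());
    }
}

public class SeriesController : BaseController
{
    public ActionResult Details(int id)
    {
        // The repository receives the factory so it can create and dispose the manager as its unit of work.
        var repository = new SeriesRepository(EntityManagerFactory);
        return View(repository.GetById(id));
    }
}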
Don't lazy load entities in the view. Don't make business layer calls in the view. Load all the entities the view will need up front in the controller, compute all the sums and averages the view will need up front in the controller, etc. After all, that's what the controller is for.