This is more of a design concern.
I'm building an app and I have structured my repository pattern as follows:
My Core namespace is the assembly that holds the DAL/Repository/Business Logic layers.
By the way, I am using the Dapper.NET micro ORM for data access, which is why you will see an extension method on my SqlConnection object.
For my data access, I have created a base repository class:
namespace Core
{
    public class BaseRepository<T> : IDisposable where T : BaseEntity
    {
        protected SqlConnection conn = null;

        #region Constructors
        public BaseRepository() : this("LOCAL")
        {
        }

        public BaseRepository(string configurationKey = "LOCAL")
        {
            conn = new SqlConnection(ConfigurationManager.ConnectionStrings[configurationKey].ConnectionString);
        }
        #endregion

        #region IDisposable
        public void Dispose()
        {
            conn.Dispose();
        }
        #endregion

        /// <summary>
        /// Returns a list of entities.
        /// </summary>
        /// <typeparam name="T">BaseEntity type</typeparam>
        /// <param name="sproc">Optional parameter: stored procedure name.</param>
        /// <returns>A list of BaseEntity instances.</returns>
        protected virtual IEnumerable<T> GetListEntity(string sproc = null)
        {
            string storedProcName = string.Empty;
            if (sproc == null)
            {
                storedProcName = "[dbo].sp_GetList_" + typeof(T).ToString().Replace("Core.", string.Empty);
            }
            else
            {
                storedProcName = sproc;
            }

            IEnumerable<T> items = new List<T>();
            try
            {
                conn.Open();
                items = conn.Query<T>(storedProcName,
                    commandType: CommandType.StoredProcedure);
            }
            finally
            {
                conn.Close();
            }
            return items;
        }
    }
}
And for each entity that I have (let's say ExtendedUser, Messages, ...), I am creating its own interface/class pair like this:
namespace Core
{
    public class ExtendedUserRepository : BaseRepository<UsersExtended>, IExtendedUserRepository
    {
        public ExtendedUserRepository() : this("PROD")
        {
        }

        public ExtendedUserRepository(string configurationKey) : base(configurationKey)
        {
        }

        public UsersExtended GetExtendedUser(string username)
        {
            var list = GetListEntity().SingleOrDefault(u => u.Username == username);
            return list;
        }

        public UsersExtended GetExtendedUser(Guid userid)
        {
            throw new NotImplementedException();
        }

        public List<UsersExtended> GetListExtendedUser()
        {
            throw new NotImplementedException();
        }
    }
}
etc.
The above code shows just one of the entities: ExtendedUser.
The question is: should I create an interface/implementing-class pair for each entity that I have, or should I have only one repository class and one IRepository interface holding all the methods for all of my entities?
I don't think you need to create an interface without a reason. I don't even see why you need a base repository class here. I would also argue that this is not a repository but a DAL (Data Access Layer), though that is a matter of definition.
A good DAL implementation should decouple the database structure from the business logic structure - but hardcoding an sp_GetList_XXXEntityNameXXX naming pattern, or passing stored procedure names in from outside the DAL, is not decoupling.
You are either very optimistic or your application is really simple if you think every entity list can be obtained the same way and that the business logic will always need the full set of entities without any parameters.
Separating interface from implementation is only needed if you plan to swap in or wrap different implementations, or to mix several interfaces in one class. Otherwise it is not required.
Don't think in terms of entities when creating repositories. A repository contains business logic and should be built around usage scenarios. Classes like yours are really a Data Access Layer - and a DAL is built around the queries the business logic needs. You will probably never need the list of ALL users at once, but you will very often need the list of active users, privileged users, etc.
It is really hard to predict which queries you will need, so I prefer to start designing from the business logic and add DAL methods along the way.
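For illustration only, a scenario-driven DAL method might look like the sketch below. It reuses the question's Dapper connection field, but the "active users" scenario, the stored procedure name and its parameter are all hypothetical:

public IEnumerable<UsersExtended> GetActiveUsers(DateTime activeSince)
{
    conn.Open();
    try
    {
        // a query shaped by a business scenario rather than "give me the whole table";
        // [dbo].[sp_GetActiveUsers] and @ActiveSince are assumed names for this sketch
        return conn.Query<UsersExtended>(
            "[dbo].[sp_GetActiveUsers]",
            new { ActiveSince = activeSince },
            commandType: CommandType.StoredProcedure).ToList();
    }
    finally
    {
        conn.Close();
    }
}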
A system with a complex domain model often benefits from a layer, such as the one provided by Data Mapper (165), that isolates domain objects from details of the database access code. In such systems it can be worthwhile to build another layer of abstraction over the mapping layer where query construction code is concentrated.
http://martinfowler.com/eaaCatalog/repository.html
With a generic repository you can have common methods like findBy(array('id' => 1)), findOneBy(array('email' => 'john@bar.com')), findById(), findAll() and so on. Indeed, it'll have one interface.
You'll always have to create a concrete implementation that indicates which domain object will be managed by the repository, so that you can do getUsersRepository().findAll().
Moreover, if you need more complicated queries, you can create new methods on the concrete implementation, such as findMostActiveUsers(), and then reuse them across your application.
Now, answering your question:
Your application will be expecting at least one interface (the generic one, with the common methods). But if you end up needing specific methods, like the one I've just mentioned above, you'd be better off having another interface (e.g. RepositoryInterface and UsersRepositoryInterface).
With that in mind, you'll only depend on the repository interfaces. The query construction is encapsulated by the concrete implementation, so you'll be able to change your repository implementation (e.g. to a full-blown ORM) without affecting the rest of your application.
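In C#, that split might look roughly like the sketch below; the type and method names simply mirror the examples above and are not from the original code:

public interface IRepository<T> where T : class
{
    T FindById(object id);
    IEnumerable<T> FindAll();
}

// a specific repository adds only the scenario-driven queries it really needs
public interface IUsersRepository : IRepository<UsersExtended>
{
    IEnumerable<UsersExtended> FindMostActiveUsers();
}

Callers depend only on these interfaces, so the concrete implementation (Dapper today, a full ORM tomorrow) can change without touching the rest of the application.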
Related
I'm using Ninject.Extensions.Factory to control the lifecycle of the repository layer. I want to have a single point of reference from which I can get a reference to all repositories and have them lazily available. The Ninject factory approach seems like a good fit, but I'm not too sure about my implementation:
public class PublicUow : IPublicUow
{
    private readonly IPublicRepositoriesFactory _publicRepositoriesFactory;

    public PublicUow(IPublicRepositoriesFactory publicRepositoriesFactory)
    {
        _publicRepositoriesFactory = publicRepositoriesFactory;
    }

    public IContentRepository ContentRepository { get { return _publicRepositoriesFactory.ContentRepository; } }
    public ICategoryRepository CategoryRepository { get { return _publicRepositoriesFactory.CategoryRepository; } }
}
The problem lies in the PublicRepositoriesFactory class.
public class PublicRepositoriesFactory : IPublicRepositoriesFactory
{
    private readonly IContentRepositoryFactory _contentRepositoryFactory;
    private readonly ICategoryRepositoryFactory _categoryRepositoryFactory;

    public PublicRepositoriesFactory(IContentRepositoryFactory contentRepositoryFactory, ICategoryRepositoryFactory categoryRepositoryFactory)
    {
        _contentRepositoryFactory = contentRepositoryFactory;
        _categoryRepositoryFactory = categoryRepositoryFactory;
    }

    public IContentRepository ContentRepository { get { return _contentRepositoryFactory.CreateContentRepository(); } }
    public ICategoryRepository CategoryRepository { get { return _categoryRepositoryFactory.CreateCategoryRepository(); } }
}
I'm worried that this will become hard to manage as the number of repositories increases; with the current implementation this class might at some point need around 20-30 constructor arguments.
Is there an approach I can take to reduce the number of constructor arguments, like passing an array/dictionary of interfaces or something similar?
I've thought about using property injection in this scenario but most articles suggest avoiding property injection in general.
Is there maybe a more general pattern that would make this easier to manage?
Is this in general a good approach?
It has become rather common practice to use a repository interface like
public interface IRepository
{
    T LoadById<T>(Guid id);
    void Save<T>(T entity);
    ....
}
instead of a plethora of specific repositories like IContentRepository, ICategoryRepository, and so on.
Specific repositories are only ever useful when there is logic specific to an entity type and an operation, for example verifying that the entity is valid. But such operations are really an "aspect", a cross-cutting concern, and you should model them as such. Managing/doing validation on save should not be implemented x times but only once. The only things you should implement specifically are the exact validation rules (DRY). But these should be implemented in separate classes and used by composition, not inheritance.
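As a rough sketch of that composition idea, a validating decorator could wrap the repository once for all entity types. The IValidator<T> abstraction, the resolver delegate and the class names are assumptions, not an existing API, and only the two repository members shown above are sketched:

public interface IValidator<T>
{
    void ValidateAndThrow(T entity);
}

// validation handled once, as a decorator composed around the repository
public class ValidatingRepository : IRepository
{
    private readonly IRepository inner;
    private readonly Func<Type, object> resolveValidator; // e.g. backed by your IoC container

    public ValidatingRepository(IRepository inner, Func<Type, object> resolveValidator)
    {
        this.inner = inner;
        this.resolveValidator = resolveValidator;
    }

    public T LoadById<T>(Guid id)
    {
        return inner.LoadById<T>(id);
    }

    public void Save<T>(T entity)
    {
        // the exact rules live in separate IValidator<T> classes (composition, not inheritance)
        var validator = (IValidator<T>)resolveValidator(typeof(IValidator<T>));
        validator.ValidateAndThrow(entity);
        inner.Save(entity);
    }
}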
Also, for retrieving an entity or multiple entities "based on a use case", you should use specific query classes rather than putting methods on a repository interface (SRP, SoC). An example would be GetProductsByOrder(Guid orderId). This should live neither on the Products nor on the Order repository, but in a separate class of its own.
Taking things a step further, it does not seem a good idea to use a factory to lazily create all repositories. Why?
makes software more complex (thus harder to maintain and extend)
usually negligible performance gain
deteriorates testability
Also see Mark Seemann's blog post Service Locator is an Anti-Pattern, where he also talks about the disadvantages of late creation vs. composing the entire object graph in one go.
I'm not trying to say that you should never use factory/lazy, but only when you've got a really good reason to :)
Example of a query
I'm not very familiar with EntityFramework. I know NHibernate a whole lot better, so behold.
public class GetParentCategoriesQuery : IGetParentCategoriesQuery
{
    private readonly EntityFrameworkContext context;

    public GetParentCategoriesQuery(EntityFrameworkContext context)
    {
        this.context = context;
    }

    public IEnumerable<Category> GetParents(Category child)
    {
        return this.context.Categories.Where(x => x.Children.Contains(child));
    }
}
So basically the only thing you change is extracting the GetParentCategoriesQuery into its own class. The DbContext instance must be shared with the other query and repository instances. For web projects, this is done by binding the DbContext with .InRequestScope(). For other applications you may need to use another mechanism.
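With Ninject, that per-request sharing could be wired up roughly as follows (this assumes Ninject.Web.Common; EntityFrameworkContext stands in for your actual DbContext-derived type, and EntityFrameworkRepository is a hypothetical implementation name):

// in the composition root, e.g. RegisterServices in NinjectWebCommon.cs
kernel.Bind<EntityFrameworkContext>().ToSelf().InRequestScope();
kernel.Bind<IRepository>().To<EntityFrameworkRepository>();
kernel.Bind<IGetParentCategoriesQuery>().To<GetParentCategoriesQuery>();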
The usage of the query would be quite simple:
public class CategoryController
{
    private readonly IRepository repository;
    private readonly IGetParentCategoriesQuery getParentCategoriesQuery;

    public CategoryController(
        IRepository repository,
        IGetParentCategoriesQuery getParentCategoriesQuery)
    {
        this.repository = repository;
        this.getParentCategoriesQuery = getParentCategoriesQuery;
    }

    public void Process(Guid categoryId)
    {
        Category category = this.repository.LoadById<Category>(categoryId);
        IEnumerable<Category> parentCategories =
            this.getParentCategoriesQuery.GetParents(category);
        // do some stuff...
    }
}
An alternative to the scoping is to have the repository instantiate the query type and pass the DbContext to the query instance (this can be done using the factory extensions):

public TQuery CreateQuery<TQuery>()
{
    return this.queryFactory.Create<TQuery>(this.context);
}
which would be used like:
IEnumerable<Category> parents = repository
    .CreateQuery<GetParentCategoriesQuery>()
    .GetParents(someCategory);
But please note that this alternative will again only late-create the query and thus hurt testability (binding issues may remain undetected for longer).
The GetParentCategoriesQuery is part of the repository layer, but not part of the repository class.
I've been reading up on the Dependency Injection, UnitOfWork and IRepository patterns, and I intend to implement them all as I go along. But it's a lot to take in at once. Right now I just want to make sure about some basic things, and that I'm not missing anything crucial that could impact my application.
The application is rather small and there will be no more than 10 simultaneous users at any given point.
The AdressRegDataContext is generated from the dbml (using Visual Studio's Add -> LINQ to SQL Classes).
My question concerns adding the second controller below:
Are there any problems with creating a new repository in each controller like this?
To me it feels like two users would have two different contexts if they are using the application at the same time. Or does the DataContext handle that for me? A singleton repository makes sense in theory, but I've read that that's a big no-no, so I won't do that.
I have a repository that I use:
Repository
namespace AdressReg.Models
{
    public class AdressRegRepository
    {
        private AdressRegDataContext db = new AdressRegDataContext();

        // Return person by id
        public Person GetPerson(int id)
        {
            return db.Persons.SingleOrDefault(p => p.id == id);
        }

        // Return student by id
        public Student GetStudent(int id)
        {
            return db.Students.SingleOrDefault(s => s.id == id);
        }
    }
}
So far I've only been using the one controller:
HomeController
namespace AdressReg.Controllers
{
    public class HomeController : Controller
    {
        AdressRegRepository repository = new AdressRegRepository();

        // GET: /Home/Person
        public ActionResult Person(int id)
        {
            PersonView view = new PersonView();
            view.person = repository.GetPerson(id);
            return View(view);
        }
    }
}
But I was planning on adding another:
EducationController
namespace AdressReg.Controllers
{
    public class EducationController : Controller
    {
        AdressRegRepository repository = new AdressRegRepository();

        // GET: /Education/Student
        public ActionResult Student(int id)
        {
            StudentView view = new StudentView();
            view.student = repository.GetStudent(id);
            return View(view);
        }
    }
}
Yes, there are. First of all, that's not a 'true' repository; it is a useless abstraction on top of EF and you can ditch it. A repository should return business or application objects, not EF entities. For querying purposes it should return at least some bits (if not all) of the view model, so it should return a PersonView or a StudentView. These are guidelines, not absolute truths or rules, but you really need to be aware of the purpose of a design pattern, in this case the repository, and your code shows you haven't understood it yet.
In your app, the repo doesn't do anything useful, so use the ORM directly; it's much simpler.
Your controllers are tightly coupled to a specific concrete repository. The point is you should define an abstraction and pass that as a dependency in the constructor. Something like
// a better name is needed, this would be a query repository
public interface IAddressesRegRepository
{
    StudentView GetStudent(int id);
}

public class MyController : Controller
{
    IAddressesRegRepository _repo;

    public MyController(IAddressesRegRepository repo)
    {
        _repo = repo;
    }

    public ActionResult Student(int id)
    {
        StudentView view = _repo.GetStudent(id);
        return View(view);
    }
}
As you can see, the controller now depends on an abstraction, and the DI container will inject a concrete implementation when the controller is created. That allows you to mock repositories and to have multiple implementations. The same approach applies to everything you want to use in the controller.
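As a sketch of that wiring with Ninject (used purely as an example container; EfAddressesRegRepository and FakeAddressesRegRepository are hypothetical names):

// composition root: map the abstraction to the concrete implementation
kernel.Bind<IAddressesRegRepository>().To<EfAddressesRegRepository>().InRequestScope();

// in a unit test, the same abstraction can be satisfied by a fake or a mock
var controller = new MyController(new FakeAddressesRegRepository());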
About repositories: I prefer to have separate business and query repositories (though nowadays I'm using query handlers), i.e. I don't put querying behaviour on a domain repository unless the business layer itself needs it.
For strictly querying, it's much more straightforward to use EF directly; or, if your queries are ugly or you want to change the implementation later, use a query object/repository/service/handler (call it whatever you want) that will construct the view model from persistence. The point is to keep the persistence details (ORM, EF entities) in the DAL; your UI shouldn't know about them. In your example, the UI knows about EF entities.
Once again, what I've said are guidelines, when you get more experienced you'll know when to bend (break) them without nasty side effects.
Hi, I'm looking at the repository pattern, which commonly seems to be implemented something like this:
public class GenericRepository<TEntity> where TEntity : class
{
    // other business

    public virtual TEntity GetByID(object id)
    {
        return db.Set<TEntity>().Find(id);
    }

    public virtual void Insert(TEntity entity)
    {
        db.Set<TEntity>().Add(entity);
    }

    public virtual void Delete(object id)
    {
        TEntity entityToDelete = db.Set<TEntity>().Find(id);
        Delete(entityToDelete);
    }

    public virtual void Update(TEntity entityToUpdate)
    {
        db.Set<TEntity>().Attach(entityToUpdate);
        db.Entry(entityToUpdate).State = EntityState.Modified;
    }
}
So for every type you want to work with (i.e. update) you need to instantiate a repository.
So if I had two types I wanted to save Cars and Trucks I would need to go:
var carRepository = new GenericRepository<Car>();
carRepository.Update(myCar);
var truckRepository = new GenericRepository<Truck>();
truckRepository.Update(myTruck);
So you end up with separate repositories for each type. To make sure you save everything at once, you need a unit of work to ensure they all use the same context and save in one go.
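(For reference, a minimal unit-of-work sketch over a shared context could look like the following; MyDbContext is hypothetical, and it assumes the repositories accept the context through their constructor.)

public class UnitOfWork : IDisposable
{
    private readonly DbContext db = new MyDbContext();

    // both repositories are handed the same context instance
    public GenericRepository<Car> Cars { get { return new GenericRepository<Car>(db); } }
    public GenericRepository<Truck> Trucks { get { return new GenericRepository<Truck>(db); } }

    public void Save() { db.SaveChanges(); }
    public void Dispose() { db.Dispose(); }
}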
Surely wouldn't it be better to have something like:
public class GenericRepository
{
    // other business

    public virtual TEntity GetByID<TEntity>(object id) where TEntity : class
    {
        return db.Set<TEntity>().Find(id);
    }

    public virtual void Insert<TEntity>(TEntity entity) where TEntity : class
    {
        db.Set<TEntity>().Add(entity);
    }

    public virtual void Delete<TEntity>(object id) where TEntity : class
    {
        TEntity entityToDelete = db.Set<TEntity>().Find(id);
        Delete(entityToDelete);
    }

    public virtual void Update<TEntity>(TEntity entityToUpdate) where TEntity : class
    {
        db.Set<TEntity>().Attach(entityToUpdate);
        db.Entry(entityToUpdate).State = EntityState.Modified;
    }
}
This means the repository only needs to be instantiated once and is therefore truly generic?
So you could update your cars and trucks like this:
var repository = new GenericRepository();
repository.Update<Car>(myCar);
repository.Update<Truck>(myTruck);
Surely this is a better method? Am I missing something? It automatically has only one context too.
The repository pattern does not decouple the data access from the data store; that is what the ORM, such as NHibernate or Entity Framework, is for. The repository pattern provides reusable methods for extracting data.
I have previously used a so-called "generic" repository as you have described and thought it was great. It isn't until you realise that you have just put another layer on top of NHibernate or Entity Framework that it all goes Pete Tong.
Ideally what you want are interfaces that describe ways of getting data out of your data store without leaking which data access technology you are using. For example:
public interface IEmployeeRepository
{
    IEmployee GetEmployeeById(Guid employeeId);
    IEmployee GetEmployeeByEmployeeNumber(string employeeNumber);
    IEnumerable<IEmployee> GetAllEmployeesWithSurname(string surname);
    IEnumerable<IEmployee> GetAllEmployeesWithStartDateBetween(DateTime beginDateTime, DateTime endDateTime);
}
This gives you a contract to code to; there is no knowledge of your persistence layer, and the queries used to retrieve the data can be unit tested in isolation. The interface could inherit from a base interface that provides common CRUD methods, but then you would be assuming that all your repositories need CRUD.
If you go down the road of a Generic Repository you will end up with duplication in your queries and you will find it much harder to unit test the code that uses the repository as you will have to test the queries as well.
Generics by itself does not make an implementation of the repository pattern. We've all seen the generic base class used in example repository pattern implementations, but it is there to keep things DRY (Don't Repeat Yourself) by inheriting from the base class (GenericRepository in your case) into more specialized child classes.
Using only the generic base class GenericRepository assumes that your repositories will only ever need the most basic CRUD methods. For a more complex system, each repository becomes more specialized based on the underlying business entities' data requirements.
Also, you will need to have interfaces that define your data contracts with your other layers. Using the repository pattern means you don't want to expose your concrete implementations of your repositories to your other layers.
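To make that concrete, a specialized repository might reuse the generic base class while exposing its own contract to the other layers. In this sketch IEmployeeRepository, Employee and the Surname property are illustrative, and db is assumed to be the DbContext exposed by GenericRepository:

// the generic base keeps the CRUD plumbing DRY; the child adds entity-specific queries
public class EmployeeRepository : GenericRepository<Employee>, IEmployeeRepository
{
    public IEnumerable<Employee> GetAllEmployeesWithSurname(string surname)
    {
        return db.Set<Employee>().Where(e => e.Surname == surname).ToList();
    }
}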
I've made a custom model, and I want to mock it. I'm fairly new to MVC, and very new to unit testing. Most approaches I've seen create an interface for the class and then make a mock that implements the same interface. However I can't seem to get this to work when actually passing the interface into the View. Cue "simplified" example:
Model-
public interface IContact
{
    void SendEmail(NameValueCollection httpRequestVars);
}

public abstract class Contact : IContact
{
    //some shared properties...
    public string Name { get; set; }

    public void SendEmail(NameValueCollection httpRequestVars = null)
    {
        //construct email...
    }
}

public class Enquiry : Contact
{
    //some extra properties...
}
View-
<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<project.Models.IContact>" %>
<!-- other html... -->
<td><%= Html.TextBoxFor(model => ((Enquiry)model).Name)%></td>
Controller-
[HttpPost]
public ActionResult Index(IContact enquiry)
{
    if (!ModelState.IsValid)
        return View(enquiry);

    enquiry.SendEmail(Request.ServerVariables);
    return View("Sent", enquiry);
}
Unit Testing-
[Test]
public void Index_HttpPostInvalidModel_ReturnsDefaultView()
{
    Enquiry enquiry = new Enquiry();
    _controller.ModelState.AddModelError("", "dummy value");
    ViewResult result = (ViewResult)_controller.Index(enquiry);
    Assert.IsNullOrEmpty(result.ViewName);
}

[Test]
public void Index_HttpPostValidModel_CallsSendEmail()
{
    MockContact mock = new MockContact();
    ViewResult result = (ViewResult)_controller.Index(mock);
    Assert.IsTrue(mock.EmailSent);
}

public class MockContact : IContact
{
    public bool EmailSent = false;

    public void SendEmail(NameValueCollection httpRequestVars)
    {
        EmailSent = true;
    }
}
Upon an HTTP POST I get a "Cannot create an instance of an interface" exception. It seems that I can't have my cake (passing a model) and eat it too (pass a mock for unit testing). Maybe there's a better approach to unit testing models bound to views?
thanks,
Med
I'm going to throw it out there: if you need to mock your models, you're doing it wrong. Your models should be dumb property bags.
There is absolutely no reason that your model should have a SendEmail method. That is functionality that should be invoked from a controller calling to an EmailService.
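For illustration, the responsibility could be pulled out of the model along these lines; IEmailService, SendEnquiryEmail and the injected _emailService field are hypothetical names, not an existing API:

public interface IEmailService
{
    void SendEnquiryEmail(Enquiry enquiry, NameValueCollection httpRequestVars);
}

[HttpPost]
public ActionResult Index(Enquiry enquiry)
{
    if (!ModelState.IsValid)
        return View(enquiry);

    // the controller invokes the service; the model stays a dumb property bag
    // (_emailService is assumed to be injected into the controller)
    _emailService.SendEnquiryEmail(enquiry, Request.ServerVariables);
    return View("Sent", enquiry);
}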
Responding to your question:
After years of working with separation-of-concerns (SoC) patterns like MVC, MVP and MVVM, and after seeing articles from people brighter than me (I wish I could find the one I'm thinking of, but maybe I read it in a magazine), you will eventually conclude that in an enterprise application you end up with 3 distinct sets of model objects.
Previously I was a very big fan of doing Domain Driven Design (DDD) using a single set of business entities that were both plain old C# objects (POCO) and persistence ignorant (PI). Having domain models that are POCO/PI leaves you with a clean slate of objects with no code related to accessing the object storage, and no attributes that carry schematic meaning for only one area of the code.
While this works, and can work fairly well for a period of time, there is eventually a tipping point where the relationships between the view, the domain model, and the physical storage model become too complex to express correctly with one set of entities.
To solve the impedance mismatches between view, domain and storage you really need 3 sets of models. Your ViewModels will exactly match your views' bindings, making the UI easy to work with. So they will frequently have things such as a list used to populate drop-downs with the values that are valid for your edit view/action.
In the middle are the domain entities; these are the entities that you should validate against your business rules. You map to/from them on both sides: to/from the view and to/from the storage layer. These entities are where you could attach your validation code. I personally am not a fan of using attributes and coupling validation logic into domain entities; it does, however, make a lot of sense to put validation attributes on your ViewModels to take advantage of the built-in MVC client-side validation.
For validating the true business rules I would recommend a library like FluentValidation (or your own custom one; they're not hard to write) that lets you separate your business rules from your objects, although with the newer MVC 3 features you can also do remote validation server-side and have it displayed client-side.
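A minimal FluentValidation sketch, assuming a Product entity with Name and Price properties (both assumptions for this example):

using FluentValidation;

public class ProductValidator : AbstractValidator<Product>
{
    public ProductValidator()
    {
        // the business rules live here, decoupled from the entity itself
        RuleFor(p => p.Name).NotEmpty().WithMessage("A product must have a name.");
        RuleFor(p => p.Price).GreaterThan(0);
    }
}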
Finally you have your storage models. As I said, previously I was very zealous about having PI objects that could be reused through all layers, so depending on how you set up your durable storage you might be able to use your domain objects directly. But if you take advantage of tools like Linq2Sql, Entity Framework (EF), etc., you will most likely have auto-generated models with code for interacting with the data provider, so you will want to map your domain objects to your persistence objects.
So to wrap all of this up, this would be a standard logic flow in MVC actions:
1. The user goes to the edit product page.
2. EF queries the database to get the existing product information. Inside the repository layer the EF data objects are mapped to business entities (BEs), so all the data layer methods return BEs and have no external coupling to the EF data objects (see the sketch after this list). So if you ever change your data provider you don't have to alter a single line of code except for the internal implementation.
3. The controller gets the Product BE and maps it to a Product ViewModel (VM), adding collections for the different options that can be set in drop-down lists.
4. Return View(theview, ProductVM).
5. The user edits the product and submits the form.
6. Client-side validation passes (useful for date/number validation instead of having to submit the form for feedback).
7. The ProductVM gets mapped back to a ProductBE. At this point you validate the business rules, along the lines of ValidationFactory.Validate(ProductBE); if it's invalid, return the messages back to the view and cancel the edit, otherwise continue.
8. You pass the ProductBE into your repository model; inside the internal implementation of the data layer you map the ProductBE to the Product data entity for EF and update the database.
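A bare-bones sketch of step 2 (IProductRepository, the ProductData EF class, the ProductBE business entity and MyDbContext are all assumed names):

public class ProductRepository : IProductRepository
{
    private readonly MyDbContext db; // the EF context stays an internal detail of the DAL

    public ProductRepository(MyDbContext db)
    {
        this.db = db;
    }

    public ProductBE GetById(int id)
    {
        ProductData data = db.Products.Single(p => p.Id == id);

        // map the EF data object to a business entity so callers never see EF types
        return new ProductBE { Id = data.Id, Name = data.Name, Price = data.Price };
    }
}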
2016 edit: removed the usages of interfaces, as separation of concerns and interfaces are entirely orthogonal.
Your issue is here:
public ActionResult Index(IContact enquiry)
MVC in the background has to create a concrete type to pass to the method when calling it. In this method's case, MVC needs to create a type which implements IContact.
Which type? I dunno. Neither does MVC.
Instead of using interfaces in order to be able to mock your models, use normal classes with virtual methods which you can override in mocks.
public class Contact
{
    //some shared properties...
    public string Name { get; set; }

    public virtual void SendEmail(NameValueCollection httpRequestVars = null)
    {
        //construct email...
    }
}

public class MockContact : Contact
{
    public bool EmailSent { get; private set; }

    public override void SendEmail(NameValueCollection vars = null)
    {
        EmailSent = true;
    }
}
and
public ActionResult Index(Contact enquiry)
It is possible to use interfaces.
See: http://mvcunity.codeplex.com/
For those that create ViewModels (for use by typed views) in ASP.NET MVC, do you prefer to fetch the data from a service/repository from within the ViewModel, or the controller classes?
For example, we started by having ViewModels essentially being DTOs and allowing our Controllers to fetch the data (grossly oversimplified example assumes that the user can only change employee name):
public class EmployeeViewModel
{
    public String Name; //posted back
    public int Num; //posted back
    public IEnumerable<Dependent> Dependents; //static
    public IEnumerable<Spouse> Spouses; //static
}

public class EmployeeController : Controller
{
    ...

    public ActionResult Employee(int empNum)
    {
        Models.EmployeeViewModel model = new Models.EmployeeViewModel();
        model.Name = _empSvc.FetchEmployee(empNum).Name;
        model.Num = empNum;
        model.Dependents = _peopleSvc.FetchDependentsForView(empNum);
        model.Spouses = _peopleSvc.FetchDependentsForView(empNum);
        return View(model);
    }

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Employee(Models.EmployeeViewModel model)
    {
        if (!_empSvc.ValidateAndSaveName(model.Num, model.Name))
        {
            model.Dependents = _peopleSvc.FetchDependentsForView(model.Num);
            model.Spouses = _peopleSvc.FetchDependentsForView(model.Num);
            return View(model);
        }
        return this.RedirectToAction(c => c.Index());
    }
}
This all seemed fine until we started creating large views (40+ fields) with many drop downs and such. Since the screens would have a GET and POST action (with POST returning a view if there was a validation error), we'd be duplicating code and making ViewModels larger than they probably should be.
I'm thinking the alternative would be to Fetch the data via the Service within the ViewModel. My concern is that we'd then have some data populated from the ViewModel and some from the Controller (e.g. in the example above, Name would be populated from the Controller since it is a posted value, while Dependents and Spouses would be populated via some type of GetStaticData() function in the ViewModel).
Thoughts?
I encountered the same issue. I started creating classes for each action when the code got too big for the action methods. Yes, you will have some data retrieval in classes and some in the controller methods. The alternative is either to have all the data retrieval in classes - but half of those classes you won't really need; they will have been created only for consistency's sake - or to have all the data retrieval in the controller methods, but again, some of those methods will be too complex and will need to be abstracted into classes... so pick your poison. I would rather have a little inconsistency and the right solution for the job.
As for putting behavior into the ViewModel: I don't. The point of the ViewModel is to be a thin class for setting and extracting values from the view.
There have been cases where I've put conversion methods in the ViewModel. For instance I need to convert the ViewModel to the corresponding entity or I need to load the ViewModel with data from the Entity.
To answer your question, I prefer to retrieve data from with in the controller/action methods.
Typically with drop-downs, I create a dropdown service. Drop-downs tend to contain the same data across views. With the drop-downs in a service I can use them on other views and/or cache them.
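A sketch of such a dropdown service (ICountryRepository, Country and the method names are made up for this example):

public interface IDropDownService
{
    IEnumerable<SelectListItem> GetCountryList();
}

public class DropDownService : IDropDownService
{
    private readonly ICountryRepository countries;

    public DropDownService(ICountryRepository countries)
    {
        this.countries = countries;
    }

    public IEnumerable<SelectListItem> GetCountryList()
    {
        // the same list can be reused across views and cached, since it rarely changes
        return countries.GetAll()
            .Select(c => new SelectListItem { Value = c.Id.ToString(), Text = c.Name })
            .ToList();
    }
}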
Depending on the layout, 40-plus fields could create a cluttered view. Depending on the type of data, I would try to spread that many fields across multiple views with some sort of tabbed or wizard interface.
There's more to it than that ;-) You can fetch in a model binder or an action filter. For the second option, check Jimmy Bogard's blog somewhere around here. I personally do it in model binders. I use a ViewModel like this: My custom ASP.NET MVC entity binding: is it a good solution?. It is processed by my custom model binder:
public object BindModel(ControllerContext c, BindingContext b)
{
    var id = b.ValueProvider[b.ModelName]; // don't remember exact syntax
    var repository = ServiceLocator.GetInstance(GetRepositoryType(b.ModelType));
    var obj = repository.Get(id);
    if (obj == null)
        b.ModelState.AddModelError(b.ModelName, "Not found in database");
    return obj;
}

public ActionResult Action(EntityViewModel<Order> order)
{
    if (!ModelState.IsValid)
        ...;
}
You can also see an example of model binder doing repository access in S#arp Architecture.
As for static data in view models, I'm still exploring approaches. For example, you can have your view models remember the entities instead of lists, and
public class MyViewModel
{
    private readonly Order order;
    private readonly IEmployeesSvc _svc;

    public MyViewModel(Order order, IEmployeesSvc svc)
    {
        this.order = order;
        _svc = svc;
    }

    public IList<Employee> GetEmployeesList()
    {
        return _svc.GetEmployeesFor(order.Number);
    }
}
You decide how you inject _svc into ViewModel, but it's basically the same as you do for controller. Just beware that ViewModel is also created by MVC via parameterless constructor, so you either use ServiceLocator or extend MVC for ViewModel creation - for example, inside your custom model binder. Or you can use Jimmy Bogard's approach with AutoMapper which does also support IoC containers.
The common approach here is that whenever I see repetitive code, I look to eliminate it. A hundred controller actions doing domain-to-viewmodel marshalling plus a repository lookup is a bad case. A single model binder doing it in a generic way is a good one.
I wouldn't be fetching data from the database in your ViewModel. The ViewModel exists to promote separation of concerns (between your view and your model). Tangling persistence logic up in there kind of defeats the purpose.
Luckily, the ASP.NET MVC framework gives us more integration points, specifically the ModelBinder.
I've got an implementation of a generic ModelBinder pulling information from the service layer at:-
http://www.iaingalloway.com/going-further-a-generic-servicelayer-modelbinder
It doesn't use a ViewModel, but that's easily fixed. It's by no means the only implementation. For a real-world project, you're probably better off with a less generic, more customised solution.
If you're diligent, your GET methods don't even need to know that the service layer exists.
The solution probably looks something like:-
Controller action method:-
public ActionResult Details(MyTypeIndexViewModel model)
{
    if (ModelState.IsValid)
    {
        return View(model);
    }
    else
    {
        // Handle the case where the ModelState is invalid
        // usually because they've requested MyType/Details/x
        // and there's no matching MyType in the repository
        // e.g. return RedirectToAction("Index")
    }
}
ModelBinder:-
public object BindModel(ControllerContext controllerContext, BindingContext bindingContext)
{
    // Get the Primary Key from the requestValueProvider.
    // e.g. bindingContext.ValueProvider["id"]
    int id = ...;

    // Get an instance of your service layer via your
    // favourite dependency injection framework.
    // Or grab the controller's copy e.g.
    // (controllerContext.Controller as MyController).Service
    IMyTypeService service = ...;

    MyType myType = service.GetMyTypeById(id);
    if (myType == null)
    {
        // handle the case where the PK has no matching MyType in the repository
        // e.g. bindingContext.ModelState.AddModelError(...)
    }

    MyTypeIndexViewModel model = new MyTypeIndexViewModel(myType);

    // If you've got more repository calls to make
    // (e.g. populating extra fields on the model)
    // you can do that here.
    return model;
}
ViewModel:-
public class MyTypeIndexViewModel
{
    public MyTypeIndexViewModel(MyType source)
    {
        // Bind all the properties of the ViewModel in here, or better
        // inherit from e.g. MyTypeViewModel, bind all the properties
        // shared between views in there and chain up base(source)
    }
}
Build your service layer, and register your ModelBinder as normal.
Here's another solution: http://www.lostechies.com/blogs/jimmy_bogard/archive/2009/06/29/how-we-do-mvc-view-models.aspx
Main points there:
Mapping is done by a mediator - in this case AutoMapper, but it can be your own class (though that is more code to write). This keeps both the domain and the ViewModel focused on the domain/presentation logic. The mediator (mapper) contains the (mostly automatic) mapping logic, including injected services.
Mapping is applied automatically; all you do is tell the action filter the source/destination types - very clean.
(This seems to be important for you) AutoMapper supports nested mappings/types, so your ViewModel can be composed of several independent view models, so that your "screen DTO" is not messy.
Like in this model:
public class WholeViewModel
{
    public Part1ViewModel ModelPart1 { get; set; }
    public Part2ViewModel ModelPart2 { get; set; }
}
You re-use mappings for specific parts of your view, and you don't write a single new line of code, since there are already mappings for the partial view models.
If you don't want AutoMapper, you can have IViewModelMapper interfaces, and then your IoC container will help your action filter find the appropriate mapper:

container.Resolve(typeof(IViewModelMapper<>).MakeGenericType(mysourcetype, mydesttype))

and it will also provide any required external services to that mapper (this is possible with AutoMapper, too). But of course AutoMapper can handle recursion, and anyway, why write another AutoMapper ;-)
Consider passing your services into the custom ViewModel via its constructor (a la dependency injection). That removes the model population code from your controller and allows it to focus on controlling the logical flow of the application. Custom ViewModels are an ideal place to abstract the preparation of things like the SelectLists your drop-downs depend on.
Lots of code in the controller for things like retrieving data isn't considered a best practice. The controller's primary responsibility is to "control" the flow of the application.
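For example, a custom ViewModel could take the service in its constructor and prepare the SelectList itself (IDepartmentService and EmployeeEditViewModel are hypothetical names):

public class EmployeeEditViewModel
{
    public string Name { get; set; }
    public SelectList Departments { get; private set; }

    public EmployeeEditViewModel(IDepartmentService departmentService)
    {
        // the ViewModel abstracts the SelectList preparation away from the controller
        Departments = new SelectList(departmentService.GetAll(), "Id", "Name");
    }
}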
Submitting this one late... Bounty is almost over. But...
Another mapper to look at is Automapper: http://www.codeplex.com/AutoMapper
And overview on how to use it: http://www.lostechies.com/blogs/jimmy_bogard/archive/2009/01/22/automapper-the-object-object-mapper.aspx
I really like its syntax.
// place this somewhere in your globals, or base controller constructor
Mapper.CreateMap<Employee, EmployeeViewModel>();
Now, in your controller, I would use multiple viewmodels. This enforces DRY by allowing you to reuse those viewmodels elsewhere in your application. I would not bind them all to 1 viewmodel. I would refactor to something like:
public class EmployeeController : Controller
{
    private IEmployeeService _empSvc;
    private ISpouseService _peopleSvc;

    public EmployeeController(IEmployeeService empSvc, ISpouseService peopleSvc)
    {
        // D.I. hard at work! Auto-wiring up our services. :)
        _empSvc = empSvc;
        _peopleSvc = peopleSvc;

        // setup all ViewModels here that the controller would use
        Mapper.CreateMap<Employee, EmployeeViewModel>();
        Mapper.CreateMap<Spouse, SpouseViewModel>();
    }

    public ActionResult Employee(int empNum)
    {
        // really should have some validation here that reaches into the domain
        //
        var employeeViewModel =
            Mapper.Map<Employee, EmployeeViewModel>(
                _empSvc.FetchEmployee(empNum)
            );

        var spouseViewModel =
            Mapper.Map<Spouse, SpouseViewModel>(
                _peopleSvc.FetchSpouseByEmployeeID(empNum)
            );

        employeeViewModel.SpouseViewModel = spouseViewModel;

        return View(employeeViewModel);
    }

    [AcceptVerbs(HttpVerbs.Post)]
    public ActionResult Employee(int id, FormCollection values)
    {
        try
        {
            // always post to an ID, which is the employeeID
            var employee = _empSvc.FetchEmployee(id);

            // and bind using the built-in UpdateModel helpers.
            // this will throw an exception if someone is posting something
            // they shouldn't be posting. :)
            UpdateModel(employee);

            // save employee here

            return this.RedirectToAction(c => c.Index());
        }
        catch
        {
            // check your domain model for any errors.
            // check for any other type of exception.
            // fail back to the employee screen
            return RedirectToAction(c => c.Employee(id));
        }
    }
}
I generally try to stay away from saving multiple entities in a single controller action. Instead, I would refactor the Employee domain object to have AddSpouse() and SaveSpouse() methods that take a Spouse object. This concept is known as aggregate roots: controlling all dependencies from the root, which is the Employee object. But that is just me.
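A tiny sketch of that aggregate-root idea (the invariant shown is invented purely for illustration):

public class Employee
{
    private readonly List<Spouse> spouses = new List<Spouse>();

    public IEnumerable<Spouse> Spouses { get { return spouses; } }

    public void AddSpouse(Spouse spouse)
    {
        // the aggregate root guards its own invariants
        if (spouse == null)
            throw new ArgumentNullException("spouse");

        spouses.Add(spouse);
    }
}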