I just learned that with the [Contained] attribute I can define a contained collection. This means the collection is no longer accessible from the OData root. Fine, but here is my model:
I have a user that has addresses.
The user has invoices
Each invoice can have one or two addresses from the user.
On which collection should I add the [Contained] attribute?
The answer to this depends entirely on your domain model. The advice I would give is to use OData containment sparingly. It only really makes sense when the entity you are marking as contained cannot exist outside the context of its parent entity. Because of this constraint, I think the use cases for OData containment are few and far between. The advantage over a separate controller is that it can make more sense from an architectural standpoint. However, your controllers become more bloated and it is more work to maintain the ODataRoute attributes on your methods, something that is not necessary when using convention-based routing.
The example in the guide on setting up OData containment explains it somewhat, but falls a bit short on why you would use it. Note that the PaymentInstrument entity has no foreign key to Account, which means there is no separate table where the PaymentInstrument information is stored; it is stored directly on the Account record. Yet it is still defined as a Collection<T>, so it is probably stored as JSON or across multiple columns. That is not necessarily the case, but from a code standpoint the database could look like that.
To further explain OData containment let's say we have the domain model below.
public class HttpRequest
{
    public int Id { get; set; }
    public DateTime CreatedOn { get; set; }
    public string CreatedBy { get; set; }

    public virtual HttpResponse HttpResponse { get; set; }
}

public class HttpResponse
{
    public string Content { get; set; }
    public DateTime CreatedOn { get; set; }
}
As you can see, the HttpResponse class has no navigation property to HttpRequest. Therefore it makes no sense to call GET odata/HttpResponses, as we would be getting all HttpResponses but not the HttpRequest they are linked to. In other words, the HttpResponse class is useless without its context, i.e. the HttpRequest for which it was produced.
The HttpResponse class not having any meaning outside of the HttpRequest context makes it a perfect candidate for OData containment. Both classes could even be saved on the same record in the database. And because it's not possible to perform a GET/POST/PUT/DELETE without specifying the id of the HttpRequest to which the HttpResponse belongs, it makes no sense for the HttpResponse class to have its own controller.
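For illustration, marking that navigation property as contained with the ContainedAttribute could look roughly like this (the exact namespace of the attribute depends on the OData package version; the rest of the class is unchanged from above):

public class HttpRequest
{
    public int Id { get; set; }
    public DateTime CreatedOn { get; set; }
    public string CreatedBy { get; set; }

    // Contained: only reachable through its parent, e.g. GET odata/HttpRequests(1)/HttpResponse
    [Contained]
    public virtual HttpResponse HttpResponse { get; set; }
}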
Now, back to your use case. I can see two likely domain models.
Option 1: the entities User, UserAddress, Invoice and InvoiceAddress.
In this first option every entity has its own designated address entity. OData containment would make sense with such a design, as the address entities do not exist outside of their respective parent entity. A UserAddress is always linked to a User and an InvoiceAddress is always linked to an Invoice. Getting a single UserAddress entity makes less sense, because with this domain model one shouldn't care where a single address is; the focus lies more on what the persisted addresses for this single User are. It's also not possible to create a UserAddress without specifying an existing User. The UserAddress entity relies entirely on the User entity.
Option 2: the entities User, Invoice, TypedAddress and Address.
In this second option the Address entity is stand-alone. It exists separately from the other entities. Since an address boils down to a unique location on this planet, it is only saved once. Other entities then link to the Address entity via the TypedAddress entity, where they specify what kind of address it is in relation to the entity linking to it. Getting a single Address makes perfect sense with this domain model. An address book of the entire company could easily be retrieved by requesting GET odata/Addresses. This is where OData containment does not make sense.
Do note that it is possible to use the ODataConventionModelBuilder to configure containment. This has the advantage of not polluting your data layer with a reference to the OData library, because you do not need to add the ContainedAttribute to your classes. I would recommend this approach. In your situation I would expect the configuration below.
var modelBuilder = new ODataConventionModelBuilder();

modelBuilder
    .EntityType<User>()
    .ContainsMany(user => user.UserAddresses);

modelBuilder
    .EntityType<Invoice>()
    .ContainsMany(invoice => invoice.InvoiceAddresses);
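With that configuration the addresses are only reachable through their parent, for example GET odata/Users(1)/UserAddresses. A rough controller sketch for that route (the DbContext and property names are assumptions; attribute routing is used here because the default routing conventions do not cover containment):

public class UsersController : ODataController
{
    private readonly MyDbContext _db = new MyDbContext(); // hypothetical EF context

    [EnableQuery]
    [ODataRoute("Users({userId})/UserAddresses")]
    public IQueryable<UserAddress> GetUserAddresses([FromODataUri] int userId)
    {
        // Return only the addresses belonging to the given parent user.
        return _db.Users
            .Where(u => u.Id == userId)
            .SelectMany(u => u.UserAddresses);
    }
}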
Related
What's the most preferred way to work with Entity Framework and DTOs?
Let's say that after mapping I have objects like:
Author
    int id
    string name
    List<Book> books

Book
    int id
    string name
    Author author
    int authorID

My DTOs:

AuthorDTO
    int id
    string name

BookDTO
    int id
    string name
    int authorID
Since an author can have a lot of books, I don't want to retrieve all of them when, for example, I'm only interested in the authors.
But sometimes I might want to get a few authors and filtered books, or all books.
I could go with multiple queries, e.g. AuthorDTO GetAuthor(int id) and List<BookDTO> GetBooks(int authorID), but that means several round trips to the database.
The options as I see them:
1. If AuthorDTO had a List<BookDTO> books field, the job could be done. But sometimes I would leave this list empty, for example when listing only authors. That means some inconsistency, mess, and a lot of details to remember.
2. Return a Tuple<AuthorDTO, List<BookDTO>>; it might be a bit confusing.
3. Define a new DTO:
AuthorAndBooksDTO
    AuthorDTO author
    List<BookDTO> books
The problem with sticking to a single AuthorDTO and selectively filling the list is that you are now forced to keep track of where that DTO came from. Is the list of Books not hydrated, or does this Author simply have no books? Do I have to go back to my controller and call a different method to get a different state of the same DTO? This lacks clarity from the consumer's standpoint.
In my experience, I've leaned toward more DTOs instead of trying to re-use a set of basic DTOs to represent multiple different sets of data. It requires a bit more "boilerplate" (setting up a whole bunch of similar DTOs and mappings between DTO and entity), but in the end the specificity and clarity make the codebase easier to read and manage.
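As a rough illustration of that approach (these names are mine, not from the question), the "author plus books" case gets its own DTO so the consumer never has to guess whether Books was hydrated:

public class AuthorDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class BookDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int AuthorId { get; set; }
}

// Used only by the "author with his books" query; Books is always populated here.
public class AuthorWithBooksDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public List<BookDto> Books { get; set; } = new List<BookDto>();
}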
I think some clarification of the issues involved will actually solve your confusion here.
First and most importantly, your entity classes are DTOs. In fact, that's all they are. They're classes that represent a table structure in your database so that data from queries Entity Framework makes can be mapped on to them. In other words, they are literally objects that transfer data. The failing of Microsoft and subsequently far too many MVC developers is to conflate them with big-M Models described by the MVC pattern.
As a result, it makes absolutely zero sense to use Entity Framework to return one or more instances of an entity and then map that to yet another DTO class before finally utilizing it in your code. All you're doing is creating a pointless level of abstraction that adds nothing to your application but yet another thing to maintain.
As far as relationships go, that's where Entity Framework's lazy/eager loading comes in. In order to take advantage of it, though, the property representing the relationship must follow a very specific convention:
public virtual ICollection<Book> Books { get; set; }
If you type it as something like List<Book>, Entity Framework will not touch the relationship at all. It will not ever load the related entities and it will not persist changes made to that property when saving the entity back to the database. The virtual keyword allows Entity Framework to dynamically subclass your entity and override the collection property to add the logic for lazy-loading. Without that, the related entities will only ever be loaded if you explicitly use Load from the EF API.
Assuming your property is defined that way, you gain a whole world of abilities. If you want all books belonging to the author, you can just interact with author.Books directly (iterate, query, whatever). No queries are made until you do something that requires evaluation of the query; EF issues just-in-time queries depending on the information you're requesting from the database. If you do want to load all the related books at the same time you retrieve the author, you can just use Include in your query:
var author = db.Authors.Include(m => m.Books).SingleOrDefault(m => m.Id == id);
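For completeness, the explicit Load mentioned above looks roughly like this in EF6 (a sketch, assuming the same db context and Authors set):

var author = db.Authors.Find(id);

// Explicitly load the related books for just this author, on demand.
db.Entry(author).Collection(a => a.Books).Load();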
My first question would be to ask why you are creating DTOs in the first place. Is there a consumer on the other end that is using this data? Is it a screen? Are you building DTOs just to build DTOs?
Since you tagged the question as MVC, I'm going to assume you are sending data to a view. You probably want a ViewModel. This ViewModel should contain all the data that is shown on the view that uses it. Then use Entity Framework to populate the view model. This may be done with a single query using projections, or something more complex.
So after all that blathering, I would say you want option 3.
Just like the others said, for clarity reasons you should avoid creating "generic" DTOs for specific cases.
When you want to sometimes have authors and some of their books then model a DTO for that.
When you need only the authors then create another DTO that is more suited for that.
Or maybe you don't need DTOs, maybe a List containing their names is enough. Or maybe you could in fact use an anonymous type, like new { AuthorId = author.Id, AuthorName = author.Name }. It depends on the situation.
If you're using ASP.NET MVC the DTO you'll want is in fact a ViewModel that best represents your page.
Based on what you've described, your view model could be something like this:
public class BookViewModel{
    public int Id {get;set;}
    public string Name {get;set;}
}

public class AuthorViewModel{
    public int Id {get;set;}
    public string Name {get;set;}
    public List<BookViewModel> Books {get;set;} = new List<BookViewModel>();
}

public class AuthorsViewModel
{
    public List<AuthorViewModel> Authors {get;set;} = new List<AuthorViewModel>();

    //add in this class other properties, like the filters used on the page...

    public void Load(){
        //here you can retrieve the data from your database.
        //you could do like this:
        //step 1: retrieve data from DB via EF
        //step 2: fill in the Authors view models from the data at step 1
    }
}
//and in your controller you're calling the Load method to fill your viewmodel with data from the db.
public class AuthorsController : Controller
{
    public ActionResult Index(){
        AuthorsViewModel model = new AuthorsViewModel();
        model.Load();
        return View(model);
    }
}
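For completeness, one possible shape for the Load method above, assuming an EF context called AppDbContext with Authors and Books navigation properties (all of these names are assumptions, not from the question; Include is the lambda overload from System.Data.Entity):

public void Load()
{
    using (var db = new AppDbContext()) // hypothetical EF context
    {
        // Step 1: retrieve authors and their books in one query (eager loading).
        // Step 2: map the results onto the view models in memory.
        Authors = db.Authors
            .Include(a => a.Books)
            .AsEnumerable()
            .Select(a => new AuthorViewModel
            {
                Id = a.Id,
                Name = a.Name,
                Books = a.Books
                    .Select(b => new BookViewModel { Id = b.Id, Name = b.Name })
                    .ToList()
            })
            .ToList();
    }
}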
Consider the following business entity class. In order to validate itself, it needs to know something about the state of the database, perhaps to prevent a conflict of some kind. So, it has a dependency on the data access layer in order to retrieve this data.
Is it a violation of the Single Responsibility Principle to have a class that encapsulates state, validates the state, and accesses a data store to do so?
class MyBusinessObject
{
    private readonly IDataStore DataStore;

    public MyBusinessObject(IDataStore dataStore)
    {
        this.DataStore = dataStore;
    }

    public virtual int? Id { get; protected set; }
    public virtual string Name { get; set; }
    // ... Other properties...

    public IEnumerable<ValidationResult> Validate()
    {
        var data = this.DataStore.GetDataThatInfluencesValidation();
        return this.ValidateUsing(data);
    }

    // ... ValidateUsing method would be in here somewhere ...
}
It's throwing a red flag for me because:
In the context of an ASP.NET MVC controller's Create method, I might make a new instance and pass it to my View() method with no intention of validating, so why should I be required to pass in an IDataStore?
I'm using NHibernate (and I'm a noob), and it looks like I have to create an IInterceptor that injects dependencies whenever NH creates entities. Maybe that will be fine, but it feels a little bit wrong to me.
I'm starting to think I should use an anemic/DTO type of object for NHibernate to use, and wrap that inside something else that knows all the business rules AND can access a data store if it depends on one. Now that I've typed up my question and title, StackOverflow has suggested some interesting resources: here and here.
My question also looks very similar to this one, but I thought I'd ask it in a different way that more closely matches my situation.
"Is it a violation of the Single Responsibility Principle to have a class that encapsulates state, validates the state, and accesses a data store to do so?" -- You listed three responsibilities so I'm going to answer yes.
The difficulty with validation is that it depends on context. For example, creating a customer record might require just their name but selling them a product requires payment information and a shipping address.
In MVC, I do low-level validation (data sizes and nullability) using data annotations in view models. More complex validation is done using the specification pattern, which is easy to implement and flexible.
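A minimal sketch of what I mean by the specification pattern (the interface and the example rule are illustrative, not tied to a particular library):

public interface ISpecification<T>
{
    bool IsSatisfiedBy(T candidate);
}

// Context-specific rule: selling requires payment information and a shipping address,
// while merely creating the customer record does not. (Property names are assumptions.)
public class CustomerCanBuySpecification : ISpecification<Customer>
{
    public bool IsSatisfiedBy(Customer candidate)
    {
        return candidate != null
            && !string.IsNullOrEmpty(candidate.Name)
            && candidate.PaymentInformation != null
            && candidate.ShippingAddress != null;
    }
}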
In my ASP.NET MVC project, my actions typically call a Service layer to get data. I use the same dozen or so POCOs for all my models. I also plan on using the Service layer in console applications and maybe expose a web api at some point.
To make my database operations more efficient, my service layer only hydrates the properties in the model that are relevant to the particular method (which at this point is mostly driven by the needs of my controller actions).
So for example I might have a class Order with properties Id, Name, Description, Amount, Items. For a given service call I might only need to populate Id, Name, Items. A consumer of that service won't necessarily know that Amount is 0 only because it didn't populate the property.
Similarly, the consumer won't know whether Items is empty b/c there actually aren't any items, or whether this particular service method just doesn't populate that property.
And for a third example, say one of my views displays an ItemCount. I don't want to fully populate my Items collection, I just need an additional property on my "model". I don't want to add this property to my POCO that other service methods will be using because it's not going to be populated anywhere else.
So the natural solution is to make a POCO designed specifically for that method with only those 3 properties. That way the consumer knows that all properties are populated with their real values. The downside is that I'll end up writing tons of similarly shaped models.
Any advice on which method works best?
You could use Nullable Types to indicate the missing properties with a null.
For example:
class Order {
    public int Id {get;set;}
    public string Name {get;set;}
    public string Description {get;set;}
    public decimal? Amount {get;set;}
    public List<Item> Items {get;set;}
}
And then if Items == null, it wasn't set; if it's an empty new List<Item>(), it's set but empty. The same goes for Amount: if Amount.HasValue == false, it wasn't set; if Amount.Value is 0, it's set and the order is free.
Why don't you use LINQ projection?
One service method does something like:
return DbContext.Orders.Select(o => new { Id = o.Id, Name = o.Name, Description = o.Description });
while the other service method does something like:
return DbContext.Orders.Select(o => o);
I'm not sure how your application is architected, but this may be a way around creating hundreds of POCOs.
Hope this helps! Good luck.
You could pass in a selector Func that returns dynamic:
public IEnumerable<dynamic> GetOrders(Func<Order, dynamic> selector) { ... }
I'm not sure how you are accessing data, but the following shows how this would work using a List<T>:
class Program
{
    static void Main(string[] args)
    {
        var service = new Service();
        var orderNames = service.GetOrders(o => new { o.Name });

        foreach (var name in orderNames)
            Console.WriteLine(name.Name);

        Console.ReadLine();
    }
}

public class Service
{
    private List<Order> _orders = new List<Order>
    {
        new Order { Id = 1, Name = "foo", Description = "test order 1", Amount = 1.23m },
        new Order { Id = 2, Name = "bar", Description = "test order 1", Amount = 3.45m },
        new Order { Id = 3, Name = "baz", Description = "test order 1", Amount = 5.67m }
    };

    public IEnumerable<dynamic> GetOrders(Func<Order, dynamic> selector)
    {
        return _orders.Select(selector);
    }
}

public class Order
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public decimal Amount { get; set; }
}
The use of nullable values is a good solution, but it has the downside that you have no way to mark required fields. That is, you cannot use a Required attribute on any property, so if a field is mandatory in some views you have no way to represent that.
If you don't need required-field validation this is fine. Otherwise, you need a way to represent which fields are actually in use, and then to write a custom validation provider.
A simple way to do this is to use a "Mask" class with the same property names as the original class, but with all properties boolean: a true value means the field is in use.
I used a similar solution in a system where the properties to be shown were configured in a configuration file, so it was the only option for me since I had no way to represent every combination of properties. However, I also used the "Mask" class in the view, so I was able to do the whole job with just one view, with a lot of ifs.
Now, if your roughly 150 service methods and 150 views are all different, then maybe it is simpler to use several classes as well; that is, in the worst case, 150 classes. The extra work to write them is negligible compared to the effort of preparing 150 different views.
However, this doesn't mean you need 150 POCO classes. You might use a single POCO class that is copied into an appropriate class only in the presentation layer. The advantage of this approach is that you can put different validation attributes on the various classes and you don't need to write a custom validation provider.
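For reference, the "Mask" idea described above could look like this for the earlier Order example (a sketch; every property is a bool that says whether the corresponding field is in use):

public class OrderMask
{
    public bool Id { get; set; }
    public bool Name { get; set; }
    public bool Description { get; set; }
    public bool Amount { get; set; }
    public bool Items { get; set; }
}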
Return the entire POCO with nullable types as mentioned by @sbolm. You can then create a ViewModel per MVC page view that receives a model with only the specific properties it needs. This takes a little more performance (insignificant) and code, but it keeps your service layer clean and keeps your views "dumb" in that they are only given what they need and have no direct relation to the service layer.
I.e. (example class from @sbolm):
class Order {
    public int Id {get;set;}
    public string Name {get;set;}
    public string Description {get;set;}
    public decimal? Amount {get;set;}
    public List<Item> Items {get;set;}
}

// MVC View only needs to know the name and description, manually "map" the POCO properties into this view model and send it to the view
class OrderViewModel {
    public string Name {get;set;}
    public string Description {get;set;}
}
I would suggest that instead of modifying the models or creating wrapper models, you name the service methods such that they are self-explanatory and reveal to the consumer what they return.
The problem with the nullable approach is that it makes the consumer feel that the property is not required or mandatory, and they may try inserting instances of those types without setting those properties. Wouldn't it be bad to have nullables everywhere?
It is not a good approach to change the domain models when all you want is to populate some of the properties; instead, create service methods with names and descriptions that are self-explanatory.
Take the Order class itself as the example: say one service method returns the Order with all of its items, and the other returns only the details of the Order but not the items. Then you create two service methods, GetOrderItems and GetOrderDetail. This sounds simple, and it is, but notice that the service method names themselves tell the client what they are going to return. In GetOrderDetail you can return an empty Items collection or null (here I would suggest null); that doesn't matter much.
So for new cases you don't need to frequently change the models; all you have to do is add or remove service methods, and that's fine. Since you are creating a service, you can write solid documentation that says which method does what.
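In code, that boils down to something like the sketch below, where the method names document what gets populated (the interface itself is illustrative):

public interface IOrderService
{
    // Order header only; the Items collection is left null.
    Order GetOrderDetail(int orderId);

    // Order including its Items collection.
    Order GetOrderItems(int orderId);
}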
I would not performance-optimize this too much unless you really run into performance problems.
I would only distinguish between returning a flat object and an object with a more complete object graph.
Methods returning flat objects would be called something like GetOrder or GetProduct.
If more complete object graphs are requested, they would be called GetOrderWithDetails.
Do you use the POCO classes for the typed views? If yes, try to make new classes that serve as dedicated ViewModels. These ViewModels would contain the POCO classes. This will help you keep the POCO classes clean.
To expand on the nullable idea, you could use the fluentvalidation library to still have validation on the types dependent on whether they are null or not. This would allow you to have a field be required as long as it was not null or any other validation scheme you can think of. Example from my own code as I had a similar requirement:
Imports FluentValidation

Public Class ParamViewModelValidator
    Inherits AbstractValidator(Of ParamViewModel)

    Public Sub New()
        RuleFor(Function(x) x.TextBoxInput).NotEmpty.[When](Function(x) Not (IsNothing(x.TextBoxInput)))
        RuleFor(Function(x) x.DropdownListInput).NotEmpty.[When](Function(x) Not (IsNothing(x.DropdownListInput)))
    End Sub
End Class
I understand that the "proper" structure for separation of concerns in MVC is to have view models for structuring your views and separate data models for persisting to your chosen repository. I started experimenting with MongoDB, and I'm starting to think that this may not apply when using a schema-less, NoSQL-style database. I wanted to present this scenario to the Stack Overflow community and see what everyone's thoughts are. I'm new to MVC, so this made sense to me, but maybe I am overlooking something...
Here is my example for this discussion: when a user wants to edit their profile, they go to the UserEdit view, which uses the UserEditModel below.
public class UserEditModel
{
    public string Username
    {
        get { return Info.Username; }
        set { Info.Username = value; }
    }

    [Required]
    [MembershipPassword]
    [DataType(DataType.Password)]
    public string Password { get; set; }

    [DataType(DataType.Password)]
    [DisplayName("Confirm Password")]
    [Compare("Password", ErrorMessage = "The password and confirmation password do not match.")]
    public string ConfirmPassword { get; set; }

    [Required]
    [Email]
    public string Email { get; set; }

    public UserInfo Info { get; set; }

    public Dictionary<string, bool> Roles { get; set; }
}
public class UserInfo : IRepoData
{
    [ScaffoldColumn(false)]
    public Guid _id { get; set; }

    [ScaffoldColumn(false)]
    public DateTime Timestamp { get; set; }

    [Required]
    [DisplayName("Username")]
    [ScaffoldColumn(false)]
    public string Username { get; set; }

    [Required]
    [DisplayName("First Name")]
    public string FirstName { get; set; }

    [Required]
    [DisplayName("Last Name")]
    public string LastName { get; set; }

    [ScaffoldColumn(false)]
    public string Theme { get; set; }

    [ScaffoldColumn(false)]
    public bool IsADUser { get; set; }
}
Notice that the UserEditModel class contains an instance of UserInfo, which implements IRepoData? UserInfo is what gets saved to the database. I have a generic repository class that accepts any object implementing IRepoData and saves it, so I just call Repository.Save(myUserInfo) and it's done. IRepoData defines the _id (MongoDB naming convention) and a Timestamp, so the repository can upsert based on _id and check for conflicts based on the Timestamp; whatever other properties the object has just get saved to MongoDB. The view, for the most part, just needs to use @Html.EditorFor and we are good to go! Basically, anything that only the view needs goes on the outer edit model, anything that only the repository needs just gets the [ScaffoldColumn(false)] annotation, and everything else is shared between the two. (BTW, the username, password, roles, and email get saved to .NET providers, which is why they are not in the UserInfo object.)
The big advantages of this scenario are two-fold...
I can use less code, which is therefore more easily understood, faster to develop, and more maintainable (in my opinion).
I can re-factor in seconds... If I need to add a second email address, I just add it to the UserInfo object - it gets added to the view and saved to the repository just by adding one property to the object. Because I am using MongoDB, I don't need to alter my db schema or mess with any existing data.
Given this setup, is there a need to make separate models for storing data? What do you all think the disadvantages of this approach are? I realize that the obvious answers are standards and separation-of-concerns, but are there any real world examples can you think of that would demonstrate some of the headaches this would cause?
Its also worth noting that I'm working on a team of two developers total, so it's easy to look at the benefits and overlook bending some standards. Do you think working on a smaller team makes a difference in that regard?
The advantages of view models in MVC exist regardless of the database system used (hell, even if you don't use one at all). In simple CRUD situations your business model entities will very closely mimic what you show in the views, but in anything more than basic CRUD that will not be the case.
One of the big issues is business logic / data integrity when you use the same class for data modeling/persistence as you use in your views. Take the situation where you have a DateTime DateAdded property in your user class, to denote when a user was added. If you provide a form that hooks straight into your UserInfo class, you end up with an action handler that looks like:
[HttpPost]
public ActionResult Edit(UserInfo model) { }
Most likely you don't want the user to be able to change when they were added to the system, so your first thought is to not provide a field in the form.
However, you can't rely on that, for two reasons. First, the value for DateAdded will be whatever you get from new DateTime(), or it will be null (either way it will be incorrect for this user).
The second issue with this is that users can spoof this in the form request and add &DateAdded=<whatever date> to the POST data, and now your application will change the DateAdded field in the DB to whatever the user entered.
This is by design, as MVC's model binding mechanism looks at the data sent via POST and tries to automatically connect them with any available properties in the model. It has no way to know that a property that was sent over wasn't in the originating form, and thus it will still bind it to that property.
ViewModels do not have this issue, because your view model should know how to convert itself to/from a data entity, and it does not have a DateAdded field to spoof; it only has the bare minimum fields it needs to display (or receive) its data.
In your exact scenario, I can reproduce this with ease with POST string manipulation, since your view model has access to your data entity directly.
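A hedged sketch of the safer shape for the edit scenario above: a dedicated view model that exposes only the editable fields, mapped explicitly in the action (the repository call and the field choice are assumptions):

public class UserInfoEditViewModel
{
    // Only what the user may change; there is no Timestamp or DateAdded to spoof.
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Theme { get; set; }
}

[HttpPost]
public ActionResult Edit(UserInfoEditViewModel model)
{
    var info = Repository.Get<UserInfo>(CurrentUserId); // hypothetical repository call
    info.FirstName = model.FirstName;
    info.LastName = model.LastName;
    info.Theme = model.Theme;                           // _id and Timestamp are never bound from the request
    Repository.Save(info);
    return RedirectToAction("Index");
}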
Another issue with using data classes straight in the views is when you are trying to present your view in a way that doesn't really fit how your data is modeled. As an example, let's say you have the following fields for users:
public DateTime? BannedDate { get; set; }
public DateTime? ActivationDate { get; set; } // Date the account was activated via email link
Now let's say that as an admin you are interested in the status of all users, and you want to display a status message next to each user as well as offer different actions the admin can take based on that user's status. If you use your data model, your view's code will look like:
// In status column of the web page's data grid
@if (user.BannedDate != null)
{
    <span class="banned">Banned</span>
}
else if (user.ActivationDate != null)
{
    <span class="Activated">Activated</span>
}
//.... Do some html to finish other columns in the table

// In the Actions column of the web page's data grid
@if (user.BannedDate != null)
{
    // .. Add buttons for banned users
}
else if (user.ActivationDate != null)
{
    // .. Add buttons for activated users
}
This is bad because you have a lot of business logic in your views now (user status of banned always takes precedence over activated users, banned users are defined by users with a banned date, etc...). It is also much more complicated.
Instead, a better (imho at least) solution is to wrap your users in a ViewModel that has an enumeration for their status, and when you convert your model to your view model (the view model's constructor is a good place to do this) you can insert your business logic once to look at all the dates and figure out what status the user should be.
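A sketch of that wrapper (the enum matches the UserStatuses values used below; the constructor is where the date rules live, and the extra property names are assumptions):

public enum UserStatuses
{
    Pending,
    Activated,
    Banned
}

public class UserViewModel
{
    public string Username { get; set; }
    public UserStatuses Status { get; set; }

    public UserViewModel(User user)
    {
        Username = user.Username;

        // The business rule lives in exactly one place: banned wins over activated.
        if (user.BannedDate != null)
            Status = UserStatuses.Banned;
        else if (user.ActivationDate != null)
            Status = UserStatuses.Activated;
        else
            Status = UserStatuses.Pending;
    }
}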
Then your code above is simplified as:
// In status column of the web page's data grid
@if (user.Status == UserStatuses.Banned)
{
    <span class="banned">Banned</span>
}
else if (user.Status == UserStatuses.Activated)
{
    <span class="Activated">Activated</span>
}
//.... Do some html to finish other columns in the table

// In the Actions column of the web page's data grid
@if (user.Status == UserStatuses.Banned)
{
    // .. Add buttons for banned users
}
else if (user.Status == UserStatuses.Activated)
{
    // .. Add buttons for activated users
}
Which may not look like less code in this simple scenario, but it makes things a lot more maintainable when the logic for determining a status for a user becomes more complicated. You can now change the logic of how a user's status is determined without having to change your data model (you shouldn't have to change your data model because of how you are viewing data) and it keeps the status determination in one spot.
tl;dr
There are at least 3 layers of models in an application, sometimes they can be combined safely, sometimes not. In the context of the question, it's ok to combine the persistence and domain models but not the view model.
full post
The scenario you describe fits equally well using any entity model directly. It could be using a Linq2Sql model as your ViewModel, an entity framework model, a hibernate model, etc. The main point is that you want to use the persisted model directly as your view model. Separation of concerns, as you mention, does not explicitly force you to avoid doing this. In fact separation of concerns is not even the most important factor in building your model layers.
In a typical web application there are at least 3 distinct layers of models, although it is possible and sometimes correct to combine these layers into a single object. The model layers are, from highest level to lowest, your view model, your domain model and your persistence model. Your view model should describe exactly what is in your view, no more and no less. Your domain model should describe your complete model of the system exactly. Your persistence model should describe your storage method for your domain models exactly.
ORMs come in many shapes and sizes, with different conceptual purposes, and MongoDB as you describe it is simply one of them. The illusion most of them promise is that your persistence model should be the same as your domain model, and that the ORM is just a mapping tool from your data store to your domain object. This is certainly true for simple scenarios, where all of your data comes from one place, but it eventually has its limitations, and your storage degrades into something more pragmatic for your situation. When that happens, the models tend to become distinct.
The one rule of thumb to follow when deciding whether or not you can separate your domain model from your persistence model is whether or not you could easily swap out your data store without changing your domain model. If the answer is yes, they can be combined; otherwise they should be separate models. A repository interface naturally fits here to deliver your domain models from whatever data store is available. Some of the newer lightweight ORMs, such as Dapper and Massive, make it very easy to use your domain model as your persistence model because they do not require a particular data model in order to perform persistence; you simply write the queries directly and let the ORM handle just the mapping.
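The repository interface meant here does not need to be anything more elaborate than a sketch like this (the generic shape is assumed, not prescribed):

// Delivers domain models regardless of which data store sits behind it.
public interface IRepository<T> where T : class
{
    T GetById(Guid id);
    IEnumerable<T> GetAll();
    void Save(T entity);
    void Delete(T entity);
}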
On the read side, view models are again a distinct model layer because they represent a subset of your domain model, combined however you need in order to display information on the page. If you want to display a user's info, with links to all his friends, and hovering over a friend's name shows some info about that user, then handling that directly with your persistence model, even with MongoDB, would likely be pretty insane. Of course, not every application shows such a collection of interconnected data on every view, and sometimes the domain model is exactly what you want to display. In that case there is no reason to put in the extra weight of mapping from an object that has exactly what you want to display to a specific view model with the same properties. In simple apps, if all I want to do is augment a domain model, my view model will directly inherit from the domain model and add the extra properties I want to display. That being said, before your MVC app becomes large, I highly recommend using a view model for your layouts and having all of your page-based view models inherit from that layout model.
On the write side, a view model should only expose the properties you wish to be editable for the type of user accessing the view. Do not send an admin view model to the view for a non-admin user. You could get away with this if you write the mapping layer for this model yourself, taking into account the privileges of the accessing user, but that is probably more overhead than just creating a second admin model that inherits from the regular view model and augments it with the admin properties.
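That is, something along these lines (the property names are placeholders):

public class UserProfileViewModel
{
    public string Username { get; set; }
    public string Email { get; set; }
}

// Only ever sent to the view when the accessing user is an admin.
public class AdminUserProfileViewModel : UserProfileViewModel
{
    public bool IsLockedOut { get; set; }
    public DateTime? BannedDate { get; set; }
}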
Lastly about your points:
Less code is only an advantage when it actually is more understandable. Readability and understand-ability of it are results of the skills of the person writing it. There are famous examples of short code that has taken even solid developers a long time to dissect and understand. Most of those examples come from cleverly written code which is not more understandable. More important is that your code meets your specification 100%. If your code is short, easily understood and readable but does not meet the specification, it is worthless. If it is all of those things and does meet the specification, but is easily exploitable, the specification and the code are worthless.
Refactoring in seconds, safely, is the result of well-written code, not its terseness. Following the DRY principle will make your code easy to refactor as long as your specification correctly meets your goals. In the case of model layers, your domain model is the key to writing good, maintainable and easy-to-refactor code. Your domain model will change at the pace at which your business requirements change. Changes in business requirements are big changes, and care has to be taken to make sure a new spec is fully thought out, designed, implemented, tested, etc. For example, you say today you want to add a second email address. You will still have to change the view (unless you're using some kind of scaffolding). Also, what if tomorrow you get a requirements change to add support for up to 100 email addresses? The change you originally proposed is rather simple for any system; bigger changes require more work.
I'm using ASP.NET 4 and MVC3.
Often, I find that I need a ViewModel to display information for my Model. For example, take the following model
class Profile
{
    public int UserID { get; set; }
    public string Name { get; set; }
    public DateTime DOB { get; set; }
}
There is a requirement to hide the UserID but to show the UserName, so often, for models similar to the one above, I have to come up with a ViewModel with just the UserID changed to UserName:
class ProfileViewModel
{
    public string UserName { get; set; }
    public string Name { get; set; }
    public DateTime DOB { get; set; }
}
Are there any ways to avoid this duplication?
Until recently I always passed my models to my action methods, as I also thought that creating ViewModels with the same property names was duplication (it's not). This caused me a lot of pain. I have now been re-educated and almost always use ViewModels exclusively in my action methods (of course, there will always be situations where it is fine to pass the model directly to the action method).
Have a read of this post which is the one that converted me to using viewModels. This will tell you the following:
The difference between models and viewModels
When each should be used.
How to avoid some security issues with the default model binder.
On top of the information in the linked post, you should also consider things such as validation. I had a model that implemented the IValidatableObject interface to ensure the entity was in a valid state before being saved to the database.
In my ASP.NET application I wanted to create a multi-step form that allowed the user to enter the information over a number of pages. The problem I had here was that ASP.NET also uses the IValidatableObject interface during the model binding process.
If you are only allowing the user to enter a subset of the information required for the entity, the model binder will only be able to fill in the information that was given. Depending on how complex your validation is, this can result in the ModelState being marked as invalid as the entire entity is not valid.
The way I got around this was to have a ViewModel representing each step, each with its own validation. This way you are only validating the properties relevant to that step. Once you get to the final step and everything is valid, I create the appropriate entity using the information given by the user. This entity will only have database-level validation checks performed on it (field lengths etc.).
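Roughly, one view model per step, each carrying only its own validation attributes (the names and fields are illustrative):

public class RegistrationStepOneViewModel
{
    [Required]
    public string Email { get; set; }

    [Required]
    [DataType(DataType.Password)]
    public string Password { get; set; }
}

public class RegistrationStepTwoViewModel
{
    [Required]
    public string FirstName { get; set; }

    [Required]
    public string LastName { get; set; }
}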
My suggestion is not to avoid viewModels but to understand why they are used and embrace them.
No, there isn't; once a member is public, it's public. Now, if the UserID property were internal, you wouldn't have that problem.
However, one of the aims of MVVM here is to encapsulate logic regarding the interaction of the model and the view. Even if you have the view model and model in separate assemblies and make the UserID property internal, you should still have a view model; if changes come down the line where more functionality is required than simply binding to the model, you are prepared.
Direct access to the model is always a no-no.
Additionally, if you really wanted, you could always use T4 templates to auto-generate the code for you (you could use Code DOM on the original CS file) to output your view models for you.
I usually have multiple ViewModels per model - the tradeoff you have to make comes down to this:
Are you comfortable coupling business logic (data annotations, display information, etc...) with your (persistence) models?
Are you comfortable doing all of the hide/display business logic purely within the View, rather than using the Controller + scaffolding to make those decisions for you?
The downside of creating all of those ViewModels of course is sub-class explosion, but the right way to think about it is in terms of the questions I listed IMHO.