When to create a class vs setting a boolean flag? - ruby-on-rails

I have an interesting question to pose: when should one create a model class/object as opposed to setting a boolean flag for data stored in a database?
For example, say I have a Person class that has boolean flags for President, Guard, and PartTime. This class/model is treated differently depending on the value of the flags. So the President gets different privileges in the system from the Guard and from the PartTime(r).
When would one use Single Table Inheritance to represent this information and when would one just continue to use the boolean flag?
My instinct is to convert these to different Objects using STI since this seems more OO to me. Checking booleans seems wrong in some way, but I can also see a place for it.
Update for clarification
Let me use another example because the one above has too many cases involved with it.
I am working on a CMS application that contains Pages, a Page can be Public, Private, Shared, Hidden, or Default (meaning it is what you get when you don't specify a page in the url). Right now, we have a Page model and everything is a boolean flag - Public, Default, Shared.
I am not convinced this is the best method of handling this. Especially since we have rules governing what page can be what, i.e., the Default page or a Shared page must be a Public page whereas a Private page is just Private.
I agree with the comment below that Roles for the Person example makes a lot of sense. I am not sure that for the Page example it does.
And to make things more complicated, there can only be one Default page and one Shared page. STI may allow me to validate this, but I am not sure since there can be many default and shared pages in the table (just not associated with a particular site).
Note: The context for the question is a Ruby on Rails application, but is applicable for any object-oriented language.

First of all, let's establish what single-table inheritance is typically used for. It is a way to combine the storage and behaviour of multiple things that resemble each other. Sticking with the CMS example, you might have a table of posts, where a post could be either a Comment or an Article. They share similar data and behaviour, but are ultimately different things. Whether or not something is a comment is not the state of the object, it's an identity.
In your example, however, whether or not a page is public or private, shared or not, or hidden, appears to be a part of the state of the page. Although single-table inheritance might technically work (provided all subclasses are mutually exclusive), it's not a good fit.
State should be implemented in one or more columns. An attribute that represents a certain dual state can be specified as a boolean: yes or no. If a page is always either private or public, you can model this as a single boolean column, private. If it's not private it's public (or the other way around).
In some cases you may want to store three or more different states that are mutually exclusive. For example, a page could be either private, or public, or shared (I don't know if this is the case -- let's pretend that it is). In this case a boolean will not help. You could use multiple boolean flags, but as you correctly observe that is very confusing. The easiest way is to model this as an enumeration. Or when you lack this (as is the case with Rails), simply use string values with a special meaning and add a validation that ensures the only values you use are one of private, public or shared.
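For instance, here is a minimal sketch of that approach in a Rails model; the class name Page matches the question, but the string column visibility is a name assumed purely for illustration:

class Page < ActiveRecord::Base
  VISIBILITIES = %w[private public shared]   # the allowed, mutually exclusive states

  # reject anything outside the known set of states
  validates :visibility, inclusion: { in: VISIBILITIES }
end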
Sometimes certain combinations of different state variables are invalid. For example, a page might be a draft or approved (reflected by a boolean column approved); and it is also either public or private (also reflected by a boolean column). We could decide that a page must be approved before it is made public. In this case we declare one of the states invalid. This should be reflected by the validation of your model. It is important to realise that a draft, public page is not fundamentally impossible; it's only impossible because you decide it should not happen.
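Continuing the same sketch, such a business rule could be expressed as a custom validation; the column names visibility and approved are again assumptions for illustration:

class Page < ActiveRecord::Base
  validate :approved_before_public

  private

  # business rule: a page may only be made public once it has been approved
  def approved_before_public
    if visibility == "public" && !approved?
      errors.add(:visibility, "must be approved before the page can be made public")
    end
  end
end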
When creating your model, make a careful distinction between the attributes that reflect actual properties and states of the subjects in the real world, and the business rules that determine what should be possible and what shouldn't be. The first should be modelled as columns, the second as validations.
Original answer:
One obvious difference is that boolean flags allow a Person to be marked as president and guard at the same time. If your model should allow these situations, single-table inheritance will not work for you.
On the other hand, maybe a Person that is a president behaves differently from a regular person; and a single person can only be president or guard. In this case inheritance may be a better fit. I don't think you should model "part time" as a subclass, though. That is an attribute in any case.
There is also an important third option, one where you completely separate the job or role of a person from the model. One person has one (or many?) jobs, which are or are not part-time. The advantage of this model is that you separate attributes of a person from the attributes of their job. After all, people change jobs, but that does not make them literally a different person. Ultimately this seems to me the most realistic way to model your situation.

I prefer not to use a flag for this, but also not to subclass Person for this. Rather, attach a Role (or if you have someone who's both a President and a Guard, a set of Roles) with subclasses of Role governing the privileges.
Personally, I am neither a President nor a Guard, but I am both a Programmer and a Musician, and have a few other roles at times (in fact, I was a Guard for a while, simultaneously with being a Student, many years ago).
A Person has-a Role.
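A rough sketch of what that could look like in Rails, with STI subclasses of Role carrying the privilege logic; all class and method names here are illustrative, not taken from the question:

class Person < ActiveRecord::Base
  has_many :roles   # or has_one :role if one role per person is enough

  # a person may do something if any of their roles permits it
  def can?(action)
    roles.any? { |role| role.permits?(action) }
  end
end

class Role < ActiveRecord::Base
  belongs_to :person

  def permits?(action)
    false   # default: no privileges
  end
end

class President < Role
  def permits?(action)
    true    # toy example: presidents may do everything
  end
end

class Guard < Role
  def permits?(action)
    action == :open_gate
  end
end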

I have found that whenever I think "Hm, I have these 3 types of behavior and they do look like subclasses, but they need to change at runtime", it pays to look at a strategy or state pattern. It usually fits very well, and it usually also beats a simple boolean flag with respect to keeping responsibilities apart.
In your case, this heuristic would say that you have a Person with an attribute of type AccessRights, which decides whether a certain action can be performed or not. Person either gives access to this object or delegates the appropriate methods. After that, you have PresidentialRights, GuardRights and PartTimeRights implementing this AccessRights interface and you are good to go.
Given this, you never need to change the Person class when a new type of access right appears; you might need to change it if a new type of action appears (depending on whether and how you delegate), and to add new types of AccessRights you just add new implementations of the AccessRights interface.
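A small sketch of that strategy idea in plain Ruby; the class names and the allows? method are assumptions made for illustration:

class PresidentialRights
  def allows?(action)
    true                                  # full access
  end
end

class GuardRights
  def allows?(action)
    %i[open_gate check_badge].include?(action)
  end
end

class Person
  attr_accessor :access_rights            # any object responding to #allows?

  def can_perform?(action)
    access_rights.allows?(action)         # delegate the decision to the strategy
  end
end

Swapping GuardRights for some PartTimeRights object changes a person's privileges without touching the Person class itself.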

The answer is that it is basically a design decision; there is no a priori right way of designing an architecture. When you define classes and the relationships among them, you define an architecture and, at the same time, a language representing the domain of your application.
Like any language, it consists of a vocabulary (i.e. Person, President, Guard, etc.), a syntax (i.e. the relationships you can specify for the instances of your vocabulary) and semantics (i.e. the meaning of the terms you specify in the vocabulary and relationships).
Now you can obviously obtain the same behaviour in a possibly infinite number of ways, and anyone would come up with a different architecture for the same system, since anyone might have a different way of thinking about the problem.
Despite this there are some criteria you should take into account when designing.
When you define a Class you are defining a "first order" construct of your language, when you define attributes for a Class you are describing the characteristics of your first order constructs.
The best way to decide whether you need a class or an attribute might be this:
Do Presidents and Guards have different characteristics apart from those they share because they are both a Person? If that is the case, and they have a number of different characteristics, you should create two classes (one for the President and one for the Guard), both inheriting from Person. Otherwise you would have to collapse all the characteristics (those belonging to Person, those belonging to President and those belonging to Guard) into the Person class and condition their validity on another flag (type). That would be a very bad design.
The characteristic of a Page being public or not is instead something which actually describes the status of a page. It is therefore quite reasonable to model it as a property of the Page class.
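To make the contrast concrete, here is a brief sketch of both situations; the code is illustrative and not taken from the question:

# Different characteristics and behaviour: subclasses via STI
# (requires a string `type` column on the people table)
class Person < ActiveRecord::Base; end
class President < Person; end
class Guard     < Person; end

# Plain state: an attribute on the class itself
class Page < ActiveRecord::Base
  # `public` is just a boolean column describing the page's status
  scope :published, -> { where(public: true) }
end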

Related

has_many :through model names, controller and attributes best practices?

Disclaimer: I really spent time thinking about names of models and variables. If you also do, this question is for you.
I have a Rails project which contains two models: User and Project.
They are connected by the model ProjectsUser, which is a connection model in a many-to-many relationship. This model also holds the role of a user in the given project, along with other attributes such as tier and departments. So this is a has_many :through relationship.
Given this scenario, here is everything that always bothered me on all my rails projects since I started developing on it:
Should I use a ProjectsUserController, or is it better to add the relevant actions to UserController and ProjectController? At some point, I want to assign users to a project, or even change the role of a user in a given project. Is it better practice to leave those actions on the connection controller, or to use the model controllers?
Should I write a method to get the role of a user for a given project? This is basically whether I should have a method User#role_for(project) or not. Since this method is basically getting the information from the projects_user object, it could make more sense to always leave this explicit in the code, since most of the time I'll have the project and the user, but not the projects_user. Is this line of thinking correct, or is the problem perhaps that I should have more projects_user objects in my code than I really do? Are there good caveats for this?
Should I try to rename my table to a non-standard name if it is not obvious? OK, I get that if I have the models User and NewsSite I should use has_many :subscriptions, but the thing is that naming those models in real-life cases is usually harder, in my experience. When the name ends up not being that obvious (for example, in my case, maybe project_participation as @wonderingtomato suggested), is that for the best, or is it better in those cases to fall back to the ProjectsUser approach?
One extra cookie for pointers to beautiful open source Rails code, or for book recommendations that might help with this kind of question.
I would use a specific controller. Even if now the interaction sounds simple, you can't know if in the future you'll need to add more advanced features.
I've been handling these kind of relationships in several projects, and using a controller for the join model has always paid off.
You can structure it this way, for example (a rough controller sketch follows this list):
index should expect a params[:project_id], so that you can display only the index of users for a specific project.
create is where you add new users, that is where you create new join models.
update is to modify a value on an existing join model, for example when you want to update the role of a user in a project.
destroy is where you remove users from the project, that is where you delete the corresponding join models.
You might not need a show and edit actions, if you decide to manage everything in the index view.
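A rough sketch of such a controller, assuming the join model is called ProjectParticipation, Rails 4-style strong parameters, and routes nested under projects:

# Assumes routes like:
#   resources :projects do
#     resources :participations, controller: "project_participations"
#   end
class ProjectParticipationsController < ApplicationController
  def index
    @project = Project.find(params[:project_id])
    @participations = @project.project_participations
  end

  def create
    @project = Project.find(params[:project_id])
    @project.project_participations.create!(participation_params)
    redirect_to project_participations_path(@project)
  end

  def update
    participation = ProjectParticipation.find(params[:id])
    participation.update!(participation_params)   # e.g. change the user's role
    redirect_to project_participations_path(participation.project)
  end

  def destroy
    participation = ProjectParticipation.find(params[:id])
    participation.destroy
    redirect_to project_participations_path(participation.project)
  end

  private

  def participation_params
    params.require(:project_participation).permit(:user_id, :role)
  end
end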
Also, I'd suggest choosing a different name. Rails relies heavily on naming conventions, and projects_users is the default name for the join table you would use with a has_and_belongs_to_many association. In theory you can use it for an independent model (and a has_many through:), but it's not immediately clear and you might break something. In addition, it will confuse the hell out of any new programmer that joins the project in the future (personal experience).
What about calling the model something like project_participation?
If you haven't built a lot of functionality yet, and don't have yet that table in production, changing it now will save you a lot of headaches in the future.
update
1) I stand by what I said earlier: your join model is a full fledged record, it holds state, can be fetched, modified (by the user) and destroyed.
A dedicated controller is the way to go. Also, this controller should handle all the operations that modify the join model, that is that alter its properties.
2) You can define User#role_for(project), just remember that it should properly handle the situation where the user is not participating in the project.
You can also make it explicit with something like:
@user.project_participations.where(project_id: @project.id).first.try(:role)
# or...
ProjectParticipation.find_by(project_id: @project.id, user_id: @user.id).try(:role)
But I'd say that encapsulating this logic in a method (on one of the two models) would be better.
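For example, one way to encapsulate it on User, assuming the join model ends up being called ProjectParticipation:

class User < ActiveRecord::Base
  has_many :project_participations
  has_many :projects, through: :project_participations

  # returns nil when the user is not participating in the project
  def role_for(project)
    project_participations.find_by(project_id: project.id).try(:role)
  end
end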
3) You are already using a non standard name for your table. What I mean is that it's the default name for a different kind of association (has_and_belongs_to_many), not the one you are using (has_many through:).
Ask yourself this: is the table backing an actual model? If yes, that model represents something in the real world, and thus should have an appropriate name. If, on the other hand, the table is not backing a model (e.g. it's a join table), then you should combine the names of the tables (models) it's joining.
In my mind, REST doesn't always have to map directly to DB records. A conceptual resource here is the association of Projects to Users. Implementation would be different depending on your persistence layer, but a RESTful API would be standard.
Convention over Configuration in Rails is a great helper, but it isn't necessarily applicable to every case 100% of the way through the stack. There doesn't need to be a 1-to-1 mapping between controllers, models, and their respective names. At the app-level, particularly, I want my routes/controllers to represent the public view of the API, not the internal implementation details of the persistence and domain layers.
You might have a UserProjectsController which you can perform CRUD on to add/remove project associations to users, and it will do the appropriate record manipulation without being overly bound to the DB implementation. Note the naming, where the route might be /user/:id/projects, so it's clear you are manipulating not Users or Projects, but their associations.
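In Rails terms, that could look roughly like the following routes; this is a sketch, where the controller name UserProjectsController comes from the answer above and the rest is assumed:

# config/routes.rb
resources :users do
  resources :projects, controller: "user_projects", only: [:index, :create, :destroy]
end
# GET    /users/:user_id/projects      -> list the user's project associations
# POST   /users/:user_id/projects      -> associate a project with the user
# DELETE /users/:user_id/projects/:id  -> remove the association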
I think thinking about this sort of thing (both before and after the fact) is what leads to better designs.
I too start with the model and think about the application structurally. The next step, in my opinion, is to build the user interface to make sense based on what is easy and useful for the user (spending more effort on things that matter more). So if it makes sense for the user to separately edit the ProjectsUser objects, then the ProjectsUsersController is the way to go. More likely, editing the join model objects as part of the Project (or User, depending on the structure of your app) will be a better fit for the user. In that case, using a nested form and editing via the controller (and model) that's the main model referenced by the form is better. The controller really exists to serve the UI, so decisions about it should depend on the UI.
Yes, if it makes your code simpler or more readable. If you use role more than once I suspect it will.
I would actually name that model something like Member, or ProjectMember (or Membership). It defines a relationship between a user and a project, so its name should reflect what relationship that is. In the occasions where such a name is too unwieldly or too hard to define then falling back to something like ProjectUser is reasonable (but not ProjectsUser). But I definitely like finding a more meaningful name when possible.
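Picking Membership from the names suggested above, the associations might be sketched like this (illustrative only):

class Membership < ActiveRecord::Base
  belongs_to :user
  belongs_to :project
end

class User < ActiveRecord::Base
  has_many :memberships
  has_many :projects, through: :memberships
end

class Project < ActiveRecord::Base
  has_many :memberships
  has_many :members, through: :memberships, source: :user
end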

Scope of viewmodels in asp.net MVC 3

I have read online that it is bad practice to use a "kitchen sink" model:
Rule #3 – The View dictates the design of the ViewModel. Only what is required to render a View is passed in with the ViewModel. If a Customer object has fifty properties, but one component only shows their name, then we create a custom ViewModel type with only those two properties.
Jimmy Bogard's subsequent explanation of how this is good, however, left me a little questioning. It'd be so easy to have my Model just contain a list of Customers, I could even use my POCO's.
So now I get to create custom little view model fragments for every page on the site? Every page that uses a Customer property would get one, but of course they could not be shared, since some of the information would be extraneous (if one page used Age but not Name, for example). Two new mini view model classes, right?
This is very time consuming, and seems like it'll lead to a million little custom view models - can someone elaborate as to the utility of this approach and why the easier approach is bad?
A view model class can be used not only to transfer values; it also defines data types (data annotations), validation rules and relations different from the ones used in the model. Some advantages that come to my mind right now:
There are different validation rules when you change a user's password, change his basic data or his subscription settings. It can be complicated to define all these rules in one model class. It looks much better and cleaner when different view models are used.
Using view models can also give you performance advantages. If you want to display a user list, you can define a view model with id and name only and use an index to retrieve it from the database. If you retrieved whole objects and passed them to the view, you would transfer more data from the database than you need to.
You can define display and editor templates for view models and reuse them on different pages using HTML helpers. It looks much worse when you define templates for model POCOs.
If you used your POCO objects as view models, you would essentially be exposing your private objects and breaking encapsulation. This in turn would make your model hard to change without altering the corresponding views.
Your data objects may contain details that are appropriate only to the data access layer. If you expose those things to the view, someone might alter those values that you did not expect to be altered and cause bugs.
Many of the same reasons as for having private members in OO languages apply to this reasoning. That being said, the rule is still very often broken, because it's a lot of extra work to create all these "throw-away" models that only get used once. There exist frameworks for creating these sorts of models, though the name eludes me, that can tie objects together and pick out only the interesting properties, which takes away some of the drudgery of creating specific view models.
Your View Model tells the View how data should be shown. It expresses the model. I don't think it's necessary to have two view models unless you have two ways to express your model. Just because you have two pages doesn't mean you will be showing the data in any different way, so I wouldn't waste time making two mini View Models when it can be one reusable view model. Imagine if later you have a page that needs Name and Age: would you create another view model? That would be absolutely silly. However, if you had two pages both showing Age and it needed to be shown in a different way, then I would create another one.

Best practice question - Working straight with Linq to sql classes

This is possibly a bit of a stupid question, but I am getting confused due to the ASP.NET MVC book I am currently reading...
Working with Linq-to-SQL, it seems to say that it is not good practice to pass the Linq-to-SQL objects straight to the controller, but that each object should be modelled separately first, and that model should be passed between the controller and the repository.
Say I have a database of products. Linq-to-SQL creates a product class for me with Name, Price and Whatnotelse properties. I could pass that straight from repository to controller and then view, but instead it seems to recommend that I use a third class, say Product_Entity, with the same Name, Price etc. properties, and pass that to the controller.
I fail to see the benefit of this approach, except possibly for adding attributes to the properties... But apart from that it seems to have more drawbacks than benefits. Say each product has manufacturer information as well, I don't see how I can model that easily in my third class.
Is this approach really best practice? Or did I misunderstand all that? If so, why is it bad to work straight off the Linq-to-SQL generated objects? And how do you deal with relationships between objects in your models?
The huge benefit to this other class you create is that, to use your example, it doesn't necessarily map to either a product or a manufacturer. Think about it like this:
Your Linq to SQL classes are meant for talking in the "data" domain.
Your "data" classes (the ones you're having trouble with) are meant for talking in the "application" domain.
Let's take an example. Suppose in your MVC application you wanted to show a grid of information about products. You want to see their Name, Price (from the Product table) and their Country of Manufacture and Manufacturer name (from the Manufacturer table). What would you name this class? Product_Manufacturer? What if later on you wanted to add properties from yet a third table such as product discounts? Instead of thinking about these objects in purely the data domain, think about them with regard to your application.
So instead of Product_Manufacturer, what about calling it ProductSummaryItem? Each property of the ProductSummaryItem class would map 1:1 with a field shown in your grid on the UI. Your controller would perform the mapping between the information in the data domain (Product, Manufacturer) with the custom class you'd created in the application domain (ProductSummaryItem).
By doing this, you get some awesome benefits:
1) Writing your views becomes really, really simple. All you have to do to display your data is loop through the ProductSummaryItems and wrap them in <tr> and <td> tags, and you're done. It also allows for simple aggregation. Say, for example, you wanted to add a field called ProductsSoldLastYear to your ProductSummaryItem class. You could do that very simply in your views because all it is to them is another property.
2) Since the view is trivial and there's mapping logic in the controller, it becomes much easier to test the controller's output because it's customized to what the view is going to see.
3) Since the ProductSummaryItem class only has the data it needs, your queries can potentially become much faster, because they only need to query for the fields that would populate your ProductSummaryItem object and nothing else. This overhead can become overbearing as more data-domain objects make up your ProductSummaryItem object.
This pattern is called Model View ViewModel (MVVM) and is hugely popular with MVC as well as in frameworks like WPF.
The argument against MVVM is that you have to somewhat reimplement simple classes for CRUD operations. Fair enough, I guess, but you can use a tool like AutoMapper to help out with things like that. I think you'll find fairly quickly, though, that using the MVVM pattern even for CRUD pays dividends, because before you know it, even with simple classes, you'll start wishing you had extra fields which can easily drive your views.

How can an ASP.NET MVC Action method access sub entities of an aggregate root?

I'm having trouble understanding how one would access the sub entities of an aggregate root. From answers to my previous question I now understand that I need to identify the aggregate roots of my model, and then only setup repositories which handle these root objects.
So say I have an Order object that contains Items. Items must exist within an Order, so the Order is the aggregate root. But what if I want to include, as part of my site, an OrderItem details page? The URL to this page may be something like /Order/ItemDetails/1234, where 1234 is the ID of the OrderItem. Yet this would require that I retrieve an Item directly by ID, and because it is not an aggregate root I should not have an OrderItemRepository that can retrieve an OrderItem by ID.
Since I want to work with OrderItems independently of an Order, does that imply that OrderItem is not actually part of the Order aggregate but another aggregate root?
I don't know your business rules, of course, but I can't think of a case where you would have an orderitem that doesn't have an order. Not saying you wouldn't want to "work with one" by itself, but it still has to have an order, imo, and the order is sort of in charge of the relationship; e.g. you would represent all this by adding or deleting items from an order.
In situations like this, I usually will still require access to the items through the order. It's pretty easy to set up; in URLs I would just do /order/123/item/456. Or, if item ordering is stored / important (which it normally is, at least indirectly via the order of entry), you could do /order/123/item/1 to retrieve the first item on the order.
In the controller, then, I just retrieve the order from the OrderRepository and then access the appropriate item from there.
All that said, I do agree with Arnis that you don't always have to follow this pattern. It's a case-by-case thing, and you should evaluate the tradeoffs before doing it.
In Your case, I would retrieve OrderItem directly by URL /OrderItem/1234.
I personally don't try to abstract persistence (I don't use repository pattern). Also - I don't follow repository per aggregate root principle. But I do isolate domain model from persistence.
Main reason for that is - it's near-impossible to abstract persistence mechanisms completely. It's a leaky abstraction (e.g. try specifying eager/lazy loading for ORM that lives underneath w/o polluting repository API).
Another reason - it does not matter that much in what way You report data. Reporting part is boring and relatively unimportant. Real value of application is what it can do - automation of processes. So it's much more important how Your application behaves, how it manages to stay consistent, how objects interact etc.
When thinking about this problem, it's good to remember Law of Demeter. The point is - it should be applied only if we explicitly want to hide internals. In Your case - we don't want to hide order items.
So, exploiting the fact that entity Ids are globally unique (as opposed to unique only in the Order context), this is just a short-cut and there is nothing wrong with retrieving them directly.
Interestingly enough - this can be pushed forward.
Even behavior encapsulation can and should be loosened up too.
E.g. - it makes more sense to have orderItem.EditComments("asdf") than order.EditOrderItemComments(order.OrderItems[0], "asdf").

Reusing validation attributes in custom ViewModels

When I started using xVal for client-side validation, I was only implementing action methods which used domain model objects as a viewmodel or embedded instances of those objects in the viewmodel.
This approach works fine most of the time, but there are cases when the view needs to display and post back only a subset of the model's properties (for example when the user wants to update his password, but not the rest of his profile data).
One (ugly) workaround is to have a hidden input field on the form for each property that is not otherwise present on the form.
Apparently the best practice here is to create a custom viewmodel which only contains properties relevant to the view and populate the viewmodel via Automapper. It's much cleaner since I am only transferring the data relevant to the view, but it's far from perfect since I have to repeat the same validation attributes that are already present on the domain model object.
Ideally I'd like to specify the Domain Model object as a meta class via a MetaData attribute (this is also often referred to as "buddy class"), but that doesn't work since xVal throws when the metadata class has properties that are not present on the viewmodel.
Is there any elegant workaround to this? I've been considering hacking the xVal sourcecode, but perhaps there is some other way I have overlooked so far.
Thanks,
Adrian
Edit: With the arrival of ASP.NET MVC 2, this is not only a problem related to validation attributes anymore, but it also applies to editor and display attributes.
This is the quintessential reason why your input screens should not be tightly coupled to your model. This question actually pops up here on the MVC tag about 3-4 times a month. I'd dupe if I could find the previous question and some of the comment discussion here is interesting. ;)
The issue you're having is that you're trying to force two different validation contexts of a model into a single model, which fails in a large number of scenarios. The best example is signing up a new user and then having an admin edit a user field later. You need to validate a password on a user object during registration, but you won't show the password field to the admin editing the user details.
The choices for getting around these are all sub-optimal. I've worked on this problem for 3 projects now and implementing the following solutions has never been clean and usually frustrating. I'm going to try and be practical and forget all the DDD/db/model/hotnessofthemonth discussions everybody else is having.
1) Multiple View Models
Having view models that are almost the same violates the DRY principle, but I feel the costs of this approach are really low. Usually violating DRY drives up maintenance costs, but IMHO the costs here are the lowest and don't amount to much. Hypothetically speaking, you don't change the maximum number of characters the LastName field can have very often.
2) Dynamic Metadata
There are hooks in MVC 2 for providing your own metadata for a model. With this approach you could have whatever you're using to provide metadata exclude certain fields based on the current HTTPRequest, and therefore Action and Controller. I've used this technique to build a database-driven permissions system which goes to the DB and tells a subclass of the DataAnnotationsMetadataProvider to exclude properties based on values stored in the database.
This technique is working great at the moment, but the only problem is validating with UpdateModel(). To solve this problem we created a SmartUpdateModel() method which also goes to the database and automatically generates the exclude string[] array so that any non-permissible fields aren't validated. We of course cached this for performance reasons, so it's not bad.
Just want to reiterate that we used [ValidationAttributes] on our models and then superseded them with new rules at runtime. The end result was that the [Required] User.LastName field wasn't validated if the user didn't have permission to access it.
3) Crazy Interface Dynamic Proxy Thing
The last technique I tried was to use interfaces for ViewModels. The end result was that I had a User object that inherited from interfaces like IAdminEdit and IUserRegistration. IAdminEdit and IUserRegistration would both contain DataAnnotation attributes that performed all the context-specific validation, like a Password property on the interfaces.
This required some hackery and was more an academic exercise than anything else. The problem with 2 and 3 is that UpdateModel and the DataAnnotationsAttribute provider needed to be customized to be made aware of this technique.
My biggest stumbling block was that I didn't ever want to send the whole user object to the view, so I ended up using dynamic proxies to create runtime instances of IAdminEdit.
Now, I understand this is a very xVal-specific question, but all of the roads to dynamic validation like this lead to customization of the internal MVC metadata providers. Since all the metadata stuff is new, nothing is that clean or simple to do at this point. The work you'd have to do to customize MVC's validation behavior isn't hard, but it requires some in-depth knowledge of how all of the internals work.
We moved our validation attributes to the ViewModel layer. In our case, this provided a cleaner separation of concerns anyway, as we were then able to design our domain model such that it couldn't get into an invalid state in the first place. For example, Date might be required on a BillingTransaction object, so we don't want to make it Nullable. But on our ViewModel, we might need to expose Date as nullable so that we can catch the situation where the user didn't enter a value.
In other cases, you might have validation that is specific per page/form, and you'll want to validate based on the command the user is trying to perform, rather than set a bunch of stuff and ask the domain model, "are you valid for trying to do XYZ?", when for doing "ABC" those values would be valid.
If ViewModels are hypothetically being forced upon you, then I recommend that they only enforce domain-agnostic requirements. This includes things like "username is required" and "email is formatted properly".
If you duplicate validation from the domain models in the view models, then you have tightly coupled the domain to the UI. When the domain validation changes ("can only apply 2 coupons per week" becomes "can only apply 1 coupon per week"), the UI must be updated. Generally speaking, this would be awful and detrimental to agility.
If you move the validation from the domain models to the UI, you've essentially gutted your domain and placed the responsibility of validation on the UI. A second UI would have to duplicate all the validation, and you have coupled two separate UI's together. Now if the customer wants a special interface to administrate the inventory from their iPhone, the iPhone project needs to replicate all the validation that is also found in the website UI.
This would be even more awful than validation duplication described above.
Unless you can predict the future and can rule out these possibilities, only validate domain-agnostic requirements.
I don't know how this will play for client-side validation, but if partial validation is your issue you can modify the DataAnnotationsValidationRunner discussed here to take in an IEnumerable<string> list of property names, as follows:
public static class DataAnnotationsValidationRunner
{
    public static IEnumerable<ErrorInfo> GetErrors(object instance, IEnumerable<string> fieldsToValidate)
    {
        return from prop in TypeDescriptor.GetProperties(instance).Cast<PropertyDescriptor>()
                                          .Where(p => fieldsToValidate.Contains(p.Name))
               from attribute in prop.Attributes.OfType<ValidationAttribute>()
               where !attribute.IsValid(prop.GetValue(instance))
               select new ErrorInfo(prop.Name, attribute.FormatErrorMessage(string.Empty), instance);
    }
}
I'm gonna risk the downvotes and state that there is no benefit to ViewModels (in ASP.NET MVC), especially considering the overhead of creating and maintaining them. If the idea is to decouple from the domain, that is indefensible. A UI decoupled from a domain is not a UI for that domain. The UI must depend on the domain, so you're either going to have your Views/Actions coupled to the domain model, or your ViewModel management logic coupled to the domain model. The architecture argument is thus moot.
If the idea is to prevent users from hacking malicious HTTP POSTs that take advantage of ASP.NET MVC's model binding to mutate fields they shouldn't be allowed to change, then A) the domain should enforce this requirement, and B) the actions should provide whitelists of updateable properties to the model binder.
Unless your domain is exposing something crazy like a live, in-memory object graph instead of entity copies, ViewModels are wasted effort. So to answer your question, keep domain validation in the domain model.
