I am bootstrapping myself through my very first ASP.NET MVC project and could use some suggestions regarding the most appropriate (data-side) model binding solution to pursue. The project is 'small', which really just means I have very limited time for chasing down dead ends, so my learning curve is tight. Thanks heaps for reading!
Background: a simple address search app (ASP.NET MVC 3) that sends user form input to a vendor-sourced server and presents the results returned from the server in a rules-driven format. There is no CRUD-style repository as such; this is an entirely read-only application. The vendor's .NET server API specifies the use of DataTable objects (System.Data.DataTable) for both requests and replies.
Currently: I have a prototype under construction to validate server behavior and inform the application's design. There is a conventional MVC data model class for the input form which works great, auto-binding with the view just as you'd expect. When submitted, the controller passes this input model to my vendor API wrapper, which is currently binding to the request table medieval-style:
public DataTable GetGlobalCandidateAddresses(AddressInputModel input)
{
...
DataRow newRow = dataTable.NewRow();
newRow[0] = input.AddressLine1;
newRow[1] = input.AddressLine2;
newRow[2] = input.AddressLine3;
...
dataTable.Rows.Add(newRow);
...
This works fine; the inputs are fixed and have very light validation requirements. However, if there is a modern framework solution for quickly reflecting a DataTable from my model, that would be peachy.
My real conundrum, however, is on the reply. The table returned by the server contains a variable column-set, with any combination of at least 32 possible unordered fields on a per-transaction basis. Some of the column names contain compiler-invalid characters ("status.description") that won't map literally to a model property name. I will also have a need to dynamically map or combine some of the reply fields in the view model, based on a series of rules (address formats vary considerably by country), so not all fields are 1-to-1.
Just to get the prototype fired up and running, I am currently passing the reply DataTable instance all the way back to a strongly-typed view, which spins it out into the display exactly as is. This is nice for quickly validating the server replies, but is not sufficient for the real app (much less satisfying best practices!).
Question: what is my best tooling approach for binding DataTable rows and columns into a proper model class for display in a view, including some custom binding rules, where no underlying database is present?
Virtually all of the training materials I am consuming discuss classic database repository scenarios. The OnModelCreating override available in Entity Framework seems ideal in some respects, but can you use a DbContext without a DB connection or schema?
If I have to roll my own model binder, are there any tricks I should consider? I've been away from .NET for a while (mostly Java & SQL), so I'm picking up LINQ as I go as well as MVC.
Thanks again for your attention!
Create a POCO display model, AddressDisplay, and do custom object mapping from the data table to the display model. You can use data annotations for formatting, but you can also do that in your custom mapping. You shouldn't need to create a custom model binder.
So create two POCO models, AddressInput and AddressDisplay. You can use data annotations on AddressInput for validation. When AddressInput is posted back, map it to the outbound data table. When the response is received, map the inbound data table to AddressDisplay and return the view to the user.
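A rough sketch of both mappings follows. Hedges: the reply column names (other than "status.description") and the AddressDisplay properties are assumptions, and the request-side helper assumes the vendor table's column names match the AddressInputModel property names.
using System;
using System.Collections.Generic;
using System.Data;
using System.Reflection;

// Hypothetical display model - property names here are assumptions.
public class AddressDisplay
{
    public string AddressLine1 { get; set; }
    public string City { get; set; }
    public string StatusDescription { get; set; }   // fed from the "status.description" column
}

public static class AddressTableMapper
{
    // Reply side: map each row to a display model, tolerating missing columns,
    // since the vendor returns a variable column set per transaction.
    public static IEnumerable<AddressDisplay> ToDisplayModels(DataTable reply)
    {
        foreach (DataRow row in reply.Rows)
        {
            yield return new AddressDisplay
            {
                AddressLine1      = Field(row, "AddressLine1"),
                City              = Field(row, "City"),
                StatusDescription = Field(row, "status.description")
            };
        }
    }

    // Request side: fill the outbound row by reflection, assuming the request
    // table's column names match the AddressInputModel property names.
    public static void FillRequestRow(DataRow row, AddressInputModel input)
    {
        foreach (PropertyInfo prop in typeof(AddressInputModel).GetProperties())
        {
            if (row.Table.Columns.Contains(prop.Name))
                row[prop.Name] = prop.GetValue(input, null) ?? DBNull.Value;
        }
    }

    private static string Field(DataRow row, string column)
    {
        return row.Table.Columns.Contains(column) && row[column] != DBNull.Value
            ? row[column].ToString()
            : null;
    }
}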
Related
I have a separate .dll containing our database model, partial classes, etc., using FluentValidation, which works fine (it's to be used both by desktop barcoding terminals and by our website).
For the desktop apps I can read and display all errors like below
public override int SaveChanges()
{
var errors = this.GetValidationErrors();
if (errors.Any())
{
//handle validation errors
return 0;
}
else
{
return base.SaveChanges();
}
}
For the MVC site I can set up validators on individual models or create data annotations and get those working okay (which is not what I want). The thing I can't get my head around is how I can force my models to map to my entities in a way that lets me display the FluentValidation messages in the views. I don't want to have two separate sets of validation logic to maintain; the barcoding apps and the website must use the same.
Do I have to map my entities directly to the views? I've been led to believe that is a bad thing and not very flexible. Or is there a way of stating that a field in a model maps back to an attribute of one of my entities? Perhaps an annotation of some description.
EDIT:
Just some clarification on the types of validation I'll need.
Most front-end, input-type validation will still stay in the view models (required/length/password matches, etc. - basically all the stuff I can use for client-side validation as well). But there are business-logic validations that I don't want there. Things like: email addresses must be validated before other options can be set; account numbers must be of a particular format based on the name (stuff I can't do with a regex); this particular date is not a valid delivery date; and so on.
I guess one thing I could do is add these to the ValidationSummary somehow and display them separately from the individual fields.
I think you're just looking at the situation wrong. What MVC is all about is separation of concerns. There are things the database needs to know about that your views couldn't care less about, and vice versa. That's why the recommended practice is to use a view model with your views and not the entity itself.
Validation is much the same. Something like the fact that a password confirmation needs to match the password the user entered does not concern the database at all. Or more appropriately, something like a validation on minimum amount of characters in the password does not concern the database, either; it will only ever receive a salted and hashed version of the password. Therefore it would be wrong to put this kind of validation on your entity. It belongs on the view model.
When I first started out using MVC, I used to add all the validation logic to my entity class and then go and repeat that same validation on my view model. Over time, I started to realize that very little of the validation actually needs to be on both. In fact, the large majority of validation should just go on your view model. It acts as a gatekeeper of sorts; if the data is good enough to get through your view model, it's good enough for your database. The types of validation that make sense on your entity are things like Required, and even that is really only necessary on a nullable field that must have a value by the time it gets to your database. Things like DateTimes are non-nullable by default, and EF is smart enough to make them non-nullable on the tables it creates by default. MaxLength is at times worthwhile if there should be a hard limit on the length of a text field in your database, but more often than not, nvarchars work just fine.
Anyways the point is that if you actually sit down and start evaluating the validation on your entity, you'll likely see that most of it is business logic that only applies to how your application works and not to how the data is represented at the database level. That's the key takeaway: on your entity, you only need the validation that's necessary for the database. And that's generally pretty thin.
Just an update: to get the two-tier validation I needed, I had to mark all my entity model classes as IValidatableObject. Then I overrode the Validate method for each class and invoked my FluentValidation validator there, passing back the errors needed. For ModelState.AddModelError I set the key to the field name and it mapped back okay. A bit more code, but it works. If I find a better way to do this I'll update.
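For reference, a minimal sketch of that bridge, assuming a hypothetical Customer entity and a FluentValidation CustomerValidator (neither is from the question); each failure is surfaced under its property name so MVC maps it back to the right field:
using System.Collections.Generic;
using System.ComponentModel.DataAnnotations;
using System.Linq;

public partial class Customer : IValidatableObject
{
    public IEnumerable<ValidationResult> Validate(ValidationContext validationContext)
    {
        // Run the same FluentValidation validator the desktop apps use, then
        // report each failure under its property name so ModelState and
        // ValidationMessageFor can pick it up in the views.
        var result = new CustomerValidator().Validate(this);
        return result.Errors.Select(failure =>
            new ValidationResult(failure.ErrorMessage, new[] { failure.PropertyName }));
    }
}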
I have recently started working as a web developer. I work with ASP .NET MVC 4 and NHibernate.
At my work-place, we are strictly made to use viewmodels to transfer data to and fro between a controller and a view. And the viewmodels are not supposed to contain any object of a model.
I understand that it is a sort of a tier between the controller and the view.
But I find it repetitive and redundant to write a viewmodel class even when we could directly send the model's object to the view (as we could in most cases).
For example, if I want to display an order, I can do this in the controller's action -
return View(Repository.Get<Order>(id));
But instead, I have to write a viewmodel, fill it with the fetched order and then pass it to the view.
So, my question is, what purpose does writing viewmodels serve when we can use the model's object as it is?
For smaller projects, you're right. I hear your argument and sympathise - however there are good reasons for this drudging and repetitive work, especially in larger and more complicated applications:
It's essential to perform all processing within the Controller's action. However, in the example you've given, the Repository.Get method might return a lazily-evaluated IQueryable object, which would mean the DB wouldn't be hit until the View is evaluated. For a variety of reasons this is bad. (A workaround is to call .ToList while still in the controller - see the sketch after these points.)
"A view should not contain any non-presentational logic" and "You should not trust the View" (because a View could be user-provided). By providing a Model object (potentially still connected to an active DatabaseContext) a view can make malicious changes to your database.
A View's data-to-display does not always map 1:1 with its Model's data, for example consider a User Details page:
A User's EF Model object represents its entity in the database, so it probably looks like this: User { UserId, UserName, PasswordHash, PasswordSalt, EmailAddress, CreatedDate }, whereas the fields on a "User details" page are going to be User { UserId, UserName, Password, ConfirmYourPassword, EmailAddress }. Do you see the difference? Ergo, you cannot use the EF User model as the view model; you have to use a separate class.
The dangers of model manipulation: if you let ASP.NET MVC (or any other framework) do the model binding for the incoming HTTP POST request, then (taking the User details example above) a user could reset anyone's password by faking the UserId property value. ASP.NET will rewrite that value during binding, and unless you specifically sanitize it (which will be just as much drudgery as making individual ViewModels anyway) this vulnerability will remain.
In projects with multiple developers working in a team situation, it is important that everything is consistent. It is not consistent to have some pages using bespoke ViewModels but other pages using EF Models, because the team does not share a conscious mind; things have to be documented and generally make sense. For the same reason, a single developer can get away without putting excessive XML documentation in his source code, but in a team situation you'll fall apart if you don't.
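Regarding the lazy-evaluation point above, a minimal sketch of materializing the query while still in the controller (Repository.Query, Order and CustomerId are placeholders, not from the question):
public ActionResult History(int customerId)
{
    // Force the query to execute here, inside the controller (and its unit
    // of work), so the view only ever receives an already-loaded list.
    List<Order> orders = Repository.Query<Order>()
                                   .Where(o => o.CustomerId == customerId)
                                   .ToList();
    return View(orders);
}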
There is a slight workaround in your case I'll share with you, but please note the preconditions:
Your views can be fully trusted
Your views contain only presentational logic
Your application is largely CRUD
Your views correspond 1:1 with each EF entity model (i.e. no JOINs)
Your views only deal with single Simple models for POST forms, not Complex models (i.e. an object graph)
...then you can do this:
Put all one-way, non-form-related data into your ViewData collection, or the ViewBag in MVC 4 (or even a generic ViewData<T> if you're hardcore). This is useful for storing HTML page titles and sharing data with Master pages.
Use your fully-evaluated and loaded EF models as your View<TModel> models.
But use this approach with caution because it can introduce inconsistency.
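A minimal sketch of that workaround, using the Order example from the question (Repository.Get is the question's placeholder):
public ActionResult Details(int id)
{
    // One-way, non-form data (page title etc.) goes into ViewBag/ViewData...
    ViewBag.Title = "Order details";

    // ...and the fully-loaded EF entity is handed straight to a
    // strongly-typed view declared as @model Order.
    Order order = Repository.Get<Order>(id);
    return View(order);
}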
Well, I'm starting to think a pragmatic approach to every problem is required, rather than just subscribing to the purist architectural standards out there. Your app may be required to run in the wild and be maintained by many developers serving a large set of clients, etc., and this may direct or drive your architecture.
The ViewModel is essential when you want a separation of concerns between your DomainModel (DataModel) and the rest of your code.
The fewer dependencies you have between the Model, View and Controller, the easier it will be down the line to make changes to the DomainModel without breaking the interface contracts in the View and Controller. But once again, it's something to be pragmatic about. I like this approach because code refactoring is a big part of system maintenance - refactoring may include a simple spelling fix on a property of a Model, and that change could ripple through the code to the contract level if the dependencies are not separated, for example.
The ViewModel is used to translate the data between your DomainModel and your Views.
A simple example: a datetime stored in Informix has to be translated to a .NET DateTime. The ViewModel is the perfect place to do this translation, and it does not force you to put translation code in all sorts of unwanted places.
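To make that concrete, here is a rough sketch under stated assumptions: the Informix value surfaces as a raw string on a hypothetical Order.RawDeliveryDate property, and "yyyy-MM-dd HH:mm:ss" stands in for whatever format it actually uses:
using System;
using System.Globalization;

// Hypothetical domain type for illustration only.
public class Order
{
    public string RawDeliveryDate { get; set; }
}

public class OrderViewModel
{
    public DateTime? DeliveryDate { get; set; }

    // Translate the Informix value once, here in the view model, instead of
    // scattering parsing code across views and controllers.
    public static OrderViewModel FromDomain(Order order)
    {
        var vm = new OrderViewModel();
        DateTime parsed;
        if (DateTime.TryParseExact(order.RawDeliveryDate, "yyyy-MM-dd HH:mm:ss",
                CultureInfo.InvariantCulture, DateTimeStyles.None, out parsed))
        {
            vm.DeliveryDate = parsed;
        }
        return vm;
    }
}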
One attribute of a good design [of anything] is the ability to replace or modify a part of the implementation with little or no effect on the rest of the parts of the system. But this takes effort and time to achieve - it's up to you to find that practical balance between a perfect design and a design that is just enough.
But yeah, there are many other good reasons to use certain patterns - but the bottom line is this:
Nothing forces you to use ViewModels... ASP.NET MVC won't force you. Take advice from the pragmatist inside you.
If you use the same Models as your ViewModels, your application should be very small and simple and should contain only CRUD operations. But if you are building a large or enterprise application with a larger team (two or probably more developers), you should have concepts like Dependency Injection, Services, Repositories, Façades, Units of Work, Data Access Objects, etc.
To simplify your mapping needs between Models and ViewModels, you can use AutoMapper
https://github.com/AutoMapper/AutoMapper
or install with nuget
Install-Package AutoMapper
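A minimal sketch of the mapping (Order and OrderViewModel are placeholders; the exact API depends on your AutoMapper version - this is the older static configuration style):
// Configure once at application start (e.g. in Application_Start):
Mapper.CreateMap<Order, OrderViewModel>();

// Then in a controller action:
public ActionResult Details(int id)
{
    Order order = repository.Get<Order>(id);   // placeholder repository
    OrderViewModel viewModel = Mapper.Map<Order, OrderViewModel>(order);
    return View(viewModel);
}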
In my opinion, it is essential to have one more layer (ViewModel) on top of the Model layer for complex applications that perform mostly CRUD operations, because it has the following advantages:
It establishes loose coupling between Model and Controller, so that any DataModel-related modifications do not affect the Controller.
If you've implemented your application's ViewModel layer correctly, by providing a maximum level of IoC (Inversion of Control) via DI (Dependency Injection using Unity or other frameworks), etc., it will also help you mock your ViewModels (dependencies) for testing only the controller's logic.
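For example, a rough sketch of that kind of controller test, assuming a hypothetical IOrderService injected into an OrdersController, with Moq and NUnit (none of these types come from the question):
[Test]
public void Details_returns_view_with_expected_view_model()
{
    // Arrange: mock the dependency so only the controller's logic is exercised.
    var service = new Mock<IOrderService>();
    service.Setup(s => s.GetOrder(42)).Returns(new OrderViewModel { Id = 42 });
    var controller = new OrdersController(service.Object);

    // Act
    var result = controller.Details(42) as ViewResult;

    // Assert
    Assert.IsNotNull(result);
    Assert.AreEqual(42, ((OrderViewModel)result.Model).Id);
}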
First of all, I shall clarify that I am using the Database First approach with Web API for a REST service. (Generally developing the old-fashioned way, and using EF only for some features.)
I have a model corresponding to a Database table let's say
Model Client
--id
--owns
--address
--VAT number
--Credit card number
Model Session
--id
--clientID (FK)
--date
Now there are several times when I want to return only part of a model to the client, and sometimes a combination of model data:
{ClientName, Owns, LastSessionDate} or several other combinations.
The only tactic that comes to mind is creating different models for each response (which comes with duplicate validation declarations, etc.).
Or, when the response is only part of a model (not a combination), just nullify the fields I don't want and tell the parser not to render null fields.
Is this the correct way, or am I misconceiving something?
Typically I have a different model class for each response (or screen/view in a web app). Sometimes you can re-use these view models, but it's usually more trouble than it's worth.
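A rough sketch of what that looks like for the {ClientName, Owns, LastSessionDate} case - the entity properties, the db context and the Sessions navigation property are assumptions loosely based on the question:
// Response-specific model: only the fields this endpoint returns.
public class ClientSummaryDto
{
    public string ClientName { get; set; }
    public string Owns { get; set; }
    public DateTime? LastSessionDate { get; set; }
}

// Web API controller action projecting straight into the DTO:
public ClientSummaryDto GetSummary(int id)
{
    return db.Clients
             .Where(c => c.Id == id)
             .Select(c => new ClientSummaryDto
             {
                 ClientName = c.Name,
                 Owns = c.Owns,
                 LastSessionDate = c.Sessions.Max(s => (DateTime?)s.Date)
             })
             .Single();
}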
Possible Duplicate:
ASP.NET MVC - Linq to Entities model as the ViewModel - is this good practice?
Is it OK to use EF entity classes as view models in ASP.NET MVC?
What if the viewmodel is 90% the same as the EF entity class?
Let's say I have a Survey class in my Entity Framework model. It matches 90% of the data required for the view that edits it.
The only difference from what the view model should have is one or several extra properties (required to populate the Survey object, because the EF class cannot be directly mapped onto how its properties are represented - sub-checkboxes, radio groups, etc.).
Do you pass them using ViewData[]? Or do you create a copy of the Survey class (SurveyViewModel) with the additional properties (it should be able to copy data from Survey and back to it)?
Edit:
I'm also trying to avoid using Survey as a SurveyViewModel property. It will look strange when some Survey properties are updated using UpdateModel or the default binder, while others (that cannot be directly mapped to the entity) are updated via custom SurveyViewModel properties in the controller.
I like using Jimmy Bogard's approach of always having a 1:1 relationship between a view and a view model. In other words, I would not use my domain models (in this case your EF entities) as view models. If you feel like you are doing a lot of work mapping between the two, you could use something like AutoMapper to do the work for you.
Some people don't like passing these model classes all the way through to the view, especially as they are classes that are tied to the particular ORM you're currently using. This does mean that you're tightly binding your data framework to your view types.
However, I have done this in several simple MVC apps, using the EF entity type as the Model for some strongly-typed views - it works fine and is very simple. Sometimes simple wins, otherwise you may find yourself putting a lot of effort and code into copying values between near-identical Model types in an app where realistically you'll never move away from EF.
You should always have view models even if they are 1:1. There are practical reasons rather than database layer coupling which I'll focus on.
The problem with using domain, Entity Framework, NHibernate or LINQ to SQL models as your view classes is that you cannot handle contextual validation well. For example, given a user class:
When a person signs up on your site they get a User screen, you then:
Validate Name
Validate Email
Validate Password Exists
When an admin edits a user's name they get a User screen, you then:
Validate Name
Validate Email
Now expose contextual validation via FluentValidation, DataAnnotations Attributes, or even custom IsValid() methods on business classes and validate just Name and Email changes. You can't. You need to represent different contexts as different view models because validation on those models changes depending on the screen representation.
Previously, in MVC 1, you could get around this by simply not posting fields you didn't want validated. In MVC 2 this has changed, and now every part of a model gets validated, posted or not.
Robert Harvey pointed out another good point: how does your Entity Framework User entity display a screen and validate that the password confirmation matches?
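A minimal sketch of splitting those two contexts into separate view models (names are illustrative, not from the question):
using System.ComponentModel.DataAnnotations;

// Registration context: password fields exist and are validated here.
public class RegisterUserViewModel
{
    [Required]
    public string Name { get; set; }

    [Required, DataType(DataType.EmailAddress)]
    public string Email { get; set; }

    [Required]
    public string Password { get; set; }

    // [Compare] comes from System.Web.Mvc in MVC 3, or from
    // System.ComponentModel.DataAnnotations as of .NET 4.5.
    [Required, Compare("Password")]
    public string ConfirmPassword { get; set; }
}

// Admin-edit context: no password fields, so nothing forces them to validate.
public class AdminEditUserViewModel
{
    [Required]
    public string Name { get; set; }

    [Required, DataType(DataType.EmailAddress)]
    public string Email { get; set; }
}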
On bigger projects, I usually split up business objects from data objects as a matter of style. It's a much easier way to let the program and database both change and only affect the control (or VM) layer.
When I started using xVal for client-side validation, I was only implementing action methods which used domain model objects as a viewmodel or embedded instances of those objects in the viewmodel.
This approach works fine most of the time, but there are cases when the view needs to display and post back only a subset of the model's properties (for example when the user wants to update his password, but not the rest of his profile data).
One (ugly) workaround is to have a hidden input field on the form for each property that is not otherwise present on the form.
Apparently the best practice here is to create a custom viewmodel which only contains properties relevant to the view and populate the viewmodel via AutoMapper. It's much cleaner since I am only transferring the data relevant to the view, but it's far from perfect, since I have to repeat the same validation attributes that are already present on the domain model object.
Ideally I'd like to specify the Domain Model object as a meta class via a MetaData attribute (this is also often referred to as "buddy class"), but that doesn't work since xVal throws when the metadata class has properties that are not present on the viewmodel.
Is there any elegant workaround to this? I've been considering hacking the xVal sourcecode, but perhaps there is some other way I have overlooked so far.
Thanks,
Adrian
Edit: With the arrival of ASP.NET MVC 2, this is not only a problem related to validation attributes anymore, but it also applies to editor and display attributes.
This is the quintessential reason why your input screens should not be tightly coupled to your model. This question actually pops up here on the MVC tag about 3-4 times a month. I'd dupe if I could find the previous question and some of the comment discussion here is interesting. ;)
The issue you're having is that you're trying to force two different validation contexts of a model into a single model, which fails in a large number of scenarios. The best example is signing up a new user and then having an admin edit a user field later. You need to validate a password on a user object during registration, but you won't show the password field to the admin editing the user details.
The choices for getting around these are all sub-optimal. I've worked on this problem for 3 projects now and implementing the following solutions has never been clean and usually frustrating. I'm going to try and be practical and forget all the DDD/db/model/hotnessofthemonth discussions everybody else is having.
1) Multiple View Models
Having viewmodels that are almost the same violates the DRY principle, but I feel the costs of this approach are really low. Usually violating DRY amps up maintenance costs, but IMHO the costs here are the lowest and don't amount to much. Hypothetically speaking, you don't change the maximum number of characters the LastName field can have very often.
2) Dynamic Metadata
There are hooks in MVC 2 for providing your own metadata for a model. With this approach you could have whatever you're using to provide metadata exclude certain fields based on the current HTTP request, and therefore the Action and Controller. I've used this technique to build a database-driven permissions system which goes to the DB and tells a subclass of the DataAnnotationsMetadataProvider to exclude properties based on values stored in the database.
This technique is working great atm, but the only problem is validating with UpdateModel(). To solve this we created a SmartUpdateModel() method which also goes to the database and automatically generates the exclude string[] array so that any non-permissible fields aren't validated. We of course cached this for performance reasons, so it's not bad.
Just want to reiterate that we used [ValidationAttributes] on our models and then superseded them with new rules at runtime. The end result was that the [Required] User.LastName field wasn't validated if the user didn't have permission to access it.
3) Crazy Interface Dynamic Proxy Thing
The last technique I tried was to use interfaces for ViewModels. The end result was that I had a User object that inherited from interfaces like IAdminEdit and IUserRegistration. IAdminEdit and IUserRegistration would both contain the DataAnnotations attributes that performed all the context-specific validation, like a Password property on the interfaces.
This required some hackery and was more an academic exercise than anything else. The problem with 2 and 3 is that UpdateModel and the DataAnnotationsAttribute provider needed to be customized to be made aware of this technique.
My biggest stumbling block was that I didn't ever want to send the whole user object to the view, so I ended up using dynamic proxies to create runtime instances of IAdminEdit.
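For illustration, the rough shape of that interface approach (out of the box, MVC's validation does not read attributes from interface members, which is exactly why the custom provider and proxy work described here was needed):
using System.ComponentModel.DataAnnotations;

public interface IUserRegistration
{
    [Required]
    string Name { get; set; }

    [Required]
    string Password { get; set; }
}

public interface IAdminEdit
{
    [Required]
    string Name { get; set; }

    [Required, DataType(DataType.EmailAddress)]
    string Email { get; set; }
}

// One domain object satisfies both context-specific contracts.
public class User : IUserRegistration, IAdminEdit
{
    public string Name { get; set; }
    public string Email { get; set; }
    public string Password { get; set; }
}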
Now I understand this is a very xVal specific question but all of the roads to dynamic validation like this lead to customization of the internal MVC Metadata providers. Since all the metadata stuff is new nothing is that clean or simple to do at this point. The work you'd have to do to customize MVC's validation behavior isn't hard but requires some in depth knowledge of how all of the internals work.
We moved our validation attributes to the ViewModel layer. In our case, this provided a cleaner separation of concerns anyway, as we were then able to design our domain model such that it couldn't get into an invalid state in the first place. For example, Date might be required on a BillingTransaction object, so we don't want to make it nullable there. But on our ViewModel, we might need to expose it as nullable so that we can catch the situation where the user didn't enter a value.
In other cases, you might have validation that is specific per page/form, and you'll want to validate based on the command the user is trying to perform, rather than setting a bunch of properties and asking the domain model, "are you valid for trying to do XYZ?", when those same values would be valid for doing "ABC".
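A small sketch of the BillingTransaction example described above (property names are assumptions):
using System;
using System.ComponentModel.DataAnnotations;

// Domain model: the date is always required, so it simply isn't nullable.
public class BillingTransaction
{
    public DateTime Date { get; set; }
}

// View model: nullable plus [Required], so an empty form field surfaces as a
// validation error instead of silently defaulting to DateTime.MinValue.
public class BillingTransactionViewModel
{
    [Required]
    public DateTime? Date { get; set; }
}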
If ViewModels are hypothetically being forced upon you, then I recommend that they only enforce domain-agnostic requirements. This includes things like "username is required" and "email is formatted properly".
If you duplicate validation from the domain models in the view models, then you have tightly coupled the domain to the UI. When the domain validation changes ("can only apply 2 coupons per week" becomes "can only apply 1 coupon per week"), the UI must be updated. Generally speaking, this would be awful and detrimental to agility.
If you move the validation from the domain models to the UI, you've essentially gutted your domain and placed the responsibility of validation on the UI. A second UI would have to duplicate all the validation, and you have coupled two separate UI's together. Now if the customer wants a special interface to administrate the inventory from their iPhone, the iPhone project needs to replicate all the validation that is also found in the website UI.
This would be even more awful than validation duplication described above.
Unless you can predict the future and can rule out these possibilities, only validate domain-agnostic requirements.
I don't know how this will play with client-side validation, but if partial validation is your issue, you can modify the DataAnnotationsValidationRunner discussed here to take an IEnumerable<string> list of property names, as follows:
public static class DataAnnotationsValidationRunner
{
    // Requires System.ComponentModel, System.ComponentModel.DataAnnotations,
    // System.Linq, and xVal's ErrorInfo type.
    public static IEnumerable<ErrorInfo> GetErrors(object instance, IEnumerable<string> fieldsToValidate)
    {
        // Only inspect the properties the caller asked about, then report every
        // [ValidationAttribute] on them that fails for the current value.
        return from prop in TypeDescriptor.GetProperties(instance).Cast<PropertyDescriptor>()
                            .Where(p => fieldsToValidate.Contains(p.Name))
               from attribute in prop.Attributes.OfType<ValidationAttribute>()
               where !attribute.IsValid(prop.GetValue(instance))
               select new ErrorInfo(prop.Name, attribute.FormatErrorMessage(string.Empty), instance);
    }
}
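Usage from a controller action would look something like this (the view model instance and field names are hypothetical, and this assumes xVal's ErrorInfo exposes the property name and message):
var errors = DataAnnotationsValidationRunner.GetErrors(profileViewModel,
                                                       new[] { "Email", "DisplayName" });
foreach (var error in errors)
{
    ModelState.AddModelError(error.PropertyName, error.ErrorMessage);
}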
I'm gonna risk the downvotes and state that there is no benefit to ViewModels (in ASP.NET MVC), especially considering the overhead of creating and maintaining them. If the idea is to decouple from the domain, that is indefensible. A UI decoupled from a domain is not a UI for that domain. The UI must depend on the domain, so you're either going to have your Views/Actions coupled to the domain model, or your ViewModel management logic coupled to the domain model. The architecture argument is thus moot.
If the idea is to prevent users from hacking malicious HTTP POSTs that take advantage of ASP.NET MVC's model binding to mutate fields they shouldn't be allowed to change, then A) the domain should enforce this requirement, and B) the actions should provide whitelists of updateable properties to the model binder.
Unless your domain is exposing something crazy like a live, in-memory object graph instead of entity copies, ViewModels are wasted effort. So to answer your question: keep domain validation in the domain model.
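For point B, the built-in whitelisting looks roughly like this (User, its property names and the repository call are placeholders):
// Option 1: whitelist bindable properties on the action parameter.
[HttpPost]
public ActionResult Edit([Bind(Include = "UserName,EmailAddress")] User user)
{
    // Only UserName and EmailAddress are bound from the post; UserId is ignored.
    return View(user);
}

// Option 2 (an alternative shape, not meant to coexist with the action above):
// load the entity first, then bind only the whitelisted fields onto it.
[HttpPost]
public ActionResult Update(int id, FormCollection form)
{
    User user = repository.Get<User>(id);
    TryUpdateModel(user, new[] { "UserName", "EmailAddress" });
    return View(user);
}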