In my ASP.NET MVC project I'm using DataAnnotations for validation.
I moved from L2S to the NHibernate ORM and found that NHibernate has its own validator (NHibernate.Validator).
Does it make sense to move to NHibernate.Validator as well?
For example, DataAnnotations has a [Required] attribute while NHibernate.Validator has [NotEmpty, NotNull, NotNullNotEmpty], which makes me wonder which to use.
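To make the comparison concrete, here is roughly how the two styles look side by side on a made-up class (the semantics in the comments are my understanding of the attributes; correct me if I've read them wrong):

```csharp
using System.ComponentModel.DataAnnotations;   // [Required]
using NHibernate.Validator.Constraints;        // [NotNull], [NotEmpty], [NotNullNotEmpty]

// DataAnnotations: one attribute covers the common case.
public class CustomerInput
{
    [Required]            // rejects null (and, by default, empty strings)
    public string Name { get; set; }
}

// NHibernate.Validator: the same intent is split into finer-grained constraints.
public class Customer
{
    [NotNull]             // rejects null only
    public string Code { get; set; }

    [NotEmpty]            // rejects empty strings only
    public string Nickname { get; set; }

    [NotNullNotEmpty]     // rejects both null and empty strings
    public string Name { get; set; }
}
```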
I've used both in production projects and, if you have the time to make the switch, I would highly recommend NHibernate.Validator for a couple of reasons:
1. NHibernate.Validator has a richer set of validation attributes (for example, the handful you mention above).
2. If implemented properly, NHibernate.Validator validations are easier to unit test.
No. 1 wasn't huge for me, and may not be for you, as the set of attributes in DataAnnotations is pretty complete (and you can fall back to a regex, if need be). No. 2, however, was a big deal for me because I wanted to include data validation as part of my Domain Model unit tests, as opposed to testing those rules only through UI/web testing via WatiN or Selenium. Using NHibernate.Validator also allowed me to mix in Domain Model rule validation (property X or Y must have a value, but both cannot be null) without having to go somewhere else to do so.
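To illustrate no. 2, here is a minimal sketch of the kind of domain-level test I mean, using NHibernate.Validator's ValidatorEngine against a made-up Customer entity (NUnit syntax; names are illustrative):

```csharp
using NHibernate.Validator.Engine;
using NUnit.Framework;

[TestFixture]
public class CustomerValidationTests
{
    private readonly ValidatorEngine _validator = new ValidatorEngine();

    [Test]
    public void Customer_without_a_name_is_invalid()
    {
        var customer = new Customer { Name = null };

        // Validate() returns the broken rules; IsValid() is the simple yes/no check.
        InvalidValue[] errors = _validator.Validate(customer);

        Assert.IsFalse(_validator.IsValid(customer));
        Assert.IsTrue(errors.Length > 0);
    }
}
```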
For some basic guidance on using NHibernate.Validator, check out this article: http://nhibernate.info/blog/2009/04/02/nhibernate-validator-and-asp-net-mvc.html. I would also recommend getting the source for S#arp Architecture, Billy McCafferty's great framework for creating DDD-style ASP.NET MVC applications. In particular, check out his implementation of validators and the validator ModelBinder you'll need to create to transfer NHibernate validation errors into MVC ModelErrors. Download the S#arp Architecture source here: http://github.com/codai/Sharp-Architecture.
The bottom line is this: NHibernate.Validator is the more extensible, testable option, but it will take some doing to use it properly. DataAnnotations is baked into the framework and easier to get running with, there's no question about that.
Hope that helps.
Does it make sense to use KnockoutJS view models in combination with ASP.NET MVC 3 or 4? It is not very DRY, is it? I have to write models for EF, view models for the MVC views and view models for Knockout... and I lose a lot of magic, automatic client-side validation for example.
Does it make sense to use MVC at all if one sticks with the MVVM Pattern?
With Knockout Mapping, you can automatically generate a KO view model from your MVC view model.
This is a proper pattern: your models are raw entities, your data. Your views are the UI. And your view models are your models adapted to that specific view.
This may be an unpopular answer, but I don't use ko.mapping to translate my C# POCOs into JS viewmodels. Two reasons, really.
The first is a lack of control. ko.mapping will turn everything into an observable if you let it. This can result in a lot of overhead for fields that just don't need to be observable.
The second reason is extensibility. Sure, ko.mapping may translate my C# POCOs into JS objects with observable properties. That's fine until the point where you want a JS method on the view model, which at some point you invariably will.
In a previous project, I was actually adding extra methods to ko.mapped objects programmatically. At that point, I questioned whether ko.mapping was really creating more problems than it solved.
I take on board your DRY concerns, but then, I have different domain-focused versions of my POCOs anyway. For example, a MyProject.Users.User object served up by a UserController might be very different from a MyProject.Articles.User. The User in the Users namespace might contain a lot of material related to user administration. The User object in the Articles namespace might just be a simple lookup to indicate the author of an article. I don't see this approach as a violation of the DRY principle; rather a means of looking at the same concept in two different ways.
It's more upfront work, but it means I have problem-specific representations of User that do not pollute each other's implementations.
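To make that concrete, the two User shapes might look something like this (purely illustrative names):

```csharp
using System;

namespace MyProject.Users
{
    // Full administrative view of a user, served up by a UserController.
    public class User
    {
        public int Id { get; set; }
        public string UserName { get; set; }
        public string Email { get; set; }
        public bool IsLockedOut { get; set; }
        public DateTime LastLoginUtc { get; set; }
    }
}

namespace MyProject.Articles
{
    // Lightweight lookup whose only job is to name the author of an article.
    public class User
    {
        public int Id { get; set; }
        public string DisplayName { get; set; }
    }
}
```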
And so it is with JavaScript view models. They are not C# POCOs. They're a specific take on a concept suited to a specific purpose: holding and operating on client-side data. While ko.mapping will initially give you what seems to be a productivity boost, I think it is better to hand-craft specific view models designed for the client.
By the way, I use exactly the same MVC3/KnockoutJS strategy as you.
We use Knockout mapping to generate the KO view models as well.
We have a business layer in a separate project that does CRUD, reporting, caching, and some extra "business logic". We aren't going to be using EF or anything similar. Currently we've defined C# classes as MVC models, and our controllers call the business layer to construct the models that are defined in the usual place in our MVC app. These C# models get serialized as JSON for use in our pages.
Since everything we do in the browser is C#/JSON based using Knockout, we aren't using MVC models in the traditional MVC way - everything gets posted as JSON and deserialized to C#, so we don't use MVC model binding, validation, etc. We're considering moving these models to our business layer so they can be tested independently of the web app.
So we'll be left with an MVC app that has controllers and views, but no models - the controllers will get models that are defined in the business layer. We're nervous about departing from the normal MVC structure, but a KO/JavaScript-based client is fundamentally different from the DOM-based client that MVC was originally built around.
Does this sound like a viable way to go?
I'm now working on a project which mixes MVC3 and Knockout, and I have to tell you - it's a mess...
IMO it's nonsense to force certain patterns onto a project just to keep up with the trend.
This is an old topic, but now in 2014 (unfortunately) I still feel this question has huge relevance.
I'm currently working on a project which mixes MVC4 with KnockoutJS. I had some difficulties working out which parts should be handled on which side. We also needed a "SPA-ish" kind of architecture, where each module has its own page, but inside that module there is only AJAX interaction. We also faced some heavy validation scenarios, and needed to provide user (and SEO) friendly URLs inside each module. I ended up with the following concept, which seems to be working well:
Basic MVC and .NET side roles:
Handling authentication and other security stuff.
Implementing the Web API interface for the client-side calls (setting up viewmodels, retrieving and mapping data from the domain, etc.)
Generating Knockout viewmodels from my (pre-existing) C# viewmodels with T4 templates, including knockout validation plugin extensions generated from the .NET validation attributes. (This was inspired by this article.) The generated viewmodels are easily extensible, and the generation can be fine-tuned with several "data annotation"-like custom or built-in attributes (such as DefaultValue, Browsable, DataType, DisplayFormat, etc.), so DRY doesn't get violated (too much). A sketch of such an annotated viewmodel follows this list.
Providing strongly typed, but data-independent, partial view templates for each submodule (each Knockout viewmodel). Because the property names on the C# viewmodels are the same as in the KO models, I can benefit from strongly typed helpers written specifically for KO bindings, etc.
Providing the main view for each module, similarly to the previous point.
Bundling and minification of all scripts and stylesheets.
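As mentioned in the third point above, the T4 generation step reads annotated C# viewmodels. A rough sketch of one such viewmodel (the class and the particular attribute choices are only illustrative):

```csharp
using System;
using System.ComponentModel;
using System.ComponentModel.DataAnnotations;

public class ArticleViewModel
{
    [Required, StringLength(100)]                 // could be turned into knockout validation extenders
    public string Title { get; set; }

    [DataType(DataType.MultilineText)]
    public string Body { get; set; }

    [DefaultValue(false)]                         // could seed the generated observable's initial value
    public bool IsPublished { get; set; }

    [Browsable(false)]                            // could tell the template to skip this property
    public int InternalRevision { get; set; }

    [DisplayFormat(DataFormatString = "{0:d}")]
    public DateTime? PublishDate { get; set; }
}
```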
Basic client-side roles:
Loading the initial state of all viewmodels encapsulated into one module page, taking the whole URL into account with a simple route parser implementation.
Handling history with history.js
Data-binding, user interaction handling.
Posting relevant parts of viewmodels to the server, and processing the returned data (usually updating some viewmodel with it).
I hope this could help anyone else who feels lost in the world of trendy technologies. Please, if anyone has any thought on this, feel free to post any question or suggestion in the comments.
I've been developing a web application with ASP.NET MVC, NHibernate and DDD concepts.
I've developed validations with FluentValidation for my domain classes and it works fine. Now I need a view model to edit an entity in a view, so my question is: do I need to create another validation class to validate my view model? Or what should I do to get around this situation?
I ask because I don't want to break the DRY (don't repeat yourself) principle.
Thanks!
Domain-level validation and view-model validation are quite different IMHO (although they can have lots of overlap).
For instance, it may be perfectly allowable to have a certain field as null in your database, but require its input on certain web forms. In this case you would check for null within the model validation.
It would also be quite normal for multiple client applications to share the same Domain controllers (via WCF for example), but to possess different application validation logic.
If you use DataAnnotations in your view model you get client-side JavaScript validation for free, so as a general rule I always have a view model separate from my domain objects, even if it's a 1:1 mapping - I just use AutoMapper to translate between them. In addition to getting the client-side validation, it also reduces the clutter within the domain validation.
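A minimal sketch of that arrangement (the class names are made up, and the mapping uses the classic static AutoMapper API):

```csharp
using System.ComponentModel.DataAnnotations;
using AutoMapper;

// Domain entity - persistence-level rules live in the domain validation (e.g. FluentValidation) layer.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }        // nullable in the database
}

// View model - form-specific rules, picked up by MVC's DataAnnotations validation.
public class EditCustomerViewModel
{
    public int Id { get; set; }

    [Required, StringLength(50)]
    public string Name { get; set; }

    [Required]                               // required on this form even though the DB allows null
    public string Email { get; set; }
}

public static class MappingConfig
{
    public static void Configure()
    {
        Mapper.CreateMap<Customer, EditCustomerViewModel>();
        Mapper.CreateMap<EditCustomerViewModel, Customer>();
    }
}

// In a controller action:
//   var model = Mapper.Map<Customer, EditCustomerViewModel>(customer);
```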
I'm thinking of two options right now for model-based validation for an ASP.NET project I'm starting:
xVal (Steve Sanderson's project) and the Enterprise module that Stephen Walther uses on this page
I don't really know enough to talk about the preferences as I haven't used either of them yet. Any ideas?
Update: I'm using LINQ to SQL as the ORM right now, but am open to changes.
One difference I see in reviewing the two is that Stephen Walther's blog post describes a library which only does validation on the web server, whereas xVal works with the jQuery validators to do in-browser validation as well. This feature, incidentally, is almost completely automatic.
FluentValidation is nice. NHibernate also has built-in model validation. Then you need something like Scott Guthrie's technique for binding errors to the UI.
I've been using xVal too, and I have integrated it with the IDataErrorInfo interface introduced in MVC RC1. I like it.
Here is a post I wrote which explains a few things.
http://schotime.net/blog/index.php/2009/03/05/validation-with-aspnet-mvc-xval-idataerrorinfo/
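For anyone who hasn't used it, the IDataErrorInfo contract that MVC's DefaultModelBinder picks up is tiny. A bare-bones sketch (the xVal wiring itself is covered in the post above, and the Booking class here is made up):

```csharp
using System.ComponentModel;

public class Booking : IDataErrorInfo
{
    public string ClientName { get; set; }

    // Per-property check - the DefaultModelBinder calls this for each bound property.
    public string this[string columnName]
    {
        get
        {
            if (columnName == "ClientName" && string.IsNullOrEmpty(ClientName))
                return "Client name is required.";
            return null;
        }
    }

    // Object-level error message.
    public string Error
    {
        get { return null; }
    }
}
```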
Hope this helps.
Shameless plug for my own validation library. It's built for jQuery Validation & Enterprise Library and works out of the box for just that. That said, the functionality and code are simple enough to modify/extend if you want.
You could also check out this new technique on LosTechies: http://www.lostechies.com/blogs/hex/archive/2009/06/10/opinionated-input-builders-for-asp-net-mvc-part-5-the-required-input.aspx. I like the fact that your inputs are set up globally, which is really DRY. You could also just skip the client-side validation and do a jQuery AJAX form submit to the server, which performs model validation and business logic all in one place - also DRY :) It also means you will get the product out the door quicker, and you can add client-side validation later as a bonus or to progressively enhance the forms.
Another vote for xVal. It's real sweet. I like using buddy classes and DataAnnotations to do the validation lifting. Besides making things work with Linq2Sql (since you cannot add attributes to its generated fields), buddy classes give you a bit of flexibility to have multiple models share the same validation info. That comes in real handy for those ModelEditData classes that always seem to become necessary.
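For anyone new to the pattern, a buddy class is just a metadata type hung off the generated partial class, since you can't annotate the Linq2Sql-generated properties directly (a typical sketch; names are illustrative):

```csharp
using System.ComponentModel.DataAnnotations;

// The Linq2Sql designer already generates "partial class Product" elsewhere,
// so this partial only adds the metadata hookup.
[MetadataType(typeof(ProductMetadata))]
public partial class Product
{
}

// The "buddy" class: same property names as the entity, carrying the validation attributes.
public class ProductMetadata
{
    [Required, StringLength(80)]
    public string Name { get; set; }

    [Range(0, 9999)]
    public int UnitsInStock { get; set; }
}
```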
Are you using an ORM? If so, which one are you using? I've had a lot of luck, when using Castle ActiveRecord, simply sticking with their default model-level validation. If you're not using that, though, this is probably not too helpful. :-)
There have been plenty of questions on MVC validation but so far the answer has been pretty much inconclusive.
For my needs in particular, I would like something that generates client validation and server validation from the same description, and that allows both model-based attributes as well as code-based declarations for those using an ORM (e.g. LINQ to SQL) exclusively.
I have seen some validation packs that have been whipped up to do this but they tend to introduce a lot of dependencies or are reasonably incomplete (e.g. no support for check boxes or no "higher-level" validation).
Do we begin writing our own validation framework or do we wait in hope that the team may actually release something now that they have the structure for validation in place and jQuery on board?
For those out there actually using MVC in the field now, what are you using for validation?
Are you aware that validation semantics have been added in preview 5? This article from Scott Gu describes how to use it, and this one details the changes in the beta.
I have not personally used it, and it may not fit all your requirements, but I have no doubt it could be extended to behave like you want.
The best way as of the released MVC 1.0 is to use xVal.
You may also need to look at this post on implementing Linq2SQL with xVal in case that hasn't been resolved/doesn't work.
I'm trying to decide what validation approach to take for a new ASP.NET MVC project. (And wow there are plenty of options!)
The project uses NHibernate, so the first thing I considered was the NHibernate Validator (Because of tight integration with NHibernate). However, as far as I can see there are only a couple of benefits to this tight integration:
1) DB Schemas generated by NHibernate will include details of validation (e.g. column lengths will be set to max value allowed in validation). (This is not really of interest to me though, as I generate schemas manually.)
2) NHibernate will throw an exception if you try to save data that doesn't meet the validation specs. (This seems fairly redundant to me, since the data presumably will already be validated by whatever mechanism you choose before saving anyway)
If there are more benefits to NHibernate Validator please let me know!
Other libraries which I've been reading a little about include:
MS DataAnnotations
Castle Validator
Something else?
I've also been thinking about using xVal to provide client-side validation from the same set of rules. However, I hear that ASP.NET MVC v2 will include something similar to xVal (integration with jQuery) out of the box? Will this new included functionality render some of the others redundant?
So, I'm basically asking for people's advice on which direction to take here. I don't want to implement a particular scheme, only to have to rip it out when another one becomes the dominant tech.
What has worked for you? Which option do you think has/will have the edge?
Thanks!
I have been using FluentValidation along with jQuery validation plugin and still cannot find a situation they cannot handle.
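For anyone weighing it up, a FluentValidation validator is just a small class per model, and the fluent rules read nicely (a minimal sketch; the names are made up):

```csharp
using FluentValidation;

public class RegisterViewModel
{
    public string UserName { get; set; }
    public string Email { get; set; }
    public int Age { get; set; }
}

public class RegisterViewModelValidator : AbstractValidator<RegisterViewModel>
{
    public RegisterViewModelValidator()
    {
        RuleFor(x => x.UserName).NotEmpty().Length(3, 30);
        RuleFor(x => x.Email).NotEmpty().EmailAddress();
        RuleFor(x => x.Age).InclusiveBetween(18, 120);
    }
}

// Usage (e.g. in a controller action):
//   var result = new RegisterViewModelValidator().Validate(model);
//   if (!result.IsValid) { /* copy result.Errors into ModelState */ }
```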
I like xVal.
You can implement client and server validation very easily with it. There is also support for column (property) level validation on the entities you want to use it with.
DataAnnotations implemented via buddy classes, plus jQuery client validation.
Make sure you're using MVC Preview 2
You might be interested in this delegate approach. I was, because I didn't like the xVal idea (the solution I'm currently going with) and the fact that it didn't seem to cater for complex validation cases that cross multiple properties of the same or even different class structures.