Can someone please enlighten me about how Symfony2 uses ORM metadata to automatically build validation rules? I'm using Symfony 2.1.
For example, if we have a Foo entity with a required name attribute, we have the following Doctrine metadata (as an annotation):
@ORM\Column(name="name", type="string", length=255, nullable=false)
But then we have to repeat the nullable information with an Assert annotation if we want it validated on the server side:
@Assert\NotBlank()
On the other side, if we have a number attribute of integer type, that type is automatically used as a validation rule without the need for any @Assert annotation.
@ORM\Column(name="number", type="integer", nullable=true)
// @Assert\Type(type="integer") is not needed
Basically, you have two different layers:
the persistence layer (Doctrine2, Propel, ...);
the Validation layer.
The persistence layer adds constraints to the database schema using metadata. In Doctrine2 you can use annotations, while with Propel you describe your database in XML. This metadata is mainly used to generate SQL statements (basically, nullable=false is transformed into SQL NOT NULL).
The validation layer is used to validate your data at the application level. Doctrine2 metadata is used to construct your database, while the Validation layer is used to validate data (coming from your users, for example) before inserting it into the database.
You can add more constraints at the application level using the Validator component, such as business validation rules. And you should not rely on the database to validate data.
As you have two different layers with two different concerns, you can't mix them.
I understand that generating a model with the Database-First method will produce a collection of entities in the ORM that are essentially mapped to the DB tables.
It is my understanding that if you need properties from other entities, or just fields for dropdown lists, you can create a ViewModel and use that class as your model.
I just finished an AppDev course, and if I understand the author correctly, he suggests changing the ORM entities to fit what your ViewModels would look like, hence no need for ViewModels. However, if you do this and then regenerate the ORM from the database, the new entities you put in place as "ViewModels" would be gone. And if you changed the ORM to update the database, your database structure in SQL Server would be "undone".
Please tell me whether my understanding is correct: that I simply need to use a ViewModel in a separate folder to gather the specific classes and/or properties into a single class with just the properties I need, and use that as my model.
Here is the excerpt from the author:
EntityFramework is initially a one to one mapping of classes to tables, but you can create a model that better represents the entities in your application no matter how the data is stored in relational tables.
What I think the author may have been hinting at is the concept of complex models. Let's say, for instance, that your database has a Customer table and an Address table. A one-to-one mapping would create two model classes, one Customer class and one Address class. Using complex model mapping, you could instead create a single Customer class which contains the columns from both the Customer table and the Address table. Thus, instead of Customer.Address.Street1 you could refer simply to Customer.Street1. This is only one of many cases where the conceptual model in code can differ from the resulting data in storage. Check out the excellent blog series Inheritance with EF CodeFirst for examples of different mapping strategies, like Table Per Hierarchy (TPH), Table Per Type (TPT), and Table Per Concrete Class (TPC). Note that even though those examples are CodeFirst, they still apply to Entity Framework when the initial models are generated from a database.
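In Code First terms, that Customer/Address flattening is what EF calls "entity splitting". Here is a minimal sketch, assuming hypothetical table and column names:

    using System.Data.Entity;

    // One Customer class whose columns physically live in two tables.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Street1 { get; set; }  // stored in the Address table
        public string City { get; set; }     // stored in the Address table
    }

    public class ShopContext : DbContext
    {
        public DbSet<Customer> Customers { get; set; }

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            // Split the Customer entity across the Customer and Address tables.
            modelBuilder.Entity<Customer>()
                .Map(m => { m.Properties(c => new { c.Id, c.Name }); m.ToTable("Customer"); })
                .Map(m => { m.Properties(c => new { c.Id, c.Street1, c.City }); m.ToTable("Address"); });
        }
    }

With this mapping you refer to Customer.Street1 directly, and EF joins the two tables behind the scenes.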
In general, if you use Database-First and then never modify the resulting entities, you will always have a class in code for each table in the database. However, the power of Entity Framework is that it allows you to work with your entities more efficiently by allowing these hybrid mappings, freeing you to think about your code without the extra burden of your classes having to abide by rigid SQL expectations.
Edit
When the Database-First or Model-First entities are generated, they are purposely generated as partial classes. You can create your own partials which extend the classes that are generated from Entity Framework without modifying the existing class files. If the models are re-generated, your partial classes will still be intact. Granted, using partials means that you have the Entity Framework default behaviors as well as your extended behaviors, but this isn't usually a huge issue.
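For example (a sketch; the Customer entity and property names are assumptions), the generated file and your own file might look like this:

    // Customer.cs -- generated by Entity Framework; overwritten on regeneration.
    public partial class Customer
    {
        public int Id { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    // Customer.Extensions.cs -- your own file; survives regeneration untouched.
    public partial class Customer
    {
        // A computed convenience property for views; not mapped to any column.
        public string FullName
        {
            get { return FirstName + " " + LastName; }
        }
    }

Both halves compile into a single Customer class, so your additions behave exactly like generated members.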
Another option is to modify the T4 (.tt) templates which Entity Framework uses to generate your models, ensuring that your models are always regenerated in the same state.
Ultimately, the main idea is that even though the default behavior of Entity Framework is to map the database to classes 1:1, there are other options, and you are never forced into something that isn't efficient for your project.
I was looking for a good way to organize validation rules within the BeforeSaveEntity method, and I found this comment in the TodoContextProvider.cs file of the BreezeMvcSPATemplate project:
// A second DbContext for db access during custom save validation.
// "this.Context" is reserved for Breeze save only!
Why can't this.Context be used?
Excellent question. The answer isn't obvious and it's not easy to cover briefly. I will try.
The EFContextProvider takes the save data from the client and (ultimately) turns these data into entities within the EFContextProvider.Context. When the save is approved, the EFContextProvider calls the SaveChanges method on this EF Context and all of its contents are saved as a single transaction.
There are two potential problems.
1. Data integrity and security
Client data can never be fully trusted. If you have business rules that limit what an authorized user can see or change, you must compare the client-derived entity to the corresponding entity from the database.
An EF Context cannot contain two copies of the "same entity". It can't hold two entities with the same key. So you can't use the EFContextProvider.Context both to fetch the clean copy from the database and to hold the copy with changes.
You'll need a second Context to get the clean copy and you'll have to write logic to compare the critical values of the entity-to-save in the EFContextProvider.Context with the values of the clean entity in the second Context.
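A minimal sketch of that comparison inside a Breeze ContextProvider; the Order entity, its OwnerId column, and the TodoContext type are assumptions, not part of the template:

    protected override bool BeforeSaveEntity(EntityInfo entityInfo)
    {
        var order = entityInfo.Entity as Order;  // hypothetical entity type
        if (order != null && entityInfo.EntityState == EntityState.Modified)
        {
            // Second DbContext: fetch the clean copy without touching this.Context.
            using (var readContext = new TodoContext())
            {
                var clean = readContext.Orders
                                       .AsNoTracking()
                                       .Single(o => o.Id == order.Id);
                // Hypothetical business rule: a user may not reassign an order's owner.
                if (clean.OwnerId != order.OwnerId)
                {
                    throw new InvalidOperationException("Cannot change the order's owner.");
                }
            }
        }
        return true; // include this entity in the save
    }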
2. Cross-entity validation
Many validations do not require comparison with a clean entity.
For example, the out-of-the-box System.ComponentModel.DataAnnotations attributes, such as Required and MaxLength, are simple data validations that determine whether an entity is self-consistent. Either there is a value or there is not. The value is less than the maximum length or it is not. You don't need a comparison entity for such tests.
You could write your own custom System.ComponentModel.DataAnnotations attributes that compare data values within a single entity. You might have a rule that says that order.InvoiceDate must be on or before order.ShipDate. That is also a self-consistency test, and you won't need a comparison entity for that one either.
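A sketch of such a self-consistency attribute (the Order type and its date properties are assumptions):

    using System;
    using System.ComponentModel.DataAnnotations;

    // Class-level rule: the order is valid only if its invoice date
    // is on or before its ship date. No second entity is needed.
    [AttributeUsage(AttributeTargets.Class)]
    public class InvoiceOnOrBeforeShipAttribute : ValidationAttribute
    {
        public override bool IsValid(object value)
        {
            var order = value as Order;     // hypothetical entity type
            if (order == null) return true; // not our concern; other rules apply
            return order.InvoiceDate <= order.ShipDate;
        }
    }

    [InvoiceOnOrBeforeShip]
    public class Order
    {
        public DateTime InvoiceDate { get; set; }
        public DateTime ShipDate { get; set; }
    }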
If these are the only kinds of validation you care about - and you're using an EF DbContext - you can let EF run them for you during its save processing. You won't need a second Context.
But cross-entity validations are another story. In a cross-entity validation, entity 'A' is valid only when some condition is true for entity 'B' (and perhaps 'C', 'D', 'E', ...). For example, you may require that an order item have a parent order that is already in the database.
There is an excellent chance that the parent order is not in the EFContextProvider.Context at the time you are validating the order item.
"No problem," you say. "I'll just navigate to the parent with someItem.Order."
No you cannot. First, it won't work because lazy loading is disabled for the EFContextProvider.Context. The EFContextProvider disables lazy loading mostly to break circular references during serialization but also to prevent performance killing "n+1" bugs on the server.
You can get around that by loading any entity or related entities at will. But then you hit the second problem: the entity you load for validation could conflict with another entity that you are trying to save in this batch.
The EFContextProvider doesn't populate its Context all at once. It starts validating the entities one-by-one, adding them to the Context as it goes.
Continuing our example, suppose we had loaded the parent order for someItem during validation. That order is now in EFContextProvider.Context.
The save process continues to the next entity and ... surprise, surprise ... the next entity happens to be the very same parent order. The EFContextProvider tries to attach this copy to the Context which already has a copy (the one we just loaded) ... it can't.
There's a conflict. Which of the two orders belongs in the EFContextProvider? The clean copy we just loaded for validation purposes ... or the one that came from the client with modifications to be saved?
Maybe you think you know the answer. Maybe I agree. But the fact is, the EFContextProvider throws an exception because there is already an order with that key in the Context.
Conclusion
If all your validations are self-consistency checks, the EFContextProvider.Context is all you need. You won't have to create a second Context.
But if you have data security concerns and/or business logic that involves other entities, you need a second Context ... and you'll need sufficient EF skills to use that Context.
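Continuing the order-item example, a cross-entity check with a second Context might look like this inside the same BeforeSaveEntity override (OrderItem, its OrderId property, and TodoContext are assumed names):

    var item = entityInfo.Entity as OrderItem;
    if (item != null && entityInfo.EntityState == EntityState.Added)
    {
        using (var readContext = new TodoContext())
        {
            // Query the database directly instead of navigating item.Order,
            // so nothing extra gets attached to the save Context.
            bool parentExists = readContext.Orders.Any(o => o.Id == item.OrderId);
            if (!parentExists)
            {
                throw new InvalidOperationException(
                    "An order item must belong to an existing order.");
            }
        }
    }

In a real provider you might override BeforeSaveEntities instead and inspect the incoming save map, since the parent order could be arriving in the same save batch rather than already sitting in the database.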
This is not a limitation of Breeze or the Entity Framework. Non-trivial business logic demands comparable server-side complexity no matter what technology you choose. That's the nature of the beast.
What advantage does the FluentValidation library have over the .NET System.ComponentModel.DataAnnotations?
Does it offer more flexible validation, given that rules are not statically annotated on the property, such as validating a property depending on another property's value?
I'm not sure about data annotations, but we've used FluentValidation for the validation part of our business logic. And easy integration with ASP.NET MVC is a nice bonus :)
It supports a lot of built-in rules, error message localization, using object data in error messages, custom validation methods, conditional validation (applying some rules only if the object's data matches a condition), rule sets (applying a named set of rules), and validation of aggregated objects and collections. Property names are compatible with ASP.NET MVC property names, and so on.
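A small sketch of what a validator looks like, including a conditional rule; the Customer type and its properties are assumptions:

    using FluentValidation;

    public class Customer
    {
        public string Name { get; set; }
        public bool HasDiscount { get; set; }
        public decimal DiscountRate { get; set; }
    }

    public class CustomerValidator : AbstractValidator<Customer>
    {
        public CustomerValidator()
        {
            RuleFor(c => c.Name).NotEmpty().WithMessage("Please enter a name.");
            // Conditional rule: only applied when the object matches the condition.
            RuleFor(c => c.DiscountRate).GreaterThan(0m).When(c => c.HasDiscount);
        }
    }

    // Usage:
    // var result = new CustomerValidator().Validate(customer);
    // if (!result.IsValid) { /* inspect result.Errors */ }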
I'm trying to implement a very granular security module in an ASP.NET MVC 3 app where only certain users can edit certain columns on records in a table. I can imagine that the update SQL statement's list of columns would only include the columns that the user has the right to change. The thing is, I'm planning to use an ORM like NHibernate. I'm wondering if NHibernate provides a way to determine at runtime which properties of a model should be part of an UPDATE. Or is my only option, on the POST method, to get the model again from the database, set only the properties that the user is allowed to set, and then finally Save the model? Also, is this a good way to handle my requirement of granular security?
Would dynamic-update and dynamic-insert be enough?
dynamic-update (optional, defaults to false): Specifies that UPDATE SQL should be generated at runtime and contain only those columns whose values have changed.
dynamic-insert (optional, defaults to false): Specifies that INSERT SQL should be generated at runtime and contain only the columns whose values are not null.
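If you happen to map with Fluent NHibernate rather than hbm.xml (an assumption; in XML these are simply the dynamic-update and dynamic-insert attributes on the class element), the flags look like this, with Person as a hypothetical entity:

    using FluentNHibernate.Mapping;

    public class PersonMap : ClassMap<Person>
    {
        public PersonMap()
        {
            Table("Person");
            Id(x => x.Id);
            Map(x => x.Name);
            Map(x => x.Email);
            DynamicUpdate();  // UPDATE contains only the columns that actually changed
            DynamicInsert();  // INSERT contains only the columns that are not null
        }
    }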
Otherwise it might be possible with events or interceptors, but I've never used them, so I can't say exactly.
I am bootstrapping myself through my very first ASP MVC project, and could use some suggestions regarding the most appropriate (data-side) model binding solution I should pursue. The project is 'small', which really just means I have very limited time for chasing down dead-ends, which cramps my available learning curve. Thanks heaps for reading!
Background: a simple address search app (ASP.NET MVC 3) that sends user form input to a vendor-sourced server, and presents the results returned from the server in a rules-driven format. There is no CRUD-style repository as such; this is an entirely read-only application. The vendor's .NET server API specifies the use of DataTable objects (System.Data.DataTable) for both requests and replies.
Currently: I have a prototype under construction to validate server behavior and inform the application's design. There is a conventional MVC data model class for the input form which works great, auto-binding with the view just as you'd expect. When submitted, the controller passes this input model to my vendor API wrapper, which is currently binding to the request table medieval-style:
public DataTable GetGlobalCandidateAddresses(AddressInputModel input)
{
    ...
    DataRow newRow = dataTable.NewRow();
    newRow[0] = input.AddressLine1;
    newRow[1] = input.AddressLine2;
    newRow[2] = input.AddressLine3;
    ...
    dataTable.Rows.Add(newRow);
    ...
}
This works fine; the inputs are fixed and have very light validation requirements. However, if there is a modern framework solution for quickly reflecting a DataTable from my model, that would be peachy.
My real conundrum, however, is on the reply. The table returned by the server contains a variable column set, with any combination of at least 32 possible unordered fields on a per-transaction basis. Some of the column names contain characters that are invalid in C# identifiers ("status.description"), so they won't map literally to a model property name. I will also need to dynamically map or combine some of the reply fields in the view model, based on a series of rules (address formats vary considerably by country), so not all fields are 1-to-1.
Just to get the prototype fired up and running, I am currently passing the reply DataTable instance all the way back to a strongly-typed view, which spins it out into the display exactly as is. This is nice for quickly validating the server replies, but is not sufficient for the real app (much less satisfying best practices!).
Question: what is my best tooling approach for binding DataTable rows and columns into a proper model class for display in a view, including some custom binding rules, where no underlying database is present?
Virtually all of the training materials I am consuming discuss classic database repository scenarios. The OnModelCreating override available in Entity Framework seems ideal in some respects, but can you use a DbContext without a DB connection or schema?
If I have to roll my own model binder, are there any tricks I should consider? I've been away from .NET for a while (mostly Java & SQL), so I'm picking up LINQ as I go as well as MVC.
Thanks again for your attention!
Create a POCO display model, AddressDisplay, and do custom object mapping from the data table to the display model. You can use data annotations for formatting, but you can also handle that in your custom mapping. You shouldn't need to create a custom model binder.
So create two POCO models, AddressInput and AddressDisplay; you can use data annotations on AddressInput for validation. When AddressInput is posted back, map it to the outbound data table. When the response is received, map the inbound data table to AddressDisplay and return the view to the user.
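A sketch of that inbound mapping, including a column whose name ("status.description", from the question) is not a valid identifier; the other column and property names are assumptions:

    using System.Collections.Generic;
    using System.Data;

    public class AddressDisplay  // hypothetical display model
    {
        public string Street { get; set; }
        public string Country { get; set; }
        public string StatusDescription { get; set; }
    }

    public static class AddressMapper
    {
        public static List<AddressDisplay> ToDisplayModels(DataTable reply)
        {
            var results = new List<AddressDisplay>();
            foreach (DataRow row in reply.Rows)
            {
                results.Add(new AddressDisplay
                {
                    // Guard each column, since the reply's column set varies per transaction.
                    Street = reply.Columns.Contains("street")
                        ? row["street"] as string : null,
                    Country = reply.Columns.Contains("country")
                        ? row["country"] as string : null,
                    // Column names that aren't valid C# identifiers still map fine by string key.
                    StatusDescription = reply.Columns.Contains("status.description")
                        ? row["status.description"] as string : null
                });
            }
            return results;
        }
    }

The rules-driven combining of fields (country-specific address formats) would live in the same mapper, keeping the view model clean and the view strongly typed.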