ASP.NET MVC - model decision: how to design it? - asp.net-mvc

This is concerning an enterprise application with a very generic database (all objects are identified using data in the database and internationalized/globalized/localized).
Should I make a model for the Repository pattern, then make (generate 1:1) another model for DB access (LINQ to SQL or EF) and use the latter as the repository's data access layer?
Or should I just use the L2S/EF/NHibernate model directly, mapping the model to the DB and exposing the persistence layer?
Will this dual-model idea (repository pattern) cause problems with building dynamic, stackable LINQ search queries, compared with using the L2S/EF model directly?
Please advise.

As long as you are exposing IQueryable objects in your repository, you should have no problem stacking queries in the manner you suggest.
I would be cautious about using Entity Framework for this, since lazy loading is not supported in the way you might expect. Linq to SQL will handle lazy loading without problems.
For more information about lazy loading in the Entity Framework, see: http://www.singingeels.com/Articles/Entity_Framework_and_Lazy_Loading.aspx
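To illustrate the stacking point, here is a minimal sketch of composing filters on top of an IQueryable exposed by a repository. The Product type, IProductRepository interface and ProductSearch class are illustrative names, not anything from the question:

using System;
using System.Linq;

public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int CultureId { get; set; }
    public bool IsActive { get; set; }
}

public interface IProductRepository
{
    IQueryable<Product> Products { get; }
}

public class ProductSearch
{
    private readonly IProductRepository _repository;

    public ProductSearch(IProductRepository repository)
    {
        _repository = repository;
    }

    public IQueryable<Product> Find(string nameFilter, int? cultureId)
    {
        // Each Where call only builds up the expression tree; nothing hits
        // the database until the composed query is enumerated (e.g. ToList()).
        IQueryable<Product> query = _repository.Products.Where(p => p.IsActive);

        if (!string.IsNullOrEmpty(nameFilter))
            query = query.Where(p => p.Name.Contains(nameFilter));

        if (cultureId.HasValue)
            query = query.Where(p => p.CultureId == cultureId.Value);

        return query;
    }
}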

Take a look at Sharp Architecture.
Regarding returning IQueryable from your repository objects: it is my opinion that doing so blurs a proper separation of concerns in your application. I'm all for working with IQueryable within your data access layer, but once you start returning objects as IQueryable you give your controllers and/or views the opportunity to start meddling with data access. That may negatively impact the testability of your application as well.
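For contrast, a rough sketch of the approach this answer argues for: the repository takes the search criteria as parameters and returns a materialized list, so the query executes inside the data access layer and controllers or views cannot extend it. All names here are illustrative:

using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string CountryCode { get; set; }
}

public interface ICustomerRepository
{
    // Returns a fully materialized list rather than an IQueryable,
    // so the query is executed inside the data access layer.
    IList<Customer> FindByCountry(string countryCode, int page, int pageSize);
}

public class CustomerRepository : ICustomerRepository
{
    private readonly IQueryable<Customer> _customers; // e.g. a L2S Table<Customer>

    public CustomerRepository(IQueryable<Customer> customers)
    {
        _customers = customers;
    }

    public IList<Customer> FindByCountry(string countryCode, int page, int pageSize)
    {
        return _customers
            .Where(c => c.CountryCode == countryCode)
            .OrderBy(c => c.Name)
            .Skip((page - 1) * pageSize)
            .Take(pageSize)
            .ToList(); // executes here, inside the repository
    }
}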

Related

MVC Security Violation - Improperly Controlled Modification of Dynamically-Determined Object Attributes

We are developing an MVC 5 application, and when we ran a security scan using Veracode we got the flaw below:
"Improperly Controlled Modification of Dynamically-Determined Object Attributes"
The report added this link as a reference for the fix.
I tried applying the Bind attribute to my controllers' HTTP POST actions and the issue was fixed.
So in ASP.NET MVC, is it mandatory to use the Bind attribute on every POST action to avoid this security violation?
Or can I ignore this flaw, or is there any other way I can address it? Hard-coding and maintaining Bind attributes really gets difficult in real-world applications.
Please share your views.
No, it is not mandatory to use the Bind attribute.
The link you have posted is basically the dirtiest example they could have come up with. They are directly binding an EF model in the controller, which no real-world application would do, and I dislike that Microsoft shows how easily you can go from DB to web by applying the dirtiest, worst-practice patterns without explaining that this is not something you would want to do in real life.
In real life you would create a (View)Model which is tailored to your View. This means the class will ONLY have the properties which you want to accept from the request, therefore you wouldn't really need the Bind attribute in most cases.
EF models are low level classes in your data layer and shouldn't be bound to any controllers IMO.
UPDATE:
Actually, at the top of the linked page they have posted this:
Note: It's a common practice to implement the repository pattern in order to create an abstraction layer between your controller and the data access layer. To keep these tutorials simple and focused on teaching how to use the Entity Framework itself, they don't use repositories. For information about how to implement repositories, see the ASP.NET Data Access Content Map.
However, this is just talking about the repository pattern, which is a good pattern to abstract your data layer, but the DTO which the repository pattern would return is still too low level for binding to a View.
You should create a model which is tailored to your view and in your controller or service layer you can do the infrastructure mapping between the different layers.
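As a hedged sketch of that approach (all type names here are made up for illustration, not taken from the question), the view model carries only the fields the form may post, and the controller maps them onto the EF entity:

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;
using System.Web.Mvc;

// View model: only the fields the form may post. No Id, no role flags,
// no navigation properties, so there is nothing extra to over-post.
public class RegisterStudentViewModel
{
    [Required, StringLength(50)]
    public string FirstName { get; set; }

    [Required, StringLength(50)]
    public string LastName { get; set; }

    [EmailAddress]
    public string Email { get; set; }
}

// EF entity and context (low level, never bound to a request).
public class Student
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Email { get; set; }
    public bool IsAdmin { get; set; }
}

public class SchoolContext : DbContext
{
    public DbSet<Student> Students { get; set; }
}

public class StudentController : Controller
{
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Register(RegisterStudentViewModel model)
    {
        if (!ModelState.IsValid)
            return View(model);

        using (var db = new SchoolContext())
        {
            // Map only the whitelisted view model fields onto the entity;
            // IsAdmin (or anything else) cannot be set from the request.
            db.Students.Add(new Student
            {
                FirstName = model.FirstName,
                LastName = model.LastName,
                Email = model.Email
            });
            db.SaveChanges();
        }
        return RedirectToAction("Index");
    }
}

Because the entity itself is never bound from the request, there is nothing for an over-posted field such as IsAdmin to land on, and no Bind attribute is needed.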

Using multiple DbContexts for each business function

I am using ASP.NET MVC 5 and Entity Framework in my web application. Complex business logic is expected, so achieving separation in the code based on individual business concerns is required. I am using the Code First with an existing database approach. I have created 3 ADO.NET Entity Data Models in the design wizard, so each DbContext has its own model. My issue arose when I created the 3rd DbContext, which shares one table with one of the models I created initially. The error is "MetadataException was unhandled by user code". I believe it is something to do with metadata, but I am not sure how to approach this problem.
What I am trying to achieve: if one web page (one business function) talks to only two tables, why load the whole model into memory? Plus, decoupling will improve maintainability and the flexibility to extend the application without disturbing existing code.
The key to using bounded contexts is the use of Ignore for an entity:
modelBuilder.Ignore<MyUnNecessaryEntity>();
and/or changing the database initializer on a mini context to none:
Database.SetInitializer(new ContextInitializerNone<MyContext>());
I like the idea of only one context being responsible for keeping a set of tables consistent.
The other contexts can access those tables using the same POCO definitions; they can map just a subset of the POCOs. Such a context is reduced and/or has no initializer.
There is a good article worth a read from Julie Lerman on MSDN on the topic of Bounded Contexts.
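Putting the pieces above together, here is a hedged sketch of such a mini context; the type names are illustrative, and it uses EF's built-in null initializer rather than the ContextInitializerNone class referred to above:

using System.Data.Entity;

// POCOs shared with the other contexts.
public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public decimal Total { get; set; }
}

public class OrderLine
{
    public int Id { get; set; }
    public int OrderId { get; set; }
    public string Product { get; set; }
    public int Quantity { get; set; }
}

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// A mini context for one business function: only two tables, no initializer.
public class OrderEntryContext : DbContext
{
    public OrderEntryContext() : base("name=DefaultConnection")
    {
        // This context never creates, seeds or migrates the schema;
        // leave one "master" context responsible for that.
        Database.SetInitializer<OrderEntryContext>(null);
    }

    public DbSet<Order> Orders { get; set; }
    public DbSet<OrderLine> OrderLines { get; set; }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Keep entities owned by other contexts out of this model.
        modelBuilder.Ignore<Customer>();
        base.OnModelCreating(modelBuilder);
    }
}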

Domain Driven Design vs Database Driven Design for an MVC Web application

I am expanding/converting a legacy Web Forms application into a totally new MVC application. The expansion is both in terms of technology and business use case. The legacy application is a well-executed Database Driven Design (DBDD). So, for example, if you have different types of Employees like Operator, Supervisor, Store Keeper etc. and you need to add a new type, you just go and add some rows in a couple of tables and voila, your UI automatically has everything needed to add/update the new type of Employee.
However, the separation of layers is not so good.
The new project has two primary goals
Extensibility (for currently and future pipeline requirements)
Performance
I intend to create the new project replacing the Database Driven Design (DBDD) with a Domain Driven Design (DDD), keeping the extensibility requirement in mind. However, moving from a Database Driven Design to a Domain Driven Design seems to adversely impact the performance requirement when compared to the legacy DBDD application. In the legacy application, any call for data from the UI would interact directly with the database, and any data would be returned in the form of a DataReader or (in some cases) a DataSet.
Now, with a strict DDD in place, any call for data will be routed through the business layer and the data access layer. This means each call would initialize a Business Object and a Data Access Object. A single UI page could need different types of data, and, this being a web application, each page could be requested by multiple users. Also, since an MVC web application is stateless, each request would need to initialize the business objects and data access objects every single time.
So it seems that for a stateless MVC application, DBDD is preferable to DDD for performance.
Or is there a way in DDD to achieve both the extensibility that DDD provides and the performance that DBDD provides?
Have you considered some form of Command Query Separation where the updates go through the domain model yet reads come back as DataReaders? Full-blown DDD is not always appropriate.
"Now with a strict DDD in place any call for data will be routed through the Business layer and the Data Access layer."
I don't believe this is true, and it's certainly not practical. I believe this should read:
Now with strict DDD in place, any call for a transaction will be routed through the business layer and the data access layer.
There is nothing that says you can't call the data access layer directly in order to fetch whatever data you need to display on the screen. It is only when you need to amend data that you need to invoke your domain model that is designed based on its behavior. In my opinion this is a key distinction. If you route everything through your domain model you will have three problems:
Time - it'll take you MUCH longer to implement functionality, for no benefit.
Model Design - your domain model will be bent out of shape in order to meet the needs of querying rather than behavior.
Performance - not because of an extra layer, but because you won't be able to get aggregated data from your model as quickly as you can directly from a query. For example, consider the total value of all orders placed by a particular customer: it's much faster to write a query for this than to fetch all the order entities for the customer, iterate over them and sum (a sketch of this follows below).
As Chriseyre2000 has mentioned, CQRS aims at solving these exact issues.
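Here is the order-total example as a small sketch (names are assumed, not from the question); the first method lets the database aggregate, the second routes the read through entity objects:

using System.Collections.Generic;
using System.Linq;

public class Order
{
    public int CustomerId { get; set; }
    public decimal Total { get; set; }
}

public static class CustomerOrderTotals
{
    // Query side: the database does the aggregation (translates to SELECT SUM(...)).
    public static decimal TotalViaQuery(IQueryable<Order> orders, int customerId)
    {
        return orders
            .Where(o => o.CustomerId == customerId)
            .Sum(o => (decimal?)o.Total) ?? 0m;
    }

    // Via the domain model: every order entity is materialized and iterated
    // in memory just to be added up.
    public static decimal TotalViaEntities(IEnumerable<Order> ordersForCustomer)
    {
        decimal total = 0m;
        foreach (var order in ordersForCustomer)
            total += order.Total;
        return total;
    }
}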
Using DDD should not have significant performance implications in your scenario. What you are worried about seems more like a data access issue. You refer to it as
initialize a Business Object and a Data Access Object
Why is 'initializing' expensive? What data access mechanisms are you using?
DDD with long-lived objects stored in a relational database is usually implemented with ORM. If used properly, ORM will have very little, if any, impact on performance for most applications. And you can always switch back the most performance-sensitive parts of the app to raw SQL if there is a proven bottleneck.
For what it's worth, NHibernate only needs to be initialized once at application startup; after that it uses the same ADO.NET connection pool as your regular data readers. So it all boils down to proper mapping, a sensible fetching strategy, and avoiding classic data access mistakes like the 'n+1 selects' problem.
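As a sketch of the "initialize once" point (the configuration details are illustrative), the expensive part is building the session factory, which happens a single time; opening a session per request is cheap:

using NHibernate;
using NHibernate.Cfg;

public static class SessionFactoryHolder
{
    // Built exactly once, the first time it is needed
    // (you could equally build it from Application_Start in Global.asax).
    public static readonly ISessionFactory Factory = BuildFactory();

    private static ISessionFactory BuildFactory()
    {
        var configuration = new Configuration();
        configuration.Configure(); // reads hibernate.cfg.xml / the config section
        return configuration.BuildSessionFactory();
    }
}

// Per request:
// using (ISession session = SessionFactoryHolder.Factory.OpenSession())
// {
//     var customer = session.Get<Customer>(customerId);
// }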

Working with MVC 2.0 and the Model in a separate assembly

I'm new to MVC, and even though there is a lot (and I do mean a lot) of information out there that is very useful, it's proving very difficult to get a clear understanding of how to achieve my exact requirements with MVC 2.0.
I would like to set up a solution as follows:
Provide a web UI using an MVC 2.0 project.
Use Linq to SQL classes project for data persistence.
I have two separate code modules that will need to access the above Linq to SQL model - so I won't be able to include my Linq to SQL model directly in the MVC project itself.
Also I have a Business Logic layer in front of my Linq to SQL project.
My questions are:
How do I set up the Model part of my MVC application to point to my Linq to SQL project via my BLL?
How do I perform web app validation? Can I use MVC 2.0 Model Validation? If not what are the alternatives?
Finally (and slightly aside) - What is the ViewModel and how does this differ from the Model?
So many questions. But this is an exciting new technology and data access issues aside, everything else I've got to grips with very quickly and I think MVC 2.0 is fantastic.
Thanks for any pointers you can provide.
How do I set up the Model part of my MVC application to point to my Linq to SQL project via my BLL?
Typically you'd use a repository pattern for this. Your controller has a reference to your repository, and the repository returns your domain objects from your database. The MVC app has no knowledge that LINQ to SQL even exists.
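A minimal sketch of that arrangement, with assumed names: the interface and domain object live alongside your BLL, the controller depends only on the interface, and the LINQ to SQL DataContext stays behind an implementation in the data project:

using System.Collections.Generic;
using System.Web.Mvc;

// Domain object and repository interface live in the BLL/domain assembly.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IProductRepository
{
    IList<Product> GetAll();
    Product GetById(int id);
}

// The MVC project only ever sees the interface; the LINQ to SQL DataContext
// sits behind an implementation in the data access assembly, e.g.
// LinqToSqlProductRepository : IProductRepository.
public class ProductController : Controller
{
    private readonly IProductRepository _repository;

    public ProductController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        IList<Product> products = _repository.GetAll();
        return View(products);
    }
}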
How do I perform web app validation? Can I use MVC 2.0 Model Validation? If not, what are the alternatives?
Put view models in your MVC project. These view models may closely align with your domain models but their concern is to be the presentation model. Put your data annotations for validation on these view models - the MVC framework will automatically ensure validation occurs on these view models decorated with data annotations. It's pluggable so you could use alternatives - but with MVC 2, it's baked in fairly well and this includes client side validation.
Finally (and slightly aside) - What is the ViewModel and how does this differ from the Model?
I partially answered this one above. The shape of your domain models may not be the shape you need to display your views - view models are great for bridging this gap. Additionally, even if the shape does match exactly, view models are still a good idea so that you can put UI validation code and other presentation metadata on them (since you do not want anything related to presentation logic on your domain model).
Here's a link for view model patterns.
Hope this helps.
You can add a reference to the objects exposed from your BLL assembly and use them as your Models.
When you want to add validation to classes that are generated, use buddy classes.
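For example (a hedged sketch; Customer and its Name/Email properties are assumed to exist in the generated L2S partial class), the buddy class attaches the validation metadata without touching generated code:

using System.ComponentModel.DataAnnotations;

// The L2S designer generates Customer as a partial class, so validation
// metadata is attached through a "buddy" class instead of editing generated code.
[MetadataType(typeof(CustomerMetadata))]
public partial class Customer
{
}

public class CustomerMetadata
{
    [Required(ErrorMessage = "Name is required")]
    [StringLength(100)]
    public string Name { get; set; }

    [Required]
    [RegularExpression(@"^[^@\s]+@[^@\s]+\.[^@\s]+$", ErrorMessage = "Invalid email")]
    public string Email { get; set; }
}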
A ViewModel is a custom-shaped aggregate of Model data. There is exactly one per View, as the ViewModel's purpose is to surface exactly the data needed by a particular View in a convenient and concise way.
An example might be a View that contains both Order and OrderDetail information. A ViewModel can hold internal references to the repositories and business objects for each type. Properties of the ViewModel merge together the data from these objects.
ViewModels will be useful in your case also because you want your Models to be in a separate assembly. You can apply the DataAnnotations to ViewModel properties for validation. You would make the "raw" business object models internal properties of your ViewModels, and expose public methods to retrieve and persist data.
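A rough sketch of that kind of view model, with illustrative types: it keeps the repository as an internal detail, merges Order and OrderDetail data, and exposes a method to load it:

using System.Collections.Generic;
using System.Linq;

// Business objects coming from the BLL assembly (illustrative).
public class Order
{
    public int Id { get; set; }
    public string CustomerName { get; set; }
}

public class OrderDetail
{
    public int OrderId { get; set; }
    public string Product { get; set; }
    public decimal LineTotal { get; set; }
}

public interface IOrderRepository
{
    Order GetOrder(int orderId);
    IList<OrderDetail> GetDetails(int orderId);
}

// View model that merges Order and OrderDetail data for a single view.
public class OrderViewModel
{
    private readonly IOrderRepository _repository;

    public OrderViewModel(IOrderRepository repository)
    {
        _repository = repository;
    }

    public string CustomerName { get; private set; }
    public IList<OrderDetail> Details { get; private set; }
    public decimal OrderTotal { get; private set; }

    public void Load(int orderId)
    {
        Order order = _repository.GetOrder(orderId);
        CustomerName = order.CustomerName;
        Details = _repository.GetDetails(orderId);
        OrderTotal = Details.Sum(d => d.LineTotal);
    }
}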

MVC: Repositories and Services

I am confused as to the limitations of what gets defined in the repositories and what to leave to the services. Should the repository only create simple entities matching tables from the database, or can it create complex custom objects that combine those entities?
In other words: should the services be making various LINQ to SQL queries against the repository? Or should all the queries be predefined in the repository, with the business logic simply deciding which method to call?
You've actually raised a question here that's currently generating a lot of discussion in the developer community - see the follow-up comments to Should my repository expose IQueryable?
The repository can - and should - create complex combination objects containing multiple associated entities. In domain-driven design, these are called aggregates - collections of associated objects organized into some cohesive structure. Your code doesn't have to call GetCustomer(), GetOrdersForCustomer(), GetInvoicesForCustomer() separately - you just call myCustomerRepository.Load(customerId), and you get back a deep customer object with those properties already instantiated. I should also add that if you're returning individual objects based on specific database tables, that's a perfectly valid approach, but it's not really a repository per se - it's just a data access layer.
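In code, the aggregate-style repository looks roughly like this (a sketch with assumed names, not a definitive interface):

using System.Collections.Generic;

public class Order { public int Id { get; set; } }
public class Invoice { public int Id { get; set; } }

// The aggregate root: one Load() returns the whole graph, already populated.
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
    public IList<Order> Orders { get; set; }
    public IList<Invoice> Invoices { get; set; }
}

public interface ICustomerRepository
{
    // Instead of GetCustomer/GetOrdersForCustomer/GetInvoicesForCustomer,
    // callers make a single call and get a deep customer object back.
    Customer Load(int customerId);
}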
On one hand, there is a compelling argument that Linq-to-SQL objects, with their 'smart' properties and their deferred execution (i.e. not loading Customer.Orders until you actually use it) are a completely valid implementation of the repository pattern, because you're not actually running database code, you're running LINQ statements (which are then translated into DB code by the underlying LINQ provider)
On the other hand, as Matt Briggs' post points out, L2S is fairly tightly coupled to your database structure (one class per table) and has limitations (no many-to-many mappings, for example) - and you may be better off using L2S for data access within your repository, but then mapping the L2S objects onto your own domain model objects and returning those.
A repository should be the only piece of your application that knows anything about your data access technology. So it should not be returning objects generated by L2S at all, but map those properties to model objects of your own.
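A rough sketch of that mapping, with a stand-in for the designer-generated L2S entity (all names are illustrative): the generated type stays inside the repository and callers only ever see the hand-written domain class:

using System.Collections.Generic;
using System.Linq;

// Stand-in for the designer-generated LINQ to SQL entity
// (normally lives in the .dbml designer file).
public class ProductRow
{
    public int ProductId { get; set; }
    public string ProductName { get; set; }
    public decimal UnitPrice { get; set; }
    public bool IsActive { get; set; }
}

// Hand-written domain object: no L2S attributes, EntityRef or EntitySet types.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

public class ProductRepository
{
    private readonly IQueryable<ProductRow> _products; // e.g. dataContext.Products

    public ProductRepository(IQueryable<ProductRow> products)
    {
        _products = products;
    }

    public IList<Product> GetActiveProducts()
    {
        // The generated L2S type never leaves this class; callers only ever
        // receive the domain Product.
        return _products
            .Where(p => p.IsActive)
            .Select(p => new Product
            {
                Id = p.ProductId,
                Name = p.ProductName,
                Price = p.UnitPrice
            })
            .ToList();
    }
}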
If you are using this sort of pattern, you may want to rethink L2S. It generates a data access layer for you, but it doesn't really handle the impedance mismatch; you have to do that manually. If you look at something like NHibernate, that mapping is done in a more robust fashion. L2S is more for a 2-tier application, where you want a quick-and-dirty DAL that you can extend easily.
If you're using LINQ, then my belief is that the repository should be a container for your LINQ syntax. This gives you a level of abstraction between the database access routines and your model object interfacing. As Dylan mentions above, there are other views on the matter; some people like to return the IQueryable so they can continue to query the database at a later point after the repository. There is nothing wrong with either of these approaches, as long as you're clear in your standards for your application. There is more information on the best practices I use for LINQ repositories here.
