ASP.NET application in an n-layered architecture (better if it is a DDD architecture).
In the presentation layer I have a grid (let's say a Telerik RadGrid or a standard GridView) where I need to show a list of products (Product is my entity).
Does it make sense to talk about a LinqDataSource provider for the grid? How can I use it in this scenario? Or should I write the binding operations manually (intercepting the binding events and calling my GetProductList function from my application layer)?
Examples are welcome... thanks.
The LinqDataSource/SqlDataSource/ObjectDataSource controls, in my experience, save little development time and hinder maintainability. In general, my code for binding to a grid would look something like this:
using (ApplicationService appService = new ApplicationService())
{
    RadGrid1.DataSource = appService.GetCollection();
    RadGrid1.DataBind();
}
The application service would call the repository, where the LINQ query would be performed. Some newer techniques such as CQRS, where your SQL query may be as simple as 'SELECT * FROM TABLE', may be suited to the DataSource objects, but I am less familiar with this.
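To make that chain concrete, here is a rough sketch of what such an application service might look like; the ProductRepository and Product names are hypothetical:

using System;
using System.Collections.Generic;

// Hypothetical application service: the presentation layer only sees this
// class, while the repository underneath runs the actual LINQ query.
public class ApplicationService : IDisposable
{
    private readonly ProductRepository _repository = new ProductRepository();

    public List<Product> GetCollection()
    {
        // Materialize the results here so no IQueryable leaks upward.
        return _repository.GetProductList();
    }

    public void Dispose()
    {
        _repository.Dispose();
    }
}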
I watched this video and read this blog post. There is something in the last part of the post that confused me. In that part, Mosh emphasized that a repository should never return IQueryable, because it results in performance issues. But I read something that sounds contradictory.
This is the confusing part:
IEnumerable: While querying data from a database, IEnumerable executes the select query on the server side, loads the data in-memory on the client side and then filters it. Hence it does more work and becomes slow.
IQueryable: While querying data from a database, IQueryable executes the select query on the server side with all the filters. Hence it does less work and becomes fast.
This is another answer about IQueryable vs IEnumerable in the repository pattern.
These are the opposite of Mosh's advice. If they are true, why should we not use IQueryable instead of IEnumerable?
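In code, the difference they describe would look something like this (my minimal sketch, assuming an Entity Framework context with a Users set and a User class with a State property):

using System.Collections.Generic;
using System.Linq;

// IEnumerable: the filter runs in memory; every user row is loaded first.
IEnumerable<User> allUsers = context.Users;
var blockedInMemory = allUsers.Where(u => u.State == 1).ToList();

// IQueryable: the filter is translated into the SQL WHERE clause,
// so only matching rows leave the database.
IQueryable<User> query = context.Users;
var blockedInSql = query.Where(u => u.State == 1).ToList();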
And something else: what about situations where we want to use OData? As you know, it's better to use IQueryable than IEnumerable when querying via OData.
One more thing: is it good or bad to use OData for querying e-commerce website APIs?
Please let me know your opinion.
Thank you.
A repository should never return an IQueryable. But not due to performance; it's due to complexity. A repository is about reducing the complexity in the business layer.
By exposing an IQueryable you increase the complexity in two ways:
You leak persistence knowledge into the business domain. There are things that you must know about the underlying LINQ to SQL provider to write effective queries.
You must design the business entities so that querying them is possible (i.e. they are not pure business entities).
Examples:
var blockedUsers = _repository.GetBlockedUsers();
//vs
var blockedUsers = _dbContext.Users.Where(x => x.State == 1);
var user = _repos.GetById(1);
//and an enum is used internally in the user class
user.Block();
_repos.Update(user);
// vs
var user = _dbContext.Users.FirstOrDefault(x => x.Id == 1);
user.State = 1;
_dbContext.SaveChanges();
By wrapping everything behind your repository, you design your business entities in a way that makes it easy to work with them (child entities, enums, date management, etc.), and you design the repository so that those entities can be stored in an efficient way. No compromises, and code that is more easily maintained.
Regarding OData: Do not use the repository pattern. It doesn't add any value in that case.
If you insist on using IQueryable in your business domain, do not use the repository pattern. It would only complicate things without adding any value.
Finally:
Business logic that uses properly designed repositories is much easier to test (unit tests). Code where LINQ and business logic are mixed must ALWAYS be covered by integration tests (against a DB), since LINQ to SQL differs from LINQ to Objects.
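To illustrate the testing point, here is a minimal sketch (all names hypothetical) of faking a repository in a unit test; this kind of in-memory fake is not possible when the business logic composes its own IQueryable against the DB provider:

using System.Collections.Generic;
using System.Linq;

public interface IUserRepository
{
    IReadOnlyList<User> GetBlockedUsers();
}

// In-memory fake for unit tests: no database and no LINQ to SQL
// translation quirks, just LINQ to Objects over a plain list.
public class FakeUserRepository : IUserRepository
{
    private readonly List<User> _users = new List<User>();

    public void Add(User user)
    {
        _users.Add(user);
    }

    public IReadOnlyList<User> GetBlockedUsers()
    {
        return _users.Where(u => u.IsBlocked).ToList();
    }
}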
I've seen other questions regarding using straight SQL for MVC data access, for example here. Most responses don't answer the question but instead ask:
"Why would you not want to use ORM, EF, Linq, etc?"
My group does custom reporting out of a data warehouse that requires a lot of complex, highly tuned Oracle queries that are manipulated based on user GUI parameter selections.
My newest project is to develop a SQL plugin reporting tool for SQL report developers. They would create a pre-tuned SQL statement for the report, with pseudo-parameters, and enter (and store) it via the GUI. Then the GUI would prompt them for the parameter definitions (name and type) that need to be displayed/requested at run time to ultimately replace the pseudo-variables.
So a SQL statement may look like:
SELECT * FROM orders WHERE order_date BETWEEN '<Date1>' AND '<Date2>'
And the report developer would then, via the GUI, add two parameters named Date1 and Date2, and flag them as date fields.
End users would then select the report, get prompted for Date1 and Date2, and the GUI would do the substitution and run the SQL.
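For illustration, the substitution step might look something like this (a sketch assuming the ODP.NET managed driver; connectionString, date1 and date2 would come from the GUI):

using System.Data;
using Oracle.ManagedDataAccess.Client;

// Rewrite the stored pseudo-parameters as Oracle bind variables instead
// of splicing user input into the SQL string.
string storedSql =
    "SELECT * FROM orders WHERE order_date BETWEEN '<Date1>' AND '<Date2>'";
string bindableSql = storedSql
    .Replace("'<Date1>'", ":Date1")
    .Replace("'<Date2>'", ":Date2");

using (var connection = new OracleConnection(connectionString))
using (var command = new OracleCommand(bindableSql, connection))
{
    command.BindByName = true; // ODP.NET binds by position by default
    command.Parameters.Add(new OracleParameter("Date1", date1));
    command.Parameters.Add(new OracleParameter("Date2", date2));

    var results = new DataTable();
    new OracleDataAdapter(command).Fill(results);
}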
As you can see, I have no choice but to use straight SQL (especially in the 2nd example), and I understand I would have to forgo strong typing in the 2nd as well.
So my questions are:
When it is necessary to bypass EF/LINQ (and there are definitely reasons to), what is the best practice in MVC 4?
And how best to get strong typing when I do know the output columns ahead of time?
And CRUD processing?
Can anyone point me to any examples of non-EF/Linq based coding in this regard?
I think this is a bit of an open-ended question, so here's my 2c. (If tl;dr, go to the last section.)
To me, it's not so much "bypassing EF/Linq" but rather the need to choose the appropriate data persistence library. I have used PetaPoco, ADO.NET, NHibernate/ActiveRecord, Linq2Sql and EF (my main choice) with MVC.
Best practice actually comes from realising that controllers are STILL part of the presentation layer, and that they should not deal with anything other than HttpContext-related operations plus calling business-logic service classes.
I arrange my classes as:
Presentation (MVC) -> Logic Services (Simple classes) -> Data Access (Context wrapped in "repositories").
So I can't quite see how the choice of whether or not to use EF would have any implications for ASP.NET MVC.
For me, Data Access returns data as DTOs, e.g.
public List<FooDto> GetAllFoos()
Whether that method string-concatenates SQL, reads from an XML file, etc., or simply does Context.Foos.ToList() is irrelevant to the rest of the application. All I care about is that Data Access does NOT return me a DataSet with string matching for columns. Those stay in the DAL.
See points 1 and 2. My repositories have CRUD methods on them. How it's done is irrelevant to the rest of the application. Take one of the most basic interfaces for my repository classes:
public interface IFooRepository
{
    void Save(Foo foo);
    Foo Get(int id);
    void Create(Foo foo);
    void Delete(int id);
}
One point not mentioned yet: DI is also crucial. The concrete implementation "FooRepository" may choose to request dependencies such as web services, context classes, etc. Those are, however, again completely irrelevant to the caller, who depends on the interface.
If you still require an example after the 3 points above, drop a comment and I'll whip up something extremely simple using Ado.net.
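In the meantime, here is a rough ADO.NET sketch of what such an implementation might look like; the Foo entity, table and column names are my assumptions:

using System.Data.SqlClient;

// Hypothetical entity; in a real app this would be your existing Foo DTO.
public class Foo
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class AdoNetFooRepository : IFooRepository
{
    private readonly string _connectionString;

    public AdoNetFooRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public Foo Get(int id)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "SELECT Id, Name FROM Foos WHERE Id = @Id", connection))
        {
            command.Parameters.AddWithValue("@Id", id);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read()) return null;
                // Column-name string matching stays inside the DAL.
                return new Foo
                {
                    Id = reader.GetInt32(0),
                    Name = reader.GetString(1)
                };
            }
        }
    }

    public void Save(Foo foo)
    {
        using (var connection = new SqlConnection(_connectionString))
        using (var command = new SqlCommand(
            "UPDATE Foos SET Name = @Name WHERE Id = @Id", connection))
        {
            command.Parameters.AddWithValue("@Id", foo.Id);
            command.Parameters.AddWithValue("@Name", foo.Name);
            connection.Open();
            command.ExecuteNonQuery();
        }
    }

    // Create and Delete follow the same command/parameter pattern.
    public void Create(Foo foo) { /* INSERT INTO Foos (Name) VALUES (@Name) */ }
    public void Delete(int id) { /* DELETE FROM Foos WHERE Id = @Id */ }
}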
===========================================================================
To EF or not to EF.
For me, if starting a new project with new schema, I use EF code first.
Fitting new code to an old database + an old project with no ORM mapping I can reuse = PetaPoco.
===========================================================================
In the context of your project:
The "SQL plugin reporting tool for SQL report developers". "The" sql reporting service? I'm not sure why you need to do anything? Doesn't SSRS already do that? (Enter sql statement/data source, generate form for parameter, etc).
If not I'd question the design decision. IMVHO, the need for users of an application (I don't care if it's "report developer" or w/e) to enter SQL statements is usually stemmed from "architectural astronauts". How do you debug the SQL statement when you enter via GUI as a string? How do you know the tables and the relationships? You either dig into SSMS and come back to gui, or you build complex UI (aka rebuild SSMS).
At the end of day, if you want bazillion reports for gazillion different users, you have to pay for it. I see too many "architectural astronauts" who exposes application to accept SQL statements only to make everyone waste time guessing what should be put into it. No cost saving at all.
OK, if you must do that, well, eh... good luck. Your best bet is to return a DataTable and dump the rows/columns/data onto the view with nested foreach loops, iterating rows then columns.
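A minimal Razor sketch of that dump, assuming the controller passes the DataTable as the model:

@model System.Data.DataTable

<table>
    @foreach (System.Data.DataRow row in Model.Rows)
    {
        <tr>
            @foreach (System.Data.DataColumn column in Model.Columns)
            {
                <td>@row[column]</td>
            }
        </tr>
    }
</table>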
I've been looking at the different approaches to solving the mass assignment issues with MVC as well as doing things the right way.
So far, the two approaches which I think are the best are below (I have also looked at AutoMapper):
1: ValueInjecter - this seems to do the job pretty well, but relies on a third-party library
2: Using the UpdateModel method and binding to a view model interface which exposes a subset of the required properties in your domain model. http://www.codethinked.com/easy-and-safe-model-binding-in-aspnet-mvc
Before I jump in and code my whole application using one of the above practices (without spending a week on each to find out which one I actually like), does anybody have real-world experience with these two methods, and which one would you recommend?
In simple scenarios where you have only text fields matching string/int properties, anything will do just as well.
But when you have properties on the view model that match objects in the model (FKs in the DB), it gets a bit more complex: you might need to pull data from the DB for some individual props and map some property of that object to the view model, stuff like that.
The ProDinner ASP.NET MVC demo application uses ValueInjecter in Mapper classes; there's a PDF where this approach is explained, which you can download here: http://prodinner.codeplex.com/
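For the simple flat case, basic ValueInjecter usage looks something like this (the Person/PersonViewModel pair is hypothetical):

using Omu.ValueInjecter;

// Copies properties with matching names and types from the source object;
// anything more complex (FK lookups, flattening) needs a custom injection.
var viewModel = new PersonViewModel();
viewModel.InjectFrom(person);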
The general consensus from all the reading I've done on this topic is that if you're going from an entity or domain model (from your database) to a view model to show on a form, feel free to use automation tools like AutoMapper, or whatever your preferred tool is, to automate it.
If you are, however, going from an input or form model (the object populated via automatic model binding) back to your entity or domain model, do not automate this. It's a slippery slope to navigate correctly and can result in your automation tool mapping over fields that were not intended/permitted. Everything I've read about this (and various implementations of my own) suggests the best practice is to do this manually/explicitly. It's pretty straightforward, and object initializers can make it very easy to read.
var person = new Person()
{
    PersonId = model.PersonId,
    FirstName = model.FirstName,
    LastName = model.LastName
};

personService.UpdatePerson(person);
Something along those lines.
I've read quite a few Q&As relating to logic in views within an MVC architecture, and in most cases I agree that business logic shouldn't live in a view. Having said this, I constantly question my approach when using Microsoft's MVC Framework in conjunction with the Entity Framework, because the ease of access to an entity's foreign-key relationships ultimately results in me performing LINQ to Entities queries inline within a view.
For example:
If I have the following two entities:
Product ([PK]ProductId, Title, Amount)
Image ([PK]ImageId, [FK]ProductId, ImageTitle, DisplayOrder)
Assuming I have a strongly typed Product view and I want to display the primary image (lowest display order) then I could do something like this in the view:
@{
    Image image = (from i in Model.Image
                   orderby i.DisplayOrder
                   select i).FirstOrDefault();
}
This is a simple example for demonstration purposes, but surely this begins to bend the rules of the MVC architecture. On the other hand, doing this in the controller and then (heaven forbid) jamming it into the ViewBag or ViewData would surely be just as much of a crime, and would become painful to manage for more than a few related classes.
I used to create custom classes for complex models, but it's time-consuming and ugly, and I no longer see the point: the Entity Framework makes it quick and easy to define the view's primary model (in this case a Product) and then retrieve all the peripheral components of the product using LINQ queries.
I'd be interested to know how other people handle this type of scenario.
EDIT:
I also quite often do things like:
@foreach (Image i in Model.Image.OrderBy(e => e.DisplayOrder).ToList())
{
    <img ... />
}
I'm going the 'custom classes for models' way, and I agree it's time-consuming and mundane, hence tools like http://automapper.codeplex.com/ have been created to help you with this task.
Overall, I'm having similar feelings to yours. I read some material saying it's good to have your domain model unrelated to your storage, and a different class for your view model than for your domain model, and then I see that libraries actually seem to 'promote' the easy way (data annotations over your domain classes seem to be simpler than the EF fluent interface, etc.).
At least we've got the choice, I guess!
Model binding
There is also the issue that when you want to POST the model data back and store it in the database, you need to be careful and make sure the MVC model binders bind all fields correctly, else you may lose some data. With custom models for views, it might be simpler.
Validation
MVC gives you a way to validate using attributes. When you use view models, you can freely pollute them with such annotations, because they are view-specific (and validation should be view/controller-action specific as well). When you use EF classes, you would be polluting those classes with unrelated (and possibly conflicting) logic.
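For example, a view-specific model can carry its own validation attributes without touching the EF entity; a hypothetical edit model for the Product entity from earlier:

using System.ComponentModel.DataAnnotations;

// Validation rules live on the view model, not on the persisted entity.
public class ProductEditViewModel
{
    public int ProductId { get; set; }

    [Required]
    [StringLength(100)]
    public string Title { get; set; }

    [Range(0, 100000)]
    public decimal Amount { get; set; }
}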
Firstly, sorry for the lengthy question, but I have to give some underlying information.
We are creating an application which uses ASP.NET MVC, jQuery Templates, Entity Framework and WCF, and we use POCOs as our domain layer. In our application, there is a WCF services layer to exchange data with the ASP.NET MVC application, and it uses Data Transfer Objects (DTOs) from WCF to MVC.
Furthermore, the application uses lazy loading in Entity Framework, with AutoMapper converting domain objects to DTOs in our WCF service layer.
Our backend architecture is as follows: WCF Services -> Managers -> Repository -> Entity Framework (POCO).
In our application we don't use view models, as we don't want another mapping layer for the MVC application, so we use DTOs as view models.
Generally, we have Normal and Lite DTOs for a domain object, such as Customer, CustomerLite, etc. (a Lite object has fewer properties than a Normal one).
Now we are having some difficulties with DTOs, because our DTO structure is becoming more complex, and when we aim for maintainability (with a general hierarchical structure of DTOs) we lose performance.
For example,
We have a Customer view page, and our DTO hierarchy is as follows:
public class CustomerViewDetailsDTO
{
    public CustomerLiteDto Customer { get; set; }
    public OrderLiteDto Order { get; set; }
    public AddressLiteDto Address { get; set; }
}
In this case we don't want some fields of OrderLiteDto for this view, but some other view needs those fields, so to accommodate that we use this structure.
When it comes to auto-mapping, we map CustomerViewDetailsDTO and we get additional data (which is not required for the particular view) through lazy loading (Entity Framework).
My Questions:
Is there any mechanism we can use to improve performance while preserving maintainability?
Is it possible to use AutoMapper with multiple view-specific mapping functions for the same DTO?
First of all, don't use lazy loading, as it will probably result in Select N+1 problems or similar.
Select N + 1 is a data access anti-pattern where the database is accessed in a suboptimal way.
In other words, using lazy loading without eagerly loading collections causes Entity Framework to go to the database and bring the results back one row at a time.
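A minimal sketch of eager loading with EF's Include, assuming a Customers set with an Orders navigation property; the related rows come back in a single query:

using System.Data.Entity; // for the lambda-based Include extension
using System.Linq;

var customers = context.Customers
    .Include(c => c.Orders)
    .ToList();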
Use jsRender for templates, as it's much faster than jQuery Templates:
Render Benchmark
And here's a nice article on how to use it: Reducing JavaScript Code Using jsRender Templates in HTML5 Applications
Generally, we have Normal and Lite DTOs for a domain object, such as Customer, CustomerLite, etc. (a Lite object has fewer properties than a Normal one).
Your Normal DTO is probably a ViewModel, as ViewModels may or may not map one-to-one to DTOs, and ViewModels often contain logic pushed back from the view, or help with pushing data back to the model on a user's response.
DTOs don't have any behavior and their purpose is to reduce the number of calls between tiers of the application.
Is there any mechanism we can use to improve performance while preserving maintainability?
Use one ViewModel per view and you won't have to worry about maintainability. Personally, I usually create an abstract class that serves as a base, and for Edit, Create or List I inherit that class and add properties that are specific to the view. So, for example, the Create view doesn't need a PropertyId (as someone could hijack your POST and post it), so only the Edit and List ViewModels have the PropertyId property exposed, as sketched below.
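A minimal sketch of that base/derived arrangement (names hypothetical):

// Shared fields live on the abstract base.
public abstract class PropertyViewModelBase
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// Create deliberately omits PropertyId, so a forged POST cannot
// target an existing record.
public class PropertyCreateViewModel : PropertyViewModelBase
{
}

public class PropertyEditViewModel : PropertyViewModelBase
{
    public int PropertyId { get; set; }
}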
Is it possible to use AutoMapper with multiple view-specific mapping functions for the same DTO?
You can use AutoMapper to define every map, but the question is how complicated the map will be. Use one ViewModel per view and your maps will be simple to write and maintain. I must point out that it is not recommended to use AutoMapper in data access code, as:
One downside of AutoMapper is that projection from domain objects still forces the entire domain object to be queried and loaded.
Source: Autoprojecting LINQ queries
You can use a set of extensions (limited as of now) to speed up your mapping in data access code: Stop using AutoMapper in your Data Access Code
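With those queryable extensions, the mapping is pushed into the SQL projection instead of materializing full entities first; a rough sketch (the exact API varies by AutoMapper version; CustomerLiteDto as above, mapper configuration assumed):

using System.Linq;
using AutoMapper.QueryableExtensions;

// Only the columns needed by CustomerLiteDto appear in the generated
// SELECT, so nothing else is lazily loaded.
var customers = context.Customers
    .ProjectTo<CustomerLiteDto>(mapperConfiguration)
    .ToList();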
Regards