Can you map the results of a manual SQL query to objects in Entity Framework? - asp.net-mvc

In LINQ, you can write a manual SQL query, take the results from it and have LINQ "map" those into the appropriate properties of your Model classes (or at least, I'm pretty sure I read you can do that).
Is it possible to do something like that in Entity Framework?
I have a web app that's using EF, and its CPU usage is ridiculously high compared to the traffic it gets. Profiling it on my machine, all that time is (predictably) spent in the DB functions, and the largest portion of that time (> 85%) is spent by EF "generating" SQL and doing other work before actually executing a query.
So my reasoning is that I can just go in and hardcode the SQL queries, but still use my populated Model properties in my view.
Is this possible? If so, how would I do it?
Thanks!
Daniel

So what you want to do is hydrate an object from an IDataReader? It's pretty easy to write code to do this (hint: reflection! or you can get fancy and use a member initialization expression) or you can Google for myriad existing implementations on the Internet.
You can do this within EF using ObjectContext.ExecuteStoreQuery<T> or ObjectContext.Translate<T> if you already have a DbDataReader.
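For illustration, a minimal sketch of both calls, assuming an existing ObjectContext (the ClientSummary class, table, and column names here are made up, not from the question):

// Plain class to hydrate; result columns are matched to properties by name.
public class ClientSummary
{
    public string Name { get; set; }
    public string Company { get; set; }
}

// EF 4.x ObjectContext: run raw SQL and materialize the results.
List<ClientSummary> summaries = objectContext
    .ExecuteStoreQuery<ClientSummary>("SELECT Name, Company FROM Client")
    .ToList();

// Or, if you already have a DbDataReader from your own DbCommand:
using (DbDataReader reader = command.ExecuteReader())
{
    List<ClientSummary> fromReader = objectContext.Translate<ClientSummary>(reader).ToList();
}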

ObjectContext.ExecuteStoreQuery<T> in EF 4.0
As #Jason said, you can:
IEnumerable<MiniClient> results =
    myContext.ExecuteStoreQuery<MiniClient>("select name, company from client");
Reference: https://msdn.microsoft.com/en-us/library/dd487208.aspx
DbContext.Database.SqlQuery<T> in EF 4.1+
In Entity Framework 4.1+, DbContext is preferable to ObjectContext, so you'd better use:
MyDbContext myContext = new MyDbContext();   // your DbContext-derived class
IEnumerable<MiniClient> results =
    myContext.Database.SqlQuery<MiniClient>("select name, company from client");
Reference: https://msdn.microsoft.com/en-us/library/jj592907(v=vs.113).aspx
dynamic and anonymous types
Too lazy to create a projection class like MiniClient? Then you can use an anonymous type and the dynamic keyword:
IEnumerable<dynamic> results =
myContext.Clients.Select( c => new {Name = c.Name, Firstname = c.Firstname});
Note: in all the samples above, MiniClient is not an entity of the DbContext
(i.e. not a DbSet<T> property).
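For completeness, the MiniClient used in these samples would just be a plain projection class along these lines (a sketch; it only needs properties whose names match the selected columns):

// Plain projection class, deliberately NOT registered as a DbSet<T> on the context.
public class MiniClient
{
    public string Name { get; set; }
    public string Company { get; set; }
}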

Related

IEnumerable vs IQueryable in OData and Repository Pattern

I watched this video and read this blog post. Something in the last part of the post confused me. In that part, Mosh emphasized that a repository should never return IQueryable, because it results in performance issues. But I have read something that sounds contradictory.
This is the confusing part:
IEnumerable: While querying data from the database, IEnumerable executes the select query on the server side, loads the data in-memory on the client side, and then filters it. Hence it does more work and becomes slow.
IQueryable: While querying data from the database, IQueryable executes the select query on the server side with all the filters. Hence it does less work and becomes fast.
This is another answer about IQueryable vs IEnumerable in the Repository pattern.
These are the opposite of Mosh's advice. If they are true, why shouldn't we use IQueryable instead of IEnumerable?
And something else: what about situations where we want to use OData? As you know, it's better to use IQueryable than IEnumerable when querying through OData.
One more thing: is it good or bad to use OData for querying the APIs of an e-commerce website?
Please let me know your opinion.
Thank you
A repository should never return an IQueryable. But not because of performance; it's because of complexity. A repository is about reducing complexity in the business layer.
By exposing an IQueryable you increase the complexity in two ways:
You leak persistence knowledge into the business domain. There are things you must know about the underlying LINQ to SQL provider to write efficient queries.
You must design the business entities so that querying them is possible (i.e. they are not pure business entities).
Examples:
var blockedUsers = _repository.GetBlockedUsers();
//vs
var blockedUsers = _dbContext.Users.Where(x => x.State == 1);
var user = _repos.GetById(1);
//and an enum is used internally in the user class
user.Block();
_repos.Update(user);
// vs
var user = _dbContext.Users.FirstOrDefault(x => x.Id == 1);
user.State = 1;
_dbContext.SaveChanges();
By wrapping everything behind your repository, you can design your business entities in a way that makes them easy to work with (child entities, enums, date management, etc.), and design the repository so that those entities can be stored in an efficient way. No compromises, and code that is more easily maintained.
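As a rough sketch of what such a wrapping repository contract could look like (the interface and method names here are illustrative, not from the original answer):

// Business-facing repository: callers never see IQueryable or EF types.
public interface IUserRepository
{
    User GetById(int id);
    IReadOnlyList<User> GetBlockedUsers();
    void Update(User user);
}

// Business code stays free of persistence details:
var user = userRepository.GetById(1);
user.Block();                  // state/enum handling stays inside the entity
userRepository.Update(user);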
Regarding OData: Do not use the repository pattern. It doesn't add any value in that case.
If you insist on using IQueryable in your business domain, do not use the repository pattern. It would only complicate things without adding any value.
Finally:
Business logic that uses properly designed repositories is much easier to unit test. Code where LINQ and business logic are mixed must ALWAYS be covered by integration tests (against a DB), since LINQ to SQL differs from LINQ to Objects.
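For example, a hand-rolled fake makes the unit-testing point concrete (a sketch; the User members assumed here, Id and IsBlocked, are invented for illustration):

// In-memory fake used by unit tests: no database, no LINQ-to-SQL translation quirks.
public class FakeUserRepository : IUserRepository
{
    private readonly Dictionary<int, User> _users = new Dictionary<int, User>();

    public User GetById(int id)
    {
        return _users[id];
    }

    public IReadOnlyList<User> GetBlockedUsers()
    {
        return _users.Values.Where(u => u.IsBlocked).ToList();
    }

    public void Update(User user)
    {
        _users[user.Id] = user;
    }
}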

ASP.NET MVC & Entity Framework stored procedures

My ASP.NET MVC web app needs to get data from existing database using T-SQL stored procedures. I've seen tutorials on how to do that using the code-first approach (basically for a model named Product, the Entity Framework generates stored procedures like Product_Update, Product_Delete, etc).
But in my case I can't use code-first because the database and the stored procedures already exist and their names don't follow this convention. What's the way to go? Any help will be appreciated. Thanks in advance.
Begin Edited 2017-03-28
If I go about using straight ADO.NET classes as Shyju and WillHua said, will data annotations on my Model data classes work? If not, how else can validation, etc be implemented?
If I follow this approach, do I need to reference Entity Framework in my project at all?
End Edited 2017-03-28
If you're using Entity Framework you could do something similar to the following:
var startDateParam = new SqlParameter("@StartDate", startDate);
var endDateParam = new SqlParameter("@EndDate", endDate);
var parameters = new[] { startDateParam, endDateParam };

var result = Context.Database.SqlQuery<CampaignReferralReportItem>(
    "CampaignReferralReportItems @StartDate, @EndDate", parameters).ToList();
return result;
So what you have here is the declaration of two parameters, which are passed into the stored procedure 'CampaignReferralReportItems'. After the query completes, the result set is mapped to the class CampaignReferralReportItem as closely as possible.
Keep in mind that the result columns are matched to the class's properties by name, so a mismatch can cause the mapping to throw exceptions.
It's worth noting that the Context in the above code is your DbContext.
Also, before you start throwing this kind of code everywhere, it might be worthwhile to look at the Repository pattern.
I'd suggest either using Dapper, which I am starting to use and love for its speed, or Entity Framework with 'Update model from Database', i.e. a database-first approach where the references to the procs are pulled in.
But, I'd suggest Dapper, as it's pretty simple and quick.
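For reference, a minimal Dapper sketch for calling an existing stored procedure (the proc name, parameter, and ProductDto type are placeholders, not from the question):

using (var connection = new SqlConnection(connectionString))
{
    // Dapper maps result columns to ProductDto properties by name.
    var products = connection.Query<ProductDto>(
        "dbo.GetProductsByCategory",
        new { CategoryId = 5 },
        commandType: CommandType.StoredProcedure)
        .ToList();
}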

Self Tracking entities and the mysterious ChangeTracker_ChangeTrackingEnabled datatable column

I'm using Self Tracking Entities that implement IObjectWithChangeTracker with the latest Entity Framework RC available as a NuGet package. The target database is PostgreSQL. I'm also using the Code First fluent API to build the model and LINQ to Entities to query the database.
To my surprise, a simple SELECT query on the entity generates SQL with a mysterious column ChangeTracker_ChangeTrackingEnabled that does not exist in the datatable! I don't understand this behavior, since it seems to me that the EntityTypeConfiguration-derived class maps the entity properties to the datatable columns in its constructor.
Is there a way to disable this behavior, or at least to tell which column should be mapped for the change tracker?
For that purpose, setting Context.Configuration.AutoDetectChangesEnabled = false or calling the IsConcurrencyToken() mapping in the EntityTypeConfiguration-derived class does not help.
Any help appreciated.
TIA.
You must inform EF about every public property you want to exclude from mapping, either by marking the property with the NotMapped attribute or by using Ignore in the fluent API.
Btw, as far as I know, STEs are not designed to be used with Code First or the DbContext API.
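For example, either of these excludes a property from the generated SQL (a sketch; Report and ChangeTrackingEnabled are stand-in names, not the asker's types):

public class Report
{
    public int Id { get; set; }
    public string Title { get; set; }

    // Option 1: data annotation.
    [NotMapped]
    public bool ChangeTrackingEnabled { get; set; }
}

// Option 2: fluent API, e.g. inside OnModelCreating or an EntityTypeConfiguration<Report>.
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    modelBuilder.Entity<Report>().Ignore(r => r.ChangeTrackingEnabled);
}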

ASP.NET MVC: Best Way To Call Stored Procedure

I'm trying to decide which is the best way to call a stored procedure.
I'm new to ASP.NET MVC and I've been reading a lot about Linq to SQL and Entity Framework, as well as the Repository Pattern. To be honest, I'm having a hard time understanding the real differences between L2S and EF... but I want to make sure that what I'm building within my application is right.
For right now, I need to properly call stored procedures to: a) save some user information and get a response, and b) grab some information for a catalog of products.
So far, I've created a LINQ to SQL .dbml file, selected the stored procedure from the Server Explorer and dragged that instance into the .dbml. I'm currently calling the stored procedure like so:
MyLinqModel _db = new MyLinqModel();
_db.MyStoredProcedure(args);
I know there's got to be more involved... plus I'm doing this within my controller, which I understand to be not a good practice.
Can someone recognize what my issues are here?
LINQ and EF are probably overkill if all you're trying to do is call a stored proc.
I use Enterprise Library, but ADO.NET will also work fine.
See this tutorial.
Briefly (shamelessly copied-and-pasted from the referenced article):
SqlConnection conn = null;
SqlDataReader rdr = null;

// typically obtained from user
// input, but we take a short cut
string custId = "FURIB";

Console.WriteLine("\nCustomer Order History:\n");

try
{
    // create and open a connection object
    conn = new SqlConnection("Server=(local);DataBase=Northwind;Integrated Security=SSPI");
    conn.Open();

    // 1. create a command object identifying
    //    the stored procedure
    SqlCommand cmd = new SqlCommand("CustOrderHist", conn);

    // 2. set the command object so it knows
    //    to execute a stored procedure
    cmd.CommandType = CommandType.StoredProcedure;

    // 3. add parameter to command, which
    //    will be passed to the stored procedure
    cmd.Parameters.Add(new SqlParameter("@CustomerID", custId));

    // execute the command
    rdr = cmd.ExecuteReader();

    // iterate through results, printing each to console
    while (rdr.Read())
    {
        Console.WriteLine(
            "Product: {0,-35} Total: {1,2}",
            rdr["ProductName"],
            rdr["Total"]);
    }
}
finally
{
    if (rdr != null) rdr.Close();
    if (conn != null) conn.Close();
}
Update
I missed the part where you said that you were doing this in your controller.
No, that's not the right way to do this.
Your controller should really only be involved with orchestrating view construction. Create a separate class library, called "Data Access Layer" or something less generic, and create a class that handles calling your stored procs, creating objects from the results, etc. There are many opinions on how this should be handled, but perhaps the most common is:
View
  |
Controller
  |
Business Logic
  |
Data Access Layer
  |--- SQL (stored procs)
  |      - Tables
  |      - Views
  |      - etc.
  |--- Alternate data sources
         - Web services
         - Text/XML files
         - blah blah blah.
MSDN has a decent tutorial on the topic.
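As a rough sketch of such a data access class (the class, method, and OrderHistoryItem type are invented here; the proc and columns reuse the CustOrderHist example above), something like this could live in that separate library:

// Lives in the data access layer; controllers never touch SqlConnection directly.
public class OrderHistoryRepository
{
    private readonly string _connectionString;

    public OrderHistoryRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public List<OrderHistoryItem> GetCustomerOrderHistory(string customerId)
    {
        var items = new List<OrderHistoryItem>();

        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand("CustOrderHist", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(new SqlParameter("@CustomerID", customerId));

            conn.Open();
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    items.Add(new OrderHistoryItem
                    {
                        ProductName = (string)rdr["ProductName"],
                        Total = Convert.ToInt32(rdr["Total"])
                    });
                }
            }
        }

        return items;
    }
}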
Try this:
Read:
var authors = context.Database.SqlQuery<Author>(
    "usp_GetAuthorByName @AuthorName",
    new SqlParameter("@AuthorName", "author"));
Update:
var affectedRows = context.Database.ExecuteSqlCommand(
    "usp_CreateAuthor @AuthorName = {0}, @Email = {1}",
    "author", "email");
From this link: http://www.dotnetthoughts.net/how-to-execute-a-stored-procedure-with-entity-framework-code-first/
And I would go with the layered structure David Lively mentioned, instead of having the routines in the controller. Simply pass the results back as an IEnumerable<blah> from a function in a separate repository class for a read, and pass back a boolean indicating whether the update succeeded for an update.
LINQ to SQL and the ADO.NET Entity Framework attach read stored procs to the data/object context class that you use to work against its various entities. For create, update, and delete, you can create a proc that maps to the properties of an entity that the model generates, and using the entity mapping window (I forget the exact name right now), you can map an entity's fields to the proc parameters. So, say you have a Customers table: EF generates a Customer entity, and you can map the proc parameters to the properties of the Customer entity when attempting to update/insert/delete.
Now, you can map a CUD proc to a function, but I don't know all the repercussions; I like the way I just mentioned the best.
HTH.
A common pattern is to pass a repository interface into your controller by dependency injection. The choice of persistence/ORM technology is really a separate issue and unrelated to the fact that you are using MVC. Using the repository pattern and coding against abstractions (interfaces) makes your application easy to test by mocking out your repositories.
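A sketch of that pattern in a controller (the interface, Product type, and the registration with a DI container are assumptions, not shown here):

public interface IProductRepository
{
    IEnumerable<Product> GetCatalog();
}

public class ProductController : Controller
{
    private readonly IProductRepository _repository;

    // The concrete repository (EF, L2S, ADO.NET, or a test fake) is injected.
    public ProductController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        return View(_repository.GetCatalog());
    }
}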
I think you should also try to use as few stored procedures as possible. This means you can more easily test your logic in isolation (unit tests) without needing to be connected to a database. I would highly recommend looking at NHibernate. The learning curve is fairly steep but you are in full control of your mappings and configuration. There are obviously occasions where you will need stored procs for performance reasons, but using an ORM predominantly is very beneficial.
I can't imagine that your goal is simply to be able to call a stored procedure. To me it sounds as if you should forget stored procedures and use LINQ to SQL. I say L2S because EF is far more to learn, and it's not needed in this case.

Some issues about Rob Conery's repository pattern

Please read my update at the end of the question after reading the answers.
I'm trying to apply the repository pattern as Rob Conery described on his blog under "MVC Storefront". But I want to ask about some issues I had before applying this design pattern.
Rob made his own "Model" and used an ORM (LINQ to SQL or Entity Framework) to map his database to entities. Then he used custom repositories that return IQueryable<myModel>, and in these repositories he did a sort of mapping or "parsing" between the ORM entities and his model classes.
What I'm asking here: is it possible to make a custom mapping between the ORM entities and my model classes and load just the properties that I want? I hope the point is clear.
Update For POCO
This is what I decided after many suggestions and many tries:
With all due respect to Mr. Rob Conery's opinion, I ended up with a better solution for my case:
I built my models as POCOs and put them in my Models layer, so they have nothing to do with the "edmx" file.
I built my repositories to work with these POCO models, based on DbContext.
Then I created ViewModels to get just the information each view needs from those repositories.
So I don't need to add one more layer to sit between the EF models and my model. I just twist my model a little and force EF to deal with it.
As I see it, this pattern is better than Rob Conery's.
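In code, that arrangement might look roughly like this (a sketch; the Client classes and member names are placeholders, not from the question):

// POCO in the Models layer: no EF base class, no edmx dependency.
public class Client
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Company { get; set; }
}

// Repository built directly on DbContext, returning the POCO model.
public class ClientRepository
{
    private readonly DbContext _context;

    public ClientRepository(DbContext context)
    {
        _context = context;
    }

    public IQueryable<Client> GetAll()
    {
        return _context.Set<Client>();
    }
}

// ViewModel shaped for one view, filled by projection in the controller.
public class ClientListItemViewModel
{
    public string Name { get; set; }
    public string Company { get; set; }
}

var viewModels = repository.GetAll()
    .Select(c => new ClientListItemViewModel { Name = c.Name, Company = c.Company })
    .ToList();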
Yes, it's possible if you're using LINQ to SQL. All you need to do is use projection to pull the data you want into an object of your choosing. You don't need all this decoration with interfaces and whatnot; if you use a model specific to a view (which it sounds like you need), create a ViewModel class.
Let's call it ProductSummaryView:
public class ProductSummaryView
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}
Now load it from the repository:
var products = from p in _repository.GetAllProducts
               where p.Price > 100
               select new ProductSummaryView
               {
                   Name = p.ProductName,
                   Price = p.Price
               };
This will pull all products where the price > 100 and return an IQueryable. In addition, since you're only asking for two columns, only two columns will be specified in the SQL call.
Not a dodge to your question, but it's ultimately up to you to decide how your repository would work.
The high-level premise is that your controller would point to some repository interface, say IRepository<T> where T : IProduct. The implementation could do any number of things: load your whole database from disk into memory and then parse LINQ expressions to return stuff, or just return a fixed set of dummy data for testing purposes. Because you're coding against a repository interface, you can have any number of concrete implementations.
Now, if you're looking for a critique of Rob's specific implementation, I'm not sure that's germane to Stack Overflow.
While it's possible to populate part of an object from a query over just a subset of that object's columns (which has nothing to do with the repository pattern), that's not how things are "normally" done.
If you want to return a subset of an object, you generally create a new class with just that subset of properties. This is often (in the MVC world view) referred to as a View Model class. Then, you use a projection query to fill that new class.
You can do all of that whether you are using the repository pattern or not. I would argue there is no conflicting overlap between the two concepts.
Deferring the load
Remember that IQueryable defers all the loading up to the last responsible moment. You probably won't have to load all the data; use the LINQ operators to get just the data you want. ;)
Regarding taking a dependency on domain classes in views, I would say no: use a ViewModel pattern for this. It's more maintainable; you could use AutoMapper to avoid the mapping problems, and ViewModels are very flexible in composite view scenarios :)
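For instance, a minimal AutoMapper sketch using the newer MapperConfiguration API (Person and PersonViewModel are placeholder types; older AutoMapper versions used the static Mapper.CreateMap instead):

// One-time configuration: map the domain Person to its view model.
var config = new MapperConfiguration(cfg => cfg.CreateMap<Person, PersonViewModel>());
var mapper = config.CreateMapper();

// In the controller/service: shape a domain object for the view.
PersonViewModel vm = mapper.Map<PersonViewModel>(person);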
As for the new question: the answer is yes, you can. Just as Rob Conery says, use projection ;):
var query = from p in DataContext.Persons
            select new Persons
            {
                firstname = p.firstname,
                lastname = p.lastname
            };
