I'm trying to decide which is the best way to call a stored procedure.
I'm new to ASP.NET MVC and I've been reading a lot about Linq to SQL and Entity Framework, as well as the Repository Pattern. To be honest, I'm having a hard time understanding the real differences between L2S and EF... but I want to make sure that what I'm building within my application is right.
For right now, I need to properly call stored procedures to: a) save some user information and get a response, and b) grab some information for a catalog of products.
So far, I've created a Linq to SQL .dbml file, selected the stored procedure in Server Explorer, and dragged that instance onto the .dbml designer. I'm currently calling the stored procedure like so:
MyLinqModel _db = new MyLinqModel();
_db.MyStoredProcedure(args);
I know there's got to be more involved... plus I'm doing this within my controller, which I understand to be not a good practice.
Can someone point out what my issues are here?
LINQ and EF are probably overkill if all you're trying to do is call a stored proc.
I use Enterprise Library, but ADO.NET will also work fine.
See this tutorial.
Briefly (shamelessly copied-and-pasted from the referenced article):
SqlConnection conn = null;
SqlDataReader rdr = null;

// typically obtained from user
// input, but we take a short cut
string custId = "FURIB";

Console.WriteLine("\nCustomer Order History:\n");

try
{
    // create and open a connection object
    conn = new SqlConnection("Server=(local);DataBase=Northwind; Integrated Security=SSPI");
    conn.Open();

    // 1. create a command object identifying
    //    the stored procedure
    SqlCommand cmd = new SqlCommand("CustOrderHist", conn);

    // 2. set the command object so it knows
    //    to execute a stored procedure
    cmd.CommandType = CommandType.StoredProcedure;

    // 3. add parameter to command, which
    //    will be passed to the stored procedure
    cmd.Parameters.Add(new SqlParameter("@CustomerID", custId));

    // execute the command
    rdr = cmd.ExecuteReader();

    // iterate through results, printing each to console
    while (rdr.Read())
    {
        Console.WriteLine(
            "Product: {0,-35} Total: {1,2}",
            rdr["ProductName"],
            rdr["Total"]);
    }
}
finally
{
    if (rdr != null) rdr.Close();
    if (conn != null) conn.Close();
}
Update
I missed the part where you said that you were doing this in your controller.
No, that's not the right way to do this.
Your controller should really only be involved with orchestrating view construction. Create a separate class library, called "Data Access Layer" or something less generic, and create a class that handles calling your stored procs, creating objects from the results, etc. There are many opinions on how this should be handled, but perhaps the most common is:
View
  |
Controller
  |
Business Logic
  |
Data Access Layer
  |--- SQL (stored procs)
  |       - Tables
  |       - Views
  |       - etc.
  |--- Alternate data sources
          - Web services
          - Text/XML files
          - blah blah blah.
MSDN has a decent tutorial on the topic.
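For illustration, here is a rough sketch of what such a data-access class might look like when it wraps a stored proc. The ProductRepository/Product names, the proc name and the column names are hypothetical, not taken from the question:
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Hypothetical DTO returned by the data access layer
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Lives in its own class library; the controller never touches SqlCommand directly
public class ProductRepository
{
    private readonly string _connectionString;

    public ProductRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public IEnumerable<Product> GetCatalog(int categoryId)
    {
        var products = new List<Product>();

        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand("GetProductCatalog", conn)) // hypothetical proc name
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@CategoryID", categoryId);

            conn.Open();
            using (var rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    products.Add(new Product
                    {
                        Id = (int)rdr["ProductID"],
                        Name = (string)rdr["ProductName"]
                    });
                }
            }
        }

        return products;
    }
}
The controller (or a business-logic class sitting between them) just calls GetCatalog and hands the result to the view; it never sees a SqlCommand.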
Try this:
Read:
var authors = context.Database.SqlQuery<Author>("usp_GetAuthorByName @AuthorName",
    new SqlParameter("@AuthorName", "author"));
Update:
var affectedRows = context.Database.ExecuteSqlCommand
    ("usp_CreateAuthor @AuthorName = {0}, @Email = {1}",
    "author", "email");
From this link: http://www.dotnetthoughts.net/how-to-execute-a-stored-procedure-with-entity-framework-code-first/
And I would go with the layered approach David Lively mentioned, instead of having the routines in the controller. For a read, pass the results back as IEnumerable<blah> from a function in a separate repository class; for an update, pass back a boolean indicating whether the update succeeded.
LINQ to SQL and ADO.NET EF attach read stored procs to the data/object context class that you use to go against its various entities. For create, update, and delete, you can create a proc that maps to the properties of an entity the model generates, and using the entity mapping window (I forget the exact name right now) you can map an entity's fields to the proc parameters. So, say you have a Customers table: EF generates a Customer entity, and you can map the proc parameters to the properties of that Customer entity when attempting to update/insert/delete.
Now, you can map a CUD proc to a function, but I don't know all the repercussions; I like the way I just mentioned the best.
HTH.
A common pattern is to pass a repository interface into your controller by dependency injection. The choice of persistence/ORM technology is really a separate issue, unrelated to the fact that you are using MVC. Using the repository pattern and coding to abstractions (interfaces) makes your application easy to test by mocking out your repositories.
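As a minimal sketch of that pattern (the IProductRepository interface, the Product class and the container wiring are assumed, not from the answer):
using System.Collections.Generic;
using System.Web.Mvc;

public interface IProductRepository
{
    IEnumerable<Product> GetAll();
}

public class ProductController : Controller
{
    private readonly IProductRepository _repository;

    // The DI container (Ninject, StructureMap, Unity, ...) supplies the concrete implementation
    public ProductController(IProductRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        var products = _repository.GetAll();
        return View(products);
    }
}
In a unit test you construct the controller with a fake or mocked IProductRepository, so no database is needed.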
I think you should also try to use as few stored procedures as possible. This means you can more easily test your logic in isolation (unit tests) without needing to be connected to a database. I would highly recommend looking at NHibernate. The learning curve is fairly steep but you are in full control of your mappings and configuration. There are obviously occasions where you will need stored procs for performance reasons, but using an ORM predominantly is very beneficial.
I can't imagine that your goal is to be able to call a stored procedure. To me it sounds as if you need to forget stored procedures and use Linq to Sql. I say L2S because EF is far more to learn, and not needed in this case.
My ASP.NET MVC web app needs to get data from existing database using T-SQL stored procedures. I've seen tutorials on how to do that using the code-first approach (basically for a model named Product, the Entity Framework generates stored procedures like Product_Update, Product_Delete, etc).
But in my case I can't use code-first because the database and the stored procedures already exist, and their names don't follow this convention. What's the way to go? Any help will be appreciated. Thanks in advance.
Begin Edited 2017-03-28
If I go about using straight ADO.NET classes as Shyju and WillHua said, will data annotations on my Model data classes work? If not, how else can validation, etc be implemented?
If I follow this approach, do I need to reference Entity Framework in my project at all?
End Edited 2017-03-28
If you're using Entity Framework you could do something similar to the following:
var startDateParam = new SqlParameter("StartDate", SqlDbType.DateTime) { Value = startDate };
var endDateParam = new SqlParameter("EndDate", SqlDbType.DateTime) { Value = endDate };
var parameters = new[] { startDateParam, endDateParam };

var result = Context.Database.SqlQuery<CampaignReferralReportItem>(
    "CampaignReferralReportItems @StartDate, @EndDate", parameters).ToList();

return result;
So what you've got here is the declaration of two parameters which are passed into the Stored Procedure 'CampaignReferralReportItems'. After the query is complete, the result is mapped as closely as possible to the class CampaignReferralReportItem.
Keep in mind that the property names and types must match the query result columns, or the mapping can throw exceptions.
It's worth noting that the Context stated in the above code is your DataContext.
Also, before you start throwing this kind of code everywhere, it might be worthwhile to look at the Repository pattern.
I'd suggest either Dapper, which I am starting to use and love for its speed, or Entity Framework with 'Update model from Database' - a database-first approach where the references to the procs are pulled in.
But, I'd suggest Dapper, as it's pretty simple and quick.
I've seen other questions regarding using straight SQL for MVC data access, for example here. Most responses don't answer the question but ask
"Why would you not want to use ORM, EF, Linq, etc?"
My group does custom reporting out of a data warehouse that requires a lot of complex, highly tuned Oracle queries that are manipulated based on user GUI parameter selections.
My newest project is to develop a SQL plugin reporting tool for SQL report developers. They would create a pre-tuned SQL for the report with pseudo parameters and enter (and store) via the GUI. Then the GUI would prompt them for the parameter definitions (name and type) that need to be displayed/requested at run time to ultimately replace the pseudo variables.
So a SQL statement may look like:
SELECT * FROM orders WHERE order_date BETWEEN '<Date1>' AND '<Date2>'
And the report developer would then, via the GUI, add two parameters named Date1 and Date2, and flag them as date fields.
End users would then select the report, get prompted for Date1 and Date2, and the GUI would do the substitution and run the SQL.
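(To make this concrete, here is a rough sketch of the substitution I have in mind; the helper name and the SqlClient usage are illustrative assumptions only - the pseudo parameters get swapped for real bind variables rather than spliced in as literal text:)
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

public static class ReportRunner
{
    // sqlTemplate is the stored report SQL, e.g.
    //   SELECT * FROM orders WHERE order_date BETWEEN '<Date1>' AND '<Date2>'
    // paramValues maps the pseudo-parameter names (Date1, Date2, ...) to the
    // values the GUI collected from the end user.
    public static DataTable Run(string connectionString, string sqlTemplate,
                                IDictionary<string, object> paramValues)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = conn.CreateCommand())
        {
            string sql = sqlTemplate;
            foreach (var p in paramValues)
            {
                // replace '<Date1>' (quoted or not) with a bind variable @Date1
                sql = sql.Replace("'<" + p.Key + ">'", "@" + p.Key)
                         .Replace("<" + p.Key + ">", "@" + p.Key);
                cmd.Parameters.AddWithValue("@" + p.Key, p.Value);
            }
            cmd.CommandText = sql;

            var table = new DataTable();
            conn.Open();
            using (var rdr = cmd.ExecuteReader())
            {
                table.Load(rdr);
            }
            return table;
        }
    }
}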
As you can see, I have no choice but to use straight SQL (especially in the second example, and I understand I would have to forgo strong typing there as well).
So my questions are:
When is it necessary to bypass EF/Linq (and there are definitely reasons to), what is best practice in MVC 4?
And how best to strongly type when I do know the output columns ahead of time?
And CRUD processing?
Can anyone point me to any examples of non-EF/Linq based coding in this regard?
I think this is a bit of an open-ended question, so here's my 2c. (If tl;dr, go to the last section.)
To me, it's not so much "bypassing EF/Linq", but rather the need to choose the appropriate data persistence library. I have used PetaPoco, ADO.NET, NHibernate/ActiveRecord, Linq2Sql, and EF (my main choice) with MVC.
Best practice actually comes from realising that Controllers are STILL a part of the presentation layer, and that they should not deal with anything other than HttpContext-related operations plus calling business logic service classes.
I arrange my classes as:
Presentation (MVC) -> Logic Services (Simple classes) -> Data Access (Context wrapped in "repositories").
So I can't quite see how using EF or not would have any implication on ASP.NET MVC.
For me, Data Access returns data in DTO, e.g.
public List<Foo> GetAllFoos()
Whether that method string-concatenates SQL, reads from an XML file, or does a simple Context.Foos.ToList() is irrelevant to the rest of the application. All I care about is that Data Access does NOT return me a DataSet with string matching for columns. Those stay in the DAL.
See point 1 and 2. My repositories have CRUD methods on it. How it's done is irrelevant to the rest of application. Take one of the most basic interface to my repositories classes:
public interface IFooRepository
{
    void Save(Foo foo);
    Foo Get(int id);
    void Create(Foo foo);
    void Delete(int id);
}
One point not mentioned yet: DI is also crucial. The concrete implementation "FooRepository" may choose to request dependencies such as web services, context classes, etc. Those are, however, again, completely irrelevant to the caller, who depends only on the interface.
If you still require an example after the 3 points above, drop a comment and I'll whip up something extremely simple using Ado.net.
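For what it's worth, here is a very small sketch of what an ADO.NET-backed implementation of that interface could look like; the Foo columns (Id, Name), table name and connection string are assumptions:
using System.Data.SqlClient;

public class AdoNetFooRepository : IFooRepository
{
    private readonly string _connectionString;

    public AdoNetFooRepository(string connectionString)
    {
        _connectionString = connectionString;
    }

    public Foo Get(int id)
    {
        using (var conn = new SqlConnection(_connectionString))
        using (var cmd = new SqlCommand("SELECT Id, Name FROM Foo WHERE Id = @Id", conn))
        {
            cmd.Parameters.AddWithValue("@Id", id);
            conn.Open();
            using (var rdr = cmd.ExecuteReader())
            {
                if (!rdr.Read()) return null;
                return new Foo { Id = (int)rdr["Id"], Name = (string)rdr["Name"] };
            }
        }
    }

    public void Save(Foo foo) { /* UPDATE Foo SET ... */ }
    public void Create(Foo foo) { /* INSERT INTO Foo ... */ }
    public void Delete(int id) { /* DELETE FROM Foo ... */ }
}
The rest of the application only ever sees IFooRepository, so swapping this for an EF or PetaPoco implementation later doesn't ripple outward.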
===========================================================================
To EF or not to EF.
For me, if starting a new project with new schema, I use EF code first.
Fitting new code to an old database + an old project with no ORM mapping I can reuse = PetaPoco.
===========================================================================
In the context of your project:
The "SQL plugin reporting tool for SQL report developers". "The" sql reporting service? I'm not sure why you need to do anything? Doesn't SSRS already do that? (Enter sql statement/data source, generate form for parameter, etc).
If not, I'd question the design decision. IMVHO, the need for users of an application (I don't care if it's a "report developer" or whoever) to enter SQL statements usually stems from "architectural astronauts". How do you debug the SQL statement when you enter it via the GUI as a string? How do you know the tables and the relationships? You either dig into SSMS and come back to the GUI, or you build a complex UI (aka rebuild SSMS).
At the end of the day, if you want a bazillion reports for a gazillion different users, you have to pay for it. I see too many "architectural astronauts" who expose an application to accept SQL statements, only to make everyone waste time guessing what should be put into it. No cost saving at all.
Ok, if you must do that, well eh... good luck. Your best bet is to return a DataTable and dump the rows/columns/data onto the view with nested foreach loops, rows then columns.
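A minimal sketch of that approach, with placeholder connection string and SQL; the helper fills a DataTable, and the Render method stands in for the nested foreach you would do in the view:
using System.Data;
using System.Data.SqlClient;
using System.Text;

public static class AdHocReport
{
    public static DataTable Run(string connectionString, string sql)
    {
        var table = new DataTable();
        using (var da = new SqlDataAdapter(sql, connectionString))
        {
            da.Fill(table); // opens and closes the connection for us
        }
        return table;
    }

    // stand-in for the nested foreach over rows then columns in the view
    public static string Render(DataTable table)
    {
        var sb = new StringBuilder();
        foreach (DataRow row in table.Rows)
        {
            foreach (DataColumn col in table.Columns)
            {
                sb.Append(col.ColumnName).Append('=').Append(row[col]).Append(' ');
            }
            sb.AppendLine();
        }
        return sb.ToString();
    }
}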
In LINQ, you can write a manual SQL query, take the results from it and have LINQ "map" those into the appropriate properties of your Model classes (or at least, I'm pretty sure I read you can do that).
Is it possible to do something like that in Entity Framework?
I have a web app that's using EF, and its CPU usage is ridiculously high compared to the traffic it has. Profiling it on my machine, all that time is (predictably) spent in the DB functions, and the largest portion of that time (> 85%) is spent by EF "generating" SQL and doing stuff before actually executing a query.
So my reasoning is that I can just go in and hardcode the SQL queries, but still use my populated Model properties in my view.
Is this possible? If so, how would I do it?
Thanks!
Daniel
So what you want to do is hydrate an object from an IDataReader? It's pretty easy to write code to do this (hint: reflection! or you can get fancy and use a member initialization expression) or you can Google for myriad existing implementations on the Internet.
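For example, a naive reflection-based hydrator might look like this; it's a simplified sketch (no PropertyInfo caching, minimal DBNull handling), not a production mapper:
using System;
using System.Collections.Generic;
using System.Data;

public static class DataReaderExtensions
{
    public static List<T> Hydrate<T>(this IDataReader reader) where T : new()
    {
        var results = new List<T>();
        var props = typeof(T).GetProperties();

        while (reader.Read())
        {
            var item = new T();
            foreach (var prop in props)
            {
                // match property name to column name; skip columns that don't exist
                int ordinal;
                try { ordinal = reader.GetOrdinal(prop.Name); }
                catch (IndexOutOfRangeException) { continue; }

                if (!reader.IsDBNull(ordinal))
                {
                    prop.SetValue(item, reader.GetValue(ordinal), null);
                }
            }
            results.Add(item);
        }
        return results;
    }
}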
You can do this within EF using ObjectContext.ExecuteStoreQuery<T> or ObjectContext.Translate<T> if you already have a DbDataReader.
ObjectContext.ExecuteStoreQuery in EF 4.0
As @Jason said, you can:
IEnumerable<MiniClient> results =
    myContext.ExecuteStoreQuery<MiniClient>("select name,company from client");
Reference : https://msdn.microsoft.com/en-us/library/dd487208.aspx
DbContext.Database.SqlQuery, in EF 4.1+
In Entity Framework 4.1+, DbContext is preferable to ObjectContext, so you would rather use:
// MyDbContext is your DbContext-derived context class
var myContext = new MyDbContext();
IEnumerable<MiniClient> results =
    myContext.Database.SqlQuery<MiniClient>("select name,company from client");
Reference : https://msdn.microsoft.com/en-us/library/jj592907(v=vs.113).aspx
dynamic and anonymous class
Too lazy to create a projection class like MiniClient? Then you can use an anonymous type and the dynamic keyword:
IEnumerable<dynamic> results =
myContext.Clients.Select( c => new {Name = c.Name, Firstname = c.Firstname});
Note: in all the samples, MiniClient is not an entity of the DbContext (i.e. not a DbSet<T> property).
I started working with linq to SQL several weeks ago. I got really tired of working with SQL server directly through the SQL queries (sqldatareader, sqlcommand and all this good stuff).
After hearing about LINQ to SQL and MVC I quickly moved all my projects to these technologies. I expected LINQ to SQL to work slower, but it surprisingly turned out to be pretty fast, primarily because I always forgot to close my connections when using datareaders. Now I don't have to worry about it.
But there's one problem that really bothers me. There's one page that's requested thousands of times a day. The system gets data in the beginning, works with it and updates it. Primarily the updates are ++ and -- (increase and decrease values). I used to do it like this:
UPDATE table SET value = value + 1 WHERE ID = @Id
It worked with no problems obviously. But with linq to SQL the data is taken in the beginning, moved to the class, changed and then saved.
stats.RegisteredUsers++;
db.SubmitChanges();
Let's say there were 100 000 users. Linq will say "let it be 100 001" instead of "let it be increased by 1".
But if the value has already been increased by another request in the meantime (that happens on my site all the time), then LINQ will essentially say "oops, this value is already 100 001 - whatever, I'll throw an exception".
You can change this behavior so that it won't throw an exception but it still will not set the value to 100 002.
Like I said, it happened to me all the time. The stats value was increased twice a second on average. I simply had to rewrite this chunk of code with classic ADO.NET.
So my question is: how can you solve this problem with LINQ to SQL?
For these types of "write-only queries" I usually use a Stored Procedure. You can drag the stored procedure into the designer and execute it through the Linq to SQL DataContext class (it will be added as a method).
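For example, assuming you named the proc IncrementRegisteredUsers (a hypothetical name) and dragged it into the designer, the call is just:
using (var db = new MyDataContext()) // your generated DataContext
{
    db.IncrementRegisteredUsers(statsId); // the increment runs server-side, no read-modify-write
}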
Sorry for the trite answer but it really is that simple; no need to finagle with raw ADO.NET SqlCommand objects and the like, just import the SP and you're done. Or, if you want to go really ad-hoc, use the ExecuteCommand method, as in:
context.ExecuteCommand("UPDATE table SET value = value + 1 WHERE ID = {0}", id);
(But don't overuse this, it can get difficult to maintain since the logic is no longer contained in your DataContext instance. And before anybody jumps on this claiming it to be a SQL injection vulnerability, please note that ExecuteCommand/ExecuteQuery are smart methods that turn this into a parameterized statement/query.)
Linq to Sql supports "optimistic" concurrency out of the box. If you need tighter control, you can add a Timestamp column to your table, and Linq to Sql will use that timestamp to tighten the concurrency.
http://mtaulty.com/CommunityServer/blogs/mike_taultys_blog/archive/2008/07/01/10557.aspx
However, as Morten points out in the comments below, this solution is not going to perform well. Of course, you can always use ADO.NET to update the value, just like you were doing before; that won't adversely affect the operation of your Linq queries at all.
You could turn off concurrency on that property by changing the UpdateCheck value:
http://msdn.microsoft.com/en-us/library/bb399394(v=VS.90).aspx
Messy if you're using generated code and the designer, but I think this is the only way to do it.
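If you do edit the mapping by hand (or keep the entity in your own partial class), the attribute looks roughly like this; the column and storage names here are placeholders:
// requires: using System.Data.Linq.Mapping;
private int _value;

[Column(Storage = "_value", DbType = "Int NOT NULL", UpdateCheck = UpdateCheck.Never)]
public int Value
{
    get { return _value; }
    set { _value = value; }
}
In the designer, the equivalent is setting the property's Update Check setting to Never in the Properties window.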
Let's start with this basic scenario:
I have a bunch of Tables that are essentially rarely changed Enums (e.g. GeoLocations, Category, etc.) I want to load these into my EF ObjectContext so that I can assign them to entities that reference them as FK. These objects are also used to populate all sorts of dropdown controls. Pretty standard scenarios so far.
Since a new controller is created for each page request in MVC, a new entity context is created and these "enum" objects are loaded repeatedly. I thought about using a static context object across all instances of controllers (or repository object).
But will this require too much locking and therefore actually worsen perf?
Alternatively, I'm thinking of using a static context only for read-only tables. But since entities that reference them must be in the same context anyway, this isn't any different from the above.
I also don't want to get into the business of attaching/detaching these enum objects. Since I believe once I attach a static enum object to an entity, I can't attach it again to another entity??
Please help, I'm quite new to EF + MVC, so am wondering what is the best approach.
Personally, I never have any static Context stuff, etc. For me, when i call the database (CRUD) I use that context for that single transaction/unit of work.
So in this case, what you're suggesting is that you wish to retrieve some data from the database, and this data is more or less read-only and doesn't change / is static.
Lookup data is a great example of this.
So your Categories never change. Your GeoLocations never change, also.
I would not worry about this concept on the database/persistence level, but on the application level. So, just forget that this data is static/readonly etc.. and just get it. Then, when you're in your application (ie. ASP.NET web MVC controller method or in the global.asax code) THEN you should cache this ... on the UI layer.
If you're doing a nice n-tiered MVC app, which contains
UI layer
Services / Business Logic Layer
Persistence / Database data layer
Then I would cache this in the Middle Tier .. which is called by the UI Layer (ie. the MVC Controller Action .. eg. public void Index())
I think it's important to know how to separate your concerns, and the database layer should just be that -> CRUD'ish stuff and some unique stored procs when required. Don't worry about caching data, etc. Keep this layer as light as possible and as simple as possible.
Then, your middle Tier (if it exists) or your top tier should worry about what to do with this data -> in this case, cache it because it's very static.
I've implemented something similar using Linq2SQL by retrieving these 'lookup tables' as lists on app startup and storing them in ASP's caching mechanism. By using the ASP cache, I don't have to worry about threading/locking etc. Not sure why you'd need to attach them to a context, something like that could easily be retrieved if necessary via the table PK id.
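A minimal sketch of that idea; the Category type, the repository interface and the cache key are placeholders, and the repository is only hit when the cache is empty:
using System.Collections.Generic;
using System.Web;

public class Category
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface ICategoryRepository
{
    IList<Category> GetAll();
}

public static class LookupCache
{
    private const string CategoriesKey = "Lookup.Categories";

    public static IList<Category> GetCategories(ICategoryRepository repository)
    {
        // HttpRuntime.Cache is shared across requests and is thread-safe,
        // so no explicit locking is needed for this simple lazy load
        var cached = HttpRuntime.Cache[CategoriesKey] as IList<Category>;
        if (cached == null)
        {
            cached = repository.GetAll();
            HttpRuntime.Cache.Insert(CategoriesKey, cached);
        }
        return cached;
    }
}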
I believe this is as much a question of what to cache as how. When your are dealing with EF, you can quickly run into problems when you try to persist EF objects across different contexts and attempt to detach/attach those objects. If you are using your own POCO objects with custom t4 templates then this isn't an issue, but if you are using vanilla EF then you will want to create POCO objects for your cache.
For most simple lookup items (i.e. numeric primary key and string text description), you can use a Dictionary<int, string>. If you have multiple fields you need to pass back and forth with the UI then you can build a more complete object model. Since these will be POCO objects they can then be persisted pretty much anywhere and any way you like. I recommend keeping the caching logic outside of your MVC application so that you can easily mock the caching activity for testing. If you have multiple lists you need to cache, you can put them all in one container class that looks something like this:
public class MyCacheContainer
{
public Dictionary<int, string> GeoLocations { get; set; }
public List<Category> Categories { get; set; }
}
The next question is do you really need these objects in your entity model at all. Chances are all you really need are the primary keys (i.e. you create a dropdown list using the keys and values from the dictionary and just post the ID). Therefore you could potentially handle all of the lookups to the textual description in the construction of your view models. That could look something like this:
MyEntityObject item = Context.MyEntityObjects.FirstOrDefault(i => i.Id == id);
MyCacheContainer cache = CacheFactory.GetCache();
MyViewModel model = new MyViewModel { Item = item, GeoLocationDescription = cache.GeoLocations[item.GeoLocationId] };
If you absolutely must have those objects in your context (i.e. if there are referential entities that tie 2 or more other tables together), you can pass that cache container into your data access layer so it can do the proper lookups.
As for assigning "valid" entities, in .Net 4 you can just set the foreign key properties and don't have to actually attach an object (technically you can do this in 3.5, but it requires magic strings to set the keys). If you are using 3.5, you might just try something like this:
myItem.Category = Context.Categories.FirstOrDefault(c => c.id == id);
While this isn't the most elegant solution and does require an extra roundtrip to the DB to get a category you don't really need, it works. Doing a single record lookup based on a primary key should not really be that big of a hit especially if the table is small like the type of lookup data you are talking about.
If you are stuck with 3.5, don't want to make that extra round trip, and want to go the magic string route, just make sure you use some type of static resource and/or code generator for your magic strings so you don't fat finger them. There are many examples here that show how to assign a new EntityKey to a reference without going to the DB, so I won't go into that on this question.