How to structure Entity Framework DB contexts - asp.net-mvc

I haven't been able to find any answers online for this.
What are the advantages/disadvantages of using multiple DB contexts versus a single one?
Is the solution below OK for setting related objects when saving to the DB? (It makes things more efficient: since I already have the ID, there is no need to fetch the object.)
I've heard it's recommended to use contexts like using (MyContext c = new MyContext()) {}. At the moment I'm using the default MVC way of having the context instantiated in the controller. Is this OK?
Person P = new Person();
P.QuickSetCar(CarID);
db.People.Add(P);
db.SaveChanges();
and
private void QuickSetCar(int CarID)
{
    if (this.Car == null)
    {
        Car C = new Car();
        C.ID = CarID;
        this.Car = C;
    }
}

Using multiple contexts is always a disadvantage unless you have no choice (e.g. your data is dispersed across multiple databases).
Almost, but you can simplify your new car initialization to:
private void QuickSetCar(int CarID) {
    if (this.Car == null)
        this.Car = new Car() { ID = CarID };
}
That's fine. Just don't use more than one context for the duration of the web transaction (request/response) and don't keep it around for longer than that either.
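For illustration, here is a minimal sketch of what that one-context-per-request pattern can look like in a controller (MyContext comes from the question; the action body is hypothetical). MVC creates a controller instance per request, so a context field scoped to the controller is created and disposed with the request:

public class PeopleController : Controller
{
    private readonly MyContext db = new MyContext();

    public ActionResult Create(int carId)
    {
        var p = new Person();
        p.QuickSetCar(carId); // assumes QuickSetCar is accessible from here
        db.People.Add(p);
        db.SaveChanges();
        return RedirectToAction("Index");
    }

    // Dispose the context when the request (and controller) ends.
    protected override void Dispose(bool disposing)
    {
        if (disposing)
            db.Dispose();
        base.Dispose(disposing);
    }
}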

Multiple contexts are normally only useful for very large models spread over multiple databases. If you are starting a project it is normally best to use a single context.
Your method for using IDs is fine.
Creating contexts within controllers is certainly workable but I would strongly suggest against it. This approach is often shown in demos and for basic scaffolding, but in real-world applications it is not a great idea. Large amounts of code will often be duplicated by instantiating and querying contexts in each controller. Also, having a central repository will allow for much easier caching to improve performance.
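As a sketch of that last point (hedged: the repository shape and cache policy here are assumptions, not a prescribed design), a central repository gives you one place to add caching:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching; // reference System.Runtime.Caching.dll

public class PersonRepository
{
    private readonly MyContext db;

    public PersonRepository(MyContext db) { this.db = db; }

    public List<Person> GetAllPeople()
    {
        // Every controller that goes through the repository shares this cache,
        // instead of each one querying the context directly.
        var cached = MemoryCache.Default["AllPeople"] as List<Person>;
        if (cached != null)
            return cached;

        var people = db.People.ToList();
        MemoryCache.Default.Set("AllPeople", people, DateTimeOffset.Now.AddMinutes(5));
        return people;
    }
}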

Related

EF CORE 3.0 Cannot use multiple DbContext instances within a single query execution

After upgrading from .NET Core 2.2 to 3.0, the app is throwing this error message:
Cannot use multiple DbContext instances within a single query execution
What is the workaround?
IQueryable<ApplicationUser> query;
var queryJoin = from ci in _courseInstructorRepository.Table
                join uc in _userCourseRepository.Table on ci.CourseId equals uc.CourseId
                select new { ci, uc };
if (userId > 0)
    queryJoin = queryJoin.Where(x => x.ci.UserId == userId);
if (courseId > 0)
    queryJoin = queryJoin.Where(x => x.uc.CourseId == courseId);
if (classId > 0)
    queryJoin = queryJoin.Where(x => x.uc.CourseClassId == classId);
query = queryJoin.Select(x => x.uc.User).Distinct();
if (!string.IsNullOrEmpty(studentFirstChar))
    query = query.Where(x => x.FirstName.StartsWith(studentFirstChar));
if (schoolId > 0)
    query = query.Where(x => x.SchoolId == schoolId);
query = query.OrderBy(x => x.UserName);
return new PagedList<ApplicationUser>(query, pageIndex, pageSize);
You have a couple of design flaws in your code that EF Core 2 swept under the carpet.
Your repositories don't share one context instance. EF Core 2 couldn't create one SQL query from your code either, but it silently switched to client-side evaluation. That is, it simply executed two SQL queries and joined them in memory, which must have been highly inefficient. One of the best design decisions in EF Core 3 was to abandon automatic client-side evaluation, so now you're getting this error.
You don't use navigation properties. With an ORM like EF, manual joins should hardly ever be necessary. The Instructor class should have a navigation property like Courses, and Course a navigation property like Instructor.
Don't use this redundant repository layer anyway. As you're already experiencing in this small piece of code, it usually makes things harder than necessary without adding any value.
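To make the navigation-property point concrete, here is a hedged sketch of the same query against a single context (the _context, UserCourses, and Course.Instructors names are assumptions about your model, and this only approximates the original inner join when no instructor filter is applied):

// Single context, navigation properties instead of a manual join.
IQueryable<UserCourse> links = _context.UserCourses;
if (userId > 0)
    links = links.Where(uc => uc.Course.Instructors.Any(ci => ci.UserId == userId));
if (courseId > 0)
    links = links.Where(uc => uc.CourseId == courseId);
if (classId > 0)
    links = links.Where(uc => uc.CourseClassId == classId);

IQueryable<ApplicationUser> query = links.Select(uc => uc.User).Distinct();
if (!string.IsNullOrEmpty(studentFirstChar))
    query = query.Where(u => u.FirstName.StartsWith(studentFirstChar));
if (schoolId > 0)
    query = query.Where(u => u.SchoolId == schoolId);

query = query.OrderBy(u => u.UserName);
return new PagedList<ApplicationUser>(query, pageIndex, pageSize);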
One of your variables was created using another instance of DbContext, so when you try to use it as part of another DbContext's query it throws. The workaround is to close the first DbContext and invoke DbContext.Attach(model) on the second.
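A minimal sketch of that workaround (the context and entity names here are hypothetical):

Course course;
using (var first = new SchoolContext())
{
    course = first.Courses.First(c => c.Id == courseId);
} // the first context is disposed here, leaving the entity detached

using (var second = new SchoolContext())
{
    // Attach lets the second context track the already-loaded entity,
    // so it can participate in this context's queries and updates.
    second.Attach(course);
    // ... run the rest of the query with `second` only ...
}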

Continuing issues with "An entity object cannot be referenced by multiple instances of IEntityChangeTracker" using MVC3 EF4.1

I posted this question previously and it explains what I'm doing pretty thoroughly:
ASP.NET MVC3 and Entity Framework v4.1 with error An entity object cannot be referenced by multiple instances of IEntityChangeTracker
The problem is that this issue has resurfaced several times with the mini-cart, lost cart, checkout page, etc., after I fixed the particular problem in the question above. The further issues have been related, but not necessarily easy to identify, and they took a considerable amount of time to troubleshoot, find, and fix. Rather than post my most current specific issue, I'd rather find out if I'm doing something wrong in general: either in storing the cart (which is an entity) in a Session, or in how I detach it (detach method shown below). Or is there an easier way to debug these types of issues? Here is an update to my detach method:
public void DetachCart(Cart cart)
{
    var objectContext = ((IObjectContextAdapter)context).ObjectContext;
    if (cart.Customer != null)
    {
        objectContext.Detach(cart.Customer);
    }
    if (cart.ShipFromAddress != null)
    {
        var shipFromAddress = cart.ShipFromAddress;
        objectContext.Detach(cart.ShipFromAddress);
        cart.ShipFromAddress = shipFromAddress;
    }
    if (cart.ShipToAddress != null)
    {
        var shipToAddress = cart.ShipToAddress;
        objectContext.Detach(cart.ShipToAddress);
        cart.ShipToAddress = shipToAddress;
    }
    if (cart.Lines != null && cart.Lines.Count > 0)
    {
        List<CartLine> lines = new List<CartLine>();
        foreach (var item in cart.Lines.ToList())
        {
            objectContext.Detach(item);
            lines.Add(item);
        }
        cart.Lines = lines;
    }
    objectContext.Detach(cart);
}
Thank you for any insight you could provide me on this issue. It's been a long painful road with this one.
UPDATE
It seems that a lot of my trouble stems from the fact that CartModelBinder leaves the cart in the attached state rather than the detached state. Changing that eliminated my current issue and removed several places where I had to detach to avoid it (a sketch of the change is below). However, my question "is there an easier way to detach all, or a way to debug/track these issues?" still stands.
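(For reference, a hedged sketch of what "leaving the cart detached" can look like in the binder; AsNoTracking is the EF mechanism for loading entities without attaching them, while the StoreContext name and the cart lookup are hypothetical:)

public class CartModelBinder : IModelBinder
{
    public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext)
    {
        var session = controllerContext.HttpContext.Session;
        var cart = (Cart)session["Cart"];
        if (cart == null)
        {
            using (var db = new StoreContext())
            {
                // AsNoTracking loads the cart already detached, so no second
                // context will complain about IEntityChangeTracker later.
                cart = db.Carts.AsNoTracking()
                         .FirstOrDefault(c => c.SessionId == session.SessionID);
            }
            if (cart == null)
                cart = new Cart();
            session["Cart"] = cart;
        }
        return cart;
    }
}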
There is one solution - don't use entities in your views or model binders. Use view models and convert them to entities only when you are going to save data to the database. It can make your application more complex, but it will save you a great amount of time when troubleshooting issues with leaked contexts, attaching, and detaching.
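A minimal sketch of that separation (the view model shape and StoreContext are hypothetical):

// Plain classes for views and binders: no change tracker ever holds them.
public class CartViewModel
{
    public int CartId { get; set; }
    public List<CartLineViewModel> Lines { get; set; }
}

public class CartLineViewModel
{
    public int ProductId { get; set; }
    public int Quantity { get; set; }
}

// Convert to entities only at save time, inside a single context.
public void SaveCart(CartViewModel vm)
{
    using (var db = new StoreContext())
    {
        var cart = db.Carts.Find(vm.CartId);
        if (cart == null)
        {
            cart = new Cart();
            db.Carts.Add(cart);
        }
        cart.Lines = vm.Lines
            .Select(l => new CartLine { ProductId = l.ProductId, Quantity = l.Quantity })
            .ToList();
        db.SaveChanges();
    }
}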

Performance of repository pattern and IQueryable<T>

I have no idea if I'm doing this right, but this is how a Get method in my repository looks:
public IQueryable<User> GetUsers(IEnumerable<Expression<Func<User, object>>> eagerLoading)
{
    IQueryable<User> query = db.Users.AsNoTracking();
    if (eagerLoading != null)
    {
        foreach (var expression in eagerLoading)
        {
            query = query.Include(expression);
        }
    }
    return query;
}
Let's say I also have a GeographyRepository with a similar GetCountries method.
I have two separate service-layer classes calling these two separate repositories, sharing the same DbContext (EF 4.1 code-first).
So in my controller, I'd do:
myViewModel.User = userService.GetUserById(1);
myViewModel.Countries = geoService.GetCountries();
That's two separate calls to the database. If I didn't use these patterns and instead tied the interface directly to the database, I'd have one call. I guess it's something of a performance-versus-maintainability trade-off.
My question is: can this be pushed down to one database call? Can we merge queries like this when a view calls multiple repositories?
I'd say that if performance is the real issue then I'd try and avoid going back to the database altogether. I'm assuming the list returned from geoService.GetCountries() is fairly static, so I'd be inclined to cache it in the service after the initial load and remove the database hit altogether. The fact that you have a service there suggests that it would be the perfect place to abstract away such details.
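For illustration, a hedged sketch of that caching (the GeographyService shape and the load-once policy are assumptions):

public class GeographyService
{
    private readonly IGeographyRepository repository;

    // Loaded once and shared across requests; countries rarely change,
    // so later calls never touch the database.
    private static List<Country> countriesCache;
    private static readonly object cacheLock = new object();

    public GeographyService(IGeographyRepository repository)
    {
        this.repository = repository;
    }

    public IList<Country> GetCountries()
    {
        if (countriesCache == null)
        {
            lock (cacheLock)
            {
                if (countriesCache == null)
                    countriesCache = repository.GetCountries().ToList();
            }
        }
        return countriesCache;
    }
}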
Generally, when asking about performance, it's rare that all perf-related issues can be tarred with the same brush; you need to analyse each situation and work out an appropriate solution for the specific perf issue you're having.

How do I implement / build / create an 'in-memory database' for my unit tests?

I started unit testing a while ago, and as it turned out, I was doing more regression testing than unit testing, because I also included my database layer and so hit the database every time.
So I implemented Unity to inject a fake database layer, but of course I want to store some data, and the main opinion was: "create an in-memory database".
But what is that, and how do I implement it?
Main question: I think I have to fake the database layer, but doesn't that mean I end up creating a 'simple database' myself? Or: how can I keep it simple and not rebuild SQL Server just for my unit tests :)
At the end of this question I'll give an explanation of the situation I ran into on the project I just started on, and I was wondering if this was the way to go.
Michel
The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that connects all the XML files together.
For the real database we're using the Entity Framework, and this works very simply.
And now, in the 'fake' layer, I have to create all kinds of classes to load, save, persist, etc. the data.
It seems weird that there is so much work in the fake layer and so little in the real layer.
I hope this all makes sense :)
EDIT:
So I know I have to create a separate database layer for my unit tests, but how do I implement it?
Define an interface for your data access layer and have (at least) two implementations of it:
The real database provider, which will in turn run queries on an SQL database, etc.
An in-memory test provider, which can be prepopulated with test data as part of each unit test.
The advantage of this is that the modules making use of the data provider do not need to know whether the database is the real one or the test one, and hence more of the real code will be tested. The test database can be simple (like simple collections of objects) or complex (custom structures with indexes). It can also be a mocked implementation that will assert that it's being called appropriately as part of the test.
Additionally, if you ever need to support another data storage method (or different SQL database), you just need to write another implementation that conforms to the interface, and can be confident that none of the calling code will need to be reworked.
This approach is easiest if you plan for it from (or near) the start, so I'm not sure how easy it will be to apply to your situation.
What it might look like
If you're just loading and saving objects by id, then you can have an interface and implementations like (in Java-esque pseudo-code; I don't know much about asp.net):
interface WidgetDatabase {
    Widget loadWidget(int id);
    void saveWidget(Widget w);
    void deleteWidget(int id);
}

class SqlWidgetDatabase implements WidgetDatabase {
    Connection conn;

    // connect to database server of choice
    SqlWidgetDatabase(String connectionString) { conn = new Connection(connectionString); }

    public Widget loadWidget(int id) {
        conn.executeQuery("SELECT * FROM widgets WHERE id = " + id);
        Widget w = conn.fetchOne();
        return w;
    }
    // more methods that run simple sql queries...
}

class MemoryWidgetDatabase implements WidgetDatabase {
    Set<Widget> widgets;

    MemoryWidgetDatabase() { widgets = new HashSet<Widget>(); }

    public Widget loadWidget(int id) {
        for (Widget w : widgets)
            if (w.getId() == id)
                return w;
        return null;
    }
    // more methods that find/add/delete a widget in the "widgets" set...
}
If you need to run other queries (such as batch selects based on more complex criteria), you can add methods for them to the interface.
Likewise for complex updates. Transaction support is possible for the real database implementation; I'm not sure how easy it is to build an in-memory DB capable of providing proper transaction support. To test it you'd need to "open" several "connections" to the same data set, and to only apply updates to that shared data set when a transaction is committed.
I used SQLite as a fake DB for unit tests.
Why don't you use a mocking framework (like Moq or Rhino Mocks)? If you access your data through an interface, you can mock that interface and specify whatever you want to return in every test. Another approach is to have a separate environment for testing purposes, with a "real" database, where you run tests before taking your code to the production environment.
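For illustration, a minimal Moq sketch against a widget interface like the one in the earlier answer, translated to C# (the interface and Widget class here are stand-ins, not a real API):

using Moq;
using Xunit;

public class Widget
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IWidgetDatabase
{
    Widget LoadWidget(int id);
    void SaveWidget(Widget w);
}

public class WidgetTests
{
    [Fact]
    public void MockReturnsCannedData()
    {
        // Arrange: tell the mock what to return; no database involved.
        var db = new Mock<IWidgetDatabase>();
        db.Setup(d => d.LoadWidget(5)).Returns(new Widget { Id = 5, Name = "gear" });

        // Act: code under test would receive db.Object through the interface.
        Widget w = db.Object.LoadWidget(5);

        // Assert: the canned value came back and the call was made exactly once.
        Assert.Equal("gear", w.Name);
        db.Verify(d => d.LoadWidget(5), Times.Once());
    }
}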
Uhhh... if you're storing all your test data in XML files, you've just swapped one database for another. That is not an in-memory database. In PHP you would use something like this:
class MemoryProductDB {
    private $products;

    function MemoryProductDB() {
        $this->products = array();
    }

    public function find($index) {
        return $this->products[$index];
    }

    public function save($product) {
        $this->products[$product['index']] = $product;
    }
}
Notice that all my data is stored in a memory array and retrieved from a memory array. This is a simple in-memory database.
IMHO, if you're using XML to store test data then you haven't effectively disconnected the model from the database. No matter how complex your business rules are, when they touch the database all they really do is CRUD (create, retrieve, update, and delete).
If what you're dealing with in the model is multiple objects from the database, then maybe you need to compose all those objects into a single object and have the model use that one object. An example would be an order composed of products. Don't retrieve products and then save products; retrieve orders and save orders, and have your model work on orders. The model shouldn't know anything about products.
This is called granularity of abstraction.
[Edit]
There was a very good question in the comments. When testing with an in-memory database we don't care how the select works in a database. The controller, first off, needs the database to count the number of records available for paging. The IMDb (in-memory database) should just send back a number; the controller should never care what that number is. Same with the actual records. Hopefully all your controller does is display what it gets back from the IMDb.
[Edit]
You should never unit test your controllers with a live model and IMDb. The setup code for the IMDb will have a lot of friction. Instead, when unit testing a controller, test against a mock, stub, or fake model. The best use of an IMDb is during an integration test or when unit testing a model. Isn't an IMDb a fake?
My scenario is:
In my client I use a plug-in for a table: DataTables, with server-side processing.
The client GETs items in the table: product.get(5,10). The returned data will be JSON-encoded.
The model is responsible for forming the JSON from the information retrieved through the gateway to the database. The gateway is just a facade over the database. I'm a mocker, so my gateway is a mock, not an in-memory gateway.
public function testSkuTable() {
    $skus = array(
        array('id' => '1', 'data' => 'data1'),
        array('id' => '2', 'data' => 'data2'),
        array('id' => '3', 'data' => 'data3'));
    $names = array(
        'id',
        'data');

    $start_row = $this->parameters['start_row'];
    $num_rows = $this->parameters['num_rows'];
    $sort_col = $this->parameters['sort_col'];
    $search = $this->parameters['search'];
    $requestSequence = $this->parameters['request_sequence'];
    $direction = $this->parameters['dir'];
    $filterTotals = 1;
    $totalRecords = 1;

    $this->gateway->expects($this->once())
        ->method('names')
        ->with($this->vendor)
        ->will($this->returnValue($names));
    $this->gateway->expects($this->once())
        ->method('skus')
        ->with($this->vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction)
        ->will($this->returnValue($skus));
    $this->gateway->expects($this->once())
        ->method('filterTotals')
        ->will($this->returnValue($filterTotals));
    $this->gateway->expects($this->once())
        ->method('totalRecords')
        ->with($this->vendor)
        ->will($this->returnValue($totalRecords));

    $expectJson = '{"sEcho": '.$requestSequence.', "iTotalRecords": '.$totalRecords.', "iTotalDisplayRecords": '.$filterTotals.', "aaData": [ ["1","data1"],["2","data2"],["3","data3"]] }';
    $actualJson = $this->skusModel->skuTable($this->vendor, $this->parameters);
    $this->assertEquals($expectJson, $actualJson);
}
You will notice with this unit test that I'm not concerned with what the data looks like. $skus doesn't even look anything like the actual table schema; just that I return records. Here is the actual code for the model:
public function skuTable($vendor, $parameterList) {
    $startRow = $parameterList['start_row'];
    $numRows = $parameterList['num_rows'];
    $sortCols = $parameterList['sort_col'];
    $search = $parameterList['search'];
    if ($search == null) {
        $search = "";
    }
    $requestSequence = $parameterList['request_sequence'];
    $direction = $parameterList['dir'];

    $names = $this->propertyNames($vendor);
    $skus = $this->skusList($vendor, $names, $startRow, $numRows, $sortCols, $search, $direction);
    $filterTotals = $this->filterTotals($vendor, $names, $startRow, $numRows, $sortCols, $search, $direction);
    $totalRecords = $this->totalRecords($vendor);

    return $this->buildJson($requestSequence, $totalRecords, $filterTotals, $skus, $names);
}
The first part of the method pulls the individual parameters from the $parameterList I get from the GET request. The rest are calls to the gateway. Here is one of those methods:
public function skusList($vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction) {
    return $this->skusGateway->skus($vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction);
}
I've been using in-memory SQLite for my unit tests; it's really useful.
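For illustration, a hedged sketch of the in-memory SQLite approach using the Microsoft.Data.Sqlite package (the table and queries are placeholders; System.Data.SQLite works along the same lines):

using Microsoft.Data.Sqlite;

public static class InMemorySqliteDemo
{
    public static void Main()
    {
        // ":memory:" creates a database that lives only as long as the connection.
        using (var conn = new SqliteConnection("Data Source=:memory:"))
        {
            conn.Open();

            var create = conn.CreateCommand();
            create.CommandText = "CREATE TABLE widgets (id INTEGER PRIMARY KEY, name TEXT)";
            create.ExecuteNonQuery();

            var insert = conn.CreateCommand();
            insert.CommandText = "INSERT INTO widgets (id, name) VALUES (1, 'gear')";
            insert.ExecuteNonQuery();

            var select = conn.CreateCommand();
            select.CommandText = "SELECT name FROM widgets WHERE id = 1";
            System.Console.WriteLine((string)select.ExecuteScalar()); // prints "gear"
        }
    }
}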

MVC DataContext Ok to Share One Connection or Not

When is the "unit of work" no longer a "unit"? Which scenario is better on resources? The first creates one connection, whereas the second creates four.
using (DataContext dc = new DataContext())
{
    var orders = from o in dc.Orders
                 select new Product
                 {
                     PropertyA = from a in ..join.. select x,
                     PropertyB = from b in ..join.. select y,
                     PropertyC = from c in ..join.. select z
                 };
}
OR
using (DataContext dc = new DataContext())
{
    var orders = from o in dc.Orders
                 select new Product
                 {
                     PropertyA = GetPropertyA(),
                     PropertyB = GetPropertyB(),
                     PropertyC = GetPropertyC()
                 };
}
When using LINQ, if you add one entity to another and they use different data contexts, you will get an error. I would suggest keeping the same data context for each unit of work you are using. You can easily identify the unit of work you are processing by using the repository pattern to write your data access classes; a sketch follows below. If you're not clear on the repository pattern, check out this entry.
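As a hedged sketch of that suggestion (all names are illustrative), each repository takes the shared context for the current unit of work instead of creating its own:

public class OrderRepository
{
    private readonly DataContext dc;
    public OrderRepository(DataContext dc) { this.dc = dc; }

    public IQueryable<Order> All() { return dc.Orders; }
}

public class ProductRepository
{
    private readonly DataContext dc;
    public ProductRepository(DataContext dc) { this.dc = dc; }

    public IQueryable<Product> All() { return dc.Products; }
}

public static class UnitOfWorkDemo
{
    public static void Run()
    {
        using (var dc = new DataContext())
        {
            // Both repositories see the same context, so entities from one
            // can safely appear in queries built from the other.
            var orders = new OrderRepository(dc);
            var products = new ProductRepository(dc);

            var query = from o in orders.All()
                        join p in products.All() on o.ProductId equals p.Id
                        select new { o, p };
        }
    }
}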
Generally speaking, the fewer round trips to your database the better. However, in high-scale apps we often make our databases more and more 'dumb', so we end up not doing any joins to avoid tying up CPU and memory on the SQL database server. It really depends on which resources you find more scarce. If your SQL server has plenty of resources, do the join there; otherwise pulling the data back separately would be better. Have you looked at the results of this in the SQL Profiler? Are you sure it's 4 connections, or just 4 separate calls?
