Controlling and managing DbContext - asp.net-mvc

For now I have this context:
using System.Data.Entity; // DbContext and DbSet live here

namespace Dafoor_MVC.Models
{
    public class DafoorDBContext : DbContext
    {
        public DbSet<Department> departments { get; set; }
        public DbSet<Course> courses { get; set; }
        public DbSet<Reply> replies { get; set; }
    }
}
This context will grow large, because I have about 40 models that I want to add.
1- Is it a good idea to have all 40 models in one context?
2- I want this context to be shared among all users, because I don't want to hit the database with a query every time a record is already in the context. But this will affect server memory, so how can I implement something like "dispose the least recently used object, or any object that hasn't been accessed for some amount of time, from the context"? I don't want to dispose the whole context.
3- If point 2 doesn't work, can I put an instance of the context in a user session, so the context will be user-specific rather than application-specific?

is it a good idea to have the 40 models in one context?
There is nothing wrong with that if they all logically belong together.
i want this context to be shared among all users
No, you don't. You want to instantiate a context for each individual HTTP request, and dispose of it before the handling of the HTTP request is done. Do not cache your DbContext.
can I put an instance of the context in a user session so the context will be user-specific rather than application-specific?
You should not cache your context. However, you can store objects retrieved using the context in Session. You will not be able to update the objects / object graph without first re-attaching the objects to a new context.
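For illustration, here is a minimal sketch of that re-attach step, reusing the DafoorDBContext from the question (the action name and Session key are hypothetical):

public ActionResult SaveDepartment()
{
    // Hypothetical: "department" was retrieved and stored in Session on an earlier request.
    var department = (Department)Session["CurrentDepartment"];

    using (var db = new DafoorDBContext())
    {
        // Attach the detached object to the new context and mark it
        // modified so SaveChanges writes it back to the database.
        db.Entry(department).State = EntityState.Modified;
        db.SaveChanges();
    }

    return RedirectToAction("Index");
}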
UPDATE
Here's why the DbContext should never be stored beyond the current HTTP request:
One DbContext per web request... why?
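For reference, a minimal sketch of the per-request pattern (the controller is hypothetical; the Dispose override is the same shape the MVC/EF scaffolding generates):

public class DepartmentsController : Controller
{
    // One context per request: MVC creates a new controller instance for
    // each request and calls Dispose on it when the request completes.
    private readonly DafoorDBContext db = new DafoorDBContext();

    public ActionResult Index()
    {
        return View(db.departments.ToList());
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            db.Dispose();
        }
        base.Dispose(disposing);
    }
}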

1) There is no reason to worry about having 40+ DbSets in a single context class. They're simply collections, and are only populated with the objects you're currently using.
2) Instance members of DbContext and DbSet are not guaranteed to be thread-safe. I don't recommend the singleton approach.
3) You could, but be sure to handle database concurrency exceptions properly.
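On point 3, a sketch of one way to react to a conflict, using EF's DbUpdateConcurrencyException (the recovery shown, reloading the entity, is just one option):

try
{
    db.SaveChanges();
}
catch (DbUpdateConcurrencyException ex)
{
    // Another user changed or deleted the same row since we loaded it.
    // One option: reload the entity's values from the database and
    // ask the user to reapply their edit.
    var entry = ex.Entries.Single();
    entry.Reload();
}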

Related

Real examples of using EF lazy loading in MVC application

Can anyone post a correct and useful example of using EF lazy loading in an MVC application?
I've tried to research the question, but I can't find a proper case.
As a result, my conclusion is: since web apps are stateless, there is no sense in enabling lazy loading on entities. But that sounds strange, which is why the question is here.
Can you confirm or refute my conclusion?
EDIT
The word "stateless" is important in the context of this question. Let's consider two scenarios: the first relates to a WPF app, the second to MVC. Suppose there is the following simple object:
public class Person
{
    public int Age { get; set; }
    public string Name { get; set; }
    ...
    public virtual List<Activity> Activities { get; set; }
}
1) WPF. The user is able to request just the Person, without his Activities, and so gets a small portion of data; the overhead is reasonable. Later, the user can decide to request the person's activities. Thanks to the lazy loading mechanism, EF simply loads the activities without requesting the Person object again, since the Person still exists in the application (if we code it that way).
2) MVC. The same actions take place, with one difference: after the server responds, all resources, including the Person object, are disposed. We can't load the Person's activities the way we did in the WPF application; we are forced to load the Person again (the overhead increases compared with the WPF app).
The point is that lazy loading can be executed only within the scope of the context to which the entity is attached; if you dispose of the context, you cannot use it.
I don't think you understand what lazy loading does, as it has nothing to do with whether there's any state or not. It's not like caching. Lazy loading is simply Entity Framework overriding a property to add a custom getter that issues a query to fetch the object or set of objects when the property is accessed for the first time.
For example, if you had something like:
public class Foo
{
    public virtual Bar Bar { get; set; }
}
And you were to query a set of Foos from the database, the Bar property on all of them would be null, as EF would not yet have issued any queries to fetch the related Bar instances. However, if you were to iterate over this list of Foos and access some property on Bar (e.g. foo.Bar.Baz), then EF would issue a just-in-time query for the Bar instance, so that it could then return the Baz property on it.
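To make the timing concrete, here is a sketch of when that just-in-time query fires, and how eager loading avoids it (MyContext and its Foos set are assumed for the example, not from the original post):

using (var db = new MyContext()) // hypothetical context with a DbSet<Foo>
{
    var foos = db.Foos.ToList(); // one query: loads the Foos only

    var bar = foos[0].Bar;       // lazy load: a second query fires here,
                                 // the first time the Bar property is read

    // Eager loading fetches the related Bars in the initial query instead:
    var foosWithBars = db.Foos.Include(f => f.Bar).ToList();
}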

Discard an already saved entity

I have a distributed system where users can make changes into one single database. To illustrate the problem, let's assume we have the following entities:
public class Product
{
    public int Id { get; set; }
    public List<ProductOwner> ProductOwners { get; set; }
}

public class ProductOwner
{
    public int ProductId { get; set; }
    [ForeignKey("ProductId")]
    [InverseProperty("ProductOwners")]
    public Product Product { get; set; }

    public int OwnerId { get; set; }
    [ForeignKey("OwnerId")]
    public Owner Owner { get; set; }
}

public class Owner
{
    public int Id { get; set; }
}
Let's also assume we have two users, UserOne and UserTwo connected to the system.
UserOne adds Product1 and assigns Owner1 as an owner. As a result, a new ProductOwner1 is created with key=[Product1.Id, Owner1.Id]
UserTwo does the same operation, and another instance, ProductOwner2, with key=[Product1.Id, Owner1.Id] is created. This results in an EF exception on the server side, which is expected, as a row with key=[Product1.Id, Owner1.Id] already exists in the database.
Question
The issue above can be partly resolved by having some sort of real-time data refresh on both UserOne's and UserTwo's machines (I am already doing this) and running a validation task on the server to ignore, and not save, entities that are already in the DB.
The remaining issue is how to tell Breeze on UserTwo's machine to mark ProductOwner2 as saved and change its state from Added to Unchanged.
I think this is an excellent question, and it has been raised often enough that I wanted to chime in on how I would approach the above scenario, in hopes that others can find a good way to accomplish this from a Breeze.js perspective as well. This answer doesn't really address server logic, so it is incomplete at best.
Step 1 - Open a web socket
First and foremost we need some way to tell the other connected clients that there has been a change. SignalR is a great way to do this if you are using the ASP.NET MVC stack, and there are a bunch of other tools.
The point is that we don't need a heavyweight way of passing data down and forcing it into the client's cache; we just need a lightweight way to tell the client that some information has changed, so that it can decide whether to refresh. My recommendation would be a payload that either tells the client which entity type and Id changed, or names a collection of entities the client should refresh. Two examples of a JSON payload that would work well here -
{
    "entityChanges": [
        {
            "id": "123",
            "type": "product",
            "new": false
        },
        {
            "id": "234",
            "type": "product",
            "new": true
        }
    ],
    "collectionChanges": [
        {
            "type": "productOwners"
        }
    ]
}
In this scenario we are simply telling the client that the products with Ids 123 and 234 have changed, and that 234 happens to be a new entity. We aren't pushing any data about which properties changed, since it is the client's responsibility to decide whether to refresh or requery. There is also the possibility of telling the client to refresh a whole collection, as in the second array, but I will focus on the first example.
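On the server side, a minimal SignalR sketch of pushing such a payload out (the hub and the entityChanged callback name are invented for illustration):

using Microsoft.AspNet.SignalR;

public class EntityChangeHub : Hub { }

public static class ChangeNotifier
{
    // Call this after a successful save. "entityChanged" is the
    // client-side callback the JavaScript handler would subscribe to.
    public static void NotifyClients(string payloadJson)
    {
        var hub = GlobalHost.ConnectionManager.GetHubContext<EntityChangeHub>();
        hub.Clients.All.entityChanged(payloadJson);
    }
}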
Step 2 - Handle the changes
Ok, we got a payload from our web socket that we need to pass to some analyzer to decide whether to requery. My recommendation here is to check whether the entity exists in the cache, and if so, refresh it. If a flag comes down in the JSON saying it is a new entity, we probably need to fetch it as well. Here is some basic logic -
function checkForChanges(payload) {
    var parsedJson = JSON.parse(payload);
    $.each(parsedJson.entityChanges, function (index, item) {
        // If it is a new entity,
        if (item.new === true) {
            // go get it from the database.
            manager.fetchEntityByKey(item.type, item.id)
                .then(fetchSucceeded).fail(fetchFailed);
        } else {
            // Check the local cache first,
            var localEntity = manager.getEntityByKey(item.type, item.id);
            // and if we have a local copy already,
            if (localEntity) {
                // refresh it from the database.
                manager.fetchEntityByKey(item.type, item.id)
                    .then(fetchSucceeded).fail(fetchFailed);
            }
        }
    });
}
Now, there is probably some additional logic in your application that needs handling, but in a nutshell we are -
Opening up a lightweight connection to the client to listen for changes only
Creating a handler for when those changes occur
Applying some logic on how to query for or refresh the data
One consideration is that you may want to use different merge strategies depending on conditions. For instance, if the entity already has pending changes you may want to preserve them, whereas if it is an entity that is always in a state of flux you may want to overwrite changes.
http://www.breezejs.com/sites/all/apidocs/classes/MergeStrategy.html
Hope this provides some insight, and if it doesn't answer your question directly I apologize for crowding up the answers : )
Would it be possible to catch the Entity Framework unique key constraint error on the Breeze client, and react by creating a new entity manager (using the createEmptyCopy method), loading the relevant ProductOwner records, and using them to determine which ProductOwner records in the original entity manager need to be set to unchanged via the entity's entityAspect.setUnchanged method? Once this synchronization is done, the save can be retried.
In other words, the client is optimistic that the save will succeed, but can recover if necessary. The server remains oblivious to the potential race condition and needs no custom code.
A brute force approach; apologies if I'm stating the obvious.

persisting MVC data to ORM

The Java and .NET worlds are rich in open source frameworks and libraries. We all like to use Spring and Hibernate almost everywhere, and everyone agrees that Hibernate is a very handy tool.
What can Hibernate do? Basically, it can track changes to our domain objects and persist only the modified data to the database; that is it.
Basically, that is everything we want: load some records from the database, modify them, call transaction.commit(), and all modifications get persisted, instantaneously.
That is excellent, right?
But what about the web world? In web applications the database session must be closed; I cannot load some domain objects, wait for the user to make modifications over HTTP, and then persist those objects.
We have to use detached objects or DTOs. How does that work? The user makes modifications in the browser, Spring MVC automatically transfers those modifications to our customized DTO objects via model binding, and then we spend some programming effort transferring the modifications from the DTO objects to the Hibernate domain objects; only then do we persist them.
For example, we have one web form that updates a customer's address and another that updates the customer's details. We must have two different business layer methods, UpdateAddress() and UpdateDetails(); both must accept some kind of DTO, one representing address information, the other representing details information. We also need custom logic that transfers the data from those two DTOs to the domain class 'Customer'.
Yes, of course, instead of DTO objects we could reuse our domain classes, but that does not make things simpler: in both cases we would still have to implement custom logic that transfers the modifications to the persistent objects. I cannot persist a detached object right away, because domain classes usually have lots of properties representing numerous relations; for example, Customer has an Orders property, and when I update a customer's address I don't want to update his orders.
Is there a beautiful, universal way of mapping modifications from the MVC model to domain objects without writing a lot of custom code and without the risk of overwriting too many fields?
It's good practice to have a data access layer, which translates into having a repository for each domain object / entity. Furthermore, all repositories share common code, so you naturally have an abstract repository:
public abstract class AbstractRepository<E extends BaseModel> implements Repository<E> {

    @PersistenceContext
    private EntityManager entityManager;

    private Class<E> entityClass;

    public AbstractRepository(Class<E> entityClass) {
        this.entityClass = entityClass;
    }

    protected EntityManager getEM() {
        return entityManager;
    }

    protected TypedQuery<E> createQuery(String jpql) {
        return createQuery(jpql, entityClass);
    }

    protected <T> TypedQuery<T> createQuery(String jpql, Class<T> typeClass) {
        return getEM().createQuery(jpql, typeClass);
    }

    @Override
    public E merge(E entity) {
        return getEM().merge(entity);
    }

    @Override
    public void remove(E entity) {
        getEM().remove(entity);
    }

    @Override
    public E findById(long id) {
        return getEM().find(entityClass, id);
    }
}
It's also good practice to have a service layer where you create, update and delete instances of an entity (and where you could pass in a DTO to the create and update methods if you so desire).
...
@Inject
private CustomerRepository customerRepository;

public Customer createCustomer(CustomerDto customerDto) {
    Customer customer = new Customer();
    customer.setEmail(customerDto.getEmail());
    ...
    return customerRepository.merge(customer);
}

public Customer updateCustomerAddress(Customer customer, String address) {
    customer.setAddress(address);
    return customerRepository.merge(customer);
}
...
So it's up to you how many update methods you want. I would typically group them into common operations, such as updating the customer's address, where you would pass the customer Id and the updated address from the front end (probably via AJAX) to a controller listening on a specific endpoint. That endpoint is where you would use the repository to find the entity by Id and then pass it to your service to do the address update.
Lastly, you need to ensure the data actually gets persisted, so in Spring you can add the @Transactional annotation either to your Spring MVC controller or to the service that does the persisting. I'm not aware of any best practices around this, but I prefer adding it to my controllers so that you're always guaranteed to have a transaction no matter what service you are in.

Business entity that accesses data store to validate itself: an SRP violation?

Consider the following business entity class. In order to validate itself, it needs to know something about the state of the database, perhaps to prevent a conflict of some kind. So, it has a dependency on the data access layer in order to retrieve this data.
Is it a violation of the Single Responsibility Principle to have a class that encapsulates state, validates the state, and accesses a data store to do so?
class MyBusinessObject
{
    private readonly IDataStore DataStore;

    public MyBusinessObject(IDataStore dataStore)
    {
        this.DataStore = dataStore;
    }

    public virtual int? Id { get; protected set; }
    public virtual string Name { get; set; }

    // ... Other properties...

    public IEnumerable<ValidationResult> Validate()
    {
        var data = this.DataStore.GetDataThatInfluencesValidation();
        return this.ValidateUsing(data);
    }

    // ... ValidateUsing method would be in here somewhere ...
}
It's throwing a red flag for me because:
In the context of an ASP.NET MVC controller's Create method, I might make a new instance and pass it to my View() method with no intention of validating, so why should I be required to pass in an IDataStore?
I'm using NHibernate (and I'm a noob), and it looks like I have to create an IInterceptor that injects dependencies whenever NH creates entities. Maybe that will be fine, but it feels a little bit wrong to me.
I'm starting to think I should use an anemic/DTO type of object for NHibernate to use, and wrap that inside something else that knows all the business rules AND can access a data store if it depends on one. Now that I've typed up my question and title, StackOverflow has suggested some interesting resources: here and here.
My question also looks very similar to this one, but I thought I'd ask it in a different way that more closely matches my situation.
"Is it a violation of the Single Responsibility Principle to have a class that encapsulates state, validates the state, and accesses a data store to do so?" -- You listed three responsibilities so I'm going to answer yes.
The difficulty with validation is that it depends on context. For example, creating a customer record might require just their name but selling them a product requires payment information and a shipping address.
In MVC, I do low-level validation (data sizes and nullability) using data annotations in view models. More complex validation is done using the specification pattern, which is easy to implement and flexible.
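A bare-bones sketch of the specification pattern (the interface and the Customer rule are invented for illustration, echoing the checkout example above):

public interface ISpecification<T>
{
    // True if the candidate satisfies this business rule.
    bool IsSatisfiedBy(T candidate);
}

// Context-specific rule: selling requires payment information and a
// shipping address, while merely creating the customer record does not.
public class ReadyForCheckoutSpecification : ISpecification<Customer>
{
    public bool IsSatisfiedBy(Customer candidate)
    {
        return !string.IsNullOrEmpty(candidate.ShippingAddress)
            && candidate.PaymentInfo != null;
    }
}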

MVC + EF4 + POCO - How to go about storing the Entity Context?

I'm making a start on an MVC project, having gone through the MvcMusicStore tutorial. I'm trying to get my head around how the POCO-generated data/entity context is intended to be stored.
In the samples, the controller generates a copy of the entity context, and all operations complete there:
MusicStoreEntities storeDB = new MusicStoreEntities();

//
// GET: /Store/
public ActionResult Index()
{
    // Retrieve list of Genres from database
    var genres = from genre in storeDB.Genres
                 select genre.Name;
    [...]
If I'm to split my solution into layers, what is the standard practice (or what are the key options) for retaining the context? Do I generate it in the controller and pass it to the repository, or can the repository keep a general-use copy?
I understand that the above would be necessary to use the Unit of Work pattern.
My layers are:
Data (edmx file)
Entities (Generated from POCO)
Repository
Mvc web app
My other questions:
- What is the overhead of generating the context?
- As there is no .Close(), and it doesn't implement IDisposable, is the ObjectContext behind it creating individual connections, pooling connections, or sharing a single instance?
- Is it possible to lock an ObjectContext if it's passed around between layers / operations too much?
Thanks in advance.
I don't want to go into too much detail or code here, so I'll just mention some points:
Your controller can work with multiple repositories
There should be one repository per aggregate root
Controller work across multiple repositories is made possible by the Unit of Work
Use a DI container to handle lifetime management of the Unit of Work (which is actually the context)
Do not use singletons for the context; let the DI container instantiate and dispose of the context per HTTP request
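As a concrete illustration of the last two points, here is roughly what per-request lifetime looks like with Ninject (other containers have equivalents; MusicStoreEntities is borrowed from the question):

// In the Ninject bindings (Ninject.Web.Common): one context instance
// per HTTP request, disposed automatically when the request ends.
kernel.Bind<MusicStoreEntities>().ToSelf().InRequestScope();

// The controller then simply declares its dependency:
public class StoreController : Controller
{
    private readonly MusicStoreEntities storeDB;

    public StoreController(MusicStoreEntities storeDB)
    {
        this.storeDB = storeDB;
    }
}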
I create a single repository for each controller and put my context in there. The rules I follow are that the repository handles anything that I might want to mock (not really the definition of repository, but it works for me). The repository can call other repositories if necessary, but the controller shouldn't have to know about it. The Context is an instance property of the repository and is created on demand (I haven't taken the leap into IOC yet). If the repository calls another repository, it passes the Context instance.
It looks a little like this...
public class MyController : Controller
{
    public IMyControllerRepository Repository { get; set; }

    public ActionResult MyAction(int id)
    {
        var model = Repository.GetMyModel(id);
        return View(model);
    }
}

public class MyControllerRepository : IMyControllerRepository
{
    public MyContext Context { get; set; }

    public MyModel GetMyModel(int id)
    {
        return (from m in Context.MyModels
                where m.ID == id
                select m).SingleOrDefault();
    }
}
