My MVC4 app uses code-first Entity Framework 5.0. I want to access my SQL Server data from a timer thread. Is there any reason why I can't instantiate, use, and dispose an instance of the same derived DbContext class that I also use on the main ASP.NET worker thread? (Yes, I use the same using() pattern to instantiate, use, and dispose the object on the main thread.)
A little problem context: My website has a WebsiteEnabled field in a table in the database. Currently, I incur a database fetch for each GET request to read that value. I want to change the code to read the value once every 15 seconds on a background thread, and store the value in a static variable that the GET request can read. I know that you run into problems if you try to instantiate multiple instances of the same DbContext on the same thread; I'm not sure if the same restrictions apply to instances of the same DbContext on different threads.
We use a background thread as well to check for emails and do cleanups every so often in one of our larger MVC applications. As long as you create a new context (and dispose of it) on the background thread, and don't try to use the one from your main application thread, you will be fine. The DbContext is not thread safe, meaning you cannot share a single instance across multiple threads safely. That does not mean you cannot have multiple threads, each with its own context instance. The only caution is to beware of concurrency issues (two threads trying to update the same row at the same time).
Statics and EF are a recipe for a mess. Under ASP.NET there is one app pool and many threads. Store values in statics if you must, but never the context itself: always make sure each thread gets its own context.
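As a rough sketch of the timer approach from the question, under the rules above (each thread creates and disposes its own context; the names MyDbContext, WebsiteSettings, and WebsiteEnabled are placeholders, not from the original post):

```csharp
using System;
using System.Linq;
using System.Threading;

public static class WebsiteStatusCache
{
    // Written by the timer thread, read by request threads.
    // volatile so request threads always see the latest value.
    private static volatile bool _websiteEnabled = true;
    private static Timer _timer;

    public static bool WebsiteEnabled
    {
        get { return _websiteEnabled; }
    }

    public static void Start()
    {
        // Refresh every 15 seconds on a thread-pool thread.
        _timer = new Timer(Refresh, null, TimeSpan.Zero, TimeSpan.FromSeconds(15));
    }

    private static void Refresh(object state)
    {
        try
        {
            // A brand-new context, used and disposed entirely on this thread.
            using (var db = new MyDbContext())
            {
                _websiteEnabled = db.WebsiteSettings
                                    .Select(s => s.WebsiteEnabled)
                                    .First();
            }
        }
        catch (Exception)
        {
            // Log and keep the last known value: an unhandled exception
            // in a timer callback would tear down the process.
        }
    }
}
```

GET requests then read `WebsiteStatusCache.WebsiteEnabled` without touching the database.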
But given your problem, there is a simple out-of-the-box solution I would use: output caching. In the controller that should serve cached values, decorate the GET method. You can cache per ID, for a specific period of time. Worth checking out. Let IIS and ASP.NET do the work for you. :-)
[OutputCache(Duration = int.MaxValue, VaryByParam = "id", Location = OutputCacheLocation.ServerAndClient)]
public ActionResult Get(string id) {
    // The value that can be cached is collected with a NEW CONTEXT !
    using (var db = new MyDbContext()) { // placeholder context/DbSet names
        return View(db.Settings.Find(id));
    }
}
In my project I use Entity Framework 4.0 as the ORM to persist data in SQL Server.
My project is a ribbon-style WinForms application with a grid view and a navigation tree in the main form, and a ribbon panel on top. The app basically acts as a CRUD UI with very little business logic.
Being new to EF, I built this project by creating an ObjectContext instance in the orchestrating form (the main form, the one the user sees as the application), holding it as a member variable, and binding a query to the grid view.
For various events (ribbon panel button clicks, grid view row clicks, etc.) I open another Windows form. In that form I create another ObjectContext and store it in a member variable of that form class.
I had read through few blogs and questions like:
How to decide on a lifetime for your objectcontext
Entity Framework and ObjectContext n-tier architecture etc.
One set of authors suggests sharing the ObjectContext, while others suggest keeping it short-lived and unshared.
I reached this state of confusion because the changes I make to the ObjectContext in one of the child forms are not reflected in the parent form that showed it. I attempted to refresh, but still nothing useful. Just as an experiment, I shared the ObjectContext created in the outermost parent class through constructor injection, and my change-reflection problem was solved.
It would be a lot of work for me to convert all my child forms to share the ObjectContext, but I am ready if it is worth it. I am just not sure what lurking problems sharing it would bring.
I may opt for a static instance of the ObjectContext, since this is not a web app and I am not planning for multi-threaded scenarios. If required, I could promote it to a singleton.
My Questions:
To share or not to share ObjectContext for my situation?
If not to share, how can I solve my present problem of updating one objectContext with the changes made in other?
If to share - which would be better way? Static or singleton or something else?
The details of the project and environment are as below:
Winforms
C#
VS 2012
EF 4.0, model created with data first approach.
I am posting this after searching and reading through many questions and blog posts. The more I read, the more confusing it becomes :) Please bear with me if I am leaving someone to assume something to answer. I will try to update the question if such clarifications are asked through comments.
Your Questions
To share or not to share ObjectContext for my situation?
Do not share your context. The EntityFramework context should follow a UnitOfWork pattern. Your object context should be as short lived as possible without unnecessarily creating/destroying too many contexts. This usually translates to individual "operations" in your app as units of work. For a web app/api this might be per HttpWebRequest, or you might do it per logical data operation (for each of your implemented pieces of "Business Logic").
For example:
LoadBusinessObjects() would create a context, load your list of data plus any related data you want, then dispose of the context.
CreateBusinessObject() would create a context, create an instance of some entity, populate it with data, attach it to a collection in the context, save changes, and then dispose of the context.
UpdateBusinessObject() would read some object from the context, update it, save changes, and dispose of the context.
DeleteBusinessObject() would find a business object in the context, remove it from the collection in the context, save changes and dispose of the context.
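A minimal sketch of the first two operations under this unit-of-work pattern (MyEFContext, BusinessObjects, and the property names are illustrative assumptions):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class BusinessLogic
{
    // Each operation is its own unit of work: the context lives
    // only for the duration of the call.
    public static List<BusinessObject> LoadBusinessObjects()
    {
        using (var context = new MyEFContext())
        {
            // Materialize results (and any related data) with ToList()
            // before the context is disposed.
            return context.BusinessObjects
                          .Include("RelatedData")
                          .ToList();
        }
    }

    public static BusinessObject CreateBusinessObject(string name)
    {
        using (var context = new MyEFContext())
        {
            var entity = new BusinessObject { Name = name };
            context.BusinessObjects.Add(entity);
            context.SaveChanges();
            return entity;
        }
    }
}
```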
If not to share, how can I solve my present problem of updating one objectContext with the changes made in other?
This is a job for a pub/sub architecture. This can be as simple as a few static event handlers on your objects for each operation you implemented above. Then in your code for each business operation, you fire the corresponding events.
If to share - which would be better way? Static or singleton or something else?
Neither: do not keep a static or singleton context. The EF context will continue to grow in memory footprint as its state manager accumulates cached objects (both attached and not-attached) for every single interaction you do in your application. The context is not designed to work like this.
In addition to resource usage, the EF context is not thread safe. For example what if you wanted to allow one of your editor forms to save some changes at the same time as the tree list is loading some new data? With one static instance you better make sure this is all running on the UI thread or synchronized with a semaphore (yuck, and yuck - bad practices).
Example
Here's an example using C# and code first approach as per your post. Note, I'm not addressing things like data concurrency or threading to keep the example short. Also in a real application this concept is implemented with generics and reflection so that ALL of our models have basic events on them for Creating, Updating, Deleting.
public class MyCodeFirstEntityChangedArgs : EventArgs
{
    /// <summary>
    /// The primary key of the entity being changed.
    /// </summary>
    public int Id { get; set; }
    /// <summary>
    /// You probably want to make this an ENUM for Added/Modified/Removed
    /// </summary>
    public string ChangeReason { get; set; }
}
public class MyCodeFirstEntity
{
    public int Id { get; set; }
    public string SomeProperty { get; set; }
    /// <summary>
    /// Occurs when an instance of this entity model has been changed.
    /// </summary>
    public static event EventHandler<MyCodeFirstEntityChangedArgs> EntityChanged;
    /// <summary>
    /// C# only lets the declaring class raise an event, so expose a
    /// helper that business logic can call. Null-checked in case
    /// nothing has subscribed yet.
    /// </summary>
    public static void RaiseEntityChanged(object sender, MyCodeFirstEntityChangedArgs args)
    {
        var handler = EntityChanged;
        if (handler != null)
            handler(sender, args);
    }
}
public class MyBusinessLogic
{
    public static void UpdateMyCodeFirstEntity(int entityId, MyCodeFirstEntity newEntityData)
    {
        using (var context = new MyEFContext())
        {
            // Find the existing record in the database
            var existingRecord = context.MyCodeFirstEntityDbSet.Find(entityId);
            // Copy over some changes (in real life we have a
            // generic reflection based object copying method)
            existingRecord.SomeProperty = newEntityData.SomeProperty;
            // Save our changes via EF
            context.SaveChanges();
            // Fire our event so that other UI components subscribed
            // to it know to refresh/update their views.
            // ----
            // NOTE: If SaveChanges() threw an exception, you won't get here.
            MyCodeFirstEntity.RaiseEntityChanged(null, new MyCodeFirstEntityChangedArgs()
            {
                Id = existingRecord.Id,
                ChangeReason = "Updated"
            });
        }
    }
}
Now you can attach event handlers to your model from anywhere (it's a static event handler) like this:
MyCodeFirstEntity.EntityChanged += new EventHandler<MyCodeFirstEntityChangedArgs>(MyCodeFirstEntity_LocalEventHandler);
and then have a handler in each view that will refresh local UI views whenever this event is fired:
static void MyCodeFirstEntity_LocalEventHandler(object sender, MyCodeFirstEntityChangedArgs e)
{
// Something somewhere changed a record! I better refresh some local UI view.
}
Now every UI component you have can subscribe to what events are important to it. If you have a Tree list and then some editor form, the tree list will subscribe for any changes to add/update/remove a node (or the easy way - just refresh the whole tree list).
Updates Between Applications
If you want to go a step further and even link separate instances of your app in a connected environment you can implement a pub/sub eventing system over the network using something like WebSync - a comet implementation for the Microsoft Technology Stack. WebSync has all the stuff built in to separate events into logical "channels" for each entity/event you want to subscribe to or publish to. And yes, I work for the software company who makes WebSync - they're paying for my time as I write this. :-)
But if you didn't want to pay for a commercial implementation, you could write your own TCP socket client/server that distributes notifications for the above events when an entity changes. Then when the subscribing app gets the notification over the network, it can fire its local event handlers the same way which will cause local views to refresh. You can't do this with a poorly architected static instance of your data context (you'd be bound to only ever have one instance of your app running). With some good setup early on, you can easily tack on a distributed pub-sub system later that works across multiple instances of native apps and web apps all at the same time! That gets very powerful.
When do you dispose an Entities object context objects in entity framework and MVC?
For example, if I have a Persons table and I select a record in a controller method, then dispose of the context and pass the record back to my view, the record won't be usable in the view.
Should I be disposing it somehow after my view is processed? or not disposing it at all?
One option is to create it in Global.asax's begin request event, and dispose of it in Global.asax's end request event. Every page simply uses that one (stored and obtained in HttpContext.Current.Items or in thread local storage) without disposing it. That lets it be available to your view to do lazy loading but still disposes of it after the request is completed.
The other option is to make sure everything you need is already loaded before calling your view (via .First(), .ToList(), and .Include(property) to include navigation property data) and dispose of it immediately. Both methods work.
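A rough sketch of the first option, a per-request context stored in HttpContext.Current.Items (the key name and MyDbContext are assumptions):

```csharp
using System.Web;

public class MvcApplication : HttpApplication
{
    private const string ContextKey = "__RequestDbContext"; // placeholder key

    protected void Application_BeginRequest()
    {
        // One context per request, shared by controllers and views.
        HttpContext.Current.Items[ContextKey] = new MyDbContext();
    }

    protected void Application_EndRequest()
    {
        // Dispose only after the view has rendered, so lazy loading
        // in the view still works against a live context.
        var context = HttpContext.Current.Items[ContextKey] as MyDbContext;
        if (context != null)
            context.Dispose();
    }

    // Helper so controllers and views can grab this request's context.
    public static MyDbContext CurrentContext
    {
        get { return (MyDbContext)HttpContext.Current.Items[ContextKey]; }
    }
}
```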
I assume you're talking about disposing the Entity Framework "Contexts," since the objects themselves aren't disposable.
We've found it best to leave the entities themselves in our data layer and map them to POCOs/DTOs that contain all the information we need for a given view. That way we're not trying to lazy-load data while we render our view. We wrap the data-access code in a using(var context = contextFactory.Get()), so that the context will automatically be disposed before the method ends, but after we have loaded all the data we're retrieving into an in-memory collection.
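The mapping approach described above might look like this sketch (contextFactory comes from the post; PersonDto and the property names are illustrative assumptions):

```csharp
using System.Collections.Generic;
using System.Linq;

public List<PersonDto> GetPeopleForView()
{
    // The using block guarantees the context is disposed before the
    // view renders; everything the view needs is copied into DTOs first.
    using (var context = contextFactory.Get())
    {
        return context.People
            .Select(p => new PersonDto
            {
                Id = p.Id,
                Name = p.Name,
                // Flatten navigation properties here so the view
                // never triggers lazy loading against a dead context.
                OrderCount = p.Orders.Count()
            })
            .ToList(); // materialize while the context is still alive
    }
}
```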
Let's consider the typical usage pattern of a user: you will never just open one item and go away. In practice we move back and forth between items, search and review items again, modify and save them.
If you keep your ObjectContext alive for the entire session, you will use a little more memory per user, but you will reduce application-to-database round trips, and you will be able to accumulate changes and save them all at once. Since EF implements the Identity Map pattern, you will not load multiple copies of the same object.
Otherwise, if you dispose of the ObjectContext, you will reduce memory but increase the overhead of loading objects again and again. You might load multiple copies of the same object across views, increasing the query load on the database server.
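The Identity Map behavior mentioned above can be illustrated like this (context and entity names are placeholders):

```csharp
using System;
using System.Linq;

using (var context = new MyEFContext())
{
    // Both queries go through the same context, so the identity map
    // hands back the SAME tracked instance rather than a second copy.
    var first = context.Items.First(i => i.Id == 42);
    var second = context.Items.First(i => i.Id == 42);

    bool sameInstance = ReferenceEquals(first, second); // true

    // With two separate contexts, the two results would be distinct
    // objects, each tracked by its own context.
}
```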
I have a hierarchical structure with millions of records.
I'm doing a recursive scan on the DB in order to update some of the connections and some of the data.
The problem is that I get an OutOfMemoryException, since the entire DB is eventually loaded into the context (lazily). Data that I no longer need stays in the context with no way of removing it.
I also can't use using(context...) since I need the context alive, because I'm doing a recursive scan.
Please take the recursion as a fact.
Thanks
This sort of operation is really not handled well, nor does it scale well, using entities. I tend to resort to stored procedures for batch ops.
If you do want to remove/dump objects from context, I believe this post has some info (solution to your problem at the bottom).
I just ran into the same problem. I used NHibernate before I used EF as an ORM tool, and it had exactly the same problem. These frameworks simply keep the objects in memory as long as the context is alive, which has two consequences:
a serious performance slowdown: the framework does comparisons between the objects in memory (e.g. to see if an object already exists or not). You will notice a gradual degradation of performance when processing many records;
you will eventually run out of memory.
If possible I always try to do large batch operations on the database using pure SQL (as the post above clearly states), but in this case that wasn't an option. To solve this, NHibernate has a Clear() method on the session, which throws away all objects in memory that refer to database records (new ones, added ones, corrupt ones, ...).
I tried to mimic this method in entity framework as follows (using the post described above):
public partial class MyEntities
{
    public IEnumerable<ObjectStateEntry> GetAllObjectStateEntries()
    {
        return ObjectStateManager.GetObjectStateEntries(EntityState.Added |
                                                        EntityState.Deleted |
                                                        EntityState.Modified |
                                                        EntityState.Unchanged);
    }

    public void ClearEntities()
    {
        foreach (var objectStateEntry in GetAllObjectStateEntries())
        {
            // Skip relationship entries: they have no Entity to detach,
            // and calling Detach(null) would throw.
            if (!objectStateEntry.IsRelationship && objectStateEntry.Entity != null)
                Detach(objectStateEntry.Entity);
        }
    }
}
The GetAllObjectStateEntries() method is taken separately because it's useful for other things. This goes into a partial class with the same name as your Entities class (the one EF generates, MyEntities in this example), so it is available on your entities instance.
I now call this clear method every 1000 records I process, and my application that used to run for about 70 minutes (only about 400k entities to process, not even millions) does it in 25 minutes. Memory used to peak at 300 MB; now it stays around 50 MB.
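The processing loop might look like this sketch; GetIdBatches and ProcessRecord are hypothetical helpers standing in for the recursive scan in the original question:

```csharp
// Process records in pages of 1000, clearing the context between
// pages so the state manager (and memory use) stays flat.
var context = new MyEntities();
try
{
    foreach (var batch in GetIdBatches(batchSize: 1000))
    {
        foreach (int id in batch)
            ProcessRecord(context, id); // loads + updates via the context

        context.SaveChanges();   // flush this batch's changes first
        context.ClearEntities(); // then detach everything we're done with
    }
}
finally
{
    context.Dispose();
}
```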
A week back, I had an ASP.NET MVC application that called on a logical POCO service layer to perform business logic against entities. One approach I commonly used was to use AutoMapper to map a populated viewmodel to an entity and call update on the entity (pseudo code below).
MyEntity myEntity = myService.GetEntity(param);
Mapper.CreateMap<MyEntityVM, MyEntity>();
Mapper.Map(myEntityVM, myEntity);
this.myService.UpdateEntity(myEntity);
The update call would take an instance of the entity and, through a repository, call NHibernate's Update method on the entity.
Well, I recently changed my logical service layer into WCF web services. I've noticed that the link NHibernate maintains with an entity is now lost when the entity is sent from the service layer to my application. When I try to operate on the entity in the update method, things are in NHibernate's session that shouldn't be, and vice versa; it fails, complaining about nulls on child identifiers and such.
So my question...
What can I do to efficiently take input from my populated viewmodel and ultimately end up modifying the object through NHibernate?
Is there a quick fix that I can apply with NHibernate?
Should I take a different approach in conveying the changes from the application to the service layer?
EDIT:
The best approach I can think of right now, is to create a new entity and map from the view model to the new entity (including the identifier). I would pass that to the service layer where it would retrieve the entity using the repository, map the changes using AutoMapper, and call the repository's update method. I will be mapping twice, but it might work (although I'll have to exclude a bunch of properties/children in the second mapping).
No quick fix. You've run into the change tracking over the wire issue. AFAIK NHibernate has no native way to handle this.
These may help:
https://forum.hibernate.org/viewtopic.php?f=25&t=989106
http://lunaverse.wordpress.com/2007/05/09/remoting-using-wcf-and-nhibernate/
In a nutshell, your two options are to adjust your service to send state-change information over the wire in a form NHibernate can read, or to load the objects, apply the changes, and then save in your service layer.
Don't be afraid of doing a select before an update inside your service. This is good practice anyway to prevent concurrency issues.
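A sketch of the second option (select before update) inside the service layer; the repository, DTO, and property names are illustrative assumptions:

```csharp
public void UpdateEntity(MyEntityDto dto)
{
    // Load the persistent instance inside the service, so it is
    // attached to the current NHibernate session.
    MyEntity persistent = repository.GetById(dto.Id);

    // Apply the incoming changes onto the attached instance.
    persistent.Title = dto.Title;
    persistent.Description = dto.Description;

    // NHibernate's change tracking now sees exactly what changed;
    // flushing the session issues the appropriate UPDATE.
    repository.Save(persistent);
}
```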
I don't know if this is the best approach, but I wanted to pass along information on a quick fix with NHibernate.
From NHibernate.xml...
<member name="M:NHibernate.ISession.SaveOrUpdateCopy(System.Object)">
<summary>
Copy the state of the given object onto the persistent object with the same
identifier. If there is no persistent instance currently associated with
the session, it will be loaded. Return the persistent instance. If the
given instance is unsaved or does not exist in the database, save it and
return it as a newly persistent instance. Otherwise, the given instance
does not become associated with the session.
</summary>
<param name="obj">a transient instance with state to be copied</param>
<returns>an updated persistent instance</returns>
</member>
It's working although I haven't had time to examine the database calls to see if it's doing exactly what I expect it to do.
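In code, the quick fix amounts to something like the sketch below (entity name and session acquisition are placeholders). Note that SaveOrUpdateCopy returns the persistent instance, which may be a different object than the one passed in:

```csharp
public MyEntity UpdateEntity(MyEntity detached)
{
    using (ISession session = sessionFactory.OpenSession())
    using (ITransaction tx = session.BeginTransaction())
    {
        // Copies the detached object's state onto the persistent
        // instance with the same identifier, loading it if necessary.
        var persistent = (MyEntity)session.SaveOrUpdateCopy(detached);
        tx.Commit();
        return persistent; // continue working with the attached copy
    }
}
```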
What is the best way to do entity-based validation (each entity class has an IsValid() method that validates its internal members) in ASP.NET MVC, with a "session-per-request" model, where the controller has zero (or limited) knowledge of the ISession? Here's the pattern I'm using:
Get an entity by ID, using an IFooRepository that wraps the current NH session. This returns a connected entity instance.
Load the entity with potentially invalid data, coming from the form post.
Validate the entity by calling its IsValid() method.
If valid, call IFooRepository.Save(entity), which delegates to ISession.Save(). Otherwise, display error message.
The session is currently opened when the request begins and flushed when the request ends. Since my entity is connected to a session, flushing the session attempts to save the changes even if the object is invalid.
What's the best way to keep validation logic in the entity class, limit controller knowledge of NH, and avoid saving invalid changes at the end of a request?
Option 1: Explicitly evict on validation failure, implicitly flush: if the validation fails, I could manually evict the invalid object in the action method. If successful, I do nothing and the session is automatically flushed.
Con: error prone and counter-intuitive ("I didn't call .Save(), why are my invalid changes being saved anyways?")
Option 2: Explicitly flush, do nothing by default: By default I can dispose of the session on request end, only flushing if the controller indicates success. I'd probably create a SaveChanges() method in my base controller that sets a flag indicating success, and then query this flag when closing the session at request end.
Pro: More intuitive to troubleshoot if dev forgets this step [relative to option 1]
Con: I have to call both IRepository.Save(entity) and SaveChanges().
Option 3: Always work with disconnected objects: I could modify my repositories to return disconnected/transient objects, and modify the Repo.Save() method to re-attach them.
Pro: Most intuitive, given that controllers don't know about NH.
Con: Does this defeat many of the benefits I'd get from NH?
Option 1 without a doubt. It's not counter intuitive, it's how NH works. Objects retrieved using NH are persistent and changes will be saved when the session is flushed. Calling Evict makes the object transient which is exactly the behavior you want.
You don't mention it but another option might be to use Manual or Commit FlushMode.
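The FlushMode suggestion would look roughly like this (session-opening details and the Foo entity are assumptions):

```csharp
// With FlushMode.Commit the session only flushes when a transaction
// commits, so dirty-but-invalid entities are never written implicitly
// at request end.
ISession session = sessionFactory.OpenSession();
session.FlushMode = FlushMode.Commit;

using (ITransaction tx = session.BeginTransaction())
{
    var entity = session.Get<Foo>(id);
    entity.Title = "Potentially invalid data";

    if (entity.IsValid())
        tx.Commit();   // the flush happens here
    // else: simply don't commit; disposing the transaction rolls back
    // and nothing is flushed to the database.
}
```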
How about a validation service with an IsValid (or similar) method that validates the object passed to it? If validation fails, it could publish a ValidationFailed event. Then when your request finishes, instead of calling the session's Flush you could publish a RequestEnd event. You could then have a handler that listens for both RequestEnd and ValidationFailed events: if a ValidationFailed event occurred, don't flush the session; otherwise, flush it.
Having said that I just do Option 2!
As Mauricio and Jamie have pointed out in their answers/comments, it's not easy (and probably not desirable) to do exactly what the question asks. NH returns persistent objects, so exposing those objects to the controllers means the controllers are responsible for treating them as such. I want to use lazy loading, so exposing detached instances won't work.
Option 4: Introduce a new pattern that provides the desired semantics
The point of this question is that I'm introducing NH + repositories to an existing project that uses a hand-rolled, Active-Record-like DAL. I want code written against NH to use patterns similar to the legacy code.
I created a new class called UnitOfWork that is a very thin wrapper over an ITransaction and that knows how to access the ambient session related to the current HttpRequest. This class is designed to be used in a using block, similar to TransactionScope which the team is familiar with:
using (var tx = new UnitOfWork()) {
    var entity = FooRepository.GetById(x);
    entity.Title = "Potentially Invalid Data";
    if (!entity.IsValid()) {
        tx.DiscardChanges();
        return View("ReloadTheCurrentView");
    }
    else {
        tx.Success();
        return RedirectToAction("Success");
    }
}
The tx.DiscardChanges() is optional, this class has the same semantics as TransactionScope which means it will implicitly rollback if it is disposed before the success flag is set.
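The wrapper itself isn't shown above; a minimal sketch of what it might look like, assuming an ambient per-request session accessor (SessionProvider.CurrentSession is a placeholder for however the session is stored, e.g. HttpContext.Current.Items):

```csharp
public class UnitOfWork : IDisposable
{
    private readonly ITransaction _tx;
    private bool _succeeded;

    public UnitOfWork()
    {
        // Begin a transaction on the ambient session tied to the
        // current HTTP request.
        _tx = SessionProvider.CurrentSession.BeginTransaction();
    }

    public void Success()
    {
        _succeeded = true;
    }

    public void DiscardChanges()
    {
        _succeeded = false;
    }

    public void Dispose()
    {
        // TransactionScope-like semantics: commit only if Success()
        // was called, otherwise roll back implicitly.
        if (_succeeded)
            _tx.Commit();
        else
            _tx.Rollback();
        _tx.Dispose();
    }
}
```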
On a greenfield NH project I think it's preferable to use Option 1, as Jamie indicates in his answer. But I think Option 4 is a decent way to introduce NH on a legacy project that already uses similar patterns.