EntityState.Detached, do I need to reload? - entity-framework-4

All,
Using .Net 4 and EF 4.4 Database First.
Let's say I have a DbContext. I load data from this DbContext, do stuff, and then detach everything from the DbContext and dispose of the DbContext.
Then, I create a new DbContext (same model) and load other data that overlaps with data from the first DbContext. Do I need to call Entry().Reload() prior to executing my query, or will the detached entities refresh automatically when they're loaded into the new context?
The reason I ask is that I ran into an issue in the past where, when using the same DbContext, I had to manually reattach entities that were in a detached state and call Reload. So I'm wondering whether, in this situation, the entities that were detached from the prior DbContext are simply attached to the new DbContext or whether they're also refreshed.
Yes, I know I could set up a simple test, but I was curious whether someone else out there has already done this and could share their findings with the SO universe, saving others some time wondering about this.
Hopefully this question makes sense.
Thanks.

Entities are not automatically attached to the new context; you must attach them manually. If you then just load overlapping data, your attached entities will not be updated. You must force such an update either by calling Reload or by using ObjectContext and a MergeOption for the query. If you don't attach your detached entities and just run the query on the new context, you will get new data, but you will have two instances of the overlapping entities: one detached with old data and one attached with new data.
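A minimal sketch of the attach-and-reload pattern described above; the context class `MyContext`, entity `Customer`, and set `Customers` are hypothetical names, not from the question:

```csharp
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using System.Data.Objects;

// 'customer' was loaded by a previous, now-disposed context and is detached.
using (var context = new MyContext())
{
    // Attach marks the entity Unchanged; it does NOT query the database,
    // so the instance still carries the old values.
    context.Customers.Attach(customer);

    // Force a round trip so the entity reflects the current database state.
    context.Entry(customer).Reload();

    // Alternative: drop down to ObjectContext and set a MergeOption so that
    // subsequent queries overwrite the values of already-attached entities.
    var objectContext = ((IObjectContextAdapter)context).ObjectContext;
    objectContext.CreateObjectSet<Customer>().MergeOption = MergeOption.OverwriteChanges;
}
```

Without either the Reload or the MergeOption, the attached instance keeps whatever values it had when it was detached.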

will the detached entities refresh automatically when they're loaded into the new context?
No, they won't. Which is good, because you can attach changed entities (which often happens in a disconnected n-tier application). An automatic refresh would erase the changes. So you'll have to reload manually if you want a refresh.
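For instance, the usual disconnected-update pattern in an n-tier app depends on attach not refreshing (context and entity names here are hypothetical):

```csharp
// 'customer' was edited while detached, e.g. posted back from a web form.
using (var context = new MyContext())
{
    context.Customers.Attach(customer);

    // Mark it Modified so SaveChanges issues an UPDATE with the detached values.
    // If attaching triggered an automatic refresh, these edits would be lost.
    context.Entry(customer).State = EntityState.Modified;
    context.SaveChanges();
}
```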

Related

ASP.NET MVC - How to use code first and database first method in same project and with same context

I have a requirement to use the code-first pattern (which I have to implement) along with database first (which exists in the current system).
Now the condition is that I must not create a different context for new tables or any other changes I make in the database, but have to keep the current context from the DB-first pattern. Is it possible to have code first and DB first in the same project sharing the same context? Must I manage the .edmx file, or is it possible to handle the database from the code-first pattern only? And all of that while managing TransactionScope.
I need some suggestions on this.
There are some things that I learned from my above problem.
One cannot use the same context for code first and database first.
To use code first and database first in the same project, the contexts must be different, regardless of the ConnectionString (one can use the existing ConnectionString or make a new one).
It is not good practice to use two patterns at the same time, but if the situation demands it, there may be no choice. In that case one can generate POCO classes for code first from the database-first model, which can be useful.
TransactionScope can be used across any number of database connections and will work properly.
If I am missing anything, please add it so others can get a better idea, or at least save some time.
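A sketch of the TransactionScope point above, assuming a database-first context `LegacyContext` (a generated ObjectContext) and a code-first context `NewContext`; all names are illustrative:

```csharp
using System.Transactions;

using (var scope = new TransactionScope())
{
    using (var legacy = new LegacyContext())   // database first (.edmx)
    using (var modern = new NewContext())      // code first
    {
        legacy.Orders.AddObject(order);        // ObjectContext-style API
        modern.AuditEntries.Add(auditEntry);   // DbContext-style API

        legacy.SaveChanges();
        modern.SaveChanges();
    }

    // Both saves commit together; disposing without Complete() rolls both back.
    scope.Complete();
}
```

Note that spanning two open connections inside one TransactionScope can escalate the transaction to MSDTC, so it's worth testing this in the target environment.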

Refresh Controllers/Views after Database Change in MVC4 Database First

My DBA decided to rename some fields in the database, so I refreshed my EF data context.
Now I'm wondering if I need to delete / re-create my controllers and views or if I can 'refresh' them without deleting (since I've made modifications to my controllers). Either option would be faster than hand-editing in the changes, since there are quite a few.
Thanks for any advice.
Unfortunately, it can't be done... Refreshing the model only updates the entity classes, not views and controllers.
One trick is to scaffold a new controller under a new name while keeping the old controller as well. Then copy the important code from the old controller to the new one.
The same approach works for views.

EF database migrations

I'm working on an MVC4 application that uses Entity Framework 5, database migrations, and the asp membership api. In my development environment I can delete my database entirely, run a package (.zip), refresh the page and all works as expected; a database gets created, the seed method is called and some default values are stuck into the database.
Perfect, really easy too! But.....yes, there is always a but!
When I deploy this same package (only the db connection string changed) and run it in a remote environment, the behavior changes. Again, if I go in and delete the database entirely, run the package (.zip), and refresh the page, the database gets created with only the asp membership api tables. I've checked the connection string, and that is definitely not the cause, or else the database and membership tables couldn't have been created.
I am aware of using the nuget package manager, which is really a powershell instance, but I am not using it since it cannot be used in the production environment. I'd like the package to handle it all, and in my test environment, it works perfectly.
Does anyone have any suggestions? Could this be a side effect of mixed migration history?
Thanks in advance!
The default MVC4 Internet Application project trips up a lot of people. Microsoft wanted to demonstrate the SimpleMembership functionality, which requires an EF context, but it's not obvious that you have to actually modify this portion to use the rest of your app effectively.
So, a little primer on how Entity Framework works will help clear up why this is happening, I think.
Entity Framework (at least version 5; it may change in 6 or successive versions) only allows one DbContext per application. Most likely, you have at least two: the AccountsContext that was autogenerated by the project template and the context you use for the rest of your application. If you were to turn off automatic migrations and attempt to generate a migration, EF would helpfully tell you that you need to specify which context to use. However, automatic migrations are the default in Code First, very few people disable them, and thus very few ever get alerted.
When automatic migrations are enabled, and you have no existing database, EF happily (and silently) creates the database for you from your context. But, what if you have multiple contexts? Well, the other nasty little feature of EF is that it creates the database just-in-time. So, which context gets used is a function of which one is accessed first. Nice, huh? So, if you attempt to do anything like a logon, then the database is created from the AccountsContext and then, when you try to access something from your app's context, the database already exists, and EF does nothing.
So, what to do about this? Well, you need one context to remove ambiguity. You can still have more than one context if you want in your app, but you have to essentially tell EF that all the other contexts are database-first, i.e. don't do anything.
public class AccountsContext : DbContext
{
    public AccountsContext()
        : base("name=YourConnectionStringName")
    {
        Database.SetInitializer<AccountsContext>(null);
    }

    ...
}

public class MyAppContext : DbContext
{
    public MyAppContext()
        : base("name=YourConnectionStringName")
    {
    }

    // All your entities here, including stuff from AccountsContext
}
MyAppContext will be your "master" context, which will have every entity in your database that EF should know about. The base call on both serves to take EF's guessing out of the way and explicitly make sure everyone is on the same page about which database connection should be used. Your AccountsContext is now essentially database first, so it won't prompt EF to create the database or attempt any migrations -- that's what MyAppContext is for. You can create other contexts in the same way as AccountsContext to break up your app's functionality; you just have to mirror any DbSet property declarations into your "master" context.
Julie Lerman calls this concept "bounded contexts" in her book, Programming Entity Framework: DbContext, and presents a generic class all of your "bounded contexts" can inherit from, so you don't have to specify the connection string name and set the database initializer to null each time.
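A sketch of what such a generic base class could look like (the names are illustrative, not Lerman's exact code):

```csharp
using System.Data.Entity;

// Bounded contexts inherit from this, so the connection string name and the
// null database initializer are declared in one place.
public abstract class BoundedContext<TContext> : DbContext
    where TContext : DbContext
{
    static BoundedContext()
    {
        // Never let a bounded context create or migrate the database.
        Database.SetInitializer<TContext>(null);
    }

    protected BoundedContext()
        : base("name=YourConnectionStringName")
    {
    }
}

// AccountsContext from above, rewritten on top of the base class.
public class AccountsContext : BoundedContext<AccountsContext>
{
    // DbSet declarations for the membership entities go here.
}
```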

Integrating ice/ace:dataTable with JPA and request-scoped beans

I'm wondering what is the right way to deal with dataTables that take input in a Hibernate/JPA world. As far as I can tell, one of the following three choices is causing the whole house of cards to fall apart, but I don't know which one is wrong.
Semi-automatic transaction and EntityManager handling via a custom JSF PhaseListener that begins and commits transactions around every request
Putting editing components inside a dataTable
Using request-scoped managed beans that fetch their data from a request-scoped EntityManager (with some help from PrettyFaces to set IDs on the request scoped beans from their URLs)
Backing a dataTable with a request-scoped bean instead of a view- or session-scoped bean.
I see an ICEfaces dataTable demo using JPA but they are both manually managing the transactions and not displaying editing components by default. You click on the row which causes an object to be nominated for editability and then when you hit "save" it manually reconnects the object to the new EntityManager before manually triggering a save. I see the click-to-edit function here as giving us a way to ensure that the right object gets reattached to the current session, and I don't know how one would live without something similar.
The impression I'm getting about the new ICEfaces 3.0 ace:dataTable (née PrimeFaces 2.0 dataTable) is that it is intended to be used in a View- or Session-scoped bean, but I don't see how one could get around StaleObjectState and/or LazyInitializationExceptions if one has model objects coming out of the DAO in request A and EntityManager A and then being modified or paged in by request B with EntityManager B.
I suppose it might work under Java EE through some kind of deep fu, but I don't have the luxury of upgrading us from Tomcat 6 to anything fancier right now (though that is my intent in the long run). We're also not about to start using Spring or Seam or whatever the other cool stuff is. ICEfaces is weird enough for us, probably too weird honestly.
So to sum up, which of these is the wrong choice? The request-scoped entity manager, the request-scoped dataTable or using editing components inside a dataTable? Or is something else really at fault here?
If you ask me, the prime fault seems to be sticking with an almost bare Tomcat when your requirements seem to scream for something a little fancier. The mantra is normally that you use Tomcat when you don't need "all that other stuff", so when you do need it, why keep using a bare Tomcat?
That said, the pattern really isn't that difficult.
Have a view scoped backing bean
Obtain the initial data in an @PostConstruct method (when there are no parameters like IDs), or in a PreRenderViewEvent method in combination with view parameters
Use a separate Service class that uses an entity manager to obtain and save the data
Make the entity manager "transaction scoped"
Without EJB/CDI/Spring:
Obtain a new entity manager from an entity manager factory for every operation.
Start a (resource local) transaction, do the operation, commit transaction, close entity manager.
Return the list of entities directly from your backing bean, bind the edit mode input fields of the table to the corresponding properties of the entity.
When updating a single row, pass the corresponding entity to the update method of your service. Apart from the overhead of getting an entity manager, starting the transaction etc, this basically only calls merge() on the entity manager.
Realize that outside the service you're working with detached entities all the time. There is thus no risk for any LazyInitializationExceptions. The backing beans need to be in view scope so the correct (detached!) entity is updated by JSF, which your own code then passes to the service, which merges it into the persistence context.
The flow for persisting is thus:
View state            View scope              Transaction scoped PC
Facelet/components    Backing Bean            Service
Strings     ------>   Detached entities --->  Attached entities
(the flow for obtaining data is exactly the reverse)
Creating the Service this way is a little tedious and a kind of masochistic exercise, though. For an example app with just the two methods (get and update) discussed above it wouldn't be so bad, but for any sizable app this will quickly get out of hand.
If you are already adding JSF and JPA to Tomcat, just do yourself a favor and use something like TomEE. This is barely bigger than Tomcat (25MB vs 7MB) and contains all the stuff you're supposedly trying to avoid but in reality need anyway.
In case you absolutely can't upgrade your Tomcat installation (e.g. the product owner or manager thinks he owns the server instead of the developers), you might want to invest in learning about CDI. This can easily be added to your war (just one extra jar) and lets you abstract away lots of the tedious code. One thing that you could also really use is a JTA provider. This too can be added separately to your war, but the more of this stuff you add, the better off you'll be just using TomEE (or alternatives like GlassFish, Resin, JBoss, etc).
Also see this article, which covers various parts of your requirements: Communication in JSF 2.0

Using LINQ2SQL and MVC for wizard-type save functionality

I have an ASP.NET MVC application that uses LINQ2SQL as the database layer. I can save data back to the database no problem, but I've come across a few issues when trying to save in a wizard-type scenario where data is collected over a few different forms but not saved to the database until the last form's "Save" button is clicked.
At first I tried adding new objects to the datacontext using InsertOnSubmit() or DeleteOnSubmit(), and on the final page using SubmitChanges() to commit to the database. The problem with this is that if I tried to DeleteOnSubmit() an object that hadn't been submitted yet, I would get an error.
I got round it eventually by writing a lot of code to manage the state of each object (insert, update or delete) and then on the final submit I make all changes to the DataContext before saving.
I'm wondering if there is a better way of managing the state of objects across pages using LINQ2SQL or if the manual code is the best way round it?
If you use TempData["key"] = data; you can pass data between redirections.
It uses session state, but that can be overridden using your own provider that implements ITempDataProvider (probably the default will work for you though!)
I'd build the object up through the wizard and then do one SubmitChanges() on the last page of the wizard.
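A sketch of that TempData-based flow; the controller actions, the `OrderDraft`/`OrderStep2Model` classes, and the `ShopDataContext` are all hypothetical names:

```csharp
public ActionResult Step2(OrderStep2Model input)
{
    // Accumulate wizard state between redirects instead of touching the DataContext.
    var draft = TempData["OrderDraft"] as OrderDraft ?? new OrderDraft();
    draft.ShippingAddress = input.ShippingAddress;
    TempData["OrderDraft"] = draft;   // re-store so it survives the next request
    return RedirectToAction("Step3");
}

public ActionResult Finish()
{
    var draft = (OrderDraft)TempData["OrderDraft"];
    using (var db = new ShopDataContext())
    {
        // Objects only touch the DataContext here, so there is never a pending
        // insert that a later step would have to DeleteOnSubmit().
        db.Orders.InsertOnSubmit(draft.ToOrder());
        db.SubmitChanges();           // single commit at the end of the wizard
    }
    return RedirectToAction("Done");
}
```

This keeps all per-step state management out of LINQ2SQL entirely, which avoids the insert/delete ordering problem described in the question.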
Kindness & HTH,
Dan

Resources