ASP Mvc Nhibernate Issue - asp.net-mvc

I am experiencing some bizarre problems with NHibernate within my MVC web application.
There is not one consistent error; I keep getting loads of random ones:
Transaction not successfully started
New request is not allowed to start because it should come with valid transaction descriptor
Unexpected row count: -1; expected: 1
To give a little context to the setup, I am using Ninject to DI the sessions and other NHibernate-related objects. Currently I am using RequestScope, though I have also tried SingletonScope. I have a large and complicated data model, which is read out as a whole, but persisted back in separate parts, as these can all be edited and saved individually.
An example would be having a Customer object, which contains an address object, a contact object, friends object, previous orders object, etc...
So the whole object is read out, then mapped to the UI domain models and then displayed in different partials within the page. Each partial can be updated individually via Ajax, so you may update one section or you could update them all together. It seems mainly to give me problems when I try to persist them all together (so 2-4 simultaneous Ajax requests to persist chunks of the model).
Now I have integration tests that work fine, which just test the persistence and retrieval of entities, both as a whole and individually, and all pass fine. However, in the web app they just seem to keep throwing random exceptions, and originally refused to persist outside of the NHibernate cache. I found a way round this by wrapping most units of work within transactions, which got the data persisting but started adding new errors to the mix.
Originally I was thinking of just scrapping NHibernate from the project, as although I really want its persistence/caching layer, it just didn't seem to be flexible enough for my domain, which seems odd as I have used it before without much problem, although it doesn't like 1-1 mappings.
So has anyone else had flaky transaction/NHibernate issues like this within an ASP MVC app? I know this may be a bit vague as the errors don't point to one thing, and it doesn't always error, so it's like stabbing in the dark, but I am out of ideas so any help would be great!
-- Update --
I cannot post all relevant code as the project is huge, but the transaction bit looks like:
using (var transaction = sessionManager.Session.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    try
    {
        // Do unit of work
        transaction.Commit();
    }
    catch (Exception)
    {
        transaction.Rollback();
        throw;
    }
}
Some of the main problems I have had on this project have stemmed from:
There are some 1-1 relationships with composite keys, but logically it makes sense
The NHibernate domain entities go through a mapping layer to become the UI domain entities, then vice versa when saving. The problem here is that with the 1-1 mappings, when persisting the example Address I have to make a surrogate Customer object with the correct Id and then merge.
There is a lot of Ajax that deals with chunks of the overall model (I talk like there is one single model, but there are quite a few top-level models, just one that is most important)

Some notes that may help. I use Windsor, but I imagine the concepts are the same. It sounds like there may be a combination of things going on.
The SessionFactory should be created as a singleton and the session should be per web request. Something like:
Bind<ISessionFactory>()
    .ToProvider<SessionFactoryBuilder>()
    .InSingletonScope();

Bind<ISession>()
    .ToMethod(context => context.Kernel.Get<ISessionFactory>().OpenSession())
    .InRequestScope();
Be careful of keeping transactions open for too long; keep them as short-lived as possible to avoid deadlocks.
Check that your queries are running as expected by using a tool like NHProf. Often people load up too much of the graph, which impacts performance and can create deadlocks.
Check your mappings for things like Not.LazyLoad() and see whether you actually need the additional data in the queries; keep the results returned to a minimum. Check your queries' execution plans and ensure adequate indexes are in place.
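For example, in a Fluent NHibernate mapping (hypothetical Customer/Orders model), an eagerly loaded collection looks something like this, and is worth questioning if the extra data is rarely needed:
public class CustomerMap : ClassMap<Customer>
{
    public CustomerMap()
    {
        Id(x => x.Id);
        Map(x => x.Name);
        // Eagerly loads every order whenever a customer is loaded;
        // drop Not.LazyLoad() if the orders are rarely needed by the query.
        HasMany(x => x.Orders).Not.LazyLoad();
    }
}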
I have had issues with MVC 3 action filters being cached, which meant transactions were not always started but would still be closed, causing issues. I moved all my transaction commits into the ActionResults in the controllers to keep each transaction as short as possible and close to the action.
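As a rough sketch of what that looks like (assuming an injected ISession field and a hypothetical MapToEntity helper), the transaction lives entirely inside the action:
[HttpPost]
public ActionResult SaveAddress(AddressViewModel model)
{
    // Open, use and commit the transaction entirely within the action,
    // so nothing relies on a (possibly cached) action filter.
    using (var transaction = session.BeginTransaction())
    {
        session.SaveOrUpdate(MapToEntity(model)); // MapToEntity is a hypothetical mapping helper
        transaction.Commit();
    }
    return Json(new { success = true });
}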
Check your cascades in your mappings and keep the updates to a minimum.

Related

Breeze projection query from already-loaded entity

If I use breeze to load a partial entity:
var query = EntityQuery.from('material')
    .select('Id, MaterialName, MaterialType, MaterialSubType')
    .orderBy(orderBy.material);

return manager.executeQuery(query)
    .then(querySucceeded)
    .fail(queryFailed);

function querySucceeded(data) {
    var list = partialMapper.mapDtosToEntities(
        manager, data.results, entityNames.material, 'id');
    if (materialsObservable) {
        materialsObservable(list);
    }
    log('Retrieved Materials from remote data source',
        data, true);
}
...and I also want to have another, slightly different partial query from the same entity (with a few other fields, for example), then I'm assuming that I need to do another separate query, as those fields weren't retrieved in the first query?
OK, so what if I want to use the same fields retrieved in the first query (Id, MaterialName, MaterialType, MaterialSubType), but I want to call those fields different names in the second query (MaterialName becomes just "name", MaterialType becomes "masterType" and so on)? Is it possible to clone the partial entity I already have in memory (assuming it is in memory?) and rename the fields, or do I still need to do a completely separate query?
I think I would "union" the two cases into one projection if I could afford to do so. That would simplify things dramatically. But it's really important to understand the following point:
You do not need to turn query projection results into entities!
Background: the CCJS example
You probably learned about the projection-into-entities technique from the CCJS example in John Papa's superb PluralSight course "Single Page Apps JumpStart". CCJS uses this technique for a very specific reason: to simplify list update without making a trip to the server.
Consider the CCJS "Sessions List" which is populated by a projection query. John didn't have to turn the query results into entities. He could have bound directly to the projected results. Remember that Knockout happily binds to raw data values. The user never edits the sessions on that list directly. If displayed session values can't change, turning them into observable properties is a waste of CPU.
When you tap on a Session, you go to a Session view/edit screen with access to almost every property of the complete session entity. CCJS needs the full entity there so it looks for the full (not partial) session in cache and, if not found, loads the entity from the server. Even to this point there is no particular value in having previously converted the original projection results into (partial) session entities.
Now edit the Session - change the title - and save it. Return to the "Sessions List"
Question
How do you make sure that the updated title appears in the Sessions List?
If we bound the Sessions List HTML to the projection data objects, those objects are not entities. They're just objects. The entity you edited in the session view is not an object in the collection displayed in the Sessions List. Yes, there is a corresponding object in the list - one that has the same session id. But it is not the same object.
Choices
#1: Refresh the list from the server by reissuing the projection query. Bind directly to the projection data. Note that the data consist of raw JavaScript objects, not entities; they are not in the Breeze cache.
#2: Publish an event after saving the real session entity; the subscribing "Sessions List" ViewModel hears the event, extracts the changes, and updates its copy of the session in the list.
#3: Use the projection-into-entity technique so that you can use a session entity everywhere.
Pros and Cons
#1 is easy to implement. But it requires a server trip every time you enter the Sessions List view.
One of the CCJS design goals was that, once loaded, it should be able to operate entirely offline with zero access to the server. It should work crisply when connectivity is intermittent and poor.
CCJS is your always-ready guide to the conference. It tells you instantly what sessions are available, when and where so you can find the session you want, as you're walking the halls, and get there. If you've been to a tech conference or hotel you know that the wifi is generally awful and the app is almost useless if it only works when it has direct access to the server.
#1 is not well suited to the intended operating environment for CCJS.
The CCJS Jumpstart is part way down that "server independent" path; you'll see something much closer to a full offline implementation soon.
You'll also lose the ability to navigate to related entities. The Sessions List displays each session's track, timeslot and room. That's repetitive information found in the "lookup" reference entities. You'll either have to expand the projection to include this information in a "flattened" view of the session (fatter payload) or get clever on the client-side and patch in the track, timeslot and room data by hand (complexity).
#2 helps with offline/intermittent connectivity scenarios. Of course you'll have to set up the messaging system, establish a protocol about saved entities and teach the Sessions List to find and update the affected session projection object. That's not super difficult - the Breeze EntityManager publishes an event that may be sufficient - but it would take even more code.
#3 is good for "server independence", has a small projection payload, is super-easy, and is a cool demonstration of breeze. You have to manage the isPartial flag so you always know whether the session in cache is complete. That's not hard.
It could get more complicated if you needed multiple flavors of "partial entity" ... which seems to be where you are going. That was not an issue in CCJS.
John chose #3 for CCJS because it fit the application objectives.
That doesn't make it the right choice for every application. It may not be the right choice for you.
For example, if you always have a fast, low latency connection, then #1 may be your best choice. I really don't know.
I like the cast-to-entity approach myself because it is easy and works so well most of the time. I do think carefully about that choice before I make it.
Summary
You do not have to turn projection query results into entities
You can bind to projected data directly, without Knockout observable properties, if they are read-only
Make sure you have a good reason to convert projected data into (partial) entities.
CCJS has a good reason to convert projected query data into entities. Do you?

Where / How to fit Solr into ASP.net MVC app (using nHibernate / Repository Pattern)

I'm currently in the middle of a reasonably large question / answer based application (kind of like stackoverflow / answerbag.com)
We're using SQL (Azure) and NHibernate for data access and MVC for the UI app.
So far, the schema is roughly along the lines of the Stack Overflow db, in the sense that we have a single Post table (containing both questions and answers).
Probably going to use something along the lines of the following repository interface:
public interface IPostRepository
{
    void PutPost(Post post);
    void PutPosts(IEnumerable<Post> posts);
    void ChangePostStatus(string postID, PostStatus status);
    void DeleteArtefact(string postId, string artefactKey);
    void AddArtefact(string postId, string artefactKey);
    void AddTag(string postId, string tagValue);
    void RemoveTag(string postId, string tagValue);
    void MarkPostAsAccepted(string id);
    void UnmarkPostAsAccepted(string id);

    IQueryable<Post> FindAll();
    IQueryable<Post> FindPostsByStatus(PostStatus postStatus);
    IQueryable<Post> FindPostsByPostType(PostType postType);
    IQueryable<Post> FindPostsByStatusAndPostType(PostStatus postStatus, PostType postType);
    IQueryable<Post> FindPostsByNumberOfReplies(int numberOfReplies);
    IQueryable<Post> FindPostsByTag(string tag);
}
My question is:
Where / how would I fit Solr into this for better querying of these "Posts"?
(I'll be using solrnet for the actual communication with Solr)
Ideally, I'd be using the SQL db as merely a persistent store.
The bulk of the above IQueryable operations would move into some kind of SolrFinder class (or something like that).
The Body property is the one that causes the problems currently - it's fairly large, and slows down queries in SQL.
My main problem is that if someone "updates" a post - adds a new tag, for example - then that whole post will need re-indexing.
Obviously, doing this will require a query like this:
"SELECT * FROM POST WHERE ID = xyz"
This will of course, be very slow.
SolrNet has an NHibernate facility, but I believe this will give the same result as above?
I thought of a way around this, which I'd like your views on:
Adding the ID to a queue (Amazon SQS or something - I like the ease of use with this)
Having a service (or bunch of services) somewhere that does the above-mentioned query, constructs the document, and re-adds it to Solr.
Another problem I'm having with my design:
Where should the "re-indexing" method(s) be called from?
The MVC controller? Or should I have a "PostService" type class that wraps the instance of IPostRepository?
Any pointers are greatly received on this one!
On the e-commerce site that I work for, we use Solr to provide fast faceting and searching of the product catalog. (In non-Solr geek terms, this means the "ATI Cards (34), NVIDIA (23), Intel (5)" style of navigation links that you can use to drill-down through product catalogs on sites like Zappos, Amazon, NewEgg, and Lowe's.)
This is because Solr is designed to do this kind of thing fast and well, and trying to do this kind of thing efficiently in a traditional relational database is, well, not going to happen, unless you want to start adding and removing indexes on the fly and go full EAV, which is just cough Magento cough stupid. So our SQL Server database is the "authoritative" data store, and the Solr indexes are read-only "projections" of that data.
You're with me so far because it sounds like you are in a similar situation. The next step is determining whether or not it is OK that the data in the Solr index may be slightly stale. You've probably accepted the fact that it will be somewhat stale, but the next decisions are
How stale is too stale?
When do I value speed or querying features over staleness?
For example, I have what I call the "Worker", which is a Windows service that uses Quartz.NET to execute C# IJob implementations periodically. Every 3 hours, one of these jobs that gets executed is the RefreshSolrIndexesJob, and all that job does is ping an HttpWebRequest over to http://solr.example.com/dataimport?command=full-import. This is because we use Solr's built-in DataImportHandler to actually suck in the data from the SQL database; the job just has to "touch" that URL periodically to make the sync work. Because the DataImportHandler commits the changes periodically, this is all effectively running in the background, transparent to the users of the Web site.
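A minimal sketch of that kind of job (assuming the Quartz.NET 2.x IJob signature; the Solr URL is the placeholder from above) might look like:
public class RefreshSolrIndexesJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // "Touch" the DataImportHandler endpoint; Solr performs the actual
        // import from the SQL database on its own, in the background.
        var request = System.Net.WebRequest.Create(
            "http://solr.example.com/dataimport?command=full-import");
        using (request.GetResponse()) { }
    }
}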
This does mean that information in the product catalog can be up to 3 hours stale. A user might click a link for "Medium In Stock (3)" on the catalog page (since this kind of faceted data is generated by querying SOLR) but then see on the product detail page that no mediums are in stock (since on this page, the quantity information is one of the few things not cached and queried directly against the database). This is annoying, but generally rare in our particular scenario (we are a reasonably small business and not that high traffic), and it will be fixed up in 3 hours anyway when we rebuild the whole index again from scratch, so we have accepted this as a reasonable trade-off.
If you can accept this degree of "staleness", then this background worker process is a good way to go. You could take the "rebuild the whole thing every few hours" approach, or your repository could insert the ID into a table, say, dbo.IdentitiesOfStuffThatNeedsUpdatingInSolr, and then a background process can periodically scan through that table and update only those documents in Solr if rebuilding the entire index from scratch periodically is not reasonable given the size or complexity of your data set.
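If you go the "table of pending IDs" route, a background worker along these lines could drain that table and push only the affected documents through SolrNet. This is only a sketch: PendingSolrUpdateProcessor, PostDocument and MapToSolrDocument are made-up names, while ISolrOperations<T>.Add and Commit are the SolrNet calls.
public class PendingSolrUpdateProcessor
{
    private readonly ISolrOperations<PostDocument> solr; // PostDocument = an assumed Solr DTO class

    public PendingSolrUpdateProcessor(ISolrOperations<PostDocument> solr)
    {
        this.solr = solr;
    }

    public void Process(IEnumerable<Post> postsNeedingReindex)
    {
        foreach (var post in postsNeedingReindex)
        {
            // MapToSolrDocument is a hypothetical helper that flattens a Post
            // into whatever fields the Solr schema defines.
            solr.Add(MapToSolrDocument(post));
        }
        solr.Commit(); // make the updated documents searchable
    }
}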
A third approach is to have your repository spawn a background thread that updates the Solr index in regards to that current document more or less at the same time, so the data is only stale for a few seconds:
class MyRepository
{
    void Save(Post post)
    {
        // The following method runs on the current thread.
        SaveThePostInTheSqlDatabaseSynchronously(post);

        // The following method spawns a new thread, task, QueueUserWorkItem,
        // whatever floats our boat this week, and so returns immediately.
        UpdateTheDocumentInTheSolrIndexAsynchronously(post);
    }
}
But if this explodes for some reason, you might miss updates in Solr, so it's still a good idea to have Solr do a periodic "blow it all away and refresh", or have a reaper background Worker-type service that checks for out-of-date data in Solr every once in a blue moon.
As for querying this data from Solr, there are a few approaches you could take. One is to hide the fact that Solr exists entirely via the methods of the Repository. I personally don't recommend this because chances are your Solr schema is going to be shamelessly tailored to the UI that will be accessing that data; we've already made the decision to use Solr to provide easy faceting, sorting, and fast display of information, so we might as well use it to its fullest extent. This means making it explicit in code when we mean to access Solr and when we mean to access the up-to-date, non-cached database object.
In my case, I end up using NHibernate to do the CRUD access (loading an ItemGroup, futzing with its pricing rules, and then saving it back), forgoing the repository pattern because I don't typically see its value when NHibernate and its mappings are already abstracting the database. (This is a personal choice.)
But when querying on the data, I know pretty well if I'm using it for catalog-oriented purposes (I care about speed and querying) or for displaying in a table on a back-end administrative application (I care about currency). For querying on the Web site, I have an interface called ICatalogSearchQuery. It has a Search() method that accepts a SearchRequest where I define some parameters--selected facets, search terms, page number, number of items per page, etc.--and gives back a SearchResult--remaining facets, number of results, the results on this page, etc. Pretty boring stuff.
Where it gets interesting is that the implementation of that ICatalogSearchQuery is using a list of ICatalogSearchStrategys underneath. The default strategy, the SolrCatalogSearchStrategy, hits SOLR directly via a plain old-fashioned HttpWebRequest and parsing the XML in the HttpWebResponse (which is much easier to use, IMHO, than some of the SOLR client libraries, though they may have gotten better since I last looked at them over a year ago). If that strategy throws an exception or vomits for some reason, then the DatabaseCatalogSearchStrategy hits the SQL database directly--although it ignores some parameters of the SearchRequest, like faceting or advanced text searching, since that is inefficient to do there and is the whole reason we are using Solr in the first place. The idea is that usually SOLR is answering my search requests quickly in full-featured glory, but if something blows up and SOLR goes down, then the catalog pages of the site can still function in "reduced-functionality mode" by hitting the database with a limited feature set directly. (Since we have made explicit in code that this is a search, that strategy can take some liberties in ignoring some of the search parameters without worrying about affecting clients too severely.)
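Very roughly, the shape of that arrangement is sketched below (the type names come from the description above; the wiring and exception handling are assumptions):
public interface ICatalogSearchQuery
{
    SearchResult Search(SearchRequest request);
}

public interface ICatalogSearchStrategy
{
    SearchResult Search(SearchRequest request);
}

public class CatalogSearchQuery : ICatalogSearchQuery
{
    // Ordered strategies: SolrCatalogSearchStrategy first,
    // DatabaseCatalogSearchStrategy as the reduced-functionality fallback.
    private readonly IList<ICatalogSearchStrategy> strategies;

    public CatalogSearchQuery(IList<ICatalogSearchStrategy> strategies)
    {
        this.strategies = strategies;
    }

    public SearchResult Search(SearchRequest request)
    {
        foreach (var strategy in strategies)
        {
            try
            {
                return strategy.Search(request);
            }
            catch
            {
                // Log the failure and fall through to the next strategy.
            }
        }
        throw new InvalidOperationException("No search strategy could satisfy the request.");
    }
}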
Key takeaway: What is important is that the decision to perform a query against a possibly-stale data store versus the authoritative data store has been made explicit--if I want fast, possibly stale data with advanced search features, I use ICatalogSearchQuery. If I want slow, up-to-date data with the insert/update/delete capability, I use NHibernate's named queries (or a repository in your case). And if I make a change in the SQL database, I know that the out-of-process Worker service will update Solr eventually, making things eventually consistent. (And if something was really important, I could broadcast an event or ping the SOLR store directly, telling it to update, possibly in a background thread if I had to.)
Hope that gives you some insight.
We use solr to query a large product database.
Around 1 million products, and 30 stores.
What we did is we used triggers on the product table and stock tables in our SQL Server.
Each time a row is changed it flags the product to be reindexed, and we have a Windows service that grabs these products and posts them to Solr every 10 seconds (with a limit of 100 products per batch).
It's super efficient - almost real-time info for the stock.
If you have a big text field (your 'body' field), then yes, re-index in background. The solutions you mentioned (queue or periodic background service) will do.
MVC controllers should be oblivious of this process.
I noticed you have IQueryables in your repository interface. SolrNet does not currently have a LINQ provider. Anyway, if those operations are all you're going to do with Solr (i.e. no faceting), you might want to consider using Lucene.Net instead, which does have a LINQ provider.

Update relationships when saving changes of EF4 POCO objects

Entity Framework 4, POCO objects and ASP.NET MVC 2. I have a many-to-many relationship, let's say between BlogPost and Tag entities. This means that in my T4-generated POCO BlogPost class I have:
public virtual ICollection<Tag> Tags
{
    // getter and setter with the magic FixupCollection
}

private ICollection<Tag> _tags;
I ask for a BlogPost and the related Tags from an instance of the ObjectContext and send it to another layer (the View in the MVC application). Later I get back the updated BlogPost with changed properties and changed relationships. For example, it had tags "A", "B" and "C", and the new tags are "C" and "D". In my particular example there are no new Tags and the properties of the Tags never change, so the only thing which should be saved is the changed relationships. Now I need to save this in another ObjectContext. (Update: I have now tried doing it in the same context instance and that also failed.)
The problem: I can't make it save the relationships properly. I tried everything I found:
Controller.UpdateModel and Controller.TryUpdateModel don't work.
Getting the old BlogPost from the context then modifying the collection doesn't work. (with different methods from the next point)
This probably would work, but I hope this is just a workaround, not the solution :(.
Tried Attach/Add/ChangeObjectState functions for BlogPost and/or Tags in every possible combinations. Failed.
This looks like what I need, but it doesn't work (I tried to fix it, but can't for my problem).
Tried ChangeState/Add/Attach/... the relationship objects of the context. Failed.
"Doesn't work" means in most cases that I worked on the given "solution" until it produces no errors and saves at least the properties of BlogPost. What happens with the relationships varies: usually Tags are added again to the Tag table with new PKs and the saved BlogPost references those and not the original ones. Of course the returned Tags have PKs, and before the save/update methods I check the PKs and they are equal to the ones in the database so probably EF thinks that they are new objects and those PKs are the temp ones.
A problem I know about and might make it impossible to find an automated simple solution: When a POCO object's collection is changed, that should happen by the above mentioned virtual collection property, because then the FixupCollection trick will update the reverse references on the other end of the many-to-many relationship. However when a View "returns" an updated BlogPost object, that didn't happen. This means that maybe there is no simple solution to my problem, but that would make me very sad and I would hate the EF4-POCO-MVC triumph :(. Also that would mean that EF can't do this in the MVC environment whichever EF4 object types are used :(. I think the snapshot based change tracking should find out that the changed BlogPost has relationships to Tags with existing PKs.
Btw: I think the same problem happens with one-to-many relations (google and my colleague say so). I will give it a try at home, but even if that works that doesn't help me in my six many-to-many relationships in my app :(.
Let's try it this way (a rough sketch follows the list):
Attach the BlogPost to the context. After attaching an object to the context, the state of the object, all related objects and all relations is set to Unchanged.
Use context.ObjectStateManager.ChangeObjectState to set your BlogPost to Modified
Iterate through the Tag collection
Use context.ObjectStateManager.ChangeRelationshipState to set the state for the relation between the current Tag and the BlogPost.
SaveChanges
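A minimal sketch of those steps against an EF4 ObjectContext (the BlogPosts set name is an assumption, and in a real scenario you would choose Added or Deleted per relationship depending on whether the link is new or removed):
context.BlogPosts.Attach(blogPost);
context.ObjectStateManager.ChangeObjectState(blogPost, EntityState.Modified);

foreach (var tag in blogPost.Tags)
{
    // Tell EF explicitly what happened to the link between this BlogPost and this Tag.
    context.ObjectStateManager.ChangeRelationshipState(
        blogPost, tag, "Tags", EntityState.Added);
}

context.SaveChanges();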
Edit:
I guess one of my comments gave you false hope that EF would do the merge for you. I played a lot with this problem and my conclusion is that EF will not do this for you. I think you have also found my question on MSDN. In reality there are plenty of such questions on the Internet. The problem is that it is not clearly stated how to deal with this scenario. So let's have a look at the problem:
Problem background
EF needs to track changes on entities so that persistence knows which records have to be updated, inserted or deleted. The problem is that it is the ObjectContext's responsibility to track changes. The ObjectContext is able to track changes only for attached entities. Entities which are created outside the ObjectContext are not tracked at all.
Problem description
Based on the above description we can clearly state that EF is more suitable for connected scenarios where the entity is always attached to a context - typical for a WinForms application. Web applications require a disconnected scenario where the context is closed after request processing and the entity content is passed as the HTTP response to the client. The next HTTP request provides modified content of the entity, which has to be recreated, attached to a new context and persisted. Recreation usually happens outside of the context scope (layered architecture with persistence ignorance).
Solution
So how do we deal with such a disconnected scenario? When using POCO classes we have three ways to deal with change tracking:
Snapshot - requires same context = useless for disconnected scenario
Dynamic tracking proxies - requires same context = useless for disconnected scenario
Manual synchronization.
Manual synchronization of a single entity is an easy task. You just need to attach the entity and call AddObject for inserting, DeleteObject for deleting, or set the state in the ObjectStateManager to Modified for updating. The real pain comes when you have to deal with an object graph instead of a single entity. This pain is even worse when you have to deal with independent associations (those that don't use a foreign key property) and many-to-many relations. In that case you have to manually synchronize not only each entity in the object graph but also each relation in the object graph.
Manual synchronization is proposed as the solution by the MSDN documentation; Attaching and Detaching Objects says:
Objects are attached to the object context in an Unchanged state. If you need to change the state of an object or the relationship because you know that your object was modified in a detached state, use one of the following methods.
The mentioned methods are ChangeObjectState and ChangeRelationshipState of the ObjectStateManager = manual change tracking. A similar proposal is in another MSDN documentation article; Defining and Managing Relationships says:
If you are working with disconnected objects you must manually manage the synchronization.
Moreover, there is a blog post related to EF v1 which criticises exactly this behavior of EF.
Reason for solution
EF has many "helpful" operations and settings like Refresh, Load, ApplyCurrentValues, ApplyOriginalValues, MergeOption, etc. But from my investigation, all these features work only for a single entity and affect only scalar properties (= not navigation properties and relations). I'd rather not test these methods with complex types nested in an entity.
Other proposed solution
Instead of real merge functionality, the EF team provides something called Self-Tracking Entities (STEs), which don't solve the problem. First of all, STEs work only if the same instance is used for the whole processing. In a web application that is not the case unless you store the instance in view state or session. Because of that I'm very unhappy with using EF and I'm going to check out the features of NHibernate. A first observation says that NHibernate perhaps has such functionality.
Conclusion
I will end these assumptions with a single link to another related question on the MSDN forum. Check Zeeshan Hirani's answer; he is the author of Entity Framework 4.0 Recipes. If he says that automatic merging of object graphs is not supported, I believe him.
But there is still the possibility that I'm completely wrong and some automatic merge functionality exists in EF.
Edit 2:
As you can see, this was already added to MS Connect as a suggestion in 2007. MS closed it as something to be done in the next version, but actually nothing has been done to close this gap except STEs.
I have a solution to the problem that was described above by Ladislav. I have created an extension method for the DbContext which will automatically perform the adds/updates/deletes based on a diff of the provided graph and the persisted graph.
At present, using Entity Framework you will need to perform the updates of the contacts manually: check whether each contact is new and add it, check whether it was updated and edit it, and check whether it was removed and then delete it from the database. Once you have to do this for a few different aggregates in a large system you start to realize there must be a better, more generic way.
Please take a look and see if it can help http://refactorthis.wordpress.com/2012/12/11/introducing-graphdiff-for-entity-framework-code-first-allowing-automated-updates-of-a-graph-of-detached-entities/
You can go straight to the code here https://github.com/refactorthis/GraphDiff
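For illustration, usage looks roughly like this. This is only a sketch based on the library's documented mapping API; BloggingContext is made up, and the exact mapping call for a many-to-many association should be checked against the GraphDiff readme.
using (var context = new BloggingContext()) // hypothetical DbContext
{
    // GraphDiff diffs the detached blogPost graph against the stored graph and
    // applies the necessary inserts/updates/deletes, including the tag links.
    context.UpdateGraph(blogPost, map => map.AssociatedCollection(p => p.Tags));
    context.SaveChanges();
}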
I know it's late for the OP, but since this is a very common issue I'm posting this in case it serves someone else.
I've been toying around with this issue and I think I've got a fairly simple solution.
What I do is:
Save the main object (Blogs, for example) by setting its state to Modified.
Query the database for the updated object, including the collections I need to update.
Query and convert (.ToList()) the entities I want my collection to include.
Update the main object's collection(s) to the List I got from step 3.
SaveChanges();
In the following example, "dataobj" and "_categories" are the parameters received by my controller: "dataobj" is my main object, and "_categories" is an IEnumerable containing the IDs of the categories the user selected in the view.
db.Entry(dataobj).State = EntityState.Modified;
db.SaveChanges();
dataobj = db.ServiceTypes.Include(x => x.Categories).Single(x => x.Id == dataobj.Id);
var it = _categories != null ? db.Categories.Where(x => _categories.Contains(x.Id)).ToList() : null;
dataobj.Categories = it;
db.SaveChanges();
It even works for multiple relations
The Entity Framework team is aware that this is a usability issue and plans to address it post-EF6.
From the Entity Framework team:
This is a usability issue that we are aware of and is something we have been thinking about and plan to do more work on post-EF6. I have created this work item to track the issue: http://entityframework.codeplex.com/workitem/864 The work item also contains a link to the user voice item for this--I encourage you to vote for it if you have not done so already.
If this impacts you, vote for the feature at
http://entityframework.codeplex.com/workitem/864
All of the answers were great at explaining the problem, but none of them really solved the problem for me.
I found that if I didn't use the relationship in the parent entity but just added and removed the child entities everything worked just fine.
Sorry for the VB but that is what the project I am working in is written in.
The parent entity "Report" has a one-to-many relationship to "ReportRole" and has the property "ReportRoles". The new roles are passed in as a comma-separated string from an Ajax call.
The first line will remove all the child entities, and if I used "report.ReportRoles.Remove(f)" instead of "db.ReportRoles.Remove(f)" I would get the error.
report.ReportRoles.ToList.ForEach(Function(f) db.ReportRoles.Remove(f))
Dim newRoles = If(String.IsNullOrEmpty(model.RolesString), New String() {}, model.RolesString.Split(","))
newRoles.ToList.ForEach(Function(f) db.ReportRoles.Add(New ReportRole With {.ReportId = report.Id, .AspNetRoleId = f}))

Nhibernate/MVC: Dealing with lazy loaded collections in View

I'm currently using an attribute-based approach to NHibernate session management, which means that the session is open for the duration of the Action method, but is closed once control is passed to the View.
This seems like good practice to me; however, I'm running into problems with lazy-loaded collections. (This is complicated by the fact that some collections seem to be lazy loading even though they have Not.LazyLoad() set in the fluent mapping.)
As I see it, my options are:
Change my ISession management strategy and keep the session open in the View
Make better use of ViewModels (I'm currently not using them everywhere).
Eager load all of the collections that are causing problems (maybe paged) (fluent problem notwithstanding)
1 seems a bit wrong to me, but may be the 'easiest' solution. 2 is probably the proper way to go, but in some cases ViewModels seem slightly redundant, and I'm loath to introduce more classes just to deal with this issue. 3 seems to be a bit of a dirty fix.
What do you think?
The best way to handle this (in my opinion, anyway) is to introduce a service layer in between your UI and your repositories; it should take care of loading everything needed by the view and pass off a flattened (and fully populated) DTO to the view.
Often, I go one step further and map the DTOs returned from the service layer to view models, which often need to contain data that is very view-specific and not appropriate for inclusion in the DTOs coming from your service layer. Remember, AutoMapper is your friend in situations like this.
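A bare-bones sketch of that layering (the types are made up, and it uses AutoMapper's classic static API, which later versions replaced with MapperConfiguration):
public class CustomerService
{
    private readonly ISession session;

    static CustomerService()
    {
        // One-time mapping configuration from the entity to the flattened DTO.
        Mapper.CreateMap<Customer, CustomerDto>();
    }

    public CustomerService(ISession session)
    {
        this.session = session;
    }

    public CustomerDto GetCustomer(int id)
    {
        // Everything the view needs is loaded and flattened here, while the
        // session is still open, so the view never touches a lazy-loading proxy.
        var customer = session.Get<Customer>(id);
        return Mapper.Map<CustomerDto>(customer);
    }
}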
Using an open-session-in-view pattern is still perfectly acceptable, just don't have your views invoking lazy loading on entity models - this is almost always a horrible idea.
Consider your current usage as making implicit database operations. The object is sent to the View, but it contains proxies which, when touched, will try to return the data, and that requires a database operation.
Now,
Regarding option 1: extending the ISession's life to include the View is not wrong, as long as you are not doing explicit database calls.
Regarding option 2: I wouldn't know about that.
Regarding option 3: this is actually the proper way regardless of the session's end of life: you should try to do as few queries as possible per request, and NHibernate gives you that ability via lazy loading, futures, multi-query HQL/Criteria, etc.
Note: although you may have mapped a collection as not lazy-loaded, it also matters how you query and get your desired result set; e.g. if you are using HQL then use a fetch join.
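For example, with a hypothetical Customer/Orders model, an HQL fetch join pulls the collection in the same round trip instead of leaving it to lazy loading in the View:
// Fetches the customer and its Orders collection in a single query.
var customer = session
    .CreateQuery("from Customer c left join fetch c.Orders where c.Id = :id")
    .SetParameter("id", customerId)
    .UniqueResult<Customer>();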
I don't think there's anything wrong with the first approach, and it will be the easiest to implement.
Session per request is a well known session management pattern for NHibernate.

Nhibernate (and ORMs in General): work with Objects or ObjectIds?

This is something that has been pulling at me for a while. Consider an (MVC-type) web application with an ORM (e.g. NHibernate) as the data access layer.
On one hand - the OOP/Rich domain model hand - I feel I should be passing around (references to) the real objects I am talking about.
On the other hand - the DB/web app hand - I feel that it is easier and more efficient just to pass the integer IDs of the objects rather than the objects themselves.
Consider an ecommerce catalogue type application:
The user is logged in and navigates to a product page.
They post a comment.
The controller action tasked with persisting this comment has 3 pieces of information: a) The user id (from the auth cookie or wherever), b) The product id (probably from the querystring), and c) the comment text.
Now, what is best practice here? Is it really worth inflating the user and product objects (e.g. by getting them from the repository, with all the DB work that entails) when we know that all they will be used for is so the ORM can read their IDs and set the appropriate foreign keys in the DB table that stores the comments?
What are people's views on this? Perhaps web apps should be given a little more leeway than other apps, due to their stateless nature? I imagine there will be 'it depends' answers, but maybe some people are purists about the issue.
This is a general question which probably is applicable to many platforms, but if giving examples I would prefer them to be ASP.NET MVC if possible.
Thank you.
NHibernate has the load operation (as opposed to doing a get) exactly for this reason.
session.Save(
    new Comment
    {
        Text = commentTextFromScreen,
        User = session.Load<User>(userID),
        Product = session.Load<Product>(productID)
    });
In the above example, you are telling NHibernate: I know these already exist in the database, so don't bother selecting them right now. NHibernate will return proxy objects for them and a select won't happen against the database as long as you don't attempt to access any properties on the objects.
For more info check out Ayende's blog post: The difference between Get, Load, and query by id.
