The OptimisticConcurrencyException is not returned correctly. I tested this with the Breeze ToDo sample and my app.
This is what is returned if I provoke an OptimisticConcurrencyException:
{"$id":"1","$type":"System.Web.Http.HttpError, System.Web.Http","Message":"An error has occurred."}
The ExceptionType is missing. In debug mode in VS this works correctly.
#sascha - You beat me to it on the <customErrors> thing which works fine if you're running in IIS (see Jimmy Bogard's alternative if you are one of the very few who would self-host your Web Api).
But I’m pretty sure it is the wrong thing to do ultimately. It is expedient for now but, as Jimmy says in his post, “It’s likely not something we want in production.” An app shouldn’t expose unfiltered exceptions to the client for routine stuff like optimistic concurrency or validation errors.
I intend to find a better approach, most likely involving the HttpResponseException as described here. I'll give strong consideration to a “Custom Exception Filter” for dealing with unhandled exceptions in a controlled manner.
I don’t think that approach is something that belongs in Breeze itself. It strikes me as requiring an application specific solution … one that knows which exceptions should be exposed and how they should be phrased. But the mechanism would be good to teach. Once you know how to do it, you can roll your own custom exception handling ... and leave the Web.config alone.
Hoping to write this guidance soon. Feel free to beat me to it :)
Hmm ... when I provoke a concurrency exception in my test cases:
// assuming this causes a concurrency exception
em.saveChanges().then(function () {
    // success is not expected here
}).fail(function (error) {
    // error object detailed below
});
I get the following in the 'error' parameter:
error.message: "Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries."
error.responseText: {"$id":"1","$type":"System.Web.Http.HttpError, System.Web.Http","Message":"An error has occurred.","ExceptionMessage":"Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. Refresh ObjectStateManager entries.","ExceptionType":"System.Data.OptimisticConcurrencyException","StackTrace":" at System.Data.Mapping.Update.Internal.UpdateTranslator.ValidateRowsAffected(Int64 <... more here ...> }
error.detail: <an even more detailed error object>
error.detail.ExceptionType: "System.Data.OptimisticConcurrencyException"
There are other properties but these are the important ones.
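Incidentally, once the server serializes the exception like that, the client can branch on it. A minimal sketch of such a check (plain JavaScript; the error shape simply mirrors the object shown above — nothing here is Breeze API):

```javascript
// Hypothetical helper: decide whether a failed save was a concurrency conflict
// by inspecting the ExceptionType that Web API serialized into error.detail.
function isConcurrencyError(error) {
  return !!(error &&
            error.detail &&
            error.detail.ExceptionType === "System.Data.OptimisticConcurrencyException");
}

// A trimmed-down stand-in for the error object shown above.
var saveError = {
  message: "Store update, insert, or delete statement affected an unexpected number of rows (0).",
  detail: { ExceptionType: "System.Data.OptimisticConcurrencyException" }
};
```

In the fail handler you could then, say, refetch the entity when isConcurrencyError(error) is true and rethrow otherwise.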
I wonder what we are doing differently?
Related
We are using breeze.js with entity framework to initiate client side entity management.
We randomly get a "Failed to set the ‘$visited’ property on ‘DOMStringMap’: ‘data-$visited’ is not a valid attribute name" error from the __toJSONSafe method in breeze.js.
Does anybody have any idea what could make the "obj._$visited" property undefined? It comes up as undefined, and that is causing the issue during the call to saveChanges().
I'm guessing ... it seems like you've added some kind of DOM object to the entity before saving it. I can't imagine how else you could be subjecting the DOMStringMap to __toJSONSafe.
Would need to know more precisely what object (and entity) is involved when you get this exception.
You say it happens randomly. That doesn't make it easy on any of us. If you can make it happen often enough to detect, you could patch the __toJSONSafe method in your local copy of breeze.debug.js so that you can better trap the error and the information about what makes it happen.
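The trap itself needn't know anything about Breeze. A generic sketch of the idea (the real __toJSONSafe signature in breeze.debug.js may differ; this only shows the wrapping technique):

```javascript
// Hypothetical debugging aid: wrap a function so that, when it throws,
// you get to inspect the argument that made it blow up before rethrowing.
function trapArgumentOnError(fn, onError) {
  return function (arg) {
    try {
      return fn(arg);
    } catch (e) {
      onError(arg, e); // record the offending object for diagnosis
      throw e;
    }
  };
}

// Demo with a stand-in for __toJSONSafe that chokes on "DOM-like" objects.
var offender = null;
var fragile = function (obj) {
  if (obj.isDomLike) { throw new Error("not a valid attribute name"); }
  return "ok";
};
var wrapped = trapArgumentOnError(fragile, function (arg) { offender = arg; });

wrapped({ isDomLike: false });          // passes through untouched
var threw = false;
try { wrapped({ isDomLike: true, tag: "DOMStringMap" }); } catch (e) { threw = true; }
```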
Come back and share that information with us.
I created a Service Object (http://railscasts.com/episodes/398-service-objects), which basically creates two models, A and B, sets up the association between them where B belongs to A, and returns A (which in turn you can access B via A given the association).
On error, I return a hash with the error information. As I'm trying to test this method, I'm having an issue now where there are two possible types of returns: either a model (when it passes) or a hash with the error information.
Is this a signal that the design is wrong? I know when you test first (TDD) you avoid such design issues.
If it is an issue, then I know I would need to return an invalid A model. Assuming the model B throws an error on create, how would I still be able to return an invalid A model?
If returning an error hash is okay, how else can I design this method to be testing friendly?
Good insight and good question. :-) On the face of it, this seems like a good situation to raise a Ruby exception for the "error" case rather than returning a hash from the method call. You can define your own error class (probably a subclass of StandardError) and include the hash or whatever information you want as part of the error you raise. See http://www.ruby-doc.org/docs/ProgrammingRuby/html/tut_exceptions.html for a discussion of this general topic.
As for checking for the raising of errors in rspec, see the last answer to How to use RSpec's should_raise with any kind of exception?, including links to additional information.
If you want to present your API as a RESTful interface, the consensus seems to be to utilize the HTTP response codes to present exceptions, as discussed in How to handle REST Exceptions? If you do use the exceptional response codes (e.g. 40X) for presenting your exceptions, then the fact that you return a hash in that case and a model in the other case is not a bad smell, imho, as there is no expectation of consistency between the data that accompanies an error and the data that accompanies a successful return. In any event, I don't think returning an "invalid" model in the error case makes any sense, assuming you're not in fact creating/persisting the model in that situation.
You can test the type of result.
In rspec...
expect(actual).to be_an_instance_of(expected)
expect(actual).to be_a_kind_of(expected)
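To make that concrete, here's a minimal sketch of the raise-instead-of-return shape (all names here — ServiceError, CreateAWithB — are illustrative, not from the original post):

```ruby
# Hypothetical custom error carrying the details hash that used to be returned.
class ServiceError < StandardError
  attr_reader :details

  def initialize(details = {})
    @details = details
    super(details[:message] || "service failed")
  end
end

# Toy service object: returns the created model on success, raises on failure.
class CreateAWithB
  def call(valid)
    raise ServiceError.new(message: "B could not be created", code: :invalid_b) unless valid
    { model: "A" } # stand-in for the persisted A (with its associated B)
  end
end
```

In rspec the failure path then becomes `expect { service.call(false) }.to raise_error(ServiceError)`, and the happy path can assert on the returned model alone.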
Linq To SQL's DataContext has an overload of SubmitChanges that allows updates to continue when an OptimisticConcurrencyException is thrown, and provides the developer with a mechanism to resolve the conflicts afterwards in a single Try Catch block.
Even the WCFDataServicesContext has a SaveChangesOptions.ContinueOnError parameter for its SaveChanges method that at least allows you to continue updating when an error has occurred, leaving conflicting updates unresolved so you can look into them later.
(1) Why then does the ObjectContext.SaveChanges method have no such option?
(2) Do any update patterns exist that will mimic the Linq To SQL behaviour? The examples I find on MSDN make it appear as if a single Try Catch block will see you home in the case of multiple updates. But this pattern does not allow you to investigate each conflicting update separately: it just alerts you to the first conflict and then gives you the option to "wipe the table clean in one sweep" to prevent any further optimistic concurrency exceptions from surfacing, without your knowing whether any exist and what you would have liked to do about them.
Why then does the ObjectContext.SaveChanges method have no such option?
I think the simplest answer is because Linq-to-Sql, Entity Framework and WCF Data Services were all implemented by different teams and internal communication among these teams doesn't work as we would hope. I have described some interesting features missing in newer APIs in one of my former answers but I don't think this is a missing feature - I will explain it in the second part of my answer.
WCF Data Services has more interesting features that should be part of Entity Framework as well. For example:
Change and Query interceptors
Batching multiple queries and SaveChanges operations to single call to server
Asynchronous operations - this will come to EF6 in the form of an async/await implementation
Do any update patterns exist that will mimic the Linq To SQL behaviour?
There is a pattern for solving this, but you will probably not like it. EF's SaveChanges works as a unit of work. It either saves all changes or none. If you have a scenario where your saving operation can result in only part of your changes being persisted, then it should not be handled by a single SaveChanges call. Each atomic set of changes should have its own SaveChanges call:
using (var scope = new TransactionScope(...))
{
    foreach (var entity in someEntitiesToModify)
    {
        try
        {
            context.SomeEntities.Attach(entity);
            context.ObjectStateManager.ChangeObjectState(entity, EntityState.Modified);
            context.SaveChanges();
        }
        catch (OptimisticConcurrencyException e)
        {
            // Conflict: take the client's values for this entity and retry the save
            context.Refresh(e.StateEntries[0].Entity, RefreshMode.ClientWins);
            context.SaveChanges();
        }
    }

    scope.Complete();
}
I think the reason this feature doesn't exist is that it is not generic and, as mentioned above, it goes against the unit of work pattern. Suppose this example:
You load an entity
You add a new dependent entity to navigation property of your loaded entity
You change something on the loaded entity
In the meantime somebody else concurrently deletes your loaded entity
You trigger SaveChanges with relaxed conflict resolution
EF will try to save changes to the principal entity but it conflicts because there is no entity to update in the database
EF will continue because conflict resolution is relaxed
EF will try to insert the dependent entity, but it will fire a SqlException because the principal entity doesn't exist in the database. This exception will break the persistence operation, and you will not know why it is complaining about referential integrity, because you have a principal entity. (It is possible that this insert will not even happen and EF fires another exception due to an inconsistency in the context's inner state, but that depends on EF's internal implementation.)
This immediately makes relaxing conflict resolution a much more complex feature. There are, IMHO, three ways to solve it:
Simply not support it. If you need conflict resolution on a per-entity basis, you can still use the example I showed above, but for complex scenarios it may not work.
Rebuild the database change set each time a conflict occurs - that means exploring the remaining change set and excluding all entities related to the conflicting entity, and their relations, and so on, from the processed persistence. There is a problem: EF cannot exclude any changed entity from processing. That would break the meaning of unit of work, and I repeat it once more: relaxing conflict resolution can also break the meaning of unit of work.
Let EF proceed with dependencies even if the principal entity conflicted. This requires handling the database exception and understanding its content to know whether the exception was fired due to a conflicting principal or due to some other error (which should fail the whole persistence operation immediately). It can be quite difficult to understand database exceptions at the code level, and moreover they are provider specific for every supported database.
That doesn't mean such functionality is impossible, but it would need to cover all scenarios when it comes to relations, and that can be pretty complex. I'm not sure if Linq-to-Sql handles this.
You can always make a suggestion on Data UserVoice, or check out the code and try to implement it yourself. Maybe I see this feature as more complicated than it is and it can be implemented easily.
I am experiencing some bizarre problems with Nhibernate within my MVC web application.
There is not 1 consistent error, I keep getting loads of random ones:
Transaction not successfully started
New request is not allowed to start because it should come with valid transaction descriptor
Unexpected row count: -1; expected: 1
To give a little context to the setup: I am using Ninject to DI the sessions and other Nhibernate related objects; currently I am using RequestScope, though I have also tried SingletonScope. I have a large and complicated data model, which is read out as a whole but persisted back in separate parts, as these can all be edited and saved individually.
An example would be a Customer object, which contains an address object, a contact object, a friends object, a previous orders object, etc.
So the whole object is read out, then mapped to the UI domain models and displayed in different partials within the page. Each partial can be updated individually via ajax, so you may update one section or you could update them all together. It mainly seems to give me the problems when I try to persist them all together (so 2-4 simultaneous ajax requests persisting chunks of the model).
Now I have integration tests that work fine, which just test the persistence and retrieval of entities, as a whole and individually, and all pass fine. However, in the web app they just seem to keep throwing random exceptions, and originally refused to persist outside of the Nhibernate cache. I found a way round this by wrapping most units of work within transactions, which got the data persisting but started adding new errors to the mix.
Originally I was thinking of just scrapping Nhibernate from the project; although I really want its persistence/caching layer, it just didn't seem to be flexible enough for my domain. That seems odd, as I have used it before without much problem, although it doesn't like 1-1 mappings.
So has anyone else had flaky transaction/Nhibernate issues like this within an ASP MVC app? I know this may be a bit vague as the errors don't point to one thing, and it doesn't always error, so it's like stabbing in the dark, but I am out of ideas so any help would be great!
-- Update --
I cannot post all relevant code as the project is huge, but the transaction bit looks like:
using (var transaction = sessionManager.Session.BeginTransaction(IsolationLevel.ReadUncommitted))
{
    try
    {
        // Do unit of work
        transaction.Commit();
    }
    catch (Exception)
    {
        transaction.Rollback();
        throw;
    }
}
Some of the main problems I have had on this project have stemmed from:
There are some 1-1 relationships with composite keys, but logically it makes sense
The Nhibernate domain entities go through a mapping layer to become the UI domain entities, then vice versa when saving. Problem here is that with the 1-1 mappings, when persisting the example Address I have to make a Surrogate Customer object with the correct Id then merge.
There is a LOT of Ajax that deals with chunks of the overall model (I talk as if there is one single model, but there are quite a few top-level models, just one that is most important)
Some notes that may help. I use Windsor, but I imagine the concepts are the same. It sounds like there may be a combination of things.
The SessionFactory should be created as a singleton and the session should be per web request. Something like:
Bind<ISessionFactory>()
    .ToProvider<SessionFactoryBuilder>()
    .InSingletonScope();

Bind<ISession>()
    .ToMethod( context => context.Kernel.Get<ISessionFactory>().OpenSession() )
    .InRequestScope();
Be careful of keeping transactions open for too long, keep them as short lived as possible to avoid deadlocks.
Check that your queries are running as expected by using a tool like NHProf. Often people load up too much of the graph, which impacts performance and can create deadlocks.
Check your mappings for things like not.lazyload() and see if you actually need the additional data in the queries, and keep the results returned to a minimum. Check your queries' execution plans and ensure adequate indexes are in place.
I have had issues with MVC3 action filters being cached, which meant transactions were not always started but would attempt to be closed, causing issues. I moved all my transaction commits into ActionResults in the controllers to keep each transaction as short as possible and close to the action.
Check your cascades in your mappings and keep the updates to a minimum.
I am getting an error in my MVC-based website surrounding data concurrency in Linq to SQL:
"Row Not Found Or Changed"
After reading several posts on here it seems as though an accepted solution is to set all non-primary-key fields to UpdateCheck = Never in the dbml designer. Before taking the plunge with this, I wanted to ask: will I be losing anything if I do this?
To be honest, it seems to me like this should always be the case, as using the primary key should be the fastest way to find a record anyway. This would be assuming that you don't have any composite PKs. I'm not terribly familiar with data concurrency scenarios, but am I missing something here?
Thanks for your time!
[EDIT]
Thanks for the feedback guys! To give some more detail, the specific scenario I have is this:
I have an Article table with a number of fields (title, content, author, etc.). One of the fields that gets updated very frequently (any time anyone views the article) is a popularity field, which gets incremented with every click. I saw the original error mentioned above when I updated the article text in the database and then navigated to that article on the live site (which attempted to update the popularity field).
For starters it sounds like I need to NOT be using a shared DataContext. Beyond that, maybe look into setting certain fields to UpdateCheck = Never. That being said, I definitely do not want the article popularity to fail to update due to some concurrency issue. So with that being the case, is there a way for me to ensure this with optimistic concurrency?
Thanks again!!
I do sometimes change UpdateCheck to Never on all but the primary key(s) on entity classes in the designer myself. What you lose is notification of interim changes to the data since you loaded it; what you end up with is a "last change wins" type of scenario. If that is OK in your situation, then... fine.
There are situations where that is definitely not a good thing to do. For instance, checking/adjusting a bank account funds balance... or any situation where an alteration to a field is going to be based on a calculation with the current value as one of the operands.
set all non-primary-key fields to UpdateCheck = Never in the dbml designer. Before taking the plunge with this, I wanted to ask, will I be losing anything if I do this?
The job of concurrency mechanisms is to prevent multiple users from stomping each others changes. Pessimistic concurrency achieves this by locking out changes to a record when it is read. Then unlocking the record when changes are complete. Optimistic concurrency achieves this by remembering what the record looked like when it was read, and then not modifying the record if it has changed.
Linq to Sql uses optimistic concurrency. If you set the UpdateCheck property that way, you are disabling optimistic concurrency checking.
Consider this record:
Customer (Name = "Bob", Job = "Programmer", Salary = 35000.00)
LinqToSql DataContext#1 reads the record
LinqToSql DataContext#2 reads the record and commits the following change (Salary = 50000.00)
LinqToSql DataContext#1 attempts to commit the following change (Job = "Janitor")
With optimistic concurrency, DataContext#1 now gets a concurrency exception to resolve. Without optimistic concurrency, we now have a janitor making 50k.
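The mechanic behind that exception is simply "update only if the row still looks the way I read it." A toy in-memory simulation of the scenario above (plain JavaScript; nothing Linq to Sql specific):

```javascript
// Toy optimistic-concurrency check (an assumption-level sketch, not Linq to Sql):
// the update is applied only if every column still holds the value the caller
// originally read; otherwise it is rejected as a conflict.
function tryUpdate(row, original, changes) {
  var conflicted = Object.keys(original).some(function (key) {
    return row[key] !== original[key];
  });
  if (conflicted) return false;        // someone changed the row since it was read
  Object.keys(changes).forEach(function (key) { row[key] = changes[key]; });
  return true;
}

var row = { Name: "Bob", Job: "Programmer", Salary: 35000.00 };

// Context #1 reads the record and keeps a snapshot.
var snapshot1 = { Name: "Bob", Job: "Programmer", Salary: 35000.00 };

// Context #2 reads the record and commits Salary = 50000 first.
tryUpdate(row, { Name: "Bob", Job: "Programmer", Salary: 35000.00 }, { Salary: 50000.00 });

// Context #1's change is now rejected: the row no longer matches its snapshot,
// so Bob does not end up as a janitor making 50k.
var ok = tryUpdate(row, snapshot1, { Job: "Janitor" });
```

Disabling UpdateCheck corresponds to skipping the `conflicted` test entirely, which is exactly how the janitor ends up with the 50k salary.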