Going nuts with Breeze, Linq, DTO's, projections - breeze

After struggling for a while trying to protect my model while still enjoying Breeze metadata, I finally created a second DbContext just for the metadata. That's the one passed to EFContextProvider. So I have one DbContext for the model, and one that serves as a data access layer, with DTOs.
After that I tried hard to use AutoMapper to map inside LINQ projections, but kept hitting the wall with a NullReferenceException. However, this library, http://linqprojector.codeplex.com/, which is related and uses the exact same syntax, works perfectly.
Now, I have a method on my server that actually returns what I want: a DTO containing a list.
So say I have a class Blog containing a list of Posts in the model. The method returns an object BlogDTO containing a list of PostsDTO.
BUT, in Breeze, in the BlogDTO object, the array of posts stays empty. I witness with my own eyes the data being sent to the browser, but for some reason, Breeze ignores some of it!
Honestly, there really are quite a few problems to solve going down this path.
Just wanted to share it with you guys, in case anyone understands this and can help me. Here's the Breeze query:
var query = EntityQuery
    .from('BlogWithPosts')
    .withParameters({ id: blogId });

return manager.executeQuery(query)
    .then(querySucceeded)
    .fail(queryFailed);

function querySucceeded(data) {
    console.log(data);
    var s = data.results[0];
    return blogObservable(s);
}
So to be clear: in the data object, the XHR property's responseText holds all the data that I want! Do I have to parse it myself? What was the point of getting my metadata down to Breeze then...

OK, I finally figured this one out. Apparently Breeze requires the InverseProperty attribute. Once that was set, I could see my related entities!
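For reference, here is a minimal sketch of what that annotation can look like on the DTO classes. The class and property names are made up to match the example above, and this is only one way to express the relationship:

using System.Collections.Generic;
using System.ComponentModel.DataAnnotations.Schema;

public class BlogDTO
{
    public int Id { get; set; }
    public string Title { get; set; }

    // Tells EF (and therefore the Breeze metadata) that PostDTO.Blog
    // is the other end of this navigation.
    [InverseProperty("Blog")]
    public ICollection<PostDTO> Posts { get; set; }
}

public class PostDTO
{
    public int Id { get; set; }
    public int BlogId { get; set; }
    public string Body { get; set; }

    public BlogDTO Blog { get; set; }
}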

Another important element to take into account: avoid circular references in your classes. Otherwise Breeze might simply ignore related entities, for all I know.

Related

Deserialize Breeze JObject into EntityInfo without saving changes

I have a Breeze controller that accepts a JObject. Is there an easy way to deserialize that JObject into its strongly typed source EntityInfo object without going through SaveChanges / BeforeSaveChanges? I just want to get the object that the JObject payload is referring to.
Thanks for your help.
I ended up using the approach outlined in this related question.
Uninitialised JsonSerializer in Breeze SaveBundleToSaveMap sample
Option 1
Take a look at the code in the Breeze.ContextProvider class's CreateEntityInfoFromJson method. It's protected internal, so you'll need to copy the code or call it using reflection. Use at your own risk.
Option 2
The Breeze SaveChanges code uses a public class called SaveWorkState, which is constructed with two arguments: a ContextProvider and a JArray. To get an idea of what's expected for the JArray, take a look at the "entities" property in the JSON sent to the server during a SaveChanges.
Once the SaveWorkState is constructed you can access the EntityInfo objects via the EntityInfoGroups property.
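A rough sketch of Option 2, pieced together purely from the description above (untested; the namespaces and exact shape of EntityInfoGroups vary between breeze.server.net versions, so treat the details as assumptions to verify against the source):

using Breeze.ContextProvider;       // SaveWorkState, EntityInfo (verify namespace for your version)
using Breeze.ContextProvider.EF6;   // EFContextProvider<T>
using Newtonsoft.Json.Linq;

public static SaveWorkState BuildSaveWorkState(JObject saveBundle)
{
    // "MyDbContext" is a placeholder for your own EF context type.
    var contextProvider = new EFContextProvider<MyDbContext>();

    // The "entities" property has the same shape as the payload Breeze sends
    // to the server during a normal saveChanges call.
    var entityArray = (JArray)saveBundle["entities"];

    var workState = new SaveWorkState(contextProvider, entityArray);

    // The EntityInfo objects are now available, grouped by entity type,
    // via workState.EntityInfoGroups - without SaveChanges ever being called.
    return workState;
}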
I've never tried either option before; I found these by looking at the breeze.server.net code.
Yes it is possible ... and pretty easy too. I answered this question in GREAT detail on the related SO question, Uninitialised JsonSerializer in Breeze SaveBundleToSaveMap sample that you referenced.

Many Duplicate Queries in Entity Framework 5 Code-First (n+1)

One of our contractors implemented a repository pattern with a code-first approach. We use Service Locator as our DI pattern. When we retrieve data from the DB, we pass an interface to the GetQueryable function and get the data. However, I see serious performance issues in our application, so I set up MiniProfiler and MiniProfiler.EF to see where the bottleneck is.
We have a Case table which has quite a few fields (around 25), and some of those fields are associated with other tables, one-to-one and one-to-many (only one field has a many relation to another table). When I try to view the case detail, it runs around 400 SQL queries, and SQL takes around 40 percent of the load time as far as MiniProfiler is concerned. Here are our GetQueryable and Find methods:
public IQueryable<T> GetQueryable<T>(params string[] includes)
{
    Type type = _impls.Value[typeof (T).Name].GetType();
    DbSet dbSet = Db.Set(type);
    foreach (var include in includes)
    {
        dbSet.Include(include);
    }
    return ((IQueryable<T>) dbSet);
}
I added the includes to this method to attach the related tables, but it did not make any difference. And here is the Find method:
public T Find<T>(long? id)
{
    Type type = _impls.Value[typeof(T).Name].GetType();
    return (T) Db.Set(type).Find(id);
}
I have tried to apply pretty much every performance improvement, but the number of SQL queries has not gone down. I tried to disable lazy loading, but it caused many problems in other parts of the application.
Some additional information: the Case table has 70,000 rows and our Dialogs table has 500,000 rows. Case and Dialog are associated as one-to-many, and each case has 20-40 dialog entries.
My questions are:
Why does Include not make any difference when I use it?
Is there any other way to cut down the number of queries run?
Do you think the implementation is the problem?
Thanks
Include returns a new IQueryable and does not modify the source query. In addition you can use the generic version of Set which simplifies the code a bit:
public IQueryable<T> GetQueryable<T>(params string[] includes)
{
    IQueryable<T> query = Db.Set<T>();
    foreach (var include in includes)
    {
        query = query.Include(include);
    }
    return query;
}
Step 1: Fire your contractor. Seriously. Like right now. That is some awful code. Not only did they miss something as simple and basic as using the generic version of Set, but they've successfully only made working with Entity Framework more complex, because all the repository does is proxy Entity Framework methods with its own unique and bastardized API.
That said, there's really not enough here to diagnose what your problem is. The use of Include may give you larger queries, but it should actually serve to reduce the overall number of queries issued. It's possible you're just not using includes where you should be.
Now, the fact that you "tried to disable lazy loading, but it caused many problems in other parts of the application", means that you're relying too heavily on lazy-loading. Basically, you're loading in stuff you don't even know about, which is the antithesis of optimization. Ironically, you'd actually be best served by going ahead and disabling lazy-loading, and then tracking down where your code fails because of that. If you want to actually lazy-load that thing, you can use .Load (see: Explicit Loading). But, if you want to eager-load to reduce queries, then you know what includes you need to add.
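To make that concrete, here is a minimal sketch of the two deliberate alternatives to lazy loading. The Case/Dialog classes, property names and context are placeholders based on the question, not the actual code:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Dialog
{
    public long Id { get; set; }
    public long CaseId { get; set; }
}

public class Case
{
    public long Id { get; set; }
    public virtual ICollection<Dialog> Dialogs { get; set; }
}

public class CaseContext : DbContext
{
    public DbSet<Case> Cases { get; set; }
    public DbSet<Dialog> Dialogs { get; set; }
}

public class CaseRepository
{
    private readonly CaseContext _db = new CaseContext();

    public CaseRepository()
    {
        // Every related load is now deliberate; nothing sneaks in lazily.
        _db.Configuration.LazyLoadingEnabled = false;
    }

    public Case GetCaseWithDialogs(long id)
    {
        // Eager load: one query instead of 1 + N lazy-load round trips.
        return _db.Cases
            .Include(c => c.Dialogs)
            .Single(c => c.Id == id);
    }

    public void LoadDialogs(Case c)
    {
        // Explicit load, for the rare place that really needs the data on demand.
        _db.Entry(c).Collection(x => x.Dialogs).Load();
    }
}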

Proper implementation of Repository Pattern using IQueryables?

I'm building a Repository layer for my MVC application with methods like GetObject, UpdateObject, DeleteObject, etc.
This is what I have now:
public List<Object> GetObjects()
{
    return _db.Objects.Where(o => o.IsArchived == false).ToList();
}
But I'm wondering if it would be better to return IQueryables for lists so that the least amount of data gets sent to the client when filters are applied in the UoW or Service layers. Would it be best to do something like this?
public IQueryable<Object> GetObjects()
{
    return _db.Objects.Where(o => o.IsArchived == false);
}
The downside of returning IQueryable is that if you ever have a different implementation of the repository, say using a different ORM, or storing data in a non-SQL database, the cloud or an XML file, it would be hard to implement the same interface. It would be much easier if you returned a more generic collection of domain objects, for example IEnumerable. You can always pass filtering criteria in.
The other drawback of returning IQueryable is that when you actually run the query, the object context may already be disposed (depending on your implementation), or it may be kept in memory longer than required.
A leaky abstraction such as IQueryable can also cause subtle problems. For example, imagine you want to get some data from the database and order it by a Guid. If you enumerate the query by calling ToList() prior to sorting, you'll get different results than if you sort first. The reason is that in the first case the sorting happens in .NET, while in the other it happens in SQL Server, which orders uniqueidentifiers in a completely different way.
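A small illustration of that, reusing the _db.Objects set from the snippets above and assuming (for this sketch only) that the entity has a Guid Id property:

// Ordering translated to SQL: uniqueidentifier comparison, byte-group order.
var orderedInSql = _db.Objects
    .Where(o => o.IsArchived == false)
    .OrderBy(o => o.Id)
    .ToList();

// Ordering done in .NET after ToList(): System.Guid comparison, which uses a
// different byte order, so the two lists can disagree for the same rows.
var orderedInMemory = _db.Objects
    .Where(o => o.IsArchived == false)
    .ToList()
    .OrderBy(o => o.Id)
    .ToList();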
The nice thing about returning IQueryable here is that you can continue to build up your query further without hitting the db. Once you call ToList it will hit the db and you can't customize your query further without hitting the database a second time.

Update relationships when saving changes of EF4 POCO objects

Entity Framework 4, POCO objects and ASP.NET MVC 2. I have a many-to-many relationship, let's say between BlogPost and Tag entities. This means that in my T4-generated POCO BlogPost class I have:
public virtual ICollection<Tag> Tags {
    // getter and setter with the magic FixupCollection
}
private ICollection<Tag> _tags;
I ask for a BlogPost and the related Tags from an instance of the ObjectContext and send it to another layer (the View in the MVC application). Later I get back the updated BlogPost with changed properties and changed relationships. For example, it had tags "A", "B" and "C", and the new tags are "C" and "D". In my particular example there are no new Tags and the properties of the Tags never change, so the only thing which should be saved is the changed relationships. Now I need to save this in another ObjectContext. (Update: I have now tried to do it in the same context instance and also failed.)
The problem: I can't make it save the relationships properly. I tried everything I found:
Controller.UpdateModel and Controller.TryUpdateModel don't work.
Getting the old BlogPost from the context then modifying the collection doesn't work. (with different methods from the next point)
This probably would work, but I hope this is just a workaround, not the solution :(.
Tried Attach/Add/ChangeObjectState functions for BlogPost and/or Tags in every possible combinations. Failed.
This looks like what I need, but it doesn't work (I tried to fix it, but can't for my problem).
Tried ChangeState/Add/Attach/... the relationship objects of the context. Failed.
"Doesn't work" means in most cases that I worked on the given "solution" until it produces no errors and saves at least the properties of BlogPost. What happens with the relationships varies: usually Tags are added again to the Tag table with new PKs and the saved BlogPost references those and not the original ones. Of course the returned Tags have PKs, and before the save/update methods I check the PKs and they are equal to the ones in the database so probably EF thinks that they are new objects and those PKs are the temp ones.
A problem I know about, and one that might make it impossible to find an automated simple solution: when a POCO object's collection is changed, that should happen through the above-mentioned virtual collection property, because then the FixupCollection trick will update the reverse references on the other end of the many-to-many relationship. However, when a View "returns" an updated BlogPost object, that doesn't happen. This means that maybe there is no simple solution to my problem, but that would make me very sad and I would hate the EF4-POCO-MVC triumph :(. It would also mean that EF can't do this in the MVC environment whichever EF4 object types are used :(. I think the snapshot-based change tracking should find out that the changed BlogPost has relationships to Tags with existing PKs.
Btw: I think the same problem happens with one-to-many relations (Google and my colleague say so). I will give it a try at home, but even if that works it doesn't help me with the six many-to-many relationships in my app :(.
Let's try it this way (a rough sketch of these steps in code follows below):
Attach the BlogPost to the context. After attaching an object to the context, the state of the object, all related objects and all relations is set to Unchanged.
Use context.ObjectStateManager.ChangeObjectState to set your BlogPost to Modified.
Iterate through the Tag collection.
Use context.ObjectStateManager.ChangeRelationshipState to set the state for the relation between the current Tag and the BlogPost.
SaveChanges.
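An untested sketch of those steps against the EF4 ObjectContext API. "BlogPosts" as the entity set name is an assumption, and which relationship state you set depends on which links were actually added or removed:

using System.Data;           // EntityState
using System.Data.Objects;   // ObjectContext, ObjectStateManager

public static void SaveBlogPostWithTags(ObjectContext context, BlogPost post)
{
    // 1. Attach: the post, its Tags and the relations all start out Unchanged.
    context.AttachTo("BlogPosts", post);

    // 2. The post's own scalar properties should be written on SaveChanges.
    context.ObjectStateManager.ChangeObjectState(post, EntityState.Modified);

    // 3./4. Tell EF what actually happened to each link. Here every tag in the
    // collection is treated as a newly added link; links that were removed
    // would be marked EntityState.Deleted instead.
    foreach (var tag in post.Tags)
    {
        context.ObjectStateManager.ChangeRelationshipState(
            post, tag, "Tags", EntityState.Added);
    }

    // 5.
    context.SaveChanges();
}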
Edit:
I guess one of my comments gave you false hope that EF will do the merge for you. I played a lot with this problem and my conclusion is that EF will not do this for you. I think you have also found my question on MSDN. In reality there are plenty of such questions on the Internet. The problem is that it is not clearly stated how to deal with this scenario. So let's have a look at the problem:
Problem background
EF needs to track changes on entities so that persistence knows which records have to be updated, inserted or deleted. The problem is that it is the ObjectContext's responsibility to track changes, and the ObjectContext is able to track changes only for attached entities. Entities which are created outside the ObjectContext are not tracked at all.
Problem description
Based on the above description we can clearly state that EF is better suited to connected scenarios where the entity is always attached to a context - typical for a WinForms application. Web applications require a disconnected scenario where the context is closed after request processing and the entity content is passed as the HTTP response to the client. The next HTTP request provides the modified content of the entity, which has to be recreated, attached to a new context and persisted. Recreation usually happens outside of the context's scope (layered architecture with persistence ignorance).
Solution
So how do we deal with such a disconnected scenario? When using POCO classes we have three ways to deal with change tracking:
Snapshot - requires the same context = useless for the disconnected scenario
Dynamic tracking proxies - requires the same context = useless for the disconnected scenario
Manual synchronization.
Manual synchronization of a single entity is an easy task. You just need to attach the entity and call AddObject for inserting, DeleteObject for deleting, or set the state in the ObjectStateManager to Modified for updating. The real pain comes when you have to deal with an object graph instead of a single entity. This pain is even worse when you have to deal with independent associations (those that don't use a Foreign Key property) and many-to-many relations. In that case you have to manually synchronize each entity in the object graph but also each relation in the object graph.
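For the single-entity cases, that boils down to something like the following sketch ("BlogPosts" is again an assumed entity set name, and newPost/editedPost/obsoletePost are placeholder detached instances):

// Insert a brand new, detached entity:
context.AddObject("BlogPosts", newPost);

// Update a detached entity whose scalar properties changed:
context.AttachTo("BlogPosts", editedPost);
context.ObjectStateManager.ChangeObjectState(editedPost, EntityState.Modified);

// Delete a detached entity:
context.AttachTo("BlogPosts", obsoletePost);
context.DeleteObject(obsoletePost);

context.SaveChanges();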
Manual synchronization is the solution proposed by the MSDN documentation: Attaching and Detaching Objects says:
Objects are attached to the object context in an Unchanged state. If you need to change the state of an object or the relationship because you know that your object was modified in the detached state, use one of the following methods.
The mentioned methods are ChangeObjectState and ChangeRelationshipState of the ObjectStateManager = manual change tracking. A similar proposal is in another MSDN documentation article, Defining and Managing Relationships, which says:
If you are working with disconnected objects you must manually manage the synchronization.
Moreover there is a blog post related to EF v1 which criticises exactly this behavior of EF.
Reason for solution
EF has many "helpful" operations and settings like Refresh, Load, ApplyCurrentValues, ApplyOriginalValues, MergeOption etc. But from my investigation all these features work only for a single entity and affect only scalar properties (= not navigation properties and relations). I'd rather not test these methods with complex types nested in an entity.
Other proposed solution
Instead of real merge functionality the EF team provides something called Self Tracking Entities (STE), which doesn't solve the problem. First of all, STEs work only if the same instance is used for the whole processing. In a web application that is not the case unless you store the instance in the view state or the session. Because of that I'm very unhappy with EF and I'm going to check out the features of NHibernate. A first observation suggests that NHibernate may have such functionality.
Conclusion
I will wrap up these assumptions with a single link to another related question on the MSDN forum. Check Zeeshan Hirani's answer. He is the author of Entity Framework 4.0 Recipes. If he says that automatic merging of object graphs is not supported, I believe him.
But there is still a possibility that I'm completely wrong and some automatic merge functionality exists in EF.
Edit 2:
As you can see, this was already added to MS Connect as a suggestion in 2007. MS closed it as something to be done in the next version, but actually nothing has been done to close this gap except STE.
I have a solution to the problem described above by Ladislav. I have created an extension method for the DbContext which will automatically perform the add/update/deletes based on a diff of the provided graph and the persisted graph.
At present, using Entity Framework you need to perform the updates of the contacts manually: check whether each contact is new and add it, check whether it was updated and edit it, check whether it was removed and delete it from the database. Once you have to do this for a few different aggregates in a large system, you start to realize there must be a better, more generic way.
Please take a look and see if it can help http://refactorthis.wordpress.com/2012/12/11/introducing-graphdiff-for-entity-framework-code-first-allowing-automated-updates-of-a-graph-of-detached-entities/
You can go straight to the code here https://github.com/refactorthis/GraphDiff
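For a flavour of what that looks like, here is a sketch from memory of the GraphDiff README; verify the names against the linked repository, and note that BloggingContext and the detached blogPost variable are placeholders:

using RefactorThis.GraphDiff;

using (var db = new BloggingContext())
{
    // Tags already exist on their own and are only linked/unlinked here,
    // so they are mapped as an associated collection rather than an owned one.
    db.UpdateGraph(blogPost, map => map.AssociatedCollection(p => p.Tags));
    db.SaveChanges();
}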
I know it's late for the OP but since this is a very common issue I posted this in case it serves someone else.
I've been toying around with this issue and I think I have a fairly simple solution. What I do is:
Save the main object (Blogs for example) by setting its state to Modified.
Query the database for the updated object, including the collections I need to update.
Query and convert with .ToList() the entities I want my collection to include.
Update the main object's collection(s) to the list I got from step 3.
SaveChanges();
In the following example, "dataobj" and "_categories" are the parameters received by my controller: "dataobj" is my main object, and "_categories" is an IEnumerable containing the IDs of the categories the user selected in the view.
db.Entry(dataobj).State = EntityState.Modified;
db.SaveChanges();
dataobj = db.ServiceTypes.Include(x => x.Categories).Single(x => x.Id == dataobj.Id);
var it = _categories != null ? db.Categories.Where(x => _categories.Contains(x.Id)).ToList() : null;
dataobj.Categories = it;
db.SaveChanges();
It even works for multiple relations
The Entity Framework team is aware that this is a usability issue and plans to address it post-EF6.
From the Entity Framework team:
This is a usability issue that we are aware of and is something we have been thinking about and plan to do more work on post-EF6. I have created this work item to track the issue: http://entityframework.codeplex.com/workitem/864 The work item also contains a link to the user voice item for this--I encourage you to vote for it if you have not done so already.
If this impacts you, vote for the feature at
http://entityframework.codeplex.com/workitem/864
All of the answers were great to explain the problem, but none of them really solved the problem for me.
I found that if I didn't use the relationship in the parent entity but just added and removed the child entities everything worked just fine.
Sorry for the VB but that is what the project I am working in is written in.
The parent entity "Report" has a one to many relationship to "ReportRole" and has the property "ReportRoles". The new roles are passed in by a comma separated string from an Ajax call.
The first line below removes all the existing child entities; if I used "report.ReportRoles.Remove(f)" instead of "db.ReportRoles.Remove(f)" I would get the error.
report.ReportRoles.ToList.ForEach(Function(f) db.ReportRoles.Remove(f))
Dim newRoles = If(String.IsNullOrEmpty(model.RolesString), New String() {}, model.RolesString.Split(","))
newRoles.ToList.ForEach(Function(f) db.ReportRoles.Add(New ReportRole With {.ReportId = report.Id, .AspNetRoleId = f}))

How do I delete an object from an Entity Framework model without first loading it?

I am quite sure I've seen the answer to this question somewhere, but as I couldn't find it with a couple of searches on SO or google, I ask it again anyway...
In Entity Framework, the only way to delete a data object seems to be:
MyEntityModel ent = new MyEntityModel();
ent.DeleteObject(theObjectToDelete);
ent.SaveChanges();
However, this approach requires the object to be loaded into, in this case, the Controller first, just to delete it. Is there a way to delete a business object referencing only, for instance, its ID?
If there is a smarter way using Linq or Lambda expressions, that is fine too. The main objective, though, is to avoid loading data just to delete it.
It is worth knowing that Entity Framework supports both LINQ to Entities and Entity SQL. If you find yourself wanting to do deletes or updates that could potentially affect many records, you can use the equivalent of ExecuteNonQuery.
In Entity SQL this might look like
Using db As New HelloEfEntities
    Dim qStr = "Delete " & _
               "FROM Employee"
    db.ExecuteStoreCommand(qStr)
    db.SaveChanges()
End Using
In this example, db is my ObjectContext. Also note that the ExecuteStoreCommand function takes an optional array of parameters.
I found this post, which states that there really is no better way to delete records. The explanation given was that all the foreign keys, relations, etc. that are unique to this record are also deleted, and so EF needs to have the correct information about the record. I am puzzled as to why this couldn't be achieved without loading data back and forth, but as it's not going to happen very often I have decided (for now) that I won't worry about it.
If you do have a solution to this problem, feel free to let me know =)
Apologies in advance, but I have to question your goal.
If you delete an object without ever reading it, then you can't know whether another user changed the object between the time you confirmed that you wanted to delete it and the actual delete. In "plain old SQL", this would be like doing:
DELETE FROM FOO
WHERE ID = 1234
Of course, most people don't actually do this. Instead, they do something like:
DELETE FROM FOO
WHERE ID = 1234
AND NAME = ?ExpectedName AND...
The point is that the delete should fail (do nothing) if another user has changed the record in the interim.
With this better statement of the problem, there are two possible solutions when using the Entity Framework.
In your Delete method, load the existing instance, compare the expected values of the properties, and delete if they are the same. In this case, the Entity Framework will take care of writing a DELETE statement which includes the property values.
Write a stored procedure which accepts both the ID and the other property values, and execute that (sketched below).
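The same guarded delete can also be issued directly from the context; this is only a sketch of the idea, where MyEntityModel is the context from the question and the Foo table, Id and Name columns are placeholders (a stored procedure call would work the same way):

// Guarded delete: only removes the row if it still looks the way the user saw it.
using (var db = new MyEntityModel())
{
    int rows = db.ExecuteStoreCommand(
        "DELETE FROM Foo WHERE Id = {0} AND Name = {1}",
        id, expectedName);

    if (rows == 0)
    {
        // Nothing deleted: the record was changed or removed by someone else.
    }
}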
There's a way to spoof-load an entity by re-calculating its EntityKey. It looks like a bit of a hack, but might be the only way to do this in EF.
Blog article on Deleting without Fetching
You can create an object with the same ID and pass it through to the delete, BUT it's only good for simple objects; if you have complex relations you may need more than that.
var user = new User { ID = 15 };
context.Entry(user).State = EntityState.Deleted;
context.SaveChanges();
var toDelete = new MyEntityModel{
    GUID = guid,
    //or ID = id, depending on the key
};
Db.MyEntityModels.Attach(toDelete);
Db.MyEntityModels.DeleteObject(toDelete);
Db.SaveChanges();
In case your key contains multiple columns, you need to provide all values (e.g. GUID, columnX, columnY etc).
Also have a look here for a generic function if you want something fancier.
With Entity Framework 5 there is the Entity Framework Extended Library, available on NuGet. Then you can write something like:
context.Users.Delete(u => u.Id == id);
It is also useful for bulk deletes.
