I have a rather deep hierarchy of objects that I'm trying to persist with Entity Framework 4, POCO, PI (Persistence Ignorance) and Code First. Suddenly things started working pretty well when it dawned on me to not use the new() operator. As originally written, the objects frequently use new() to create child objects.
Instead I'm using my take on the Repository Pattern to create all child objects as needed. For example, given:
class Adam
{
    List<Child> children = new List<Child>();
    void AddChildGivenInput(string input) { children.Add(new Child(...)); }
}
class Child
{
    List<GrandChild> grandchildren = new List<GrandChild>();
    void AddGrandChildGivenInput(string input) { grandchildren.Add(new GrandChild(...)); }
}
class GrandChild
{
}
("GivenInput" implies some processing not shown here)
I define an AdamRepository like:
class AdamRepository
{
    Adam Add()
    {
        return objectContext.CreateObject<Adam>();
    }
    Child AddChildGivenInput(Adam adam, string input)
    {
        // List<T>.Add returns void, so create the child first and return it afterwards.
        var child = new Child(...);
        adam.children.Add(child);
        return child;
    }
    GrandChild AddGrandchildGivenInput(Child child, string input)
    {
        var grandChild = new GrandChild(...);
        child.grandchildren.Add(grandChild);
        return grandChild;
    }
}
Now, this works well enough. However, I'm no longer "ignorant" of my persistence mechanism as I have abandoned the new() operator.
Additionally, I'm at risk of an anemic domain model since so much logic ends up in the repository rather than in the domain objects.
After much ado, a question:
Or rather several questions...
Is this pattern required to work with EF 4 Code First?
Is there a way to retain use of new() and still work with EF 4 / POCO / Code First?
Is there another pattern that would leave logic in the domain object and still work with EF 4 / POCO / Code First?
Will this restriction be lifted in later versions of Code First support?
Sometimes trying to go the POCO / Persistence Ignorance route feels like swimming upstream; other times it feels like swimming up Niagara Falls. Still, I want to believe...
Here are a couple of points that might help answer your question:
In your classes you have a field for the children collection and a method to add to the children. EF in general (not just Code First) currently requires that collections are surfaced as properties, so this pattern is not currently supported. More flexibility in how EF interacts with classes is a common ask, and our team is looking at how we can support this at the moment.
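For illustration, here is a rough sketch of the shape EF currently expects -- the children collection exposed as a property -- while keeping the "GivenInput" logic on the entity itself (the virtual modifier and the constructor initialization are common conventions assumed here, not something taken from the question's code):
public class Adam
{
    public Adam()
    {
        Children = new List<Child>();
    }
    // EF discovers navigation collections that are exposed as properties;
    // marking them virtual additionally enables proxy-based lazy loading.
    public virtual ICollection<Child> Children { get; set; }
    // Domain logic can stay on the entity; "..." stands for the processing
    // implied by "GivenInput" in the question.
    public void AddChildGivenInput(string input)
    {
        Children.Add(new Child(...));
    }
}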
You mentioned that you need to explicitly register entities with the context; this isn't necessarily the case. In the following example, if GetAdam() returned an Adam object that is attached to the underlying context, then the new child Cain would be automatically discovered by EF when you save and inserted into the database.
var adam = myAdamRepository.GetAdam();
var cain = new Child();
adam.Children.Add(cain);
~Rowan
I am using ASP.NET MVC with Entity Framework.
I have an entity called "AllUserData".
I have a second entity called "Genres". Each row in the table is a genre.
These two entities have a one-to-many relationship. So in the class definition of the "AllUserData" class, I have
public virtual ICollection<Genres> PreferredGenres { get; set; }
I am able to successfully read the genres preferred by each user using
AllUserData aud = db.AllUserData.Single(b => b.UserId == currentUserId);
var chosengenres = aud.PreferredGenres.ToList();
However I cannot use
AllUserData aud = db.AllUserData.Single(b => b.UserId == currentUserId);
var chosengenres = await aud.PreferredGenres.ToListAsync();
Visual Studio says "'ICollection<Genres>' does not contain a definition for 'ToListAsync' and the best extension method overload 'QueryableExtensions.ToListAsync<Genres>(IQueryable<Genres>)' requires a receiver of type 'IQueryable<Genres>'".
Why is this happening? The only difference between the two was that in one case I used ToList() and in the other I used ToListAsync().
Can async methods not be used with navigation properties? In a real life application there are many cases where there are relationships between various entities; can asynchronous methods not be used when accessing properties using these relationships? Is there some way around this? I'd rather do things asynchronously if possible.
Your navigation property would have been declared as an ICollection, which is standard for entity navigation properties -- so the compiler won't be pleased.
However, you can get an IQueryable with AsQueryable or with another Select() (if that makes sense for your needs) between the navigation collection and the ToListAsync().
ToList() can operate on IEnumerable (which includes ICollection and IQueryable). ToListAsync() has only been made to work with IQueryable, hence the message from VS. The "why" may lie in the implementation details.
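As a minimal sketch of the second suggestion -- using SelectMany rather than Select so the result is a flat list -- and assuming the same db context and entity names as in the question: composing the whole query on the context's IQueryable, instead of on the already-materialized collection, keeps the EF async extension available.
// The query stays an EF IQueryable end to end, so
// QueryableExtensions.ToListAsync (System.Data.Entity) applies.
var chosengenres = await db.AllUserData
    .Where(b => b.UserId == currentUserId)
    .SelectMany(u => u.PreferredGenres)
    .ToListAsync();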
OK, it seems Entity Framework doesn't allow loading the navigation property asynchronously in this manner.
What worked for me is eager loading the navigation property at the beginning. (Thanks to user daf for pointing me in the right direction.)
So instead of
AllUserData aud = await db.AllUserData.SingleAsync(b => b.UserId == currentUserId);
var usergenres = await aud.PreferredGenres.ToListAsync();
which does not work, I can instead do this :
AllUserData aud = await db.AllUserData.Include(p=>p.PreferredGenres).SingleAsync(b => b.UserId == currentUserId);
var usergenres = aud.PreferredGenres.ToList();
This way the first query asks to pull all the necessary information from the database through eager loading with .Include(), and does so asynchronously thanks to .SingleAsync(). The next statement uses .ToList(), but it shouldn't make a second trip to the database since the data is already eagerly loaded, so it doesn't matter that it is not async. The whole operation is now async.
I also realized this same question has been asked previously on Stack Overflow; it is a little difficult to find depending on what search terms you use: Entity Framework Designer First get navigation property as Tasks. The selected answer provides 4 different solutions, the first of which is similar to what I am using now.
It also appears possible to designate navigation properties as async when defining the entity class -- see the section titled "Async Lazy Loading" on the page linked here. The sample snippet demonstrates how to do this for a single navigation property; I don't know if it can be done for an ICollection navigation property as well, which is what I need, but I didn't do any further digging either.
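For completeness, explicit loading is another way to keep the navigation-property load asynchronous; here is a sketch assuming EF6's DbContext API and the question's types:
AllUserData aud = await db.AllUserData.SingleAsync(b => b.UserId == currentUserId);
// Build an IQueryable over just this entity's collection via the change
// tracker entry, then execute it asynchronously.
var usergenres = await db.Entry(aud)
    .Collection(a => a.PreferredGenres)
    .Query()
    .ToListAsync();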
Trying to figure out how to extend entities that I query from breeze.js on a per-view basis in a single-page application. Right now breeze is acting as the gate-keeper when it comes to extending (a.k.a. materializing) them, and I'm wondering what other options are available to allow me to do this. I initially started with knockout's mapping plugin but found that it refused to handle child collections for some reason, so I moved to using breeze's constructor function and initializer methodology. The problem with this is that you can only define one custom "model" for an entity. I am looking for approaches that would allow a custom "model" of an entity on a per-view basis. I've already ruled out multiple managers. Querying metadata multiple times is a huge unnecessary hit just to get this working.
This diagram visualizes what I'm trying to achieve. Both View 1 and View 2 ultimately query Entity B, and both views require their own specific customization of the "model" of Entity B. Since View 1 loads first, its custom "model" of Entity B "wins" and View 2 doesn't have the opportunity to customize it. When View 2 eventually runs its query, any entities of type B that were already loaded by View 1 will have the custom "model" that View 1 defined, which will make View 2 explode during binding. Any entities not already loaded by View 1 will now have View 2's custom "model", which would eventually crash View 1 if it could even get that far down the road. See this post.
My thought was to manually create my own custom "model" for each view that has an Entity observable; I could then iterate over every entity returned from a breeze query, new up this custom "model", and pass in the current item, assigning it to the Entity property. I don't really want to do this because I'll now have tons of iteration code everywhere, and I'd much rather use knockout's mapping plugin. Pseudo code:
function view1EntityBModel(entity) {
var self = this;
self.Entity = ko.observable(entity);
self.myCustomProperty = ko.observable();
...
}
function view2EntityBModel(entity) {
var self = this;
self.Entity = ko.observable(entity);
self.isExpanded = ko.observable(false);
...
}
I was wondering if there are any other solutions available to achieve this same goal?
Or even better does anyone know how to make the knockout mapping plugin working on child collections?
I think the problem here is that by the time the mapping plugin gets a-hold of the breeze data the Children collection has already been converted into an observable array and the mapping plugin doesn't know that it needs to "call" the Children() property in order to get back a list.
var categoryMapper = {
create: function (options) {
return new categoryModel(options.data);
},
Children: { // this doesn't fire for the children
create: function (options) {
return new categoryModel(options.data);
}
}
}
function categoryModel(data) {
var self = this;
ko.mapping.fromJS(data, {}, self);
}
Guessing that you've moved on by now, but thought I'd offer a recommendation for others in a similar position.
Our solution to a similar situation borrows from the breeze.js TempHire sample, which implements a client-side repository/unit-of-work (UoW) pattern. The solution uses an EntityManagerProvider to manage multiple EntityManagers. The EntityManagerProvider makes a single call for metadata, which is then used to create new child EntityManagers -- satisfying your concern regarding multiple metadata calls. You can then use custom models/UoWs/repositories to extend the child manager for specific views.
I have no idea if I'm doing this right, but this is how a Get method in my repository looks:
public IQueryable<User> GetUsers(IEnumerable<Expression<Func<User, object>>> eagerLoading)
{
IQueryable<User> query = db.Users.AsNoTracking();
if (eagerLoading != null)
{
foreach (var expression in eagerLoading)
{
query = query.Include(expression);
}
}
return query;
}
Let's say I also have a GeographyRepository with a GetCountries method that is similar to this.
I have 2 separate service layer classes calling these 2 separate repositories, sharing the same DbContext (EF 4.1 code-first).
So in my controller, I'd do:
myViewModel.User = userService.GetUserById(1);
myViewModel.Countries = geoService.GetCountries();
These are 2 separate calls to the database. If I didn't use these patterns and instead tied the interface directly to the database, I'd have 1 call. I guess it's something of a performance vs. maintainability trade-off.
My question is: can this be pushed down to 1 database call? Can we merge queries like this when views call multiple repositories?
I'd say that if performance is the real issue then I'd try and avoid going back to the database altogether. I'm assuming the list returned from geoService.GetCountries() is fairly static, so I'd be inclined to cache it in the service after the initial load and remove the database hit altogether. The fact that you have a service there suggests that it would be the perfect place to abstract away such details.
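As a rough sketch of that caching suggestion -- assuming a GeographyService that wraps the repository and a Country entity, with MemoryCache from System.Runtime.Caching as just one convenient option:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;

public class GeographyService
{
    private readonly GeographyRepository _repo;
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public GeographyService(GeographyRepository repo)
    {
        _repo = repo;
    }

    public IList<Country> GetCountries()
    {
        // Serve the fairly static country list from the cache; only hit the
        // database when the entry is missing or has expired.
        var countries = Cache.Get("countries") as IList<Country>;
        if (countries == null)
        {
            countries = _repo.GetCountries().ToList();
            Cache.Set("countries", countries, DateTimeOffset.Now.AddHours(1));
        }
        return countries;
    }
}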
Generally, when it comes to performance, it's rare that all perf-related issues can be tarred with the same brush; you need to analyse each situation and work out an appropriate solution for the specific perf issue you're having.
This question is very similar to this one. However, the resolutions to that question:
Do not seem to apply, or
Are somewhat suspect, and don't seem like a good approach to resolving the problem.
Basically, I'm iterating over a generic list of objects, and inserting them. Using MVC 2, EF 4 with the default code generation.
foreach(Requirement r in requirements)
{
var car = new CustomerAgreementRequirement();
car.CustomerAgreementId = viewModel.Agreement.CustomerAgreementId;
car.RequirementId = r.RequirementId;
_carRepo.Add(car); //Save new record
}
And the Repository.Add() method:
public class BaseRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
private TxRPEntities txDB;
private ObjectSet<TEntity> _objectSet;
public void Add(TEntity entity)
{
SetUpdateParams(entity);
_objectSet.AddObject(entity);
txDB.SaveChanges();
}
I should note that I've been successfully using the Add() method throughout my code for single inserts; this is the first time I've tried to use it to iteratively insert a group of objects.
The error:
System.InvalidOperationException: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: AcceptChanges cannot continue because the object's key values conflict with another object in the ObjectStateManager. Make sure that the key values are unique before calling AcceptChanges.
As stated in the prior question, the EntityKey is set to True, StoreGeneratedPattern = Identity. The actual table that is being inserted into is a relationship table, in that it is comprised of an identity field and two foreign key fields. The error always occurs on the second insert, regardless of whether that specific entity has been inserted before or not, and I can confirm that the values are always different, no key conflicts as far as the database is concerned. My suspicion is that it has something to do with the temporary entitykey that gets set prior to the actual insert, but I don't know how to confirm that, nor do I know how to resolve it.
My gut feeling is that the solution in the prior question, to set the SaveOptions to None, would not be the best solution. (See prior discussion here)
I've had this issue with my repository in a loop as well, and thought it might be caused by some weird race-like condition. What I've done is refactor out a UnitOfWork class, so that the repository Add() method only adds the entity to the object set and no longer saves the context. That way the repository is only responsible for the collection itself, and every operation on that collection happens in the scope of the unit of work.
The issue there is that in a loop you run out of memory damn fast with EF4, so you do still need to save the changes periodically; I just don't save after every Add().
public class BaseRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    private TxRPEntities txDB;
    private ObjectSet<TEntity> _objectSet;
    public void Add(TEntity entity)
    {
        SetUpdateParams(entity);
        _objectSet.AddObject(entity);
    }
    public void Save()
    {
        txDB.SaveChanges();
    }
Then you can do something like
foreach(Requirement r in requirements)
{
var car = new CustomerAgreementRequirement();
car.CustomerAgreementId = viewModel.Agreement.CustomerAgreementId;
car.RequirementId = r.RequirementId;
_carRepo.Add(car); // Queue the new record; nothing is saved here
if (/* some number-limiting condition if you have thousands */)
_carRepo.Save(); // To save periodically and clear memory
}
_carRepo.Save();
Note: I don't really like this solution, but I hunted around to try to find why things break in a loop when they work elsewhere, and that's the best I came up with.
We have had some odd collision issues if the entity is not added to the context directly after being created (before doing any assignments). The only time I've noticed the issue is when adding objects in a loop.
Try adding the newed-up entity to the context, then do the assignments, then save the context. Also, you don't need to save the context each time you add a new entity unless you absolutely need the primary key.
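Applied to the question's loop, and assuming the Add/Save split from the other answer (Add only queues the entity, Save calls SaveChanges), that suggestion would look roughly like this:
foreach (Requirement r in requirements)
{
    var car = new CustomerAgreementRequirement();
    // Add the entity to the context right after newing it up,
    // before assigning the foreign keys.
    _carRepo.Add(car);
    car.CustomerAgreementId = viewModel.Agreement.CustomerAgreementId;
    car.RequirementId = r.RequirementId;
}
// A single save for the whole batch, since the keys aren't needed mid-loop.
_carRepo.Save();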
Good day!
I have a legacy application whose data access layer consists of classes in which queries are run using SqlConnection/SqlCommand and the results are passed to the upper layers wrapped in untyped DataSets/DataTables.
Now I'm working on integrating this application into a newer one written in ASP.NET MVC 2, where LINQ2SQL is used for data access. I don't want to rewrite the fancy logic that generates the complex queries passed to SqlConnection/SqlCommand in LINQ2SQL (and I don't have permission to do so), but I'd like to get the results of these queries as strongly-typed object collections instead of untyped DataSets/DataTables.
The basic idea is to wrap the old data access code in something that looks like a proper "Model" from the ASP.NET MVC point of view.
What is the fast\easy way of doing this?
In addition to the answer below, here is a nice solution based on AutoMapper: http://elegantcode.com/2009/10/16/mapping-from-idatareaderidatarecord-with-automapper/
An approach that you could take is using a DataReader together with data transfer objects. For every object you want to work with, define a class in a data transfer object folder (or however your project is structured), then in your data access layer have something along the lines of the code below.
We used something very similar to this in a project with a highly normalized database; the code did not need that normalization, so we used stored procedures to put the data into more usable objects. If you also need to be able to save these objects, you will need to handle translating them into database commands.
What is the fast\easy way of doing this?
Depending on the number of classes etc. this may not be the fastest approach, but it will allow you to use the objects very similarly to the Linq objects, and depending on the type of collections used (IList, IEnumerable, etc.) you will be able to use the extension methods on those collection types.
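For example, the data transfer class that the method below populates could be as simple as this (Id and Name mirror the two columns read from the data reader; NewClass is just the placeholder name used in the snippet):
// Plain data transfer object: one property per column returned by the query.
public class NewClass
{
    public byte Id { get; set; }
    public string Name { get; set; }
}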
public IList<NewClass> LoadNewClasses(string abc)
{
List<NewClass> newClasses = new List<NewClass>();
using (DbCommand command = /* Get the command */)
{
// Add parameters
command.Parameters["#Abc"].Value = abc;
// Could also put the DataReader in a using block
IDataReader reader = /* Get Data Reader*/;
while (reader.Read())
{
NewClass newClass = new NewClass();
newClass.Id = (byte)reader["Id"];
newClass.Name = (string)reader["Name"];
newClasses.Add(newClass);
}
reader.Close();
}
return newClasses;
}