I have a web app using MVC and EF. I am using the Repository and Unit of Work patterns from the Microsoft online documentation.
I am trying to insert multiple rows from multiple tables.
The code looks something like this:
unitOfWork.Table1.Insert(row1);
unitOfWork.Save(); // recId is the primary key; it is auto-generated after the save.
row2.someId = row1.recId;
unitOfWork.Table2.Insert(row2);
unitOfWork.Save();
If anything goes wrong when inserting row2, I need to roll back both row1 and row2.
How do I implement BeginTransaction/Commit/Rollback with UnitOfWork pattern?
Thanks.
Your example doesn't really appear to provide anything more than a thin abstraction over the DbContext. To avoid these issues it is better to utilize EF as an ORM rather than a simple data translation service, by using reference (navigation) properties rather than relying on setting FK values directly.
Take the example of an Order for a new Customer, where you want a CustomerId to be present on the Order.
Problem: the new Customer's ID is generated by the DB, so it will only be available after SaveChanges.
Solution: Use EF navigation properties and let EF manage the FKs.
var customer = new Customer
{
    Name = customerName,
    // etc.
};
var order = new Order
{
    OrderNumber = orderNumber,
    // etc.
    Customer = customer,
};
dbContext.Orders.Add(order);
dbContext.SaveChanges();
Note that we didn't have to add the Customer to the dbContext explicitly; it is added via the order's Customer reference, and the Order table's CustomerId is set automatically. If there is a Customers DbSet on the context you can add the customer explicitly as well, but only one SaveChanges call is needed.
To set the navigation properties up see: https://stackoverflow.com/a/50539429/423497
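For reference, here is a minimal sketch of entity classes set up that way (the key and collection property names are my own assumptions for illustration):

public class Customer
{
    public int CustomerId { get; set; }
    public string Name { get; set; }
    // Inverse navigation: one customer, many orders.
    public virtual ICollection<Order> Orders { get; set; } = new List<Order>();
}

public class Order
{
    public int OrderId { get; set; }
    public string OrderNumber { get; set; }
    // Navigation property; EF fills the CustomerId FK column from this reference on SaveChanges.
    public virtual Customer Customer { get; set; }
}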
** Edit (based on comments on 1-many relationships) **
For collections some common traps to avoid would include setting the collection references directly for entities that have already been associated with the DbContext, and also utilizing virtual references so that EF can best manage the tracking of instances in the collection.
If an order has multiple customers then you would have a customers collection in the order:
public virtual List<Customer> Customers { get; set; } = new List<Customer>();
and optionally an Order reference in your customer:
public virtual Order Order { get; set; }
Mapping these would look like: (from the Order perspective)
HasMany(x => x.Customers)
.WithRequired(x => x.Order)
.Map(x => x.MapKey("OrderId"));
or use the parameterless .WithRequired() overload if the customer does not have an Order reference.
This is based on relationships where the entities do not have FK fields declared. If you declare FKs then .Map(...) becomes .HasForeignKey(x => x.FkProperty).
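For example, if Customer declared an OrderId FK property, a sketch of the mapping (still from the Order perspective) might be:

// Assumes Customer declares: public int OrderId { get; set; }
HasMany(x => x.Customers)
    .WithRequired(x => x.Order)
    .HasForeignKey(x => x.OrderId);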
From there if you are creating a new order:
var order = new Order
{
    OrderNumber = "12345",
    Customers = new[]
    {
        new Customer { Name = "Fred" },
        new Customer { Name = "Ginger" }
    }.ToList()
};

using (var context = new MyDbContext())
{
    context.Orders.Add(order);
    context.SaveChanges();
}
This should all work as expected. It will save the new order and both associated customers.
However, if you load the order from the DbContext and want to manipulate the customers associated to it, then there are a couple caveats.
1. You should eager-load the Customers collection so that EF knows about those entities.
2. You need to manipulate the collection with Add/Remove rather than replacing the collection reference, to avoid a mismatch between what the code appears to do and what EF actually tracks.
Something like:
using (var context = new MyDbContext())
{
    var order = context.Orders.Find(1);
    order.Customers = new[]
    {
        new Customer { Name = "Roy" }
    }.ToList();
    context.SaveChanges();
}
This will result in "Roy" being added to the Customers rather than replacing them.
To replace them you need to remove the existing customers first, then add the new one.
using (var context = new MyDbContext())
{
    var order = context.Orders.Find(1);
    context.Customers.RemoveRange(order.Customers); // Assuming customers cannot exist without an order. If OrderId is nullable, this line can be omitted.
    order.Customers.Clear();
    order.Customers.Add(new Customer { Name = "Roy" });
    context.SaveChanges();
}
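On caveat 1: the example above relies on lazy loading (via Find and the virtual collection) to populate order.Customers before it is cleared. A minimal sketch of eager-loading the collection explicitly instead, assuming the order's key property is named OrderId:

// Requires: using System.Data.Entity; for the lambda Include overload.
using (var context = new MyDbContext())
{
    // Load the order together with its existing customers so EF tracks them all.
    var order = context.Orders
        .Include(o => o.Customers)
        .Single(o => o.OrderId == 1);

    context.Customers.RemoveRange(order.Customers);
    order.Customers.Clear();
    order.Customers.Add(new Customer { Name = "Roy" });
    context.SaveChanges();
}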
This starts to fall apart if the collections are not virtual. For instance:
using (var context = new MyDbContext())
{
    var order = context.Orders.Find(1);
    order.Customers = new[]
    {
        new Customer { Name = "Roy" }
    }.ToList();
    context.SaveChanges();
}
If the customers collection is virtual, then after SaveChanges order.Customers will report a collection size of 3 elements. If it is not virtual, it will report a size of 1 element even though there are now 3 records associated with the order in the DB. This leads to all kinds of issues where projects get caught out with invalid data state, duplicate records and the like.
Cases where you see some records getting missed will most likely be due to a missing virtual on the references, manipulating collections outside of what EF is tracking, or other manipulation of the tracking state (a common issue when projects are set up to detach/re-attach entities to and from contexts).
Related
I have 2 classes, like the below.
They can have very large collections - a Website may have 2,000+ WebsitePages and vice-versa.
class WebsitePage
{
public int ID {get;set;}
public string Title {get;set;}
public List<Website> Websites {get;set;}
}
class Website
{
public int ID {get;set;}
public string Title {get;set;}
public List<WebsitePage> WebsitePages {get;set;}
}
I am having trouble removing a WebsitePage from a Website, particularly when removing a WebsitePage from multiple Websites.
For example, I might have code like this:
var pageToRemove = db.WebsitePages.FirstOrDefault();
var websites = db.Websites.Include(i => i.WebsitePages).ToList();
foreach (var website in websites)
{
    website.WebsitePages.Remove(pageToRemove);
}
If each website Include()s 2,000+ pages, you can imagine it takes ages to load that second line.
But if I don't Include() the WebsitePages when fetching the Websites, there is no child collection loaded for me to delete from.
I have tried to just Include() the pages that I need to delete, but of course when saving that gives me an empty collection.
Is there a recommended or better way to approach this?
I am working with an existing MVC site and I would rather not have to create an entity class for the join table unless absolutely necessary.
No, you can't... normally.
A many-to-many relationship (with a hidden junction table) can only be affected by adding/removing items in the nested collections. And for this the collections must be loaded.
But there are some options.
Option 1.
Delete data from the junction table by raw SQL. Basically this looks like
context.Database.ExecuteSqlCommand(
    "DELETE FROM WebsiteWebsitePage WHERE WebsiteID = x AND WebsitePageID = y");
(The example is shown without parameters; in real code you should pass the IDs as SQL parameters, as sketched below.)
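A minimal parameterized sketch of the same delete (assuming the junction table and column names above, with using System.Data.SqlClient; for SqlParameter):

context.Database.ExecuteSqlCommand(
    "DELETE FROM WebsiteWebsitePage WHERE WebsiteID = @websiteId AND WebsitePageID = @pageId",
    new SqlParameter("@websiteId", websiteId),
    new SqlParameter("@pageId", websitePageId));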
Option 2.
Include the junction into the class model, i.e. map the junction table to a class WebsiteWebsitePage. Both Website and WebsitePage will now have
public ICollection<WebsiteWebsitePage> WebsiteWebsitePages { get; set; }
and WebsiteWebsitePage will have reference properties to both Website and WebsitePage. Now you can manipulate the junctions directly through the class model.
I consider this the best option, because everything happens the standard way of working with entities with validations and tracking and all. Also, chances are that sooner or later you will need an explicit junction class because you're going to want to add more data to it.
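A rough sketch of such a junction entity and its use (the class, DbSet and key configuration are assumptions and would need to match your model):

public class WebsiteWebsitePage
{
    // Composite key; configure via the fluent API, e.g.
    // modelBuilder.Entity<WebsiteWebsitePage>().HasKey(x => new { x.WebsiteID, x.WebsitePageID });
    public int WebsiteID { get; set; }
    public int WebsitePageID { get; set; }

    public virtual Website Website { get; set; }
    public virtual WebsitePage WebsitePage { get; set; }
}

// Removing a page from a website then becomes a delete of a single junction row:
var link = db.WebsiteWebsitePages
    .Single(x => x.WebsiteID == websiteId && x.WebsitePageID == pageId);
db.WebsiteWebsitePages.Remove(link);
db.SaveChanges();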
Option 3.
The box of tricks.
I tried to do this by removing a stub entity from the collection. In your case: create a WebsitePage object with a valid primary key value and remove it from Website.WebsitePages without loading the collection. But EF doesn't notice the change because it isn't tracking Website.WebsitePages, and the item is not in the collection to begin with.
But this made me realize I had to make EF track a Website.WebsitePages collection with 1 item in it and then remove that item. I got this working by first building the Website item and then attaching it to a new context. I'll show the code I used (a standard Product - Category model) to prevent typos.
Product prd;

// Step 1: build an object with 1 item in its collection
Category cat = new Category { Id = 3 }; // Stub entity
using (var db = new ProdCatContext())
{
    db.Configuration.LazyLoadingEnabled = false;
    prd = db.Products.First();
    prd.Categories.Add(cat);
}

// Step 2: attach to a new context and remove the category.
using (var db = new ProdCatContext())
{
    db.Configuration.LazyLoadingEnabled = false;
    db.Products.Attach(prd);
    prd.Categories.Remove(cat);
    db.SaveChanges(); // Deletes the junction record.
}
Lazy loading is disabled, otherwise the Categories would still be loaded when prd.Categories is addressed.
My interpretation of what happens here is: In the second step, EF not only starts tracking the product when you attach it, but also its associations, because it 'knows' you can't load these associations yourself in a many to many relationship. It doesn't do this, however, when you add the category in the first step.
I am using Entity Framework 6 Code First. I created a separate table that contains the UserId from the aspnet_Users table and a Department field, so that a user only has access to the departments listed for them.
All of my tables are generated via Code First and use the db.context, while the membership and role tables are pre-generated by MVC via a script and are not part of the Entity Framework model. How do I get a list of UserIds from the aspnet_Users table when it's not in the db.context?
How do I query tables in MVC outside of my db.context?
Your DbContext will have a Database property. Off of that you will find two overloads for general SQL queries:
DbRawSqlQuery SqlQuery (Type elementType, string sql, params object[] parameters);
DbRawSqlQuery<T> SqlQuery<T>(string sql, params object[] parameters);
For example:
var result = ctx.Database.SqlQuery<Foo>("select * from foo");
More information here
Since you are using .Net Membership, you could always call the
MembershipProvider.GetAllUsers(int pageIndex, int pageSize, out int totalRecords)
Then from that result, generate a list of UserIds. The role provider also offers similar functionality with a Roles.GetAllRoles() method.
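For example, a minimal sketch using the static Membership API (this assumes the default SQL membership provider is configured in web.config, whose ProviderUserKey is a Guid):

using System;
using System.Linq;
using System.Web.Security;

// Enumerate all users from the configured membership provider and project out their IDs.
var userIds = Membership.GetAllUsers()
    .Cast<MembershipUser>()
    .Select(u => (Guid)u.ProviderUserKey)
    .ToList();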
The membership provider and role provider offer many more useful methods to hopefully get you the data you are looking for. If they still don't have what you are after you have a couple of more options. You can use your db context to execute raw SQL. See MSDN for more info.
Another option is to create additional entity classes that match the DB structure of those tables and add them to your DbContext. The downside to this approach is that it could allow another developer, or even yourself, to create and update users and roles without going through the proper providers, in which case you would lose some functionality. If that is a concern, you could always create a DB view and map to that to ensure read-only access. It's a bit more overhead, but it does give you type safety and a familiar way to query the data.
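For instance, a rough sketch of mapping a read-only entity onto the existing table (or onto a view over it); the class name and mapping here are assumptions that would need to match your schema:

public class AspnetUserRecord
{
    public Guid UserId { get; set; }
    public string UserName { get; set; }
}

// In your DbContext:
public DbSet<AspnetUserRecord> AspnetUsers { get; set; }

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Point the entity at the existing table, or at a read-only view such as vw_AspnetUsers.
    modelBuilder.Entity<AspnetUserRecord>()
        .HasKey(u => u.UserId)
        .ToTable("aspnet_Users");
    base.OnModelCreating(modelBuilder);
}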
Here is what I did to get the complete solution I wanted:
To query the data:
string sql = "SELECT UserId, UserName FROM aspnet_users WHERE ApplicationId = @p0";
using (var context = new NameSystemContext())
{
    // Pass the application id as a SQL parameter rather than concatenating it into the string.
    var users = context.Database.SqlQuery<AspnetUser>(sql, Config.Instance.AppId());
    ViewBag.UserId = new SelectList(users, "UserId", "UserName").ToList();
}
In your view models or somewhere define a class:
// Requires: using System; and using System.ComponentModel.DataAnnotations;
public class AspnetUser
{
    [Key]
    //[Column(Order = 1)]
    public Guid UserId { get; set; }

    //[Key]
    //[Column(Order = 2)]
    public string UserName { get; set; }
}
// ViewBag.UserId can then be used in the view:
@Html.DropDownList("UserId", null, "Select...", htmlAttributes: new { @class = "form-control" })
We have controllers that read Entities with certain criteria and return a set of view models containing the data to the view. The view uses a Kendo Grid, but I don't think that makes a particular difference.
In each case, we have a Linq Query that gets the overall collection of entity rows and then a foreach loop that creates a model from each row.
Each entity has certain look ups as follows:
those with a 1:1 relationship, e.g. Assigned to (via a foreign key to a single person)
those with a 1:many relationship e.g. copy parties (to 0:many people - there are not many of these)
counts of other relationships (e.g. the number of linked orders)
any (e.g. whether any history exists)
If we do these in the model creation, the performance is not good as the queries must be run separately for each and every row.
We have also tried using includes to eager load the related entities but once you get more than two, this starts to deteriorate too.
I have seen that compiled queries and LoadProperty may be an option and I am particularly interested in the latter.
It would be great to understand best practice in these situations, so I can direct my investigations.
Thanks,
Chris.
Edit - sample code added. However, I'm looking for best practice.
public JsonResult ReadUsersEvents([DataSourceRequest] DataSourceRequest request, Guid userID)
{
    var diaryEventModels = new List<DiaryEventModel>();
    var events = UnitOfWork.EventRepository.All().Where(e => e.UserID == userID);
    foreach (var eventItem in events)
    {
        var diaryModel = new DiaryEventModel(eventItem);
        diaryEventModels.Add(diaryModel);
    }
    var result = diaryEventModels.ToDataSourceResult(request);
    return Json(result, JsonRequestBehavior.AllowGet);
}
public DiaryEventModel(Event eventItem)
{
    // Regular properties from the entity - no issue here as the data is retrieved in the original query
    ID = eventItem.ID;
    Start = eventItem.StartDateTime;
    End = eventItem.EndDateTime;
    EventDescription = eventItem.Description;

    // One-to-one looked-up property example
    Creator = eventItem.Location.FullName;

    // Calculation example based on 0-to-many looked-up properties; we also use .Any in some cases
    // This is a simplified example
    AttendeeCount = eventItem.EventAttendees.Count();

    // 0-to-many looked-up properties
    EventAttendees = eventItem.EventAttendees.Select(e => new SelectListItem
    {
        Text = e.Person.FullName,
        Value = e.Person.ID.ToString()
    }).ToList();
}
I've read an article saying that in Entity Framework the query is only sent to the database when we call .ToList(), .Single(), or .First().
I have thousands of records, so rather than loading all the data I'd like to return it in pages. I'm using PagedList to do the paging in MVC. If I'm not mistaken, when we call for example products.ToPagedList(pageNumber, 10), it should take only 10 records of data, not the whole set. Am I right?
Next, I'm using automapper to map from entities to viewmodel.
List<ProductViewModel> productsVM = Mapper.Map<List<Product>, List<ProductViewModel>>(products);
return productsVM.ToPagedList(pageNumber, 10);
As you can see in the snippet above, does it take only 10 records before .ToPagedList() is called? If the mapping calls .ToList() internally, I think it will pull all the data from the database and then return 10 records. How can I trace this?
The easiest way to see what is going on at database level is to use Sql Server Profiler. Then you will be able to see the sql queries that the entity framework is executing.
If you are using Sql Express then you can use Sql Express Profiler to do the same thing.
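Another option, if you are on EF6, is to log the generated SQL straight from the context via Database.Log (the context and query names here are assumptions based on the earlier examples):

using (var context = new MyDbContext())
{
    // Write every SQL command EF executes to the debug output.
    context.Database.Log = sql => System.Diagnostics.Debug.WriteLine(sql);

    var page = context.Products.OrderBy(p => p.ProductId).ToPagedList(pageNumber, 10);
    // Check the logged SELECT to confirm only one page of rows is fetched.
}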
No, it doesn't take only 10 records before the paged list. The way your code is written, AutoMapper will force execution of the deferred query before it reaches PagedList, which means it will return all the data (let's suppose 1,000 records). PagedList will then take 10 items from the already materialized list and report the total record count as 1,000.
I think you want to filter to 10 records in the database, which performs better, so you should apply PagedList to the IQueryable of your database entities like this:
var filteredProducts = dbContext.Products.OrderBy(p => p.ProductId).ToPagedList(pageNumber, 10);
return Mapper.Map<List<Product>, List<ProductViewModel>>(filteredProducts.ToList());
The OrderBy is mandatory for PagedList.
BE CAREFUL WITH AUTOMAPPER
Consider the following scenario: what if your Product entity had a child relationship with ProductReview (an ICollection<ProductReview>), like this:
public class ProductReview
{
    public int ProductId { get; set; }
    public string Description { get; set; }
    public int ReviewerId { get; set; }
    public double Score { get; set; }
}

public class Product
{
    public int ProductId { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public virtual ICollection<ProductReview> Reviews { get; set; }
}
...and your ProductViewModel had an int property ReviewsCount to show in your view?
When AutoMapper maps and transforms your entities into view models, it accesses the Reviews property of each Product in the list (let's suppose 10, in your case) one by one, and calls Reviews.Count() to fill ReviewsCount in your ProductViewModel.
Considering my example, where I never eager-loaded the products' Reviews: if lazy loading is on, AutoMapper will execute ten queries (one per product) to count the Reviews. Count is a fast operation and ten products are just a few, but if instead of a count you were actually mapping the ProductReviews to ProductReviewViewModels, this would be rather heavy. If lazy loading is turned off, we would get an exception, since Reviews would be null.
One possible solution, is to eager load all child you might need during mapping, like this:
var filteredProducts = dbContext.Products.Include("Reviews").OrderBy(p => p.ProductId).ToPagedList(pageNumber, 10);
return Mapper.Map<List<Product>, List<ProductViewModel>>(filteredProducts.ToList());
...so 10 products and their reviews would be retrieved in just one query, and no other queries would be executed by AutoMapper.
But.......I just need a count, do I really need to retrieve ALL Reviews just to avoid multiple queries?
Isn't it also heavy to load all reviews and all their expensive fields like Description which may have thousands of characters???
Yes, absolutely. Avoid mixing PagedList with AutoMapper for these scenarios.
Just do a projection like this:
var filteredProducts = dbContext.Products
    .Select(p => new ProductViewModel
    {
        ProductId = p.ProductId,
        ProductName = p.Name,
        ProductDescription = p.Description,
        ReviewsCount = p.Reviews.Count(),
        ScoreAverage = p.Reviews.Select(r => r.Score).DefaultIfEmpty().Average()
    })
    .OrderBy(p => p.ProductId).ToPagedList(pageNumber, 10);
Now you are loading your 10 products, projecting them into ProductViewModel, calculating the Reviews count and score average, without retrieving all Reviews from database.
Of course there are scenarios where you might really need all child entities loaded/materialized, but other than that, projection ftw.
You can also put the Select() part inside an extension class and encapsulate all your projections in extension methods, so you can reuse them like you would do with AutoMapper, as sketched below.
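For example, a minimal sketch of such an extension (the class and method names are my own):

public static class ProductProjections
{
    // Reusable projection from Product entities to the view model; executed in the database.
    public static IQueryable<ProductViewModel> ToViewModel(this IQueryable<Product> products)
    {
        return products.Select(p => new ProductViewModel
        {
            ProductId = p.ProductId,
            ProductName = p.Name,
            ProductDescription = p.Description,
            ReviewsCount = p.Reviews.Count(),
            ScoreAverage = p.Reviews.Select(r => r.Score).DefaultIfEmpty().Average()
        });
    }
}

// Usage:
// var page = dbContext.Products.ToViewModel().OrderBy(p => p.ProductId).ToPagedList(pageNumber, 10);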
I'm not saying AutoMapper is evil and you shouldn't use it, I use it myself in some situations, you just need to use it when it's appropriate.
EDIT: AUTOMAPPER DOES SUPPORT PROJECTION
I found this question where @GertArnold explains the following about AutoMapper:
...the code base which added support for projections that get translated
into expressions and, finally, SQL
So be happy, just follow his suggestion.
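In practice that means you can project straight from the queryable, for example with AutoMapper's queryable extensions (a rough sketch; the exact API depends on your AutoMapper version, and it assumes a Product -> ProductViewModel map is configured):

using AutoMapper.QueryableExtensions;

var page = dbContext.Products
    .ProjectTo<ProductViewModel>() // translated into the SQL projection rather than run in memory
    .OrderBy(p => p.ProductId)
    .ToPagedList(pageNumber, 10);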
I have this code in a Windows Service targeted to .Net 4.5 that uses a database-first Entity Framework layer:
var existingState = DataProcessor.GetProcessState(workerId);
existingState.ProcessStatusTypeId = (int)status;
existingState.PercentProgress = percentProgress;
existingState.ProgressLog = log;
DataProcessor.UpdateProcessState(existingState);
And this code in a data processing class in the same solution:
public ProcessState GetProcessState(int id)
{
    using (var context = new TaskManagerEntities())
    {
        var processes = (from p in context.ProcessStates.Include("ProcessType").Include("ProcessStatusType")
                         where p.IsActive && p.ProcessStateId == id
                         select p);
        return processes.FirstOrDefault();
    }
}

public ProcessState UpdateProcessState(ProcessState processState)
{
    using (var context = new TaskManagerEntities())
    {
        context.ProcessStates.Add(processState);
        context.Entry(processState).State = System.Data.EntityState.Modified;
        context.SaveChanges();
    }
    return processState;
}
ProcessState is a parent to two other classes, ProcessStatusType and ProcessType. When I run that code in the Windows service, it retrieves a record, updates the entity and saves it. Despite the fact that the ProcessType child is never used in the above code, when the save on the ProcessState entity is performed, EF does an insert on the ProcessType table and creates a new record in it. It then changes the FK in the ProcessState entity to point at the new child and saves it to the database.
It does not do this in the ProcessStatusType table, which is set up with an essentially identical FK parent-child relationship.
I now have a database full of identical ProcessType entries that I don't need, and I don't know why this is occurring. I feel like I'm making some obvious mistake that I can't see because this is my first EF project. Is the issue that I'm allowing the context to expire in between calls but maintaining the same entity?
Using Add will set the state of all elements to Added, which is causing the child elements to be inserted. The parent element is not inserted as you specify EntityState.Modified for this element.
Try using the following in the UpdateProcessState rather than using Add.
context.ProcessStates.Attach(processState);
context.Entry(processState).State = EntityState.Modified;
context.SaveChanges();
Attach will set the state of all elements to Unchanged and by specifying Modified for the parent element you are indicating that only this element should be updated.
On another note: you should use the strongly-typed Include(x => x.ProcessType) rather than the string-based Include("ProcessType"), as sketched below.
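A minimal sketch of GetProcessState using the lambda overload (it assumes the navigation properties are named ProcessType and ProcessStatusType, matching the strings above, and requires a using directive for System.Data.Entity):

public ProcessState GetProcessState(int id)
{
    using (var context = new TaskManagerEntities())
    {
        // Strongly-typed Includes survive renames and refactoring, unlike the string overload.
        return context.ProcessStates
            .Include(p => p.ProcessType)
            .Include(p => p.ProcessStatusType)
            .FirstOrDefault(p => p.IsActive && p.ProcessStateId == id);
    }
}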