Database First with Multiple Tables with Foreign Keys in a Single View - asp.net-mvc

I have two tables department and teacher like this:
Department table (DeptID is the primary key)
DeptID | DeptName
1      | P
2      | C
3      | M
Teacher table (DeptID is a foreign key)
DeptID | TeacherName
1      | ABC
1      | PQR
2      | XYZ
I have used the database-first approach to create a single model from these two tables. I want to display details from both in a single view, like this:
TeacherName | DeptName
ABC         | P
PQR         | P
XYZ         | C
I tried to create controllers using scaffolding, but that only provides views and CRUD operations for a single table in the model.
Is there any method by which I can map these two tables together in a single view? Or is it easily achievable if I use different models for each table in the database?

You have to create a ViewModel:
public class DepartmentTeacher
{
    public int DeptID { get; set; }
    public string DeptName { get; set; }
    public int TeachID { get; set; }
    public string TeachName { get; set; }
}
using (var db = new SchoolContext())
{
    var teachers = (from tc in db.Teacher
                    join dp in db.Department on tc.DeptID equals dp.DeptID
                    //where tc.DeptID == someDeptId // add a filter here if you need one
                    select new DepartmentTeacher
                    {
                        DeptID = dp.DeptID,
                        DeptName = dp.DeptName,
                        TeachID = tc.TeachID,
                        TeachName = tc.TeachName
                    }).ToList();
    return View(teachers);
}
You can use this ViewModel for any further processing. However, you have to declare it as the model type on your view page.
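For illustration, a minimal sketch of such a view page, assuming the controller passes the list built above (the markup is an assumption, not part of the original answer):
@model IEnumerable<DepartmentTeacher>
<table>
    <tr><th>TeacherName</th><th>DeptName</th></tr>
    @foreach (var item in Model)
    {
        <tr><td>@item.TeachName</td><td>@item.DeptName</td></tr>
    }
</table>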

Related

EF Core Accessing a list relation

I have a question about an EF Core query and I'd appreciate your help! I have a .NET Core MVC project with three classes (tables): Products, Groups and ProductToGroups, with these relations.
In the Product and Group classes (tables):
public List<ProductToGroup> ProductToGroups { get; set; }
and in the ProductToGroup class (table):
public Product Product { get; set; }
public Group Group { get; set; }
Let's assume that I have two products with ProductIds 1 and 2, and two groups with GroupIds 1 and 2, and that in the ProductToGroup table I declared that product 1 has groups 1 and 2, and likewise product 2 has groups 1 and 2.
I've written this query, which loads the products into a list:
IQueryable<Products> result = _context.Products.Include(p => p.ProductToGroups);
Now I want to write a query that takes a GroupId and returns all matching products from result, using the ProductToGroups navigation. (It's a list relation, and if I use Single or First it only takes the first group stored in the database; for example, if I ask for products with GroupId = 2, it returns null and only matches GroupId = 1.)
Thanks a lot!
if I ask for products with GroupId = 2
Try
int[] groupIds = { 2 };
var products = _context.Products
    .Include(p => p.ProductToGroups)
    .ThenInclude(ptg => ptg.Group)
    .ToList();
var list = products.Where(p => groupIds.All(id => p.ProductToGroups.Any(ptg => ptg.Group.GroupId == id)));
My demo was about UserRole, which has the same shape as your ProductToGroups.
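A possible refinement (a sketch, not part of the original answer): the same filter can run server-side so only matching products are loaded; GroupId is assumed to be the key property on Group:
// Filter in the database instead of in memory.
int wantedGroupId = 2;
var productsInGroup = _context.Products
    .Include(p => p.ProductToGroups)
    .ThenInclude(ptg => ptg.Group)
    .Where(p => p.ProductToGroups.Any(ptg => ptg.Group.GroupId == wantedGroupId))
    .ToList();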

Neo4j Adding Multiple Nodes and Edges Efficiently

I have the below example.
I was wondering what is the best and quickest way to add a list of nodes and edges in a single transaction? I use the standard C# Neo4j .NET packages but am open to Neo4jClient, as I've read that's faster. Anything that supports .NET 4.5, to be honest.
I have lists of about 60,000 FooA objects that need to be added into Neo4j, and it can take hours!
Firstly, FooB objects hardly change, so I don't have to add them every day. The performance issue is with adding new FooA objects twice a day.
Each FooA object has two lists containing the relationships I need to add: RelA and RelB (see below).
public class FooA
{
    public long Id { get; set; } //UniqueConstraint
    public string Name { get; set; }
    public long Age { get; set; }
    public List<RelA> ListA { get; set; }
    public List<RelB> ListB { get; set; }
}
public class FooB
{
    public long Id { get; set; } //UniqueConstraint
    public string Prop { get; set; }
}
public class RelA
{
    public string Val1 { get; set; }
    public NodeTypeA Node { get; set; }
}
public class RelB
{
    public FooB Start { get; set; }
    public FooB End { get; set; }
    public string ValExample { get; set; }
}
Currently, I check if Node 'A' exists by matching by Id. If it does, I completely skip it and move on to the next item. If not, I create Node 'A' with its own properties. I then create the edges with their own unique properties.
That's quite a few transactions per item. Match node by Id -> add nodes -> add edges.
foreach (var ntA in FooAList)
{
    //First transaction.
    MATCH (n:FooA {Id: ntA.Id})
    if not exists
    {
        //2nd transaction
        CREATE (n:FooA {Id: 1234, Name: "Example", Age: toInteger(24)})
        //Multiple transactions.
        foreach (var a in ListA)
        {
            MATCH (n:FooA {Id: ntA.Id}), (n2:FooB {Id: a.Id}) WITH n, n2 LIMIT 1
            CREATE (n)-[:RelA {Prop: a.Val1}]->(n2)
        }
        foreach (var b in ListB)
        {
            MATCH (n:FooB {Id: b.Start.Id}), (n2:FooB {Id: b.End.Id}) WITH n, n2 LIMIT 1
            CREATE (n)-[:RelA {Prop: b.ValExample}]->(n2)
        }
    }
}
How would one go about adding a list of FooA's using, for example, Neo4jClient and UNWIND, or any other way apart from CSV import?
Hope that makes sense, and thanks!
The biggest problem is the nested lists, which mean you have to do your foreach loops, so you end up executing a minimum of 4 queries per FooA, which for 60,000 - well - that's a lot!
Quick Note RE: Indexing
First and foremost - you need an index on the Id property of your FooA and FooB nodes, this will speed up your queries dramatically.
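As an aside (a sketch, not from the original answer), the indexes can be created once up front via Neo4jClient; the Cypher syntax below assumes Neo4j 3.x - newer versions use CREATE INDEX FOR (n:FooA) ON (n.Id):
// One-off schema setup; run once before the batch loads, not per batch.
gc.Cypher.Create("INDEX ON :FooA(Id)").ExecuteWithoutResults();
gc.Cypher.Create("INDEX ON :FooB(Id)").ExecuteWithoutResults();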
I've played a bit with this, and have it storing 60,000 FooA entries, and creating 96,000 RelB instances in about 12-15 seconds on my aging computer.
The Solution
I've split it into 2 sections - FooA and RelB:
FooA
I've had to normalise the FooA class into something I can use in Neo4jClient - so let's introduce that:
public class CypherableFooA
{
    public CypherableFooA(FooA fooA)
    {
        Id = fooA.Id;
        Name = fooA.Name;
        Age = fooA.Age;
    }
    public long Id { get; set; }
    public string Name { get; set; }
    public long Age { get; set; }
    public string RelA_Val1 { get; set; }
    public long RelA_FooBId { get; set; }
}
I've added the RelA_Val1 and RelA_FooBId properties to be able to access them in the UNWIND. I convert your FooA using a helper method:
public static IList<CypherableFooA> ConvertToCypherable(FooA fooA)
{
    var output = new List<CypherableFooA>();
    foreach (var element in fooA.ListA)
    {
        var cfa = new CypherableFooA(fooA);
        cfa.RelA_FooBId = element.Node.Id;
        cfa.RelA_Val1 = element.Val1;
        output.Add(cfa);
    }
    return output;
}
This combined with:
var cypherable = fooAList.SelectMany(a => ConvertToCypherable(a)).ToList();
Flattens the FooA instances, so I end up with 1 CypherableFooA for each item in the ListA property of a FooA. e.g. if you had 2 items in ListA on every FooA and you have 5,000 FooA instances - you would end up with cypherable containing 10,000 items.
Now, with cypherable I call my AddFooAs method:
public static void AddFooAs(IGraphClient gc, IList<CypherableFooA> fooAs, int batchSize = 10000, int startPoint = 0)
{
    var batch = fooAs.Skip(startPoint).Take(batchSize).ToList();
    Console.WriteLine($"FOOA--> {startPoint} to {batchSize + startPoint} (of {fooAs.Count}) = {batch.Count}");
    if (batch.Count == 0)
        return;

    gc.Cypher
        .Unwind(batch, "faItem")
        .Merge("(fa:FooA {Id: faItem.Id})")
        .OnCreate().Set("fa = faItem")
        .Merge("(fb:FooB {Id: faItem.RelA_FooBId})")
        .Create("(fa)-[:RelA {Prop: faItem.RelA_Val1}]->(fb)")
        .ExecuteWithoutResults();

    AddFooAs(gc, fooAs, batchSize, startPoint + batch.Count);
}
This batches the query into batches of 10,000 (by default) - this takes about 5-6 seconds on mine - about the same as if I try all 60,000 in one go.
RelB
You store RelB in your example with FooA, but the query you're writing doesn't use the FooA at all, so what I've done is extract and flatten all the RelB instances in the ListB property:
var relBs = fooAList.SelectMany(a => a.ListB).ToList(); // ToList satisfies the IList<RelB> parameter below
Then I add them to Neo4j like so:
public static void AddRelBs(IGraphClient gc, IList<RelB> relbs, int batchSize = 10000, int startPoint = 0)
{
    var batch = relbs.Select(r => new { StartId = r.Start.Id, EndId = r.End.Id, r.ValExample })
                     .Skip(startPoint).Take(batchSize).ToList();
    Console.WriteLine($"RELB--> {startPoint} to {batchSize + startPoint} (of {relbs.Count}) = {batch.Count}");
    if (batch.Count == 0)
        return;

    var query = gc.Cypher
        .Unwind(batch, "rbItem")
        .Match("(fb1:FooB {Id: rbItem.StartId}), (fb2:FooB {Id: rbItem.EndId})")
        .Create("(fb1)-[:RelA {Prop: rbItem.ValExample}]->(fb2)");
    query.ExecuteWithoutResults();

    AddRelBs(gc, relbs, batchSize, startPoint + batch.Count);
}
Again, batching defaulted to 10,000.
Obviously timings will vary depending on the number of rels in ListB and ListA - my tests had one item in ListA and two in ListB.
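To tie it together, a minimal sketch of the calling code; the URI, the credentials, and the fooAList variable are assumptions:
// Hypothetical wiring (using Neo4jClient;) - adjust URI/credentials to your environment.
var gc = new GraphClient(new Uri("http://localhost:7474/db/data"), "neo4j", "password");
gc.Connect();

var cypherable = fooAList.SelectMany(a => ConvertToCypherable(a)).ToList();
AddFooAs(gc, cypherable);

var relBs = fooAList.SelectMany(a => a.ListB).ToList();
AddRelBs(gc, relBs);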

EF6 not loading child entities with alphanumeric IDs

Given the following tables:
SCHOOL
schoolid (int PK)
name
TEACHER
teacherId (int PK)
name
homeRoomId (FK varchar10)
subjectId (fk varchar10)
schoolid (FK int)
HOMEROOM
homeRoomId (PK varchar10)
roomNumber
active
SUBJECT
subjectId (PK varchar10)
name
active
I am using EF6 in an MVC app. I have lazy loading enabled. I am trying to return a list of all teachers for a given SchoolId and I need to include homeroom and subject data for each teacher.
A school contains many teachers, a teacher works for only one school, a teacher has only one homeroom and teaches only one subject. The homeroom and subject ids are varchars because they are pre-existing ids, and they look like: SUBJECT: A03, Math.
My code to load all teachers with homeroom and subject for a single schoolid:
public List<TeacherModel> GetTeachersBySchool(int schoolId)
{
    List<TeacherModel> teachers = new List<TeacherModel>();
    using (var db = new myDBEntities())
    {
        var list = db.Teacher.Where(a => a.SchoolId == schoolId).ToList();
        foreach (var s in list)
        {
            TeacherModel teacher = new TeacherModel()
            {
                TeacherId = s.TeacherId,
                Name = s.Name,
                HomeRoomId = s.HomeRoomId,
                HomeRoomNumber = s.HomeRoom.RoomNumber,
                SubjectId = s.SubjectId,
                SubjectName = s.Subject.Name
            };
            teachers.Add(teacher);
        }
        return teachers;
    }
}
The HomeRoom entity is loading, but the Subject entity is null, even though a SQL query in the database returns one row for this teacher. Due to the null Subject entity, the query errors out with "object reference not set" blah blah.
I have found that the problem seems to be when the SubjectId contains alpha characters. A couple of examples of a SubjectId are "A03" and "1001023". The second entity will load, the first will not. I assume that even though the datatype is string/varchar, EF6 is pulling out the numeric values and passing those as the id, so if the id has alphas, it fails.
Does this jibe? How do I fix it? As a last resort I can add a surrogate key (INT IDENTITY(1,1)) for use with these entities, but I'm hoping there is another way.
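One diagnostic worth trying (a sketch under the stated schema, not a confirmed fix): eager-load both navigations with Include, which bypasses the lazy-loading proxies and makes the generated join visible in a profiler:
// Requires: using System.Data.Entity; (for the lambda Include overload)
var list = db.Teacher
    .Include(t => t.HomeRoom)
    .Include(t => t.Subject)
    .Where(t => t.SchoolId == schoolId)
    .ToList();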

Entity Framework Include OrderBy random generates duplicate data

When I retrieve a list of items from a database, including some children (via .Include), and order them randomly, EF gives me an unexpected result: it creates/clones additional items.
To explain myself better, I've created a small and simple EF Code First project to reproduce the problem.
First I shall give you the code for this project.
The project
Create a basic MVC3 project and add the EntityFramework.SqlServerCompact package via NuGet.
That adds the latest versions of the following packages:
EntityFramework v4.3.0
SqlServerCompact v4.0.8482.1
EntityFramework.SqlServerCompact v4.1.8482.2
WebActivator v1.5
The Models and DbContext
using System.Collections.Generic;
using System.Data.Entity;

namespace RandomWithInclude.Models
{
    public class PeopleContext : DbContext
    {
        public DbSet<Person> Persons { get; set; }
        public DbSet<Address> Addresses { get; set; }
    }

    public class Person
    {
        public int ID { get; set; }
        public string Name { get; set; }
        public virtual ICollection<Address> Addresses { get; set; }
    }

    public class Address
    {
        public int ID { get; set; }
        public string AdressLine { get; set; }
        public virtual Person Person { get; set; }
    }
}
The DB Setup and Seed data: EF.SqlServerCompact.cs
using System.Collections.Generic;
using System.Data.Entity;
using System.Data.Entity.Infrastructure;
using RandomWithInclude.Models;

[assembly: WebActivator.PreApplicationStartMethod(typeof(RandomWithInclude.App_Start.EF), "Start")]

namespace RandomWithInclude.App_Start
{
    public static class EF
    {
        public static void Start()
        {
            Database.DefaultConnectionFactory = new SqlCeConnectionFactory("System.Data.SqlServerCe.4.0");
            Database.SetInitializer(new DbInitializer());
        }
    }

    public class DbInitializer : DropCreateDatabaseAlways<PeopleContext>
    {
        protected override void Seed(PeopleContext context)
        {
            var address1 = new Address { AdressLine = "Street 1, City 1" };
            var address2 = new Address { AdressLine = "Street 2, City 2" };
            var address3 = new Address { AdressLine = "Street 3, City 3" };
            var address4 = new Address { AdressLine = "Street 4, City 4" };
            var address5 = new Address { AdressLine = "Street 5, City 5" };
            context.Addresses.Add(address1);
            context.Addresses.Add(address2);
            context.Addresses.Add(address3);
            context.Addresses.Add(address4);
            context.Addresses.Add(address5);

            var person1 = new Person { Name = "Person 1", Addresses = new List<Address> { address1, address2 } };
            var person2 = new Person { Name = "Person 2", Addresses = new List<Address> { address3 } };
            var person3 = new Person { Name = "Person 3", Addresses = new List<Address> { address4, address5 } };
            context.Persons.Add(person1);
            context.Persons.Add(person2);
            context.Persons.Add(person3);
        }
    }
}
The controller: HomeController.cs
using System;
using System.Data.Entity;
using System.Linq;
using System.Web.Mvc;
using RandomWithInclude.Models;

namespace RandomWithInclude.Controllers
{
    public class HomeController : Controller
    {
        public ActionResult Index()
        {
            var db = new PeopleContext();
            var persons = db.Persons
                .Include(p => p.Addresses)
                .OrderBy(p => Guid.NewGuid());
            return View(persons.ToList());
        }
    }
}
The View: Index.cshtml
@using RandomWithInclude.Models
@model IList<Person>
<ul>
    @foreach (var person in Model)
    {
        <li>
            @person.Name
        </li>
    }
</ul>
This should be all, and your application should compile :)
The problem
As you can see, we have 2 straightforward models (Person and Address) and Person can have multiple Addresses.
We seed the generated database with 3 persons and 5 addresses.
If we get all the persons from the database, including their addresses, randomize the order, and just print out the names of those persons, that's where it all goes wrong.
As a result, I sometimes get 4 persons, sometimes 5, and sometimes 3 - and I expect 3. Always.
e.g.:
Person 1
Person 3
Person 1
Person 3
Person 2
So... it's copying/cloning data! And that's not cool.
It just seems that EF loses track of which addresses are children of which person.
The generated SQL query is this:
SELECT
[Project1].[ID] AS [ID],
[Project1].[Name] AS [Name],
[Project1].[C2] AS [C1],
[Project1].[ID1] AS [ID1],
[Project1].[AdressLine] AS [AdressLine],
[Project1].[Person_ID] AS [Person_ID]
FROM ( SELECT
NEWID() AS [C1],
[Extent1].[ID] AS [ID],
[Extent1].[Name] AS [Name],
[Extent2].[ID] AS [ID1],
[Extent2].[AdressLine] AS [AdressLine],
[Extent2].[Person_ID] AS [Person_ID],
CASE WHEN ([Extent2].[ID] IS NULL) THEN CAST(NULL AS int) ELSE 1 END AS [C2]
FROM [People] AS [Extent1]
LEFT OUTER JOIN [Addresses] AS [Extent2] ON [Extent1].[ID] = [Extent2].[Person_ID]
) AS [Project1]
ORDER BY [Project1].[C1] ASC, [Project1].[ID] ASC, [Project1].[C2] ASC
Workarounds
If I remove the .Include(p => p.Addresses) from the query, everything goes fine, but of course the addresses aren't loaded and accessing that collection will make a new call to the database every time.
I can first get the data from the database and randomize later by just adding a .ToList() before the .OrderBy, like this: var persons = db.Persons.Include(p => p.Addresses).ToList().OrderBy(p => Guid.NewGuid());
Does anybody have any idea of why it is happening like this?
Might this be a bug in the SQL generation?
As one can work out by reading AakashM's answer and Nicolae Dascalu's answer, it strongly seems LINQ's OrderBy requires a stable ranking function, which NEWID()/Guid.NewGuid is not.
So we have to use another random generator that would be stable inside a single query.
To achieve this, before each querying, use a .NET Random generator to get a random number. Then combine this random number with a unique property of the entity to get it randomly sorted. And to 'randomize' the result a bit more, checksum it. (CHECKSUM is a SQL Server function that computes a hash; the original idea was found on this blog.)
Assuming Person.Id is an int, you could write your query this way:
// Random instances should be stored and reused, not instantiated at each usage.
// But beware, it is not thread safe. If you want to share it between threads, you
// would have to use locks, see its documentation:
// https://learn.microsoft.com/en-us/dotnet/api/system.random
// But using locks is a bad idea for scalability, especially in a web context.
var randomGenerator = new Random();
// ...
var rnd = randomGenerator.NextDouble();
var persons = db.Persons
    .Include(p => p.Addresses)
    .OrderBy(p => SqlFunctions.Checksum(p.Id * rnd));
Like the NewGuid hack, this is very probably not a good random generator with a good distribution and so on. But it does not cause entities to get duplicated in results.
Beware:
If your query ordering does not guarantee a unique ranking for your entities, you must complement it to guarantee one. For example, if you use a non-unique property of your entities for the checksum call, then add something like .ThenBy(p => p.Id) after the OrderBy, as sketched below.
If the ranking is not unique for your queried root entity, its included children may get mixed with the children of other entities having the same ranking, and then the bug will still be there.
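A sketch of that complemented ordering; p.Name.Length stands in as a deliberately non-unique key for illustration:
var persons = db.Persons
    .Include(p => p.Addresses)
    .OrderBy(p => SqlFunctions.Checksum(p.Name.Length * rnd)) // non-unique ranking...
    .ThenBy(p => p.Id); // ...made unique, so each person's rows stay together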
Note:
I would prefer to use the .Next() method to get an int and then combine it through a xor (^) with an int unique property of the entity, rather than using a double and multiplying. But SqlFunctions.Checksum unfortunately does not provide an overload for the int data type, though the SQL Server function is supposed to support it. You may use a cast to overcome this, but to keep it simple I finally chose to go with the multiplication.
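For completeness, a sketch of that xor-with-cast variant; whether your EF version translates the cast and the bitwise xor is an assumption to verify:
var rndInt = randomGenerator.Next();
var persons = db.Persons
    .Include(p => p.Addresses)
    // The cast works around the missing int overload of SqlFunctions.Checksum.
    .OrderBy(p => SqlFunctions.Checksum((double?)(p.Id ^ rndInt)));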
tl;dr: There's a leaky abstraction here. To us, Include is a simple instruction to stick a collection of things onto each single returned Person row. But EF's implementation of Include is done by returning a whole row for each Person-Address combo, and reassembling at the client. Ordering by a volatile value causes those rows to become shuffled, breaking apart the Person groups that EF is relying on.
When we have a look at ToTraceString() for this LINQ:
var people = c.People.Include("Addresses");
// Note: no OrderBy in sight!
we see
SELECT
[Project1].[Id] AS [Id],
[Project1].[Name] AS [Name],
[Project1].[C1] AS [C1],
[Project1].[Id1] AS [Id1],
[Project1].[Data] AS [Data],
[Project1].[PersonId] AS [PersonId]
FROM ( SELECT
[Extent1].[Id] AS [Id],
[Extent1].[Name] AS [Name],
[Extent2].[Id] AS [Id1],
[Extent2].[PersonId] AS [PersonId],
[Extent2].[Data] AS [Data],
CASE WHEN ([Extent2].[Id] IS NULL) THEN CAST(NULL AS int) ELSE 1 END AS [C1]
FROM [Person] AS [Extent1]
LEFT OUTER JOIN [Address] AS [Extent2] ON [Extent1].[Id] = [Extent2].[PersonId]
) AS [Project1]
ORDER BY [Project1].[Id] ASC, [Project1].[C1] ASC
So we get one row for each P-A pair, plus 1 row for each P without any As.
Adding an OrderBy clause, however, puts the thing-to-order-by at the start of the ordered columns:
var people = c.People.Include("Addresses").OrderBy(p => Guid.NewGuid());
gives
SELECT
[Project1].[Id] AS [Id],
[Project1].[Name] AS [Name],
[Project1].[C2] AS [C1],
[Project1].[Id1] AS [Id1],
[Project1].[Data] AS [Data],
[Project1].[PersonId] AS [PersonId]
FROM ( SELECT
NEWID() AS [C1],
[Extent1].[Id] AS [Id],
[Extent1].[Name] AS [Name],
[Extent2].[Id] AS [Id1],
[Extent2].[PersonId] AS [PersonId],
[Extent2].[Data] AS [Data],
CASE WHEN ([Extent2].[Id] IS NULL) THEN CAST(NULL AS int) ELSE 1 END AS [C2]
FROM [Person] AS [Extent1]
LEFT OUTER JOIN [Address] AS [Extent2] ON [Extent1].[Id] = [Extent2].[PersonId]
) AS [Project1]
ORDER BY [Project1].[C1] ASC, [Project1].[Id] ASC, [Project1].[C2] ASC
So in your case, where the ordered-by-thing is not a property of a P, but is instead volatile, and therefore can be different for different P-A records of the same P, the whole thing falls apart.
I'm not sure where on the working-as-intended ~~~ cast-iron bug continuum this behaviour falls. But at least now we know about it.
I don't think there is an issue in the query generation, but there is definitely an issue when EF tries to convert the rows into objects.
It looks like there is an inherent assumption that data for the same person in a joined statement will be returned grouped together, ORDER BY or not.
For example, the result of a joined query will always be:
P.Id | P.Name   | A.Id | A.StreetLine
1    | Person 1 | 10   | ---
1    | Person 1 | 11   |
2    | Person 2 | 12   |
3    | Person 3 | 13   |
3    | Person 3 | 14   |
Even if you order by some other column, the same person would always appear one row after the other.
This assumption is mostly true for any joined query.
But there is a deeper issue here, I think. OrderBy is for when you want data in a certain order (as opposed to random), so that assumption does seem reasonable.
I think you should really get the data out first and then randomize it by some other means in your code, as sketched below.
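For instance, a minimal sketch of that approach (an in-memory Fisher-Yates shuffle; the context variable name is an assumption):
// Materialize first so EF finishes reassembling each Person with its
// Addresses, then shuffle the completed list in memory.
var people = db.Persons.Include(p => p.Addresses).ToList();
var rng = new Random();
for (int i = people.Count - 1; i > 0; i--)
{
    int j = rng.Next(i + 1); // pick a random slot from the not-yet-shuffled prefix
    var tmp = people[i];
    people[i] = people[j];
    people[j] = tmp;
}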
From theory:
To sort a list of items, the compare function should be stable relative to the items; this means that for any two items x, y the result of x < y should be the same however many times it is queried (called).
I think the issue is related to a misunderstanding of the specification (documentation) of the OrderBy method:
keySelector - A function to extract a key from an element.
EF doesn't mention explicitly whether the provided function should return the same value for the same object however many times it is called (in your case it returns different/random values), but I think the "key" term used in the documentation implicitly suggests this.
When you define a query path to shape the query results (using Include), the query path is only valid on the returned instance of ObjectQuery. Other instances of ObjectQuery and the object context itself are not affected. This functionality lets you chain multiple Includes for eager loading.
Therefore, your statement translates into
from person in db.Persons.Include(p => p.Addresses).OrderBy(p => Guid.NewGuid())
select person
instead of what you intended:
from person in db.Persons.Include(p => p.Addresses)
select person
.OrderBy(p => Guid.NewGuid())
Hence your second workaround works fine :)
Reference: Loading Related Objects While Querying A Conceptual Model in Entity Framework - http://msdn.microsoft.com/en-us/library/bb896272.aspx
I also ran into this problem, and solved it by adding a Randomizer Guid property to the main class I was fetching. I then set the column's default value to NEWID(), like this (using EF Core 2):
builder.Entity<MainClass>()
.Property(m => m.Randomizer)
.HasDefaultValueSql("NEWID()");
When fetching, it gets a bit more complicated. I created two random integers to function as my order-by indexes, then ran the query like this:
var rand = new Random();
var randomIndex1 = rand.Next(0, 31);
var randomIndex2 = rand.Next(0, 31);
var taskSet = await DbContext.MainClasses
.Include(m => m.SubClass1)
.ThenInclude(s => s.SubClass2)
.OrderBy(m => m.Randomizer.ToString().Replace("-", "")[randomIndex1])
.ThenBy(m => m.Randomizer.ToString().Replace("-", "")[randomIndex2])
.FirstOrDefaultAsync();
This seems to be working well enough, and should provide enough entropy for even a large dataset to be fairly randomized.

How to do multiple Group By's in linq to sql?

How can you do multiple "group by"s in LINQ to SQL?
Can you please show me both LINQ query syntax and LINQ method syntax?
Thanks
Edit:
I am talking about multiple parameters, say grouping by "sex" and "age".
Also, I forgot to mention: how would I, say, add up all the ages before I group them?
If I had this example, how would I do this?
Table Product
ProductId
ProductName
ProductQty
ProductPrice
Now imagine, for whatever reason, I had tons of rows, each with the same ProductName but different ProductQty and ProductPrice.
How would I group them by ProductName and add together ProductQty and ProductPrice?
I know in this example it probably makes no sense why there would be row after row with the same product name, but in my database it makes sense (they are not products).
To group by multiple properties, you need to create a new object to group by:
var groupedResult = from person in db.People
                    group person by new { person.Sex, person.Age } into personGroup
                    select new
                    {
                        personGroup.Key.Sex,
                        personGroup.Key.Age,
                        NumberInGroup = personGroup.Count()
                    };
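Since the question also asked for method syntax, the equivalent of the query above looks like this (a sketch of the same grouping):
var groupedResult = db.People
    .GroupBy(person => new { person.Sex, person.Age })
    .Select(personGroup => new
    {
        personGroup.Key.Sex,
        personGroup.Key.Age,
        NumberInGroup = personGroup.Count()
    });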
Apologies, I didn't see your final edit. I may be misunderstanding, but if you sum the age, you can't group by it. You could group by sex, sum or average the age...but you couldn't group by sex and summed age at the same time in a single statement. It might be possible to use a nested LINQ query to get the summed or average age for any given sex...bit more complex though.
EDIT:
To solve your specific problem, it should be pretty simple and straightforward. You are grouping only by name, so the rest is elementary (example updated with a service and a concrete DTO type):
class ProductInventoryInfo
{
    public string Name { get; set; }
    public decimal Total { get; set; }
}
class ProductService : IProductService
{
    public IList<ProductInventoryInfo> GetProductInventory()
    {
        // ...
        var groupedResult = from product in db.Products
                            group product by product.ProductName into productGroup
                            select new ProductInventoryInfo
                            {
                                Name = productGroup.Key,
                                Total = productGroup.Sum(p => p.ProductPrice * p.ProductQty)
                            };
        return groupedResult.ToList();
    }
}
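For completeness, the same product grouping in method syntax (a sketch):
var groupedResult = db.Products
    .GroupBy(product => product.ProductName)
    .Select(productGroup => new ProductInventoryInfo
    {
        Name = productGroup.Key,
        Total = productGroup.Sum(p => p.ProductPrice * p.ProductQty)
    });
return groupedResult.ToList();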
