I use MVC4 and Entity Framework 5 in my project, and I have lots of tables. As a rule in our project we don't delete any records from the database; each record has an isActive field, and if that field is false we consider the record deleted. I wanted to write an extension method to get active records, and after some googling I wrote this:
public static IQueryable<Company> GetAll(this IQueryable<Company> source)
{
    return source.Where(p => p.isActive);
}
Now I can use my extension method to get only active records like this:
Context db = new Context();
db.Company.GetAll();
But let's say I have 50+ tables in my database. Is it a good approach to write the same extension method for each of my tables, or is there a better way to write only one GetAll() extension method for all of our tables? Actually, I'm not even sure whether extension methods are the right tool for this in the first place.
Could somebody please help me and show me the right way? I'd appreciate it if you could include code examples.
It depends on how you are using Entity Framework. If you are relying on the default generator (and I think you are), your case gets harder and I have never dealt with a similar one. But if you use the POCO class generator, you can use a base class, let's call it CEntity, which is a base class for each of your other classes (tables).
Are we there yet? No. To continue, I prefer to use the Repository pattern, and you can make that repository generic over CEntity.
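A minimal sketch of that base class could be as simple as this (the isActive property name comes from the question; the rest is an assumption):
public abstract class CEntity
{
    public bool isActive { get; set; }
}
The generic repository might then look like this: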
public class Repository<TEntity> where TEntity : CEntity
{
    private readonly Context db = new Context(); // Context is the DbContext from the question

    public IQueryable<TEntity> GetAll()
    {
        // isActive is declared on the CEntity base class, so one method covers every table
        return db.Set<TEntity>().Where(p => p.isActive);
    }
}
And this is how to use it:
Repository<Company> com = new Repository<Company>();
Repository<Employee> emp = new Repository<Employee>();
var coms = com.GetAll(); // will get all ACTIVE companies
var emps = emp.GetAll(); // will get all ACTIVE employees
This is off the top of my head; if you run into any other problems, put them in the comments and I'll be glad to help.
Just as a point of interest, this is exactly how I implement my data layers, and I think it's awesome :)
I also jam a repository in the middle, but the general concept should work with or without one.
Here are some working code examples of how I use this approach in my blog, for some similar use cases.
https://github.com/lukemcgregor/StaticVoid.Blog/blob/master/Blog/Data/Entities/Post/PostRepositoryExtensions.cs
I've found that it makes for some pretty elegant code while still not restricting what you can do too much. Like I said, I think this approach is awesome and I really recommend using it.
One of our contractors implemented a repository pattern with a code-first approach. We use Service Locator as our DI pattern. When we retrieve data from the DB, we pass an interface to the GetQueryable function and get the data back. However, I see serious performance issues in our application, so I implemented MiniProfiler and MiniProfiler.EF to see where the bottleneck is.
We have a case table which has quite a few fields (around 25), and some of those fields are associated with other tables as one-to-one and one-to-many (only one field has a many relation to another table). When I try to view the case detail, it runs around 400 SQL queries, and SQL takes around 40 percent of the load time as far as MiniProfiler is concerned. Here are our GetQueryable and Find methods:
public IQueryable<T> GetQueryable<T>(params string[] includes)
{
    Type type = _impls.Value[typeof(T).Name].GetType();
    DbSet dbSet = Db.Set(type);
    foreach (var include in includes)
    {
        dbSet.Include(include);
    }
    return ((IQueryable<T>) dbSet);
}
I added includes to this method to attach the related tables, but it did not make any difference. And here is the Find method:
public T Find<T>(long? id)
{
    Type type = _impls.Value[typeof(T).Name].GetType();
    return (T) Db.Set(type).Find(id);
}
I pretty much tried to apply all the performance improvements I could find, but the number of SQL queries has not gone down. I tried to disable lazy loading, but it caused many problems in other parts of the application.
Just some additional information: in the case table there are 70,000 rows, and in our dialogs table there are 500,000 rows. Case and Dialog are associated as one-to-many, and each case has 20-40 dialog entries.
My questions are:
Why does Include not make any difference when I use it?
Is there any other way to cut down the number of queries that run?
Do you think the implementation is the problem?
Thanks
Include returns a new IQueryable and does not modify the source query. In addition you can use the generic version of Set which simplifies the code a bit:
public IQueryable<T> GetQueryable<T>(params string[] includes)
{
    IQueryable<T> query = Db.Set<T>();
    foreach (var include in includes)
    {
        query = query.Include(include);
    }
    return query;
}
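For example, assuming a Case entity with a Dialogs navigation property (those names are taken from the question's description, not from the posted code, and repository stands for whatever class exposes GetQueryable), the corrected method lets a single query eager-load the related rows:
var caseWithDialogs = repository.GetQueryable<Case>("Dialogs")
                                .FirstOrDefault(c => c.Id == caseId);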
Step 1: Fire your contractor. Seriously. Like right now. That is some awful code. Not only did they miss something as simple and basic as using the generic version of Set, but they've successfully only made working with Entity Framework more complex, because all the repository does is proxy Entity Framework methods with its own unique and bastardized API.
That said, there's really not enough here to diagnose what your problem is. The use of Include may give you larger queries, but it should actually serve to reduce the overall number of queries issued. It's possible that you're just not using includes where you should be.
Now, the fact that you "tried to disable lazy loading, but it caused many problems in other parts of the application", means that you're relying too heavily on lazy-loading. Basically, you're loading in stuff you don't even know about, which is the antithesis of optimization. Ironically, you'd actually be best served by going ahead and disabling lazy-loading, and then tracking down where your code fails because of that. If you want to actually lazy-load that thing, you can use .Load (see: Explicit Loading). But, if you want to eager-load to reduce queries, then you know what includes you need to add.
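For example, a rough sketch of explicit loading, assuming a context with a Cases set and a Case.Dialogs collection as described in the question:
// with lazy loading disabled, load the dialogs only where they are actually needed
var theCase = db.Cases.Find(caseId);
db.Entry(theCase).Collection(c => c.Dialogs).Load();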
I came across a problem mapping multiple beans with Super CSV. I have a CSV file containing information for multiple beans (per line), but as far as I can see from the examples on the website, it is only possible to map each line into one bean (not into two or more beans).
Is there a way of doing this? The only way I can think of is creating a new bean containing all the beans I need and doing a deep mapping, i.e.:
class MultiBeanWrapper {
    Address address;
    BankAccount bankAccount;
}
...
String[] FIELD_MAPPING = new String[]
    {"address.street", "bankAccount.bankNumber"};
...
beanReader.read(MultiBeanWrapper.class, processors);
I didn't try this, because I want to be sure that there is no other / better way.
Thanks for your help
Daniel
No, you can't read a line into multiple beans - sorry! (I'm not sure what this would even look like - would you get a List<Object> back?)
You have a few options:
Add a relationship between the objects
Then you can use a mapping like parent.fieldA, parent.child.fieldB. In your scenario Address and BankAccount aren't related semantically, so I'd recommend creating a common parent (see the next option).
Add a common parent object
Then you can use a mapping like parent.child1.fieldA, parent.child2.fieldB. This is what you've suggested, but I'd recommend giving it a better name than Wrapper - it looks like a customer to me!
Oh, and I'd recommend trying stuff out before posting a question - often you'll answer your own question, or be able to give more details which will get you better answers!
I'm having trouble getting the design behind this right. I'm using a repository pattern to manage my data layer. In one of my controllers (MVC3) I am constructing a LINQ query that needs to perform a join. I have two questions about this:
Is it better to add a method to my repository that performs all joins, projections, etc.? I'm a bit hesitant about that because it would result in an ever-growing contract definition on my repository.
Let's say I have a List() method in a Post repo that returns all items. I currently can't use this method in a LINQ (join) query because it can't be converted to a store expression. Please note that the code below uses a class called repo, which holds a reference to all my repositories (Post, Friends) using the same context instance.
BIG EDIT:
Things are a bit clearer to me now, but I'm hoping someone will jump in here and help me get everything organised ;-). What I'm looking to do is implement a specification pattern along with my repository pattern. The problem is, I'm using POCOs and my repositories are using an IContext interface which uses IObjectSets. So in the List method below, Posts is an IObjectSet and Context is an IContext interface, into which I inject my actual context.
I've been reading up and there is some very nice ready-to-use code, like implementing-repository-pattern-with-ef4-poco-support and implementing-repository-pattern-with-entity-framework.
Both these examples use the objectContext in the repository. Isn't it better to abstract that out as well?
My repository:
public System.Linq.IQueryable<Post> List()
{
    return this.context.Posts;
}
And in my controller method:
var friendquestions = (from q in base._repo.Post.List().OfType<Question>()
                       from f in _repo.Friends.List()
                       where f.userId == myid
                       where q.Author == f.friendId
                       select q.Id).ToArray();
The following does work however (why?):
var temp = (from q in base._repo.Post.List().OfType<Question>()
            where q.Id > 6
            select q.Id).ToArray();
It's basically the same problem as described here: linq-to-entities-does-not-recognize-the-method
How do I design around this? I've been reading up on model-defined functions, but I'm not sure if that's the way to go.
thanks in advance
Isn't it better to abstract that out as well?
Answer: you can look into NCommon by Ritesh Rao. It uses an IEFSession to wrap the ObjectContext.
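For illustration only, a minimal sketch of what hiding the ObjectContext behind such an interface could look like (this is not NCommon's actual API; the names here are made up):
public interface IContext : IDisposable
{
    IObjectSet<T> CreateObjectSet<T>() where T : class;
    int SaveChanges();
}

public class EfContext : IContext
{
    private readonly ObjectContext _context;

    public EfContext(ObjectContext context) { _context = context; }

    // repositories only ever see IObjectSet<T>, never the ObjectContext itself
    public IObjectSet<T> CreateObjectSet<T>() where T : class
    {
        return _context.CreateObjectSet<T>();
    }

    public int SaveChanges() { return _context.SaveChanges(); }

    public void Dispose() { _context.Dispose(); }
}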
Please read my update at the end of the question after reading the answers:
I'm trying to apply the repository pattern as Rob Conery describes it on his blog under "MVC Storefront", but I want to ask about some issues that I had before applying this design pattern.
Rob made his own "Model" and used an ORM, LINQ to SQL or Entity Framework (EF), to map his database to entities. Then he used custom repositories which return IQueryable<myModel>, and in these repositories he did a sort of mapping, or "parsing", between the ORM entities and his model classes.
What I'm asking here: is it possible to make a custom mapping between the ORM entities and my model "classes" and load just the properties that I want? I hope the point is clear.
Update for POCO
This is what I decided after many suggestions and many tries:
After all, and with respect to Mr. Rob Conery's opinion, I ended up with a better solution:
I built my model as "POCOs" and put them in my models layer, so they have nothing to do with the "edmx" file.
I built my repositories to deal with this "POCO" model, depending on "DbContext" (a rough sketch is below).
Then I created "ViewModels" to get just the information the view needs from those repositories.
So I do not need to add one more layer to sit between the "EF Models" and "My Model". I just twist my model a little and force EF to deal with it.
As I see it, this pattern is better than Rob Conery's one.
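For example, something along these lines (the class names here are purely illustrative, not my actual model):
// plain POCO class - lives in the models layer and knows nothing about EF
public class Product
{
    public int Id { get; set; }
    public string ProductName { get; set; }
    public decimal Price { get; set; }
}

// DbContext-based context that the repositories depend on
public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}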
Yes, it's possible if you're using LINQ to SQL. All you need to do is use projection to pull the data you want into an object of your choosing. You don't need all this decoration with interfaces and whatnot: if you need a model specific to a view (which it sounds like you do), create a ViewModel class.
Let's call it ProductSummaryView:
public class ProductSummaryView
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}
Now load it from the repository:
var products = from p in _repository.GetAllProducts
               where p.Price > 100
               select new ProductSummaryView {
                   Name = p.ProductName,
                   Price = p.Price
               };
This will pull all products where the price > 100 and return an IQueryable. In addition, since you're only asking for two columns, only two columns will be specified in the SQL call.
Not a dodge to your question, but it's ultimately up to you to decide how your repository would work.
The high-level premise is that your controller would point to some repository interface, say IRepository<T> where T : IProduct. The implementation of that interface could do any number of things: load up your whole database from disk, store it in memory, and then parse LINQ expressions to return stuff. Or it could just return a fixed set of dummy data for testing purposes. Because you're banging away on a repository interface, you could have any number of concrete implementations.
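To make that concrete, a minimal sketch of such an interface (the IRepository and IProduct names come from the sentence above; the exact member list is just an assumption):
public interface IRepository<T> where T : IProduct
{
    // concrete implementations decide where the data actually comes from
    IQueryable<T> GetAll();
    T GetById(int id);
}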
Now, if you're looking for a critique of Rob's specific implementation, I'm not sure that's germane to Stack Overflow.
While it's possible to populate part of an object by querying just a subset of the columns for that object (which has nothing to do with the repository pattern), that's not how things are "normally" done.
If you want to return a subset of an object, you generally create a new class with just that subset of properties. This is often (in the MVC world view) referred to as a View Model class. Then, you use a projection query to fill that new class.
You can do all of that whether you are using the repository pattern or not. I would argue there is no conflicting overlap between the two concepts.
Deferring the load
Remember that IQueryable defers all the loading up to the last responsible moment. You probably won't have to load all the data using the LINQ operators to get the data you want. ; )
Regarding having views depend on domain classes, I will say no. Use a ViewModel pattern for this. It's more maintainable; you could use AutoMapper to avoid the mapping problems, and ViewModels are very flexible in composite-view scenarios : )
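For instance, a rough sketch of the AutoMapper route (this assumes the older static API, and the Persons/PersonView names are just placeholders):
Mapper.CreateMap<Persons, PersonView>();                  // one-time configuration, e.g. in Application_Start
PersonView vm = Mapper.Map<Persons, PersonView>(person);  // in the controller, copy the domain object onto the view model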
According to the new question...The answer is yes, you can. Just as Rob Conery says, use projection ; ):
var query = from p in DataContext.Persons
            select new Persons
            {
                firstname = p.firstname,
                lastname = p.lastname
            };
Reviewing Conery's Storefront, I don't understand why he used LINQ's auto-generated classes (i.e. the Order class) and then defined another Order class that is not a partial class. When using the repository pattern, should one manually create the classes and disregard the DataContext altogether?
If you don't decouple your front end from the LINQ classes using an intermediary class, you can't control when the data context gets garbage collected. Typically, with data context instances you want to get rid of them as soon as you're done using them. Here's how you might do this with the LINQ to SQL context:
using (MyDataContext data = new MyDataContext())
{
    SomeThing thing = data.Things.Single(t => t.ID == 1);
    return thing;
}
// ... at this point the MyDataContext instance is gone
With the "using" block, you're disposing of the instance of MYDataContext at the last "}". However, if you do this you'll get an error then trying to use "thing" because the data context instance is gone. If you don't dispose of the data context, it's left hanging around until it's eventually garbage collected.
If you introduce an intermediary class to decouple the LINQ to SQL code from the calling app, you can still get rid of your data context instance and return the same data (just in a different object):
using (MyDataContext data = new MyDataContext())
{
    SomeThing thing = data.Things.Single(t => t.ID == 1);
    SomeThingElse otherThing = ConvertSomethingToSomethingElse(thing);
    return otherThing;
}
// ... at this point the MyDataContext instance is gone
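And a rough sketch of what that conversion might look like (SomeThingElse's members here are made up purely for illustration):
private static SomeThingElse ConvertSomethingToSomethingElse(SomeThing thing)
{
    // copy plain values across so the returned object holds no reference to the data context
    return new SomeThingElse
    {
        ID = thing.ID,
        Name = thing.Name // hypothetical property, shown only for illustration
    };
}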
Hope that helps.
Rob answered this question in one of his shows.
He uses POCO classes so that his model isn't aware of the data access classes. For example, when he switches from LINQ to SQL to NHibernate, all he needs to do is change his "mappings" in his filters, and he won't have to make any changes to the business logic.
He said in one of his recent videos that he doesn't like the way LINQ to SQL does mapping. I agree, though I think it is complete overkill.
I'd say you're not breaking any major design patterns as long as you're sticking to the repository pattern itself. I think it's a matter of choice to have two sets of classes; albeit a bad choice, it's still a choice.