Reviewing Conery's storefront, I don't understand why he used LINQ's auto-generated classes (i.e., the Order class) and then also has another Order class defined that is not a partial class. When using the repository pattern, should one manually create the classes and disregard the DataContext altogether?
If you don't decouple your front end from the LINQ classes using an intermediary class, you can't control when the data context gets garbage collected. Typically with data context instances you want to be rid of them as soon as you're done using them. Here's how you might want to do this with the LINQ to SQL context:
using (MyDataContext data = new MyDataContext())
{
    SomeThing thing = data.Things.Single(t => t.ID == 1);
    return thing;
}
... the MyDataContext instance is gone
With the "using" block, you're disposing of the instance of MYDataContext at the last "}". However, if you do this you'll get an error then trying to use "thing" because the data context instance is gone. If you don't dispose of the data context, it's left hanging around until it's eventually garbage collected.
If you introduce an intermediary class to decouple the LINQ to SQL code from the calling app, you can still get rid of your data context instance and return the same data (just in a different object):
using (MyDataContext data = new MyDataContext())
{
    SomeThing thing = data.Things.Single(t => t.ID == 1);
    SomeThingElse otherThing = ConvertSomethingToSomethingElse(thing);
    return otherThing;
}
... the MyDataContext instance is gone
Hope that helps.
Rob answered this question in one of his shows.
He uses POCO classes to keep his model independent of the data access classes. For example, if he switches from LINQ to SQL to NHibernate, all he needs to change is the "mappings" in his filters; he won't have to touch the business logic.
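As a rough sketch of that idea (the class, property, and context names below are made up for illustration, not Rob's actual code), the repository does the mapping so nothing above it ever sees the ORM types:
// Hand-written POCO model class: knows nothing about LINQ to SQL or NHibernate.
public class OrderModel
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public class OrderRepository
{
    private readonly StorefrontDataContext _db = new StorefrontDataContext();

    public IQueryable<OrderModel> GetOrders()
    {
        // The "mapping": project the generated ORM entity onto the POCO model.
        return from o in _db.Orders
               select new OrderModel { Id = o.OrderID, Total = o.Total };
    }
}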
He said in one of his recent videos that he doesn't like the way LINQ to SQL does mapping. I agree, though I think it is complete overkill.
I'd say you're not breaking any major design patterns as long as you're sticking to the repository pattern itself. Having two sets of classes is a matter of choice; albeit a bad one, it's still a choice.
I'm learning ASP.NET MVC and I have some questions that the tutorials I've read so far haven't covered well enough for me. I've tried searching, but I didn't see any questions asking this; still, please forgive me if I've missed an existing one.
If I have a single ASP.NET MVC application that has a number of models (some of which related and some unrelated with each other), how many DbContext subclasses should I create, if I want to use one connection string and one database globally for my application?
One context for every model?
One context for every group of related models?
One context for all the models?
If the answer is one of the first two, is there anything I should keep in mind to make sure that only one database is created for the whole application? I ask because, when debugging locally in Visual Studio, it looks like it's creating as many databases as there are contexts. That's why I find myself using the third option, but I'd like to know whether it's a correct practice or whether I'm making some kind of mistake that will come back and bite me later.
@jrummell is only partially correct. Entity Framework will create one database per DbContext type if you leave it to its own devices. Using the concept of "bounded contexts" that @NeilThompson mentioned from Julie Lerman, all you're doing is essentially telling each context to use the same database. Julie's method uses a generic base class so that each DbContext that inherits from it ends up on the same database, but you could do it manually for each one, which would look like this:
public class MyContext : DbContext
{
    public MyContext()
        : base("name=DatabaseConnectionStringNameHere")
    {
        // Disable database initialization for this context.
        Database.SetInitializer<MyContext>(null);
    }
}
In other words, Julie's method just sets up a base class that each of your contexts can inherit from that handles this piece automatically.
This does two things: 1) it tells your context to use a specific database (i.e., the same as every other context) and 2) it tells your context to disable database initialization. This last part is important because these contexts are now essentially treated as database-first. In other words, you now have no context that can actually cause a database to be created, or to signal that a migration needs to occur. As a result, you actually need another "master" context that will have every single entity in your application in it. You don't have to use this context for anything other than creating migrations and updating your database, though. For your code, you can use your more specialized contexts.
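A minimal sketch of such a "master" context (entity names are placeholders) is just a context that lists every entity and leaves initialization and migrations enabled:
public class MasterContext : DbContext
{
    public MasterContext()
        : base("name=DatabaseConnectionStringNameHere")
    {
        // No SetInitializer(null) here: this is the only context allowed to
        // create the database and drive migrations.
    }

    // Every entity in the application, so migrations see the complete model.
    public DbSet<Customer> Customers { get; set; }
    public DbSet<Order> Orders { get; set; }
}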
The other thing to keep in mind with specialized contexts is that each instantiation of each context represents a unique state, even if they share entities. For example, a Cat entity from one context is not the same thing as a Cat entity from a second context, even if they share the same primary key. You will get an error if you retrieved the Cat from the first context, updated it, and then tried to save it via the second context. That example is a bit contrived since you're not likely to have the same entity explicitly in two different contexts, but when you get into foreign key relationships and such it's far more common to run into this problem. Even if you don't explicitly declare a DbSet for a related entity, if an entity in the context depends on it, EF will implicitly create a DbSet for it. All this is to say that if you use specialized contexts, you need to ensure that they are truly specialized and that there is zero crossover at any level of related items.
I use what Julie Lerman calls the Bounded Context approach.
The SystemUsers code might have nothing to do with Products - so I might have a System DbContext and a Shop DbContext (for example).
Life is easier with a single context in a small app, but for larger application it helps to break the contexts up.
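For instance, a minimal sketch of that split (entity and connection string names are just examples), with both contexts pointed at the same connection string so they end up on one database:
public class SystemContext : DbContext
{
    public SystemContext() : base("name=AppConnectionString") { }

    public DbSet<SystemUser> SystemUsers { get; set; }
}

public class ShopContext : DbContext
{
    public ShopContext() : base("name=AppConnectionString") { }

    public DbSet<Product> Products { get; set; }
}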
Typically, you should have one DbContext per database. But if you have separate, unrelated groups of models, it would make sense to have separate DbContext implementations.
"it looks to me like it's creating as many databases as there are contexts."
That's correct, Entity Framework will create one database per DbContext type.
I'm building a Repository layer for my MVC application with methods like GetObject, UpdateObject, DeleteObject, etc.
This is what I have now:
public List<Object> GetObjects()
{
    return _db.Objects.Where(o => o.IsArchived == false).ToList();
}
But I'm wondering if it would be better to return IQueryables for lists so that the least amount of data gets sent to the client when filters are applied in the UoW or Service layers. Would it be best to do something like this?
public IQueryable<Object> GetObjects()
{
    return _db.Objects.Where(o => o.IsArchived == false);
}
The downside of returning IQueryable is that if you ever have a different implementation of the repository, say one using a different ORM, storing data in a non-SQL database, the cloud, or an XML file, it would be hard to implement the same interface. It would be much easier if you returned more generic collections of domain objects, for example IEnumerable. You can always pass filtering criteria in.
The other drawback of returning IQueryable is that, depending on your implementation, by the time you actually run the query the object context may already be disposed, or it may be kept in memory longer than required.
A leaky abstraction such as IQueryable can also cause problems. For example, imagine you want to get some data from the database and order it by Guid. If you enumerate the query by calling ToList() prior to sorting, you'll get different results than if you sort first. The reason is that in the first case the sorting happens in .NET, but in the other case it happens in SQL Server, which uses a completely different ordering.
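A contrived sketch of that (assuming the entity exposes a Guid property), where the two queries can come back in different orders:
// Sorted by SQL Server, which orders uniqueidentifier values by its own byte grouping.
var sortedInSql = _db.Objects.OrderBy(o => o.Guid).ToList();

// Materialized first, then sorted in .NET with Guid.CompareTo, which compares
// the bytes in a different sequence than SQL Server does.
var sortedInDotNet = _db.Objects.ToList().OrderBy(o => o.Guid).ToList();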
The nice thing about returning IQueryable here is that you can continue to build up your query further without hitting the db. Once you call ToList it will hit the db and you can't customize your query further without hitting the database a second time.
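In other words, callers can keep narrowing the IQueryable and the filtering still happens in SQL, roughly like this (the repository variable and the CreatedOn and Name properties are assumed for illustration):
// GetObjects() returned an IQueryable, so nothing has hit the database yet.
var recent = repository.GetObjects()
    .Where(o => o.CreatedOn > DateTime.Today.AddDays(-7))
    .OrderBy(o => o.Name)
    .Take(10)
    .ToList();   // one SQL query runs here, with all the filters applied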
I'm trying to clean up my action methods in an ASP.NET MVC project by making use of view models. Currently, my view models contain entities that might have relationships with other entities. For example, a ContactViewModel class might have a Contact, which might have an Address, both of which are separate entities. To query for a list of Contact objects, I might do something like the following.
IList<Contact> contacts;
using (IContactRepository repository = new ContactRepository())
{
    contacts = repository.Fetch().ToList();
}
EditContactViewModel vm = new EditContactViewModel(contacts);
return View(vm);
This method brings on a few problems. For example, the repository is queried within a using statement. By the time the view renders, the context has gone out of scope, making it impossible for the view to query the Address associated with the Contact. I could enable eager loading, but I'd rather not. Furthermore, I don't like that the entity model has bled over into my view (I feel like it's a bad idea for my View to have knowledge of the relationship between Contact and Address, but feel free to disagree with me).
I have considered creating a fattened class that contains properties from both the Contact and Address entities. I could then project the Contact and Address entities into my new, flattened object. One of my concerns with this approach is that my action methods may get a little busy and I don't think AutoMapper is able to map two or more objects into a single type.
What technique is/are preferred for overcoming my concerns?
AutoMapper will work for your case. What you have is an object graph (a thing that has some more things), which AutoMapper handles fine.
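If you go the AutoMapper route, the classic static configuration is only a couple of lines (the view model types here are assumed; newer AutoMapper versions replace the static Mapper class with a MapperConfiguration instance):
// One-time setup, e.g. in Application_Start.
Mapper.CreateMap<Address, AddressViewModel>();
Mapper.CreateMap<Contact, ContactViewModel>();

// Later, in the controller: maps the Contact and its nested Address in one call.
var vm = Mapper.Map<Contact, ContactViewModel>(contact);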
Taking these concerns in order...
First, regarding the using statement and the repository (I don't know whether it is LINQ to SQL or LINQ to Entities, but it doesn't matter): what I would recommend is implementing IDisposable on your controller, and then storing the repository in a field on the model, the controller, or somewhere else where you have access to it in the view if you need it. If the model needs it while the object is "alive", you just need to keep it around for the life of the controller.
Then, when the request is complete, the Dispose method on your controller is called and you can dispose of the repository there.
Personally, I have a method on my base controller class which looks like this:
protected T AddDisposable<T>(T disposable) where T : class, IDisposable
{
    // Error checking.
    if (disposable == null) throw new ArgumentNullException("disposable");

    // Add to list (the _disposables field name is assumed here) and hand the instance back.
    _disposables.Add(disposable);
    return disposable;
}
Basically, it lets you register IDisposable instances; the controller's own IDisposable implementation then iterates through the list and disposes of everything.
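On the controller side, the corresponding piece is just the standard Dispose override, something along these lines (the _disposables list field is an assumption, matching the AddDisposable sketch above):
private readonly List<IDisposable> _disposables = new List<IDisposable>();

protected override void Dispose(bool disposing)
{
    if (disposing)
    {
        // Dispose everything registered through AddDisposable during this request.
        foreach (IDisposable d in _disposables)
        {
            d.Dispose();
        }
        _disposables.Clear();
    }
    base.Dispose(disposing);
}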
Regarding the exposure of the address on the entity model, I don't see this as a bleed issue, personally. The address is part of the composition of the contact (IMO), so it would be wrong to not have it there.
However, I don't disagree if you don't want it there because you want to focus on one type in one controller at a time, etc, etc.
To that end, you would want to create Data Transfer Objects which basically map between the type you expose in the view model and your entity model.
Please read my update at the end of the question after reading the answers.
I'm trying to apply the repository pattern as Rob Conery described on his blog under "MVC Storefront". But I want to ask about some issues that I had before applying this design pattern.
Rob made his own "Model" and used an ORM (LINQ to SQL or Entity Framework) to map his database to entities. Then he used custom repositories which return IQueryable<myModel>, and in these repositories he did a sort of mapping, or "parsing", between the ORM entities and his model classes.
What I'm asking here: is it possible to make a custom mapping between ORM entities and my model classes and load just the properties that I want? I hope the point is clear.
Update for POCO
This is what I decided after many suggestions and many tries.
After all, and with respect to Mr. Rob Conery's opinion, I've got a better solution:
I built my model classes as POCOs and put them in my Models layer, so they have nothing to do with the "edmx" file.
I built my repositories to work with these POCO models, based on DbContext.
Then I created ViewModels to get just the information needed by the view from those repositories.
So I don't need to add one more layer to sit between the EF models and my model. I just twist my model a little and force EF to deal with it.
As I see it, this pattern is better than Rob Conery's.
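A stripped-down sketch of that arrangement (all names here are placeholders):
// 1. POCO in the Models layer: no EF attributes, no edmx dependency.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}

// 2. DbContext plus repository in the data layer, working directly against the POCO.
public class StoreContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

public class ProductRepository
{
    private readonly StoreContext _db = new StoreContext();

    public IQueryable<Product> GetProducts()
    {
        return _db.Products;
    }
}

// 3. ViewModel carrying only what the view needs, filled from the repository.
public class ProductListViewModel
{
    public List<string> ProductNames { get; set; }
}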
Yes, it's possible if you're using LINQ to SQL. All you need to do is use projection to pull out the data you want into an object of your choosing. You don't need all this decoration with interfaces and whatnot - if you use a model specific to a view (which it sounds like you need) - create a ViewModel class.
Let's call it ProductSummaryView:
public class ProductSummaryView
{
    public string Name { get; set; }
    public decimal Price { get; set; }
}
Now load it from the repository:
var products = from p in _repository.GetAllProducts()
               where p.Price > 100
               select new ProductSummaryView
               {
                   Name = p.ProductName,
                   Price = p.Price
               };
This will pull all products where the price > 100 and return an IQueryable. In addition, since you're only asking for two columns, only two columns will be specified in the SQL call.
Not a dodge to your question, but it's ultimately up to you to decide how your repository would work.
The high-level premise is that your controller points to some repository interface, say IRepository<T> where T : IProduct. The implementation could do any number of things: load up your whole database from disk, store it in memory, and then parse LINQ expressions to return stuff, or just return a fixed set of dummy data for testing purposes. Because you're banging away on a repository interface, you can have any number of concrete implementations.
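To make that concrete, a bare-bones version of such an interface plus a dummy test implementation might look like this (the member list and class names are just illustrative, not Rob's actual code):
public interface IProduct
{
    int Id { get; }
}

public class Product : IProduct
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IRepository<T> where T : IProduct
{
    IQueryable<T> GetAll();
    void Add(T item);
}

// Dummy implementation for tests: serves a fixed in-memory list instead of a database.
public class FakeProductRepository : IRepository<Product>
{
    private readonly List<Product> _items = new List<Product>();

    public IQueryable<Product> GetAll()
    {
        return _items.AsQueryable();
    }

    public void Add(Product item)
    {
        _items.Add(item);
    }
}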
Now, if you're looking for a critique of Rob's specific implementation, I'm not sure that's germane to Stack Overflow.
While it's possible to populate part of an object from a query over a subset of that object's columns (which has nothing to do with the repository pattern), that's not how things are "normally" done.
If you want to return a subset of an object, you generally create a new class with just that subset of properties. This is often (in the MVC world view) referred to as a View Model class. Then, you use a projection query to fill that new class.
You can do all of that whether you are using the repository pattern or not. I would argue there is no conflicting overlap between the two concepts.
Deferring the load
Remember that IQueryable defers all the loading up to the last responsible moment. With the LINQ operators you probably won't have to load all the data to get just the data you want. ;)
Regarding the dependency on domain classes in views, I would say no: use the ViewModel pattern instead. It's more maintainable; you could use AutoMapper to avoid the mapping problems, and view models are very flexible in composite-view scenarios. :)
As for the new question... the answer is yes, you can. Just as Rob Conery says, use projection ;):
var query = from p in DataContext.Persons
            select new Persons
            {
                firstname = p.firstname,
                lastname = p.lastname
            };
Let's start with this basic scenario:
I have a bunch of tables that are essentially rarely changed enums (e.g. GeoLocations, Category, etc.). I want to load these into my EF ObjectContext so that I can assign them to entities that reference them as FKs. These objects are also used to populate all sorts of dropdown controls. Pretty standard scenario so far.
Since a new controller is created for each page request in MVC, a new entity context is created and these "enum" objects are loaded repeatedly. I thought about using a static context object across all instances of controllers (or repository object).
But will this require too much locking and therefore actually worsen perf?
Alternatively, I'm thinking of using a static context only for read-only tables. But since entities that reference them must be in the same context anyway, this isn't any different from the above.
I also don't want to get into the business of attaching/detaching these enum objects, since I believe that once I attach a static enum object to an entity, I can't attach it again to another entity?
Please help, I'm quite new to EF + MVC, so am wondering what is the best approach.
Personally, I never have any static context stuff. For me, when I call the database (CRUD) I use that context for that single transaction/unit of work.
So in this case, what you're suggesting is that you wish to retrieve some data from the database .. and this data is .. more or less .. read only and doesn't change / is static.
Lookup data is a great example of this.
So your Categories never change. Your GeoLocations never change, also.
I would not worry about this concept at the database/persistence level, but at the application level. So, just forget that this data is static/read-only, etc., and just get it. Then, when you're in your application (i.e. an ASP.NET MVC controller method or in the global.asax code), THAT's where you should cache it ... in the UI layer.
If you're doing a nice n-tiered MVC app, which contains
UI layer
Services / Business Logic Layer
Persistence / Database data layer
Then I would cache this in the Middle Tier .. which is called by the UI Layer (ie. the MVC Controller Action .. eg. public void Index())
I think it's important to know how to separate your concerns .. and the database layer should be just that -> CRUD'ish stuff and some unique stored procs when required. Don't worry about caching data, etc. Keep this layer as light and as simple as possible.
Then, your middle Tier (if it exists) or your top tier should worry about what to do with this data -> in this case, cache it because it's very static.
I've implemented something similar using Linq2SQL by retrieving these 'lookup tables' as lists on app startup and storing them in ASP.NET's caching mechanism. By using the ASP.NET cache, I don't have to worry about threading/locking, etc. I'm not sure why you'd need to attach them to a context; something like that could easily be retrieved if necessary via the table's PK id.
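For what it's worth, the ASP.NET cache version of that only takes a few lines (the cache key, expiration, and context/table names here are arbitrary):
// On first access, load the lookup list once and park it in the ASP.NET cache.
var categories = HttpRuntime.Cache["Categories"] as List<Category>;
if (categories == null)
{
    using (var db = new MyDataContext())
    {
        categories = db.Categories.ToList();
    }
    HttpRuntime.Cache.Insert("Categories", categories, null,
        DateTime.UtcNow.AddHours(12), Cache.NoSlidingExpiration);
}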
I believe this is as much a question of what to cache as how. When you are dealing with EF, you can quickly run into problems when you try to persist EF objects across different contexts and attempt to detach/attach those objects. If you are using your own POCO objects with custom T4 templates then this isn't an issue, but if you are using vanilla EF then you will want to create POCO objects for your cache.
For most simple lookup items (i.e. a numeric primary key and a string description), you can use a Dictionary. If you have multiple fields you need to pass back and forth with the UI, then you can build a more complete object model. Since these will be POCO objects, they can be persisted pretty much anywhere and any way you like. I recommend keeping the caching logic outside of your MVC application so that you can easily mock the caching activity for testing. If you have multiple lists you need to cache, you can put them all in one container class that looks something like this:
public class MyCacheContainer
{
    public Dictionary<int, string> GeoLocations { get; set; }
    public List<Category> Categories { get; set; }
}
The next question is whether you really need these objects in your entity model at all. Chances are all you really need are the primary keys (i.e. you create a dropdown list using the keys and values from the dictionary and just post the ID). Therefore you could potentially handle all of the lookups from ID to textual description in the construction of your view models. That could look something like this:
MyEntityObject item = Context.MyEntityObjects.FirstOrDefault(i => i.Id == id);
MyCacheContainer cache = CacheFactory.GetCache();
MyViewModel model = new MyViewModel { Item = item, GeoLocationDescription = cache.GeoLocations[item.GeoLocationId] };
If you absolutely must have those objects in your context (i.e. if there are referential entities that tie 2 or more other tables together), you can pass that cache container into your data access layer so it can do the proper lookups.
As for assigning "valid" entities, in .Net 4 you can just set the foreign key properties and don't have to actually attach an object (technically you can do this in 3.5, but it requires magic strings to set the keys). If you are using 3.5, you might just try something like this:
myItem.Category = Context.Categories.FirstOrDefault(c => c.id == id);
While this isn't the most elegant solution and does require an extra roundtrip to the DB to get a category you don't really need, it works. Doing a single record lookup based on a primary key should not really be that big of a hit especially if the table is small like the type of lookup data you are talking about.
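By contrast, the .NET 4 foreign key association version mentioned above is a one-liner and skips the database round trip entirely (assuming the entity exposes a CategoryId scalar property):
// No lookup query needed: set the scalar FK directly and save.
myItem.CategoryId = id;
Context.SaveChanges();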
If you are stuck with 3.5, don't want to make that extra round trip, and want to go the magic-string route, just make sure you use some type of static resource and/or code generator for your magic strings so you don't fat-finger them. There are many examples out there that show how to assign a new EntityKey to a reference without going to the DB, so I won't go into that here.