Manually loading associations in EF model - entity-framework-4

I need to use parameterized table-valued functions to retrieve the data for an association (the TVFs abstract the actual database tables), but I would still like to use all the good stuff provided by the EF. Looking at the navigation property code generated from the EDMX, I can see that the RelationshipManager handles populating the association.
So my question: can I retrieve the results I need from the DB (via the TVFs) and attach them to the context before the generated calls to the RelationshipManager, and also stop the RM itself from accessing the database?

EF4 does not support TVFs. TVFs are available only on .NET 4.5, where you can use them in LINQ-to-Entities queries. The .NET 4.5 tooling also generates POCO entities by default, which are now strongly recommended; POCO entities do not use the RelationshipManager internally (except in the dynamic proxies created for lazy loading).
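That said, EF4 can still execute the TVF as plain store SQL and you can attach the results to the association by hand, which is what the question is driving at. A rough sketch of the idea (not the asker's generated code - the MyEntities context, the Customer/Order entities, the Orders entity set and the dbo.GetOrdersForCustomer function are assumed names, and the TVF's columns are assumed to match the Order entity's properties):

    using System.Collections.Generic;
    using System.Data.Objects;
    using System.Data.SqlClient;
    using System.Linq;

    public static class CustomerLoader
    {
        public static Customer LoadCustomerWithOrders(MyEntities context, int customerId)
        {
            // Stop lazy-loading proxies / the RelationshipManager from querying on their own.
            context.ContextOptions.LazyLoadingEnabled = false;

            Customer customer = context.Customers.Single(c => c.CustomerId == customerId);

            // EF4 cannot compose the TVF in LINQ-to-Entities, but it can materialize the
            // rows into tracked entities via ExecuteStoreQuery.
            List<Order> orders = context.ExecuteStoreQuery<Order>(
                "SELECT * FROM dbo.GetOrdersForCustomer(@customerId)",
                "Orders",                  // entity set name, so the results are attached
                MergeOption.AppendOnly,
                new SqlParameter("@customerId", customerId)).ToList();

            // With foreign-key associations, relationship fixup usually populates
            // customer.Orders at this point; with independent associations, define
            // the relationship explicitly.
            if (!customer.Orders.Any())
            {
                customer.Orders.Attach(orders);
            }

            return customer;
        }
    }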

Related

Passing POCOs or EF objects to the Web project?

Say I have a Visual studio solution with two projects: one web project, and one data project.
Does it really matter whether I pass the EF object from the data project to the web project, or do I need to explicitly define POCOs in the data project to pass to the web project?
It seems to me that needing to create POCOs instead of simply using EF objects adds yet another thing that needs to be done...and I don't particularly see the purpose.
If you're using Entity Framework 4, the EF objects are POCO objects, and so should be used in any situation where your model matches your EF object (CRUD operations are the typical ones). However, there will more than likely be situations where the standard POCO object doesn't encapsulate all of the fields needed for a view model (typical ones for me are account pages with two password fields), and so you will need to create a model for the page, which then passes its data into your EF POCO objects.
So if you're using DbContext (which creates POCO objects for you) there is no reason not to use those objects.
It's not good practice to pass the EF objects directly to the web project, because state changes made to the EF objects can be reflected directly in the database. Because of this you should explicitly define POCO objects for the web project (we can call this the data model, or simply the model; the POCO used for retrieving data from the view can be called the view model). In appropriate situations you can use the same POCO class as both a data model and a view model.
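For illustration, here is roughly what that split looks like with a hand-rolled mapping (the Customer names here are invented; in practice a tool such as AutoMapper is often used for the copying):

    // Hypothetical EF entity (data model)...
    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }
        public byte[] PasswordHash { get; set; }   // stays out of the view model
    }

    // ...and the view model the web project actually sees.
    public class CustomerViewModel
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
        public string Email { get; set; }

        public static CustomerViewModel FromEntity(Customer entity)
        {
            return new CustomerViewModel
            {
                CustomerId = entity.CustomerId,
                Name = entity.Name,
                Email = entity.Email
            };
        }
    }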

EF 4.x generated entity classes (POCO) and Map files

I have an MVC 4 app that I am working on using the code-first implementation, except I cheated a bit: I created my database first and then generated my entity classes (POCOs) from the database using the EF Power Tools (reverse engineer). I guess you could say I used the database-first method, but I have no EDMX file, just the context class and my entity classes (POCOs).
I have a few projects in the works using MVC and EF with POCOs, but this is the only project where I used the tool to generate the POCOs from the database.
My question is about the mapping files that get created when I generate my POCOs using the tool. What is the purpose of these map files? I figured the map files are needed when generating the database from the model, as with the true code-first method. In my case, where I am using a tool to generate my model from the database, do the map files have any influence on how my app uses the entity classes?
The mapping files are fluent-configuration files that help Code First generate the database from your model, as well as help EF create the proper relationships.
They can map properties - things like setting a primary key, max length, data type.
They can also map relationships - set things like foreign keys, and defining relationships that don't follow standard CF naming conventions.
Even though you're not generating your database from the app, CF will still use this system to ensure that the database it's pointing to is compatible with your model and, in the case of foreign keys and the like, to set up the related properties. For example, if you wanted to name a FK something other than NavigationPropertyId, you would need fluent configuration to tell the engine which property to set in the database.
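For concreteness, a map file produced this way typically looks something like the sketch below (EF 4.1+ fluent API; the Customer/Order classes and the CustomerRef column are invented for the example):

    using System.Collections.Generic;
    using System.Data.Entity.ModelConfiguration;

    // Illustrative POCOs of the kind the Power Tools reverse-engineer, one per table.
    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
        public virtual ICollection<Order> Orders { get; set; }
    }

    public class Order
    {
        public int OrderId { get; set; }
        public string OrderNumber { get; set; }
        public int CustomerRef { get; set; }           // FK column with a non-standard name
        public virtual Customer Customer { get; set; }
    }

    // A sketch of the accompanying map file for Order.
    public class OrderMap : EntityTypeConfiguration<Order>
    {
        public OrderMap()
        {
            // Property mapping: key, required fields, lengths.
            HasKey(o => o.OrderId);
            Property(o => o.OrderNumber).IsRequired().HasMaxLength(20);

            // Table mapping.
            ToTable("Orders");

            // Relationship mapping: point EF at the FK property, since
            // "CustomerRef" doesn't follow the NavigationPropertyId convention.
            HasRequired(o => o.Customer)
                .WithMany(c => c.Orders)
                .HasForeignKey(o => o.CustomerRef);
        }
    }

The map classes are registered in the context's OnModelCreating (e.g. modelBuilder.Configurations.Add(new OrderMap());), which is how they end up influencing the model your app runs against.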

SubSonic 3, Entity Data Model (Entity Framework) or LINQ to SQL for ASP.NET MVC development?

Having used all of them (some more than others), I am still undecided on which is the best to use (with .NET 3.5). What are the pros and cons of each when developing?
SubSonic 3
Not enough samples/documentation (I know it's a wiki and people can update it, but it can be tricky to track things down - e.g. where are the sample apps (WebForms, MVC (current version, rather than pre-3), WinForms)?). The text templates didn't work as expected when I first tried them. For example, they do not clean up the table names like SubSonic 2 did (removing/replacing spaces, for example). Also, you cannot say which tables to include, only the ones to exclude (in ActiveRecord.tt). Context.tt and Structs.tt generate code for all the tables (you would probably not want the 'aspnet_' ones, and probably some others (session tables) as well).
That said (I hope I'm not being too harsh), it is quite a good ORM to use, minor issues aside.
Entity Data Model (Entity Framework)
Visual designer for setting up models. Syncs with the database; might be considered a bit 'verbose' and hard to understand. There is fairly decent documentation, though. I haven't gone much further in depth with it yet.
LINQ to SQL
Also has a designer for creating models. Simple to use. Fewer features than the other two. I also had to apply a hotfix for an obscure bug (it wouldn't update when the model had foreign keys that were not of type int).
NHibernate
Looked at this in the past, but not that easy to set up compared to the above.
Any sample ASP.NET MVC apps using this?
Ideally I would want a framework that:
a) could generate the models from a database
b) support LINQ syntax
c) retrieve only the data that is needed (e.g. for paging)
d) allow data annotations
e) could generate the sql to update or create new tables in an existing database
MVC is presentation layer, ORM is Data layer
I don't think the ORM has that much to do with an MVC application - at least if you tier your application properly. The model in an MVC application is really the model of the presentation layer: a view model through which the controller and view communicate, not necessarily the data model. The MVC project template is a bit confusing, since developers think that MVC model = data model. In any business application that is not completely trivial (like a simple single-assembly application) the two are not equal, and it's better that they're not, especially if we take separation of concerns into consideration. We shouldn't rely on particular ORM classes in any layer other than the data layer.
But if you intend to use DTOs in your MVC application (as view models), I suggest you use an ORM that creates partial classes, so you can easily add additional things to them (like attributes). Your data annotations can be written inside a special metadata class that is attached to your model class by a single class-level attribute.
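The metadata-class approach looks roughly like this (a sketch with invented names; the generated half of the partial class lives in the designer/template output):

    using System.ComponentModel.DataAnnotations;

    // Your half of the generated partial class just attaches the metadata type.
    [MetadataType(typeof(ProductMetadata))]
    public partial class Product
    {
    }

    // The annotations live here, so they survive regeneration of the ORM classes.
    public class ProductMetadata
    {
        [Required]
        [StringLength(100)]
        public string Name { get; set; }

        [Range(0, 10000)]
        public decimal Price { get; set; }
    }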
Suggestion
But I suggest doing something else: use a separate layer of POCOs that are used on all layers and carry the data annotations. This makes your presentation layer independent of the data layer, and your POCOs can also be presentation-optimised (like having a class called UserRegistration, for instance, with two password properties - the repeated value as well). Your repository in the data layer would be responsible for the POCO conversion, so all layers exchange data using POCOs only instead of data objects.
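A sketch of such a presentation-optimised POCO, using the UserRegistration example (property names invented; the Compare attribute comes from System.Web.Mvc in MVC 3/4 or from DataAnnotations in .NET 4.5+):

    using System.ComponentModel.DataAnnotations;

    // Not an ORM class - just a POCO shared across layers, so it can carry
    // view-specific members such as the repeated password.
    public class UserRegistration
    {
        [Required]
        [StringLength(256)]
        public string Email { get; set; }

        [Required]
        [DataType(DataType.Password)]
        public string Password { get; set; }

        [Required]
        [DataType(DataType.Password)]
        [Compare("Password", ErrorMessage = "The passwords do not match.")]
        public string ConfirmPassword { get; set; }
    }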
ORMs and class generation
With Entity Framework you do get a very controllable environment for generating your classes from a data store schema. Unfortunately it's not the same with the others. Generation is not all-or-nothing; it can be controlled and manipulated manually as well (if you'd like to do TPH/TPT structures).
Similar with LINQ to SQL. I haven't used any other ORM, but I guess LinqConnect may have its own editor similar to the EF Visual Studio editor, because I've been working with the MySQL connector from the same company and I used their designer for entities because it was better than the one provided with Visual Studio 2008.
But you have tools that provide code generation (and you can get templates on the internet for various ORMs as well):
There's T4 built into Visual Studio that can generate code for you; you can also find templates for ORMs written in T4 that you can then easily customise, or write your own according to your needs (I've written an enumeration generator from DB lookup tables in the past)
MyGeneration is open source and you can find lots of templates for it
CodeSmith is not free, but it is a proven product that I used in the past with the .netTiers template (before we had LINQ), which saved lots of time and worked perfectly

ASP.NET MVC - model decision: how to design it?

This is concerning an enterprise application with a very generic database (all objects are identified using data in the database and internationalized/globalized/localized).
Should I make a model for the repository pattern, then make (generate 1:1) another model for DB access (LINQ2SQL or EF) and use the latter as the repository's data access layer?
Or should I just use the L2S/EF/NHibernate model directly, mapping the model to the DB and exposing the persistence layer?
Will this dual-model idea (the repository pattern) cause problems for building dynamic, stackable LINQ search queries, compared with using the L2S/EF model directly?
Please advise.
As long as you are exposing IQueryable objects in your repository, you should have no problem stacking queries in the manner you suggest.
I would be cautious about using Entity Framework for this, since lazy loading is not supported in the way you might expect. Linq to SQL will handle lazy loading without problems.
For more information about lazy loading in the Entity Framework, see: http://www.singingeels.com/Articles/Entity_Framework_and_Lazy_Loading.aspx
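A minimal sketch of the IQueryable stacking idea mentioned above (repository and entity names are invented):

    using System.Linq;

    public class Product
    {
        public int ProductId { get; set; }
        public string Name { get; set; }
    }

    // The repository exposes IQueryable so callers can stack further operators;
    // nothing hits the database until the results are enumerated.
    public interface IProductRepository
    {
        IQueryable<Product> Products { get; }
    }

    public static class ProductQueries
    {
        // Paging composed on top of whatever the repository exposes; with L2S or EF
        // the whole expression is translated to SQL, so only one page is fetched.
        public static IQueryable<Product> GetPage(IProductRepository repository, int pageIndex, int pageSize)
        {
            return repository.Products
                .OrderBy(p => p.Name)
                .Skip(pageIndex * pageSize)
                .Take(pageSize);
        }
    }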
Take a look at sharp architecture.
Regarding returning IQueryable from your repository objects, it is my opinion that doing so blurs a proper separation of concerns in your application. I'm all for working with IQueryable within your data access layer, but once you start returning objects as IQueryable you give your controllers and/or views the opportunity to start meddling with data access. That may even negatively impact the testability of your application.

MVC: Repositories and Services

I am confused as to the limitations of what gets defined in the Repositories and what to leave to the Services. Should the repository only create simple entities matching tables from the database or can it create complex custom object with combinations of those entities?
In other words: should Services be making various LINQ to SQL queries on the Repository, or should all the queries be predefined in the Repository, with the business logic simply deciding which method to call?
You've actually raised a question here that's currently generating a lot of discussion in the developer community - see the follow-up comments to Should my repository expose IQueryable?
The repository can - and should - create complex combination objects containing multiple associated entities. In domain-driven design, these are called aggregates - collections of associated objects organized into some cohesive structure. Your code doesn't have to call GetCustomer(), GetOrdersForCustomer(), GetInvoicesForCustomer() separately - you just call myCustomerRepository.Load(customerId), and you get back a deep customer object with those properties already instantiated. I should also add that if you're returning individual objects based on specific database tables, then that's a perfectly valid approach, but it's not really a repository per se - it's just a data-access layer.
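To make the aggregate idea concrete, a hypothetical customer repository might look like the sketch below (names invented; how the pieces are fetched - joins, eager loading, several queries - is an implementation detail hidden behind the interface):

    using System.Collections.Generic;

    // The aggregate root and its associated objects, returned as one cohesive unit.
    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
        public IList<Order> Orders { get; set; }
        public IList<Invoice> Invoices { get; set; }
    }

    public class Order { public int OrderId { get; set; } }
    public class Invoice { public int InvoiceId { get; set; } }

    // Callers ask for the customer by id and get the fully populated aggregate back.
    public interface ICustomerRepository
    {
        Customer Load(int customerId);
    }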
On one hand, there is a compelling argument that LINQ-to-SQL objects, with their 'smart' properties and their deferred execution (i.e. not loading Customer.Orders until you actually use it), are a completely valid implementation of the repository pattern, because you're not actually running database code - you're running LINQ statements (which are then translated into DB code by the underlying LINQ provider).
On the other hand, as Matt Briggs' post points out, L2S is fairly tightly coupled to your database structure (one class per table) and has limitations (no many-many mappings, for example) - and you may be better off using L2S for data access within your repository, but then map the L2S objects onto your own domain model objects and return those.
A repository should be the only piece of your application that knows anything about your data access technology. So it should not be returning objects generated by L2S at all, but map those properties to model objects of your own.
If you are using this sort of pattern, you may want to rethink L2S. It generates a data access layer for you, but doesn't really handle the impedance mismatch; you have to do that manually. If you look at something like NHibernate, that mapping is done in a more robust fashion. L2S is more for a two-tier application where you want a quick and dirty DAL that you can extend easily.
If you're using LINQ then my belief is that the repository should be a container for your LINQ syntax. This gives you a level of abstraction of the database access routines from your model object interfacing. As Dylan mentions above, there are other views on the matter: some people like to return the IQueryable so they can continue to query the database at a later point after the repository. There is nothing wrong with either of these approaches, as long as you're clear in your standards for your application. There is more information on the best practices I use for LINQ repositories here.

Resources