I saw this comment on MSDN (link and link):
"Note that Independent Associations should often be avoided since things like
N-Tier and concurrency becomes more difficult."
I'm new to EF4 and I'm building an n-Tier web app. This sounds like an important pitfall. Can someone explain to me what this means?
I think it's personal preference. Originally, EF was designed to use only independent associations, in line with a more classic ERM approach. However, most of us devs are so dependent on FKs that it made life very complex. So MS gave us FKs in EF4, which means not only having the FK as a property in the dependent entity, but also that the relationships are defined through constraints in the conceptual model rather than buried in the mappings. There are still a few relationships that you can only define with an independent association: many-to-many and unique foreign keys. Note that if you are planning to use RIA Services (it doesn't sound like it), RIA only recognizes FK associations.
So if you prefer to leverage independent associations you still absolutely can use them in EF4. They are totally supported. But as James suggests, there are a few more traps to be aware of...things that you'll need to do more explicitly because of the way EF works with graphs especially. Or the case where you do just want that FK, e.g., you have the ID of a customer but you don't have the instance. You could create an order, but without that nice CustomerID FK property you have to do some extra juggling to get that CustomerID in there.
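For illustration, here is a rough sketch of that juggling, assuming an EF4 model with generated Customer and Order entities exposed by an ObjectContext called MyEntities (all of those names are made up):

    using System;

    public class OrderService
    {
        public void CreateOrderForCustomer(int customerId)
        {
            using (var context = new MyEntities())
            {
                var order = new Order { OrderDate = DateTime.Now };

                // With a Foreign Key Association you would just set the scalar FK:
                // order.CustomerID = customerId;

                // With an Independent Association EF needs an actual Customer to
                // relate to, so you attach a "stub" built from the key and assign it.
                var stub = new Customer { CustomerID = customerId };
                context.Customers.Attach(stub);
                order.Customer = stub;

                context.Orders.AddObject(order);
                context.SaveChanges();
            }
        }
    }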
hth
If you're new to EF and starting with EF4 the easy answer is ignore this - you will almost certainly be using Foreign Key Associations rather than Independent Associations.
A Foreign Key Association is backed by a foreign key relationship in the database and this relationship is explicitly described in the conceptual model. This kind of association is new to EF4 and I understand it is a concession following the issues people had with Independent Associations.
Strictly, if you want to separate the storage schema and the conceptual schema (which is kind of the point of EF), you wouldn't want your conceptual schema to know about things like foreign keys, as these are a database (i.e. storage) concept. Earlier versions of EF followed this approach, which is why we have this thing called an Independent Association.
Think of Independent Associations as associations that are tracked by EF without any knowledge of the underlying foreign key. EF still supports them, but they have significant weaknesses.
EF4 in VS2010 will use your Foreign Keys and create Foreign Key relationships unless you tell it otherwise. On the whole these work as you would expect. There are still some gotchas - e.g. around cascading deletes.
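To make the difference concrete, here is a simplified sketch of what the two styles look like on the entity classes (the names are illustrative, not from any particular model):

    using System.Collections.Generic;

    // Foreign Key Association: the foreign key is exposed as a scalar property
    // alongside the navigation property, and the relationship is described in
    // the conceptual model.
    public class Product
    {
        public int ProductID { get; set; }
        public int CategoryID { get; set; }               // foreign key property
        public virtual Category Category { get; set; }    // navigation property
    }

    // Independent Association: only the navigation property exists; the foreign
    // key stays hidden in the storage model and mapping, and EF tracks the
    // relationship separately from the entities themselves.
    public class ProductWithIndependentAssociation
    {
        public int ProductID { get; set; }
        public virtual Category Category { get; set; }
    }

    public class Category
    {
        public int CategoryID { get; set; }
        public virtual ICollection<Product> Products { get; set; }
    }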
If you want to learn EF - I can recommend this book:
http://learnentityframework.com/learnentityframework/
Everything you want to know, very clearly explained.
I am using MVC3, ASP.NET 4.5, C#, MSSQL.
I need to create ViewModels from my Domain Model that is automatically generated by Entity Developer.
Once I create the relevant ViewModel for an entity I can comment out the non-required properties for a particular View.
However, there is the ongoing concern that once an entity is upgraded the ViewModel could become out of sync, and I want to minimise the risk/effort involved in fixing this.
Thanks in advance.
I see the same complaint endlessly about using view models. True, they can be repetitive in nature, but copy and paste works beautifully there. If you so wanted, you could even design an interface that both your model and view model must implement, which can help you keep the two in sync somewhat. However, I think you'll find that the two will diverge more than you think.
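As a sketch of that shared-interface idea (the class and property names here are made up; with a generated entity you would add the interface in a partial class in a separate file so the generator can overwrite its own file safely):

    public interface IProductFields
    {
        string Name { get; set; }
        decimal Price { get; set; }
    }

    public class Product : IProductFields           // domain entity
    {
        public int ProductId { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
        public byte[] RowVersion { get; set; }       // persistence-only concern
    }

    public class ProductViewModel : IProductFields   // bound to the view
    {
        public string Name { get; set; }
        public decimal Price { get; set; }
        public string DisplayPrice { get; set; }     // UI-only concern
    }

If a shared property is renamed or removed, both classes break at compile time instead of silently diverging.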
As far as validation goes, this is also a common complaint, but it's actually a symptom of bad design. Your entity class should only have validation specific to the database, which you'll find is actually pretty sparse. Entity Framework actually does a fantastic job of translating most of your properties' inherent limitations to the database. For example, a DateTime property's column is set as NOT NULL by default, because the C# type itself cannot be null. There's no need to add something like [Required], because the behavior is inherent.
Other types of validations such as regex are totally inappropriate for a domain model because there's no correlation to anything happening at the database level. It's entirely for the UI, and thus belongs on your view model. I think you'll find that if you evaluate all the things you're trying to validate on your domain model, you'll find most if not all should be strictly on your view model(s) instead.
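For example, a rough sketch of that split (the names are illustrative): the entity carries only what maps to the database, while the regex and friendly error messages live on the view model:

    using System;
    using System.ComponentModel.DataAnnotations;

    public class Customer                        // domain entity
    {
        public int CustomerId { get; set; }
        public string Email { get; set; }
        public DateTime CreatedOn { get; set; }  // non-nullable CLR type => NOT NULL column
    }

    public class CustomerViewModel               // bound to the view
    {
        [Required(ErrorMessage = "Please enter an email address.")]
        [RegularExpression(@"^[^@\s]+@[^@\s]+\.[^@\s]+$",
            ErrorMessage = "That doesn't look like an email address.")]
        public string Email { get; set; }
    }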
I have one project that needs more than 300 models, and I want to use EF Code First.
But saving them all in one database does not seem like a good idea,
so I want to know: how can I save more than 300 models across 5 databases using Code First?
Is that the right thing to do?
How do I do it?
Is there a mature example?
Also, how do I query data through navigation properties between two models that are not in the same database?
I want to query across these databases with lambda expressions as if they were one database (one DbContext).
I am Chinese, so my English is not very good.
I hope you can understand what I'm saying.
The problem with splitting the models across multiple databases is that you cannot have foreign key relationships between the two databases.
If you are using multiple databases you will need to handle all the navigation yourself in code.
You should consider redesigning the database so that there are fewer base models, and then use application-level models to access the required models.
Another option is to keep all 300 tables in one ubermodel and then use application-level models on top of it. The techniques proposed in the article here on shrinking EF models may help.
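A rough Code First sketch of what "handling the navigation yourself" looks like, with two contexts pointing at two databases (the connection string names and entities are made up) and the cross-database lookup done in application code:

    using System.Data.Entity;
    using System.Linq;

    public class CustomersContext : DbContext
    {
        public CustomersContext() : base("CustomersDb") { }   // connection string name
        public DbSet<Customer> Customers { get; set; }
    }

    public class SalesContext : DbContext
    {
        public SalesContext() : base("SalesDb") { }
        public DbSet<Order> Orders { get; set; }
    }

    public class Customer
    {
        public int CustomerId { get; set; }
        public string Name { get; set; }
    }

    public class Order
    {
        public int OrderId { get; set; }
        public int CustomerId { get; set; }   // just an ID; no FK or navigation across databases
    }

    public class OrderService
    {
        public string GetCustomerNameForOrder(int orderId)
        {
            using (var sales = new SalesContext())
            using (var customers = new CustomersContext())
            {
                var order = sales.Orders.Single(o => o.OrderId == orderId);
                // No navigation property is possible here, so follow the ID manually.
                return customers.Customers
                    .Single(c => c.CustomerId == order.CustomerId)
                    .Name;
            }
        }
    }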
In order to avoid magic strings when running the ExecuteStore command, is it possible to get the underlying table name [and columns] from the model in Entity Framework 4?
Liam
I can't tell you if there is, but your best shot would be to dive into the metadata of the context.
This might help you; it is documentation about the metadata of EF. If you can't find it in the metadata, you are most likely out of luck.
Edit: according to this (bottom of the page):
I have also been trying to query the mapping metadata. I wanted to find the metadata which describes how tables and entities are mapped and which stored procedures are mapped to entities. I was not able to find the metadata I needed via the MetadataWorkSpace. Afterwards Danny Simmons from Microsoft did let me know that this mapping metadata is not available publically and that it is something they have to do in a future release of the Entity Framework.
It seems to be impossible at the moment as this information is not publicly available; however, this is from 2008, so it might have changed in the meantime.
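If the table and column names (rather than the entity-to-table mapping itself) are enough, something along these lines may work against the storage (SSpace) metadata in EF4. Treat it as a sketch: the MyEntities context name is made up, and the SSpace collection is typically only registered once the connection has been used, hence the explicit Open().

    using System;
    using System.Data.Metadata.Edm;
    using System.Linq;

    class TableNameDump
    {
        static void Main()
        {
            using (var context = new MyEntities())   // hypothetical ObjectContext
            {
                // Opening the connection ensures the storage metadata is loaded.
                context.Connection.Open();

                var container = context.MetadataWorkspace
                    .GetItems<EntityContainer>(DataSpace.SSpace)
                    .Single();

                foreach (var entitySet in container.BaseEntitySets.OfType<EntitySet>())
                {
                    // The "Table" metadata property holds the physical table name
                    // when it differs from the entity set name.
                    var tableProperty = entitySet.MetadataProperties
                        .FirstOrDefault(p => p.Name == "Table");
                    var tableName = (tableProperty != null && tableProperty.Value != null)
                        ? tableProperty.Value.ToString()
                        : entitySet.Name;

                    var columns = entitySet.ElementType.Properties.Select(p => p.Name);
                    Console.WriteLine("{0}: {1}", tableName, string.Join(", ", columns.ToArray()));
                }
            }
        }
    }

Note that this only surfaces the store model; the mapping between entities and tables is exactly the part that, as quoted above, was not publicly exposed at the time.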
I'll quote Rowan Miller ...
Hi,

Unfortunately in CT4 there is no public surface that allows you to find what the table name is for a given entity type. This is actually a limitation of EF in general as we don't have a public API to access the mapping section of the model. This is something we are working to improve at the moment.

~Rowan
CTP 4 being a pre-production update to EF4, in this case. My only suggestion for bypassing this, evil as it is, is to parse the XML in the model directly and strip the table names out of that. Very evil, but it should be doable. Remember, though, that models can relate to multiple tables, or views, or combinations.
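A rough sketch of that XML approach with LINQ to XML, assuming an .edmx file on disk (the file path is made up, and the SSDL namespace URI differs between EF versions, so check the one in your own file):

    using System;
    using System.Xml.Linq;

    class EdmxTableNames
    {
        static void Main()
        {
            // EF4 storage schema (SSDL) namespace; other versions use different URIs.
            XNamespace ssdl = "http://schemas.microsoft.com/ado/2009/02/edm/ssdl";

            var edmx = XDocument.Load(@"MyModel.edmx");   // hypothetical path

            foreach (var set in edmx.Descendants(ssdl + "EntitySet"))
            {
                // The Table attribute holds the physical table name; fall back to Name.
                var table = (string)set.Attribute("Table") ?? (string)set.Attribute("Name");
                Console.WriteLine(table);
            }
        }
    }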
Say I'm building a model from a blank canvas in EF and I have a one-to-many relationship in the model (Category->Product or something). How can I make that collection (Category.Products) a Set (HashSet or similar) instead of a collection, so that I can enforce set constraints (such as uniqueness) at the model level?
I have recently moved to using POCOs with LINQ to SQL and really like the freedom of not having to use EntitySet et al. So I think POCO is the answer for you, but I suspect (I haven't researched it, so I can't answer definitively) there are going to be restrictions on what types you can use for your associations if the framework (EF or L2S) is still to be able to use them. For instance, you probably have to use something that derives from IList, or similar.
I was looking at something vaguely similar a while ago, and found that one of the features of EntitySet is the ability to subscribe to the Add and Remove events. There is an ObservableCollection type that also has similar functionality, so you can look into those. Otherwise, you're most probably stuck rolling your own.
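For what it's worth, a minimal POCO sketch (assuming EF4 POCO support and made-up entity names): the navigation property is typed as ICollection<Product>, which EF can work with, but backed by a HashSet so duplicates (as decided by the set's equality comparer) are silently ignored:

    using System.Collections.Generic;

    public class Category
    {
        public Category()
        {
            Products = new HashSet<Product>();   // set semantics behind an ICollection
        }

        public int CategoryId { get; set; }
        public string Name { get; set; }

        public virtual ICollection<Product> Products { get; set; }
    }

    public class Product
    {
        public int ProductId { get; set; }
        public string Name { get; set; }

        public virtual Category Category { get; set; }
    }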
I am confused about where to draw the line between what gets defined in the Repositories and what is left to the Services. Should the repository only create simple entities matching tables from the database, or can it create complex custom objects that combine those entities?
In other words: should Services be making various LINQ to SQL queries against the Repository? Or should all the queries be predefined in the Repository, with the business logic simply deciding which method to call?
You've actually raised a question here that's currently generating a lot of discussion in the developer community - see the follow-up comments to Should my repository expose IQueryable?
The repository can - and should - create complex combination objects containing multiple associated entities. In domain-driven design, these are called aggregates - collections of associated objects organized into some cohesive structure. Your code doesn't have to call GetCustomer(), GetOrdersForCustomer(), GetInvoicesForCustomer() separately - you just call myCustomerRepository.Load(customerId), and you get back a deep customer object with those properties already instantiated. I should also add that if you're returning individual objects based on specific database tables, then that's a perfectly valid approach, but it's not really a repository per se - it's just a data-access layer.
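A rough sketch of that aggregate-style load, using made-up Customer/Order/Invoice entities and an EF4 ObjectContext named ShopEntities:

    using System.Linq;

    public interface ICustomerRepository
    {
        Customer Load(int customerId);
    }

    public class CustomerRepository : ICustomerRepository
    {
        public Customer Load(int customerId)
        {
            using (var context = new ShopEntities())
            {
                // Eager-load the whole aggregate in one go, so callers never have
                // to ask for orders or invoices separately.
                return context.Customers
                    .Include("Orders")
                    .Include("Invoices")
                    .Single(c => c.CustomerId == customerId);
            }
        }
    }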
On one hand, there is a compelling argument that Linq-to-SQL objects, with their 'smart' properties and their deferred execution (i.e. not loading Customer.Orders until you actually use it) are a completely valid implementation of the repository pattern, because you're not actually running database code, you're running LINQ statements (which are then translated into DB code by the underlying LINQ provider)
On the other hand, as Matt Briggs' post points out, L2S is fairly tightly coupled to your database structure (one class per table) and has limitations (no many-many mappings, for example) - and you may be better off using L2S for data access within your repository, but then map the L2S objects onto your own domain model objects and return those.
A repository should be the only piece of your application that knows anything about your data access technology. So it should not return objects generated by L2S at all, but should map those properties onto model objects of your own.
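A minimal sketch of that translation step (ShopDataContext and the entity names are made up): the L2S-generated class never leaves the repository, and callers only ever see your own domain type:

    using System.Linq;

    // Your own domain model type - no L2S attributes, no EntityRef/EntitySet.
    public class Product
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public decimal Price { get; set; }
    }

    public class ProductRepository
    {
        public Product GetById(int id)
        {
            using (var db = new ShopDataContext())                       // L2S DataContext
            {
                var row = db.Products.Single(p => p.ProductId == id);    // generated L2S entity

                // Translate the generated class into the domain object.
                return new Product { Id = row.ProductId, Name = row.Name, Price = row.UnitPrice };
            }
        }
    }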
If you are using this sort of pattern, you may want to rethink L2S. It generates a data access layer for you, but doesn't really handle impedance mismatch; you have to do that manually. If you look at something like NHibernate, that mapping is done in a more robust fashion. L2S is more for a 2-tier application, where you want a quick and dirty DAL that you can extend easily.
If you're using LINQ then my belief is that the repository should be a container for your LINQ syntax. This gives you a level of abstraction of the database access routines from your model object interfacing. As Dylan mentions above, there are other views on the matter: some people like to return the IQueryable so they can continue to query the database at a later point after the repository. There is nothing wrong with either of these approaches, as long as you're clear in your standards for your application. There is more information on the best practices I use for LINQ repositories here.
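By way of illustration (OrdersContext and Order are made up), the first method below keeps the LINQ inside the repository and returns materialised results; the second exposes IQueryable for the "keep querying later" style mentioned above:

    using System.Collections.Generic;
    using System.Linq;

    public class OrderRepository
    {
        private readonly OrdersContext _context = new OrdersContext();

        public IList<Order> GetOpenOrdersForCustomer(int customerId)
        {
            return _context.Orders
                .Where(o => o.CustomerId == customerId && !o.IsShipped)
                .ToList();                  // the query runs here, inside the repository
        }

        public IQueryable<Order> Query()
        {
            return _context.Orders;         // callers compose further; the query runs later
        }
    }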