Should a database table exist in more than one edmx file? - entity-framework-4

Let's say I have an existing database with about 90 tables. I've seen comments that state including them all into one big edmx file is not considered good practice. Suppose I have logical groupings like HR, Legal, and Accounting that I can use to create multiple edmx files. That makes sense. However, what I don't know is what to do if each of these logical groupings would contain a foreign key to commonly used tables (like employee, address, etc). Should each edmx file contain these tables as well, or is there a better way to handle this?
On a side note, when creating an edmx file, how small is too small? Is a context with 5 entities too small? 2? Is there a general rule of thumb?
Any guidance is appreciated!

From the runtime perspective it should not really matter whether you split your model into multiple edmx files or not. 90 entities should be fine, but you may start seeing some delay when your app starts. If you experience this, you may want to pre-generate views, which should address the issue. The EF designer is known to be slow if you have many entities. The EF Designer in VS 2012, however, allows you to have multiple diagrams per model to visualize subsections of your overall model.
If you think you will be able to manage the model easily without splitting it, then you can try going with just one model. If it becomes unmanageable, you can think about splitting it.

Related

Landing Zone vs Staging tables

I am building a small DWH in SQL Server. We have 6 source tables that we have to combine into a single BASE table based on a given logic.
My question is: should I start by creating 6 LZ tables (corresponding to each of the 6 source tables) to land the data on the system, then combine these 6 LZ tables into 1 staging table and, finally, move the data from the staging table to the base table?
My first thought was to create 6 staging tables (instead of LZ tables) and then combine these 6 to form the base table. However, I decided against it based on my understanding that the structure of LZ tables should match the source tables, while the structure of a staging table should reflect the base table.
Which alternative should be pursued in this case? What are the pros and cons?
Please share your thoughts.
Thanks
Honestly, I don't see a single answer to this question. It really depends on various factors: frequency of extraction, source system availability, complexity of transformations, data lineage requirements, etc.
I would start by creating one staging table per source system table/entity. If you are using an ETL tool for the ETL process, most ETL tools are pretty good at doing simple to complex transformations "on the fly" (in memory). I have used SSIS extensively and it is pretty good at most transformations.
You can sometimes end up with some other tables in the staging area if your transformations have very complex business rules. It helps in debugging in the sense that you can see the data before, during and after transformations. But as I said, that really depends on the data and the transformations required.
It really is a broad question and difficult to answer in a few paragraphs, but I hope it helps you get started with your ETL process!
In my experience, using LZ tables (or a data-dump area) is a good idea.
First of all, it provides a one-to-one mapping with minimal transformations, if any (e.g. adding a file-name attribute).
Secondly, if the process fails before reaching the next milestone, the Landing Zone tables allow you to restart the process without going back to the data source, which may or may not be accessible at that time.
You can also archive the data from the LZ tables. If you only take a subset of the data further down the pipeline, this can save you a lot of work when the pipeline suddenly needs another attribute whose historic values exist only in the original files.
Hope that helps
In order to land the source tables, I would recommend making a separate table for every source table.
There should be no dependency on the source tables; the landed tables will help you build the staging area.

Entity, dealing with a large number of records (> 35 million)

We have a rather large set of related tables with over 35 million related records each. I need to create a couple of WCF methods that would query the database with some parameters (data ranges, type codes, etc.) and return related result sets (from 10 to 10,000 records).
The company is standardized on EF 4.0 but is open to 4.X. I might be able to make an argument to migrate to 5.0, but it's less likely.
What's the best approach to dealing with such a large number of records using Entity Framework? Should I create a set of stored procs and call them from EF, or is there something I can do within EF itself?
I do not have any control over the databases so I cannot split the tables or create some materialized views or partitioned tables.
Any input/idea/suggestion is greatly appreciated.
At my work I faced a similar situation. We had a database with many tables, most of them containing around 7-10 million records each. We used Entity Framework to display the data, but the page was very slow (around 90 to 100 seconds); even sorting the grid took time. I was given the task of seeing whether it could be optimized, and after profiling it (ANTS profiler) I was able to get it under 7 seconds.
So the answer is yes, Entity Framework can handle millions of records, but some care must be taken:
Understand that the call to the database is made only when the actual records are required; all the preceding operations merely compose the query (SQL). So try to fetch only the data you need rather than requesting a large number of records, and trim the fetch size as much as possible.
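For example, here is a minimal sketch of that idea (OrdersContext, Order and OrderSummary are hypothetical names used only for illustration):

using System;
using System.Collections.Generic;
using System.Linq;

class OrderSummary { public int Id { get; set; } public decimal Total { get; set; } }

static class OrderQueries
{
    public static List<OrderSummary> GetRecentOrders(DateTime from, DateTime to)
    {
        using (var context = new OrdersContext())
        {
            // Nothing hits the database until ToList(); everything before it only
            // composes the SQL, so filter, project and limit on the server side.
            return context.Orders
                          .Where(o => o.CreatedOn >= from && o.CreatedOn < to)
                          .OrderByDescending(o => o.CreatedOn)
                          .Select(o => new OrderSummary { Id = o.Id, Total = o.Total })
                          .Take(500)   // trim the fetch size up front
                          .ToList();
        }
    }
}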
Yes, you not only should but must use stored procedures: import them into your model and create function imports for them. You can also call them directly with ExecuteStoreCommand() and ExecuteStoreQuery<>(). The same goes for functions and views, although EF has a really odd way of calling scalar functions ("SELECT dbo.blah(@id)").
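As a sketch, calling a stored procedure directly on an EF 4 ObjectContext might look like this (MyEntities, dbo.GetOrdersByRange and OrderRow are made-up names; OrderRow's property names must match the result-set columns):

using System;
using System.Collections.Generic;
using System.Data.SqlClient;
using System.Linq;

static class OrderProcedures
{
    public static List<OrderRow> GetOrdersByRange(DateTime from, DateTime to)
    {
        using (var context = new MyEntities())   // the ObjectContext generated from the edmx
        {
            // Materializes each result row into an OrderRow by matching column/property names.
            return context.ExecuteStoreQuery<OrderRow>(
                "EXEC dbo.GetOrdersByRange @from, @to",
                new SqlParameter("@from", from),
                new SqlParameter("@to", to)).ToList();
        }
    }
}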
EF is slower when it has to populate an entity with a deep hierarchy, so be extremely careful with entities that have deep hierarchies.
Sometimes, when you are retrieving records and are not required to modify them, you should tell EF not to track property changes (AutoDetectChanges); record retrieval is much faster that way.
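With the EF 4 ObjectContext API, the closest switch is MergeOption.NoTracking (a later answer shows the DbContext equivalent); a sketch with placeholder MyEntities/Order names:

using System.Collections.Generic;
using System.Data.Objects;
using System.Linq;

static class ReadOnlyQueries
{
    public static List<Order> GetOpenOrders()
    {
        using (var context = new MyEntities())
        {
            // Read-only query: returned entities are not attached to the state manager.
            context.Orders.MergeOption = MergeOption.NoTracking;
            return context.Orders.Where(o => o.IsOpen).ToList();
        }
    }
}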
Database indexing is always good, but in the case of EF it becomes very important. The columns you use for retrieval and sorting should be properly indexed.
When your model is large, the VS 2010/VS 2012 model designer gets really sluggish, so break your model into medium-sized models. There is a limitation that entities from different models cannot be shared, even though they may point to the same table in the database.
When you have to make changes to the same entity in different places, try to reuse the same entity instance by passing it around and submit the changes only once, rather than having each place fetch a fresh copy, make changes and store it (a real performance-gain tip).
When you need the data from only one or two columns, try not to fetch the full entity: either execute your SQL directly or project into a mini entity. You may also want to cache some frequently used data in your application.
Transactions are slow, so be careful with them.
If you keep these things in mind, EF should give you performance close to plain ADO.NET, if not the same.
My experience with EF 4.1, code first: if you only need to read the records (i.e. you won't write them back) you will gain a performance boost by turning off automatic change detection for your context:
yourDbContext.Configuration.AutoDetectChangesEnabled = false;
Do this before loading any entities. If you need to update the loaded records you can always call
yourDbContext.ChangeTracker.DetectChanges();
before calling SaveChanges().
The moment I hear statements like "the company is standardized on EF4 or EF5, or whatever", cold shivers run down my spine.
It is the equivalent of a car rental saying "We have standardized on a single car model for our entire fleet".
Or a carpenter saying "I have standardized on chisels as my entire toolkit. I will not have saws, drills etc..."
There is something called the right tool for the right job.
This statement only highlights that the person in charge of making key software architecture decisions has no clue about software architecture.
If you are dealing with over 100K records and the data models are complex (i.e. non-trivial), EF6 may not be the best option.
EF6 is based on the concept of dynamic reflection and has design patterns similar to Castle Project Active Record.
Do you need to load all 100K records into memory and perform operations on them? If yes, ask yourself whether you really need to do that and why executing a stored procedure across the 100K records wouldn't achieve the same thing. Do some analysis and see what the actual data-usage pattern is. Maybe the user performs a search that returns 100K records but only navigates through the first 200. Think of a Google search: hardly anyone goes past page 3 of the millions of results.
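If the usage pattern really is "search, then look at the first few pages", a simple server-side paging sketch keeps the working set small (CatalogContext and Product are hypothetical names):

using System.Collections.Generic;
using System.Linq;

static class ProductSearch
{
    public static List<Product> GetPage(string term, int pageIndex, int pageSize)
    {
        using (var context = new CatalogContext())
        {
            return context.Products
                          .Where(p => p.Name.Contains(term))
                          .OrderBy(p => p.Id)           // Skip requires a stable ordering
                          .Skip(pageIndex * pageSize)   // translated into SQL paging
                          .Take(pageSize)
                          .ToList();
        }
    }
}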
If the answer is still yes and you need to load all 100K records into memory and operate on them, then maybe you need to consider something else, like a custom-built write-through cache with lightweight objects, perhaps with lazily loaded pointers for nested objects, and so on. One instance where I use something like this is large product catalogs for eCommerce sites where very large numbers of searches get executed against the catalog, because it lets me provide custom behavior such as early-exit search, wildcard search using pre-compiled regexes, and custom hashtable indexes into the product catalog.
There is no one-size-fits-all answer to this question. It all depends on the data-usage scenarios and how the application works with the data. Consider gorilla vs. shark: who would win? It all depends on the environment and the context.
Maybe EF6 is perfect for one piece that would benefit from dynamic reflection, while NetTiers is better for another that needs static reflection and an extensible ORM, and low-level ADO is perhaps best for extreme high-performance pieces.

Is there some way in Delphi to cache master-detail rows and post both master and detail child rows at the same time

I want to post in memory some child rows, and then conditionally post them, or don't post them to an underlying SQL database, depending on whether or not a parent row is posted, or not posted. I don't need a full ORM, but maybe just this:
User clicks Add doctor. Add doctor dialog box opens.
Before clicking Ok on Add doctor, within the Add doctor dialog, the user adds one or more patients which persist in memory only.
User clicks Ok in Add doctor window. Now all the patients are stored, plus the new doctor.
If user clicked Cancel on the doctor window, all the doctor and patient info is discarded.
Try, if you like, to mentally imagine how you might do the above using Delphi data-aware controls and TADOQuery or other ADO objects. If there is a non-ADO-specific way to do this, I'm interested in that too; I'm just throwing ADO out there because I happen to be using MS SQL Server and ADO in my current applications.
So at a previous employer where I worked for a short time, they had a class called TMasterDetail that was specifically written to add the above to ADO recordsets. It worked sometimes, and other times it failed in some really interesting and difficult-to-fix ways.
Is there anything built into the VCL, or any third-party component, that has a robust way of doing this? If not, does what I'm describing require an ORM? I thought ORMs were considered "bad" by lots of people, but the above is a pretty natural UI pattern that might occur in a million applications. If I were using a non-ADO, non-Delphi-db-dataset style of working, the above wouldn't be a problem in almost any persistence layer I might write, and yet when databases with primary keys that use identity values to link the master and detail rows get into the picture, things get complicated.
Update: Transactions are hardly ideal in this case. (Commit/Rollback is too coarse a mechanism for my purposes.)
You're asking two separate questions:
How do I cache updates?
How do I commit updates to related tables at the same time?
Cached updates can be accomplished a number of different ways. Which one is best depends on your specific situation:
ADO Batch Updates
Since you've already stated that you're using ADO to access the data this is a reasonable option. You simply need to set the LockType to ltBatchOptimistic and CursorType to either ctKeySet or ctStatic before opening the dataset. Then call TADOCustomDataset.UpdateBatch when you're ready to commit.
Note: The underlying OLEDB provider must support batch updates to take advantage of this. The provider for SQL Server fully supports this.
I know of no other way to enforce the master/detail relationship when persisting the data than to call UpdateBatch sequentially on both datasets.
Parent.UpdateBatch;
Child.UpdateBatch;
Client Datasets
Data caching is one of the primary reasons for TClientDataset's existence and synchronizing a master/detail relationship isn't difficult at all.
To accomplish this you define the master/detail relationship on two dataset components as usual (in your case ADOQuery or ADOTable). Then create a single provider and connect it to the master dataset. Connect a single TClientDataset to the provider and you're done. TClientDataset interprets the detail dataset as a nested dataset field, which can be accessed and bound to data-aware controls just like any other dataset.
Once this is in place you simply call TClientDataset.ApplyUpdates and the client dataset will take care of ordering the updates for the master/detail data correctly.
ORMs
There is a lot that can be said about ORMs. Too much to fit into an answer on StackOverflow so I'll try to be brief.
ORMs have gotten a bad rap lately. Some pundits have gone so far as to label them an anti-pattern. Personally I think this is a bit unfair. Object-relational mapping is an incredibly difficult problem to solve correctly. ORMs attempt to help by abstracting away a lot of the complexity involved in transferring data between a relational table and an instance of an object. But like with everything else in software development there are no silver bullets and ORMs are no exception.
For a simple data entry application without a lot of business rules an ORM is probably overkill. But as an application becomes more and more complex an ORM starts to look more appealing.
In most cases you'll want to use a third-party ORM rather than rolling your own. Writing a custom ORM that perfectly fits your requirements sounds like a good idea, and it's easy to get started with simple mappings, but you'll soon start running into issues like parent/child relationships, inheritance, caching and cache invalidation (trust me, I know this from experience). Third-party ORMs have already encountered these issues and spent an enormous amount of resources solving them.
With many ORMs you trade code complexity for configuration complexity. Most of them are actively working to reduce boilerplate configuration by turning to conventions and policies. For example, if you name all your primary keys Id, then rather than mapping each table's Id column to a corresponding Id property for each class, you simply tell the ORM about this convention and it assumes all tables and classes it is aware of follow it. You only have to override the convention for the specific cases where it doesn't apply. I'm not familiar with all of the ORMs for Delphi, so I can't say which support this and which don't.
In any case you'll want to design your application architecture so you can push off the decision of which ORM framework (or for that matter any framework) to use as long as possible.

Entity Framework 4: Does it make sense to create a single diagram for all entities?

I wrote a few assumptions regarding Entity Framework, then a few questions (so please correct where I am wrong). I am trying to use POCOs with EF 4.
My assumptions:
Only one data context can exist for an EF diagram.
Data Contexts can refer to more than one entity.
If you have two data sources, say MS SQL server and Oracle, EF requires two different diagrams to access the data.
The EF diagram data context is the "Unit of Work", having a single Save() for anything on the diagram. (Sure you could wrap it in a UnitOfWork class, but it essentially has the same duties).
Assuming that's correct, here are my questions:
If you don't keep all entities on the same EF diagram, how do you maintain data integrity, like "Orders" cannot exist without a "Customer"? Is this solely a function of the repository to load data just to verify integrity, or do we "try/catch" on database referential integrity errors?
Wouldn't you create an EF diagram for each Entity? For example, I wouldn't expect changes to a customer and changes to a product to be written together as they have nothing to do with each other (having them on the same diagram would cause them to be written together). Or is the scope of an EF diagram to encompass all similar entities stored in the same storage medium?
Is it the norm to divide up the entities like that, or just have a single diagram holding all the entities? I would think the latter, but the thinking is getting the better of me.
Having one big EDM containing all the entities generally is NOT a good practice and is not recommended.
Using one large EDM will cause several issues such as:
Performance Issue in Metadata Load Times:
As the size of the schema files increase, the time it takes to parse and create an in-memory model for this metadata would also increase.
Performance Issue in View Generation:
View generation is a process that compiles the declarative mapping provided by the user into client-side Entity SQL views that will be used to query and store entities to the database. The process runs the first time either a query or SaveChanges happens. The performance of the view generation step depends not only on the size of your model but also on how interconnected the model is. If two entities are connected via an inheritance chain or an association, they are said to be connected; similarly, if two tables are connected via a foreign key, they are connected. As the number of connected entities and tables in your schemas increases, the view generation cost increases.
Cluttered Designer Surface:
When you generate an EDM model from a big database schema, the designer surface is cluttered with a lot of entities and it is hard to get an overall picture of your entity model. If you don't have a good overview of the entity model, how are you going to customize it?
Intellisense experience is not great:
When you generate an Edm model from a database with say 1000 tables, you will end up with 1000 different entity sets. Imagine how your intellisense experience would be when you type “context.” in the VS code window.
Cluttered CLR Namespaces:
Since a model schema will have a single EDM namespace, the generated code will place the classes in a single namespace.
For a more detailed discussion, have a look at Working With Large Models In Entity Framework – Part 1
Solution:
While there is no out-of-the-box solution for this, the suggestion is that you should identify Naturally Disconnected Subsets in your model: based on your domain model, come up with different sets of entities, each containing related objects, where each set is unrelated to and disconnected from the others. Having no foreign keys in between is a good sign of separation. This makes sense because in a large model your application usually does not require all the tables in a database to be mapped to one entity model in order to work.
Even if this kind of separation is not 100% possible, meaning there are subsets of tables that have outgoing foreign keys to other tables in the database, it still encourages you to separate them. When you do this, you have to take responsibility for setting the foreign key appropriately; there will be no navigation property that lets you get the entity that this foreign key represents. Of course, you could manually query for this entity in the other container if needed.
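As a sketch of what "taking responsibility for the foreign key" looks like in code (HREntities, AccountingEntities, Invoice and Employee are made-up names for two models split from the same database):

using System.Linq;

static class InvoiceService
{
    // Invoice lives in the Accounting model; the Employee it references is only
    // mapped in the HR model, so there is no navigation property between them.
    public static void CreateInvoice(int employeeId, decimal amount)
    {
        using (var accounting = new AccountingEntities())
        {
            var invoice = new Invoice { EmployeeId = employeeId, Amount = amount }; // set the FK by hand
            accounting.Invoices.AddObject(invoice);   // EF 4 ObjectContext API
            accounting.SaveChanges();
        }
    }

    public static Employee GetInvoiceEmployee(Invoice invoice)
    {
        using (var hr = new HREntities())
        {
            // Query the other container manually when the related entity is needed.
            return hr.Employees.Single(e => e.EmployeeId == invoice.EmployeeId);
        }
    }
}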
Also, for some tips and tricks on how you can split one large entity model into smaller ones while reusing types, take a look at: Working With Large Models In Entity Framework – Part 2
About your question: Order and Customer belong to the same natural domain and should be kept in the same EDM. Like I said, you can scatter them over two different entity data models, but then you have to take responsibility for setting the appropriate foreign keys or you'll get runtime exceptions. By the same token, Customer and Product should be kept in separate entity data models. Following these rules, you can come up with a well-defined domain-set design in your data access layer.
I realize that this question was about EF4, but I am sure that many people who are just now "making the switch" will end up here via Google, read this and the approved answer, and make decisions based on it even though they are using EF5 (or EF4.4 if you are stuck on .NET 4.0).
EF5 allows multiple diagrams per edmx. This is a big deal, at least to my team, because it allows us to visually separate entities without requiring separate edmx files. Dr. Zim's points are all still valid except (obviously) the "cluttered designer surface".
There are drawbacks to having multiple edmx files; the biggest one is that even if you create separate namespaces for each, you cannot duplicate entity names. Yes, if you truly are designing your system "code first" then this should not be a problem. However, many (most) of us are adding EF to existing systems that are already built on top of normalized relational databases.
"But normalization is a good thing, right?" Well, if you are using a relational database yes. "But why does that matter if I am using EF?" A common "normalized" table is Address. Possible scenario: Company (location of business/office) and Contact (might be "remote" worker so they are not at the business location) and they both have a FK that points to Address. Using one edmx file for Company and one for Contact (even with different namespaces) that both include the Address table, the code will compile but at run time you will get this beauty:
Multiple types with the name 'Address' exist in the EdmItemCollection
in different namespaces. Convention based mapping requires unique names
without regard to namespace in the EdmItemCollection
You can change the mapping that EF uses, but then you have other "issues" when working through the implementation, and since most people use the default mapping, forums like this won't have many pertinent questions and answers.
You could also rename the Model name for the Address table to "ContactAddress" and "CompanyAddress" respectively, but that gives the illusion that they are different types when they really aren't. OK, so they are different types in EF but not in the database and, as I said, most of us "live" in the world of tacking on EF to an existing system with an existing data store that is a relational database.
This is already a long-winded "answer" so I will stop here. I just wanted to make sure that people who landed here because they searched for "multiple edmx" and did not realize that there are significant difference between EF4 and EF5 were made aware and realized they may need to do some more investigating.

Linq to Sql structure standard

I was wondering what the "correct" way is when using a LINQ to SQL model in Visual Studio.
Should I create a new model for each of my components, say Blog, Users, News and so on, and have separate xxxDataContexts with the relevant tables and SPROCs added to each?
Or should I create one MyDbDataContext and always work against that?
What are the pros/cons? My gut tells me to divide it up into smaller contexts, but it also feels like that could bring problems as the project expands.
What's the deal? Help me Stackoverflow :)
There will always be overhead when creating the data context, as the model needs to be built. Depending on the number of tables in your database this might not be a big deal. If it's only 10 tables or so, the overhead will not be much more than that for a context with, say, 1 table (sorry, I don't have actual stress testing to show the overhead, but, hey, maybe that gives me something to blog about this weekend). With large databases, the overhead might be enough to consider using separate contexts.
The main advantage I see in using a single data context is that you gain the ability to use JOINs in your LINQ query and have them translated to T-SQL, whereas if you do the join after the arrays of objects are pulled, the performance may be a bit slower. Additionally, keeping track of multiple data contexts can be confusing and requires good naming conventions, so building your own data model with business logic that encapsulates the contexts would be a bit harder. I've done this and it's not fun :)
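For example, a sketch with a single hypothetical MyDbDataContext that contains both Blogs and Users: because both tables live in one context, the join is sent to the server as a single T-SQL statement instead of being stitched together in memory:

using System;
using System.Linq;

static class BlogQueries
{
    public static void PrintAuthors(string connectionString)
    {
        using (var db = new MyDbDataContext(connectionString))
        {
            var titles = from b in db.Blogs
                         join u in db.Users on b.AuthorId equals u.UserId
                         select new { b.Title, u.UserName };   // one T-SQL JOIN

            foreach (var t in titles)
                Console.WriteLine("{0} by {1}", t.Title, t.UserName);
        }
    }
}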
However, if you still feel you want to go that route, then I would recommend putting similar tables (that you might need to join) in the same context. Also, there are some tutorials online that recommend using a shared MappingSource when using multiple contexts that map the same source. Information on this can be found here: http://www.albahari.com/nutshell/speedinguplinqtosql.aspx
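A sketch of that shared-MappingSource idea (BlogDataContext and UserDataContext are hypothetical designer-generated contexts; the generated constructors accept a MappingSource):

using System.Data.Linq.Mapping;

static class Db
{
    // Built once per AppDomain and reused, so the attribute mapping model is not
    // rebuilt every time a context is constructed.
    public static readonly MappingSource SharedMapping = new AttributeMappingSource();

    public static BlogDataContext OpenBlogs(string connectionString)
    {
        return new BlogDataContext(connectionString, SharedMapping);
    }

    public static UserDataContext OpenUsers(string connectionString)
    {
        return new UserDataContext(connectionString, SharedMapping);
    }
}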
Sorry, I know that's not really a black and white answer, but hopefully it helps :)
Addition:
Just wanted to add that I did a small test and ran 20,000 SELECT statements against a small sized table using 2 different data contexts:
DataClasses1DataContext contained mappings to all tables in the db (4 total)
DataClasses2DataContext contained a single mapping for just the one table
Results:
Time to execute 20000 SELECTs using DataClasses1DataContext: 00:00:10.4843750
Time to execute 20000 SELECTs using DataClasses2DataContext: 00:00:10.4218750
As you can see, it's not much of a difference.
