database design for dictionary of words - ruby-on-rails

(My reason for asking this question is that I read this answer, which made me rethink my current setup.)
I am currently developing a Ruby on Rails application in which there are many languages, each of which has a dictionary of base words attached to it, as well as a list of the words that map to each base word. The way I currently have it set up, there is a base_words table that contains the base_word as a string, along with the language_id as a foreign key. There is also a words table, each row of which contains a word string along with the base_word_id as a foreign key. The words table also carries an indexed language_id column, although I'm almost positive that this is superfluous given the language_id on base_words, so I'm planning to remove it (although this could be a bad assumption on my part).
In sum, contrary to the answer I mentioned at the beginning, the tables are not separated by language, because I've reasoned that I can simply pull out a language's words programmatically when the time comes. However, my application will also have one or more translations associated with each base word (as did the answer I referenced), and so I'm doubting my structure because of the realization that each translation will actually be a base_word in the same table as itself, which means the translation would really just be the id of another base word in that table. This may be completely fine, or it may not be; I have no clue (this is my first ever programming project).
Is this ok? Do I need to separate my base_words into separate tables for each language, or can I leave it all in one table?
Another example: I also need to store many phrases for each language, along with their translations. Should I have one table where each row has the appropriate translation of the phrase, or one table where each row contains simply one phrase and a language_id, or multiple tables (one for each language)?
Regards,
Michael

As in the other scenario, you'll have a translations table. There is no technical reason it couldn't have multiple foreign keys to base_words (a source_word_id and target_word_id, perhaps). So yes, you can absolutely store all your words in one table. There are some minor side effects involved with translations being directional relationships: it becomes possible to have translations which only work one way, and there will be many pairs of entries with opposite source and target. Neither of these is much of a worry: the first is even potentially desirable in order to represent words with double meanings in one language but not the other, and as for the second, space is cheap and indexing is easy.
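If it helps to see that shape, here is a rough MySQL-style sketch of the layout being discussed; the table and column names are illustrative only (a Rails migration would normally generate the equivalent), not a prescription:

CREATE TABLE languages (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100) NOT NULL
);

CREATE TABLE base_words (
    id          INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    language_id INT UNSIGNED NOT NULL,
    base_word   VARCHAR(255) NOT NULL,
    FOREIGN KEY (language_id) REFERENCES languages(id)
);

CREATE TABLE words (
    id           INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    base_word_id INT UNSIGNED NOT NULL,
    word         VARCHAR(255) NOT NULL,
    FOREIGN KEY (base_word_id) REFERENCES base_words(id)
);

-- A translation is a directional pairing of two base words, which may
-- belong to different languages; both foreign keys point at base_words.
CREATE TABLE translations (
    id             INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    source_word_id INT UNSIGNED NOT NULL,
    target_word_id INT UNSIGNED NOT NULL,
    UNIQUE (source_word_id, target_word_id),
    FOREIGN KEY (source_word_id) REFERENCES base_words(id),
    FOREIGN KEY (target_word_id) REFERENCES base_words(id)
);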
You are correct that you do not need words.language_id, so long as you always join base_words when you're querying words and the language matters. This obviously changes if you have a use case where it makes sense to leave base_words out, but that scenario sounds unlikely based on what you describe.
As for phrases: why should they be handled any differently than base_words?

Related

Surrogate Keys in Datawarehouse

I want to understand how surrogate keys are leveraged in real-time DWH environments. I get that they add the benefit of not being dependent on source-generated data for each dimension key, and that they avoid having a composite fact key built out of the natural keys from the dimensions, e.g. (prod id + cust id + time id).
But doesn't it add the complexity of having to maintain the lookup of (natural key, surrogate key) while loading data into the facts? I have been working in BI/DW teams for the last 3 years and we do not maintain any surrogate keys in our systems; we leverage natural keys to build our data marts. One sample use case is revenue data stored in a transactional system, which is loaded into the warehouse at customer, product and time-period granularity using the same natural keys from the source. We use those keys to join with the corresponding dimensions to build the star schema.
The main reason I think it makes sense in our case is that the business uses EDW data for micro-analysis at the account level, not just trend analysis. We need to maintain data integrity in that case, which we achieve using natural keys. I want to understand how other DW environments work: how do you leverage surrogate keys or natural keys in your systems?
Thanks!
One reason is to maintain history and be able to compare historical changes.
For example, if one of your product attributes changes and you want to look at and compare revenue before and after the change, how would you do that without surrogate product keys? Using the natural key alone, the ETL would simply overwrite the old value.
The lookup doesn't have to be very complex to maintain. Most ETL tools support it and usually have a built-in caching mechanism for lookup values.
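As a hedged illustration of both points (the tables and columns here are invented, and a staging_sales/fact_sales pair is assumed to exist), a Type 2 dimension and the natural-key-to-surrogate-key lookup done during the fact load might look like this in T-SQL; ETL tools typically wrap the same lookup in a cached component rather than plain SQL:

-- Type 2 product dimension: the natural key (product_code) repeats across
-- versions; the surrogate key (product_sk) identifies one specific version.
CREATE TABLE dim_product (
    product_sk   INT IDENTITY(1,1) PRIMARY KEY,
    product_code VARCHAR(20) NOT NULL,  -- natural key from the source system
    category     VARCHAR(50) NOT NULL,  -- attribute whose history is kept
    valid_from   DATE NOT NULL,
    valid_to     DATE NULL,             -- NULL = current version
    is_current   BIT NOT NULL
);

-- Fact load: resolve each natural key to the surrogate key of the
-- version that is current at load time.
INSERT INTO fact_sales (product_sk, order_date, revenue)
SELECT dp.product_sk, s.order_date, s.revenue
FROM staging_sales s
JOIN dim_product dp
  ON dp.product_code = s.product_code
 AND dp.is_current = 1;

Because each attribute change inserts a new dim_product row with a new product_sk, revenue before and after the change lands against different surrogate keys and can be compared directly.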
Also, what do you mean when you say "real-time" data warehouse? Are you using ROLAP, DirectQuery or something similar? If so, you might be building your marts directly on your OLTP system and de-normalize in some semantic model. Then you could use your natural keys because there is no traditional ETL/data warehouse to do lookups and store your surrogate keys.
Lastly, granularity is not related to what type of key you are using.
If your business is stable and runs on top of a single application for everything, natural keys will work just fine, as your experience tells you.
Most businesses are not in such a state or not for very long. Mergers happen, new applications are introduced, legacy stuff refuses to die. New lines of business are started or split off and require wholesale renaming of existing natural key schemes.
Surrogate keys provide great benefits in keeping reporting dimensions stable and usable across the business when you have a bunch of separate new and legacy applications that all have their own versions of your customers and products and regularly get migrated or swapped out for similar systems with new natural key definitions. The major work is linking the various natural keys of a customer/product/whatever; assigning a surrogate key is just a simple and very helpful step in that.
Even in your scenario, I would use surrogate keys as they prepare you for future changes and are very helpful with historical data (as NITHIN B also answered) in Type 2 Dimensions.
It's quite possible to do versioning with natural keys by adding a version field to your dimension and fact tables, but it makes the joins harder to write for reporting and your whole system still gets messy if business or application changes cause the natural keys to change.
To illustrate:
Select bla
from Fact F
inner join Dim_Customer DC
on F.Surrogate_Key = DC.Surrogate_Key
is almost foolproof. If you mess this up, it will be immediately obvious in your report.
Select bla
from Fact F
inner join Dim_Customer DC
on F.Natural_key = DC.Natural_Key
and F.Version = DC.Version
does the same job, but if you forget that last line, everything will look normal but your numbers will be inflated depending on how many versions there are on average. Kinda painful when that 25% sales increase turns out to be an error.
An additional reason, which has not been mentioned yet, is performance. Sometimes (very often in my experience) natural keys are strings, sometimes long strings.
It seems like no big deal to use a 10, 20 or 30 byte string instead of a 4 byte integer, but when you have 10 dimensions and hundreds of millions of rows, it adds up fast: 10 keys at 30 bytes instead of 4 is roughly 260 extra bytes per fact row, which over a few hundred million rows is tens of gigabytes of key storage before you even count the indexes.
Could you please post a sample design?
I would be interested to see how you can load a fact table whose dimension keys are natural keys; Kimball-style design never recommends it.
My stand on Surrogate Keys in DWH:
Surrogate keys give you a lot of flexibility with Type 2 Dimensions, i.e. if you have Type 2 Dimensions. For example, you can track changes to a customer if he or she changes their second name; you can have rows with the old values and the new values.
Fact tables usually hold surrogate keys. It makes your star schema neat, tidy and robust.
However, I am not jumping queues here; I would wait for your design before going pro or against your stand.
Cheers
Nithin

Database design for book structure (table of contents) and content

I have a list of entries, which can be thought of as paragraphs from a book, stored as separate objects of the same class. These objects have a ‘num’ property, along with the actual text, so that I know their order and can later display them as a list in the correct order (1, 2, 3, …).
Now I want to take this one step further and be able to ‘record’ the structure of the book, like the table of contents. In other words, say the book is divided into chapters, and each chapter is further divided into sections. The first few paragraphs fall under Ch. 1 Sec. 1, then Ch. 1 Sec. 2, and so on all the way to Ch. n, Sec. m. What I’m not sure of is: what is a good way to record this information? I've been told that I should use a database with SQL, but I'm not sure where to begin.
The implementation must allow me to ‘quickly’ determine the following two things at any point: (1) given a chapter and section number, what paragraphs are contained within that section? (2) given a paragraph number, which chapter and section is it under? It must also be flexible enough that I could use the same platform in the future with few edits if the structure (depth-wise) of the book changes (e.g. sections are divided into subsections, etc.). Finally, it should be able to handle optional divisions (i.e. some sections have subsections while others do not).
This is for an iOS app and my code is written in Objective-C so far.
SQL would certainly be one possibility. If you follow this route, there is a certain trade-off between flexibility and ease of coding, which impacts maintainability. For example, if you build a fixed structure, say with some additional levels attempting to cater for the future, such as:
Book
  Chapter
    Section
      Sub-section
        Paragraph
you will have code with unambiguous references, such as section.fk_chapter, paragraph.fk_subSection, etc. This will make it easier to troubleshoot and to build queries. However, you will have to refactor your code a fair amount if you want to add, say, sub-paragraphs or sub-sub-sections. Your UI will be simpler to code in this approach, as you always know which "level" you are working at. Alternatively, you can go for a hierarchical approach:
Book
  Chapter
    Content Item
      Content Item
        Content Item
          ....
where the contentItem table has a self-referencing foreign key. This has the quite big advantage of allowing any number of levels. Some attribute on the Content Item could tell you the name and "type" of level you are at, if needed. It is definitely much more flexible, but comes with some complexity in implementation and UI presentation: columns called contentItem.fk_contentItem, referring to the parent level, do not tell the coder where they are in the hierarchy; queries will be a bit more difficult to write; and the UI will have to cater for "any" number of levels. On the other hand, these problems are not insurmountable and many have gone down this route before you.
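A rough SQLite-flavoured sketch of the hierarchical option (all names invented) that covers both of the required lookups:

CREATE TABLE book (
    id    INTEGER PRIMARY KEY,
    title TEXT NOT NULL
);

CREATE TABLE content_item (
    id         INTEGER PRIMARY KEY,
    book_id    INTEGER NOT NULL REFERENCES book(id),
    parent_id  INTEGER REFERENCES content_item(id),  -- NULL for top-level chapters
    level_type TEXT NOT NULL,     -- 'chapter', 'section', 'subsection', 'paragraph', ...
    position   INTEGER NOT NULL,  -- order among siblings
    title      TEXT,              -- used by chapters/sections
    body       TEXT               -- used by paragraphs
);

-- (1) Paragraphs directly inside a given section (if sections can contain
--     subsections, walk downward recursively in the same style as below):
SELECT * FROM content_item
WHERE parent_id = :section_id AND level_type = 'paragraph'
ORDER BY position;

-- (2) Which chapter and section a paragraph sits under: walk up the parent
--     chain (recursive CTEs are available in SQLite 3.8.3 and later).
WITH RECURSIVE ancestors AS (
    SELECT * FROM content_item WHERE id = :paragraph_id
    UNION ALL
    SELECT c.* FROM content_item c JOIN ancestors a ON c.id = a.parent_id
)
SELECT id, level_type, title FROM ancestors
WHERE level_type IN ('chapter', 'section');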
Your question is quite broad, so opinions will vary on the approach and the above is admittedly very general.

Would like to Understand 6NF with an Example

I have just read #PerformanceDBA's arguments re: 6NF and E-A-V. I am intrigued. I had previously been skeptical of 6NF as it was presented as "merely" sticking some timestamp columns on tables.
I have always worked with a data dictionary and do not need to be convinced to use one, or to generate SQL code. So I expect an answer that would require a dictionary (or catalog) that is used to generate code.
So I would like to know how 6NF would deal with an extremely simple example. A table of items, descriptions and prices. The prices change over time.
So anyway, what does the Items table look like when converted to 6NF? What is the "explosion of tables" that happens here?
If the example does not work with a table this simple, feel free to add what is necessary to get the point across.
I actually started putting an answer together, but I ran into complications, because you (quite understandably) want a simple example. The problem is manifold.
First, I don't have a good idea of your actual level of expertise with relational databases and 5NF, so I don't have a starting point from which to take up and then discuss the specifics of 6NF.
Second, just like any of the other NFs, it is variegated. You can just barely step into it; you can implement 6NF for certain tables; you can go the whole hog on every table; etc. Sure, there is an explosion of tables, but then you Normalise that and kill the explosion; that's an advanced or mature implementation of 6NF. There is no use providing the full or partial levels of 6NF when you are asking for the simplest, most straightforward example.
I trust you understand that some tables can be "in 5NF" while others are "in 6NF".
So I put one together for you. But even that needs explanation.
Now SQL barely supports 5NF, it does not support 6NF at all (I think dportas says the same thing in different words). Now I implement 6NF at a deep level, for performance reasons, simplified pivoting (of entire tables; any and all columns, not the silly PIVOT function in MS), columnar access, etc. For that you need a full catalogue, which is an extension to the SQL catalogue, to support the 6NF that SQL does not support, and maintain data Integrity and business Rules. So, you really do not want to implement 6NF for fun, you only do that if you have a need, because you have to implement a catalogue. (This is what the EAV crowd do not do, and this is why most EAV systems have data integrity problems. Most of them do not use the declarative Referential & Data Integrity that SQL does have.)
But most people who implement 6NF don't implement the deeper level, with a full catalogue. They have simpler needs, and thus implement a shallower level of 6NF. So, let's take that, to provide a simple example for you. Let's start with an ordinary Product table that is declared to be in 5NF (and let's not argue about what 5NF is). The company sells various different kinds of Products, half the columns are mandatory, and the other half are optional, meaning that, depending on the Product Type, certain columns may be Null. While they may have done a good job with the database, the Nulls are now a big problem: columns that should be Not Null for certain ProductTypes are Null, because the declaration states NULL, and their app code is only as good as the next guy's.
So they decide to go with 6NF to fix that problem, because the subtitle of 6NF states that it eliminates The Null Problem. Sixth Normal Form is the irreducible Normal Form, there will be no further NFs after this, because the data cannot be Normalised further. The rows have been Normalised to the utmost degree. The definition of 6NF is:
a table is in 6NF when the row contains the Primary Key and, at most, one attribute.
Notice that by that definition, millions of tables across the planet are already in 6NF, without having had that intent. Eg. typical Reference or Look-up tables, with just a PK and Description.
Right. Well, our friends look at their Product table, which has eight non-key attributes, so if they make the Product table 6NF they will have eight sub-Product tables. Then there is the issue that some columns are Foreign Keys to other tables, which leads to more complications. And they note the fact that SQL does not support what they are doing, and they have to build a small catalogue. Eight tables are correct, but not sensible. Their purpose was to get rid of Nulls, not to write a little subsystem around each table.
Simple 6NF Example
Readers who are unfamiliar with the Standard for Modelling Relational Databases may find IDEF1X Notation useful in order to interpret the symbols in the example.
So typically, the Product Table retains all the Mandatory columns, especially the FKs, and each Optional column, each Nullable column, is placed in a separate sub-Product table. That is the simplest form I have seen. Five tables instead of eight. In the Model, the four sub-Product tables are "in 6NF"; the main Product table is "in 5NF".
Now we really do not need every code segment that SELECTs from Product to have to figure out what columns it should construct, based on the ProductType, etc, so we supply a View, which essentially provides the 5NF "view" of the Product table cluster.
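To make the shape concrete, here is a heavily simplified sketch in generic SQL; the column names are invented and only two of the four optional columns are shown, so treat it as an illustration of the pattern rather than the actual model from the linked answer:

-- Mandatory columns (including the FKs) stay on the 5NF parent.
CREATE TABLE Product (
    ProductId   INT         NOT NULL PRIMARY KEY,
    ProductType CHAR(2)     NOT NULL,
    Name        VARCHAR(60) NOT NULL
);

-- Each formerly-nullable column becomes a 6NF sub-table: the PK plus one attribute.
CREATE TABLE ProductWeight (
    ProductId INT          NOT NULL PRIMARY KEY REFERENCES Product (ProductId),
    Weight    DECIMAL(8,2) NOT NULL
);

CREATE TABLE ProductColour (
    ProductId INT         NOT NULL PRIMARY KEY REFERENCES Product (ProductId),
    Colour    VARCHAR(20) NOT NULL
);

-- The View that gives code the familiar 5NF picture; Nulls appear only in
-- the result set, never in the stored rows.
CREATE VIEW Product_V AS
SELECT p.ProductId, p.ProductType, p.Name, w.Weight, c.Colour
FROM Product p
LEFT JOIN ProductWeight w ON w.ProductId = p.ProductId
LEFT JOIN ProductColour c ON c.ProductId = p.ProductId;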
The next thing we need is the basic rudiments of an extension to the SQL catalog, so that we can ensure that the rules (data integrity) for the various ProductTypes are maintained in one place, in the database, and not dependent on app code. The simplest catalogue you can get away with. That is driven off ProductType, so ProductType now forms part of that Metadata. You can implement that simple structure without a catalogue, but I would not recommend it.
Update
It is important to note that I implement all Business Rules in the database. Otherwise it is not a database (the notion of implementing rules "in application code" is hilarious in the extreme, especially nowadays, when we have florists working as "developers"). Therefore all rules, etc are first and foremost implemented as SQL declarations, CHECK constraints, functions, etc. That preserves all Declarative Referential Integrity, and declarative Data Integrity. The extension to the SQL catalog covers the area that SQL does not have declarations for, and they are then implemented as SQL. Being a good data dictionary, it does much more. Eg. I do not write Views every time I change the tables or add or change columns or their characteristics, they are created directly from the catalog+extension using a simple code generator.
One more very important note. You cannot implement 6NF (or EAV properly, for that matter), without completing a full and faithful Normalisation exercise, to 5NF. The problem I see at every site is, they don't have a genuine 5NF state, they have a mish-mash of partial normalisation or no normalisation at all, but they are very attached to that. Creating either 6NF or EAV from that is a disaster. Creating EAV or 6NF from that without all business rules implemented in declarative SQL is a nuclear disaster, burning for years. You get what you pay for.
End update.
Finally, yes, there are at least four further levels of Normalisation (Normalisation is a Principle, not a mere reference to a Normal Form), that can be applied to that simple 6NF Product cluster, providing more control, less tables, etc. The deeper we go, the more extensive the catalogue. And higher levels of performance. When you are ready, just ask, I have already erected the models and posted details in other answers.
In a nutshell, 6NF means that every relation consists of a candidate key plus no more than one other (key or non-key) attribute. To take up your example, if an "item" is identified by a ProductCode and the other attributes are Description and Price then a 6NF schema would consist of two relations (* denotes the key in each):
ItemDesc {ProductCode*, Description}
ItemPrice {ProductCode*, Price}
This is potentially a very flexible approach because it minimises the dependencies. That's also its main disadvantage however, especially in a SQL database. SQL makes it hard or impossible to enforce many multi-table constraints. Using the above schema, in most cases it will not be possible to enforce a business rule that every product must always have a description AND a price. Similarly, you may not be able to enforce some compound keys that ought to apply (because their attributes could be split over multiple tables).
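For instance, a plain DDL rendering of those two relations (dialect details aside) makes the limitation visible: each table is fine on its own, but nothing below forces a ProductCode to appear in both.

CREATE TABLE ItemDesc (
    ProductCode VARCHAR(20)  NOT NULL PRIMARY KEY,
    Description VARCHAR(100) NOT NULL
);

CREATE TABLE ItemPrice (
    ProductCode VARCHAR(20)   NOT NULL PRIMARY KEY,
    Price       DECIMAL(10,2) NOT NULL
);

-- "Every product has BOTH a description AND a price" is a cross-table rule.
-- Standard SQL's CREATE ASSERTION could express it, but mainstream engines
-- do not implement assertions, leaving circular (deferrable) foreign keys,
-- triggers, or application code as the workarounds.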
So in considering 6NF you have to weigh up which dependencies and integrity rules are important to you. In many cases you may find it more practical and useful to stick to 5NF and normalize no further than that.
I had previously been skeptical of 6NF as it was presented as "merely" sticking some timestamp columns on tables.
I'm not quite sure where this apparent misconception comes from. Perhaps from the fact that 6NF was introduced in the book "Temporal Data and the Relational Model" by Date, Darwen and Lorentzos? Anyhow, I hope the other answers here have clarified that 6NF is not limited to temporal databases.
The point I wanted to make is, although 6NF is "academically respectable" and always achievable, it may not necessarily lead to the optimal design in every case (and not just when considering implementation using SQL either). Even the aforementioned discoverers and proponents of 6NF seem to agree e.g.
Chris Date: "For practical purposes, stick to 5NF (and 6NF)."
Hugh Darwen: "the 6NF decomposition around Date [not the person!] would be overkill... an optimal design for the soccer club is... 5-and-a-bit-NF!"
Hugh Darwen: "we are in 5NF but not in 6NF, and again 5NF is sufficient" (several similar examples).
Then again, I can also find evidence to the contrary:
Chris Date: "Darwen and I have both felt for some time that all base relvars should be in 6NF".
On a practical note, I recently extended the SQL schema of one of our products to add a minor feature. I adopted 6NF to avoid nullable columns and ended up with six new tables where most (all?) of my colleagues would have used one table (or perhaps extended an existing table) with nullable columns. Despite my providing several 'helper' stored procs and a 'denormalized' VIEW with INSTEAD OF triggers, every coder who has had to work with this feature at the SQL level has gone out of their way to curse me :)
These guys have it down: Anchor Modeling. Great academic papers on the subject, combined with practical examples. Their writings have finally pushed me over the edge to consider building a DW in 6NF on an upcoming project. The POC work I have done has validated (for me, at least) that the enormous benefits of 6NF do outweigh the costs.

Using multiple key value stores

I am using Ruby on Rails and have a situation where I am wondering whether it is appropriate to use some sort of key-value store instead of MySQL. I have users that has_many lists, and each list has_many words. Some lists have hundreds of words, and I want users to be able to copy a list. This is a heavy MySQL task because it has to create those hundreds of word objects at one time.
As an alternative, I am considering using some sort of key value store where the key would just be the word. A list of words could be stored in a text field in mysql. Each list could be a new key value db? It seems like it would be faster to copy a key value db this way rather than have to go through the database. It also seems like this might be faster in general. Thoughts?
The general way to solve this with a relational database would be to have a lists table, a words table, and a list-words join table relating the two. You are correct that there would be some overhead, but don't overestimate it; because the table structure is defined, there is very little actual storage overhead per record, and records can be inserted very quickly.
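To make the "some overhead" concrete, here is a MySQL-flavoured sketch with invented table names; note that copying even a large list is two set-based statements, not hundreds of per-row round trips:

CREATE TABLE lists (
    id      INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    user_id INT UNSIGNED NOT NULL,
    name    VARCHAR(100) NOT NULL
);

CREATE TABLE words (
    id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    word VARCHAR(100) NOT NULL UNIQUE
);

CREATE TABLE list_words (
    list_id INT UNSIGNED NOT NULL,
    word_id INT UNSIGNED NOT NULL,
    PRIMARY KEY (list_id, word_id),
    FOREIGN KEY (list_id) REFERENCES lists(id),
    FOREIGN KEY (word_id) REFERENCES words(id)
);

-- Copy list 42 for user 7: one row for the new list,
-- then one set-based insert for all of its word links.
INSERT INTO lists (user_id, name)
SELECT 7, CONCAT(name, ' (copy)') FROM lists WHERE id = 42;

INSERT INTO list_words (list_id, word_id)
SELECT LAST_INSERT_ID(), word_id FROM list_words WHERE list_id = 42;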
If you want very fast copies, you could make lists copy-on-write: a single list could be referred to by multiple users, or multiple times by the same user, and you only actually duplicate the list when a user tries to add, remove, or change an entry. Of course, this is premature optimization; start simple and only add complications like this if you find they are necessary.
You could use a key-value store as you suggest. I would avoid trying to build one on top of a MySQL text field unless you have a very good reason; it will make any sort of searching by key very slow, as it would require string searching. A key-value data store like CouchDB or Tokyo Cabinet could do this very well, but it would most likely take up more space (as each record has to have its own structure defined and each word has to be recorded separately in each list). The only dimension of performance I would expect to be better is massively scalable reads and writes, but that's only relevant for the largest of systems.
I would use MySQL naively, and only make changes such as this if you need the performance and can prove that this method will actually be faster.

Composite primary keys versus unique object ID field

I inherited a database built on the idea that composite keys are far preferable to a single unique ID field, and that a single unique ID should never be used as a primary key. Because I was building a Rails front-end for this database, I ran into difficulties getting it to conform to the Rails conventions (though it was possible using custom views and a few additional gems to handle composite keys).
The reasoning behind this particular schema design, from the person who wrote it, was that the database handles ID fields inefficiently and that when it builds indexes on them, the tree sorts are flawed. The explanation lacked any depth and I'm still trying to wrap my head around the concept (I'm familiar with using composite keys, but not 100% of the time).
Can anyone offer opinions or add any greater depth to this topic?
Most of the commonly used engines (MS SQL Server, Oracle, DB2, MySQL, etc.) would not experience noticeable issues using a surrogate key system. Some may even experience a performance boost from the use of a surrogate, but performance issues are highly platform-specific.
In general terms, the natural key (and by extension, composite key) versus surrogate key debate has a long history, with no "right answer" likely in sight.
The arguments for natural keys (singular or composite) usually include some of the following:
1) They are already available in the data model. Most entities being modeled already include one or more attributes or combinations of attributes that meet the needs of a key for the purposes of creating relations. Adding an additional attribute to each table incorporates an unnecessary redundancy.
2) They eliminate the need for certain joins. For example, if you have customers with customer codes, and invoices with invoice numbers (both of which are "natural" keys), and you want to retrieve all the invoice numbers for a specific customer code, you can simply use "SELECT InvoiceNumber FROM Invoice WHERE CustomerCode = 'XYZ123'". In the classic surrogate key approach, the SQL would look something like this: "SELECT Invoice.InvoiceNumber FROM Invoice INNER JOIN Customer ON Invoice.CustomerID = Customer.CustomerID WHERE Customer.CustomerCode = 'XYZ123'".
3) They contribute to a more universally-applicable approach to data modeling. With natural keys, the same design can be used largely unchanged between different SQL engines. Many surrogate key approaches use specific SQL engine techniques for key generation, thus requiring more specialization of the data model to implement on different platforms.
Arguments for surrogate keys tend to revolve around issues that are SQL engine specific:
1) They enable easier changes to attributes when business requirements/rules change. This is because they allow the data attributes to be isolated to a single table. This is primarily an issue for SQL engines that do not efficiently implement standard SQL constructs such as DOMAINs. When an attribute is defined by a DOMAIN statement, changes to the attribute can be performed schema-wide using an ALTER DOMAIN statement. Different SQL engines have different performance characteristics for altering a domain, and some SQL engines do not implement DOMAINs at all, so data modelers compensate for these situations by adding surrogate keys to improve the ability to make changes to attributes (see the DOMAIN sketch after this list).
2) They enable easier implementations of concurrency than natural keys. In the natural key case, if two users are concurrently working with the same information set, such as a customer row, and one of the users modifies the natural key value, then an update by the second user will fail because the customer code they are updating no longer exists in the database. In the surrogate key case, the update will process successfully because immutable ID values are used to identify the rows in the database, not mutable customer codes. However, it is not always desirable to allow the second update – if the customer code changed it is possible that the second user should not be allowed to proceed with their change because the actual “identity” of the row has changed – the second user may be updating the wrong row. Neither surrogate keys nor natural keys, by themselves, address this issue. Comprehensive concurrency solutions have to be addressed outside of the implementation of the key.
3) They perform better than natural keys. Performance is most directly affected by the SQL engine. The same database schema implemented on the same hardware using different SQL engines will often have dramatically different performance characteristics, due to the SQL engine's data storage and retrieval mechanisms. Some SQL engines closely approximate flat-file systems, where data is actually stored redundantly when the same attribute, such as a Customer Code, appears in multiple places in the database schema. This redundant storage by the SQL engine can cause performance issues when changes need to be made to the data or schema. Other SQL engines provide a better separation between the data model and the storage/retrieval system, allowing for quicker changes of data and schema.
4) Surrogate keys function better with certain data access libraries and GUI frameworks. Due to the homogeneous nature of most surrogate key designs (example: all relational keys are integers), data access libraries, ORMs, and GUI frameworks can work with the information without needing special knowledge of the data. Natural keys, due to their heterogeneous nature (different data types, size etc.), do not work as well with automated or semi-automated toolkits and libraries. For specialized scenarios, such as embedded SQL databases, designing the database with a specific toolkit in mind may be acceptable. In other scenarios, databases are enterprise information resources, accessed concurrently by multiple platforms, applications, report systems, and devices, and therefore do not function as well when designed with a focus on any particular library or framework. In addition, databases designed to work with specific toolkits become a liability when the next great toolkit is introduced.
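Coming back to point 1 above, here is a hedged PostgreSQL-style sketch of what a DOMAIN buys you (PostgreSQL implements CREATE DOMAIN and ALTER DOMAIN, though not every engine does, and even PostgreSQL does not let you change a domain's underlying base type):

-- Define the attribute once as a DOMAIN...
CREATE DOMAIN customer_code AS varchar(10)
    CONSTRAINT customer_code_format CHECK (VALUE ~ '^[A-Z0-9]+$');

-- ...and reuse it wherever the natural key appears.
CREATE TABLE customer (
    code customer_code PRIMARY KEY,
    name text NOT NULL
);

CREATE TABLE invoice (
    invoice_no    int PRIMARY KEY,
    customer_code customer_code NOT NULL REFERENCES customer (code)
);

-- A schema-wide rule change is then a pair of statements against the domain,
-- not an ALTER of every table that uses the attribute.
ALTER DOMAIN customer_code DROP CONSTRAINT customer_code_format;
ALTER DOMAIN customer_code
    ADD CONSTRAINT customer_code_format CHECK (VALUE ~ '^[A-Za-z0-9-]+$');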
I tend to fall on the side of natural keys (obviously), but I am not fanatical about it. Due to the environment I work in, where any given database I help design may be used by a variety of applications, I use natural keys for the majority of the data modeling, and rarely introduce surrogates. However, I don’t go out of my way to try to re-implement existing databases that use surrogates. Surrogate-key systems work just fine – no need to change something that is already functioning well.
There are some excellent resources discussing the merits of each approach:
http://www.google.com/search?q=natural+key+surrogate+key
http://www.agiledata.org/essays/keys.html
http://www.informationweek.com/news/software/bi/201806814
I've been developing database applications for 15 years and I have yet to come across a case where a non-surrogate key was a better choice than a surrogate key.
I'm not saying that such a case does not exist, I'm just saying when you factor in the practical issues of actually developing an application that accesses the database, usually the benefits of a surrogate key start to overwhelm the theoretical purity of non-surrogate keys.
The primary key should be constant and meaningless; non-surrogate keys usually fail one or both requirements, eventually.
If the key is not constant, you have a future update issue that can get quite complicated.
If the key is not meaningless, then it is more likely to change, i.e. not be constant; see above.
Take a simple, common example: a table of inventory items. It may be tempting to make the item number (SKU number, barcode, part code, or whatever) the primary key, but then a year later all the item numbers change and you're left with a very messy update-the-whole-database problem...
EDIT: there's an additional issue that is more practical than philosophical. In many cases you're going to find a particular row somehow, then later update it or find it again (or both). With composite keys there is more data to keep track of and more constraints in the WHERE clause for the re-find or update (or delete), and it is also possible that one of the key segments has changed in the meantime! With a surrogate key there is always only one value to retain (the surrogate ID), and by definition it cannot change, which simplifies the situation significantly.
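A small made-up illustration of that last point, comparing a re-find/update by composite key with the same operation by surrogate id:

-- Composite key: every segment repeated in every re-find/update,
-- and the row "moves" if any segment changes in the meantime.
UPDATE OrderLine
SET Quantity = 5
WHERE CustomerCode = 'XYZ123'
  AND OrderNo = 1047
  AND LineNo = 3;

-- Surrogate key: one immutable value to carry around.
UPDATE OrderLine
SET Quantity = 5
WHERE OrderLineId = 98765;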
It sounds like the person who created the database is on the natural keys side of the great natural keys vs. surrogate keys debate.
I've never heard of any problems with btrees on ID fields, but I also haven't studied it in any great depth...
I fall on the surrogate key side: You have less repetition when using a surrogate key, because you're only repeating a single value in the other tables. Since humans rarely join tables by hand, we don't care if it's a number or not. Also, since there's only one fixed-size column to look up in the index, it's safe to assume surrogates have a faster lookup time by primary key as well.
Using 'unique (object) ID' fields simplifies joins, but you should aim to have the other (possibly composite) key still unique -- do NOT relax the not-null constraints and DO maintain the unique constraint.
If the DBMS can't handle unique integers effectively, it has big problems. However, using both a 'unique (object) ID' and the other key does use more space (for the indexes) than just the other key, and has two indexes to update on each insert operation. So it isn't a freebie -- but as long as you maintain the original key, too, then you'll be OK. If you eliminate the other key, you are breaking the design of your system; all hell will break loose eventually (and you might or might not spot that hell broke loose).
I am basically a member of the surrogate key team, and even though I appreciate and understand arguments such as the ones presented here by JeremyDWill, I am still looking for the case where a "natural" key is better than a surrogate ...
Other posts dealing with this issue usually refer to relational database theory and database performance. Another interesting argument, always forgotten in this case, is related to table normalisation and code productivity:
each time I create a table, shall I lose time:
identifying its primary key and its physical characteristics (type, size)?
remembering these characteristics each time I want to refer to it in my code?
explaining my PK choice to other developers in the team?
My answer is no to all of these questions:
I have no time to lose trying to identify "the best Primary Key" when dealing with a list of persons.
I do not want to remember that the Primary Key of my "computer" table is a 64 characters long string (does Windows accept that many characters for a computer name?).
I don't want to explain my choice to other developers, where one of them will finally say "Yeah man, but consider that you have to manage computers over different domains? Does this 64 characters string allow you to store the domain name + the computer name?".
So I've been working for the last five years with a very basic rule: each table (let's call it 'myTable') has its first field called 'id_MyTable', which is of the uniqueidentifier type. Even if a table supports a many-to-many relation, such as a 'ComputerUser' table, where the combination of 'id_Computer' and 'id_User' would form a perfectly acceptable Primary Key, I prefer to create an 'id_ComputerUser' field as a uniqueidentifier, just to stick to the rule.
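In T-SQL terms the rule looks something like this; the tables and types are only a sketch of the convention, not a recommendation of exact definitions:

CREATE TABLE Computer (
    id_Computer  uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    ComputerName nvarchar(64) NOT NULL
);

CREATE TABLE [User] (
    id_User  uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    UserName nvarchar(128) NOT NULL
);

CREATE TABLE ComputerUser (
    id_ComputerUser uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY,
    id_Computer     uniqueidentifier NOT NULL REFERENCES Computer (id_Computer),
    id_User         uniqueidentifier NOT NULL REFERENCES [User] (id_User),
    CONSTRAINT UQ_ComputerUser UNIQUE (id_Computer, id_User)  -- the "natural" pair stays unique
);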
The major advantage is that you no longer have to care about the use of the Primary Key and/or Foreign Keys within your code. Once you have the table name, you know the PK name and type. Once you know which links are implemented in your data model, you know the names of the available foreign keys in the table.
I am not sure that my rule is the best one. But it is a very efficient one!
A practical approach when developing a new architecture is to use surrogate keys for tables that will contain thousands of multi-column, highly unique records, and composite keys for short descriptive lookup tables. I usually find that academia dictates the use of surrogate keys while real-world programmers prefer composite keys. You really need to apply the right type of primary key to each table, not just one or the other across the board.
Using natural keys makes it a nightmare to use any automatic ORM as a persistence layer. Also, foreign keys over multiple columns tend to overlap one another, and this causes further problems when navigating and updating the relationships in an OO way.
You could still turn the natural key into a unique constraint and add an auto-generated id; this doesn't remove the problem with the foreign keys, though, as those will have to be changed by hand. Hopefully multi-column and overlapping constraints will be a minority of all the relationships, so you can concentrate on refactoring where it matters most.
Natural PKs have their motivations and usage scenarios and are not a bad thing (tm); they just tend not to get along well with ORMs.
My feeling is that, as with any other concept, natural keys and table normalization should be used when sensible and not as blind design constraints.
I'm going to be short and sweet here: composite primary keys are not good these days. Add arbitrary surrogate keys if you can and maintain the current key schemes via unique constraints. The ORM is happy, you're happy, the original programmer is not so happy, but unless he's your boss he can just deal with it.
Composite keys can be good - they may affect performance - but they are not the only answer, in much the same way that a unique (surrogate) key isn't the only answer.
What concerns me is the vagueness in the reasoning for choosing composite keys. More often than not vagueness about anything technical indicates a lack of understanding - maybe following someone else's guidelines, in a book or article....
There is nothing wrong with a single unique ID - in fact, if you've got an application connected to a database server and you can choose which database you're using, it will all be good, and you can pretty much do anything with your keys and not really suffer too badly.
There has been, and will be, a lot written about this, because there is no single answer. There are methods and approaches that need to be applied carefully in a skilled manner.
I've had lots of problems with IDs being provided automatically by the database - I avoid them wherever possible, but still use them occasionally.
... how the database handles ID fields in a non-efficient manner and when it's building indexes, tree sorts are flawed ...
This was almost certainly nonsense, but may have related to the issue of index block contention when assigning incrementing numbers to a PK at a high rate from different sessions. If so then the REVERSE KEY index is there to help, albeit at the expense of a larger index size due to a change in block-split algorithm. http://download.oracle.com/docs/cd/B19306_01/server.102/b14220/schema.htm#sthref998
Go synthetic, particularly if it aids more rapid development with your toolset.
I am not very experienced, but I am still in favor of using an id as the primary key. Here is an explanation using an example.
The format of external data may change over time. For example, you might think that the ISBN of a book would make a good primary key in a table of books. After all, ISBNs are unique. But as this particular book is being written, the publishing industry in the United States is gearing up for a major change as additional digits are added to all ISBNs.
If we’d used the ISBN as the primary key in a table of books, we’d have to update each row to reflect this change. But then we’d have another problem. There’ll be other tables in the database that reference rows in the books table via the primary key. We can’t change the key in the books table unless we first go through and update all of these references. And that will involve dropping foreign key constraints, updating tables, updating the books table, and finally reestablishing the constraints. All in all, this is something of a pain.
The problems go away if we use our own internal value as a primary key. No third party can come along and arbitrarily tell us to change our schema—we control our own keyspace. And if something such as the ISBN does need to change, it can change without affecting any of the existing relationships in the database. In effect, we’ve decoupled the knitting together of rows from the external representation of data in those rows.
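To see the mechanics of that difference, here is a hedged MySQL-style sketch with hypothetical books and line_items tables (the real ISBN-10 to ISBN-13 conversion also recalculates the check digit; CONCAT is used only to show the shape of the change):

-- Natural key: the renumbering ripples through every referencing table.
ALTER TABLE line_items DROP FOREIGN KEY fk_line_items_books;
UPDATE books SET isbn = CONCAT('978', isbn);
UPDATE line_items SET book_isbn = CONCAT('978', book_isbn);
ALTER TABLE line_items
    ADD CONSTRAINT fk_line_items_books
    FOREIGN KEY (book_isbn) REFERENCES books (isbn);

-- Surrogate key: one column in one table changes; no relationship is touched.
UPDATE books SET isbn = CONCAT('978', isbn);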
Although the explanation is quite bookish, I think it explains things in a simple way.
#JeremyDWill
Thank you for providing some much-needed balance to the debate. In particular, thanks for the info on DOMAINs.
I actually use surrogate keys system-wide for the sake of consistency, but there are tradeoffs involved. The most common cause for me to curse using surrogate keys is when I have a lookup table with a short list of canonical values—I'd use less space and all my queries would be shorter/easier/faster if I had just made the values the PK instead of having to join to the table.
You can do both - since any big company database is likely to be used by several applications, including human DBAs running one-off queries and data imports, designing it purely for the benefit of ORM systems is not always practical or desirable.
What I tend to do these days is to add a "RowID" property to each table - this field is a GUID, and so unique to each row. This is NOT the primary key - that is a natural key (if possible). However, any ORM layers working on top of this database can use the RowID to identify their derived objects.
Thus you might have:
CREATE TABLE dbo.Invoice (
    CustomerId      varchar(10) not null,
    CustomerOrderNo varchar(10) not null,
    InvoiceAmount   money not null,
    Comments        nvarchar(4000),
    RowId           uniqueidentifier not null default(newid()),
    primary key (CustomerId, CustomerOrderNo)
)
So your DBA is happy, your ORM architect is happy, and your database integrity is preserved!
I just wanted to add something here that I don't ever see covered when discussing auto-generated integer identity fields in relational databases (because I see them a lot), and that is: their base type can and will overflow at some point.
Now I'm not trying to say this automatically makes composite ids the way to go, but it's just a matter of fact that even though more (still unique) data could logically be added to a table, the single auto-generated integer identity could prevent this from happening.
Yes, I realize that for most situations it's unlikely, that using a 64-bit integer gives you lots of headroom, and that realistically the database probably should have been designed differently if an overflow like this ever occurred.
But that doesn't prevent someone from doing it... a table using a single auto-generated 32-bit integer as its identity, which is expected to store all transactions at a global level for a particular fast-food company, is going to fail as soon as it tries to insert its 2,147,483,648th transaction (and that is a completely feasible scenario).
It's just something to note that people tend to gloss over or ignore entirely. If any table is going to be inserted into with regularity, consideration should be given to just how often and how much data will accumulate over time, and whether an integer-based identifier should even be used.
