A database design for variable column names - ruby-on-rails

I have a situation that involves Companies, Projects, and Employees who write Reports on Projects.
A Company owns many projects, many reports, and many employees.
One report is written by one employee for one of the company's projects.
Companies each want different things in a report. Let's say one company wants to know about project performance and speed, while another wants to know about cost-effectiveness. There are 5-15 criteria, set differently by each company, which ALL apply to all of that company's project reports.
I was thinking about different ways to do this, but my current stalemate is this:
To the company table, add a text field criteria, which contains an array of the desired criteria, in order.
In the report table, have a company_id and columns criterion1, criterion2, etc.
I am completely aware that this is typically considered horrible database design - inelegant and inflexible. So, I need your help! How can I build this better?
Conclusion
I decided to go with the serialized option in my case, for these reasons:
My requirements for the criteria are simple - no searching or sorting will be required of the reports once they are submitted by each employee.
I wanted to minimize database load - the pages where these will be used already carry a lot of overhead.
I want to avoid complicating my database structure for what I believe is a relatively simple need.
CouchDB and Mongo are not currently in my repertoire so I'll save them for a more needy day.

This would be a great opportunity to use NoSQL! Seems like the textbook use-case to me. So head over to CouchDB or Mongo and start hacking.
With conventional DBs you are slightly caught in the problem of how much to normalize your data:
A sort of "good" way (meaning very normalized) would look something like this:
class Company < AR::Base
  has_many :reports
  has_many :criteria
end

class Report < AR::Base
  belongs_to :company
  has_many :criteria_values
  has_many :criteria, :through => :criteria_values
end

class Criteria < AR::Base # should be Criterion but whatever
  belongs_to :company
  has_many :criteria_values
  # one attribute 'name' (or 'type' and you can mess with STI)
end

class CriteriaValues < AR::Base
  belongs_to :report
  belongs_to :criteria
  # one attribute 'value'
end
This makes something very simple and fast in NoSQL a triple or quadruple join in SQL and you have many models that pretty much do nothing.
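For example, just listing one report's criteria and values already walks the join table (a minimal usage sketch of the models above; name and value are the attributes noted in the comments):

report = Report.find(1)
report.criteria_values.includes(:criteria).each do |cv|
  puts "#{cv.criteria.name}: #{cv.value}"
end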
Another way is to denormalize:
class Company < AR::Base
  has_many :reports
  serialize :criteria
end

class Report < AR::Base
  belongs_to :company
  serialize :criteria_values

  def criteria
    self.company.criteria
  end

  # custom code here to validate that criteria_values correspond to criteria etc.
end
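That custom validation could look something like this (a sketch, assuming criteria_values is serialized as a hash keyed by criterion name):

# inside the Report model above
validate :criteria_values_match_criteria

private

# every company criterion must be answered, and nothing else may sneak in
def criteria_values_match_criteria
  keys = (criteria_values || {}).keys
  missing = criteria - keys
  extra = keys - criteria
  errors.add(:criteria_values, "missing: #{missing.join(', ')}") if missing.any?
  errors.add(:criteria_values, "unknown: #{extra.join(', ')}") if extra.any?
end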
Related to that, a rather clever way of serializing at least the criteria (and maybe the values too, if they are all boolean) is to use bit fields. This basically gives you more or less easy migrations (hard to delete and modify, but easy to add) and searchability without any overhead.
A good plugin that implements this is Flag Shih Tzu, which I've used on a few projects and can recommend.
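With boolean criteria that might look like this (a sketch with illustrative flag names; flag_shih_tzu packs all the flags into a single integer column):

class Report < AR::Base
  include FlagShihTzu
  # each flag gets a power-of-two position inside the criteria_flags integer
  has_flags 1 => :performance,
            2 => :speed,
            3 => :cost_effectiveness,
            :column => 'criteria_flags'
end

report.speed = true
report.speed?  # => true
Report.speed   # generated named scope selecting reports with the flag set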
Variable columns (e.g. crit1, crit2, etc.)
I'd strongly advise against it. You don't get much benefit (it's still not very searchable, since you don't know which column holds your info) and it leads to maintainability nightmares. Imagine your db gets to a few million records and suddenly someone needs 16 criteria. What could have been a complete non-issue is suddenly a migration that adds a completely useless field to millions of records.
Another problem is that a lot of the ActiveRecord magic doesn't work with this - you'll have to figure out what crit1 means by yourself, and if you want to add validations on these fields, that adds a lot of pointless work.
So to summarize: Have a look at Mongo or CouchDB and if that seems impractical, go ahead and save your stuff serialized. If you need to do complex validation and don't care too much about DB load then normalize away and take option 1.

Well, when you say "to the company table, add a text field criteria, which contains an array of the desired criteria, in order", that smells like the company table wants to be normalized: you might break out each criterion into one of 15 columns called "criterion1", ..., "criterion15", where any or all columns can default to null.
To me, you are on the right track with your report table. Each row in that table represents one report and has the corresponding columns "criterion1", ..., "criterion15", as you say, where each cell records how well the company did on that column's criterion. There will be multiple reports per company, so you'll need a date (or report-number or similar) column in the report table. The date plus the company id can then form a composite key, with the company id as a non-unique index (and likewise the report date/number/some-identifier). And don't forget a column for the reporting employee's id.
Any and every criterion column in the report table can be null, meaning (maybe) that the employee did not report on this criterion; or that this criterion (column) did not apply in this report (row).
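In migration form, that table might look like this (a sketch; column names are illustrative):

create_table :reports do |t|
  t.integer :company_id,  :null => false
  t.integer :employee_id, :null => false
  t.date    :report_date, :null => false
  1.upto(15) { |i| t.string "criterion#{i}" } # all nullable by default
end
add_index :reports, [:company_id, :report_date]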
It seems like that would work fine. I don't see that you ever need to do a join. It looks perfectly straightforward, at least to these naive and ignorant eyes.

Create a criteria table that lists the criteria for each company (company 1 .. * criteria).
Then create a report_criteria table (report 1 .. * report_criteria) that holds the values of those criteria for a specific report, based on the criteria table (criteria 1 .. * report_criteria).
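In Rails terms that maps to something like this (a sketch; model names are assumed from the table names above):

class Criteria < ActiveRecord::Base
  belongs_to :company
  has_many :report_criteria
end

class ReportCriteria < ActiveRecord::Base
  belongs_to :report
  belongs_to :criteria
end

class Report < ActiveRecord::Base
  has_many :report_criteria
end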

Related

Is it a good idea to serialize immutable data from an association?

Let's say we have a collection of products, each with their own specifics e.g. price.
We want to issue invoices that contain said products. Using a direct association from Invoice to Product via :has_many is a no-go: products may change, but invoices must be immutable, and a direct association would let a product change retroactively alter an invoice's price, line descriptions, etc.
I first thought of having an intermediate model like InvoiceProduct that would be associated with the Invoice and created from a Product. Each InvoiceProduct would be unique to its parent invoice and immutable. This option would significantly increase the db size as more invoices get issued, though, so I don't think it is a good option.
I'm now considering adding a serialized field to the Invoice model with all the information about the products associated with it - a hash of the collection of items the invoice contains. This way we keep them immutable even if the product gets modified in the future.
I'm not sure of possible mid or long term downsides to this approach, though. Would like to hear your thoughts about it.
Also, if there's some more obvious approach that I might have overlooked I'd love to hear about it too.
Cheers
In my experience, the main downside of a serialized field approach vs the InvoiceProducts approach described above is decreased flexibility in terms of how you can use your invoice data going forward.
In our case, we have Orders and OrderItems tables in our database and use this data to generate sales analytics reports as well as customer Invoices.
Querying the OrderItem data to generate the sales reports we need is much faster and easier with this approach than it would be if the same data was stored as serialized data in the db.
No.
Serialized columns have no place in a modern application. They are an overused dirty hack from the days before native JSON/JSONB columns were widespread, and they have only downsides. The only exception to this rule is when you're using application-side encryption.
JSON/JSONB columns can be used for a limited number of tasks where the data defies being defined by a fixed schema, or if you're just storing raw JSON responses - but they should not be how you define your schema out of convenience, because you're just shooting yourself in the foot. It's a special tool for special jobs.
The better alternative is to actually use good relational database design and store the price at the time of sale and everything else in a separate table:
class Order < ApplicationRecord
  has_many :line_items
end

# rails g model line_item order:belongs_to product:belongs_to units:decimal unit_price:decimal subtotal:decimal
# The line item model is responsible for each item of an order
# and records the price at the time of order and any discounts applied to that line
class LineItem < ApplicationRecord
  belongs_to :order
  belongs_to :product
end

class Product < ApplicationRecord
  has_many :line_items
end
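Capturing the price at the time of sale can then be a simple callback (a sketch; it assumes the Product model has a price column):

class LineItem < ApplicationRecord
  belongs_to :order
  belongs_to :product

  before_validation :capture_price, on: :create

  private

  # copy the current product price onto the line so later product
  # changes cannot alter an already-issued invoice
  def capture_price
    self.unit_price ||= product.price
    self.subtotal = unit_price * units
  end
end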
A serialized column is not immutable in any way - it's actually more prone to denormalization and corruption, as there are no database-side constraints to ensure its correctness.
Tables, on the other hand, can actually be made immutable in many databases by using triggers.
Advantages:
No violation of 1NF.
A normalized fixed data schema to work with - constraints ensure the validity of the data on the database level.
Joins are an extremely powerful tool and not as expensive as you might think.
You can actually access and make sense of the data outside of the application if needed.
DECIMAL data types. JSON/JSONB only has a single number type that uses IEEE 754 floating point.
You have an actual model and associations instead of having to deal with raw hashes.
You can query the data in sane queries.
You can generate aggregates on the database level and use tools like materialized views.

Rails/Ruby: Performing calculate on ActiveRecord_AssociationRelation (including custom foreign_key)

I hope I am asking the proper question in the title, as my issue feels like it should be quite trivial yet I'm having terrible luck figuring it out.
I have two basic models with a standard has_many and belongs_to relationship:
class StandingEvent < Event
  belongs_to :standing, foreign_key: 'actor_id'
end

class Standing < ActiveRecord::Base
  has_many :standing_events
end
My goal is simple: To calculate the SUM of a field in a collection of StandingEvents records acquired through a Standing association. As expected, this collection is of type StandingEvent::ActiveRecord_AssociationRelation.
Ignoring everything else and cutting it down to the barest of bones, I get an error when running the following:
@standing = Standing.find(4)
@standing.standing_events.sum(:change)
The error produced is found below:
Mysql2::Error: Unknown column 'events.standing_id' in 'where clause':
SELECT SUM(`events`.`change`) AS sum_id FROM `events` WHERE `events`.`actor_type` IN ('StandingEvent') AND `events`.`standing_id` = 4
So, as seen from the above error, the exact problem is that the generated SQL query is trying to use standing_id as the column name (presumably because of the associated record) instead of the actual column name specified in the model itself (actor_id).
This issue only comes up when using a calculate method (such as sum), since I'm using both these models and their association very heavily throughout the application without any issue.
The only way "around" this issue that I've found so far seems very poor (and strikes me as unnecessary), which is essentially to chain my where clauses and sum through the base class, rather than using a previously gathered set of associated records:
@standing = Standing.find(4)
StandingEvent.where(standing: @standing).sum(:change)
The above code performs the calculation without issue, but since I'd like to perform multiple calculations upon the same collection within a single request, it seems like a very poor solution to re-query the entire set every time as above (though perhaps I don't understand Rails enough, to be fair).
As mentioned in my question title, I can't help but wonder if this is a bug (for lack of a better term) of some sort related to the use of the foreign_key field I specified for the child association (in this case, the foreign_key for StandingEvent's Standing association is renamed to actor_id).
Any and all insight would be most appreciated!
I think you'll need to specify the foreign_key on both sides of the association
has_many :standing_events, foreign_key: :actor_id
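Put together (a sketch based on the models in the question):

class Standing < ActiveRecord::Base
  has_many :standing_events, foreign_key: :actor_id
end

class StandingEvent < Event
  belongs_to :standing, foreign_key: 'actor_id'
end

With the has_many side also told about actor_id, the aggregation should generate the right column:

@standing = Standing.find(4)
@standing.standing_events.sum(:change)
# => SELECT SUM(`events`.`change`) FROM `events` WHERE ... `events`.`actor_id` = 4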

Rails: polymorphic association, different options depending on type?

I'm building a diet analysis app in Rails 4.1. I have a model, FoodEntry, which at a simple level has a quantity value and references a Food and a Measure:
class FoodEntry < ActiveRecord::Base
  belongs_to :food
  belongs_to :measure
end
However I actually have two different types of measures, standard generic measures (cups, teaspoons, grams, etc.) and measures which are specific to a food (heads of broccoli, medium-sized bananas, large cans, etc.). Sounds like a case for a polymorphic association right? e.g.
class FoodEntry < ActiveRecord::Base
  belongs_to :food
  belongs_to :measure, polymorphic: true # Uses measure_id and measure_type columns
end

class StandardMeasure < ActiveRecord::Base
  has_many :food_entries, as: :measure
end

class FoodMeasure < ActiveRecord::Base
  has_many :food_entries, as: :measure
end
The thing is, the food-specific measures come from a legacy database dump. These records are uniquely identified by a combination of their food_id and description - they aren't supplied with a single-column primary key (description is not unique on its own because there are multiple foods with the same measure description but different numeric data). Because I'm importing to my Rails Postgres db, I'm able to add a surrogate primary key - the auto-incrementing integer id column that Rails expects. But I don't want to utilize this id as a reference in my FoodEntry model because it poses a pretty big challenge for keeping referential integrity intact when the (externally-supplied) data is updated and I have to reimport. Basically, those ids are completely subject to change, so I'd much rather reference the food_id and description directly.
Luckily it's not very difficult to do this in Rails by using a scope on the association:
class FoodEntry < ActiveRecord::Base
  belongs_to :food
  belongs_to :measure, ->(food_entry) { where(food_id: food_entry.food_id) },
             primary_key: 'description', class_name: 'FoodMeasure'
  # Or even: ->(food_entry) { food_entry.food.measures }, etc.
end
Which produces a perfectly acceptable query like this:
> FoodEntry.first.measure
FoodMeasure Load (15.6ms) SELECT "food_measures".* FROM "food_measures" WHERE "food_measures"."description" = $1 AND "food_measures"."food_id" = '123' LIMIT 1 [["description", "Broccoli head"]]
Note that this assumes that measure_id is a string column in this case (because description is a string).
In contrast the StandardMeasure data is under my control and doesn't reference Foods, and so it makes perfect sense to simply reference the id column in that case.
So the crux of my issue is this: I need a way for a FoodEntry to reference only one type of measure, as it would in the polymorphic association example I made above. However I don't know how I'd implement a polymorphic association with respect to my measure models because as it stands:
an associated FoodMeasure needs to be referenced through a scope, while a StandardMeasure doesn't.
an associated FoodMeasure needs to be referenced by a string, while a StandardMeasure is referenced by an integer (and the columns being referenced have different names).
How do I reconcile these issues?
Edit: I think I should explain why I don't want to use the autonumber id on FoodMeasures as my foreign key in FoodEntries. When the data set is updated, my plan was to:
Rename the current food_measures table to retired_food_measures (or whatever).
Import the new set of data into a new food_measures table (with a new set of autonumber ids).
Run a join between these two tables, then delete any common records in retired_food_measures, so it just has the retired records.
If I reference those measures by food_id and description, I get the benefit that food entries automatically refer to the new records, and therefore to any updated numeric data for a given measure. And I can instruct my application to go searching in the retired_food_measures table if a referenced measure can't be found in the new one.
This is why I think using the id column would make things more complicated, in order to receive the same benefits I'd have to ensure that every updated record received the same id as the old one, every new record received a new not-used-before id, and that any retired id is never used again.
There's also one other reason I don't want to do this: ordering. The records in the dump are ordered first by food_id, however the measures for any given food_id are in a non-alphabetical but nevertheless logical order I'd like to retain. The id column can serve this purpose elegantly (because ids are assigned in row order on import), but I lose this benefit the moment the ids start getting messed around with.
So yeah I'm sure I could implement solutions to these problems, but I'm not sure it would be worth the benefit?
it poses a pretty big challenge for keeping referential integrity intact when the (externally-supplied) data is updated
This is an illusion. You have total control over the surrogates. You can process external updates in exactly the same way whether the surrogates are there or not.
This is just one of those times when you want your own new names for things - in this case Measures, of which FoodMeasures and StandardMeasures are subtypes. Have a measure_id in all three models/tables. You can find many idioms for simplifying subtype constraints, e.g. using type tags.
If you process external updates in such a way that it is convenient for such objects to also have such surrogates then you need to clearly separate such PutativeFoodMeasures and FoodMeasures as subtypes of some supertype PutativeOrProvenFoodMeasure and/or of PutativeOrProvenMeasure.
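In Rails that supertype might be sketched like this (an illustration, not the only idiom; the subtype tables reuse the supertype's id as their own primary key):

class Measure < ActiveRecord::Base
  has_one :food_measure
  has_one :standard_measure
end

class FoodMeasure < ActiveRecord::Base
  self.primary_key = 'measure_id' # shared id with the supertype row
  belongs_to :measure
end

class StandardMeasure < ActiveRecord::Base
  self.primary_key = 'measure_id'
  belongs_to :measure
end

class FoodEntry < ActiveRecord::Base
  belongs_to :food
  belongs_to :measure # a single integer FK, no polymorphic columns needed
end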
EDIT:
Your update helps. You have described what I did. It is not difficult to map old to new ids: join old and new on (food_id, description) and select the old id (not a food_id!). You control the ids; how can reusing them matter compared to their not existing at all otherwise? Ditto for sorting FoodMeasures: do as you would have. It is only when you mix them with StandardMeasures in some result that you need to order the mixture differently, but you would do that anyway whether or not a shared id existed. (Though "polymorphic:" may not be the best id-sharing design.)
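That mapping join could be as simple as this (a sketch; the measure_id_map table is hypothetical):

# during reimport: carry old ids over to the matching new rows
ActiveRecord::Base.connection.execute(<<-SQL)
  INSERT INTO measure_id_map (old_id, new_id)
  SELECT r.id, f.id
  FROM retired_food_measures r
  JOIN food_measures f
    ON f.food_id = r.food_id
   AND f.description = r.description
SQL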
The Measures model offers measures; and when you know you have a FoodMeasure or StandardMeasure you can get at its subtype-particular parts.

Rails and multiple profiles

I have an app where a user can have one of several different profiles. Some profile data is always the same (like first and last name, gender, etc.). Other fields vary (for example, a doctor can have a license number and a text about himself, while a patient can have a phone number, etc.).
I found an approach that fits pretty well, but I still have some doubts. My approach looks like this:
User model contains a lot of system-specific data, controlled by Devise and has_one :person
Person model contains common profile data and belongs_to :profile, :polymorphic => true
Doctor/Patient/Admin/etc contains more specific profile data and has_one :person, :as => :profile
With this approach I can simply check in the Person model:
def doctor?
  self.profile_type == 'Doctor'
end
But there are a few things that still bother me.
The first is performance. This approach requires a lot of additional joins. For example, reading a doctor's license number, first/last name and email at the same time generates two additional joins.
The second is different ids for the profile-specific model (i.e. Doctor) and the Person/User models. There will be situations when a user with ID=1 has a Patient record with a different ID, but it would be logical for all these associated models to share the same ID.
Maybe you'll see more pitfalls in this approach? Is there a better solution for my situation?
You've got four basic patterns you can use here that might work.
Omnirecord
In this model you have all the possible fields in a single record and then use STI to differentiate between profile types. This is the simplest to implement but looks the most messy as few people will have all fields populated. Keep in mind that NULL string fields don't take up a lot of database space, typically one bit per column, so having a lot of them isn't a big deal.
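A sketch of that with STI (field names borrowed from the question; the type column is what differentiates the rows):

# one `people` table holds every possible column; `type` picks the subclass
class Person < ActiveRecord::Base
end

class Doctor < Person
  validates :license_number, presence: true # doctor-only field
end

class Patient < Person
  validates :phone_number, presence: true # patient-only field
end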
Optional Joins
In this model you create a number of possible linkages to different profile types, like doctor_profile_id linking to a DoctorProfile, patient_profile_id linking to a PatientProfile, and so forth. Since each relationship is spelled out in a specific field, you can even enforce foreign key constraints if you prefer, and indexing is easy. This can come in handy when a single record requires multiple different profiles to be associated with it, as in the case of a patient that's also a doctor.
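A sketch of the optional-joins layout (the *_profile_id columns are the ones named above):

class User < ActiveRecord::Base
  belongs_to :doctor_profile  # doctor_profile_id, nullable
  belongs_to :patient_profile # patient_profile_id, nullable
end

class DoctorProfile < ActiveRecord::Base
  has_one :user
end

class PatientProfile < ActiveRecord::Base
  has_one :user
end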
Polymorphic Join
In this model you link to a specific profile type and profile id using the :polymorphic option, much like you've suggested. Indexing is more complicated and foreign keys are impossible. You're also limited to having one and only one profile. These tend to work as a starting point but may prove to be trouble down the road when you get a doctor + patient requirement.
Key/Value Store
In this model you abandon all efforts to organize things into singular records and instead build an associated ProfileField and ProfileValue table. The ProfileField identifies what fields are available on what kinds of profiles, such as label, allowed values, data type and so forth, while the ProfileValue is used to store specific values for specific profiles.
class User < ActiveRecord::Base
  has_many :profile_values
  has_many :profile_fields, through: :profile_values
end

class ProfileField < ActiveRecord::Base
  has_many :profile_values
end

class ProfileValue < ActiveRecord::Base
  belongs_to :user
  belongs_to :profile_field
end
Since this one is wide open, you can allow the site administrator to redefine which fields are required, add new fields, and so on, without having to make a schema change.

Best way to handle multiple tables to replace one big table in Rails? (e.g. 'todo_items1', 'todo_items2', etc., instead of just 'todo_items')?

Update:
Originally, this post was using Books as the example entity, with Books1, Books2, etc. being the separated tables. I think this was a bit confusing, so I've changed the example entity to be "private todo_items created by a particular user."
This kind of makes Horace and Ryan's original comments seem a bit off, and I apologize for that. Please know that their points were valid when it looked like I was dealing with books.
Hello,
I've decided to use multiple tables for an entity (e.g. todo_items1, todo_items2, todo_items3, etc.), instead of just one main table which could end up having a lot of rows (e.g. just todo_items). I'm doing this to try to avoid the potential performance drop that could come with having too many rows in one table.
With that, I'm looking for a good way to handle this in Rails, mainly by trying to avoid loading a bunch of unused associations for each User object. I'm guessing that others have done something similar, so there are probably a few good tips/recommendations out there.
(I know that I could use a partition for this, but, for now, I've decided to go the 'multiple tables' route.)
Each user has their todo_items placed into a specific table. The actual "todo items" table is chosen when the user is created, and all of their todo_items go into the same table. The data in their todo items collection is private, so when it comes time to process a user's todo_items, I'll only have to look at one table.
One thing I don't particularly want to have is a bunch of unused associations in the User class. Right now, it looks like I'd have to do the following:
class User < ActiveRecord::Base
  # has_many takes a single association name, so each table needs its own
  has_many :todo_items1, class_name: 'TodoItems1'
  has_many :todo_items2, class_name: 'TodoItems2'
  has_many :todo_items3, class_name: 'TodoItems3'
  has_many :todo_items4, class_name: 'TodoItems4'
  has_many :todo_items5, class_name: 'TodoItems5'
end

class TodoItems1 < ActiveRecord::Base
  belongs_to :user
end

class TodoItems2 < ActiveRecord::Base
  belongs_to :user
end

class TodoItems3 < ActiveRecord::Base
  belongs_to :user
end
The thing is, for each individual user, only one of the "todo items" tables would be usable/applicable/accessible since all of a user's todo_items are stored in the same table. This means only one of the associations would be in use at any time and all of the other has_many :todo_itemsX associations that were loaded would be a waste.
For example, with a user.id of 2, I'd only need todo_items3.find_by_text('search_word'), but the way I'm thinking of setting this up, I'd still have access to todo_items1, todo_items2, todo_items4 and todo_items5.
I'm thinking that these "extra associations" add extra overhead and make each User object's size in memory much bigger than it has to be. Also, there's a bunch of stuff that Ruby/Rails is doing in the background which may cause other performance problems.
I'm also guessing that there could be some additional method call/lookup overhead for each User object, since it has to load all of those associations, which in turn creates all of those nice, dynamic model accessor methods like User.find_by_something.
I don't really know what Ruby/Rails does internally with all of those has_many associations, though, so maybe it's not so bad. But right now I'm thinking it's really wasteful, and that there may be a better, more efficient way of doing this.
So, a few questions:
1) Is there some sort of special Ruby/Rails methodology that could be applied to this 'multiple tables to represent one entity' scheme? Are there any 'best practices' for this?
2) Is it really bad to have so many unused has_many associations for each object? Is there a better way to do this?
3) Does anyone have any advice on how to abstract the fact that there's multiple "todo items" tables behind a single todo_items model/class? For example, so I can call todo_items.find_by_text('search_phrase') instead of todo_items3.find_by_text('search_phrase').
Thank you!
This is not the way to scale.
It would probably be better to go with master-slave replication and proper indexing (besides the primary key) on fields such as "title" and/or "author", if that's what you're going to be looking up books by. With it spread over n tables, how are you going to know the best place to go looking for the book the user is after? Are you going to go looking through 4 tables?
I agree with Horace: " don't try to solve a performance issue before you have figures to prove it." I suggest, however, that you should really look into adding indexes to your table if you want lookups to be fast. If they aren't fast, then tell us how they aren't fast and we will tell you how to make it go ZOOOOOM.
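For example (a sketch, assuming a single todo_items table; the column names are the ones used in the question):

class AddIndexesToTodoItems < ActiveRecord::Migration
  def change
    add_index :todo_items, :user_id # fast per-user lookups
    add_index :todo_items, :text    # fast find_by_text-style lookups
  end
end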
