How to Organize an out of control table? - ruby-on-rails

Hello and good morning.
I am working on a side project where I am adding an analytics board to an already existing app. The problem is that the users table now has over 400 columns. My question is: what is a better way of organizing this table, such as splitting it off into separate tables? How do you do that, and how do you link the new tables to each other?
Another concern is that if I separate the table, will I still be able to save into it through the user model? I have code right now that says:
user.wallet += 100
user.save
If I separate wallet from user and link the two tables, will I have to change this code? The reason I'm asking is that there is a ton of code like this in the app.
Thank you so much if you can help me understand how to organize a database. As a bonus, if there is a book that talks about database organization, can you recommend it to me (preferably one that covers Rails)?
Edit: Is there also a way to do all of this without losing any data? For example, transfer the data to a new column on the new table and then destroy the old column.

Please read about:
Database Normalization
You'll get loads of hits when searching for that string and there are many books about database design covering that subject.
It is most likely that this table of yours lacks normalization, but you have to see for yourself!
Just to give an orientation: I would get a little anxious when dealing with a tenth of that number of columns. That said, I have to stress that there can be well-normalized tables with 400 columns as well as sloppily designed ones with just 10 columns.
Generally speaking, though, the probability of dealing with badly designed tables, and hence facing trouble, simply rises with the number of columns.
So take your time, and if you find that the users table does need normalization, the next step would indeed be to spread the data over several tables. Because that clearly (and most likely heavily) affects the code of your application, this is where you have to weigh the pros and cons thoroughly - it is simply impossible to judge that from far away.
Say you have substantial problems (e.g. fierce performance problems - otherwise you probably wouldn't be posting) that could be eased by normalization; then there are different approaches for how to split the data. Here, please read about:
Cardinalities
Usually the new tables are linked by
Foreign Keys
- that is, identical values (like a user id) that appear in multiple tables and are used to join them.
And finally, yes, you can do that without losing data as the overall amount of information never changes when normalizing.
In case your last question was meant to be technical: there is no problem in reading data from one column and inserting it into a new one (in a new table). That has to happen in a certain order, as foreign keys have to be filled before you can use them. See
Referential Integrity
However, quite obviously, deleting data and dropping columns interferes with the operability of your application. Good planning is essential.
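To make that concrete, here is a minimal sketch of how the wallet column from the question could be moved into its own table without losing data and without rewriting every user.wallet call. All the specifics are assumptions (a wallets table with a balance column, the migration version, PostgreSQL for the raw UPDATE in the rollback), so treat it as an illustration rather than a recipe:

# A hypothetical migration; the wallets table, balance column and Rails
# version are assumptions. The raw UPDATE in `down` assumes PostgreSQL.
class ExtractWalletsFromUsers < ActiveRecord::Migration[7.0]
  def up
    create_table :wallets do |t|
      t.references :user, null: false, foreign_key: true
      t.integer :balance, default: 0, null: false
      t.timestamps
    end

    # Copy the existing values before dropping the old column, so no data is lost.
    execute <<~SQL
      INSERT INTO wallets (user_id, balance, created_at, updated_at)
      SELECT id, wallet, NOW(), NOW() FROM users
    SQL

    remove_column :users, :wallet
  end

  def down
    add_column :users, :wallet, :integer, default: 0
    execute "UPDATE users SET wallet = wallets.balance FROM wallets WHERE wallets.user_id = users.id"
    drop_table :wallets
  end
end

# Keeping the old interface so existing code keeps working:
class Wallet < ActiveRecord::Base
  belongs_to :user
end

class User < ActiveRecord::Base
  has_one :wallet_record, class_name: "Wallet", autosave: true, dependent: :destroy

  def wallet
    (wallet_record || build_wallet_record).balance
  end

  def wallet=(value)
    (wallet_record || build_wallet_record).balance = value
  end
end

With the autosave option and the two accessor methods, code such as user.wallet += 100 followed by user.save should keep working even though the data now lives in the new table.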

Related

Is ~44 columns too much for a model? Does it make sense to break a one-to-one relation?

I am interested in what the best practice is for a model that has a lot of data attached to it. Most of my app revolves around one model (SKU), and it seems to have more and more things associated with it.
For example, my SKU model has multiple prices, dimensions, weight, recommended prices for multiple price levels, title, description, shelf life, etc. Would it make sense to break all the pricing info out into another table? Or to break the SKU up into different uses of the SKU and associate them? For example, WebSKU, StockSKU, etc.
As mentioned in the answer linked by Tom, if all your attributes really belong to that model there is no reason to break it up. However, if you have columns like price1, price2, price3 or dimension_x_1, dimension_y_1, dimension_x_2, dimension_y_2, etc, then it usually means you should be creating another table to contain those.
For example, you could set it up so that you have the following models:
class Sku < ActiveRecord::Base
  has_many :prices
  has_many :dimensions
end

class Price < ActiveRecord::Base
  belongs_to :sku
end

class Dimension < ActiveRecord::Base
  belongs_to :sku
end
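A minimal migration sketch for the extracted tables could look like the following; the column names and the migration version are assumptions chosen only to illustrate the idea:

class CreatePricesAndDimensions < ActiveRecord::Migration[7.0]
  def change
    create_table :prices do |t|
      t.references :sku, null: false, foreign_key: true
      t.string  :level                      # e.g. "retail" or "wholesale" (assumed)
      t.decimal :amount, precision: 10, scale: 2
      t.timestamps
    end

    create_table :dimensions do |t|
      t.references :sku, null: false, foreign_key: true
      t.decimal :x
      t.decimal :y
      t.decimal :z
      t.timestamps
    end
  end
end

After that, sku.prices.create(level: "retail", amount: 9.99) replaces a price1 column, and adding a fourth price level no longer requires a schema change.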
As everyone else said, the design of a database should respond to the logic behind it. Why? Mainly, because it will be easier to maintain and understand.
I was also going to draw attention to normalization rules, as #sawa did.
Generally, it is a good approach to normalize your database, as it provides several advantages. You should read this Wikipedia link (at least as a starting point).
Following the normal forms will help you design your database taking into account the logic behind your data.
But denormalization also has its advantages, the first (and most commonly cited) being read performance. It basically means keeping data in one table that the normal forms would have you put in separate tables, and it generally makes sense when that data has some logical relation.
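As a small, hedged illustration of deliberate denormalization in Rails (the model names below are just examples), a counter cache stores an aggregate on the parent row so that reads avoid a COUNT query:

class Sku < ActiveRecord::Base
  has_many :prices
end

class Price < ActiveRecord::Base
  # Keeps skus.prices_count up to date on every insert and delete,
  # trading a little write overhead for cheaper reads.
  belongs_to :sku, counter_cache: true
end

# Requires an integer column on the parent table, e.g. in a migration:
# add_column :skus, :prices_count, :integer, default: 0, null: false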
You have to aim to achieve a balance depending on the problem you are facing.
On the other hand, from the tags on your post I can see you are using Ruby on Rails, which uses the Active Record pattern. One consequence of the database model you are presenting is that you will probably end up with a domain model that is just as complex - that is, very large. I don't know every detail about your project, but I would guess that it will quickly grow into a god object, making your code hard to maintain, extend and understand.
A database should be designed not according to how many columns it has, but according to its logic, particularly following Codd's normal forms. If there is systematic redundancy in your database, that is a sign it should be split into multiple tables. If not, keep it as it is.
I think it is good to design a data model taking into account how the database engine works with files and memory. The first bottleneck of PostgreSQL is file IO; memory consumption is also an important factor. When PostgreSQL reads table data (note: table data is not read during index-only scans), it reads 8 KB pages (a compile-time parameter). The more tuples fit into such a page, the less file IO and memory consumption, the better the cache usage (more frequent hits, faster prewarming, etc.) and the better the performance.
So, if you have a really high-load project, it can be useful to think about separating the most frequently used data into isolated tables (and, as a next step, placing those tables in a separate tablespace on SSD or a powerful RAID).
That is, there should be some balance between logical simplicity and performance tweaks.
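To get a rough sense of how wide a table currently is on disk, the PostgreSQL catalog can be queried from a Rails console. This is just a sketch - the table name is an example, while pg_class, relpages and reltuples are standard PostgreSQL catalog objects:

# Pages are 8 KB by default; more rows per page means less IO per row fetched.
stats = ActiveRecord::Base.connection.select_one(<<~SQL)
  SELECT relpages, reltuples FROM pg_class WHERE relname = 'users'
SQL

if stats
  pages  = stats["relpages"].to_f
  tuples = stats["reltuples"].to_f
  puts "~#{(tuples / pages).round(1)} rows per 8 KB page" if pages > 0
end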

Transaction Fact Table approach

I'm working on financial data mart structure.
And I'm having some doubts on whats the better approach to do so.
The source system database, Dynamics AX 2009, has three tables for customer transactions:
One table for open transactions, where the customer still needs to pay for the service/product;
One table for settled transactions, which holds what the customer has already paid;
And finally a table that holds all customer transactions, from open to settled, as well as other transactions such as customer-to-bank or ledger account transactions.
I thought of two options. The first is to maintain one fact table per source table: a fact for open transactions, a fact for all customer transactions and a fact for settled transactions.
The second is to create a single fact table to hold all transactions; to do so I would have to do a full join on the three tables.
I'm not sure about either approach, as the first seems to just copy the tables from production and create the proper dimensions.
With the second I would create a massive fact table whose data would constantly change, as open transactions are deleted from the source system when they are settled.
Another doubt: should I create a fact table with an SCD (slowly changing dimension) structure to maintain historical data (start date, end date, flag)?
It's hard to say from the information given whether this needs to be one or more Fact tables. However, the key point which you should use to decide is whether all of the information is at the same granularity. Consider the grain of your intended Fact table(s) and you should find an answer for whether you need one table or multiple tables.
If all of the information sits at the same grain - i.e. all of the same dimensions apply to all of the measures you are considering putting into the same Fact table - then they can probably all live in the same Fact table. If you're finding that some of the Dimensions wouldn't apply to some of the measures then you probably need to re-think your design. Either you might need multiple Fact tables, or you might need to take all of your measures down to the lowest grain and combine hierarchies into single Dimensions if you currently have them split across multiple Dimensions.
While it's been mentioned that having measures in separate cubes could make it difficult to compare things, keep in mind that you don't need one cube per Fact table. You can have multiple Fact tables in a single cube, and sometimes this is very helpful when you need to be able to compare measures which share some Dimensions but not others. This is far, far better than forcing data which does not have the same grain into one Fact table.
Also, it sounds like what you're trying to model is the sales ledger of an organisation. I'd suggest having a dig around via Google as you may well be able to find materials discussing dimensional data warehouse design for sales ledger structures, rather than reinventing the wheel. If you don't have a decent understanding of the accounting concepts you're trying to model I would especially recommend looking for a reference schema to work from, or failing that doing some reading up on accountancy concepts (and sales ledgers specifically). Understanding the account structure should help you understand what the grain of your Fact table(s) needs to be, how to model the Dimensions, and so on.
This is a really helpful abridged version of Kimball's modelling techniques which discusses grain, and the different types of Fact table, amongst many other topics:
http://www.kimballgroup.com/wp-content/uploads/2013/08/2013.09-Kimball-Dimensional-Modeling-Techniques11.pdf
I think you should just use one fact table (one cube) and use a dimension to differentiate between open/settled/etc. transactions. That's what dimensions are for: they help you categorize your measures and get a specific view of them. This approach will also open up many more possibilities to create knowledge with your cube. With separate cubes for open/settled/etc. transactions, it would be harder or even impossible to compare this data.
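One way to sketch that single fact table with a status dimension, expressed here as a Rails migration only to stay in the language used elsewhere on this page (all table and column names are assumptions):

class CreateCustomerTransactionStar < ActiveRecord::Migration[7.0]
  def change
    # Small dimension that replaces separate open/settled fact tables.
    create_table :transaction_status_dims do |t|
      t.string :status, null: false        # e.g. "open", "settled", "bank", "ledger"
    end

    create_table :customer_transaction_facts do |t|
      t.references :customer_dim, null: false
      t.references :date_dim, null: false
      t.references :transaction_status_dim, null: false
      t.decimal :amount, precision: 15, scale: 2, null: false
    end
  end
end

Slicing by the status dimension then gives the open, settled or combined views without duplicating measures.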
Since the data is changing constantly, you should consider updating your fact table at regular intervals and rebuilding your cube when needed.
Whether you use SCDs or not really depends on the data you process and what it is used for. Is there a business case calling for it? Is there a technical use?
I think this is something you have to decide on your own.

Ruby on Rails database and application design

We have to create a rather large Ruby on Rails application based on a large database. This database is updated daily; each table has about 500,000 records (or more) and this number will grow over time. We will also have to provide proper versioning of all data along with referential integrity. It must be possible for a user to move from version to version, where each version is a kind of "snapshot" of the main database at a different point in time. In addition, some portions of the data need to be served to other external applications via an API.
Considering large amounts of data we thought of splitting database into pieces:
State of the data at present time
Versioned attributes of each table
Snapshots of the first database at specific, historical points in time
Each of those would have its own application, creating a service with an API to interact with the data. This is needed because we don't want multiple applications connecting to multiple databases directly.
The question is: is this the proper approach? If not, what would you suggest?
We've never had any experience with a project of this magnitude and we're trying to find the best possible solution. We don't know if this kind of data separation makes any sense. If it does, how do we provide proper communication between the different applications and the individual services, and between the services themselves, as this will also be required?
In general the amount of data in the tables should not be your first concern. In PostgreSQL you have a very large number of options to optimize queries against large tables. The larger question has to do with what exactly you are querying, when, and why. Your query loads are always larger concerns than the amount of data. It's one thing to have ten years of financial data amounting to 4M rows. It's something different to have to aggregate those ten years of data to determine what the balance of the checking account is.
In general it sounds to me like you are trying to create a system that will rely on such aggregates. In that case I recommend the following approach, which I call log-aggregate-snapshot (a rough sketch follows the list below). In this, you have essentially three complementary models which work together to provide an up-to-date, well-performing solution. However, the restrictions on this are important to recognize and understand.
Event model. This is append-only, with no updates. In this model only inserts occur, plus updates to some metadata used by certain queries, and only where absolutely needed. For a financial application these would be the tables representing the journal entries and lines.
The aggregate closing model. This is append-only (though deletes are allowed for purposes of re-opening periods). This provides roll-forward information for specific purposes. Once a closing entry is in, no entries can be made for a closed period. In a financial application, this would represent closing balances. New balances can be calculated by starting at an aggregation point and rolling forward. You can also use partial indexes to make it easier to pull just the data you need.
Auxiliary data model. This consists of smaller tables which do allow updates, inserts, and deletes, provided that the integrity of the other models is not compromised. In a financial application this might be things like customer or vendor data, employee data, and the like.
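The sketch below shows how those three models might look as Rails classes; every class and column name here is an assumption used only to illustrate the shape of the approach:

# 1. Event model: append-only journal, never updated after creation.
class JournalEntry < ActiveRecord::Base
  has_many :journal_lines

  # Mark persisted rows read-only so application code cannot rewrite history.
  def readonly?
    persisted?
  end
end

class JournalLine < ActiveRecord::Base
  belongs_to :journal_entry
end

# 2. Aggregate closing model: balances rolled forward from a closing point
#    (e.g. columns account_id, period_end, balance).
class Account < ActiveRecord::Base
  has_many :closing_balances
end

class ClosingBalance < ActiveRecord::Base
  belongs_to :account
end

# 3. Auxiliary data model: ordinary CRUD tables such as customers or vendors.
class Customer < ActiveRecord::Base
  has_many :journal_entries
end

A current balance is then the most recent ClosingBalance plus the sum of the journal lines dated after its period_end, which keeps the aggregation window small.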

What database should I use in an app where my models don't represent different ideas, but instead different types with overlapping fields?

I'm building an application where I will be gathering statistics from a game. Essentially, I will be parsing logs where each line is a game event. There are around 50 different kinds of events, but a lot of them are related. Each event has a specific set of values associated with it, and related events share a lot of these attributes. Overall there are around 50 attributes, but any given event only has around 5-10 attributes.
I would like to use Rails for the backend. Most of the queries will be event type related, meaning that I don't especially care about how two event types relate with each other in any given round, as much as I care about data from a single event type across many rounds. What kind of schema should I be building and what kind of database should I be using?
Given a relational database, I have thought of the following:
Have a flat structure, where there are only a couple of tables, but the events table has as many columns as there are overall event attributes. This would result in a lot of nulls in every row, but it would let me easily access what I need.
Have a table for each event type, among other things. This would let me save space and improve performance, but it seems excessive to have that many tables given that events aren't really separate 'ideas'.
Group related events together, minimizing both the numbers of tables and number of attributes per table. The problem then becomes the grouping. It is far from clear cut, and it could take a long time to properly establish event supertypes. Also, it doesn't completely solve the problem of there being a fair amount of nils.
It was also suggested that I look into using a NoSQL database, such as MongoDB. It seems very applicable in this case, but I've never used a non-relational database before. It seems like I would still need a lot of different models, even though I wouldn't have tables for each one.
Any ideas?
This feels like a great use case for MongoDB and a very awkward fit for a relational database.
The types of queries you would be making against this data are key to the best schema design, but imagine that your documents (in a single collection, similar to option 1 above) look something like this:
{
  "round": 1,
  "eventType": "et1",
  "attributeName": "attributeValue",
  ...
}
You can easily query by round, by eventType, getting back all attributes or just a specified subset, etc.
You don't have to know up front how many attributes you might have, which ones belong with which event types, or even how many event types you have. As you build your prototype/application you will be able to evolve your model as needed.
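For instance, with the Mongoid object mapper a minimal model could look like the following sketch; the class and field names are assumptions, and dynamic attributes let each event type carry its own extra fields without declaring them all:

class GameEvent
  include Mongoid::Document
  include Mongoid::Attributes::Dynamic   # allow undeclared, per-event-type attributes

  field :round, type: Integer
  field :event_type, type: String
end

# One event type across many rounds, fetching only a subset of attributes:
GameEvent.where(event_type: "et1").only(:round, :damage).each do |event|
  # event.damage is readable here even though it was never declared above
end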
There is a very large active community of Rails/MongoDB folks and there's a good chance that you can find a lot of developers you can ask questions and a lot of code you can look at as examples.
I would encourage you to try it out, and see if it feels like a good fit. I was going to add some links to help you get started but there are too many of them to choose from!
Since you might have a question about whether or not to use an object mapper, here's a good answer to that.
There is also a good write-up of dealing with dynamic attributes with Ruby and MongoDB here.

Rails and databases - Store old data in a separate table?

Okay, so I'm putting together a book store with Ruby on Rails. Books are fast-moving and varied, so at any point in time there are only a small number in the store. Books that have been ordered and shipped must be kept, mainly for record-keeping purposes.
Hence, I have a situation where a small section of the data in a table is going to be accessed very frequently, while a much, much larger section of it will rarely be accessed at all. My plan is to move books that have been ordered and shipped to a separate table, so that the table of current books stays small and very quick to access.
Does this approach make sense? Is there a better way of achieving this?
If I am to use this approach, is there a way of sharing a model between tables in Rails?
I agree with Randy's comment about considering the number of books in the database, and whether or not it's really worth it. Only after you try it and come back with real performance numbers should you consider optimizing in this way, I believe.
On the other hand, there's plenty of precedent for having the idea of an "archive" table. From a design standpoint, this is totally fine. It's a question of the tradeoff between complexity and performance. But again, only after you try it and see whether or not the performance is acceptable, will you have a solid reason to choose one approach over another.
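If you do go the archive-table route, one hedged sketch of "sharing a model between tables" in Rails is an abstract base class with two concrete models pointing at different tables; every name here is an assumption:

class BookRecord < ActiveRecord::Base
  self.abstract_class = true
  # shared validations and methods for both tables live here
end

class Book < BookRecord
  self.table_name = "books"             # small, frequently accessed table
end

class ArchivedBook < BookRecord
  self.table_name = "archived_books"    # same columns, rarely read
end

# Moving a shipped book into the archive inside one transaction:
def archive!(book)
  Book.transaction do
    ArchivedBook.create!(book.attributes.except("id"))
    book.destroy!
  end
end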
