Horizontal database scaling in Ruby on Rails

I have a Ruby on Rails app with a PostgreSQL database which has this structure:
class A < ActiveRecord::Base
  has_many :B
end
class B < ActiveRecord::Base
  has_many :C
end
class C < ActiveRecord::Base
  attr_accessible :x, :y, :z
end
There are only a few A's, and they grow slowly (say 5 a month). Each A has thousands of B's, and each B has tens of thousands of C's (so each A has millions of C's).
A's are independent and B's and C's from different A's will never be needed together (i.e. in the same query).
My problem is that even now, when I have only a couple of A's, ActiveRecord queries take pretty long. When the table for C has tens of millions of rows, queries will take forever.
I am thinking about scaling the database horizontally (i.e. a table for A's, plus one table of B's and one table of C's for each A), but I don't know how to do it. It is a kind of sharding, I guess, but I can't figure out how to create DB tables dynamically and use ActiveRecord to access the data when the table depends on which A I'm working with.
Thank you very much.

If you have performance concerns with only a few rows, or even with several million rows, you need to take a step back before trying to over-engineer a solution. The problem you are describing is very easily solved by indexing; there is no advantage to creating additional physical tables, and you'd be introducing incredible complexity.
As @mu is too short already stated: pay attention to your query plans. Use your tools to analyze performance.
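For example, a couple of indexes on the foreign keys plus ActiveRecord's explain (Rails 3.2+) usually tell the whole story. This is only a sketch; the bs/cs table names and a_id/b_id columns are assumed from Rails conventions for the placeholder models in the question:
class AddForeignKeyIndexes < ActiveRecord::Migration
  def change
    # Assumed conventional table/column names for the models in the question
    add_index :bs, :a_id
    add_index :cs, :b_id
  end
end

# In the console, check that the planner actually uses the index:
puts C.where(b_id: 42).explain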
That being said, you can use table partitioning to physically (and transparently) split the storage of data into different tables, which is especially useful for data that grows very quickly but is only useful within a given time box (like a month). You can also do this with an archive flag column, shuttling old or deleted records onto slower storage (say, a standard RAID of spinning rust) while keeping active records on faster storage (like a RAID of SSDs).
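A rough sketch of time-boxed partitioning from a Rails migration, assuming PostgreSQL 10+ declarative partitioning; the measurements table and its columns are hypothetical:
class PartitionMeasurements < ActiveRecord::Migration[5.2]
  def up
    execute <<-SQL
      -- Parent table partitioned by month (hypothetical schema)
      CREATE TABLE measurements (
        device_id  bigint       NOT NULL,
        value      numeric,
        created_at timestamptz  NOT NULL
      ) PARTITION BY RANGE (created_at);

      -- One partition per time box; old partitions can be moved to slower storage
      CREATE TABLE measurements_2024_01 PARTITION OF measurements
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
    SQL
  end

  def down
    execute "DROP TABLE measurements"
  end
end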

So it seems you have a tree-like structure. If there is really no need to pull them out of the database in some kind of cross-referenced manner, then your A's have exactly the properties of a "document"; have a look at MongoDB. A's would be saved with all of their B's and their C's in a single record.
http://www.mongodb.org/
If you are looking for an ORM, check
http://mongoid.org/en/mongoid/index.html
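A rough sketch of what that document structure could look like in Mongoid (model names taken from the question; embedding B's and C's is an assumption, and note MongoDB's 16 MB per-document limit if an A really holds millions of C's):
class A
  include Mongoid::Document
  embeds_many :bs, class_name: "B"
end

class B
  include Mongoid::Document
  embedded_in :a
  embeds_many :cs, class_name: "C"
end

class C
  include Mongoid::Document
  embedded_in :b
  field :x
  field :y
  field :z
end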

Related

Is it a good idea to serialize immutable data from an association?

Let's say we have a collection of products, each with their own specifics e.g. price.
We want to issue invoices that contain said products. Using a direct association from Invoice to Product via has_many is a no-go, since products may change while invoices must be immutable; otherwise a later product change would alter the invoice's price, description, etc.
I first thought of having an intermediate model like InvoiceProduct that would be associated to the Invoice and created from a Product. Each InvoiceProduct would be unique to its parent invoice and immutable. This option would increase the db size significantly as more invoices get issued though, so I think it is not a good option.
I'm now considering adding a serialized field to the invoice model containing all the information about the products associated with it: a hash of the collection of items the invoice contains. This way we can keep them immutable even if a product gets modified in the future.
I'm not sure of possible mid or long term downsides to this approach, though. Would like to hear your thoughts about it.
Also, if there's some more obvious approach that I might have overlooked I'd love to hear about it too.
Cheers
In my experience, the main downside of a serialized field approach vs the InvoiceProducts approach described above is decreased flexibility in terms of how you can use your invoice data going forward.
In our case, we have Orders and OrderItems tables in our database and use this data to generate sales analytics reports as well as customer Invoices.
Querying the OrderItem data to generate the sales reports we need is much faster and easier with this approach than it would be if the same data was stored as serialized data in the db.
No.
Serialized columns have no place in a modern application. They are an overused dirty hack from the days before native JSON/JSONB columns were widespread, and they have only downsides. The only exception to this rule is when you're using application-side encryption.
JSON/JSONB columns can be used for a limited number of tasks where the data defies a fixed schema, or if you're just storing raw JSON responses, but they should not be how you define your schema out of convenience, because you're just shooting yourself in the foot. It's a special tool for special jobs.
The better alternative is to actually use good relational database design and store the price at the time of sale and everything else in a separate table:
class Order < ApplicationRecord
  has_many :line_items
end

# rails g model line_item order:belongs_to product:belongs_to units:decimal unit_price:decimal subtotal:decimal
# The line item model is responsible for each item of an order
# and records the price at the time of order and any discounts applied to that line
class LineItem < ApplicationRecord
  belongs_to :order
  belongs_to :product
end

class Product < ApplicationRecord
  has_many :line_items
end
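One way to freeze pricing at the time of sale is a sketch like the following; the before_validation callback, the copy_pricing name, and a price column on products are assumptions, while the other columns come from the generator above:
class LineItem < ApplicationRecord
  belongs_to :order
  belongs_to :product

  before_validation :copy_pricing, on: :create

  private

  # Copy the product's current price onto the line item so later
  # product edits can never alter an already issued invoice.
  def copy_pricing
    self.unit_price ||= product.price
    self.subtotal = unit_price * units
  end
end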
A serialized column is not immutable in any way; it's actually more prone to denormalization and corruption, as there are no database-side constraints to ensure its correctness.
Tables can actually be made immutable in many databases by using triggers.
Advantages:
No violation of 1NF.
A normalized fixed data schema to work with - constraints ensure the validity of the data on the database level.
Joins are an extremely powerful tool and not as expensive as you might think.
You can actually access and make sense of the data outside of the application if needed.
DECIMAL data types. JSON has only a single number type, which is typically handled as IEEE 754 floating point.
You have an actual model and associations instead of having to deal with raw hashes.
You can query the data with plain, readable queries.
You can compute aggregates on the database level (see the example below) and use tools like materialized views.
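For example, with the schema sketched above, an aggregate like revenue per product is a one-liner:
LineItem.joins(:product).group("products.name").sum(:subtotal)
# => { "Widget" => 199.0, "Gadget" => 80.5 }  (illustrative values)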

is it possible to manage Multiple tables using a single model in rails?

Is there any way to use one single model for different tables if all of the tables have the same fields and attributes? I have a first table with 3 million records, so I'm thinking of making another table like 'tablename_2' to keep querying fast in the future.
I would like to use the existing model, but based on some condition it should decide which table needs to be accessed.
I want to know whether this is possible.
You may want to try something like this:
class Example < ActiveRecord::Base
  def self.within_table(name)
    begin
      previous_table_name = self.table_name
      self.table_name = name if name.present?
      yield if block_given?
    ensure
      self.table_name = previous_table_name
    end
  end

  # ...
end
And call it with this:
# This record will be created in the 'examples' table (default table name)
Example.create! attribute: 'value'
# Last record from the 'examples' table
Example.last

Example.within_table 'examples_copy' do
  # This one will be created in the 'examples_copy' table
  Example.create! attribute: 'value'
  # Last record of the 'examples_copy' table
  Example.last
end
Please bear in mind that this code is probably not thread-safe and should be used carefully. Also, it is not a good idea to split your model's content between different tables. You should use different models, or single table inheritance.
Yes. It's called horizontal database scaling or scaling out whereby a large database is split into smaller sets to handle load. Vertical scaling or scaling up is done by increasing hardware.
There are two options for scaling out:
1. Read replicas
These are usually used for apps with a high read-to-write ratio. Think news websites, where articles written by a few writers are consumed by millions around the world. Essentially you have one master database handling all writes, which are then replicated onto slave databases that handle read operations (a Rails sketch follows after this list).
2. Database sharding
Databases can be split by rows, by tables, by feature, by geography, by client, or by any other measure. No data is shared between databases. You would use this architecture if there are clear boundaries that can be drawn between your data, e.g. you have SaaS customers in different countries and there's no chance that they would need data from another country.
Read more here https://www.wikiwand.com/en/Shard_(database_architecture)
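A minimal sketch of option 1 (read replicas) in Rails 6+; the primary/primary_replica names are assumptions that must match entries in database.yml, and Article stands in for any model:
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true
  # Writes go to the primary, reads can be routed to the replica
  connects_to database: { writing: :primary, reading: :primary_replica }
end

# Route a block of read-only work to the replica
ActiveRecord::Base.connected_to(role: :reading) do
  Article.where(published: true).count
end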
This question has more to do with your database architecture than Rails however. If I were in your shoes I'd focus on denormalization, indexing, query optimization, and vertical scaling before I'd consider scaling it out.
3 million records should not be a problem for PostgreSQL, but if you're growing over 100% month on month it would be prudent to start baking some scalability into your database.

One polymorphic association vs many through/HABTM associations

I am working on a project that currently has tons of HABTM associations. Essentially, everything is related to everything else. I am considering setting up a single intermediate table/model that has two polymorphic fields. This way, if I add another model I can easily connect it to the remaining models. Is this a good idea? If not, why not? If it is, why don't all rails projects have this kind of intermediate table?
I see two other options. I could keep adding intermediate tables or I could add a table that contains one of each type. The former option is kind of a hassle and the latter option does not allow for self joins.
While a polymorphic join table sounds like it would make things easier, I think you will end up creating more headache for yourself than it's worth. Here are a few potential challenges/problems off the top of my head:
You will not be able to use ActiveRecord's has_and_belongs_to_many association or related helpers without a ton of hacking/monkeypatching, which will immediately eclipse the time it would take to set up individual pairwise link tables.
Your join table will have two id columns, let's call them a_id and b_id. For any given pair of models you will have to ensure that the ids always end up in the same column.
Example: If you have two models called User and Role, you would have to ensure for that pair that the user_id is always stored in col a_id and the role_id is always stored in col b_id, otherwise you will not be able to index the table in any kind of meaningful way (and will run the risk of defining the same relationship twice).
If you ever want to use database enforcement of FOREIGN KEY constraints it is unlikely that this polymorphic link table scheme will be supported.
The universal link table will grow to the combined size of n separate link tables. It shouldn't matter much with good indexing, but as your application and data grow this could become a headache and limit some of your options with regard to scaling. Give your DB a break.
Most or least importantly (I can't decide) you will be bucking the norm which means a lot fewer (if any) resources out there to help you when you run into trouble. Basically the Adam Sandler "they're all gonna laugh at you" rationale.
Last thought: Can you eliminate any of the link tables by using has_many :xxx, :through => :xxx relationships?
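For reference, that pattern looks roughly like this (User and Role reused from the example above; Membership is a hypothetical join model):
class User < ActiveRecord::Base
  has_many :memberships
  has_many :roles, :through => :memberships
end

class Membership < ActiveRecord::Base
  belongs_to :user
  belongs_to :role
end

class Role < ActiveRecord::Base
  has_many :memberships
  has_many :users, :through => :memberships
end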
Thinking it all through, you could actually do this, but I wouldn't. Join tables grow fast enough as it is, and I like to keep model relationships simple and easy to alter.
I'm used to working on very large systems/data sets though, so if you're not going to have much in each join table then it's OK. I'd still use separate tables for joins, however, and I really like my polymorphics.
I think it would be cleaner and more flexible if you were to use multiple join tables as opposed to one giant multipurpose join table.

Ruby dynamically tied to table

I've got a huge monster of a database (Okay that's not quite true, but there are over 8 million records in one product table)..
This table is fed by 13 suppliers.
Even with the best indexing I could come up with, searching for the top 10,000 records that are ready for supplier 8 is crazy slow.
What I'd like to do is create a product table for each supplier and parse the table into smaller tables.
Now in C++ or what have you, I'd just switch the table that I'm working with inside the class.
In Ruby, it seems I'll have to create a new class for each table and do a migration.
Also, as I plan to have some in-session tables, I'd be interested in getting Ruby to work with them..
Oh.. 8 million and set to grow to 20 million in the next 6 months.
A question posed was: what's my DB engine? Right now it's SQL, but I'm open to moving my DB to another engine if it will mean I can use temp tables and "partitioned" tables.
One additional point on indexing: indexing on fields that change frequently, like price and quantity, isn't practical; I'd have to re-index the changed items each time I made a change.
By Ruby, I am assuming you mean a class inheriting from ActiveRecord::Base in a Ruby on Rails application. By convention, you are correct in that each class is meant to represent a separate table.
You can easily execute arbitrary SQL using the ActiveRecord::Base.connection.execute method, passing a string that is your SQL query. This bypasses having to create separate Ruby classes to represent transient tables. This is not the "Rails approach"; however, it does address your question of switching the table a class works against.
More information on ActiveRecord database statements can be found here: http://api.rubyonrails.org/classes/ActiveRecord/ConnectionAdapters/DatabaseStatements.html
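For instance, something along these lines (a sketch; products_supplier_8 and the ready column are hypothetical names):
rows = ActiveRecord::Base.connection.execute(
  "SELECT * FROM products_supplier_8 WHERE ready = 1 LIMIT 10000"
)
rows.each { |row| puts row.inspect }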
However, as other people have pointed out, you should be able to optimize your query such that splitting across multiple tables is not necessary. You may want to analyze your SQL query's execution plan using various tools to optimize the execution. If you are using MySQL, check out its query execution plan functionality: http://dev.mysql.com/doc/refman/5.5/en/execution-plan-information.html
By introducing indexes, changing join methods between tables, etc., you should be able to reduce your query execution time.

Rails Caching DB Queries and Best Practices

The DB load on my site is getting really high so it is time for me to cache common queries that are being called 1000s of times an hour where the results are not changing.
So for instance on my city model I do the following:
def self.fetch(id)
  Rails.cache.fetch("city_#{id}") { City.find(id) }
end

def after_save
  Rails.cache.delete("city_#{self.id}")
end

def after_destroy
  Rails.cache.delete("city_#{self.id}")
end
So now when I call City.find(1) the first time I hit the DB, but the next 1000 times I get the result from memory. Great. But most of the calls to city are not City.find(1) but @user.city.name, where Rails does not use the fetch but queries the DB again... which makes sense, but is not exactly what I want it to do.
I can do City.find(@user.city_id) but that is ugly.
So my question to you guys. What are the smart people doing? What is
the right way to do this?
With respect to the caching, a couple of minor points:
It's worth using a slash to separate object type and id, which is the Rails convention. Even better, ActiveRecord models provide the cache_key instance method, which gives a unique identifier of table name and id, "cities/13" etc.
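For example, the class-level fetch could become (a sketch):
def self.fetch(id)
  Rails.cache.fetch("cities/#{id}") { City.find(id) }
end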
One minor correction to your after_save filter: since you have the data on hand, you might as well write it back to the cache instead of deleting it. That saves you a single trip to the database ;)
def after_save
  Rails.cache.write(cache_key, self)
end
As to the root of the question, if you're continuously pulling @user.city.name, there are two real choices:
Denormalize the user's city name onto the user row, i.e. @user.city_name (keep the city_id foreign key). This value should be written at save time.
-or-
Implement your User.fetch method to eager load the city. Only do this if the contents of the city row never change (i.e. name etc.), otherwise you can potentially open up a can of worms with respect to cache invalidation.
Personal opinion:
Implement basic id based fetch methods (or use a plugin) to integrate with memcached, and denormalize the city name to the user's row.
I'm personally not a huge fan of cached model style plugins, I've never seen one that's saved a significant amount of development time that I haven't grown out of in a hurry.
If you're getting way too many database queries it's definitely worth checking out eager loading (through :include) if you haven't already. That should be the first step for reducing the quantity of database queries.
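For example, with the old-style finder syntax from that era (User and City from the question):
# One query for the users plus one for their cities, instead of one per user
users = User.find(:all, :include => :city)
users.each { |u| puts u.city.name }  # no N+1 queries here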
If you need to speed up SQL queries on data that doesn't change much over time, you can use materialized views.
A matview stores the results of a query in a table-like structure of its own, from which the data can be queried. It is not possible to add or delete rows, but the rest of the time it behaves just like an actual table. Queries are faster, and the matview itself can be indexed.
At the time of this writing, matviews are natively available in Oracle DB, PostgreSQL, Sybase, IBM DB2, and Microsoft SQL Server. MySQL doesn't provide native support for matviews, unfortunately, but there are open source alternatives to it.
Here are some good articles on how to use matviews in Rails:
sitepoint.com/speed-up-with-materialized-views-on-postgresql-and-rails
hashrocket.com/materialized-view-strategies-using-postgresql
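A rough sketch of wiring a matview into Rails on PostgreSQL (the view name, its query, and the refresh helper are assumptions):
class CreateCitySummaries < ActiveRecord::Migration[5.2]
  def up
    execute <<-SQL
      CREATE MATERIALIZED VIEW city_summaries AS
        SELECT city_id, COUNT(*) AS users_count
        FROM users
        GROUP BY city_id;
    SQL
  end

  def down
    execute "DROP MATERIALIZED VIEW city_summaries"
  end
end

# Read-only model backed by the view
class CitySummary < ActiveRecord::Base
  self.table_name = "city_summaries"

  def readonly?
    true
  end

  def self.refresh
    connection.execute("REFRESH MATERIALIZED VIEW city_summaries")
  end
end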
I would go ahead and take a look at Memoization, which is now in Rails 2.2.
"Memoization is a pattern of
initializing a method once and then
stashing its value away for repeat
use."
There was a great Railscast episode on it recently that should get you up and running nicely.
Quick code sample from the Railscast:
class Product < ActiveRecord::Base
extend ActiveSupport::Memoizable
belongs_to :category
def filesize(num = 1)
# some expensive operation
sleep 2
12345789 * num
end
memoize :filesize
end
More on Memoization
Check out cached_model
