Rails: N+1 problem... statistical data needed

After realizing that an application suffers from the N+1 problem because of the ORM, I would like to know more about the improvements that can be made, and to see timing statistics comparing the application before the improvements (with the N+1 problem) and after. So what is the time difference before and after such improvements? Can anyone give me a link to a paper that analyzes the problem and reports statistics on it?

You really don't need statistical data for this, just math. N+1 (or, more accurately, 1+N) stands for:
1 query to get a record, and
N queries to get all records associated with it
The bigger N is, the bigger the performance hit becomes, particularly if your queries are sent across the network to a remote database. That's why N+1 problems keep cropping up in production: they're usually insignificant in development, where there is little data in the DB, but as your data grows in production to thousands or millions of rows, those queries will slowly choke your server.
You can instead use:
a single query (via a join), or
2 queries (one for the primary record, one for all associated records).
The first query will return more data than strictly needed (the primary record's data is duplicated in each row), but that's usually a good tradeoff to make. The second query might get a bit cumbersome for large data sets, since all the foreign keys are passed in as a single list, but again, it's usually a tradeoff worth making.
The actual numbers depend on too many variables for statistics to be meaningful: number of records, DB version, hardware, and so on.
Since you tagged this question with rails: ActiveRecord does a good job of avoiding N+1 queries if you know how to use it. Check out the explanation of eager loading in the Rails guides.
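A minimal sketch of the difference, using hypothetical Post and Comment models:
# N+1: one query for the posts, then one extra query per post for its comments
Post.limit(100).each do |post|
  post.comments.each { |c| puts c.body }   # SELECT * FROM comments WHERE post_id = ? -- runs 100 times
end

# Eager loading: two queries total, no matter how many posts there are
Post.includes(:comments).limit(100).each do |post|
  post.comments.each { |c| puts c.body }   # already in memory, no extra query
end
The includes call makes ActiveRecord issue the second query as SELECT * FROM comments WHERE post_id IN (...), which is the "2 queries" strategy described above.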

The time difference would depend on how many additional selects were performed because of the N+1 problem. Here's a quote from an answer to another Stack Overflow question about N+1:
SELECT * FROM Cars;
/* for each car */
SELECT * FROM Wheel WHERE CarId = ?
In other words, you have one select for the Cars, and then N additional selects, where N is the total number of cars.
In the example above, the time difference would depend on how many car records were in the database and how long it took to query the 'Wheel' table each time the code/ORM fetched a new record. If you only had 2 car records, the difference after removing the N+1 problem would be negligible, but if you had a million car records it would have a significant effect.

Related

How is query performance affected by the number of records in rails?

Is the query speed affected by the number of rows?
Let's say we have an ActiveRecord model "Post", and many of those records have status=false. If the records with status=false are not going to be used, but still have to be kept, would it be useful to create a different model, say "OffPost", to store all the posts with status false, so that queries against "Post" are faster? Or would a scope fetching all Posts with status equal to true be just as efficient?
If you frequently query by status, the most important thing would be to add an index to the status column first.
https://en.wikipedia.org/wiki/Database_index
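As a rough sketch (model and column names taken from the question; the migration class name is just an example):
class AddIndexToPostsOnStatus < ActiveRecord::Migration
  def change
    add_index :posts, :status
  end
end

# with the index in place, a simple scope is usually all you need
class Post < ActiveRecord::Base
  scope :active, lambda { where(:status => true) }
end

Post.active   # the planner can now use the index on status instead of scanning the whole table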
The speed is indeed affected by the number of rows, and splitting tables or the whole database (e.g. by country, city, or user ids) is one strategy to keep the number of records low. This is called sharding (https://en.wikipedia.org/wiki/Shard_(database_architecture)). However, introducing this kind of logic comes at the big price of a more complex system that is harder to maintain and understand (e.g. queries get more difficult). It is only worth it if you have billions of records. If you only have a few (hundred) million records, choosing good indexes on the table is the best approach.
If the records with status=false are not used in your application but necessary e.g. for data analysis another approach could be to move them to a data warehouse from time to time and delete from your database to keep the number of rows small. But again, you introduce more complexity with a data warehouse.

Effects of Rails' default_scope on performance

Can default_scope when used to not order records by ID significantly slow down a Rails application?
For example, I have a Rails (currently 3.1) app using PostgreSQL where nearly every Model has a default_scope ordering records by their name:
default_scope order('users.name')
Right now, because the default_scopes order records by name rather than by ID, I am worried I might be incurring a significant performance penalty when ordinary queries are run. For example with:
User.find(5563401)
or
User.where('created_at = ?', 2.weeks.ago)
or
User.some_scope_sorted_best_by_id.all
In the above examples, what performance penalty might I incur by having a default_scope by name on my Model? Should I be concerned about this default_scope affecting application performance?
Your question is missing the point. The default scope itself is just a few microseconds of Ruby execution to cause an order by clause to be added to every SQL statement sent to PostgreSQL.
So your question is really asking about the performance difference between unordered queries and ordered ones.
The PostgreSQL documentation is pretty explicit: ordered queries on unindexed fields are much slower than unordered ones because (no surprise) PostgreSQL must sort the results before returning them, first creating a temporary table or index to hold the result. This could easily be a factor of 4 in query time, possibly much more.
If you introduce an index just to achieve quick ordering, you are still paying to maintain the index on every insert and update. And unless it's the primary index, sorted access still involves random seeks, which may actually be slower than creating a temporary table. This also is discussed in the Postgres docs.
In a nutshell, NEVER add an order clause to an SQL query that doesn't need it (unless you enjoy waiting for your database).
NB: I doubt a simple find() will have an ORDER BY attached, because it must return exactly one result. You can verify this very quickly by starting a Rails console, issuing a find, and watching the generated SQL scroll by. However, the where and all queries will definitely be ordered, and consequently slower than they need to be.
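If you do keep the default_scope, you can still strip the ordering from individual queries that don't need it; a small sketch using standard ActiveRecord relation methods:
# drop just the ORDER BY added by the default_scope
User.where('created_at = ?', 2.weeks.ago).except(:order)

# or replace it with a cheaper ordering
User.where('created_at = ?', 2.weeks.ago).reorder('users.id')

# or ignore the default_scope entirely for a block of code
User.unscoped do
  User.find(5563401)
end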

How to efficiently fetch n most recent rows with GROUP BY in sqlite?

I have a table of event results, and I need to fetch the most recent n events per player for a given list of players.
This is on iOS, so it needs to be fast. I've looked at a lot of top-n-per-group solutions that use subqueries or joins, but these run slowly for my 100k-row dataset even on a MacBook Pro. So far my dumb solution, since I will only run this with a maximum of 6 players, is to do 6 separate queries. It isn't terribly slow, but there has to be a better way, right? Here's the gist of what I'm doing now:
results_by_pid = {}
player_ids = [1, 2, 3, 4, 5, 6]
n_results = 6
player_ids.each do |pid|
  results_by_pid[pid] = exec_sql("SELECT *
                                  FROM results
                                  WHERE player_id = #{pid}
                                  ORDER BY event_date DESC
                                  LIMIT #{n_results}")
end
And then I go on my merry way. But how can I turn this into a single fast query?
There is no better way.
SQL window functions, which might help, are not implemented in SQLite.
SQLite is designed as an embedded database where most of the logic stays in the application.
In contrast to client/server databases, where round trips over the network should be minimized, there is no performance disadvantage to mixing SQL commands and program logic.
A less dumb solution requires you to do some SELECT player_id FROM somewhere beforehand, which should be no trouble.
To make the individual queries efficient, ensure you have one index on the two columns player_id and event_date.
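For example, using the same hypothetical exec_sql helper as in the question:
# one composite index lets each per-player query walk the index in date order
# and stop after the first n rows
exec_sql("CREATE INDEX IF NOT EXISTS idx_results_player_date
          ON results (player_id, event_date DESC)")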
This won't be much of an answer, but here goes...
I have found that making things really quick can involve ideas from the nature of the data and schema themselves. For example, searching an ordered list is faster than searching an unordered list, but you have to pay a cost up front - both in design and execution.
So ask yourself if there are any natural partitions on your data that may reduce the number of records SQLite must search. You might ask whether the latest n events fall within a particular time period. Will they all be from the last seven days? The last month? If so then you can construct the query to rule out whole chunks of data before performing more complex searches.
Also, if you just can't get the thing to work quickly, you can consider UX trickery! So many engineers don't get clever with their UX. Will your query be run as the result of a view controller push? Then kick it off in a background thread from the PREVIOUS view controller, and let it work while iOS animates. How long does a push animation take? About 0.2 seconds? At what point does your user indicate to the app (via some UX control) which player ids are going to be queried? As soon as he touches that button or table cell, you can start prefetching data. Even if the total work stays the same, a useful chunk of it can be done before the user ever sees the screen.
Just some thoughts while I avoid doing my own hard work.
More thoughts
How about a separate table that contains the ids of the previous n inserts? You could add a trigger to delete old ids if the size of the table grows above n. Say:
CREATE TABLE IF NOT EXISTS recent_results
  (result_id INTEGER PRIMARY KEY, event_date DATE);
-- SQLite has no dedicated DATE type; dates are stored as TEXT/NUMERIC,
-- which still sorts correctly for ISO-8601 date strings

CREATE TRIGGER IF NOT EXISTS optimizer
AFTER INSERT ON recent_results
WHEN (SELECT COUNT(*) FROM recent_results) > N   -- N must be a literal number in a real trigger
BEGIN
  DELETE FROM recent_results
  WHERE result_id = (SELECT result_id
                     FROM recent_results
                     ORDER BY event_date ASC
                     LIMIT 1);
END;
-- something along these lines; untested, but the idea is to evict the oldest row
Or you could just create a temporary memory-based table that you populate at app load and keep up to date as you perform transactions during app execution. That way you only pay the steep price once!
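A rough sketch of that idea, again via the hypothetical exec_sql helper (note that SQLite TEMP tables live in a separate temporary database, which may or may not be purely in memory depending on configuration):
# built once at app load; the app is then responsible for keeping it current
# (the LIMIT of 100 is an arbitrary cap for the sketch)
exec_sql("CREATE TEMP TABLE IF NOT EXISTS recent_results_cache AS
          SELECT * FROM results ORDER BY event_date DESC LIMIT 100")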
Just a few more thoughts for you. Be creative, and remember that you can usually define what you want as a data structure as well as an algorithm. Good luck!

Handling lots of report / financial data in rails 3, without slowing down?

I'm trying to figure out how to ask this - so I'll update the question as it goes to clear things up if needed.
I have a virtual stock exchange game site I've been building for fun. People make tons of trades, and each trade is its own record in a table.
When showing the portfolio page, I have to calculate everything on the fly, on the table of data - i.e. How many shares the user has, total gains, losses etc.
Things have really started slowing down when I try to segment the trades by company and by day.
I don't really have any code to show to demonstrate this - but it just feels like I'm not approaching this correctly.
UPDATE: This code in particular is very slow
# Returning an array of values for a total portfolio over time
def portfolio_value_over_time
  portfolio_value_array = []
  days = self.from_the_first_funding_date
  companies = self.list_of_companies
  days.each_with_index do |day, index|
    # Starting value for the day
    days_value = 0
    companies.each do |company|
      holdings = self.holdings_by_day_and_company(day, company)
      price = Company.find_by_id(company).day_price(day)
      days_value = days_value + (holdings * price).to_i
    end
    # Adding all companies together for that day
    portfolio_value_array[index] = days_value
  end
  portfolio_value_array
end
The page load time can be up to 20+ seconds - totally insane. And I've cached a lot of the requests in Memcache.
Should I not be generating reports / charts on the live data like this? Should I be running a cron task and storing them somewhere? What's the best practice for handling this volume of data in Rails?
The Problem
Of course it's slow. You're presumably looking up large volumes of data from each table, and performing multiple lookups on multiple tables on every iteration through your loop.
One Solution (Among Many)
You need to normalize your data, create a few new models to store expensive calculated values, and push more of the calculations onto the database or into tables.
The fact that you're doing a nested loop over high-volume data is a red flag. You're making many calls to the database, when ideally you should be making as few sequential requests as possible.
I have no idea how you need to normalize your data or optimize your queries, but you can start by looking at the output of explain. In general, though, you probably will want to eliminate any full table scans and return data in larger chunks, rather than a record at a time.
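One hedged way to get at that from the Rails console (the Trade model and user_id column are assumptions based on the question):
user = User.first
trades = Trade.where(:user_id => user.id)
puts trades.to_sql                     # the exact SQL ActiveRecord will send
ActiveRecord::Base.connection.execute("EXPLAIN #{trades.to_sql}").each do |row|
  p row                                # the database's query plan, row by row
end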
This really seems more like a modeling or query problem than a Rails problem, per se. Hope this gets you pointed in the right direction!
You should precompute and store all this data on another table. An example table might look like this:
Table: PortfolioValues
Column: user_id
Column: day
Column: company_id
Column: value
Index: user_id
Then you can easily load all the user's portfolio data with a single query, for example:
current_user.portfolio_values
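A hedged sketch of what that table and its use might look like (the column types are assumptions):
class CreatePortfolioValues < ActiveRecord::Migration
  def change
    create_table :portfolio_values do |t|
      t.integer :user_id
      t.date    :day
      t.integer :company_id
      t.decimal :value, :precision => 12, :scale => 2
    end
    add_index :portfolio_values, :user_id
  end
end

class User < ActiveRecord::Base
  has_many :portfolio_values
end

# recomputed by a background/cron job after each trading day, so the chart
# becomes a single indexed query:
current_user.portfolio_values.where('day >= ?', 30.days.ago.to_date).order(:day)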
Since you're using memcached anyway, use it to cache some of those queries. For example:
Company.find_by_id(company).day_price(day)
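For example, a hedged sketch wrapping that lookup in Rails.cache (the key format and expiry are just examples):
price = Rails.cache.fetch("company/#{company}/price/#{day}", :expires_in => 1.day) do
  Company.find_by_id(company).day_price(day)
end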

Does Ruby on Rails "has_many" array provide data on a "need to know" basis?

On Ruby on Rails, say the Actor model object is Tom Hanks, and its "has_many" fans association holds 20,000 Fan objects; then
actor.fans
gives an Array with 20,000 elements. Presumably the elements are not pre-populated with values? Otherwise, loading an Actor from the DB could be extremely time consuming.
So it is on a "need to know" basis?
So does it pull data when I access actor.fans[500], and again when I access actor.fans[0]? If it jumps from record to record, it can't optimize performance by doing a sequential read, which can be faster on the hard disk because those records could sit in nearby sectors. For example, if the program touches 2 random elements, it is faster to read just those 2 records; but if it touches all elements in random order, it may be faster to read all the records sequentially and then process them in that random order. How will RoR know whether I am touching only a few random elements or all of them?
Why would you want to fetch 20,000 records if you only use 2 of them? Fetch only those two from the DB. If you want to list the fans, you will probably use pagination, i.e. use limit and offset in your query, or some pagination gem like will_paginate.
I see no logical explanation for why you would go the way you're trying to. Explain a real situation so we can help you.
However, there is one thing you need to know when loading many associated objects from the DB: use :include, like
Actor.all(:include => :fans)
this will eager-load all the fans, so there will only be 2 queries instead of N+1, where N is the number of actors
Look at the SQL which is spewed out by the server in development mode, and that will tell you how many fan records are being loaded. In this case actor.fans will indeed cause them all to be loaded, which is probably not what you want.
You have several options:
Use a paginator as suggested by Tadas;
Set up another association with the fans table that pulls in just the ones you're interested in (sketched below). This can be done with a :conditions option on the has_many declaration, e.g.
has_many :fans, :conditions => "country_of_residence = 'UK'"
Specify the full SQL to narrow down the rows returned, via the :finder_sql option;
Specify the :limit option, which will, well, limit the number of rows returned.
All depends on what you want to do.
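A rough sketch of options 2 and 4 together, in Rails 3-era syntax (the column names are assumptions):
class Actor < ActiveRecord::Base
  has_many :fans
  # option 2: a narrower association via :conditions
  has_many :uk_fans, :class_name => 'Fan',
           :conditions => "country_of_residence = 'UK'"
  # option 4: cap how many rows the association will ever pull in
  has_many :recent_fans, :class_name => 'Fan',
           :order => 'created_at DESC', :limit => 100
end

actor.uk_fans       # only the UK fans are loaded
actor.recent_fans   # at most 100 rows, newest first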
