I have an app based on Ruby on Rails 4.0 with two models: Store and Product. There are about 1.5 million products in the system, which makes things quite slow if I do not use indexes properly.
Some basic info
Store has_many Products
Store.affiliate_type_id is used, where 1 = affiliated and 2 = not affiliated
Products have attributes like "category_connection_id" (integer) and "is_available" (boolean)
In FeededProduct model:
scope :affiliated, -> { joins(:store).where("stores.affiliate_type_id = 1") }
This query takes about 500 ms, which essentially stalls the website:
FeededProduct.where(:is_available => true).affiliated.where(:category_connection_id => @feeded_product.category_connection_id)
The corresponding PostgreSQL query:
FeededProduct Load (481.4ms) SELECT "feeded_products".* FROM "feeded_products" INNER JOIN "stores" ON "stores"."id" = "feeded_products"."store_id" WHERE "feeded_products"."is_available" = 't' AND "feeded_products"."category_connection_id" = 345 AND (stores.affiliate_type_id = 1)
Update: PostgreSQL EXPLAIN:
QUERY PLAN
-------------------------------------------------------------------------------------------------
Hash Join (cost=477.63..49176.17 rows=21240 width=1084)
Hash Cond: (feeded_products.store_id = stores.id)
-> Bitmap Heap Scan on feeded_products (cost=377.17..48983.06 rows=38580 width=1084)
Recheck Cond: (category_connection_id = 5923)
Filter: is_available
-> Bitmap Index Scan on cc_w_store_index_on_fp (cost=0.00..375.25 rows=38580 width=0)
Index Cond: ((category_connection_id = 5923) AND (is_available = true))
-> Hash (cost=98.87..98.87 rows=452 width=4)
-> Seq Scan on stores (cost=0.00..98.87 rows=452 width=4)
Filter: (affiliate_type_id = 1)
(10 rows)
Question: How can I create an index that will take the inner join into consideration and make this faster?
That depends on the join algorithm that PostgreSQL chooses. Use EXPLAIN to see how PostgreSQL processes the query.
These are the answers depending on the join algorithm:
nested loop join
Here you should create an index on the join condition columns of the inner relation (the bottom table in the EXPLAIN output). You may further improve things by adding columns that appear in the WHERE clause and significantly improve selectivity (i.e., significantly reduce the number of rows that have to be filtered out after the index scan).
For the outer relation, an index on the columns that appear in the WHERE clause will speed up the query if these conditions filter out most of the rows in the table.
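For example, if the planner had picked a nested loop with feeded_products as the inner relation, an index along these lines might help (a sketch only; the index name is made up and the column order should be verified with EXPLAIN):

-- join column first, then the selective WHERE columns appended
CREATE INDEX feeded_products_store_cc_avail_idx
    ON feeded_products (store_id, category_connection_id, is_available);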
hash join
Here it helps to have indexes on both tables on those columns in the WHERE clause where the conditions filter out most of the rows in the table.
merge join
Here you need indexes on the columns in the merge condition to allow PostgreSQL to use an index scan for sorting. Additionally, you can append columns that appear in the WHERE clause.
Always check with EXPLAIN whether your indexes actually get used. If not, odds are that either they cannot be used or that using them would make the query slower than a sequential scan, e.g. because they do not filter out enough rows.
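Applied to the plan in the question, which uses a hash join: the existing cc_w_store_index_on_fp index already covers the category_connection_id and is_available conditions (both appear in its Index Cond), so the remaining candidate is an index on the stores side. As a sketch with a made-up name, and only worthwhile if affiliate_type_id = 1 filters out most of the rows in stores:

CREATE INDEX stores_affiliate_type_id_idx ON stores (affiliate_type_id);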
There are three models that matter here: Objective, Student, and Seminar. All are associated with has_and_belongs_to_many.
There is an ObjectiveStudent join model that includes columns "ready" and "points_all_time". There is an ObjectiveSeminar join model that includes column "priority".
I need to collect all of the objectives that are associated with a given student and also with a given seminar.
They need to also be marked with a "priority" above zero in the seminar. So I think I need this line:
obj_sems = ObjectiveSeminar.where(:seminar => given_seminar).where("priority > ?", 0)
Finally, they need to also be objectives where the student is ready, but has not scored above 7. So I think I need this line:
obj_studs = ObjectiveStudent.where(:user => given_student, :ready => true).where("points_all_time <= ?", 7)
Is there a way to gather all the objectives whose join-table records appear in both of the above queries? Note that neither of the queries returns objectives; they return objective_seminars and objective_students, respectively. My end goal is to collect the objectives that meet all of the above criteria.
Or am I approaching this all wrong?
Bonus question: I would also love to sort the objectives by their priority in the given seminar. But I'm afraid that would add too much to the database load. What are your thoughts on this?
Thank you in advance for any insight.
In order to get Objectives you'll need to start your query from that model.
In order to apply AND conditions on the associated tables, you'll need inner joins with those tables.
Finally you'll need a distinct operator to only fetch each objective once.
The extended version of what (I think) you need is:
Objective.joins(objective_seminars: :seminar, objective_students: :student).
  where(seminars: seminar_search_params, students: student_search_params).
  where('objective_seminars.priority > 0').
  where('objective_students.ready = TRUE AND objective_students.points_all_time <= 7').
  order('objective_seminars.priority ASC').
  distinct
Now for the database load it all depends on your indexes and the size of your tables.
The above query will translate to the following SQL (or something similar).
SELECT DISTINCT objectives.* FROM objectives
INNER JOIN objective_students ON objective_students.objective_id = objectives.id
INNER JOIN students ON students.id = objective_students.student_id
INNER JOIN objective_seminars ON objective_seminars.objective_id = objectives.id
INNER JOIN seminars ON seminars.id = objective_seminars.seminar_id
WHERE seminars_query AND
students_query AND
objective_seminars.priority > 0 AND
objective_students.ready = TRUE AND objective_students.points_all_time <= 7
ORDER BY objective_seminars.priority ASC
So you'll need to add or extend your indexes so that queries on all 5 tables have an index helping out. The actual index implementation is up to you and depends on your application's specifics (read/write load, table sizes, cardinality, etc.).
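As a very rough sketch (index names made up, column choices assumed from standard Rails conventions and the query above), composite indexes on the two join tables that cover both the join keys and the filtered columns are a reasonable starting point:

CREATE INDEX objective_seminars_seminar_objective_priority_idx
    ON objective_seminars (seminar_id, objective_id, priority);

CREATE INDEX objective_students_student_objective_ready_points_idx
    ON objective_students (student_id, objective_id, ready, points_all_time);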
I want to limit the ActiveRecord objects returned to 20, then perform a where() that returns a subset of those limited objects; I already know that only 10 of them will fulfil the second column's criteria.
e.g. of ideal behaviour:
o = Object.limit(20)
o.where(column: criteria).count
=> 10
But instead, ActiveRecord still looks for 20 objects that fulfil the where() criteria, and it looks outside of the original 20 objects that limit() would have returned on its own.
How can I get the desired response?
One way to decrease the search space is to use a nested query: search only the first N records rather than all records that match a specific condition. In SQL this would be done like this:
select * from (select * from table order by ORDERING_FIELD limit 20) as limited where column = value;
The query above only searches for the condition within 20 rows of the table. Notice that I have added an ORDERING_FIELD; this is required because without a stable ordering, each run of the query could give you a different 20 rows.
To do something similar in Rails, you could try the following:
Object.where(id: Object.order(:id).limit(20).select(:id)).where(column: criteria)
This will execute a query similar to the following (shown here in SQL Server syntax):
SELECT [objects].* FROM [objects] WHERE [objects].[id] IN (SELECT TOP (20) [objects].[id] FROM [objects] ORDER BY [objects].id ASC) AND [objects].[column] = criteria
I have three models: Catalog, Product and Value.
The Value table has a characteristic_id column, and I'd like to get the list of distinct characteristic_ids for a set of values.
The relationships are:
a catalog has many products
a product has many values
Here is the query I came up with:
Value.joins(:product).select(:characteristic_id).distinct.where(products: {catalog_id: catalog.id}).pluck(:characteristic_id)
=> [441, 2582, 3133]
which returns the right result, but it is extremely slow on a large catalog with a million products (about 50 seconds).
I can't find a more efficient way to do this.
Here is an EXPLAIN of the query:
=> EXPLAIN for: SELECT DISTINCT "values"."characteristic_id" FROM "values" INNER JOIN "products" ON "products"."id" = "values"."product_id" WHERE "products"."catalog_id" = $1 [["catalog_id", 1767]]
QUERY PLAN
----------------------------------------------------------------------------------------------------------------------
HashAggregate (cost=1515106.82..1515109.15 rows=233 width=4)
Group Key: "values".characteristic_id
-> Hash Join (cost=124703.76..1492245.65 rows=9144469 width=4)
Hash Cond: ("values".product_id = products.id)
-> Seq Scan on "values" (cost=0.00..1002863.07 rows=34695107 width=8)
-> Hash (cost=114002.20..114002.20 rows=652285 width=4)
-> Bitmap Heap Scan on products (cost=12311.64..114002.20 rows=652285 width=4)
Recheck Cond: (catalog_id = 1767)
-> Bitmap Index Scan on index_products_on_catalog_id (cost=0.00..12148.57 rows=652285 width=0)
Index Cond: (catalog_id = 1767)
(10 rows)
Any idea on how to run this query faster?
Make sure you have indexes on both foreign keys:
index "values"."product_id"
index "products"."catalog_id"
Try to add an index on values.characteristic_id.
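In SQL those suggestions look roughly like the statements below (index names are just examples, and the EXPLAIN above shows that index_products_on_catalog_id already exists):

CREATE INDEX index_values_on_product_id ON "values" (product_id);
CREATE INDEX index_values_on_characteristic_id ON "values" (characteristic_id);

-- possibly also a composite index, so the join and the DISTINCT/GROUP BY can be
-- answered from the index alone; verify with EXPLAIN
CREATE INDEX index_values_on_product_id_and_characteristic_id
    ON "values" (product_id, characteristic_id);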
Oftentimes GROUP BY may be faster than DISTINCT:
Value.joins(:product).where(products: {catalog_id: catalog.id}).select(:characteristic_id).group(:characteristic_id).pluck(:characteristic_id)
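That relation should generate SQL roughly like this (exact quoting and binds depend on the Rails version):

SELECT "values"."characteristic_id"
FROM "values"
INNER JOIN "products" ON "products"."id" = "values"."product_id"
WHERE "products"."catalog_id" = $1
GROUP BY "values"."characteristic_id"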
I have the following validation:
validates :username, uniqueness: { case_sensitive: false }
This causes the following query to run painfully slowly:
5,510 ms
SELECT ? AS one FROM "users" WHERE (LOWER("users"."username") = LOWER(?) AND "users"."id" != ?) LIMIT ?
Explain plan
Limit (cost=0.03..4.03 rows=1 width=0)
  -> Index Scan using idx_users_lower_username on users (cost=0.03..4.03 rows=1 width=0)
       Index Cond: ?
       Filter: ?
The index was created in my structure.sql using CREATE INDEX idx_users_lower_username ON users USING btree (lower((username)::text)); See my question How to create index on LOWER("users"."username") in Rails (using postgres) for more on this.
This is using the index I set and still takes over 5 seconds? What's wrong here?
There are several different, interrelated things going on here. Exactly how you carry out the changes depends on how you manage changes to your database structure. The most common way is to use Rails migrations, but your linked question suggests you're not doing that. So I'll speak mostly in SQL, and you can adapt that to your method.
Use a sargable WHERE clause
Your WHERE clause isn't sargable. That means it's written in a way that prevents the DBMS from using an index. To create an index PostgreSQL can use here:
create index on "users" (lower("username") varchar_pattern_ops);
Now queries on lowercased usernames can use that index.
explain analyze
select *
from users
where lower(username) = lower('9LCDgRHk7kIXehk6LESDqHBJCt9wmA');
It might appear as if PostgreSQL must lowercase every username in the table, but its query planner is smart enough to see that the expression lower(username) is itself indexed. PostgreSQL uses an index scan.
"Index Scan using users_lower_idx on users (cost=0.43..8.45 rows=1 width=35) (actual time=0.034..0.035 rows=1 loops=1)"
" Index Cond: (lower((username)::text) = 'b0sa9malg7yt1shssajrynqhiddm5d'::text)"
"Total runtime: 0.058 ms"
This table has a million rows of random-ish data; the query returns very, very quickly. It's just about equally fast with the additional condition on "id", but the LIMIT clause slows it down a lot. "Slows it down a lot" doesn't mean it's slow; it still returns in less than 0.1 ms.
Also, here the varchar_pattern_ops lets queries that use the LIKE operator use the index.
explain analyze
select *
from users
where lower(username) like 'b%'
"Bitmap Heap Scan on users (cost=1075.12..9875.78 rows=30303 width=35) (actual time=10.217..91.030 rows=31785 loops=1)"
" Filter: (lower((username)::text) ~~ 'b%'::text)"
" -> Bitmap Index Scan on users_lower_idx (cost=0.00..1067.54 rows=31111 width=0) (actual time=8.648..8.648 rows=31785 loops=1)"
" Index Cond: ((lower((username)::text) ~>=~ 'b'::text) AND (lower((username)::text) ~<~ 'c'::text))"
"Total runtime: 93.541 ms"
Only 94 ms to select and return 30k rows from a million.
Queries on very small tables might use a sequential scan even though there's a usable index. I wouldn't worry about that if I were you.
Enforce uniqueness in the database
If you're expecting any bursts of traffic, you should enforce uniqueness in the database. I do this all the time, regardless of any expectations (guesses) about traffic.
The RailsGuides Active Record Validations includes this slightly misleading or confusing paragraph about the "uniqueness" helper.
This helper validates that the attribute's value is unique right
before the object gets saved. It does not create a uniqueness
constraint in the database, so it may happen that two different
database connections create two records with the same value for a
column that you intend to be unique. To avoid that, you must create a
unique index on both columns in your database. See the MySQL manual
for more details about multiple column indexes.
It clearly says that, in fact, it doesn't guarantee uniqueness. The misleading part is about creating a unique index on "both columns". If you want "username" to be unique, you need to declare a unique constraint on the column "username".
alter table "users"
add constraint constraint_name unique (username);
Case-sensitivity
In SQL databases, case-sensitivity is determined by collation. Collation is part of the SQL standards.
In PostgreSQL, you can set collation at the database level, at the column level, at the index level, and at the query level. Values come from the locales the operating system exposes at the time you create a new database cluster using initdb.
On Linux systems, you probably have no case-insensitive collations. That's one reason we have to jump through rather more hoops than people who target SQL Server and Oracle.
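Putting the last two points together: since this application wants case-insensitive uniqueness, one way to enforce it in the database is a unique index on the lowercased expression rather than a plain constraint on the raw column (a sketch; the index name is just an example):

CREATE UNIQUE INDEX users_lower_username_unique_idx ON users (lower(username));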
Try running the query in psql using EXPLAIN ANALYZE, so you can make sure Postgres itself is running fine, because the index and the query appear to be right.
If it is fast in psql, then there is a problem in your Rails code.
This query against a 3k-record table gave this result (on my local dev machine):
app=# explain analyze SELECT id AS one FROM "users" WHERE (LOWER(email) = LOWER('marcus#marcus.marcus') AND "users"."id" != 2000);
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------
Bitmap Heap Scan on users (cost=4.43..58.06 rows=19 width=4) (actual time=0.101..0.101 rows=0 loops=1)
Recheck Cond: (lower((email)::text) = 'marcus#marcus.marcus'::text)
Filter: (id <> 2000)
-> Bitmap Index Scan on users_lower_idx (cost=0.00..4.43 rows=19 width=0) (actual time=0.097..0.097 rows=0 loops=1)
Index Cond: (lower((email)::text) = 'marcus#marcus.marcus'::text)
Total runtime: 0.144 ms
(6 rows)
Suppose you want to find the last record entered into the database (highest ID) matching a string: Model.where(:name => 'Joe'). There are 100,000+ records. There are many matches (say thousands).
What is the most efficient way to do this? Does PostgreSQL need to find all the records, or can it just find the last one? Is this a particularly slow query?
Working in Rails 3.0.7, Ruby 1.9.2 and PostgreSQL 8.3.
The important part here is to have a matching index. You can try this small test setup:
Create schema x for testing:
-- DROP SCHEMA x CASCADE; -- to wipe it all for a retest or when done.
CREATE SCHEMA x;
CREATE TABLE x.tbl(id serial, name text);
Insert 10000 random rows:
INSERT INTO x.tbl(name) SELECT 'x' || generate_series(1,10000);
Insert another 10000 rows with repeating names:
INSERT INTO x.tbl(name) SELECT 'y' || generate_series(1,10000)%20;
Delete a random 10% to make it more realistic:
DELETE FROM x.tbl WHERE random() < 0.1;
ANALYZE x.tbl;
Query can look like this:
SELECT *
FROM x.tbl
WHERE name = 'y17'
ORDER BY id DESC
LIMIT 1;
--> Total runtime: 5.535 ms
CREATE INDEX tbl_name_idx on x.tbl(name);
--> Total runtime: 1.228 ms
DROP INDEX x.tbl_name_idx;
CREATE INDEX tbl_name_id_idx on x.tbl(name, id);
--> Total runtime: 0.053 ms
DROP INDEX x.tbl_name_id_idx;
CREATE INDEX tbl_name_id_idx on x.tbl(name, id DESC);
--> Total runtime: 0.048 ms
DROP INDEX x.tbl_name_id_idx;
CREATE INDEX tbl_name_idx on x.tbl(name);
CLUSTER x.tbl using tbl_name_idx;
--> Total runtime: 1.144 ms
DROP INDEX x.tbl_name_idx;
CREATE INDEX tbl_name_id_idx on x.tbl(name, id DESC);
CLUSTER x.tbl using tbl_name_id_idx;
--> Total runtime: 0.047 ms
Conclusion
With a fitting index, the query performs more than 100x faster.
Top performer is a multicolumn index with the filter column first and the sort column last.
Matching sort order in the index helps a little in this case.
Clustering helps with the simple index, because many rows still have to be read from the table, and after clustering these can be found in adjacent blocks. It doesn't help with the multicolumn index in this case, because only one record has to be fetched from the table.
Read more about multicolumn indexes in the manual.
All of these effects grow with the size of the table. 10000 rows of two tiny columns is just a very small test case.
You can put the query together in Rails and the ORM will write the proper SQL:
Model.where(:name=>"Joe").order('created_at DESC').first
This should not result in retrieving all Model records, nor even a table scan.
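Under the hood that relation should generate SQL along these lines (a sketch; exact quoting depends on the adapter):

SELECT "models".* FROM "models"
WHERE "models"."name" = 'Joe'
ORDER BY created_at DESC
LIMIT 1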
This is probably the easiest:
SELECT [columns] FROM [table] WHERE [criteria] ORDER BY [id column] DESC LIMIT 1
Note: Indexing is important here. A huge DB will be slow to search no matter how you do it if you're not indexing the right way.
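For the example in the question that would mean a multicolumn index with the filter column first and the sort column last, mirroring the benchmark above (a sketch, assuming the table is named models):

CREATE INDEX index_models_on_name_and_id ON models (name, id DESC);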