ActiveRecord query much slower than straight SQL? - ruby-on-rails

I've been working on optimizing my project's DB calls and I noticed a "significant" difference in performance between the two identical calls below:
connection = ActiveRecord::Base.connection
pgresult = connection.execute(
  "SELECT SUM(my_column)
   FROM table
   WHERE id = #{id}
   AND created_at BETWEEN '#{lower}' AND '#{upper}'")
and the second version:
sum = Table.
  where(:id => id, :created_at => lower..upper).
  sum(:my_column)
The method using the first version takes on average 300ms to execute (the operation is called a couple thousand times in total within it), and the method using the second version takes about 550ms. That's nearly double the execution time.
I double-checked the SQL generated by the second version; it's identical to the first, except that it prefixes the column names with the table name.
Why the slowdown? Is the conversion from ActiveRecord to SQL really making the operation take almost twice as long?
Do I need to stick to writing straight SQL (perhaps even a sproc) if I need to perform the same operation a ton of times and I don't want to hit the overhead?
Thanks!

A couple of things jump out.
Firstly, if this code is being called 2000 times and takes 250ms extra to run, that's ~0.125ms per call to convert the Arel to SQL, which isn't unrealistic.
Secondly, I'm not sure of the internals of Range in Ruby, but lower..upper may be doing calculations such as the size of the range and other things, which will be a big performance hit.
Do you see the same performance hit with the following?
sum = Table.
  where(:id => id).
  where("created_at BETWEEN ? AND ?", lower, upper).
  sum(:my_column)
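If the per-call query-building overhead really is the problem, another option for a hot path like this (a rough sketch, reusing the id, lower and upper values from the question; select_value and quote are standard connection-adapter methods) is to drop to the connection and let the database return a single value:
connection = ActiveRecord::Base.connection

# select_value skips building an ActiveRecord::Relation and model objects;
# quote() escapes the interpolated values so the SQL stays safe.
sum = connection.select_value(
  "SELECT SUM(my_column)
   FROM table
   WHERE id = #{connection.quote(id)}
   AND created_at BETWEEN #{connection.quote(lower)} AND #{connection.quote(upper)}").to_f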

Related

Effects of Rails' default_scope on performance

Can a default_scope that does not order records by ID significantly slow down a Rails application?
For example, I have a Rails (currently 3.1) app using PostgreSQL where nearly every Model has a default_scope ordering records by their name:
default_scope order('users.name')
Right now, because the default_scopes order records by name rather than by ID, I am worried I might be incurring a significant performance penalty when normal queries are run. For example, with:
User.find(5563401)
or
User.where('created_at = ?', 2.weeks.ago)
or
User.some_scope_sorted_best_by_id.all
In the above examples, what performance penalty might I incur by having a default_scope by name on my Model? Should I be concerned about this default_scope affecting application performance?
Your question is missing the point. The default scope itself is just a few microseconds of Ruby execution to cause an order by clause to be added to every SQL statement sent to PostgreSQL.
So your question is really asking about the performance difference between unordered queries and ordered ones.
The PostgreSQL documentation is pretty explicit: ordered queries on unindexed fields are much slower than unordered ones because (no surprise) PostgreSQL must sort the results before returning them, first creating a temporary table or index to hold the result. This could easily be a factor of 4 in query time, possibly much more.
If you introduce an index just to achieve quick ordering, you are still paying to maintain the index on every insert and update. And unless it's the primary index, sorted access still involves random seeks, which may actually be slower than creating a temporary table. This also is discussed in the Postgres docs.
In a nutshell, NEVER add an order clause to an SQL query that doesn't need it (unless you enjoy waiting for your database).
NB: I doubt a simple find() will have an ORDER BY attached, because it must return exactly one result. You can verify this very quickly by starting a Rails console, issuing a find, and watching the generated SQL scroll by. However, the where and all calls definitely will be ordered, and consequently will definitely be slower than needed.
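To make that concrete (a sketch against the Rails 3.1 API mentioned in the question; to_sql, unscoped and reorder are standard relation methods):
# In the console: the default_scope's ORDER BY users.name shows up in the generated SQL.
puts User.where('created_at = ?', 2.weeks.ago).to_sql

# Bypass the default_scope entirely for queries that don't need the ordering...
recent = User.unscoped.where('created_at = ?', 2.weeks.ago)

# ...or swap the ordering for one the primary key index can satisfy.
recent_by_id = User.where('created_at = ?', 2.weeks.ago).reorder('users.id')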

iterating through table in Ruby using hash runs slow

I have the following code:
h2.each { |k, v|
  @count += 1
  puts @count
  sq.each do |word|
    if Wordsdoc.find_by_docid(k).tf.include?(word)
      sum += Wordsdoc.find_by_docid(k).tf[word] * @s[word]
    end
  end
  rec_hash[k] = sum
  sum = 0
}
h2 -> is a hash that contains ids of documents; the hash contains more than 1,000 of these
Wordsdoc -> is a model/table in my database...
sq -> is a hash that contains around 10 words
What I'm doing is: I go through each of the document ids, and then for each word in sq I look up in the Wordsdoc table whether the word exists (Wordsdoc.find_by_docid(k).tf.include?(word); here tf is a hash of {word => value}),
and if it does, I get the value of that word in Wordsdoc and multiply it by the value of the word in @s, which is also a hash of {word => value}.
This seems to be running very slowly. It processes about one document per second. Is there a way to process this faster?
Thanks, I really appreciate your help on this!
You do a lot of duplicate querying. While ActiveRecord can do some caching in the background to speed things up, there is a limit to what it can do, and there is no reason to make things harder for it.
The most obvious cause of the slowdown is Wordsdoc.find_by_docid(k). For each value of k, you call it 10 times, and each time you call it there is a possibility of calling it again. That means you call that method with the same argument 10-20 times for each entry in h2. Queries to the database are expensive, since the database is on the hard disk, and accessing the hard disk is expensive in any system. You can just as easily call Wordsdoc.find_by_docid(k) once, before you enter the sq.each loop, and store the result in a variable - that would save a lot of querying and make your loop go much faster.
Another optimization - though not nearly as important as the first one - is to get all the Wordsdoc records in a single query. Almost all mid to high level (and some of the low level, too!) programming languages and libraries work better and faster when they work in bulk, and ActiveRecord is no exception. If you can query for all entries of Wordsdoc and filter them by the docids in h2's keys, you can turn 1000 queries (after the first optimization; before the first optimization it was 10000-20000 queries) into a single, huge query. That will enable ActiveRecord and the underlying database to retrieve your data in bigger chunks, and save you a lot of disk access.
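A rough sketch of that second optimization, assuming h2's keys are the docids and using the same finder style as the question:
# One query for all the documents, then an in-memory hash keyed by docid
# so the inner loops never touch the database again.
docs_by_id = Wordsdoc.all(:conditions => { :docid => h2.keys }).index_by(&:docid)

h2.each_key do |k|
  doc = docs_by_id[k]
  next if doc.nil?
  sum = 0
  sq.each do |word|
    sum += doc.tf[word] * @s[word] if doc.tf.include?(word)
  end
  rec_hash[k] = sum
end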
There are some more minor optimizations you can do, but the two I've specified should be more than enough.
You're calling Wordsdoc.find_by_docid(k) twice.
You could refactor the code to:
wordsdoc = Wordsdoc.find_by_docid(k)
if wordsdoc.tf.include?(word)
  sum += wordsdoc.tf[word] * @s[word]
end
...but still it will be ugly and inefficient.
You should prefetch all records in batches, see: https://makandracards.com/makandra/1181-use-find_in_batches-to-process-many-records-without-tearing-down-the-server
For example something like that should be much more efficient:
Wordsdoc.find_in_batches(:conditions => { :docid => array_of_doc_ids }) do |batch|
  batch.each do |wordsdoc|
    if wordsdoc.tf.include?(word)
      sum += wordsdoc.tf[word] * @s[word]
    end
  end
end
You can also retrieve only certain columns from the Wordsdoc table by passing, for example, :select => :tf to find_in_batches.
As you have a lot going on, I'm just going to offer you a few things to check out.
A book called Eloquent Ruby deals with documents and iterating through documents to count the number of times a word was used. All of the author's examples are about a document system he was maintaining, so it could even tackle other problems for you.
inject is a method that could speed up what you're looking to do for the sum part, maybe.
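For instance, the inner sum from the question could be written as a single inject (a sketch, assuming wordsdoc has already been fetched once per document as suggested above):
# Fold the per-word products into one total without a mutable accumulator.
sum = sq.inject(0) do |total, word|
  wordsdoc.tf.include?(word) ? total + wordsdoc.tf[word] * @s[word] : total
end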
Delayed Job the whole thing if you can do it asynchronously - if this is a web app, you must be timing out while waiting 1,000 seconds for this job to complete before it shows its answers on the screen.
Go get em.

Ruby on Rails: Model.all.each vs find_by_sql("SELECT * FROM model").each?

I'm fairly new to RoR. In my controller, I'm iterating over every tuple in the database. For every table and every column, I used to call
SomeOtherModel.find_by_sql("SELECT column FROM model").each {|x| #etc }
which worked fine enough. When I later changed this to
Model.all(:select => "column").each {|x| #etc }
the loop starts out at roughly the same speed but quickly slows down to something like 100 times slower than the find_by_sql version. These calls should be identical, so I really don't know what's happening.
I know these calls are not the most efficient but this is just an intermediate step and I will optimize it more once this works correctly.
So to clarify: Why in the world does calling Model.all.each run so much slower than using find_by_sql.each?
Thanks!
Both calls make the same SQL call, so they should both be roughly the same speed. Model.all does go through an extra level of indirection: I think that, in all, Rails collects all the rows from the DB connection, then passes that collection as a param to another method that loops over it and creates the model objects. In find_by_sql it does it all in one method, and maybe that is a little faster.
How large is your data set? If the set is over 100 or so records, you shouldn't use all; use find_in_batches instead. It uses limits and offsets to run your code in batches rather than loading the entire table into memory at once. You do need to include the primary key in the select, since it uses that to do the ordering and limiting. Example:
Model.find_in_batches(:batch_size => 100) do |group|
  group.each { |item| item.do_something_interesting }
end
ScottD is correct: the find_by_sql call is not what creates the slowdown you are seeing. The speed issues are due to the fact that Model.all loads a model instance into memory for every single record fetched. This can result in complete failure with a large enough result set. FYI, this is covered very well in the Rails Guides here: http://guides.rubyonrails.org/active_record_querying.html#retrieving-multiple-objects
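If only a single column is ever needed, a further option (a sketch; select_values is part of the connection adapter API) is to skip model instantiation entirely:
# Returns a plain array of the first column of each row; no Model
# objects are built, so memory use stays flat.
values = ActiveRecord::Base.connection.select_values("SELECT column FROM model")
values.each do |x|
  # work with the raw column value here
end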

Ruby on Rails migrations are very slow

I have an SQLite3 database which is populated with a large set of data.
I use a migration for that.
Three tables will have the following record counts:
Table_1 will have about 10 records
each record of Table_1 will be associated with ~100 records in Table_2
each record of Table_2 will be associated with ~2000 records in Table_3
The total count of records will be about 10 * 100 * 2000 = 2,000,000.
This takes a long time... Even if I populate my database with only about 20,000 records, it takes about 10 minutes.
Also, I have noticed that during the migration the Ruby interpreter uses just 5% of CPU time and 95% remains unused...
What is the reason for such poor performance?
Quite simply, inserting large amounts of records through manually saving AR objects one at a time is going to take years.
The best compromise between speed and "cleanness" (i.e. not a complete dodgy hack) for inserting large amounts of data is ar-extensions's (http://github.com/zdennis/ar-extensions) import method. It's not ideal, but it's better than any of the alternatives I could find, and the syntax is clean and doesn't require you to drop to raw sql (or anywhere close).
Example syntax:
items = Array.new
1.upto(200) do |n|
  items << Item.new(:some_field => n)
end
Item.import items, :validate => false
At least in MySQL, this will batch the records into a single INSERT statement with multiple sets of values. Pretty damn fast.
If you run each INSERT statement in its own transaction, SQLite can be very, very slow. But if you run them all in one transaction (or a logical set of transactions), it can be very fast.
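For example (a minimal sketch, assuming a plain ActiveRecord model named Item), wrapping the inserts in one transaction means SQLite syncs to disk once at COMMIT instead of once per row:
ActiveRecord::Base.transaction do
  1.upto(20_000) do |n|
    # Each create! issues an INSERT, but they all share the single
    # surrounding transaction.
    Item.create!(:some_field => n)
  end
end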
Seed_fu could help, as discussed in this question

Question about Ruby on Rails, Constants, belongs_to & Database Optimization/Performance

I've developed a web-based point of sale system for one of my clients in Ruby on Rails with a MySQL backend. These guys are growing so fast that they are ringing close to 10,000 transactions per day corporate-wide. For this question, I will use the transactions table as an example. Currently, I store transactions.status as a string (i.e. 'pending', 'completed', 'incomplete') within a varchar(255) field that has an index. In the beginning it was fine when I was looking up records by different statuses, as I didn't have to worry about so many records. Over time, using the query analyzer, I have noticed that performance has worsened and that varchar fields can really slow down your query speed over thousands of lookups. I've been thinking about converting these varchar fields to integer-based status fields utilizing a STATUS constant within the Transaction model, like so:
class Transaction < ActiveRecord::Base
  STATUS = { :incomplete => 0, :pending => 1, :completed => 2 }

  def self.expensive_query_by_status(status)
    find(:all,
         :select     => "id, cashier, total, status",
         :conditions => { :status => STATUS[status.to_sym] })
  end
end
Is this the best route for me to take? What do you guys suggest? I am already using proper indexes on various lookup fields, and memcached for query caching wherever possible. The app is currently set up in a distributed environment of three servers, where the first is for the application, the second for the DB and the third for caching (all in one datacenter and on the same VLAN).
Have you tried the alternative on a representative database? From the example given, I'm a little sceptical that it's going to make much difference, you see. If there are only three statuses, then a query by status may be better off not using an index at all.
Say "completed" comprises 80% of your table - with no other indexed column involved, you're going to be requiring more reads if the index is used than not. So a query of that type is almost certainly going to get slower as the table grows. "incomplete" and "pending" queries would probably still benefit from an index, however; they'd only be affected as the total number of rows with those statuses grew.
How often do you look at everything, complete and otherwise, without some more selective criterion? Could you partition the table in some (internal or external) way? For example, store completed transactions in a separate table, moving new ones there as they reach their final (?) state. I think internal database partitioning was introduced in MySQL 5.1 - looking at the documentation it seems that a RANGE partition might be appropriate.
All that said, I do think there's probably some benefit to moving away from storing statuses as strings. Storage and bandwidth considerations aside, it's a lot less likely that you'll inadvertently mis-spell an integer or, better yet, a constant or symbol.
You might want to start limiting your searches (if you're not doing that already); #find(:all) is pretty taxing at that scale. You might also want to think about what your Transaction model reaches out for as it gets rendered into your views, and perhaps eager load those associations to minimize requests to the DB for extra information.
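As a sketch of both suggestions (the :cashier association, the created_at ordering and the 100-row limit are illustrative assumptions, not from the question):
# Bound the result set and eager-load the association the views touch,
# so rendering doesn't trigger one extra query per transaction.
transactions = Transaction.find(:all,
  :conditions => { :status => Transaction::STATUS[:pending] },
  :include    => :cashier,
  :order      => 'created_at DESC',
  :limit      => 100)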

Resources