I have a method which returns the results from a query. The code that calls the method then loops through each result and launches a sidekiq worker. The problem I'm running into is that the looping is actually taking a decent amount of time (almost the same amount of time it takes to run all the workers). Here's the query:
Object.where("last_updated > ?", 1.days.ago.midnight)
I then do the following:
objects.each { |o| o.perform_async(something) }
I'm trying to figure out how to make this process more efficient. With around 30,000 results, the whole thing takes about 10 minutes to complete, which works out to roughly 20 milliseconds per launch. Is there any way to make this quicker?
I see that you already have last_updated indexed. Next:
Object.select('id, only_columns_you_need').where(...).find_each do |object|
  object.perform_async(something)
end
If the "objects" table has lots of columns but you only need a few for this operation, selecting only those columns can really speed things up in both db and Ruby land.
find_each will load the records in batches of 1000 by default. Use the :batch_size option to tweak that.
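For example, a minimal sketch of the same loop with an explicit batch size (the 500 is an arbitrary figure just to illustrate the option):

Object.select('id, only_columns_you_need').where(...).find_each(:batch_size => 500) do |object|
  # each batch is fetched with one query; 1000 per batch is the default
  object.perform_async(something)
end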
UPDATE
def do_stuff_to_objects(&stuff)
  Object.select('id, only_columns_you_need').where(...).find_each(&stuff)
end
...
do_stuff_to_objects do |object|
  object.perform_async(something)
end
Related
I've been working on optimizing my project's DB calls and I noticed a "significant" difference in performance between the two identical calls below:
connection = ActiveRecord::Base.connection()
pgresult = connection.execute(
"SELECT SUM(my_column)
FROM table
WHERE id = #{id}
AND created_at BETWEEN '#{lower}' and '#{upper}'")
and the second version:
sum = Table.
        where(:id => id, :created_at => lower..upper).
        sum(:my_column)
The method using the first version takes on average 300ms to execute (the operation is called a couple thousand times total within it), and the method using the second version takes about 550ms. That's nearly twice as slow.
I double-checked the SQL generated by the second version; it's identical to the first except that it prefixes the column names with the table name.
Why the slow-down? Is the conversion between ActiveRecord and SQL really making the operation take almost 2x?
Do I need to stick to writing straight SQL (perhaps even a sproc) if I need to perform the same operation a ton of times and I don't want to hit the overhead?
Thanks!
A couple of things jump out.
Firstly, if this code is being called 2000 times and takes 250ms extra to run, that's ~0.125ms per call to convert the Arel to SQL, which isn't unrealistic.
Secondly, I'm not sure of the internals of Range in Ruby, but lower..upper may be doing calculations such as the size of the range and other things, which could be a performance hit.
Do you see the same performance hit with the following?
sum = Table.
        where(:id => id).
        where("created_at BETWEEN ? AND ?", lower, upper).
        sum(:my_column)
I have the following code:
h2.each {|k, v|
  @count += 1
  puts @count
  sq.each do |word|
    if Wordsdoc.find_by_docid(k).tf.include?(word)
      sum += Wordsdoc.find_by_docid(k).tf[word] * @s[word]
    end
  end
  rec_hash[k] = sum
  sum = 0
}
h2 -> is a hash that contains ids of documents; it has more than 1000 entries
Wordsdoc -> is a model/table in my database...
sq -> is a hash that contains around 10 words
What I'm doing is going through each of the document ids, and then for each word in sq I look up in the Wordsdoc table whether the word exists (Wordsdoc.find_by_docid(k).tf.include?(word)); here tf is a hash of {word => value},
and if it does, I get the value of that word in Wordsdoc and multiply it by the value of the word in @s, which is also a hash of {word => value}.
This seems to be running very slowly. It processes one document per second. Is there a way to process this faster?
Thanks, I really appreciate your help on this!
You do a lot of duplicate querying. While ActiveRecord can do some caching in the background to speed things up, there is a limit to what it can do, and there is no reason to make things harder for it.
The most obvious cause of the slowdown is Wordsdoc.find_by_docid(k). For each value of k you call it 10 times in the condition, and whenever the condition matches you call it a second time to read the value, so you call the method with the same argument 10-20 times for each entry in h2. Queries to the database are expensive, since the database lives on disk, and disk access is expensive in any system. You can just as easily call Wordsdoc.find_by_docid(k) once, before you enter the sq.each loop, and store the result in a variable - that would save a lot of querying and make your loop go much faster.
Another optimization - though not nearly as important as the first one - is to get all the Wordsdoc records in a single query. Almost all mid to high level (and some low level, too!) programming languages and libraries work better and faster when they work in bulk, and ActiveRecord is no exception. If you query for all the Wordsdoc entries whose docid is among h2's keys, you can turn 1000 queries (after the first optimization; before it, it was 10,000-20,000 queries) into a single, big query. That lets ActiveRecord and the underlying database retrieve your data in bigger chunks and saves you a lot of disk access.
There are some more minor optimizations you can do, but the two I've described should be more than enough.
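For illustration, here is a rough sketch of both suggestions combined - fetch every needed Wordsdoc once, index the records by docid, and score in memory. The finder style and the column/variable names (tf, @s, rec_hash) are taken from the question; treat this as an untested outline rather than drop-in code.

# Fetch all Wordsdoc records for the ids in h2 with a single query
# (Rails 2.x-style finder, matching the question's code), indexed by docid.
wordsdocs_by_id = Wordsdoc.find(:all, :conditions => {:docid => h2.keys}).index_by(&:docid)

h2.each_key do |k|
  doc = wordsdocs_by_id[k]
  next unless doc
  sum = 0
  sq.each do |word|              # sq iterated the same way the question's loop does
    sum += doc.tf[word] * @s[word] if doc.tf.include?(word)
  end
  rec_hash[k] = sum
end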
You're calling Wordsdoc.find_by_docid(k) twice.
You could refactor the code to:
wordsdoc = Wordsdoc.find_by_docid(k)
if wordsdoc.tf.include?(word)
  sum += wordsdoc.tf[word] * @s[word]
end
...but still it will be ugly and inefficient.
You should prefetch all records in batches, see: https://makandracards.com/makandra/1181-use-find_in_batches-to-process-many-records-without-tearing-down-the-server
For example, something like this should be much more efficient:
Wordsdoc.find_in_batches(:conditions => {:docid => array_of_doc_ids}) do |batch|
  batch.each do |wordsdoc|
    if wordsdoc.tf.include?(word)
      sum += wordsdoc.tf[word] * @s[word]
    end
  end
end
You can also retrieve only certain columns from the Wordsdoc table by passing a :select option to find_in_batches (remember to include the primary key, which find_in_batches needs for its ordering), for example :select => 'id, docid, tf'.
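A hedged sketch of that variant (the column names docid and tf come from the question; untested):

# Pull only the columns we actually use; id is included because
# find_in_batches orders and paginates by the primary key.
Wordsdoc.find_in_batches(:conditions => {:docid => array_of_doc_ids},
                         :select     => "id, docid, tf") do |batch|
  batch.each do |wordsdoc|
    # ... score wordsdoc.tf against @s here ...
  end
end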
As you have a lot going on, I'm just going to offer a few things to check out.
A book called Eloquent Ruby deals with documents and iterating through documents to count the number of times a word was used. All of the author's examples revolve around a document system he was maintaining, so it could even tackle other problems for you.
inject is a method that could speed up what you're looking to do for the sum part, maybe.
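For instance, a small sketch of the per-document sum using inject. It assumes the Wordsdoc record has already been fetched once into a wordsdoc variable (as in the refactor shown earlier), and that sq behaves as the enumerable of words the question's own loop iterates over:

# Accumulate the score for one document without a mutable sum variable.
sum = sq.inject(0) do |total, word|
  wordsdoc.tf.include?(word) ? total + wordsdoc.tf[word] * @s[word] : total
end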
Run the whole thing through Delayed Job if you're doing this asynchronously - meaning, if this is a web app, you must be timing out if you're waiting 1000 seconds for this job to complete before it shows its answers on the screen.
Go get em.
In a Rails 2 app I'm building, I need to update a collection of records with specific attributes. I have a named scope to find the collection, but I have to iterate over each record to update the attributes. Instead of making one query to update several thousand records, I'll have to make several thousand queries.
What I've found so far is something like Model.find_by_sql("UPDATE products ...")
This feels really junior, but I've googled and looked around SO and haven't found my answer.
For clarity, what I have is:
ps = Product.last_day_of_freshness
ps.each { |p| p.update_attributes(:stale => true) }
What I want is:
Product.last_day_of_freshness.update_attributes(:stale => true)
It sounds like you are looking for ActiveRecord::Base.update_all - from the documentation:
Updates all records with details given if they match a set of conditions supplied, limits and order can also be supplied. This method constructs a single SQL UPDATE statement and sends it straight to the database. It does not instantiate the involved models and it does not trigger Active Record callbacks or validations.
Product.last_day_of_freshness.update_all(:stale => true)
Actually, since this is Rails 2.x (you didn't specify), the named_scope chaining may not work; you might need to pass your named scope's conditions as the second parameter to update_all instead of chaining it onto the end of the Product scope.
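Something along these lines, where the condition is a made-up stand-in for whatever last_day_of_freshness actually checks (Rails 2's update_all takes the updates as the first argument and the conditions as the second):

# The created_at condition is hypothetical - substitute the real
# conditions behind your last_day_of_freshness named scope.
Product.update_all({:stale => true}, ["created_at <= ?", 1.day.ago])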
Have you tried using update_all?
http://api.rubyonrails.org/classes/ActiveRecord/Relation.html#method-i-update_all
For those who need to update a large number of records - a million or even more - there is a good way to update them in batches.
product_ids = Product.last_day_of_freshness.pluck(:id)
iterations_size = product_ids.count / 5000
puts "Products to update #{product_ids.count}"
product_ids.each_slice(5000).with_index do |batch_ids, i|
  puts "step #{i} of #{iterations_size}"
  Product.where(id: batch_ids).update_all(stale: true)
end
If your table has a lot of indexes, that will also increase the time for such operations, because the indexes have to be updated as well. When I called update_all for all records in a table with about two million records and twelve indexes, the operation didn't finish in over an hour. With this batched approach it took about 20 minutes in the development environment and about 4 minutes in production - of course it depends on application settings and server hardware. You can put it in a rake task or some background worker.
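As a sketch, here is the batched update wrapped in a rake task; the namespace and task names are hypothetical:

# lib/tasks/products.rake
namespace :products do
  desc "Mark products past their last day of freshness as stale, in batches"
  task :mark_stale => :environment do
    product_ids = Product.last_day_of_freshness.pluck(:id)
    batches     = (product_ids.size / 5000.0).ceil
    product_ids.each_slice(5000).with_index do |batch_ids, i|
      puts "batch #{i + 1} of #{batches}"
      Product.where(:id => batch_ids).update_all(:stale => true)
    end
  end
end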
Looks like update_all is the best option... though I'll maintain my hacky version in case you're curious:
You can use just plain-ole SQL to do what you want thus:
ps = Product.last_day_of_freshness
ps_ids = ps.map(&:id).join(',') # local var just for readability
Product.connection.execute("UPDATE `products` SET `stale` = TRUE WHERE id IN (#{ps_ids})")
Note that this is db-dependent - you may need to adjust quoting style to suit.
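If you'd rather let the adapter handle quoting for you, a hedged variant (still raw SQL, still bypassing callbacks and validations):

# connection.quote escapes each value appropriately for the current adapter.
quoted_ids = ps.map { |p| Product.connection.quote(p.id) }.join(',')
Product.connection.execute("UPDATE products SET stale = TRUE WHERE id IN (#{quoted_ids})")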
I'm fairly new to RoR. In my controller, I'm iterating over every tuple in the database. For every table and every column, I used to call
SomeOtherModel.find_by_sql("SELECT column FROM model").each {|x| #etc }
which worked fine enough. When I later changed this to
Model.all(:select => "column").each {|x| #etc }
the loop starts out at roughly the same speed but quickly slows down to something like 100 times slower than the find_by_sql version. These calls should be identical, so I really don't know what's happening.
I know these calls are not the most efficient but this is just an intermediate step and I will optimize it more once this works correctly.
So to clarify: Why in the world does calling Model.all.each run so much slower than using find_by_sql.each?
Thanks!
Both calls make the same SQL query, so they should be roughly the same speed. Model.all does go through an extra level of indirection: I believe that in all, Rails collects all the rows from the DB connection and then passes that collection to another method that loops over it and creates the model objects, whereas find_by_sql does it all in one method, which may be a little faster. How large is your data set? If the set is over 100 or so records you shouldn't use all; use find_in_batches instead. It uses limits and offsets to run your code in batches rather than loading the entire table into memory at once. You do need to include the primary key in the select, since it uses that for the order and limit. Example:
Model.find_in_batches(:batch_size => 100) do |group|
  group.each {|item| item.do_something_interesting }
end
ScottD is correct: the find_by_sql call is not what creates the slowdown you are seeing. The speed issue is due to the fact that Model.all loads a model instance into memory for every single record fetched, which can result in complete failure with a large enough result set. FYI, this is covered very well in the Rails Guides here: http://guides.rubyonrails.org/active_record_querying.html#retrieving-multiple-objects
I have an SQLite3 database which is populated with a large set of data.
I use a migration for that.
The 3 tables will have the following counts of records:
Table_1 will have about 10 records
each record of Table_1 will be associated with ~100 records in Table_2
each record of Table_2 will be associated with ~2000 records in Table_3
The count of records will be about 10*100*2000 = 2000000
This takes a long time... Even if I populate my database with only about 20,000 records, it takes about 10 minutes.
Also, I have noticed that during the migration the Ruby interpreter uses just 5% of the CPU time and 95% remains unused...
What is the reason for such poor performance?
Quite simply, inserting large amounts of records through manually saving AR objects one at a time is going to take years.
The best compromise between speed and "cleanness" (i.e. not a completely dodgy hack) for inserting large amounts of data is ar-extensions' (http://github.com/zdennis/ar-extensions) import method. It's not ideal, but it's better than any of the alternatives I could find, and the syntax is clean and doesn't require you to drop to raw SQL (or anywhere close).
Example syntax:
items = Array.new
1.upto(200) do |n|
  items << Item.new :some_field => n
end
Item.import items, :validate => false
At least in MySQL this will batch the records into a single INSERT statement with multiple sets of values. Pretty damn fast.
If you run each INSERT statement in its own transaction, SQLite can be very, very slow. But if you run them all in one transaction (or a logical set of transactions), it can be very fast.
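For instance, a minimal sketch of wrapping plain ActiveRecord creates in a single transaction (the Item model and attribute are the ones from the example above; this still instantiates AR objects, so it's slower than import, but far faster than one transaction per insert):

# One commit for all the rows instead of one per INSERT.
Item.transaction do
  1.upto(200) do |n|
    Item.create!(:some_field => n)
  end
end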
Seed_fu could help, as discussed in this question