Efficient ActiveRecord association conditions - ruby-on-rails

Let's say you have an association in one of your models like this:
class User
  has_many :articles
end
Now assume you need to get 3 arrays: one for the articles written yesterday, one for the articles written in the last 7 days, and one for the articles written in the last 30 days.
Of course you might do this:
articles_yesterday = user.articles.where("posted_at >= ?", Date.yesterday)
articles_last7d = user.articles.where("posted_at >= ?", 7.days.ago.to_date)
articles_last30d = user.articles.where("posted_at >= ?", 30.days.ago.to_date)
However, this will run 3 separate database queries. More efficiently, you could do this:
articles_last30d = user.articles.where("posted_at >= ?", 30.days.ago.to_date)
articles_yesterday = articles_last30d.select { |article|
  article.posted_at >= Date.yesterday
}
articles_last7d = articles_last30d.select { |article|
  article.posted_at >= 7.days.ago.to_date
}
Now of course this is a contrived example and there is no guarantee that the array select will actually be faster than a database query, but let's just assume that it is.
My question is: Is there any way (e.g. some gem) to write this code in a way which eliminates this problem by making sure that you simply specify the association conditions, and the application itself will decide whether it needs to perform another database query or not?
ActiveRecord itself does not seem to cover this problem appropriately. You are forced to decide between querying the database every time or treating the association as an array.

There are a couple of ways to handle this:
You can create separate associations for each time window you want by specifying a conditions hash on the association definition. Then you can simply eager load these associations for your User query, and you will be hitting the db 3x for the entire operation instead of 3x for each user.
class User
  has_many :articles_yesterday, class_name: 'Article', conditions: ['posted_at >= ?', Date.yesterday]
  # other associations the same way
end
User.where(...).includes(:articles_yesterday, :articles_7days, :articles_30days)
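Note that the conditions: option is pre-Rails 4 syntax; on Rails 4 and later the same idea is expressed with a scope lambda, which also has the advantage of being evaluated when the association is loaded, so Date.yesterday stays current. A rough sketch of the equivalent:
class User < ActiveRecord::Base
  # Rails 4+ replacement for the conditions: option shown above
  has_many :articles_yesterday, -> { where('posted_at >= ?', Date.yesterday) }, class_name: 'Article'
  has_many :articles_7days, -> { where('posted_at >= ?', 7.days.ago.to_date) }, class_name: 'Article'
  has_many :articles_30days, -> { where('posted_at >= ?', 30.days.ago.to_date) }, class_name: 'Article'
end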
You could do a group by.
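If counts per period are all you need, a grouped query is one way to read that suggestion; a rough sketch, assuming a SQL DATE() function (as in MySQL or PostgreSQL):
# One query; returns a Hash keyed by day, e.g. { "2015-06-01" => 3, ... }
daily_counts = user.articles
                   .where('posted_at >= ?', 30.days.ago.to_date)
                   .group('DATE(posted_at)')
                   .count
# The yesterday / last-7-days / last-30-days totals can then be summed from this hash in Ruby.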
What it comes down to is that you need to profile your code and determine what's going to be fastest for your app (or whether you should even bother with it at all).

You can remove the need to decide when to hit the database with something like the code below.
class User
  has_many :articles

  def articles_last30d
    @articles_last30d ||= articles.where("posted_at >= ?", 30.days.ago.to_date)
  end

  def articles_last7d
    @articles_last7d ||= articles_last30d.select { |article| article.posted_at >= 7.days.ago.to_date }
  end

  def articles_yesterday
    @articles_yesterday ||= articles_last30d.select { |article| article.posted_at >= Date.yesterday }
  end
end
What it does:
Makes at most one query, and only if any of the three methods is used
Calculates only the arrays you actually use; the 30d relation is loaded in any case, but only once
It does not, however, avoid running the full 30d query even when you only need one of the smaller arrays. Is that enough, or do you need something more?

Related

Does splitting up an active record query over 2 methods hit the database twice?

I have a database query where I want to get an array of distinct Users for the set:
# @range is a predefined date range
# @shift_list is a list of filtered shifts
def listing
  Shift
    .where(date: @range, shiftname: @shift_list)
    .select(:user_id)
    .distinct
    .map { |id| User.find(id.user_id) }
    .sort
end
and I read somewhere that for readability, or isolating for testing, or code reuse, you could split this into separate methods:
def listing
  shift_list
    .select(:user_id)
    .distinct
    .map { |id| User.find(id.user_id) }
    .sort
end

def shift_list
  Shift
    .where(date: @range, shiftname: @shift_list)
end
So I rewrote this and some other code, and now the page takes 4 times as long to load.
My question is, does this type of method splitting cause the database to be hit twice? Or is it something that I did elsewhere?
And I'd love a suggestion to improve the efficiency of this code.
Following up on the need to remove the mapping from the code: this shift list is being created with the following code:
def _month_shift_list
  Shift
    .select(:shiftname)
    .distinct
    .where(date: @range)
    .map { |x| x.shiftname }
end
My intention is to create an array of shiftnames as strings.
I am obviously missing some key understanding in database access, as this method is clearly creating part of the problem.
And I think I have found the solution to this with the following:
def month_shift_list
  Shift
    .where(date: @range)
    .pluck(:shiftname)
    .uniq
end
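As an aside, pluck(:shiftname).uniq loads every matching shiftname and de-duplicates in Ruby; chaining distinct before pluck pushes the de-duplication into SQL instead:
# Roughly: SELECT DISTINCT shiftname FROM shifts WHERE ...
shiftnames = Shift.where(date: @range).distinct.pluck(:shiftname)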
Nope, the database will not be hit twice. The queries in both methods are lazily loaded. The issue you have with the slow page load times is because the map call has to do a separate find, which translates to a separate SELECT against the DB, for every row. You can re-write your query to this:
def listing
  User.
    joins(:shift).
    merge(Shift.where(date: @range, shiftname: @shift_list)).
    uniq.
    sort
end
This has just one hit to the DB and will be much faster and should produce the same result as above.
The assumption here is that there is a has_one/has_many relationship on the User model for Shifts
class User < ActiveRecord::Base
  has_one :shift
end
If you don't want to establish the has_one/has_many relationship on User, you can re-write it to:
def listing
  User.
    joins("INNER JOIN shifts ON shifts.user_id = users.id").
    merge(Shift.where(date: @range, shiftname: @shift_list)).
    uniq.
    sort
end
ALTERNATIVE:
You can use 2 queries if you experience issues with using ActiveRecord#merge.
def listing
  user_ids = Shift.where(date: @range, shiftname: @shift_list).uniq.pluck(:user_id).sort
  User.find(user_ids)
end

Rails 5: use .where with two attributes

I'm combining the two .where like this:
@questions = (FirstQuestion.where(user_id: current_user) + SecondQuestion.where(user_id: current_user)).sort_by(&:created_at).reverse
Both .where searches use one attribute, the user_id. But now I want to search with two attributes: the user_id and created_at >= ?", Date.today + 60.days. So basically I want to find the objects with a user_id of current_user that were created less than or equal to 60 days from now.
Any idea on how to implement this?
Please see my comment as well... because it is kind of a code smell when you have models named FirstQuestion, SecondQuestion. There's really no reason for having separate models. You could probably easily model the logic via an attribute question_depth or something (I don't know what you are trying to achieve exactly).
With regard to your question: ActiveRecord allows for very customizable queries. In your case, you could write the two conditions in separate where calls, or combine them into a single where. That's totally up to you:
Question.where(user: current_user).where('created_at <= ?', 60.days.from_now)
Or in a single where
Question.where('user_id = ? AND created_at <= ?', current_user.id, 60.days.from_now)
Also, consider using scopes on your Question model for readability and reusability:
class Question < ApplicationRecord
  scope :by_user, ->(user) { where(user: user) }
  scope :min_age, ->(date) { where('created_at <= ?', date) }
end
And use it like:
Question.by_user(current_user).min_age(60.days.from_now)

Are .select and/or .where responsible for causing N+1 queries in Rails?

I have two methods here, distinct_question_ids and @correct_on_first_attempt. The goal is to show a user how many distinct multiple choice questions they have answered correctly.
The second one will let me know how many of these distinct MCQs have been answered correctly on the first attempt. (A user can attempt an MCQ many times.)
Now, when a user has answered thousands of questions and has thousands of user answers, the page that shows their performance takes 30 seconds to a minute to load. I believe it's due to the .select method, but I don't know how to replace .select, since it loops just like .each.
Is there any method that doesn't cause N+1?
distinct_question_ids = @user.user_answers.includes(:multiple_choice_question).
  where(is_correct_answer: true).
  distinct.pluck(:multiple_choice_question_id)

@correct_on_first_attempt = distinct_question_ids.select { |qid|
  @user.user_answers.
    where(multiple_choice_question_id: qid).first.is_correct_answer
}.count
.pluck returns an Array of values, not an ActiveRecord::Relation.
So when you do distinct_question_ids.select you're not calling ActiveRecord's select, but Array's select. Within that select, you're issuing a fresh query against @user for every id you just plucked, including ones that get rejected in the select.
You could create a query named distinct_questions that returns a relation (no pluck!), and then build correct_on_first_attempt off of that, and I think you'll avoid the N+1 queries.
Something along these lines:
class UserAnswer < ActiveRecord::Base
  scope :distinct_correct, -> { includes(:multiple_choice_question)
                                  .where(is_correct_answer: true).distinct }
  scope :first_attempt_correct, -> { distinct_correct
                                       .first.is_correct_answer }
end

class User < ActiveRecord::Base
  def good_guess_count
    @correct_on_first_attempt = user_answers.distinct_correct.first_attempt_correct.count
  end
end
You'll need to ensure that .first is actually getting their first attempt, probably by sorting by id or created_at.
As an aside, if you track the attempt number explicitly in UserAnswer, you can really tighten this up:
class UserAnswer < ActiveRecord::Base
  scope :correct, -> { where(is_correct_answer: true) }
  scope :first_attempt, -> { where(attempt: 1) }
end

class User < ActiveRecord::Base
  def lucky_guess_count
    @correct_on_first_attempt = user_answers.includes(:multiple_choice_question)
                                            .correct.first_attempt.count
  end
end
If you don't have an attempt number in your schema, you could use .order and .group to get something similar. But it seems that some of your project requirements depend on that sequence number, so I'd recommend adding it if you don't have it already.
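For example, one rough way to approximate the first attempt without an attempt column, assuming a lower id means an earlier attempt (the method name here is just for illustration), is to group for the smallest user_answer id per question and then count the correct ones among those rows; two queries, no N+1:
class User < ActiveRecord::Base
  def correct_on_first_attempt_count
    # Hash of { multiple_choice_question_id => smallest user_answer id }
    first_attempt_ids = user_answers.group(:multiple_choice_question_id).minimum(:id).values
    user_answers.where(id: first_attempt_ids, is_correct_answer: true).count
  end
end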
P.S. For fighting N+1 queries, use the bullet gem. It is on point.

Rails best way to get previous and next active record object

I need to get the previous and next Active Record objects with Rails. I did it, but I don't know if it's the right way to do it.
What I've got:
Controller:
@product = Product.friendly.find(params[:id])
order_list = Product.select(:id).all.map(&:id)
current_position = order_list.index(@product.id)
@previous_product = @collection.products.find(order_list[current_position - 1]) if order_list[current_position - 1]
@next_product = @collection.products.find(order_list[current_position + 1]) if order_list[current_position + 1]
@previous_product ||= Product.last
@next_product ||= Product.first
product_model.rb
default_scope -> {order(:product_sub_group_id => :asc, :id => :asc)}
So, the problem here is that I need to go to my database and get all these ids to know which are the previous and the next.
I tried the order_query gem, but it did not work for me, and I noticed that it goes to the database and fetches all the records in that order; that's why I did the same but got only the ids.
All the solutions I found were for simple order queries: order by id or something like a priority field.
Write these methods in your Product model:
class Product
  def next
    self.class.where("id > ?", id).first
  end

  def previous
    self.class.where("id < ?", id).last
  end
end
Now you can do in your controller:
@product = Product.friendly.find(params[:id])
@previous_product = @product.previous
@next_product = @product.next
Please try it, but it's not tested.
Thanks
I think it would be faster to do it with only two SQL requests that each select a single row (and not the entire table). Considering that your default order is sorted by id (otherwise, force the sorting by id):
@previous_product = Product.where('id < ?', params[:id]).last
@next_product = Product.where('id > ?', params[:id]).first
If the product is the last, then @next_product will be nil, and if it is the first, then @previous_product will be nil.
There's no easy out-of-the-box solution.
A slightly dirty but working way is to carefully sort out the conditions for finding the next and previous items. With id it's quite easy, since all ids are different, and Rails Guy's answer describes just that: in next, for a known id, pick the first entry with a larger id (if results are ordered by id, as per the defaults). More than that, his answer hints at placing next and previous in the model class. Do so.
If there are multiple order criteria, things get complicated. Say we have a set of rows sorted first by a group parameter (which can have equal values on different rows) and then by id (which is guaranteed to be different everywhere). Results are ordered by group and then by id (both ascending), so the next element is the first from the list of rows that:
have the same group and a larger id
have a larger group
Same with the previous element: you need the last one from the list of rows that:
have the same group and a smaller id
have a smaller group
Those fetch all next and previous entries respectively. If you need only one, use Rails' first and last (as suggested by Rails Guy) or limit(1) (and be wary of the asc/desc ordering).
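A hand-rolled sketch of those conditions for this question's (product_sub_group_id, id) ordering, assuming product_sub_group_id is never NULL, could look like this:
class Product < ActiveRecord::Base
  default_scope -> { order(product_sub_group_id: :asc, id: :asc) }

  # Next row: same group with a larger id, or any row from a later group.
  def next
    self.class.where(
      '(product_sub_group_id = ? AND id > ?) OR product_sub_group_id > ?',
      product_sub_group_id, id, product_sub_group_id
    ).first
  end

  # Previous row: same group with a smaller id, or any row from an earlier group.
  def previous
    self.class.where(
      '(product_sub_group_id = ? AND id < ?) OR product_sub_group_id < ?',
      product_sub_group_id, id, product_sub_group_id
    ).last
  end
end
Because the default scope already orders ascending, first returns the next record and last returns the previous one.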
This is what order_query does. Please try the latest version, I can help if it doesn't work for you:
class Product < ActiveRecord::Base
  order_query :my_order,
    [:product_sub_group_id, :asc],
    [:id, :asc]
  default_scope -> { my_order }
end
@product.my_order(@collection.products).next
@collection.products.my_order_at(@product).next
This runs one query loading only the next record. Read more on GitHub.

How do I calculate the most popular combination of order lines? (or any similar order/order lines db arrangement)

I'm using Ruby on Rails. I have a couple of models which fit the normal order/order lines arrangement, i.e.
class Order
  has_many :order_lines
end

class OrderLines
  belongs_to :order
  belongs_to :product
end

class Product
  has_many :order_lines
end
(greatly simplified from my real model!)
It's fairly straightforward to work out the most popular individual products via order lines, but what magical ruby-fu could I use to calculate the most popular combination(s) of products ordered?
Cheers,
Graeme
My suggestion is to create an array a of Product.id numbers for each order and then do the equivalent of
h = Hash.new(0)
# for each a
h[a.sort.hash] += 1
You will naturally need to consider the scale of your operation and how much you are willing to approximate the results.
External Solution
Create a "Combination" model and index the table by the hash, then each order could increment a counter field. Another field would record exactly which combination that hash value referred to.
In-memory Solution
Look at the last 100 orders and recompute the order popularity in memory when you need it. Hash#sort will give you a sorted list of popularity hashes. You could either make a composite object that remembered what order combination was being counted, or just scan the original data looking for the hash value.
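A minimal in-memory sketch along those lines, assuming orders have the usual created_at timestamp and order lines have a product_id:
# Tally product-id combinations for the most recent 100 orders
combos = Hash.new(0)
Order.includes(:order_lines).order(created_at: :desc).limit(100).each do |order|
  key = order.order_lines.map(&:product_id).sort
  combos[key] += 1
end
most_popular_combo, order_count = combos.max_by { |_combo, count| count }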
Thanks for the tip digitalross. I followed the external solution idea and did the following. It varies slightly from the suggestion in that it keeps a record of individual order combos rather than storing a counter, so it's possible to query by date as well, e.g. the top 10 most popular orders in the last week.
I created a method in my Order model which converts the list of order lines to a comma-separated string.
def to_s
  order_lines.sort.map { |ol| ol.id }.join(",")
end
I then added a callback so the combo is created every time an order is placed.
after_save :create_order_combo

def create_order_combo
  oc = OrderCombo.create(:user => user, :combo => self.to_s)
end
And finally my OrderCombo class looks something like below. I've also included a cached version of the method.
class OrderCombo
  belongs_to :user

  scope :by_user, lambda { |user| where(:user_id => user.id) }

  def self.top_n_orders_by_user(user, count = 10)
    OrderCombo.by_user(user).count(:group => :combo).sort { |a, b| a[1] <=> b[1] }.reverse[0..count - 1]
  end

  def self.cached_top_orders_by_user(user, count = 10)
    Rails.cache.fetch("order_combo_#{user.id.to_s}_#{count.to_s}", :expires_in => 10.minutes) { OrderCombo.top_n_orders_by_user(user, count) }
  end
end
It's not perfect as it doesn't take into account increased popularity when someone orders more of one item in an order.
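One hypothetical tweak for that, assuming OrderLine has product_id and quantity columns, is to encode the quantities into the combo key so that, say, two of product 5 counts as a different combo than one of product 5:
def to_s
  order_lines.sort_by(&:product_id).map { |ol| "#{ol.product_id}x#{ol.quantity}" }.join(",")
end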
