For demo purposes, suppose that I have a class called DemoThing with a method called do_something.
Is there a way that (in code) I can check the number of times that do_something hits the database? Is there a way that I can "spy" on active record to count the number of times that the database was called?
For instance:
class DemoThing
  def do_something
    retVal = []
    5.times do |i|
      retVal << MyActiveRecordModel.where(:id => i)
    end
    retVal
  end
end
dt = DemoThing.new
stuff = dt.do_something # want to assert that do_something hit the database 5 times
ActiveRecord should be logging each query in STDOUT.
But for the above code, it's pretty obvious that you're making one call on each iteration of i, five calls in total.
Queries can be made more efficient by not mixing Ruby logic with querying.
In this example, getting the ids before querying means a query isn't issued on each iteration of the Ruby loop.
ids = 5.times.to_a # => [0, 1, 2, 3, 4]
retVal = MyActiveRecordModel.where(id: ids) # .to_a if retVal needs to be an Array
Sure is. But first you must understand Rails' query cache and logger.
By default, Rails will attempt to optimize performance by turning on a simple query cache. It is a hash stored on the current thread, one for every active database connection (most Rails processes will have just one). Whenever a select statement is made (via find, where, etc.), the corresponding result set is stored in that hash with the SQL that was used to query it as the key.
You'll notice when you run the above method that your log shows a Model Load statement and then CACHE statements. Your database was only queried one time, with the other 4 results being loaded via the cache. Watch your server logs as you run that query.
I found a gem for counting queries: https://github.com/comboy/sql_queries_count
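If you'd rather count queries in plain code without pulling in a gem, ActiveSupport::Notifications instruments every SQL statement as a "sql.active_record" event, so you can subscribe for the duration of a block. A minimal sketch (count_queries is just an illustrative helper, not part of any library):

  def count_queries(&block)
    count = 0
    counter = lambda do |_name, _started, _finished, _id, payload|
      # skip cached queries and schema lookups so only real queries are counted
      count += 1 unless %w[CACHE SCHEMA].include?(payload[:name])
    end
    ActiveSupport::Notifications.subscribed(counter, "sql.active_record", &block)
    count
  end

  queries = count_queries { DemoThing.new.do_something }
  # e.g. assert_equal 5, queries in your test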
Let's say that in some Child model method I need to do calculations based on data stored on its Parent model. For example:
def child_method(minutes)
  remaining_time = minutes % self.parent.parent_settings
  if remaining_time >= 1
    return minutes / self.parent.parent_settings
  else
    return [minutes / self.parent.parent_settings - 1, 0].max
  end
end
In the above I've called self.parent.parent_settings 3 times. Based on how Rails works, is this efficient? Or is it a terrible idea, and I should instead set the parent_settings locally, e.g.,:
def child_method(minutes)
  parent_settings = self.parent.parent_settings
  remaining_time = minutes % parent_settings
  if remaining_time >= 1
    return minutes / parent_settings
  else
    return [minutes / parent_settings - 1, 0].max
  end
end
I have more complex instances of this (e.g., where in one child method I'm accessing multiple parent attributes, and in some instances grandparent attributes too). I realize the answer might be "it depends" on exactly what the data is, etc., but I'm looking to see if there are general rules of thumb or conventions.
Like you said, it depends.
Rails will cache fetched associations as long as the object remains in memory:
puts self.parent.parent_settings.object_id
# ... Some code
puts self.parent.parent_settings.object_id # => This should be the same object ID as before
This cache is cleared automatically by the framework and can be explicitly cleared via #reload:
self.reload
Your code should be fine as long as you're not running child_method multiple times in a request/response cycle. Even if you do run child_method multiple times in the same request/response cycle, there's another database query cache that will intercept the same DB queries. The db query cache is only active when in production mode or when a special ENV var is set.
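If you ever want that SQL query cache outside of a normal request cycle (say, in a console session or a background job), you can turn it on yourself for a block; a small sketch, where child stands in for one of your Child records:

  # Enables ActiveRecord's SQL query cache for the duration of the block;
  # identical SELECTs issued inside the block are answered from the cache.
  ActiveRecord::Base.connection.cache do
    child.child_method(30)
    child.child_method(45) # any repeated, identical SELECTs here are served from the cache
  end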
Following the principle of fail-fast:
When querying the database where there should only ever be one record, I want an exception if first encounters more than one record.
I see that there is a first! method that throws if there are fewer records than expected, but I don't see anything for when there are two or more.
How can I get active record to fail early if there are more records than expected?
Is there a reason that active record doesn't work this way?
I'm used to C#'s Single() that will throw if two records are found.
Why would you expect ActiveRecord's first method to fail if there is more than one record? It makes no sense for it to work that way.
You can define your own class method that counts the records before getting the first one. Something like:
def self.first_and_only!
  # count runs against the current scope (e.g. a where chain)
  raise "more than one record" if count > 1
  first!
end
That will raise an error if there is more than one record, and also (via first!) if there is no record at all. If there is exactly one, it will return it.
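Usage would then look something like User.where(email: some_email).first_and_only! (the model and attribute are just for illustration).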
It seems ActiveRecord has no method like that. One useful method I found is one?, which you can call on an ActiveRecord::Relation object. You could do:
users = User.where(name: "foo")
raise StandardError unless users.one?
and maybe define your own custom exception.
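For example, a small sketch with a made-up exception name:

  class MoreThanOneRecordError < StandardError; end

  users = User.where(name: "foo")
  raise MoreThanOneRecordError, "expected at most one user named foo" unless users.one?
  user = users.first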
If you care about query performance, you have to avoid ActiveRecord::Relation's count, one?, none?, many?, any?, etc., which spawn a SQL select count(*) ... query.
So you could use SQL limit instead, like:
def self.single!
  # Only one fast DB query, capped at two rows
  result = limit(2).to_a
  # Array#many?, not the ActiveRecord::Calculations one
  raise TooManySomethingError if result.many?
  # Array#first, not the ActiveRecord::FinderMethods one
  result.first
end
Also, when you expect to get only one record, use Relation's take instead of first. The latter is meant to return the genuinely first record and can produce a useless SQL ORDER BY.
find_sole_by (Rails 7.0+)
Starting from Rails 7.0, there is a find_sole_by method:
Finds the sole matching record. Raises ActiveRecord::RecordNotFound if no record is found. Raises ActiveRecord::SoleRecordExceeded if more than one record is found.
For example:
Product.find_sole_by(["price = %?", price])
Sources:
ActiveRecord::FinderMethods#find_sole_by.
Rails 7 adds ActiveRecord methods #sole and #find_sole_by.
Rails 7.0 adds ActiveRecord::FinderMethods 'sole' and 'find_sole_by'.
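The closely related sole method (also Rails 7.0+) does the same checks on an existing relation, for example:

  # Raises ActiveRecord::RecordNotFound if there is no match,
  # ActiveRecord::SoleRecordExceeded if there is more than one.
  Product.where(price: price).sole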
We are working on a data visualization problem right now. Our customer wants us to show the last 6 months of data for a honeybee hive on a graph.
Clearly it's going to be a huge dataset. By adding indexes we overcame the database slowness in loading the data, though we still have a problem visualizing the data on a graph.
Here is the related code:
def self.prepare_single_hive_messages_for_datatable_dygraph(messages, us_metric_enabled)
  data = []
  messages.each do |message|
    record = []
    record << message.occurance_time.to_s(:dygraph_format)
    record << weight_according_to_metric(message.weight, us_metric_enabled)
    record << temperature_according_to_metric(message.temperature, us_metric_enabled)
    record << (message.humidity.nil? ? nil : message.humidity.to_f)
    data << record
  end
  return data
end
The problem is that messages.each is very slow and takes more than 30 seconds. Is there any solution to overcome this?
Project Specification:
Rails Version: 4.1.9
Graph Library: Dygraph
Database: Postgres
There are two ways to attack a performance problem like this.
Find and correct the performance bottleneck
Break it into smaller pieces
Finding Performance issues
First, get a dataset large enough to reproduce the problem setup on your dev system. Then look at the logs so you can see how long the transaction is taking. You should be looking for a line like this:
Completed 200 OK in 432.1ms (Views: 367.7ms | ActiveRecord: 61.4ms)
Rerun the task a couple times since caching can cause variations. Write down your different times. Then remove everything in the loop and run it with just the loop. Do the numbers go back to looking reasonable? If that is the case then you know the problem is the work you are doing inside the loop. Next, add each line in the loop back on its own (or one at a time if they depend on each other). Figure out which line causes those numbers to jump the most.
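If you prefer timing the code directly instead of reading the log, Ruby's standard Benchmark module can wrap the suspect section; a rough sketch (the class and variable names are hypothetical):

  require "benchmark"

  elapsed = Benchmark.realtime do
    HiveMessagePresenter.prepare_single_hive_messages_for_datatable_dygraph(messages, false)
  end
  Rails.logger.info("prepare_single_hive_messages took #{elapsed.round(2)}s")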
This is the point where you should try to performance tune your code. Check for queries that could be smarter. Make sure you aren't querying the same data over and over. If you have a function in a model that computes something and you call it multiple times to get the same answer then use this to only compute once:
def something
  return @saved_value if @saved_value
  @saved_value = really_complex_calculation # the expensive work, done only once
end
The goal is to find the worst offender so you can make changes that have the biggest impact. However, if you are working with a LOT of data this may only get you so far. It may be impossible to performance tune enough for all the data. In that case there is option 2.
Break it into smaller pieces
Write a second Rails action whose only job is to render a single record on the graph. It will do the inner part of your loop, but only for the message whose id was passed to it.
Call your original action to set up the view and pass the list of messages to the view. In the view, loop through the list of messages to set up jQuery ajax code that calls the above action once for each message. Have this run on document ready.
Then the page will load with an empty graph... but as soon as it is up, the individually processed records will be fed to it and appear one at a time on the page. It will still take just as long (or even a little longer because of the overhead) to complete the graph... but it will no longer time out. Each ajax call will be its own quick hit to the server instead of one big long hit.
I just used this very technique to load a rather long report on a site I work on. Ideally we'd like to fix any underlying performance issues... but what we really wanted was to have a report working right away and then fix the performance issues as we had time.
OK, you said every person sees the same set of data, which is great: it means we can cache without worrying about who's logged in. First, here's your method with tiny improvements:
def self.prepare_single_hive_messages_for_datatable_dygraph(messages, us_metric_enabled)
  messages.inject([]) do |records, message|
    records << [].tap do |record|
      record << message.occurance_time.to_s(:dygraph_format)
      record << weight_according_to_metric(message.weight, us_metric_enabled)
      record << temperature_according_to_metric(message.temperature, us_metric_enabled)
      record << (message.humidity.nil? ? nil : message.humidity.to_f)
    end
  end
end
Then create a caching method that runs this method and caches the result:
# some class constants
CACHE_KEY = 'some_cache_key'
EXPIRY_TIME = 15.minutes
# the methods
def self.write_single_hive_messages_to_cache(messages, us_metric_enabled)
  Rails.cache.write CACHE_KEY,
                    prepare_single_hive_messages_for_datatable_dygraph(messages, us_metric_enabled),
                    expires_in: EXPIRY_TIME
end
And a simple cache-reading method:
def self.read_single_hive_messages_from_cache
  Rails.cache.read CACHE_KEY
end
Then create a rake task that just fetches these messages and calls the caching method, and Rails will write the cache.
Create a cron job that calls this rake task every 5 minutes or so. The cache expiry time is longer so that if for some reason the cron job doesn't run, the data will still be available for the next run.
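A rough sketch of such a rake task; the Message class name and the last_six_months scope are placeholders for whatever actually feeds the graph:

  # lib/tasks/hive_messages.rake
  namespace :hive_messages do
    desc "Pre-compute the dygraph data and store it in the Rails cache"
    task warm_cache: :environment do
      messages = Message.last_six_months                            # placeholder scope
      Message.write_single_hive_messages_to_cache(messages, false)  # false = us_metric_enabled
    end
  end

plus a crontab entry along the lines of */5 * * * * cd /path/to/app && bin/rake hive_messages:warm_cache.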
This way your processing runs in the background every 5 minutes (or whatever interval you choose), and the page load should happen normally with no delay at all, since the array data will be loaded from the pre-calculated cache.
In case the cron stops working, the data will expire after the 15 minutes I've set and the read method will return nil. You could avoid this by setting the data to never expire, but then the data would become stale and the old data would keep being returned.
Another way to handle this is to teach the cache-reading method how to generate the cache itself: if it finds the cache empty, it generates the data and caches it before returning it. The method would look like this:
def self.read_single_hive_messages_from_cache(messages, us_metric_enabled)
  Rails.cache.fetch CACHE_KEY, expires_in: EXPIRY_TIME do
    # cache miss: generate the data here and fetch will store it under CACHE_KEY
    prepare_single_hive_messages_for_datatable_dygraph(messages, us_metric_enabled)
  end
end
But then make sure that messages is an ActiveRecord::Relation and not an already-processed array, because you don't want to query for 1+ million records only to find the cache already warm. If it's an ActiveRecord::Relation, it won't touch the database until the array is built (inside the caching block); if the cache exists it is returned before you enter the block, so the data is never fetched, saving you that huge query.
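To illustrate that last point (the scope condition is hypothetical):

  messages = Message.where("occurance_time > ?", 6.months.ago) # relation built, no query yet
  Message.read_single_hive_messages_from_cache(messages, false)
  # If the cache is warm, the fetch block never runs and the relation is never loaded.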
I know the answer got long, if you need more help tell me.
I'm writing an application that allows users to send one another messages about an 'offer'.
I thought I'd save myself some work and use the Mailboxer gem.
I'm following a test driven development approach with RSpec. I'm writing a test that should ensure that only one Conversation is allowed per offer. An offer belongs_to two different users (the user that made the offer, and the user that received the offer).
Here is my failing test:
describe "after a message is sent to the same user twice" do
before do
2.times { sending_user.message_user_regarding_offer! offer, receiving_user, random_string }
end
specify { sending_user.mailbox.conversations.count.should == 1 }
end
So before the test runs a user sending_user sends a message to the receiving_user twice. The message_user_regarding_offer! looks like this:
def message_user_regarding_offer! offer, receiver, body
  conversation = offer.conversation
  if conversation.nil?
    self.send_message(receiver, body, offer.conversation_subject)
  else
    self.reply_to_conversation(conversation, body)
    # I put a binding.pry here to examine in console
  end
  offer.create_activity key: PublicActivityKeys.message_received, owner: self, recipient: receiver
end
On the first iteration in the test (when the first message is sent) the conversation variable is nil therefore a message is sent and a conversation is created between the two users.
On the second iteration the conversation created in the first iteration is returned and the user replies to that conversation, but a new conversation isn't created.
This all works, but the test fails and I cannot understand why!
When I place a pry binding in the code in the location specified above I can examine what is going on... now riddle me this:
self.mailbox.conversations[0] returns a Conversation instance
self.mailbox.conversations[1] returns nil
self.mailbox.conversations clearly shows a collection containing ONE object.
self.mailbox.conversations.count returns 2?!
What is going on there? The count method is incorrect and my test is failing...
What am I missing? Or is this a bug?!
EDIT
offer.conversation looks like this:
def conversation
  Conversation.where({subject: conversation_subject}).last
end
and offer.conversation_subject:
def conversation_subject
  "offer-#{self.id}"
end
EDIT 2 - Showing the first and second iteration in pry
Also...
Conversation.all.count returns 1!
and:
Conversation.all == self.mailbox.conversations returns true
and
Conversation.all.count == self.mailbox.conversations.count returns false
How can that be if the arrays are equal? I don't know what's going on here; I've blown hours on this now. Think it's a bug?!
EDIT 3
From the source of the Mailboxer gem...
def conversations(options = {})
  conv = Conversation.participant(@messageable)
  if options[:mailbox_type].present?
    case options[:mailbox_type]
    when 'inbox'
      conv = Conversation.inbox(@messageable)
    when 'sentbox'
      conv = Conversation.sentbox(@messageable)
    when 'trash'
      conv = Conversation.trash(@messageable)
    when 'not_trash'
      conv = Conversation.not_trash(@messageable)
    end
  end
  if (options.has_key?(:read) && options[:read] == false) || (options.has_key?(:unread) && options[:unread] == true)
    conv = conv.unread(@messageable)
  end
  conv
end
The reply_to_conversation code is available here -> http://rubydoc.info/gems/mailboxer/frames.
Just can't see what I'm doing wrong! Might rework my tests to get around this. Or ditch the gem and write my own.
See this: Rails 3: Difference between Relation.count and Relation.all.count.
In short, Rails ignores the selected columns (if there is more than one) when you apply count to the query. This is because SQL's COUNT allows at most one column as a parameter.
From the Mailboxer code:
scope :participant, lambda {|participant|
  select('DISTINCT conversations.*').
    where('notifications.type'=> Message.name).
    order("conversations.updated_at DESC").
    joins(:receipts).merge(Receipt.recipient(participant))
}
self.mailbox.conversations.count ignores the select('DISTINCT conversations.*') and counts over the join with receipts, essentially counting the number of receipts, duplicate conversations included.
On the other hand, self.mailbox.conversations.all.count first gets the records applying the select, which gets unique conversations and then counts it.
self.mailbox.conversations.all == self.mailbox.conversations since both of them query the db with the select.
To solve your problem you can use sending_user.mailbox.conversations.all.count or sending_user.mailbox.conversations.group('conversations.id').length
I have tended to use the size method in my code. As per the ActiveRecord code, size will use a cached count if available and also returns the correct number when models have been created through relations and have not yet been saved.
# File activerecord/lib/active_record/relation.rb, line 228
def size
  loaded? ? @records.length : count
end
There is a blog on this here.
In Ruby, #length and #size are synonyms and both do the same thing: they tell you how many elements are in an array or hash. Technically #length is the method and #size is an alias to it.
In ActiveRecord, there are several ways to find out how many records are in an association, and there are some subtle differences in how they work.
post.comments.count - Determine the number of elements with an SQL COUNT query. You can also specify conditions to count only a subset of the associated elements (e.g. :conditions => {:author_name => "josh"}). If you set up a counter cache on the association, #count will return that cached value instead of executing a new query.
post.comments.length - This always loads the contents of the association into memory, then returns the number of elements loaded. Note that this won't force an update if the association had been previously loaded and then new comments were created through another way (e.g. Comment.create(...) instead of post.comments.create(...)).
post.comments.size - This works as a combination of the two previous options. If the collection has already been loaded, it will return its length just like calling #length. If it hasn't been loaded yet, it's like calling #count. The short sketch below illustrates the difference.
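A quick sketch of the difference, assuming a post with many comments:

  post = Post.first
  post.comments.size    # association not loaded yet, so this runs a COUNT query
  post.comments.to_a    # loads the association into memory
  post.comments.size    # now returns the length of the in-memory collection, no query
  post.comments.length  # also no query: the collection is already loaded
  post.comments.count   # always fires a COUNT query (unless a counter cache is set up)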
It is also worth mentioning that you should be careful if you are not creating models through associations, as the related model will not necessarily have those instances in its association proxy/collection.
# do this
mailbox.conversations.build(attrs)
# or this
mailbox.conversations << Conversation.new(attrs)
# or this
mailbox.conversations.create(attrs)
# or this
mailbox.conversations.create!(attrs)
# NOT this
Conversation.new(mailbox_id: some_id, ....)
I don't know if this explains what's going on, but the ActiveRecord count method queries the database for the number of records stored. The length of the Relation could be different, as discussed in http://archive.railsforum.com/viewtopic.php?id=6255, although in that example, the number of records in the database was less than the number of items in the Rails data structure.
Try
self.mailbox.conversations.reload; self.mailbox.conversations.count
or perhaps
self.mailbox.reload; self.mailbox.conversations.count
or, if neither of those work, just try reloading as many of the objects as possible to see if you can get it to work (self, mailbox, conversations, etc.).
My guess is that something is messed up between memory and the DB. This is definitely a really weird error though, might wanna put in an issue on Rails to see why this would be the case.
The result of mailbox.conversations is cached after the first call. To reload it write mailbox.conversations(true)
For example, suppose there is this code in Rails 3.2.3:
def test_action
  a = User.find_by_id(params[:user_id])
  # some calculations.....
  b = Reporst.find_by_name(params[:report_name])
  # some calculations.....
  c = Places.find_by_name(params[:place_name])
end
This code does 3 requests to the database and opens 3 different connections. Most likely it's going to be quite a long action.
Is there any way to open only one connection and do the 3 requests within it? Or can I control which connection to use myself?
You would want to bracket the calls with transaction:
Transactions are protective blocks where SQL statements are only permanent if they can all succeed as one atomic action. The classic example is a transfer between two accounts where you can only have a deposit if the withdrawal succeeded and vice versa. Transactions enforce the integrity of the database and guard the data against program errors or database break-downs. So basically you should use transaction blocks whenever you have a number of statements that must be executed together or not at all.
def test_action
  User.transaction do
    a = User.find_by_id(params[:user_id])
    # some calculations.....
    b = Reporst.find_by_name(params[:report_name])
    # some calculations.....
    c = Places.find_by_name(params[:place_name])
  end
end
Even though they invoke different models, the actions are encapsulated into one call to the DB. It is all or nothing, though. If one fails in the middle then the entire capsule fails.
Though the transaction class method is called on some Active Record class, the objects within the transaction block need not all be instances of that class. This is because transactions are per-database connection, not per-model.
You can take a look at the ActiveRecord::ConnectionAdapters::ConnectionPool documentation.
Also, AR doesn't open a connection for each model/query; it reuses the existing connection:
[7] pry(main)> [Advertiser.connection,Agent.connection,ActiveRecord::Base.connection].map(&:object_id)
=> [70224441876100, 70224441876100, 70224441876100]
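If you really do want to manage the checkout yourself, the pool will lend you a connection for the duration of a block; a small sketch:

  # Checks a connection out of the pool for the block and checks it back in afterwards.
  # Models sharing ActiveRecord::Base's pool use that same connection inside the block.
  ActiveRecord::Base.connection_pool.with_connection do
    a = User.find_by_id(params[:user_id])
    b = Reporst.find_by_name(params[:report_name])
    c = Places.find_by_name(params[:place_name])
  end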