I have a method on a User object, which has many associated documents.
Inside that method I need to call documents in many places, with self as the implicit caller.
So I was wondering whether it would call documents on the user that many times, and thought I could call it once and keep a reference, e.g. docs = self.documents (or docs = documents), and use that reference wherever the user's documents are needed, thereby avoiding repeated calls to the documents association method.
But does it really query again and again, or is the result cached after the first call?
I checked in the console: when I call user.documents the first time, it loads the documents (a DB call), but later the same call does not hit the database.
Can you explain how this works? Is it good practice to keep a reference from the first call and reuse it?
Rails automatically caches the result of database calls. From the Rails Guides:
Query caching is a Rails feature that caches the result set returned by each query. If Rails encounters the same query again for that request, it will use the cached result set as opposed to running the query against the database again.
For example:
class ProductsController < ApplicationController
  def index
    # Run a find query
    @products = Product.all
    ...
    # Run the same query again
    @products = Product.all
  end
end
The second time the same query is run, it's not actually going to hit the database. The first time, the result returned from the query is stored in the query cache (in memory); the second time it's pulled from memory.
However, it's important to note that query caches are created at the start of an action and destroyed at the end of that action, so they persist only for the duration of the action. If you'd like to store query results in a more persistent fashion, you can do so with low-level caching.
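For example, if you do want a result to outlive the request, a minimal low-level caching sketch might look like this (the cache key and expiry here are illustrative, not from the guides):
products = Rails.cache.fetch("products/all", expires_in: 5.minutes) do
  Product.all.to_a # only executed on a cache miss
end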
My recommendation is not to assign it to a variable, because it does nothing to improve the readability of the code and the performance difference is negligible. It could even introduce confusion: if I were reading the code and saw all calls to documents replaced with docs, I would have to take time to work out why.
Ultimately, setting docs = self.documents just tells Ruby "docs should point at the same object as self.documents", and whichever one you call, Ruby returns the same data from the same place in memory. There is a performance difference between calling a method and reading a local variable, but it is so minor compared to something like the cost of a database call that it can be ignored; there are far better ways to improve an app's performance than replacing method calls with variable reads.
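You can see both the association caching and the "same object" point in the console (assuming a User model with has_many :documents):
user = User.first
docs = user.documents         # the SELECT runs when the collection is first loaded
user.documents                # no new query: the loaded association is cached on this user instance
docs.equal?(user.documents)   # => true, both names point at the same collection proxy
user.documents.reload         # forces a fresh query when you actually want one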
If your concern is that you don't want to type out documents over and over again when you could just type docs, then use alias_method:
class User < ApplicationRecord
  has_many :documents
  alias_method :docs, :documents
end
Then there is no difference between calling user.documents and user.docs -- they call the same method. But again, does it do anything to improve readability of the code? In my opinion, no.
Stick with calling documents.
Related
I want to implement low-level caching in my application, but I'm having some trouble following the documentation. This is the example they give:
class Product < ActiveRecord::Base
  def competing_price
    Rails.cache.fetch("#{cache_key}/competing_price", expires_in: 12.hours) do
      Competitor::API.find_price(id)
    end
  end
end
My questions are:
How am I supposed to get that cache_key variable? Is it provided somehow by the Rails cache, or should I build my own key?
I'm not sure I've clearly understood how this works, so please confirm whether this logic is correct: I set this up, for example, in my controller to generate a bunch of variables. Then every time a variable is requested (from a view, for example), instead of recalculating it each time (a long query), the controller retrieves the pre-computed result as long as the key hasn't changed. If the key has changed, it recalculates everything inside the cache block again.
ActiveRecord::Integration includes the cache_key method in Rails 4. It should be included by default in the standard Rails configuration. To test this, pop open console, get a record and call cache_key on it.
Reference on the method is available here.
It will usually generate a string similar to "#{record.class.name.underscore}/#{record.to_param}-#{record.updated_at}". This key-based invalidation approach saves you a lot of manual expiration work: the cache value is looked up under a key that embeds when the record was last updated, so old cache values are simply ignored because they're never retrieved again.
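A quick console sketch of what that looks like (the model and timestamp values are illustrative):
product = Product.first
product.cache_key   # => "products/42-20140601123456" (class, id and updated_at)
product.touch       # bumps updated_at, so the next cache_key is different
product.cache_key   # => "products/42-20140601124500"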
DHH wrote a great article on the topic here.
The special_item_id_list method is responsible for returning an array of ids. The query and logic are complicated enough that I only want to run it once per page request, but I'll be using the resulting array of ids in many different places. The idea is to be able to use the is_special? method or the special_items scope freely without worrying about incurring overhead each time they are used, so they rely on the special_item_id_list method to do the heavy lifting and caching.
I don't want the results of this query to persist between page loads, but I'd like the query to run only once per page load. I don't want to use a global variable, and thought a class variable on the model might work; however, it appears that the class variable does persist between page loads. I'm guessing the Item class is part of the Rails stack and stays in memory.
So where would be the preferred place for storing my id list so that it's rebuilt on each page load?
class Item < ActiveRecord::Base
  scope :special_items, lambda { where(:id => special_item_id_list) }

  def self.special_item_id_list
    @special_item_id_list ||= ... # some complicated queries
  end

  def is_special?
    self.class.special_item_id_list.include?(id)
  end
end
UPDATE: What about using Thread? I've done this before for tracking the current user, and I think it could be applied here, but I wonder if there's another way? Here's a StackOverflow conversation discussing threads, which also mentions the request_store gem as possibly a cleaner way of doing so.
This railscast covers what you're looking for. In short, you're going to want to do something like this:
class Item < ActiveRecord::Base
  after_commit :flush_cache

  def self.cached_special_item_list
    Rails.cache.fetch("special_items") do
      special_item_id_list
    end
  end

  private

  def flush_cache
    # Clear the cached list whenever an item changes, so it is rebuilt on the next call.
    Rails.cache.delete("special_items")
  end
end
At first I went with a form of Jonathan Bender's suggestion of utilizing Rails.cache (thanks John), but wasn't quite happy with how I was having to expire it. For lack of a better idea I thought it might be better to use Thread after all. I ultimately installed the request_store gem to store the query results. This keeps the data around for the duration I wanted (the lifetime of the request/response) and no longer, without any need for expiration.
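If you go the request_store route, a minimal sketch might look like this (assuming the request_store gem is in the Gemfile; the method names mirror the question's example):
class Item < ActiveRecord::Base
  scope :special_items, lambda { where(:id => special_item_id_list) }

  def self.special_item_id_list
    # RequestStore.store is emptied after every request, so the expensive
    # queries run at most once per request with no manual expiration.
    RequestStore.store[:special_item_id_list] ||= ... # some complicated queries
  end

  def is_special?
    self.class.special_item_id_list.include?(id)
  end
end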
Are you really sure this optimisation is necessary? Are you having performance issues because of it? Unless it's actually a problem I would not worry about it.
That said, you could create a new class, make special_item_id_list an instance method on that class, and then pass an instance around to anything that needs to use that expensive-to-calculate data.
Or it might suffice to cache the data on instances of Item (possibly by making special_item_id_list an instance method), and not worry about different instances not being able to share the cache.
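For example, one shape that first suggestion could take (the class and method names are purely illustrative):
class SpecialItemList
  def ids
    @ids ||= ... # the same complicated queries, memoized on this instance
  end

  def include?(item)
    ids.include?(item.id)
  end
end
Build one per request (in the controller action, say) and pass it to whatever needs it:
special = SpecialItemList.new
items.select { |item| special.include?(item) }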
Is there a way to run code BEFORE model retrieval? I know about the after_find callback, but I need to run code before the retrieval. I'd also like it to run only ONCE per retrieval, regardless of the number of records returned. Looking at the Rails source, it seems the query is actually executed in exec_queries (or to_a) on ActiveRecord::Relation. Should I override this method to add my hook?
And just in case I'm going about this all wrong: the reason I'm asking is that I have an external REST API I'm using to retrieve data, but it's too slow to hit on every page reload. I was originally using memcached, but I figured I could just use ActiveRecord to cache the data in a database so I can easily query it and possibly join it with similar data from other REST APIs. I'd like to plug in a callback that, after a certain timeout, reloads the data from REST before returning the ActiveRecord results.
Basically I'm looking for the best way to centralize refreshing my database from another source (REST) instead of cluttering up my controllers or overriding every model accessor I use (is there a way to override them all easily?). Perhaps the best solution lies here.
It appears that all of the built-in methods like all, find, first, and last call apply_finder_options (and then where), while the dynamically created finders (find_by_name, etc.) call find_dynamic_match, which eventually calls where. This is what led me to the to_a method on the relation, since it is common to both and is called when the query is actually executed, not just while a relation is being built. However, reaching this deep into Rails makes me uncomfortable.
It seems like my problem shouldn't be an uncommon one, so perhaps my approach is wrong?
FYI, I'm new to rails and ruby. Thanks!
I would strongly advise against using a low-level hook in place of explicit cache checking. ActiveRecord has its own caching mechanism, but if that isn't doing it for you and you need to build your own, use it explicitly before using the ActiveRecord finders. Hooks like these can make it very confusing to determine what is happening and why, and they are not a recommended practice. Here is an example using a proxy model:
class CacheProxy
  attr_accessor :klass

  def initialize(klass)
    @klass = klass
  end

  def method_missing(method_id, *arguments, &block)
    reload_if_necessary
    klass.send(method_id, *arguments, &block)
  end

  private

  def reload_if_necessary
    return unless needs_reload?
    # perform reload
  end

  def needs_reload?
    # determine if we need to reload the cache
  end
end

class ActiveRecord::Base
  def self.proxy
    CacheProxy.new(self)
  end
end
Now you can even do MyModel.proxy.find_by_first_name_and_last_name('John', 'Doe')
I have a Rails application that serves as an interface to a hybrid of data. Most of the information I require is retrieved from a command-line program using XML-RPC. Aside from this, I need some additional data that I have no option but to store in a database. For this reason, I am having trouble figuring out the best way to design the application.
I have overridden self.all and self.find(id) such that they rely on calls to super and then "enrich" the object by defining its instance variables to the appropriate data retrieved from the program using XML-RPC.
This all seems pretty convoluted though. For example, I imagine I have lost the ability to use the magic finders (find_by_x), and I don't know if anything else will break as a result of this.
My question is whether there is a more logical and sensible way of going about this; that is, designing an application that depends on XML-RPC for most of its data, but also on some data stored in a database.
I did read about after_find. Using this callback, I could implement the "object enriching" process and have it run any time a record is found. However, my method of retrieving the data associated with a single item is different from the way I retrieve data for all items. The approach I use for all item data (self.all) is much more efficient, but unfortunately not applicable to retrieving a single item's data (self.find). This would work well if there were a way I could make the callback not apply to self.all calls.
In my experience, you shouldn't mess with ActiveRecord's finders - there is a lot of magic that they rely on.
after_find is a great direction to start with, but if you're having issues with batching, then what I'd recommend is twofold: use a caching layer and alias_method_chain to implement a version of #all that performs your batched XML-RPC find, caches the results, and then passes the call through to the unaliased original all. Your after_find would then check the cache for data first and, if it's not there, perform the remote find. This lets you batch data for all finds while still utilizing the callback.
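A rough sketch of that first approach (alias_method_chain was deprecated in Rails 5; RemoteApi and the cache keys are placeholders, not part of the original suggestion):
class Item < ActiveRecord::Base
  after_find :enrich_from_remote

  class << self
    def all_with_remote_batch(*args)
      records = all_without_remote_batch(*args)
      # One batched XML-RPC call; stash each payload so after_find can pull it from the cache.
      RemoteApi.fetch_batch(records.map(&:id)).each do |id, payload|
        Rails.cache.write("items/remote/#{id}", payload, expires_in: 10.minutes)
      end
      records
    end
    alias_method_chain :all, :remote_batch
  end

  private

  def enrich_from_remote
    payload = Rails.cache.fetch("items/remote/#{id}") { RemoteApi.fetch_one(id) }
    # ... assign instance variables from the payload ...
  end
end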
That said, there is probably an easier way to do this. I would just use models that don't descend from ActiveRecord::Base but rather from some XML-RPC base interface, and then give them faux associations that point to AR instances holding your database information. Thus, you might have something like:
class XmlRpcModelBase
  ...

  def find(...)
  end

  def all(...)
  end

  def extra_data
    @extra_data ||= SomeActiveRecordModel.find(...)
  end
end

class Foo < XmlRpcModelBase
end
It's not ideal, and honestly, it's going to depend a lot on how much of this is read, and how much is read/write, but I would try to stay out of ActiveRecord's way where possible, and instead just bolt on the AR-related pieces as necessary.
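Hypothetical usage of that sketch, just to make the division of labour concrete:
foo = Foo.find(some_id)   # attributes come from the XML-RPC service
foo.extra_data            # lazily loads the associated ActiveRecord record on first access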
I'm creating a tableless Rails model, and am a bit stuck on how I should use it.
Basically I'm trying to create a little application using Feedzirra that scans an RSS feed every X seconds and then sends me an email with only the updates.
I'm actually trying to use it as an ActiveRecord model, and although I can get it to work, it doesn't seem to "hold" data as expected.
As an example, I have an initialize method that parses the feed for the first time. On subsequent requests, I would like to simply call the get_updates method, which, according to Feedzirra, takes the existing feed object (created during initialize) and updates it with only the differences.
I'm finding it really hard to understand how this all works, as the object created in the initialize method doesn't seem to persist across the model's methods.
My code looks something like:
def initialize
  # parse the feed here
end

def get_updates
  # Feedzirra update, passing the feed object here
end
Not sure if this is the right way of doing it, but it all seems a bit confusing and not very clear. I could be over- or under-doing things here, but I'd like your opinion on this approach.
Thanks in advance
Using the singleton design pattern, it is possible to keep values in memory between requests in Ruby on Rails. Rails does not reload all objects on every request, so it is possible to keep an in-memory store.
With the following in config/initializers/xxx:
require 'singleton'

class PersistanceVariableStore
  include Singleton

  def set(val)
    @myvar = val
  end

  def get
    @myvar
  end
end
In a controller, for example:
@r = PersistanceVariableStore.instance
@r.set(params[:set]) if params[:set]
Then in a view:
<%= @r.get %>
The value in @r will persist between requests (unless running in CGI mode).
Not that I think this is a good idea...
The instance variable will not persist between requests, since each request works with entirely different instances. You will likely want to store the feed data in a database so it is saved between requests and can be updated on subsequent requests.
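A minimal sketch of that idea, assuming a FeedEntry model with guid, title, url and published_at columns (the model, columns and Feedzirra accessors are assumptions, not from the question):
class FeedEntry < ActiveRecord::Base
  def self.update_from_feed(feed_url)
    feed = Feedzirra::Feed.fetch_and_parse(feed_url)
    new_entries = feed.entries.reject { |entry| exists?(:guid => entry.entry_id) }
    new_entries.map do |entry|
      # Each newly created record is one of the "updates" worth emailing about.
      create!(:guid => entry.entry_id, :title => entry.title,
              :url => entry.url, :published_at => entry.published)
    end
  end
end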