In the stats part of a Rails app, I have some custom SQL queries that are run with ActiveRecord::Base.connection.execute from the model code. They return various aggregates.
Some of these (identical) queries are run in a loop in the controller, and it seems that they aren't cached by the ActiveRecord query cache.
Is there any way to cache custom SQL queries within a single request?
I'm not sure whether AR supports query caching for execute; you might want to dig into the documentation.
Anyway, what you can do is use memoization, which means you keep the result yourself until the current request is over.
Do something like this in your model:
def repeating_method_with_execute
  @rs ||= ActiveRecord::Base.connection.execute(...)
end
This will basically run the query only the first time and then keep the result in @rs until the entire request is over.
If I'm not mistaken, Rails 2.x already ships a memoize macro (ActiveSupport::Memoizable) that does all of this automatically.
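For reference, a minimal sketch of that approach (assuming Rails 2.x with ActiveSupport::Memoizable; the model and SQL are made up for illustration):

class StatsReport < ActiveRecord::Base
  extend ActiveSupport::Memoizable

  def aggregate_totals
    self.class.connection.select_all("SELECT category, SUM(amount) AS total FROM sales GROUP BY category")
  end
  memoize :aggregate_totals # repeated calls within the request reuse the first result
end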
Hope it helps.
I have a Rails 3 project that I updated to Rails 5, which uses Mongoid instead of ActiveRecord. I'm trying to implement fragment caching.
My understanding is that with ActiveRecord, even if I have something like
@films = Film.all in a controller but never use @films in the view, the query won't actually run. Hence, if I cache @films in the view, then on the second request it'll be read from the cache and the query isn't going to run.
This is how I think ActiveRecord works.
Now to Mongoid. I cache the variable in the view, but even though it's being read from the cache, the query still hits the db.
My question is, is there a way to avoid that with Mongoid?
Or am I missing something in terms of caching?
I tried searching online, but there isn't much on Rails Mongoid caching, let alone anything written after 2012.
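One pattern that can help regardless of laziness: skip building the query when the fragment is already cached. A sketch, assuming the view wraps the list in cache "films" and using the standard fragment_exist? controller helper:

# app/controllers/films_controller.rb
def index
  # Only hit MongoDB when the cached fragment is missing.
  @films = Film.all unless fragment_exist?("films")
end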
Let's say I need to implement a search algorithm for a product catalog database. This would include multiple joins across the products table, manufacturers table, inventory table, etc. etc.
In .NET / MSSQL, I would isolate such logic in a DB stored procedure, then write a wrapper method in my data access layer of my .NET app to simply call this stored procedure.
How does something like this work in RoR? From my basic understanding, RoR uses its ORM by default. Does this mean I have to move my search logic into the application layer and write it using the ORM? The SQL stored proc is pretty intense... For performance, it needs to be in the stored procedure.
How does this work in RoR?
Edit: From the first two responses, I gather that ActiveRecord is the way to do things in Ruby. Does this mean that applications that require large complex queries with lots of joins, filtering and even dynamic SQL can (should) be re-written using ActiveRecord classes?
Thanks!
While it is possible to run raw SQL statements in Rails using the execute method on a connection object, by doing so you forfeit most of the benefits of ActiveRecord (model instances, query composition, the query cache). If you still want to go down this path, you can use it like so:
ActiveRecord::Base.connection.execute("call stored_procedure_name")
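If you want the rows back as plain Ruby hashes rather than a driver-specific result object, select_all is a little friendlier. A sketch, assuming a MySQL procedure named search_products and a name column in its result set:

rows = ActiveRecord::Base.connection.select_all("CALL search_products('widget')")
rows.each do |row|
  puts row["name"] # each row is a hash of column name => value
end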
Another option to explore might be to create a "query object" to encapsulate your search logic. Inside it, you can still use ActiveRecord query methods. ActiveRecord has become fairly good at generating efficient SQL, and there is still some manual tweaking you can do.
Below is a simple scaffold for such an object:
# app/queries/search_products.rb
class SearchProducts
  def initialize(params)
    @search = params
  end

  def call
    Product.where(...) # Plus additional search logic
  end
end
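Calling it from a controller might look like this (search_params is just a placeholder for however you whitelist the incoming parameters):

# app/controllers/products_controller.rb
def index
  @products = SearchProducts.new(search_params).call
end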
The third option would be to go with something like elasticsearch-rails or sunspot. This will require some additional setup time and added complexity, but might pay off down the line if your search requirements change.
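For example, with the elasticsearch-model / elasticsearch-rails gems the wiring is roughly this (a sketch only; it assumes a running Elasticsearch cluster):

class Product < ActiveRecord::Base
  include Elasticsearch::Model
  include Elasticsearch::Model::Callbacks # keep the index in sync on save/destroy
end

Product.import(:force => true)          # (re)create the index and import existing records
Product.search("widget").records.to_a   # matching Product records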
Stored procedures are one way to make an app faster in some cases, but they come at a cost: they are harder for developers to debug and maintain. Rails is built around the ActiveRecord ORM, so if you move your logic into stored procedures, you end up bypassing much of what ActiveRecord does for you.
There are some write-ups about this for Rails:
stored-procedures-in-ruby-on-rails and using stored procedure
I'm working on an existing Rails 2 site with a large codebase that recently updated to Ruby 1.9.2 and the mysql2 gem. I've noticed that this setup allows for non-blocking database queries; you can do client.query(sql, :async => true) and then later call client.async_result, which blocks until the query completes.
It seems to me that we could get a performance boost by having all ActiveRecord queries that return a collection decline to block until a method is called on the collection. e.g.
@widgets = Widget.find(:all, :conditions => conditions) # sends the query
do_some_stuff_that_doesn't_require_widgets
@widgets.each do # if the query hasn't completed yet, wait until it does, then populate @widgets with the result and iterate through @widgets
...
This could be done by monkey-patching Base::find and its related methods to create a new database client, send the query asynchronously, and then immediately return a Delegator or other proxy object that will, when any method is called on it, call client.async_result, instantiate the result using ActiveRecord, and delegate the method to that. ActiveRecord association proxy objects already work similarly to implement ORM.
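Sketched out, the proxy half of that idea could look something like this (AsyncResultProxy and the widgets query are made up; instantiate is called via send because it isn't public in older Rails):

require 'delegate'

class AsyncResultProxy < Delegator
  def initialize(client, model)
    @client = client
    @model  = model
  end

  # Block on the async query only when the collection is first touched.
  def __getobj__
    @records ||= @client.async_result.map { |row| @model.send(:instantiate, row) }
  end

  def __setobj__(records)
    @records = records
  end
end

client = Mysql2::Client.new(:host => "localhost", :database => "app_development")
client.query("SELECT * FROM widgets WHERE active = 1", :async => true) # returns immediately
widgets = AsyncResultProxy.new(client, Widget)
# ...do other work while the query runs...
widgets.each { |w| puts w.name } # first method call blocks on async_result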
I can't find anybody who's done this, though, and it doesn't seem to be an option in any version of Rails. I've tried implementing it myself and it works in console (as long as I append ; 1 to the line calling everything so that to_s doesn't get called on the result). But it seems to be colliding with all sorts of other magic and creating various problems.
So, is this a bad idea for some reason I haven't thought of? If not, why isn't it the way ActiveRecord already works? Is there a clean way to make it happen?
I suspect that the .async_result method isn't available for all database drivers; if not, it's not something that could be merged into generic ActiveRecord calls.
A more portable way to help performance when looping over a large recordset would be to use find_each or find_in_batches. I think they'll work in Rails 2.3 as well as Rails 3.x. http://guides.rubyonrails.org/active_record_querying.html#retrieving-multiple-objects-in-batches
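For example (the batch size and the do_something_with call are just placeholders):

# Loads widgets in batches of 500 instead of instantiating the whole table at once.
Widget.find_each(:batch_size => 500) do |widget|
  do_something_with(widget)
end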
Is there a way to run code BEFORE model retrieval? I know about the after_find callback, but I need to run before. I'd also like it to run only ONCE per retrieval, regardless of the number of records returned. Looking at the RoR source, it seems the query is actually executed in exec_queries (or to_a) in ActiveRecord::Relation. Do/Should I override this method to add this hook?
And just in case I'm going about this all wrong, the reason I'm asking is I have an external REST API I am using to retrieve data, but it is too slow to retrieve after every page reload. I was originally using memcached, but I figured I could just use ActiveRecord to cache the data in a database so I can easily query the data and possibly join it with similar data from other REST APIs. I'd like to plug in a callback that would after a certain timeout duration, reload the data from REST before returning ActiveRecord results.
Basically I'm looking for the best way to centralize refreshing my database from another source (REST) instead of cluttering up my controllers or overriding every model accessor that I use (is there a way to override them all easily?). Perhaps the best solution lies here.
It appears all of the built-in methods like all, find, first, and last call apply_finder_options (and then where), but the dynamically created finders (find_by_name, etc.) call find_dynamic_match, which eventually calls where. This is what led me to the to_a method on the relation, since it is common to these paths and called when the query is actually executed, not just when a relation is being built. However, getting this low-level into Rails makes me uncomfortable.
It seems like my problem shouldn't be an uncommon one, so perhaps my approach is wrong?
FYI, I'm new to rails and ruby. Thanks!
I would strongly advise against using a low-level hook in place of explicit cache checking. ActiveRecord has its own caching mechanism, but if that isn't doing it for you and you need to build your own, use it explicitly before calling the ActiveRecord finders. Hooks like these can make it very confusing to determine what is happening and why, and they are not a recommended practice. Here is an example using a proxy model:
class CacheProxy
  attr_accessor :klass

  def initialize(klass)
    @klass = klass
  end

  def method_missing(method_id, *arguments, &block)
    reload_if_necessary
    klass.send(method_id, *arguments, &block)
  end

  private

  def reload_if_necessary
    return unless needs_reload?
    # perform reload
  end

  def needs_reload?
    # determine if we need to reload the cache
  end
end

class ActiveRecord::Base
  def self.proxy
    CacheProxy.new(self)
  end
end
Now you can even do MyModel.proxy.find_by_first_name_and_last_name('John', 'Doe')
ActiveRecord query caching is enabled when a controller action is invoked. I also know that you can do something like this to temporarily enable caching for a given model within a given code block:
User.cache do
  ...
end
But is there a way to enable query caching for other contexts (e.g. when a script is run under the rails environment using ./script/runner)?
An ActiveRecord query cache lives only for the duration of a particular action (i.e. request). If you want a cached object to survive longer or be shared between processes, you need to look at something like memcached.
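For example, with a memcached-backed Rails.cache (the key, TTL, and query below are placeholders):

stats = Rails.cache.fetch("user_stats", :expires_in => 10.minutes) do
  User.count_by_sql("SELECT COUNT(*) FROM users")
end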
This plugin will persist the query cache using memcached.
A simple solution is to wrap the code given to script/runner in a block, directly on the commandline:
script/runner "User.cache { ... }"
Notice the caching is not simply for the User model, but for all queries performed within the code block.
To get query caching when running a script you need to wrap any code with:
ActiveRecord::Base.connection.cache do
  your_code_here
end
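For example, inside such a block the second identical find should be served from the query cache instead of hitting the database (the conditions are just an example):

ActiveRecord::Base.connection.cache do
  User.find(:all, :conditions => { :active => true })
  User.find(:all, :conditions => { :active => true }) # identical SQL; served from the query cache
end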
It looks like Rails automatically enables the query cache around normal controller actions, but you have to enable it manually in other contexts.