What is automatically "cached" when rendering a view? - ruby-on-rails

Let's say we have these two outputs in a view:
@post.user.first_name
current_user.posts.size
If the outputs above were each called multiple times in a single view, is Rails "smart enough" not to hit the database every time?
If the answer is yes - are there any general rules about this worth knowing?
If the answer is no - would a good practice then be to store the associated object(s) in their own variable?

ActiveRecord by default caches queries for performance: within a single request, if the exact same SQL query is executed more than once, the repeats are served from ActiveRecord's query cache instead of hitting the database (they show up in the log as CACHE). So this covers repeated queries in the view as well.
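As a rough illustration (a sketch, not code from the question; the published column is made up), the same SQL issued twice is only executed once:

# Inside a controller action the query cache is already on;
# ActiveRecord::Base.cache just enables it for the block so you can try this in a console.
ActiveRecord::Base.cache do
  Post.where(published: true).to_a   # hits the database
  Post.where(published: true).to_a   # identical SQL, served from the query cache (logged as CACHE)
end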
You can manually cache objects with Rails' Fragment Caching feature.
Fragment Caching allows a fragment of view logic to be wrapped in a
cache block and served out of the cache store when the next request
comes in.
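For the example in the question, a minimal sketch of a fragment cache would look like this (assuming @post is set in the controller):

<% cache @post do %>
  <%# Rendered once, stored in the cache store, and reused until @post's
      updated_at changes the cache key %>
  <%= @post.user.first_name %>
<% end %>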
There are also several cache stores you can choose from.
Rails provides different stores for the cached data (apart from SQL
and page caching).
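The store itself is configured per environment; a sketch (the memcached address is just an example):

# config/environments/production.rb
config.cache_store = :mem_cache_store, "cache.example.com:11211"
# other built-in options include :memory_store, :file_store and (in newer Rails) :redis_cache_store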

Queries for a view are done in the controller action that renders the view. You will notice that you define @post in your controller action, but you may not see current_user defined; that is typically because you're using the Devise gem, and the code defining the current_user method is part of the gem.
Everything you need to render a view should be queried by ActiveRecord in your controller and held in memory in preparation for rendering the view, so multiple calls to @post or current_user shouldn't matter.
Sometimes objects are reached through associations in a view, e.g. @post.user.name is going to have to query for the user. This will work, but it is better Model-View-Controller separation to eager load the users along with the posts in the controller. Following MVC and making sure your queries happen in the controller will also help you avoid N + 1 query performance issues. See Rails Active Record Query Eager Load.
An example of loading a post together with its user:
@post = Post.includes(:user).find(post_params)
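The same idea for a collection, which is where N + 1 problems usually appear (a sketch):

# One query for the posts plus one query for all of their users,
# instead of an extra user query for every post rendered in the view.
@posts = Post.includes(:user)
@posts.each { |post| post.user.first_name }  # no additional queries here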

Related

How do I efficiently pass an ActiveRecord relation to a controller?

I have a Rails view that makes heavy use of scopes to filter an Invoice table of hundreds of thousands of rows down to several thousand filtered records in @invoices. The @invoices object is an ActiveRecord relation.
Once the user presses a button in the view, these thousands of records need to be sent for processing to another controller / model.
Which would be the best way to accomplish this?
Passing the @invoices object itself as a param is not possible, so I can only think of a few options:
1) passing an array of ids as parameters like this:
link_to bulk_process_path(@comprobantes.pluck(:id)), method: :post
but I'm worried I would hit the server's POST max size if there are a lot of records
2) passing the scopes involved in the original filtered view as parameters and recreating the filter in the target controller.
However, this seems like unnecessary hits on the database. Furthermore, if I ever wanted to implement checkboxes to further refine the filtered view, this method wouldn't work.
3) Creating a temp table in the view, sending the name as a parameter and then reading it from the external controller? Then I'd have to keep track of and delete stale temp tables. Doesn't seem very elegant.
Maybe I'm missing something obvious but there doesn't seem to be an elegant solution.
Any help would be appreciated.
I can suggest another option.
When the user enters the page and starts filtering, you can save the filters in the session; then you make an Ajax request on each checkbox change, saving those ids as exceptions when a box is unchecked or removing the exception when it's checked again.
You can even use websockets to make it more realtime.
You can also change the session store to ActiveRecordStore if you think the exceptions array can get too big, or use something like Redis, which is really fast.
That way, when the user has finished fine-tuning the filter, you do a POST request without needing to send any params, since everything is saved in the session. You can then recreate the filter from the saved parameters and exclude all the ids that were unchecked.
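A rough sketch of that flow (the controller actions, scope and session key names here are my own, not from the question):

# Called via Ajax while the user filters and toggles checkboxes
def update_filter
  session[:invoice_filters]      = params.permit(:status, :date_from, :date_to).to_h
  session[:excluded_invoice_ids] = params[:excluded_ids] || []
  head :ok
end

# The bulk action rebuilds the same relation without sending any ids in the request
def bulk_process
  invoices = Invoice.filtered(session[:invoice_filters])            # hypothetical scope
                    .where.not(id: session[:excluded_invoice_ids])
  # ... hand invoices off for processing
end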
Personally I think I would go this way. Hope this helps.

Does Mongoid support lazy loading in similar manner Active Record does?

I have a Rails 3 project that I updated to 5, which uses Mongoid instead of Active Record. I'm trying to implement fragment caching.
My understanding is that with ActiveRecord, even if I have something like
@films = Film.all in a controller but never use @films in the view, the query won't actually run. Hence, if I cache @films in the view, then on the second request it'll be read from cache and the query isn't going to run.
This is how I think ActiveRecord works.
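To make that concrete, the pattern I have in mind looks roughly like this (a sketch, assuming a Film model):

# Controller: Film.all only builds a relation; no query runs yet (lazy).
@films = Film.all

<%# View: on a cache hit the block body isn't executed, so @films is never
    enumerated and the query never runs %>
<% cache "films" do %>
  <%= render @films %>
<% end %>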
Now to Mongoid. I cache the variable in the view, but even though it's being read from cache, the query still hits the db.
My question is, is there a way to avoid that with Mongoid?
Or am I missing something in terms of caching?
I tried searching online, but there isn't much on Rails Mongoid caching, let alone anything written after 2012.

Model methods in views

I have the following piece of code in my view:
<% if @user.get_products.any? %>
<%= f.select('products', @user.get_products.collect { |product| [product.name, product.id] }) %>
Where get_products is making an ActiveRecord query to the DB. So I wonder if it actually runs the same query twice or uses the cached result of the first query on the second call? I tried to check the server logs but found nothing. I would also like to know if I can control this behavior somehow, i.e. enable/disable caching for this view.
Thanks!
UPDATE:
I think it violates MVC too, but what confused me was an IDE warning: "view should not share more than two variables with controller."
However, I am creating a somewhat "one page" website, so I need to have @user, @nearest_users, and @user_products in the same view. So I found the following article:
http://matthewpaulmoore.com/post/5190436725/ruby-on-rails-code-quality-checklist#instances
which said
Only one or two instance variables are shared between each controller
and view.
Instance variable hell - when a lot of instance variables are shared
between a controller and a view - is easy to do in Rails. It’s also
potentially dangerous to performance, because you can end up making
duplicate calls on associations unknowingly. Instead, your controller
should only manage one instance variable - and perhaps a second for
the current_user. That way, all calls to associations are loaded “on
demand”, and can be instance-variable-cached in a single place.
This methodology also works out well for fragment caching, because you
can check caches in views before actually loading associations.
For example, instead of having your Blog controller create an instance
variable for both @post and @related_posts, just make a single method,
@post, and give your Post model a related_posts method, so you can
just call @post.related_posts in your views.
To answer your question: the result of get_products is not memoized for you, so the view builds and executes the query twice (ActiveRecord's per-request query cache may serve the second, identical SQL statement from memory, but the objects are still re-instantiated each time).
What's more, the Ruby embedded in an ERB template is evaluated on every render, so repeated queries there add up.
So my advice is: avoid putting query logic in the view; it's bad practice.
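If the duplicate query is the concern, one option is to memoize the result so that both calls in the view share a single query. A sketch (the body of get_products is my guess, not the asker's code):

# In the User model
def get_products
  # Run the query once and keep the array for later calls in the same request
  @get_products ||= products.order(:name).to_a
end

With that, the any? check and the collect call in the view both use the same in-memory array.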

How to improve performance of single-page application?

Introduction
I have a (mostly) single-page application built with BackboneJS and a Rails backend.
Because most of the interaction happens on one page of the webapp, when the user first visits the page I basically have to pull a ton of information out of the database in one large deeply joined query.
This is causing me some rather extreme load times on this one page.
NewRelic appears to be telling me that most of my problems are because of 457 individual fast method calls.
Now I've done all the eager loading I can do (I checked with the Bullet gem) and I still have a problem.
These method calls are most likely occurring in my Rabl serializer, which I use to serialize a bunch of JSON to embed into the page for initializing Backbone. You don't need to understand all of this, but suffice to say it could add up to 457 method calls.
object @search
attributes :id, :name, :subscription_limit

# NOTE: Include a list of the members of this search.
child :searchers => :searchers do
  attributes :id, :name, :gravatar_icon
end

# Each search has many concepts (there could be over 100 of them).
child :concepts do |search|
  attributes :id, :title, :search_id, :created_at

  # The person who suggested each concept.
  child :suggester => :suggester do
    attributes :id, :name, :gravatar_icon
  end

  # Each concept has many suggestions (approx. 4 each).
  node :suggestions do |concept|
    # Here I'm scoping suggestions to only ones which meet certain conditions.
    partial "suggestions/show", object: concept.active_suggestions
  end

  # Add a boolean flag to tell if the concept is a favourite or not.
  node :favourite_id do |concept|
    # Another method call which occurs for each concept.
    concept.favourite_id_for(current_user)
  end
end

# Each search has subscriptions to certain services (approx. 4).
child :service_subscriptions do
  # This contains a few attributes and 2 fairly innocuous method calls.
  extends "service_subscriptions/show"
end
So it seems that I need to do something about this but I'm not sure what approach to take. Here is a list of potential ideas I have:
Performance Improvement Ideas
Dumb-Down the Interface
Maybe I can come up with ways to present information to the user which don't require the actual data to be present. I don't see why I should absolutely need to do this though, other single-page apps such as Trello have incredibly complicated interfaces.
Concept Pagination
If I paginate concepts it will reduce the amount of data being extracted from the database each time. It would produce an inferior user interface, though.
Caching
At the moment, refreshing the page just extracts the entire search out of the DB again. Perhaps I can cache parts of the app to reduce DB hits. This seems messy though, because not much of the data I'm dealing with is static.
Multiple Requests
It feels technically worse to serve the page without the JSON embedded in it, but perhaps the user will feel like things are happening faster if I load the page unpopulated and then fetch the data.
Indexes
I should make sure that I have indexes on all my foreign keys. I should also try to think about places where it would help to have indexes (such as favourites?) and add them.
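A sketch of what that migration might look like (table and column names are guesses based on the serializer above):

class AddLookupIndexes < ActiveRecord::Migration
  def change
    add_index :concepts, :search_id
    add_index :suggestions, :concept_id
    add_index :favourites, [:user_id, :concept_id]
  end
end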
Move Method Calls into DB
Perhaps I can cache some of the results of the iteration I do in my view layer into the DB and just pull them out instead of computing them. Or I could sync things on write rather than on read.
Question
Does anyone have any suggestions as to what I should be spending my time on?
This is a hard question to answer without being able to see the actual user interface, but I would focus on loading only as much data as is required to display the initial interface. For example, if the user has to drill down to see some of the data you're presenting, then you can load that data on demand rather than loading it as part of the initial payload. You mention that a search can have as many as 100 "concepts"; maybe you don't need to fetch all of those concepts initially?
Bottom line, it doesn't sound like your issue is really on the client side -- it sounds like your server-side code is slowing things down, so I'd explore what you can do to fetch less data, or to defer the complex queries until they are definitely required.
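For example, instead of embedding every concept in the initial payload, the client could fetch them after the first render (a sketch; the controller, association names and page size are illustrative):

class ConceptsController < ApplicationController
  def index
    concepts = Concept.where(search_id: params[:search_id])
                      .includes(:suggester)
                      .limit(25)
                      .offset(params[:offset].to_i)
    render json: concepts
  end
end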
I'd recommend separating your JS code-base into modules that are dynamically loaded using an asset loader like RequireJS. This way you won't have so many XHRs firing at load time.
When a specific module is needed it can load and initialize at an appropriate time instead of every page load.
At the cost of a little extra complexity, each module can be made to start and stop. So if you have any polling or complex code executing, you can stop the module to increase performance and decrease the network load.

Caching bunch of simple queries in rails

In my app there are objects, and they belong to countries, regions, cities, types, groups, companies and other sets. Every set is rather simple: it has an id, a name and sometimes some pointers to other sets, and it never changes. Some sets are small and I load them in a before_filter like this:
@countries = Country.all
@regions = Region.all
But then I call, for example,
offer.country.name
or
region.country.name
and my app performs a separate db query-by-id, although I've already loaded them all. Afterwards I run a query with :include, and in that case the eager loading fetches those records again regardless of whether I've already loaded the same data with an earlier query-by-id.
So I want some kind of cache. For example, I could build hashes keyed by record id in my before_filter and then call @countries[offer.country_id].name. In that case it seems I wouldn't need eager loading, and it would be easy to plug Rails.cache in here. But maybe there's some smart built-in Rails solution that doesn't require rewriting everything?
Caching lists of models like that won't cache the individual instances that are returned by other models' associations.
The Rails team implemented an Identity Map in Rails 3.1 to solve this exact problem, but it is disabled by default for now. You can enable it and see if it works for your problem.
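If you want to try it, the Identity Map in 3.1 is switched on with a config flag (as far as I remember):

# config/application.rb -- Rails 3.1 only; the Identity Map was removed again in Rails 4
config.active_record.identity_map = true

Alternatively, the hash approach you describe is a one-liner with ActiveSupport's index_by, e.g. @countries = Country.all.index_by(&:id), which gives you @countries[offer.country_id] lookups without extra queries.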
