Rails 4 Fragment Caching

I'm trying to increase performance for my app so I'm looking into fragment caching.
I'm trying to understand what to cache. For example, on all pages of my site I display a list of recent articles.
In my application controller I have a filter that sets:
@recent_articles = Article.get_recent
I have the following in my view/footer:
- cache(cache_key_for_recent_articles) do
%h3 RECENT ARTICLES
- @recent_articles.each do |article|
.recent-article
= link_to add_glyph_to_link("glyphicon glyphicon-chevron-right", article.name), article_path(article, recent: true)
- if Article.count > 4
= link_to "MORE ARTICLES", articles_path(), class: "btn btn-primary more-articles"
My question is: am I caching this properly? I'm tailing the logs and still see a query for the articles, so I assume not. It's also not clear to me what happens when I run the query in the controller but cache only a section of the page.
Is this a place for low level caching rather than fragment caching?
Thanks.

You're doing it right. It might seem silly, because it always has to make the db hit anyway, but the gains can be substantial. Imagine each article had threaded comments with images: if you kept the controller exactly the same, the same caching construct would save you a tremendous amount of db effort. So yes, if you can pull from memcached instead of running through haml with a bunch of Rails helpers (those link_tos aren't free) you'll save a bit for sure, but the real gains come when you subtly restructure your architecture (as lazily as possible) to really take advantage of it. As for that initial hit on Articles: your db should do a pretty good job of caching that call, and I'm not sure you would want to cache it too aggressively anyway, given the name of the method.
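On the low-level caching question, a minimal runnable sketch of the fetch-or-compute pattern may help. TinyCache is a made-up stand-in for Rails.cache (same Rails.cache.fetch semantics, backed by a plain Hash), and the article data is invented:

```ruby
# TinyCache stands in for Rails.cache so the idea runs on its own.
class TinyCache
  def initialize
    @store = {}
  end

  # Return the cached value if present; otherwise run the block,
  # store its result, and return it (Rails.cache.fetch semantics).
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

QUERY_COUNT = { runs: 0 }

def get_recent_articles(cache)
  cache.fetch("recent_articles") do
    QUERY_COUNT[:runs] += 1 # stands in for the SQL hit
    %w[article-1 article-2 article-3]
  end
end

cache = TinyCache.new
FIRST  = get_recent_articles(cache) # miss: runs the "query"
SECOND = get_recent_articles(cache) # hit: served from the store
```

With this pattern the controller could cache the query result itself, independently of (or in addition to) the view fragment.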


Why is my fragment caching writing so much?

I'm using fragment caching in Rails 3 and I see a write for nearly every single read. I only delete from the cache when an individual record changes:
after_save do
Rails.cache.delete("#{self.class.table_name}_for_environment_#{merchant_id}")
Rails.cache.delete("jobs_index_table_environment_#{merchant_id}_job_#{self}")
Rails.cache.delete("tech_archives_page_environment_#{merchant_id}_job_#{self}")
end
after_destroy do
Rails.cache.delete("#{self.class.table_name}_for_environment_#{merchant_id}")
Rails.cache.delete("jobs_index_table_environment_#{merchant_id}_job_#{self}")
Rails.cache.delete("tech_archives_page_environment_#{merchant_id}_job_#{self}")
end
What I observe with the cache is that nearly every record is both read and written on every request. This is not my understanding of how caching works, so obviously I'm messing something up.
Here is a snippet of comments:
2015-10-19T16:06:39.939351+00:00 app[web.2]: Read fragment views/job_comments_comment_#<Comment:0x007f9aa9b69860> 1.7ms
2015-10-19T16:06:39.955819+00:00 app[web.2]: Read fragment views/job_comments_comment_#<Comment:0x007f9aa9b69450> 10.1ms
2015-10-19T16:06:39.945527+00:00 app[web.2]: Write fragment views/job_comments_comment_#<Comment:0x007f9aa9b69860> 1.9ms
2015-10-19T16:06:39.999099+00:00 app[web.2]: Write fragment views/job_comments_comment_#<Comment:0x007f9aa9b69450> 42.6ms
So far as I can tell, the two objects being read and written are the same (the memory address is identical between the read and write pairs).
As the comment (in this case) hasn't changed, why is it being written to the cache? Shouldn't it only be read?
EDIT:
I think that I understand it better now - I chanced upon a blog that had this wonderful snippet:
You can give an Array to cache and your cache key will be based on a
concatenated version of everything in the Array. This is useful for
different caches that use the same ActiveRecord objects.
And holy cow does that clear things up in my head.
I re-use the same AR objects on multiple views for various reasons, and I discovered the hard way that you cannot simply do:
<% cache(ar) do %>
on every page, because it will bring the HTML and styling with it. So I renamed each of my cache keys to something like:
<% cache("this_specific_page_#{ar}") do %>
But with the object interpolated into the string, each of these cache keys ends up unique on every request, because the interpolation includes the object's memory address.
Passing an array of args to the cache method makes it behave as expected (I think - still testing).
So now I have:
<% cache(["this_specific_page", session[:key], ar]) do %>
This separates the objects per page as I need, but it keeps a single ar record as just that - free of the string I had concatenated it with earlier.
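The key churn above can be shown in plain Ruby. A bare object's default to_s embeds its memory address, so interpolating the object produces a fresh key for every new instance; building the key from identifying values instead stays stable (the key names echo the question, and the join stands in for how cache(Array) concatenates its parts):

```ruby
class Comment
end

first  = Comment.new
second = Comment.new

# Interpolating the object embeds its #<Comment:0x...> printed form,
# so "the same" comment gets a brand-new key on every request:
KEY_FIRST  = "jobs_index_table_environment_1_job_#{first}"
KEY_SECOND = "jobs_index_table_environment_1_job_#{second}"

# The stable alternative: key on values that identify the record.
STABLE_FIRST  = ["jobs_index_table", 1, "job", 42].join("/")
STABLE_SECOND = ["jobs_index_table", 1, "job", 42].join("/")
```

This is exactly why the logs show a read miss and a write for every comment on every request.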

Rails 4 / Heroku smart expire cache

We have in our application some blocks which are cached.
According to some logic we sometimes modify some of them, and in that case we have logic that expires the relevant blocks.
When we change the code, we need to expire these blocks via the console. In that case we have to pinpoint the exact logic in order to expire all the modified blocks. For example, if we change the header html of closed streams, it looks like:
a = ActionController::Base.new
Stream.closed.each {|s| a.expire_fragment("stream_header_#{s.id}") }; nil
Actually, I think there must be a more generic way: simply compare each cached block with how it would be rendered now, and expire only the blocks whose html differs from the cached version.
I wonder if there is a gem that does this task, and if not - if somebody has already written some deploy hook to do it.
============== UPDATE ============
Some thought:
In a rake task one can get the cached fragment, as long as you know which fragments you have.
For example, in my case I can do:
a = ActionController::Base.new
Stream.all.each do |s|
cached_html = a.read_fragment("stream_header_#{s.id}")
:
:
If I could generate the non-cached html I could simply compare them, and expire the cached fragment in case they are different.
Is it possible?
How heavy do you think this task will be?
Not at all easy to answer with so little code.
You can use the
cache @object do
render something
end
So, based on the hash of the object, the cache will invalidate itself. The same is true for the template: Rails creates a hash of the rendered template as well and combines it with the hash of the object to invalidate the fragment properly. This also works at a deeper level, making it possible to invalidate an entire branch of the render tree.
Let me point you toward the Rails guide on caching and Russian doll caching.
http://edgeguides.rubyonrails.org/caching_with_rails.html
There was also a great video on the caching by these guys:
https://www.codeschool.com/courses/rails-4-zombie-outlaws
They are free, but it looks like you have to register now.
I hope this is in the right direction for your need.
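The self-invalidating keys this answer describes can be sketched in plain Ruby. ActiveRecord's cache_key is roughly "model/id-timestamp", so saving a record "expires" the old fragment simply by changing the key (StreamRecord is a made-up name; a Struct is enough to show the mechanics):

```ruby
StreamRecord = Struct.new(:id, :updated_at) do
  # Mirrors the "id-timestamp" shape of ActiveRecord's cache_key.
  def cache_key
    "streams/#{id}-#{updated_at.utc.strftime('%Y%m%d%H%M%S')}"
  end
end

s = StreamRecord.new(1, Time.utc(2015, 10, 19, 16, 6, 39))
STALE_KEY = s.cache_key
s.updated_at = Time.utc(2015, 10, 19, 16, 7, 0) # what a save would do
FRESH_KEY = s.cache_key
```

Old entries are never explicitly expired; they simply stop being read and eventually fall out of the store, which is why no deploy-time expiry task is needed under this scheme.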

Rails: Skip controller if cache fragment exist (with cache_key)

I have been using caching for a long time and recently discovered that my fragment caching no longer stops my controller from executing its code, as it used to. I have boiled the problem down to the cache_key, which seems to be a new feature.
This is my previous solution that no longer works as expected.
Product#Show-view:
cache('product-' + @product.id.to_s) do
# View stuff
end
Product#Show-controller:
unless fragment_exist?('product-19834') # ID obviously dynamically loaded
# Perform slow calculations
end
The caching works fine. It writes and reads the fragment, but it still executes the controller code (which is the whole reason I want to use caching). This boils down to the fragment key getting an added template digest, so the fragment created is something like:
views/product-19834/b05c4ed1bdb428f73b2c73203769b40f
So when I call fragment_exist? I am not checking for the right string (since I am checking for 'views/product-19834'). I have also tried to use:
fragment_exist?("product-#{@product.id}/#{@product.cache_key}")
but it checks with a different cache key than is actually created.
I would rather use this solution than controller-caching or gems like interlock.
My question is:
- How do I, in the controller, check whether a fragment exists for a specific view, considering this cache key?
As Kelseydh pointed out in the link, the solution is to pass :skip_digest => true in the cache call:
View
cache("product-" + @product.id.to_s, :skip_digest => true)
Controller
fragment_exist?("product-#{#product.id}")
It might be worth pointing out that while the proposed solution (fragment_exist?) can work, it's more of a hack.
In your question, you say
It writes and reads the fragment, but it still executes the controller
(which is the whole reason I want to use caching)
So what you actually want is "controller caching". But fragment caching is "view caching":
Fragment Caching allows a fragment of view logic to be wrapped in a
cache block and served out of the cache store
(Rails Guides 5.2.3)
For "controller caching", Rails already provides some options:
Page Caching
Action Caching
Low-Level Caching
Which are all, from my point of view, better suited for your particular use case.
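Of those options, low-level caching maps most directly onto "skip the slow calculations". A hedged sketch of the pattern, with a plain Hash standing in for Rails.cache and made-up names for the slow work:

```ruby
SLOW_RUNS = { count: 0 }
STORE = {}

# Fetch-or-compute, standing in for Rails.cache.fetch.
def cached(key)
  STORE.fetch(key) { STORE[key] = yield }
end

# Stand-in for the show action's slow calculations, keyed per product,
# so the controller skips the work on a hit without probing for a
# view fragment at all.
def product_calculations(product_id)
  cached("product-#{product_id}-calculations") do
    SLOW_RUNS[:count] += 1 # the slow work
    { stats: [1, 2, 3] }
  end
end

product_calculations(19_834)
product_calculations(19_834) # hit: slow work skipped
product_calculations(20_000) # different product: computed once for it
```

This decouples the controller's caching from the view's digest entirely, which is why it avoids the fragment_exist? key-matching problem.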

Rails 3 and Memcached - Intelligent caching without expiration

I am implementing caching in my Rails project via Memcached, particularly trying to cache side column blocks (most recent photos, blogs, etc). Currently I have them expiring the cache every 15 minutes or so. That works, but it would be better if the cache refreshed whenever new content is added or updated.
I was watching the episode of the Scaling Rails screencasts on Memcached http://content.newrelic.com/railslab/videos/08-ScalingRails-Memcached-fixed.mp4, and at 8:27 in the video, Gregg Pollack talks about intelligent caching in Memcached in a way where intelligent keys (in this example, the updated_at timestamp) are used to replace previously cached items without having to expire the cache. So whenever the timestamp is updated, the cache would refresh as it seeks a new timestamp, I would presume.
I am using my "Recent Photos" sideblock for this example, and this is how it's set up...
_side-column.html.erb:
<div id="photos">
<p class="header">Photos</p>
<%= render :partial => 'shared/photos', :collection => @recent_photos %>
</div>
_photos.html.erb
<% cache(photos) do %>
<div class="row">
<%= image_tag photos.thumbnail.url(:thumb) %>
<h3><%= link_to photos.title, photos %></h3>
<p><%= photos.photos_count %> Photos</p>
</div>
<% end %>
On the first run, Memcached caches the block as views/photos/1-20110308040600 and will reload that cached fragment when the page is refreshed, so far so good. Then I add an additional photo to that particular row in the backend and reload, but the photo count is not updated. The log shows that it's still loading from views/photos/1-20110308040600 and not grabbing an updated timestamp. Everything I'm doing appears to be the same as what the video is doing, what am I doing wrong above?
In addition, there is a part two to this question. As you see in the partial above, the @recent_photos query is called for the collection (out of a module in my lib folder). However, I noticed that even when the block is cached, this SELECT query is still executed. I attempted at first to wrap the entire partial in a block as <% cache(@recent_photos) do %>, but obviously this doesn't work - especially as there is no real timestamp on the whole collection, just on its individual items. How can I prevent this query from being made if the results are already cached?
UPDATE
In reference to the second question, I found that unless Rails.cache.exist? may be just my ticket, but what's tricky is the wildcard nature of the timestamp in the key...
UPDATE 2
Disregard my first question entirely, I figured out exactly why the cache wasn't refreshing. That's because the updated_at field wasn't being updated. Reason for that is that I was adding/deleting an item that is a nested resource in a parent, and I probably need to implement a "touch" on that in order to update the updated_at field in the parent.
But my second question still stands... the main @recent_photos query is still being called even when the fragment is cached. Is there a way, using Rails.cache.exist?, to target a cache named something like /views/photos/1-2011random?
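The "touch" fix from UPDATE 2 can be sketched in plain Ruby (Album and Photo are made-up names for the parent and nested resource; in Rails the touch happens automatically via the association option rather than a hand-written save!):

```ruby
Album = Struct.new(:id, :updated_at) do
  # Fragment keys embed updated_at (ActiveRecord-style), so bumping
  # the parent's timestamp is what refreshes the fragment.
  def cache_key
    "albums/#{id}-#{updated_at.to_i}"
  end
end

class Photo
  def initialize(album)
    @album = album
  end

  # In Rails, belongs_to :album, :touch => true does this on save.
  def save!(now)
    @album.updated_at = now
  end
end

album = Album.new(1, Time.utc(2011, 3, 8, 4, 6, 0))
OLD_KEY = album.cache_key
Photo.new(album).save!(Time.utc(2011, 3, 8, 5, 0, 0))
NEW_KEY = album.cache_key
```

Once the nested save bumps the parent's updated_at, the old fragment key is simply never read again.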
One of the major flaws with Rails caching is that you cannot reliably separate the controller and the view for cached components. The only solution I've found is to embed the query in the cached block directly, but preferably through a helper method.
For instance, you probably have something like this:
class PhotosController < ApplicationController
def index
# ...
@recent_photos = Photos.where(...).all
# ...
end
end
The first instinct would be to run that query only if the view will require it, such as by testing for the presence of the cached content. Unfortunately there is a small chance that the content will expire in the interval between testing for the cache and actually rendering the page, which would lead to a template rendering error when the nil-valued @recent_photos is used.
Here's a simpler approach:
<%= render :partial => 'shared/photos', :collection => recent_photos %>
Instead of using an instance variable, use a helper method. Define the helper the way you would have written the load inside the controller:
module PhotosHelper
def recent_photos
@recent_photos ||= Photos.where(...).all
end
end
In this case the value is saved, so multiple calls to the same helper method trigger the query only once. This may not be necessary in your application and can be omitted. All the method is obligated to do is return a list of "recent photos", after all.
A lot of this mess could be eliminated if Rails supported sub-controllers with their own associated views, which is a variation on the pattern employed here.
As I've been working further with caching since asking this question, I think I'm starting to understand exactly the value of this kind of caching technique.
For example, I have an article, and between all the things I need for the page (which include querying other tables) I might need five to seven different queries per article. Caching the article this way reduces all those queries to one.
I am assuming that with this technique there always needs to be at least one query, as there needs to be some way to tell whether the timestamp has been updated or not.

Rails View DRYness - Do you set variables in the view or just make clean methods?

I have a view in which I have the same link 3 times (actual view is large):
%h1= link_to "Title", model_path(@model, :class => "lightbox")
= link_to "Check it out", model_path(@model, :class => "lightbox")
%footer= link_to "Last time", model_path(@model, :class => "lightbox")
That model_path(@model, :class => "lightbox") call, though fairly clean, can be made even leaner by wrapping it in this (maybe you had some more options, making this worthwhile):
def popup_model_path(model)
model_path(model, :class => "lightbox")
end
My question is, I am having to recalculate that path 3 times in a view. What is the preferred way of a) DRYing this up and b) optimizing performance?
I think setting variables at the top of the view might be a good idea here:
- path = model_path(@model, :class => "lightbox")
-# ... rest of view
It's almost like mustache in the end then. What are your thoughts?
I think using variables in the view is a good idea here, since these method calls are exactly the same.
I prefer the solution proposed by Matt in some cases, but not in this one, because I find it confusing: the fact that the result is cached in the method is not obvious, and if I want to show two different models on one page I would still get the first cached link for both models.
So in this case I would choose the somewhat more explicit approach and assign it to a variable in the view.
I really hate putting variables in the view. I would change your helper to
def popup_model_path(model)
@model_path ||= {}
@model_path[model] ||= model_path(model, :class => "lightbox")
end
to "memoize" it, and just keep the three function calls.
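A runnable sketch of this hash-based memoization, with a made-up model_path standing in for the Rails route helper: unlike a single ivar, a Hash keyed by model caches one path per model, so two different models on one page don't collide.

```ruby
ROUTE_CALLS = { count: 0 }

# Stand-in for the Rails model_path route helper.
def model_path(model, options = {})
  ROUTE_CALLS[:count] += 1
  "/models/#{model}?class=#{options[:class]}"
end

# The memoized helper from the answer: one cached path per model.
def popup_model_path(model)
  @model_path ||= {}
  @model_path[model] ||= model_path(model, :class => "lightbox")
end

PATH_A_FIRST  = popup_model_path("a")
PATH_A_SECOND = popup_model_path("a") # memoized: no extra route call
PATH_B        = popup_model_path("b") # its own entry in the hash
```

Note the hash key addresses the objection about two models on one page: each model gets its own cached path.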
This seems like a possible case of premature optimization. Making a function like popup_model_path is a fantastically DRY idea, especially if that bit of code, however terse, is going to be used frequently across multiple views. But worrying about the performance impact of calculating the path three times in one view is, in my opinion, needless. Unless we're talking about something used dozens or hundreds of times per view, with many many simultaneous users and the app running on a shared server, I really don't see your current code having any perceptible impact on performance.
As a general rule, I do my best to avoid variables in my view code. They make it harder to read and with a few exceptions (such as variables directly related to loops that display stuff like lists) I feel they kinda go against the whole MVC concept as I understand it.
I think above all else you should strive for code that is easily readable, understandable, and maintainable; both for yourself and others not previously familiar with your project. popup_model_path as you have it now is simple enough to where anyone who knows Rails can follow what you're doing. I don't see any need to make it any more complicated than that since it's not terribly repetitive. I wish I could find this excellent blog post I remember reading a while ago that made the point that DRYing up your code is great, but it has its limits, and like all great things the law of diminishing returns eventually kicks in.