Rails partial enumerating over associations extremely slow - ruby-on-rails

I've got a Document model which has_many Pages, and I have a view where I need to enumerate a bunch of documents (e.g. 300) and make a button for each page. (I want to do pagination client side using the DataTables jQuery plugin so that the table can be sortable and searchable.) The problem is that if I try to enumerate all the buttons for each Page in each Document, it takes over 10 seconds to render, which is just not useful.
Is there any 'trick' to doing this kind of nested collection rendering fast? Should I just cache the fragments for each document (they don't change much once they're created)? Or is this just a bad situation for Rails partials and would my best bet be to do some client side rendering as part of the pagination in DataTables?
Edit: I'm already including associations so that I don't have an N+1 query problem; that's not the issue. I also tried caching, and it seems like that might be my solution for now, because this index page gets reloaded often between every few documents being added, so it never has to rebuild the full cache of all the partials.
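For reference, a minimal sketch of the eager loading plus per-document fragment caching described above (the partial and variable names are illustrative, not from the original app):
# controller - eager load pages to avoid N+1 queries
@documents = Document.includes(:pages)

# index view - one cached fragment per document
<% @documents.each do |document| %>
  <% cache document do %>
    <%= render 'documents/document', :document => document %>
  <% end %>
<% end %>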

The immediate need for caching seems like a code smell. Rather than guessing, have you tried profiling? (e.g. http://hiltmon.com/blog/2012/02/27/quick-and-dirty-rails-performance-profiling/)
Add ruby-prof to your Gemfile:
gem 'ruby-prof'
Define a profiler:
# /lib/development_profiler.rb
class DevelopmentProfiler
  def self.prof(file_name)
    RubyProf.start
    yield
    results = RubyProf.stop

    # Ensure the output directory exists
    FileUtils.mkdir_p("#{Rails.root}/tmp/performance")

    # Print a call graph profile to HTML
    File.open "#{Rails.root}/tmp/performance/#{file_name}-graph.html", 'w' do |file|
      RubyProf::GraphHtmlPrinter.new(results).print(file)
    end

    # Print a flat profile (with line numbers) to text
    File.open "#{Rails.root}/tmp/performance/#{file_name}-flat.txt", 'w' do |file|
      # RubyProf::FlatPrinter.new(results).print(file)
      RubyProf::FlatPrinterWithLineNumbers.new(results).print(file)
    end

    # Print a call stack profile to HTML
    File.open "#{Rails.root}/tmp/performance/#{file_name}-stack.html", 'w' do |file|
      RubyProf::CallStackPrinter.new(results).print(file)
    end
  end
end
Wrap your view logic with:
DevelopmentProfiler.prof("render-profiling") do
  # Your slow code here
end
Edit - An additional thought: because your model's data is not likely to change, it may be better to eat the rendering cost once. You could statically generate the whole rendered page in an after_save callback, then just serve that single file for subsequent requests.
Though if warming a more traditional cache isn't a huge inconvenience, this may be more trouble than it's worth.
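For what it's worth, a rough sketch of the static-generation idea (assuming Rails 5+, where ApplicationController.render is available outside a request; the partial and file paths are illustrative):
# app/models/document.rb
class Document < ApplicationRecord
  has_many :pages
  after_save :write_static_fragment

  private

  # Render the partial once and store the HTML on disk;
  # subsequent requests can serve the file instead of re-rendering.
  def write_static_fragment
    html = ApplicationController.render(
      partial: 'documents/document',
      locals: { document: self }
    )
    File.write(Rails.root.join('tmp', 'cache', "document_#{id}.html"), html)
  end
end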

Related

Memcached for fragment caching in Rails

I'm trying to figure out the best way to organize the caching system for my scenario:
My web app has "trending movies," which are basically like Twitter's trending topics -- the popular topics of conversation. I've written the method Movie.trending, which returns an array of 5 Movie objects. However, since calculating the trending movies is fairly CPU intensive and they will be shown on every page, I want to cache the result and let it expire after 5 minutes. Ideally, I'd like to be able to call Movie.trending from anywhere in the code and assume that caching will work how I expect it to -- if the cached results are more than 5 minutes old, renew them; otherwise, serve the cached results.
Is fragment caching the right choice for a task like this? Are there any additional gems I ought to be using? I'm not using Heroku.
Thanks!
To cache at the model level you can use Rails.cache.fetch; see the example below:
# model - app/models/movie.rb
class Movie
  def self.trending
    Rails.cache.fetch("trending_movies", :expires_in => 5.minutes) do
      # CPU intensive operations
    end
  end
end
# helper - app/helpers/application_helper.rb
module ApplicationHelper
  def trending_movies
    content_tag :div do
      Movie.trending
    end
  end
end
# view - app/views/shared/_trending_movies.html.erb
<%= trending_movies %>
To test it in development mode, don't forget to turn on caching for that environment.
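For example, with the standard settings (in Rails 5+ you can instead run rails dev:cache to toggle this):
# config/environments/development.rb
config.action_controller.perform_caching = true
config.cache_store = :memory_store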

Rails 3 and Memcached - Intelligent caching without expiration

I am implementing caching in my Rails project via Memcached, particularly trying to cache side column blocks (most recent photos, blogs, etc.). Currently I have them expiring the cache every 15 minutes or so, which works, but it would be better if the cache refreshed whenever new content is added or updated.
I was watching the episode of the Scaling Rails screencasts on Memcached http://content.newrelic.com/railslab/videos/08-ScalingRails-Memcached-fixed.mp4, and at 8:27 in the video, Gregg Pollack talks about intelligent caching in Memcached in a way where intelligent keys (in this example, the updated_at timestamp) are used to replace previously cached items without having to expire the cache. So whenever the timestamp is updated, the cache would refresh as it seeks a new timestamp, I would presume.
I am using my "Recent Photos" sideblock for this example, and this is how it's set up...
_side-column.html.erb:
<div id="photos"">
<p class="header">Photos</p>
<%= render :partial => 'shared/photos', :collection => #recent_photos %>
</div>
_photos.html.erb
<% cache(photos) do %>
  <div class="row">
    <%= image_tag photos.thumbnail.url(:thumb) %>
    <h3><%= link_to photos.title, photos %></h3>
    <p><%= photos.photos_count %> Photos</p>
  </div>
<% end %>
On the first run, Memcached caches the block as views/photos/1-20110308040600 and will reload that cached fragment when the page is refreshed, so far so good. Then I add an additional photo to that particular row in the backend and reload, but the photo count is not updated. The log shows that it's still loading from views/photos/1-20110308040600 and not grabbing an updated timestamp. Everything I'm doing appears to be the same as what the video is doing, what am I doing wrong above?
In addition, there is a part two to this question. As you can see in the partial above, the @recent_photos query is called for the collection (out of a module in my lib folder). However, I noticed that even when the block is cached, this SELECT query is still being run. I attempted to wrap the entire partial in a block at first as <% cache(@recent_photos) do %>, but obviously this doesn't work -- especially as there is no real timestamp on the whole collection, just on its individual items. How can I prevent this query from being made if the results are cached already?
UPDATE
In reference to the second question, I found that unless Rails.cache.exist? may just be my ticket, but what's tricky is the wildcard nature of using the timestamp...
UPDATE 2
Disregard my first question entirely; I figured out exactly why the cache wasn't refreshing: the updated_at field wasn't being updated. The reason is that I was adding/deleting an item that is a nested resource in a parent, and I probably need to implement a "touch" on that association in order to update the updated_at field in the parent.
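For reference, that "touch" is usually declared on the child's association; a minimal sketch (the model names here are illustrative, not from the question):
# app/models/photo.rb
class Photo < ActiveRecord::Base
  # Saving or destroying a photo bumps the parent's updated_at,
  # which changes its cache key and expires the cached fragment.
  belongs_to :album, :touch => true
end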
But my second question still stands: the main @recent_photos query is still being called even if the fragment is cached. Is there a way, using Rails.cache.exist?, to target a cache that is named something like /views/photos/1-2011random ?
One of the major flaws with Rails caching is that you cannot reliably separate the controller and the view for cached components. The only solution I've found is to embed the query in the cached block directly, but preferably through a helper method.
For instance, you probably have something like this:
class PhotosController < ApplicationController
  def index
    # ...
    @recent_photos = Photos.where(...).all
    # ...
  end
end
The first instinct would be to run that query only if it will be required by the view, for example by testing for the presence of the cached content. Unfortunately there is a small chance that the content will expire in the interval between testing for it being cached and actually rendering the page, which will lead to a template rendering error when the nil-valued @recent_photos is used.
Here's a simpler approach:
<%= render :partial => 'shared/photos', :collection => recent_photos %>
Instead of using an instance variable, use a helper method. Define the helper method to do the same load you would have done in the controller:
module PhotosHelper
  def recent_photos
    @recent_photos ||= Photos.where(...).all
  end
end
In this case the value is memoized, so multiple calls to the same helper method only trigger the query once. This may not be necessary in your application and can be omitted. All the method is obligated to do is return a list of "recent photos", after all.
A lot of this mess could be eliminated if Rails supported sub-controllers with their own associated views, which is a variation on the pattern employed here.
As I've been working further with caching since asking this question, I think I'm starting to understand exactly the value of this kind of caching technique.
For example, to render the page for an article I need a variety of things, including queries against other tables -- maybe five to seven different queries per article. Caching the article fragment in this way reduces all those queries to one.
I am assuming that with this technique there always needs to be at least one query, as there needs to be some way to tell whether the timestamp has been updated or not.

Rails 3: Caching to Global Variable

I'm sure "global variable" will get the hair on the back of everyone's neck standing up. What I'm trying to do is store a hierarchical menu in an acts_as_tree data table (done). In application_helper.rb, I create an html menu by querying the database and walking the tree (done). I don't want to do this for every page load.
Here's what I tried:
application.rb
config.menu = nil
application_helper.rb
def my_menu_builder
  return MyApp::Application.config.menu if MyApp::Application.config.menu
  # All the menu building code that should only run once
  MyApp::Application.config.menu = menu_html
end
menu_controller.rb
def create
  # whatever create code
  expire_menu_cache
end

protected

def expire_menu_cache
  MyApp::Application.config.menu = nil
end
Where I stand right now is that on first page load, the database is, indeed, queried and the menu built. The results are stored in the config variable and the database is never again hit for this.
It's the cache expiration part that's not working. When I reset the config.menu variable to nil, I presume that the next time through my_menu_builder it will detect the change, rebuild the menu, and cache the new result. That doesn't seem to happen.
Questions:
Is Application.config a good place to store stuff like this?
Does anyone see an obvious flaw in this caching strategy?
Don't say premature optimization -- that's the phase I'm in. The premature-optimization iteration :)
Thanks!
I would avoid global variables, and use Rails' caching facilities.
http://guides.rubyonrails.org/caching_with_rails.html
One way to achieve this is to set an empty hash in your application.rb file:
MY_VARS = {}
Then you can add whatever you want to this hash, which is accessible everywhere.
MY_VARS[:foo] = "bar"
and elsewhere:
MY_VARS[:foo]
As you sensed, this is not the Rails way to do it, even if it works. There are different ways to use caching in Rails:
a simple in-memory cache (explained in the guide above):
Rails.cache.read("city") # => nil
Rails.cache.write("city", "Duckburgh")
Rails.cache.read("city") # => "Duckburgh"
use of a real engine like memcached
I encourage you to have a look at http://railslab.newrelic.com/scaling-rails
This is THE place to learn caching in all its shapes.
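Applied to the menu from the question, a sketch using the Rails cache instead of a config variable might look like this (the cache key name is illustrative):
# application_helper.rb
def my_menu_builder
  Rails.cache.fetch("main_menu") do
    # All the menu-building code that should only run once
    menu_html
  end
end

# menu_controller.rb
def expire_menu_cache
  Rails.cache.delete("main_menu")
end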

Rails optimization (with ActiveRecord and view helpers)

Is there a way to do this in Rails:
I have an ActiveRecord query:
@posts = Post.find_by_id(10)
Anytime the query is called, SQL is generated and executed at the DB that looks like this
SELECT * FROM 'posts' WHERE id = 10
This happens every time the AR query is executed. Similarly with a helper method like this
<%= f.textarea :name => 'foo' %>
#=> <input type='textarea' name='foo' />
I write some Railsy code that generates text used by some other system (database, web browser). I'm wondering if there's a way to write an AR query or a helper method call so that the generated text is stored in a file. That way the text rendering is only done once (each time the code changes) instead of each time the method is called.
Look at the log: the first query may go to the database, but subsequent ones may say CACHE at the start of the line, meaning they're served from ActiveRecord's query cache.
It also sounds to me like you want to cache the page, not the query. And even if it were the query, I don't think it's as simple as find_by_id(10) :)
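If it's the rendered page you want to generate only once, Rails of that era shipped with page caching (extracted into the actionpack-page_caching gem as of Rails 4); a minimal sketch:
class PostsController < ApplicationController
  # Writes the rendered response to public/posts/10.html on the first
  # request; the web server serves that static file afterwards.
  caches_page :show

  def show
    @post = Post.find(params[:id])
  end
end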
Like Radar suggested, you should probably look into Rails caching. You can start with something simple like the memory store or file cache and then move to something better like Memcached if necessary. You can add some caching to the helper method, which will cache the result after it is queried once. For example you can do:
id = 10 # id is probably coming in as a param/argument
cache_key = "Post/#{id}"
@post = Rails.cache.read(cache_key)
if @post.nil?
  @post = Post.find_by_id(id)
  # Write back to the cache for the next time
  Rails.cache.write(cache_key, @post)
end
The only other thing left to do is put in some code to expire the cache entry if the post changes. For that take a look at using "Sweepers" in Rails. Alternatively you can look at some of the caching gems like Cache-fu and Cached-model.
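Note that the read-then-write pattern above is what Rails.cache.fetch does in a single call; an equivalent sketch:
@post = Rails.cache.fetch("Post/#{id}") do
  Post.find_by_id(id)
end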
I'm not sure I understand your question fully.
If you're asking about the generated query, you can just use find_by_sql and write your own SQL if you don't want to use the ActiveRecord dynamic finders.
If you're asking about caching the result set to a file: it's already in the database, and I don't know that it would be much more efficient in a file.
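For example, a hand-written query via find_by_sql might look like this:
# Bypasses the dynamic finder and runs the SQL as given
@posts = Post.find_by_sql(["SELECT * FROM posts WHERE id = ?", 10])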

Profile a rails controller action

What is the best way to profile a controller action in Ruby on Rails? Currently I am using the brute-force method of throwing in puts Time.now calls around what I think will be a bottleneck. But that feels really, really dirty. There has got to be a better way.
I picked up this technique a while back and have found it quite handy.
When it's in place, you can add ?profile=true to any URL that hits a controller. Your action will run as usual, but instead of delivering the rendered page to the browser, it'll send a detailed, nicely formatted ruby-prof page that shows where your action spent its time.
First, add ruby-prof to your Gemfile, probably in the development group:
group :development do
  gem "ruby-prof"
end
Then add an around filter to your ApplicationController:
around_action :performance_profile if Rails.env == 'development'

def performance_profile
  if params[:profile] && result = RubyProf.profile { yield }
    out = StringIO.new
    RubyProf::GraphHtmlPrinter.new(result).print out, :min_percent => 0
    self.response_body = out.string
  else
    yield
  end
end
Reading the ruby-prof output is a bit of an art, but I'll leave that as an exercise.
Additional note by ScottJShea:
If you want to change the measurement type, place this:
RubyProf.measure_mode = RubyProf::GC_TIME # example
before the if in the performance_profile method of the ApplicationController. You can find a list of the available measurement modes in the ruby-prof documentation. As of this writing, the memory and allocations data streams seem to be corrupted (see defect).
Use the Benchmark standard library and the various tests available in Rails (unit, functional, integration). Here's an example:
def test_do_something
  elapsed_time = Benchmark.realtime do
    100.downto(1) do |index|
      # do something here
    end
  end
  assert elapsed_time < SOME_LIMIT
end
So here we just do something 100 times, time it via the Benchmark library, and ensure that it took less than SOME_LIMIT amount of time.
You also may find these links useful: The Benchmark.realtime reference and the Test::Unit reference. Also, if you're into the 'book reading' thing, I picked up the idea for the example from Agile Web Development with Rails, which talks all about the different testing types and a little on performance testing.
There's a Railscast on profiling that's well worth watching
http://railscasts.com/episodes/98-request-profiling
You might want to give the FiveRuns TuneUp service a try, as it's really rather impressive. Disclaimer: I'm not associated with FiveRuns in any way, I've just tried this service out.
TuneUp is a free service whereby you download a plugin and when you run your application it injects a panel at the top of the screen that can be expanded to display detailed performance metrics.
It gives you some nice graphs, including one that shows what proportion of time is spent in the Model, View and Controller. You can even drill right down to see the individual SQL queries that ActiveRecord is executing if you need to and it can show you the underlying database schema with another click.
Finally, you can optionally upload your profiling data to the FiveRuns site for community performance analysis and advice.
This works in Rails 4.2.6:
require 'ostruct'

# Any object with a logger can be extended with ActiveSupport::Benchmarkable
o = OpenStruct.new(logger: Rails.logger)
o.extend ActiveSupport::Benchmarkable
o.benchmark 'name' do
  # ... your code ...
end
