Fragment Caching In Rails 3 - ruby-on-rails

I have a partial that pulls in weather data using the Barometer gem. The problem is that each time the page is loaded the gem is pulling in the weather feed again. Is it possible to cache the partial using fragment caching but only have that cached version to be good for say 3-4 hours? That way the weather data will stay current and the gem won't have to pull in brand new data each time.

The cache helper function takes three arguments:
def cache(name = {}, options = nil, &block)
  if controller.perform_caching
    safe_concat(fragment_for(name, options, &block))
  else
    yield
  end
  nil
end
If you provide an :expires_in key in the second argument, it will be passed on to write_fragment and eventually to the cache store itself.
Like this (in Haml):
- cache "cache_path", :expires_in => 3.hours do
  = cached_fragment_code
I recommend reading ActionView::Helpers::CacheHelper for more info.

Related

Rails Low-Level Caching: Update cache when ActiveRecord object updated_at changes OR when a new object is added to collection

Rails ships with Fragment Caching and Low-Level Caching. It is pretty clear how fragment caching works:
Rails will write a new cache entry with a unique key. If the value of
updated_at has changed, a new key will be generated. Then Rails will
write a new cache to that key, and the old cache written to the old
key will never be used again. This is called key-based expiration.
Cache fragments will also be expired when the view fragment changes
(e.g., the HTML in the view changes). Cache stores like Memcached will
automatically delete old cache files.
<% @products.each do |product| %>
  <% cache product do %>
    <%= render product %>
  <% end %>
<% end %>
So the view will be cached, and when the updated_at of the ActiveRecord object associated with the view changes or the HTML changes, a new cache entry is created. Understandable.
I don't want to cache a view. I want to cache a Hash collection built from an ActiveRecord query. I know Rails has SQL caching, where it caches the result of the exact same query when used within a single request. But I need the results to be available across multiple requests, and only updated when updated_at changes for one of the objects or a new object is added to the collection.
Low-Level Caching caches a particular value or query result instead of caching view fragments.
def get_events
  @events = Event.search(params)
  event_data = nil # declared here so it is still in scope after the Benchmark block
  time = Benchmark.measure {
    event_data = Rails.cache.fetch 'event_data' do
      # A TON OF EVENTS TO LOAD ON CALENDAR
      @events.collect do |event|
        {
          title: event.title,
          description: event.description || '',
          start: event.starttime.iso8601,
          end: event.endtime.iso8601,
          allDay: event.all_day,
          recurring: (event.event_series_id) ? true : false,
          backgroundColor: (event.event_category.color || "red"),
          borderColor: (event.event_category.color || "red")
        }
      end
    end
  }
  Rails.logger.info("CALENDAR EVENT LOAD TIME: #{time.real}")
  render json: event_data.to_json
end
But right now I do not think the cache expires if one of the events is updated or a new hash is added to the collection. How can I do this?
Rails.cache.fetch is a shortcut to read and write
What we're trying to do with the cache is this:
value = Rails.cache.read(:some_key)
if value.nil?
  expected_value = ...
  Rails.cache.write(:some_key, expected_value)
end
We first try to read from the cache, and if no value exists, we retrieve the data from wherever it lives, add it to the cache, and then use it however we need.
On subsequent calls, we'll try to access the :some_key cache key again, but this time it will exist and we won't need to retrieve the value again; we just take the value that's already cached.
This is what fetch does all at once:
value = Rails.cache.fetch(:some_key) do
  # if :some_key doesn't exist in the cache, the result of this block
  # is stored under :some_key and returned
end
It's no more than a very handy shortcut, but it's important to understand its behaviour.
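To make that behaviour concrete, here is a minimal, plain-Ruby sketch of fetch semantics using a Hash as a stand-in store (an illustration only, not the actual Rails implementation; the class name TinyCache is made up):

```ruby
# A toy cache store demonstrating read/write/fetch semantics.
class TinyCache
  def initialize
    @store = {}
  end

  def read(key)
    @store[key]
  end

  def write(key, value)
    @store[key] = value
  end

  # fetch = read, and on a miss, run the block, write its result, return it
  def fetch(key)
    return read(key) if @store.key?(key)
    write(key, yield)
  end
end

cache = TinyCache.new
calls = 0
2.times { cache.fetch(:some_key) { calls += 1; "expensive result" } }
# the block ran only once; the second call was served from the cache
```

Running this, calls ends up as 1: only the first fetch evaluates the block, exactly like Rails.cache.fetch on a cache miss.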
How do we deal with our low-level caching?
The key here (pun intended) is to choose a cache key that changes when the cached data is no longer up to date with the underlying data. That's generally easier than updating the existing cached values: instead, you simply make sure never to reuse a cache key where old data has been stored.
For instance, to store all events data in the cache, we'd do something like:
def get_events
  # Get the most recent event; this should be a pretty fast query
  last_modified = Event.order(:updated_at).last
  # Turns a 2018-01-01 01:23:45 datetime into 20180101012345
  # We could use to_i or anything else, but the examples would be less readable
  last_modified_str = last_modified.updated_at.utc.to_s(:number)
  # And our cache key would be
  cache_key = "all_events/#{last_modified_str}"
  # Let's check this cache key: if it doesn't exist in our cache store,
  # the block will store all events at this cache key and return the value
  all_events = Rails.cache.fetch(cache_key) do
    Event.all.to_a # to_a forces the query so we cache records, not a relation
  end
  # do whatever we need to with the all_events variable
end
What's really important here:
The main data loading happens inside the fetch block. It must not be triggered every time you enter this method, or you lose the whole benefit of caching.
The choice of the key is paramount! It must:
Change as soon as your data gets stale. Otherwise, you'd hit an "old" cache key holding old data, and wouldn't be serving the latest data.
BUT! Determining the cache key must cost a lot less than retrieving the cached data, since you'll be "calculating" the cache key every single time you enter this method. If determining the cache key takes longer than retrieving the actual data, you'll have to approach things differently.
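The key scheme described above can be sketched in plain Ruby (strftime here stands in for ActiveSupport's to_s(:number); the helper name cache_key_for is made up for illustration):

```ruby
# Build a key-based-expiration cache key: collection name plus the most
# recent updated_at timestamp, formatted as YYYYMMDDHHMMSS.
def cache_key_for(name, updated_at)
  "#{name}/#{updated_at.utc.strftime('%Y%m%d%H%M%S')}"
end

cache_key_for("all_events", Time.utc(2018, 3, 3))
# => "all_events/20180303000000"
```

As soon as any event is touched, updated_at advances, the string changes, and fetch sees a brand-new key.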
An example with this previous method
Let's check how the previous get_events method behaves with an example. Say we have the following Events in our DB:
| ID | Updated_at |
|----|------------------|
| 1 | 2018-01-01 00:00 |
| 2 | 2018-02-02 00:00 |
| 3 | 2018-03-03 00:00 |
First call
At this point, let's call get_events. Event #3 is the most recently updated one, so Rails will check the cache key all_events/20180303000000. It does not exist yet, so all events will be requested from the DB and stored into the cache with this cache key.
Same data, subsequent calls
If you don't update any of those events, all next calls to get_events will be hitting the cache key all_events/20180303000000 which now exists and contains all events. Therefore, you won't hit the DB, and just use the value from the cache.
What if we modify an Event?
Event.find(2).touch
We've modified event #2, so what was previously stored in the cache is no longer up to date. We now have the following events list:
| ID | Updated_at |
|----|------------------|
| 1 | 2018-01-01 00:00 |
| 2 | 2018-04-07 19:27 | <--- just updated :)
| 3 | 2018-03-03 00:00 |
The next call to get_events will take the most recent event (#2 now), and therefore try to access the cache key all_events/20180407192700... which does not exist yet! Rails.cache.fetch will evaluate the block and put all the current events, in their current state, into this new key all_events/20180407192700. And you don't get served stale data.
What about your particular issue?
You'll have to find the proper cache key, and make it so that event data loading is done inside the fetch block.
Since you filter your events with params, the cache will be depending on your params, so you'll need to find a way to represent the params as a string to integrate this to your cache key. Cached events will differ from one set of params to another.
Find the most recently updated event for those params to avoid retrieving stale data. We can use ActiveRecord's cache_key method on any ActiveRecord object; it's handy and avoids tedious timestamp formatting like we did previously.
This should give you something like:
def get_events
  latest_event = Event.search(params).order(:updated_at).last
  # text representation of the given params
  # Check https://apidock.com/rails/ActiveRecord/Base/cache_key
  # for an easy way to define cache_key for an ActiveRecord model instance
  cache_key = "filtered_events/#{text_rep_of_params}/#{latest_event.cache_key}"
  event_data = Rails.cache.fetch(cache_key) do
    events = Event.search(params)
    # A TON OF EVENTS TO LOAD ON CALENDAR
    events.collect do |event|
      {
        title: event.title,
        description: event.description || '',
        start: event.starttime.iso8601,
        end: event.endtime.iso8601,
        allDay: event.all_day,
        recurring: (event.event_series_id) ? true : false,
        backgroundColor: (event.event_category.color || "red"),
        borderColor: (event.event_category.color || "red")
      }
    end
  end
  render json: event_data.to_json
end
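The text_rep_of_params placeholder above is left to you. One possibility (an assumption on my part, using Ruby's stdlib Digest; the helper name params_digest is made up) is to hash a sorted serialization of the params, so the same filters always map to the same key regardless of hash ordering:

```ruby
require 'digest'

# Turn a params hash into a short, stable string for use in a cache key.
# Sorting the pairs first makes the digest independent of insertion order.
def params_digest(params)
  Digest::MD5.hexdigest(params.sort.map { |k, v| "#{k}=#{v}" }.join("&"))
end

params_digest(:city => "Paris", :category => "music")
```

Two requests with the same filters then share one cached entry, while any change to the filters produces a different digest and therefore a separate cache key.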
Voilà! I hope it helped. Good luck with your implementation details.

Cache with expiring keys

I'm working on a mashup site and would like to limit the number of fetches to scrape the source sites. There is essentially one bit of data I need, an integer, and would like to cache it with a defined expiration period.
To clarify, I only want to cache the integer, not the entire page source.
Is there a ruby or rails feature or gem that already accomplishes this for me?
Yes, there is ActiveSupport::Cache::Store
An abstract cache store class. There are multiple cache store
implementations, each having its own additional features. See the
classes under the ActiveSupport::Cache module, e.g.
ActiveSupport::Cache::MemCacheStore. MemCacheStore is currently the
most popular cache store for large production websites.
Some implementations may not support all methods beyond the basic
cache methods of fetch, write, read, exist?, and delete.
ActiveSupport::Cache::Store can store any serializable Ruby object.
http://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html
cache = ActiveSupport::Cache::MemoryStore.new
cache.read('Chicago') # => nil
cache.write('Chicago', 2707000)
cache.read('Chicago') # => 2707000
Regarding the expiration time, this can be done by passing the time as an initialization parameter:
cache = ActiveSupport::Cache::MemoryStore.new(expires_in: 5.minutes)
If you want to cache a value with a different expiration time, you can also set this when writing a value to the cache
cache.write(key, value, expires_in: 1.minute) # Set a lower value for one entry
See Caching with Rails, particularly the :expires_in option to ActiveSupport::Cache::Store.
For example, you might go:
value = Rails.cache.fetch('key', expires_in: 1.hour) do
  expensive_operation_to_compute_value()
end

why is there need to set an expiry time for caching?

I don't see this issue explained in the Rails caching guide http://guides.rubyonrails.org/caching_with_rails.html, so I wonder if I might ask how caching works exactly in this example. On my user profile page, I cache the languages the user speaks and set an expiry of 15 minutes. I assumed that if the user updated his languages before those 15 minutes were up, the updated languages wouldn't show, because the cache hadn't expired. However, when I test this in my app, the updated languages show immediately, so I assume that updating breaks the cache. If that's the case, then why wouldn't I set the expiry to 1 hour, or to infinity?
@languages = Rails.cache.fetch("lang", :expires_in => 15.minutes) do
  Language.where(:user_id => @user.id)
end
Note, I'm using Rails 4 with memcached if that's important.
Update, if the expiry time is just about clearing the cache due to size limitations, how long should I set the expiry for?
I have a lot of information (about 15 queries similar to below) on my profile pages that I'd prefer to cache if a user keeps refreshing the page, therefore I was just going to do this
@endorsements = Rails.cache.fetch("endorsements", :expires_in => 15.minutes) do
  Endorsement.where(:subject_id => @user.id)
end
@languages = Rails.cache.fetch("lang", :expires_in => 15.minutes) do
  Language.where(:user_id => @user.id)
end
Here's what you need to do in Rails4 to get the caching to work (in development) as you'd expect:
Add 'dalli' to your Gemfile
add config.cache_store = :mem_cache_store to your config/environments/development.rb
add config.action_controller.perform_caching = true to your config/environments/development.rb
(I know you already have #3 done)
Once this is complete, you won't see the "SELECT *" in your logs anymore, and when you update your models, it will not automatically update your cache.
UPDATE:
Like @FrederickCheung says, you need to cache objects, not relations (queries). The best way is to call "to_a" on them.
You're not actually caching anything: you are just caching ActiveRecord::Relation objects (which are pretty much just a Ruby description of a query), rather than the query results themselves.
Each time the code runs, this query is pulled in its unexecuted state and then run again. To achieve what you wanted to do, you need to force the query to be executed, for example
@endorsements = Rails.cache.fetch("endorsements", :expires_in => 15.minutes) do
  Endorsement.where(:subject_id => @user.id).to_a
end
Cache expiry can be tricky - it's sometimes easier just to have cached items expire automatically rather than ensuring that every single way of changing the data clears the cache. In some cases you may not even know when the data changes, for example if you are caching the results of an external api call.
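That time-based approach can be sketched in plain Ruby (a toy stand-in, not the Rails store; the TTLCache name is made up, and the clock is injectable purely so expiry can be demonstrated without sleeping):

```ruby
# A tiny TTL cache: entries live for ttl seconds, after which the next
# fetch re-runs the block and stores a fresh value under the same key.
class TTLCache
  Entry = Struct.new(:value, :expires_at)

  def initialize(ttl, clock = -> { Time.now })
    @ttl = ttl
    @clock = clock
    @store = {}
  end

  def fetch(key)
    entry = @store[key]
    return entry.value if entry && entry.expires_at > @clock.call
    value = yield
    @store[key] = Entry.new(value, @clock.call + @ttl)
    value
  end
end

now = Time.now
cache = TTLCache.new(900, -> { now })      # 900s = 15.minutes, as above
cache.fetch("lang") { ["en", "fr"] }       # miss: runs the block, caches it
now += 901                                 # simulate the 15 minutes elapsing
cache.fetch("lang") { ["en", "fr", "de"] } # stale entry: block runs again
```

This is exactly the trade-off described above: nothing ever invalidates the entry explicitly; you just accept up to 15 minutes of staleness in exchange for never having to track every code path that changes the data.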

Rails 2 cache view helper not saving to memcached

I have a block
<% cache 'unique_key', 60.minutes.from_now do %>
  ...
  <% begin %>
    ...
  <% rescue %>
    ...
  <% end %>
<% end %>
and I'm trying to make the implementation more robust by only caching (and thus allowing the user to see) the rescue message if there isn't a previous value already in the cache. Currently, if the response in the begin block sends back an error for any reason, I'm caching the user viewed error message. I would prefer to fall back onto the old cached data. The problem that I can't get past is -
Where is cache storing the data?
Every time I try Rails.cache.read 'unique_key', I get nil back. Is cache not storing the value in memcached? Is there a way that I can dump the cache to screen?
I couldn't follow the Rails source. It seemed to me that the fragment_for method in cache was a Rails 3 thing, and thus I didn't debug further.
The cache view helper constructs a cache key based on the arguments you give it. At a minimum it adds the prefix 'views/' to the key.
You can use the fragment_cache_key helper to find out what cache key rails is using for any of your calls to cache. If you just want to grab what is currently stored, read_fragment does that. Of course with your particular usage, if your block is executed again it is because the 60 minutes are up: the cached value has been deleted from memcache.
With the memcache store you can't list all of the keys currently in the store - that's just something memcached itself doesn't support.
I solved this by using the fetch method. I used
<% Rails.cache.fetch('unique_key', :expires_in => 60.minutes) {
  begin
    ...
  rescue
    ...
  end
} %>
When I did this, I could successfully find the key. I'm still not sure why I couldn't find the cached data after adding the fragment_cache_key that I found, but using Rails.cache.fetch seemed to do the trick.

Rails 2.3 caching by time

I would like to cache a page fragment in my Rails application by time.
I found a plugin to do this (linked "here"), but no download is available.
I searched the Rails docs but couldn't find how to cache my fragment by time.
Do you know of another plugin, or another method to do this?
Thanks.
Creating a time-based cache key is quite simple.
Here's an example.
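The linked example is no longer available, but here is a sketch of what such a module might look like (the CacheKey name and expirable method are inferred from the usage below, and the implementation is an assumption). The idea is that the key embeds the current time truncated to the given unit, so it rolls over once per minute/hour/day and the old fragment is simply never read again:

```ruby
# Generates cache keys that change once per time unit, effectively
# expiring any fragment cached under the previous key.
module CacheKey
  FORMATS = {
    :minute => "%Y%m%d%H%M",
    :hour   => "%Y%m%d%H",
    :day    => "%Y%m%d"
  }

  def self.expirable(unit, time = Time.now)
    time.utc.strftime(FORMATS.fetch(unit))
  end
end

CacheKey.expirable(:hour, Time.utc(2011, 5, 4, 16, 20)) # => "2011050416"
```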
Now in your app you can write
<% cache :expires => CacheKey.expirable(:hour) do %>
  ...
<% end %>
If you want more accurate control (for example 5.minutes instead of simply 1.minute), you can easily adapt the module to dynamically generate the cache key from the time value passed as a parameter.
Another approach is to check the last-modified time of the cache file. Here's a plugin.
