Rails 4 / Heroku smart expire cache

We have in our application some blocks which are cached.
According to some logic we sometimes modify some of them, and in this case we have logic that expires the relevant blocks.
When we perform changes in the code, we need to expire these blocks via the console. In that case we need to detect and be precise about the exact logic in order to expire all modified blocks. For example, if we change the header html of closed streams, it will look like:
a = ActionController::Base.new
Stream.closed.each {|s| a.expire_fragment("stream_header_#{s.id}") }; nil
Actually, I think there must be a more generic way: simply compare cached blocks with how they should be rendered, and expire only the blocks whose html differs from the cached version.
I wonder if there is a gem that does this task, and if not - if somebody has already written some deploy hook to do it.
============== UPDATE ============
Some thought:
In a rake task one can get the cached fragment, as long as you know which fragments you have.
For example, in my case I can do:
a = ActionController::Base.new
Stream.find_each do |s|
  cached_html = a.read_fragment("stream_header_#{s.id}")
:
:
If I could generate the non-cached html I could simply compare them, and expire the cached fragment in case they are different.
Is it possible?
How heavy do you think this task will be?
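Something like this is what I have in mind (a sketch; the partial path is made up, and rendering with render_to_string outside a request may need extra view context depending on what the partial uses):
a = ActionController::Base.new
Stream.find_each do |s|
  key = "stream_header_#{s.id}"
  cached_html = a.read_fragment(key)
  # Render the fragment fresh, outside the request cycle (hypothetical partial path)
  fresh_html = a.render_to_string(partial: 'streams/header', locals: { stream: s })
  # Expire only the fragments whose rendered output actually changed
  a.expire_fragment(key) if cached_html != fresh_html
end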

Not at all easy to answer with so little code.
You can use the
cache @object do
  render something
end
So based on the hash of the object, the cache will invalidate itself. This is also true for the rendering template, as Rails will create a hash of the template as well and combine it with the hash of the object to invalidate it properly. This also works at a deeper level, and in this way it is possible to invalidate an entire branch of the render tree.
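For example, a Russian doll setup for the streams from the question might look like this (a sketch; it assumes Message declares belongs_to :stream, touch: true so that inner changes bubble up):
<% cache stream do %>
  <%= render 'stream_header', stream: stream %>
  <% stream.messages.each do |message| %>
    <% cache message do %>
      <%= render message %>
    <% end %>
  <% end %>
<% end %>
When a message changes, touch: true updates the stream's updated_at, so the outer fragment's key changes as well and the branch re-renders, reusing the inner fragments that did not change.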
Let me point you toward the Rails documentation on Russian doll caching.
http://edgeguides.rubyonrails.org/caching_with_rails.html
There was also a great video on caching by these guys:
https://www.codeschool.com/courses/rails-4-zombie-outlaws
They are free, but it looks like you have to register now.
I hope this is in the right direction for your need.

Related

Rails: Skip controller if cache fragment exist (with cache_key)

I have been using caching for a long time and recently discovered that my fragment caching doesn't stop my controller from executing the code, as it used to. I have boiled the problem down to having to do with the cache_key, which seems to be a new feature?
This is my previous solution that no longer works as expected.
Product#Show-view:
cache('product-' + @product.id.to_s) do
# View stuff
end
Product#Show-controller:
unless fragment_exist?('product-19834') # ID obviously dynamically loaded
# Perform slow calculations
end
The caching works fine. It writes and reads the fragment, but it still executes the controller (which is the whole reason I want to use caching). This boils down to the fragment getting a unique digest appended, so the fragment created is something like:
views/product-19834/b05c4ed1bdb428f73b2c73203769b40f
So when I check if the fragment_exist I am not checking for the right string (since I am checking for 'views/product-19834'). I have also tried to use:
fragment_exist?("product-#{#product.id}/#{#product.cache_key}")
but it checks with a different cache_key than is actually created.
I would rather use this solution than controller-caching or gems like interlock.
My question is:
- How do I, in the controller, check if a fragment exists for a specific view, considering this cache key?
As Kelseydh pointed out in the link, the solution to this is to use :skip_digest => true in the cache request:
View
cache ("product" + #product.id, :skip_digest => true)
Controller
fragment_exist?("product-#{#product.id}")
It might be worth pointing out that while the proposed solution (fragment_exist?) could work, it's more like a hack.
In your question, you say
It writes and reads the fragment, but it still executes the controller
(which is the whole reason I want to use caching)
So what you actually want is "controller caching". But fragment caching is "view caching":
Fragment Caching allows a fragment of view logic to be wrapped in a
cache block and served out of the cache store
(Rails Guides 5.2.3)
For "controller caching", Rails already provides some options:
Page Caching
Action Caching
Low-Level Caching
Which are all, from my point of view, better suited for your particular use case.
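For instance, a low-level caching version of the controller above might look like this (just a sketch; slow_calculations stands in for whatever the expensive code is):
def show
  @product = Product.find(params[:id])
  # Keyed on the product, so the entry is invalidated whenever the product changes
  @result = Rails.cache.fetch([@product, 'slow-calculations']) do
    slow_calculations(@product)
  end
end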

Rails - how to cache data for server use, serving multiple users

I have a class method (placed in /app/lib/) which performs some heavy calculations and sub-http requests until a result is received.
The result isn't too dynamic, and is requested by multiple users accessing a specific view in the app.
So, I want to schedule a periodic run of the method (using cron and the Whenever gem), store the results somewhere on the server in JSON format, and, on demand, serve only the stored results to the view.
How can this be achieved? What would be the correct way of doing that?
What I currently have:
def heavy_method
  response = {}
  # some calculations, eventually building the response
  File.open(File.expand_path('../../../tmp/cache/tests_queue.json', __FILE__), "w") do |f|
    f.write(response.to_json)
  end
end
and also a corresponding method to read this file.
I searched but couldn't find an example of achieving this using the Rails cache conventions (rather than some private code that I wrote) on data which isn't related to ActiveRecord.
Thanks!
Your solution should work fine, but using Rails.cache should be cleaner and a bit faster. The Rails Guides provide enough information about Rails.cache and how to get it to work with memcached, so let me summarize how I would use it in your case.
Heavy method
def heavy_method
  response = {}
  # some calculations, eventually building the response
  Rails.cache.write("heavy_method_response", response)
end
Request
response = Rails.cache.fetch("heavy_method_response")
The only problem here is that when your server starts for the first time, the cache will be empty. Also if/when memcached restarts.
One advantage is that somewhere along the flow, the data you pass in is marshalled into storage and then unmarshalled on the way out. Meaning you can pass in complex data structures and don't need to serialize to JSON manually.
Edit: memcached will clear your item if it runs out of memory. This will be very rare since it uses an LRU (I think) algorithm to expire things, and I presume you will use this often.
To prevent this:
set expires_in larger than your cron period,
change your fetch code to call the heavy method if your fetch fails (like Rails.cache.fetch("heavy_method_response") { heavy_method }), and change heavy_method to just return the object (see the sketch below), or
use something like redis, which will not delete items.
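Putting the first two suggestions together, a minimal sketch (the 2-hour expiry is an assumption; pick anything longer than your cron period):
def heavy_method
  response = {}
  # some calculations, eventually building the response
  response # return the object instead of writing a file
end

def cached_response
  # Falls back to recomputing if the cron-warmed entry is missing or expired
  Rails.cache.fetch("heavy_method_response", expires_in: 2.hours) { heavy_method }
end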

How does Rails 4 Russian doll caching prevent stampedes?

I am looking to find information on how the caching mechanism in Rails 4 prevents against multiple users trying to regenerate cache keys at once, aka a cache stampede: http://en.wikipedia.org/wiki/Cache_stampede
I've not been able to find out much information via Googling. If I look at other systems (such as Drupal) cache stampede prevention is implemented via a semaphores table in the database.
Rails does not have a built-in mechanism to prevent cache stampedes.
According to the README for atomic_mem_cache_store (a replacement for ActiveSupport::Cache::MemCacheStore that mitigates cache stampedes):
Rails (and any framework relying on active support cache store) does
not offer any built-in solution to this problem
Unfortunately, I'm guessing that this gem won't solve your problem either. It supports fragment caching, but it only works with time-based expiration.
Read more about it here:
https://github.com/nel/atomic_mem_cache_store
Update and possible solution:
I thought about this a bit more and came up with what seems to me to be a plausible solution. I haven't verified that this works, and there are probably better ways to do it, but I was trying to think of the smallest change that would mitigate the majority of the problem.
I assume you're doing something like cache model do in your templates as described by DHH (http://37signals.com/svn/posts/3113-how-key-based-cache-expiration-works). The problem is that when the model's updated_at column changes, the cache_key likewise changes, and all your servers try to re-create the template at the same time. In order to prevent the servers from stampeding, you would need to retain the old cache_key for a brief time.
You might be able to do this by (dum da dum) caching the cache_key of the object with a short expiration (say, 1 second) and a race_condition_ttl.
You could create a module like this and include it in your models:
module StampedeAvoider
  def cache_key
    orig_cache_key = super
    Rails.cache.fetch("/cache-keys/#{self.class.table_name}/#{self.id}", expires_in: 1, race_condition_ttl: 2) { orig_cache_key }
  end
end
Let's review what would happen. There are a bunch of servers calling cache model. If your model includes StampedeAvoider, then its cache_key will now be fetching /cache-keys/models/1, and returning something like /models/1-111 (where 111 is the timestamp), which cache will use to fetch the compiled template fragment.
When you update the model, model.cache_key will begin returning /models/1-222 (assuming 222 is the new timestamp), but for the first second after that, cache will keep seeing /models/1-111, since that is what is returned by cache_key. Once 1 second passes, all of the servers will get a cache-miss on /cache-keys/models/1 and will try to regenerate it. If they all recreated it immediately, it would defeat the point of overriding cache_key. But because we set race_condition_ttl to 2, all of the servers except for the first will be delayed for 2 seconds, during which time they will continue to fetch the old cached template based on the old cache key. Once the 2 seconds have passed, fetch will begin returning the new cache key (which will have been updated by the first thread which tried to read/update /cache-keys/models/1) and they will get a cache hit, returning the template compiled by that first thread.
Ta-da! Stampede averted.
Note that if you did this, you would be doing twice as many cache reads, but depending on how common stampedes are, it could be worth it.
I haven't tested this. If you try it, please let me know how it goes :)
The :race_condition_ttl setting in ActiveSupport::Cache::Store#fetch should help avoid this problem. As the documentation says:
Setting :race_condition_ttl is very useful in situations where a cache entry is used very frequently and is under heavy load. If a cache expires and due to heavy load seven different processes will try to read data natively and then they all will try to write to cache. To avoid that case the first process to find an expired cache entry will bump the cache expiration time by the value set in :race_condition_ttl. Yes, this process is extending the time for a stale value by another few seconds. Because of extended life of the previous cache, other processes will continue to use slightly stale data for a just a bit longer. In the meantime that first process will go ahead and will write into cache the new value. After that all the processes will start getting new value. The key is to keep :race_condition_ttl small.
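In practice that might look like this (the key and timings are made up for illustration):
# The first process to hit the expired entry extends it by 10 seconds and
# recomputes; everyone else keeps reading the stale value in the meantime.
Rails.cache.fetch('search-results', expires_in: 10.minutes, race_condition_ttl: 10.seconds) do
  Product.search(params[:q]) # hypothetical expensive call
end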
Great question. A partial answer that applies to single multi-threaded Rails servers but not multiprocess(or) environments (thanks to Nick Urban for drawing this distinction) is that the ActionView template compilation code blocks on a mutex that is per template. See line 230 in template.rb here. Notice there is a check for completed compilation both before grabbing the lock and after.
The effect is to serialize attempts to compile the same template, where only the first will actually do the compilation and the rest will get the already completed result.
Very interesting question. I searched on Google (you get more results if you search for "dog pile" instead of "stampede") but, like you, I did not get any answers, except this one blog post: protecting from dogpile using memcache.
Basically it stores your fragment under two keys: key:timestamp (where timestamp would be updated_at for ActiveRecord objects) and key:last.
def custom_write_dogpile(key, timestamp, fragment, options)
  Rails.cache.write(key + ':' + timestamp.to_s, fragment)
  Rails.cache.write(key + ':last', fragment)
  Rails.cache.delete(key + ':refresh-thread')
  fragment
end
Now when reading from the cache, if it tries to fetch a non-existing entry, it will fall back to the key:last fragment instead:
def custom_read_dogpile(key, timestamp, options)
  result = Rails.cache.read(key + ':' + timestamp.to_s)
  if result.blank?
    Rails.cache.write(key + ':refresh-thread', 0, raw: true, unless_exist: true, expires_in: 5.seconds)
    if Rails.cache.increment(key + ':refresh-thread') == 1
      # The cache didn't exist; let this thread regenerate it
      result = nil
    else
      # Fetch the last cache, as the new one has not been created yet
      result = Rails.cache.read(key + ':last')
    end
  end
  result
end
This is a simplified summary of the blog post by Moshe Bergman that I linked to before.
There is no protection against memcache stampedes. This is a real problem when multiple machines are involved and multiple processes on those multiple machines. -Ouch-.
The problem is compounded when one of the key processes has "died" leaving any "locking" ... locked.
In order to prevent stampedes you have to re-compute the data before it expires. So, if your data is valid for 10 minutes, you need to regenerate it at the 5th minute and re-set it with a new expiration for 10 more minutes. Thus you don't wait until the data expires to set it again.
In other words: don't let your data expire at the 10 minute mark; re-compute it every 5 minutes, so it never actually expires. :)
You can use wget & cron to periodically call the code.
I recommend using redis, which will allow you to save the data and reload it in the event of a crash.
-daniel
A reasonable strategy would be to:
use a :race_condition_ttl with at least the expected time it takes to refresh the resource. Setting it to less time than expected to perform a refresh is not advisable as the angry mob will end up trying to refresh it, resulting in a stampede.
use an :expires_in time calculated as the maximum acceptable expiry time minus the :race_condition_ttl to allow for refreshing the resource by a single worker and avoiding a stampede.
Using the above strategy will ensure that you don't exceed your expiry/staleness deadline and also avoid a stampede. It works because only one worker gets through to refresh, whilst the angry mob are held off using the cache value with the race_condition_ttl extension time right up to the originally intended expiry time.
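A sketch of that strategy (the numbers are assumptions: 10 minutes of maximum acceptable staleness, refreshes taking up to 30 seconds):
Rails.cache.fetch('expensive-resource',
                  expires_in: 10.minutes - 30.seconds,
                  race_condition_ttl: 30.seconds) do
  recompute_expensive_resource # hypothetical refresh
end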

What is the right way to clear cache in Rails without sweepers

Observers and Sweepers are removed from Rails 4. Cool.
But what is the way to cache and clear the cache then?
I read about russian doll caching. It's nice and all, but it only concerns the view rendering cache. It doesn't prevent the database from being hit.
For instance:
<% cache @product do %>
  Some HTML code here
<% end %>
You still need to get @product from the db to get its cache_key. So page or action caching can still be useful to prevent unnecessary load.
I could use some timeout to clear the cache sometimes, but what for, if the records didn't change?
At least with sweepers you have control over that aspect. What is/will be the right way to do caching and to clear it?
Thanks! :)
Welcome to one of the two hard problems in computer science, cache invalidation :)
You have to handle that manually, since the logic for when a cached object should be invalidated (unlike a cached view, which can simply be derived from the objects it displays) is application and situation dependent.
Your go-to method for this is the Rails.cache.fetch method. Rails.cache.fetch takes 3 arguments: the cache key, an options hash, and a block. It first tries to read a valid cache record based on the key; if that key exists and hasn't expired, it will return the value from the cache. If it can't find a valid record, it instead takes the return value of the block and stores it in the cache under your specified key.
For example:
@models = Rails.cache.fetch my_cache_key do
  Model.where(condition: true).to_a
end
This will cache the block and reuse the result until something (tm) invalidates the key, forcing the block to be reevaluated. Also note the .to_a at the end of the method chain. Normally Rails would return an ActiveRecord relation object; that relation would be cached unevaluated and only run when you tried to use @models for the first time, neatly sidestepping the cache. The .to_a call forces Rails to load the records and ensures that it's the result that we cache, not the question.
So now that you get all your cache on and never talk to the database again, we have to make sure we cover the other end: invalidating the cache. This is done with the Rails.cache.delete method, which simply takes a cache key and removes it, causing a miss the next time you try to fetch it. You can also use the force: true option with fetch to force a re-evaluation of the block. Whichever suits you.
The science of it all is where to call Rails.cache.delete; in the naïve case this would be on update and delete for a single instance, and on update, delete, create on any member for a collection. There will always be corner cases, and they are always application specific, so I can't help you much there.
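A minimal sketch of the single-instance case (assuming the my_cache_key used in the fetch above can be rebuilt from the model):
class Model < ActiveRecord::Base
  after_commit :flush_cache # fires on create, update and destroy

  private

  def flush_cache
    Rails.cache.delete(my_cache_key) # hypothetical; however you build the key
  end
end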
I assume in this answer that you will set up some sane cache store, like memcached or Redis.
Also remember to add this to config/environments/development.rb:
config.cache_store = :null_store
or your development environment will cache and you will end up hairless from frustration.
For further reference read: Everyone should be using low level caching in Rails, and the Rails API docs.
It is also worth noting that this functionality is not removed from Rails 4, merely extracted into a gem. If you need or would like the full features of the sweepers, simply add them back to your app with a gem 'rails-observers' line in your Gemfile. That gem contains both the sweepers and observers that were removed from Rails 4 core.
I hope that helps you get started.

Rails 3: Caching to Global Variable

I'm sure "global variable" will get the hair on the back of everyone's neck standing up. What I'm trying to do is store a hierarchical menu in an acts_as_tree data table (done). In application_helper.rb, I create an html menu by querying the database and walking the tree (done). I don't want to do this for every page load.
Here's what I tried:
application.rb
config.menu = nil
application_helper.rb
def my_menu_builder
  return MyApp::Application.config.menu if MyApp::Application.config.menu
  # All the menu building code that should only run once
  MyApp::Application.config.menu = menu_html
end
menu_controller.rb
def create
  # whatever create code
  expire_menu_cache
end

protected

def expire_menu_cache
  MyApp::Application.config.menu = nil
end
Where I stand right now is that on first page load, the database is, indeed, queried and the menu built. The results are stored in the config variable and the database is never again hit for this.
It's the cache expiration part that's not working. When I reset the config.menu variable to nil, presumably the next time through my_menu_builder, it will detect that change and rebuild the menu, caching the new results. Doesn't seem to happen.
Questions:
Is Application.config a good place to store stuff like this?
Does anyone see an obvious flaw in this caching strategy?
Don't say premature optimization -- that's the phase I'm in. The premature-optimization iteration :)
Thanks!
I would avoid global variables, and use Rails' caching facilities.
http://guides.rubyonrails.org/caching_with_rails.html
One way to achieve this is to set an empty hash in your application.rb file:
MY_VARS = {}
Then you can add whatever you want in this hash which is accessible everywhere.
MY_VARS[:foo] = "bar"
and elsewhere:
MY_VARS[:foo]
As you felt, this is not the Rails way to behave, even if it works. There are different ways to use caching in Rails:
simple cache in memory explained here:
Rails.cache.read("city") # => nil
Rails.cache.write("city", "Duckburgh")
Rails.cache.read("city") # => "Duckburgh"
use of a real engine like memcached
I encourage you to have a look at http://railslab.newrelic.com/scaling-rails
This is THE place to learn caching in all its shapes.
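Applied to your menu case, a minimal sketch (build_menu_html stands in for the tree-walking code in your helper):
def my_menu_builder
  Rails.cache.fetch("site_menu") { build_menu_html }
end

def expire_menu_cache
  Rails.cache.delete("site_menu")
end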
