I have asked several questions about caching here, but none were answered. So maybe I will get an answer to this one.
I have in my search model code like this:
users = users.where('id > ?', my_value)
Rails.cache.fetch('my_cached', expires_in: 3.minutes) do
  users = users.order('current_sign_in_at DESC')
end
Can somebody explain to me what this does? I thought it would return the sorted users and put them in the cache for 3 minutes, so that my next search would return the my_cached result if it hadn't expired.
But it does not work like that. When a user logs in and current_sign_in_at changes, the cache is overwritten and a fresh query result is returned.
I am not very experienced with caching in Rails, but I think what you say is right:
I thought that this will return the sorted users table and put it in
the cache for 3 minutes, so when I run the next search it will
return the my_cached result if it hasn't expired.
Then I think the caching should be done in this manner:
Rails.cache.fetch('my_cached', expires_in: 3.minutes) do
  User.where('id > ?', my_value).order('current_sign_in_at DESC')
end
Remember that there may be rules that expire the cache before the assigned expiration time; this depends on how the User model is configured.
Also, the query is skipped only when you actually use the my_cached variable. The query you see may be coming from another procedure (maybe from Devise itself, if you use it?).
SOME HELPFUL REFERENCE
https://devcenter.heroku.com/articles/caching-strategies#low-level-caching
http://guides.rubyonrails.org/caching_with_rails.html#activesupport-cache-store
http://robotmay.com/post/23161612605/everyone-should-be-using-low-level-caching
UPDATE
If the variable my_value changes frequently (it can be a random value), then the cache will be used only if my_value is the same as a value seen previously (within the 3 minutes).
An alternative solution for a changing my_value could be:
Rails.cache.fetch('my_cached', expires_in: 3.minutes) do
  User.all  # cache all users
end
# then filter the cached value by my_value and order it
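Another way to make the cache vary with my_value is to include it in the cache key itself. Here is a minimal plain-Ruby sketch of that idea (MiniCache and sorted_users_key are hypothetical stand-ins for Rails.cache and a real key scheme, just to illustrate fetch semantics):

```ruby
# Minimal stand-in for Rails.cache, for illustration only.
class MiniCache
  def initialize
    @store = {}
  end

  # Like Rails.cache.fetch: return the stored value for `key`,
  # or run the block, store its result, and return it.
  def fetch(key)
    return @store[key] if @store.key?(key)
    @store[key] = yield
  end
end

# Including my_value in the key gives each filter its own entry,
# so changing my_value causes a miss instead of returning stale data.
def sorted_users_key(my_value)
  "my_cached/#{my_value}"
end

cache = MiniCache.new
calls = 0
query = lambda { calls += 1; "users where id > 5" }

cache.fetch(sorted_users_key(5)) { query.call }  # miss: runs the block
cache.fetch(sorted_users_key(5)) { query.call }  # hit: block not run
cache.fetch(sorted_users_key(9)) { query.call }  # different key: miss
```

With Rails this would simply be `Rails.cache.fetch("my_cached/#{my_value}", expires_in: 3.minutes) { ... }`.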
Related
Observers and sweepers are removed from Rails 4. Cool.
But then, what is the way to cache and to clear the cache?
I read about Russian doll caching. It's nice and all, but it only concerns the view rendering cache; it doesn't prevent the database from being hit.
For instance:
<% cache @product do %>
  Some HTML code here
<% end %>
You still need to get @product from the db to get its cache_key. So page or action caching can still be useful to prevent unnecessary load.
I could use a timeout to clear the cache periodically, but why, if the records didn't change?
At least with sweepers you have control over that aspect. What is (or will be) the right way to cache and to clear the cache?
Thanks! :)
Welcome to one of the two hard problems in computer science, cache invalidation :)
You will have to handle that manually, since the logic for when a cached object should be invalidated (unlike a cached view, which can simply be derived from the objects it displays) is application and situation dependent.
Your go-to method for this is Rails.cache.fetch. It takes 3 arguments: the cache key, an options hash, and a block. It first tries to read a valid cache entry based on the key; if that key exists and hasn't expired, it returns the value from the cache. If it can't find a valid entry, it instead takes the return value of the block and stores it in the cache under your specified key.
For example:
@models = Rails.cache.fetch my_cache_key do
  Model.where(condition: true).all
end
This will cache the block and reuse the result until something (tm) invalidates the key, forcing the block to be re-evaluated. Also note the .all at the end of the method chain (in Rails 4, use .to_a instead). Normally Rails would return a lazy ActiveRecord relation; that relation would be cached and then evaluated only the first time you used @models, neatly sidestepping the cache. Forcing Rails to load the records ensures that it's the result we cache, not the question.
So now that you have your caching on and never talk to the database again, we have to cover the other end: invalidating the cache. This is done with the Rails.cache.delete method, which simply takes a cache key and removes it, causing a miss the next time you try to fetch it. You can also use the force: true option with fetch to force a re-evaluation of the block. Whichever suits you.
The science of it all is where to call Rails.cache.delete; in the naive case this would be on update and delete for a single instance, and on update, delete, and create on any member for a collection. There will always be corner cases, and they are always application specific, so I can't help you much there.
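That naive strategy can be sketched in plain Ruby. The Product model and the hash-based cache below are hypothetical stand-ins (real code would use ActiveRecord callbacks and Rails.cache); the point is just that create, update, and destroy each delete the collection key:

```ruby
# A plain-hash stand-in for Rails.cache, for illustration.
CACHE = {}

def cache_fetch(key)
  CACHE.key?(key) ? CACHE[key] : (CACHE[key] = yield)
end

def cache_delete(key)
  CACHE.delete(key)
end

# Hypothetical model: every write operation invalidates the
# cached collection, so the next fetch re-evaluates its block.
class Product
  ALL_KEY = "products/all"

  attr_accessor :name

  def initialize(name)
    @name = name
    self.class.store << self
    cache_delete(ALL_KEY)   # create invalidates the collection
  end

  def save
    cache_delete(ALL_KEY)   # update invalidates the collection
  end

  def destroy
    self.class.store.delete(self)
    cache_delete(ALL_KEY)   # delete invalidates the collection
  end

  def self.store
    @store ||= []
  end

  def self.all_cached
    cache_fetch(ALL_KEY) { store.dup }
  end
end

Product.new("widget")
first = Product.all_cached    # miss: evaluates and stores one product
Product.new("gadget")         # invalidates the key
second = Product.all_cached   # miss again: now sees both products
```

In a real app the three cache_delete calls would live in after_save and after_destroy callbacks on the model.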
I assume in this answer that you will set up some sane cache store, like memcached or Redis.
Also remember to add this to config/environments/development.rb:
config.cache_store = :null_store
or your development environment will cache and you will end up hairless from frustration.
For further reference read: Everyone should be using low level caching in Rails and the Rails API docs.
It is also worth noting that the functionality is not removed from Rails 4, merely extracted into a gem. If you need or would like the full features of the sweepers, simply add them back to your app with a gem 'rails-observers' line in your Gemfile. That gem contains both the sweepers and observers that were removed from Rails 4 core.
I hope that helps you get started.
I have a long database query on one of our dashboard systems that I would like to cache as results do not need to be accurate in realtime but can give a "close enough" value from the cache.
I would like to do this without the user ever having to wait. I was looking at using something like
Rails.cache.write('my_val', 'val', :expires_in => 60.minutes)
to store this value, but I don't believe it gives exactly the functionality that I want. I would like to call something like:
Rails.cache.fetch('my_val') { create a background task to update my_val; return the expired my_val }
It seems that my_val is removed from the cache when it expired though. Is there any way to access this expired value or perhaps another built in mechanism that would enable this functionality?
Thanks.
Just do this:
Rails.cache.write('my_val', 'val')  # no :expires_in, so it never expires
Now run your background job:
SomeLongJob.process
In the SomeLongJob.process job do this:
def SomeLongJob.process
  some_long_calculation = Blah.calc
  Rails.cache.write('my_val', some_long_calculation)
end
Now read the data with
def get_value
  val = Rails.cache.read('my_val')
end
The :race_condition_ttl option for Rails.cache.fetch is REALLY close to what you are looking for: http://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html#method-i-fetch
But from what I can tell, the first request is still blocked (it's just the subsequent ones that get the old value while it's updating). Not sure why they didn't go all the way with it. It would be better if the pattern @drhenner mentioned were abstracted into an option like that, but I haven't seen one yet.
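Roughly, :race_condition_ttl works by extending a just-expired entry so that concurrent readers keep getting the stale value while the first caller recomputes. A plain-Ruby sketch of that idea (not Rails' actual implementation) might look like:

```ruby
# Rough sketch of the :race_condition_ttl idea (not Rails' real code).
# Entries are stored as [value, expires_at].
class StaleWhileRevalidateCache
  def initialize
    @store = {}
  end

  def fetch(key, expires_in:, race_condition_ttl:, now: Time.now)
    value, expires_at = @store[key]
    if expires_at && now < expires_at
      value                  # fresh (or extended) entry: serve it
    elsif expires_at
      # Expired: bump the stale entry's expiry so concurrent callers
      # keep getting the old value, then recompute in this caller.
      @store[key] = [value, now + race_condition_ttl]
      new_value = yield
      @store[key] = [new_value, now + expires_in]
      new_value
    else
      new_value = yield      # no entry at all: this caller blocks
      @store[key] = [new_value, now + expires_in]
      new_value
    end
  end

  # What a concurrent reader would see right now.
  def peek(key)
    @store[key] && @store[key][0]
  end
end

cache = StaleWhileRevalidateCache.new
t0 = Time.at(0)

cache.fetch("dash", expires_in: 60, race_condition_ttl: 10, now: t0) { "old" }

# 61 seconds later the entry is expired; the first caller extends it
# and recomputes, so a concurrent reader still sees "old" meanwhile.
seen_during_recompute = nil
result = cache.fetch("dash", expires_in: 60, race_condition_ttl: 10, now: t0 + 61) do
  seen_during_recompute = cache.peek("dash")
  "new"
end
```

Note that, as described above, the very first request (a cold cache) still blocks, because there is no stale value to serve.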
Every time I hit an authenticated page, I notice devise issuing an SQL statement :
User Load (0.2ms) SELECT users.* FROM users WHERE (users.id = 1) LIMIT 1
(I'm using Rails 3, btw, so cache_money seems out as a solution, and despite a lot of searching I've found no substitute.)
I tried many overrides in the user model, and only find_by_sql seems to be called, which gets passed a string of the entire SQL statement. Something intuitive like find_by_id or find doesn't seem to get called. I 'can' override this method, glean the user id, and build a reasonable cache system from that, but that's quite ugly.
I also tried overriding authenticate_user, with which I can intercept one SQL attempt, but then calls to current_user seem to try it again.
Simply put, my user objects change rarely, and it's a sad state of affairs to keep hitting the db for this instead of using a memcache solution. (Assume that I'm willing to accept all responsibility for invalidating said cache, with :after_save as part but not all of that solution.)
The following code will cache the user by its id and invalidate the cache after each modification:
class User < ActiveRecord::Base
  after_save :invalidate_cache

  def self.serialize_from_session(key, salt)
    single_key = key.is_a?(Array) ? key.first : key
    user = Rails.cache.fetch("user:#{single_key}") do
      User.where(:id => single_key).entries.first
    end
    # validate user against stored salt in the session
    return user if user && user.authenticatable_salt == salt
    # fall back to Devise's default method if user is blank or invalid
    super
  end

  private

  def invalidate_cache
    Rails.cache.delete("user:#{id}")
  end
end
WARNING: There's most likely a better/smarter way to do this.
I chased this problem down a few months back. I found -- or at least, I think I found -- where Devise loads the user object here:
https://github.com/plataformatec/devise/blob/master/lib/devise/rails/warden_compat.rb#L31
I created a monkey patch for that deserialize method in config/initializers/warden.rb to do a cache fetch instead of a get. It felt dirty and wrong, but it worked.
I've been struggling with this, too.
A less convoluted way of doing this is to add this class method to your User model:
def self.serialize_from_session(key, salt)
  single_key = key.is_a?(Array) ? key.first : key
  Rails.cache.fetch("user:#{single_key}") { User.find(single_key) }
end
Note that I'm prepending the model name to the object ID that is passed in for storing/retrieving the object from the cache; you can use whatever scheme fits your needs.
The only thing to worry about, of course, is invalidating the user in the cache when something changes. It would have been nice instead to store the User in the cache using the session ID as part of the key, but the session is not available in the model class, and is not passed in to this method by Devise.
What's the best way to cache a paginated result set with rails and memcached?
For example, posts controller:
def index
  @posts = Rails.cache.fetch('all_posts') do
    Post.paginate(:conditions => ['xx = ?', yy], :include => [:author], :page => params[:page], :order => 'created_at DESC')
  end
end
This obviously doesn't work when params[:page] changes. I can change the key to "all_posts_#{params[:page]}_#{params[:order]}_#{last_record.created_at.to_i}", but then there could be several possible orders (recent, popular, most voted, etc.), and there will be a combination of pages and orders... lots of keys this way.
Problem #2: it seems that when I implement this solution, the caches get written correctly and the page loads fine during the first call to a paginated action. But when I click back to the same page (i.e. page 1, with recent order), the browser does not even make a call to the server; I don't see any controller action being called in the production log.
I am using Passenger, REE, memcached, and Rails 2.3.5. Firebug shows no requests being made...
Is there a simpler/more graceful way of handling this?
When it comes to caching there is no easy solution. You might cache every variant of the result, and that's fine if you implement auto-expiration of entries. You can't just use all_posts, because that way you would have to expire dozens of keys when posts change.
Every AR model instance has a .cache_key method based on updated_at, which is the preferred approach, so use that instead of the last record. Also, don't base your key on the last record: if some post in the middle gets deleted, your key won't change. You can use logic like this instead:
class ActiveRecord::Base
  def self.newest
    order("updated_at DESC").first
  end

  def self.cache_key
    newest.nil? ? "0:0" : "#{newest.cache_key}:#{count}"
  end
end
Now you can use Post.cache_key, which will change whenever any post is changed, deleted, or created.
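The behavior of that composite key can be illustrated in plain Ruby. The Post class below is a hypothetical stand-in (its cache_key mimics Rails' timestamp-based format); note how deleting a record that is not the newest still changes the key via the count component:

```ruby
# Plain-Ruby stand-in for the Post model, to show how the
# "newest cache_key + count" composite key behaves.
class Post
  attr_reader :id, :updated_at

  def initialize(id, updated_at)
    @id = id
    @updated_at = updated_at
  end

  # Mimics Rails' timestamp-based cache_key, e.g. "posts/2-200".
  def cache_key
    "posts/#{id}-#{updated_at.to_i}"
  end

  def self.records
    @records ||= []
  end

  # Composite key: newest record's cache_key plus the record count,
  # so updates, creates, and deletes all change the key.
  def self.cache_key
    newest = records.max_by(&:updated_at)
    newest.nil? ? "0:0" : "#{newest.cache_key}:#{records.size}"
  end
end

Post.records << Post.new(1, Time.at(100))
Post.records << Post.new(2, Time.at(200))

key_before = Post.cache_key    # newest is post 2, count is 2
Post.records.delete_at(0)      # delete the older record...
key_after = Post.cache_key     # ...the count part still changes the key
```

A fetch could then use a key like `"posts/#{Post.cache_key}/page-#{params[:page]}"`, so every write to the table naturally expires all page variants at once.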
In general I would just cache Post.all and then paginate on that object. You really need to do some profiling to find the bottlenecks in your application.
Besides, if you want to cache every variant, then do fragment/page caching instead.
It's up to you how and where to cache; there is no one way here.
As for the second part of the question, there are far too few hints for me to figure out an answer. Check whether the browser is making a call at all (LiveHTTPHeaders, tcpdump, etc.).
Is there a way to do this in Rails:
I have an activerecord query
@posts = Post.find_by_id(10)
Anytime the query is called, SQL is generated and executed at the DB that looks like this
SELECT * FROM 'posts' WHERE id = 10
This happens every time the AR query is executed. Similarly with a helper method like this:
<%= f.textarea :name => 'foo' %>
#=> <input type='textarea' name='foo' />
I write some Railsy code that generates text that is used by some other system (database, web browser). I'm wondering if there's a way to write an AR query or a helper method call that generates that text once (each time the code changes) instead of every time the method is called.
Look at the log: the query may go to the database the first time, but later occurrences may start with CACHE, meaning they are served from ActiveRecord's query cache.
It also sounds to me like you want to cache the page, not the query. And even if it were the query, I don't think it's as simple as find_by_id(10) :)
Like Radar suggested, you should probably look into Rails caching. You can start with something simple like the memory store or file cache and then move to something better like memcached if necessary. You can throw some caching into the helper method, which will cache the result after it is queried once. For example you can do:
id = 10  # id is probably coming in as a param/argument
cache_key = "Post/#{id}"
@post = Rails.cache.read(cache_key)
if @post.nil?
  @post = Post.find_by_id(id)
  # Write back to the cache for the next time
  Rails.cache.write(cache_key, @post)
end
The only other thing left to do is add some code to expire the cache entry if the post changes. For that, take a look at using "Sweepers" in Rails. Alternatively you can look at some of the caching gems like Cache-fu and Cached-model.
I'm not sure I understand your question fully.
If you're asking about the generated query: you can just use find_by_sql and write your own SQL if you don't want to use the Active Record dynamic methods.
If you're asking about caching the result set to a file: it's already in the database, and I don't know that it would be much more efficient in a file.