Page-view-counter with Rails and Memcached

I'm trying to implement a page-view counter with Rails and memcached. Every time I render a page, Rails increments a memcached key (incr is atomic). My main worry is that this key could expire or be evicted from memcached before I write the count back to my DB record. Even if I touch every key more frequently than its expiration time, memcached might still evict a key in the meantime when memory is full.
Any suggestions?
Thank you
Dimitris

I would go with Redis as a memcached replacement. It's perfect for real-time stats. It gives you the speed and atomic increments that you want, plus it persists. Problem solved.
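A minimal sketch of what that could look like with the redis gem (the connection settings and key naming are assumptions, not part of the original question):

require 'redis'

# Hypothetical setup: host/port and key format are placeholders.
REDIS = Redis.new(host: 'localhost', port: 6379)

# Called from the controller/action that renders the page.
def track_view(page_id)
  REDIS.incr("page_views:#{page_id}")   # atomic increment, just like memcached's incr
end

def view_count(page_id)
  REDIS.get("page_views:#{page_id}").to_i
end

With RDB or AOF persistence enabled on the Redis server, the counters survive restarts, so there is no window where a not-yet-persisted count can silently disappear.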

If you want that data to be persistent, you must not keep it only in memcache (which is a caching mechanism, not persistent data storage).
Basically, here's what I'd probably do (a rough sketch in code follows below):
When trying to get a counter for a page:
Check whether it's stored in memcache
if yes, use it
if not, fetch it from the DB and store it in memcache
When trying to write a counter (i.e. counter += 1):
Write the data to the database (update ... set counter = counter + 1 where ...)
select the data back from the database; wrapping both the update and the select in a transaction might help: isolation is something databases do well.
and immediately write it to memcache, so it's up to date for the next "get" operation
But I would not use memcache for persistence:
I would never write to memcache any data that has not been written to the database
persistence is the job of the database, not of a caching engine.
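Here is the sketch mentioned above: a minimal Rails version of that flow, assuming a hypothetical Page model with an integer views column (the model and cache-key names are placeholders, not from the question):

class Page < ApplicationRecord
  def self.view_count(page_id)
    # Read-through: try the cache first, fall back to the DB and repopulate.
    Rails.cache.fetch("page_views/#{page_id}") do
      find(page_id).views
    end
  end

  def self.record_view!(page_id)
    # The database stays the source of truth: increment there first...
    views = transaction do
      where(id: page_id).update_all("views = views + 1")
      find(page_id).views
    end
    # ...then immediately refresh the cache so the next read is up to date.
    Rails.cache.write("page_views/#{page_id}", views)
    views
  end
end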

Related

Can I use Rails.cache to store short-lived session data?

We are already using cookie-based sessions, and switching them to file-store sessions is not an option. However, I need a way to store larger amounts of session data (up to 10MB or so) -- beyond the limit of the cookie session store, and even if it weren't, round-tripping that much data on multiple requests would be slow.
I am currently attempting to solve this by using (abusing?) Rails.cache. The basic setup is like this:
I post to a route, which has the following code:
# calculate some results...
Rails.cache.write('search_results' + session.id, search_results)
redirect_to '/results'
Inside GET /results, I read the cached data and send it to the client:
#results = Rails.cache.read('search_results' + session.id)
This works fine. However, if I subsequently make a request to another route like GET /results2 that also calls Rails.cache.read('search_results' + session.id), it will return nil, even if all calls happen within a 5-10s span.
So my questions are:
Why does this happen? What determines when Rails.cache cleans itself?
Is there a way to make this work?
Is there a better approach altogether that doesn't involve using a DB or redis?
Answer to your questions:
The problem with the file cache store is that it stores files locally. Thus, if you have multiple servers, the cache can be written on one server and then read on another, which will return nil. The solution is to use a cache store that can be shared among multiple servers.
Using redis-store may be a solution: https://github.com/redis-store/redis-rails
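The setup would look roughly like this (a sketch only; the Redis URL and expiry here are placeholders):

# Gemfile
gem 'redis-rails'

# config/environments/production.rb
config.cache_store = :redis_store, "redis://localhost:6379/0/cache", { expires_in: 90.minutes }

On Rails 5.2+ you can also use the built-in :redis_cache_store instead of the gem.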

Memcache not able to fetch value

I set 10000 keys in memcache
for i in 1..10000
  Rails.cache.write("short_key#{i}", i)
end
After ~500s (not benchmarked, but it happens around the 10-minute mark), when I do
_random = rand(10000)
Rails.cache.read("short_key#{_random}")
it returns nil. This is fine; memcached's LRU policy might have evicted those keys.
But the issue is that I see a lot of free memory on the server.
Also, when I run the following command in a telnet session,
stats cachedump 1 10
I get some random keys that I set earlier in the loop, and even when I try to fetch them via Rails or via get in telnet, memcached is not able to return the value.
Those key/values are eating up memory but are somehow unreadable.
I use dalli to connect with memcached.
How can I correct this?
At first glance, this seems possible if the default expiration time (TTL) is low (10 minutes or 500 seconds are both plausible defaults).
Since you are not setting expires_in (or the equivalent time-to-live option), each key is stored with the default TTL, after which the value expires.
Quoting the Rails caching documentation:
Setting :expires_in will set an expiration time on the cache. All caches support auto-expiring content after a specified number of seconds. This value can be specified as an option to the constructor (in which case all entries will be affected), or it can be supplied to the fetch or write method to effect just one entry.
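So if you need those keys to live longer, pass expires_in explicitly; a minimal sketch (the one-day TTL and the server address are assumptions):

# Per-entry TTL when writing:
Rails.cache.write("short_key#{i}", i, expires_in: 1.day)

# Or a default TTL for every entry, set on the store itself:
# config.cache_store = :mem_cache_store, "localhost:11211", { expires_in: 1.day }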

Rails - how to cache data for server use, serving multiple users

I have a class method (placed in /app/lib/) which performs some heavy calculations and sub-HTTP-requests until a result is received.
The result isn't too dynamic, and it is requested by multiple users accessing a specific view in the app.
So I want to schedule a periodic run of the method (using cron and the Whenever gem), store the results somewhere on the server in JSON format and, on demand, read just those stored results into the view.
How can this be achieved? What would be the correct way of doing that?
What I currently have:
def heavyMethod
  response = {}
  # some calculations, eventually building the response
  File.open(File.expand_path('../../../tmp/cache/tests_queue.json', __FILE__), "w") do |f|
    f.write(response.to_json)
  end
end
and also a corresponding method to read this file.
I searched but couldn't find an example of achieving this using Rails' caching conventions (rather than ad-hoc code of my own) for data that isn't related to ActiveRecord.
Thanks!
Your solution should work fine, but using Rails.cache would be cleaner and a bit faster. The Rails guides provide enough information about Rails.cache and how to get it working with memcached; let me summarize how I would use it in your case.
Heavy method
def heavyMethod
  response = {}
  # some calculations, eventually building the response
  Rails.cache.write("heavy_method_response", response)
end
Request
response = Rails.cache.fetch("heavy_method_response")
The only problem here is that when your server starts for the first time, the cache will be empty. The same goes if/when memcached restarts.
One advantage is that somewhere along the flow, the data you pass in is marshalled into storage and unmarshalled on the way out. That means you can pass in complex data structures and don't need to serialize to JSON manually.
Edit: memcached will clear your item if it runs out of memory. That should be very rare, since it uses an LRU (I think) algorithm to expire things and I presume you will read this key often.
To prevent this (see the sketch below):
set expires_in larger than your cron period,
change your fetch code to call the heavy method if the fetch misses (e.g. Rails.cache.fetch("heavy_method_response") { heavy_method }), and change heavy_method to just return the object,
or use something like Redis, which will not delete items.
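Putting the first two suggestions together, a minimal sketch (the two-hour TTL is an assumption, chosen to be longer than an hourly cron run):

# The heavy method now just returns the object; callers decide where to store it.
def heavy_method
  response = {}
  # some calculations, eventually building the response
  response
end

# Run periodically from cron/Whenever to keep the cache warm.
def refresh_heavy_cache
  Rails.cache.write("heavy_method_response", heavy_method, expires_in: 2.hours)
end

# In the controller: use the cached value, recompute only on a miss.
def cached_response
  Rails.cache.fetch("heavy_method_response", expires_in: 2.hours) { heavy_method }
end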

How does Rails 4 Russian doll caching prevent stampedes?

I am looking for information on how the caching mechanism in Rails 4 protects against multiple users trying to regenerate the same cache entry at once, aka a cache stampede: http://en.wikipedia.org/wiki/Cache_stampede
I've not been able to find out much information via Googling. If I look at other systems (such as Drupal) cache stampede prevention is implemented via a semaphores table in the database.
Rails does not have a built-in mechanism to prevent cache stampedes.
According to the README for atomic_mem_cache_store (a replacement for ActiveSupport::Cache::MemCacheStore that mitigates cache stampedes):
Rails (and any framework relying on active support cache store) does
not offer any built-in solution to this problem
Unfortunately, I'm guessing that this gem won't solve your problem either. It supports fragment caching, but it only works with time-based expiration.
Read more about it here:
https://github.com/nel/atomic_mem_cache_store
Update and possible solution:
I thought about this a bit more and came up with what seems to me to be a plausible solution. I haven't verified that this works, and there are probably better ways to do it, but I was trying to think of the smallest change that would mitigate the majority of the problem.
I assume you're doing something like cache model do in your templates as described by DHH (http://37signals.com/svn/posts/3113-how-key-based-cache-expiration-works). The problem is that when the model's updated_at column changes, the cache_key likewise changes, and all your servers try to re-create the template at the same time. In order to prevent the servers from stampeding, you would need to retain the old cache_key for a brief time.
You might be able to do this by (dum da dum) caching the cache_key of the object with a short expiration (say, 1 second) and a race_condition_ttl.
You could create a module like this and include it in your models:
module StampedeAvoider
  def cache_key
    orig_cache_key = super
    Rails.cache.fetch("/cache-keys/#{self.class.table_name}/#{self.id}", expires_in: 1, race_condition_ttl: 2) { orig_cache_key }
  end
end
Let's review what would happen. There are a bunch of servers calling cache model. If your model includes StampedeAvoider, then its cache_key will now be fetching /cache-keys/models/1, and returning something like /models/1-111 (where 111 is the timestamp), which cache will use to fetch the compiled template fragment.
When you update the model, model.cache_key will begin returning /models/1-222 (assuming 222 is the new timestamp), but for the first second after that, cache will keep seeing /models/1-111, since that is what is returned by cache_key. Once 1 second passes, all of the servers will get a cache-miss on /cache-keys/models/1 and will try to regenerate it. If they all recreated it immediately, it would defeat the point of overriding cache_key. But because we set race_condition_ttl to 2, all of the servers except for the first will be delayed for 2 seconds, during which time they will continue to fetch the old cached template based on the old cache key. Once the 2 seconds have passed, fetch will begin returning the new cache key (which will have been updated by the first thread which tried to read/update /cache-keys/models/1) and they will get a cache hit, returning the template compiled by that first thread.
Ta-da! Stampede averted.
Note that if you did this, you would be doing twice as many cache reads, but depending on how common stampedes are, it could be worth it.
I haven't tested this. If you try it, please let me know how it goes :)
The :race_condition_ttl setting in ActiveSupport::Cache::Store#fetch should help avoid this problem. As the documentation says:
Setting :race_condition_ttl is very useful in situations where a cache entry is used very frequently and is under heavy load. If a cache expires and due to heavy load several different processes will try to read data natively and then they all will try to write to cache. To avoid that case the first process to find an expired cache entry will bump the cache expiration time by the value set in :race_condition_ttl. Yes, this process is extending the time for a stale value by another few seconds. Because of extended life of the previous cache, other processes will continue to use slightly stale data for just a bit longer. In the meantime that first process will go ahead and will write into cache the new value. After that all the processes will start getting new value. The key is to keep :race_condition_ttl small.
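In practice that looks something like this (the key, TTLs, and block body are placeholders):

# If this entry expires under load, the first process extends the stale value by
# 10 seconds while it regenerates; everyone else keeps serving the stale copy.
Rails.cache.fetch("expensive_fragment", expires_in: 10.minutes, race_condition_ttl: 10.seconds) do
  compute_expensive_fragment   # hypothetical expensive computation
end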
Great question. A partial answer that applies to single multi-threaded Rails servers but not to multi-process(or) environments (thanks to Nick Urban for drawing this distinction) is that the ActionView template compilation code blocks on a per-template mutex. See line 230 in template.rb. Notice there is a check for completed compilation both before grabbing the lock and after.
The effect is to serialize attempts to compile the same template, where only the first will actually do the compilation and the rest will get the already completed result.
Very interesting question. I searched on Google (you get more results if you search for "dog pile" instead of "stampede") but, like you, I did not get any answers, except this one blog post: protecting from dogpile using memcache.
Basically, it stores your fragment under two keys: key:timestamp (where timestamp would be updated_at for ActiveRecord objects) and key:last.
def custom_write_dogpile(key, timestamp, fragment, options)
  Rails.cache.write(key + ':' + timestamp.to_s, fragment)
  Rails.cache.write(key + ':last', fragment)
  Rails.cache.delete(key + ':refresh-thread')
  fragment
end
Now, when reading from the cache and the requested fragment does not exist, it will try to fetch the key:last fragment instead:
def custom_read_dogpile(key, timestamp, options)
  result = Rails.cache.read(key + ':' + timestamp.to_s)
  if result.blank?
    # First reader to increment the refresh-thread key gets 1 and rebuilds the cache
    Rails.cache.write(key + ':refresh-thread', 0, raw: true, unless_exist: true, expires_in: 5.seconds)
    if Rails.cache.increment(key + ':refresh-thread') == 1
      # The cache didn't exist; return nil so this caller regenerates it
      result = nil
    else
      # Fetch the last cache, as the new one has not been created yet
      result = Rails.cache.read(key + ':last')
    end
  end
  result
end
This is a simplified summary of the blog post by Moshe Bergman that I linked to above.
There is no protection against memcache stampedes. This is a real problem when multiple machines are involved, with multiple processes on those machines. -Ouch-.
The problem is compounded when one of the key processes has "died" leaving any "locking" ... locked.
In order to prevent stampedes you have to re-compute the data before it expires. So, if your data is valid for 10 minutes, you need to regenerate again at the 5th minute and re-set the data with a new expiration for 10 more minutes. Thus you don't wait until the data expires to set it again.
You should also not allow your data to expire at the 10-minute mark at all: re-compute it every 5 minutes, and it should never actually expire. :)
You can use wget & cron to periodically call the code.
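A cron-driven rake task could also do the refresh directly; a rough sketch under assumed names (the task, the ExpensiveReport class, the key, and the intervals are all hypothetical):

# lib/tasks/refresh_cache.rake -- scheduled every 5 minutes via cron/Whenever
namespace :cache do
  task refresh_stats: :environment do
    data = ExpensiveReport.generate   # the heavy computation
    # TTL comfortably longer than the refresh interval, so readers
    # never hit an expired entry between runs.
    Rails.cache.write("expensive_report", data, expires_in: 15.minutes)
  end
end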
I recommend using Redis, which will allow you to save the data and reload it in the event of a crash.
-daniel
A reasonable strategy would be to:
use a :race_condition_ttl with at least the expected time it takes to refresh the resource. Setting it to less time than expected to perform a refresh is not advisable as the angry mob will end up trying to refresh it, resulting in a stampede.
use an :expires_in time calculated as the maximum acceptable expiry time minus the :race_condition_ttl to allow for refreshing the resource by a single worker and avoiding a stampede.
Using the above strategy will ensure that you don't exceed your expiry/staleness deadline and also avoid a stampede. It works because only one worker gets through to refresh, whilst the angry mob are held off using the cache value with the race_condition_ttl extension time right up to the originally intended expiry time.
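For example, if the result must never be more than 10 minutes stale and a refresh takes up to 2 minutes (numbers chosen purely for illustration):

# expires_in = maximum acceptable staleness - race_condition_ttl
Rails.cache.fetch("expensive_resource", expires_in: 8.minutes, race_condition_ttl: 2.minutes) do
  recompute_expensive_resource   # hypothetical refresh taking up to ~2 minutes
end
# Worst case: the entry expires at minute 8, one worker spends up to 2 minutes
# refreshing while the rest keep serving the stale copy, and the new value is in
# place by minute 10, the original staleness deadline.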

Calling redis 'get' inside a loop to display forum post view data

I'm rebuilding a forum/board in rails. One of the requirements is that view information be recorded for a subject.
In the current system, a database call is made every time the page is loaded updating the view count for that post.
I would like to avoid that and am looking at implementing redis to record that information using a technique similar to this post - jQuery Redis hit counter to track view of cached Rails pages
So I would make a request to a controller that would record the view - via javascript - and then a cron job would move the redis usage data to the database (removing it from redis).
My quandary is that the current system offers real-time usage information, so that will be the expectation moving forward. On Heroku, which I plan to use, the most frequent cron jobs run hourly, which I don't think will be acceptable.
My thought was that I could store the usage information in redis and then while I'm looping through the subjects, I would combine the usage value stored in redis with the value that had been saved in the database from the cron job.
Is this a dumb idea? I'm new to redis so I don't really know what is possible. Is it a huge no-no to do a redis call in a loop like I'm suggesting?
If you really need the old application to maintain real-time statistics, and you want to use Redis, then you will have to change the legacy code to access it.
Here's a starting point for your code.
At every hit, you check the thread's counter in Redis. If the counter key doesn't exist, that triggers a load from the database.
So this would be a way to keep the stats updated (using PHP with the phpredis client):
try {
    $redis = new \Redis();
    $redis->connect('127.0.0.1', 6379);

    $thread_id = getFromPostGet("thread_id"); // suppose so
    $key = 'ViewCounterKey:' . $thread_id;    // each thread has a counter key

    // The existence check has to happen outside MULTI: inside a transaction,
    // commands are only queued, so exists() would not return a usable boolean.
    if (!$redis->exists($key)) {
        $counter = getFromDB("count(*) where thread_id = $thread_id"); // suppose so
        $redis->setnx($key, $counter); // NX: don't overwrite if a concurrent request seeded it first
    }

    $redis->incr($key); // every hit increments the counter
}
catch (\RedisException $e) {
    echo "Server down";
}
So this solution can be combined with cron jobs that persist the view counts, and the one-hour latency between cron runs would not matter, because reads always hit memory (Redis, not the DB).
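On the Rails side, that hourly persistence job could look roughly like this (the Subject model, views_count column, and key format are assumptions; GETSET reads and resets the counter atomically so hits arriving during the flush aren't lost):

# Hypothetical hourly job: flush Redis view counts into the database.
redis = Redis.new

Subject.find_each do |subject|
  key = "subject_views:#{subject.id}"
  pending = redis.getset(key, 0).to_i   # grab the accumulated count and reset it
  subject.increment!(:views_count, pending) if pending > 0
end

# When rendering the list, combine both sources for a real-time figure:
# subject.views_count + redis.get("subject_views:#{subject.id}").to_i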
Hope that makes sense.
