To encrypt user information before storing it in the database, my Rails application uses a per-User salt to derive a key with ActiveSupport::KeyGenerator. However, generate_key performs 2**16 iterations by default [1], and deriving the key from the user's salt on every read (for decryption) and write (for encryption) is slowing down my application.
I found that ActiveSupport::CachingKeyGenerator can cache the key as long as the salt and length used for key generation remain the same [2]. Internally, it uses a Concurrent::Map [3] to cache the keys. Switching to ActiveSupport::CachingKeyGenerator has improved my application's performance because the key is not re-derived on every call.
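For reference, here is a minimal sketch of both approaches (user.salt is a placeholder for whatever per-user salt column is used):

# Current approach: derive the key on every call (slow, 2**16 PBKDF2 iterations).
key_len   = ActiveSupport::MessageEncryptor.key_len
generator = ActiveSupport::KeyGenerator.new(Rails.application.secrets.secret_key_base)
key = generator.generate_key(user.salt, key_len)          # user.salt is a placeholder

# Cached approach: CachingKeyGenerator memoizes keys by salt + length in a Concurrent::Map.
caching_generator = ActiveSupport::CachingKeyGenerator.new(generator)
key = caching_generator.generate_key(user.salt, key_len)  # derived once, then served from the map
encryptor = ActiveSupport::MessageEncryptor.new(key)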
Will this increase memory usage of my application to a level where it can bring the app down?
References:
https://github.com/rails/rails/blob/b9ca94caea2ca6a6cc09abaffaad67b447134079/activesupport/lib/active_support/key_generator.rb#L16
https://api.rubyonrails.org/classes/ActiveSupport/CachingKeyGenerator.html
https://github.com/rails/rails/blob/b9ca94caea2ca6a6cc09abaffaad67b447134079/activesupport/lib/active_support/key_generator.rb#L33
Sharing a "working" solution
Since ActiveSupport::CachingKeyGenerator uses a Concurrent::Map, caching the keys results in memory usage that grows in proportion to the number of users (assuming all requests are somehow routed to the same node).
To solve this, I wrote a similar CachingKeyGenerator that wraps ActiveSupport::KeyGenerator and an ActiveSupport::Cache::MemoryStore.
class CachingKeyGenerator
  BASE = Rails.application.secrets.secret_key_base
  LENGTH = ActiveSupport::MessageEncryptor.key_len
  KEY_EXPIRY = 1.day

  def initialize
    @key_generator = ActiveSupport::KeyGenerator.new(BASE)
    @keys_cache = ActiveSupport::Cache::MemoryStore.new(expires_in: KEY_EXPIRY)
  end

  def generate_key(salt)
    # fetch with a block derives and stores the key only on a cache miss
    @keys_cache.fetch(salt) do
      @key_generator.generate_key(salt, LENGTH)
    end
  end
end
As per the Rails documentation, ActiveSupport::Cache::MemoryStore is thread-safe and implements an LRU-based cleanup mechanism [1]. This bounds the cache's memory usage to the size configured for the memory store (32 MB by default; it can be set during initialization).
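For instance (64 megabytes here is just an illustrative value), the size bound and expiry can be set together:

# Cap the in-process key cache at 64 MB; MemoryStore prunes least-recently-used
# entries when the size limit is reached, and expires entries after a day.
keys_cache = ActiveSupport::Cache::MemoryStore.new(size: 64.megabytes, expires_in: 1.day)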
P.S: Yet to deploy this on production, will update here if I face any unexpected issues.
[1] http://api.rubyonrails.org/classes/ActiveSupport/Cache/MemoryStore.html
Related
I want to run my Rails 5 app on Puma. I use low-level caching, and I assume this is the way to get thread-safe caching:
# somewhere in a model ...
@@mutex = Mutex.new

def nice_stuff
  Rails.cache.fetch("a_key") do
    @@mutex.synchronize do
      Rails.cache.fetch("a_key", expires_in: 60) do
        Model.stuff.to_a
      end
    end
  end
end
Will this work fine?
The proper way to handle concurrent cache access is already built-in.
val_1 = Rails.cache.fetch('foo', race_condition_ttl: 10.seconds) do
  Model.stuff.to_a
end
Setting :race_condition_ttl is very useful in situations where a cache entry is used very frequently and is under heavy load. If a cache entry expires under heavy load, several different processes will try to read the data from its source and then they all will try to write to the cache. To avoid that, the first process to find an expired cache entry bumps the cache expiration time by the value set in :race_condition_ttl. Yes, this process extends the lifetime of a stale value by another few seconds. Because of the extended life of the previous cache entry, other processes will continue to use slightly stale data for just a bit longer. In the meantime, that first process will go ahead and write the new value into the cache. After that, all the processes will start getting the new value. The key is to keep :race_condition_ttl small.
I am looking to find information on how the caching mechanism in Rails 4 prevents against multiple users trying to regenerate cache keys at once, aka a cache stampede: http://en.wikipedia.org/wiki/Cache_stampede
I've not been able to find out much information via Googling. If I look at other systems (such as Drupal) cache stampede prevention is implemented via a semaphores table in the database.
Rails does not have a built-in mechanism to prevent cache stampedes.
According to the README for atomic_mem_cache_store (a replacement for ActiveSupport::Cache::MemCacheStore that mitigates cache stampedes):
Rails (and any framework relying on active support cache store) does
not offer any built-in solution to this problem
Unfortunately, I'm guessing that this gem won't solve your problem either. It supports fragment caching, but it only works with time-based expiration.
Read more about it here:
https://github.com/nel/atomic_mem_cache_store
Update and possible solution:
I thought about this a bit more and came up with what seems to me to be a plausible solution. I haven't verified that this works, and there are probably better ways to do it, but I was trying to think of the smallest change that would mitigate the majority of the problem.
I assume you're doing something like cache model do in your templates as described by DHH (http://37signals.com/svn/posts/3113-how-key-based-cache-expiration-works). The problem is that when the model's updated_at column changes, the cache_key likewise changes, and all your servers try to re-create the template at the same time. In order to prevent the servers from stampeding, you would need to retain the old cache_key for a brief time.
You might be able to do this by (dum da dum) caching the cache_key of the object with a short expiration (say, 1 second) and a race_condition_ttl.
You could create a module like this and include it in your models:
module StampedeAvoider
  def cache_key
    orig_cache_key = super
    Rails.cache.fetch("/cache-keys/#{self.class.table_name}/#{self.id}", expires_in: 1, race_condition_ttl: 2) { orig_cache_key }
  end
end
Let's review what would happen. There are a bunch of servers calling cache model. If your model includes StampedeAvoider, then its cache_key will now be fetching /cache-keys/models/1, and returning something like /models/1-111 (where 111 is the timestamp), which cache will use to fetch the compiled template fragment.
When you update the model, model.cache_key will begin returning /models/1-222 (assuming 222 is the new timestamp), but for the first second after that, cache will keep seeing /models/1-111, since that is what is returned by cache_key. Once 1 second passes, all of the servers will get a cache-miss on /cache-keys/models/1 and will try to regenerate it. If they all recreated it immediately, it would defeat the point of overriding cache_key. But because we set race_condition_ttl to 2, all of the servers except for the first will be delayed for 2 seconds, during which time they will continue to fetch the old cached template based on the old cache key. Once the 2 seconds have passed, fetch will begin returning the new cache key (which will have been updated by the first thread which tried to read/update /cache-keys/models/1) and they will get a cache hit, returning the template compiled by that first thread.
Ta-da! Stampede averted.
Note that if you did this, you would be doing twice as many cache reads, but depending on how common stampedes are, it could be worth it.
I haven't tested this. If you try it, please let me know how it goes :)
The :race_condition_ttl setting in ActiveSupport::Cache::Store#fetch should help avoid this problem. As the documentation says:
Setting :race_condition_ttl is very useful in situations where a cache entry is used very frequently and is under heavy load. If a cache expires and due to heavy load seven different processes will try to read data natively and then they all will try to write to cache. To avoid that case the first process to find an expired cache entry will bump the cache expiration time by the value set in :race_condition_ttl. Yes, this process is extending the time for a stale value by another few seconds. Because of extended life of the previous cache, other processes will continue to use slightly stale data for a just a bit longer. In the meantime that first process will go ahead and will write into cache the new value. After that all the processes will start getting new value. The key is to keep :race_condition_ttl small.
Great question. A partial answer that applies to a single multi-threaded Rails server, but not to multiprocess(or) environments (thanks to Nick Urban for drawing this distinction), is that the ActionView template compilation code blocks on a per-template mutex. See line 230 in template.rb. Notice there is a check for completed compilation both before grabbing the lock and after.
The effect is to serialize attempts to compile the same template, where only the first will actually do the compilation and the rest will get the already completed result.
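The pattern is essentially double-checked locking; a simplified sketch (not the actual Rails source) of what that looks like:

# Simplified per-template compile lock: check before taking the mutex and again
# inside it, so only one thread compiles and the rest reuse the finished result.
def compile!(view)
  return if @compiled                 # already compiled, skip the lock entirely

  @compile_mutex.synchronize do
    return if @compiled               # someone else compiled while we waited
    compile(view)                     # the expensive work happens exactly once
    @compiled = true
  end
end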
Very interesting question. I searched on Google (you get more results if you search for "dog pile" instead of "stampede") but, like you, I did not get any answers, except for this one blog post: protecting from dogpile using memcache.
Basically, it stores your fragment under two keys: key:timestamp (where timestamp would be updated_at for Active Record objects) and key:last.
def custom_write_dogpile(key, timestamp, fragment, options)
  Rails.cache.write(key + ':' + timestamp.to_s, fragment)
  Rails.cache.write(key + ':last', fragment)
  Rails.cache.delete(key + ':refresh-thread')
  fragment
end
Now, when reading from the cache and the requested entry does not exist, it will try to fetch the key:last fragment instead:
def custom_read_dogpile(key, timestamp, options)
  result = Rails.cache.read(key + ':' + timestamp.to_s)
  if result.blank?
    Rails.cache.write(key + ':refresh-thread', 0, raw: true, unless_exist: true, expires_in: 5.seconds)
    if Rails.cache.increment(key + ':refresh-thread') == 1
      # The cache didn't exist; return nil so this caller regenerates it
      result = nil
    else
      # Another thread is already regenerating it; serve the last known fragment
      result = Rails.cache.read(key + ':last')
    end
  end
  result
end
This is a simplified summary of the blog post by Moshe Bergman that I linked to above.
There is no built-in protection against memcache stampedes. This is a real problem when multiple machines are involved, with multiple processes on each of those machines. -Ouch-.
The problem is compounded when one of the key processes has "died" leaving any "locking" ... locked.
In order to prevent stampedes you have to re-compute the data before it expires. So, if your data is valid for 10 minutes, you need to regenerate it at the 5-minute mark and re-set it with a fresh 10-minute expiration. That way you never wait until the data expires to set it again.
In other words, don't let your data expire at the 10-minute mark at all: re-compute it every 5 minutes, so it effectively never expires. :)
You can use wget & cron to periodically call the code.
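For example, a hypothetical rake task run from cron every 5 minutes could keep the entry fresh (ExpensiveReport.compute and the cache key name are made up for illustration):

# lib/tasks/refresh_cache.rake (hypothetical)
# Rewrites the cached value every 5 minutes with a fresh 10-minute expiry,
# so readers never see an expired entry and never stampede the source.
namespace :cache do
  task refresh_expensive_data: :environment do
    data = ExpensiveReport.compute # placeholder for the slow computation
    Rails.cache.write('expensive_data', data, expires_in: 10.minutes)
  end
end

cron would then run rake cache:refresh_expensive_data every 5 minutes, or wget an equivalent URL that triggers the same code.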
I recommend using redis, which will allow you to save the data and reload it in the event of a crash.
-daniel
A reasonable strategy would be to:
use a :race_condition_ttl of at least the expected time it takes to refresh the resource. Setting it to less than the time a refresh takes is not advisable, as the angry mob will end up trying to refresh it, resulting in a stampede.
use an :expires_in time calculated as the maximum acceptable expiry time minus the :race_condition_ttl, so that a single worker can refresh the resource while a stampede is avoided.
Using the above strategy ensures that you don't exceed your expiry/staleness deadline and also avoids a stampede. It works because only one worker gets through to refresh, whilst the angry mob is held off using the cached value for the race_condition_ttl extension period, right up to the originally intended expiry time.
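Concretely, if the maximum acceptable staleness is 10 minutes and a refresh takes up to 30 seconds, that works out to something like this (the key and the refresh method are placeholders):

# Nominal expiry at 9.5 minutes; the single refreshing worker gets up to
# 30 seconds (race_condition_ttl) while everyone else keeps the stale value,
# so the 10-minute staleness deadline is never exceeded.
Rails.cache.fetch('expensive_resource',
                  expires_in: 10.minutes - 30.seconds,
                  race_condition_ttl: 30.seconds) do
  recompute_expensive_resource # placeholder for the slow refresh
end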
I'm working on a mashup site and would like to limit the number of fetches used to scrape the source sites. There is essentially one bit of data I need, an integer, and I would like to cache it with a defined expiration period.
To clarify, I only want to cache the integer, not the entire page source.
Is there a ruby or rails feature or gem that already accomplishes this for me?
Yes, there is: ActiveSupport::Cache::Store.
An abstract cache store class. There are multiple cache store
implementations, each having its own additional features. See the
classes under the ActiveSupport::Cache module, e.g.
ActiveSupport::Cache::MemCacheStore. MemCacheStore is currently the
most popular cache store for large production websites.
Some implementations may not support all methods beyond the basic
cache methods of fetch, write, read, exist?, and delete.
ActiveSupport::Cache::Store can store any serializable Ruby object.
http://api.rubyonrails.org/classes/ActiveSupport/Cache/Store.html
cache = ActiveSupport::Cache::MemoryStore.new
cache.read('Chicago') # => nil
cache.write('Chicago', 2707000)
cache.read('Chicago') # => 2707000
Regarding the expiration time, this can be set by passing the time as an initialization parameter:
cache = ActiveSupport::Cache::MemoryStore.new(expires_in: 5.minutes)
If you want to cache a value with a different expiration time, you can also set this when writing a value to the cache
cache.write(key, value, expires_in: 1.minute) # Set a lower value for one entry
See Caching with Rails, particularly the :expires_in option to ActiveSupport::Cache::Store.
For example, you might go:
value = Rails.cache.fetch('key', expires_in: 1.hour) do
  expensive_operation_to_compute_value
end
I'm transforming my application (single client) into a multi-tenant application.
I used to store a few settings (rarely changed / very frequently accessed) into a global variable (a hash).
The values for this global variable were pulled out of the DB when the application started.
Obviously, these settings are specific to a client / tenant.
I now see a few options, but none of them seems good:
keep the global hash, but introduce the notion of a tenant, e.g. $global[:client1][:settingX]; this doesn't feel good/scalable/safe
call the DB every time, but I fear taking a performance hit (~40 additional calls to the DB)
do something with Memcache, but I don't know where to start with that option
Note: the application is hosted on Heroku
This sounds like a great argument for using memcached or redis. You want a centralized place where these values can live that's accessible to all of your pool of app servers. Your use case is predominantly read with occasional write to update values.
I would build a currency helper that lazy-loads from memcached/redis once per request and then saves the currency rate data for any value calculations in that request.
I don't know your problem domain, but let's assume you're trying to provide local currency pricing to different users in your system and that the rates are different for different users (ie: discounting, etc).
class CurrencyHelper
  attr_reader :currency_rates

  def initialize(user_id)
    @user_id = user_id
    @currency_rates = load_or_generate_exchange_rates
  end

  def load_or_generate_exchange_rates
    key = "/currency/rates/#{@user_id}"
    cached = REDIS.get(key)
    if cached
      JSON.parse(cached)
    else
      rates = generate_exchange_rates      # placeholder for your rate lookup
      REDIS.set(key, rates.to_json)        # store as JSON; redis values are strings
      rates
    end
  end

  def convert_from_usd_to(amount_usd, currency)
    round_money(amount_usd * currency_rates[currency])
  end
end
In your controller code:
def currency_helper
  @currency_helper ||= CurrencyHelper.new(current_user.id)
end

def show
  localized_price = currency_helper.convert_from_usd_to(price_usd, params[:country_code])
end
I have a Ruby hash that reaches approximately 10 megabytes if written to a file using Marshal.dump. After gzip compression it is approximately 500 kilobytes.
Iterating through and altering this hash is very fast in ruby (fractions of a millisecond). Even copying it is extremely fast.
The problem is that I need to share the data in this hash between Ruby on Rails processes. In order to do this using the Rails cache (file_store or memcached) I need to Marshal.dump the hash first; however, this incurs a roughly 1000 millisecond delay when serializing it and a 400 millisecond delay when deserializing it.
Ideally I would want to be able to save and load this hash from each process in under 100 milliseconds.
One idea is to spawn a new Ruby process to hold this hash that provides an API to the other processes to modify or process the data within it, but I want to avoid doing this unless I'm certain that there are no other ways to share this object quickly.
Is there a way I can more directly share this hash between processes without needing to serialize or deserialize it?
Here is the code I'm using to generate a hash similar to the one I'm working with:
@a = []
0.upto(500) do |r|
  @a[r] = []
  0.upto(10_000) do |c|
    if rand(10) == 0
      @a[r][c] = 1 # 10% chance of being 1
    else
      @a[r][c] = 0
    end
  end
end

@c = Marshal.dump(@a) # 1000 milliseconds
Marshal.load(@c)      # 400 milliseconds
Update:
Since my original question did not receive many responses, I'm assuming there's no solution as easy as I would have hoped.
Presently I'm considering two options:
Create a Sinatra application to store this hash with an API to modify/access it.
Create a C application to do the same as #1, but a lot faster.
The scope of my problem has increased such that the hash may be larger than my original example. So #2 may be necessary. But I have no idea where to start in terms of writing a C application that exposes an appropriate API.
A good walkthrough of how best to implement #1 or #2 may receive best-answer credit.
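For what it's worth, here's roughly what I imagine #1 looking like (a bare-bones sketch; the routes and in-memory storage are purely illustrative):

# hash_server.rb -- hold the structure in one long-running process and let the
# Rails processes read/write individual cells over HTTP instead of shipping
# the whole 10 MB hash around. No auth, no error handling.
require 'sinatra'
require 'json'

DATA = Hash.new { |h, k| h[k] = {} }

get '/values/:row/:col' do
  content_type :json
  { value: DATA[params[:row]][params[:col]] }.to_json
end

put '/values/:row/:col' do
  DATA[params[:row]][params[:col]] = JSON.parse(request.body.read)['value']
  status 204
end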
Update 2
I ended up implementing this as a separate application written in Ruby 1.9 that exposes a DRb interface for the application instances to communicate with. I use the Daemons gem to spawn the DRb instance when the web server starts up. On startup, the DRb application loads the necessary data from the database, and then it communicates with the clients to return results and to stay up to date. It's running quite well in production now. Thanks for the help!
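Roughly, the startup wrapper looks like this (simplified; InterestStore, the port, and the data-loading call are specific to my app and shown here only as placeholders):

# drb_server_control.rb (simplified) -- daemonize the DRb data server so the
# web server can spawn it on boot and leave it running independently.
require 'daemons'

Daemons.run_proc('interest_drb_server') do
  require_relative 'config/environment'    # boot Rails so the database is reachable
  require 'drb'

  front = InterestStore.load_from_database # app-specific: load the hash once
  DRb.start_service('druby://localhost:9999', front)
  DRb.thread.join
end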
A Sinatra app will work, but the (un)serializing and the HTTP parsing could impact performance compared to a DRb service.
Here's an example, based on your example in the related question. I'm using a hash instead of an array so you can use user ids as indexes. This way there is no need to keep both a table of interests and a table of user ids on the server. Note that the interest table is "transposed" compared to your example, which is the way you want it anyway, so it can be updated in one call.
# server.rb
require 'drb'

class InterestServer < Hash
  include DRbUndumped # don't send the data over!

  def closest(cur_user_id)
    cur_interests = fetch(cur_user_id)
    selected_interests = cur_interests.each_index.select { |i| cur_interests[i] }
    scores = map do |user_id, interests|
      nb_match = selected_interests.count { |i| interests[i] }
      [nb_match, user_id]
    end
    scores.sort!
  end
end

DRb.start_service nil, InterestServer.new
puts DRb.uri

DRb.thread.join
# client.rb
uri = ARGV.shift
require 'drb'
DRb.start_service
interest_server = DRbObject.new nil, uri
USERS_COUNT = 10_000
INTERESTS_COUNT = 500
# Mock users
users = Array.new(USERS_COUNT) { {:id => rand(100000)+100000} }
# Initial send over user interests
users.each do |user|
  interest_server[user[:id]] = Array.new(INTERESTS_COUNT) { rand(10) == 0 }
end
# query at will
puts interest_server.closest(users.first[:id]).inspect
# update, say there's a new user:
new_user = {:id => 42}
users << new_user
# This guy is interested in everything!
interest_server[new_user[:id]] = Array.new(INTERESTS_COUNT) { true }
puts interest_server.closest(users.first[:id])[-2,2].inspect
# Will output our first user and this new user which both match perfectly
To run in terminal, start the server and give the output as the argument to the client:
$ ruby server.rb
druby://mal.lan:51630
$ ruby client.rb druby://mal.lan:51630
[[0, 100035], ...]
[[45, 42], [45, 178902]]
Maybe it's too obvious, but if you sacrifice a little access speed to the members of your hash, a traditional database will give you much more consistent, constant-time access to individual values. You could start there and then add caching to see if you can get enough speed out of it. This will be a little simpler than using Sinatra or some other tool.
Be careful with memcache: it has an object size limitation (1 MB by default).
One thing to try is to use MongoDB as your storage. It is pretty fast and you can map pretty much any data structure into it.
If it's sensible to wrap your monster hash in a method call, you might simply present it using DRb - start a small daemon that starts a DRb server with the hash as the front object - other processes can make queries of it using what amounts to RPC.
More to the point, is there another approach to your problem? Without knowing what you're trying to do, it's hard to say for sure - but maybe a trie, or a Bloom filter would work? Or even a nicely interfaced bitfield would probably save you a fair amount of space.
Have you considered upping the memcache max object size?
Versions greater than 1.4.2
memcached -I 11m #giving yourself an extra MB in space
or, on previous versions, changing the value of POWER_BLOCK in slabs.c and recompiling.
What about storing the individual values in Memcache instead of storing the whole Hash in Memcache? Using your code above:
0.upto(500) do |r|
  0.upto(10_000) do |c|
    key = "#{r}:#{c}"
    if rand(10) == 0
      Cache.set(key, 1) # 10% chance of being 1
    else
      Cache.set(key, 0)
    end
  end
end
This will be speedy, you won't have to worry about serialization, and all of your systems will have access to it. As for how to read the data back (which I asked about in a comment on the main post), you will have to get creative, but it should be easy to do.
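For example, reading a whole row back can be done in one round trip with a multi-get (assuming the memcache client exposed as Cache above supports get_multi, as memcache-client does):

# Fetch all 10,001 cells of row 42 in a single multi-get instead of issuing
# one read per key.
keys = (0..10_000).map { |c| "42:#{c}" }
row  = Cache.get_multi(*keys) # => { "42:0" => 0, "42:1" => 1, ... }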