Doing efficient mathematical calculations in Redis - ruby-on-rails

I've been looking around the web for information on doing math in Redis and haven't actually found much. I'm using the redis-rb gem in Rails, and caching lists of results:
e = [1738738.0, 2019461.0, 1488842.0, 2272588.0, 1506046.0, 2448701.0, 3554207.0, 1659395.0, ...]
$redis.lpush "analytics:math_test", e
Currently, our lists of numbers max out in the thousands to tens of thousands per list per day, with the number of lists likely in the thousands per day. (This is not actually that much; however, we're growing, and expect much larger sample sizes very soon.)
For each of these lists, I'd like to be able to run stats. I currently do this in-app:
def basic_stats(arr)
  return nil if arr.nil? || arr.empty?
  min = arr.min.to_f
  max = arr.max.to_f
  total = arr.inject(:+)
  len = arr.length
  mean = total.to_f / len # to_f so we don't get an integer result
  sorted = arr.sort
  median = len % 2 == 1 ? sorted[len / 2] : (sorted[len / 2 - 1] + sorted[len / 2]).to_f / 2
  sum = arr.inject(0) { |accum, i| accum + (i - mean)**2 }
  variance = sum / (arr.length - 1).to_f
  std_dev = Math.sqrt(variance).nan? ? 0 : Math.sqrt(variance)
  {min: min, max: max, mean: mean, median: median, std_dev: std_dev, size: len}
end
and, while I could simply store the stats, I will often have to aggregate lists together and run stats on the aggregated list. Thus, it makes sense to store the raw numbers rather than every possible aggregated set. Because of this, I need the math to be fast, and I have been exploring ways to do it. The simplest way is just doing it in-app; with 150k items in a list, this isn't actually too terrible:
$redis_analytics.llen "analytics:math_test"
=> 156954
Benchmark.measure do
  basic_stats $redis_analytics.lrange("analytics:math_test", 0, -1).map(&:to_f)
end
=> 2.650000 0.060000 2.710000 ( 2.732993)
While I'd rather not push 3 seconds for a single calculation, given that this is about 10x the number of samples in my current use case, it's not terrible. But what if we were working with a sample size of one million or so?
$redis_analytics.llen("analytics:math_test")
=> 1063454
Benchmark.measure do
  basic_stats $redis_analytics.lrange("analytics:math_test", 0, -1).map(&:to_f)
end
=> 21.360000 0.340000 21.700000 ( 21.847734)
Options
Use the SORT command on the list; then you can get min/max/length in Redis almost instantaneously (see the sketch after this list). Unfortunately, it seems that you still have to go in-app for things like median, mean, and std_dev, unless we can calculate these in Redis.
Use Lua scripting to do the calculations. (I haven't learned any Lua yet, so I can't say I know what this would look like; see the rough sketch after this list. If it's likely faster, I'd like to know so I can try it.)
Find some more efficient way to use Ruby, which seems a wee bit unlikely, since using what seems like a fairly decent stats gem gives analogous results (see the example below).
Use a different database.
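To make options 1 and 2 concrete, here are two rough, untested sketches. For option 1, SORT with a LIMIT covers min and max, and LLEN covers the length (this assumes redis-rb's sort options; everything else still has to happen in-app):

min = $redis_analytics.sort("analytics:math_test", limit: [0, 1]).first.to_f
max = $redis_analytics.sort("analytics:math_test", limit: [0, 1], order: "DESC").first.to_f
len = $redis_analytics.llen("analytics:math_test")

For option 2, a server-side Lua script run via EVAL can at least do the single-pass aggregates (min/max/sum/count) without shipping the whole list to Ruby. This assumes Redis 2.6+ and redis-rb's eval(script, keys: [...]) form; median and std_dev would still need the full sorted data, so they're left out:

STATS_LUA = <<-LUA
  local vals = redis.call('LRANGE', KEYS[1], 0, -1)
  local n = #vals
  if n == 0 then return {} end
  local min, max, sum = tonumber(vals[1]), tonumber(vals[1]), 0
  for i = 1, n do
    local v = tonumber(vals[i])
    if v < min then min = v end
    if v > max then max = v end
    sum = sum + v
  end
  -- Redis truncates Lua numbers to integers on reply, so return strings
  return { tostring(min), tostring(max), tostring(sum), tostring(n) }
LUA

def redis_basic_stats(key)
  min, max, sum, n = $redis_analytics.eval(STATS_LUA, keys: [key])
  return nil if n.nil?
  { min: min.to_f, max: max.to_f, mean: sum.to_f / n.to_i, size: n.to_i }
end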
Example using the statsample gem
Using a gem seems to gain me nothing. In Python I'd probably write a C module; I'm not sure how many Ruby stats gems are written in C.
require 'statsample'

def basic_stats(stats)
  return nil if stats.nil? || stats.empty?
  arr = stats.to_scale
  {min: arr.min, max: arr.max, mean: arr.mean, median: arr.median, std_dev: arr.sd, size: stats.length}
end
Benchmark.measure do
  basic_stats $redis_analytics.lrange("analytics:math_test", 0, -1).map(&:to_f)
end
=> 20.860000 0.440000 21.300000 ( 21.436437)
Coda
It's quite possible, of course, that such large stats calculations will simply take a long time and that I should offload them to a queue. However, given that much of this math is actually happening inside Ruby/Rails, rather than in the database, I thought I might have other options.

I want to keep this open in case anyone has any input that could help others doing the same thing. For me, however, I've just realized that I'm spending too much time trying to force Redis to do something that SQL does quite well. If I simply dump this into Postgres, I can do really efficient aggregation AND math directly in the database. I think I was just stuck using Redis for something that was a good idea when it started, but scaled out into something bad.
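For the record, here is roughly what the Postgres version looks like. The samples(list_key, value) table is a hypothetical schema, and PERCENTILE_CONT as an ordered-set aggregate needs PostgreSQL 9.4+, so treat this as a sketch rather than the exact query:

sql = <<-SQL
  SELECT MIN(value)                                         AS min,
         MAX(value)                                         AS max,
         AVG(value)                                         AS mean,
         PERCENTILE_CONT(0.5) WITHIN GROUP (ORDER BY value) AS median,
         STDDEV_SAMP(value)                                 AS std_dev,
         COUNT(*)                                           AS size
  FROM samples
  WHERE list_key = 'analytics:math_test'
SQL

stats = ActiveRecord::Base.connection.select_one(sql)
# => {"min" => ..., "max" => ..., "mean" => ..., "median" => ..., "std_dev" => ..., "size" => ...}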

Lua scripting is probably the best way to solve this problem, if you can switch to Redis 2.6. By the way, testing the speed should be pretty straightforward, so given the small time investment needed, I strongly suggest trying Lua scripting to see what result you get.
Another thing you could do is use Lua when you set the data to also update a related Hash type for every list, directly retaining the min/max/average stats, so you don't have to compute those stats every time: they are incrementally updated. This is not always possible, though; it depends on your specific use case.
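A rough, untested sketch of that second idea, with made-up key names: a single EVAL pushes the value and keeps a companion hash of running count/sum/min/max up to date (the mean is then just sum/count; a running variance would need a Welford-style update on top):

PUSH_AND_TRACK = <<-LUA
  local list, hash, v = KEYS[1], KEYS[2], tonumber(ARGV[1])
  redis.call('RPUSH', list, ARGV[1])
  redis.call('HINCRBY', hash, 'count', 1)
  redis.call('HINCRBYFLOAT', hash, 'sum', ARGV[1])
  local min = redis.call('HGET', hash, 'min')
  if (not min) or v < tonumber(min) then redis.call('HSET', hash, 'min', ARGV[1]) end
  local max = redis.call('HGET', hash, 'max')
  if (not max) or v > tonumber(max) then redis.call('HSET', hash, 'max', ARGV[1]) end
  return redis.status_reply('OK')
LUA

$redis_analytics.eval(PUSH_AND_TRACK,
                      keys: ["analytics:math_test", "analytics:math_test:stats"],
                      argv: [1738738.0])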

I would take a look at NArray. From their homepage:
This extension library incorporates fast calculation and easy manipulation of large numerical arrays into the Ruby language.
It looks like their array class has most of the functions you need built in. Cmd-F "Statistics" on that page.
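A rough, untested sketch of how that could look (method names taken from the NArray docs; double-check them against the version you install):

require 'narray' # gem install narray

def narray_stats(arr)
  return nil if arr.nil? || arr.empty?
  na = NArray.to_na(arr) # C-backed numeric array
  n  = na.size
  s  = na.sort
  median = n.odd? ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2.0
  { min: na.min, max: na.max, mean: na.mean,
    median: median, std_dev: na.stddev, size: n }
end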

Related

Ruby: reverse calculation or spreadsheet function

I've got a module that calculates about 150-200 values. Once it's done, I want the ability to edit one or some of the results—which may or may not be one of the original, non-calculated values—and have the other results update, like the functionality of a spreadsheet.
The problem is I really don't know where to start. The module's code looks largely like this:
if @user.override > 0
  h[:floor] += (@user.override / h[:size]) * 0.03
end

if @user.other_override > 0
  h[:floor] += (@user.other_override / h[:size]) * 0.03
end
And is quite chronological, making this task even tougher.
Is there any approach here that'll work? I can barely wrap my head around how it might, other than to leverage an actual spreadsheet into my app.
What you are looking for is called "Reactive Programming". Many languages have an implementation of the ReactiveX framework, so does Ruby: RxRuby
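As a toy illustration of the idea in plain Ruby (this is not RxRuby's API, and the names are made up): model each value as a cell that is either a stored number or a formula over other cells, and recompute formula cells on read. Editing any input then updates everything derived from it, spreadsheet-style, albeit lazily rather than via true push-based reactivity:

# Toy sketch: a Cell holds either a value or a formula over other cells.
class Cell
  def initialize(value = nil, &formula)
    @value   = value
    @formula = formula
  end

  attr_writer :value

  def value
    @formula ? @formula.call : @value
  end
end

size     = Cell.new(100.0)
override = Cell.new(30.0)
floor    = Cell.new { (override.value / size.value) * 0.03 }

floor.value      # => 0.009
override.value = 60.0
floor.value      # => 0.018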

Lua tables: performance hit for starting array indexing at 0?

I'm porting FFT code from Java to Lua, and I'm starting to worry a bit about the fact that in Lua the array part of a table starts indexing at 1 while in Java array indexing starts at 0.
For the input array this causes no problem because the Java code is set up to handle the possibility that the data under consideration is not located at the start of the array. However, all of the working arrays internal to the code are assumed to start indexing at 0. I know that the code will work as written -- Lua tables are awesome like that -- but I have no sense at all about the performance hit I might incur by having the "0" element of the array going into the hash table part of the underlying C structure (or indeed, if that is what will happen).
My question: is this something worth worrying about? Should I be planning to profile and hand-optimize the code? (The code will eventually be used to transform many relatively small (> 100 time points) signals of varying lengths not known in advance.)
I have made a small, probably not that reliable, test:
local arr = {}
for i=0,10000000 do
  arr[i] = i*2
end

for k, v in pairs(arr) do
  arr[k] = v*v
end
And a similar version with 1 as the first index. On my system:
$ time lua example0.lua
real 2.003s
$ time lua example1.lua
real 2.014s
I was also interested in how table.insert would perform:
for i=1,10000000 do
  table.insert(arr, 2*i)
...
and, surprisingly:
$ time lua example2.lua
real 6.012s
Results:
Of course, it depends on what system you're running it on, and probably also on which Lua version, but it seems that it makes little to no difference between zero-start and one-start. A bigger difference is caused by the way you insert things into the array.
I think the correct answer in this case is changing the algorithm so that everything is indexed with 1. And consider that part of the conversion.
Your FFT will be less surprising to another Lua user (like me), given that all "array-like" tables are indexed by one.
It might not be as stressful as you might think, given the way numeric loops are structured in Lua (where the "start" and the "end" are "inclusive"). You would be exchanging this:
for i=0,#array-1 do
  ... (do stuff with i)
end
for this:
for i=1,#array do
  ... (do stuff with i)
end
The non-numeric loops would remain unchanged (except that you will be able to use ipairs too, if you so desire).

Is there a cleverer Ruby algorithm than brute-force for finding correlation in multidimensional data?

My platform here is Ruby - a webapp using Rails 3.2 in particular.
I'm trying to match objects (people) based on their ratings for certain items. People may rate all, some, or none of the same items as other people. Ratings are integers between 0 and 5. The number of items available to rate, and the number of users, can both be considered to be non-trivial.
The brute-force approach is to iterate through all people, calculating differences for each item. In Ruby-flavoured pseudo-code -
MATCHES = {}
for each (PERSON in (people except USER)) do
  for each (RATING that PERSON has made) do
    if (USER has rated the item that RATING refers to) do
      MATCHES[PERSON's id] += difference between PERSON's rating and USER's rating
    end
  end
end
lowest values in MATCHES are the best matches for USER
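(For concreteness, here is an untested plain-Ruby version of the above, where ratings is assumed to be an in-memory hash of user_id => { item_id => score }, not my actual schema.)

def brute_force_matches(user_id, ratings)
  user_ratings = ratings[user_id]
  matches = Hash.new(0)
  ratings.each do |person_id, person_ratings|
    next if person_id == user_id
    person_ratings.each do |item_id, score|
      next unless user_ratings.key?(item_id)
      matches[person_id] += (score - user_ratings[item_id]).abs
    end
  end
  matches.sort_by { |_person_id, diff| diff } # lowest totals are the best matches
end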
The problem here is that as the number of items, ratings, and people increases, this code will take a very significant time to run. Ignoring caching for now, this is code that has to run a lot, since this matching is the primary function of my app.
I'm open to cleverer algorithms and cleverer databases to achieve this, but doing it algorithmically, and thereby being able to keep everything in MySQL or PostgreSQL, would make my life a lot easier. The only thing I'd say is that the data does need to persist.
If any more detail would help, please feel free to ask. Any assistance greatly appreciated!
Check out the KD-Tree. It's specifically designed to speed up neighbour-finding in N-Dimensional spaces, like your rating system (Person 1 is 3 units along the X axis, 4 units along the Y axis, and so on).
You'll likely have to do this in an actual programming language. There are spatial indexes for some DBs, but they're usually designed for geographic work, like PostGIS (which uses GiST indexing), and only support two or three dimensions.
That said, I did find this tantalizing blog post on PostGIS. I was then unable to find any other references to this, but maybe your luck will be better than mine...
Hope that helps!
Technically your task is matching long strings made out of characters of a 5 letter alphabet. This kind of stuff is researched extensively in the area of computational biology. (Typically with 4 letter alphabets). If you do not know the book http://www.amazon.com/Algorithms-Strings-Trees-Sequences-Computational/dp/0521585198 then you might want to get hold of a copy. IMHO this is THE standard book on fuzzy matching / scoring of sequences.
Is your data sparse? With rating, most of the time not every user rates every object.
Is your data sparse? With ratings, most of the time not every user rates every object. Naively comparing each object to every other is O(n*n*d), where d is the number of operations per pair (i.e. the number of items). However, a key trick of all the Hadoop solutions is to transpose the matrix and work only on the non-zero values in the columns. Assuming that your sparsity is s=0.01, this reduces the runtime to O(d*n*s*n*s), i.e. by a factor of s*s. So if your sparsity is 1 out of 100, your computation will theoretically be 10000 times faster.
Note that the resulting data will still be an O(n*n) distance matrix, so strictly speaking the problem is still quadratic.
The way to beat the quadratic factor is to use index structures. The k-d-tree has already been mentioned, but I'm not aware of a version for categorical / discrete data and missing values. Indexing such data is not very well researched AFAICT.
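To make the transpose trick concrete, here is an untested Ruby sketch: index the ratings by item instead of by user, so only users who actually co-rated an item are ever compared (the by_item shape is assumed, not taken from the question):

# by_item: { item_id => { user_id => score } } -- only non-empty "columns" exist
def sparse_differences(by_item)
  diffs = Hash.new(0)
  by_item.each_value do |raters|
    raters.to_a.combination(2).each do |(u1, s1), (u2, s2)|
      pair = [u1, u2].sort
      diffs[pair] += (s1 - s2).abs
    end
  end
  diffs # { [user_a, user_b] => accumulated rating difference }
end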

Multiple calculations on the same set of data: ruby or database?

I have a model Transaction for which I need to display the results of many calculations on many fields for a subset of transactions.
I've seen 2 ways to do it, but am not sure which is best. I'm after the one that will have the least impact on performance as the data set grows and the number of concurrent users increases.
data[:total_before] = Transaction.where(xxx).sum(:amount_before)
data[:total_after] = Transaction.where(xxx).sum(:amount_after)
...
or
transactions = Transaction.where(xxx)
data[:total_before]= transactions.inject(0) {|s, e| s + e.amount_before }
data[:total_after]= transactions.inject(0) {|s, e| s + e.amount_after }
...
edit: the where clause is always the same.
Which one should I choose? (or is there a 3rd, better way?)
Thanks, P.
Not to nag, but what about
transactions = Transaction.where(xxx)
data[:total_before] = transactions.sum(:amount_before)
data[:total_after] = transactions.sum(:amount_after)
? This looks like the union of the strengths of methods 1 and 2 :) You reuse the search results and employ the cleaner, Rails-specific sum aggregator.
PS If you were asking whether you can rely on Rails to cache the results of the Transaction.where(xxx) query, that I don't know. And when I don't know, I prefer to play it safe.
Really you're talking about scalability.
If you're talking about millions of rows and needing to do calculations on them, then which do you think would be faster?
1. Asking the DBM to summarize millions of rows and return you two numbers.
2. Returning millions of query results across the network which you iterate over twice.
In the first scenario you can scale up your DB host with faster CPUs, more RAM, faster drives or pre-compute your values at regular intervals. The calculations you want done in the DBM are exactly the sort of things it's written to do.
In the second scenario you have to scale up your computing host, and maybe the switch connecting the DBM and computing host, plus maybe the database host because it will have to retrieve and push the data. Imagine the impact on the network as it's handling the data, and the impact on the computing host's CPU as it's doing everything.
I'd do the first one as it seems a lot more scalable to me.
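A third variant worth benchmarking (an untested sketch, reusing the question's xxx placeholder): ask the database for both sums in a single round trip.

sql = Transaction.where(xxx)
                 .select("SUM(amount_before) AS total_before, SUM(amount_after) AS total_after")
                 .to_sql
row = ActiveRecord::Base.connection.select_one(sql)

data[:total_before] = row["total_before"] # values may come back as strings,
data[:total_after]  = row["total_after"]  # depending on the adapter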

Rails design doubt: Should/could I load the whole dictionary/table into memory?

I am a newbie working on a simple Rails app that translates a document (a long string) from one language to another. The dictionary is a table of terms (a string regexp to find and substitute, and a block that outputs a substituting string). The table is 1 million records long.
Each request is a document that wants to be translated. In a first brute-force approach, I need to run the whole dictionary against each request/document.
Since the dictionary will be run in its entirety every time (from the first record to the last), instead of loading the table of dictionary records with each document, I think the best approach would be to keep the whole dictionary as an array in memory.
I know it is not the most efficient, but the dictionary has to run whole at this point.
1. If no efficiency can be gained by restructuring the document and dictionary (meaning it is not possible to create smaller subsets of the dictionary), what is the best design approach?
2. Do you know of similar projects that I can learn from?
3. Where should I look to learn how to load such a big table into memory (cache?) at Rails startup?
Any answer to any of the posed questions will be greatly appreciated. Thank you very much!
I don't think your web host will be happy with a solution like this. This script
dict = {}
(0..1000_000).each do |num|
  dict[/#{num}/] = "#{num}_subst"
end
consumes a gigabyte of RAM on my MBP just for storing the hash table. Another approach would be to store your substitutions marshaled in memcached so that you could (at least) store them across machines.
require 'rubygems'
require 'memcached'

@table = Memcached.new("localhost:11211")
retained_keys = (0..1000_000).each do |num|
  stored_blob = Marshal.dump([/#{num}/, "#{num}_subst"])
  @table.set("p#{num}", stored_blob)
end
You will have to worry about keeping the keys "hot" since memcached will expire them if they are not needed.
The best approach however, for your case, would be very simple - write your substitutions to a file (one line per substitution) and make a stream-filter that reads the file line by line, and replaces from this file. You can also parallelize that by mapping work on this, say, per letter of substitution and replacing markers.
But this should get you started:
require "base64"
File.open("./dict.marshal", "wb") do | file |
(0..1000_000).each do | num |
stored_blob = Base64.encode64(Marshal.dump([/#{num}/, "#{num}_subst"]))
file.puts(stored_blob)
end
end
puts "Table populated (should be a 35 meg file), now let's run substitutions"
File.open("./dict.marshal", "r") do | f |
until f.eof?
pattern, replacement = Marshal.load(Base64.decode64(f.gets))
end
end
puts "All replacements out"
To populate the file AND load each substitution, this takes me:
real 0m21.262s
user 0m19.100s
sys 0m0.502s
To just load the regexp and the string from file (all the million, piece by piece)
real 0m7.855s
user 0m7.645s
sys 0m0.105s
So this is 7 seconds IO overhead, but you don't lose any memory (and there is huge room for improvement) - the RSIZE is about 3 megs. You should easily be able to make it go faster if you do IO in bulk, or make one file for 10-50 substitutions and load them as a whole. Put the files on an SSD or a RAID and you got a winner, but you get to keep your RAM.
In production mode, Rails will not reload classes between requests. You can keep something in memory easily by putting it into a class variable.
You could do something like:
class Dictionary < ActiveRecord::Base
  @@cached = nil
  mattr_accessor :cached

  def self.cache_dict!
    @@cached = Dictionary.all
  end
end
And then in production.rb:
Dictionary.cache_dict!
For your specific questions:
1. Possibly write the part that's inefficient in C or Java or a faster language.
2. Nope, sorry. Maybe you could do a MapReduce algorithm to distribute the load across servers.
3. See above.
This isn't so much a specific answer to one of your questions as a process recommendation. If you're having (or anticipating) performance issues, you should be using a profiler from the get-go.
Check out this tutorial: How to Profile Your Rails Application.
My experience on a number of platforms (ANSI C, C#, Ruby) is that performance problems are very hard to deal with in advance; rather, you're better off implementing something that looks like it might be performant then load-testing it through a profiler.
Then, once you know where your time is being spent, you can expend some effort on optimisation.
If I had to take a guess, I'd say the regex work you'll be performing will be as much of a performance bottleneck as any ActiveRecord work. But without verifying that with a profiler, that guess is of little value.
If you use something like cache_fu, you can then leverage something like memcache without doing any work yourself. If you are trying to bring 1 MM rows into memory, being able to leverage the distributed nature of memcache will probably be useful.
