I'm using Mahout's EuclideanDistanceSimilarity class to rank the similarity of several users given the following data set of user preferences. The range for preferences is currently all integers from 1 to 5, inclusive. However, I have control over the scale, so that can change if it would help.
User Preferences:
User   Item 1   Item 2   Item 3   Item 4   Item 5   Item 6
1      2        4        3        5        1        2
2      5        1        5        1        5        1
3      1        5        1        5        1        5
4      2        4        3        5        1        2
5      3        3        4        5        2        2
I'm getting unexpected results when I run the following test code, which I added to the Test class found here: http://www.massapi.com/source/mahout-distribution-0.4/core/src/test/java/org/apache/mahout/cf/taste/impl/similarity/EuclideanDistanceSimilarityTest.java.html
@Test
public void testSimple2() throws Exception {
  DataModel dataModel = getDataModel(
      new long[] {1, 2, 3, 4, 5},
      new Double[][] {
          {2.0, 4.0, 3.0, 5.0, 1.0, 2.0},
          {5.0, 1.0, 5.0, 1.0, 5.0, 1.0},
          {1.0, 5.0, 1.0, 5.0, 1.0, 5.0},
          {2.0, 4.0, 3.0, 5.0, 1.0, 2.0},
          {3.0, 3.0, 4.0, 5.0, 2.0, 2.0},
      });
  // Construct the similarity once rather than on every loop iteration.
  UserSimilarity similarity = new EuclideanDistanceSimilarity(dataModel);
  for (int i = 1; i <= 5; i++) {
    for (int j = 1; j <= 5; j++) {
      System.out.println(i + "," + j + ": " + similarity.userSimilarity(i, j));
    }
  }
}
It produces the following results:
1,1: 1.0
1,2: 0.7129109430106292
1,3: 1.0
1,4: 1.0
1,5: 1.0
2,1: 0.7129109430106292
2,2: 1.0
2,3: 0.5556605665978556
2,4: 0.7129109430106292
2,5: 0.8675434911352263
3,1: 1.0
3,2: 0.5556605665978556
3,3: 1.0
3,4: 1.0
3,5: 0.9683428667784535
4,1: 1.0
4,2: 0.7129109430106292
4,3: 1.0
4,4: 1.0
4,5: 1.0
5,1: 1.0
5,2: 0.8675434911352263
5,3: 0.9683428667784535
5,4: 1.0
5,5: 1.0
Would someone please help me understand what I'm doing wrong here? Clearly, user 1's preferences are not identical to users 3 & 5, so why do I get 1.0 for the similarity?
I'm open to using a different algorithm if Euclidean won't work, however Pearson doesn't work for me because I need to handle users that submit identical preferences for each item and I do not want to correct for "grade inflation."
It is a little weird, but I can explain what's happening.
The Euclidean distance d can't be used as a similarity metric directly, since it grows as similarity shrinks. You could use 1/d, but then perfect matches would result in infinity rather than 1. So you can use 1/(1+d).
The problem is that the distance can only be calculated over the dimensions that both users have in common, and more dimensions typically mean more distance. So 1/(1+d) penalizes overlap, the opposite of what you'd expect.
So the formula is really n/(1+d), where n is the number of dimensions of overlap. In some cases that results in a similarity greater than 1, which is then capped back to 1.
n is not the right factor; it's an old, simple kludge. I will ask on the mailing list about a more correct expression. For large data this tends to work OK, though.
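To see why users 1 and 3 come out at 1.0, here's a quick back-of-the-envelope check of the n/(1+d) formula in plain Python (this is not Mahout's own code, just the arithmetic described above, applied to the preferences from the question):
import math

# Preferences from the question's data set.
u1 = [2, 4, 3, 5, 1, 2]  # user 1
u3 = [1, 5, 1, 5, 1, 5]  # user 3

d = math.sqrt(sum((a - b) ** 2 for a, b in zip(u1, u3)))  # Euclidean distance, ~3.873
n = len(u1)                                               # 6 overlapping dimensions

raw = n / (1 + d)           # ~1.231, already greater than 1
similarity = min(1.0, raw)  # capped back to 1.0, matching the 1,3 result in the test
print(raw, similarity)
With six overlapping dimensions, the numerator swamps the distance, which is exactly the capping behavior described above.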
In my model, I am saving results from numerous Parameter Variation runs in a Histogram Data object.
Here are my Histogram Data settings:
Number of intervals: 7
Value range: Automatically detected
Initial Interval Size: 10
I then print out these results using the following:
// If this is the final replication, write the histogram data into Excel.
if (getCurrentReplication() == lastReplication) {
    double intervalWidth = histogramData.getIntervalWidth();
    int intervalQty = histogramData.getNumberOfIntervals();
    for (int i = 0; i < intervalQty; i++) {
        traceln(intervalWidth * i + " " + histogramData.getPDF(i));
        excelRecords.setCellValue(String.valueOf(intervalWidth * i) + " - "
                + String.valueOf(intervalWidth * (i + 1)), 1, rowIndex, columnIndex);
        excelRecords.setCellValue(histogramData.getPDF(i), 1, rowIndex, columnIndex + 1);
        rowIndex++;
    }
}
Example of my intended results:
10 - 80%
20 - 10%
30 - 5%
40 - 2%
50...
60...
Actual results:
0.0 0.0
10.0 0.0
20.0 0.0
30.0 0.998782775272379
40.0 0.0011174522089635631
50.0 9.9772518657461E-5
60.0 0.0
Results after setting the initial interval size to 0.1:
0.0 0.9974651710510558
4.0 0.001117719851502934
8.0 9.181270208774101E-4
12.0 2.3951139675062872E-4
16.0 1.5967426450041916E-4
20.0 9.979641531276197E-5
24.0 0.0
How would I go about obtaining my desired results? Am I fundamentally misunderstanding something about the HistogramData object?
Thank you for your help.
The function you are using, getPDF(i), returns the value for the interval as a fraction, not a percentage, so you have to multiply it by 100 to get a percentage. As for the histogram bars: the model analyzes the results along with the specified number of intervals and the interval size, and then builds however many bars are needed to cover all results. In your case the intervals from 0 to 30 contain no results, so no bars are presented there (the PDF is 0.0).
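As a language-neutral illustration of that scaling step (plain Python here, not the AnyLogic API; the fractional PDF values are copied from the question's output):
# Fractional PDF values like those getPDF(i) returned in the question.
pdf = [0.0, 0.0, 0.0, 0.998782775272379,
       0.0011174522089635631, 9.9772518657461e-05, 0.0]
interval_width = 10.0

for i, fraction in enumerate(pdf):
    lower = interval_width * i
    upper = interval_width * (i + 1)
    # Multiply the fraction by 100 to report it as a percentage:
    print(f"{lower:g} - {upper:g}: {fraction * 100:.2f}%")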
Check out this question:
Swift probability of random number being selected?
The top answer suggests using a switch statement, which does the job. However, if I have a very large number of cases to consider, the code looks very inelegant; I have a giant switch statement with very similar code repeated in each case.
Is there a nicer, cleaner way to pick a random number with a certain probability when you have a large number of probabilities to consider? (like ~30)
This is a Swift implementation strongly influenced by the various answers to Generate random numbers with a given (numerical) distribution.
For Swift 4.2/Xcode 10 and later (explanations inline):
func randomNumber(probabilities: [Double]) -> Int {
    // Sum of all probabilities (so that we don't have to require that the sum is 1.0):
    let sum = probabilities.reduce(0, +)
    // Random number in the range 0.0 <= rnd < sum:
    let rnd = Double.random(in: 0.0 ..< sum)
    // Find the first interval of accumulated probabilities into which `rnd` falls:
    var accum = 0.0
    for (i, p) in probabilities.enumerated() {
        accum += p
        if rnd < accum {
            return i
        }
    }
    // This point might be reached due to floating point inaccuracies:
    return probabilities.count - 1
}
Examples:
let x = randomNumber(probabilities: [0.2, 0.3, 0.5])
returns 0 with probability 0.2, 1 with probability 0.3,
and 2 with probability 0.5.
let x = randomNumber(probabilities: [1.0, 2.0])
returns 0 with probability 1/3 and 1 with probability 2/3.
For Swift 3/Xcode 8:
func randomNumber(probabilities: [Double]) -> Int {
    // Sum of all probabilities (so that we don't have to require that the sum is 1.0):
    let sum = probabilities.reduce(0, +)
    // Random number in the range 0.0 <= rnd < sum:
    let rnd = sum * Double(arc4random_uniform(UInt32.max)) / Double(UInt32.max)
    // Find the first interval of accumulated probabilities into which `rnd` falls:
    var accum = 0.0
    for (i, p) in probabilities.enumerated() {
        accum += p
        if rnd < accum {
            return i
        }
    }
    // This point might be reached due to floating point inaccuracies:
    return probabilities.count - 1
}
For Swift 2/Xcode 7:
func randomNumber(probabilities probabilities: [Double]) -> Int {
    // Sum of all probabilities (so that we don't have to require that the sum is 1.0):
    let sum = probabilities.reduce(0, combine: +)
    // Random number in the range 0.0 <= rnd < sum:
    let rnd = sum * Double(arc4random_uniform(UInt32.max)) / Double(UInt32.max)
    // Find the first interval of accumulated probabilities into which `rnd` falls:
    var accum = 0.0
    for (i, p) in probabilities.enumerate() {
        accum += p
        if rnd < accum {
            return i
        }
    }
    // This point might be reached due to floating point inaccuracies:
    return probabilities.count - 1
}
Is there a nicer, cleaner way to pick a random number with a certain probability when you have a large number of probabilities to consider?
Sure. Write a function that generates a number based on a table of probabilities. That's essentially what the switch statement you've pointed to is: a table defined in code. You could do the same thing with data using a table that's defined as a list of probabilities and outcomes:
probability  outcome
-----------  -------
0.4          1
0.2          2
0.1          3
0.15         4
0.15         5
Now you can pick a number between 0 and 1 at random. Starting from the top of the list, add up probabilities until you've exceeded the number you picked, and use the corresponding outcome. For example, let's say the number you pick is 0.6527637. Start at the top: 0.4 is smaller, so keep going. 0.6 (0.4 + 0.2) is smaller, so keep going. 0.7 (0.6 + 0.1) is larger, so stop. The outcome is 3.
I've kept the table short here for the sake of clarity, but you can make it as long as you like, and you can define it in a data file so that you don't have to recompile when the list changes.
Note that there's nothing particularly specific to Swift about this method -- you could do the same thing in C or Swift or Lisp.
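For illustration, here is a minimal sketch of that table-driven lookup in Python (the method is language-agnostic, as noted above; the table values are the ones from the example):
import random

# The probability/outcome table from above, as data rather than a switch statement.
table = [(0.4, 1), (0.2, 2), (0.1, 3), (0.15, 4), (0.15, 5)]

def pick(table):
    r = random.random()  # uniform in [0, 1)
    accum = 0.0
    for probability, outcome in table:
        accum += probability
        if r < accum:
            return outcome
    return table[-1][1]  # guard against floating-point round-off

print(pick(table))  # 1 about 40% of the time, 3 about 10% of the time, etc.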
This seems like a good opportunity for a shameless plug to my small library, swiftstats:
https://github.com/r0fls/swiftstats
For example, this would generate a random value from a normal distribution with mean 0 and variance 1:
import SwiftStats
let n = SwiftStats.Distributions.Normal(0, 1.0)
print(n.random())
Supported distributions include: normal, exponential, binomial, etc...
It also supports fitting sample data to a given distribution, using the Maximum Likelihood Estimator for the distribution.
See the project readme for more info.
You could do it with exponential or quadratic functions: have x be your random number, and take y as the new random number. Then you just have to jiggle the equation until it fits your use case. Say I had (x^2)/10 + (x/300). Put your random number in (as some floating-point form), then take the floor with Int() when it comes out. So, if my random number generator goes from 0 to 9, I have a 40% chance of getting 0, a 30% chance of getting 1-3, a 20% chance of getting 4-6, and a 10% chance of an 8. You're basically trying to fake some kind of normal distribution.
Here's an idea of what it would look like in Swift:
func giveY(x: UInt32) -> Int {
    let xD = Double(x)
    return Int(xD * xD / 10 + xD / 300)
}

let ans = giveY(x: arc4random_uniform(10))
EDIT:
I wasn't very clear above - what I meant was that you could replace the switch statement with some function that returns a set of numbers with a probability distribution, which you could figure out with regression using Wolfram Alpha or something. So, for the question you linked to, you could do something like this:
import Foundation

func returnLevelChange() -> Double {
    return 0.06 * exp(0.4 * Double(arc4random_uniform(10))) - 0.1
}

newItemLevel = oldItemLevel * returnLevelChange()
So that function returns a double somewhere between -0.04 and 2.1. That would be your "x% worse/better than current item level" figure. But, since it's an exponential function, it won't return an even spread of numbers. arc4random_uniform(10) returns an int from 0 to 9, and each of those results in a double like this:
0: -0.04
1: -0.01
2: 0.03
3: 0.1
4: 0.2
5: 0.34
6: 0.56
7: 0.89
8: 1.37
9: 2.1
Since each of those ints from arc4random_uniform has an equal chance of showing up, you get probabilities like this:
40% chance of -0.04 to 0.1 (~ -5% - 10%)
30% chance of 0.2 to 0.56 (~ 20% - 55%)
20% chance of 0.89 to 1.37 (~ 90% - 140%)
10% chance of 2.1 (~ 200%)
Which is similar to the probabilities that the other person had. Now, for your function, it's much more difficult, and the other answers are almost certainly more applicable and elegant. BUT you could still do it.
Arrange each of the letters in order of their probability, from largest to smallest. Then get their cumulative sums, starting with 0 and omitting the last (so probabilities of 50%, 30%, 20% become 0, 0.5, 0.8). Then multiply them up until they're integers with reasonable accuracy (0, 5, 8). Then plot them: your cumulative probabilities are your x's, and the things you want to select with a given probability (your letters) are your y's. (You obviously can't plot actual letters on the y axis, so you'd just plot their indices in some array.) Then you'd try to find some regression there, and have that be your function. For instance, trying those numbers, I got
e^0.14x - 1
and this:
let letters: [Character] = ["a", "b", "c"]

func randLetter() -> Character {
    return letters[Int(exp(0.14 * Double(arc4random_uniform(10))) - 1)]
}
returns "a" 50% of the time, "b" 30% of the time, and "c" 20% of the time. Obviously pretty cumbersome for more letters, and it would take a while to figure out the right regression, and if you wanted to change the weightings you're have to do it manually. BUT if you did find a nice equation that did fit your values, the actual function would only be a couple lines long, and fast.
I want to apply a Gaussian filter of dimension 5x5 pixels on an image of 512x512 pixels. I found a scipy function to do that:
scipy.ndimage.filters.gaussian_filter(input, sigma, truncate=3.0)
How do I choose the parameter sigma to make sure that my Gaussian window is 5x5 pixels?
Check out the source code here: https://github.com/scipy/scipy/blob/master/scipy/ndimage/filters.py
You'll see that gaussian_filter calls gaussian_filter1d for each axis. In gaussian_filter1d, the width of the filter is determined implicitly by the values of sigma and truncate. In effect, the width w is
w = 2*int(truncate*sigma + 0.5) + 1
So
(w - 1)/2 = int(truncate*sigma + 0.5)
For w = 5, the left side is 2. The right side is 2 if
2 <= truncate*sigma + 0.5 < 3
or
1.5 <= truncate*sigma < 2.5
If you choose truncate = 3 (overriding the default of 4), you get
0.5 <= sigma < 0.83333...
We can check this by filtering an input that is all zeros except for a single 1 (i.e. finding the impulse response of the filter) and counting the number of nonzero values in the filtered output. (In the following, np is numpy, and gaussian_filter1d has been imported from scipy.ndimage.)
First create an input with a single 1:
In [248]: x = np.zeros(9)
In [249]: x[4] = 1
Check the change in the size at sigma = 0.5...
In [250]: np.count_nonzero(gaussian_filter1d(x, 0.49, truncate=3))
Out[250]: 3
In [251]: np.count_nonzero(gaussian_filter1d(x, 0.5, truncate=3))
Out[251]: 5
... and at sigma = 0.8333...:
In [252]: np.count_nonzero(gaussian_filter1d(x, 0.8333, truncate=3))
Out[252]: 5
In [253]: np.count_nonzero(gaussian_filter1d(x, 0.8334, truncate=3))
Out[253]: 7
Following the excellent previous answer:
set the sigma: s = 2
set the window size: w = 5
evaluate the truncate value: t = (((w - 1)/2) - 0.5)/s
filter: filtered_data = scipy.ndimage.filters.gaussian_filter(data, sigma=s, truncate=t)
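As a minimal sketch of that recipe (the variable names and the random placeholder array are mine, not from the original answer):
import numpy as np
from scipy.ndimage import gaussian_filter

s = 2.0                        # chosen sigma
w = 5                          # desired window width (5x5 kernel)
t = (((w - 1) / 2) - 0.5) / s  # truncate = 0.75

data = np.random.rand(512, 512)  # placeholder for the 512x512 image
filtered_data = gaussian_filter(data, sigma=s, truncate=t)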
I want to generate a series of numbers through looping.
My series will contain numbers like 0, 3, 5, 8, 10, 13, 15, 18, and so on.
I tried taking the remainder and adding 2 and 3, but it won't work out.
Can anyone please help me generate this series?
You can just use an increment which toggles between 3 and 2, e.g.
/* inc alternates between 3 and 2 (5 - 3 = 2, 5 - 2 = 3) */
for (int i = 0, inc = 3; i < 1000; i += inc, inc = 5 - inc)
{
    printf("%d\n", i);
}
It looks like the sequence starts at zero and uses alternating increments of 3 and 2. There are several ways of implementing this, but perhaps the simplest one would be iterating in increments of 5 (i.e. 3 + 2) and printing two numbers per iteration: the position, and the position plus three.
Here is some pseudocode:
i = 0
REPEAT N times:
    PRINT i
    PRINT i + 3
    i += 5
The iteration i=0 will print 0 and 3
The iteration i=5 will print 5 and 8
The iteration i=10 will print 10 and 13
The iteration i=15 will print 15 and 18
... and so on
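A direct rendering of that pseudocode in Python (N = 4 here, just to reproduce the values above):
N = 4  # number of repetitions; pick as many as you need
i = 0
for _ in range(N):
    print(i)      # 0, 5, 10, 15 on successive iterations
    print(i + 3)  # 3, 8, 13, 18 on successive iterations
    i += 5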
I was pulled in with the tag generate-series, which is a powerful PostgreSQL function. This may have been tagged by mistake (?) but it just so happens that there would be an elegant solution:
SELECT ceil(generate_series(0, 1000, 25) / 10.0)::int;
generate_series() returns 0, 25, 50, 75, ... (it can only produce integer numbers)
division by 10.0 produces numeric data: 0, 2.5, 5, 7.5, ...
ceil() rounds up to your desired result.
The final cast to integer (::int) is optional.