Too small RMSE in recommender systems (machine learning)

Sorry, I'm a newbie at recommender systems, but I wrote a few lines of code using the Apache Mahout library. My dataset is pretty small: 500x100 with 8102 cells known.
My dataset is actually a subset of the Yelp dataset from the "Yelp business rating prediction" competition. I just took the 100 most-reviewed restaurants and then the 500 most active customers.
I created an SVDRecommender and then evaluated RMSE. The result is about 0.4... Why is it so small? Maybe I just don't understand something and my dataset is not that sparse, but when I tried a larger and sparser dataset the RMSE became even smaller (about 0.18)! Could anyone explain this behaviour?
DataModel model = new FileDataModel(new File("datamf.csv"));
final RatingSGDFactorizer factorizer = new RatingSGDFactorizer(model, 20, 200);
final Factorization f = factorizer.factorize();
RecommenderBuilder builder = new RecommenderBuilder() {
    public Recommender buildRecommender(DataModel model) throws TasteException {
        // build here whatever existing or customized recommendation algorithm
        return new SVDRecommender(model, factorizer);
    }
};
RecommenderEvaluator evaluator = new RMSRecommenderEvaluator();
double score = evaluator.evaluate(builder,
        null,   // no custom DataModelBuilder
        model,
        0.6,    // use 60% of each user's ratings for training, hold out the rest
        1);     // evaluate on 100% of users
System.out.println(score);

RMSE is calculated by looking at predicted ratings versus their hidden ground-truth. So a sparse dataset may only have very few hidden ratings to predict, or your algorithm may not be able to predict for many hidden ratings because there's no correlation to other ratings. This means that even though your RMSE is low ("better"), your coverage will be low because you aren't predicting very many items.
There's another issue: RMSE is completely dataset dependent. On the MovieLens ratings dataset which has star ratings 0.5 to 5.0 stars, an RMSE of roughly 0.9 is common. But on another dataset with 0.0 to 1.0 points, I've observed an RMSE of around 0.2. Look at the properties of your dataset and see if 0.4 makes sense.
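To make the scale point concrete, here is a minimal Python sketch (numpy only; the held-out ratings are made up) that computes RMSE and also divides it by the rating range, which makes scores from datasets with different rating scales roughly comparable:

import numpy as np

def rmse(actual, predicted):
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

# hypothetical held-out ratings on a 1-5 star scale
actual    = [4.0, 3.5, 5.0, 2.0, 4.5]
predicted = [3.6, 3.9, 4.4, 2.8, 4.2]

score = rmse(actual, predicted)
# dividing by the rating range (here 5 - 1 = 4) gives a scale-free number
print(score, score / (5.0 - 1.0))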

Related

How to get only the predictions with a probability greater than x

I used a random forest to classify texts into certain categories. With my test data I got an accuracy of 0.98, but on another data set the overall accuracy drops to 0.7. I think most of the rows still have a high accuracy.
So now I want to show only the predicted categories with a high confidence.
RandomForestClassifier gives me a column "probability", which is a vector of probabilities. How do I get the actual probability of the chosen prediction?
val randomForrest = new RandomForestClassifier()
  .setLabelCol(labelIndexer.getOutputCol)
  .setFeaturesCol(vectorAssembler.getOutputCol)
  .setProbabilityCol("probability")
  .setSeed(123)
  .setPredictionCol("prediction")
I eventually came up with the following udf to get the best prediction together with its probability.
If there is a more convenient way, please comment.
def getBestPrediction = udf((
    rawPrediction: org.apache.spark.ml.linalg.Vector,
    probability: org.apache.spark.ml.linalg.Vector) => {
  // the index of the largest probability is the predicted class
  val bestPrediction = probability.argmax
  val bestProbability = probability(bestPrediction)
  (bestPrediction, bestProbability)
})
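For readers on the Python API, here is roughly the same idea as a PySpark sketch (a sketch only, assuming Spark 2.x+ and a DataFrame called predictions with a "probability" vector column; names are illustrative):

from pyspark.sql.functions import udf
from pyspark.sql.types import DoubleType, IntegerType, StructField, StructType

best_schema = StructType([
    StructField("bestPrediction", IntegerType()),
    StructField("bestProbability", DoubleType()),
])

@udf(best_schema)
def get_best_prediction(probability):
    # probability arrives as a pyspark.ml.linalg vector; convert to numpy to take the argmax
    arr = probability.toArray()
    best = int(arr.argmax())
    return best, float(arr[best])

# hypothetical usage: keep only confident predictions
# predictions.withColumn("best", get_best_prediction("probability")) \
#            .filter("best.bestProbability > 0.9")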

How to calculate accuracy score of a random classifier?

Say, for example, a dataset contains 60% instances of the "Yes" class and 30% instances of the "No" class.
In this scenario, precision and recall for the random classifier are
Precision = 60%
Recall = 50%
Then, what will the accuracy of the random classifier be in this scenario?
Some caution is required here, since the very definition of a random classifier is somewhat ambiguous; this is best illustrated in cases of imbalanced data.
By definition, the accuracy of a binary classifier is
acc = P(class=0) * P(prediction=0) + P(class=1) * P(prediction=1)
where P stands for probability.
Indeed, if we stick to the intuitive definition of a random binary classifier as giving
P(prediction=0) = P(prediction=1) = 0.5
then the accuracy computed by the above formula is always 0.5, irrespective of the class distribution (i.e. the values of P(class=0) and P(class=1)).
However, in this definition, there is an implicit assumption, i.e. that our classes are balanced, each one consisting of 50% of our dataset.
This assumption (and the corresponding intuition) breaks down in cases of class imbalance: if we have a dataset where, say, 90% of samples are of class 0 (i.e. P(class=0)=0.9), then it doesn't make much sense to use the above definition of a random binary classifier; instead, we should use the percentages of the class distributions themselves as the probabilities of our random classifier, i.e.:
P(prediction=0) = P(class=0) = 0.9
P(prediction=1) = P(class=1) = 0.1
Now, plugging these values into the formula defining the accuracy, we get:
acc = P(class=0) * P(prediction=0) + P(class=1) * P(prediction=1)
= (0.9 * 0.9) + (0.1 * 0.1)
= 0.82
which is nowhere close to the naive value of 0.5...
As I already said, AFAIK there are no clear-cut definitions of a random classifier in the literature. Sometimes the "naive" random classifier (always flip a fair coin) is referred to as a "random guess" classifier, while what I have described is referred to as a "weighted guess" one, but still this is far from being accepted as a standard...
The bottom line here is the following: since the main reason for using a random classifier is as a baseline, it makes sense to do so only in relatively balanced datasets. In your case of a 60-40 balance, the result turns out to be 0.52, which is admittedly not far from the naive one of 0.5; but for highly imbalanced datasets (e.g. 90-10), the usefulness itself of the random classifier as a baseline ceases to exist, since the correct baseline has become "always predict the majority class", which here would give an accuracy of 90%, in contrast to the random classifier accuracy of just 82% (let alone the 50% accuracy of the naive approach)...
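For concreteness, here is the same arithmetic as a small Python sketch (the class priors are the ones used in the answer above):

# accuracy of a classifier that predicts class 0 with probability p0 and class 1 with
# probability p1, on data whose true class distribution is (q0, q1)
def random_classifier_accuracy(q0, q1, p0, p1):
    return q0 * p0 + q1 * p1

# naive random guess (fair coin) on any distribution
print(random_classifier_accuracy(0.9, 0.1, 0.5, 0.5))   # 0.5
# weighted guess on a 90-10 dataset
print(random_classifier_accuracy(0.9, 0.1, 0.9, 0.1))   # ~0.82
# weighted guess on the 60-40 case from the question
print(random_classifier_accuracy(0.6, 0.4, 0.6, 0.4))   # ~0.52
# majority-class baseline on the 60-40 case: always predict class 0
print(random_classifier_accuracy(0.6, 0.4, 1.0, 0.0))   # 0.6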
As @desertnaut mentioned, if you're after a naïve benchmark for your model you're always better off using "always predict the majority class" as your benchmark, achieving accuracy of %of_samples_in_majority_class (which is always better than either a random guess or a weighted guess).
In Deepchecks (a package I maintain) we have a check that automatically compares the performance of your model to a simple model (either weighted random, majority class or simple decision tree).
from deepchecks.checks import SimpleModelComparison
from deepchecks import Dataset
SimpleModelComparison().run(Dataset(train_df, label='target'), Dataset(test_df, label='target'), model)

Relation between Word2Vec vector size and total number of words scanned?

What is the optimum vector size to set in the word2vec algorithm if the total number of unique words is greater than 1 billion?
I am using Apache Spark MLlib 1.6.0 for word2vec.
Sample code:
import java.io.IOException;
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.ml.feature.Word2Vec;
import org.apache.spark.ml.feature.Word2VecModel;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.*;

public class Main {
    public static void main(String[] args) throws IOException {
        SparkConf conf = new SparkConf().setAppName("JavaWord2VecExample");
        conf.setMaster("local[*]");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(jsc);

        // $example on$
        // Input data: Each row is a bag of words from a sentence or document.
        JavaRDD<Row> jrdd = jsc.parallelize(Arrays.asList(
            RowFactory.create(Arrays.asList("Hi I heard about Spark".split(" "))),
            RowFactory.create(Arrays.asList("Hi I heard about Java".split(" "))),
            RowFactory.create(Arrays.asList("I wish Java could use case classes".split(" "))),
            RowFactory.create(Arrays.asList("Logistic regression models are neat".split(" ")))
        ));
        StructType schema = new StructType(new StructField[]{
            new StructField("text", new ArrayType(DataTypes.StringType, true), false, Metadata.empty())
        });
        DataFrame documentDF = sqlContext.createDataFrame(jrdd, schema);

        // Learn a mapping from words to Vectors.
        Word2Vec word2Vec = new Word2Vec()
            .setInputCol("text")
            .setOutputCol("result")
            .setVectorSize(3) // What is the optimum value to set here?
            .setMinCount(0);
        Word2VecModel model = word2Vec.fit(documentDF);
        DataFrame result = model.transform(documentDF);
        result.show(false);
        for (Row r : result.select("result").take(3)) {
            System.out.println(r);
        }
        // $example off$
    }
}
There's no one answer: it will depend on your dataset and goals.
Common values for the dimensionality of word-vectors are 300-400, based on the values preferred in some of the original papers.
But, the best approach is to create some sort of project-specific quantitative quality score – are the word-vectors performing well in your intended application? – and then optimize the size like any other meta-parameter.
Separately, if you truly have 1 billion unique word tokens – a 1-billion-word vocabulary – it will be hard to train vectors for all of them in typical system environments. (A 1-billion-word vocabulary is 333 times larger than the 3 million word-vectors Google released.)
1 billion 300-dimensional word-vectors would require (1 billion * 300 float dimensions * 4 bytes/float =) 1.2TB of addressable memory (essentially, RAM) just to store the raw vectors during training. (The neural network will need another 1.2TB for output-weights during training, plus other supporting structures.)
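That back-of-the-envelope estimate as a tiny Python sketch (assuming 4-byte single-precision floats, and counting only the raw input vectors):

def word2vec_memory_gb(vocab_size, vector_size, bytes_per_float=4):
    # raw input-vector storage only; training also needs a same-sized
    # output-weight matrix plus other supporting structures
    return vocab_size * vector_size * bytes_per_float / 1e9

print(word2vec_memory_gb(1_000_000_000, 300))  # 1200.0 GB, i.e. about 1.2 TB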
Relatedly, words with very few occurrences can't get quality word-vectors from those few contexts, but still tend to interfere with the training of nearby words – so a minimum-count of 0 is never a good idea, and throwing away more lower-frequency words tends to speed training, lower memory-requirements, and improve the quality of the remaining words.
According to research, the quality of vector representations improves as you increase the vector size up to around 300 dimensions; beyond 300 dimensions, the quality starts to decrease. You can find an analysis of different vector and vocabulary sizes here (see Table 2, where SG refers to the Skip-Gram model, which is the model behind Word2Vec).
Your choice of vector size also depends on your computational power: even though 300 probably gives you the most reliable vectors, you may need to lower the size if your machine is too slow at computing them.

Random Forests and ROC Curves in Julia

I'm using the ScikitLearn flavour of the DecisionTree.jl package to create a random forest model for a binary classification problem of one of the RDatasets data sets (see bottom of the DecisionTree.jl home page for what I mean by ScikitLearn flavour). I'm also using the MLBase package for model evaluation.
I have built a random forest model of my data and would like to create a ROC Curve for this model. Reading the documentation available, I do understand what a ROC curve is in theory. I just can't figure out how to create one for a specific model.
From the Wikipedia page the last part of the first sentence that I have marked in bold italics below is the one that is causing my confusion: "In statistics, a receiver operating characteristic (ROC), or ROC curve, is a graphical plot that illustrates the performance of a binary classifier system as its discrimination threshold is varied." There is more on the threshold value throughout the article but this still confuses me for binary classification problems. What is the threshold value and how do I vary it?
Also, in the MLBase documentation on ROC Curves it says "Compute an ROC instance or an ROC curve (a vector of ROC instances), based on given scores and a threshold thres." But doesn't mention this threshold anywhere else really.
Example code for my project is given below. Basically, I want to create a ROC curve for the random forest but I'm not sure how to or if it's even appropriate.
using DecisionTree
using RDatasets
using MLBase
quakes_data = dataset("datasets", "quakes");
# Add in a binary column as feature column for classification
quakes_data[:MagGT5] = convert(Array{Int32,1}, quakes_data[:Mag] .> 5.0)
# Getting features and labels where label = 1 is mag > 5 and label = 2 is mag <= 5
features = convert(Array, quakes_data[:, [1:3;5]]);
labels = convert(Array, quakes_data[:, 6]);
labels[labels.==0] = 2
# Create a random forest model with the tuning parameters I want
r_f_model = RandomForestClassifier(nsubfeatures = 3, ntrees = 50, partialsampling=0.7, maxdepth = 4)
# Train the model in-place on the dataset (there isn't a fit function without the in-place functionality)
DecisionTree.fit!(r_f_model, features, labels)
# Apply the trained model to the test features data set (here I haven't partitioned into training and test)
r_f_prediction = convert(Array{Int64,1}, DecisionTree.predict(r_f_model, features))
# Applying the model to the training set and looking at model stats
TrainingROC = roc(labels, r_f_prediction) #getting the stats around the model applied to the train set
# p::T # positive in ground-truth
# n::T # negative in ground-truth
# tp::T # correct positive prediction
# tn::T # correct negative prediction
# fp::T # (incorrect) positive prediction when ground-truth is negative
# fn::T # (incorrect) negative prediction when ground-truth is positive
I also read this question and didn't find it helpful really.
The task in binary classification is to give a 0/1 (or true/false, red/blue) label to a new, unlabeled data-point. Most classification algorithms are designed to output a continuous real value. This value is optimized to be higher for points with known or predicted label 1, and lower for points with known or predicted label 0. To turn this value into a 0/1 prediction, an additional threshold is used: points with a value higher than the threshold are predicted to be labeled 1, and points with a lower value are predicted to be labeled 0.
Why is this setup useful? Because sometimes predicting a 0 when the true label is 1 is more costly than the reverse, and then you can set the threshold low, making the algorithm predict 1s more often.
In the extreme case where predicting 0 instead of 1 costs the application nothing, you can set the threshold at infinity, making the classifier always output 0 (which is obviously the best solution, since it incurs no cost).
The threshold trick cannot eliminate errors from the classifier - no classifier in real-world problems is perfect or free from noise. What it can do is change the ratio between the 0-when-really-1 errors and the 1-when-really-0 errors in the final classification.
As you increase the threshold, more points are classified with a 0 label. Consider a chart with the fraction of points classified with 0 on the x-axis, and the fraction of points with a 0-when-really-1 error on the y-axis. For each value of the threshold, plot a point for the resulting classifier on this chart. Plotting a point for all thresholds you get a curve. This is (some variant of) the ROC curve, which summarizes the abilities of the classifier. An often used metric for quality of classification is the AUC or area-under-curve of this chart, but in fact, the whole curve can be of interest in applications.
A summary like this appears in many texts on machine learning, which are a google query away.
Hope this clarifies the role of the threshold and its relation to ROC curves.
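To make the threshold sweep concrete, here is a minimal Python sketch (not Julia, and not tied to the quakes data; the labels and scores are made up) that traces the standard FPR/TPR form of the ROC curve, a close cousin of the variant described above, by varying the threshold over a classifier's continuous scores:

import numpy as np

def roc_points(labels, scores):
    # labels: 1 = positive, 0 = negative; scores: higher means "more positive"
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    points = []
    # sweep the threshold over every observed score, plus one value above the maximum
    for thr in np.concatenate(([scores.max() + 1], np.sort(scores)[::-1])):
        pred = (scores >= thr).astype(int)
        tp = np.sum((pred == 1) & (labels == 1))
        fp = np.sum((pred == 1) & (labels == 0))
        tpr = tp / np.sum(labels == 1)   # true positive rate
        fpr = fp / np.sum(labels == 0)   # false positive rate
        points.append((fpr, tpr))
    return points

labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.6, 0.7]
for fpr, tpr in roc_points(labels, scores):
    print(fpr, tpr)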

How to use SelectFromModel in sklearn to find the positively informative features for a class

I think I understand that, until recently, people used the attribute coef_ to extract the most informative features from linear models in Python's machine-learning library sklearn. Now users get pointed to SelectFromModel instead. SelectFromModel lets you reduce the features based on a threshold, so something like the following code reduces the features down to those with an importance > 0.5. My question now: is there any way to determine whether a feature is positively or negatively discriminating for a class?
I have my data in a pandas DataFrame called data: the first column is a list of filenames of text files, the second column is the label.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

count_vect = CountVectorizer(input="filename", analyzer="word")
X_train_counts = count_vect.fit_transform(data["filenames"])
print(X_train_counts.shape)
tf_transformer = TfidfTransformer(use_idf=True)
traindata = tf_transformer.fit_transform(X_train_counts)
print(traindata.shape)  # report size of the training data
clf = LogisticRegression()
model = SelectFromModel(clf, threshold=0.5)
X_transform = model.fit_transform(traindata, data["labels"])
print("reduced features: ", X_transform.shape)
# get the names of all features
words = np.array(count_vect.get_feature_names())
# get the names of the important features using the boolean mask from model
print(words[model.get_support()])
To my knowledge you need to stick with the .coef_ attribute and see which coefficients are negative or positive. A negative coefficient decreases the odds of that class (a negative relationship), while a positive coefficient increases the odds of that class (a positive relationship).
However, this method will not give you the importance, only the direction; you will still need SelectFromModel to extract that.
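A minimal sketch of that combination, reusing the variable names from the question (model, clf, words) and assuming a binary problem: SelectFromModel gives the selected features, and the sign of the corresponding coef_ entries gives the direction:

# after the fit above: model = SelectFromModel(clf, threshold=0.5); model.fit_transform(...)
fitted_clf = model.estimator_                 # the LogisticRegression fitted inside SelectFromModel
mask = model.get_support()                    # boolean mask of the selected features
selected_words = words[mask]
selected_coefs = fitted_clf.coef_[0][mask]    # binary case: one row of coefficients
# note: the sign is relative to the second class in fitted_clf.classes_

for word, coef in sorted(zip(selected_words, selected_coefs), key=lambda t: t[1]):
    direction = "positive" if coef > 0 else "negative"
    print(f"{word}: {coef:.3f} ({direction})")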
