Recommended values for OpenCV RTrees parameters

Any idea on the recommended parameters for OpenCV RTrees? I have read the documentation and I'm trying to apply it to the MNIST dataset, i.e. 60000 training images and 10000 testing images. I'm trying to optimize MaxDepth, MinSampleCount, MaxCategories, and Priors, e.g.:
Ptr<RTrees> model = RTrees::create();
/* Depth of the tree.
A low value will likely underfit and conversely
a high value will likely overfit.
The optimal value can be obtained using cross validation
or other suitable methods.
*/
model->setMaxDepth(?); // letter_recog.cpp uses 10
/* Minimum number of samples required at a node for it to be split.
A reasonable value is a small percentage of the total data, e.g. 1%.
MNIST 70000 * 0.01 = 700
*/
model->setMinSampleCount(700?); // letter_recog.cpp uses 10
/* regression_accuracy – Termination criteria for regression trees.
If all absolute differences between an estimated value in a node and
values of train samples in this node are less than this parameter
then the node will not be split. */
model->setRegressionAccuracy(0); // I think this is already correct
/*
use_surrogates – If true then surrogate splits will be built.
These splits allow the algorithm to work with missing data and to compute variable importance correctly.
To compute variable importance correctly, the surrogate splits must be enabled in
the training parameters, even if there is no missing data.
*/
model->setUseSurrogates(true); // I think this is already correct
/*
Cluster possible values of a categorical variable into K <= max_categories clusters
to find a suboptimal split. If a discrete variable, on which the training procedure
tries to make a split, takes more than max_categories values, the precise best subset
estimation may take a very long time because the algorithm is exponential.
Instead, many decision trees engines (including ML) try to find sub-optimal split
in this case by clustering all the samples into max_categories clusters, that is,
some categories are merged together. The clustering is applied only in classification
problems with more than two classes, for categorical variables with more than max_categories possible values.
In case of regression and 2-class classification the optimal split can be found
efficiently without employing clustering, thus the parameter is not used in these cases.
*/
model->setMaxCategories(?); // letter_recog.cpp uses 15
/*
priors – The array of a priori class probabilities, sorted by the class label value.
The parameter can be used to tune the decision tree preferences toward a certain class.
For example, if you want to detect some rare anomaly occurrence, the training base will
likely contain much more normal cases than anomalies, so a very good classification
performance will be achieved just by considering every case as normal.
To avoid this, the priors can be specified, where the anomaly probability is
artificially increased (up to 0.5 or even greater), so the weight of the misclassified
anomalies becomes much bigger, and the tree is adjusted properly. You can also think about
this parameter as weights of prediction categories which determine relative weights that
you give to misclassification. That is, if the weight of the first category is 1 and
the weight of the second category is 10, then each mistake in predicting the
second category is equivalent to making 10 mistakes in predicting the first category.
*/
model->setPriors(Mat()); // ?
/* If true then variable importance will be calculated and
can then be retrieved via RTrees::getVarImportance().
*/
model->setCalculateVarImportance(true); // I think this is already correct
/*
The size of the randomly selected subset of features at each tree node and
that are used to find the best split(s). If you set it to 0 then the size
will be set to the square root of the total number of features.
*/
model->setActiveVarCount(0); // I think this is already correct
/*
TermCriteria::MAX_ITER terminates learning once the maximum number of trees in the forest is reached;
TermCriteria::EPS terminates learning once the requested forest accuracy is reached;
MAX_ITER + EPS uses both termination criteria.
(The old C API names were CV_TERMCRIT_ITER and CV_TERMCRIT_EPS.)
*/
model->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER + TermCriteria::EPS, 100, 0.01f)); // I think this is already correct; letter_recog.cpp wraps this in a TC(n, eps) macro
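
For a concrete starting point, here is a minimal sketch (Python, via cv2.ml, which mirrors the C++ API) of one plausible MNIST configuration. The train_x/train_y/test_x arrays are assumed to be loaded elsewhere, and every value below is a starting point to cross-validate rather than an official recommendation; note that recent OpenCV releases do not implement surrogate splits, so False is the safer choice there:
import cv2
import numpy as np
# Assumed data: train_x (60000, 784) float32, train_y (60000, 1) int32, test_x float32.
model = cv2.ml.RTrees_create()
model.setMaxDepth(15)                  # tune via cross-validation; letter_recog.cpp uses 10
model.setMinSampleCount(10)            # 1% (700) is likely far too coarse for 10 balanced classes
model.setRegressionAccuracy(0)         # ignored for classification
model.setUseSurrogates(False)          # surrogate splits are not implemented in recent OpenCV
model.setMaxCategories(15)             # unused here: pixel intensities are ordered, not categorical
model.setCalculateVarImportance(True)  # enables getVarImportance() after training
model.setActiveVarCount(0)             # 0 -> sqrt(total features), i.e. about 28 of 784 pixels
model.setTermCriteria((cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS, 100, 0.01))
model.train(train_x, cv2.ml.ROW_SAMPLE, train_y)
_, predictions = model.predict(test_x)
Priors are left at their default (empty) here, since the MNIST classes are roughly balanced.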

Related

Imbalanced dataset, size limitation of 60mb, email categorization

I have a highly imbalanced dataset (approx. 1:100) of 1 GB of raw emails, and I have to categorize these mails into 15 categories.
The problem I have is that the size limit of the file which will be used to train the model cannot be more than 40 MB.
So I want to filter out the mails for each category which best represent the whole category.
For example: for a category A there are 100 emails in the dataset; due to the size limitation I want to filter out only 10 emails which will represent the maximum features of all 100 emails.
I read that TF-IDF can be used to do this: for each category, create a corpus of all the emails for that particular category and then try to find the emails that best represent it, but I am not sure how to do that. A code snippet would be of great help.
Plus, there are a lot of junk words and hash values in the dataset. Should I clean all of those? Even if I try, it's a lot to clean, and doing it manually is hard.
TF-IDF stands for Term Frequency, Inverse Document Frequency. The idea is to find out which words are more representative, based on generality and specificity.
The approach you propose is not that bad and could work as a shallow solution. Here's a snippet to help you understand how to do it:
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np
## Suppose Docs1 and Docs2 are the groups of e-mails. Notice that docs1 has more lines than docs2
docs1 = ['In digital imaging, a pixel, pel,[1] or picture element[2] is a physical point in a raster image, or the smallest addressable element in an all points addressable display device; so it is the smallest controllable element of a picture represented on the screen',
'Each pixel is a sample of an original image; more samples typically provide more accurate representations of the original. The intensity of each pixel is variable. In color imaging systems, a color is typically represented by three or four component intensities such as red, green, and blue, or cyan, magenta, yellow, and black.',
'In some contexts (such as descriptions of camera sensors), pixel refers to a single scalar element of a multi-component representation (called a photosite in the camera sensor context, although sensel is sometimes used),[3] while in yet other contexts it may refer to the set of component intensities for a spatial position.',
'The word pixel is a portmanteau of pix (from "pictures", shortened to "pics") and el (for "element"); similar formations with \'el\' include the words voxel[4] and texel.[4]',
'The word "pixel" was first published in 1965 by Frederic C. Billingsley of JPL, to describe the picture elements of video images from space probes to the Moon and Mars.[5] Billingsley had learned the word from Keith E. McFarland, at the Link Division of General Precision in Palo Alto, who in turn said he did not know where it originated. McFarland said simply it was "in use at the time" (circa 1963).[6]'
]
docs2 = ['In applied mathematics, discretization is the process of transferring continuous functions, models, variables, and equations into discrete counterparts. This process is usually carried out as a first step toward making them suitable for numerical evaluation and implementation on digital computers. Dichotomization is the special case of discretization in which the number of discrete classes is 2, which can approximate a continuous variable as a binary variable (creating a dichotomy for modeling purposes, as in binary classification).',
'Discretization is also related to discrete mathematics, and is an important component of granular computing. In this context, discretization may also refer to modification of variable or category granularity, as when multiple discrete variables are aggregated or multiple discrete categories fused.',
'Whenever continuous data is discretized, there is always some amount of discretization error. The goal is to reduce the amount to a level considered negligible for the modeling purposes at hand.',
'The terms discretization and quantization often have the same denotation but not always identical connotations. (Specifically, the two terms share a semantic field.) The same is true of discretization error and quantization error.'
]
## We sum them up to have a universal TF-IDF dictionary, so that we can 'compare oranges to oranges'
docs3 = docs1+docs2
## Using Sklearn TfIdfVectorizer - it is easy and straight forward!
vectorizer = TfidfVectorizer()
## Now we make the universal TF-IDF dictionary, MAKE SURE TO USE THE MERGED LIST AND fit() [not fit_transform]
X = vectorizer.fit(docs3)
## Checking the array shapes after using transform (fitting them to the tf-idf dictionary)
## Notice that they are the same size but with distinct number of lines
print(X.transform(docs1).toarray().shape, X.transform(docs2).toarray().shape)
(5, 221) (4, 221)
## Now, to "merge" them all, there are many ways to do it - here I used a simple "mean" method.
transformed_docs1 = np.mean(X.transform(docs1).toarray(), axis=0)
transformed_docs2 = np.mean(X.transform(docs2).toarray(), axis=0)
print(transformed_docs1)
print(transformed_docs2)
[0.02284796 0.02284796 0.02805426 0.06425141 0. 0.03212571
0. 0.03061173 0.02284796 0. 0. 0.04419432
0.08623564 0. 0. 0. 0.03806573 0.0385955
0.04569592 0. 0.02805426 0.02805426 0. 0.04299283
...
0. 0.02284796 0. 0.05610853 0.02284796 0.03061173
0. 0.02060219 0. 0.02284796 0.04345487 0.04569592
0. 0. 0.02284796 0. 0.03061173 0.02284796
0.04345487 0.07529817 0.04345487 0.02805426 0.03061173]
## These are the final Shapes.
print(transformed_docs1.shape, transformed_docs2.shape)
(221,) (221,)
About removing junk words: TF-IDF averages rare words out (such as numbers, etc.); if a token is too rare, it won't matter much. But junk tokens can greatly increase the size of your input vectors, so I'd advise you to find a way to clean them. Also, consider some NLP preprocessing steps, such as lemmatization, to reduce dimensionality.
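Building on the snippet above, here is a hedged sketch of one way to pick the k most representative e-mails of a category: score each e-mail by cosine similarity to the category's mean TF-IDF vector (its centroid) and keep the top k. The function name and arguments are illustrative, and the vectorizer is assumed to be the universal one fitted on the merged corpus:
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

def most_representative(category_emails, fitted_vectorizer, k=10):
    ## Project the category's e-mails with the universal (already fitted) vectorizer.
    vecs = fitted_vectorizer.transform(category_emails)
    ## The category "centroid" is the mean TF-IDF vector.
    centroid = np.asarray(vecs.mean(axis=0))
    ## Rank e-mails by cosine similarity to the centroid and keep the top k.
    sims = cosine_similarity(vecs, centroid).ravel()
    top = np.argsort(sims)[::-1][:k]
    return [category_emails[i] for i in top]

## e.g. most_representative(docs1, X, k=2) with the X fitted above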

How to squish a continuous cosine-theta score to a discrete (0/1) output?

I implemented a cosine-theta function which calculates the relation between two articles. If two articles are very similar, their words should overlap considerably. However, a cosine-theta score of 0.54 does not mean "related" or "not related". I should end up with a definitive answer which is either 0 for 'not related' or 1 for 'related'.
I know that there are sigmoid and softmax functions, yet I would have to find the optimal parameters to give to such functions, and I do not know whether these functions are satisfactory solutions. I was thinking that, besides the cosine-theta score, I could calculate the percentage of overlap between two articles (e.g. the number of overlapping words divided by the number of words in the article) and maybe some more interesting features. Then, with that data, I could maybe write a function (what type of function I do not know, and that is part of the question!) whose error I can minimize via the SciPy library. This means that I should do some sort of supervised learning, and I am willing to label article pairs with labels (0/1) in order to train such a model. Is this worth the effort?
import math

def cosine_theta(self, s1, s2):
    # Count words of two strings.
    v1, v2 = self.word_count(s1), self.word_count(s2)
    # Calculate the intersection of the words in both strings.
    v3 = set(v1.keys()) & set(v2.keys())
    # Calculate some sort of ratio between the overlap and the
    # article length (since 1 overlapping word on 2 words is more important
    # than 4 overlapping words on articles of 492 words).
    p = min(len(v1), len(v2)) / len(v3)
    numerator = sum([v1[w] * v2[w] for w in v3])
    w1 = sum([v1[w]**2 for w in v1.keys()])
    w2 = sum([v2[w]**2 for w in v2.keys()])
    denominator = math.sqrt(w1) * math.sqrt(w2)
    # Calculate the cosine similarity.
    if not denominator:
        return 0.0
    else:
        return float(numerator) / denominator
As said, I would like to use variables such as p, and the cosine theta score in order to produce an accurate discrete binary label, either 0 or 1.
Here it really comes down to what you mean by accuracy. It is up to you to choose how the overlap affects whether or not two strings "match", unless you have a labelled data set. If you do have a labelled data set (i.e., a set of pairs of strings along with a 0 or 1 label), then you can train a binary classification algorithm and optimise against it. I would recommend something like a neural net or an SVM due to the potentially high-dimensional, categorical nature of your problem.
Even the optimisation, however, is a subjective measure. For example, suppose a model which, out of 100 samples, only ventures 1 answer (giving 99 unknowns). Technically, if that one answer is correct, that is a model with 100% precision, but very low recall. Generally in machine learning you will find a trade-off between precision and recall.
Some people like metrics which combine the two (the most famous of which is the F1 score), but honestly it depends on the application. If I have a marketing campaign with a fixed budget, then I care more about precision: I would only want to target consumers who are likely to buy my product. If, however, we are looking to test for a deadly disease or markers for bank fraud, then it is feasible for that test to be precise only 10% of the time, as long as its recall of true positives is somewhere close to 100%.
Finally, if you have no labelled data, then your best bet is to define some cut-off value which you believe indicates a good match. This would then be more analogous to a binary clustering problem, and you could use some more abstract measure, such as distance to a centroid, to test which cluster (either the "related" or the "unrelated" cluster) a point belongs to. Note, however, that here your features feel like they would be incredibly hard to define.
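If you do label some pairs, here is a hedged sketch of that supervised route: stack features such as the cosine-theta score and the overlap ratio p into a feature matrix and fit a logistic regression, which effectively learns the sigmoid's parameters for you. The training data below is made up:
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per labelled article pair,
# columns = [cosine_theta, overlap_ratio_p]; labels are 0 (unrelated) / 1 (related).
pair_features = np.array([[0.54, 0.12], [0.91, 0.45], [0.20, 0.03], [0.88, 0.40]])
labels = np.array([0, 1, 0, 1])

clf = LogisticRegression()
clf.fit(pair_features, labels)

# Discrete 0/1 output for a new pair; predict_proba gives the soft score.
print(clf.predict([[0.60, 0.15]]))
print(clf.predict_proba([[0.60, 0.15]]))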

Predicting probabilities

I have time series data consisting of a vector
v=(x_1,…, x_n)
of binary categorical variables and the probabilities for four outcomes
p_1, p_2, p_3, p_4.
Given a new vector of categorical variables I want to predict the probabilities
p_1,…,p_4
The probabilities are very unbalanced with
p_1>.99 and p_2, p_3, p_4 < .01.
For example
v_1= (1,0,0,0,1,0,0,0) , p_1=.99, p_2=.005, p_3=.0035, p_4= .0015
v_2=(0,0,1,0,0,0,0,1), p_1=.99, p_2=.006, p_3=.0035, p_4= .0005
v_3=(0,1,0,0,1,1,1,0), p_1=.99, p_2=.005, p_3=.003, p_4= .002
v_4=(0,0,1,0,1,0,0,1), p_1=.99, p_2=.0075, p_3=.002, p_4= .0005
Given a new vector
v_5= (0,0,1,0,1,1,0,0)
I want to predict
p_1, p_2, p_3, p_4.
I should also note that the new vector could be identical to one of the input vectors, i.e.,
v_5=(0,0,1,0,1,0,0,1)= v_4.
My initial approach is to turn this into 4 regression problems.
The first would predict p_1, the second would predict p_2, the third would predict p_3, and the fourth would predict p_4. The problem with this is that I need
p_1+p_2+p_3+p_4=1
I'm not classifying, but should I also be worried about the unbalanced probabilities? Any ideas would be welcome.
Your suggestion of treating this as multiple regression problems plus a final normalization makes some sense, but that approach is known to be problematic in many cases (see, e.g., the problem of masking).
What you're describing here is multiclass (soft) classification, and there are many known techniques for doing so. You didn't specify which language/tool/library you're using, or whether you plan to roll your own (which only makes sense for didactic purposes). I'd suggest starting with Linear Discriminant Analysis, which is very simple to understand and implement and, despite its strong assumptions, is known to often work well in practice (see the classical book by Hastie & Tibshirani).
Irrespective of the underlying algorithm you use for soft classification (LDA or otherwise), it is not very difficult to transform aggregate input into labeled input.
Consider for example the instance
v_1= (1,0,0,0,1,0,0,0) , p_1=.99, p_2=.005, p_3=.0035, p_4= .0015
If your classifier supports instance weights, feed it 4 instances per input vector, labeled 1, 2, ..., with weights given by p_1, p_2, ..., respectively.
If it does not support instance weights, simply simulate what the law of large numbers says would happen: generate some large number n of instances from each input; for each such new instance, choose its label randomly, in proportion to the label's probability.
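Here is a hedged sketch of both variants, with made-up data shaped like the v_i and p_i above. scikit-learn's LogisticRegression accepts per-instance weights via sample_weight; the sampling trick is shown for classifiers without weight support:
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical aggregate data shaped like the question's examples.
V = np.array([[1, 0, 0, 0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0, 0, 0, 1]])
P = np.array([[.99, .005, .0035, .0015],
              [.99, .006, .0035, .0005]])

# Variant 1: the classifier supports instance weights.
# Four copies of each row, labeled 1..4, weighted by p_1..p_4.
X = np.repeat(V, 4, axis=0)
y = np.tile([1, 2, 3, 4], len(V))
clf = LogisticRegression().fit(X, y, sample_weight=P.ravel())

# Variant 2: no weight support - simulate the law of large numbers:
# replicate each row n times and draw each label in proportion to its probability.
n = 10000
Xs = np.repeat(V, n, axis=0)
ys = np.concatenate([rng.choice([1, 2, 3, 4], size=n, p=p) for p in P])
clf2 = LogisticRegression(max_iter=1000).fit(Xs, ys)

# predict_proba returns p_1..p_4 that sum to 1 by construction.
print(clf.predict_proba([[0, 0, 1, 0, 1, 1, 0, 0]]))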

how to handle large number of features machine learning

I developed an image processing program that identifies which digit appears in a given image of numbers. Each image was 27x27 pixels = 729 pixels. I take each R, G and B value, which means I have 2187 variables from each image (+1 for the intercept = a total of 2188).
I used the gradient descent formula below:
Repeat {
    θ_j := θ_j − (α/m) · Σ (h_θ(x) − y) · x_j
}
where θ_j is the coefficient on variable j; α is the learning rate; h_θ(x) is the hypothesis; y is the true value; and x_j is the value of variable j. m is the number of training examples; h_θ(x) and y are evaluated for each training example (that is what the summation is over). Further, the hypothesis is defined as:
h_θ(x) = 1 / (1 + e^(−z))
z = θ_0 + θ_1·x_1 + θ_2·x_2 + θ_3·x_3 + … + θ_n·x_n
With this, and 3000 training images, I was able to train my program in just over an hour, and when tested on a cross-validation set it identified the correct digit ~67% of the time.
I wanted to improve on that, so I decided to attempt a polynomial of degree 2.
However, the number of variables jumps from 2188 to 2,394,766 per image! It takes me an hour just to do one step of gradient descent.
So my question is: how is this vast number of variables handled in machine learning? On the one hand, I don't have enough space to even hold that many variables for each training example. On the other hand, I am currently storing 2188 variables per training sample, but I would have to perform O(n^2) work just to compute each variable multiplied by every other variable (i.e. the degree-2 polynomial terms).
So any suggestions / advice is greatly appreciated.
Try some dimensionality reduction first (PCA, kernel PCA, or LDA if you are classifying the images).
Vectorize your gradient descent; with most math libraries, or in MATLAB etc., it will run much faster (see the sketch after this list).
Parallelize the algorithm and run it on multiple CPUs (though your library for multiplying vectors may already support parallel computation).
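As a hedged illustration of the vectorization point, here is a numpy sketch of the same update written as one matrix product (X is assumed to be the m x (n+1) design matrix with a leading column of ones, y the m binary labels):
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iters=1000):
    # X: (m, n+1) design matrix with a bias column of ones; y: (m,) labels in {0, 1}.
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iters):
        # Vectorized form of θ_j := θ_j − (α/m) · Σ (h_θ(x) − y) · x_j, for all j at once.
        grad = X.T @ (sigmoid(X @ theta) - y) / m
        theta -= alpha * grad
    return theta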
Along with Jirka-x1's answer, I would first say that this is one of the key differences between working with image data and, say, text data for ML: high dimensionality.
Second... this is a duplicate, see How to approach machine learning problems with high dimensional input space?

How are binary classifiers generalised to classify data into arbitrarily large sets?

How can algorithms which partition a space into halves, such as Support Vector Machines, be generalised to label data with labels drawn from sets such as the integers?
For example, a support vector machine operates by constructing a hyperplane and then things 'above' the hyperplane take one label, and things below it take the other label.
How does this get generalised so that the labels are, for example, integers, or some other arbitrarily large set?
One option is the 'one-vs-all' approach, in which you create one classifier for each set you want to partition into, and select the set with the highest probability.
For example, say you want to classify objects with a label from {1,2,3}. Then you can create three binary classifiers:
C1 = 1 or (not 1)
C2 = 2 or (not 2)
C3 = 3 or (not 3)
If you run these classifiers on a new piece of data X, then they might return:
C1(X) = 31.6% chance of being in 1
C2(X) = 63.3% chance of being in 2
C3(X) = 89.3% chance of being in 3
Based on these outputs, you could classify X as most likely being from class 3. (The probabilities don't add up to 1 - that's because the classifiers don't know about each other).
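Here is a hedged sketch of one-vs-all with scikit-learn (the toy data is made up): OneVsRestClassifier fits one binary SVM per class and predicts the class whose classifier scores highest.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# Hypothetical 2-D points labelled 1, 2 or 3.
X = np.array([[0, 0], [0, 1], [5, 5], [5, 6], [10, 0], [10, 1]])
y = np.array([1, 1, 2, 2, 3, 3])

clf = OneVsRestClassifier(SVC(kernel="linear"))
clf.fit(X, y)
print(clf.predict([[9, 0]]))            # -> array([3])
print(clf.decision_function([[9, 0]]))  # one score per binary classifier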
If your output labels are ordered (with some kind of meaningful, rather than arbitrary, ordering), another option is available. For example, in finance you might want to classify stocks into {BUY, SELL, HOLD}. Although you can't legitimately perform a regression on these (the data is ordinal rather than ratio data), you can assign the values -1, 0 and 1 to SELL, HOLD and BUY and then pretend that you have ratio data. Sometimes this can give good results even though it's not theoretically justified.
Another approach is the Crammer-Singer method ("On the algorithmic implementation of multiclass kernel-based vector machines").
SVMlight implements it here: http://svmlight.joachims.org/svm_multiclass.html.
Classification into an infinite ordered set (such as the set of integers) is called ordinal regression. Usually this is done by mapping a range of continuous values onto an element of the set (see http://mlg.eng.cam.ac.uk/zoubin/papers/chu05a.pdf, Figure 1a).
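A hedged sketch of that mapping (the data and cut points are arbitrary): fit any regressor on the ordered targets, then bin its continuous output into integer labels with fixed thresholds.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical 1-D features with integer-ordered targets.
X = np.array([[0.1], [0.4], [0.5], [0.9], [1.3], [1.8]])
y = np.array([0, 0, 1, 1, 2, 2])

reg = LinearRegression().fit(X, y)
continuous = reg.predict([[0.45], [1.6]])
# Map continuous scores onto ordinal labels via cut points.
labels = np.digitize(continuous, bins=[0.5, 1.5])
print(continuous, labels)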
