Clustering and kmeans have unclear documentation - opencv

I need to use kmeans in upcoming work, and I know it is available in OpenCV since there is a documentation page on it.
However, I cannot make sense of the signature shown there, and it is not explained in the accompanying details either (they appear to relate to OpenCV 1.1). I mean, with the C++ line:
double kmeans(InputArray data, int K, InputOutputArray bestLabels, TermCriteria criteria, int attempts, int flags, OutputArray centers=noArray() )
What data type is data, a vector or a matrix? Which argument is the input matrix, and which will be the output?
I am used to reading documentation like the following, where it is clearly stated which arguments are inputs, outputs, or flags, and what data types they are:
C++: void FeatureDetector::detect(const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask=Mat() ) const
I would really appreciate if someone could give a short example of kmeans being used.
P.S. The input matrix I have ready to use for kmeans is the one produced by DescriptorExtractor::compute.
Thank you

You can find examples of using most of OpenCV's functions in the samples folder. In your situation, take a look at these two:
kmeans.cpp
matcher_simple.cpp
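To answer the data-type question directly: data is an N x dim single-channel floating-point (CV_32F) matrix with one sample per row (a std::vector of float points also works), bestLabels receives one cluster index per row, and centers receives the K cluster centers, so the last two are outputs. Below is a minimal sketch clustering the descriptor matrix from DescriptorExtractor::compute; K, the number of attempts, and the termination criteria are arbitrary choices for illustration.
#include <opencv2/core/core.hpp>
using namespace cv;

// descriptors: the CV_32F matrix from DescriptorExtractor::compute, one
// descriptor per row. SURF/SIFT descriptors are already CV_32F; binary
// descriptors (e.g. ORB) would need convertTo(..., CV_32F) first.
Mat clusterDescriptors(const Mat& descriptors, int K)
{
    Mat labels;   // output: one cluster index (0..K-1) per input row
    Mat centers;  // output: K rows, each row is one cluster center
    kmeans(descriptors,        // InputArray data: N x dim, CV_32F
           K,                  // number of clusters
           labels,             // InputOutputArray bestLabels
           TermCriteria(TermCriteria::EPS + TermCriteria::MAX_ITER, 100, 0.01),
           3,                  // attempts: run 3 times, keep the best result
           KMEANS_PP_CENTERS,  // flags: k-means++ center initialization
           centers);           // OutputArray centers
    return centers;
}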

Related

Flow matrix from Earth Mover's Distance

I was reading some paper and it said:
By adopting the Earth Mover's Distance (EMD) algorithm, a flow matrix
f = {fij} from one histogram to another can be obtained.
I found an implementation for EMD in OpenCV. However, this implementation looks like:
float EMDL1(InputArray signature1, InputArray signature2);
It returns a single float value rather than a flow matrix. Is there a way to obtain the flow matrix using OpenCV?
While I was writing the post I found the answer. It might help someone...
There is another function which is:
float EMD(InputArray signature1, InputArray signature2, int distType, InputArray cost=noArray(), float* lowerBound=0, OutputArray flow=noArray() );
flow is an output parameter that returns the flow matrix.
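As a minimal sketch of how to get at it, assuming two 1-D CV_32F histograms: each histogram is first turned into a signature matrix whose rows are (weight, coordinate) pairs, and the flow matrix comes back through the last argument. The makeSignature helper and the histogram layout are assumptions for illustration; DIST_L2 is used here as the ground distance (CV_DIST_L2 in older versions).
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;

// Turn a 1-D histogram into an EMD signature: one row per bin,
// holding (weight, bin coordinate).
static Mat makeSignature(const Mat& hist)
{
    Mat sig(hist.rows, 2, CV_32F);
    for (int i = 0; i < hist.rows; i++) {
        sig.at<float>(i, 0) = hist.at<float>(i);      // weight (bin value)
        sig.at<float>(i, 1) = static_cast<float>(i);  // coordinate (bin index)
    }
    return sig;
}

// Returns the EMD value and fills 'flow' with the flow matrix f = {f_ij},
// sized sig1.rows x sig2.rows, CV_32F.
float emdWithFlow(const Mat& hist1, const Mat& hist2, Mat& flow)
{
    Mat sig1 = makeSignature(hist1);
    Mat sig2 = makeSignature(hist2);
    return EMD(sig1, sig2, DIST_L2, noArray(), 0, flow);
}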

OpenCV Principal Component Analysis terminology - what actually is a 'sample'?

I'm working with Principal Component Analysis (PCA) in OpenCV. The constructor inputs for the case I'm interested in are:
PCA(InputArray data, InputArray mean, int flags, double retainedVariance);
Regarding the InputArray 'data' the documents state the appropriate flags should be:
CV_PCA_DATA_AS_ROW indicates that the input samples are stored as matrix rows.
CV_PCA_DATA_AS_COL indicates that the input samples are stored as matrix columns.
My question pertains to the use of the term 'samples' in that I'm not sure what a sample is in this context.
For example, let's say I have 4 sets of data, and for the sake of illustration let's label them A-D. Each set A through D has 8 elements. They are then arranged in the Mat variable I'll use as the InputArray.
The question is, which is it:
My sets are samples?
My data elements are samples?
Another way of asking: do I have 4 samples (CV_PCA_DATA_AS_COL), or do I have 4 sets of 8 samples (CV_PCA_DATA_AS_ROW)?
As a guess, I'd choose CV_PCA_DATA_AS_COL (i.e. I have 4 samples) - but that's just where my head is at... Until I learn the correct terminology it seems the word 'sample' could apply to either reasoning.
Ugh...
So the answer was found by reversing the logic behind the documentation for the PCA::project step...
Mat PCA::project(InputArray vec)
vec – input vector(s); must have the same dimensionality and the same layout as the input data used at PCA phase, that is, if CV_PCA_DATA_AS_ROW are specified, then vec.cols==data.cols (vector dimensionality)
i.e. a 'sample' is equivalent to a 'set', and the elements are the 'dimensions'.
(and my guess was correct :)
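To tie this back to the A-D example, here is a minimal sketch of the terminology (random values stand in for real data, and the row layout is an assumption): 4 samples, each with 8 dimensions, stored one sample per row, so DATA_AS_ROW applies; the transposed layout would use DATA_AS_COL instead.
#include <opencv2/core/core.hpp>
using namespace cv;

int main()
{
    // 4 samples (the sets A-D), each with 8 elements (the dimensions),
    // stored one sample per ROW. If the samples were stored one per
    // COLUMN (an 8x4 Mat), DATA_AS_COL would be the right flag instead.
    Mat data(4, 8, CV_32F);
    randu(data, Scalar::all(0), Scalar::all(1));  // placeholder values

    // Keep enough principal components to retain 95% of the variance.
    PCA pca(data, noArray(), PCA::DATA_AS_ROW, 0.95);

    // project() keeps the same layout: one row per projected sample.
    Mat projected = pca.project(data);
    return 0;
}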

Undocumented groupRectangles variants in OpenCV

In cascadedetect.cpp in OpenCV, there are several variants of the groupRectangles function:
void groupRectangles(std::vector<Rect>& rectList, int groupThreshold, double eps);
void groupRectangles(std::vector<Rect>& rectList, std::vector<int>& weights, int groupThreshold, double eps);
void groupRectangles(std::vector<Rect>& rectList, std::vector<int>& rejectLevels, std::vector<double>& levelWeights, int groupThreshold, double eps);
But in the OpenCV documentation, only the first variant is documented clearly; the second is mentioned, but its weights argument is not explained, and the third isn't mentioned at all.
Can anyone explain the meanings of weights, rejectLevels, and levelWeights?
I read the groupRectangles source code and understood the meanings of these parameters to some degree.
groupRectangles is defined in cascadedetect.cpp, which is used by the traincascade project in OpenCV. This project uses the Viola-Jones cascaded AdaBoost framework to detect objects, so it has several cascade stages, each of which is a strong classifier. By default the cascade classifier outputs positive only if the input sample passes every stage, but you can also make it output the index of the stage at which the sample was rejected, for example if you want to plot a ROC curve.
So rejectLevels is the index of the stage at which each rectangle was rejected. According to the source code, weights has the same effect as rejectLevels.
The above two parameters may not be very practical for us, but levelWeights is sometimes useful. Originally it is the score the rejecting stage assigns to the rectangle, but we can use it for a more general purpose. If every rectangle has a score (no matter where it comes from) and we want the scores of the grouped rectangles, the documented variant of groupRectangles won't help; we must use the third one, with rejectLevels set to zeros:
vector<int> levels(wins.size(), 0);
groupRectangles(wins, levels, scores, groupThreshold, eps);
Here scores contains the scores of wins; the two vectors have the same size.
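For completeness, here is a self-contained sketch of the same trick (wins, scores, and the threshold values are placeholders for whatever your detector produces):
#include <opencv2/objdetect/objdetect.hpp>
#include <vector>
using namespace cv;

// Group scored detections in place; wins and scores must have the same size.
void groupWithScores(std::vector<Rect>& wins, std::vector<double>& scores)
{
    // rejectLevels is not needed for this purpose, so pass matching zeros.
    std::vector<int> levels(wins.size(), 0);

    int groupThreshold = 1;  // a cluster needs at least groupThreshold+1 rects
    double eps = 0.2;        // relative difference of sides allowed when merging

    groupRectangles(wins, levels, scores, groupThreshold, eps);
    // wins and scores now describe the grouped rectangles.
}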

Comparing Images for similarity

I want to compare two images (number plate images). I have already separated each character from the number plate using an ROI. Now I want to compare each character with the stored templates, i.e. characters, to recognize it, and I want to know how to measure their similarity. I am new to OpenCV. I am using standard number plates.
OpenCV implements the template matching function. Here is the prototype:
void matchTemplate(const Mat& image, const Mat& templ, Mat& result, int method);
The comparison methods are mostly based on sums of squared differences with different normalization terms.
In the case of color images, each sum in the denominator is taken over all of the channels (and separate mean values are used for each channel).
Use the OpenCV function minMaxLoc to find the maximum and minimum values in the result matrix.
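As a minimal sketch, assuming the segmented character and the stored template are single-channel images (and the template is no larger than the character image), a normalized method such as TM_CCOEFF_NORMED yields a score where 1.0 means a perfect match:
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;

// Returns a similarity score between a segmented character and one template.
double matchScore(const Mat& character, const Mat& templ)
{
    Mat result;
    matchTemplate(character, templ, result, TM_CCOEFF_NORMED);

    double minVal, maxVal;
    minMaxLoc(result, &minVal, &maxVal);
    return maxVal;  // best similarity over all template positions
}
Running this against every stored template and picking the one with the highest score is one simple way to recognize the character.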
Try cvMatchTemplate, the C API equivalent:
void cvMatchTemplate(const CvArr* image, const CvArr* templ, CvArr* result, int method);
http://opencv.willowgarage.com/documentation/c/object_detection.html

OpenCV CalcPca input data

I am trying to implement a face recognition training function with OpenCV, using "eigenfaces". I have the sample data, but I can't find any info on the CalcPCA function arguments. All I know is that it takes the data matrix, a reference to the average eigenface matrix, a reference to the eigenvector matrix, and a reference to the eigenvalue matrix.
My question is: how should I pass the data from several test image matrices into the first argument of CalcPCA so that I can get the average eigenface and the eigenvectors?
This seems to be a good example: http://tech.groups.yahoo.com/group/OpenCV/message/47627
You can do it this way:
You have, for example, 10 Mat objects, where each Mat represents an image.
Now you can create a new Mat and put the previous 10 Mats into it.
To do that, use Mat::push_back(...) to insert the 10 Mats.
Hope this is helpful for you.
Marco
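A minimal sketch of that idea, using the C++ cv::PCA interface (the C++ counterpart of cvCalcPCA); the file list is a placeholder and every image is assumed to have the same size:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <string>
#include <vector>
using namespace cv;

int main()
{
    std::vector<std::string> files;  // e.g. "face0.png" ... "face9.png"
    // ... fill 'files' with your training image paths ...

    // data becomes (number of images) x (pixels per image), CV_32F.
    Mat data;
    for (size_t i = 0; i < files.size(); i++) {
        Mat img = imread(files[i], 0);  // load as grayscale
        Mat row = img.reshape(1, 1);    // flatten to a single row
        row.convertTo(row, CV_32F);     // PCA expects floating point
        data.push_back(row);            // append as a new row
    }

    // The mean face, eigenvectors and eigenvalues end up in
    // pca.mean, pca.eigenvectors and pca.eigenvalues.
    PCA pca(data, noArray(), PCA::DATA_AS_ROW, 10);  // keep 10 components
    return 0;
}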

Resources