I am using the SVM implementation of OpenCV (based on LibSVM) on iOS. Is it possible to obtain the weight vector after training?
Thank you!
After digging into it, I have been able to obtain the weights. To obtain them, one first has to retrieve the support vectors and then sum them up, each multiplied by its alpha value.
// Get the SVM weights by summing the support vectors weighted by their alpha values
int numSupportVectors = SVM.get_support_vector_count();
const float *supportVector;
const CvSVMDecisionFunc *dec = SVM.decision_func;
svmWeights = (float *) calloc(numOfFeatures + 1, sizeof(float));
for (int i = 0; i < numSupportVectors; ++i)
{
    float alpha = *(dec[0].alpha + i);
    supportVector = SVM.get_support_vector(i);
    for (int j = 0; j < numOfFeatures; j++)
        *(svmWeights + j) += alpha * *(supportVector + j);
}
*(svmWeights + numOfFeatures) = -dec[0].rho; // Be careful with the sign of the bias!
The only trick here is that the member variable decision_func is protected in the OpenCV framework, so I had to change its visibility in order to access it.
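An alternative to patching the framework headers, sketched here as a suggestion: a thin subclass can expose the protected member (the class name MySVM is my own):

#include <opencv2/ml/ml.hpp>

// Hypothetical subclass that exposes CvSVM's protected decision_func
// member without modifying the OpenCV headers.
class MySVM : public CvSVM
{
public:
    const CvSVMDecisionFunc* getDecisionFunc() const
    {
        return decision_func;
    }
};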
A cursory glance at the docs and the source code (https://github.com/Itseez/opencv/blob/master/modules/ml/src/svm.cpp) tells me that on the surface the answer is "No". The hyperplane parameters seem to be tucked away in the CvSVMSolver class. CvSVM contains an object of this class called "solver". See if you can get to its members.
I need to write my own implementation of computing the fundamental matrix between two images, based on corresponding image coordinates, without using OpenCV.
Is it possible to describe this algorithm in its simplest form, in accordance with the following function signature, as a simple and straightforward formula?
FMatrixEightPoint()
Input Arguments:
points1(x, y) - pixel coordinates in the first image,
                corresponding to points2 in the second image
points2(x, y) - pixel coordinates in the second image,
                corresponding to points1 in the first image
Output:
F - the fundamental matrix between the first image and the second image
Yes, it is possible to describe the algorithm in the mentioned form.
If you use OpenCV, you can simply call findFundamentalMat, which also provides the 8-point method for computing the fundamental matrix.
The following example (in C++) is taken from the OpenCV documentation, but adapted to use the 8-point algorithm instead of RANSAC:
// Example: estimation of the fundamental matrix using the 8-point algorithm
int point_count = 8; // must be >= 8
vector<Point2f> points1(point_count);
vector<Point2f> points2(point_count);
// initialize the points here ...
for( int i = 0; i < point_count; i++ )
{
    points1[i] = ...;
    points2[i] = ...;
}
Mat fundamental_matrix =
    findFundamentalMat(points1, points2, CV_FM_8POINT);
If you want to write your own function, it would look something like this (pseudocode, not valid code):
Matrix findFundamentalMat(Array points1, Array points2)
{
    Matrix fundamentalMatrix;
    // compute the fundamental matrix based on the input points1 and points2,
    // or call OpenCV's findFundamentalMat
    return fundamentalMatrix;
}
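If you do implement it yourself, a minimal sketch of the unnormalized 8-point algorithm could look as follows. This is my own illustration, not OpenCV's implementation: it still leans on cv::SVD for the linear algebra (replace that with your own solver if OpenCV must be avoided entirely), and for real data you would also want to normalize the points (Hartley normalization) first.

#include <opencv2/core/core.hpp>
#include <vector>

cv::Mat findFundamentalMat8Point(const std::vector<cv::Point2f>& points1,
                                 const std::vector<cv::Point2f>& points2)
{
    CV_Assert(points1.size() >= 8 && points1.size() == points2.size());

    // Each correspondence (p1, p2) contributes one row to the constraint
    // matrix A, derived from the epipolar constraint p2^T * F * p1 = 0.
    cv::Mat A((int)points1.size(), 9, CV_64F);
    for (int i = 0; i < (int)points1.size(); i++) {
        double x1 = points1[i].x, y1 = points1[i].y;
        double x2 = points2[i].x, y2 = points2[i].y;
        double row[9] = { x2*x1, x2*y1, x2, y2*x1, y2*y1, y2, x1, y1, 1.0 };
        cv::Mat(1, 9, CV_64F, row).copyTo(A.row(i));
    }

    // Solve A*f = 0 for the null vector f (the 9 stacked entries of F).
    cv::Mat f;
    cv::SVD::solveZ(A, f);
    cv::Mat F = f.reshape(0, 3); // reshape the 9x1 vector into a 3x3 matrix

    // Enforce the rank-2 constraint by zeroing the smallest singular value.
    cv::SVD svd(F);
    cv::Mat W = cv::Mat::diag(svd.w);
    W.at<double>(2, 2) = 0.0;
    return svd.u * W * svd.vt;
}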
What I am doing is trying to implement a Skin Probability Maps algorithm for skin detection in OpenCV.
I'm stuck at the point where I should compare the SkinHistValue / NonSkinHistValue probability of each pixel with the Theta threshold, according to http://www.cse.unsw.edu.au/~icml2002/workshops/MLCV02/MLCV02-Morales.pdf and this tutorial http://www.morethantechnical.com/2013/03/05/skin-detection-with-probability-maps-and-elliptical-boundaries-opencv-wcode/
My problem lies in calculating the coordinates for the histogram value:
From the tutorial:
calcHist(&nRGB_frame,1,channels,mask,skin_Histogram,2,histSize,ranges,uniform,accumulate);
calcHist(&nRGB_frame,1,channels,~mask,non_skin_Histogram,2,histSize,ranges,uniform,accumulate);
This calculates the histograms. Then I normalize them.
And after that:
for (int i = 0; i < nrgb.rows; i++) {
    int gbin = cvRound((nrgb(i)[1] - 0) / range_dist[0] * hist_bins[0]);
    int rbin = cvRound((nrgb(i)[2] - low_range[1]) / range_dist[1] * hist_bins[1]);
    float skin_hist_val = skin_Histogram.at<float>(gbin, rbin);
}
where nrgb is my image, and I'm trying to get the skin histogram value for it. But gbin and rbin are probably calculated wrong, and an exception is thrown (am I indexing outside the array?) when it comes to
skin_Histogram.at<float>(gbin,rbin);
I have absolutely no idea how to calculate it correctly. Any help?
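One likely culprit, offered here only as a guess: cvRound can produce a bin index equal to hist_bins[k] when a pixel value sits exactly on the upper range bound, which runs one past the last bin. A minimal sketch of a bounds-safe lookup, reusing the variable names from the question and assuming low_range holds the lower bounds of both channels (the helper name histValueClamped is hypothetical):

#include <algorithm>
#include <opencv2/core/core.hpp>

// Hypothetical helper: look up a 2D histogram value with the bin
// indices clamped into the valid range, so at<float>() never runs
// past the last bin.
static float histValueClamped(const cv::Mat& hist, const cv::Vec3b& px,
                              const float low_range[2], const float range_dist[2],
                              const int hist_bins[2])
{
    int gbin = cvRound((px[1] - low_range[0]) / range_dist[0] * hist_bins[0]);
    int rbin = cvRound((px[2] - low_range[1]) / range_dist[1] * hist_bins[1]);
    gbin = std::min(std::max(gbin, 0), hist_bins[0] - 1); // clamp to valid bins
    rbin = std::min(std::max(rbin, 0), hist_bins[1] - 1);
    return hist.at<float>(gbin, rbin);
}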
I have read various posts here at StackOverflow regarding the execution of FFT on accelerometer data, but none of them helped me understand my problem.
I am executing this FFT implementation on my accelerometer data array in the following way:
int length = data.length; // assuming data is a double[] with length <= 256
double[] re = new double[256];
double[] im = new double[256]; // imaginary part stays zero for real input
for (int i = 0; i < length; i++) {
    re[i] = data[i];
}
FFT fft = new FFT(256);
fft.fft(re, im);
float[] outputData = new float[256];
// only the first half of the spectrum is meaningful for real input
for (int i = 0; i < 128; i++) {
    outputData[i] = (float) Math.sqrt(re[i] * re[i]
                                      + im[i] * im[i]);
}
I plotted the contents of outputData (left), and also used R to perform the FFT on my data (right).
What am I doing wrong here? I am using the same code for executing the FFT that I see in other places.
EDIT: Following the advice of @PaulR to apply a windowing function, and the link provided by @BjornRoche (http://baumdevblog.blogspot.com.br/2010/11/butterworth-lowpass-filter-coefficients.html), I was able to solve my problem. The solution is pretty much what is described in that link. This is my graph now: http://imgur.com/wGs43
The low frequency artefacts are probably due to a lack of windowing. Try applying a window function.
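For illustration, a minimal sketch (my own, in C++ rather than the question's Java) of applying a Hann window to the samples before the FFT, which reduces the spectral leakage that causes such artefacts:

#include <cmath>
#include <vector>

// Sketch: multiply the samples by a Hann window before the FFT to
// reduce spectral leakage from the finite, non-periodic record.
void applyHannWindow(std::vector<double>& samples)
{
    const std::size_t n = samples.size();
    if (n < 2) return;
    for (std::size_t i = 0; i < n; ++i)
        samples[i] *= 0.5 * (1.0 - std::cos(2.0 * M_PI * i / (n - 1)));
}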
The overall shift is probably due to different scaling factors in the two different FFT implementations - my guess is that you are seeing a shift of 24 dB which corresponds to a difference in scaling by a factor of 256.
Because all your data on the left are above 0, from a frequency-analysis point of view the signal has a large DC component. After the FFT, that DC component ends up in the 0 Hz bin and is very large. For your case, you only need to cut off the DC component and preserve the signal above 0 Hz (the AC part); that makes sense.
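A minimal sketch of one way to do that (my own illustration, again in C++): subtract the mean of the record before the FFT, which removes the DC component.

#include <numeric>
#include <vector>

// Sketch: remove the DC component by subtracting the record's mean.
void removeDC(std::vector<double>& samples)
{
    if (samples.empty()) return;
    double mean = std::accumulate(samples.begin(), samples.end(), 0.0)
                  / samples.size();
    for (std::size_t i = 0; i < samples.size(); ++i)
        samples[i] -= mean;
}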
Is there any built-in library function for sliding a window (of custom size) over an image in OpenCV version 2.x?
I tried to write the algorithm myself, but I found it very painful and probably error-prone.
I need to slide over an image and create a histogram as input for an SVM.
There is one for the HOG descriptor, which calculates HOG features, but I have my own feature set, so I just need an algorithm that lets me slide over an image.
You can define a Region of Interest (ROI) on a cv::Mat object, which gives you a new Mat object referring to the sub-window. This does not copy the underlying data; it merely creates a new header with the appropriate metadata.
cv::Mat::operator()
See also this other question:
OpenCV C++, getting Region Of Interest (ROI) using cv::Mat
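A minimal sketch of the idea (my own example; the image file name is hypothetical):

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>

cv::Mat image = cv::imread("image.png"); // hypothetical input file
cv::Rect window(0, 0, 64, 64);           // x, y, width, height
cv::Mat roi = image(window);             // new header, no pixel data copied
// compute your histogram / features on roi here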
Basic code can look like the snippet below; I hope the code is commented well enough.
This is a single-scale sliding window of 60x60 with a step of 30.
The result of this simple example is the ROI for each window position.
You can also visit this basic tutorial: Tutorial Here.
// Parameters of your sliding window
int windows_n_rows = 60;
int windows_n_cols = 60;
// Step of each window
int StepSlide = 30;
for (int row = 0; row <= LoadedImage.rows - windows_n_rows; row += StepSlide)
{
    for (int col = 0; col <= LoadedImage.cols - windows_n_cols; col += StepSlide)
    {
        // Note: cv::Rect takes (x, y, width, height), so the column count
        // comes first; with a square window the order does not matter.
        Rect windows(col, row, windows_n_cols, windows_n_rows);
        Mat Roi = LoadedImage(windows);
    }
}
Here's my problem. I manually extracted key point features with SURF on multiple images, but I also already know which pairs of points are going to match. The thing is, I'm trying to create my matching pairs, but I don't understand how. I tried looking at the code, but it's a mess.
Right now, I know that one dimension of features.descriptors, a matrix, is the same as the number of key points (the other dimension is 1). In the code, matching pairs are detected using only the descriptors, so it compares rows (or columns, I'm not sure) of two descriptor matrices and determines whether there's anything in common.
But in my case, I already know that there's a match between keypoint i from image 1 and keypoint j from image 2. How do I describe that as a MatchesInfo value, particularly its element matches of type std::vector<cv::DMatch>?
EDIT: So, for this, I don't need to use any matcher or anything like that; I already know which pairs go together!
If I understood your question correctly, I assume that you want the keypoint matches in a std::vector<cv::DMatch> for the purpose of drawing them with OpenCV's cv::drawMatches, or for use with some similar OpenCV function. Since I was also doing matching "by hand" recently, here's my code, which draws arbitrary matches contained originally in a std::vector<std::pair<int, int> > aMatches and displays them in a window:
const cv::Mat& pic1 = img_1_var;
const cv::Mat& pic2 = img_2_var;
const std::vector<cv::KeyPoint>& feats1 = img_1_feats;
const std::vector<cv::KeyPoint>& feats2 = img_2_feats;
// You can of course work directly with the original objects,
// but for drawing you only need const references to the
// images & their corresponding extracted feats.
std::vector<std::pair<int, int> > aMatches;
// Fill aMatches manually - one entry is a pair consisting of
// (index_in_img_1_feats, index_in_img_2_feats).
// The next code draws the matches:
std::vector<cv::DMatch> matches;
matches.reserve(aMatches.size());
for (int i = 0; i < (int)aMatches.size(); ++i)
    matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second,
                                 std::numeric_limits<float>::max()));
cv::Mat output;
cv::drawMatches(pic1, feats1, pic2, feats2, matches, output);
cv::namedWindow("Match", 0);
cv::setWindowProperty("Match", CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
cv::imshow("Match", output);
cv::waitKey();
cv::destroyWindow("Match");
Alternatively, if you need fuller information about the matches for purposes more complicated than drawing, you might also want to set the distance between matches to a proper value. E.g. if you want to calculate distances using the L2 distance, you should replace the following lines:
for (int i = 0; i < (int)aMatches.size(); ++i)
    matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second,
                                 std::numeric_limits<float>::max()));
with this (note that for this a reference to the feature descriptor vectors is also needed):
cv::L2<float> cmp;
const std::vector<std::vector<float> >& desc1 = img_1_feats_descriptors;
const std::vector<std::vector<float> >& desc2 = img_2_feats_descriptors;
for (int i = 0; i < (int)aMatches.size(); ++i) {
    // Take pointers to the first components of the two descriptor vectors
    // and let cv::L2 compute the distance over their full length.
    const float* firstFeat  = &desc1[aMatches[i].first][0];
    const float* secondFeat = &desc2[aMatches[i].second][0];
    float distance = cmp(firstFeat, secondFeat,
                         (int)desc1[aMatches[i].first].size());
    matches.push_back(cv::DMatch(aMatches[i].first, aMatches[i].second,
                                 distance));
}
Note that in the last snippet descX[i] is the descriptor for featsX[i], each element of the inner vector being one component of the descriptor vector. Also note that all descriptor vectors should have the same dimensionality.