I'm new to C and OpenCV, and I want to get the SURF descriptor's data matrix.
double tt = (double)cvGetTickCount();
cvExtractSURF( object, 0, &objectKeypoints, &objectDescriptors, storage, params );
printf("Object Descriptors: %d\n", objectDescriptors->total);
If I use cvSave(fileName, objectDescriptors) then I can get an XML file. My question is how to get just the descriptor data of objectDescriptors as a matrix: for example, if there are 45 keypoints, the matrix would be A = matrix[45][64].
How can I get A directly from objectDescriptors?
How can I get A from the XML file?
You can use OpenCV's newer C++ API, SurfFeatureDetector. It writes the keypoints directly into a std::vector<KeyPoint>.
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints;
detector.detect( img, keypoints);
Check out cv::KeyPoint Class Reference.
Check out [1] and [2] for real examples.
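To get the matrix A the question asks about with the new API, pair the detector with a SurfDescriptorExtractor: compute() fills a cv::Mat with one row per keypoint and 64 columns (128 with extended descriptors). A minimal sketch, assuming OpenCV 2.x with the nonfree module and a made-up file name:
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/features2d.hpp>
using namespace cv;

Mat img = imread("object.png", CV_LOAD_IMAGE_GRAYSCALE);
SurfFeatureDetector detector(400);
std::vector<KeyPoint> keypoints;
detector.detect(img, keypoints);

SurfDescriptorExtractor extractor;
Mat descriptors; // keypoints.size() rows x 64 columns, type CV_32F
extractor.compute(img, keypoints, descriptors);
// descriptors is A: A[i][j] == descriptors.at<float>(i, j)
With the old C API, objectDescriptors is a CvSeq of float vectors; OpenCV's find_obj.cpp sample reads it with a CvSeqReader, roughly like this:
int length = (int)(objectDescriptors->elem_size / sizeof(float)); // 64 or 128
CvSeqReader reader;
cvStartReadSeq(objectDescriptors, &reader, 0);
for (int i = 0; i < objectDescriptors->total; i++) {
    const float* descriptor = (const float*)reader.ptr; // row i of A
    // copy descriptor[0..length-1] into A[i] here
    CV_NEXT_SEQ_ELEM(reader.seq->elem_size, reader);
}
As for the XML file, cvLoad can read the saved structure back (for a CvSeq you must pass a CvMemStorage), after which the same loop applies.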
I am trying to do object detection using OpenCV on iOS. I'm using this code sample from the documentation.
Here's my code:
Mat src = imread("src.jpg");
Mat templ = imread("logo.jpg");
Mat src_gray;
cvtColor(src, src_gray, CV_BGR2GRAY);
Mat templ_gray;
cvtColor(templ, templ_gray, CV_BGR2GRAY);
int minHessian = 500;
OrbFeatureDetector detector(minHessian);
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect(src_gray, keypoints_1);
detector.detect(templ_gray, keypoints_2);
OrbDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute(src_gray, keypoints_1, descriptors_1);
extractor.compute(templ_gray, keypoints_2, descriptors_2);
The problem is the line extractor.compute(src_gray, keypoints_1, descriptors_1); it always leaves descriptors_1 empty.
src and templ are not empty.
Any thoughts?
Thanks
First of all, I think that if you want to use feature detectors and descriptors, you should learn how they work.
Have a look at this topic; the answer by 'Penelope' explains it all better than I could:
https://dsp.stackexchange.com/questions/10423/why-do-we-use-keypoint-descriptors
After that first step you should know better how the ORB detector/descriptor works (if you really want to use it), what its parameters are, etc. For this you can check the OpenCV documentation and the ORB paper:
http://docs.opencv.org/modules/features2d/doc/feature_detection_and_description.html
https://www.willowgarage.com/sites/default/files/orb_final.pdf
I say this because you set a 'minHessian' parameter on the ORB detector, when 'minHessian' is actually a parameter of the SURF detector.
Anyway, that is not the problem with your code. Try loading your images the way the example you are following does:
Mat src = imread("src.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat templ = imread("logo.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Then detect the keypoints:
detector.detect(src, keypoints_1);
detector.detect(templ, keypoints_2);
and now check that keypoints_1 and keypoints_2 are not empty. If they aren't, go for the descriptor extraction! It should work.
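Putting it all together, a minimal sketch of the corrected pipeline (assuming OpenCV 2.4.x; the detector's constructor argument is ORB's own nFeatures limit, not a Hessian threshold):
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/features2d/features2d.hpp>
using namespace cv;

Mat src = imread("src.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat templ = imread("logo.jpg", CV_LOAD_IMAGE_GRAYSCALE);

OrbFeatureDetector detector(500); // 500 = maximum number of features to keep
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect(src, keypoints_1);
detector.detect(templ, keypoints_2);

if (!keypoints_1.empty() && !keypoints_2.empty()) {
    OrbDescriptorExtractor extractor;
    Mat descriptors_1, descriptors_2;
    extractor.compute(src, keypoints_1, descriptors_1);
    extractor.compute(templ, keypoints_2, descriptors_2);
}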
Hope this helps
I was able to successfully load a number of images into a vector<Mat>. Once loaded, the images can be displayed with the imshow function.
The problem is that I want to apply SIFT on this set of images using the second variant, as mentioned in the documentation:
void FeatureDetector::detect(const vector<Mat>& images, vector<vector<KeyPoint>>& keypoints, const vector<Mat>& masks=vector<Mat>() ) const
This is producing the following error:
error C2664: 'void cv::FeatureDetector::detect(const cv::Mat &,std::vector<_Ty> &,const cv::Mat &) const' : cannot convert parameter 1 from 'std::vector<_Ty>' to 'const cv::Mat &'
The code I am using:
vector<Mat> images;
/* code to add all images to the vector not shown as it's messy, but it was done with FindFirstFile from windows.h. All images loaded correctly, as they can be read by imread */
initModule_nonfree();
Ptr<FeatureDetector> get_keypoints = FeatureDetector::create("SIFT");
vector<KeyPoint> keypoints;
get_keypoints->detect(images , keypoints);
The error is detected at get_keypoints->detect(images , keypoints);
From the detect signature, keypoints should be vector<vector<KeyPoint>>, yet you declare it as vector<KeyPoint>.
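In other words, only the declaration needs to change; a minimal sketch of the fixed call (same SIFT setup as in the question):
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/nonfree/nonfree.hpp>
using namespace cv;

initModule_nonfree();
Ptr<FeatureDetector> get_keypoints = FeatureDetector::create("SIFT");

vector<Mat> images; // filled as in the question
vector<vector<KeyPoint> > keypoints; // one inner vector per image
get_keypoints->detect(images, keypoints);
// keypoints[i] now holds the SIFT keypoints of images[i]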
How can I convert a CvMat matrix to an IplImage that can be saved with cvSaveImage, in C using OpenCV?
I learned about the function cvGetImage(const CvArr* arr, IplImage* imageHeader). I understand that arr stands for the CvMat array, but I could not really understand what the 'image header' actually is. Is that the pointer that stores the image? That is, would the following work?
clusters = cvCreateMat( image2_size, 1, CV_32SC1 );
IplImage *kmeans;
cvGetImage(clusters, &kmeans);
cvSaveImage("kmeans.jpg", kmeans);
//clusters is the output matrix after performing k-means clustering
// on a certain image.
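For what it's worth, the 'image header' is a caller-provided IplImage struct that cvGetImage fills in so it points at the matrix's data; nothing is copied or allocated. So in the snippet above, kmeans should be a plain IplImage whose address is passed in, not an uninitialized pointer. Also note that cvSaveImage expects 8-bit pixels, so the CV_32SC1 labels need converting first. A sketch of the usual pattern (cluster_count and the scaling are assumptions for illustration):
IplImage header; // stack-allocated header, owns no pixel data
IplImage* labels = cvGetImage(clusters, &header); // labels points into clusters' data

// spread the integer labels over 0..255 so they are visible when saved
IplImage* vis = cvCreateImage(cvGetSize(labels), IPL_DEPTH_8U, 1);
cvConvertScale(labels, vis, 255.0 / (cluster_count - 1), 0);
cvSaveImage("kmeans.jpg", vis);
cvReleaseImage(&vis);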
How do I use the FERN descriptor matcher in OpenCV? Does it take as input keypoints extracted by some algorithm (SIFT/SURF?), or does it calculate everything by itself?
edit:
I'm trying to apply it to a database of images:
fernmatcher->add(all_images, all_keypoints);
fernmatcher->train();
There are 20 images, less than 8 MB in total, and I extract keypoints using SURF. Memory usage jumps to 2.6 GB and training takes who knows how long...
FERN is no different from the rest of the matchers. Here is sample code for using FERN as a keypoint descriptor matcher.
int octaves= 3;
int octaveLayers=2;
bool upright=false;
double hessianThreshold=0;
std::vector<KeyPoint> keypoints_1,keypoints_2;
SurfFeatureDetector detector1( hessianThreshold, octaves, octaveLayers, upright );
detector1.detect( image1, keypoints_1 );
detector1.detect( image2, keypoints_2 );
std::vector< DMatch > matches;
FernDescriptorMatcher matcher;
matcher.match(image1,keypoints_1,image2,keypoints_2,matches);
Mat img_matches;
drawMatches( image1, keypoints_1, image2, keypoints_2, matches, img_matches, Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
imshow( "Fern Matches", img_matches);
waitKey(0);
But my suggestion is to use FAST, which is faster compared to FERN. Also, FERN can be used to train a set of images with keypoints, and the trained FERN can then be used as a classifier, just like the others.
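For comparison, a minimal sketch of swapping in the FAST detector for the detection step (the threshold value is an assumption; tune it per image):
FastFeatureDetector fast_detector(30); // 30 = intensity difference threshold
std::vector<KeyPoint> fast_keypoints;
fast_detector.detect(image1, fast_keypoints);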
// Open the image (loaded as grayscale, despite the variable name)
Mat img_rgb = imread("sudoku2.png", CV_LOAD_IMAGE_GRAYSCALE);
if (img_rgb.empty())
{
cout<<"Cannot open the image"<<endl;
return;
}
Mat img_bw = img_rgb > 128;
imwrite("image_bw.jpg", img_bw);
Now I want to get all the pixels of img_bw and save them into a matrix M (int[img_bw.rows][img_bw.cols]). How can I do that in C++?
What format?
The raw byte data in a cv::Mat is available from the .ptr() member function, i.e. img_bw.ptr().
OpenCV also has XML and YAML read and write functions for matrices, using the << operator with cv::FileStorage; see the OpenCV tutorial on XML and YAML I/O.
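For example, a minimal FileStorage round trip (file and node names chosen for illustration):
cv::FileStorage fs("img_bw.xml", cv::FileStorage::WRITE);
fs << "img_bw" << img_bw; // serializes type, size and data
fs.release();

cv::FileStorage fs2("img_bw.xml", cv::FileStorage::READ);
cv::Mat restored;
fs2["img_bw"] >> restored;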
EDIT: In C++ you can access pixels with the .at member template.
Use img_data.at<uchar>(row, col) for an unsigned char (CV_8U) pixel and img_data.at<float>(row, col) for a CV_32F image; note that .at takes (row, col), not (x, y).
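To build the int matrix M the question asks for, a straightforward sketch using .at (a vector of vectors stands in for the raw 2-D array so the size can come from the image):
std::vector<std::vector<int> > M(img_bw.rows, std::vector<int>(img_bw.cols));
for (int row = 0; row < img_bw.rows; ++row)
    for (int col = 0; col < img_bw.cols; ++col)
        M[row][col] = img_bw.at<uchar>(row, col); // 0 or 255 after the > 128 threshold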