I am trying to do object detection using OpenCV on iOS. I'm using this code sample from the documentation.
Here's my code:
Mat src = imread("src.jpg");
Mat templ = imread("logo.jpg");
Mat src_gray;
cvtColor(src, src_gray, CV_BGR2GRAY);
Mat templ_gray;
cvtColor(templ, templ_gray, CV_BGR2GRAY);
int minHessian = 500;
OrbFeatureDetector detector(minHessian);
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect(src_gray, keypoints_1);
detector.detect(templ_gray, keypoints_2);
OrbDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute(src_gray, keypoints_1, descriptors_1);
extractor.compute(templ_gray, keypoints_2, descriptors_2);
The problem is the line extractor.compute(src_gray, keypoints_1, descriptors_1);, which leaves descriptors_1 empty every time.
src and templ are not empty.
Any thoughts?
Thanks
First of all, I think that if you want to use feature detectors and descriptors, you should first learn how they work.
You can look at this topic; the answer by 'Penelope' explains everything better than I could:
https://dsp.stackexchange.com/questions/10423/why-do-we-use-keypoint-descriptors
After that first step, you should have a better idea of how the ORB detector/descriptor works (if you really want to use it), what its parameters are, etc. For this you can check the OpenCV documentation and the ORB paper:
http://docs.opencv.org/modules/features2d/doc/feature_detection_and_description.html
https://www.willowgarage.com/sites/default/files/orb_final.pdf
I say this because you set a 'minHessian' parameter on the ORB detector, when 'minHessian' is actually a parameter of the SURF detector.
Anyway, that is not the problem with your code. Try to load your images the way the example you are following does:
Mat src = imread("src.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat templ = imread("logo.jpg", CV_LOAD_IMAGE_GRAYSCALE );
Then detect the keypoints:
detector.detect(src, keypoints_1);
detector.detect(templ, keypoints_2);
and now check that keypoints_1 and keypoints_2 are not empty. If they aren't, go for the descriptor extraction; it should work.
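Putting it together, here is a minimal sketch of the corrected flow. It assumes the same OpenCV 2.x C++ API as your snippet, where ORB's first constructor argument is the number of features to retain (not a Hessian threshold):
// Load both images directly as grayscale, like the example does
Mat src = imread("src.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat templ = imread("logo.jpg", CV_LOAD_IMAGE_GRAYSCALE);

int nfeatures = 500; // ORB's first parameter: max keypoints to keep
OrbFeatureDetector detector(nfeatures);
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect(src, keypoints_1);
detector.detect(templ, keypoints_2);

// If either is empty, descriptor extraction will also come back empty
if (keypoints_1.empty() || keypoints_2.empty())
    return -1;

OrbDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute(src, keypoints_1, descriptors_1);
extractor.compute(templ, keypoints_2, descriptors_2);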
Hope this helps
Related
I'm new to C and OpenCV, and I want to get the SURF descriptor's data matrix.
double tt = (double)cvGetTickCount();
cvExtractSURF( object, 0, &objectKeypoints, &objectDescriptors, storage, params );
printf("Object Descriptors: %d\n", objectDescriptors->total);
If I use cvSave(fileName, objectDescriptors) I can get the XML file. My question is: how can I get just the matrix of descriptor data from objectDescriptors? For example, if there are 45 keypoints, the matrix would be A = matrix[45][64].
How can I get A directly from objectDescriptors?
How can I get A from the xml file?
You can use OpenCV's newer C++ API, SurfFeatureDetector. It will save keypoints directly into a vector<KeyPoint>.
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints;
detector.detect( img, keypoints);
Check out cv::KeyPoint Class Reference.
Check out [1] and [2] for real examples.
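In the same C++ API, the descriptor matrix you are after falls out directly from the descriptor extractor. A minimal sketch, assuming regular (64-dimensional, non-extended) SURF descriptors:
SurfDescriptorExtractor extractor;
Mat descriptors;
extractor.compute( img, keypoints, descriptors );
// descriptors is exactly the matrix A: one row per keypoint,
// 64 columns (128 for extended SURF), type CV_32F
float a_00 = descriptors.at<float>(0, 0); // element A[0][0]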
How can I convert a cvMat matrix to IplImage that can be saved using cvSaveImage, in C using OpenCV?
I learnt about the function cvGetImage(const CvArr* arr, IplImage* imageHeader). I understand that arr stands for the CvMat array, but I could not really understand what the 'image header' actually is. Is that the pointer that stores the image? That is, would the following work?
// clusters is the output matrix after performing k-means clustering
// on a certain image.
clusters = cvCreateMat( image2_size, 1, CV_32SC1 );
IplImage *kmeans;
cvGetImage(clusters, &kmeans);
cvSaveImage("kmeans.jpg", kmeans);
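For what it's worth, a sketch of how cvGetImage is usually called: the 'image header' is a caller-provided IplImage struct (not an uninitialized pointer) that cvGetImage fills in to describe the matrix data; no pixels are copied. Note also that cvSaveImage expects 8-bit data, so a CV_32SC1 matrix would need converting first:
IplImage header;                                   // the header: a struct you provide
IplImage* kmeans = cvGetImage(clusters, &header);  // fills header, shares the data

// convert the 32-bit labels to 8-bit before saving
IplImage* kmeans8u = cvCreateImage(cvGetSize(kmeans), IPL_DEPTH_8U, 1);
cvConvertScale(kmeans, kmeans8u, 1, 0);
cvSaveImage("kmeans.jpg", kmeans8u);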
How do I use the FERN descriptor matcher in OpenCV? Does it take as input keypoints extracted by some algorithm (SIFT/SURF?), or does it calculate everything by itself?
edit:
I'm trying to apply it to a database of images:
fernmatcher->add(all_images, all_keypoints);
fernmatcher->train();
There are 20 images, less than 8 MB in total, and I extract keypoints using SURF. Memory usage jumps to 2.6 GB and training takes who knows how long...
FERN is not different from the rest of the matchers. Here is sample code for using FERN as a keypoint descriptor matcher.
// SURF detector parameters
double hessianThreshold = 0;
int octaves = 3;
int octaveLayers = 2;
bool upright = false;

// detect keypoints in both images
std::vector<KeyPoint> keypoints_1, keypoints_2;
SurfFeatureDetector detector1( hessianThreshold, octaves, octaveLayers, upright );
detector1.detect( image1, keypoints_1 );
detector1.detect( image2, keypoints_2 );

// FERN computes its own descriptors internally from the images,
// so it is passed the images plus the keypoints
std::vector< DMatch > matches;
FernDescriptorMatcher matcher;
matcher.match( image1, keypoints_1, image2, keypoints_2, matches );

// visualize the matches
Mat img_matches;
drawMatches( image1, keypoints_1, image2, keypoints_2, matches, img_matches,
             Scalar::all(-1), Scalar::all(-1), vector<char>(),
             DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
imshow( "Fern Matches", img_matches );
waitKey(0);
But my suggestion is to use FAST, which is faster compared to FERN. Also, FERN can be used to train on a set of images with keypoints, and the trained FERN can then be used as a classifier just like all the others.
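If you do try FAST, a minimal sketch of swapping it in as the detector (assuming the same OpenCV 2.x API as above; FAST only detects keypoints, so you still need a descriptor or matcher on top):
int threshold = 20;                     // intensity threshold, tune as needed
FastFeatureDetector fastDetector(threshold);
std::vector<KeyPoint> fast_keypoints;
fastDetector.detect(image1, fast_keypoints);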
I am trying to compute the DFT of a single-channel image, and as cvDft expects complex values, I was advised to merge the original image with another image of all 0's, so this latter one will be treated as the imaginary part.
My problem comes when using the cvMerge function:
Mat tmp = imread(filename, 0);
if( tmp.empty() )
{
    cout << "Usage: dft <image_name>" << endl;
    return -1;
}
Mat Result(tmp.rows, tmp.cols, CV_64F, 2);
Mat tmp1(tmp.rows, tmp.cols, CV_64F, 0);
Mat image(tmp.rows, tmp.cols, CV_64F, 2);
cvMerge(tmp, tmp1, image);
It gives me the following error: cannot convert cv::Mat to CvArr.
Could anyone help me? Thanks!
1) It seems like you're mixing up two different styles of OpenCV code:
cv::Mat (Mat) is a C++ class from the new version of OpenCV, while cvMerge is a C function from the old version.
Instead of using cvMerge, use merge.
2) You're trying to merge a matrix (tmp), which is (probably) of type CV_8U, with a CV_64F one.
Use convertTo to get tmp as CV_64F.
3) Why are your Result and image Mats (the destination Mats) initialized to cv::Scalar(2)? I think you're misusing the constructor parameters. See here for more info.
4) Your image Mat is a single-channel Mat, but you want it as a two-channel Mat (as mentioned in the question). Change the declaration to:
Mat image(tmp.rows,tmp.cols,CV_64FC2);
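Putting the four points together, a minimal sketch of what the fixed code might look like (same variable names as in the question):
Mat tmp = imread(filename, 0);             // 8-bit, single channel
if( tmp.empty() )
{
    cout << "Usage: dft <image_name>" << endl;
    return -1;
}

Mat real;
tmp.convertTo(real, CV_64F);               // point 2: match the depths

Mat imag = Mat::zeros(tmp.rows, tmp.cols, CV_64F); // all-zero imaginary part

Mat planes[] = { real, imag };
Mat image;                                 // ends up CV_64FC2 (point 4)
merge(planes, 2, image);                   // point 1: C++ merge, not cvMerge

Mat Result;
dft(image, Result);                        // complex-to-complex DFT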
I have OpenCV and libfreenect configured on my Ubuntu 11.04, and they work separately.
I also have some experience with OpenCV, but the problem is I don't know how to combine the Kinect and OpenCV. I was hoping someone would kindly help me out by pointing me to good documentation or providing simple sample code for using the Kinect in OpenCV.
The first link on Google for "OpenCV kinect" was this. I hope it helps.
To quickly get things working, I would recommend adding the OpenCV libraries to one of the OpenNI samples (for example NiUserTracker). There you can acquire the depth image from the DepthMetaData object in the following way.
// obtain depth image
DepthMetaData depthMD;
g_DepthGenerator.GetMetaData(depthMD);
const XnDepthPixel* g_Depth = depthMD.Data();
// wrap the 16-bit depth buffer in a cv::Mat header (no data is copied)
cv::Mat DepthBuf(480, 640, CV_16UC1, (unsigned short*)g_Depth);
// To display the depth image you would probably want to normalize it to the 0-255 range first.

// obtain rgb image
ImageMetaData ImageMD;
g_ImageGenerator.GetMetaData(ImageMD);
const XnUInt8* g_Img = ImageMD.Data();
// wrap the 8-bit RGB buffer, then convert to OpenCV's BGR channel order
cv::Mat ImgBuf(480, 640, CV_8UC3, (unsigned char*)g_Img);
cv::Mat ImgBuf2;
cv::cvtColor(ImgBuf, ImgBuf2, CV_RGB2BGR);
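For the normalization mentioned in the comment above, a minimal sketch (assuming you just want something viewable, not metrically meaningful):
// scale the 16-bit depth into 0-255 and convert to 8-bit for display
cv::Mat DepthShow;
cv::normalize(DepthBuf, DepthShow, 0, 255, cv::NORM_MINMAX, CV_8U);
cv::imshow("Depth", DepthShow);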
To get MrglMrgl's code to work, I had to add the following at the beginning:
nRetVal = g_Context.FindExistingNode(XN_NODE_TYPE_IMAGE, g_ImageGenerator);
if (nRetVal != XN_STATUS_OK)
{
    printf("No image node exists! Check your XML.");
    return 1;
}
And this at the end:
cv::namedWindow( "Example1", CV_WINDOW_AUTOSIZE );
cv::imshow( "Example1", ImgBuf2 );
cv::waitKey(0);