FernDescriptorMatch - how to use it? How does it work? - image-processing

How do I use the FERN descriptor matcher in OpenCV? Does it take as input keypoints extracted by some algorithm (SIFT/SURF?) or does it calculate everything by itself?
Edit:
I'm trying to apply it to a database of images:
fernmatcher->add(all_images, all_keypoints);
fernmatcher->train();
There are 20 images, less than 8 MB in total, and I extract the keypoints using SURF. Memory usage jumps to 2.6 GB and training takes who knows how long...

FERN is no different from the rest of the matchers. Here is sample code for using FERN as a keypoint descriptor matcher.
// SURF detector parameters
int octaves = 3;
int octaveLayers = 2;
bool upright = false;
double hessianThreshold = 0;
std::vector<KeyPoint> keypoints_1, keypoints_2;
// detect keypoints in both images with SURF
SurfFeatureDetector detector1( hessianThreshold, octaves, octaveLayers, upright );
detector1.detect( image1, keypoints_1 );
detector1.detect( image2, keypoints_2 );
// match the two keypoint sets with the FERN descriptor matcher
std::vector< DMatch > matches;
FernDescriptorMatcher matcher;
matcher.match( image1, keypoints_1, image2, keypoints_2, matches );
Mat img_matches;
// draw and display the matches
drawMatches( image1, keypoints_1, image2, keypoints_2, matches, img_matches, Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
imshow( "Fern Matches", img_matches );
waitKey(0);
But my suggestion is to use FAST, which is faster compared to FERN; a short detection sketch follows below. Also, FERN can be used to train on a set of images with keypoints, and the trained FERN can then be used as a classifier just like any other.
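For reference, a minimal sketch of detecting keypoints with FAST, assuming the OpenCV 2.x C++ API (the file name and threshold value are only illustrative):
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/highgui/highgui.hpp>
using namespace cv;
int main()
{
    Mat image = imread("scene.jpg", 0);       // load as grayscale
    std::vector<KeyPoint> keypoints;
    FastFeatureDetector detector(20);         // FAST intensity threshold (illustrative value)
    detector.detect(image, keypoints);
    Mat output;
    drawKeypoints(image, keypoints, output);  // visualize the detected corners
    imshow("FAST keypoints", output);
    waitKey(0);
    return 0;
}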

Related

How can I remove sinusoidal noise with frequency-domain filtering in OpenCV?

I'm trying to remove the 5 staff lines in each section of sheet music. My original image is this: http://en.wikipedia.org/wiki/Requiem_(Mozart)#/media/File:K626_Requiem_Mozart.jpg
First, I apply a Gaussian filter and binarize with a threshold (min: 100, max: 255).
Then I apply the DFT to this image, erase some appropriate lines, and reconstruct the image by inverse DFT.
I use the sample code from the OpenCV documentation; actually, I doubt that I really understand this code. :(
http://docs.opencv.org/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html
In this sample code there are two Mats: one is 'complexI' for the spectrum, the other is 'magI' for the actual visualization. The result of cv::dft is complexI; magI is the normalized magnitude of complexI. My question is this: how can I add a black line (to cancel frequencies in the frequency domain) and then reconstruct the image?
OpenCV (now) provides a detailed tutorial on how to deal with periodic noise by spectral filtering: https://docs.opencv.org/trunk/d2/d0b/tutorial_periodic_noise_removing_filter.html
It hinges on using cv::dft(), cv::idft(), cv::mulSpectrums(), and cv::magnitude().
The core function (from the tutorial) to perform the filtering goes like so:
void filter2DFreq(const Mat& inputImg, Mat& outputImg, const Mat& H)
{
    Mat planes[2] = { Mat_<float>(inputImg.clone()), Mat::zeros(inputImg.size(), CV_32F) };
    Mat complexI;
    merge(planes, 2, complexI);
    // find the DFT of the image
    dft(complexI, complexI, DFT_SCALE);
    Mat planesH[2] = { Mat_<float>(H.clone()), Mat::zeros(H.size(), CV_32F) };
    Mat complexH;
    merge(planesH, 2, complexH);
    Mat complexIH;
    // apply the spectral filter
    mulSpectrums(complexI, complexH, complexIH, 0);
    // reconstruct the filtered image
    idft(complexIH, complexIH);
    split(complexIH, planes);
    outputImg = planes[0];
}
Refer to the tutorial for more information.
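To directly answer the "add a black line" part, here is a hedged sketch (not from the tutorial) of building the mask H as an all-ones float image with the unwanted frequency rows drawn in black, then applying it with filter2DFreq above. The line positions are purely illustrative and must match the layout of your spectrum (the tutorial works in the non-shifted layout, or uses its fftshift helper):
Mat img = imread("score.png", IMREAD_GRAYSCALE);
img.convertTo(img, CV_32F);
// all-pass mask; draw zeros over the frequencies to cancel
Mat H = Mat::ones(img.size(), CV_32F);
line(H, Point(0, 40), Point(H.cols - 1, 40), Scalar(0), 3);                    // illustrative notch row
line(H, Point(0, H.rows - 40), Point(H.cols - 1, H.rows - 40), Scalar(0), 3);  // its symmetric counterpart
Mat filtered;
filter2DFreq(img, filtered, H);   // multiply spectra and inverse-transform
normalize(filtered, filtered, 0, 255, NORM_MINMAX);
filtered.convertTo(filtered, CV_8U);
imwrite("filtered.png", filtered);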

OpenCV DescriptorExtractor returns empty

I am trying to do object detection using OpenCV on iOS. I'm using this code sample from the documentation.
Here's my code:
Mat src = imread("src.jpg");
Mat templ = imread("logo.jpg");
Mat src_gray;
cvtColor(src, src_gray, CV_BGR2GRAY);
Mat templ_gray;
cvtColor(templ, templ_gray, CV_BGR2GRAY);
int minHessian = 500;
OrbFeatureDetector detector(minHessian);
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect(src_gray, keypoints_1);
detector.detect(templ_gray, keypoints_2);
OrbDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute(src_gray, keypoints_1, descriptors_1);
extractor.compute(templ_gray, keypoints_2, descriptors_2);
The problem is on the line extractor.compute(src_gray, keypoints_1, descriptors_1); which leaves descriptors_1 always empty.
src and templ are not empty.
Any thoughts?
Thanks
First of all, I think that if you want to use feature detectors and descriptors, you should learn how they work.
You can see this topic; the answer by 'Penelope' explains everything better than I can:
https://dsp.stackexchange.com/questions/10423/why-do-we-use-keypoint-descriptors
After that first step, you should have a better idea of how the ORB detector/descriptor works (if you really want to use it), what its parameters are, etc. For this you can check the OpenCV documentation and the ORB paper:
http://docs.opencv.org/modules/features2d/doc/feature_detection_and_description.html
https://www.willowgarage.com/sites/default/files/orb_final.pdf
I say this because you pass a 'minHessian' parameter to the ORB detector, when 'minHessian' is actually a parameter of the SURF detector (ORB's first constructor argument is the maximum number of features to retain).
Anyway, that is not the problem with your code. Try to load your images like the example you are following:
Mat src = imread("src.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat templ = imread("logo.jpg", CV_LOAD_IMAGE_GRAYSCALE );
Then detect the keypoints:
detector.detect(src, keypoints_1);
detector.detect(templ, keypoints_2);
Now check that keypoints_1 and keypoints_2 are not empty. If keypoints were found, go on to the descriptor extraction; it should work. A minimal sketch of the corrected flow follows below.
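This is only a sketch, assuming the OpenCV 2.x C++ API, with ORB's nfeatures parameter in place of the misused minHessian:
Mat src = imread("src.jpg", CV_LOAD_IMAGE_GRAYSCALE);
Mat templ = imread("logo.jpg", CV_LOAD_IMAGE_GRAYSCALE);
int nfeatures = 500;                      // ORB: maximum number of features to retain
OrbFeatureDetector detector(nfeatures);
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect(src, keypoints_1);
detector.detect(templ, keypoints_2);
if (keypoints_1.empty() || keypoints_2.empty())
{
    std::cerr << "No keypoints detected - check the input images" << std::endl;
    return -1;                            // assumes this runs inside main()
}
OrbDescriptorExtractor extractor;
Mat descriptors_1, descriptors_2;
extractor.compute(src, keypoints_1, descriptors_1);
extractor.compute(templ, keypoints_2, descriptors_2);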
Hope this helps

Can I specify the number of FAST keypoints I get when using the OpenCV FastFeatureDetector?

I am using the OpenCV FastFeatureDetector to extract FAST keypoints from an image, but the number of keypoints it detects is not constant.
I want to set a maximum number of keypoints for FastFeatureDetector to return.
Can I specify the number of FAST keypoints I get when using the OpenCV FastFeatureDetector? How?
I recently had this problem, and after a brief search I found the DynamicAdaptedFeatureDetector, which iteratively adjusts the detector until the desired number of keypoints is found.
check: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html#dynamicadaptedfeaturedetector
code:
int minKeypoints = 400, maxKeypoints = 500;   // desired keypoint range (example values)
Ptr<FastAdjuster> adjust = new FastAdjuster();
Ptr<FeatureDetector> detector = new DynamicAdaptedFeatureDetector(adjust, minKeypoints, maxKeypoints, 100);
vector<KeyPoint> keypoints;
detector->detect(image, keypoints);
Here is the main part of another approach; this way you can cap the number of keypoints at the value you expect. Good luck.
#define MAX_FEATURE 500   // specify the maximum expected number of features
string detectorType = "FAST";
string descriptorType = "SIFT";
Ptr<FeatureDetector> detector = FeatureDetector::create(detectorType);
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create(descriptorType);
Mat descriptors;
vector<KeyPoint> keypoints;
detector->detect(img, keypoints);
if( keypoints.size() > MAX_FEATURE )
{
    cout << " [INFO] key-point 1 size: " << keypoints.size() << endl;
    KeyPointsFilter::retainBest(keypoints, MAX_FEATURE);
}
cout << " [INFO] key-point 2 size: " << keypoints.size() << endl;
extractor->compute(img, keypoints, descriptors);
The other solution is to detect as many keypoints as possible with a low threshold and apply adaptive non-maximal suppression (ANMS) as described in this paper. You can specify the number of points you need, and additionally, for free, you get points that are homogeneously distributed across the image. Code can be found here; a naive sketch of the idea is shown below.
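For illustration only, a naive O(n²) sketch of the ANMS idea (not the authors' code): keep the N keypoints whose distance to the nearest stronger keypoint is largest.
#include <opencv2/features2d/features2d.hpp>
#include <algorithm>
#include <limits>
#include <vector>
// Naive adaptive non-maximal suppression: for every keypoint compute the suppression
// radius (distance to the closest keypoint with a stronger response) and keep the
// numToKeep keypoints with the largest radii.
std::vector<cv::KeyPoint> anmsSketch(std::vector<cv::KeyPoint> kps, size_t numToKeep)
{
    if (kps.size() <= numToKeep)
        return kps;
    // strongest responses first
    std::sort(kps.begin(), kps.end(),
              [](const cv::KeyPoint& a, const cv::KeyPoint& b) { return a.response > b.response; });
    std::vector<std::pair<float, size_t> > radii;   // (suppression radius, original index)
    for (size_t i = 0; i < kps.size(); ++i)
    {
        float r = std::numeric_limits<float>::max();
        for (size_t j = 0; j < i; ++j)              // only stronger keypoints can suppress
        {
            float dx = kps[i].pt.x - kps[j].pt.x;
            float dy = kps[i].pt.y - kps[j].pt.y;
            r = std::min(r, dx * dx + dy * dy);     // squared distance is enough for ranking
        }
        radii.push_back(std::make_pair(r, i));
    }
    std::sort(radii.rbegin(), radii.rend());        // largest suppression radius first
    std::vector<cv::KeyPoint> kept;
    for (size_t k = 0; k < numToKeep; ++k)
        kept.push_back(kps[radii[k].second]);
    return kept;
}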

Convert OpenCV's SURF descriptor to a matrix

I'm new to C and OpenCV. I want to get the SURF descriptor's data matrix.
double tt = (double)cvGetTickCount();
cvExtractSURF( object, 0, &objectKeypoints, &objectDescriptors, storage, params );
printf("Object Descriptors: %d\n", objectDescriptors->total);
If I use cvSave(fileName, objectDescriptors) then I can get the XML file. My question is: how can I get just the matrix of descriptor data from objectDescriptors? For example, if there are 45 keypoints, the matrix would be A = matrix[45][64].
How can I get A directly from objectDescriptors?
How can I get A from the XML file?
You can use the newer OpenCV C++ API, SurfFeatureDetector. It saves keypoints directly into a vector<KeyPoint>.
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints;
detector.detect( img, keypoints);
Check out cv::KeyPoint Class Reference.
Check out [1] and [2] for real examples.
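To get the descriptor matrix A itself, a hedged sketch with the same C++ API (using SurfDescriptorExtractor, which goes beyond the answer above): compute the descriptors into a cv::Mat, which already has one row per keypoint and 64 columns (128 with extended SURF).
int minHessian = 400;
SurfFeatureDetector detector( minHessian );
std::vector<KeyPoint> keypoints;
detector.detect( img, keypoints );
SurfDescriptorExtractor extractor;
Mat descriptors;                              // this is A: keypoints.size() rows x 64 columns
extractor.compute( img, keypoints, descriptors );
float a00 = descriptors.at<float>(0, 0);      // element A[0][0]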

converting cvmat to iplimage

How can I convert a cvMat matrix to IplImage that can be saved using cvSaveImage, in C using OpenCV?
I learned about the function cvGetImage(const CvArr* arr, IplImage* imageHeader). I understand that arr stands for the CvMat array, but I could not really understand what the 'image header' actually is. Is that the pointer that stores the image? That is, would the following work?
clusters = cvCreateMat( image2_size, 1, CV_32SC1 );
IplImage *kmeans;
cvGetImage(clusters, &kmeans);
cvSaveImage("kmeans.jpg", kmeans);
//clusters is the output matrix after performing k-means clustering
// on a certain image.
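As a hedged sketch of the usual cvGetImage pattern: the image header is a caller-allocated IplImage struct that cvGetImage fills in and returns a pointer to, without copying the pixel data. Note that cvSaveImage expects 8-bit data, so a CV_32SC1 cluster matrix from k-means would have to be converted first; the matrix below is just an illustrative 8-bit example.
CvMat* mat = cvCreateMat(480, 640, CV_8UC1);   // example 8-bit matrix
cvSet(mat, cvScalar(128));                     // fill with something visible
IplImage header;                               // caller-provided header storage
IplImage* img = cvGetImage(mat, &header);      // header is filled in; img points to it
cvSaveImage("kmeans.jpg", img);
cvReleaseMat(&mat);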
