Can I specify the number of FAST keypoints I get when using OpenCV FastFeatureDetector?

I am using OpenCV's FastFeatureDetector to extract FAST keypoints from an image, but the number of keypoints it detects is not constant from image to image. I want to set a maximum number of keypoints for FastFeatureDetector to return.
Can I specify the number of FAST keypoints I get when using the OpenCV FastFeatureDetector? If so, how?

I recently had this problem, and after a brief search I found DynamicAdaptedFeatureDetector, which iteratively adjusts the detector until the desired number of keypoints is found.
See: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html#dynamicadaptedfeaturedetector
code:
int minKeypoints = 400, maxKeypoints = 500; // example bounds (the documented defaults); pick your own
Ptr<FastAdjuster> adjust = new FastAdjuster();
Ptr<FeatureDetector> detector = new DynamicAdaptedFeatureDetector(adjust, minKeypoints, maxKeypoints, 100); // at most 100 adjustment iterations
vector<KeyPoint> keypoints;
detector->detect(image, keypoints);
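Note: DynamicAdaptedFeatureDetector and FastAdjuster are OpenCV 2.4-era classes; as far as I know they were removed in OpenCV 3.x. On 3.x the usual approach is to detect with a permissive threshold and trim the list yourself, as in the next answer.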

I am offering the main part of the code; with it you can cap the number of keypoints at whatever you expect. Good luck.
#define MAX_FEATURE 500 // maximum expected number of features
string detectorType = "FAST";
string descriptorType = "SIFT";
Ptr<FeatureDetector> detector = FeatureDetector::create(detectorType);
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create(descriptorType);
Mat descriptors;
vector<KeyPoint> keypoints;
detector->detect(img, keypoints);
if (keypoints.size() > MAX_FEATURE)
{
    cout << " [INFO] keypoint count before filtering: " << keypoints.size() << endl;
    // keep only the MAX_FEATURE keypoints with the strongest responses
    KeyPointsFilter::retainBest(keypoints, MAX_FEATURE);
}
cout << " [INFO] keypoint count after filtering: " << keypoints.size() << endl;
extractor->compute(img, keypoints, descriptors);
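Note that KeyPointsFilter::retainBest() keeps the keypoints with the strongest responses, so the survivors can still bunch together in highly textured regions; the ANMS answer below addresses exactly that.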

The other solution is to detect as many keypoints as possible with a low threshold and then apply the adaptive non-maximal suppression (ANMS) described in this paper. You can specify the number of points you need, and, for free, you get your points homogeneously distributed over the image. Code can be found here.
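For reference, here is a minimal O(n^2) sketch of the ANMS idea (my own sketch, not the paper authors' code): each keypoint gets a suppression radius equal to its distance to the nearest significantly stronger keypoint, and the n keypoints with the largest radii are kept. The robustness factor c = 0.9 follows the paper; treat everything else as an assumption to tune.
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <limits>
#include <vector>
using namespace cv;
using namespace std;

// Keep the n keypoints whose distance to the nearest significantly
// stronger keypoint is largest (brute-force ANMS).
vector<KeyPoint> anms(vector<KeyPoint> kps, size_t n, float c = 0.9f)
{
    // strongest first, so only earlier points can suppress later ones
    sort(kps.begin(), kps.end(),
         [](const KeyPoint& a, const KeyPoint& b) { return a.response > b.response; });
    vector<pair<float, size_t> > radii(kps.size());
    for (size_t i = 0; i < kps.size(); ++i) {
        float r2 = numeric_limits<float>::max(); // squared suppression radius
        for (size_t j = 0; j < i; ++j) {
            if (kps[j].response * c > kps[i].response) {
                Point2f d = kps[j].pt - kps[i].pt;
                r2 = min(r2, d.x * d.x + d.y * d.y);
            }
        }
        radii[i] = make_pair(r2, i);
    }
    sort(radii.rbegin(), radii.rend()); // largest suppression radius first
    vector<KeyPoint> kept;
    for (size_t k = 0; k < min(n, radii.size()); ++k)
        kept.push_back(kps[radii[k].second]);
    return kept;
}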

Related

Insufficient Memory Error: Bag of Words OpenCV 2.4.6 Visual Studio 2010

I am implementing the bag-of-words model using SURF and SIFT features and an SVM classifier. I want to train on 80% of the 2876 images and test on the remaining 20%. I have set dictionarySize to 1000. My computer configuration is an Intel Xeon (2 processors), 32 GB RAM, 500 GB HDD. Images are read whenever necessary instead of being stored, like this:
const char separator = ';'; // assumed CSV separator (not shown in the original snippet)
std::ifstream file("C:\\testFiles\\caltech4\\train0.csv", ifstream::in);
if (!file)
{
    string error_message = "No valid input file was given, please check the given filename.";
    CV_Error(CV_StsBadArg, error_message);
}
string line, path, classlabel;
printf("\nReading Training images................\n");
while (getline(file, line))
{
    stringstream liness(line);
    getline(liness, path, separator);
    getline(liness, classlabel);
    if (!path.empty())
    {
        Mat image = imread(path, 0); // 0 = load as grayscale
        cout << " " << path << "\n";
        // detector, keypoints1, descriptor1, featuresUnclustered are declared elsewhere
        detector.detect(image, keypoints1);
        detector.compute(image, keypoints1, descriptor1);
        featuresUnclustered.push_back(descriptor1);
    }
}
Here, train0.csv contains the paths to the images together with their labels. It stops inside this loop while reading the images, computing the descriptors, and adding them to the features to be clustered. The following error appears on the console:
In the code below, I resized the images being read to 256*256, which reduces the memory requirement. Ergo, the error disappeared.
Mat image = imread(path, 0);
resize(image,image,Size(256,256));
cout << " " << path << "\n";
detector.detect(image, keypoints1);
detector.compute(image, keypoints1,descriptor1);
featuresUnclustered.push_back(descriptor1);
But the error might reappear with a bigger dataset.
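One more hedged suggestion for that loop: if any path in the CSV is wrong, imread returns an empty Mat, and the subsequent resize/detect calls will throw, which can be mistaken for a memory problem. A cheap guard (a sketch using the same variable names as above):
Mat image = imread(path, 0); // 0 = grayscale
if (image.empty())
{
    cerr << "skipping unreadable image: " << path << endl;
    continue;
}
resize(image, image, Size(256, 256));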

How can I remove sinusoidal noise with frequency-domain filtering in OpenCV?

I'm trying to remove the five staff lines per section in sheet music. My original image is this: http://en.wikipedia.org/wiki/Requiem_(Mozart)#/media/File:K626_Requiem_Mozart.jpg
First, I apply a Gaussian filter and binarize with a threshold (min: 100, max: 255). Then I apply the DFT to this image, want to erase the appropriate lines, and reconstruct the image with the inverse DFT.
I used the sample code in the OpenCV documentation; honestly, I doubt that I understand this code. :(
http://docs.opencv.org/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html
In this sample code there are two Mats: 'complexI' holds the spectrum, and 'magI' is the visualization; the result of cv::dft is complexI, and magI is the normalized magnitude of complexI. My question is this: how can I add a black line (to cancel frequencies in the frequency domain) and reconstruct the image?
OpenCV (now) provides a detailed tutorial on how to deal with periodic noise by spectral filtering: https://docs.opencv.org/trunk/d2/d0b/tutorial_periodic_noise_removing_filter.html
It hinges on using cv::dft(), cv::idft(), cv::mulSpectrums(), and cv::magnitude().
The core function (from the tutorial) to perform the filtering goes like so:
void filter2DFreq(const Mat& inputImg, Mat& outputImg, const Mat& H)
{
    Mat planes[2] = { Mat_<float>(inputImg.clone()), Mat::zeros(inputImg.size(), CV_32F) };
    Mat complexI;
    merge(planes, 2, complexI);
    // forward DFT of the image
    dft(complexI, complexI, DFT_SCALE);

    Mat planesH[2] = { Mat_<float>(H.clone()), Mat::zeros(H.size(), CV_32F) };
    Mat complexH;
    merge(planesH, 2, complexH);

    // apply the spectral filter (per-element complex multiplication)
    Mat complexIH;
    mulSpectrums(complexI, complexH, complexIH, 0);

    // reconstruct the filtered image with the inverse DFT
    idft(complexIH, complexIH);
    split(complexIH, planes);
    outputImg = planes[0];
}
Refer to the tutorial for more information.
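To make the question's "add a black line" step concrete, here is a sketch (not the tutorial's exact synthesizeFilterH) of building a notch-reject H and feeding it to filter2DFreq. The file name and spike coordinates are hypothetical placeholders; read the real spike locations off your image's power spectrum, and note that the DFT output here is unshifted, so the conjugate-symmetric mirror location has to be zeroed too.
Mat img = imread("score.png", IMREAD_GRAYSCALE); // hypothetical file name
img = img(Rect(0, 0, img.cols & -2, img.rows & -2)).clone(); // even size, as in the tutorial
img.convertTo(img, CV_32F);
Mat H = Mat::ones(img.size(), CV_32F);
Point spike(60, 80); // hypothetical noise frequency; read yours off the PSD
int radius = 5;      // notch radius
circle(H, spike, radius, Scalar(0), -1);
circle(H, Point(img.cols - spike.x, img.rows - spike.y), radius, Scalar(0), -1); // conjugate mirror
Mat filtered;
filter2DFreq(img, filtered, H);
normalize(filtered, filtered, 0, 255, NORM_MINMAX);
filtered.convertTo(filtered, CV_8U);
imwrite("filtered.png", filtered);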

Using OpenCV to recognise similar (not completely identical) simple images?

Say I have a very simple image or shape such as this stick man drawing:
I also have a library of other simple images which I want to compare the first image to and determine the closest match:
Notice that the two stick men are not completely identical but are reasonably similar.
I want to be able to compare the first image to each image in my library until a reasonably close match is found. If necessary, my image library could contain numerous variations of the same image in order to help decide which type of image I have. For example:
My question is whether this is something that OpenCV would be capable of? Has it been done before, and if so, can you point me in the direction of some examples? Many thanks for your help.
Edit: Through my searches I have found many examples of people comparing images, or even comparing images that have been stretched or skewed, such as this: Checking images for similarity with OpenCV. Unfortunately, as you can see, my images are not just translated (rotated/skewed/stretched) versions of one another; they are actually different images, although very similar.
You should be able to do this using OpenCV's template matching: use the matchTemplate function to look for the feature and then minMaxLoc to find its location. Check out the matchTemplate tutorial on the OpenCV website.
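A minimal sketch of that suggestion (the file names are placeholders); note that for TM_CCOEFF_NORMED the best match is the maximum of the result map:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    Mat scene = imread("scene.png", IMREAD_GRAYSCALE);    // hypothetical file names
    Mat templ = imread("template.png", IMREAD_GRAYSCALE);
    Mat result;
    matchTemplate(scene, templ, result, TM_CCOEFF_NORMED);
    double minVal, maxVal;
    Point minLoc, maxLoc;
    minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
    std::cout << "best score " << maxVal << " at " << maxLoc << std::endl;
    return 0;
}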
It seems you need feature point detection and matching. Check these docs from OpenCV:
http://docs.opencv.org/doc/tutorials/features2d/feature_detection/feature_detection.html
http://docs.opencv.org/doc/tutorials/features2d/feature_flann_matcher/feature_flann_matcher.html
For your particular type of images, you might get good results by using moments/HuMoments for the connected components (which you can find with findContours).
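To make the moments suggestion concrete, here is a hedged sketch using matchShapes, which compares Hu moments internally; the file names and the threshold value are assumptions, and the constants are OpenCV 3.x style:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace cv;
using namespace std;

// Return the largest external contour of a binarized drawing.
static vector<Point> largestContour(const Mat& gray)
{
    Mat bin;
    threshold(gray, bin, 128, 255, THRESH_BINARY_INV); // dark strokes on light paper
    vector<vector<Point> > contours;
    findContours(bin, contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    if (contours.empty())
        return vector<Point>();
    size_t best = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (contourArea(contours[i]) > contourArea(contours[best]))
            best = i;
    return contours[best];
}

int main()
{
    Mat query = imread("query.png", IMREAD_GRAYSCALE);        // hypothetical file names
    Mat candidate = imread("library0.png", IMREAD_GRAYSCALE);
    double d = matchShapes(largestContour(query), largestContour(candidate),
                           CONTOURS_MATCH_I1, 0); // lower = more similar
    cout << "Hu-moment distance: " << d << endl;
    return 0;
}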
Since there is a rotation involved, I don't think template matching would work well. You probably need feature point detection such as SIFT or SURF.
EDIT: That was a failed solution: it won't work with the rotation here, and the same goes for matchTemplate. I have yet to try findContours + moments as in bjoernz's answer, which sounds promising.
What did work: I tried ShapeContextDistanceExtractor (available in OpenCV 3.0) together with findContours on your sample images and got good results. The sample images were cropped to the same size as the original image (128*200); you could just as well use resize in OpenCV.
The code below compares the images in the images folder against 1.png as the base image.
#include "opencv2/shape.hpp"
#include "opencv2/opencv.hpp"
#include <iostream>
#include <string>
using namespace std;
using namespace cv;
const int MAX_SHAPES = 7;
vector<Point> findContours( const Mat& compareToImg )
{
vector<vector<Point> > contour2D;
findContours(compareToImg, contour2D, RETR_LIST, CHAIN_APPROX_NONE);
//converting 2d vector contours to 1D vector for comparison
vector <Point> contour1D;
for (size_t border=0; border < contour2D.size(); border++) {
for (size_t p=0; p < contour2D[border].size(); p++) {
contour1D.push_back( contour2D[border][p] );
}
}
//limiting contours size to reduce distance comparison time
contour1D.resize( 300 );
return contour1D;
}
int main()
{
string path = "./images/";
cv::Ptr <cv::ShapeContextDistanceExtractor> distanceExtractor = cv::createShapeContextDistanceExtractor();
//base image
Mat baseImage= imread( path + "1.png", IMREAD_GRAYSCALE);
vector<Point> baseImageContours= findContours( baseImage );
for ( int idx = 2; idx <= MAX_SHAPES; ++idx ) {
stringstream imgName;
imgName << path << idx << ".png";
Mat compareToImg=imread( imgName.str(), IMREAD_GRAYSCALE ) ;
vector<Point> contii = findContours( compareToImg );
float distance = distanceExtractor->computeDistance( baseImageContours, contii );
std::cout<<" distance to " << idx << " : " << distance << std::endl;
}
return 0;
}
Result
distance to 2 : 89.7951
distance to 3 : 14.6793
distance to 4 : 6.0063
distance to 5 : 4.79834
distance to 6 : 0.0963184
distance to 7 : 0.00212693
Do three things:
1. Forget about image comparison, since you are really comparing stroke symbols.
2. Download and play with the Gesture Search app from the Google Play store.
3. Realize that for good performance you cannot recognize your strokes without using timestamp information about how the stroke was drawn; otherwise we would already have successful handwriting recognition. Then you can research Android stroke-recognition libraries to write your code properly.

OpenCV FaceRecognizer wrong shapes for given matrices

I'm trying to make a FisherFaceRecognizer's predict() method work, but I keep getting an error
Bad argument (Wrong shapes for given matrices. Was size(src) =
(1,108000), size(W) = (36000,1).) in subspaceProject, file
/tmp/opencv-DCb7/OpenCV-2.4.3/modules/contrib/src/lda.cpp, line 187
This is similar to a question that was asked at Wrong shapes for given matrices in OPENCV
but in my case, both source and training images are the same data type, full color.
My code is adapted from the tutorial at http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html#fisherfaces; however, my test image is larger than the training images, so I needed to work on a region of interest (ROI) of the right size.
Here's how I read the images and converted sizes. I cloned the ROI matrix because an
earlier error message told me the target matrix must be contiguous:
vector<Mat> images;
// (inside a loop over the training file list)
images.push_back(cvLoadImage(trainingList[i].c_str()));
IplImage* img;
img = cvLoadImage(imgName.c_str());
// take the ROI and clone it into a new, contiguous Mat
Mat testSample1(img, Rect(xLoc, yLoc, images[0].cols, images[0].rows));
Mat testSample = testSample1.clone();
// create a FisherFaceRecognizer
Ptr<FaceRecognizer> model = createFisherFaceRecognizer(0, DBL_MAX);
model->train(images, labels);
cout << " check of data type: testSample is " << testSample.type() << ", images is " << images[0].type() << endl;
int predictedLabel = model->predict(testSample);
I get an exception at the predict statement.
The cout statement tells me both matrices have type 16, yet somehow predict still doesn't believe the matrices have the same size and data type...
You should check the shapes, not the types.
Try:
cout << testSample.rows << " " << testSample.cols << " " << images[0].rows << " " << images[0].cols;
Also ensure that both the training images and the test image are in the same color space. If not, try:
cvtColor(testSample, testSample_inSameSpaceOfTraining, CV_BGR2***); // OpenCV's default color order is BGR
I found out that the FisherFaceRecognizer requires grayscale images, so I should have loaded both training and test images like this:
trainingImages.push_back( imread( trainingList[i].c_str(), CV_LOAD_IMAGE_GRAYSCALE));
and
Mat img;
img = imread( imgName.c_str(), CV_LOAD_IMAGE_GRAYSCALE );
(I also reconciled the type of img for consistency.) The grayscale-only requirement is documented in the OpenCV reference manual (PDF available online), but apparently not in any of the online tutorials or other documents for FisherFaceRecognizer.

FernDescriptorMatcher - how to use it, and how does it work?

How do I use the FERN descriptor matcher in OpenCV? Does it take as input keypoints extracted by some algorithm (SIFT/SURF?), or does it compute everything by itself?
Edit:
I'm trying to apply it to a database of images:
fernmatcher->add(all_images, all_keypoints);
fernmatcher->train();
There are 20 images, less than 8 MB in total, and I extract keypoints using SURF. Memory usage jumps to 2.6 GB, and training takes who knows how long...
FERN is no different from the rest of the matchers. Here is sample code for using FERN as a keypoint descriptor matcher:
int octaves = 3;
int octaveLayers = 2;
bool upright = false;
double hessianThreshold = 0;
std::vector<KeyPoint> keypoints_1, keypoints_2;
SurfFeatureDetector detector1(hessianThreshold, octaves, octaveLayers, upright);
detector1.detect(image1, keypoints_1);
detector1.detect(image2, keypoints_2);
std::vector<DMatch> matches;
FernDescriptorMatcher matcher;
matcher.match(image1, keypoints_1, image2, keypoints_2, matches);
Mat img_matches;
drawMatches(image1, keypoints_1, image2, keypoints_2, matches, img_matches,
            Scalar::all(-1), Scalar::all(-1), vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS);
imshow("Fern Matches", img_matches);
waitKey(0);
But my suggestion is to use FAST, which is faster than FERN. Also, FERN can be used to train on a set of images with keypoints, and the trained FERN can then be used as a classifier, just like the others.
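A hedged sketch of the FAST route (OpenCV 2.4-style API, matching the code above): FAST only detects corners, so it needs to be paired with a descriptor; ORB and BFMatcher below are my assumptions, not part of the original answer.
FastFeatureDetector detector(40); // threshold of 40 is a guess; tune per image
std::vector<KeyPoint> keypoints_1, keypoints_2;
detector.detect(image1, keypoints_1);
detector.detect(image2, keypoints_2);
OrbDescriptorExtractor extractor; // assumption: ORB descriptors on FAST corners
Mat descriptors_1, descriptors_2;
extractor.compute(image1, keypoints_1, descriptors_1);
extractor.compute(image2, keypoints_2, descriptors_2);
BFMatcher matcher(NORM_HAMMING); // Hamming distance for binary ORB descriptors
std::vector<DMatch> matches;
matcher.match(descriptors_1, descriptors_2, matches);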
