I want to get the SIFT features for specified points. These points are chosen by hand, not found by a keypoint detector. My question is: I only know the positions of the points but have no idea about the size and angle values. How should I set them?
Here is my code:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>    // imread
#include <opencv2/nonfree/features2d.hpp> // SIFT lives in nonfree in OpenCV 2.x

using namespace cv;

int main()
{
    Mat img_object = imread("img/test.jpg", 0); // load as grayscale
    SiftDescriptorExtractor extractor;
    Mat descriptors;
    std::vector<KeyPoint> keypoints;
    // set keypoint position and size: should I set the
    // size parameter to 32 for a 32x32 patch?
    KeyPoint kp(50, 60, 32);
    keypoints.push_back(kp);
    extractor.compute(img_object, keypoints, descriptors);
    return 0;
}
Should I set the size parameter of KeyPoint to 32 for a 32x32 patch? Is this implementation reasonable?
Usually, keypoint detectors work on a local neighbourhood around a point; the diameter of that neighbourhood is the size field of OpenCV's KeyPoint class. The angle field is the dominant orientation of the keypoint, and note that it can be left at -1 if the orientation has not been computed.
See the OpenCV KeyPoint class documentation. Another reference is here.
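For reference, here is a minimal sketch of the relevant OpenCV 2.x constructor with the two fields the answer mentions (the values simply mirror the question's 32x32 patch):
// KeyPoint(float x, float y, float size, float angle = -1,
//          float response = 0, int octave = 0, int class_id = -1)
// size:  diameter of the meaningful neighbourhood around the point
// angle: dominant orientation in degrees, or -1 if not computed
KeyPoint kp(50.f, 60.f, 32.f); // 32-pixel neighbourhood, angle left at -1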
I'm reading this paper to obtain better VLAD descriptors. The main difference is the use of the so-called CSURF, which is (quoting the paper):
In order to extract CSURF, the image is first transformed to grayscale and interest points are computed using the standard SURF algorithm. Then, instead of computing the SURF descriptor of each interest point on the intensity channel, CSURF computes three SURF descriptors, one on each color band.
How could I implement this in OpenCV? All the descriptors that I've seen so far (SIFT, SURF, etc.) are computed on the grayscale image; how could I use SURF to describe keypoints based on one color band (red, green, or blue)?
Update: is this solution correct?
Mat img = imread(src, CV_LOAD_IMAGE_COLOR);
Mat gray;
cvtColor(img, gray, CV_BGR2GRAY);
int minHessian = 400;
Ptr<SURF> detector = SURF::create(minHessian);
std::vector<KeyPoint> keypoints;
// detect keypoints once, on the grayscale image
detector->detect(gray, keypoints);
vector<Mat> spl;
Mat blueDesc, greenDesc, redDesc;
split(img, spl); // OpenCV stores channels in B, G, R order
// compute one descriptor matrix per color band at the same keypoints
// (useProvidedKeypoints = true skips re-detection)
detector->detectAndCompute(spl[0], Mat(), keypoints, blueDesc, true);
detector->detectAndCompute(spl[1], Mat(), keypoints, greenDesc, true);
detector->detectAndCompute(spl[2], Mat(), keypoints, redDesc, true);
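One thing I'm not sure about is whether CSURF concatenates the three per-band descriptors or keeps them separate; the paper would have to confirm. If it concatenates them, hconcat could merge the three matrices:
// assumption: the per-band descriptors are concatenated row-wise;
// each input is N x 64, so the result is N x 192
vector<Mat> bands;
bands.push_back(blueDesc);
bands.push_back(greenDesc);
bands.push_back(redDesc);
Mat csurfDesc;
hconcat(bands, csurfDesc);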
I'm new to OpenCV and its development. I'm using SIFT keypoints to match two images, using the code below, but it produces some wrong matches.
I think that if I changed the size of the keypoints I would get more correct matches; the current keypoint size is not adequate. Please help me improve the size of the keypoints (see the sketch after the code below).
// Step 1: detect keypoints (the factory returns a smart pointer, hence ->)
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SIFT");
featureDetector->detect(input_right, keypoints_right);
featureDetector->detect(input_left, keypoints_left);
//-- Step 2: calculate descriptors (feature vectors)
Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("SIFT");
featureExtractor->compute(input_right, keypoints_right, descriptor_right);
featureExtractor->compute(input_left, keypoints_left, descriptor_left);
// show features
// check detected keypoints right
Mat outputImageright;
Scalar keypointColor = Scalar(255, 0, 0); // Blue keypoints.
drawKeypoints(input_right, keypoints_right, outputImageright, keypointColor, DrawMatchesFlags::DEFAULT);
namedWindow("Right View");
imshow("Right View", outputImageright);
Mat outputImageleft;
Scalar keypointColorred = Scalar(0, 0, 255); // Red keypoints.
drawKeypoints(input_left, keypoints_left, outputImageleft, keypointColorred, DrawMatchesFlags::DEFAULT);
namedWindow("Left View");
imshow("Left View", outputImageleft);
For the last few days I've been looking for a good and quick method for finding a quadrilateral shape in an image.
For example, take a look at the attached image.
I want to find the edges of the white screen part (the TV screen in this case).
I can replace the white canvas with whatever I want, e.g. a QR code, some texture, etc.; I'm just looking for the coordinates of that shape.
Other features of the shape:
Only one shape should be detected.
Perspective transform should be used.
The language is not that important, but I want to use OpenCV for this.
These are good algorithms that have been implemented in OpenCV:
Harris corner detector as GoodFeaturesToTrackDetector
GoodFeaturesToTrackDetector harris_detector (1000, 0.01, 10, 3, true);
vector<KeyPoint> keypoints;
cvtColor (image, gray_image, CV_BGR2GRAY);
harris_detector.detect (gray_image, keypoints);
Fast corner detector as FeatureDetector::create("FAST") and FASTX
Ptr<FeatureDetector> feature_detector = FeatureDetector::create("FAST");
vector<KeyPoint> keypoints;
cvtColor (image, gray_image, CV_BGR2GRAY);
feature_detector->detect (gray_image, keypoints);
Or
FASTX (gray_image, keypoints, 50, true, FastFeatureDetector::TYPE_9_16);
SIFT (Scale Invariant Feature Transform) as FeatureDetector::create("SIFT")
Ptr<FeatureDetector> feature_detector = FeatureDetector::create("SIFT");
vector<KeyPoint> keypoints;
cvtColor (image, gray_image, CV_BGR2GRAY);
feature_detector->detect (gray_image, keypoints);
Update for perspective transform (you must know the 4 point correspondences beforehand):
Point2f source[4], destination[4];
// Assign values to the source and destination points.
Mat perspective_matrix = getPerspectiveTransform(source, destination);
warpPerspective(image, result, perspective_matrix, result.size());
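For instance, with made-up corner coordinates (in practice the source points come from however you locate the screen, ordered to match the destination corners):
// hypothetical detected screen corners: TL, TR, BL, BR
Point2f source[4] = { Point2f(56, 65), Point2f(368, 52),
                      Point2f(28, 387), Point2f(389, 390) };
// map them to an upright 420x240 rectangle
Point2f destination[4] = { Point2f(0, 0), Point2f(420, 0),
                           Point2f(0, 240), Point2f(420, 240) };
Mat result(240, 420, image.type());
Mat perspective_matrix = getPerspectiveTransform(source, destination);
warpPerspective(image, result, perspective_matrix, result.size());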
I've been working on detecting fiducial markers in scenes. An example of my fiducial marker is here:
http://tinypic.com/view.php?pic=4r6k3q&s=8#.VNgsgzVVK1F
I have been able to detect a single fiducial marker in a scene very well. What is the methodology for detecting multiple fiducial markers in a scene? Feature detection, extraction, and matching works well for finding a single match, but it seems to be the wrong method for detecting multiple matches, since it would be difficult to determine which features belong to which marker.
The fiducial markers would all be the same, and would not be in known locations in the scene.
Update:
Below is some sample code. I was trying to match the first fiducial marker using the best x matches, and then use the remaining matches to find the second marker. However, this is not robust at all. Does anybody have any suggestions?
OrbFeatureDetector detector;
vector<KeyPoint> keypoints1, keypoints2;
detector.detect(im1, keypoints1);
detector.detect(im2, keypoints2);
Mat display_im1, display_im2;
drawKeypoints(im1, keypoints1, display_im1, Scalar(0,0,255));
drawKeypoints(im2, keypoints2, display_im2, Scalar(0,0,255));
SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute( im1, keypoints1, descriptors1 );
extractor.compute( im2, keypoints2, descriptors2 );
BFMatcher matcher;
vector<DMatch> matches1, matches2;
matcher.match(descriptors1, descriptors2, matches1);
// DMatch compares by distance, so sorting puts the best matches first
sort(matches1.begin(), matches1.end());
matches2 = matches1;
// keep the best 50 matches for the first marker and the rest for the second
int numElementsToSave = 50;
matches1.erase(matches1.begin() + numElementsToSave, matches1.end());
matches2.erase(matches2.begin(), matches2.begin() + numElementsToSave);
Mat match_im1, match_im2;
drawMatches( im1, keypoints1, im2, keypoints2,
matches1, match_im1, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
drawMatches( im1, keypoints1, im2, keypoints2,
matches2, match_im2, Scalar::all(-1), Scalar::all(-1),
vector<char>(), DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
I have never tried it myself, but here is a good explanation of how to detect multiple occurrences:
This tutorial shows how to enable multiple detections of the same object. To enable multiple detection, the parameter General->multiDetection should be checked. The approach is as follows:
As usual, we match all features between the objects and the scene. For an object that appears in the scene twice (or more), it should have twice as many matched features. We apply a RANSAC algorithm to find a homography; the inliers should belong to only one occurrence of the object, and all others are considered outliers. We redo the homography process on the outliers, then find another homography, and so on, until no homography can be computed.
It may happen that a homography is found superposed on a previous one using the outliers. You could set Homography->ransacReprojThr (in pixels) higher to accept more inliers in the computed homographies, which would decrease the chance of superposed detections. Another way is to ignore superposed homographies within a specified radius via the parameter General->multiDetectionRadius (in pixels).
For more information see the page below:
https://code.google.com/p/find-object/wiki/MultiDetection
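I haven't used find-object myself, but the loop it describes maps onto plain OpenCV calls. A minimal sketch, assuming allMatches is the full (untruncated) output of matcher.match from the question's code, keypoints1/keypoints2 are as above, and the minimum of 10 inliers per occurrence is a made-up threshold:
// fit a homography, peel off its inliers as one occurrence,
// then redo the process on the outliers until nothing fits
std::vector<DMatch> remaining = allMatches;
while (remaining.size() >= 10)
{
    std::vector<Point2f> pts1, pts2;
    for (size_t i = 0; i < remaining.size(); ++i) {
        pts1.push_back(keypoints1[remaining[i].queryIdx].pt);
        pts2.push_back(keypoints2[remaining[i].trainIdx].pt);
    }
    std::vector<uchar> inlierMask;
    Mat H = findHomography(pts1, pts2, RANSAC, 3.0, inlierMask);
    if (H.empty() || countNonZero(inlierMask) < 10)
        break; // no further reliable occurrence
    // ... H now describes one occurrence of the marker ...
    std::vector<DMatch> outliers;
    for (size_t i = 0; i < remaining.size(); ++i)
        if (!inlierMask[i])
            outliers.push_back(remaining[i]);
    remaining = outliers;
}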
I developed a semi-automatic algorithm to detect multiple markers (interest points) in an image using the findContours method on a binary image (my markers are white on a green surface). I then limit my search with an area constraint, since I know how big each marker is in each frame. Of course this produced some false positives, but it was good enough. I couldn't see the picture in your post, as tinypic is blocked here for some reason, but you can use the matchShapes OpenCV function to eliminate the bad contours.
Here is the part of the code I wrote for this:
Mat tempFrame;
cvtColor(BallFrame, tempFrame, COLOR_BGR2GRAY);
GaussianBlur(tempFrame, tempFrame, Size(15, 15), 2, 2); // remove noise
Mat imBw;
threshold(tempFrame, imBw, 220, 255, THRESH_BINARY); // High threshold to get better results
std::vector<std::vector<Point> > contours;
std::vector<Vec4i> hierarchy;
findContours(imBw, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
Point2f center;
float radius = 0.0;
for (size_t i = 0; i < contours.size(); i++)
{
double area = contourArea(contours[i]);
if (area > 1 && area < 4000) {
minEnclosingCircle(contours[i], center, radius);
if (radius < 50) // reject long, narrow contours (small area but large enclosing circle)
{
// You can use `matchShapes` here to match your marker shape
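// A sketch of that idea; templateContour is assumed to be a reference
// outline of the marker extracted once beforehand from a clean image,
// and the 0.1 threshold is a guess to tune (matchShapes returns 0 for
// identical shapes, so lower means more similar):
double score = matchShapes(contours[i], templateContour, CV_CONTOURS_MATCH_I1, 0.0);
if (score < 0.1)
{
// accept contours[i] as a marker
}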
}
}
}
I hope this helps.
I would like to use SurfFeatureDetector to detect keypoints in a specified area of a picture:
Train_pic & Source_pic
Detect Train_pic keypoint_1 using SurfFeatureDetector.
Detect Source_pic keypoint_2 using SurfFeatureDetector in the specified area.
Compute and match.
The OpenCV FeatureDetector::detect interface is shown below.
void FeatureDetector::detect(const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask=Mat())
mask – Mask specifying where to look for keypoints (optional). Must be a char matrix with non-zero values in the region of interest.
Can anyone explain how to create the mask (the mask=Mat() argument) for Source_pic?
Thanks
Jay
You don't technically have to specify the empty matrix to use the detect function as it is the default parameter.
You can call detect like this:
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
vector<KeyPoint> keyPoints;
detector->detect(anImage, keyPoints);
Or, by explicitly creating the empty matrix:
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
vector<KeyPoint> keyPoints;
detector->detect(anImage, keyPoints, Mat());
If you want to create a mask over a region of interest, you could create one like this:
Assuming Source_pic is of type CV_8UC3, note that the mask itself must be a single-channel 8-bit matrix (per the documentation quoted above):
Mat mask = Mat::zeros(Source_pic.size(), CV_8UC1);
// select a ROI inside the mask
Mat roi(mask, Rect(10, 10, 100, 100));
// fill the ROI with 255 (non-zero means "look for keypoints here");
// only the mask is modified, not Source_pic
roi = Scalar(255);
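Then pass the mask as the optional third argument, restricting detection to the white region:
detector->detect(Source_pic, keyPoints, mask);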
EDIT : Had a copy-pasta error in there. Set the ROI for the mask, and then pass that to the detect function.
Hope that clears things up!