Color band on SURF descriptors - opencv

I'm reading this paper about obtaining better VLAD descriptors. The main difference is the use of the so-called CSURF, which is (quoting the paper):
In order to extract CSURF, the image is first transformed to grayscale and interest points are computed using the standard SURF algorithm. Then, instead of computing the SURF descriptor of each interest point on the intensity channel, CSURF computes three SURF descriptors, one on each color band.
How could I implement this in OpenCV? All the descriptors that I've seen so far (SIFT, SURF, etc.) are computed on the grayscale image. How could I use SURF to describe keypoints based on one color band (red, green or blue)?
UPDATE: IS THIS SOLUTION CORRECT?
Mat img = imread( src, CV_LOAD_IMAGE_COLOR );
Mat gray;
cvtColor( img, gray, CV_BGR2GRAY );
int minHessian = 400;
Ptr<SURF> detector = SURF::create( minHessian );
// Detect interest points on the intensity (grayscale) channel
std::vector<KeyPoint> keypoints;
detector->detect( gray, keypoints );
// Split the color image into its B, G and R bands
vector<Mat> spl;
Mat blueDesc, greenDesc, redDesc;
split( img, spl );
// Reuse the grayscale keypoints (useProvidedKeypoints = true) and compute
// one SURF descriptor per color band
detector->detectAndCompute( spl[0], Mat(), keypoints, blueDesc, true );
detector->detectAndCompute( spl[1], Mat(), keypoints, greenDesc, true );
detector->detectAndCompute( spl[2], Mat(), keypoints, redDesc, true );
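If the three per-band descriptors are meant to act as a single CSURF descriptor per keypoint, one option (an assumption on my part, not something the quoted paper spells out) is to stack them side by side:
// Assumption: concatenate the B, G and R SURF descriptors so each keypoint
// gets one row with 3 x 64 (or 3 x 128) elements.
vector<Mat> bands;
bands.push_back( blueDesc );
bands.push_back( greenDesc );
bands.push_back( redDesc );
Mat csurfDesc;
hconcat( bands, csurfDesc );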

Related

The best and quickest method for detecting quadratic shape in image using OpenCV?

In the last few days I've been looking for a good and quick method for finding a quadratic shape in an image.
For example, take a look at the attached image.
I want to find the edges of white screen part (the TV screen in this case).
I can replace the white canvas with whatever I want, e.g. QR code, some texture, etc. - just looking for the coordinates of that shape.
Other features of the shape:
Only one shape should be detected.
Perspective transform should be used.
The language is not that important, but I want to use OpenCV for this.
These are good algorithms that have been implemented in OpenCV:
Harris corner detector as GoodFeaturesToTrackDetector
GoodFeaturesToTrackDetector harris_detector (1000, 0.01, 10, 3, true);
vector<KeyPoint> keypoints;
cvtColor (image, gray_image, CV_BGR2GRAY);
harris_detector.detect (gray_image, keypoints);
Fast corner detector as FeatureDetector::create("FAST") and FASTX
Ptr<FeatureDetector> feature_detector = FeatureDetector::create("FAST");
vector<KeyPoint> keypoints;
cvtColor (image, gray_image, CV_BGR2GRAY);
feature_detector->detect (gray_image, keypoints);
Or
FASTX (gray_image, keypoints, 50, true, FastFeatureDetector::TYPE_9_16);
SIFT (Scale Invariant Feature Transform) as FeatureDetector::create("SIFT")
Ptr<FeatureDetector> feature_detector = FeatureDetector::create("SIFT");
vector<KeyPoint> keypoints;
cvtColor (image, gray_image, CV_BGR2GRAY);
feature_detector->detect (gray_image, keypoints);
Update for perspective transform (you must know the 4 points beforehand):
Point2f source[4], destination[4];
// Assign values to the source and destination points.
Mat perspective_matrix = getPerspectiveTransform( source, destination );
warpPerspective( image, result, perspective_matrix, result.size() );
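As a minimal, self-contained sketch (the corner coordinates below are made up for illustration; in practice they would come from the shape-detection step):
// Illustrative values only: the four detected screen corners, listed
// top-left, top-right, bottom-right, bottom-left.
Point2f source[4] = { Point2f(120, 80), Point2f(520, 95), Point2f(530, 330), Point2f(110, 340) };
// Map them onto an upright 640x360 rectangle.
Point2f destination[4] = { Point2f(0, 0), Point2f(640, 0), Point2f(640, 360), Point2f(0, 360) };
Mat perspective_matrix = getPerspectiveTransform( source, destination );
Mat result( 360, 640, image.type() );
warpPerspective( image, result, perspective_matrix, result.size() );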

OpenCV Multiple marker detection?

I've been working on detecting fiducial markers in scenes. An example of my fiducial marker is here:
http://tinypic.com/view.php?pic=4r6k3q&s=8#.VNgsgzVVK1F
I have been able to detect a single fiducial marker in a scene very well. What is the methodology for detecting multiple fiducial markers in a scene? Doing feature detection, extraction, and then matching is great for finding a single match, but it seems to be the wrong method for detecting multiple matches, since it would be difficult to determine which features belong to which marker.
The fiducial markers would be the same, and would not be in a known location in the scene.
Update:
Below is some sample code. I was trying to match the first fiducial marker with x number of keypoints, and then use the remaining keypoints to match the second marker. However, this is not robust at all. Does anybody have any suggestions?
OrbFeatureDetector detector;
vector<KeyPoint> keypoints1, keypoints2;
detector.detect( im1, keypoints1 );
detector.detect( im2, keypoints2 );
Mat display_im1, display_im2;
drawKeypoints( im1, keypoints1, display_im1, Scalar(0,0,255) );
drawKeypoints( im2, keypoints2, display_im2, Scalar(0,0,255) );
SiftDescriptorExtractor extractor;
Mat descriptors1, descriptors2;
extractor.compute( im1, keypoints1, descriptors1 );
extractor.compute( im2, keypoints2, descriptors2 );
BFMatcher matcher;
vector<DMatch> matches1, matches2;
matcher.match( descriptors1, descriptors2, matches1 );
// Sort matches by distance (DMatch::operator< compares distance)
sort( matches1.begin(), matches1.end() );
matches2 = matches1;
// Keep the 50 best matches for the first marker, the rest for the second
int numElementsToSave = 50;
matches1.erase( matches1.begin() + numElementsToSave, matches1.end() );
matches2.erase( matches2.begin(), matches2.begin() + numElementsToSave );
Mat match_im1, match_im2;
drawMatches( im1, keypoints1, im2, keypoints2, matches1, match_im1,
             Scalar::all(-1), Scalar::all(-1), vector<char>(),
             DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
drawMatches( im1, keypoints1, im2, keypoints2, matches2, match_im2,
             Scalar::all(-1), Scalar::all(-1), vector<char>(),
             DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
I have never tried it before, but here you have a good explanation about the detection of multiple occurrences:
This tutorial shows how to enable multiple detection of the same object. To enable multiple detection, the parameter General->multiDetection should be checked. The approach is as follows:
As usual, we match all features between the objects and the scene. For an object which is in the scene twice (or more), there should be twice as many matched features. We apply a RANSAC algorithm to find a homography. The inliers should belong to only one occurrence of the object; all others are considered outliers. We redo the homography process on the outliers, then find another homography... and we repeat this process until no homography can be computed.
It may happen that a homography found from the outliers is superposed on a previous one. You could set Homography->ransacReprojThr (in pixels) higher to accept more inliers in the computed homographies, which would decrease the chance of superposed detections. Another way is to ignore superposed homographies within a specified radius with the parameter General->multiDetectionRadius (in pixels).
For more information see the page below:
https://code.google.com/p/find-object/wiki/MultiDetection
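A compressed sketch of that loop (the names and the minimum-inlier threshold are illustrative, not taken from find-object):
// Sketch: repeatedly fit a RANSAC homography, peel off its inliers as one
// detected occurrence, and rerun on the remaining (outlier) matches.
vector<Point2f> objPts, scenePts; // filled from the object/scene matches
while (objPts.size() >= 4)
{
    vector<uchar> inlierMask;
    Mat H = findHomography( objPts, scenePts, CV_RANSAC, 3.0, inlierMask );
    if (H.empty() || countNonZero(inlierMask) < 10) // illustrative minimum-inlier threshold
        break; // no further occurrence can be supported
    // ... record H as one occurrence of the object ...
    vector<Point2f> restObj, restScene; // keep only the outliers for the next pass
    for (size_t i = 0; i < inlierMask.size(); i++)
        if (!inlierMask[i])
        {
            restObj.push_back( objPts[i] );
            restScene.push_back( scenePts[i] );
        }
    objPts = restObj;
    scenePts = restScene;
}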
I developed a semi-automatic algorithm to detect multiple markers (interest points) in an image using the findContours method on a binary image (my markers are white on a green surface). I then limit my search with an area constraint, since I know how big each marker is in each frame. Of course this produced some false positives, but it was good enough. I couldn't see the picture in your post, as tinypic is blocked here for some reason, but you can use the matchShapes OpenCV function to eliminate the bad contours.
Here is the part of the code I wrote for this.
Mat tempFrame;
cvtColor( BallFrame, tempFrame, COLOR_BGR2GRAY );
GaussianBlur( tempFrame, tempFrame, Size(15, 15), 2, 2 ); // remove noise
Mat imBw;
threshold( tempFrame, imBw, 220, 255, THRESH_BINARY ); // high threshold to get better results
std::vector<std::vector<Point> > contours;
std::vector<Vec4i> hierarchy;
findContours( imBw, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
Point2f center;
float radius = 0.0f;
for (size_t i = 0; i < contours.size(); i++)
{
    double area = contourArea( contours[i] );
    if (area > 1 && area < 4000) // area constraint on the marker size
    {
        minEnclosingCircle( contours[i], center, radius );
        if (radius < 50) // eliminate wide/narrow contours
        {
            // You can use matchShapes here to match your marker shape
        }
    }
}
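Where the comment above mentions matchShapes, a small illustrative helper could look like the following (the reference contour and the 0.1 acceptance threshold are my assumptions, not part of the original answer):
// Illustrative helper: accept a contour only if its shape is close enough to a
// reference marker contour extracted once from a template image.
bool looksLikeMarker(const std::vector<Point>& candidate, const std::vector<Point>& referenceContour)
{
    // Hu-moment based shape comparison; smaller values mean more similar shapes.
    double similarity = matchShapes( candidate, referenceContour, CV_CONTOURS_MATCH_I1, 0 );
    return similarity < 0.1; // illustrative acceptance threshold
}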
I hope this will help

Get the SIFT descriptor for specified point using OpenCV

I want to get the SIFT feature for specified points. These points are chosen by hand, not by a keypoint detector. My question is: I only know the position of the points but have no idea about the size and angle values. How should I set these values?
Here is my code:
int main()
{
    Mat img_object = imread( "img/test.jpg", 0 );
    SiftDescriptorExtractor extractor;
    Mat descriptors;
    std::vector<KeyPoint> keypoints;
    // set keypoint position and size: should I set
    // the size parameter to 32 for a 32x32 patch?
    KeyPoint kp(50, 60, 32);
    keypoints.push_back(kp);
    extractor.compute( img_object, keypoints, descriptors );
    return 0;
}
Should I set the size param of KeyPoint to 32 for a 32x32 patch? Is this implementation reasonable?
Usually, keypoint detectors work on a local neighbourhood around a point. This is the size field of OpenCV's KeyPoint class. The angle field is the dominant orientation of the keypoint (note that it can be set to -1 if the orientation is not known).
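As a minimal sketch (the coordinates and patch size simply mirror the question's example), the keypoint can be filled in with an explicit size and the angle left at -1 when no orientation is known:
std::vector<KeyPoint> keypoints;
// Hand-picked point at (50, 60), described over a 32-pixel neighbourhood,
// with no dominant orientation assigned (angle = -1).
keypoints.push_back( KeyPoint(50.0f, 60.0f, 32.0f, -1.0f) );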
OpenCV KeyPoint class

SurfFeatureDetector and creating an empty mask with Mat()

I would like to use SurfFeatureDetector to detect keypoints in a specified area of a picture:
Train_pic & Source_pic
Detect Train_pic keypoint_1 using SurfFeatureDetector.
Detect Source_pic keypoint_2 using SurfFeatureDetector in a specified area.
Compute and match.
The OpenCV SurfFeatureDetector detect method is shown below.
void FeatureDetector::detect(const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask=Mat())
mask – Mask specifying where to look for keypoints (optional). Must be a char matrix with non-zero values in the region of interest.
Can anyone help explain how to create the mask=Mat() for Source_pic?
Thanks
Jay
You don't technically have to specify the empty matrix to use the detect function as it is the default parameter.
You can call detect like this:
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
vector<KeyPoint> keyPoints;
detector->detect(anImage, keyPoints);
Or, by explicitly creating the empty matrix:
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
vector<KeyPoint> keyPoints;
detector->detect(anImage, keyPoints, Mat());
If you want to create a mask in a region of interest, you could create one like this:
Assuming Source_pic is of type CV_8UC3, the mask itself should be a single-channel 8-bit matrix of the same size, with non-zero values marking the region of interest:
Mat mask = Mat::zeros(Source_pic.size(), CV_8UC1);
// select a ROI
Mat roi(mask, Rect(10, 10, 100, 100));
// fill the ROI with 255 (non-zero = "look for keypoints here");
// since roi is just a view, the underlying mask is modified
roi = Scalar(255);
EDIT : Had a copy-pasta error in there. Set the ROI for the mask, and then pass that to the detect function.
Hope that clears things up!
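For completeness, a minimal sketch (reusing the detector style from earlier in this answer and the mask built above) of restricting detection on Source_pic to that region:
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
vector<KeyPoint> keypoint_2;
// Only pixels where mask is non-zero are considered for keypoints.
detector->detect(Source_pic, keypoint_2, mask);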

OpenCV - Image Stitching

I am using the following code to stitch two input images. For an unknown reason the output result is crap!
It seems that the homography matrix is wrong (or is applied wrongly) because the transformed image looks like an "exploded star"!
I have commented the part that I guess is the source of the problem, but I cannot figure it out.
Any help or pointer is appreciated!
Have a nice day,
Ali
void Stitch2Image(IplImage *mImage1, IplImage *mImage2)
{
// Convert input images to gray
IplImage* gray1 = cvCreateImage(cvSize(mImage1->width, mImage1->height), 8, 1);
cvCvtColor(mImage1, gray1, CV_BGR2GRAY);
IplImage* gray2 = cvCreateImage(cvSize(mImage2->width, mImage2->height), 8, 1);
cvCvtColor(mImage2, gray2, CV_BGR2GRAY);
// Convert gray images to Mat
Mat img1(gray1);
Mat img2(gray2);
// Detect FAST keypoints and BRIEF features in the first image
FastFeatureDetector detector(50);
BriefDescriptorExtractor descriptorExtractor;
BruteForceMatcher<L1<uchar> > descriptorMatcher;
vector<KeyPoint> keypoints1;
detector.detect( img1, keypoints1 );
Mat descriptors1;
descriptorExtractor.compute( img1, keypoints1, descriptors1 );
/* Detect FAST keypoints and BRIEF features in the second image*/
vector<KeyPoint> keypoints2;
detector.detect( img2, keypoints2 );
Mat descriptors2;
descriptorExtractor.compute( img2, keypoints2, descriptors2 );
vector<DMatch> matches;
descriptorMatcher.match(descriptors1, descriptors2, matches);
if (matches.size()==0)
return;
vector<Point2f> points1, points2;
for(size_t q = 0; q < matches.size(); q++)
{
points1.push_back(keypoints1[matches[q].queryIdx].pt);
points2.push_back(keypoints2[matches[q].trainIdx].pt);
}
// Create the result image
result = cvCreateImage(cvSize(mImage2->width * 2, mImage2->height), 8, 3);
cvZero(result);
// Copy the second image in the result image
cvSetImageROI(result, cvRect(mImage2->width, 0, mImage2->width, mImage2->height));
cvCopy(mImage2, result);
cvResetImageROI(result);
// Create warp image
IplImage* warpImage = cvCloneImage(result);
cvZero(warpImage);
/************************** Is there anything wrong here!? *******************/
// Find homography matrix
Mat H = findHomography(Mat(points1), Mat(points2), 8, 3.0);
CvMat HH = H; // Is this line converted correctly?
// Transform warp image
cvWarpPerspective(mImage1, warpImage, &HH);
// Blend
blend(result, warpImage);
/*******************************************************************************/
cvReleaseImage(&gray1);
cvReleaseImage(&gray2);
cvReleaseImage(&warpImage);
}
This is what I would suggest you try, in this order:
1) Use CV_RANSAC option for homography. Refer http://opencv.willowgarage.com/documentation/cpp/calib3d_camera_calibration_and_3d_reconstruction.html
2) Try other descriptors, particularly SIFT or SURF which ship with OpenCV. For some images FAST or BRIEF descriptors are not discriminating enough. EDIT (Aug '12): The ORB descriptors, which are based on BRIEF, are quite good and fast!
3) Try to look at the Homography matrix (step through in debug mode or print it) and see if it is consistent.
4) If above does not give you a clue, try to look at the matches that are formed. Is it matching one point in one image with a number of points in the other image? If so the problem again should be with the descriptors or the detector.
My hunch is that it is the descriptors (so 1) or 2) should fix it).
Also switch to Hamming distance instead of L1 distance in BruteForceMatcher. BRIEF descriptors are supposed to be compared using Hamming distance.
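A minimal sketch of point 1) and the Hamming suggestion combined (the names mirror the question's code; the reprojection threshold is illustrative):
// Hamming-distance brute-force matching for the binary BRIEF descriptors.
BruteForceMatcher<Hamming> descriptorMatcher;
vector<DMatch> matches;
descriptorMatcher.match(descriptors1, descriptors2, matches);
// ... build points1 / points2 from the matches as in the question ...
// RANSAC-based homography estimation instead of the plain method flag.
Mat H = findHomography(Mat(points1), Mat(points2), CV_RANSAC, 3.0);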
Your homography might be calculated from wrong matches and thus represent a bad alignment.
I suggest passing the matrix through an additional check of the interdependency between its rows.
You can use the following code:
bool cvExtCheckTransformValid(const Mat& T){
    // Check the shape of the matrix
    if (T.empty())
        return false;
    if (T.rows != 3)
        return false;
    if (T.cols != 3)
        return false;
    // Check for linear dependency between the rows:
    // divide row 0 element-wise by row 1 and look at the spread of the ratios.
    Mat tmp;
    T.row(0).copyTo(tmp);
    tmp /= T.row(1);
    Scalar mean;
    Scalar stddev;
    meanStdDev(tmp, mean, stddev);
    double X = std::abs(stddev[0] / mean[0]);
    printf("std of H: %g\n", X);
    if (X < 0.8)
        return false; // rows are nearly proportional -> degenerate homography
    return true;
}
