SIFT: changing keypoint scale (meaningful keypoint neighbourhood) - OpenCV

I'm new to OpenCV and its development. I'm using SIFT keypoints to match two images with the following code, but it produces some false matches.
I think that if I changed the size of the keypoints I would get more correct matches; the current keypoint size does not seem adequate. Please help me adjust the size of the keypoints.
vector<KeyPoint> keypoints_right, keypoints_left;
Mat descriptor_right, descriptor_left;
//-- Step 1: Detect keypoints in both images
Ptr<FeatureDetector> featureDetector = FeatureDetector::create("SIFT");
featureDetector->detect(input_right, keypoints_right); // featureDetector is a Ptr, hence ->
featureDetector->detect(input_left, keypoints_left);
//-- Step 2: Calculate descriptors (feature vectors)
Ptr<DescriptorExtractor> featureExtractor = DescriptorExtractor::create("SIFT");
featureExtractor->compute(input_right, keypoints_right, descriptor_right);
featureExtractor->compute(input_left, keypoints_left, descriptor_left);
//-- Show detected keypoints
Mat outputImageright;
Scalar keypointColor = Scalar(255, 0, 0); // blue keypoints
drawKeypoints(input_right, keypoints_right, outputImageright, keypointColor, DrawMatchesFlags::DEFAULT);
namedWindow("Right View");
imshow("Right View", outputImageright);
Mat outputImageleft;
Scalar keypointColorred = Scalar(0, 0, 255); // red keypoints
drawKeypoints(input_left, keypoints_left, outputImageleft, keypointColorred, DrawMatchesFlags::DEFAULT);
namedWindow("Left View");
imshow("Left View", outputImageleft);
waitKey(0); // needed so the windows actually render

Related

Get the SIFT descriptor for specified point using OpenCV

I want to get the SIFT descriptor for specified points. These points are chosen by hand, not by a keypoint detector. My question is: I only know the position of the points but have no idea about the size and angle values. How should I set these values?
Here is my code:
#include <opencv2/opencv.hpp>
#include <opencv2/nonfree/features2d.hpp> // SIFT lives in nonfree in OpenCV 2.4
using namespace cv;

int main()
{
    Mat img_object = imread("img/test.jpg", 0);
    SiftDescriptorExtractor extractor;
    Mat descriptors;
    std::vector<KeyPoint> keypoints;
    // set keypoint position and size: should I set the
    // size parameter to 32 for a 32x32 patch?
    KeyPoint kp(50, 60, 32);
    keypoints.push_back(kp);
    extractor.compute(img_object, keypoints, descriptors);
    return 0;
}
Should I set the size parameter of KeyPoint to 32 for a 32x32 patch? Is this implementation reasonable?
Usually, keypoint detectors work on a local neighbourhood around a point; this is the size field of OpenCV's KeyPoint class. The angle field is the dominant orientation of the keypoint (note that it can be set to -1 if the orientation is unknown). See the OpenCV KeyPoint class documentation for details.
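If you want a feel for what realistic size and angle values look like, one option (a sketch, assuming the SIFT detector is available in your build) is to run the detector once and inspect what it assigns:
// Inspect detector-assigned sizes and orientations for reference.
SiftFeatureDetector detector;
std::vector<KeyPoint> detected;
detector.detect(img_object, detected);
for (size_t i = 0; i < detected.size(); i++)
    printf("size=%.1f angle=%.1f\n", detected[i].size, detected[i].angle);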

drawing a line between feature points without using the "drawMatches" function

I got the feature points from two consecutive frames by using different detectors in the features2d framework:
In the first frame, the feature points are plotted in red
In the next frame, the feature points are plotted in blue
I want to draw a line between these red and blue (matched) points inside the first frame (the image with the red dots). The drawMatches function in OpenCV doesn't help, as it shows a window with the two consecutive frames next to each other for matching. Is this possible in OpenCV?
Thanks in advance
I guess that you want to visualize how each keypoint moves between two frames. As far as I know, there is no built-in function in OpenCV meeting this requirement.
However, since you have already called the drawMatches() function, you already have the two keypoint sets (taking the C++ code as an example), vector<KeyPoint>& keypoints1, keypoints2, and the matches, vector<DMatch>& matches1to2. You can then get the pixel coordinates of each keypoint from keypoints1[i].pt and draw lines between matched keypoints by calling the line() function.
Be careful: since you want to draw keypoints2 in the first frame, a pixel coordinate may fall outside the bounds of img1.
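A hedged guard for that case (the helper name insideImage is mine, not an OpenCV function):
// Returns true if p lies inside img's pixel grid; skip drawing otherwise.
static bool insideImage(const cv::Point2f& p, const cv::Mat& img)
{
    return p.x >= 0 && p.y >= 0 && p.x < img.cols && p.y < img.rows;
}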
There is a quick way to get a sense of the keypoints' motion. Below is the result shown by imshowpair() in Matlab:
After I found the good matches, I drew the lines with this code.
(...some code...)
// draw good matches
for (int i = 0; i < (int)good_matches.size(); i++)
{
    printf("-- Good Match [%d] Keypoint 1: %d -- Keypoint 2: %d \n", i, good_matches[i].queryIdx, good_matches[i].trainIdx);
    // query image is the first frame
    Point2f point_old = keypoints_1[good_matches[i].queryIdx].pt;
    // train image is the next frame, in which we want to find matched keypoints
    Point2f point_new = keypoints_2[good_matches[i].trainIdx].pt;
    // keypoint color for frame 1: RED
    circle(img_1, point_old, 3, Scalar(0, 0, 255), 1);
    circle(img_2, point_old, 3, Scalar(0, 0, 255), 1);
    // keypoint color for frame 2: BLUE
    circle(img_1, point_new, 3, Scalar(255, 0, 0), 1);
    circle(img_2, point_new, 3, Scalar(255, 0, 0), 1);
    // draw a line between the matched keypoints
    line(img_1, point_old, point_new, Scalar(0, 255, 0), 2, 8, 0);
    line(img_2, point_old, point_new, Scalar(0, 255, 0), 2, 8, 0);
}
imwrite("directory/image1.jpg", img_1);
imwrite("directory/image2.jpg", img_2);
(...some code...)
I saved the results to the first frame (img_1) and the next frame (img_2). As you can see, I get different results, but the line shapes are the same. In OpenCV's video homography sample, keypoint tracking seems accurate. That sample follows this approach: detect keypoints --> compute descriptors --> warp keypoints --> match --> find homography --> draw matches. However, I only apply: detect keypoints --> compute descriptors --> match --> draw matches.
I am confused whether I have to take the homography and perspective (or other things) into account to see the keypoint movements accurately.
My results for the first frame (img_1) and the next frame (img_2):
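As a hedged sketch of the "warp keypoints" step the homography sample uses: if H is a 3x3 homography mapping frame-1 coordinates to frame-2 coordinates (found with findHomography, as in the next question below), the matched frame-2 points can be mapped back into frame 1 before drawing the lines. The vector names here are assumptions:
// Map the matched frame-2 points back into frame 1's coordinates,
// assuming H maps frame 1 -> frame 2 (so the inverse goes 2 -> 1).
std::vector<Point2f> pts_frame2, pts_frame2_in_frame1;
for (size_t i = 0; i < good_matches.size(); i++)
    pts_frame2.push_back(keypoints_2[good_matches[i].trainIdx].pt);
perspectiveTransform(pts_frame2, pts_frame2_in_frame1, H.inv());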

comparing blob detection and Structural Analysis and Shape Descriptors in opencv

I need to use blob detection and structural analysis and shape descriptors (more specifically findContours, drawContours and moments) to detect colored circles in an image. I need to know the pros and cons of each method and which method is better. Can anyone show me the differences between these two methods, please?
As @scap3y suggested in the comments, I'd go for a much simpler approach. What I always do in these cases is something similar to this:
// Convert your image to HSV color space
// (imread loads images as BGR, so convert from BGR, not RGB)
Mat hsv;
cvtColor(originalImage, hsv, CV_BGR2HSV);
// Choose a range in each of hue, saturation and value and threshold out the other pixels
Mat thresholded;
uchar loH = 130, hiH = 170;
uchar loS = 40, hiS = 255;
uchar loV = 40, hiV = 255;
inRange(hsv, Scalar(loH, loS, loV), Scalar(hiH, hiS, hiV), thresholded);
// Find contours in the image (an additional step could be to
// apply morphologyEx() first)
vector<vector<Point> > contours;
findContours(thresholded, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// Draw your contours as ellipses into the original image
for (size_t i = 0; i < contours.size(); i++) {
    RotatedRect rect = minAreaRect(contours[i]);
    ellipse(originalImage, rect, Scalar(0, 0, 255)); // draw ellipse
}
The only thing left for you to do now is to figure out in what range your markers are in HSV color space.
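The morphological cleanup mentioned in the comment above could look like this (a sketch; the 5x5 elliptical kernel is an arbitrary choice to tune):
// Clean up the binary mask before findContours(): an opening removes
// isolated white speckles, a closing fills small holes in the blobs.
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(5, 5));
morphologyEx(thresholded, thresholded, MORPH_OPEN, kernel);
morphologyEx(thresholded, thresholded, MORPH_CLOSE, kernel);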

How can I use Homography?

I am developing a program where I receive two pictures of the same scene, but one of them has a distortion:
Mat img_1 = imread(argv[1], 0); // normal picture
Mat img_2 = imread(argv[2], 0); // picture with distortion
I would like to evaluate the distortion pattern and be able to compensate for it.
I am already able to find the keypoints, and I would like to know if I can use the function cv::findHomography for this. In any case, how do I do so?
A homography will map one image plane to another. That means that if your distortion can be expressed as a 3x3 matrix, findHomography is what you want. If not, then it isn't what you want. It takes two vectors of corresponding points as input and will return the 3x3 matrix that best represents the transform between those points.
Alright, so suppose I have two pictures (A and B), slightly distorted one from the other, where there are translation, rotation and scale differences between them (for example, these pictures:)
So what I need is to apply a kind of transformation to picture B that compensates for the distortion/translation/rotation, so that both pictures end up with the same size and orientation and no translation between them.
I've already extracted the points and found the homography, as shown below. But I don't know how to use the homography to transform Mat img_B so it looks like Mat img_A. Any idea?
//-- Localize the object from img_1 in img_2
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for (unsigned int i = 0; i < good_matches.size(); i++) {
    //-- Get the keypoints from the good matches
    obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
    scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
}
Mat H = findHomography(obj, scene, CV_RANSAC);
Cheers,
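A hedged sketch of the missing step: warpPerspective() applies H to a whole image. Mind the direction: findHomography(obj, scene, ...) returns the matrix mapping obj points to scene points, so check which image each vector was filled from; if H points the wrong way, use H.inv() or pass WARP_INVERSE_MAP.
// Assuming the obj points came from img_B and the scene points from
// img_A, H maps img_B coordinates to img_A coordinates:
Mat img_B_compensated;
warpPerspective(img_B, img_B_compensated, H, img_A.size());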

SurfFeatureDetector and creating an empty mask with Mat()

I would like to use SurfFeatureDetector to detect keypoints in a specified area of a picture:
Train_pic & Source_pic
Detect Train_pic keypoints (keypoint_1) using SurfFeatureDetector.
Detect Source_pic keypoints (keypoint_2) using SurfFeatureDetector in a specified area.
Compute and match.
The OpenCV FeatureDetector::detect signature is shown below.
void FeatureDetector::detect(const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask=Mat())
mask – Mask specifying where to look for keypoints (optional). Must be a char matrix with non-zero values in the region of interest.
Can anyone help explain how to create the mask (mask=Mat()) for Source_pic?
Thanks
Jay
You don't technically have to specify the empty matrix to use the detect function, since the empty matrix is the default argument.
You can call detect like this:
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
vector<KeyPoint> keyPoints;
detector->detect(anImage, keyPoints);
Or, by explicitly creating the empty matrix:
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
vector<KeyPoint> keyPoints;
detector->detect(anImage, keyPoints, Mat());
If you want to restrict detection to a region of interest, you could create a mask like this.
Note that, per the documentation quoted above, the mask must be an 8-bit single-channel matrix (CV_8UC1) with non-zero values in the region of interest, regardless of Source_pic's type:
Mat mask = Mat::zeros(Source_pic.size(), CV_8UC1);
// select a ROI
Mat roi(mask, Rect(10, 10, 100, 100));
// fill the ROI with 255 (white);
// roi is a view into mask, so this modifies the mask itself
roi = Scalar(255);
EDIT: There was a copy-paste error in there. Set the ROI in the mask, and then pass the mask to the detect function.
Hope that clears things up!
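Putting it together (a sketch reusing the detector and mask from above):
// Keypoints are now only detected inside the white ROI of the mask.
detector->detect(Source_pic, keyPoints, mask);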
