OpenCV SURF and outlier detection

I know several questions on this subject have already been asked here, but I couldn't find any help.
I want to compare two images to see how similar they are. I'm using the well-known find_obj.cpp demo to extract SURF descriptors, and for the matching I use flannFindPairs.
But as you know, this method doesn't discard the outliers, and I'd like to know the number of true positive matches so I can figure out how similar those two images are.
I have already seen this question: Detecting outliers in SURF or SIFT algorithm with OpenCV. The answer there suggests using findFundamentalMat, but once you have the fundamental matrix, how can I get the number of outliers/true positives from it? Thank you.

Here is a snippet from the descriptor_extractor_matcher.cpp sample available from OpenCV:
if( !isWarpPerspective && ransacReprojThreshold >= 0 )
{
    cout << "< Computing homography (RANSAC)..." << endl;
    vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
    vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
    H12 = findHomography( Mat(points1), Mat(points2), CV_RANSAC, ransacReprojThreshold );
    cout << ">" << endl;
}

Mat drawImg;
if( !H12.empty() ) // filter outliers
{
    vector<char> matchesMask( filteredMatches.size(), 0 );
    vector<Point2f> points1; KeyPoint::convert(keypoints1, points1, queryIdxs);
    vector<Point2f> points2; KeyPoint::convert(keypoints2, points2, trainIdxs);
    Mat points1t; perspectiveTransform(Mat(points1), points1t, H12);
    double maxInlierDist = ransacReprojThreshold < 0 ? 3 : ransacReprojThreshold;
    for( size_t i1 = 0; i1 < points1.size(); i1++ )
    {
        if( norm(points2[i1] - points1t.at<Point2f>((int)i1,0)) <= maxInlierDist ) // inlier
            matchesMask[i1] = 1;
    }
    // draw inliers
    drawMatches( img1, keypoints1, img2, keypoints2, filteredMatches, drawImg, CV_RGB(0, 255, 0), CV_RGB(0, 0, 255), matchesMask
#if DRAW_RICH_KEYPOINTS_MODE
                 , DrawMatchesFlags::DRAW_RICH_KEYPOINTS
#endif
               );

#if DRAW_OUTLIERS_MODE
    // draw outliers
    for( size_t i1 = 0; i1 < matchesMask.size(); i1++ )
        matchesMask[i1] = !matchesMask[i1];
    drawMatches( img1, keypoints1, img2, keypoints2, filteredMatches, drawImg, CV_RGB(0, 0, 255), CV_RGB(255, 0, 0), matchesMask,
                 DrawMatchesFlags::DRAW_OVER_OUTIMG | DrawMatchesFlags::NOT_DRAW_SINGLE_POINTS );
#endif
}
else
    drawMatches( img1, keypoints1, img2, keypoints2, filteredMatches, drawImg );
The key filtering step is performed by these lines:
if( norm(points2[i1] - points1t.at<Point2f>((int)i1,0)) <= maxInlierDist ) // inlier
    matchesMask[i1] = 1;
This measures the L2-norm distance between each matched point and its reprojection (either 3 pixels if nothing was specified, or a user-defined reprojection error in pixels).
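If you only need the count of inliers rather than drawing them, findHomography can also return the inlier mask directly, and counting its non-zero entries gives the number of true positive matches. A minimal sketch, reusing the points1/points2 vectors built above (the mask output parameter exists in the OpenCV 2.x API; the rest of the names are illustrative):

// Count RANSAC inliers directly from the homography mask.
// points1/points2 are the matched Point2f vectors built above.
vector<uchar> inlierMask;
Mat H = findHomography(Mat(points1), Mat(points2), CV_RANSAC, 3.0, inlierMask);

int inliers = countNonZero(inlierMask);                 // true positive matches
double inlierRatio = (double)inliers / points1.size();  // crude similarity score
cout << inliers << " inliers out of " << points1.size() << " matches" << endl;

The same counting works with the status mask returned by findFundamentalMat(points1, points2, FM_RANSAC, ...), which answers the original question about outliers versus true positives.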
Hope that helps!

You can use the size of the vector named "ptpairs" to decide how similar the pictures are.
This vector contains the matching keypoints, so its size/2 is the number of matches.
I think you can use the size of ptpairs divided by the total number of keypoints to set an appropriate threshold.
This will probably give you an estimate of the similarity between them.
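As a rough sketch of that idea, reusing the names from the find_obj.cpp demo mentioned in the question (ptpairs, objectKeypoints, imageKeypoints); the 0.25 threshold is just a placeholder to tune:

// ptpairs stores index pairs back to back, so the number of matches is size()/2.
double numMatches   = ptpairs.size() / 2.0;
double numKeypoints = MIN(objectKeypoints->total, imageKeypoints->total);
double similarity   = numMatches / numKeypoints;   // roughly in [0, 1]

if (similarity > 0.25)   // threshold to tune on your own image set
    printf("images look similar (score %.2f)\n", similarity);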

Related

Non connecting morphological filter

After some simple preprocessing I obtain a boolean mask of segmented images.
I want to "enhance" the borders of the mask and make them smoother. For that I am using an OPEN morphology filter with a rather big circular kernel; it works very well as long as the distance between segmented objects is large enough, but in a lot of samples the objects stick together. Is there some more or less simple method to smooth such images without changing their morphology?
Without applying a morphological filter first, you can try to detect the external contours of the image. Now you can draw these external contours as filled contours and then apply your morphological filter. This works because now you don't have any holes to fill. This is fairly simple.
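A minimal sketch of that first approach, assuming a binary mask like the one described in the question (the file name and the kernel size are placeholders to tune):

#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    Mat mask = imread("mask.png", 0);   // placeholder: binary mask from your preprocessing

    // 1) find external contours only, so interior holes are ignored
    vector<vector<Point> > contours;
    findContours(mask.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);

    // 2) redraw them filled, which removes the holes
    Mat filled = Mat::zeros(mask.size(), CV_8UC1);
    drawContours(filled, contours, -1, Scalar(255), -1);

    // 3) now the morphological open only has to smooth the outer border
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15, 15)); // size to tune
    Mat smoothed;
    morphologyEx(filled, smoothed, MORPH_OPEN, kernel);

    imwrite("smoothed.png", smoothed);
    return 0;
}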
Another approach:
find external contours
take the x and y coordinates of the contour points; you can consider these as 1-D signals and apply a smoothing filter to them
In the code below, I've applied the second approach to a sample image.
Input image
External contours without any smoothing
After applying a Gaussian filter to x and y 1-D signals
C++ code
Mat im = imread("4.png", 0);
Mat cont = im.clone();
Mat original = Mat::zeros(im.rows, im.cols, CV_8UC3);
Mat smoothed = Mat::zeros(im.rows, im.cols, CV_8UC3);

// contour smoothing parameters for gaussian filter
int filterRadius = 5;
int filterSize = 2 * filterRadius + 1;
double sigma = 10;

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
// find external contours and store all contour points
findContours(cont, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, Point(0, 0));
for(size_t j = 0; j < contours.size(); j++)
{
    // draw the initial contour shape
    drawContours(original, contours, j, Scalar(0, 255, 0), 1);

    // extract x and y coordinates of points. we'll consider these as 1-D signals
    // add circular padding to 1-D signals
    size_t len = contours[j].size() + 2 * filterRadius;
    size_t idx = (contours[j].size() - filterRadius);
    vector<float> x, y;
    for (size_t i = 0; i < len; i++)
    {
        x.push_back(contours[j][(idx + i) % contours[j].size()].x);
        y.push_back(contours[j][(idx + i) % contours[j].size()].y);
    }

    // filter 1-D signals
    vector<float> xFilt, yFilt;
    GaussianBlur(x, xFilt, Size(filterSize, filterSize), sigma, sigma);
    GaussianBlur(y, yFilt, Size(filterSize, filterSize), sigma, sigma);

    // build smoothed contour
    vector<vector<Point> > smoothContours;
    vector<Point> smooth;
    for (size_t i = filterRadius; i < contours[j].size() + filterRadius; i++)
    {
        smooth.push_back(Point(xFilt[i], yFilt[i]));
    }
    smoothContours.push_back(smooth);
    drawContours(smoothed, smoothContours, 0, Scalar(255, 0, 0), 1);

    cout << "debug contour " << j << " : " << contours[j].size() << ", " << smooth.size() << endl;
}
Not 100% sure what you are trying to achieve, but this may be an avenue to explore: the tool potrace takes images and converts them to vectorised images, which involves smoothing. It prefers PGM format input files, so I use ImageMagick to prepare them. Anyway, here is an example of the command and the result so you can see what you think:
convert disks.png pgm:- | potrace - -s -o out.svg
I have converted the resulting SVG file to a PNG so I can upload it to SO.

Distinguish rock scenes using opencv

I am struggling with finding the appropriate contour algorithm for a low quality image. The example image shows a rock scene:
What I am trying to achieve is to find contours around features such as:
light areas
dark areas
grey1 areas
grey2 areas
etc. until grey-n areas
(The number of areas shall be a parameter of choice)
I do not want to take a simple binary threshold but rather use some sort of contour finding (for example watershed or similar). The major feature lines shall be kept; noise within a feature area can be flattened.
The result of my code can be seen on the images to the right.
Unfortunately, as you can easily tell, the colors do not really represent the original large-scale image features! For example: check out the two areas that I circled with red - these features are almost completely flooded with another color. What I imagine is that at least the very light and the very dark areas are covered by their own color.
cv::Mat cv_src = cv::imread(argv[1]);
cv::Mat output;
cv::Mat cv_src_gray;
cv::cvtColor(cv_src, cv_src_gray, cv::COLOR_RGB2GRAY);

double clipLimit = 0.1;
cv::Size titleGridSize = cv::Size(8,8);
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(clipLimit, titleGridSize);
clahe->apply(cv_src_gray, output);
cv::equalizeHist(output, output);
cv::cvtColor(output, cv_src, cv::COLOR_GRAY2RGB);

// Create binary image from source image
cv::Mat bw;
cv::cvtColor(cv_src, bw, cv::COLOR_BGR2GRAY);
cv::threshold(bw, bw, 180, 255, cv::THRESH_BINARY);

// Perform the distance transform algorithm
cv::Mat dist;
cv::distanceTransform(bw, dist, cv::DIST_L2, CV_32F);

// Normalize the distance image for range = {0.0, 1.0}
cv::normalize(dist, dist, 0, 1., cv::NORM_MINMAX);

// Threshold to obtain the peaks
cv::threshold(dist, dist, .2, 1., cv::THRESH_BINARY);

// Create the CV_8U version of the distance image
cv::Mat dist_8u;
dist.convertTo(dist_8u, CV_8U);

// Find total markers
std::vector<std::vector<cv::Point> > contours;
cv::findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
int ncomp = contours.size();

// Create the marker image for the watershed algorithm
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);

// Draw the foreground markers
for (int i = 0; i < ncomp; i++)
    cv::drawContours(markers, contours, i, cv::Scalar::all(i+1), -1);

// Draw the background marker
cv::circle(markers, cv::Point(5,5), 3, CV_RGB(255,255,255), -1);

// Perform the watershed algorithm
cv::watershed(cv_src, markers);

// Generate random colors
std::vector<cv::Vec3b> colors;
for (int i = 0; i < ncomp; i++)
{
    int b = cv::theRNG().uniform(0, 255);
    int g = cv::theRNG().uniform(0, 255);
    int r = cv::theRNG().uniform(0, 255);
    colors.push_back(cv::Vec3b((uchar)b, (uchar)g, (uchar)r));
}

// Create the result image
cv::Mat dst = cv::Mat::zeros(markers.size(), CV_8UC3);

// Fill labeled objects with random colors
for (int i = 0; i < markers.rows; i++)
{
    for (int j = 0; j < markers.cols; j++)
    {
        int index = markers.at<int>(i,j);
        if (index > 0 && index <= ncomp)
            dst.at<cv::Vec3b>(i,j) = colors[index-1];
        else
            dst.at<cv::Vec3b>(i,j) = cv::Vec3b(0,0,0);
    }
}

// Show me what you got
imshow("final_result", dst);
I think you can use a simple clustering method such as k-means for this, then examine the cluster centers (or the mean and standard deviation of each cluster). I quickly tried it in MATLAB.
im = imread('tvBqt.jpg');
gr = rgb2gray(im);
x = double(gr(:));
idx = kmeans(x, 4);
cl = reshape(idx, 600, 472);
figure,
subplot(1, 2, 1), imshow(gr, []), title('original')
subplot(1, 2, 2), imshow(label2rgb(cl), []), title('clustered')
The result:
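For an OpenCV C++ version of the same idea, a minimal sketch might look like this (the file name and k are placeholders; cv::kmeans expects one float sample per row):

// Cluster gray values into k intensity classes with cv::kmeans (sketch only).
cv::Mat gray = cv::imread("rocks.jpg", cv::IMREAD_GRAYSCALE);   // placeholder file

cv::Mat samples;
gray.reshape(1, (int)gray.total()).convertTo(samples, CV_32F);  // one pixel per row

int k = 4;                                                      // number of gray classes
cv::Mat labels, centers;
cv::kmeans(samples, k, labels,
           cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
           3, cv::KMEANS_PP_CENTERS, centers);

// Map every pixel to its cluster center intensity for visualization.
cv::Mat clustered(gray.size(), CV_8UC1);
for (int i = 0; i < (int)gray.total(); i++)
    clustered.at<uchar>(i / gray.cols, i % gray.cols) =
        (uchar)centers.at<float>(labels.at<int>(i), 0);

cv::imshow("clustered", clustered);
cv::waitKey(0);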
You could try using SLIC Superpixels. I tried it and it showed some good results. You can vary the parameters to get better clustering.
SLIC Superpixels
SLIC Superpixels with OpenCV C++
SLIC Superpixels with OpenCV Python

OpenCV::solvePNP() - Assertion failed

I am trying to get the pose of the camera with the help of solvePnP() from OpenCV.
After running my program I get the following errors:
OpenCV Error: Assertion failed (npoints >= 0 && npoints == std::max(ipoints.checkVector(2, CV_32F), ipoints.checkVector(2, CV_64F))) in solvePnP, file /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_graphics_opencv/opencv/work/OpenCV-2.4.2/modules/calib3d/src/solvepnp.cpp, line 55
libc++abi.dylib: terminate called throwing an exception
I tried to search for how to solve these errors, but unfortunately I couldn't resolve them!
Here is my code, all comment/help is much appreciated:
enum Pattern { NOT_EXISTING, CHESSBOARD, CIRCLES_GRID, ASYMMETRIC_CIRCLES_GRID };

void calcBoardCornerPositions(Size boardSize, float squareSize, vector<Point3f>& corners,
                              Pattern patternType)
{
    corners.clear();
    switch(patternType)
    {
    case CHESSBOARD:
    case CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; ++i )
            for( int j = 0; j < boardSize.width; ++j )
                corners.push_back(Point3f(float( j*squareSize ), float( i*squareSize ), 0));
        break;
    case ASYMMETRIC_CIRCLES_GRID:
        for( int i = 0; i < boardSize.height; i++ )
            for( int j = 0; j < boardSize.width; j++ )
                corners.push_back(Point3f(float((2*j + i % 2)*squareSize), float(i*squareSize), 0));
        break;
    }
}

int main(int argc, char* argv[])
{
    float squareSize = 50.f;
    Pattern calibrationPattern = CHESSBOARD;

    //vector<Point2f> boardCorners;
    vector<vector<Point2f> > imagePoints(1);
    vector<vector<Point3f> > boardPoints(1);

    Size boardSize;
    boardSize.width = 9;
    boardSize.height = 6;

    vector<Mat> intrinsics, distortion;
    string filename = "out_camera_xml.xml";
    FileStorage fs(filename, FileStorage::READ);
    fs["camera_matrix"] >> intrinsics;
    fs["distortion_coefficients"] >> distortion;
    fs.release();

    vector<Mat> rvec, tvec;

    Mat img = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE); // I have to pass an image here
    bool found = findChessboardCorners(img, boardSize, imagePoints[0], CV_CALIB_CB_ADAPTIVE_THRESH);

    calcBoardCornerPositions(boardSize, squareSize, boardPoints[0], calibrationPattern);
    boardPoints.resize(imagePoints.size(), boardPoints[0]);

    //***Debug start***
    cout << imagePoints.size() << endl << boardPoints.size() << endl << intrinsics.size() << endl << distortion.size() << endl;
    //***Debug end***

    solvePnP(Mat(boardPoints), Mat(imagePoints), intrinsics, distortion, rvec, tvec);

    for(int i=0; i<rvec.size(); i++) {
        cout << rvec[i] << endl;
    }

    return 0;
}
EDIT (some debug info):
I debugged it row by row and stepped into all of the functions. I am getting the assertion failure in solvePnP(...). Below is what I see when I step into the solvePnP function. First it jumps over the first if statement /if(vec.empty())/ and goes into the second if statement /if( !copyData )/; when it executes the last line /datalimit = dataend = datastart + rows*step[0]/ it jumps back to the first if statement and returns => then I get the Assertion failed error.
template<typename _Tp> inline Mat::Mat(const vector<_Tp>& vec, bool copyData)
    : flags(MAGIC_VAL | DataType<_Tp>::type | CV_MAT_CONT_FLAG),
      dims(2), rows((int)vec.size()), cols(1), data(0), refcount(0),
      datastart(0), dataend(0), allocator(0), size(&rows)
{
    if(vec.empty())
        return;
    if( !copyData )
    {
        step[0] = step[1] = sizeof(_Tp);
        data = datastart = (uchar*)&vec[0];
        datalimit = dataend = datastart + rows*step[0];
    }
    else
        Mat((int)vec.size(), 1, DataType<_Tp>::type, (uchar*)&vec[0]).copyTo(*this);
}
Step into the function in a debugger and see exactly which assertion is failing. (Probably it requires values in double (CV_64F) rather than float.)
OpenCV's new InputArray wrapper is supposed to allow you to call functions with any shape of Mat, vector of points, etc., and it will sort it out. But a lot of functions assume a particular input format or have obsolete assertions enforcing a particular format.
The stereo/calibration functions are the worst for requiring a specific layout, and frequently successive operations require a different layout.
The types don't seem right; at least in the code that worked for me, I used different types (as mentioned in the documentation).
objectPoints – Array of object points in the object coordinate space, 3xN/Nx3 1-channel or 1xN/Nx1 3-channel, where N is the number of points. vector<Point3f> can also be passed here.
imagePoints – Array of corresponding image points, 2xN/Nx2 1-channel or 1xN/Nx1 2-channel, where N is the number of points. vector<Point2f> can also be passed here.
cameraMatrix – Input camera matrix A = [fx 0 cx; 0 fy cy; 0 0 1].
distCoeffs – Input vector of distortion coefficients (k_1, k_2, p_1, p_2[, k_3[, k_4, k_5, k_6]]) of 4, 5, or 8 elements. If the vector is NULL/empty, zero distortion coefficients are assumed.
rvec – Output rotation vector (see Rodrigues()) that, together with tvec, brings points from the model coordinate system to the camera coordinate system.
tvec – Output translation vector.
useExtrinsicGuess – If true (1), the function uses the provided rvec and tvec values as initial approximations of the rotation and translation vectors, respectively, and further optimizes them.
Documentation from here.
vector<Mat> rvec, tvec should be Mat rvec, tvec instead.
vector<vector<Point2f> > imagePoints(1) should be vector<Point2f> imagePoints(1) instead.
vector<vector<Point3f> > boardPoints(1) should be vector<Point3f> boardPoints(1) instead.
Note: I encountered the exact same problem, and this worked for me (it is a little bit confusing since calibrateCamera uses vectors). I haven't tried it for imagePoints or boardPoints (but as documented in the link above, vector<vector<...>> should work, so I thought I'd better mention it); for rvec and tvec I tried it myself.
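Put together, a minimal corrected sketch of the relevant part might look like this (based on the suggestions above, reusing the question's helper and variable names, and assuming a single view; treat it as illustrative rather than a drop-in fix):

// Single-view pose: plain Mat outputs and flat point vectors.
Mat intrinsics, distortion;
FileStorage fs("out_camera_xml.xml", FileStorage::READ);
fs["camera_matrix"] >> intrinsics;            // read as Mat, not vector<Mat>
fs["distortion_coefficients"] >> distortion;
fs.release();

vector<Point2f> imagePoints;                  // one view -> one flat vector
vector<Point3f> boardPoints;
bool found = findChessboardCorners(img, boardSize, imagePoints, CV_CALIB_CB_ADAPTIVE_THRESH);
calcBoardCornerPositions(boardSize, squareSize, boardPoints, CHESSBOARD);

Mat rvec, tvec;                               // plain Mat, not vector<Mat>
if (found)
    solvePnP(boardPoints, imagePoints, intrinsics, distortion, rvec, tvec);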
I ran into exactly the same problem with solvePnP and OpenCV 3. I tried to isolate the problem in a single test case. It seems that passing a std::vector to cv::InputArray does not do what is expected. The following small test works with OpenCV 2.4.9 but not with 3.2.
And this is exactly the problem when passing a std::vector of points to solvePnP: it causes the assert at line 63 in solvepnp.cpp to fail!
Generating a cv::Mat out of the vector list before passing it to solvePnP works.
//create list with 3 points
std::vector<cv::Point3f> vectorList;
vectorList.push_back(cv::Point3f(1.0, 1.0, 1.0));
vectorList.push_back(cv::Point3f(1.0, 1.0, 1.0));
vectorList.push_back(cv::Point3f(1.0, 1.0, 1.0));
//to input array
cv::InputArray inputArray(vectorList);
cv::Mat mat = inputArray.getMat();
cv::Mat matDirect = cv::Mat(vectorList);
LOG_INFO("Size vector: %d mat: %d matDirect: %d", vectorList.size(), mat.checkVector(3, CV_32F), matDirect.checkVector(3, CV_32F));
QVERIFY(vectorList.size() == mat.checkVector(3, CV_32F));
Result opencv 2.4.9 macos:
TestObject: OpenCV
Size vector: 3 mat: 3 matDirect: 3
Result opencv 3.2 win64:
TestObject: OpenCV
Size vector: 3 mat: 9740 matDirect: 3
I faced the same issue. In my case (in Python), I converted the input arrays to float type, and it worked fine afterwards.

Finding length of contour in opencv

This is regarding a project that concerns detection of text in an image using OpenCV in C. The process is to detect the colors inside and outside the corresponding contours, and the way to do that is to draw normals on the contours at equally spaced positions and extract the pixel colors at the corresponding positions of the normals' end-points.
I am trying to implement this using the following code, but it's not working. I mean, it's drawing the normals, but not in an equi-spaced fashion.
for( ; contours!=0 ; contours = contours->h_next )
{
    CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
    cvDrawContours( cc_color, contours, color, CV_RGB(0,0,0), -1, 1, 8, cvPoint(0,0) );
    ptr = contours;
    for( i=1; i<ptr->total; i++)
    {
        p1 = CV_GET_SEQ_ELEM( CvPoint, ptr, i );
        p2 = CV_GET_SEQ_ELEM( CvPoint, ptr, i+1 );
        x1 = p1->x;
        y1 = p1->y;
        x2 = p2->x;
        y2 = p2->y;
        printf("%d %d %d %d\n",x1,y1,x2,y2);
        draw_normals(x1,y1,x2,y2);
    }
}
So is there a way to find the length of a contour, so that I can divide it by the number of normals I want to draw? Thanks in advance.
EDIT: The draw_normal function draws the normals between two points passed to it as parameters.
So is there a way to find the length of a contour?
Yes, you can find the length of a contour using the standard OpenCV function cvArcLength().
Check the documentation here.
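A small sketch of how the contour length could then be used to space the normals roughly evenly (using the C API as in the question; numNormals and the reuse of ptr, i and draw_normals from the snippet above are illustrative):

// Perimeter of the current (closed) contour sequence.
double perimeter = cvArcLength(ptr, CV_WHOLE_SEQ, 1);

// Walk the contour and emit a normal every 'spacing' pixels of arc length.
int numNormals = 20;                       // how many normals you want
double spacing = perimeter / numNormals;
double travelled = 0, nextMark = 0;

for (i = 0; i < ptr->total; i++)
{
    CvPoint* a = CV_GET_SEQ_ELEM(CvPoint, ptr, i);
    CvPoint* b = CV_GET_SEQ_ELEM(CvPoint, ptr, (i + 1) % ptr->total);
    if (travelled >= nextMark)
    {
        draw_normals(a->x, a->y, b->x, b->y);   // your existing helper
        nextMark += spacing;
    }
    travelled += sqrt((double)((b->x - a->x)*(b->x - a->x) + (b->y - a->y)*(b->y - a->y)));
}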

OpenCV - Image Stitching

I am using the following code to stitch two input images. For an unknown reason the output result is crap!
It seems that the homography matrix is wrong (or is applied wrongly), because the transformed image looks like an "exploded star"!
I have commented the part that I guess is the source of the problem, but I cannot figure it out.
Any help or pointer is appreciated!
Have a nice day,
Ali
void Stitch2Image(IplImage *mImage1, IplImage *mImage2)
{
    // Convert input images to gray
    IplImage* gray1 = cvCreateImage(cvSize(mImage1->width, mImage1->height), 8, 1);
    cvCvtColor(mImage1, gray1, CV_BGR2GRAY);
    IplImage* gray2 = cvCreateImage(cvSize(mImage2->width, mImage2->height), 8, 1);
    cvCvtColor(mImage2, gray2, CV_BGR2GRAY);

    // Convert gray images to Mat
    Mat img1(gray1);
    Mat img2(gray2);

    // Detect FAST keypoints and BRIEF features in the first image
    FastFeatureDetector detector(50);
    BriefDescriptorExtractor descriptorExtractor;
    BruteForceMatcher<L1<uchar> > descriptorMatcher;

    vector<KeyPoint> keypoints1;
    detector.detect( img1, keypoints1 );
    Mat descriptors1;
    descriptorExtractor.compute( img1, keypoints1, descriptors1 );

    /* Detect FAST keypoints and BRIEF features in the second image*/
    vector<KeyPoint> keypoints2;
    detector.detect( img1, keypoints2 );
    Mat descriptors2;
    descriptorExtractor.compute( img2, keypoints2, descriptors2 );

    vector<DMatch> matches;
    descriptorMatcher.match(descriptors1, descriptors2, matches);
    if (matches.size()==0)
        return;

    vector<Point2f> points1, points2;
    for(size_t q = 0; q < matches.size(); q++)
    {
        points1.push_back(keypoints1[matches[q].queryIdx].pt);
        points2.push_back(keypoints2[matches[q].trainIdx].pt);
    }

    // Create the result image
    result = cvCreateImage(cvSize(mImage2->width * 2, mImage2->height), 8, 3);
    cvZero(result);

    // Copy the second image in the result image
    cvSetImageROI(result, cvRect(mImage2->width, 0, mImage2->width, mImage2->height));
    cvCopy(mImage2, result);
    cvResetImageROI(result);

    // Create warp image
    IplImage* warpImage = cvCloneImage(result);
    cvZero(warpImage);

    /************************** Is there anything wrong here!? *******************/
    // Find homography matrix
    Mat H = findHomography(Mat(points1), Mat(points2), 8, 3.0);
    CvMat HH = H; // Is this line converted correctly?

    // Transform warp image
    cvWarpPerspective(mImage1, warpImage, &HH);

    // Blend
    blend(result, warpImage);
    /*******************************************************************************/

    cvReleaseImage(&gray1);
    cvReleaseImage(&gray2);
    cvReleaseImage(&warpImage);
}
This is what I would suggest you try, in this order:
1) Use CV_RANSAC option for homography. Refer http://opencv.willowgarage.com/documentation/cpp/calib3d_camera_calibration_and_3d_reconstruction.html
2) Try other descriptors, particularly SIFT or SURF which ship with OpenCV. For some images FAST or BRIEF descriptors are not discriminating enough. EDIT (Aug '12): The ORB descriptors, which are based on BRIEF, are quite good and fast!
3) Try to look at the Homography matrix (step through in debug mode or print it) and see if it is consistent.
4) If above does not give you a clue, try to look at the matches that are formed. Is it matching one point in one image with a number of points in the other image? If so the problem again should be with the descriptors or the detector.
My hunch is that it is the descriptors (so 1) or 2) should fix it).
Also switch to Hamming distance instead of L1 distance in BruteForceMatcher. BRIEF descriptors are supposed to be compared using Hamming distance.
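A minimal sketch combining those two changes on the OpenCV 2.x API used in the question (the names follow the question's code; treat the snippet as illustrative rather than a drop-in fix):

// Match BRIEF descriptors with Hamming distance instead of L1.
BruteForceMatcher<Hamming> matcher;
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);

// ... build points1/points2 from matches as in the question ...

// Estimate the homography robustly; inlierMask flags the RANSAC inliers.
vector<uchar> inlierMask;
Mat H = findHomography(Mat(points1), Mat(points2), CV_RANSAC, 3.0, inlierMask);
cout << "H = " << endl << H << endl;   // 3) inspect the matrix for sanity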
Your homography might be calculated based on wrong matches and thus represent bad alignment.
I suggest passing the matrix through an additional check of interdependency between its rows.
You can use the following code:
bool cvExtCheckTransformValid(const Mat& T){
    // Check the shape of the matrix
    if (T.empty())
        return false;
    if (T.rows != 3)
        return false;
    if (T.cols != 3)
        return false;

    // Check for linear dependency.
    Mat tmp;
    T.row(0).copyTo(tmp);
    tmp /= T.row(1);
    Scalar mean;
    Scalar stddev;
    meanStdDev(tmp, mean, stddev);
    double X = abs(stddev[0]/mean[0]);
    printf("std of H:%g\n", X);
    if (X < 0.8)
        return false;

    return true;
}
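A possible way to wire it into the question's code (just a sketch; how you react to a rejected matrix is up to you):

Mat H = findHomography(Mat(points1), Mat(points2), CV_RANSAC, 3.0);
if (!cvExtCheckTransformValid(H))
    return;   // skip warping/blending when the homography looks degenerate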

Resources