How can I use Homography? - opencv

I am developing a program where I receive 2 pictures of the same scene, but one of them has a distortion:
Mat img_1 = imread(argv[1], 0); // normal picture
Mat img_2 = imread(argv[2], 0); // picture with distortion
I would like to evaluate the distortion pattern and be able to compensate for it.
I am already able to find the keypoints, and I would like to know if I can use the function cv::findHomography for this. In any case, how would I do so?

A homography will map one image plane to another. That means that if your distortion can be expressed as a 3x3 matrix, findHomography is what you want. If not, then it isn't what you want. It takes two vectors of corresponding points as input and will return the 3x3 matrix that best represents the transform between those points.
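For a concrete sketch, assuming you already have matched point pairs (hypothetical names; OpenCV 2.4-style API, as used elsewhere on this page):
// pts_distorted / pts_reference are >= 4 matched points from your keypoint matcher.
// H maps distorted-image coordinates to reference-image coordinates.
Mat H = findHomography(pts_distorted, pts_reference, CV_RANSAC);
// warpPerspective then resamples the distorted image into the reference frame.
Mat aligned;
warpPerspective(img_distorted, aligned, H, img_reference.size());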

Alright, so suppose I have two pictures (A and B), slightly distorted one from the other, with translation, rotation and scale differences between them (for example, these pictures:)
So what I need is to apply a transformation to pic B that compensates for the distortion/translation/rotation, so that both pictures have the same size and orientation and no translation between them.
I've already extracted the points and found the homography, as shown below. But I don't know how to use the homography to transform Mat img_B so it looks like Mat img_A. Any idea?
//-- Localize the object from img_1 in img_2
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for (unsigned int i = 0; i < good_matches.size(); i++) {
    //-- Get the keypoints from the good matches
    obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
    scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
}
Mat H = findHomography(obj, scene, CV_RANSAC);
Cheers,

Related

JavaCV or OpenCV Image Rotation

My objective is to rotate an image by a certain angle (e.g. 30 degrees). One possible way of rotating by 90 degrees in OpenCV is given by tenta4, but unfortunately it only performs 90-degree flips.
Another possible way is the "SkewGrayImage" method from the JavaCV samples, which performs "small angle rotations" that appear to work for rotations of up to approximately 45-50 degrees, but not for higher values.
So, my issue is: is there a proper way/method in OpenCV or JavaCV to perform an angular rotation of images or objects?
Meta has explained how to compute a rotation matrix with respect to the center of the image and then to perform a rotation as follows:
Point2f center(src.cols / 2.0F, src.rows / 2.0F); // rotate about the image center
Mat rot_mat = getRotationMatrix2D(center, angle, 1.0); // angle in degrees
Mat rotated_image;
warpAffine(src, rotated_image, rot_mat, src.size());
There is an operation called warp, which can not only rotate but also apply other transformations to the image.
Some useful links are here:
https://docs.opencv.org/2.4.13.2/modules/stitching/doc/warpers.html
https://docs.opencv.org/3.1.0/db/d29/group__cudawarping.html
https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html
Hope it helps ;)
A more detailed answer for IplImage rotation is given by Martin. It is based on Mat variables, whose result can then be converted and returned as an IplImage as follows:
Mat source = imread(argv[1], CV_LOAD_IMAGE_COLOR);
double angle = 30.0; // rotation angle in degrees
Point2f src_center(source.cols / 2.0F, source.rows / 2.0F);
Mat rotation_matrix = getRotationMatrix2D(src_center, angle, 1.0);
Mat destinationMat;
warpAffine(source, destinationMat, rotation_matrix, source.size());
IplImage iplframe = IplImage(destinationMat);
Hope this helps! Worked for me with JavaCV.
Mat raw = ... // your raw mat
// Create your "new" Mat and the center of your Raw Mat
Mat result = new Mat(raw.size(), CV_8U); // substitute your own image type; mine was CV_8U
Point2f rawCenter = new Point2f(raw.cols() / 2.0F, raw.rows() / 2.0F);
// Scale and Rotation of new Mat
double scale = 1.0;
int rotation = -5;
// Rotation Matrix
Mat rotationMatrix = getRotationMatrix2D(rawCenter, rotation, scale);
// Rotate
warpAffine(raw, result, rotationMatrix, raw.size());

OpenCV 2.4.3 - warpPerspective with reversed homography on a cropped image

When finding a reference image in a scene using SURF, I would like to crop the found object in the scene, and "straighten" it back using warpPerspective and the reversed homography matrix.
Meaning, let's say I have this SURF result:
Now, I would like to crop the found object in the scene:
and "straighten" only the cropped image with warpPerspective using the reversed homography matrix. The result I'm aiming at is that I'll get an image containing, roughly, only the object, and some distorted leftovers from the original scene (as the cropping is not a 100% the object alone).
Cropping the found object, and finding and reversing the homography matrix, are simple enough. The problem is, I can't seem to understand the results from warpPerspective. It seems like the resulting image contains only a small portion of the cropped image, at a very large size.
While researching warpPerspective I found that the resulting image is very large due to the nature of the process, but I can't seem to wrap my head around how to do this properly. It seems like I just don't understand the process well enough. Would I need to warpPerspective the original (not cropped) image and then crop the "straightened" object?
Any advice?
Try this:
Given that you have the unconnected contour of your object (e.g. the outer corner points of the box contour), you can transform those points with your inverse homography, and then adjust that homography to place the result of the transformation in the top-left region of the image.
First, compute where those object points will be warped to (use the inverse homography and the contour points as input):
cv::Rect computeWarpedContourRegion(const std::vector<cv::Point> & points, const cv::Mat & homography)
{
    std::vector<cv::Point2f> transformed_points(points.size());
    for(unsigned int i=0; i<points.size(); ++i)
    {
        // warp the points (rows 0 and 1 of the homography)
        transformed_points[i].x = points[i].x * homography.at<double>(0,0) + points[i].y * homography.at<double>(0,1) + homography.at<double>(0,2);
        transformed_points[i].y = points[i].x * homography.at<double>(1,0) + points[i].y * homography.at<double>(1,1) + homography.at<double>(1,2);
    }

    // dehomogenization necessary?
    if(homography.rows == 3)
    {
        float homog_comp;
        for(unsigned int i=0; i<transformed_points.size(); ++i)
        {
            homog_comp = points[i].x * homography.at<double>(2,0) + points[i].y * homography.at<double>(2,1) + homography.at<double>(2,2);
            transformed_points[i].x /= homog_comp;
            transformed_points[i].y /= homog_comp;
        }
    }

    // now find the bounding box for these points:
    cv::Rect boundingBox = cv::boundingRect(transformed_points);
    return boundingBox;
}
Then modify your inverse homography (this takes the result of computeWarpedContourRegion and the inverse homography as input):
cv::Mat adjustHomography(const cv::Rect & transformedRegion, const cv::Mat & homography)
{
    if(homography.rows == 2) throw("homography adjustment for affine matrix not implemented yet");

    // unit matrix
    cv::Mat correctionHomography = cv::Mat::eye(3,3,CV_64F);
    // correction translation: move the warped region to the top-left corner
    correctionHomography.at<double>(0,2) = -transformedRegion.x;
    correctionHomography.at<double>(1,2) = -transformedRegion.y;

    return correctionHomography * homography;
}
Finally, you will call something like:
cv::warpPerspective(objectWithBackground, output, adjustedInverseHomography, sizeOfComputeWarpedContourRegionResult);
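Putting the pieces together, a minimal sketch (assuming H is the forward homography you found with findHomography and contour holds the object's corner points in the scene):
cv::Mat inverseHomography = H.inv();
cv::Rect region = computeWarpedContourRegion(contour, inverseHomography);
cv::Mat adjustedInverseHomography = adjustHomography(region, inverseHomography);
cv::Mat output;
cv::warpPerspective(objectWithBackground, output, adjustedInverseHomography, region.size());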
hope this helps =)

How to use Homography to transform pictures in OpenCV?

I have two pictures (A and B), slightly distorted one from the other, with translation, rotation and scale differences between them (for example, these pictures:)
So what I need is to apply a transformation to pic B that compensates for the distortion/translation/rotation, so that both pictures have the same size and orientation and no translation between them.
I've already extracted the points and found the homography, as shown below. But I don't know how to use the homography to transform Mat img_B so it looks like Mat img_A. Any idea?
//-- Localize the object from img_1 in img_2
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for (unsigned int i = 0; i < good_matches.size(); i++) {
    //-- Get the keypoints from the good matches
    obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
    scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
}
Mat H = findHomography(obj, scene, CV_RANSAC);
Cheers,
You do not need a homography for this problem; you can compute an affine transform instead. However, if you do want to use a homography for other purposes, you can check out the code below. It was copied from this more detailed article on homography.
C++ Example
// pts_src and pts_dst are vectors of points in source
// and destination images. They are of type vector<Point2f>.
// We need at least 4 corresponding points.
Mat h = findHomography(pts_src, pts_dst);
// The calculated homography can be used to warp
// the source image to destination. im_src and im_dst are
// of type Mat. Size is the size (width,height) of im_dst.
warpPerspective(im_src, im_dst, h, size);
Python Example
'''
pts_src and pts_dst are numpy arrays of points
in source and destination images. We need at least
4 corresponding points.
'''
h, status = cv2.findHomography(pts_src, pts_dst)
'''
The calculated homography can be used to warp
the source image to destination. Size is the
size (width,height) of im_dst
'''
im_dst = cv2.warpPerspective(im_src, h, size)
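For the affine route mentioned at the top of this answer, a minimal C++ sketch reusing obj/scene from the question (assuming obj came from img_A and scene from img_B; estimateRigidTransform lives in OpenCV's video module and may return an empty matrix on failure):
#include <opencv2/video/tracking.hpp> // for estimateRigidTransform
// fullAffine=false restricts the model to rotation, uniform scale and translation
Mat A = estimateRigidTransform(scene, obj, false); // 2x3 matrix mapping scene -> obj
Mat img_B_aligned;
if (!A.empty())
    warpAffine(img_B, img_B_aligned, A, img_A.size()); // img_B warped into img_A's frame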
You want the warpPerspective function. The process is analogous to the one presented in this tutorial (for affine transforms and warps).

Having some difficulty in image stitching using OpenCV

I'm currently working on image stitching using OpenCV 2.3.1 on Visual Studio 2010, but I'm having some trouble.
Problem Description
I'm trying to write code for stitching multiple images derived from a few cameras (about 3 or 4); i.e., the code should keep executing image stitching until I ask it to stop.
The following is what I've done so far:
(For simplicity, I'll replace some parts of the code with just a few words.)
1. Reading frames (images) from 2 cameras (currently I'm just working on 2 cameras).
2. Feature detection and descriptor calculation (SURF).
3. Feature matching using FlannBasedMatcher.
4. Removing outliers and calculating the homography from the inliers using RANSAC.
5. Warping one of the two images.
For step 5, I followed the answer in the following thread and just changed some parameters:
Stitching 2 images in opencv
However, the result is terrible.
I just uploaded the result to YouTube; of course, only those who have the link will be able to see it.
http://youtu.be/Oy5z_7LeaMk
My code is shown below:
(Only crucial parts are shown)
VideoCapture cam1, cam2;
cam1.open(0);
cam2.open(1);

while(1)
{
    Mat frm1, frm2;
    cam1 >> frm1;
    cam2 >> frm2;

    //(SURF detection, descriptor calculation
    //and matching using FlannBasedMatcher)

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    //(Keep only "good" matches,
    //i.e. those whose distance is less than 3*min_dist)

    vector<Point2f> frame1;
    vector<Point2f> frame2;
    for( int i = 0; i < good_matches.size(); i++ )
    {
        //-- Get the keypoints from the good matches
        frame1.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );
        frame2.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );
    }

    Mat H = findHomography( Mat(frame1), Mat(frame2), CV_RANSAC );
    cout << "Homography: " << H << endl;

    /* warp the image */
    Mat warpImage2;
    warpPerspective(frm2, warpImage2, H, Size(frm2.cols, frm2.rows), INTER_CUBIC);

    Mat final(Size(frm2.cols*3 + frm1.cols, frm2.rows), CV_8UC3);
    Mat roi1(final, Rect(frm1.cols, 0, frm1.cols, frm1.rows));
    Mat roi2(final, Rect(2*frm1.cols, 0, frm2.cols, frm2.rows));
    warpImage2.copyTo(roi2);
    frm1.copyTo(roi1);
    imshow("final", final);

    if (waitKey(30) >= 0) break; // give imshow time to draw; exit on key press
}
What else should I do to make the stitching better?
Besides, is it reasonable to keep the homography matrix fixed instead of recomputing it every frame?
What I mean is: specify the angle and the displacement between the 2 cameras myself, and derive a homography matrix from those.
Thanks. :)
It sounds like you are going about this sensibly, but if you have access to both of the cameras, and they will remain stationary with respect to each other, then calibrating offline, and simply applying the transformation online will make your application more efficient.
One point to note is, you say you are using the findHomography function from OpenCV. From the documentation, this function:
Finds a perspective transformation between two planes.
However, your points are not restricted to a specific plane as they are imaging a 3D scene. If you wanted to calibrate offline, you could image a chessboard with both cameras, and the detected corners could be used in this function.
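For example, the offline step could look like this (a sketch; the pattern size is hypothetical):
// detect the same chessboard in one frame from each camera
Size patternSize(9, 6); // inner corners of the board
vector<Point2f> corners1, corners2;
bool found1 = findChessboardCorners(frm1, patternSize, corners1);
bool found2 = findChessboardCorners(frm2, patternSize, corners2);
if (found1 && found2)
{
    // fixed homography relating the two views of the board plane;
    // store it and reuse it online instead of re-estimating every frame
    Mat H_fixed = findHomography(corners2, corners1, CV_RANSAC);
}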
Alternatively, you may like to investigate the Fundamental matrix, which can be calculated with a similar function. This matrix describes the relative position of the cameras, but some work (and a good textbook) will be required to extract it.
If you can find it, I would strongly recommend having a look at Part II: "Two-View Geometry" in the book "Multiple View Geometry in Computer Vision" by Richard Hartley and Andrew Zisserman, which goes through the process in detail.
I have been working lately on image registration. My algorithm takes two images, calculates the SURF features, finds correspondences, finds the homography matrix, and then stitches both images together. I did it with the following code:
void stich(Mat base, Mat target, Mat homography, Mat& panorama)
{
    Mat corners1(1, 4, CV_32F);
    Mat corners2(1, 4, CV_32F);
    Mat corners(1, 4, CV_32F);
    vector<Mat> planes;

    /* compute corners of warped image */
    corners1.at<float>(0,0) = 0;
    corners2.at<float>(0,0) = 0;
    corners1.at<float>(0,1) = 0;
    corners2.at<float>(0,1) = target.rows;
    corners1.at<float>(0,2) = target.cols;
    corners2.at<float>(0,2) = 0;
    corners1.at<float>(0,3) = target.cols;
    corners2.at<float>(0,3) = target.rows;

    planes.push_back(corners1);
    planes.push_back(corners2);
    merge(planes, corners);

    perspectiveTransform(corners, corners, homography);

    /* compute size of resulting image and allocate memory */
    double x_start = min( min( (double)corners.at<Vec2f>(0,0)[0], (double)corners.at<Vec2f>(0,1)[0]), 0.0);
    double x_end   = max( max( (double)corners.at<Vec2f>(0,2)[0], (double)corners.at<Vec2f>(0,3)[0]), (double)base.cols);
    double y_start = min( min( (double)corners.at<Vec2f>(0,0)[1], (double)corners.at<Vec2f>(0,2)[1]), 0.0);
    double y_end   = max( max( (double)corners.at<Vec2f>(0,1)[1], (double)corners.at<Vec2f>(0,3)[1]), (double)base.rows);

    /* create an image with the same channels and depth as target, and the proper size */
    panorama.create(Size(x_end - x_start + 1, y_end - y_start + 1), target.depth());
    planes.clear();
    /* planes should have the same number of channels as target */
    for (int i = 0; i < target.channels(); i++) {
        planes.push_back(panorama);
    }
    merge(planes, panorama);

    // create translation matrix in order to copy both images to the correct places
    Mat T;
    T = Mat::zeros(3, 3, CV_64F);
    T.at<double>(0,0) = 1;
    T.at<double>(1,1) = 1;
    T.at<double>(2,2) = 1;
    T.at<double>(0,2) = -x_start;
    T.at<double>(1,2) = -y_start;

    // copy base image to the correct position within the output image
    warpPerspective(base, panorama, T, panorama.size(), INTER_LINEAR | CV_WARP_FILL_OUTLIERS);

    // change the homography to take the necessary translation into account
    gemm(T, homography, 1, T, 0, T);

    // warp the second image on top; BORDER_TRANSPARENT leaves untouched the
    // panorama pixels that the warped target does not cover
    warpPerspective(target, panorama, T, panorama.size(), INTER_LINEAR, BORDER_TRANSPARENT);

    // tidy
    corners.release();
    T.release();
}
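To call it, the homography must map target coordinates into base coordinates, e.g. (a sketch reusing the matched point vectors from earlier in this thread):
// frame1/frame2 hold the matched points from the base/target images respectively
Mat H = findHomography(Mat(frame2), Mat(frame1), CV_RANSAC); // target -> base
Mat panorama;
stich(frm1, frm2, H, panorama);
imshow("panorama", panorama);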
If you have any questions, I will try to answer them.

OpenCV warping from one triangle to another

I would like to map one triangle inside an OpenCV Mat to another one, pretty much like warpAffine does (check it here), but for triangles instead of quads, in order to use it in a Delaunay triangulation.
I know one is able to use a mask, but I'd like to know if there's a better solution.
I have copied the above image and the following C++ code from my post Warp one triangle to another using OpenCV (C++/Python). The comments in the code below should give a good idea of what is going on. For more details, and for Python code, you can visit the link above. All the pixels inside triangle tri1 in img1 are transformed to triangle tri2 in img2. Hope this helps.
// Note: img1 and img2 are assumed to be float images (e.g. CV_32FC3),
// since the mask below is CV_32FC3 and multiply() needs matching types.
void warpTriangle(Mat &img1, Mat &img2, vector<Point2f> tri1, vector<Point2f> tri2)
{
    // Find bounding rectangle for each triangle
    Rect r1 = boundingRect(tri1);
    Rect r2 = boundingRect(tri2);

    // Offset points by left top corner of the respective rectangles
    vector<Point2f> tri1Cropped, tri2Cropped;
    vector<Point> tri2CroppedInt;
    for(int i = 0; i < 3; i++)
    {
        tri1Cropped.push_back( Point2f( tri1[i].x - r1.x, tri1[i].y - r1.y) );
        tri2Cropped.push_back( Point2f( tri2[i].x - r2.x, tri2[i].y - r2.y) );
        // fillConvexPoly needs a vector of Point and not Point2f
        tri2CroppedInt.push_back( Point((int)(tri2[i].x - r2.x), (int)(tri2[i].y - r2.y)) );
    }

    // Apply warpImage to small rectangular patches
    Mat img1Cropped;
    img1(r1).copyTo(img1Cropped);

    // Given a pair of triangles, find the affine transform.
    Mat warpMat = getAffineTransform( tri1Cropped, tri2Cropped );

    // Apply the affine transform just found to the src image
    Mat img2Cropped = Mat::zeros(r2.height, r2.width, img1Cropped.type());
    warpAffine( img1Cropped, img2Cropped, warpMat, img2Cropped.size(), INTER_LINEAR, BORDER_REFLECT_101);

    // Get mask by filling triangle
    Mat mask = Mat::zeros(r2.height, r2.width, CV_32FC3);
    fillConvexPoly(mask, tri2CroppedInt, Scalar(1.0, 1.0, 1.0), 16, 0);

    // Copy triangular region of the rectangular patch to the output image
    multiply(img2Cropped, mask, img2Cropped);
    multiply(img2(r2), Scalar(1.0,1.0,1.0) - mask, img2(r2));
    img2(r2) = img2(r2) + img2Cropped;
}
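A minimal call site might look like this (a sketch with a hypothetical file name and triangle coordinates; note the float conversion the function assumes):
Mat img1 = imread("source.jpg");
img1.convertTo(img1, CV_32FC3, 1/255.0); // warpTriangle expects float images
Mat img2 = Mat::zeros(img1.size(), img1.type());
vector<Point2f> tri1, tri2;
tri1.push_back(Point2f(360, 200)); tri1.push_back(Point2f(60, 250)); tri1.push_back(Point2f(450, 400));
tri2.push_back(Point2f(400, 200)); tri2.push_back(Point2f(160, 270)); tri2.push_back(Point2f(400, 400));
warpTriangle(img1, img2, tri1, tri2);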
You should use getAffineTransform to find the transform, and warpAffine to apply it.
