I have two pictures (A and B) that are slightly distorted relative to one another: there are translation, rotation and scale differences between them (for example, these pictures:)
So what I need is to apply some kind of transformation to picture B that compensates for the distortion/translation/rotation, so that both pictures end up with the same size and orientation and with no translation between them.
I've already extracted the points and found the homography, as shown below, but I don't know how to use the homography to transform Mat img_B so it looks like Mat img_A. Any ideas?
//-- Localize the object from img_1 in img_2
std::vector<Point2f> obj;
std::vector<Point2f> scene;
for (unsigned int i = 0; i < good_matches.size(); i++) {
    //-- Get the keypoints from the good matches
    obj.push_back(keypoints_object[good_matches[i].queryIdx].pt);
    scene.push_back(keypoints_scene[good_matches[i].trainIdx].pt);
}
Mat H = findHomography(obj, scene, CV_RANSAC);
Cheers,
You do not need a homography for this problem; you can compute an affine transform instead (a sketch of that alternative follows the two examples below). However, if you do want to use a homography for other purposes, you can check out the code below. It was copied from this more detailed article on homography.
C++ Example
// pts_src and pts_dst are vectors of points in source
// and destination images. They are of type vector<Point2f>.
// We need at least 4 corresponding points.
Mat h = findHomography(pts_src, pts_dst);
// The calculated homography can be used to warp
// the source image to destination. im_src and im_dst are
// of type Mat. Size is the size (width,height) of im_dst.
warpPerspective(im_src, im_dst, h, size);
Python Example
'''
pts_src and pts_dst are numpy arrays of points
in source and destination images. We need at least
4 corresponding points.
'''
h, status = cv2.findHomography(pts_src, pts_dst)
'''
The calculated homography can be used to warp
the source image to destination. Size is the
size (width,height) of im_dst
'''
im_dst = cv2.warpPerspective(im_src, h, size)
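As mentioned at the top of this answer, an affine (rigid) transform is enough for pure translation/rotation/scale differences. A minimal sketch, reusing the pts_src/pts_dst and im_src/im_dst names from the examples above:
// Sketch: align im_src to im_dst with a rigid/similarity transform instead of a homography.
// With fullAffine = false, estimateRigidTransform fits rotation + uniform scale + translation.
Mat A = estimateRigidTransform(pts_src, pts_dst, false);  // 2x3 matrix; empty if estimation fails
if (!A.empty())
    warpAffine(im_src, im_dst, A, size);                  // size is the size (width, height) of im_dst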
You want the warpPerspective function. The process is analogous to the one presented in this tutorial (for affine transforms and warps).
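Concretely, with the variable names from the question (a sketch; it assumes the obj points came from img_A and the scene points from img_B, which is my reading of the code rather than something stated explicitly):
// H maps obj (img_A) points onto scene (img_B) points, so warping img_B back
// onto img_A uses the inverse homography.
Mat img_B_warped;
warpPerspective(img_B, img_B_warped, H.inv(), img_A.size());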
Related
Ok, here's my scenario. I built a paper detection app that finds a piece of paper in an image. It doesn't work 100% of the time, given white balance and focusing changes, so if we found a sheet of paper in frame 1, we want to show a border around it in frame 2 (in reality there can be a gap of many frames), even if we didn't find it in frame 2. In order to do so, we keep the old image and the old 4-point contour from frame 1.
Then, in frame N, where we did not find a convex contour, we want to transform the old contour using an affine transformation that we compute using estimateRigidTransform.
I am 100% positive that my math is slightly off, but I'm not sure where:
// new image = 3200 x 6400
// old image = 3200 x 6400
// old contour = contour found in old image, same scale
vector<cv::Point> transformContourWithNewImage(Mat & newImage, Mat & oldImage, vector<cv::Point> oldContour) {
    CGFloat ratio = newImage.size().height / 500.0;
    cv::Size outputSize = cv::Size(newImage.size().width / ratio, 500);
    Mat image_copy;
    resize(newImage, image_copy, outputSize); // shrink images down so that computations are cheaper
    Mat oldImage_copy = cv::Mat();
    resize(oldImage, oldImage_copy, outputSize);
    cv::Mat transform = estimateRigidTransform(image_copy, oldImage_copy, false);
    vector<cv::Point> transformedPoints;
    cv::transform(oldContour, transformedPoints, transform);
    return transformedPoints;
}
I think I need to scale the transform, since it was computed on a smaller image than the one the contour vector refers to. I also get a crash saying my transform Mat has the wrong number of rows/cols.
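One way to account for the downscaling, sketched here as an assumption rather than a verified fix: wrap the small-image transform between a scale-down and a scale-up matrix, and bail out early if estimateRigidTransform fails and returns an empty Mat (which would also explain the rows/cols crash):
// Sketch only: compose scale-down -> small-image transform -> scale-up so the
// result can be applied to contour points at full resolution. 'ratio' is the
// same shrink factor used for the resize above. Note that the argument order
// of estimateRigidTransform decides the direction of the mapping, which may
// also need checking.
cv::Mat rigid = estimateRigidTransform(image_copy, oldImage_copy, false); // 2x3, CV_64F
if (rigid.empty()) {
    // estimateRigidTransform can fail and return an empty Mat
    return oldContour;
}
// Promote the 2x3 affine to 3x3 so it can be chained with the scale matrices.
cv::Mat rigid3x3 = cv::Mat::eye(3, 3, CV_64F);
rigid.copyTo(rigid3x3(cv::Rect(0, 0, 3, 2)));
cv::Mat scaleDown = cv::Mat::eye(3, 3, CV_64F);   // full resolution -> small
scaleDown.at<double>(0, 0) = 1.0 / ratio;
scaleDown.at<double>(1, 1) = 1.0 / ratio;
cv::Mat scaleUp = cv::Mat::eye(3, 3, CV_64F);     // small -> full resolution
scaleUp.at<double>(0, 0) = ratio;
scaleUp.at<double>(1, 1) = ratio;
cv::Mat fullRes = scaleUp * rigid3x3 * scaleDown; // acts on full-resolution points
// cv::perspectiveTransform wants floating-point points and a 3x3 matrix
std::vector<cv::Point2f> oldPoints(oldContour.begin(), oldContour.end());
std::vector<cv::Point2f> newPoints;
cv::perspectiveTransform(oldPoints, newPoints, fullRes);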
I have a PointGrey Ladybug3 camera. It's a panoramic (multi)camera (5 cameras covering 360° and 1 camera looking up).
I've done all the calibration and rectification, so what I end up with is that for every pixel of the 6 images I know its 3D position with respect to a global frame.
What I want to do now is convert these 3D points into a panoramic image. The most common choice is an equirectangular projection like the following one:
For every 3D point (X, Y, Z) it is possible to find its theta and phi coordinates like:
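A sketch of those relations, consistent with the spherical convention used in the answer's code further down:
// phi measured from the +Z axis, theta around Z in the XY plane
double R     = sqrt(X*X + Y*Y + Z*Z);
double phi   = acos(Z / R);     // 0 .. Pi
double theta = atan2(Y, X);     // -Pi .. Pi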
My question is: is it possible to do this automatically with OpenCV? Or, if I do it manually, what is the best way to convert that bunch of pixels in theta, phi coordinates into an image?
The official Ladybug SDK uses OpenGL for all these operations, but I was wondering if it's possible to do this in OpenCV.
Thanks,
Josep
The approach I used to solve this problem was the following:
Create an empty image with the desired output size.
For every pixel in the output image, find the theta and phi coordinates (mapping linearly: theta goes from -Pi to Pi and phi from 0 to Pi).
Set a projection radius R and find the 3D coordinates from theta, phi and R.
Find in how many cameras the 3D point is visible, and the corresponding pixel position in each.
Copy the pixel from the image where it lies closest to the principal point (or use any other valid criterion).
My code looks like:
cv::Mat panoramic;
panoramic=cv::Mat::zeros(PANO_HEIGHT,PANO_WIDTH,CV_8UC3);
double theta, phi;
double R=calibration.getSphereRadius();
int result;
double dRow=0;
double dCol=0;
for (int y = 0; y != PANO_HEIGHT; y++) {
    for (int x = 0; x != PANO_WIDTH; x++) {
        //Rescale to [-pi, pi]
        theta = -(2*PI*x/(PANO_WIDTH-1) - PI); //Sign change needed.
        phi = PI*y/(PANO_HEIGHT-1);
        //From theta and phi find the 3D coordinates.
        double globalZ = R*cos(phi);
        double globalX = R*sin(phi)*cos(theta);
        double globalY = R*sin(phi)*sin(theta);
        float minDistanceCenter = 5000; // Doesn't depend on the image.
        float distanceCenter;
        //From the 3D coordinates, find in how many cameras the point falls!
        for (int cam = 0; cam != 6; cam++) {
            result = calibration.ladybugXYZtoRC(globalX, globalY, globalZ, cam, dRow, dCol);
            if (result == 0) { //The 3D point is visible from this camera
                cv::Vec3b intensity = image[cam].at<cv::Vec3b>(dRow, dCol);
                distanceCenter = sqrt(pow(dRow - imageHeight/2, 2) + pow(dCol - imageWidth/2, 2));
                if (distanceCenter < minDistanceCenter) {
                    panoramic.ptr<unsigned char>(y,x)[0] = intensity.val[0];
                    panoramic.ptr<unsigned char>(y,x)[1] = intensity.val[1];
                    panoramic.ptr<unsigned char>(y,x)[2] = intensity.val[2];
                    minDistanceCenter = distanceCenter;
                }
            }
        }
    }
}
When finding a reference image in a scene using SURF, I would like to crop the found object in the scene and "straighten" it back using warpPerspective and the inverse homography matrix.
Meaning, let's say I have this SURF result:
Now, I would like to crop the found object in the scene:
and "straighten" only the cropped image with warpPerspective using the reversed homography matrix. The result I'm aiming at is that I'll get an image containing, roughly, only the object, and some distorted leftovers from the original scene (as the cropping is not a 100% the object alone).
Cropping the found object, and finding the homography matrix and reversing it are simple enough. Problem is, I can't seem to understand the results from warpPerspective. Seems like the resulting image contains only a small portion of the cropped image, and in a very large size.
While researching warpPerspective I found that the resulting image is very large due to the nature of the process, but I can't seem to wrap my head around how to do this properly. Seems like I just don't understand the process well enough. Would I need to warpPerspective the original (not cropped) image and than crop the "straightened" object?
Any advice?
try this.
Given that you have the unconnected contour of your object (e.g. the outer corner points of the box contour), you can transform those points with your inverse homography and adjust that homography so that the result of the transformation lands in the top-left region of the image.
compute where those object points will be warped to (use the inverse homography and the contour points as input):
cv::Rect computeWarpedContourRegion(const std::vector<cv::Point> & points, const cv::Mat & homography)
{
    std::vector<cv::Point2f> transformed_points(points.size());
    for(unsigned int i=0; i<points.size(); ++i)
    {
        // warp the points
        transformed_points[i].x = points[i].x * homography.at<double>(0,0) + points[i].y * homography.at<double>(0,1) + homography.at<double>(0,2);
        transformed_points[i].y = points[i].x * homography.at<double>(1,0) + points[i].y * homography.at<double>(1,1) + homography.at<double>(1,2);
    }
    // dehomogenization necessary?
    if(homography.rows == 3)
    {
        float homog_comp;
        for(unsigned int i=0; i<transformed_points.size(); ++i)
        {
            homog_comp = points[i].x * homography.at<double>(2,0) + points[i].y * homography.at<double>(2,1) + homography.at<double>(2,2);
            transformed_points[i].x /= homog_comp;
            transformed_points[i].y /= homog_comp;
        }
    }
    // now find the bounding box for these points:
    cv::Rect boundingBox = cv::boundingRect(transformed_points);
    return boundingBox;
}
modify your inverse homography (the result of computeWarpedContourRegion and inverseHomography are the inputs):
cv::Mat adjustHomography(const cv::Rect & transformedRegion, const cv::Mat & homography)
{
    if(homography.rows == 2) throw("homography adjustment for affine matrix not implemented yet");
    // unit matrix
    cv::Mat correctionHomography = cv::Mat::eye(3,3,CV_64F);
    // correction translation
    correctionHomography.at<double>(0,2) = -transformedRegion.x;
    correctionHomography.at<double>(1,2) = -transformedRegion.y;
    return correctionHomography * homography;
}
you will call something like
cv::warpPerspective(objectWithBackground, output, adjustedInverseHomography, sizeOfComputeWarpedContourRegionResult);
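Putting the two helpers together would look roughly like this (a sketch; objectWithBackground, contourPoints and inverseHomography are placeholder names):
// Sketch: wire the two helpers above together.
cv::Rect warpedRegion = computeWarpedContourRegion(contourPoints, inverseHomography);
cv::Mat adjustedInverseHomography = adjustHomography(warpedRegion, inverseHomography);
cv::Mat output;
cv::warpPerspective(objectWithBackground, output, adjustedInverseHomography, warpedRegion.size());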
hope this helps =)
I am developing a program where I receive 2 pictures of the same scene, but one of them has a distortion:
Mat img_1 = imread(argv[1], 0); // normal picture
Mat img_2 = imread(argv[2], 0); // picture with distortion
I would like to evaluate the distortion's pattern and be able to compensate for it.
I am already able to find the keypoints, and I would like to know if I can use the function cv::findHomography for this... In any case, how would I do so?
A homography will map one image plane to another. That means that if your distortion can be expressed as a 3x3 matrix, findHomography is what you want. If not, then it isn't what you want. It takes two vectors of corresponding points as input and will return the 3x3 matrix that best represents the transform between those points.
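One way to check whether a single 3x3 matrix explains the distortion (a sketch, assuming pts_1 and pts_2 are vector<Point2f> holding the matched keypoint locations from the two images):
// Sketch: fit a homography and measure the mean reprojection error.
// A large residual suggests the mapping is not a plane-to-plane transform.
Mat H = findHomography(pts_1, pts_2, CV_RANSAC);
std::vector<Point2f> projected;
perspectiveTransform(pts_1, projected, H);
double meanError = 0;
for (size_t i = 0; i < pts_2.size(); ++i) {
    Point2f d = projected[i] - pts_2[i];
    meanError += std::sqrt(d.x * d.x + d.y * d.y);
}
meanError /= pts_2.size();   // in pixels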
I'm currently working on Image stitching using OpenCV 2.3.1 on Visual Studio 2010, but I'm having some trouble.
Problem Description
I'm trying to write code for stitching multiple images from a few cameras (about 3 or 4); i.e. the code should keep stitching images until I ask it to stop.
The following is what I've done so far:
(For simplification, I'll replace some part of the code with just a few words)
1. Reading frames (images) from 2 cameras (currently I'm just working on 2 cameras).
2. Feature detection and descriptor calculation (SURF).
3. Feature matching using FlannBasedMatcher.
4. Removing outliers and calculating the homography from the inliers using RANSAC.
5. Warping one of the two images.
For step 5, I followed the answer in the following thread and just changed some parameters:
Stitching 2 images in opencv
However, the result is terrible.
I just uploaded the result to YouTube; of course, only those who have the link will be able to see it.
http://youtu.be/Oy5z_7LeaMk
My code is shown below:
(Only crucial parts are shown)
VideoCapture cam1, cam2;
cam1.open(0);
cam2.open(1);
while (1)
{
    Mat frm1, frm2;
    cam1 >> frm1;
    cam2 >> frm2;

    //(SURF detection, descriptor calculation
    //and matching using FlannBasedMatcher)

    double max_dist = 0; double min_dist = 100;

    //-- Quick calculation of max and min distances between keypoints
    for( int i = 0; i < descriptors_1.rows; i++ )
    {
        double dist = matches[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    //(Draw only "good" matches,
    //i.e. whose distance is less than 3*min_dist)

    vector<Point2f> frame1;
    vector<Point2f> frame2;
    for( int i = 0; i < good_matches.size(); i++ )
    {
        //-- Get the keypoints from the good matches
        frame1.push_back( keypoints_1[ good_matches[i].queryIdx ].pt );
        frame2.push_back( keypoints_2[ good_matches[i].trainIdx ].pt );
    }

    Mat H = findHomography( Mat(frame1), Mat(frame2), CV_RANSAC );
    cout << "Homography: " << H << endl;

    /* warp the image */
    Mat warpImage2;
    warpPerspective(frm2, warpImage2, H, Size(frm2.cols, frm2.rows), INTER_CUBIC);

    Mat final(Size(frm2.cols*3 + frm1.cols, frm2.rows), CV_8UC3);
    Mat roi1(final, Rect(frm1.cols, 0, frm1.cols, frm1.rows));
    Mat roi2(final, Rect(2*frm1.cols, 0, frm2.cols, frm2.rows));
    warpImage2.copyTo(roi2);
    frm1.copyTo(roi1);
    imshow("final", final);
What else should I do to make the stitching better?
Besides, is it reasonable to fix the homography matrix instead of computing it over and over?
What I mean is to specify the angle and displacement between the 2 cameras myself, so as to derive a homography matrix that does what I want.
Thanks. :)
It sounds like you are going about this sensibly, but if you have access to both of the cameras, and they will remain stationary with respect to each other, then calibrating offline, and simply applying the transformation online will make your application more efficient.
One point to note is, you say you are using the findHomography function from OpenCV. From the documentation, this function:
Finds a perspective transformation between two planes.
However, your points are not restricted to a specific plane as they are imaging a 3D scene. If you wanted to calibrate offline, you could image a chessboard with both cameras, and the detected corners could be used in this function.
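A sketch of that offline option (frm1/frm2 and the board size are assumptions, not part of the original answer):
// Sketch: estimate a fixed homography offline from a chessboard seen by both cameras.
Size boardSize(9, 6);                      // number of inner corners on the assumed board
vector<Point2f> corners1, corners2;
bool ok1 = findChessboardCorners(frm1, boardSize, corners1);
bool ok2 = findChessboardCorners(frm2, boardSize, corners2);
if (ok1 && ok2)
{
    // corners come back in the same board order, so they correspond one to one
    Mat H_fixed = findHomography(corners2, corners1, CV_RANSAC);
    // H_fixed can be saved and reused online instead of re-estimating it every frame
}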
Alternatively, you may like to investigate the Fundamental matrix, which can be calculated with a similar function. This matrix describes the relative position of the cameras, but some work (and a good textbook) will be required to extract the rotation and translation from it.
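For completeness, the similar function mentioned here, again as a sketch reusing the matched point vectors from the question:
// Sketch: the fundamental matrix from the same matched points. It encodes the
// epipolar geometry between the two views rather than a plane-to-plane mapping.
Mat F = findFundamentalMat(Mat(frame1), Mat(frame2), CV_FM_RANSAC);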
If you can find it, I would strongly recommend having a look at Part II: "Two-View Geometry" in the book "Multiple View Geometry in Computer Vision" by Richard Hartley and Andrew Zisserman, which goes through the process in detail.
I have been working lately on image registration. My algorithm takes two images, calculates the SURF features, finds correspondences, finds the homography matrix and then stitches both images together. I did it with the following code:
void stich(Mat base, Mat target, Mat homography, Mat& panorama) {
    Mat corners1(1, 4, CV_32F);
    Mat corners2(1, 4, CV_32F);
    Mat corners(1, 4, CV_32F);
    vector<Mat> planes;

    /* compute corners of warped image */
    corners1.at<float>(0,0) = 0;
    corners2.at<float>(0,0) = 0;
    corners1.at<float>(0,1) = 0;
    corners2.at<float>(0,1) = target.rows;
    corners1.at<float>(0,2) = target.cols;
    corners2.at<float>(0,2) = 0;
    corners1.at<float>(0,3) = target.cols;
    corners2.at<float>(0,3) = target.rows;
    planes.push_back(corners1);
    planes.push_back(corners2);
    merge(planes, corners);
    perspectiveTransform(corners, corners, homography);

    /* compute size of resulting image and allocate memory */
    double x_start = min( min( (double)corners.at<Vec2f>(0,0)[0], (double)corners.at<Vec2f>(0,1)[0]), 0.0);
    double x_end   = max( max( (double)corners.at<Vec2f>(0,2)[0], (double)corners.at<Vec2f>(0,3)[0]), (double)base.cols);
    double y_start = min( min( (double)corners.at<Vec2f>(0,0)[1], (double)corners.at<Vec2f>(0,2)[1]), 0.0);
    double y_end   = max( max( (double)corners.at<Vec2f>(0,1)[1], (double)corners.at<Vec2f>(0,3)[1]), (double)base.rows);

    /* create an image with the same channels and depth as target, and the proper size */
    panorama.create(Size(x_end - x_start + 1, y_end - y_start + 1), target.depth());
    planes.clear();
    /* planes should have the same number of channels as target */
    for (int i = 0; i < target.channels(); i++) {
        planes.push_back(panorama);
    }
    merge(planes, panorama);

    // create translation matrix in order to copy both images to correct places
    Mat T;
    T = Mat::zeros(3, 3, CV_64F);
    T.at<double>(0,0) = 1;
    T.at<double>(1,1) = 1;
    T.at<double>(2,2) = 1;
    T.at<double>(0,2) = -x_start;
    T.at<double>(1,2) = -y_start;

    // copy base image to correct position within output image
    warpPerspective(base, panorama, T, panorama.size(), INTER_LINEAR | CV_WARP_FILL_OUTLIERS);

    // change homography to take necessary translation into account
    gemm(T, homography, 1, T, 0, T);

    // warp second image and copy it to output image
    warpPerspective(target, panorama, T, panorama.size(), INTER_LINEAR);

    //tidy
    corners.release();
    T.release();
}
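A hypothetical call site, assuming the homography was estimated so that it maps the target image into the base image's coordinate frame:
// Hypothetical usage of the function above.
Mat H = findHomography(targetPoints, basePoints, CV_RANSAC);
Mat panorama;
stich(baseImage, targetImage, H, panorama);
imwrite("panorama.jpg", panorama);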
If you have any questions, I will try to answer them.