I have two images, one is the result of applying an affine transform to the other.
I can register them using a homography by extracting keypoints with OpenCV's ORB_create function.
However, I want to calculate the Affine matrix needed for this transformation.
Is there any way of doing it simply by having the two images?
Detect a rotated rectangle and use its corners to get your transformation matrix.
Use getPerspectiveTransform or getAffineTransform.
Edit: regarding rotated rectangle detection:
Please check the OpenCV tutorial on finding contours and detecting rotated rectangles: Creating Bounding rotated boxes and ellipses for contours
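If you would rather work directly from the two images, here is a minimal sketch that estimates the 2x3 affine matrix from ORB matches with estimateAffine2D (the file names are placeholders):

```python
import cv2
import numpy as np

img1 = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE)   # hypothetical names
img2 = cv2.imread("warped.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching suits ORB's binary descriptors
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:50]  # keep best matches

src = np.float32([kp1[m.queryIdx].pt for m in matches])
dst = np.float32([kp2[m.trainIdx].pt for m in matches])

# Robustly estimate the 2x3 affine matrix; RANSAC rejects bad matches
A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
```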
Consider the following problem:
I have the original image A saved as "A.png".
Moreover, I also have a camera video feed that shows (possibly with some perspective transformation) an image of A, denoted Va, with some level of radial distortion.
I also have a homography from A to Va, and its inverse.
How could I undistort Va? Note that I do not want to undo the perspective transformation, just remove the radial distortion from Va.
Example: I have
- a fully mapped and undistorted reference image (including real-world size),
- an image from a video frame (the left image),
- and a homography and its inverse between those two.

In our use case, the left image would have radial distortion, but we would like to remove it without applying a simple backprojection (which would create artifacts).
Undistortion is the process of transforming a distorted image (e.g. one with a fisheye pattern) into its undistorted version.
In this case your video frames do not suffer from that kind of distortion, and since you have already determined the homography matrix, you just need to apply a perspective transformation.
You might need to invert the homography matrix if you need to reverse the transformation direction.
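A minimal sketch, assuming H maps the reference image A into the video frame (the file names are placeholders):

```python
import cv2
import numpy as np

frame = cv2.imread("frame.png")           # hypothetical file names
H = np.load("H_A_to_frame.npy")           # homography from A to the frame

h, w = frame.shape[:2]
# Warp the frame back toward A's geometry; WARP_INVERSE_MAP tells
# warpPerspective to use H directly as the output-to-input mapping,
# which is equivalent to applying H's inverse to the frame
rectified = cv2.warpPerspective(frame, H, (w, h),
                                flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
```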
I have two images.
Say one is a 10x10 chessboard, which we call trainImage, and the other is a queryImage showing the same chessboard photographed with a phone camera. Now I have to find the position of the camera in (x, y, z) coordinates. Using OpenCV and feature detection I have been able to identify the chessboard in the photographed image, but how do I go about calculating the transformations on the chessboard so that I can eventually compute the position of the camera? Any pointers on where to start looking would also be really appreciated. Thanks.
Edit:
Reframing the problem statement: I have two images, trainImage and queryImage. I need to find the position of the camera, i.e. (x, y, z), if we assume that trainImage is at (0, 0, 0) in queryImage. From some reading I found that I need rvec (the rotation vector) and tvec (the translation vector).
When I use the findHomography() function on the two images I get a 3x3 homography matrix, with which I can find pixel points (x, y) in queryImage by multiplying the pixel points (x, y) in trainImage. How can I use this homography matrix to calculate tvec and rvec?
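One possible route is decomposeHomographyMat, which needs the camera intrinsic matrix K. A minimal sketch with placeholder point and intrinsics values; note it returns up to four candidate solutions that you must disambiguate (e.g. by keeping the one that places the scene in front of the camera):

```python
import cv2
import numpy as np

# Matched pixel coordinates from feature detection (placeholder values)
train_pts = np.float32([[0, 0], [100, 0], [100, 100], [0, 100], [50, 50]])
query_pts = np.float32([[10, 20], [115, 15], [120, 120], [5, 110], [62, 66]])

# Hypothetical intrinsics from a prior cv2.calibrateCamera run
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

H, _ = cv2.findHomography(train_pts, query_pts, cv2.RANSAC)

# Decompose the homography into candidate rotation/translation pairs
num, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
for R, t in zip(Rs, ts):
    rvec, _ = cv2.Rodrigues(R)   # rotation vector from rotation matrix
    print(rvec.ravel(), t.ravel())
```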
I have a dataset of images with faces. I also have, for each face in the dataset, a set of 66 2D points that correspond to the face landmarks (nose, eyes, face shape, mouth).
So basically I have the shape of my face in terms of 2D points from my image.
Do you know any algorithm I can use that can rotate my shape so that the face is straight? Let's say the pan angle is 30 degrees; I want the shape rotated by those 30 degrees so that it ends up at 0 degrees of pan. I have illustrated below what I mean.
Basically, you can consider the shapes illustrated above as the outlines for my images, represented in 2D. I want to rotate the points of my first shape so that they look like the second shape. A shape is made up of a set of 66 2D points, which are basically pixel coordinates. All I want is to find the correspondence of each of those 66 points so that the new shape is rotated by a certain degree on the pan angle.
From your question, I assume you either have the rotation parameters (e.g. degrees in x, y) or the point correspondences (since you have a database of matched points). Thus you either need to apply, or estimate and apply, a 2D similarity transformation for image alignment/registration. See also the response to this question: face alignment algorithm on images
- From rotation angle to new point locations: define a 2D rotation matrix R and transform your point coordinates with it.
- From point correspondences between shape A and shape B to rotation: estimate a 2D similarity transform (image alignment) using 3 or more matching points.
- From either rotation or point correspondences to a warped image: given the similarity transform, map image values (accounting for interpolation and missing values) through the underlying coordinate transformation over the entire image grid.
(image courtesy of Denis Simakov, AAM Slides)
Most of these are already implemented in OpenCV and MATLAB. See also the background and relevant methods around Active Shape and Active Appearance Models (Tim Cootes' page includes binaries and background material).
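For example, a minimal sketch in OpenCV, assuming two matched 66-point landmark arrays (here the second shape is fabricated from the first by a 30-degree rotation plus a translation, purely for demonstration; estimateAffinePartial2D fits rotation + uniform scale + translation):

```python
import cv2
import numpy as np

# Two matched 66-point shapes (placeholder values)
rng = np.random.default_rng(0)
pts_a = rng.uniform(0, 200, (66, 2)).astype(np.float32)
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=np.float32)
pts_b = pts_a @ R.T + np.float32([15, -10])

# Estimate the 2D similarity transform between the two point sets
M, inliers = cv2.estimateAffinePartial2D(pts_a, pts_b)

# Apply it to the shape points...
aligned = cv2.transform(pts_a.reshape(-1, 1, 2), M).reshape(-1, 2)
# ...or warp a whole image with the same 2x3 matrix:
# warped = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
```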
I'm now trying to analyze the perspective transform/homography matrix between two images capturing the same object (e.g., a rectangle) but at different perspectives/shooting angles. The perspective transform can be derived by using the function getPerspectiveTransform in OpenCV 2.3.1. I want to find the corresponding rotation and translation matrices.
The output of getPerspectiveTransform is a 3x3 matrix which I can use directly to warp the source image into the target image. But my question is: how can I find the rotation and translation matrices based on the obtained 3x3 matrix?
I was looking into the function decomposeProjectionMatrix for the corresponding rotation and translation matrices, but its input is required to be a 3x4 projection matrix. How can I relate the perspective transformation (i.e., a 3x3 matrix) to the 3x4 projection matrix? Am I on the right track?
Thank you very much.
The information contained in the homography matrix (returned by getPerspectiveTransform) is not enough to extract rotation/translation. The missing column is key to correctly finding the angles.
The good news is that in some scenarios, you can use the solvePnP() function to extract the desired parameters from two sets of points.
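A minimal sketch of that approach, assuming you know the rectangle's real-world corner coordinates and the camera intrinsics (all numeric values below are placeholders):

```python
import cv2
import numpy as np

# Real-world rectangle corners on the Z = 0 plane (placeholder size, metres)
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.2, 0.1, 0.0],
                          [0.0, 0.1, 0.0]], dtype=np.float32)
# Pixel coordinates of those corners in the photo (placeholder values)
image_points = np.array([[110, 95], [480, 90], [500, 310], [120, 330]],
                        dtype=np.float32)
# Camera intrinsics from a prior calibration (placeholder values)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix from the rotation vector
```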
Also, this question asks about the same thing; it should help:
Analyze camera movement with OpenCV
I have an image where the user selects an arbitrary 4-cornered polygon.
I want to stretch this polygon into the entire image.
I've tried doing it with a homography and cvWarpPerspective, but the result was a perspective transformation, which is not what I want.
Any ideas how to do this with OpenCV/EMGU?
Thanks,
SW
What you're trying should work. Calculate the homography by making the 4 corners of the polygon correspond to (0, 0), (0, height), (width, 0), and (width, height).
Have a look at GetPerspectiveTransform.
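A minimal sketch, assuming the four user-selected corners are ordered top-left, top-right, bottom-right, bottom-left (the file name and coordinates are placeholders):

```python
import cv2
import numpy as np

img = cv2.imread("input.png")            # hypothetical file name
h, w = img.shape[:2]

# User-selected polygon corners: TL, TR, BR, BL (placeholder values)
src = np.float32([[120, 80], [400, 95], [410, 300], [100, 290]])
# Target: the full image rectangle, in the same corner order
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

H = cv2.getPerspectiveTransform(src, dst)
stretched = cv2.warpPerspective(img, H, (w, h))
```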
I think what you want is a reversal of the perspective transform.
Here is what you should consider doing. Assume that the polygon originally sat at locations (x1, y1) ... (x4, y4) on your screen, which spans (0, 0) ... (w, h).
By applying a perspective transform using cvWarpPerspective/getPerspectiveTransform you can map the original coordinates to the known coordinates. So you should basically multiply the known coordinates by the inverse of the perspective transform matrix (unless it is non-invertible, in which case you must add a delta term to the homogeneous coordinate term).
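A minimal sketch of that back-mapping, reusing placeholder values; perspectiveTransform performs the homogeneous multiply-and-divide for you:

```python
import cv2
import numpy as np

# Forward transform: polygon corners -> full-image corners (placeholder values)
src = np.float32([[120, 80], [400, 95], [410, 300], [100, 290]])
dst = np.float32([[0, 0], [640, 0], [640, 480], [0, 480]])
H = cv2.getPerspectiveTransform(src, dst)

H_inv = np.linalg.inv(H)  # assumes H is invertible

# Map known full-image coordinates back to the original polygon;
# perspectiveTransform handles the homogeneous divide internally
known = np.float32([[[0, 0]], [[640, 0]], [[640, 480]], [[0, 480]]])
original = cv2.perspectiveTransform(known, H_inv)  # ~ src, up to rounding
```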