Reverse image distortion - image-processing

Say I have the following image,
which shows a circle and a square captured by a camera positioned at 30° to the ground.
This is the scene from an orthogonal POV:
This is the camera:
Is it possible to reverse the distortion in order to obtain the second image (orthogonal POV) from the first one (distorted image) without knowing the camera angle?
Regards

To invert a perspective transformation, also known as a homography, you need to identify 8 parameters. For this you need to know the (X, Y) coordinates of four points in both the original and the undistorted image.
A possibility is to use the four corners of the square, but this won't be very accurate. Alternatively, use two corners and the tangency points of the lines from these corners to the circle.
If you don't know the relative sizes of the square and the circle and their distance, you are a little stuck.
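For concreteness, here is a minimal OpenCV (Python) sketch of this approach: four hand-picked correspondences determine the homography exactly, and warping with it undoes the distortion. The coordinates and file names below are made-up placeholders.

```python
import cv2
import numpy as np

# Hypothetical pixel coordinates of four reference points in the
# distorted image (e.g. the square's corners), measured by hand.
src = np.float32([[120, 310], [420, 300], [460, 520], [90, 540]])

# Where those same points should land in the rectified (orthogonal)
# view -- here, the corners of a 300x300-pixel square.
dst = np.float32([[0, 0], [300, 0], [300, 300], [0, 300]])

# Four correspondences determine the 8 homography parameters exactly.
H = cv2.getPerspectiveTransform(src, dst)

img = cv2.imread("distorted.png")
rectified = cv2.warpPerspective(img, H, (300, 300))
cv2.imwrite("rectified.png", rectified)
```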

Related

Defining a 3D scene from a photo of a circle

Given a photo containing a circle, for example this photo of a fountain:
is it possible to define the 3D position and rotation of the fountain in relation to the camera?
I realise we have to define the scale, so let's say the fountain is 2m wide (the diameter of the circle formed by the inner rim of the fountain is 2m).
So assuming the circle is a perfect circle, and defining the diameter as 2m, is it possible to determine how the circle and the camera relate spatially? I don't know the camera matrix or anything; the only information I have is the picture.
I specifically want to determine the 3D coordinates of a given pixel on the rim of the fountain.
What would be the math and/or OpenCV code to do this?
A circle under perspective is an ellipse, so basically you need an ellipse detector.
This algorithm should work:
Detect all ellipses in the given image.
Filter out the ellipses that you think were not circles originally. (This is not possible using just one camera, so you have to rely on prior knowledge, e.g. knowing that you are taking a photo of a circle.)
Hmm, I stopped typing here, grabbed paper and pen, and started figuring out how to estimate the homography, and it is not that easy! You have to treat the circle as a special case of an ellipse and then try to construct a linear system of equations. However, some quick googling turned up:
https://www.researchgate.net/publication/265212988_Homography_estimation_using_one_ellipse_correspondence_and_minimal_additional_information
http://www.macs.hw.ac.uk/bmvc2006/papers/306.pdf
Seems like a very interesting topic; I am going to spend some time on it later!
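As a starting point for step 1, here is a minimal contour-based ellipse detector in OpenCV (Python). The file name and thresholds are placeholders, and a robust detector would also verify the fit quality before accepting a candidate.

```python
import cv2

# Rough sketch of step 1: fit an ellipse to every sufficiently
# large edge contour. Canny thresholds are arbitrary and would
# need tuning for a real photo.
img = cv2.imread("fountain.jpg", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)

ellipses = []
for c in contours:
    # fitEllipse needs at least 5 points; demand more for stability.
    if len(c) >= 20:
        (cx, cy), (major, minor), angle = cv2.fitEllipse(c)
        ellipses.append(((cx, cy), (major, minor), angle))
```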

Cropping out Extreme Distortion from a Homography

I have a picture of a checkerboard taken from an arbitrary camera angle. I find the two vanishing points corresponding to the two sets of lines that form the checkerboard grid. From these two vanishing points, I compute a homography from the checkerboard plane to the image plane.
I then apply the inverse homography to re-render the checkerboard from a top view. However, for certain images, the re-rendered top view is very large. That is, due to the camera angle, the inverse homography stretches certain parts of the image (i.e. the regions of the image that are very close to one of the vanishing points) to be very large.
This takes up an unnecessarily large amount of memory, and most of the region that becomes highly stretched is stuff I do not need. So, when applying the inverse homography, I would like to avoid rendering regions of the image that will be highly stretched. What is a good way to do this?
(I am coding in MATLAB)
If you just need to render the checkerboard, without the background, you could just extract the four corners of the checkerboard and compute the homography that maps them to the four corners of a square.
Then you can obtain a rectified image of the checkerboard by warping your input image with this homography, taking care to render only the needed region (i.e. the square onto which you map the checkerboard).
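A minimal sketch of this in OpenCV (Python); the corner coordinates and output size are placeholders. The question mentions MATLAB, where fitgeotrans with 'projective' and imwarp with an 'OutputView' (imref2d) should play the same roles.

```python
import cv2
import numpy as np

# Hypothetical pixel coordinates of the checkerboard's four outer
# corners in the input photo (top-left, top-right, bottom-right,
# bottom-left), e.g. clicked by hand or taken from corner detection.
corners = np.float32([[105, 80], [610, 130], [580, 470], [70, 420]])

side = 512  # output square size in pixels; arbitrary
square = np.float32([[0, 0], [side, 0], [side, side], [0, side]])

H = cv2.getPerspectiveTransform(corners, square)

img = cv2.imread("checkerboard.jpg")
# Passing (side, side) as the output size clips the warp to the
# square, so the highly stretched background is never rendered.
top_view = cv2.warpPerspective(img, H, (side, side))
```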

Pixel-Millimeter Proportion

I have a digital image, and I want to make some calculations based on distances in it, so I need the millimeter/pixel ratio. What I'm doing right now is marking two points whose real-world distance I know, calculating the Euclidean distance between them, and then obtaining the ratio.
The question is: can I get the correct millimeter/pixel ratio with only two points, or do I need four points, two for the X-axis and two for the Y-axis?
If your image is of a flat surface and the camera direction is perpendicular to that surface, then your scale factor should be the same in both directions.
If your image is of a flat surface, but it is tilted relative to the camera, then marking out a rectangle of known proportions on that surface would allow you to compute a perspective transform. (See for example this question)
If your image is of a 3D scene, then of course there is no way in general to convert pixels to distances.
If you know the distance between points A and B measured on the picture (say, in inches) and you also know the number of pixels between the points, you can easily calculate the pixels/inch ratio by dividing <pixels>/<inches>.
I suggest taking the points on the picture such that the line through them is either horizontal or vertical, so the calculation is not thrown off if the pixels are rectangular rather than square.
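A minimal sketch of the two-point version, assuming square pixels and a fronto-parallel surface; all numbers are made-up example values.

```python
import math

# Two marked points whose real-world separation is known.
(x1, y1), (x2, y2) = (130, 240), (610, 255)
real_mm = 150.0  # known real-world distance between them

pixel_dist = math.hypot(x2 - x1, y2 - y1)
mm_per_pixel = real_mm / pixel_dist

print(f"{mm_per_pixel:.4f} mm/pixel")
```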

Pose correction for face recognition

I have a dataset of images with faces. For each face in the dataset I also have a set of 66 2D points that correspond to the face landmarks (nose, eyes, face outline, mouth).
So basically I have the shape of my face in terms of 2D points from my image.
Do you know any algorithm I can use that can rotate my shape so that the face shape is straight? Let's say the pan angle is 30 degrees; I want the shape rotated by 30 degrees so that it ends up at 0 degrees of pan. I have illustrated below what I mean.
Basically, you can consider the shapes illustrated above to be the outlines from my images, represented in 2D. I want to rotate the points of the first shape so that they look like the second shape. A shape is made up of a set of 66 2D points, which are basically pixel coordinates. All I want to do is find the correspondence of each of those 66 points so that the new shape is rotated by a certain angle around the pan axis.
From your question, I assume you have either the rotation parameters (e.g. degrees in x, y) or the point correspondences (since you have a database of matched points). So you need to either apply or estimate (and then apply) a 2D similarity transformation for image alignment/registration. See also the response to this question: face alignment algorithm on images
From rotation angle to new point locations: define a 2D rotation matrix R and transform your point coordinates with it.
From point correspondences between shape A and shape B to rotation: estimate a 2D similarity transform (image alignment) using 3 or more matching points.
From either rotation or point correspondences to a warped image: using the coordinate transformation underlying the similarity transform, map image values over the entire image grid (accounting for interpolation and missing values).
(image courtesy of Denis Simakov, AAM Slides)
Most of these are already implemented in OpenCV and MATLAB. See also the background and relevant methods around Active Shape and Active Appearance Models (Tim Cootes page includes binaries and background material).
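A minimal OpenCV/NumPy sketch of the three cases above; the landmark arrays and file name are placeholders (random points stand in for real landmarks).

```python
import cv2
import numpy as np

# 66 landmark points of one face shape as an (N, 2) float array.
src_pts = (np.random.rand(66, 2) * 200).astype(np.float32)
dst_pts = src_pts.copy()  # in practice: the matching frontal shape

# Case 1: known pan angle -- rotate the point coordinates directly.
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=np.float32)
rotated = src_pts @ R.T

# Case 2: point correspondences -- estimate a 2D similarity
# transform (rotation + uniform scale + translation) from matches.
M, _ = cv2.estimateAffinePartial2D(src_pts, dst_pts)

# Case 3: warp the whole image with the estimated 2x3 transform.
img = cv2.imread("face.jpg")
aligned = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
```

Note that a 2D similarity transform only rotates within the image plane; truly undoing an out-of-plane pan needs the 3D model-based methods (Active Shape/Appearance Models) mentioned above.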

How to do non-perspective image warping in OpenCV?

I have an image where the user selects an arbitrary 4-cornered polygon.
I want to stretch this polygon into the entire image.
I've tried doing it with homography and cvWarpPerspective,
but the result was a perspective transformation, which is not what I want.
Any ideas how to do this with OpenCV/EMGU ?
Thanks,
SW
What you're trying should work. Calculate the homography by making the 4 corners of the polygon correspond to (0,0) (0,height) (width,0) and (width,height).
Have a look at GetPerspectiveTransform
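A minimal sketch of that correspondence in OpenCV (Python; EMGU wraps the same functions). The corner values are placeholders.

```python
import cv2
import numpy as np

img = cv2.imread("input.png")
h, w = img.shape[:2]

# User-selected polygon corners (top-left, top-right, bottom-right,
# bottom-left) -- placeholder values.
quad = np.float32([[150, 120], [480, 140], [500, 400], [130, 380]])
full = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Map the quad onto the full image rectangle and warp.
H = cv2.getPerspectiveTransform(quad, full)
stretched = cv2.warpPerspective(img, H, (w, h))
```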
I think what you want is a reversal of the perspective transform.
Here is what you should consider doing. Assume the polygon was originally at locations (x1,y1)...(x4,y4) on your screen (0,0)...(w,h).
Using cvWarpPerspective/getPerspectiveTransform you can map the original coordinates to the known coordinates. So you should basically multiply the known coordinates by the inverse of the perspective transform matrix (unless it is non-invertible, in which case you must add a delta term to the homogeneous coordinate).
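To make the inverse-mapping step concrete, a small NumPy sketch with a made-up matrix; in practice H comes from getPerspectiveTransform.

```python
import numpy as np

# Placeholder 3x3 perspective matrix.
H = np.array([[1.2,  0.1, -30.0],
              [0.0,  1.1, -15.0],
              [1e-4, 2e-4,  1.0]])

p = np.array([250.0, 180.0, 1.0])  # homogeneous image point
q = np.linalg.inv(H) @ p           # back-project through the inverse
x, y = q[0] / q[2], q[1] / q[2]    # divide by the homogeneous term
```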
