I have the coordinates of a quadrilateral which was photographed from two different perspectives. Furthermore, I have the coordinates of one point, but only in one of the two perspectives. I need to transform the coordinates of this point into the perspective from which the second photograph of the quadrilateral was taken. To do this I use OpenCV.
I've calculated the Perspective Transform Matrix:
cv::getPerspectiveTransform(quad1, quad2);
My problem now is that I don't really know how to transform the point with the calculated perspective transform matrix. This is probably quite simple, but I just don't know how to do it.
I recommend the new OpenCV forum for OpenCV-related questions, where I answered a very similar question with a little sample code.
But basically, it's using the
void cv::perspectiveTransform(InputArray src, OutputArray dst, InputArray m)
function.
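A minimal sketch of how this fits together (the corner and point coordinates below are made up for illustration; plug in your own quad1, quad2 and point):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <cstdio>
#include <vector>

int main()
{
    // Corners of the quadrilateral as seen in the first and second photo.
    std::vector<cv::Point2f> quad1 { {10, 10}, {200, 20}, {190, 150}, {15, 140} };
    std::vector<cv::Point2f> quad2 { { 5,  5}, {180, 10}, {185, 160}, {10, 155} };

    // 3x3 perspective transform mapping view 1 onto view 2.
    cv::Mat H = cv::getPerspectiveTransform(quad1, quad2);

    // The point whose coordinates are only known in the first perspective.
    std::vector<cv::Point2f> src { cv::Point2f(50.f, 60.f) };
    std::vector<cv::Point2f> dst;
    cv::perspectiveTransform(src, dst, H);

    std::printf("point in second view: (%.2f, %.2f)\n", dst[0].x, dst[0].y);
    return 0;
}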
Related
For a project, I need to store circles detected on some photos. The problem is that some of these photos are taken from an angle, meaning the circles are ellipses. Is it possible to somehow turn the ellipses into circles?
I thought of enclosing the ellipse in a rectangle and then transforming that rectangle into a square. But the problem strikes me as indeterminate: there are too many possible variations of this approach, and each one gives a different result.
To find the perspective transform, you need four pairs of corresponding coordinates: points in the distorted picture and their ideal positions after perspective correction.
In this case you can calculate the perspective transform matrix with the getPerspectiveTransform function and apply it to correct the whole picture. Example:
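A minimal sketch of that idea (the corner coordinates are made up; in practice they come from wherever you detected the distorted shape):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

// Rectify the perspective of an image, given the 4 corners of the distorted
// region and their ideal positions after correction.
cv::Mat rectify(const cv::Mat& input)
{
    std::vector<cv::Point2f> distorted { {34, 20}, {300, 40}, {290, 280}, {40, 260} };
    std::vector<cv::Point2f> ideal     { { 0,  0}, {256,  0}, {256, 256}, { 0, 256} };

    cv::Mat M = cv::getPerspectiveTransform(distorted, ideal);

    cv::Mat output;
    cv::warpPerspective(input, output, M, cv::Size(256, 256));
    return output;
}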
Assuming I have a view (or image) like this:
And I'd like to transform it to look like this:
How do I create a CATransform3D matrix for that, based on the coordinates of the 4 corners of the shape I'd like the image to be transformed into?
You can't do that with a CGAffineTransform.
A CGAffineTransform is an affine transformation, meaning all parallel lines remain parallel in all cases. You can only stretch, shear, rotate, scale and translate the object.
It will be possible with a 3D transform. But there is no function to get the transform based on the projection (which is what you are asking for). You'll need to do the math yourself. I can't help you with that, but someone used to 3D gaming will do it in a breeze.
I would go down the path of using a 3rd party framework that makes the transformation for you. Take a look at this
I have two sets of points and I want to find the best transformation between them.
In OpenCV, you have the following function:
Mat H = Calib3d.findHomography(src_points, dest_points);
which returns a 3x3 homography matrix, using RANSAC. My problem now is that I only need translation and rotation (and maybe scale); I don't need the affine and perspective parts.
The thing is, my points are only in 2D.
(1) Is there a function to compute something like a homography, but with fewer degrees of freedom?
(2) If there is none, is it possible to extract a 3x3 matrix that does only translation and rotation from the 3x3 homography matrix?
Thanks in advance for any help!
Isa
OpenCV's estimateRigidTransform function is exactly what you need: it returns translation, rotation and scale (pass false for the fullAffine flag). And it DOES use RANSAC (see the source code to be sure of it).
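A minimal sketch of that call (C++ API; note that in recent OpenCV versions the deprecated estimateRigidTransform has been replaced by estimateAffinePartial2D, which plays the same role):

#include <opencv2/video/tracking.hpp>  // estimateRigidTransform
#include <vector>

// src and dst are the two corresponding 2D point sets.
cv::Mat findSimilarity(const std::vector<cv::Point2f>& src,
                       const std::vector<cv::Point2f>& dst)
{
    // fullAffine = false restricts the model to translation, rotation and
    // uniform scale (4 DOF) instead of a full 6-DOF affine transform.
    cv::Mat T = cv::estimateRigidTransform(src, dst, false);

    // T is a 2x3 matrix of the form [ s*cos(a)  -s*sin(a)  tx ]
    //                               [ s*sin(a)   s*cos(a)  ty ]
    return T;
}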
A homography is for 2D points; the third dimension is just for casting the points into 3-dimensional homogeneous coordinates and performing perspective effects. You can always cast points back:
homogeneous [x, y, w]
cartesian [x/w, y/w]
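In code, mapping one point through a 3x3 homography H and casting it back looks roughly like this (a sketch of the math; cv::perspectiveTransform does the same thing for you):

#include <opencv2/core.hpp>

// Apply a 3x3 homography H (CV_64F, as returned by findHomography) to a
// single 2D point via homogeneous coordinates.
cv::Point2f applyHomography(const cv::Mat& H, const cv::Point2f& p)
{
    cv::Mat v = (cv::Mat_<double>(3, 1) << p.x, p.y, 1.0);  // homogeneous [x, y, 1]
    cv::Mat r = H * v;                                       // [x', y', w]
    const double w = r.at<double>(2, 0);
    return cv::Point2f(static_cast<float>(r.at<double>(0, 0) / w),   // cartesian x'/w
                       static_cast<float>(r.at<double>(1, 0) / w));  // cartesian y'/w
}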
However, since you estimate a full homography (8 DOF) instead of a 4-DOF similarity, your result is quite different from what you would get with 4 DOF. A more flexible transformation will fit more points in RANSAC at the expense of distortions in the transformation you care about. Bottom line: don't try to decompose H; instead, fit a similarity or an isometry (also called rigid or Euclidean) directly. The reason they are absent from the library is that they have closed-form solutions, even with the correct least-squares metric in point coordinates, and thus don't require non-linear optimization. In other words, they are very simple.
If you only have rotation and translation, I wrote a quick function to find them (no RANSAC though). It is probably similar to estimateRigidTransform, but more understandable (hopefully):
https://stackoverflow.com/a/18091472/457687
With scale there is still a closed-form solution, but with slightly different formulas for translation and scaling. See "Learning similarity parameters", p. 25.
I want to create the same transforming effect on XNA 4 as Photoshop does:
Transform tool is used to scale, rotate, skew, and just distort the perspective of any graphic you’re working with in general
These are all the things I want to do in XNA with any texture: http://www.tutorial9.net/tutorials/photoshop-tutorials/using-transform-in-photoshop/
Skew: Skew transformations slant objects either vertically or horizontally.
Distort: Distort transformations allow you to stretch an image in ANY direction freely.
Perspective: The Perspective transformation allows you to add perspective to an object.
Warping an object (the one I'm most interested in).
Hope you can help me with some tutorial or something already made :D. I think vertices might be the solution, but I'm not sure.
Thanks.
Probably the easiest way to do this in XNA is to pass a Matrix to SpriteBatch.Begin. This is the overload you want to use: MSDN (the transformMatrix argument).
You can also do this with raw vertices, with an effect like BasicEffect by setting its World matrix. Or by setting vertex positions manually, perhaps transforming them with Vector3.Transform().
Most of the transformation matrices you want are provided by the Matrix.Create*() methods (MSDN). For example, CreateScale and CreateRotationZ.
There is no provided method for creating a skew matrix. It should be something like this:
Matrix skew = Matrix.Identity;
skew.M12 = (float)Math.Tan(MathHelper.ToRadians(36.87f));
(That skews by 36.87 degrees, a value I pulled from this old answer of mine. You should be able to find the full maths for a skew matrix via Google.)
Remember that transformations happen around the origin of world space (0,0). If you want to, for example, scale around the centre of your sprite, you need to translate that sprite's centre to the origin, apply a scale, and then translate it back again. You can combine matrix transforms by multiplying them. This example (untested) will scale a 200x200 image around its centre:
Matrix myMatrix = Matrix.CreateTranslation(-100, -100, 0)
* Matrix.CreateScale(2f, 0.5f, 1f)
* Matrix.CreateTranslation(100, 100, 0);
Note: avoid scaling the Z axis to 0, even in 2D.
For perspective there is CreatePerspective. This creates a projection matrix, which is a specific kind of matrix for projecting a 3D scene onto a 2D display, so it is better used with vertices when setting (for example) BasicEffect.Projection. In this case you're best off doing proper 3D rendering.
For distort, just use vertices and place them manually wherever you need them.
I'm trying to transform a rectangle into a quadrilateral and created a CATransform3D projection matrix as described by hfossli here.
The matrix works with a CALayer without problems, but I would like/have to use it with GPUImage and the GPUImageTransformFilter, which takes a CATransform3D.
It doesn't really work.
The scaling doesn't fit, which means my transformed image gets cut off or points are not "stretched" to the positions they should be. There are some threads which describe the translation from an OpenGL projection to a proper CATransform3D projection matrix, like here.
It involves some scaling and y-flipping.
So I tried to apply the scaling and flipping in reverse order, in the hope of being able to use this CATransform3D matrix with the GPUImageTransformFilter, but I couldn't really get it to work.
Did maybe someone solve this?