Are there any UV Coordinates (or similar) for UIImageView? - ios

I have a simple UIImageView in my view, but I can't seem to find anything in Apple's documentation for changing the UV coordinates of a UIImageView. To convey my idea, this GIF file should preview how changing the coordinates of the 4 vertices can change how the image is displayed in the final UIImageView.
I also tried to find a solution online (beyond the documentation) and found none.
I use Swift.

You can achieve that very animation using UIView.transform or CALayer.transform. You'll need basic geometry to convert UV coordinates to a CGAffineTransform or CATransform3D.
I assumed an affine transform would suffice, because the transform in your animation is affine (parallel lines stay parallel). In that case 3 vertices are free -- the 4th is constrained by the other 3.
If you have 3 vertices, you can compute the affine transform matrix using: Affine transformation algorithm
To achieve the infinite repeat, use UIImageResizingMode.Tile.
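A minimal Swift sketch of that idea (the target points and asset name are hypothetical; the target points are expressed relative to the view's untransformed top-left corner):

import UIKit

/// Affine transform mapping a view's top-left, top-right and bottom-left
/// corners onto three target points; the bottom-right corner follows.
func affineTransform(for bounds: CGRect,
                     topLeft p0: CGPoint,
                     topRight p1: CGPoint,
                     bottomLeft p2: CGPoint) -> CGAffineTransform {
    let w = bounds.width, h = bounds.height
    // The matrix columns are where the unit x and y axes end up.
    return CGAffineTransform(a: (p1.x - p0.x) / w, b: (p1.y - p0.y) / w,
                             c: (p2.x - p0.x) / h, d: (p2.y - p0.y) / h,
                             tx: p0.x, ty: p0.y)
}

let imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
// Tile the image so the pattern repeats under the transform.
let pattern = UIImage(named: "pattern")! // hypothetical asset
imageView.image = pattern.resizableImage(withCapInsets: .zero, resizingMode: .tile)

// UIKit applies `transform` about the layer's anchorPoint (the centre by
// default); anchoring at the top-left corner makes the matrix above behave
// like a plain origin-based map.
let origin = imageView.frame.origin
imageView.layer.anchorPoint = .zero
imageView.layer.position = origin
imageView.transform = affineTransform(for: imageView.bounds,
                                      topLeft: CGPoint(x: 20, y: 0),
                                      topRight: CGPoint(x: 220, y: 30),
                                      bottomLeft: CGPoint(x: 0, y: 200))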

Related

Create perspective affine transform matrix based on point coordinates in iOS

Assuming I have a view (or image) like this:
And I'd like to transform it to look like this:
How do I create a CATransform3D matrix for that, based on the coordinates of the 4 corners of the shape I'd like the image to be transformed to?
You can't do that with a CGAffineTransform.
A CGAffineTransform is an affine transformation, meaning all parallel lines remain parallel in all cases. You can only stretch, skew, rotate, scale and translate the object.
It is possible with a CATransform3D, but there is no built-in function to derive the transform from the projection (which is what you are asking for). You'll need to do the math yourself. I can't help you with that, but someone used to 3D graphics will do it in a breeze.
I would go down the path of using a third-party framework that does the transformation for you. Take a look at this
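That said, the math can be done directly. A sketch in Swift of the standard unit-square-to-quadrilateral homography, embedded in a CATransform3D (untested; the function name and corner ordering are my own conventions, not an Apple API):

import QuartzCore

/// Homography mapping the unit square (0,0)-(1,0)-(1,1)-(0,1) onto the
/// quadrilateral p0-p1-p2-p3 (assumed non-degenerate), embedded in a
/// CATransform3D. Core Animation multiplies row vectors, so the
/// perspective terms go in m14/m24.
func quadTransform(_ p0: CGPoint, _ p1: CGPoint,
                   _ p2: CGPoint, _ p3: CGPoint) -> CATransform3D {
    let dx1 = p1.x - p2.x, dy1 = p1.y - p2.y
    let dx2 = p3.x - p2.x, dy2 = p3.y - p2.y
    let sx  = p0.x - p1.x + p2.x - p3.x
    let sy  = p0.y - p1.y + p2.y - p3.y
    let den = dx1 * dy2 - dx2 * dy1
    let g = (sx * dy2 - dx2 * sy) / den
    let h = (dx1 * sy - sx * dy1) / den
    var t = CATransform3DIdentity
    t.m11 = p1.x - p0.x + g * p1.x; t.m12 = p1.y - p0.y + g * p1.y; t.m14 = g
    t.m21 = p3.x - p0.x + h * p3.x; t.m22 = p3.y - p0.y + h * p3.y; t.m24 = h
    t.m41 = p0.x;                   t.m42 = p0.y
    return t
}

To apply it to a w×h layer, prepend a scale by (1/w, 1/h) so the layer's corners land on the unit square, and remember that layer transforms are applied about the anchorPoint.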

Xna transform a 2d texture like photoshop transforming tool

I want to create the same transforming effect in XNA 4 as Photoshop does:
Transform tool is used to scale, rotate, skew, and just distort the perspective of any graphic you’re working with in general
This covers all the things I want to do in XNA with any texture: http://www.tutorial9.net/tutorials/photoshop-tutorials/using-transform-in-photoshop/
Skew: Skew transformations slant objects either vertically or horizontally.
Distort: Distort transformations allow you to stretch an image in ANY direction freely.
Perspective: The Perspective transformation allows you to add perspective to an object.
Warping an object (the one I'm most interested in).
Hope you can help me with a tutorial or something already made :D. I think vertices might be the solution, but I'm not sure.
Thanks.
Probably the easiest way to do this in XNA is to pass a Matrix to SpriteBatch.Begin. This is the overload you want to use: MSDN (the transformMatrix argument).
You can also do this with raw vertices, with an effect like BasicEffect by setting its World matrix. Or by setting vertex positions manually, perhaps transforming them with Vector3.Transform().
Most of the transformation matrices you want are provided by the Matrix.Create*() methods (MSDN). For example, CreateScale and CreateRotationZ.
There is no provided method for creating a skew matrix. It should be something like this:
// y' = y + x * tan(angle): shear the y coordinate by the x coordinate.
Matrix skew = Matrix.Identity;
skew.M12 = (float)Math.Tan(MathHelper.ToRadians(36.87f));
(That is to skew by 36.87 degrees, which I pulled off this old answer of mine. You should be able to find the full maths for a skew matrix via Google.)
Remember that transformations happen around the origin of world space (0,0). If you want to, for example, scale around the centre of your sprite, you need to translate that sprite's centre to the origin, apply a scale, and then translate it back again. You can combine matrix transforms by multiplying them. This example (untested) will scale a 200x200 image around its centre:
// Move the sprite's centre to the origin, scale, then move it back.
Matrix myMatrix = Matrix.CreateTranslation(-100, -100, 0)
                * Matrix.CreateScale(2f, 0.5f, 1f)
                * Matrix.CreateTranslation(100, 100, 0);
Note: avoid scaling the Z axis to 0, even in 2D.
For perspective there is CreatePerspective. This creates a projection matrix, which is a specific kind of matrix for projecting a 3D scene onto a 2D display, so it is better used with vertices when setting (for example) BasicEffect.Projection. In this case you're best off doing proper 3D rendering.
For distort, just use vertices and place them manually wherever you need them.

Transforming a Rectangle to a Quadrilateral with CATransform3D and GPUImage

I'm trying to transform a rectangle to a quadrilateral and created a CATransform3D projection matrix as described by hfossli here.
The matrix works with a CALayer without problems, but I would like/have to use it with GPUImage and the GPUImageTransformFilter, which takes a CATransform3D.
It doesn't really work.
The scaling doesn't fit, which means my transformed image gets cut off, or points are not "stretched" to the position they should be. There are some threads which describe the translation from an OpenGL projection to a proper CATransform3D projection matrix, like here.
It involves some scaling and y-flipping.
So I tried to scale and flip in reverse order, hoping to be able to use this CATransform3D matrix with the GPUImageTransformFilter, but couldn't really get it to work.
Has anyone solved this?
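For what it's worth, the conversion described above amounts to conjugating the pixel-space matrix with the map between pixel coordinates (y down) and a normalized, y-up clip space such as the quad GPUImage renders. An untested Swift sketch of that idea (the helper name is mine, and the assumption that GPUImageTransformFilter works on a [-1, 1] quad should be verified against the GPUImage source):

import QuartzCore

// N maps normalized coords to pixels: x = (nx + 1) * w / 2,
// y = (1 - ny) * h / 2 (the negative scale flips the y direction).
func normalizedTransform(from pixelTransform: CATransform3D,
                         width w: CGFloat, height h: CGFloat) -> CATransform3D {
    var n = CATransform3DMakeScale(w / 2, -h / 2, 1)
    n = CATransform3DConcat(n, CATransform3DMakeTranslation(w / 2, h / 2, 0))
    // Go to pixel space, apply the transform there, and come back.
    return CATransform3DConcat(CATransform3DConcat(n, pixelTransform),
                               CATransform3DInvert(n))
}

The result would then be assigned to the filter's transform3D property.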

Transform position of point from one perspective into another

I'm trying to convert the position of a point which was filmed with a freely moving camera (local space) into the position in an image of the same scene (global space). The position of the point is given in local space and I need to calculate it in global space. I have markers distributed all over the scene to have corresponding points in both global and local space, so I can calculate the perspective transform.
I tried to calculate the perspective transform matrix by comparing the points of corresponding markers in global and local space with the help of JavaCV (cvGetPerspectiveTransform(localMarker, globalMarker, mmat)). Then I transform the position of the point in local space with the help of the perspective transform matrix (cvPerspectiveTransform(localFieldPoints, globalFieldPoints, mmat)).
I thought that would be enough to solve my problem, but it doesn't quite work. I also noticed that when I calculate the perspective transform matrix from different markers in one specific image of the video, I get different perspective transform matrices. If I understood everything correctly, this shouldn't happen, because the perspective is always the same here, so I should always get the same perspective transform matrix, shouldn't I?
Because I'm quite new to all of this and this was my first attempt, I just wanted to know if the method I used is generally right, or whether it should be done differently. Maybe I just missed something?
EDIT:
Again, I have one image of the complete scene I'm looking at, and a video from a camera which moves freely in the scene. Now I take every image of the video and compare it with the image of the complete scene. (I used different cameras for taking the image and the video, so the camera intrinsics aren't the same. Could that be the problem?)
Perspective Transform Screenshot.
On the right side I have the image of the scene; on the left, one image of the video. The red circle in the left video image is the given point. The red square in the right image is the point calculated with the help of the perspective transform. As you can see, the calculated point isn't at the right position.
What I meant by "I get different perspective transform matrices" is that when I calculate a perspective transform matrix with the help of marker "0E3E", I get a different matrix than when using marker "0272".

How to do non-perspective image warping in OpenCV?

I have an image where the user selects an arbitrary 4-cornered polygon.
I want to stretch this polygon into the entire image.
I've tried doing it with a homography and cvWarpPerspective,
but the result was a perspective transformation, which is not what I want.
Any ideas how to do this with OpenCV/EMGU?
Thanks,
SW
What you're trying should work. Calculate the homography by making the 4 corners of the polygon correspond to (0,0), (0,height), (width,0) and (width,height).
Have a look at GetPerspectiveTransform
I think what you want is a reversal of perspective transform.
Here is what you should consider doing. Assume that the polygon was originally at locations (x1,y1)...(x4,y4) on your screen spanning (0,0)...(w,h).
Applying a perspective transform using cvWarpPerspective/getPerspectiveTransform, you would be able to map the original coordinates to the known coordinates. So you should basically multiply the known coordinates by the inverse of the perspective transform matrix (unless it is non-invertible, in which case you must add a delta term to the homogeneous coordinate).
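To illustrate the inverse mapping itself, a small library-agnostic Swift sketch (the matrix values would come from getPerspectiveTransform; simd here just stands in for whatever matrix type your bindings provide):

import simd

// Apply the inverse of a 3x3 homography H to a 2D point by lifting it to
// homogeneous coordinates and dividing by the w component afterwards.
func applyInverseHomography(_ H: simd_double3x3, to p: SIMD2<Double>) -> SIMD2<Double> {
    let q = H.inverse * SIMD3(p.x, p.y, 1)   // assumes H is invertible
    return SIMD2(q.x / q.z, q.y / q.z)       // perspective divide
}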
