I am trying to transform a sprite into a trapezoid. I don't really care about the interpolation, even though I know that without it my image will lose detail. All I really want to do is transform my rectangular sprite into a trapezoid like this:
  /          \
 /            \
/______________\
Has anyone done this with CGAffineTransforms or with cocos2d?
The transformation you're proposing is not affine. Affine transformations have to be undoable. So they can typically:
Scale
Rotate
Shear (make lopsided, like square -> parallelogram)
Translate
But they cannot "squeeze". Notice that the left and right sides of your trapezoid, if extended, would intersect at a particular spot; they're not parallel anymore. So you couldn't "undo" the transformation, because if there were anything to transform at that spot, you couldn't decide where it would transform to. In other words, if a transformation doesn't preserve parallelism, it pinches space, can't be undone, and isn't affine.
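In symbols (standard definitions, stated here for concreteness): an affine map sends a point x to
x' = M x + t
so a direction vector v between two points transforms to M v regardless of where it sits, which is exactly why parallel lines stay parallel. Squashing a rectangle into a trapezoid needs a projective (perspective) transform instead, which divides by a position-dependent term:
x' = (a x + b y + c) / (g x + h y + 1)
y' = (d x + e y + f) / (g x + h y + 1)
That denominator is what "pinches" space, and it is precisely what an affine transform cannot express.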
I don't know that much about transformations in Core Animation, so I hope that mathy stuff helps you find an alternative.
But I do know how you could do it in OpenGL, though it would require you to start over on how you draw your application:
If I'm envisioning the result you want correctly, you want to build your rectangle in 3D, use an affine transformation to rotate it away a little bit, and use a (non-affine) projection transformation to flatten it into a 2D image.
If you're not looking for a 3D effect, but you really just want to pinch in the corners, then you can specify a quad (GL_QUADS) with the points of your trapezoid and map your sprite onto it as a texture.
The easiest thing might be to pre-squeeze your image in a photo editor, save it as a .png with transparency, and draw a rectangle with that image.
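If you stay in Core Animation rather than OpenGL, the rotate-then-project idea can be sketched in Swift like this; the 500-point camera distance and the 30-degree tilt are arbitrary values of mine, not anything from the question:

import UIKit

let layer = CALayer()
layer.frame = CGRect(x: 0, y: 0, width: 200, height: 200)

var t = CATransform3DIdentity
t.m34 = -1.0 / 500.0 // perspective term: the non-affine "flatten into 2D" step
t = CATransform3DRotate(t, .pi / 6, 1, 0, 0) // tilt the top edge away from the viewer
layer.transform = t // on screen, the square layer now reads as a trapezoid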
You need to apply a CATransform3D to the layer of the UIView.
To find the right one, it is easiest to use AGGeometryKit.
#import <AGGeometryKit/AGGeometryKit.h>
UIView *view = ...; // create a view
view.layer.anchorPoint = CGPointZero;
AGKQuad quad = view.layer.quadrilateral;
quad.tl.x += 20; // shift top-left x-value by 20 pixels
quad.tr.x -= 20; // shift top-right x-value by 20 pixels
view.layer.quadrilateral = quad; // the quad is converted to CATransform3D and applied
I have a simple UIImageView in my view, but I can't seem to find any feature in Apple's documentation to change the UV coordinates of this UIImageView. To convey my idea: this GIF file should preview how changing the four vertex coordinates can change how the image gets viewed in the final UIImageView.
I tried to find a solution online too (other than documentation) and found none.
I use Swift.
You can achieve that very animation using UIView.transform or CALayer.transform. You'll need basic geometry to convert UV coordinates to a CGAffineTransform or CATransform3D.
I made an assumption that an affine transform would suffice, because in your animation the transform is affine (parallel lines stay parallel). In that case, 3 vertices are free -- the 4th one is constrained by the other 3.
If you have 3 vertices, you can compute the affine transform matrix using: Affine transformation algorithm
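Here is a sketch of that algorithm in Swift (the function name and the Cramer's-rule formulation are mine; three point pairs pin down the six unknowns of an affine transform):

import CoreGraphics

// CGAffineTransform applies as x' = a*x + c*y + tx, y' = b*x + d*y + ty,
// so each output coordinate gives a 3-unknown linear system over the
// three source points, solved below with Cramer's rule.
func affineTransform(from src: [CGPoint], to dst: [CGPoint]) -> CGAffineTransform? {
    precondition(src.count == 3 && dst.count == 3)
    let (p0, p1, p2) = (src[0], src[1], src[2])
    // A zero determinant means the source points are collinear: no unique answer.
    let det = p0.x * (p1.y - p2.y) - p0.y * (p1.x - p2.x) + (p1.x * p2.y - p2.x * p1.y)
    guard abs(det) > 1e-12 else { return nil }
    // Coefficients that produce outputs (r0, r1, r2) at p0, p1, p2.
    func solve(_ r0: CGFloat, _ r1: CGFloat, _ r2: CGFloat) -> (CGFloat, CGFloat, CGFloat) {
        let m = (r0 * (p1.y - p2.y) - p0.y * (r1 - r2) + (r1 * p2.y - r2 * p1.y)) / det
        let n = (p0.x * (r1 - r2) - r0 * (p1.x - p2.x) + (p1.x * r2 - p2.x * r1)) / det
        let t = (p0.x * (p1.y * r2 - p2.y * r1) - p0.y * (p1.x * r2 - p2.x * r1) + r0 * (p1.x * p2.y - p2.x * p1.y)) / det
        return (m, n, t)
    }
    let (a, c, tx) = solve(dst[0].x, dst[1].x, dst[2].x)
    let (b, d, ty) = solve(dst[0].y, dst[1].y, dst[2].y)
    return CGAffineTransform(a: a, b: b, c: c, d: d, tx: tx, ty: ty)
}

Feeding it three corners of the image view's bounds and their target positions gives a transform you can assign to view.transform (or wrap in CATransform3DMakeAffineTransform for the layer).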
To achieve the infinite repeat, use UIImageResizingMode.Tile.
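For example, a minimal sketch of the tiling (the "pattern" asset name is a placeholder of mine):

import UIKit

// Zero cap insets tile the whole image instead of stretching it.
let tiled = UIImage(named: "pattern")!
    .resizableImage(withCapInsets: .zero, resizingMode: .tile)
let imageView = UIImageView(image: tiled)
imageView.frame = CGRect(x: 0, y: 0, width: 640, height: 480) // the image repeats to fill this frame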
I want to create the same transforming effect on XNA 4 as Photoshop does:
Transform tool is used to scale, rotate, skew, and just distort the perspective of any graphic you’re working with in general
These are all the things I want to do in XNA with any texture: http://www.tutorial9.net/tutorials/photoshop-tutorials/using-transform-in-photoshop/
Skew: Skew transformations slant objects either vertically or horizontally.
Distort: Distort transformations allow you to stretch an image in ANY direction freely.
Perspective: The Perspective transformation allows you to add perspective to an object.
Warping an object (this is the one I'm most interested in).
Hope you can help me with some tutorial or something already made :D. I think vertices may hold the solution, but I'm not sure.
Thanks.
Probably the easiest way to do this in XNA is to pass a Matrix to SpriteBatch.Begin. This is the overload you want to use: MSDN (the transformMatrix argument).
You can also do this with raw vertices, with an effect like BasicEffect by setting its World matrix. Or by setting vertex positions manually, perhaps transforming them with Vector3.Transform().
Most of the transformation matrices you want are provided by the Matrix.Create*() methods (MSDN). For example, CreateScale and CreateRotationZ.
There is no provided method for creating a skew matrix. It should be something like this:
Matrix skew = Matrix.Identity;
skew.M12 = (float)Math.Tan(MathHelper.ToRadians(36.87f));
(That is to skew by 36.87f degrees, which I pulled off this old answer of mine. You should be able to find the full maths for a skew matrix via Google.)
Remember that transformations happen around the origin of world space (0,0). If you want to, for example, scale around the centre of your sprite, you need to translate that sprite's centre to the origin, apply a scale, and then translate it back again. You can combine matrix transforms by multiplying them. This example (untested) will scale a 200x200 image around its centre:
Matrix myMatrix = Matrix.CreateTranslation(-100, -100, 0)
* Matrix.CreateScale(2f, 0.5f, 1f)
* Matrix.CreateTranslation(100, 100, 0);
Note: avoid scaling the Z axis to 0, even in 2D.
For perspective there is CreatePerspective. This creates a projection matrix, which is a specific kind of matrix for projecting a 3D scene onto a 2D display, so it is better used with vertices when setting (for example) BasicEffect.Projection. In this case you're best off doing proper 3D rendering.
For distort, just use vertices and place them manually wherever you need them.
I'm doing some drawing relative to a scaled image so I end up with fractional CGPoints. I am scaling the results from the CoreImage face detection routine.
Do I want to round these myself or leave it to iOS to do it when I use these points in CGPathAddLineToPoint calls? If it is better to round, should I round up or down?
I've read about pixel boundaries, etc., but I'm not sure how to apply that here. I am drawing to a CALayer:
CGPoint leftEye = CGPointMake((leftEyePosition.x * xScale),
                              (leftEyePosition.y * yScale));
// result
features {
faceRect = "{{92, 144.469}, {166.667, 179.688}}";
hasLeftEyePosition = 1;
hasMouthPosition = 1;
hasRightEyePosition = 1;
leftEyePosition = "{142.667, 268.812}";
mouthPosition = "{176, 189.75}";
rightEyePosition = "{207.333, 269.531}";
}
Whether or not you round, and in what direction, depends on the effect you are trying to accomplish.
CoreGraphics itself has absolutely no problem with fractional coordinates. However, drawing anything using fractional coordinates is going to end up antialiasing the drawn objects. This typically causes them to look fuzzy. Rounding your coordinates appropriately is a good idea to avoid this.
Be warned, however. Depending on what you're drawing and how, you may want coordinates that are 0.5 pixels off instead of integral coordinates. For example, if you're drawing a line, the line is centered on the coordinate you give. So a 1-pixel line drawn on integral coordinates will actually end up being a fuzzy line 2 pixels wide (with each pixel accounting for half of the line). The simplest thing to remember is that strokes are centered on the coordinates, but fills are bounded by them. So when filling a rectangle, integral coordinates are best. When stroking a rectangle, inset your coordinates by 0.5 pixels (or, rather, by half of the stroke width you want to use).
Also, don't forget that when drawing an image that's meant to be displayed on a retina screen with scale=2, coordinates that are 0.5 units off are actually still on pixel boundaries. So if you know it's retina, you can avoid rounding to fully integral coordinates when the nearest half-unit coordinate is fine.
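A small Swift helper along these lines (the function is my own sketch, not an Apple API): snap a point to the pixel grid for a given screen scale, adding the half-pixel offset only when the point will be used for a hairline stroke.

import UIKit

// scale is 1 on non-retina screens, 2 or 3 on retina screens (UIScreen.main.scale).
func pixelAligned(_ p: CGPoint, scale: CGFloat, forStroke: Bool) -> CGPoint {
    // Round to the nearest pixel (not point) boundary...
    var x = (p.x * scale).rounded() / scale
    var y = (p.y * scale).rounded() / scale
    // ...then offset by half a pixel so a one-pixel stroke is centered
    // inside a pixel rather than straddling two.
    if forStroke {
        x += 0.5 / scale
        y += 0.5 / scale
    }
    return CGPoint(x: x, y: y)
}

For a stroke of width w, the offset would be w/2 rather than half a pixel, per the inset rule above.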
I am trying to find out why, when I apply affine transformations to an image in OpenCV, the result is not visible in the preview window; instead, the entire window is black. How can I find a workaround for this problem so that I can always view my transformed image (the result of the affine transform) in the window, no matter the applied transformation?
Update: I think that this happens because all the transformations are calculated with respect to the origin of the coordinate system (top left corner of the image). While for rotation I can specify the center of the rotation, and I am able to view the result, when I perform scaling I am not able to control where the transformed image goes. Is it possible to somehow move the coordinate system to make the image fit in the window?
Update2: I have an image which contains only a ROI at some position in it (the rest of the image is black), and I need to apply a set of affine transforms to it. To make things simpler and to see the effect of each individual transform, I applied each transform one by one. What I noticed is that whenever I move (translate) the image such that the center of the ROI is at the origin of the coordinate system (top-left corner of the view window), all the affine transforms perform correctly without the ROI moving. However, by translating the center of the ROI to the origin, the upper and left parts of the ROI remain cut off outside the current view window.
If I move the ROI's central point to another point in the view window (for example, the window center), an affine transform of the type:
A = [a 0 0; 0 b 0] (A is a 2x3 matrix, the parameter of the warpAffine function)
moves the image (ROI) outside of the view window (which doesn't happen if the ROI's center is in the top-left corner). How can I modify the affine transform so the image doesn't move out of its place (i.e. behaves the same way as when the ROI's center is at the origin of the coordinate system)?
If you want to be able to apply any affine transform, you will not always be able to view it. A better idea might be to manually apply your transform to 4 corners of a square and then look at the coordinates where those 4 points end up. That will tell you where your image is going.
If you have several transforms, just combine them into one transform. If you have 3 transforms
[A],[B],[C]
transforming an image by A,then B, then C is equivalent to transforming the image once by
[C]*[B]*[A]
If your transforms are in 2x3 matrices, just convert them to 3x3 matrices by adding
[0,0,1]
as the new bottom row, then multiply the 3x3 matrices together. When you are finished, the bottom row will be unchanged; just drop it to get your new, combined affine transform.
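Here is that bookkeeping sketched in Swift using simd as a stand-in for whatever matrix type you actually have (e.g. cv::Mat); the helper name and the example transforms are mine:

import simd

// Lift a 2x3 affine matrix, given as its two rows, to 3x3 by appending [0, 0, 1].
func lift(_ r0: SIMD3<Double>, _ r1: SIMD3<Double>) -> double3x3 {
    return double3x3(rows: [r0, r1, SIMD3(0, 0, 1)])
}

let A = lift(SIMD3(1, 0, 100), SIMD3(0, 1, 50)) // translate by (100, 50)
let B = lift(SIMD3(2, 0, 0), SIMD3(0, 2, 0))    // scale by 2 about the origin
let C = lift(SIMD3(0, -1, 0), SIMD3(1, 0, 0))   // rotate 90 degrees about the origin

// Applying A, then B, then C is the single transform C*B*A. Its bottom row
// is still (0, 0, 1); the top two rows are the combined 2x3 affine matrix.
let combined = C * B * A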
Update
If you want to apply a transform to an object as if the object were somewhere else, you can combine 3 transforms. First, translate the object to the location where you want it to be transformed (the center of the coordinate system, in your case) with an affine transform [A]. Then apply your scaling transform [B], then a translation back to where you started. The translation back should be the inverse of [A]. That means your final transform would be
final_transform = [A].inv()*[B]*[A]
(Order of operations reads right to left when doing matrix multiplication.)
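Continuing the simd sketch from above, scaling about an assumed centre point (320, 240) rather than about the origin:

let (cx, cy) = (320.0, 240.0)                           // assumed ROI centre
let toOrigin = lift(SIMD3(1, 0, -cx), SIMD3(0, 1, -cy)) // [A]: move the centre to the origin
let scale    = lift(SIMD3(2, 0, 0),   SIMD3(0, 0.5, 0)) // [B]: the scaling you actually want
// [A].inv() * [B] * [A], read right to left: translate, scale, translate back.
let scaleAboutCentre = toOrigin.inverse * scale * toOrigin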
In my previous code, I changed the coordinate system in my view's drawRect:, so that the rectangle had (0,0) in the centre, (0,1) at the top centre and (1,0) in the centre of the right edge -- i.e. a normalised Cartesian system:
{
    // SCALE so that we range from TL(0, 0) - BR(2, -2)
    CGContextScaleCTM(X, 0.5 * bitmapSize.width, -0.5 * bitmapSize.height);

    // TRANSLATE so that we range from TL(-1, 1) - BR(1, -1)
    // ie: a cartesian coordinate system, centred on (0, 0) with:
    //   x increasing to the right
    //   y increasing upwards
    //   x & y each ranging from -1 to 1
    CGContextTranslateCTM(X, 1, -1);

    T = CGContextGetCTM(X);
}
In this box I create a wheel with 12 custom-drawn buttons arranged around it. The buttons glow before gradually fading when pressed.
Now I am redesigning the code, as the animation was rendering far too slowly.
In my view's Load event I create a wheel object, which draws the wheel onto its own CALayer. This is then added as a sublayer to the view's layer.
(The wheel object will in turn create the buttons, which will each draw onto their own CALayer, and these layers will be added as sublayers to the wheel's layer.)
Anyway, I would very much like to perform the drawing of the wheel and the buttons using my normalised Cartesian system. But I can't quite see how to implement it.
I could change the view's transform, but this changes the boundary rectangle of the view. One solution would be to have a view within a view, but I discovered by chance testing that clipping to a circular path and drawing within that path is substantially slower than drawing without the clip. So I am hesitant to do this. I am looking for optimal efficiency (without going to GL just yet -- I'm not ready for that -- I need to understand this stuff first, I think).
Alternatively, I could change the transform of the layer. But this is a 3D transform! I am having a lot of trouble getting my head around the logic of this. The iPhone is a 2D screen. On it is represented a 2D interface. Views and layers I conceptualise as flat rectangles sitting on top of one another. Is this wrong? Is this 3D business just for doing funky flip effects?
What if the layer has been set to rasterize, and it has a weird transform? How can it rasterize if it doesn't know at what pixel resolution it is running? What if we have 10 nested layers, each with a funky transform, and the innermost one needs to be rasterised? Does it somehow go all the way down the chain and figure out what pixels it is to overlay? What if the base layer is within a view within a view within a view, and these views have 2D transforms on them? Does it really go all the way until it gets to pixels?
I want to rasterize my buttons -- that's why I'm asking. They are very complex drawing objects. It would save a lot of CPU/GPU if they were prerendered in both their dead and alive states, so the fading would simply consist of compositing x*dead plus (1-x)*alive. So really I want bitmaps for both.
Could someone slay my confusion?
Many thanks,
Ohmu