Get rid of misaligned pixels after UIView CA transform (iOS)

I have a UILabel on which I perform a CGAffineTransformConcat to tilt the text by a few degrees. Instruments' Core Animation analysis tells me that the view now has misaligned pixels (leave out the transformation and the label is fine).
I wonder if there is any way to get rid of the misaligned pixels in this label, or whether that is simply impossible because the transformation produces fractional coordinate values anyway.
I tried a CGRectIntegral call on the frame, which has fractional values, but for some reason the view is still misaligned.

When a layer is rotated by an angle that is not a multiple of 90°, it cannot be pixel-aligned.
If you want to present tilted text and nevertheless need aligned pixels, the only way is to draw the layer (view) yourself: align the layer, and do the rotation in Quartz instead.
Note after edit: you cannot use the frame when a transform is set:
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
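The claim above can be checked numerically. A small Python sketch (no UIKit involved, just the rotation math a transform applies to a layer's corners):

```python
import math

def rotate_point(x, y, degrees):
    """Rotate (x, y) about the origin by the given angle."""
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

# A pixel-aligned corner stays on the integer grid under a 90° rotation...
x90, y90 = rotate_point(100, 0, 90)
print(round(x90, 6), round(y90, 6))  # 0.0 100.0

# ...but lands on fractional coordinates for any other angle,
# so no amount of frame rounding can realign the rotated layer.
x10, y10 = rotate_point(100, 0, 10)
print(round(x10, 6), round(y10, 6))  # 98.480775 17.364818
```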

Related

Mapping textures to 2 triangles in roblox

I am currently trying to map textures, using ImageLabels, onto two different triangles (I'm using right-angle wedges, so I need two of them to make a scalene triangle). The problem: I can only set position, size, and rotation, so I need to figure out how to use that information to map the texture onto the triangle correctly.
The position is relative to the triangle's top-left corner and its size (<0,0> puts the corner at the top left, <1,1> at the bottom right), the size is also relative to the triangle's size (<1,1> is the same size as the triangle and <0,0> is infinitely tiny), and the rotation is about the center.
I have the UV coordinates (given 0-1) and face vertices, all from an OBJ file. The triangles in 3D are made up of two wedges, which are split at a right angle from the longest surface and from the opposite angle.
I don't quite understand this, but it may help to change the canvas properties on the SurfaceGui.

SceneKit / ARKit: map screen shape into 3D object

I have a UIView in the middle of the screen, 100x100 in this case, and it should function like a "target" guideline for the user.
When the user presses the Add button, an SCNBox should be added to the world at the exact same spot, with width / height / scale / distance / rotation corresponding to the UIView's size and position on the screen.
This image may help illustrate what I mean.
The UIView's size may vary, but it will always be rectangular and centered on the screen. The corresponding 3D model also may vary. In this case, a square UIView will map to a box, but later the same square UIView may be mapped into a cylinder with corresponding diameter (width) and height.
Any ideas how I can achieve something like this?
I've tried scrapping the UIView and placing the box as the placeholder, as a child of sceneView.pointOfView, and later converting its position / parent to the rootNode, with no luck.
Thank you!
Get the center position of the UIView in its parent view, and use the SCNRenderer's unprojectPoint method to convert it to 3D coordinates in the scene; that gives you the position where the 3D object should be placed. You will have to implement some means of determining the z value.
Obviously the distance of the object determines its size on screen, but if you also want to scale the object, you could use the inverse of your camera's zoom level as the scale. By zoom level I mean your own means of zooming (e.g. moving the camera closer than a default distance would create smaller-scale models). If the UIView changes in size, you could, in addition to or instead of the center point, unproject all of its corner points individually into 3D space, but it may be easier to convert the scale of the UIView to the scale of the 3D node. For example, if you know the maximum size the UIView will be, you can express a smaller view as a percentage of its maximum size and use the same percentage to scale the object.
You also mentioned the rotation of the object should correspond to the UIView. I assume that means you want the object to face the camera. For this you can apply a rotation to the object based on the .transform property of the pointOfView node.
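The math behind unprojection can be sketched generically. The snippet below is plain Python with hypothetical pinhole-camera intrinsics, not SceneKit's actual API; unprojectPoint wraps equivalent math using the scene's projection matrix:

```python
import numpy as np

# Hypothetical pinhole-camera intrinsics (an assumption for illustration).
fx = fy = 800.0          # focal length in pixels
cx, cy = 375.0, 667.0    # principal point (screen center)

def unproject(u, v, depth):
    """Map a screen point plus a chosen depth to a camera-space 3D point.

    Screen position alone is ambiguous along the viewing ray, which is
    why a z value must be supplied by the caller.
    """
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# The screen center at 0.5 m depth lands on the camera's optical axis,
# i.e. straight ahead of the camera at (0, 0, 0.5).
print(unproject(375.0, 667.0, 0.5))
```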

Rounding corner causes top edge to blur

I have a UIButton subclass that I am rounding the corners of. Using either the usual cornerRadius property on its layer, or creating a rounded mask and applying that to the layer, I always get the effect shown in the image below (blown up so you can see it clearly). The top pixel is slightly transparent, making the edge look soft. If I remove the rounded corners, the edge goes back to solid (like the bottom edge in the image), so I know it's not just trying to draw the view between pixels.
Any ideas?
Be sure that the frame and the mask are composed entirely of integers, not floats; if necessary, use floor or ceil to round to the nearest integer below or above.
For frames, CGRectIntegral is very helpful. Float values automatically create a sort of anti-aliasing when rendering on screen.
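A rough Python sketch of what CGRectIntegral does, flooring the origin and ceiling the far edge so the result contains the original rect (this emulation is an assumption for illustration, not Apple's implementation):

```python
import math

def rect_integral(x, y, w, h):
    """Smallest integer-coordinate rect containing the input:
    the origin is floored, the far edge is ceiled."""
    x0, y0 = math.floor(x), math.floor(y)
    x1, y1 = math.ceil(x + w), math.ceil(y + h)
    return (x0, y0, x1 - x0, y1 - y0)

print(rect_integral(10.3, 20.7, 99.5, 40.2))  # (10, 20, 100, 41)
```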
What you see is anti-aliasing (on by default), which makes the edges look smooth. This is the only way to make rounded corners look smooth rather than stair-stepped. If you want rough edges, set the layer's allowsEdgeAntialiasing and edgeAntialiasingMask properties.

View results of affine transform

I am trying to find out why, when I apply affine transformations to an image in OpenCV, the result is not visible in the preview window; the entire window is black. How can I work around this problem so that I can always view my transformed image (the result of the affine transform) in the window, no matter which transformation is applied?
Update: I think this happens because all the transformations are calculated with respect to the origin of the coordinate system (the top-left corner of the image). For rotation I can specify the center of rotation and am able to view the result, but when I perform scaling I am not able to control where the transformed image goes. Is it possible to somehow move the coordinate system to make the image fit in the window?
Update2: I have an image which contains only ROI at some position in it (the rest of the image is black), and I need to apply a set of affine transforms on it. To make things simpler and to see the effect of each individual transform, I applied each transform one by one. What I noticed is that, whenever I move (translate) the image such that the center of the ROI is in the center of the coordinate system (top left corner of the view window), all the affine transforms perform correctly without moving. However, by translating the center of ROI at the center of the coordinate system, the upper and the left part of the ROI remain cut out of the current view window.
If I move ROI's central point to another point in the view window (for example the window center), an affine transform of type:
A=[a 0 0; 0 b 0] (A is 2x3 matrix, parameter of the warpAffine function)
moves the image (ROI) outside of the view window (which doesn't happen if the ROI's center is in the top-left corner). How can I modify the affine transform so the image doesn't move out of its place (i.e. behaves the same way as when the ROI's center is at the origin of the coordinate system)?
If you want to be able to apply any affine transform, you will not always be able to view it. A better idea might be to manually apply your transform to 4 corners of a square and then look at the coordinates where those 4 points end up. That will tell you where your image is going.
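The corner-mapping idea can be sketched with NumPy; the matrix values here are hypothetical stand-ins for whatever you pass to cv2.warpAffine:

```python
import numpy as np

# A 2x3 affine matrix of the kind passed to cv2.warpAffine:
# here a uniform 2x scale with no translation (hypothetical values).
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])

# Corners of a 100x100 image region, as homogeneous column vectors.
corners = np.array([[0, 100, 100,   0],
                    [0,   0, 100, 100],
                    [1,   1,   1,   1]], dtype=float)

mapped = A @ corners   # 2x4: where each corner ends up
print(mapped.T)        # one transformed corner per row

# The bounding box of the mapped corners tells you whether the
# result still falls inside the view window.
print(mapped.min(axis=1), mapped.max(axis=1))
```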
If you have several transforms, just combine them into one transform. If you have 3 transforms
[A],[B],[C]
transforming an image by A,then B, then C is equivalent to transforming the image once by
[C]*[B]*[A]
If your transforms are 2x3 matrices, convert them to 3x3 matrices by adding
[0,0,1]
as the new bottom row, then multiply the 3x3 matrices together. When you are finished, the bottom row will be unchanged; just drop it to get your new, combined affine transform.
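A minimal NumPy sketch of this lifting trick, with made-up example transforms:

```python
import numpy as np

def to3x3(m23):
    """Lift a 2x3 affine matrix to 3x3 by appending [0, 0, 1]."""
    return np.vstack([m23, [0.0, 0.0, 1.0]])

# Hypothetical transforms: A scales by 2, B translates by (10, 5).
A = np.array([[2.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
B = np.array([[1.0, 0.0, 10.0], [0.0, 1.0, 5.0]])

# Applying A first and then B is the single transform B*A.
# The bottom row of the product is still [0, 0, 1], so drop it.
combined = (to3x3(B) @ to3x3(A))[:2, :]
print(combined)  # scale by 2, then shift by (10, 5), as one 2x3 matrix
```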
Update
If you want to apply a transform to an object as if the object were somewhere else, you can combine three transforms. First, translate the object to the location where you want it to be transformed (the center of the coordinate system in your case) with an affine transform [A]. Then apply your scaling transform [B], then a translation back to where you started. The translation back should be the inverse of [A]. That means your final transform would be
final_transform = [A].inv()*[B]*[A]
The order of operations reads right to left when doing matrix multiplication.
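A NumPy sketch of this sandwich, assuming a simple translate/scale pair; it scales about a chosen anchor point instead of the origin:

```python
import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def scaling(s):
    return np.array([[s, 0.0, 0.0],
                     [0.0, s, 0.0],
                     [0.0, 0.0, 1.0]])

# Scale by 2 about the point (50, 50) instead of the origin:
# move that point to the origin, scale, then move it back.
A = translation(-50.0, -50.0)
B = scaling(2.0)
final = np.linalg.inv(A) @ B @ A

print(final @ np.array([50.0, 50.0, 1.0]))  # the anchor stays at (50, 50)
print(final @ np.array([60.0, 50.0, 1.0]))  # other points scale away from it
```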

Rotation of a part of sprite in XNA

I have a texture 250px wide and 2000px tall. A 250x250 part of it is drawn on screen according to various conditions (some kind of sprite sheet, yes). All I want is to draw it within a fixed destination rectangle with some rotation. Is it possible?
Yes. Here's how to effectively rotate your destination rectangle:
Take a look at the overloads for SpriteBatch.Draw.
Notice that none of the overloads that take a Rectangle as a destination take a rotation parameter. It's because such a thing does not make much sense. It's ambiguous as to how you want the destination rotated.
But you can achieve the same effect as setting a destination rectangle by careful use of the position and scale parameters. Combine these with the origin (centroid of scaling and rotation, specified in pixels in relation to your sourceRectangle) and rotation parameters to achieve the effect you want.
(If, on the other hand, you want to "fit" to a rectangle - effectively scaling after rotating - you would have to also use the transformMatrix parameter to Begin.)
Now - your question isn't quite clear on this point: But if the effect you are after is more like rotating your source rectangle, this is not something you can achieve with plain ol' SpriteBatch.
The quick-and-dirty way to achieve this is to set a viewport that acts as your destination rectangle. Then draw your rotated sprite within it. Note that SpriteBatch's coordinate system is based on the viewport, not the screen.
The "nicer" (but much harder to implement) way to do it would be to not use SpriteBatch at all, but implement your own sprite drawing that will allow you to rotate the texture coordinates.
