KonvaJS: How to know updated width and height of shape after applying transformation using transformer

After applying a transformation to a shape using the transformer, its scaleX and scaleY properties change, whereas its width and height properties remain the same.
But I need to calculate the updated width and height of the shape after the transformation is applied.
Can anybody tell me a workaround / code solution to get the width and height of the shape?
I have seen other questions and applied their suggestions, but the problem persists.

Related

SceneKit / ARKit: map screen shape into 3D object

I have a UIView in the middle of the screen, 100x100 in this case, and it should function like a "target" guideline for the user.
When the user presses the Add button, an SCNBox should be added to the world at the exact same spot, with width / height / scale / distance / rotation corresponding to the UIView's size and position on the screen.
This image may help illustrate what I mean.
The UIView's size may vary, but it will always be rectangular and centered on the screen. The corresponding 3D model also may vary. In this case, a square UIView will map to a box, but later the same square UIView may be mapped into a cylinder with corresponding diameter (width) and height.
Any ideas how I can achieve something like this?
I've tried scrapping the UIView and placing the box as the placeholder, as a child of the sceneView.pointOfView, and later converting its position / parent to the rootNode, with no luck.
Thank you!
Get the center position of the UIView in its parent view, and use the scene renderer's unprojectPoint method to convert it to 3D coordinates in the scene, to get the position where the 3D object should be placed. You will have to implement some means to determine the z value.
Obviously the distance of the object will determine its size on screen, but if you also want to scale the object you could use the inverse of your camera's zoom level as the scale. By zoom level I mean your own means of zooming (e.g. moving the camera closer than a default distance would create smaller-scale models). If the UIView changes in size, you could, in addition to or instead of the center point, unproject all its corner points individually into 3D space, but it may be easier to convert the scale of the UIView to the scale of the 3D node. For example, if you know the maximum size the UIView will be, you can express a smaller view as a percentage of its maximum size and use the same percentage to scale the object.
You also mentioned the rotation of the object should correspond to the UIView. I assume that means you want the object to face the camera. For this you can apply a rotation to the object based on the .transform property of the pointOfView node.
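To make the placement step concrete, here is a minimal Swift sketch. The function name and the targetView / sceneView / distance parameters are placeholders of mine, not code from the question; it simply unprojects the view's centre onto a ray and walks a chosen distance along it.

import SceneKit
import UIKit

// Sketch: find the 3D point that lies under the centre of an on-screen UIView,
// at a chosen distance in front of the camera, using unprojectPoint.
func worldPosition(under targetView: UIView, in sceneView: SCNView, distance: Float) -> SCNVector3 {
    // Centre of the target view, expressed in the SCNView's coordinate space.
    let center = sceneView.convert(targetView.center, from: targetView.superview)

    // Unproject two points on the ray through that screen point
    // (z = 0 is the near clipping plane, z = 1 the far plane).
    let near = sceneView.unprojectPoint(SCNVector3(Float(center.x), Float(center.y), 0))
    let far  = sceneView.unprojectPoint(SCNVector3(Float(center.x), Float(center.y), 1))

    // Walk `distance` units along that ray, starting from the near point.
    let dir = SCNVector3(far.x - near.x, far.y - near.y, far.z - near.z)
    let len = (dir.x * dir.x + dir.y * dir.y + dir.z * dir.z).squareRoot()
    return SCNVector3(near.x + dir.x / len * distance,
                      near.y + dir.y / len * distance,
                      near.z + dir.z / len * distance)
}

You could then assign the returned vector to the node's position and, to keep it facing the camera, either copy the pointOfView's orientation as described above or attach an SCNBillboardConstraint.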

Minimum enclosing rectangle with given orientation

I'm looking for a function that computes the minimum area "oriented" rectangle of an object with a given orientation.
The minAreaRect function isn't good in this case because, once I compute the orientation of the object, I need to find the MER aligned with it, in order to get the height and width of the object.
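One common way to do this, sketched below with assumed inputs (a non-empty array of contour points and the already-computed orientation angle), is to rotate the points by the negative of that angle and take the axis-aligned bounding box of the result; its width and height are the dimensions of the oriented rectangle. This is a generic Swift sketch, not OpenCV code.

import Foundation
import CoreGraphics

// Sketch: size of the minimum enclosing rectangle at a *given* orientation.
// `points` is assumed non-empty; `angle` is the orientation in radians.
func orientedBoundingSize(of points: [CGPoint], angle: CGFloat) -> CGSize {
    let c = CGFloat(cos(Double(-angle)))
    let s = CGFloat(sin(Double(-angle)))
    // Rotate every point into the object's own frame.
    let rotated = points.map { CGPoint(x: $0.x * c - $0.y * s,
                                       y: $0.x * s + $0.y * c) }
    let xs = rotated.map { $0.x }
    let ys = rotated.map { $0.y }
    // The axis-aligned box of the rotated points is the oriented box of the originals.
    return CGSize(width: xs.max()! - xs.min()!,
                  height: ys.max()! - ys.min()!)
}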

Given the texture in the right way

So as you can see, the texture of the photoFrame is a square image. But when I set it as the diffuse contents, the result looks terrible. So how can I display the square image in the rectangular frame without stretching the image?
A lot of what you see depends on what geometry the texture is mapped onto. Assuming those picture frames are SCNPlane or SCNBox geometries, the face of the frame has texture coordinates ranging from (0,0) in the upper left to (1,1) in the lower right, regardless of the geometry's dimensions or aspect ratio.
SceneKit texture maps images such that the top left of the image is at texture coordinate (0,0) and the lower right is at (1,1) regardless of the pixel dimensions of the image. So, unless you have a geometry whose aspect ratio matches that of the texture image, you're going to see cases like this where the image gets stretched.
There are a couple of things you can do to "fix" your texture:
Know (or calculate) the aspect ratios of your image and the geometry (face) you want to put it on, then use the material's contentsTransform to correct the image.
For example, if you have an SCNPlane whose width is 2 and height is 1 and you assign a square image to it, the image will get stretched horizontally. If you set the contentsTransform to a matrix created with SCNMatrix4MakeScale(2,1,1), it'll double the texture coordinates in the horizontal direction, effectively scaling the image to half its size in that direction and "fixing" the aspect ratio for your 2:1 plane (a code sketch of this option follows below). Note that you might also need a translation, depending on where you want your half-width image to appear on the face of the geometry.
If you're doing this in the scene editor in Xcode, contentsTransform is the "offset", "scale", and "rotation" controls in the material editor, down below where you assigned an image in your screenshot.
Know (or calculate) the aspect ratio of your geometry, and at least some information about the size of your image, and create a modified texture image to fit.
For example, if you have a 2:1 plane as above, and you want to put a 320x480 image on it, create a new texture image with dimensions of 960x480, matching the aspect ratio of the plane. You can use this image to create whatever style of background you want, with your 320x480 image composited on top of that background at whatever position you want.
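As a rough illustration of the first option, here is a minimal Swift sketch; the plane, the image, and the clamped wrap mode are assumptions of mine, not code from the original answer.

import SceneKit
import UIKit

// Sketch: keep a square image square on a non-square SCNPlane by scaling the
// texture coordinates via contentsTransform. `plane` and `image` are assumed to exist.
let material = SCNMaterial()
material.diffuse.contents = image

// Aspect ratios of the geometry face and of the image.
let planeAspect = plane.width / plane.height            // 2.0 for a 2:1 plane
let imageAspect = image.size.width / image.size.height  // 1.0 for a square image

// Scaling the horizontal texture coordinate by planeAspect / imageAspect
// squeezes the image back to its own aspect ratio on the face.
let sx = Float(planeAspect / imageAspect)
material.diffuse.contentsTransform = SCNMatrix4MakeScale(sx, 1, 1)

// Clamp so the uncovered part of the face doesn't tile the image.
material.diffuse.wrapS = .clamp

plane.firstMaterial = material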
I changed the scale, offset, and WrapT properties in the material editor, and the effect was good. But when I run the app I can't get the same effect, so I tried to do it in code by changing the contentsTransform property. But the scale and the offset both affect the contentsTransform. So if the offset is (0, -4.03) and the scale is (1, 1.714), what is the contentsTransform?

Ambiguity in relative magnitude of dimensions of a RotatedRect in OpenCV

I am trying to put thresholds on the aspect ratios of rotated rectangles obtained around certain objects in an image using OpenCV. To compare the aspect ratio of a rotated rectangle with the threshold, I need to take the ratio of its longer dimension to its shorter dimension.
I am confused in this regard: what is the convention in OpenCV? Is rotatedRectangle.size.width always smaller than rotatedRectangle.size.height? That is, is the width of a rotated rectangle always assigned the smaller of its two dimensions in OpenCV?
I tried running some code to find an answer, and it seems that rotatedRectangle.size.width is indeed the smaller dimension of a rotated rectangle. But I would still like confirmation from anyone who has encountered something similar.
EDIT: I am using fitEllipse to get the rotated rectangle and my version of OpenCV is 2.4.1.
Please help!
There is no convention for a rotated rectangle per se, as the documentation says
The class represents rotated (i.e. not up-right) rectangles on a plane. Each rectangle is specified by the center point (mass center), length of each side (represented by cv::Size2f structure) and the rotation angle in degrees.
However, you don't specify what function or operation is creating your rotated rects - for example, if you used fitEllipse it may be that there is some internal detail of the algorithm that prefers to use the larger (or smaller) dimension as the width (or height).
Perhaps you could comment or edit your question with more information. As it stands, if you want the ratio of the longer:shorter dimensions, you will need to specifically test which is longer first.
EDIT
After looking at the OpenCV source code, the fitEllipse function contains the following code:
if( box.size.width > box.size.height )
{
    float tmp;
    CV_SWAP( box.size.width, box.size.height, tmp );
    box.angle = (float)(90 + rp[4]*180/CV_PI);
}
So, at least for this implementation, it seems that width is always taken as the shorter dimension. However, I wouldn't rely on that staying true in a future implementation.

Get rid of misaligned pixels after UIView CA Transformation

I have a UILabel on which I perform a CGAffineTransformConcat to tilt the text by a few degrees. Instruments' Core Animation analysis tells me that the view now has misaligned pixels (leave out the transformation and the label is fine).
I wonder if there is any way to get rid of the misaligned pixels in this label, or if that is impossible since the transformation produces fractional coordinate values anyway.
I did call CGRectIntegral on the frame, which had fractional values, but for some reason the view is still misaligned.
When a layer is rotated by an angle that is not a multiple of 90° it cannot be pixel-aligned.
If you want to present tilted text and nevertheless need aligned pixels, the only way is to draw the layer (view) yourself. You would keep the layer aligned and instead do the rotation using Quartz.
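Below is a minimal sketch of that approach; the view class, text, font and angle are placeholders of mine, not code from the answer. The view's own frame stays pixel-aligned and only the drawing inside draw(_:) is rotated with Quartz.

import UIKit

// Sketch: a pixel-aligned view that draws tilted text itself instead of
// setting a rotation transform on the view or its layer.
class TiltedTextView: UIView {
    var text = "Hello"
    var angle: CGFloat = .pi / 12   // roughly 15 degrees

    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }

        // Rotate the drawing context around the centre of the view,
        // rather than transforming the view/layer itself.
        ctx.translateBy(x: bounds.midX, y: bounds.midY)
        ctx.rotate(by: angle)

        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.systemFont(ofSize: 17),
            .foregroundColor: UIColor.black
        ]
        let size = (text as NSString).size(withAttributes: attributes)
        (text as NSString).draw(at: CGPoint(x: -size.width / 2, y: -size.height / 2),
                                withAttributes: attributes)
    }
}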
Note after edit: You cannot use the frame when a transform is set:
Warning: If the transform property is not the identity transform, the value of this property is undefined and therefore should be ignored.
