Calculate real width based on picture, knowing distance - iOS

I know the distance between the camera and the object
I know the type of camera used
I know the width in pixels on the picture
Can I figure out the real-life width of the object?

You have to know the camera's field-of-view angle. For example, the iPhone 5s camera covers 61.4° vertically and 48.0° horizontally; call this angle alpha.
Then you calculate the width of the object this way:
viewWidth = distance * tan(alpha / 2) * 2;
objWidth = viewWidth * (imageWidth / screenWidth)
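Putting those two lines together, here is a minimal Swift sketch of the calculation. The function name and sample numbers are illustrative; it assumes alpha is the field of view measured along the same axis as the pixel widths, and that screenWidth above means the photo's full width in pixels.

import Foundation

// A sketch of the formula above: distance and the returned width share the same
// unit (e.g. meters); fovDegrees is the camera's field of view along the measured axis.
func realObjectWidth(distance: Double,
                     fovDegrees: Double,
                     objectWidthPixels: Double,
                     imagePixelWidth: Double) -> Double {
    let alpha = fovDegrees * .pi / 180                 // degrees -> radians for tan()
    let viewWidth = 2 * distance * tan(alpha / 2)      // real-world width the photo covers
    return viewWidth * (objectWidthPixels / imagePixelWidth)
}

// Example (illustrative numbers): object 3 m away, 48° FOV, spanning 800 px of a 3264 px photo.
let objectWidth = realObjectWidth(distance: 3.0, fovDegrees: 48.0,
                                  objectWidthPixels: 800, imagePixelWidth: 3264)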

Related

How to get the rendered/displayed size of a scaled image/frame in Roblox Studio

I need to get the width and height of an image within a frame. Both the frame and image use the Scale property instead of the Offset property to set the size. I have a UIAspectRatioConstraint on the frame that the image is in. Everything scales with the screen size just fine.
However, I need to be able to get the current width/height of the image (or the frame) so that I can perform some math functions in order to move a marker over the image to a specific position (X, Y). I cannot get the size of the image/frame, and therefore cannot update the position.
Is there a way to get the currently rendered width of an image or frame that is using the Scale size options with the UIAspectRatioConstraint?
I'm sleepy. I hope this makes sense...
My current math for getting a position on another image that uses Offset instead of Scale is:
local _x = (_miniMapImageSize.X.Offset / _worldCenterSize.X) * (_playerPos.X - _worldCenterPos.X) + (_miniMapFrameSize.X.Offset / 2)
local _y = (_miniMapImageSize.Y.Offset / _worldCenterSize.Z) * (_playerPos.Z - _worldCenterPos.Z) + (_miniMapFrameSize.Y.Offset / 2)
Which gives me the player position within my mini-map. But that doesn't scale. The actual map does, and I need to position the player's marker on that map as well.
Work-Around
For now (for anyone else looking for a solution), I have created a work-around. I now specify my actual image size:
local _mapSize = Vector2.new(814, 659)
Then I use the screen width and height to decide if I need to scale based off the x-axis or the y-axis. (Scale my math formula, not the image.)
if (_mouse.ViewSizeX / _mouse.ViewSizeY) - (_mapSize.X / _mapSize.Y) <= 0 then
    -- If the screen's aspect ratio is the same as or narrower than the map's,
    -- calculate the new size based off the width
    local _smallerByPercent = (_mouse.ViewSizeX * 0.9) / _mapSize.X
    _mapWidth = _mapSize.X * _smallerByPercent
    _mapHeight = _mapSize.Y * _smallerByPercent
else
    local _smallerByPercent = (_mouse.ViewSizeY * 0.9) / _mapSize.Y
    _mapWidth = _mapSize.X * _smallerByPercent
    _mapHeight = _mapSize.Y * _smallerByPercent
end
After that, I can create the position for my marker on my map.
_x = ((_mapWidth / _worldCenterSize.X) * (_playerPos.X - _worldCenterPos.X)) * -1
_y = ((_mapHeight / _worldCenterSize.Z) * (_playerPos.Z - _worldCenterPos.Z)) * -1
_mapCharacterArrow.Position = UDim2.new(0.5, _x, 0.5, _y)
Now my marker is able to be placed where my character is within the larger map opened when I press "M".
HOWEVER
I would still love to know of a way to get the rendered/displayed image size... I was trying to make it so that I did not have to enter the image size into the script manually. I want it to be dynamic.
So it seems there is a property on most GUI elements called AbsoluteSize. This is the actual display size of the element, no matter what it is scaled to. (It does not stay the same when scaled; it updates as the element is scaled to give you the new size.)
With this, I was able to re-write my code to:
local _x = (_mapImageSize.X / _worldCenterSize.X) * (_playerPos.X - _worldCenterPos.X) * -1
local _y = (_mapImageSize.Y / _worldCenterSize.Z) * (_playerPos.Z - _worldCenterPos.Z) * -1
_mapCharacterArrow.Position = UDim2.new(0.5, _x, 0.5, _y)
Where _mapImageSize = [my map image].AbsoluteSize.
Much better than before.

Using CATransform3DRotate with perspective: how to correct the 2D size increase?

I'm trying to create a paper folding effect in Swift using CALayers and CATransform3DRotate. There are some libraries out there, but those are pretty outdated and don't fit my needs (they don't have symmetric folds, for example).
My content view controller will squeeze to the right half side of the screen, revealing the menu at the left side.
Everything went well, until I applied perspective: then the dimensions I calculate are not correct anymore.
To explain the problem, I created a demo to show you what I'm doing.
This is the content view controller with three squares. I will use three folds, so each square will be on a separate fold.
The even folds will get anchor point (0, 0.5) and the odd folds will get anchor point (1, 0.5), plus they'll receive a shadow.
When fully folded, the content view will be half of the screen's width.
On an iPhone 7, each fold/plane will be 125 points wide when unfolded and should appear 62.5 points wide on screen when fully folded.
To calculate the rotation needed to achieve this 62.5 points width, we can use a trigonometric function. To illustrate, look at this top-down view:
We know the original plane size (125) and the 2D width (62.5), so we can calculate the angle α using arccos:
let angle = acos(width / originalWidth)
The result is 1.04719755 rad or 60 degrees.
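As a quick sanity check of that arithmetic (a sketch using the numbers above):

import CoreGraphics

let originalWidth: CGFloat = 125.0            // unfolded plane width, in points
let width: CGFloat = 62.5                     // desired on-screen width when folded
let angle = acos(width / originalWidth)       // acos(0.5)
print(angle, angle * 180 / .pi)               // ≈ 1.04719755 rad, i.e. 60°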
When using this formula with CATransform3DRotate, I get the correct result:
Now for the problem: when I add perspective, my calculation isn't correct anymore. The planes appear bigger, probably because the projection is now different.
You can see the planes are now overlapping and being clipped.
I reconstructed the desired result on the right by playing with the angle, but the correction needed is not consistent, unfortunately.
Here's the code I use. It works perfectly without perspective.
// Loop layers
for i in 0..<self.layers.count {
    // Get layer
    let layer = self.layers[i]
    // Get dimensions
    let width = self.frame.size.width / CGFloat(self.numberOfFolds)
    let originalWidth = self.sourceView.frame.size.width / CGFloat(self.numberOfFolds)
    // Calculate angle
    let angle = acos(width / originalWidth)
    // Set transform
    layer.transform = CATransform3DIdentity
    layer.transform.m34 = 1.0 / -500
    layer.transform = CATransform3DRotate(layer.transform, angle * (i % 2 == 0 ? -1 : 1), 0, 1, 0)
    // Update position
    if i % 2 == 0 {
        layer.position = CGPoint(x: (width * CGFloat(i)), y: layer.position.y)
    } else {
        layer.position = CGPoint(x: (width * CGFloat(i + 1)), y: layer.position.y)
    }
}
So my question is: how do I achieve the desired result? Do I need to correct the angle, or should I calculate the projected/2D width differently?
Thanks in advance! :)

Unit of measurement between two CGPoints

How do I determine the unit of measurement between two CGPoints? I basically want to convert the distance between two CGPoints to centimeters and millimeters. I cannot find anything in the docs on how to implement this correctly.
CGFloat xDist = point2.x - point1.x;
CGFloat yDist = point2.y - point1.y;
CGFloat distance = sqrt((xDist * xDist) + (yDist * yDist));
There is no API to correlate a device's physical screen size to the number of points on the screen.
You also have to realize that pixels (and points) aren't square. So you need both horizontal and vertical values.
Your only (bad) option is to hardcode values for every known iOS device and update your app every time a new device comes out.
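For illustration, here is a hedged Swift sketch of that hardcoding approach. The 163 points-per-inch figure is an assumption for one class of device; a real implementation would need a lookup table keyed by device model, exactly as described above.

import CoreGraphics

// Distance between two points, in screen points (same math as the snippet above).
func distanceInPoints(_ p1: CGPoint, _ p2: CGPoint) -> CGFloat {
    let dx = p2.x - p1.x
    let dy = p2.y - p1.y
    return sqrt(dx * dx + dy * dy)
}

// Hypothetical conversion: 163 points per inch is roughly right for many @2x iPhones,
// but there is no API for this, so the value must be hardcoded per device.
func distanceInMillimeters(_ p1: CGPoint, _ p2: CGPoint,
                           pointsPerInch: CGFloat = 163.0) -> CGFloat {
    let inches = distanceInPoints(p1, p2) / pointsPerInch
    return inches * 25.4                      // 1 inch = 25.4 mm
}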

Understanding CGAffineTransform in the context of Rotation

I am working on a few experiments to learn gestures and animations in iOS. Creating a Tinder-like interface is one of them. I am following this guide: http://guti.in/articles/creating-tinder-like-animations/
I understand the changing of the position of the image, but don't understand the rotation. I think I've pinpointed my problem to not understanding CGAffineTransform. Particularly, the following code:
CGFloat rotationStrength = MIN(xDistance / 320, 1);
CGFloat rotationAngle = (CGFloat) (2 * M_PI * rotationStrength / 16);
CGFloat scaleStrength = 1 - fabsf(rotationStrength) / 4;
CGFloat scale = MAX(scaleStrength, 0.93);
CGAffineTransform transform = CGAffineTransformMakeRotation(rotationAngle);
CGAffineTransform scaleTransform = CGAffineTransformScale(transform, scale, scale);
self.draggableView.transform = scaleTransform;
Where are these values and calculations, such as 320, 1 - fabs(strength) / 4, 0.93, and so on, coming from? How do they contribute to the eventual rotation?
On another note, Tinder seems to use a combination of swiping and panning. Do they add a swipe gesture to the image, or do they just take into account the velocity of the pan?
That code has a lot of magic constants, most of which are likely chosen because they resulted in something that "looked good". This can make it hard to follow. It's not so much about the actual transforms, but about the values used to create them.
Let's break it down, line by line, and see if that makes it clearer.
CGFloat rotationStrength = MIN(xDistance / 320, 1);
The value 320 is likely assumed to be the width of the device (it was the portrait width of all iPhones until the 6 and 6+ came out).
This means that xDistance / 320 is a factor of how far along the x axis (based on the name xDistance) the user has dragged. This will be 0.0 when the user hasn't dragged any distance and 1.0 when the user has dragged 320 points.
MIN(xDistance / 320, 1) takes the smaller of the dragged-distance factor and 1. This means that if the user drags further than 320 points (so that the distance factor would be larger than 1), the rotation strength will never be larger than 1.0. It doesn't protect against negative values (if the user dragged to the left, xDistance would be negative, which is always smaller than 1). However, I'm not sure if the guide accounted for that (since 320 is the full width, not the half width).
So, the first line is a factor between 0 and 1 (assuming no negative values) of how much rotation should be applied.
CGFloat rotationAngle = (CGFloat) (2 * M_PI * rotationStrength / 16);
The next line calculates the actual angle of rotation. The angle is specified in radians. Since 2π is a full circle (360°), the rotation angle ranges from 0 to 1/16 of a full circle (22.5°). The value 1/16 is likely chosen because it "looked good".
The two lines together means that as the user drags further, the view rotates more.
CGFloat scaleStrength = 1 - fabsf(rotationStrength) / 4;
From the variable name, it would look like it calculates how much the view should scale. But it's actually calculating what scale factor the view should have. A scale of 1 means the "normal" or unscaled size. When the rotation strength is 0 (when xDistance is 0), the scale strength will be 1 (unscaled). As the rotation strength increases, approaching 1, the scale strength approaches 0.75 (since that's 1 - 1/4).
fabsf is simply the floating point absolute value (fabsf(-0.3) is equal to 0.3)
CGFloat scale = MAX(scaleStrength, 0.93);
On the next line, the actual scale factor is calculated. It's simply the larger of scaleStrength and 0.93 (scaled down to 93%). The value 0.93 is completely arbitrary and is likely just what the author found appealing.
Since the scale strength ranges from 1 to 0.75 and the scale factor is never smaller than 0.93, the scale factor only changes for the first third of the xDistance. All scale strength values in the next two thirds will be smaller than 0.93 and thus won't change the scale factor.
With the scaleFactor and rotationAngle calculated as above, the view is first rotated (by that angle) and then scaled down (by that scale factor).
Summary
So, in short: as the view is dragged to the right (as xDistance approaches 320 points), the view linearly rotates from 0° to 22.5° over the full drag and scales from 100% down to 93% over the first third of the drag (and then stays at 93% for the remainder of the drag gesture).
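For reference, here is the same calculation written out in Swift with the constants named. This is a sketch that keeps the original magic numbers (320, 1/16, 0.93); the function and parameter names are my own.

import UIKit

// xDistance: horizontal pan distance in points, positive when dragging right.
func cardTransform(forXDistance xDistance: CGFloat) -> CGAffineTransform {
    let assumedScreenWidth: CGFloat = 320                      // portrait width of pre-iPhone 6 devices
    let rotationStrength = min(xDistance / assumedScreenWidth, 1)
    let rotationAngle = 2 * CGFloat.pi * rotationStrength / 16 // at most 1/16 of a full turn (22.5°)
    let scaleStrength = 1 - abs(rotationStrength) / 4          // 1.0 down to 0.75
    let scale = max(scaleStrength, 0.93)                       // clamped so the card never shrinks below 93%

    // Rotate first, then scale, matching the original CGAffineTransformScale(transform, ...) order.
    return CGAffineTransform(rotationAngle: rotationAngle).scaledBy(x: scale, y: scale)
}

// Usage (hypothetical): draggableView.transform = cardTransform(forXDistance: translation.x)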

iOS - Image scaling and positioning in larger image

My question is about calculating position: when I scale the INNER IMAGE in the FRONT END, I need to find the corresponding position in the BACKGROUND PROCESS. If anyone has experience with this, please share an equation or something similar.
The FRONT END FRAME IMAGE has a size of 188x292 (width x height) and the larger FRAME IMAGE has a size of 500x750 (width x height). The INNER IMAGE is 75x75 (width x height) and the larger INNER IMAGE is 199.45x199.45 (width x height).
Question: when I scale the INNER IMAGE in the FRONT END, say from 75x75 to 100x100, I have its x and y position. I need to calculate the exact corresponding position in the BACKGROUND PROCESS, so that I can scale that image programmatically.
After scaling the INNER IMAGE you will have an x and y position for it; now convert it to a percentage.
If the position of the INNER IMAGE is:
relativeX = (x * 100)/frameImageWidth;
relativeY = (y * 100)/frameImageHeight;
then the position of the INNER IMAGE in the BACKGROUND will be:
x = (relativeX * backgroundFrameImageWidth)/100;
y = (relativeY * backgroundFrameImageHeight)/100;
CGPoint foregroundLocation = CGPointMake(x, y);
static float xscale = BACKGROUND_IMAGE_WIDTH / FOREGROUND_IMAGE_WIDTH;
static float yscale = BACKGROUND_IMAGE_HEIGHT / FOREGROUND_IMAGE_HEIGHT;
CGPoint backgroundLocation = CGPointApplyAffineTransform(foregroundLocation, CGAffineTransformMakeScale(xscale, yscale));
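As a small Swift sketch of the same mapping, using the frame sizes from the question (188x292 front-end frame, 500x750 background frame) and assuming both frames share the same origin and orientation:

import CoreGraphics

let frontFrameSize = CGSize(width: 188, height: 292)          // FRONT END FRAME IMAGE
let backgroundFrameSize = CGSize(width: 500, height: 750)     // BACKGROUND FRAME IMAGE

// Map a point measured in the front-end frame to the background frame.
func backgroundPosition(for frontPoint: CGPoint) -> CGPoint {
    let xScale = backgroundFrameSize.width / frontFrameSize.width
    let yScale = backgroundFrameSize.height / frontFrameSize.height
    return frontPoint.applying(CGAffineTransform(scaleX: xScale, y: yScale))
}

// Example (illustrative numbers): the inner image's origin after scaling in the front end.
let mapped = backgroundPosition(for: CGPoint(x: 40, y: 60))   // -> (~106.4, ~154.1)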
