What do bounds values between 0.0 and 1.0 mean? - iOS

From the documentation for AVMetadataObject I read:
For video content, the bounding rectangle may be expressed using
scalar values in the range 0.0 to 1.0. Scalar values remain meaningful
even when the original video has been scaled down.
What does that mean?

I'll give you a basic example. Let's say we have two views A and B
A = {0.0, 0.0, 320.0, 568.0}
B = {100.0, 100.0, 100.0, 100.0}
So now we can translate into our new coordinate system, where
A = {0.0, 0.0, 1.0, 1.0}
Let's do some basic calculations for B.
For the x origin: 320 is to 1 as 100 is to x, so x = 100 / 320 = 0.3125
For the y origin: 568 is to 1 as 100 is to y, so y = 100 / 568 ≈ 0.176
Do the same calculation for the width and height and you will have B's frame translated into the normalized coordinate system. Obviously you can do the opposite calculation to translate back to your original coordinate system.
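To make the proportion concrete, here is a minimal Swift sketch of the same math; the normalize/denormalize helpers and the container parameter are purely illustrative, not an API from UIKit or AVFoundation:

import CoreGraphics

// Map a frame expressed in A's point coordinates into A's normalized
// 0.0 to 1.0 space, and back again.
func normalize(_ rect: CGRect, in container: CGRect) -> CGRect {
    return CGRect(x: rect.origin.x / container.width,
                  y: rect.origin.y / container.height,
                  width: rect.width / container.width,
                  height: rect.height / container.height)
}

func denormalize(_ rect: CGRect, in container: CGRect) -> CGRect {
    return CGRect(x: rect.origin.x * container.width,
                  y: rect.origin.y * container.height,
                  width: rect.width * container.width,
                  height: rect.height * container.height)
}

// With the frames above, normalize(B, in: A) gives roughly
// {0.3125, 0.176, 0.3125, 0.176}, and denormalize reverses it.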

Related

How would I convert the position of a point in a normalized coordinate system, to a regular coordinate system that has a relative position?

This math is not platform specific, and I'll take any language as an answer. It's difficult to explain why I'm doing this, but I'll try to include images.
I have a view (View A) that overlays a map as a container. Its purpose is to contain our content while remaining fixed to the map as the user drags the map. That view has a coordinate system whose origin is in the top left of the screen. It will be our absolute coordinate system, the one we are converting positions to and from.
Next, we have a Rectangle that is formed by the intersection of View A and what is visible on the screen. I achieved that with the following property in my UIView:
var visibleRect: CGRect {
    guard let superview = self.superview?.superview else {
        return CGRect(x: 0, y: 0, width: 0, height: 0)
    }
    // Convert the superview's coordinate system to the current view's coordinate
    // system and find the intersecting rectangle. This is so we don't have to
    // render excessive particles.
    let val = self.bounds.intersection(
        self.convert(superview.frame, from: superview)
    )
    return val
}
This returns said Rectangle with an origin relative to the origin of View A. The Rectangle exists because the Metal View cannot be too large, so I limit the Metal View to drawing only within what is visible, while keeping the exact coordinates of where the Metal View will be drawn in relation to View A.
I've been able to verify that the relative origin of the Rectangle within View A is accurate, and I've verified that Metal View fits within the Rectangle.
Here's what I keep banging my head on: Metal vertex functions use a centered coordinate system that spans from -1 to 1 across the entire width and height, as shown in the following figure (I'm only using it in 2D space, so ignore the z position):
Now my question is:
If I have a point in the top-left quadrant of the Metal View, say (-0.002, 0.5), how would I use the given information to find its position in View A? And how would I convert a point's position in View A back into the Metal View, accounting for the relatively positioned Rectangle?
Visual
Here's a visual that will hopefully clarify things:
Edit
I was able to convert between View A and the Metal View when the Metal View takes up the entire span of View A, with the following code:
def normalizedToViewA(x, y):
    x *= 1.0
    y *= -1.0
    x += 1.0
    y += 1.0
    return ((x / 2.0) * width, (y / 2.0) * height)

def viewAToNormalized(x, y):
    normalizedX = x / (width / 2.0)
    normalizedY = y / (height / 2.0)
    normalizedX -= 1.0
    normalizedY -= 1.0
    normalizedX *= 1.0
    normalizedY *= -1.0
    return (normalizedX, normalizedY)
But now I need to calculate as if the Metal View fills the Rectangle, which is only a portion of View A.
I found the answer and have used Python for legibility.
View A is 1270*680.
# View A (top-left origin)
width = 1270
height = 680

# Visible Rectangle (top-left origin),
# with an origin given as a position in View A
subOriginX = 1000
subOriginY = 400
subWidth = 20
subHeight = 20

# Centered origin converted to top-left origin,
# where the origin is (0, 0)
def normalizedToSubview(x, y):
    x *= 1.0
    y *= -1.0
    x += 1.0
    y += 1.0
    return ((x / 2.0) * subWidth, (y / 2.0) * subHeight)

# Top-left origin converted to centered origin
def subviewToNormalized(x, y):
    normalizedX = x / (subWidth / 2.0)
    normalizedY = y / (subHeight / 2.0)
    normalizedX -= 1.0
    normalizedY -= 1.0
    normalizedX *= 1.0
    normalizedY *= -1.0
    return (normalizedX, normalizedY)

# Relative position of a point within the subview,
# but on View A's plane
def subviewToViewA(x, y):
    return (x + subOriginX, y + subOriginY)

# Relative position of a point within View A,
# but on the subview's plane
def viewAToSubView(x, y):
    return (x - subOriginX, y - subOriginY)

# Position within the Metal View to a position within View A
normalizedCoord = (0.0, 0.0)
toSubview = normalizedToSubview(*normalizedCoord)
viewACoord = subviewToViewA(*toSubview)
print(f"Converted {normalizedCoord} to {toSubview}")
print(f"Converted {toSubview} to {viewACoord}")
# Converted (0.0, 0.0) to (10.0, 10.0)
# Converted (10.0, 10.0) to (1010.0, 410.0)

# Position within View A back to the Metal View
backToSubview = viewAToSubView(*viewACoord)
backToNormalized = subviewToNormalized(*backToSubview)
print(f"Converted {viewACoord} to {backToSubview}")
print(f"Converted {backToSubview} to {backToNormalized}")
# Converted (1010.0, 410.0) to (10.0, 10.0)
# Converted (10.0, 10.0) to (0.0, -0.0)
This is an extremely niche problem, but please comment if you are facing something similar and I will try to expand on it as best I can.
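Since this is an iOS question, here is a hedged Swift sketch of the same round trip; the function names mirror the Python above and are purely illustrative, not part of Metal or UIKit:

import CoreGraphics

// The visible Rectangle, expressed in View A's (top-left origin) coordinates.
let sub = CGRect(x: 1000, y: 400, width: 20, height: 20)

// Metal NDC (-1...1, y up) -> point inside the Rectangle (top-left origin)
func normalizedToSubview(_ p: CGPoint, in sub: CGRect) -> CGPoint {
    return CGPoint(x: (p.x + 1) / 2 * sub.width,
                   y: (-p.y + 1) / 2 * sub.height)
}

// Point inside the Rectangle -> Metal NDC
func subviewToNormalized(_ p: CGPoint, in sub: CGRect) -> CGPoint {
    return CGPoint(x: p.x / (sub.width / 2) - 1,
                   y: -(p.y / (sub.height / 2) - 1))
}

// Shift between the Rectangle's plane and View A's plane
func subviewToViewA(_ p: CGPoint, in sub: CGRect) -> CGPoint {
    return CGPoint(x: p.x + sub.origin.x, y: p.y + sub.origin.y)
}

func viewAToSubview(_ p: CGPoint, in sub: CGRect) -> CGPoint {
    return CGPoint(x: p.x - sub.origin.x, y: p.y - sub.origin.y)
}

// Round trip, matching the Python output:
// NDC (0, 0) -> (10, 10) in the Rectangle -> (1010, 410) in View A -> back to NDC (0, 0)
let inViewA = subviewToViewA(normalizedToSubview(CGPoint(x: 0, y: 0), in: sub), in: sub)
let backToNDC = subviewToNormalized(viewAToSubview(inViewA, in: sub), in: sub)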

SCNNode direction issue

I put an SCNNode into a scene. I want to give it the proper rotation in space, because this node is a pyramid. I want the Z axis to point at point V2 and the X axis to point at point V1 (V1 and V2 are calculated dynamically, and in this case the angle between the axes is of course 90 degrees, because I calculate them accordingly).
The problem: I can't point the X axis, because SCNLookAtConstraint(target: nodeWithV2) only points the Z axis. What I see is that the Z axis is OK, but the X axis is always random, which is why my pyramid's orientation is always wrong. How can I point the X axis?
Here is my code:
let pyramidGeometry = SCNPyramid(width: 0.1, height: 0.2, length: 0.001)
pyramidGeometry.firstMaterial?.diffuse.contents = UIColor.white
pyramidGeometry.firstMaterial?.lightingModel = .constant
pyramidGeometry.firstMaterial?.isDoubleSided = true
let nodePyramid = SCNNode(geometry: pyramidGeometry)
nodePyramid.position = SCNVector3(2, 1, 2)
parent.addChildNode(nodePyramid)
let nodeZToLookAt = SCNNode()
nodeZToLookAt.position = v2
parent.addChildNode(nodeZToLookAt)
let constraint = SCNLookAtConstraint(target: nodeZToLookAt)
nodePyramid.constraints = [constraint]
But that only sets the direction of the Z axis, which is why the X axis, and therefore the rotation, is always random. How can I set the X axis direction toward my point V1?
Starting with iOS 11, SCNLookAtConstraint has a localFront property that allows you to change which axis is used to orient the constrained node.
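A minimal sketch of that approach, reusing the node names from the question (whether the X axis then lands exactly on V1 depends on your geometry and worldUp):

import SceneKit

// iOS 11+: localFront is the axis, in the constrained node's own space,
// that the constraint turns toward the target.
let lookAt = SCNLookAtConstraint(target: nodeZToLookAt)
lookAt.localFront = SCNVector3(0, 0, 1)   // aim the local +Z axis at V2
lookAt.isGimbalLockEnabled = true         // optionally suppress roll around the look axis
nodePyramid.constraints = [lookAt]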

Translate 3D position taken from CMMotionManager() to set angle on 2D CG Coordinate System

I don't know if this is basic math I have to compute, or just my inexperience with the pitch, roll, and yaw values. At the moment I have an image object that moves based on my accelerometer values.
// Move the ball based on accelerometer values
delta.x = CGFloat(acceleration.x * 10)
delta.y = CGFloat(acceleration.y * 10)
ball.center = CGPointMake(ball.center.x + delta.x, ball.center.y + delta.y)
I can calculate the pitch through the attitude and get the angle. What I want to do is line up my "ball" in the center of the screen only when the phone is at a certain angle, let's say 45 degrees. How can I move my ball so that it lines up in the center based on the specific angles given?
Your screen height is H pixels.
Your screen width is W pixels.
The horizontal centre of the screen is x = W / 2
I'm assuming from your question you want the ball centre to vary between the top (x, 0) when the screen is flat and bottom (x, H) when the screen is vertical.
If the angle of your phone θ varies between 0 and π, then y = θ / π * H
ball.center = CGPoint(x: W / 2, y: θ / π * H)
All you need is the trig to work out θ from the gyro readings.
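As a rough Swift sketch of that mapping with CMMotionManager (the names here are illustrative): CMAttitude reports pitch in radians in roughly the -π/2...π/2 range, so this clamps it to 0...π/2, with flat mapping to the top of the screen and upright to the bottom.

import UIKit
import CoreMotion

let motionManager = CMMotionManager()

// Map the device pitch onto the vertical position of `ball` inside `containerView`.
func startTrackingPitch(ball: UIView, in containerView: UIView) {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let attitude = motion?.attitude else { return }
        // Clamp pitch to 0...π/2: 0 = device flat, π/2 = device upright.
        let theta = min(max(attitude.pitch, 0), Double.pi / 2)
        let h = containerView.bounds.height
        let w = containerView.bounds.width
        // y = θ / θmax * H, the linear mapping described above.
        ball.center = CGPoint(x: w / 2, y: CGFloat(theta / (Double.pi / 2)) * h)
    }
}

With this mapping, a 45-degree pitch (π/4) puts the ball at the vertical center of the view, which matches the behaviour asked about in the question.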

How to get the X and Y of the edges of the screen in Swift

I've written this code, but it's not working the way I want: I only get random positions anywhere in the view, whereas what I mean to get is random positions on the edges of the screen. How can I do it?
let viewHeight = self.view!.bounds.height
let viewWidth = self.view!.bounds.width
randomYPosition = CGFloat(arc4random_uniform(UInt32(viewHeight)))
randomXPosition = CGFloat(arc4random_uniform(UInt32(viewWidth)))
Use the following properties:
For Max:
view.frame.maxX
view.frame.maxY
For minimum:
view.frame.minX
view.frame.minY
If you really want the bounds of the screen, then you should see How to get the screen width and height in iOS?, which tells you that you need to look at the UIScreen object for its dimensions. In Swift, the code works out to:
let screenRect = UIScreen.mainScreen().bounds
let screenWidth = screenRect.size.width
let screenHeight = screenRect.size.height
That really gives you the screen width and height, but it'd be strange if the screen origin weren't at (0,0). If you want to be sure, you can add the origin's coordinates:
let screenWidth = screenRect.size.width + screenRect.origin.x
let screenHeight = screenRect.size.height + screenRect.origin.y
All that said, there's usually little reason to look at the screen itself. With iOS now supporting split screen, your app may only be using part of the screen. It'd make more sense to look at the app's window, or even just at the view controller's view. Since these are both UIViews, they both have a bounds property just like any other view.
If I got you correctly, what you want is random points that fall on the edges of the view. That means that on each of the four sides of the view, either x or y is fixed. So here is the solution:
Top wall random points (where y is fixed to 0 and x varies):
topRandomPoint = CGPointMake(CGFloat(arc4random_uniform(UInt32(viewWidth))), 0)
Right side wall points (where x is fixed to the max width, e.g. 320, and y varies):
rightRandomPoint = CGPointMake(viewWidth, CGFloat(arc4random_uniform(UInt32(viewHeight))))
Bottom random points (where y is fixed to the max height, e.g. 568, and x varies):
bottomRandomPoint = CGPointMake(CGFloat(arc4random_uniform(UInt32(viewWidth))), viewHeight)
Left random points (where x is fixed to 0 and y varies):
leftRandomPoint = CGPointMake(0, CGFloat(arc4random_uniform(UInt32(viewHeight))))
This should give you a random number in the range 0 - viewHeight and 0 - viewWidth respectively:
CGFloat(arc4random_uniform(UInt32(viewHeight)))
CGFloat(arc4random_uniform(UInt32(viewWidth)))
Now you need to construct the position based on the 4 cases (assuming a top-left origin, as in UIKit; see the sketch after this list):
X = 0, Y = Random (0 - ViewHeight) --> Left side
X = self.view!.bounds.width, Y = Random (0 - ViewHeight) --> Right side
X = Random (0 - ViewWidth), Y = 0 --> Top side
X = Random (0 - ViewWidth), Y = self.view!.bounds.height --> Bottom side
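A compact way to combine those four cases, as a sketch only: it assumes the view's bounds origin is (0, 0), which is the usual case, and uses the newer CGFloat.random API instead of arc4random_uniform.

import UIKit

// Pick one of the four edges at random, then a random point along it.
func randomEdgePoint(in bounds: CGRect) -> CGPoint {
    let x = CGFloat.random(in: 0...bounds.width)
    let y = CGFloat.random(in: 0...bounds.height)
    switch Int.random(in: 0..<4) {
    case 0:  return CGPoint(x: x, y: 0)              // top edge
    case 1:  return CGPoint(x: x, y: bounds.height)  // bottom edge
    case 2:  return CGPoint(x: 0, y: y)              // left edge
    default: return CGPoint(x: bounds.width, y: y)   // right edge
    }
}

// Usage: let spawnPoint = randomEdgePoint(in: view.bounds)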

How to get coordinates of corners from GPUImageHarrisCornerDetector?

I want to detect rectangles and their corners via the Harris corner detector. The filter has a block property that is called with the detected corners:
filter.cornersDetectedBlock = { (cornerArray:UnsafeMutablePointer<GLfloat>, cornersDetected:UInt, frameTime:CMTime) in
The problem is that cornerArray holds GLfloat values between 0 and 1. I don't know how to create something like a CGPoint with x and y values from that. Any ideas how to achieve this?
Thanks!
I don't know the specifics, but in general you have to interpolate.
I assume that you're getting back values for x and y that both range from 0 to 1, where 0 is the left/bottom edge, and 1 is the right/top edge?
You just need to lay out a ratio and convert from one coordinates system to the other.
The range 0...1000 is to the range 0...1 as x is to 0.5:
0.5 / 1 = x / 1000
x * 1 = 0.5 * 1000
x = 0.5 * 1000 / 1
x = 500
So if you get a value of 0.5, it is halfway between 0 and 1000: (1000 - 0) * 0.5. If your pixel rectangle has an origin of (0, 0), you just multiply your 0...1 x value by your pixel width and your 0...1 y value by your pixel height. If the pixel origin is not 0, then you also need to add the origin.
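A small hedged Swift helper along those lines (the function name is mine; whether you need the Y flip depends on whether your target coordinate system has a top-left origin, since the values are assumed to use 0 at the bottom edge):

import CoreGraphics

// Scale a normalized (0...1) corner into a pixel/point coordinate.
// Flip Y when converting into a top-left-origin system such as UIKit.
func cornerPoint(normalizedX: CGFloat, normalizedY: CGFloat,
                 in size: CGSize, flipY: Bool = true) -> CGPoint {
    let x = normalizedX * size.width
    let y = flipY ? (1.0 - normalizedY) * size.height
                  : normalizedY * size.height
    return CGPoint(x: x, y: y)
}

// cornerPoint(normalizedX: 0.5, normalizedY: 0.5, in: CGSize(width: 1000, height: 1000))
// -> (500.0, 500.0)

Each GLfloat from cornerArray can be wrapped with CGFloat(...) before being passed in.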
