Get 3D model's height and its transformed screen coordinates - ios

I'm rendering a Collada (*.dae) file with ARKit. As an overlay of my ARSCNView I'm adding an SKScene that simply shows a message bubble (without text yet).
Currently, I know how to modify the position of the bubble so that it looks like it's always at the feet of my 3D model. I'm doing it like this:
func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
    if let overlay = sceneView.overlaySKScene as? BubbleMessageScene {
        guard let borisNode = sceneView.scene.rootNode.childNode(withName: "boris", recursively: true) else { return }

        let boxWorldCoordinates = sceneView.scene.rootNode.convertPosition(borisNode.position, from: sceneView.scene.rootNode.parent)
        let screenCoordinates = self.sceneView.projectPoint(boxWorldCoordinates)
        let boxY = overlay.size.height - CGFloat(screenCoordinates.y)

        overlay.bubbleNode?.position.x = CGFloat(screenCoordinates.x) - (overlay.bubbleNode?.size.width)!/2
        overlay.bubbleNode?.position.y = boxY
    }
}
However, my bubble is always at the feet of the 3D model, because I can only get the SCNNode position of my model where it is anchored. I would like it to be at the head of my model.
Is there a way I can get the height of my 3D model, and then its transformed screen coordinates, so that no matter where I am with my phone the message bubble always appears next to the head?

Each SCNNode has a boundingBox property, which the documentation describes as:
"The minimum and maximum corner points of the object's bounding box."
So what this means is that:
Scene Kit defines a bounding box in the local coordinate space using two points identifying its corners, which implicitly determine six axis-aligned planes marking its limits. For example, if a geometry’s bounding box has the minimum corner {-1, 0, 2} and the maximum corner {3, 4, 5}, all points in the geometry’s vertex data have an x-coordinate value between -1.0 and 3.0, inclusive.
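So, for the documentation's example corners, the height is simply the difference of the y components (a trivial check, written here in Swift):

let (minVec, maxVec) = (SCNVector3(-1, 0, 2), SCNVector3(3, 4, 5))
let height = maxVec.y - minVec.y // 4.0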
If you look in the SceneKit editor you will also be able to see the size of your model in meters (I mention this simply as a reference point you can use to check the calculations).
In my example I am using a Pokémon model.
I scaled the model (which you likely did as well), e.g.:
pokemonModel.scale = SCNVector3(0.01, 0.01, 0.01)
So in order to get the boundingBox of the SCNNode we can do this:
/// Prints The Original (Unscaled) Width & Height Of An SCNNode
///
/// - Parameter node: SCNNode
func getSizeOfModel(_ node: SCNNode) {

    //1. Get The Size Of The Node Without Scale
    let (minVec, maxVec) = node.boundingBox
    let unScaledHeight = maxVec.y - minVec.y
    let unScaledWidth = maxVec.x - minVec.x

    print("""
    UnScaled Height = \(unScaledHeight)
    UnScaled Width = \(unScaledWidth)
    """)
}
Calling it like so:
getSizeOfModel(pokemonModel)
Now, since our SCNNode has been scaled, this alone doesn't help much, so we need to take the scale into account by rewriting the function:
/// Prints The Original & Scaled Width & Height Of An SCNNode
///
/// - Parameters:
///   - node: SCNNode
///   - scalar: Float
func getOriginalAndScaledSizeOfNode(_ node: SCNNode, scalar: Float) {

    //1. Get The Size Of The Node Without Scale
    let (minVec, maxVec) = node.boundingBox
    let unScaledHeight = maxVec.y - minVec.y
    let unScaledWidth = maxVec.x - minVec.x

    print("""
    UnScaled Height = \(unScaledHeight)
    UnScaled Width = \(unScaledWidth)
    """)

    //2. Get The Size Of The Node With Scale
    let max = node.boundingBox.max
    let maxScale = SCNVector3(max.x * scalar, max.y * scalar, max.z * scalar)

    let min = node.boundingBox.min
    let minScale = SCNVector3(min.x * scalar, min.y * scalar, min.z * scalar)

    let heightOfNodeScaled = maxScale.y - minScale.y
    let widthOfNodeScaled = maxScale.x - minScale.x

    print("""
    Scaled Height = \(heightOfNodeScaled)
    Scaled Width = \(widthOfNodeScaled)
    """)
}
Which would be called like so:
getOriginalAndScaledSizeOfNode(pokemonModel, scalar: 0.01)
Having done this you say you want to position a 'bubble' above your model, which could then be done like so:
func getSizeOfNodeAndPositionBubble(_ node: SCNNode, scalar: Float) {

    //1. Get The Size Of The Node Without Scale
    let (minVec, maxVec) = node.boundingBox
    let unScaledHeight = maxVec.y - minVec.y
    let unScaledWidth = maxVec.x - minVec.x

    print("""
    UnScaled Height = \(unScaledHeight)
    UnScaled Width = \(unScaledWidth)
    """)

    //2. Get The Size Of The Node With Scale
    let max = node.boundingBox.max
    let maxScale = SCNVector3(max.x * scalar, max.y * scalar, max.z * scalar)

    let min = node.boundingBox.min
    let minScale = SCNVector3(min.x * scalar, min.y * scalar, min.z * scalar)

    let heightOfNodeScaled = maxScale.y - minScale.y
    let widthOfNodeScaled = maxScale.x - minScale.x

    print("""
    Scaled Height = \(heightOfNodeScaled)
    Scaled Width = \(widthOfNodeScaled)
    """)

    //3. Create A Bubble
    let pointNodeHolder = SCNNode()
    let pointGeometry = SCNSphere(radius: 0.04)
    pointGeometry.firstMaterial?.diffuse.contents = UIColor.cyan
    pointNodeHolder.geometry = pointGeometry

    //4. Place The Bubble At The Model's X & Z Position, And At The Model's Y Position + Its Scaled Height
    pointNodeHolder.position = SCNVector3(node.position.x, node.position.y + heightOfNodeScaled, node.position.z)
    self.augmentedRealityView.scene.rootNode.addChildNode(pointNodeHolder)
}
This yields the result shown in the original post (which I also tested on a few other unfortunate Pokémon).
You will probably want to add a bit of 'padding' as well to the calculation, so that the node is a bit higher up than the top of the model e.g:
pointNodeHolder.position = SCNVector3(node.position.x, node.position.y + heightOfNodeScaled + 0.1, node.position.z)
I am not great at Maths, and this uses an SCNNode for the bubble rather than an SKScene, but hopefully it will point you in the right direction...

You can get borisNode.boundingBox: (float3, float3) to calculate the size of the node. You get a tuple of two points; calculate the height by subtracting the y of one point from the y of the other. Finally, move your overlay's Y position by the number you get.
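Putting the two ideas together for the SKScene overlay from the question, a minimal sketch (reusing the question's sceneView, BubbleMessageScene, and "boris" node, and assuming a uniform scale already applied to the node and iOS 11+ for worldPosition) might look like this:

func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
    guard let overlay = sceneView.overlaySKScene as? BubbleMessageScene,
          let borisNode = sceneView.scene.rootNode.childNode(withName: "boris", recursively: true) else { return }

    // Height of the model in local space, multiplied by the node's scale
    let (minVec, maxVec) = borisNode.boundingBox
    let scaledHeight = (maxVec.y - minVec.y) * borisNode.scale.y

    // World position of the top of the model's head
    let headWorldPosition = SCNVector3(borisNode.worldPosition.x,
                                       borisNode.worldPosition.y + scaledHeight,
                                       borisNode.worldPosition.z)

    // Project to screen space, flipping Y for SpriteKit's coordinate system
    let screenCoordinates = sceneView.projectPoint(headWorldPosition)
    overlay.bubbleNode?.position.x = CGFloat(screenCoordinates.x) - (overlay.bubbleNode?.size.width ?? 0) / 2
    overlay.bubbleNode?.position.y = overlay.size.height - CGFloat(screenCoordinates.y)
}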

Related

How do we do rectilinear image conversion with swift and iOS 11+

How do we use the function Apple provides (below) to perform rectilinear conversion?
Apple provides a reference implementation in 'AVCameraCalibrationData.h' showing how to correct images for lens distortion, i.e. going from images taken with a wide-angle or telephoto lens to the rectilinear 'real world' image. A pictorial representation accompanies the original post.
To create a rectilinear image, we must begin with an empty destination buffer and iterate through it row by row, calling the sample implementation below for each point in the output image, passing the lensDistortionLookupTable to find the corresponding value in the distorted image, and writing it to the output buffer.
func lensDistortionPoint(for point: CGPoint, lookupTable: Data, distortionOpticalCenter opticalCenter: CGPoint, imageSize: CGSize) -> CGPoint {
    // The lookup table holds the relative radial magnification for n linearly spaced radii.
    // The first position corresponds to radius = 0
    // The last position corresponds to the largest radius found in the image.

    // Determine the maximum radius.
    let delta_ocx_max = Float(max(opticalCenter.x, imageSize.width - opticalCenter.x))
    let delta_ocy_max = Float(max(opticalCenter.y, imageSize.height - opticalCenter.y))
    let r_max = sqrt(delta_ocx_max * delta_ocx_max + delta_ocy_max * delta_ocy_max)

    // Determine the vector from the optical center to the given point.
    let v_point_x = Float(point.x - opticalCenter.x)
    let v_point_y = Float(point.y - opticalCenter.y)

    // Determine the radius of the given point.
    let r_point = sqrt(v_point_x * v_point_x + v_point_y * v_point_y)

    // Look up the relative radial magnification to apply in the provided lookup table.
    let magnification: Float = lookupTable.withUnsafeBytes { (lookupTableValues: UnsafePointer<Float>) in
        let lookupTableCount = lookupTable.count / MemoryLayout<Float>.size

        if r_point < r_max {
            // Linear interpolation
            let val = r_point * Float(lookupTableCount - 1) / r_max
            let idx = Int(val)
            let frac = val - Float(idx)

            let mag_1 = lookupTableValues[idx]
            let mag_2 = lookupTableValues[idx + 1]

            return (1.0 - frac) * mag_1 + frac * mag_2
        } else {
            return lookupTableValues[lookupTableCount - 1]
        }
    }

    // Apply radial magnification
    let new_v_point_x = v_point_x + magnification * v_point_x
    let new_v_point_y = v_point_y + magnification * v_point_y

    // Construct output
    return CGPoint(x: opticalCenter.x + CGFloat(new_v_point_x), y: opticalCenter.y + CGFloat(new_v_point_y))
}
Additionally, Apple states that the "point", "opticalCenter", and "imageSize" parameters must all be in the same coordinate system.
With that in mind, what values do we pass for opticalCenter and imageSize and why? What exactly is the "applying radial magnification" doing?
The opticalCenter parameter is actually named distortionOpticalCenter, so you can pass lensDistortionCenter from AVCameraCalibrationData.
imageSize is the height and width of the image you want to make rectilinear.
"Applying radial magnification": it moves the given point's coordinates to where they would be with an ideal lens, free of distortion.
"How do we use the function...": create an empty buffer with the same size as the distorted image. For each pixel of the empty buffer, apply the lensDistortionPoint function, and copy the pixel at the corrected coordinates from the distorted image into the empty buffer. Once the whole buffer has been filled, you have an undistorted image.

Get x and y coordinates of an image according to the superview's centre position in ios

I'm trying to find the x and y coordinates of an image which can be scaled and moved, relative to the view's centre position. The x and y coordinates for a given position should be the same on any device; that's why I want to find the image's own position coordinates, since the image resolution does not change regardless of the device. I have attached the source code of what I have tried so far. Can anyone give me a suggestion on how to do this?
link to the GitHub project
I was able to get exact image coordinates by touching the image; this is how I tried it below. But I want to get those coordinates from a pin placed by a button-click event.
@objc func tapAction(tapGestureRecognizer: UITapGestureRecognizer) {
    let touchPoint: CGPoint = tapGestureRecognizer.location(in: self.imageView)

    // Image size and the image view's bounds
    let Z1 = imageView.image!.size.height
    let Z2 = imageView.image!.size.width
    let Z3 = imageView.bounds.minY
    let Z4 = imageView.bounds.minX
    let Z5 = imageView.bounds.height
    let Z6 = imageView.bounds.width

    // Map the touch point from view coordinates to image-pixel coordinates
    let pos1 = (touchPoint.x - Z4) * Z2 / Z6
    let pos2 = (touchPoint.y - Z3) * Z1 / Z5

    let ZZ1 = "\(pos1)"
    let ZZ2 = "\(pos2)"
    tochpointvalues.text = "Touched point x:\(ZZ1), y:\(ZZ2)"
    print("Touched point (\(ZZ1), \(ZZ2))")
}
To get the (x, y) coordinate of a child view relative to another view in the hierarchy (relative to its direct superview, it is already childView.frame.origin), you can use the convert(_:to:) method, called on the view whose coordinate space the point is currently expressed in. Example, converting the child's origin into the window's coordinate space:
let childCoordinateInWindow = parentView.convert(childView.frame.origin, to: nil) // parentView is childView's superview
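For the pin-by-button-tap case specifically, here is a hedged sketch (assuming a hypothetical pinView in the same view hierarchy as imageView, and the same scale-to-fill mapping used in the tap handler above):

func imageCoordinates(of pinView: UIView, in imageView: UIImageView) -> CGPoint? {
    guard let image = imageView.image, let pinSuperview = pinView.superview else { return nil }

    // Express the pin's center in the image view's coordinate space
    let pointInImageView = pinSuperview.convert(pinView.center, to: imageView)

    // Map from view points to image pixels (same math as the tap handler)
    let x = (pointInImageView.x - imageView.bounds.minX) * image.size.width / imageView.bounds.width
    let y = (pointInImageView.y - imageView.bounds.minY) * image.size.height / imageView.bounds.height
    return CGPoint(x: x, y: y)
}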

Camera is not following the airplane in Scenekit

I have a flying aircraft which I am following, and I am also showing the path the aircraft has followed. I am drawing cylinders as a line for drawing the path; it's kind of like drawing a line between 2 points. I have a cameraNode which is set to (0, 200, 200) initially. At that point I can see the aircraft, but when I start my flight it goes out of the screen. I want 2 things:
1. Follow just the aircraft (the path won't matter).
2. Show the whole path and also the aircraft.
I tried finding the min and max x, y and z and taking the average, but it didn't work. As the GIF in the original post shows, the view is too zoomed in and the aircraft has moved out of the screen.
Here is how I set my camera:
- (void)setUpCamera {
    SCNScene *workingScene = [self getWorkingScene];

    _cameraNode = [[SCNNode alloc] init];
    _cameraNode.camera = [SCNCamera camera];
    _cameraNode.camera.zFar = 500;
    _cameraNode.position = SCNVector3Make(0, 60, 50);
    [workingScene.rootNode addChildNode:_cameraNode];

    SCNNode *frontCameraNode = [SCNNode node];
    frontCameraNode.position = SCNVector3Make(0, 100, 50);
    frontCameraNode.camera = [SCNCamera camera];
    frontCameraNode.camera.xFov = 75;
    frontCameraNode.camera.zFar = 500;
    [_assetActivity addChildNode:frontCameraNode]; // _assetActivity is the aircraft node.
}
Here is how I am changing the camera position, which is not working:
- (void)showRealTimeFlightPath {
    DAL3DPoint *point = [self.aircraftLocation convertCooridnateTo3DPoint];
    DAL3DPoint *previousPoint = [self.previousAircraftLocation convertCooridnateTo3DPoint];

    self.minCoordinate = [self.minCoordinate findMinPoint:self.minCoordinate currentPoint:point];
    self.maxCoordinate = [self.minCoordinate findMaxPoint:self.maxCoordinate currentPoint:point];

    DAL3DPoint *averagePoint = [[DAL3DPoint alloc] init];
    averagePoint = [averagePoint averageBetweenCoordiantes:self.minCoordinate maxPoint:self.maxCoordinate];

    SCNVector3 positions[] = {
        SCNVector3Make(point.x, point.y, point.z),
        SCNVector3Make(previousPoint.x, previousPoint.y, previousPoint.z)
    };

    SCNScene *workingScene = [self getWorkingScene];

    DALLineNode *lineNodeA = [[DALLineNode alloc] init];
    [lineNodeA init:workingScene.rootNode v1:positions[0] v2:positions[1] radius:0.1 radSegementCount:6 lineColor:[UIColor greenColor]];
    [workingScene.rootNode addChildNode:lineNodeA];

    self.previousAircraftLocation = [self.aircraftLocation mutableCopy];
    self.cameraNode.position = SCNVector3Make(averagePoint.x, averagePoint.y, z);
    self.pointOfView = self.cameraNode;
}
Code in Swift or Objective-C is welcome.
Thanks!!
The first behavior you describe would most easily be achieved by chaining a look-at constraint and a distance constraint, both targeting the aircraft.
let lookAtConstraint = SCNLookAtConstraint(target: aircraft)
let distanceConstraint = SCNDistanceConstraint(target: aircraft)
distanceConstraint.minimumDistance = 10 // set to whatever minimum distance between the camera and aircraft you'd like
distanceConstraint.maximumDistance = 10 // set to whatever maximum distance between the camera and aircraft you'd like
camera.constraints = [lookAtConstraint, distanceConstraint]
For iOS 10 and earlier, you can implement a distance constraint using SCNTransformConstraint. Here's a basic (though slightly ugly 😛) implementation that uses linear interpolation to update the node's position.
func normalize(_ value: Float, in range: ClosedRange<Float>) -> Float {
    return (value - range.lowerBound) / (range.upperBound - range.lowerBound)
}

func interpolate(from start: Float, to end: Float, alpha: Float) -> Float {
    return (1 - alpha) * start + alpha * end
}

let target = airplane
let minimumDistance: Float = 10
let maximumDistance: Float = 15

let distanceConstraint = SCNTransformConstraint(inWorldSpace: false) { (node, transform) -> SCNMatrix4 in
    let distance = abs(sqrt(pow(target.position.x - node.position.x, 2) + pow(target.position.y - node.position.y, 2) + pow(target.position.z - node.position.z, 2)))

    let normalizedDistance: Float

    switch distance {
    case ...minimumDistance:
        normalizedDistance = self.normalize(minimumDistance, in: 0 ... distance)
    case maximumDistance...:
        normalizedDistance = self.normalize(maximumDistance, in: 0 ... distance)
    default:
        return transform
    }

    node.position.x = self.interpolate(from: target.position.x, to: node.position.x, alpha: normalizedDistance)
    node.position.y = self.interpolate(from: target.position.y, to: node.position.y, alpha: normalizedDistance)
    node.position.z = self.interpolate(from: target.position.z, to: node.position.z, alpha: normalizedDistance)

    return transform
}
The second behavior could be implemented by determining the bounding box of your aircraft and all of its path segments in the camera's local coordinate space, then updating the camera's distance from the center of that bounding box to frame all of those nodes in the viewport. frameNodes(_:), a convenience method that implements this functionality, was introduced in iOS 11 and is defined on SCNCameraController. I'd recommend using it if possible, unless you want to dive into the trigonometry yourself. You could use your scene view's default camera controller or create a temporary instance, whichever suits the needs of your app.
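For example, a minimal sketch using the view's default camera controller (assuming an SCNView called sceneView, with the aircraft node and its path-segment nodes collected elsewhere; the names are placeholders):

// Frame the aircraft and its entire path in the viewport (iOS 11+)
let cameraController = sceneView.defaultCameraController
cameraController.frameNodes([aircraftNode] + pathSegmentNodes)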
You need to calculate the angle of the velocity so that the camera points in the direction of the moving SCNNode.
This code will point you in the right direction.
func renderer(_ aRenderer: SCNSceneRenderer, didSimulatePhysicsAtTime time: TimeInterval) {
    // get velocity angle using velocity of vehicle
    let degrees = convertVectorToAngle(vector: vehicle.chassisBody.velocity)

    // get rotation of current camera on X and Z axis
    let eX = cameraNode.eulerAngles.x
    let eZ = cameraNode.eulerAngles.z

    // offset rotation on y axis by 90 degrees
    // this needs work, buggy
    let ninety = deg2rad(90)

    // default camera Y Euler angle facing north at 0 degrees
    var eY: Float = 0.0
    if degrees != 0 {
        eY = Float(-degrees) - Float(ninety)
    }

    // rotate camera direction using cameraNode.eulerAngles and direction of velocity as eY
    cameraNode.eulerAngles = SCNVector3Make(eX, eY, eZ)

    // put camera 25 points behind vehicle facing direction of velocity
    let dir = calculateCameraDirection(cameraNode: vehicleNode)
    let pos = pointInFrontOfPoint(point: vehicleNode.position, direction: dir, distance: 25)

    // camera follows driver view from 25 points behind, and 10 points above vehicle
    cameraNode.position = SCNVector3Make(pos.x, vehicleNode.position.y + 10, pos.z)
}

func convertVectorToAngle(vector: SCNVector3) -> CGFloat {
    // Note: atan2 returns radians, despite the variable name
    let degrees = atan2(vector.z, vector.x)
    return CGFloat(degrees)
}

func pointInFrontOfPoint(point: SCNVector3, direction: SCNVector3, distance: Float) -> SCNVector3 {
    let x = point.x + distance * direction.x
    let y = point.y + distance * direction.y
    let z = point.z + distance * direction.z
    return SCNVector3Make(x, y, z)
}

func calculateCameraDirection(cameraNode: SCNNode) -> SCNVector3 {
    let x = -cameraNode.rotation.x
    let y = -cameraNode.rotation.y
    let z = -cameraNode.rotation.z
    let w = cameraNode.rotation.w

    // Build a rotation matrix from the node's axis-angle rotation
    let cameraRotationMatrix = GLKMatrix3Make(cos(w) + pow(x, 2) * (1 - cos(w)),
                                              x * y * (1 - cos(w)) - z * sin(w),
                                              x * z * (1 - cos(w)) + y * sin(w),
                                              y * x * (1 - cos(w)) + z * sin(w),
                                              cos(w) + pow(y, 2) * (1 - cos(w)),
                                              y * z * (1 - cos(w)) - x * sin(w),
                                              z * x * (1 - cos(w)) - y * sin(w),
                                              z * y * (1 - cos(w)) + x * sin(w),
                                              cos(w) + pow(z, 2) * (1 - cos(w)))

    // Rotate the default camera direction (0, 0, -1) into world space
    let cameraDirection = GLKMatrix3MultiplyVector3(cameraRotationMatrix, GLKVector3Make(0.0, 0.0, -1.0))
    return SCNVector3FromGLKVector3(cameraDirection)
}

func deg2rad(_ number: Double) -> Double {
    return number * .pi / 180
}

SceneKit: use transform or directly manipulate rotation/position properties if goal is to rotate camera and project node X units in front of camera?

For a voxel art app, the goal is to let users move and rotate a camera in a SceneKit scene then tap to place a block.
The code below lets a user rotate a camera by panning. After the gesture ends, we move an existing block so it is -X units on the camera's Z-axis (i.e., -X units in front of the camera).
cameraNode is the scene's point of view and is a child of userNode. When the user moves a joystick, we update the position of userNode.
Question: Other SO posts manipulate camera nodes by applying a transform instead of changing the rotation and position properties. Is one approach better than the other?
func sceneViewPannedOneFinger(sender: UIPanGestureRecognizer) {
    // Get pan distance & convert to radians
    let translation = sender.translationInView(sender.view!)
    var xRadians = GLKMathDegreesToRadians(Float(translation.x))
    var yRadians = GLKMathDegreesToRadians(Float(translation.y))

    // Get x & y radians
    xRadians = (xRadians / 6) + curXRadians
    yRadians = (yRadians / 6) + curYRadians

    // Limit yRadians to prevent rotating 360 degrees vertically
    yRadians = max(Float(-M_PI_2), min(Float(M_PI_2), yRadians))

    // Set rotation values to avoid Gimbal Lock
    cameraNode.rotation = SCNVector4(x: 1, y: 0, z: 0, w: yRadians)
    userNode.rotation = SCNVector4(x: 0, y: 1, z: 0, w: xRadians)

    // Save value for next rotation
    if sender.state == UIGestureRecognizerState.Ended {
        curXRadians = xRadians
        curYRadians = yRadians
    }

    // Set preview block
    setPreviewBlock()
}
private func setPreviewBlock(var futurePosition: SCNVector3 = SCNVector3Zero, reach: Float = 8) -> SCNVector3 {
    // Get future position
    if SCNVector3EqualToVector3(futurePosition, SCNVector3Zero) {
        futurePosition = userNode.position
    }

    // Get current position after accounting for rotations
    let hAngle = Float(cameraNode.rotation.w * cameraNode.rotation.x)
    let vAngle = Float(userNode.rotation.w * userNode.rotation.y)
    var position = getSphericalCoords(hAngle, t: vAngle, r: reach)
    position += userNode.position

    // Snap position to grid
    position = position.rounded()

    // Ensure preview block never dips below floor
    position.y = max(0, position.y)

    // Return if snapped position hasn't changed
    if SCNVector3EqualToVector3(position, previewBlock.position) {
        return position
    }

    // If here, animate preview block to new position
    SCNTransaction.begin()
    SCNTransaction.setAnimationDuration(AnimationTime)
    previewBlock.position = position
    SCNTransaction.commit()

    // Return position
    return position
}
func getSphericalCoords(s: Float, t: Float, r: Float) -> SCNVector3 {
    return SCNVector3(-(cos(s) * sin(t) * r),
                      sin(s) * r,
                      -(cos(s) * cos(t) * r))
}
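For comparison, the "project a node X units in front of the camera" step can also be written against the camera's world transform rather than spherical coordinates. A minimal sketch (assuming iOS 11+ for the simd world-space accessors, which postdate the Swift 2-era code above, and reusing previewBlock and reach from it):

// Place the preview block `reach` units in front of the camera,
// using the camera's world-space orientation instead of getSphericalCoords
let reach: Float = 8
previewBlock.simdPosition = cameraNode.simdWorldPosition + cameraNode.simdWorldFront * reach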

Graphing Polar Functions with UIBezierPath

Is it possible to graph a polar function with UIBezierPath? More than just circles, I'm talking about cardioids, limacons, lemniscates, etc. Basically I have a single UIView, and want to draw the shape in the view.
There are no built-in methods for shapes like that, but you can always approximate them with a series of very short straight lines. I've had reason to approximate a circle this way, and a circle made of ~100 straight lines looks identical to a circle drawn with ovalInRect. It was easiest, when doing this, to create the points in polar coordinates first, then convert them in a loop to rectangular coordinates before passing the points array to a method where I add the lines to a bezier path.
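As a minimal sketch of that approach (a cardioid r = 1 + cos θ approximated by 200 segments; the center and scale values here are arbitrary choices for illustration):

import UIKit

// Approximate the polar curve r = 1 + cos(theta) with short line segments
let path = UIBezierPath()
let center = CGPoint(x: 150, y: 150)
let scale: CGFloat = 60

for i in 0...200 {
    let theta = CGFloat(i) / 200 * 2 * .pi
    let r = (1 + cos(theta)) * scale
    let point = CGPoint(x: center.x + r * cos(theta), y: center.y + r * sin(theta))
    if i == 0 { path.move(to: point) } else { path.addLine(to: point) }
}
path.close()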
Here's my swift helper function (fully commented) that generates the (x,y) coordinates in a given CGRect from a polar coordinate function.
func cartesianCoordsForPolarFunc(frame: CGRect, thetaCoefficient: Double, thetaCoefficientDenominator: Double, cosScalar: Double, iPrecision: Double) -> Array<CGPoint> {
    // Frame: The frame in which to fit this curve.
    // thetaCoefficient: The number to scale theta by in the cos.
    // thetaCoefficientDenominator: The denominator of the thetaCoefficient
    // cosScalar: The number to multiply the cos by.
    // iPrecision: The step for continuity. 0 < iPrecision <= 2.pi. Defaults to 0.1

    // Clean inputs
    var precision: Double = 0.1 // Default precision
    if iPrecision != 0 { // Can't be 0.
        precision = iPrecision
    }

    // This is the polar function
    // var theta: Double = 0 // 0 <= theta <= 2pi
    // let r = cosScalar * cos(thetaCoefficient * theta)

    var points: Array<CGPoint> = [] // We store the points here

    for theta in stride(from: 0, to: 2 * Double.pi * thetaCoefficientDenominator, by: precision) { // Try to recreate continuity
        let x = cosScalar * cos(thetaCoefficient * theta) * cos(theta) // Convert to cartesian
        let y = cosScalar * cos(thetaCoefficient * theta) * sin(theta) // Convert to cartesian

        let scaled_x = (Double(frame.width) - 0) / (cosScalar * 2) * (x - cosScalar) + Double(frame.width) // Scale to the frame
        let scaled_y = (Double(frame.height) - 0) / (cosScalar * 2) * (y - cosScalar) + Double(frame.height) // Scale to the frame

        points.append(CGPoint(x: scaled_x, y: scaled_y)) // Add the result
    }

    return points
}
Given those points, here's an example of how you would draw a UIBezierPath. In my example, this lives in a custom UIView subclass I call UIPolarCurveView.
let flowerPath = UIBezierPath() // Declare my path

// Custom Polar scalars
let k: Double = 9/4
let length: Double = 50

// Draw path
let points = cartesianCoordsForPolarFunc(frame: frame, thetaCoefficient: k, thetaCoefficientDenominator: 4, cosScalar: length, iPrecision: 0.01)
flowerPath.move(to: points[0])
for i in 2...points.count {
    flowerPath.addLine(to: points[i-1])
}
flowerPath.close()
Here's the result (shown in the original post).
PS: If you plan on having multiple graphs in the same frame, make sure to modify the scaling addition by making the second cosScalar the largest of the cosScalars used. You can do this by adding an argument to the function in the example.
