I want to show a 3D path between two SCNVector3 points, like in the screenshot below, in iOS with Swift. The code below draws only a simple straight line between the points:
let indices: [Int32] = [0, 1]
let source = SCNGeometrySource(vertices: [vector1, vector2])
let element = SCNGeometryElement(indices: indices, primitiveType: .line)
let lineGeometry = SCNGeometry(sources: [source], elements: [element])
You can use a lerp (linear interpolation) to get points between the start and end nodes. Evaluate it at a series of fractions, like 10%, 20%, 30%, to get 3D positions along the line between your start and end node, as in the sketch below.
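A minimal sketch of that idea, assuming plain SceneKit; vector1 and vector2 are the endpoints from the question, and scene is assumed to be your SCNScene:

import SceneKit

// Linear interpolation between two SCNVector3 points:
// t = 0 returns start, t = 1 returns end.
func lerp(_ start: SCNVector3, _ end: SCNVector3, _ t: Float) -> SCNVector3 {
    return SCNVector3(start.x + (end.x - start.x) * t,
                      start.y + (end.y - start.y) * t,
                      start.z + (end.z - start.z) * t)
}

// Place a small sphere marker at 10%, 20%, ..., 90% along the path.
for i in 1...9 {
    let t = Float(i) / 10
    let marker = SCNNode(geometry: SCNSphere(radius: 0.005))
    marker.position = lerp(vector1, vector2, t)
    scene.rootNode.addChildNode(marker)
}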
I am currently developing a grid for a simple simulation and I have been tasked with interpolating some values tied to vertices of a triangle.
So far I have this:
let val1 = 10f
let val2 = 15f
let val3 = 12f
let point1 = Vector2(100f, 300f), val1
let point2 = Vector2(300f, 102f), val2
let point3 = Vector2(100f, 100f), val3
let points = [point1; point2; point3]
let find (points : (Vector2*float32) list) (pos : Vector2) =
    let (minX, minXv) = points |> List.minBy (fun (v, valu) -> v.X)
    let (maxX, maxXv) = points |> List.maxBy (fun (v, valu) -> v.X)
    let (minY, minYv) = points |> List.minBy (fun (v, valu) -> v.Y)
    let (maxY, maxYv) = points |> List.maxBy (fun (v, valu) -> v.Y)
    let xy = (pos - minX)/(maxX - minX)*(maxX - minX)
    let dx = ((maxXv - minXv)/(maxX.X - minX.X))
    let dy = ((maxYv - minYv)/(maxY.Y - minY.Y))
    ((dx*xy.X + dy*xy.Y)) + minXv
The function takes a list of points forming a triangle. I find the minimum and maximum X and Y with the corresponding values tied to them.
The problem is that this approach only works for a right triangle. With an equilateral triangle, the interpolated value at a vertex ends up higher than the value that was set there.
So I guess the approach here is essentially to project onto a right triangle and create some sort of transformation matrix between any triangle and this projected triangle?
Is this correct? If not, any pointers would be most appreciated!
You probably want a linear interpolation where the interpolated value is the result of a function of the form
f(x, y) = a*x + b*y + c
If you consider this in 3d, with (x,y) a position on the ground and f(x,y) the height above it, this formula will give you a plane.
To obtain the parameters you can use the points you have:
f(x1, y1) = x1*a + y1*b + 1*c = v1
f(x2, y2) = x2*a + y2*b + 1*c = v2
f(x3, y3) = x3*a + y3*b + 1*c = v3

or, in matrix form:

⎛x1 y1 1⎞   ⎛a⎞   ⎛v1⎞
⎜x2 y2 1⎟ * ⎜b⎟ = ⎜v2⎟
⎝x3 y3 1⎠   ⎝c⎠   ⎝v3⎠
This is a 3×3 system of linear equations: three equations in three unknowns.
You can solve this in a number of ways, e.g. using Gaussian elimination, the inverse matrix, Cramer's rule or some linear algebra library. A numerics expert may tell you that there are differences in the numeric stability between these approaches, particularly if the corners of the triangle are close to lying on a single line. But as long as you're sufficiently far away from that degenerate situation, it probably doesn't make a huge practical difference for simple use cases. Note that if you want to interpolate values for multiple positions relative to a single triangle, you'd only compute a,b,c once and then just use the simple linear formula for each input position, which might lead to a considerable speed-up.
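For illustration, here is a minimal sketch of the a, b, c computation via the inverse matrix, using the question's corner values. The question's code is F#, but for consistency with the rest of this page the sketch is Swift with the simd library:

import simd

// Corner positions (x, y) and the value v at each corner (from the question).
let p1 = SIMD2<Double>(100, 300), v1 = 10.0
let p2 = SIMD2<Double>(300, 102), v2 = 15.0
let p3 = SIMD2<Double>(100, 100), v3 = 12.0

// Each row encodes one equation: x*a + y*b + 1*c = v
let m = simd_double3x3(rows: [
    SIMD3<Double>(p1.x, p1.y, 1),
    SIMD3<Double>(p2.x, p2.y, 1),
    SIMD3<Double>(p3.x, p3.y, 1)
])
let abc = m.inverse * SIMD3<Double>(v1, v2, v3) // (a, b, c)

// Compute a, b, c once, then reuse for any number of positions.
func interpolatedValue(at pos: SIMD2<Double>) -> Double {
    return abc.x * pos.x + abc.y * pos.y + abc.z
}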
Advanced info: For some applications, linear interpolation is not good enough, but to find something more appropriate you would need to provide more data than your question suggests is available. One example that comes to my mind is triangle meshes for 3D rendering. If you use linear interpolation to map the triangles to texture coordinates, they will line up along the edges, but the direction of the mapping can change abruptly, leading to noticeable seams. A kind of projective or weighted interpolation can avoid this, as I learned from a paper on conformal equivalence of triangle meshes (Springborn, Schröder, Pinkall, 2008), but for that you need to know how the triangle in world coordinates maps to the triangle in texture coordinates, and you also need the triangle mesh and its correspondence to the texture to be compatible with this mapping. Then you'd map in such a way that you transport not only corners to corners, but also circumcircle to circumcircle.
I am trying to calculate the bounce vector (reflection vector) for a given direction at a specific intersection point/surface in 3D SceneKit space within an AR session.
To do this, I send out a hit test from the exact center of the screen, straight forward. There is, for example, a cube positioned, let's say, 2 meters in front of me. Now I'd like to continue this hit test in the logical rebound/reflection direction, just as a light ray on a mirror would. Of course the hit test ends at its intersection point, but from there I would like to draw a line, or a small and long SCNTube node, to visualise the direction in which this hit test would continue if it were reflected by one of the faces of the cube. And this from any particular direction.
Let's say I have the direction vector in which I send the hit test. I also have the intersection point given by the hit test result. And I have the normal of the surface at the intersection point.
According to some answers I found about this issue on linear algebra forums:
https://math.stackexchange.com/questions/2235997/reflecting-ray-on-triangle-in-3d-space
https://math.stackexchange.com/questions/13261/how-to-get-a-reflection-vector
the following formula should do the job in 3D space (and it should give me the rebound/reflection vector):
r = d − 2(d⋅n)n
(where d⋅n is the dot product, and n must be normalised. r is the reflection vector.)
I tried to write a Swift implementation of that, but it produces nonsense. Here is my code:
let location: CGPoint = screenCenter
let hits = self.sceneView.hitTest(location, options: [SCNHitTestOption.categoryBitMask: NodeCategory.catCube.rawValue, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])

if !hits.isEmpty {
    print("we have a hittest")
    let d = currentDirection
    let p = hits.first?.worldCoordinates // Hit Location
    let n = hits.first?.worldNormal // Normal of Hit Location
    print("D = \(d)")
    print("P = \(p)")
    print("N = \(n)")
    // r = d - 2*(d*n).normalized // the Formula
    let r: SCNVector3 = d - (d.crossProduct(n!).crossProduct(d.crossProduct(n!))).normalized
    // let r: SCNVector3 = d - (2 * d.crossProduct(n!).normalized) // I also tried this, but that gives me errors in Xcode
    print("R = \(r)")
    // This function should then set up the node aligned to that new vector
    setupRay(position: p!, euler: r)
}
All this results in nonsense. I get the following console output:
we are in gesture TAP recognizer
we have a hittest
D = SCNVector3(x: -0.29870644, y: 0.5494926, z: -0.7802771)
P = Optional(__C.SCNVector3(x: -0.111141175, y: 0.034069262, z: -0.62390435))
N = Optional(__C.SCNVector3(x: 2.672451e-08, y: 1.0, z: 5.3277716e-08))
R = SCNVector3(x: nan, y: nan, z: nan)
My Euler Angle: SCNVector3(x: nan, y: nan, z: nan)
(D is the direction of the hit test, P is the point of intersection, N is the normal at the point of intersection; R should be the reflection vector but is always just nan, not a number.)
I also tried the extension dotProduct instead of crossProduct, but dotProduct gives me a Float value, which I cannot use in arithmetic with an SCNVector3.
How can I calculate this rebound vector and align an SCNNode facing in that direction (with the pivot at the start point, the point of intersection from the hit test)?
What am I doing wrong? Can anyone show me a working Swift implementation of that calculation?
Any help would be appreciated. (Linear algebra is not among my powers.)
PS: I use the standard SCNVector3 math extensions available on GitHub.
Finally, this solution works as far as I can tell on iOS 12 (successfully tested). It gives you the reflection vector from any surface and any point of view.
let location: CGPoint = screenCenter
let hits = self.sceneView.hitTest(location, options: [SCNHitTestOption.categoryBitMask: NodeCategory.catCube.rawValue, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])

if let hitResult = hits.first {
    let direction = normalize(double3(sceneView.pointOfView!.worldFront))
    // let reflectedDirection = reflect(direction, n: double3(hitResult.worldNormal))
    let reflectedDirection = simd_reflect(direction, double3(hitResult.worldNormal))
    print(reflectedDirection)
    // Use the result for whatever purpose
    setupRay(position: hitResult.worldCoordinates, reflectDirection: SCNVector3(reflectedDirection))
}
The SIMD library is really useful for things like that:
if let hitResult = hitResults.first {
    let direction = normalize(scnView.pointOfView!.simdPosition - hitResult.simdWorldCoordinates)
    let reflectedDirection = reflect(direction, n: hitResult.simdWorldNormal)
    print(reflectedDirection)
}
I have created a custom SCNGeometry object with some help from this question. However, when applying the geometry to an SCNNode, the pivot does not seem to be quite in the correct location: when rotating the node, I want to rotate it around the center of the geometry, but instead it rotates around another point.

I can fix this by changing the node's pivot using node.pivot = SCNMatrix4MakeTranslation(ARROW_WIDTH / 2, 0, ARROW_LENGTH / 2), where ARROW_WIDTH refers to the width of the geometry and ARROW_LENGTH to its length. This is not ideal, however, as every time I create a new node with the geometry, I have to manually fix the node's pivot. Is there a way to define the "pivot" of a geometry somehow?
Current code that creates the custom SCNGeometry:
/**
 Default constructor of an arrow geometry. This constructor takes the parameters needed to construct the geometry at the specified size.
 - parameters:
    - length: The length of the arrow, which is the dimension the arrow points in.
    - height: The height of the arrow, which is the thickness of the arrow.
    - width: The width of the arrow.
    - indent: The indent of the arrow, which is the point and gap on the front and back of the arrow.
 */
init(length: Float, height: Float, width: Float, indent: Float) {
    self.length = length
    self.height = height
    self.width = width
    self.indent = indent > length ? length : indent

    // Vertices
    let v0 = SCNVector3(0, height / 2, 0)
    let v1 = SCNVector3(width / 2, height / 2, indent)
    ... more vertices
    let h4 = SCNVector3(width, -height / 2, indent)
    let h5 = SCNVector3(width / 2, -height / 2, length - indent)

    let vertices = [
        // Top layer bottom triangles
        v0, v1, h0,
        v1, v2, h1,
        ... more vertices
        v4, v10, v5,
        v10, v11, v5
    ]

    // Normals
    let pX = SCNVector3(1, 0, 0)
    ... more normals
    let topRight = calculateNormal(v1: v3, v2: v9, v3: v4)

    let normals = [
        // Top layer bottom triangles
        pY, pY, pY,
        ... more normals
        topLeft, topLeft, topLeft
    ]

    // Indices
    let indices: [Int32] = vertices.enumerated().map({ Int32($0.0) })

    // Sources (note: normals need the normals initializer, not vertices)
    let vertexSource = SCNGeometrySource(vertices: vertices)
    let normalSource = SCNGeometrySource(normals: normals)

    // Create the geometry
    let indexData = Data(bytes: indices, count: MemoryLayout<Int32>.size * indices.count)
    let element = SCNGeometryElement(data: indexData, primitiveType: .triangles, primitiveCount: indices.count / 3, bytesPerIndex: MemoryLayout<Int32>.size)
    self._geometry = SCNGeometry(sources: [vertexSource, normalSource], elements: [element])
}
Without applying the manual pivot fix on the node, the arrow renders like this (note that the red point is the scene origin (0, 0, 0), and the arrow is positioned in the root node of the scene at that same position):
no manual pivot fix
When applying the manual pivot fix on the node, the arrow renders like this:
manual pivot fix
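One way to avoid the per-node fix (a sketch only, assuming the arrow's footprint runs from 0 to width on x and 0 to length on z, as in the constructor above) is to offset every vertex at construction time so the geometry is centered on its local origin:

// Shift the whole vertex list so the geometry is centered on the
// local origin; the default pivot is then already at the center.
let centeredVertices = vertices.map {
    SCNVector3($0.x - width / 2, $0.y, $0.z - length / 2)
}
let vertexSource = SCNGeometrySource(vertices: centeredVertices)

Every node created from a geometry built this way rotates around the geometry's center without touching node.pivot.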
I'm trying to get the four vectors that make up the boundaries of the frustum in ARKit, and the solution I came up with is as follows:

1. Find the field-of-view angles of the camera.
2. Find the direction and up vectors of the camera.
3. Using this information, find the four vectors with cross products and rotations.

This may be a sloppy way of doing it, but it is the best one I have so far.
I am able to get the FOV angles and the direction vector from the ARCamera.intrinsics and ARCamera.transform properties. However, I don't know how to get the up vector of the camera at this point.
Below is the piece of code I use to find the FOV angles and the direction vector:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    if xFovDegrees == nil || yFovDegrees == nil {
        let imageResolution = frame.camera.imageResolution
        let intrinsics = frame.camera.intrinsics
        xFovDegrees = 2 * atan(Float(imageResolution.width) / (2 * intrinsics[0,0])) * 180 / Float.pi
        yFovDegrees = 2 * atan(Float(imageResolution.height) / (2 * intrinsics[1,1])) * 180 / Float.pi
    }
    let cameraTransform = SCNMatrix4(frame.camera.transform)
    let cameraDirection = SCNVector3(-1 * cameraTransform.m31,
                                     -1 * cameraTransform.m32,
                                     -1 * cameraTransform.m33)
}
I am also open to suggestions for other ways to find the four vectors I'm trying to get.
I had not understood how this line worked:
let cameraDirection = SCNVector3(-1 * cameraTransform.m31,
-1 * cameraTransform.m32,
-1 * cameraTransform.m33)
This gives the direction vector of the camera because the 3rd row of the transformation matrix gives where the new z-direction of the transformed camera points at. We multiply it by -1 because the default direction of the camera is the negative z-axis.
Considering this information and the fact that the default up vector for a camera is the positive y-axis, the 2nd row of the transformation matrix gives us the up vector of the camera. The following code gives me what I want:
let cameraUp = SCNVector3(cameraTransform.m21,
cameraTransform.m22,
cameraTransform.m23)
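Putting the two snippets together (a sketch; cameraTransform is the SCNMatrix4 from the question's session callback), the rows of the rotation part of the transform give all three camera basis vectors:

let right   = SCNVector3( cameraTransform.m11,  cameraTransform.m12,  cameraTransform.m13) // local +x axis
let up      = SCNVector3( cameraTransform.m21,  cameraTransform.m22,  cameraTransform.m23) // local +y axis
let forward = SCNVector3(-cameraTransform.m31, -cameraTransform.m32, -cameraTransform.m33) // camera looks down -z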
It could be that I'm misunderstanding what you're trying to do, but I'd like to offer an alternative solution (the method and result are different from your answer).
For my purposes, I define the up vector as (0, 1, 0) when the phone is pointing straight up - basically I want the unit vector that is pointing straight out of the top of the phone. ARKit defines the up vector as (0, 1, 0) when the phone is horizontal to the left - so the y-axis is pointing out of the right side of the phone - supposedly because they expect AR apps to prefer horizontal orientation.
camera.transform returns the camera's orientation relative to its initial orientation when the AR session started. It is a 4x4 matrix - the first 3x3 of which is the rotation matrix - so when you write cameraTransform.m21 etc. you are referencing part of the rotation matrix, which is NOT the same as the up vector (however you define it).
So if I define the up vector as the unit y-vector where the y axis is pointing out of the top of the phone, I have to write this as (-1, 0, 0) in ARKit space. Then simply multiplying this vector (slightly modified... see below) by the camera's transform will give me the "up vector" that I'm looking for. Below is an example of using this calculation in a ARSessionDelegate callback.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // the unit y vector is appended with an extra element
    // for multiplying with the 4x4 transform matrix
    let unitYVector = float4(-1, 0, 0, 1)
    let upVectorH = frame.camera.transform * unitYVector
    // drop the 4th element
    let upVector = SCNVector3(upVectorH.x, upVectorH.y, upVectorH.z)
}
You can use let unitYVector = float4(0, 1, 0, 1) if you are working with ARKit's horizontal orientation.
You can also do the same sort of calculation to get the "direction vector" (pointing out of the front of the phone) by multiplying unit vector (0, 0, 1, 1) by the camera transform.
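A sketch of that variant, following the answer's convention above (the (0, 0, 1, 1) vector is as stated; as with the up vector, the result is relative to the session's initial orientation):

let unitZVector = float4(0, 0, 1, 1)
let directionH = frame.camera.transform * unitZVector
// drop the 4th element
let direction = SCNVector3(directionH.x, directionH.y, directionH.z)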
Is it possible to graph a polar function with UIBezierPath? More than just circles: I'm talking about cardioids, limaçons, lemniscates, etc. Basically I have a single UIView and want to draw the shape in the view.
There are no built-in methods for shapes like that, but you can always approximate them with a series of very short straight lines. I've had reason to approximate a circle this way, and a circle made of ~100 straight lines looks identical to one drawn with ovalInRect. It was easiest, when doing this, to create the points in polar coordinates first, then convert those in a loop to rectangular coordinates before passing the points array to a method that adds the lines to a bezier path.
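For example, a circle approximated with 100 chords might look like this (a sketch; the center and radius values are placeholders):

import UIKit

let center = CGPoint(x: 150, y: 150)
let radius: CGFloat = 100
let segments = 100

let circlePath = UIBezierPath()
for i in 0...segments {
    let theta = 2 * CGFloat.pi * CGFloat(i) / CGFloat(segments)
    let point = CGPoint(x: center.x + radius * cos(theta),
                        y: center.y + radius * sin(theta))
    if i == 0 {
        circlePath.move(to: point)
    } else {
        circlePath.addLine(to: point)
    }
}
circlePath.close()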
Here's my swift helper function (fully commented) that generates the (x,y) coordinates in a given CGRect from a polar coordinate function.
func cartesianCoordsForPolarFunc(frame: CGRect, thetaCoefficient: Double, thetaCoefficientDenominator: Double, cosScalar: Double, iPrecision: Double) -> Array<CGPoint> {
    // frame: The frame in which to fit this curve.
    // thetaCoefficient: The number to scale theta by inside the cos.
    // thetaCoefficientDenominator: The denominator of the thetaCoefficient.
    // cosScalar: The number to multiply the cos by.
    // iPrecision: The step size for continuity. 0 < iPrecision <= 2*pi. Defaults to 0.1.

    // Clean inputs
    var precision: Double = 0.1 // default precision
    if iPrecision > 0 { // must be positive and non-zero
        precision = iPrecision
    }

    // The polar function being plotted:
    // r = cosScalar * cos(thetaCoefficient * theta), for 0 <= theta <= 2*pi
    var points: Array<CGPoint> = [] // the points are collected here
    for theta in stride(from: 0, to: 2 * Double.pi * thetaCoefficientDenominator, by: precision) { // try to recreate continuity
        let x = cosScalar * cos(thetaCoefficient * theta) * cos(theta) // convert to Cartesian
        let y = cosScalar * cos(thetaCoefficient * theta) * sin(theta) // convert to Cartesian
        let scaled_x = (Double(frame.width) - 0)/(cosScalar * 2)*(x - cosScalar) + Double(frame.width) // scale to the frame
        let scaled_y = (Double(frame.height) - 0)/(cosScalar * 2)*(y - cosScalar) + Double(frame.height) // scale to the frame
        points.append(CGPoint(x: scaled_x, y: scaled_y)) // add the result
    }
    return points
}
Given those points, here's an example of how you would draw a UIBezierPath. In my example, this is in a custom UIView subclass I call UIPolarCurveView.
let flowerPath = UIBezierPath() // declare the path

// Custom polar scalars
let k: Double = 9/4
let length: Double = 50

// Draw the path
let points = cartesianCoordsForPolarFunc(frame: frame, thetaCoefficient: k, thetaCoefficientDenominator: 4, cosScalar: length, iPrecision: 0.01)
flowerPath.move(to: points[0])
for i in 2...points.count {
    flowerPath.addLine(to: points[i-1])
}
flowerPath.close()
Here's the result:
PS: If you plan on having multiple graphs in the same frame, make sure to modify the scaling addition by making the second cosScalar the largest of the cosScalars used. You can do this by adding an argument to the function in the example.