How to apply SCNVector3 force/impulse from an orientation in SceneKit? (iOS)

In ARKit/SceneKit, when the user taps the button, I want to apply an impulse to my node. I want the impulse to come from the current user's perspective. This means the node would be moving away from the user's perspective. I'm able to get the current orientation/direction, thanks to this code:
func getUserVector() -> (SCNVector3, SCNVector3) { // (direction, position)
    if let frame = self.sceneView.session.currentFrame {
        let mat = SCNMatrix4(frame.camera.transform) // 4x4 transform matrix describing camera in world space
        let dir = SCNVector3(-1 * mat.m31, -1 * mat.m32, -1 * mat.m33) // orientation of camera in world space
        let pos = SCNVector3(mat.m41, mat.m42, mat.m43) // location of camera in world space
        return (dir, pos)
    }
    return (SCNVector3(0, 0, -1), SCNVector3(0, 0, -0.2))
}
via https://github.com/farice/ARShooter/blob/master/ARViewer/ViewController.swift#L191
I have an arbitrary SCNVector3 that I've created. It describes how high (Y axis), how far to the left or right, and how far forward to push the node.
I want to convert/translate my SCNVector3 to come from the orientation/direction of the camera.
Meaning, I have
let (direction, position) = self.getUserVector()
let force = SCNVector3(x: 1.67, y: 13.83, z: -18.3)
How do I apply the force from the location/origin of the direction?

Figured it out after lots of googling. To convert the impulse vector3 to the direction I need, I used something like this:
let original = SCNVector3(x: 1.67, y: 13.83, z: -18.3)
// w = 0: treat the vector as a direction, so only the camera's rotation is applied, not its translation
let force = simd_make_float4(original.x, original.y, original.z, 0)
let rotatedForce = simd_mul(currentFrame.camera.transform, force)
let vectorForce = SCNVector3(x: rotatedForce.x, y: rotatedForce.y, z: rotatedForce.z)
node.physicsBody?.applyForce(vectorForce, asImpulse: true)
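Wrapped up, a minimal sketch of the same idea (the helper name is mine; it assumes `sceneView` is the ARSCNView and the node already has a dynamic physics body):
func applyImpulse(_ localImpulse: SCNVector3, to node: SCNNode, in sceneView: ARSCNView) {
    guard let camera = sceneView.session.currentFrame?.camera else { return }
    // w = 0 keeps this a direction: the camera's rotation is applied, its translation is not
    let local = simd_make_float4(localImpulse.x, localImpulse.y, localImpulse.z, 0)
    let world = simd_mul(camera.transform, local)
    node.physicsBody?.applyForce(SCNVector3(world.x, world.y, world.z), asImpulse: true)
}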

Related

SceneKit unprojectPoint() with custom camera

I am developing a molecular visualizer for macOS / iPadOS with SceneKit. Long story short, when the user clicks (or touches) the screen at a certain position, I want a new atom to be placed there (in this example just an SCNSphere).
Previously, I had the allowsCameraControl property of the SCNView active, which allowed me to freely move the camera, and with the unprojectPoint() method I could successfully place a new node at the touch location. The limitation of the default camera controller is that it does not zoom: when you pinch the screen, it changes the FOV property of the camera instead of moving it along the Z axis.
Therefore, I made a custom camera node with an SCNCamera. I successfully recreated the default camera behaviour (movement, rotation) and, furthermore, I am able to correctly zoom into the scene. The downside is that the unprojectPoint() method no longer works as expected: the new nodes are placed very close to the camera node itself. No matter where I click on the scene, the unprojected point is always very close to (0, 0, 10).
internal func newNodeAt(point: CGPoint) {
    let pointVector = SCNVector3(point.x, point.y, 0.8)
    let position = self.unprojectPoint(pointVector)
    print("x:\(position.x), y: \(position.y), z: \(position.z)")
    let newSphere = SCNSphere(radius: 1)
    let newNode = SCNNode(geometry: newSphere)
    newNode.position = position
    self.scene?.rootNode.addChildNode(newNode)
}
The camera node is set up as follows and is attached directly to the scene's root node.
internal func setupCameraNode() -> SCNNode {
    let cam = SCNCamera()
    cam.name = "camera"
    cam.zFar = 200
    cam.zNear = 0.1
    let camNode = SCNNode()
    camNode.camera = cam
    camNode.position = SCNVector3(0, 0, 5)
    camNode.name = "Camera node"
    return camNode
}
These are the printed positions after clicking on random positions of the scene.
x:-0.1988764852285385, y: -0.05589345842599869, z: 10.920427322387695
x:-0.18989555537700653, y: 0.14564114809036255, z: 10.920427322387695
x: 0.2168566882610321, y: 0.13085339963436127, z: 10.920427322387695
x: 0.24202580749988556, y: -0.15493911504745483, z: 10.920427322387695
x:-0.06516486406326294, y: -0.1781780868768692, z: 10.920427322387695
x:-0.08134553581476212, y: 0.12478446960449219, z: 10.920427322387695
x:-0.25866374373435974, y: 0.1456427276134491, z: 10.920427322387695
x: 0.217658132314682, y: 0.16270162165164948, z: 10.920427322387695
x: 0.2053154855966568, y: -0.12679903209209442, z: 10.920427322387695
I suppose unprojectPoint() is somehow related to the point of view, but I do not know how to fix this. Thanks.
I think you are on the right track; you just have to provide some kind of depth reference for the user. This is my code for something similar: when I call airStrike, I derive the depth from a plane facing the user, and that's how I know where Z needs to be.
Just a guess without a visual, but it seems there are a couple of options. Create a reference plane in the middle of the molecule and ++/-- it to show where the tap will land from a depth perspective.
Or just let them put it anywhere, then select it and depth++/depth-- to get it into the right position.
@objc func handleTap(recognizer: UITapGestureRecognizer) {
    let location: CGPoint = recognizer.location(in: gameScene)
    if data.isAirStrikeModeOn == true {
        let projectedPoint = gameScene.projectPoint(SCNVector3(0, 0, 0))
        let scenePoint = gameScene.unprojectPoint(SCNVector3(location.x, location.y, CGFloat(projectedPoint.z)))
        gameControl.airStrike(position: scenePoint)
    }
}
After days of testing I figured out a workaround and now I can place the nodes correctly where they should be.
My node tree is like this:
RootNode
    CameraNode
    atomNodes
        atom (individual spheres)
Therefore, all I had to do was convert the unprojected position from the RootNode (which I suppose is the one the camera takes as its reference) to atomNodes, thus:
let unprojected = unprojectPoint(SCNVector3(location.x, location.y, 0.99))
let position = atomNodes.convertPosition(unprojected, from: rootNode)
The 0.99 is just a Z position that works nicely in my view for placing the spheres.
My advice would be to always check the node tree because the positions are relative to each other.
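For reference, here is a minimal sketch of the corrected placement under those assumptions (`atomNodes` is the container node for the spheres, and the method lives in the SCNView subclass):
internal func newNodeAt(point: CGPoint) {
    // Unproject in the root node's (point-of-view) coordinate space...
    let unprojected = unprojectPoint(SCNVector3(point.x, point.y, 0.99))
    // ...then convert into the coordinate space of the atoms' parent node.
    guard let rootNode = scene?.rootNode else { return }
    let position = atomNodes.convertPosition(unprojected, from: rootNode)
    let newNode = SCNNode(geometry: SCNSphere(radius: 1))
    newNode.position = position
    atomNodes.addChildNode(newNode)
}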

Need starting and ending position of MeshResource/Box in RealityKit

let material = SimpleMaterial(color: .red, roughness: 1, isMetallic: false)
let doorBox = MeshResource.generateBox(width: 0.02, height: 1, depth: 0.5)
let doorEntity = ModelEntity(mesh: doorBox, materials: [material])
let anchor = AnchorEntity()
anchor.addChild(doorEntity)
In RealityKit, I have a box (a MeshResource) that looks like a line. I have added this box to the ARView and set a real-time camera position. In one scenario I want to know the box/line's starting and ending positions.
Let's say the box entity has a middle/current position of (0.1, 0.23, -1.3); what would the box's left and right positions be? The anchor with the box keeps changing its position as the camera moves.
Thanks in advance.
You can use this extension.
extension Entity {
    func getDistancedPosition(x: Float, y: Float, z: Float) -> SIMD3<Float> {
        let referenceNodeTransform = transform.matrix
        var translationMatrix = matrix_identity_float4x4
        translationMatrix.columns.3.x = x
        translationMatrix.columns.3.y = y
        translationMatrix.columns.3.z = z
        let updatedTransform = matrix_multiply(referenceNodeTransform, translationMatrix)
        return .init(updatedTransform.columns.3.x,
                     updatedTransform.columns.3.y,
                     updatedTransform.columns.3.z)
    }
}
To get the left and right positions of your box, use the code below:
let side1Position = door.getDistancedPosition(x: 0, y: 0, z: self.viewModel.doorDepth/2)
let side2Position = door.getDistancedPosition(x: 0, y: 0, z: -(self.viewModel.doorDepth/2))
To make the box look like a line you must have used the depth dimension; if not, change the parameter accordingly, e.g. door.getDistancedPosition(x: -0.1, y: 0, z: 0).
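For example, with the door from the question (depth 0.5), a minimal sketch of getting both ends might look like this (the local doorDepth constant stands in for self.viewModel.doorDepth):
let doorDepth: Float = 0.5
let startPosition = doorEntity.getDistancedPosition(x: 0, y: 0, z: doorDepth / 2)
let endPosition = doorEntity.getDistancedPosition(x: 0, y: 0, z: -doorDepth / 2)
print("start: \(startPosition), end: \(endPosition)")
Because the positions are recomputed from the entity's current transform each time, simply re-query them after the anchor moves with the camera.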
You can also refer to this question and its accepted answer:
Position a SceneKit object in front of SCNCamera's current orientation

SceneKit matrix transformation to match camera angle

I'm building a UIPanGestureRecognizer so I can move nodes in 3D space.
Currently, I have something that works, but only when the camera is exactly perpendicular to the plane. My UIPanGestureRecognizer handler looks like this:
@objc func handlePan(_ sender: UIPanGestureRecognizer) {
    let projectedOrigin = self.sceneView!.projectPoint(SCNVector3Zero)
    let viewCenter = CGPoint(
        x: self.view!.bounds.midX,
        y: self.view!.bounds.midY
    )
    let touchlocation = sender.translation(in: self.view!)
    let moveLoc = CGPoint(
        x: CGFloat(touchlocation.x + viewCenter.x),
        y: CGFloat(touchlocation.y + viewCenter.y)
    )
    let touchVector = SCNVector3(x: Float(moveLoc.x), y: Float(moveLoc.y), z: Float(projectedOrigin.z))
    let worldPoint = self.sceneView!.unprojectPoint(touchVector)
    let loc = SCNVector3(x: worldPoint.x, y: 0, z: worldPoint.z)
    worldHandle?.position = loc
}
The problem happens when the camera is rotated and the coordinates are affected by the perspective change; you can see the touch position drifting away from the finger.
Related SO post which I used to get to this point:
How to use iOS (Swift) SceneKit SCNSceneRenderer unprojectPoint properly
It referenced these great slides: http://www.terathon.com/gdc07_lengyel.pdf
The tricky part of going from a 2D touch position to 3D space is obviously the z-coordinate. Instead of trying to convert the touch position to an imaginary 3D space, map the 2D touch onto a 2D plane in that 3D space using a hit test. Especially when movement is required in only two directions, for example like chess pieces on a board, this approach works very well. Regardless of the orientation of the plane and the camera settings (as long as the camera doesn't look at the plane edge-on, obviously), this will map the touch position to a 3D position directly under the finger and follow it consistently.
I modified the Game template from Xcode with an example.
https://github.com/Xartec/PrecisePan/
The main parts are:
the pan gesture code:
// retrieve the SCNView
let scnView = self.view as! SCNView
// check what nodes are tapped
let p = gestureRecognize.location(in: scnView)
let hitResults = scnView.hitTest(p, options: [SCNHitTestOption.searchMode: 1, SCNHitTestOption.ignoreHiddenNodes: false])
if hitResults.count > 0 {
    // check if the XZPlane is in the hit results
    for result in hitResults {
        if result.node.name == "XZPlane" {
            //NSLog("Local Coordinates on XZPlane %f, %f, %f", result.localCoordinates.x, result.localCoordinates.y, result.localCoordinates.z)
            //NSLog("World Coordinates on XZPlane %f, %f, %f", result.worldCoordinates.x, result.worldCoordinates.y, result.worldCoordinates.z)
            ship.position = result.worldCoordinates
            ship.position.y += 1.5
            return
        }
    }
}
The addition of an XZ plane node in viewDidLoad:
let XZPlaneGeo = SCNPlane(width: 100, height: 100)
let XZPlaneNode = SCNNode(geometry: XZPlaneGeo)
XZPlaneNode.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "grid")
XZPlaneNode.name = "XZPlane"
XZPlaneNode.rotation = SCNVector4(-1, 0, 0, Float.pi / 2)
//XZPlaneNode.isHidden = true
scene.rootNode.addChildNode(XZPlaneNode)
Uncomment the isHidden line to hide the helper plane and it will still work. The plane obviously needs to be large enough to fill the screen or at least the portion where the user is allowed to pan.
By setting a global variable to hold the startWorldPosition of the pan (in state .began) and comparing it to the hit worldPosition in state .changed, you can determine the delta/translation in world space and translate other objects accordingly, as sketched below.
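A minimal sketch of that delta idea, reusing the XZPlane hit test from above (the draggedNode being moved is an assumption, not from the original answer):
var panStartWorldPosition: SCNVector3?
var nodeStartPosition: SCNVector3?

@objc func handlePan(_ sender: UIPanGestureRecognizer) {
    let scnView = self.view as! SCNView
    let p = sender.location(in: scnView)
    let hitResults = scnView.hitTest(p, options: [SCNHitTestOption.searchMode: 1, SCNHitTestOption.ignoreHiddenNodes: false])
    guard let hit = hitResults.first(where: { $0.node.name == "XZPlane" }) else { return }

    switch sender.state {
    case .began:
        // Remember where the pan started, both on the plane and for the node.
        panStartWorldPosition = hit.worldCoordinates
        nodeStartPosition = draggedNode.position
    case .changed:
        guard let start = panStartWorldPosition, let nodeStart = nodeStartPosition else { return }
        // Translate the node by the world-space delta of the pan.
        let delta = SCNVector3(hit.worldCoordinates.x - start.x,
                               hit.worldCoordinates.y - start.y,
                               hit.worldCoordinates.z - start.z)
        draggedNode.position = SCNVector3(nodeStart.x + delta.x,
                                          nodeStart.y + delta.y,
                                          nodeStart.z + delta.z)
    default:
        panStartWorldPosition = nil
        nodeStartPosition = nil
    }
}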

ARKit Place a SCNNode facing the camera

I'm using ARKit to display 3D objects. I managed to place the nodes in the real world in front of the user (i.e. the camera), but I can't manage to make them face the camera when I drop them.
let tap_point = CGPoint(x: x, y: y)
let results = arscn_view.hitTest(tap_point, types: .estimatedHorizontalPlane)
guard results.count > 0 else {
    return
}
guard let r = results.first else {
    return
}
let hit_tf = SCNMatrix4(r.worldTransform)
let new_pos = SCNVector3Make(hit_tf.m41, hit_tf.m42 + Float(0.2), hit_tf.m43)
guard let scene = SCNScene(named: file_name) else {
    return
}
guard let node = scene.rootNode.childNode(withName: "Mesh", recursively: true) else {
    return
}
node.position = new_pos
arscn_view.scene.rootNode.addChildNode(node)
The nodes are well positioned on the plane, in front of the camera. But they are all looking in the same direction. I guess I should rotate the SCNNode but I didn't manage to do this.
First, get the rotation matrix of the camera:
let rotate = simd_float4x4(SCNMatrix4MakeRotation(sceneView.session.currentFrame!.camera.eulerAngles.y, 0, 1, 0))
Then, combine the matrices:
let rotateTransform = simd_mul(r.worldTransform, rotate)
Lastly, apply a transform to your node, casting as SCNMatrix4:
node.transform = SCNMatrix4(rotateTransform)
Hope that helps
EDIT
Here is how you can create an SCNMatrix4 from a simd_float4x4:
let rotateTransform = simd_mul(r.worldTransform, rotate)
node.transform = SCNMatrix4(m11: rotateTransform.columns.0.x, m12: rotateTransform.columns.0.y, m13: rotateTransform.columns.0.z, m14: rotateTransform.columns.0.w,
                            m21: rotateTransform.columns.1.x, m22: rotateTransform.columns.1.y, m23: rotateTransform.columns.1.z, m24: rotateTransform.columns.1.w,
                            m31: rotateTransform.columns.2.x, m32: rotateTransform.columns.2.y, m33: rotateTransform.columns.2.z, m34: rotateTransform.columns.2.w,
                            m41: rotateTransform.columns.3.x, m42: rotateTransform.columns.3.y, m43: rotateTransform.columns.3.z, m44: rotateTransform.columns.3.w)
guard let frame = self.sceneView.session.currentFrame else {
    return
}
node.eulerAngles.y = frame.camera.eulerAngles.y
Here's my code for making the SCNNode face the camera; hope it helps someone:
let location = touches.first!.location(in: sceneView)
var hitTestOptions = [SCNHitTestOption: Any]()
hitTestOptions[SCNHitTestOption.boundingBoxOnly] = true
let hitResultsFeaturePoints: [ARHitTestResult] = sceneView.hitTest(location, types: .featurePoint)
let hitTestResults = sceneView.hitTest(location)
guard let node = hitTestResults.first?.node else {
    if let hit = hitResultsFeaturePoints.first {
        let rotate = simd_float4x4(SCNMatrix4MakeRotation(sceneView.session.currentFrame!.camera.eulerAngles.y, 0, 1, 0))
        let finalTransform = simd_mul(hit.worldTransform, rotate)
        sceneView.session.add(anchor: ARAnchor(transform: finalTransform))
    }
    return
}
Do you want the nodes to always face the camera, even as the camera moves? That's what SceneKit constraints are for. Either SCNLookAtConstraint or SCNBillboardConstraint can keep a node always pointing at the camera.
Do you want the node to face the camera when placed, but then hold still (so you can move the camera around and see the back of it)? There are a few ways to do that. Some involve fun math, but a simpler way to handle it might just be to design your 3D assets so that "front" is always in the positive Z-axis direction. Set a placed object's transform based on the camera transform, and its initial orientation will match the camera's.
Here's how I did it:
func faceCamera() {
    guard constraints?.isEmpty ?? true else {
        return
    }
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 5
    SCNTransaction.completionBlock = { [weak self] in
        self?.constraints = []
    }
    constraints = [billboardConstraint]
    SCNTransaction.commit()
}

private lazy var billboardConstraint: SCNBillboardConstraint = {
    let constraint = SCNBillboardConstraint()
    constraint.freeAxes = [.Y]
    return constraint
}()
As stated earlier, an SCNBillboardConstraint will make the node always look at the camera. I am animating it so the node doesn't just immediately snap into place; this is optional. In the SCNTransaction.completionBlock I remove the constraint, which is also optional.
I also set the SCNBillboardConstraint's freeAxes property, which customizes on which axes the node follows the camera; again, optional.
I want the node to face the camera when I place it then keep it here (and be able to move around). – Marie Dm
You can make the object face the camera using this:
if let rotate = sceneView.session.currentFrame?.camera.transform {
    node.simdTransform = rotate
}
This code will save you from gimbal lock and other troubles.
The four-component rotation vector specifies the direction of the rotation axis in the first three components and the angle of rotation (in radians) in the fourth. The default rotation is the zero vector, specifying no rotation. Rotation is applied relative to the node’s simdPivot property.
The simdRotation, simdEulerAngles, and simdOrientation properties all affect the rotational aspect of the node’s simdTransform property. Any change to one of these properties is reflected in the others.
https://developer.apple.com/documentation/scenekit/scnnode/2881845-simdrotation
https://developer.apple.com/documentation/scenekit/scnnode/2881843-simdtransform

ARKit - Applying Force in User's Phone Direction

I have the following code that creates an SCNBox and shoots it across the screen. This works, but as soon as I turn the phone in any other direction, the force impulse does not get updated and it always shoots the box in the same old direction.
Here is the code:
@objc func tapped(recognizer: UIGestureRecognizer) {
    guard let currentFrame = self.sceneView.session.currentFrame else {
        return
    }
    let box = SCNBox(width: 0.2, height: 0.2, length: 0.2, chamferRadius: 0)
    let material = SCNMaterial()
    material.diffuse.contents = UIColor.red
    material.lightingModel = .constant
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -0.01
    let node = SCNNode()
    node.geometry = box
    node.geometry?.materials = [material]
    print(currentFrame.camera.transform)
    node.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
    node.simdTransform = matrix_multiply(currentFrame.camera.transform, translation)
    node.physicsBody?.applyForce(SCNVector3(0, 2, -10), asImpulse: true)
    self.sceneView.scene.rootNode.addChildNode(node)
}
The applyForce call is where I apply the force, but it does not take into account the user's current phone orientation. How can I fix that?
In that call you're passing a constant vector to applyForce. That method takes a vector in world space, so passing a constant vector means you're always applying a force in the same direction. If you want a direction that's based on the direction the camera (or something else) is pointing, you'll need to calculate a vector from that direction.
The (new) SCNNode property worldFront might prove helpful here — it gives you the direction a node is pointing, automatically converted to world space, so it's useful with physics methods. (Though you might want to scale it.)
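For instance, a minimal sketch of that suggestion (the strength value is an arbitrary assumption): since the node's transform was just set from the camera transform, its worldFront points where the camera is facing and can be scaled into an impulse.
let strength: Float = 10
let forward = node.worldFront // forward direction of the node, in world space
let impulse = SCNVector3(forward.x * strength, forward.y * strength, forward.z * strength)
node.physicsBody?.applyForce(impulse, asImpulse: true)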
