A-Frame: how to fix the obj/mtl model position to its first displayed position when it detects a Hiro marker, but not when moving the camera

I want to show my object model on screen when it detects a Hiro marker, but when I move the camera the model shouldn't stay fixed to the camera; it should remain at its old position.
I have tried the following, but it doesn't work as I expect:
let markerEl = document.querySelector("a-marker-camera");
// remember the marker's last reported position
let pos = markerEl.getAttribute("position");
let camera = sceneEl.camera;
let position = sceneEl.camera.el.object3D.position;
camera.lookAt(position);
// re-apply the stored position while the marker is visible
if (markerEl.object3D.visible) markerEl.setAttribute("position", pos);

Related

SceneKit + ARKit: Billboarding without rolling with camera

I'm trying to draw a billboarded quad using SceneKit and ARKit. I have basic billboarding working; however, when I roll the camera, the billboard also rotates in place (e.g. my smiley-face billboard rolls along as I roll the camera to the left).
Instead, I'd like the billboard to still face the camera but stay oriented vertically in the scene, no matter what the camera is doing.
Here's how I compute billboarding:
// inside frame update function
struct Vertex {
    var position: SIMD3<Float>
    var texCoord: SIMD2<Float>
}
let halfSize = Float(0.25)
let cameraNode = sceneView.scene.rootNode.childNodes.first!
let modelTransform = self.scnNode.simdWorldTransform
let viewTransform = cameraNode.simdWorldTransform.inverse
let modelViewTransform = viewTransform * modelTransform
let right = SIMD3<Float>(modelViewTransform[0][0], modelViewTransform[1][0], modelViewTransform[2][0])
let up = SIMD3<Float>(modelViewTransform[0][1], modelViewTransform[1][1], modelViewTransform[2][1])
// drawBuffer is a MTL buffer of vertex data
let data = drawBuffer.contents().bindMemory(to: Vertex.self, capacity: 4)
data[0].position = (right + up) * halfSize
data[0].texCoord = SIMD2<Float>(0, 0)
data[1].position = -(right - up) * halfSize
data[1].texCoord = SIMD2<Float>(1, 0)
data[2].position = (right - up) * halfSize
data[2].texCoord = SIMD2<Float>(0, 1)
data[3].position = -(right + up) * halfSize
data[3].texCoord = SIMD2<Float>(1, 1)
Again this gets the billboard facing the camera correctly, however when I roll the camera, the billboard rotates along with it.
What I'd like instead is for the billboard to point towards the camera but keep its orientation in the world. Any suggestions on how to fix this?
Note that my code example is simplified so I can't use SCNBillboardConstraint or anything like that; I need to be able to compute the billboarding myself
Here's the solution I came up with: create a new node that matches the camera's position and rotation, but without any roll:
let tempNode = SCNNode()
tempNode.simdWorldPosition = cameraNode.simdWorldPosition
// This changes the node's pitch and yaw, but not roll
tempNode.simdLook(at: cameraNode.simdConvertPosition(SIMD3<Float>(0, 0, 1), to: nil))
let view = tempNode.simdWorldTransform.inverse
let modelViewTransform = view * self.scnNode.simdWorldTransform
This keeps the billboard pointing upwards in world space, even as the camera rolls.
I had actually tried doing this earlier by setting tempNode.eulerAngles.z = 0, however that seems to affect the rest of the transform matrix in unexpected ways.
There's probably a way to do this without creating a temporary node too, but this works well enough for me.
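For what it's worth, here is a minimal sketch of that temporary-node-free variant (the helper name and its wiring are my own illustration, not from the answer above): build the roll-free right/up axes directly with simd, substituting the world up vector for the camera's up:
import simd

// Builds roll-free billboard axes: `right` stays horizontal because it is
// derived from the world up vector, not the camera's (possibly rolled) up.
func billboardAxes(cameraPos: SIMD3<Float>, quadPos: SIMD3<Float>) -> (right: SIMD3<Float>, up: SIMD3<Float>) {
    let worldUp = SIMD3<Float>(0, 1, 0)
    // Forward points from the quad toward the camera.
    // (Degenerate if the camera is directly above or below the quad.)
    let forward = simd_normalize(cameraPos - quadPos)
    let right = simd_normalize(simd_cross(worldUp, forward))
    // Recompute up so the basis stays orthogonal when the camera pitches.
    let up = simd_cross(forward, right)
    return (right, up)
}
The resulting right and up vectors can be plugged into the same vertex computation as in the question.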

How to render an object that always moves along with the camera motion

I am rendering an object using node.setWorldPosition(0,0,-2f), but I want it to always show up in front of my camera view, 2 meters away.
I tried to make the node always face my camera, but when I move the device forward, the node doesn't move with it. I'm not using an anchor to fix my object.
private void nodeAlwaysFaceCamera() {
    Vector3 cameraPosition = arFragment.getArSceneView().getScene().getCamera().getWorldPosition();
    Vector3 cardPosition = node.getWorldPosition();
    Vector3 direction = Vector3.subtract(cameraPosition, cardPosition);
    Quaternion lookRotation = Quaternion.lookRotation(direction, Vector3.up());
    node.setWorldRotation(lookRotation);
}
Rendering the object 2 meters in front of the camera:
node = new Node();
node.setParent(arFragment.getArSceneView().getScene());
node.setWorldPosition(new Vector3(0f, 0f, -2f));
node.setRenderable(viewRenderable);
My expected result is that the object will stick to the camera while moving.
Instead of getting the world position, you should use the Pose of the camera. So instead of this:
Vector3 cameraPosition = arFragment.getArSceneView().getScene().getCamera().getWorldPosition();
You can use:
Pose cameraPose = arFragment.getArSceneView().getScene().getCamera().getPose();
Alternatively, you can create a new Pose without the rotation of the camera:
Pose cameraPose = arFragment.getArSceneView().getScene().getCamera().getPose().extractTranslation();
Then you can use that pose to compose the pose of the object that you want to have in front of the camera. For example, this will place the object in front of the camera at the same height, at a distance of 2 m (using the second cameraPose):
// Compose the Pose of the Object relative to the cameraPose
Pose objectPose = cameraPose.compose(Pose.makeTranslation(0,0,-2f));
// Create an Anchor for the object
Anchor objectAnchor = arFragment.getArSceneView().getSession().createAnchor(objectPose);
AnchorNode objectAnchorNode = new AnchorNode(objectAnchor);
// Here is your code
node = new Node();
node.setParent(objectAnchorNode);
node.setRenderable(viewRenderable);
....

Align 3D object parallel to vertical plane detected by estimatedVerticalPlane

I have this book, but I'm currently remixing the furniture app from the video tutorial that was free during AR/VR week.
I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.
This is proving to be harder than I thought. Positioning isn't an issue: much like the furniture placement app, you can just take column 3 of the hitTest.worldTransform and use that vector for the new geometry's position.
But I do not know what I have to do to get my 3D object rotated to face forward on the detected, aligned plane. As I have a canvas object, the photo is on one side of the canvas, and on placement the photo is ALWAYS facing away.
I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed a canvas on a wall to my right.
I've tried quite a few solutions online; all but one use .existingPlaneUsingExtent for vertical plane detection. That allows you to get the ARPlaneAnchor from the
hitTestResult.anchor as? ARPlaneAnchor.
If you try this when using .estimatedVerticalPlane, the anchor is nil.
I also didn't continue down this route, as my horizontal 3D objects started getting placed in the air. This may be down to control-flow logic, but I am ignoring it until the vertical canvas placement is working.
My current train of thought is to get the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane or the hit-test point.
How would I get a forward vector from a 3D point? Or how would I get the front vector of the grid image, the UIImage that is placed as an overlay when ARKit detects a vertical wall?
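As an aside, the forward direction doesn't come from a point alone; it can be read out of a transform. A minimal sketch (my own illustration, not from the thread): in SceneKit a node looks down its local -z axis, so its forward vector in world space is the negated third column of its world transform:
import simd

// The forward (-z) direction encoded in a world transform: negate the
// third (z) column of the 4x4 matrix and drop the translation component.
func forwardVector(of transform: simd_float4x4) -> SIMD3<Float> {
    let z = transform.columns.2
    return -SIMD3<Float>(z.x, z.y, z.z)
}
A bare 3D point (such as a hit-test location) carries no orientation, which is why the worldTransform of the hit-test result is needed.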
Here is an example: the canvas shows its back and is not parallel with the detected vertical plane (the column). But there is a "Place Poster Here" grid, which is what I want the canvas to align with so I'm able to see the photo.
Things I have tried.
using .estimatedVerticalPlane
ARKit estimatedVerticalPlane hit test get plane rotation
I don't know how to correctly apply the matrix and Euler angle results from that SO answer.
My addPicture function:
func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hit test to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate Euler angles.
    // We have all detected anchors in the _Renderer SCNNode. however there are
    // Get the current furniture item, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()
    // Look for the vertical node geometry in verticalAnchors.
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // Code removed, as an .estimatedVerticalPlane hitTestResult doesn't get here.
        }
    } else {
        // Transform the hit result to world coordinates.
        let worldTransform = hitTestResult.worldTransform
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y
        // Set the transform matrix.
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(
            positionMatrix.x,
            positionMatrix.y,
            positionMatrix.z
        )
        picture.position = position + pictureSettings.currentPictureOffset()
    }
    // Parented to the rootNode of the scene.
    sceneView.scene.rootNode.addChildNode(picture)
}
Thanks for any help available.
Edited:
I have noticed the 'handedness' of the 3D model isn't correct / is opposite?
Positive Z is pointing to the left and positive X is facing the camera, where I would expect the front of the model to be. Is this an issue?
You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, then use the session callback to vend an SCNNode for the added anchor.
For example, your hit test might look something like this:
@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
          let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result and added to the ARSession. This registers that point as an area of interest for the ARSession, so we'll get better tracking and less drift after our content is added there.
Next, we need to vend our SCNNode for the newly added ARAnchor. For example:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}
Here we're first checking whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it is not, then it is an anchor that was added as the result of a hit test, so we create a geometry and node for the most recent tap. Because the plane is vertical and our content lies flat, we need to rotate the content about the x axis, so we adjust its eulerAngles to make it upright. If we were to return planeNode directly, the adjustment to its eulerAngles would be overwritten, so we add it as a child of an empty node and return that instead.
The result should be the poster standing upright, parallel to the detected vertical plane.

How can I place an anchor in ARKit relative to the ground with WorldAlignment rather than relative to the camera?

I'm following along with the SpriteKit example for ARKit and want to change the behavior so that instead of placing the sprite just in front of the camera, the sprite is always at a fixed distance above the floor (plane?), e.g. eye level. I also want to use the worldAlignment gravityAndHeading so that I can always place it in the same part of the room.
Here's the sample code:
if let currentFrame = sceneView.session.currentFrame {
    // Create a transform with a translation of 1.5 meters in front of the camera
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -1.5
    let transform = simd_mul(currentFrame.camera.transform, translation)
    // Add a new anchor to the session
    let anchor = ARAnchor(transform: transform)
    sceneView.session.add(anchor: anchor)
}
Do I need to get the camera's height when I pass the translation into currentFrame.camera.transform, or is there something that will give me the absolute or relative position versus 0?
Also, I'm assuming that I should turn on plane detection (configuration.planeDetection = .horizontal) and add my anchor once a plane is detected?
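One way to approach this (a minimal sketch of my own, not from the question; the 1.5 m eye-level offset and the delegate wiring are assumptions): enable horizontal plane detection alongside gravityAndHeading, then anchor the content at a fixed height above the detected floor plane instead of relative to the camera:
import ARKit

// Run the session with gravity-and-heading alignment and plane detection.
let configuration = ARWorldTrackingConfiguration()
configuration.worldAlignment = .gravityAndHeading
configuration.planeDetection = .horizontal
sceneView.session.run(configuration)

// ARSessionDelegate callback: once a horizontal plane (the floor) is found,
// add an anchor a fixed height above it. The guard skips the anchor we add
// ourselves, since it is not an ARPlaneAnchor.
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for anchor in anchors {
        guard let plane = anchor as? ARPlaneAnchor else { continue }
        var transform = plane.transform
        transform.columns.3.y += 1.5 // assumed eye-level height in meters
        session.add(anchor: ARAnchor(transform: transform))
    }
}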

Drag SceneKit Node Along X-Axis while maintaining velocity? Swift 3

Swift 3, SceneKit: In my game, I have an SCNSphere node in the center of the screen. The sphere drops by gravity onto an SCNBox node, and a velocity of SCNVector3(0,6,0) is applied to it once it collides with the box.
A new box is created and moves forward (z+) towards my camera and towards the sphere as well. The sphere rises, peaks, and then falls back down (by gravity) towards the new box, and when it collides with the new box, a velocity of SCNVector3(0,6,0) is applied to it. This process repeats continuously. Basically, a sphere that repeatedly bounces on a new approaching box.
Instead of just one box, however, there will be three boxes in a row. All boxes begin in front of the sphere node and move towards it when they are created. The boxes are placed in a row: one to the left of the sphere, one directly in front of it (the middle), and the third to its right.
I want to be able to drag my finger across the screen and move my sphere so that it can land on the left and right boxes. While I'm dragging, I do not want the y-velocity or y-position to be changed at all. I just want the x-position of my sphere node to mirror the real-world x-position of my finger relative to the screen. I also do not want the sphere node to change location based on a touch alone.
For example, if the sphere's position is at SCNVector3(2,0,0), and if the user taps near SCNVector3(-2,0,0), I do not want the sphere to "teleport" to where the user tapped. I want the user to drag the sphere from its last position.
func handlePan(recognizer: UIPanGestureRecognizer) {
    let sceneView = self.view as! SCNView
    sceneView.delegate = self
    sceneView.scene = scene
    let trans: SCNVector3 = sceneView.unprojectPoint(SCNVector3Zero)
    let pos: SCNVector3 = player.presentation.position
    let newPos = (trans.x) + (pos.x)
    player.position.x = newPos
}
I just want the x-position of my sphere node to mirror the real-world x-position of my finger relative to the screen
You can do this by using UIPanGestureRecognizer and getting the translation in the coordinate system of the view.
let myPanGestureRecognizer = UIPanGestureRecognizer(target: self, action: #selector(handlePan))
let trans2D: CGPoint = myPanGestureRecognizer.translation(in: self.view)
let transPoint3D: SCNVector3 = SCNVector3Make(Float(trans2D.x), Float(trans2D.y), <<z>>)
For the z value, refer to the unprojectPoint discussion, which says that z should refer to the depth at which you want to un-project, relative to the near and far clipping planes of your view frustum.
You can then un-project the translation to the 3D world coordinate system of the scene, which will give you the translation for the sphere node. Some partial sample code:
let trans: SCNVector3 = sceneView.unprojectPoint(transPoint3D)
let pos: SCNVector3 = sphereNode.presentation.position
let newPos: SCNVector3 = // trans + pos
sphereNode.position = newPos
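Putting the pieces together, here is a fuller sketch of the handler (names like sceneView and player are taken from the question; the depth trick via projectPoint is my own addition, not from the answer): it moves only the x position, leaving the physics-driven y alone, and resets the recognizer's translation each call so the sphere is dragged from its last position rather than teleporting:
@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    let translation = recognizer.translation(in: sceneView)
    // Project the sphere's current position to get the proper un-projection depth.
    let depth = sceneView.projectPoint(player.presentation.position).z
    // Un-project the finger translation and the screen origin at that depth;
    // their difference is the world-space drag delta.
    let worldTrans = sceneView.unprojectPoint(SCNVector3(Float(translation.x), Float(translation.y), depth))
    let worldOrigin = sceneView.unprojectPoint(SCNVector3(0, 0, depth))
    // Apply only the x component so gravity keeps controlling y.
    player.position.x += worldTrans.x - worldOrigin.x
    recognizer.setTranslation(.zero, in: sceneView)
}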
