I currently have a rootNode with multiple child nodes attached to it that I want to move around the scene together as a cluster. I move it along the x and y axes using left, right, up, and down buttons, changing the position of the rootNode a little every time a button is clicked. For example, moving left:
self.newRootNode.position.x = self.newRootNode.position.x - 0.01
This way, the cluster always moves with respect to the coordinate system set when the app is initialized. I'm trying to make it move with respect to the user's left and right every time they change position. I've tried doing it as follows:
let nodeCam = self.sceneView.session.currentFrame!.camera
let cameraTransform = nodeCam.transform
self.newRootNode.position.x = cameraTransform.columns.3.x - 0.01
I know this is not what I want; I must be missing a transform from the camera's position to the root node's position, but I'm not sure what steps to follow.
What would be the right way to approach this? Do I need to reset tracking every time the user changes position? Any help would be appreciated :)
I believe you can use this function on your ARSession now in ARKit 1.5:
func setWorldOrigin(relativeTransform: matrix_float4x4)
Which:
Changes the basis for the AR world coordinate space using the specified transform.
Here is an example (untested) which may point you in the right direction:
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Use the camera transform from the frame the delegate already provides.
    let cameraTransform = frame.camera.transform
    session.setWorldOrigin(relativeTransform: cameraTransform)
}
Not forgetting, of course, to use ARSCNDebugOptions.showWorldOrigin in order to debug and adjust your matrices and transforms, etc.
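For example (assuming an ARSCNView outlet named sceneView; showFeaturePoints is optional but often useful alongside it):
sceneView.debugOptions = [ARSCNDebugOptions.showWorldOrigin, ARSCNDebugOptions.showFeaturePoints]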
You could also use the code in the delegate callback as an IBAction etc..
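Alternatively, if resetting the world origin feels heavier than you need, another approach (an untested sketch, not from the original answer, using the newRootNode and sceneView names from the question) is to keep your button logic but translate the cluster along the camera's right vector, projected onto the horizontal plane, so "left" and "right" follow the user's current orientation:
func moveCluster(byStep step: Float) {
    guard let camera = sceneView.session.currentFrame?.camera else { return }
    // Column 0 of the camera transform is the camera's right vector in world space.
    let right = camera.transform.columns.0
    // Flatten onto the horizontal plane and normalize so device tilt
    // doesn't bleed into the movement.
    let direction = simd_normalize(simd_float3(right.x, 0, right.z))
    // A negative step moves the cluster to the user's left, positive to the right.
    newRootNode.simdPosition += direction * step
}
Your left button would then call moveCluster(byStep: -0.01) and the right button moveCluster(byStep: 0.01).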
Hope it helps...
I have this book, but I'm currently remixing the furniture app from the video tutorial that was free on AR/VR week.
I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.
This is proving to be harder than I thought. Positioning isn't an issue: much like the furniture placement app, you can just take column 3 of the hitTest.worldTransform and use that vector3 as the new geometry's position.
But I don't know what I have to do to get my 3D object rotated to face forward on the detected plane. My canvas object has the photo on one side, and on placement the photo is ALWAYS facing away.
I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed a canvas on a wall to my right.
I've tried quite a few solutions online, but all but one use .existingPlaneUsingExtent for vertical plane detection. That allows you to get the ARPlaneAnchor from hitTest.anchor as? ARPlaneAnchor.
If you try this when using .estimatedVerticalPlane, the anchor is nil.
I also didn't continue down this route because my horizontal 3D objects started getting placed in the air. That may be down to control-flow logic, but I'm ignoring it until the vertical canvas placement is working.
My current train of thought is to get the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane or the hit-test point.
How would I get a forward vector from a 3D point? Or get the front vector from the grid image (a UIImage that is placed as an overlay when ARKit detects a vertical wall)?
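For what it's worth, I believe SceneKit treats a node's local -Z axis as its front, so a forward vector can in principle be read from the third column of a world transform. Something like this (untested; forwardVector is just an illustrative helper):
// Assuming SceneKit's convention that a node faces down its local -Z axis,
// the forward vector in world space is the negated third column of its
// world transform.
func forwardVector(from transform: simd_float4x4) -> simd_float3 {
    let z = transform.columns.2
    return -simd_float3(z.x, z.y, z.z)
}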
Here is an example. The canvas is showing the back of the canvas and is not parallel with the detected vertical plane that is the column. But there is a "Place Poster Here" grid which is what I want the canvas to align with and I'm able to see the photo.
Things I have tried:
using .estimatedVerticalPlane
ARKit estimatedVerticalPlane hit test get plane rotation
I don't know how to correctly apply the matrix and Euler angle results from that SO answer.
My addPicture function:
func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hitTest to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate eulerAngles.
    // We have all detected anchors in the _Renderer SCNNode. However there are

    // Get the current picture piece, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()

    // Look for the vertical node geometry in verticalAnchors.
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // Code removed, as a .estimatedVerticalPlane hitTestResult doesn't get here.
        }
    } else {
        // Transform the hit result to world coordinates.
        let worldTransform = hitTestResult.worldTransform
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y

        // Set the position from the transform matrix.
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(
            positionMatrix.x,
            positionMatrix.y,
            positionMatrix.z
        )
        picture.position = position + pictureSettings.currentPictureOffset()
    }

    // Parented to the rootNode of the scene.
    sceneView.scene.rootNode.addChildNode(picture)
}
Thanks for any help available.
Edited:
I have noticed the 'handedness' of the 3D model isn't correct / is opposite?
Positive Z is pointing to the left and positive X is facing the camera, for what I would expect is the front of the model. Is this an issue?
You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, then use the session callback to vend an SCNNode for the added anchor.
For example your hit test might look something like:
@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
          let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result and added to the ARSession. This registers that point as an area of interest for the ARSession, so we'll get better tracking and less drift after our content is added there.
Next, we need to vend our SCNNode for the newly added ARAnchor. For example
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}
Here we first check whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it isn't, then it's an anchor that was added as the result of a hit test, so we create a geometry and node for the most recent tap. Because it's a vertical plane and our content is lying flat, we need to rotate it about the x-axis, so we adjust its eulerAngles to make it upright. If we returned planeNode directly, the adjustment to its eulerAngles would be overwritten, so we add it as a child of an empty node and return that instead.
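Note that the renderer(_:nodeFor:) callback assumes the hosting class conforms to ARSCNViewDelegate and is registered as the scene view's delegate, e.g. in viewDidLoad:
sceneView.delegate = self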
This should result in the canvas sitting upright and flush against the detected vertical plane.
I am new to SceneKit and I am programming a game.
I have loaded two objects into my scene. The first object doesn't move, only the second one. The two objects always have to stick together, but the second object can move completely freely on the first object's surface depending on user input (basically like two magnets with infinite strength but no friction).
My approach is to take the second object's x and y coordinates and look up the first object's z coordinate at those x and y coordinates. Then I move object two to that exact z-coordinate.
I tried using an SCNDistanceConstraint, but it didn't have any effect:
let cnstrnt = SCNDistanceConstraint(target: object1)
cnstrnt.maximumDistance = 1
cnstrnt.minimumDistance = 0.99
// Note: if object2's constraints array is nil (the default), optional
// chaining makes this append a no-op; assigning
// object2?.constraints = [cnstrnt] avoids that.
object2?.constraints?.append(cnstrnt)
I also tried using an SCNTransformConstraint, without any effect either:
let transform = SCNTransformConstraint.positionConstraint(inWorldSpace: true) { (object2, vector) -> SCNVector3 in
    let z = object1?.worldPosition.z
    return SCNVector3(object2.worldPosition.x, object2.worldPosition.y, z!)
}
// Same caveat as above: append does nothing when constraints is nil.
object2?.constraints?.append(transform)
Using a hitTest only returns results that are positioned on the bounding box of the object and not its actual surface:
let hitTest = mySceneView.scene?.physicsWorld.rayTestWithSegment(from: SCNVector3((object2?.position.x)!, (object2?.position.y)!, -10), to: (object2?.position)!, options: nil)
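Worth noting: a physics ray test resolves against the body's physics shape, and by default SceneKit builds a convex hull of the geometry, which matches the bounding-box-like hits described above. Requesting a concave shape should give hits on the actual surface. An untested sketch, with object1Geometry standing in for the first object's SCNGeometry (concave shapes are only supported on static bodies):
// A concave physics shape follows the actual geometry instead of a convex
// hull, so ray tests can hit the true surface. object1Geometry is a
// placeholder for the first object's geometry.
let shapeOptions: [SCNPhysicsShape.Option: Any] = [
    .type: SCNPhysicsShape.ShapeType.concavePolyhedron
]
let shape = SCNPhysicsShape(geometry: object1Geometry, options: shapeOptions)
object1?.physicsBody = SCNPhysicsBody(type: .static, shape: shape)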
So how can I get the z-coordinate of a 3D object's surface from an x and y coordinate? Because then I'd be able to set the new position of object2 manually.
Maybe you have another approach that is more elegant and faster than mine?
Thanks beforehand!
I'm trying to make an ARKit application where an SCNNode, in this case a box, is placed in front of the camera, facing the camera. As the user moves the camera around, objects are placed when a certain distance has been moved. This would leave you with a series of nodes facing the camera in a line, equally spaced.
I have this working to a certain extent, but my problem is with the rotation. I'm currently taking all axes of rotation, so as the user re-orients their phone, the rotation of the node matches. I want to restrict this to just the rotation around the y-axis. The ideal outcome is a domino-trail like look, with all the objects having the same x and z rotations, but potentially different y rotations.
I hope I've explained this clearly enough!
Here's the code I'm currently using:
func createNode(fromCameraTransform cameraTransform: matrix_float4x4) -> SCNNode {
    let geometry = SCNBox(width: 0.02, height: 0.04, length: 0.01, chamferRadius: 0)
    let physicsBody = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(geometry: geometry))
    physicsBody.mass = 1000
    let node = SCNNode(geometry: geometry)
    node.physicsBody = physicsBody

    var translationMatrix = matrix_identity_float4x4
    translationMatrix.columns.3.x = 0.05 // Moves the node down in world space
    translationMatrix.columns.3.z = -0.1 // Moves the object away from the camera

    node.simdTransform = simd_mul(cameraTransform, translationMatrix)
    return node
}
I've tried different combinations of extracting values from the second column of the cameraTransform and setting them as eulerAngles, rotation and simdRotation, but to no avail.
I've also tried extracting values from the pointOfView of the current sceneView and assigning them to the same values as listed above, but again, no luck.
Any help would be greatly appreciated!
I know a little bit about this, but am really just starting out with SceneKit and 3D transformations/matrices, so be gentle with me!
I think I know what you're trying to do: basically, automatically drop each new domino so it's evenly spaced along the camera.pointOfView trail.
You can update the new node's eulerAngles.y to match the camera pointOfView's eulerAngles.y. That way, as you move the camera around, the next node you place is always facing towards the camera (only rotating around the y-axis).
The renderer function updateAtTime below gets called every frame:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Set the node you are about to place to the camera pointOfView's
    // eulerAngles.y so it always faces the camera around the y-axis.
    node.eulerAngles.y = (sceneView.pointOfView?.eulerAngles.y)!
}
I had this working in a playground so it does work.
Edit: The solution above had a gimbal lock problem (as you went around in a circle, it would reset its angle back to the axis).
So I found this approach using an SCNBillboardConstraint, which works without the gimbal lock problem as you go around in a circle:
let yFreeConstraint = SCNBillboardConstraint()
yFreeConstraint.freeAxes = .Y
node.constraints = [yFreeConstraint]
node.eulerAngles = node.presentation.eulerAngles
node.position = position
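If you'd rather bake the yaw into the node's transform up front instead of using a constraint, here's an untested sketch of how the question's createNode might keep only the rotation about the y-axis, extracting the heading from the camera transform's z-axis column (physics body omitted for brevity):
func createNode(fromCameraTransform cameraTransform: matrix_float4x4) -> SCNNode {
    let geometry = SCNBox(width: 0.02, height: 0.04, length: 0.01, chamferRadius: 0)
    let node = SCNNode(geometry: geometry)

    // The camera's z-axis (third column of the transform) projected onto the
    // x/z plane gives the heading; atan2 turns it into a rotation about the
    // world y-axis, discarding pitch and roll.
    let zAxis = cameraTransform.columns.2
    let yaw = atan2(zAxis.x, zAxis.z)
    node.eulerAngles = SCNVector3(0, yaw, 0)

    // Place the node at the camera position, pushed 0.1 m along the
    // flattened forward direction so it lands in front of the user.
    let translation = cameraTransform.columns.3
    let forward = simd_normalize(simd_float3(-zAxis.x, 0, -zAxis.z))
    node.simdPosition = simd_float3(translation.x, translation.y, translation.z) + forward * 0.1
    return node
}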
I'm building a simple navigation app with ARKit. The app shows a pin at a destination, which can be far away or nearby. The user is able to walk toward the pin to navigate.
In my ARSCNView I have an SCNNode called waypointNode, which represents the pin at the destination.
To determine where to place the waypointNode, I calculate the distance to the destination in meters, and the bearing (degrees away from North) to the destination. Then, I create and multiply some transformations and apply them to the node to put it in the proper position.
There's also some logic to establish a maximum distance for the waypointNode so it's not too small for the user to see.
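The bearing calculation itself isn't shown in the question; for reference, a typical forward-azimuth implementation between two CLLocationCoordinate2D values looks something like this (a sketch, not the question's actual code):
import CoreLocation

// Bearing in degrees from North, from one coordinate to another,
// using the standard great-circle forward-azimuth formula.
func bearing(from start: CLLocationCoordinate2D, to end: CLLocationCoordinate2D) -> Double {
    let lat1 = start.latitude * .pi / 180
    let lat2 = end.latitude * .pi / 180
    let deltaLon = (end.longitude - start.longitude) * .pi / 180
    let y = sin(deltaLon) * cos(lat2)
    let x = cos(lat1) * sin(lat2) - sin(lat1) * cos(lat2) * cos(deltaLon)
    let radians = atan2(y, x)
    // Normalize to 0..<360 degrees.
    return (radians * 180 / .pi + 360).truncatingRemainder(dividingBy: 360)
}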
This is how I configure the ARSCNView, so the axes line up with the real-world compass directions:
func setUpSceneView() {
    let configuration = ARWorldTrackingConfiguration()
    configuration.worldAlignment = .gravityAndHeading
    configuration.planeDetection = .horizontal
    session.run(configuration, options: [.resetTracking])
}
Every time the device gets a new CLLocation from CoreLocation, I update the distance and bearing then call this function to update the position of the waypointNode:
func updateWaypointNode() {
    // Limit the max distance so the node doesn't become invisible.
    let distanceLimit: Float = 80
    let translationDistance: Float
    if navigationInfo.distance > distanceLimit {
        translationDistance = distanceLimit
    } else {
        translationDistance = navigationInfo.distance
    }

    // Transform matrix to adjust node distance.
    let distanceTranslation = SCNMatrix4MakeTranslation(0, 0, -translationDistance)
    // Transform matrix to rotate node around the y-axis.
    let rotation = SCNMatrix4MakeRotation(-1 * GLKMathDegreesToRadians(Float(navigationInfo.bearing)), 0, 1, 0)
    // Multiply the rotation and distance translation matrices.
    let distanceTimesRotation = SCNMatrix4Mult(distanceTranslation, rotation)

    // Grab the current camera transform.
    guard let cameraTransform = session.currentFrame?.camera.transform else { return }

    // Multiply the rotation and distance translation transform by the camera transform.
    let finalTransform = SCNMatrix4Mult(SCNMatrix4(cameraTransform), distanceTimesRotation)

    // Update the waypoint node with this updated transform.
    waypointNode.transform = finalTransform
}
This works fine when the user first starts the session, and when the user moves less than about 100m.
Once the user covers a significant distance, like over 100m of walking or driving, calling updateWaypointNode() alone is not enough to keep the node positioned correctly at the destination. When walking toward the node, for example, the user can eventually reach the node even though they have not reached the destination. Note: this incorrect positioning happens while the session stays open the whole time, not when the session is interrupted.
As a workaround, I'm also calling setUpSceneView() every time the device gets a location update.
Even though this works OK, it feels wrong to me. It doesn't seem like I should have to call run with the .resetTracking option every time; I think I might just be overlooking something in my translations. I also see some jumpiness in the camera every time run is called on a running session, so that's less desirable than simply updating the translations.
Is there something different I could do to avoid calling run on the session and resetting the tracking every time the device gets a location update?
I'm building a platform game, and I made the camera follow the player when he walks:
let cam = SKCameraNode()
override func didMoveToView(view: SKView) {
    self.camera = cam
    ...
}

override func update(currentTime: CFTimeInterval) {
    /* Called before each frame is rendered */
    cam.position = Player.player.position
    ...
}
But, when the camera moves, the control buttons move as well
What should I do to keep the control buttons static?
See this note in the SKCameraNode docs:
A camera’s descendants are always rendered relative to the camera node’s origin and without applying the camera’s scaling or rotation to them. For example, if your game wants to display scores or other data floating above the gameplay, the nodes that render these elements should be descendants of the current camera node.
If you want HUD elements that stay fixed relative to the screen even as the camera moves/scales/rotates, make them child nodes of the camera.
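For instance, a minimal sketch (the texture name and position are placeholders):
let cam = SKCameraNode()
scene.camera = cam
scene.addChild(cam)

// The button's position is expressed in the camera's coordinate space,
// so it stays fixed on screen no matter where the camera moves.
let pauseButton = SKSpriteNode(imageNamed: "pauseButton")
pauseButton.position = CGPoint(x: 0, y: -200)
cam.addChild(pauseButton)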
By the way, you don't need to change the camera's position on every update(). Instead, just constrain the camera's position to match that of the player:
let constraint = SKConstraint.distance(SKRange(constantValue: 0), toNode: player)
camera.constraints = [ constraint ]
Then, SpriteKit will automatically keep the camera centered on the player without any per-frame work from you. You can even add more than one constraint — say, to follow the player but keep the camera from getting too close to the edge of the world (and showing empty space).
Add the buttons as children of the camera, e.g. cam.addChild(yourButton).
From rickster's answer I made these constraints, where the camera only moves horizontally even if the player jumps. The order in which they are added is important. In case somebody else finds them useful:
Swift 4.2
let camera = SKCameraNode()
scene.addChild(camera)
camera.constraints = [SKConstraint.distance(SKRange(upperLimit: 200), to: player),
                      SKConstraint.positionY(SKRange(constantValue: 0))]