Hey, I'm trying to figure out how to keep a simple node in place as I walk around it in ARKit.
Code:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let planeAnchor = anchor as? ARPlaneAnchor {
        if planeDetected == false { // this Bool allows only one plane to be added
            planeDetected = true
            self.addPlane(node: node, anchor: planeAnchor)
        }
    }
}
This adds the SCNNode:
func addPlane(node: SCNNode, anchor: ARPlaneAnchor) {
    // We add the anchor plane here
    let showDebugVisuals = false
    let plane = Plane(anchor, showDebugVisuals)
    planes[anchor] = plane
    node.addChildNode(plane)

    // We add our custom SCNNode here
    let scene = SCNScene(named: "art.scnassets/PlayerModel.scn")!
    let bodyNode = scene.rootNode.childNode(withName: "Body", recursively: true)!
    bodyNode.position = SCNVector3.positionFromTransform(anchor.transform)
    bodyNode.movabilityHint = .movable
    wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
    wrapperNode.addChildNode(bodyNode)
    scnView.scene.rootNode.addChildNode(wrapperNode)
}
I've tried adding a plane/anchor node and putting the "Body" node in that, but it still moves. I thought maybe it has something to do with the update function:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
}
Or, most likely, the position setting:
wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
I've looked through every source, project file, and video on the internet, and nobody has a simple solution to this simple problem.
There are two kinds of "moving around" that could be happening here.
One is that ARKit is continuously refining its estimate of how the device's position in the real world maps to the abstract coordinate space you're placing virtual content in. For example, suppose you put a virtual object at (0, 0, -0.5), and then move your device to the left by exactly 10 cm. The virtual object will appear to be anchored in physical space only if ARKit tracks the move precisely. But visual-inertial odometry isn't an exact science, so it's possible that ARKit thinks you moved to the left by 10.5 cm — in that case, your virtual object will appear to "slip" to the right by 5 mm, even though its position in the ARKit/SceneKit coordinate space remains constant.
You can't really do much about this, other than hope Apple makes devices with better sensors, better cameras, or better CPUs/GPUs and improves the science of world tracking. (In the fullness of time, that's probably a safe bet, though that probably doesn't help with your current project.)
Since you're also dealing with plane detection, there's another wrinkle. ARKit is continuously refining its estimates of where a detected plane is. So, even though the real-world position of the plane isn't changing, its position in ARKit/SceneKit coordinate space is.
This kind of movement is generally a good thing — if you want your virtual object to appear anchored to the real-world surface, you want to be sure of where that surface is. You'll see some movement as plane detection becomes more sure of the surface's position, but after a short time, you should see less "slip" for plane-anchored virtual objects than for those just floating in world space as you move the camera around.
In your code, though, you're not taking advantage of plane detection to make your custom content (from "PlayerModel.scn") stick to the plane anchor:
wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
wrapperNode.addChildNode(bodyNode)
scnView.scene.rootNode.addChildNode(wrapperNode)
This code uses the initial position of the plane anchor to position wrapperNode in world space (because you're making it a child of the root node). If you instead make wrapperNode a child of the plane anchor's node (the one you received in renderer(_:didAdd:for:)), it'll stay attached to the plane as ARKit refines its estimate of the plane's position. You'll get a little bit more movement initially, but as plane detection "settles", your virtual object will "slip" less.
(When you make the node a child of the plane, you don't need to set its position — a position of zero means it's right where the plane is. If anything, you need to set its position only relative to the plane — i.e. how far above/below/along it.)
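Here's a rough sketch of that approach, reusing the wrapperNode and "PlayerModel.scn" asset from the question:

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }

    // Load the custom content, as in the question.
    let scene = SCNScene(named: "art.scnassets/PlayerModel.scn")!
    let bodyNode = scene.rootNode.childNode(withName: "Body", recursively: true)!
    wrapperNode.addChildNode(bodyNode)

    // Parent the wrapper to the anchor's node, not the scene's root node.
    // ARKit moves the anchor's node as it refines the plane estimate, so the
    // content stays attached to the surface. No position needed: zero means
    // "right at the anchor".
    node.addChildNode(wrapperNode)
}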
To keep an SCNNode in place, you can disable the session's plane detection once you get the result you want.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = []
self.sceneView.session.run(configuration)
The reason for this is that ARKit constantly re-estimates the position of the detected plane, resulting in your SCNNode moving around.
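A rough sketch of the whole flow (contentNode is an illustrative name; by default, re-running the session with a new configuration keeps existing anchors):

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }

    // Attach the content to the newly detected plane.
    node.addChildNode(contentNode)

    // Re-run the session without plane detection so ARKit stops
    // re-estimating (and moving) the plane anchor.
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = []
    sceneView.session.run(configuration)
}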
I'm trying to grasp the conversion thing in SpriteKit, but despite having read the documentation and several posts on SO, I can't seem to get it right. As far as I understand, there are two coordinate systems that work independently of one another, one for the scene and one for the view, which is why I simply can't use things like UIScreen.main.bounds.maxX to determine screen corners that the node can relate to. Am I getting this right?
Anyway, here's my attempt at converting coordinates:
let mySquare = SKShapeNode(rectOf: CGSize(width: 50, height: 50))
mySquare.fillColor = SKColor.blue
mySquare.lineWidth = 1
let myPoint = CGPoint(x: 200, y: 0)
let newPosition = mySquare.convert(myPoint, from: self)
mySquare.position = newPosition
print(newPosition)
self.addChild(mySquare)
The print returns the exact same position that went in, so obviously I'm not doing this right. I have tried a number of different combinations, but with pretty much no result; the coordinates remain the same. I have also tried let myPoint = CGPoint(x: UIScreen.main.bounds.maxX, y: UIScreen.main.bounds.maxY), but same there; no conversion.
What am I missing? In my head I read the conversion above as "convert myPoint from the view coordinate system and use it for my node mySquare".
There are lots of coordinate systems floating around, and so lots of potential sources of confusion:
Scene coordinates: that's your game's world, and what you usually think about when imagining coordinates and how to position things overall.
Node coordinates: Nodes have their own coordinate systems. Once you start building a hierarchy, that matters. Imagine, e.g., an on-screen joystick that has a background showing a graphic of movement directions and a central "knob" that the player can manipulate. You might represent the joystick as a node with two children. One child is a sprite for the background, and the other is a sprite for the knob. The background sprite would naturally be at position (0,0), meaning the center of the overall joystick. The knob would move around, with (0,0) meaning centered and maybe (0,100) meaning up a bit. The overall joystick might sit at (200,300) in the scene. Then the background sprite would show up at (200,300) in the scene and the knob, when up, would be at (200,300)+(0,100) = (200,400) in the scene. The convert(_:from:) and convert(_:to:) methods are for converting within the node hierarchy. You could ask where the knob is in the overall scene's coordinates by knob.convert(.zero, to: scene) or joystick.convert(knob.position, to: scene), as in the sketch after this list. You very rarely need to do that sort of conversion.
View coordinates: The view is a window on the scene, i.e., what's actually being shown. If you've got a full screen game, the view is basically determined by the screen size in points. How view coordinates map to scene coordinates determines what part of the scene you actually see. If you need to go between view coordinates and scene coordinates, you use the scene's convertPoint(fromView:) and convertPoint(toView:) methods.
If you don't do anything special and have the scene size the same as the view size, then the scene-view mapping will have (0,0) in the scene at the lower left corner of the view. Another common convention is to have (0,0) in the scene at the center of the screen by setting the scene's anchorPoint to (0.5,0.5). Or perhaps you've designed the scene so that the world is 2000x2000 in size and there will be a nontrivial scaling and possible letter-boxing or cropping involved (depending on the setting of the scene's scaleMode). Or if your game has a camera node and, e.g., the camera is set to follow the player around, then the view-to-scene mapping will be changing as the player moves.
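Here's a minimal sketch of the joystick hierarchy described above (the node names and image assets are just illustrative):

import SpriteKit

class GameScene: SKScene {
    override func didMove(to view: SKView) {
        // A joystick node with two children: a background sprite and a knob sprite.
        let joystick = SKNode()
        joystick.position = CGPoint(x: 200, y: 300) // where the joystick sits in the scene

        let background = SKSpriteNode(imageNamed: "joystickBackground") // hypothetical asset
        background.position = .zero // (0,0) = centered on the joystick

        let knob = SKSpriteNode(imageNamed: "joystickKnob") // hypothetical asset
        knob.position = CGPoint(x: 0, y: 100) // pushed "up" relative to the joystick

        joystick.addChild(background)
        joystick.addChild(knob)
        addChild(joystick)

        // Where is the knob in scene coordinates? (200,300) + (0,100) = (200,400)
        let knobInScene = joystick.convert(knob.position, to: self)
        print(knobInScene)
    }
}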
In your code, calling mySquare.convert(_:from:) doesn't really even make sense, since the square hasn't been added to the scene at the time you're doing the "conversion".
Anyway, if you really want to do something like "put a square in the top-left corner of the screen", then you can take the point in the view's frame and convert it to scene coordinates and set the square's position to that.
override func didMove(to view: SKView) {
...
mySquare.position = convertPoint(fromView: CGPoint(x: view.frame.minX, y: view.frame.minY))
addChild(mySquare)
...
}
Edit: I would encourage you though to think mostly in terms of the overall scene, after some initial consideration of how the game should map to devices with screens of different sizes and aspect ratios. Once you're thinking in terms of the scene, then the scene's frame (rather than the view's frame) becomes the most natural reference when you're imagining "at the left edge" or "near the bottom right".
I want to place a figure straight on the floor. I see two options for where to put it:
inside anchor's SCNNode with anchor's coordinates
inside rootNode, in global coordinates, with height == anchor.transform[3][1]
I don't turn off tracking, because I see that the stability of tracking improves in the first 10-20 seconds.
In the first case, my figure rotates randomly (because the anchor tends to increase its extent and wants to fit the extent's rectangle to the tracked area). In the second case, the figure may end up higher or lower than the actual floor (I can see this by adding an extra "floor" inside the anchor's SCNNode).
I can use the first case and apply transformations to compensate for the rotation, but that does not look like the right solution.
What is the right way to place a figure on the floor?
I guess you got the anchor from a callback after setting something like arConfiguration.planeDetection = .horizontal, where arConfiguration is defined as let arConfiguration = ARWorldTrackingConfiguration().
When a callback like func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) is called by ARKit, you should add the node to the scene's rootNode. For the same plane, func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) will then be called repeatedly as the estimate is refined. So, to keep the same object in the same position while exploring the ARKit scene, you should act only in the "add" callback.
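A rough sketch of acting only in the "add" callback (figureNode is just an illustrative name for the figure you want to place):

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }
    // Place the figure once, at the anchor's position in world coordinates.
    figureNode.position = SCNVector3(anchor.transform.columns.3.x,
                                     anchor.transform.columns.3.y,
                                     anchor.transform.columns.3.z)
    sceneView.scene.rootNode.addChildNode(figureNode)
}

func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
    // Deliberately empty: ignoring updates keeps the figure where it was first placed.
}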
Did I get it right?
Hope this helps.
I want a node which I add to my scene to point north. I get the heading data from Core Location, so that represents the direction the device is currently facing at the point my scene was created (and thus the direction my root node faces), and then I add the heading to my new sceneNode's eulerAngles.y, to rotate it so it faces north.
func renderer(_ renderer: SCNSceneRenderer, didRenderScene scene: SCNScene, atTime time: TimeInterval) {
    if sceneNode == nil,
       let heading = self.locationManager.heading {
        sceneNode = SCNNode()
        sceneNode.eulerAngles.y += Float(heading).degreesToRadians
        sceneView.scene.rootNode.addChildNode(sceneNode)
    }
}
The heading information is correct, and rotating by that much does rotate it by the required amount, presuming that the heading is the same direction my root node is facing. But I'm finding that my root node's direction is not equivalent to the device's heading, and can sometimes be wildly off. So the assumption that the heading is the same as the scene node's "heading" is incorrect, and I need to know how far off from the heading it is, so I can correct it properly within my sceneNode.
Change your session configuration's worldAlignment to .gravityAndHeading.
With the default .gravity alignment, there's no absolute reference for where the x and z axes of the AR world coordinate system point — their directions are based on the initial orientation of your device when the session starts.
With the .gravityAndHeading option, the x and z axes are aligned to compass directions, so you can safely orient content relative to compass directions.
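A minimal sketch, assuming a standard ARSCNView named sceneView:

let configuration = ARWorldTrackingConfiguration()
configuration.worldAlignment = .gravityAndHeading // y is up, +x is east, -z is true north
sceneView.session.run(configuration)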
How is it possible to implement a vertical plane detection (i.e. for walls)?
let configuration = ARWorldTrackingSessionConfiguration()
configuration.planeDetection = .horizontal //TODO
Edit: This is now supported as of ARKit 1.5 (iOS 11.3). Simply use .vertical. I have kept the previous post below for historical purposes.
TL;DR
Vertical plane detection is not (yet) a feature that exists in ARKit. The fact that .horizontal is an enum case suggests that more options could be being worked on and might be added in the future; if plane detection were just a Boolean value, that would suggest the API is final.
Confirmation
This suspicion was confirmed by a conversation that I had with an Apple engineer at WWDC17.
Explanation
You could argue that creating an implementation for this would be difficult, as there are infinitely many more orientations for a vertical plane than for a horizontal one, but as rodamn said, this is probably not the case.
From rodamn’s comment:
At its simplest, a plane is defined by three coplanar points. You have a surface candidate once sufficient coplanar features are detected along a surface (vertical, horizontal, or at any arbitrary angle). It's just that the normals of horizontal planes will point along the up/down axis, while the normals of vertical planes will be parallel to the ground plane. The challenge is that unadorned drywall tends to generate few visual features, and plain walls may often go undetected. I strongly suspect that this is why the .vertical feature is not yet released.
However, there is a counter argument to this. See comments from rickster for more information.
Support for this is coming with iOS 11.3:
static var vertical: ARWorldTrackingConfiguration.PlaneDetection
The session detects surfaces that are parallel to gravity (regardless of other orientation).
https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration.planedetection
https://developer.apple.com/documentation/arkit/arworldtrackingconfiguration.planedetection/2867271-vertical
Apple has announced that iOS 11.3 will feature various updates for AR, including ARKit 1.5. This update includes the ability for ARKit to recognize and place virtual objects on vertical surfaces like walls and doors.
Vertical plane detection is now supported in ARWorldTrackingConfiguration:
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)
Since the iPhone X features a front-facing depth camera, my suspicion is that a rear-facing one will arrive in the next version, and perhaps the .vertical capability will be delayed until then.
In ARKit 1.0 there was just a .horizontal enum case for detecting horizontal surfaces like a table or a floor. In ARKit 1.5 and higher there are .horizontal and .vertical type properties of a PlaneDetection structure that conforms to the OptionSet protocol.
To implement a vertical plane detection in ARKit 2.0+ use the following code:
configuration.planeDetection = .vertical
Or you can use values for both types of detected planes:
private func configureSceneView(_ sceneView: ARSCNView) {
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = [.horizontal, .vertical] // both plane types
    configuration.isLightEstimationEnabled = true
    sceneView.session.run(configuration)
}
Also you can add an extension of your class to handle the delegate calls:
extension ARSceneManager: ARSCNViewDelegate {
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        print("Found plane: \(planeAnchor)")
    }
}
I did it with Unity, but I needed to do the math myself.
I use Random Sample Consensus (RANSAC) to detect vertical planes from the point cloud returned by ARKit. It's a loop that randomly picks 3 points to create a plane, counts the points that match it, and keeps whichever try is best.
It works, but because ARKit can't return many feature points when the wall is a plain color, it fails in many situations.
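A rough sketch of that RANSAC loop (this is plain math on a point array, not ARKit API; the iteration count and tolerance are illustrative):

import simd

// RANSAC plane fit: repeatedly pick 3 random points, form the plane through
// them, and count how many points lie within `tolerance` of that plane.
func ransacPlane(points: [SIMD3<Float>],
                 iterations: Int = 200,
                 tolerance: Float = 0.02) -> (normal: SIMD3<Float>, point: SIMD3<Float>)? {
    guard points.count >= 3 else { return nil }
    var best: (normal: SIMD3<Float>, point: SIMD3<Float>, inliers: Int)?

    for _ in 0..<iterations {
        let a = points.randomElement()!, b = points.randomElement()!, c = points.randomElement()!
        let normal = simd_normalize(simd_cross(b - a, c - a))
        guard normal.x.isFinite else { continue } // skip degenerate (collinear) samples

        // Inliers: points whose distance to the candidate plane is under tolerance.
        let inliers = points.filter { abs(simd_dot($0 - a, normal)) < tolerance }.count
        if inliers > (best?.inliers ?? 0) {
            best = (normal, a, inliers)
        }
    }
    return best.map { ($0.normal, $0.point) }
}

To keep only vertical candidates, you could additionally reject samples whose normal isn't roughly perpendicular to gravity.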
Apple is said to be working on extra AR capabilities for the new iPhone, i.e., extra sensors for the camera. Maybe this will become a feature when those device capabilities are known. Some speculation here: http://uk.businessinsider.com/apple-iphone-8-rumors-3d-laser-camera-augmented-reality-2017-7 and another source: https://www.fastcompany.com/40440342/apple-is-working-hard-on-an-iphone-8-rear-facing-3d-laser-for-ar-and-autofocus-source
I have a series of (flat plane) nodes in my scene that I need to have constantly facing the camera.
How can I adjust the transform/rotation to get this working?
Also, where do I make this calculation?
Currently I am trying to make it happen on user interaction, in the SCNSceneRendererDelegate's renderer(_:updateAtTime:) method.
How about an SCNBillboardConstraint? That restricts you to iOS 9/El Capitan/tvOS. Add the constraint to each of your flat plane (billboard) nodes.
From the SceneKit Framework Reference: https://developer.apple.com/library/ios/documentation/SceneKit/Reference/SCNBillboardConstraint_Class/index.html
An SCNBillboardConstraint object automatically adjusts a node’s orientation so that it always points toward the pointOfView node currently being used to render the scene.
In the more general case, SCNLookAtConstraint will keep any node's minus-Z axis pointed toward any other node.
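A minimal sketch of both options (planeNode, targetNode, and otherNode are illustrative names):

// SCNBillboardConstraint: keeps a node facing the camera automatically.
let billboard = SCNBillboardConstraint()
billboard.freeAxes = .Y // rotate only around the y-axis; use .all for free rotation
planeNode.constraints = [billboard]

// SCNLookAtConstraint: points a node's -Z axis at any other node.
let lookAt = SCNLookAtConstraint(target: targetNode)
lookAt.isGimbalLockEnabled = true // keep the node upright while it tracks the target
otherNode.constraints = [lookAt]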