didAddNode vs SceneKit Collision Detection - ios

I am building a small demo where two objects can collide with each other. Basically, an object will be placed on a plane. I have the following code for adding a physics body to the plane:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if anchor is ARPlaneAnchor {
        let plane = SCNPlane(width: 0.5, height: 0.5)
        let material = SCNMaterial()
        material.isDoubleSided = true
        material.diffuse.contents = UIImage(named: "overlay_grid")
        plane.firstMaterial = material
        let planeNode = SCNNode(geometry: plane)
        planeNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
        planeNode.physicsBody?.categoryBitMask = BodyType.plane.rawValue
        planeNode.eulerAngles.x = .pi / 2
        node.addChildNode(planeNode)
    }
}
Even though the plane gets added, it does not participate in any physics collisions. If I try to place objects on it, they go right through it. But if I change the last line to the following, it works:
// node.addChildNode(planeNode) // INSTEAD OF THIS
planeNode.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
self.sceneView.scene.rootNode.addChildNode(planeNode) // THIS WORKS
My understanding is that all the collision-related stuff is maintained by the scene, and that in order to participate in collisions I need to add the node to the scene's rootNode hierarchy instead of under the ARKit-managed anchor node.
QUESTION:
// node.addChildNode(planeNode) // WHY THIS DOES NOT WORK
planeNode.position = SCNVector3(anchor.transform.columns.3.x, anchor.transform.columns.3.y, anchor.transform.columns.3.z)
self.sceneView.scene.rootNode.addChildNode(planeNode) // WHY THIS WORKS

Static physics bodies are so named because they aren’t supposed to move (relative to the global/world/scene coordinate space). Many optimizations in the inner workings of a physics engine depend on this, so changing the position of a node with an attached static physics body is likely to cause incorrect behavior.
ARKit continually moves the ARPlaneAnchors that result from plane detection — the longer it looks at a real-world planar surface, from more different angles, the better it knows the position and size of that plane.
When you add a child node to the ARSCNView-managed node in renderer(_:didAdd:for:), the child node’s position may not change... but that position is relative to its parent node, and ARKit automatically moves the parent node to match the ARPlaneAnchor it goes with. So the child node moves relative to the world whenever ARKit updates the plane anchor. If you have a static physics body on that node, you get weirdness.
When you directly add a node as a child of the scene’s rootNode and set its position based on the initial position of a plane anchor, that node stays still — you’re the only one setting its world-space position, and you’re doing so exactly once. So it’s safe to give it a static physics body.
(Note that if you want “static” physics body behavior for something that can change over time, you can delete the physics body and re-create it at a new position. Just don’t do so too often, or you’re probably defeating other optimizations in the physics engine.)
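For example, a minimal sketch of that delete-and-re-create approach, wired into ARKit's plane-update callback (the BodyType category comes from the question; the rest is my assumption, not part of the original answer):
// Hedged sketch: refresh a static body after ARKit refines a plane anchor.
// Call sparingly, e.g. from renderer(_:didUpdate:for:).
func refreshStaticBody(on planeNode: SCNNode) {
    planeNode.physicsBody = nil // discard the stale static body
    let body = SCNPhysicsBody(type: .static, shape: nil) // shape re-derived from current geometry
    body.categoryBitMask = BodyType.plane.rawValue
    planeNode.physicsBody = body
}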

Related

Fix the camera to a distance from a spritenode

I am trying to fix the camera to a sprite node, “players.first!”, and I managed to do so using SKConstraints as follows:
func setupWorld() {
    let playerCamera = SKCameraNode()
    let background = SKSpriteNode(imageNamed: platformType + "BG")
    var cameraFollow = [SKConstraint]()
    cameraFollow.append(SKConstraint.distance(SKRange(constantValue: 0), to: players.first!))
    playerCamera.constraints = cameraFollow
    background.zPosition = layers().backgroundLayer
    background.constraints = cameraFollow
    background.size = self.size
    self.addChild(playerCamera)
    self.camera = playerCamera
    self.addChild(background)
    physicsWorld.contactDelegate = self
    addEmitter()
}
But this keeps the camera fixed to the exact location of the node. I want the camera to be shifted to the right of the node “players.first!” (only in the x dimension), and I couldn’t manage to do so with SKConstraints. Note that the node is moving fast, so updating the position of the camera in the update function makes the camera jitter.
This image explains my issue.
Constrain the camera to an empty SKNode and make that node a child of the first player, offset to the right in the player's frame. This can be accomplished in the scene editor or programmatically by setting the dummy node's position to something like CGPoint(x: 100, y: 0). When the player moves, this node will also move, dragging the camera along with it; and since the camera is focused on this node, the nodes in the same 'world' as the player will appropriately appear to move in the opposite direction while maintaining the look you want for the player.
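As a minimal sketch, assuming the playerCamera and players setup from the question:
// Empty node offset to the right in the player's frame; the camera tracks it.
let cameraTarget = SKNode()
cameraTarget.position = CGPoint(x: 100, y: 0) // shift only in x
players.first!.addChild(cameraTarget) // moves whenever the player moves
playerCamera.constraints = [SKConstraint.distance(SKRange(constantValue: 0), to: cameraTarget)]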
EDIT: If the player rotates
If the player needs to rotate, the above configuration will result in the entire node world revolving around the fixed empty node. To prevent this, instead place an empty SKNode that acts as the fixed camera point (call it "cameraLocation") and the player node into another empty SKNode (call it "pseudoPlayer"). Constrain the camera to "cameraLocation". Moving the "pseudoPlayer" node will then move both the camera's fixed point (so that the camera moves) and the player node, while only rotating the player and not the entire world. See the sketch below.
NOTE: The only potential drawback is that in order to move the player correctly through the world, you must move the "pseudoPlayer" instead.
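A minimal sketch of that hierarchy, using the node names from the answer (the player and playerCamera references are assumed from the question):
// Container that carries both the player and the camera's fixed point.
let pseudoPlayer = SKNode()
let cameraLocation = SKNode()
cameraLocation.position = CGPoint(x: 100, y: 0) // offset to the player's right
pseudoPlayer.addChild(player) // the player may rotate freely inside the container
pseudoPlayer.addChild(cameraLocation) // does not rotate with the player
self.addChild(pseudoPlayer)
// Move pseudoPlayer (not player) to move the player through the world.
playerCamera.constraints = [SKConstraint.distance(SKRange(constantValue: 0), to: cameraLocation)]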

Align 3D object parallel to vertical plane detected by estimatedVerticalPlane

I have this book, but I'm currently remixing the furniture app from the video tutorial that was free on AR/VR week.
I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.
This is proving to be harder than I thought. Positioning isn't an issue. Much like the furniture placement app, you can just take column 3 of the hit test's worldTransform and give the new geometry that vector3 as its position.
But I do not know what I have to do to get my 3D object rotated to face forward on the aligned detected plane. As I have a canvas object, the photo is on one side of the canvas. On placement, the photo is ALWAYS facing away.
I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed a canvas on a wall to my right.
I've tried quite a few solutions online; all but one use .existingPlaneUsingExtent for vertical plane detection. That allows you to get the ARPlaneAnchor from hitTestResult.anchor as? ARPlaneAnchor.
If you try this when using .estimatedVerticalPlane, the anchor is nil.
I also didn't continue down this route, as my horizontal 3D objects started getting placed in the air. This may be down to control-flow logic, but I am ignoring it until the vertical canvas placement is working.
My current train of thought is to get the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane's grid UIImage or the hit-test point.
How would I get a forward vector from a 3D point? Or get the front vector from the grid image, the UIImage that is placed as an overlay when ARKit detects a vertical wall?
Here is an example. The canvas is showing its back and is not parallel with the detected vertical plane (the column). But there is a "Place Poster Here" grid, which is what I want the canvas to align with so I'm able to see the photo.
Things I have tried:
using .estimatedVerticalPlane
the approach from "ARKit estimatedVerticalPlane hit test get plane rotation"
I don't know how to correctly apply the matrix and Euler-angle results from that SO answer.
My addPicture function:
func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hitTest to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate eulerAngles.
    // We have all detected anchors in the _Renderer SCNNode. However there are
    // Get the current furniture item, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()
    // Look for the vertical node geometry in verticalAnchors.
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // Code removed, as a .estimatedVerticalPlane hitTestResult doesn't get here.
        }
    } else {
        // Transform the hit result to world coordinates.
        let worldTransform = hitTestResult.worldTransform
        // eulerAngles here relies on a matrix extension from the linked SO answer.
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y
        // Set the transform matrix.
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(
            positionMatrix.x,
            positionMatrix.y,
            positionMatrix.z
        )
        // + here relies on a custom SCNVector3 operator.
        picture.position = position + pictureSettings.currentPictureOffset()
    }
    // Parented to the rootNode of the scene.
    sceneView.scene.rootNode.addChildNode(picture)
}
Thanks for any help available.
Edit:
I have noticed the 'handedness' of the 3D model isn't correct / is opposite? Positive Z is pointing to the left and positive X is facing the camera, for what I would expect to be the front of the model. Is this an issue?
You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, then use the session callback to vend an SCNNode for the added anchor.
For example your hit test might look something like:
@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
          let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result and added to the ARSession. This registers that point as an area of interest for the ARSession, so we'll get better tracking and less drift after our content is added there.
Next, we need to vend our SCNNode for the newly added ARAnchor. For example:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}
Here we first check whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it is not, then it is an anchor that was added as the result of a hit test, so we create a geometry and node for the most recent tap. Because it is a vertical plane and our content lies flat, we need to rotate it about the x-axis, so we adjust its eulerAngles to make it upright. If we returned planeNode directly, the adjustment to its eulerAngles would be wiped out when ARKit positions the node, so we add it as a child of an empty node and return that instead.
This should result in something like the following.

ARKit: Are renderer(didAdd:) and renderer(nodeFor:) exclusive?

Relying completely on ARKit's automatic plane detection is something I don't want to do, since it takes time to detect surfaces and real-life surfaces need to be textured enough. Hence I need an option where, if I want, I can add anchors at will with the tap of a button.
Here is where renderer(nodeFor:) comes in handy: just add an anchor at the tap of a button, using a hit test to ascertain the position of the anchor, and then add nodes using the nodeFor: method.
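For illustration, a minimal sketch of that manual path (the button action and the .featurePoint hit-test type are my assumptions, not from the question):
// Hypothetical button action: drop an anchor at whatever the screen centre hits.
@objc func addAnchorTapped(_ sender: UIButton) {
    let screenCenter = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
    guard let hit = sceneView.hitTest(screenCenter, types: .featurePoint).first else { return }
    sceneView.session.add(anchor: ARAnchor(transform: hit.worldTransform)) // nodeFor: fires next
}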
However, in other cases where I don't want to manually tap buttons, renderer(didAdd:) should work. I have made a sharedObject through which I can ascertain whether plane detection needs to be "automated" or "manual". In the automated case, planeDetection is set to .horizontal, whereas in the manual case it is set to [].
The issue is that, on testing, it appears only one of the two delegate methods works. Is there a way I can achieve what I want: a switch to toggle between automated plane detection and adding anchors (and then planes) myself? I would love to have both options.
Is it possible to use two different delegates to achieve it? Just a thought... in that case, how would it work? Pointers would be very much appreciated.
Yes, renderer(didAdd:) and renderer(nodeFor:) are exclusive. As per the docs, if we want to implement our own method for adding nodes to the scene, we can use renderer(nodeFor:); or we can instead let ARKit do it for us using renderer(didAdd:).
The way to manage both cases (adding nodes manually while planeDetection = [], and adding them automatically when planeDetection = .horizontal) can be achieved with the renderer(nodeFor:) method itself. There is no need for renderer(didAdd:).
Within renderer(nodeFor:), in the planeDetection = .horizontal case, the anchor can be cast as an ARPlaneAnchor, whose center and extent can be used to size and position the added node.
Such as:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if let planeAnchor = anchor as? ARPlaneAnchor {
        let node = SCNNode()
        let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                             height: CGFloat(planeAnchor.extent.z))
        let planeNode = SCNNode(geometry: plane)
        planeNode.name = "anchorPlane"
        planeNode.simdPosition = float3(planeAnchor.center.x, 0, planeAnchor.center.z)
        node.addChildNode(planeNode)
        return node
At the same time, another condition can be imposed for planeDetection = []: when the anchor can't be cast as an ARPlaneAnchor, the geometry underlying the node can be given whatever size is desired.
    } else {
        let node = SCNNode()
        let plane = SCNPlane(width: 0.5, height: 0.5)
        plane.firstMaterial?.diffuse.contents = UIColor.white
        let planeNode = SCNNode(geometry: plane)
        node.addChildNode(planeNode)
        return node
    }
}
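For completeness, a hedged sketch of how the two modes might be toggled when (re)running the session; the automaticMode flag here stands in for the question's sharedObject:
// Illustrative toggle between automatic and manual plane handling.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = automaticMode ? .horizontal : [] // [] disables detection
sceneView.session.run(configuration, options: [.removeExistingAnchors])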

Position SCNNode in front of camera with just one axis of rotation

I'm trying to make an ARKit application where an SCNNode, in this case a box, is placed in front of the camera, facing the camera. As the user moves the camera around, objects are placed when a certain distance has been moved. This would leave you with a series of nodes facing the camera in a line, equally spaced.
I have this working to a certain extent, but my problem is with the rotation. I'm currently taking all axes of rotation, so as the user re-orients their phone, the rotation of the node matches. I want to restrict this to just the rotation around the y-axis. The ideal outcome is a domino-trail like look, with all the objects having the same x and z rotations, but potentially different y rotations.
I hope I've explained this clearly enough!
Here's the code I'm currently using:
func createNode(fromCameraTransform cameraTransform: matrix_float4x4) -> SCNNode {
    let geometry = SCNBox(width: 0.02, height: 0.04, length: 0.01, chamferRadius: 0)
    let physicsBody = SCNPhysicsBody(type: .dynamic, shape: SCNPhysicsShape(geometry: geometry))
    physicsBody.mass = 1000
    let node = SCNNode(geometry: geometry)
    node.physicsBody = physicsBody
    var translationMatrix = matrix_identity_float4x4
    translationMatrix.columns.3.x = 0.05 // Moves the node down in world space
    translationMatrix.columns.3.z = -0.1 // Moves the object away from the camera
    node.simdTransform = simd_mul(cameraTransform, translationMatrix)
    return node
}
I've tried different combinations of extracting values from the second column of the cameraTransform and setting them as eulerAngles, rotation, and simdRotation, but to no avail.
I've also tried extracting values from the pointOfView of the current sceneView and assigning them to the same properties listed above, but again, no luck.
Any help would be greatly appreciated!
I know a little bit about this, but am really just starting out with SceneKit and 3D transformations/matrices, so be gentle with me!
I think I know what you're trying to do: automatically drop each new domino so it's evenly spaced, following the camera.pointOfView trail.
You can set the new node's euler angle around the y-axis to the same value as the camera pointOfView's eulerAngles.y. So as you move the camera around, the next node you place is always facing towards the camera (only rotated around the y-axis).
The renderer(_:updateAtTime:) function below gets called every frame, so it runs whenever the camera moves:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // Set the y-axis euler angle of the node you are about to place
    // to match the camera's pointOfView.
    node.eulerAngles.y = (sceneView.pointOfView?.eulerAngles.y)!
}
I had this working in a playground so it does work.
Edit: The solution above had a gimbal lock problem (as you went around in a circle, it would reset its angle back to the axis).
So I found this approach using SCNBillboardConstraint, which works without the gimbal lock problem as you go around in a circle:
let yFreeConstraint = SCNBillboardConstraint()
yFreeConstraint.freeAxes = .Y // only rotate about the y-axis to face the camera
node.constraints = [yFreeConstraint]
node.eulerAngles = node.presentation.eulerAngles
node.position = position

Swift : ARKit Save ARPlaneAnchor for next session

ARKit is quite new, and I am quite new to Swift... so I'm having some trouble.
I'd like to save the ARPlaneAnchors detected during a session and reload them when I relaunch my app. My phone will always be in the same place, and I'd like to scan the room one time and have the anchors found in the room remembered every time I launch the app.
I tried several solutions:
Solution 1: Save the ARPlaneAnchor using NSKeyedArchiver.archiveRootObject(plane, toFile: filePath). I got this error:
Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[ARPlaneAnchor encodeWithCoder:]: unrecognized selector sent to instance
I think that maybe I can't save this kind of data locally.
Solution 2: Store the data of the ARPlaneAnchor, then instantiate anchors from it when I launch the app. The data are mainly floats. I could create ARAnchors easily and cast them as ARPlaneAnchor, but I could not modify the center and extent properties of the ARPlaneAnchor, because they only have a getter and no setter. So I can't create the right anchors.
I am open to any solution. I think I need to store the ARAnchor object, but for now I could not find a way to do it without a crash!
So if someone can help me I would be very grateful.
First... if your app is restricted to a situation where the device is permanently installed and the user can never move or rotate it, using ARKit to display overlay content on the camera feed is sort of a "killing mosquitos with a cannon" kind of situation. You could just as well work out at development time what kind of camera projection your 3D engine needs, use a "dumb" camera feed with your 3D engine running on top, and not need iOS 11 or an ARKit-capable device.
So you might want to think about your use case or your technology stack some more before you commit to specific solutions and workarounds.
As for your more specific problem...
ARPlaneAnchor is entirely a read-only class, because its use case is entirely read-only. It exists for the sole purpose of giving ARKit a way to give you information about detected planes. However, once you have that information, you can do with it whatever you want. And from there on, you don't need to keep ARPlaneAnchor in the equation anymore.
Perhaps you're confused because of the typical use case for plane detection (and SceneKit-based display):
Turn on plane detection
Respond to renderer(_:didAdd:for:) to receive ARPlaneAnchor objects
In that method, create virtual content to associate with the plane anchor and attach it to the node ARKit provides
Let ARSCNView automatically position that content for you so it follows the plane's position
If your plane's position is static with respect to the camera, though, you don't need all that.
You only need ARKit to handle the placement of your content within the scene if that placement needs ongoing management, as is the case when plane detection is live (ARKit refines its estimates of plane location and extent and updates the anchor accordingly). If you did all your plane-finding ahead of time, you won't be getting updates, so you don't need ARKit to manage updates.
Instead your steps can look more like this:
Know where a plane is (position in world space).
Set the position of your virtual content to the position of the plane.
Add the content to the scene directly.
In other words, your "Solution 2" is a step in the right direction, but not far enough. You want to archive not an ARPlaneAnchor instance itself, but the information it contains — and then when unarchiving, you don't need to re-create an ARPlaneAnchor instance, you just need to use that information.
So, if this is what you do to place content with "live" plane detection:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    let extent = planeAnchor.extent
    let center = planeAnchor.center
    // planeAnchor.transform not used, because ARSCNView automatically applies it
    // to the container node, and we make a child of the container node
    let plane = SCNPlane(width: CGFloat(extent.x), height: CGFloat(extent.z))
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = .pi / 2
    planeNode.simdPosition = center
    node.addChildNode(planeNode)
}
Then you can do something like this for static content placement:
struct PlaneInfo { // something to save and restore ARPlaneAnchor data
    let transform: float4x4
    let center: float3
    let extent: float3
}

func makePlane(from planeInfo: PlaneInfo) { // call this when you place content
    let extent = planeInfo.extent
    let center = planeInfo.transform * float4(planeInfo.center, 1)
    // we're positioning content in world space, so center is now
    // an offset relative to transform
    let plane = SCNPlane(width: CGFloat(extent.x), height: CGFloat(extent.z))
    let planeNode = SCNNode(geometry: plane)
    planeNode.eulerAngles.x = .pi / 2
    planeNode.simdPosition = center.xyz
    view.scene.rootNode.addChildNode(planeNode)
}

// convenience vector-width conversions used above
extension float4 {
    init(_ xyz: float3, _ w: Float) {
        self.init(xyz.x, xyz.y, xyz.z, w)
    }
    var xyz: float3 {
        return float3(self.x, self.y, self.z)
    }
}
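Since the question was about persisting across sessions, one possible (hedged) way to save and restore that information is to flatten PlaneInfo into plain floats and encode it. simd matrices aren't Codable, and the encoder choice and helper names below are assumptions:

// Hedged sketch: a Codable mirror of PlaneInfo, flattened to raw floats.
struct StoredPlane: Codable {
    var transform: [Float] // 16 values, column-major
    var center: [Float] // 3 values
    var extent: [Float] // 3 values

    init(_ info: PlaneInfo) {
        transform = [info.transform.columns.0, info.transform.columns.1,
                     info.transform.columns.2, info.transform.columns.3]
            .flatMap { [$0.x, $0.y, $0.z, $0.w] }
        center = [info.center.x, info.center.y, info.center.z]
        extent = [info.extent.x, info.extent.y, info.extent.z]
    }

    var planeInfo: PlaneInfo {
        let c = (0..<4).map { i in
            float4(transform[i*4], transform[i*4+1], transform[i*4+2], transform[i*4+3])
        }
        return PlaneInfo(transform: float4x4(columns: (c[0], c[1], c[2], c[3])),
                         center: float3(center[0], center[1], center[2]),
                         extent: float3(extent[0], extent[1], extent[2]))
    }
}

// Saving and loading (JSON chosen for simplicity; any coder works).
func savePlanes(_ planes: [PlaneInfo], to url: URL) throws {
    try JSONEncoder().encode(planes.map(StoredPlane.init)).write(to: url)
}

func loadPlanes(from url: URL) throws -> [PlaneInfo] {
    try JSONDecoder().decode([StoredPlane].self, from: Data(contentsOf: url)).map { $0.planeInfo }
}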
