I put an AnchorNode 0.9 meters away from the camera.
val anchor = frag.arSceneView.session.createAnchor(pose)
val node = TransformableNode(system) // system: the fragment's TransformationSystem
node.renderable = renderable
node.setParent(root) // root: the AnchorNode created from the anchor above
I can't move the node when no plane has been detected on the table.
How can I let the node be dragged along an infinite horizontal plane at the table's height?
I am trying to fix the camera to a sprite node, “players.first!”, and I managed to do so using SKConstraints as follows:
func setupWorld() {
    let playerCamera = SKCameraNode()
    let background = SKSpriteNode(imageNamed: platformType + "BG")
    var cameraFollow = [SKConstraint]()
    cameraFollow.append(SKConstraint.distance(SKRange(constantValue: 0), to: players.first!))
    playerCamera.constraints = cameraFollow
    background.zPosition = layers().backgroundLayer
    background.constraints = cameraFollow
    background.size = self.size
    self.addChild(playerCamera)
    self.camera = playerCamera
    self.addChild(background)
    physicsWorld.contactDelegate = self
    addEmitter()
}
But this keeps the camera fixed to the exact location of the node. I want the camera to be shifted to the right of “players.first!” (in the X dimension only), and I couldn't manage to do so with SKConstraints. Note that the node moves fast, so updating the camera's position in the update function makes the camera jitter.
This image illustrates my issue.
Constrain the camera to an empty SKNode and make that node a child of the first player, offset to the right in the player's frame. This can be done in the scene editor or programmatically by setting the dummy node's position to something like CGPoint(x: 100, y: 0). When the player moves, this node moves too, dragging the camera along with it; and since the camera is focused on this node, the nodes in the same 'world' as the player will appear to move in the opposite direction while maintaining the look you want for the player.
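A minimal sketch of that setup, reusing “playerCamera” and “players.first!” from the question (the name “cameraTarget” is mine):

    // Empty node offset to the player's right; the camera follows it.
    let cameraTarget = SKNode()
    cameraTarget.position = CGPoint(x: 100, y: 0) // offset in the player's frame
    players.first!.addChild(cameraTarget)
    playerCamera.constraints = [SKConstraint.distance(SKRange(constantValue: 0), to: cameraTarget)]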
EDIT: If the player rotates
If the player needs to rotate, the above configuration will make the entire node world revolve around the fixed empty node. To prevent this, instead place an empty SKNode that acts as the fixed camera point (call it "cameraLocation") and the player node into another empty SKNode (call it "pseudoPlayer"). Constrain the camera to "cameraLocation". Moving the "pseudoPlayer" node will then move both the camera's fixed point (so that the camera moves) and the player node, while a rotation affects only the player and not the entire world.
NOTE: The only potential drawback is that, to move the player correctly through the world, you must move "pseudoPlayer" instead.
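A sketch of that rotation-safe hierarchy, using the names from above (here "player" stands for your player node, e.g. players.first!, and the code runs in the scene's setup):

    // Wrapper that carries both the player and the camera's fixed point.
    let pseudoPlayer = SKNode()
    let cameraLocation = SKNode()
    cameraLocation.position = CGPoint(x: 100, y: 0) // fixed camera point, offset to the right
    pseudoPlayer.addChild(cameraLocation)
    pseudoPlayer.addChild(player) // the player can now rotate without swinging the camera
    addChild(pseudoPlayer)
    playerCamera.constraints = [SKConstraint.distance(SKRange(constantValue: 0), to: cameraLocation)]
    // Move pseudoPlayer, not player, to travel through the world:
    pseudoPlayer.position.x += 10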
I have this book, but I'm currently remixing the furniture app from the video tutorial that was free during AR/VR week.
I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.
This is proving to be harder than I thought. Positioning isn't an issue: much like the furniture placement app, you can take column 3 of the hit test's worldTransform and give the new geometry that vector3 as its position.
But I do not know what to do to get my 3D object rotated to face forward on the detected plane. As I have a canvas object, the photo is on one side of the canvas, and on placement the photo is ALWAYS facing away.
I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed the canvas on a wall to my right.
I've tried quite a few solutions online; all but one use .existingPlaneUsingExtent for vertical plane detection. That option lets you get the ARPlaneAnchor from the hit test via hitTestResult.anchor as? ARPlaneAnchor. If you use .estimatedVerticalPlane instead, the anchor is nil.
I also didn't continue down this route because my horizontal 3D objects started getting placed in the air. This may be down to control-flow logic, but I am ignoring it until the vertical canvas placement is working.
My current train of thought is to get the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane, or of the hit test point.
How would I get a forward vector from a 3D point? Or how would I get the front vector from the grid image, i.e. the UIImage that is placed as an overlay when ARKit detects a vertical wall?
Here is an example: the canvas is showing its back and is not parallel with the detected vertical plane (the column). But there is a "Place Poster Here" grid, which is what I want the canvas to align with so that I can see the photo.
Things I have tried:
using .estimatedVerticalPlane
ARKit estimatedVerticalPlane hit test get plane rotation
I don't know how to correctly apply the matrix and Euler-angle results from that SO answer.
My addPicture function:
func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hit test to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate eulerAngles.
    // We have all detected anchors in the _Renderer SCNNode.
    // Get the current furniture item, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()
    // Look for the vertical node geometry in verticalAnchors.
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // Code removed, as an .estimatedVerticalPlane hit test result doesn't get here.
        }
    } else {
        // Transform the hit result to world coordinates.
        let worldTransform = hitTestResult.worldTransform
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y
        // Set the transform matrix.
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(
            positionMatrix.x,
            positionMatrix.y,
            positionMatrix.z
        )
        picture.position = position + pictureSettings.currentPictureOffset()
    }
    // Parented to the rootNode of the scene.
    sceneView.scene.rootNode.addChildNode(picture)
}
Thanks for any help available.
Edited:
I have noticed that the 'handedness' of the 3D model isn't correct / is opposite:
positive Z points to the left and positive X faces the camera, where I would expect the front of the model to be. Is this an issue?
You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, then use the session callback to vend an SCNNode for the added anchor.
For example, your hit test might look something like:
@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
          let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result and added to the ARSession. This registers that point as an area of interest for the ARSession, so we'll receive better tracking and less drift after our content is added there.
Next, we need to vend our SCNNode for the newly added ARAnchor. For example:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}
Here we first check whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it is not, then it is an anchor that was added as the result of a hit test, so we create a geometry and node for the most recent tap. Because it is a vertical plane and our content lies flat, we need to rotate the content about the x-axis, so we adjust its eulerAngles to make it upright. If we returned planeNode directly, the adjustment to its eulerAngles would be overwritten, so we add it as a child node of an empty node and return that.
This should result in something like the following.
I have the following scene:
              [Root Node]
                   |
            [Main Container]
            |             |
    [Node A Wrapper] [Node B Wrapper]
            |             |
        [Node A]      [Node B]
I've set up pan gesture recognizers so that when you pan in open space, the [Main Container] rotates in the selected direction by +/- Double.pi/2 (90 degrees). When the pan starts on one of the subnodes A or B (I'm hit-testing for this in touchesBegan), I want to rotate that subnode around the world axes (again in 90-degree increments).
I'm rotating the [Main Container] using convertTransform() from rootNode, which works fine, and the rotations are performed along the world axes; the position of the main container is (0,0,0), which I believe makes this a lot easier.
The reason I wrapped the subnodes is so they have local positions of (0,0,0) inside their wrappers, which should help with rotating them around their origin. But since they are also rotated whenever I rotate [Main Container], the direction of their local axes changes, and the rotation is performed around a different axis than the one I want.
In my (very limited) understanding of transformation matrices, I assume I need to somehow chain and multiply the matrices produced by convertTransform of the parent nodes, or to use the worldTransform property somehow, but everything I've tried results in weird rotations. Any help would be appreciated!
I've set up a small sample project based on the SceneKit template, with controls similar to what you described. It's in Objective-C, but the relevant parts are pretty much the same:
- (void)handlePan:(UIPanGestureRecognizer *)gestureRecognize {
    CGPoint delta = [gestureRecognize translationInView:(SCNView *)self.view];
    if (gestureRecognize.state == UIGestureRecognizerStateChanged) {
        panHorizontal = NO;
        if (fabs(delta.x) > fabs(delta.y)) {
            panHorizontal = YES;
        }
    } else if (gestureRecognize.state == UIGestureRecognizerStateEnded) {
        SCNMatrix4 rotMat;
        int direction = 0;
        if (panHorizontal) {
            if (delta.x < 0) {
                direction = -1;
            } else if (delta.x > 1) {
                direction = 1;
            }
            rotMat = SCNMatrix4Rotate(SCNMatrix4Identity, M_PI_2, 0, direction, 0);
        } else {
            if (delta.y < 0) {
                direction = -1;
            } else if (delta.y > 1) {
                direction = 1;
            }
            rotMat = SCNMatrix4Rotate(SCNMatrix4Identity, M_PI_2, direction, 0, 0);
        }
        if (selectedNode == mainPlanet) {
            selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, rotMat);
        } else { // selectedNode is a child node of mainPlanet, i.e. one of the moons.
            // Get the translation matrix of the child node.
            SCNMatrix4 transMat = SCNMatrix4MakeTranslation(selectedNode.position.x, selectedNode.position.y, selectedNode.position.z);
            // Move the child node to the origin of its parent (but keep its local rotation).
            selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(transMat));
            // Apply the "rotation" of mainPlanet (we can use the transform because mainPlanet is at the world origin).
            selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, mainPlanet.transform);
            // Perform the rotation based on the pan gesture.
            selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, rotMat);
            // Remove the extra "rotation" of mainPlanet.
            selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, SCNMatrix4Invert(mainPlanet.transform));
            // Add back the translation matrix.
            selectedNode.transform = SCNMatrix4Mult(selectedNode.transform, transMat);
        }
    }
}
In handleTap:
selectedNode = result.node;
In viewDidLoad:
mainPlanet = [scene.rootNode childNodeWithName:@"MainPlanet" recursively:YES];
orangeMoon = [scene.rootNode childNodeWithName:@"orangeMoon" recursively:YES];
yellowMoon = [scene.rootNode childNodeWithName:@"yellowMoon" recursively:YES];
UIPanGestureRecognizer *panGesture = [[UIPanGestureRecognizer alloc] initWithTarget:self action:@selector(handlePan:)];
[gestureRecognizers addObject:panGesture];
And local vars:
SCNNode *mainPlanet;
SCNNode *orangeMoon;
SCNNode *yellowMoon;
SCNNode *selectedNode;
BOOL panHorizontal;
MainPlanet would be your main container and doesn't have to be visible (it is in my example because it has to be tapped to select what to rotate). The two moons are your nodes A and B, child nodes of the main node. No wrappers are necessary. The key part is obviously the commented portion.
Normally, to rotate a child node in local space (IF the parent node is at (0,0,0)):
1. Move it back to the origin by multiplying its transform with the inverse of its translation only.
2. Apply the rotation matrix.
3. Apply the original translation we removed in step 1.
As you noticed, that rotates the child node around its local pivot point and along its local axes. This works fine until you rotate the parent node. The solution is to apply that same parent rotation to the child node before rotating it based on the pan gesture (step 2), and then remove it again afterwards.
So, to get the results you desire:
1. Move it back to the origin by multiplying its transform with the inverse of its translation only.
2. Apply the rotation of the parent node (since it's at (0,0,0) and, I assume, not scaled, we can use its transform).
3. Apply the rotation matrix based on the pan gesture.
4. Remove the rotation of the parent node.
5. Apply the original translation we removed in step 1.
I'm sure there are other possible routes, and perhaps instead of steps 2 and 4 the rotation matrix could be converted to the main node's space using convertTransform to/from, but this way you can clearly tell what's going on.
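For reference, a minimal Swift sketch of the same matrix chain (assuming, as above, that the parent sits at the world origin and is not scaled; selectedNode, mainPlanet, and rotMat play the same roles as in the Objective-C code):

    func rotateChild(_ selectedNode: SCNNode, inside mainPlanet: SCNNode, by rotMat: SCNMatrix4) {
        let p = selectedNode.position
        let transMat = SCNMatrix4MakeTranslation(p.x, p.y, p.z)
        var t = selectedNode.transform
        t = SCNMatrix4Mult(t, SCNMatrix4Invert(transMat))             // 1. move back to the origin
        t = SCNMatrix4Mult(t, mainPlanet.transform)                   // 2. apply the parent's rotation
        t = SCNMatrix4Mult(t, rotMat)                                 // 3. pan-gesture rotation
        t = SCNMatrix4Mult(t, SCNMatrix4Invert(mainPlanet.transform)) // 4. remove the parent's rotation
        t = SCNMatrix4Mult(t, transMat)                               // 5. restore the translation
        selectedNode.transform = t
    }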
I have horizontal plane detection enabled in my ARKit app. I add geometry to my plane node whenever I find a new plane or get an update on an existing planeAnchor. Now I want to know the intersection coordinates of the partial plane that is visible in the current camera frame.
The following is the code to find the nodes visible in the current frame:
// get the current point of view
if let node = sceneView.pointOfView {
    let nodes = sceneView.nodesInsideFrustum(of: node)
    print("-----")
    for node in nodes {
        print(node.name ?? "unknown node")
    }
}
Swift 3, SceneKit: In my game, I have an SCNSphere node in the center of the screen. The sphere drops by gravity onto an SCNBox node, and a velocity of SCNVector3(0,6,0) is applied to it once it collides with the box.
A new box is created and moves forward (z+) towards my camera and towards the sphere. The sphere rises, peaks, and then falls back down (by gravity) towards the new box, and when it collides with the new box, a velocity of SCNVector3(0,6,0) is applied to it. This process repeats continuously: a sphere that repeatedly bounces on a new approaching box, basically.
Instead of just one box, however, there will be three boxes in a row. All boxes begin in front of the sphere node and move towards it when they are created. The boxes are placed in a row: one to the left of the sphere, one directly in front of it (the middle), and the third to its right.
I want to be able to drag my finger across the screen and move my sphere so that it can land on the left and right boxes. While I'm dragging, I do not want the y-velocity or y-position to be changed at all. I just want the x-position of my sphere node to mirror the real-world x-position of my finger relative to the screen. I also do not want the sphere node to change location based on a touch alone.
For example, if the sphere's position is at SCNVector3(2,0,0), and if the user taps near SCNVector3(-2,0,0), I do not want the sphere to "teleport" to where the user tapped. I want the user to drag the sphere from its last position.
func handlePan(recognizer: UIPanGestureRecognizer) {
    let sceneView = self.view as! SCNView
    sceneView.delegate = self
    sceneView.scene = scene
    let trans: SCNVector3 = sceneView.unprojectPoint(SCNVector3Zero)
    let pos: SCNVector3 = player.presentation.position
    let newPos = (trans.x) + (pos.x)
    player.position.x = newPos
}
I just want the x-position of my sphere node to mirror the real-world x-position of my finger relative to the screen
You can do this by using UIPanGestureRecognizer and getting the translation in the coordinate system of the view.
let myPanGestureRecognizer = UIPanGestureRecognizer(target: self, action: #selector(handlePan))
let trans2D: CGPoint = myPanGestureRecognizer.translation(in: self.view)
let transPoint3D: SCNVector3 = SCNVector3Make(Float(trans2D.x), Float(trans2D.y), <<z>>)
For the z value, refer to the unprojectPoint discussion in the documentation, which says that z should be the depth at which you want to un-project, relative to the near and far clipping planes of your view frustum.
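One common way to choose that depth (my assumption, not part of the original answer) is to project the sphere's current position and reuse its screen-space z:

    // Project the sphere's position to get its depth in normalized [0, 1] screen space.
    let screenZ = sceneView.projectPoint(sphereNode.presentation.position).z
    let transPoint3D = SCNVector3Make(Float(trans2D.x), Float(trans2D.y), screenZ)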
You can then un-project the translation to the 3D world coordinate system of the scene, which will give you the translation for the sphere node. Some partial sample code:
let trans: SCNVector3 = sceneView.unprojectPoint(transPoint3D)
let pos: SCNVector3 = sphereNode.presentation.position
let newPos = SCNVector3(trans.x + pos.x, trans.y + pos.y, trans.z + pos.z)
sphereNode.position = newPos
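Putting it together, a minimal sketch of a pan handler that drags the sphere only along x, relative to where it was when the drag began (sceneView, sphereNode, and the stored Float property panStartX are assumptions about the surrounding code):

    @objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
        switch recognizer.state {
        case .began:
            // Remember where the sphere was, so the drag is relative rather than a teleport.
            panStartX = sphereNode.presentation.position.x
        case .changed:
            let trans2D = recognizer.translation(in: sceneView)
            // Un-project the 2D translation at the sphere's current depth.
            let screenZ = sceneView.projectPoint(sphereNode.presentation.position).z
            let world = sceneView.unprojectPoint(SCNVector3Make(Float(trans2D.x), Float(trans2D.y), screenZ))
            let origin = sceneView.unprojectPoint(SCNVector3Make(0, 0, screenZ))
            // Apply only the x component; y stays under physics control.
            sphereNode.position.x = panStartX + (world.x - origin.x)
        default:
            break
        }
    }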