How to drag SCNNode with finger irrespective of axis using ARKit? - ios

I am working on an AR-based application using ARKit, using https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality as the base. With this I am able to move or rotate the whole virtual object.
The virtual object has many child nodes. I want to drag/move any child node with the user's finger, irrespective of the axis. The child SCNNode may be on the ground or floating. I want to move the object wherever the user's finger goes, regardless of the axis or the Euler angles of the child node. Is this even possible?
I followed the links below, but the node just moves along a particular axis.
ARKit - Drag a node along a specific axis (not on a plane)
Dragging SCNNode in ARKit Using SceneKit
I tried the code below, but it is not helping:
let tapPoint: CGPoint = gesture.location(in: sceneView)
// Hit test against SceneKit content to find the touched node
let result = sceneView.hitTest(tapPoint, options: nil)
if result.isEmpty {
    return
}
let scnHitResult: SCNHitTestResult? = result.first
movedObject = scnHitResult?.node //.parent?.parent
// Hit test against detected planes to reposition the node
let hitResults = self.sceneView.hitTest(tapPoint, types: .existingPlane)
if !hitResults.isEmpty {
    guard let hitResult = hitResults.last else { return }
    movedObject?.position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                           hitResult.worldTransform.columns.3.y,
                                           hitResult.worldTransform.columns.3.z)
}
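Update: one approach I am experimenting with (a minimal sketch, assuming a UIPanGestureRecognizer attached to the ARSCNView and the movedObject property above): keep the node at its current projected depth and unproject the finger location at that same depth, so the node follows the finger on a camera-facing plane, independent of any axis.
@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    let location = gesture.location(in: sceneView)
    switch gesture.state {
    case .began:
        // Pick up whichever child node the finger touched
        movedObject = sceneView.hitTest(location, options: nil).first?.node
    case .changed:
        guard let node = movedObject else { return }
        // Project the node to get its current screen-space depth (z)
        let projected = sceneView.projectPoint(node.worldPosition)
        // Unproject the finger location at that same depth
        node.worldPosition = sceneView.unprojectPoint(
            SCNVector3(Float(location.x), Float(location.y), projected.z))
    default:
        movedObject = nil
    }
}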

Related

How to limit SCNNode from moving when camera moves along the x-axis?

I'm working on a simple app using ARKit where a user can tap the screen and place a node (SCNNode) at a given location. I want the user to be able to place nodes that stay in place no matter where the camera is, so that when they pan back to the location where they placed the node, it's still there.
I've gotten the tap functionality to work, but I've noticed that when I physically move my device along the x-axis, the placed node moves along with it. I've tried to anchor the nodes to something other than the root node, but it hasn't worked as expected. I tried to look up documentation on how the root node is placed and whether it's calculated based on the camera, which would explain why the nodes move along with it, but no luck there either.
Here's the code for placing the nodes. The node position is set using scenePoint, which is a projection from the touch location into the scene, done following SceneKit: unprojectPoint returns same/similar point no matter where you touch screen.
let nodeImg = SCNNode(geometry: SCNSphere(radius: 0.05))
// Note: 'nodeImg.physicsBody? = .static()' is a no-op while physicsBody is nil;
// assign without the optional chaining
nodeImg.physicsBody = .static()
nodeImg.geometry?.materials.first?.diffuse.contents = hexColor
nodeImg.geometry?.materials.first?.specular.contents = UIColor.white
nodeImg.position = SCNVector3(scenePoint.x, scenePoint.y, scenePoint.z)
print(nodeImg.position)
sceneView.scene.rootNode.addChildNode(nodeImg)
I think this has something to do with the fact that I'm adding the nodeImg node as a child to the rootNode, but I'm not sure what else to anchor it to.
On tap you need to set the node's worldPosition, not just its position.
Check this link: ARKit: position vs worldposition vs simdposition
Assuming you have set sceneView.allowsCameraControl = false
@objc func handleTapGesture(withGestureRecognizer recognizer: UITapGestureRecognizer) {
    let location: CGPoint = recognizer.location(in: self.sceneView)
    let hits = self.sceneView.hitTest(location, options: nil)
    if let tappedNode = hits.first?.node {
        // worldPosition keeps the node fixed in world space
        nodeImg.worldPosition = tappedNode.worldPosition
        self.sceneView.scene.rootNode.addChildNode(nodeImg)
    }
}
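As an aside (a hedged sketch, not part of the original answer): if the goal is for nodes to stay pinned to real-world surfaces, you could instead take ARKit's hit test against detected planes or feature points and place the node at the resulting world transform:
@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: sceneView)
    // Hit test against ARKit's detected planes / feature points
    let results = sceneView.hitTest(location, types: [.existingPlaneUsingExtent, .featurePoint])
    guard let result = results.first else { return }
    let t = result.worldTransform.columns.3
    let nodeImg = SCNNode(geometry: SCNSphere(radius: 0.05))
    nodeImg.worldPosition = SCNVector3(t.x, t.y, t.z)
    sceneView.scene.rootNode.addChildNode(nodeImg)
}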

Align 3D object parallel to vertical plane detected by estimatedVerticalPlane

I have this book, but I'm currently remixing the furniture app from the video tutorial that was free during AR/VR week.
I would like to have a 3D wall canvas aligned with the wall/vertical plane detected.
This is proving to be harder than I thought. Positioning isn't an issue: much like the furniture placement app, you can just take columns.3 of the hitTest.worldTransform and give the new geometry this vector3 as its position.
But I do not know what to do to rotate my 3D object to face forward on the detected plane. As I have a canvas object, the photo is on one side of the canvas, and on placement the photo is ALWAYS facing away.
I thought about applying an arbitrary rotation to the canvas to face forward, but that was only correct if I was looking north and placed the canvas on a wall to my right.
I've tried quite a few solutions online; all but one use .existingPlaneUsingExtent for vertical plane detection. This allows you to get the ARPlaneAnchor from
hitTest.anchor as? ARPlaneAnchor.
If you try this when using .estimatedVerticalPlane, the anchor is nil.
I also didn't continue down this route as my horizontal 3D objects started getting placed in the air. This may be down to control-flow logic, but I am ignoring it until the vertical canvas placement is working.
My current train of thought is to get the front vector of the canvas and rotate it towards the front-facing vector of the detected vertical plane (the grid UIImage, or the hit test point).
How would I get a forward vector from a 3D point? Or how do I get the front vector from the grid image, the UIImage that is placed as an overlay when ARKit detects a vertical wall?
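(Side note: SCNNode exposes a world-space forward vector directly on iOS 11+, which may be one building block here; the node names below are placeholders.)
// worldFront is the node's -Z axis expressed in world coordinates
let canvasForward = canvasNode.worldFront           // SCNVector3
let planeForward = detectedPlaneNode.simdWorldFront // simd_float3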
Here is an example: the canvas is showing its back and is not parallel with the detected vertical plane (the column). But there is a "Place Poster Here" grid, which is what I want the canvas to align with so that I'm able to see the photo.
Things I have tried.
using .estimatedVerticalPlane
ARKit estimatedVerticalPlane hit test get plane rotation
I don't know how to correctly apply the matrix and Euler angle results from the SO answer.
My addPicture function:
func addPicture(hitTestResult: ARHitTestResult) {
    // I would like to convert the estimated hitTest to an anchor point;
    // it is easier to rotate a node to an anchor point than to calculate eulerAngles.
    // We have all detected anchors in the _Renderer SCNNode. however there are
    // Get the current furniture item, correct its position if necessary,
    // and add it to the scene.
    let picture = pictureSettings.currentPicturePiece()
    // Look for the vertical node geometry in verticalAnchors
    if let hitPlaneAnchor = hitTestResult.anchor as? ARPlaneAnchor {
        if let anchoredNode = verticalAnchors[hitPlaneAnchor] {
            // Code removed, as a .estimatedVerticalPlane hitTestResult doesn't get here
        }
    } else {
        // Transform hit result to world coordinates
        let worldTransform = hitTestResult.worldTransform
        // eulerAngles here relies on a float4x4 extension from the linked SO answer
        let anchoredNodeOrientation = worldTransform.eulerAngles
        picture.rotation.y = -.pi * anchoredNodeOrientation.y
        // Set the transform matrix
        let positionMatrix = worldTransform.columns.3
        let position = SCNVector3(positionMatrix.x,
                                  positionMatrix.y,
                                  positionMatrix.z)
        // '+' on SCNVector3 relies on a custom operator extension
        picture.position = position + pictureSettings.currentPictureOffset()
    }
    // Parented to the rootNode of the scene
    sceneView.scene.rootNode.addChildNode(picture)
}
Thanks for any help available.
Edited:
I have noticed the 'handedness' of the 3D model isn't correct / is opposite?
Positive Z is pointing to the left and positive X is facing the camera, for what I would expect to be the front of the model. Is this an issue?
You should try to avoid adding nodes directly into the scene using world coordinates. Rather, you should notify the ARSession of an area of interest by adding an ARAnchor, then use the session callback to vend an SCNNode for the added anchor.
For example your hit test might look something like:
@objc func tapped(_ sender: UITapGestureRecognizer) {
    let location = sender.location(in: sender.view)
    // Look for existing plane geometry or an estimated vertical plane
    guard let hitTestResult = sceneView.hitTest(location, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane]).first,
          let planeAnchor = hitTestResult.anchor as? ARPlaneAnchor,
          planeAnchor.alignment == .vertical else { return }
    // Register the hit point as an area of interest for the session
    let anchor = ARAnchor(transform: hitTestResult.worldTransform)
    sceneView.session.add(anchor: anchor)
}
Here a tap gesture recognizer is used to detect taps within an ARSCNView. When a tap is detected, a hit test is performed looking for existing and estimated planes. If the plane is vertical, an ARAnchor is created with the worldTransform of the hit test result and added to the ARSession. This registers that point as an area of interest for the ARSession, so we'll receive better tracking and less drift after our content is added there.
Next, we need to vend our SCNNode for the newly added ARAnchor. For example:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    if anchor is ARPlaneAnchor {
        // Empty node for detected planes (debugging placeholder)
        let anchorNode = SCNNode()
        anchorNode.name = "anchor"
        return anchorNode
    } else {
        // Anchor added from the hit test: create the poster geometry
        let plane = SCNPlane(width: 0.67, height: 1.0)
        plane.firstMaterial?.diffuse.contents = UIImage(named: "monaLisa")
        let planeNode = SCNNode(geometry: plane)
        // Content lies flat by default, so rotate it upright about x
        planeNode.eulerAngles = SCNVector3(CGFloat.pi * -0.5, 0.0, 0.0)
        let node = SCNNode()
        node.addChildNode(planeNode)
        return node
    }
}
Here we first check whether the anchor is an ARPlaneAnchor. If it is, we vend an empty node for debugging purposes. If it is not, then it is an anchor that was added as the result of a hit test, so we create a geometry and node for the most recent tap. Because the plane is vertical and our content lies flat by default, we need to rotate it about the x-axis, so we adjust its eulerAngles to make it upright. If we were to return planeNode directly, ARKit would manage its transform and the eulerAngles adjustment would be overridden, so we add it as a child of an empty node and return that instead.
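Note (an assumption, since the original answer doesn't show it): renderer(_:nodeFor:) is only called if the view controller is set as the ARSCNView's delegate and conforms to ARSCNViewDelegate, e.g.:
override func viewDidLoad() {
    super.viewDidLoad()
    // Required for renderer(_:nodeFor:) to be called
    sceneView.delegate = self
}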
This should result in the poster sitting upright and parallel to the detected vertical plane (the original answer illustrated the result with a screenshot).

ARKit - Object stuck to camera after tap on screen

I started out with the template project you get when you choose an ARKit project. As you run the app, you can see the ship and view it from any angle.
However, once I allow camera control and tap on the screen or zoom into the ship through panning, the ship gets stuck to the camera. Now wherever I go with the camera, the ship is stuck to the screen.
I went through the Apple guide, and it seems like they don't really consider this unexpected behavior, as there is nothing about it.
How to keep the position of the ship fixed after I zoom it or touch the screen?
Well, it looks like allowsCameraControl is not the answer at all. It's good for SceneKit but not for ARKit (maybe it's good for something in AR, but I'm not aware of it yet).
In order to zoom into the view, a UIPinchGestureRecognizer is required.
// 1. Find the touch location
// 2. Perform a hit test
// 3. From the results take the first result
// 4. Take the node from that first result and change the scale
@objc private func handlePinch(recognizer: UIPinchGestureRecognizer) {
    if recognizer.state == .changed {
        // 1.
        let location = recognizer.location(in: sceneView)
        // 2.
        let hitTestResults = sceneView.hitTest(location, options: nil)
        // 3.
        if let hitTest = hitTestResults.first {
            let shipNode = hitTest.node
            let newScaleX = Float(recognizer.scale) * shipNode.scale.x
            let newScaleY = Float(recognizer.scale) * shipNode.scale.y
            let newScaleZ = Float(recognizer.scale) * shipNode.scale.z
            // 4.
            shipNode.scale = SCNVector3(newScaleX, newScaleY, newScaleZ)
            // Reset so scaling is incremental rather than compounding
            recognizer.scale = 1
        }
    }
}
Regarding #2, I got a little confused with another hitTest method, hitTest(_:types:).
Note from the documentation:
This method searches for AR anchors and real-world objects detected by the AR session, not SceneKit content displayed in the view. To search for SceneKit objects, use the view's hitTest(_:options:) method instead.
So that method cannot be used if you want to scale a node that is SceneKit content.
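To make the distinction concrete (a small illustrative sketch, not from the original answer; location is assumed to be a touch point):
// SceneKit hit test: returns [SCNHitTestResult]; use .node to reach scene content
let scnResults = sceneView.hitTest(location, options: nil)
let tappedNode = scnResults.first?.node

// ARKit hit test: returns [ARHitTestResult]; use .worldTransform for real-world placement
let arResults = sceneView.hitTest(location, types: [.existingPlaneUsingExtent, .featurePoint])
let worldTransform = arResults.first?.worldTransform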

ARKit - Camera Position and 3D model Positions

I am trying to put several models in the scene.
for candidate in selectedCandidate {
    sceneView.scene.rootNode.addChildNode(selectedObjects[candidate])
}
The candidate and selectedCandidate stand for the indices of the models I want to use. Each model contains a root node with other nodes attached to it. I use the SCNNode APIs worldPosition and position to get and modify a 3D model's position.
The thing I want to do is put those models right in front of the user's eyes, which means I need the camera's position and orientation vector to place the models where I want them. I also use this code to get the camera's position, following this solution https://stackoverflow.com/a/47241952/7772038:
guard let pointOfView = sceneView.pointOfView else { return }
let transform = pointOfView.transform
let orientation = SCNVector3(-transform.m31, -transform.m32, transform.m33)
let location = SCNVector3(transform.m41, transform.m42, transform.m43)
The PROBLEM is that the camera's position and the model's positions I print out differ by orders of magnitude. The camera's position is on the order of 10^-2, like {0.038..., 0.047..., 0.024...}, BUT the model's position is on the order of 10^2, like {197.28, 100.29, -79.25}. When I run the program I appear to be in the middle of those models and they look very near, yet the printed positions are wildly different. So can you tell me how to set a model's position to whatever I want? I really need to put the model right in front of the user's eyes. If I simply call addChildNode(), the models end up behind me or somewhere else, while I need them directly in front of the user's eyes. Thank you in advance!
If you want to place an SCNNode in front of the camera you can do so like this:
/// Adds an SCNNode 3m away from the current frame of the camera
func addNodeInFrontOfCamera() {
    guard let currentTransform = augmentedRealitySession.currentFrame?.camera.transform else { return }
    let nodeToAdd = SCNNode()
    let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
    boxGeometry.firstMaterial?.diffuse.contents = UIColor.red
    nodeToAdd.geometry = boxGeometry
    var translation = matrix_identity_float4x4
    // Change the X value
    translation.columns.3.x = 0
    // Change the Y value
    translation.columns.3.y = 0
    // Change the Z value (3m in front of the camera)
    translation.columns.3.z = -3
    // Offset the camera transform by the translation
    nodeToAdd.simdTransform = matrix_multiply(currentTransform, translation)
    augmentedRealityView?.scene.rootNode.addChildNode(nodeToAdd)
}
And you can change any of the X,Y,Z values as you need.
Hope it points you in the right direction...
Update:
If you have multiple nodes, e.g. in a scene, then in order to use this function it's probably best to create a 'holder' node and add all your content as its children.
That way you can simply call this function on the holder node.
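A minimal sketch of that holder pattern, reusing the question's selectedCandidate / selectedObjects names (the exact wiring is an assumption):
// Parent every selected model to a single holder node...
let holderNode = SCNNode()
for candidate in selectedCandidate {
    holderNode.addChildNode(selectedObjects[candidate])
}
sceneView.scene.rootNode.addChildNode(holderNode)

// ...then position only the holder in front of the camera
if let currentTransform = sceneView.session.currentFrame?.camera.transform {
    var translation = matrix_identity_float4x4
    translation.columns.3.z = -3 // 3m in front of the camera
    holderNode.simdTransform = matrix_multiply(currentTransform, translation)
}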

ARKit: How to place one imported 3D model above another?

I am working on an AR project using ARKit.
If I touch the imported 3D object at a point, I want to place another 3D object above it.
(For example, I have placed a table, above which I have to place something else, like a flower vase, on the touched point.)
How can I ensure that the second object is only placed when I touch the first 3D object?
The surface of the object is not flat, so I cannot use a hit test with a bounding box.
One approach is to give the first imported 3D object a node name:
firstNode.name = "firstObject"
Inside your tap gesture function you can do a hit test like this:
let tappedNodes = self.sceneView.hitTest(location, options: [:])
guard let node = tappedNodes.first?.node else { return }
if node.name == "firstObject" {
    // Stack the second node on top using the first node's bounding-box height
    let height = firstNode.boundingBox.max.y - firstNode.boundingBox.min.y
    let positionSecondNode = SCNVector3Make(firstNode.worldPosition.x,
                                            firstNode.worldPosition.y + height,
                                            firstNode.worldPosition.z)
    secondNode.position = positionSecondNode
    sceneView.scene.rootNode.addChildNode(secondNode)
} else {
    return
}
This way, when you tap anywhere else, the second object won't be placed; it is only placed when you tap on the node itself. It doesn't matter where on the node you tap, because we only want the height, and we can determine that from the boundingBox (max.y - min.y), which we then add to firstNode.worldPosition.y.
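One caveat (my note, not from the original answer): boundingBox is in the node's local coordinate space, so if the model is scaled, the world-space height should account for the node's scale:
// boundingBox is local; multiply by the node's y-scale for world-space height
let (minBound, maxBound) = firstNode.boundingBox
let worldHeight = (maxBound.y - minBound.y) * firstNode.scale.y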
Make sure you declare this at the top of your ARSCNView class:
var firstNode: SCNNode!
This way we can access firstNode in the tap gesture function.
Edit: If the first 3D model has many nodes, you can flatten the parent node in the scene graph (the original answer illustrated this with a screenshot). This collapses all the child nodes, so you can then work with just the parent node.
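A small sketch of that flattening step, assuming SceneKit's flattenedClone() is the API meant (parentNode is a placeholder for the imported model's parent node):
// Collapse the imported hierarchy into a single node and tag it for hit testing
let flattened = parentNode.flattenedClone()
flattened.name = "firstObject"
sceneView.scene.rootNode.addChildNode(flattened)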
