Is there a way to focus on a selected SCNNode? - iOS

I have the following situation: an SCNView displaying a .scn file of a bridge, with the visual representation of the bridge on the left part of the screen.
I'd like to do the following:
Select an item on the left in the "model browser"
Zoom to the selected item and focus on it
In the screenshot provided, you can see I've selected "Kist", which is highlighted yellow in the scene view. Now I'd like to zoom to and focus on the selected SCNNode.
I've already tried using rayTestWithSegment(from:to:options:) to find the selected position to zoom to, but the result of rayTestWithSegment is always [].
I've also tried an SCNLookAtConstraint, but this doesn't do the trick.
var constraint: SCNLookAtConstraint!

func handle(gesture: UIGestureRecognizer) {
    guard let result = hitTestResult(for: gesture), let world = sceneView?.scene?.physicsWorld else { return }
    let nodePosition = result.node.position
    let results = world.rayTestWithSegment(from: SCNVector3(0, 1, 0), to: nodePosition, options: nil)
    print(results)
    constraint = SCNLookAtConstraint(target: result.node)
    sceneView?.pointOfView?.constraints = [constraint]
}

func hitTestResult(for gesture: UIGestureRecognizer) -> SCNHitTestResult? {
    let location = gesture.location(in: sceneView)
    guard let hit = sceneView?.hitTest(location, options: nil), hit.count > 0, let result = hit.first else {
        return nil
    }
    return result
}
I'm using rayTestWithSegment(from:to:options:) because the selected SCNNode may not be fully visible, for example when it is behind another node from the current perspective.
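Note that rayTestWithSegment(from:to:options:) only tests against nodes that have a physicsBody attached, so it always returns an empty array for a scene whose nodes carry no physics bodies; it also expects world-space coordinates, whereas node.position is expressed in the parent's coordinate space (node.worldPosition may be what you need). A minimal sketch of attaching static bodies so the ray test can hit the model, where bridgeNode is an assumed name for the loaded model's root node:
// Sketch: give every geometry-bearing descendant a static physics body
// so rayTestWithSegment(from:to:options:) can intersect it.
bridgeNode.enumerateChildNodes { node, _ in
    if let geometry = node.geometry {
        node.physicsBody = SCNPhysicsBody(type: .static,
                                          shape: SCNPhysicsShape(geometry: geometry, options: nil))
    }
}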
Thanks in advance; I hope someone can help or explain what I'm doing wrong.

You can have a look at the -[SCNCameraController frameNodes:] API, which should do just that.
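A minimal sketch of calling it from Swift, assuming allowsCameraControl is enabled so the view's defaultCameraController drives the point of view; selectedNode is an assumed name for the node picked in the model browser:
// Sketch: ask the built-in camera controller to frame the selected node.
// frameNodes(_:) is available from iOS 11; wrapping the call in an
// SCNTransaction is an assumption about how to animate the move.
func focus(on selectedNode: SCNNode) {
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.5
    sceneView?.defaultCameraController.frameNodes([selectedNode])
    SCNTransaction.commit()
}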

Related

iOS ARKit: Large size object always appears to move with the change in the position of the device camera

I am creating an iOS ARKit app where I want to place a large object in augmented reality.
When I try to place the object at a particular position, it always appears to move as the device camera moves, and I am not able to view the object from all angles by changing the camera position.
But if I reduce its scale value to 0.001 (reducing the size of the object), I am able to view the object from all angles, and the position of the placed object also does not change to the same extent.
Bounding box of the object:
Width = 3.66
Height = 1.83
Depth = 2.438
Model/object URL:
https://drive.google.com/open?id=1uDDlrTIu5iSRJ0cgp70WFo7Dz0hCUz9D
Source code:
import UIKit
import ARKit
import SceneKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet weak var sceneView: ARSCNView!

    private let configuration = ARWorldTrackingConfiguration()
    private var node: SCNNode!

    //MARK: - Life cycle
    override func viewDidLoad() {
        super.viewDidLoad()
        self.sceneView.showsStatistics = false
        self.sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]
        self.sceneView.automaticallyUpdatesLighting = false
        self.sceneView.delegate = self
        self.addTapGesture()
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        configuration.planeDetection = .horizontal
        self.sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        self.sceneView.session.pause()
    }

    //MARK: - Methods
    func addObject(hitTestResult: ARHitTestResult) {
        let scene = SCNScene(named: "art.scnassets/Cube.obj")!
        let modelNode = scene.rootNode.childNodes.first
        modelNode?.position = SCNVector3(hitTestResult.worldTransform.columns.3.x,
                                         hitTestResult.worldTransform.columns.3.y,
                                         hitTestResult.worldTransform.columns.3.z)
        let scale: Float = 1
        modelNode?.scale = SCNVector3(scale, scale, scale)
        self.node = modelNode
        self.sceneView.scene.rootNode.addChildNode(modelNode!)

        let lightNode = SCNNode()
        lightNode.light = SCNLight()
        lightNode.light?.type = .omni
        lightNode.position = SCNVector3(x: 0, y: 10, z: 20)
        self.sceneView.scene.rootNode.addChildNode(lightNode)

        let ambientLightNode = SCNNode()
        ambientLightNode.light = SCNLight()
        ambientLightNode.light?.type = .ambient
        ambientLightNode.light?.color = UIColor.darkGray
        self.sceneView.scene.rootNode.addChildNode(ambientLightNode)
    }

    private func addTapGesture() {
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(didTap(_:)))
        self.sceneView.addGestureRecognizer(tapGesture)
    }

    @objc func didTap(_ gesture: UITapGestureRecognizer) {
        let tapLocation = gesture.location(in: self.sceneView)
        let results = self.sceneView.hitTest(tapLocation, types: .featurePoint)
        guard let result = results.first else {
            return
        }
        let translation = result.worldTransform.translation
        guard let node = self.node else {
            self.addObject(hitTestResult: result)
            return
        }
        node.position = SCNVector3Make(translation.x, translation.y, translation.z)
        self.sceneView.scene.rootNode.addChildNode(self.node)
    }
}

extension float4x4 {
    var translation: SIMD3<Float> {
        let translation = self.columns.3
        return SIMD3<Float>(translation.x, translation.y, translation.z)
    }
}
Video URL of the problem:
https://drive.google.com/open?id=1E4euZ0ArEtj2Ffto1pAOfVZocV08EYKN
Approaches tried:
Tried to place the object at the origin:
modelNode?.position = SCNVector3(0, 0, 0)
Tried to place the object at some distance away from the device camera:
modelNode?.position = SCNVector3(0, 0, -800)
Tried different combinations of worldTransform/localTransform columns:
modelNode?.position = SCNVector3(hitTestResult.worldTransform.columns.3.x, hitTestResult.worldTransform.columns.3.y, hitTestResult.worldTransform.columns.3.z)
modelNode?.position = SCNVector3(hitTestResult.worldTransform.columns.2.x, hitTestResult.worldTransform.columns.2.y, hitTestResult.worldTransform.columns.2.z)
modelNode?.position = SCNVector3(hitTestResult.worldTransform.columns.1.x, hitTestResult.worldTransform.columns.1.y, hitTestResult.worldTransform.columns.1.z)
modelNode?.position = SCNVector3(hitTestResult.worldTransform.columns.1.x, hitTestResult.worldTransform.columns.2.y, hitTestResult.worldTransform.columns.3.z)
modelNode?.position = SCNVector3(hitTestResult.localTransform.columns.3.x, hitTestResult.localTransform.columns.3.y, hitTestResult.localTransform.columns.3.z)
But still no luck. The object still appears to move with the device camera instead of staying fixed at the position where it was placed.
Expected result:
The object should be its actual size (scale of 1.0); there should be no reduction in the scale value.
Once placed at a particular position, it should not move as the device camera moves.
The object should be viewable from all angles as the device camera moves, without any change in the object's position.
Unlike stated in the accepted answer, the issue is probably not about tracking quality or a bug in the model. It looks like the model is not correctly placed on top of the ground, probably due to a mispositioned pivot point, so part of the model stays below the ground. When you move the camera, the part below the ground is not occluded by the floor, so it looks like the model is shifting.
The pivot points of the models provided by Apple are positioned correctly, so that when a model is placed on top of a plane on the ground, all of its parts stay above the ground.
If you correctly position the pivot point of the model, it should work correctly, independent of the model type.
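As a rough sketch of the fix, you can shift the pivot to the bottom-center of the model's bounding box so the node's origin sits on the ground plane; modelNode here is an assumed name for the node containing the loaded model:
// Sketch: move the pivot to the bottom-center of the bounding box so
// the model rests on, rather than intersects, the detected plane.
let (minBounds, maxBounds) = modelNode.boundingBox
let dx = minBounds.x + 0.5 * (maxBounds.x - minBounds.x)
let dz = minBounds.z + 0.5 * (maxBounds.z - minBounds.z)
modelNode.pivot = SCNMatrix4MakeTranslation(dx, minBounds.y, dz)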
I found the root cause of the issue. It was related to the model I was using for AR. When I replaced the model with one provided at this link: https://developer.apple.com/augmented-reality/quick-look/, I did not face any issues. So if anyone faces this type of issue in the future, I would recommend using one of the models provided by Apple to check whether the issue persists with it.
I experienced the same issue.
Our model is of .usdz type (which is actually an encrypted and compressed format), so we cannot edit or change anything in it. If I edit or change even a small position, it behaves the same way as highlighted in the question.
To handle this issue, I moved the old .usdz model to the trash, copied the original file to Xcode again, and then it worked.

Getting location of tap on SCNSphere - Swift (SceneKit) / iOS

I want to create a sample application that allows the user to get information about continents on a globe when they tap on them. In order to do this, I need to figure out the location where a user taps on an SCNSphere object in a scene (SceneKit). I attempted to do it like this:
import UIKit
import SceneKit

class ViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()
        let scene = SCNScene()
        /* Lighting and camera added (hidden) */

        let earthNode = SCNNode(geometry: SCNSphere(radius: 1))
        /* Added styling to the Earth (hidden) */
        earthNode.name = "Earth"
        scene.rootNode.addChildNode(earthNode)

        let sceneView = self.view as! SCNView
        sceneView.scene = scene
        sceneView.allowsCameraControl = true

        // add a tap gesture recognizer
        let tapGesture = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        sceneView.addGestureRecognizer(tapGesture)
    }

    @objc func handleTap(_ gestureRecognize: UIGestureRecognizer) {
        // retrieve the SCNView
        let sceneView = self.view as! SCNView
        // check what nodes are tapped
        let p = gestureRecognize.location(in: sceneView)
        let hitResults = sceneView.hitTest(p, options: [:])
        // check that we clicked on at least one object
        if hitResults.count > 0 {
            // retrieved the first clicked object
            let result: SCNHitTestResult = hitResults[0]
            print(result.node.name!)
            print("x: \(p.x) y: \(p.y)") // <--- THIS IS WHERE I PRINT THE COORDINATES
        }
    }
}
When I actually run this code and tap an area on my sphere, however, it prints the coordinates of the tap on the screen instead of where I tapped on the sphere. For example, the coordinates are the same when I tap on the center of the sphere and when I tap the center again after rotating the sphere.
I want to know where on the actual sphere I pressed, not just where I tapped on the screen. What is the best way to go about this problem?
In the hit result you can read result.textureCoordinates(withMappingChannel:), which gives you the tapped point in your map texture. From that point you can derive the location on the map, since the map projection defines how geographic coordinates are mapped to the texture.
@objc func handleTap(_ gestureRecognize: UIGestureRecognizer) {
    // retrieve the SCNView
    let sceneView = self.view as! SCNView
    // check what nodes are tapped
    let p = gestureRecognize.location(in: sceneView)
    let hitResults = sceneView.hitTest(p, options: [:])
    // check that we clicked on at least one object
    if hitResults.count > 0 {
        // retrieved the first clicked object
        let result: SCNHitTestResult = hitResults[0]
        print(result.node.name!)
        print(result.textureCoordinates(withMappingChannel: 0)) // This line is added here.
        print("x: \(p.x) y: \(p.y)") // <--- THIS IS WHERE I PRINT THE COORDINATES
    }
}
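As a follow-up sketch: if the sphere uses a standard equirectangular map texture, the texture coordinates can be converted to latitude and longitude roughly like this (the exact offsets depend on how your texture is oriented, so treat the constants as assumptions):
// Sketch: map equirectangular texture coordinates to geographic ones,
// assuming u spans longitude -180°...180° and v spans latitude 90°...-90°.
func geoCoordinates(from uv: CGPoint) -> (latitude: Double, longitude: Double) {
    let longitude = Double(uv.x) * 360.0 - 180.0
    let latitude = 90.0 - Double(uv.y) * 180.0
    return (latitude, longitude)
}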

How to tap and move scene nodes in ARKit

I'm currently trying to build an AR chess app and I'm having trouble getting the movement of the pieces working.
I would like to be able to tap on a chess piece, have the legal moves it can make on the chess board highlighted, and have it move to whichever square the user taps on.
Pic of the chess board design and nodes:
https://gyazo.com/2a88f9cda3f127301ed9b4a44f8be047
What I would like to implement:
https://imgur.com/a/IGhUDBW
Would greatly appreciate any suggestions on how to get this working.
Thanks!
ViewController Code:
import UIKit
import SceneKit
import ARKit

class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Set the view's delegate
        sceneView.delegate = self
        // Show statistics such as fps and timing information
        sceneView.showsStatistics = true
        // Add lighting to the scene
        sceneView.autoenablesDefaultLighting = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Create a session configuration to track an external image
        let configuration = ARImageTrackingConfiguration()
        // Image detection
        // Reference which group to find the image to detect in the Assets folder, e.g. "Detection Card"
        if let imageDetect = ARReferenceImage.referenceImages(inGroupNamed: "Detection Card", bundle: Bundle.main) {
            // Sets image tracking properties to the images in the referenced group
            configuration.trackingImages = imageDetect
            // Amount of images to be tracked
            configuration.maximumNumberOfTrackedImages = 1
        }
        // Run the view's session
        sceneView.session.run(configuration)
    }

    // Called when an anchor is detected; displays the 3D content on the detected image
    // ARAnchor - marks a point in world space that is relevant to your app, making virtual content appear "attached" to a real-world point of interest
    func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode {
        // Creates the container node for the 3D content
        let obj = SCNNode()
        // Check if the detected anchor is an ARImageAnchor - which contains position and orientation data about the image detected in the session
        if let imageAnchor = anchor as? ARImageAnchor {
            // Set the dimensions of the plane displayed over the image to match the reference image
            let plane = SCNPlane(width: imageAnchor.referenceImage.physicalSize.width, height: imageAnchor.referenceImage.physicalSize.height)
            // Display a mildly transparent layer onto the detected image
            // This is to confirm image detection works by displaying a faint layer on the image
            plane.firstMaterial?.diffuse.contents = UIColor(white: 1.0, alpha: 0.2)
            // Set the geometry of the plane node
            let planeNode = SCNNode(geometry: plane)
            // Rotate the vertical plane to lie flat over the image
            planeNode.eulerAngles.x = -Float.pi / 2
            obj.addChildNode(planeNode)
            // Initialise the chess scene
            if let chessBoardSCN = SCNScene(named: "art.scnassets/chess.scn") {
                // If there is a first child node in the scene file
                if let chessNodes = chessBoardSCN.rootNode.childNodes.first {
                    // Display the chessboard upright
                    chessNodes.eulerAngles.x = Float.pi / 2
                    // Add the chessboard to the overall 3D scene
                    obj.addChildNode(chessNodes)
                }
            }
        }
        return obj
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        // Pause the view's session
        sceneView.session.pause()
    }
}
You will need to add gestures to your view and use ARSCNView's hitTest method to detect what the gesture is touching in your scene. You can then update positions based on the movement from the gestures; a rough sketch follows the link below.
Here is a question that deals with roughly the same requirement of dragging nodes around.
Placing, Dragging and Removing SCNNodes in ARKit
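As a minimal sketch of that idea, here is a pan handler that drags a previously selected node along detected planes; sceneView and selectedPiece are assumed names for your ARSCNView and the tapped node:
// Sketch: drag the selected node across detected planes with a pan gesture.
@objc func handlePan(_ gesture: UIPanGestureRecognizer) {
    guard let piece = selectedPiece else { return }
    let location = gesture.location(in: sceneView)
    guard let result = sceneView.hitTest(location, types: .existingPlaneUsingExtent).first else { return }
    let t = result.worldTransform.columns.3
    piece.position = SCNVector3(t.x, t.y, t.z)
}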
First, you need to add a tap gesture recognizer in your viewDidLoad, like this:
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
myScnView.addGestureRecognizer(tapGesture)
Then implement the handler function:
@objc func handleTap(_ gestureRecognize: UIGestureRecognizer) {
    // HERE YOU NEED TO DETECT THE TAP
    // check what nodes are tapped
    let location = gestureRecognize.location(in: myScnView)
    let hitResults = myScnView.hitTest(location, options: [:])
    // check that we clicked on at least one object
    if hitResults.count > 0 {
        // retrieved the first clicked object
        let tappedPiece = hitResults[0].node
        // HERE YOU CAN SHOW POSSIBLE MOVES
        // e.g. showPossibleMoves(for: tappedPiece)
    }
}
Now, to show the possible moves, you need to identify all the quadrants and your node's position on the chessboard.
To do this, you can assign each quadrant a name: a number, a combination of letter and number, or a combination of numbers. (I suggest a combination of numbers, like row 1, column 1, as in a matrix.)
Let's go with that suggestion, so you can name the quadrants 1.1, 1.2, ..., 2.1, 2.2, and so on.
Now, to detect where your piece is, you can check for contacts with the SCNPhysicsContactDelegate; a minimal sketch follows.
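This sketch assumes your view controller is the scene's contactDelegate and that your pieces and quadrants carry physics bodies with categoryBitMask/contactTestBitMask values set up to report contacts:
// Sketch: find out which quadrant a piece rests on via physics contacts.
// Requires scene.physicsWorld.contactDelegate = self.
extension ViewController: SCNPhysicsContactDelegate {
    func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
        // Assumption: nodeA is the piece and nodeB the quadrant beneath it
        print("\(contact.nodeA.name ?? "piece") is on \(contact.nodeB.name ?? "quadrant")")
    }
}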
Now you have the tappedPiece and the quadrant it is on, so you have to define the movement rules for the pieces, for example:
let rules = ["tower": "cross"] // add the others
N.B. You can choose whatever scheme you want to define the rules.
Taking that suggestion, you should now create the function to highlight a quadrant:
func highlight(quadrant: SCNNode) {
    quadrant.geometry?.firstMaterial?.emission.contents = UIColor.yellow
}
Finally, showPossibleMoves(for: tappedPiece) could look something like this:
// Note: startQuadrant, MAX_ROW and MAX_COLUMN are assumed to be defined elsewhere.
func showPossibleMoves(for piece: SCNNode) {
    // You have to give the piece the name you used in your rules variable,
    // e.g. for a rule like ["tower": "cross"] set all tower names to "tower"
    guard let pieceType = piece.name,
          let rule = rules[pieceType],
          let quadrantName = startQuadrant.name,
          let first = quadrantName.first, let startRow = Int(String(first)),
          let last = quadrantName.last, let startColumn = Int(String(last)) else { return }
    switch rule {
    case "cross":
        // you have to highlight all nodes to the right, left, above and below
        // you can achieve this by taking the start point and increasing it,
        // assuming you named your quadrants like 1.1, 1.2, etc.
        // (stride avoids a crash when the piece is already on the last row/column)
        // Loop the highlight to the right
        for column in stride(from: startColumn + 1, through: MAX_COLUMN - 1, by: 1) {
            if let quadrant = myScnView.scene.rootNode.childNode(withName: "\(startRow).\(column)", recursively: true) {
                highlight(quadrant: quadrant)
            }
        }
        // Now loop for the quadrants above
        for row in stride(from: startRow + 1, through: MAX_ROW - 1, by: 1) {
            if let quadrant = myScnView.scene.rootNode.childNode(withName: "\(row).\(startColumn)", recursively: true) {
                highlight(quadrant: quadrant)
            }
        }
        // DO THE SAME FOR ALL DIRECTIONS
    default:
        break // ADD ALL CASES, like bishop movements ("diagonals") and so on
    }
}
NOTE: In the handleTap function you have to check what you're tapping. For example, to check whether you're tapping on a quadrant after selecting a piece (i.e. you want to move your piece), you can check a boolean value and the name of the selected node:
// assuming you have set the boolean value after selecting a piece
if pieceSelected && node.name != "tower" {
    // HERE YOU CAN MOVE YOUR PIECE
}
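A sketch of the actual move, assuming selectedPiece is the previously tapped piece, node is the tapped quadrant, and both share the same parent coordinate space:
// Sketch: animate the piece onto the tapped quadrant, keeping its height.
let target = SCNVector3(node.position.x, selectedPiece.position.y, node.position.z)
selectedPiece.runAction(SCNAction.move(to: target, duration: 0.3))
pieceSelected = false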

Need to implement virtual buttons in Xcode for my project / a way for users to interact with augmented objects

I am doing my research project in augmented reality and I want to allow users to touch my true/false buttons, as displayed in the picture below, in the camera's view and not on the touchscreen. Is there any way I can do this?
If you want to make virtual buttons you have several choices:
1. (Experimental) Render a custom UIView with two buttons on it.
2. Create two SCNNodes with an SCNPlane geometry, using an image for the true and false prompts.
3. Use SCNText, as displayed in your image above.
If you opt for the first option there is no need to perform an SCNHitTest, as you can use IBActions to determine which button was tapped.
For the other options you will need to make use of an SCNHitTest, which:
looks for SCNGeometry objects along the ray you specify. For each intersection between the ray and a geometry, SceneKit creates a hit-test result to provide information about both the SCNNode object containing the geometry and the location of the intersection on the geometry's surface.
I won't go into the details of the first option, as it is not a standard nor widely adopted practice (if at all).
Let's look first at using two SCNNodes as 'virtual buttons' with an SCNPlane geometry:
/// Creates A Menu With A True Or False Button Using SCNPlane Geometry
func createTrueOrFalseMenu() {

    //1. Create A Menu Holder
    let menu = SCNNode()

    //2. Create A True Button With An SCNPlane Geometry & Green Colour
    let trueButton = SCNNode()
    let trueButtonGeometry = SCNPlane(width: 0.2, height: 0.2)
    let greenMaterial = SCNMaterial()
    greenMaterial.diffuse.contents = UIColor.green
    greenMaterial.isDoubleSided = true
    trueButtonGeometry.firstMaterial = greenMaterial
    trueButton.geometry = trueButtonGeometry
    trueButton.name = "True"

    //3. Create A False Button With An SCNPlane Geometry & A Red Colour
    let falseButton = SCNNode()
    let falseButtonGeometry = SCNPlane(width: 0.2, height: 0.2)
    let redMaterial = SCNMaterial()
    redMaterial.diffuse.contents = UIColor.red
    redMaterial.isDoubleSided = true
    falseButtonGeometry.firstMaterial = redMaterial
    falseButton.geometry = falseButtonGeometry
    falseButton.name = "False"

    //4. Set The Buttons Positions
    trueButton.position = SCNVector3(-0.2, 0, 0)
    falseButton.position = SCNVector3(0.2, 0, 0)

    //5. Add The Buttons To The Menu Node & Set Its Position
    menu.addChildNode(trueButton)
    menu.addChildNode(falseButton)
    menu.position = SCNVector3(0, 0, -1.5)

    //6. Add The Menu To The View
    augmentedRealityView.scene.rootNode.addChildNode(menu)
}
Now let's look at using two SCNNodes as 'virtual buttons' with an SCNText geometry:
/// Creates A Menu With A True Or False Button Using SCNText Geometry
func createTrueOrFalseMenuWithText() {

    //1. Create A Menu Holder
    let menu = SCNNode()

    //2. Create A True Button With An SCNText Geometry & Green Colour
    let trueButton = SCNNode()
    let trueTextGeometry = SCNText(string: "True", extrusionDepth: 1)
    trueTextGeometry.font = UIFont(name: "Helvetica", size: 3)
    trueTextGeometry.flatness = 0
    trueTextGeometry.firstMaterial?.diffuse.contents = UIColor.green
    trueButton.geometry = trueTextGeometry
    trueButton.scale = SCNVector3(0.01, 0.01, 0.01)
    trueButton.name = "True"

    //3. Create A False Button With An SCNText Geometry & Red Colour
    let falseButton = SCNNode()
    let falseTextGeometry = SCNText(string: "False", extrusionDepth: 1)
    falseTextGeometry.font = UIFont(name: "Helvetica", size: 3)
    falseTextGeometry.flatness = 0
    falseTextGeometry.firstMaterial?.diffuse.contents = UIColor.red
    falseButton.geometry = falseTextGeometry
    falseButton.scale = SCNVector3(0.01, 0.01, 0.01)
    falseButton.name = "False"

    //4. Set The Buttons Positions
    trueButton.position = SCNVector3(-0.2, 0, 0)
    falseButton.position = SCNVector3(0.2, 0, 0)

    //5. Add The Buttons To The Menu Node & Set Its Position
    menu.addChildNode(trueButton)
    menu.addChildNode(falseButton)
    menu.position = SCNVector3(0, 0, -1.5)

    //6. Add The Menu To The View
    augmentedRealityView.scene.rootNode.addChildNode(menu)
}
Now that we have our different implementations set up, we need some logic to handle whether the true or false button was touched.
You will note that when creating the true and false buttons, I made use of their name property, which will help us determine which one was tapped, e.g.:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {

    //1. Get The Current Touch Location
    guard let touchLocation = touches.first?.location(in: self.augmentedRealityView),
        //2. Perform An SCNHitTest & Get The Node Touched
        let hitTestNode = self.augmentedRealityView.hitTest(touchLocation, options: nil).first?.node else { return }

    //3. Determine Whether The User Pressed True Or False & Handle Game Logic
    if hitTestNode.name == "True" {
        print("User Has A Correct Answer")
    } else if hitTestNode.name == "False" {
        print("User Has An Incorrect Answer")
    }
}
Update: To detect which virtual button has been selected outside of standard touches, you have two options: one determines whether the button is in view of the camera's frustum; the other performs an SCNHitTest from a specified CGPoint, e.g. the center of the screen.
Looking at the first option, we need to take into consideration that the ARCamera has a frustum in which our content is shown.
You could then determine whether the user has selected either virtual button by creating a function for this. However, this probably isn't what you are after, as it would mean you would have to ensure the SCNNode buttons were placed far enough apart that only one is in view at a time.
If you needed this option you would first need two SCNNodes, e.g.:
var trueButton: SCNNode!
var falseButton: SCNNode!
Then create a function like so:
/// Detects If An Object Is In View Of The Camera Frustum
func detectButtonInFrustumOfCamera() {

    //1. Get The Current Point Of View
    if let currentPointOfView = augmentedRealityView.pointOfView {

        if augmentedRealityView.isNode(trueButton, insideFrustumOf: currentPointOfView) {
            print("True Button Is In View & Has Been Selected As The Answer")
        }

        if augmentedRealityView.isNode(falseButton, insideFrustumOf: currentPointOfView) {
            print("False Button Is In View & Has Been Selected As The Answer")
        }
    }
}
Which you would then trigger in the following delegate callback, e.g.:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    detectButtonInFrustumOfCamera()
}
A more likely solution, however, is to perform a virtual ray cast using a CGPoint as a reference.
In this example let's first create our CGPoint var which will refer to the center of the screen:
var screenCenter: CGPoint!
We will then set this in viewDidLoad like so:
DispatchQueue.main.async {
    self.screenCenter = CGPoint(x: self.view.bounds.width / 2, y: self.view.bounds.height / 2)
}
We will then create a function which performs an SCNHitTest against the screen center to see whether it intersects either the true or false button, e.g.:
/// Detects If We Have Intersected A Virtual Button
func detectIntersectionOfButton() {

    guard let rayCastTarget = self.augmentedRealityView?.hitTest(screenCenter, options: nil).first else { return }

    if rayCastTarget.node.name == "True" {
        print("User Has Selected A True Answer")
    }

    if rayCastTarget.node.name == "False" {
        print("User Has Selected A False Answer")
    }
}
Which again would be called in the following delegate callback:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    detectIntersectionOfButton()
}
@Josh Robbins - first I want to thank you for your kindness and the time and effort you put into your answers.
I gave you credit for this interesting, informative answer, which can be adapted for many different uses and is also well explained (as always with your answers :-)

How can I add multiple objects in ARKit?

In Apple's AR example project there is an option for placing a chair in the room. What do I need to do to place multiple chairs in the code?
Would a simple append function do the trick?
When I tap on the chair option I need the first chair to be placed on the plane. If I tap the option again, another chair should be placed. And I know I will need a delete function for this too, so how can I detect a long tap by the user?
A basic tap function to add a ball each time you tap the display:
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let results = self.sceneView.hitTest(gesture.location(in: gesture.view), types: ARHitTestResult.ResultType.featurePoint)
    guard let result: ARHitTestResult = results.first else {
        return
    }
    // create a simple ball
    let sphereNode = SCNNode(geometry: SCNSphere(radius: 0.2))
    // create position of ball based on tap result
    let position = SCNVector3Make(result.worldTransform.columns.3.x, result.worldTransform.columns.3.y, result.worldTransform.columns.3.z)
    // set position of ball before adding to scene
    sphereNode.position = position
    // each tap adds a new instance of the ball.
    self.sceneView.scene.rootNode.addChildNode(sphereNode)
}
If you need full Swift code to get started, take a look at this earlier post, which adds a cube.scn from a remote URL.
You can detect a long press with:
@objc func longPress(_ gesture: UILongPressGestureRecognizer) {
}
But it's better to just detect that you've tapped an existing sphereNode you want to remove. You could add something like this to the tap function above:
let tappedNode = self.sceneView.hitTest(gesture.location(in: gesture.view), options: [:])
if !tappedNode.isEmpty {
    let node = tappedNode[0].node
    node.removeFromParentNode()
} else {
    // add my new node
}
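A minimal sketch of wiring the same removal logic into the long-press handler (assuming the recognizer has been added to the view elsewhere):
// Sketch: remove the long-pressed node, if any.
@objc func longPress(_ gesture: UILongPressGestureRecognizer) {
    guard gesture.state == .began else { return }
    let results = self.sceneView.hitTest(gesture.location(in: gesture.view), options: [:])
    if let node = results.first?.node {
        node.removeFromParentNode()
    }
}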
