How to convert points between UIViews and SKNodes - ios

I would like to convert a CGPoint between a UIView and an SKNode, similar to how you can convert between views/layers via their respective convert(_:to:) / convert(_:from:) methods.
What is the correct way to do this?

Here is what I came up with:
extension SKNode {
    /// Finds the given point (in this node's coordinate space) relative to its
    /// containing SKView, so that it can be further converted relative to other views.
    func pointInContainingView(_ point: CGPoint) -> (SKView, CGPoint)? {
        guard let scene = scene, let view = scene.view else {
            return nil
        }
        // Get the point in the containing scene's coordinate space.
        let pointInScene = convert(point, to: scene)
        // Invert Y to match the view's top-left-origin coordinate system.
        let pointInView = CGPoint(x: pointInScene.x, y: scene.size.height - pointInScene.y)
        return (view, pointInView)
    }
}
Used like this:
// Convert a node's frame origin (expressed in its parent's coordinates)
// into another UIView's coordinate space; otherView is any view in the same window.
let origin = node.frame.origin
if let (nodeView, point) = node.parent?.pointInContainingView(origin) {
    let converted = nodeView.convert(point, to: otherView)
}

All you need to do is scene.convertPoint(toView: pointOnScene).
If you need to convert the position of a node that is not a direct child of the scene but a deeper descendant, first bring it into scene space. Note that a node's position is expressed in its parent's coordinate space:
let scene = node.scene!
let absolutePosition = node.parent!.convert(node.position, to: scene)
let positionInView = scene.convertPoint(toView: absolutePosition)
A nice little extension to use:
extension SKNode {
    /// Finds the given point (in this node's coordinate space) relative to its
    /// containing SKView, so that it can be further converted relative to other views.
    func pointInContainingView(_ point: CGPoint) -> (SKView, CGPoint)? {
        guard let scene = scene, let view = scene.view else {
            return nil
        }
        let pointInView = scene.convertPoint(toView: convert(point, to: scene))
        return (view, pointInView)
    }
}
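For example, to pin a UIKit view to a node's position (a sketch; label is assumed to be a UILabel somewhere in the same window, it is not part of the original answer):
// Overlay a UILabel on the node's origin in its own coordinate space.
if let (skView, pointInSKView) = node.pointInContainingView(.zero) {
    label.center = skView.convert(pointInSKView, to: label.superview)
}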

Related

How to scale and move all nodes at once? ARKit Swift

I have these functions that scale and move the 3D object displayed on the screen when tapping on it. The problem is that if I move or scale the node (3D object), only one part of the node gets bigger or smaller, and I want to scale everything.
I know one solution is to join the 3D object in Blender as one, but later on I want to add specific animations to specific nodes, and that's why I don't want to join them.
Here is the code I'm using for scaling and moving the objects:
@objc func panned(recognizer: UIPanGestureRecognizer)
{
    // This function rotates the hit node while panning.
    guard let sceneView = recognizer.view as? ARSCNView
    else
    {
        return
    }
    if recognizer.state == .changed
    {
        let touch = recognizer.location(in: sceneView)
        let translation = recognizer.translation(in: sceneView)
        let hitTestResults = sceneView.hitTest(touch, options: nil)
        if let hitTest = hitTestResults.first
        {
            let planeNode = hitTest.node
            self.newAngleY = Float(translation.x) * Float.pi / 180
            self.newAngleY += self.currentAngleY
            planeNode.eulerAngles.y = self.newAngleY
        }
    }
    else if recognizer.state == .ended
    {
        self.currentAngleY = self.newAngleY
    }
}
And this one:
@objc func pinched(recognizer: UIPinchGestureRecognizer)
{
    guard let sceneView = recognizer.view as? ARSCNView
    else
    {
        return
    }
    if recognizer.state == .changed
    {
        let touch = recognizer.location(in: sceneView)
        let hitTestResults = sceneView.hitTest(touch, options: nil)
        if let hitTest = hitTestResults.first
        {
            let planeNode = hitTest.node
            let pinchScaleX = Float(recognizer.scale) * planeNode.scale.x
            let pinchScaleY = Float(recognizer.scale) * planeNode.scale.y
            let pinchScaleZ = Float(recognizer.scale) * planeNode.scale.z
            planeNode.scale = SCNVector3(pinchScaleX, pinchScaleY, pinchScaleZ)
            recognizer.scale = 1
        }
    }
}
I don't know if this helps, but here is an image of the nodes:
Image of the nodes
Thanks in advance!
In order to achieve the "grouping" you are looking for, you have to create an empty node that will be the root of all the other nodes.
let myRootNode : SCNNode = SCNNode()
sceneView.scene.rootNode.addChildNode(myRootNode)
Next, append all new nodes to myRootNode so that all your models are parented to the same node.
let newChildNode : SCNNode = SCNNode()
myRootNode.addChildNode(newChildNode)
Then you can apply translations to myRootNode to move it along with all of its child nodes, while the child nodes remain separate models that can still be moved (translated/rotated) individually.
Within your hit test you should be scaling and rotating myRootNode instead of the individual model, as sketched below.
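A minimal sketch of that change, based on the question's pinch handler (myRootNode is the shared root created above):
@objc func pinched(recognizer: UIPinchGestureRecognizer) {
    guard recognizer.state == .changed,
          let sceneView = recognizer.view as? ARSCNView else { return }
    let touch = recognizer.location(in: sceneView)
    // If any part of the group was pinched, scale the shared root instead.
    if sceneView.hitTest(touch, options: nil).first != nil {
        let s = Float(recognizer.scale)
        myRootNode.scale = SCNVector3(myRootNode.scale.x * s,
                                      myRootNode.scale.y * s,
                                      myRootNode.scale.z * s)
        recognizer.scale = 1
    }
}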

ARKit - getting distance from camera to anchor

I'm creating an anchor and adding it to my ARSKView at a certain distance in front of the camera like this:
func displayToken(distance: Float) {
    print("token dropped at: \(distance)")
    guard let sceneView = self.view as? ARSKView else {
        return
    }
    // Create anchor using the camera's current position
    if let currentFrame = sceneView.session.currentFrame {
        // Create a transform with a translation of x meters in front of the camera
        var translation = matrix_identity_float4x4
        translation.columns.3.z = -distance
        let transform = simd_mul(currentFrame.camera.transform, translation)
        // Add a new anchor to the session
        let anchor = ARAnchor(transform: transform)
        sceneView.session.add(anchor: anchor)
    }
}
then the node gets created for the anchor like this:
func view(_ view: ARSKView, nodeFor anchor: ARAnchor) -> SKNode? {
    // Create and configure a node for the anchor added to the view's session.
    if let image = tokenImage {
        let texture = SKTexture(image: image)
        let tokenImageNode = SKSpriteNode(texture: texture)
        tokenImageNode.name = "token"
        return tokenImageNode
    } else {
        return nil
    }
}
This works fine, and I see the image get added at the appropriate distance. However, what I'm trying to do is then calculate how far the anchor/node is in front of the camera as you move. The problem is that the calculation, fabs(cameraZ - anchor.transform.columns.3.z), seems to be off right away. Below is the code in my update() method that calculates the distance between the camera and the object:
override func update(_ currentTime: TimeInterval) {
    // Called before each frame is rendered
    guard let sceneView = self.view as? ARSKView else {
        return
    }
    if let currentFrame = sceneView.session.currentFrame {
        let cameraZ = currentFrame.camera.transform.columns.3.z
        for anchor in currentFrame.anchors {
            if let spriteNode = sceneView.node(for: anchor), spriteNode.name == "token", intersects(spriteNode) {
                // token is within the camera view
                //print("token is within camera view from update method")
                print("DISTANCE BETWEEN CAMERA AND TOKEN: \(fabs(cameraZ - anchor.transform.columns.3.z))")
                print(cameraZ)
                print(anchor.transform.columns.3.z)
            }
        }
    }
}
Any help is appreciated in order to accurately get the distance between the camera and the anchor.
The last column of a 4x4 transform matrix is the translation vector (or position relative to a parent coordinate space), so you can get the distance in three dimensions between two transforms by simply subtracting those vectors.
let anchorPosition = anchor.transform.columns.3
let cameraPosition = camera.transform.columns.3
// here’s a line connecting the two points, which might be useful for other things
let cameraToAnchor = cameraPosition - anchorPosition
// and here’s just the scalar distance
let distance = length(cameraToAnchor)
What you’re doing isn’t working right because you’re subtracting the z-coordinates of each vector. If the two points are different in x, y, and z, just subtracting z doesn’t get you distance.
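Applied to the update() loop from the question, that looks roughly like this (a sketch using the same ARSKView setup; simd_length comes from the simd module that ARKit already imports):
if let frame = sceneView.session.currentFrame {
    let cameraPosition = frame.camera.transform.columns.3
    for anchor in frame.anchors {
        // Full 3D distance, not just the difference of the z components.
        // Both columns have w == 1, so the w components cancel out.
        let distance = simd_length(cameraPosition - anchor.transform.columns.3)
        print("distance between camera and token: \(distance) m")
    }
}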
This one is for SceneKit; I'll leave it here anyway.
let end = node.presentation.worldPosition
guard let start = sceneView.pointOfView?.worldPosition else { return }
let dx = end.x - start.x
let dy = end.y - start.y
let dz = end.z - start.z
let distance = sqrt(dx * dx + dy * dy + dz * dz)
With RealityKit there is a slightly different way to do this. If you're using the world tracking configuration, your AnchorEntity object conforms to HasAnchoring which gives you a target. Target is an enum of AnchoringComponent.Target. It has a case .world(let transform). You can compare your world transform to the camera's world transform like this:
if case let AnchoringComponent.Target.world(transform) = yourAnchorEntity.anchoring.target {
    let theDistance = distance(transform.columns.3, frame.camera.transform.columns.3)
}
This took me a bit to figure out but I figure others that might be using RealityKit might benefit from this.
As @codeman mentioned above, this is the right solution:
let distance = simd_distance(YOUR_NODE.simdTransform.columns.3, (sceneView.session.currentFrame?.camera.transform.columns.3)!)
For 3D distance, you can check these utils:
class ARSceneUtils {
    /// Returns the distance between an anchor and the camera.
    class func distanceBetween(anchor: ARAnchor, AndCamera camera: ARCamera) -> CGFloat {
        let anchorPosition = SCNVector3Make(
            anchor.transform.columns.3.x,
            anchor.transform.columns.3.y,
            anchor.transform.columns.3.z
        )
        let cameraPosition = SCNVector3Make(
            camera.transform.columns.3.x,
            camera.transform.columns.3.y,
            camera.transform.columns.3.z
        )
        return CGFloat(self.calculateDistance(from: cameraPosition, to: anchorPosition))
    }

    /// Returns the distance between 2 vectors.
    class func calculateDistance(from: SCNVector3, to: SCNVector3) -> Float {
        let x = from.x - to.x
        let y = from.y - to.y
        let z = from.z - to.z
        return sqrtf((x * x) + (y * y) + (z * z))
    }
}
And now you can call:
guard let camera = session.currentFrame?.camera else { return }
let anchor = // your anchor
let distanceAchorAndCamera = ARSceneUtils.distanceBetween(anchor: anchor, AndCamera: camera)

ARKIT: Move Object with PanGesture (the right way)

I've been reading plenty of StackOverflow answers on how to move an object by dragging it across the screen. Some use hit tests against .featurePoints, some use the gesture translation or just keep track of the lastPosition of the object. But honestly, none work the way everyone expects them to work.
Hit testing against .featurePoints just makes the object jump all around, because you don't always hit a feature point when dragging your finger. I don't understand why everyone keeps suggesting this.
Solutions like this one work: Dragging SCNNode in ARKit Using SceneKit
But the object doesn't really follow your finger, and the moment you take a few steps or change the angle of the object or the camera and then try to move the object, the x and z are all inverted, which makes total sense given how they were computed.
I really want to move objects as well as the Apple demo does, but when I look at the code from Apple, it's insanely weird and overcomplicated; I can't understand it at all. Their technique for moving the object so beautifully is nothing close to what everyone proposes online.
https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality
There's got to be a simpler way to do it.
Short answer:
To get the nice and fluent dragging effect from the Apple demo project, you will have to do it like in the Apple demo project (Handling 3D Interaction). On the other hand, I agree with you that the code may be confusing at first glance. It is not easy at all to calculate a movement that is correct for an object placed on a floor plane, always and from every location or viewing angle. It's a complex code construct that achieves this superb dragging effect. Apple did a great job here, but didn't make it too easy for us.
Full Answer:
Stripping the AR Interaction template down to your needs results in a nightmare, but it should work too if you invest enough time. If you prefer to begin from scratch, basically start with a common Swift ARKit/SceneKit Xcode template (the one containing the space ship).
You will also require the entire AR Interaction template project from Apple. (The link is included in the SO question.)
In the end you should be able to drag something called VirtualObject, which is in fact a special SCNNode. In addition you will have a nice focus square that can be useful for whatever purpose, like initially placing objects or adding a floor or a wall. (Some of the code for the dragging effect and the focus square usage is merged or linked together; doing it without the focus square will actually be more complicated.)
Get started:
Copy the following files from the AR Interaction template to your empty project:
Utilities.swift (usually I name this file Extensions.swift, it contains some basic extensions that are required)
FocusSquare.swift
FocusSquareSegment.swift
ThresholdPanGesture.swift
VirtualObject.swift
VirtualObjectLoader.swift
VirtualObjectARView.swift
Add the UIGestureRecognizerDelegate to the ViewController class definition like so:
class ViewController: UIViewController, ARSCNViewDelegate, UIGestureRecognizerDelegate {
Add this code to your ViewController.swift, in the definitions section, right before viewDidLoad:
// MARK: For the Focus Square
// SUPER IMPORTANT: the screenCenter must be defined this way
var focusSquare = FocusSquare()
var screenCenter: CGPoint {
    let bounds = sceneView.bounds
    return CGPoint(x: bounds.midX, y: bounds.midY)
}
var isFocusSquareEnabled: Bool = true

// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
/// The tracked screen position used to update the `trackedObject`'s position in `updateObjectToCurrentTrackingPosition()`.
private var currentTrackingPosition: CGPoint?

/**
 The object that has been most recently interacted with.
 The `selectedObject` can be moved at any time with the tap gesture.
 */
var selectedObject: VirtualObject?

/// The object that is tracked for use by the pan and rotation gestures.
private var trackedObject: VirtualObject? {
    didSet {
        guard trackedObject != nil else { return }
        selectedObject = trackedObject
    }
}

/// Developer setting to translate assuming the detected plane extends infinitely.
let translateAssumingInfinitePlane = true
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
In viewDidLoad, before you set up the scene, add this code:
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
let panGesture = ThresholdPanGesture(target: self, action: #selector(didPan(_:)))
panGesture.delegate = self
// Add gestures to the `sceneView`.
sceneView.addGestureRecognizer(panGesture)
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
At the very end of your ViewController.swift add this code:
// MARK: - Pan Gesture Block
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
@objc
func didPan(_ gesture: ThresholdPanGesture) {
    switch gesture.state {
    case .began:
        // Check for interaction with a new object.
        if let object = objectInteracting(with: gesture, in: sceneView) {
            trackedObject = object // as? VirtualObject
        }
    case .changed where gesture.isThresholdExceeded:
        guard let object = trackedObject else { return }
        let translation = gesture.translation(in: sceneView)
        let currentPosition = currentTrackingPosition ?? CGPoint(sceneView.projectPoint(object.position))
        // The `currentTrackingPosition` is used to update the `selectedObject` in `updateObjectToCurrentTrackingPosition()`.
        currentTrackingPosition = CGPoint(x: currentPosition.x + translation.x, y: currentPosition.y + translation.y)
        gesture.setTranslation(.zero, in: sceneView)
    case .changed:
        // Ignore changes to the pan gesture until the threshold for displacement has been exceeded.
        break
    case .ended:
        // Update the object's anchor when the gesture ended.
        guard let existingTrackedObject = trackedObject else { break }
        addOrUpdateAnchor(for: existingTrackedObject)
        fallthrough
    default:
        // Clear the current position tracking.
        currentTrackingPosition = nil
        trackedObject = nil
    }
}
// MARK: - Object anchors
/// - Tag: AddOrUpdateAnchor
func addOrUpdateAnchor(for object: VirtualObject) {
    // If the anchor is not nil, remove it from the session.
    if let anchor = object.anchor {
        sceneView.session.remove(anchor: anchor)
    }
    // Create a new anchor with the object's current transform and add it to the session.
    let newAnchor = ARAnchor(transform: object.simdWorldTransform)
    object.anchor = newAnchor
    sceneView.session.add(anchor: newAnchor)
}

private func objectInteracting(with gesture: UIGestureRecognizer, in view: ARSCNView) -> VirtualObject? {
    for index in 0..<gesture.numberOfTouches {
        let touchLocation = gesture.location(ofTouch: index, in: view)
        // Look for an object directly under the `touchLocation`.
        if let object = virtualObject(at: touchLocation) {
            return object
        }
    }
    // As a last resort look for an object under the center of the touches.
    // return virtualObject(at: gesture.center(in: view))
    return virtualObject(at: (gesture.view?.center)!)
}

/// Hit tests against the `sceneView` to find an object at the provided point.
func virtualObject(at point: CGPoint) -> VirtualObject? {
    // let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
    // let hitTestResults = sceneView.hitTest(point, options: hitTestOptions)
    let hitTestResults = sceneView.hitTest(point, options: [SCNHitTestOption.categoryBitMask: 0b00000010, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])
    return hitTestResults.lazy.compactMap { result in
        return VirtualObject.existingObjectContainingNode(result.node)
    }.first
}
/**
 If a drag gesture is in progress, update the tracked object's position by
 converting the 2D touch location on screen (`currentTrackingPosition`) to
 3D world space.
 This method is called per frame (via `SCNSceneRendererDelegate` callbacks),
 allowing drag gestures to move virtual objects regardless of whether one
 drags a finger across the screen or moves the device through space.
 - Tag: updateObjectToCurrentTrackingPosition
 */
@objc
func updateObjectToCurrentTrackingPosition() {
    guard let object = trackedObject, let position = currentTrackingPosition else { return }
    translate(object, basedOn: position, infinitePlane: translateAssumingInfinitePlane, allowAnimation: true)
}
/// - Tag: DragVirtualObject
func translate(_ object: VirtualObject, basedOn screenPos: CGPoint, infinitePlane: Bool, allowAnimation: Bool) {
    guard let cameraTransform = sceneView.session.currentFrame?.camera.transform,
          let result = smartHitTest(screenPos,
                                    infinitePlane: infinitePlane,
                                    objectPosition: object.simdWorldPosition,
                                    allowedAlignments: [ARPlaneAnchor.Alignment.horizontal]) else { return }
    let planeAlignment: ARPlaneAnchor.Alignment
    if let planeAnchor = result.anchor as? ARPlaneAnchor {
        planeAlignment = planeAnchor.alignment
    } else if result.type == .estimatedHorizontalPlane {
        planeAlignment = .horizontal
    } else if result.type == .estimatedVerticalPlane {
        planeAlignment = .vertical
    } else {
        return
    }
    /*
     Plane hit test results are generally smooth. If we did *not* hit a plane,
     smooth the movement to prevent large jumps.
     */
    let transform = result.worldTransform
    let isOnPlane = result.anchor is ARPlaneAnchor
    object.setTransform(transform,
                        relativeTo: cameraTransform,
                        smoothMovement: !isOnPlane,
                        alignment: planeAlignment,
                        allowAnimation: allowAnimation)
}
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
Add some Focus Square Code
// MARK: - Focus Square (code by Apple, some by me)
func updateFocusSquare(isObjectVisible: Bool) {
    if isObjectVisible {
        focusSquare.hide()
    } else {
        focusSquare.unhide()
    }
    // Perform hit testing only when ARKit tracking is in a good state.
    if let camera = sceneView.session.currentFrame?.camera, case .normal = camera.trackingState,
       let result = smartHitTest(screenCenter) {
        DispatchQueue.main.async {
            self.sceneView.scene.rootNode.addChildNode(self.focusSquare)
            self.focusSquare.state = .detecting(hitTestResult: result, camera: camera)
        }
    } else {
        DispatchQueue.main.async {
            self.focusSquare.state = .initializing
            self.sceneView.pointOfView?.addChildNode(self.focusSquare)
        }
    }
}
And add some control functions:
func hideFocusSquare() { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: true) } } // to hide the focus square
func showFocusSquare() { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: false) } } // to show the focus square
From VirtualObjectARView.swift, COPY the entire smartHitTest function into ViewController.swift (so it exists twice):
func smartHitTest(_ point: CGPoint,
                  infinitePlane: Bool = false,
                  objectPosition: float3? = nil,
                  allowedAlignments: [ARPlaneAnchor.Alignment] = [.horizontal, .vertical]) -> ARHitTestResult? {
    // Perform the hit test.
    let results = sceneView.hitTest(point, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane, .estimatedHorizontalPlane])

    // 1. Check for a result on an existing plane using geometry.
    if let existingPlaneUsingGeometryResult = results.first(where: { $0.type == .existingPlaneUsingGeometry }),
       let planeAnchor = existingPlaneUsingGeometryResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
        return existingPlaneUsingGeometryResult
    }

    if infinitePlane {
        // 2. Check for a result on an existing plane, assuming its dimensions are infinite.
        //    Loop through all hits against infinite existing planes and either return the
        //    nearest one (vertical planes) or return the nearest one which is within 5 cm
        //    of the object's position.
        let infinitePlaneResults = sceneView.hitTest(point, types: .existingPlane)
        for infinitePlaneResult in infinitePlaneResults {
            if let planeAnchor = infinitePlaneResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
                if planeAnchor.alignment == .vertical {
                    // Return the first vertical plane hit test result.
                    return infinitePlaneResult
                } else {
                    // For horizontal planes we only want to return a hit test result
                    // if it is close to the current object's position.
                    if let objectY = objectPosition?.y {
                        let planeY = infinitePlaneResult.worldTransform.translation.y
                        if objectY > planeY - 0.05 && objectY < planeY + 0.05 {
                            return infinitePlaneResult
                        }
                    } else {
                        return infinitePlaneResult
                    }
                }
            }
        }
    }

    // 3. As a final fallback, check for a result on estimated planes.
    let vResult = results.first(where: { $0.type == .estimatedVerticalPlane })
    let hResult = results.first(where: { $0.type == .estimatedHorizontalPlane })
    switch (allowedAlignments.contains(.horizontal), allowedAlignments.contains(.vertical)) {
    case (true, false):
        return hResult
    case (false, true):
        // Allow fallback to horizontal because we assume that objects meant for vertical placement
        // (like a picture) can always be placed on a horizontal surface, too.
        return vResult ?? hResult
    case (true, true):
        if hResult != nil && vResult != nil {
            return hResult!.distance < vResult!.distance ? hResult! : vResult!
        } else {
            return hResult ?? vResult
        }
    default:
        return nil
    }
}
You might see some errors in the copied function regarding the hitTest. Just correct it like so:
hitTest... // which gives an Error
sceneView.hitTest... // this should correct it
Implement the renderer updateAtTime function and add these lines:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // For the Focus Square
    if isFocusSquareEnabled { showFocusSquare() }

    self.updateObjectToCurrentTrackingPosition() // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
}
At this point you might still see about a dozen errors and warnings in the imported files. This can happen when doing this in Swift 5 while some of the files are Swift 4. Just let Xcode correct the errors. (It's all about renaming some code statements; Xcode knows best.)
Go into VirtualObject.swift and search for this code block:
if smoothMovement {
    let hitTestResultDistance = simd_length(positionOffsetFromCamera)
    // Add the latest position and keep up to 10 recent distances to smooth with.
    recentVirtualObjectDistances.append(hitTestResultDistance)
    recentVirtualObjectDistances = Array(recentVirtualObjectDistances.suffix(10))
    let averageDistance = recentVirtualObjectDistances.average!
    let averagedDistancePosition = simd_normalize(positionOffsetFromCamera) * averageDistance
    simdPosition = cameraWorldPosition + averagedDistancePosition
} else {
    simdPosition = cameraWorldPosition + positionOffsetFromCamera
}
Comment out or replace that entire block with this single line of code:
simdPosition = cameraWorldPosition + positionOffsetFromCamera
At this point you should be able to compile the project and run it on a device. You should see the spaceship and a yellow focus square that should already work.
To start placing an object that you can drag, you need some function to create a so-called VirtualObject, as I said in the beginning.
Use this example function to test (add it somewhere in the view controller):
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    if focusSquare.state != .initializing {
        let position = SCNVector3(focusSquare.lastPosition!)

        // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
        let testObject = VirtualObject() // give it some name if you don't have anything to load
        testObject.geometry = SCNCone(topRadius: 0.0, bottomRadius: 0.2, height: 0.5)
        testObject.geometry?.firstMaterial?.diffuse.contents = UIColor.red
        testObject.categoryBitMask = 0b00000010
        testObject.name = "test"
        testObject.castsShadow = true
        testObject.position = position
        sceneView.scene.rootNode.addChildNode(testObject)
    }
}
Note: everything you want to drag on a plane must be set up using VirtualObject() instead of SCNNode(). Everything else regarding the VirtualObject stays the same as SCNNode.
(You can also add some common SCNNode extensions as well, like one that loads scenes by name, which is useful when referencing imported models.)
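One possible shape for such an extension (a sketch; the file and node names are placeholders, not part of the template):
extension SCNNode {
    // Load the first node with a given name from a .scn file in the bundle.
    static func node(named name: String, fromSceneFile file: String) -> SCNNode? {
        guard let scene = SCNScene(named: file) else { return nil }
        return scene.rootNode.childNode(withName: name, recursively: true)
    }
}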
Have fun!
I added some of my ideas to Claessons's answer. I noticed some lag when dragging the node around and found that the node couldn't keep up with the finger's movement.
To make the node move more smoothly, I added a variable that keeps track of the node currently being moved, and set its position to the location of the touch.
var selectedNode: SCNNode?
Also, I set a .categoryBitMask value to specify the category of nodes that I want to edit (move). The default bit mask value is 1.
The reason we set the category bit mask is to distinguish between different kinds of nodes, and to specify those that you wish to select (to move around, etc.).
enum CategoryBitMask: Int {
    case categoryToSelect = 2        // 010
    case otherCategoryToSelect = 4   // 100
    // you can add more bit masks below . . .
}
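Any node you want to be selectable then needs that bit mask assigned, for example (nodeToMove is a placeholder for whatever node you add to the scene):
nodeToMove.categoryBitMask = CategoryBitMask.categoryToSelect.rawValue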
Then, I added a UILongPressGestureRecognizer in viewDidLoad().
let longPressRecognizer = UILongPressGestureRecognizer(target: self, action: #selector(longPressed))
self.sceneView.addGestureRecognizer(longPressRecognizer)
The following is the UILongPressGestureRecognizer I used to detect a long press, which initiates the dragging of the node.
First, obtain the touch location from the recognizerView
@objc func longPressed(recognizer: UILongPressGestureRecognizer) {
    guard let recognizerView = recognizer.view as? ARSCNView else { return }
    let touch = recognizer.location(in: recognizerView)
The following code runs once when a long press is detected.
Here, we perform a hitTest to select the node that has been touched. Note that here, we specify a .categoryBitMask option to select only nodes of the following category: CategoryBitMask.categoryToSelect
    // Runs once when the long press is detected.
    if recognizer.state == .began {
        // Perform a hit test.
        let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: CategoryBitMask.categoryToSelect.rawValue])
        guard let hitNode = hitTestResult.first?.node else { return }

        // Set hitNode as selected.
        self.selectedNode = hitNode
The following code will run periodically until the user releases the finger.
Here we perform another hitTest to obtain the plane you want the node to move along.
    // Runs periodically after .began.
    } else if recognizer.state == .changed {
        // Make sure a node has been selected from .began.
        guard let hitNode = self.selectedNode else { return }

        // Perform a hit test to obtain the plane to move along.
        let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
        guard let hitPlane = hitTestPlane.first else { return }

        hitNode.position = SCNVector3(hitPlane.worldTransform.columns.3.x,
                                      hitNode.position.y,
                                      hitPlane.worldTransform.columns.3.z)
Make sure you deselect the node when the finger is removed from the screen.
    // Runs once when the finger is removed from the screen.
    } else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed {
        guard self.selectedNode != nil else { return }

        // Undo the selection.
        self.selectedNode = nil
    }
}
Kind of a late answer, but I know I had some problems solving this as well. Eventually I figured out a way to do it by performing two separate hit tests whenever my gesture recognizer is called.
First, I perform a hit test for my 3D object to detect whether I'm currently pressing an object or not (you would otherwise get results for pressing feature points, planes, etc. if you don't specify any options). I do this by using the .categoryBitMask value of SCNHitTestOption.
Keep in mind you have to assign the correct .categoryBitMask value to your object node and all its child nodes beforehand in order for the hit test to work. I declare an enum I can use for that:
enum BodyType: Int {
    case ObjectModel = 2
}
As becomes apparent from the answer to my question about .categoryBitMask values that I posted here, it is important to consider what values you assign to your bit mask.
Below is the code I use in conjunction with a UILongPressGestureRecognizer in order to select the object I'm currently pressing:
guard let recognizerView = recognizer.view as? ARSCNView else { return }
let touch = recognizer.location(in: recognizerView)
let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
guard let modelNodeHit = hitTestResult.first?.node else { return }
After that I perform a second hit test in order to find the plane I'm pressing on.
You can use the type .existingPlaneUsingExtent if you don't want to move your object farther than the edge of a plane, or .existingPlane if you want to move your object indefinitely along a detected plane surface.
if recognizer.state == .changed {
    let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
    guard let planeHit = hitTestPlane.first else { return }
    modelNodeHit.position = SCNVector3(planeHit.worldTransform.columns.3.x,
                                       modelNodeHit.position.y,
                                       planeHit.worldTransform.columns.3.z)
} else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed {
    // Nothing left to do here: the node already sits at the position
    // set by the last .changed update.
}
I made a GitHub repo when I tried this out while also experimenting with ARAnchors. You can check it out if you want to see my method in practice, but I didn't make it with the intention of anyone else using it, so it's quite unfinished. Also, the development branch should support some functionality for an object with more child nodes.
EDIT:
For clarification, if you want to use a .scn object instead of a regular geometry, you need to iterate through all the child nodes of the object when creating it, setting the bit mask of each child like this:
let objectModelScene = SCNScene(named: "art.scnassets/object/object.scn")!
let objectNode = objectModelScene.rootNode.childNode(withName: "theNameOfTheParentNodeOfTheObject", recursively: true)!
objectNode.categoryBitMask = BodyType.ObjectModel.rawValue
objectNode.enumerateChildNodes { (node, _) in
    node.categoryBitMask = BodyType.ObjectModel.rawValue
}
Then, in the gesture recognizer after you get a hitTestResult
let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
you need to find the parent node, since otherwise you might be moving only the individual child node you just pressed. Do this by searching recursively upward through the node tree from the node you just found:
guard let objectNode = getParentNodeOf(hitTestResult.first?.node) else { return }
where you declare the getParentNode-method as follows
func getParentNodeOf(_ nodeFound: SCNNode?) -> SCNNode? {
    if let node = nodeFound {
        if node.name == "theNameOfTheParentNodeOfTheObject" {
            return node
        } else if let parent = node.parent {
            return getParentNodeOf(parent)
        }
    }
    return nil
}
Then you are free to perform any operation on the objectNode, as it will be the parent node of your .scn object, meaning that any transformation applied to it will also be applied to the child nodes.
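For instance (a sketch), any transform applied to this parent propagates to the whole model:
objectNode.position = SCNVector3(0, 0, -0.5) // moves every child node with it
objectNode.eulerAngles.y = Float.pi / 4      // rotates the model as one unit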
As @ZAY mentioned, Apple made it quite confusing, and in addition they used ARRaycastQuery, which only works on iOS 13 and above. Therefore, I arrived at a solution that uses the current camera orientation to calculate a translation on a plane in world coordinates.
First, this snippet gets the current heading the user is facing, derived from the camera's orientation quaternion:
private func getOrientationYRadians() -> Float {
    guard let cameraNode = arSceneView.pointOfView else { return 0 }
    // Get camera orientation expressed as a quaternion.
    let q = cameraNode.orientation
    // Calculate rotation around the y-axis (heading) from the quaternion and convert
    // the angle so that 0 is along the -z-axis (forward in SceneKit) and a positive
    // angle is a clockwise rotation.
    let alpha = Float.pi - atan2f((2 * q.y * q.w) - (2 * q.x * q.z), 1 - (2 * pow(q.y, 2)) - (2 * pow(q.z, 2)))
    // Here I convert the angle so it is 0 when the user is facing the +z-axis.
    return alpha <= Float.pi ? abs(alpha - Float.pi) : (3 * Float.pi) - alpha
}
Handle Pan Method
private var lastPanLocation2d: CGPoint!
@objc func handlePan(panGesture: UIPanGestureRecognizer) {
    let state = panGesture.state
    guard state != .failed && state != .cancelled else {
        return
    }
    let touchLocation = panGesture.location(in: self)
    if state == .began {
        lastPanLocation2d = touchLocation
    }
    // 200 here is an arbitrary value that controls the smoothness of the dragging effect.
    let deltaX = Float(touchLocation.x - lastPanLocation2d.x) / 200
    let deltaY = Float(touchLocation.y - lastPanLocation2d.y) / 200
    let currentYOrientationRadians = getOrientationYRadians()
    // Convert the deltas in 2D screen space to 3D world space using the current rotation.
    let deltaX3D = (deltaY * sin(currentYOrientationRadians)) + (deltaX * cos(currentYOrientationRadians))
    let deltaY3D = (deltaY * cos(currentYOrientationRadians)) - (deltaX * sin(currentYOrientationRadians))
    // Assuming the node is currently positioned on a plane, the y-translation is zero.
    let translation = SCNVector3Make(deltaX3D, 0.0, deltaY3D)
    nodeToDrag.localTranslate(by: translation)
    lastPanLocation2d = touchLocation
}
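For completeness, the recognizer still has to be attached to the view that the touch locations are measured against (a sketch; here that view is self, matching the handler above):
let panRecognizer = UIPanGestureRecognizer(target: self, action: #selector(handlePan(panGesture:)))
addGestureRecognizer(panRecognizer)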

ARKit : Handle tap to show / hide a node

I am new to ARKit, and I am trying an example that creates an SCNBox at the tap location. What I am trying to do is: on the initial touch I create a box, and on a second tap on the created box it should be removed from the scene. I am doing the hit test, but it keeps adding boxes. I know this is a simple task, but I am unable to do it.
@objc func handleTap(sender: UITapGestureRecognizer) {
    print("handle tap")
    guard let _ = sceneView.session.currentFrame
    else { return }
    guard let scnView = sceneView else { return }
    let touchLocation = sender.location(in: scnView)
    let hitTestResult = scnView.hitTest(touchLocation, types: [.featurePoint])
    guard let pointOfView = sceneView.pointOfView else { return }
    print("point \(pointOfView.name)")
    if hitTestResult.count > 0 {
        print("Hit")
        if let _ = pointOfView as? ARBox {
            print("Box Available")
        }
        else {
            print("Adding box")
            let transform = hitTestResult.first?.worldTransform.columns.3
            let xPosition = transform?.x
            let yPosition = transform?.y
            let zPosition = transform?.z
            let position = SCNVector3(xPosition!, yPosition!, zPosition!)
            basketCount = basketCount + 1
            let newBasket = ARBox(position: position)
            newBasket.name = "basket\(basketCount)"
            self.sceneView.scene.rootNode.addChildNode(newBasket)
            boxNodes.append(newBasket)
        }
    }
}
The pointOfView of a sceneView is the node your scene is rendered from; for generic cases it is usually an empty node with the camera (and sometimes lights) attached. I don't think you should cast it to ARBox or any other type of SCNNode.
What you probably can try is the logic below (hitResults are the results of your hitTest):
if hitResults.count > 0 {
    if let node = hitResults.first?.node as? ARBox {
        // node.removeFromParentNode()
        // or make the node transparent if you don't want to remove it
    } else {
        // add a new box
    }
}
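A fuller sketch of a tap handler built on that idea (ARBox and its position initializer come from the question; everything else here is assumed):
@objc func handleTap(sender: UITapGestureRecognizer) {
    guard let scnView = sceneView else { return }
    let touchLocation = sender.location(in: scnView)
    // 1. Hit test SceneKit geometry: if an existing box was tapped, remove it.
    if let tappedBox = scnView.hitTest(touchLocation, options: nil)
        .compactMap({ $0.node as? ARBox }).first {
        tappedBox.removeFromParentNode()
        return
    }
    // 2. Otherwise hit test AR feature points and add a new box there.
    if let transform = scnView.hitTest(touchLocation, types: [.featurePoint]).first?.worldTransform {
        let position = SCNVector3(transform.columns.3.x, transform.columns.3.y, transform.columns.3.z)
        scnView.scene.rootNode.addChildNode(ARBox(position: position))
    }
}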

SpriteKit reference nodes from level editor

I'm using the scene editor in SpriteKit to place color sprites and assign them textures using the Attributes Inspector. My problem is trying to figure out how to reference those sprites from my GameScene file. For example, I'd like to know when a sprite is a certain distance from my main character.
Edit - code added
I'm adding the code because, for some reason, appzYourLife's answer worked great in a simple test project but not in my code. I was able to use Ron Myschuk's answer, which I also included in the code below for reference. (Though, as I look at it now, I think the array of tuples was overkill on my part.) As you can see, I have a Satellite class with some simple animations, a LevelManager class that replaces the nodes from the scene editor with the correct objects, and finally everything gets added to the world node in GameScene.swift.
Satellite Class
func spawn(parentNode: SKNode, position: CGPoint, size: CGSize = CGSize(width: 50, height: 50)) {
    parentNode.addChild(self)
    createAnimations()
    self.size = size
    self.position = position
    self.name = "satellite"
    self.runAction(satAnimation)
    self.physicsBody = SKPhysicsBody(circleOfRadius: size.width / 2)
    self.physicsBody?.affectedByGravity = false
    self.physicsBody?.categoryBitMask = PhysicsCategory.satellite.rawValue
    self.physicsBody?.contactTestBitMask = PhysicsCategory.laser.rawValue
    self.physicsBody?.collisionBitMask = 0
}

func createAnimations() {
    let flyFrames: [SKTexture] = [textureAtlas.textureNamed("sat1.png"),
                                  textureAtlas.textureNamed("sat2.png")]
    let flyAction = SKAction.animateWithTextures(flyFrames, timePerFrame: 0.14)
    satAnimation = SKAction.repeatActionForever(flyAction)

    let warningFrames: [SKTexture] = [textureAtlas.textureNamed("sat8.png"),
                                      textureAtlas.textureNamed("sat1.png")]
    let warningAction = SKAction.animateWithTextures(warningFrames, timePerFrame: 0.14)
    warningAnimation = SKAction.repeatActionForever(warningAction)
}

func warning() {
    self.runAction(warningAnimation)
}
Level Manager Class
import SpriteKit

class LevelManager {
    let levelNames: [String] = ["Level1"]
    var levels: [SKNode] = []

    init() {
        for levelFileName in levelNames {
            let level = SKNode()
            if let levelScene = SKScene(fileNamed: levelFileName) {
                for node in levelScene.children {
                    switch node.name! {
                    case "satellite":
                        let satellite = Satellite()
                        satellite.spawn(level, position: node.position)
                    default:
                        print("Name error: \(node.name)")
                    }
                }
            }
            levels.append(level)
        }
    }

    func addLevelsToWorld(world: SKNode) {
        for index in 0...levels.count - 1 {
            levels[index].position = CGPoint(x: -2000, y: index * 1000)
            world.addChild(levels[index])
        }
    }
}
GameScene.swift - didMoveToView
world = SKNode()
world.name = "world"
addChild(world)
physicsWorld.contactDelegate = self
levelManager.addLevelsToWorld(self.world)
levelManager.levels[0].position = CGPoint(x: 0, y: 0)

// This does not find the satellite nodes, because flatMap only inspects the
// scene's direct children, and the satellites are nested inside level nodes
// under world.
let satellites = children.flatMap { $0 as? Satellite }

// This does work, because "//*" enumerates all descendants recursively.
self.enumerateChildNodesWithName("//*") { node, stop in
    if node.name == "satellite" {
        self.satTuple.0 = node.position
        self.satTuple.1 = (node as? SKSpriteNode)!
        self.currentSatellite.append(self.satTuple)
    }
}
The Obstacle class
First of all you should create an Obstacle class like this:
class Obstacle: SKSpriteNode { }
Now, in the scene editor, associate the Obstacle class with your obstacle sprites (via the Custom Class inspector).
The Player class
Do the same for Player, create a class
class Player: SKSpriteNode { }
and associate it with your player sprite.
Checking for collisions
Now, in GameScene.swift, change the update method like this:
override func update(currentTime: CFTimeInterval) {
    /* Called before each frame is rendered */
    let obstacles = children.flatMap { $0 as? Obstacle }
    let player = childNodeWithName("player") as! Player
    let obstacleNearSprite = obstacles.contains { (obstacle) -> Bool in
        let distance = hypotf(Float(player.position.x) - Float(obstacle.position.x),
                              Float(player.position.y) - Float(obstacle.position.y))
        return distance < 100
    }
    if obstacleNearSprite {
        print("Oh boy!")
    }
}
What does it do?
The first line retrieves all the obstacles in the scene.
The second line retrieves the player (and crashes if it's not present).
Next, it sets the obstacleNearSprite constant to true if at least one Obstacle is no more than 100 points from the Player.
Finally, it uses obstacleNearSprite to print something.
Optimizations
The update method gets called 60 times per second. We put these 2 lines into it
let obstacles = children.flatMap { $0 as? Obstacle }
let player = childNodeWithName("player") as! Player
in order to retrieve the sprites we need. On modern hardware this is not a problem, but you should save references to the Obstacle and Player nodes instead of searching for them every frame, as sketched below.
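A sketch of that caching, using the same Swift 2 era API as the rest of this answer:
var obstacles: [Obstacle] = []
var player: Player!

override func didMoveToView(view: SKView) {
    // Search the scene once, then reuse the references every frame.
    obstacles = children.flatMap { $0 as? Obstacle }
    player = childNodeWithName("player") as! Player
}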
Build a nice game ;)
You will have to loop through the children of the scene and assign them to local objects to use in your code, assuming the objects in your SKS file were named Obstacle1, Obstacle2, and Obstacle3.
Once in local objects, you can check and do whatever you want with them.
var obstacle1 = SKSpriteNode()
var obstacle2 = SKSpriteNode()
var obstacle3 = SKSpriteNode()
var obstacle3Location = CGPointZero
func setUpScene() {
    self.enumerateChildNodesWithName("//*") { node, stop in
        if node.name == "Obstacle1" {
            self.obstacle1 = node as! SKSpriteNode
        }
        else if node.name == "Obstacle2" {
            self.obstacle2 = node as! SKSpriteNode
        }
        else if node.name == "Obstacle3" {
            self.obstacle3Location = node.position
        }
    }
}
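With those references in place, the distance check from the original question becomes straightforward (a sketch; mainCharacter is a placeholder for your player node):
let dx = obstacle1.position.x - mainCharacter.position.x
let dy = obstacle1.position.y - mainCharacter.position.y
if hypot(dx, dy) < 100 {
    // obstacle1 is within 100 points of the main character
}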
