ARKit: Move Object with PanGesture (the right way)

I've been reading plenty of StackOverflow answers on how to move an object by dragging it across the screen. Some use hit tests against .featurePoints, some use the gesture translation, or just keep track of the object's lastPosition. But honestly, none of them work the way everyone expects.
Hit testing against .featurePoints just makes the object jump all around, because you don't always hit a feature point when dragging your finger. I don't understand why everyone keeps suggesting this.
Solutions like this one work: Dragging SCNNode in ARKit Using SceneKit.
But the object doesn't really follow your finger, and the moment you take a few steps or change the angle of the object or the camera and then try to move the object, the x and z axes end up inverted, which admittedly makes sense given how those approaches work.
I really want to move objects as well as the Apple demo does, but when I look at the code from Apple, it is insanely weird and overcomplicated; I can't understand a bit of it. Their technique for moving the object so beautifully is not even close to what everyone proposes online.
https://developer.apple.com/documentation/arkit/handling_3d_interaction_and_ui_controls_in_augmented_reality
There's gotta be a simpler way to do it.

Short answer:
To get the nice and fluid dragging effect of the Apple demo project, you will have to do it like the Apple demo project (Handling 3D Interaction). On the other hand, I agree with you that the code might be confusing if you look at it for the first time. It is not easy at all to calculate the correct movement for an object placed on a floor plane, always and from every location or viewing angle. It's a complex piece of code that produces this superb dragging effect. Apple did a great job to achieve this, but didn't make it too easy for us.
Full Answer:
Stripping down the AR Interaction template to just what you need turns into a nightmare, but it should work too if you invest enough time. If you prefer to begin from scratch, start with a common Swift ARKit/SceneKit Xcode template (the one containing the spaceship).
You will also require the entire AR Interaction Template Project from Apple. (The link is included in the SO question)
At the end you should be able to drag something called VirtualObject, which is in fact a special SCNNode. In addition you will have a nice Focus Square that can be useful for whatever purpose, like initially placing objects or adding a floor or a wall. (Some of the code for the dragging effect and the focus square usage is merged or linked together; doing it without the focus square will actually be more complicated.)
Get started:
Copy the following files from the AR Interaction template to your empty project:
Utilities.swift (usually I name this file Extensions.swift, it contains some basic extensions that are required)
FocusSquare.swift
FocusSquareSegment.swift
ThresholdPanGesture.swift
VirtualObject.swift
VirtualObjectLoader.swift
VirtualObjectARView.swift
Add the UIGestureRecognizerDelegate to the ViewController class definition like so:
class ViewController: UIViewController, ARSCNViewDelegate, UIGestureRecognizerDelegate {
Add this code to your ViewController.swift, in the definitions section, right before viewDidLoad:
// MARK: for the Focus Square
// SUPER IMPORTANT: the screenCenter must be defined this way
var focusSquare = FocusSquare()
var screenCenter: CGPoint {
let bounds = sceneView.bounds
return CGPoint(x: bounds.midX, y: bounds.midY)
}
var isFocusSquareEnabled : Bool = true
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
/// The tracked screen position used to update the `trackedObject`'s position in `updateObjectToCurrentTrackingPosition()`.
private var currentTrackingPosition: CGPoint?
/**
The object that has been most recently interacted with.
The `selectedObject` can be moved at any time with the tap gesture.
*/
var selectedObject: VirtualObject?
/// The object that is tracked for use by the pan and rotation gestures.
private var trackedObject: VirtualObject? {
didSet {
guard trackedObject != nil else { return }
selectedObject = trackedObject
}
}
/// Developer setting to translate assuming the detected plane extends infinitely.
let translateAssumingInfinitePlane = true
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
In viewDidLoad, before you setup the scene add this code:
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
let panGesture = ThresholdPanGesture(target: self, action: #selector(didPan(_:)))
panGesture.delegate = self
// Add gestures to the `sceneView`.
sceneView.addGestureRecognizer(panGesture)
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
At the very end of your ViewController.swift add this code:
// MARK: - Pan Gesture Block
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
@objc
func didPan(_ gesture: ThresholdPanGesture) {
switch gesture.state {
case .began:
// Check for interaction with a new object.
if let object = objectInteracting(with: gesture, in: sceneView) {
trackedObject = object // as? VirtualObject
}
case .changed where gesture.isThresholdExceeded:
guard let object = trackedObject else { return }
let translation = gesture.translation(in: sceneView)
let currentPosition = currentTrackingPosition ?? CGPoint(sceneView.projectPoint(object.position))
// The `currentTrackingPosition` is used to update the `selectedObject` in `updateObjectToCurrentTrackingPosition()`.
currentTrackingPosition = CGPoint(x: currentPosition.x + translation.x, y: currentPosition.y + translation.y)
gesture.setTranslation(.zero, in: sceneView)
case .changed:
// Ignore changes to the pan gesture until the threshold for displacement has been exceeded.
break
case .ended:
// Update the object's anchor when the gesture ended.
guard let existingTrackedObject = trackedObject else { break }
addOrUpdateAnchor(for: existingTrackedObject)
fallthrough
default:
// Clear the current position tracking.
currentTrackingPosition = nil
trackedObject = nil
}
}
// - MARK: Object anchors
/// - Tag: AddOrUpdateAnchor
func addOrUpdateAnchor(for object: VirtualObject) {
// If the anchor is not nil, remove it from the session.
if let anchor = object.anchor {
sceneView.session.remove(anchor: anchor)
}
// Create a new anchor with the object's current transform and add it to the session
let newAnchor = ARAnchor(transform: object.simdWorldTransform)
object.anchor = newAnchor
sceneView.session.add(anchor: newAnchor)
}
private func objectInteracting(with gesture: UIGestureRecognizer, in view: ARSCNView) -> VirtualObject? {
for index in 0..<gesture.numberOfTouches {
let touchLocation = gesture.location(ofTouch: index, in: view)
// Look for an object directly under the `touchLocation`.
if let object = virtualObject(at: touchLocation) {
return object
}
}
// As a last resort look for an object under the center of the touches.
// return virtualObject(at: gesture.center(in: view))
return virtualObject(at: (gesture.view?.center)!)
}
/// Hit tests against the `sceneView` to find an object at the provided point.
func virtualObject(at point: CGPoint) -> VirtualObject? {
// let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
let hitTestResults = sceneView.hitTest(point, options: [SCNHitTestOption.categoryBitMask: 0b00000010, SCNHitTestOption.searchMode: SCNHitTestSearchMode.any.rawValue as NSNumber])
// let hitTestOptions: [SCNHitTestOption: Any] = [.boundingBoxOnly: true]
// let hitTestResults = sceneView.hitTest(point, options: hitTestOptions)
return hitTestResults.lazy.compactMap { result in
return VirtualObject.existingObjectContainingNode(result.node)
}.first
}
/**
If a drag gesture is in progress, update the tracked object's position by
converting the 2D touch location on screen (`currentTrackingPosition`) to
3D world space.
This method is called per frame (via `SCNSceneRendererDelegate` callbacks),
allowing drag gestures to move virtual objects regardless of whether one
drags a finger across the screen or moves the device through space.
- Tag: updateObjectToCurrentTrackingPosition
*/
@objc
func updateObjectToCurrentTrackingPosition() {
guard let object = trackedObject, let position = currentTrackingPosition else { return }
translate(object, basedOn: position, infinitePlane: translateAssumingInfinitePlane, allowAnimation: true)
}
/// - Tag: DragVirtualObject
func translate(_ object: VirtualObject, basedOn screenPos: CGPoint, infinitePlane: Bool, allowAnimation: Bool) {
guard let cameraTransform = sceneView.session.currentFrame?.camera.transform,
let result = smartHitTest(screenPos,
infinitePlane: infinitePlane,
objectPosition: object.simdWorldPosition,
allowedAlignments: [ARPlaneAnchor.Alignment.horizontal]) else { return }
let planeAlignment: ARPlaneAnchor.Alignment
if let planeAnchor = result.anchor as? ARPlaneAnchor {
planeAlignment = planeAnchor.alignment
} else if result.type == .estimatedHorizontalPlane {
planeAlignment = .horizontal
} else if result.type == .estimatedVerticalPlane {
planeAlignment = .vertical
} else {
return
}
/*
Plane hit test results are generally smooth. If we did *not* hit a plane,
smooth the movement to prevent large jumps.
*/
let transform = result.worldTransform
let isOnPlane = result.anchor is ARPlaneAnchor
object.setTransform(transform,
relativeTo: cameraTransform,
smoothMovement: !isOnPlane,
alignment: planeAlignment,
allowAnimation: allowAnimation)
}
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
Add some Focus Square Code
// MARK: - Focus Square (code by Apple, some by me)
func updateFocusSquare(isObjectVisible: Bool) {
if isObjectVisible {
focusSquare.hide()
} else {
focusSquare.unhide()
}
// Perform hit testing only when ARKit tracking is in a good state.
if let camera = sceneView.session.currentFrame?.camera, case .normal = camera.trackingState,
let result = smartHitTest(screenCenter) {
DispatchQueue.main.async {
self.sceneView.scene.rootNode.addChildNode(self.focusSquare)
self.focusSquare.state = .detecting(hitTestResult: result, camera: camera)
}
} else {
DispatchQueue.main.async {
self.focusSquare.state = .initializing
self.sceneView.pointOfView?.addChildNode(self.focusSquare)
}
}
}
And add some control Functions:
func hideFocusSquare() { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: true) } } // to hide the focus square
func showFocusSquare() { DispatchQueue.main.async { self.updateFocusSquare(isObjectVisible: false) } } // to show the focus square
From VirtualObjectARView.swift, COPY the entire smartHitTest function into ViewController.swift (so it exists twice):
func smartHitTest(_ point: CGPoint,
infinitePlane: Bool = false,
objectPosition: float3? = nil,
allowedAlignments: [ARPlaneAnchor.Alignment] = [.horizontal, .vertical]) -> ARHitTestResult? {
// Perform the hit test.
let results = sceneView.hitTest(point, types: [.existingPlaneUsingGeometry, .estimatedVerticalPlane, .estimatedHorizontalPlane])
// 1. Check for a result on an existing plane using geometry.
if let existingPlaneUsingGeometryResult = results.first(where: { $0.type == .existingPlaneUsingGeometry }),
let planeAnchor = existingPlaneUsingGeometryResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
return existingPlaneUsingGeometryResult
}
if infinitePlane {
// 2. Check for a result on an existing plane, assuming its dimensions are infinite.
// Loop through all hits against infinite existing planes and either return the
// nearest one (vertical planes) or return the nearest one which is within 5 cm
// of the object's position.
let infinitePlaneResults = sceneView.hitTest(point, types: .existingPlane)
for infinitePlaneResult in infinitePlaneResults {
if let planeAnchor = infinitePlaneResult.anchor as? ARPlaneAnchor, allowedAlignments.contains(planeAnchor.alignment) {
if planeAnchor.alignment == .vertical {
// Return the first vertical plane hit test result.
return infinitePlaneResult
} else {
// For horizontal planes we only want to return a hit test result
// if it is close to the current object's position.
if let objectY = objectPosition?.y {
let planeY = infinitePlaneResult.worldTransform.translation.y
if objectY > planeY - 0.05 && objectY < planeY + 0.05 {
return infinitePlaneResult
}
} else {
return infinitePlaneResult
}
}
}
}
}
// 3. As a final fallback, check for a result on estimated planes.
let vResult = results.first(where: { $0.type == .estimatedVerticalPlane })
let hResult = results.first(where: { $0.type == .estimatedHorizontalPlane })
switch (allowedAlignments.contains(.horizontal), allowedAlignments.contains(.vertical)) {
case (true, false):
return hResult
case (false, true):
// Allow fallback to horizontal because we assume that objects meant for vertical placement
// (like a picture) can always be placed on a horizontal surface, too.
return vResult ?? hResult
case (true, true):
if hResult != nil && vResult != nil {
return hResult!.distance < vResult!.distance ? hResult! : vResult!
} else {
return hResult ?? vResult
}
default:
return nil
}
}
You might see some errors in the copied function regarding the hitTest. Just correct them like so:
hitTest... // which gives an Error
sceneView.hitTest... // this should correct it
Implement the renderer updateAtTime function and add these lines:
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
// For the Focus Square
if isFocusSquareEnabled { showFocusSquare() }
self.updateObjectToCurrentTrackingPosition() // *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
}
At this point you might still see about a dozen errors and warnings in the imported files. This can happen when doing this in Swift 5 while some of the files are still Swift 4. Just let Xcode correct the errors. (It's all about renaming some code statements; Xcode knows best.)
Go in VirtualObject.swift and search for this code block:
if smoothMovement {
let hitTestResultDistance = simd_length(positionOffsetFromCamera)
// Add the latest position and keep up to 10 recent distances to smooth with.
recentVirtualObjectDistances.append(hitTestResultDistance)
recentVirtualObjectDistances = Array(recentVirtualObjectDistances.suffix(10))
let averageDistance = recentVirtualObjectDistances.average!
let averagedDistancePosition = simd_normalize(positionOffsetFromCamera) * averageDistance
simdPosition = cameraWorldPosition + averagedDistancePosition
} else {
simdPosition = cameraWorldPosition + positionOffsetFromCamera
}
Comment out or replace this entire block with this single line of code:
simdPosition = cameraWorldPosition + positionOffsetFromCamera
At this point you should be able to compile the project and run it on a device. You should see the Spaceship and a yellow focus square that should already work.
To start placing an object that you can drag, you need some function to create a so-called VirtualObject, as I said in the beginning.
Use this example function to test (add it somewhere in the view controller):
override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
if focusSquare.state != .initializing {
let position = SCNVector3(focusSquare.lastPosition!)
// *** FOR OBJECT DRAGGING PAN GESTURE - APPLE ***
let testObject = VirtualObject() // give it some name when you don't have anything to load
testObject.geometry = SCNCone(topRadius: 0.0, bottomRadius: 0.2, height: 0.5)
testObject.geometry?.firstMaterial?.diffuse.contents = UIColor.red
testObject.categoryBitMask = 0b00000010
testObject.name = "test"
testObject.castsShadow = true
testObject.position = position
sceneView.scene.rootNode.addChildNode(testObject)
}
}
Note: everything you want to drag on a plane must be set up using VirtualObject() instead of SCNNode(). Everything else regarding the VirtualObject stays the same as SCNNode.
(You can also add some common SCNNode extensions as well, like the one to load scenes by its name - useful when referencing imported models)
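A minimal sketch of such an extension, assuming the .scn file sits in your bundle (the initializer name and the asset path below are made up for illustration):

```swift
import SceneKit

extension SCNNode {
    /// Hypothetical helper: builds a node containing everything from a
    /// .scn file's root node, so referenced models load with one call.
    convenience init?(sceneNamed name: String) {
        guard let scene = SCNScene(named: name) else { return nil }
        self.init()
        for child in scene.rootNode.childNodes {
            addChildNode(child)
        }
    }
}

// Usage (assumed asset path):
// let ship = SCNNode(sceneNamed: "art.scnassets/ship.scn")
```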
Have fun!

I added some of my ideas to Claesson's answer. I noticed some lag when dragging the node around, and found that the node could not keep up with the finger's movement.
To make the node move more smoothly, I added a variable that keeps track of the node that is currently being moved, and set the position to the location of the touch.
var selectedNode: SCNNode?
Also, I set a .categoryBitMask value to specify the category of nodes that I want to edit (move). The default bit mask value is 1.
The reason why we set the category bit mask is to distinguish between different kinds of nodes, and specify those that you wish to select (to move around, etc).
enum CategoryBitMask: Int {
case categoryToSelect = 2 // 010
case otherCategoryToSelect = 4 // 100
// you can add more bit masks below . . .
}
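For example, assigning that category to a node you want to be movable might look like this (the node itself is a placeholder; use your own):

```swift
import SceneKit

// Hypothetical setup: mark a node (and any child nodes) as selectable/movable.
let movableNode = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0))
movableNode.categoryBitMask = CategoryBitMask.categoryToSelect.rawValue
movableNode.enumerateChildNodes { child, _ in
    child.categoryBitMask = CategoryBitMask.categoryToSelect.rawValue
}
```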
Then, I added a UILongPressGestureRecognizer in viewDidLoad().
let longPressRecognizer = UILongPressGestureRecognizer(target: self, action: #selector(longPressed))
self.sceneView.addGestureRecognizer(longPressRecognizer)
The following is the UILongPressGestureRecognizer I used to detect a long press, which initiates the dragging of the node.
First, obtain the touch location from the recognizerView
@objc func longPressed(recognizer: UILongPressGestureRecognizer) {
guard let recognizerView = recognizer.view as? ARSCNView else { return }
let touch = recognizer.location(in: recognizerView)
The following code runs once when a long press is detected.
Here, we perform a hitTest to select the node that has been touched. Note that here, we specify a .categoryBitMask option to select only nodes of the following category: CategoryBitMask.categoryToSelect
// Runs once when long press is detected.
if recognizer.state == .began {
// perform a hitTest
let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: CategoryBitMask.categoryToSelect.rawValue])
guard let hitNode = hitTestResult.first?.node else { return }
// Set hitNode as selected
self.selectedNode = hitNode
The following code will run periodically until the user releases the finger.
Here we perform another hitTest to obtain the plane you want the node to move along.
// Runs periodically after .began
} else if recognizer.state == .changed {
// make sure a node has been selected from .began
guard let hitNode = self.selectedNode else { return }
// perform a hitTest to obtain the plane
let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
guard let hitPlane = hitTestPlane.first else { return }
hitNode.position = SCNVector3(hitPlane.worldTransform.columns.3.x,
hitNode.position.y,
hitPlane.worldTransform.columns.3.z)
Make sure you deselect the node when the finger is removed from the screen.
// Runs when finger is removed from screen. Only once.
} else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed {
guard self.selectedNode != nil else { return }
// Undo selection
self.selectedNode = nil
}
}

Kind of a late answer, but I know I had some problems solving this as well. Eventually I figured out a way to do it by performing two separate hit tests whenever my gesture recognizer is called.
First, I perform a hit test for my 3D object to detect whether I'm currently pressing an object or not (you would get results for pressing feature points, planes etc. if you don't specify any options). I do this by using the .categoryBitMask value of SCNHitTestOption.
Keep in mind that you have to assign the correct .categoryBitMask value to your object node and all its child nodes beforehand in order for the hit test to work. I declare an enum I can use for that:
enum BodyType: Int {
case ObjectModel = 2
}
As becomes apparent by the answer to my question about .categoryBitMask values I posted here, it is important to consider what values you assign your bitmask.
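Since these are bit masks, sticking to powers of two keeps categories distinct and lets you combine them with bitwise operators. A quick sketch (plain Swift; the second category name is made up):

```swift
let selectable = 1 << 1          // 0b010, same value as BodyType.ObjectModel
let decorative = 1 << 2          // 0b100, a hypothetical second category
let both = selectable | decorative   // 0b110: a node belonging to both categories
// A hit test configured with `selectable` matches any node whose
// categoryBitMask has that bit set, i.e. both & selectable != 0.
```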
Below is the code I use in conjunction with a UILongPressGestureRecognizer in order to select the object I'm currently pressing:
guard let recognizerView = recognizer.view as? ARSCNView else { return }
let touch = recognizer.location(in: recognizerView)
let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
guard let modelNodeHit = hitTestResult.first?.node else { return }
After that I perform a second hit test in order to find the plane I'm pressing on.
You can use the type .existingPlaneUsingExtent if you don't want to move your object further than the edge of a plane, or .existingPlane if you want to move your object indefinitely along a detected plane surface.
var planeHit: ARHitTestResult!
if recognizer.state == .changed {
let hitTestPlane = self.sceneView.hitTest(touch, types: .existingPlane)
guard hitTestPlane.first != nil else { return }
planeHit = hitTestPlane.first!
modelNodeHit.position = SCNVector3(planeHit.worldTransform.columns.3.x, modelNodeHit.position.y, planeHit.worldTransform.columns.3.z)
} else if recognizer.state == .ended || recognizer.state == .cancelled || recognizer.state == .failed {
guard planeHit != nil else { return }
modelNodeHit.position = SCNVector3(planeHit.worldTransform.columns.3.x, modelNodeHit.position.y, planeHit.worldTransform.columns.3.z)
}
I made a GitHub repo when I tried this out while also experimenting with ARAnchors. You can check it out if you want to see my method in practice, but I did not make it with the intention of anyone else using it so it's quite unfinished. Also, the development branch should support some functionality for an object with more childNodes.
EDIT:
For clarification: if you want to use a .scn object instead of a regular geometry, you need to iterate through all the child nodes of the object when creating it, setting the bit mask of each child like this:
let objectModelScene = SCNScene(named: "art.scnassets/object/object.scn")!
let objectNode = objectModelScene.rootNode.childNode(withName: "theNameOfTheParentNodeOfTheObject", recursively: true)!
objectNode.categoryBitMask = BodyType.ObjectModel.rawValue
objectNode.enumerateChildNodes { (node, _) in
node.categoryBitMask = BodyType.ObjectModel.rawValue
}
Then, in the gesture recognizer after you get a hitTestResult
let hitTestResult = self.sceneView.hitTest(touch, options: [SCNHitTestOption.categoryBitMask: BodyType.ObjectModel.rawValue])
you need to find the parent node, since otherwise you might be moving just the individual child node you pressed. Do this by searching recursively upwards through the node tree from the node you just found.
guard let objectNode = getParentNodeOf(hitTestResult.first?.node) else { return }
where you declare the getParentNodeOf method as follows:
func getParentNodeOf(_ nodeFound: SCNNode?) -> SCNNode? {
if let node = nodeFound {
if node.name == "theNameOfTheParentNodeOfTheObject" {
return node
} else if let parent = node.parent {
return getParentNodeOf(parent)
}
}
return nil
}
Then you are free to perform any operation on the objectNode, as it will be the parent node of your .scn object, meaning that any transformation applied to it will also be applied to the child nodes.
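As a small illustration of that last point (the values here are arbitrary), any transform applied to the parent carries all the children along:

```swift
// Hypothetical example: transform the parent, the .scn's child nodes follow.
objectNode.position = SCNVector3(0, 0, -0.5)   // moves the whole model
objectNode.eulerAngles.y = Float.pi / 4        // rotates every child with it
```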

As @ZAY pointed out, Apple made it quite confusing; moreover, they used ARRaycastQuery, which only works on iOS 13 and above. Therefore, I arrived at a solution that uses the current camera orientation to calculate the translation on a plane in world coordinates.
First, using this snippet we are able to get the current heading the user is facing, using quaternions.
private func getOrientationYRadians()-> Float {
guard let cameraNode = arSceneView.pointOfView else { return 0 }
//Get camera orientation expressed as a quaternion
let q = cameraNode.orientation
//Calculate rotation around y-axis (heading) from quaternion and convert angle so that
//0 is along -z-axis (forward in SceneKit) and positive angle is clockwise rotation.
let alpha = Float.pi - atan2f( (2*q.y*q.w)-(2*q.x*q.z), 1-(2*pow(q.y,2))-(2*pow(q.z,2)) )
// here I convert the angle to be 0 when the user is facing +z-axis
return alpha <= Float.pi ? abs(alpha - (Float.pi)) : (3*Float.pi) - alpha
}
Handle Pan Method
private var lastPanLocation2d: CGPoint!
@objc func handlePan(panGesture: UIPanGestureRecognizer) {
let state = panGesture.state
guard state != .failed && state != .cancelled else {
return
}
let touchLocation = panGesture.location(in: self)
if (state == .began) {
lastPanLocation2d = touchLocation
}
// 200 here is a random value that controls the smoothness of the dragging effect
let deltaX = Float(touchLocation.x - lastPanLocation2d!.x)/200
let deltaY = Float(touchLocation.y - lastPanLocation2d!.y)/200
let currentYOrientationRadians = getOrientationYRadians()
// convert delta in the 2D dimensions to the 3d world space using the current rotation
let deltaX3D = (deltaY*sin(currentYOrientationRadians))+(deltaX*cos(currentYOrientationRadians))
let deltaY3D = (deltaY*cos(currentYOrientationRadians))+(-deltaX*sin(currentYOrientationRadians))
// assuming that the node is currently positioned on a plane so the y-translation will be zero
let translation = SCNVector3Make(deltaX3D, 0.0, deltaY3D)
nodeToDrag.localTranslate(by: translation)
lastPanLocation2d = touchLocation
}


How to move multiple nodes in ARSCNView

My goal is to be able to move/rotate AR objects using a gesture recognizer. While I got it working for a single AR cube, I cannot get it to work for multiple cubes/objects.
Main part of the viewDidLoad:
let boxNode1 = addCube(position: SCNVector3(0,0,0), name: "box")
let boxNode2 = addCube(position: SCNVector3(0,-0.1,-0.1), name: "box2")
sceneView.scene.rootNode.addChildNode(boxNode1)
sceneView.scene.rootNode.addChildNode(boxNode2)
var nodes: [SCNNode] = getMyNodes()
var parentNode = SCNNode()
parentNode.name = "motherNode"
for node in nodes {
parentNode.addChildNode(node)
}
sceneView.scene.rootNode.addChildNode(parentNode)
// sceneView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(ViewController.handleTap(_:))))
sceneView.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(ViewController.handleMove(_:))))
sceneView.addGestureRecognizer(UIRotationGestureRecognizer(target: self, action: #selector(ViewController.handleRotate(_:))))
let configuration = ARWorldTrackingConfiguration()
sceneView.session.run(configuration)
Part of the pan gesture (it works for each cube individually, but does not work if I change it to nodeHit.parent!; the parent node is detected correctly, but no change is made to it):
@objc func handleMove(_ gesture: UIPanGestureRecognizer) {
//1. Get The Current Touch Point
let location = gesture.location(in: self.sceneView)
//2. Get The Next Feature Point Etc
guard let nodeHitTest = self.sceneView.hitTest(location, options: nil).first else { print("no node"); return }
var nodeHit = nodeHitTest.node
// nodeHit = nodeHit.parent!
//3. Convert To World Coordinates
let worldTransform = nodeHitTest.simdWorldCoordinates
//4. Apply To The Node
nodeHit.position = SCNVector3(worldTransform.x, worldTransform.y, 0)
}
What I want to do is to be able to move both cubes at once (so they all undergo the same translation). It sounds possible from this post:
How to join multiple nodes to one node in iOS scene
However, at the same time this post also says I cannot do that, for a reason I do not understand yet:
SceneKit nodes aren't changing position with scene's root node
In the worst case I think it is possible to manually apply the transformation to every child node; however, applying the translation to one parent node seems to be a much more elegant way of doing this.
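For reference, a sketch of that parent-node approach, reusing names from the snippets above (boxNode1, boxNode2, and worldTransform are assumed from the question's code):

```swift
// Group the cubes under one parent once, e.g. in viewDidLoad:
let parentNode = SCNNode()
parentNode.name = "motherNode"
[boxNode1, boxNode2].forEach { parentNode.addChildNode($0) }
sceneView.scene.rootNode.addChildNode(parentNode)

// Then, in the pan handler, translate only the parent; both cubes
// move together and keep their relative offsets automatically:
parentNode.position = SCNVector3(worldTransform.x, worldTransform.y, 0)
```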
Edit: I tried this way and can get both nodes moving; however, sometimes the position is reversed (one cube ends up on top of the other when it should not):
@objc func handleMove(_ gesture: UIPanGestureRecognizer) {
//1. Get The Current Touch Point
let location = gesture.location(in: self.sceneView)
//2. Get The Next Feature Point Etc
guard let nodeHitTest = self.sceneView.hitTest(location, options: nil).first else { print("no node"); return }
// var nodeHit = nodeHitTest.node
let nodeHit = nodeHitTest.node
let original_x = nodeHitTest.node.position.x
let original_y = nodeHitTest.node.position.y
print(original_x, original_y)
// let nodeHit = sceneView.scene.rootNode.childNode(withName: "motherNode2", recursively: true)
//3. Convert To World Coordinates
let worldTransform = nodeHitTest.simdWorldCoordinates
//4. Apply To The Node
//// nodeHit.position = SCNVector3(worldTransform.x, worldTransform.y, 0)
nodeHit.position = SCNVector3(worldTransform.x, worldTransform.y, 0)
for node in nodeHit.parent!.childNodes {
if node.name != nil {
if node.name != nodeHit.name {
let old_x = node.position.x
let old_y = node.position.y
print(old_x, old_y)
node.position = SCNVector3((nodeHit.simdPosition.x + original_x - old_x), (nodeHit.simdPosition.y + original_y - old_y), 0)
}
}
}
Any ideas?
I swapped the plus and minus signs and now it is working correctly. Here is the code:
/// - Parameter gesture: UIPanGestureRecognizer
@objc func handleMove(_ gesture: UIPanGestureRecognizer) {
//1. Get The Current Touch Point
let location = gesture.location(in: self.sceneView)
//2. Get The Next Feature Point Etc
guard let nodeHitTest = self.sceneView.hitTest(location, options: nil).first else { print("no node"); return }
// var nodeHit = nodeHitTest.node
let nodeHit = nodeHitTest.node
let original_x = nodeHitTest.node.position.x
let original_y = nodeHitTest.node.position.y
// let nodeHit = sceneView.scene.rootNode.childNode(withName: "motherNode2", recursively: true)
//3. Convert To World Coordinates
let worldTransform = nodeHitTest.simdWorldCoordinates
//4. Apply To The Node
//// nodeHit.position = SCNVector3(worldTransform.x, worldTransform.y, 0)
nodeHit.position = SCNVector3(worldTransform.x, worldTransform.y, 0)
for node in nodeHit.parent!.childNodes {
if node.name != nil {
if node.name != nodeHit.name {
let old_x = node.position.x
let old_y = node.position.y
node.position = SCNVector3((nodeHit.simdPosition.x - original_x + old_x), (nodeHit.simdPosition.y - original_y + old_y), 0)
}
}
}
The idea is that even without grouping everything into a new node, I can access all the nodes using nodeHit.parent!.childNodes. This also contains other nodes created by default, such as the camera or light source, so I added the condition to make sure it only selects the nodes with the names I have created. Ideally you would just use some built-in method to move all the nodes in the scene, but I cannot find such a method.
So my method first keeps track of the old position before moving. Then, if a node is the one hit by the hit test, it is moved as before; if it is not, it is repositioned by the difference between the two nodes. The relative position difference should stay the same regardless of where you move the nodes.
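A quick arithmetic check of that claim (plain Swift, made-up numbers): applying the update rule from the code above, new = moved - original + old, to the second node preserves the offset between the two nodes.

```swift
let originalX: Float = 1.0   // hit node's x before the move
let oldX: Float = 3.0        // other node's x before the move
let movedX: Float = 5.0      // hit node's x after the move (from the hit test)
let otherX = movedX - originalX + oldX   // the update rule used above
// The relative offset between the nodes is unchanged: 7 - 5 == 3 - 1
```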

ARKit : Handle tap to show / hide a node

I am new to ARKit, and I am trying an example that creates an SCNBox at the tap location. What I am trying to do: on the initial touch I create a box, and on a second tap on the created box it should be removed from the scene. I am doing the hit test, but it keeps on adding boxes. I know this is a simple task, but I am unable to do it.
@objc func handleTap(sender: UITapGestureRecognizer) {
print("handle tap")
guard let _ = sceneView.session.currentFrame
else { return }
guard let scnView = sceneView else { return }
let touchLocation = sender.location(in: scnView)
let hitTestResult = scnView.hitTest(touchLocation, types: [ .featurePoint])
guard let pointOfView = sceneView.pointOfView else {return}
print("point \(pointOfView.name)")
if hitTestResult.count > 0 {
print("Hit")
if let _ = pointOfView as? ARBox {
print("Box Available")
}
else {
print("Adding box")
let transform = hitTestResult.first?.worldTransform.columns.3
let xPosition = transform?.x
let yPosition = transform?.y
let zPosition = transform?.z
let position = SCNVector3(xPosition!,yPosition!,zPosition!)
basketCount = basketCount + 1
let newBasket = ARBox(position: position)
newBasket.name = "basket\(basketCount)"
self.sceneView.scene.rootNode.addChildNode(newBasket)
boxNodes.append(newBasket)
}
}
}
pointOfView of a sceneView is the node the scene is rendered from; in a generic setup it is a camera node (in ARSCNView, the node that tracks the device camera), not one of your content nodes. I don't think you should cast it to ARBox or any other SCNNode subclass.
What you probably can try is the logic below, where hitResults are the results of your hit test. Note that for the results to carry a node, they need to come from SceneKit's hitTest(_:options:), not from the ARKit feature-point hit test:
if hitResults.count > 0 {
    if let node = hitResults.first?.node as? ARBox {
        node.removeFromParentNode()
        // or make the node transparent if you don't want to remove it
    } else {
        // add a new box
    }
}

Swift: can't detect hit test on SCNNode? See if vector3 is contained in a node?

I am trying to just detect a tap on an SCNNode instantiated in my AR scene. It does not seem to work like SceneKit hit test results here, and I don't have much experience with scene kit.
I just want to detect if the tapped point is contained within any node in the scene, thus detecting a tap on the object. What I have tried from other answers:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    let results = sceneView.hitTest(touch.location(in: sceneView), types: [ARHitTestResult.ResultType.featurePoint])
    guard let hitFeature = results.last else { return }
    let hitTransform = SCNMatrix4.init(hitFeature.worldTransform)
    let hitPosition = SCNVector3Make(hitTransform.m41, hitTransform.m42, hitTransform.m43)
    if theDude.node.boundingBoxContains(point: hitPosition) {
        // the print I put here never fires
    }
}
Nothing inside that last if statement ever runs. boundingBoxContains comes from:
extension SCNNode {
    func boundingBoxContains(point: SCNVector3, in node: SCNNode) -> Bool {
        let localPoint = self.convertPosition(point, from: node)
        return boundingBoxContains(point: localPoint)
    }
    func boundingBoxContains(point: SCNVector3) -> Bool {
        return BoundingBox(self.boundingBox).contains(point)
    }
}

struct BoundingBox {
    let min: SCNVector3
    let max: SCNVector3

    init(_ boundTuple: (min: SCNVector3, max: SCNVector3)) {
        min = boundTuple.min
        max = boundTuple.max
    }

    func contains(_ point: SCNVector3) -> Bool {
        return min.x <= point.x &&
            min.y <= point.y &&
            min.z <= point.z &&
            max.x > point.x &&
            max.y > point.y &&
            max.z > point.z
    }
}
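As an aside, the containment math in that BoundingBox struct is sound on its own. Here it is replicated framework-free (a local `Vector3` stands in for `SCNVector3`; values are made up), which suggests the problem lies in the hit test rather than the box test:

```swift
// Framework-free stand-ins for SCNVector3 and the BoundingBox struct above.
struct Vector3 {
    var x: Float, y: Float, z: Float
}

struct Box {
    let min: Vector3
    let max: Vector3

    // Same half-open containment test as boundingBoxContains uses.
    func contains(_ point: Vector3) -> Bool {
        return min.x <= point.x && min.y <= point.y && min.z <= point.z &&
               max.x > point.x && max.y > point.y && max.z > point.z
    }
}
```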
And the ARHitTestResult doesn't contain nodes. What can I do to detect a tap on a node in an ARScene?
When you're working with ARSCNView, there are two kinds of hit testing you can do, and they use entirely separate code paths.
Use the ARKit method hitTest(_:types:) if you want to hit test against the real world (or at least, against ARKit's estimate of where real-world features are). This returns ARHitTestResult objects, which tell you about real-world features like detected planes. In other words, use this method if you want to find a real object that anyone can see and touch without a device — like the table you're pointing your device at.
Use the SceneKit method hitTest(_:options:) if you want to hit test against SceneKit content; that is, to search for virtual 3D objects you've placed in the AR scene. This returns SCNHitTestResult objects, which tell you about things like nodes and geometry. Use this method if you want to find SceneKit nodes, the model (geometry) in a node, or the specific point on the geometry at a tap location.
In both cases the 3D position found by a hit test is the same, because ARSCNView makes sure that the virtual "world coordinates" space matches real-world space.
It looks like you're using the former but expecting the latter. When you do a SceneKit hit test, you get results if and only if there's a node under the hit test point — you don't need any kind of bounding box test because it's already being done for you.
Add a UITapGestureRecognizer
let tapRec = UITapGestureRecognizer(target: self, action: #selector(ViewController.handleTap(rec:)))
Inside your handleTap method on state .ended:
@objc func handleTap(rec: UITapGestureRecognizer) {
    if rec.state == .ended {
        let location: CGPoint = rec.location(in: sceneView)
        let hits = self.sceneView.hitTest(location, options: nil)
        if let tappedNode = hits.first?.node {
            // do something with the tapped object
        }
    }
}

Trying to make platforms that I can jump through from underneath but land on top of. having trouble fine tuning the logic

My goal is to set up all my platforms in the .sks file for easier design of my levels.
This is declared at the top of GameScene.swift, before didMove:
private var JumpThroughPlatformObject = SKSpriteNode()
and this is in didMove:
if let JumpThroughPlatformObjectNode = self.childNode(withName: "//jumpThroughPlatform1") as? SKSpriteNode {
    JumpThroughPlatformObject = JumpThroughPlatformObjectNode
}
I reference the platform to get its height from the .sks; since all my platforms will be the same height, I only need to get it from one.
Below is what I'm trying to use in my update method to turn off collisions until my player is totally above the platform. The main issue with only checking whether my player's velocity is greater than zero is this: at the peak of a jump the velocity slows to zero, and if that happens while the player is inside a platform, he either instantly springs up to the top of the platform or gets launched downward.
I don't want my platforms to have to be 1-pixel-high lines. I also need the player to have a full collision box, since he will be interacting with other types of environments. This leads me to believe that I somehow need to register only the top of the platform as a collision box, not the entire platform.
This if statement is supposed to take the y position of a platform and add half of its height to it; since the y position is based on the center of the sprite, I figured this would put the platform's collision at its top boundary.
I did the same for the player but in reverse, putting the player's collision only on the bottom of his border. But it's not working perfectly and I'm not sure why at this point.
if (JumpThroughPlatformObject.position.y + (JumpThroughPlatformObject.size.height / 2)) > (player.position.y - (player.size.height / 2))
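That boundary condition can be exercised in isolation (plain `Double`s stand in for the CGFloat expressions; the values are made up):

```swift
// True when the player's bottom edge is still below the platform's top edge,
// i.e. platform collisions should stay disabled while jumping up through it.
func playerStillBelowPlatformTop(platformY: Double, platformHeight: Double,
                                 playerY: Double, playerHeight: Double) -> Bool {
    let platformTop = platformY + platformHeight / 2   // y is the sprite's center
    let playerBottom = playerY - playerHeight / 2
    return platformTop > playerBottom
}
```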
The function below gives me three main issues:
1. My player's jump always has dy = 80. If I jump up to a platform at position.y = 90, the peak of the jump stops in the middle of the platform, but the player teleports to the top of it instead of continuing to fall to the ground.
2. The left and right edges of the platforms still fully collide with the player while I'm falling.
3. If my player is standing on a platform and there is another one directly above, the player can't jump through it.
let zero: CGFloat = 0
if let body = player.physicsBody {
    let dy = player.physicsBody?.velocity.dy
    // when I jump, dy is greater than zero; otherwise I'm falling
    if dy! >= zero {
        if (JumpThroughPlatformObject.position.y + (JumpThroughPlatformObject.size.height / 2)) > (player.position.y - (player.size.height / 2)) {
            print("platform y: \(JumpThroughPlatformObject.position.y)")
            print("player position: \(player.position.y)")
            // Prevent collisions while the hero is jumping
            body.collisionBitMask = CollisionTypes.saw.rawValue | CollisionTypes.ground.rawValue
        }
    } else {
        // Allow collisions while the hero is falling
        body.collisionBitMask = CollisionTypes.platform.rawValue | CollisionTypes.ground.rawValue | CollisionTypes.saw.rawValue
    }
}
Any advice would be greatly appreciated. I've been tearing my hair out for a couple days now.
EDIT in didBegin and didEnd:
func didBegin(_ contact: SKPhysicsContact) {
    if let body = player.physicsBody {
        let dy = player.physicsBody?.velocity.dy
        let platform = JumpThroughPlatformObject
        let zero: CGFloat = 0
        if contact.bodyA.node == player {
            // playerCollided(with: contact.bodyB.node!)
            if (dy! > zero || body.node!.intersects(platform)) && ((body.node?.position.y)! - player.size.height / 2 < platform.position.y + platform.size.height / 2) {
                body.collisionBitMask &= ~CollisionTypes.platform.rawValue
            }
        } else if contact.bodyB.node == player {
            // playerCollided(with: contact.bodyA.node!)
            isPlayerOnGround = true
            if (dy! > zero || body.node!.intersects(platform)) && ((body.node?.position.y)! - player.size.height / 2 < platform.position.y + platform.size.height / 2) {
                body.collisionBitMask &= ~CollisionTypes.platform.rawValue
            }
        }
    }
}
func didEnd(_ contact: SKPhysicsContact) {
    if let body = player.physicsBody {
        // let dy = player.physicsBody?.velocity.dy
        // let platform = JumpThroughPlatformObject
        if contact.bodyA.node == player {
            body.collisionBitMask |= CollisionTypes.platform.rawValue
        } else if contact.bodyB.node == player {
            body.collisionBitMask |= CollisionTypes.platform.rawValue
        }
    }
}
Adding what I did, the player can no longer jump through the platform.
Here is a link to the project that I made for macOS and iOS targets:
https://github.com/fluidityt/JumpUnderPlatform
Basically, this all has to do with:
1. Detecting collision with a platform
2. Determining if your player is under the platform
3. Allowing your player to go through the platform (and subsequently land on it)
--
SK physics makes this a little complicated:
4. On collision detection, your player's .position.y or .velocity.dy may already have changed to a "false" state with respect to check #2 above (meaning #3 would never happen); your player will also bounce off the platform on first contact.
5. There is no "automatic" way to determine when your player has finished passing through the platform (and can thus land on it again).
--
So to get everything working, a bit of creativity and ingenuity must be used!
1: Detecting collision of a platform:
Tackling #1 is the simplest part: we just need to use the built-in didBegin(_:).
We are going to be relying heavily on the 3 big bitMasks, contact, category, and collision:
(fyi, I don't like using enums and bitmath for physics because I'm a rebel idiot):
struct BitMasks {
    static let playerCategory = UInt32(2)
    static let jupCategory = UInt32(4) // JUP = JumpUnderPlatform
}

func didBegin(_ contact: SKPhysicsContact) {
    // Crappy way to do "bit-math":
    let contactedSum = contact.bodyA.categoryBitMask + contact.bodyB.categoryBitMask
    switch contactedSum {
    case BitMasks.jupCategory + BitMasks.playerCategory:
        // ...
    default: ()
    }
}
--
Now, you said that you wanted to use the SKSEditor, so I have accommodated you:
// Do all the fancy stuff you want here...
class JumpUnderPlatform: SKSpriteNode {
    // If you see this on a crash, then WHY DOES JUP NOT HAVE A PB??
    var pb: SKPhysicsBody { return self.physicsBody! }

    // NOTE: I could not properly configure any SKNode properties here..
    // it's like they all get RESET if you put them in here...
    required init?(coder aDecoder: NSCoder) { super.init(coder: aDecoder) }
}
--
Now for the player:
class Player: SKSpriteNode {
    // If you see this on a crash, then WHY DOES PLAYER NOT HAVE A PB??
    var pb: SKPhysicsBody { return self.physicsBody! }

    static func makePlayer() -> Player {
        let newPlayer = Player(color: .blue, size: CGSize(width: 50, height: 50))
        let newPB = SKPhysicsBody(rectangleOf: newPlayer.size)
        newPB.categoryBitMask = BitMasks.playerCategory
        newPB.usesPreciseCollisionDetection = true
        newPlayer.physicsBody = newPB
        newPlayer.position.y -= 200 // For demo purposes.
        return newPlayer
    }
}
2. (and dealing with #4): Determining if under platform on contact:
There are many ways to do this, but I chose the player.pb.velocity.dy approach mentioned by KOD to keep track of the player's state: if your dy is over 0 you are jumping (under a platform); if not, you are either standing still or falling (and need to make contact with the platform and stick to it).
To accomplish this we have to get a bit more technical, because again, the physics system and the way SK works in its loop doesn't always mesh 100% with how we think it should work.
Basically, I had to make an initialDY property for Player that is constantly updated each frame in update
This initialDY will give us the correct data that we need for the first contact with the platform, allowing us to tell us to change the collision mask, and also to reset our player's CURRENT dy to the initial dy (so the player doesn't bounce off).
3. (and dealing with #5): Allow player to go through platform
To go through the platform, we need to play around with the collisionBitMasks. I chose to make the player's collision mask = the player's categoryMask, which is probably not the right way to do it, but it works for this demo.
You end up with magic like this in didBegin:
// Check if jumping; if not, then just land on platform normally.
guard player.initialDY > 0 else { return }
// Gives us the ability to pass through the platform!
player.pb.collisionBitMask = BitMasks.playerCategory
Now, dealing with #5 is going to require us to add another piece of state to our player class.. we need to temporarily store the contacted platform so we can check if the player has successfully finished passing through the platform (so we can reset the collision mask)
Then we just check in didFinishUpdate if the player's frame is above that platform, and if so, we reset the masks.
Here are all of the files, and again a link to the GitHub repo:
https://github.com/fluidityt/JumpUnderPlatform
Player.swift:
class Player: SKSpriteNode {
    // If you see this on a crash, then WHY DOES PLAYER NOT HAVE A PB??
    var pb: SKPhysicsBody { return self.physicsBody! }

    // This is set when we detect contact with a platform, but are underneath it (jumping up)
    weak var platformToPassThrough: JumpUnderPlatform?

    // For use inside of GameScene's didBegin(contact:) (because the current DY is altered by the time we need it)
    var initialDY = CGFloat(0)
}
// MARK: - Funkys:
extension Player {
    static func makePlayer() -> Player {
        let newPlayer = Player(color: .blue, size: CGSize(width: 50, height: 50))
        let newPB = SKPhysicsBody(rectangleOf: newPlayer.size)
        newPB.categoryBitMask = BitMasks.playerCategory
        newPB.usesPreciseCollisionDetection = true
        newPlayer.physicsBody = newPB
        newPlayer.position.y -= 200 // For demo purposes.
        return newPlayer
    }

    func isAbovePlatform() -> Bool {
        guard let platform = platformToPassThrough else { fatalError("wtf is the platform!") }
        return frame.minY > platform.frame.maxY
    }

    func landOnPlatform() {
        print("resetting stuff!")
        platformToPassThrough = nil
        pb.collisionBitMask = BitMasks.jupCategory
    }
}
// MARK: - Player GameLoop:
extension Player {
    func _update() {
        // We have to keep track of this for proper detection of when to pass through the platform
        initialDY = pb.velocity.dy
    }

    func _didFinishUpdate() {
        // Check if we need to reset our collision mask (allow us to land on platform again)
        if platformToPassThrough != nil {
            if isAbovePlatform() { landOnPlatform() }
        }
    }
}
JumpUnderPlatform & BitMasks.swift (respectively:)
// Do all the fancy stuff you want here...
class JumpUnderPlatform: SKSpriteNode {
    var pb: SKPhysicsBody { return self.physicsBody! } // If you see this on a crash, then WHY DOES JUP NOT HAVE A PB??
    required init?(coder aDecoder: NSCoder) { super.init(coder: aDecoder) }
}

struct BitMasks {
    static let playerCategory = UInt32(2)
    static let jupCategory = UInt32(4)
}
GameScene.swift:
-
MAKE SURE YOU HAVE THE TWO NODES IN YOUR SKS EDITOR:
-
// MARK: - Props:
class GameScene: SKScene, SKPhysicsContactDelegate {
    // Because I hate crashes related to spelling errors.
    let names = (jup: "jup", resetLabel: "resetLabel")
    let player = Player.makePlayer()
}
// MARK: - Physics handling:
extension GameScene {
    private func findJup(contact: SKPhysicsContact) -> JumpUnderPlatform? {
        guard let nodeA = contact.bodyA.node, let nodeB = contact.bodyB.node else { fatalError("how did this happen!!??") }
        if nodeA.name == names.jup { return (nodeA as! JumpUnderPlatform) }
        else if nodeB.name == names.jup { return (nodeB as! JumpUnderPlatform) }
        else { return nil }
    }

    // Player is 2, platform is 4:
    private func doContactPlayer_X_Jup(platform: JumpUnderPlatform) {
        // Check if jumping; if not, then just land on platform normally.
        guard player.initialDY > 0 else { return }

        // Gives us the ability to pass through the platform!
        player.pb.collisionBitMask = BitMasks.playerCategory

        // Will push the player through the platform (instead of bouncing off) on first hit
        if player.platformToPassThrough == nil { player.pb.velocity.dy = player.initialDY }

        player.platformToPassThrough = platform
    }

    func _didBegin(_ contact: SKPhysicsContact) {
        // Crappy way to do bit-math:
        let contactedSum = contact.bodyA.categoryBitMask + contact.bodyB.categoryBitMask
        switch contactedSum {
        case BitMasks.jupCategory + BitMasks.playerCategory:
            guard let platform = findJup(contact: contact) else { fatalError("must be platform!") }
            doContactPlayer_X_Jup(platform: platform)
        // Put your other contact cases here...
        // case BitMasks.xx + BitMasks.yy:
        default: ()
        }
    }
}
// MARK: - Game loop:
extension GameScene {
    // Scene setup:
    override func didMove(to view: SKView) {
        physicsWorld.contactDelegate = self
        physicsBody = SKPhysicsBody(edgeLoopFrom: frame)
        addChild(player)
    }

    // Touch handling (convert to touchesBegan for iOS):
    override func mouseDown(with event: NSEvent) {
        // Make player jump:
        player.pb.applyImpulse(CGVector(dx: 0, dy: 50))

        // Reset player on label click (from sks file):
        if nodes(at: event.location(in: self)).first?.name == names.resetLabel {
            player.position.y = frame.minY + player.size.width/2 + CGFloat(1)
        }
    }

    override func update(_ currentTime: TimeInterval) {
        player._update()
    }

    func didBegin(_ contact: SKPhysicsContact) {
        self._didBegin(contact)
    }

    override func didFinishUpdate() {
        player._didFinishUpdate()
    }
}
I HOPE THIS HELPS SOME!
You just need a condition that lets you know whether you are inside a body. I also cleaned up your code to avoid accidentally setting the wrong categories:
if let body = player.physicsBody {
    let dy = body.velocity.dy
    // When I am jumping, or I am inside a platform (but not exactly on its top edge), do not register collisions
    if dy > zero || (player.intersects(platform) && player.position.y - player.size.height / 2 != platform.position.y + platform.size.height / 2) {
        body.collisionBitMask &= ~CollisionTypes.platform.rawValue
    } else {
        // Allow collisions if the hero is falling
        body.collisionBitMask |= CollisionTypes.platform.rawValue
    }
}
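Since the `&= ~` / `|=` pattern is easy to get backwards, here is a framework-free check of the mask arithmetic (the bit values are hypothetical, mirroring a CollisionTypes-style set of categories):

```swift
// Hypothetical category bits, mirroring a CollisionTypes-style enum.
let sawBit: UInt32 = 1 << 0
let groundBit: UInt32 = 1 << 1
let platformBit: UInt32 = 1 << 2

// Start with everything collidable.
let everything: UInt32 = sawBit | groundBit | platformBit

// Jumping up through a platform: clear just the platform bit...
let whileJumping = everything & ~platformBit

// ...and once falling again: set it back.
let whileFalling = whileJumping | platformBit
```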
Well, the answers above work well, but they are very complicated.
The simple answer, if you are in Unity, is to use the Platform Effector 2D component, which applies various "platform" behaviors such as one-way collisions and removal of side friction/bounce.
Check out Unity's official tutorial for more details.

Implement Pan Gesture Interface Using ReactiveCocoa

Let's say I have a view that I want to be draggable. Using just UIKit I would implement that with a variation of the following logic.
var viewStartY: CGFloat = 0
var panStartY: CGFloat = 0

func handlePan(recognizer: UIPanGestureRecognizer) {
    let location = recognizer.locationInView(someView)
    if recognizer.state == .Began {
        viewStartY = someView.frame.origin.y
        panStartY = location.y
    }
    let delta = location.y - panStartY
    someView.frame.origin.y = viewStartY + delta
}
Now I was wondering if there was a way to deal with values like viewStartY that have to be stored when the gesture begins while avoiding side effects. Is there a way to continuously pass them through the pipeline?
FlatMap might work for you. Take a stream of beginning gestures, and flatMap a stream of changing gestures over it. This just gives you a stream of changes again, but you can capture the startValue inside the flatMapped function.
Pseudo-code that might explain the technique:
let gestures = recognizer.stream // Depends on your framework

// Filter gesture events by state
let gesture_begans = gestures.filter { $0.state == .Began }
let gesture_changeds = gestures.filter { $0.state == .Changed }

// Take a stream of beginning gestures...
let gesture_deltas = gesture_begans.flatMap { began in
    let startPoint = began.locationInView(outerView)
    // ... and flatMap into a stream of deltas
    return gesture_changeds.map { changed in
        let changedPoint = changed.locationInView(outerView)
        return changedPoint.y - startPoint.y
    }
}

gesture_deltas.onValue { y in innerView.frame.origin.y = y }
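To make the capture-inside-flatMap technique concrete without depending on ReactiveCocoa, here is a minimal push-based `Stream` type driven by fake gesture events (illustrative only: real RAC signals also handle disposal and switching to the latest inner signal):

```swift
// A minimal push-based stream, standing in for a ReactiveCocoa signal.
final class Stream<T> {
    private var observers: [(T) -> Void] = []

    func onValue(_ observer: @escaping (T) -> Void) { observers.append(observer) }
    func send(_ value: T) { for observer in observers { observer(value) } }

    func filter(_ predicate: @escaping (T) -> Bool) -> Stream<T> {
        let out = Stream<T>()
        onValue { if predicate($0) { out.send($0) } }
        return out
    }

    func map<U>(_ transform: @escaping (T) -> U) -> Stream<U> {
        let out = Stream<U>()
        onValue { out.send(transform($0)) }
        return out
    }

    // Every value starts an inner stream; inner values are merged into the output.
    func flatMap<U>(_ transform: @escaping (T) -> Stream<U>) -> Stream<U> {
        let out = Stream<U>()
        onValue { transform($0).onValue { out.send($0) } }
        return out
    }
}

// Gesture events reduced to just what the technique needs.
enum GestureState { case began, changed }
struct Gesture { let state: GestureState; let y: Double }

let gestures = Stream<Gesture>()
let begans = gestures.filter { $0.state == .began }
let changeds = gestures.filter { $0.state == .changed }

// The start Y lives only inside the flatMapped closure -- no external state.
let deltas = begans.flatMap { began in
    changeds.map { changed in changed.y - began.y }
}
```

Each .began event starts a fresh inner stream whose closure has captured that gesture's start Y, so no mutable panStartY-style state lives outside the pipeline.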

Resources