How to use UIPinchGestureRecognizer velocity - ios

I am trying to use UIPinchGestureRecognizer's velocity property to play an animation on a SceneKit node when the user stops pinching, but I don't understand how the velocity property is calculated or how it should be used.
Here is my code:
var initialScale = simd_float3(1)

@objc private func handlePinch(sender: UIPinchGestureRecognizer) {
    let scale = simd_float3(Float(sender.scale))
    let state = sender.state
    let vel = Float(sender.velocity)
    if state == .began {
        updateQ.async(group: updateGrp) {
            log.debug("Begin scale: \(scale)")
            log.debug("Begin vel: \(vel)")
            if let node = self.activeNode {
                self.initialScale = node.simdScale
            }
            else {
                self.initialScale = self.layeredImgNode.simdScale
            }
        }
    }
    else if state == .changed {
        updateQ.async(group: updateGrp) {
            log.debug("Changed scale: \(scale)")
            log.debug("Changed vel: \(vel)")
            let scale = self.initialScale * scale
            if let node = self.activeNode {
                node.simdScale = scale
            }
            else {
                self.layeredImgNode.simdScale = scale
            }
        }
    }
    else if state == .ended {
        updateQ.async(group: updateGrp) {
            log.debug("End scale: \(scale)")
            log.debug("End vel: \(vel)")
            // Calculate for how long to run the animation
            let time = Double(vel / Float(10))
            let target = self.initialScale * (scale + simd_float3(vel))
            log.debug("End target: \(target) and time: \(time)")
            self.initialScale = simd_float3(1)
            SCNTransaction.begin()
            SCNTransaction.animationDuration = time
            if let node = self.activeNode {
                node.simdScale = target
            }
            else {
                self.layeredImgNode.simdScale = target
            }
            SCNTransaction.commit()
        }
    }
}
The documentation states that velocity is "The velocity of the pinch in scale factor per second". However, I still don't understand how I should use it. Could someone help me?
Thanks.
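Not part of the original post, but for what it's worth: one way to read "scale factor per second" is to treat the value at .ended as momentum. Multiply it by a short coast time to estimate how much further the pinch would have drifted, then animate to that projected scale over the same coast time; the code above instead feeds the velocity (divided by 10) straight into the animation duration, which mixes scale-per-second with seconds. A minimal sketch of the momentum idea, written as the .ended branch pulled into its own handler; coastTime and the clamping bounds are arbitrary choices, and activeNode, layeredImgNode and initialScale are the properties from the code above:

@objc private func handlePinchEnded(_ sender: UIPinchGestureRecognizer) {
    guard sender.state == .ended else { return }
    let node = activeNode ?? layeredImgNode                                // same fallback as the handler above
    let coastTime: CFTimeInterval = 0.25                                   // how long the "fling" keeps scaling
    let projected = sender.scale + sender.velocity * CGFloat(coastTime)    // scale + (scale/sec * sec)
    let clamped = Float(min(max(projected, 0.25), 4.0))                    // keep the node a sensible size
    SCNTransaction.begin()
    SCNTransaction.animationDuration = coastTime
    SCNTransaction.animationTimingFunction = CAMediaTimingFunction(name: .easeOut)
    node.simdScale = initialScale * simd_float3(repeating: clamped)
    SCNTransaction.commit()
}

Velocity is negative while pinching inward, so the same projection shrinks the node; the clamp keeps a fast fling from scaling it to something unusable.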

Related

SceneKit. Place nodes in one "surface"

The users draw a line on drawing views, and I then need to translate these points into the 3D world, but place the points on one "surface". For this, I map the array of points into vectors (I use hitTest with .featurePoint) and then reduce the array to the farthest one:
func didEndDrawing(with points: [CGPoint]) {
    guard let transform = sceneView.pointOfView?.transform else { return }
    let cameraVectror = SCNVector3(transform.m31, transform.m32, transform.m33)
    let farthestVector = points
        .reduce((vector: SCNVector3(0, 0, 0), distance: CGFloat.zero)) { result, point in
            guard let vector = getVector(for: point) else { return result }
            let distance: CGFloat = cameraVectror.distance(to: vector)
            return distance > result.distance ? (vector, distance) : result
        }
        .vector

    let parentNode = SCNNode()
    parentNode.position = farthestVector
}
How can I adjust coordinates (I guess z position) to have all the child nodes at the same distance from the point of view?
The idea of the app is freehand drawing in AR.
Update
With Voltan's help I was able to solve it
points.forEach { point in
    let scenePoint = sceneView.unprojectPoint(SCNVector3(point.x, point.y, CGFloat(projectedPoint.z)))
    let sphere = SCNSphere(radius: 0.01)
    let material = SCNMaterial()
    material.diffuse.contents = UIColor.green
    sphere.materials = [material]
    let node = SCNNode(geometry: sphere)
    node.position = scenePoint
    sceneView.scene.rootNode.addChildNode(node)
}
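A note on the snippet above: projectedPoint is not declared in it. Going by the answer below, it is presumably the result of projecting a world-space reference point so that its z component can be reused as the common screen-space depth; something along these lines, where sceneView and the choice of reference point are my assumptions:

let projectedPoint = sceneView.projectPoint(SCNVector3(0, 0, 0))   // only its z (normalized depth) is used above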
If I'm understanding correctly, you want some kind of tap/drag combination: get the points from the 2D world and translate them to the 3D world. This is some game code from a Missile Command-type game; maybe it will help you with the unprojectPoint stuff. There are some timers that aren't included, but hopefully you will get the idea.
@objc func handleTap(recognizer: UITapGestureRecognizer)
{
    if(data.gameState == .endGame)
    {
        endGameAnimates.stop()
        let _ = Timer.scheduledTimer(withTimeInterval: 1, repeats: false, block: { _ in self.dismiss(animated: false, completion: nil) })
        return
    }

    if(gameControl.isWaveComplete == true || gNodes.gameNodes.isPaused == true) { return }

    currentLocation = recognizer.location(in: gameScene)
    let projectedPoint = gameScene.projectPoint(SCNVector3(0, 0, 0))
    let scenePoint = gameScene.unprojectPoint(SCNVector3(currentLocation.x, currentLocation.y, CGFloat(projectedPoint.z)))

    if(data.gameState == .endGame) // Allow animations to finish, otherwise they will show up next round
    {
        DispatchQueue.main.async { self.endGameAnimates.stop() }
        let _ = Timer.scheduledTimer(withTimeInterval: 1, repeats: false, block: { _ in self.dismiss(animated: false, completion: nil) })
        return
    }

    if(data.missilesAvailable <= 0)
    {
        sound.playSoundType(vSoundType: .defenseFails)
        hudControl.notify()
    }
    else
    {
        gameControl.defenseMissileShoot(vPosition: scenePoint, soundType: 0)
        sound.playSoundType(vSoundType: .defenseFires)
    }
}
//**************************************************************************
@objc func handlePan(recognizer: UIPanGestureRecognizer)
{
    currentLocation = recognizer.location(in: gameScene)
    let projectedPoint = gameScene.projectPoint(SCNVector3(0, 0, 0))
    let scenePoint = gameScene.unprojectPoint(SCNVector3(currentLocation.x, currentLocation.y, CGFloat(projectedPoint.z)))

    if(gameControl.isWaveComplete == true || gNodes.gameNodes.isPaused == true) { return }

    switch recognizer.state
    {
    case UIGestureRecognizer.State.began:
        gameControl.defenseMissileShoot(vPosition: scenePoint, soundType: 1)
        SNDdefenseSoundCount = 0
        if(data.missilesAvailable <= 0) { sound.playSoundType(vSoundType: .defenseFails); hudControl.notify() }
        beginLocation.x = currentLocation.x
        break
    case UIGestureRecognizer.State.changed:
        if(currentLocation.x > beginLocation.x + dragDistance)
        {
            beginLocation.x = currentLocation.x
            if(data.missilesAvailable > 0) { gameControl.defenseMissileShoot(vPosition: scenePoint, soundType: 2) }
            SNDdefenseSoundCount += 1
        }
        if(currentLocation.x < beginLocation.x - dragDistance)
        {
            beginLocation.x = currentLocation.x
            if(data.missilesAvailable > 0) { gameControl.defenseMissileShoot(vPosition: scenePoint, soundType: 2) }
            SNDdefenseSoundCount += 1
        }
        break
    case UIGestureRecognizer.State.ended:
        if(data.missilesAvailable > 0)
        {
            if(SNDdefenseSoundCount < 2) { sound.playSoundType(vSoundType: .defenseFires) }
            else { sound.playSoundType(vSoundType: .defensePans) }
        }
        break
    default:
        break
    }
}
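The two projectPoint/unprojectPoint lines are the part that answers the question: projecting a world-space reference point returns its normalized depth in the z component, and unprojecting each touch location at that same z puts every result on one plane parallel to the screen. If it helps, the pattern can be pulled into a small helper; the function name, parameter labels and the default reference point here are only illustrative:

func scenePoint(from screenPoint: CGPoint, in renderer: SCNView,
                depthOf worldReference: SCNVector3 = SCNVector3(0, 0, 0)) -> SCNVector3 {
    // projectPoint returns screen x/y plus the reference point's normalized depth in z
    let projected = renderer.projectPoint(worldReference)
    // Unprojecting the touch at that same z lands on the plane through the reference point, parallel to the screen
    return renderer.unprojectPoint(SCNVector3(Float(screenPoint.x), Float(screenPoint.y), projected.z))
}

Both handlers above could then call scenePoint(from: currentLocation, in: gameScene) instead of repeating those two lines.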
}

How can I create an onscreen controller that works in multiple scenes in SpriteKit?

Working on a game in SpriteKit to learn. It's a platformer with an onscreen controller. I have this all working using touchesBegan and touchesEnded to know when the player is pushing the buttons or not. This works fine; however, when I want to load the next scene for level 2, I need to implement the controller all over again. I could do a lot of copying and pasting, but I feel this will lead to a lot of duplicated code. Every tutorial I've ever read says to try to adhere to the DRY principle.
I'm sorry if this is simple, but I have less than 6 months of programming experience and am trying to learn and improve. I'm assuming I would need to create a separate class for the onscreen controller so it can be reused, but I'm a little lost on where to start.
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touch = touches.first {
        let location = (touch.location(in: playerCamera))
        print("LocationX: \(location.x), LocationY: \(location.y)")
        let objects = nodes(at: location)
        print("\(objects)")
        if rightButton.frame.contains(location) {
            rightButtonPressed = true
            playerFacingRight = true
            playerFacingLeft = false
            thePlayer.xScale = 1
            let animation = SKAction(named: "running")!
            let loopingAnimation = SKAction.repeatForever(animation)
            thePlayer.run(loopingAnimation, withKey: "moveRight")
            moveRight()
        } else if leftButton.frame.contains(location) {
            leftButtonPressed = true
            playerFacingLeft = true
            playerFacingRight = false
            thePlayer.xScale = -1
            let leftAnimation = SKAction(named: "running")!
            let leftLoopingAnimation = SKAction.repeatForever(leftAnimation)
            thePlayer.run(leftLoopingAnimation, withKey: "moveLeft")
            moveLeft()
        } else if upButton.frame.contains(location) {
            upButtonPressed = true
            print("upButton is pressed")
            if playerAndButtonContact == true {
                print("contact - player + button + upButtonPressed=true")
                print("\(movingPlatform.position)")
                button.texture = SKTexture(imageNamed: "switchGreen")
                let moveRight = SKAction.moveTo(x: -150, duration: 3)
                if movingPlatform.position == CGPoint(x: -355, y: movingPlatform.position.y) {
                    movingPlatform.run(moveRight)
                    thePlayer.run(moveRight)
                    button.run(moveRight)
                }
            }
            if playerAndDoorSwitchContact == true {
                let switchPressed = SKAction.run {
                    self.switchPressedSound()
                    self.doorSwitch.texture = SKTexture(imageNamed: "switchGreen")
                    self.door.texture = SKTexture(imageNamed: "DoorUnlocked")
                }
                let wait = SKAction.wait(forDuration: 2)
                let doorOpen = SKAction.run {
                    let doorOpen = SKSpriteNode(imageNamed: "DoorOpen")
                    doorOpen.alpha = 0
                    doorOpen.position = self.door.position
                    doorOpen.size = self.door.size
                    self.door.zPosition = -2
                    doorOpen.zPosition = -1
                    let fadeIn = SKAction.fadeIn(withDuration: 0.5)
                    let start = SKAction.run {
                        self.addChild(doorOpen)
                        doorOpen.run(fadeIn)
                    }
                    let sound = SKAction.run {
                        self.doorOpeningSound()
                    }
                    let opening = SKAction.group([sound, start])
                    self.door.run(opening)
                }
                let sequence = SKAction.sequence([switchPressed, wait, doorOpen])
                self.doorSwitch.run(sequence)
            }
            if playerAndDoorContact == true {
                self.view?.presentScene(level1, transition: transition)
            }
        } else if downButton.frame.contains(location) {
        } else if shoot.frame.contains(location) {
            shoot()
        } else if jumpButton.frame.contains(location) {
            self.pressed = true
            let timerAction = SKAction.wait(forDuration: 0.05)
            let update = SKAction.run {
                if(self.force < Constants.maximumJumpForce) {
                    self.force += 2.0
                } else {
                    self.jump(force: Constants.maximumJumpForce)
                    self.force = Constants.maximumJumpForce
                }
            }
            let sequence = SKAction.sequence([timerAction, update])
            let repeat_seq = SKAction.repeatForever(sequence)
            self.run(repeat_seq, withKey: "repeatAction")
        }
    }
}
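This is not from the question or an existing answer, just a minimal sketch of the "separate class" idea the question describes, so the controller can be reused across level scenes. Every name in it (OnScreenController, the Button cases, the image names and positions) is hypothetical; the point is that one SKNode owns its own touch handling and reports presses through closures, and each scene only wires up what the buttons should do.

import SpriteKit

final class OnScreenController: SKNode {
    enum Button { case left, right, up, down, jump, shoot }

    // The owning scene plugs its own game logic in through these closures.
    var onPress: ((Button) -> Void)?
    var onRelease: ((Button) -> Void)?

    private var buttons: [Button: SKSpriteNode] = [:]

    override init() {
        super.init()
        // Let this node receive touches itself instead of the scene.
        isUserInteractionEnabled = true
        addButton(.left,  imageNamed: "leftArrow",   at: CGPoint(x: -320, y: -120))
        addButton(.right, imageNamed: "rightArrow",  at: CGPoint(x: -220, y: -120))
        addButton(.jump,  imageNamed: "jumpButton",  at: CGPoint(x:  260, y: -120))
        addButton(.shoot, imageNamed: "shootButton", at: CGPoint(x:  330, y: -120))
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    private func addButton(_ button: Button, imageNamed name: String, at position: CGPoint) {
        let sprite = SKSpriteNode(imageNamed: name)
        sprite.position = position
        buttons[button] = sprite
        addChild(sprite)
    }

    private func button(at location: CGPoint) -> Button? {
        // Sprite frames are expressed in this node's coordinate space.
        buttons.first { $0.value.frame.contains(location) }?.key
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            if let pressed = button(at: touch.location(in: self)) { onPress?(pressed) }
        }
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        // Simplification: a finger that slides off a button still reports that button on release.
        for touch in touches {
            if let released = button(at: touch.location(in: self)) { onRelease?(released) }
        }
    }
}

Each level scene would then add the same controller to its camera and hook it up, reusing names from the question's code:

// In each level's didMove(to:):
let controller = OnScreenController()
playerCamera.addChild(controller)
controller.onPress = { [weak self] button in
    switch button {
    case .right: self?.moveRight()
    case .left:  self?.moveLeft()
    case .jump:  self?.jump(force: Constants.maximumJumpForce)
    default: break
    }
}

A delegate protocol would work just as well as closures; either way the touch handling lives in one place instead of being copied into every scene.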

SceneKit finger panning becomes inaccurate as it moves away from center

In my ARSCNView, I'm drawing an SCNPlane with a grid inside. It is a child of sceneView.pointOfView. When you drag your finger across it, you can connect the dots in the grid. This works almost perfectly, except something subtly strange is happening: the edge of the dragged line SHOULD track accurately with the finger, but it actually falls ahead of or behind the finger depending on how far above or below the center it is.
@objc func handlePan(gesture: UIPanGestureRecognizer) {
    print("panned")
    let location = gesture.location(in: sceneView)
    let projectedOriginUI = sceneView.projectPoint(SCNVector3(0, 0, gridCon.grid.position.z))
    let placeOnUI = sceneView.unprojectPoint(SCNVector3(location.x, location.y, CGFloat(projectedOriginUI.z)))
    let rootNode: SCNNode = sceneView.scene.rootNode
    let positionOnGrid = rootNode.convertPosition(placeOnUI, to: gridCon.grid)
    let state = gesture.state
    let hitVertex = gridCon.hitVertex(location: location, sceneView: sceneView)

    if (state == UIGestureRecognizer.State.began) {
        print("began")
        if (hitVertex != nil) {
            let edge = EdgeController.createEdge()
            edge.position.x = hitVertex?.position.x ?? 0
            edge.position.y = hitVertex?.position.y ?? 0
            gridCon.grid.addChildNode(edge)
            draggedLine = edge
            startVertex = hitVertex
            print("Edge added!")
        }
    }
    if (state == UIGestureRecognizer.State.ended) {
        draggedLine = nil
        startVertex = nil
    }
    if (draggedLine != nil && startVertex !== nil) {
        print("line height changed")
        let _draggedLine = draggedLine!
        let geo: SCNPlane = _draggedLine.geometry as! SCNPlane
        let between = startVertex!.position.y - positionOnGrid.y
        geo.height = CGFloat(between)
        _draggedLine.position.y = startVertex!.position.y - (between/2)
    }
}
Why is my finger tracking slightly off here?
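Not an answer, but one thing worth comparing with the last question on this page, whose accepted approach derives the depth by projecting the dragged node's own position. Here the value passed to projectPoint is SCNVector3(0, 0, gridCon.grid.position.z), and since the grid is a child of sceneView.pointOfView, grid.position is expressed in the point of view's coordinate space, while projectPoint expects world coordinates. A sketch of the other variant inside the handler above, assuming gridCon.grid is reachable there:

// Depth taken from the grid plane's own world-space position
let projectedGrid = sceneView.projectPoint(gridCon.grid.worldPosition)
let placeOnUI = sceneView.unprojectPoint(SCNVector3(location.x, location.y, CGFloat(projectedGrid.z)))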

How to swipe/slide to go into the 3D Object

I have done a small demo in ARKit. I am facing a problem with how to go inside a 3D object by swiping.
For example, I have a home 3D object; it works with a tap gesture and a rotation gesture. I would like to go inside the home by swiping from one room to another inside the home 3D object.
It rotates the whole 3D object node; I am not able to swipe and go inside the home.
This is the code I used to rotate around the Y axis:
@objc private func viewRotated(_ gesture: UIRotationGestureRecognizer) {
    let location = gesture.location(in: sceneView)
    guard let node = node(at: location) else { return }
    switch gesture.state {
    case .began:
        originalRotation = node.eulerAngles
    case .changed:
        guard var originalRotation = originalRotation else { return }
        originalRotation.y -= Float(gesture.rotation)
        node.eulerAngles = originalRotation
    default:
        originalRotation = nil
    }
}
To rotate in all directions using UIPanGestureRecognizer, this is the code I have added:
@objc func viewPanned(gestureRecognize: UIPanGestureRecognizer) {
    let translation = gestureRecognize.translation(in: gestureRecognize.view!)
    let x = Float(translation.x)
    let y = Float(-translation.y)
    let anglePan = sqrt(pow(x, 2) + pow(y, 2)) * (Float)(M_PI) / 180.0
    var rotationVector = SCNVector4()
    rotationVector.x = -y
    rotationVector.y = x
    rotationVector.z = 0
    rotationVector.w = anglePan
    homeNode?.rotation = rotationVector
    if(gestureRecognize.state == UIGestureRecognizerState.ended) {
        let currentPivot = homeNode?.pivot
        let changePivot = SCNMatrix4Invert((homeNode?.transform)!)
        homeNode?.pivot = SCNMatrix4Mult(changePivot, currentPivot!)
        homeNode?.transform = SCNMatrix4Identity
    }
}
Perhaps there is another way of doing this. Can someone suggest a way? Thanks.

ARKit – Drag a node along a specific axis (not on a plane)

I am trying to drag a node along the Y axis so that it stays exactly under my finger on the screen. But I can't find any way to do it; here is my current code. Do you have any idea?
var movedObject: SCNNode?

@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    if recognizer.state == .began {
        let tapPoint: CGPoint = recognizer.location(in: sceneView)
        let result = sceneView.hitTest(tapPoint, options: nil)
        if result.count == 0 {
            return
        }
        let hitResult: SCNHitTestResult? = result.first
        movedObject = hitResult?.node //.parent?.parent
    } else if recognizer.state == .changed {
        if (movedObject != nil) {
            let tapPoint: CGPoint = recognizer.location(in: sceneView)
            let hitResults = sceneView.hitTest(tapPoint, types: .featurePoint)
            let result: ARHitTestResult? = hitResults.last
            if result?.worldTransform != nil {
                let matrix: SCNMatrix4 = SCNMatrix4.init((result?.worldTransform)!)
                let vector: SCNVector3 = SCNVector3Make(matrix.m41, matrix.m42, matrix.m43)
                movedObject?.position = vector
            }
        }
    } else if recognizer.state == .ended {
        movedObject = nil
    }
}
Found the solution! I added some explanations in the comments.
if (movedObject != nil) {
    // Normalized-depth coordinate matching the plane I want
    let projectedOrigin = sceneView.projectPoint((movedObject?.position)!)
    // Location of the finger in the view on a 2D plane
    let location2D = recognizer.location(in: sceneView)
    // Location of the finger in a 3D vector
    let location3D = SCNVector3Make(Float(location2D.x), Float(location2D.y), projectedOrigin.z)
    // Unprojects a point from the 2D pixel coordinate system of the renderer to the 3D world coordinate system of the scene
    let realLocation3D = sceneView.unprojectPoint(location3D)
    if movedObject?.position != nil {
        // Only updating the Y axis position
        movedObject?.position = SCNVector3Make((movedObject?.position.x)!, realLocation3D.y, (movedObject?.position.z)!)
    }
}
Posts that help understand:
unProjectPoint & projectedOrigin
translation & unProjectPoint
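For reference, folding the fix above back into the original handler gives roughly the following; it keeps the same names as the snippets above and, as in those snippets, treats movedObject.position as a world-space position (i.e. the node sits under the root node):

@objc func handlePan(_ recognizer: UIPanGestureRecognizer) {
    switch recognizer.state {
    case .began:
        // Grab the node under the finger, as in the original code
        let tapPoint = recognizer.location(in: sceneView)
        movedObject = sceneView.hitTest(tapPoint, options: nil).first?.node
    case .changed:
        guard let node = movedObject else { return }
        // Normalized screen depth of the node's current position
        let projectedOrigin = sceneView.projectPoint(node.position)
        let location2D = recognizer.location(in: sceneView)
        let realLocation3D = sceneView.unprojectPoint(
            SCNVector3Make(Float(location2D.x), Float(location2D.y), projectedOrigin.z))
        // Keep x and z, follow the finger on the Y axis only
        node.position = SCNVector3Make(node.position.x, realLocation3D.y, node.position.z)
    case .ended, .cancelled, .failed:
        movedObject = nil
    default:
        break
    }
}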
