I'm trying to rotate an SCNBox I created using swipe gestures. For example, when I swipe right the box should rotate 90 degrees about the Y-axis, and -90 degrees when I swipe left. To achieve this I have been using the node's SCNAction.rotateByX method to perform the rotation animation. The problem I'm having is that after a rotation about the Y-axis, subsequent rotations about the X-axis or Z-axis (and vice versa) behave as if the axes have moved.
What I have noticed is that any rotation performed about one of the X, Y, or Z axes changes the direction in which the other axes point.
Example: Default position
Then after a rotation in the Z-axis:
Of course this poses a problem, because now when I swipe left or right I no longer get the desired effect, since the X-axis and Y-axis have swapped positions. What I would like to know is: why does this happen, and is there any way to perform the rotation animation without it affecting the other axes?
I apologize for my lack of understanding on this subject, as this is my first go at 3D graphics.
Solution:
func swipeRight(recognizer: UITapGestureRecognizer) {
    // rotation animation
    let action = SCNAction.rotateByX(0, y: CGFloat(GLKMathDegreesToRadians(90)), z: 0, duration: 0.5)
    boxNode.runAction(action)

    // repositioning of the X, Y, Z axes after the rotation has been applied
    let currentPivot = boxNode.pivot
    let changePivot = SCNMatrix4Invert(boxNode.transform)
    boxNode.pivot = SCNMatrix4Mult(changePivot, currentPivot)
    boxNode.transform = SCNMatrix4Identity
}
I haven't run into any problems yet, but it may be safer to use a completion handler to ensure any changes to the X, Y, Z axes are done before repositioning them.
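For reference, a minimal sketch of that completion-handler variant, keeping the same boxNode and the Swift 2-era API used above; the pivot is only reset once the rotation has finished:

func swipeRight(recognizer: UITapGestureRecognizer) {
    let action = SCNAction.rotateByX(0, y: CGFloat(GLKMathDegreesToRadians(90)), z: 0, duration: 0.5)
    boxNode.runAction(action) { [unowned self] in
        // the axes are only repositioned after the rotation animation has finished
        let currentPivot = self.boxNode.pivot
        let changePivot = SCNMatrix4Invert(self.boxNode.transform)
        self.boxNode.pivot = SCNMatrix4Mult(changePivot, currentPivot)
        self.boxNode.transform = SCNMatrix4Identity
    }
}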
I had the same issue, here's what I use to give the desired behavior:
func panGesture(sender: UIPanGestureRecognizer) {
    let translation = sender.translationInView(sender.view!)
    let pan_x = Float(translation.x)
    let pan_y = Float(-translation.y)
    let anglePan = sqrt(pow(pan_x, 2) + pow(pan_y, 2)) * Float(M_PI) / 180.0

    var rotVector = SCNVector4()
    rotVector.x = -pan_y
    rotVector.y = pan_x
    rotVector.z = 0
    rotVector.w = anglePan

    // apply to your model container node
    boxNode.rotation = rotVector

    if sender.state == UIGestureRecognizerState.Ended {
        let currentPivot = boxNode.pivot
        let changePivot = SCNMatrix4Invert(boxNode.transform)
        boxNode.pivot = SCNMatrix4Mult(changePivot, currentPivot)
        boxNode.transform = SCNMatrix4Identity
    }
}
I'm building a UIPanGestureRecognizer so I can move nodes in 3D space.
Currently I have something that works, but only when the camera is exactly perpendicular to the plane. My UIPanGestureRecognizer handler looks like this:
@objc func handlePan(_ sender: UIPanGestureRecognizer) {
    let projectedOrigin = self.sceneView!.projectPoint(SCNVector3Zero)
    let viewCenter = CGPoint(
        x: self.view!.bounds.midX,
        y: self.view!.bounds.midY
    )
    let touchlocation = sender.translation(in: self.view!)
    let moveLoc = CGPoint(
        x: CGFloat(touchlocation.x + viewCenter.x),
        y: CGFloat(touchlocation.y + viewCenter.y)
    )
    let touchVector = SCNVector3(x: Float(moveLoc.x), y: Float(moveLoc.y), z: Float(projectedOrigin.z))
    let worldPoint = self.sceneView!.unprojectPoint(touchVector)
    let loc = SCNVector3(x: worldPoint.x, y: 0, z: worldPoint.z)

    worldHandle?.position = loc
}
The problem happens when the camera is rotated and the coordinates are affected by the perspective change. Here you can see the touch position drifting:
Related SO post which I used to get to this point:
How to use iOS (Swift) SceneKit SCNSceneRenderer unprojectPoint properly
It referenced these great slides: http://www.terathon.com/gdc07_lengyel.pdf
The tricky part of going from a 2D touch position to 3D space is obviously the z-coordinate. Instead of trying to convert the touch position to an imaginary 3D space, map the 2D touch onto a 2D plane in that 3D space using a hit test. Especially when movement is required in only two directions, for example like chess pieces on a board, this approach works very well. Regardless of the orientation of the plane and the camera settings (as long as the camera doesn't look at the plane from the side, obviously), this maps the touch position to a 3D position directly under the finger and follows it consistently.
I modified the Game template from Xcode with an example.
https://github.com/Xartec/PrecisePan/
The main parts are:
the pan gesture code:
// retrieve the SCNView
let scnView = self.view as! SCNView

// check what nodes are tapped
let p = gestureRecognize.location(in: scnView)
let hitResults = scnView.hitTest(p, options: [SCNHitTestOption.searchMode: 1, SCNHitTestOption.ignoreHiddenNodes: false])

if hitResults.count > 0 {
    // check if the XZPlane is in the hit results
    for result in hitResults {
        if result.node.name == "XZPlane" {
            //NSLog("Local Coordinates on XZPlane %f, %f, %f", result.localCoordinates.x, result.localCoordinates.y, result.localCoordinates.z)
            //NSLog("World Coordinates on XZPlane %f, %f, %f", result.worldCoordinates.x, result.worldCoordinates.y, result.worldCoordinates.z)
            ship.position = result.worldCoordinates
            ship.position.y += 1.5
            return
        }
    }
}
The addition of an XZ plane node in viewDidLoad:
let XZPlaneGeo = SCNPlane(width: 100, height: 100)
let XZPlaneNode = SCNNode(geometry: XZPlaneGeo)
XZPlaneNode.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "grid")
XZPlaneNode.name = "XZPlane"
XZPlaneNode.rotation = SCNVector4(-1, 0, 0, Float.pi / 2)
//XZPlaneNode.isHidden = true
scene.rootNode.addChildNode(XZPlaneNode)
Uncomment the isHidden line to hide the helper plane and it will still work. The plane obviously needs to be large enough to fill the screen or at least the portion where the user is allowed to pan.
By setting a global var to hold the startWorldPosition of the pan (in state .began) and comparing it to the hit worldPosition in state .changed, you can determine the delta/translation in world space and translate other objects accordingly.
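A rough sketch of that idea (not code from the linked project; draggedNode and the property names are placeholders, and the hit test is the same XZPlane test as above):

var startWorldPosition: SCNVector3?        // set when the pan begins
var draggedNodeStartPosition: SCNVector3?  // where the dragged node was when the pan began

func handlePan(_ gestureRecognize: UIPanGestureRecognizer) {
    let scnView = self.view as! SCNView
    let p = gestureRecognize.location(in: scnView)
    // find the hit on the helper plane, as in the snippet above
    guard let hit = scnView.hitTest(p, options: [SCNHitTestOption.ignoreHiddenNodes: false])
        .first(where: { $0.node.name == "XZPlane" }) else { return }

    switch gestureRecognize.state {
    case .began:
        startWorldPosition = hit.worldCoordinates
        draggedNodeStartPosition = draggedNode.position   // draggedNode: the node you want to move
    case .changed:
        guard let start = startWorldPosition, let nodeStart = draggedNodeStartPosition else { return }
        // translation in world space since the pan began
        let delta = SCNVector3(hit.worldCoordinates.x - start.x,
                               hit.worldCoordinates.y - start.y,
                               hit.worldCoordinates.z - start.z)
        draggedNode.position = SCNVector3(nodeStart.x + delta.x,
                                          nodeStart.y + delta.y,
                                          nodeStart.z + delta.z)
    default:
        startWorldPosition = nil
        draggedNodeStartPosition = nil
    }
}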
I'm making an app where the user can create some flat shapes by positioning some points on a 3D space with ARKit, but it seems that the part where I create the UIBezierPath using these points is problematic.
In my app, the user starts by positioning a virtual transparent wall in AR at the same place as his device, by pressing a button:
guard let currentFrame = sceneView.session.currentFrame else {
    return
}

let imagePlane = SCNPlane(width: sceneView.bounds.width, height: sceneView.bounds.height)
imagePlane.firstMaterial?.diffuse.contents = UIColor.black
imagePlane.firstMaterial?.lightingModel = .constant

var windowNode = SCNNode()
windowNode.geometry = imagePlane
sceneView.scene.rootNode.addChildNode(windowNode)
windowNode.simdTransform = currentFrame.camera.transform
windowNode.opacity = 0.1
Then, the user places some points (sphere nodes) on that wall by pressing a button, to determine the shape of the flat object he wants to create. If the user points back at the first sphere node created, I close the shape, create a node from it, and place it at the same position as the wall:
let hitTestResult = sceneView.hitTest(self.view.center, options: nil)
if let firstHit = hitTestResult.first {
    if firstHit.node == windowNode {
        let x = Double(firstHit.worldCoordinates.x)
        let y = Double(firstHit.worldCoordinates.y)
        let pointCoordinates = CGPoint(x: x, y: y)

        let sphere = SCNSphere(radius: 0.02)
        sphere.firstMaterial?.diffuse.contents = UIColor.white
        sphere.firstMaterial?.lightingModel = .constant
        let sphereNode = SCNNode(geometry: sphere)
        sceneView.scene.rootNode.addChildNode(sphereNode)
        sphereNode.worldPosition = firstHit.worldCoordinates

        if points.isEmpty {
            windowPath.move(to: pointCoordinates)
        } else {
            windowPath.addLine(to: pointCoordinates)
        }
        points.append(sphereNode)

        if undoButton.alpha == 0 {
            undoButton.alpha = 1
        }
    } else if firstHit.node == points.first {
        windowPath.close()
        let windowShape = SCNShape(path: windowPath, extrusionDepth: 0)
        windowShape.firstMaterial?.diffuse.contents = UIColor.white
        windowShape.firstMaterial?.lightingModel = .constant
        let tintedWindow = SCNNode(geometry: windowShape)
        let worldPosition = windowNode.worldPosition
        tintedWindow.worldPosition = worldPosition
        sceneView.scene.rootNode.addChildNode(tintedWindow)
        // removing all the sphere nodes from points and reinitializing the UIBezierPath windowPath
        removeAllPoints()
    }
}
That code works when I create a first invisible wall and a first shape, but when I create a second wall and finish drawing my shape, the shape appears deformed and nowhere near the right place. So I think I'm missing something with the coordinates of my UIBezierPath points, but what?
EDIT
OK, so after several tests, it seems that it depends on the orientation of the device at the launch of the AR session. When the device, at launch, faces the first wall that the user will create, the shape is created and placed as expected. But if the user, for example, launches the app with his device pointed in one direction, then rotates 90 degrees on the spot, places the first wall and creates his shape, the shape will be deformed and not at the right place.
So it seems that it's a problem of 3D coordinates, but I still can't figure it out.
OK, I just found the problem! I was just using the wrong vectors and coordinates... I've never been a math/geometry guy, haha.
So instead of using:
let x = Double(firstHit.worldCoordinates.x)
let y = Double(firstHit.worldCoordinates.y)
I now use:
let x = Double(firstHit.localCoordinates.x)
let y = Double(firstHit.localCoordinates.y)
And instead of using:
let worldPosition = windowNode.worldPosition
I now use:
let worldPosition = windowNode.transform
That's why the position of my shape node depended on the initialization of the AR session: I was working with world coordinates. Seems obvious to me now.
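So, as a sketch, the relevant parts of the snippet from the question presumably end up looking like this (assuming the shape node is then given the wall's full transform rather than just its world position):

// build the path in the wall node's local coordinate space
let x = Double(firstHit.localCoordinates.x)
let y = Double(firstHit.localCoordinates.y)
let pointCoordinates = CGPoint(x: x, y: y)

// ... and when closing the shape, give the shape node the wall's whole transform,
// so the local-space path ends up at the wall's position and orientation
let windowShape = SCNShape(path: windowPath, extrusionDepth: 0)
let tintedWindow = SCNNode(geometry: windowShape)
tintedWindow.transform = windowNode.transform
sceneView.scene.rootNode.addChildNode(tintedWindow)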
I'm using ARKit to display 3D objects. I managed to place the nodes in the real world in front of the user (aka the camera), but I can't manage to make them face the camera when I drop them.
let tap_point = CGPoint(x: x, y: y)
let results = arscn_view.hitTest(tap_point, types: .estimatedHorizontalPlane)
guard results.count > 0 else {
    return
}
guard let r = results.first else {
    return
}
let hit_tf = SCNMatrix4(r.worldTransform)
let new_pos = SCNVector3Make(hit_tf.m41, hit_tf.m42 + Float(0.2), hit_tf.m43)

guard let scene = SCNScene(named: file_name) else {
    return
}
guard let node = scene.rootNode.childNode(withName: "Mesh", recursively: true) else {
    return
}
node.position = new_pos
arscn_view.scene.rootNode.addChildNode(node)
The nodes are well positioned on the plane, in front of the camera, but they are all looking in the same direction. I guess I should rotate the SCNNode, but I haven't managed to do this.
First, get the rotation matrix of the camera:
let rotate = simd_float4x4(SCNMatrix4MakeRotation(sceneView.session.currentFrame!.camera.eulerAngles.y, 0, 1, 0))
Then, combine the matrices:
let rotateTransform = simd_mul(r.worldTransform, rotate)
Lastly, apply a transform to your node, casting as SCNMatrix4:
node.transform = SCNMatrix4(rotateTransform)
Hope that helps
EDIT
Here is how you can create an SCNMatrix4 from a simd_float4x4:
let rotateTransform = simd_mul(r.worldTransform, rotate)
node.transform = SCNMatrix4(
    m11: rotateTransform.columns.0.x, m12: rotateTransform.columns.0.y, m13: rotateTransform.columns.0.z, m14: rotateTransform.columns.0.w,
    m21: rotateTransform.columns.1.x, m22: rotateTransform.columns.1.y, m23: rotateTransform.columns.1.z, m24: rotateTransform.columns.1.w,
    m31: rotateTransform.columns.2.x, m32: rotateTransform.columns.2.y, m33: rotateTransform.columns.2.z, m34: rotateTransform.columns.2.w,
    m41: rotateTransform.columns.3.x, m42: rotateTransform.columns.3.y, m43: rotateTransform.columns.3.z, m44: rotateTransform.columns.3.w
)
guard let frame = self.sceneView.session.currentFrame else {
    return
}
node.eulerAngles.y = frame.camera.eulerAngles.y
Here's my code for making the SCNNode face the camera. Hope it helps someone:
let location = touches.first!.location(in: sceneView)
var hitTestOptions = [SCNHitTestOption: Any]()
hitTestOptions[SCNHitTestOption.boundingBoxOnly] = true
let hitResultsFeaturePoints: [ARHitTestResult] = sceneView.hitTest(location, types: .featurePoint)
let hitTestResults = sceneView.hitTest(location)
guard let node = hitTestResults.first?.node else {
    if let hit = hitResultsFeaturePoints.first {
        let rotate = simd_float4x4(SCNMatrix4MakeRotation(sceneView.session.currentFrame!.camera.eulerAngles.y, 0, 1, 0))
        let finalTransform = simd_mul(hit.worldTransform, rotate)
        sceneView.session.add(anchor: ARAnchor(transform: finalTransform))
    }
    return
}
Do you want the nodes to always face the camera, even as the camera moves? That's what SceneKit constraints are for. Either SCNLookAtConstraint or SCNBillboardConstraint can keep a node always pointing at the camera.
Do you want the node to face the camera when placed, but then hold still (so you can move the camera around and see the back of it)? There are a few ways to do that. Some involve fun math, but a simpler way to handle it might just be to design your 3D assets so that "front" is always in the positive Z-axis direction. Set a placed object's transform based on the camera transform, and its initial orientation will match the camera's.
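As a rough sketch of that second approach (not code from this answer; hitPosition here is an assumed simd_float3 result from your plane hit test, and node is the object being placed with its "front" facing +Z):

// copy the camera's orientation, but keep the hit-test position
if let camera = sceneView.session.currentFrame?.camera {
    node.simdTransform = camera.transform   // orientation now matches the camera's
    node.simdPosition = hitPosition         // keep the placement position from the hit test
    sceneView.scene.rootNode.addChildNode(node)
}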
Here's how I did it:
func faceCamera() {
guard constraints?.isEmpty ?? true else {
return
}
SCNTransaction.begin()
SCNTransaction.animationDuration = 5
SCNTransaction.completionBlock = { [weak self] in
self?.constraints = []
}
constraints = [billboardConstraint]
SCNTransaction.commit()
}
private lazy var billboardConstraint: SCNBillboardConstraint = {
let constraint = SCNBillboardConstraint()
constraint.freeAxes = [.Y]
return constraint
}()
As stated earlier, an SCNBillboardConstraint will make the node always look at the camera. I am animating it so the node doesn't just immediately snap into place; this is optional. In the SCNTransaction.completionBlock I remove the constraint, which is also optional.
Also, I set the SCNBillboardConstraint's freeAxes property, which customizes the axes on which the node follows the camera; again, optional.
I want the node to face the camera when I place it, then keep it there (and be able to move around). – Marie Dm
You can make the object face the camera using this:
if let rotate = sceneView.session.currentFrame?.camera.transform {
    node.simdTransform = rotate
}
This code will save you from gimbal lock and other troubles.
The four-component rotation vector specifies the direction of the rotation axis in the first three components and the angle of rotation (in radians) in the fourth. The default rotation is the zero vector, specifying no rotation. Rotation is applied relative to the node’s simdPivot property.
The simdRotation, simdEulerAngles, and simdOrientation properties all affect the rotational aspect of the node’s simdTransform property. Any change to one of these properties is reflected in the others.
https://developer.apple.com/documentation/scenekit/scnnode/2881845-simdrotation
https://developer.apple.com/documentation/scenekit/scnnode/2881843-simdtransform
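For example, a quarter turn about the node's Y axis expressed as such a rotation vector (just an illustrative value):

// axis (x, y, z) in the first three components, angle in radians in the fourth
node.simdRotation = simd_float4(0, 1, 0, Float.pi / 2)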
I'm having a hard time setting boundaries and positioning the camera properly inside my view after panning. So here's my scenario.
I have a node that is bigger than the screen and I want to let user pan around to see the full map. My node is 1000 by 1400 when the view is 640 by 1136. Sprites inside the map node have the default anchor point.
Then I've added a camera to the map node and set its position to (0.5, 0.5).
Now I'm wondering if I should be changing the position of the camera or of the map node when the user pans the screen? The first approach seems problematic, since I can't simply add the translation to the camera position: the position is defined as (0.5, 0.5) and the translation values are way bigger than that. So I tried multiplying/dividing it by the screen size, but that doesn't seem to work. Is the second approach better?
var map = Map(size: CGSize(width: 1000, height: 1400))

override func didMove(to view: SKView) {
    (...)
    let pan = UIPanGestureRecognizer(target: self, action: #selector(panned(sender:)))
    view.addGestureRecognizer(pan)

    self.anchorPoint = CGPoint.zero
    self.cam = SKCameraNode()
    self.cam.name = "camera"
    self.camera = cam

    self.addChild(map)
    self.map.addChild(self.cam!)
    cam.position = CGPoint(x: 0.5, y: 0.5)
}
var previousTranslateX: CGFloat = 0.0

func panned(sender: UIPanGestureRecognizer) {
    let currentTranslateX = sender.translation(in: view!).x

    // calculate translation since last measurement
    let translateX = currentTranslateX - previousTranslateX

    let xMargin = (map.nodeSize.width - self.frame.width) / 2

    var newCamPosition = CGPoint(x: cam.position.x, y: cam.position.y)
    let newPositionX = cam.position.x * self.frame.width + translateX

    // since the camera x is 320, our limits are 140 and 460 ?
    if newPositionX > self.frame.width/2 - xMargin && newPositionX < self.frame.width - xMargin {
        newCamPosition.x = newPositionX / self.frame.width
    }

    centerCameraOnPoint(point: newCamPosition)

    // (re-)set previous measurement
    if sender.state == .ended {
        previousTranslateX = 0
    } else {
        previousTranslateX = currentTranslateX
    }
}

func centerCameraOnPoint(point: CGPoint) {
    if cam != nil {
        cam.position = point
    }
}
Your camera is actually at a pixel point 0.5 points to the right of the centre, and 0.5 points up from the centre. At (0, 0) your camera is dead centre of the screen.
I think the mistake you've made is a conceptual one, thinking that anchor point of the scene (0.5, 0.5) is the same as the centre coordinates of the scene.
If you're working in pixels, which it seems you are, then a camera position of (500, 700) will be at the top right of your map, ( -500, -700 ) will be at the bottom left.
This assumes you're using the midpoint anchor that comes default with the Xcode SpriteKit template.
Which means the answer to your question is: literally move the camera as you please around your map, now that you can be confident its position is in literal pixel coordinates.
With one caveat...
A lot of games use constraints to stop the camera somewhat before it gets to the edge of a map, so that the map isn't half off and half on the screen. In this way the map's edge is showing, but the furthest the camera travels is only enough to reveal that edge of the map. This becomes a constraints-based effort when you have a player/character that can walk/move to the edge, but the camera doesn't go all the way out there.
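A small sketch of that constraints idea, assuming the midpoint-anchored scene described above and the 1000 x 1400 map / 640 x 1136 view from the question (the ranges just keep the camera from revealing anything beyond the map's edges):

// keep the camera from showing past the map's edges:
// half the map size minus half the view size on each axis
let xInset: CGFloat = (1000 - 640) / 2      // 180
let yInset: CGFloat = (1400 - 1136) / 2     // 132
let xRange = SKRange(lowerLimit: -xInset, upperLimit: xInset)
let yRange = SKRange(lowerLimit: -yInset, upperLimit: yInset)
cam.constraints = [SKConstraint.positionX(xRange, y: yRange)]

With that in place you can move cam freely in the pan handler and the constraint keeps it inside the map.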
I want to rotate the camera up/down and left/right to look around an object (360-degree view) with a pan gesture, using an OpenGL-style lookAt function. I'm using Swift and Metal (in this case Metal = OpenGL ES). Here is the code:
The lookAt call (this function is in another ViewController, which inherits from the main ViewController that has the viewDidLoad and pan gesture functions):
let viewMatrix = lookAt(location, center: ktarget, up: up)
The viewDidLoad and the vars:
var ktarget = V3f()
var up = V3f()
var location = V3f()   // must be var, it is reassigned in viewDidLoad

override func viewDidLoad() {
    super.viewDidLoad()
    location = V3f(0.0, 0.0, -2.0)
    ktarget = V3f(0.0, 0.0, 0.0)
    up = V3f(0.0, 1.0, 0.0)
}
The pan gesture:
func pan(panGesture: UIPanGestureRecognizer) {
    up = V3f(0.0, 1.0, 0.0).Normalized()
    forward = (ktarget + -location).Normalized()
    right = Cross(forward, b: up).Normalized()

    if panGesture.state == UIGestureRecognizerState.Changed {
        let pointInView = panGesture.locationInView(self.view)
        let xDelta = (lastPanLocation.x - pointInView.x) / self.view.bounds.width * panSensivity
        let yDelta = (lastPanLocation.y - pointInView.y) / self.view.bounds.height * panSensivity

        // To rotate left or right, rotate the forward vector around up and then use a
        // cross product between forward and up to get the right vector.
        forward = rotationM3f(up, angle: Float(xDelta)) * forward.Normalized()
        right = Cross(forward, b: up).Normalized()

        // To rotate up or down, rotate the forward vector around the right vector and use a
        // cross product between right and forward to get the new up vector.
        forward = rotationM3f(right, angle: Float(yDelta)) * forward.Normalized()
        up = V3f(0.0, 1.0, 0.0).Normalized()

        ktarget = location + forward
        lastPanLocation = pointInView
    } else if panGesture.state == UIGestureRecognizerState.Began {
        ktarget = location + forward
        lastPanLocation = panGesture.locationInView(self.view)
    }
}
}
But pan the camera, I set up the up vector always =(0,1,0), to make people only can see 180 degree vertically. If user still pan When Camera look up or down(max value)the screen will flip, the real value x and z of target is changed little by little(very small numebr, such as 0.0000somenumber) every frames, so the draw function will draw the object very little difference every frames. Anyone knows how can fix the flip? Thanks~~~