Align a SceneKit Plane to a Face of a Cube - iOS

I have created an SCNBox in SceneKit and am trying to add a circular plane to the face that is touched by the user.
I can add the SCNPlane as a child node at the touch point using the hit test, but I'm struggling to orient the plane to the face that was touched.
The localNormal vector provided as part of the hit test result seems to be what I need, but I'm not sure how to use it. Normally I would orient using the eulerAngles property, but localNormal is a vector. I tried look(at:), which takes an SCNVector3, but that didn't seem to work.
Any suggestions would be gratefully received. The code sample below is taken from touchesBegan; "result" is the SCNHitTestResult:
//Draw circular plane, double sided
let circle = SCNPlane(width: 0.1, height: 0.1) //SCNSphere(radius: 0.1)
circle.cornerRadius = 0.5
circle.materials.first?.diffuse.contents = UIColor.black
circle.materials.first?.isDoubleSided = true
let circleNode = SCNNode(geometry: circle)
//Set position to hit test
circleNode.position = result.localCoordinates
let lookAtPoint = SCNVector3(result.localNormal.x * 100, result.localNormal.y * 100, result.localNormal.z * 100)
//Align to far point on normal
circleNode.look(at: lookAtPoint)
//Add to touched node
result.node.addChildNode(circleNode)
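One possible approach (a sketch, not from the original question): look(at:) expects a target in the scene's world coordinate space, whereas the point built from localNormal is in the hit node's local space, which may explain why it didn't behave as expected. Since an SCNPlane's surface normal is its local +Z axis, and both localCoordinates and localNormal are expressed in the hit node's space, the look(at:) call above could be replaced with a quaternion that rotates +Z onto the reported normal (iOS 11+ simd node APIs assumed):
let normal = simd_normalize(simd_float3(Float(result.localNormal.x),
                                        Float(result.localNormal.y),
                                        Float(result.localNormal.z)))
// Rotate the plane's +Z axis (its surface normal) onto the face's local normal.
circleNode.simdOrientation = simd_quatf(from: simd_float3(0, 0, 1), to: normal)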

Related

How to position walls at the edges of the screen using a fixed-position camera (SceneKit)

I have a SceneKit scene in which the camera is stationary, and is positioned like this: cameraNode.position = SCNVector3(0.0, 0.0, 100.0). Other than that, the camera has the default configuration.
In the scene is a single, spherical SCNNode with a physics body.
Below the sphere is a flat plane, with a physics body, on which the sphere rolls around. The plane is positioned in the center of the scene, at SCNVector3(0.0, 0.0, 0.0).
What I need is for the scene to be surrounded by invisible "walls" that are positioned exactly at the edges of the screen. The sphere should bounce off these static physics bodies so it never leaves the screen.
I've tried placing one of these "walls" (an SCNNode with an SCNBox geometry) using the actual screen dimensions, but the positioning is incorrect; the node is apparently off screen. This is presumably because SceneKit coordinates are in meters, not pixels or whatever.
Question: How can I figure out the positioning of the "walls" so that they are fixed to the edges of the screen?
Thanks for your help!
Use the SCNSceneRenderer's unprojectPoint(_:) method to convert the left and right edges of the screen to 3D scene coordinates, then add two planes at those coordinates.
let leftPoint = SCNVector3(scnView.bounds.minX, scnView.bounds.midY, 1.0)
let rightPoint = SCNVector3(scnView.bounds.maxX, scnView.bounds.midY, 1.0)
let leftPointCoords = scnView.unprojectPoint(leftPoint)
let rightPointCoords = scnView.unprojectPoint(rightPoint)
let rightPlane = SCNPlane(width: 100.0, height: 100.0)
let leftPlane = SCNPlane(width: 100.0, height: 100.0)
let rightPlaneNode = SCNNode(geometry: rightPlane)
rightPlaneNode.eulerAngles = SCNVector3(0, Float.pi / 2, 0)
rightPlaneNode.physicsBody = .init(type: .static, shape: nil)
let leftPlaneNode = SCNNode(geometry: leftPlane)
leftPlaneNode.physicsBody = .init(type: .static, shape: nil)
leftPlaneNode.eulerAngles = SCNVector3(0, Float.pi / 2, 0)
rightPlaneNode.position = rightPointCoords
leftPlaneNode.position = leftPointCoords
scene.rootNode.addChildNode(rightPlaneNode)
scene.rootNode.addChildNode(leftPlaneNode)
So, obviously there's a mathematical solution to this problem, and that's the best way to do this. Unfortunately, I'm not very good at math, so I had to come up with another solution.
First, I create the wall to be located at the top edge of the screen. It will begin at the center of the scene:
let topEdge = SCNNode()
let topEdgeGeo = SCNBox(width: MainData.screenWidth, height: 5.0, length: 5.0, chamferRadius: 0.0)
topEdge.geometry = topEdgeGeo
topEdge.physicsBody = SCNPhysicsBody(type: SCNPhysicsBodyType.kinematic, shape: SCNPhysicsShape.init(node: topEdge))
topEdge.physicsBody?.categoryBitMask = CollisionTypes.kinematicObjects.rawValue
topEdge.physicsBody?.collisionBitMask = CollisionTypes.dynamicObjects.rawValue
topEdge.physicsBody?.isAffectedByGravity = false
topEdge.physicsBody?.allowsResting = true
topEdge.physicsBody?.friction = 0.0
topEdge.position = SCNVector3(0.0, 0.0, 2.5)
scnView.scene?.rootNode.addChildNode(topEdge)
I then repeatedly reposition the wall a little bit farther up the y axis until it's no longer within the camera's viewport:
var topWallIsOnScreen: Bool = scnView.isNode(topEdge, insideFrustumOf: scnView.pointOfView!)
while topWallIsOnScreen {
    topEdge.position.y += 0.001
    topWallIsOnScreen = scnView.isNode(topEdge, insideFrustumOf: scnView.pointOfView!)
}
The end result is that the wall is positioned at the top edge of the screen. I was concerned about performance, but it seems to work just fine.

iOS ARKit + SceneKit physics contact detection scaling issue

I have a simple 3D area that contains 4 walls. Each is an SCNNode with a simple, rectangular SCNBox geometry and a matching SCNPhysicsBody attached. The SCNPhysicsBody uses a SCNPhysicsShape.ShapeType.boundingBox and is set to the static type. Here is a code snippet:
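// Note: the -, * and += operators used on SCNVector3 below are assumed to be
// custom operator overloads defined elsewhere in the project.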
let size = (self.levelNode.boundingBox.max - self.levelNode.boundingBox.min) * self.levelNode.scale
// geometryA spans the x axis, geometryB spans the z axis
let geometryA = SCNBox(width: CGFloat(size.x), height: CGFloat(1 * self.levelNode.scale.x), length: 0.01, chamferRadius: 0)
let geometryB = SCNBox(width: CGFloat(size.z), height: CGFloat(1 * self.levelNode.scale.x), length: 0.01, chamferRadius: 0)
geometryA.firstMaterial?.diffuse.contents = UIColor(red: 0.0, green: 0.2, blue: 1.0, alpha: 0.65)
geometryB.firstMaterial?.diffuse.contents = UIColor(red: 0.0, green: 0.2, blue: 1.0, alpha: 0.65)
let nodeA = SCNNode(geometry: geometryA)
nodeA.position += self.levelNode.position
nodeA.position += SCNVector3(0, 0.25 * self.levelNode.scale.y, -size.z/2)
nodeA.name = "Boundary-01"
let nodeB = SCNNode(geometry: geometryA)
nodeB.position += self.levelNode.position
nodeB.position += SCNVector3(0, 0.25 * self.levelNode.scale.y, size.z/2)
nodeB.name = "Boundary-03"
let nodeC = SCNNode(geometry: geometryB)
nodeC.position += self.levelNode.position
nodeC.position += SCNVector3(-size.x/2, 0.25 * self.levelNode.scale.y, 0)
nodeC.eulerAngles = SCNVector3(0, -Float.pi/2, 0)
nodeC.name = "Boundary-02"
let nodeD = SCNNode(geometry: geometryB)
nodeD.position += self.levelNode.position
nodeD.position += SCNVector3(size.x/2, 0.25 * self.levelNode.scale.y, 0)
nodeD.eulerAngles = SCNVector3(0, Float.pi/2, 0)
nodeD.name = "Boundary-04"
let nodes = [nodeA, nodeB, nodeC, nodeD]
for node in nodes {
    let shape = SCNPhysicsShape(geometry: node.geometry!, options: [
        SCNPhysicsShape.Option.type: SCNPhysicsShape.ShapeType.boundingBox])
    let body = SCNPhysicsBody(type: .static, shape: shape)
    node.physicsBody = body
    node.physicsBody?.isAffectedByGravity = false
    node.physicsBody?.categoryBitMask = Bitmask.boundary.rawValue
    node.physicsBody?.contactTestBitMask = Bitmask.edge.rawValue
    node.physicsBody?.collisionBitMask = 0
    scene.rootNode.addChildNode(node)
    node.physicsBody?.resetTransform()
}
Inside this area, I spawn entities at a regular time interval. Each also has an SCNBox geometry, a cube this time, smaller than the walls, with the same physics body parameters as above.
To simplify the behaviour of my entities inside this game area, I am calculating their paths to travel, then applying an SCNAction to the relevant node to move them. The SCNAction moves both the node and the physics body attached to it.
I am using the SCNPhysicsWorld contact delegate to detect when an entity reaches one of the boundary walls. I then calculate a random trajectory for it from that wall in another direction, clear its actions, and apply a new move SCNAction.
This is where it gets interesting...
When this 'world' is at 1:1 scale, the contacts are detected as normal, both in a standard SCNScene and in a scene projected using ARKit. The visible contact, i.e. the visible change in direction of the entity, appears to be close to the boundary as expected. When I check the contact.penetrationDistance of each contact, the values are e.g. 0.00294602662324905.
BUT when I change the scale of this 'world' to something smaller, say the equivalent of 10cm width, in ARKit, the simulation falls apart.
The contacts between an entity and a boundary node have a comparatively huge visible gap between them when the contact is detected. Yet the contact.penetrationDistance is of the same magnitude as before.
I switched on the ARSCNView debug options to show the physics shapes in the render, and they all appear to be the correct proportions, matching the bounding box of their node.
As you can see from the code example above, the boundary nodes are generated after I have scaled the level, during my AR user setup. They are added to the root node of the scene, not as a child of the level node. The same code is being used to generate the entities.
Previously I had tried using the resetTransform() function on the physics bodies, but this did not produce a reliable scaling of the physics bodies after I had scaled the level, so I decided to generate the nodes for the boundaries and entities after the level has been scaled.
Apple's documentation does state that if the SCNPhysicsBody is not a custom shape, it will adopt the scale of the node's geometry. This should not affect me, as I am generating the geometries and their respective nodes after the scaling has been applied to the level.
One of my assumptions at the moment is that the physics simulation falls apart at such a small scale. But I am not relying on the simulation of forces to move the bodies ...
Is there a more appropriate way to scale the physics world?
Or am I staring at a bug in SCNPhysicsWorld that is, for the moment, beyond my control?
One solution I did think about was to run the entire simulation at 1:1 scale but hidden, then apply those movements to the smaller entities. As you can imagine, that will affect the performance of the entire scene...
The penetration distance of the first contact is a negative value, suggesting there is a gap. This gap does not appear to scale as you scale down the size of the simulation.
To work around this excess gap, I added a check on the contacts in the contact delegate: rather than acting on the first contact detected for a particular category, I make sure the penetrationDistance value is positive, i.e. that the two objects actually overlap, before triggering a change in direction of the entity that hit a boundary.
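A minimal sketch of that check, assuming (as in the code above) that the moving entities carry the Bitmask.edge category; changeDirection(of:) is a hypothetical stand-in for the "clear actions and apply a new move SCNAction" logic:
func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    // A negative penetrationDistance means there is still a gap between the
    // shapes; ignore the contact until they genuinely overlap.
    guard contact.penetrationDistance > 0 else { return }
    // Work out which of the two nodes is the moving entity.
    let entity = contact.nodeA.physicsBody?.categoryBitMask == Bitmask.edge.rawValue
        ? contact.nodeA
        : contact.nodeB
    // Hypothetical helper: clears the entity's actions and starts a new move.
    changeDirection(of: entity)
}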

Scaling an object after rotating in SceneKit

I am trying to set up a simple scene (one spherical node and the default camera) in a square SceneView. Currently I set up the scene as below:
let scene = SCNScene()
let planet = SCNSphere(radius: 1.0)
let planetNode = SCNNode(geometry: planet)
scene.rootNode.addChildNode(planetNode)
For certain views, I also rotate the node like this:
let rotationNode = SCNNode()
rotationNode.addChildNode(planetNode)
scene.rootNode.addChildNode(rotationNode)
rotationNode.rotation = SCNVector4(x: 0, y: 0, z: 1, w: some_amount_of_radians)
What I noticed, however, is that the objects that get rotated appear smaller than the ones that don't. I am not really sure what the ratio is, but it seems to depend on how much rotation is added, up to a point.
In the below screenshot, Earth is rotated 45 degrees, and the other 2 are not rotated. If I rotated it 90 degrees instead, there is no difference, which leads me to believe there is a square bounding box around the sphere and the default camera is forcing its point of view to contain this box.
I have also tried changing the euler angles, position, and scale of the rotated nodes to compensate, but no transformations I apply seem to have any effect. Any pointers for solving this camera issue would be perfect.
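A possible sketch (not an answer from the original thread), following the questioner's own hypothesis: with no camera node in the scene, SceneKit frames the content with an automatically created default camera, so a rotated node's larger axis-aligned bounding box can make it appear zoomed out. Adding an explicit camera keeps the framing fixed regardless of rotation; the position here is an assumption:
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
// Fixed distance chosen purely for illustration; adjust as needed.
cameraNode.position = SCNVector3(x: 0, y: 0, z: 5)
scene.rootNode.addChildNode(cameraNode)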

Moving SCNLight with SCNAction

I have a spotlight, created with the code beneath, casting shadows on all of my nodes:
spotLight.type = SCNLightTypeSpot
spotLight.spotInnerAngle = 50.0
spotLight.spotOuterAngle = 150.0
spotLight.castsShadow = true
spotLight.shadowMode = SCNShadowMode.Deferred
spotlightNode.light = spotLight
spotlightNode.eulerAngles = SCNVector3(x: GLKMathDegreesToRadians(-90), y: 0, z: 0)
spotlightNode.position = levelData.coordinatesForGridPosition(column: 0, row: playerGridRow)
spotlightNode.position.y = 1.5
rootNode.addChildNode(spotlightNode)
The scene is moving along the z axis, and the camera has an infinite animation that makes it move:
let moveAction = SCNAction.moveByX(0.0, y: 0.0, z: CGFloat(-GameVariables.segmentSize / 2), duration: 2.0)
cameraContainerNode.runAction(SCNAction.repeatActionForever(moveAction))
As the camera moves though, the light doesn't, so after a while the whole scene is dark. I want to move the light with the camera; however, if I apply the same moving animation to the light node, all the shadows start to flicker. I tried changing the SCNShadowMode to Forward and the light type to Directional, but the flickering is still there. With directional, I actually lose most of my shadows. If I create a new light node later on, it will seem that I have two "suns", which of course is impossible. The final aim is simply to have an infinite light that shines parallel to the scene from the left, casting all the shadows to the right. Any ideas?
Build a node tree to hold both spotlight and camera.
Create, say, cameraRigNode as an SCNNode with no geometry. Create cameraContainerNode and spotlightNode the same way you are now. But make them children of cameraRigNode, not the scene's root node.
Apply moveAction to cameraRigNode. Both the camera and the light will now move together.
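A sketch of that hierarchy, reusing the question's existing nodes and action (cameraRigNode is an illustrative name):
let cameraRigNode = SCNNode() // empty node, no geometry
cameraRigNode.addChildNode(cameraContainerNode) // existing camera container
cameraRigNode.addChildNode(spotlightNode) // existing spotlight node
rootNode.addChildNode(cameraRigNode)
// Run the move action on the rig so the camera and the light translate together.
let moveAction = SCNAction.moveByX(0.0, y: 0.0, z: CGFloat(-GameVariables.segmentSize / 2), duration: 2.0)
cameraRigNode.runAction(SCNAction.repeatActionForever(moveAction))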

How to "center" SKTexture in SKSpriteNode

I'm trying to make a jigsaw puzzle game in SpriteKit. To make things easier I am using a 9x9 board of square tiles. On each tile there is one child node holding the piece of the image for that area.
But here my problem starts. A jigsaw puzzle piece isn't a perfect square, and when I apply the SKTexture to the node it is just placed from anchorPoint = {0,0}. The result isn't pretty; actually, it's terrible.
https://www.dropbox.com/s/2di30hk5evdd5fr/IMG_0086.jpg?dl=0
I managed to fix the tiles with "hooks" on the right and top, but the left and bottom sides don't respond to anything I try.
var sprite = SKSpriteNode()
let originSize = frame.size
let textureSize = texture.size()
sprite.size = originSize
sprite.texture = texture
sprite.size = texture.size()
let x = (textureSize.width - originSize.width)
let widthRate = x / textureSize.width
let y = (textureSize.height - originSize.height)
let heightRate = y / textureSize.height
sprite.anchorPoint = CGPoint(x: 0.5 - (widthRate * 0.5), y: 0.5 - (heightRate * 0.5))
sprite.position = CGPoint(x: frame.width * 0.5, y: frame.height * 0.5)
addChild(sprite)
Can you give me some advice?
I don't see a way you can get the placement right without knowing more about the piece textures you are using, because they will all be different: for example, whether a piece has a nob on any of its sides, and how much width/height that nob adds to the texture. It's hard to tell in the picture, but even if a piece doesn't have a nob and instead has an inset, that might add varying sizes.
Without knowing how the texture is created I am not able to offer help on that, but I do believe the issue starts there. If it were me, I would create a square texture with additional transparent padding so the piece is centered; the center of that texture would then always be placed at the center of a square on the grid.
With all that being said, adding that texture to a node and then adding that node to an SKNode will make your placement go more smoothly with the way you currently have it. The trick will then only be placing that textured piece correctly within the empty SKNode.
For example...
let piece = SKSpriteNode()
let texturedPiece = SKSpriteNode(texture: texture)
//positioning
//offset x needs to be calculated with additional info about the texture
//for example it has just a nob on the right
let offsetX : CGFloat = -nobWidth/2
//offset y needs to be calculated with additional info about the texture
//for example it has a nob on the top and bottom
let offsetY : CGFloat = 0.0
texturedPiece.position = CGPoint(x: offsetX, y: offsetY)
piece.addChild(texturedPiece)
let squareWidth = size.width/2
//Now that the textured piece is placed correctly within a parent
//placing the parent is super easy and consistent without messing
//with anchor points. This will also make rotations nice.
piece.position = CGPoint(x: squareWidth/2, y: squareWidth/2)
addChild(piece)
Hopefully that makes sense and didn't confuse things further.
