SceneKit node normals, geometries, etc. - iOS

All,
There is a specific switch to make normals, geometries and other parts of all nodes in a scene visible.
I thought I kept it somewhere in my code, but I must have erased it somehow.
I need this function because the contact detection doesn't seem to work right.
I want to detect if thrown grenades in a tunnel touch the tunnel walls. Right now, most of them just fall through.
Anybody know the command?

Like showBoundingBoxes?
SCNDebugOptions
Edit:
You can set it in your UIViewController along with...
gameScene.allowsCameraControl = false
gameScene.showsStatistics = false
Go to this link and see SCNDebugOptions for the complete list.
https://developer.apple.com/documentation/scenekit/scnscenerenderer/1523281-debugoptions
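For example, a minimal sketch, assuming the same gameScene SCNView as above (.showPhysicsShapes is the option that actually reveals the collision shapes, which is what you want for debugging the grenade/tunnel contacts):
gameScene.debugOptions = [.showPhysicsShapes, .showBoundingBoxes, .showWireframe]
Seeing the physics shapes also helps spot the usual causes of bodies falling through geometry, such as a convex-hull shape standing in for a concave tunnel, or fast-moving bodies tunneling through thin walls.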

Related

Getting the current visible entities in RealityKit

Currently, RealityKit doesn't have any method that provides the currently visible entities. In SceneKit we do have a method for exactly that: nodesInsideFrustum(of:).
Our internal solution is to create a big fake bounding box in front of the camera and check intersections between that "frustum" bounding box and each entity's bounding box. That is, of course, a bit cumbersome and inaccurate. I wonder if someone has come up with a better solution and is willing to share it.
You could combine two ARView methods:
- ARView.project(position) to get the 2D point in screen space
- ARView.bounds.contains(point) to know if it's visible on screen
But that's not enough; you also have to check whether the object is behind you:
- Entity.position(relativeTo: cameraAnchor) (with cameraAnchor being an AnchorEntity(.camera)) to get the local position
- the sign of localPosition.z tells you whether it's in front of or behind the camera
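Putting those pieces together, a minimal sketch (the function name isEntityVisible and the cameraAnchor parameter are illustrative; cameraAnchor is assumed to be an AnchorEntity(.camera) already added to the scene):
import RealityKit

func isEntityVisible(_ entity: Entity, in arView: ARView, cameraAnchor: AnchorEntity) -> Bool {
    // 1. Reject anything behind the camera (the camera looks down -Z in its own space).
    let localPosition = entity.position(relativeTo: cameraAnchor)
    guard localPosition.z < 0 else { return false }
    // 2. Project the entity's world position and test it against the view's bounds.
    guard let screenPoint = arView.project(entity.position(relativeTo: nil)) else { return false }
    return arView.bounds.contains(screenPoint)
}
Note this only tests the entity's origin; for large models you may want to project the corners of visualBounds(relativeTo: nil) instead.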

Different results when using isNode(_:insideFrustumOf:) vs nodesInsideFrustum(of:)

Have a quick question to see if I'm using isNode(_:insideFrustumOf:) correctly.
I'm putting a node (SCNNode) into an ARSCNView with the standard configuration. The geometry in the node is off-center, so I used the node's pivot property to adjust its center point.
let c = object.boundingSphere.center
object.pivot = SCNMatrix4MakeTranslation(c.x, c.y, c.z)
object.position = c
My issue arises after I update the object's scale or rotation using a pinch gesture. After this happens, I get different results from isNode(_:insideFrustumOf:) and from nodesInsideFrustum(of:).
The node I'm testing is clearly visible on the screen, but isNode(_:insideFrustumOf:) fails to see it. However, the node is in the [SCNNode] results from nodesInsideFrustum(of:).
My question is whether this is a bug, or is there some other proper way of centering geometry to a node that may fix this issue. For now, I'm going to use the nodesInsideFrustum(of:) function and test if the object is in the array.
Thanks!
The trick is making sure you are using the "presentation" node for any moving objects, even the pointOfView if it's moving.
This is a working snippet from my app, inside renderer(_:updateAtTime:):
if renderer.isNode(stationaryObjectNode, insideFrustumOf: renderer.pointOfView!.presentation) {
//do stuff if seen
}
If your object is not stationary, use movingObject.presentation instead of stationaryObjectNode.
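For context, a slightly fuller sketch of the moving-object case inside the delegate callback (movingObject is an illustrative name, not from the snippet above):
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    guard let pov = renderer.pointOfView else { return }
    // Test the presentation nodes so the check reflects where things actually are this frame.
    if renderer.isNode(movingObject.presentation, insideFrustumOf: pov.presentation) {
        // do stuff if seen
    }
}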

Removing SKNodes When Not Visible On Screen

In my game, the size of the level can be larger than the screen of the phone and the camera will follow the player around the level, so there can be a decent amount of content (such as SKEmitterNodes) in the scene that is not visible at any given time. I've been reading through some of the SpriteKit documentation and found this quote in the SKEmitterNode section:
"Consider removing a particle emitter from the scene when it is not
visible onscreen. Add it just before it becomes visible."
Is this something that can be done in my type of game design? I don't want the nodes to be completely removed, since they will eventually be put back on the screen, but is there a good way for me to add/remove the SKEmitterNodes (or other SpriteNodes) that are a certain distance from the screen, and is this a good idea to do? I'm looking to improve my frame rate and don't want costly nodes like SKEmitterNodes working while they're not even being displayed, but will adding/removing them as the player moves around reduce performance?
Here is the idea I currently have: create a rectangle that extends a certain distance around the screen and detect when a node comes into that rectangle, and if it's not already added to the scene, go ahead and add it. Thank you for any suggestions.
SKNodes really aren't a problem, because when they are off screen they are not being rendered anyway, just evaluated. So the main thing to worry about with SKNodes is any physics bodies attached to them.
SKEmitterNodes, however, do require some processing power, and that is why Apple recommends not having them emit when they are not on screen. I would subclass SKScene and check only the SKEmitterNodes to see whether they are in frame, and emit based on that.
So I would throw all your SKEmitterNodes into a container like an array, and have a loop that does a CGRectIntersectsRect check for each node based on your camera location and viewable screen size: if they intersect, add them to the scene; if not, remove them from the scene. The array keeps a strong reference, so you do not have to worry about them being deinitialized on you.
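A rough sketch of that idea in an SKScene subclass; trackedEmitters and visibleRect(padding:) are names made up for this example, the emitters are assumed to be positioned in scene coordinates, and a simple contains check on the emitter's position is used instead of a full rect intersection for brevity:
import SpriteKit

class GameScene: SKScene {
    // Strong references keep off-screen emitters alive while they are removed from the scene.
    var trackedEmitters: [SKEmitterNode] = []

    // A rectangle around the camera, padded so emitters are re-added just before they scroll into view.
    func visibleRect(padding: CGFloat) -> CGRect {
        let center = camera?.position ?? CGPoint(x: frame.midX, y: frame.midY)
        return CGRect(x: center.x - size.width / 2 - padding,
                      y: center.y - size.height / 2 - padding,
                      width: size.width + padding * 2,
                      height: size.height + padding * 2)
    }

    override func update(_ currentTime: TimeInterval) {
        let rect = visibleRect(padding: 100)
        for emitter in trackedEmitters {
            let onScreen = rect.contains(emitter.position)
            if onScreen && emitter.parent == nil {
                addChild(emitter)          // add it just before it becomes visible
            } else if !onScreen && emitter.parent != nil {
                emitter.removeFromParent() // stop paying for particles nobody can see
            }
        }
    }
}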

SKPhysicsWorld bodyWithTexture not working well with complex shapes

Am I the only one who's having issues with the new bodyWithTexture function of SKPhysicsBody?
I'm new to iOS development and maybe it's me, but I'm trying to create a game where I need to detect if a ball is inside a path.
I'm loading both from images dynamically (as the levels proceed, the paths get more and more complex), and I'm setting a physics body on both the ball (bodyWithCircle) and on the dynamic path, which is a PNG file of a path with the rest being transparent background. I'm using the new bodyWithTexture function (yes, I know it's supported only under iOS 8), and after assigning bit masks I've defined a contact between the ball and the path and am notified with begin/end contact.
SKSpriteNode *lvlPath = [SKSpriteNode spriteNodeWithImageNamed:currentLevel.imagePath];
lvlPath.position = CGPointMake(self.frame.size.width/2, self.frame.size.height/2);
lvlPath.physicsBody = [SKPhysicsBody bodyWithTexture:lvlPath.texture size:lvlPath.frame.size];
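For reference, a rough Swift sketch of the full setup described above (circle body, texture body, bit masks and the contact callbacks); the scene class, category constants and image names here are illustrative, not from the original project:
import SpriteKit

let ballCategory: UInt32 = 0x1 << 0
let pathCategory: UInt32 = 0x1 << 1

class LevelScene: SKScene, SKPhysicsContactDelegate {
    override func didMove(to view: SKView) {
        physicsWorld.contactDelegate = self

        // Path sprite: body generated from the PNG's opaque pixels.
        let lvlPath = SKSpriteNode(imageNamed: "level_path")
        lvlPath.position = CGPoint(x: frame.midX, y: frame.midY)
        lvlPath.physicsBody = SKPhysicsBody(texture: lvlPath.texture!, size: lvlPath.frame.size)
        lvlPath.physicsBody?.isDynamic = false
        lvlPath.physicsBody?.categoryBitMask = pathCategory
        lvlPath.physicsBody?.contactTestBitMask = ballCategory
        addChild(lvlPath)

        // Ball: simple circle body that only reports contacts, with no collision response.
        let ball = SKSpriteNode(imageNamed: "ball")
        ball.physicsBody = SKPhysicsBody(circleOfRadius: ball.size.width / 2)
        ball.physicsBody?.categoryBitMask = ballCategory
        ball.physicsBody?.contactTestBitMask = pathCategory
        ball.physicsBody?.collisionBitMask = 0
        addChild(ball)
    }

    func didBegin(_ contact: SKPhysicsContact) { print("didBeginContact") }
    func didEnd(_ contact: SKPhysicsContact) { print("didEndContact") }
}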
Now for simple paths like a straight line, it works great. Once it comes to complicated paths, the mechanism goes crazy, at least in my simulator (running iOS 8).
I've created another simple app just to check this issue, and saw that it goes crazy when the path is a complex shape. When the ball enters the path in one direction it seems to be working (begin/end contact), but going in the reverse direction it suddenly acts weird: while still inside the path it reports that the contact has ended, and then seemingly randomly flips begin/end contact.
Help... Since the levels are loaded dynamically, this is a really cool feature for me, saving me from defining all levels as CGPathRef and creating a polygon for each level (and perhaps device).
thanks all,
Eyal
Edit:
example screenshot:
https://www.dropbox.com/s/e8v9g1kajtvakfq/screenshot%20bodywithtexture.jpg?dl=0
In this example, the ball with the arrow is initialized using bodyWithCircle, and the C-shaped object is initialized using bodyWithTexture. I'm debug-printing "didBeginContact" and "didEndContact", and it freaks out in the top line there; you can see it says "didEndContact" while the two objects are definitely in contact. If I jiggle it (I'm moving it with the cursor) it suddenly flips to "didBeginContact".
With simpler objects (like horizontal/vertical lines with round corners) it works perfectly.

Hit-testing a UIGestureRecogniser in 3d space

I'm reasonably new to iOS's SceneKit and have come across a dilemma with regard to user interaction in a 3D scene:
I have a set of SCNNode cubes in an SCNView, and would like to be able to pinpoint where a user touches the mesh of a given cube, as a 3D coordinate (so as to later manipulate the scene according to touch vectors). At present, I've been using a UIGestureRecognizer to achieve basic hit-testing, but this seems to be limited to returning 2D points.
This isn't a problem when wanting to hit-test a whole node, as this can be achieved via the SCNView's hitTest method with the gesture recognizer's location. However, does anybody have any suggestions as to how to precisely locate where a touch landed on a node, in terms of coordinates (i.e. SCNVector3)?
Thanks!
You are on the right track with calling hitTest:options: on the SCNView. As you have probably seen, it results in an array of SCNHitTestResult objects.
The hit test result can tell you many things about the hit, one of them being what node was hit. What you are looking for is either the localCoordinates or the worldCoordinates.
The local coordinate is relative to the node that was hit. Since you are asking "how to precisely locate where a touch landed on a node" this is probably the one you are looking for.
The world coordinate is relative to the root node.
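For example, a minimal sketch of a tap handler, assuming scnView is the SCNView the recognizer is attached to:
@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: scnView)
    if let hit = scnView.hitTest(point, options: nil).first {
        let local = hit.localCoordinates   // relative to the node that was hit
        let world = hit.worldCoordinates   // relative to the scene's root node
        print("Hit \(hit.node) at local \(local), world \(world)")
    }
}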
