Detecting nodes on any side of a touched node - iOS

I have a game built using SceneKit and Swift, and I have been struggling to figure out how to solve this problem.
I am trying to detect which nodes are touching in my specific scenario. The image below demonstrates the issue I am facing: if a user touched any of the yellow cubes, it would highlight that whole chain of yellow cubes. The same goes for the three red cubes on the bottom and the two red cubes on the top.
The way the game works is that the user is given a shape made of cubes. The shape can change position as the user swipes it in various ways. Cubes may appear in or be removed from the scene, so the positions of the cubes can change easily. Finally, a gravity function pulls the cubes down to the ground when the user swipes the shape, so if they twisted the shape below to the right, it would end up as a brand-new shape with most of the cubes in new positions.
Here is what I have tried:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    let touch = touches.first!
    let location = touch.location(in: gameView)
    let hitList = gameView.hitTest(location, options: nil)
    if let hitObject = hitList.first {
        let node = hitObject.node
        // This is where I'm trying to detect the nodes and remove them.
        // (Each node's name stores its color.)
        gameScene.rootNode.childNodes
            .filter { $0.name == node.name }
            .forEach { $0.removeFromParentNode() }
    }
}
The problem with my code is that it removes all of the cubes that are the same color as the hit cube.

I would not use SceneKit APIs to solve this problem.
You have a game with cubes that can be arranged according to specific constraints. The application should have a model (an abstract representation) of where each cube is; the drawn cube is only a view of that model. Everything that involves your gameplay, including resolving which cubes are part of a chain of the same color, should be done on that abstract representation, and any update to the state of the cubes should then be propagated to the SceneKit node hierarchy.
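For illustration, here is a minimal sketch of that idea, assuming the cubes sit on an integer grid and each occupied cell stores a color name; the GridPoint and CubeModel types are hypothetical, not part of the question's code. A flood fill over the model finds the chain of same-colored neighbors:

// Minimal sketch of a model-side chain search. All names are illustrative.
struct GridPoint: Hashable {
    var x: Int, y: Int, z: Int
}

struct CubeModel {
    var colors: [GridPoint: String] // e.g. "yellow", "red"

    /// All cells connected to `start` through same-colored neighbors (flood fill).
    func chain(from start: GridPoint) -> Set<GridPoint> {
        guard let color = colors[start] else { return [] }
        var visited: Set<GridPoint> = []
        var stack = [start]
        while let p = stack.popLast() {
            guard colors[p] == color, visited.insert(p).inserted else { continue }
            // Push the six axis-aligned neighbors.
            stack += [
                GridPoint(x: p.x + 1, y: p.y, z: p.z), GridPoint(x: p.x - 1, y: p.y, z: p.z),
                GridPoint(x: p.x, y: p.y + 1, z: p.z), GridPoint(x: p.x, y: p.y - 1, z: p.z),
                GridPoint(x: p.x, y: p.y, z: p.z + 1), GridPoint(x: p.x, y: p.y, z: p.z - 1),
            ]
        }
        return visited
    }
}

On a touch you would hit-test as before, map the hit SCNNode back to its grid cell, call chain(from:), and then remove or highlight only the corresponding nodes.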

Related

MapKit Coordinate Projection Gives Erroneous Results from Scene Renderer

I am working on an app that overlays a MapKit Map View with a SceneKit SceneView (similar to https://blog.classycode.com/how-to-write-a-pok%C3%A9mon-go-clone-for-ios-edf1cf1cf5ce). In my scene I have a 3D node whose position is defined such that it appears on top of a specific map coordinate.
In the scene renderer I update the node's position by converting the map coordinate to a point and projecting that point into the 3D scene. This method allows the user to rotate the map around while the 3D node appears to stay in place.
This is working. However, the node is "flickering" as the map changes. After investigating, I discovered that the culprit is the map coordinate conversion. For some reason, when the map is rotating, sometimes the converted screen coordinate is very wrong (somewhere off the screen).
I think the issue has something to do with threading. The map rotation angle is defined in the touchesMoved function and applied in the scene renderer. I have tried moving the map rotation and node-position update into the touchesMoved function, but that made the converted points wrong every time.
Here is what my structure looks like:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
    // *grab map rotation angle start*
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
    // *calculate new map rotation angle*
}

extension GameViewController: SCNSceneRendererDelegate {
    func renderer(renderer: SCNSceneRenderer, updateAtTime time: NSTimeInterval) {
        // *set map rotation angle*
        // *update 3D node position*
    }
}
I expect the map coordinate conversion to always give me a screen point that makes sense (I know the coordinate is on the screen), but the result is intermittently wrong during rotation.
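For reference, here is a minimal sketch (in current Swift) of the conversion step described above. The mapView, sceneView, targetCoordinate, and markerNode names are illustrative assumptions, and dispatching to the main queue (MKMapView is main-thread-only) is one plausible guard against the suspected threading issue, not a confirmed fix:

func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    // MKMapView must be queried on the main thread.
    DispatchQueue.main.async {
        // Map coordinate -> point in the map view's coordinate space.
        let pointInView = self.mapView.convert(self.targetCoordinate,
                                               toPointTo: self.mapView)
        // Unproject the 2D point into the 3D scene at a fixed depth (z in 0...1).
        let scenePosition = self.sceneView.unprojectPoint(
            SCNVector3(Float(pointInView.x), Float(pointInView.y), 0.9))
        self.markerNode.position = scenePosition
    }
}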
UPDATE:
Two interesting new discoveries:
1) The problem only exists when the map type is .satelliteFlyover (unfortunately the one I need)
2) The problem goes away when using Apple's default user-interactive map rotation (also unfortunate because I need to control the rotation differently)
It really seems like some sort of thread-processing issue where there is a "dead time" after the map camera has been changed during which coordinate projection just gives the wrong value. But what could Apple be doing differently from me in their rotation method that eliminates this problem?

How to keep ARKit SCNNode in place

Hey, I'm trying to figure out how to keep a simple node in place as I walk around it in ARKit.
Code:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    if let planeAnchor = anchor as? ARPlaneAnchor {
        if planeDetected == false { // Bool only allows 1 plane to be added
            planeDetected = true
            self.addPlane(node: node, anchor: planeAnchor)
        }
    }
}
This adds the SCNNode
func addPlane(node: SCNNode, anchor: ARPlaneAnchor) {
    // We add the anchor plane here
    let showDebugVisuals = Bool()
    let plane = Plane(anchor, showDebugVisuals)
    planes[anchor] = plane
    node.addChildNode(plane)

    // We add our custom SCNNode here
    let scene = SCNScene(named: "art.scnassets/PlayerModel.scn")!
    let Body = scene.rootNode.childNode(withName: "Body", recursively: true)!
    Body.position = SCNVector3.positionFromTransform(anchor.transform)
    Body.movabilityHint = .movable
    wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
    wrapperNode.addChildNode(Body)
    scnView.scene.rootNode.addChildNode(wrapperNode)
}
I've tried adding a plane/anchor node and putting the "Body" node in that, but it still moves. I thought maybe it has something to do with the update function:
func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
}
Or, most likely, the position setting:
wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
I've looked through every source / project file / video on the internet, and nobody has a simple solution to this simple problem.
There are two kinds of "moving around" that could be happening here.
One is that ARKit is continuously refining its estimate of how the device's position in the real world maps to the abstract coordinate space you're placing virtual content in. For example, suppose you put a virtual object at (0, 0, -0.5), and then move your device to the left by exactly 10 cm. The virtual object will appear to be anchored in physical space only if ARKit tracks the move precisely. But visual-inertial odometry isn't an exact science, so it's possible that ARKit thinks you moved to the left by 10.5 cm — in that case, your virtual object will appear to "slip" to the right by 5 mm, even though its position in the ARKit/SceneKit coordinate space remains constant.
You can't really do much about this, other than hope Apple makes devices with better sensors, better cameras, or better CPUs/GPUs and improves the science of world tracking. (In the fullness of time, that's probably a safe bet, though that probably doesn't help with your current project.)
Since you're also dealing with plane detection, there's another wrinkle. ARKit is continuously refining its estimates of where a detected plane is. So, even though the real-world position of the plane isn't changing, its position in ARKit/SceneKit coordinate space is.
This kind of movement is generally a good thing — if you want your virtual object to appear anchored to the real-world surface, you want to be sure of where that surface is. You'll see some movement as plane detection gets more sure of the surface's position, but after a short time, you should see less "slip" for plane-anchored virtual objects than for those that are just floating in world space as you move the camera around.
In your code, though, you're not taking advantage of plane detection to make your custom content (from "PlayerModel.scn") stick to the plane anchor:
wrapperNode.position = SCNVector3.positionFromTransform(anchor.transform)
wrapperNode.addChildNode(Body)
scnView.scene.rootNode.addChildNode(wrapperNode)
This code uses the initial position of the plane anchor to position wrapperNode in world space (because you're making it a child of the root node). If you instead make wrapperNode a child of the plane anchor's node (the one you received in renderer(_:didAdd:for:)), it'll stay attached to the plane as ARKit refines its estimate of the plane's position. You'll get a little bit more movement initially, but as plane detection "settles", your virtual object will "slip" less.
(When you make the node a child of the plane, you don't need to set its position — a position of zero means it's right where the plane is. If anything, you need to set its position only relative to the plane — i.e. how far above/below/along it.)
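A minimal sketch of that reparenting, reusing the question's wrapperNode (the guard and the zero position are the only changes; everything else is assumed to be set up as in the question):

func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARPlaneAnchor else { return }
    // As a child of the anchor's node, (0, 0, 0) is exactly at the plane,
    // and ARKit keeps the node attached while the plane estimate is refined.
    wrapperNode.position = SCNVector3Zero
    node.addChildNode(wrapperNode)
}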
To keep an SCNNode in place, you can disable the scene view's plane detection once you get the result you desire:
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = []
self.sceneView.session.run(configuration)
The reason for this is that ARKit constantly re-estimates the position of the detected plane, resulting in your SCNNode moving around.

Most optimal method of implementing tappable shapes?

I'm building an app that provides an editable canvas, similar to Photoshop or Illustrator (currently using SpriteKit). I'm encountering some performance issues when displaying a large number of nodes (1000+), as you might imagine.
Currently, I've got an SKScene with SKShapeNodes on it. They presently do not have any images and are simply filled with a color. Their shape paths (CGPaths) vary from circles, to bezier paths. Each shape currently has an SKPhysicsBody with the same path as the rendered shape that is used to detect taps.
The performance issues can be described as:
- slowness when adding 1000 nodes (circles); uses about 0.1 MB of memory per node
- slowness when moving 1000 nodes (circles)
- slowness when generating a texture from 1000 nodes (circles)
- disabling physics bodies doesn't substantially improve performance, but does improve CPU load (it drops from a constant 60% to 1% or so)
Most users will not be working with 1000 nodes, but I'd like to implement the optimal solution.
What I'd like is to have two layers:
- a render layer on which I'd like to render CGPaths with strokes and fills (preferably choosing the stroke end-cap style, among other little things)
- an interaction layer on which I'd like to detect taps inside CGPaths and stroke CGPaths with a color to indicate highlighting
How can I accomplish this or a similar solution that will improve the speed at which I can render 1000 circles?
Don't use SKShapeNode; it needs one draw call per shape node. You can create a shape, then "cast" it to an SKSpriteNode before adding it to the scene:
func shapeToSprite(_ shape: SKShapeNode) -> SKSpriteNode {
    let sprite = SKSpriteNode(texture: SKView().texture(from: shape))
    sprite.physicsBody = shape.physicsBody // Or create a new PB from alpha mask (may be slower, IDK)
    shape.physicsBody = nil
    return sprite
}

override func didMove(to view: SKView) {
    let shape = SKShapeNode(circleOfRadius: 60)
    addChild(shapeToSprite(shape))
}
This would suck, though, if you needed to constantly edit the size/shape of your shape in a way that isn't doable via scaling or SKWarpGeometry.
To detect taps like this, you just need to either (1) set a name for the sprite node and look for it in touchesBegan, or (2) subclass SKSpriteNode and override its touchesBegan.
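Here is a minimal sketch of option 1, assuming the sprites were given the name "circle" when created (the name and the fade action are illustrative):

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let location = touches.first?.location(in: self) else { return }
    // nodes(at:) returns every node whose content intersects the point.
    for node in nodes(at: location) where node.name == "circle" {
        node.run(SKAction.fadeAlpha(to: 0.5, duration: 0.2)) // placeholder response
    }
}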
Also, if you just need a static image, you can bit-blit thousands of nodes onto one texture:
Sprite Kit: A Lot of sprites (1000+) with Bit Blitting
http://www.sdkboy.com/2014/01/spritekit-bit-blitting-by-drawing-to-a-texture-for-performance-optimization/
So if you don't need every node to be touchable at all times, you can alternate between having physics bodies and not, or switch layers from bit-blitted to active.
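As a minimal sketch of the bit-blit approach from those links, you can render many static nodes into one texture and replace them with a single sprite (the flattened helper is a made-up name):

func flattened(_ nodes: [SKNode], using view: SKView) -> SKSpriteNode {
    // Gather the nodes under one container and render it to a single texture.
    let container = SKNode()
    nodes.forEach { container.addChild($0) }
    return SKSpriteNode(texture: view.texture(from: container))
}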

How to return the SKSpriteNode within given constraints following an event?

Suppose the following:
You have a myriad of SKSpriteNodes in the view.
When the user taps the screen, you want whatever sprite is in/near a specific location to do an animation.
Question: How can I figure out which SKSpriteNode is at the specific location without looping through all sprites?
For this, I have implemented an SKSpriteNode, box, which is transparent, has a texture that covers the span of the specific location, and is positioned accordingly.
The SKSpriteNode methods contains and intersects seem promising, but they require that I pass a point or a sprite, respectively.
Question: How can I get an SKSpriteNode to report what sprite, if any, it intersects with? Again, without looping through every sprite. If two sprites intersect with box, return only the one that most prominently intersects box.
Diagram:
This is not my actual use case, but it illustrates the point. There are a lot of sprites (more than visualized below), and there is an area of interest such that:
- if the user touches it, and
- a sprite is in that area,
I want to know what sprite is there.
There is no way to do this without SOMETHING looping through the sprites. That's either:
- the physics engine, as Stoneburner suggests,
- the scene, via update() setting flags on sprites when they're in the region, or
- your code that handles the touch, searching for sprites in the region.
GameplayKit offers some optimisations on doing this sort of thing: https://developer.apple.com/reference/gameplaykit/gkrtree
- Attach a UITapGestureRecognizer to the view.
- On tap state UIGestureRecognizerStateRecognized, get the location of the tap using CGPoint pointInView = [tapper locationInView:mySKScene.view].
- Convert from the view's coordinate system to the scene's coordinate system using CGPoint pointInScene = [mySKScene convertPointFromView:pointInView].
- Get the node at that point by asking the scene: SKNode *touchedNode = [mySKScene nodeAtPoint:pointInScene].
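In Swift, those steps might look like this minimal sketch (the controller class and its skView outlet are illustrative assumptions):

import SpriteKit
import UIKit

class GameViewController: UIViewController {
    @IBOutlet var skView: SKView!

    override func viewDidLoad() {
        super.viewDidLoad()
        let tapper = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        skView.addGestureRecognizer(tapper)
    }

    @objc func handleTap(_ tapper: UITapGestureRecognizer) {
        guard tapper.state == .recognized, let scene = skView.scene else { return }
        // View coordinates -> scene coordinates, then hit-test the scene.
        let pointInView = tapper.location(in: skView)
        let pointInScene = scene.convertPoint(fromView: pointInView)
        let touchedNode = scene.atPoint(pointInScene)
        print("tapped node: \(touchedNode.name ?? "unnamed")")
    }
}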
You can use SKPhysicsBodies to detect collisions (overlaps).
Assign physics bodies to all SKNodes, dynamically add one over the region you want to detect nodes inside, handle the SKPhysicsContactDelegate callbacks, then remove the body again.
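A minimal sketch of that region sensor (the category masks, position, and size are illustrative assumptions; the sprites themselves need dynamic bodies for contacts to be reported):

// Inside your SKScene setup (e.g. didMove(to:)).
let spriteCategory: UInt32 = 1 << 0
let regionCategory: UInt32 = 1 << 1

// Invisible static sensor over the area of interest; contacts are reported
// via SKPhysicsContactDelegate without physically pushing the sprites around.
let region = SKNode()
region.position = CGPoint(x: 200, y: 300)
region.physicsBody = SKPhysicsBody(rectangleOf: CGSize(width: 120, height: 120))
region.physicsBody?.isDynamic = false
region.physicsBody?.categoryBitMask = regionCategory
region.physicsBody?.contactTestBitMask = spriteCategory
region.physicsBody?.collisionBitMask = 0 // overlap detection only, no collision response
addChild(region)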

SpriteKit : enumerateBodiesAtPoint not showing correct bodies

I have a few incomplete circles (arcs) rotating constantly, and a user going from circle to circle.
I removed all gravity, forces, etc. from my scene.
Image A
Image B
Problem: I am trying to do hit detection where I just check where the user is and whether any SKNodes' physics bodies are at that point in the scene's physics world. If the point hits the shape, the user can continue (image A), but fails if it is outside (image B).
Although the shapes are pretty complex, the scene's showsPhysics overlay seems to match my shapes precisely (see images A and B).
let updatedOrigin = user.calculateAccumulatedFrame().origin
user.scene?.physicsWorld.enumerateBodiesAtPoint(updatedOrigin, usingBlock: { (body, stop) in
    print("🍄 Shape contains \(body.node!.name)")
})
which prints
🍄 Shape contains Optional("User")
🍄 Shape contains Optional("circle")
🍄 Shape contains Optional("circle")
🍄 Shape contains Optional("circle")
🍄 Shape contains Optional("circle")
🍄 Shape contains Optional("Scene")
It prints the user and scene correctly, but it also prints all of the surrounding circles' shapes, when there should be only one at this point, or none. The nodes are there, but their physics bodies should not register a hit.
Any ideas why it shows a hit for all those circles when it should only match one or none? Thanks!
Edit: additional info
- I had similar results when using user.physicsBody?.allContactedBodies()
- I am using a CGPath to create the PhysicsBody of my rotating node
I created a simple test project with a scene containing 3 arcs with physics bodies and 3 rectangle shape-nodes that identify the bounding box for each arc. I then added a touch handler that places a small circle at the tap point and a label that identifies the nodes returned by enumerateBodiesAtPoint with the touch location as the parameter. The figure below shows the results of tapping at various locations in the scene.
From the test results, it's not obvious how enumerateBodiesAtPoint determines if a physics body contains the specified point or not. It is clear, though, that the method is not consistent with its documentation. I suggest that you avoid using it in your app.
Alternatively, you can use SpriteKit's built-in contact detection:
class GameScene: SKScene, SKPhysicsContactDelegate {
    override func didMoveToView(view: SKView) {
        self.physicsWorld.contactDelegate = self
    }

    func didBeginContact(contact: SKPhysicsContact) {
        // Handle contacts between physics bodies here
    }
}
You can also test whether a point is within a CGPath using CGPathContainsPoint. Here, you will need to iterate over the paths you used to create the arc-shaped physics bodies. The figure below shows the result of my test project using CGPathContainsPoint instead of enumerateBodiesAtPoint. You may need to convert the test point, with convertPoint, to the appropriate coordinate space prior to passing it to CGPathContainsPoint.
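A minimal sketch of that path test, assuming a hypothetical arcPaths array holding the CGPaths used to build the physics bodies (written in the question's Swift 2-era style):

let updatedOrigin = user.calculateAccumulatedFrame().origin
// Convert updatedOrigin into the paths' coordinate space first if needed.
for path in arcPaths where CGPathContainsPoint(path, nil, updatedOrigin, false) {
    print("🍄 user is inside this arc")
}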
