FPS drop when adding a child to a scene in ARKit/SceneKit - iOS

I've been working on an ARKit project for four months now.
I noticed that when adding a child to my scene's rootNode, there is an FPS drop: the device freezes for less than a second.
I did a lot of research and trials, and noticed that all of Apple's code examples have this FPS drop too when placing an object.
It does not matter whether the node is added directly (scene.rootNode.addChildNode(child)) or in the renderer loop at different phases (updateAtTime, didApplyAnimations, etc.).
I found that once an object has been added to a scene, the next added object renders immediately. I use a 3D model created in the SceneKit editor and clone it to generate my different nodes before adding them as children. I do this loading work before placing the objects.
Instruments shows that the renderer loop is busy for the duration of the freeze.
The only solution that I found is to add my nodes to the scene behind a loading screen before starting the whole experience.
Is it normal behavior in game programming to render nodes before using them?
Thanks guys

With the release of ARKit 3.0 and its companion RealityKit (a framework with an optimised rendering engine and a revised scene hierarchy, written in Swift and therefore without an Objective-C binding), the frame drop when adding a child is reduced to an imperceptible value.
Such predictable behaviour of the ARKit 3 / RealityKit combination is especially true for devices with A12 Bionic and A13 Bionic processors manufactured on a 7 nm process (and, of course, due to the fact that they have last-generation Neural Engines and powerful GPUs).
For devices with less powerful processors (A9, A10, A11), it is advisable to use 3D models with a total of no more than 10K polygons per model, and with simple shaders like .blinn or .phong (not PBR).
I believe it's quite common practice for games and apps that use game engines to first load (or cache) all the necessary game assets (3D models, textures, sound files, etc.) into RAM before using them. For further details, please read this article and this article.
However, it's worth saying that AR games, unlike VR games, consume considerably more processing power, and therefore need to be carefully optimised. So you're absolutely right: rendering nodes before using them is normal behaviour in game programming.
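As a practical illustration of that preloading idea, here is a minimal Swift sketch using SceneKit's prepare(_:completionHandler:). The view name, asset path, node name, and clone count are my assumptions, not from the question:

import ARKit

// Hedged sketch: warm up cloned nodes behind a loading screen so the
// first addChildNode call doesn't stall the renderer.
func preloadModels(into sceneView: ARSCNView) {
    guard let modelScene = SCNScene(named: "art.scnassets/model.scn"), // hypothetical asset path
          let sourceNode = modelScene.rootNode.childNode(withName: "model", recursively: true)
    else { return }

    let clones = (0..<10).map { _ in sourceNode.clone() }

    // Asynchronously compiles shaders and uploads geometry/textures to the GPU.
    sceneView.prepare(clones) { success in
        guard success else { return }
        DispatchQueue.main.async {
            for node in clones {
                node.isHidden = true // reveal later, when actually placed
                sceneView.scene.rootNode.addChildNode(node)
            }
        }
    }
}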

Related

Proper SceneKit architecture for multi-level/screen games

I'm new to SceneKit, so when I started my first project I noticed it's slightly different from SpriteKit. Although not ideal, I used to use one GameViewController and one GameScene in SpriteKit to code my entire game. (YES, this includes a menu, game/levels, settings, etc.)
I'm now wondering, since I'm using SCNScenes (which are just loaded from the view controller): I don't want my one view controller to be swamped with code. What is the best way to lay out a game? Should I have one view controller per SCNScene (switching to a different view controller for the main menu, game level 1, game level 2, game level 3, game over, settings, etc.)?
This may not have been worded perfectly, but I appreciate the help!
Aidan
My typical layout for games with multiple levels, maps, and interactive gameplay menus:
MainMenuVC
SelectLevelVC, SelectMapVC, etc.
GameVC (HUD, score, in-game menus), DefenseVC (like an overlay, to choose a weapon)
EndWaveVC, EndGameVC
GameControl() - Timers, array iteration, check wave completed, nextWave, cleanup, etc.
Data - shared instance (arrays, switches) for data shared between VCs and game objects
BaseObjects such as (defenses, explosions, sounds, movement, targeting)
GameNodes (all node creation and storage, model loading, particle loading, camera, lights)
I landed here through many iterations. There are some tradeoffs, but by separating VCs from data and SceneKit, things can stay pretty clean. It also makes it easier to do threading and timers in only a few places when needed.
The shared instance of Data allows game objects to iterate through arrays to find targets and objects, such as a defense trying to find an enemy and vice versa. When game objects need to start talking to each other directly, that is usually where game design starts going off the tracks. So I iterate loops via timers in GameControl, sharing Data, and that keeps it pretty clean. Since I'm doing loops in one place, I can avoid trampling other objects while they are updating, and I also get clean measurements of timing and can control FPS.
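A tiny sketch of that arrangement follows. All type and property names are illustrative, not from the original answer, and "Data" is renamed GameData here to avoid clashing with Foundation's Data:

import SceneKit

// Hedged sketch of the shared-data + central-timer idea described above.
final class GameData {
    static let shared = GameData()
    var enemies: [SCNNode] = []
    var defenses: [SCNNode] = []
    private init() {}
}

final class GameControl {
    private var loopTimer: Timer?

    func start() {
        // One central timer drives all iteration, so game objects never
        // trample each other mid-update and timing stays easy to measure.
        loopTimer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
            for defense in GameData.shared.defenses {
                // e.g. find the nearest node in GameData.shared.enemies
                // and assign it as this defense's target
                _ = defense
            }
        }
    }

    func stop() {
        loopTimer?.invalidate()
        loopTimer = nil
    }
}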
Hope that helps.

Fixing or avoiding memory leak in default third party library

I developed an app that includes the ability to preview the subdivision results of a 3D model on the fly. I have my own Catmull-Clark subdivision functions to permanently modify the geometry, but I use the .subdivisionLevel property of SCNGeometry to temporarily subdivide the model as a preview. In most cases, previewing does not automatically mean the user will go for the permanent option.
Like MDLMesh's subdivision (which I tried as a workaround), .subdivisionLevel uses Pixar's OpenSubdiv to do the actual subdivision and smoothing. It works faster than my own implementation, but more importantly it doesn't permanently modify the vertex data I provide through an SCNGeometry source.
The problem is, I can't get it to stop leaking memory. I first noticed this a long time ago and figured it was something in my code. It doesn't seem tied to one specific iOS version, and it happens in both Swift and Objective-C. Eventually I set up a small example, adding just one line to the SceneKit game template in Xcode that sets the ship's subdivisionLevel to 1. Instruments shows that this immediately results in memory leaks.
I submitted a bug report to Apple a week ago, but I'm not sure I can expect a reply or a fix anytime soon, or at all. The screenshot is from a test with a very small model, but even with small models (hundreds to a couple of thousand vertices) it leaks a lot, fast, and will lead to the app crashing.
To reproduce, create a new project in Xcode based on the SceneKit game template and add the following lines to handleTap:
if result.node.geometry!.subdivisionLevel == 3 {
    result.node.geometry!.subdivisionLevel = 0
} else {
    result.node.geometry!.subdivisionLevel = 3
}
(Remove the ! for Objective-C.)
Tap the ship to leak megabytes; tap it some more and it quickly adds up.
OpenSubdiv is obviously used in 3ds Max as well as other packages, so the leak appears to be in Apple's implementation. So my question is: is there a way to fix or avoid this problem without giving up on the subdivision features of SceneKit entirely, or is a response from Apple my only chance?
While going through the WWDC videos to get an idea of how committed Apple is to OpenSubdiv (and thus the chance of them fixing the leaks), I found that the subdivision can be performed on the GPU by Metal since the latest SceneKit update.
Here are the two required lines (Swift) if you want to use subdivision in SceneKit or Model I/O:
let tess = SCNGeometryTessellator()
geometry.tessellator = tess // subdivision now runs on the GPU via Metal
(From WWDC 2017, "What's New in SceneKit", 23:45 into the video.)
This causes the subdivision to be performed on the GPU (thus faster, especially at higher levels), uses less memory, and most importantly, releases the memory when setting the subdivision level lower or back to zero.
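Applied to the reproduction case above, the tap handler might look like this (a sketch only; `result` is the hit-test result from the game template, and the availability note is my assumption):

// Hedged sketch: the template's tap handler with GPU tessellation enabled,
// so subdivision memory is released when the level drops back to zero.
guard let geometry = result.node.geometry else { return }
if geometry.tessellator == nil {
    geometry.tessellator = SCNGeometryTessellator() // iOS 11+ / Metal (assumption)
}
geometry.subdivisionLevel = (geometry.subdivisionLevel == 3) ? 0 : 3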

Rendering a big grid with SpriteKit

I'm a total newcomer to SpriteKit and game development in general. I've been toying with SpriteKit to make a strategy game set in space.
My backend architecture uses a grid system to represent the universe; there are empty cells and cells with systems/planets/etc.
My grid is backed by GameplayKit's GKGridGraph. I use an algorithm that generates a node with random properties for each position in the grid, and I subclassed GKGridGraphNode to attach a custom entity holding all the properties of that specific universe cell.
To render it, I simply use SKShapeNode and SKSpriteNode with various colors, shapes, and textures.
I enumerate all the GKGridGraphNodes in my GKGridGraph instance, and for each of them I generate the corresponding SKNode (SKNode generation is a component of the entity attached to each GKGridGraphNode), set its position, and add it as a child of my main node (let's call it mapNode), which is a plain SKNode. In the end it looks like a grid.
It works well for a 30x30 grid: I get 60 FPS while scrolling (a custom implementation; I modify my mapNode's position as the user moves their finger).
But as soon as I try a 50x50 or a 100x100 grid, there are simply too many nodes on screen for the scrolling to work. I know I shouldn't add every one of my nodes to the screen, so I thought about various strategies and wanted some input on them:
Instead of scrolling my mapNode, I could render only the nodes visible on screen, and then add/remove nodes as the user scrolls left/right/up/down. So it's not really scrolling anymore; it simulates it. I can picture the idea, but not how I should implement it in practice. Is it the right solution?
Maybe I could render all my nodes as one big node? Is there a way to do that? But then I'd lose functions such as nodeAtPosition, which I use extensively to get the entity (custom object) associated with my nodes.
Edit: Actually, my current code is open source; here is the scene I'm rendering: https://github.com/Dimillian/LittleOrion/blob/master/Little%20Orion/Little%20Orion/scenes/UniverseScene.swift
Is there any other smart way of doing that?
SKTileMapNode was made just for this in Xcode 8
https://www.raywenderlich.com/137216/whats-new-spritekit-ios-10-look-tile-maps
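To give a flavor of the API, here is a minimal sketch (the "Space" tile set and "Planet" group names are hypothetical; tile sets are normally authored in the Xcode editor):

import SpriteKit

// Hedged sketch: the whole grid as a single SKTileMapNode, which SpriteKit
// batches and culls internally instead of one SKNode per cell.
func makeUniverseMap() -> SKTileMapNode? {
    guard let tileSet = SKTileSet(named: "Space"), // from an .sks tile set
          let planet = tileSet.tileGroups.first(where: { $0.name == "Planet" })
    else { return nil }

    let map = SKTileMapNode(tileSet: tileSet,
                            columns: 100, rows: 100,
                            tileSize: CGSize(width: 32, height: 32))
    map.setTileGroup(planet, forColumn: 10, row: 20) // place one planet cell
    return map
}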
Alternatively, you would want to load only the nodes that are in and near your current view. You would need an algorithm to do this, and it would be a big headache compared to tile maps.
I suspect the algorithm would involve checking which nodes are in the view's frame and then calling addChild on them; concurrently, add a reference to them to an array, which you would check against in the next update(), calling removeFromParent() if they are no longer visible.
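A rough sketch of that add/remove idea (all names are illustrative; it assumes the scene's anchorPoint is (0, 0) and that mapNode is moved to scroll):

import SpriteKit

// Hedged sketch of frame-based culling: add tiles entering the visible
// rect, remove tiles leaving it. `allTiles` holds every tile node, with
// positions expressed in mapNode's coordinate space.
final class CullingScene: SKScene {
    let mapNode = SKNode()
    var allTiles: [SKNode] = [] // populated when the grid is generated
    private var visibleTiles = Set<SKNode>()

    override func update(_ currentTime: TimeInterval) {
        // The visible rect, converted into mapNode's coordinate space,
        // with a small buffer so tiles pop in just off-screen.
        let origin = convert(.zero, to: mapNode)
        let visibleRect = CGRect(origin: origin, size: size).insetBy(dx: -64, dy: -64)

        for tile in allTiles {
            let onScreen = visibleRect.intersects(tile.frame)
            if onScreen && tile.parent == nil {
                mapNode.addChild(tile)
                visibleTiles.insert(tile)
            } else if !onScreen && tile.parent != nil {
                tile.removeFromParent()
                visibleTiles.remove(tile)
            }
        }
    }
}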
It would get hairy trying to optimize this, though. The goal would be to load a few extra nodes past each edge so that you have some buffer when moving the camera (fewer updates).
You could create a math function to pre-determine which nodes are where in relation to the current frame coordinates, so you don't have to iterate through the nodes, but that would require a lot more work and headache; it's what people developing for consoles, etc., have to do with high-end games and limited power.
I recommend skimming through a Direct3D/DirectX/OpenGL game development book, just to get an idea of what goes into everything... They aren't hard to find: walk into a bookstore and look for the thickest, heaviest book; that will be the DirectX game development book.
You will see how what we can do in 30 lines with SpriteKit requires thousands of lines and vector calculus in C++ and low-level AV frameworks. It will give you an appreciation, understanding, and perspective of game dev, which will help you in your SpriteKit journey :)

How to Visualize zPositions in iOS

My team and I are working on a SpriteKit-based iOS game of medium complexity. There are lots of layers and nodes in the design of the game, and the zPositioning of the nodes has gotten sloppy. One task I have agreed to take on is revamping our zPosition strategy: moving to constants instead of magic numbers, adopting a holistic zPosition scheme for the app, etc. But first I want to analyze where we are now. So here is my question:
I vaguely recall watching a WWDC video (or some other tutorial, maybe) in which the presenter used some aspect of Instruments (or some other tool) to show a 3D rendering of an app, seen from an isometric angle, based on the zPositions of the SKNodes (or UIKit elements?) in the app.
Does anyone here know what tool this is? And if not, what is the best way to visualize the current state of zPositions in a SpriteKit based app? Thanks!

SpriteKit using 95-100% CPU when running game with large tilemap (JSTileMap)

My project runs at 55-60 FPS on an iPhone 6, but anything older is completely unplayable because something is eating the CPU.
I think the issue is related to the number of tiles and layers on my map (64x256 with 4 layers); Instruments shows "SKCRenderer:preprocessSpriteImp(..." taking 5198 ms (23.2%) of running time.
Does JSTileMap load every single tile's image (visible or not) at once? This post from the Ray Wenderlich forums indicates that is the case, and that it could be worked around for large performance gains:
http://www.raywenderlich.com/forums/viewtopic.php?f=29&t=9479
In another performance note: Sprite Kit checks all its nodes and decides which ones it needs to display each frame. If you have a big tile map, this can be a big performance hit. JSTileMap loads all the nodes (an SKSpriteNode for each tile) when it loads the tile map. So I was also seeing performance issues in the Sprite Kit version with my maps (which are 500 x 40 tiles). I added a check to my version of JSTileMap that's included in the kit that marks the hidden property of each tile, then intelligently unhides and hides only the tiles that enter/exit the screen space. That increased performance on these larger maps significantly.
Unfortunately, that post does not go into detail regarding the steps taken to remedy this.
My first thought (I'm a beginner, please be gentle) was to create an array of nodes by looping through each point and checking for a tile on the specific layer. From there I'd work on adding/removing them based on distance from the player.
This didn't work, because the process of adding the nodes to an array simply caused the app to hang forever on larger maps.
Could someone lend a hand? I'd love to work with larger/more complicated tilemaps but this performance issue is destroying my brain.
Thanks for reading!
UPDATE: Big thanks to SKAToolKit: https://github.com/SpriteKitAlliance/SKAToolKit
Their culling feature solved my problem, and I'm now running even larger maps at less than 35% CPU.
JSTileMap has some issues handling larger maps, but you do have a couple of options to consider:
1. Break your large map into several smaller pieces and load each new piece as required.
2. Only load those objects which are in the player's vicinity.
3. Only load those tiles which are in the player's vicinity.
I personally found it impossible to accomplish #3 with JSTileMap, as I could not locate an array holding the map tiles. I solved this issue by using SKAToolKit, which provides easy access to map tile arrays. It is an excellent resource for parsing maps created in Tiled.
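If you can reach an array of tile sprites (as SKAToolKit exposes), the hide/unhide culling described in the quoted post can be sketched like this (all names are illustrative; this is not SKAToolKit's actual API):

import SpriteKit

// Hedged sketch of vicinity culling: unhide only the tiles near the player.
// Assumes `tiles` and `player` share a coordinate space.
func cullTiles(_ tiles: [SKSpriteNode], around player: SKNode, radius: CGFloat) {
    for tile in tiles {
        // Hidden nodes are skipped by the renderer, so off-screen tiles
        // stop costing draw time without being removed from the tree.
        let dx = tile.position.x - player.position.x
        let dy = tile.position.y - player.position.y
        tile.isHidden = (dx * dx + dy * dy) > radius * radius
    }
}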
