I'm writing an app using SceneKit where the client wishes to push the limits of animation on iOS. This particular app has requirements where I'm pushing over 1,500 redraws to the screen. Even with that many redraws I've locked the frame rate to 60 FPS, which is great, but when I add all the elements the client wants, the count is pushed to 7,500 redraws (and yes, this isn't a mistake or a joke; that's the real number, even though it's 50-80 times more than most redraw counts I've seen with SceneKit). At this level of redrawing the screen contains 1.7 million vertices and around 800k polygons. That is a lot, and it's really too much for this app to be useful to anyone, because my frame rate now drops to 15-30 FPS, which is expected when drawing over 3K geometry elements on screen. What I've done so far:
I clone all nodes; cloning lets me push the limits of SceneKit. I was able to fit over 1.5k constant CAAnimations on screen, with over 1.8k unique geometries placed at different locations.
I've forced all windows, views, and screens in the app to be opaque by looping through every window and setting its opaque property to YES.
My question is this: I can deal with the performance issues, but I'm having a problem with node cloning. The cloning itself works, but each geometry pushed to the screen must have a different size, and there seems to be no way to change the geometry of each individual clone. I know I can change the geometry of a "copied" node (SCNNode *node = [masterNode copy];), and I know I can change the materials property of a cloned node, but is there a way to change the geometry of a cloned node? Apple doesn't give any insight into changing the geometry, although they do talk about changing the materials. Should I assume that I can't change the size of a clone's geometry? I can change the transform, pivot, rotation, animation, position, etc. of the clone, but the size of the geometry won't change. For my purposes I just need the "height" value of a cylinder to be adjustable; I have everything else in good order. And there's no other way to push over 2k redraws to the screen without node cloning: I've tried it without cloning, and the frame rate drops below 10 FPS with just 300 redraws when each node and its geometry is declared as its own unique variable.
Lastly, given this same scenario, how much of a performance increase should I expect by moving from SceneKit to Metal? I'm not worried about the math, the level of detail, or the time-consuming work of setting up the rendering pipeline, or whatever else might come my way; I'm merely trying to find the best solution to my problem. I haven't used Metal yet because I'm not sure I'd get different results given how many polygons, vertices, and redraws are required. Thanks.
is there a way to change the geometry of the cloned node
I believe you can change the baked geometry itself, but not the parametric one (not the SCNCylinder). So, to change the height, you can:
Scale the node
Change the transformation matrix (which is scaling too, just done a different way)
Add a geometry shader modifier that moves the vertices up or down along the axis you want (see the sketch after this list)
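As a rough illustration, here is a minimal sketch of the first and third options, shown in Swift for brevity and assuming the master node carries an SCNCylinder; the names masterNode, targetHeight, and u_heightScale are mine, not from the question. Keep in mind that a shader modifier and its KVC-set uniform live on the (shared) geometry, so they affect every clone; per-clone size variation in practice comes down to scaling unless you also copy the geometry or material.

```swift
import SceneKit

// Sketch of options 1 and 3 above. masterNode, targetHeight and
// u_heightScale are illustrative names, not from the question.
func makeClone(of masterNode: SCNNode, height targetHeight: CGFloat) -> SCNNode {
    let clone = masterNode.clone()   // the clone shares its geometry with the master

    // Options 1/2: scale the clone along Y. For a cylinder whose original
    // height is known, this is equivalent to changing its height.
    if let cylinder = masterNode.geometry as? SCNCylinder {
        let yScale = Float(targetHeight / cylinder.height)
        clone.scale = SCNVector3(x: 1, y: yScale, z: 1)
    }
    return clone
}

// Option 3: a geometry shader modifier that stretches vertices along Y.
// Because the modifier and its uniform are attached to the shared
// geometry, this affects every clone of that geometry, not just one.
let stretchModifier = """
uniform float u_heightScale;
_geometry.position.y *= u_heightScale;
"""

func applyStretch(to geometry: SCNGeometry, factor: CGFloat) {
    geometry.shaderModifiers = [.geometry: stretchModifier]
    geometry.setValue(NSNumber(value: Double(factor)), forKey: "u_heightScale")
}
```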
Changing the actual geometry kind of defeats the whole purpose of cloning, so I don't think there is a way around that.
Lastly, given this same scenario, how much of a performance increase should I expect by moving from Scenekit to Metal.
A lot. Around 30% from what I've seen, but again it will depend on your setup. SceneKit uses Metal automatically on iOS 9 (on Metal-capable devices), so you won't have to do anything to get it for your scene; just update one of your devices and try it there to see if it helps!
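If you want to confirm which renderer you actually end up with, something like this should work (a small sketch; the frame size is arbitrary, and on non-Metal devices SceneKit falls back to OpenGL ES regardless of the preference):

```swift
import SceneKit

// Sketch: ask for Metal explicitly and check which API you actually got.
let options = [SCNView.Option.preferredRenderingAPI.rawValue: SCNRenderingAPI.metal.rawValue]
let scnView = SCNView(frame: CGRect(x: 0, y: 0, width: 375, height: 667), options: options)

if scnView.renderingAPI == .metal {
    print("Rendering with Metal")
} else {
    print("Rendering with OpenGL ES")
}
```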
Out of curiosity: why do you need so many cylinders? Could you not cheat the way they are rendered?
Related
I'm a total newcomer to SpriteKit and game development in general; I've been toying with SpriteKit to make a strategy game set in space.
My backend architecture uses a grid system to represent the universe: I have empty cells and cells containing systems/planets/etc.
My grid is backed by GameplayKit's GKGridGraph. I use an algorithm that generates a node with random properties for each node of the grid, and I subclassed GKGridGraphNode to attach a custom entity holding all the properties of that specific universe cell.
To render it, I simply use SKShapeNode and SKSpriteNode with various colors, shapes, and textures.
I enumerate all the nodes (GKGridGraphNode) in my GKGridGraph instance, and for each of them I generate the corresponding SKNode (SKNode generation is a component of the entity attached to each GKGridGraphNode), set its position, and add it as a child of my main node (let's call it mapNode), which is a plain SKNode. In the end it looks like a grid.
It works well for a 30x30 grid: I get 60 FPS while scrolling the grid (a custom implementation; I modify my mapNode's position as the user moves their finger).
But as soon as I try a 50x50 or 100x100 grid, there are literally too many nodes on the screen for the scrolling to work. I know I shouldn't add every node to the screen, so I thought about various strategies and wanted some input on them:
Instead of scrolling my mapNode, I could render only the nodes I can see on the screen, and then add/remove nodes as the user scrolls left/right/up/down. So it's not really scrolling anymore; it simulates it. I can picture the idea, but not really how I should implement it in practice. Is it the right solution?
Maybe I could render all my nodes as one big node? Is there a way to do that? But then I'd lose functions such as nodeAtPosition, which I use extensively to get the entity (custom object) associated with my nodes.
Edit: Actually, my current code is open source, here is the scene in I'm rendering: https://github.com/Dimillian/LittleOrion/blob/master/Little%20Orion/Little%20Orion/scenes/UniverseScene.swift
Is there any other smart way of doing that?
SKTileMapNode was made just for this in Xcode 8
https://www.raywenderlich.com/137216/whats-new-spritekit-ios-10-look-tile-maps
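For reference, a minimal sketch of the tile-map approach; the tile set name, grid size, and tile size here are placeholders, not from the project above:

```swift
import SpriteKit

// Minimal SKTileMapNode sketch (iOS 10 / Xcode 8+).
func makeMapNode() -> SKTileMapNode? {
    guard let tileSet = SKTileSet(named: "SpaceTiles"),       // defined in an .sks tile set
          let planetGroup = tileSet.tileGroups.first else { return nil }

    let map = SKTileMapNode(tileSet: tileSet,
                            columns: 100,
                            rows: 100,
                            tileSize: CGSize(width: 64, height: 64))

    // Place a tile in one cell; the whole 100x100 map is drawn as a single
    // node, which is far cheaper than 10,000 individual SKSpriteNodes.
    map.setTileGroup(planetGroup, forColumn: 10, row: 42)
    return map
}
```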
Alternatively, you would only want to load the nodes that are in and near your current view. You would need an algorithm to do this, and it would be a big headache compared to tile maps.
I suspect the algorithm would involve checking which nodes are inside the view's frame and then calling addChild on them; at the same time, add a reference to each of them to an array, which you would check against on the next update(), calling removeFromParent on any that are no longer visible.
It would get hairy trying to optimize this, though. The goal would be to load a few nodes beyond each edge so that you have some buffer when moving the camera (fewer updates).
You could create a math function to pre-determine which nodes are where in relation to the current frame coordinates--so you don't have to iterate through the nodes--but that would require even more work and headache, and it's what people developing on consoles, etc., have to do with high-end games and limited power.
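A minimal sketch of that frame-based culling idea, assuming the grid cells can be looked up by column/row; the names here (UniverseCuller, mapNode, nodeFor) are illustrative, not from the poster's project:

```swift
import SpriteKit

// Sketch of frame-based culling: only the grid cells that intersect the
// visible rect are attached to mapNode. All names below are illustrative.
final class UniverseCuller {
    let mapNode: SKNode
    let tileSize: CGFloat
    private var visibleNodes: [String: SKNode] = [:]   // key: "col,row"
    // Supplies (or lazily builds) the SKNode for a given grid cell.
    let nodeFor: (_ column: Int, _ row: Int) -> SKNode?

    init(mapNode: SKNode, tileSize: CGFloat, nodeFor: @escaping (Int, Int) -> SKNode?) {
        self.mapNode = mapNode
        self.tileSize = tileSize
        self.nodeFor = nodeFor
    }

    // Call from SKScene.update(_:) with the rect currently on screen,
    // expressed in mapNode's coordinate space (pad it a bit for a buffer).
    func cull(to visibleRect: CGRect) {
        let minCol = Int((visibleRect.minX / tileSize).rounded(.down))
        let maxCol = Int((visibleRect.maxX / tileSize).rounded(.up))
        let minRow = Int((visibleRect.minY / tileSize).rounded(.down))
        let maxRow = Int((visibleRect.maxY / tileSize).rounded(.up))

        var stillVisible: [String: SKNode] = [:]
        for col in minCol...maxCol {
            for row in minRow...maxRow {
                let key = "\(col),\(row)"
                if let node = visibleNodes[key] ?? nodeFor(col, row) {
                    if node.parent == nil { mapNode.addChild(node) }
                    stillVisible[key] = node
                }
            }
        }
        // Anything that scrolled out of the rect gets detached.
        for (key, node) in visibleNodes where stillVisible[key] == nil {
            node.removeFromParent()
        }
        visibleNodes = stillVisible
    }
}
```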
I recommend skimming through a Direct3D/DirectX/OpenGL game development book, just to get an idea of what goes into everything... They aren't hard to find: walk into a bookstore and look for the thickest, heaviest book--that will be the DirectX game development book.
You will see how what we can do with 30 lines in SK requires thousands of lines and vector calculus in C++ and low-level AV frameworks. It will give you an appreciation, understanding, and perspective of game dev, which will help you in your SK journeys :)
My project runs at 55-60FPS on an iPhone 6 but anything older is completely unplayable because something is eating CPU.
I think the issue is related to the number of tiles and layers on my map (64x256 with 4 layers) and Instruments shows "SKCRenderer:preprocessSpriteImp(..." taking 5198ms (23.2%) running time.
Does JSTileMap load every single tile's image (visible or not) at once? This post from RW indicates that is the case and that it could be worked around for large performance boosts:
http://www.raywenderlich.com/forums/viewtopic.php?f=29&t=9479
In another performance note - Sprite Kit checks all its nodes and decides which ones it needs to display each frame. If you have a big tile map, this can be a big performance hit. JSTileMap loads all the nodes (an SKSpriteNode for each tile) when it loads the tile map. So, I was also seeing performance issues in the Sprite Kit version with my maps (which are 500 x 40 tiles). I added a check to my version of JSTileMap that's included in the kit that marks the hidden property of each tile, then intelligently unhides and hides only the tiles that enter/exit the screen space. That increased performance on these larger maps significantly.
Unfortunately that post does not go into detail regarding the steps taken to remedy this.
My first thought was to (I'm a beginner, please be gentle) create an array of nodes by looping through each point and checking for a tile on the specific layer. From there I'd work on adding/removing them based on distance from the player.
This didn't work, because the process of adding nodes to an array simply caused the app to hang forever on larger maps.
Could someone lend a hand? I'd love to work with larger/more complicated tilemaps but this performance issue is destroying my brain.
Thanks for reading!
UPDATE: Big thanks to SKAToolKit: https://github.com/SpriteKitAlliance/SKAToolKit
Their culling feature solved my problem and I'm now running even larger maps at less than 35% CPU.
JSTileMap has some issues handling larger maps but you do have a couple of options to consider:
1. Break your large map into several smaller pieces and load each new piece as required.
2. Only load those objects which are in the player's vicinity.
3. Only load those tiles which are in the player's vicinity.
I personally found it impossible to accomplish #3 with JSTileMap as I could not locate an array holding the map tiles. I solved this issue by using the SKAToolKit which provides easy access to map tile arrays. It is an excellent resource for parsing maps created in Tiled.
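The hide/unhide approach from the quoted post can be sketched roughly like this; how you obtain the array of tile sprites depends on your map framework (SKAToolKit exposes tile arrays), and tiles, player, and the margin here are placeholders of mine:

```swift
import SpriteKit

// Sketch of the hide/unhide culling described in the quoted post.
// Assumes the player and the tiles share a coordinate space
// (e.g. both are descendants of the map node).
func cullTiles(_ tiles: [SKSpriteNode], around player: SKNode, sceneSize: CGSize) {
    // Keep roughly one screen of margin so tiles un-hide before they scroll in.
    let margin = max(sceneSize.width, sceneSize.height)
    let center = player.position

    for tile in tiles {
        let dx = abs(tile.position.x - center.x)
        let dy = abs(tile.position.y - center.y)
        // Hidden nodes are skipped when drawing, which is far cheaper than
        // adding/removing thousands of nodes every frame.
        tile.isHidden = dx > margin || dy > margin
    }
}
```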
My iOS app draws 2D curves in an OpenGL ES view. The scene is very expensive to render (it can take up to 1-2 seconds), which means that, AFAIK, I can't change the scale, redraw, and re-render for incremental changes in scale (due to pinch-and-zoom). I currently draw directly into a buffer that is rendered to the screen.
I think one way I can achieve zooming is by rendering to a texture at a given resolution and then rendering a quad with part of that texture (potentially at a different scale and translated). My guess is that it'll double the memory I'm currently using (if, of course, I keep the texture at the same resolution as the screen). Can someone confirm that? Is there another way to do zooming without redrawing and without doubling graphics memory usage?
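For a rough sense of what that texture costs, here is the back-of-the-envelope arithmetic (the dimensions are just one example of a retina screen, and real usage can be somewhat higher due to driver padding):

```swift
// Rough memory estimate for one screen-sized RGBA8888 texture.
// 2048x1536 is an example (iPad retina); actual usage can be higher
// because of driver alignment and padding.
let width = 2048
let height = 1536
let bytesPerPixel = 4                                  // RGBA, 8 bits per channel
let textureBytes = width * height * bytesPerPixel      // 12,582,912 bytes
print(Double(textureBytes) / 1_048_576, "MB")          // ~12 MB
// So a screen-sized texture adds roughly one color buffer's worth of
// memory again, i.e. it does approximately double that part of your usage.
```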
Now, if I want to maintain decent quality, I'll have to re-render at different resolutions. My initial thinking was to "manually" create a mipmap with, e.g., 2 levels (one texture for 100-150% zoom, and another for 150-200% zoom). This time I'll have 1 buffer + 2 textures. I could, of course, re-render on panning, but I don't think the user experience would be great. Any thoughts on how I can improve this from a user-experience and/or memory perspective?
Since you already need that long to draw the scene, I would suggest tiling. You could draw the scene at different resolutions at load time and save the output to images (save the image files to some temporary directory). With this approach you should have minimal memory consumption, and the user experience should be great.
If you do this, you should also consider whether you even want to use OpenGL to present the scene, since there are some very nice ways of presenting a large image in an image view inside a scroll view. Doing it this way you can skip all the GL-UIView binding and presenting. You can move all the GL work to a separate thread, which means that if a scene needs to change you can do it in the background, allowing the user to keep working on the current scene uninterrupted. Also, if you expect the user to swap between scenes, you can keep them saved and reuse them without any performance impact at all.
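A rough sketch of that idea: render once in the background, cache the result in the temporary directory, and present it with UIKit. Here renderSceneImage is a stand-in for your existing GL render-to-image path, not a real API:

```swift
import UIKit

// Sketch of the "render once, cache to disk, present with UIKit" idea.
final class ScenePresenter: NSObject, UIScrollViewDelegate {
    let scrollView = UIScrollView()
    private let imageView = UIImageView()

    override init() {
        super.init()
        scrollView.delegate = self
        scrollView.maximumZoomScale = 2.0
        scrollView.addSubview(imageView)
    }

    func viewForZooming(in scrollView: UIScrollView) -> UIView? { imageView }

    // Render off the main thread, cache the PNG, then display it.
    func prepare(renderSceneImage: @escaping (_ scale: CGFloat) -> UIImage) {
        let cacheURL = FileManager.default.temporaryDirectory
            .appendingPathComponent("scene@2x.png")

        DispatchQueue.global(qos: .userInitiated).async {
            let image: UIImage
            if let cached = UIImage(contentsOfFile: cacheURL.path) {
                image = cached                    // reuse a previously rendered scene
            } else {
                image = renderSceneImage(2.0)     // the expensive 1-2 s render
                try? image.pngData()?.write(to: cacheURL)
            }
            DispatchQueue.main.async {
                self.imageView.image = image
                self.imageView.sizeToFit()
                self.scrollView.contentSize = self.imageView.bounds.size
            }
        }
    }
}
```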
I have been using plist and PNG atlases for the game I am developing, and I've noticed a slight speed-up that keeps the 60 FPS; as a side note, my app has not crashed so far.
The thing is, I noticed I have been using SpriteFrameCache with a plist to run CCActions and animations for my characters (sprites). For some of the characters, however, I've been using SpriteBatchNode, though that was by accident. Since I am relatively new to in-depth game development, I hadn't noticed the difference before. Both work, but it feels like they do the same thing and one is simply easier to implement than the other; I was thinking that perhaps one was introduced in an earlier version...
So my question is: is there a difference between the two? Will my game benefit from using SpriteFrameCache over SpriteBatchNode?
Thanks for the help.
FYI: this doesn't slow down my development; it's just a question, because I know that when my game is finished I may want to optimize its performance.
Batch nodes draw all child sprites in one draw call.
Sprite frames hold a reference to a texture and define a region in the texture to draw from. The cache allows you to access sprite frames by name.
Both are different concepts, they are not replacements for each other. You can use sprite frames to create sprites or sprite frame animations. In addition to that sprite batching enables you to speed up rendering of two or more sprites using the same texture.
Note that if you use a batch node with only a single child sprite, it will be no different from rendering the sprite without the batch node: both result in a single draw call, so there is no benefit to using the batch node in that case.
I am creating an app that represents the pages of a book, with animation and interactive areas. There is one character who is constant throughout, but each page represents them with a different look, so I cannot re-use the frames very easily. This character has wings, legs, and eyes, which all need to move differently. What I am wondering is: what is the best way to take them from the PSD into the app? The two approaches I can think of are either:
Create a separate png for each frame of the animation and then cycle through them (this would be combined into a single sprite atlas)
Split the character into their parts and then position, rotate, scale and move them in the app manually.
The main reason I am considering point 2 is that if I go with point 1 I will need to create a lot of frames of animation for each page, and also create them all twice to cater for normal and retina displays.
Please let me know what the correct approach for this may be and if there is anything I should keep in mind.
Thanks
Option 1 sounds much more feasible. 300 frames is a bit too much, but you don't have to load all of them into memory at the same time. Divide your frames into multiple spritesheets of 1024x1024 and make sure all the frames of the same animation are on a single spritesheet. That way, at any given moment, only a single texture would be loaded in memory, which I guess is the minimum anyway.
You can also do a bit more optimization, maybe, by creating separate animations for things that behave the same in different poses. For example, if the eyes blink exactly the same way in different poses, you don't need to create separate frames of each pose just for blinking. Just take out the eyes (ouch!), create a separate animation for them, and place it over your character's animation node.
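If you go with option 1, the per-frame approach looks roughly like this. The thread doesn't name a framework, so this uses SpriteKit's texture atlas API purely as an illustration; the atlas and frame names are made up:

```swift
import SpriteKit

// Illustration of option 1: cycle through frames exported from the PSD
// into a texture atlas. Atlas and frame names are placeholders.
func wingFlapAction() -> SKAction {
    let atlas = SKTextureAtlas(named: "CharacterPage1")
    // e.g. wing_01 ... wing_08 exported into one atlas
    let frames = (1...8).map { atlas.textureNamed(String(format: "wing_%02d", $0)) }
    return SKAction.repeatForever(SKAction.animate(with: frames, timePerFrame: 1.0 / 12.0))
}

// The reusable blink animation suggested above, layered over the body sprite.
func blinkAction() -> SKAction {
    let atlas = SKTextureAtlas(named: "CharacterEyes")
    let frames = (1...4).map { atlas.textureNamed(String(format: "blink_%02d", $0)) }
    return SKAction.repeatForever(SKAction.sequence([
        SKAction.wait(forDuration: 3.0),
        SKAction.animate(with: frames, timePerFrame: 0.05)
    ]))
}
```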
Going with option 2 would create unnecessary complications, both for you and the poor device.