Creating a large grid of similar objects using SceneKit - iOS

I'm currently testing the feasibility of using SceneKit for a game I would like to make, but I'm having trouble figuring out how best to create a grid of many similar geometric objects in SceneKit while maintaining acceptable performance. Here is what I would like to do:
Place hundreds of copies of a geometric primitive (just an ordinary cube for now) in a grid.
Apply various vertex and/or surface and/or fragment shaders to each cube. Some cubes may share the same shaders, but in practice I would like to have at least tens of different shaders that I can apply to the cubes.
I would like to be able to zoom the camera out and view all of the cubes simultaneously while maintaining a smooth framerate.
I'm beginning with a grid of 25 by 25 cubes. I have achieved good performance by building the grid in a loop, using a single SCNBox object for the geometry and setting the shaderModifiers property of its firstMaterial. I add all 625 cubes to a node, then add the flattenedClone() of this node to my scene's root node. This means the entire scene can be rendered with just one draw call.
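In code, the setup is roughly this (a minimal sketch; the shader modifier string is just a placeholder):

```swift
import SceneKit

let scene = SCNScene()

// One shared geometry and material: every cube gets the same shader modifiers.
let box = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
box.firstMaterial?.shaderModifiers = [
    .fragment: "_output.color.rgb *= 0.8;"   // placeholder modifier, just darkens the cube
]

let gridNode = SCNNode()
for x in 0..<25 {
    for z in 0..<25 {
        let cube = SCNNode(geometry: box)      // shares geometry and material
        cube.position = SCNVector3(Float(x) * 1.2, 0, Float(z) * 1.2)
        gridNode.addChildNode(cube)
    }
}

// Flattening collapses all 625 cubes into one node that renders in a single draw call.
scene.rootNode.addChildNode(gridNode.flattenedClone())
```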
The problem with reusing the geometry, however, is that all cubes must use the same shaders. But if I create a new SCNBox for each cube (so that I can set each cube's shaders individually), I end up with one draw call per cube, which is inefficient, and performance suffers quickly as more and more cubes are added to the scene. And if the geometry is anything more complex than a cube, performance degrades severely.
Of course I could optimize and have any cubes that DO use the same shaders share the same geometry, add them to their own node and add the flattenedClone() of that node to the scene. But is there more that I can do to optimize in this case? Or am I better off looking for an alternative to SceneKit entirely?
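For concreteness, that grouping would look roughly like this (the shader strings, keys and grid assignment are placeholders):

```swift
import SceneKit

let scene = SCNScene()

// Hypothetical shader-modifier sets, keyed by an arbitrary name.
let shaderSets: [String: [SCNShaderModifierEntryPoint: String]] = [
    "bright": [.fragment: "_output.color.rgb *= 1.3;"],   // placeholder modifiers
    "dark":   [.fragment: "_output.color.rgb *= 0.4;"]
]

// One shared SCNBox and one container node per shader set.
var sharedBoxes: [String: SCNBox] = [:]
var groupNodes: [String: SCNNode] = [:]
for (key, modifiers) in shaderSets {
    let box = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
    box.firstMaterial?.shaderModifiers = modifiers
    sharedBoxes[key] = box
    groupNodes[key] = SCNNode()
}

// Assign each grid cell to a shader set (alternating here, purely for illustration).
for x in 0..<25 {
    for z in 0..<25 {
        let key = (x + z) % 2 == 0 ? "bright" : "dark"
        let cube = SCNNode(geometry: sharedBoxes[key])
        cube.position = SCNVector3(Float(x) * 1.2, 0, Float(z) * 1.2)
        groupNodes[key]?.addChildNode(cube)
    }
}

// One flattened clone, and therefore roughly one draw call, per distinct shader set.
for group in groupNodes.values {
    scene.rootNode.addChildNode(group.flattenedClone())
}
```

With this arrangement the draw-call count tracks the number of distinct shader sets rather than the number of cubes.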

Related

How to create an Isometric tile-map with 3D objects

So, I am building a basic tycoon game. In this game I want to have a simplex 3D generated tile-map that the player can build on. I need the tile-map to be 3D (or have a 3D appearance) because I want to be able to use some realistic shaders on the 3D models the player is going to be able to build.
At first, I thought I could combine SpriteKit and SceneKit. I would generate the 2D SKTileMapNode and then allow the player to build the 3D buildings on top of that. But that wouldn't work as well because SKTileMapNodes aren't easily pannable or zoomable.
My second idea was to build an SKTileMap randomly (which I know how to do easily) and then use that as a reference to build a 3D scene. That would allow me to have full control over what goes where, but there is a catch: each 3D block (representing a tile in the tile-map) would be treated as its own node and cause huge performance issues, unlike SpriteKit's SKTileMapNode, which treats the tile-map as a single large node once it is filled with tiles.
I would prefer to not use an isometric SKTileMap because that wouldn't allow the player to be able to pan/zoom the map and thus, reduce the depth and feel that I would like to achieve with this game.
For instance, this is what I am going for (similar in build but completely different in style)
“Each 3D block (representing a tile in the tileMap) would be treated as a node and cause huge performance issues,”
I think your question should be “how to render a 3D tile map with good performance in SceneKit”, because in my opinion SceneKit is definitely the way to go here, and it is certainly doable to prevent those “huge performance issues”.
For starters, how do you create a tile? If you currently use the built-in primitives, you can get a huge performance increase by using a tile loaded from a .dae or .obj file, or even by creating it programmatically.
If the tile is the same model throughout, you should add it only once, and then clone it for all the other tiles:
https://developer.apple.com/documentation/scenekit/scnnode/1408046-clone
(Note you will have to create a copy of the material and assign it to each clone to prevent it from being shared across all tiles.)
Additionally, by adding them all to a single parent node, you can create a so-called flattened clone to combine all the tiles into a single node, which significantly reduces the number of draw calls:
https://developer.apple.com/documentation/scenekit/scnnode/1407960-flattenedclone
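A rough sketch of that clone-and-copy pattern (the asset name, grid size and single-node layout of the .dae are assumptions):

```swift
import SceneKit

// Load the tile model once; "tile.dae" and its single-child layout are assumed.
let tileScene = SCNScene(named: "tile.dae")!
let prototype = tileScene.rootNode.childNodes[0]

let mapNode = SCNNode()
for x in 0..<25 {
    for z in 0..<25 {
        let tile = prototype.clone()    // shares geometry with the prototype
        // Copy the geometry and material so this tile can be shaded independently.
        tile.geometry = tile.geometry?.copy() as? SCNGeometry
        tile.geometry?.firstMaterial = tile.geometry?.firstMaterial?.copy() as? SCNMaterial
        tile.position = SCNVector3(Float(x), 0, Float(z))
        mapNode.addChildNode(tile)
    }
}

// Combine all the tiles into a single node to cut the draw calls down.
let scene = SCNScene()
scene.rootNode.addChildNode(mapNode.flattenedClone())
```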
If that isn’t fast enough, another option is to create the entire map programmatically. That is, create all the vertices yourself and build an SCNGeometry from them.
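As a sketch of that programmatic route, here is a single flat quad built from raw vertices; a real map would append vertices and indices for every tile before creating the geometry:

```swift
import SceneKit

// Four corners of one ground tile, lying in the XZ plane.
let vertices: [SCNVector3] = [
    SCNVector3(0, 0, 0), SCNVector3(1, 0, 0),
    SCNVector3(1, 0, 1), SCNVector3(0, 0, 1)
]
// Two triangles whose normals point up (+Y).
let indices: [Int32] = [0, 2, 1, 0, 3, 2]

let vertexSource = SCNGeometrySource(vertices: vertices)
let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
let mapGeometry = SCNGeometry(sources: [vertexSource], elements: [element])
let mapNode = SCNNode(geometry: mapGeometry)
```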
Yet another, probably blazing-fast option would be to use just 4 vertices to create a plane, and then use a shader and a displacement map to create the tiled map.
The part about cloning and flattened clones also applies to the buildings that have the same geometry.
And just for completeness, in case that wasn't obvious, you should set the https://developer.apple.com/documentation/scenekit/scncamera/1436621-usesorthographicprojection property of the camera to true to get that isometric look.
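For example (the position and scale are arbitrary):

```swift
import SceneKit

let scene = SCNScene()
let cameraNode = SCNNode()
let camera = SCNCamera()
camera.usesOrthographicProjection = true   // no perspective: the classic isometric look
camera.orthographicScale = 15              // arbitrary zoom level
cameraNode.camera = camera

// Place the camera on a diagonal and aim it at the centre of the map.
cameraNode.position = SCNVector3(20, 20, 20)
cameraNode.look(at: SCNVector3(0, 0, 0))
scene.rootNode.addChildNode(cameraNode)
```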

How to batch sprites in iOS/OpenGL ES 2.0

I have developed my own sprite library on top of OpenGL ES 2.0. Right now, I am not doing any batching of draw calls; instead, each sprite has its own VBO/VAO of four textured vertices, drawn as a triangle strip (The VAO/VBO itself is managed by the Texture atlas, so identical sprites reuse the same VAO/VBO, which is 'reference counted' and hence deleted when no sprite instances reference it).
Before drawing each sprite, I'll bind its texture, upload its uniforms/attributes to the shader (modelview matrix and opacity; the projection matrix stays constant throughout), bind its Vertex Array Object (4 textured vertices + four indices), and call glDrawElements(). I do cull off-screen sprites (based on position and bounds), but still it is one draw call per sprite, even if all sprites share the same texture. The vertex positions and texture coordinates for each sprite never change.
I must say that, despite this inefficiency, I have never experienced performance issues, even when drawing many sprites on screen. I do split the sprites into opaque/non-opaque, draw the opaque ones first, and the non-opaque ones after, back to front. I have seen performance suffer only when I overdraw (tax the fill rate).
Nevertheless, the OpenGL instruments in Xcode will complain that I draw too many small meshes and that I should consolidate my geometry into fewer objects. And in the Unity world everyone talks about limiting the number of draw calls as if they were the plague.
So, how should I go about batching very many sprites, each with a different transform and opacity value (but the same texture), into one draw call? One thing that comes to mind is to modify the vertex data every frame and stream it: applying the modelview matrix of each sprite to all of its vertices, assembling the transformed vertices for all sprites into one mesh, and submitting it to the GPU. This approach does not solve the problem of varying opacity between sprites.
Another idea that comes to mind is to have all the textured vertices of all the sprites assembled into a single mesh (VBO), treated as 'static' (same vertex format I am using now), and a separate array with the stuff that changes per sprite every frame (transform matrix and opacity), and only stream that data each frame, and pull it/apply it on the vertex shader side. That is, have a separate array where the 'attribute' being represented is the modelview matrix/alpha for the corresponding vertices. Still have to figure out the exact implementation in terms of data format/strides etc. In any case, there is the additional complication that arises whenever a new sprite is created/destroyed, the whole mesh has to be modified...
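To make the vertex-packing idea concrete, here is a rough sketch of my own (not from any library): transform each quad on the CPU and interleave position, atlas UV and a per-vertex alpha, so the whole array can be streamed to one dynamic VBO and drawn with a single glDrawElements call. Adding alpha as a vertex attribute would also cover the varying-opacity case.

```swift
import simd

// Sketch only: per-sprite data assumed to live on the CPU each frame.
struct Sprite {
    var modelView: simd_float4x4
    var opacity: Float
    var texRect: (u0: Float, v0: Float, u1: Float, v1: Float)   // sub-rectangle in the atlas
}

// Packs every sprite into one interleaved array: x, y, u, v, alpha per vertex.
func buildBatch(_ sprites: [Sprite]) -> (vertices: [Float], indices: [UInt16]) {
    var vertices: [Float] = []
    var indices: [UInt16] = []
    let corners: [SIMD4<Float>] = [
        SIMD4(-0.5, -0.5, 0, 1), SIMD4( 0.5, -0.5, 0, 1),
        SIMD4( 0.5,  0.5, 0, 1), SIMD4(-0.5,  0.5, 0, 1)
    ]
    for (i, sprite) in sprites.enumerated() {
        let uvs: [(Float, Float)] = [
            (sprite.texRect.u0, sprite.texRect.v0), (sprite.texRect.u1, sprite.texRect.v0),
            (sprite.texRect.u1, sprite.texRect.v1), (sprite.texRect.u0, sprite.texRect.v1)
        ]
        for (corner, uv) in zip(corners, uvs) {
            let p = sprite.modelView * corner                   // pre-transform on the CPU
            vertices += [p.x, p.y, uv.0, uv.1, sprite.opacity]  // interleaved x, y, u, v, a
        }
        let base = UInt16(i * 4)
        indices += [base, base + 1, base + 2, base, base + 2, base + 3]
    }
    return (vertices, indices)
}
```

The fragment shader would then read the alpha as a varying instead of a uniform, so it can differ per sprite within the same batch.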
Or perhaps there is an ideal, 'textbook' solution to this problem out there that I haven't figured out? What does cocos2d do?
When I initially started reading your post I thought that each quad used a different texture (since you stated "Before drawing each sprite, I'll bind its texture"), but then you said that each sprite has "the same texture".
A possible easy win is to control the way you bind your textures during the draw, since each call is a burden for the OpenGL driver. If (and I am not really sure about this from your post) you use different textures, I suggest going for a simple texture atlas where all the sprites are inside a single picture (preferably a power-of-2 texture with mipmapping), and then taking the piece of the texture you need in the fragment shader using texture coordinates (this is the reason they exist, after all).
Since the positions of the sprites change every frame (and of course they do), one option is to pack the new vertex coordinates of all your sprites each frame and draw directly from client memory (possibly via a VAO; a VBO could cost more since you would need to rebuild it each frame, something to test in a real scenario). This would effectively pack the draw calls together, and I am pretty sure it will boost performance.
The VAO option should be feasible since we are talking about a very small amount of data, and memory bandwidth should not be a real bottleneck (each quad, I guess, uses 12 floats for vertex coordinates, 8 for texture coordinates and 12 for normals, roughly 128 bytes), so it shouldn't be a big problem.
As for opacity, can't you pass a uniform to your fragment shader and use the alpha there? Am I wrong about that? It should work.
I hope this helps.
Ciao,
Maurizio

How to draw thousands of Sprites with different transparency?

Hi, I'm using Firemonkey because of its cross-platform capabilities. I want to render a particle system. Right now I'm using a TMesh, which works well enough to display the particles fast. Each particle is represented in the mesh by two textured triangles. Using different texture coordinates I can show many different particle types with the same mesh. The problem is that every particle can have its own transparency/opacity. With my current approach I cannot set the transparency individually for each triangle (or even vertex). What can I do?
I realized that there are some other properties in TMesh.Data.VertexBuffer, like Diffuse or other sets of texture coordinates (TexCoord1-3), but these properties are not used (not even initialized) in TMesh. It also seems not easy to simply change this behavior by inheriting from TMesh. It seems one has to inherit from a lower-level control to initialize the VertexBuffer with more properties. Before I try that, I'd like to ask whether it would even be possible to control the transparency of a triangle that way. E.g. can I set a transparent color (Diffuse) or use a transparent texture (TexCoord1)? Or is there a better way to draw the particles in Firemonkey?
I admit that I don't know much about that particular framework, but you shouldn't be able to change transparency via the vertex points of a 3D model alone; the points are usually just x, y, z coordinates. Now, the vertex points would have an effect on how the sprites are lit if you are using a lighting system, and you can also use the vertex information to apply different transparency effects.
Now, there's probably a dozen different ways to do this. Usually you have a texture with different degrees of alpha values that can be set at runtime. Graphics APIs usually have some filtering function that can quickly apply values to sprites/textures, and a good one will use your graphics chip if available.
If you can use an effect, it's usually better, since the nuclear option is to make a bunch of different copies of a sprite and then apply effects to them individually. If you are using Gouraud shading, then it gets easier, since Gouraud uses code to fill in texture information.
Now, are you using light particles? Some graphics APIs actually have code that makes light particles.
Edit: I just remembered vertex shaders, which could also be used to control the transparency per vertex.

Design advice for OpenGL ES 2 / iOS GLKit

I'd like to build an app using the new GLKit framework, and I'm in need of some design advice. I'd like to create an app that will present up to a couple thousand "bricks" (objects with very simple geometry). Most will have identical texture, but up to a couple hundred will have unique texture. I'd like the bricks to appear every few seconds, move into place and then stay put (in world coords). I'd like to simulate a camera whose position and orientation are controlled by user gestures.
The advice I need is about how to organize the code. I'd like my model to be a collection of bricks that have a lot more than graphical data associated with them:
Does it make sense to associate a view-like object with each brick to handle geometry, texture, etc.?
Should every brick have its own vertex buffer?
Should each have its own GLKBaseEffect?
I'm looking for help organizing what object should do what during setup, then rendering.
I hope I can stay close to the typical MVC pattern, with my GLKViewController observing model state changes, controlling eye coordinates based on gestures, and so on.
Would be much obliged if you could give some advice or steer me toward a good example. Thanks in advance!
With respect to the models, I think an approach analogous to the relationship between UIImage and UIImageView is appropriate. So every type of brick has a single vertex buffer, GLKBaseEffect, texture and whatever else. Each brick may then appear multiple times, just as multiple UIImageViews may use the same UIImage. In terms of keeping multiple reference frames, it's actually a really good idea to build a hierarchy essentially equivalent to UIView, with each node containing some transform relative to its parent and one sort of node being able to display a model.
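As a sketch of that split (all the names here are illustrative):

```swift
import GLKit
import OpenGLES

// "BrickType" plays the UIImage role: shared, relatively heavyweight resources.
final class BrickType {
    let effect = GLKBaseEffect()       // shared shading state (texture, lighting, etc.)
    var vertexBuffer: GLuint = 0       // shared VBO, filled once when the type is loaded
    var vertexCount: GLsizei = 0
}

// "Brick" plays the UIImageView role: a lightweight instance with only its own transform.
struct Brick {
    let type: BrickType
    var modelMatrix = GLKMatrix4Identity   // placement relative to the parent frame
}
```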
From the GLKit documentation, I think the best way to keep the sort of camera you want (and indeed the object locations) is to store it directly as a GLKMatrix4 or a GLKQuaternion — so you don't derive the matrix or quaternion (plus location) from some other description of the camera; rather, the matrix or quaternion itself is the storage for the camera.
Both of those types have functions built in to apply rotations, and GLKMatrix4 can directly handle translations too. So you can map the relevant gestures directly onto those functions.
The only slightly non-obvious thing I can think of when dealing with the camera in that way is that you want to send the inverse to OpenGL rather than the thing itself. Supposing you use a matrix, the reasoning is that if you wanted to draw an object at that location you'd load the matrix directly then draw the object. When you draw an object at the same location as the camera you want it to end up being drawn at the origin. So the matrix you have to load for the camera is the inverse of the matrix you'd load to draw at that location because you want the two multiplied together to be the identity matrix.
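A small sketch of that camera handling with GLKMatrix4 (the gesture mapping is just an assumption):

```swift
import GLKit

// The camera is stored directly as its world transform...
var cameraTransform = GLKMatrix4MakeTranslation(0, 0, 10)

// ...gestures mutate that transform directly (here, a pan mapped to a yaw rotation)...
func handlePan(byRadians radians: Float) {
    cameraTransform = GLKMatrix4RotateY(cameraTransform, radians)
}

// ...and what gets sent to OpenGL as the view matrix is the inverse of that transform.
func viewMatrix() -> GLKMatrix4 {
    var isInvertible = false
    let inverse = GLKMatrix4Invert(cameraTransform, &isInvertible)
    return isInvertible ? inverse : GLKMatrix4Identity
}
```

The modelview matrix for any object is then GLKMatrix4Multiply(viewMatrix(), objectTransform).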
I'm not sure how complicated the models for your bricks are but you could hit a performance bottleneck if they're simple and all moving completely independently. The general rule when dealing with OpenGL is that the more geometry you can submit at once, the faster everything goes. So, for example, an entirely static world like that in most games is much easier to draw efficiently than one where everything can move independently. If you're drawing six-sided cubes and moving them all independently then you may see worse performance than you might expect.
If you have any bricks that move in concert then it is more efficient to draw them as a single piece of geometry. If you have any bricks that definitely aren't visible then don't even try to draw them. As of iOS 5, GL_EXT_occlusion_query_boolean is available, which is a way to pass some geometry to OpenGL and ask if any of it is visible. You can use that in realtime scenes by building a hierarchical structure describing your data (which you'll already have if you've directly followed the UIView analogy), calculating or storing some bounding geometry for each view and doing the draw only if the occlusion query suggests that at least some of the bounding geometry would be visible. By following that sort of logic you can often discard large swathes of your geometry long before submitting it.

Rendering multiple textures to a sphere based on game-generated values

I'm making a game with my friend that involves randomly generating planets based on certain properties. Originally this game was all 2D, but now we've decided to enhance the purpose of planets in the game and make it 2.5D, with planets being rendered as 3D spheres in an otherwise 2D world. Now, up to this point we had a pretty good thing going with the way planets looked. We used layered textures, one for each property (water, land, atmosphere) depending on how our algorithms created the planet. This looked pretty, but the planet surfaces were largely lame and didn't vary as they were all made from the same few textures.
Now that we are going 3D, I want to create a nice planetary map which will determine the topography of the planet based on its properties to make each planet have different bodies of water, land masses, etc. I also want to draw different textures on the surface of the planet based on that map, with them blending together at the edges.
I've considered two possibilities: rendering the textures based on the map to a RenderTarget and then wrapping that RenderTarget around my sphere model, or converting the map to vertex data and writing a shader to draw the textures with the proper weight.
The problem is, I'm a novice at both RenderTargets and HLSL (as a matter of fact, I don't even know if the RenderTarget method is possible), so I feel the need for some guidance here. What would be recommended for rendering multiple textures to a sphere model based on a generated terrain map? Also, are there any suggestions for what format to create the terrain map in (it would be some sort of data structure which would represent the type of terrain at any coordinate on the planet's surface)?
I have looked at other multi-texture tutorials, but they all seem based on a pre-determined texture or set of values. I need to be able to randomly generate the terrain in-game.
