How to create an isometric tile map with 3D objects - iOS

So, I am building a basic tycoon game. In this game I want a simple, procedurally generated 3D tile map that the player can build on. I need the tile map to be 3D (or at least have a 3D appearance) because I want to be able to use realistic shaders on the 3D models the player will be able to build.
At first, I thought I could combine SpriteKit and SceneKit: generate a 2D SKTileMapNode and then let the player build the 3D buildings on top of it. But that wouldn't work well, because SKTileMapNodes aren't easily pannable or zoomable.
My second idea was to build an SKTileMap randomly (which I already know how to do) and then use it as a reference to build a 3D scene. That would give me full control over what goes where, but there is a catch: each 3D block (representing a tile in the tile map) would be treated as its own node and cause huge performance issues, unlike SpriteKit's SKTileMapNode, which treats the tile map as a single large node once it is filled with tiles.
I would prefer not to use an isometric SKTileMap, because that wouldn't let the player pan and zoom the map, which would reduce the depth and feel I would like to achieve with this game.
For instance, this is what I am going for (similar in build but completely different in style)

“Each 3D block (representing a tile in the tile map) would be treated as its own node and cause huge performance issues.”
I think your question should really be “how do I render a 3D tile map with good performance in SceneKit?”, because in my opinion SceneKit is definitely the way to go here, and it is certainly possible to avoid those “huge performance issues”.
For starters, how do you create a tile? If you are currently using the built-in primitives, you can get a huge performance increase by using a tile loaded from a .dae or .obj file instead, or even by creating the geometry programmatically.
If every tile uses the same model, you should add it only once and then clone it for all the other tiles:
https://developer.apple.com/documentation/scenekit/scnnode/1408046-clone
(Note that you will have to create a copy of the material and assign it to a clone to prevent the material from being shared across all tiles.)
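For illustration, a minimal Swift sketch of that pattern (the "tile.scn" asset, the node name "tile", and the grid size are hypothetical):

```swift
import SceneKit
import UIKit

// A minimal sketch: load one tile template, clone it across a grid, and give
// each clone its own material copy so per-tile changes stay independent.
let tileScene = SCNScene(named: "tile.scn")!
let tileTemplate = tileScene.rootNode.childNode(withName: "tile", recursively: true)!

let mapNode = SCNNode()
for x in 0..<32 {
    for z in 0..<32 {
        // clone() is cheap because the clone shares geometry (and materials)
        // with the original node.
        let tile = tileTemplate.clone()
        tile.position = SCNVector3(Float(x), 0, Float(z))

        // Copy the geometry and its material so changing this tile's look
        // does not propagate to every other clone.
        if let geometryCopy = tile.geometry?.copy() as? SCNGeometry,
           let materialCopy = geometryCopy.firstMaterial?.copy() as? SCNMaterial {
            materialCopy.diffuse.contents = UIColor.green   // per-tile variation
            geometryCopy.firstMaterial = materialCopy
            tile.geometry = geometryCopy
        }
        mapNode.addChildNode(tile)
    }
}
```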
Additionally, by adding them all to a single parent node, you can create a so-called flattened clone to combine all the tiles into a single node:
https://developer.apple.com/documentation/scenekit/scnnode/1407960-flattenedclone
This significantly reduces the number of draw calls.
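Continuing the sketch above (scene is assumed to be your SCNScene):

```swift
// Once all the tiles are children of mapNode, swap them for a single
// flattened node so SceneKit can batch the whole map.
let flattenedMap = mapNode.flattenedClone()
scene.rootNode.addChildNode(flattenedMap)
```

Keep in mind that flattening batches best when tiles share a material; tiles that each carry a unique material copy generally still cost separate draw calls inside the flattened node.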
If that isn't fast enough, another option is to create the entire map programmatically; that is, create all the vertices yourself and build an SCNGeometry from them.
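A very small Swift sketch of that idea, building a single quad from raw vertex data (a full map would accumulate every tile's vertices and indices into the same arrays before creating one SCNGeometry):

```swift
import SceneKit

// One 1 x 1 ground quad on the XZ plane, made of two triangles.
let vertices: [SCNVector3] = [
    SCNVector3(0, 0, 0),
    SCNVector3(1, 0, 0),
    SCNVector3(1, 0, 1),
    SCNVector3(0, 0, 1)
]
let indices: [Int32] = [0, 2, 1, 0, 3, 2]   // two triangles

let vertexSource = SCNGeometrySource(vertices: vertices)
let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
let tileMapGeometry = SCNGeometry(sources: [vertexSource], elements: [element])
let tileMapNode = SCNNode(geometry: tileMapGeometry)
```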
Yet another option, probably blazingly fast, would be to use four vertices to create a plane and then use a shader and a displacement map to create the tiled map.
The part about cloning and flattened clones also applies to buildings that share the same geometry.
And just for completeness, in case it wasn't obvious: set the camera's usesOrthographicProjection property (https://developer.apple.com/documentation/scenekit/scncamera/1436621-usesorthographicprojection) to true to get that isometric look.
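For example (a sketch; the position, angles, and scale are arbitrary values, and scene is assumed to be your SCNScene):

```swift
// An orthographic camera angled down at the map gives the isometric look.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.camera?.usesOrthographicProjection = true
cameraNode.camera?.orthographicScale = 10          // acts as the "zoom" level
cameraNode.position = SCNVector3(10, 10, 10)
cameraNode.eulerAngles = SCNVector3(-Float.pi / 6, Float.pi / 4, 0)
scene.rootNode.addChildNode(cameraNode)
```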

Related

Creating a large grid of similar objects using SceneKit

I'm currently testing the feasibility of using SceneKit for a game I would like to make, but I'm having trouble figuring out how best to create a grid of many similar geometric objects in SceneKit while maintaining acceptable performance. Here is what I would like to do:
Place hundreds of a geometric primitive (just an ordinary cube for now) in a grid.
Apply various vertex and/or surface and/or fragment shaders to each cube. Some cubes may have the same shaders applied to them but in practice I would like to have at least tens of different shaders that I can apply to the cubes.
I would like to be able to zoom the camera out and view all of the cubes simultaneously while maintaining a smooth framerate.
I'm starting with a grid of 25 by 25 cubes. I have achieved good performance by building the grid in a loop, using a single SCNBox object for the geometry and setting the shaderModifiers property of its firstMaterial. I add all 625 cubes to a node, then add the flattenedClone() of that node to my scene's root node. This means the entire scene can be rendered with just one draw call.
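In code, the setup looks roughly like this (a simplified sketch; the shader modifier string and spacing are placeholders, and scene is the SCNScene):

```swift
import SceneKit

// One shared SCNBox with a shader modifier on its material, 625 nodes under a
// container node, then a single flattened clone added to the scene.
let box = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
box.firstMaterial?.shaderModifiers = [
    .fragment: "_output.color.rgb *= vec3(0.9, 1.0, 0.9);"   // placeholder modifier
]

let gridNode = SCNNode()
for x in 0..<25 {
    for z in 0..<25 {
        let cube = SCNNode(geometry: box)                     // geometry (and shaders) shared
        cube.position = SCNVector3(Float(x) * 1.5, 0, Float(z) * 1.5)
        gridNode.addChildNode(cube)
    }
}
scene.rootNode.addChildNode(gridNode.flattenedClone())        // renders in roughly one draw call
```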
The problem with reusing the geometry, however, is that all cubes must then use the same shaders. If I instead create a new SCNBox for each cube (so that I can set the shaders individually), I end up with one draw call per cube, which is inefficient, and performance suffers quickly as more and more cubes are added to the scene. And if the geometry is anything more complex than a cube, performance degrades severely.
Of course I could optimize by having any cubes that DO use the same shaders share the same geometry, adding them to their own node, and adding the flattenedClone() of that node to the scene. But is there more I can do to optimize in this case? Or am I better off looking for an alternative to SceneKit entirely?

How can I add a texture to a joint in SpriteKit?

Maybe I could do it with the anchor points, but I don't really understand how I should do that.
The solution I'm building uses an SKSpriteNode with a texture for the joint, added as a child of one of the conjoined nodes so that it covers the area the SKPhysicsJoint operates on. Any time I rotate the joint, the texture rotates with it. I got the idea from action figures with hidden joints: basically, you're just putting something over the mechanical joint to make for more pleasing aesthetics.
I don't know if this will work for you, but you could also define a texture using the centerRect of an existing texture and then define the part you want to paint on the joint node. This is more work than I wanted to put in; depending on your artwork assets, it may be a good way to get around having another image in your bundle. I'm the artist as well as the programmer, so it's faster for me to just create a joint image and add another node over the upper and lower nodes in the joint.
Joints can't have a texture. You can estimate, or for some joints calculate, where the connection point is by using the two connected bodies' positions, and then put a sprite there (and keep updating the sprite's position).
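A minimal SpriteKit sketch of that approach (the node names and the "jointTexture" image are hypothetical, and all three nodes are assumed to have been added to the scene):

```swift
import SpriteKit

// Keep a cover sprite at the midpoint of the two joined nodes and refresh its
// position every frame.
class GameScene: SKScene {
    var upperNode: SKSpriteNode!
    var lowerNode: SKSpriteNode!
    let jointSprite = SKSpriteNode(imageNamed: "jointTexture")

    override func update(_ currentTime: TimeInterval) {
        // Reposition the cover sprite halfway between the two connected bodies.
        jointSprite.position = CGPoint(x: (upperNode.position.x + lowerNode.position.x) / 2,
                                       y: (upperNode.position.y + lowerNode.position.y) / 2)
    }
}
```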

Design advice for OpenGL ES 2 / iOS GLKit

I'd like to build an app using the new GLKit framework, and I'm in need of some design advice. I'd like to create an app that will present up to a couple thousand "bricks" (objects with very simple geometry). Most will share an identical texture, but up to a couple hundred will have unique textures. I'd like the bricks to appear every few seconds, move into place, and then stay put (in world coordinates). I'd like to simulate a camera whose position and orientation are controlled by user gestures.
The advice I need is about how to organize the code. I'd like my model to be a collection of bricks that have a lot more than graphical data associated with them:
Does it make sense to associate a view-like object with each brick to handle geometry, texture, etc.?
Should every brick have its own vertex buffer?
Should each have its own GLKBaseEffect?
I'm looking for help organizing which object should do what during setup, and then during rendering.
I hope I can stay close to the typical MVC pattern, with my GLKViewController observing model state changes, controlling eye coordinates based on gestures, and so on.
Would be much obliged if you could give some advice or steer me toward a good example. Thanks in advance!
With respect to the models, I think an approach analogous to the relationship between UIImage and UIImageView is appropriate. So every type of brick has a single vertex buffer, GLKBaseEffect, texture, and whatever else; each brick may then appear multiple times, just as multiple UIImageViews may use the same UIImage. In terms of keeping multiple reference frames, it's actually a really good idea to build a hierarchy essentially equivalent to UIView's, with each node holding a transform relative to its parent and one kind of node being able to display a model.
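As a rough illustration of that split (these type names are hypothetical, not GLKit API):

```swift
import GLKit

// The "UIImage/UIImageView" split: the type owns the shared GPU resources,
// an instance is just one placement of it.
final class BrickType {
    let vertexBuffer: UInt32            // OpenGL vertex buffer object name
    let effect: GLKBaseEffect           // shared shader/texture state
    init(vertexBuffer: UInt32, effect: GLKBaseEffect) {
        self.vertexBuffer = vertexBuffer
        self.effect = effect
    }
}

final class BrickInstance {
    let type: BrickType                          // many instances share one type
    var transform = GLKMatrix4Identity           // placement relative to the parent frame
    init(type: BrickType) { self.type = type }
}
```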
Looking at the GLKit documentation, I think the best way to keep the sort of camera you want (and indeed the object locations) is to store it directly as a GLKMatrix4 or a GLKQuaternion: that is, you don't derive the matrix or quaternion (plus location) from some other description of the camera; the matrix or quaternion itself is the storage for the camera.
Both of those types have built-in functions for applying rotations, and GLKMatrix4 can directly handle translations too, so you can map the relevant gestures directly onto those functions.
The only slightly non-obvious thing about dealing with the camera that way is that you want to send OpenGL the inverse rather than the thing itself. Supposing you use a matrix, the reasoning is that if you wanted to draw an object at that location, you'd load the matrix directly and then draw the object. When you draw an object at the same location as the camera, you want it to end up at the origin. So the matrix you load for the camera is the inverse of the matrix you'd load to draw at that location, because you want the two multiplied together to give the identity matrix.
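A small sketch of that arrangement (the gesture hook and the usage comment at the end are hypothetical):

```swift
import GLKit

// Store the camera directly as a matrix and hand the renderer its inverse.
var cameraTransform = GLKMatrix4MakeTranslation(0, 0, 5)   // camera's placement in the world

// Gestures mutate the stored matrix directly.
func rotateCamera(byRadians angle: Float) {
    cameraTransform = GLKMatrix4RotateY(cameraTransform, angle)
}

// The view matrix handed to OpenGL is the inverse, so an object drawn at the
// camera's own transform lands at the origin.
func viewMatrix() -> GLKMatrix4 {
    var isInvertible = false
    let view = GLKMatrix4Invert(cameraTransform, &isInvertible)
    return isInvertible ? view : GLKMatrix4Identity
}

// e.g. effect.transform.modelviewMatrix = GLKMatrix4Multiply(viewMatrix(), brickTransform)
```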
I'm not sure how complicated your brick models are, but you could hit a performance bottleneck if they're simple and all moving completely independently. The general rule with OpenGL is that the more geometry you can submit at once, the faster everything goes. For example, an entirely static world, like the ones in most games, is much easier to draw efficiently than one where everything can move independently. If you're drawing six-sided cubes and moving them all independently, you may see worse performance than you'd expect.
If you have any bricks that move in concert, it is more efficient to draw them as a single piece of geometry, and if you have any bricks that definitely aren't visible, don't even try to draw them. As of iOS 5, GL_EXT_occlusion_query_boolean is available, which is a way to pass some geometry to OpenGL and ask whether any of it is visible. You can use that in real-time scenes by building a hierarchical structure describing your data (which you'll already have if you've directly followed the UIView analogy), calculating or storing some bounding geometry for each view, and issuing the draw only if the occlusion query suggests that at least some of the bounding geometry would be visible. By following that sort of logic you can often discard large swathes of your geometry long before submitting it.

How to texture a cylinder in XNA4 with multiple textures?

Basically, I'm trying to cover a slot machine reel (a white cylinder model) with multiple evenly spaced textures around the exterior. The program will be Windows-only, and the textures will be loaded dynamically at run time instead of through the content pipeline. (It's a Windows-based multi-screen setup with XNA, based on the Microsoft example.)
Most of the examples I can find online are for XNA3 and are still seemingly gibberish to me at this point.
So I'm looking for any help someone can provide on the subject of in-game texturing of objects like cylinders with multiple textures.
Maybe there is a good book out there that can properly describe how texturing works in XNA (4.0 specifically)?
Thanks
You have a few options. It depends on two things: whether the model is loaded or generated at runtime, and whether your multiple textures get combined into one or kept separate.
If you have art skills or know an artist, probably the easiest approach is to get them to texture-map the cylinder with as many textures as you want (multiple materials). You'd want your Model to have one mesh (ModelMesh) and one material (ModelMeshPart) per texture required. This assumes the cylinders always have a fixed number of textures! Then, to swap the textures at runtime, you'd iterate through the ModelMesh.Effects collection, cast each one to a BasicEffect, and set its Texture property.
If you can't modify the model, you'll have to generate it. There's an example of this on the AppHub site: http://create.msdn.com/en-US/education/catalog/sample/primitives_3d. It probably does not generate texture coordinates, so you'd need to add them. If you wanted 5 images per cylinder, you'd make sure the number of segments is a multiple of 5, and the V coordinate should go from 0 to 1 five times as it wraps around the cylinder. To keep your textures individual with this technique, you'd need to draw the cylinder in 5 chunks, each time setting GraphicsDevice.Textures[0] to the current texture.
With both techniques it would be possible to draw the cylinder in a single draw call, but you'd need to merge your textures into a single one using Texture2D.GetData and Texture2D.SetData. That would be more efficient, but it really isn't worth the trouble. Well, not unless you're making some kind of crazy slot-machine particle system, anyway.

Rendering multiple textures to a sphere based on game-generated values

I'm making a game with my friend that involves randomly generating planets based on certain properties. Originally this game was all 2D, but now we've decided to enhance the purpose of planets in the game and make it 2.5D, with planets being rendered as 3D spheres in an otherwise 2D world. Now, up to this point we had a pretty good thing going with the way planets looked. We used layered textures, one for each property (water, land, atmosphere) depending on how our algorithms created the planet. This looked pretty, but the planet surfaces were largely lame and didn't vary as they were all made from the same few textures.
Now that we are going 3D, I want to create a nice planetary map which will determine the topography of the planet based on its properties to make each planet have different bodies of water, land masses, etc. I also want to draw different textures on the surface of the planet based on that map, with them blending together at the edges.
I've considered two possibilities: rendering the textures based on the map to a RenderTarget and then wrapping that RenderTarget around my sphere model, or converting the map to vertex data and writing a shader to draw the textures with the proper weight.
The problem is, I'm a novice at both RenderTargets and HLSL (as a matter of fact, I don't even know if the RenderTarget method is possible), so I feel the need for some guidance here. What would be recommended for rendering multiple textures to a sphere model based on a generated terrain map? Also, are there any suggestions for what format to create the terrain map in (it would be some sort of data structure which would represent the type of terrain at any coordinate on the planet's surface)?
I have looked at other multi-texture tutorials, but they all seem based on a pre-determined texture or set of values. I need to be able to randomly generate the terrain in-game.
