I'm making a game with my friend that involves randomly generating planets based on certain properties. Originally this game was all 2D, but now we've decided to enhance the role of planets in the game and make it 2.5D, with planets rendered as 3D spheres in an otherwise 2D world. Up to this point we had a pretty good thing going with the way planets looked: we used layered textures, one for each property (water, land, atmosphere), depending on how our algorithms created the planet. This looked pretty, but the planet surfaces were bland and repetitive, since they were all made from the same few textures.
Now that we are going 3D, I want to create a nice planetary map which will determine the topography of the planet based on its properties to make each planet have different bodies of water, land masses, etc. I also want to draw different textures on the surface of the planet based on that map, with them blending together at the edges.
I've considered two possibilities: rendering the textures based on the map to a RenderTarget and then wrapping that RenderTarget around my sphere model, or converting the map to vertex data and writing a shader to draw the textures with the proper weight.
The problem is, I'm a novice at both RenderTargets and HLSL (as a matter of fact, I don't even know if the RenderTarget method is possible), so I feel the need for some guidance here. What would be recommended for rendering multiple textures to a sphere model based on a generated terrain map? Also, are there any suggestions for what format to create the terrain map in (it would be some sort of data structure which would represent the type of terrain at any coordinate on the planet's surface)?
I have looked at other multi-texture tutorials, but they all seem based on a pre-determined texture or set of values. I need to be able to randomly generate the terrain in-game.
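In case it helps, here's a rough sketch (in XNA/C#) of the kind of structure I have in mind for the map; the names and terrain types are placeholders, not working code from our game:

```csharp
using System;
using Microsoft.Xna.Framework;

// Placeholder sketch: an equirectangular grid storing a terrain type
// per (longitude, latitude) cell of the planet's surface.
enum TerrainType { Water, Land, Mountain }

class TerrainMap
{
    private readonly TerrainType[,] cells;

    public TerrainMap(int width, int height)
    {
        cells = new TerrainType[width, height];
    }

    // Look up the terrain under a point on the unit sphere by
    // converting it to longitude/latitude grid coordinates.
    public TerrainType Sample(Vector3 pointOnSphere)
    {
        double lon = Math.Atan2(pointOnSphere.Z, pointOnSphere.X);        // -pi..pi
        double lat = Math.Asin(MathHelper.Clamp(pointOnSphere.Y, -1, 1)); // -pi/2..pi/2
        int x = (int)((lon + Math.PI) / (2 * Math.PI) * (cells.GetLength(0) - 1));
        int y = (int)((lat + Math.PI / 2) / Math.PI * (cells.GetLength(1) - 1));
        return cells[x, y];
    }
}
```

The open question is whether something like this should feed a RenderTarget pass or get baked into vertex data for a shader.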
I'm using Python Kivy to render meshes with OpenGL onto a canvas. I want to get vertex data back from the fragment shader so I can build a collider (to use in my CPU-side event listeners after doing the projection and model-view transforms). I can replicate the matrix multiplications on the CPU (I guess that's the easy way out), but then I would have to do the same calculations twice (not good).
The only way I can think of doing this (after some browsing) is to imprint an object ID onto my rendered mesh's alpha channel (it wouldn't affect much if I kept the encoded data near an alpha value of 1), and then create some kind of 'color picker' on the CPU side to decode it (I'm guessing that's not hard to do using Kivy).
Does anyone have a better idea for dealing with this, or a better approach?
The first question here is: do you need collision for picking, or for physics simulation?
If it is for physics: you almost never want the same mesh for rendering and for physics collisions. Typically, you use a very rough approximation for the physics shape, nearly always a convex shape, or a union of convex shapes. (Colliding arbitrary concave meshes is something that no physics engine can do well, and if they attempt it at all, performance will be poor.)
If it is for the purpose of picking an object with a mouse-click: you can go two different ways for this:
You replicate the geometry on the CPU, and use the mouse-location plus camera-view to create a ray that intersects this geometry, to see what is hit first.
After rendering your scene, you read back a single pixel from the depth buffer. (The pixel that your mouse is over.) With the depth value you get back, plus camera info, you can reconstruct a corresponding 3D position in your world. Once you have a 3D location, you can query your world to see which object is the closest to this point, and you will have your hit.
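To make the depth-readback approach concrete, here is a minimal sketch of the reconstruction step in XNA/C# terms, to match the rest of this page (in raw OpenGL you'd read the depth value back with glReadPixels and GL_DEPTH_COMPONENT). It assumes the depth under the mouse has already been read back:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class Picking
{
    // Reconstruct a world-space position from a mouse location plus a
    // depth value in the 0..1 range (assumed already read back from
    // the depth buffer). The names here are placeholders.
    public static Vector3 WorldFromDepth(Viewport viewport,
        int mouseX, int mouseY, float depth,
        Matrix view, Matrix projection)
    {
        var screenPoint = new Vector3(mouseX, mouseY, depth);
        return viewport.Unproject(screenPoint, projection, view, Matrix.Identity);
    }
}
```

Once you have that position, the nearest-object query is an ordinary CPU-side search over your scene.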
I have a problem in my game where I have to use SpriteSortMode.Texture, because I have a lot of objects sharing few textures and cannot afford to use SpriteSortMode.BackToFront.
The thing is, this means I cannot draw in layers unless I call SpriteBatch.Begin again for each layer with the exact same settings, which is what I'm currently doing.
I only need three draw layers: a tileset surface, objects like rocks or characters on the surface, and UI.
Other solutions I've found are using textured quads (which supposedly also improves tileset drawing performance) and going 3D with an orthographic view, which I haven't researched yet.
I'm hoping there's a better way to make this work.
Why would having a lot of objects with few textures mean you have to use SpriteSortMode.Texture?
"This can improve performance when drawing non-overlapping sprites of uniform depth." says the MSDN page, and this is clearly not what you are doing.
Just use the default SpriteSortMode.Deferred and draw things back to front in order.
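A minimal sketch of what that looks like; DrawTiles, DrawObjects and DrawUI are stand-ins for your own drawing code. Within a single deferred batch, sprites are drawn in submission order, so the three layers come out correctly stacked:

```csharp
using Microsoft.Xna.Framework.Graphics;

class SceneRenderer
{
    // Stand-ins for your own per-layer drawing code.
    void DrawTiles(SpriteBatch sb) { /* tileset surface */ }
    void DrawObjects(SpriteBatch sb) { /* rocks, characters */ }
    void DrawUI(SpriteBatch sb) { /* UI */ }

    // One deferred batch; layers submitted back to front, so no
    // per-sprite depth sorting is needed.
    public void DrawScene(SpriteBatch spriteBatch)
    {
        spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);
        DrawTiles(spriteBatch);    // tileset surface first
        DrawObjects(spriteBatch);  // objects on the surface
        DrawUI(spriteBatch);       // UI last, on top
        spriteBatch.End();
    }
}
```

If you still want texture batching within a layer, group the draw calls by texture inside each helper; deferred mode preserves whatever order you give it.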
I have been able to convert a 3D mesh from Maya into voxel art (it looks like a bunch of cubes, similar to Legos), all done in Maya. I plan on using the 3D art to wrap around my 2D textures to make the game 2.5D. My question is: does the mesh being voxelized allow me to use the pieces as particles that I can put into a particle engine in XNA to get awesome dynamic effects?
No, because you get a set of vertices and indices defining triangles, with no information about the cubes.
But you can create an algorithm that extracts that info from the model. It's a bit hard, but it's feasible.
I'd do it by creating a 3D grid, and for each face of the grid I'd cast rays toward the opposite face, recording every intersection with the mesh. Each ray should produce an even number of intersections (0, 2, 4, ...), and the space between each pair of intersection points is solid volume.
That way the mesh can be converted to voxels. At each intersection it would also be useful to store the bones related to the intersected triangle; that way you would be able to animate the voxel model.
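A minimal sketch of that parity idea for one axis of the grid, assuming you supply the ray/mesh intersection test yourself; intersectColumn below is a placeholder that returns the hit depths for the ray through grid column (x, y):

```csharp
using System;
using System.Collections.Generic;

static class Voxelizer
{
    // Fill the cells between each entry/exit pair of ray hits.
    // intersectColumn(x, y) stands in for your ray/triangle test and
    // returns the hit depths (in cell units) along the column's ray.
    public static bool[,,] Voxelize(int size,
        Func<int, int, List<float>> intersectColumn)
    {
        var solid = new bool[size, size, size];
        for (int x = 0; x < size; x++)
            for (int y = 0; y < size; y++)
            {
                List<float> hits = intersectColumn(x, y);
                hits.Sort(); // the hit count should be even: 0, 2, 4, ...
                for (int i = 0; i + 1 < hits.Count; i += 2)
                {
                    int zStart = Math.Max(0, (int)hits[i]);
                    int zEnd = Math.Min(size, (int)hits[i + 1]);
                    for (int z = zStart; z < zEnd; z++)
                        solid[x, y, z] = true; // inside the mesh
                }
            }
        return solid;
    }
}
```

Storing the bone indices of each intersected triangle is then just a matter of widening what intersectColumn returns.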
I'd like to build an app using the new GLKit framework, and I'm in need of some design advice. I'd like to create an app that will present up to a couple thousand "bricks" (objects with very simple geometry). Most will have identical texture, but up to a couple hundred will have unique texture. I'd like the bricks to appear every few seconds, move into place and then stay put (in world coords). I'd like to simulate a camera whose position and orientation are controlled by user gestures.
The advice I need is about how to organize the code. I'd like my model to be a collection of bricks that have a lot more than graphical data associated with them:
Does it make sense to associate a view-like object with each brick to handle geometry, texture, etc.?
Should every brick have its own vertex buffer?
Should each have its own GLKBaseEffect?
I'm looking for help organizing what object should do what during setup, then rendering.
I hope I can stay close to the typical MVC pattern, with my GLKViewController observing model state changes, controlling eye coordinates based on gestures, and so on.
Would be much obliged if you could give some advice or steer me toward a good example. Thanks in advance!
With respect to the models, I think an approach analogous to the relationship between UIImage and UIImageView is appropriate. So every type of brick has a single vertex buffer, GLKBaseEffect, texture, and whatever else. Each brick may then appear multiple times, just as multiple UIImageViews may use the same UIImage. In terms of keeping multiple reference frames, it's actually a really good idea to build a hierarchy essentially equivalent to UIView, each node containing some transform relative to its parent, with one sort of node being able to display a model.
From the GLKit documentation, I think the best way to keep the sort of camera you want (and indeed the object locations) is to store it directly as a GLKMatrix4 or a GLKQuaternion: you don't derive the matrix or quaternion (plus location) from some other description of the camera; rather, the matrix or quaternion itself is the storage for the camera.
Both of those types have functions built in to apply rotations, and GLKMatrix4 can directly handle translations too. So you can map the relevant gestures directly onto those functions.
The only slightly non-obvious thing I can think of when dealing with the camera in that way is that you want to send the inverse to OpenGL rather than the thing itself. Supposing you use a matrix, the reasoning is that if you wanted to draw an object at that location you'd load the matrix directly then draw the object. When you draw an object at the same location as the camera you want it to end up being drawn at the origin. So the matrix you have to load for the camera is the inverse of the matrix you'd load to draw at that location because you want the two multiplied together to be the identity matrix.
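In code the idea is tiny. A sketch in XNA/C# terms to match the rest of this page (GLKit's GLKMatrix4Invert expresses the same thing); yaw and position stand in for whatever your gestures produce:

```csharp
using Microsoft.Xna.Framework;

static class CameraMath
{
    // The camera is stored as an ordinary world transform; the view
    // matrix handed to the GPU is its inverse.
    public static Matrix ViewFromCamera(float yaw, Vector3 position)
    {
        Matrix cameraWorld = Matrix.CreateRotationY(yaw)
                           * Matrix.CreateTranslation(position);
        return Matrix.Invert(cameraWorld);
    }
}
```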
I'm not sure how complicated the models for your bricks are but you could hit a performance bottleneck if they're simple and all moving completely independently. The general rule when dealing with OpenGL is that the more geometry you can submit at once, the faster everything goes. So, for example, an entirely static world like that in most games is much easier to draw efficiently than one where everything can move independently. If you're drawing six-sided cubes and moving them all independently then you may see worse performance than you might expect.
If you have any bricks that move in concert then it is more efficient to draw them as a single piece of geometry. If you have any bricks that definitely aren't visible then don't even try to draw them. As of iOS 5, GL_EXT_occlusion_query_boolean is available, which is a way to pass some geometry to OpenGL and ask if any of it is visible. You can use that in realtime scenes by building a hierarchical structure describing your data (which you'll already have if you've directly followed the UIView analogy), calculating or storing some bounding geometry for each view and doing the draw only if the occlusion query suggests that at least some of the bounding geometry would be visible. By following that sort of logic you can often discard large swathes of your geometry long before submitting it.
Basically, I'm trying to cover a slot machine reel (white cylinder model) with multiple evenly spaced textures around the exterior. The program will be Windows only and the textures will be dynamically loaded at run-time instead of using the content pipeline. (Windows based multi-screen setup with XNA from the Microsoft example)
Most of the examples I can find online are for XNA3 and are still seemingly gibberish to me at this point.
So I'm looking for any help someone can provide on the subject of in-game texturing of objects like cylinders with multiple textures.
Maybe there is a good book out there that can properly describe how texturing works in XNA (4.0 specifically)?
Thanks
You have a few options. It depends on two things: whether the model is loaded or generated at runtime, and whether your multiple textures get combined into one or kept separate.
If you have art skills or know an artist, probably the easiest approach is to get them to texture-map the cylinder with as many textures as you want (multiple materials). You'd want your Model to have one mesh (ModelMesh) and one material (ModelMeshPart) per texture required. This assumes the cylinders always have a fixed number of textures! Then, to swap the textures at runtime, you'd iterate through the ModelMesh.Effects collection, cast each one to a BasicEffect, and set its Texture property.
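A rough sketch of that swap, assuming the content pipeline gave each mesh part a BasicEffect; with one ModelMeshPart per image, you'd pick the matching effect by index rather than setting them all to the same texture:

```csharp
using Microsoft.Xna.Framework.Graphics;

static class ModelTexturing
{
    // Point every material of a loaded Model at a dynamically
    // loaded texture. Assumes each mesh part uses BasicEffect.
    public static void SetTexture(Model model, Texture2D texture)
    {
        foreach (ModelMesh mesh in model.Meshes)
            foreach (BasicEffect effect in mesh.Effects)
            {
                effect.TextureEnabled = true;
                effect.Texture = texture;
            }
    }
}
```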
If you can't modify the model, you'll have to generate it. There's an example of this on the AppHub site: http://create.msdn.com/en-US/education/catalog/sample/primitives_3d. It probably does not generate texture coordinates, so you'd need to add them. If you wanted five images per cylinder, you'd make sure the number of segments is a multiple of five and have the V coordinate go from 0 to 1 five times as it wraps around the cylinder. To keep your textures individual with this technique, you'd need to draw the cylinder in five chunks, each time setting GraphicsDevice.Textures[0] to the current texture.
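The chunked draw might look something like this sketch. It assumes your vertex and index buffers are already set on the device, laid out so that chunk i occupies indices i * indicesPerChunk through (i + 1) * indicesPerChunk - 1, and that your effect samples GraphicsDevice.Textures[0]:

```csharp
using Microsoft.Xna.Framework.Graphics;

static class ReelRenderer
{
    // Draw the cylinder in one chunk per texture. Buffer layout and
    // effect setup are assumed to be done by the caller.
    public static void Draw(GraphicsDevice device, Texture2D[] textures,
        int vertexCount, int indicesPerChunk)
    {
        for (int i = 0; i < textures.Length; i++)
        {
            device.Textures[0] = textures[i];
            device.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                0,                     // baseVertex
                0, vertexCount,        // minVertexIndex, numVertices
                i * indicesPerChunk,   // startIndex
                indicesPerChunk / 3);  // primitiveCount (triangles)
        }
    }
}
```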
With both techniques it would be possible to draw the cylinder in a single draw call, but you'd need to merge your textures into a single one using Texture2D.GetData and Texture2D.SetData. This is going to be more efficient, but really isn't worth the trouble. Well, not unless you're making some kind of crazy slot-machine particle system anyway.
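If you did want to try it, here's a minimal sketch of the merge, assuming all source textures share the same size and SurfaceFormat.Color:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

static class TextureAtlas
{
    // Stitch equally sized Color textures side by side into one
    // atlas, so the whole reel can be drawn in a single call.
    public static Texture2D Merge(GraphicsDevice device, Texture2D[] sources)
    {
        int w = sources[0].Width, h = sources[0].Height;
        var atlas = new Texture2D(device, w * sources.Length, h);
        var pixels = new Color[w * h];
        for (int i = 0; i < sources.Length; i++)
        {
            sources[i].GetData(pixels);
            atlas.SetData(0, new Rectangle(i * w, 0, w, h),
                pixels, 0, pixels.Length);
        }
        return atlas;
    }
}
```

You'd also need to rescale the U coordinates so each chunk maps to its slice of the atlas.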