XNA: best way to load 2D textures; monogame - xna

I am making a test XNA game as a learning exercise and I have a small question about using 2D textures. The game is a grid of different 'tiles' which are taken from a text map file: when initializing a level, I parse through the file and create a matrix of the different tile types. The level is essentially built out of wall tiles and spikes, so there are lots of wall tiles, multiple spike tiles, and then lots of empty tiles. However, there are four types of wall tile and spike textures to cover different directions.
My question is: what is the best way to load the textures for the tiles? Do I load an individual texture for each tile, i.e. when I create a tile, pass it a Texture2D which it can draw, loading the texture at the same time? This seems like a reasonable way, but then I have to load each tile's texture individually, and that seems wasteful.
The other option I can think of is to use a static texture in the tile struct and then simply load this texture as a tile atlas containing the different walls and spikes. This way I am only loading a single texture, and when drawing I just move a source rectangle to the area of the appropriate tile within the atlas.
I am not sure which of these ways would be optimal from a performance perspective, or whether there is an alternative approach.
Thanks in advance.

The wonderful thing about the content pipeline is that when you do
Content.Load<Texture2D>("sometexture");
it doesn't load the Texture2D every time. The content pipeline is smart enough to load it once and hand back the same Texture2D for that asset every time. It would actually be worse if you made the texture a static member of the struct.
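A quick way to see this caching for yourself (the asset name "wallTile" is hypothetical):

// Inside LoadContent of a Game subclass. The ContentManager caches
// assets by name, so the second Load returns the same instance.
Texture2D a = Content.Load<Texture2D>("wallTile");
Texture2D b = Content.Load<Texture2D>("wallTile");
System.Diagnostics.Debug.Assert(ReferenceEquals(a, b)); // passes: same object

So passing each tile a Texture2D at creation time costs nothing extra: every wall tile ends up holding a reference to the same texture object. If you still prefer the atlas, SpriteBatch.Draw accepts a source Rectangle to select the tile region.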

Related

What are the benefits of using texture atlas in Sprite Kit?

I'm new to the iOS game development field.
I have been going through the following Apple tutorial multiple times but am not getting the point:
https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Sprites/Sprites.html
Thanks.
Directly from the SKTextureAtlas class reference:
Texture atlases can improve memory usage and rendering performance. For example, if you have a scene with sprites drawn with different textures, Sprite Kit performs one drawing pass for each texture. However, if all of the textures were loaded from the same texture atlas, then Sprite Kit can render the sprites in a single drawing pass—and use less memory to do so. Whenever you have textures that are always used together, you should store them in an atlas.
SKTextureAtlas Class Reference
An animation can be hundreds of files; an atlas puts them in one large file.
Reading one large file with lots of images is much more efficient than reading hundreds of files with one image each.
Rendering one large image and showing only part of it is much more efficient than rendering each file separately.

How to batch sprites in iOS/OpenGL ES 2.0

I have developed my own sprite library on top of OpenGL ES 2.0. Right now, I am not doing any batching of draw calls; instead, each sprite has its own VBO/VAO of four textured vertices, drawn as a triangle strip (the VAO/VBO itself is managed by the texture atlas, so identical sprites reuse the same VAO/VBO, which is reference-counted and hence deleted when no sprite instances reference it).
Before drawing each sprite, I'll bind its texture, upload its uniforms/attributes to the shader (modelview matrix, opacity; the projection matrix stays constant all along), bind its Vertex Array Object (four textured vertices + four indices), and call glDrawElements(). I do cull off-screen sprites (based on position and bounds), but it is still one draw call per sprite, even if all sprites share the same texture. The vertex positions and texture coordinates for each sprite never change.
I must say that, despite this inefficiency, I have never experienced performance issues, even when drawing many sprites on screen. I do split the sprites into opaque/non-opaque, draw the opaque ones first, and the non-opaque ones after, back to front. I have seen performance suffer only when I overdraw (tax the fill rate).
Nevertheless, the OpenGL instruments in Xcode complain that I draw too many small meshes and that I should consolidate my geometry into fewer objects. And in the Unity world everyone talks about limiting the number of draw calls as if they were the plague.
So, how should I go about batching very many sprites, each with a different transform and opacity value (but the same texture), into one draw call? One thing that comes to mind is to modify the vertex data every frame and stream it: applying the modelview matrix of each sprite to all its vertices, assembling the transformed vertices of all sprites into one mesh, and submitting it to the GPU. This approach does not solve the problem of varying opacity between sprites.
Another idea that comes to mind is to have all the textured vertices of all the sprites assembled into a single mesh (VBO), treated as 'static' (same vertex format I am using now), and a separate array with the stuff that changes per sprite every frame (transform matrix and opacity), and only stream that data each frame, and pull it/apply it on the vertex shader side. That is, have a separate array where the 'attribute' being represented is the modelview matrix/alpha for the corresponding vertices. Still have to figure out the exact implementation in terms of data format/strides etc. In any case, there is the additional complication that arises whenever a new sprite is created/destroyed, the whole mesh has to be modified...
Or perhaps there is an ideal, 'textbook' solution to this problem out there that I haven't figured out? What does cocos2d do?
When I initially started reading your post I thought that each quad used a different texture (since you stated "Before drawing each sprite, I'll bind its texture"), but then you said that each sprite has "the same texture".
A possible easy win is to control the way you bind your textures during the draw, since each call is a burden for the OpenGL driver. If (and I am not really sure about this from your post) you use different textures, I suggest going for a simple texture atlas where all the sprites are inside a single picture (preferably a power-of-2 texture with mipmapping); you then take the piece of the texture you need in the fragment shader using texture coordinates (this is the reason they exist, in the end).
If the positions of the sprites change every frame (of course they do), a possible win is to pack the new vertex coordinates of all your sprites each frame and draw directly from memory (possibly via a VAO; a VBO could cost more since you would need to rebuild it each frame, though this should be tested in a real scenario). This packs the draw calls together, and I am pretty sure it will boost performance.
Consider that this option is feasible since we are talking about a very small amount of data, so memory bandwidth should not represent a real bottleneck (each quad uses, I guess, 12 floats for vertex coordinates, 8 for texture coordinates, and 12 for normals: 128 bytes?); it shouldn't be a big problem.
About opacity, can't you pass a uniform to your fragment shader and modulate the alpha with it? Am I wrong about that? It should work.
I hope this helps.
Ciao,
Maurizio
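
To make the batching idea above concrete, here is a minimal sketch in C#/XNA terms, the language used elsewhere on this page (the SpriteInstance struct and unit-quad layout are hypothetical). The per-sprite transforms are applied on the CPU and the opacity rides along as a per-vertex color, so no per-sprite uniform is needed and the whole batch stays one draw call:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

struct SpriteInstance
{
    public Matrix World;     // per-sprite modelview transform
    public float Opacity;    // per-sprite alpha
    public Rectangle Source; // sub-rectangle in the shared atlas, in texels
}

static class SpriteBatcher
{
    // Builds one CPU-transformed vertex array for all sprites.
    public static VertexPositionColorTexture[] Build(
        SpriteInstance[] sprites, int atlasWidth, int atlasHeight)
    {
        var verts = new VertexPositionColorTexture[sprites.Length * 4];
        for (int i = 0; i < sprites.Length; i++)
        {
            SpriteInstance s = sprites[i];
            Color tint = Color.White * s.Opacity; // opacity rides per vertex
            float u0 = (float)s.Source.Left / atlasWidth;
            float v0 = (float)s.Source.Top / atlasHeight;
            float u1 = (float)s.Source.Right / atlasWidth;
            float v1 = (float)s.Source.Bottom / atlasHeight;

            // Unit-quad corners transformed on the CPU, so the whole
            // batch shares one pass-through transform on the GPU.
            verts[i * 4 + 0] = new VertexPositionColorTexture(
                Vector3.Transform(new Vector3(0, 0, 0), s.World), tint, new Vector2(u0, v0));
            verts[i * 4 + 1] = new VertexPositionColorTexture(
                Vector3.Transform(new Vector3(1, 0, 0), s.World), tint, new Vector2(u1, v0));
            verts[i * 4 + 2] = new VertexPositionColorTexture(
                Vector3.Transform(new Vector3(0, 1, 0), s.World), tint, new Vector2(u0, v1));
            verts[i * 4 + 3] = new VertexPositionColorTexture(
                Vector3.Transform(new Vector3(1, 1, 0), s.World), tint, new Vector2(u1, v1));
        }
        return verts;
    }
}

The resulting array can be submitted with a single GraphicsDevice.DrawUserIndexedPrimitives call against a precomputed index buffer (six indices per quad); creating or destroying a sprite just changes the length of the array rebuilt each frame, which sidesteps the static-mesh bookkeeping the question worries about.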

Loading texture in segments

I'm working on an OpenGL app that uses one particularly large texture, 2250x1000. Unfortunately, OpenGL ES 2.0 doesn't support textures larger than 2048x2048, so when I try to draw my texture it appears black. I need a way to load and draw the texture in two segments (left, right). I've seen a few questions that touch on libpng, but I really just need a straightforward solution for drawing large textures in OpenGL ES.
First of all, the supported texture size depends on the device; I believe the iPad 3 supports 4096x4096, but never mind that. There is no way to push all that data as it is onto one texture on most devices. First you should ask yourself whether you really need such a large texture: would it hurt to resample it down to 2048 in width? If downsampling is not acceptable, you will need to break the texture up at some point. You could cut it in half widthwise and append the cut part to the bottom of the texture, resulting in a 1125x2000 texture, or simply create two or more textures and push certain parts of the image to each. In either case you might have trouble with texture coordinates, but that depends heavily on what you are trying to do and what is on that texture (a single image or parts of a sophisticated model; a colour map or data you cannot interpolate; created at load time or modified as the app runs). Some more information could help us address your situation more specifically.
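
As a sketch of the "cut in half and append to the bottom" option, here is the texture-coordinate remapping it implies, in C# as used elsewhere on this page (the RepackedUv helper is hypothetical, and assumes the right half of the original 2250x1000 image is stored below the left half in a 1125x2000 texture):

using Microsoft.Xna.Framework;

static class RepackedUv
{
    // Maps a UV in the original 2250x1000 image to a UV in the
    // repacked 1125x2000 texture (right half stored below left half).
    public static Vector2 Remap(Vector2 uv)
    {
        if (uv.X < 0.5f)
            return new Vector2(uv.X * 2f, uv.Y * 0.5f);             // left half: top
        return new Vector2((uv.X - 0.5f) * 2f, 0.5f + uv.Y * 0.5f); // right half: bottom
    }
}

Note that bilinear filtering will bleed across the seam where the halves meet, which is part of the texture-coordinate trouble mentioned above; leaving a small duplicated border on each half is one common workaround.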

Rendering multiple textures to a sphere based on game-generated values

I'm making a game with my friend that involves randomly generating planets based on certain properties. Originally this game was all 2D, but now we've decided to enhance the purpose of planets in the game and make it 2.5D, with planets being rendered as 3D spheres in an otherwise 2D world. Up to this point we had a pretty good thing going with the way planets looked. We used layered textures, one for each property (water, land, atmosphere), depending on how our algorithms created the planet. This looked pretty, but the planet surfaces were largely lame and didn't vary, as they were all made from the same few textures.
Now that we are going 3D, I want to create a nice planetary map which will determine the topography of the planet based on its properties to make each planet have different bodies of water, land masses, etc. I also want to draw different textures on the surface of the planet based on that map, with them blending together at the edges.
I've considered two possibilities: rendering the textures based on the map to a RenderTarget and then wrapping that RenderTarget around my sphere model, or converting the map to vertex data and writing a shader to draw the textures with the proper weight.
The problem is, I'm a novice at both RenderTargets and HLSL (as a matter of fact, I don't even know if the RenderTarget method is possible), so I feel the need for some guidance here. What would be recommended for rendering multiple textures to a sphere model based on a generated terrain map? Also, are there any suggestions for what format to create the terrain map in (it would be some sort of data structure which would represent the type of terrain at any coordinate on the planet's surface)?
I have looked at other multi-texture tutorials, but they all seem based on a pre-determined texture or set of values. I need to be able to randomly generate the terrain in-game.
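
Since the map is generated in-game, one approach that fits here (a hedged sketch, not from the question) is a splat map: bake the generated terrain weights into the channels of a texture, then have the pixel shader sample it with the sphere's UVs and blend the terrain textures by those weights. A minimal XNA-side sketch, assuming a hypothetical heightMap array produced by your generator:

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Builds a 'splat map': R = water weight, G = land weight.
// heightMap is a hypothetical [w, h] array of 0..1 values from the
// planet generator; device is the game's GraphicsDevice.
static Texture2D BuildSplatMap(GraphicsDevice device, float[,] heightMap)
{
    int w = heightMap.GetLength(0), h = heightMap.GetLength(1);
    var pixels = new Color[w * h];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            float land = MathHelper.Clamp(heightMap[x, y], 0f, 1f);
            // Weights sum to 1, so the shader can lerp between the
            // water and land textures and blend smoothly at edges.
            pixels[y * w + x] = new Color(1f - land, land, 0f);
        }
    var map = new Texture2D(device, w, h);
    map.SetData(pixels);
    return map;
}

In the pixel shader you would then sample the splat map with tex2D and use its channels as lerp weights between the terrain textures; the blending at the edges falls out of bilinear filtering on the splat map, and no pre-determined texture set is needed.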

Copying a texture in XNA into another texture

I am loading a Texture2D that contains multiple sprite textures. I would like to pull the individual textures out when I load the initial texture and store them in separate Texture2D objects, but I can't seem to find a method anywhere that would let me do this. SpriteBatch.Draw, I believe, should only be called from within a Begin/End block, right?
Thanks.
I am loading a Texture2D that contains multiple sprite textures. I would like to pull the individual textures out when I load the initial Texture to store into separate Texture2D objects.
You don't have to do this, nor should you. Accessing a single texture is faster than accessing multiple textures, and textures are stored in GPU texture memory anyway, so it makes no sense to split one up.
You should instead focus on writing code that can access individual sprites within your sprite sheet, as in the sketch below. I suggest you have a look at how sprite-based games work.
Here is a great tutorial video series that should help you out: tile engine videos
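
As a minimal sketch of sheet access (the 32-pixel cell size and the asset name "spriteSheet" are hypothetical), draw any cell through a source rectangle; and yes, Draw must be called between Begin and End:

// Inside your Game subclass.
SpriteBatch spriteBatch;
Texture2D sheet;      // the whole sprite sheet
const int Cell = 32;  // hypothetical cell size in pixels

protected override void LoadContent()
{
    spriteBatch = new SpriteBatch(GraphicsDevice);
    sheet = Content.Load<Texture2D>("spriteSheet"); // hypothetical asset name
}

// The source rectangle selects one cell at draw time; nothing is copied.
void DrawCell(int column, int row, Vector2 position)
{
    var source = new Rectangle(column * Cell, row * Cell, Cell, Cell);
    spriteBatch.Draw(sheet, position, source, Color.White);
}

Call DrawCell between spriteBatch.Begin() and spriteBatch.End() in your Draw method.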