Copying a texture in XNA into another texture

I am loading a Texture2D that contains multiple sprite textures. I would like to pull the individual textures out when I load the initial texture, to store them in separate Texture2D objects, but I can't seem to find a method anywhere that would let me do this. I believe SpriteBatch.Draw should only be called from within a Begin/End block, right?
Thanks.

I am loading a Texture2D that contains multiple sprite textures. I would like to pull the individual textures out when I load the initial Texture to store into separate Texture2D objects.
You don't have to do this, nor should you. Accessing a single texture is faster than switching between multiple textures, and textures are stored in GPU texture memory either way. It just makes no sense to split it up.
You should instead focus on writing code that can access individual sprites within your sprite sheet. I suggest you have a look at how sprite-based games work.
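For example, drawing a single sprite out of the sheet is just a matter of passing a source rectangle to SpriteBatch.Draw. A minimal sketch (the asset name and rectangle values here are made up):

Texture2D sheet = Content.Load<Texture2D>("spriteSheet"); // load the whole sheet once

// Source rectangle selecting one 32x32 sprite inside the sheet.
Rectangle source = new Rectangle(32, 0, 32, 32);

spriteBatch.Begin();
spriteBatch.Draw(sheet, new Vector2(100, 100), source, Color.White); // draw only that region
spriteBatch.End();

And yes, SpriteBatch.Draw belongs between Begin() and End(), as shown.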
Here is a great tutorial video series that should help you out: tile engine videos

Related

Can I create a big texture from other small textures in WebGL?

I have textures loaded in memory and I want to draw them in one draw call. I can put all the texture coordinates into a buffer, but how do I create one texture from the small texture parts? Is that possible?
Or must I download the images, combine them, and then create a texture from the combined big picture?
In general, combining images into a texture atlas is something you'd do offline, either manually in an image editing program or using custom or specialized tools. That's the most common and recommended way.
If you have to do it at runtime for some reason, then the easiest way to combine images into a single texture is to first load all your images, then use the canvas 2D API to draw them into a 2D canvas, then use that canvas as the source for texImage2D in WebGL. The only issue with using a 2D canvas is if you need data other than images, because a 2D canvas only supports pre-multiplied alpha.
Otherwise, doing it in WebGL is just a matter of rendering your smaller textures into a larger texture. Rendering to a texture requires creating the texture, attaching it to a framebuffer, and then rendering like you would anything else. See this for rendering to a texture and this for rendering any part of an image to any place in the canvas or another texture.
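If you do have to build the atlas at runtime, the render-to-texture route above looks roughly like this in XNA/C# terms, to match the rest of this thread (a RenderTarget2D plays the role of the WebGL framebuffer; textureA/textureB are made-up names):

RenderTarget2D atlas = new RenderTarget2D(GraphicsDevice, 1024, 1024); // the big destination texture

GraphicsDevice.SetRenderTarget(atlas);
GraphicsDevice.Clear(Color.Transparent);

spriteBatch.Begin();
spriteBatch.Draw(textureA, new Vector2(0, 0), Color.White);   // copy each small texture
spriteBatch.Draw(textureB, new Vector2(256, 0), Color.White); // into its own region
spriteBatch.End();

GraphicsDevice.SetRenderTarget(null); // back to the backbuffer

// 'atlas' now behaves like any other Texture2D; sample sub-regions
// from it with texture coordinates or source rectangles.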

Using a tileset for textures for a mesh

I'm trying to create an isometric game in Love2D. I need to use meshes because I want to be able to rotate the camera view, but I could not figure out how to use a tileset to provide textures for the mesh.
The Mesh class accepts only a texture, unlike the SpriteBatch class, which accepts quads that tell it what part of the texture to use. Is there a way to give the Mesh class this information, or even to slice the tileset up into individual images to be used with meshes?

What are the benefits of using texture atlas in Sprite Kit?

I'm new to the iOS game development field.
I have been going through the following Apple tutorial multiple times, but I'm not getting the point:
https://developer.apple.com/library/ios/documentation/GraphicsAnimation/Conceptual/SpriteKit_PG/Sprites/Sprites.html
Thanks.
Directly from the SKTextureAtlas class reference:
Texture atlases can improve memory usage and rendering performance. For example, if you have a scene with sprites drawn with different textures, Sprite Kit performs one drawing pass for each texture. However, if all of the textures were loaded from the same texture atlas, then Sprite Kit can render the sprites in a single drawing pass—and use less memory to do so. Whenever you have textures that are always used together, you should store them in an atlas.
SKTextureAtlas Class Reference
An animation can be hundreds of files; an atlas puts them in one large file.
Reading one large file with lots of images is much more efficient than reading hundreds of files with one image each.
Rendering one large image and then showing only part of it is much more efficient than rendering each file separately.

How to batch sprites in iOS/OpenGL ES 2.0

I have developed my own sprite library on top of OpenGL ES 2.0. Right now, I am not doing any batching of draw calls; instead, each sprite has its own VBO/VAO of four textured vertices, drawn as a triangle strip (the VAO/VBO itself is managed by the texture atlas, so identical sprites reuse the same VAO/VBO, which is 'reference counted' and hence deleted when no sprite instances reference it).
Before drawing each sprite, I'll bind its texture, upload its uniforms/attributes to the shader (modelview matrix, opacity; the projection matrix stays constant all along), bind its vertex array object (four textured vertices + four indices), and call glDrawElements(). I do cull off-screen sprites (based on position and bounds), but it is still one draw call per sprite, even if all sprites share the same texture. The vertex positions and texture coordinates for each sprite never change.
I must say that, despite this inefficiency, I have never experienced performance issues, even when drawing many sprites on screen. I do split the sprites into opaque/non-opaque, draw the opaque ones first, and the non-opaque ones after, back to front. I have seen performance suffer only when I overdraw (tax the fill rate).
Nevertheless, the OpenGL instruments in Xcode will complain that I draw too many small meshes and that I should consolidate my geometry into fewer objects. And in the Unity world everyone talks about limiting the number of draw calls as if they were the plague.
So, how should I go about batching very many sprites, each with a different transform and opacity value (but the same texture), into one draw call? One thing that comes to mind is to modify the vertex data every frame and stream it: applying the modelview matrix of each sprite to all its vertices, assembling the transformed vertices for all sprites into one mesh, and submitting it to the GPU. This approach does not solve the problem of varying opacity between sprites, though.
Another idea that comes to mind is to have all the textured vertices of all the sprites assembled into a single mesh (VBO), treated as 'static' (same vertex format I am using now), and a separate array with the stuff that changes per sprite every frame (transform matrix and opacity), and only stream that data each frame, and pull it/apply it on the vertex shader side. That is, have a separate array where the 'attribute' being represented is the modelview matrix/alpha for the corresponding vertices. Still have to figure out the exact implementation in terms of data format/strides etc. In any case, there is the additional complication that arises whenever a new sprite is created/destroyed, the whole mesh has to be modified...
Or perhaps there is an ideal, 'textbook' solution to this problem out there that I haven't figured out? What does cocos2d do?
When I initially started reading your post I thought that each quad used a different texture (since you stated "Before drawing each sprite, I'll bind its texture"), but then you said that each sprite has "the same texture".
A possible easy win is to control the way you bind your textures during the draw, since each call is a burden for the OpenGL driver. If (and I am not really sure about this from your post) you use different textures, I suggest going for a simple texture atlas where all the sprites are inside a single picture (preferably a power-of-2 texture with mipmapping), and then taking the piece of the texture you need in the fragment shader using texture coordinates (that is the reason they exist, after all).
If the position of the sprites changes at each frame (of course it does), a possible win would be to pack the new vertex coordinates of your sprites each frame and draw directly from memory (possibly via a VAO; a VBO could cost more since you would need to rebuild it each frame, to be tested in a real scenario). This would be a good way to pack the draw calls, and I am pretty sure it would boost the performance.
Consider that the VAO option should be feasible, since we are talking about a very small amount of data and memory bandwidth should not represent a real bottleneck (each quad, I guess, uses 12 floats for vertex coordinates, 8 for texture coordinates and 12 for normals: 128 bytes?); it shouldn't be a big problem.
About opacity, can't you use a uniform in your fragment shader to control the alpha? Am I wrong about that? It should work.
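One note on that last point: a uniform stays constant for a whole draw call, so it can only give you one alpha per batch. To vary opacity per sprite within a single batched draw, the usual trick is to carry alpha in a per-vertex attribute. Here is a minimal CPU-side batching sketch in C# (to match the other code in this thread); the Sprite and Vec2 types and their members are made up for illustration, and the actual GPU upload is left in comments because it depends on your GL binding:

using System.Collections.Generic;

// Minimal two-float vector, so the sketch is self-contained.
struct Vec2 { public float X, Y; }

// Made-up sprite record: four corner positions already transformed on the CPU,
// four texture coordinates into the shared atlas, and an opacity value.
class Sprite
{
    public Vec2[] Corners = new Vec2[4];
    public Vec2[] TexCoords = new Vec2[4];
    public float Alpha;
}

// One vertex of the dynamic batch.
struct BatchVertex
{
    public float X, Y; // pre-transformed position
    public float U, V; // atlas texture coordinates
    public float A;    // per-sprite alpha, replicated on all four corners
}

static class SpriteBatcher
{
    // Pack every visible sprite into one vertex/index stream per frame.
    public static void FillBatch(IList<Sprite> sprites, BatchVertex[] vertices, ushort[] indices)
    {
        int v = 0, i = 0;
        foreach (Sprite s in sprites)
        {
            for (int c = 0; c < 4; c++)
            {
                vertices[v + c] = new BatchVertex
                {
                    X = s.Corners[c].X, Y = s.Corners[c].Y,
                    U = s.TexCoords[c].X, V = s.TexCoords[c].Y,
                    A = s.Alpha
                };
            }
            // Two triangles per quad: 0-1-2 and 2-1-3.
            indices[i++] = (ushort)(v + 0); indices[i++] = (ushort)(v + 1); indices[i++] = (ushort)(v + 2);
            indices[i++] = (ushort)(v + 2); indices[i++] = (ushort)(v + 1); indices[i++] = (ushort)(v + 3);
            v += 4;
        }
        // Stream both arrays to the GPU (e.g. glBufferData with GL_STREAM_DRAW)
        // and issue a single glDrawElements; the vertex shader then only applies
        // the projection matrix, and the fragment shader multiplies the texel by A.
    }
}

This is essentially what the 'textbook' sprite batchers (cocos2d's batch nodes, XNA's SpriteBatch) do internally.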
I hope this helps.
Ciao,
Maurizio

XNA best way to load 2D textures; MonoGame

I am making a test XNA game as a learning exercise and I have a small question about using 2D textures. Basically the game is a grid of different 'tiles' which are taken from a text map file: I just parse through the file when initializing a level and create a matrix of the different tile types. The level is essentially a tub of wall tiles and spikes, so there are lots of wall tiles, multiple spike tiles, and then lots of empty tiles; however, there are four types of wall tile and spike textures to cover different directions.
My question is: what is the best way to load the textures for each tile? Do I load an individual texture for each tile, i.e. when I create a tile, pass it a Texture2D which it can draw, loading the texture at the same time? This seems like a good way, but then I have to load each tile's texture individually, which seems wasteful.
The other option I can think of is to use a static texture in the tile struct and then simply load this texture as a tile atlas containing the different walls and spikes. That way I am only loading a single texture, and when drawing I just move a source rectangle to the area of the appropriate tile within the sprite sheet.
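Concretely, the atlas option just means computing a source rectangle from a tile index, something like this (tileSize, columns, atlasTexture and destination are made-up names):

Rectangle SourceFor(int tileId)
{
    int tileSize = 32, columns = 8; // made-up atlas layout
    return new Rectangle((tileId % columns) * tileSize,
                         (tileId / columns) * tileSize,
                         tileSize, tileSize);
}

// Then, when drawing, every tile comes from the same texture:
spriteBatch.Draw(atlasTexture, destination, SourceFor(tileId), Color.White);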
I am not sure which of these ways would be optimal from a performance perspective, or whether there is an alternative approach.
Thanks in advance.
The wonderful thing about the content pipeline is that when you do
Content.Load<Texture2D>("sometexture");
It doesn't load the Texture2D every time. The content pipeline is smart enough to load it once and send back the same Texture2D for that texture every time. It would actually be worse if you went with the static struct approach.
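You can check this yourself; loading the same asset twice hands back the very same instance (a quick sketch, with "tiles" as a made-up asset name):

Texture2D first = Content.Load<Texture2D>("tiles");
Texture2D second = Content.Load<Texture2D>("tiles");

// The ContentManager caches by asset name, so both variables
// refer to the same Texture2D instance.
System.Diagnostics.Debug.Assert(ReferenceEquals(first, second));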
