glReadPixels on separate layers - iOS

I'll get straight to the point :)
From the above 480 x 320 diagram, I am thinking I can detect collisions at the pixel level, like in a worm game.
What I want to know is how to sample pixels on separate layers. As you can see in the diagram, as the worm is falling I want to sample only the black pixels with glReadPixels() to see whether the worm is standing on (colliding with) any terrain, but when I last tried it, glReadPixels() sampled all pixels on screen, with no notion of "layers".
The white pixels are the background and should not be part of the sampling.
Am I perhaps supposed to keep a black-and-white copy of my terrain in a separate buffer and call glReadPixels() on that separate buffer, so that the background image (white pixels) won't get sampled?
Until now I have been drawing my terrain in the same buffer/context where I draw my background image.
Any ideas?

What glReadPixels does is read back the currently bound buffer. Since that buffer is the output of all your composition, it will obviously contain all the data you wrote and has no notion of your logical arrangement into layers. You can try drawing your terrain into the stencil buffer and reading back only that, using GL_DEPTH_STENCIL as the format parameter.
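For illustration, here is a minimal sketch of the other route mentioned in the question: render a pass that contains only the terrain (background cleared to white), then read back just the small region under the worm. It uses OpenTK's C# GL bindings purely for readability; the same calls exist in the iOS C API. DrawTerrainOnly() and the region size are hypothetical placeholders.

```csharp
using OpenTK.Graphics.OpenGL;

static class TerrainCollision
{
    public static bool WormTouchesTerrain(int wormX, int wormY, int w, int h)
    {
        GL.ClearColor(1f, 1f, 1f, 1f);            // white = background
        GL.Clear(ClearBufferMask.ColorBufferBit);
        DrawTerrainOnly();                         // hypothetical terrain-only pass

        // Read back only the small region under the worm (glReadPixels'
        // origin is the lower-left corner of the buffer).
        byte[] px = new byte[w * h * 4];
        GL.ReadPixels(wormX, wormY, w, h, PixelFormat.Rgba, PixelType.UnsignedByte, px);

        for (int i = 0; i < px.Length; i += 4)
            if (px[i] < 16 && px[i + 1] < 16 && px[i + 2] < 16)   // black-ish = terrain
                return true;
        return false;
    }

    static void DrawTerrainOnly() { /* draw only the terrain geometry, no background */ }
}
```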

Related

How to do texture edge padding for tiling correctly?

My aim is to draw a set of textures (128x128 pixels) as (gap-less) tiles without filtering artifacts in XNA.
Currently, I use for example 25 x 15 fully opaque tiles (alpha is always 255) in x-y to create a background image in a game, or a similar number of semi-transparent tiles to create the game "terrain" (foreground). In both cases, the tiles are scaled and drawn using floating-point positions. As is well known, to avoid filtering artifacts (like small but visible gaps, or unwanted color overlaps at the tile borders) one has to do "edge padding", which is described as adding an additional fringe one pixel wide and using the color of the adjacent pixels for the added pixels. Discussions about this issue can be found for example here. An example image of this issue from our game can be found below.
However, I do not really understand how to do this - technically, and specifically in XNA.
(1) When adding a fringe of one pixel width, my tiles would then be 129 x 129 and the overlapping fringes would create quite visible artifacts of their own.
(2) Alternatively, one could add the padding pixels but then not draw the full 129x129 pixel texture, only its "center" (without the fringe), e.g. by choosing the source rectangle of this texture to be (1,1,128,128). But are the padding pixels then simply ignored, or does the filtering hardware really use this information?
So basically, I wonder how this is done properly? :-)
Example image of filtering issue from game: Unwanted vertical gap in brown foreground tiles.
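For reference, a rough sketch of what option (2) above looks like in XNA, assuming the texture has been authored with a one-pixel duplicated fringe; paddedTile, tileSize and the 25 x 15 loop bounds are placeholders for whatever the game already has:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

void DrawTiles(SpriteBatch spriteBatch, Texture2D paddedTile, float tileSize)
{
    // Address only the 128x128 interior; the fringe stays in the texture, so
    // bilinear filtering at tile edges samples the duplicated border pixels
    // instead of a neighbouring tile or transparent texels.
    Rectangle source = new Rectangle(1, 1, 128, 128);

    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                      SamplerState.LinearClamp, null, null);
    for (int y = 0; y < 15; y++)
        for (int x = 0; x < 25; x++)
        {
            Vector2 position = new Vector2(x * tileSize, y * tileSize);
            spriteBatch.Draw(paddedTile, position, source, Color.White,
                             0f, Vector2.Zero, tileSize / 128f, SpriteEffects.None, 0f);
        }
    spriteBatch.End();
}
```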

Drawing Curves using XNA

I've been making progress in a fan-replicated game I'm coding, but I'm stuck with this problem.
Right now I'm drawing a texture pixel by pixel along the curve path, but this cuts the frame rate from 4000 FPS to 50 on long curves.
I need to store per-pixel Vector2 + length data anyway, so I can produce constant-speed movement along the curve, and I loop through that data to draw the curve as well.
The curves I need to be able to draw are Bézier, circular and Catmull-Rom.
Any ideas of how to make it more efficient?
Maybe I have misunderstood the question but I did this once:
Create the curve and sample x points on it. (Red dots)
Create a mesh from it by calculating the cross vector of each point. (Green lines)
Build a quad between all of these. So basically 5 of them in my picture.
Set the U coordinate to be on the perpendicular plane and let the V coordinate follow the curve length, so 0 at the start and 1 at the end of it.
You can of course scale V if you want your texture to repeat.
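A sketch of that approach in XNA terms (the control points, sample count and strip width are placeholders, and the tangent is approximated with a small finite difference):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

VertexPositionColorTexture[] BuildCurveStrip(Vector2 p0, Vector2 p1, Vector2 p2, Vector2 p3,
                                             int samples, float halfWidth)
{
    var vertices = new VertexPositionColorTexture[(samples + 1) * 2];
    for (int i = 0; i <= samples; i++)
    {
        float t = i / (float)samples;
        Vector2 pos = Vector2.CatmullRom(p0, p1, p2, p3, t);

        // Approximate the tangent with a small step, then rotate 90 degrees
        // to get the perpendicular (the "cross vector" / green lines above).
        Vector2 ahead  = Vector2.CatmullRom(p0, p1, p2, p3, MathHelper.Min(t + 0.01f, 1f));
        Vector2 behind = Vector2.CatmullRom(p0, p1, p2, p3, MathHelper.Max(t - 0.01f, 0f));
        Vector2 tangent = ahead - behind;
        if (tangent != Vector2.Zero) tangent.Normalize();
        Vector2 normal = new Vector2(-tangent.Y, tangent.X);

        // U spans the width of the strip, V follows the curve length (0..1).
        vertices[i * 2 + 0] = new VertexPositionColorTexture(
            new Vector3(pos + normal * halfWidth, 0f), Color.White, new Vector2(0f, t));
        vertices[i * 2 + 1] = new VertexPositionColorTexture(
            new Vector3(pos - normal * halfWidth, 0f), Color.White, new Vector2(1f, t));
    }
    return vertices;
}
```

The result can then be drawn in one call with GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, vertices, 0, samples * 2), and V can be multiplied up if the texture should repeat along the curve.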
Any ideas of how to make it more efficient?
Assuming the texture needs to be dynamic, draw the texture on the GPU-side using a shader. Drawing it on the CPU-side is not only slow, but bogs down both the CPU and GPU when you need to send it back to the GPU every frame. Much better to draw it GPU-side.
I need to store pixel by pixel Vector2 + length data anyway
The shader can store additional information in the texture: e.g. even though you may allocate an RGBA texture, it doesn't have to store color information, since it is your shaders that will interpret the data.
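A sketch of what the GPU-side route can look like from the C#/XNA side, assuming a hypothetical custom Effect whose pixel shader evaluates the curve; the render target and the 1x1 white texture are placeholders:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

void RenderCurveToTexture(GraphicsDevice device, SpriteBatch spriteBatch,
                          Effect curveEffect, RenderTarget2D target, Texture2D white)
{
    device.SetRenderTarget(target);
    device.Clear(Color.Transparent);

    // The pixel shader does the heavy lifting; the sprite is just a full-target quad.
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                      null, null, null, curveEffect);
    spriteBatch.Draw(white, new Rectangle(0, 0, target.Width, target.Height), Color.White);
    spriteBatch.End();

    device.SetRenderTarget(null);   // the curve now lives in 'target' as a texture
}
```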

Considering the Stencil in depth

I'm using Orthographic projection to draw my objects.
Each object's items are added to different buffers and drawn in several passes.
Let's say that each object has an outline square and a fill for the square (in a different color).
So first I draw all the fills, and then the outlines.
I'm using the depth buffer to make sure that the outlines will not be drawn over all the fills, as shown in the picture.
Now I'm facing a problem: each object contains another drawn item on top of it (such as text or points) which can extend beyond its square. So I'm using the stencil buffer to clip this additional drawing to the square. However, when doing this the depth buffer is not taken into account.
This means that one text item can be drawn over another object's square, as shown below.
Is there any way/trick to make this work?
You should be able to set the stencil buffer to a different value for each of the squares (provided there are at most 255 squares, as you won't be able to get more than an 8-bit stencil buffer). Configure the stencil operation to KEEP for pixels that fail the depth test, so that any stencil values written by quads that are further in front but were drawn earlier are retained.
This will allow clipping each text individually.
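As a sketch, the state described above in XNA's DepthStencilState terms (assuming an XNA-style API as used elsewhere on this page; the equivalent glStencilFunc/glStencilOp setup is a direct translation, and squareId is a placeholder for a per-square index in the range 1..255):

```csharp
using Microsoft.Xna.Framework.Graphics;

// State used while drawing square number squareId (fill + outline).
DepthStencilState MakeSquareState(int squareId)
{
    return new DepthStencilState
    {
        StencilEnable = true,
        StencilFunction = CompareFunction.Always,       // always attempt to tag the pixel
        ReferenceStencil = squareId,
        StencilPass = StencilOperation.Replace,         // depth test passed: write this square's id
        StencilDepthBufferFail = StencilOperation.Keep, // depth test failed: keep the nearer square's id
        StencilFail = StencilOperation.Keep
    };
}

// State used while drawing the text/points that belong to square squareId.
DepthStencilState MakeTextState(int squareId)
{
    return new DepthStencilState
    {
        StencilEnable = true,
        StencilFunction = CompareFunction.Equal,        // only draw where this square tagged pixels
        ReferenceStencil = squareId,
        StencilPass = StencilOperation.Keep,
        StencilFail = StencilOperation.Keep
    };
}
```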
Another way is to use only the depth buffer and pass the pixel extents of the current quad into the text pixel shader, where you can discard any extra pixels. This requires fewer state changes.

Scaling RenderTarget2D doesn't scale SourceRectangles

I develop a 2D match3 game in XNA. The core logic and animations are done. I use RenderTarget2D to draw the entire board. The board has 8 rows and 8 columns with 64x64 textures (the tiles), which could be clicked and moved. To capture the mouse intersection, I use SourceRectangles for each tile. Of course the SourceRectangles have same size as textures - 64x64.
I would like to scale down the entire board, using the RenderTarget2D, to support different monitor resolutions and aspect ratios. First I draw all tiles into the RenderTarget2D. Then I scale down the RenderTarget2D by a float coefficient. Finally I draw the RenderTarget2D on the screen. As a result the entire board is scaled down properly (all textures are scaled down from 64x64 to 50x50, for example), but the SourceRectangles are not scaled; they remain 64x64 and mouse intersections are not captured for the proper tiles.
Why doesn't scaling the RenderTarget2D handle this? How can I solve this problem?
You should approach this problem differently. Your source rectangles for textures are just that: source rectangles. Don't try to use them as button rectangles, or you will get into trouble like this.
Instead, use a different Rectangle hitboxRectangle, which will be the same size as your source rectangle initially, but will scale with your game window, and check intersections against it.
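For example, a minimal sketch of such a hitbox, assuming a uniform boardScale and a boardOffset for where the render target is drawn on screen (both placeholders):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

Rectangle GetHitboxRectangle(int column, int row, float boardScale, Point boardOffset)
{
    const int tileSize = 64;  // unscaled tile size used when drawing into the render target
    return new Rectangle(
        boardOffset.X + (int)(column * tileSize * boardScale),
        boardOffset.Y + (int)(row * tileSize * boardScale),
        (int)(tileSize * boardScale),
        (int)(tileSize * boardScale));
}

bool IsTileClicked(int column, int row, float boardScale, Point boardOffset)
{
    MouseState mouse = Mouse.GetState();
    return mouse.LeftButton == ButtonState.Pressed &&
           GetHitboxRectangle(column, row, boardScale, boardOffset)
               .Contains(mouse.X, mouse.Y);
}
```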

XNA Drawing a series of squares

I am trying to draw a series of squares in XNA. I have been looking at all these articles about TriangleStrips and DynamicVertexBuffers, but I'm not sure where to begin.
Current step
I am able to draw 1 square using VertexPositionColor, TriangleList and indices. Now I want to draw a series of squares with varying colors.
End Goal
Something to keep in mind is the number of such squares that I would like to be able to draw, eventually. If we assume a 5px width, on a 1920x1080 screen, we can calculate the number of squares to be (1920 * 1080) / 25 = 82944.
Any pointers on how to accomplish this would be great!
Generally, you can draw more squares in the same way you draw the first one. However, there will be a significant loss in performance.
Instead, you can add all triangles to one vertex buffer / index buffer. You are already able to draw two triangles as a triangle list, so you should be able to easily adjust this routine to draw more than two triangles: just add the corresponding vertices and indices to the buffers and modify the draw call.
If you need vertices at the same position with different colors, you need to add two vertices to the buffer.
This way, the performance loss is very small, because you draw everything with only one draw call. Although this number of triangles should be no problem for most graphics cards, some smaller or older ones can get into trouble. If so, you should consider changing your drawing strategy; maybe it is not even necessary to draw that many triangles. But you can think about that if the resulting performance is too low.
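A sketch of that idea, assuming a BasicEffect pass (with an orthographic projection) has already been applied and that the square positions and colors come from your game state:

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

void DrawSquares(GraphicsDevice device, Rectangle[] squares, Color[] colors)
{
    var vertices = new VertexPositionColor[squares.Length * 4];
    var indices = new short[squares.Length * 6];

    for (int i = 0; i < squares.Length; i++)
    {
        Rectangle r = squares[i];
        int v = i * 4;  // four vertices per square (no sharing, so colors stay independent)
        vertices[v + 0] = new VertexPositionColor(new Vector3(r.Left,  r.Top,    0), colors[i]);
        vertices[v + 1] = new VertexPositionColor(new Vector3(r.Right, r.Top,    0), colors[i]);
        vertices[v + 2] = new VertexPositionColor(new Vector3(r.Right, r.Bottom, 0), colors[i]);
        vertices[v + 3] = new VertexPositionColor(new Vector3(r.Left,  r.Bottom, 0), colors[i]);

        int n = i * 6;  // two triangles per square
        indices[n + 0] = (short)(v + 0);
        indices[n + 1] = (short)(v + 1);
        indices[n + 2] = (short)(v + 2);
        indices[n + 3] = (short)(v + 0);
        indices[n + 4] = (short)(v + 2);
        indices[n + 5] = (short)(v + 3);
    }

    device.DrawUserIndexedPrimitives(PrimitiveType.TriangleList,
        vertices, 0, vertices.Length, indices, 0, squares.Length * 2);
}
```

Note that 82944 squares would exceed the 16-bit index range, so either use the int[] index overload of DrawUserIndexedPrimitives or split the squares into a few batches.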
If you don't care about 3D, just 2D, you can use SpriteBatch to draw squares/rectangles on the screen. This handles batching and all the vertex/index buffer management for you.
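A minimal sketch of the SpriteBatch route, using the common trick of a reusable 1x1 white texture that is tinted and stretched per square (the pixel texture is something you create once at load time):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

Texture2D CreatePixel(GraphicsDevice device)
{
    var pixel = new Texture2D(device, 1, 1);
    pixel.SetData(new[] { Color.White });
    return pixel;
}

void DrawSquaresWithSpriteBatch(SpriteBatch spriteBatch, Texture2D pixel,
                                Rectangle[] squares, Color[] colors)
{
    spriteBatch.Begin();
    for (int i = 0; i < squares.Length; i++)
        spriteBatch.Draw(pixel, squares[i], colors[i]);   // destination rectangle + tint
    spriteBatch.End();
}
```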
