A 2D grid of 100 x 100 squares in OpenGL ES 2.0 - ios

I'd like to display a 2D grid of 100 x 100 squares. The size of each square is 10 pixels wide and filled with color. The color of any square may be updated at any time.
I'm new to OpenGL and wondered if I need to define the vertices for every square in the grid or is there another way? I want to use OpenGL directly rather than a framework like Cocos2D for this simple task.

You can probably get away with rendering the positions of your squares as points with a point size of 10. GL_POINTS are always a fixed number of pixels wide and high, so your squares will stay 10 pixels regardless of distance; in ES 2.0 you set the size by writing gl_PointSize in the vertex shader. If you render the squares as quads instead, you would have to make sure they are the right distance from the camera to come out 10 pixels wide and high (and the aspect ratio may affect it too).
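As a rough sketch of the CPU side (plain Python, all names illustrative), the per-point data for such a grid is small: one position and one color per square, with the 10-pixel size coming from `gl_PointSize = 10.0` in the ES 2.0 vertex shader:

```python
# Sketch: build per-point data for a 100 x 100 grid of GL_POINTS.
# Each point carries an (x, y) position and an RGBA color; the point
# size itself would be set in the vertex shader via gl_PointSize = 10.0.

GRID = 100      # squares per side (from the question)
SIZE = 10.0     # square/point size in pixels

def build_grid(grid=GRID, size=SIZE):
    """Return parallel lists of positions and colors, one per square."""
    positions = []
    colors = []
    for row in range(grid):
        for col in range(grid):
            # Center each 10x10 point on its cell.
            positions.append((col * size + size / 2, row * size + size / 2))
            colors.append((1.0, 1.0, 1.0, 1.0))  # start white; update later
    return positions, colors

positions, colors = build_grid()

def set_square_color(colors, grid, row, col, rgba):
    """Update one square's color in the CPU-side array."""
    colors[row * grid + col] = rgba
```

Updating a square's color then only means rewriting one entry and re-uploading the color buffer (e.g. with glBufferSubData), rather than rebuilding any geometry.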

Related

How to do texture edge padding for tiling correctly?

My aim is to draw a set of textures (128x128 pixels) as (gap-less) tiles, without filtering artifacts, in XNA.
Currently I use, for example, 25 x 15 fully opaque tiles (alpha is always 255) in x-y to create a background image in a game, or a similar number of semi-transparent tiles to create the game "terrain" (foreground). In both cases the tiles are scaled and drawn using floating-point positions.
As is known, to avoid filtering artifacts (like small but visible gaps, or unwanted color overlaps at the tile borders) one has to do "edge padding", which is described as adding an additional fringe one pixel wide and using the color of the adjacent pixels for the added pixels. Discussions about this issue can be found for example here. An example image of this issue from our game can be found below.
However, I do not really understand how to do this - technically, and specifically in XNA.
(1) When adding a fringe of one pixel width, my tiles would then be 129 x 129 and the overlapping fringes would create quite visible artifacts of their own.
(2) Alternatively, one could add the padding pixels but then not draw the full padded texture, only its "center" (without the fringe), e.g. by choosing the source rectangle of the texture to be (1,1,128,128). But are the padding pixels then simply ignored, or does the filtering hardware really use this information?
So basically, I wonder how this is done properly? :-)
Example image of filtering issue from game: Unwanted vertical gap in brown foreground tiles.
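For what it's worth, the padding pixels in option (2) are not ignored: bilinear filtering reads them whenever a sample lands near the tile border, which is exactly why the technique works. A minimal sketch of the padding step (NumPy standing in for whatever tool produces the tiles; padding all four sides gives a 130x130 texture rather than 129x129):

```python
import numpy as np

def pad_tile(tile):
    """Add a 1-pixel replicated fringe around an H x W x C tile."""
    return np.pad(tile, ((1, 1), (1, 1), (0, 0)), mode="edge")

# A stand-in 128x128 RGBA tile.
tile = np.random.randint(0, 256, (128, 128, 4), dtype=np.uint8)
padded = pad_tile(tile)          # 130 x 130 x 4

# Draw with a source rectangle of (1, 1, 128, 128): the interior is the
# original tile, but the sampler can now read the fringe when filtering
# near the edges instead of bleeding in a neighboring tile's colors.
interior = padded[1:129, 1:129]
```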

Drawing Curves using XNA

I've been making progress in a fan-replicated game I'm coding, but I'm stuck with this problem.
Right now I'm drawing a texture pixel by pixel on the curve path, but this cuts down frames per second from 4000 to 50 on curves with long lengths.
I need to store pixel by pixel Vector2 + length data anyway, so I can produce constant-speed movement along it, looping through it to draw the curve as well.
Curves I need to be able to draw are Bezier, Circular and Catmull.
Any ideas of how to make it more efficient?
Maybe I have misunderstood the question but I did this once:
Create the curve and sample x points on it. (Red dots)
Create a mesh from it by calculating the perpendicular (cross) vector at each point. (Green lines)
Build a quad between each pair of neighboring points. So basically 5 of them in my picture.
Set the U coordinate to run across the perpendicular and the V coordinate to follow the curve length. So 0 at the start and 1 at the end of it.
You can of course scale V if you want your texture to repeat.
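The steps above can be sketched roughly like this (plain Python rather than XNA; the input is assumed to be a list of already-sampled curve points, and all names are illustrative):

```python
import math

def build_ribbon(points, half_width):
    """Turn sampled curve points into a quad-strip mesh with UVs.

    points: list of (x, y) samples along the curve (the red dots).
    Returns (vertices, uvs): two vertices per sample, offset left and
    right of the curve along its normal (the green lines), with V
    proportional to arc length (0 at the start, 1 at the end).
    """
    # Cumulative arc length, used to normalize V to [0, 1].
    lengths = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        lengths.append(lengths[-1] + math.hypot(x1 - x0, y1 - y0))
    total = lengths[-1]

    vertices, uvs = [], []
    for i, (x, y) in enumerate(points):
        # Tangent from neighboring samples, rotated 90 degrees.
        x0, y0 = points[max(i - 1, 0)]
        x1, y1 = points[min(i + 1, len(points) - 1)]
        t = math.hypot(x1 - x0, y1 - y0) or 1.0
        nx, ny = -(y1 - y0) / t, (x1 - x0) / t   # unit normal
        vertices.append((x - nx * half_width, y - ny * half_width))
        vertices.append((x + nx * half_width, y + ny * half_width))
        v = lengths[i] / total
        uvs.append((0.0, v))
        uvs.append((1.0, v))
    return vertices, uvs
```

Consecutive vertex pairs then form the quads: n samples give n-1 quads, matching the 5 quads from 6 samples in the picture.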
Any ideas of how to make it more efficient?
Assuming the texture needs to be dynamic, draw the texture on the GPU-side using a shader. Drawing it on the CPU-side is not only slow, but bogs down both the CPU and GPU when you need to send it back to the GPU every frame. Much better to draw it GPU-side.
I need to store pixel by pixel Vector2 + length data anyway
The shader can store additional information in the texture. E.g. even though you may allocate an RGBA texture, it doesn't have to store color information, since it is your shaders that will interpret the data.
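Whichever side does the drawing, the "pixel by pixel Vector2 + length" data mentioned in the question is essentially an arc-length table, and that table also gives constant-speed movement for free. A hedged sketch for a single Catmull-Rom segment (plain Python; the function names and sample count are mine):

```python
import bisect
import math

def catmull_rom(p0, p1, p2, p3, t):
    """Evaluate a Catmull-Rom segment between p1 and p2 at t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * (2 * b + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def arc_length_table(p0, p1, p2, p3, samples=64):
    """Sample the segment and record cumulative length at each sample."""
    pts = [catmull_rom(p0, p1, p2, p3, i / samples) for i in range(samples + 1)]
    lengths = [0.0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        lengths.append(lengths[-1] + math.hypot(x1 - x0, y1 - y0))
    return pts, lengths

def point_at_distance(pts, lengths, s):
    """Constant-speed lookup: the point s units along the sampled curve."""
    i = min(max(bisect.bisect_left(lengths, s), 1), len(lengths) - 1)
    seg = lengths[i] - lengths[i - 1] or 1.0
    f = (s - lengths[i - 1]) / seg
    (x0, y0), (x1, y1) = pts[i - 1], pts[i]
    # Linear interpolation between the two bracketing samples.
    return (x0 + (x1 - x0) * f, y0 + (y1 - y0) * f)
```

A coarse table (dozens of samples, not one per pixel) is usually enough for both drawing and movement, which avoids the per-pixel cost described in the question.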

how to find orientation of a picture with delphi

I need to find the orientation of corn pictures (examples below); they are tilted at different angles to the right or left. I need to turn them upright (at a 90-degree angle with the horizontal, so that they look like a water drop).
Is there any way I can do this easily?
As a starting point, find the image moments (and the Hu moments for complex forms like the pear). From the link:
Information about image orientation can be derived by first using the
second order central moments to construct a covariance matrix.
I suspect that using an image-processing library like OpenCV would give more reliable results in the general case.
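That covariance/second-order-moment computation is short enough to sketch directly (plain Python rather than Delphi or OpenCV; the blob and all names here are synthetic and illustrative):

```python
import math

def orientation(pixels):
    """Orientation angle (radians) of a blob from central moments.

    pixels: iterable of (x, y) foreground coordinates.
    Returns the angle of the major axis relative to the x axis,
    computed as 0.5 * atan2(2*mu11, mu20 - mu02).
    """
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n        # centroid
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - cy) ** 2 for _, y in pixels) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels) / n
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)

# A synthetic blob elongated along the 45-degree diagonal:
blob = [(i, i + j) for i in range(50) for j in (-1, 0, 1)]
angle = math.degrees(orientation(blob))
```

The returned angle is that of the blob's major axis; rotating the image by its negative (plus a multiple of 90 degrees as needed) brings it upright.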
From the OP I got the impression you are a rookie at this, so I'll stick to something simple:
compute the bounding box of the image
Simple enough: go through all pixels and remember the min and max x,y coordinates of non-background pixels.
compute critical dimensions
Just cast a few lines through the bounding box, computing the red points' positions. For the start points I chose 25%, 50% and 75% of the height. First start from the left and stop at the first non-background pixel; then start from the right and stop at the first non-background pixel.
axis aligned position
Start rotating the image in small steps and remember/stop at the position where the red dots are symmetric, i.e. almost the same distance from the left and from the right. The bounding box also has maximal height and minimal width in the axis-aligned position, so you could exploit that instead ...
determine the position
You get 4 possible orientations. If I call the distances l0,l1,l2 and r0,r1,r2, where
l means from the left, r means from the right,
0 is the upper (bluish) line, 1 the middle, 2 the bottom,
then your wanted position is when (l0==r0)>=(l1==r1)>=(l2==r2) and the bounding box is bigger along the y axis than along the x axis. So rotate by 90 degrees until a match is found, or determine the orientation directly from the distances and rotate just once ...
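The first two steps might look like this (Python rather than C++/Pascal for brevity; `image` is a 2D array of pixel values and the function names are mine):

```python
def bounding_box(image, background=0):
    """Return (x0, y0, x1, y1) of non-background pixels; image[y][x]."""
    coords = [(x, y) for y, row in enumerate(image)
              for x, p in enumerate(row) if p != background]
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return min(xs), min(ys), max(xs), max(ys)

def edge_distances(image, background=0, fractions=(0.25, 0.5, 0.75)):
    """Cast horizontal lines at height fractions of the bounding box and
    return (left, right) distances from the box edges to the first
    non-background pixel on each line (the red points)."""
    x0, y0, x1, y1 = bounding_box(image, background)
    result = []
    for f in fractions:
        row = image[y0 + int((y1 - y0) * f)]
        left = next(x for x in range(x0, x1 + 1) if row[x] != background) - x0
        right = x1 - next(x for x in range(x1, x0 - 1, -1) if row[x] != background)
        result.append((left, right))
    return result

# A symmetric "upright" blob: every scan line hits it at the same
# distance from the left and from the right.
image = [[0, 0, 1, 0, 0],
         [0, 1, 1, 1, 0],
         [1, 1, 1, 1, 1],
         [0, 1, 1, 1, 0],
         [0, 0, 1, 0, 0]]
```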
[Notes]
You will need access to the pixels of the image, so I strongly recommend using Graphics::TBitmap from the VCL. Look here: gfx in C++, especially the section on GDI Bitmap; this answer about finding the horizon on a high-altitude photo might also help a bit.
I use C++ and the VCL, so you will have to translate to Pascal, but the VCL stuff is the same ...

Scaling RenderTarget2D doesn't scale SourceRectangles

I develop a 2D match-3 game in XNA. The core logic and animations are done. I use a RenderTarget2D to draw the entire board. The board has 8 rows and 8 columns with 64x64 textures (the tiles), which can be clicked and moved. To capture mouse intersections, I use SourceRectangles for each tile. Of course the SourceRectangles have the same size as the textures - 64x64.
I would like to scale down the entire board, using the RenderTarget2D, to support different monitor resolutions and aspect ratios. First I draw all tiles into the RenderTarget2D. Then I scale the RenderTarget2D down by a float coefficient. Finally I draw the RenderTarget2D to the screen. As a result the entire board is scaled down properly (all textures go from 64x64 to 50x50, for example), but the SourceRectangles are not scaled; they remain 64x64, and mouse intersections are not captured for the proper tiles.
Why doesn't scaling the RenderTarget2D handle this? How can I solve this problem?
You should approach this problem differently. Your source rectangles for textures are just that — don't try to use them as button rectangles, or you will get in trouble like this.
Instead, use a different Rectangle hitboxRectangle, which will be the same size as your source rectangle initially, but will scale with your game window, and check intersections against it.
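A minimal sketch of the idea (plain Python rather than XNA, with rectangles as (x, y, w, h) tuples; the names are illustrative, and XNA's Rectangle is integer-based, so you would round in practice):

```python
def scale_rect(rect, scale):
    """Scale an (x, y, w, h) rectangle by a float coefficient."""
    x, y, w, h = rect
    return (x * scale, y * scale, w * scale, h * scale)

def contains(rect, px, py):
    """Point-in-rectangle test against the (scaled) hitbox."""
    x, y, w, h = rect
    return x <= px < x + w and y <= py < y + h

# The source rectangle keeps addressing the 64x64 texture; only the
# hitbox copy is scaled along with the render target (64 -> 50 here).
source = (64, 0, 64, 64)              # second tile in the top row
hitbox = scale_rect(source, 50 / 64)  # board drawn at 50/64 scale
```

Mouse coordinates are tested against `hitbox`, while `source` is still passed to the draw call unchanged.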

How to deal with texture distortion caused by "squaring" textures, and the interactions with mipmapping?

Suppose I have a texture which is naturally not square (for example, a photographic texture of something with a 4:1 aspect ratio). And suppose that I want to use PVRTC compression to display this texture on an iOS device, which requires that the texture be square. If I scale up the texture so that it is square during compression, the result is a very blurry image when the texture is viewed from a distance.
I believe this is caused by mipmapping. Since the mipmap filter sees the new, larger stretched dimension, it uses that to choose a low mip level, which is actually not correct, since those pixels were just stretched to that size. If it looked at the other dimension, it would choose a higher-resolution mip level.
This theory is confirmed (somewhat) by the observation that if I leave the texture in a format that doesn't have to be square, the mipmap versions look just dandy.
There is a LOD Bias parameter, but the docs say that is applied to both dimensions. It seems like what is called for is a way to bias the LOD but only in one dimension (that is, to bias it toward more resolution in the dimension of the texture which was scaled up).
Other than chopping up the geometry to allow the use of square subsets of the original texture (which is infeasible, given our production pipeline), does anyone have any clever hacks they've used to deal with this issue?
It seems to me that you have a few options, depending on what you can do with, say, the vertex UVs.
[Hmm Just realised that in the following I'm assuming that the V coordinates run from the top to the bottom... you'll need to allow for me being old school :-) ]
The first thing that comes to mind is to take your 4N*N (X*Y) source texture and repeat it 4x vertically to give a 4N*4N texture, and then adjust the V coordinates on the model to be 1/4 of their current values. This won't save you much in terms of memory (since it effectively means a 4bpp PVRTC becomes 4x larger) but it will still save bandwidth and cache space, since the other parts of the texture won't be accessed. MIP mapping will also work all the way down to 1x1 textures.
Alternatively, if you want to save a bit of space and you have a pair of 4N*N textures, you could try packing them together into a "sort of" 4N*4N atlas. Put the first texture in the top N rows, then follow it with its own top N/2 rows. Then pack the bottom N/2 rows of the 2nd texture, followed by the second texture itself, and then its top N/2 rows. Finally, add the bottom N/2 rows of the first texture. For the UVs that access the first texture, do the same divide-by-4 for the V parameter. For the second texture, you'll need to divide by 4 and add 0.5.
This should work fine until the MIP map level is so small that the two textures are being blended together... but I doubt that will really be an issue.
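The packing above can be sanity-checked with a short sketch (NumPy, arrays indexed rows x columns; N and the function names are mine):

```python
import numpy as np

def pack_atlas(tex1, tex2):
    """Pack two N x 4N (rows x columns) textures into a 4N x 4N atlas
    with wrap-around padding between them, as described above."""
    h = tex1.shape[0] // 2
    return np.concatenate([
        tex1,        # rows 0..N      -> first texture, V in [0, 1/4)
        tex1[:h],    # its own top rows pad the region below it
        tex2[-h:],   # bottom rows of the 2nd texture pad the region above it
        tex2,        # rows 2N..3N    -> second texture, V in [1/2, 3/4)
        tex2[:h],    # its own top rows
        tex1[-h:],   # bottom rows of the first texture close the wrap
    ])

def remap_v(v, which):
    """Model V -> atlas V: divide by 4; add 0.5 for the second texture."""
    return v / 4 + (0.5 if which == 1 else 0.0)

N = 4
tex1 = np.arange(N * 4 * N).reshape(N, 4 * N)
tex2 = tex1 + 1000
atlas = pack_atlas(tex1, tex2)
```

Each texture ends up surrounded by its own wrapped-around rows, so V-repeat filtering and the upper mip levels see the right neighbors.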
