How to do texture edge padding for tiling correctly? - xna

My aim is to draw a set of textures (128x128 pixels) as (gap-less) tiles without filtering artifacts in XNA.
Currently, I use, for example, 25 x 15 fully opaque tiles (alpha is always 255) in x and y to create a background image in a game, or a similar number of semi-transparent tiles to create the game "terrain" (foreground). In both cases, the tiles are scaled and drawn at floating-point positions. As is well known, to avoid filtering artifacts (such as small but visible gaps, or unwanted color bleeding at the tile borders) one has to do "edge padding": adding an extra fringe of one pixel width that repeats the color of the adjacent edge pixels. Discussions about this issue can be found for example here. An example image of this issue from our game can be found below.
However, I do not really understand how to do this - technically, and specifically in XNA.
(1) When adding a fringe of one pixel width, my tiles would then be 130 x 130 and the overlapping fringes would create quite visible artifacts of their own.
(2) Alternatively, one could add the padding pixels but then not draw the full 130x130 pixel texture, only its "center" (without the fringe), e.g. by choosing the source rectangle of this texture to be (1,1,128,128). But aren't the padding pixels then simply ignored, or does the filtering hardware really use this information?
So basically, I wonder how this is done properly? :-)
Example image of filtering issue from game: Unwanted vertical gap in brown foreground tiles.
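Regarding option (2): the padding texels are not ignored. The source rectangle only restricts which texels the tile's UVs map to; when the bilinear filter samples near the edge of that rectangle it still blends in the neighbouring texels, which is exactly why the duplicated fringe removes the gaps. Below is a hedged sketch of one way to build the padded tiles at load time in XNA; the class and method names are my own, and this is just one possible CPU-side implementation.

    using System;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public static class TilePadding
    {
        // Copies a 128x128 tile into a 130x130 texture, repeating the edge pixels
        // into the one-pixel fringe so bilinear filtering never samples "foreign" texels.
        public static Texture2D CreatePadded(GraphicsDevice device, Texture2D tile)
        {
            int w = tile.Width, h = tile.Height;               // 128 x 128
            var src = new Color[w * h];
            tile.GetData(src);

            var dst = new Color[(w + 2) * (h + 2)];            // 130 x 130
            for (int y = 0; y < h + 2; y++)
            {
                int sy = Math.Min(Math.Max(y - 1, 0), h - 1);  // clamp = repeat edge pixel
                for (int x = 0; x < w + 2; x++)
                {
                    int sx = Math.Min(Math.Max(x - 1, 0), w - 1);
                    dst[y * (w + 2) + x] = src[sy * w + sx];
                }
            }

            var padded = new Texture2D(device, w + 2, h + 2);
            padded.SetData(dst);
            return padded;
        }
    }

    // Drawing then always uses only the centre of the padded texture, e.g.:
    // spriteBatch.Draw(paddedTile, destination, new Rectangle(1, 1, 128, 128), Color.White);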

Related

Find rectangle coordinates in a given image

I'm trying to blindly detect signals in a spectrum.
One way that came to mind is to detect rectangles in the waterfall (a 2D matrix that can be interpreted as an image).
Is there any fast way (on the order of 0.1 seconds) to find the center and width of all of the horizontal rectangles in an image? (The heights of the rectangles don't matter to me.)
An example image will be uploaded. (Note that I know all rectangles are horizontal.)
I would appreciate any other suggestions for this purpose.
For example, I want the algorithm to give me 9 centers and 9 coordinates for the above image.
Since the rectangles are aligned, you can do this quite easily and efficiently (this is not the case with unaligned rectangles, since they are not clearly separated). The idea is to first compute the average color of each row and of each column. You should get something like this:
Then you can subtract the background color (blue), compute the luminance, and apply a threshold. You can remove some artifacts with a median/blur filter beforehand.
Finally, you can scan the resulting 1D array of binary values to locate where each rectangle starts and stops. The center of each rectangle is ((x_start+x_end)/2, (y_start+y_end)/2).
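A minimal, framework-agnostic C# sketch of the column-average / threshold / run-scan idea described above. The image is assumed to be a 2D luminance array; backgroundLevel and threshold are hypothetical values you would tune for your own data, and the names are mine.

    using System;
    using System.Collections.Generic;

    static class RectangleScan
    {
        // Returns (start, end, center) for each horizontal run of "signal" columns.
        public static List<(int Start, int End, double Center)> FindRuns(
            float[,] luminance, float backgroundLevel, float threshold)
        {
            int height = luminance.GetLength(0);
            int width  = luminance.GetLength(1);

            // 1) Average each column, with the background level subtracted.
            var columnMean = new float[width];
            for (int x = 0; x < width; x++)
            {
                float sum = 0f;
                for (int y = 0; y < height; y++)
                    sum += luminance[y, x] - backgroundLevel;
                columnMean[x] = sum / height;
            }

            // 2) Threshold to a binary 1D profile, then 3) scan for the start/stop of each run.
            var runs = new List<(int Start, int End, double Center)>();
            int start = -1;
            for (int x = 0; x <= width; x++)
            {
                bool on = x < width && columnMean[x] > threshold;
                if (on && start < 0) start = x;                    // run begins
                else if (!on && start >= 0)                        // run ends
                {
                    int end = x - 1;
                    runs.Add((start, end, (start + end) / 2.0));   // center = (x_start + x_end) / 2
                    start = -1;
                }
            }
            return runs;
        }
    }

The width of each run is End - Start + 1; the same scan over the per-row averages gives the y extents, and a median filter over columnMean before thresholding would suppress isolated noisy columns, as suggested above.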

About creating a PDF file with Core Plot

Core Plot is quite powerful; I use it to create an ECG graph.
When I create a PDF file, I encounter some problems.
Each small grid cell is square in my app.
But when I use -dataForPDFRepresentationOfLayer() to write the PDF file, the small grid cells are not square.
The PDF file's "minorGridLine"s are in the wrong positions.
I set up the pixel dimensions of the plot area and the number of grid lines.
Each small grid cell is square in my app, but it isn't square in the PDF file.
How can I solve this problem?
Thanks,
Midas
You're seeing the effect of aligning the grid lines to pixel boundaries to get crisper edges on the lines. The upper image (the screenshot) looks like a 1x render with blurred minor grid lines and anti-aliasing on the data line. The bottom image (the PDF) has crisp line edges implying a higher resolution (2x or 3x) drawing canvas. When the resolution is high enough to render the line width with an integer number of pixels, Core Plot moves the lines to fall on the nearest pixel boundaries to eliminate the fuzzy edges caused by anti-aliasing.
Possible solutions are to ensure that the pixel dimensions of the plot area are an even multiple of the number of minor tick locations (accounting for the contentsScale of the graph), or to adjust the line width of the minor grid lines so that it's not possible to render them with an integer number of pixels. For example, use a line width of 0.4 instead of 0.5.
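As a rough, framework-agnostic illustration of the first suggestion, here is a small C# sketch (all the numbers are hypothetical) that checks whether the minor grid lines can fall on whole pixel boundaries for a given plot-area width and contentsScale:

    using System;

    class GridPixelCheck
    {
        static void Main()
        {
            // Hypothetical values -- substitute your own plot-area size, scale, and tick count.
            double plotAreaWidthPoints = 320.0; // plot area width in points
            double contentsScale       = 2.0;   // e.g. a 2x (Retina) drawing canvas
            int    minorIntervals      = 25;    // minor tick intervals across the width

            double widthInPixels     = plotAreaWidthPoints * contentsScale;
            double pixelsPerInterval = widthInPixels / minorIntervals;

            // If pixelsPerInterval is not a whole number, pixel-aligned grid lines get
            // nudged to the nearest pixel boundary and the cells stop being square.
            bool exact = Math.Abs(pixelsPerInterval - Math.Round(pixelsPerInterval)) < 1e-9;
            Console.WriteLine($"{widthInPixels} px / {minorIntervals} intervals = {pixelsPerInterval} px per cell (exact: {exact})");
        }
    }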

OpenCV warp perspective to align and equalize rectangle size

I'm trying to align two rectangles using the perspectiveTransform. In the image below there are two orange rectangles (I know their dimensions and locations), and I want to warp the perspective so that they are approximately the same size and in line (the green ones in the image). A perspective transform that, e.g., makes the small one equal in size to the big one doesn't really do the trick, as the size of the big one changes too. Any help much appreciated!

Scaling RenderTarget2D doesn't scale SourceRectangles

I'm developing a 2D match-3 game in XNA. The core logic and animations are done. I use a RenderTarget2D to draw the entire board. The board has 8 rows and 8 columns of 64x64 textures (the tiles), which can be clicked and moved. To capture mouse intersections, I use a SourceRectangle for each tile. Of course, the SourceRectangles have the same size as the textures - 64x64.
I would like to scale down the entire board, using the RenderTarget2D, to support different monitor resolutions and aspect ratios. First I draw all tiles into the RenderTarget2D. Then I scale down the RenderTarget2D by a float coefficient. Finally I draw the RenderTarget2D to the screen. As a result, the entire board is scaled down properly (all textures are scaled down from 64x64 to 50x50, for example), but the SourceRectangles are not scaled; they remain 64x64, and mouse intersections are not captured for the proper tiles.
Why doesn't scaling the RenderTarget2D handle this? How can I solve this problem?
You should approach this problem differently. Your source rectangles for textures are just that: source rectangles for drawing. Don't try to use them as button rectangles, or you will get into trouble like this.
Instead, use a different Rectangle hitboxRectangle, which will be the same size as your source rectangle initially, but will scale with your game window, and check intersections against it.
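A minimal sketch of that idea. Here boardScale and boardOrigin are hypothetical names for the scale factor and screen offset you already use when drawing the RenderTarget2D; the hitbox is rebuilt from them instead of reusing the 64x64 source rectangle.

    using Microsoft.Xna.Framework;

    public class Tile
    {
        public Rectangle SourceRectangle;   // 64x64, used only for drawing
        public Point BoardPosition;         // column/row of the tile on the board

        // Hitbox in screen space: the tile's board cell scaled and offset the same
        // way the render target is scaled and offset when it is drawn.
        public Rectangle GetHitbox(Vector2 boardOrigin, float boardScale)
        {
            int size = (int)(64 * boardScale);
            return new Rectangle(
                (int)(boardOrigin.X + BoardPosition.X * 64 * boardScale),
                (int)(boardOrigin.Y + BoardPosition.Y * 64 * boardScale),
                size,
                size);
        }

        public bool Contains(Point mouse, Vector2 boardOrigin, float boardScale)
        {
            return GetHitbox(boardOrigin, boardScale).Contains(mouse);
        }
    }

With this, changing the scale coefficient automatically moves and resizes the hitboxes, while the source rectangles stay 64x64 for drawing.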

A 2D grid of 100 x 100 squares in OpenGL ES 2.0

I'd like to display a 2D grid of 100 x 100 squares. Each square is 10 pixels wide and filled with a color. The color of any square may be updated at any time.
I'm new to OpenGL and wondered whether I need to define the vertices for every square in the grid, or is there another way? I want to use OpenGL directly rather than a framework like Cocos2D for this simple task.
You can probably get away with just rendering the positions of your squares as points with a size of 10. GL_POINTS are always a set number of pixels wide and high, so your squares will stay 10 pixels regardless. If you render the squares as quads, you will have to make sure they are the right distance from the camera to be 10 pixels wide and high (the aspect ratio may also affect this).
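For illustration, a small framework-agnostic C# sketch of the data you would build for that approach: one point per square, with an interleaved position/color array to upload as vertex attributes and draw as GL_POINTS (in ES 2.0 the point size is set by writing gl_PointSize = 10.0 in the vertex shader). The grid size, spacing, and vertex layout here are my own assumptions.

    using System;

    static class PointGrid
    {
        // Builds one point per square: x, y, r, g, b (5 floats per point).
        // Positions are in pixels here; in practice you would map them to clip
        // space in your vertex shader.
        public static float[] Build(int columns = 100, int rows = 100, float spacing = 10f)
        {
            var data = new float[columns * rows * 5];
            int i = 0;
            for (int y = 0; y < rows; y++)
            {
                for (int x = 0; x < columns; x++)
                {
                    data[i++] = x * spacing + spacing / 2f; // point center x (pixels)
                    data[i++] = y * spacing + spacing / 2f; // point center y (pixels)
                    data[i++] = 1f;                         // r (update per square as needed)
                    data[i++] = 1f;                         // g
                    data[i++] = 1f;                         // b
                }
            }
            return data;
        }
    }

Updating the color of a single square is then just overwriting the three floats starting at index (y * columns + x) * 5 + 2 and re-uploading that part of the vertex buffer.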
