How to eliminate the boundaries between triangles?
I'm implementing export-to-PDF in my triangle drawing app. The image above shows what the PDF output looks like: there are white boundaries everywhere, less than one pixel wide.
The triangles can have any color.
I draw each triangle like this:
CGContextBeginPath(context);
CGContextMoveToPoint(context, x0, y0);
CGContextAddLineToPoint(context, x1, y1);
CGContextAddLineToPoint(context, x2, y2);
CGContextClosePath(context);
CGContextFillPath(context);
It's important that the black and the white triangles stay the same size.
Approaches
Approach 1 - Stroke
Draw a 1 pixel thick stroke around all triangles.
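Something like this is what I have in mind (a sketch; it assumes the stroke color has already been set to match each triangle's fill color):
// Fill and stroke in one pass so the 1 px stroke covers the hairline gap.
CGContextSetLineWidth(context, 1.0);
CGContextSetLineJoin(context, kCGLineJoinMiter);
CGContextBeginPath(context);
CGContextMoveToPoint(context, x0, y0);
CGContextAddLineToPoint(context, x1, y1);
CGContextAddLineToPoint(context, x2, y2);
CGContextClosePath(context);
CGContextDrawPath(context, kCGPathFillStroke);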
Approach 2 - Extrude
Extrude all triangles by 2 pixels so that neighboring triangles overlap.
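Roughly like this (a sketch; 'extrude' is a hypothetical helper of mine, not a CoreGraphics function):
// Push each vertex outward from the triangle's centroid so that
// neighboring triangles overlap slightly.
static CGPoint extrude(CGPoint p, CGPoint centroid, CGFloat amount) {
    CGFloat dx = p.x - centroid.x;
    CGFloat dy = p.y - centroid.y;
    CGFloat len = sqrt(dx * dx + dy * dy);
    if (len == 0) return p;
    return CGPointMake(p.x + amount * dx / len, p.y + amount * dy / len);
}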
Approach 3 - Combine
Combine touching triangles into a single polygon.
Approach 4 - PDF overlap settings
Perhaps PDF has settings for eliminating boundaries. Dunno.
Approach 5 - Post processing
Create a filter that detects boundary pixels and eliminates them. This will not work for me, since the output needs to be saved as a PDF; shader code is not supported in PDF on iOS, AFAIK.
Approach X - A smarter way
Is there a better way of snapping triangles together?
Are your coordinates (x0, y0, x1, y1, etc.) at integer point values? It's common for people to make that mistake, because they're used to setting views' frames on whole-point boundaries. CoreGraphics draws one-point lines centered on the coordinates you provide. I suspect you can eliminate your artifacts by adjusting your coordinates by 0.5 points in all cases:
CGContextBeginPath(context);
CGContextMoveToPoint(context, x0 + 0.5, y0 + 0.5);
CGContextAddLineToPoint(context, x1 + 0.5, y1 + 0.5);
CGContextAddLineToPoint(context, x2 + 0.5, y2 + 0.5);
CGContextClosePath(context);
CGContextFillPath(context);
Edit: Actually I don't think this is going to work. Here's another suggestion:
I'm leaving my previous comments because I think what I said about the CoreGraphics coordinates is true, but I tried some experiments with the setup you described, and shifting everything over didn't eliminate those border artifacts. However, adding this line did:
CGContextSetAllowsAntialiasing(context, false);
(I don't know why CGContextSetAllowsAntialiasing is declared to take a stdbool-style bool, but it is; that's why I used false instead of NO here, not that it makes a difference.)
A cheap/easy solution is to draw each triangle twice. This reduces anti-aliasing but boosts the coverage along the edges. Rendering everything as a single path should work too, if it's all the same color.
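For example (a sketch that just reuses the fill code from the question):
// Fill the same triangle twice: the partial alpha coverage along the
// anti-aliased edges accumulates toward full opacity.
for (int pass = 0; pass < 2; pass++) {
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, x0, y0);
    CGContextAddLineToPoint(context, x1, y1);
    CGContextAddLineToPoint(context, x2, y2);
    CGContextClosePath(context);
    CGContextFillPath(context);
}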
Approach 3 - Combine
I'm using the GPC (General Polygon Clipper) library, and it almost works as desired.
I run a UNION operation one triangle at a time, until all triangles have been UNION'ed into the result.
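Roughly like this (a sketch against GPC's C API; makeGpcTriangle, triangles and triangleCount are my own hypothetical names):
#include "gpc.h"

// UNION one triangle at a time into an accumulated result polygon.
gpc_polygon result = { 0, NULL, NULL };   // start with an empty polygon
for (int i = 0; i < triangleCount; i++) {
    gpc_polygon tri = makeGpcTriangle(triangles[i]);  // hypothetical helper
    gpc_polygon merged;
    gpc_polygon_clip(GPC_UNION, &result, &tri, &merged);
    gpc_free_polygon(&result);
    gpc_free_polygon(&tri);
    result = merged;
}
// 'result' now holds the merged outlines, ready to be drawn as one path.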
Below is output from GPC. No white edges can be seen.
I also tried using Angus Johnson's Clipper library, but was unable to build polygons by UNIONing one triangle at a time; it only removed a few of the white edges between the triangles, although Clipper seems more powerful than GPC.
Below is output from Clipper; it shows white edges.
Related
EPS has the arc command
xcenter ycenter radius startangle stopangle arc
which will draw a circular arc of the desired radius around the desired location. How can I draw an ellipse?
With curves :-)
You need to look at the curveto operator, then define your ellipse as a series of Bezier curves.
See also the answer to this question
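If it helps, the standard construction uses four cubic segments with the control-point ratio kappa = 4 * (sqrt(2) - 1) / 3, about 0.5523. Here is a small C sketch of mine that prints such a path as EPS curveto commands (the emitter and its arguments are illustrative, not part of EPS itself):
#include <math.h>
#include <stdio.h>

// Print an EPS path approximating an ellipse with four cubic Bezier
// ("curveto") segments. (cx, cy) is the center; rx and ry are the radii.
void emit_ellipse(double cx, double cy, double rx, double ry) {
    double k = 4.0 * (sqrt(2.0) - 1.0) / 3.0;   // ~0.5523
    printf("%g %g moveto\n", cx + rx, cy);
    printf("%g %g %g %g %g %g curveto\n",
           cx + rx, cy + k * ry, cx + k * rx, cy + ry, cx, cy + ry);
    printf("%g %g %g %g %g %g curveto\n",
           cx - k * rx, cy + ry, cx - rx, cy + k * ry, cx - rx, cy);
    printf("%g %g %g %g %g %g curveto\n",
           cx - rx, cy - k * ry, cx - k * rx, cy - ry, cx, cy - ry);
    printf("%g %g %g %g %g %g curveto\n",
           cx + k * rx, cy - ry, cx + rx, cy - k * ry, cx + rx, cy);
    printf("closepath stroke\n");
}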
I am attempting to map a fisheye image to a 360 degree view using a sky sphere in Unity. The scene is inside the sphere. I am very close but I am seeing some slight distortion. I am calculating UV coordinates as follows:
Vector3 v = currentVertice; // unit vector from edge of sphere, -1, -1, -1 to 1, 1, 1
float r = Mathf.Atan2(Mathf.Sqrt(v.x * v.x + v.y * v.y), v.z) / (Mathf.PI * 2.0f);
float phi = Mathf.Atan2(v.y, v.x);
textureCoordinates.x = (r * Mathf.Cos(phi)) + 0.5f;
textureCoordinates.y = (r * Mathf.Sin(phi)) + 0.5f;
Here is the distortion and triangles:
The rest of the entire sphere looks great, it's just at this one spot that I get the distortion.
Here is the source fish eye image:
And here is the same sphere with a UV test texture over the top, showing the same distortion area. The full UV test texture is shown on the right; it is square, although stretched into a rectangle for the purposes of my screenshot.
The distortion above is using sphere mapping rather than fish eye mapping. Here is the UV texture using fish eye mapping:
Math isn't my strong point, am I doing anything wrong here or is this kind of mapping simply not 100% possible?
The spot you are seeing is the case where r gets very close to 1. As you can see in the source image, this is the border area between the very distorted image data and the black.
This area is very distorted; however, that's not the main problem. Looking at the result, you can see that there are problems with the UV orientation.
I've added a few lines to your source image to demonstrate what I mean. Where r is small (yellow lines) you can see that the UV coordinates can be interpolated between the corners of your quad (assuming quads instead of tris). However, where r is big (red corners), interpolating UV coordinates will make them travel through areas of your source image whose r is much smaller than 1 (red lines), causing distortions in UV space. Actually, those red lines should not be straight, but they should travel along the border of your source image data.
You can improve this by having a higher polycount in the area of your skysphere where r gets close to 1, but it will never be perfect as long as your UVs are interpolated in a linear way.
I also found another problem. If you look closely at the spot, you'll find that the complete source image is present there in miniature. This is because your UV coordinates wrap around at that point. As rendering passes around the viewer, UV coordinates travel from 0 towards 1. At the spot they are at 1, but the neighboring vertex is at 0.001 or so, causing the whole source image to be rendered in between. To fix that, you'll need two separate vertices at the seam of your skysphere: one where the surface of the sphere starts and one where it ends. In object space they are identical, but in UV space one is at 0 and the other at 1.
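A sketch of the idea in C (the Vertex layout and buildRing are illustrative, not from any particular engine): run the longitude loop one step past the end, so the seam column exists twice.
#include <math.h>

typedef struct { float x, y, z, u, v; } Vertex;   // assumed vertex layout

// Build one horizontal ring of sphere vertices with a duplicated seam.
// The loop runs to 'segments' INCLUSIVE: the first and last vertices sit
// at the same position in object space, but get u = 0 and u = 1.
void buildRing(Vertex *ring, int segments,
               float radius, float height, float ringV) {
    for (int i = 0; i <= segments; i++) {
        float u = (float)i / (float)segments;      // 0 .. 1 inclusive
        float theta = u * 2.0f * 3.14159265f;      // same angle at u=0 and u=1
        ring[i].x = cosf(theta) * radius;
        ring[i].z = sinf(theta) * radius;
        ring[i].y = height;
        ring[i].u = u;
        ring[i].v = ringV;
    }
}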
I have been trying to draw a rounded rectangle with a gap in the border, but I can't seem to find a way to do this using the Canvas.RoundRect function, and I am not good enough at math to draw the edges myself. I can draw a rectangle with a gap using the Canvas.MoveTo and Canvas.LineTo functions, but I don't know how to make the edges rounded. Currently I draw a yellow rectangle over the place where I want the gap in the border, but the problem is that when printing I have to draw directly on the printer canvas, onto a transparent sheet, so a background color will cause problems. Can anyone help me build a custom drawing routine, or tell me how I can erase that area and still print on transparent paper without any background color? The yellow background color is just for the preview; when I draw to a printer canvas the background is transparent.
See the image to understand what I mean by a gap in the border line.
Thanks
You can exclude the gap by manipulating the clipping region of the current device context. Assuming that L, T, R and B are the coordinates of your yellow gap rectangle, use the following code:
ExcludeClipRect(Canvas.Handle, L, T, R, B); // exclude the gap
Canvas.RoundRect(<whatever you already do here>);
SelectClipRgn(Canvas.Handle, 0); // reset the clipping region
You can draw your partial rounded rectangle yourself. Use MoveTo and LineTo for the straight portions, and use Arc for the corners.
The Arc function draws a portion of an ellipse. The first two pairs of coordinates to the function indicate the bounds of the ellipse. If you want the corners of your rectangle to be circular, then the ellipse is a circle, and X2 - X1 will equal Y2 - Y1. The second two pairs of coordinates indicate the starting and ending points on the circle; they'll be the same points you pass to MoveTo and LineTo for the straight portions. The arc is drawn counter-clockwise.
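For example, the top-left corner could look like this (a Win32 GDI sketch in C, since Delphi's Canvas.Arc wraps the same call; the coordinates are illustrative):
#include <windows.h>

// Draw the top-left corner of a rounded rectangle. The corner circle has
// diameter d and bounding box (x, y)-(x + d, y + d). Since the arc is
// drawn counter-clockwise, it starts on the top edge and ends on the left.
void DrawTopLeftCorner(HDC dc, int x, int y, int d) {
    Arc(dc, x, y, x + d, y + d,   // bounding box of the full corner circle
        x + d / 2, y,             // start point: where the top edge ends
        x, y + d / 2);            // end point: where the left edge begins
}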
Let's say we have a texture (in this case 8x8 pixels) we want to use as a sprite sheet. One of the sub-images (sprite) is a subregion of 4x3 inside the texture, like in this image:
(Normalized texture coordinates of the four corners are shown)
Now, there are basically two ways to assign texture coordinates to a 4px x 3px-sized quad so that it effectively becomes the sprite we are looking for; The first and most straightforward is to sample the texture at the corners of the subregion:
// Texture coordinates
GLfloat sMin = (xIndex0 ) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth ) / imageWidth;
GLfloat tMin = (yIndex0 ) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight) / imageHeight;
When I first implemented this method, ca. 2010, I noticed the sprites looked slightly 'distorted'. After a bit of searching, I came across a post in the cocos2d forums explaining that the 'right way' to sample a texture when rendering a sprite is this:
// Texture coordinates
GLfloat sMin = (xIndex0 + 0.5) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth - 0.5) / imageWidth;
GLfloat tMin = (yIndex0 + 0.5) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight - 0.5) / imageHeight;
...and after fixing my code, I was happy for a while. But somewhere along the way, I believe around the introduction of iOS 5, I started feeling that my sprites weren't looking good. After some testing, I switched back to the 'blue' method (second image), and now they seem to look good, but not always.
Am I going crazy, or something changed with iOS 5 related to GL ES texture mapping? Perhaps I am doing something else wrong? (e.g., the vertex position coordinates are slightly off? Wrong texture setup parameters?) But my code base didn't change, so perhaps I am doing something wrong from the beginning...?
I mean, at least with my code, it feels as if the "red" method used to be correct but now the "blue" method gives better results.
Right now, my game looks OK, but I feel there is something half-wrong that I must fix sooner or later...
Any ideas / experiences / opinions?
ADDENDUM
To render the sprite above, I would draw a quad measuring 4x3 in orthographic projection, with each vertex assigned the texture coords implied in the code mentioned before, like this:
// Top-Left Vertex
{ sMin, tMin };
// Bottom-Left Vertex
{ sMin, tMax };
// Top-Right Vertex
{ sMax, tMin };
// Bottom-right Vertex
{ sMax, tMax };
The original quad is created from (-0.5, -0.5) to (+0.5, +0.5); i.e., it is a unit square at the center of the screen, which is then scaled to the size of the subregion (in this case, 4x3) and positioned with its center at integer (x, y) coordinates. I suspect this has something to do with it too, especially when the width, the height, or both are not even.
ADDENDUM 2
I also found this article, but I'm still trying to put it together (it's 4:00 AM here)
http://www.mindcontrol.org/~hplus/graphics/opengl-pixel-perfect.html
There's slightly more to this picture than meets the eye: the texture coordinates are not the only factor in where the texture gets sampled. In your case, I believe the blue is probably what you want to have.
What you ultimately want is to sample each texel in center. You don't want to be taking samples on the boundary between two texels, because that either combines them with linear sampling, or arbitrarily chooses one or the other with nearest, depending on which way the floating point calculations round.
Having said that, you might think that you don't want your texcoords at (0,0), (1,1) and the other corners, because those are on the texel boundary. However, an important thing to note is that OpenGL samples textures at the center of a fragment.
For a super simple example, consider a 2 by 2 pixel monitor, with a 2 by 2 pixel texture.
If you draw a quad from (0,0) to (2,2), this will cover 4 pixels. If you texture map this quad, it will need to take 4 samples from the texture.
If your texture coordinates go from 0 to 1, then OpenGL will interpolate this and sample at the center of each pixel, with the lower-left texcoord starting at the bottom-left corner of the bottom-left pixel. This will ultimately generate texcoord pairs of (0.25, 0.25), (0.75, 0.75), (0.25, 0.75), and (0.75, 0.25), which puts the samples right in the middle of each texel. That is what you want.
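Spelling that arithmetic out (a tiny standalone C sketch, not actual GL code):
#include <stdio.h>

// Where GL ends up sampling for each fragment of the 2x2 quad: fragment
// centers sit at pixel + 0.5, and interpolating texcoords 0..1 across the
// 2-pixel-wide quad lands every sample on a texel center.
int main(void) {
    for (int py = 0; py < 2; py++)
        for (int px = 0; px < 2; px++) {
            float s = (px + 0.5f) / 2.0f;   // 0.25 or 0.75
            float t = (py + 0.5f) / 2.0f;   // 0.25 or 0.75
            printf("fragment (%d,%d) samples at (%.2f, %.2f)\n", px, py, s, t);
        }
    return 0;
}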
If you offset your texcoords by half a texel, as in the red example, then the interpolation will be off, and you'll end up sampling away from the centers of the texels.
So long story short, you want to make sure that your pixels line up correctly with your texels (don't draw sprites at non-integer pixel locations), and don't scale sprites by arbitrary amounts.
If the blue square is giving you bad results, can you give an example image, or describe how you're drawing it?
Picture says 1000 words:
I'm trying to create a 3D effect in 2D (the z-coordinate is 0) using vertex and index buffers with DirectX 7.
It's easier to explain with a picture:
The problem is that the lines are broken; they should be straight. To render this image, it gets broken up into triangles and rendered using DrawIndexedPrimitiveVB. Obviously each triangle is skewed a little differently, and I don't see why.
Am I missing something trivial here?
I'm not sure if this will help, but the source and destination quads are as follow:
SPoint4:= pBounds4(1, 1, W - 2, H - 2);
DPoint4:= Point4(
  ProjTo2dX(i, FlyDist + DeepDist, W), ProjTo2dY(0, FlyDist + DeepDist, H),
  ProjTo2dX(W - i, FlyDist, W), ProjTo2dY(0, FlyDist, H),
  ProjTo2dX(W - i, FlyDist, W), ProjTo2dY(H, FlyDist, H),
  ProjTo2dX(i, FlyDist + DeepDist, W), ProjTo2dY(H, FlyDist + DeepDist, H));
One way to map a square/rectangular texture to an arbitrary quad is projective interpolation. I've written an article showing how to do this (using vertex/pixel shaders).
The short version: you interpolate UVs across the quad in a way analogous to how GPUs do it for perspective-correct rendering (which, as you may have noticed, does not produce a visible seam between the two triangles). To do this, you need to calculate a false "depth" value for each vertex of the quad, and do the interpolation using homogeneous coordinates based on this "depth". Full details are in the article linked above.
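The core of it, as a hedged C sketch (my own condensation, not the article's actual code): find where the quad's diagonals intersect, derive a per-vertex scale q, then pass (u*q, v*q, q) per vertex and divide by the interpolated q per pixel.
#include <math.h>

typedef struct { float x, y; } Vec2;

// Intersection of diagonals P0-P2 and P1-P3 (assumes a convex quad).
static Vec2 diagonalIntersection(const Vec2 p[4]) {
    float dx1 = p[2].x - p[0].x, dy1 = p[2].y - p[0].y;
    float dx2 = p[3].x - p[1].x, dy2 = p[3].y - p[1].y;
    float denom = dx1 * dy2 - dy1 * dx2;
    float t = ((p[1].x - p[0].x) * dy2 - (p[1].y - p[0].y) * dx2) / denom;
    Vec2 c = { p[0].x + t * dx1, p[0].y + t * dy1 };
    return c;
}

// q[i] = (d_i + d_opposite) / d_opposite, where d_i is the distance from
// vertex i to the diagonal intersection. Feeding (u*q, v*q, q) per vertex
// and dividing by q per pixel removes the seam between the two triangles.
void computeProjectiveQ(const Vec2 p[4], float q[4]) {
    Vec2 c = diagonalIntersection(p);
    float d[4];
    for (int i = 0; i < 4; i++)
        d[i] = hypotf(p[i].x - c.x, p[i].y - c.y);
    for (int i = 0; i < 4; i++)
        q[i] = (d[i] + d[(i + 2) % 4]) / d[(i + 2) % 4];
}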
You need to provide some perspective information to get proper texture coordinate interpolation on a trapezoid; see
Problems with texture deformation in OpenGL ES 1.1 on quad made out of triangle strips
I found a solution, or at least a workaround. Instead of breaking the image up into 2 triangles, I break it up into many (several horizontal strips, each consisting of 2 triangles). In this case the image looks OK.
In this case the image is split into 10 strips (20 triangles).
I'll be happy for any comments or other solutions. Thank you.