XNA texture coordinates on merged textures

I have a problem with texture coordinates. First I'd like to describe what I want to do, then I'll ask the question.
I want a mesh that uses multiple textures to instead use only one big texture. The big texture merges all the textures the mesh is using. I have written a routine that merges textures, so that part is no problem, but I still have to modify the texture coordinates so that the mesh, which now uses one texture instead of many, still has everything placed correctly.
See the picture:
In the upper left corner is one of the textures (let's call it A) that I merged into the big texture on the right (B). A's top left is (0,0) and its bottom right is (1,1). For easy use, let's say that B.width = A.width * 2, and the same for the height. So on B, the mini texture (M, which is A after merging) should have its bottom right at (0.5, 0.5).
I have no problem understanding this so far, and I hope I have got it right. The problem is that the original A contains texture coordinates that are:
above 1
negative
What should these become on M?
Let's say A has (-0.1, 0). Is that (-0.05, 0) on M inside B?
What about the numbers further outside the 0..1 range? Is (-3.2, 0) on A -1.6 or -3.1 on B? That is, do I keep the integer part and divide only the fractional (mod 1) part by 2 (because, as stated above, the width is doubled), or should I divide the whole number by 2? As far as I understand, numbers outside this range make the texture repeat or mirror. How do I manage this so the output does not contain the orange texture from B?
If my question is not clear enough (my English is not very good), please ask and I will edit/answer. Just help me clear up my confusion :)
Thanks in advance:
Péter

A single texture has coordinates in the [0..1, 0..1] range.
The new texture also has coordinates in the [0..1, 0..1] range.
In your new texture, composed of four single textures, your algorithm has to translate the texture coordinates this way:
The blue single square texture gets new coordinates in the [0..0.5, 0..0.5] range.
The orange single square texture gets new coordinates in the [0.5..1, 0..0.5] range.
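To make the translation concrete, here is a minimal sketch (C++ for illustration; the function and its parameters are my own naming, and it assumes GL_REPEAT-style wrapping rather than mirroring):

#include <cmath>

struct UV { float u, v; };

// Remap a texture coordinate of sub-texture A into its region inside the
// atlas B. 'offset' is A's top-left corner inside B and 'scale' is A's
// size relative to B (both in B's 0..1 space). Coordinates outside [0,1]
// are wrapped first (GL_REPEAT style): a plain scale of e.g. -3.2 would
// land far outside A's region and sample other textures in the atlas.
UV remapToAtlas(UV a, UV offset, UV scale)
{
    float u = a.u - std::floor(a.u);   // fractional part: -3.2 -> 0.8
    float v = a.v - std::floor(a.v);   // -0.1 -> 0.9
    return { offset.u + u * scale.u, offset.v + v * scale.v };
}
// Example with A as the top-left quadrant of B:
// remapToAtlas({-0.1f, 0.0f}, {0.0f, 0.0f}, {0.5f, 0.5f}) -> (0.45, 0.0)

Note the consequence for the question above: (-0.1, 0) on A should not become (-0.05, 0), because that would fall outside M's region and sample whatever lies next to it in B; wrapping first gives (0.45, 0). The caveat is that a face whose coordinates span a wrap boundary (say from 0.9 to 1.1) can no longer tile correctly from an atlas with plain per-vertex remapping; such faces have to be split, or the wrap applied per pixel in a shader (e.g. with frac() in HLSL).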

Related

Extend a square in world space to a cube when only screen space coordinates are available

I have a photo of a Go-board, which is basically a grid with n*n squares, each of size a.
Depending on how the image was taken, the grid can have either one vanishing point like this (n = 15, board size b = 15*a):
or two vanishing points like this (n = 9, board size b = 9*a):
So what is available to me are the four screen space coordinates of the four corners of the flat board: p1, p2, p3, p4.
What I would like to do is to calculate the corresponding four screen space coordinates q1, q2, q3, q4 of the corners of the board, if the board was moved 'upward' (perpendicular to the plane of the board) in world space by a, or in other words the coordinates on top of the board, if the board had a thickness of a.
Is the information about the four points even sufficient to calculate this?
If this is not enough information, maybe it would help to make the assumption that the distance of the camera to the center of the board is typically of the order of 1.5 or 2 times the board size b?
From my understanding, the four lines p1-q1, p2-q2, p3-q3, p4-q4 would all go through the same (yet unknown) vanishing point, located somewhere below the board.
Maybe a sufficient approximation (because typically for a Go board n = 18, so the square size a is small in comparison to the board size) for the direction of each of the lines p1-q1, p2-q2, ... in screen space would be to simply choose a line perpendicular to the horizon (given by the two vanishing points vp1-vp2, or by p1-p2 in the case of only one vanishing point)?
Even with this approximation, the lengths of the four lines p1-q1, p2-q2, p3-q3, p4-q4 would still need to be calculated ...
Any hints are highly appreciated!
PS: I am using Objective-C & OpenCV
Not yet a full answer, but this might help to move forward. As MvG pointed out, 4 points alone are not enough. Luckily we know the board is a square, so even with perspective distortion the diagonals in 2D should/will intersect at the board center (unless serious fish-eye or other distortions are present in the image). Here is a test image (created with OpenGL) that I used as test input:
The grayish surface is a 2D quad built from the 2D perspective-distorted corner points (your input). The aqua/bluish grid is the 3D OpenGL grid I created the 2D corner points from (to check that they match). The green lines are the 2D diagonals, and the orange points are the 2D corner points plus the diagonals' intersection. As you can see, the 2D diagonal intersection corresponds exactly to the 3D board's mid-cell center.
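For reference, the intersection itself needs no more than a 2D line intersection; a small sketch (C++; assumes the corners p1..p4 are ordered around the quad so that p1-p3 and p2-p4 are the diagonals):

struct Vec2 { double x, y; };

// Intersection of the two diagonals p1-p3 and p2-p4 of the board quad.
// Solves p1 + t*(p3-p1) = p2 + s*(p4-p2) for t via the 2D cross product.
Vec2 diagonalIntersection(Vec2 p1, Vec2 p2, Vec2 p3, Vec2 p4)
{
    Vec2 d1{p3.x - p1.x, p3.y - p1.y};          // diagonal 1 direction
    Vec2 d2{p4.x - p2.x, p4.y - p2.y};          // diagonal 2 direction
    double denom = d1.x * d2.y - d1.y * d2.x;   // cross(d1, d2)
    // t = cross(p2 - p1, d2) / cross(d1, d2)
    double t = ((p2.x - p1.x) * d2.y - (p2.y - p1.y) * d2.x) / denom;
    return { p1.x + t * d1.x, p1.y + t * d1.y };
}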
Now we can use the ratio between the half-diagonal lengths to assume/fit the perspective. If we handle cell coordinates in the range <0,9>, we want to achieve a further division of the half diagonals like this:
I am still not sure exactly how (the linear ratio l0/(l0+l1) is not working), so I need to inspect the perspective mapping equations to find the relative ratio dependence and compute its inverse (when I have the time and mood for this).
If that works out, we can compute any point along the diagonals (we want the cell edges). Once that is done, we can easily compute the visual size of any cell of size a and use the vanishing point without any 3D transform matrices at all.
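One candidate for the missing ratio (my suggestion, not part of the original experiment): the perspective image of a straight line is a 1D homography, which three correspondences pin down completely, and we have exactly three here, u = 0 and u = 1 (the corners) plus u = 0.5 (the diagonal intersection, known to be the board center):

// Screen-space parameter along a diagonal for cell parameter u in [0,1].
// m is the normalized position of the diagonal intersection between the
// two corners in screen space: |c0..center| / |c0..c1|.
double perspectiveParam(double u, double m)
{
    // 1D homography t(u) = u / (u + k*(1-u)); t(0)=0 and t(1)=1 hold for
    // any k, and requiring t(0.5) = m fixes k = (1-m)/m.
    double k = (1.0 - m) / m;
    return u / (u + k * (1.0 - u));
}
// Cell edge i of n along the diagonal from corner c0 to corner c1:
//   t = perspectiveParam(i / (double)n, m);
//   p = c0 + t * (c1 - c0);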
In case this is not doable, there is still the option to use DIP/CV techniques to detect the cell crossings, as in:
OpenCV Birdseye view without loss of data
using just bullet #2 there, but for that you need to take into account the type of images you will have and adjust the detector or add preprocessing for it ...
Now back to your offsetting: you can simply offset your cells up by the visual size of the cell, like this:
And handle the left-side points (either interpolate the size or use the same as the neighboring cell). That should work unless too-weird angles of the board are used.

Drawing Curves using XNA

I've been making progress in a fan-replicated game I'm coding, but I'm stuck with this problem.
Right now I'm drawing a texture pixel by pixel along the curve path, but this cuts the frame rate from 4000 to 50 FPS on long curves.
I need to store per-pixel Vector2 + length data anyway, so I can produce constant-speed movement along the curve, and I loop through that data to draw the curve as well.
Curves I need to be able to draw are Bezier, Circular and Catmull.
Any ideas of how to make it more efficient?
Maybe I have misunderstood the question but I did this once:
Create the curve and sample x points on it. (Red dots)
Create a mesh from it by calculating the cross vector of each point. (Green lines)
Build a quad between all of these. So basically 5 of them in my picture.
Set the U coordinate to be on the perpendicular plane and let the V coordinate follow the curve length: 0 at the start and 1 at the end.
You can of course scale V if you want your texture to repeat.
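A sketch of those steps (C++ with minimal 2D types; in XNA the same loop would fill a VertexPositionTexture array and be drawn as a triangle strip):

#include <vector>
#include <cmath>

struct Vec2 { float x, y; };
struct Vertex { Vec2 pos; float u, v; };

// Build a textured ribbon along a sampled curve. 'points' are the red
// dots from step 1, 'width' is the ribbon width. Output is a triangle
// strip: two vertices per sample, one on each side of the curve.
std::vector<Vertex> buildRibbon(const std::vector<Vec2>& points, float width)
{
    std::vector<Vertex> strip;

    // total length, so V can run 0..1 along the curve
    float total = 0, acc = 0;
    for (size_t i = 1; i < points.size(); ++i)
        total += std::hypot(points[i].x - points[i-1].x,
                            points[i].y - points[i-1].y);

    for (size_t i = 0; i < points.size(); ++i) {
        // tangent from neighbouring samples; normal = tangent rotated 90
        // degrees (the green lines)
        const Vec2& a = points[i == 0 ? 0 : i - 1];
        const Vec2& b = points[i + 1 < points.size() ? i + 1 : i];
        float tx = b.x - a.x, ty = b.y - a.y;
        float len = std::hypot(tx, ty);
        float nx = -ty / len, ny = tx / len;

        if (i > 0)
            acc += std::hypot(points[i].x - points[i-1].x,
                              points[i].y - points[i-1].y);
        float v = acc / total;              // 0 at start, 1 at end

        float h = width * 0.5f;
        strip.push_back({{points[i].x + nx * h, points[i].y + ny * h}, 0.0f, v});
        strip.push_back({{points[i].x - nx * h, points[i].y - ny * h}, 1.0f, v});
    }
    return strip;
}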
Any ideas of how to make it more efficient?
Assuming the texture needs to be dynamic, draw the texture on the GPU-side using a shader. Drawing it on the CPU-side is not only slow, but bogs down both the CPU and GPU when you need to send it back to the GPU every frame. Much better to draw it GPU-side.
I need to store pixel by pixel Vector2 + length data anyway
The shader can store additional information in the texture. E.g. even though you may allocate an RGBA texture, it doesn't have to store color information, since it is your shaders that will interpret the data.

Texture getting stretched across faces of a cuboid in Open Inventor

I am trying to write a little script to apply texture to rectangular cuboids. To accomplish this, I run through the scene graph, and wherever I find an SoIndexedFaceSet node, I insert an SoTexture2 node before it. I put my image file in the SoTexture2 node. The problem I am facing is that the texture is applied correctly to 2 of the faces (say face1 and face2) in the Y-Z plane, but on the other 4 faces it just stretches the texture from the boundaries of those two faces (1 and 2).
It looks something like this.
The front is how it should look, but as you can see, on the other two faces, it just extrapolates the corner values of the front face. Any ideas why this is happening and any way to avoid this?
Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry and did not specify texture coordinates, it will automatically compute some texture coordinates. Of course it's not possible to guess how you wanted the texture to be applied. So it computes the bounding box then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly as in your image. This behavior can be useful, especially as a quick approximation.
What you need to make this work the way you want, is to explicitly assign texture coordinates to the geometry such that the texture is mapped separately to each face. In Open Inventor you can actually still share the vertices between faces because you are allowed to specify different vertex indices and texture coordinate indices (of course this is only more convenient for the application because OpenGL doesn't support this and Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node you would see that the texture is mapped separately to each face as expected. That's because SoCube defines texture coordinates for each face.
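A minimal sketch of that approach for a single face (C++ against the standard Open Inventor node API; the file name, coordinates, and scene layout are placeholders):

#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoTexture2.h>
#include <Inventor/nodes/SoTextureCoordinate2.h>
#include <Inventor/nodes/SoCoordinate3.h>
#include <Inventor/nodes/SoIndexedFaceSet.h>

// Map the full texture onto one quad face of the cuboid.
SoSeparator* texturedFace()
{
    SoSeparator* root = new SoSeparator;

    SoTexture2* tex = new SoTexture2;
    tex->filename.setValue("myimage.png");      // hypothetical file name
    root->addChild(tex);

    // one full copy of the texture per face: corners of [0,1]x[0,1]
    SoTextureCoordinate2* tc = new SoTextureCoordinate2;
    tc->point.set1Value(0, 0.0f, 0.0f);
    tc->point.set1Value(1, 1.0f, 0.0f);
    tc->point.set1Value(2, 1.0f, 1.0f);
    tc->point.set1Value(3, 0.0f, 1.0f);
    root->addChild(tc);

    SoCoordinate3* coords = new SoCoordinate3;  // a face in the Y-Z plane
    coords->point.set1Value(0, 0, 0, 0);
    coords->point.set1Value(1, 0, 1, 0);
    coords->point.set1Value(2, 0, 1, 1);
    coords->point.set1Value(3, 0, 0, 1);
    root->addChild(coords);

    SoIndexedFaceSet* face = new SoIndexedFaceSet;
    int32_t vi[] = {0, 1, 2, 3, SO_END_FACE_INDEX};
    face->coordIndex.setValues(0, 5, vi);
    // textureCoordIndex may differ from coordIndex, which is what lets
    // shared vertices still get per-face texture coordinates
    face->textureCoordIndex.setValues(0, 5, vi);
    root->addChild(face);
    return root;
}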

Drawing a concave polygon in OpenGL

I have a concave polygon I need to draw in OpenGL.
The polygon is defined as a list of points which form its exterior ring, and a list of lists-of-points that define its interior rings (exclusion zones).
I can already deal with the exclusion zones, so a solution for how to draw a polygon without interior rings will be good too.
A solution with Boost.Geometry will be good, as I already use it heavily in my application.
I need this to work on the iPhone, namely OpenGL ES (the older version with fixed pipeline).
How can I do that?
Try OpenGL's tessellation facilities (the GLU tessellator). You can use it to convert a complex polygon into a set of triangles, which you can render directly.
EDIT (in response to comment): OpenGL ES doesn't support tessellation functions. In this case, and if the polygon is static data, you could generate the tessellation offline using OpenGL on your desktop or notebook computer.
If the shape is dynamic, then you are out of luck with OpenGL ES. However, there are numerous libraries (e.g., CGAL) that will perform the same function.
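For the offline route, a sketch of driving the GLU tessellator (desktop C++; the exact callback cast is platform-dependent, and the error/combine callbacks are omitted):

#include <GL/glu.h>
#include <vector>

// Collect the triangles the tessellator emits, e.g. to save them offline
// and ship them to the OpenGL ES app as a plain vertex array.
static std::vector<double> triangles;          // x,y,z per emitted vertex

static void onVertex(void* data)
{
    const double* p = static_cast<const double*>(data);
    triangles.insert(triangles.end(), p, p + 3);
}
// Registering an edge-flag callback forces plain GL_TRIANGLES output
// (no fans or strips), so vertices arrive in groups of three.
static void onEdgeFlag(GLboolean) {}

void tessellate(std::vector<double>& ring)     // flat x,y,z triples
{
    GLUtesselator* tess = gluNewTess();
    gluTessCallback(tess, GLU_TESS_VERTEX, (void (*)())onVertex);
    gluTessCallback(tess, GLU_TESS_EDGE_FLAG, (void (*)())onEdgeFlag);

    gluTessBeginPolygon(tess, nullptr);
    gluTessBeginContour(tess);
    for (size_t i = 0; i + 2 < ring.size(); i += 3)
        gluTessVertex(tess, &ring[i], &ring[i]);  // coords are double[3]
    gluTessEndContour(tess);
    // interior rings (holes) would go in additional contours here
    gluTessEndPolygon(tess);
    gluDeleteTess(tess);
}

Holes simply become additional contours inside the same gluTessBeginPolygon/gluTessEndPolygon pair.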
It's a bit complicated and resource-costly, but any concave polygon can be drawn with the following steps (note that this method works reliably on flat polygons; I also assume you draw on a flat surface or in 2D orthogonal mode):
enable stencil test, use glStencilFunc(GL_ALWAYS,1,0xFFFF)
disable the color mask to prevent unwanted draws: glColorMask(0,0,0,0)
keep the vertices in an array of doubles or some similar form (strongly recommended, as this method draws the same polygon multiple times; a display list or glBegin/glEnd can be used as well)
set glStencilOp(GL_KEEP,GL_KEEP,GL_INCR)
draw the polygon as GL_TRIANGLE_FAN
Now on the stencil layer you have bits set >0 wherever triangles of the polygon were drawn. The trick is that all the valid polygon area is filled with values having mod 2 = 1. This is because the triangle fan sweeps along the polygon surface, and if a triangle has area outside the polygon, that area will be drawn again later when the valid areas are covered. This can happen many times, but in all cases pixels outside the polygon are drawn an even number of times and pixels inside an odd number of times.
Some exceptions can happen when the drawing order causes outside areas not to be drawn again. To filter out these cases, the same vertex array must also be drawn in reverse order (all these cases work properly when the order is switched):
- set glStencilFunc(GL_EQUAL,1,1) to prevent these errors from occurring in the reverse direction (this can only draw areas inside the polygon marked by the first pass, so errors in the other direction won't appear; logically this produces the intersection of the two half-solutions)
- draw the polygon in reverse order, keeping glStencilOp set to increment the swept pixel values
Now we have a correct stencil layer with pixel_value%2=1 where the pixel is truly inside the polygon. The last step is to draw the polygon itself:
- set glColorMask(1,1,1,1) to draw visible polygon
- keep glStencilFunc(GL_EQUAL,1,1) to draw the correct pixels
- draw polygon in the same mode (vertex arrays etc.), or if you draw without lighting/texturing, a single whole-screen-rectangle can be also drawn (faster than drawing all the vertices, and only the valid polygon pixels will be set)
If everything goes well, the polygon is correctly drawn. Make sure that after this function you reset the stencil usage (stencil test) and/or clear the stencil buffer if you also use it for another purpose.
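Putting the steps together, roughly (C++; the reversed-order draw is left as a comment, and state save/restore is omitted):

#include <GL/gl.h>

// Stencil-based concave polygon fill following the steps above.
// 'verts' holds x,y pairs (2*n floats).
void drawConcavePolygon(const GLfloat* verts, int n)
{
    glEnable(GL_STENCIL_TEST);
    glClear(GL_STENCIL_BUFFER_BIT);

    glColorMask(0, 0, 0, 0);                 // no visible drawing yet
    glStencilFunc(GL_ALWAYS, 1, 0xFFFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);  // count sweeps per pixel

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, verts);
    glDrawArrays(GL_TRIANGLE_FAN, 0, n);     // forward pass

    // reverse pass, restricted to pixels already marked once
    glStencilFunc(GL_EQUAL, 1, 1);
    // ... draw the same fan again here from a reversed copy of 'verts'

    // visible pass: only pixels with (stencil & 1) == 1 are inside
    glColorMask(1, 1, 1, 1);
    glStencilFunc(GL_EQUAL, 1, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDrawArrays(GL_TRIANGLE_FAN, 0, n);

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisable(GL_STENCIL_TEST);
}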
Check out glues, which has tessellation functions that can handle concave polygons.
I wrote a Java class for a small graphics library that does exactly what you are looking for; you can check it here:
https://github.com/DzzD/TiGL/blob/main/android/src/fr/dzzd/tigl/PolygonTriangulate.java
It receives as input two float arrays (vertices & uvs) and returns the same vertices and uvs reordered and ready to be drawn as a list of triangles.
If you want to exclude a zone (or several), you can simply connect your two polygons (the main one + the hole) into one by joining them at a vertex; you will end up with a single polygon that can be triangulated like any other with the same function.
Like this:
Zoomed in, to make it easier to see, it looks like this:
In the end it is just a single polygon.
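The connection trick itself is only a few lines; a sketch (C++; choosing mutually visible bridge vertices oi/hi is assumed, and the hole must wind opposite to the outer ring):

#include <vector>

struct Vec2 { float x, y; };

// Merge a hole into the outer ring through a "bridge": walk the outer
// ring up to a chosen vertex, insert the whole hole ring starting at a
// chosen hole vertex, then repeat both bridge vertices to close the cut.
// The result is one simple polygon that an ear-clipping style
// triangulator (like the class linked above) can handle.
std::vector<Vec2> mergeHole(const std::vector<Vec2>& outer, size_t oi,
                            const std::vector<Vec2>& hole, size_t hi)
{
    std::vector<Vec2> merged;
    merged.insert(merged.end(), outer.begin(), outer.begin() + oi + 1);
    for (size_t k = 0; k <= hole.size(); ++k)   // hole ring + repeated start
        merged.push_back(hole[(hi + k) % hole.size()]);
    merged.push_back(outer[oi]);                // back over the bridge
    merged.insert(merged.end(), outer.begin() + oi + 1, outer.end());
    return merged;
}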

Given normal map in world space what is a suitable algorithm to find edges?

If I have the vertex normals of a normal scene showing up as colours in a texture in world space, is there a way to calculate edges efficiently, or is it mathematically impossible? I know it's possible to calculate edges if you have the normals in view space, but I'm not sure whether it's possible with normals in world space (I've been trying to figure out a way for the past hour...).
I'm using DirectX with HLSL.
if ( dot( normalA, normalB ) < cos( maxAngleDiff ) )
then you have an edge between the two neighbouring pixels: the dot product of two unit normals is the cosine of the angle between them, so it drops below cos(maxAngleDiff) exactly when the angle exceeds maxAngleDiff. It won't be perfect, but it will definitely find edges that other methods won't.
Or am I misunderstanding the problem?
Edit: how about, simply, high pass filtering the image?
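To make the neighbour test concrete, a CPU-side sketch (C++; in HLSL the same comparison runs per pixel by sampling the normal texture at one-texel offsets):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static float dot3(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// 'normals' is a w*h image of unit world-space normals; output marks
// pixels whose normal differs from a neighbour by more than maxAngleDiff.
std::vector<unsigned char> findEdges(const std::vector<Vec3>& normals,
                                     int w, int h, float maxAngleDiff)
{
    const float threshold = std::cos(maxAngleDiff);
    std::vector<unsigned char> edges(w * h, 0);
    for (int y = 0; y < h - 1; ++y)
        for (int x = 0; x < w - 1; ++x) {
            Vec3 n  = normals[y * w + x];
            Vec3 nr = normals[y * w + x + 1];    // right neighbour
            Vec3 nd = normals[(y + 1) * w + x];  // bottom neighbour
            // edge where the angle to either neighbour exceeds the limit,
            // i.e. the dot product drops below cos(maxAngleDiff)
            if (dot3(n, nr) < threshold || dot3(n, nd) < threshold)
                edges[y * w + x] = 255;
        }
    return edges;
}

Note that the angle between two normals is unchanged by rotating the whole frame, so this comparison works the same whether the normals are stored in world space or in view space.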
I assume you are trying to make cartoon-style edges for a cel shader?
If so, simply take the dot product of the world-space normal with the world-space pixel position minus the camera position. As long as your operands are all in the same space you should be OK.
float edgy = dot(normalize(world_space_normal), normalize(pixel_world_pos - camera_world_pos));
If edgy is near 0, it's an edge.
If you want a screen space sized edge you will need to render additional object id information on another surface and post process the differences to the color surface.
It will depend on how many colors your image contains and how they merge: sharp edges, dithered, blended, ...
Since you say you have the vertex normals, I am assuming that you can access the color information on a single plane.
I have used two techniques with varying success:
I searched the image for local areas of the same color (RGB) and then used the highest of R, G, or B to find the 'edge', that is, where the selected R, G, or B is no longer the highest value;
the second method I used is to reduce the image to 16 colors internally, in which case it is easy to find the outlines.
Constructing vectors from this then depends on how fine you want the granularity of your 'wireframe' image to be.
