Why does the sphere texture map not match correctly? - DirectX

I have a sphere in my 3D project and an Earth texture, and I use the algorithm from the wiki to calculate the texture coordinates.
The code in my effect file looks like this:
float pi = 3.14159265359f;
output.uvCoords.x = 0.5 + atan2(input.normal.z, input.normal.x) / (2 * pi);
output.uvCoords.y = 0.5f - asin(input.normal.y) / pi;
The result is shown in the pictures below:
1. Look from the left (there is a line; this is my question)
2. Look from the front
3. Look from the right

This doesn't pretend to be a complete answer at all, but here are some ideas:
Try 6.28 instead of 6.18, because 3.14 * 2 = 6.28. It is always a good idea to create variables or macros instead of plain numbers, to prevent such sad mistakes in the future.
Try to use a more precise value of pi (more digits to the right of the decimal point).
Try to normalize the normal vector before the calculations.
Even better, calculate the texcoords on the CPU once and for all, instead of calculating them on each shader invocation. You can use any asset library for this purpose, or just quickly move your HLSL to the main code.
#define PI 3.14159265359f
#define PImul2 6.28318530718f // pi*2
#define PIdiv2 1.57079632679f // pi/2
#define PImul3div2 4.71238898038f // 3*pi/2
#define PIrev 0.31830988618f // 1/pi
...
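For example, the two lines from the question, rewritten with the macros above, a normalized normal, and consistent float literals (a sketch; input and output are the names from the question):
float3 n = normalize(input.normal);
output.uvCoords.x = 0.5f + atan2(n.z, n.x) / PImul2;
output.uvCoords.y = 0.5f - asin(n.y) / PI;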
Hope it helps.

Finally, I figured it out by myself. The problem lies in the fact that I was calculating the texture coordinates in the vertex shader. One vertex of a triangle can sit on the far right of the texture while the other two vertices sit on the far left, which results in almost the whole texture being visible on such a triangle, so there is a line of jumbled texture coordinates along the seam. The solution is to send the normal to the pixel shader and calculate the texture coordinates in the pixel shader, roughly as sketched below.
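A minimal HLSL sketch of that fix (the struct, texture, and sampler names are assumptions, not code from the original post): the vertex shader simply copies the vertex normal into an interpolator, and the pixel shader does the spherical mapping per pixel, so the atan2 wrap-around at the seam is never interpolated across a triangle:

#define PI 3.14159265359f

Texture2D EarthTexture : register(t0);
SamplerState LinearSampler : register(s0);

struct VSOutput
{
    float4 position : SV_Position;
    float3 normal   : TEXCOORD0; // filled by the vertex shader: output.normal = input.normal
};

float4 PSMain(VSOutput input) : SV_Target
{
    // Re-normalize: interpolated normals are generally no longer unit length.
    float3 n = normalize(input.normal);
    float2 uv;
    uv.x = 0.5f + atan2(n.z, n.x) / (2.0f * PI);
    uv.y = 0.5f - asin(n.y) / PI;
    return EarthTexture.Sample(LinearSampler, uv);
}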

Related

How can I make my WebGL Coordinate System "Top Left" Oriented?

For computational efficiency, I use a fragment shader to implement a simple 2D metaballs algorithm. The data for the circles to render is top-left oriented.
I have everything working, except that the origin of WebGL's coordinate system (bottom-left) is giving me a hard time: obviously, the rendered output is mirrored along the horizontal axis.
Following https://webglfundamentals.org/webgl/lessons/webgl-2d-rotation.html (and others), I tried to rotate things using a vertex shader, unfortunately without any success.
What is the simplest way of achieving the reorientation of WebGL's coordinate system?
I'd appreciate any hints and pointers, thanks! :)
Please find a working (not working ;) ) example here:
https://codesandbox.io/s/gracious-fermat-znbsw?file=/src/index.js
Since you are using gl_FragCoord in your pixel shader, you can't do it from the vertex shader, because gl_FragCoord is in canvas coordinates but upside down. You can easily invert it in JavaScript when you pass the values through to WebGL:
gl.uniform3fv(gl.getUniformLocation(program, `u_circles[${i}]`), [
  circles[i].x,
  canvas.height - circles[i].y - 1,
  circles[i].r
]);
If you want to do it in the shader and keep using gl_FragCoord, then you should pass the height of the canvas to the shader as a uniform and do the conversion of y there, with something like:
vec2 screenSpace = vec2(gl_FragCoord.x, canvasHeight - gl_FragCoord.y - 1);
The -1 is because the coordinates start at 0.

Drawing Curves using XNA

I've been making progress in a fan-replicated game I'm coding, but I'm stuck with this problem.
Right now I'm drawing a texture pixel by pixel along the curve path, but this cuts the frame rate from 4000 to 50 on long curves.
I need to store the pixel-by-pixel Vector2 + length data anyway, so I can produce constant-speed movement along the curve, and I loop through it to draw the curve as well.
The curves I need to be able to draw are Bézier, circular, and Catmull-Rom.
Any ideas of how to make it more efficient?
Maybe I have misunderstood the question, but I did this once:
Create the curve and sample x points on it (the red dots in my picture).
Create a mesh from it by calculating the cross vector at each point (the green lines).
Build a quad between each pair of consecutive points; so basically 5 of them in my picture.
Set the U coordinate to lie on the perpendicular plane and the V coordinate to follow the curve length, so it is 0 at the start and 1 at the end.
You can of course scale V if you want your texture to repeat.
Any ideas of how to make it more efficient?
Assuming the texture needs to be dynamic, draw it on the GPU side using a shader. Drawing it on the CPU side is not only slow, it bogs down both the CPU and the GPU, because you have to send the result back to the GPU every frame. It is much better to draw it GPU-side.
I need to store pixel by pixel Vector2 + length data anyway
The shader can store additional information in the texture. For example, even though you may allocate an RGBA texture, that doesn't mean it needs to store color information, since it is your shaders that will interpret the data.
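A hedged HLSL sketch of that idea (all names and the channel layout are assumptions, not from the original answer): render the curve strip once into an RGBA render target, keep the visible texture in RGB, and pack the normalized distance along the curve into A, so the per-pixel length data is produced on the GPU instead of being built pixel by pixel on the CPU:

Texture2D CurveTexture : register(t0);
SamplerState LinearSampler : register(s0);

struct PSInput
{
    float4 position : SV_Position;
    float2 uv       : TEXCOORD0; // u: across the strip, v: 0..1 along the curve
};

float4 PSMain(PSInput input) : SV_Target
{
    float3 color = CurveTexture.Sample(LinearSampler, input.uv).rgb;
    // Alpha is reinterpreted as "distance along the curve" instead of opacity.
    return float4(color, input.uv.y);
}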

HLSL correct pixel position in deferred shading

In OpenGL, I am using the following in my pixel shaders to get the correct pixel position, which is used to sample the diffuse, normal, and position G-buffer textures:
ivec2 texcoord = ivec2(textureSize(unifDiffuseTexture, 0) * (gl_FragCoord.xy / UnifAmbientPass.mScreenSize));
So far, this is what I do in HLSL:
float2 texcoord = input.mPosition.xy / gScreenSize;
Most notably, in GLSL I am using textureSize() to get an accurate pixel position. I am wondering, is there an HLSL equivalent to textureSize()?
In HLSL, you have GetDimensions.
But it may be costlier than reading the size from a constant buffer, even if it looks easier to use at first for quick tests.
You also have an alternative using SV_Position and Load: just use the xy as a uint2, and you remove the need for a user interpolator carrying a texture coordinate to index the screen.
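A small sketch of that SV_Position + Load route (the texture name is an assumption): SV_Position.xy already arrives in pixel units, so truncating it gives an integer texel address, and no screen-size division or extra interpolator is needed:

Texture2D DiffuseTexture : register(t0);

float4 PSMain(float4 pos : SV_Position) : SV_Target
{
    // xy are pixel coordinates; Load takes (x, y, mip level) and does no filtering.
    return DiffuseTexture.Load(int3(int2(pos.xy), 0));
}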
Here is the full documentation of a TextureObject.

DirectX11 - Matrix Translation

I am creating a 3D scene and have just inserted a cube object into it. It renders fine at the origin, but when I try to rotate it and then translate it, I get a huge deformed cube. Here is the problem area in my code:
D3DXMATRIX cubeROT, cubeMOVE;
D3DXMatrixRotationY(&cubeROT, D3DXToRadian(45.0f));
D3DXMatrixTranslation(&cubeMOVE, 10.0f, 2.0f, 1.0f);
D3DXMatrixTranspose(&worldMatrix, &(cubeROT * cubeMOVE));
// Put the model vertex and index buffers on the graphics pipeline to prepare them for drawing.
m_Model->Render(m_Direct3D->GetDeviceContext());
// Render the model using the light shader.
result = m_LightShader->Render(m_Direct3D->GetDeviceContext(), m_Model->GetIndexCount(), worldMatrix, viewMatrix, projectionMatrix,
m_Model->GetTexture(), m_Light->GetDirection(), m_Light->GetDiffuseColor());
// Reset the world matrix.
m_Direct3D->GetWorldMatrix(worldMatrix);
I have discovered that it's the cubeMOVE part of the transpose that is giving me the problem, but I have no idea why.
This rotates the cube properly:
D3DXMatrixTranspose(&worldMatrix, &cubeROT);
This translates the cube properly:
D3DXMatrixTranslation(&worldMatrix, 10.0f, 2.0f, 1.0f);
But this creates the deformed mesh:
D3DXMatrixTranspose(&worldMatrix, &cubeMOVE);
I'm quite new to DirectX so any help would be very much appreciated.
I don't think transpose does what you think it does. To combine transformation matrices, you just multiply them; there is no need to transpose. I guess it should be simply:
worldMatrix = cubeROT * cubeMOVE;
Edit
The reason "transpose" seems to work for rotation but not translation, is that transpose flips the non-diagonal parts of the matrix. But for an axis-rotation matrix, that leaves the matrix nearly unchanged. (It does change a couple of signs, but that would only affect the direction of the rotation.) For a translation matrix, applying a transpose would completely deform it -- hence the result you see.

Texture Sampling Coordinates to Render a Sprite

Let's say we have a texture (in this case, 8x8 pixels) that we want to use as a sprite sheet. One of the sub-images (a sprite) is a 4x3 subregion inside the texture, like in this image:
(Normalized texture coordinates of the four corners are shown)
Now, there are basically two ways to assign texture coordinates to a quad sized 4px x 3px so that it effectively becomes the sprite we are looking for. The first and most straightforward is to sample the texture at the corners of the subregion:
// Texture coordinates
GLfloat sMin = (xIndex0 ) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth ) / imageWidth;
GLfloat tMin = (yIndex0 ) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight) / imageHeight;
However, when first implementing this method, ca. 2010, I realized the sprites looked slightly 'distorted'. After a bit of searching, I came across a post in the cocos2d forums explaining that the 'right way' to sample a texture when rendering a sprite is this:
// Texture coordinates
GLfloat sMin = (xIndex0 + 0.5) / imageWidth;
GLfloat sMax = (xIndex0 + subregionWidth - 0.5) / imageWidth;
GLfloat tMin = (yIndex0 + 0.5) / imageHeight;
GLfloat tMax = (yIndex0 + subregionHeight - 0.5) / imageHeight;
...and after fixing my code, I was happy for a while. But somewhere along the way, I believe around the introduction of iOS 5, I started feeling that my sprites didn't look good anymore. After some testing, I switched back to the 'blue' method (second image), and now they seem to look good, but not always.
Am I going crazy, or did something change with iOS 5 related to GL ES texture mapping? Perhaps I am doing something else wrong? (e.g., vertex position coordinates that are slightly off? Wrong texture setup parameters?) But my code base didn't change, so perhaps I have been doing something wrong from the beginning...?
I mean, at least with my code, it feels as if the "red" method used to be correct but now the "blue" method gives better results.
Right now my game looks OK, but I feel there is something half-wrong that I must fix sooner or later...
Any ideas / experiences / opinions?
ADDENDUM
To render the sprite above, I would draw a quad measuring 4x3 in orthographic projection, with each vertex assigned the texture coords implied in the code mentioned before, like this:
// Top-Left Vertex
{ sMin, tMin };
// Bottom-Left Vertex
{ sMin, tMax };
// Top-Right Vertex
{ sMax, tMin };
// Bottom-right Vertex
{ sMax, tMax };
The original quad is created from (-0.5, -0.5) to (+0.5, +0.5); i.e., it is a unit square at the center of the screen, which is then scaled to the size of the subregion (in this case, 4x3) and positioned with its center at integer (x, y) coordinates. I suspect this has something to do with it too, especially when the width, the height, or both are not even?
ADDENDUM 2
I also found this article, but I'm still trying to put it together (it's 4:00 AM here)
http://www.mindcontrol.org/~hplus/graphics/opengl-pixel-perfect.html
There's slightly more to this picture than meets the eye: the texture coordinates are not the only factor in where the texture gets sampled. In your case, I believe the blue method is probably what you want to have.
What you ultimately want is to sample each texel at its center. You don't want to take samples on the boundary between two texels, because that either combines them with linear sampling, or arbitrarily chooses one or the other with nearest sampling, depending on which way the floating-point calculations round.
Having said that, you might think that you don't want your texcoords at (0,0), (1,1), and the other corners, because those are on the texel boundary. However, an important thing to note is that OpenGL samples textures at the center of a fragment.
For a super simple example, consider a 2x2-pixel monitor with a 2x2-pixel texture.
If you draw a quad from (0,0) to (2,2), it will cover 4 pixels. If you texture-map this quad, it will need to take 4 samples from the texture.
If your texture coordinates go from 0 to 1, then OpenGL will interpolate this and sample from the center of each pixel, with the lower-left texcoord starting at the bottom-left corner of the bottom-left pixel. This ultimately generates the texcoord pairs (0.25, 0.25), (0.75, 0.75), (0.25, 0.75), and (0.75, 0.25), which puts the samples right in the middle of each texel, which is what you want.
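In general (just the example above restated as a formula, not something from the original answer), for a quad covering $N$ pixels with texcoords running from 0 to 1, fragment $i$ is sampled at its center:

$$s_i = \frac{i + 0.5}{N}, \qquad i = 0, 1, \dots, N - 1$$

For $N = 2$ this gives exactly the 0.25 and 0.75 values above.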
If you offset your texcoords by half a pixel, as in the red example, then the interpolation goes wrong, and you'll end up sampling the texture off-center of the texels.
So, long story short: make sure your pixels line up correctly with your texels (don't draw sprites at non-integer pixel locations), and don't scale sprites by arbitrary amounts.
If the blue square is giving you bad results, can you give an example image, or describe how you're drawing it?
