DirectX 11 indexed drawing with normals

I am working on an OBJ loader for DirectX 11.
In the OBJ format, a square (two triangles) would look like this:
v 0 0 0
v 0 1 0
v 1 1 0
v 1 0 0
f 1 2 3
f 1 3 4
So first the vertex data is given by v, and then the faces by f. So I am just reading the vertices into a vertex buffer and the indices into an index buffer. But now I need to calculate normals for the pixel shader. Can I somehow store normal data for the FACES while using indexed rendering, or do I have to create a vertex buffer without indices? (Because then I could store the normal data per vertex, since each vertex would only be used in one face.)

The usual way is to store the same normal vector in all three vertices of a face. Something like this:
struct Vertex
{
    Vector3 position;
    Vector3 normal;
};

std::vector<Vertex> vertices;
std::vector<uint32_t> indices;

for (each face f)
{
    Vector3 faceNormal = CalculateFaceNormalFromPositions(f); // generate the normal for face `f`
    for (each vertex v of f)
    {
        Vertex vertex;
        vertex.position = LoadPosition(f, v); // load the position from the OBJ based on face index (f) and vertex index (v)
        vertex.normal = faceNormal;
        indices.push_back((uint32_t)vertices.size()); // vertices are duplicated per face corner, so the new vertex's index is just the running count
        vertices.push_back(vertex);
    }
}
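CalculateFaceNormalFromPositions is left abstract above; a minimal sketch of it (my own helper, assuming counter-clockwise winding and the Vector3 type used in this answer) could be:
#include <cmath>

// Cross product of two edge vectors, then normalize.
// p0, p1, p2 are the face's corners in counter-clockwise order;
// swap the operands of the cross product if your winding is clockwise.
Vector3 CalculateFaceNormal(const Vector3& p0, const Vector3& p1, const Vector3& p2)
{
    Vector3 e1 = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    Vector3 e2 = { p2.x - p0.x, p2.y - p0.y, p2.z - p0.z };
    Vector3 n = {
        e1.y * e2.z - e1.z * e2.y,
        e1.z * e2.x - e1.x * e2.z,
        e1.x * e2.y - e1.y * e2.x
    };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}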
Note: typically you will want to use vertex normals instead of face normals, because vertex normals allow smoother, better-looking lighting algorithms to be applied (per-pixel lighting):
for (each face f)
{
    for (each vertex v of f)
    {
        Vertex vertex;
        vertex.position = LoadPosition(f, v);
        vertex.normal = ...precalculated somewhere...;
        vertices.push_back(vertex);
    }
}
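If the asset has no normals at all, a common way to precalculate smooth vertex normals is to average the face normals of every face sharing a position and then renormalize. A rough sketch (my own helper names, reusing CalculateFaceNormal from above; the face layout is an assumption for this example):
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

// faces holds three 0-based position indices per face (layout assumed for this sketch).
std::vector<Vector3> CalculateVertexNormals(const std::vector<Vector3>& positions,
                                            const std::vector<std::array<uint32_t, 3>>& faces)
{
    std::vector<Vector3> normals(positions.size(), Vector3{ 0, 0, 0 });
    for (const auto& f : faces)
    {
        Vector3 n = CalculateFaceNormal(positions[f[0]], positions[f[1]], positions[f[2]]);
        for (uint32_t idx : f) // accumulate the face normal into each corner
        {
            normals[idx].x += n.x;
            normals[idx].y += n.y;
            normals[idx].z += n.z;
        }
    }
    for (auto& n : normals) // renormalize the accumulated sums
    {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}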
Note 2: typically you will want to read pre-calculated normals from the asset file instead of calculating them at runtime:
for (each face f)
{
    for (each vertex v of f)
    {
        Vertex vertex;
        vertex.position = LoadPosition(f, v);
        vertex.normal = LoadNormal(f, v);
        vertices.push_back(vertex);
    }
}
The .obj format allows storing per-vertex normals. Example (a cube found via Google):
# cube.obj
#
g cube
# positions
v 0.0 0.0 0.0
v 0.0 0.0 1.0
v 0.0 1.0 0.0
v 0.0 1.0 1.0
v 1.0 0.0 0.0
v 1.0 0.0 1.0
v 1.0 1.0 0.0
v 1.0 1.0 1.0
# normals
vn 0.0 0.0 1.0
vn 0.0 0.0 -1.0
vn 0.0 1.0 0.0
vn 0.0 -1.0 0.0
vn 1.0 0.0 0.0
vn -1.0 0.0 0.0
# faces: indices of position / texcoord(empty) / normal
f 1//2 7//2 5//2
f 1//2 3//2 7//2
f 1//6 4//6 3//6
f 1//6 2//6 4//6
f 3//3 8//3 7//3
f 3//3 4//3 8//3
f 5//5 7//5 8//5
f 5//5 8//5 6//5
f 1//4 5//4 6//4
f 1//4 6//4 2//4
f 2//1 6//1 8//1
f 2//1 8//1 4//1
Example code in C++ (not tested)
struct Vector3 { float x, y, z; };

struct Face
{
    uint32_t position_ids[3];
    uint32_t normal_ids[3];
};

struct Vertex
{
    Vector3 position;
    Vector3 normal;
};

std::vector<Vertex> vertices;   // Your future vertex buffer
std::vector<uint32_t> indices;  // Your future index buffer

// Note: OBJ indices are 1-based; ParseOBJ is assumed to convert them to 0-based.
void ParseOBJ(std::vector<Vector3>& positions, std::vector<Vector3>& normals, std::vector<Face>& faces) { /*TODO*/ }

void LoadOBJ(const std::wstring& filename, std::vector<Vertex>& vertices, std::vector<uint32_t>& indices)
{
    // After parsing the obj file you will have positions, normals
    // and faces (which contain indices into positions and normals).
    std::vector<Vector3> positions;
    std::vector<Vector3> normals;
    std::vector<Face> faces;
    ParseOBJ(positions, normals, faces);

    for (auto itFace = faces.begin(); itFace != faces.end(); ++itFace) // for each face
    {
        for (uint32_t i = 0; i < 3; ++i) // for each face vertex
        {
            uint32_t position_id = itFace->position_ids[i]; // just for shorter writing later
            uint32_t normal_id   = itFace->normal_ids[i];

            Vertex vertex;
            vertex.position = positions[position_id];
            vertex.normal   = normals[normal_id];

            indices.push_back((uint32_t)vertices.size()); // vertices are duplicated per face corner,
                                                          // so the index is just the running count
            vertices.push_back(vertex);
        }
    }
}
Note that after merging the normal data into the vertex you no longer need the normal indices. The normals thus become non-indexed (and two equal normals can be stored in different vertices, which is a waste of space). Because every face corner becomes its own vertex here, the index buffer ends up simply counting vertices in order; to get real savings from indexed rendering you would deduplicate vertices that share the same position/normal pair.
Of course, the programmable pipeline of modern GPUs allows more exotic approaches:
- create separate buffers for positions, normals, pos_indices and nor_indices
- using the current vertex_id, read in the shader the current position_id and the corresponding position, and the normal_id and the corresponding normal
- you can even generate normals in the shader (then there is no need for a normal_id or normal buffer at all)
- you can assemble your faces in the geometry shader
- ...insert other questionable ideas here =)
With such approaches the rendering system becomes more complex for almost no gain.

Related

Linear Depth to World Position

I have the following fragment and vertex shaders.
HLSL code
// Vertex shader
//-----------------------------------------------------------------------------------
void mainVP(
    float4 position : POSITION,
    out float4 outPos : POSITION,
    out float2 outDepth : TEXCOORD0,
    uniform float4x4 worldViewProj,
    uniform float4 texelOffsets,
    uniform float4 depthRange) // passed as float4(minDepth, maxDepth, depthRange, 1 / depthRange)
{
    outPos = mul(worldViewProj, position);
    outPos.xy += texelOffsets.zw * outPos.w;
    outDepth.x = (outPos.z - depthRange.x) * depthRange.w; // value in [0..1]
    outDepth.y = outPos.w;
}

// Fragment shader
void mainFP(float2 depth : TEXCOORD0, out float4 result : COLOR)
{
    float finalDepth = depth.x;
    result = float4(finalDepth, finalDepth, finalDepth, 1);
}
This shader produces a depth map.
This depth map must then be used to reconstruct the world positions for the depth values. I have searched other posts, but none of them seem to store the depth using the same formula I am using. The only similar post is the following:
Reconstructing world position from linear depth
Therefore, I am having a hard time reconstructing the point using the x and y coordinates from the depth map and the corresponding depth.
I need some help in constructing the shader to get the world view position for a depth at particular texture coordinates.
It doesn't look like you're normalizing your depth. Try this instead. In your VS, do:
outDepth.xy = outPos.zw;
And in your PS to render the depth, you can do:
float finalDepth = depth.x / depth.y;
Here is a function to then extract the view-space position of a particular pixel from your depth texture. I'm assuming you're rendering a screen-aligned quad and performing the position extraction in the pixel shader.
// Function for converting depth to view-space position
// in a deferred pixel shader pass. vTexCoord is a texture
// coordinate for a full-screen quad, such that x=0 is the
// left of the screen, and y=0 is the top of the screen.
float3 VSPositionFromDepth(float2 vTexCoord)
{
    // Get the depth value for this pixel
    float z = tex2D(DepthSampler, vTexCoord);
    // Get x/w and y/w from the viewport position
    float x = vTexCoord.x * 2 - 1;
    float y = (1 - vTexCoord.y) * 2 - 1;
    float4 vProjectedPos = float4(x, y, z, 1.0f);
    // Transform by the inverse projection matrix
    float4 vPositionVS = mul(vProjectedPos, g_matInvProjection);
    // Divide by w to get the view-space position
    return vPositionVS.xyz / vPositionVS.w;
}
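On the application side you also need to supply g_matInvProjection. One possible way to do that with DirectXMath is sketched below; the constant-buffer layout, names, and register slot are my own assumptions, not part of the original shader:
#include <DirectXMath.h>
#include <d3d11.h>
#include <cstring>
using namespace DirectX;

// Hypothetical constant-buffer layout matching `g_matInvProjection` in the shader.
struct DepthCB
{
    XMFLOAT4X4 invProjection;
};

// Assumes `cb` was created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE.
void UpdateInverseProjection(ID3D11DeviceContext* ctx, ID3D11Buffer* cb, FXMMATRIX projection)
{
    XMMATRIX invProj = XMMatrixInverse(nullptr, projection);

    DepthCB data;
    // Transpose if your HLSL side uses the default column-major packing.
    XMStoreFloat4x4(&data.invProjection, XMMatrixTranspose(invProj));

    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(ctx->Map(cb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        memcpy(mapped.pData, &data, sizeof(data));
        ctx->Unmap(cb, 0);
    }
    ctx->PSSetConstantBuffers(0, 1, &cb); // slot 0 is an assumption
}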
For a more advanced approach that reduces the number of calculations involved, but requires using the view frustum and a special way of rendering the screen-aligned quad, see here.

Converting YUV422 to RGB using GPU shader HLSL

I'm considering performing the color space conversion from YUV422 to RGB using HLSL. A four-byte YUYV yields two three-byte RGB values; for example, Y1UY2V gives R1G1B1 (left pixel) and R2G2B2 (right pixel). Given that the texture coordinates in the pixel shader increase gradually across the quad, how can I differentiate between the texture coordinates for the left pixels (i.e. all R1G1B1) and the texture coordinates for the right pixels (i.e. all R2G2B2)? That way I could render all R1G1B1 and all R2G2B2 to a single texture instead of two.
Thanks!
Not sure which version of DirectX you use, but here is the version I use for DX11. Please note that in this case I send the YUV data in a StructuredBuffer, which saves me from dealing with row stride; you can of course apply the same technique sending your YUV data as a texture (with a few small changes to the code below).
Here is the pixel shader code (I assume your render target is the same size as your input image, and that you render a full-screen quad/triangle).
StructuredBuffer<uint> yuy;
int w;
int h;

struct psInput
{
    float4 p : SV_Position;
    float2 uv : TEXCOORD0;
};

float4 PS(psInput input) : SV_Target
{
    // Calculate the pixel location within the buffer (if you use a texture, change the lookup here)
    uint2 xy = input.p.xy;
    uint p = xy.x + (xy.y * w);
    uint pixloc = p / 2;       // one packed uint per pair of pixels (assumes an even image width)
    uint pixdata = yuy[pixloc];

    // pixdata is packed, so use bit shifts to extract the components
    uint v  = (pixdata & 0xff000000) >> 24;
    uint y1 = (pixdata & 0x00ff0000) >> 16;
    uint u  = (pixdata & 0x0000ff00) >> 8;
    uint y0 =  pixdata & 0x000000ff;

    // Check whether you are the left or the right pixel of the pair
    uint y = p % 2 == 0 ? y0 : y1;

    // Convert YUV to RGB
    float cb = u;
    float cr = v;
    float r = y + 1.402 * (cr - 128.0);
    float g = y - 0.344 * (cb - 128.0) - 0.714 * (cr - 128.0);
    float b = y + 1.772 * (cb - 128.0);
    return float4(r / 255.0, g / 255.0, b / 255.0, 1.0); // keep alpha at 1 instead of dividing it too
}
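If you go the StructuredBuffer route, the buffer and its shader resource view can be created on the C++ side roughly like this (a sketch with my own function and variable names; it assumes one packed uint per pair of pixels, as in the shader above):
#include <d3d11.h>
#include <cstdint>

// Creates an immutable structured buffer holding the packed YUYV data
// (one 32-bit element per pair of pixels) plus an SRV to bind it as `yuy`.
HRESULT CreateYuyBuffer(ID3D11Device* device, const uint32_t* yuyData, UINT pairCount,
                        ID3D11Buffer** buffer, ID3D11ShaderResourceView** srv)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = pairCount * sizeof(uint32_t);
    desc.Usage = D3D11_USAGE_IMMUTABLE;           // use DYNAMIC + Map if you update every frame
    desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    desc.StructureByteStride = sizeof(uint32_t);

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = yuyData;

    HRESULT hr = device->CreateBuffer(&desc, &init, buffer);
    if (FAILED(hr)) return hr;

    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = DXGI_FORMAT_UNKNOWN;         // required for structured buffers
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
    srvDesc.Buffer.FirstElement = 0;
    srvDesc.Buffer.NumElements = pairCount;

    return device->CreateShaderResourceView(*buffer, &srvDesc, srv);
}
Bind the resulting view with PSSetShaderResources to whichever slot `yuy` is bound to before drawing the full-screen quad.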
Hope that helps.

How can I repeat my texture in DX

There is a handy feature in the three.js 3D library: you can set the sampler to repeat mode and set the repeat attribute to whatever values you like; for example, (3, 5) means the texture will repeat 3 times horizontally and 5 times vertically. But now I'm using DirectX and I cannot find a good solution to this problem. Note that the UV coordinates of the vertices still range from 0 to 1, and I don't want to change my HLSL code, because I want a programmable solution for this. Thanks very much!
Edit: Presume I already have a cube model, and the texture coordinates of its vertices are between 0 and 1. If I use wrap mode or clamp mode for sampling textures, it's all OK now. But I want to repeat a texture on one of its faces, and for that I first need to switch to wrap mode; that much I already know. Then I would have to edit my model so the texture coordinates range from 0 to 3. What if I don't want to change my model? So far I have come up with one way: add a variable to the pixel shader representing how many times the map repeats and multiply the coordinate by this factor when sampling. Not a graceful solution, I think emmmm…
Since you've edited your Question, there is another Answer to your problem:
From what I understood, you have a face with UVs like so:
0,1 1,1
-------------
| |
| |
| |
-------------
0,0 1,0
But you want the texture repeated 3 times (for example) instead of once
(without changing the original model).
Multiple solutions here:
You could do it when updating your vertex buffer (if you update it anyway):
D3D11_MAPPED_SUBRESOURCE resource;
HRESULT hResult = D3DDeviceContext->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &resource);
if (hResult != S_OK) return false;

YourVertexFormat* ptr = (YourVertexFormat*)resource.pData;
for (int i = 0; i < vertexCount; i++)
{
    ptr[i] = vertices[i];
    ptr[i].uv.x *= multiplyX; // in your case 3
    ptr[i].uv.y *= multiplyY; // in your case 5
}
D3DDeviceContext->Unmap(vertexBuffer, 0);
But if you don't need to update the buffer anyway, I wouldn't recommend this, because it is terribly slow.
A faster way is to use the vertex shader:
cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};

struct VertexInputType
{
    float4 position : POSITION0;
    float2 uv : TEXCOORD0;
    // ...
};

struct PixelInputType
{
    float4 position : SV_POSITION;
    float2 uv : TEXCOORD0;
    // ...
};

PixelInputType main(VertexInputType input)
{
    input.position.w = 1.0f;

    PixelInputType output;
    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);
This is what you basically need:
    output.uv = input.uv * 3; // 3x3
Or more advanced:
    output.uv = float2(input.uv.x * 3, input.uv.y * 5);
    // ...
    return output;
}
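If you want the repeat count to be configurable instead of hardcoded (which is what the question asks for), you could put it in a constant buffer. Here is a sketch of the C++ side; the buffer layout, register slot, and names (RepeatCB, uvRepeat) are my own assumptions:
#include <d3d11.h>
#include <DirectXMath.h>

// Hypothetical constant buffer matching e.g.
//   cbuffer RepeatCB : register(b1) { float2 uvRepeat; float2 pad; };
// in the vertex shader, which would then do: output.uv = input.uv * uvRepeat;
struct RepeatCB
{
    DirectX::XMFLOAT2 uvRepeat; // (3, 5) repeats 3 times horizontally, 5 times vertically
    DirectX::XMFLOAT2 padding;  // constant buffers are padded to 16-byte multiples
};

ID3D11Buffer* CreateRepeatCB(ID3D11Device* device, float repeatX, float repeatY)
{
    RepeatCB data = { { repeatX, repeatY }, { 0.0f, 0.0f } };

    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth = sizeof(RepeatCB);
    desc.Usage = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = &data;

    ID3D11Buffer* cb = nullptr;
    if (FAILED(device->CreateBuffer(&desc, &init, &cb)))
        return nullptr;
    return cb;
}

// Bind it alongside the matrix buffer before drawing, e.g.:
// context->VSSetConstantBuffers(1, 1, &repeatCB);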
I would recommend the vertex shader solution, because it's fast, and in DirectX you run vertex shaders anyway, so it's not as expensive as the buffer-update solution...
Hope that helped solving your problems :)
You basically want to create a sampler state like so:
ID3D11SamplerState* m_sampleState;
D3D11_SAMPLER_DESC samplerDesc;
samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
samplerDesc.MipLODBias = 0.0f;
samplerDesc.MaxAnisotropy = 1;
samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
samplerDesc.BorderColor[0] = 0;
samplerDesc.BorderColor[1] = 0;
samplerDesc.BorderColor[2] = 0;
samplerDesc.BorderColor[3] = 0;
samplerDesc.MinLOD = 0;
samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;

// Create the texture sampler state.
HRESULT result = ifDEVICE->ifDX11->getD3DDevice()->CreateSamplerState(&samplerDesc, &m_sampleState);
And when you are setting your shader constants, call this:
ifDEVICE->ifDX11->getD3DDeviceContext()->PSSetSamplers(0, 1, &m_sampleState);
Then you can write your pixel shader like this:
Texture2D shaderTexture;
SamplerState SampleType;
...
float4 main(PixelInputType input) : SV_TARGET
{
    float4 textureColor = shaderTexture.Sample(SampleType, input.uv);
    ...
}
Hope that helps...

HLSL Modify Tessellation shader to make equilateral triangles?

Details:
I'm in the process of procedural planet generation; so far I have done the dynamic LOD work, but my current software algorithm is very, very slow. I decided to do it using DX11's new tessellation features instead.
Currently my sphere is a subdivided icosahedron. (20 sides all equilateral triangles)
Back when I was subdividing with my software algorithm, one triangle would be split into four children across the midpoints of the parent, forming the Hyrule symbol each time... like this: http://puu.sh/1xFIx
As you can see, each subdivided triangle produced more and more equilateral triangles, i.e. each one was exactly the same shape.
But now that I am using the GPU to tessellate in HLSL, the result is definitely not
what I am looking for: http://puu.sh/1xFx7
Questions:
Is there anything I can do in the Hull and Domain shaders to change the tessellation
so that it subdivides into sets of equilateral triangles like the first image?
Should I be using the geometry shader for something like this? If so, would it be
slower than the tessellator?
I tried using the tessellation shader, but I encountered a problem: the domain shader only passes the uv coordinate (SV_DomainLocation) and the input patch for positioning the vertices. When the domain location for a vertex is (0.3, 0.3, 0.3) (a center vertex), it is impossible to know the correct position, because you need information about the other vertices, or an iteration index (x, y) that the domain shader stage does not provide.
Because of this problem I wrote the code in a geometry shader. This shader is quite limited for tessellation, because a single invocation cannot output more than 1024 scalars (32-bit values) in shader model 5.0. I implemented the calculation of vertex positions using the uv (like SV_DomainLocation), but this only tessellates the triangles; you must use part of your own code to calculate the added positions in the centers of the triangles to create the precise final result.
Here is the code for equilateral-triangle tessellation:
// required for the fixed-size arrays below
#define MAX_ITERATIONS 5

// Assumptions: VS_OUT contains (at least) `float4 pos : SV_Position`,
// and `tess` is the desired subdivision level, e.g. provided via a constant buffer.

void DrawTriangle(float4 p0, float4 p1, float4 p2, inout TriangleStream<VS_OUT> stream)
{
    VS_OUT v0;
    v0.pos = p0;
    stream.Append(v0);

    VS_OUT v1;
    v1.pos = p1;
    stream.Append(v1);

    VS_OUT v2;
    v2.pos = p2;
    stream.Append(v2);

    stream.RestartStrip();
}

[maxvertexcount(128)] // DirectX rule: maxvertexcount * (32-bit values per VS_OUT) <= 1024
void gs(triangle VS_OUT input[3], inout TriangleStream<VS_OUT> stream)
{
    int itc = min(tess, MAX_ITERATIONS);
    float fitc = itc;

    // One extra slot because the row index y runs from 0 to itc inclusive.
    float4 past_pos[MAX_ITERATIONS + 1];
    float4 array_pass[MAX_ITERATIONS + 1];
    for (int pi = 0; pi <= MAX_ITERATIONS; pi++)
    {
        past_pos[pi] = float4(0, 0, 0, 0);
        array_pass[pi] = float4(0, 0, 0, 0);
    }

    // -------------------------------------
    // Tessellation kernel for the control points
    for (int x = 0; x <= itc; x++)
    {
        float4 last;
        for (int y = 0; y <= x; y++)
        {
            float2 seg = float2(x / fitc, y / fitc);
            float3 uv;
            uv.x = 1 - seg.x;
            uv.z = seg.y;
            uv.y = 1 - (uv.x + uv.z);

            // ---------------------------------------
            // Domain stage
            // uv   domain location
            // x, y iteration index
            float4 fpos = input[0].pos * uv.x;
            fpos += input[1].pos * uv.y;
            fpos += input[2].pos * uv.z;

            if (x > 0 && y > 0)
            {
                DrawTriangle(past_pos[y - 1], last, fpos, stream);
                if (y < x)
                {
                    // add the adjacent (upside-down) triangle
                    DrawTriangle(past_pos[y - 1], fpos, past_pos[y], stream);
                }
            }
            array_pass[y] = fpos;
            last = fpos;
        }
        for (int i = 0; i <= MAX_ITERATIONS; i++)
        {
            past_pos[i] = array_pass[i];
        }
    }
}

fullscreen quad in pixel shader has screen coordinates?

I have a 640x480 render target (it's the main backbuffer).
I'm passing a fullscreen quad to the vertex shader; the quad has coordinates between [-1, 1] for both X and Y, which means I just pass the coordinates through to the pixel shader with no calculation:
struct VSInput
{
    float4 Position : SV_POSITION0;
};

struct VSOutput
{
    float4 Position : SV_POSITION0;
};

VSOutput VS(VSInput input)
{
    VSOutput output = (VSOutput)0;
    output.Position = input.Position;
    return output;
}
But in the pixel shader, the x and y coordinates of each fragment are in screen space (0 < x < 640 and 0 < y < 480).
Why is that? I always thought the coordinates would get interpolated on their way to the pixel shader and stay between -1 and 1, especially in this case, because I'm passing coordinates between -1 and 1 hardcoded in the vertex shader!
But the truth is, this pixel shader works:
float x = input.Position.x;

if (x < 200)
    output.Diffuse = float4(1.0, 0.0, 0.0, 1.0);
else if (x > 400)
    output.Diffuse = float4(0.0, 0.0, 1.0, 1.0);
else
    output.Diffuse = float4(0.0, 1.0, 0.0, 1.0);

return output;
It outputs 3 color stripes in my rendering window, but if I change the values from screen space (the 200 and 400 in the code above) to [-1, 1] space and use something like if (x < 0.5), it won't work.
I already tried
float x = input.Position.x / input.Position.w;
because I read somewhere that that way I could get values between -1 and 1, but it doesn't work either.
Thanks in advance.
From MSDN, on the Semantics page, about SV_POSITION:
When used in a pixel shader, SV_Position describes the pixel location.
So you are seeing expected behavior.
The best solution is to pass screen space coordinates as an additional parameter. I like to use this "full-screen-triangle" vertex shader:
struct VSQuadOut
{
    float4 position : SV_Position;
    float2 uv : TexCoord;
};

// Outputs a full-screen triangle with screen-space texture coordinates.
// Input: three empty vertices.
VSQuadOut VSQuad(uint vertexID : SV_VertexID)
{
    VSQuadOut result;
    result.uv = float2((vertexID << 1) & 2, vertexID & 2);
    result.position = float4(result.uv * float2(2.0f, -2.0f) + float2(-1.0f, 1.0f), 0.0f, 1.0f);
    return result;
}
(Original source: Full screen quad without vertex buffer?)
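For reference, drawing with that vertex shader needs no vertex or index buffer at all. A minimal sketch of the C++ draw call (assuming the shaders and render target are already bound) might be:
#include <d3d11.h>

// Issues the three vertices for the full-screen triangle; SV_VertexID in the
// vertex shader generates the positions/uvs, so no buffers or input layout are bound.
void DrawFullScreenTriangle(ID3D11DeviceContext* ctx)
{
    ctx->IASetInputLayout(nullptr);
    ctx->IASetVertexBuffers(0, 0, nullptr, nullptr, nullptr);
    ctx->IASetIndexBuffer(nullptr, DXGI_FORMAT_UNKNOWN, 0);
    ctx->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    ctx->Draw(3, 0);
}
The three SV_VertexID values 0, 1, and 2 are enough for the shader to generate a triangle that covers the whole screen.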
