I'm using VS2015 and studying DirectX 11.
I'll show you the code first.
cbuffer cbperobject {
    float4x4 gWorldViewProj;
};

struct VertexIn {
    float3 Pos : POSITION;
    float4 Color : COLOR;
};

struct VertexOut {
    float4 PosH : SV_POSITION;
    float4 Color : COLOR;
};

VertexOut main( VertexIn vin )
{
    VertexOut vOut;
    vOut.PosH = mul(float4(vin.Pos, 1.0f), gWorldViewProj);
    vOut.Color = vin.Color;
    return vOut;
}
This is my vertex shader code; I mostly copied it from the internet.
HRESULT result;
D3D11_MAPPED_SUBRESOURCE mappedResource;
XMMATRIX* dataPtr;
UINT bufferNumber;

// Lock the constant buffer so it can be written to.
result = mD3dDContext->Map(contantBuff, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if (FAILED(result))
{
    return false;
}

// Get a pointer to the data in the constant buffer.
dataPtr = (XMMATRIX*)mappedResource.pData;

// Copy the matrices into the constant buffer.
XMMATRIX world = XMLoadFloat4x4(&mWorld); // world transform of the vertices
XMMATRIX view = XMLoadFloat4x4(&mView);   // camera
XMMATRIX proj = XMLoadFloat4x4(&mProj);   // orthographic projection
XMMATRIX worldViewProj = world*view*proj;

// Transpose the matrix to prepare it for the shader.
worldViewProj = XMMatrixTranspose(worldViewProj);
*dataPtr = worldViewProj;

// Unlock the constant buffer.
mD3dDContext->Unmap(contantBuff, 0);

// Set the position of the constant buffer in the vertex shader.
bufferNumber = 0;

// Finally, set the constant buffer in the vertex shader with the updated values.
mD3dDContext->VSSetConstantBuffers(bufferNumber, 1, &contantBuff);
return true;
This is my code for setting the constant buffer for the shader.
First, what is the difference between the POSITION and SV_POSITION semantics? Also, can you recommend a good HLSL tutorial book? I'm Korean and live in Korea, and there are no good books available here; I don't know why, but all the good ones are out of print. It's a frustrating place to study programming.
Second, why do I have to transpose my camera matrix (the world-view-projection matrix) before the CPU hands the data to the GPU? The shader computes vertex * matrix = transformed vertex, so why does the matrix need to be transposed?
Well, POSITION is a generic input semantic: it tells the pipeline that the values are vertex positions in model space, matched against your input layout. SV_POSITION is a system-value semantic: the rasterizer treats the value written to it as the clip-space position and uses it to work out where the pixels land on screen (after the perspective divide the visible range is roughly -1 to 1). Look at this https://msdn.microsoft.com/en-us/library/windows/desktop/bb509647(v=vs.85).aspx
Well, it seems you could use some linear algebra lessons, mate; matrix math is the keystone of 3D graphics, and all the usual transformations (translation, rotation, scaling) are expressed as matrices. One correction, though: the transpose is not the inverse in general; only for an orthogonal matrix (a pure rotation, for example) does the transpose equal the inverse. The transpose in your code exists for a more mundane reason: DirectXMath stores matrices row-major, while HLSL constant buffers default to column-major, so you either transpose on the CPU before uploading or declare the matrix row_major in the shader. For the math, grab almost any linear algebra book; for the rendering API (OpenGL or DirectX, never mind, they are just APIs) any book or the online documentation will do, and you can look on amazon.com. Happy graphics coding, pal ;).
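As a quick illustration (a minimal sketch reusing the asker's cbuffer; this is an alternative to the CPU-side transpose, not a required change), marking the matrix row_major in HLSL lets you upload the DirectXMath matrix without calling XMMatrixTranspose():
cbuffer cbperobject
{
    row_major float4x4 gWorldViewProj; // matches DirectXMath's row-major storage
};
// The shader body itself is unchanged:
// vOut.PosH = mul(float4(vin.Pos, 1.0f), gWorldViewProj);
Either approach works; just pick one convention and stick with it.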
I have an array of SCNVector3, one for each vertex.
var terrainArray = [SCNVector3]()
I need to provide this data per vertex in my fragment shader. Something like this:
struct TerrainVertexInput
{
    float3 position [[attribute(SCNVertexSemanticPosition)]];
    float4 color [[attribute(SCNVertexSemanticColor)]];
};

struct TerrainVertexOutput
{
    float4 position [[position]];
    float3 terrain;
    float4 color;
};

vertex TerrainVertexOutput terrainVertex(TerrainVertexInput in [[stage_in]],
                                         constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                         constant MyNodeBuffer& scn_node [[buffer(1)]],
                                         constant float3 terrain [[buffer(2)]])
{
    TerrainVertexOutput v;
    v.position = scn_node.modelViewProjectionTransform * float4(in.position, 1.0);
    v.terrain = terrain;
    v.color = in.color;
    return v;
}
As I understand it, I need to create a Data object with the array data and provide it to the program with setValue(_:forKey:), but I'm not sure whether the vertex function will get the right element for each vertex.
How do I do this correctly?
You're on the right track, but you don't want to use SCNVector3 for data you're passing into a Metal shader. SceneKit's vector types have components of type CGFloat, the size of which is platform-dependent.
Instead, your data should use one of the simd vector types. In Swift and Metal, that means float3 or float4. Note that float3 actually occupies 16 bytes of space; there's a dummy element at the end for alignment purposes. If you want to pack your data tightly, using exactly 3 floats per vertex, you can type your buffer in Metal as packed_float3 and write 3 contiguous floats into your data buffer for each vertex. There is no three-element packed float vector type in Swift.
There are many ways to copy an array of SCNVector3 into a suitably-typed data buffer. Here's one:
// Allocate enough memory to store three floats per vertex, ensuring we free it later
let terrainBuffer = UnsafeMutableBufferPointer<Float>.allocate(capacity: terrainArray.count * 3)
defer {
terrainBuffer.deallocate()
}
// Copy each element of each vector into the buffer
terrainArray.enumerated().forEach { i, v in
terrainBuffer[i * 3 + 0] = Float(v.x)
terrainBuffer[i * 3 + 1] = Float(v.y)
terrainBuffer[i * 3 + 2] = Float(v.z)
}
// Copy the buffer data into a Data object, as expected by SceneKit
let terrainData = Data(buffer: terrainBuffer)
You can then use setValue(_:forKey:) on your geometry or material:
material.setValue(terrainData, forKey: "terrain")
Rather than taking a single float3 as a parameter in your vertex function, instead take a pointer to packed_float3 and index into it according to the vertex ID:
vertex TerrainVertexOutput terrainVertex(TerrainVertexInput in [[stage_in]],
                                         constant SCNSceneBuffer& scn_frame [[buffer(0)]],
                                         constant MyNodeBuffer& scn_node [[buffer(1)]],
                                         constant packed_float3 *terrain [[buffer(2)]],
                                         uint vid [[vertex_id]]) {
    // ...
    v.terrain = terrain[vid];
    // ...
}
This assumes an exact correspondence between vertices in your geometry and terrain data points. Rather than using the vertex ID directly, you can of course do whatever sort of fancy indexing you want to look up the terrain data for a given vertex.
I am currently working on a multi-textured terrain and I have problems with the Sample function of Texture2DArray.
In my example, I use a Texture2DArray to store a set of different terrain textures, e.g. grass, sand, asphalt, etc. Each of my vertices stores a texture coordinate (UV coordinate) and an index of the texture I want to use. So, if the index is 0, I use the first texture; if the index is 1, I use the second texture, and so on. This works fine as long as the index is a whole number (0, 1, ...). However, it fails if the index is fractional (like 1.5f).
In order to look for the problem, I reduced my entire pixel shader to this:
Texture2DArray DiffuseTextures : register(t0);
Texture2DArray NormalTextures : register(t1);
Texture2DArray EmissiveTextures : register(t2);
Texture2DArray SpecularTextures : register(t3);
SamplerState Sampler : register(s0);
struct PS_IN
{
    float4 pos : SV_POSITION;
    float3 nor : NORMAL;
    float3 tan : TANGENT;
    float3 bin : BINORMAL;
    float4 col : COLOR;
    float4 TextureIndices : COLOR1;
    float4 tra : COLOR2;
    float2 TextureUV : TEXCOORD0;
};

float4 PS(PS_IN input) : SV_Target
{
    float4 texCol = DiffuseTextures.Sample(Sampler, float3(input.TextureUV, input.TextureIndices.r));
    return texCol;
}
The following image shows the result of a sample scene on the left side. As you can see, there is a hard border between the used textures. There is no form of interpolation.
In order to check my texture indices, I changed my pixel shader from above by returning the texture indices as a color:
return float4(input.TextureIndices.r, input.TextureIndices.r, input.TextureIndices.r, 1.0f);
The result can be seen on the right side of the image. The texture indices are correct, since they range in the interval [0, 1] and you can clearly see the interpolation at the border of the area. However, my sampled texture does not show any form of interpolation.
Since my pixel shader is pretty simple, I wonder what causes this behaviour. Is there any setting in DirectX responsible for this?
I use DirectX 11, pixel shader ps_5_0 (I also tested with ps_4_0) and I use DDS textures (BC3 compression).
Edit
This is the sampler I am using:
SharpDX.Direct3D11.SamplerStateDescription samplerStateDescription = new SharpDX.Direct3D11.SamplerStateDescription()
{
    AddressU = SharpDX.Direct3D11.TextureAddressMode.Wrap,
    AddressV = SharpDX.Direct3D11.TextureAddressMode.Wrap,
    AddressW = SharpDX.Direct3D11.TextureAddressMode.Wrap,
    Filter = SharpDX.Direct3D11.Filter.MinMagMipLinear
};
SharpDX.Direct3D11.SamplerState samplerState = new SharpDX.Direct3D11.SamplerState(_device, samplerStateDescription);
_deviceContext.PixelShader.SetSampler(0, samplerState);
Solution
I made a function using the code presented by catflier for getting a texture color:
float4 GetTextureColor(Texture2DArray textureArray, float2 textureUV, float textureIndex)
{
    float tid = textureIndex;
    int id = (int)tid;
    float l = frac(tid);
    float4 texCol1 = textureArray.Sample(Sampler, float3(textureUV, id));
    float4 texCol2 = textureArray.Sample(Sampler, float3(textureUV, id + 1));
    return lerp(texCol1, texCol2, l);
}
This way, I can get the desired texture color for all texture types (diffuse, specular, emissive, ...) with a simple function call:
float4 texCol = GetTextureColor(DiffuseTextures, input.TextureUV, input.TextureIndices.r);
float4 bumpMap = GetTextureColor(NormalTextures, input.TextureUV, input.TextureIndices.g);
float4 emiCol = GetTextureColor(EmissiveTextures, input.TextureUV, input.TextureIndices.b);
float4 speCol = GetTextureColor(SpecularTextures, input.TextureUV, input.TextureIndices.a);
The result is as smooth as I wanted it to be. :-)
Texture arrays do not sample across slices, so technically this is the expected result.
If you want to interpolate between slices (e.g. 1.5f gives you "half" of the second texture and "half" of the third texture), you can use a Texture3D instead, which allows this (but costs a bit more, since it performs trilinear filtering).
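A minimal sketch of that Texture3D variant (the resource names and the slice count here are assumptions; the fractional index is mapped onto the volume's w coordinate):
Texture3D DiffuseVolume : register(t0); // the terrain textures stacked along w
SamplerState Sampler : register(s0);

static const float SliceCount = 4.0f; // assumed number of terrain textures

float4 SampleVolume(float2 uv, float textureIndex)
{
    // Sample at the centre of the slice so trilinear filtering blends
    // adjacent slices when the index is fractional.
    float w = (textureIndex + 0.5f) / SliceCount;
    return DiffuseVolume.Sample(Sampler, float3(uv, w));
}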
Otherwise, you can perform your sampling this way:
float4 PS(PS_IN input) : SV_Target
{
    float tid = input.TextureIndices.r;
    int id = (int)tid;
    float l = frac(tid); // lerp amount
    float4 texCol1 = DiffuseTextures.Sample(Sampler, float3(input.TextureUV, id));
    float4 texCol2 = DiffuseTextures.Sample(Sampler, float3(input.TextureUV, id + 1));
    return lerp(texCol1, texCol2, l);
}
Please note that this technique is also more flexible, since you can provide non-adjacent slices as input (so you can lerp between slice 2 and slice 23, for example), and you can use a different blend mode by replacing lerp with some other function.
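For example, a sketch of that idea (how the two slice indices and the blend weight reach the shader is an assumption; here they are simply passed in as parameters, and the texture array and sampler are the ones declared above):
float4 BlendTwoSlices(float2 uv, float sliceA, float sliceB, float weight)
{
    float4 colA = DiffuseTextures.Sample(Sampler, float3(uv, sliceA));
    float4 colB = DiffuseTextures.Sample(Sampler, float3(uv, sliceB));
    // Any blend can replace lerp, e.g. a smoothstep-shaped weight.
    return lerp(colA, colB, saturate(weight));
}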
I'm using the Rastertek framework for terrain generation. I've got the terrain rendered from the vertex shader, but I don't know how to calculate the normals in the shader. There's a function call in one of the classes that generates the normals from the terrain, but it only works if the terrain was generated on the CPU. Here's the code for the vertex shader I'm using:
////////////////////////////////////////////////////////////////////////////////
// Filename: terrain.vs
////////////////////////////////////////////////////////////////////////////////
#include "terrain.fx"
/////////////
// GLOBALS //
/////////////
cbuffer MatrixBuffer
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};

//////////////
// TYPEDEFS //
//////////////
struct VertexInputType
{
    float4 position : POSITION;
    float3 normal : NORMAL;
};

struct PixelInputType
{
    float4 position : SV_POSITION;
    float3 normal : NORMAL;
};

////////////////////////////////////////////////////////////////////////////////
// Vertex Shader
////////////////////////////////////////////////////////////////////////////////
PixelInputType TerrainVertexShader(VertexInputType input)
{
    PixelInputType output;

    input.position.y = input.position.y + terrain(input.position.x, input.position.z);

    // Change the position vector to be 4 units for proper matrix calculations.
    input.position.w = 1.0f;

    // Calculate the position of the vertex against the world, view, and projection matrices.
    output.position = mul(input.position, worldMatrix);
    output.position = mul(output.position, viewMatrix);
    output.position = mul(output.position, projectionMatrix);

    // Calculate the normal vector against the world matrix only.
    output.normal = mul(input.normal, (float3x3)worldMatrix);

    // Normalize the normal vector.
    output.normal = normalize(output.normal);

    return output;
}
Your big problem with generating normals in a shader is that you need knowledge of the surrounding vertices. That is something you can get in a geometry shader but not in a vertex shader. A simple way to calculate a vertex normal is to compute the face normal of every polygon that touches that vertex (take the vectors formed by two edges and cross them), then add those face normals up and normalise. So if you don't have access to a geometry shader, the only real solution is to do it on the CPU. Even then this is not the best way to calculate vertex normals; you may find a more complex algorithm gives better results, and yet more problems to solve. So yes: CPU or geometry shader, those are basically your options.
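To illustrate just the cross-product step, here is a minimal geometry-shader sketch (not the poster's code, and it only produces flat per-face normals rather than smoothed vertex normals; the struct and function names are assumptions):
struct GSInput  { float4 posH : SV_POSITION; float3 posW : TEXCOORD0; };
struct GSOutput { float4 posH : SV_POSITION; float3 normal : NORMAL; };

[maxvertexcount(3)]
void FaceNormalGS(triangle GSInput tri[3], inout TriangleStream<GSOutput> stream)
{
    // Two edges of the triangle in world space; their cross product is the face normal.
    float3 e0 = tri[1].posW - tri[0].posW;
    float3 e1 = tri[2].posW - tri[0].posW;
    float3 n  = normalize(cross(e0, e1));

    for (int i = 0; i < 3; ++i)
    {
        GSOutput o;
        o.posH   = tri[i].posH;
        o.normal = n;
        stream.Append(o);
    }
    stream.RestartStrip();
}
Smoothed vertex normals still require gathering every face that touches a vertex, which is why the CPU (or a more involved approach) is usually the practical choice.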
Hi everyone. I started re-coding my engine to convert it to DirectX 11. I'm now trying to get the basics working, but this error is really stopping me.
I created a basic shader, a simple dot product of the normal and the view. I got it to compile without errors, but it doesn't work.
It just totally deforms the input mesh. I started debugging in VS2012 and found out that the pixel shader was getting all NaNs as input. I attached two screenshots and the shader code; if someone can provide any ideas, it would be really appreciated.
Vertex Shader
//----------------------------------------------------------------------------
// Constant Buffer Variables
//----------------------------------------------------------------------------
cbuffer ConstantBuffer : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
    //float3 CameraPos;
    float Power;
}
//---------------------------------------------------------------------------
struct VS_INPUT
{
    float4 Pos : POSITION;
    float3 Normal : NORMAL;
};

struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
    float3 Normal : TEXCOORD0;
};
//--------------------------------------------------------------------------
// Vertex Shader
//-------------------------------------------------------------------------------
VS_OUTPUT VS( VS_INPUT input)
{
    VS_OUTPUT output = (VS_OUTPUT)0;
    output.Pos = mul( input.Pos, World );
    output.Pos = mul( output.Pos, View );
    output.Pos = mul( output.Pos, Projection );
    output.Normal = mul( float4( input.Normal, 1 ), World ).xyz;
    //output.wNormal = input.Normal;
    return output;
}
And here the Pixel Shader
//------------------------------------------------------------------------------
// Constant Buffer Variables
//------------------------------------------------------------------------------
cbuffer ConstantBuffer : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
    float Power;
}
//------------------------------------------------------------------------------
struct VS_OUTPUT
{
    float4 Pos : SV_POSITION;
    float3 wNormal : TEXCOORD0;
};
//-----------------------------------------------------------------------------
// Pixel Shader
//-----------------------------------------------------------------------------
float4 PS( VS_OUTPUT input) : SV_Target
{
    //return pow(dot(input.wNormal,float3(0,0,0)),Power); // fixed camera, just for now
    return float4(0.1,0.6,0.1,1);
}
And lastly, I created an XML file structure for my shaders that I then parse. I don't know if it's relevant, but:
<?xml version="1.0" encoding="utf-8"?>
<vs path = "D:\\Documentos\\Visual Studio 2012\\Projects\\Fusion Engine\\Tests\\ConstantLighting_VS.hlsl" name ="VS" target = "vs_4_0">
</vs>
<ps path = "D:\\Documentos\\Visual Studio 2012\\Projects\\Fusion Engine\\Tests\\ConstantLighting_PS.hlsl" name ="PS" target = "ps_4_0">
<val1 type = "scalar" value = "0.456645" name = "Power"/>
</ps>
You could just use #pragma pack_matrix(row_major) in your shader instead of transposing the matrices on the CPU side.
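For instance (a minimal sketch applying the pragma to the constant buffer above; everything not shown stays the same):
#pragma pack_matrix(row_major) // all matrices below are now stored row-major

cbuffer ConstantBuffer : register( b0 )
{
    matrix World;
    matrix View;
    matrix Projection;
    float Power;
}
With this in place you would upload the XMMATRIX data without calling XMMatrixTranspose() first.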
Try simply concatenating the world, view, and projection matrices into one before calling the vertex function. Hopefully all of those components were okay and not, say, all zeros? You might also want to explicitly set the matrix type in your HLSL code as "matrix<float, 4, 4>", or just declare the transform(s) as "float4x4".
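A sketch of that suggestion (assuming the combined matrix is built on the CPU and, as discussed above, transposed before upload; the normal handling is left out here):
cbuffer ConstantBuffer : register( b0 )
{
    float4x4 WorldViewProjection; // World * View * Projection, computed per object on the CPU
    float Power;
}

VS_OUTPUT VS( VS_INPUT input )
{
    VS_OUTPUT output = (VS_OUTPUT)0;
    output.Pos = mul( input.Pos, WorldViewProjection ); // one multiply instead of three
    return output;
}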
OK, I solved it. You MUST use DirectX::XMMatrixTranspose() when you pass any matrix to the shader.
Tricky DirectX ;)
Only in DirectX 11 do you have to transpose the matrices before sending them to the shader; they changed the way structures are kept and used in shaders.
I have mapped some values into the alpha channel of my texture. I actually use my texture as a 2D array. What I need is a way to read the alpha value of the map at a position such as [4][5] (representing x and y).
I need the returned value available in my pixel shader. Is there any way to do this?
I use DX9.
Thanks in advance!
Do you want to use the texel at [4][5] (x, y) for your entire pixel shader?
If that is your question, you could just precalculate that coordinate in the vertex shader and pass it along with every vertex, then sample with those UV coordinates. That way it won't get interpolated (or it will, but there is only one value to interpolate with).
Other than that, you probably have to specify a bit more about what you are trying to achieve.
What are you using it for? When does it occur, and what sort of mesh are you using it on?
Texture2DArray is a shader model 4 thing; I don't believe you're using it on DX9.
If you are using shader model 4, then just use the Load function with integer texel coordinates, e.g. Load(int3(4, 5, 0)).
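A minimal sketch of that shader model 4 route (the resource name is an assumption; Load takes int3(x, y, mipLevel) and does no filtering):
Texture2D AlphaMap : register(t0);

float ReadAlphaAt45()
{
    // Read the texel at x = 4, y = 5 on mip level 0 and return its alpha channel.
    return AlphaMap.Load(int3(4, 5, 0)).a;
}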
Otherwise, for SM 1/2/3, you can put the numbers you want, e.g. 4.0f and 5.0f, into your vertices as ordinary texcoord data, then have the pixel shader scale them by the size of the texture.
// Globals assumed to be set by the application.
float4x4 MatWorld;
float4x4 MatView;
float4x4 MatProj;
sampler2D mySampler;

struct VertexInput {
    float4 pos : POSITION;
    float2 uv : TEXCOORD0; // 0.0, 1.0, 2.0, 3.0, 4.0 etc.
};

struct PixelInput {
    float4 position : POSITION;
    float2 uv : TEXCOORD0;
};

PixelInput vsTex(VertexInput vtx)
{
    PixelInput output;
    float4 pos = vtx.pos;
    output.position = mul(pos, MatWorld);
    output.position = mul(output.position, MatView);
    output.position = mul(output.position, MatProj);
    output.uv = vtx.uv;
    return output;
}

float4 PixelShader(PixelInput input) : COLOR0
{
    // Scale the integer texel coordinates down to [0, 1] UV space before sampling.
    float2 coords = input.uv / float2(TEX_WIDTH, TEX_HEIGHT);
    return tex2D(mySampler, coords);
}
Here TEX_WIDTH and TEX_HEIGHT are passed in via the 'defines' parameter of D3DXCompileShader.
Or: just compute 4.0f/tex_width and 5.0f/tex_height in software and pass those numbers (which will be between 0.0f and 1.0f) through to the pixel shader.
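A minimal sketch of that last variant (the uniform and function names are assumptions; the application computes the normalized coordinate once and the pixel shader just samples with it):
float2 LookupUV; // set from the application, e.g. (4.0f / texWidth, 5.0f / texHeight)

float4 psTexFixedUV(PixelInput input) : COLOR0
{
    // Every pixel reads the same pre-normalized texel coordinate.
    return tex2D(mySampler, LookupUV);
}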