I'm trying to read 3D models which were created for a DirectX application, and which are defined in the following way:
In the file header, the Flexible Vertex Format (FVF) of the mesh is given (in practice, I have seen various combinations of D3DFVF_{XYZ,DIFFUSE,NORMAL,TEX1,TEX2} in the meshes I tested).
Then, n vertices are given in a linear pattern, with the fields present according to the FVF.
However, I do not know the order of these fields. The logic would be that it is defined somewhere in the DirectX documentation, but I was unable to find it. For example, which of these two structures is correct for FVF = D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_NORMAL (C syntax, but this problem applies to every language)?
// This one?
struct vertex1
{
    D3DVERTEX pos;
    DWORD color;
    D3DVERTEX normal;
};

// Or this one?
struct vertex2
{
    D3DVERTEX pos;
    D3DVERTEX normal;
    DWORD color;
};
I would like a general answer to this question covering all the possible fields (for example, XYZ before DIFFUSE before NORMAL before TEX1 before TEX2). A pointer to the right page of the documentation would be fine too, as I was not able to find it :).
I ran into the same thing myself.
I think the order of the bits is the required order. From d3d9types.h:
#define D3DFVF_RESERVED0 0x001
#define D3DFVF_POSITION_MASK 0x400E
#define D3DFVF_XYZ 0x002
#define D3DFVF_XYZRHW 0x004
#define D3DFVF_XYZB1 0x006
#define D3DFVF_XYZB2 0x008
#define D3DFVF_XYZB3 0x00a
#define D3DFVF_XYZB4 0x00c
#define D3DFVF_XYZB5 0x00e
#define D3DFVF_XYZW 0x4002
#define D3DFVF_NORMAL 0x010
#define D3DFVF_PSIZE 0x020
#define D3DFVF_DIFFUSE 0x040
#define D3DFVF_SPECULAR 0x080
#define D3DFVF_TEXCOUNT_MASK 0xf00
#define D3DFVF_TEXCOUNT_SHIFT 8
#define D3DFVF_TEX0 0x000
#define D3DFVF_TEX1 0x100
#define D3DFVF_TEX2 0x200
#define D3DFVF_TEX3 0x300
#define D3DFVF_TEX4 0x400
#define D3DFVF_TEX5 0x500
#define D3DFVF_TEX6 0x600
#define D3DFVF_TEX7 0x700
#define D3DFVF_TEX8 0x800
I'm pretty sure that the order you are looking for is:
POSITION, NORMAL, PSIZE, DIFFUSE, SPECULAR, TEX0[, TEXn...]
I wasn't able to find a definitive answer in the documentation either.
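If that bit order does match the field order, reading a vertex back is just a matter of walking the flags in that order and accumulating offsets. Here is a minimal C++ sketch under that assumption; it ignores the XYZB* blend-weight variants and assumes the default 2D texture coordinates, so treat it as a starting point rather than a definitive parser:
#include <cstdint>

// Byte offsets of each field within one vertex, -1 if absent.
struct FvfLayout
{
    int pos      = -1; // 3 floats (4 for XYZRHW/XYZW)
    int normal   = -1; // 3 floats
    int psize    = -1; // 1 float
    int diffuse  = -1; // 1 DWORD
    int specular = -1; // 1 DWORD
    int tex      = -1; // first texture coordinate set
    int stride   = 0;  // total size of one vertex
};

FvfLayout LayoutFromFvf(uint32_t fvf)
{
    FvfLayout l;
    int off = 0;
    uint32_t pos = fvf & 0x400E;                        // D3DFVF_POSITION_MASK
    if (pos == 0x002)      { l.pos = off; off += 12; }  // D3DFVF_XYZ
    else if (pos == 0x004 || pos == 0x4002)             // D3DFVF_XYZRHW / XYZW
                           { l.pos = off; off += 16; }
    if (fvf & 0x010) { l.normal   = off; off += 12; }   // D3DFVF_NORMAL
    if (fvf & 0x020) { l.psize    = off; off += 4;  }   // D3DFVF_PSIZE
    if (fvf & 0x040) { l.diffuse  = off; off += 4;  }   // D3DFVF_DIFFUSE
    if (fvf & 0x080) { l.specular = off; off += 4;  }   // D3DFVF_SPECULAR
    int texCount = (fvf & 0xf00) >> 8;                  // D3DFVF_TEXCOUNT_*
    if (texCount > 0) l.tex = off;
    off += texCount * 8;                                // assumes 2D coordinates
    l.stride = off;
    return l;
}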
Here you are:
FVF (the OP says the information on this page is incorrect; I don't know, I didn't check whether the FVF field ordering there is right)
Generator
Well, you should be defining it as follows:
struct EitherVertex
{
    float x, y, z;
    DWORD col;
    float nx, ny, nz;
};
or
struct EitherVertex
{
    D3DXVECTOR3 pos;
    DWORD col;
    D3DXVECTOR3 nrm;
};
(D3DVERTEX refers to an entire vertex struct, not just a 3-element vector.)
Of your two options, a lot depends on how you access those vertex elements. If you are using the deprecated FVF path, then the second of your two choices is the more correct.
If, however, you are using vertex declarations, then YOU define where in the struct the relevant data is, and the ordering does not matter.
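For reference, a vertex declaration for the position/normal/diffuse layout discussed above might look like this (a sketch; stream 0, with offsets matching the second struct):
#include <d3d9.h>

// Sketch of a D3D9 vertex declaration: pos (float3), normal (float3), color (DWORD)
D3DVERTEXELEMENT9 elements[] =
{
    { 0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    { 0, 12, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
    { 0, 24, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
    D3DDECL_END()
};

IDirect3DVertexDeclaration9* decl = nullptr;
device->CreateVertexDeclaration(elements, &decl); // device is your IDirect3DDevice9*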
I want to know how to change the values inside a GLKMatrix4. I mean statically. If anyone knows, please explain. If there is any tutorial for learning or understanding OpenGL, reply with the link, except the raywenderlich tutorial, because I have already gone through it.
Have you looked at the headers?
#if defined(__STRICT_ANSI__)
struct _GLKMatrix4
{
float m[16];
} __attribute__((aligned(16)));
typedef struct _GLKMatrix4 GLKMatrix4;
#else
union _GLKMatrix4
{
struct
{
float m00, m01, m02, m03;
float m10, m11, m12, m13;
float m20, m21, m22, m23;
float m30, m31, m32, m33;
};
float m[16];
} __attribute__((aligned(16)));
typedef union _GLKMatrix4 GLKMatrix4;
#endif
It varies a bit based on your build environment and target platform/device, but long story short: all of the GLKit math types are plain-old-data structs (or unions), and you can access their members directly.
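For example, here is a minimal sketch of poking values in directly (note that GLKit matrices are column-major, so m30 aliases m[12], the x translation):
#include <GLKit/GLKMath.h>

GLKMatrix4 m = GLKMatrix4Identity;

// Write individual cells by name...
m.m30 = 5.0f;    // column-major: m30 is m[12], the x translation
m.m31 = 2.0f;    // y translation

// ...or through the flat array, which aliases the same storage:
m.m[14] = -3.0f; // z translation

// m is now equivalent to GLKMatrix4MakeTranslation(5.0f, 2.0f, -3.0f)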
I am currently working on implementing dynamic shader linkage in my shader reflection code. It works quite nicely, but to make my code as dynamic as possible I would like to automate the process of getting the offset into the dynamicLinkageArray. Microsoft suggests something like this in their sample:
g_iNumPSInterfaces = pReflector->GetNumInterfaceSlots();
g_dynamicLinkageArray = (ID3D11ClassInstance**) malloc( sizeof(ID3D11ClassInstance*) * g_iNumPSInterfaces );
if ( !g_dynamicLinkageArray )
    return E_FAIL;
ID3D11ShaderReflectionVariable* pAmbientLightingVar = pReflector->GetVariableByName("g_abstractAmbientLighting");
g_iAmbientLightingOffset = pAmbientLightingVar->GetInterfaceSlot(0);
I would like to do this without giving the exact name, so that when the shader changes I do not have to manually change this code. To accomplish this I would need to get the name I marked below through shader reflection. Is this possible? I searched through the shader-reflection references but did not find anything useful, besides the number of interface slots (GetNumInterfaceSlots()).
#include "BasicShader_PSBuffers.hlsli"
iBaseLight g_abstractAmbientLighting;
^^^^^^^^^^^^^^^^^^^^^^^^^^
struct PixelInput
{
    float4 position : SV_POSITION;
    float3 normals : NORMAL;
    float2 tex : TEXCOORD0;
};

float4 main(PixelInput input) : SV_TARGET
{
    float3 Ambient = (float3)0.0f;
    Ambient = g_txDiffuse.Sample(g_samplerLin, input.tex) * g_abstractAmbientLighting.IlluminateAmbient(input.normals);
    return float4(saturate(Ambient), 1.0f);
}
If this is not possible, how would one go about this? Just add everything I can think of there, so that I have to change as little as possible manually?
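For illustration, this is roughly the kind of enumeration I have in mind (only a sketch; I am not sure interface instances are even reported through the regular constant-buffer reflection, so the D3D_SVC_INTERFACE_POINTER check is an assumption on my part):
D3D11_SHADER_DESC shaderDesc;
pReflector->GetDesc(&shaderDesc);

for (UINT cb = 0; cb < shaderDesc.ConstantBuffers; ++cb)
{
    ID3D11ShaderReflectionConstantBuffer* pBuffer = pReflector->GetConstantBufferByIndex(cb);
    D3D11_SHADER_BUFFER_DESC bufferDesc;
    pBuffer->GetDesc(&bufferDesc);

    for (UINT v = 0; v < bufferDesc.Variables; ++v)
    {
        ID3D11ShaderReflectionVariable* pVar = pBuffer->GetVariableByIndex(v);
        D3D11_SHADER_VARIABLE_DESC varDesc;
        pVar->GetDesc(&varDesc);

        D3D11_SHADER_TYPE_DESC typeDesc;
        pVar->GetType()->GetDesc(&typeDesc);

        if (typeDesc.Class == D3D_SVC_INTERFACE_POINTER)
        {
            // varDesc.Name is the interface variable's name and
            // GetInterfaceSlot(0) its offset into the linkage array.
            UINT slot = pVar->GetInterfaceSlot(0);
            // ...store (varDesc.Name, slot)
        }
    }
}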
Thanks in advance
I want to copy BasicEffect's fog method to use in my own shader, so I don't have to declare both a BasicEffect shader and my own. The HLSL code of the basic effect was released with one of the downloadable samples on XNA Creators Club a while ago, and I thought the method needed would be found within that HLSL file. However, all I can see is a function being called, but no actual definition for that function. The function called is:
ApplyFog(color, pin.PositionWS.w);
Does anybody know where the definition is and whether it's freely accessible? Otherwise, any help on how to replicate its effect would be great.
I downloaded the sample from here.
Thanks.
Edit: Still having problems. I think it's to do with getting depth:
VertexToPixel InstancedCelShadeVSNmVc(VSInputNmVc VSInput, in VSInstanceVc VSInstance)
{
    VertexToPixel Output = (VertexToPixel)0;
    Output.Position = mul(mul(mul(mul(VSInput.Position, transpose(VSInstance.World)), xWorld), xView), xProjection);
    Output.ViewSpaceZ = -VSInput.Position.z / xCameraClipFar;
Is that right? Camera clip far is passed in as a constant.
Here's an example of how to achieve a similar effect.
In your vertex shader function, you pass the view-space Z position, divided by the distance of your far plane; that gives you a nice 0..1 mapping for your depth values.
Then, in your pixel shader, you use the lerp function to blend between your original color value and the fog color. Here's some (pseudo)code:
cbuffer Input // I'm used to DX10+; remove the cbuffer block for DX9
{
    float FarPlane;
    float4 FogColor;
}
struct VS_Output
{
    //...Whatever else you need
    float ViewSpaceZ : TEXCOORD0; // or whatever semantic you'd like to use
};
VS_Output VertexShader(/* Your input here */)
{
    VS_Output output;
    //...Transform to view space
    output.ViewSpaceZ = -vsPosition.z / FarPlane;
    return output;
}
float4 PixelShader(VS_Output input) : SV_Target0 // or COLOR0 depending on DX version
{
    const float FOG_MIN = 0.9;
    const float FOG_MAX = 0.99;
    //...Calculate color
    // smoothstep gives no fog below FOG_MIN and ramps up to full fog at FOG_MAX
    return lerp(yourCalculatedColor, FogColor, smoothstep(FOG_MIN, FOG_MAX, input.ViewSpaceZ));
}
I've written this off the top of my head; hope it helps.
The constants I've chosen will give you a pretty "steep" fog; choose a smaller value for FOG_MIN to get a smoother fog.
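If you are on the DX10+/C++ side as in the snippet above, the matching constant-buffer struct has to respect the 16-byte packing rules; here is a sketch (the names are mine):
#include <DirectXMath.h>

// FogColor must start on a 16-byte boundary, hence the explicit padding.
struct FogConstants
{
    float FarPlane;
    float pad[3]; // pads FarPlane's slot out to 16 bytes
    DirectX::XMFLOAT4 FogColor;
};
static_assert(sizeof(FogConstants) == 32, "cbuffer sizes must be multiples of 16 bytes");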
Hi people,
I have a problem passing a float array to a vertex shader (HLSL) through a constant buffer. I know that each "float" in the array below gets a 16-byte slot all by itself (space equivalent to a float4) due to the HLSL packing rules:
// C++ struct
struct ForegroundConstants
{
    DirectX::XMMATRIX transform;
    float bounceCpp[64];
};

// Vertex shader constant buffer
cbuffer ForegroundConstantBuffer : register(b0)
{
    matrix transform;
    float bounceHlsl[64];
};
(Unfortunately, the simple solution here does not work; nothing is drawn after I made that change.)
While the C++ data gets passed, due to the packing rule it gets spaced out such that each "float" in the bounceCpp C++ array ends up in a 16-byte slot all by itself in the bounceHlsl array. This resulted in a warning similar to the following:
ID3D11DeviceContext::DrawIndexed: The size of the Constant Buffer at slot 0 of the Vertex Shader unit is too small (320 bytes provided, 1088 bytes, at least, expected). This is OK, as out-of-bounds reads are defined to return 0. It is also possible the developer knows the missing data will not be used anyway. This is only a problem if the developer actually intended to bind a sufficiently large Constant Buffer for what the shader expects.
The recommendation, as pointed out here and here, is to rewrite the HLSL constant buffer this way:
cbuffer ForegroundConstantBuffer : register(b0)
{
    matrix transform;
    float4 bounceHlsl[16]; // equivalent to 64 floats
};

static float temp[64] = (float[64]) bounceHlsl;

float4 main(float4 pos : POSITION) : SV_POSITION
{
    int index = someValueRangeFrom0to63;
    float y = temp[index];
    // Bla bla bla...
}
But that didn't work (i.e. ID3D11Device1::CreateVertexShader never returns). I'm compiling against Shader Model 4 Level 9_1; can you spot anything that I have done wrong here?
Thanks in advance! :)
Regards,
Ben
One solution, albeit non-optimal, is to just declare your float array as
float4 bounceHlsl[16];
then process the index like
float x = ((float[4])(bounceHlsl[i/4]))[i%4];
where i is the index you require.
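On the C++ side, the struct should then mirror that float4 layout so the buffer sizes agree; here is a sketch using the names from the question:
#include <DirectXMath.h>

// 64 bytes of matrix + 16 * 16 bytes of array = 320 bytes, matching the HLSL side.
struct ForegroundConstants
{
    DirectX::XMMATRIX transform;
    DirectX::XMFLOAT4 bounce[16]; // old bounceCpp[i] lives at bounce[i / 4], component i % 4
};
static_assert(sizeof(ForegroundConstants) == 320, "layout must match the cbuffer");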
I came across a weird behavior of HLSL. I am trying to use an array that is contained within a struct, like this (Pixel Shader code):
struct VSOUT {
    float4 projected : SV_POSITION;
    float3 pos : POSITION;
    float3 normal : NORMAL;
};

struct Something {
    float a[17];
};
float4 shMain (VSOUT input) : SV_Target {
    Something s;
    for (int i = 0; i < (int)(input.pos.x * 800); ++i)
        s.a[(int)input.pos.x] = input.pos.x;
    return col * s.a[(int)input.pos.x];
}
The code makes no sense logically, it's just a sample. The problem is that when I try to compile this code, I get the following error (line 25 is the for-loop line):
(25,7): error X3511: Forced to unroll loop, but unrolling failed.
However, when I put the array outside the struct (just declare float a[17] in shMain), everything works as expected.
My question is, why is DirectX trying to unroll the (unrollable) for-loop when using the struct? Is this a documented behavior? Is there any available workaround except for putting the array outside the struct?
I am using shader model 4.0, DirectX 10 SDK from June 2010.
EDIT:
For clarification I am adding the working code; it only replaces the usage of the struct Something with a plain array:
struct VSOUT {
    float4 projected : SV_POSITION;
    float3 pos : POSITION;
    float3 normal : NORMAL;
};

float4 shMain (VSOUT input) : SV_Target {
    float a[17]; // Direct declaration of the array
    for (int i = 0; i < (int)(input.pos.x * 800); ++i)
        a[(int)input.pos.x] = input.pos.x;
    return col * a[(int)input.pos.x];
}
This code compiles and works as expected. It works even if I add the [loop] attribute in front of the for-loop, which means it is not unrolled (which is correct behavior).
I'm not sure, but what I know is that the hardware schedules and processes fragments in blocks of 2x2 (for computing derivatives). This could be a reason why fxc tries to unroll the for-loop, so that the shader program is executed in lockstep mode.
Also, did you try using the [loop] attribute to generate code that uses flow control?
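For reference, that would just mean annotating the loop; whether fxc accepts it with the struct-member array on SM 4.0 is exactly what is in question here, so this is only what I would try:
Something s;
[loop] // ask fxc for real flow control instead of unrolling
for (int i = 0; i < (int)(input.pos.x * 800); ++i)
    s.a[(int)input.pos.x] = input.pos.x;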