How does DirectX read in vertices?

So I've gone through some tutorials and resources, including the DirectX documentation and reference itself, and either I missed something or I really can't find an answer to my question.
The question is as stated in the title: How does DirectX read in Vertices into Vertex Buffers?
I understand, of course, that you have to provide one or more FVF codes. But nowhere does it say how to properly set up your vertex struct. The only thing I can imagine is that DirectX reads the flags in a fixed, linear order, so the data for a flag that comes earlier in that order must also come earlier in the struct.
As a little example for what I mean:
struct MyVertex {
    float x, y, z;
    float nx, ny, nz;
};
!=
struct MyVertex {
    float nx, ny, nz;
    float x, y, z;
};
with the FVF codes:
D3DFVF_XYZ | D3DFVF_NORMAL
with nx, ny, nz representing the components of the vertex normal.
Any help on how to properly set up your vertex struct is appreciated...
Sincerely,
Derija

You need to ensure that the C++ struct and the HLSL struct match the order in which the components were specified in the vertex format (if you specified XYZ then NORMAL, your structure must follow that order). You then use device->CreateBuffer to create the vertex buffer from the array of vertex structures; after that, the array may be released and freed, since DirectX manages the buffer data independently from there. To alter the data in the render loop, the buffer must be created writable, and it can be updated after creation using ID3D10Buffer Map and Unmap.
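For example, a minimal sketch of updating such a buffer after creation (assuming it was created with D3D10_USAGE_DYNAMIC and D3D10_CPU_ACCESS_WRITE; vertexBuffer, vertices, and vertexCount are placeholder names, not from the original code):
// Hypothetical per-frame update of a dynamic ID3D10Buffer.
void* mapped = NULL;
if (SUCCEEDED(vertexBuffer->Map(D3D10_MAP_WRITE_DISCARD, 0, &mapped)))
{
    // Overwrite the whole buffer with the updated vertex array.
    memcpy(mapped, vertices, sizeof(vertices[0]) * vertexCount);
    vertexBuffer->Unmap();
}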
MSDN:
Create vertex buffer: http://msdn.microsoft.com/en-us/library/windows/desktop/bb173544(v=vs.85).aspx
Buffer: http://msdn.microsoft.com/en-us/library/windows/desktop/bb173510(v=vs.85).aspx
Eg: C++
D3D10_INPUT_ELEMENT_DESC layoutPosTexNormCInstanced[] =
{
    {"POSITION",     0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,  D3D10_INPUT_PER_VERTEX_DATA,   0},
    {"TEXCOORD",     0, DXGI_FORMAT_R32G32_FLOAT,       0, 12, D3D10_INPUT_PER_VERTEX_DATA,   0},
    {"NORMAL",       0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 20, D3D10_INPUT_PER_VERTEX_DATA,   0},
    {"TANGENT",      0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 32, D3D10_INPUT_PER_VERTEX_DATA,   0},
    {"BINORMAL",     0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 44, D3D10_INPUT_PER_VERTEX_DATA,   0},
    {"BLENDINDICES", 0, DXGI_FORMAT_R8G8B8A8_SINT,      0, 56, D3D10_INPUT_PER_VERTEX_DATA,   0}, //4
    {"mTransform",   0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0,  D3D10_INPUT_PER_INSTANCE_DATA, 1},
    {"mTransform",   1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D10_INPUT_PER_INSTANCE_DATA, 1},
    {"mTransform",   2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D10_INPUT_PER_INSTANCE_DATA, 1},
    {"mTransform",   3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D10_INPUT_PER_INSTANCE_DATA, 1},
};
//Contains the position, texture coordinate and normal for lighting calculations.
struct DX10VertexNormal
{
    //Constructor.
    DX10VertexNormal()
    {
        ZeroMemory(this, sizeof(DX10VertexNormal));
        boneIndex[0] = -1;
        boneIndex[1] = -1;
        boneIndex[2] = -1;
        boneIndex[3] = -1;
    };

    //PAD to 4.
    D3DXVECTOR3 pos;
    D3DXVECTOR2 tcoord;
    D3DXVECTOR3 normal;
    D3DXVECTOR3 tangent;
    D3DXVECTOR3 binormal;
    int boneIndex[4];
};
HLSL:
///Holds the vertex shader data for rendering
///instanced mesh data, with position, texture coord,
///and surface normal for lighting calculations.
struct VS_Instanced_PosTexNorm_INPUT
{
    float4 Pos      : POSITION;
    float2 Tex      : TEXCOORD;
    float3 Norm     : NORMAL;
    float3 Tangent  : TANGENT;
    float3 Binormal : BINORMAL;
    int4 boneIndex  : BLENDINDICES;
    row_major float4x4 mTransform : mTransform;
    uint InstanceId : SV_InstanceID;
};
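And, to answer the original FVF question directly, here is a minimal, hedged Direct3D 9 sketch (assuming an existing device d3ddev; myVertices and vertexCount are placeholders). The struct member order follows the order implied by D3DFVF_XYZ | D3DFVF_NORMAL, i.e. position first, then normal:
// Layout must match the FVF flags: position (D3DFVF_XYZ) before normal (D3DFVF_NORMAL).
struct MyVertex
{
    float x, y, z;    // D3DFVF_XYZ
    float nx, ny, nz; // D3DFVF_NORMAL
};
#define MY_FVF (D3DFVF_XYZ | D3DFVF_NORMAL)

IDirect3DVertexBuffer9* vb = NULL;
d3ddev->CreateVertexBuffer(vertexCount * sizeof(MyVertex), 0, MY_FVF,
                           D3DPOOL_MANAGED, &vb, NULL);

void* data = NULL;
vb->Lock(0, 0, &data, 0);
memcpy(data, myVertices, vertexCount * sizeof(MyVertex));
vb->Unlock();

d3ddev->SetFVF(MY_FVF);
d3ddev->SetStreamSource(0, vb, 0, sizeof(MyVertex));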

Related

Unable to render square in directx9

// this is the function used to render a single frame
void render_frame(void)
{
    init_graphics();
    d3ddev->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(255, 0, 0), 1.0f, 0);
    d3ddev->BeginScene();

    // select which vertex format we are using
    d3ddev->SetFVF(CUSTOMFVF);

    // select the vertex buffer to display
    d3ddev->SetStreamSource(0, v_buffer, 0, sizeof(CUSTOMVERTEX));

    // copy the vertex buffer to the back buffer
    d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
    // d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 1);

    d3ddev->EndScene();
    d3ddev->Present(NULL, NULL, NULL, NULL);
}
// this is the function that puts the 3D models into video RAM
void init_graphics(void)
{
    // create the vertices using the CUSTOMVERTEX struct
    CUSTOMVERTEX vertices[] =
    {
        { 100.f, 0.f, 0.f, D3DCOLOR_XRGB(0, 0, 255), },
        { 300.f, 0.f, 0.f, D3DCOLOR_XRGB(0, 0, 255), },
        { 300.f, 80.f, 0.f, D3DCOLOR_XRGB(0, 0, 255), },
        { 100.f, 80.f, 0.f, D3DCOLOR_XRGB(0, 0, 255), },
    };

    // create a vertex buffer interface called v_buffer
    d3ddev->CreateVertexBuffer(6 * sizeof(CUSTOMVERTEX),
                               0,
                               CUSTOMFVF,
                               D3DPOOL_MANAGED,
                               &v_buffer,
                               NULL);

    VOID* pVoid; // a void pointer

    // lock v_buffer and load the vertices into it
    v_buffer->Lock(0, 0, (void**)&pVoid, 0);
    memcpy(pVoid, vertices, sizeof(vertices));
    v_buffer->Unlock();
}
Can't render a square for some reason. I've searched for an hour but can't find the answer.
https://i.imgur.com/KCKZSrJ.jpg
Does anybody know how to render it? I'm using directx9.
Tried using DrawIndexedPrimitive, but it has the same result.
There are likely a few things going on here:
You do not set a vertex or pixel shader, so you are using the legacy fixed-function render pipeline. This pipeline requires you to set the view/projection matrices with SetTransform. Since you haven't done that, the vertex positions you provide in 'screen space' don't mean what you think they mean. See The Direct3D Transformation Pipeline.
You are not setting the backface culling mode via SetRenderState, so it defaults to D3DCULL_CCW (i.e. cull counter-clockwise winding triangles). As such, your vertex positions result in one of the triangles being rejected. You may want to call SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE); while getting started.
You are using a TRIANGLESTRIP with only 4 points. You may find it easier to get this correct initially by using a TRIANGLELIST and 6 points. A sketch of these fixes follows.
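A hedged sketch of those fixes, reusing the question's d3ddev, v_buffer, CUSTOMVERTEX and CUSTOMFVF (the camera values are made-up placeholders, not taken from the original code):
// Disable backface culling while getting started.
d3ddev->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);

// The fixed-function pipeline needs world/view/projection transforms.
D3DXMATRIX world, view, proj;
D3DXVECTOR3 eye(200.0f, 40.0f, -500.0f), at(200.0f, 40.0f, 0.0f), up(0.0f, 1.0f, 0.0f);
D3DXMatrixIdentity(&world);
D3DXMatrixLookAtLH(&view, &eye, &at, &up);
D3DXMatrixPerspectiveFovLH(&proj, D3DX_PI / 4.0f, 640.0f / 480.0f, 1.0f, 1000.0f);
d3ddev->SetTransform(D3DTS_WORLD, &world);
d3ddev->SetTransform(D3DTS_VIEW, &view);
d3ddev->SetTransform(D3DTS_PROJECTION, &proj);

// For a strip, the four corners must be submitted in strip order (e.g. 0, 1, 3, 2);
// with a triangle list you would instead submit 6 vertices and draw 2 primitives.
d3ddev->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);
Alternatively, pre-transformed vertices (an FVF with D3DFVF_XYZRHW plus an RHW value of 1.0f in each vertex) let you keep the question's pixel-style coordinates without setting any transforms.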

How to change where rendered object is placed on the screen, OpenGL Es 2.0 iOS

I downloaded a project that displays a square with some texture. The square is currently located near the bottom of the screen, and I want to move it to the middle. Here is the code.
Geometry.h
#import "GLKit/GLKit.h"
#ifndef Geometry_h
#define Geometry_h
typedef struct {
    float Position[3];
    float Color[4];
    float TexCoord[2];
    float Normal[3];
} Vertex;

typedef struct {
    int x;
    int y;
    int z;
} Position;

extern const Vertex VerticesCube[24];
extern const GLubyte IndicesTrianglesCube[36];
#endif
Here is the code in geometry.m
const Vertex VerticesCube[] = {
    // Front
    {{1, -1, 1},  {1, 0, 0, 1}, {0, 1},             {0, 0, 1}},
    {{1, 1, 1},   {0, 1, 0, 1}, {0, 2.0/3.0},       {0, 0, 1}},
    {{-1, 1, 1},  {0, 0, 1, 1}, {1.0/3.0, 2.0/3.0}, {0, 0, 1}},
    {{-1, -1, 1}, {0, 0, 0, 1}, {1.0/3.0, 1},       {0, 0, 1}},
};

const GLubyte IndicesTrianglesCube[] =
{
    // Front
    0, 1, 2,
    2, 3, 0,
};
What part of this code determines the position of the rendered object on the screen?
None of the code you posted has to do with screen position.
VerticesCube specifies the cube corners in an arbitrary 3D space. Somewhere in code you haven't posted, a projection transform (and probably also view and model transforms) map each vertex to clip space, and a glViewport call (which is probably implicitly done for you if you're using GLKView) maps clip space to screen/view coordinates.
Rearranging things on screen could involve any one of those transforms, and which one to use is a choice that depends on understanding each one and how it fits into the larger context of your app design.
This is the sort of thing you'd get from the early stages of any OpenGL tutorial. Here's a decent one.
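For a concrete (hedged) illustration, here is a minimal sketch of moving the object by changing its model-view transform with GLKit math. The program handle, uniform names, and numeric values are assumptions about the downloaded project, not taken from it:
#include <GLKit/GLKMath.h>
#include <OpenGLES/ES2/gl.h>

// 'program' and the uniform names are placeholders for what the project already has.
GLint uModelView  = glGetUniformLocation(program, "uModelViewMatrix");
GLint uProjection = glGetUniformLocation(program, "uProjectionMatrix");

// Translate the model: changing the y component moves the square up or down
// on screen once the projection maps it into clip space.
const float aspect = 320.0f / 480.0f;  // placeholder viewport aspect ratio
GLKMatrix4 modelView  = GLKMatrix4MakeTranslation(0.0f, 1.5f, -6.0f);
GLKMatrix4 projection = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f),
                                                  aspect, 0.1f, 100.0f);

// Upload to the shader's uniforms before issuing the draw call.
glUniformMatrix4fv(uModelView, 1, GL_FALSE, modelView.m);
glUniformMatrix4fv(uProjection, 1, GL_FALSE, projection.m);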

Mul function in HLSL: Which one should be the first parameter? the Vector or the Matrix?

I am learning HLSL shading, and in my vertex shader, I have code like this:
VS_OUTPUT vs_main(
    float4 inPos: POSITION,
    float2 Txr1: TEXCOORD0 )
{
    VS_OUTPUT Output;
    Output.Position = mul( inPos, matViewProjection );
    Output.Tex1 = Txr1;
    return( Output );
}
It works fine. But when I was typing the code from the book, it was like this:
VS_OUTPUT vs_main(
    float4 inPos: POSITION,
    float2 Txr1: TEXCOORD0 )
{
    VS_OUTPUT Output;
    Output.Position = mul( matViewProjection, inPos );
    Output.Tex1 = Txr1;
    return( Output );
}
At first I thought maybe the order does not matter. However, when I swapped the parameters of the mul function in my code, it did not work, and I don't know why.
BTW, I am using RenderMonkey.
This issue is known as pre- vs. post-multiplication.
By convention, matrices produced by D3DX are stored in row-major order. To produce proper results with them you have to pre-multiply: for matViewProjection to transform the vector inPos into clip space, inPos should appear on the left-hand side (the first parameter).
Order absolutely matters; matrix multiplication is not commutative. However, pre-multiplying by a matrix is the same as post-multiplying by the transpose of that matrix. Put another way, if you were using the same matrix but stored in column-major order (transposed), then you would want to swap the operands.
Thus (vector on the right-hand side, also known as post-multiplication):
[ 0, 0, 0, m41 ]   [ x ]
[ 0, 0, 0, m42 ] * [ y ]
[ 0, 0, 0, m43 ]   [ z ]
[ 0, 0, 0, m44 ]   [ w ]
When the vector appears on the right-hand side it is interpreted as a column vector.
This is equivalent to (vector on the left-hand side, also known as pre-multiplication):
[ x, y, z, w ] * [ 0,   0,   0,   0   ]
                 [ 0,   0,   0,   0   ]
                 [ 0,   0,   0,   0   ]
                 [ m41, m42, m43, m44 ]
When the vector appears on the left-hand side, it is interpreted as a row vector.
There is no universally correct side; it depends on how the matrix is represented.
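As a small, hedged C++ check of that equivalence (D3DX 9 math; the perspective matrix and vector here are arbitrary examples):
// D3DXVec4Transform uses the row-vector convention, i.e. out = v * M,
// which corresponds to the HLSL form mul(inPos, matViewProjection).
D3DXMATRIX M, Mt;
D3DXMatrixPerspectiveFovLH(&M, D3DX_PI / 4.0f, 4.0f / 3.0f, 1.0f, 1000.0f);
D3DXMatrixTranspose(&Mt, &M);

D3DXVECTOR4 v(10.0f, 15.0f, 72.0f, 1.0f), rowForm, colForm;
D3DXVec4Transform(&rowForm, &v, &M);   // rowForm = v * M            (pre-multiplication)
D3DXVec4Transform(&colForm, &v, &Mt);  // colForm = v * M^T = M * v  (post-multiplication)
// rowForm and colForm differ in general, which is why swapping the mul()
// operands breaks the shader unless the matrix is also transposed.
Whether you transpose on the CPU, declare the matrix row_major (as in the answer below), or swap the mul() operands, the key is that the shader-side convention and the uploaded matrix data agree.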
Not sure if this is caused by matrix order (row-major vs. column-major); take a look at the Type modifier and mul function documentation.
Update:
I have tested this in my project; using the keyword row_major makes the second case work:
row_major matrix matViewProjection
mul(matViewProjection, inPos);

WebGL - Reverse transform to retrieve vertex position from gl_Position in the vertex shader

In my simple WebGL app, I calculate gl_Position (in the vertex shader) as follows:
gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
aVertexPosition contains the (world-coordinate) vertex position.
I have been trying to do a reverse transform on gl_Position to get aVertexPosition's values (x, y, and z) back, but the process has proven a bit tricky for me.
The data and matrices I used in the app:
Projection matrix:
[1.0282340049743652, 0, 0, 0,
0, 1.5423510074615479, 0, 0,
0, 0, -1.0004000663757324, -1,
0, 0, -20.00400161743164, 0]
Model-View Matrix:
[203830718994, 0.563721776008606, -0.5349519848823547, 0,
-3.6513849721586666e-9, 0.6883545517921448, 0.7253743410110474, 0,
0.7771459817886353, -0.45649290084838867, 0.4331955313682556, 0,
-0.00007212938362499699,-1000, -9290, 1]
aVertexPosition: [10.0, 15.0, 72.0]
vertex is the homogeneous version of aVertexPosition: [10.0, 15.0, 72.0, 1.0]
I have followed this approach to simulate the calculation of gl_Position in the vertex shader:
mat4.multiply(mvpro, modelViewMatrix,projectionMatrix);
vec4.transformMat4(vertex, vertex, mvpro);
As a result of the above calculations, I got the new vertex (which should be the same as the value of gl_Position):
[-65.04684448242188, 72063.734375, 668851.375, -72 ]
Then, I applied the following reverse transform to obtain the original vertex's coordinates back:
mat4.invert(invMvpro,mvpro);
vec4.transformMat4(vertex,vertex, invMvpro);
Now vertex is:
[10.014994621276855, 14.986272811889648, 72, 1.0005537271499634].
As you can see, the above values are slightly off compared to the original ones [10, 15, 72].
I used glMatrix version 2 for all the calculations.

Kalman filters with four input parameters

I have been studying the operation of the Kalman filter for a couple of days now to improve the performance of my face detection program. From the information I have gathered, I have put together some code. The Kalman filter part is as follows.
int Kalman(int X, int faceWidth, int Y, int faceHeight, IplImage *img1){
    CvRandState rng;
    const float T = 0.1;

    // Initialize Kalman filter object, window, number generator, etc
    cvRandInit( &rng, 0, 1, -1, CV_RAND_UNI );
    //IplImage* img = cvCreateImage( cvSize(500,500), 8, 3 );
    CvKalman* kalman = cvCreateKalman( 4, 4, 0 );

    // Initializing with random guesses
    // state x_k
    CvMat* state = cvCreateMat( 4, 1, CV_32FC1 );
    cvRandSetRange( &rng, 0, 0.1, 0 );
    rng.disttype = CV_RAND_NORMAL;
    cvRand( &rng, state );

    // Process noise w_k
    CvMat* process_noise = cvCreateMat( 4, 1, CV_32FC1 );

    // Measurement z_k
    CvMat* measurement = cvCreateMat( 4, 1, CV_32FC1 );
    cvZero(measurement);

    /* create matrix data */
    const float A[] = {
        1, 0, T, 0,
        0, 1, 0, T,
        0, 0, 1, 0,
        0, 0, 0, 1
    };
    const float H[] = {
        1, 0, 0, 0,
        0, 0, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 0
    };
    //Didn't use this matrix in the end as it gave an error: 'ambiguous call to overloaded function'
    /* const float P[] = {
        pow(320,2), pow(320,2)/T, 0, 0,
        pow(320,2)/T, pow(320,2)/pow(T,2), 0, 0,
        0, 0, pow(240,2), pow(240,2)/T,
        0, 0, pow(240,2)/T, pow(240,2)/pow(T,2)
    }; */
    const float Q[] = {
        pow(T,3)/3, pow(T,2)/2, 0, 0,
        pow(T,2)/2, T, 0, 0,
        0, 0, pow(T,3)/3, pow(T,2)/2,
        0, 0, pow(T,2)/2, T
    };
    const float R[] = {
        1, 0, 0, 0,
        0, 0, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 0
    };

    // Copy created matrices into kalman structure
    memcpy( kalman->transition_matrix->data.fl, A, sizeof(A));
    memcpy( kalman->measurement_matrix->data.fl, H, sizeof(H));
    memcpy( kalman->process_noise_cov->data.fl, Q, sizeof(Q));
    //memcpy( kalman->error_cov_post->data.fl, P, sizeof(P));
    memcpy( kalman->measurement_noise_cov->data.fl, R, sizeof(R));

    // Initialize other Kalman Filter parameters
    //cvSetIdentity( kalman->measurement_matrix, cvRealScalar(1) );
    //cvSetIdentity( kalman->process_noise_cov, cvRealScalar(1e-5) );
    /*cvSetIdentity( kalman->measurement_noise_cov, cvRealScalar(1e-1) );*/
    cvSetIdentity( kalman->error_cov_post, cvRealScalar(1e-5) );

    /* choose initial state */
    kalman->state_post->data.fl[0] = X;
    kalman->state_post->data.fl[1] = faceWidth;
    kalman->state_post->data.fl[2] = Y;
    kalman->state_post->data.fl[3] = faceHeight;
    //cvRand( &rng, kalman->state_post );

    /* predict position of point */
    const CvMat* prediction = cvKalmanPredict(kalman, 0);

    // generate measurement (z_k)
    cvRandSetRange( &rng, 0, sqrt(kalman->measurement_noise_cov->data.fl[0]), 0 );
    cvRand( &rng, measurement );
    cvMatMulAdd( kalman->measurement_matrix, state, measurement, measurement );

    // Draw rectangles in detected face location
    cvRectangle( img1,
                 cvPoint( kalman->state_post->data.fl[0], kalman->state_post->data.fl[2] ),
                 cvPoint( kalman->state_post->data.fl[1], kalman->state_post->data.fl[3] ),
                 CV_RGB( 0, 255, 0 ), 1, 8, 0 );
    cvRectangle( img1,
                 cvPoint( prediction->data.fl[0], prediction->data.fl[2] ),
                 cvPoint( prediction->data.fl[1], prediction->data.fl[3] ),
                 CV_RGB( 0, 0, 255 ), 1, 8, 0 );
    cvShowImage("Kalman", img1);

    // adjust kalman filter state
    cvKalmanCorrect(kalman, measurement);
    cvMatMulAdd(kalman->transition_matrix, state, process_noise, state);
    return 0;
}
In the face detection part (not shown), a box is drawn around the detected face. X, Y, faceWidth, and faceHeight are the coordinates and dimensions of that box, which are passed into the Kalman filter. img1 is the current frame of a video.
Results:
Although I do get two new rectangles from the 'state_post' and 'prediction' data (as seen in the code), neither of them seems to be any more stable than the initial box drawn without the Kalman filter.
Here are my questions:
Are the matrices (transition matrix A, measurement matrix H, etc.) initialized correctly for this four-input case (e.g. 4x4 matrices for four inputs)?
Can't we set every matrix to be an identity matrix?
Is the method I followed up to plotting the rectangles theoretically correct? I followed the examples in this and in the book 'Learning OpenCV', which don't use external inputs.
Any help regarding this would be much appreciated!
H[] should be the identity if you measure directly from the image. If you have 4 measurements and you set some values on the diagonal to 0, you are forcing those expected measurements (H*x) to 0 when that is not true, so the innovation (z - H*x) in the Kalman filter will be large.
R[] should also be diagonal, though the covariance of the measurement error can be different from one. If you have normalized coordinates (width = height = 1), R could be something like 0.01. If you are dealing with pixel coordinates, R = diagonal ones means an error of one pixel, and that's OK. You can try 2, 3, 4, etc.
Your transition matrix A[], which is supposed to propagate the state on each frame, looks like a transition matrix for a state composed of x, y, v_x, and v_y. You don't mention velocity in your model; you only talk about measurements. Be careful not to confuse the state (which describes the position of the face) with the measurements (which are used to update the state). Your state can be position, velocity, and acceleration, and your measurements can be n points in the image, or the x and y position of the face.
Hope this helps.
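To make that last point concrete, here is a minimal, hedged sketch of a constant-velocity setup using the same legacy OpenCV C API as above (state = [x, y, v_x, v_y], measurements = [x, y]; the time step T and the noise scales are assumptions you would tune):
// 4 state variables (x, y, v_x, v_y), 2 measurements (x, y), no control input.
CvKalman* kf = cvCreateKalman(4, 2, 0);
const float T = 0.1f;

// x_{k+1} = A * x_k : positions advance by velocity * T each frame.
const float A[] = {
    1, 0, T, 0,
    0, 1, 0, T,
    0, 0, 1, 0,
    0, 0, 0, 1
};
// z_k = H * x_k : we observe only the positions.
const float H[] = {
    1, 0, 0, 0,
    0, 1, 0, 0
};
memcpy(kf->transition_matrix->data.fl, A, sizeof(A));
memcpy(kf->measurement_matrix->data.fl, H, sizeof(H));
cvSetIdentity(kf->process_noise_cov, cvRealScalar(1e-4));
cvSetIdentity(kf->measurement_noise_cov, cvRealScalar(1.0)); // roughly 1 pixel of measurement error
cvSetIdentity(kf->error_cov_post, cvRealScalar(1.0));
Each frame you would then call cvKalmanPredict, fill a 2x1 measurement matrix with the detected face position, and call cvKalmanCorrect; the face width and height could be tracked the same way in a separate filter or by extending the state.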

Resources