I'm trying to learn Direct3D, but I'm having a problem with D3DXCompileShaderFromFile. This is part of my code:
ID3DXBuffer* pShader;
ID3DXBuffer* error_buffer;
HRESULT hr =D3DXCompileShaderFromFile(L"Transform.txt", NULL, NULL, "vs_main", "vs_3_1",
D3DXSHADER_DEBUG, &pShader, &error_buffer, &g_pConstantTable);
/* Create the shader object */
g_pd3dDevice->CreateVertexShader( (DWORD*)pShader->GetBufferPointer(), &g_pVertexShader );
pShader->Release();
The problem is that pShader never receives a value from D3DXCompileShaderFromFile; it stays NULL, so CreateVertexShader cannot be called.
Why does this error occur?
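Not from the original post, but a minimal diagnostic sketch: D3DXCompileShaderFromFile reports failures through its HRESULT and the error buffer, so checking both shows why pShader stays NULL (note also that "vs_3_1" is not a valid vertex shader target, while "vs_3_0" is):
ID3DXBuffer* pShader = NULL;
ID3DXBuffer* error_buffer = NULL;
HRESULT hr = D3DXCompileShaderFromFile( L"Transform.txt", NULL, NULL, "vs_main", "vs_3_0",
                                        D3DXSHADER_DEBUG, &pShader, &error_buffer, &g_pConstantTable );
if( FAILED( hr ) )
{
    if( error_buffer != NULL )
    {
        /* The compiler's message explains why compilation failed. */
        OutputDebugStringA( (const char*)error_buffer->GetBufferPointer() );
        error_buffer->Release();
    }
}
else
{
    /* Compilation succeeded, so pShader is valid here. */
    g_pd3dDevice->CreateVertexShader( (DWORD*)pShader->GetBufferPointer(), &g_pVertexShader );
    pShader->Release();
}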
In my custom dissector, I add a gint16 value like this:
gint16 stmp16 = 0;
stmp16 = (gint16)tvb_get_letohs(tvb, suboffset);
proto_tree_add_int_format_value(Din_tree, hf_distanceValue, tvb, suboffset, 2, stmp16, "%lf", stmp16/100.0);
suboffset += 2;
It correctly displays the 16-bit signed gint16 value from the datagram. Its header field is described as:
{ &hf_distanceValue,
  { "Distance Value", "veh.in",
    FT_INT16, BASE_DEC, NULL, 0x0,
    NULL, HFILL }
},
However, when I try to display a 32-bit signed gint32 value, I get this error:
[Dissector bug, protocol CUSTOM: ..\build\epan\proto.c:4128: failed assertion "DISSECTOR_ASSERT_NOT_REACHED"]
The value is fetched in the same manner, using the tvb_get_letohl() function:
gint32 stmp32 = 0;
stmp32 = (gint32)tvb_get_letohl(tvb, suboffset);
proto_tree_add_int_format_value(Din_tree, hf_speedValue, tvb, suboffset, 4, stmp32, "%lf", stmp32/1000000.0);
suboffset += 4;
{ &hf_speedValue,
  { "Speed Value", "veh.in",
    FT_INT32, BASE_DEC, NULL, 0x0,
    NULL, HFILL }
},
The assertion fails at proto_tree_add_int_format_value() when the value is of type gint32; it works fine in the gint16 case.
I'm not sure if this is the problem or not, but you have two fields with the same filter name:
"Distance Value", "veh.in"
"Speed Value", "veh.in"
Normally, you would have something like "veh.distance" and "veh.speed", assuming the abbreviation for your protocol is "veh". Maybe try changing this first to see if the problem is resolved?
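As a sketch of that suggestion (the array name hf is an assumption; the field variables are taken from the question), each field would get its own filter name in the registration:
static hf_register_info hf[] = {
    { &hf_distanceValue,
      { "Distance Value", "veh.distance",   /* distinct filter name */
        FT_INT16, BASE_DEC, NULL, 0x0,
        NULL, HFILL }
    },
    { &hf_speedValue,
      { "Speed Value", "veh.speed",         /* distinct filter name */
        FT_INT32, BASE_DEC, NULL, 0x0,
        NULL, HFILL }
    }
};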
Hi all, I am having some trouble with my PUT call: I'm getting "Invalid key predicate". I have never seen this error before and don't really know what it means. Can anyone see what I am doing wrong here?
Here is my code:
var boxId = 1;

var updateBox = {};
updateBox.x = 5;
updateBox.y = 10;
sap.ui.getCore().getModel("updateBoxModel").update("/Boxes(BoxId=" + boxId + ")", updateBox,
    null, this.successMsg, this.errorMsg);

var updateBoxLog = {};
updateBoxLog.x = 5;
updateBoxLog.y = 10;
sap.ui.getCore().getModel("updateBoxModel").update("/BoxLogs(BoxId=" + boxId + ")", updateBoxLog,
    null, null, null);
The first update works as it should, but the second doesn't work at all. Both tables expect a numeric value. Not sure if this helps, but the BoxLogs table's primary key isn't BoxId.
If BoxId is an alternate key for BoxLogs, you're going to have to enable alternate keys on the OData service and write some supporting code. There is a sample project on GitHub that should provide enough guidance.
I don't know where to initialize a shader constant buffer: before Map(), before Unmap(), or does it make no difference where it is initialized?
See the second parameter of the CreateBuffer function.
ID3D11Device::CreateBuffer(
[in] const D3D11_BUFFER_DESC *pDesc,
[in, optional] const D3D11_SUBRESOURCE_DATA *pInitialData,
[out, optional] ID3D11Buffer **ppBuffer
);
http://msdn.microsoft.com/en-us/library/windows/desktop/ff476501(v=vs.85).aspx
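A minimal sketch of what that parameter does, with assumed names (PerFrameConstants, CreateAndUpdateConstantBuffer): the starting contents go into pInitialData when the buffer is created, and any later update is written between Map() and Unmap() on the device context.
#include <d3d11.h>
#include <cstring>

// Example payload; a constant buffer's size must be a multiple of 16 bytes.
struct PerFrameConstants
{
    float worldViewProj[16];
};

HRESULT CreateAndUpdateConstantBuffer(ID3D11Device* device, ID3D11DeviceContext* context,
                                      ID3D11Buffer** outBuffer)
{
    PerFrameConstants initial = {};                 // CPU-side starting values

    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = sizeof(PerFrameConstants);
    desc.Usage          = D3D11_USAGE_DYNAMIC;      // allows Map()/Unmap() updates later
    desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

    D3D11_SUBRESOURCE_DATA init = {};
    init.pSysMem = &initial;                        // this is the pInitialData parameter

    HRESULT hr = device->CreateBuffer(&desc, &init, outBuffer);
    if (FAILED(hr))
        return hr;

    // Later updates: the new contents are written between Map() and Unmap().
    D3D11_MAPPED_SUBRESOURCE mapped;
    hr = context->Map(*outBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
    if (SUCCEEDED(hr))
    {
        PerFrameConstants updated = {};             // new per-frame values
        memcpy(mapped.pData, &updated, sizeof(updated));
        context->Unmap(*outBuffer, 0);
    }
    return hr;
}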
I'm trying to get around XNA 3.1's automatic clearing of the depth buffer when switching render targets by copying the IDirect3DSurface9 from the depth buffer before the render targets are switched, then restore the depth buffer at a later point.
In the code, getDepthBuffer is a delegate pointing at the IDirect3DDevice9 GetDepthStencilBuffer entry in the vtable. That pointer seems to be correct, but when I try to get the IDirect3DSurface9 pointer the call fails with 0x8876086C (D3DERR_INVALIDCALL), and surfacePtr ends up pointing to 0x00000000.
Any idea why it is not working, and how it could be fixed?
Here's the code:
public static unsafe Texture2D GetDepthStencilBuffer(GraphicsDevice g)
{
    if (g.DepthStencilBuffer.Format != DepthFormat.Depth24Stencil8)
    {
        return null;
    }

    Texture2D t2d = new Texture2D(g, g.DepthStencilBuffer.Width, g.DepthStencilBuffer.Height, 1, TextureUsage.None, SurfaceFormat.Color);
    FieldInfo f = typeof(GraphicsDevice).GetField("pComPtr", BindingFlags.NonPublic | BindingFlags.GetField | BindingFlags.Instance);
    object o = f.GetValue(g);
    void* devicePtr = Pointer.Unbox(f.GetValue(g));
    void* getDepthPtr = AccessVTable(devicePtr, 160);
    void* surfacePtr;

    var getDepthBuffer = (GetDepthStencilBufferDelegate)Marshal.GetDelegateForFunctionPointer(new IntPtr(getDepthPtr), typeof(GetDepthStencilBufferDelegate));
    var rv = getDepthBuffer(&surfacePtr);

    SetData(t2d, 0, surfacePtr, g.DepthStencilBuffer.Width, g.DepthStencilBuffer.Height, (uint)(g.DepthStencilBuffer.Width / 4), D3DFORMAT.D24S8);

    Marshal.Release(new IntPtr(devicePtr));
    Marshal.Release(new IntPtr(getDepthPtr));
    Marshal.Release(new IntPtr(surfacePtr));

    return t2d;
}
XNA 3.1 will not clear your depth-stencil buffer when you change render targets; it will, however, resolve it (so that it's unusable for depth tests) if you are not careful with your render target changes.
For example:
SetRenderTarget(someRenderTarget)
DrawStuff()
SetRenderTarget(null)
SetRenderTarget(someOtherRenderTarget)
Will cause the depth-stencil buffer to be resolved, but the following will not:
SetRenderTarget(someRenderTarget)
DrawStuff()
SetRenderTarget(someOtherRenderTarget)
I'm unsure why this happens with XNA 3.1 (and earlier versions), but ever since figuring that out I've been able to keep the same depth-stencil buffer alive through many render target changes, and even through Clear operations as long as the clear specifies ClearOptions.Target only.
I'm trying to read the output from a geometry shader which is using stream-output to output to a buffer.
The output buffer used by the geometry shader is described like this:
D3D10_BUFFER_DESC vbdesc =
{
    numPoints * sizeof( MESH_VERTEX ),                      // ByteWidth
    D3D10_USAGE_DEFAULT,                                    // Usage
    D3D10_BIND_VERTEX_BUFFER | D3D10_BIND_STREAM_OUTPUT,    // BindFlags
    0,                                                      // CPUAccessFlags
    0                                                       // MiscFlags
};
V_RETURN( pd3dDevice->CreateBuffer( &vbdesc, NULL, &g_pDrawFrom ) );
The geometry shader creates a number of triangles from a single point (at most 12 triangles per point), and if I understand the SDK correctly I have to create a staging resource in order to read the output from the geometry shader on the CPU.
I have declared another buffer resource (this time setting the STAGING flag) like this:
D3D10_BUFFER_DESC sbdesc =
{
    (numPoints * (12*3)) * sizeof( VERTEX_STREAMOUT ),      // ByteWidth
    D3D10_USAGE_STAGING,                                    // Usage
    0,                                                      // BindFlags
    D3D10_CPU_ACCESS_READ,                                  // CPUAccessFlags
    0                                                       // MiscFlags
};
V_RETURN( pd3dDevice->CreateBuffer( &sbdesc, NULL, &g_pStaging ) );
After the application's first draw call the geometry shader has created all the triangles, and they can be drawn. However, after this first draw call I would like to be able to read the vertices output by the geometry shader.
Using the staging buffer resource, I'm trying to do it like this (right after the first draw call):
pd3dDevice->CopyResource(g_pStaging, g_pDrawFrom);
pd3dDevice->Flush();
void *ptr = 0;
HRESULT hr = g_pStaging->Map( D3D10_MAP_READ, NULL, &ptr );
if( FAILED( hr ) )
return hr;
VERTEX_STREAMOUT *mv = (VERTEX_STREAMOUT*)ptr;
g_pStaging->Unmap();
This compiles and doesn't give any errors at runtime. However, I don't seem to be getting any output.
The geometry shader outputs the following:
struct VSSceneStreamOut
{
float4 Pos : POS;
float3 Norm : NORM;
float2 Tex : TEX;
};
and the VERTEX_STREAMOUT is declared like this:
struct VERTEX_STREAMOUT
{
D3DXVECTOR4 Pos;
D3DXVECTOR3 Norm;
D3DXVECTOR2 Tex;
};
Am I missing something here?
I solved the problem by creating the staging resource buffer like this:
D3D10_BUFFER_DESC sbdesc;
ZeroMemory( &sbdesc, sizeof(sbdesc) );
g_pDrawFrom->GetDesc( &sbdesc );
sbdesc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;
sbdesc.Usage = D3D10_USAGE_STAGING;
sbdesc.BindFlags = 0;
sbdesc.MiscFlags = 0;
V_RETURN( pd3dDevice->CreateBuffer( &sbdesc, NULL, &g_pStaging ) );
The problem was with the ByteWidth.
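Not part of the original answer, but since the geometry shader emits a variable number of triangles (up to 12 per point), a stream-output statistics query is one way to learn how many primitives were actually written before reading the mapped data. A rough sketch, reusing pd3dDevice and g_pDrawFrom from the question (the query variable name is made up):
D3D10_QUERY_DESC qdesc;
qdesc.Query     = D3D10_QUERY_SO_STATISTICS;
qdesc.MiscFlags = 0;

ID3D10Query* pSOQuery = NULL;
V_RETURN( pd3dDevice->CreateQuery( &qdesc, &pSOQuery ) );

pSOQuery->Begin();
// ... issue the draw call that streams out to g_pDrawFrom ...
pSOQuery->End();

D3D10_QUERY_DATA_SO_STATISTICS stats;
ZeroMemory( &stats, sizeof(stats) );
while( pSOQuery->GetData( &stats, sizeof(stats), 0 ) == S_FALSE )
{
    // Spin until the GPU has finished; a real application would do other work here.
}

// stats.NumPrimitivesWritten is the number of primitives streamed out, so with a
// triangle list the number of valid VERTEX_STREAMOUT entries is three times that.
UINT64 numVerticesWritten = stats.NumPrimitivesWritten * 3;
pSOQuery->Release();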