DirectX CreateRenderTargetView not properly initialized

For some reason I seem to be unable to initialize my RenderTargetView (it stays NULL), which causes an access violation.
Here is the line that should initialize the RenderTargetView:
hr = g_pd3dDevice->CreateRenderTargetView(pBackBuffer, NULL, &g_pRenderTargetView);
pBackBuffer is the back buffer and it does get a value; it isn't NULL. However, the render target view stays NULL throughout the process. Any idea why?

To trace Direct3D 11 errors, it is best to create the D3D11 device with the debug layer enabled; it prints error messages to the output window in Visual Studio when you launch your app.
// Create the device and swap chain, with the debug layer in debug builds
HRESULT hr;
UINT flags = D3D11_CREATE_DEVICE_BGRA_SUPPORT;
#if defined( DEBUG ) || defined( _DEBUG )
flags |= D3D11_CREATE_DEVICE_DEBUG;
#endif
D3D_FEATURE_LEVEL FeatureLevelsRequested = D3D_FEATURE_LEVEL_11_0; // Use D3D 11.0
UINT numLevelsRequested = 1;                                       // Number of levels
D3D_FEATURE_LEVEL FeatureLevelsSupported;
if (FAILED(hr = D3D11CreateDeviceAndSwapChain(NULL,
                                              D3D_DRIVER_TYPE_HARDWARE,
                                              NULL,
                                              flags,   // note: pass the flags here, not 0
                                              &FeatureLevelsRequested,
                                              numLevelsRequested,
                                              D3D11_SDK_VERSION,
                                              &sd_,
                                              &swap_chain_,
                                              &d3ddevice_,
                                              &FeatureLevelsSupported,
                                              &immediate_context_)))
{
    MessageBox(hWnd, L"Create device and swap chain failed!", L"Error", 0);
}

I think you are failing to create the render target view because the second parameter is NULL:
HRESULT ID3D11Device::CreateRenderTargetView(
    [in]  ID3D11Resource                      *pResource,
    [in]  const D3D11_RENDER_TARGET_VIEW_DESC *pDesc,    // <== pass a valid description here
    [out] ID3D11RenderTargetView              **ppRTView
);
(Strictly speaking, the API does accept NULL and then creates a default view of the whole resource, but an explicit description rules out format mismatches.) You can initialize it to something like this:
D3D11_RENDER_TARGET_VIEW_DESC desc = {};
desc.Format        = DXGI_FORMAT_B8G8R8A8_UNORM;   // must match the swap chain's format
desc.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
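For completeness, here is a minimal sketch of the usual back-buffer-to-RTV sequence with the result actually checked; g_pSwapChain is an assumed name for the asker's swap chain, and desc is the description filled in above:
ID3D11Texture2D* pBackBuffer = NULL;
HRESULT hr = g_pSwapChain->GetBuffer( 0, __uuidof(ID3D11Texture2D),
                                      reinterpret_cast<void**>(&pBackBuffer) );
if ( FAILED(hr) ) return hr;
// With the debug layer enabled, a failure here prints the exact reason
// (for example a format mismatch with the swap chain) to the output window.
hr = g_pd3dDevice->CreateRenderTargetView( pBackBuffer, &desc, &g_pRenderTargetView );
pBackBuffer->Release();   // GetBuffer added a reference; the view holds its own
if ( FAILED(hr) ) return hr;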

Related

Desktop Duplication API returns empty frame

I am aware that there are already a few questions asking this or similar things, and I dove into a few of them, but without any success.
I am trying to capture a "screenshot" of my display using the Desktop Duplication API and process the pixel data of it. Later I would like to do that at least 30 times per second, but that's a separate concern.
For now, I tried Microsoft's sample: https://github.com/microsoftarchive/msdn-code-gallery-microsoft/tree/master/Official%20Windows%20Platform%20Sample/DXGI%20desktop%20duplication%20sample
I successfully saved a picture of the screen and accessed the pixel data with that code.
DirectX::ScratchImage image;
hr = DirectX::CaptureTexture(m_Device, m_DeviceContext, m_AcquiredDesktopImage, image);
hr = DirectX::SaveToDDSFile(image.GetImages(), image.GetImageCount(), image.GetMetadata(), DirectX::DDS_FLAGS_NONE, L"test.dds");
uint8_t* pixels;
pixels = image.GetPixels();
Now I wanted to break the sample code down to the basics I need. As I am not familiar with DirectX, I am having a hard time doing that.
I came up with the following code, which runs without errors but produces an empty picture. I only check hr in debug mode; I am aware that this is bad practice and dirty!
#include <d3d11.h>
#include <dxgi1_2.h>
#include <DirectXTex.h>   // DirectX::CaptureTexture / DirectX::SaveToDDSFile

int main()
{
    HRESULT hr = S_OK;
    ID3D11Device* m_Device;
    ID3D11DeviceContext* m_DeviceContext;

    // Driver types supported
    D3D_DRIVER_TYPE DriverTypes[] =
    {
        D3D_DRIVER_TYPE_HARDWARE,
        D3D_DRIVER_TYPE_WARP,
        D3D_DRIVER_TYPE_REFERENCE,
    };
    UINT NumDriverTypes = ARRAYSIZE(DriverTypes);

    // Feature levels supported
    D3D_FEATURE_LEVEL FeatureLevels[] =
    {
        D3D_FEATURE_LEVEL_11_0,
        D3D_FEATURE_LEVEL_10_1,
        D3D_FEATURE_LEVEL_10_0,
        D3D_FEATURE_LEVEL_9_1
    };
    UINT NumFeatureLevels = ARRAYSIZE(FeatureLevels);
    D3D_FEATURE_LEVEL FeatureLevel;

    // Create device
    for (UINT DriverTypeIndex = 0; DriverTypeIndex < NumDriverTypes; ++DriverTypeIndex)
    {
        hr = D3D11CreateDevice(nullptr, DriverTypes[DriverTypeIndex], nullptr, 0,
                               FeatureLevels, NumFeatureLevels, D3D11_SDK_VERSION,
                               &m_Device, &FeatureLevel, &m_DeviceContext);
        if (SUCCEEDED(hr))
        {
            // Device creation succeeded, no need to loop anymore
            break;
        }
    }

    IDXGIOutputDuplication* m_DeskDupl;
    IDXGIOutput1* DxgiOutput1 = nullptr;
    IDXGIOutput* DxgiOutput = nullptr;
    IDXGIAdapter* DxgiAdapter = nullptr;
    IDXGIDevice* DxgiDevice = nullptr;
    UINT Output = 0;

    hr = m_Device->QueryInterface(__uuidof(IDXGIDevice), reinterpret_cast<void**>(&DxgiDevice));
    hr = DxgiDevice->GetParent(__uuidof(IDXGIAdapter), reinterpret_cast<void**>(&DxgiAdapter));
    DxgiDevice->Release();
    DxgiDevice = nullptr;

    hr = DxgiAdapter->EnumOutputs(Output, &DxgiOutput);
    DxgiAdapter->Release();
    DxgiAdapter = nullptr;

    hr = DxgiOutput->QueryInterface(__uuidof(IDXGIOutput1), reinterpret_cast<void**>(&DxgiOutput1));
    DxgiOutput->Release();
    DxgiOutput = nullptr;

    hr = DxgiOutput1->DuplicateOutput(m_Device, &m_DeskDupl);

    IDXGIResource* DesktopResource = nullptr;
    DXGI_OUTDUPL_FRAME_INFO FrameInfo;
    hr = m_DeskDupl->AcquireNextFrame(500, &FrameInfo, &DesktopResource);

    ID3D11Texture2D* m_AcquiredDesktopImage;
    hr = DesktopResource->QueryInterface(__uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&m_AcquiredDesktopImage));
    DesktopResource->Release();
    DesktopResource = nullptr;

    DirectX::ScratchImage image;
    hr = DirectX::CaptureTexture(m_Device, m_DeviceContext, m_AcquiredDesktopImage, image);
    hr = DirectX::SaveToDDSFile(image.GetImages(), image.GetImageCount(), image.GetMetadata(), DirectX::DDS_FLAGS_NONE, L"test.dds");

    uint8_t* pixels;
    pixels = image.GetPixels();

    hr = m_DeskDupl->ReleaseFrame();
}
Could anyone give me a hint what is wrong with this code?
EDIT:
Just found the code snippet below and integrated it into my code.
Now it works!
Lessons learnt:
- actually output/process hr!
- AcquireNextFrame might not work on the first try (?)
I might update this post again with better code, with a functioning loop.
int lTryCount = 4;
do
{
    Sleep(100);
    hr = m_DeskDupl->AcquireNextFrame(250, &FrameInfo, &DesktopResource);
    if (SUCCEEDED(hr))
        break;
    if (hr == DXGI_ERROR_WAIT_TIMEOUT)
    {
        continue;
    }
    else if (FAILED(hr))
        break;
} while (--lTryCount > 0);
AcquireNextFrame is allowed to return a null resource (texture) because it returns on either a change in the desktop image or a change related to the pointer:
AcquireNextFrame acquires a new desktop frame when the operating system either updates the desktop bitmap image or changes the shape or position of a hardware pointer.
When you start frame acquisition you will typically get the first desktop image soon, but you may also receive a few pointer-only notifications before the image arrives.
You should not limit yourself to 4 attempts, and you don't need to sleep within the loop. Just keep polling for the image. To avoid an endless loop, it makes more sense to track the total time spent in the loop and cap it at, for example, one second, as in the sketch below.
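A time-bounded version of that polling loop might look like this; it is a sketch that reuses hr, FrameInfo, DesktopResource and m_DeskDupl from the question's code, and the one-second budget and 100 ms timeout are arbitrary choices:
#include <chrono>
// Keep polling until a frame actually carries a desktop image; pointer-only
// updates have FrameInfo.LastPresentTime == 0 and are released and skipped.
const auto deadline = std::chrono::steady_clock::now() + std::chrono::seconds(1);
for (;;)
{
    hr = m_DeskDupl->AcquireNextFrame(100, &FrameInfo, &DesktopResource);
    if (SUCCEEDED(hr) && FrameInfo.LastPresentTime.QuadPart != 0)
        break;                                   // got a real desktop image
    if (SUCCEEDED(hr))
    {
        // Pointer-only notification: give the frame back and try again.
        DesktopResource->Release();
        DesktopResource = nullptr;
        m_DeskDupl->ReleaseFrame();
    }
    else if (hr != DXGI_ERROR_WAIT_TIMEOUT)
        break;                                   // real failure, give up
    if (std::chrono::steady_clock::now() >= deadline)
        break;                                   // time budget exhausted
}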
See also:
AcquireNextFrame() never grabs an updated image, always blank

Install NDIS filter driver unbound

I have built the "NDIS 6.0 Filter Driver" WinDDK sample (ndislwf.sys) and built the BindView sample to install the "NDIS 6.0 Filter Driver".
It installs fine, but it is always bound to all network interfaces by default.
Is it possible to install an NDIS filter driver unbound from all network interfaces, so that I can then bind it only to certain interfaces?
The code from BindView uses SetupCopyOEMInfW to copy the driver's INF into %windir%\inf:
if ( !SetupCopyOEMInfW(lpszInfFullPath,
                       DirWithDrive,  // Other files are in the same dir. as the primary INF
                       SPOST_PATH,    // First param is a path to the INF
                       0,             // Default copy style
                       NULL,          // Name of the INF after it's copied to %windir%\inf
                       0,             // Max buf. size for the above
                       NULL,          // Required size if non-null
                       NULL) ) {      // Optionally get the filename part of the INF name after it is copied
    dwError = GetLastError();
And then it uses INetCfgClassSetup::Install():
INetCfgClassSetup *pncClassSetup = NULL;
INetCfgComponent *pncc = NULL;
OBO_TOKEN OboToken;
HRESULT hr = S_OK;

//
// OBO_TOKEN specifies on whose behalf this component is being installed.
// Set it to OBO_USER so that szComponentId will be installed on behalf of the user.
//
ZeroMemory( &OboToken, sizeof(OboToken) );
OboToken.Type = OBO_USER;

//
// Get the component's setup class reference.
//
hr = pnc->QueryNetCfgClass( pguidClass,
                            IID_INetCfgClassSetup,
                            (void**)&pncClassSetup );
if ( hr == S_OK ) {
    hr = pncClassSetup->Install( szComponentId,
                                 &OboToken,
                                 0,
                                 0,       // Upgrade from build number
                                 NULL,    // Answer file name
                                 NULL,    // Answer file section name
                                 &pncc ); // Reference after the component is installed
    if ( S_OK == hr ) {
        //
        // We don't need to use pncc (INetCfgComponent), release it
        //
        ReleaseRef( pncc );
    }
    ReleaseRef( pncClassSetup );
}
Recent versions of Windows 10 have a feature for this. Put this line into your INF:
HKR, Ndi\Interfaces, DisableDefaultBindings, 0x00010001, 1
Add that line in the same section that has the FilterMediaTypes directive.
That directive will create all new bindings to your filter in the disabled state. You can manually re-enable them in the same ways as before:
from the command line (Set-NetAdapterBinding);
from the GUI (run ncpa.cpl, open the adapter properties, check the box next to the filter driver); or
from INetCfg code (INetCfgBindingPath::Enable), as sketched below.
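For the INetCfg route, the enabling side could look roughly like this. It is a sketch with error handling elided; pncc is the INetCfgComponent for the filter as in the install code above, pnc is the INetCfg instance, and which paths you enable (EBP_BELOW here) depends on how you want to match adapters:
INetCfgComponentBindings* pBindings = NULL;
if (SUCCEEDED(pncc->QueryInterface(IID_INetCfgComponentBindings,
                                   (void**)&pBindings)))
{
    IEnumNetCfgBindingPath* pEnumPaths = NULL;
    if (SUCCEEDED(pBindings->EnumBindingPaths(EBP_BELOW, &pEnumPaths)))
    {
        INetCfgBindingPath* pPath = NULL;
        ULONG count = 0;
        while (pEnumPaths->Next(1, &pPath, &count) == S_OK)
        {
            // Inspect the path here (e.g. match a specific adapter)
            // and enable only the bindings you want.
            pPath->Enable(TRUE);
            pPath->Release();
        }
        pEnumPaths->Release();
    }
    pBindings->Release();
}
// Commit the changes.
pnc->Apply();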

Loading .fbx models into directX 10

I'm trying to load meshes into DirectX 10. I've created a bunch of classes that handle it and allow me to load a mesh with only a single line of code in my main game class.
However, when I run the program the mesh renders incorrectly (screenshot omitted).
In the debug output window the following errors keep appearing:
D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that Semantic 'TEXCOORD' is defined for mismatched hardware registers between the output stage and input stage. [ EXECUTION ERROR #343: DEVICE_SHADER_LINKAGE_REGISTERINDEX ]
D3D10: ERROR: ID3D10Device::DrawIndexed: Input Assembler - Vertex Shader linkage error: Signatures between stages are incompatible. The reason is that the input stage requires Semantic/Index (POSITION,0) as input, but it is not provided by the output stage. [ EXECUTION ERROR #342: DEVICE_SHADER_LINKAGE_SEMANTICNAME_NOT_FOUND ]
The thing is, I've no idea how to fix this. The code I'm using has worked before; I simply brought it all into a new project of mine. There are no build errors, and the errors above appear only while the game is running.
The .fx file is as follows:
float4x4 matWorld;
float4x4 matView;
float4x4 matProjection;
struct VS_INPUT
{
float4 Pos:POSITION;
float2 TexCoord:TEXCOORD;
};
struct PS_INPUT
{
float4 Pos:SV_POSITION;
float2 TexCoord:TEXCOORD;
};
Texture2D diffuseTexture;
SamplerState diffuseSampler
{
Filter = MIN_MAG_MIP_POINT;
AddressU = WRAP;
AddressV = WRAP;
};
//
// Vertex Shader
//
PS_INPUT VS( VS_INPUT input )
{
PS_INPUT output=(PS_INPUT)0;
float4x4 viewProjection=mul(matView,matProjection);
float4x4 worldViewProjection=mul(matWorld,viewProjection);
output.Pos=mul(input.Pos,worldViewProjection);
output.TexCoord=input.TexCoord;
return output;
}
//
// Pixel Shader
//
float4 PS(PS_INPUT input ) : SV_Target
{
return diffuseTexture.Sample(diffuseSampler,input.TexCoord);
//return float4(1.0f,1.0f,1.0f,1.0f);
}
RasterizerState NoCulling
{
FILLMODE=SOLID;
CULLMODE=NONE;
};
technique10 Render
{
pass P0
{
SetVertexShader( CompileShader( vs_4_0, VS() ) );
SetGeometryShader( NULL );
SetPixelShader( CompileShader( ps_4_0, PS() ) );
SetRasterizerState(NoCulling);
}
}
In my game, the .fx file and model are loaded and set up as follows.
Loading the shader file:
//Set the shader flags - BMD
DWORD dwShaderFlags = D3D10_SHADER_ENABLE_STRICTNESS;
#if defined( DEBUG ) || defined( _DEBUG )
dwShaderFlags |= D3D10_SHADER_DEBUG;
#endif
ID3D10Blob* pErrorBuffer = NULL;
if( FAILED( D3DX10CreateEffectFromFile( TEXT("TransformedTexture.fx"), NULL, NULL, "fx_4_0",
                                        dwShaderFlags, 0, md3dDevice, NULL, NULL,
                                        &m_pEffect, &pErrorBuffer, NULL ) ) )
{
    char* pErrorStr = (char*)pErrorBuffer->GetBufferPointer();
    //If the creation of the effect fails then a message box will be shown
    MessageBoxA( NULL, pErrorStr, "Error", MB_OK );
    return false;
}
//Get the technique called Render from the effect; we need this for rendering later on
m_pTechnique = m_pEffect->GetTechniqueByName("Render");
//Number of elements in the layout
UINT numElements = TexturedLitVertex::layoutSize;
//Get the pass description; we need this to bind the vertex layout to the pipeline
D3D10_PASS_DESC PassDesc;
m_pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc );
//Create the input layout to describe the incoming buffer to the input assembler
if ( FAILED( md3dDevice->CreateInputLayout( TexturedLitVertex::layout, numElements,
                                            PassDesc.pIAInputSignature,
                                            PassDesc.IAInputSignatureSize,
                                            &m_pVertexLayout ) ) )
{
    return false;
}
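For the stages to link, the element array behind TexturedLitVertex::layout must declare exactly the semantics the shader's VS_INPUT consumes. A layout matching the .fx file above would look something like the sketch below; if the real TexturedLitVertex declares extra or different elements (a NORMAL, say), that is exactly the kind of mismatch the linkage errors complain about:
D3D10_INPUT_ELEMENT_DESC layout[] =
{
    // semantic    index  format                        slot  offset  class                        step
    { "POSITION",  0,     DXGI_FORMAT_R32G32B32_FLOAT,  0,    0,      D3D10_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD",  0,     DXGI_FORMAT_R32G32_FLOAT,     0,    12,     D3D10_INPUT_PER_VERTEX_DATA, 0 },
};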
Model loading:
m_pTestRenderable = new CRenderable();
//m_pTestRenderable->create<TexturedVertex>(md3dDevice,8,6,vertices,indices);
m_pModelLoader = new CModelLoader();
m_pTestRenderable = m_pModelLoader->loadModelFromFile( md3dDevice, "armoredrecon.fbx" );
m_pGameObjectTest = new CGameObject();
m_pGameObjectTest->setRenderable( m_pTestRenderable );
// Set the primitive topology: how we are going to interpret the vertices in the vertex buffer
md3dDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLELIST );
if ( FAILED( D3DX10CreateShaderResourceViewFromFile( md3dDevice, TEXT("armoredrecon_diff.png"),
                                                     NULL, NULL, &m_pTextureShaderResource, NULL ) ) )
{
    MessageBox( NULL, TEXT("Can't load texture"), TEXT("Error"), MB_OK );
    return false;
}
m_pDiffuseTextureVariable = m_pEffect->GetVariableByName( "diffuseTexture" )->AsShaderResource();
m_pDiffuseTextureVariable->SetResource( m_pTextureShaderResource );
Finally, the draw function code:
//All drawing will occur between the clear and present
m_pViewMatrixVariable->SetMatrix( (float*)m_matView );
m_pWorldMatrixVariable->SetMatrix( (float*)m_pGameObjectTest->getWorld() );
//Get the stride (size) of a vertex; we need this to tell the pipeline the size of one vertex
UINT stride = m_pTestRenderable->getStride();
//The offset from the start of the buffer to where our vertices are located
UINT offset = m_pTestRenderable->getOffset();
ID3D10Buffer* pVB = m_pTestRenderable->getVB();
//Bind the vertex buffer to the input assembler stage
md3dDevice->IASetVertexBuffers( 0, 1, &pVB, &stride, &offset );
md3dDevice->IASetIndexBuffer( m_pTestRenderable->getIB(), DXGI_FORMAT_R32_UINT, 0 );
//Get the description of the technique; we need this to loop through each pass in the technique
D3D10_TECHNIQUE_DESC techDesc;
m_pTechnique->GetDesc( &techDesc );
//Loop through the passes in the technique
for( UINT p = 0; p < techDesc.Passes; ++p )
{
    //Get the pass at the current index and apply it
    m_pTechnique->GetPassByIndex( p )->Apply( 0 );
    //Draw call
    md3dDevice->DrawIndexed( m_pTestRenderable->getNumOfIndices(), 0, 0 );
    //m_pD3D10Device->Draw(m_pTestRenderable->getNumOfVerts(),0);
}
Is there anything I've clearly done wrong, or am I missing something? I've spent two weeks trying to work out what on earth I've done wrong, to no avail.
Any insight a fresh pair of eyes could give would be great.

Memory Issues with cvShowImage and Kinect SDK: Skeletal Viewer

I'm using cvSetData to get the RGB frame into a format I can use with OpenCV.
I modified the SkeletalViewer sample slightly to produce the RGB stream.
void CSkeletalViewerApp::Nui_GotVideoAlert( )
{
    const NUI_IMAGE_FRAME* pImageFrame = NULL;
    IplImage* kinectColorImage = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 4);

    HRESULT hr = NuiImageStreamGetNextFrame(
        m_pVideoStreamHandle,
        0,
        &pImageFrame );
    if( FAILED( hr ) )
    {
        return;
    }

    NuiImageBuffer* pTexture = pImageFrame->pFrameTexture;
    KINECT_LOCKED_RECT LockedRect;
    pTexture->LockRect( 0, &LockedRect, NULL, 0 );
    if( LockedRect.Pitch != 0 )
    {
        BYTE* pBuffer = (BYTE*) LockedRect.pBits;
        m_DrawVideo.DrawFrame( (BYTE*) pBuffer );
        cvSetData(kinectColorImage, (BYTE*) pBuffer, kinectColorImage->widthStep);
        cvShowImage("Color Image", kinectColorImage);
        //cvReleaseImage( &kinectColorImage );
        cvWaitKey(10);
    }
    else
    {
        OutputDebugString( L"Buffer length of received texture is bogus\r\n" );
    }
    cvReleaseImage(&kinectColorImage);
    NuiImageStreamReleaseFrame( m_pVideoStreamHandle, pImageFrame );
}
With the cvReleaseImage call, I would get a cvException error (not exactly sure which one, as it didn't specify). Without cvReleaseImage, I would get the RGB video running in an OpenCV window, but it would eventually crash because it ran out of memory.
How should I release the image properly?
Just solved this problem.
After a bunch of sleuthing with breakpoints and the debugger, it appears the problem has to do with the pointers used in cvSetData. My best guess is that Nui_GotVideoAlert() updates the address pointed to by pBuffer before cvReleaseImage is called. In addition, cvSetData never appears to copy the bytes from this address.
What happens then is that cvReleaseImage is called on an address that is no longer valid.
I fixed this by declaring kinectColorImage at the top of NuiImpl.cpp, calling cvSetData in Nui_GotVideoAlert(), and only calling cvReleaseImage in the Nui_Uninit() method. This way kinectColorImage just gets updated, instead of a new IplImage being created on each call of Nui_GotVideoAlert(). The structure is sketched below.
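In outline, the fix looks like this; it is a sketch, with method names following the SkeletalViewer sample and the original error handling omitted:
// NuiImpl.cpp (sketch)
static IplImage* kinectColorImage = NULL;   // one header, reused for the app's lifetime

void CSkeletalViewerApp::Nui_Init()
{
    // ... existing initialization ...
    kinectColorImage = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 4);
}

void CSkeletalViewerApp::Nui_GotVideoAlert()
{
    // ... acquire the frame and LockedRect as before ...
    cvSetData(kinectColorImage, (BYTE*)LockedRect.pBits, kinectColorImage->widthStep);
    cvShowImage("Color Image", kinectColorImage);
    cvWaitKey(10);
    // no cvReleaseImage here: the header is reused and the data belongs to the Kinect runtime
}

void CSkeletalViewerApp::Nui_Uninit()
{
    // ... existing teardown ...
    if (kinectColorImage)
        cvReleaseImage(&kinectColorImage);
}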
That's strange. As far as I know, cvReleaseImage releases both the image header and the image data. I wrote the piece of code below, and in this particular example cvReleaseImage does not free the buffer that contains the data. There I didn't use cvSetData; I just updated the pointer to the image data. If you uncomment the commented lines and comment out the ones just below each, the program still runs, but you'll get some memory leaks. I used OpenCV 2.2 (this is the legacy interface).
#include <opencv/cv.h>
#include <opencv/highgui.h>  /* needed for cvShowImage / cvWaitKey */
#include <stdlib.h>

#define NLOOPS 1000

int main(void){
    int i, j;
    char *buff = (char *) malloc( sizeof(char) * 3 * 640 * 480 );
    for( i = 0; i < 640 * 480 * 3; i++ ) buff[i] = 128;
    j = 0;
    while( j++ < NLOOPS ){
        IplImage *im = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 3);
        //cvSetData(im, buff, im->widthStep); ---> If you use this version you'll get memory leaks. Comment the line below.
        im->imageData = buff;
        cvWaitKey(4);
        cvShowImage("kk", im);
        //cvReleaseImageHeader(&im); ---> If you use this version you'll get memory leaks. Comment the line below.
        cvReleaseImage(&im);
        free(im); /* no-op: cvReleaseImage has already set im to NULL */
    }
    free(buff);
    return 0;
}

C++ function to do DxDiag "Direct3D Acceleration" detection

Microsoft's DxDiag can detect whether a system has "Direct3D Acceleration".
If the system lacks the capability, DxDiag will report "Direct3D Acceleration not available" and will print to the console: "Direct3D functionality not available. You should verify that the driver is a final version from the hardware manufacturer."
I would like to do the same from a C++ function.
I ran some tests, and the following function seems to do the job.
Any better ideas?
Thank you.
Alessandro
#include <ddraw.h>
#include <atlbase.h>

bool has3D()
{
    CComPtr< IDirectDraw > dd;
    HRESULT hr = ::DirectDrawCreate( 0, &dd, 0 );
    if ( hr != DD_OK ) return false;

    DDCAPS hel_caps, hw_caps;
    ::ZeroMemory( &hel_caps, sizeof( DDCAPS ) );
    ::ZeroMemory( &hw_caps, sizeof( DDCAPS ) );
    hw_caps.dwSize = sizeof( DDCAPS );
    hel_caps.dwSize = sizeof( DDCAPS );

    hr = dd->GetCaps( &hw_caps, &hel_caps );
    if ( hr != DD_OK ) return false;

    return (hw_caps.dwCaps & DDCAPS_3D) && (hel_caps.dwCaps & DDCAPS_3D);
}
As DirectDraw is now deprecated, it may be preferable to use the Direct3D functions.
If the purpose is to detect whether 3D acceleration is available to an application, I would initialize Direct3D and then check whether the HAL device type is available:
LPDIRECT3D9 d3d = Direct3DCreate9( D3D_SDK_VERSION );
if ( d3d == NULL ) return false;          // runtime not available at all

D3DCAPS9 caps;
HRESULT hr = d3d->GetDeviceCaps( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps );
d3d->Release();                           // don't leak the interface
if ( FAILED(hr) )
{
    return false;                         // no HAL device: no hardware acceleration
}
return true;
You can check the validity of this code by forcing software rendering in the DirectX Control Panel: check the "Software only" checkbox in the Direct3D tab.
Test the code with and without the checkbox checked and see if it suits your needs.
You can also access DxDiag programmatically via the IDxDiagProvider and IDxDiagContainer COM interfaces.
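A minimal sketch of that route follows. It assumes COM has been initialized, and the container and property names (e.g. L"szD3DStatusEnglish") are taken from the DxDiag sample code and should be verified against dxdiag.h:
#include <dxdiag.h>
#include <atlbase.h>

bool d3dAcceleratedViaDxDiag()
{
    CComPtr<IDxDiagProvider> provider;
    if (FAILED(provider.CoCreateInstance(CLSID_DxDiagProvider)))
        return false;

    DXDIAG_INIT_PARAMS params = { 0 };
    params.dwSize = sizeof(params);
    params.dwDxDiagHeaderVersion = DXDIAG_DX9_SDK_VERSION;
    if (FAILED(provider->Initialize(&params)))
        return false;

    // Walk the container tree the same way dxdiag.exe does.
    CComPtr<IDxDiagContainer> root, displays, device0;
    if (FAILED(provider->GetRootContainer(&root)) ||
        FAILED(root->GetChildContainer(L"DxDiag_DisplayDevices", &displays)) ||
        FAILED(displays->GetChildContainer(L"0", &device0)))  // first display device
        return false;

    CComVariant status;
    if (FAILED(device0->GetProp(L"szD3DStatusEnglish", &status)) || status.vt != VT_BSTR)
        return false;
    return wcscmp(status.bstrVal, L"Enabled") == 0;
}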
