DirectX11 pass current backbuffer as a resource to pixel shader

I'm trying to hook the Direct3D 11 "Present" function and pass the current backbuffer and the current z-buffer (depth stencil) as textures to my pixel shader.
I use the following code in my hooked "Present" function:
ID3D11RenderTargetView* pRT = NULL;
ID3D11DepthStencilView* pDS = NULL;
pContext->OMGetRenderTargets(1, &pRT, &pDS);
if (pRT != NULL)
{
    ID3D11Texture2D* pBackBuffer = NULL;
    pSwapChain->GetBuffer(0, __uuidof(*pBackBuffer), (LPVOID*)&pBackBuffer);

    D3D11_TEXTURE2D_DESC bbDesc;
    pBackBuffer->GetDesc(&bbDesc);

    ID3D11ShaderResourceView* g_refRes = NULL;
    D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
    shaderResourceViewDesc.Format = bbDesc.Format;
    shaderResourceViewDesc.ViewDimension = D3D_SRV_DIMENSION_UNKNOWN;
    shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
    shaderResourceViewDesc.Texture2D.MipLevels = bbDesc.MipLevels;

    HRESULT hr = d3d11Device->CreateShaderResourceView(pBackBuffer, &shaderResourceViewDesc, &g_refRes);
    d3d11DevCon->PSSetShaderResources(0, 1, &g_refRes);
}
But when I call CreateShaderResourceView() I get E_INVALIDARG.
Can anybody tell me how to pass the current backbuffer and z-buffer as textures to a pixel shader?
Thanks a lot.
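For reference, a minimal sketch of how the shader-resource view for a non-multisampled swap-chain buffer might be built. This assumes the swap chain was created with DXGI_USAGE_SHADER_INPUT in its BufferUsage; without that, the buffer lacks the shader-resource bind flag and CreateShaderResourceView will fail regardless of the view description:

// Sketch: bind the swap-chain buffer to the pixel shader.
// Assumes d3d11Device/pContext/pSwapChain as in the code above and a
// non-multisampled, shader-input-capable swap chain.
ID3D11Texture2D* pBackBuffer = NULL;
HRESULT hr = pSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), (LPVOID*)&pBackBuffer);
if (SUCCEEDED(hr))
{
    D3D11_TEXTURE2D_DESC bbDesc;
    pBackBuffer->GetDesc(&bbDesc);

    D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
    srvDesc.Format = bbDesc.Format;
    srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D; // a concrete dimension, not _UNKNOWN
    srvDesc.Texture2D.MostDetailedMip = 0;
    srvDesc.Texture2D.MipLevels = bbDesc.MipLevels;

    ID3D11ShaderResourceView* pSRV = NULL;
    hr = d3d11Device->CreateShaderResourceView(pBackBuffer, &srvDesc, &pSRV);
    if (SUCCEEDED(hr))
    {
        // The buffer cannot be sampled while it is still bound as the render
        // target, so unbind it (or copy it to another texture) before reading.
        pContext->PSSetShaderResources(0, 1, &pSRV);
        pSRV->Release();
    }
    pBackBuffer->Release();
}

The depth buffer can be exposed the same way, but its texture has to be created with a typeless format (e.g. DXGI_FORMAT_R24G8_TYPELESS) so that both a depth-stencil view and a shader-resource view can be created over it.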

Related

Resize D3D11Texture2D DirectX 11

I would like to resize a D3D11Texture2D to make it smaller. For example, I have a texture at 1920x1080 and I would like to scale it down to 1280x720.
Just so you know, I'm not drawing at all; I just want to get the byte buffer scaled. Here is my code:
if (mRealTexture == nullptr) {
D3D11_TEXTURE2D_DESC description;
texture2D->GetDesc(&description);
description.BindFlags = 0;
description.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
description.Usage = D3D11_USAGE_STAGING;
description.MiscFlags = 0;
hr = mDevice->CreateTexture2D(&description, NULL, &mRealTexture);
if (FAILED(hr)) {
if (mRealTexture) {
mRealTexture->Release();
mRealTexture = nullptr;
}
return NULL;
}
}
mImmediateContext->CopyResource(mRealTexture, texture2D);
if (mScaledTexture == nullptr) {
D3D11_TEXTURE2D_DESC description;
texture2D->GetDesc(&description);
description.CPUAccessFlags = D3D11_CPU_ACCESS_READ | D3D11_CPU_ACCESS_WRITE;
description.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
description.Width = 1440;
description.Height = 585;
description.MipLevels = 4;
description.ArraySize = 1;
description.SampleDesc.Count = 1;
description.SampleDesc.Quality = 0;
description.Usage = D3D11_USAGE_DEFAULT;
hr = mDevice->CreateTexture2D(&description, NULL, &mScaledTexture);
if (FAILED(hr)) {
if (mScaledTexture) {
mScaledTexture->Release();
mScaledTexture = nullptr;
}
return NULL;
}
}
// I want to copy mRealTexture onto mScaledTexture, map the scaled texture and get the buffer.
Thanks for your help.
Having thought about this, you are left with only a couple of options.
1 - You create a viewport etc. and render to your target size with a full-screen quad, which I get the feeling you don't want to do.
2 - You roll your own scaling, which isn't too bad, and scale the texture as you copy the data from one buffer to another.
Option 2 isn't too bad: the roughest scaling would read single points based on the scaling ratio, while a more accurate version would average a number of samples based on a weighted grid (the weights need to be recalculated for each pixel you visit on your target); see the sketch below.
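For illustration, a rough sketch of the point-sampling variant for an RGBA8 texture, assuming the source has already been copied into the mappable staging texture (mRealTexture above) and that dstWidth/dstHeight are the target dimensions you want:

// Nearest-neighbour downscale of an RGBA8 staging texture into a CPU buffer.
D3D11_MAPPED_SUBRESOURCE mapped;
HRESULT hr = mImmediateContext->Map(mRealTexture, 0, D3D11_MAP_READ, 0, &mapped);
if (SUCCEEDED(hr)) {
    D3D11_TEXTURE2D_DESC srcDesc;
    mRealTexture->GetDesc(&srcDesc);

    std::vector<uint8_t> scaled(dstWidth * dstHeight * 4);
    const uint8_t* src = static_cast<const uint8_t*>(mapped.pData);

    for (UINT y = 0; y < dstHeight; ++y) {
        const UINT srcY = y * srcDesc.Height / dstHeight;
        const uint8_t* srcRow = src + srcY * mapped.RowPitch; // RowPitch may include padding
        uint8_t* dstRow = &scaled[y * dstWidth * 4];
        for (UINT x = 0; x < dstWidth; ++x) {
            const UINT srcX = x * srcDesc.Width / dstWidth;
            memcpy(dstRow + x * 4, srcRow + srcX * 4, 4); // copy one pixel
        }
    }
    mImmediateContext->Unmap(mRealTexture, 0);
    // 'scaled' now holds the dstWidth x dstHeight byte buffer.
}

The weighted-grid version works the same way, except each destination pixel averages the block of source pixels it covers instead of copying a single one.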

Fill CubeTexture with data

I'm puzzled as to why this isn't working.
I'm trying to add texture data to each of the cube texture's faces. For some reason, only the first (+x) face works. The MSDN documentation is quite sparse, but it looks like this should do the trick:
// mip-level 0 data
// R8G8B8A8 texture
uint32_t sizeWidth = textureWidth * sizeof(uint8_t) * 4;
if (isCubeTexture)
{
for (uint32_t index = 0; index < gCubemapNumTextures; ++index)
{
const uint32_t subResourceID = D3D11CalcSubresource(0, index, 1);
context->UpdateSubresource(mTexture, subResourceID, NULL, &textureData.at(sizeWidth * textureHeight * index), sizeWidth, 0);
}
}
When debugging and looking at the faces, they are all just black except the first face, which seems to load fine. So obviously I am doing something wrong; how do you properly upload cube texture data to all the faces?
EDIT: the following parameters are used to create the texture:
D3D11_TEXTURE2D_DESC textureDesc;
ZeroMemory(&textureDesc, sizeof(D3D11_TEXTURE2D_DESC));
textureDesc.Width = textureWidth;
textureDesc.Height = textureHeight;
textureDesc.ArraySize = isCubeTexture ? gCubemapNumTextures : 1;
if (isSRGB)
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
else
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.SampleDesc.Count = 1;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
textureDesc.MiscFlags = D3D11_RESOURCE_MISC_GENERATE_MIPS;
if (isCubeTexture)
textureDesc.MiscFlags |= D3D11_RESOURCE_MISC_TEXTURECUBE;
DXCALL(device->CreateTexture2D(&textureDesc, NULL, &mTexture));
Then, after uploading the data, I generate the mip chain like this:
context->GenerateMips(mShaderResourceView);
And again, it works fine but only for the first (+x) face.
You create the texture with "0" mip levels by virtue of zeroing out the texture description. Zero means "full mip chain please", which means more than 1 mip (unless your texture is 1x1).
Your call to D3D11CalcSubresource passes '1' as the third argument, suggesting only one mip, which appears not to be true. Be sure to pass the correct number of mips to this helper function or it won't calculate the correct subresource index.
You can get the mip count by calling GetDesc() after the texture has been created, as in the sketch below.
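A sketch of what that could look like, using the names from the question above:

// Query the texture that was actually created; MipLevels = 0 in the
// description means "full mip chain", so the real count is filled in here.
D3D11_TEXTURE2D_DESC createdDesc;
mTexture->GetDesc(&createdDesc);
const UINT mipLevels = createdDesc.MipLevels;

for (uint32_t index = 0; index < gCubemapNumTextures; ++index)
{
    // Subresource of mip 0 in array slice 'index', given the true mip count.
    const UINT subResourceID = D3D11CalcSubresource(0, index, mipLevels);
    context->UpdateSubresource(mTexture, subResourceID, NULL,
                               &textureData.at(sizeWidth * textureHeight * index),
                               sizeWidth, 0);
}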

Depth stencil buffer not working directx11

OK, I've tried everything at this point and I'm really lost...
ID3D11Texture2D* depthStencilTexture;
D3D11_TEXTURE2D_DESC depthTexDesc;
ZeroMemory (&depthTexDesc, sizeof(D3D11_TEXTURE2D_DESC));
depthTexDesc.Width = set->mapSettings["SCREEN_WIDTH"];
depthTexDesc.Height = set->mapSettings["SCREEN_HEIGHT"];
depthTexDesc.MipLevels = 1;
depthTexDesc.ArraySize = 1;
depthTexDesc.Format = DXGI_FORMAT_D32_FLOAT;
depthTexDesc.SampleDesc.Count = 1;
depthTexDesc.SampleDesc.Quality = 0;
depthTexDesc.Usage = D3D11_USAGE_DEFAULT;
depthTexDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL;
depthTexDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE | D3D11_CPU_ACCESS_READ;
depthTexDesc.MiscFlags = 0;
mDevice->CreateTexture2D(&depthTexDesc, NULL, &depthStencilTexture);
D3D11_DEPTH_STENCIL_DESC dsDesc;
// Depth test parameters
dsDesc.DepthEnable = true;
dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc = D3D11_COMPARISON_LESS;//LESS
// Stencil test parameters
dsDesc.StencilEnable = false;
dsDesc.StencilReadMask = 0xFF;
dsDesc.StencilWriteMask = 0xFF;
// Stencil operations if pixel is front-facing
dsDesc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_INCR; //INCR
dsDesc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
// Stencil operations if pixel is back-facing
dsDesc.BackFace.StencilFailOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.BackFace.StencilDepthFailOp = D3D11_STENCIL_OP_DECR; //DECR
dsDesc.BackFace.StencilPassOp = D3D11_STENCIL_OP_KEEP; //KEEP
dsDesc.BackFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
// Create depth stencil state
mDevice->CreateDepthStencilState(&dsDesc, &mDepthStencilState);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
ZeroMemory (&depthStencilViewDesc, sizeof(depthStencilViewDesc));
depthStencilViewDesc.Format = depthTexDesc.Format;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
mDevice->CreateDepthStencilView(depthStencilTexture, &depthStencilViewDesc, &mDepthStencilView);
mDeviceContext->OMSetDepthStencilState(mDepthStencilState, 1);
and then afterwards I call
mDeviceContext->OMSetRenderTargets(1, &mTargetView, mDepthStencilView);
and obviously I clear before every frame:
mDeviceContext->ClearRenderTargetView(mTargetView, D3DXCOLOR(0.0f, 0.0f, 0.0f, 1.0f));
mDeviceContext->ClearDepthStencilView(mDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0 );
and still it just keeps the last pixel drawn, with no depth testing at all (see screenshot).
PS: I've checked the rasterizer and it is correctly drawing only the front faces.
Any help, anyone?
Check your HRESULTs - the call to CreateTexture2D is almost certainly failing because you have specified CPU_ACCESS flags on a DEFAULT texture. Since you never check any errors or pointers, this just propagates NULL to all your depth objects, effectively disabling depth testing.
You can also catch errors like this by enabling the D3D debug layer: add D3D11_CREATE_DEVICE_DEBUG to the flags passed to D3D11CreateDevice. If you had done this, you would have seen the following debug spew:
D3D11 ERROR: ID3D11Device::CreateTexture2D: A D3D11_USAGE_DEFAULT Resource cannot have any CPUAccessFlags set. The following CPUAccessFlags bits cannot be set in this case: D3D11_CPU_ACCESS_READ (1), D3D11_CPU_ACCESS_WRITE (1). [ STATE_CREATION ERROR #98: CREATETEXTURE2D_INVALIDCPUACCESSFLAGS]

cvHaarDetectObjects Memory Leak

I'm using the function cvHaarDetectObjects to do face detection, and valgrind reports a memory leak even though I think I freed all the memory. I really don't know how to fix the leak. Here is my code:
int Detect(MyImage* Img,MyImage **Face)
{
Char* Cascade_name = new Char[1024];
strcpy(Cascade_name,"/usr/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml");
// Create memory for calculations
CvMemStorage* Storage = 0;
// Create a new Haar classifier
CvHaarClassifierCascade* Cascade = 0;
int Scale = 1;
// Create two points to represent the face locations
CvPoint pt1, pt2;
int Loop;
// Load the HaarClassifierCascade
Cascade = (CvHaarClassifierCascade*)cvLoad( Cascade_name, 0, 0, 0 );
// Check whether the cascade has loaded successfully. Else report an error and quit
if( !Cascade )
{
fprintf( stderr, "ERROR: Could not load classifier cascade\n" );
exit(0);
}
// Allocate the memory storage
Storage = cvCreateMemStorage(0);
// Clear the memory storage which was used before
cvClearMemStorage( Storage );
// Find whether the cascade is loaded, to find the faces. If yes, then:
if( Cascade )
{
// There can be more than one face in an image. So create a growable sequence of faces.
// Detect the objects and store them in the sequence
CvSeq* Faces = cvHaarDetectObjects( Img->Image(), Cascade, Storage,
1.1, 2, CV_HAAR_DO_CANNY_PRUNING,
cvSize(40, 40) );
int MaxWidth = 0;
int MaxHeight = 0;
if(Faces->total == 0)
{
cout<<"There is no face."<<endl;
return 1;
}
//just get the first face
for( Loop = 0; Loop <1; Loop++ )
{
// Create a new rectangle for drawing the face
CvRect* Rect = (CvRect*)cvGetSeqElem( Faces, Loop );
// Find the dimensions of the face,and scale it if necessary
pt1.x = Rect->x*Scale;
pt2.x = (Rect->x+Rect->width)*Scale;
if(Rect->width>MaxWidth) MaxWidth = Rect->width;
pt1.y = Rect->y*Scale;
pt2.y = (Rect->y+Rect->height)*Scale;
if(Rect->height>MaxHeight) MaxHeight = Rect->height;
cvSetImageROI( Img->Image(), *Rect );
MyImage* Dest = new MyImage(cvGetSize(Img->Image()),IPL_DEPTH_8U, 1);
cvCvtColor( Img->Image(), Dest->Image(), CV_RGB2GRAY );
MyImage* Equalized = new MyImage(cvGetSize(Dest->Image()), IPL_DEPTH_8U, 1);
// Perform histogram equalization
cvEqualizeHist( Dest->Image(), Equalized->Image());
(*Face) = new MyImage(Equalized->Image());
if(Equalized)
delete Equalized;
Equalized = NULL;
if(Dest)
delete Dest;
Dest = NULL;
cvResetImageROI(Img->Image());
}
if(Cascade)
{
cvReleaseHaarClassifierCascade( &Cascade );
delete Cascade;
Cascade = NULL;
}
if(Storage)
{
cvClearMemStorage(Storage);
cvReleaseMemStorage(&Storage);
delete Storage;
Storage = NULL;
}
if(Cascade_name)
delete [] Cascade_name;
Cascade_name = NULL;
return 0;
}
In the code, MyImage is a wrapper class around IplImage that holds an IplImage* p as a member. If the constructor takes an IplImage* ppara as a parameter, the member p is allocated with cvCreateImage(cvGetSize(ppara), ppara->depth, ppara->nChannels) followed by cvCopy(ppara, p). If it takes a size, depth, and channel count as parameters, only cvCreateImage is called. The destructor then calls cvReleaseImage(&p). The function int Detect(MyImage *Img, MyImage **Face) is called like this:
IplImage *Temp = cvLoadImage(ImageName);
MyImage* Img = new MyImage(Temp);
if(Temp)
cvReleaseImage(&Temp);
Temp = NULL;
MyImage * Face = NULL;
Detect(Img, &Face);
I release Img and Face later in the code once the operations on them are done, and the memory leak happens inside the Detect function. I'm using OpenCV 2.3.1 on 64-bit Fedora 16. The whole program terminates normally except for the memory leak.
Thanks a lot.
I found out why there is a memory leak. The reason is:
In the MyImage class constructor, I passed in an IplImage* p pointer and did the following:
mp = cvCloneImage(p);
where mp is an IplImage* member of the MyImage class. I free the IplImage* pointer that I passed in after creating a new MyImage object, since cvCloneImage() allocates its own copy. However, the way I free the member pointer mp in the class destructor doesn't release that allocation; mp just points to the memory created by cvCloneImage(), so that memory is never freed. This is where the memory leak came from.
Thus I now do the following in the constructor, given an IplImage* p passed in as a parameter:
mp = cvCreateImage(cvGetSize(p), p->depth, p->nChannels);
cvCopy(p, mp);
Freeing the mp pointer in the class destructor then releases exactly the memory that the constructor created.
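Put together, a minimal sketch of the ownership-related parts of such a wrapper might look like this (the rest of MyImage's interface is omitted):

class MyImage
{
public:
    // Deep copy: mp owns its own allocation, so the destructor releases
    // exactly what this object created.
    explicit MyImage(IplImage* p)
    {
        mp = cvCreateImage(cvGetSize(p), p->depth, p->nChannels);
        cvCopy(p, mp);
    }

    MyImage(CvSize size, int depth, int channels)
    {
        mp = cvCreateImage(size, depth, channels);
    }

    ~MyImage()
    {
        cvReleaseImage(&mp);
    }

    IplImage* Image() { return mp; }

private:
    IplImage* mp;
};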
With that change, the "definitely lost" and "indirectly lost" counts drop to 0, but there are still "possibly lost" records, and valgrind points all of them at OpenCV's cvHaarDetectObjects() function, mostly due to thread creation. I googled this and found out that valgrind does sometimes report "possibly lost" memory when new threads are involved. So I monitored the memory usage of the system; it shows no memory building up as the program executes repeatedly.
That's what I found.

Reading output from geometry shader on CPU

I'm trying to read back the output of a geometry shader that uses stream output to write to a buffer.
The output buffer used by the geometry shader is described like this:
D3D10_BUFFER_DESC vbdesc =
{
numPoints * sizeof( MESH_VERTEX ),
D3D10_USAGE_DEFAULT,
D3D10_BIND_VERTEX_BUFFER | D3D10_BIND_STREAM_OUTPUT,
0,
0
};
V_RETURN( pd3dDevice->CreateBuffer( &vbdesc, NULL, &g_pDrawFrom ) );
The geometry shader creates a number of triangles from a single point (at most 12 triangles per point), and if I understand the SDK correctly I have to create a staging resource in order to read the geometry shader's output on the CPU.
I have declared another buffer resource (this time with staging usage) like this:
D3D10_BUFFER_DESC sbdesc =
{
(numPoints * (12*3)) * sizeof( VERTEX_STREAMOUT ),
D3D10_USAGE_STAGING,
NULL,
D3D10_CPU_ACCESS_READ,
0
};
V_RETURN( pd3dDevice->CreateBuffer( &sbdesc, NULL, &g_pStaging ) );
After the first draw call of the application the geometry shader is done creating all triangles and can be drawn. However, after this first draw call I would like to be able to read the vertices output by the geometry shader.
Using the staging buffer, I'm trying to do it like this (right after the first draw call):
pd3dDevice->CopyResource(g_pStaging, g_pDrawFrom);
pd3dDevice->Flush();
void *ptr = 0;
HRESULT hr = g_pStaging->Map( D3D10_MAP_READ, NULL, &ptr );
if( FAILED( hr ) )
return hr;
VERTEX_STREAMOUT *mv = (VERTEX_STREAMOUT*)ptr;
g_pStaging->Unmap();
This compiles and doesn't give any errors at runtime. However, I don't seem to be getting any output.
The geometry shader outputs the following:
struct VSSceneStreamOut
{
float4 Pos : POS;
float3 Norm : NORM;
float2 Tex : TEX;
};
and the VERTEX_STREAMOUT is declared like this:
struct VERTEX_STREAMOUT
{
D3DXVECTOR4 Pos;
D3DXVECTOR3 Norm;
D3DXVECTOR2 Tex;
};
Am I missing something here?
Problem solved by creating the staging resource buffer like this:
D3D10_BUFFER_DESC sbdesc;
ZeroMemory( &sbdesc, sizeof(sbdesc) );
g_pDrawFrom->GetDesc( &sbdesc );
sbdesc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;
sbdesc.Usage = D3D10_USAGE_STAGING;
sbdesc.BindFlags = 0;
sbdesc.MiscFlags = 0;
V_RETURN( pd3dDevice->CreateBuffer( &sbdesc, NULL, &g_pStaging ) );
The problem was with the ByteWidth.
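With the staging buffer created from the source buffer's description, a sketch of reading it back might look like this (the vertex count derived from ByteWidth is an upper bound; the stream-output stage may have written fewer vertices, which a D3D10_QUERY_SO_STATISTICS query can report):

// Copy the stream-output buffer to the staging buffer and map it for reading.
pd3dDevice->CopyResource( g_pStaging, g_pDrawFrom );

void *ptr = NULL;
if( SUCCEEDED( g_pStaging->Map( D3D10_MAP_READ, 0, &ptr ) ) )
{
    D3D10_BUFFER_DESC desc;
    g_pStaging->GetDesc( &desc );

    // Upper bound on the number of vertices that fit in the buffer.
    const UINT maxVertices = desc.ByteWidth / sizeof( VERTEX_STREAMOUT );
    const VERTEX_STREAMOUT *mv = (const VERTEX_STREAMOUT*)ptr;
    for( UINT i = 0; i < maxVertices; ++i )
    {
        // mv[i].Pos, mv[i].Norm, mv[i].Tex ...
    }
    g_pStaging->Unmap();
}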
