DirectX11 CopyResource leaves the right-bottom uncopied - directx

The D3D feature level is D3D_FEATURE_LEVEL_11_0. I created both Texture2Ds with the same desc settings:
CD3D11_TEXTURE2D_DESC desc;
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = _array_size;
desc.Format = DXGI_FORMAT_R24G8_TYPELESS;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
But when I render the shader resource view for the stencil, the bottom-right region is left uncopied, as shown below. The depth values are copied correctly.
[screenshot: copied stencil rendered with the bottom-right region missing]
The original texture does not produce these strange results.
[screenshot: stencil rendered from the original texture, intact]
I created the depth-stencil state below for the rectangle in the picture, so that its stencil value becomes greater than 0. The man in front of the rectangle uses the same desc but with StencilEnable off, so his stencil value stays 0.
D3D11_DEPTH_STENCIL_DESC desc;
ZeroMemory(&desc, sizeof(D3D11_DEPTH_STENCIL_DESC));
desc.DepthEnable = true;
desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
desc.StencilEnable = true;
desc.StencilReadMask = 0xFF;
desc.StencilWriteMask = 0xFF;
desc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_INCR_SAT;
desc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
I used the code below to copy the stencil. Compared with drawing from the original stencil, the only difference is this one line:
D3D_Context->CopyResource(copy->GetTex2D(), origin->GetTex2D());
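For reference, the same copy can also be expressed per-subresource with CopySubresourceRegion, which makes the copied extent explicit; this is a minimal sketch of a diagnostic alternative, not a confirmed fix:
// Copy subresource 0 in full; passing nullptr as the source box
// copies the entire subresource to destination offset (0, 0, 0).
D3D_Context->CopySubresourceRegion(
    copy->GetTex2D(), 0,    // destination resource and subresource
    0, 0, 0,                // destination x, y, z
    origin->GetTex2D(), 0,  // source resource and subresource
    nullptr);               // source box: nullptr = whole subresource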
Then I draw the stencil with the state below; I tried to draw the rectangle, not the man.
D3D11_DEPTH_STENCIL_DESC desc;
ZeroMemory(&desc, sizeof(D3D11_DEPTH_STENCIL_DESC));
desc.DepthEnable = false;
desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
desc.StencilEnable = true;
desc.StencilReadMask = 0x01;
desc.StencilWriteMask = 0x01;
desc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilFunc = D3D11_COMPARISON_LESS;
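For context, a minimal sketch of how a stencil-only SRV over the R24G8_TYPELESS copy might be created; the device variable is an assumption, and ArraySize is taken to be 1 here:
// X24_TYPELESS_G8_UINT exposes only the 8 stencil bits of the
// R24G8_TYPELESS texture to the shader.
D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_X24_TYPELESS_G8_UINT;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MostDetailedMip = 0;
srvDesc.Texture2D.MipLevels = 1;
ID3D11ShaderResourceView* stencilSRV = nullptr;
HRESULT hr = device->CreateShaderResourceView(copy->GetTex2D(), &srvDesc, &stencilSRV);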

Related

Can't set texture with WIC - C++ DirectX

I am trying to use a texture embedded in a file; it's not a TGA.
Here is my code; I don't know where the logical error is.
ID3D11ShaderResourceView* texturePtr = nullptr;
ID3D11Texture2D* texture2D = nullptr;
ID3D11SamplerState* sampleStatePtr = nullptr;
hr = CoInitialize(NULL);
assert(SUCCEEDED(hr));
devConPtr->PSSetSamplers(0, 1, &sampleStatePtr);
devConPtr->PSSetShaderResources(0, 1, &texturePtr);
Texture2D tex : TEXTURE;
SamplerState mySampler : SAMPLER;
D3D11_SAMPLER_DESC sd;
ZeroMemory(&sd, sizeof(sd));
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sd.MipLODBias = 0.0f;
sd.MaxLOD = D3D11_FLOAT32_MAX;
sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
hr = devPtr->CreateSamplerState(&sd, &sampleStatePtr);
DXGI_SAMPLE_DESC sample;
sample.Count = 1;
sample.Quality = 0;
D3D11_TEXTURE2D_DESC textureDesc;
textureDesc.Width = w;
textureDesc.Height = h;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.SampleDesc = sample;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
textureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
textureDesc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA subData;
subData.pSysMem = data;
subData.SysMemPitch = sizeof(*data)*w;
HRESULT hr = devPtr->CreateTexture2D(
&textureDesc,
&subData,
&texture2D
);
assert(SUCCEEDED(hr));
//(ID3D11Texture2D*)texture2D;
texturePtr->QueryInterface(IID_ID3D11Texture2D, (void**)&texture2D);
D3D11_SHADER_RESOURCE_VIEW_DESC shvD;
shvD.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
shvD.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
hr= devPtr->CreateShaderResourceView(texture2D, &shvD, &texturePtr);
assert(SUCCEEDED(hr));
hr = DirectX::CreateWICTextureFromMemory(devPtr, devConPtr,
    (const uint8_t*)&data, sizeof(*data),
    nullptr, &texturePtr, NULL);
assert(SUCCEEDED(hr));
unsigned int textureCount = mat->GetTextureCount(aiTextureType_UNKNOWN);
for (UINT j = 0; j < textureCount; j++)
{
aiString* path = nullptr;
mat->GetTexture(aiTextureType_UNKNOWN, j, path);
assert(path->length >= 2);
int index = atoi(&path->C_Str()[1]);
createTexture(scenePtr->mTextures[index]->mWidth,
    scenePtr->mTextures[index]->mHeight,
    (uint8_t*)scenePtr->mTextures[index]->pcData);
}
If you could find some kind of logical error or help with the debugging, that would be super helpful. I tried to put a breakpoint at my HRESULTs, but I can't find the variables; it does say that my resource view pointer is always nullptr despite me trying to use it.
I am using C++, DirectX, and the DirectX Tool Kit.
You are not initializing shvD completely. To fix it, initialize it like this:
D3D11_SHADER_RESOURCE_VIEW_DESC shvD;
shvD.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
shvD.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shvD.Texture2D.MostDetailedMip = 0;
shvD.Texture2D.MipLevels = 1;
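If the CD3D11 helper classes from d3d11.h are available in your build, the same fully-initialized desc can be written more compactly; this is a sketch assuming those helpers:
// The helper constructor fills the remaining fields with sane defaults.
CD3D11_SHADER_RESOURCE_VIEW_DESC shvD(
    D3D11_SRV_DIMENSION_TEXTURE2D,
    DXGI_FORMAT_R8G8B8A8_UNORM,
    0,   // MostDetailedMip
    1);  // MipLevels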

Using stride in metal similar to openGL

I have a buffer, and the vertices have a stride. How do I specify this in Metal? I cannot seem to find any example.
Thanks!
Check out MTLVertexBufferLayoutDescriptor, which you reach through MTLVertexDescriptor (assigned to MTLRenderPipelineDescriptor). It has a stride property.
Below is an example of setting up three vertex attributes stored interleaved in one vertex buffer. The stride is set near the end: vertexDescriptor.layouts[0].stride = 32 (3 floats position + 3 floats normal + 2 floats texCoords = 8 floats × 4 bytes = 32).
MTLRenderPipelineDescriptor *pipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
MTLVertexDescriptor *vertexDescriptor = [MTLVertexDescriptor vertexDescriptor];
vertexDescriptor.attributes[0].offset = 0;
vertexDescriptor.attributes[0].format = MTLVertexFormatFloat3; // position
vertexDescriptor.attributes[0].bufferIndex = 0;
vertexDescriptor.attributes[1].offset = 12;
vertexDescriptor.attributes[1].format = MTLVertexFormatFloat3; // normal
vertexDescriptor.attributes[1].bufferIndex = 0;
vertexDescriptor.attributes[2].offset = 24;
vertexDescriptor.attributes[2].format = MTLVertexFormatFloat2; // texCoords
vertexDescriptor.attributes[2].bufferIndex = 0;
vertexDescriptor.layouts[0].stepRate = 1;
vertexDescriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;
vertexDescriptor.layouts[0].stride = 32;
pipelineDescriptor.vertexDescriptor = vertexDescriptor;

Understanding Snake

I am trying to make a snake active-contour program. I have been looking at different websites that show how they programmed the snake, but none of them explain what CV_VALUE (the coefficient usage) is or how to initialize it.
Here is some code I was working on, but I do not know what the problem is.
void snake(Mat copy){
threshold(copy, copy, 170, 255, CV_THRESH_BINARY);
float alpha = 0.1; //Continuity snake
float beta = 0.5; //Curvature snake
float gamma = 0.4; //Movement snake
//Have to be odd
CvSize size;
size.width = 5;
size.height = 5;
CvTermCriteria criteria;
criteria.type = CV_TERMCRIT_ITER;
criteria.max_iter = 10000;
criteria.epsilon = 0.1;
int cpt = 40;
CvPoint pointsArray[5];
pointsArray[0].x = 0;
pointsArray[0].y = 95;
pointsArray[1].x = 5;
pointsArray[1].y = 95;
pointsArray[2].x = 10;
pointsArray[2].y = 95;
pointsArray[3].x = 15;
pointsArray[3].y = 95;
pointsArray[4].x = 20;
pointsArray[4].y = 95;
//The Code (image, points, length, alpha (consistency), beta (curve), gamma (movement), coefficient Usage, win, criteria, calcGradient)
cvSnakeImage(copy, pointsArray, cpt, &alpha, &beta, &gamma, CV_VALUE, size, criteria, 0);
}
CV_VALUE indicates that each of alpha, beta, and gamma is a pointer to a single value to be used for all points; CV_ARRAY indicates that each of alpha, beta, and gamma is a pointer to an array of coefficients, one per point of the snake. All the arrays must have a size equal to the contour size.
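For comparison, here is a minimal sketch of the CV_ARRAY variant under the same legacy OpenCV 1.x API; the per-point coefficient values are made up for illustration, and image stands for an IplImage* (the legacy C API does not take a cv::Mat directly):
// CV_ARRAY: one coefficient per snake point; each array's size
// must equal the contour size (cpt).
const int cpt = 40;
float alphas[cpt], betas[cpt], gammas[cpt];
for (int i = 0; i < cpt; ++i)
{
    alphas[i] = 0.1f; // per-point continuity weight
    betas[i]  = 0.5f; // per-point curvature weight
    gammas[i] = 0.4f; // per-point image-energy weight
}
cvSnakeImage(image, pointsArray, cpt, alphas, betas, gammas,
             CV_ARRAY, size, criteria, 0);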

Directx10 flip texture horizontal/vertical

As mentioned in the title: do you know how to flip an ID3D10Texture2D object horizontally/vertically?
I used the code below to take a screenshot and save it to a file.
ID3D10Resource *backbufferRes;
renderTargetView->GetResource(&backbufferRes);
ID3D10Texture2D *mRenderedTexture;
// Create our texture
D3D10_TEXTURE2D_DESC texDesc;
texDesc.ArraySize = 1;
texDesc.BindFlags = 0;
texDesc.CPUAccessFlags = 0;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.Width = 640; // must be same as backbuffer
texDesc.Height = 480; // must be same as backbuffer
texDesc.MipLevels = 1;
texDesc.MiscFlags = 0;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D10_USAGE_DEFAULT;
d3d10Device->CreateTexture2D(&texDesc, 0, &mRenderedTexture);
d3d10Device->CopyResource(mRenderedTexture, backbufferRes);
D3DX10FilterTexture(mRenderedTexture, 0, D3DX10_FILTER_MIRROR_U);
D3DX10SaveTextureToFile(mRenderedTexture, D3DX10_IFF_PNG, L"test.png");
The D3DX10FilterTexture(mRenderedTexture, 0, D3DX10_FILTER_MIRROR_U); line doesn't mirror my texture. Any suggestions?
In your shader, sample with 1-u to flip horizontally or 1-v to flip vertically.
Edit: If you aren't actually doing any rendering, there are far better ways to do image manipulation. However, if you want to do it manually, you will have to use Map and flip the data around yourself.
You could do that as follows (The code is not tested so please excuse any compile errors):
ID3D10Resource *backbufferRes;
renderTargetView->GetResource(&backbufferRes);
ID3D10Texture2D *mRenderedTexture;
// Create our texture
D3D10_TEXTURE2D_DESC texDesc;
texDesc.ArraySize = 1;
texDesc.BindFlags = 0;
texDesc.CPUAccessFlags = D3D10_CPU_ACCESS_READ | D3D10_CPU_ACCESS_WRITE; // the CPU must be able to Map it
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.Width = 640; // must be same as backbuffer
texDesc.Height = 480; // must be same as backbuffer
texDesc.MipLevels = 1;
texDesc.MiscFlags = 0;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D10_USAGE_STAGING; // staging usage so Map(READ_WRITE) is legal
d3d10Device->CreateTexture2D(&texDesc, 0, &mRenderedTexture);
d3d10Device->CopyResource(mRenderedTexture, backbufferRes);
D3D10_MAPPED_TEXTURE2D d3d10MT = { 0 };
mRenderedTexture->Map( 0, D3D10_MAP_READ_WRITE, 0, &d3d10MT );
unsigned int* pPix = (unsigned int*)d3d10MT.pData;
// Rows may be padded, so step by RowPitch rather than by width.
const UINT rowPitchInPixels = d3d10MT.RowPitch / sizeof(unsigned int);
for (UINT row = 0; row < texDesc.Height; ++row)
{
unsigned int* pRowStart = pPix + row * rowPitchInPixels;
unsigned int* pRowEnd = pRowStart + texDesc.Width;
std::reverse( pRowStart, pRowEnd ); // mirror this row left-to-right (needs <algorithm>)
}
mRenderedTexture->Unmap(0);
D3DX10SaveTextureToFile(mRenderedTexture, D3DX10_IFF_PNG, L"test.png");
From the docs, by the way:
D3DX10_FILTER_MIRROR_U: Pixels off the edge of the texture on the u-axis should be mirrored, not wrapped.
So that only applies to the pixels around the edge when you are filtering the image.
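If a vertical flip is wanted instead, the same Map/Unmap approach can swap whole rows; a sketch under the same assumptions as the code above, again stepping by RowPitch since rows may be padded:
// Vertical flip: swap row k with row (height - 1 - k), one full
// padded row (RowPitch bytes) at a time. Requires <vector> and <cstring>.
unsigned char* pBytes = (unsigned char*)d3d10MT.pData;
std::vector<unsigned char> rowTmp(d3d10MT.RowPitch);
for (UINT top = 0, bottom = texDesc.Height - 1; top < bottom; ++top, --bottom)
{
    unsigned char* pTop = pBytes + top * d3d10MT.RowPitch;
    unsigned char* pBottom = pBytes + bottom * d3d10MT.RowPitch;
    memcpy(rowTmp.data(), pTop, d3d10MT.RowPitch);
    memcpy(pTop, pBottom, d3d10MT.RowPitch);
    memcpy(pBottom, rowTmp.data(), d3d10MT.RowPitch);
}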

DirectX 10 Primitive is not displayed

I am trying to write my first DirectX 10 program that displays a triangle. Everything compiles fine, and the render function is called, since the background changes to black. However, the triangle I'm trying to draw with a triangle strip primitive is not displayed at all.
The Initialization function:
bool InitDirect3D(HWND hWnd, int width, int height)
{
//****** D3DDevice and SwapChain *****//
DXGI_SWAP_CHAIN_DESC swapChainDesc;
ZeroMemory(&swapChainDesc, sizeof(swapChainDesc));
swapChainDesc.BufferCount = 1;
swapChainDesc.BufferDesc.Width = width;
swapChainDesc.BufferDesc.Height = height;
swapChainDesc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
swapChainDesc.BufferDesc.RefreshRate.Numerator = 60;
swapChainDesc.BufferDesc.RefreshRate.Denominator = 1;
swapChainDesc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
swapChainDesc.OutputWindow = hWnd;
swapChainDesc.SampleDesc.Count = 1;
swapChainDesc.SampleDesc.Quality = 0;
swapChainDesc.Windowed = TRUE;
if (FAILED(D3D10CreateDeviceAndSwapChain( NULL,
D3D10_DRIVER_TYPE_HARDWARE,
NULL,
0,
D3D10_SDK_VERSION,
&swapChainDesc,
&pSwapChain,
&pD3DDevice)))
return fatalError(TEXT("Hardware does not support DirectX 10!"));
//***** Shader *****//
if (FAILED(D3DX10CreateEffectFromFile( TEXT("basicEffect.fx"),
NULL, NULL,
"fx_4_0",
D3D10_SHADER_ENABLE_STRICTNESS,
0,
pD3DDevice,
NULL,
NULL,
&pBasicEffect,
NULL,
NULL)))
return fatalError(TEXT("Could not load effect file!"));
pBasicTechnique = pBasicEffect->GetTechniqueByName("Render");
pViewMatrixEffectVariable = pBasicEffect->GetVariableByName( "View" )->AsMatrix();
pProjectionMatrixEffectVariable = pBasicEffect->GetVariableByName( "Projection" )->AsMatrix();
pWorldMatrixEffectVariable = pBasicEffect->GetVariableByName( "World" )->AsMatrix();
//***** Input Assembly Stage *****//
D3D10_INPUT_ELEMENT_DESC layout[] =
{
{"POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0},
{"COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0}
};
UINT numElements = 2;
D3D10_PASS_DESC PassDesc;
pBasicTechnique->GetPassByIndex(0)->GetDesc(&PassDesc);
if (FAILED( pD3DDevice->CreateInputLayout( layout,
numElements,
PassDesc.pIAInputSignature,
PassDesc.IAInputSignatureSize,
&pVertexLayout)))
return fatalError(TEXT("Could not create Input Layout."));
pD3DDevice->IASetInputLayout( pVertexLayout );
//***** Vertex buffer *****//
UINT numVertices = 100;
D3D10_BUFFER_DESC bd;
bd.Usage = D3D10_USAGE_DYNAMIC;
bd.ByteWidth = sizeof(vertex) * numVertices;
bd.BindFlags = D3D10_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;
bd.MiscFlags = 0;
if (FAILED(pD3DDevice->CreateBuffer(&bd, NULL, &pVertexBuffer)))
return fatalError(TEXT("Could not create vertex buffer!"));;
UINT stride = sizeof(vertex);
UINT offset = 0;
pD3DDevice->IASetVertexBuffers( 0, 1, &pVertexBuffer, &stride, &offset );
//***** Rasterizer *****//
// Set the viewport
viewPort.Width = width;
viewPort.Height = height;
viewPort.MinDepth = 0.0f;
viewPort.MaxDepth = 1.0f;
viewPort.TopLeftX = 0;
viewPort.TopLeftY = 0;
pD3DDevice->RSSetViewports(1, &viewPort);
D3D10_RASTERIZER_DESC rasterizerState;
rasterizerState.CullMode = D3D10_CULL_NONE;
rasterizerState.FillMode = D3D10_FILL_SOLID;
rasterizerState.FrontCounterClockwise = true;
rasterizerState.DepthBias = 0;
rasterizerState.DepthBiasClamp = 0;
rasterizerState.SlopeScaledDepthBias = 0;
rasterizerState.DepthClipEnable = true;
rasterizerState.ScissorEnable = false;
rasterizerState.MultisampleEnable = false;
rasterizerState.AntialiasedLineEnable = true;
ID3D10RasterizerState* pRS;
pD3DDevice->CreateRasterizerState(&rasterizerState, &pRS);
pD3DDevice->RSSetState(pRS);
//***** Output Merger *****//
// Get the back buffer from the swapchain
ID3D10Texture2D *pBackBuffer;
if (FAILED(pSwapChain->GetBuffer(0, __uuidof(ID3D10Texture2D), (LPVOID*)&pBackBuffer)))
return fatalError(TEXT("Could not get back buffer."));
// create the render target view
if (FAILED(pD3DDevice->CreateRenderTargetView(pBackBuffer, NULL, &pRenderTargetView)))
return fatalError(TEXT("Could not create the render target view."));
// release the back buffer
pBackBuffer->Release();
// set the render target
pD3DDevice->OMSetRenderTargets(1, &pRenderTargetView, NULL);
return true;
}
The render function:
void Render()
{
if (pD3DDevice != NULL)
{
pD3DDevice->ClearRenderTargetView(pRenderTargetView, D3DXCOLOR(0.0f, 0.0f, 0.0f, 0.0f));
//create world matrix
static float r;
D3DXMATRIX w;
D3DXMatrixIdentity(&w);
D3DXMatrixRotationY(&w, r);
r += 0.001f;
//set effect matrices
pWorldMatrixEffectVariable->SetMatrix(w);
pViewMatrixEffectVariable->SetMatrix(viewMatrix);
pProjectionMatrixEffectVariable->SetMatrix(projectionMatrix);
//fill vertex buffer with vertices
UINT numVertices = 3;
vertex* v = NULL;
//lock vertex buffer for CPU use
pVertexBuffer->Map(D3D10_MAP_WRITE_DISCARD, 0, (void**) &v );
v[0] = vertex( D3DXVECTOR3(-1,-1,0), D3DXVECTOR4(1,0,0,1) );
v[1] = vertex( D3DXVECTOR3(0,1,0), D3DXVECTOR4(0,1,0,1) );
v[2] = vertex( D3DXVECTOR3(1,-1,0), D3DXVECTOR4(0,0,1,1) );
pVertexBuffer->Unmap();
// Set primitive topology
pD3DDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP );
//get technique desc
D3D10_TECHNIQUE_DESC techDesc;
pBasicTechnique->GetDesc(&techDesc);
for(UINT p = 0; p < techDesc.Passes; ++p)
{
//apply technique
pBasicTechnique->GetPassByIndex(p)->Apply(0);
//draw
pD3DDevice->Draw(numVertices, 0);
}
pSwapChain->Present(0,0);
}
}
I'm not sure, but try to set:
pD3DDevice->IASetVertexBuffers( 0, 1, &pVertexBuffer, &stride, &offset );
after you unmap the buffer, to get something like this:
pVertexBuffer->Unmap();
pD3DDevice->IASetVertexBuffers( 0, 1, &pVertexBuffer, &stride, &offset );
// Set primitive topology
pD3DDevice->IASetPrimitiveTopology( D3D10_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP );
I suspect that locking blows away the buffer binding.
