I have a buffer and the vertices have a stride. How do I specify this in Metal? I cannot seem to find any examples.
Thanks!
Check out MTLVertexBufferLayoutDescriptor, which you reach through the layouts array of the MTLVertexDescriptor assigned to your MTLRenderPipelineDescriptor. It has a stride property.
Below is an example of setting up three vertex attributes stored interleaved in a single vertex buffer. The stride is set near the end: vertexDescriptor.layouts[0].stride = 32;
MTLRenderPipelineDescriptor *pipelineDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
MTLVertexDescriptor *vertexDescriptor = [MTLVertexDescriptor vertexDescriptor];
vertexDescriptor.attributes[0].offset = 0;
vertexDescriptor.attributes[0].format = MTLVertexFormatFloat3; // position
vertexDescriptor.attributes[0].bufferIndex = 0;
vertexDescriptor.attributes[1].offset = 12;
vertexDescriptor.attributes[1].format = MTLVertexFormatFloat3; // normal
vertexDescriptor.attributes[1].bufferIndex = 0;
vertexDescriptor.attributes[2].offset = 24;
vertexDescriptor.attributes[2].format = MTLVertexFormatFloat2; // texCoords
vertexDescriptor.attributes[2].bufferIndex = 0;
vertexDescriptor.layouts[0].stepRate = 1;
vertexDescriptor.layouts[0].stepFunction = MTLVertexStepFunctionPerVertex;
vertexDescriptor.layouts[0].stride = 32;
pipelineDescriptor.vertexDescriptor = vertexDescriptor;
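On the shader side, these attribute indices would typically map to a [[stage_in]] struct like the following minimal sketch (the struct and function names here are placeholders, not from the question):
struct VertexIn {
    float3 position  [[attribute(0)]];
    float3 normal    [[attribute(1)]];
    float2 texCoords [[attribute(2)]];
};

vertex float4 vertexMain(VertexIn in [[stage_in]]) {
    // The vertex fetcher applies the stride/offsets from the descriptor,
    // so the shader just consumes the unpacked attributes.
    return float4(in.position, 1.0);
}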
The D3D feature level is D3D_FEATURE_LEVEL_11_0.
I created both textures (the original and the copy) with the same desc settings:
CD3D11_TEXTURE2D_DESC desc;
desc.Width = width;
desc.Height = height;
desc.MipLevels = 1;
desc.ArraySize = _array_size;
desc.Format = DXGI_FORMAT_R24G8_TYPELESS;
desc.SampleDesc.Count = 1;
desc.SampleDesc.Quality = 0;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
desc.CPUAccessFlags = 0;
desc.MiscFlags = 0;
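For reference, views onto this typeless format would be created with the matching sized formats; the question does not show this part, so the snippet below is a hypothetical reconstruction:
// Hypothetical view descs for the R24G8_TYPELESS texture above.
D3D11_DEPTH_STENCIL_VIEW_DESC dsvDesc = {};
dsvDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
dsvDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_X24_TYPELESS_G8_UINT; // exposes the stencil bits
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;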
But when I render the shader resource view for the stencil, the bottom-right region is left uncopied, as shown below. The depth values are copied correctly.
[screenshot: copied stencil with the bottom-right region missing]
The original texture does not produce these strange results.
[screenshot: original stencil rendered correctly]
I created the depth-stencil state below for the rectangle in the picture, to make its stencil value greater than 0. The man drawn on top of the rectangle uses the same desc but with StencilEnable off, so his stencil value stays 0.
D3D11_DEPTH_STENCIL_DESC desc;
ZeroMemory(&desc, sizeof(D3D11_DEPTH_STENCIL_DESC));
desc.DepthEnable = true;
desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
desc.StencilEnable = true;
desc.StencilReadMask = 0xFF;
desc.StencilWriteMask = 0xFF;
desc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_INCR_SAT;
desc.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
I used the code below to copy the stencil. The only difference from drawing the original stencil is this one line.
D3D_Context->CopyResource(copy->GetTex2D(), origin->GetTex2D());
And I draw the stencil with the state below. I am trying to draw the rectangle, not the man.
D3D11_DEPTH_STENCIL_DESC desc;
ZeroMemory(&desc, sizeof(D3D11_DEPTH_STENCIL_DESC));
desc.DepthEnable = false;
desc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
desc.DepthFunc = D3D11_COMPARISON_LESS_EQUAL;
desc.StencilEnable = true;
desc.StencilReadMask = 0x01;
desc.StencilWriteMask = 0x01;
desc.FrontFace.StencilFailOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilPassOp = D3D11_STENCIL_OP_KEEP;
desc.FrontFace.StencilFunc = D3D11_COMPARISON_LESS;
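For context, both states are bound together with a stencil reference value, and with D3D11_COMPARISON_LESS the test passes when (StencilRef & ReadMask) < (stencilValue & ReadMask). A minimal sketch of the binding, assuming a ref of 0 so that only pixels the rectangle incremented above 0 pass (device/context names are assumptions):
// Sketch: create and bind the draw state with an assumed stencil ref of 0.
ID3D11DepthStencilState* drawState = nullptr;
device->CreateDepthStencilState(&desc, &drawState);
context->OMSetDepthStencilState(drawState, 0); // second argument is StencilRef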
I am trying to use a texture embedded in a model file (it is not a TGA).
Here is my code; I don't know where the logical error is.
ID3D11ShaderResourceView* texturePtr = nullptr;
ID3D11Texture2D* texture2D = nullptr;
ID3D11SamplerState* sampleStatePtr = nullptr;
hr = CoInitialize(NULL);
assert(SUCCEEDED(hr));
devConPtr->PSSetSamplers(0, 1, &sampleStatePtr);
devConPtr->PSSetShaderResources(0, 1, &texturePtr);
Texture2D tex : TEXTURE;
SamplerState mySampler : SAMPLER;
D3D11_SAMPLER_DESC sd;
ZeroMemory(&sd, sizeof(sd));
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
sd.MipLODBias = 0.0f;
sd.MaxLOD = D3D11_FLOAT32_MAX;
sd.ComparisonFunc = D3D11_COMPARISON_NEVER;
hr = devPtr->CreateSamplerState(&sd, &sampleStatePtr);
DXGI_SAMPLE_DESC sample;
sample.Count = 1;
sample.Quality = 0;
D3D11_TEXTURE2D_DESC textureDesc;
textureDesc.Width = w;
textureDesc.Height = h;
textureDesc.MipLevels = 1;
textureDesc.ArraySize = 1;
textureDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
textureDesc.SampleDesc = sample;
textureDesc.Usage = D3D11_USAGE_DEFAULT;
textureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
textureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
textureDesc.MiscFlags = 0;
D3D11_SUBRESOURCE_DATA subData;
subData.pSysMem = data;
subData.SysMemPitch = sizeof(*data)*w;
HRESULT hr = devPtr->CreateTexture2D(
&textureDesc,
&subData,
&texture2D
);
assert(SUCCEEDED(hr));
//(ID3D11Texture2D*)texture2D;
texturePtr->QueryInterface(IID_ID3D11Texture2D, (void**)&texture2D);
D3D11_SHADER_RESOURCE_VIEW_DESC shvD;
shvD.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
shvD.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
hr= devPtr->CreateShaderResourceView(texture2D, &shvD, &texturePtr);
assert(SUCCEEDED(hr));
hr = DirectX::CreateWICTextureFromMemory(devPtr, devConPtr, (const uint8_t*)&data, sizeof(*data), nullptr, &texturePtr, NULL);
assert(SUCCEEDED(hr));
unsigned int textureCount = mat->GetTextureCount(aiTextureType_UNKNOWN);
for (UINT j = 0; j < textureCount; j++)
{
aiString* path = nullptr;
mat->GetTexture(aiTextureType_UNKNOWN, j, path);
assert(path->length >= 2);
int index = atoi(&path->C_Str()[1]);
createTexture(scenePtr->mTextures[index]->mWidth, scenePtr->mTextures[index]->mHeight, (uint8_t*)scenePtr->mTextures[index]->pcData);
}
If you could find some kind of logical error or help with the debugging, that would be super helpful. I tried putting a breakpoint at my HRESULTs, but I can't inspect the variables; what I can see is that my shader resource view pointer is always nullptr despite my trying to use it.
I am using C++, DirectX 11, and the DirectX Tool Kit.
You are not initializing shvD completely. To fix it, initialize it like this:
D3D11_SHADER_RESOURCE_VIEW_DESC shvD;
shvD.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
shvD.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shvD.Texture2D.MostDetailedMip = 0;
shvD.Texture2D.MipLevels = 1;
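Equivalently, you can value-initialize the struct so no field is left holding stack garbage; this is a common C++ idiom rather than anything specific to this API:
D3D11_SHADER_RESOURCE_VIEW_DESC shvD = {}; // zero every field up front
shvD.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
shvD.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shvD.Texture2D.MipLevels = 1; // MostDetailedMip is already 0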
I am learning DirectX 11 these days and have been stuck on the compute shader section.
I made four resources and three corresponding views:
immutable input buffer = {1,1,1,1,1} / SRV
immutable input buffer = {2,2,2,2,2} / SRV
output buffer / UAV
staging buffer for reading back / no view
I succeeded in creating everything, dispatching the CS, copying the data from the output buffer to the staging buffer, and reading the data back.
// INPUT BUFFER1--------------------------------------------------
const int dataSize = 5;
D3D11_BUFFER_DESC vb_dest;
vb_dest.ByteWidth = sizeof(float) * dataSize;
vb_dest.StructureByteStride = sizeof(float);
vb_dest.BindFlags = D3D11_BIND_SHADER_RESOURCE;
vb_dest.Usage = D3D11_USAGE_IMMUTABLE;
vb_dest.CPUAccessFlags = 0;
vb_dest.MiscFlags = 0;
float v1_float[dataSize] = { 1,1,1,1,1 };
D3D11_SUBRESOURCE_DATA v1_data;
v1_data.pSysMem = static_cast<void*>(v1_float);
device->CreateBuffer(
&vb_dest,
&v1_data,
valueBuffer1.GetAddressOf());
D3D11_SHADER_RESOURCE_VIEW_DESC srv_desc;
srv_desc.Format = DXGI_FORMAT_R32_FLOAT;
srv_desc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
srv_desc.Buffer.FirstElement = 0;
srv_desc.Buffer.NumElements = dataSize;
srv_desc.Buffer.ElementWidth = sizeof(float);
device->CreateShaderResourceView(
valueBuffer1.Get(),
&srv_desc,
inputSRV1.GetAddressOf());
// INPUT BUFFER2-----------------------------------------------------------
float v2_float[dataSize] = { 2,2,2,2,2 };
D3D11_SUBRESOURCE_DATA v2_data;
v2_data.pSysMem = static_cast<void*>(v2_float);
device->CreateBuffer(
&vb_dest,
&v2_data,
valueBuffer2.GetAddressOf());
device->CreateShaderResourceView(
valueBuffer2.Get(),
&srv_desc,
inputSRV2.GetAddressOf());
// OUTPUT BUFFER-----------------------------------------------------------
D3D11_BUFFER_DESC ov_desc;
ov_desc.ByteWidth = sizeof(float) * dataSize;
ov_desc.StructureByteStride = sizeof(float);
ov_desc.BindFlags = D3D11_BIND_UNORDERED_ACCESS;
ov_desc.Usage = D3D11_USAGE_DEFAULT;
ov_desc.CPUAccessFlags = 0;
ov_desc.MiscFlags = 0;
device->CreateBuffer(
&ov_desc,
nullptr,
outputResource.GetAddressOf());
D3D11_UNORDERED_ACCESS_VIEW_DESC outputUAV_desc;
outputUAV_desc.Format = DXGI_FORMAT_R32_FLOAT;
outputUAV_desc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
outputUAV_desc.Buffer.FirstElement = 0;
outputUAV_desc.Buffer.NumElements = dataSize;
outputUAV_desc.Buffer.Flags = 0;
device->CreateUnorderedAccessView(
outputResource.Get(),
&outputUAV_desc,
outputUAV.GetAddressOf());
// BUFFER FOR COPY-----------------------------------------------------------
D3D11_BUFFER_DESC rb_desc;
rb_desc.ByteWidth = sizeof(float) * dataSize;
rb_desc.StructureByteStride = sizeof(float);
rb_desc.Usage = D3D11_USAGE_STAGING;
rb_desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
rb_desc.BindFlags = 0;
rb_desc.MiscFlags = 0;
device->CreateBuffer(
&rb_desc,
nullptr,
readResource.GetAddressOf());
// DISPATCH and COPY and GET DATA
dContext->CSSetShaderResources(0, 1, inputSRV1.GetAddressOf());
dContext->CSSetShaderResources(1, 1, inputSRV2.GetAddressOf());
dContext->CSSetUnorderedAccessViews(0, 1, outputUAV.GetAddressOf(), nullptr);
dContext->CSSetShader(cs.Get(), nullptr, 0);
dContext->Dispatch(1, 1, 1);
dContext->CopyResource(readResource.Get(), outputResource.Get());
D3D11_MAPPED_SUBRESOURCE mappedResource2;
ZeroMemory(&mappedResource2, sizeof(D3D11_MAPPED_SUBRESOURCE));
R_CHECK(dContext->Map(readResource.Get(), 0, D3D11_MAP_READ, 0, &mappedResource2));
float* data = static_cast<float*>(mappedResource2.pData);
for (int i = 0; i < 5; ++i)
{
int a = data[i];
}
And this is the compute shader code:
StructuredBuffer<float> inputA : register(t0);
StructuredBuffer<float> inputB : register(t1);
RWStructuredBuffer<float> output : register(u0);
[numthreads(5, 1, 1)]
void main(int3 id : SV_DispatchThreadID)
{
output[id.x] = inputA[id.x] + inputB[id.x];
}
In the CS, the two input buffers are added element-wise and the result is stored in the output buffer, so the expected answer would be {3,3,3,3,3}.
But the result is {3,0,0,0,0}; only the first index holds the proper answer.
Any advice would be amazing.
dContext->CopyResource(readResource.Get(), outputResource.Get());
D3D11_MAPPED_SUBRESOURCE mappedResource2;
ZeroMemory(&mappedResource2, sizeof(D3D11_MAPPED_SUBRESOURCE));
R_CHECK(dContext->Map(readResource.Get(), 0, D3D11_MAP_READ, 0, &mappedResource2));
float* data = static_cast<float*>(mappedResource2.pData);
for (int i = 0; i < 5; ++i)
{
int a = data[i];
}
This code should instead look like the following: copy, map, copy the mapped bytes into your own allocation, then unmap.
dContext->CopyResource(readResource.Get(), outputResource.Get());
D3D11_MAPPED_SUBRESOURCE mapped = {};
R_CHECK(dContext->Map(readResource.Get(), 0, D3D11_MAP_READ, 0, &mapped));
float result[dataSize] = {}; // CPU-side copy of the output
memcpy(result, mapped.pData, sizeof(float) * dataSize);
dContext->Unmap(readResource.Get(), 0);
For some reason, I have to memcpy the data out instead of reading the resource directly through the pointer I get from Map.
I am trying to make a snake active contour program. I have been looking at different websites that show how they programmed the snake, but none of them explain what CV_VALUE (the coefficient-usage flag) is or how they initialized it.
Here is some code I was working on, but I do not know what the problem is.
void snake(Mat copy){
threshold(copy, copy, 170, 255, CV_THRESH_BINARY);
float alpha = 0.1; //Continuity snake
float beta = 0.5; //Curvature snake
float gamma = 0.4; //Movement snake
//Have to be odd
CvSize size;
size.width = 5;
size.height = 5;
CvTermCriteria criteria;
criteria.type = CV_TERMCRIT_ITER;
criteria.max_iter = 10000;
criteria.epsilon = 0.1;
int cpt = 5; // number of snake points; must match the size of pointsArray
CvPoint pointsArray[5];
pointsArray[0].x = 0;
pointsArray[0].y = 95;
pointsArray[1].x = 5;
pointsArray[1].y = 95;
pointsArray[2].x = 10;
pointsArray[2].y = 95;
pointsArray[3].x = 15;
pointsArray[3].y = 95;
pointsArray[4].x = 20;
pointsArray[4].y = 95;
//The Code (image, points, length, alpha (consistency), beta (curve), gamma (movement), coefficient Usage, win, criteria, calcGradient)
IplImage ipl = copy; // cvSnakeImage is a legacy C API and takes an IplImage*
cvSnakeImage(&ipl, pointsArray, cpt, &alpha, &beta, &gamma, CV_VALUE, size, criteria, 0);
}
CV_VALUE indicates that each of alpha, beta, gamma is a pointer to a single value used for all points;
CV_ARRAY indicates that each of alpha, beta, gamma is a pointer to an array of coefficients, one per point of the snake. All the arrays must have a size equal to the contour size.
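So with CV_ARRAY you would pass one coefficient per contour point instead; a sketch for the five-point snake above (the per-point values here are made up):
// Sketch: per-point coefficients with CV_ARRAY (array length == contour size).
float alphas[5] = { 0.1f, 0.1f, 0.1f, 0.1f, 0.1f };
float betas[5]  = { 0.5f, 0.5f, 0.5f, 0.5f, 0.5f };
float gammas[5] = { 0.4f, 0.4f, 0.4f, 0.4f, 0.4f };
cvSnakeImage(&ipl, pointsArray, cpt, alphas, betas, gammas, CV_ARRAY, size, criteria, 0);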
As mentioned in the title: do you know how to flip an ID3D10Texture2D object horizontally or vertically?
I used this code to take screenshot and save it to a file.
ID3D10Resource *backbufferRes;
renderTargetView->GetResource(&backbufferRes);
ID3D10Texture2D *mRenderedTexture;
// Create our texture
D3D10_TEXTURE2D_DESC texDesc;
texDesc.ArraySize = 1;
texDesc.BindFlags = 0;
texDesc.CPUAccessFlags = 0;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.Width = 640; // must be same as backbuffer
texDesc.Height = 480; // must be same as backbuffer
texDesc.MipLevels = 1;
texDesc.MiscFlags = 0;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D10_USAGE_DEFAULT;
d3d10Device->CreateTexture2D(&texDesc, 0, &mRenderedTexture);
d3d10Device->CopyResource(mRenderedTexture, backbufferRes);
D3DX10FilterTexture(mRenderedTexture, 0, D3DX10_FILTER_MIRROR_U);
D3DX10SaveTextureToFile(mRenderedTexture, D3DX10_IFF_PNG, L"test.png");
The D3DX10FilterTexture(mRenderedTexture, 0, D3DX10_FILTER_MIRROR_U); line doesn't mirror my texture. Any suggestions?
In your shader, sample with 1-u to flip horizontally or 1-v to flip vertically.
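For example, a pixel shader along these lines (a sketch; the resource names and input semantics are assumptions, not from the question):
Texture2D tex : register(t0);
SamplerState samp : register(s0);

float4 PS(float2 uv : TEXCOORD0) : SV_Target
{
    // Mirror the u coordinate to flip horizontally (use 1 - uv.y for vertical).
    return tex.Sample(samp, float2(1.0f - uv.x, uv.y));
}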
Edit: if you aren't actually doing any rendering, there are far better ways to do image manipulation. However, if you want to do it manually, you will have to Map the texture and flip the data yourself.
You could do that as follows (the code is untested, so please excuse any compile errors):
ID3D10Resource *backbufferRes;
renderTargetView->GetResource(&backbufferRes);
ID3D10Texture2D *mRenderedTexture;
// Create our texture
D3D10_TEXTURE2D_DESC texDesc;
texDesc.ArraySize = 1;
texDesc.BindFlags = 0;
texDesc.CPUAccessFlags = D3D10_CPU_ACCESS_READ | D3D10_CPU_ACCESS_WRITE; // required for Map below
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.Width = 640; // must be same as backbuffer
texDesc.Height = 480; // must be same as backbuffer
texDesc.MipLevels = 1;
texDesc.MiscFlags = 0;
texDesc.SampleDesc.Count = 1;
texDesc.SampleDesc.Quality = 0;
texDesc.Usage = D3D10_USAGE_STAGING; // staging so the CPU can read/write it
d3d10Device->CreateTexture2D(&texDesc, 0, &mRenderedTexture);
d3d10Device->CopyResource(mRenderedTexture, backbufferRes);
D3D10_MAPPED_TEXTURE2D d3d10MT = { 0 };
mRenderedTexture->Map(0, D3D10_MAP_READ_WRITE, 0, &d3d10MT);
unsigned char* pData = static_cast<unsigned char*>(d3d10MT.pData);
for (UINT row = 0; row < texDesc.Height; ++row)
{
    // Advance by RowPitch, which can be larger than Width * 4.
    unsigned int* pRowStart = reinterpret_cast<unsigned int*>(pData + row * d3d10MT.RowPitch);
    unsigned int* pRowEnd = pRowStart + texDesc.Width;
    std::reverse(pRowStart, pRowEnd); // mirror this row horizontally
}
mRenderedTexture->Unmap(0);
D3DX10SaveTextureToFile(mRenderedTexture, D3DX10_IFF_PNG, L"test.png");
From the docs, by the way:
D3DX10_FILTER_MIRROR_U: pixels off the edge of the texture on the u-axis should be mirrored, not wrapped.
So that only applies to the pixels around the edge when you are filtering the image.