D3D12 Dynamic Constant Buffer Creation and Binding - directx

I'm writing a shader tool app where I can create nodes and link them to generate a texture.
I used D3D12 shader reflection to get the constant buffer variables, and now I'm trying to figure out how to pass/bind these variables at runtime. At the moment I have the constant buffers defined in code like this:
struct ObjectConstants
{
    DirectX::XMFLOAT4X4 World{ D3DUtil::Identity4x4() };
    DirectX::XMFLOAT3 Color{ 0.f, 0.f, 0.f };
};
How can I create a constant buffer, bind it, and copy data from the CPU to the GPU at runtime?
PS: The root signature and PSO are also created at runtime.
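For what it's worth, here is a minimal sketch of one common approach: place the constant buffer in an upload heap, keep it persistently mapped, memcpy the CPU-side struct into it whenever it changes, and bind its GPU virtual address as a root CBV. It assumes the ObjectConstants struct above and a root signature that exposes a root CBV for this data; the helper names AlignCB, CreateObjectCB, UpdateAndBindObjectCB and the rootParamIndex parameter are hypothetical, and error handling is omitted.

#include <d3d12.h>
#include <wrl/client.h>
#include <cstring>

using Microsoft::WRL::ComPtr;

// Constant buffer views must cover a multiple of 256 bytes
// (D3D12_CONSTANT_BUFFER_DATA_PLACEMENT_ALIGNMENT).
inline UINT AlignCB(UINT size)
{
    return (size + 255u) & ~255u;
}

ComPtr<ID3D12Resource> cbResource;
BYTE* mappedData = nullptr;

void CreateObjectCB(ID3D12Device* device)
{
    const UINT cbSize = AlignCB(sizeof(ObjectConstants));

    D3D12_HEAP_PROPERTIES heapProps = {};
    heapProps.Type = D3D12_HEAP_TYPE_UPLOAD;          // CPU-writable, GPU-readable

    D3D12_RESOURCE_DESC desc = {};
    desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
    desc.Width = cbSize;
    desc.Height = 1;
    desc.DepthOrArraySize = 1;
    desc.MipLevels = 1;
    desc.Format = DXGI_FORMAT_UNKNOWN;
    desc.SampleDesc.Count = 1;
    desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;

    device->CreateCommittedResource(
        &heapProps, D3D12_HEAP_FLAG_NONE, &desc,
        D3D12_RESOURCE_STATE_GENERIC_READ, nullptr,
        IID_PPV_ARGS(&cbResource));

    // Upload-heap resources may stay mapped for their whole lifetime.
    cbResource->Map(0, nullptr, reinterpret_cast<void**>(&mappedData));
}

void UpdateAndBindObjectCB(ID3D12GraphicsCommandList* cmdList,
                           const ObjectConstants& constants,
                           UINT rootParamIndex)       // root CBV slot in your root signature
{
    // Copy the CPU data into the mapped upload buffer.
    std::memcpy(mappedData, &constants, sizeof(constants));

    // Bind the buffer's GPU address directly as a root CBV
    // (no descriptor heap is needed for this binding path).
    cmdList->SetGraphicsRootConstantBufferView(
        rootParamIndex, cbResource->GetGPUVirtualAddress());
}

If the buffer is rewritten every frame while earlier frames may still be in flight, the usual refinement is one such buffer (or one 256-byte-aligned slice of a larger buffer) per frame in flight, so the GPU never reads data the CPU is overwriting. If the root signature uses a descriptor table instead of a root CBV, create the view with CreateConstantBufferView in a shader-visible descriptor heap.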

Related

Determine the element's type of a buffer in the Metal Shader Language

I use a codegen for generating declarations of Metal shaders and sometimes I do not know the exact types of objects that are passed to shaders.
E.g. I would have this declaration generated automatically:
vertex VertexOut vertexShader(constant Element *array [[buffer(0)]])
When I try to get the element type of the array, I get this error from the compiler:
using T = metal::remove_reference_t<decltype( *array )>;
T test; // <-- ERROR! "Automatic variable qualified with an address space"
Is it possible to "erase" the address space from the type?
What is the best way of getting the type of an array's element in Metal (if it's possible at all)?
As I said in the comment, I think the problem is that remove_reference does exactly what it says: it removes the reference while still leaving the type qualified with an address space. You cannot declare a variable in the device or constant address space, so you also need to remove the address space qualifier, similar to how remove_cv_t removes const and volatile. I've written up a couple of templates to show you what I mean:
template <typename T>
struct remove_address_space
{
    typedef T type;
};

template <typename T>
struct remove_address_space<device T>
{
    typedef T type;
};

template <typename T>
struct remove_address_space<constant T>
{
    typedef T type;
};
and then you would use it like
using T = remove_address_space<metal::remove_reference_t<decltype( *array )>>::type;
Keep in mind that Metal has a number of address spaces, but for the purposes of writing entry points to functions, I think only device and constant are relevant.

ClearUnorderedAccessViewFloat on a Buffer

On an RGBA texture this works as expected: each component of the FLOAT[4] argument gets cast to the corresponding component of the texture's DXGI_FORMAT.
However, with a buffer this doesn't work, and some rubbish is assigned to the buffer based on the first component of the FLOAT[4] argument.
Admittedly, this makes sense, since a buffer UAV has no DXGI_FORMAT to specify what cast should happen.
The docs say:
If you want to clear the UAV to a specific bit pattern, consider using ID3D12GraphicsCommandList::ClearUnorderedAccessViewUint.
so you can use it as follows:
float fill_value = ...;
// Reinterpret the float's bit pattern as a UINT and clear with that.
auto Values = std::array<UINT, 4>{ *((UINT*)&fill_value), 0, 0, 0 };
commandList->ClearUnorderedAccessViewUint(
    ViewGPUHandleInCurrentHeap,
    ViewCPUHandle,
    pResource,
    Values.data(),
    0,
    nullptr);
I believe the debug layer should raise an error when using ClearUnorderedAccessViewFloat on a buffer.
Edit: it actually does; I just missed it.
D3D12 ERROR: ID3D12CommandList::ClearUnorderedAccessViewUint: ClearUnorderedAccessView* methods are not compatible with Structured Buffers. StructuredByteStride is set to 4 for resource 0x0000023899A09A40'. [ RESOURCE_MANIPULATION ERROR #1156: CLEARUNORDEREDACCESSVIEW_INCOMPATIBLE_WITH_STRUCTURED_BUFFERS]
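As an aside, and not part of the original answer: the quoted error is specific to structured-buffer UAVs. If the buffer can instead be viewed as a raw UAV, ClearUnorderedAccessViewUint accepts it. A rough sketch, assuming the resource was created with the unordered-access flag and that device and bufferSizeInBytes are available (hypothetical names; pResource and ViewCPUHandle are the ones from the snippet above):

// Describe the buffer as a raw (R32_TYPELESS) UAV rather than a structured one,
// so the ClearUnorderedAccessView* methods can operate on it.
D3D12_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
uavDesc.Format = DXGI_FORMAT_R32_TYPELESS;
uavDesc.ViewDimension = D3D12_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.NumElements = bufferSizeInBytes / 4;   // number of 32-bit words
uavDesc.Buffer.Flags = D3D12_BUFFER_UAV_FLAG_RAW;
device->CreateUnorderedAccessView(pResource, nullptr, &uavDesc, ViewCPUHandle);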

Choosing between buffers in a Metal shader

I'm struggling to port my OpenGL application to Metal. In my old app, I used to bind two buffers, one with vertices and their respective colours and one with vertices and their respective texture coordinates, and switch between the two based on some app logic. Now in Metal I've started from the Hello Triangle example, where I tried running this vertex shader
vertex RasterizerData
vertexShader(uint vertexID [[vertex_id]],
             constant AAPLVertex1 *vertices1 [[buffer(AAPLVertexInputIndexVertices1)]],
             constant AAPLVertex2 *vertices2 [[buffer(AAPLVertexInputIndexVertices2)]],
             constant bool &useFirstBuffer [[buffer(AAPLVertexInputIndexUseFirstBuffer)]])
{
    float2 pixelSpacePosition;
    if (useFirstBuffer) {
        pixelSpacePosition = vertices1[vertexID].position.xy;
    } else {
        pixelSpacePosition = vertices2[vertexID].position.xy;
    }
    ...
...
and this Objective-C code
bool useFirstBuffer = true;
[renderEncoder setVertexBytes:&useFirstBuffer
                       length:sizeof(bool)
                      atIndex:AAPLVertexInputIndexUseFirstBuffer];
[renderEncoder setVertexBytes:triangleVertices
                       length:sizeof(triangleVertices)
                      atIndex:AAPLVertexInputIndexVertices1];
(where AAPLVertexInputIndexVertices1 = 0, AAPLVertexInputIndexVertices2 = 1, and AAPLVertexInputIndexUseFirstBuffer = 3), which should mean vertices2 is never accessed, but I still get the error: failed assertion 'Vertex Function(vertexShader): missing buffer binding at index 1 for vertices2[0].'
Everything works if I replace if (useFirstBuffer) with if (true) in the Metal code. What is wrong?
When you're hard-coding the conditional, the compiler is smart enough to eliminate the branch that references the absent buffer (via dead-code elimination), but when the conditional must be evaluated at runtime, the compiler doesn't know that the branch is never taken.
Since all declared buffer parameters must be bound, leaving the unreferenced buffer unbound trips the validation layer. To get around this, you could bind a few "dummy" bytes at the Vertices2 slot (using -setVertexBytes:length:atIndex:) when not following that path. It's not important that the buffers have the same length, since the dummy buffer will never actually be accessed.
In the atIndex arguments, your calling code passes AAPLVertexInputIndexUseFirstBuffer and AAPLVertexInputIndexVertices1, but in the Metal code the values AAPLVertexInputIndexVertices1 and AAPLVertexInputIndexVertices2 appear in the buffer() specifiers. It looks like you need to use AAPLVertexInputIndexVertices1 instead of AAPLVertexInputIndexUseFirstBuffer in your calling code.

Binding error with OpenGL buffers and Direct State Access (DSA)

I get this error from OpenGL when I use glNamedBufferStorage():
GL_INVALID_OPERATION error generated. Buffer must be bound.
Normally I shouldn't have to use glBindBuffer() with direct state access!?
Here is my GL call sequence:
glCreateBuffers(1, &m_identifier);
...
glNamedBufferStorage(m_identifier, static_cast< GLsizeiptr >(bytes + offset), data, GL_DYNAMIC_STORAGE_BIT);
...
glNamedBufferSubData(m_identifier, static_cast< GLintptr >(offset), static_cast< GLsizeiptr >(bytes), data);
I only use DSA functions, so I don't understand why I get this error.
My bad, I forgot this little one: glGetBufferParameteriv().
It is replaced by glGetNamedBufferParameteriv() in DSA.
It was wrapped in a method of my class.
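For reference, a small sketch of the difference, with GL_ARRAY_BUFFER and GL_BUFFER_SIZE chosen purely as examples:

GLint size = 0;

// The old getter reads from whatever is bound to the target, which is what
// raises "Buffer must be bound." when nothing is bound there.
glBindBuffer(GL_ARRAY_BUFFER, m_identifier);
glGetBufferParameteriv(GL_ARRAY_BUFFER, GL_BUFFER_SIZE, &size);

// DSA equivalent: takes the buffer name directly, no binding required.
glGetNamedBufferParameteriv(m_identifier, GL_BUFFER_SIZE, &size);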

No member named 'read' in 'metal::texturecube'

According to Apple's Metal shading language specification, texture cubes have a read method,
read(uint2 coord, uint face, uint lod = 0) const
However, when I try to build this shader, I get a compiler error,
fragment half4 passFragment(VertexInOut inFrag [[stage_in]],
                            texturecube<float, access::read> tex [[ texture(0) ]])
{
    float4 out = tex.read(uint2(0, 0), uint(0));
    return half4(out);
}
The error is,
No member named 'read' in 'metal::texturecube<float, metal::access::read>'
If I remove the access qualifier, then I get,
No member named 'read' in 'metal::texturecube<float, metal::access::sample>'
I also tried changing the type from float to int or short, but I get the same error. It's frustrating that there's no header to look at...
Any ideas?
It appears that texturecube::read() is only available on macOS.
There are, in fact, headers available. Look in /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/usr/lib/clang/3.5/include/metal.
In the metal_texture header, you'll see that the declaration of read() is inside a preprocessor conditional (#if) and is only declared if the macro __HAVE_TEXTURE_CUBE_READ__ is defined. On macOS, that macro is defined in the metal_config header; on iOS, it is not.
