Vulkan texture rendering on multiple meshes - texture-mapping

I am in the middle of rendering different textures on multiple meshes of a model, but I do not have much of a clue about the procedure. Someone suggested that, for each mesh, I create its own descriptor set and call vkCmdBindDescriptorSets() and vkCmdDrawIndexed() for rendering, like this:
// Pipeline with descriptor set layout that matches the shared descriptor sets
vkCmdBindPipeline(...pipelines.mesh...);
...
// Mesh A
vkCmdBindDescriptorSets(...&meshA.descriptorSet... );
vkCmdDrawIndexed(...);
// Mesh B
vkCmdBindDescriptorSets(...&meshB.descriptorSet... );
vkCmdDrawIndexed(...);
However, the above approach is quite different from the chopper sample and Vulkan's samples, which leaves me with no idea where to start making the change. I would really appreciate any help pointing me in the right direction.
Cheers

You have a conceptual object which is made of multiple meshes with different texturing needs. The general ways to deal with this are:
Change descriptor sets between parts of the object. Painful, but it works on all Vulkan-capable hardware.
Employ array textures. Each individual mesh fetches its data from a particular layer in the array texture. Of course, this restricts you to having each sub-mesh use textures of the same size, but it works on all Vulkan-capable hardware (up to 128 array elements, minimum). The array layer for a particular mesh can be provided as a push constant, or as a base instance if that's available (see the sketch at the end of this answer).
Note that if you manage to be able to do it by base instance, then you can render the entire object with a multi-draw indirect command. Though it's not clear that a short multi-draw indirect would be faster than just baking a short sequence of drawing commands into a command buffer.
Employ sampler arrays, as Sascha Willems suggests. Presumably, the array index for the sub-mesh is provided as a push-constant or a multi-draw's draw index. The problem is that, regardless of how that array index is provided, it will have to be a dynamically uniform expression. And Vulkan implementations are not required to allow you to index a sampler array with a dynamically uniform expression. The base requirement is just a constant expression.
This limits you to hardware that supports the shaderSampledImageArrayDynamicIndexing feature. So you have to ask for that, and if it's not available, you've got to work around it with #1 or #2, or just not run on that hardware. But that last option means you can't run on much mobile hardware, since most of it doesn't support this feature as of yet.
Note that I am not saying you shouldn't use this method. I just want you to be aware that there are costs. There's a lot of hardware out there that can't do this. So you need to plan for that.
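For option #2, a minimal C++/Vulkan sketch of the array-texture route. It assumes device, cmd, pipelineLayout, width/height, meshCount and a meshes array already exist, and that the pipeline layout declares a fragment-stage push constant for the layer index; all of those names are illustrative, not from any particular sample:
VkImage arrayImage = VK_NULL_HANDLE;
VkImageView arrayView = VK_NULL_HANDLE;

// One 2D image with multiple array layers; each sub-mesh samples its own layer.
VkImageCreateInfo imageInfo = {};
imageInfo.sType = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.imageType = VK_IMAGE_TYPE_2D;
imageInfo.format = VK_FORMAT_R8G8B8A8_UNORM;
imageInfo.extent = { width, height, 1 };
imageInfo.mipLevels = 1;
imageInfo.arrayLayers = meshCount;                 // one layer per sub-mesh
imageInfo.samples = VK_SAMPLE_COUNT_1_BIT;
imageInfo.tiling = VK_IMAGE_TILING_OPTIMAL;
imageInfo.usage = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT;
imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
vkCreateImage(device, &imageInfo, nullptr, &arrayImage);

// View the whole image as a 2D array so the fragment shader can use a sampler2DArray.
VkImageViewCreateInfo viewInfo = {};
viewInfo.sType = VK_STRUCTURE_TYPE_IMAGE_VIEW_CREATE_INFO;
viewInfo.image = arrayImage;
viewInfo.viewType = VK_IMAGE_VIEW_TYPE_2D_ARRAY;
viewInfo.format = imageInfo.format;
viewInfo.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, meshCount };
vkCreateImageView(device, &viewInfo, nullptr, &arrayView);

// At record time, push the layer index per sub-mesh and draw.
for (uint32_t i = 0; i < meshCount; ++i) {
    vkCmdPushConstants(cmd, pipelineLayout, VK_SHADER_STAGE_FRAGMENT_BIT, 0, sizeof(uint32_t), &i);
    vkCmdDrawIndexed(cmd, meshes[i].indexCount, 1, meshes[i].firstIndex, 0, 0);
}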

The person who suggested the above code fragment was me, I guess ;)
This is only one way of doing it. You don't necessarily have to create one descriptor set per mesh or per texture. If your mesh uses e.g. 4 different textures, you could bind all of them at once to different binding points and select between them in the shader.
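A rough sketch of what such a set layout could look like, with the four textures as combined image samplers at bindings 0..3 (names are illustrative and not taken from the chopper sample):
// One descriptor set with four combined image samplers at bindings 0..3.
// The fragment shader declares four samplers and picks between them.
VkDescriptorSetLayoutBinding bindings[4] = {};
for (uint32_t i = 0; i < 4; ++i) {
    bindings[i].binding = i;
    bindings[i].descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
    bindings[i].descriptorCount = 1;
    bindings[i].stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;
}

VkDescriptorSetLayoutCreateInfo layoutInfo = {};
layoutInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
layoutInfo.bindingCount = 4;
layoutInfo.pBindings = bindings;

VkDescriptorSetLayout textureSetLayout;
vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &textureSetLayout);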
And if you take a look at NVIDIA's chopper sample, they do it pretty much the same way, only with some more abstraction.
The example also sets up descriptor sets for the textures used:
VkDescriptorSet *textureDescriptors = m_renderer->getTextureDescriptorSets();
binds them a few lines later:
VkDescriptorSet sets[3] = { sceneDescriptor, textureDescriptors[0], m_transform_descriptor_set };
vkCmdBindDescriptorSets(m_draw_command[inCommandIndex], VK_PIPELINE_BIND_POINT_GRAPHICS, layout, 0, 3, sets, 0, NULL);
and then renders the mesh with the bound descriptor sets:
vkCmdDrawIndexedIndirect(m_draw_command[inCommandIndex], sceneIndirectBuffer, 0, inCount, sizeof(VkDrawIndexedIndirectCommand));
vkCmdDraw(m_draw_command[inCommandIndex], 1, 1, 0, 0);
If you take a look at initDescriptorSets you can see that they also create separate descriptor sets for the cubemap, the terrain, etc.
The LunarG examples should work similarly, though if I'm not mistaken they never use more than one texture?
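Spelled out with full call signatures, the per-mesh variant from the question might look roughly like this; the command buffer, pipeline layout and Mesh fields are illustrative placeholders:
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.mesh);

for (const Mesh& mesh : model.meshes) {
    // Each mesh carries its own descriptor set pointing at its texture(s).
    vkCmdBindDescriptorSets(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayout,
                            0, 1, &mesh.descriptorSet, 0, nullptr);
    vkCmdBindVertexBuffers(cmd, 0, 1, &mesh.vertexBuffer, &mesh.vertexOffset);
    vkCmdBindIndexBuffer(cmd, mesh.indexBuffer, 0, VK_INDEX_TYPE_UINT32);
    vkCmdDrawIndexed(cmd, mesh.indexCount, 1, 0, 0, 0);
}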

Related

Binding all textures on one huge descriptor set

I have a design question for a Vulkan game engine:
In my game engine I bind all "static" texture resources to one huge descriptor set (256k descriptors), and my shaders access those samplers through dynamic indexing.
[For example, when I want to sample a normal map that belongs to a certain game object, I add a new uint to the material's UBO that represents the index of the object's normal-map descriptor inside the huge descriptor set, then I sample it and compute the final object color.]
I'm wondering whether this way of accessing object textures is efficient compared to binding each object's textures in its own per-object descriptor set (alongside the material UBO).
Can the size of a descriptor set drastically affect texel access speed?
Or is my idea just bad?
Again, sorry about my English.
There are no performance issues with indexing from an array of sampler descriptors. The only real reason not to do things this way is that implementations may not let you dynamically index such arrays. But if you're requiring that from the implementation (all desktop implementations allow it), then just keep doing it; it's a common technique for reducing the number of state changes you have to issue on the CPU.
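As a minimal illustration of "requiring that from the implementation": check and enable the feature at device creation, and declare the big binding as one array. The 4096 count and all names below are placeholders, not a recommendation:
// Check and enable dynamic indexing of sampled image arrays.
VkPhysicalDeviceFeatures supported = {};
vkGetPhysicalDeviceFeatures(physicalDevice, &supported);
if (!supported.shaderSampledImageArrayDynamicIndexing) {
    // fall back to per-object descriptor sets (or array textures) here
}

VkPhysicalDeviceFeatures enabled = {};
enabled.shaderSampledImageArrayDynamicIndexing = VK_TRUE;

VkDeviceCreateInfo deviceInfo = {};
deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
deviceInfo.pEnabledFeatures = &enabled;
// ... queue create infos etc. as usual ...

// The "huge" set is then a single binding with a large descriptorCount;
// the shader indexes into it with the uint stored in the material UBO.
VkDescriptorSetLayoutBinding textureArray = {};
textureArray.binding = 0;
textureArray.descriptorType = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER;
textureArray.descriptorCount = 4096;               // illustrative count
textureArray.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;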

DirectCompute: How to read from a RWTexture2D<float4>?

I have the following buffer:
RWTexture2D<float4> Output : register(u0);
This buffer is used by a compute shader for rendering a computed image.
To write a pixel in that texture, I just use code similar to this:
Output[XY] = SomeFunctionReturningFloat4(SomeArgument);
This works very well and my computed image is correctly rendered on screen.
Now at some stage in the compute shader, I would like to read back an already computed pixel and process it again.
Output[XY] = SomeOtherFunctionReturningFloat4(Output[XY]);
The compiler returns an error:
error X3676: typed UAV loads are only allowed for single-component 32-bit element types
Any help appreciated.
In compute shaders, data access is limited for some data types, and it is not at all intuitive or straightforward. In your case, you use a
RWTexture2D<float4>
That is a typed UAV with the DXGI_FORMAT_R32G32B32A32_FLOAT format.
This format is supported for UAV typed stores, but it is not supported for UAV typed loads.
Basically, you can only write to it, but not read from it. UAV typed loads only support single-component 32-bit formats, in your case DXGI_FORMAT_R32_FLOAT (32 bits and that's all).
Your code should work if you use a RWTexture2D<float>, but I suppose this is not enough for you.
Possible workarounds that spring to my mind are:
1. using 4 different RWTexture2D<float> textures, one for each component
2. using 2 different textures: a RWTexture2D<float4> to write your values to and a Texture2D<float4> to read from
3. using a RWStructuredBuffer instead of the texture.
I don’t know your code so I don’t know if solutions 1. and 2. could be viable. However, I strongly suggest going for 3. and using StructuredBuffer. A RWStructuredBuffer can hold any type of struct and can easily cover all your needs. To be honest, in compute shaders I almost only use them to pass data. If you need the final output to be a texture, you can do all your calculations on the buffer, then copy the results on the texture when you’re done. I would add that drivers often use CompletePath to access RWTexture2D data, and FastPath to access RWStructuredBuffer data, making the former awfully slower than the latter.
Reference for data type access is here. Scroll down to UAV typed load.

Instance vs Loops in HLSL Model 5 Geometry Shaders

I'm looking at getting a program written for DirectX11 to play nice on DirectX10. To do that, I need to compile the shaders for model 4, not 5. Right now the only problem with that is that the geometry shaders use instancing which is unsupported by 4. The general model is
[instance(NUM_INSTANCES)]
void Gs(..., in uint instanceId : SV_GSInstanceID) { }
I can't seem to find many documents on why this exists, because my thought is: can't I just replace this with a loop from instanceId=0 to instanceId=NUM_INSTANCES-1?
The answer seems to be no, as it doesn't seem to output correctly. But beyond my exact problem: can you help me understand why the concept of instancing exists? Is there some implication for the entire pipeline that instancing has beyond simply calling the main function twice with a different index?
With regards to why my replacement did not work:
Geometry shaders are annotated with [maxvertexcount(N)]. I had incorrectly assumed this was the vertex input count and ignored it. In fact, the input size is determined by the type of primitive coming in, so this annotation is about the output. Before, when N was my output over I instances, each instance output N vertices. But now that I want to use a loop, a single invocation outputs N*I vertices. As such, the answer was to do as I suggested, and also use [maxvertexcount(N*NUM_INSTANCES)].
To more broadly answer my question of why instancing may be useful in a world that already has loops, I can only guess:
Loops are not truly supported in shaders, it turns out - graphics card cores do not have a concept of control flow. When loops are written in shaders, the loop is unrolled (see [unroll]). This has limitations, makes compilation slower, and makes the shader blob bigger.
Instances can be parallelized - one GPU core can run one instance of a shader while another runs the next instance of the same shader with the same input.

WebGL one-to-many data processing

Is there any scheme using WebGL which allows processing one data record into a previously unknown number of records?
Using OpenGL for example, a geometry program can be used to multiply vertices depending on their attributes, and thus output data of unknown length.
Is there any trick to use WebGL in a likewise fashion, or is this only possible on the JavaScript side?
Yup, there is no Geometry Shader in WebGL (just Vertex and Fragment).
So, yes, something multiplicative needs to be implemented on the JS side, by making more data or more calls to gl.drawArrays/gl.drawElements.
One approach that might be applicable is to have lots of data (triangles, say) and use the vertex shader to algorithmically throw away some or all of them. Kind of the opposite of multiplying. But if you keep the same triangles and change their processing with uniforms, or perhaps smaller data in textures, you can at least save the cost of sending up lots of different data.
To "throw away" a triangle, you need to put all three of its vertices outside the NDC (the -1 to +1 unit cube).

best approach to constructing an OpenGL ES 2.0 shader-based dynamic chain of filters

I'm on iOS 6 (and 7 too, if that makes any difference) and GL ES 2.0.
The idea is for a CAEAGLLayer to have a dynamic chain of shader-based filters that processes its contents property and displays the final result. Filters can be added / removed at any point in the chain.
So far I came up with an implementation, but I'm wondering if there's better ways to go about it. My implementation roughly goes about it this way:
A base filter class from which concrete filters inherit, creating a shader program (vertex / fragment combo) for whatever filter / imaging they implement.
A CAEAGLLayer subclass which implements the filter chain and to which filters are added. The high-level processing algorithm is:
// 1 - Assume whenever the layer's content property is changed to an image, a copy of the image gets stored in a sourceImage property.
// 2 - Assume changing the content property or adding / removing an image unit triggers this algorithm.
// 3 - Assume the whole filter chain basically processes a quad with position and texture coordinates thru a VBO.
// 4 - Assume all shader programs (by shader program I mean a vertex and fragment shader pair in a single program) have access to texture unit 0.
// 5 - Assume P shader programs.
load sourceImage into a texture object bound to GL_TEXTURE_2D on texture unit GL_TEXTURE0
attach bound texture object to GL_FRAMEBUFFER GL_COLOR_ATTACHMENT0 (so we are doing render-to-texture, which will be accessible to fragment shaders)
for p = program identifier 0 up to P - 2:
glUseProgram(p)
glDrawArrays()
attach GL_RENDERBUFFER to GL_FRAMEBUFFER GL_COLOR_ATTACHMENT0 (GL_RENDERBUFFER in turn has its storage set to the layer itself);
p = program identifier P - 1 (last program in the chain)
glUseProgram(p)
glDrawArrays()
present GL_RENDERBUFFER onscreen
This approach seems to work so far, but there's a number of things I'm wondering about:
Best way to implement adding / removing of filters:
Adding and removing programs seems the most logical approach right now. However this means one program per plugin and switching between all of these at render time. I wonder how these other approaches would compare:
Attaching / detaching shader-pairs and re-linking a single composite program, instead of adding / removing programs. The OpenGL ES 2.0 Programming Guide says you cannot do it. However, since desktop GL allows for multiple shader objects in one program, I'm anyway curious if it would be a better approach if ES supported it.
Keeping the filters in text format (their code within a function other than main) and instead compile them all into a monolithic shader pair (with an added main of course) each time a filter is added / removed.
Best way to implement per-filter caching:
Right now, adding / removing any number of filters at any point in the chain requires running all programs again to render the final image. It'd be nice however if I could somehow cache the output of each filter. That way, removing, adding or bypassing a filter would only require running the filters past the point of insertion / deletion / bypassing in the chain. I can think of a naive approach: on each program pass, bind a different texture object to GL_TEXTURE0 and to the GL_COLOR_ATTACHMENT0 of the frame buffer. In this way I can keep the output of every filter around. However, creating a new texture, binding and changing the framebuffer attachment once per filter seems inefficient.
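Roughly, the naive per-pass caching described above would look something like this, assuming each cacheTextures[p] was created up front with glTexImage2D (all handles are illustrative):
// Render filter pass p into its own cache texture, reading the previous pass's output.
glBindFramebuffer(GL_FRAMEBUFFER, filterFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, cacheTextures[p], 0);

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, (p == 0) ? sourceTexture : cacheTextures[p - 1]);

glUseProgram(filterPrograms[p]);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);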
I don't have much to say about the filter output caching problem, but as for filter switching... The EXT_separate_shader_objects extension is designed to solve this very problem, and it's supported on every device that runs iOS 5.0 or later. Here's a brief overview:
There's a new convenience API for compiling shader programs that also takes care of making them "separable":
_vertexProgram = glCreateShaderProgramvEXT(GL_VERTEX_SHADER, 1, &source);
Program Pipeline Objects manage your program state and let you mix and match already-compiled shaders:
GLuint _ppo;
glGenProgramPipelinesEXT(1, &_ppo);
glBindProgramPipelineEXT(_ppo);
glUseProgramStagesEXT(_ppo, GL_VERTEX_SHADER_BIT_EXT, _vertexProgram);
glUseProgramStagesEXT(_ppo, GL_FRAGMENT_SHADER_BIT_EXT, _fragmentProgram);
Mixing and matching shaders can make attribute binding a pain, so you can specify that in the shader (likewise for varyings):
#extension GL_EXT_separate_shader_objects : enable
layout(location = 0) attribute vec4 position;
layout(location = 1) attribute vec3 normal;
Uniforms are set for the shader program they belong to:
glProgramUniformMatrix3fvEXT(_vertexProgram, u_normalMatrix, 1, 0, _normalMatrix.m);
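Putting that together for the filter chain, the per-pass switch could then be just a fragment-stage swap on the pipeline object instead of a full glUseProgram. A rough sketch, where filterFragmentPrograms and filterCount are illustrative names:
// One shared vertex program, one separable fragment program per filter.
glBindProgramPipelineEXT(_ppo);
glUseProgramStagesEXT(_ppo, GL_VERTEX_SHADER_BIT_EXT, _vertexProgram);

for (GLuint f = 0; f < filterCount; ++f) {
    glUseProgramStagesEXT(_ppo, GL_FRAGMENT_SHADER_BIT_EXT, filterFragmentPrograms[f]);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}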
