Metal fragment shader get size - iOS

Is there a way to retrieve the sizes of the framebuffer and of the buffers passed to a Metal fragment shader, or do we need to pass them in manually as arguments? I want to retrieve the width and height of the framebuffer texture that results are being written to, as well as the lengths of the other MTLBuffers ([[buffer(0)]], [[buffer(1)]], ...) passed to the fragment shader.

That information is not automatically available. You have to pass it in as arguments.
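For example, a minimal sketch in the Metal shading language of the "pass it in as arguments" approach — the SizeInfo struct, its field names, and the buffer(2) slot are hypothetical; the pattern is simply to bind one extra constant buffer carrying the sizes from the CPU side:

#include <metal_stdlib>
using namespace metal;

// Hypothetical struct filled in by the CPU each frame with the
// drawable's dimensions and the element counts of the other buffers.
struct SizeInfo {
    uint2 framebufferSize;   // width/height of the render target
    uint  buffer0Count;      // number of elements in buffer(0)
    uint  buffer1Count;      // number of elements in buffer(1)
};

fragment float4 myFragment(float4 pos [[position]],
                           constant float *data0 [[buffer(0)]],
                           constant float *data1 [[buffer(1)]],
                           constant SizeInfo &sizes [[buffer(2)]])
{
    // The sizes are now available wherever the shader needs them,
    // e.g. to normalize the fragment coordinate.
    float2 uv = pos.xy / float2(sizes.framebufferSize);
    return float4(uv, 0.0, 1.0);
}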

Related

Confused about the generation of mipmaps in Metal?

I set mipmapped to YES in texture2DDescriptorWithPixelFormat and call the generateMipmapsForTexture method of MTLBlitCommandEncoder on the given texture to automatically generate mipmaps.
My question is: if I have set mipmapped to YES, doesn't that mean the resulting image should already be mipmapped? Why do I need MTLBlitCommandEncoder to explicitly generate the mipmaps?
It's a bit confusing, so let's walk through it.
texture2DDescriptorWithPixelFormat takes a pixel format, width, height, and mipmapped as parameters. The mipmapped parameter is there to tell Metal to calculate the number of mip levels the resulting texture will have, since there is no parameter for passing a mip level count directly. Here's how it's described in the documentation:
mipmapped
A Boolean indicating whether the resulting image should be mipmapped. If YES, then the mipmapLevelCount property in the returned descriptor is computed from width and height. If NO, then mipmapLevelCount is 1.
If you instead use newTextureWithDescriptor with a texture descriptor you created yourself, there is no mipmapped parameter, since you explicitly set the number of mip levels in the mipmapLevelCount property of MTLTextureDescriptor.
At the point where you create the texture there is nothing to generate mipmaps from: the texture is empty.
The generateMipmapsForTexture method is used for a texture that already has mip levels allocated; it populates those levels with automatically generated, downsampled copies of the base level.
So, to get this straight: the mipmapped parameter just tells Metal to allocate a texture that has mip levels, which you can later populate (if you want) with generateMipmapsForTexture (or in other ways, such as rendering into a specific level by using it as a color attachment of a render pass).
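Put together, the flow looks something like this (an Objective-C sketch; device, commandQueue, and the 512x512 size are assumed stand-ins from the surrounding app code):

// Allocate a texture whose mipmapLevelCount is computed for us.
MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA8Unorm
                                                       width:512
                                                      height:512
                                                   mipmapped:YES];
id<MTLTexture> texture = [device newTextureWithDescriptor:desc];

// ... fill mip level 0, e.g. with -replaceRegion:mipmapLevel:withBytes:bytesPerRow: ...

// Now ask a blit encoder to fill levels 1..mipmapLevelCount-1.
id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit generateMipmapsForTexture:texture];
[blit endEncoding];
[commandBuffer commit];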

Using gl_FragData[] from multiple shader files

I have a WebGL scene set up with several shaders, and I'm using multiple render targets (gl_FragData[]).
In the first shader, I can output to
gl_FragData[0] = vec4(..);
gl_FragData[1] = vec4(..);
gl_FragData[2] = vec4(..);
Now with my second shader, I want to output to gl_FragData[3] and save the texture to pass to my third shader.
The second shader doesn't seem to output to gl_FragData[3], yet this works if I use it in my first shader. I want the output of gl_FragData[3] to be stored in a texture and sent to the third shader.
I think it may have to do with the framebuffer, but I've tried changing that and have had no luck. What am I missing?
If you want to use the same framebuffer, you'll need to mask off the unused draw buffers: drawBuffers([COLOR_ATTACHMENT0, COLOR_ATTACHMENT1, COLOR_ATTACHMENT2]) for the first shader, and drawBuffers([NONE, NONE, NONE, COLOR_ATTACHMENT3]) for the second shader.
From EXT_draw_buffers:
Any colors, or color components, associated with a fragment that are not written by the fragment shader are undefined.
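In WebGL1 this goes through the WEBGL_draw_buffers extension. A sketch, assuming a framebuffer that already has textures attached to all four color attachments, and hypothetical drawFirstPass/drawSecondPass helpers:

var ext = gl.getExtension('WEBGL_draw_buffers');

// First pass: the shader writes gl_FragData[0..2], so mask off slot 3.
ext.drawBuffersWEBGL([
    ext.COLOR_ATTACHMENT0_WEBGL,
    ext.COLOR_ATTACHMENT1_WEBGL,
    ext.COLOR_ATTACHMENT2_WEBGL,
]);
drawFirstPass();

// Second pass: the shader writes only gl_FragData[3].
ext.drawBuffersWEBGL([
    gl.NONE, gl.NONE, gl.NONE,
    ext.COLOR_ATTACHMENT3_WEBGL,
]);
drawSecondPass();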

Reading a variable from a vertex shader for rendering in WebGL

I want to implement a collision detector between a moving object and a static object. The way I am thinking of doing this is by checking in the vertex shader, every frame, whether any vertex of the moving object intersects the position of the static object.
Doing it that way, I would get the point of collision in the vertex shader, but I want to use that variable for rendering purposes in the JS file.
Is there a way to do this?
In WebGL1 you cannot directly read any data back from a vertex shader. The best you can do is use the vertex shader to affect the pixels rendered in the fragment shader. For example, you could set gl_Position so that nothing is rendered if the test fails and a single pixel is rendered if it passes, or set a varying that produces a particular color based on the test result. Then you can either read the pixel back with gl.readPixels, or pass the texture you rendered to into another shader in a different draw call.
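A sketch of the readback half of that idea (the x/y location and the "red means collision" convention are assumptions of this example):

// After the collision-test pass has rendered into the bound framebuffer:
var pixel = new Uint8Array(4);
gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
var collided = pixel[0] > 0;   // the shader wrote red where a collision happened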
In WebGL2 you can use transform feedback, which lets a vertex shader write its varyings to a buffer. You can then use that buffer in other draw calls, or read its contents back with gl.getBufferSubData.
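A minimal transform feedback sketch, assuming the program was linked after calling gl.transformFeedbackVaryings(program, ['v_result'], gl.SEPARATE_ATTRIBS) and that each vertex outputs one vec4:

// Allocate a buffer big enough for one vec4 (16 bytes) per vertex.
var tfBuffer = gl.createBuffer();
gl.bindBuffer(gl.TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
gl.bufferData(gl.TRANSFORM_FEEDBACK_BUFFER, vertexCount * 16, gl.STATIC_READ);

var tf = gl.createTransformFeedback();
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

// With the program bound via gl.useProgram(program):
gl.enable(gl.RASTERIZER_DISCARD);          // skip rasterization entirely
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, vertexCount);
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);

// Read the captured varyings back into JavaScript.
var results = new Float32Array(vertexCount * 4);
gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, results);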
In WebGL2 you can also do occlusion queries, which means you can try to draw something and test whether it was actually drawn or whether the depth buffer prevented it from being drawn.
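An occlusion query sketch (drawTestGeometry is a hypothetical draw call):

var query = gl.createQuery();
gl.beginQuery(gl.ANY_SAMPLES_PASSED, query);
drawTestGeometry();
gl.endQuery(gl.ANY_SAMPLES_PASSED);

// The result arrives asynchronously; poll for it on a later frame.
if (gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE)) {
    var anySamplesDrawn = gl.getQueryParameter(query, gl.QUERY_RESULT);
}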

Is DirectX 11 compute capable of writing more than 10k vertices to a RWStructuredBuffer?

I have a vertex buffer with an unordered access view, which I'm filling from a compute shader that treats the UAV as a RWStructuredBuffer of a struct equivalent to the vertex definition. There are 216000 vertices (i.e. 60 x 60 x 60), but my compute shader seems to fill only about 8000 of them, leaving the rest with their initial values. Is there a limit on the number of elements in a structured buffer that can be written in this way?
As it turns out, if you turn on DirectX error checking, binding the UAV of a vertex buffer as a RWStructuredBuffer in the shader is reported as an error. So although this happens to work for a limited number of vertices, it's not supported.
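One supported alternative is to run the compute shader on a plain structured buffer and then copy the results into the real vertex buffer. A C++ sketch, where device, context, vertexBuffer, Vertex, and vertexCount are assumed to come from the surrounding code:

// Structured buffers cannot also be vertex buffers, so compute into
// a separate structured buffer...
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth           = vertexCount * sizeof(Vertex);   // 216000 vertices is fine
desc.Usage               = D3D11_USAGE_DEFAULT;
desc.BindFlags           = D3D11_BIND_UNORDERED_ACCESS;
desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
desc.StructureByteStride = sizeof(Vertex);

ID3D11Buffer* computeOutput = nullptr;
device->CreateBuffer(&desc, nullptr, &computeOutput);

// ... create a UAV on computeOutput, bind it with CSSetUnorderedAccessViews,
// and Dispatch the compute shader ...

// ...then copy the results into the actual vertex buffer
// (both buffers must have identical ByteWidth).
context->CopyResource(vertexBuffer, computeOutput);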

Per Instance Textures, and Vertex And Pixel Shaders?

How do you implement per-instance textures, vertex shaders, and pixel shaders with the same vertex buffer and/or device context?
I am just trying to find the most efficient way to have different pixel shaders used by the same type of mesh, but colored differently. For example, I would like square and triangle models in the vertex buffer, and for the vertex/pixel/etc. shaders to act differently based on instance data. (If the instance data somehow includes "dead", shaders that draw opaque shapes with solid colors are used rather than gradients.)
Given:
1. Different model templates in the vertex buffer: Square and Triangle (more eventually).
2. An instance buffer with [n] instances of type Square and/or Triangle, etc.
Things I am trying to research to do this:
A. Can I add a Texture, VertexShader, or PixelShader ID to the buffer data so that HLSL or the Input Assembler can determine which shader to use at draw time?
B. Can I "Set" multiple Pixel and Vertex Shaders into the DeviceContext, and how do I tell DirectX to "switch" the Vertex Shader that is loaded at render time?
C. How many Shaders of each type, (Vertex, Pixel, Hull, etc), can I associate with model definitions/meshes in the default Vertex Buffer?
D. Can I use some sort of Shader Selector in HLSL?
Related C++ Code
When I create an input layout, can I do this without specifying an actual Vertex Shader, or somehow specify more than one?
NS::ThrowIfFailed(
    result = NS::DeviceManager::Device->CreateInputLayout(
        NS::ModelRenderer::InitialElementDescription,
        2,
        vertexShaderFile->Data,
        vertexShaderFile->Length,
        &NS::ModelRenderer::StaticInputLayout
    )
);
When I set the VertexShader and PixelShader, how do I associate them with a particular model in my VertexBuffer? Is it possible to set more than one of each?
DeviceManager::DeviceContext->IASetInputLayout(ModelRenderer::StaticInputLayout.Get());
DeviceManager::DeviceContext->VSSetShader(ModelRenderer::StaticVertexShader.Get(), nullptr, 0);
DeviceManager::DeviceContext->PSSetShader(ModelRenderer::StaticPixelShader.Get(), nullptr, 0);
How do I add a Texture, VertexShader or PixelShader ID to the buffer data so that HLSL or the InputAssembly can determine which Shader to use at draw time?
You can't assign a pixel shader ID to a buffer; that's not how the pipeline works.
A/ You can bind only one vertex shader and one pixel shader in a device context at a time, and that binding defines your pipeline. Draw the geometry that uses those shaders, then switch to another vertex/pixel shader as needed and draw the next geometry.
B/ You can use different shaders with the same model, but that is done on the CPU using VSSetShader, PSSetShader, and so on.
C/ None, for the same reason as in B: shaders are set on the CPU, not associated with meshes in the vertex buffer.
When I create an input layout, can I do this without specifying an actual Vertex Shader, or somehow specify more than one?
If you don't specify a vertex shader, the pipeline will consider that you are drawing "null" geometry, which is actually possible (and quite fun) but a bit out of scope here. If you provide geometry, you need to pass the vertex shader bytecode so the runtime can match your geometry layout against the vertex shader's input signature. You can of course create several input layouts by calling the function several times (once per vertex shader/geometry combination in the worst case, but if two models/vertex shaders share the same layout you can reuse a single input layout).
When I set the VertexShader and PixelShader, how do I associate them with a particular model in my VertexBuffer? Is it possible to set more than one of each?
You bind everything you need (vertex/pixel shaders, vertex/index buffers, input layout) and call Draw (or DrawInstanced).
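In practice the "dead vs. alive" case from the question becomes two draw calls with a pipeline switch in between. A sketch, where the shader and count variables are hypothetical and the instance buffer is assumed to be sorted so dead instances follow alive ones:

context->IASetInputLayout(layout.Get());
UINT stride = sizeof(Vertex), offset = 0;
context->IASetVertexBuffers(0, 1, vertexBuffer.GetAddressOf(), &stride, &offset);

// Draw the "alive" instances with the gradient shaders...
context->VSSetShader(gradientVS.Get(), nullptr, 0);
context->PSSetShader(gradientPS.Get(), nullptr, 0);
context->DrawInstanced(vertsPerModel, aliveCount, 0, 0);

// ...then switch shaders and draw the "dead" instances flat-shaded.
context->VSSetShader(flatVS.Get(), nullptr, 0);
context->PSSetShader(flatPS.Get(), nullptr, 0);
context->DrawInstanced(vertsPerModel, deadCount, 0, aliveCount);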
