How to get the vertex shader from a program - WebGL

Can I use the WebGL API to get the vertex shader (only) from a program (WebGLProgram)?
There is gl.getAttachedShaders(), which gives me an array of them. Is there a way to determine which is which?
Thanks

See the WebGL Specification, section 5.13.9 Programs and Shaders:
any getShaderParameter(WebGLShader shader, GLenum pname)
Return the value for the passed pname given the passed shader. The type returned is the natural type for the requested pname, as given in the following table:
SHADER_TYPE unsigned long
DELETE_STATUS boolean
COMPILE_STATUS boolean
Use gl.getShaderParameter(shader, gl.SHADER_TYPE) to determine whether a shader object returned by gl.getAttachedShaders is a vertex or a fragment shader. The possible return values are gl.VERTEX_SHADER and gl.FRAGMENT_SHADER.
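For example, a minimal sketch (assuming gl is the WebGLRenderingContext and program is a linked WebGLProgram; the helper name getVertexShader is made up for illustration):

function getVertexShader(gl, program) {
  // getAttachedShaders returns the attached shaders in no guaranteed order.
  const shaders = gl.getAttachedShaders(program);
  for (const shader of shaders) {
    if (gl.getShaderParameter(shader, gl.SHADER_TYPE) === gl.VERTEX_SHADER) {
      return shader;
    }
  }
  return null; // no vertex shader attached
}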

Related

Using Metal discard_fragment() to discard individual samples in an MSAA attachment

For an MSAA attachment, the following simple Metal fragment shader is meant to be run in multiple render passes, once per sample, to fill the stencil attachment with different reference values per sample. It does not work as expected, and effectively fills all stencil pixel samples with the reference value on each render pass.
struct _10
{
    int _m0; // index of the one sample this pass should keep
};

struct main0_out
{
    float gl_FragDepth [[depth(any)]];
};

fragment main0_out main0(constant _10& _12 [[buffer(0)]], uint gl_SampleID [[sample_id]])
{
    main0_out out = {};
    // Discard every sample except the one selected in the buffer.
    if (gl_SampleID != _12._m0)
    {
        discard_fragment();
    }
    out.gl_FragDepth = 0.5;
    return out;
}
The problem seems to be using discard_fragment() on a per-sample basis. The intended operation of discarding one sample but writing another does not occur. Instead, the sample is never discarded, regardless of the comparison value passed in the buffer.
In fact, from what I can tell from GPU capture shader tracing results, it appears that the entire if-discard clause is optimized away by the Metal compiler. My guess is that Metal probably recognizes the disconnect between per-sample invocations and discard_fragment(), and removes it, but I can't be sure.
I can't find any Metal documentation on discard_fragment() and its use with MSAA, so I can't tell whether discard_fragment() is supposed to work with individual sample invocations in that environment, or whether it can only discard the entire fragment (which admittedly the function name implies, but what does that mean for per-sample invocations?).
Does the logic and intention of this shader make sense? Is discard_fragment() supposed to work with individual sample invocations? And why would the Metal compiler possibly be removing the discard operation from my shader?

How to select WebGL GLSL sampler type from texture format properties?

WebGL's GLSL has sampler2D, isampler2D, and usampler2D for reading float, int, and unsigned int from textures inside a shader. When creating a texture in WebGL1/2 we specify a texture InternalFormat, Format, and Type. According to the OpenGL Sampler Wiki Page, using a sampler with incompatible types for a given texture can lead to undefined values.
Is there a simple rule to determine how to map a texture's InternalFormat, Format, and Type definitively to the correct GLSL sampler type?
(Without loss of generality I have focused on ?sampler2D, but of course there are also 3D, cube, and other textures, which I assume follow exactly the same rules.)
WebGL1 doesn't have those different sampler types.
In WebGL2 the sampler type is determined by the internal format. Formats that end in I, like RGB8I, are isampler formats. Formats that end in UI, like RGB8UI, are usampler formats. Everything else is sampler.
There's a list of the formats on page 5 of the WebGL2 Reference Guide
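As a sketch of that rule (samplerPrefixFor is a hypothetical helper; the strings follow the WebGL2 internal-format enum spellings):

function samplerPrefixFor(internalFormatName) {
  // Check 'UI' before 'I', since every 'UI' name also ends in 'I'.
  if (internalFormatName.endsWith('UI')) return 'u'; // e.g. RGBA8UI -> usampler2D
  if (internalFormatName.endsWith('I')) return 'i';  // e.g. RGBA8I -> isampler2D
  return '';                                         // e.g. RGBA8, R32F -> sampler2D
}

So, for example, a texture created with gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA8UI, w, h, 0, gl.RGBA_INTEGER, gl.UNSIGNED_BYTE, data) must be read through a usampler2D.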
Also note
(1) You should avoid the OpenGL reference pages for WebGL2, as they will often not match. Instead, you should be reading the OpenGL ES 3.0.x reference pages.
(2) WebGL2 has stronger restrictions. The docs you referenced said the values can be undefined. WebGL2 doesn't allow this. From the WebGL2 spec:
5.22 A sampler type must match the internal texture format
Texture lookup functions return values as floating point, unsigned integer or signed integer, depending on the sampler type passed to the lookup function. If the wrong sampler type is used for texture access, i.e., the sampler type does not match the texture internal format, the returned values are undefined in OpenGL ES Shading Language 3.00.6 (OpenGL ES Shading Language 3.00.6 §8.8). In WebGL, this generates an INVALID_OPERATION error in the corresponding draw call, including drawArrays, drawElements, drawArraysInstanced, drawElementsInstanced, and drawRangeElements.
If the sampler type is floating point and the internal texture format is normalized integer, it is considered a match and the returned values are converted to floating point in the range [0, 1].

How many times is the vertex shader called with Metal?

I've been learning some basic Metal rendering, and I am stuck on some basic concepts:
I know we can send vertex data to a shader using:
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
And then we can retrieve it in the shader with:
vertex float4 basic_vertex(const device VertexIn* vertexIn [[ buffer(0) ]], unsigned int vid [[ vertex_id ]])
As I understand it, the vertex function will be called once per each vertex, and vertex_id will update on each call to contain the vertex index.
The question is: where does that vertex_id come from?
I could send to the shader more data with different sizes:
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
renderEncoder.setVertexBuffer(vertexBuffer2, offset: 0, index: 1)
If vertexBuffer has 3 elements and vertexBuffer2 has 10 elements... how many times is the vertex function called? 10?
Thanks!
That's determined by the draw call you make on the render command encoder. Take the simplest draw method:
drawPrimitives(type:vertexStart:vertexCount:)
The vertexCount determines how many times your vertex function is called. The vertex IDs passed to the vertex function are those in the range from vertexStart to vertexStart + vertexCount - 1.
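For example (a sketch; renderEncoder is assumed to be a configured MTLRenderCommandEncoder):

// Calls the vertex function 3 times, with vertex IDs 0, 1, and 2.
renderEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)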
If you consider another draw method:
drawPrimitives(type:vertexStart:vertexCount:instanceCount:)
That goes over the same range of vertex IDs. However, it calls your vertex function vertexCount * instanceCount times. There will be instanceCount calls with the vertex ID being vertexStart. The instance ID will range from 0 to instanceCount - 1 for those calls. Likewise, there will be instanceCount calls with the vertex ID being vertexStart + 1 (assuming vertexCount >= 2), one with each instance ID in [0..instanceCount-1]. Etc.
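Continuing the sketch above:

// Calls the vertex function 3 * 4 = 12 times:
// vertex IDs 0...2, each paired with instance IDs 0...3.
renderEncoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3, instanceCount: 4)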
The other draw methods have various other options, but they mostly don't affect how many times the vertex function is called. For example, baseInstance shifts the range of the instance IDs, but not its size.
The various drawIndexedPrimitives() methods get the specific vertex IDs from a buffer instead of enumerating all vertex IDs in a range. That buffer may contain a given vertex ID in multiple locations. For that case, I don't think it's defined whether the vertex function might be called multiple times for the same vertex ID and instance ID. Metal will presumably try to avoid duplicating effort, but it might end up actually being faster to just call the vertex function for every index in the index buffer even if multiple such indexes end up being the same vertex ID.
The relationship between vertexes and data in buffers you pass to the vertex processing stage is entirely up to you. You don't have to pass any buffers, at all. For example, a vertex function could generate vertex information completely computationally just from the vertex ID and instance ID.
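A well-known instance of this (a sketch in the Metal shading language; no vertex buffers are bound at all) is the bufferless full-screen triangle:

vertex float4 fullscreen_triangle(uint vid [[vertex_id]])
{
    // Derive an oversized triangle covering the screen from the vertex ID alone:
    // vid 0 -> (-1,-1), vid 1 -> (3,-1), vid 2 -> (-1,3)
    float2 pos = float2(float((vid << 1) & 2) * 2.0 - 1.0,
                        float(vid & 2) * 2.0 - 1.0);
    return float4(pos, 0.0, 1.0);
}

Drawn with vertexCount 3, this covers the whole render target without reading any buffer.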
It is pretty common, of course, for at least some of the buffers to contain arrays of per-vertex data that are indexed into using the vertex ID. Other buffers might be uniform data that's the same for all vertexes (that is, you don't index into that buffer using the vertex ID). Metal itself doesn't know this, though.

Random access to D3D11 buffer with R8G8B8A8_UNorm format in HLSL

I have a D3D11 buffer with a few million elements that is supposed to hold data in the R8G8B8A8_UNorm format.
The desired behavior is the following: One shader calculates a vec4 and writes it to the buffer in a random access pattern. In the next pass, another shader reads the data in a random access pattern and processes them further.
My best guess would be to create an UnorderedAccessView with the R8G8B8A8_UNorm format. But how do I declare the RWBuffer<?> in HLSL, and how do I write to and read from it? Is it necessary to declare it as RWBuffer<uint> and do the packing from vec4 to uint manually?
In OpenGL I would create a buffer and a buffer texture. Then I can declare an imageBuffer with the rgba8 format in the shader, access it with imageLoad and imageStore, and the hardware does all the conversions for me. Is this possible in D3D11?
This is a little tricky due to a lot of different gotchas, but you should be able to do something like this.
In your shader that writes to the buffer declare:
RWBuffer<float4> WriteBuf : register( u1 );
Note that it is bound to register u1 instead of u0. UAVs bound to the pixel shader via OMSetRenderTargetsAndUnorderedAccessViews share their slots with the render targets, so with one render target bound the first available UAV register is u1.
To write to the buffer just do something like:
WriteBuf[0] = float4(0.5, 0.5, 0, 1);
Note that you must write all 4 values at once.
In your C++ code, you must create an unordered access buffer, and bind it to a UAV. You can use the DXGI_FORMAT_R8G8B8A8_UNORM format. When you write 4 floats to it, the values will automatically be converted and packed. The UAV can be bound to the pipeline using OMSetRenderTargetsAndUnorderedAccessViews.
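A sketch of that setup (error handling omitted; device, buffer, uav, and elementCount are illustrative names):

// Buffer of elementCount 4-byte elements, usable as both UAV and SRV.
D3D11_BUFFER_DESC bd = {};
bd.ByteWidth = elementCount * 4; // R8G8B8A8_UNORM = 4 bytes per element
bd.Usage = D3D11_USAGE_DEFAULT;
bd.BindFlags = D3D11_BIND_UNORDERED_ACCESS | D3D11_BIND_SHADER_RESOURCE;
device->CreateBuffer(&bd, nullptr, &buffer);

// Typed UAV: the float4 written in HLSL is packed to 8-bit UNORM automatically.
D3D11_UNORDERED_ACCESS_VIEW_DESC uavDesc = {};
uavDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
uavDesc.ViewDimension = D3D11_UAV_DIMENSION_BUFFER;
uavDesc.Buffer.FirstElement = 0;
uavDesc.Buffer.NumElements = elementCount;
device->CreateUnorderedAccessView(buffer, &uavDesc, &uav);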
In your shader that reads from the buffer declare a read only buffer:
Buffer<float4> ReadBuf : register( t0 );
Note that this buffer uses t0 because it will be bound as a shader resource view (SRV) instead of UAV.
To access the buffer use something like:
float4 val = ReadBuf[0];
In your C++ code, you can bind the same buffer you created earlier to an SRV instead of a UAV. The SRV can be bound to the pipeline using PSSetShaderResources and can also be created with DXGI_FORMAT_R8G8B8A8_UNORM.
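Continuing the sketch, the matching SRV over the same buffer:

D3D11_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
srvDesc.ViewDimension = D3D11_SRV_DIMENSION_BUFFER;
srvDesc.Buffer.FirstElement = 0;
srvDesc.Buffer.NumElements = elementCount;
device->CreateShaderResourceView(buffer, &srvDesc, &srv);
context->PSSetShaderResources(0, 1, &srv); // t0, matching the declaration above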
You can't bind both the SRV and UAV using the same buffer to the pipeline at the same time. So you must bind the UAV first and run your first shader pass. Then unbind the UAV, bind SRV, and run the second shader pass.
There are probably other ways to do this as well. Note that all of this requires shader model 5.

Using SetVertexDeclaration with the fixed-function pipeline in DirectX 9

I am trying to use my own vertex structure: upload the vertices into a vertex buffer (and the indices into an index buffer) without an FVF code, set up the vertex declaration and stream source, and draw using DrawIndexedPrimitive with the fixed-function pipeline (no FVF).
Do I have to write my own shader to use DirectX 9's SetVertexDeclaration?
Can I use a customised vertex structure with SetVertexDeclaration and the fixed-function pipeline?
If I can, are there any restrictions on combining the fixed-function pipeline with vertex declarations?
Customised vertex structure:
struct PosNormTexCoord
{
    float x, y, z;    // position
    float nx, ny, nz; // normal
    float tu, tv;     // texture coordinates
};
Unfortunately, you can't use the fixed-function pipeline with a custom vertex format. But your structure can be expressed in FVF, so why would you want to skip using it?
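For reference, a sketch of the FVF code that matches the struct above (device is an IDirect3DDevice9*):

// Position + normal + one 2D texture coordinate set, matching PosNormTexCoord.
const DWORD POS_NORM_TEX_FVF = D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1;
device->SetFVF(POS_NORM_TEX_FVF);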
