In OpenGL ES it is possible to set the precision of uniforms and attributes using lowp/mediump/highp. Is there something like this in Metal?
The Metal shading language supports the half data type (see section 2.1 of the spec). It's defined there as:
A 16-bit floating-point. The half data type must conform to the IEEE 754 binary16 storage format.
This makes it pretty much equivalent to mediump.
There isn't really an equivalent to lowp in Metal. However, that's no real loss, because I believe that Metal-capable iOS GPUs don't benefit from lowp anyway and just run lowp operations at mediump.
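For comparison, here is a rough sketch (written in GLSL ES, where the qualifiers come from) of how the precision qualifiers map onto Metal's types; the uniform names are just illustrative:

// GLSL ES fragment shader                 // closest Metal equivalent
precision mediump float;                   // use half as the default throughout
uniform highp vec3 u_lightPosition;        // float3 (32-bit float)
uniform mediump vec4 u_color;              // half4 (16-bit, IEEE 754 binary16)
uniform lowp float u_alpha;                // no direct equivalent; also use half

void main() {
  gl_FragColor = vec4(u_color.rgb, u_alpha);
}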
Related
What's the best practice for them? Is there any performance difference?
What's the best practice for them?
For the most part these only matter on mobile. The spec says an implementation can always use a higher precision, so on desktop both the vertex shader and the fragment shader always run at highp. (I know of no desktop GPUs for which this is not true.)
From the spec section 4.5.2
4.5.2 Precision Qualifiers
...
Precision qualifiers declare a minimum range and precision that the underlying implementation must use when storing these variables. Implementations may use greater range and precision than requested, but not less.
For mobile phones and tablets there are several answers. There is no single best; it's up to you:
use the lowest precision you can that still does what you need it to do.
use highp and ignore the perf issues and the old phones where it doesn't work
use mediump and ignore the bugs (See below)
check if the user's device supports highp, and if not, use different shaders with fewer features (see the sketch after the spec quote below).
WebGL defaults vertex shaders to highp; fragment shaders have no default and you have to specify one. Further, highp in the fragment shader is an optional feature and some mobile GPUs don't support it. I don't know what percentage that is in 2019. AFAIK most or maybe even all phones shipping in 2019 support highp, but older phones (2011, 2012, 2013) don't.
From the spec:
The vertex language requires any uses of lowp, mediump and highp to compile and link without error. The fragment language requires any uses of lowp and mediump to compile without error. Support for highp is optional.
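As a sketch of the "check whether highp is available" option from the list above: GLSL ES defines the GL_FRAGMENT_PRECISION_HIGH macro in the fragment language when highp is supported, so one common pattern looks roughly like this:

// Fragment shader: pick the best precision the device supports.
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;    // device supports highp in fragment shaders
#else
precision mediump float;  // fallback; may cause artifacts in precision-hungry code
#endif

uniform vec4 u_color;     // uses the default precision chosen above

void main() {
  gl_FragColor = u_color;
}

On the application side you can also query the actual supported precision with gl.getShaderPrecisionFormat(gl.FRAGMENT_SHADER, gl.HIGH_FLOAT) and switch to a simpler shader when highp is unavailable.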
Examples of places you generally need highp: Phong-shaded point lights usually need highp. So, for example, you might use only directional lights on a system that doesn't support highp, or you might use only directional lights on mobile for performance.
Is there any performance difference?
Yes, but as it says above, an implementation is free to use a higher precision. So if you use mediump on a desktop GPU you won't see any perf difference, since it's really using highp anyway. On mobile you will see a perf difference, at least in 2019. You may also see where your shaders really needed highp.
Here is a Phong shader set to use mediump (sketched below). On desktop, since mediump is actually highp, it works.
On mobile, where mediump is actually mediump, it breaks.
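The original answer embedded live shader snippets; the following is only a minimal sketch of the kind of fragment shader it refers to, with made-up names, not the author's exact code. On mobile, mediump is typically implemented as a 16-bit half float with a maximum magnitude of roughly 65504, so large intermediate values overflow:

precision mediump float;              // everything below is mediump on mobile

uniform vec3 u_lightWorldPosition;    // hypothetical names for illustration
varying vec3 v_worldPosition;
varying vec3 v_normal;

void main() {
  vec3 toLight = u_lightWorldPosition - v_worldPosition;
  // length() squares the components internally; with world-space coordinates
  // of a few hundred units the intermediates can exceed the ~65504 range of a
  // 16-bit float, producing NaNs/black pixels or heavy banding.
  float dist = length(toLight);
  float diffuse = max(dot(normalize(v_normal), toLight / dist), 0.0);
  gl_FragColor = vec4(vec3(diffuse) / (dist * dist), 1.0);
}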
An example where mediump would be fine, at least in the fragment shader, is most 2D games.
I'm porting a DirectX HLSL shader to WebGL 2, but I cannot find the equivalent of a StructuredBuffer.
I can only see constant buffers, which are limited to 64 KB and require alignment. Should I split the StructuredBuffers into constant buffers?
The more-or-less equivalent in OpenGL land of D3D's StructuredBuffers are Shader Storage Buffer Objects (SSBOs). However, WebGL 2.0 is based on OpenGL ES 3.0, which does not include SSBOs.
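Since WebGL 2.0 only exposes uniform buffers, the usual workaround is to split the data across std140 uniform blocks (each limited by the implementation's MAX_UNIFORM_BLOCK_SIZE, at least 16 KB), or to pack it into a texture and read it with texelFetch. A rough GLSL ES 3.00 sketch of the uniform-block route, with made-up names:

#version 300 es
precision highp float;

// std140 block mirrors a fixed-size slice of the original StructuredBuffer;
// anything larger has to be split across several blocks or moved to a texture.
struct Particle {
  vec4 position;   // vec4 members avoid std140 padding surprises
  vec4 velocity;
};

layout(std140) uniform Particles {
  Particle u_particles[256];   // array size chosen to stay under the block limit
};

out vec4 o_color;

void main() {
  Particle p = u_particles[0];
  o_color = vec4(p.position.xyz, 1.0);
}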
I have a high number of variables in my fragment shader (30 uniforms, mostly vec4, plus about 20 local variables of type vec3, float, and vec4). It runs just fine on an iPhone 5S, but I have a serious problem on an iPhone 4: GPU time is 1 s per frame, and 98% of that time is shader run time.
According to Apple's OpenGL ES documentation:
OpenGL ES limits the number of each variable type you can use in a vertex or fragment shader. The OpenGL ES specification doesn't require implementations to provide a software fallback when these limits are exceeded; instead, the shader simply fails to compile or link. When developing your app you must ensure that no errors occur during shader compilation, as shown in Listing 10-1.
But I don't quite understand this: do they provide a software fallback or not? I get no errors during shader compilation or linking, and yet performance is poor. I have commented almost everything out, leaving just 2 texture lookups and the directional light computation, and changed the other functions to simply return vec4(0,0,0,0).
The limit on uniforms is much higher than that. GLSL ES (2.0) requires 512 scalar uniform components per vertex shader (though ES describes this in terms of the number of vectors -- 128). Assuming all 30 of your uniforms were vec4, you would still have enough storage for 98 more.
The relevant limits are gl_MaxVertexUniformVectors and gl_MaxFragmentUniformVectors. Implementations are only required to support 16 in the fragment shader, but most will far exceed the minimum - check the values yourself. Query the limits from GL ES rather than trying to figure them out in your GLSL program with some Frankenstein shader code ;)
OpenGL ES 2.0 Shading Language - Appendix A: Limitations - p. 113
const mediump int gl_MaxVertexAttribs = 8;
const mediump int gl_MaxVertexUniformVectors = 128;
const mediump int gl_MaxVaryingVectors = 8;
const mediump int gl_MaxVertexTextureImageUnits = 0;
const mediump int gl_MaxCombinedTextureImageUnits = 8;
const mediump int gl_MaxTextureImageUnits = 8;
const mediump int gl_MaxFragmentUniformVectors = 16;
const mediump int gl_MaxDrawBuffers = 1;
In fact, it would be a good idea to query all of the GLSL program / shader limits just to get a better idea of the constraints you need to work under for your target software/hardware. It is better to plan ahead than to wait to address these things until your program blows up.
As for software fallbacks, I doubt it. This is an embedded environment; there is not much need for such a thing. On a PC/Mac, implementations usually ship with a reference software implementation, mostly for testing purposes, and individual components may sometimes fall back to software to overcome hardware limitations; that is necessary because of the wide variety of hardware in Apple's Mac line alone. But when you are writing an app specifically for a single hardware specification, it is generally acceptable to fail completely if you try to do something that exceeds the limitations (which you are expected to be familiar with).
I'm trying to pass a simple float value from the vertex to the fragment shader. How can I pass it "as is", without interpolation?
On desktop I could use a flat varying to disable interpolation; is there something similar in OpenGL ES, or is the only way through a texture?
GLSL ES 2.0 does not support the flat keyword, so the only way is to use the same float value in all of the triangle's vertices.
The same answer was given here:
In opengl es 2, is there a way to prevent interpolation of varyings
GLSL ES 2.0 does not support the flat interpolation qualifier, just as it does not support integral vertex shader output variables.
Compare the OpenGL ES 2.0 Specification and the OpenGL ES 3.0 Specification.
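A sketch of that workaround: duplicate the per-primitive value into every vertex of the triangle on the CPU side, then pass it through an ordinary varying. Since all three vertices carry the same number, interpolation leaves it unchanged (the names here are illustrative):

// Vertex shader
attribute vec4 a_position;
attribute float a_primitiveId;   // same value stored for all 3 vertices of a triangle
varying float v_primitiveId;

void main() {
  v_primitiveId = a_primitiveId; // interpolating identical values yields that value
  gl_Position = a_position;
}

// Fragment shader
precision mediump float;
varying float v_primitiveId;

void main() {
  gl_FragColor = vec4(vec3(v_primitiveId / 255.0), 1.0);
}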
It seems that in HLSL I can, but don't have to, provide the uniform keyword for variables which come from the application. Right?
Why is that so?
In HLSL global variables are considered uniform by default.
Similarly, a variable coming out of the vertex shader stage, for example, is implicitly varying (HLSL doesn't need this keyword at all!).
Note that the GLSL keywords uniform/varying are inherited from RSL (the RenderMan Shading Language).
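For comparison, GLSL (ES 1.00 here) makes both storage qualifiers explicit, whereas HLSL infers them from a variable being global or a stage output; a minimal GLSL sketch with illustrative names:

// GLSL vertex shader: both qualifiers must be spelled out.
uniform mat4 u_modelViewProjection; // set by the application (implicit for HLSL globals)
attribute vec4 a_position;
attribute vec2 a_texCoord;
varying vec2 v_texCoord;            // interpolated to the fragment stage
                                    // (in HLSL it is simply part of the vertex output)
void main() {
  v_texCoord = a_texCoord;
  gl_Position = u_modelViewProjection * a_position;
}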