How to pass non-interpolated data in OpenGL ES (GLSL) on iOS

I'm trying to pass a simple float value from the vertex to the fragment shader. How can I pass it "as is", without interpolation?
On desktop I could use a flat varying to disable interpolation; is there something similar in OpenGL ES, or is the only way through a texture?

GLSL ES 2.0 does not currently support the flat keyword, so the only way is to use the same float value at all of the triangle's vertices (sketched below).
The same answer was given here:
In opengl es 2, is there a way to prevent interpolation of varyings

GLSL ES 2.0 does not support the flat interpolation qualifier, just as it does not support integral vertex shader output variables.
Compare the OpenGL ES 2.0 Specification with the OpenGL ES 3.0 Specification.
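
As a rough sketch of that workaround in GLSL ES 2.0 (the names a_flatValue and u_mvp are made up for illustration), the application fills the attribute with the same number for all three vertices of each triangle, so the varying interpolates between identical values and stays constant across the triangle:

Vertex shader:

attribute vec4 a_position;
attribute float a_flatValue;   // the app writes the same value for all three vertices of a triangle
uniform mat4 u_mvp;
varying float v_flatValue;

void main() {
    v_flatValue = a_flatValue;  // identical at every vertex, so interpolation leaves it unchanged
    gl_Position = u_mvp * a_position;
}

Fragment shader:

precision mediump float;
varying float v_flatValue;      // effectively "flat" because every vertex wrote the same number

void main() {
    gl_FragColor = vec4(vec3(v_flatValue), 1.0);
}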

Related

iOS: how does Metal convert the fragment shader return value to MTKView.colorPixelFormat?

I have a question about the relationship between the fragment shader return value and MTKView.colorPixelFormat.
My fragment shader returns a float4, which is a 4 × 32-bit vector, and MTKView.colorPixelFormat is .bgr10_xr.
How do I convert the float4 to .bgr10_xr, or does this conversion happen automatically?
It should just work; Metal will do the conversion for you. Refer to section 7.7 of the Metal Shading Language Specification; there's an entry about 10-bit formats, 7.7.4 "Conversion for 10- and 11-bit Floating-Point Pixel Data Type".
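
As a minimal illustration (the struct and function names are hypothetical), the fragment function simply returns a float4 and Metal converts it to the color attachment's pixel format, .bgr10_xr included, when the value is written:

#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    float4 color;
};

// The returned float4 is converted by Metal to the color attachment's
// pixel format (e.g. MTKView.colorPixelFormat = .bgr10_xr); no manual
// packing is needed in the shader.
fragment float4 passthroughFragment(VertexOut in [[stage_in]]) {
    return in.color;
}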

How to specify LOD bias in Metal?

I'm rewriting in Metal an OpenGL filter from the Android version of the app I'm currently working on. It uses the following texture lookup function:
vec4 texture2D(sampler2D sampler, vec2 coord, float bias)
Assuming my filter kernel function looks like this:
float4 fname(sampler src) {
...
}
The texture lookup call would be the following:
src.sample(coord)
But how can I pass the bias parameter? (the sample function takes only 1 argument)
I'm afraid Core Image only supports 2D textures – no mipmapping or LOD selection. Only bilinear sampling is available.
If you need different LODs, you need to pass different samplers to your kernel and do the interpolation yourself.
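
One possible sketch of that (not from the answer; the kernel name and inputs are made up): hand the Core Image Metal kernel two pre-filtered copies of the image, e.g. the original and a downscaled one, and blend them by a lod parameter:

#include <CoreImage/CoreImage.h>
using namespace metal;

extern "C" { namespace coreimage {

// Hypothetical kernel: approximates an LOD bias by blending two
// pre-filtered copies of the image, since Core Image samplers are
// bilinear only.
float4 lodBlend(sampler fine, sampler coarse, float lod) {
    float4 a = fine.sample(fine.coord());
    float4 b = coarse.sample(coarse.coord());
    return mix(a, b, saturate(lod));
}

}}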

What's the equivalent of a (DX11) StructuredBuffer in WebGL 2?

I'm porting a DirectX HLSL shader to WebGL 2, but I cannot find the equivalent of a StructuredBuffer.
I can only see constant buffers, which are limited to 64 KB in size and require alignment. Should I split the StructuredBuffers into constant buffers?
The more-or-less equivalent of D3D's StructuredBuffers in OpenGL land is the Shader Storage Buffer Object (SSBO). However, WebGL 2.0 is based on OpenGL ES 3.0, which does not include SSBOs.
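
For what the question's fallback might look like, here is a minimal WebGL 2 (GLSL ES 3.00) sketch using a std140 uniform block in place of a StructuredBuffer; the struct, block name, and array size are invented, and the array must stay within GL_MAX_UNIFORM_BLOCK_SIZE (guaranteed to be at least 16 KB):

#version 300 es

// Hypothetical stand-in for a StructuredBuffer: a std140 uniform block.
struct ParticleData {
    vec4 positionAndRadius;   // xyz = position, w = radius
    vec4 velocityAndLife;     // xyz = velocity, w = remaining life
};

layout(std140) uniform Particles {
    ParticleData u_particles[256];   // 256 * 32 bytes = 8 KB, well under the limit
};

void main() {
    // Index per instance, the way a StructuredBuffer would be indexed in HLSL.
    ParticleData p = u_particles[gl_InstanceID];
    gl_Position = vec4(p.positionAndRadius.xyz, 1.0);
}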

Equivalent of OpenGL precision qualifiers (lowp, mediump, highp) in iOS Metal

In OpenGL ES it is possible to set the precision of uniforms and attributes using lowp/mediump/highp. Is there something like this in Metal?
The Metal Shading Language supports the half data type (see section 2.1 of the spec). It's defined there as:
A 16-bit floating-point. The half data type must conform to the IEEE 754 binary16 storage format.
This makes it pretty much equivalent to mediump.
There isn't really an equivalent to lowp in Metal. However, that's no real loss, because I believe Metal-capable iOS GPUs don't benefit from lowp anyway and just run lowp operations at mediump precision.
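
A small illustration (the function and struct names are invented): declaring interpolants, arithmetic, and even the fragment return value as half keeps the work at roughly mediump precision:

#include <metal_stdlib>
using namespace metal;

struct VertexOut {
    float4 position [[position]];
    half4  color;             // half (16-bit float) is roughly Metal's "mediump"
};

// Hypothetical fragment function doing its math in half precision; Metal
// converts the half4 result to the attachment's pixel format on write.
fragment half4 tintFragment(VertexOut in [[stage_in]],
                            constant half4 &tint [[buffer(0)]])
{
    return in.color * tint;
}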

Metal Compute Kernel vs Fragment Shader

Metal supports kernel functions in addition to the standard vertex and fragment functions. I found a Metal kernel example that converts an image to grayscale.
What exactly is the difference between doing this in a kernel vs fragment? What can a compute kernel do (better) that a fragment shader can't and vice versa?
Metal has four different types of command encoders:
MTLRenderCommandEncoder
MTLComputeCommandEncoder
MTLBlitCommandEncoder
MTLParallelRenderCommandEncoder
If you're just doing graphics programming, you're probably most familiar with MTLRenderCommandEncoder. That is where you set up your vertex and fragment shaders; it is optimized to handle a lot of draw calls and object primitives.
Kernel shaders are primarily used with the MTLComputeCommandEncoder. I think the reason a kernel shader and a compute encoder were used for the image-processing example is that you're not drawing any primitives, as you would be with the render command encoder. Even though both approaches work on graphics data, in this instance the kernel is simply modifying color data on a texture rather than calculating the depth of multiple objects on a screen.
The compute command encoder is also more easily set up to do parallel computing using threads:
https://developer.apple.com/reference/metal/mtlcomputecommandencoder
So if your application wants to use many parallel threads to modify data, it's easier to do that with this command encoder than with the render command encoder.
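
For reference, a grayscale compute kernel along the lines of the example mentioned in the question might look like this (the texture bindings and the Rec. 601 luminance weights are just one common choice):

#include <metal_stdlib>
using namespace metal;

// Minimal grayscale kernel: each thread reads one pixel, converts it to
// luminance, and writes it back out. It is dispatched via
// MTLComputeCommandEncoder with a grid matching the texture size.
kernel void grayscaleKernel(texture2d<float, access::read>  inTexture  [[texture(0)]],
                            texture2d<float, access::write> outTexture [[texture(1)]],
                            uint2 gid [[thread_position_in_grid]])
{
    // Guard against threads outside the texture when the grid is rounded up
    // to a multiple of the threadgroup size.
    if (gid.x >= outTexture.get_width() || gid.y >= outTexture.get_height()) {
        return;
    }
    float4 color = inTexture.read(gid);
    float gray = dot(color.rgb, float3(0.299, 0.587, 0.114));  // Rec. 601 weights
    outTexture.write(float4(float3(gray), color.a), gid);
}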

Resources