I'm trying to convert my OpenGL shaders into Metal shaders for converting a 10-bit YUV texture to an RGBA texture.
With OpenGL I was using the GLSL code from this link.
I did the conversion, but the result is not the one I expected.
I think it is because I use
tex2d.sample(sampler2d, x, y).rgba;
instead of
texelFetch(tex, ivec2(sourceColumnIndexY, texcoordDenorm.y)
Is there any equivalent to texelFetch in Metal?
You don't really explain in what way "the result is not the one I expected". You also didn't explain things like what Metal pixel format you're using for the input texture.
Anyway, the Metal function that corresponds to texelFetch() is simply the read() member function of the texture types. For example, texture2d<T> has these two member functions:
Tv read(uint2 coord, uint lod = 0) const
Tv read(ushort2 coord, ushort lod = 0) const
where Tv is the 4-component vector whose component type is T.
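For example, here is a minimal compute kernel using read() with unnormalized texel coordinates (a sketch only: the kernel name, texture bindings, and the assumption of a single-channel luma texture are placeholders, not your actual setup):

#include <metal_stdlib>
using namespace metal;

kernel void copyLuma(texture2d<float, access::read>  yTexture   [[texture(0)]],
                     texture2d<float, access::write> outTexture [[texture(1)]],
                     uint2 gid [[thread_position_in_grid]])
{
    if (gid.x >= outTexture.get_width() || gid.y >= outTexture.get_height()) {
        return;
    }

    // read() takes integer texel coordinates and does no filtering,
    // just like texelFetch() in GLSL
    float y = yTexture.read(gid).r;

    outTexture.write(float4(y, y, y, 1.0), gid);
}

Unlike sample(), read() needs no sampler and ignores filtering and addressing state, which is usually what you want for per-texel format conversion such as YUV to RGB.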
I'm trying to understand color within the context of a Metal fragment (or compute) shader.
My understanding is that within the context of a Metal shader, any color values are always linear. Whatever texture is attached to a fragment or compute function, Metal will undo any non-linear transfer function (gamma) on the way into the shader, and re-apply it on the way out.
With this in mind, if within a shader I return an approximately linear middle-grey value of around 22.25%, then when it is rendered to the screen with MetalKit via a simple .bgra8Unorm texture, I would expect a non-linear sRGB reading of around 128,128,128.
fragment float4 fragment_shader(
TextureMappingVertex in [[stage_in]]
) {
float middleGrey = float(0.2225);
return float4(middleGrey, middleGrey, middleGrey, 1);
}
But in fact I get an output of 57,57,57, which is what I would expect if there were no conversion to and from linear color space within the shader.
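For reference, the arithmetic behind those two readings (a rough sketch, using the common 2.2-gamma approximation of the sRGB transfer function):

0.2225 * 255 ≈ 57 (the value written out with no encoding at all)
pow(0.2225, 1.0 / 2.2) * 255 ≈ 129 (the value written out with sRGB encoding, i.e. roughly the 128 I expected)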
What am I missing here?
On the one hand, this certainly seems more intuitive, but it goes against what I thought were the rules for Metal shaders in that they are always in linear space.
I'm writing some Metal code that draws a skybox. I'd like the depth output by the vertex shader to always be 1, but of course, I'd also like the vertices to be drawn in their correct positions.
In OpenGL, you could use glDepthRange(1,1) to have the depth always be written out as 1.0 in this scenario. I don't see anything similar in Metal. Does such a thing exist? If not, is there another way to always output 1.0 as the depth from the vertex shader?
What I'm trying to accomplish is drawing the scenery first and then drawing the skybox to avoid overdraw. If I just set the z component of the outgoing vertex to 1.0, then the geometry doesn't draw correctly, obviously. What are my options here?
Looks like you can specify the fragment shader output (return value) format roughly like so:
struct MyFragmentOutput {
    // color attachment 0
    float4 color_att [[color(0)]];
    // depth attachment
    float depth_att [[depth(depth_argument)]];
};
as seen in the section "Fragment Function Output Attributes" on page 88 of the Metal Shading Language Specification (https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf). Looks like any is a working value for depth_argument (see here for more: In metal how to clear the depth buffer or the stencil buffer?)
Then you would set your fragment shader to use that format
fragment MyFragmentOutput interestingShaderFragment
// instead of: fragment float4 interestingShaderFragment
and finally just write to the depth buffer in your fragment shader:
MyFragmentOutput out;
out.color_att = float4(rgb_color_here, 1.0);
out.depth_att = 1.0;
return out;
Tested and it worked.
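Putting those pieces together, a minimal sketch of the whole fragment function (the colour is just a placeholder; a real skybox shader would sample a cube map instead):

#include <metal_stdlib>
using namespace metal;

struct MyFragmentOutput {
    float4 color_att [[color(0)]];
    float  depth_att [[depth(any)]];
};

fragment MyFragmentOutput interestingShaderFragment()
{
    MyFragmentOutput out;
    out.color_att = float4(0.5, 0.7, 1.0, 1.0); // placeholder sky colour
    out.depth_att = 1.0;                        // force the skybox to the far plane
    return out;
}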
In OpenGL, depth buffer values are calculated based on the near and far clipping planes of the scene. (Reference: Getting the true z value from the depth buffer)
How does this work in WebGL? My understanding is that WebGL is unaware of my scene's far and near clipping planes. The near and far clipping planes are used to calculate my projection matrix, but I never tell WebGL what they are explicitly so it can't use them to calculate depth buffer values.
How does WebGL set values in the depth buffer when my scene is rendered?
WebGL (like modern OpenGL and OpenGL ES) gets the depth value from the value you supply to gl_Position.z in your vertex shader (though you can also write directly to the depth buffer using certain extensions, but that's far less common).
There is no scene in WebGL or modern OpenGL. That concept of a scene is part of legacy OpenGL, left over from the early 90s and long since deprecated. It doesn't exist in OpenGL ES (the OpenGL that runs on Android, iOS, ChromeOS, Raspberry Pi, WebGL, etc.).
Modern OpenGL and WebGL are just rasterization APIs. You write shaders, which are small functions that run on the GPU. You provide those shaders with data through attributes (per-iteration data), uniforms (global variables), textures (2D/3D arrays), and varyings (data passed from vertex shaders to fragment shaders).
The rest is up to you and what your supplied shader functions do. Modern OpenGL and WebGL are, for all intents and purposes, just generic computing engines with certain limits. Getting them to do anything is up to you and the shaders you supply.
See webglfundamentals.org for more.
In the Q&A you linked to, it's the programmer-supplied shaders that use frustum math to decide how to set gl_Position.z. That frustum math is supplied by the programmer. WebGL/GL doesn't care how gl_Position.z is computed, only that it's a value between -1.0 and +1.0, so how to take a value from the depth buffer and go back to Z is solely up to how the programmer decided to calculate it in the first place.
This article covers the most commonly used math for setting gl_Position.z when rendering 3d with WebGL/OpenGL. Based on your question though I'd suggest reading the preceding articles linked at the beginning of that one.
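For concreteness, the commonly used setup looks roughly like this (a sketch; the attribute and uniform names are placeholders, and the near/far planes only enter through the projection matrix built on the CPU):

attribute vec4 a_position;
uniform mat4 u_projection; // perspective matrix built from fov, aspect, near, far in JavaScript
uniform mat4 u_modelView;

void main() {
  // whatever ends up in gl_Position.z (after division by gl_Position.w)
  // is what the depth buffer ultimately stores
  gl_Position = u_projection * u_modelView * a_position;
}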
As for what actual values get written to the depth buffer, it's
ndcZ = gl_Position.z / gl_Position.w;
depthValue = (far - near) / 2 * ndcZ + (far + near) / 2;
near and far default to 0 and 1 respectively, though you can set them with gl.depthRange. Assuming they are 0 and 1, then
ndcZ = gl_Position.z / gl_Position.w;
depthValue = 0.5 * ndcZ + 0.5;
That depthValue would then be in the 0 to 1 range and converted to whatever bit depth the depth buffer is. It's common to have a 24-bit depth buffer, so
bitValue = depthValue * (2^24 - 1)
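Going the other way is sometimes useful (that's what the linked "true z value" question is about). A rough sketch, assuming the default gl.depthRange of [0, 1] and a standard perspective projection with planes zNear/zFar:

ndcZ = depthValue * 2.0 - 1.0;
eyeSpaceDistance = 2.0 * zNear * zFar / (zFar + zNear - ndcZ * (zFar - zNear));

The second line only holds for that particular projection math; a different gl_Position.z formula needs a different inverse.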
I'm trying to render a forest scene for an iOS app with OpenGL. To make it a little bit nicer, I'd like to implement a depth effect in the scene. However, I need a linearized depth value from the OpenGL depth buffer to do so. Currently I am using a computation in the fragment shader (which I found here).
Therefore my terrain fragment shader looks like this:
#version 300 es
precision mediump float;

// near/far clipping plane distances, supplied by the application
uniform float nearz;
uniform float farz;

layout(location = 0) out lowp vec4 out_color;

float linearizeDepth(float depth) {
    return 2.0 * nearz / (farz + nearz - depth * (farz - nearz));
}

void main(void) {
    float depth = gl_FragCoord.z;
    float linearized = linearizeDepth(depth);
    out_color = vec4(linearized, linearized, linearized, 1.0);
}
However, this results in the following output:
As you can see, the "further" you get away, the more "stripy" the resulting depth value gets (especially behind the ship). If the terrain tile is close to the camera, the output is somewhat okay.
I even tried another computation:
float linearizeDepth(float depth) {
return 2.0 * nearz * farz / (farz + nearz - (2.0 * depth - 1.0) * (farz - nearz));
}
which resulted in values that were way too high, so I scaled them down by dividing:
float linearized = (linearizeDepth(depth) - 2.0) / 40.0;
Nevertheless, it gave a similar result.
So how do I achieve a smooth, linear transition between the near and the far plane, without any stripes? Has anybody had a similar problem?
The problem is that you store non-linear values which are truncated, so when you read the depth values back later you get a choppy result, because you lose more accuracy the further you get from the znear plane. No matter what you evaluate afterwards, you will not obtain better results unless you do one of the following:
Lower the accuracy loss
You can change the znear, zfar values so they are closer together. Enlarge znear as much as you can, so the more accurate area covers more of your scene.
Another option is to use more bits per depth value (16 bits is too low). I am not sure whether you can do this in OpenGL ES, but in standard OpenGL you can use 24 or 32 bits on most cards.
Use a linear depth buffer
That means storing linear values in the depth buffer. There are two ways. One is to compute the depth so that, after all the underlying operations, you end up with a linear value.
The other is to use a separate texture/FBO and store the linear depths directly into it. The problem is that you cannot use its contents in the same rendering pass.
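A sketch of that second option: render into a separate FBO whose colour attachment holds the linear depth (znear/zfar are uniforms you supply, and the varying comes from a vertex shader like the one in [Edit1] below):

varying float depth; // passed from the vertex shader
uniform float znear,zfar;
void main()
{
// write the normalized linear depth as the colour of this separate render target
float z=(depth-znear)/(zfar-znear);
gl_FragColor=vec4(z,z,z,1.0);
}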
[Edit1] Linear Depth buffer
To linearize the depth buffer itself (not just the values taken from it), try this:
Vertex:
varying float depth;
void main()
{
vec4 p=ftransform();
depth=p.z;
gl_Position=p;
gl_FrontColor = gl_Color;
}
Fragment:
uniform float znear,zfar;
varying float depth; // original z in camera space, used instead of gl_FragCoord.z because that is already truncated
void main(void)
{
float z=(depth-znear)/(zfar-znear);
gl_FragDepth=z;
gl_FragColor=gl_Color;
}
Non-linear depth buffer linearized on the CPU side (as you do):
Linear depth buffer on the GPU side (as you should):
The scene parameters are:
// 24 bits per Depth value
const double zang = 60.0;
const double znear= 0.01;
const double zfar =20000.0;
and a simple rotated plate covering the whole depth field of view. Both images were taken with glReadPixels(0,0,scr.xs,scr.ys,GL_DEPTH_COMPONENT,GL_FLOAT,zed); and transformed to a 2D RGB texture on the CPU side, then rendered as a single QUAD covering the whole screen with unit matrices ...
Now to obtain the original depth value from the linear depth buffer, you just do this:
z = znear + (zfar-znear)*depth_value;
I used the ancient stuff just to keep this simple, so port it to your profile ...
Beware that I do not code in OpenGL ES or iOS, so I hope I did not miss something related to that (I am used to Windows and PC).
To show the difference I added another rotated plate to the same scene (so they intersect) and used colored output (no depth readback anymore):
As you can see, the linear depth buffer is much, much better (for scenes covering a large part of the depth FOV).
I'm trying to implement the technique described at : Compositing Images with Depth.
The idea is to use an existing texture (loaded from an image) as a depth mask, to basically fake 3D.
The problem I face is that glDrawPixels is not available in OpenGL ES. Is there a way to accomplish the same thing on the iPhone?
The depth buffer is more obscured than you think in OpenGL ES; not only is glDrawPixels absent but gl_FragDepth has been removed from GLSL. So you can't write a custom fragment shader to spool values to the depth buffer as you might push colours.
The most obvious solution is to pack your depth information into a texture and to use a custom fragment shader that does a depth comparison between the fragment it generates and one looked up from a texture you supply. Only if the generated fragment is closer is it allowed to proceed. The normal depth buffer will catch other cases of occlusion and — in principle — you could use a framebuffer object to create the depth texture in the first place, giving you a complete on-GPU round trip, though it isn't directly relevant to your problem.
Disadvantages are that drawing will cost you an extra texture unit and textures use integer components.
EDIT: for the purposes of keeping the example simple, suppose you were packing all of your depth information into the red channel of a texture. That'd give you a really low precision depth buffer, but just to keep things clear, you could write a quick fragment shader like:
void main()
{
// write a value to the depth map
gl_FragColor = vec4(gl_FragCoord.w, 0.0, 0.0, 1.0);
}
That stores the depth in the red channel. So you've partially recreated the old depth texture extension: you'll have an image with brighter red in pixels that are closer and darker red in pixels that are further away. I think that in your question, you'd actually load this image from disk.
To then use the texture in a future fragment shader, you'd do something like:
uniform sampler2D depthMap;
void main()
{
// read the value stored for this position from the depth map
// (gl_FragCoord.xy is in pixels, so in a real shader you'd divide by the viewport size first)
lowp vec4 colourFromDepthMap = texture2D(depthMap, gl_FragCoord.xy);
// discard the current fragment if it is less close than the stored value
if(colourFromDepthMap.r > gl_FragCoord.w) discard;
... set gl_FragColor appropriately otherwise ...
}
EDIT2: you can see a much smarter mapping from depth to an RGBA value here. To tie in directly to that document, OES_depth_texture definitely isn't supported on the iPad or on the third generation iPhone. I've not run a complete test elsewhere.
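One common variant of that kind of mapping, as a sketch (this is the widely used fract/dot packing trick; it isn't necessarily the exact scheme the linked document uses):

// pack a depth value in [0, 1) into four 8-bit channels
vec4 packDepth(float depth)
{
    vec4 enc = fract(depth * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

// recover the depth value from the packed RGBA
float unpackDepth(vec4 rgba)
{
    return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}

Each successive channel holds finer fractional bits of the depth, so an ordinary 8-bit-per-channel RGBA texture can approximate far more depth precision than the single red channel used above.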