Does Metal support anything like glDepthRange()?

I'm writing some Metal code that draws a skybox. I'd like the depth output by the vertex shader to always be 1, but of course I'd also like the vertices to be drawn in their correct positions.
In OpenGL, you could use glDepthRange(1,1) to have the depth always be written out as 1.0 in this scenario. I don't see anything similar in Metal. Does such a thing exist? If not, is there another way to always output 1.0 as the depth from the vertex shader?
What I'm trying to accomplish is drawing the scenery first and then drawing the skybox to avoid overdraw. If I just set the z component of the outgoing vertex to 1.0, then the geometry doesn't draw correctly, obviously. What are my options here?

Looks like you can specify the fragment shader output (return value) format roughly so:
struct MyFragmentOutput {
    // color attachment 0
    float4 color_att [[color(0)]];
    // depth attachment
    float depth_att [[depth(depth_argument)]];
};
as seen in the section "Fragment Function Output Attributes" on page 88 of the Metal Shading Language Specification (https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf). It looks like any is a working value for depth_argument (see here for more: In metal how to clear the depth buffer or the stencil buffer?)
Then you would set your fragment shader to use that return type:
fragment MyFragmentOutput interestingShaderFragment
// instead of: fragment float4 interestingShaderFragment
and finally just write to the depth buffer in your fragment shader:
MyFragmentOutput out;
out.color_att = float4(rgb_color_here, 1.0);
out.depth_att = 1.0;
return out;
Tested and it worked.
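An alternative that avoids writing depth from the fragment function at all (and so keeps early depth testing) is to set z = w in the vertex shader: after the perspective divide, the depth z/w is then exactly 1.0 while x and y still project normally. A sketch with assumed type and uniform names:

```metal
#include <metal_stdlib>
using namespace metal;

// Assumed types, for illustration only:
struct SkyboxVertexIn {
    float3 position [[attribute(0)]];
};

struct SkyboxVertexOut {
    float4 position [[position]];
    float3 texCoord;
};

struct Uniforms {
    float4x4 viewProjectionMatrix;
};

vertex SkyboxVertexOut skybox_vertex(SkyboxVertexIn in [[stage_in]],
                                     constant Uniforms &uniforms [[buffer(1)]])
{
    SkyboxVertexOut out;
    float4 pos = uniforms.viewProjectionMatrix * float4(in.position, 1.0);
    out.position = pos.xyww;   // z becomes w, so NDC depth is w/w == 1.0
    out.texCoord = in.position;
    return out;
}
```

With this approach, draw the skybox last with a depth compare function of lessEqual, so that depth 1.0 still passes against a depth buffer cleared to 1.0.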

Related

Apple Metal Shader Language (MSL) Always Linear Color?

I'm trying to understand color within the context of a metal fragment (or compute) shader.
My understanding is that within the context of a Metal shader, color values are always linear. Whatever texture is attached to a fragment or compute function, Metal will apply the inverse of any non-linear transfer function (gamma) on the way into the shader, and apply it again on the way out.
With this in mind, if within the context of a shader I return an approximate linear middle-grey value of around 22.25%, then when rendered to the screen using MetalKit via a simple .bgra8Unorm texture, I would expect a non-linear sRGB reading of around 128,128,128.
fragment float4 fragment_shader(TextureMappingVertex in [[stage_in]]) {
    float middleGrey = float(0.2225);
    return float4(middleGrey, middleGrey, middleGrey, 1);
}
But in fact I get an output of 57,57,57, which is what I would expect if there were no conversion to and from linear color space within the shader.
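The two readings can be checked against the scalar sRGB transfer function. A sketch in plain Python (no Metal involved; the 0.0031308 cutoff and 2.4 exponent come from the sRGB specification, IEC 61966-2-1):

```python
def srgb_encode(linear):
    """Encode a linear [0, 1] value to non-linear sRGB."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

middle_grey = 0.2225

# If an sRGB encode happened on the way out, ~22.25% linear would
# land near the expected middle grey:
print(round(srgb_encode(middle_grey) * 255))  # 130

# With no conversion at all, the raw value is written straight out:
print(round(middle_grey * 255))  # 57
```

So 57,57,57 is exactly what a raw, unconverted write produces, while an encoded write would land near 130 (close to the expected 128).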
What am I missing here?
On the one hand, this certainly seems more intuitive, but it goes against what I thought were the rules for Metal shaders in that they are always in linear space.

Reading variable from vertex shader for rendering in webgl

I want to implement a collision detector between a moving and a static object. The way I am thinking of doing so is by checking in vertex shader every time if any vertex of the moving object intersects with the position of the static object.
By doing the above, I would get the point of collision in the vertex shader, but I want to use the variable for rendering purposes in the js file.
Is there a way to do it.
In WebGL 1 you cannot directly read any data from a vertex shader. The best you can do is use the vertex shader to affect the pixels rendered by the fragment shader. So you could, for example, set gl_Position so that nothing is rendered if a vertex fails your test and a single pixel is rendered if the test passes. Or you can set some varying that produces certain colors based on your test results. Then you can either read the pixels with gl.readPixels, or you can pass the texture you rendered to into another shader in a different draw call.
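The first idea might be sketched like this (attribute and uniform names are assumed): draw one point per tested vertex, and move vertices that fail the test outside clip space so they never produce a pixel:

```glsl
attribute vec3 a_position;
uniform vec3 u_staticCenter;   // position of the static object
uniform float u_radius;        // collision distance
varying vec4 v_color;

void main() {
    bool hit = distance(a_position, u_staticCenter) < u_radius;

    // colliding vertices render at the center; the rest land outside
    // the clip volume and are discarded before rasterisation
    gl_Position = hit ? vec4(0.0, 0.0, 0.0, 1.0)
                      : vec4(2.0, 2.0, 2.0, 1.0);
    gl_PointSize = 1.0;
    v_color = hit ? vec4(1.0) : vec4(0.0);
}
```

After drawing, gl.readPixels on the target pixel tells your JS code whether any vertex collided.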
In WebGL 2 you can use transform feedback, which allows a vertex shader to write its varyings to a buffer. You can then use that buffer in other draw calls, or read its contents with gl.getBufferSubData.
In WebGL 2 you can also use occlusion queries, which means you can try to draw something and test whether it was actually drawn or whether the depth buffer prevented it from being drawn.

Modifying Individual Pixels with SKShader

I am attempting to write a fragment shader for the app I am working on. I pass my uniform into the shader, which works, but it applies to the entire object. I want to be able to modify the object pixel by pixel. So my code now is:
let shader = SKShader( fileNamed: "Shader.fsh" );
shader.addUniform( SKUniform( name: "value", float: 1.0 ) );
m_image.shader = shader;
Here the uniform "value" will be the same for all pixels. But, for example, let's say I want to change "value" to "0.0" after a certain amount of pixels are drawn. So for example....
shader.addUniform( SKUniform( name: "value", float: 1.0 ) );
// 100 pixels are drawn
shader.addUniform( SKUniform( name: "value", float: 0.0 ) );
Is this even possible with SKShader? Would this have to be done in the shader source?
One idea I was thinking of was using an array uniform but it doesn't appear that SKShader allows this.
Thanks for any help in advance.
In general, the word uniform means unchanging — something that's the same in all cases or situations. Such is the way of shader uniforms: even though the shader code runs independently (and in parallel) for each pixel in a rendered image, the value of a uniform variable input to the shader is the same across all pixels.
While you could, in theory, pass an array of values into the shader representing the colors for every pixel, that's essentially the same as passing an image (or just setting a texture image on the sprite)... at that point you're using a shader for nothing.
Instead, you typically want your GLSL(-ish*) code, if it's doing anything based on pixel location, to find out the pixel coordinates it's writing to and calculate a result based on that. In a shader for SKShader, you get pixel coordinates from the vec2 v_tex_coord shader variable.
(This looks like a decent tutorial (with links to others) for getting started on SpriteKit shaders. If you follow other tutorials or shader code libraries for help doing cool stuff with pixel shaders, you'll find ideas and algorithms you can reuse, but the ways they find the current output pixel will be different. In a shader for SpriteKit, you can usually safely replace gl_FragCoord with v_tex_coord.)
* SKShader doesn't use actual GLSL per se; it uses a subset of GLSL that's automatically translated to appropriate GPU code for the device/renderer in use.
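For example, here is a minimal SKShader-style fragment sketch that applies the "value" behaviour only to part of the sprite based on the pixel's position (the 0.5 cutoff is an assumption standing in for "after a certain amount of pixels"; u_texture and v_tex_coord are supplied by SpriteKit):

```glsl
void main() {
    vec4 color = texture2D(u_texture, v_tex_coord);

    // left half of the sprite gets value = 1.0, right half gets 0.0
    float value = v_tex_coord.x < 0.5 ? 1.0 : 0.0;

    gl_FragColor = vec4(color.rgb * value, color.a);
}
```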

HLSL correct pixel position in deferred shading

In OpenGL, I am using the following in my pixel shaders to get the correct pixel position, which is used to sample diffuse, normal, position gbuffer textures:
ivec2 texcoord = ivec2(vec2(textureSize(unifDiffuseTexture, 0)) * (gl_FragCoord.xy / UnifAmbientPass.mScreenSize));
So far, this is what I do in HLSL:
float2 texcoord = input.mPosition.xy / gScreenSize;
Most notably, in GLSL I am using textureSize() to get accurate pixel position. I am wondering, is there a HLSL equivalent to textureSize()?
In HLSL, you have GetDimensions.
But it may be costlier than reading the size from a constant buffer, even if it looks easier to use at first for quick tests.
You also have an alternative using SV_Position and Load: just use the xy as a uint2, and you remove the need for a user interpolator carrying a texture coordinate to index the screen.
Here is the full documentation of a Texture Object.
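Both options might be sketched like this (texture and register names are assumed):

```hlsl
Texture2D gDiffuseTexture : register(t0);

float4 PS(float4 screenPos : SV_Position) : SV_Target
{
    // Option 1: query the texture size at runtime with GetDimensions
    uint width, height;
    gDiffuseTexture.GetDimensions(width, height);
    float2 texcoord = screenPos.xy / float2(width, height);

    // Option 2: skip normalised coordinates entirely and fetch the
    // texel directly; SV_Position.xy is already in pixels
    float4 diffuse = gDiffuseTexture.Load(int3(screenPos.xy, 0));
    return diffuse;
}
```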

Writing texture data onto depth buffer

I'm trying to implement the technique described at : Compositing Images with Depth.
The idea is to use an existing texture (loaded from an image) as a depth mask, to basically fake 3D.
The problem I face is that glDrawPixels is not available in OpenglES. Is there a way to accomplish the same thing on the iPhone?
The depth buffer is more obscured than you think in OpenGL ES; not only is glDrawPixels absent but gl_FragDepth has been removed from GLSL. So you can't write a custom fragment shader to spool values to the depth buffer as you might push colours.
The most obvious solution is to pack your depth information into a texture and to use a custom fragment shader that does a depth comparison between the fragment it generates and one looked up from a texture you supply. Only if the generated fragment is closer is it allowed to proceed. The normal depth buffer will catch other cases of occlusion and — in principle — you could use a framebuffer object to create the depth texture in the first place, giving you a complete on-GPU round trip, though it isn't directly relevant to your problem.
Disadvantages are that drawing will cost you an extra texture unit and textures use integer components.
EDIT: for the purposes of keeping the example simple, suppose you were packing all of your depth information into the red channel of a texture. That'd give you a really low precision depth buffer, but just to keep things clear, you could write a quick fragment shader like:
void main()
{
    // write a value to the depth map
    gl_FragColor = vec4(gl_FragCoord.w, 0.0, 0.0, 1.0);
}
This stores depth in the red channel. So you've partially recreated the old depth-texture extension: you'll have an image with brighter red in pixels that are closer and darker red in pixels that are further away. I think that in your question, you'd actually load this image from disk.
To then use the texture in a future fragment shader, you'd do something like:
uniform sampler2D depthMap;
uniform highp vec2 viewportSize; // needed to normalise gl_FragCoord.xy into [0, 1]

void main()
{
    // read a value from the depth map
    lowp vec3 colourFromDepthMap = texture2D(depthMap, gl_FragCoord.xy / viewportSize).rgb;

    // discard the current fragment if it is less close than the stored value
    if (colourFromDepthMap.r > gl_FragCoord.w) discard;

    ... set gl_FragColor appropriately otherwise ...
}
EDIT2: you can see a much smarter mapping from depth to an RGBA value here. To tie in directly to that document, OES_depth_texture definitely isn't supported on the iPad or on the third generation iPhone. I've not run a complete test elsewhere.
