HLSL correct pixel position in deferred shading - directx

In OpenGL, I am using the following in my pixel shaders to get the correct pixel position, which is used to sample diffuse, normal, position gbuffer textures:
ivec2 texcoord = ivec2(textureSize(unifDiffuseTexture, 0) * (gl_FragCoord.xy / UnifAmbientPass.mScreenSize));
So far, this is what I do in HLSL:
float2 texcoord = input.mPosition.xy / gScreenSize;
Most notably, in GLSL I am using textureSize() to get an accurate pixel position. I am wondering, is there an HLSL equivalent to textureSize()?

In HLSL, you have GetDimensions
But it may be costlier than reading the size from a constant buffer, even if it looks easier to use at first for quick tests.
There is also an alternative using SV_Position and Load: just use the xy as a uint2, and you remove the need for a user interpolator carrying a texture coordinate to index the screen.
Here is the full documentation of a texture object.
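As a rough sketch of both approaches in a deferred lighting pass (the texture name, register slot, and input struct below are illustrative, not from the question):

Texture2D gDiffuseTexture : register(t0);

struct PSInput
{
    float4 mPosition : SV_Position;   // already in pixel coordinates inside the pixel shader
};

float4 PS(PSInput input) : SV_Target
{
    // Option 1: query the texture size, the HLSL counterpart of textureSize()
    uint width, height;
    gDiffuseTexture.GetDimensions(width, height);
    float2 texcoord = input.mPosition.xy / float2(width, height);
    // texcoord could now be used with Sample(); reading the size from a constant
    // buffer instead would avoid the GetDimensions call

    // Option 2: fetch the texel directly with Load; no sampler and no screen-size constant needed
    float4 diffuse = gDiffuseTexture.Load(int3(input.mPosition.xy, 0));
    return diffuse;
}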

Related

How can I make my WebGL Coordinate System "Top Left" Oriented?

For computational efficiency, I use a fragment shader to implement a simple 2D metaballs algorithm. The data for the circles to render is top-left oriented.
I have everything working, except that the origin of WebGL's coordinate system (bottom-left) is giving me a hard time: Obviously, the rendered output is mirrored along the horizontal axis.
Following https://webglfundamentals.org/webgl/lessons/webgl-2d-rotation.html (and others), I tried to rotate things using a vertex shader. Without any success unfortunately.
What is the most simple way of achieving the reorientation of WebGL's coordinate system?
I'd appreciate any hints and pointers, thanks! :)
Please find a working (not working ;) ) example here:
https://codesandbox.io/s/gracious-fermat-znbsw?file=/src/index.js
Since you are using gl_FragCoord in your pixel shader, you can't do it from the vertex shader, because gl_FragCoord is in canvas coordinates but upside down. You could easily invert it in JavaScript in your pass-through to WebGL:
gl.uniform3fv(gl.getUniformLocation(program, `u_circles[${i}]`), [
  circles[i].x,
  canvas.height - circles[i].y - 1,
  circles[i].r
]);
If you want to do it in the shader and keep using gl_FragCoord, then you should pass the height of the canvas to the shader as a uniform and do the y conversion there with something like
vec2 screenSpace = vec2(gl_FragCoord.x, canvasHeight - gl_FragCoord.y - 1);
The -1 is because the coordinates start at 0.

Does Metal support anything like glDepthRange()?

I'm writing some Metal code that draws a skybox. I'd like for the depth output by the vertex shader to always be 1, but of course, I'd also like the vertices to be drawn in their correct positions.
In OpenGL, you could use glDepthRange(1,1) to have the depth always be written out as 1.0 in this scenario. I don't see anything similar in Metal. Does such a thing exist? If not, is there another way to always output 1.0 as the depth from the vertex shader?
What I'm trying to accomplish is drawing the scenery first and then drawing the skybox to avoid overdraw. If I just set the z component of the outgoing vertex to 1.0, then the geometry doesn't draw correctly, obviously. What are my options here?
Looks like you can specify the fragment shader output (return value) format roughly like so:
struct MyFragmentOutput {
    // color attachment 0
    float4 color_att [[color(0)]];
    // depth attachment
    float depth_att [[depth(depth_argument)]];
};
as seen in the section "Fragment Function Output Attributes" on page 88 of the Metal Shading Language Specification (https://developer.apple.com/metal/Metal-Shading-Language-Specification.pdf). Looks like any is a working value for depth_argument (see here for more: In metal how to clear the depth buffer or the stencil buffer?)
Then you would set your fragment shader to use that format
fragment MyFragmentOutput interestingShaderFragment
// instead of: fragment float4 interestingShaderFragment
and finally just write to the depth buffer in your fragment shader:
MyFragmentOutput out;
out.color_att = float4(rgb_color_here, 1.0);
out.depth_att = 1.0;
return out;
Tested and it worked.

How to correctly linearize depth in OpenGL ES in iOS?

I'm trying to render a forest scene for an iOS app with OpenGL. To make it a little bit nicer, I'd like to implement a depth effect into the scene. However I need a linearized depth value from the OpenGL depth buffer to do so. Currently I am using a computation in the fragment shader (which I found here).
Therefore my terrain fragment shader looks like this:
#version 300 es
precision mediump float;

uniform float nearz;
uniform float farz;

layout(location = 0) out lowp vec4 out_color;

float linearizeDepth(float depth) {
    return 2.0 * nearz / (farz + nearz - depth * (farz - nearz));
}

void main(void) {
    float depth = gl_FragCoord.z;
    float linearized = linearizeDepth(depth);
    out_color = vec4(linearized, linearized, linearized, 1.0);
}
However, this results in the following output:
As you can see, the "further" you get away, the more "stripy" the resulting depth value gets (especially behind the ship). If the terrain tile is close to the camera, the output is somewhat okay.
I even tried another computation:
float linearizeDepth(float depth) {
    return 2.0 * nearz * farz / (farz + nearz - (2.0 * depth - 1.0) * (farz - nearz));
}
which resulted in a way too high value so I scaled it down by dividing:
float linearized = (linearizeDepth(depth) - 2.0) / 40.0;
Nevertheless, it gave a similar result.
So how do I achieve a smooth, linear transition between the near and the far plane, without any stripes? Has anybody had a similar problem?
The problem is that you store non-linear values which are truncated, so when you read the depth values back later you get a choppy result, because you lose accuracy the farther you are from the znear plane. No matter what you evaluate, you will not obtain better results unless you:
Lower accuracy loss
You can change the znear and zfar values so they are closer together. Enlarge znear as much as you can so the more accurate area covers more of your scene.
Another option is to use more bits per depth buffer (16 bits is too low). Not sure if you can do this in OpenGL ES, but in standard OpenGL you can use 24 or 32 bits on most cards.
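For what it's worth, under OpenGL ES 3.0 (the version the question targets) a 24-bit depth renderbuffer can be requested when building an FBO manually; this is a sketch under that assumption, not code from the original answer:

#include <OpenGLES/ES3/gl.h>   // iOS OpenGL ES 3.0 header

// Attach a 24-bit depth renderbuffer to the currently bound framebuffer.
static GLuint attachDepth24(GLsizei width, GLsizei height)
{
    GLuint depthRb = 0;
    glGenRenderbuffers(1, &depthRb);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);
    return depthRb;
}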
use linear depth buffer
So store linear values into the depth buffer. There are two ways. One is to compute the depth so that, after all the underlying operations, you end up with a linear value.
Another option is to use a separate texture/FBO and store the linear depths directly into it (a sketch of this follows below). The problem is that you cannot use its contents in the same rendering pass.
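As a hedged illustration of that second option, here is a minimal GLSL ES 3.0 fragment shader that writes a linear depth value to a second color attachment; the viewDepth varying, the znear/zfar uniforms, and the float texture bound to attachment 1 are assumptions for the sketch, not part of the original answer:

#version 300 es
precision highp float;

uniform float znear, zfar;

in float viewDepth;   // -z of the vertex in camera space, passed down from the vertex shader

layout(location = 0) out vec4 outColor;         // regular shading output
layout(location = 1) out float outLinearDepth;  // assumes a renderable float texture (e.g. R32F) on attachment 1

void main(void)
{
    outColor = vec4(1.0);                                    // normal shading would go here
    outLinearDepth = (viewDepth - znear) / (zfar - znear);   // 0 at znear, 1 at zfar
}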
[Edit1] Linear Depth buffer
To linearize the depth buffer itself (not just the values taken from it), try this:
Vertex:
varying float depth;
void main()
{
    vec4 p=ftransform();
    depth=p.z;
    gl_Position=p;
    gl_FrontColor = gl_Color;
}
Fragment:
uniform float znear,zfar;
varying float depth; // original z in camera space, instead of gl_FragCoord.z which is already truncated
void main(void)
{
    float z=(depth-znear)/(zfar-znear);
    gl_FragDepth=z;
    gl_FragColor=gl_Color;
}
Non linear Depth buffer linearized on CPU side (as you do):
Linear Depth buffer GPU side (as you should):
The scene parameters are:
// 24 bits per depth value
const double zang  = 60.0;
const double znear = 0.01;
const double zfar  = 20000.0;
and a simple rotated plate covering the whole depth field of view. Both images are taken by glReadPixels(0,0,scr.xs,scr.ys,GL_DEPTH_COMPONENT,GL_FLOAT,zed); and transformed to a 2D RGB texture on the CPU side. Then rendered as a single QUAD covering the whole screen with unit matrices ...
Now to obtain the original depth value from the linear depth buffer you just do this:
z = znear + (zfar-znear)*depth_value;
I used the ancient stuff just to keep this simple so port it to your profile ...
Beware, I do not code in OpenGL ES or iOS, so I hope I did not miss something related to that (I am used to Win and PC).
To show the difference I added another rotated plate to the same scene (so they intersect) and used colored output (no depth read-back anymore):
As you can see, the linear depth buffer is much better (for scenes covering a large part of the depth FOV).

Modifying Individual Pixels with SKShader

I am attempting to write a fragment shader for the app that I am working on. I pass my uniform into the shader, which works, but it applies to the entire object. I want to be able to modify the object pixel by pixel. So my code now is....
let shader = SKShader( fileNamed: "Shader.fsh" );
shader.addUniform( SKUniform( name: "value", float: 1.0 ) );
m_image.shader = shader;
Here the uniform "value" will be the same for all pixels. But, for example, let's say I want to change "value" to "0.0" after a certain amount of pixels are drawn. So for example....
shader.addUniform( SKUniform( name: "value", float: 1.0 ) );
// 100 pixels are drawn
shader.addUniform( SKUniform( name: "value", float: 0.0 ) );
Is this even possible with SKShader? Would this have to be done in the shader source?
One idea I was thinking of was using an array uniform but it doesn't appear that SKShader allows this.
Thanks for any help is advance.
In general, the word uniform means unchanging — something that's the same in all cases or situations. Such is the way of shader uniforms: even though the shader code runs independently (and in parallel) for each pixel in a rendered image, the value of a uniform variable input to the shader is the same across all pixels.
While you could, in theory, pass an array of values into the shader representing the colors for every pixel, that's essentially the same as passing an image (or just setting a texture image on the sprite)... at that point you're using a shader for nothing.
Instead, you typically want your GLSL(ish*) code to, if it's doing anything based on pixel location, find out the pixel coordinates it's writing to and calculate a result based on that. In a shader for SKShader, you get pixel coordinates from the vec2 v_tex_coord shader variable.
(This looks like a decent tutorial (with links to others) for getting started on SpriteKit shaders. If you follow other tutorials or shader code libraries for help doing cool stuff with pixel shaders, you'll find ideas and algorithms you can reuse, but the ways they find the current output pixel will be different. In a shader for SpriteKit, you can usually safely replace gl_FragCoord with v_tex_coord.)
* SKShader doesn't use actual GLSL per se. It actually uses a subset of GLSL that automatically translates to appropriate GPU code for the device/renderer in use.
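As a rough sketch of that idea (keeping the uniform name value from the question; the left/right split is an arbitrary example of a per-pixel decision, not anything prescribed by SpriteKit):

void main() {
    // u_texture and v_tex_coord are supplied by SpriteKit; "value" is the SKUniform added in Swift
    vec4 color = texture2D(u_texture, v_tex_coord);
    if (v_tex_coord.x < 0.5) {
        // apply the uniform only to pixels in the left half of the sprite
        gl_FragColor = vec4(color.rgb * value, color.a);
    } else {
        gl_FragColor = color;
    }
}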

HLSL vertex shader

I've been studying shaders in HLSL for an XNA project (so no DX10-DX11), but almost all the resources I found were tutorials for effects where most of the work was done in the pixel shader. For instance, in lighting the vertex shader is used only to pass normals and other such data to the pixel shader.
I'd like to make some effects based on the vertex shader rather than the pixel one, like deformation for instance. Could someone suggest a book or a website? Even the bare effect name would be useful, since then I could google it.
A lot of lighting, etc. is done in the pixel shader because the resulting image quality will be much better.
Imagine a sphere that is created by subdividing a cube or icosahedron. If lighting calculations are done in the vertex shader, the resulting values will be interpolated between face edges, which can lead to a flat or faceted appearance.
Things like blending and morphing are done in the vertex shader because that's where you can manipulate the vertices.
For example:
matrix World;
matrix View;
matrix Projection;
float WindStrength;
float3 WindDirection;

VertexPositionColor VS(VertexPositionColor input)
{
    VertexPositionColor output;
    float4 worldPosition = mul(World, input.Position);
    // lean the vertex along the wind direction, scaled by its height above the ground
    worldPosition.xyz += WindDirection * WindStrength * worldPosition.y;
    output.Position = mul(mul(View, Projection), worldPosition);
    output.Color = input.Color;
    return output;
}
(Pseudo-ish code since I'm writing this in the SO post editor.)
In this case, I'm offsetting vertices that are "high" on the Y axis with a wind direction and strength. If I use this when rendering grass, for instance, the tops of the blades will lean in the direction of the wind, while the vertices that are closer to the ground (ideally with a Y of zero) will not move at all. The math here should be tweaked a bit to take into account really tall things that would cause unacceptably large changes, and the wind should not be uniformly applied to all blades, but it should be clear that here the vertex shader is modifying the mesh in a non-uniform way to get an interesting effect.
No matter the effect you are trying to achieve - morphing, billboards (so the item you're drawing always faces the camera), etc., you're going to wind up passing some parameters into the VS that are then selectively applied to vertices as they pass through the pipeline.
A fairly trivial example would be "inflating" a model into a sphere, based on some parameter.
Pseudocode again,
matrix World;
matrix View;
matrix Projection;
float LerpFactor;

VertexPositionColor VS(VertexPositionColor input)
{
    // treat the normalized object-space position as the matching point on a unit sphere
    float3 normal = normalize(input.Position.xyz);
    // blend between the original shape and the sphere
    float3 position = lerp(input.Position.xyz, normal, LerpFactor);
    matrix wvp = mul(mul(World, View), Projection);
    float4 outputVector = mul(wvp, float4(position, 1));
    ....
By stepping the uniform LerpFactor from 0 to 1 across a number of frames, your mesh (ideally a convex polyhedron) will gradually morph from its original shape to a sphere. Of course, you could include more explicit morph targets in your vertex declaration and morph between two model shapes, collapse it to a less complex version of a model, open the lid on a box (or completely unfold it), etc. The possibilities are endless.
For more information, this page has some sample code on generating and using morph targets on the GPU.
If you need some good search terms, look for "xna bones," "blendweight" and "morph targets."
