Is it possible to get the depth values from the depth stencil buffer? Right now, I have a depth shader that renders the distance of each pixel using 1.0f - (outputPosition.z / outputPosition.w) from the vertex shader.
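For reference, the depth shader described might look like this in HLSL (a sketch; the worldViewProj matrix and struct names are assumptions, not the asker's actual code):

float4x4 worldViewProj; // assumed world-view-projection matrix, bound via constants

struct VSOut
{
    float4 pos   : SV_Position;
    float  depth : TEXCOORD0;
};

VSOut VSMain(float4 position : POSITION)
{
    VSOut o;
    o.pos = mul(position, worldViewProj);
    o.depth = 1.0f - (o.pos.z / o.pos.w); // the value described above
    return o;
}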
I want to update a UAV in my pixel shader, which outputs to SV_DEPTH with the depth buffer set to standard depth buffer settings.
The question is: does the depth buffer only affect SV_TARGET outputs, or will it also be used to prevent updates to the UAV?
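A sketch of the setup in question (the UAV, its register, and the counter usage are hypothetical):

RWStructuredBuffer<uint> counters : register(u1); // hypothetical UAV

float PSMain(float4 pos : SV_Position) : SV_DEPTH
{
    InterlockedAdd(counters[0], 1u); // UAV update from the pixel shader
    return pos.z;                    // value written through SV_DEPTH
}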
I'm trying to render a forest scene for an iOS app with OpenGL. To make it a little nicer, I'd like to implement a depth effect in the scene. However, I need a linearized depth value from the OpenGL depth buffer to do so. Currently I am using a computation in the fragment shader (which I found here).
Therefore my terrain fragment shader looks like this:
#version 300 es
precision mediump float;

uniform float nearz; // near clipping plane distance
uniform float farz;  // far clipping plane distance

layout(location = 0) out lowp vec4 out_color;

float linearizeDepth(float depth) {
    return 2.0 * nearz / (farz + nearz - depth * (farz - nearz));
}

void main(void) {
    float depth = gl_FragCoord.z;
    float linearized = linearizeDepth(depth);
    out_color = vec4(linearized, linearized, linearized, 1.0);
}
However, this results in the following output:
As you can see, the "further" away you get, the more "stripy" the resulting depth value becomes (especially behind the ship). If the terrain tile is close to the camera, the output is somewhat okay.
I even tried another computation:
float linearizeDepth(float depth) {
    return 2.0 * nearz * farz / (farz + nearz - (2.0 * depth - 1.0) * (farz - nearz));
}
which resulted in way too high a value, so I scaled it down by dividing:
float linearized = (linearizeDepth(depth) - 2.0) / 40.0;
Nevertheless, it gave a similar result.
So how do I achieve a smooth, linear transition between the near and the far plane, without any stripes? Has anybody had a similar problem?
The problem is that you are storing non-linear values which are truncated, so when you read the depth values back later you get a choppy result: you lose accuracy the farther you are from the znear plane. No matter what you evaluate, you will not obtain better results unless you:
Lower accuracy loss
You can change the znear, zfar values so they are closer together. Enlarge znear as much as you can, so the more accurate area covers more of your scene.
Another option is to use more bits per depth buffer value (16 bits is too low). I'm not sure if you can do this in OpenGL ES, but in standard OpenGL you can use 24 or 32 bits on most cards; see the allocation sketch below.
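On OpenGL ES, the precision is chosen when the depth renderbuffer is allocated. A minimal sketch, assuming an already-created renderbuffer name depthRb (GL_DEPTH_COMPONENT24 is core in ES 3.0; on ES 2.0 it requires the GL_OES_depth24 extension):

glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRb);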
Use a linear depth buffer
So store linear values into the depth buffer. There are two ways. One is to compute depth so that, after all the underlying operations, you end up with a linear value.
Another option is to use a separate texture/FBO and store the linear depths directly into it. The problem is that you cannot use its contents in the same rendering pass.
[Edit1] Linear Depth buffer
To linearize the depth buffer itself (not just the values taken from it), try this:
Vertex:
varying float depth;       // z handed to the fragment shader
void main()
{
    vec4 p=ftransform();   // fixed-function transform to clip space
    depth=p.z;             // z before the perspective divide
    gl_Position=p;
    gl_FrontColor = gl_Color;
}
Fragment:
uniform float znear,zfar; // near/far plane distances
varying float depth;      // original z from the vertex shader instead of gl_FragCoord.z, which is already truncated
void main(void)
{
    float z=(depth-znear)/(zfar-znear); // map depth linearly to [0,1]
    gl_FragDepth=z;                     // store the linear value in the depth buffer
    gl_FragColor=gl_Color;
}
Non-linear depth buffer linearized on the CPU side (as you do):
Linear depth buffer on the GPU side (as you should):
The scene parameters are:
// 24 bits per Depth value
const double zang = 60.0;
const double znear= 0.01;
const double zfar =20000.0;
and a simple rotated plate covering the whole depth field of view. Both images are taken by glReadPixels(0,0,scr.xs,scr.ys,GL_DEPTH_COMPONENT,GL_FLOAT,zed); and transformed to a 2D RGB texture on the CPU side. Then rendered as a single QUAD covering the whole screen with unit matrices ...
Now to obtain the original depth value from the linear depth buffer, you just do this:
z = znear + (zfar-znear)*depth_value;
I used the ancient stuff just to keep this simple, so port it to your profile ...
Beware I do not code in OpenGL ES nor iOS, so I hope I did not miss something related to that (I am used to Win and PC).
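Since the question targets OpenGL ES 3.0, a port of the pair above might look like this (an untested sketch; the mvp uniform and the position attribute are assumptions):

Vertex:

#version 300 es
uniform mat4 mvp;  // assumed combined model-view-projection matrix
in vec4 position;  // assumed vertex attribute
out float depth;
void main()
{
    vec4 p = mvp * position;
    depth = p.z;   // z before the perspective divide
    gl_Position = p;
}

Fragment:

#version 300 es
precision mediump float;
uniform float znear, zfar;
in float depth;
layout(location = 0) out vec4 out_color;
void main(void)
{
    float z = (depth - znear) / (zfar - znear); // linear [0,1] depth
    gl_FragDepth = z;                           // overwrite the stored depth
    out_color = vec4(z, z, z, 1.0);
}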
To show the difference I added another rotated plate to the same scene (so they intersect) and used colored output (no depth read-back anymore):
As you can see, the linear depth buffer is much, much better (for scenes covering a large part of the depth FOV).
In OpenGL, I am using the following in my pixel shaders to get the correct pixel position, which is used to sample diffuse, normal, position gbuffer textures:
ivec2 texcoord = ivec2(vec2(textureSize(unifDiffuseTexture, 0)) * (gl_FragCoord.xy / UnifAmbientPass.mScreenSize));
So far, this is what I do in HLSL:
float2 texcoord = input.mPosition.xy / gScreenSize;
Most notably, in GLSL I am using textureSize() to get an accurate pixel position. I am wondering, is there an HLSL equivalent to textureSize()?
In HLSL, you have GetDimensions.
But it may be costlier than reading the size from a constant buffer, even if it looks easier to use at first for quick tests.
Also, you have an alternative using SV_Position and Load: just use the xy as a uint2, and you remove the need for a user interpolator carrying a texture coordinate to index the screen.
Here is the full documentation of a TextureObject.
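Both options in one hypothetical pixel shader (the texture name is an assumption):

Texture2D diffuseTexture; // assumed g-buffer texture

float4 PSMain(float4 pos : SV_Position) : SV_Target
{
    // Option 1: query the size on the fly with GetDimensions.
    uint width, height;
    diffuseTexture.GetDimensions(width, height);
    float2 texcoord = pos.xy / float2(width, height); // would feed a regular Sample call

    // Option 2: skip normalized coordinates entirely and Load the texel.
    return diffuseTexture.Load(int3(uint2(pos.xy), 0)); // mip level 0
}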
Could someone explain the math behind the function tex2D in HLSL?
One of the examples is: given a quad with 4 vertices, the texture coordinates on it are (0,0) (0,1) (1,0) (1,1), and the texture's width and height are 640 and 480. How does the shader determine how many times sampling has to be performed? If it is to map texels to pixels directly, does it mean that the shader needs to perform 640*480 sampling operations, with the texture coordinates increasing in some kind of gradient? Also, I would appreciate it if you could provide more references and articles on this topic.
Thanks.
After the vertex shader, the rasterizer "converts" triangles into pixels. Each pixel is associated with a screen position, and the vertex attributes of the triangle (e.g. texture coordinates) are interpolated across the triangle; an interpolated value is stored in each pixel according to the pixel position.
The pixel shader runs once per pixel (in most cases).
The number of times the texture is sampled per pixel depends on the sampler used. If you use a point sampler the texture is sampled once, 4 times if you use a bilinear sampler, and a few more times if you use more complex samplers.
So if you're drawing a fullscreen quad, the texture you're sampling is the same size as the render target, and you're using a point sampler, the texture will be sampled width*height times (once per pixel).
You can think of a texture as a 2-dimensional array of texels. tex2D simply returns the texel at the requested position, performing some kind of interpolation depending on the sampler used (texture coordinates are usually relative to the texture size, so the hardware converts them to absolute coordinates).
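For example, bilinear filtering blends the four texels around the requested position. Conceptually the hardware does something like the following sketch (FetchTexel is a hypothetical raw per-texel read, e.g. what Texture2D.Load would do):

float4 FetchTexel(int2 t); // hypothetical raw texel read

// Conceptual bilinear filtering for a 640x480 texture; tex2D triggers
// the hardware equivalent of this.
float4 BilinearSample(float2 uv)
{
    float2 size = float2(640.0, 480.0);
    float2 pos  = uv * size - 0.5;  // normalized -> texel coordinates
    int2   p0   = (int2)floor(pos); // top-left texel of the 2x2 footprint
    float2 f    = frac(pos);        // fractional blend weights
    float4 c00 = FetchTexel(p0 + int2(0, 0));
    float4 c10 = FetchTexel(p0 + int2(1, 0));
    float4 c01 = FetchTexel(p0 + int2(0, 1));
    float4 c11 = FetchTexel(p0 + int2(1, 1));
    return lerp(lerp(c00, c10, f.x), lerp(c01, c11, f.x), f.y);
}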
This link might be useful: Rasterization
I'm trying to achieve the following blending when the texture at one vertex merges with another:
Here's what I currently have:
I've enabled blending and am specifying the blending function as:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I can see that the image drawn in the Paper app is made up of a small circle that merges with the same texture before and after it, with some blending effect on the color and the alpha.
How do I achieve the desired effect?
UPDATE:
What I think is happening is that in the intersected region of the two textures the alpha channel is modified (either additively or by some other custom function), while the texture itself is not drawn again there. The rest of the region has the rest of the texture drawn. Like so:
I'm not entirely sure of how to achieve this result, though.
You shouldn't need blending for this (and it won't work the way you want).
I think as long as you define your texture coordinates in screen space, it should be seamless between two separate circles.
To do this, instead of using a texture coordinate passed through the vertex shader, just use the position of the fragment to sample the texture, plus or minus some scaling:
vec2 texcoord = gl_FragCoord.xy / vec2(xresolution_in_pixels, yresolution_in_pixels);
gl_FragColor = texture2D(papertexture, texcoord);
If you don't have access to GLSL, you could do something instead with the stencil buffer. Just draw all your circles into the stencil buffer, use the combined region as a stencil mask, and then draw a fullscreen quad of your texture. The color will be seamlessly deposited at the union of all the circles.
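A sketch of that stencil approach (drawCircles and drawFullscreenQuad are hypothetical helpers):

glEnable(GL_STENCIL_TEST);
glClear(GL_STENCIL_BUFFER_BIT);

// Pass 1: mark every circle in the stencil buffer, without color writes.
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawCircles();

// Pass 2: draw the fullscreen textured quad only where the stencil is set.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawFullscreenQuad();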
You can achieve this effect with max blending for alpha (see the blend-state sketch after the shader), or manually (with blending off) using a shader (OpenGL ES 2.0):
#extension GL_EXT_shader_framebuffer_fetch : require
precision highp float;
uniform sampler2D texture;
uniform vec4 color;
varying vec2 texCoords;
void main() {
    float res_a = gl_LastFragData[0].a;        // alpha already in the framebuffer
    float a = texture2D(texture, texCoords).a; // alpha of the incoming texel
    res_a = max(a, res_a);                     // keep the larger of the two
    gl_FragColor = vec4(color.rgb * res_a, res_a);
}
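For the max-blending route mentioned above, the blend state might be set up like this (a sketch; GL_MAX is core in OpenGL ES 3.0, while ES 2.0 needs the GL_EXT_blend_minmax extension):

glEnable(GL_BLEND);
glBlendEquationSeparate(GL_FUNC_ADD, GL_MAX); // add RGB, take the max of alpha
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);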
Result: