I'm doing depth/stencil passes in Direct3D 11, and I believe I can simply unbind the pixel shader slot to skip pixel shading, and thus all color output to render targets altogether (even if render targets are bound), and just render to the depth/stencil buffer. I can't find any documentation to support this theory, however. Am I right in my assumption?
It is correct to proceed as you describe.
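In API terms this is just a null pixel shader and, optionally, no color targets at all. A minimal sketch (a config fragment, assuming an existing `context` and depth/stencil view `dsv`):

```cpp
// Depth-only pass: no pixel shader, no color render targets.
// 'context' is an existing ID3D11DeviceContext*, 'dsv' an ID3D11DepthStencilView*.
context->PSSetShader(nullptr, nullptr, 0);     // skip pixel shading entirely
context->OMSetRenderTargets(0, nullptr, dsv);  // bind only the depth/stencil view
// ... bind IA/VS state as usual, then issue your Draw*() calls
```

Unbinding the render targets as well (rather than leaving them bound) makes the intent explicit and avoids accidental writes if a shader is rebound later.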
The best way to know whether something is legit is to enable the debug layer with the appropriate flag at device creation; D3D will then log any inappropriate states or arguments passed to the API.
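Enabling the debug layer is a one-flag change at device creation; a sketch (config fragment, debug builds only):

```cpp
// Create the D3D11 device with the debug layer enabled.
UINT flags = 0;
#if defined(_DEBUG)
flags |= D3D11_CREATE_DEVICE_DEBUG;  // validation messages go to the debugger output
#endif
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                               flags, nullptr, 0, D3D11_SDK_VERSION,
                               &device, nullptr, &context);
```

With that flag set, invalid state combinations and bad API arguments are reported as you render.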
I'm developing a WebGL application in which I draw a detailed building on top of mapbox-gl-js.
Everything works except for one detail: I don't know how to acquire the depth buffer of each drawn frame.
In some cases my overlay is drawn over buildings extruded by the mapbox-gl-js style, when it should be behind them.
I see only one way to do this correctly: acquire the depth buffer from mapbox-gl-js, pass it into my shader as a texture, and compare its values with my actual depth values,
as in deferred rendering techniques.
Is there any way to do that?
You may be better off using a Custom Layer.
I'm currently learning how to use Metal and having some difficulty using the stencil buffer, possibly because it's the wrong solution for the problem I have.
The problem: I have a tree of 2D render nodes (quads) that I'm rendering with Metal. For some quads, I'd like to enable a 'clipping mask' that clips the rendering of all their sub-nodes to within their bounds.
I imagined that this might be a good use-case for the stencil attachment (Metal is my first foray into low-level graphics APIs) but am not 100% sure.
Having set up the depth/stencil attachment, however, I've got no idea what to actually do with it. Is it even possible to implement this idea of nested clipping masks this way?
My rough idea for how it might work would be:
Set up a pipeline state for each quad as usual.
Set up a couple of depth/stencil states: one for tree elements that will clip, and one for nodes that won't (write masks of 0xFF and 0x00, respectively).
Begin a render pass as usual and begin traversing the tree.
If a node should clip, use the clipping depth/stencil state; otherwise, the non-clipping one.
Any idea if this is the right approach?
Any thoughts as to the specifics of tackling this problem, i.e. the configuration of the MTLStencilDescriptor, its read/write masks, and the various comparison operations and functions? How would I set the stencilReferenceValue on the render command encoder? Increment it at each level of the tree?
EDIT: A similar question attempts to tackle this problem in OpenGL (although the solution comes with its own compromises), so it appears that the problem above can be tackled using a stencil attachment.
The solution in the linked question notes "by giving each level in your scene graph a higher number than the last, you can assign each level its own stencil mask," but comes with the caveat: "Of course, if two siblings at the same depth level overlap, or simply are too close, then you've got a problem."
Is there a better way of achieving this with the capabilities that Metal provides? An idea of/pointers to a recommended algorithm/method to do this within the capabilities of Metal's API would be appreciated!
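For what it's worth, the increment-per-level idea from the linked answer can be checked API-agnostically. The sketch below is a tiny software model of that scheme, not Metal code: a clipping quad is drawn with stencil compare == its level's reference value and an increment-on-pass operation, while ordinary children are drawn with compare == parent level + 1 and stencil writes disabled.

```cpp
#include <cstdint>
#include <vector>

// 1D "screen": each pixel has a stencil value and a coverage flag.
struct Target {
    std::vector<uint8_t> stencil;
    std::vector<bool> colored;
    explicit Target(int w) : stencil(w, 0), colored(w, false) {}
};

// Clipping quad: where stencil == ref, increment (marks the clip region as ref + 1).
void drawClipQuad(Target& t, int x0, int x1, uint8_t ref) {
    for (int x = x0; x < x1; ++x)
        if (t.stencil[x] == ref) ++t.stencil[x];
}

// Ordinary node: color only where stencil == ref; stencil write mask is 0.
void drawContent(Target& t, int x0, int x1, uint8_t ref) {
    for (int x = x0; x < x1; ++x)
        if (t.stencil[x] == ref) t.colored[x] = true;
}
```

In Metal terms, drawClipQuad roughly corresponds to a MTLStencilDescriptor with stencilCompareFunction .equal and depthStencilPassOperation .incrementClamp, drawContent to .equal with writeMask 0, with setStencilReferenceValue(_:) on the encoder set to the node's tree depth.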
If I want to clear an entire depth/stencil view in Direct3D 11, I can easily call ID3D11DeviceContext::ClearDepthStencilView.
Direct3D 11.1 adds support for clearing rectangular portions of render target views using ID3D11DeviceContext1::ClearView.
But I see no way to clear only a portion of a depth/stencil view in Direct3D 11, short of rendering a quad over the desired area. This seems like an odd regression from Direct3D 9, where this was trivially easy. Am I missing something, or is this really not supported?
There is no function that clears only part of a depth/stencil view.
This is my way of solving the problem:
Create a texture. Set the alpha of the region to be cleared to 1, and the rest to 0.
Enable alpha testing, so that only pixels whose alpha is 1 pass.
Enable alpha blending; set BlendOp to Add, the SrcBlend factor to 0, and the DestBlend factor to 1 (so the color buffer is left untouched).
Set the stencil and depth tests to Always, and set StencilRef to the value you want to clear to.
Use an orthographic projection matrix.
Draw a rectangle that just covers the screen (its z-coordinate / (ZFar - ZNear) becomes the written depth), with the texture applied to it.
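In Direct3D 11 terms, where there is no fixed-function alpha test, the same quad trick reduces to a scissor rect plus an always-pass depth/stencil state. A hedged sketch (a config fragment; `left`/`top`/`right`/`bottom`, the created state object, and `clearStencilValue` are placeholders):

```cpp
// D3D11 variant of the quad trick: restrict rasterization to the rectangle
// with a scissor rect, force depth/stencil to always pass and overwrite, and
// draw a quad whose interpolated depth equals the desired clear value.
D3D11_RECT rect = { left, top, right, bottom };  // region to "clear"
context->RSSetScissorRects(1, &rect);            // rasterizer state needs ScissorEnable = TRUE

D3D11_DEPTH_STENCIL_DESC ds = {};
ds.DepthEnable = TRUE;
ds.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ALL;
ds.DepthFunc = D3D11_COMPARISON_ALWAYS;                 // unconditionally overwrite depth
ds.StencilEnable = TRUE;
ds.FrontFace.StencilFunc = D3D11_COMPARISON_ALWAYS;
ds.FrontFace.StencilPassOp = D3D11_STENCIL_OP_REPLACE;  // overwrite with StencilRef
// ... CreateDepthStencilState(&ds, &state), then
// OMSetDepthStencilState(state, clearStencilValue), mask off color writes in the
// blend state, and draw the quad at z == the depth value you want written.
```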
There is an excellent reason for removing partial clears from the API. First, it is always possible to emulate them by drawing quads with the proper render states; second, all GPUs have fast-clear and resolve hardware, and using it as intended greatly improves rendering performance.
With the D3D11 clear API, the fast-clear path and the related GPU optimisations can be used. A depth-buffer fast clear also prepares for early depth testing (prior to pixel shading, because yes, the real depth test is after pixel shading) and for bandwidth optimisations on depth-buffer accesses while rendering. If you clear with a quad instead, you lose all of that and the draw cost rises.
Should a WebGL fragment shader output gl_FragColor RGB values which are linear, or to some 1⁄γ power in order to correct for display gamma? If the latter, is there a specific value to use or must a complete application be configurable?
The WebGL Specification does not currently contain “gamma”, “γ”, or a relevant use of “linear”, and the GL_ARB_framebuffer_sRGB extension is not available in WebGL. Is there some other applicable specification? If this is underspecified, what do current implementations do? A well-sourced answer would be appreciated.
(Assume we have successfully loaded or procedurally generated linear color values; that is, gamma of texture images is not at issue.)
This is a tough one, but from what I've been able to dig up (primarily from this email thread) it seems that the current behavior is to gamma-correct linear-color-space images (such as PNGs) as they are loaded. Formats like JPEG get loaded without any transformation because they are already gamma-corrected. (Source: https://www.khronos.org/webgl/public-mailing-list/archives/1009/msg00013.html) This would indicate that textures may be passed to WebGL in a non-linear space, which would be problematic. I'm not sure whether that has changed since late 2010.
Elsewhere in that thread it's made very clear that the desired behavior should be that everything input and output from WebGL should be in a linear color space. What happens beyond that is outside the scope of the WebGL spec (which is why it's silent on the issue).
Sorry if that doesn't authoritatively answer your question; I'm just digging up what I can on the matter. As for whether or not you should be doing correction in a shader, I would say that the answer appears to be "no", since the WebGL output is going to be assumed to be linear, and attempting to correct it yourself may lead to a double transformation of the color space.
When I mentioned this question on Freenode #webgl (June 29, 2012), Florian Boesch vigorously expressed the opinion that nearly all users' systems are hopelessly misconfigured with regard to gamma, and therefore the only way to get good results is to provide a gamma option within a WebGL application, as even if WebGL specified a color space (whether linear or non-linear) for framebuffer data, it would not be correctly converted for the monitor.
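If you do end up exposing a gamma option along those lines, the correction itself is just a per-channel power. A minimal sketch (the 2.2 default is an assumption about typical displays, not something any spec mandates):

```cpp
#include <cmath>

// Apply display-gamma correction to one linear channel value in [0, 1].
// gamma = 2.2 approximates a typical sRGB-like display; expose it to the user.
float linearToDisplay(float linear, float gamma = 2.2f) {
    return std::pow(linear, 1.0f / gamma);
}
```

The same expression, pow(color, 1.0 / gamma), is what the equivalent fragment-shader correction would compute per channel.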
I am doing a bit of work on some of our HLSL shaders, trying to get them to work in SM2.0. I've nearly succeeded, but one of our shaders accepts a parameter:
float alignment : VFACE
My understanding from MSDN is that this is an automatic variable calculated in case I need it, but it's not supported under SM2.0... so how might I reproduce it? I'm not a shader programmer, so any (pseudo)code would be really helpful. I understand what VFACE does, but not how I might calculate it myself in a pixel shader, or in a VS and pass it into the PS. Calculating it per pixel sounds expensive, so maybe someone can show a skeleton that calculates it in a VS and uses it in a PS?
You can't. VFACE gives the orientation of the triangle (back- or front-facing), and the VS and PS stages have no access to the whole primitive (unlike the SM4/5 GS stage).
The only way is to render your geometry in two passes (one with back-face culling, the other with front-face culling) and pass a constant value to the shader matching the VFACE meaning.
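On the application side, the two-pass workaround might look like this D3D9-era sketch (a config fragment; `device` is an assumed IDirect3DDevice9*, register c0 is an assumed constant the pixel shader reads in place of VFACE, and default clockwise front-face winding is assumed):

```cpp
// Two-pass emulation of VFACE for SM2.0.
const float frontSign[4] = {  1.0f, 0.0f, 0.0f, 0.0f };
const float backSign[4]  = { -1.0f, 0.0f, 0.0f, 0.0f };

// Pass 1: cull counter-clockwise (back) faces; only front faces reach the PS.
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
device->SetPixelShaderConstantF(0, frontSign, 1);
// ... DrawIndexedPrimitive(...) as usual

// Pass 2: cull clockwise (front) faces; the PS now sees only back faces.
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
device->SetPixelShaderConstantF(0, backSign, 1);
// ... DrawIndexedPrimitive(...) again
```

In the shader, read c0.x wherever the SM3.0 version read the VFACE semantic.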