I'm having a number of issues with HLSL and PIX.
1) In HLSL (Shader Model 3), can you declare a pixel shader on its own, without a vertex shader? If not, what can I do to work around this?
2) Why does PIX skip code? I have a reasonably long shader method, but PIX seems to stop me from debugging until halfway through the method and then skips the rest of the block. I set PIX to gather information when F12 is hit and check the DISABLE D3DX analysis checkbox; I have to do this because I'm using XNA.
3) Can I debug the shader as the experiment is being run?
Cheers,
Related
I had a good search before starting here, this question:
How to set RenderState in DirectX11?
is far too general; in studying the first answer, I suspect I need the Blend State, but it's not obvious how to set up an alpha comparison.
And searching stack overflow for D3DRS_ALPHAREF produced only seven other questions: https://stackoverflow.com/search?q=D3DRS_ALPHAREF none of which are even remotely close.
I'm using this for a program that does a two pass render to transition from one image to a second. I have a control texture that is the same size as the textures I'm rendering, and is single channel luminance.
The last lines of my pixel shader are:
// Copy rgb from the source texture. (Note: "out" is a reserved
// keyword in HLSL, so the output struct needs another name.)
output.color.rgb = source.color.rgb;
// Copy alpha from the control texture.
output.color.a = control.color.r;
return output;
Then in my render setup I have:
DWORD const reference = static_cast<DWORD>(frameNum);
D3DCMPFUNC const compare = pass == 0 ? D3DCMP_GREATEREQUAL : D3DCMP_LESS;
m_pd3dDevice->SetRenderState(D3DRS_ALPHAREF, reference);
m_pd3dDevice->SetRenderState(D3DRS_ALPHAFUNC, compare);
Where frameNum is the current frame number of the transition: 0 through 255.
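One detail worth double-checking, since the snippet above may simply elide it: in Direct3D 9 the alpha test also has to be enabled explicitly, or the reference and compare function have no effect. A minimal setup would be:

```cpp
// Enable the fixed-function alpha test; without this,
// D3DRS_ALPHAREF and D3DRS_ALPHAFUNC are ignored.
m_pd3dDevice->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
m_pd3dDevice->SetRenderState(D3DRS_ALPHAREF, reference);
m_pd3dDevice->SetRenderState(D3DRS_ALPHAFUNC, compare);
```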
-- Edit -- For those not intimately familiar with this particular capability of DirectX 9, the final stage uses the compare function to compare the alpha output from the pixel shader with the reference value, and then it actually draws the pixel iff the comparison returns a true value.
The net result of all this is that the luminance level of the control texture controls how early or late each pixel changes in the transition.
So, how exactly do I do this with DirectX 11?
Yes, I realize there are other ways to achieve the same result, passing frameNum to a suitably crafted pixel shader could get me to the same place.
That's not the point here, I'm not looking for an alternative implementation, I am looking to learn how to do alpha comparisons in DirectX 11, since they have proven a useful tool from time to time in DirectX 9.
If you are moving from Direct3D 9 to Direct3D 11, it is worth making a brief stop at what changed in Direct3D 10, which is covered in detail on MSDN. One of the points in that article is:
Removal of Fixed Function
It is sometimes surprising that even in a Direct3D 9 engine that fully exploits the programmable pipeline, there remains a number of areas that depend on the fixed-function (FF) pipeline. The most common areas are usually related to screen-space aligned rendering for UI. It is for this reason that you are likely to need to build a FF emulation shader or set of shaders which provide the necessary replacement behaviors.
This documentation contains a white paper containing replacement shader sources for the most common FF behaviors (see Fixed Function EMU Sample). Some fixed-function pixel behavior including alpha test has been moved into shaders.
IOW: You do this in a programmable shader in Direct3D 10 or later.
Take a look at DirectX Tool Kit and in particular the AlphaTestEffect (implemented in this cpp and shader file).
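As a concrete sketch of how that emulation looks in a Direct3D 11 pixel shader (the resource names, constant buffer layout, and two-texture setup here are my assumptions, not code taken from AlphaTestEffect): the clip() intrinsic discards a pixel when its argument is negative, which is enough to reproduce D3DCMP_GREATEREQUAL and D3DCMP_LESS against a reference value:

```hlsl
Texture2D    SourceTex  : register(t0);
Texture2D    ControlTex : register(t1);
SamplerState Samp       : register(s0);

cbuffer AlphaTestParams : register(b0)
{
    float AlphaRef;    // e.g. frameNum / 255.0
    float CompareSign; // +1 => GREATEREQUAL pass, -1 => LESS pass
};

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 color = SourceTex.Sample(Samp, uv);
    color.a = ControlTex.Sample(Samp, uv).r;
    // clip() discards the pixel when its argument is negative:
    // CompareSign = +1 keeps pixels with a >= AlphaRef,
    // CompareSign = -1 keeps pixels with a <  AlphaRef.
    // (clip() keeps the pixel at exactly zero, so the strict "<"
    // becomes "<=" at equality; subtract a tiny epsilon if that matters.)
    clip(CompareSign * (color.a - AlphaRef));
    return color;
}
```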
I need to get some performance back in my project, so I thought of implementing clipping and, later on, backface culling.
So in my vertex shader I check whether the vertex is facing me; if it is, I render it, and if not, I don't. But how do I say "don't render" in the vertex shader?
Same with clipping: how do I say "only paint this section" in the vertex shader? Or am I getting something wrong here? I'm quite new to OpenGL, and my project is for iPhone, using OpenGL ES 2.0.
Vertices don't face front or backward. It's the plane (triangle) that three or more vertices form that faces front or back, depending on the winding convention (clockwise/anticlockwise).
You just have to enable culling; nothing needs to change in your shaders.
Some APIs of interest (these should be more or less the same on iOS, which is not 100% compliant with OpenGL ES):
glDisable / glEnable with argument GL_CULL_FACE
glCullFace with argument GL_FRONT, GL_BACK or GL_FRONT_AND_BACK
glFrontFace with argument GL_CW or GL_CCW
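Putting those three calls together, a typical back-face culling setup in OpenGL ES 2.0 looks like this (GL_BACK and GL_CCW are in fact the defaults, but being explicit documents the intent):

```c
/* Cull triangles whose back side faces the camera. */
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);    /* discard back-facing triangles */
glFrontFace(GL_CCW);    /* counter-clockwise winding = front-facing */
```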
You could use discard in your fragment shader, but that only throws away individual fragments after rasterization; to skip whole triangles, enable culling as described above.
I'm doing depth/stencil passes in DirectX 11, and I believe I can simply unbind the pixel shader slot to skip pixel shading, and thus all color output to render targets altogether (even if render targets are bound), and just render to the depth/stencil buffer. I cannot find any documentation to support this theory, however. Am I right in my assumption?
It is correct to proceed as you describe.
The best way to know if something is legitimate is to enable the debug layer (with the appropriate flag at device creation); D3D will then log any inappropriate states or arguments passed to the API.
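For reference, a minimal depth-only pass along those lines might look like this (variable names are placeholders):

```cpp
// Unbind the pixel shader: rasterization and depth/stencil
// operations still run, but no color is computed or written.
context->PSSetShader(nullptr, nullptr, 0);
// Bind only the depth/stencil view, no render targets.
context->OMSetRenderTargets(0, nullptr, depthStencilView);
// ... issue draw calls; only the depth/stencil buffer is written.
```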
I would like to precalculate some values to be used each time the fragment shader is called.
How/where do I do that?
I am using a full screen quad, four vertices.
Some profiling might be required to see if you will really benefit from precalculating these values instead of doing the calculations in the fragment shader (usually it's a win, but sometimes not).
If you will benefit from this, values calculated once per frame can be passed in as uniforms. You can also calculate these in the vertex shader and pass them along as varyings (which won't really vary), due to the small number of vertices you're talking about here.
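A minimal sketch of both options in GLSL ES 2.0 (u_time and the sin/cos pair are just placeholders for whatever expensive calculation is being hoisted out of the fragment shader):

```glsl
// Option 1: calculate once per frame on the CPU, upload as a uniform.
uniform float u_time;

// Option 2: calculate in the vertex shader and hand over as a varying.
// All four vertices of the full-screen quad compute the same value,
// so the interpolated varying is effectively constant per fragment.
attribute vec2 a_position;
varying vec2 v_precalc;

void main()
{
    v_precalc = vec2(sin(u_time), cos(u_time));  // runs 4 times, not per pixel
    gl_Position = vec4(a_position, 0.0, 1.0);
}
```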
I am doing a bit of work on some of our HLSL shaders, trying to get them to work in SM2.0. I've nearly succeeded but one of our shaders accepts a parameter:
float alignment : VFACE
My understanding from MSDN is this is an automatic var calculated in case I need it, but it's not supported under SM2.0... so, how might I reproduce this? I'm not a shader programmer so any (pseudo) code would be really helpful. I understand what VFACE does, but not how I might calculate it myself in a pixel shader, or in a VS and pass it into the PS. Calculating it per-pixel sounds expensive so maybe someone can show a skeleton to calculate it in a VS and use it in a PS?
You can't: VFACE reports the orientation of the triangle (back- or front-facing), and the VS and PS stages have no access to the whole primitive (unlike the GS stage in SM4/SM5).
The only way is to render your geometry in two passes (one with back-face culling, the other with front-face culling) and pass the shader a constant matching the meaning of VFACE.
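A sketch of that two-pass approach in Direct3D 9 terms (the faceSign name and the effect-based constant setup are assumptions; with the default clockwise-front convention, D3DCULL_CCW culls back faces and D3DCULL_CW culls front faces):

```cpp
// Pass 1: draw only front faces; tell the shader "front" (+1, like VFACE > 0).
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);
effect->SetFloat("faceSign", 1.0f);
DrawGeometry();

// Pass 2: draw only back faces; flip the constant (-1, like VFACE < 0).
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
effect->SetFloat("faceSign", -1.0f);
DrawGeometry();
```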