How to read/write data in HLSL (DirectX11) code? - buffer

I'm writing a Unity HLSL shader and I want to buffer the previous pixel so I can use it in the next iteration. I need this to implement TAA anti-aliasing.
I have a simple raymarching shader: it renders a sphere and uses jittering for optimization. Because of the jittering, noise appears that has to be removed, either with a complex denoiser or with the much SIMPLER TAA anti-aliasing.
Explanation of the reason for this approach: I am not satisfied with the usual TAA anti-aliasing done in C# on the finished render, because it produces a "fading noise" effect, which can be seen, for example, in the Cycles render engine in Blender. I need that not to happen; I need the noise to be removed instantly, calculated while the PIXEL is being rendered, not the FRAME.
I really need help; I have searched the entire internet for an answer.
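One common way to get at the "previous pixel" in a D3D11 pixel shader is a history (ping-pong) buffer rather than true per-pixel persistence: render the current frame into one texture while sampling last frame's result from another, then swap the two. Below is a rough, hypothetical C++ sketch of that setup at the raw D3D11 level (the struct and function names are made up for illustration); in Unity the equivalent would be two RenderTextures swapped from C# and passed to the shader.
#include <d3d11.h>
#include <utility>

// Hypothetical sketch: two ping-pong textures used as a TAA-style history buffer.
struct HistoryBuffer {
    ID3D11Texture2D*          tex[2] = {};
    ID3D11RenderTargetView*   rtv[2] = {};
    ID3D11ShaderResourceView* srv[2] = {};
    int cur = 0, prev = 1;
};

void CreateHistoryBuffer(ID3D11Device* device, UINT width, UINT height, HistoryBuffer& hb)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R16G16B16A16_FLOAT;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    for (int i = 0; i < 2; ++i) {
        device->CreateTexture2D(&desc, nullptr, &hb.tex[i]);
        device->CreateRenderTargetView(hb.tex[i], nullptr, &hb.rtv[i]);
        device->CreateShaderResourceView(hb.tex[i], nullptr, &hb.srv[i]);
    }
}

void RenderFrame(ID3D11DeviceContext* context, HistoryBuffer& hb)
{
    // Write into the current texture while reading last frame's result.
    context->OMSetRenderTargets(1, &hb.rtv[hb.cur], nullptr);  // this frame's output
    context->PSSetShaderResources(0, 1, &hb.srv[hb.prev]);     // history texture (t0 in HLSL)
    // ... draw the raymarching pass here; the pixel shader blends its new, jittered
    // sample with the value it reads from the history texture ...
    std::swap(hb.cur, hb.prev);                                // ping-pong for the next frame
}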

Related

What is the correct way to store per-pixel persistence data in the Metal compute kernel?

I am trying to implement the MoG background subtraction algorithm based on the OpenCV CUDA implementation.
What I need is to maintain a set of Gaussian parameters independently for each pixel location across multiple frames. Currently I am just allocating a single big MTLBuffer to do the job, and on every frame I have to invoke the commandEncoder.setBuffer API. Is there a better way? I read about imageblocks but I am not sure if they are relevant.
Also, I would be really happy if you could point out anything that shouldn't be directly translated from CUDA to Metal.
Allocate an 8-bit texture and store the intermediate values into it in your compute shader. After this texture has been rendered, you can rebind it as an input texture to whatever other passes need to read from it in the rest of the render. You can find a very detailed example of this sort of thing in this GitHub example project of a parallel prefix sum on top of Metal. The example also shows how to write XCTest regression tests for your Metal shaders. GitHub: MetalPrefixSum

How do I clear only part of a depth/stencil view?

If I want to clear an entire depth/stencil view in Direct3D 11, I can easily call ID3D11DeviceContext::ClearDepthStencilView.
Direct3D 11.1 adds support for clearing rectangular portions of render target views using ID3D11DeviceContext1::ClearView.
But I see no way to clear only a portion of a depth/stencil view in Direct3D 11, short of rendering a quad over the desired area? This seems like an odd regression from Direct3D 9, where this was trivially easy. Am I missing something, or is this really not supported?
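For reference, here is roughly what the two calls look like; this is a hypothetical sketch that assumes dsv and rtv are existing views and that context1 was obtained by querying the context for ID3D11DeviceContext1. ClearView accepts a rect list, but not a depth/stencil view, which is exactly the gap described above.
#include <d3d11_1.h>

void ClearExamples(ID3D11DeviceContext* context, ID3D11DeviceContext1* context1,
                   ID3D11DepthStencilView* dsv, ID3D11RenderTargetView* rtv)
{
    // D3D11.0: clearing a depth/stencil view is all-or-nothing (no rect parameter).
    context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    // D3D11.1: rect-limited clears exist, but only for render-target (and similar) views.
    const FLOAT color[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
    D3D11_RECT  rect     = { 100, 100, 356, 356 };   // left, top, right, bottom
    context1->ClearView(rtv, color, &rect, 1);
}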
There is no such function that can clear only a part of a depth/stencil view.
This is my way of solving the problem (a rough sketch of the corresponding D3D11 state setup follows the list):
Make a texture. Set the alpha of the part to clear to 1, and the rest to 0.
Enable alpha testing so that only pixels whose alpha is 1 are written.
Enable alpha blending; set BlendOp to Add, the SrcBlend factor to 0 and the DestBlend factor to 1, so the colour buffer is left unchanged.
Set the stencil and depth tests to Always, and set StencilRef to the value you want to clear to.
Use an orthographic projection matrix.
Draw a rectangle that just covers the screen (its z-coordinate/(ZFar-ZNear) becomes the depth value) and paste the texture onto it.
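A rough D3D11 sketch of the state setup this recipe implies is below. The names and values are illustrative rather than a drop-in implementation, and since D3D11 has no fixed-function alpha test, the "only pixels whose alpha is 1" step has to be done with clip() in the pixel shader that samples the mask texture.
#include <d3d11.h>

// Hypothetical state setup for "clearing" part of a depth/stencil buffer with a quad.
void SetupPartialClearStates(ID3D11Device* device, ID3D11DeviceContext* context,
                             UINT clearStencilValue)
{
    // Depth/stencil: always pass, always write, replace stencil with the reference value.
    D3D11_DEPTH_STENCIL_DESC ds = {};
    ds.DepthEnable                  = TRUE;
    ds.DepthWriteMask               = D3D11_DEPTH_WRITE_MASK_ALL;
    ds.DepthFunc                    = D3D11_COMPARISON_ALWAYS;
    ds.StencilEnable                = TRUE;
    ds.StencilReadMask              = 0xFF;
    ds.StencilWriteMask             = 0xFF;
    ds.FrontFace.StencilFunc        = D3D11_COMPARISON_ALWAYS;
    ds.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_REPLACE;
    ds.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_KEEP;
    ds.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
    ds.BackFace                     = ds.FrontFace;
    ID3D11DepthStencilState* dsState = nullptr;
    device->CreateDepthStencilState(&ds, &dsState);
    context->OMSetDepthStencilState(dsState, clearStencilValue);  // ref = stencil "clear" value

    // Blend: SrcBlend 0 and DestBlend 1 leave the colour buffer untouched.
    D3D11_BLEND_DESC bd = {};
    bd.RenderTarget[0].BlendEnable           = TRUE;
    bd.RenderTarget[0].SrcBlend              = D3D11_BLEND_ZERO;
    bd.RenderTarget[0].DestBlend             = D3D11_BLEND_ONE;
    bd.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ZERO;
    bd.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ONE;
    bd.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
    ID3D11BlendState* blendState = nullptr;
    device->CreateBlendState(&bd, &blendState);
    const FLOAT blendFactor[4] = { 0, 0, 0, 0 };
    context->OMSetBlendState(blendState, blendFactor, 0xFFFFFFFF);

    // Then draw a screen-aligned quad at the depth you want to "clear" to, sampling the
    // mask texture and calling clip(alpha - 0.5) in the pixel shader so that only the
    // masked area is written.
}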
There is an excellent reason for removing partial clears from the API. First, it is always possible to emulate them by drawing quads with the proper render states; second, all GPUs have fast clear and resolve hardware, and using it as intended greatly improves rendering performance.
With the D3D11 clear API, the driver can use the fast clear path and the GPU optimisations that come with it. A depth-buffer fast clear also prepares for early depth testing (prior to pixel shading, because yes, the real depth test otherwise happens after pixel shading) and enables bandwidth optimisations when the depth buffer is accessed during rendering. If you clear with a quad, you lose all of that and the draw cost rises.

Rendering "layers" in OpenGL ES

I'm making an iOS app and I want to be able to render with individual "layers" so that I can do blending between them and use shaders on each individually before blending them all together and rendering to the screen.
I understand that I will be rendering to Textures and then rendering these textures on top of each other in the framebuffer, but I am not understanding clearly what code needs to be written to follow this procedure. In another answer I found what I want to do, but I don't know what code accomplishes this task: How to achieve multi-layered drawing with OpenGL ES on iOS? (For example how do I "Bind texture 1, then draw it"? What does it mean to "Attach texture 1"?)
I've also looked at Apple's documentation regarding this technique but it isn't very clear about the steps or code for the actual rendering part of the process.
How would I go about doing this? (hopefully with code examples of each step because I haven't understood spotty instructions that expect me to just know what is needed for each step)
Here is an example of what I want to do with this. The spheres would be rendered into a "layer" or Texture2D, which I would then pass through the shader and render on top of an already partially rendered scene. I don't know exactly what kind of OpenGL code could do that.
You're looking in the wrong place. To use OpenGL, you need to study OpenGL, not anything else. Apple doesn't provide any OpenGL documentation because it's an open standard; the specs are freely published, and Apple assumes you're already familiar with them.
OpenGL ES 2.0 spec
manual pages
I think you are having trouble because you don't have an understanding of GL-specific terms. The spec describes them very well and clearly, so please read the spec; it will save you a LOT of time. Otherwise the trouble will keep coming back.
I'd also like to recommend a site that has a very nice conceptual description of the OpenGL pipeline:
http://www.songho.ca/opengl/
This site targets desktop GL, so some of the API may differ a little; please focus on the conceptual understanding (the site includes a helpful illustration of the pipeline, for example).
For more tutorials, search with suitable keywords like "OpenGL ES 2.0 tutorial" (or how-to). Here's an example link that should be helpful, and there are many more tutorials. If the spec is too dry, it's also fine to have some fun with the tutorials.
Update
I'd like to say one more thing. IMO, OpenGL is all about drawing triangles. Everything is ultimately converted into triangles in 3D space to represent some shape; everything else exists only for optimization. In most cases GL chooses batch processing as its major optimization strategy, because the overhead of each draw call is unaffordable for most games.
It's hard to get started with OpenGL ES because it's an optimized version of desktop GL, so all the convenient or easy drawing features are stripped out. The same is true even of recent versions of desktop GL.
So there's no drawOneTriangle function. Instead, GL has something like the following (a minimal GL ES 2.0 sketch of this flow comes after the list):
make a buffer,
put a list of many triangles there,
select the buffer for the next drawing,
draw all triangles in the current buffer at once,
delete the buffer.
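A minimal sketch of that flow in GL ES 2.0, assuming a compiled and linked program with an a_position attribute already exists (the names here are illustrative):
// One triangle, but the same path handles thousands of triangles in the same buffer.
const GLfloat verts[] = {  0.0f,  0.5f,
                          -0.5f, -0.5f,
                           0.5f, -0.5f };

GLuint vbo = 0;
glGenBuffers(1, &vbo);                                               // make a buffer
glBindBuffer(GL_ARRAY_BUFFER, vbo);                                  // select it for drawing
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW); // put the triangle list there

GLint pos = glGetAttribLocation(program, "a_position");              // "program" is assumed to exist
glEnableVertexAttribArray(pos);
glVertexAttribPointer(pos, 2, GL_FLOAT, GL_FALSE, 0, 0);

glDrawArrays(GL_TRIANGLES, 0, 3);                                    // draw everything in the buffer

glDeleteBuffers(1, &vbo);                                            // delete the buffer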
By using buffers, you don't need to dispatch duplicated data from the CPU to the GPU, and GL uses this approach everywhere. For example, there is no drawOneTriangleWithTexture function for using textures either. Instead, you have to do the following (again, a sketch comes after the list):
make a buffer (a texture object),
put a list of many pixels there (the bitmap),
select the buffer for the next drawing,
draw all triangles with the texture pixel data in the current buffers,
delete the buffer.
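And a matching sketch for the texture case (again GL ES 2.0; width, height, pixels and the vertex setup are assumed to exist as in the previous sketch):
GLuint tex = 0;
glGenTextures(1, &tex);                                              // make a texture object
glBindTexture(GL_TEXTURE_2D, tex);                                   // select it
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);                     // put the bitmap there

glDrawArrays(GL_TRIANGLES, 0, vertexCount);                          // draw triangles with the bound texture

glDeleteTextures(1, &tex);                                           // delete it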
All of the seemingly over-complex stuff in GL exists for optimization. It may look weird at first, but there are usually very good reasons for the design.
Update 2
Now I think you're looking for the render-to-texture feature (well, actually you already mentioned this…).
You can use a rendered image as a texture source. To do this,
you need to attach a texture object to the framebuffer rather than a renderbuffer object, using a function like glFramebufferTexture (glFramebufferTexture2D on ES 2.0).
Once you have rendered to the texture, switch the framebuffer to the final buffer, bind the texture you drew (along with any others), and perform the final drawing. You need two framebuffers: one for render-to-texture and one for the final output.
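Here is a hedged GL ES 2.0 sketch of that two-pass flow; layerTexture is a texture created as above, defaultFBO is whatever framebuffer the view normally renders into, and the actual draw calls are elided:
// Pass 1: render the "layer" into a texture through an offscreen framebuffer.
GLuint fbo = 0;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, layerTexture, 0);              // "attach texture 1"
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the incomplete-framebuffer error
}
glViewport(0, 0, texWidth, texHeight);
glClear(GL_COLOR_BUFFER_BIT);
// ... draw the spheres into this framebuffer ...

// Pass 2: switch to the final framebuffer and draw a quad that samples the texture.
glBindFramebuffer(GL_FRAMEBUFFER, defaultFBO);
glViewport(0, 0, screenWidth, screenHeight);
glBindTexture(GL_TEXTURE_2D, layerTexture);                          // "bind texture 1, then draw it"
// ... draw a full-screen (or layer-sized) quad with the blending shader ...

glDeleteFramebuffers(1, &fbo);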

is it worth it to use hlsl shaders for 2D drawing

I was wondering if it is worth it to use shaders to draw a 2D texture in XNA. I am asking because with OpenGL it is much faster if you use GLSL.
Everything on a modern GPU is drawn using a shader.
For the old immediate-style rendering (i.e. glBegin/glVertex), that gets converted to something approximating buffers and shaders somewhere in the driver. This is why using GLSL is "faster": because it's closer to the metal, you're not going through a conversion layer.
For a modern API, like XNA, everything is already built around "buffers and shaders".
In XNA, SpriteBatch provides its own shader. The source code for the shader is available here. The shader itself is very simple: The vertex shader is a single matrix multiplication to transform the vertices to the correct raster locations. The pixel shader simply samples from your sprite's texture.
You can't really do much to make SpriteBatch's shader faster - there's almost nothing to it. There are some things you can do to make the buffering behaviour faster in specific circumstances (for example: if your sprites don't change between frames) - but this is kind of advanced. If you're experiencing performance issues with SpriteBatch, be sure you're using it properly in the first place. For what it does, SpriteBatch is already extremely well optimised.
For more info on optimisation, see this answer.
If you want to pass a custom shader into SpriteBatch (eg: for a special effect) use this overload of Begin and pass in an appropriate Effect.

GLKit's GLKBaseEffect and custom shaders

I've been researching this problem I have and I can't seem to understand it well enough to solve it so I thought I might as well throw it out there and the intelligent bunch might have some ideas. :P
Basically I have been working on an iPhone project for a while where I have the luxury of using all the newest frameworks and targeting iOS 5.1. So I've been using GLKit and GLKBaseEffect, which have been working just fine for me. The reason I started out with GLKBaseEffect rather than writing my own shaders is that I don't know GLSL well. However, the requirements have become more precise and the base effect just doesn't seem to cut it any longer.
Since I am already doing all my transforms using the base effect, I would prefer to keep my base effect intact but add GLSL-style shaders on top, if that makes any sense.
My old approach looks something like this (this is inside a loop rendering all objects, where an object contains things such as transforms, a mesh and some other details less important for this problem, such as textures, materials and so on):
ObjectBase *obj = [ResourceManager.shared getObjectNamed:name inScene:sceneName];
// Save the current modelview matrix so it can be restored after rendering this object.
GLKMatrix4 modelView = effect.transform.modelviewMatrix;
effect.transform.modelviewMatrix = GLKMatrix4Multiply(effect.transform.modelviewMatrix, obj.transform);
[effect prepareToDraw];
[obj render];
// Restore the saved matrix for the next object in the loop.
effect.transform.modelviewMatrix = modelView;
Here we fetch an object to render, transform it (i.e. translate, rotate and scale it), then render it; the render call itself fetches the object's mesh, binds the buffers and draws it.
So far so good.
What I would like, however, is that during the [obj render]; call the object also does something like glUseProgram(someProgram); to add more specialized shader code.
I guess one could argue that I am trying to use the base effect for my vertex shaders and want to use "normal" shaders for my fragment shaders. At least that's what I think I want to do.
I have been trying some things.
I tried to create just the fragment shader and call glUseProgram on it; however, it said that I need both a vertex and a fragment shader when setting up and compiling the program. I've also tried to create an empty vertex shader, which didn't turn out very well; I don't know exactly what happens in that case, but I am guessing it overrules the base effect.
I am leaning toward accepting, in the end, that it's probably best to throw out the base effect and just write my own shaders all the way. It just feels like a lot of work out the window, so I wanted to see how much of it I can save.
I do understand that my understanding of shaders is the part that gives me the most problems, so please be patient with that fact.
I just wanted to give my conclusions for anyone interested in them.
What I've done is actually thrown out the GLKBaseEffect altogether and implemented my own shader code.
My biggest problem was that I didn't really understand that it's all or nothing, so to speak.
I might be wrong, so any corrections to where I am wrong will be greatly appreciated; I really don't want to mislead anyone reading this.
What I found out during my endeavors is a couple of key-points:
GLKBaseEffect is meant to mimic the fixed-function pipeline as seen in earlier versions of OpenGL ES. Hence it wraps the common shader code so you don't really have to care too much about it. You get basic functionality, but it's not very extensible.
You can still use the neat features of GLKit, such as the texture loader, the math library and so on, if you write your own shader code. So if you want something more complicated or customizable (bump mapping, toon shading and so on), it is totally worth rewriting the boilerplate code needed to render properly. What I did at first was use the GLKBaseEffect to orient myself in the scene, since it's quite comfortable and easy to use. However, when I wanted to do more (tangent-space normal mapping) I got stuck, since I couldn't add to the shader program handled by the GLKBaseEffect.
Shaders are really not as scary as I always thought! I just had no idea what they really meant, and I'm surprised that I had read so much about them and still hadn't understood that shaders are basically programs REPLACING the fixed-function pipeline. Simple as that.
That's enough rant I guess, just wanted to follow up and add what bits and pieces I've collected this far.
Just as you discovered, you can't use only a fragment shader and leave out the vertex shader, because the two have different tasks. Vertex shaders deal with the per-vertex work: transforming the vertex data, texture coordinates (UVs) and so on, which ultimately defines the faces (triangles) to be drawn. Fragment shaders deal with what exactly will be drawn at each pixel on the screen (or in the viewport). When you provide only a fragment shader, you are not saying what your vertex data is; you are only telling OpenGL to do something with the pixels, and those pixels hold nothing/gibberish (I am not sure which) since your vertex shader did not do anything.
When using GLKBaseEffect, a call to the [yourEffect prepareToDraw] method takes care of the shaders and so on.
If you just wish to use a stock shader pair, why not use the one provided in the Xcode OpenGL game template? When you run it, it shows two cubes, one rendered using GLKit and the other the normal way, though I think it will not be enough for most effects. In case you wish to know more about shaders, you can have a look at the NeHe GLSL introduction article. It is about GLSL and how you can write and use shaders in your code. You might also want to look at Diney Bomfim's All About Shaders articles and this page.
Using GLKit is nice in most cases, since it saves you from writing lots of useless, repetitive code. For example, you do not have to deal with so many image formats, with their different colour encodings and bits per pixel, when you can just use GLKTextureLoader.
