If I want to clear an entire depth/stencil view in Direct3D 11, I can easily call ID3D11DeviceContext::ClearDepthStencilView.
Direct3D 11.1 adds support for clearing rectangular portions of render target views using ID3D11DeviceContext1::ClearView.
But I see no way to clear only a portion of a depth/stencil view in Direct3D 11, short of rendering a quad over the desired area. This seems like an odd regression from Direct3D 9, where this was trivially easy. Am I missing something, or is this really not supported?
There is no function that can clear only a part of a depth/stencil view.
This is my way to solve the problem (a rough Direct3D 11 sketch follows the steps):
Make a texture, and set the alpha of the part you want to clear to 1 and the rest to 0.
Enable alpha testing so that only pixels whose alpha is 1 are drawn.
Enable alpha blending, with the blend op set to Add, the source blend factor set to 0 and the destination blend factor set to 1, so the color buffer is left unchanged.
Set the stencil and depth tests to Always, and set the stencil reference to the value you want to clear to.
Use an orthographic projection matrix.
Draw a rectangle that just covers the screen (the z coordinate divided by (ZFar - ZNear) becomes the written depth), with the texture applied to it.
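A minimal Direct3D 11 sketch of the same idea; it swaps the alpha-tested mask texture for a scissor rectangle and the 0/1 blend factors for a disabled color write mask, and the state object names, the rectangle bounds and the drawFullScreenQuadAt() helper are placeholders:

#include <d3d11.h>

// device/context are the usual ID3D11Device* / ID3D11DeviceContext*, assumed to exist.
// Depth and stencil always pass and are always written, so every pixel covered by the
// quad (and inside the scissor rect) takes the quad's depth and the chosen stencil value.
D3D11_DEPTH_STENCIL_DESC dsDesc = {};
dsDesc.DepthEnable      = TRUE;
dsDesc.DepthWriteMask   = D3D11_DEPTH_WRITE_MASK_ALL;
dsDesc.DepthFunc        = D3D11_COMPARISON_ALWAYS;
dsDesc.StencilEnable    = TRUE;
dsDesc.StencilReadMask  = 0xFF;
dsDesc.StencilWriteMask = 0xFF;
dsDesc.FrontFace.StencilFunc        = D3D11_COMPARISON_ALWAYS;
dsDesc.FrontFace.StencilPassOp      = D3D11_STENCIL_OP_REPLACE;  // writes the stencil "clear" value
dsDesc.FrontFace.StencilFailOp      = D3D11_STENCIL_OP_KEEP;
dsDesc.FrontFace.StencilDepthFailOp = D3D11_STENCIL_OP_KEEP;
dsDesc.BackFace = dsDesc.FrontFace;
ID3D11DepthStencilState* clearRegionDSS = nullptr;
device->CreateDepthStencilState(&dsDesc, &clearRegionDSS);

// Leave the color buffer untouched by masking off all color writes.
D3D11_BLEND_DESC blendDesc = {};
blendDesc.RenderTarget[0].BlendEnable           = FALSE;
blendDesc.RenderTarget[0].RenderTargetWriteMask = 0;
ID3D11BlendState* noColorWriteBS = nullptr;
device->CreateBlendState(&blendDesc, &noColorWriteBS);

// At "clear" time: restrict the draw to the region of interest (the rasterizer state
// must have ScissorEnable = TRUE) and draw a quad at the depth you want to clear to.
D3D11_RECT region = { left, top, right, bottom };                    // placeholder rectangle
context->RSSetScissorRects(1, &region);
context->OMSetDepthStencilState(clearRegionDSS, clearStencilValue);  // clearStencilValue: placeholder
context->OMSetBlendState(noColorWriteBS, nullptr, 0xFFFFFFFF);
drawFullScreenQuadAt(clearDepth);                                    // placeholder helper: quad at z = clearDepth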
There is an excellent reason for removing partial clears from the API. First, it is always possible to emulate them by drawing quads with the proper render states, and second, all GPUs have dedicated fast-clear and resolve hardware; using it as intended greatly improves rendering performance.
With the DX11 clear API it is possible to take advantage of the fast clear and the GPU optimisations that follow from it. A depth buffer fast clear also prepares the buffer for early depth testing (prior to pixel shading, because yes, the real depth test is post pixel shading) and for bandwidth optimisations on depth buffer accesses while rendering. If you clear with a quad, you lose all of that and the draw cost rises.
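For reference, the fast path being described is simply the ordinary full-view clear, here assuming dsv is your ID3D11DepthStencilView:

// Full-view clear: this is what lets the driver use fast-clear hardware and set the
// buffer up for early-Z and depth-bandwidth optimisations on subsequent draws.
context->ClearDepthStencilView(dsv, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);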
Related
I'm developing a WebGL application where I draw a detailed building on top of mapbox-gl-js.
Everything goes fine except for one detail: I don't know how to acquire the depth buffer of every drawn frame.
In some cases my overlay is drawn over buildings extruded by the mapbox-gl-js style, but it should be behind them.
I see only one way to do this correctly: acquire the depth buffer from mapbox-gl-js, pass it into my shader as a texture, and compare it with my actual depth buffer values,
as in a deferred rendering technique.
Is there any way to do that?
You may be better off using a Custom Layer: custom layers are rendered with the map's own WebGL context, so your geometry can be depth-tested against the buildings mapbox-gl-js draws instead of being composited on top.
I've tried to make an "overlay" effect in a 3D scene. After drawing stuff to the buffer, I tried to draw a full-screen quad with blending enabled and the depth test disabled. On some Android devices this seems to have caused a slowdown.
I found this link:
The particularly slow point is the point where the drawing of a pixel needs to check what the color behind it was.
So instead of drawing a single full-screen quad, I divided it up into tiles and rendered them with multiple draw calls, which seems to have produced some gain.
What may be happening here, and how can this be profiled with WebGL? That is, how does one arrive at the conclusion in the quote above?
I guess that to profile it, you simply have to test with several blend functions, with and without blending enabled, and so on.
Blending is not a trivial operation, and we can indeed assume that blend functions which need to read the pixel already in the buffer can cause a performance loss, like every "read" operation in OpenGL, because they can stall the pipeline. I guess most modern desktop GPUs have dedicated hardware to optimise this, but on mobile phones it may be more of a problem.
Anyway, if you are going to draw a full-screen quad, why not render it using two source textures that you blend directly in the fragment shader with a custom equation? That way you don't need fixed-function blending at all, and you avoid any back-buffer read problem.
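A rough sketch of that approach, assuming ES 2.0 and that the scene and the overlay layer have each already been rendered to a texture; the uniform and helper names below are made up:

// Hypothetical ES 2.0 fragment shader that combines two input textures with a custom
// equation in a single pass, so no fixed-function blending (and no framebuffer read)
// is needed. Compile/link it with the usual glShaderSource / glLinkProgram path.
const char* kCompositeFragmentSrc = R"(
    precision mediump float;
    uniform sampler2D u_scene;     // what was previously drawn to the back buffer
    uniform sampler2D u_overlay;   // the "overlay" layer, rendered to its own texture
    varying vec2 v_texCoord;

    void main() {
        vec4 scene   = texture2D(u_scene,   v_texCoord);
        vec4 overlay = texture2D(u_overlay, v_texCoord);
        // Any custom equation can go here; plain "source over" as an example:
        gl_FragColor = vec4(mix(scene.rgb, overlay.rgb, overlay.a), 1.0);
    }
)";

// At draw time, blending stays off and the shader does all the combining:
//   glDisable(GL_BLEND);
//   glUseProgram(compositeProgram);      // built from the source above (placeholder name)
//   glUniform1i(uSceneLoc, 0);           // texture unit 0: scene texture
//   glUniform1i(uOverlayLoc, 1);         // texture unit 1: overlay texture
//   drawFullScreenQuad();                // placeholder helper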
I'm trying to implement a particle system (using OpenGL ES 2.0) where each particle is made up of a quad with a simple texture;
the red pixels are transparent. Each particle will have a random alpha value from 50% to 100%.
Now the tricky part: I'd like each particle to have a blend mode much like Photoshop's "overlay". I tried many different combinations with glBlendFunc(), but without luck.
I don't understand how I could implement this in a fragment shader, since I need information about the current color of the fragment, so that I can calculate a new color based on the current color and the texture color.
I also thought about using a framebuffer object, but I guess I would need to re-render the framebuffer object into a texture for each particle, every frame, since I need the calculated fragment color where particles overlap each other.
I've found the math and other information regarding the overlay calculation, but I'm having a hard time figuring out which direction to go to implement this.
http://www.pegtop.net/delphi/articles/blendmodes/
Photoshop blending mode to OpenGL ES without shaders
I'm hoping to get an effect like this:
You can get information about the current fragment color in the framebuffer on an iOS device. Programmable blending has been available through the EXT_shader_framebuffer_fetch extension since iOS 6.0 (on every device supported by that release). Just declare that extension in your fragment shader (by putting the directive #extension GL_EXT_shader_framebuffer_fetch : require at the top) and you'll get current fragment data in gl_LastFragData[0].
And then, yes, you can use that in the fragment shader to implement any blending mode you like, including all the Photoshop-style ones. Here's an example of a Difference blend:
// compute srcColor earlier in shader or get from varying
gl_FragColor = abs(srcColor - gl_LastFragData[0]);
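Since the question asks about Overlay specifically, here is a hedged, untested sketch of that formula using the same extension, written out as an ES 2.0 fragment shader source string; the texture and varying names are made up:

// Hypothetical fragment shader source for a Photoshop-style Overlay blend via
// EXT_shader_framebuffer_fetch (compile it with the usual glShaderSource path).
const char* kOverlayFragmentSrc = R"(
    #extension GL_EXT_shader_framebuffer_fetch : require
    precision mediump float;
    uniform sampler2D u_particleTexture;   // assumed particle texture
    varying vec2 v_texCoord;               // assumed varying from the vertex shader

    void main() {
        vec4 src = texture2D(u_particleTexture, v_texCoord);  // foreground (particle)
        vec4 dst = gl_LastFragData[0];                         // what is already in the framebuffer
        // Overlay: 2ab where the background is dark, 1 - 2(1-a)(1-b) where it is bright.
        vec3 lo = 2.0 * dst.rgb * src.rgb;
        vec3 hi = 1.0 - 2.0 * (1.0 - dst.rgb) * (1.0 - src.rgb);
        vec3 overlay = mix(lo, hi, step(0.5, dst.rgb));
        // Fade by the particle's alpha so 50%-100% alpha particles blend softly.
        gl_FragColor = vec4(mix(dst.rgb, overlay, src.a), dst.a);
    }
)";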
You can also use this extension for effects that don't blend two colors. For example, you can convert an entire scene to grayscale -- render it normally, then draw a quad with a shader that reads the last fragment data and processes it:
mediump float luminance = dot(gl_LastFragData[0], vec4(0.30,0.59,0.11,0.0));
gl_FragColor = vec4(luminance, luminance, luminance, 1.0);
You can do all sorts of blending modes in GLSL without framebuffer fetch, but that requires rendering to multiple textures, then drawing a quad with a shader that blends the textures. Compared to framebuffer fetch, that's an extra draw call and a lot of schlepping pixels back and forth between shared and tile memory -- this method is a lot faster.
On top of that, there's no saying that framebuffer data has to be color... if you're using multiple render targets in OpenGL ES 3.0, you can read data from one and use it to compute data that you write to another. (Note that the extension works differently in GLSL 3.0, though. The above examples are GLSL 1.0, which you can still use in an ES3 context. See the spec for how to use framebuffer fetch in a #version 300 es shader.)
I suspect you want this configuration:
Source: GL_SRC_ALPHA
Destination: GL_ONE
Equation: GL_ADD
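In ES 2.0 calls, that configuration would look roughly like this:

// Additive, alpha-weighted blending: result = src.rgb * src.a + dst.rgb
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);        // ES 2.0 spells the "add" equation GL_FUNC_ADD
glBlendFunc(GL_SRC_ALPHA, GL_ONE);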
If not, it might be helpful if you could explain the math of the filter you're hoping to get.
[EDIT: the answer below is true for OpenGL and OpenGL ES pretty much everywhere except iOS since 6.0. See rickster's answer for information about EXT_shader_framebuffer_fetch which, in ES 3.0 terms, allows a target buffer to be flagged as inout, and introduces a corresponding built-in variable under ES 2.0. iOS 6.0 is over a year old at the time of writing so there's no particular excuse for my ignorance; I've decided not to delete the answer because it's potentially valid to those finding this question based on its opengl-es, opengl-es-2.0 and shader tags.]
To confirm briefly:
the OpenGL blend modes are implemented in hardware and occur after the fragment shader has concluded;
you can't programmatically specify a blend mode;
you're right that the only workaround is to ping pong, swapping the target buffer and a source texture for each piece of geometry (so you draw from the first to the second, then back from the second to the first, etc).
Per Wikipedia and the link you provided, Photoshop's overlay mode is defined so that the output pixel from a background value a and a foreground value b, f(a, b), is 2ab if a < 0.5 and 1 - 2(1 - a)(1 - b) otherwise.
So the blend mode changes per pixel depending on the colour already in the colour buffer. And each successive draw's decision depends on the state the colour buffer was left in by the previous.
So there's no way you can avoid writing that as a ping pong.
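For what it's worth, a rough sketch of that ping-pong loop in ES 2.0 terms; every name below (the FBO/texture pairs and the helper functions) is a placeholder, and note that each pass first has to copy the accumulated image across before the new quad is drawn, which is exactly why it gets expensive:

// Hypothetical ping-pong compositing loop. fbo[i] is assumed to have tex[i] as its
// color attachment, created elsewhere with glGenFramebuffers / glFramebufferTexture2D.
GLuint fbo[2], tex[2];
int readIdx = 0, writeIdx = 1;

for (int i = 0; i < particleCount; ++i) {             // particleCount: assumed
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[writeIdx]);

    // 1. Copy what has been accumulated so far into the write target, otherwise
    //    everything outside the next particle's quad would be lost.
    drawFullScreenCopy(tex[readIdx]);                  // placeholder: textured quad, pass-through shader

    // 2. Draw the particle quad with the overlay shader, sampling the accumulated
    //    image as the "background" input.
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, tex[readIdx]);
    drawParticleWithOverlayShader(i);                  // placeholder: samples texture unit 1 as background

    // 3. Swap roles for the next particle.
    int tmp = readIdx; readIdx = writeIdx; writeIdx = tmp;
}
// tex[readIdx] now holds the final composite; draw it to the default framebuffer.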
The closest you're going to get without all that expensive buffer swapping is probably, as Sorin suggests, to try to produce something similar using purely additive blending. You could juice that a little by adding a final ping-pong stage that converts all values from their linear scale to the S-curve that you'd see if you overlaid the same colour onto itself. That should give you the big variation where multiple circles overlap.
Is it possible to optimise OpenGL ES 2.0 drawing by using dirty rectangles?
In my case, I have a 2D app that needs to draw a background texture (full screen on iPad), followed by the contents of several VBOs on each frame. The problem is that these VBOs can potentially contain millions of vertices, taking anywhere up to a couple of seconds to draw everything to the display. However, only a small fraction of the display would actually be updated each frame.
Is this optimisation possible, and how (or perhaps more appropriately, where) would this be implemented? Would some kind of clipping plane need to be passed into the vertex shader?
If you set an area with glViewport, clipping is adjusted accordingly. This, however, happens after the vertex shader stage, just before rasterization. Since the GL cannot know the result of your vertex program, it cannot discard any vertex before running the vertex program; only afterwards can it clip. How efficiently it does so depends on the actual GPU.
Thus, for full performance, you have to sort and split your objects into smaller (e.g. rectangularly bounded) tiles and test them against the field of view yourself.
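As a sketch of that idea, assuming the dirty rectangle and per-batch screen-space bounds are maintained on the CPU side; the struct and helper names are invented:

// Hypothetical dirty-rectangle pass: the scissor test limits fragment work to the
// changed region, and CPU-side bounds tests skip whole VBO batches entirely.
struct Rect { int x, y, w, h; };

bool intersects(const Rect& a, const Rect& b) {
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

void drawDirtyRegion(const Rect& dirty) {
    glEnable(GL_SCISSOR_TEST);
    glScissor(dirty.x, dirty.y, dirty.w, dirty.h);    // window coordinates
    drawBackgroundTexture();                          // placeholder; fragments outside the rect are discarded
    for (int i = 0; i < batchCount; ++i) {            // batchCount / batchBounds: assumed app data
        if (!intersects(batchBounds[i], dirty))
            continue;                                 // this batch's vertices never reach the GPU
        drawBatch(i);                                 // placeholder: binds the VBO and issues the draw
    }
    glDisable(GL_SCISSOR_TEST);
}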
I realise that Direct3D doesn't properly support line thickness, and in fact on most graphics hardware, lines are actually just collapsed rectangles.
At least I thought that was the case, until I tried to actually implement line thickness by rendering rectangles instead of lines, and found that they lost detail and eventually became invisible as I zoomed out; whereas line primitive types seem to be guaranteed to always be one pixel wide regardless of scale.
I'm creating an AutoCAD viewer, in which lines are a fairly staple entity; they need to support a thickness, but regardless of zoom level they must always be at least one pixel wide.
Can anyone suggest a strategy for achieving this; ideally a rendering settings adjustment as opposed to working out if it should render lines instead of rectangles?
[Edit] Should have mentioned; it's Direct3D 9 in .Net via SlimDX.
The simplest approach I can think of would be to render the lines as simple quads in 2D, and have the pixel shader write an oDepth value containing the correct 3d perspective depth.
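Something along those lines, sketched here as D3D9-era HLSL source in a string; the input semantics, and how the true line depth reaches the pixel shader from your vertex shader, are assumptions:

// Hypothetical pixel shader for the 2D quad: it outputs both the line color and a
// depth value, so the screen-space quad still occludes and is occluded correctly.
const char* kLinePixelShader = R"(
    struct PSIn {
        float4 color     : COLOR0;
        float  lineDepth : TEXCOORD0;   // true perspective depth, passed from the vertex shader
    };

    float4 main(PSIn input, out float oDepth : DEPTH) : COLOR
    {
        oDepth = input.lineDepth;       // replace the quad's own depth with the line's 3D depth
        return input.color;
    }
)";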