DX11 Constant buffer persistence

Using DirectX 11 and Effects 11, I'm trying to understand how to efficiently draw two objects that use different shaders. First I set all the states and set up the constant buffers once and for all. While iterating through all of the first object's meshes, all of the previously set constant buffers stay available, which is fine.
Then I apply another pass (Pass.Apply() from Effects 11) to draw the second object, and at that point all of my constant buffers appear to be destroyed.
So now I'm wondering whether constant buffers can be set up once at app startup and then used/shared at any time, across any shader, or whether they belong to the active shader only.
Thanks!

If I remember correctly, if you execute a different effect then you will have to re-associate the constant buffer with the stage (this can also depend on the driver). The only time you should expect to reuse the same constant buffer binding is when you are not changing which shaders are bound.
To be safe: a different Pass basically binds a new set of shaders (if they differ). Best practice is to bind your resource (the buffer) each time you apply a different effect pass.
I have personally moved away from Effects since it is deprecated, and I've found that explicitly understanding what I am binding to the pipeline has improved my understanding of how constant buffers are used.
The buffer shouldn't be destroyed; it should just be unbound on the second call. Otherwise you have something more nefarious going on.
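In the explicit (non-Effects) API, the pattern looks roughly like the sketch below. This is only a sketch under assumptions: the shader objects (vsSecondObject, psSecondObject), the PerFrame struct and the slot number are placeholders, and error handling is omitted.

#include <d3d11.h>

// Per-frame data shared by every shader; constant buffer sizes must be multiples of 16 bytes.
struct PerFrame { float viewProj[16]; };

// Created once at startup and kept for the lifetime of the app.
ID3D11Buffer* perFrameCB = nullptr;

void CreatePerFrameBuffer(ID3D11Device* device)
{
    D3D11_BUFFER_DESC desc = {};
    desc.ByteWidth      = sizeof(PerFrame);
    desc.Usage          = D3D11_USAGE_DYNAMIC;
    desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    device->CreateBuffer(&desc, nullptr, &perFrameCB);
}

// Whenever a different shader set is bound (which is what applying a new pass does under
// the hood), re-bind the same buffer object; the buffer itself is never recreated.
void BindSecondObjectShaders(ID3D11DeviceContext* context,
                             ID3D11VertexShader* vsSecondObject,
                             ID3D11PixelShader* psSecondObject)
{
    context->VSSetShader(vsSecondObject, nullptr, 0);
    context->PSSetShader(psSecondObject, nullptr, 0);
    context->VSSetConstantBuffers(0, 1, &perFrameCB);  // slot 0 = register(b0) in HLSL
    context->PSSetConstantBuffers(0, 1, &perFrameCB);
    // ... set vertex/index buffers and issue DrawIndexed for the second object's meshes ...
}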

Related

Optimizing WebGL (draw) call overhead

I'm trying to use the stencil buffer to render cross-sections of 3d models with WebGL. I am also using a library - three.js, which gives me a scene graph and various other abstractions.
Three.js exposes callbacks that fire before and after gl.drawElements, which I used to make the stencil calls. If I leave the render order to be managed by three, I end up with a sequence of calls that looks pretty redundant:
gl.disable(gl.STENCIL_TEST) // disable followed immediately by enable
gl.enable(gl.STENCIL_TEST)
gl.stencilFunc(...) // same call repeated for multiple draw calls
gl.stencilOp(...)
It looks like it would require some acrobatics to batch these with three.js, and I'm wondering if it's even worth it. I keep reading about the overhead of WebGL (draw?) calls, but I don't really understand their weight. It's pretty obvious what happens when a bunch of geometry is merged and drawn with a single call, but I don't understand what happens with other calls.
I'm not even entirely sure how to test it: would it be enough to toggle the stencil on/off multiple times between these draw calls until there is a frame drop?
I would like to enable the stencil only once before issuing multiple draw calls, and disable it after the final one. I would like to change the stencilOp and stencilFunc somewhere inside of that group of draw calls, but I'm not sure how much there is to be gained from this.
To be clear, I'm not asking how to do this with three.js.
There are a few relatively straightforward ways of doing it with three: geometries that need to be batched can be put in their own scene, with the stencil state set before and after rendering it, or everything can be sorted manually so that the stencil calls are made only before the first draw.
My question is whether, and why, this queue of commands should look any different. What is the difference between having 5 and 255 calls to gl.enable(gl.STENCIL_TEST) and gl.stencilOp()? Is that something that can be ignored, or not?
Edit:
I've reduced the number of calls and achieved the desired effect when rendering opaque objects. However, transparency has now become a bit more involved. I am trying to understand what the difference between the 4k and 5k "calls" in Screenshot #2 means. Is it something that I should be concerned with at all, or only selectively?

Getting or Simulating GL_PACK_ROW_LENGTH and GL_UNPACK_ROW_LENGTH in iOS

It seems that using glPixelStorei with GL_UNPACK_ROW_LENGTH and GL_PACK_ROW_LENGTH is not supported on iOS.
Is it possible to somehow simulate them to get the same effect for memory stride when using glTexImage2D and glReadPixels respectively (without an extra prior copy to aligned memory)?
Not a pretty solution, but you could read/write the data row by row.
For the glTexImage2D() case, you would call glTexImage2D() once with the full size, and with NULL for the last argument. Then use a separate glTexSubImage2D() call for each row.
Same idea for glReadPixels(): use one call per row, and advance the pointer you pass in by the desired row length each time.
The downside is of course that you need a lot more API calls. You would have to benchmark if the performance is better or worse than what you get with an extra copy.
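A minimal sketch of that row-by-row approach, assuming an RGBA8 image whose rows are rowLengthPixels apart in memory while only width pixels of each row are used (all of these names are placeholders):

// Upload: allocate the texture once, then copy one row at a time.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);               // storage only, no data yet
const GLubyte* src = srcPixels;
for (GLint y = 0; y < height; ++y) {
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, y, width, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, src);          // one row
    src += rowLengthPixels * 4;                               // advance by the real stride
}

// Readback: same idea, one glReadPixels() per row into the strided destination.
GLubyte* dst = dstPixels;
for (GLint y = 0; y < height; ++y) {
    glReadPixels(0, y, width, 1, GL_RGBA, GL_UNSIGNED_BYTE, dst);
    dst += rowLengthPixels * 4;
}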
Particularly for the glTexImage2D() case, another option might be to make the texture the size that matches the row length of your input data, and then only sample the part of the texture you want to use. You can do this by adjusting the range of texture coordinates during sampling.
Other than that, I can't think of a great way to do this in ES 2.0. Of course, if you can constrain this to devices that support ES 3.0, moving to ES 3.0 is the obvious solution.
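For the oversized-texture option mentioned above, the only moving parts are the upload size and the texture coordinates; a rough sketch, where paddedWidth, usedWidth and srcPixels are placeholders:

// Upload the whole padded image in one call; no per-row copies needed.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, paddedWidth, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, srcPixels);
// Sample only the used part by scaling the U coordinate of the quad.
GLfloat uMax = (GLfloat)usedWidth / (GLfloat)paddedWidth;
GLfloat texCoords[] = { 0.0f, 0.0f,   uMax, 0.0f,   uMax, 1.0f,   0.0f, 1.0f };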

iOS OpenGL ES - Only draw on request

I'm using OpenGL ES to write a custom UI framework on iOS. The use case is an application, as in something that won't be updating on a per-frame basis (unlike a game). From what I can see so far, the default behavior of GLKViewController is to redraw the screen at a rate of about 30fps. It's typical for UI to redraw itself only when necessary to reduce resource usage, and I'd like not to drain extra battery power by using the GPU while the user isn't doing anything.
I tried only clearing and drawing the screen once as a test, and got a warning from the profiler saying that an uninitialized color buffer was being displayed.
Looking into it, I found this documentation: http://developer.apple.com/library/ios/#DOCUMENTATION/iPhone/Reference/EAGLDrawable_Ref/EAGLDrawable/EAGLDrawable.html
The documentation states that there is a flag, kEAGLDrawablePropertyRetainedBacking, which when set to YES allows the backbuffer to retain what was drawn to it in the previous frame. However, it also states that this isn't recommended and can cause performance and memory issues, which is exactly what I'm trying to avoid in the first place.
I plan to try both ways, drawing every frame and not, but I'm curious if anyone has encountered this situation. What would you recommend? Is redrawing everything 30 times per second not as big a deal as I assume it is?
In this case, you shouldn't use GLKViewController, as its very purpose is to provide a simple animation timer on the main loop. Instead, your view can be owned by any other subclass of UIViewController (including one of your own creation), and you can rely on the usual setNeedsDisplay/drawRect system used by all other UIKit views.
It's not the backbuffer that retains the image, but a separate buffer, possibly one created specifically for your view.
You can always set paused on the GLKViewController to pause the rendering loop.

Is iOS glGenerateMipmap synchronous, or is it possibly asynchronous?

I'm developing an iPad app that uses large textures in OpenGL ES. When the scene first loads I get a large black artifact on the ceiling for a few frames, as seen in the picture below. It's as if higher levels of the mipmap have not yet been filled in. On subsequent frames, the ceiling displays correctly.
This problem only began showing up when I started using mipmapping. One possible explanation is that the glGenerateMipmap() call does its work asynchronously, spawning some mipmap creation worker (in a separate process, or perhaps in the GPU) and returning.
Is this possible, or am I barking up the wrong tree?
Within a single context, all operations will appear to execute strictly in order. However, in your most recent reply, you mentioned using a second thread. To do that, you must have created a second shared context: it is always illegal to re-enter an OpenGL context. If already using a shared context, there are still some synchronization rules you must follow, documented at http://developer.apple.com/library/ios/ipad/#DOCUMENTATION/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithOpenGLESContexts/WorkingwithOpenGLESContexts.html
It should be synchronous; OpenGL does not in itself have any real concept of threading (excepting the implicit asynchronous dialogue between CPU and GPU).
A good way to diagnose would be to switch to GL_LINEAR_MIPMAP_LINEAR. If it's genuinely a problem with lower resolution mip maps not arriving until later then you'll see the troublesome areas on the ceiling blend into one another rather than the current black-or-correct effect.
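For reference, that diagnostic switch is just the minification filter on the texture in question, e.g.:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR); // trilinear: blends between mip levels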
A second guess, based on the output, would be some sort of depth buffer clearing issue.
I followed @Tommy's suggestion and switched to GL_LINEAR_MIPMAP_LINEAR. The black-or-correct effect then changed to a fade between correct and black.
I guess that although we all know OpenGL is a pipeline (and therefore asynchronous unless you are retrieving state or explicitly synchronizing), we tend to forget it. I certainly did in this case, where I was not drawing but loading and setting up textures.
Once I confirmed the nature of the problem, I added a glFinish() after loading all my textures, and the problem went away. (By the way, my draw loop runs in the foreground and my texture loading loop runs in the background, because it is so time-consuming it would impair interactivity. Also, since this may vary between platforms: I'm using iOS 5 on an iPad 2.)
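A minimal sketch of that fix, assuming the texture loading runs on a background thread with its own shared EAGLContext already current, and that textureId, width, height and pixels are placeholders:

glBindTexture(GL_TEXTURE_2D, textureId);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);   // queued like any other GL command, not necessarily finished on return
// ... repeat for the remaining textures ...
glFinish();                        // block until the GPU has actually built the mip chains,
                                   // so the drawing thread never samples half-finished textures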

A hooked DirectX 9 program crashes on window resize, texture related

I'm using EasyHook and SlimDX to overlay some graphics using SlimDX's Sprite and Texture classes. When I resize windows, some programs are fine, but others crash - Winamp's MilkDrop 2 gives me an ambiguous memory error, for example.
I expect this is due to the extra Texture I created. The question is which VTable function I should hook, and/or how/when I should dispose of and recreate the Texture. Reset, perhaps?
If it isn't obvious I don't know much about DirectX.
Edit/PS: I paint the texture inside an EndScene hook, but I haven't created any other hooks yet...
You shouldn't have to recreate the texture at all if it was created in D3DPOOL_MANAGED (the D3DPOOL parameter of IDirect3DDevice9::CreateTexture).
If you absolutely have to use D3DPOOL_DEFAULT and need to deal with lost textures, the simplest way is to destroy all "perishable" objects before the call to IDirect3DDevice9::Reset, and restore them after the call, but only if it was successful.
You could also track the functions that may return D3DERR_DEVICELOST (there are two of them), but hooking only Reset() will be easier.
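A rough native-style sketch of what such a Reset hook could look like; realReset, overlayTexture and the texture size are placeholders, and with SlimDX the equivalent is disposing the managed Texture before calling the original Reset and recreating it afterwards if the call succeeded:

#include <d3d9.h>

// Pointer to the original Reset, captured when the VTable hook is installed.
typedef HRESULT (WINAPI *ResetFn)(IDirect3DDevice9*, D3DPRESENT_PARAMETERS*);
static ResetFn realReset = nullptr;
static IDirect3DTexture9* overlayTexture = nullptr;   // lives in D3DPOOL_DEFAULT

HRESULT WINAPI HookedReset(IDirect3DDevice9* device, D3DPRESENT_PARAMETERS* params)
{
    // Release default-pool resources before the device is reset...
    if (overlayTexture) { overlayTexture->Release(); overlayTexture = nullptr; }

    HRESULT hr = realReset(device, params);

    // ...and recreate them only if the reset actually succeeded.
    if (SUCCEEDED(hr)) {
        device->CreateTexture(256, 256, 1, 0, D3DFMT_A8R8G8B8,    // placeholder size/format
                              D3DPOOL_DEFAULT, &overlayTexture, NULL);
    }
    return hr;
}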
