Let's say a render target was created via ID2D1Factory::CreateDxgiSurfaceRenderTarget.
Then the render target was passed to my function. I only have the target and not the IDXGISurface.
Is there a way to access IDXGISurface from the target?
QueryInterface doesn't retrieve it.
DirectX 10. Windows 7.
Thank you.
Unfortunately, this is not available. The render target and the DXGI surface are distinct objects, so QueryInterface will not work. Internally, the render target holds a pointer to the underlying DXGI surface, but it does not expose it.
This is more explicit and manageable in Direct2D 1.1 (DirectX 11.1) where you can wrap the DXGI surface in a Direct2D bitmap (CreateBitmapFromDxgiSurface), which is then set as the target (SetTarget) of a Direct2D 1.1 render target (ID2D1DeviceContext). You can then ask the target for the bitmap (GetTarget) and ask the bitmap for the underlying DXGI surface (ID2D1Bitmap1::GetSurface).
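A minimal C++ sketch of that round trip (error handling omitted; d2dContext, an ID2D1DeviceContext, and dxgiSurface are assumed to exist already):

#include <d2d1_1helper.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Wrap the DXGI surface in a Direct2D bitmap and make it the target.
ComPtr<ID2D1Bitmap1> targetBitmap;
d2dContext->CreateBitmapFromDxgiSurface(
    dxgiSurface.Get(),
    D2D1::BitmapProperties1(
        D2D1_BITMAP_OPTIONS_TARGET | D2D1_BITMAP_OPTIONS_CANNOT_DRAW,
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED)),
    &targetBitmap);
d2dContext->SetTarget(targetBitmap.Get());

// Later, with only the device context in hand, recover the surface.
ComPtr<ID2D1Image> targetImage;
d2dContext->GetTarget(&targetImage);

ComPtr<ID2D1Bitmap1> bitmap;
targetImage.As(&bitmap);       // QueryInterface to ID2D1Bitmap1

ComPtr<IDXGISurface> surface;
bitmap->GetSurface(&surface);  // the underlying DXGI surface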
Please refer to the background section below if the following does not make much sense; I omitted most of the context to keep the problem as clear as possible.
I have two WebGLRenderingContexts with the following traits:
WebGLRenderingContext: InputGL (Allows read and write operations on its framebuffers.)
WebGLRenderingContext: OutputGL (Allows only write operations on its framebuffers.)
GOAL: Superimpose InputGL's renders onto OutputGL's renders periodically within 33ms (30fps) on mobile.
Both InputGL's and OutputGL's framebuffers get drawn to from separate processes. Both are available (with complete framebuffers) within a single window.requestAnimationFrame callback. As InputGL requires read operations, and OutputGL supports only write operations, InputGL and OutputGL cannot be merged into one WebGLRenderingContext.
Therefore, I would like to copy the framebuffer content from InputGL to OutputGL in every window.requestAnimationFrame callback. This allows me to keep read/write support on InputGL and use only writes on OutputGL. Neither of them has a (regular) canvas attached, so a canvas overlay is out of the question. I have the following code:
// customOutputGLFramebuffer is the WebXR API's extended framebuffer which does not allow read operations
let fbo = InputGL.createFramebuffer();
InputGL.bindFramebuffer(InputGL.FRAMEBUFFER, fbo);
// TODO: Somehow get fbo's data into OutputGL (I guess?)
OutputGL.bindFramebuffer(OutputGL.FRAMEBUFFER, customOutputGLFramebuffer);
// Drawing to OutputGL here works, and it gets drawn on top of the customOutputGLFramebuffer
I am not sure whether this requires binding in some particular order, or some kind of texture manipulation; any help with this would be greatly appreciated.
Background: I am experimenting with Unity WebGL in combination with the unreleased WebXR API. WebXR uses its own, modified WebGLRenderingContext, which disallows reading from its buffers (as a privacy measure). However, Unity WebGL requires reading from its buffers. Having both operate on the same WebGLRenderingContext produces errors on Unity's read operations, which means they need to be kept separate. The idea is to periodically superimpose Unity's framebuffer data onto WebXR's framebuffers.
WebGL2 is also supported in case this is required.
You cannot share resources across contexts, period.
The best you can do is use one context as a source for the other, for example via texImage2D.
For example, if the source context is using a canvas, then draw the framebuffer to the canvas and then:
destContext.texImage2D(......., srcContext.canvas);
If it's a context on an OffscreenCanvas, use transferToImageBitmap and then pass the resulting ImageBitmap to texImage2D.
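A minimal sketch of that copy, assuming InputGL was created on a regular canvas and has finished drawing within the same requestAnimationFrame callback (drawFullscreenQuad is a hypothetical helper around your own quad geometry and shader):

// One-time setup: a texture in OutputGL that receives InputGL's frame.
const frameTex = OutputGL.createTexture();
OutputGL.bindTexture(OutputGL.TEXTURE_2D, frameTex);
OutputGL.texParameteri(OutputGL.TEXTURE_2D, OutputGL.TEXTURE_MIN_FILTER, OutputGL.LINEAR);
OutputGL.texParameteri(OutputGL.TEXTURE_2D, OutputGL.TEXTURE_WRAP_S, OutputGL.CLAMP_TO_EDGE);
OutputGL.texParameteri(OutputGL.TEXTURE_2D, OutputGL.TEXTURE_WRAP_T, OutputGL.CLAMP_TO_EDGE);

// Every frame: upload InputGL's drawing buffer into the texture...
OutputGL.bindTexture(OutputGL.TEXTURE_2D, frameTex);
OutputGL.texImage2D(OutputGL.TEXTURE_2D, 0, OutputGL.RGBA,
                    OutputGL.RGBA, OutputGL.UNSIGNED_BYTE, InputGL.canvas);

// ...then draw a textured full-screen quad into the target framebuffer.
OutputGL.bindFramebuffer(OutputGL.FRAMEBUFFER, customOutputGLFramebuffer);
drawFullscreenQuad(frameTex); // hypothetical: binds frameTex and draws two triangles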
I need to port the glDiscardFramebufferEXT() OpenGL method to Metal and I haven't found anything useful on the internet yet. How can I do that?
Its functionality is in MTLRenderPassDescriptor:
A MTLRenderPassDescriptor object contains a collection of attachments that are the rendering destination for pixels generated by a rendering pass. The MTLRenderPassDescriptor class is also used to set the destination buffer for visibility information generated by a rendering pass.
See especially the loadAction and storeAction properties of the colorAttachments[i] and depthAttachment members. MTLLoadActionDontCare means the previous contents of an attachment may be ignored when the pass begins, and MTLStoreActionDontCare means the results need not be stored when it ends, which is exactly what glDiscardFramebufferEXT() expresses.
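A rough Objective-C sketch, assuming drawable (a CAMetalDrawable) and depthTexture already exist in your renderer:

MTLRenderPassDescriptor *pass = [MTLRenderPassDescriptor renderPassDescriptor];
pass.colorAttachments[0].texture     = drawable.texture;
pass.colorAttachments[0].loadAction  = MTLLoadActionClear;   // previous contents ignored
pass.colorAttachments[0].storeAction = MTLStoreActionStore;  // color results are kept

pass.depthAttachment.texture     = depthTexture;
pass.depthAttachment.loadAction  = MTLLoadActionClear;
// The equivalent of glDiscardFramebufferEXT() on the depth attachment:
pass.depthAttachment.storeAction = MTLStoreActionDontCare;   // depth is thrown away after the pass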
TTexture can have the style [TTextureStyle.RenderTarget]. The Delphi documentation says:
Specifies the rendering target for this canvas object. Use the value
of RenderTarget to access the specific properties of the target on
which the canvas is drawn. RenderTarget returns a Direct2D interface
that can be used directly.
This is a little obscure. Can someone explain exactly what TTextureStyle.RenderTarget is for and under which conditions it needs to be used?
To make the question useful for everyone, it would also help if someone could explain the other values that TTextureStyle can take (and their purpose):
TTextureStyle = (MipMaps, Dynamic, RenderTarget, Volatile);
I want to create DLL plugins to use with Delphi and other languages (mostly C++).
How can I pass bitmaps in a C++- and Delphi-friendly way? Can it just be a handle to the Delphi TBitmap? A C++ program should be able to decode it using the Windows API, right?
You cannot pass a Delphi TBitmap object since that is only meaningful to Delphi code. What you need to pass is an HBITMAP, a handle to a Windows bitmap.
The Delphi TBitmap class is just a wrapper around the Windows bitmap and can provide HBITMAP handles. The thing you need to watch out for is the ownership of those handles.
If you have a Delphi TBitmap you can get an HBITMAP by calling the ReleaseHandle method of a TBitmap. The handle returned by ReleaseHandle is no longer owned and managed by the TBitmap object which is exactly what you want. You pass that handle to the C++ code and let it become the owner. It is responsible for disposing of that handle.
The documentation for ReleaseHandle says:
Returns the handle to the bitmap so that the TBitmap object no longer
knows about the handle.
Use ReleaseHandle to disassociate the bitmap from the bitmap handle.
Use it when you need to give a bitmap handle to a routine or object
that will assume ownership (or destroy) the bitmap handle.
In the other direction your Delphi code would receive an HBITMAP from the C++ code and take on ownership. Do that by assigning to the Handle property of a TBitmap instance.
The details will vary from language to language, but no matter what, all will be able to deal with an HBITMAP.
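A minimal Delphi sketch of both directions, assuming a hypothetical plugin export PluginTakeBitmap (error handling omitted):

uses Winapi.Windows, Vcl.Graphics;

procedure HandBitmapToPlugin(const FileName: string);
var
  Bmp: TBitmap;
  H: HBITMAP;
begin
  Bmp := TBitmap.Create;
  try
    Bmp.LoadFromFile(FileName);
    H := Bmp.ReleaseHandle;  // the TBitmap gives up ownership of the handle
    PluginTakeBitmap(H);     // hypothetical export; the plugin must DeleteObject(H)
  finally
    Bmp.Free;
  end;
end;

procedure AdoptBitmapFromPlugin(H: HBITMAP);
var
  Bmp: TBitmap;
begin
  Bmp := TBitmap.Create;
  try
    Bmp.Handle := H;  // the TBitmap takes ownership and will destroy the handle
    // ... use Bmp here ...
  finally
    Bmp.Free;
  end;
end;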
I use GLKit/GLKView in my iOS OpenGL ES 2.0 project to manage the default FBO and the life cycle of my app.
In desktop OpenGL, in order to bind the default FBO (the window-system framebuffer) I can just call glBindFramebuffer(GL_FRAMEBUFFER, 0), but this is not the case in an iOS app, since you have to create the default FBO yourself and it will have a unique ID.
The problem is that GLKit/GLKView's coding style forces me to use GLKView's bindDrawable method to activate the default FBO, which makes the design of my cross-platform rendering system a little ugly (I have to store the GLKView pointer as a void* in my C++ engine class and bridge-cast it every time I want to bind the default FBO).
Is there any way to get the ID of the default FBO that GLKit/GLKView creates, so that I can store it and use it to bind the default framebuffer anywhere in my code?
At worst I can go back to creating the default FBO myself and ditch GLKit/GLKView, but it is such a nice framework that I would like to keep using it.
Sorry for my bad English, and thanks in advance for any reply.
Perhaps you can get the "current" framebuffer ID just after your bindDrawable call, by calling something like:
GLint defaultFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFBO); // GL_FRAMEBUFFER_BINDING is the core name in ES 2.0 (the _OES suffix is from ES 1.x)
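You could then stash that value in your engine and rebind it later without going through the GLKView again. A sketch, assuming glkView is your GLKView and bindDrawable has run at least once:

GLint defaultFBO = 0;
[glkView bindDrawable];                              // GLKit binds its framebuffer
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFBO);  // remember its name

// ... later, anywhere in the C++ engine:
glBindFramebuffer(GL_FRAMEBUFFER, (GLuint)defaultFBO);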
The answer that is given is definitely the proper solution; however, it does not address the error in your understanding of the conceptual difference between standard OpenGL and OpenGL for embedded systems.
//-----------------------------------------------
I feel it's necessary to point out that the call to glBindFramebuffer(GL_FRAMEBUFFER, 0) does not return rendering to the main framebuffer on iOS, although it would appear to do so on machines running Windows, macOS, or Linux. On the desktop, framebuffer 0 is the default framebuffer provided by the window system, so binding it really does take you back to the screen. OpenGL ES on iOS has no such system-provided default framebuffer: you must create a framebuffer object yourself (which is exactly what GLKView does on your behalf), and binding 0 merely unbinds the currently active framebuffer object, in the same way that glBindTexture(GL_TEXTURE_2D, 0) unbinds the current texture.
It is possible that on some handheld devices the driver activates a system framebuffer when you bind framebuffer 0 without activating another, but that would be a choice made by the manufacturer and is not something you should count on; it is not part of the OpenGL ES spec.
The proper way to bind the default framebuffer in GLKit is to call the bindDrawable method on the GLKView:
[self bindDrawable] or [myglkview bindDrawable], depending on the context.