Copy framebuffer data from one WebGLRenderingContext to another?

Please refer to the background section below if the following does not make much sense; I omitted most of the context to keep the problem as clear as possible.
I have two WebGLRenderingContexts with the following traits:
WebGLRenderingContext: InputGL (Allows read and write operations on its framebuffers.)
WebGLRenderingContext: OutputGL (Allows only write operations on its framebuffers.)
GOAL: Superimpose InputGL's renders onto OutputGL's renders periodically within 33ms (30fps) on mobile.
Both the InputGL's and OutputGL's framebuffers get drawn to from separate processes. Both are available (with complete framebuffers) within one single window.requestAnimationFrame callback. As InputGL requires read operations and OutputGL only supports write operations, InputGL and OutputGL cannot be merged into one WebGLRenderingContext.
Therefore, I would like to copy the framebuffer content from InputGL to OutputGL in every window.requestAnimationFrame callback. This allows me to keep read/write supported on InputGL and only use write on OutputGL. Neither of them has a (regular) canvas attached, so a canvas overlay is out of the question. I have the following code:
// customOutputGLFramebuffer is the WebXR API's extended framebuffer which does not allow read operations
let fbo = InputGL.createFramebuffer();
InputGL.bindFramebuffer(InputGL.FRAMEBUFFER, fbo);
// TODO: Somehow get fbo data into OutputGL (I guess?)
OutputGL.bindFramebuffer(OutputGL.FRAMEBUFFER, customOutputGLFramebuffer);
// Drawing to OutputGL here works, and it gets drawn on top of the customOutputGLFramebuffer
I am not sure if this requires binding in some particular order, or some kind of texture manipulation; any help with this would be greatly appreciated.
Background: I am experimenting with Unity WebGL in combination with the unreleased WebXR API. WebXR uses its own, modified WebGLRenderingContext which disallows reading from its buffers (as a privacy concern). However, Unity WebGL requires reading from its buffers. Having both operate on the same WebGLRenderingContext gives errors on Unity's read operations, which means they need to be kept separate. The idea is to periodically superimpose Unity's framebuffer data onto WebXR's framebuffers.
WebGL2 is also supported in case this is required.

You cannot share resources across contexts, period.
The best you can do is use one context as a source for the other via texImage2D.
For example, if the source context is using a canvas, draw the framebuffer to the canvas and then
destContext.texImage2D(......., srcContext.canvas);
If it's a context on an OffscreenCanvas, use transferToImageBitmap and then pass the resulting ImageBitmap to texImage2D.
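A minimal sketch of that second route, assuming InputGL was created on an OffscreenCanvas and that OutputGL already has a texture allocated to receive the pixels (InputGL, OutputGL and customOutputGLFramebuffer are the names from the question; copyInputToOutput and outputTexture are illustrative):
// Hypothetical helper: snapshot InputGL's drawing buffer and upload it to OutputGL.
function copyInputToOutput(InputGL, OutputGL, outputTexture) {
  // transferToImageBitmap hands over the OffscreenCanvas's current drawing buffer
  // and leaves the canvas with a fresh, blank one.
  const bitmap = InputGL.canvas.transferToImageBitmap();
  OutputGL.bindTexture(OutputGL.TEXTURE_2D, outputTexture);
  OutputGL.texImage2D(OutputGL.TEXTURE_2D, 0, OutputGL.RGBA,
                      OutputGL.RGBA, OutputGL.UNSIGNED_BYTE, bitmap);
  bitmap.close(); // release the ImageBitmap once it has been uploaded
  // Now draw a textured quad with outputTexture on top of customOutputGLFramebuffer.
}
Note that this still uploads a full frame as a texture every requestAnimationFrame, so the size of the drawing buffer has a direct impact on whether the 33 ms budget holds on mobile.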

Related

How to avoid an entire call stack being declared MainActor because a low-level function needs it?

I have an interesting query with regard to @MainActor and strict concurrency checking (-Xfrontend -warn-concurrency -Xfrontend -enable-actor-data-race-checks).
I have functions (e.g., Analytics) that at the lowest level require access to the device screen scale, UIScreen.main.scale, which is isolated to the main actor. However, I would prefer not to have to declare the entire stack of functions above the one that accesses scale as requiring @MainActor.
Is there a way to do this, or do I have no other options?
What would be the best way to ensure my code only ever calls UIScreen once and keeps the result available for next time, without manually defining a var and checking if it's nil? I.e., is there a kind of computed property that will do this?
Edit: Is there an equivalent of this using MainActor (MainActor.run doesn't do the same thing; it seems to block synchronously):
DispatchQueue.main.async { ... }
Thanks,
Chris
Non-UI code should not rely directly on UIScreen. The scale (for example) should be passed as a parameter, or to actors in their init. If the scale changes (which it can, when screens are added or removed), then the new value should be sent to the actor. Or the actor can observe something that publishes the scale when it changes.
The key point is that accessing UIScreen from a random thread is not valid for a reason: the scale can in fact change at any time. Reading it from an actor is, and should be, an async call.
It sounds like you have some kind of Analytics actor. The simplest implementation of this would be to just pass the scale when you create it.
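A minimal sketch of that suggestion, assuming an Analytics actor of your own design (the type and its members are illustrative, not from the original post):
import UIKit

actor Analytics {
    private var screenScale: CGFloat

    init(screenScale: CGFloat) {
        self.screenScale = screenScale
    }

    // Call this if the scale can change (e.g. a screen is attached or detached).
    func update(screenScale: CGFloat) {
        self.screenScale = screenScale
    }

    func report(_ event: String) {
        // ... use screenScale here; UIScreen is never touched off the main actor ...
    }
}

// Creation happens in code that is already main-actor isolated,
// so reading UIScreen.main.scale is legal at this point.
@MainActor
func makeAnalytics() -> Analytics {
    Analytics(screenScale: UIScreen.main.scale)
}
Everything above the actor then just calls await analytics.report(...) without needing @MainActor itself.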

How to do glDiscardFramebufferEXT in metal

I need to port the glDiscardFramebufferEXT() OpenGL call to Metal and I haven't found anything useful on the internet yet. How can I do that?
Its functionality is in MTLRenderPassDescriptor:
A MTLRenderPassDescriptor object contains a collection of attachments that are the rendering destination for pixels generated by a rendering pass. The MTLRenderPassDescriptor class is also used to set the destination buffer for visibility information generated by a rendering pass.
See especially the storeAction and loadAction members of colorAttachments[i] and depthAttachment.
MTLLoadActionDontCare (and likewise MTLStoreActionDontCare) means the contents can be ignored, which is the equivalent of discarding.
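A minimal sketch in Swift; the texture parameters are placeholders and the choice of which attachment to discard is only an example:
import Metal

func makeRenderPass(color: MTLTexture, depth: MTLTexture) -> MTLRenderPassDescriptor {
    let pass = MTLRenderPassDescriptor()

    pass.colorAttachments[0].texture = color
    pass.colorAttachments[0].loadAction = .clear   // or .dontCare if prior contents don't matter
    pass.colorAttachments[0].storeAction = .store  // keep the rendered color

    pass.depthAttachment.texture = depth
    pass.depthAttachment.loadAction = .clear
    // The rough equivalent of glDiscardFramebufferEXT on the depth attachment:
    // its contents are thrown away instead of being written back after the pass.
    pass.depthAttachment.storeAction = .dontCare

    return pass
}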

Where is the swapBuffer OpenGL call in WebGL

I noticed that the SwapBuffers functionality is not there in WebGL. If that is the case, how do we change state across draw calls and draw multiple objects in WebGL, and at what point in time is swapBuffers called internally by WebGL?
First off, there is no SwapBuffers in OpenGL. SwapBuffers is a platform-specific call that is not part of OpenGL itself.
In any case, the equivalent of SwapBuffers is implicit in WebGL. If you call any WebGL function that affects the drawing buffer (e.g., drawArrays, drawElements, clear, ...), then the next time the browser composites the page it will effectively "swapbuffers".
Note that whether it actually "swaps" or "copies" is up to the browser. For example, if antialiasing is enabled (the default), then internally the browser will effectively do a "copy", or rather a "blit", that converts the internal multisample buffer into something that can actually be displayed.
Also note that because the swap is implicit, WebGL will clear the drawing buffer before the next render command. This is to make the behavior consistent regardless of whether the browser decides to swap or copy internally.
You can force a copy instead of a swap (and avoid the clearing) by passing {preserveDrawingBuffer: true} to getContext as the 2nd parameter, but of course at the expense of disallowing a swap.
Also, it's important to be aware that the swap itself, and when it happens, is semi-undefined. In other words, calling gl.drawXXX or gl.clear will tell the browser to swap/copy at the next composite, but between that time and the time the browser actually composites, other events could get processed. The swap won't happen until your current event exits, for example a requestAnimationFrame event, but between the time your event exits and the time the browser composites, more events could happen (say, mousemove).
The point of all that is: if you don't use {preserveDrawingBuffer: true}, you should always do all of your drawing during one event, usually requestAnimationFrame, otherwise you might get inconsistent results.
AFAIK, the swap-buffers call usually doesn't change any visible GL state. There are plenty of GL calls to change that state between draw calls, though. As for buffer swapping, the browser does that for you sometime after a callback with rendering code returns (and yes, there's no direct control over when this will actually happen).
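A minimal sketch of the two patterns described above; the canvas selector is illustrative:
const canvas = document.querySelector('#c');

// Default behavior: the drawing buffer may be swapped and then cleared after
// compositing, so do all of a frame's drawing inside one rAF callback.
const gl = canvas.getContext('webgl');

function render(time) {
  gl.clearColor(0, 0, 0, 1);
  gl.clear(gl.COLOR_BUFFER_BIT);
  // ... all draw calls for this frame go here ...
  requestAnimationFrame(render);
}
requestAnimationFrame(render);

// Alternative: keep the buffer contents between composites (forces a copy).
// const gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });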

Questions about passing strings and other data from UI to LV2 plugin

I need to pass a string from the UI to the plugin. From the eg-sample, it appears that an LV2 atom should be written to an atom port.
If I understand it correctly
First, allocate an LV2_Atom_Forge. May that object be on the stack, or does it have to survive after the UI event callback has returned?
Call lv2_atom_forge_set_buffer. How do I know the required size of the buffer? The example sets it to 1024 bytes without giving a reason. May the buffer be allocated on the stack, or does it have to survive after the UI event callback has returned?
The forge is just a utility for writing atoms. The buffer it writes to is provided by the code that uses it, so the lifetime of the forge itself is irrelevant. Allocating it on the stack is fine, though it may be more convenient to keep one around in your UI struct for use in various places.
You can estimate the space required by knowing the format of atoms as described in the documentation, or by simply implementing everything with a massive buffer at first and checking the size field of the top-level atom in your output. Keep in mind that this will change if you have variable-sized elements like strings in there. The data passed to the UI callback(s) is const and only valid during the call; it must be copied by the receiver if it needs to be available later.
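A minimal sketch of the UI side in C, assuming the usual LV2 UI plumbing; the my_ui struct, its fields and the port index are illustrative, not from the question (include paths follow the newer lv2 layout):
#include <string.h>
#include <lv2/atom/forge.h>
#include <lv2/atom/util.h>
#include <lv2/ui/ui.h>
#include <lv2/urid/urid.h>

enum { CONTROL_PORT = 0 };          /* hypothetical atom input port index */

typedef struct {
    LV2_Atom_Forge       forge;     /* initialized once with the host's URID map */
    LV2UI_Write_Function write;
    LV2UI_Controller     controller;
    LV2_URID             atom_eventTransfer;
} my_ui;

static void send_string(my_ui* ui, const char* text)
{
    /* A stack buffer is fine: the forge only writes into it during this call,
       and the host copies the atom when ui->write is invoked. */
    uint8_t buf[1024];

    lv2_atom_forge_set_buffer(&ui->forge, buf, sizeof(buf));

    /* Write a plain String atom; real plugins often wrap this in an Object. */
    lv2_atom_forge_string(&ui->forge, text, (uint32_t)strlen(text));

    const LV2_Atom* atom = (const LV2_Atom*)buf;
    ui->write(ui->controller, CONTROL_PORT,
              lv2_atom_total_size(atom),
              ui->atom_eventTransfer,
              atom);
}
The "massive buffer first, then check the size field" approach from the answer tells you how small you can safely make buf once the message shape is fixed.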

Getting default frame buffer id from GLKView/GLKit

I use GLKit/GLKView in my iOS OpenGL ES 2.0 project to manage the default FBO/life cycle of my app.
In desktop OpenGL, in order to bind the default FBO (the front buffer) I can just call glBindFramebuffer(GL_FRAMEBUFFER, 0), but this is not the case in an iOS app, since you have to create the default FBO yourself and it will have a unique ID.
The problem is that the GLKit/GLKView coding style forces me to use GLKView's "bindDrawable" method to activate the default FBO, which makes the design of my cross-platform rendering system a little ugly (I have to store the GLKView pointer as a void* in my C++ engine class and bridge-cast it every time I want to bind the default FBO).
Is there any way to get the default FBO ID that GLKit/GLKView creates, so that I can store it and use it to bind the default framebuffer anywhere in my code?
At worst I can revert to creating the default FBO myself and ditch GLKit/GLKView, but it is such a nice framework that I would like to continue using it.
Sorry for my bad English, and thanks in advance for any reply.
Perhaps you can get the "current" framebuffer ID just after your bindDrawable call, by calling something like:
GLint defaultFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &defaultFBO);
The answer that is given is definitely the proper solution; however, it does not address the error in your understanding of the conceptual difference between standard OpenGL and OpenGL for embedded systems.
//-----------------------------------------------
I feel it's necessary to point out here that the call to glBindFramebuffer(GL_FRAMEBUFFER, 0) does not return rendering to the main framebuffer, although it would appear to do so on machines that run Windows, Unix (Mac) or Linux. Desktops and laptops have no concept of a main default system framebuffer; this idea started with handheld devices. When you make an OpenGL bind call with zero as the parameter, what you are doing is setting that binding to NULL. It's how you disable it. It's the same with glBindTexture(GL_TEXTURE_2D, 0).
It is possible that on some handheld devices the driver automatically activates the main system framebuffer when you set the framebuffer to NULL without activating another. This would be a choice made by the manufacturer and is not something that you should count on; it is not part of the OpenGL ES spec. For desktops and laptops, this is absolutely necessary, since disabling the framebuffer is required to return to normal OpenGL rendering. But remember: this is not a return to any main framebuffer, you are shutting down the activated framebuffer.
The proper way to bind the default framebuffer in GLKit is to call the bindDrawable method on the GLKView:
[self bindDrawable] or [myglkview bindDrawable] depending on the context.
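A minimal sketch in Objective-C of combining the two answers: capture the ID right after bindDrawable and hand the plain GLuint to the engine (function and variable names are illustrative):
#import <GLKit/GLKit.h>
#import <OpenGLES/ES2/gl.h>

static GLuint gDefaultFBO = 0;   // cached name of GLKit's drawable framebuffer

void CacheDefaultFBO(GLKView *glkView) {
    [glkView bindDrawable];                      // GLKit binds its own FBO
    GLint fbo = 0;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING, &fbo);
    gDefaultFBO = (GLuint)fbo;                   // store for later reuse
}

void BindDefaultFBO(void) {
    // Stands in for desktop GL's glBindFramebuffer(GL_FRAMEBUFFER, 0).
    glBindFramebuffer(GL_FRAMEBUFFER, gDefaultFBO);
}
Keep in mind that GLKit may recreate its drawable when the view is resized or moved to another screen, so it is safer to refresh the cached value whenever the layout changes.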
