WebGL Multi-Render Target: For drawBuffers, what does gl.BACK do?

I am working on my Multiple Render Target pipeline and I came across a curiosity in the docs that I don't fully understand and googling for an hour hasn't helped me find a clear answer.
You use gl.drawBuffers([...]) to link the output locations used in your shader to actual color attachments in your framebuffer. So, most of the expected parameters make sense:
gl.NONE - the fragment shader output for this location is not written to any color attachment in the FBO.
gl.COLOR_ATTACHMENT[0 - 15] - the fragment shader output for this location is written to the specified color attachment.
But then we have this mysterious target (from the docs):
gl.BACK: Fragment shader output is written into the back color buffer.
I don't think I understand what the back color buffer is, especially relative to the currently attached FBO. As far as I know you don't specify a 'back color buffer' when making an FBO...so what does this mean? What is this 'back color buffer'?

In WebGL the backbuffer is effectively "the canvas". It's called the backbuffer because sometimes there is a frontbuffer. Canvases in WebGL are double buffered: one buffer is the one currently visible, the other is the one you're currently drawing to.
You can't use [gl.BACK, gl.COLOR_ATTACHMENT0]
When writing to a framebuffer, entry i in the array can only be gl.COLOR_ATTACHMENTi or gl.NONE. For example, imagine you have 4 attachments. Then the array you pass to drawBuffers is as follows:
gl.drawBuffers([
  gl.COLOR_ATTACHMENT0, // OR gl.NONE
  gl.COLOR_ATTACHMENT1, // OR gl.NONE
  gl.COLOR_ATTACHMENT2, // OR gl.NONE
  gl.COLOR_ATTACHMENT3, // OR gl.NONE
]);
You can not swap around attachments.
gl.drawBuffers([
  gl.NONE,
  gl.COLOR_ATTACHMENT0, // !! ERROR! This has to be gl.COLOR_ATTACHMENT1 or gl.NONE
]);
You can't use gl.BACK when a framebuffer is bound. gl.BACK is only valid when writing to the canvas, in other words when the framebuffer binding is null, as in gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.drawBuffers([
  gl.BACK, // OR gl.NONE
]);
Note: the drawBuffers state is part of the state of each framebuffer (and of the canvas itself), so it changes whenever you bind a different framebuffer.
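To make this concrete, here is a minimal WebGL2-flavoured sketch; tex0 and tex1 are assumed to be already-allocated RGBA textures, and in WebGL1 you would go through the WEBGL_draw_buffers extension instead:
// Hypothetical MRT setup: an FBO with two color attachments.
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, tex0, 0);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1,
                        gl.TEXTURE_2D, tex1, 0);

// Shader output location 0 goes to attachment 0, location 1 to attachment 1.
// Entry i may only ever be gl.COLOR_ATTACHMENTi or gl.NONE here.
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);

// Back on the canvas (framebuffer null), only gl.BACK or gl.NONE is valid.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.drawBuffers([gl.BACK]);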

Related

Is it possible to discard for one draw buffer while still writing to another?

The use case is essentially the same as Alpha Blending with Integer Texture for Object Picking
One of the answers is to discard fragments that fail the alpha test.
So I'm wondering if it's possible to discard for one render target in a framebuffer, while still writing to the others. In my initial attempt, where I simply don't write the output value for that target, it seems to still write a default value (0, uvec4(0), etc.)
No, if you discard then no buffers will be updated, regardless of whether you assigned to the output variable beforehand or not.
The discard keyword is only allowed within fragment shaders. It can be used within a fragment shader to abandon the operation on the current fragment. This keyword causes the fragment to be discarded and no updates to any buffers will occur.
– The OpenGL ES Shading Language Specification, Page 58 (Page 64 in the PDF)
(emphasis mine)
If blending is enabled, then you could write a transparent color to the output that you want to "discard".
// instead of: discard;
fragColor = vec4(0.0, 0.0, 0.0, 0.0); // fragColor being your output variable
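As a hedged GLSL ES 3.00 sketch of that workaround (the output names, the 0.5 alpha threshold, and the second target are made up for illustration; note that blending is skipped entirely for integer-format targets such as a uvec4 picking buffer, so this trick only helps float-format attachments):
#version 300 es
precision highp float;
in vec2 vUv;
uniform sampler2D tex;
layout(location = 0) out vec4 colorOut; // blended target
layout(location = 1) out vec4 glowOut;  // always written
void main() {
  vec4 c = texture(tex, vUv);
  // Instead of: if (c.a < 0.5) discard;  (which would kill BOTH outputs)
  colorOut = (c.a < 0.5) ? vec4(0.0) : c; // transparent write leaves the target unchanged under blending
  glowOut = vec4(c.rgb * 0.2, 1.0);
}
Note that with plain WebGL2 the blend state applies to all draw buffers at once; per-attachment blend control needs the OES_draw_buffers_indexed extension.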

WebGL feedback loop formed between Framebuffer and active Texture

I have a WebGL project set up that uses 2-pass rendering to create effects on a texture.
Everything was working until recently, when Chrome started throwing this error:
[.WebGL-0000020DB7FB7E40] GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture.
This just started happening even though I didn't change my code, so I'm guessing a new update caused this.
I found this answer on SO, stating the error "happens any time you read from a texture which is currently attached to the framebuffer".
However I've combed through my code 100 times and I don't believe I am doing that. So here is how I have things set up.
Create a fragment shader with a uniform sampler.
uniform sampler2D sampler;
Create 2 textures
var texture0 = initTexture(); // This function does all the work to create a texture
var texture1 = initTexture(); // This function does all the work to create a texture
Create a Frame Buffer
var frameBuffer = gl.createFramebuffer();
Then I start the "2 pass processing" by uploading an HTML image to texture0, and binding texture0 to the sampler.
I then bind the frame buffer & call drawArrays:
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture1, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
To clean up I unbind the frame buffer:
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
Edit:
After adding breakpoints to my code I found that the error is not actually thrown until I bind the null frame buffer. So the drawArrays call isn't causing the error; it's binding the null frame buffer afterwards that sets it off.
Since version 83, Chrome performs conservative checks for feedback loops between the framebuffer and the active textures. These checks are likely too conservative and reject usage that should actually be allowed.
In these new checks, Chrome seems to disallow a render target being bound to any texture unit, even if that unit is not sampled by the program.
In your 2-pass rendering you likely have something like:
1. Initialize a render target: create a texture and attach it to a framebuffer.
2. Render to the target.
In step 1 you likely bind a texture using gl.bindTexture(gl.TEXTURE_2D, yourTexture). Before step 2, unbind it using gl.bindTexture(gl.TEXTURE_2D, null); otherwise Chrome will fail the draw, because the render target is bound as a texture, even though this texture is never sampled by the program.
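A minimal sketch of that fix in a typical two-pass loop, with placeholder names texA, texB and fb:
// Pass 1: sample texA, render into texB through the framebuffer.
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, texB, 0);
gl.bindTexture(gl.TEXTURE_2D, texA);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);

// Pass 2: sample texB, render to the canvas.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.bindTexture(gl.TEXTURE_2D, texB);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);

// texB is now both bound to a texture unit AND still attached to fb,
// which trips Chrome's check on the next iteration - so unbind it.
gl.bindTexture(gl.TEXTURE_2D, null);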

Render to multisample buffer and resolved frame buffer independently

So I'm doing some graph drawing using GL_LINE_STRIPs, and I am using a multisampled buffer so the lines don't look so jagged. The problem is I have some lines in the background of the graph that act as a legend. The multisampling kind of screws those lines up, because they are meant to be exactly 1 pixel thick, but the multisampling will sometimes spread a line over 2 pixels that are slightly dimmer than the original colour, making the lines look different to each other.
Is it possible to render those legend lines directly to the resolved frame buffer, then have the multisampled stuff drawn on top? This would effectively leave the background legend lines un-multisampled while multisampling the graph lines.
Is this possible? I just want to know before I dive into this and later find out you can't do it. If you have some demo code to show me, that would be great as well.
It would be much easier if the legend came last: you could just resolve the MSAA buffers into the view framebuffer and then normally render the legend into the resolved buffer afterwards. But the other way around isn't possible, since multisample resolution will just overwrite any previous contents of the target framebuffer; it won't do any blending or depth testing.
The only way to actually render the MSAA stuff on top would be to first resolve it into another FBO and draw that FBO's texture on top of the legend. But for the legend to not get completely overwritten, you will have to use alpha blending. So you basically clear the MSAA buffers to an alpha of 0 before rendering, then render the graph into them. Then you resolve those buffers and draw the resulting texture on top of the legend, using alpha blending to only overwrite the parts where the graph was actually drawn.
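A hedged sketch of that second approach, using WebGL2 names (desktop GL is analogous); msaaFbo, resolveFbo, resolveTex, the dimensions, and the draw helpers are all assumptions:
// 1. Clear the multisampled FBO to transparent and draw the graph into it.
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
gl.clearColor(0, 0, 0, 0); // alpha = 0 everywhere
gl.clear(gl.COLOR_BUFFER_BIT);
drawGraphLines(); // assumed helper

// 2. Resolve the multisampled buffer into a regular texture-backed FBO.
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFbo);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFbo);
gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                   gl.COLOR_BUFFER_BIT, gl.NEAREST);

// 3. Draw the legend into the default framebuffer, then blend the
//    resolved graph texture over it.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
drawLegendLines(); // assumed helper
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
drawFullscreenQuad(resolveTex); // assumed helper
gl.disable(gl.BLEND);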

OpenGL ES 1.1 - alpha mask

I'm working on an iPad app, with OpenFrameworks and OpenGL ES 1.1. I need to display a video with an alpha channel. To simulate it I have an RGB video (without any alpha channel) and another video containing only the alpha channel (replicated on every RGB channel, so the white parts correspond to the visible parts and the black to the invisible). Each video is an OpenGL texture.
In OpenGL ES 1.1 there are no shaders, so I found this solution (here: OpenGL - mask with multiple textures):
glEnable(GL_BLEND);
// Use a simple blendfunc for drawing the background
glBlendFunc(GL_ONE, GL_ZERO);
// Draw entire background without masking
drawQuad(backgroundTexture);
// Next, we want a blendfunc that doesn't change the color of any pixels,
// but rather replaces the framebuffer alpha values with values based
// on the whiteness of the mask. In other words, if a pixel is white in the mask,
// then the corresponding framebuffer pixel's alpha will be set to 1.
glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO);
// Now "draw" the mask (again, this doesn't produce a visible result, it just
// changes the alpha values in the framebuffer)
drawQuad(maskTexture);
// Finally, we want a blendfunc that makes the foreground visible only in
// areas with high alpha.
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);
drawQuad(foregroundTexture);
It's exactly what I want to do, but glBlendFuncSeparate() doesn't exist in OpenGL ES 1.1 (or on iOS). I'm trying to do it with glColorMask and I found this: Can't get masking to work correctly with OpenGL
But it doesn't work either, I guess because his mask texture file contains a 'real' alpha channel, while mine doesn't.
I highly suggest you compute a single RGBA texture instead.
This will be both easier and faster (because right now you're sending 2 RGBA textures each frame - yes, your RGB texture is in fact encoded as RGBA by the hardware, with the A ignored).
glColorMask won't help you, because it simply says "turn on or off this channel completely".
glBlendFuncSeparate could help you if you had it, but again, it's not a good solution: you're wasting your (very limited) iPhone bandwidth by sending twice as much data as needed.
UPDATE:
Since you're using OpenFrameworks, and according to its source code (https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/gl/ofTexture.cpp and https://github.com/openframeworks/openFrameworks/blob/master/libs/openFrameworks/video/ofVideoPlayer.cpp):
Use ofVideoPlayer::setUseTexture(false) so that ofVideoPlayer::update won't upload the data to video memory;
Get the video data with ofVideoPlayer::getPixels
Interleave the result into the RGBA texture (you can use a GL_RGBA ofTexture and ofTexture::loadData; see the sketch after this list)
Draw using ofTexture::draw (this is what ofVideoPlayer does anyway)
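The interleaving step is plain byte shuffling: copy each RGB pixel and take the A from the mask video's red channel. A minimal sketch, in JavaScript for brevity (the equivalent C++ loop over unsigned char* is analogous), assuming both getPixels calls return tightly packed 3-byte RGB data:
// Hypothetical helper: build one RGBA buffer from an RGB color video and
// an "alpha" video whose red channel encodes opacity.
function interleaveRGBA(rgbPixels, alphaPixels, pixelCount) {
  const rgba = new Uint8Array(pixelCount * 4);
  for (let i = 0; i < pixelCount; i++) {
    rgba[i * 4 + 0] = rgbPixels[i * 3 + 0];   // R
    rgba[i * 4 + 1] = rgbPixels[i * 3 + 1];   // G
    rgba[i * 4 + 2] = rgbPixels[i * 3 + 2];   // B
    rgba[i * 4 + 3] = alphaPixels[i * 3 + 0]; // A from the mask's red channel
  }
  return rgba; // upload once per frame with loadData as GL_RGBA
}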

Scaling the contents of OpenGL ES framebuffer

Currently I'm scaling down the contents of my OpenGL ES 1.1 framebuffer like this:
save current framebuffer and renderbuffer references
bind framebuffer2 and smallerRenderbuffer
re-render all contents
now smallerRenderbuffer contains the "scaled-down" contents of framebuffer
do stuff with contents of smallerRenderbuffer
re-bind framebuffer and renderbuffer
What's an alternative way to do this? Perhaps I could just copy and scale the contents of the original framebuffer and renderbuffer into framebuffer2 and smallerRenderbuffer, hence avoiding the re-render step. I've been looking at glScalef but I'm not sure where to go from here.
Note: this is all done in OpenGL ES 1.1 on iOS.
You could do an initial render to texture, then render from that to both the frame buffer that you want to be visible and to the small version. Whichever way you look at it, what you're trying to do is use already-rendered data as the source for another rendering, so rendering to a texture is the most natural thing to do.
You're probably already familiar with the semantics of a render to texture if you're doing work on the miniature version, but for completeness: you'd create a frame buffer object, use glFramebufferTexture2DOES to attach a named texture to a suitable attachment point, then bind either the frame buffer or the texture (ensuring the other isn't simultaneously bound if you want defined behaviour) as appropriate.
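A WebGL-flavoured sketch of the idea (the ES 1.1 OES-suffixed calls are analogous); sceneFbo with fullTex attached, smallFbo, the dimensions, and the draw helpers are all assumptions:
// 1. Render the scene once, into a texture-backed FBO.
gl.bindFramebuffer(gl.FRAMEBUFFER, sceneFbo); // fullTex attached as COLOR_ATTACHMENT0
gl.viewport(0, 0, fullWidth, fullHeight);
drawScene(); // assumed helper

// 2. Draw that texture to the visible framebuffer at full size...
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.viewport(0, 0, fullWidth, fullHeight);
drawTexturedQuad(fullTex); // assumed helper

// 3. ...and again into the small FBO; the viewport does the scaling.
gl.bindFramebuffer(gl.FRAMEBUFFER, smallFbo);
gl.viewport(0, 0, smallWidth, smallHeight);
drawTexturedQuad(fullTex);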
