I have a WebGL project set up that uses two-pass rendering to create effects on a texture.
Everything was working until recently, when Chrome started throwing this error:
[.WebGL-0000020DB7FB7E40] GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture.
This just started happening even though I didn't change my code, so I'm guessing a new update caused this.
I found this answer on SO, stating that the error "happens any time you read from a texture which is currently attached to the framebuffer".
However I've combed through my code 100 times and I don't believe I am doing that. So here is how I have things set up.
Create a fragment shader with a uniform sampler.
uniform sampler2D sampler;
Create 2 textures
var texture0 = initTexture(); // This function does all the work to create a texture
var texture1 = initTexture(); // This function does all the work to create a texture
Create a Frame Buffer
var frameBuffer = gl.createFramebuffer();
Then I start the "2 pass processing" by uploading an HTML image to texture0 and binding texture0 to the sampler.
I then bind the frame buffer & call drawArrays:
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture1, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
To clean up I unbind the frame buffer:
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
Edit:
After adding breakpoints to my code I found that the error is not actually thrown until I bind the null framebuffer. So the drawArrays call isn't causing the error; it's binding the null framebuffer afterwards that sets it off.
Since version 83, Chrome performs conservative checks for feedback loops between the framebuffer and the active texture. These checks are likely too conservative and flag usage that should actually be allowed.
In these new checks, Chrome seems to disallow a render target being bound to any texture unit, even if that unit is not used by the program.
In your 2 pass rendering you likely have something like:
1) Initialize a render target: create a texture and attach it to a framebuffer.
2) Render to the target.
In step 1 you likely bind a texture using gl.bindTexture(gl.TEXTURE_2D, yourTexture). Before step 2, you then need to unbind it with gl.bindTexture(gl.TEXTURE_2D, null); otherwise Chrome will fail, because the render target is still bound as a texture, even though that texture is never sampled by the program.
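As a minimal sketch, assuming a ping-pong setup along the lines described above (texture0 is the texture being sampled, texture1 is the render target; the texImage2D call and the width/height variables are illustrative assumptions):

// Step 1: set up the render target and attach it to the framebuffer.
gl.bindTexture(gl.TEXTURE_2D, texture1);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture1, 0);

// texture1 is still bound to the active texture unit here. Rebind the texture
// you actually sample (or bind null) before drawing, otherwise Chrome 83+
// reports the feedback loop even though texture1 is never read by the shader.
gl.bindTexture(gl.TEXTURE_2D, texture0); // or: gl.bindTexture(gl.TEXTURE_2D, null);

// Step 2: render into texture1 while sampling texture0.
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);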
Related
I am working on my Multiple Render Target pipeline and I came across a curiosity in the docs that I don't fully understand, and an hour of googling hasn't helped me find a clear answer.
You use gl.drawBuffers([...]) to link the output locations used in your shader to actual color attachments in your framebuffer. So most of the expected parameters make sense:
gl.NONE - the shader output for this location is not written to any color attachment in the FBO.
gl.COLOR_ATTACHMENT[0 - 15] - the shader output for this location is written to the specified color attachment.
But then we have this mysterious target (from the docs):
gl.BACK: Fragment shader output is written into the back color buffer.
I don't think I understand what the back color buffer is, especially relative to the currently attached FBO. As far as I know you don't specify a 'back color buffer' when making an FBO... so what does this mean? What is this 'back color buffer'?
In WebGL the backbuffer is effectively "the canvas". It's called the backbuffer because sometimes there is a frontbuffer. Canvases in WebGL are double-buffered: one buffer is whatever is currently visible, the other is the buffer you're currently drawing to.
You can't use [gl.BACK, gl.COLOR_ATTACHMENT0].
When writing to a framebuffer, the entry at each index can only be the color attachment with that same index, or gl.NONE. For example, imagine you have 4 attachments. Then the array you pass to drawBuffers looks like this:
gl.drawBuffers([
gl.COLOR_ATTACHMENT0, // OR gl.NONE,
gl.COLOR_ATTACHMENT1, // OR gl.NONE,
gl.COLOR_ATTACHMENT2, // OR gl.NONE,
gl.COLOR_ATTACHMENT3, // OR gl.NONE,
])
You cannot swap attachments around:
gl.drawBuffers([
gl.NONE,
gl.COLOR_ATTACHMENT0, // !! ERROR! This has to be COLOR_ATTACHMENT1 or NONE
])
You can't use gl.BACK when a framebuffer is bound. gl.BACK is only for writing to the canvas, in other words when the framebuffer binding is null, as in gl.bindFramebuffer(gl.FRAMEBUFFER, null):
gl.drawBuffers([
gl.BACK, // OR gl.NONE
]);
note: drawBuffers state is part of the state of each framebuffer (and canvas). See this and this
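For example, a minimal sketch of what that means in practice (fb is a hypothetical framebuffer with two color attachments):

// While fb is bound, drawBuffers sets fb's own draw-buffer state.
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1]);

// While no framebuffer is bound, drawBuffers sets the canvas's state.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.drawBuffers([gl.BACK]);

// Re-binding fb later brings back its own draw-buffer setting; there is no
// need to call drawBuffers again unless you want to change it.
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);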
I'm developing an OpenGL ES application for iOS.
I'm trying to blend two textures in my shader, but I always get only one active texture unit.
I have generated two textures and linked them to two sampler2D uniforms in the fragment shader.
I set them to units 0 and 1 by using glUniform1f().
And I have bound the textures using a loop:
for (int i = 0; i < 2; i++)
{
    glActiveTexture(GL_TEXTURE0 + i);
    glBindTexture(GL_TEXTURE_2D, textures[i]);
}
But when I draw the OpenGL frame, only one unit is active, like in the picture below.
So, what am I doing wrong?
The way I read the output of that tool (I have not used it), the left pane shows the currently active texture unit. There is always exactly one active texture unit, corresponding to your last call of glActiveTexture(). This means that after you call:
glActiveTexture(GL_TEXTURE0 + i);
the value in the left circled field will be the value of i.
The right pane shows the textures bound to each texture unit. Since you bound textures to unit 0 and 1 with the loop shown in your question, it shows a texture (with id 201) bound to texture unit 0, and a texture (with id 202) bound to texture unit 1.
So as far as I can tell, the state shown in the screenshot represents exactly what you set based on your description and code fragment.
Based on the wording in your question, you might be under the impression that glActiveTexture() enables texture units. That is not the case. glActiveTexture() only specifies which texture unit subsequent glBindTexture() calls operate on.
Which textures are used is then determined by the values you set for the sampler uniforms of your shader program, and by the textures you bound to the corresponding texture units. The value of the currently active texture unit has no influence on the draw call, only on texture binding.
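To make that concrete, here is a minimal sketch of that model, written in WebGL-style JavaScript for brevity (the calls map one-to-one to their GL ES counterparts; program, vertexCount, and the uniform names are made up for the example):

// Bind one texture to unit 0 and another to unit 1.
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, textures[0]);
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, textures[1]);

// Tell each sampler uniform which UNIT to sample from. Sampler uniforms take
// an integer unit index, set with uniform1i (glUniform1i), not uniform1f.
gl.uniform1i(gl.getUniformLocation(program, "uTexture0"), 0);
gl.uniform1i(gl.getUniformLocation(program, "uTexture1"), 1);

// The draw call uses both textures, regardless of which unit happens to be
// "active" -- activeTexture only selects the unit that subsequent
// bindTexture calls operate on.
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);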
void D3DApp::OnResize()
{
    assert(md3dImmediateContext);
    assert(md3dDevice);
    assert(mSwapChain);
    // Release the old views, as they hold references to the buffers we
    // will be destroying. Also release the old depth/stencil buffer.
    ReleaseCOM(mRenderTargetView);
    ReleaseCOM(mDepthStencilView);
    ReleaseCOM(mDepthStencilBuffer);
    // Resize the swap chain and recreate the render target view.
    HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM, 0));
    ID3D11Texture2D* backBuffer;
    HR(mSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer)));
    HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView));
    ReleaseCOM(backBuffer);
I am having a bit of trouble understanding exactly what this code is actually doing. I think it is getting the current back buffer, storing the buffer in memory, and then rendering it to the screen again? What could be in the contents of the buffer that is retrieved? I am very confused. My program uses an 800 x 600 pixel window.
As the function name points out, it just resets the back buffer and the render target, and this function should be called whenever the render window is resized.
HR(mSwapChain->ResizeBuffers(1, mClientWidth, mClientHeight, DXGI_FORMAT_R8G8B8A8_UNORM, 0));
In a DirectX application we always make the back buffer the same size as the window, so when the window size changes we should update the back buffer size accordingly; otherwise your scene will be stretched.
ID3D11Texture2D* backBuffer;
HR(mSwapChain->GetBuffer(0, __uuidof(ID3D11Texture2D), reinterpret_cast<void**>(&backBuffer)));
HR(md3dDevice->CreateRenderTargetView(backBuffer, 0, &mRenderTargetView));
ReleaseCOM(backBuffer);
The render target view is based on the back buffer, so it in turn has to be recreated.
I'm using OpenTK in MonoTouch to render some textures in iOS, and some of the textures come up broken. This is a closeup of an iPad screenshot showing one correctly rendered texture (the top one) and two broken ones below:
I'm not doing anything weird. I'm loading the texture from a semitransparent PNG using CGImage->CGBitmapContext->GL.TexImage2D. I'm rendering each sprite with two triangles, and my fragment shader just reads the texel from the sampler with texture2D() and multiplies it by a uniform vec4 to color the texture.
The files themselves seem to be okay, and the Android port of the same application, using Mono for Android and the exact same binary resources, renders them perfectly. As you can see, other transparent textures work fine.
If it helps, pretty much every texture is broken when I run the program in the simulator. Also this problem persists even if I rebuild the program.
Any ideas on how to figure out what is causing this problem?
Here's my vertex shader:
attribute vec4 spritePosition;
attribute vec2 textureCoords;
uniform mat4 projectionMatrix;
uniform vec4 color;
varying vec4 colorVarying;
varying vec2 textureVarying;
void main()
{
    gl_Position = projectionMatrix * spritePosition;
    textureVarying = textureCoords;
    colorVarying = color;
}
Here's my fragment shader:
varying lowp vec4 colorVarying;
varying lowp vec2 textureVarying;
uniform sampler2D spriteTexture;
void main()
{
    gl_FragColor = texture2D(spriteTexture, textureVarying) * colorVarying;
}
I'm loading the image like this:
using (var bitmap = UIImage.FromFile(resourcePath).CGImage)
{
    IntPtr pixels = Marshal.AllocHGlobal(bitmap.Width * bitmap.Height * 4);
    using (var context = new CGBitmapContext(pixels, bitmap.Width, bitmap.Height, 8, bitmap.Width * 4, bitmap.ColorSpace, CGImageAlphaInfo.PremultipliedLast))
    {
        context.DrawImage(new RectangleF(0, 0, bitmap.Width, bitmap.Height), bitmap);
        int[] textureNames = new int[1];
        GL.GenTextures(1, textureNames);
        GL.BindTexture(TextureTarget.Texture2D, textureNames[0]);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)All.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)All.Linear);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)All.ClampToEdge);
        GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)All.ClampToEdge);
        GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, bitmap.Width, bitmap.Height, 0, PixelFormat.Rgba, PixelType.UnsignedByte, pixels);
        CurrentResources.Add(resourceID, new ResourceData(resourcePath, resourceType, 0, new TextureEntry(textureNames[0], bitmap.Width, bitmap.Height)));
    }
}
and in my onRenderFrame, I have this:
GL.ClearColor(1.0f, 1.0f, 1.0f, 1.0f);
GL.Clear(ClearBufferMask.ColorBufferBit);
GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.UseProgram(RenderingProgram);
GL.VertexAttribPointer((int)ShaderAttributes.SpritePosition, 2, VertexAttribPointerType.Float, false, 0, squareVertices);
GL.VertexAttribPointer((int)ShaderAttributes.TextureCoords, 2, VertexAttribPointerType.Float, false, 0, squareTextureCoords);
GL.EnableVertexAttribArray((int)ShaderAttributes.SpritePosition);
GL.EnableVertexAttribArray((int)ShaderAttributes.TextureCoords);
//...
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, textureEntry.TextureID);
GL.Uniform1(Uniforms[(int)ShaderUniforms.Texture], 0);
// ...
GL.DrawArrays(BeginMode.TriangleStrip, 0, 4);
That triangle strip is made out of two triangles that make up the texture, with the vertex and texture coordinates set to where I want to show my sprite. projectionMatrix is a simple orthographic projection matrix.
As you can see, I'm not trying to do anything fancy here. This is all pretty standard code, and it works for some textures, so I think that in general the code is okay. I'm also doing pretty much the same thing in Mono for Android, and it works pretty well without any texture corruption.
Corrupted colors like that smell like uninitialized variables, and seeing it happen only on the transparent parts leads me to believe that I have uninitialized alpha values somewhere. However, GL.Clear(ClearBufferMask.ColorBufferBit) should clear my alpha values; and even so, the background texture has an alpha value of 1, which with the current BlendFunc should set the alpha for those pixels to 1. Afterwards, the transparent textures have alpha values ranging from 0 to 1, so they should blend properly. I see no uninitialized variables anywhere.
...or... this is all the fault of CGBitmapContext. Maybe by doing DrawImage I'm not blitting the source image, but drawing it with blending instead, and the garbage data comes from when I did AllocHGlobal. This doesn't explain why it consistently happens with just these two textures, though... (I'm tagging this as core-graphics so maybe one of the Quartz people can help.)
Let me know if you want to see some more code.
Okay, it is just as I had expected. The memory I get with Marshal.AllocHGlobal is not initialized to anything, and CGBitmapContext.DrawImage just renders the image on top of whatever is in the context, which is garbage.
So the way to fix this is simply to insert a context.ClearRect() call before I call context.DrawImage().
I don't know why it worked fine with other (larger) textures, but maybe it is because in those cases I'm requesting a large block of memory, so the iOS (or Mono) memory manager gives me a new zeroed block, while for the smaller textures I'm reusing previously freed memory that has not been zeroed.
It would be nice if the memory were initialized to something like 0xBAADF00D when using the debug heap, like LocalAlloc does in the Windows API.
Two other somewhat related things to remember:
In the code I posted, I'm not releasing the memory requested with AllocHGlobal. This is a bug. GL.TexImage2D copies the texture to VRAM, so it is safe to free it right there.
context.DrawImage is drawing the image into a new context (instead of reading the raw pixels from the image), and Core Graphics only works with premultiplied alpha (which I find idiotic). So a texture loaded this way will always end up with premultiplied alpha. This means that I must also change the blending function to GL.BlendFunc(BlendingFactorSrc.One, BlendingFactorDest.OneMinusSrcAlpha), and make sure that all crossfading code works over the entire RGBA, and not just the alpha value.
Check out the following test:
http://binks.knobbits.org/webgl/texture3.html
It's a simple test of cube textures. It also has a 2D texture in there for good measure.
I discovered that in some browsers (so far, Chrome) the image is not displayed properly if I reuse the same texture unit for the cube texture as for the 2D texture.
There is a checkbox at the bottom marked "Use separate texture units for the cube texture on the sphere and the 2D texture on the floor" that shows this.
Is this a bug in chrome or in my code?
I don't see anything wrong with your code, but:
1) You can't use the same texture for 2 different targets. In other words, you can't do this:
tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);
2) You can't use both a TEXTURE_2D and a CUBE_MAP on a texture unit AT THE SAME TIME.
You can assign both, but when you render you're only allowed to reference one of them in your shaders. In other words:
gl.activeTexture(gl.TEXTURE0);
tex1 = gl.createTexture();
tex2 = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex1);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex2);
That is okay, but a shader that tried to use both textures from texture unit 0 would fail.
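A sketch of a setup that does work, putting each texture on its own unit (program, vertexCount, and the uniform names are made up for the example):

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, tex1); // 2D texture on unit 0
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex2); // cube map on unit 1

// Point each sampler at its own unit.
gl.uniform1i(gl.getUniformLocation(program, "u_texture"), 0); // sampler2D
gl.uniform1i(gl.getUniformLocation(program, "u_cubeMap"), 1); // samplerCube

gl.drawArrays(gl.TRIANGLES, 0, vertexCount);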
I have reordered the code of the drawing functions a bit and now they are working.
Square:
TexturedSquare.prototype.draw = function() {
    gl.bindBuffer(gl.ARRAY_BUFFER,this.v);
    gl.enableVertexAttribArray(gl.va_vertex);
    gl.enableVertexAttribArray(gl.va_normal);
    gl.enableVertexAttribArray(gl.va_tex1pos);
    gl.vertexAttribPointer(gl.va_vertex,4,gl.FLOAT,false,10*4,0);
    gl.vertexAttribPointer(gl.va_normal,4,gl.FLOAT,false,10*4,4*4);
    gl.vertexAttribPointer(gl.va_tex1pos,2,gl.FLOAT,false,10*4,4*8);
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D,this.texture);
    gl.bindTexture(gl.TEXTURE_CUBE_MAP,null);
    gl.uniform1i(shader.textures,1);
    gl.uniform1i(shader.texture1,0);
    gl.uniform1i(shader.cube_textures,0);
    gl.uniform1i(shader.cubeTexture0,1);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER,this.e);
    gl.drawElements(gl.TRIANGLES,this.l,gl.UNSIGNED_SHORT,0);
    gl.disableVertexAttribArray(gl.va_tex1pos);
}
Sphere:
GLHTexturedSphere.prototype.draw = function() {
    gl.bindBuffer(gl.ARRAY_BUFFER,this.vbuf);
    gl.enableVertexAttribArray(gl.va_vertex);
    gl.enableVertexAttribArray(gl.va_normal);
    gl.enableVertexAttribArray(this.va_cubetex0pos);
    gl.vertexAttribPointer(gl.va_vertex,4,gl.FLOAT,false,8*4,0);
    gl.vertexAttribPointer(gl.va_normal,4,gl.FLOAT,false,8*4,4*4);
    gl.vertexAttribPointer(this.va_cubetex0pos,4,gl.FLOAT,false,8*4,4*4);
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D,null);
    gl.bindTexture(gl.TEXTURE_CUBE_MAP,this.texture);
    gl.uniform1i(shader.textures,0);
    gl.uniform1i(shader.texture1,1);
    gl.uniform1i(shader.cube_textures,1);
    gl.uniform1i(shader.cubeTexture0,0);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER,this.ebuf);
    gl.drawElements(gl.TRIANGLES,this.l,gl.UNSIGNED_SHORT,0);
    gl.disableVertexAttribArray(gl.va_cubetex0pos);
}
Both of them now use TEXTURE0. Please check the WebGL state and the uniform values.
The original code is a bit hard for me to follow, sorry. But I think the problem is that the texture1 and cubeTexture0 uniforms were being set to the same value.
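If so, a minimal sketch of the fix is simply to keep the two samplers on different units, for example (squareTexture and sphereTexture stand in for the respective this.texture objects above):

// sampler2D texture1 reads unit 0, samplerCube cubeTexture0 reads unit 1,
// so the two samplers never point at the same unit during a draw call.
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, squareTexture);
gl.uniform1i(shader.texture1, 0);

gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, sphereTexture);
gl.uniform1i(shader.cubeTexture0, 1);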