Why can't I render to a floating point texture in WebGL?

I am creating a framebuffer and attaching a texture to it. Here is the texture that I would like to attach (but it is not working):
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, sphere_texture.width, sphere_texture.height, 0, gl.RED, gl.FLOAT, null);
However, when I use this as the texture format, it works:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, sphere_texture.width, sphere_texture.height, 0, gl.RGB, gl.UNSIGNED_BYTE, null)
Does anyone know how I could render to a framebuffer float texture?
This is how I am creating the framebuffer:
framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, scale_factor_texture, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

For WebGL2 contexts (which I assume you're working with, going by your intended use of the R32F format), you need to enable the EXT_color_buffer_float extension for those formats to be renderable:
if (!ctx.getExtension('EXT_color_buffer_float'))
throw new Error('Rendering to floating point textures is not supported on this platform');
For WebGL1 contexts there's WEBGL_color_buffer_float, as well as implicit support when enabling OES_texture_float (which you can probe for by attaching such a texture to a render target and checking the framebuffer's completeness). However, with WebGL 1, rendering to single-channel textures is not supported either way.
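Putting it together, a minimal sketch for a WebGL2 context (gl, width and height are assumed to exist already; the extension check comes first):
if (!gl.getExtension('EXT_color_buffer_float')) throw new Error('Rendering to floating point textures is not supported on this platform');
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, width, height, 0, gl.RED, gl.FLOAT, null);
// R32F is not filterable without OES_texture_float_linear, so use NEAREST
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) throw new Error('R32F attachment is not renderable');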


WebGL feedback loop formed between Framebuffer and active Texture

I have a WebGL project set up that uses 2-pass rendering to create effects on a texture.
Everything was working until recently, when Chrome started throwing this error:
[.WebGL-0000020DB7FB7E40] GL_INVALID_OPERATION: Feedback loop formed between Framebuffer and active Texture.
This just started happening even though I didn't change my code, so I'm guessing a new update caused this.
I found this answer on SO, stating the error "happens any time you read from a texture which is currently attached to the framebuffer".
However, I've combed through my code 100 times and I don't believe I am doing that. So here is how I have things set up.
Create a fragment shader with a uniform sampler.
uniform sampler2D sampler;
Create 2 textures
var texture0 = initTexture(); // This function does all the work to create a texture
var texture1 = initTexture(); // This function does all the work to create a texture
Create a Frame Buffer
var frameBuffer = gl.createFramebuffer();
Then I start the "2-pass processing" by uploading an HTML image to texture0, and binding texture0 to the sampler.
I then bind the frame buffer & call drawArrays:
gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture1, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
To clean up I unbind the frame buffer:
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
Edit:
After adding breakpoints to my code, I found that the error is not actually thrown until I bind the null framebuffer. So the drawArrays call isn't causing the error; it's binding the null framebuffer afterwards that sets it off.
Since version 83, Chrome performs conservative checks for feedback loops between the framebuffer and active textures. These checks are likely too conservative and flag usage that should actually be allowed.
In these new checks, Chrome seems to disallow a render target being bound to any texture slot, even if that slot is not used by the program.
In your 2-pass rendering you likely have something like:
1. Initialize a render target: create a texture and attach it to a framebuffer.
2. Render to the target.
In step 1 you likely bind the texture using gl.bindTexture(gl.TEXTURE_2D, yourTexture). Before step 2, you need to unbind it using gl.bindTexture(gl.TEXTURE_2D, null); otherwise Chrome will fail, because the render target is bound as a texture, even though that texture is never sampled by the program.
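A minimal sketch of that fix (names like fb, targetTex, w and h are placeholders, not from the original code):
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
// step 1: create the render-target texture; note it stays bound to the
// active texture unit as a side effect of setting it up
const targetTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, targetTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, w, h, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, targetTex, 0);
// the fix: unbind it, so the render target is no longer bound to any
// texture slot when drawing
gl.bindTexture(gl.TEXTURE_2D, null);
// step 2: render to the target
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);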

The fastest way of loading data into a texture in WebGL

We are profiling our application and we're noticing that most of the CPU time is spent on calls to texImage2D, which is what we use to populate a texture. An example is shown below. I'd like to know: are there faster methods available in WebGL 1/2, or proprietary browser extensions, that would make this faster?
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D,
0,          // level
gl.R32F,    // internal format (WebGL2, one float channel)
width,
height,
0,          // border
gl.RED,     // format
gl.FLOAT,   // type
data);
gl.bindTexture(gl.TEXTURE_2D, null);

Shadow volume front/light cap appearing in stencil buffer

I'm trying to implement shadow volumes according to NVIDIA GPU Gems, Chapter 9, "Efficient Shadow Volume Rendering," on iPad, but I'm having issues with the front/light cap appearing in my stencil buffer.
I'm trying to render shadows on the box in the middle of the picture below. Shadows are being correctly generated on the right side of the box, but when I move the camera around, parts of the lit sides of the box are shadowed. It seems to me like it could be a problem with the resolution of the depth buffer not recognizing when the shadow volume is at the same depth as the box and should not be drawn, but using glDepthFunc(GL_LESS) when drawing the shadow volumes to try to correct this doesn't seem to change anything.
Here is a summary of my code:
// ambient pass: fill the depth buffer and the base lighting
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
glDisable(GL_BLEND);
[self drawAmbient];
// shadow volume pass: update the stencil only, no color or depth writes
glDepthMask(GL_FALSE);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xff);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
glDisable(GL_CULL_FACE);
[self drawShadowVolumes];
// lighting pass: additively light only the unshadowed (stencil == 0) pixels
glStencilFunc(GL_EQUAL, 0, 0xff);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_KEEP, GL_KEEP);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(GL_TRUE);
glDepthFunc(GL_EQUAL);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glEnable(GL_CULL_FACE);
[self drawDirectionalLight];
You're doing something wrong. For the standard z-fail technique you must render the shadow volumes in two passes: one for the "degenerate quads" (the extruded sides of the volume) and one for the "exact object geometry" with flat normals (the caps). I can see only one pass, for the degenerate quads; where is the pass for the exact geometry with the opposite flags in the stencil buffer?
The degenerate quads must be rendered with
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
and the exact geometry must be rendered with the opposite flags
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
The depth test must be GL_LESS or GL_LEQUAL, just as in ordinary geometry rendering.
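For reference, here is that two-pass stencil state as a WebGL-style sketch; the GL ES calls map one-to-one, and drawVolumeSides / drawVolumeCaps are hypothetical placeholders for your own draw calls:
gl.enable(gl.STENCIL_TEST);
gl.stencilFunc(gl.ALWAYS, 0, 0xff);
gl.colorMask(false, false, false, false);
gl.depthMask(false);
gl.depthFunc(gl.LESS);
// pass 1: the extruded sides ("degenerate quads")
gl.stencilOpSeparate(gl.FRONT, gl.KEEP, gl.DECR_WRAP, gl.KEEP);
gl.stencilOpSeparate(gl.BACK, gl.KEEP, gl.INCR_WRAP, gl.KEEP);
drawVolumeSides();
// pass 2: the exact cap geometry, with the opposite operations
gl.stencilOpSeparate(gl.FRONT, gl.KEEP, gl.INCR_WRAP, gl.KEEP);
gl.stencilOpSeparate(gl.BACK, gl.KEEP, gl.DECR_WRAP, gl.KEEP);
drawVolumeCaps();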

OpenGL ES 2.0 lines appear more jagged than Core Animation. Is anti-aliasing possible in iOS 4?

Is there a relatively simple way to implement anti-aliasing on iOS 4 using OpenGL ES 2.0?
I had a situation where I needed to abandon Core Animation in favor of OpenGL ES 2.0 to get true 3D graphics.
Things work, but I've noticed that simple 3D cubes rendered using Core Animation are much crisper than those produced with OpenGL, which have more jagged lines.
I read that iOS 4.0 supports anti-aliasing for GL_TRIANGLE_STRIP, and I found an online tutorial (see below for code from link) that looked promising, but I have not been able to get it working.
The first thing I noticed was all the OES suffixes, which appear to be a remnant of OpenGL ES 1.0.
Since everything I've done is for OpenGL ES 2.0, I tried removing every OES just to see what happened. Things compiled and built with zero errors or warnings but my graphics were no longer rendering.
If I keep the OES suffixes I get several errors and warnings of the following types:
Error - Use of undeclared identifier ''
Warning - Implicit declaration of function '' is invalid in C99
Including the ES 1.0 header files resulted in a clean build, but still nothing got rendered. It doesn't seem like I should need to include ES 1.0 headers to implement this functionality anyway.
So my question is: how do I get this to work, and will it actually address my issue?
Does the approach in the online tutorial I linked have the right idea, and I just messed up the implementation, or is there a better method?
Any guidance or details would be greatly appreciated.
Code from link above:
GLint backingWidth, backingHeight;
//Buffer definitions for the view.
GLuint viewRenderbuffer, viewFramebuffer;
//Buffer definitions for the MSAA
GLuint msaaFramebuffer, msaaRenderBuffer, msaaDepthBuffer;
//Create our viewFrame and render Buffers.
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);
//Bind the buffers.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer*)self.layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
//Generate our MSAA Frame and Render buffers
glGenFramebuffersOES(1, &msaaFramebuffer);
glGenRenderbuffersOES(1, &msaaRenderBuffer);
//Bind our MSAA buffers
glBindFramebufferOES(GL_FRAMEBUFFER_OES, msaaFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, msaaRenderBuffer);
// Allocate multisampled color storage (the msaaDepthBuffer comes below).
// 4 is the number of samples the MSAA buffer will use to produce one pixel in the render buffer.
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGB5_A1_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, msaaRenderBuffer);
glGenRenderbuffersOES(1, &msaaDepthBuffer);
//Bind the msaa depth buffer.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, msaaDepthBuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_DEPTH_COMPONENT16_OES, backingWidth , backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, msaaDepthBuffer);
- (void) draw
{
[EAGLContext setCurrentContext:context];
//
// Do your drawing here
//
// Apple (and the Khronos Group) encourages you to discard the depth
// renderbuffer contents whenever possible
GLenum attachments[] = {GL_DEPTH_ATTACHMENT_OES};
glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, attachments);
//Bind both MSAA and View FrameBuffers.
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
// Call a resolve to combine both buffers
glResolveMultisampleFramebufferAPPLE();
// Present final image to screen
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
This Apple guide is probably the modern version of what that tutorial was describing: https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html#//apple_ref/doc/uid/TP40008793-CH103-SW12 The technique suggested is multisampling, wherein you render 4 samples that are then resolved down to 1 pixel on screen.
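For anyone doing the same thing in the browser, here is a sketch of the equivalent multisample-and-resolve flow in WebGL2 (buffer names and the 4x sample count are illustrative, not from the tutorial):
const msaaFb = gl.createFramebuffer();
const msaaRb = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, msaaRb);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, width, height);
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFb);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, msaaRb);
// ... draw the scene into msaaFb ...
// resolve: blit the multisampled buffer into the default framebuffer
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFb);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, null);
gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height, gl.COLOR_BUFFER_BIT, gl.NEAREST);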

Re-use of a texture unit for different texture types breaks in Chrome

Check out the following test:
http://binks.knobbits.org/webgl/texture3.html
It's a simple test of cube textures. It also has a 2D texture in there for good measure.
I discovered that in some browsers (so far, Chrome) the image is not displayed properly if I re-use the same texture unit for drawing the cube texture as for the 2D texture.
There is a checkbox at the bottom marked "Use separate texture units for the cube texture on the sphere and the 2D texture on the floor" that shows this.
Is this a bug in chrome or in my code?
I don't see anything wrong with your code, but:
1) You can't use the same texture for 2 different targets. In other words, you can't do this:
tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex);
2) You can't use both a TEXTURE_2D and a CUBE_MAP on a texture unit AT THE SAME TIME.
You can assign both, but when you render you're only allowed to reference one of them in your shaders. In other words:
gl.activeTexture(gl.TEXTURE0);
tex1 = gl.createTexture();
tex2 = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex1);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex2);
is okay, but a shader that tried to use both textures from texture unit 0 would fail.
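A minimal sketch of the safe pattern (u_sampler2d and u_samplerCube are hypothetical uniform locations): give each sampler type its own unit, so no unit is referenced through two different targets.
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, tex1);       // the 2D texture lives on unit 0
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_CUBE_MAP, tex2); // the cube map lives on unit 1
gl.uniform1i(u_sampler2d, 0);   // the sampler2D reads unit 0
gl.uniform1i(u_samplerCube, 1); // the samplerCube reads unit 1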
I reorganized the code of the drawing functions a bit, and now they work.
Square:
TexturedSquare.prototype.draw = function() {
gl.bindBuffer(gl.ARRAY_BUFFER,this.v);
gl.enableVertexAttribArray(gl.va_vertex);
gl.enableVertexAttribArray(gl.va_normal);
gl.enableVertexAttribArray(gl.va_tex1pos);
gl.vertexAttribPointer(gl.va_vertex,4,gl.FLOAT,false,10*4,0);
gl.vertexAttribPointer(gl.va_normal,4,gl.FLOAT,false,10*4,4*4);
gl.vertexAttribPointer(gl.va_tex1pos,2,gl.FLOAT,false,10*4,4*8);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D,this.texture);
gl.bindTexture(gl.TEXTURE_CUBE_MAP,null);
gl.uniform1i(shader.textures,1);      // 2D texturing on
gl.uniform1i(shader.texture1,0);      // the 2D sampler reads unit 0
gl.uniform1i(shader.cube_textures,0); // cube texturing off
gl.uniform1i(shader.cubeTexture0,1);  // park the unused cube sampler on unit 1
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER,this.e);
gl.drawElements(gl.TRIANGLES,this.l,gl.UNSIGNED_SHORT,0);
gl.disableVertexAttribArray(gl.va_tex1pos);
}
Sphere:
GLHTexturedSphere.prototype.draw = function() {
gl.bindBuffer(gl.ARRAY_BUFFER,this.vbuf);
gl.enableVertexAttribArray(gl.va_vertex);
gl.enableVertexAttribArray(gl.va_normal);
gl.enableVertexAttribArray(this.va_cubetex0pos);
gl.vertexAttribPointer(gl.va_vertex,4,gl.FLOAT,false,8*4,0);
gl.vertexAttribPointer(gl.va_normal,4,gl.FLOAT,false,8*4,4*4);
gl.vertexAttribPointer(this.va_cubetex0pos,4,gl.FLOAT,false,8*4,4*4);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D,null);
gl.bindTexture(gl.TEXTURE_CUBE_MAP,this.texture);
gl.uniform1i(shader.textures,0);      // 2D texturing off
gl.uniform1i(shader.texture1,1);      // park the unused 2D sampler on unit 1
gl.uniform1i(shader.cube_textures,1); // cube texturing on
gl.uniform1i(shader.cubeTexture0,0);  // the cube sampler reads unit 0
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER,this.ebuf);
gl.drawElements(gl.TRIANGLES,this.l,gl.UNSIGNED_SHORT,0);
gl.disableVertexAttribArray(gl.va_cubetex0pos);
}
Both of them now use TEXTURE0. Please check the WebGL state and the uniform values.
The original code is a bit hard for me to follow, sorry, but I think the problem is that the texture1 and cubeTexture0 uniforms were being set to the same value.
