Shadow volume front/light cap appearing in stencil buffer - iOS

I'm trying to implement shadow volumes on the iPad according to NVIDIA GPU Gems, Chapter 9, "Efficient Shadow Volume Rendering", but I'm having issues with the front/light cap appearing in my stencil buffer.
I'm trying to render shadows on the box in the middle of the picture below. Shadows are generated correctly on the right side of the box, but when I move the camera around, parts of the lit sides of the box become shadowed. It seems like it could be a depth-buffer precision problem, where the shadow volume is at the same depth as the box and should not be drawn; I've used glDepthFunc(GL_LESS) when drawing the shadow volumes to try to correct this, but it doesn't seem to change anything.
Here is a summary of my code:
// Ambient pass: lay down depth and ambient color
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
glDisable(GL_BLEND);
[self drawAmbient];

// Stencil pass: draw the shadow volumes into the stencil buffer only
glDepthMask(GL_FALSE);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xff);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
glDisable(GL_CULL_FACE);
[self drawShadowVolumes];

// Lighting pass: additively blend the directional light where stencil == 0
glStencilFunc(GL_EQUAL, 0, 0xff);
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_KEEP, GL_KEEP);
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glDepthMask(GL_TRUE);
glDepthFunc(GL_EQUAL);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glEnable(GL_CULL_FACE);
[self drawDirectionalLight];

You're doing something wrong. For the default z-fail technique you must have two passes of shadow volume rendering: one for the "degenerate quads" (the shadow volume sides) and one for the "exact object geometry" with flat normals (the shadow caps). I can see only one pass for the degenerate quads, but where is the pass for the exact geometry with the opposite stencil operations?
The degenerate quads must be rendered with
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
and the exact geometry must be rendered with the opposite operations:
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
The depth test must be GL_LESS or GL_LEQUAL, just as for ordinary geometry rendering.
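Put together, your stencil pass would then be split into two draws, roughly like this (a sketch only; drawShadowVolumeSides and drawShadowCaps are placeholders for your own geometry submission, not methods from your code):
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
glDepthFunc(GL_LESS);
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 0, 0xff);
glDisable(GL_CULL_FACE);

// Pass 1: the degenerate quads (the extruded volume sides)
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
[self drawShadowVolumeSides];   // placeholder for the side-quad geometry

// Pass 2: the exact object geometry (the caps), with the opposite operations
glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_INCR_WRAP_OES, GL_KEEP);
glStencilOpSeparate(GL_BACK, GL_KEEP, GL_DECR_WRAP_OES, GL_KEEP);
[self drawShadowCaps];          // placeholder for the cap geometry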

Related

Why can't I render to a floating point texture in webgl?

I am creating a framebuffer and attaching a texture to it. Here is the texture that I would like to attach (but it is not working):
gl.texImage2D(gl.TEXTURE_2D, 0, gl.R32F, sphere_texture.width, sphere_texture.height, 0, gl.RED, gl.FLOAT, null);
However, when I use this as the texture format, it works:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, sphere_texture.width, sphere_texture.height, 0, gl.RGB, gl.UNSIGNED_BYTE, null)
Does anyone know how I could render to a framebuffer float texture?
This is how I am creating the framebuffer:
framebuffer = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, scale_factor_texture, 0);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
For WebGL2 contexts (which I assume you're working with, going by your intended use of the R32F format), you need to enable the EXT_color_buffer_float extension for those formats to be renderable:
if (!ctx.getExtension('EXT_color_buffer_float'))
throw new Error('Rendering to floating point textures is not supported on this platform');
For WebGL1 contexts there's WEBGL_color_buffer_float, as well as implicit support when enabling OES_texture_float (which you can probe for by attaching such a texture to a framebuffer and checking its completeness); however, with WebGL 1, rendering to single-channel textures is not supported either way.

Antialiased circle with texture mapping

I have a question about a circle with texture mapping. My code works well, but the edges are not antialiased, so the circle is not smooth and does not look good. I have been reading for about three hours now and found some solutions, but I don't know how to implement them in my code. Two of the solutions sounded pretty good.
The first was to bind a blurry texture instead of a sharp one to get smooth edges.
The second was to add color vertices on the edges with opacity to get smooth edges (see the sketch further below). My current draw function looks like this:
CC_NODE_DRAW_SETUP();
[self.shaderProgram use];
ccGLBindTexture2D( _texture.name );
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
ccGLBlendFunc( _blendFunc.src, _blendFunc.dst);
ccGLEnableVertexAttribs( kCCVertexAttribFlag_Position | kCCVertexAttribFlag_TexCoords );
// Send the texture coordinates to OpenGL
glVertexAttribPointer(kCCVertexAttrib_TexCoords, 2, GL_FLOAT, GL_FALSE, 0, _textCoords);
// Send the polygon coordinates to OpenGL
glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, _triangleFanPos);
// Draw it
glDrawArrays(GL_TRIANGLE_FAN, 0, _numOfSegements+2);
I am currently using cocos2d version 3. I asked a similar question before, and the only solution I found was to enable multisampling in cocos2d, but that drops my fps to 30.
So maybe there is someone who can help me.
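To make the second idea more concrete, here is a rough sketch of building a thin transparent rim around the circle; NUM_SEGMENTS, RIM, center and radius are placeholders, not values from my code:
// Sketch only: after drawing the opaque fan, draw a thin GL_TRIANGLE_STRIP
// ring whose outer vertices fade to alpha 0, giving a smooth edge.
#define NUM_SEGMENTS 64
#define RIM 1.5f   // rim width in points (assumption)
GLfloat rimVerts[(NUM_SEGMENTS + 1) * 2 * 2];   // x,y for inner and outer vertex
GLfloat rimColors[(NUM_SEGMENTS + 1) * 2 * 4];  // r,g,b,a per vertex
for (int i = 0; i <= NUM_SEGMENTS; i++) {
    float a = 2.0f * M_PI * i / NUM_SEGMENTS;
    float c = cosf(a), s = sinf(a);
    // inner vertex sits on the circle edge and is fully opaque
    rimVerts[i * 4 + 0] = center.x + radius * c;
    rimVerts[i * 4 + 1] = center.y + radius * s;
    // outer vertex sits slightly outside and is fully transparent
    rimVerts[i * 4 + 2] = center.x + (radius + RIM) * c;
    rimVerts[i * 4 + 3] = center.y + (radius + RIM) * s;
    for (int k = 0; k < 2; k++) {
        int v = (i * 2 + k) * 4;
        rimColors[v + 0] = rimColors[v + 1] = rimColors[v + 2] = 1.0f;
        rimColors[v + 3] = (k == 0) ? 1.0f : 0.0f;   // fade to transparent
    }
}
// then bind rimVerts/rimColors to the position and color attributes and call
// glDrawArrays(GL_TRIANGLE_STRIP, 0, (NUM_SEGMENTS + 1) * 2);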

OpenGL ES iOS drawing performance a lot slower with VBOs than without

I've recently switched the drawing in my current project from client-side memory arrays to VBOs. To my surprise, the framerate dropped significantly, from 60 fps to 30 fps, when drawing a model with 1200 vertices 8 times. Further profiling showed that glDrawElements took 10 times as long when using VBOs as when drawing from memory.
I am really puzzled as to why this is happening. Does anyone know what could cause such a performance decrease?
I am testing on an iPhone 5 running iOS 6.1.2.
I've isolated my VBO handling into a single function, where I create the vertex/index buffers once, statically, at the top of the function. I can switch between normal and VBO rendering with #ifdef USE_VBO:
- (void)drawDuck:(Toy*)toy reflection:(BOOL)reflection
{
    ModelOBJ* model = _duck[0].model;
    int stride = sizeof(ModelOBJ::Vertex);

#define USE_VBO
#ifdef USE_VBO
    static bool vboInitialized = false;
    static unsigned int vbo, ibo;
    if (!vboInitialized) {
        vboInitialized = true;

        // Generate VBO
        glGenBuffers(1, &vbo);
        int numVertices = model->getNumberOfVertices();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, stride*numVertices, model->getVertexBuffer(), GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        // Generate index buffer
        glGenBuffers(1, &ibo);
        int numIndices = model->getNumberOfIndices();
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned short)*numIndices, model->getIndexBuffer(), GL_STATIC_DRAW);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    }
#endif

    [self setupDuck:toy reflection:reflection];

#ifdef USE_VBO
    // Draw with VBO
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

    glEnableVertexAttribArray(GC_SHADER_ATTRIB_POSITION);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_NORMAL);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_TEX_COORD);

    glVertexAttribPointer(GC_SHADER_ATTRIB_POSITION, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, position));
    glVertexAttribPointer(GC_SHADER_ATTRIB_TEX_COORD, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, texCoord));
    glVertexAttribPointer(GC_SHADER_ATTRIB_NORMAL, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, normal));

    glDrawElements(GL_TRIANGLES, model->getNumberOfIndices(), GL_UNSIGNED_SHORT, 0);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
#else
    // Draw with array
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_POSITION);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_NORMAL);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_TEX_COORD);

    glVertexAttribPointer(GC_SHADER_ATTRIB_POSITION, 3, GL_FLOAT, GL_FALSE, stride, model->getVertexBuffer()->position);
    glVertexAttribPointer(GC_SHADER_ATTRIB_TEX_COORD, 2, GL_FLOAT, GL_FALSE, stride, model->getVertexBuffer()->texCoord);
    glVertexAttribPointer(GC_SHADER_ATTRIB_NORMAL, 3, GL_FLOAT, GL_FALSE, stride, model->getVertexBuffer()->normal);

    glDrawElements(GL_TRIANGLES, model->getNumberOfIndices(), GL_UNSIGNED_SHORT, model->getIndexBuffer());
#endif
}
ModelOBJ::Vertex is just 3, 2, 3 floats for position, texcoord, and normal. Indices are unsigned short.
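For reference, that layout corresponds to something like the following (a sketch; the actual declaration may differ):
// Presumed layout of ModelOBJ::Vertex based on the description above.
struct Vertex {
    float position[3];   // 12 bytes
    float texCoord[2];   //  8 bytes
    float normal[3];     // 12 bytes
};                       // stride = 32 bytes, each attribute 4-byte aligned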
UPDATE: I've now wrapped the draw setup (i.e. the attrib binding calls) into a VAO, and now performance is OK, even slightly better than drawing from main memory. So my conclusion is that VBO support without VAOs is broken on iOS. Is that assumption correct?
It is likely that the driver was falling back to software vertex submission (a CPU copy from the VBO into the command buffer). This can be worse than using vertex arrays in client memory, as client memory is usually cached, while VBO contents are typically in write-combined memory on iOS.
When using the CPU Sampler in Instruments, you'll see a ton of time underneath glDrawArrays/glDrawElements in gleRunVertexSubmitARM.
The most common reason to fall back to software submission is an unaligned attribute (current iOS devices require each attribute to be 4-byte aligned), but that doesn't appear to be the case for the three attributes you've shown. After that, the next most common cause is mixing client arrays and buffer objects in a single vertex array configuration.
In this case, you probably have a stray vertex attribute binding: some other array element is likely still enabled and pointing to a client array, causing everything to fall off the hardware DMA path. By creating a VAO, you've either switched away from the misconfigured default VAO, or alternatively you are trying to enable a client array in a VAO but are being saved because client arrays are deprecated and do not work with VAOs (an INVALID_OPERATION error is generated instead).
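For reference, a VAO-based setup on iOS (via the OES_vertex_array_object extension) could look roughly like this, reusing vbo, ibo, stride and the GC_SHADER_ATTRIB_* names from the question; this is only a sketch, not the exact code from the update:
// One-time setup: capture buffer bindings and attribute pointers in a VAO.
static GLuint vao = 0;
if (vao == 0) {
    glGenVertexArraysOES(1, &vao);
    glBindVertexArrayOES(vao);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_POSITION);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_NORMAL);
    glEnableVertexAttribArray(GC_SHADER_ATTRIB_TEX_COORD);
    glVertexAttribPointer(GC_SHADER_ATTRIB_POSITION, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, position));
    glVertexAttribPointer(GC_SHADER_ATTRIB_TEX_COORD, 2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, texCoord));
    glVertexAttribPointer(GC_SHADER_ATTRIB_NORMAL, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(ModelOBJ::Vertex, normal));

    glBindVertexArrayOES(0);
}

// Per draw: bind the VAO and draw; no stray client-array state can leak in.
glBindVertexArrayOES(vao);
glDrawElements(GL_TRIANGLES, model->getNumberOfIndices(), GL_UNSIGNED_SHORT, 0);
glBindVertexArrayOES(0);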
When you populate your index buffer with glBufferData, the second argument should be 2*numIndices rather than stride*numIndices.
Since your index buffer is much larger than it needs to be, this could explain your performance problem.

glDrawArrays from iOS to OSX

I'm trying to get a game I made for iOS to work on OS X, and so far I have been able to get everything working except for the drawing of some randomly generated hills using a GL-bound texture.
It works perfectly on iOS, but somehow this part is the only thing not visible when the app is run on OS X. I checked all the coordinates and color values, so I'm pretty sure it has to do with OpenGL somehow.
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);
glBindTexture(GL_TEXTURE_2D, _textureSprite.texture.name);
glColor4f(_terrainColor.r,_terrainColor.g,_terrainColor.b, 1);
glVertexPointer(2, GL_FLOAT, 0, _hillVertices);
glTexCoordPointer(2, GL_FLOAT, 0, _hillTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nHillVertices);
glEnableClientState(GL_COLOR_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
You're disabling the texture coordinate (and color) array along with the texturing unit, yet you are binding a texture coordinate pointer.
Is this really what you intend to do?
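If texturing is actually intended here, the usual pattern would be closer to this (a sketch based on the snippet above, enabling the needed client states before the pointers are used):
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_COLOR_ARRAY);   // a constant glColor4f is used instead
glBindTexture(GL_TEXTURE_2D, _textureSprite.texture.name);
glColor4f(_terrainColor.r, _terrainColor.g, _terrainColor.b, 1);
glVertexPointer(2, GL_FLOAT, 0, _hillVertices);
glTexCoordPointer(2, GL_FLOAT, 0, _hillTexCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)_nHillVertices);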
Apparently it was being drawn after all, only as a 1/2-pixel line. Somehow there is some scaling on the vertices in effect; I will have to check my code.

OpenGL ES 2.0 lines appear more jagged than Core Animation. Is anti-aliasing possible in iOS 4?

Is there a relatively simple way to implement anti-aliasing on iOS 4 using OpenGL ES 2.0?
I had a situation where I needed to abandon Core Animation in favor of OpenGL ES 2.0 to get true 3D graphics.
Things work, but I've noticed that simple 3D cubes rendered using Core Animation are much crisper than those produced with OpenGL, which have more jagged lines.
I read that iOS 4.0 supports anti-aliasing for GL_TRIANGLE_STRIP, and I found an online tutorial (see below for code from link) that looked promising, but I have not been able to get it working.
The first thing I noticed was all the OES suffixes, which appear to be a remnant of OpenGL ES 1.0.
Since everything I've done is for OpenGL ES 2.0, I tried removing every OES just to see what happened. Things compiled and built with zero errors or warnings, but my graphics no longer rendered.
If I keep the OES suffixes I get several errors and warnings of the following types:
Error - Use of undeclared identifier ''
Warning - Implicit declaration of function '' is invalid in C99
Including the ES 1.0 header files resulted in a clean build, but still nothing got rendered. It doesn't seem like I should need to include ES 1.0 headers to implement this functionality anyway.
So my question is: how do I get this to work, and will it actually address my issue?
Does the approach in the online tutorial I linked have the right idea, and I just messed up the implementation, or is there a better method?
Any guidance or details would be greatly appreciated.
Code from link above:
GLint backingWidth, backingHeight;
//Buffer definitions for the view.
GLuint viewRenderbuffer, viewFramebuffer;
//Buffer definitions for the MSAA
GLuint msaaFramebuffer, msaaRenderBuffer, msaaDepthBuffer;
//Create our viewFrame and render Buffers.
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);
//Bind the buffers.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer*)self.layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
//Generate our MSAA Frame and Render buffers
glGenFramebuffersOES(1, &msaaFramebuffer);
glGenRenderbuffersOES(1, &msaaRenderBuffer);
//Bind our MSAA buffers
glBindFramebufferOES(GL_FRAMEBUFFER_OES, msaaFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, msaaRenderBuffer);
// Generate the msaaDepthBuffer.
// 4 will be the number of pixels that the MSAA buffer will use in order to make one pixel on the render buffer.
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGB5_A1_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, msaaRenderBuffer);
glGenRenderbuffersOES(1, &msaaDepthBuffer);
//Bind the msaa depth buffer.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, msaaDepthBuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_DEPTH_COMPONENT16_OES, backingWidth , backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, msaaDepthBuffer);
- (void) draw
{
    [EAGLContext setCurrentContext:context];

    //
    // Do your drawing here
    //

    // Apple (and the Khronos Group) encourages you to discard the depth
    // render buffer contents whenever possible
    GLenum attachments[] = {GL_DEPTH_ATTACHMENT_OES};
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, attachments);

    // Bind both MSAA and view framebuffers.
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
    glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);

    // Call a resolve to combine both buffers
    glResolveMultisampleFramebufferAPPLE();

    // Present final image to screen
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
This https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html#//apple_ref/doc/uid/TP40008793-CH103-SW12 is probably the modern version of what that tutorial was describing. The suggested technique is multisampling, where you render 4 samples per pixel that are then resolved down to 1 pixel on screen.
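For what it's worth, a rough ES 2.0 translation of the tutorial code above might look like this (framebuffer objects are core in ES 2.0, so the OES suffixes disappear, while the APPLE/EXT multisample entry points remain); this is a sketch that reuses viewFramebuffer, viewRenderbuffer, backingWidth and backingHeight from the tutorial:
// One-time setup of the multisampled framebuffer (OpenGL ES 2.0 names).
GLuint msaaFramebuffer, msaaRenderBuffer, msaaDepthBuffer;
glGenFramebuffers(1, &msaaFramebuffer);
glGenRenderbuffers(1, &msaaRenderBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, msaaRenderBuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGB5_A1, backingWidth, backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaRenderBuffer);
glGenRenderbuffers(1, &msaaDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, msaaDepthBuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msaaDepthBuffer);

// Per frame, after drawing into msaaFramebuffer: resolve, discard, present.
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
glResolveMultisampleFramebufferAPPLE();
const GLenum discards[] = { GL_DEPTH_ATTACHMENT };
glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, discards);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];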
