Offscreen Framebuffers in OpenGL - iOS

I'm creating an iPhone game using OpenGL, and I want to draw onto an offscreen framebuffer, then use that framebuffer as a texture when drawing to the actual screen. I based my code on Apple's documentation and the GLSprite example, but it seems I'm not switching the drawing target correctly, as I only get a blank texture. The code I'm using is below. What is wrong? What is the best way to render to a texture on the fly?
The "Creating an Offscreen Framebuffer" code below is giving me error code 0x8CD6 (GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT_OES).
Creating the Screen Framebuffer
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(id<EAGLDrawable>)self.layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
if(glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES) != GL_FRAMEBUFFER_COMPLETE_OES) {
    NSLog(@"failed to make complete framebuffer object %x", glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES));
    return NO;
}
Creating an Offscreen Framebuffer
glGenFramebuffers(1, &offscreenFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
glGenTextures(1,&framebufferTexture);
glBindTexture(GL_TEXTURE_2D, framebufferTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, framebufferTexture, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if(status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"ERROR: %x", status);
}
Drawing Loop
glBindTexture(GL_TEXTURE_2D,textureFromFile); //Switch to a texture from a file
glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFramebuffer); //Switch to the offscreen framebuffer
//Render the texture to be used later
glBindTexture(GL_TEXTURE_2D,framebufferTexture); //Switch to the offscreen framebuffer's texture
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer); //Switch to the screen framebuffer
//Do the drawing using the texture rendered just before, and present this to the screen.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];

The problem is simple: you've forgotten to draw to the screen. You know, the default framebuffer, the one you get when you call glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0); the thing you were rendering to just fine before writing this code.
There is no such thing as an "onscreen" framebuffer object. All user-defined FBOs are, by definition, off-screen.
You shouldn't have this viewFramebuffer FBO at all. Instead, you should draw to your texture, then use that texture to draw to the default framebuffer.
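For reference, here is a minimal sketch of that two-pass flow, using the question's variable names. (On iOS under EAGL there is no displayable default framebuffer, so the drawable-backed viewFramebuffer stands in for it here. Also note that a freshly created texture defaults to a mipmapping GL_TEXTURE_MIN_FILTER; without mipmaps, sampling it returns black, so setting GL_LINEAR is usually needed on a render-target texture.)
// Pass 1: render into the offscreen texture.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, offscreenFramebuffer);
glViewport(0, 0, 64, 64); // match the texture's size
glClear(GL_COLOR_BUFFER_BIT);
// ... draw the content that should end up in framebufferTexture ...
// (framebufferTexture must not be bound as a source during this pass;
// sampling the texture currently being rendered to is undefined.)

// Pass 2: draw to the screen framebuffer, sampling the texture.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);
glBindTexture(GL_TEXTURE_2D, framebufferTexture);
// ... draw a quad textured with framebufferTexture ...
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER_OES];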

Given that you're getting a GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT_OES, you at least have a problem with the creation of your offscreen framebuffer. Looking at your code, nowhere do I see the creation of a color renderbuffer for this offscreen framebuffer (when rendering to a CAEAGLLayer, the layer itself creates the color renderbuffer).
I believe you'll need to add something like the following to your offscreen framebuffer creation code (you will need to add the OES suffix to one or more of these functions, because they come from an OpenGL ES 2.0 application):
glGenRenderbuffers(1, &renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, bufferSize.width, bufferSize.height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbuffer);
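For an OpenGL ES 1.1 context like the one in the question, the OES-suffixed equivalents might look like this (a sketch; renderbuffer and bufferSize are placeholders, and GL_RGBA8_OES requires the GL_OES_rgb8_rgba8 extension):
GLuint renderbuffer;
glGenRenderbuffersOES(1, &renderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, renderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, bufferSize.width, bufferSize.height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, renderbuffer);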

Related

OpenGL ES drawing color changing with Xcode 9

My app uses OpenGL ES for rendering a screen where the user draws his signature.
My code has been working fine for over 3 years, but after upgrading to Xcode 9 I am getting a strange color change in the lines I draw (both on the simulator and the device).
The lines used to have a pure red, green, or blue color, and now they have a grey/black line mixed in.
I do not know much about OpenGL and my code was put together from sample apps and tutorials.
What might have caused this change?
This is the drawing after upgrading to Xcode 9:
The code to change color:
// Change the brush color
- (void)changeBrushColor:(NSString *) newColor
{
SEL setcolor = NSSelectorFromString(newColor);
UIColor *nColor = [UIColor performSelector:setcolor];
CGColorRef color = nColor.CGColor;
const CGFloat *components = CGColorGetComponents(color);
brushColor[0] = (GLfloat) components[0];
brushColor[1] = (GLfloat) components[1];
brushColor[2] = (GLfloat) components[2];
brushColor[3] = (GLfloat) components[3];
if (initialized) {
glUseProgram(program[PROGRAM_POINT].id);
glUniform4fv(program[PROGRAM_POINT].uniform[UNIFORM_VERTEX_COLOR], 1, brushColor);
}
}
The code to draw:
CGPoint newMidPoint = CGPointMake(x/scale, y/scale);
[currentStroke insertObject:[NSValue valueWithCGPoint:newMidPoint] atIndex:2];
[currentStroke removeObjectAtIndex:0];
[currentStroke removeObjectAtIndex:0];
//NSLog(@" after draw currentstroke %@: ", currentStroke);
// Load data to the Vertex Buffer Object & Draw it
//glUseProgram(program[PROGRAM_POINT].id);
//glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, vertexCount*2*sizeof(GLfloat), vertexBuffer, GL_DYNAMIC_DRAW);
//glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, GL_FALSE, 0, 0);
// Draw
glDrawArrays(GL_POINTS, 0, (int)vertexCount);
// Display the buffer
//glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
init code:
- (BOOL)initGL
{
// Generate IDs for a framebuffer object and a color renderbuffer
glGenFramebuffers(1, &viewFramebuffer);
glGenRenderbuffers(1, &viewRenderbuffer);
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
// This call associates the storage for the current render buffer with the EAGLDrawable (our CAEAGLLayer)
// allowing us to draw into a buffer that will later be rendered to screen wherever the layer is (which corresponds with our view).
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(id<EAGLDrawable>)self.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewRenderbuffer);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
NSLog(#"failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
return NO;
}
// Setup the view port in Pixels
glViewport(0, 0, backingWidth, backingHeight);
// Create a Vertex Buffer Object to hold our data
glGenBuffers(1, &vboId);
// Load the brush texture
brushTexture = [self textureFromName:@"brush"];
// Load shaders
[self setupShaders];
// Enable blending and set a blending function appropriate for premultiplied alpha pixel data
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
//moved from draw
glUseProgram(program[PROGRAM_POINT].id);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableVertexAttribArray(ATTRIB_VERTEX);
// //***** for testing ******
// [self sample];
// //***** for testing ******
return YES;
}
I finally found a solution to my problem: I used a brush texture image that was 64x64 instead of 32x32, and that resolved the issue. I am not sure why this worked and would still like some comments on what caused the problem.

How to restore a GL_RENDERBUFFER?

I am working on storing and restoring my OpenGL ES based application's state.
I have a function that saves the GL_RENDERBUFFER by dumping its data, using the following code:
glBindFramebuffer(GL_FRAMEBUFFER, fboTextureBufferData.framebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, fboTextureBufferData.colorbuffer);
GLint x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte*)malloc(dataLength * sizeof(GLubyte));
// Read pixel data from the framebuffer
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);
I don't see a glWritePixels function. What is the best way to repopulate the GL_RENDERBUFFER with the GLubyte data populated above? An example would be greatly appreciated.
EDIT 3:
Here is how I am attempting to configure the texture render buffer, and the function used to draw it. As noted in the code, if I specify GL_COLOR_ATTACHMENT1 for the glFramebufferTexture2D parameter, the stored pixel data is restored but I can't get any updates to draw. But if I use GL_COLOR_ATTACHMENT0 instead, I get drawing updates but no pixel data restored.
I have tried various combinations (for instance also using GL_COLOR_ATTACHMENT1 for the glFramebufferRenderbuffer parameter) but then I get an invalid frame buffer error when attempting to render. It seems I am so close, but can't figure out how to get them both restoring and rendering working together.
- (bool)configureRenderTextureBuffer {
[EAGLContext setCurrentContext:context];
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
glGenFramebuffers(1, &fboTextureBufferData.framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, fboTextureBufferData.framebuffer);
glGenRenderbuffers(1, &fboTextureBufferData.colorbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, fboTextureBufferData.colorbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, backingWidth, backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, fboTextureBufferData.colorbuffer);
// Generate texture name, stores in .textureID
glGenTextures(1, &fboTextureBufferData.textureID);
glBindTexture(GL_TEXTURE_2D, fboTextureBufferData.textureID);
////////////////// Read Existing texture data //////////////////
NSString *dataPath = [TDTDeviceUtilitesLegacy documentDirectory]; //This just returns the app's document directory
NSData *data = [NSData dataWithContentsOfFile:[NSString stringWithFormat:@"%@/buffer.data", dataPath]];
GLubyte *pixelData = (GLubyte*)[data bytes];
// If I use GL_COLOR_ATTACHMENT1 here, my existing pixel data is restored
// but no drawing occurs. If I use GL_COLOR_ATTACHMENT0, then data isn't
// restored but drawing updates work
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, fboTextureBufferData.textureID, 0);
// Populate with existing data
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, backingWidth, backingHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, &pixelData[0]); //&image[0]
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER) ;
if(status != GL_FRAMEBUFFER_COMPLETE) {
NSLog(#"failed to make complete render texture framebuffer object %x", status);
return false;
}
return true;
}
Here is the code for rendering. The viewFramebuffer is attached to GL_COLOR_ATTACHMENT0 and is used so the texture frame buffer can be zoomed and positioned inside the view.
- (void)renderTextureBuffer {
//Bind the texture frame buffer, if I don't use this, I can't get it to draw
//glBindFramebuffer(GL_FRAMEBUFFER, fboTextureBufferData.framebuffer);
//If I use this instead of binding the framebuffer above, I get no drawing and black background
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fboTextureBufferData.textureID, 0);
renderParticlesToTextureBuffer();
//Bind the view frame buffer.
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
drawFboTexture();
[context presentRenderbuffer:GL_RENDERBUFFER];
}
If you need to write data directly to a render target, using a renderbuffer is not a good option. In this case, it's much better to use a texture instead.
Using a texture as a FBO attachment works very similarly to using a renderbuffer. Where you currently use glRenderbufferStorage() to allocate a renderbuffer of the needed dimensions, you create a texture instead, and allocate its storage with:
GLuint texId = 0;
glGenTextures(1, &texId);
glBindTexture(GL_TEXTURE_2D, texId); // bind the texture before allocating its storage
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, 0); // GL_RGBA pairs with GL_UNSIGNED_BYTE; GL_UNSIGNED_SHORT_5_6_5 is valid only with GL_RGB
Then you attach it to the framebuffer with:
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texId, 0);
Now, if you later want to fill your render target with data, you can simply call glTexImage2D() again, or even better glTexSubImage2D() if the size is unchanged, to do that.
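Under those assumptions, restoring the saved pixels could be as simple as the following sketch (fboTextureBufferData, backingWidth/backingHeight, and pixelData are the names from the question):
// Re-upload the previously saved RGBA pixels into the texture backing the FBO.
// glTexSubImage2D reuses the existing storage, so the texture stays attached and renderable.
glBindTexture(GL_TEXTURE_2D, fboTextureBufferData.textureID);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, backingWidth, backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, pixelData);
glBindTexture(GL_TEXTURE_2D, 0);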

OpenGL ES 2.0 renderbufferStorage returns false

I'm trying to create a framebuffer to display a 3D model in my iOS app, but when the framebuffer is created, renderbufferStorage returns false. My code is based on the GLCameraRipple sample code.
The code to create the frame buffer is the following:
- (void)createFramebuffer
{
if (_context && !defaultFramebuffer) {
[EAGLContext setCurrentContext:_context];
// Create default framebuffer object
glGenFramebuffers(1, &defaultFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
// Create colour renderbuffer and allocate backing store
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
// Allocate the renderbuffer's storage (shared with the drawable object)
CAEAGLLayer *layer = (CAEAGLLayer*)self.glkView.layer;
BOOL success = [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
if(!success) {
NSLog(#"Error rendering buffer storage");
}
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, layer.bounds.size.width, layer.bounds.size.height);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
// Create the depth render buffer and allocate storage
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, framebufferWidth, framebufferHeight);
// Attach colour and depth render buffers to the frame buffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderbuffer);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
NSLog(#"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
}
}
Any help will be appreciated.
Thanks in advance.
I was struggling with this too and found a solution that worked for me. I was using
var openGLView = OpenGLView()
and renderbufferStorage returned false. After I changed that to
var openGLView: OpenGLView!
renderbufferStorage returned true.
You can't use the standard glRenderbufferStorage() to allocate storage for a drawable-backed renderbuffer on iOS. You have to allocate it through an instance of EAGLContext, using renderbufferStorage:fromDrawable:.
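As a sketch of what that means for the code above (using the question's names; this is an assumption about intent, not a tested fix): renderbufferStorage:fromDrawable: is itself the storage allocation, so the glRenderbufferStorage() call that follows it should be dropped, since it replaces the drawable-backed storage with a plain offscreen one.
// Allocate the color renderbuffer's storage from the CAEAGLLayer.
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
CAEAGLLayer *layer = (CAEAGLLayer *)self.glkView.layer;
if (![_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer]) {
    NSLog(@"Error allocating renderbuffer storage");
}
// Do NOT call glRenderbufferStorage() on this renderbuffer afterwards;
// that would discard the drawable-backed storage just created.
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
If the call still returns false, one common cause is that the layer's bounds are zero-sized because the view has not been laid out yet.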

Does iOS5 support both GL_STENCIL_INDEX and GL_STENCIL_INDEX8?

With the following code:
GLuint viewRenderbuffer, viewFramebuffer, viewDepthbuffer, stencilBuffer;
// Create the framebuffer object
glGenFramebuffers(1, &viewFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
// Create a render buffer and bind it to the FBO.
glGenRenderbuffers(1, &viewRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)self.layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, viewRenderbuffer);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &imageWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &imageHeight);
// Create a depth buffer and bind it to the FBO.
glGenRenderbuffers(1, &viewDepthbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, viewDepthbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, imageWidth, imageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, viewDepthbuffer);
// Create a stencil buffer to crop the rendered scene and bind it to the FBO.
glGenRenderbuffers(1, &stencilBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, stencilBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX, imageWidth, imageHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, stencilBuffer);
// Check the FBO.
if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
NSLog(#"Failure with framebuffer generation: %d", glCheckFramebufferStatus(GL_FRAMEBUFFER));
}
With GL_STENCIL_INDEX, I get the GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT error. With GL_STENCIL_INDEX8, I get the GL_FRAMEBUFFER_UNSUPPORTED error. Both are caused by the last glFramebufferRenderbuffer() function, which should bind the stencil buffer to the FBO.
Furthermore, when I check the GL_RENDERBUFFER_STENCIL_SIZE value, I get the right value (8) with GL_STENCIL_INDEX8, but I get 0 with GL_STENCIL_INDEX.
With this, I can't get a functional and complete FBO with a stencil buffer. Is it due to the GL_STENCIL_INDEX? Which one should be used here?
It seems that in OpenGL ES 2.0, at least on iOS (not sure about other platforms), you have to combine the depth buffer and the stencil buffer into a single buffer.
I listed all the extensions supported on my device (iPhone 4 with iOS 5.0.1) and the only one related to the stencil buffer is:
GL_OES_packed_depth_stencil
This suggests that you have to create a combined depth+stencil buffer (the following is taken from the iPhone 3D Programming book; color is the previously created color renderbuffer):
// Create a packed depth stencil buffer.
GLuint depthStencil;
glGenRenderbuffersOES(1, &depthStencil);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depthStencil);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH24_STENCIL8_OES, width, height);
// Create the framebuffer object.
GLuint framebuffer;
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES,
GL_RENDERBUFFER_OES, color);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES,
GL_RENDERBUFFER_OES, depthStencil);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_STENCIL_ATTACHMENT_OES,
GL_RENDERBUFFER_OES, depthStencil);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, color);
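If you want to guard this path at run time, you can query the extension list before creating the packed buffer; a minimal sketch (the plain strstr() check is adequate here because no other extension name contains this one as a substring):
#include <string.h>
const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
if (extensions == NULL || strstr(extensions, "GL_OES_packed_depth_stencil") == NULL) {
    NSLog(@"GL_OES_packed_depth_stencil unsupported; cannot create a packed depth+stencil buffer");
}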

Rendering to texture on iOS OpenGL ES—works on simulator, but not on device

In order to improve the performance of my OpenGL ES application for the iPad, I was planning to draw a rarely updated but rendertime-heavy element to a texture, so I can just use the texture unless the element has to be redrawn. However, while the texture is mapped correctly on both the simulator and the device, only on the simulator is something actually rendered into the texture.
The following is the code that I added to the project. While setting up the scene, I create the buffers and the texture needed:
int width = 768;
int height = 270;
// Prepare texture for off-screen rendering.
glGenTextures(1, &wTexture);
glBindTexture(GL_TEXTURE_2D, wTexture);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_FALSE);
glClearColor(.9f, .3f, .6f, 1.0f); // DEBUG
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA,
GL_UNSIGNED_BYTE, 0);
glBindTexture(GL_TEXTURE_2D, 0);
// Depth attachment buffer, always needed.
glGenRenderbuffersOES(1, &wDepth);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, wDepth);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_DEPTH_COMPONENT16_OES,
width, height);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, 0);
// Create FBO for render-to-texture.
glGenFramebuffersOES(1, &wBuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, wBuffer);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES,
GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, wTexture, 0);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, wDepth);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
A glCheckFramebufferStatusOES() call on the new FBO (before it is unbound, of course) yields a 'framebuffer complete' return value on both the simulator and the device. Note that I set the pink clear colour for the texture in order to confirm that the texture is actually rendered; the problem is in fact simply that the texture is never drawn into.
Whenever the texture needs to be redrawn, I do this before rendering the element:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, wBuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, width, height);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// ...
and the following after the actual rendering:
// ...
glPopMatrix();
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
Finally, every time the screen is redrawn I map the texture to a quad at the appropriate position on the screen, like so:
float Vertices[] = {
-65.0f, -100.0f, .0f,
-65.0f, 100.0f, .0f,
-10.0f, -100.0f, .0f,
-10.0f, 100.0f, .0f};
float Texture[] = {.0f, .0f, 1.0f, .0f, .0f, 1.0f, 1.0f, 1.0f};
glEnable(GL_TEXTURE_2D);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glBindTexture(GL_TEXTURE_2D, wTexture);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glVertexPointer(3, GL_FLOAT, 0, Vertices);
glTexCoordPointer(2, GL_FLOAT, 0, Texture);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, 0);
On the iPhone and iPad simulators (4.2, 4.3), the code works as expected. I see the dynamically rendered texture displayed at the respective position, of course with a pink instead of a transparent background due to my debugging statement. On my iPad 4.2 device, however, only the pink rectangle is rendered, not what should have been drawn into it during the render-to-texture step. Thus, the texture is rendered to the screen correctly, but for some reason, on the device the render-to-texture code fails to actually render anything to the texture.
I suppose I am using some functionality that is not available on the device, or am making an erroneous assumption somewhere, but I can't figure out what it is. I also tried running it through the OpenGL ES Analyzer, but it gives me nothing but some basic performance optimisation tips. Where do I need to look for the problem?
I was using MSAA in my project, and found that the problem disappeared when I disabled it. This led me to another question where the same problem is discussed (but not solved).
The problem seems to be that if multisampling is enabled for your main framebuffer, all of your custom FBOs have to use multisampling as well. You cannot render to a normal non-multisampled GL_TEXTURE_2D, and a multisampled GL_TEXTURE_2D_MULTISAMPLE is not available in OpenGL ES 2.0.
In order to fix the problem, I modified my render-to-texture code the same way I modified my main rendering code to enable multisampling. In addition to the three buffer objects created in the code from the question, I create three more for the multi-sampled rendering:
glGenFramebuffersOES(1, &wmBuffer);
glGenRenderbuffersOES(1, &wmColor);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, wmBuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, wmColor);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGBA8_OES, width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, wmColor);
glGenRenderbuffersOES(1, &wmDepth);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, wmDepth);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_DEPTH_COMPONENT16_OES, width, height);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, wmDepth);
Before rendering to the texture, I bind the new MSAA buffer:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, wmBuffer);
Finally, after rendering, I resolve the MSAA FBO into the texture FBO the same way I do for my main rendering framebuffer:
glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, wmBuffer);
glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, wBuffer);
glResolveMultisampleFramebufferAPPLE();
GLenum attachments[] = {GL_DEPTH_ATTACHMENT_OES, GL_COLOR_ATTACHMENT0_OES, GL_STENCIL_ATTACHMENT_OES};
glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 3, attachments);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, 0);
The textures are now rendered correctly (and the performance is great!)
