OpenGL ES 2.0 and default FrameBuffer in iOS

I'm a bit confused about FrameBuffers.
Currently, to draw on screen, I generate a framebuffer with a renderbuffer attached as GL_COLOR_ATTACHMENT0, using this code:
- (void)initializeBuffers
{
    // Build the main framebuffer
    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    // Build the color buffer
    glGenRenderbuffers(1, &colorBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);
    // Set up the color buffer with the EAGLLayer (it automatically defines the width and height of the buffer)
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:EAGLLayer];
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &bufferWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &bufferHeight);
    // Attach the color buffer to the framebuffer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorBuffer);
    // Check the framebuffer status
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    NSAssert(status == GL_FRAMEBUFFER_COMPLETE, ERROR_FRAMEBUFFER_FAIL);
}
And I show the buffer content using
[context presentRenderbuffer:GL_RENDERBUFFER];
Reading this question, I came across a comment by Arttu Peltonen, who says:
Default framebuffer is where you render to by default, you don't have
to do anything to get that. Framebuffer objects are what you can
render to instead, and that's called "off-screen rendering" by some.
If you do that, you end up with your image in a texture instead of the
default framebuffer (that gets displayed on-screen). You can copy the
image from that texture to the default framebuffer (on-screen), that's
usually done with blitting (but it's only available in OpenGL ES 3.0).
But if you only wanted to show the image on-screen, you probably
wouldn't use a FBO in the first place.
So I wonder whether my method should only be used for off-screen rendering. And in that case, what do I have to do to render to the default framebuffer instead?
(Note: I don't want to use a GLKView...)

The OpenGL ES spec provides for two kinds of framebuffers: window-system-provided and framebuffer objects. The default framebuffer would be the window-system-provided kind. But the spec doesn't require that window-system-provided framebuffers or a default framebuffer exist.
In iOS, there are no window-system-provided framebuffers, and no default framebuffer -- all drawing is done with framebuffer objects. To render to the screen, you create a renderbuffer whose storage comes from a CAEAGLLayer object (or you use one that's created on your behalf, as when using the GLKView class). That's exactly what your code is doing.
To do offscreen rendering, you create a renderbuffer and call glRenderbufferStorage to allocate storage for it. Said storage is not associated with a CAEAGLLayer, so that renderbuffer can't be (directly) presented on the screen. (It's not a texture either -- setting up a texture as a render target works differently -- it's just an offscreen buffer.)
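For example, a minimal offscreen color buffer might be set up like this (a sketch; offscreenFramebuffer, offscreenColorBuffer, width, and height are hypothetical names, and GL_RGBA4 is one of the color-renderable formats ES 2.0 guarantees):
GLuint offscreenFramebuffer, offscreenColorBuffer;
glGenFramebuffers(1, &offscreenFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
glGenRenderbuffers(1, &offscreenColorBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, offscreenColorBuffer);
// Allocate storage directly instead of asking the EAGLContext for layer-backed storage
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, offscreenColorBuffer);
Such a buffer can be read back with glReadPixels, but never presented; only layer-backed renderbuffers can be shown on screen.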
There's more information about all of this and example code for each approach in Apple's OpenGL ES Programming Guide for iOS.

Related

OpenGL ES 2.0 lines appear more jagged than Core Animation. Is anti-aliasing possible in iOS 4?

Is there a relatively simple way to implement anti-aliasing on iOS 4 using OpenGL ES 2.0?
I had a situation where I needed to abandon Core Animation in favor of OpenGL ES 2.0 to get true 3D graphics.
Things work, but I've noticed that simple 3D cubes rendered using Core Animation are much crisper than those produced with OpenGL, which have more jagged lines.
I read that iOS 4.0 supports anti-aliasing for GL_TRIANGLE_STRIP, and I found an online tutorial (see below for code from link) that looked promising, but I have not been able to get it working.
The first thing I noticed was all the OES suffixes, which appear to be a remnant of OpenGL ES 1.x.
Since everything I've done is for OpenGL ES 2.0, I tried removing every OES just to see what happened. Everything compiled and built with zero errors or warnings, but my graphics no longer rendered.
If I keep the OES suffixes I get several errors and warnings of the following types:
Error - Use of undeclared identifier ''
Warning - Implicit declaration of function '' is invalid in C99
Including the ES 1.x header files resulted in a clean build, but still nothing got rendered. It doesn't seem like I should need to include ES 1.x headers to implement this functionality anyway.
So my question is how do I get this to work, and will it actually address my issue?
Does the approach in the online tutorial I linked have the right idea, and I just messed up the implementation, or is there a better method?
Any guidance or details would be greatly appreciated.
Code from link above:
GLint backingWidth, backingHeight;
//Buffer definitions for the view.
GLuint viewRenderbuffer, viewFramebuffer;
//Buffer definitions for the MSAA
GLuint msaaFramebuffer, msaaRenderBuffer, msaaDepthBuffer;
//Create our view framebuffer and renderbuffer.
glGenFramebuffersOES(1, &viewFramebuffer);
glGenRenderbuffersOES(1, &viewRenderbuffer);
//Bind the buffers.
glBindFramebufferOES(GL_FRAMEBUFFER_OES, viewFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
[context renderbufferStorage:GL_RENDERBUFFER_OES fromDrawable:(CAEAGLLayer*)self.layer];
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, viewRenderbuffer);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);
//Generate our MSAA Frame and Render buffers
glGenFramebuffersOES(1, &msaaFramebuffer);
glGenRenderbuffersOES(1, &msaaRenderBuffer);
//Bind our MSAA buffers
glBindFramebufferOES(GL_FRAMEBUFFER_OES, msaaFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, msaaRenderBuffer);
// Create the storage for the MSAA color buffer.
// 4 is the number of samples the MSAA buffer uses to produce one pixel in the view's renderbuffer.
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_RGB5_A1_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, msaaRenderBuffer);
glGenRenderbuffersOES(1, &msaaDepthBuffer);
//Bind the msaa depth buffer.
glBindRenderbufferOES(GL_RENDERBUFFER_OES, msaaDepthBuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER_OES, 4, GL_DEPTH_COMPONENT16_OES, backingWidth, backingHeight);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_DEPTH_ATTACHMENT_OES, GL_RENDERBUFFER_OES, msaaDepthBuffer);
- (void)draw
{
    [EAGLContext setCurrentContext:context];
    //
    // Do your drawing here
    //
    // Apple (and the Khronos Group) encourages you to discard the depth
    // renderbuffer's contents whenever possible
    GLenum attachments[] = {GL_DEPTH_ATTACHMENT_OES};
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, attachments);
    // Bind the MSAA framebuffer for reading and the view framebuffer for drawing
    glBindFramebufferOES(GL_READ_FRAMEBUFFER_APPLE, msaaFramebuffer);
    glBindFramebufferOES(GL_DRAW_FRAMEBUFFER_APPLE, viewFramebuffer);
    // Resolve: combine the multisampled buffer into the view's buffer
    glResolveMultisampleFramebufferAPPLE();
    // Present the final image to the screen
    glBindRenderbufferOES(GL_RENDERBUFFER_OES, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER_OES];
}
This section of Apple's OpenGL ES Programming Guide is probably the modern version of what that tutorial was describing: https://developer.apple.com/library/ios/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/WorkingwithEAGLContexts/WorkingwithEAGLContexts.html#//apple_ref/doc/uid/TP40008793-CH103-SW12. The suggested technique is multisampling, wherein you render four samples per pixel that are then resolved down to one pixel on screen.
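To address the OES-suffix confusion directly: under ES 2.0 the core framebuffer entry points lose their OES suffixes, while the multisample calls keep their APPLE suffix because they come from Apple's GL_APPLE_framebuffer_multisample extension. A sketch of the setup rewritten that way (variable names match the tutorial; GL_RGB5_A1 and GL_DEPTH_COMPONENT16 are the core ES 2.0 equivalents of the OES constants):
GLuint msaaFramebuffer, msaaRenderBuffer, msaaDepthBuffer;
glGenFramebuffers(1, &msaaFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);
// Multisampled color buffer: 4 samples per pixel
glGenRenderbuffers(1, &msaaRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, msaaRenderBuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGB5_A1, backingWidth, backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaRenderBuffer);
// Multisampled depth buffer
glGenRenderbuffers(1, &msaaDepthBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, msaaDepthBuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, backingWidth, backingHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msaaDepthBuffer);
The resolve in the draw method is unchanged: bind the MSAA framebuffer to GL_READ_FRAMEBUFFER_APPLE, the view framebuffer to GL_DRAW_FRAMEBUFFER_APPLE, and call glResolveMultisampleFramebufferAPPLE().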

OpenGL ES examples that don't use EAGLContext

I'd like to better understand the creation, allocation, and binding of OpenGL ES framebuffers, renderbuffers, etc. under iOS. I understand that the EAGLContext and EAGLSharegroup classes normally manage the allocation and binding of such objects. However, the Apple docs suggest that it is possible to do GL offscreen rendering without using the EAGLContext class, and I'm interested in how. Does anyone have any pointers to code examples?
I would also be interested in examples showing how to accomplish offscreen rendering with EAGLContext.
The only way to render content using OpenGL ES on iOS, offscreen or onscreen, is to do so through an EAGLContext. From the OpenGL ES Programming Guide:
Before your application can call any OpenGL ES functions, it must
initialize an EAGLContext object and set it as the current context.
I think the following lines might be what's causing some confusion:
The EAGLContext class also provides methods your application uses to
integrate OpenGL ES content with Core Animation. Without these
methods, your application would be limited to working with offscreen
images.
What that means is that if you want to render content to the screen, you use some extra methods only provided by the EAGLContext class, such as -renderbufferStorage:fromDrawable:. You still need an EAGLContext to manage OpenGL ES commands even if you're going to draw offscreen, but these particular methods which are specific to EAGLContext are needed to draw onscreen.
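Concretely, the minimal setup for purely offscreen work is just the context itself (a sketch; no view or CAEAGLLayer involved):
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
// From here on, OpenGL ES calls are valid; nothing here touches the screen.
Everything after that, framebuffer creation included, uses the ordinary C API.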
To your second question, how you setup your offscreen rendering will depend on the configuration of this offscreen render (texture-backed FBO, depth buffer, etc.). For example, the following code will set up a simple FBO that has no depth buffer and renders to the already set up outputTexture texture:
// Work on texture unit 1 so unit 0's bindings stay untouched
glActiveTexture(GL_TEXTURE1);
glGenFramebuffers(1, &filterFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, filterFramebuffer);
// Allocate the texture that will receive the rendering
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)currentFBOSize.width, (int)currentFBOSize.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// Attach the texture as the FBO's color target
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
For code examples, you could look at how I do this within the open source GPUImage framework (which just does simple image rendering) or my open source Molecules application (which does more complex offscreen rendering using depth buffers).

Setting CAEAGLLayer properties for OpenGL ES?

When using framebuffer objects for rendering on iOS, which appears to be Apple's preferred way of rendering on iOS according to its OpenGL ES Programming Guide for iOS, one is supposed to use glRenderbufferStorage() to specify properties like width and height, according to the OpenGL ES 2.0 Programming Guide by Munshi, Ginsburg and Shreiner. Apple replaces this with a renderbufferStorage:fromDrawable: message sent to the EAGLContext in the above guide.
Apple then goes on to say that you should fetch the width and height from the renderbuffer, since that buffer sets them on creation, without giving further detail.
The width and height are 0 though.
The CAEAGLLayer Class Reference says to "set the layer bounds to match the dimensions of the display". CAEAGLLayer is the class Apple wants one to use as the backing layer of one's view, which is done by returning it from the view's layerClass method. CAEAGLLayer only has one property, drawableProperties, which is an NSDictionary. Unfortunately that documentation is sparse; dimensions cannot be set through it.
Thus: how to go on setting a CAEAGLLayer properties for OpenGL ES?
Here's my code thus far (note: an old Apple example uses initWithCoder:; I either guessed or picked up somewhere I no longer remember that I should use initWithFrame:):
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self)
    {
        // Initialization code
        theCAEAGLLayer = (CAEAGLLayer *)self.layer;
        theCAEAGLLayer.opaque = YES;
        theEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        [EAGLContext setCurrentContext:theEAGLContext];
        glGenFramebuffers(1, &theFramebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, theFramebuffer);
        glGenRenderbuffers(1, &theColorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, theColorRenderbuffer);
        [theEAGLContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:theCAEAGLLayer];
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, theColorRenderbuffer);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &widthOfTheColorRenderbuffer);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &heightOfTheColorRenderbuffer);
        glGenRenderbuffers(1, &theDepthRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, theDepthRenderbuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, widthOfTheColorRenderbuffer, heightOfTheColorRenderbuffer);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, theDepthRenderbuffer);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        {
            // Framebuffer is incomplete (this branch is currently empty)
        }
    }
    return self;
}
Proper answer:
UIKit batches together certain operations and defers them until later in the runloop. That's because you may have code that changes the size of a view and changes different bits of text inside it. You probably want that stuff to happen atomically.
What that probably means for you is that the layer hasn't been sized yet. Have you tried moving what you have to - (void)layoutSubviews?
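A sketch of that suggestion, reusing the ivars from the question (only the size-dependent calls need to move; context and framebuffer creation can stay in the initializer):
- (void)layoutSubviews
{
    [super layoutSubviews];
    // By the time UIKit calls this, the layer has its real bounds,
    // so the drawable storage will pick up a non-zero width and height.
    [EAGLContext setCurrentContext:theEAGLContext];
    glBindRenderbuffer(GL_RENDERBUFFER, theColorRenderbuffer);
    [theEAGLContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:theCAEAGLLayer];
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &widthOfTheColorRenderbuffer);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &heightOfTheColorRenderbuffer);
    // Resize the depth buffer to match
    glBindRenderbuffer(GL_RENDERBUFFER, theDepthRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, widthOfTheColorRenderbuffer, heightOfTheColorRenderbuffer);
}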
If you're planning to target iOS 5 only, you can just use GLKView and avoid writing any of this stuff for yourself.
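For completeness, the GLKView route collapses all of the buffer management into a few lines (a sketch; frame and context stand in for your own values):
GLKView *view = [[GLKView alloc] initWithFrame:frame context:context];
view.drawableDepthFormat = GLKViewDrawableDepthFormat16;
view.delegate = self; // drawing happens in -glkView:drawInRect: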
Other comments:
glRenderbufferStorage would create storage at an opaque location that OpenGL could draw to, but how should the OS guess which of your framebuffers is the one you want to show to the user, rather than merely an intermediate result? The OpenGL spec explicitly doesn't define how you communicate that to your specific OS. In iOS it's achieved via renderbufferStorage:fromDrawable:, which says to add storage that equates to a CALayer that iOS knows how to composite. Apple's method is not a replacement for glRenderbufferStorage; it does something that glRenderbufferStorage can't and shouldn't, and there are many times you'll still use glRenderbufferStorage even when programming for iOS only.
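The contrast in a nutshell (width, height, and eaglLayer are placeholder names):
// Offscreen: OpenGL ES allocates the storage itself; iOS can't composite it.
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, width, height);
// Onscreen: the storage is backed by a CAEAGLLayer that Core Animation can composite.
[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:eaglLayer];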
- (id)initWithFrame: is the initialiser you'd use if you were creating the view manually. - (id)initWithCoder: is used by the system to load the view from a NIB.
Has your UIView definitely specified its layerClass as CAEAGLLayer? If not then the call to your EAGL context would be permitted to fail.
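For reference, that override in the UIView subclass looks like this:
+ (Class)layerClass
{
    // Back this view with a CAEAGLLayer instead of a plain CALayer
    return [CAEAGLLayer class];
}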

Scaling the contents of OpenGL ES framebuffer

Currently I'm scaling down the contents of my OpenGL ES 1.1 framebuffer like this:
1. Save the current framebuffer and renderbuffer references.
2. Bind framebuffer2 and smallerRenderbuffer.
3. Re-render all contents.
4. Now smallerRenderbuffer contains the "scaled-down" contents of framebuffer.
5. Do stuff with the contents of smallerRenderbuffer.
6. Re-bind framebuffer and renderbuffer.
What's an alternative way to do this? Perhaps I could just copy and scale the contents of the original framebuffer and renderbuffer into framebuffer2 and smallerRenderbuffer, thereby avoiding the re-render step. I've been looking at glScalef, but I'm not sure where to go from here.
Note: this is all done in OpenGL ES 1.1 on iOS.
You could do an initial render to texture, then render from that texture to both the framebuffer that you want to be visible and to the smaller version. Whichever way you look at it, what you're trying to do is use data that has already been rendered as the source for another rendering, so rendering to a texture is the most natural thing to do.
You're probably already familiar with the semantics of a render to texture if you're doing work on the miniature version, but for completeness: you'd create a framebuffer object, use glFramebufferTexture2DOES to attach a named texture to a suitable attachment point, and then bind either the framebuffer or the texture as appropriate (ensuring the other isn't simultaneously bound if you want defined behaviour).
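A sketch of that setup in ES 1.1 terms (sceneTexture, sceneFramebuffer, width, and height are hypothetical names; width and height should be powers of two unless an NPOT extension is available):
GLuint sceneFramebuffer, sceneTexture;
// Create the texture that will hold the rendered scene
glGenTextures(1, &sceneTexture);
glBindTexture(GL_TEXTURE_2D, sceneTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // no mipmaps, so don't use a mipmapped filter
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Attach it to a framebuffer object
glGenFramebuffersOES(1, &sceneFramebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, sceneFramebuffer);
glFramebufferTexture2DOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_TEXTURE_2D, sceneTexture, 0);
// Render the scene once here; afterwards, draw textured quads into both the
// visible framebuffer and the smaller one, sampling from sceneTexture.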

Binding multiple buffers in OpenGL ES

Is it possible to bind multiple framebuffers and renderbuffers in OpenGL ES? I'm rendering into an offscreen framebuffer/renderbuffer and would prefer to just reuse my existing render code.
Here's what I'm currently doing:
// create/bind framebuffer and renderbuffer (for screen display)
// render all content
// create/bind framebuffer2 and renderbuffer2 (for off-screen rendering)
// render all content again (would like to skip this)
Here's what I'd like to do:
// create/bind framebuffer and renderbuffer (for screen display)
// create/bind framebuffer2 and renderbuffer2 (for off-screen rendering)
// render all content (only once)
Is this possible?
You cannot render into multiple framebuffers at once. You might be able to use MRTs (multiple render targets) to render into several targets (textures/renderbuffers) that belong to the same FBO by writing multiple colors from the fragment shader, but not into multiple FBOs, like an offscreen FBO and the default framebuffer. And if I'm informed correctly, ES doesn't support MRTs at the moment anyway.
But in your case you still don't need to render the scene twice. If you need it in an offscreen renderbuffer anyway, why not use a texture instead of a renderbuffer to hold the offscreen data (it shouldn't make a difference)? That way you can render the scene once into the offscreen buffer (the texture) and then display it in the screen framebuffer by drawing a simple textured quad with a pass-through fragment shader.
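That final pass is just a fullscreen textured quad. A sketch in ES 2.0, assuming a trivial pass-through program already exists (quadProgram, positionAttrib, texCoordAttrib, texUniform, sceneTexture, screenFramebuffer, backingWidth, and backingHeight are all hypothetical names):
static const GLfloat positions[] = { -1, -1,   1, -1,   -1, 1,   1, 1 };
static const GLfloat texCoords[] = {  0,  0,   1,  0,    0, 1,   1, 1 };
// Bind the on-screen framebuffer and draw the offscreen texture into it
glBindFramebuffer(GL_FRAMEBUFFER, screenFramebuffer);
glViewport(0, 0, backingWidth, backingHeight);
glUseProgram(quadProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, sceneTexture);
glUniform1i(texUniform, 0);
glVertexAttribPointer(positionAttrib, 2, GL_FLOAT, GL_FALSE, 0, positions);
glEnableVertexAttribArray(positionAttrib);
glVertexAttribPointer(texCoordAttrib, 2, GL_FLOAT, GL_FALSE, 0, texCoords);
glEnableVertexAttribArray(texCoordAttrib);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);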
Though, in OpenGL ES it may make a difference whether you use a renderbuffer or a texture to hold the offscreen data, as ES doesn't have glGetTexImage. So if you need to copy the offscreen data to the CPU, you won't get around glReadPixels and therefore need a renderbuffer. But in this case you still don't need to render the scene twice. You just have to introduce another FBO with a texture attached: render the scene once into the texture using this FBO, then render this texture into both the offscreen FBO and the screen framebuffer. This might still be faster than drawing the whole scene twice, though only measurement can tell you.
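The CPU read-back itself is short (a sketch; assumes the offscreen FBO is bound and width/height are its dimensions):
// Read back the color attachment of the currently bound framebuffer
GLubyte *pixels = (GLubyte *)malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ...process the pixel data on the CPU...
free(pixels);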
But if you need to copy the data to the CPU for processing, you can also just copy it directly from the screen framebuffer and don't need an offscreen FBO at all. And if you need the offscreen data only for GPU-based processing, then a texture is better than a renderbuffer anyway. So it might be useful to consider whether you actually need an additional offscreen buffer at all if it only contains the same data as the screen framebuffer. That might render the whole problem obsolete.
