OpenGL ES examples that don't use EAGLContext - iOS

I'd like to better understand the creation, allocation, and binding of OpenGL ES framebuffers, renderbuffers, etc. under iOS. I understand that the EAGLContext and EAGLSharegroup classes normally manage the allocation and binding of such objects. However, the Apple docs suggest that it is possible to do GL offscreen rendering without using the EAGLContext class, and I'm interested in how. Does anyone have any pointers to code examples?
I would also be interested in examples showing how to accomplish offscreen rendering with EAGLContext.

The only way to render content using OpenGL ES on iOS, offscreen or onscreen, is to do so through an EAGLContext. From the OpenGL ES Programming Guide:
Before your application can call any OpenGL ES functions, it must
initialize an EAGLContext object and set it as the current context.
I think the following lines might be what's causing some confusion:
The EAGLContext class also provides methods your application uses to
integrate OpenGL ES content with Core Animation. Without these
methods, your application would be limited to working with offscreen
images.
What that means is that if you want to render content to the screen, you use some extra methods only provided by the EAGLContext class, such as -renderbufferStorage:fromDrawable:. You still need an EAGLContext to manage OpenGL ES commands even if you're going to draw offscreen, but these particular methods which are specific to EAGLContext are needed to draw onscreen.
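As an illustration, a minimal context setup (a sketch assuming OpenGL ES 2.0 and ARC) looks roughly like this; every GL call that follows operates within that context:
// Create an OpenGL ES 2.0 context and make it current; all later GL calls
// (including offscreen FBO setup) require a current context.
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if (context == nil || ![EAGLContext setCurrentContext:context])
{
    NSLog(@"Could not create or activate an OpenGL ES 2.0 context");
}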
To your second question, how you setup your offscreen rendering will depend on the configuration of this offscreen render (texture-backed FBO, depth buffer, etc.). For example, the following code will set up a simple FBO that has no depth buffer and renders to the already set up outputTexture texture:
glActiveTexture(GL_TEXTURE1);
glGenFramebuffers(1, &filterFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, filterFramebuffer);
// Allocate storage for the (already created) outputTexture at the FBO's size
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)currentFBOSize.width, (int)currentFBOSize.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// Attach the texture as the color attachment of the offscreen framebuffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
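As a rough follow-up sketch (the name pixelBuffer here is hypothetical), you would then verify completeness, set the viewport, draw, and optionally read the result back:
// Verify the offscreen framebuffer and use it; the result lands in outputTexture.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    NSLog(@"Offscreen framebuffer is incomplete");
}
glViewport(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height);
// ... issue draw calls here ...
// Optional: read the rendered pixels back to the CPU
// (pixelBuffer is a caller-allocated byte array of width * height * 4 bytes)
glReadPixels(0, 0, (int)currentFBOSize.width, (int)currentFBOSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBuffer);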
For code examples, you could look at how I do this within the open source GPUImage framework (which just does simple image rendering) or my open source Molecules application (which does more complex offscreen rendering using depth buffers).

Related

OpenGL ES 2.0 and default FrameBuffer in iOS

I'm a bit confused about framebuffers.
Currently, to draw on screen, I generate a framebuffer with a renderbuffer attached as GL_COLOR_ATTACHMENT0, using this code:
- (void)initializeBuffers
{
    // Build the main framebuffer
    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);

    // Build the color renderbuffer
    glGenRenderbuffers(1, &colorBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);

    // Set up the color buffer from the EAGL layer (this also defines the buffer's width and height)
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:EAGLLayer];
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &bufferWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &bufferHeight);

    // Attach the color buffer to the framebuffer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorBuffer);

    // Check the framebuffer status
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    NSAssert(status == GL_FRAMEBUFFER_COMPLETE, ERROR_FRAMEBUFFER_FAIL);
}
And I show the buffer content using
[context presentRenderbuffer:GL_RENDERBUFFER];
Reading this question, I saw the comment of Arttu Peltonen who says:
Default framebuffer is where you render to by default, you don't have
to do anything to get that. Framebuffer objects are what you can
render to instead, and that's called "off-screen rendering" by some.
If you do that, you end up with your image in a texture instead of the
default framebuffer (that gets displayed on-screen). You can copy the
image from that texture to the default framebuffer (on-screen), that's
usually done with blitting (but it's only available in OpenGL ES 3.0).
But if you only wanted to show the image on-screen, you probably
wouldn't use a FBO in the first place.
So I wonder whether my method is only meant for off-screen rendering.
And if that's the case, what do I have to do to render to the default buffer?
(Note, I don't want to use a GLKView...)
The OpenGL ES spec provides for two kinds of framebuffers: window-system-provided and framebuffer objects. The default framebuffer would be the window-system-provided kind. But the spec doesn't require that window-system-provided framebuffers or a default framebuffer exist.
In iOS, there are no window-system-provided framebuffers, and no default framebuffer -- all drawing is done with framebuffer objects. To render to the screen, you create a renderbuffer whose storage comes from a CAEAGLLayer object (or you use one that's created on your behalf, as when using the GLKView class). That's exactly what your code is doing.
To do offscreen rendering, you create a renderbuffer and call glRenderbufferStorage to allocate storage for it. Said storage is not associated with a CAEAGLLayer, so that renderbuffer can't be (directly) presented on the screen. (It's not a texture either -- setting up a texture as a render target works differently -- it's just an offscreen buffer.)
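A minimal sketch of that offscreen approach (the names and the 512x512 size are illustrative, and it assumes an EAGLContext is already current) would look something like this:
// Offscreen color renderbuffer: storage comes from glRenderbufferStorage rather than
// a CAEAGLLayer, so this framebuffer can never be presented directly on screen.
GLuint offscreenFramebuffer, offscreenColorRenderbuffer;
glGenFramebuffers(1, &offscreenFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
glGenRenderbuffers(1, &offscreenColorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, offscreenColorRenderbuffer);
// GL_RGBA8_OES comes from the OES_rgb8_rgba8 extension, which iOS devices support
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, 512, 512);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, offscreenColorRenderbuffer);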
There's more information about all of this and example code for each approach in Apple's OpenGL ES Programming Guide for iOS.

opengles: how to copy existing fbo's colorattachment(renderbuffer) to another fbo's colorattachment(texture2D)

The platform is iPhone, OpenGL ES 2.0.
The framework already creates a main FBO with a renderbuffer as its color attachment.
And I have my own FBO with a texture2D as its color attachment.
I want to copy the main FBO's content into my FBO.
I tried the common glCopyTexImage2D approach, but it's too slow on my device (iPad 1).
So I wonder if there is a faster solution out there.
If the main FBO used a texture2D as its color attachment, I know I could just draw a fullscreen quad with that texture into my FBO, but how do I draw its renderbuffer into my FBO? I've googled for quite a while but found no specific answer.
RenderBuffers are almost useless on most embedded systems. All you can do with them is read from them with glReadPixels(), which is too slow.
You should use a texture attachment, as you said, then render with that texture. This article will help:
http://processors.wiki.ti.com/index.php/Render_to_Texture_with_OpenGL_ES
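To sketch that suggestion (the names mainFramebuffer and sceneTexture, and the width/height variables, are illustrative; OpenGL ES 2.0 assumed): give the main FBO a texture color attachment, then sample that texture from a fullscreen quad when rendering into your own FBO:
// Create a texture and attach it as the main FBO's color attachment
GLuint sceneTexture;
glGenTextures(1, &sceneTexture);
glBindTexture(GL_TEXTURE_2D, sceneTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glBindFramebuffer(GL_FRAMEBUFFER, mainFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTexture, 0);
// Later: bind your own texture-backed FBO and draw a fullscreen quad that samples
// sceneTexture through a simple pass-through shader.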

Good tutorial on using Quads for custom Text in OpenGL ES 2.0 on iOS

I'm new to OpenGL ES and am teaching myself how to program iOS games. I'm currently playing with a project that I would like to overlay with a HUD containing some custom text. I don't want to do this using a UILabel, and I currently have no idea how to use quads to cut up a PNG full of glyphs and assemble them into normal text for display. I would like the end result to be passing a simple string to a command/method and having the output displayed using the textures/bitmap for the quads, say glPrint("Hello World");. Would anyone be able to guide me in the proper direction? There doesn't seem to be a single good tutorial on how to do this for OpenGL ES 2.0 (just OpenGL). I also want to avoid using third-party APIs; I really need/want to understand how to tackle this.
When I was getting started with OpenGL ES for my current 2D project I used Ray's tutorial, which helped me get a handle on rendering textured 2D quads. In conjunction with his 3D OpenGL ES tutorial, you might be able to piece together what you want to do. Note that you probably wouldn't render every single quad separately like in the tutorial, as that is very inefficient. Instead, you would gather all of the vertices of the characters into two big arrays/vertex buffers and batch render the characters. The basic flow for rendering each frame would probably look like this:
1. Pass a normal perspective projection matrix for 3D rendering, get your vertex information for your 3D scene to your shaders somehow, and render the 3D scene. This part you've already done.
2. For the text, immediately after, pass an orthographic projection matrix in.
3. Bind your font texture (generally generated earlier with the GLKTextureLoader class) to the active texture unit.
4. Generate two big arrays of texture and geometric vertices for the characters (or update the VBOs if the text has changed) and pass them in.
5. Batch render all of the letters at once using either glDrawArrays or glDrawElements (which requires indices).
Also, as I'm new at using OpenGL myself, some of this may be wrong or inefficient. I've yet to use OpenGL ES to render anything 3D, so I'm not sure what other state changes (enabling, disabling, etc.) besides a different projection matrix might be needed between rendering your 3D scene and the 2D scene (text).
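To make the orthographic projection step in the flow above concrete, here is a small sketch (plain C, column-major as glUniformMatrix4fv expects; projectionUniform, viewWidth, and viewHeight are illustrative names) of a pixel-space matrix you could hand to the text shader:
// Maps (0,0)..(width,height) with a top-left origin (UIKit-style, y grows downward)
// to OpenGL clip space; near/far fixed at -1..1. Column-major layout.
static void ortho2D(float *m, float width, float height)
{
    float left = 0.0f, right = width, bottom = height, top = 0.0f;
    m[0]  = 2.0f / (right - left); m[1]  = 0.0f;                  m[2]  = 0.0f;  m[3]  = 0.0f;
    m[4]  = 0.0f;                  m[5]  = 2.0f / (top - bottom); m[6]  = 0.0f;  m[7]  = 0.0f;
    m[8]  = 0.0f;                  m[9]  = 0.0f;                  m[10] = -1.0f; m[11] = 0.0f;
    m[12] = -(right + left) / (right - left);
    m[13] = -(top + bottom) / (top - bottom);
    m[14] = 0.0f;
    m[15] = 1.0f;
}
// Usage sketch: float projection[16]; ortho2D(projection, viewWidth, viewHeight);
// glUniformMatrix4fv(projectionUniform, 1, GL_FALSE, projection);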
It seems that drawing text using only OpenGL is a relatively difficult and tedious task, so if you just want to render a HUD overlay displaying frame rates and other things you are much better off using UILabels and saving yourself the trouble, especially if your project is not very complex. This also prevents you from having to deal with wrapping, kerning, font sizes, colors, different languages and a load of other stuff that greatly complicates text rendering if you need anything more complex.
Rather than tracking the location of each letter, why not use Core Graphics to draw your entire string into a bitmap, then upload that as a texture? You'd just need to get the dimensions from your bitmap to know what size quad to draw for that text string.
Within my open source GPUImage framework, I have an input class called a GPUImageUIElement that does something similar. The relevant code from that input is as follows:
CGSize layerPixelSize = [self layerSizeInPixels];
GLubyte *imageData = (GLubyte *) calloc(1, (int)layerPixelSize.width * (int)layerPixelSize.height * 4);
CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)layerPixelSize.width, (int)layerPixelSize.height, 8, (int)layerPixelSize.width * 4, genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextTranslateCTM(imageContext, 0.0f, layerPixelSize.height);
CGContextScaleCTM(imageContext, layer.contentsScale, -layer.contentsScale);
[layer renderInContext:imageContext];
CGContextRelease(imageContext);
CGColorSpaceRelease(genericRGBColorspace);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)layerPixelSize.width, (int)layerPixelSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, imageData);
free(imageData);
This code takes a CALayer (either directly or from the backing layer of a UIView) and renders its contents to a texture. I've already initialized the texture before this, so the code sets up a bitmap context, renders the layer into that context using -renderInContext:, and then uploads that bitmap to the texture for use in OpenGL ES.
The helper method -layerSizeInPixels just accounts for the current Retina scale factor as follows:
- (CGSize)layerSizeInPixels;
{
    CGSize pointSize = layer.bounds.size;
    return CGSizeMake(layer.contentsScale * pointSize.width, layer.contentsScale * pointSize.height);
}
If you used a UILabel for your view and had it autosize to fit its text, you could set the text on it, use the above to render and upload your texture, and then take the pixel size of the element to determine your quad size. However, it would probably be more efficient to just draw the text yourself using -drawAtPoint:withFont: or the like with an NSString.
Using Core Graphics to render your text makes it easy to manipulate the text as an NSString and use all of Core Graphics' typesetting capabilities instead of rolling your own.
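A sketch of that simpler route (using the pre-iOS 7 NSString drawing API mentioned above; outputTexture is assumed to already exist, as in the earlier snippet):
// Measure and draw an NSString into a bitmap, then upload it as a texture.
NSString *text = @"Hello World";
UIFont *font = [UIFont boldSystemFontOfSize:24.0];
CGSize textSize = [text sizeWithFont:font];
GLubyte *textData = (GLubyte *)calloc(1, (int)textSize.width * (int)textSize.height * 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef textContext = CGBitmapContextCreate(textData, (int)textSize.width, (int)textSize.height, 8, (int)textSize.width * 4, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGColorSpaceRelease(colorSpace);
// UIKit draws with a top-left origin, so flip the CG context before drawing
CGContextTranslateCTM(textContext, 0.0f, textSize.height);
CGContextScaleCTM(textContext, 1.0f, -1.0f);
UIGraphicsPushContext(textContext);
[[UIColor whiteColor] set];
[text drawAtPoint:CGPointZero withFont:font];
UIGraphicsPopContext();
// Upload the rendered bitmap into the existing outputTexture
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)textSize.width, (int)textSize.height, 0, GL_BGRA, GL_UNSIGNED_BYTE, textData);
CGContextRelease(textContext);
free(textData);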

iOS GLKit and back to default framebuffer

I am running the boilerplate OpenGL example code that Xcode creates for an OpenGL project for iOS. This sets up a simple ViewController and uses GLKit to handle the rest of the work.
All the update/draw functionality of the application is in C++. It is cross platform.
There is a lot of framebuffer creation going on. The draw phase renders to a few framebuffers and then tries to set the binding back to the default framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This generates a GL_INVALID_ENUM error, but only on iOS.
I am completely stumped as to why. The code runs fine on all major platforms except iOS. I'm wanting to blame GLKit. Any examples of iOS OpenGL setup that do not use GLKit?
UPDATE
The following snippet of code lets me see the default framebuffer that GLKit is using. For some reason it comes out as "2". Sure enough, if I use "2" in all my glBindFramebuffer calls, it works. This is very frustrating.
[view bindDrawable];
GLint defaultFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &defaultFBO);
LOGI("DEFAULT FBO: %d", defaultFBO);
What reason on earth would cause GLKit not to generate its internal framebuffer at 0? This is the semantic every other OpenGL implementation uses: 0 is the default FBO.
On iOS there is no default framebuffer. See Framebuffer Objects are the Only Rendering Target on iOS. I don't know much about GLKit, but on iOS to render something on screen you need to create framebuffer, and attach to it renderbuffer, and inform Core Animation Layer that this renderbuffer will be the "screen" or "default framebuffer" to draw to. Meaning - everything you'll draw to this framebuffer, will appear on screen. See Rendering to a Core Animation Layer.
I feel it's necessary to point out here that the call to glBindFramebuffer(GL_FRAMEBUFFER, 0); does not return rendering to the main framebuffer, although it would appear to work on machines that run Windows, Unix (Mac), or Linux. Desktops and laptops have no concept of a main default system buffer; that idea started with handheld devices. When you make an OpenGL bind call with zero as the parameter, what you are doing is setting that binding to NULL; it's how you disable the binding. It's the same with glBindTexture(GL_TEXTURE_2D, 0);
It is possible that on some handheld devices the driver automatically activates the main system framebuffer when you set the framebuffer binding to NULL without activating another one. This would be a choice made by the manufacturer and is not something that you should count on; it is not part of the OpenGL ES spec. For desktops and laptops, this is absolutely necessary since disabling the framebuffer is required to return to normal OpenGL rendering.
On an iOS device, you should make the following call,
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
provided that you named your system framebuffer 'viewFramebuffer'. Look through your initialization code for the following call:
glGenFramebuffers(1, &viewFramebuffer);
Whatever name you passed there is what you bind to when returning to your main system buffer.
If you are using GLKit then you can use the following call:
[((GLKView *) self.view) bindDrawable];
The 'self.view' may be slightly different depending on your particular startup code.
Also, on iOS you could use glBindFramebuffer(GL_FRAMEBUFFER, 2);, but this is not likely to stay consistent across future devices released by Apple. They may change the default value of '2' to '3' or something else in the future, so you'd want to use the actual name instead of a hard-coded integer value.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    // Use your secondary/offscreen framebuffer to store the render result
    [self.shader drawOffscreenOnFBO];
    // Back to the default framebuffer of the GLKView
    [((GLKView *) self.view) bindDrawable];
    // Draw on the main screen
    [self.shader drawinmainscreen];
}
Reference: http://districtf13.blogspot.com/

setting a CAEAGLLayer properties for OpenGL ES?

When using framebuffer objects for rendering on iOS, which appears to be Apple's preferred way of rendering on iOS according to the OpenGL ES Programming Guide for iOS, one is supposed to use glRenderbufferStorage() to specify properties like width and height, according to the OpenGL ES 2.0 Programming Guide by Munshi, Ginsburg and Shreiner. In the above guide, Apple replaces this with the renderbufferStorage:fromDrawable: message sent to the EAGLContext.
Apple then goes on to say that the width and height should be fetched from the renderbuffer, since that buffer sets them on creation, without giving further detail.
The width and height are 0 though.
The CAEAGLLayer Class Reference says to "Set the layer bounds to match the dimensions of the display". The CAEAGLLayer class is the class Apple wants one to use as the backing class of the view; this is done by returning it from the view's layerClass method. CAEAGLLayer has only one property, drawableProperties, which is an NSDictionary. Unfortunately that documentation is sparse, and dimensions cannot be set there.
Thus: how to go on setting a CAEAGLLayer properties for OpenGL ES?
Here's my code thus far (note: an old Apple example uses initWithCoder:; I either guessed, or got from somewhere I don't remember, that I should use initWithFrame:):
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self)
    {
        // Initialization code
        theCAEAGLLayer = (CAEAGLLayer *)self.layer;
        theCAEAGLLayer.opaque = YES;
        theEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        [EAGLContext setCurrentContext:theEAGLContext];
        glGenFramebuffers(1, &theFramebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, theFramebuffer);
        glGenRenderbuffers(1, &theColorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, theColorRenderbuffer);
        [theEAGLContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:theCAEAGLLayer];
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, theColorRenderbuffer);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &widthOfTheColorRenderbuffer);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &heightOfTheColorRenderbuffer);
        glGenRenderbuffers(1, &theDepthRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, theDepthRenderbuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, widthOfTheColorRenderbuffer, heightOfTheColorRenderbuffer);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, theDepthRenderbuffer);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        {
        }
    }
    return self;
}
Proper answer:
UIKit batches together certain operations and defers them until later in the runloop. That's because you may have code that changes the size of a view and changes different bits of text inside it. You probably want that stuff to happen atomically.
What that probably means for you is that the layer hasn't been sized yet. Have you tried moving what you have to - (void)layoutSubviews?
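A sketch of that change (with a hypothetical -createBuffers method holding the framebuffer code from -initWithFrame: above):
// Defer drawable-dependent setup until the layer has been laid out.
- (void)layoutSubviews
{
    [super layoutSubviews];
    [EAGLContext setCurrentContext:theEAGLContext];
    // By now the CAEAGLLayer has non-zero bounds, so renderbufferStorage:fromDrawable:
    // will report real width and height values.
    [self createBuffers]; // hypothetical method wrapping the framebuffer code shown earlier
}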
If you're planning to target iOS 5 only, you can just use GLKView and avoid writing any of this stuff for yourself.
Other comments:
glRenderbufferStorage would create storage at an opaque location that OpenGL could draw to, but how should the OS guess which of your frame buffers is the one you want to show to the user, rather than merely being an intermediate result? The OpenGL spec explicitly doesn't define how you communicate that to your specific OS. In iOS it's achieved via renderbufferStorage:fromDrawable: — that says to add storage that equates to the CALayer that iOS knows how to composite. Apple's method is not a replacement for glRenderbufferStorage, it does something that glRenderbufferStorage can't and shouldn't, and there are many times you'll use it instead even when programming for iOS only.
- (id)initWithFrame: is the initialiser you'd use if you were creating the view manually. - (id)initWithCoder: is used by the system to load the view from a NIB.
Has your UIView definitely specified its layerClass as CAEAGLLayer? If not then the call to your EAGL context would be permitted to fail.
