Setting CAEAGLLayer properties for OpenGL ES? - iOS

When using framebuffer objects for rendering on iOS, which appears to be Apple's preferred way of rendering according to the OpenGL ES Programming Guide for iOS, one is supposed to use glRenderbufferStorage() to specify properties like width and height, according to the OpenGL ES 2.0 Programming Guide by Munshi, Ginsburg and Shreiner. In its guide, Apple replaces this with a renderbufferStorage:fromDrawable: message sent to the EAGLContext.
Apple then writes, without further detail, that one should fetch the width and height from the renderbuffer, as that buffer sets them on creation.
The width and height are 0 though.
The CAEAGLLayer Class Reference says to "Set the layer bounds to match the dimensions of the display". The CAEAGLLayer class is the class Apple wants one to use as the backing class of one's view, which is done by returning it from the view's layerClass method. CAEAGLLayer has only one property, drawableProperties, an NSDictionary, and its documentation is sparse. Dimensions cannot be set there.
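(For reference, a minimal sketch of what drawableProperties does control, namely the color format and retained backing rather than the size; the values shown are illustrative, not a fix:)
theCAEAGLLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGBA8, kEAGLDrawablePropertyColorFormat,
    nil];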
Thus: how does one go about setting a CAEAGLLayer's properties for OpenGL ES?
Here's my code thus far. (Note: an old Apple example uses initWithCoder:; the use of initWithFrame: here is something I either guessed at or picked up somewhere I don't remember.)
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self)
    {
        // Use the view's backing CAEAGLLayer and make it opaque for compositing.
        theCAEAGLLayer = (CAEAGLLayer *)self.layer;
        theCAEAGLLayer.opaque = YES;
        theEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        [EAGLContext setCurrentContext:theEAGLContext];
        glGenFramebuffers(1, &theFramebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, theFramebuffer);
        glGenRenderbuffers(1, &theColorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, theColorRenderbuffer);
        // Ask the context to allocate storage backed by the layer.
        [theEAGLContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:theCAEAGLLayer];
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, theColorRenderbuffer);
        // These come back as 0 -- the problem described above.
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &widthOfTheColorRenderbuffer);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &heightOfTheColorRenderbuffer);
        glGenRenderbuffers(1, &theDepthRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, theDepthRenderbuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, widthOfTheColorRenderbuffer, heightOfTheColorRenderbuffer);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, theDepthRenderbuffer);
        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        {
            // Incomplete framebuffer -- no error handling yet.
        }
    }
    return self;
}

Proper answer:
UIKit batches together certain operations and defers them until later in the runloop. That's because you may have code that changes the size of a view and changes different bits of text inside it. You probably want that stuff to happen atomically.
What that probably means for you is that the layer hasn't been sized yet. Have you tried moving what you have to - (void)layoutSubviews?
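As a minimal sketch of that suggestion, reusing the ivars from the question's code (this is an illustration of the approach, not confirmed working code): the storage allocation moves to layoutSubviews, which UIKit calls after the layer has been sized.
- (void)layoutSubviews
{
    [super layoutSubviews];
    [EAGLContext setCurrentContext:theEAGLContext];
    // By now UIKit has sized the layer, so the allocated storage
    // (and the queried width/height) should be non-zero.
    glBindRenderbuffer(GL_RENDERBUFFER, theColorRenderbuffer);
    [theEAGLContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &widthOfTheColorRenderbuffer);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &heightOfTheColorRenderbuffer);
}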
If you're planning to target iOS 5 only, you can just use GLKView and avoid writing any of this stuff for yourself.
Other comments:
glRenderbufferStorage would create storage at an opaque location that OpenGL could draw to, but how should the OS guess which of your framebuffers is the one you want to show to the user, rather than merely an intermediate result? The OpenGL spec explicitly doesn't define how you communicate that to your particular OS. On iOS it's achieved via renderbufferStorage:fromDrawable:, which says to add storage that equates to a CALayer that iOS knows how to composite. Apple's method is not a replacement for glRenderbufferStorage; it does something that glRenderbufferStorage can't and shouldn't, and there are many times you'll still use glRenderbufferStorage even when programming for iOS only.
- (id)initWithFrame: is the initialiser you'd use if you were creating the view manually. - (id)initWithCoder: is used by the system to load the view from a NIB.
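A hedged sketch of covering both paths by funnelling them into one shared setup method (commonSetup is a hypothetical helper, not anything from the question):
- (id)initWithFrame:(CGRect)frame
{
    if ((self = [super initWithFrame:frame]))
        [self commonSetup]; // hypothetical shared GL setup
    return self;
}

- (id)initWithCoder:(NSCoder *)aDecoder
{
    if ((self = [super initWithCoder:aDecoder]))
        [self commonSetup]; // same setup when loaded from a NIB
    return self;
}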
Has your UIView definitely specified its layerClass as CAEAGLLayer? If not then the call to your EAGL context would be permitted to fail.
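If not, the override is the one from the CAEAGLLayer documentation (also quoted in a later question below):
+ (Class)layerClass
{
    return [CAEAGLLayer class];
}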

Related

OpenGL ES 2.0 and default FrameBuffer in iOS

I'm a bit confused about framebuffers.
Currently, to draw on screen, I generate a framebuffer with a renderbuffer for GL_COLOR_ATTACHMENT0 using this code.
- (void)initializeBuffers
{
    // Build the main framebuffer
    glGenFramebuffers(1, &frameBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    // Build the color buffer
    glGenRenderbuffers(1, &colorBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colorBuffer);
    // Set up the color buffer with the EAGLLayer (it automatically defines the width and height of the buffer)
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:EAGLLayer];
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &bufferWidth);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &bufferHeight);
    // Attach the color buffer to the framebuffer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorBuffer);
    // Check the framebuffer status
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    NSAssert(status == GL_FRAMEBUFFER_COMPLETE, ERROR_FRAMEBUFFER_FAIL);
}
And I show the buffer content using
[context presentRenderbuffer:GL_RENDERBUFFER];
Reading this question, I saw the comment of Arttu Peltonen who says:
Default framebuffer is where you render to by default, you don't have
to do anything to get that. Framebuffer objects are what you can
render to instead, and that's called "off-screen rendering" by some.
If you do that, you end up with your image in a texture instead of the
default framebuffer (that gets displayed on-screen). You can copy the
image from that texture to the default framebuffer (on-screen), that's
usually done with blitting (but it's only available in OpenGL ES 3.0).
But if you only wanted to show the image on-screen, you probably
wouldn't use a FBO in the first place.
So I wonder if my method is only meant for off-screen rendering.
And in that case, what do I have to do to render to the default framebuffer?
(Note: I don't want to use a GLKView...)
The OpenGL ES spec provides for two kinds of framebuffers: window-system-provided and framebuffer objects. The default framebuffer would be the window-system-provided kind. But the spec doesn't require that window-system-provided framebuffers or a default framebuffer exist.
In iOS, there are no window-system-provided framebuffers, and no default framebuffer -- all drawing is done with framebuffer objects. To render to the screen, you create a renderbuffer whose storage comes from a CAEAGLLayer object (or you use one that's created on your behalf, as when using the GLKView class). That's exactly what your code is doing.
To do offscreen rendering, you create a renderbuffer and call glRenderbufferStorage to allocate storage for it. Said storage is not associated with a CAEAGLLayer, so that renderbuffer can't be (directly) presented on the screen. (It's not a texture either -- setting up a texture as a render target works differently -- it's just an offscreen buffer.)
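As a minimal sketch of that offscreen path (all names and sizes here are placeholders; GL_RGBA8_OES comes from the OES_rgb8_rgba8 extension that iOS supports, with GL_RGBA4 as the core ES 2.0 fallback):
GLuint offscreenFramebuffer, offscreenColorbuffer;
glGenFramebuffers(1, &offscreenFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFramebuffer);
glGenRenderbuffers(1, &offscreenColorbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, offscreenColorbuffer);
// glRenderbufferStorage allocates the storage itself; no CAEAGLLayer is
// involved, so this buffer cannot be handed to -presentRenderbuffer:.
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, 512, 512);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, offscreenColorbuffer);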
There's more information about all of this and example code for each approach in Apple's OpenGL ES Programming Guide for iOS.

Logical buffer load - slow framebuffer load - iOS

We are trying to figure out why we get a relatively slow FPS on iPhone 4 and iPad 1.
We are seeing this category of warning in our OpenGL ES analysis: Logical Buffer Load. The summary is "Slow framebuffer load". The recommendation says that the framebuffer must be loaded by the GPU before rendering, and suggests that we are failing to perform a fullscreen clear operation at the beginning of each frame. However, we are doing this with glClear.
[EAGLContext setCurrentContext:_context];
glBindFramebuffer(GL_FRAMEBUFFER, _defaultFramebuffer);
glClear(GL_COLOR_BUFFER_BIT);
// Our OpenGL Drawing Occurs here
...
...
...
// Hint to OpenGL that this buffer's contents need not be preserved
const GLenum discards[] = {GL_DEPTH_ATTACHMENT};
glBindFramebuffer(GL_FRAMEBUFFER, _defaultFramebuffer);
glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);
// present render
[_context presentRenderbuffer:GL_RENDERBUFFER];
We are not actually using a depth or stencil buffer.
This is happening when we render textures as tiles and it happens each time we load a new tile. It is pointing to our glDrawArrays command.
Any recommendations on how we can get rid of this warning?
If it helps at all, this is how we are setting up the layer:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
    kEAGLColorFormatRGB565, kEAGLDrawablePropertyColorFormat,
    nil];
After a lot of work and deliberation, I managed to figure this out in the end.
OK, I am using an open source library called GLESuperman. It's a great library which helps to debug these kinds of issues, and it can be used for drawing graphics; it's pretty fast. Yes, I have no idea why it's called that... but it's free and it works. Just search for it on GitHub. It gets updated very frequently and supports iOS 7 and higher.
OK, so to implement it, do the following:
// Import the framework into your Xcode project.
#import <GLESuperman/GLESuperman.h>
// You will also need to import Core Graphics.
#import <CoreGraphics/CoreGraphics.h>

// To run it in debug mode and get a live, detailed
// report on things like FPS, do the following.
GLESuperman *debugData = [[GLESuperman alloc] init];
[debugData runGraphicDebugWithRepeat:YES inBackground:YES];

// To draw graphics, do the following.
GLESuperman *graphicView = [[GLESuperman alloc] init];
[graphicView drawView:CGRectMake(0, 0, 50, 50)];

// You can do other things too, like adding images etc.
// Just look at the library documentation; it has everything.
[graphicView setAlpha:1.0];
[graphicView showGraphic];

OpenGL ES examples that don't use EAGLContext

I'd like to better understand the creation, allocation, and binding of OpenGL ES framebuffers, renderbuffers, etc. under iOS. I understand that the EAGLContext and EAGLSharegroup classes normally manage the allocation and binding of such objects. However, the Apple docs suggest that it is possible to do GL offscreen rendering without using the EAGLContext class, and I'm interested in how. Does anyone have any pointers to code examples?
I would also be interested in examples showing how to accomplish offscreen rendering with EAGLContext.
The only way to render content using OpenGL ES on iOS, offscreen or onscreen, is to do so through an EAGLContext. From the OpenGL ES Programming Guide:
Before your application can call any OpenGL ES functions, it must
initialize an EAGLContext object and set it as the current context.
I think the following lines might be what are causing some confusion:
The EAGLContext class also provides methods your application uses to
integrate OpenGL ES content with Core Animation. Without these
methods, your application would be limited to working with offscreen
images.
What that means is that if you want to render content to the screen, you use some extra methods only provided by the EAGLContext class, such as -renderbufferStorage:fromDrawable:. You still need an EAGLContext to manage OpenGL ES commands even if you're going to draw offscreen, but these particular methods which are specific to EAGLContext are needed to draw onscreen.
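For illustration, a minimal sketch of that prerequisite (names are placeholders): this much has to precede any GL call, whether you later draw onscreen or offscreen.
EAGLContext *context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
// From here on, plain OpenGL ES calls are valid; for purely offscreen work
// the Core Animation-specific EAGLContext methods are never needed.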
To your second question: how you set up your offscreen rendering will depend on the configuration of that offscreen render (texture-backed FBO, depth buffer, etc.). For example, the following code will set up a simple FBO that has no depth buffer and renders to the already-set-up outputTexture texture:
glActiveTexture(GL_TEXTURE1);
glGenFramebuffers(1, &filterFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, filterFramebuffer);
// Allocate the texture that will receive the rendered image.
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)currentFBOSize.width, (int)currentFBOSize.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// Attach the texture as the FBO's color target.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
For code examples, you could look at how I do this within the open source GPUImage framework (which just does simple image rendering) or my open source Molecules application (which does more complex offscreen rendering using depth buffers).

Is there another way to display OpenGL content than using a Core Animation aware renderbuffer?

According to Apple's OpenGL ES Programming Guide, "If [a] framebuffer is intended to be displayed to the user, use a special Core Animation-aware renderbuffer."
The text goes on to say that to make this Core Animation-aware renderbuffer, one needs to "Subclass UIView to create an OpenGL ES view for [the] iOS application [and] Override the layerClass" by using this code:
+ (Class)layerClass
{
    return [CAEAGLLayer class];
}
However, if one examines Apple's GLCameraRipple example, which displays OpenGL ES content to the end user, layerClass never appears to be overridden. A text search for layerClass or CAEAGLLayer reveals they are missing.
If you look for other approaches to display directly to users, Apple gives two other OpenGL approaches, but both seem to imply that they are not for displaying directly to users but rather are for off-screen rendering. (i.e. "If the framebuffer is used to perform offscreen image processing, attach a renderbuffer. If the framebuffer image is used as an input to a later rendering step, attach a texture.")
Is there another way to display OpenGL content than using a Core Animation-aware renderbuffer, or is Apple somehow overriding the layer class so the OpenGL content becomes Core Animation-aware in another way?
The reason you don't see a subclassed UIView with a CAEAGLLayer backing it in the GLCameraRipple example is that it uses a GLKView. GLKView is a class introduced in iOS 5.0 as part of GLKit, and it wraps common code such as the explicit override to use a CAEAGLLayer and the setup of its matching renderbuffer.
This is still being done, it's just abstracted away from you. For displaying OpenGL ES content to the screen, you still need to go through a CAEAGLLayer one way or another.
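For illustration, a rough sketch of the GLKView route (assuming iOS 5+; names are placeholders): the CAEAGLLayer and its renderbuffer are created for you behind these calls.
GLKView *glView = [[GLKView alloc] initWithFrame:self.view.bounds];
glView.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
glView.delegate = self; // drawing happens in -glkView:drawInRect:
[self.view addSubview:glView];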
Offscreen rendering is a different animal, because there you aren't attaching to a layer for display, so there's no layer needed. If you want to render to a texture, attach a texture as a target for your FBO, and that's it.

iOS GLKit and back to default framebuffer

I am running the boilerplate OpenGL example code that Xcode creates for an OpenGL project for iOS. This sets up a simple ViewController and uses GLKit to handle the rest of the work.
All the update/draw functionality of the application is in C++. It is cross platform.
There is a lot of framebuffer creation going on. The draw phase renders to a few frame buffers and then tries to set it back to the default framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This generates a GL_INVALID_ENUM error, and only on iOS.
I am completely stumped as to why. The code runs fine on all major platforms except iOS. I'm inclined to blame GLKit. Are there any examples of iOS OpenGL ES setup that do not use GLKit?
UPDATE
The following snippet of code lets me see the default framebuffer that GLKit is using. For some reason it comes out as 2. Sure enough, if I use 2 in all my glBindFramebuffer calls, it works. This is very frustrating.
[view bindDrawable];
GLint defaultFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &defaultFBO);
LOGI("DEFAULT FBO: %d", defaultFBO);
What reason on earth would cause GLKit not to generate its internal framebuffer at 0? That is the semantic all other implementations of OpenGL use: 0 is the default FBO.
On iOS there is no default framebuffer. See Framebuffer Objects are the Only Rendering Target on iOS. I don't know much about GLKit, but on iOS, to render something on screen you need to create a framebuffer, attach a renderbuffer to it, and inform the Core Animation layer that this renderbuffer will be the "screen" or "default framebuffer" to draw to. Meaning: everything you draw to this framebuffer will appear on screen. See Rendering to a Core Animation Layer.
I feel it's necessary to point out here that the call to glBindFramebuffer(GL_FRAMEBUFFER, 0);
does not return rendering to the main framebuffer on iOS, although it would appear to work on machines running Windows, macOS or Linux. On desktops and laptops the window system provides a default framebuffer, and binding zero returns rendering to it; that is why the call appears to work there. iOS has no such window-system-provided framebuffer, so binding zero simply unbinds the current framebuffer object, in the same way that glBindTexture(GL_TEXTURE_2D, 0); unbinds the current texture.
It is possible that on some handheld devices the driver automatically activates the main system framebuffer when you bind zero without binding another. That would be a choice made by the manufacturer and is not something you should count on; it is not part of the OpenGL ES spec.
On an iOS device, you should instead make the call
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);
provided that you named your system framebuffer viewFramebuffer. Look through your initialization code for the call
glGenFramebuffers(1, &viewFramebuffer);
Whatever name you wrote there is what you bind to when returning to your main system buffer.
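As a small sketch of a more portable habit (names are placeholders): query whatever binding your setup code or GLKit installed, and restore that instead of assuming a fixed handle.
GLint previousFramebuffer;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &previousFramebuffer);
// ... bind and render to an offscreen FBO here ...
// Restore whatever framebuffer was bound before, rather than hard-coding 0 or 2.
glBindFramebuffer(GL_FRAMEBUFFER, (GLuint)previousFramebuffer);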
If you are using GLKit, then you can use the following call:
[((GLKView *) self.view) bindDrawable];
(The self.view part may be slightly different depending on your particular startup code.)
Also, for iOS, you could use glBindFramebuffer(GL_FRAMEBUFFER, 2);, but this is not likely to be consistent across future devices released by Apple. They may change the default value of 2 to 3 or something else in the future, so you'd want to use the actual name instead of an integer value. A typical draw cycle then looks like this:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    // Use a secondary/offscreen framebuffer to store an intermediate render result.
    [self.shader drawOffscreenOnFBO];
    // Return to the default framebuffer of the GLKView.
    [((GLKView *) self.view) bindDrawable];
    // Draw on the main screen.
    [self.shader drawinmainscreen];
}
Reference: http://districtf13.blogspot.com/
