Logical Buffer Load - slow framebuffer load - iOS

We are trying to figure out why we have a relatively slow FPS on iPhone 4 and iPad 1.
We are seeing this category of warning in our OpenGL ES analysis: Logical Buffer Load. The summary is "Slow framebuffer load". The recommendation says that the framebuffer must be loaded by the GPU before rendering, and suggests that this is because we are failing to perform a full-screen clear operation at the beginning of each frame. However, we are doing this with glClear.
[EAGLContext setCurrentContext:_context];
glBindFramebuffer(GL_FRAMEBUFFER, _defaultFramebuffer);
glClear(GL_COLOR_BUFFER_BIT);
// Our OpenGL Drawing Occurs here
...
...
...
// hint to opengl to not bother with this buffer
const GLenum discards[] = {GL_DEPTH_ATTACHMENT};
glBindFramebuffer(GL_FRAMEBUFFER, _defaultFramebuffer);
glDiscardFramebufferEXT(GL_FRAMEBUFFER, 1, discards);
// present render
[_context presentRenderbuffer:GL_RENDERBUFFER];
We are not actually using a depth or stencil buffer.
This is happening when we render textures as tiles and it happens each time we load a new tile. It is pointing to our glDrawArrays command.
Any recommendations on how we can get rid of this warning?
If it helps at all, this is how we are setting up the layer:
eaglLayer.drawableProperties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:NO], kEAGLDrawablePropertyRetainedBacking,
kEAGLColorFormatRGB565, kEAGLDrawablePropertyColorFormat,
nil];
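One thing worth double-checking, since the analyzer points at the draw call rather than the clear: the clear only prevents a framebuffer load if nothing restricts it. Below is a hedged sketch of a frame start with the usual culprits ruled out; the scissor and color-mask lines are assumptions about what might differ in your real code, not a confirmed fix, and it reuses _context and _defaultFramebuffer from the code above.
// Sketch only: make sure nothing turns the "full-screen clear" into a partial clear.
[EAGLContext setCurrentContext:_context];
glBindFramebuffer(GL_FRAMEBUFFER, _defaultFramebuffer);

// A scissor rectangle or a restricted color mask makes glClear cover only part
// of the renderbuffer, so the tile-based GPU still has to load the old contents.
glDisable(GL_SCISSOR_TEST);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);  // clear every attached buffer, every frame

// ... draw the tiles ...
// Note: binding and rendering to a second framebuffer mid-frame (e.g. while
// preparing a new tile) and then returning to _defaultFramebuffer without
// clearing it again can also trigger a Logical Buffer Load.

[_context presentRenderbuffer:GL_RENDERBUFFER];
If the tile loading does render into a separate FBO mid-frame, doing that work before any drawing into the default framebuffer keeps the clear effective.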

After a lot of work and deliberation, I managed to figure this out in the end.
OK, I am using an open source library called GLESuperman. It's a great library that helps debug these kinds of issues, and it can also be used for drawing graphics; it's pretty fast. I have no idea why it's called that, but it's free and it works. Just search for it on GitHub. It gets updated very frequently and supports iOS 7 and higher.
To implement it, do the following:
// Import the framework into your Xcode project.
#import <GLESuperman/GLESuperman.h>
// Also you will need to import Core Graphics.
#import <CoreGraphics/CoreGraphics.h>
// In order to run it in debug mode and get
// a live detailed report about things like FPS, do the following.
GLESuperman *debugData = [[GLESuperman alloc] init];
[debugData runGraphicDebug withRepeat:YES inBackground:YES];
// In order to draw graphics, do the following.
GLESuperman *graphicView = [[GLESuperman alloc] init];
[graphicView drawView:CGRectMake(0, 0, 50, 50)];
// You can do other things too like add images/etc..
// Just look at the library documentation, it has everything.
[graphicView setAlpha:1.0];
[graphicView showGraphic];

Related

GLKit Multi-sampling with OpenGL ES 3.0

I am in the process of migrating a small iPad application from OpenGL ES 2.0 to OpenGL ES 3.0. In the App, I use a subclass of GLKView to handle all my drawing, though the only GLKit features I use are:
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3]; // Or 2
self.drawableDepthFormat = GLKViewDrawableDepthFormatNone;
self.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
self.drawableMultisample = GLKViewDrawableMultisample4X;
self.drawableStencilFormat = GLKViewDrawableStencilFormatNone;
self.enableSetNeedsDisplay = YES;
// ... gl code following
My -drawRect method looks like this:
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT);
[_currentProgram use]; // Use program
glUniformMatrix4fv([_currentProgram getUniformLocation:@"modelViewProjectionMatrix"], 1, 0, modelViewProjectionMatrix.m);
// ...
if (isES3) {
    glBindVertexArray(vertexArray);
} else {
    glBindVertexArrayOES(vertexArray);
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, verticiesLength);
I do not yet have an OpenGL ES 3.0-capable device, so all of my OpenGL ES 3.0 testing is being done in the iOS Simulator. OpenGL ES 2.0 testing is done on-device and in the Simulator.
As expected, in ES2, the screen is cleared to white immediately on startup (-drawRect having been called once and no vertices to draw yet). However, when I make the swap to ES3, the context is successfully created, no gl calls fail, and yet the screen does not clear as it should - it just appears as a black screen. Fishing around for what was going wrong, I decided to remove multi-sampling:
self.drawableMultisample = GLKViewDrawableMultisampleNone;
And it worked! (Albeit without antialiasing.) My question is therefore, are there any known issues with GLKit multi-sampling with OpenGL ES 3.0 in the iOS Simulator (iPad, iPad Retina and iPad Retina (64-Bit))? My laptop has more than enough free memory to cope with the multi-sampling.
OpenGL is a very asynchronous API. For example, if you call glClear, you should not expect the screen to be cleared when the call returns. You can only reliably look at the result of the rendering you produced after you finished rendering the frame, and it is displayed (typically by swapping the buffers when using double buffered rendering).
So what you're observing most likely does not mean anything. Does everything look fine at the end of the frame? If yes, there's no reason to be worried.
The difference is most likely caused by the different rendering process used when multisampling is enabled. In that case, rendering goes to a higher-resolution buffer first, and is only downsampled to the actual framebuffer at the end of the frame.
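If the Simulator's multisampled drawable really is the problem, one hedged workaround is to set drawableMultisample to None and do the MSAA resolve yourself with plain ES 3.0 calls. A sketch only, assuming it runs inside the GLKView subclass's -drawRect: (so self is the view) and that width and height hold the drawable size in pixels:
// Find out which framebuffer backs the GLKView's drawable this frame.
GLint drawableFBO = 0;
[self bindDrawable];
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &drawableFBO);

// Create a 4x multisampled color renderbuffer and attach it to our own FBO.
GLuint msaaFBO, msaaColor;
glGenFramebuffers(1, &msaaFBO);
glGenRenderbuffers(1, &msaaColor);
glBindRenderbuffer(GL_RENDERBUFFER, msaaColor);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFBO);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msaaColor);

// ... render the frame into msaaFBO ...

// Resolve the multisampled renderbuffer into the drawable's framebuffer.
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFBO);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, (GLuint)drawableFBO);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
In a real app the MSAA objects would be created once and reused, and deleted when the drawable size changes; they are shown inline here only to keep the sketch self-contained.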
When you deploy, choose the device type to be iPad, not Universal, in General > Deployment Info; that will fix it.

Memory usage keeps increasing over time (GLKit - iOS)

I've almost finished my app. One of the views uses GLKit. I just have a problem with memory. Basically what happens is that when GLKView is displayed, the memory consumption constantly rises (seen with Instruments). At a certain point it obviously crashes.
I don't know much about GLKit, so I hope you can help me.
The problem is a 3d arrow that I'm displaying. If I don't draw it, all the other things don't create any problem.
This is the header file that contains the arrow vertex data:
#import <GLKit/GLKit.h>
struct arrowVertexData
{
GLKVector3 vertex;
GLKVector3 normal;
GLKVector2 texCoord;
};
typedef struct arrowVertexData arrowVertexData;
typedef arrowVertexData* vertexDataPtr;
static const arrowVertexData MeshVertexData[] = {
{/*v:*/{{-0.000004, 0.0294140, -0.0562387}}, /*n:*/{{0.000000, 1.000000, 0.000000}}, /*t:*/{{0.500000, 0.333333}}},
... etc...
And this is the draw code:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    [self.arrowEffect prepareToDraw];
    //glGenVertexArraysOES(1, &arrowVertexArray);
    //glBindVertexArrayOES(arrowVertexArray);
    glGenBuffers(1, &arrowVertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, arrowVertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(MeshVertexData), MeshVertexData, GL_STATIC_DRAW);
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, sizeof(arrowVertexData), 0);
    glEnableVertexAttribArray(GLKVertexAttribNormal);
    glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_TRUE, sizeof(arrowVertexData), (void *)offsetof(arrowVertexData, normal));
    glBindVertexArrayOES(arrowVertexArray);
    // Render the object with GLKit
    glDrawArrays(GL_TRIANGLES, 0, sizeof(MeshVertexData) / sizeof(arrowVertexData));
    // reset buffers
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    // disable attributes
    glDisableVertexAttribArray(GLKVertexAttribNormal);
    glDisableVertexAttribArray(GLKVertexAttribPosition);
}
Any suggestion?
Thank you very much for your help!
You are creating a new vertex buffer (VBO) each time drawInRect is called, and never deleting it. glGenBuffers and glBindBuffer set up a new buffer and make it current, but the real damage is done by glBufferData, which copies the data into the new buffer.
glBindBuffer(GL_ARRAY_BUFFER, 0); resets GL to not use the buffer, and glDisableVertexAttribArray(GLKVertexAttribPosition); tells GL not to look for position data in a buffer anymore, but neither of these calls does anything to free the memory. If you wanted to free the memory each time, you would need to call glDeleteBuffers(1, &arrowVertexBuffer);.
A better approach would be to generate the buffer once at startup and delete it when terminating: hang on to arrowVertexBuffer, rebinding and unbinding it each time through as needed, as well as resetting the pointers, assuming other parts of your program are modifying GL state.
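For illustration, here is a minimal sketch of that pattern. The setupGL/tearDownGL method names are the usual GLKViewController convention, not something from your code, and the identifiers are reused from your snippet:
// Create the VBO once, up front.
- (void)setupGL {
    [EAGLContext setCurrentContext:self.context];
    glGenBuffers(1, &arrowVertexBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, arrowVertexBuffer);
    glBufferData(GL_ARRAY_BUFFER, sizeof(MeshVertexData), MeshVertexData, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    [self.arrowEffect prepareToDraw];

    // Rebind the existing buffer instead of generating a new one each frame.
    glBindBuffer(GL_ARRAY_BUFFER, arrowVertexBuffer);
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE,
                          sizeof(arrowVertexData), 0);
    glEnableVertexAttribArray(GLKVertexAttribNormal);
    glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_TRUE,
                          sizeof(arrowVertexData), (void *)offsetof(arrowVertexData, normal));
    glDrawArrays(GL_TRIANGLES, 0, sizeof(MeshVertexData) / sizeof(arrowVertexData));
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}

// Delete the VBO exactly once, when tearing the view down.
- (void)tearDownGL {
    [EAGLContext setCurrentContext:self.context];
    glDeleteBuffers(1, &arrowVertexBuffer);  // this is what actually frees the memory
}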
It looks like you also started down the path of using a Vertex Array Object (VAO), which would be another way to capture state once for reuse, although it may be better to wait until you have the VBO working correctly before attempting that. Both VBOs and VAOs are methods for caching state that evolved over time to reduce the load each time through your rendering loop, but VAOs cast a much broader net, which could make it trickier to get it right.
As a general suggestion, you may be able to get more attention for a question like this by adding a more general and popular tag, such as [Open GL].
Another debugging tool you should definitely try is OpenGL Profiler. If you did not install it with Xcode, look it up in the documentation and you should find a link to download the Graphics Tools package. The Resources window will allow you to track the buffer objects in use.
Have you tried running the static analyzer in Xcode?
It's very good at pointing out allocated memory that isn't released and that kind of thing.
To use it hold the mouse down on the "Run" button and select "Analyze" from the drop down list.
If it does find anything it usually points them out in blue and you can see the lines tracing back to where memory is being allocated and not released, etc...
Let me know if that has any effect.

OpenGL ES examples that don't use EAGLContext

I'd like to better understand the creation, allocation, and binding of OpenGL ES framebuffers, renderbuffers, etc. under iOS. I understand that the EAGLContext and EAGLSharegroup classes normally manage the allocation and binding of such objects. However, the Apple docs suggest that it is possible to do GL offscreen rendering without using the EAGLContext class, and I'm interested in how. Does anyone have any pointers to code examples?
I would also be interested in examples showing how to accomplish offscreen rendering with EAGLContext.
The only way to render content using OpenGL ES on iOS, offscreen or onscreen, is to do so through an EAGLContext. From the OpenGL ES Programming Guide:
Before your application can call any OpenGL ES functions, it must
initialize an EAGLContext object and set it as the current context.
I think the following lines might be what are causing some confusion:
The EAGLContext class also provides methods your application uses to
integrate OpenGL ES content with Core Animation. Without these
methods, your application would be limited to working with offscreen
images.
What that means is that if you want to render content to the screen, you use some extra methods only provided by the EAGLContext class, such as -renderbufferStorage:fromDrawable:. You still need an EAGLContext to manage OpenGL ES commands even if you're going to draw offscreen, but these particular methods which are specific to EAGLContext are needed to draw onscreen.
To your second question, how you setup your offscreen rendering will depend on the configuration of this offscreen render (texture-backed FBO, depth buffer, etc.). For example, the following code will set up a simple FBO that has no depth buffer and renders to the already set up outputTexture texture:
glActiveTexture(GL_TEXTURE1);
glGenFramebuffers(1, &filterFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, filterFramebuffer);
glBindTexture(GL_TEXTURE_2D, outputTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (int)currentFBOSize.width, (int)currentFBOSize.height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, outputTexture, 0);
For code examples, you could look at how I do this within the open source GPUImage framework (which just does simple image rendering) or my open source Molecules application (which does more complex offscreen rendering using depth buffers).
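To verify that an offscreen pass like this actually produced output, one option (a sketch reusing filterFramebuffer and currentFBOSize from the snippet above) is to check framebuffer completeness and read the pixels back:
// Sketch only: confirm the FBO is usable, then read back the rendered result.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Offscreen framebuffer is incomplete");
}

// ... issue the offscreen draw calls here ...

GLint w = (GLint)currentFBOSize.width;
GLint h = (GLint)currentFBOSize.height;
GLubyte *pixels = (GLubyte *)malloc((size_t)w * (size_t)h * 4);
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
// ... inspect or wrap the pixels (e.g. in a CGImage) ...
free(pixels);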

iOS GLKit and back to default framebuffer

I am running the boilerplate OpenGL example code that Xcode creates for an OpenGL project for iOS. This sets up a simple ViewController and uses GLKit to handle the rest of the work.
All the update/draw functionality of the application is in C++. It is cross platform.
There is a lot of framebuffer creation going on. The draw phase renders to a few frame buffers and then tries to set it back to the default framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
This generates a GL_INVALID_ENUM error. Only on iOS.
I am completely stumped as to why. The code runs fine on all major platforms except iOS. I'm wanting to blame GLKit. Any examples of iOS OpenGL setup that do not use GLKit?
UPDATE
The following snippet of code lets me see the default framebuffer that GLKit is using. For some reason it comes out as "2". Sure enough, if I use "2" in all my glBindFramebuffer calls it works. This is very frustrating.
[view bindDrawable];
GLint defaultFBO;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &defaultFBO);
LOGI("DEFAULT FBO: %d", defaultFBO);
What reason on earth would cause GLKit to not generate its internal framebuffer at 0? This is the semantic all other implementations of OpenGL use, 0 is the default FBO.
On iOS there is no default framebuffer. See Framebuffer Objects are the Only Rendering Target on iOS. I don't know much about GLKit, but on iOS, to render something on screen you need to create a framebuffer, attach a renderbuffer to it, and inform the Core Animation layer that this renderbuffer will be the "screen" or "default framebuffer" to draw to. Meaning: everything you draw to this framebuffer will appear on screen. See Rendering to a Core Animation Layer.
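A practical consequence for cross-platform code: instead of hard-coding 0 (or 2), query whatever framebuffer is currently bound before rendering offscreen and restore that afterwards. A minimal sketch, assuming it runs inside the GLKit draw callback:
// Save whatever framebuffer GLKit bound for this frame...
GLint previousFBO = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &previousFBO);

// ... bind and render into your own framebuffers here ...

// ...and restore it instead of assuming the default is 0.
glBindFramebuffer(GL_FRAMEBUFFER, (GLuint)previousFBO);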
I feel it's necessary to point out here that the call to glBindFramebuffer(GL_FRAMEBUFFER, 0);
does not return rendering to the main framebuffer, although it would appear to work on machines that run Windows, Unix (Mac) or Linux. Desktops and laptops have no concept of a main default system buffer; this idea started with handheld devices. When you make an OpenGL bind call with zero as the parameter, what you are doing is setting this function to NULL. It's how you disable this function. It's the same with glBindTexture(GL_TEXTURE_2D, 0);
It is possible that on some handheld devices the driver automatically activates the main system framebuffer when you set the framebuffer to NULL without activating another. That would be a choice made by the manufacturer and is not something you should count on; it is not part of the OpenGL ES spec. For desktops and laptops, this is absolutely necessary, since disabling the framebuffer is required to return to normal OpenGL rendering.
On an iOS device, you should make the following call,
glBindFramebuffer(GL_FRAMEBUFFER, viewFramebuffer);,
providing that you named your system framebuffer 'viewFramebuffer'. Look through your initialization code for the following call:
glGenFramebuffers(1, &viewFramebuffer);
Whatever you have written at the end there is what you bind to when returning to your main system buffer.
If you are using GLKit then you can use the following call,
[((GLKView *) self.view) bindDrawable]; The 'self.view' may be slightly different depending on your particular startup code.
Also, for iOS, you could use, glBindFramebuffer(GL_FRAMEBUFFER, 2); but this is likely not going to be consistent across future devices released by Apple. They may change the default value of '2' to be '3' or something else in the future so you'd want to use the actual name instead of an integer value.
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    // use your secondary/offscreen framebuffer to store the render result
    [self.shader drawOffscreenOnFBO];
    // back to the default framebuffer of the GLKView
    [((GLKView *) self.view) bindDrawable];
    // draw on the main screen
    [self.shader drawinmainscreen];
}
Reference: http://districtf13.blogspot.com/

setting a CAEAGLLayer properties for OpenGL ES?

Using framebuffer objects for rendering on iOS, which appears to be Apple's preferred way of rendering on iOS according to the OpenGL ES Programming Guide for iOS from Apple, one is supposed to use glRenderbufferStorage() to specify properties like width and height, according to the OpenGL ES 2.0 Programming Guide from Munshi, Ginsburg and Shreiner. Apple replaces this with the renderbufferStorage:fromDrawable: message sent to the EAGLContext in the above guide.
Apple then goes on to say that the width and height should be fetched from the renderbuffer, as that buffer sets them on creation, without giving further detail.
The width and height are 0 though.
The CAEAGLLayer Class Reference says to "Set the layer bounds to match the dimensions of the display". The CAEAGLLayer class is the class Apple wants one to use as the backing class of the view; this is done by returning it from the view's layerClass method. This CAEAGLLayer only has one property, "drawableProperties", which is an NSDictionary. Unfortunately that documentation is sparse. Dimensions cannot be set.
Thus: how to go on setting a CAEAGLLayer properties for OpenGL ES?
Here's my code thus far (note: an old Apple example uses initWithCoder; I either guessed, or picked up from somewhere I don't remember, that I should use initWithFrame):
- (id)initWithFrame:(CGRect)frame
{
    self = [super initWithFrame:frame];
    if (self)
    {
        // Initialization code
        theCAEAGLLayer = (CAEAGLLayer*)self.layer;
        theCAEAGLLayer.opaque = YES;
        theEAGLContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
        [EAGLContext setCurrentContext:theEAGLContext];
        glGenFramebuffers(1, &theFramebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, theFramebuffer);
        glGenRenderbuffers(1, &theColorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, theColorRenderbuffer);
        [theEAGLContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:theCAEAGLLayer];
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, theColorRenderbuffer);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &widthOfTheColorRenderbuffer);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &heightOfTheColorRenderbuffer);
        glGenRenderbuffers(1, &theDepthRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, theDepthRenderbuffer);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, widthOfTheColorRenderbuffer, heightOfTheColorRenderbuffer);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, theDepthRenderbuffer);
        if(glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        {
        }
    }
    return self;
}
Proper answer:
UIKit batches together certain operations and defers them until later in the runloop. That's because you may have code that changes the size of a view and changes different bits of text inside it. You probably want that stuff to happen atomically.
What that probably means for you is that the layer hasn't been sized yet. Have you tried moving what you have to - (void)layoutSubviews?
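A minimal sketch of that change, reusing the names from the init code in the question (object creation stays in the initializer; only the storage allocation moves, and teardown of any previously allocated storage is omitted):
// Allocate the drawable storage only once the layer has its final size.
- (void)layoutSubviews
{
    [super layoutSubviews];
    [EAGLContext setCurrentContext:theEAGLContext];

    glBindRenderbuffer(GL_RENDERBUFFER, theColorRenderbuffer);
    [theEAGLContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:theCAEAGLLayer];
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &widthOfTheColorRenderbuffer);
    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &heightOfTheColorRenderbuffer);

    glBindRenderbuffer(GL_RENDERBUFFER, theDepthRenderbuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16,
                          widthOfTheColorRenderbuffer, heightOfTheColorRenderbuffer);
}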
If you're planning to target iOS 5 only, you can just use GLKView and avoid writing any of this stuff for yourself.
Other comments:
glRenderbufferStorage would create storage at an opaque location that OpenGL could draw to, but how should the OS guess which of your framebuffers is the one you want to show to the user, rather than merely an intermediate result? The OpenGL spec explicitly doesn't define how you communicate that to your specific OS. On iOS it's achieved via renderbufferStorage:fromDrawable:, which says to add storage that equates to the CALayer that iOS knows how to composite. Apple's method is not a replacement for glRenderbufferStorage; it does something that glRenderbufferStorage can't and shouldn't, and there are many times you'll still use glRenderbufferStorage even when programming for iOS only.
- (id)initWithFrame: is the initialiser you'd use if you were creating the view manually. - (id)initWithCoder: is used by the system to load the view from a NIB.
Has your UIView definitely specified its layerClass as CAEAGLLayer? If not then the call to your EAGL context would be permitted to fail.
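For reference, that override is a one-liner in the UIView subclass (a sketch; without it the view is backed by a plain CALayer and renderbufferStorage:fromDrawable: can fail or report zero dimensions):
// Back the view with a CAEAGLLayer instead of a plain CALayer.
+ (Class)layerClass
{
    return [CAEAGLLayer class];
}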
