I am trying to write a game using OpenGL, but I am having a lot of trouble with the new GLKit classes and the default iOS template.
- (void)viewDidLoad
{
    [super viewDidLoad];

    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }

    if (!renderer)
        renderer = [RenderManager sharedManager];

    tiles = [[TileSet alloc] init];

    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;

    [self setupGL];
}
- (void)setupGL
{
    int width = [[self view] bounds].size.width;
    int height = [[self view] bounds].size.height;

    [EAGLContext setCurrentContext:self.context];

    self.effect = [[GLKBaseEffect alloc] init];
    self.effect.light0.enabled = GL_TRUE;
    self.effect.light0.diffuseColor = GLKVector4Make(0.4f, 0.4f, 0.4f, 1.0f);

    // Configure buffers
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    glGenRenderbuffers(1, &colourRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colourRenderBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colourRenderBuffer);

    glGenRenderbuffers(1, &depthRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);

    // Confirm everything happened awesomely
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"failed to make complete framebuffer object %x", status);
    }

    glEnable(GL_DEPTH_TEST);

    // Enable the OpenGL states we are going to be using when rendering
    glEnableClientState(GL_VERTEX_ARRAY);
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.4f, 0.4f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    float iva[] = {
        0.0, 0.0, 0.0,
        0.0, 1.0, 0.0,
        1.0, 1.0, 0.0,
        1.0, 0.0, 0.0,
    };
    glVertexPointer(3, GL_FLOAT, sizeof(float) * 3, iva);
    glDrawArrays(GL_POINTS, 0, 4);
}

@end
With this the buffer clears (to a grey colour), but nothing from the vertex array renders. I have no idea what to do from here, and because the technology is so new there is not much information available on how to use GLKit properly.
I don't see anything in your setup code that loads your shaders - I presume you are doing this somewhere in your code?
In addition, in your setup code, you are creating your framebuffer. The GLKView does this for you - indeed you are telling the view to use a 24-bit depthbuffer in your viewDidLoad method:
GLKView *view = (GLKView *)self.view;
view.context = self.context;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
So what your glkView:drawInRect: code above is doing is saying: "Bind my handmade framebuffer, and draw some stuff into it". The GLKView then automatically presents itself, but nothing has been drawn into it, you've only drawn into your handmade buffer. Unless you need additional framebuffer objects for tasks such as rendering to texture, then you don't need to concern yourself with framebuffer creation at all - let the GLKView do it automatically.
What you should be doing in your setupGL method (or anywhere you like in the setup) is creating your vertex array object(s) that remember the OpenGL state required to perform a draw. Then, in the glkView:drawInRect: method you should:
Clear using glClear().
Enable your program.
Bind the vertex array object (or, if you didn't use a VAO, enable the appropriate vertex attrib pointers).
Draw your data using glDrawArrays() or glDrawElements().
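Putting those steps together, a minimal glkView:drawInRect: built on GLKBaseEffect might look like the sketch below. This is untested and assumes the self.effect and iva array from the question; note that in ES 2 you enable attributes with glEnableVertexAttribArray(GLKVertexAttribPosition) rather than the ES 1.x glEnableClientState:

```objc
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    // The GLKView has already bound its own framebuffer at this point.
    glClearColor(0.4f, 0.4f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Binds the shaders GLKBaseEffect generates, using the state set on the effect.
    [self.effect prepareToDraw];

    static const GLfloat iva[] = {
        0.0f, 0.0f, 0.0f,
        0.0f, 1.0f, 0.0f,
        1.0f, 1.0f, 0.0f,
        1.0f, 0.0f, 0.0f,
    };
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE,
                          3 * sizeof(GLfloat), iva);
    // GL_POINTS would need a point size supplied by a shader in ES 2, so a
    // triangle fan is an easier way to confirm that the quad actually renders.
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
    glDisableVertexAttribArray(GLKVertexAttribPosition);
}
```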
The GLKView automatically sets its context as current, and binds its framebuffer object before each draw cycle.
Perhaps try to think of GLKView more like a regular UIView. It handles most of the OpenGL code behind the scenes for you, leaving you to simply tell it what it needs to draw. It has its drawRect: code just like a regular UIView - with a regular UIView, in drawRect: you just tell it what to draw, for example using Core Graphics functions - you don't then tell it to present itself.
The GLKViewController is then best thought of as handling the mechanics of the rendering loop behind the scenes. You don't need to implement timers, or even worry about pausing the animation when your application enters the background. You just need to override the update or glkViewControllerUpdate: method (depending on whether you're subclassing or delegating) to update the state of your OpenGL objects or view matrix.
I made a post about the way to set up a basic project template using GLKit. You can find it here:
Steve Zissou's Programming Blog
I haven't used the GLKit yet, but it seems that you do not present your framebuffer after drawing into it.
In an application using OpenGL ES 2 under iOS but without GLKit, I call the following code at the end of the rendering loop.
if (context) {
    [EAGLContext setCurrentContext:context];
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
As I said I haven't used GLKit yet so I hope this might be useful.
I think you forgot to call
[self.effect prepareToDraw];
just before
glDrawArrays(GL_POINTS, 0, 4);
As GLKit mimics the OpenGL ES 1.1 rendering pipeline, you do not need to include routines to define shaders. GLKit does this for you if you wish to use a basic pipeline like OpenGL ES 1.1's.
I need to create a sound wave animation like Siri's (SiriAnim).
With OpenGL I've got the shape of a wave.
Here is my code:
@property (strong, nonatomic) EAGLContext *context;
@property (strong, nonatomic) GLKBaseEffect *effect;
// .....
- (void)setupGL {
    [EAGLContext setCurrentContext:self.context];
    glEnable(GL_CULL_FACE);

    self.effect = [[GLKBaseEffect alloc] init];
    self.effect.useConstantColor = GL_TRUE;
    self.effect.constantColor = GLKVector4Make(0.0f, 1.0f, 0.0f, 1.0f);
}
// .....
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
    glClearColor(_curRed, 0.0, 0.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT);
    [self.effect prepareToDraw];

    GLfloat line[1440];
    for (int i = 0; i < 1440; i += 4) {
        float x = 0.002 * i - 0.75;
        float K = 8.0f;
        float radians = DEGREES_TO_RADIANS(i / 2);
        float func_x = 0.4 *
            pow(K / (K + pow(radians - M_PI, 4.0f)), K) *
            cos(radians - M_PI);
        line[i]     = x;
        line[i + 1] = func_x;
        line[i + 2] = x;
        line[i + 3] = -func_x;
    }

    GLuint bufferObjectNameArray;
    glGenBuffers(1, &bufferObjectNameArray);
    glBindBuffer(GL_ARRAY_BUFFER, bufferObjectNameArray);
    glBufferData(GL_ARRAY_BUFFER,
                 sizeof(line),
                 line,
                 GL_STATIC_DRAW);

    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition,
                          2,
                          GL_FLOAT,
                          GL_FALSE,
                          2 * 4,
                          NULL);

    glLineWidth(15.0);
    glDrawArrays(GL_LINES, 0, 360);
}
BUT! I'm confused because I can't find any solution for a gradient. After a lot of searching I even have a strong suspicion that this task is impossible (because of GLKBaseEffect's constantColor, I think).
So, could anyone help me with any solution for this task?
Can this problem be solved with shaders or textures (the worst solution)?
Bless you for your answers!
Although this could be done with a texture, I think the easiest way to accomplish this is by using OpenGL's default color interpolation. If you make the top vertex of the lines you're drawing a light blue, and the bottom vertex a darker blue, the GPU will automatically interpolate the colors between them to gradually change, and produce the gradient effect you're looking for.
The easiest way to implement this in your code is to make room in your buffer, the "lines" array, for the color of every single vertex of the line, and set up your shaders to output this value. That means you'll have to add inputs and outputs for this color to your vertex and pixel shaders. The idea is to pass it from the vertex to the pixel shader, and the pixel shader outputs the value unmodified. The hardware handles the interpolation between colors automatically for you(!).
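If you stay with GLKBaseEffect rather than hand-written shaders, the same idea applies: turn off useConstantColor and supply a GLKVertexAttribColor attribute alongside each position. A minimal, untested sketch (reusing the question's self.effect; the two vertices and their colors here are illustrative, not from the original code):

```objc
// Interleaved position + color for one line segment.
typedef struct {
    GLfloat position[2];
    GLfloat color[4];
} ColoredVertex;

ColoredVertex segment[2] = {
    { { 0.0f,  0.5f }, { 0.6f, 0.8f, 1.0f, 1.0f } },  // top: light blue
    { { 0.0f, -0.5f }, { 0.0f, 0.2f, 0.6f, 1.0f } },  // bottom: dark blue
};

self.effect.useConstantColor = GL_FALSE;  // use the per-vertex colors instead
[self.effect prepareToDraw];

glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE,
                      sizeof(ColoredVertex),
                      (const GLvoid *)offsetof(ColoredVertex, position));
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_FLOAT, GL_FALSE,
                      sizeof(ColoredVertex),
                      (const GLvoid *)offsetof(ColoredVertex, color));

// The GPU interpolates the two colors along the line, giving the gradient.
glDrawArrays(GL_LINES, 0, 2);
```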
Many modern OpenGL tutorials have examples of doing this. One free online one is from LearnOpenGL's Shader tutorial. If you have the money, though, my favorite explanation of buffers, shaders, and the pipeline itself is in Graham Sellers' OpenGL SuperBible. If you plan on using OpenGL often and really learning it, it's an invaluable desktop reference.
Basically what I'm doing is making a simple finger drawing application. I have a single class that takes the input touch points and does all the fun work of turning those touch points into bezier curves, calculating vertices from those, etc. That's all working fine.
The only interesting constraint I'm working with is that I need strokes to blend on top of each other, but not with themselves. Imagine a scribbly line that crosses itself and has 50% opacity. Where the line crosses itself, there should be no visible blending (it should all look like the same color). However, the line SHOULD blend with the rest of the drawing below it.
To accomplish this, I'm using two textures. A back texture and a scratch texture. While the line is actively being updated (during the course of the stroke), I disable blending, draw the vertices on the scratch texture, then enable blending, and draw the back texture and scratch texture into my frame buffer. When the stroke is finished, I draw the scratch texture into the back texture, and we're ready to start the next stroke.
This all works very smoothly on a newer device, but on older devices the frame rate takes a severe hit. From some testing, it seems that the biggest performance hit is in drawing the textures to the frame buffer, because they're relatively large textures (due to the iPhone's retina resolution).
Does anybody have any hints on some strategies to work around this? I'm happy to provide more specifics or code, I'm just not sure where to start.
I am using OpenGL ES 2.0, targeting iOS 7.0, but testing on an iPhone 4S
The following is code I'm using to draw into the framebuffers:
- (void)drawRect:(CGRect)rect
{
    [self drawRect:rect
         ofTexture:_backTex
       withOpacity:1.0];

    if (_activeSpriteStroke)
    {
        [self drawStroke:_activeSpriteStroke
         intoFrameBuffer:0];
    }
}
Those rely on the following few methods:
- (void)drawRect:(CGRect)rect
       ofTexture:(GLuint)tex
     withOpacity:(CGFloat)opacity
{
    _texShader.color = GLKVector4Make(1.0, 1.0, 1.0, opacity);
    [_texShader prepareToDraw];

    glBindTexture(GL_TEXTURE_2D, tex);

    glBindVertexArrayOES(_texVertexVAO);
    glBindBuffer(GL_ARRAY_BUFFER, _texVertexVBO);
    [self bufferTexCoordsForRect:rect];
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glBindVertexArrayOES(0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, tex);
}
- (void)drawStroke:(AHSpriteStroke *)stroke
   intoFrameBuffer:(GLuint)frameBuffer
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

    [self renderStroke:stroke
           ontoTexture:_scratchTex
         inFrameBuffer:_scratchFrameBuffer];

    if (frameBuffer == 0)
    {
        [self bindDrawable];
    }
    else
    {
        glBindFramebuffer(GL_FRAMEBUFFER, frameBuffer);
    }

    [self setScissorRect:_activeSpriteStroke.boundingRect];
    glEnable(GL_SCISSOR_TEST);

    [self drawRect:self.bounds
         ofTexture:_scratchTex
       withOpacity:stroke.lineOpacity];

    glDisable(GL_SCISSOR_TEST);
    glDisable(GL_BLEND);
}
- (void)renderStroke:(AHSpriteStroke *)stroke
         ontoTexture:(GLuint)tex
       inFrameBuffer:(GLuint)framebuffer
{
    glBindFramebuffer(GL_FRAMEBUFFER, _msFrameBuffer);
    glBindTexture(GL_TEXTURE_2D, tex);

    glClearColor(0.0, 0.0, 0.0, 0.0);
    glClear(GL_COLOR_BUFFER_BIT);

    [stroke render];

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, framebuffer);
    glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _msFrameBuffer);
    glResolveMultisampleFramebufferAPPLE();

    const GLenum discards[] = { GL_COLOR_ATTACHMENT0 };
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, discards);

    glBindTexture(GL_TEXTURE_2D, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
And a couple of the helper methods just for completeness so you can follow it:
- (void)bufferTexCoordsForRect:(CGRect)rect
{
    AHTextureMap textureMaps[4] =
    {
        [self textureMapForPoint:CGPointMake(CGRectGetMinX(rect), CGRectGetMinY(rect))
                          inRect:self.bounds],
        [self textureMapForPoint:CGPointMake(CGRectGetMaxX(rect), CGRectGetMinY(rect))
                          inRect:self.bounds],
        [self textureMapForPoint:CGPointMake(CGRectGetMinX(rect), CGRectGetMaxY(rect))
                          inRect:self.bounds],
        [self textureMapForPoint:CGPointMake(CGRectGetMaxX(rect), CGRectGetMaxY(rect))
                          inRect:self.bounds]
    };

    glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(AHTextureMap), textureMaps, GL_DYNAMIC_DRAW);
}
- (AHTextureMap)textureMapForPoint:(CGPoint)point
                            inRect:(CGRect)outerRect
{
    CGPoint pt = CGPointApplyAffineTransform(point, CGAffineTransformMakeScale(self.contentScaleFactor, self.contentScaleFactor));
    return (AHTextureMap) { { pt.x, pt.y }, { point.x / outerRect.size.width, 1.0 - (point.y / outerRect.size.height) } };
}
From what I understand, you are drawing each quad in a separate draw call.
If your stroke consists of a lot of quads (from sampling the bezier curve), your code will make many draw calls per frame.
Having many draw calls in OpenGL ES 2 on older iOS devices will probably create a bottleneck on the CPU.
The reason is that draw calls in OpenGL ES 2 can have a lot of overhead in the driver.
The driver tries to organize the draw calls you make into something the GPU can digest, and it does this organization using the CPU.
If you intend to draw many quads to simulate a brush stroke, you should update a vertex buffer to contain many quads and then draw it with one draw call instead of making a draw call per quad.
You can verify that your bottleneck is in the CPU with the Time Profiler instrument.
You can then check whether the CPU is spending most of its time on the OpenGL draw call methods or on your own functions.
If the CPU spends most of its time on the OpenGL draw call methods, it is likely because you are making too many draw calls per frame.
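The batching idea is mostly CPU-side bookkeeping and can be sketched independently of OpenGL. A hypothetical helper (names are mine, not from the question) that expands quad centers into one triangle list, which could then be uploaded once and drawn with a single glDrawArrays(GL_TRIANGLES, 0, count) call:

```c
#include <stddef.h>

typedef struct { float x, y; } Vec2;

/* Writes 6 vertices (two triangles) per quad into `out`.
 * `centers` are the quad centers, `halfSize` is half the quad's edge length.
 * `out` must have room for 6 * quadCount vertices.
 * Returns the total number of vertices written. */
size_t batchQuads(const Vec2 *centers, size_t quadCount,
                  float halfSize, Vec2 *out)
{
    size_t v = 0;
    for (size_t i = 0; i < quadCount; i++) {
        float cx = centers[i].x, cy = centers[i].y, h = halfSize;
        Vec2 bl = { cx - h, cy - h }, br = { cx + h, cy - h };
        Vec2 tl = { cx - h, cy + h }, tr = { cx + h, cy + h };
        /* Triangle 1: bottom-left, bottom-right, top-left */
        out[v++] = bl; out[v++] = br; out[v++] = tl;
        /* Triangle 2: bottom-right, top-right, top-left */
        out[v++] = br; out[v++] = tr; out[v++] = tl;
    }
    return v;
}
```

The result is one glBufferData upload and one draw call per stroke instead of one per quad, which is exactly the overhead reduction described above.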
I'm trying to draw a waveform from the incoming iPhone microphone stream. Extracting the data was no problem, and the drawing works fine. Only when I use the OpenGL ES error breakpoint does Xcode throw exceptions at glPushMatrix() and glPopMatrix() with the code GL_INVALID_OPERATION. I searched the internet for more information, but the only thing I found was this:
GL_INVALID_OPERATION is generated if glPushMatrix or glPopMatrix is executed between the execution of glBegin and the corresponding execution of glEnd.
I don't use the commands glBegin or glEnd, so this doesn't help me. Any ideas? What is the problem here? I draw the stuff like this:
- (void)drawPlotWithView:(GLKView *)view drawInRect:(CGRect)rect {
    glBindBuffer(GL_ARRAY_BUFFER, _plotVBO);
    glEnableVertexAttribArray(GLKVertexAttribPosition);
    glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, sizeof(XOAudioPlotGLPoint), NULL);

    [self.baseEffect prepareToDraw];
    glPushMatrix();
    self.baseEffect.transform.modelviewMatrix = GLKMatrix4MakeXRotation(0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, _plotGraphSize);
    glPopMatrix();

    [self.baseEffect prepareToDraw];
    glPushMatrix();
    self.baseEffect.transform.modelviewMatrix = GLKMatrix4MakeXRotation(M_PI);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, _plotGraphSize);
    glPopMatrix();

    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
The initialization is like this:
self.baseEffect = [[GLKBaseEffect alloc] init];
self.baseEffect.useConstantColor = GL_TRUE;
self.preferredFramesPerSecond = 60;

if (![EAGLContext currentContext]) {
    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
} else {
    self.context = [EAGLContext currentContext];
}

if (!self.context) {
    NSLog(@"Failed to create ES context");
} else {
    EAGLContext.currentContext = self.context;
}

GLKView *view = (GLKView *)self.view;
view.context = self.context;
view.drawableMultisample = GLKViewDrawableMultisample4X;

glGenBuffers(1, &_plotVBO);
glLineWidth(2.0f);
glPushMatrix and glPopMatrix refer to the built-in matrix stack from the fixed-function OpenGL pipeline -- that functionality isn't in OpenGL ES 2.0.
However, the way that you're using it looks like it's not really doing anything, and what you are doing is in the wrong order. Drawing with GLKBaseEffect takes three steps:
Set the modelview and projection matrices via properties on your GLKBaseEffect instance. There's no "current matrix" or "matrix mode" implicit state like there is in GLES 1.x; just explicitly named and separately stored properties on GLKBaseEffect. (You're already doing this with the lines where you set self.baseEffect.transform.modelviewMatrix.)
Call prepareToDraw on the GLKBaseEffect instance. This binds the matrices, textures, and other state you've set in GLKBaseEffect for use by the shaders that class generates for you. (You're doing this before setting each matrix, so the matrices you're setting aren't taking effect when you want.)
After all that, perform an OpenGL draw command (glDrawArrays, glDrawElements, etc.) to draw with the state you've set.
The one additional thing you might think about is whether you've (elsewhere) set a different modelview matrix on your baseEffect and are using it for other draw calls. In that case, you might want to save the current matrix before drawing with a different matrix, then restore it afterward. A matrix stack is useful for that, and GLKit provides one in the GLKMatrixStack type and related functions. But if these are your only draw calls with that effect, or your other calls create a matrix from scratch like these ones do, there's no need to save/restore.
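A sketch of that save/restore pattern with GLKMatrixStack, assuming the baseEffect and draw calls from the question (this replaces the glPushMatrix/glPopMatrix pairs, which don't exist in ES 2):

```objc
#import <GLKit/GLKit.h>

// Seed the stack with the effect's current modelview matrix.
GLKMatrixStackRef stack = GLKMatrixStackCreate(kCFAllocatorDefault);
GLKMatrixStackLoadMatrix4(stack, self.baseEffect.transform.modelviewMatrix);

// Draw with a temporary transform, CPU-side "push".
GLKMatrixStackPush(stack);
GLKMatrixStackRotateX(stack, M_PI);
self.baseEffect.transform.modelviewMatrix = GLKMatrixStackGetMatrix4(stack);
[self.baseEffect prepareToDraw];   // bind state *after* setting the matrix
glDrawArrays(GL_TRIANGLE_STRIP, 0, _plotGraphSize);

// "Pop" and restore the original matrix for subsequent draws.
GLKMatrixStackPop(stack);
self.baseEffect.transform.modelviewMatrix = GLKMatrixStackGetMatrix4(stack);
CFRelease(stack);
```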
Continuing my research on the topic, the answer seems to be that OpenGL ES 2.0 indeed does not support matrix stack or push/pop, see here.
I'm trying to add multisampling to my app, but it seems I've made a mistake somewhere, and I can't find what I did wrong.
This is how I setup my frame buffers and render buffers
- (void)setupBuffers {
    glGenFramebuffers(1, &_framebuffer);
    glGenRenderbuffers(1, &_renderbuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer]; // I already set the current context to _context, and _eaglLayer is just self.layer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _renderbuffer);

    if (YES) { // note: if I set this to NO, my app displays properly (I don't even have to remove the code in my render method)
        glGenFramebuffers(1, &_msaa_framebuffer);
        glGenRenderbuffers(1, &_msaa_renderbuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, _msaa_framebuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, _msaa_renderbuffer);
        glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 2, GL_RGBA8_OES, [AppDelegate screenWidth], [AppDelegate screenHeight]); // yes, this is the proper width and height, I tested it
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _msaa_renderbuffer);
    }

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"Failed to make complete framebuffer object %x", status);
        exit(1);
    }
}
After viewDidLoad is called on my ViewController I call the method setupDisplayLink on my UIView subclass.
- (void)setupDisplayLink {
    CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(render:)];
    //displayLink.frameInterval = [[NSNumber numberWithInt:1] integerValue];
    [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}
This calls my render method which is pretty simple:
- (void)render:(CADisplayLink *)displayLink {
    glBindRenderbuffer(GL_RENDERBUFFER, _msaa_framebuffer);
    glViewport(0, 0, [AppDelegate screenWidth], [AppDelegate screenHeight]);
    glClearColor(188.0f / 255.0f, 226.0f / 255.0f, 232.0f / 255.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    [[OpenGLViewController instance].menu draw:displayLink]; // drawing happens here

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _framebuffer);
    glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _msaa_framebuffer);
    glResolveMultisampleFramebufferAPPLE();
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, (GLenum[1]) { GL_COLOR_ATTACHMENT0 });

    glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
    NSLog(@"%@", @"HI");
    [_context presentRenderbuffer:GL_RENDERBUFFER];
}
It's not hanging at all (the app keeps printing "HI" in the console because I told it to in the render method). For some reason only the first frame is drawn when I add the extra frame buffer and render buffer for multisampling and I can't figure out why. It just freezes on that frame. Why will my app only draw the first frame with MSAA?
This is not surprising to say the least. The only time you have _msaa_framebuffer bound as the DRAW buffer is immediately after you initialize your FBOs.
The first time you call render (...), the following line will be drawn into your _msaa_framebuffer:
[[OpenGLViewController instance].menu draw:displayLink]; // drawing happens here
However, later on in render (...) you set the draw buffer to _framebuffer and you never change it from that point on.
To fix your problem, all you have to do is remember to bind _msaa_framebuffer as your draw buffer at the beginning of your render (...) function:
- (void)render:(CADisplayLink *)displayLink {
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _msaa_framebuffer);
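Under that change, the whole render: method might look like this sketch (all names taken from the question; note that the question's original first line also passed a framebuffer name to glBindRenderbuffer, which a renderbuffer bind can't use):

```objc
- (void)render:(CADisplayLink *)displayLink {
    // Bind the MSAA framebuffer as the draw target every frame.
    glBindFramebuffer(GL_FRAMEBUFFER, _msaa_framebuffer);
    glViewport(0, 0, [AppDelegate screenWidth], [AppDelegate screenHeight]);
    glClearColor(188.0f / 255.0f, 226.0f / 255.0f, 232.0f / 255.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    [[OpenGLViewController instance].menu draw:displayLink];

    // Resolve the multisampled contents into the on-screen framebuffer.
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _framebuffer);
    glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _msaa_framebuffer);
    glResolveMultisampleFramebufferAPPLE();
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, (GLenum[1]){ GL_COLOR_ATTACHMENT0 });

    // Present the resolved renderbuffer.
    glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
    [_context presentRenderbuffer:GL_RENDERBUFFER];
}
```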
I am working on a simple iOS game and I am having a problem with UINavigationController and an EAGL-View.
The situation is as follows: I use one EAGL-View in conjunction with multiple controllers.
Whenever I push the MainViewController (which does all the custom OpenGL drawing), I end up using more memory (around 5 MB per push!).
The problem seems to be within [eaglView_ setFramebuffer] - or at least that's where almost all allocations seem to happen (I've checked the live bytes via Instruments - around 70% of memory is allocated in this function).
EAGLView::setFramebuffer:
- (void)setFramebuffer {
    if (context) {
        [EAGLContext setCurrentContext:context];

        if (!defaultFramebuffer)
            [self createFramebuffer];

        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);
        glViewport(0, 0, framebufferWidth, framebufferHeight);
    }
}
and EAGLView::createFramebuffer:
- (void)createFramebuffer
{
    if (context && !defaultFramebuffer) {
        [EAGLContext setCurrentContext:context];

        // Create default framebuffer object.
        glGenFramebuffers(1, &defaultFramebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);

        // Create color render buffer and allocate backing store.
        glGenRenderbuffers(1, &colorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);

        // MSAA stuff
        glGenFramebuffers(1, &msaaFramebuffer);
        glGenRenderbuffers(1, &msaaRenderBuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, msaaRenderBuffer);
        glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, framebufferWidth, framebufferHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaRenderBuffer);

        glGenRenderbuffers(1, &msaaDepthBuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, msaaDepthBuffer);
        glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, framebufferWidth, framebufferHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msaaDepthBuffer);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
    }
}
As you can see - nothing special there.
I switch my ViewControllers like this: (in the AppDelegate)
- (void)swichToViewController:(UIViewController *)newViewController overlapCurrentView:(bool)overlap {
    // get the viewController on top of the stack - popViewController doesn't do anything if it's the rootViewController.
    if (!overlap) {
        // clear everything that was on the eaglView before
        [eaglView_ clearFramebuffer];
        [eaglView_ applyMSAA];
        [eaglView_ presentFramebuffer];
    }

    UIViewController *oldViewController = [navController_ topViewController];

    // see if the view to be switched to is already the top controller:
    if (oldViewController == newViewController)
        return;

    // if the view is already on the stack, just remove all views on top of it:
    if ([[navController_ viewControllers] containsObject:newViewController]) {
        [oldViewController setView:nil];
        [newViewController setView:eaglView_];
        [navController_ popToViewController:newViewController animated:!overlap];
        return;
    }

    // else push the new controller
    [navController_ popViewControllerAnimated:NO];
    [oldViewController setView:nil];
    [newViewController setView:eaglView_];
    [navController_ pushViewController:newViewController animated:!overlap];
}
Finally, I render my sprites like this: (In my MainViewController.mm):
- (void)drawFrame
{
    // When I delete this line, I just get a white screen, even if I have called setFramebuffer earlier(?!)
    [(EAGLView *)self.view setFramebuffer];

    glClearColor(1, 1, 1, 1);
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    GLfloat screenWidth = [UIScreen mainScreen].bounds.size.width;
    GLfloat screenHeight = [UIScreen mainScreen].bounds.size.height;
    glOrthof(0, screenWidth, 0, screenHeight, -1.0, 1.0);
    glViewport(0, 0, screenWidth, screenHeight);

    [gameManager_ playGame];

    // MSAA stuff
    [(EAGLView *)self.view applyMSAA];
    [(EAGLView *)self.view presentFramebuffer];
    [(EAGLView *)self.view clearFramebuffer];
}
Something that might be worth mentioning is that I don't allocate the views every time I push them, I keep references to them until the game exits.
[gameManager_ playGame] draws the sprites to the screen - but I've used this method in another project without any memory problems.
Any help would be really appreciated as I've been stuck on this for 2 days :/
Edit:
I've been able to narrow the problem down to a call to gldLoadFramebuffer. This is called whenever I try to draw something on the screen using an OpenGL function.
It seems to consume more memory when the context changes... But how could I avoid that?
I think I found the problem.
For anyone interested: the MSAA buffers weren't correctly deleted when switching views. That caused the performance to drop significantly after a few pushes, and was also responsible for the increase in memory usage.
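For anyone hitting the same leak, a hedged sketch of what "correctly deleting" the buffers might look like on teardown (ivar names match the EAGLView code above; the method name is mine):

```objc
// Call this when the EAGLView is torn down or the layer is resized,
// before creating new buffers, so repeated pushes don't leak GPU memory.
- (void)deleteFramebuffer
{
    if (!context)
        return;
    [EAGLContext setCurrentContext:context];

    // MSAA objects created in createFramebuffer
    if (msaaFramebuffer)  { glDeleteFramebuffers(1, &msaaFramebuffer);   msaaFramebuffer = 0; }
    if (msaaRenderBuffer) { glDeleteRenderbuffers(1, &msaaRenderBuffer); msaaRenderBuffer = 0; }
    if (msaaDepthBuffer)  { glDeleteRenderbuffers(1, &msaaDepthBuffer);  msaaDepthBuffer = 0; }

    // Default framebuffer and its color renderbuffer
    if (defaultFramebuffer) { glDeleteFramebuffers(1, &defaultFramebuffer); defaultFramebuffer = 0; }
    if (colorRenderbuffer)  { glDeleteRenderbuffers(1, &colorRenderbuffer); colorRenderbuffer = 0; }
}
```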