iPhone memory usage increases when pushing an EAGLView using UINavigationController

I am working on a simple iOS game and I am having a problem with UINavigationController and an EAGLView.
The situation is as follows: I use one EAGLView in conjunction with multiple controllers.
Whenever I push the MainViewController (which does all the custom OpenGL drawing), I end up using more memory (around 5 MB per push!).
The problem seems to be within [eaglView_ setFramebuffer] - or at least that's where almost all allocations seem to happen (I've checked the live bytes via Instruments; around 70% of memory is allocated in this function).
EAGLView::setFramebuffer:
- (void)setFramebuffer {
    if (context) {
        [EAGLContext setCurrentContext:context];

        if (!defaultFramebuffer)
            [self createFramebuffer];

        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);

        glViewport(0, 0, framebufferWidth, framebufferHeight);
    }
}
and EAGLView::createFramebuffer:
- (void)createFramebuffer
{
    if (context && !defaultFramebuffer) {
        [EAGLContext setCurrentContext:context];

        // Create default framebuffer object.
        glGenFramebuffers(1, &defaultFramebuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);

        // Create color render buffer and allocate backing store.
        glGenRenderbuffers(1, &colorRenderbuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
        [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &framebufferWidth);
        glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &framebufferHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);

        // MSAA stuff
        glGenFramebuffers(1, &msaaFramebuffer);
        glGenRenderbuffers(1, &msaaRenderBuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, msaaFramebuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, msaaRenderBuffer);
        glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, framebufferWidth, framebufferHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, msaaRenderBuffer);

        glGenRenderbuffers(1, &msaaDepthBuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, msaaDepthBuffer);
        glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_DEPTH_COMPONENT16, framebufferWidth, framebufferHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, msaaDepthBuffer);

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
            NSLog(@"Failed to make complete framebuffer object %x", glCheckFramebufferStatus(GL_FRAMEBUFFER));
    }
}
As you can see - nothing special there.
I switch my ViewControllers like this: (in the AppDelegate)
- (void)switchToViewController:(UIViewController *)newViewController overlapCurrentView:(bool)overlap {
    if (!overlap) {
        // clear everything that was on the eaglView before
        [eaglView_ clearFramebuffer];
        [eaglView_ applyMSAA];
        [eaglView_ presentFramebuffer];
    }

    // get the viewController on top of the stack - popViewController doesn't do anything if it's the rootViewController.
    UIViewController *oldViewController = [navController_ topViewController];

    // see if the view to be switched to is already the top controller:
    if (oldViewController == newViewController)
        return;

    // if the view is already on the stack, just remove all views on top of it:
    if ([[navController_ viewControllers] containsObject:newViewController]) {
        [oldViewController setView:nil];
        [newViewController setView:eaglView_];
        [navController_ popToViewController:newViewController animated:!overlap];
        return;
    }

    // else push the new controller
    [navController_ popViewControllerAnimated:NO];
    [oldViewController setView:nil];
    [newViewController setView:eaglView_];
    [navController_ pushViewController:newViewController animated:!overlap];
}
Finally, I render my sprites like this (in my MainViewController.mm):
- (void)drawFrame
{
    // When I delete this line, I just get a white screen, even if I have called setFramebuffer earlier (?!)
    [(EAGLView *)self.view setFramebuffer];

    glClearColor(1, 1, 1, 1);
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    GLfloat screenWidth = [UIScreen mainScreen].bounds.size.width;
    GLfloat screenHeight = [UIScreen mainScreen].bounds.size.height;
    glOrthof(0, screenWidth, 0, screenHeight, -1.0, 1.0);
    glViewport(0, 0, screenWidth, screenHeight);

    [gameManager_ playGame];

    // MSAA stuff
    [(EAGLView *)self.view applyMSAA];
    [(EAGLView *)self.view presentFramebuffer];
    [(EAGLView *)self.view clearFramebuffer];
}
Something that might be worth mentioning: I don't allocate the views every time I push them; I keep references to them until the game exits.
[gameManager_ playGame] draws the sprites to the screen - but I've used this method in another project without any memory problems.
Any help would be really appreciated, as I've been stuck on this for 2 days :/
Edit:
I've been able to narrow the problem down to a call to gldLoadFramebuffer, which is called whenever I try to draw something on the screen using an OpenGL function.
It seems to consume more memory when the context changes... but how can I avoid that?

I think I found the problem.
For anyone interested: the MSAA buffers weren't correctly deleted when switching views. That caused performance to drop significantly after a few pushes, and it was also responsible for the increase in memory usage.

Related

iOS Rendering a object to a texture

I'm currently trying to divide the OpenGL ES 2.0 drawing process into two halves: the first half, where I render an object of interest (i.e. a cube or triangle) to a framebuffer that has a texture attached to it, and the second half, where I apply that texture onto the face of a shape drawn in another framebuffer (i.e. another cube or triangle).
I cleared the framebuffer bound to the texture with a green color, and have been able to get that color to appear on a triangle that I've drawn in another framebuffer that has the main renderbuffer attached and that I call [context presentRenderbuffer:renderbuffer] on. However, no matter what I do, I'm not able to additionally draw another shape into that texture after I've cleared it to a green background, and render that onto the shape I've drawn.
For some visual reference, currently I'm drawing a square to the screen in my main framebuffer, and then applying a texture that is supposed to have a green background plus a triangle in the middle, but all that I get is this green screen.
It has everything that I currently want, except there is no triangle that is also in the middle. Essentially, this should look like a big green square with a black triangle in the middle of it, where the green and the triangle all came from the texture (the square would have originally been black).
My texture drawing method and main framebuffer drawing methods are included below (without the setup code):
- (BOOL)loadModelToTexture:(GLuint *)tex {
    GLuint fb;
    GLenum status;
    glGenFramebuffers(1, &fb);

    // Set up the FBO with one texture attachment
    glBindFramebuffer(GL_FRAMEBUFFER, fb);
    glGenTextures(1, tex);
    glBindTexture(GL_TEXTURE_2D, *tex);
    NSLog(@"Error1: %x", glGetError());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 128, 128, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, *tex, 0);
    status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        // Handle error here
        NSLog(@"Loading model to texture failed");
        return FALSE;
    }

    glClearColor(0.0f, 1.0f, 0.0f, 1.0f);  // Set color's clear-value to green
    glClearDepthf(1.0f);                   // Set depth's clear-value to farthest
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glViewport(0, 0, self.frame.size.width * self.contentsScale,
               self.frame.size.height * self.contentsScale);
    NSLog(@"Error2: %x", glGetError());

    // Update attribute values.
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    NSLog(@"Error3: %x", glGetError());
    glDrawArrays(GL_TRIANGLES, 0, 3);
    return TRUE;
}
- (void)draw {
    [EAGLContext setCurrentContext:context];
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
    glClearColor(0.0f, 0.5f, 0.5f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glViewport(0, 0, self.frame.size.width * self.contentsScale,
               self.frame.size.height * self.contentsScale);

    // Use shader program.
    glUseProgram(program);

    // Update attribute values.
    glEnableVertexAttribArray(ATTRIB_VERTEX);
    glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
    //for some reason this is unneeded, but I'm not sure why
    glVertexAttribPointer(ATTRIB_TEXTURE_COORD, 2, GL_FLOAT, 0, 0, texCoords);
    glEnableVertexAttribArray(ATTRIB_TEXTURE_COORD);
    glBindTexture(GL_TEXTURE_2D, textureName0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDrawArrays(GL_TRIANGLES, 3, 3);
    [context presentRenderbuffer:renderbuffer];
}
What other steps do I need to take to get it to draw correctly to the texture/apply correctly to the main framebuffer drawing?
Never mind; it turns out that I was drawing my triangle to the texture, but the triangle simply defaulted to the same color as the background texture, so it's a completely different issue.

Why does adding multisampling to my app freeze the screen on the first frame?

I'm trying to add multisampling to my app, but it seems I've made a mistake somewhere, and I can't find what I did wrong.
This is how I set up my framebuffers and renderbuffers:
- (void)setupBuffers {
    glGenFramebuffers(1, &_framebuffer);
    glGenRenderbuffers(1, &_renderbuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer]; // I already set the current context to _context, and _eaglLayer is just self.layer
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _renderbuffer);

    if (YES) { // note: if I set this to NO, my app displays properly (I don't even have to remove the code in my render method)
        glGenFramebuffers(1, &_msaa_framebuffer);
        glGenRenderbuffers(1, &_msaa_renderbuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, _msaa_framebuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, _msaa_renderbuffer);
        glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 2, GL_RGBA8_OES, [AppDelegate screenWidth], [AppDelegate screenHeight]); // yes, this is the proper width and height, I tested it
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _msaa_renderbuffer);
    }

    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"Failed to make complete framebuffer object %x", status);
        exit(1);
    }
}
After viewDidLoad is called on my ViewController I call the method setupDisplayLink on my UIView subclass.
- (void)setupDisplayLink {
    CADisplayLink *displayLink = [CADisplayLink displayLinkWithTarget:self selector:@selector(render:)];
    //displayLink.frameInterval = [[NSNumber numberWithInt:1] integerValue];
    [displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
}
This calls my render method which is pretty simple:
- (void)render:(CADisplayLink *)displayLink {
    glBindRenderbuffer(GL_RENDERBUFFER, _msaa_framebuffer);
    glViewport(0, 0, [AppDelegate screenWidth], [AppDelegate screenHeight]);
    glClearColor(188.0f / 255.0f, 226.0f / 255.0f, 232.0f / 255.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    [[OpenGLViewController instance].menu draw:displayLink]; // drawing happens here

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _framebuffer);
    glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _msaa_framebuffer);
    glResolveMultisampleFramebufferAPPLE();
    glDiscardFramebufferEXT(GL_READ_FRAMEBUFFER_APPLE, 1, (GLenum[1]){ GL_COLOR_ATTACHMENT0 });
    glBindRenderbuffer(GL_RENDERBUFFER, _renderbuffer);
    NSLog(@"%@", @"HI");
    [_context presentRenderbuffer:GL_RENDERBUFFER];
}
It's not hanging at all (the app keeps printing "HI" in the console because I told it to in the render method). For some reason, only the first frame is drawn when I add the extra framebuffer and renderbuffer for multisampling, and I can't figure out why. It just freezes on that frame. Why does my app only draw the first frame with MSAA?
This is not surprising, to say the least. The only time you have _msaa_framebuffer bound as the DRAW buffer is immediately after you initialize your FBOs.
The first time you call render(...), the following line will draw into your _msaa_framebuffer:
[[OpenGLViewController instance].menu draw:displayLink]; // drawing happens here
However, later in render(...) you set the draw buffer to _framebuffer and never change it from that point on.
To fix the problem, all you have to do is remember to bind _msaa_framebuffer as your draw buffer at the beginning of your render(...) function:
- (void)render:(CADisplayLink *)displayLink {
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _msaa_framebuffer);

Taking an iOS Screenshot Mixing OpenGLES and UIKit from a parent class

I know this question is answered quite a bit, but I have a different situation, it seems. I'm trying to write a top-level function where I can take a screenshot of my app at any time, be it OpenGL ES or UIKit, and I won't have access to the underlying classes to make any changes.
The code I've been trying works for UIKit, but returns a black screen for the OpenGL ES parts:
CGSize imageSize = [[UIScreen mainScreen] bounds].size;
if (NULL != UIGraphicsBeginImageContextWithOptions)
    UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
else
    UIGraphicsBeginImageContext(imageSize);

CGContextRef context = UIGraphicsGetCurrentContext();

// Iterate over every window from back to front
for (UIWindow *window in [[UIApplication sharedApplication] windows])
{
    if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
    {
        // -renderInContext: renders in the coordinate space of the layer,
        // so we must first apply the layer's geometry to the graphics context
        CGContextSaveGState(context);
        // Center the context around the window's anchor point
        CGContextTranslateCTM(context, [window center].x, [window center].y);
        // Apply the window's transform about the anchor point
        CGContextConcatCTM(context, [window transform]);
        // Offset by the portion of the bounds left of and above the anchor point
        CGContextTranslateCTM(context,
                              -[window bounds].size.width * [[window layer] anchorPoint].x,
                              -[window bounds].size.height * [[window layer] anchorPoint].y);
        for (UIView *subview in window.subviews)
        {
            CAEAGLLayer *eaglLayer = (CAEAGLLayer *)subview.layer;
            if ([eaglLayer respondsToSelector:@selector(drawableProperties)]) {
                NSLog(@"responds");
                /*eaglLayer.drawableProperties = @{
                    kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
                    kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
                };*/
                UIImageView *glImageView = [[UIImageView alloc] initWithImage:[self snapshotx:subview]];
                glImageView.transform = CGAffineTransformMakeScale(1, -1);
                [glImageView.layer renderInContext:context];
                //CGImageRef iref = [self snapshot:subview withContext:context];
                //CGContextDrawImage(context, CGRectMake(0.0, 0.0, 640, 960), iref);
            }
            [[window layer] renderInContext:context];
            // Restore the context
            CGContextRestoreGState(context);
        }
    }
}
// Retrieve the screenshot image
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return image;
and
- (UIImage *)snapshotx:(UIView *)eaglview
{
    GLint backingWidth, backingHeight;

    //glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    //don't know how to access the renderbuffer if I can't directly access the code below

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width,
                                    height,
                                    8,
                                    32,
                                    width * 4,
                                    colorspace,
                                    // Fix from Apple implementation
                                    // (was: kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast).
                                    kCGBitmapByteOrderDefault,
                                    ref,
                                    NULL,
                                    true,
                                    kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
    {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = eaglview.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // The UIKit coordinate system is upside down relative to the GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    return image;
}
Any advice on how to mix the two without having the ability to modify the classes in the rest of the application?
Thanks!
I see what you tried to do there, and it is not really a bad concept. There does seem to be one big problem, though: you cannot just call glReadPixels at any time you want. First, you should make sure the buffer actually holds the data (pixels) you need, and second, it should be called on the same thread the GL part is working on...
If the GL views are not yours, you might have some big trouble calling that screenshot method: you need to call some method that triggers binding their internal context, and if a view is animating you will have to know when its cycle is done to ensure that the pixels you receive are the same as the ones presented on the view.
Anyway, if you get past all that, you will still probably need to "jump" between threads or wait for a cycle to finish. In that case I suggest you use blocks that return the screenshot image, passed as a method parameter so you can catch it whenever it is returned. That being said, it would be best if you could override some methods on the GL views to return the screenshot image via a callback block, and write some recursive system around that.
To sum it up, you need to anticipate multithreading, setting the context, binding the correct framebuffer, and waiting for everything to be rendered. This may well make it impossible to create a screenshot method that simply works for any application, view, or system without overriding some internal methods.
Note that you are simply not allowed to take a whole-screen screenshot (like the one you get by pressing the home and lock buttons at the same time) from within your application. The UIView part is so easy to turn into an image because a UIView is redrawn into a graphics context independently of the screen; if you could likewise take some GL pipeline, bind it to your own buffer and context, and draw there, you would be able to capture it independently too, on any thread.
Actually, I'm trying to do something similar: I'll post in full when I've ironed it out, but in brief:
use your superview's layer's renderInContext method
in the subviews which use openGL, implement the layer delegate's drawLayer:inContext: method
to render your view into the context, use a CVOpenGLESTextureCacheRef
Your superview's layer will call renderInContext: on each of its sublayers; by implementing the delegate method, your GLView can respond for its layer.
Using a texture cache is much, much faster than glReadPixels, which would probably be a bottleneck.
Sam

iOS: draw over video with OpenGL

I'm trying to draw some OpenGL graphics over the video from the camera.
I've modified Apple's GLCameraRipple sample with code that draws a couple of textured triangles. This code works well in another OpenGL project of mine (one without GLKit).
Unfortunately, it only works here in this way: when my app starts, I see the screen filled with the clear color and my textured triangles on it (but no video), and a moment later the screen turns black and I don't see anything.
Could you explain to me what the problem is?
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.5, 0.0, 0.0, 0.3);
    glClear(GL_COLOR_BUFFER_BIT);

    glUseProgram(_program);

    glUniform1i(uniforms[UNIFORM_Y], 0);
    glUniform1i(uniforms[UNIFORM_UV], 1);

    if (_ripple)
    {
        glDrawElements(GL_TRIANGLE_STRIP, [_ripple getIndexCount], GL_UNSIGNED_SHORT, 0);
    }

    [self drawAnimations];
}
- (void)drawAnimations {
    // Use shader program.
    glUseProgram(_texturingProgram);

    GLfloat modelviewProj[16];
    [self MakeMatrix:modelviewProj
             OriginX:100.0
             OriginY:100.0
               Width:200.0
              Height:200.0
            Rotation:0.0];

    // update uniform values
    glUniformMatrix4fv(texturing_uniforms[TEXTURING_UNIFORM_MODEL_VIEW_PROJECTION_MATRIX], 1, GL_FALSE, modelviewProj);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, _animationTexture);
    glUniform1i(texturing_uniforms[TEXTURING_UNIFORM_TEXTURE], 0);

    glVertexAttribPointer(TEXTURING_ATTRIB_VERTEX, 3, GL_FLOAT, GL_FALSE, sizeof(vertexDataTextured), &plain[0].vertex);
    glEnableVertexAttribArray(TEXTURING_ATTRIB_VERTEX);
    glVertexAttribPointer(TEXTURING_ATTRIB_TEX_COORDS, 2, GL_FLOAT, GL_FALSE, sizeof(vertexDataTextured), &plain[0].texCoord);
    glEnableVertexAttribArray(TEXTURING_ATTRIB_TEX_COORDS);

    glDrawArrays(GL_TRIANGLES, 0, 6);

    if (![self validateProgram:_texturingProgram]) {
        NSLog(@"Failed to validate program: (%d)", _texturingProgram);
    }
}
You could place a transparent UIView containing your Open GL drawing layer over another UIView containing the camera preview image layer.

Drawing with GLKit

I am trying to write a game using OpenGL, but I am having a lot of trouble with the new GLKit classes and the default template from iOS.
- (void)viewDidLoad
{
    [super viewDidLoad];

    self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    if (!self.context) {
        NSLog(@"Failed to create ES context");
    }

    if (!renderer)
        renderer = [RenderManager sharedManager];

    tiles = [[TileSet alloc] init];

    GLKView *view = (GLKView *)self.view;
    view.context = self.context;
    view.drawableDepthFormat = GLKViewDrawableDepthFormat24;

    [self setupGL];
}
- (void)setupGL
{
    int width = [[self view] bounds].size.width;
    int height = [[self view] bounds].size.height;

    [EAGLContext setCurrentContext:self.context];

    self.effect = [[GLKBaseEffect alloc] init];
    self.effect.light0.enabled = GL_TRUE;
    self.effect.light0.diffuseColor = GLKVector4Make(0.4f, 0.4f, 0.4f, 1.0f);

    //Configure Buffers
    glGenFramebuffers(1, &framebuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    glGenRenderbuffers(2, &colourRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, colourRenderBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colourRenderBuffer);

    glGenRenderbuffers(3, &depthRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, depthRenderBuffer);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthRenderBuffer);

    //Confirm everything happened awesomely
    GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (status != GL_FRAMEBUFFER_COMPLETE) {
        NSLog(@"failed to make complete framebuffer object %x", status);
    }

    glEnable(GL_DEPTH_TEST);

    // Enable the OpenGL states we are going to be using when rendering
    glEnableClientState(GL_VERTEX_ARRAY);
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClearColor(0.4f, 0.4f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

    float iva[] = {
        0.0, 0.0, 0.0,
        0.0, 1.0, 0.0,
        1.0, 1.0, 0.0,
        1.0, 0.0, 0.0,
    };
    glVertexPointer(3, GL_FLOAT, sizeof(float) * 3, iva);
    glDrawArrays(GL_POINTS, 0, 4);
}
@end
With this, the buffer clears (to a grey colour), but nothing from the vertex array renders. I have no idea what to do from here, and because GLKit is so new there is not much information available on how to use it properly.
I don't see anything in your setup code that loads your shaders - I presume you are doing this somewhere in your code?
In addition, in your setup code, you are creating your framebuffer. The GLKView does this for you - indeed you are telling the view to use a 24-bit depthbuffer in your viewDidLoad method:
GLKView *view = (GLKView *)self.view;
view.context = self.context;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
So what your glkView:drawInRect: code above is doing is saying: "Bind my handmade framebuffer, and draw some stuff into it". The GLKView then automatically presents itself, but nothing has been drawn into it, you've only drawn into your handmade buffer. Unless you need additional framebuffer objects for tasks such as rendering to texture, then you don't need to concern yourself with framebuffer creation at all - let the GLKView do it automatically.
What you should be doing in your setupGL method (or anywhere you like in the setup) is creating your vertex array object(s) that remember the openGL state required to perform a draw. Then, in the glkView:drawInRect: method you should:
Clear using glClear().
Enable your program.
Bind the vertex array object (or, if you didn't use a VAO, enable the appropriate vertex attrib pointers).
Draw your data using glDrawArrays() or glDrawElements().
The GLKView automatically sets its context as current, and binds its framebuffer object before each draw cycle.
Perhaps try to think of GLKView more like a regular UIView. It handles most of the openGL code behind the scenes for you, leaving you to simply tell it what it needs to draw. It has its drawRect: code just like a regular UIView - with a regular UIView in drawRect: you just tell it what it should draw, for example using Core Graphics functions - you don't then tell it to present itself.
The GLKViewController is then best thought of as handling the mechanics of the rendering loop behind the scenes. You don't need to implement the timers, or even worry about pausing the animation on your application entering the background. You just need to override the update or glkViewControllerUpdate: method (depending on whether you're subclassing or delegating) to update the state of the openGL objects or view matrix.
I made a post about the way to set up a basic project template using GLKit. You can find it here:
Steve Zissou's Programming Blog
I haven't used GLKit yet, but it seems that you do not present your framebuffer after drawing into it.
In an application using OpenGL ES 2 under iOS but without GLKit, I call the following code at the end of the rendering loop:
if (context) {
    [EAGLContext setCurrentContext:context];
    glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
    [context presentRenderbuffer:GL_RENDERBUFFER];
}
As I said I haven't used GLKit yet so I hope this might be useful.
I think you forgot to call
[self.effect prepareToDraw];
just before
glDrawArrays(GL_POINTS, 0, 4);
As GLKit mimics the OpenGL ES 1.1 rendering pipeline, you do not need to include routines to define shaders. GLKit does this for you if you wish to use a basic pipeline like OpenGL ES 1.1's.
