How to stop iOS EAGLContext's memory from growing? - ios

In this super simple app, which uses a GLKViewController to display a red screen, the memory keeps growing.
ViewController.h:
#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>
@interface ViewController : GLKViewController
@end
ViewController.m:
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController {
EAGLContext* context;
}
- (void)viewDidLoad {
[super viewDidLoad];
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
GLKView* view = (GLKView*)self.view;
view.context = context;
view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
[EAGLContext setCurrentContext:context];
self.preferredFramesPerSecond = 60;
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
}
@end
For each frame, 9 * 64 bytes are allocated and never freed, as seen in this image (note that the transient count is 0 for IOAccelResource):
This is what the allocation list and stack trace look like:
The memory "leak" is small, but it still managed to use up 6.5 MB after running for less than 3 minutes.
Is there a bug in EAGLContext, or is there something I can do about this? I have noticed (I'm new to iOS development) that other parts of Apple's API use zone allocators, and the memory usage keeps growing when it really should have reached some kind of steady state. That makes me think I have missed something (I have tried sending it a low-memory warning, but nothing happens).

So from your comment you say that you are using some other code written in C++, and basically all you need is the connection to the actual buffer to be presented. I will assume none of that "C++" code you mentioned is inflating your memory, and that you have actually tried creating a new application containing only the code you posted, just to be 100% sure...
Migrating off GLKit is then very easy. Simply subclass the UIView that will be used for drawing and add a few methods:
This needs to be overridden so you can get the render buffer from the view
+ (Class)layerClass {
return [CAEAGLLayer class];
}
The context you already create is correct and should be reused.
You need to set up the buffers manually. I use a custom class, but I believe you will be able to see what is going on here and can remove the code you don't need.
- (void)loadBuffersWithView:(UIView *)view
{
self.view = view;
CAEAGLLayer *layer = (CAEAGLLayer *)view.layer;
layer.opaque = YES;
if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)]) {
layer.contentsScale = [UIScreen mainScreen].scale;
}
GLuint frameBuffer; // will hold the generated ID
glGenFramebuffers(1, &frameBuffer); // generate only 1
self.frameBuffer = frameBuffer; // assign to store as the local variable
[self bindFrameBuffer];
GLuint renderBuffer; // will hold the generated ID
glGenRenderbuffers(1, &renderBuffer); // generate only 1
self.renderBuffer = renderBuffer; // assign to store as the local variable
[self bindRenderBuffer];
[self.context.glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, self.renderBuffer);
GLint width, height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
self.bufferSize = CGSizeMake(width, height);
glViewport(0, 0, width, height);
[GlobalTools framebufferStatusValid];
[GlobalTools checkError];
}
On present you need to bind the appropriate render buffer and call presentRenderbuffer: on your context.
The rest of your code should stay the same. But remember to also tear down OpenGL when you no longer need it, and explicitly delete all the buffers that were generated on the GPU.
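A minimal sketch of what that present and teardown could look like, assuming the same context/frameBuffer/renderBuffer properties used in the loader above (method names are illustrative, not part of the original code):
// Present: bind the color renderbuffer and hand it to the context.
- (void)presentBuffer {
    glBindRenderbuffer(GL_RENDERBUFFER, self.renderBuffer);
    [self.context.glContext presentRenderbuffer:GL_RENDERBUFFER];
}
// Teardown: delete the GPU-side buffers and drop the current context.
- (void)tearDownBuffers {
    GLuint frameBuffer = self.frameBuffer;
    GLuint renderBuffer = self.renderBuffer;
    glDeleteFramebuffers(1, &frameBuffer);
    glDeleteRenderbuffers(1, &renderBuffer);
    self.frameBuffer = 0;
    self.renderBuffer = 0;
    [EAGLContext setCurrentContext:nil];
}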

I printed out the amount of resident memory to the screen and noticed that the memory didn't increase when the device was disconnected from the debugger.
It turns out that using the "Zombies" instrument was causing this... I swear that I saw the memory increase in the memory view of the debugger too, but now I can't repeat it (I haven't changed anything in the scheme).
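For reference, this is one common way to read the resident memory size on-device; the original post doesn't show the exact code used, so treat this as an illustrative sketch:
#import <mach/mach.h>
// Returns the app's resident memory in bytes, or 0 on failure.
static uint64_t residentMemoryBytes(void) {
    struct mach_task_basic_info info;
    mach_msg_type_number_t count = MACH_TASK_BASIC_INFO_COUNT;
    kern_return_t kr = task_info(mach_task_self(), MACH_TASK_BASIC_INFO,
                                 (task_info_t)&info, &count);
    return kr == KERN_SUCCESS ? info.resident_size : 0;
}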

Related

GLKViewController and GLKView - Rendering nothing the second time created

I have a MainMenuViewController and a GameViewController which is a GLKViewController.
The first time I go from the main menu to the GameViewController everything is rendered fine. If I go back to the main menu, the GameViewController and its view get dealloced (I logged it).
When now going back to the game, I see a blank screen, nothing gets rendered OpenGL-wise. The overlay test menu with UIKit is still there.
This is how I tear down OpenGL in the GameViewController's dealloc method; the last five lines were added in an attempt to make it work, so it doesn't work with or without them.
- (void)tearDownGL {
[EAGLContext setCurrentContext:self.context];
glDeleteBuffers(1, &_vertexBuffer);
glDeleteVertexArraysOES(1, &_vertexArray);
self.effect = nil;
_program = nil;
glBindVertexArrayOES(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
[EAGLContext setCurrentContext: nil];
}
I think the problem is that you are not using a sharegroup, a place where OpenGL can share textures and shaders between contexts.
Here is code that will create a sharegroup used by all instances of your GLKViewController subclass. If you have multiple subclasses, you will have to do something to make the shareGroup global, if that's appropriate.
- (void)viewDidLoad
{
[super viewDidLoad];
// Create an OpenGL ES context and assign it to the view loaded from storyboard
GLKView *view = (GLKView *)self.view;
// GLES 3 is not supported on iPhone 4s, 5. It may 'just work' to try 3, but stick with 2, as we don't use the new features, me thinks.
//view.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES3];
//if (view.context == nil)
static EAGLSharegroup* shareGroup = nil;
view.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2 sharegroup:shareGroup];
if (shareGroup == nil)
shareGroup = view.context.sharegroup;
...

xcode ios opencv memory leak. App is crashing

I am currently working on iOS with OpenCV.
I am trying to convert a cv::Mat to a UIImage.
But the app is crashing after a few seconds!
(Terminated due to Memory Error)
This is my code that I am currently using:
using namespace cv;
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
{
CvVideoCamera* videoCamera;
CADisplayLink*run_loop;
UIImage*image2;
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
videoCamera = [[CvVideoCamera alloc] initWithParentView:_liveview];
videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetiFrame1280x720;
videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeLeft;
videoCamera.defaultFPS = 30;
videoCamera.delegate = self;
[videoCamera start];
run_loop = [CADisplayLink displayLinkWithTarget:self selector:@selector(update)];
[run_loop setFrameInterval:2];
[run_loop addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void)update{
_smallliveview.image = image2;
}
- (UIImage *)UIImageFromMat:(cv::Mat)image
{
cvtColor(image, image, CV_BGR2RGB);
NSData *data = [NSData dataWithBytes:image.data length:image.elemSize()*image.total()];
CGColorSpaceRef colorSpace;
if (image.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((CFDataRef)data);//CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(image.cols, //width
image.rows, //height
8, //bits per component
8 * image.elemSize(), //bits per pixel
image.step.p[0], //bytesPerRow
colorSpace, //colorspace
kCGImageAlphaNone|kCGBitmapByteOrderDefault,// bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
//[self.imgView setImage:finalImage];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
#pragma mark - Protocol CvVideoCameraDelegate
#ifdef __cplusplus
- (void)processImage:(Mat&)image;
{
image2 = [self UIImageFromMat:image];
}
#endif
@end
What should I do?
It would be very nice if somebody could help me! (;
Greetings, David
Throw away every bit of what you wrote to create a UIImage and use the MatToUIImage() function instead. Simply pass the mat to the function, and you have your image.
Although you didn't ask: you shouldn't use a run loop or display link here. Instead, time and initiate related work from the processImage method that OpenCV calls.
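A minimal sketch of how that could look, with the delegate's processImage: driving the UI update directly; MatToUIImage() comes from OpenCV's iOS helpers (opencv2/imgcodecs/ios.h in OpenCV 3), the dispatch to the main queue is my addition, and depending on the camera's pixel format you may still need a cvtColor call to fix the channel order:
#import <opencv2/imgcodecs/ios.h>
- (void)processImage:(cv::Mat &)image
{
    // Convert directly with OpenCV's helper instead of the hand-rolled CGImage code.
    UIImage *converted = MatToUIImage(image);
    // UIKit must only be touched on the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        self->_smallliveview.image = converted;
    });
}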
Also, make sure you're using the latest version. This has nothing to do with your problem, but it's good practice.
To import OpenCV 3 into your Xcode 8 project:
Install 'OpenCV2' with CocoaPods (it says '2', but it's still version 3). Don't install the 'devel' build.
Open your project via the workspace CocoaPods created for you (not the project file you created), and give every implementation file that uses OpenCV the .mm extension (instead of .m). You'll get strange error messages if you don't.

OpenGL ES 2.0 Invalid drawable error

I'm trying to set up openGL for an iOS app I am building and I am receiving the following error:
-[EAGLContext renderbufferStorage:fromDrawable:]: invalid drawable
Here is a part of my code, located within a UIViewController class:
@implementation MyViewController
{
CAEAGLLayer *_eaglLayer;
EAGLContext *_context;
GLuint _colorRenderBuffer;
}
+(Class)layerClass
{
return [CAEAGLLayer class];
}
-(void)setup
{
_eaglLayer = (CAEAGLLayer *)self.view.layer;
_eaglLayer.opaque = YES;
EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES2;
_context = [[EAGLContext alloc] initWithAPI:api];
glGenRenderbuffers(1, &_colorRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];
}
How do I fix this error?
Try making the context current after creating it. I believe that does not happen automatically despite all the "magic" that some of these helper classes do for you.
_context = [[EAGLContext alloc] initWithAPI:api];
[EAGLContext setCurrentContext: _context];
The first line is what you already have, the second line is the one you add.

how to set up CAEAGLLayer subclass with openGL context: Current draw framebuffer is invalid

I'm trying to set up a CAEAGLLayer subclass with a gl context. That is, instead of creating a UIView subclass which returns a CAEAGLLayer and binding a gl context to this layer from within the UIView subclass, I'm directly subclassing the layer and trying to setup the context in the layer's init, like so:
- (id)init
{
self = [super init];
if (self) {
self.opaque = YES;
_glContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
NSAssert([EAGLContext setCurrentContext:_glContext], @"");
glGenRenderbuffers(1, &_colorRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
[_glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:self];
glGenFramebuffers(1, &_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorRenderBuffer);
/// . . .
Up to that point everything seems fine. However, I then try to create a shader program with a "pass-through" vertex/fragment shader pair, and while linking the program returns no errors, validation fails with: "Current draw framebuffer is invalid."
The code that links and validates the shader program (after attaching the shaders) looks like so, just in case:
- (BOOL)linkAndValidateProgram
{
GLint status;
glLinkProgram(_shaderProgram);
#ifdef DEBUG
GLint infoLogLength;
GLchar *infoLog = NULL;
glGetProgramiv(_shaderProgram, GL_INFO_LOG_LENGTH, &infoLogLength);
if (infoLogLength > 0) {
infoLog = (GLchar *)malloc(infoLogLength);
glGetProgramInfoLog(_shaderProgram, infoLogLength, &infoLogLength, infoLog);
NSLog(#"Program link log:\n%s", infoLog);
free(infoLog);
}
#endif
glGetProgramiv(_shaderProgram, GL_LINK_STATUS, &status);
if (!status) {
return NO;
}
glValidateProgram(_shaderProgram);
#ifdef DEBUG
glGetProgramiv(_shaderProgram, GL_INFO_LOG_LENGTH, &infoLogLength);
if (infoLogLength > 0) {
infoLog = (GLchar *)malloc(infoLogLength);
glGetProgramInfoLog(_shaderProgram, infoLogLength, &infoLogLength, infoLog);
NSLog(#"Program validation log:\n%s", infoLog);
free(infoLog);
}
#endif
glGetProgramiv(_shaderProgram, GL_VALIDATE_STATUS, &status);
if (!status) {
return NO;
}
glUseProgram(_shaderProgram);
return YES;
}
I'm wondering if there might be some extra setup at some point throughout the lifecycle of CAEAGLLayer that I might be unaware of and might be skipping by trying to set up GL in init?
The problem was that the layer has no dimensions at that point in init, which in turn means that setting the render buffer storage from the layer implies a buffer of size 0.
UPDATE: My current best thinking is that, instead of imposing a size in init (which worked fine for the purposes of testing but is kind of hacky), I should just reset the buffer storage whenever the layer changes size. So I'm overriding -setBounds: like so:
- (void)setBounds:(CGRect)bounds
{
[super setBounds:bounds];
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:self];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &someVariableToHoldWidthIfYouNeedIt);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &someVariableToHoldHeightIfYouNeedIt);
}
As far as I know, you have to override the layerClass method in the view, like this:
+ (Class)layerClass
{
return [MYCEAGLLayer class];
}
Also you have to set the drawableProperties on the MYCEAGLLayer.
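A minimal sketch of what those drawable properties could look like, for example in the layer's init (the exact values depend on your needs; non-retained backing and an RGBA8 color format are just the common defaults):
// Configure the layer's drawable before calling renderbufferStorage:fromDrawable:.
self.drawableProperties = @{
    kEAGLDrawablePropertyRetainedBacking : @NO,
    kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
};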

Drawing in another thread with CGImage / CGLayer

I have a custom UICollectionViewCell subclass where I draw with clipping, stroking and transparency. It works pretty well on the Simulator and iPhone 5, but on older devices there are noticeable performance problems.
So I want to move the time-consuming drawing to a background thread. Since the -drawRect: method is always called on the main thread, I ended up saving the drawn context to a CGImage (the original question contained code using CGLayer, but that is sort of obsolete, as Matt Long pointed out).
Here is my implementation of drawRect method inside this class:
-(void)drawRect:(CGRect)rect {
CGContextRef ctx = UIGraphicsGetCurrentContext();
if (self.renderedSymbol != nil) {
CGContextDrawImage(ctx, self.bounds, self.renderedSymbol);
}
}
Rendering method that defines this renderedSymbol property:
- (void) renderCurrentSymbol {
[self.queue addOperationWithBlock:^{
// creating custom context to draw there (contexts are not thread safe)
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(nil, self.bounds.size.width, self.bounds.size.height, 8, self.bounds.size.width * (CGColorSpaceGetNumberOfComponents(space) + 1), space, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);
// custom drawing goes here using 'ctx' context
// then saving context as CGImageRef to property that will be used in drawRect
self.renderedSymbol = CGBitmapContextCreateImage(ctx);
// asking main thread to update UI
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
[self setNeedsDisplayInRect:self.bounds];
}];
CGContextRelease(ctx);
}];
}
This setup works perfectly on the main thread, but when I wrap it with NSOperationQueue or GCD, I get lots of different "invalid context 0x0" errors. The app doesn't crash, but drawing doesn't happen. I suppose there is a problem with releasing the custom-created CGContextRef, but I don't know what to do about it.
Here are my property declarations (I tried using atomic versions, but that didn't help):
#property (nonatomic) CGImageRef renderedSymbol;
#property (nonatomic, strong) NSOperationQueue *queue;
#property (nonatomic, strong) NSString *symbol; // used in custom drawing
Custom setters / getters for properties:
-(NSOperationQueue *)queue {
if (!_queue) {
_queue = [[NSOperationQueue alloc] init];
_queue.name = @"Background Rendering";
}
return _queue;
}
-(void)setSymbol:(NSString *)symbol {
_symbol = symbol;
self.renderedSymbol = nil;
[self setNeedsDisplayInRect:self.bounds];
}
-(CGImageRef) renderedSymbol {
if (_renderedSymbol == nil) {
[self renderCurrentSymbol];
}
return _renderedSymbol;
}
What can I do?
Did you notice the document on CGLayer you're referencing hasn't been updated since 2006? The assumption you've made that CGLayer is the right solution is incorrect. Apple has all but abandoned this technology and you probably should too: http://iosptl.com/posts/cglayer-no-longer-recommended/ Use Core Animation.
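As an illustration of that suggestion (not the asker's final solution), one common Core Animation-friendly pattern is to render into a UIImage on a background queue and hand the finished bitmap to the layer on the main thread; the method name and the decision to replace -drawRect: entirely are illustrative assumptions:
- (void)renderSymbolInBackground {
    CGRect bounds = self.bounds;                    // capture geometry on the main thread
    CGFloat scale = [UIScreen mainScreen].scale;
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        UIGraphicsBeginImageContextWithOptions(bounds.size, NO, scale);
        // ... clipping / stroking / transparency drawing goes here ...
        UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        dispatch_async(dispatch_get_main_queue(), ^{
            // Let Core Animation display the finished bitmap instead of drawing in -drawRect:.
            self.layer.contents = (__bridge id)rendered.CGImage;
        });
    });
}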
Issue solved by using the amazing third-party library by Mind Snacks: MSCachedAsyncViewDrawing.