Xcode iOS OpenCV memory leak: app is crashing

I am currently working on iOS with OpenCV, trying to convert a cv::Mat to a UIImage.
But the app crashes after a few seconds
(terminated due to memory error).
This is the code I am currently using:
using namespace cv;
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController
{
CvVideoCamera* videoCamera;
CADisplayLink *run_loop;
UIImage *image2;
}
- (void)viewDidLoad
{
[super viewDidLoad];
// Do any additional setup after loading the view, typically from a nib.
videoCamera = [[CvVideoCamera alloc] initWithParentView:_liveview];
videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;
videoCamera.defaultAVCaptureSessionPreset = AVCaptureSessionPresetiFrame1280x720;
videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationLandscapeLeft;
videoCamera.defaultFPS = 30;
videoCamera.delegate = self;
[videoCamera start];
run_loop = [CADisplayLink displayLinkWithTarget:self selector:@selector(update)];
[run_loop setFrameInterval:2];
[run_loop addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSRunLoopCommonModes];
}
- (void)didReceiveMemoryWarning
{
[super didReceiveMemoryWarning];
// Dispose of any resources that can be recreated.
}
- (void)update{
_smallliveview.image = image2;
}
- (UIImage *)UIImageFromMat:(cv::Mat)image
{
cvtColor(image, image, CV_BGR2RGB);
NSData *data = [NSData dataWithBytes:image.data length:image.elemSize()*image.total()];
CGColorSpaceRef colorSpace;
if (image.elemSize() == 1) {
colorSpace = CGColorSpaceCreateDeviceGray();
} else {
colorSpace = CGColorSpaceCreateDeviceRGB();
}
CGDataProviderRef provider = CGDataProviderCreateWithCFData((__bridge CFDataRef)data);
// Creating CGImage from cv::Mat
CGImageRef imageRef = CGImageCreate(image.cols, //width
image.rows, //height
8, //bits per component
8 * image.elemSize(), //bits per pixel
image.step.p[0], //bytesPerRow
colorSpace, //colorspace
kCGImageAlphaNone|kCGBitmapByteOrderDefault,// bitmap info
provider, //CGDataProviderRef
NULL, //decode
false, //should interpolate
kCGRenderingIntentDefault //intent
);
// Getting UIImage from CGImage
UIImage *finalImage = [UIImage imageWithCGImage:imageRef];
//[self.imgView setImage:finalImage];
CGImageRelease(imageRef);
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpace);
return finalImage;
}
#pragma mark - Protocol CvVideoCameraDelegate
#ifdef __cplusplus
- (void)processImage:(Mat &)image
{
image2 = [self UIImageFromMat:image];
}
#endif
@end
What should I do?
It would be very nice if somebody could help me!
Greetings, David

Throw away every bit of what you wrote to create a UIImage and use the MatToUIImage() function instead. Simply pass the mat to the function, and you have your image.
Although you didn't ask: you shouldn't use a run loop or display link here. Drive any related work from the processImage: delegate method that OpenCV already calls for every frame; a minimal sketch follows.
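For illustration, a minimal sketch of that approach, assuming the _smallliveview outlet from your code and that OpenCV's iOS helpers are available (MatToUIImage lives in opencv2/imgcodecs/ios.h on 3.x, opencv2/highgui/ios.h on 2.x). CvVideoCamera normally hands processImage: a BGRA frame, so the sketch swaps it to RGBA before display and wraps the per-frame work in @autoreleasepool so temporaries don't pile up:
#import <opencv2/opencv.hpp>
#import <opencv2/imgcodecs/ios.h>   // OpenCV 3.x; use <opencv2/highgui/ios.h> on 2.x

- (void)processImage:(cv::Mat &)image
{
    @autoreleasepool {
        // CvVideoCamera normally delivers BGRA; swap to RGBA so colors display correctly.
        cv::Mat rgba;
        cv::cvtColor(image, rgba, cv::COLOR_BGRA2RGBA);
        UIImage *frame = MatToUIImage(rgba);   // MatToUIImage copies the pixel data
        dispatch_async(dispatch_get_main_queue(), ^{
            self->_smallliveview.image = frame;   // touch UIKit only on the main thread
        });
    }
}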
Also, make sure you're using the latest version. This has nothing to do with your problem, but it's good practice.
To import OpenCV 3 into your Xcode 8 project:
Install 'OpenCV2' with CocoaPods (the pod name says '2', but it's actually version 3). Don't install the 'devel' build.
Open your project via the workspace CocoaPods created for you (not the project file you created), and give every implementation file that uses OpenCV the extension .mm instead of .m. You'll get strange error messages if you don't.

Related

UIImage initWithCGImage leads to memory issue in Obj-C

I am developing an application that grabs images from the iPhone rear camera; these images are then processed asynchronously.
I am using AVFoundation in Obj-C. My problem is that my app is crashing because of a memory issue when capturing images.
Here is the code I use in the captureOutput callback:
- (void)captureOutput:(AVCaptureOutput *)output didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
connection.videoOrientation = AVCaptureVideoOrientationLandscapeLeft;
CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CIImage* ciimage = [[CIImage alloc] initWithCVPixelBuffer:imageBuffer];
CIContext* context = [CIContext contextWithOptions:nil];
CGImage* cgImage = [context createCGImage:ciimage fromRect:[ciimage extent]];
@synchronized(self) {
UIImage* image = [[UIImage alloc] initWithCGImage:cgImage];
self.uiimageBuffer = image;
}
CGImageRelease(cgImage);
}
As I need to asynchronously process the grabbed image elsewhere in the application, I introduced a buffer called uiimageBuffer. This buffer is updated every time captureOutput is called, as shown below:
UIImage* image = [[UIImage alloc] initWithCGImage:cgImage];
self.uiimageBuffer = image;
But allocating the UIImage leads to a memory issue very quickly (within a few seconds).
So my question is: how can I update my buffer without allocating a new UIImage on every call of captureOutput?
PS: the same piece of code written in Swift 4 doesn't lead to a memory issue.
Thank you
How about @autoreleasepool? This helped me several times in both captureOutput and requestMediaDataWhenReady.
https://developer.apple.com/library/archive/documentation/Cocoa/Conceptual/MemoryMgmt/Articles/mmAutoreleasePools.html
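For illustration, here is the captureOutput: from the question with the per-frame work wrapped in @autoreleasepool, so the CIImage/UIImage temporaries are drained on every callback instead of piling up until the capture queue's pool drains. Reusing a single CIContext instead of creating one per frame is also worth trying:
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        connection.videoOrientation = AVCaptureVideoOrientationLandscapeLeft;
        CVPixelBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CIImage *ciimage = [[CIImage alloc] initWithCVPixelBuffer:imageBuffer];
        CIContext *context = [CIContext contextWithOptions:nil]; // consider caching this in an ivar
        CGImageRef cgImage = [context createCGImage:ciimage fromRect:[ciimage extent]];
        @synchronized(self) {
            self.uiimageBuffer = [[UIImage alloc] initWithCGImage:cgImage];
        }
        CGImageRelease(cgImage);
    }
}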

How do I handle GPUImage image buffers so that they're usable with things like Tokbox?

I'm using OpenTok and replaced their Publisher with my own subclassed version which incorporates GPUImage. My goal is to add filters.
The application builds and runs, but crashes here:
func willOutputSampleBuffer(sampleBuffer: CMSampleBuffer!) {
let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
CVPixelBufferLockBaseAddress(imageBuffer!, 0)
videoFrame?.clearPlanes()
for var i = 0 ; i < CVPixelBufferGetPlaneCount(imageBuffer!); i++ {
print(i)
videoFrame?.planes.addPointer(CVPixelBufferGetBaseAddressOfPlane(imageBuffer!, i))
}
videoFrame?.orientation = OTVideoOrientation.Left
videoCaptureConsumer.consumeFrame(videoFrame) //comment this out to stop app from crashing. Otherwise, it crashes here.
CVPixelBufferUnlockBaseAddress(imageBuffer!, 0)
}
If I comment that line out, I'm able to run the app without crashing. In fact, I see the filter being applied correctly, but it's flickering. Nothing gets published to OpenTok.
My entire codebase can be downloaded; the class in question is in its own file. It's actually pretty easy to run, just do pod install before running it.
Upon inspection, it could be that videoCaptureConsumer (see the OTVideoCapture protocol reference) is not initialized.
I have no idea what my code means. I translated it directly from the Objective-C file in TokBox's sample project.
I analyzed both your Swift project and the Objective-C project.
I found that neither of them works.
With this post I want to give a first update and show a genuinely working demo of how to use GPUImage filters with OpenTok.
Reasons why your GPUImage filter implementation is not working with OpenTok
#1 Multiple target specifications
let sepia = GPUImageSepiaFilter()
videoCamera?.addTarget(sepia)
sepia.addTarget(self.view)
videoCamera?.addTarget(self.view) // <-- This is wrong and produces the flickering
videoCamera?.startCameraCapture()
Two sources try to render into the same view, which makes things flicker. The corrected chain is sketched below.
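A sketch of the corrected chain, written in Objective-C to match the rest of this answer (the snippet above is the question's Swift): exactly one source renders into the preview view.
GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:sepia];
[sepia addTarget:self.view];          // only the filter output renders into the view
// Do NOT also add videoCamera as a target of self.view.
[videoCamera startCameraCapture];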
Part one is solved. Next up: why is nothing published to OpenTok? To find the reason, I decided to start with the "working" Objective-C version.
#2 The original Objective-C codebase
The original Objective-C version doesn't have the expected functionality. Publishing the GPUImageVideoCamera feed to an OpenTok subscriber works, but there is no filtering involved, and that's your core requirement.
The point is that adding filters is not as trivial as one would expect, because of differing image formats and different mechanisms for asynchronous programming.
So reason #2 why your code isn't working as expected:
The reference codebase for your porting work is not correct; it doesn't allow putting GPU filters into the publish/subscribe pipeline.
A working Objective-C implementation
I modified the Objective-C version, and it now runs smoothly with the filter applied.
Final steps
This is the full code for the custom TokBox publisher. It's basically the original code (TokBoxGPUImagePublisher) from https://github.com/JayTokBox/TokBoxGPUImage/blob/master/TokBoxGPUImage/ViewController.m with the following notable modifications:
OTVideoFrame gets instantiated with a new format
...
format = [[OTVideoFormat alloc] init];
format.pixelFormat = OTPixelFormatARGB;
format.bytesPerRow = [@[@(imageWidth * 4)] mutableCopy];
format.imageWidth = imageWidth;
format.imageHeight = imageHeight;
videoFrame = [[OTVideoFrame alloc] initWithFormat: format];
...
Replace the willOutputSampleBuffer callback mechanism
This callback only triggers when sample buffers coming directly from the GPUImageVideoCamera are ready, NOT buffers coming from your custom filters. GPUImageFilters don't provide such a callback/delegate mechanism. That's why we put a GPUImageRawDataOutput in between and ask it for ready images. This pipeline is implemented in the initCapture method and looks like this:
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
sepiaImageFilter = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:sepiaImageFilter];
// Create rawOut
CGSize size = CGSizeMake(imageWidth, imageHeight);
rawOut = [[GPUImageRawDataOutput alloc] initWithImageSize:size resultsInBGRAFormat:YES];
// Filter into rawOut
[sepiaImageFilter addTarget:rawOut];
// Handle filtered images
// We need a weak reference here to avoid a strong reference cycle.
__weak GPUImageRawDataOutput* weakRawOut = self->rawOut;
__weak OTVideoFrame* weakVideoFrame = self->videoFrame;
__weak id<OTVideoCaptureConsumer> weakVideoCaptureConsumer = self.videoCaptureConsumer;
//
[rawOut setNewFrameAvailableBlock:^{
[weakRawOut lockFramebufferForReading];
// GLubyte is an uint8_t
GLubyte* outputBytes = [weakRawOut rawBytesForImage];
// About the video formats used by OTVideoFrame
// --------------------------------------------
// Both YUV video formats (i420, NV12) have the (for us) following important properties:
//
// - Two planes
// - 8 bit Y plane
// - 8 bit 2x2 subsampled U and V planes (1/4 the pixels of the Y plane)
// --> 12 bits per pixel
//
// Further reading: www.fourcc.org/yuv.php
//
[weakVideoFrame clearPlanes];
[weakVideoFrame.planes addPointer: outputBytes];
[weakVideoCaptureConsumer consumeFrame: weakVideoFrame];
[weakRawOut unlockFramebufferAfterReading];
}];
[videoCamera addTarget:self.view];
[videoCamera startCameraCapture];
Whole code (The really important thing is initCapture)
//
// TokBoxGPUImagePublisher.m
// TokBoxGPUImage
//
// Created by Jaideep Shah on 9/5/14.
// Copyright (c) 2014 Jaideep Shah. All rights reserved.
//
#import "TokBoxGPUImagePublisher.h"
#import "GPUImage.h"
static size_t imageHeight = 480;
static size_t imageWidth = 640;
@interface TokBoxGPUImagePublisher() <GPUImageVideoCameraDelegate, OTVideoCapture> {
GPUImageVideoCamera *videoCamera;
GPUImageSepiaFilter *sepiaImageFilter;
OTVideoFrame* videoFrame;
GPUImageRawDataOutput* rawOut;
OTVideoFormat* format;
}
@end
@implementation TokBoxGPUImagePublisher
@synthesize videoCaptureConsumer; // In OTVideoCapture protocol
- (id)initWithDelegate:(id<OTPublisherDelegate>)delegate name:(NSString*)name
{
self = [super initWithDelegate:delegate name:name];
if (self)
{
self.view = [[GPUImageView alloc] initWithFrame:CGRectMake(0, 0, 1, 1)];
[self setVideoCapture:self];
format = [[OTVideoFormat alloc] init];
format.pixelFormat = OTPixelFormatARGB;
format.bytesPerRow = [@[@(imageWidth * 4)] mutableCopy];
format.imageWidth = imageWidth;
format.imageHeight = imageHeight;
videoFrame = [[OTVideoFrame alloc] initWithFormat: format];
}
return self;
}
#pragma mark GPUImageVideoCameraDelegate
- (void)willOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
[videoFrame clearPlanes];
for (int i = 0; i < CVPixelBufferGetPlaneCount(imageBuffer); i++) {
[videoFrame.planes addPointer:CVPixelBufferGetBaseAddressOfPlane(imageBuffer, i)];
}
videoFrame.orientation = OTVideoOrientationLeft;
[self.videoCaptureConsumer consumeFrame:videoFrame];
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
#pragma mark OTVideoCapture
- (void) initCapture {
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480
cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
sepiaImageFilter = [[GPUImageSepiaFilter alloc] init];
[videoCamera addTarget:sepiaImageFilter];
// Create rawOut
CGSize size = CGSizeMake(imageWidth, imageHeight);
rawOut = [[GPUImageRawDataOutput alloc] initWithImageSize:size resultsInBGRAFormat:YES];
// Filter into rawOut
[sepiaImageFilter addTarget:rawOut];
// Handle filtered images
// We need a weak reference here to avoid a strong reference cycle.
__weak GPUImageRawDataOutput* weakRawOut = self->rawOut;
__weak OTVideoFrame* weakVideoFrame = self->videoFrame;
__weak id<OTVideoCaptureConsumer> weakVideoCaptureConsumer = self.videoCaptureConsumer;
//
[rawOut setNewFrameAvailableBlock:^{
[weakRawOut lockFramebufferForReading];
// GLubyte is an uint8_t
GLubyte* outputBytes = [weakRawOut rawBytesForImage];
// About the video formats used by OTVideoFrame
// --------------------------------------------
// Both YUV video formats (i420, NV12) have the (for us) following important properties:
//
// - Two planes
// - 8 bit Y plane
// - 8 bit 2x2 subsampled U and V planes (1/4 the pixels of the Y plane)
// --> 12 bits per pixel
//
// Further reading: www.fourcc.org/yuv.php
//
[weakVideoFrame clearPlanes];
[weakVideoFrame.planes addPointer: outputBytes];
[weakVideoCaptureConsumer consumeFrame: weakVideoFrame];
[weakRawOut unlockFramebufferAfterReading];
}];
[videoCamera addTarget:self.view];
[videoCamera startCameraCapture];
}
- (void)releaseCapture
{
videoCamera.delegate = nil;
videoCamera = nil;
}
- (int32_t) startCapture {
return 0;
}
- (int32_t) stopCapture {
return 0;
}
- (BOOL) isCaptureStarted {
return YES;
}
- (int32_t)captureSettings:(OTVideoFormat*)videoFormat {
videoFormat.pixelFormat = OTPixelFormatNV12;
videoFormat.imageWidth = imageWidth;
videoFormat.imageHeight = imageHeight;
return 0;
}
@end

How to stop iOS EAGLContext's memory from growing?

In this super simple app, which uses a GLKViewController to display a red screen, the memory keeps growing.
ViewController.h:
#import <UIKit/UIKit.h>
#import <GLKit/GLKit.h>
@interface ViewController : GLKViewController
@end
ViewController.m:
#import "ViewController.h"
@interface ViewController ()
@end
@implementation ViewController {
EAGLContext* context;
}
- (void)viewDidLoad {
[super viewDidLoad];
context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
GLKView* view = (GLKView*)self.view;
view.context = context;
view.drawableColorFormat = GLKViewDrawableColorFormatRGBA8888;
[EAGLContext setCurrentContext:context];
self.preferredFramesPerSecond = 60;
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect {
glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
}
- (void)didReceiveMemoryWarning {
[super didReceiveMemoryWarning];
}
@end
For each frame, 9*64 bytes are allocated and never freed, as seen in Instruments (note that the transient count is 0 for IOAccelResource).
The memory "leak" is small, but it still managed to use up 6.5 MB despite running for less than 3 minutes.
Is there a bug in EAGLContext, or is there something I can do about this? I have noticed (I'm new to iOS development) that other parts of Apple's APIs use zone allocators, and the memory usage keeps growing when it really should be in some kind of steady state. That makes me think I have missed something (I have tried sending it a low-memory warning, but nothing happens).
From your comment, you say that you are using some other code written in C++, and basically all you need is the connection to the actual buffer to be presented. I will assume none of that C++ code is inflating your memory, and that you have actually tried creating a new application containing only the code you posted, just to be 100% sure...
Migrating off GLKit is then very easy for you. Simply subclass the UIView that will be used for drawing and add a few methods:
This needs to be overridden so you can get the render buffer from the view
+ (Class)layerClass {
return [CAEAGLLayer class];
}
Your context is already created correctly and just needs to be used.
You need to set up the buffers manually. I use a custom class, but I believe you will be able to see what is going on here and can remove the code you don't need.
- (void)loadBuffersWithView:(UIView *)view
{
self.view = view;
CAEAGLLayer *layer = (CAEAGLLayer *)view.layer;
layer.opaque = YES;
if ([[UIScreen mainScreen] respondsToSelector:@selector(displayLinkWithTarget:selector:)]) {
layer.contentsScale = [UIScreen mainScreen].scale;
}
GLuint frameBuffer; // will hold the generated ID
glGenFramebuffers(1, &frameBuffer); // generate only 1
self.frameBuffer = frameBuffer; // assign to store as the local variable
[self bindFrameBuffer];
GLuint renderBuffer; // will hold the generated ID
glGenRenderbuffers(1, &renderBuffer); // generate only 1
self.renderBuffer = renderBuffer; // assign to store as the local variable
[self bindRenderBuffer];
[self.context.glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:layer];
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, self.renderBuffer);
GLint width, height;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);
self.bufferSize = CGSizeMake(width, height);
glViewport(0, 0, width, height);
[GlobalTools framebufferStatusValid];
[GlobalTools checkError];
}
When presenting, you need to call presentRenderbuffer: on your context with the appropriate render buffer bound.
The rest of your code should be the same, but remember to also tear down OpenGL when you no longer need it, and explicitly delete all the buffers that were generated on the GPU.
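A sketch of that present/teardown side, assuming the same context, frameBuffer and renderBuffer properties used in loadBuffersWithView: above:
- (void)present
{
    glBindRenderbuffer(GL_RENDERBUFFER, self.renderBuffer);
    [self.context.glContext presentRenderbuffer:GL_RENDERBUFFER];
}

- (void)tearDownBuffers
{
    [EAGLContext setCurrentContext:self.context.glContext];
    GLuint frameBuffer = self.frameBuffer;
    GLuint renderBuffer = self.renderBuffer;
    glDeleteFramebuffers(1, &frameBuffer);    // explicitly free the GPU-side buffers
    glDeleteRenderbuffers(1, &renderBuffer);
    self.frameBuffer = 0;
    self.renderBuffer = 0;
    [EAGLContext setCurrentContext:nil];
}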
I printed the amount of resident memory to the screen and noticed that the memory didn't increase when the device was disconnected from the debugger.
It turns out that using the Zombies instrument was causing this... I swear I saw the memory increase in the debugger's memory view too, but now I can't reproduce it (I haven't changed anything in the scheme).

OpenGL ES 2.0 Invalid drawable error

I'm trying to set up openGL for an iOS app I am building and I am receiving the following error:
-[EAGLContext renderbufferStorage:fromDrawable:]: invalid drawable
Here is a part of my code, located within a UIViewController class:
@implementation MyViewController
{
CAEAGLLayer *_eaglLayer;
EAGLContext *_context;
GLuint _colorRenderBuffer;
}
+(Class)layerClass
{
return [CAEAGLLayer class];
}
-(void)setup
{
_eaglLayer = (CAEAGLLayer *)self.view.layer;
_eaglLayer.opaque = YES;
EAGLRenderingAPI api = kEAGLRenderingAPIOpenGLES2;
_context = [[EAGLContext alloc] initWithAPI:api];
glGenRenderbuffers(1, &_colorRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
[_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];
}
How do I fix this error?
Try making the context current after creating it. I believe that does not happen automatically despite all the "magic" that some of these helper classes do for you.
_context = [[EAGLContext alloc] initWithAPI:api];
[EAGLContext setCurrentContext: _context];
The first line is what you already have, the second line is the one you add.
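For clarity, here is the question's -setup with that one line in place, so the context is current before any GL calls and before renderbufferStorage:fromDrawable: is used:
- (void)setup
{
    _eaglLayer = (CAEAGLLayer *)self.view.layer;
    _eaglLayer.opaque = YES;

    _context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
    [EAGLContext setCurrentContext:_context];   // the added line

    glGenRenderbuffers(1, &_colorRenderBuffer);
    glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderBuffer);
    [_context renderbufferStorage:GL_RENDERBUFFER fromDrawable:_eaglLayer];
}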

Drawing in another thread with CGImage / CGLayer

I have a custom UICollectionViewCell subclass where I draw with clipping, stroking, and transparency. It works pretty well in the Simulator and on an iPhone 5, but on older devices there are noticeable performance problems.
So I want to move the time-consuming drawing to a background thread. Since the -drawRect: method is always called on the main thread, I ended up saving the drawn content to a CGImage (the original question contained code using CGLayer, but that is sort of obsolete, as Matt Long pointed out).
Here is my implementation of drawRect method inside this class:
-(void)drawRect:(CGRect)rect {
CGContextRef ctx = UIGraphicsGetCurrentContext();
if (self.renderedSymbol != nil) {
CGContextDrawImage(ctx, self.bounds, self.renderedSymbol);
}
}
The rendering method that sets this renderedSymbol property:
- (void) renderCurrentSymbol {
[self.queue addOperationWithBlock:^{
// creating custom context to draw there (contexts are not thread safe)
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(nil, self.bounds.size.width, self.bounds.size.height, 8, self.bounds.size.width * (CGColorSpaceGetNumberOfComponents(space) + 1), space, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);
// custom drawing goes here using 'ctx' context
// then saving context as CGImageRef to property that will be used in drawRect
self.renderedSymbol = CGBitmapContextCreateImage(ctx);
// asking main thread to update UI
[[NSOperationQueue mainQueue] addOperationWithBlock:^{
[self setNeedsDisplayInRect:self.bounds];
}];
CGContextRelease(ctx);
}];
}
This setup works perfectly on the main thread, but when I wrap it with NSOperationQueue or GCD, I get lots of different "invalid context 0x0" errors. The app doesn't crash, but the drawing doesn't happen. I suppose there is a problem with releasing the custom-created CGContextRef, but I don't know what to do about it.
Here are my property declarations (I tried using atomic versions, but that didn't help):
@property (nonatomic) CGImageRef renderedSymbol;
@property (nonatomic, strong) NSOperationQueue *queue;
@property (nonatomic, strong) NSString *symbol; // used in custom drawing
Custom setters / getters for properties:
-(NSOperationQueue *)queue {
if (!_queue) {
_queue = [[NSOperationQueue alloc] init];
_queue.name = @"Background Rendering";
}
return _queue;
}
-(void)setSymbol:(NSString *)symbol {
_symbol = symbol;
self.renderedSymbol = nil;
[self setNeedsDisplayInRect:self.bounds];
}
-(CGImageRef) renderedSymbol {
if (_renderedSymbol == nil) {
[self renderCurrentSymbol];
}
return _renderedSymbol;
}
What can I do?
Did you notice the document on CGLayer you're referencing hasn't been updated since 2006? The assumption you've made that CGLayer is the right solution is incorrect. Apple has all but abandoned this technology and you probably should too: http://iosptl.com/posts/cglayer-no-longer-recommended/ Use Core Animation.
Issue solved by using the excellent third-party library from Mind Snacks, MSCachedAsyncViewDrawing.
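If you prefer not to pull in a dependency, here is a minimal hand-rolled sketch of the same idea, assuming the queue property from the question: render into a bitmap context on the background queue, create the CGImage there, and only touch the view on the main thread.
- (void)renderCurrentSymbol
{
    CGSize size = self.bounds.size;   // read UIKit state before leaving the main thread
    [self.queue addOperationWithBlock:^{
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL, (size_t)size.width, (size_t)size.height,
                                                 8, 0, space, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);

        // ... custom drawing into ctx goes here ...

        CGImageRef rendered = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);

        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            self.layer.contents = (__bridge id)rendered;   // bypasses drawRect: entirely
            CGImageRelease(rendered);                      // the layer retains its contents
        }];
    }];
}
Alternatively, keep the drawRect: approach and just make sure the CGImageRef is actually retained: a plain (nonatomic) CGImageRef property does no memory management, so a custom setter with CGImageRetain/CGImageRelease (or caching into a UIImage) is needed.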
