I'm using a CIContext, CIDetector, and CIImage to detect rectangles in a vImage_Buffer derived from samples in captureOutput:didOutputSampleBuffer:fromConnection:. It seems that either the detector or the CIImage is retaining memory and it cannot be released.
Here is the code in question - skipping over this code keeps memory constant, otherwise it increases until the app crashes:
// ...rotatedBuf and format managed outside scope
// Use a CIDetector to find any potential subslices to process
CVPixelBufferRef cvBuffer;
vImageCVImageFormatRef cvFormat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer);
CVPixelBufferCreate(kCFAllocatorSystemDefault, rotatedBuf.width, rotatedBuf.height, kCVPixelFormatType_32BGRA, NULL, &cvBuffer);
CVPixelBufferLockBaseAddress(cvBuffer, kCVPixelBufferLock_ReadOnly);
err = vImageBuffer_CopyToCVPixelBuffer(&rotatedBuf, &format, cvBuffer, cvFormat, NULL, kvImageNoFlags);
CVPixelBufferUnlockBaseAddress(cvBuffer, kCVPixelBufferLock_ReadOnly);
if (![self vImageDidError:err]) {
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:cvBuffer];
    NSArray *feats = [self.ciDetector featuresInImage:ciImage options:nil];
    if (feats && [feats count]) {
        for (CIFeature *feat in feats) {
            // The frame is currently in image space, so we must convert it to a unitless space like the other rects.
            CGRect frame = feat.bounds;
            CGRect clip = CGRectMake(frame.origin.x / rotatedBuf.width, frame.origin.y / rotatedBuf.height,
                                     frame.size.width / rotatedBuf.width, frame.size.height / rotatedBuf.height);
            rects = [rects arrayByAddingObject:[NSValue valueWithCGRect:clip]];
        }
    }
}
CVPixelBufferRelease(cvBuffer);
vImageCVImageFormat_Release(cvFormat);
Other answers seem to suggest wrapping the work in an autorelease pool or creating a new CIDetector each frame, but neither affects the memory use.
CIDetector isn't releasing memory
CIDetector won't release memory - swift
Edit: switching to a dispatch queue other than the main queue (dispatch_get_main_queue()) seems to have cleared the memory issue and keeps the UI responsive.
I figured out a different solution: in my case I was running all my processing on the main dispatch queue, and what fixed the situation was creating a new queue to run the processing on. I realized this might be the problem when I saw that the majority of my CPU time was spent in the call to featuresInImage:options:. It doesn't explain what caused the memory issue, but now that I'm running on a separate queue, memory is nice and constant.
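For reference, a minimal sketch of that setup, assuming an AVCaptureVideoDataOutput named videoDataOutput and leaving the detection code as shown above (the queue label and variable name are illustrative, not from the post):

// A private serial queue for capture callbacks (the label is illustrative).
dispatch_queue_t detectionQueue =
    dispatch_queue_create("com.example.rect-detection", DISPATCH_QUEUE_SERIAL);

// Deliver captureOutput:didOutputSampleBuffer:fromConnection: on that queue
// instead of the main queue, so the detector work never runs on the main thread.
[videoDataOutput setSampleBufferDelegate:self queue:detectionQueue];

// Inside the callback, an explicit pool also bounds the per-frame CI objects.
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    @autoreleasepool {
        // ...vImage conversion and featuresInImage:options: as shown above...
    }
}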
Related
I have (multiple) UIViews with layers of type CAEAGLLayer, and am able to call [EAGLContext presentRenderbuffer:] on renderbuffers attached to these layers, on a secondary thread, without any kind of graphical glitches.
I would have expected to see at least some tearing, since other UI with which these UIViews are composited is updated on the main thread.
Does CAEAGLLayer (I have kEAGLDrawablePropertyRetainedBacking set to NO) do some double-buffering behind the scenes?
I just want to understand why it is that this works...
Example:
BView is a UIView subclass that owns a framebuffer with renderbuffer storage assigned to its OpenGLES layer, in a shared EAGLContext:
@implementation BView

- (id)initWithFrame:(CGRect)frame context:(EAGLContext *)context
{
    self = [super initWithFrame:frame];
    if (self) {
        // Configure layer
        CAEAGLLayer *eaglLayer = (CAEAGLLayer *)self.layer;
        eaglLayer.opaque = YES;
        eaglLayer.drawableProperties = @{ kEAGLDrawablePropertyRetainedBacking : [NSNumber numberWithBool:NO],
                                          kEAGLDrawablePropertyColorFormat : kEAGLColorFormatSRGBA8 };

        // Create framebuffer with renderbuffer attached to layer
        [EAGLContext setCurrentContext:context];
        glGenFramebuffers(1, &FrameBuffer);
        glBindFramebuffer(GL_FRAMEBUFFER, FrameBuffer);
        glGenRenderbuffers(1, &RenderBuffer);
        glBindRenderbuffer(GL_RENDERBUFFER, RenderBuffer);
        [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(id<EAGLDrawable>)self.layer];
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, RenderBuffer);
    }
    return self;
}

+ (Class)layerClass
{
    return [CAEAGLLayer class];
}

@end
A UIViewController adds a BView instance on the main thread at init time:
BView* view = [[BView alloc] initWithFrame:(CGRect){ 0.0, 0.0, 75.0, 75.0 } context:Context];
[self.view addSubview:view];
On a secondary thread, render to the framebuffer in the BView and present it; in this case it's in a callback from a video AVCaptureDevice, called regularly:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    [EAGLContext setCurrentContext:bPipe->Context.GlContext];

    // Render into framebuffer ...

    // Present renderbuffer
    glBindRenderbuffer(GL_RENDERBUFFER, BViewsRenderBuffer);
    [Context presentRenderbuffer:GL_RENDERBUFFER];
}
This used to not work. There were several issues with updating the view if the buffer was presented on any thread but the main thread. It seems to have been working for some time now, but you implement it at your own risk: later iOS versions may break it again, and some older ones probably still have problems (not that you necessarily need to support very old OS versions anyway).
Apple has always been a bit closed about how things work internally, but we can guess quite a few things. Since iOS seems to be the only platform that exposes your main buffer as an FBO (framebuffer object), I would expect the real main framebuffer to be inaccessible to developers, with your main FBO being redrawn into it when you present the renderbuffer. The last time I checked, the call that presents the renderbuffer blocks the calling thread and appears to be capped at the screen refresh rate (60 FPS in most cases), which implies there is still some locking mechanism. More testing would be needed, but I would expect some sort of pool of buffers that need to be redrawn into the main buffer, where only one unique buffer ID can be in the pool at a time or the calling thread is blocked. The result would be that the first call to present the renderbuffer is not blocked at all, but each subsequent call is blocked if the previous buffer has not yet been redrawn.
If this is true, then yes, some double buffering is required at some point, since you can immediately continue drawing into your buffer. Because the renderbuffer keeps the same ID across frames, it probably isn't swapped (as far as I know), but it could be redrawn/copied into another buffer (most likely a texture), which can be done on the fly at any time. Under this scheme, when you first present the buffer it is copied into the texture and the texture is locked; when the screen refreshes, the texture is consumed and unlocked. So if the texture is still locked, your present call blocks the thread; otherwise it continues smoothly. It is hard to call this double buffering, exactly: there are two buffers, but they are still coordinated by a locking mechanism.
I hope this helps you understand why it works. It is much the same procedure you would use when loading large data structures into a separate shared context running on a separate thread.
Still, most of this is unfortunately just guessing.
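Whatever the internal mechanism is, the practical rule is simply that a given EAGLContext must only be used from one thread at a time. A minimal sketch of the usual way to keep that guarantee while presenting off the main thread is to funnel every GL call through one serial queue (the queue label and the glContext/renderBuffer names here are assumptions, not from the question):

// One serial queue dedicated to all GL work for this context.
dispatch_queue_t glQueue = dispatch_queue_create("com.example.gl-render", DISPATCH_QUEUE_SERIAL);

// In captureOutput:didOutputSampleBuffer:fromConnection: (or any other producer):
dispatch_async(glQueue, ^{
    [EAGLContext setCurrentContext:glContext];
    // ... render into the framebuffer attached to the CAEAGLLayer ...
    glBindRenderbuffer(GL_RENDERBUFFER, renderBuffer);
    [glContext presentRenderbuffer:GL_RENDERBUFFER];
});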
I am writing a program to capture the screen and convert it to video. I can successfully save the video if it is less than 10 seconds long; beyond that, I receive a memory warning and the application crashes. I wrote the code as follows. Where am I failing to release data? I would like to know how to do it.
- (void)captureAndSaveImage
{
    if (!stopCapturing) {
        if (assetWriterInput.readyForMoreMediaData)
        {
            keepTrackOfBackGroundMood++;
            NSLog(@"keepTrackOfBackGroundMood is %d", keepTrackOfBackGroundMood);

            CVReturn cvErr = kCVReturnSuccess;
            CGSize imageSize = screenCaptureAndDraw.bounds.size;
            CGFloat imageScale = 0; // 0 means the scale factor of the device's main screen

            if (NULL != UIGraphicsBeginImageContextWithOptions)
            {
                UIGraphicsBeginImageContextWithOptions(imageSize, NO, imageScale);
            }
            else
            {
                UIGraphicsBeginImageContext(imageSize);
            }

            [self.hiddenView.layer renderInContext:UIGraphicsGetCurrentContext()];
            [self.screenCaptureAndDraw.layer renderInContext:UIGraphicsGetCurrentContext()];
            UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            image = (CGImageRef)[img CGImage];

            CVPixelBufferRef pixelBuffer = NULL;
            CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image));
            cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                 FRAME_WIDTH2,
                                                 FRAME_HEIGHT2,
                                                 kCVPixelFormatType_32BGRA,
                                                 (void *)CFDataGetBytePtr(imageData),
                                                 CGImageGetBytesPerRow(image),
                                                 NULL,
                                                 NULL,
                                                 NULL,
                                                 &pixelBuffer);

            //CFRelease(imageData);
            //CGImageRelease(image); // I can't add this because I am not creating the image, and what I read online says it is not my responsibility to release it. If I add it, the application crashes immediately.

            // calculate the time
            CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
            CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;

            // write the sample
            BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];

            if (appended) {
                NSLog(@"appended sample at time %lf and keepTrackofappended is %d", CMTimeGetSeconds(presentationTime), keepTrackofappended);
                keepTrackofappended++;
            } else {
                NSLog(@"failed to append");
                [self stopRecording];
                //self.startStopButton.selected = NO;
                screenRecord = false;
            }
        }
    } // stop capturing
    // });
}
I agree that you don't want to do the CGImageRelease(image). That object was created by calling the CGImage method of a UIImage object, so ownership was not transferred; ARC still manages the memory of your img object, and no release of the image object is needed.
But I think you do want to restore your CFRelease(imageData). This is an object created by CGDataProviderCopyData, so you own it and must clean up.
I also think you have to release the pixelBuffer that you created with CVPixelBufferCreateWithBytes after you appendPixelBuffer. You can use the CVPixelBufferRelease function for that.
The Core Foundation memory rule is that if the function has Copy or Create in the name, you own that object and are responsible for releasing it. See the Create Rule in the Memory Management Programming Guide for Core Foundation.
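Concretely, reflecting those two suggestions and using the variable names from the question, the end of the append step would look roughly like this:

// write the sample
BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                            withPresentationTime:presentationTime];

// Both of these came from Create/Copy calls, so per the Create Rule we own
// them and must release them once the append is done.
CVPixelBufferRelease(pixelBuffer);
CFRelease(imageData);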
I would have thought that the static analyzer (shift+command+B or "Analyze" from the Xcode "Product" menu) would have identified this issue, as it has gotten much better at finding Core Foundation memory issues (albeit, not perfect).
Alternatively, if you run your app through the Leaks tool in Instruments (which will also show you the Allocations tool at the same time), you can take a look at your memory usage. While the video capture requires a lot of Live Bytes, in my experience it stays pretty darn flat. If it's growing, you have a leak somewhere.
Can this code be executed on a background thread safely?
CGImageRef cgImage;
CGContextRef context;
CGColorSpaceRef colorSpace;

// Get the Core Graphics image to work on.
cgImage = [uiImage CGImage];

// Get the image's size.
_width = CGImageGetWidth(cgImage);
_height = CGImageGetHeight(cgImage);

// Extract the pixel information and place it into _data.
colorSpace = CGColorSpaceCreateDeviceRGB();
_data = malloc(_width * _height * 4);
context = CGBitmapContextCreate(_data, _width, _height, 8, 4 * _width, colorSpace,
                                kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);

// Adjust the position and invert the image.
// OpenGL uses image data upside-down compared to common image files.
CGContextTranslateCTM(context, 0, _height);
CGContextScaleCTM(context, 1.0, -1.0);

// Clear and redraw the image into the context.
CGContextClearRect(context, CGRectMake(0, 0, _width, _height));
CGContextDrawImage(context, CGRectMake(0, 0, _width, _height), cgImage);

// Release the context.
CGContextRelease(context);
How can I achieve the same result, if not?
(My problem is that I can't see my OpenGL textures based on the output buffer of this method when it runs in the background.)
I think you might have trouble running this code on a thread separate from the GL thread like this. Even if it works, you might see half-drawn images/textures. You could avoid this by creating a double buffer:
Your _data should be allocated only once and should hold two raw image buffers. Then create two pointers, a foreground and a background buffer (void *fg = _data[0], void *bg = _data[1] to begin with). When your method has finished collecting data from the CGImage into bg, just swap the pointers (so void *fg = _data[1], void *bg = _data[0], or the other way around).
Your GL thread should then fill the texture from fg (the same thread that does the drawing).
You will also probably need some locking:
- Before you push data to the texture, lock the buffer swap, and unlock it after the push.
- You will probably want to know whether the buffer has been swapped, and only push fg data to the texture in that case.
Also note that if you call GL methods from more than one thread, you will have trouble in most cases.
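A minimal sketch of that double-buffer scheme, treating fg, bg, bufferLock, swapped, and textureName as instance variables (all of these names are illustrative, not from the answer):

// Allocate the two raw buffers once; fg is read by the GL thread, bg is written
// by the background thread that draws the CGImage.
unsigned char *buffers[2] = { malloc(_width * _height * 4), malloc(_width * _height * 4) };
unsigned char *fg = buffers[0];
unsigned char *bg = buffers[1];
NSLock *bufferLock = [[NSLock alloc] init];
BOOL swapped = NO;

// Background thread: after drawing the CGImage into bg, swap under the lock.
[bufferLock lock];
unsigned char *tmp = fg;
fg = bg;
bg = tmp;
swapped = YES;
[bufferLock unlock];

// GL thread: only upload when a fresh buffer is available, holding the lock so
// the swap cannot happen mid-upload.
[bufferLock lock];
if (swapped) {
    glBindTexture(GL_TEXTURE_2D, textureName);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, (GLsizei)_width, (GLsizei)_height,
                    GL_RGBA, GL_UNSIGNED_BYTE, fg);
    swapped = NO;
}
[bufferLock unlock];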
That looks OK to me, assuming that uiImage, _width, _height and _data aren't being manipulated from another thread at the same time. (Assuming you're using iOS 4 and above.)
Are you uploading the texture to OpenGL on the background thread? If so, that's probably the problem (since a given OpenGL context should only be accessed from a single thread at a time).
As long as you don't access UIKit (or similar frameworks) (directly or indirectly) and as long as you don't access the variables in your code from multiple threads, it's OK.
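Building on the point about OpenGL contexts and threads, one common pattern (a sketch, not taken from the answers above) is to do the Core Graphics work on a background queue and hop back to the thread that owns the GL context, here assumed to be the main thread, just for the upload; the copyBitmapDataFromImage: helper and textureName are hypothetical:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Safe off the main thread: pure Core Graphics work into a malloc'd buffer.
    void *pixelData = [self copyBitmapDataFromImage:uiImage];   // hypothetical helper

    // Hop back to the thread/queue that owns the EAGLContext for the GL calls.
    dispatch_async(dispatch_get_main_queue(), ^{
        glBindTexture(GL_TEXTURE_2D, textureName);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, (GLsizei)width, (GLsizei)height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixelData);
        free(pixelData);
    });
});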
I am having trouble with an app that has an OpenGL component crashing on iPad. The app throws a memory warning and crashes, but it doesn't appear to be using that much memory. Am I missing something?
The app is based on the Vuforia augmented reality system (it borrows heavily from the ImageTargets sample). I have about 40 different models I need to include in my app, so in the interests of memory conservation I am loading the objects (and rendering textures etc.) dynamically as I need them. I tried to copy the UIScrollView lazy-loading idea. The three 4 MB allocations are the textures I have loaded into memory, ready for when the user selects a different model to display.
Anything odd in here?
I don't know much at all about OpenGL (part of the reason why I chose the Vuforia engine). Anything in the screenshot below that should concern me? Note that Vuforia's ImageTargets sample app also has Uninitialized Texture Data (about one per frame), so I don't think this is the problem.
Any help would be appreciated!!
Here is the code that generates the 3D objects (in EAGLView):
// Load the textures for use by OpenGL
- (void)loadATexture:(int)texNumber {
    if (texNumber >= 0 && texNumber < [tempTextureList count]) {
        currentlyChangingTextures = YES;

        [textureList removeAllObjects];
        [textureList addObject:[tempTextureList objectAtIndex:texNumber]];

        Texture *tex = [[Texture alloc] init];
        NSString *file = [textureList objectAtIndex:0];
        [tex loadImage:file];
        [textures replaceObjectAtIndex:texNumber withObject:tex];
        [tex release];

        // Remove all old textures outside of the one we're interested in and the two on either side of the picker.
        for (int i = 0; i < [textures count]; ++i) {
            if (i < targetIndex - 1 || i > targetIndex + 1) {
                [textures replaceObjectAtIndex:i withObject:@""];
            }
        }

        // Render - Generate the OpenGL texture objects
        GLuint nID;
        Texture *texture = [textures objectAtIndex:texNumber];
        glGenTextures(1, &nID);
        [texture setTextureID:nID];
        glBindTexture(GL_TEXTURE_2D, nID);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, [texture width], [texture height], 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)[texture pngData]);

        // Set up objects using the above textures.
        Object3D *obj3D = [[Object3D alloc] init];
        obj3D.numVertices = rugNumVerts;
        obj3D.vertices = rugVerts;
        obj3D.normals = rugNormals;
        obj3D.texCoords = rugTexCoords;
        obj3D.texture = [textures objectAtIndex:texNumber];
        [objects3D replaceObjectAtIndex:texNumber withObject:obj3D];
        [obj3D release];

        // Remove all objects except the one currently visible and the ones on either side of the picker.
        for (int i = 0; i < [tempTextureList count]; ++i) {
            if (i < targetIndex - 1 || i > targetIndex + 1) {
                Object3D *obj3D = [[Object3D alloc] init];
                [objects3D replaceObjectAtIndex:i withObject:obj3D];
                [obj3D release];
            }
        }

        if (QCAR::GL_20 & qUtils.QCARFlags) {
            [self initShaders];
        }

        currentlyChangingTextures = NO;
    }
}
Here is the code in the Texture object.
- (id)init
{
    self = [super init];
    pngData = NULL;
    return self;
}

- (BOOL)loadImage:(NSString *)filename
{
    BOOL ret = NO;

    // Build the full path of the image file
    NSString *resourcePath = [[NSBundle mainBundle] resourcePath];
    NSString *fullPath = [resourcePath stringByAppendingPathComponent:filename];

    // Create a UIImage with the contents of the file
    UIImage *uiImage = [UIImage imageWithContentsOfFile:fullPath];

    if (uiImage) {
        // Get the inner CGImage from the UIImage wrapper
        CGImageRef cgImage = uiImage.CGImage;

        // Get the image size
        width = CGImageGetWidth(cgImage);
        height = CGImageGetHeight(cgImage);

        // Record the number of channels
        channels = CGImageGetBitsPerPixel(cgImage) / CGImageGetBitsPerComponent(cgImage);

        // Generate a CFData object from the CGImage object (a CFData object represents an area of memory)
        CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

        // Copy the image data for use by OpenGL
        ret = [self copyImageDataForOpenGL:imageData];
        CFRelease(imageData);
    }

    return ret;
}

- (void)dealloc
{
    if (pngData) {
        delete[] pngData;
    }
    [super dealloc];
}

@end

@implementation Texture (TexturePrivateMethods)

- (BOOL)copyImageDataForOpenGL:(CFDataRef)imageData
{
    if (pngData) {
        delete[] pngData;
    }
    pngData = new unsigned char[width * height * channels];

    const int rowSize = width * channels;
    const unsigned char *pixels = (unsigned char *)CFDataGetBytePtr(imageData);

    // Copy the row data from bottom to top
    for (int i = 0; i < height; ++i) {
        memcpy(pngData + rowSize * i, pixels + rowSize * (height - 1 - i), width * channels);
    }

    return YES;
}
Odds are, you're not seeing the true memory usage of your application. As I explain in this answer, the Allocations instrument hides memory usage from OpenGL ES, so you can't use it to measure the size of your application. Instead, use the Memory Monitor instrument, which I'm betting will show that your application is using far more RAM than you think. This is a common problem people run into when trying to optimize OpenGL ES on iOS using Instruments.
If you're concerned about which objects or resources could be accumulating in memory, you can use the heap shots functionality of the Allocations instrument to identify specific resources that are allocated but never removed when performing repeated tasks within your application. That's how I've tracked down textures and other items that were not being properly deleted.
Seeing some code would help, but I can make some guesses:
I have about 40 different models I need to include in my app, so in the interests of memory conservation I am loading the objects (and rendering textures etc) dynamically in the app as I need them. I tried to copy the UIScrollView lazy loading idea. The three 4mb allocations are the textures I have loaded into memory ready for when the user selects a different model to display.
(...)
This kind of approach is not ideal, and it's most likely the reason for your problems if the memory is not properly deallocated. Eventually you'll run out of memory and your process dies unless you take proper precautions. It's very likely that the engine you're using has a memory leak that your access scheme exposes.
Modern operating systems don't really differentiate between RAM and storage. To them it's all just memory, and all address space is backed by the block storage system anyway (whether there's actually a storage device attached doesn't matter).
So here's what you should do: instead of read()-ing your models into memory, memory-map them (mmap). This tells the OS "this part of storage should be visible in the address space," and the kernel will do all the necessary transfers when they're due.
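A minimal sketch of what that could look like on iOS, assuming a hypothetical modelPath for one of the model files and using NSData's mapped-read option (which wraps mmap):

// Map the model file into the address space instead of copying it into RAM.
// The kernel pages the data in on demand and can drop clean pages under pressure.
NSError *error = nil;
NSData *modelData = [NSData dataWithContentsOfFile:modelPath
                                           options:NSDataReadingMappedIfSafe
                                             error:&error];
if (modelData) {
    const void *vertices = modelData.bytes;   // read directly from the mapping
    // ... hand the vertex/texture data to the renderer ...
} else {
    NSLog(@"Failed to map model: %@", error);
}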
Note that Vuforia's ImageTargets sample app also has Uninitialized Texture Data (about one per frame), so I don't think this is the problem.
This is a strong indicator that OpenGL texture objects don't get properly deleted.
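In the loadATexture: code above, glGenTextures is called every time but the old texture name is never deleted, so GPU-side storage can pile up. A hedged sketch of the kind of cleanup that is usually needed, assuming the Texture class exposes a textureID getter to match its setTextureID: setter:

// Before generating a replacement, delete the GL texture object that the old
// Texture instance owned, so the driver can reclaim its storage.
Texture *oldTex = [textures objectAtIndex:texNumber];
if ([oldTex isKindOfClass:[Texture class]] && [oldTex textureID] != 0) {
    GLuint oldID = [oldTex textureID];
    glDeleteTextures(1, &oldID);
}

// ... then glGenTextures / glTexImage2D for the new texture as before.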
Any help would be appreciated!!
My advice: stop programming as if it were the 1970s. Today's computers and operating systems work differently. See also http://www.varnish-cache.org/trac/wiki/ArchitectNotes
I'm using the following code to rotate an image
http://www.platinumball.net/blog/2010/01/31/iphone-uiimage-rotation-and-scaling/
That's one of the few image transformations I do before uploading an image to the server; I also have some other transformations: normalize, crop, resize.
Each of the transformations returns a (UIImage *), and I add those functions using a category. I use them like this:
UIImage *img = //image from camera;
img = [[[[img normalize] rotate] scale] resize];
[upload img];
After selecting 3-4 photos from the camera and executing the same code each time, I get a memory warning in Xcode.
I'm guessing I have a memory leak somewhere (even though I'm using ARC). I'm not very experienced with the Xcode debugging tools, so I started printing the retain count after each method.
UIImage *img = //image from camera;
img = [img normalize];
img = [img rotate]; // retain count increases :(
img = [img scale];
img = [img resize];
The only operation that increases the retain count is the rotation. Is this normal?
The only operation that increases the retain count is the rotation. Is this normal?
It's quite possible that the UIGraphicsGetImageFromCurrentImageContext() call in your rotate function ends up retaining the image. If so, it almost certainly also autoreleases the image in keeping with the normal Cocoa memory management rules. Either way, you shouldn't worry about it. As long as your rotate function doesn't itself contain any unbalanced retain (or alloc, new, or copy) calls, you should expect to be free of leaks. If you do suspect a leak, it's better to track it down with Instruments than by watching retainCount yourself.
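If the real concern is the memory warnings rather than the retain count itself, one thing worth trying (a sketch, not from the original answer) is wrapping each photo's processing in an explicit autorelease pool so the intermediate images are freed before the next photo is handled; selectedPhotos and the upload: method are hypothetical names:

for (UIImage *original in selectedPhotos) {   // selectedPhotos is a hypothetical array
    @autoreleasepool {
        // The intermediate images produced by the category methods are
        // autoreleased; the pool drains them at the end of each iteration.
        UIImage *img = [[[[original normalize] rotate] scale] resize];
        [self upload:img];   // hypothetical upload method
    }
}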