Crash running OpenGL on iOS after memory warning

I am having trouble with an iPad app that has an OpenGL component: it throws a memory warning and crashes, but it doesn't appear to be using that much memory. Am I missing something?
The app is based on the Vuforia augmented reality system (it borrows heavily from the ImageTargets sample). I have about 40 different models I need to include in my app, so in the interest of memory conservation I am loading the objects (and rendering textures, etc.) dynamically as I need them. I tried to copy the UIScrollView lazy-loading idea. The three 4 MB allocations are the textures I have loaded into memory, ready for when the user selects a different model to display.
Anything odd in here?
I don't know much at all about OpenGL (part of the reason why I chose the Vuforia engine). Is there anything in the screenshot below that should concern me? Note that Vuforia's ImageTargets sample app also has Uninitialized Texture Data (about one per frame), so I don't think this is the problem.
Any help would be appreciated!!
Here is the code that generates the 3D objects (in EAGLView):
// Load the textures for use by OpenGL
-(void)loadATexture:(int)texNumber {
    if (texNumber >= 0 && texNumber < [tempTextureList count]) {
        currentlyChangingTextures = YES;
        [textureList removeAllObjects];
        [textureList addObject:[tempTextureList objectAtIndex:texNumber]];

        Texture *tex = [[Texture alloc] init];
        NSString *file = [textureList objectAtIndex:0];
        [tex loadImage:file];
        [textures replaceObjectAtIndex:texNumber withObject:tex];
        [tex release];

        // Remove all old textures outside of the one we're interested in and the two on either side of the picker.
        for (int i = 0; i < [textures count]; ++i) {
            if (i < targetIndex - 1 || i > targetIndex + 1) {
                [textures replaceObjectAtIndex:i withObject:@""];
            }
        }

        // Render - Generate the OpenGL texture objects
        GLuint nID;
        Texture *texture = [textures objectAtIndex:texNumber];
        glGenTextures(1, &nID);
        [texture setTextureID:nID];
        glBindTexture(GL_TEXTURE_2D, nID);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, [texture width], [texture height], 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)[texture pngData]);

        // Set up objects using the above textures.
        Object3D *obj3D = [[Object3D alloc] init];
        obj3D.numVertices = rugNumVerts;
        obj3D.vertices = rugVerts;
        obj3D.normals = rugNormals;
        obj3D.texCoords = rugTexCoords;
        obj3D.texture = [textures objectAtIndex:texNumber];
        [objects3D replaceObjectAtIndex:texNumber withObject:obj3D];
        [obj3D release];

        // Remove all objects except the one currently visible and the ones on either side of the picker.
        for (int i = 0; i < [tempTextureList count]; ++i) {
            if (i < targetIndex - 1 || i > targetIndex + 1) {
                Object3D *obj3D = [[Object3D alloc] init];
                [objects3D replaceObjectAtIndex:i withObject:obj3D];
                [obj3D release];
            }
        }

        if (QCAR::GL_20 & qUtils.QCARFlags) {
            [self initShaders];
        }
        currentlyChangingTextures = NO;
    }
}
Here is the code in the Texture object:
- (id)init
{
    self = [super init];
    pngData = NULL;
    return self;
}

- (BOOL)loadImage:(NSString *)filename
{
    BOOL ret = NO;

    // Build the full path of the image file
    NSString *resourcePath = [[NSBundle mainBundle] resourcePath];
    NSString *fullPath = [resourcePath stringByAppendingPathComponent:filename];

    // Create a UIImage with the contents of the file
    UIImage *uiImage = [UIImage imageWithContentsOfFile:fullPath];
    if (uiImage) {
        // Get the inner CGImage from the UIImage wrapper
        CGImageRef cgImage = uiImage.CGImage;

        // Get the image size
        width = CGImageGetWidth(cgImage);
        height = CGImageGetHeight(cgImage);

        // Record the number of channels
        channels = CGImageGetBitsPerPixel(cgImage) / CGImageGetBitsPerComponent(cgImage);

        // Generate a CFData object from the CGImage object (a CFData object represents an area of memory)
        CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));

        // Copy the image data for use by OpenGL
        ret = [self copyImageDataForOpenGL:imageData];
        CFRelease(imageData);
    }
    return ret;
}

- (void)dealloc
{
    if (pngData) {
        delete[] pngData;
    }
    [super dealloc];
}

@end

@implementation Texture (TexturePrivateMethods)

- (BOOL)copyImageDataForOpenGL:(CFDataRef)imageData
{
    if (pngData) {
        delete[] pngData;
    }
    pngData = new unsigned char[width * height * channels];

    const int rowSize = width * channels;
    const unsigned char *pixels = (unsigned char *)CFDataGetBytePtr(imageData);

    // Copy the row data from bottom to top
    for (int i = 0; i < height; ++i) {
        memcpy(pngData + rowSize * i, pixels + rowSize * (height - 1 - i), width * channels);
    }
    return YES;
}

@end

Odds are, you're not seeing the true memory usage of your application. As I explain in this answer, the Allocations instrument hides memory usage from OpenGL ES, so you can't use it to measure the size of your application. Instead, use the Memory Monitor instrument, which I'm betting will show that your application is using far more RAM than you think. This is a common problem people run into when trying to optimize OpenGL ES on iOS using Instruments.
If you're concerned about which objects or resources could be accumulating in memory, you can use the heap shots functionality of the Allocations instrument to identify specific resources that are allocated but never removed when performing repeated tasks within your application. That's how I've tracked down textures and other items that were not being properly deleted.

Seeing some code would help, but I can make some guesses:
I have about 40 different models I need to include in my app, so in the interest of memory conservation I am loading the objects (and rendering textures, etc.) dynamically as I need them. I tried to copy the UIScrollView lazy-loading idea. The three 4 MB allocations are the textures I have loaded into memory, ready for when the user selects a different model to display.
(...)
This kind of approach is not ideal, and it's most likely the reason for your problems if the memory is not properly deallocated. Eventually you'll run out of memory, and then your process dies if you don't take proper precautions. It's very likely that the engine you're using has a memory leak that is exposed by your access scheme.
Today's operating systems don't differentiate between RAM and storage. To them it's all just memory, and all address space is backed by the block storage system anyway (whether there's actually a storage device attached doesn't matter).
So here's what you should do: instead of read()-ing your models into memory, memory-map them (mmap). This tells the OS "this part of storage should be visible in address space", and the OS kernel will do all the necessary transfers when they're due.
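For illustration, here's a minimal sketch of mapping a vertex blob instead of reading it; the file name and element type are hypothetical:

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

NSString *path = [[NSBundle mainBundle] pathForResource:@"rugModel" ofType:@"verts"];
int fd = open([path fileSystemRepresentation], O_RDONLY);
if (fd >= 0) {
    struct stat st;
    fstat(fd, &st);

    // Pages are faulted in only as they are touched; because they are
    // file-backed, the kernel can evict them again under memory pressure.
    const float *verts = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_FILE | MAP_PRIVATE, fd, 0);
    close(fd); // the mapping stays valid after close
    if (verts != MAP_FAILED) {
        // ... hand verts to the renderer ...
        munmap((void *)verts, (size_t)st.st_size); // when the model is unloaded
    }
}

The Cocoa-level equivalent is [NSData dataWithContentsOfFile:options:error:] with the NSDataReadingMappedIfSafe option, which wraps the same mechanism.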
Note that Vuforia's ImageTargets sample app also has Uninitialized Texture Data (about one per frame), so I don't think this is the problem.
This is a strong indicator that OpenGL texture objects aren't being properly deleted.
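Note that loadATexture above calls glGenTextures on every load but never glDeleteTextures, so each reload can leak a GPU-side texture object that the Allocations instrument will never show. A minimal sketch of the missing cleanup, assuming Texture exposes a textureID getter to match its setTextureID: setter:

// Before installing a replacement in slot texNumber, delete the GL texture
// object the old entry owned so the driver can actually reclaim its memory.
id oldEntry = [textures objectAtIndex:texNumber];
if ([oldEntry isKindOfClass:[Texture class]]) {
    GLuint oldID = [(Texture *)oldEntry textureID];
    if (oldID != 0) {
        glDeleteTextures(1, &oldID);
    }
}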
Any help would be appreciated!!
My advice: stop programming like it's the 1970s. Today's computers and operating systems work differently. See also http://www.varnish-cache.org/trac/wiki/ArchitectNotes

Related

CIImage and CIDetector use with AVCaptureOutput memory leak

I'm using a CIContext, CIDetector, and CIImage to detect rectangles in a vImage_Buffer derived from samples in captureOutput:didOutputSampleBuffer:fromConnection:. It seems that either the detector or the CIImage is retaining memory and it cannot be released.
Here is the code in question - skipping over this code shows memory holding constant; otherwise it increases until the app crashes:
// ...rotatedBuf and format managed outside scope
// Use a CIDetector to find any potential subslices to process
CVPixelBufferRef cvBuffer;
vImageCVImageFormatRef cvFormat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer);
CVPixelBufferCreate(kCFAllocatorSystemDefault, rotatedBuf.width, rotatedBuf.height, kCVPixelFormatType_32BGRA, NULL, &cvBuffer);
CVPixelBufferLockBaseAddress(cvBuffer, kCVPixelBufferLock_ReadOnly);
err = vImageBuffer_CopyToCVPixelBuffer(&rotatedBuf, &format, cvBuffer, cvFormat, NULL, kvImageNoFlags);
CVPixelBufferUnlockBaseAddress(cvBuffer, kCVPixelBufferLock_ReadOnly);

if (![self vImageDidError:err]) {
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:cvBuffer];
    NSArray *feats = [self.ciDetector featuresInImage:ciImage options:nil];
    if (feats && [feats count]) {
        for (CIFeature *feat in feats) {
            // The frame is currently in image space, so we must convert it to a unitless space like the other rects.
            CGRect frame = feat.bounds;
            CGRect clip = CGRectMake(frame.origin.x / rotatedBuf.width, frame.origin.y / rotatedBuf.height,
                                     frame.size.width / rotatedBuf.width, frame.size.height / rotatedBuf.height);
            rects = [rects arrayByAddingObject:[NSValue valueWithCGRect:clip]];
        }
    }
}

CVPixelBufferRelease(cvBuffer);
vImageCVImageFormat_Release(cvFormat);
Other answers seem to suggest wrapping the work in an autorelease pool or creating a new CIDetector each frame, but neither affects the memory use:
CIDetector isn't releasing memory
CIDetector won't release memory - swift
Edit: switching to a dispatch queue other than the main queue seems to have cleared the memory issue and keeps the UI responsive.
I figured out a different solution - in my case I was running all of my processing on the main dispatch queue. What fixed the situation was creating a new queue to run the processing on. I realized this might be the issue when I saw that the majority of my CPU time was spent on the call to featuresInImage:options:. It doesn't explain what caused the memory issue, but now that I'm running on a separate queue, memory stays nice and constant.
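For reference, a minimal sketch of that change (the queue label is hypothetical, and ciImage is built from the pixel buffer as in the snippet above):

// Created once, e.g. in -init.
dispatch_queue_t detectorQueue = dispatch_queue_create("com.example.rect-detector", DISPATCH_QUEUE_SERIAL);

// In captureOutput:didOutputSampleBuffer:fromConnection:, hop off the main
// queue for the CIDetector call so the UI stays responsive.
dispatch_async(detectorQueue, ^{
    @autoreleasepool {
        NSArray *feats = [self.ciDetector featuresInImage:ciImage options:nil];
        // ... convert feature bounds to unit rects as above ...
    }
});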

Generate an image of the contents of a SKScene which is not displayed

In my current SKScene, I use UIGraphicsGetImageFromCurrentImageContext to generate an image from the current graphics on screen.
However, I would like to generate an image from a scene not on screen. One I have created but have not displayed. Is this possible?
The reason for doing is this is to create a custom image for users to share when they achieve a high score, which is similar, but not the same as my main Scene.
Here is a method that captures the contents of a node as a PNG. Be aware that current SpriteKit seems to have a memory leak when accessing the CGImage property, so use this in DEBUG mode.
+ (NSData *)captureNodeAsPNG:(SKSpriteNode *)node skView:(SKView *)skView
{
    NSData *data = nil;
    @autoreleasepool {
        SKTexture *captureTexture = [skView textureFromNode:node];
        CGImageRef cgImageRef = captureTexture.CGImage;
        NSLog(@"capture texture from node as pixels : %d x %d", (int)CGImageGetWidth(cgImageRef), (int)CGImageGetHeight(cgImageRef));
        UIImage *capImg = [UIImage imageWithCGImage:cgImageRef];
        data = UIImagePNGRepresentation(capImg);
    }
    return data;
}
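As a usage sketch (the class and node names here are hypothetical): since textureFromNode: renders the node directly, the scene only needs an SKView to render through and does not have to be presented.

SKScene *shareScene = [[SKScene alloc] initWithSize:CGSizeMake(640, 640)];
// ... build the high-score card in shareScene without presenting it ...
SKSpriteNode *card = (SKSpriteNode *)[shareScene childNodeWithName:@"//shareCard"];
NSData *png = [CaptureHelper captureNodeAsPNG:card skView:self.skView];
NSString *outPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"highscore.png"];
[png writeToFile:outPath atomically:YES];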

Memory Leak with UIImages

- (void)viewDidLoad
{
    [super viewDidLoad];
    self.predictionsObjectArray = [[AAPredictions alloc] init];
    self.animationImagesMutableArray = [NSMutableArray new];
    [self.predictionsObjectArray setPredictionsArray:@[@"Probably Not", @"Ask Again", @"I doubt it", @"Unlikely", @"I believe so"]];
    for (int x = 1; x < 61; x++) {
        NSMutableString *imageName = [[NSMutableString alloc] init];
        if (x > 9) {
            imageName = [NSMutableString stringWithFormat:@"CB000%i.png", x];
        }
        else {
            imageName = [NSMutableString stringWithFormat:@"CB0000%i.png", x];
        }
        [self.animationImagesMutableArray addObject:[UIImage imageNamed:imageName]];
    }
    self.background_image.animationImages = self.animationImagesMutableArray;
    self.animationImagesMutableArray = NULL;
}

- (void)didReceiveMemoryWarning
{
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

#pragma mark Prediction

- (void)makePrediction {
    self.predictionLabel.text = [self.predictionsObjectArray getPrediction];
    [self animateItems];
}

- (void)animationDidStop:(CAAnimation *)theAnimation finished:(BOOL)flag {
    if (flag == true) {
        self.image_button.alpha = 1;
    }
}

- (void)animateItems {
    self.image_button.alpha = 0.0;
    self.background_image.animationRepeatCount = 1;
    if (self.background_image.animationImages == NULL) {
        self.background_image.animationImages = self.animationImagesMutableArray;
    }
    self.background_image.animationDuration = 3;
    [self.background_image startAnimating];
    [self performSelector:@selector(postAnimation) withObject:nil afterDelay:4.25];
}

- (void)postAnimation {
    self.image_button.alpha = 1;
    self.background_image.animationImages = NULL;
}

#pragma mark Actions

- (void)motionBegan:(UIEventSubtype)motion withEvent:(UIEvent *)event {
    if (self.background_image.isAnimating != true) {
        [self makePrediction];
    }
}

- (IBAction)button_pushed {
    if (self.background_image.isAnimating != true) {
        [self makePrediction];
    }
}

@end
I am new to programming and am taking an online class where I had to create a crystal ball app. I wanted to take it a step further and add button functionality, so the button is basically the crystal ball appearing and disappearing (it goes away during the animation). The problem I've had for the last few days is that in the debugger I have all the images stored in memory after the makePrediction function is called... it's about 187 MB. I know it's not a lot, but the app starts at 27 MB. How do I release the images from memory and then restore them into background_image.animationImages every time that function is called?
A couple of thoughts:
Your approach of animating via an array of images will always be an extravagant use of memory. If you do this, you might want to reduce either the number of images or the dimensions of the individual images.
To appreciate how much memory is used by this technique, assume 4 bytes per pixel per image. Thus, 60 images at 800 × 800 pixels take up 60 × 800 × 800 × 4 bytes ≈ 146 MB. Do not look at the size of the JPG or PNG file to determine how much memory the images take. Those are compressed formats, but when the image is loaded into a UIKit control, it is decompressed, taking 4 bytes per pixel.
As others have pointed out, imageNamed caches images, which prevents the memory from being freed when the image is released. You might consider using imageWithContentsOfFile instead. You'll lose the performance gain of the cache, but you shouldn't suffer the memory usage issues it entails.
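For example, using a file name from your loop, the non-caching variant looks like this; note it needs a full path rather than a bundle-relative name:

// Cached: stays resident in the imageNamed cache even after you let go of it.
UIImage *cached = [UIImage imageNamed:@"CB00001.png"];

// Not cached: the backing memory can be reclaimed once the UIImage is deallocated.
NSString *path = [[NSBundle mainBundle] pathForResource:@"CB00001" ofType:@"png"];
UIImage *uncached = [UIImage imageWithContentsOfFile:path];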
You might want to give us some idea of what the "crystal ball showing up and disappearing" animation looks like. The question is whether you could achieve the desired animation without having an array of different images.
Generally you would have only one image (completely eliminating this memory issue), and then animate some animatable property (e.g. the alpha so it fades in and out of view, the frame if you want it to slide in and out or squeeze in and out, a transform to scale/move it, etc.). Having only one image and animating one of those properties is a far more memory-efficient way to handle animation. For example, to have it fade out:
[UIView animateWithDuration:0.5
                 animations:^{
                     self.crystalBall.alpha = 0.0;
                 }];
You'd have to tell us more about what this animation is supposed to look like for us to help you further.
Bottom line, loading all of those images into memory is extravagant, and you want to either minimize the size of each image, reduce the number of images, or completely retire this "array of images" concept and move to some animateWithDuration block-based animation.
So this is an old issue, but imageNamed will cache the image in memory; it's designed for small reusable images like buttons, icons, etc. I think you will see an improvement in memory pressure if you use imageWithContentsOfFile, which does not cache.
There is a terrific answer on SO that goes into more detail on the issue.

Received memory warning when capturing screen and saving to video on iOS

I am writing a program to capture the screen and convert it to video. I can successfully save the video if it is less than 10 seconds, but if it's longer than that, I receive a memory warning and the application crashes. I wrote the code as follows. Where am I missing a data release? I would like to know how to do this.
-(void)captureAndSaveImage
{
    if (!stopCapturing) {
        if (assetWriterInput.readyForMoreMediaData)
        {
            keepTrackOfBackGroundMood++;
            NSLog(@"keepTrackOfBackGroundMood is %d", keepTrackOfBackGroundMood);

            CVReturn cvErr = kCVReturnSuccess;
            CGSize imageSize = screenCaptureAndDraw.bounds.size;
            CGFloat imageScale = 0; // if zero, it reduces processing time
            if (NULL != UIGraphicsBeginImageContextWithOptions)
            {
                UIGraphicsBeginImageContextWithOptions(imageSize, NO, imageScale);
            }
            else
            {
                UIGraphicsBeginImageContext(imageSize);
            }
            [self.hiddenView.layer renderInContext:UIGraphicsGetCurrentContext()];
            [self.screenCaptureAndDraw.layer renderInContext:UIGraphicsGetCurrentContext()];
            UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            image = (CGImageRef)[img CGImage];
            CVPixelBufferRef pixelBuffer = NULL;
            CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image));
            cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                 FRAME_WIDTH2,
                                                 FRAME_HEIGHT2,
                                                 kCVPixelFormatType_32BGRA,
                                                 (void *)CFDataGetBytePtr(imageData),
                                                 CGImageGetBytesPerRow(image),
                                                 NULL,
                                                 NULL,
                                                 NULL,
                                                 &pixelBuffer);

            //CFRelease(imageData);
            //CGImageRelease(image); // I can't write this line because I am not creating the image, and when I checked online it said it is not my responsibility to release it. If I add it, the application crashes immediately.

            // calculate the time
            CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
            CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
            CMTime presentationTime = CMTimeMakeWithSeconds(elapsedTime, 600); // the original snippet omits this declaration; presumably it is derived from elapsedTime like this

            // write the sample
            BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];

            if (appended) {
                NSLog(@"appended sample at time %lf and keepTrackofappended is %d", CMTimeGetSeconds(presentationTime), keepTrackofappended);
                keepTrackofappended++;
            } else {
                NSLog(@"failed to append");
                [self stopRecording];
                //self.startStopButton.selected = NO;
                screenRecord = false;
            }
        }
    } // stop capturing
    // });
}
I agree that you don't want to do the CGImageRelease(image). This object was created by calling the CGImage method of a UIImage object, so ownership was not transferred, and ARC still does the memory management for your img object; no releasing of the image object is needed.
But I think you do want to restore your CFRelease(imageData). This is an object created by CGDataProviderCopyData, so you own it and must clean it up.
I also think you have to release the pixelBuffer that you created with CVPixelBufferCreateWithBytes after you call appendPixelBuffer:. You can use the CVPixelBufferRelease function for that.
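Concretely, a sketch of where those releases would go in your method:

BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                            withPresentationTime:presentationTime];

// Both were obtained from Create/Copy functions, so you own them.
CVPixelBufferRelease(pixelBuffer);
CFRelease(imageData);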
The Core Foundation memory rule is that if the function has Copy or Create in the name, you own that object and are responsible for releasing it. See the Create Rule in the Memory Management Programming Guide for Core Foundation.
I would have thought that the static analyzer (shift+command+B, or "Analyze" from the Xcode "Product" menu) would have identified this issue, as it has gotten much better at finding Core Foundation memory issues (albeit not perfect).
Alternatively, if you run your app through the Leaks tool in Instruments (which will also show you the Allocations tool at the same time), you can take a look at your memory usage. While the video capture requires a lot of Live Bytes, in my experience it stays pretty darn flat. If it's growing, you have a leak somewhere.

Apparent leaks: png_malloc

I have an application with various animations and images. The application runs just fine for about 30 minutes, but then crashes. I have looked through Instruments and I notice that there is a whole bunch of 7 kB png_malloc allocations building up each time I mark the heap (amounting to about 300 kB every couple of minutes).
I noticed in my leaks that every time an animation or PNG is used for the first time, there seems to be a "leak" of the data (although I am a bit skeptical about whether this is a real leak or not).
All of these images are created using
frameName = [[NSString alloc] initWithFormat:@"image.png"];
UIImage *u = [UIImage cachelessImageNamed:frameName];
so I don't believe there should be a problem with caching the images.
Has anyone else had the same problem with this png_malloc allocation?
The Instruments screenshot
Notes: I am using ARC, and the animations are set to nil in the dealloc method; however, that isn't called until the application exits. Does this create a problem each time the animation is run if it's only been created once?
EDIT Some more code:
- (void)createSymbolAnimations
{
    if (symbolAnimations == nil)
    {
        symbolAnimations = [[NSMutableArray alloc] init];
    }
    NSString *frameName;
    if (thisAnimation == nil)
    {
        thisAnimation = [[NSMutableArray alloc] init];
    }
    for (int x = 0; x < 40; x++)
    {
        frameName = [[NSString alloc] initWithFormat:@"image%d%s", x, ".png"];
        UIImage *u = [UIImage cachelessImageNamed:frameName];
        [thisAnimation addObject:u];
    }
    [symbolAnimations addObject:thisAnimation];
}
That is how the animation is created. Imagine I have a few of these; I then change the animation set and start animating on touch with this snippet:
UIImageView *aView = [frameArray objectAtIndex:x];
aView.image = [[symbolAnimations objectAtIndex:x] objectAtIndex:0];
[aView startAnimating];
Where x is the set of images I want to animate and 0 is the first frame of the animation.
So the image is changed quite a few times, and I'm starting to worry that each time the animation images are changed, the RAM isn't cleared but instead overwritten.
EDIT Image grabber
+ (UIImage *)cachelessImageNamed:(NSString *)name
{
    return [UIImage imageWithContentsOfFile:[[NSBundle mainBundle] pathForResource:name ofType:nil]];
}
Just in case anyone stumbles upon this later, I found the problem.
The PNGs used in this project for animations were created on Windows (not sure how pertinent that is), and it seems the file format is slightly different from the PNG that Xcode is expecting. This prevents any of those PNGs from being deallocated. If you convert them to Mac-style PNGs, everything seems to work fine. I did this with
mogrify -type truecolormatte -format png *.png
After converting all of my images, the leaks were greatly reduced and everything seems to run fine.
