Received memory warning when capturing screen and saving to video on iOS

I am writing a program that captures the screen and converts the capture to video. I can save the video successfully if it is shorter than 10 seconds, but anything longer than that produces a memory warning and the application crashes. My code is below. Where am I failing to release data? I would like to know how to fix it.
-(void)captureAndSaveImage
{
    if (!stopCapturing) {
        if (assetWriterInput.readyForMoreMediaData) {
            keepTrackOfBackGroundMood++;
            NSLog(@"keepTrackOfBackGroundMood is %d", keepTrackOfBackGroundMood);

            CVReturn cvErr = kCVReturnSuccess;
            CGSize imageSize = screenCaptureAndDraw.bounds.size;
            CGFloat imageScale = 0; // 0 means use the scale of the device's main screen
            if (NULL != UIGraphicsBeginImageContextWithOptions) {
                UIGraphicsBeginImageContextWithOptions(imageSize, NO, imageScale);
            } else {
                UIGraphicsBeginImageContext(imageSize);
            }
            [self.hiddenView.layer renderInContext:UIGraphicsGetCurrentContext()];
            [self.screenCaptureAndDraw.layer renderInContext:UIGraphicsGetCurrentContext()];
            UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
            UIGraphicsEndImageContext();

            image = (CGImageRef)[img CGImage];
            CVPixelBufferRef pixelBuffer = NULL;
            CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image));
            cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                 FRAME_WIDTH2,
                                                 FRAME_HEIGHT2,
                                                 kCVPixelFormatType_32BGRA,
                                                 (void *)CFDataGetBytePtr(imageData),
                                                 CGImageGetBytesPerRow(image),
                                                 NULL,
                                                 NULL,
                                                 NULL,
                                                 &pixelBuffer);
            //CFRelease(imageData);
            //CGImageRelease(image); // I can't call this: I am not creating the image, and the
            // documentation says releasing it is not my responsibility. If I call it, the
            // application crashes immediately.

            // calculate the time
            CFAbsoluteTime thisFrameWallClockTime = CFAbsoluteTimeGetCurrent();
            CFTimeInterval elapsedTime = thisFrameWallClockTime - firstFrameWallClockTime;
            CMTime presentationTime = CMTimeMake(elapsedTime * TIME_SCALE, TIME_SCALE); // TIME_SCALE: project constant

            // write the sample
            BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                                        withPresentationTime:presentationTime];
            if (appended) {
                NSLog(@"appended sample at time %lf and keepTrackofappended is %d",
                      CMTimeGetSeconds(presentationTime), keepTrackofappended);
                keepTrackofappended++;
            } else {
                NSLog(@"failed to append");
                [self stopRecording];
                //self.startStopButton.selected = NO;
                screenRecord = false;
            }
        }
    } // stop capturing
}

I agree that you don't want to call CGImageRelease(image). This object was obtained by calling the CGImage method of a UIImage object, so ownership was not transferred; ARC still handles the memory management of your img object, and no release of the image object is needed.
But I think you do want to restore your CFRelease(imageData). This is an object created by CGDataProviderCopyData, so you own it and must clean it up.
I also think you have to release the pixelBuffer that you created with CVPixelBufferCreateWithBytes after you append it. You can use the CVPixelBufferRelease function for that.
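To make that concrete, here is a minimal sketch of where those releases would go, using the names from the question (this assumes the append call is the last place the buffer and the copied data are needed):

    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(image)); // Copy => you own it
    CVPixelBufferRef pixelBuffer = NULL;
    cvErr = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                         FRAME_WIDTH2, FRAME_HEIGHT2,
                                         kCVPixelFormatType_32BGRA,
                                         (void *)CFDataGetBytePtr(imageData),
                                         CGImageGetBytesPerRow(image),
                                         NULL, NULL, NULL,
                                         &pixelBuffer);                          // Create => you own it
    BOOL appended = [assetWriterPixelBufferAdaptor appendPixelBuffer:pixelBuffer
                                                withPresentationTime:presentationTime];
    CVPixelBufferRelease(pixelBuffer); // balances CVPixelBufferCreateWithBytes
    CFRelease(imageData);              // balances CGDataProviderCopyData
    // No CGImageRelease(image): -[UIImage CGImage] does not transfer ownership.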
The Core Foundation memory rule is that if the function has Copy or Create in the name, you own that object and are responsible for releasing it. See the Create Rule in the Memory Management Programming Guide for Core Foundation.
I would have thought that the static analyzer (shift+command+B, or "Analyze" from the Xcode "Product" menu) would have identified this issue, as it has gotten much better at finding Core Foundation memory issues (albeit not perfect).
Alternatively, if you run your app through the Leaks tool in Instruments (which will also show you the Allocations tool at the same time), you can take a look at your memory usage. While the video capture requires a lot of Live Bytes, in my experience it stays pretty darn flat. If it's growing, you have a leak somewhere.

Related

CIImage and CIDetector use with AVCaptureOutput memory leak

I'm using a CIContext, CIDetector, and CIImage to detect rectangles in a vImage_Buffer derived from samples in captureOutput:didOutputSampleBuffer:fromConnection:. It seems that either the detector or the CIImage is retaining memory that cannot be released.
Here is the code in question. Skipping over this code keeps memory constant; otherwise it increases until the app crashes:
// ...rotatedBuf and format managed outside scope
// Use a CIDetector to find any potential subslices to process
CVPixelBufferRef cvBuffer;
vImageCVImageFormatRef cvFormat = vImageCVImageFormat_CreateWithCVPixelBuffer(pixelBuffer);
CVPixelBufferCreate(kCFAllocatorSystemDefault, rotatedBuf.width, rotatedBuf.height, kCVPixelFormatType_32BGRA, NULL, &cvBuffer);
CVPixelBufferLockBaseAddress(cvBuffer, kCVPixelBufferLock_ReadOnly);
err = vImageBuffer_CopyToCVPixelBuffer(&rotatedBuf, &format, cvBuffer, cvFormat, NULL, kvImageNoFlags);
CVPixelBufferUnlockBaseAddress(cvBuffer, kCVPixelBufferLock_ReadOnly);

if (![self vImageDidError:err]) {
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:cvBuffer];
    NSArray *feats = [self.ciDetector featuresInImage:ciImage options:nil];
    if (feats && [feats count]) {
        for (CIFeature *feat in feats) {
            // The frame is currently in image space, so we must convert it to a
            // unitless space like the other rects.
            CGRect frame = feat.bounds;
            CGRect clip = CGRectMake(frame.origin.x / rotatedBuf.width,
                                     frame.origin.y / rotatedBuf.height,
                                     frame.size.width / rotatedBuf.width,
                                     frame.size.height / rotatedBuf.height);
            rects = [rects arrayByAddingObject:[NSValue valueWithCGRect:clip]];
        }
    }
}
CVPixelBufferRelease(cvBuffer);
vImageCVImageFormat_Release(cvFormat);
Other answers seem to suggest wrapping the code in an autorelease pool or creating a new CIDetector for each frame, but neither change affects the memory use:
CIDetector isn't releasing memory
CIDetector won't release memory - swift
Edit: switching the processing to a dispatch queue other than dispatch_main_queue seems to have cleared the memory issue while keeping the UI responsive.
I figured out a different solution: in my case I was running all my processing on the main dispatch queue, and what fixed the situation was creating a new queue to run the processing on. I suspected this when I noticed that the majority of my CPU time was spent in the call to featuresInImage:options:. It doesn't explain what caused the memory issue, but now that I'm running on a separate queue, memory stays nice and constant.
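For illustration, a minimal sketch of that change, assuming the frames come from an AVCaptureVideoDataOutput (the queue label is a placeholder): give the delegate a private serial queue instead of the main queue, so captureOutput:didOutputSampleBuffer:fromConnection: and the CIDetector work run off the main thread:

    // A private serial queue for frame processing
    dispatch_queue_t videoQueue = dispatch_queue_create("com.example.videoprocessing", DISPATCH_QUEUE_SERIAL);

    // Deliver sample buffers on that queue instead of dispatch_get_main_queue()
    [videoDataOutput setSampleBufferDelegate:self queue:videoQueue];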

CGImageDestinationFinalize or UIImageJPEGRepresentation - Crash when saving a large file on iOS 10

I am trying to create tiles from a larger image, and it seems that as of iOS 10 the following code no longer works; it crashes with EXC_BAD_ACCESS.
This happens on iOS 10 devices only; iOS 9 works fine.
The crash happens with any image larger than roughly 1300x1300.
Profiling in Instruments doesn't yield anything interesting; everything points to CGImageDestinationFinalize.
There is no memory spike.
I tried both of the approaches below:
UIImage* tempImage = [UIImage imageWithCGImage:tileImage];
NSData* imageData = UIImageJPEGRepresentation(tempImage, 0.8f); // CRASH HERE.
OR
+ (BOOL)CGImageWriteToFile:(CGImageRef)image andPath:(NSString *)path
{
    CFURLRef url = (__bridge CFURLRef)[NSURL fileURLWithPath:path];
    CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypeJPEG, 1, NULL);
    if (!destination) {
        NSLog(@"Failed to create CGImageDestination for %@", path);
        return NO;
    }
    CGImageDestinationAddImage(destination, image, nil);
    if (!CGImageDestinationFinalize(destination)) { // CRASH HERE
        NSLog(@"Failed to write image to %@", path);
        CFRelease(destination);
        return NO;
    }
    CFRelease(destination);
    return YES;
}
As I understand it, UIImageJPEGRepresentation in turn calls CGImageDestinationFinalize eventually.
Edit: I forgot to mention that the images are written to disk, but only about half way through.
This really does seem like something to do with iOS 10; could this be a bug?
I have also filed a bug with Apple just in case.
Any help or workarounds for writing out large images would be greatly appreciated.
I am going to answer my own question here.
What I have gathered is that iOS 10 has trouble saving a file whose color mode is grayscale.
So if you call
UIImageJPEGRepresentation(tempImage, 0.8f)
and tempImage is grayscale, you will get a crash.
I was processing many images, and it didn't click until later that the crash could be related to the color mode of the image.
I have already filed a bug with Apple to see what they say, but meanwhile I had to find a temporary fix.
My fix was to convert the image to RGB by drawing it and capturing the result as a new image.
Sample method:
+ (UIImage *)convertImage:(UIImage *)sourceImage
{
    UIGraphicsBeginImageContext(sourceImage.size);
    [sourceImage drawInRect:CGRectMake(0, 0, sourceImage.size.width, sourceImage.size.height)];
    UIImage *targetImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return targetImage;
}
Hopefully this helps someone.
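If you only want to pay the redraw cost for the problematic images, one possibility (a sketch, not part of the original answer) is to check the color model of the underlying CGImage first and convert only grayscale images:

    CGColorSpaceRef colorSpace = CGImageGetColorSpace(tempImage.CGImage);
    if (CGColorSpaceGetModel(colorSpace) == kCGColorSpaceModelMonochrome) {
        // Grayscale source: redraw into an RGB-backed context before encoding
        tempImage = [self convertImage:tempImage];
    }
    NSData *imageData = UIImageJPEGRepresentation(tempImage, 0.8f);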
tempImage is one of the parameters you passed to UIImageJPEGRepresentation; if it points to a dead object, that will cause a crash when UIImageJPEGRepresentation tries to use it. Check whether tempImage is nil.
Check whether you are adding an image to the destination after CGImageDestinationFinalize has already been called.

How to release memory quickly inside receiving method?

In my iPhone app, I have a large image that I've cached to disk, and I retrieve it just before handing it to a class that does a lot of processing on it. The receiving class needs the image only briefly for some initialization, and because the image processing code is very memory intensive, I want to release the memory the image occupies as soon as possible, but I don't know how.
It looks something like this:
// inside viewController
- (void)pressedRender
{
    UIImage *imageToProcess = [[EGOCache globalCache] imageForKey:@"reallyBigImage"];
    UIImage *finalImage = [frameBuffer renderImage:imageToProcess];
    // save the image
}

// inside frameBuffer class
- (UIImage *)renderImage:(UIImage *)startingImage
{
    CGContextRef context = CGBitmapContextCreate(....);
    CGContextDrawImage(context, rect, startingImage.CGImage);
    // at this point, I no longer need the image
    // and would like to release the memory it's taking up

    // lots of image processing/memory usage here...

    // return the processed image
    CGImageRef tmpImage = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    UIImage *renderedImage = [UIImage imageWithCGImage:tmpImage];
    CGImageRelease(tmpImage);
    return renderedImage;
}
This may be obvious, but I'm missing something. Thank you.
@Jonah.at.GoDaddy is on the right track, but I would make all of this more explicit rather than relying on ARC optimizations. ARC is much less aggressive in debug mode, so your memory usage may climb too high while you're debugging unless you take steps.
UIImage *imageToProcess = [[EGOCache globalCache] imageForKey:@"reallyBigImage"];
First, I'm going to assume that imageForKey: does not cache anything itself, and does not call imageNamed: (which caches things).
The key is that you need to nil your pointer when you want the memory to go away. That's going to be very hard if you pass the image from one place to another (which Jonah's solution also fixes). Personally, I'd probably do something like this to get from image to context as fast as I can:
CGContextRef CreateContextForImage(UIImage *image) {
    CGContextRef context = CGBitmapContextCreate(....);
    CGContextDrawImage(context, rect, image.CGImage);
    return context;
}

- (void)pressedRender {
    CGContextRef context = NULL;

    // I'm adding an @autoreleasepool here just in case there are some extra
    // autoreleases attached by imageForKey: (which it's free to do). It also nicely
    // bounds the references to imageToProcess.
    @autoreleasepool {
        UIImage *imageToProcess = [[EGOCache globalCache] imageForKey:@"reallyBigImage"];
        context = CreateContextForImage(imageToProcess);
    }
    // The image should be gone now; there is no reference to it in scope.

    UIImage *finalImage = [frameBuffer renderImageForContext:context];
    CGContextRelease(context);
    // save the image
}

// inside frameBuffer class
- (UIImage *)renderImageForContext:(CGContextRef)context
{
    // lots of memory usage here...
    return renderedImage;
}
For debugging, you can make sure the UIImage is really going away by adding an associated watcher to it. See the accepted answer to "How to enforce using `-retainCount` method and `-dealloc` selector under ARC?" (the answer has little to do with that question; it just happens to address the same technique, which you might find useful here).
You can autorelease objects right away within a single method. I think you need to handle the "big image" processing within one method so you can use @autoreleasepool:
- (void)myMethod {
    // do something
    @autoreleasepool {
        // do your heavy image processing and free the memory right away
    }
    // do something
}

Memory increases when merging multiple high resolution images into single image, iOS

I have to merge multiple images into a single image (all of them high resolution), which consumes a lot of memory. I save the original images to a local directory and set resized versions on image views placed at different locations on the main image. When saving the final merged image, I read the original images back from the local directory; at that point memory use increases, which causes a crash when merging a larger number of images.
Here is the code that retrieves an original image from the local directory:
UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
Is there any other way to get images from the local directory without loading them into memory?
Thanks in advance
There is no way to load an image without it going into memory. With some image formats you could, in theory, implement your own reader that scales the image down while reading the file, so that the original size never ends up in memory, but that would require a lot of work for little gain.
Overall you would be better off just saving the different sizes of images as separate files and loading only the correct size (you seem to be scaling them based on the screen size, so there are not that many different versions required).
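For what it's worth, Image I/O can do that scaled decode for you, so the full-size bitmap never has to exist in memory. A sketch (fileURL and maxPixelSize are placeholders):

    // Decode a downscaled version directly from the file on disk
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)fileURL, NULL);
    NSDictionary *options = @{ (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
                               (id)kCGImageSourceCreateThumbnailWithTransform   : @YES,
                               (id)kCGImageSourceThumbnailMaxPixelSize          : @(maxPixelSize) };
    CGImageRef scaledRef = CGImageSourceCreateThumbnailAtIndex(source, 0, (__bridge CFDictionaryRef)options);
    UIImage *scaledImage = [UIImage imageWithCGImage:scaledRef];
    CGImageRelease(scaledRef); // "Create" in the name: we own it
    CFRelease(source);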
If you do keep resizing them on the fly, try to ensure that you get rid of the original versions as soon as possible, i.e., don't keep any image reference around longer than required, and perhaps wrap the whole thing in @autoreleasepool (assuming ARC is being used):
@autoreleasepool {
    UIImage *originalImage = [UIImage imageWithContentsOfFile:[self getOriginalImagePath:imageview.tag]];
    UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
    originalImage = nil;
    imageView.image = pThumbImage;
    pThumbImage = nil;
    // … ?
}
Similarly, treat any other image handling that creates intermediate versions the same way: get rid of references that are no longer required as soon as possible (by assigning nil or letting them fall out of scope), and put @autoreleasepool { … } around subsections that may generate temporary objects.
I found a solution; I am posting it as an answer to my own question since it might help other people. (Reference: the Image I/O Programming Guide.)
As an alternative to imageWithContentsOfFile:, one can use an image source.
Here is how I use it:
UIImage *originalWMImage = [self createCGImageFromFile:your-image-path];
The method createCGImageFromFile: gets the image content without loading it all into memory:
- (UIImage *)createCGImageFromFile:(NSString *)path
{
    // Get the URL for the pathname passed to the function.
    NSURL *url = [NSURL fileURLWithPath:path];
    CGImageRef myImage = NULL;
    CGImageSourceRef myImageSource;
    CFDictionaryRef myOptions = NULL;
    CFStringRef myKeys[2];
    CFTypeRef myValues[2];

    // Set up options if you want them. The options here are for
    // caching the image in a decoded form and for using floating-point
    // values if the image format supports them.
    myKeys[0] = kCGImageSourceShouldCache;
    myValues[0] = (CFTypeRef)kCFBooleanTrue;
    myKeys[1] = kCGImageSourceShouldAllowFloat;
    myValues[1] = (CFTypeRef)kCFBooleanTrue;

    // Create the dictionary
    myOptions = CFDictionaryCreate(NULL, (const void **)myKeys,
                                   (const void **)myValues, 2,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);

    // Create an image source from the URL.
    myImageSource = CGImageSourceCreateWithURL((CFURLRef)url, myOptions);
    CFRelease(myOptions);

    // Make sure the image source exists before continuing
    if (myImageSource == NULL) {
        fprintf(stderr, "Image source is NULL.");
        return nil;
    }

    // Create an image from the first item in the image source.
    myImage = CGImageSourceCreateImageAtIndex(myImageSource, 0, NULL);
    CFRelease(myImageSource);

    // Make sure the image exists before continuing
    if (myImage == NULL) {
        fprintf(stderr, "Image not created from image source.");
        return nil;
    }

    UIImage *result = [UIImage imageWithCGImage:myImage];
    CGImageRelease(myImage); // "Create" in the name: we own it, so release after wrapping
    return result;
}
Here is the code that resizes an image, which is simply assigned to the image view; I then perform scaling and rotation on the image view:
UIImage *pThumbImage = [self scaleImageToSize:CGSizeMake(AppScreenBound.size.width, AppScreenBound.size.height) imageWithImage:originalImage];
[imageView setImage:pThumbImage];
Here is the saving code; it runs inside a for loop (once for each image to merge onto the main image):
// get the size of the background image
CGFloat backgroundWidth = canvasSize.width;
CGFloat backgroundHeight = canvasSize.height;

// Image view to be merged
UIImageView *imageView = [[UIImageView alloc] initWithImage:stampImage];
[imageView setFrame:CGRectMake(0, 0, stampFrameSize.size.width, stampFrameSize.size.height)];

// Rotate the image view
CGAffineTransform currentTransform = imageView.transform;
CGAffineTransform newTransform = CGAffineTransformRotate(currentTransform, radian);
[imageView setTransform:newTransform];

// Scale the image view
CGRect imageFrame = [imageView frame];

// Create the final stamp view
UIView *finalStamp = [[UIView alloc] initWithFrame:CGRectMake(0, 0, imageFrame.size.width, imageFrame.size.height)];

// Center the stamp image in the stamp view and add it
[imageView setCenter:CGPointMake(imageFrame.size.width / 2, imageFrame.size.height / 2)];
[finalStamp addSubview:imageView];

// Create an image from the stamp view
UIGraphicsBeginImageContext(finalStamp.frame.size);
[finalStamp.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Create the final image with the stamp
UIGraphicsBeginImageContext(CGSizeMake(backgroundWidth, backgroundHeight));
[canvasImage drawInRect:CGRectMake(0, 0, backgroundWidth, backgroundHeight)];
[viewImage drawInRect:CGRectMake(stampFrameSize.origin.x, stampFrameSize.origin.y, stampFrameSize.size.width, stampFrameSize.size.height) blendMode:kCGBlendModeNormal alpha:fAlphaValue];
UIImage *pfinalMainImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
} // end of per-image loop
Everything is okay up to here; the problem occurs while saving or generating the merged image.
This is an old question, but I had to face something like it recently, so here is my answer.
I had to merge a lot of images into one and had the same problem: memory increased until the app crashed. The functions I had created returned UIImage objects, and that was the problem; ARC was not releasing them in time. I changed them to return CGImageRef instead and released those explicitly at the proper time.
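A minimal sketch of that pattern (the function name and drawing code are illustrative, not from the original answer): the caller owns the returned CGImageRef and releases it as soon as it has been consumed, instead of leaving an autoreleased UIImage alive until the pool drains:

    CGImageRef CreateMergedImage(UIImage *background, UIImage *stamp, CGRect stampRect)
    {
        UIGraphicsBeginImageContext(background.size);
        [background drawInRect:CGRectMake(0, 0, background.size.width, background.size.height)];
        [stamp drawInRect:stampRect];
        CGImageRef merged = CGBitmapContextCreateImage(UIGraphicsGetCurrentContext());
        UIGraphicsEndImageContext();
        return merged; // "Create" in the name: the caller owns this and must CGImageRelease it
    }

    // caller, inside the merge loop:
    CGImageRef merged = CreateMergedImage(canvasImage, stampImage, stampFrameSize);
    // ...use merged (e.g., write it out with an image destination)...
    CGImageRelease(merged); // released deterministically; no autorelease pool involved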

Crash running OpenGL on iOS after memory warning

I am having trouble with an app with an OpenGL component crashing on iPad. The app throws a memory warning and crashes, but it doesn't appear to be using that much memory. Am I missing something?
The app is based on the Vuforia augmented reality system (it borrows heavily from the ImageTargets sample). I have about 40 different models I need to include in my app, so in the interest of memory conservation I load the objects (and render textures, etc.) dynamically as I need them, copying the UIScrollView lazy-loading idea. The three 4 MB allocations are the textures I have loaded into memory, ready for when the user selects a different model to display.
Anything odd in here?
I don't know much at all about OpenGL (part of the reason why I chose the Vuforia engine). Is there anything in the screenshot below that should concern me? Note that Vuforia's ImageTargets sample app also has Uninitialized Texture Data (about one per frame), so I don't think that is the problem.
Any help would be appreciated!!
Here is the code that generates the 3D objects (in EAGLView):
// Load the textures for use by OpenGL
-(void)loadATexture:(int)texNumber {
if (texNumber >= 0 && texNumber < [tempTextureList count]) {
currentlyChangingTextures = YES;
[textureList removeAllObjects];
[textureList addObject:[tempTextureList objectAtIndex:texNumber]];
Texture *tex = [[Texture alloc] init];
NSString *file = [textureList objectAtIndex:0];
[tex loadImage:file];
[textures replaceObjectAtIndex:texNumber withObject:tex];
[tex release];
// Remove all old textures outside of the one we're interested in and the two on either side of the picker.
for (int i = 0; i < [textures count]; ++i) {
if (i < targetIndex - 1 || i > targetIndex + 1) {
[textures replaceObjectAtIndex:i withObject:#""];
}
}
// Render - Generate the OpenGL texture objects
GLuint nID;
Texture *texture = [textures objectAtIndex:texNumber];
glGenTextures(1, &nID);
[texture setTextureID: nID];
glBindTexture(GL_TEXTURE_2D, nID);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, [texture width], [texture height], 0, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid*)[texture pngData]);
// Set up objects using the above textures.
Object3D *obj3D = [[Object3D alloc] init];
obj3D.numVertices = rugNumVerts;
obj3D.vertices = rugVerts;
obj3D.normals = rugNormals;
obj3D.texCoords = rugTexCoords;
obj3D.texture = [textures objectAtIndex:texNumber];
[objects3D replaceObjectAtIndex:texNumber withObject:obj3D];
[obj3D release];
// Remove all objects except the one currently visible and the ones on either side of the picker.
for (int i = 0; i < [tempTextureList count]; ++i) {
if (i < targetIndex - 1 || i > targetIndex + 1) {
Object3D *obj3D = [[Object3D alloc] init];
[objects3D replaceObjectAtIndex:i withObject:obj3D];
[obj3D release];
}
}
if (QCAR::GL_20 & qUtils.QCARFlags) {
[self initShaders];
}
currentlyChangingTextures = NO;
}
}
Here is the code in the textures object.
- (id)init
{
self = [super init];
pngData = NULL;
return self;
}
- (BOOL)loadImage:(NSString*)filename
{
BOOL ret = NO;
// Build the full path of the image file
NSString* resourcePath = [[NSBundle mainBundle] resourcePath];
NSString* fullPath = [resourcePath stringByAppendingPathComponent:filename];
// Create a UIImage with the contents of the file
UIImage* uiImage = [UIImage imageWithContentsOfFile:fullPath];
if (uiImage) {
// Get the inner CGImage from the UIImage wrapper
CGImageRef cgImage = uiImage.CGImage;
// Get the image size
width = CGImageGetWidth(cgImage);
height = CGImageGetHeight(cgImage);
// Record the number of channels
channels = CGImageGetBitsPerPixel(cgImage)/CGImageGetBitsPerComponent(cgImage);
// Generate a CFData object from the CGImage object (a CFData object represents an area of memory)
CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
// Copy the image data for use by Open GL
ret = [self copyImageDataForOpenGL: imageData];
CFRelease(imageData);
}
return ret;
}
- (void)dealloc
{
if (pngData) {
delete[] pngData;
}
[super dealloc];
}
#end
#implementation Texture (TexturePrivateMethods)
- (BOOL)copyImageDataForOpenGL:(CFDataRef)imageData
{
if (pngData) {
delete[] pngData;
}
pngData = new unsigned char[width * height * channels];
const int rowSize = width * channels;
const unsigned char* pixels = (unsigned char*)CFDataGetBytePtr(imageData);
// Copy the row data from bottom to top
for (int i = 0; i < height; ++i) {
memcpy(pngData + rowSize * i, pixels + rowSize * (height - 1 - i), width * channels);
}
return YES;
}
Odds are, you're not seeing the true memory usage of your application. As I explain in this answer, the Allocations instrument hides memory usage from OpenGL ES, so you can't use it to measure the size of your application. Instead, use the Memory Monitor instrument, which I'm betting will show that your application is using far more RAM than you think. This is a common problem people run into when trying to optimize OpenGL ES on iOS using Instruments.
If you're concerned about which objects or resources could be accumulating in memory, you can use the heap shots functionality of the Allocations instrument to identify specific resources that are allocated but never removed when performing repeated tasks within your application. That's how I've tracked down textures and other items that were not being properly deleted.
Seeing some code would help, but I can make some guesses:
I have about 40 different models I need to include in my app, so in the interests of memory conservation I am loading the objects (and rendering textures etc) dynamically in the app as I need them. I tried to copy the UIScrollView lazy loading idea. The three 4mb allocations are the textures I have loaded into memory ready for when the user selects a different model to display.
(...)
This kind of approach is not ideal, and it's most likely the reason for your problems if the memory is not properly deallocated. Eventually you'll run out of memory, and then your process dies if you don't take proper precautions. It's very likely that the engine you're using has some memory leak that your access pattern exposes.
Today's operating systems don't differentiate between RAM and storage. To them it's all just memory, and all address space is backed by the block storage system anyway (whether an actual storage device is attached doesn't matter).
So here's what you should do: instead of reading your models into memory, memory-map them (mmap). This tells the OS "this part of storage should be visible in address space", and the OS kernel will perform the necessary transfers when they're due.
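On iOS, a simple way to get that behavior (a sketch; modelPath is a placeholder) is NSData's mapped-read option, which backs the buffer with the file instead of copying it into RAM up front:

    NSError *error = nil;
    NSData *modelData = [NSData dataWithContentsOfFile:modelPath
                                               options:NSDataReadingMappedIfSafe
                                                 error:&error];
    // Pages are faulted in from storage on demand and can be evicted again under
    // memory pressure, so the whole model never has to be resident at once.
    const void *modelBytes = [modelData bytes];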
Note that Vuforia's ImageTargets sample app also has Uninitialized Texture Data (about one per frame), so I don't think this is the problem.
This is a strong indicator that OpenGL texture objects are not getting properly deleted.
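That matches the loadATexture: code above: glGenTextures is called on every load, but nothing ever calls glDeleteTextures for the textures being evicted, so the pixel data lives on in the GL driver even after the Objective-C wrapper is discarded. A sketch of the missing cleanup (assuming Texture exposes its GL name via textureID):

    // Before replacing a Texture object, delete its GL texture object as well.
    GLuint oldID = [oldTexture textureID];
    if (oldID != 0) {
        glDeleteTextures(1, &oldID);
        [oldTexture setTextureID:0];
    }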
Any help would be appreciated!!
My advice: stop programming like it's the 1970s. Today's computers and operating systems work differently. See also http://www.varnish-cache.org/trac/wiki/ArchitectNotes
