Core Graphics raster data not releasing from memory - iOS

So I'm getting my app to take a screenshot and save it to the photo album with the code below...
- (void)save {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(theImage, nil, NULL, NULL);
    NSData *theImageData = UIImageJPEGRepresentation(theImage, 1.0);
    [theImageData writeToFile:@"image.jpeg" atomically:YES];
}
How can I release the memory allocated by Core Graphics that is holding the screenshot raster data?
My project uses ARC for memory management. When profiling how the app allocates memory, I've noticed that the memory is not released after taking the screenshot, causing the app to grow sluggish over time. The Allocation Summary in Instruments tells me the data category is 'CG raster data' and the responsible caller is 'CGDataProviderCreateWithCopyOfData'.
Is CFRelease() the solution here?
This is my first app, so I'm pretty new at this; I've looked around the internet trying to resolve the issue, with no luck...

You could try wrapping the contents of your method in an @autoreleasepool block.
@autoreleasepool {
    ...
}
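For example, a minimal sketch of the question's save method with the body wrapped that way (same method and property names as above; only the photo-album save is shown):
- (void)save {
    @autoreleasepool {
        UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
        [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        UIImageWriteToSavedPhotosAlbum(theImage, nil, NULL, NULL);
    } // theImage (and its raster data) becomes eligible for release when the pool drains
}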

Related

Why is the UIImage not released by ARC when I use UIGraphicsGetImageFromCurrentImageContext inside a block

I'm trying to download an image from a server using NSURLSessionDownloadTask (an iOS 7 API), and inside the completion block I want the original image to be resized and stored locally. So I wrote a helper method that creates the bitmap context, draws the image, and gets the new image from UIGraphicsGetImageFromCurrentImageContext(). The problem is that the image is never released each time I do this. However, if I skip the context and image drawing, things work fine and there is no memory growth. There is no CGImageCreate/Release function called, so there is really nothing to release manually, and nothing is fixed by adding @autoreleasepool here. Is there any way to fix this? I really want to modify the original image after downloading it and before storing it.
Here are some snippets showing the issue:
[self fetchImageByDownloadTaskWithURL:url completion:^(UIImage *image, NSError *error) {
    UIImage *modifiedImage = [image resizedImageScaleAspectFitToSize:imageView.frame.size];
    // save to local disk
    // ...
}];
// This is the resize method in the UIImage category
- (UIImage *)resizedImageScaleAspectFitToSize:(CGSize)size
{
    CGSize imageSize = [self scaledSizeForAspectFitToSize:size];
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
    CGRect imageRect = CGRectMake(0.0, 0.0, imageSize.width, imageSize.height);
    [self drawInRect:imageRect]; // nothing changes if self is captured as weakSelf
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Updates:
When I dig into this with the Allocations instrument, I find that the memory growth is related to "VM: CG raster data". In my storing method I use an NSCache as a photo memory-cache option before storing persistently, and the raster data eats a lot of memory when I use the memory cache. It seems that after the rendered image is cached, all of its drawing data also stays alive in memory until I release all the cached images. If I don't memory-cache the image, then none of the raster data coming from my image category method stays alive in memory. I just can't figure out why the drawing data is not released after the image is cached. Shouldn't it be released after drawing?
New updates:
I still haven't figured out why the raster data is not released while the image used for drawing is alive, and there is certainly no analyzer warning about it. So I guess I just have to avoid caching the huge image that is drawn to fit the large size, and remove cached drawing images when I no longer need them. If I call [UIImage imageNamed:] and draw with the result, it seems the image and its raster data are never released, because the image is cached by the system. So I call [UIImage imageWithContentsOfFile:] instead, and with that the memory performs well. The other memory growth shows up as "non-object" in the Allocations instrument, which I can't explain yet. Simulating a memory warning does release the system-cached images created by [UIImage imageNamed:]. As for the raster data, I will run some more tests tomorrow and see.
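(As a hedged aside on the caching point above, not from the original thread: if the memory cache is an NSCache, bounding it and purging it under pressure gives the decoded raster data a chance to be released. The photoCache, image and imageKey names are assumptions for illustration.)
// Sketch only: photoCache stands in for the NSCache used by the storing method above.
NSCache *photoCache = [[NSCache alloc] init];
photoCache.countLimit = 20;                    // keep at most 20 decoded images alive
photoCache.totalCostLimit = 50 * 1024 * 1024;  // cap at roughly 50 MB of bitmap data

// Charge each image roughly what its decompressed bitmap costs:
NSUInteger cost = (NSUInteger)(image.size.width * image.scale * image.size.height * image.scale * 4);
[photoCache setObject:image forKey:imageKey cost:cost];

// On a memory warning, drop everything so the raster data can go away:
[photoCache removeAllObjects];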
Try making your category method a class method instead. Perhaps the leak is the original CGImage data which you are overwriting when you call [self drawInRect:imageRect];.
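A minimal sketch of that suggestion, assuming the aspect-fit math is inlined (the method name is illustrative, and the original scaledSizeForAspectFitToSize: helper isn't shown in the question):
// Hedged sketch: class-method variant, so the receiver's own bitmap is not self-drawn.
+ (UIImage *)resizedImage:(UIImage *)sourceImage scaleAspectFitToSize:(CGSize)size
{
    CGFloat ratio = MIN(size.width / sourceImage.size.width,
                        size.height / sourceImage.size.height);
    CGSize imageSize = CGSizeMake(sourceImage.size.width * ratio,
                                  sourceImage.size.height * ratio);

    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
    [sourceImage drawInRect:CGRectMake(0.0, 0.0, imageSize.width, imageSize.height)];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}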

How to fix this memory leak while drawing on a UIImage?

Addendum to the question below.
We have traced the growth in allocated memory to an NSMutableArray which points at a list of UIImages. The NSMutableArray is local to a method. It has no outside pointers, strong or weak, pointing at it. Because the NSMutableArray is local to a method, shouldn't it - and all the objects it points at - be automatically deallocated at some point after the method returns?
How do we ensure that happens?
=================
(1) First, does calling this code cause a memory leak or should we be looking elsewhere?
(It appears to us that this code does leak: when we look at Apple's Instruments, running this code seems to create a string of 1.19MB mallocs from CVPixelBuffer, and skipping the code avoids that. Additionally, the malloc allocation size continually creeps up across the execution cycle and never seems to be reclaimed. Adding an @autoreleasepool block decreased peak memory use and helped keep the app from crashing for longer, but there is a steady increase in baseline memory use, with the biggest culprit being these 1.19MB mallocs.) image2 is an existing UIImage.
image2 = [self imageByDrawingCircleOnImage:image2 withX:newX withY:newY withColor:color];
- (UIImage *)imageByDrawingCircleOnImage:(UIImage *)image withX:(int)x withY:(int)y withColor:(UIColor *)color
{
    UIGraphicsBeginImageContext(image.size);
    [image drawAtPoint:CGPointZero];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [color setStroke];
    CGRect shape = CGRectMake(x - 10, y - 10, 20, 20);
    shape = CGRectInset(shape, 0, 0);
    CGContextStrokeEllipseInRect(ctx, shape);
    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return retImage;
}
(2) Second, if this code does leak, how do we prevent the leak and, more importantly, prevent a crash from a memory shortage when we call this method multiple times in rapid succession? We notice that memory use surges as we call this method repeatedly, which leads to a crash. The question is how to ensure the rapid freeing of the discarded UIImages so that the app doesn't crash from a shortage of memory while calling this method many times.
running this code seems to create a string of 1.19MB mallocs from CVPixelBuffer
But do not make the mistake of calling memory use a memory leak. It's a leak only if the used memory can never be reclaimed. You have not proved that.
Lots of operations use memory — but that doesn't matter if the operation is performed once, because then your code ends and the memory is reclaimed.
Issues arise only if your code keeps going, possibly looping so that there is never a chance for the memory to be reclaimed; and in that situation, you can provide such a chance by wrapping each iteration of the loop in an @autoreleasepool block.
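As a hedged sketch using the question's own method and variables (the loop itself and frameCount are hypothetical), each iteration would look like:
// Hypothetical loop; image2, newX, newY and color are from the question's code.
for (NSUInteger i = 0; i < frameCount; i++) {
    @autoreleasepool {
        image2 = [self imageByDrawingCircleOnImage:image2 withX:newX withY:newY withColor:color];
        // ... use or persist image2 here ...
    } // autoreleased bitmaps from this iteration are reclaimed when the pool drains
}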
We found the leak elsewhere. We needed to release a pixel buffer. We were getting a pixel buffer from a CGImage and adding the buffer to an AVAssetWriterInputPixelBufferAdaptor - but it was never released.
After this code which created the buffer:
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, 480,
                                      640, kCVPixelFormatType_32ARGB,
                                      (__bridge CFDictionaryRef) options,
                                      &pxbuffer);
...and this code which appended it to an AVAssetWriter:
[adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
...we needed to add this release code per this SO answer:
CVPixelBufferRelease(buffer);
After that code was added, the memory footprint of the app stayed constant.
Additionally, we added @autoreleasepool { } blocks at several points in the video-writing code, and the memory usage spikes flattened, which also stabilized the app.
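Putting those pieces together, a hedged sketch of the per-frame pattern (adaptor, options, presentTime and the drawing step are placeholders for the question's own video-writing code):
@autoreleasepool {
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, 480, 640,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef)options,
                                          &pxbuffer);
    if (status == kCVReturnSuccess && pxbuffer != NULL) {
        // ... render the frame into pxbuffer ...
        [adaptor appendPixelBuffer:pxbuffer withPresentationTime:presentTime];
        CVPixelBufferRelease(pxbuffer); // balances CVPixelBufferCreate
    }
} // autoreleased temporaries from this frame are freed here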
Our simple conclusion is that SO should get a Nobel prize.

drawViewHierarchyInRect:afterScreenUpdates: delays other animations

In my app, I use drawViewHierarchyInRect:afterScreenUpdates: in order to obtain a blurred image of my view (using Apple’s UIImage category UIImageEffects).
My code looks like this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
I noticed during development that many of my animations were delayed after using my app for a bit, i.e., my views were beginning their animations after a noticeable (but less than about a second) pause compared to a fresh launch of the app.
After some debugging, I noticed that the mere act of using drawViewHierarchyInRect:afterScreenUpdates: with screen updates set to YES caused this delay. If this message was never sent during a session of usage, the delay never appeared. Using NO for the screen updates parameter also made the delay disappear.
The strange thing is that this blurring code is completely unrelated (as far as I can tell) to the delayed animations. The animations in question do not use drawViewHierarchyInRect:afterScreenUpdates:, they are CAKeyframeAnimation animations. The mere act of sending this message (with screen updates set to YES) seems to have globally affected animations in my app.
What’s going on?
(I have created videos illustrating the effect: with and without an animation delay. Note the delay in the appearance of the "Check!" speech bubble in the navigation bar.)
UPDATE
I have created an example project to illustrate this potential bug. https://github.com/timarnold/AnimationBugExample
UPDATE No. 2
I received a response from Apple verifying that this is a bug. See answer below.
I used one of my Apple developer support tickets to ask Apple about my issue.
It turns out it is a confirmed bug (radar number 17851775). Their hypothesis for what is happening is below:
The method drawViewHierarchyInRect:afterScreenUpdates: performs its operations on the GPU as much as possible, and much of this work will probably happen outside of your app’s address space in another process. Passing YES as the afterScreenUpdates: parameter to drawViewHierarchyInRect:afterScreenUpdates: will cause Core Animation to flush all of its buffers in your task and in the rendering task. As you may imagine, there’s a lot of other internal stuff that goes on in these cases too. Engineering theorizes that it may very well be a bug in this machinery related to the effect you are seeing.
In comparison, the method renderInContext: performs its operations inside of your app’s address space and does not use the GPU based process for performing the work. For the most part, this is a different code path and if it is working for you, then that is a suitable workaround. This route is not as efficient as it does not use the GPU based task. Also, it is not as accurate for screen captures as it may exclude blurs and other Core Animation features that are managed by the GPU task.
And they also provided a workaround. They suggested that instead of:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
I should do this:
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
/* Use im */
Hopefully this is helpful for someone!
I tried all the latest snapshot methods using Swift. The other methods didn't work for me in the background, but taking the snapshot this way did.
Create an extension that takes the view's layer and bounds as parameters.
extension UIView {
    func asImage(viewLayer: CALayer, viewBounds: CGRect) -> UIImage {
        if #available(iOS 10.0, *) {
            let renderer = UIGraphicsImageRenderer(bounds: viewBounds)
            return renderer.image { rendererContext in
                viewLayer.render(in: rendererContext.cgContext)
            }
        } else {
            UIGraphicsBeginImageContext(viewBounds.size)
            viewLayer.render(in: UIGraphicsGetCurrentContext()!)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return UIImage(cgImage: image!.cgImage!)
        }
    }
}
Usage
DispatchQueue.main.async {
    let layer = self.selectedView.layer
    let bounds = self.selectedView.bounds
    DispatchQueue.global(qos: .background).async {
        let image = self.selectedView.asImage(viewLayer: layer, viewBounds: bounds)
    }
}
We need to grab the layer and bounds on the main thread; the remaining work can then run on a background thread. This gives a smooth user experience without any lag or interruption in the UI.
Why do you have this line (from your sample app):
animation.beginTime = CACurrentMediaTime();
Just remove it, and everything will be as you want it to be.
By setting the animation's begin time explicitly to CACurrentMediaTime() you ignore any time transformations that may be present in the layer tree. Either don't set it at all (by default, animations start now) or use the time conversion method:
animation.beginTime = [view.layer convertTime:CACurrentMediaTime() fromLayer:nil];
UIKit adds time transformations to the layer tree when you call afterScreenUpdates:YES to prevent jumps in ongoing animations that would otherwise be caused by intermediate Core Animation commits. If you want to start an animation at a specific time (not now), use the time conversion method mentioned above.
And while at it, strongly prefer -[UIView snapshotViewAfterScreenUpdates:] and friends over the -[UIView drawViewHierarchyInRect:] family (preferably specifying NO for the afterScreenUpdates part); a minimal usage sketch follows the list below. In most cases you don't really need a persistent image, and a view snapshot is what you actually want. Using a view snapshot instead of a rendered image has the following benefits:
2x-10x faster
Uses 2x-3x less memory
It will always use the correct colorspace and buffer format (e.g. on devices with a wide-color screen)
It will use the correct scale and orientation, so you don't need to think about how to position your image so it looks good.
It works better with accessibility features (e.g. Smart Invert Colors)
View snapshot will also capture out-of-process and secure views correctly (while drawViewHierarchyInRect will render them black or white).
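A minimal usage sketch, for the common case where an on-screen snapshot is enough and no bitmap is needed (blurring, as in the question, still requires an actual image):
// Snapshot the view hierarchy without rendering a UIImage.
UIView *snapshot = [self.view snapshotViewAfterScreenUpdates:NO];
snapshot.frame = self.view.bounds;
[self.view addSubview:snapshot];
// ... later: [snapshot removeFromSuperview];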
When the afterScreenUpdates parameter is set to YES, the system has to wait until all pending screen updates have happened before it can render the view.
If you're kicking off animations at the same time then perhaps the rendering and the animations are trying to happen together and this is causing a delay.
It may be worth experimenting with kicking off your animations slightly later to prevent this. Obviously not too much later because that would defeat the object, but a small dispatch_after interval would be worth trying.
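For example, a hedged sketch of that experiment (startAnimations is a hypothetical method standing in for whatever kicks off the CAKeyframeAnimations):
// Render the snapshot first, then start the animations a beat later.
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.1 * NSEC_PER_SEC)),
               dispatch_get_main_queue(), ^{
    [self startAnimations]; // hypothetical; replace with your own animation trigger
});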
Have you tried running your code on a background thread?
Here's an example using GCD:
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0), ^{
    // background thread
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, 0);
    [self.view drawViewHierarchyInRect:self.view.bounds afterScreenUpdates:YES];
    UIImage *im = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_sync(dispatch_get_main_queue(), ^(void) {
        // update UI on main thread
    });
});

Reducing memory usage with UIImagePickerController

In my app, the user can take multiple images using the UIImagePickerController, and those images are then displayed one by one in the view.
I've been having some trouble with memory management. With cameras on today's phones quickly rising in megapixels, UIImages returned from UIImagePickerController are memory hogs. On my iPhone 4S, the UIImages are around 5MB; I can hardly imagine what they're like on the newer and future models.
A friend of mine said that the best way to handle UIImages was to immediately save them to a JPEG file in my app's documents directory and to release the original UIImage as soon as possible. So this is what I've been trying to do. Unfortunately, even after saving the UIImage to a JPEG and leaving no references to it in my code, it is not being deallocated.
Here are the relevant sections of my code. I am using ARC.
// Entry point: UIImagePickerController delegate method
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // Process the image. The method returns a pathname.
    NSString *path = [self processImage:[info objectForKey:UIImagePickerControllerOriginalImage]];
    // Add the image to the view
    [self addImage:path];
}

- (NSString *)processImage:(UIImage *)image {
    // Get a file path
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *filename = [self makeImageFilename]; // implementation omitted
    NSString *imagePath = [documentsDirectory stringByAppendingPathComponent:filename];
    // Get the image data (blocking; around 1 second)
    NSData *imageData = UIImageJPEGRepresentation(image, 0.1);
    // Write the data to a file
    [imageData writeToFile:imagePath atomically:YES];
    // Upload the image (non-blocking)
    [self uploadImage:imageData withFilename:filename];
    return imagePath;
}

- (void)uploadImage:(NSData *)imageData withFilename:(NSString *)filename {
    // this sends the upload job (implementation omitted) to a thread
    // pool, which in this case is managed by PhoneGap
    [self.commandDelegate runInBackground:^{
        [self doUploadImage:imageData withFilename:filename];
    }];
}

- (void)addImage:(NSString *)path {
    // implementation omitted: make a UIImageView (set bounds, etc). Save it
    // in the variable iv.
    iv.image = [UIImage imageWithContentsOfFile:path];
    [iv setNeedsDisplay];
    NSLog(@"Displaying image named %@", path);
    self.imageCount++;
}
Notice how the processImage method takes a reference to a UIImage, but it uses it for only one thing: making the NSData* representation of that image. So, after the processImage method is complete, the UIImage should be released from memory, right?
What can I do to reduce the memory usage of my app?
Update
I now realize that a screenshot of the allocations profiler would be helpful for explaining this question.
Your processImage method is not your problem.
We can test your image-saving code by transplanting it into Apple's PhotoPicker demo app.
Conveniently, Apple's sample project is very similar to yours, with a method to take repeated pictures on a timer. In the sample, the images are not saved to the filesystem, but accumulated in memory. It comes with this warning:
/*
Start the timer to take a photo every 1.5 seconds.
CAUTION: for the purpose of this sample, we will continue to take pictures indefinitely.
Be aware we will run out of memory quickly. You must decide the proper threshold number of photos allowed to take from the camera.
One solution to avoid memory constraints is to save each taken photo to disk rather than keeping all of them in memory.
In low memory situations sometimes our "didReceiveMemoryWarning" method will be called in which case we can recover some memory and keep the app running.
*/
With your method added to Apple's code, we can address this issue.
The imagePicker delegate method looks like this:
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info
{
    UIImage *image = [info valueForKey:UIImagePickerControllerOriginalImage];
    [self.capturedImages removeAllObjects]; // (1)
    [self.imagePaths addObject:[self processImage:image]]; // (2)
    [self.capturedImages addObject:image];
    if ([self.cameraTimer isValid])
    {
        return;
    }
    [self finishAndUpdate]; // (3)
}
(1) - our addition, to flush the live memory on each image-capture event
(2) - our addition, to save the image to the filesystem and build a list of filesystem paths
(3) - for our tests we are using the cameraTimer to take repeated images, so finishAndUpdate does not get called
I have used your processImage: method as is, with the line:
[self uploadImage:imageData withFilename:filename];
commented out.
I have also added a small makeImageFileName method:
static int imageName = 0;

- (NSString *)makeImageFilename {
    imageName++;
    return [NSString stringWithFormat:@"%d.jpg", imageName];
}
These are the only additions I have made to Apple's code.
Here is the memory footprint of Apple's original code (cameraTimer run without (1) and (2))
Memory climbed to ~140MB after capture of ~40 images
Here is the memory footprint with the additions (cameraTimer run with (1) and (2))
The filesaving method is fixing the memory issue: memory is flat with spikes of ~30MB per image capture.
These tests were run on an iPhone 5S. Uncompressed images are 3264 x 2448 px, which should be around 24 MB (24-bit RGB). JPEG-compressed (filesystem) size ranges from 250 KB (0.1 quality, as per your code) through 1-2 MB (0.7 quality) up to ~6 MB (1.0 quality).
In a comment to your question, you suggest that a re-loaded image will benefit from that compression. This is not the case: when an image is loaded into memory it must first be uncompressed. Its memory footprint will be approximately pixels x colours x bit depth per colour, regardless of the way the image is stored on disk. As jrturton has pointed out, this at least suggests that you should avoid loading an image at a greater resolution than you need for display. Say you have a full-screen (retina) imageView of 832 x 640; you are wasting memory by loading an image larger than that if your user cannot zoom in. That's a live memory footprint of ~1.6 MB, a huge improvement on your 24 MB original (but this is a digression from your main issue).
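If you do need to re-load images for display, here is a hedged sketch of decoding at display size with ImageIO (the function name and the targetPixelSize parameter are illustrative, not from the question):
#import <ImageIO/ImageIO.h>

// Returns an image whose longest side is at most targetPixelSize, decoded from disk.
static UIImage *downsampledImageAtPath(NSString *imagePath, CGFloat targetPixelSize)
{
    NSURL *url = [NSURL fileURLWithPath:imagePath];
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)url, NULL);
    if (source == NULL) return nil;

    NSDictionary *options = @{
        (id)kCGImageSourceCreateThumbnailFromImageAlways : @YES,
        (id)kCGImageSourceCreateThumbnailWithTransform   : @YES,
        (id)kCGImageSourceThumbnailMaxPixelSize          : @(targetPixelSize)
    };
    CGImageRef thumbnail = CGImageSourceCreateThumbnailAtIndex(source, 0,
                                                               (__bridge CFDictionaryRef)options);
    CFRelease(source);
    if (thumbnail == NULL) return nil;

    UIImage *result = [UIImage imageWithCGImage:thumbnail];
    CGImageRelease(thumbnail);
    return result;
}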
As processImage doesn't seem to be the cause of your memory trouble, you should look at other possibilities:
1/ You don't have a memory issue. How are you profiling the app?
2/ One of addImage or uploadImage is retaining memory. Try commenting each out in turn to identify which.
3/ The problem is elsewhere (something managed by PhoneGap?)
As regards those memory spikes, these are caused by the image-to-data JPEG compression line:
NSData* imageData = UIImageJPEGRepresentation(image, 0.1);
Under the hood, that is ImageIO, and it is probably unavoidable when using UIImagePickerController. See here: Most memory efficient way to save a photo to disk on iPhone? If you switch to AVFoundation you can get at the image as unconverted NSData, so you can avoid the spike.
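A hedged sketch of that AVFoundation route (stillImageOutput is assumed to be an AVCaptureStillImageOutput already attached to a running AVCaptureSession, and imagePath is the destination file; neither is from the question's code):
// Grab JPEG NSData straight from the capture output; no UIImage is decoded in between.
AVCaptureConnection *connection = [stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
[stillImageOutput captureStillImageAsynchronouslyFromConnection:connection
                                               completionHandler:^(CMSampleBufferRef sampleBuffer, NSError *error) {
    if (sampleBuffer == NULL) {
        return;
    }
    NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentationForSampleBuffer:sampleBuffer];
    [jpegData writeToFile:imagePath atomically:YES];
}];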

renderInContext throw crash

I am rendering images from a web view, so the renderInContext: method is called more than 50 times in a for loop. After 20 or 30 iterations my app crashes because of excessive memory consumption.
I used this code:
UIGraphicsBeginImageContext(CGSizeMake([w floatValue], [h floatValue]));
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor blackColor] set];
CGContextFillRect(ctx, webview.frame);
[self.webview.layer renderInContext:ctx];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
After about 20 iterations it crashes. I need a solution.
Why does this occur? Does anyone know?
It sounds like you're creating lots of bitmap images in a tight loop. You need to save off the images you need (probably on disk instead of in memory if you need them all), and allow the images in memory to be autoreleased. Wrap the body of your loop in an @autoreleasepool block like:
for (whatever) {
    @autoreleasepool {
        // Work that makes big autoreleased objects.
    }
}
This way your memory consumption will not be out of control inside your loop. Again, you're still going to be allocating tons of memory if you make all these UIImage objects persist. Save the generated images to a temporary directory (or some other convenient place) on disk and fetch them individually as needed.
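A hedged sketch of that approach, combining the pool with a write to a temporary directory (webview comes from the question; pageCount and the file-name scheme are assumptions):
NSString *tempDir = NSTemporaryDirectory();
NSMutableArray *imagePaths = [NSMutableArray array];

for (NSUInteger i = 0; i < pageCount; i++) {
    @autoreleasepool {
        UIGraphicsBeginImageContextWithOptions(self.webview.bounds.size, YES, 0.0);
        [self.webview.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        NSString *path = [tempDir stringByAppendingPathComponent:
                             [NSString stringWithFormat:@"page-%lu.jpg", (unsigned long)i]];
        [UIImageJPEGRepresentation(image, 0.8) writeToFile:path atomically:YES];
        [imagePaths addObject:path];
    } // the rendered UIImage and its JPEG data are released when the pool drains
}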
