renderInContext crash - iOS

I am rendering images from a web view, so the renderInContext: method gets called more than 50 times in a for loop. After 20 or 30 iterations my app crashes because of excessive memory consumption.
I used this code:
UIGraphicsBeginImageContext(CGSizeMake([w floatValue], [h floatValue]));
CGContextRef ctx = UIGraphicsGetCurrentContext();
[[UIColor blackColor] set];
CGContextFillRect(ctx, webview.frame);
[self.webview.layer renderInContext:ctx];
image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
After about 20 iterations it crashes. Why does this happen, and how can I fix it?

It sounds like you're creating lots of bitmap images in a tight loop. You need to save off the images you need (probably on disk instead of in memory if you need them all), and allow the in-memory images to be autoreleased. Wrap the body of your loop in an @autoreleasepool block like:
for (whatever) {
    @autoreleasepool {
        // Work that makes big autoreleased objects.
    }
}
This way your memory consumption will not be out of control inside your loop. Again, you're still going to be allocating tons of memory if you make all these UIImage objects persist. Save the generated images to a temporary directory (or some other convenient place) on disk and fetch them individually as needed.
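For concreteness, here is a minimal sketch of that advice applied to the loop in the question, assuming a hypothetical pageCount and the webview property from the snippet; each rendered frame is written to the temporary directory, and the pool drains at the end of every iteration so only one bitmap stays alive at a time:

NSString *tmpDir = NSTemporaryDirectory();
for (NSInteger i = 0; i < pageCount; i++) { // pageCount is a hypothetical stand-in
    @autoreleasepool {
        UIGraphicsBeginImageContext(self.webview.bounds.size);
        CGContextRef ctx = UIGraphicsGetCurrentContext();
        [[UIColor blackColor] set];
        CGContextFillRect(ctx, self.webview.bounds);
        [self.webview.layer renderInContext:ctx];
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        // Persist to disk instead of keeping the UIImage in memory.
        NSString *path = [tmpDir stringByAppendingPathComponent:
                          [NSString stringWithFormat:@"page-%ld.png", (long)i]];
        [UIImagePNGRepresentation(image) writeToFile:path atomically:YES];
    } // image and its backing bitmap become collectible here
}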

Related

Why is the UIImage not released by ARC when I use UIGraphicsGetImageFromCurrentImageContext inside a block?

I'm downloading an image from a server using NSURLSessionDownloadTask (the iOS 7 API), and inside the completion block I want to resize the original image and store it locally. So I wrote a helper method that creates the bitmap context, draws the image, and gets the new image from UIGraphicsGetImageFromCurrentImageContext(). The problem is that the image is never released each time I do this. However, if I skip the context and image drawing, things work fine with no memory growth. There is no CGImageCreate/Release function called, so there is really nothing to release manually, and nothing is fixed by adding @autoreleasepool here. Is there any way to fix this? I really want to modify the original image after downloading it and before storing it.
Here is some snippets for the issue:
[self fetchImageByDownloadTaskWithURL:url completion:^(UIImage *image, NSError *error) {
    UIImage *modifiedImage = [image resizedImageScaleAspectFitToSize:imageView.frame.size];
    // save to local disk
    // ...
}];
// This is the resize method in a UIImage category
- (UIImage *)resizedImageScaleAspectFitToSize:(CGSize)size
{
    CGSize imageSize = [self scaledSizeForAspectFitToSize:size];
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
    CGRect imageRect = CGRectMake(0.0, 0.0, imageSize.width, imageSize.height);
    [self drawInRect:imageRect]; // nothing changes if I use a weak self here
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
Update:
When I dig in with the Allocations instrument, I find that the memory growth is tied to "VM: CG raster data". In my storing method I use NSCache as a memory cache before persisting the photo, and the raster data eats a lot of memory when I use that cache. It seems that once the rendered image is cached, all of its drawing data stays alive in memory until I release every cached image. If I don't memory-cache the image, none of the raster data coming from my image category method stays alive. I just can't figure out why the drawing data is not released once the image is cached. Shouldn't it be released after drawing?
Further update:
I still haven't figured out why the raster data isn't released while the drawn image is alive, and there is no analyzer warning about it. So I guess I simply have to avoid caching the huge images used for drawing at large sizes, and remove cached drawn images as soon as I no longer need them. If I load with [UIImage imageNamed:] and draw it, the image never seems to be released, raster data and all, because the image is cached by the system; so I call [UIImage imageWithContentsOfFile:] instead, and the memory now behaves well. The remaining growth shows up as "non-object" in the Allocations instrument, which I can't explain yet. Simulating a memory warning does release the system-cached images created by [UIImage imageNamed:]. As for the raster data, I'll run some more tests tomorrow and see.
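For reference, the switch described above looks like this (file name hypothetical):

NSString *path = [[NSBundle mainBundle] pathForResource:@"bigPhoto" ofType:@"jpg"];
UIImage *uncached = [UIImage imageWithContentsOfFile:path]; // bypasses the imageNamed: system cache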
Try making your category method a class method instead. Perhaps the leak is the original CGImage data, which you are overwriting when you call [self drawInRect:imageRect];.
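A sketch of that suggestion, with the aspect-fit math inlined since a class method has no receiver image to ask; this is untested and just illustrates the shape of the change:

+ (UIImage *)resizedImage:(UIImage *)image scaleAspectFitToSize:(CGSize)size
{
    // Compute the aspect-fit size directly instead of calling a category helper on self.
    CGFloat scale = MIN(size.width / image.size.width, size.height / image.size.height);
    CGSize imageSize = CGSizeMake(image.size.width * scale, image.size.height * scale);

    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
    [image drawInRect:CGRectMake(0.0, 0.0, imageSize.width, imageSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}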

How to fix this memory leak while drawing on a UIImage?

Addendum to the question below.
We have traced the growth in allocated memory to an NSMutableArray that points at a list of UIImages. The NSMutableArray is local to a method and has no outside pointers, strong or weak, pointing at it. Since the NSMutableArray is local to the method, shouldn't it, and all the objects it points to, be deallocated automatically at some point after the method returns?
How do we ensure that happens?
=================
(1) First, does calling this code cause a memory leak, or should we be looking elsewhere?
(It appears to us that this code does leak: when we look at Apple's Instruments, running this code creates a string of 1.19MB mallocs from CVPixelBuffer, and skipping the code avoids that. Additionally, the malloc allocation size continually creeps up across the execution cycle and never seems to be reclaimed. Adding an @autoreleasepool block decreased peak memory use and helped keep the app from crashing, but there is a steady increase in baseline memory use, with the biggest culprit being these 1.19MB mallocs.) image2 is an existing UIImage.
image2 = [self imageByDrawingCircleOnImage:image2 withX:newX withY:newY withColor:color];
- (UIImage *)imageByDrawingCircleOnImage:(UIImage *)image withX:(int)x withY:(int)y withColor:(UIColor *)color
{
    UIGraphicsBeginImageContext(image.size);
    [image drawAtPoint:CGPointZero];
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    [color setStroke];
    CGRect shape = CGRectMake(x - 10, y - 10, 20, 20);
    shape = CGRectInset(shape, 0, 0);
    CGContextStrokeEllipseInRect(ctx, shape);
    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return retImage;
}
(2) Second, if this code does leak, how do we prevent the leak and, more importantly, prevent a crash from a memory shortage when we call this method many times in rapid succession? We notice that memory use surges as we call this method repeatedly, which leads to a crash. The question is how to ensure the rapid freeing of the discarded UIImages so the app doesn't run out of memory.
running this code seems to create a string of 1.19MB mallocs from CVPixelBuffer
But do not make the mistake of calling memory use a memory leak. It's a leak only if the used memory can never be reclaimed. You have not proved that.
Lots of operations use memory — but that doesn't matter if the operation is performed once, because then your code ends and the memory is reclaimed.
Issues arise only if your code keeps going, possibly looping so that there is never a chance for the memory to be reclaimed; and in that situation, you can provide such a chance by wrapping each iteration of the loop in an @autoreleasepool block.
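Applying that advice to the method from the question: wrapping each call in @autoreleasepool drains the intermediate UIImages as you go. Here pointCount, xs, ys, and color are hypothetical stand-ins for your real data:

for (NSInteger i = 0; i < pointCount; i++) {
    @autoreleasepool {
        // The strong assignment keeps the newest image alive past the pool drain;
        // the previous image2 and all intermediates are released each iteration.
        image2 = [self imageByDrawingCircleOnImage:image2
                                             withX:xs[i]
                                             withY:ys[i]
                                         withColor:color];
    }
}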
We found the leak elsewhere. We needed to release a pixel buffer. We were creating a CVPixelBuffer from a CGImage and appending the buffer to an AVAssetWriterInputPixelBufferAdaptor, but it was never released.
After this code which created the buffer:
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, 480,
                                      640, kCVPixelFormatType_32ARGB,
                                      (__bridge CFDictionaryRef)options,
                                      &pxbuffer);
...and this code which appended it to an AVAssetWriter:
[adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
...we needed to add this release code per this SO answer:
CVPixelBufferRelease(buffer);
After that code was added, the memory footprint of the app stayed constant.
Additionally, we added @autoreleasepool { } blocks at several points in the video-writing code; the memory usage spikes flattened, which also stabilized the app.
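Putting the pieces together, a condensed sketch of the corrected pattern (assuming options, adaptor, and presentTime are set up as in the surrounding code); the key point is that CVPixelBufferRef is a Core Foundation type that ARC does not manage, so every create must be balanced by a release:

CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, 480, 640,
                                      kCVPixelFormatType_32ARGB,
                                      (__bridge CFDictionaryRef)options,
                                      &pxbuffer);
if (status == kCVReturnSuccess) {
    // ...fill the buffer's pixels here...
    [adaptor appendPixelBuffer:pxbuffer withPresentationTime:presentTime];
    CVPixelBufferRelease(pxbuffer); // the release that was missing in the original code
}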
Our simple conclusion is that SO should get a Nobel prize.

Core Graphics raster data not releasing from memory

So I'm getting my App to take a screen shot and save it to the photo album with the code below...
- (void)save {
    UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
    [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    UIImageWriteToSavedPhotosAlbum(theImage, nil, NULL, NULL);
    NSData *theImageData = UIImageJPEGRepresentation(theImage, 1.0);
    [theImageData writeToFile:@"image.jpeg" atomically:YES];
}
How can I release the memory allocated by Core Graphics that is holding the screenshot raster data?
My project uses ARC for memory management. When testing how the app allocates memory, I've noticed that memory is not released after taking the screenshot, causing the app to grow sluggish over time. The Allocation Summary in Instruments tells me that the data category is 'CG raster data' and the responsible caller is 'CGDataProviderCreateWithCopyOfData'.
Is the solution something involving CFRelease()?
My first App so I'm pretty noob, I've had a look around the internet to try and resolve the issue with no luck...
You could try wrapping the contents of your method in an @autoreleasepool block:
@autoreleasepool {
    ...
}
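For example, a sketch of the save method above with its body pooled; note it also writes the JPEG under NSTemporaryDirectory(), since a bare relative path like @"image.jpeg" isn't writable on iOS:

- (void)save {
    @autoreleasepool {
        UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, self.view.opaque, 0.0);
        [self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
        UIImage *theImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        UIImageWriteToSavedPhotosAlbum(theImage, nil, NULL, NULL);
        NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"image.jpeg"];
        [UIImageJPEGRepresentation(theImage, 1.0) writeToFile:path atomically:YES];
    } // theImage and its raster data become collectible here
}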

Is it possible to use Quartz 2D to make a UIImage on another thread?

I want to move some code that takes a couple of seconds to generate a UIImage onto another thread, but I'm getting a context error when using
UIGraphicsBeginImageContextWithOptions(size, false, 0);
before dispatching the work that generates the image; every drawing operation I try logs "invalid context 0x0". Is this possible at all?
What's New in iOS: iOS 4.0 says this:
Drawing to a graphics context in UIKit is now thread-safe. Specifically:
The routines used to access and manipulate the graphics context can now correctly handle contexts residing on different threads.
String and image drawing is now thread-safe.
Using color and font objects in multiple threads is now safe to do.
It sounds like you tried something like this:
UIGraphicsBeginImageContextWithOptions(size, false, 0);
dispatch_async(someQueue, ^{
    [UIColor.whiteColor setFill];
    UIRectFill(CGRectMake(0, 0, 20, 20));
    dispatch_async(dispatch_get_main_queue(), ^{
        UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        self.imageView.image = image;
    });
});
That won't work because each thread has its own stack of graphics contexts (starting in iOS 4.0). You need to do it like this:
dispatch_async(someQueue, ^{
    UIGraphicsBeginImageContextWithOptions(size, false, 0);
    [UIColor.whiteColor setFill];
    UIRectFill(CGRectMake(0, 0, 20, 20));
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = image;
    });
});
UPDATE
The documentation for UIGraphicsBeginImageContextWithOptions and other UIKit graphics functions now says
In iOS 4 and later, you may call this function from any thread of your app.
The documentation for UIColor says
Color objects are immutable and so it is safe to use them from multiple threads in your app.
The documentation for UIFont says
Font objects are immutable and so it is safe to use them from multiple threads in your app.
However, the documentation for the UIKit NSString-drawing additions says
The methods described in this class extension must be used from your app’s main thread.
So you must not try something like [@"hello" drawAtPoint:CGPointZero withAttributes:attrs] from a background thread.
The docs say:
You should call this function from the main thread of your application only.
So calling it on another thread is not a good idea.
You could try using CoreGraphics instead, and calling CGBitmapContextCreate().
You can easily use a CGContext to produce a CGImage on a secondary thread.
Back on the main thread, create a UIImage from the CGImage. Note that UIImage is an immutable container type, so this should not result in a deep copy of the image data.
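A minimal sketch of that approach, reusing the size and someQueue names from the answer above: build the CGImage with pure Core Graphics off the main thread, then wrap it in a UIImage on the main thread.

dispatch_async(someQueue, ^{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, (size_t)size.width, (size_t)size.height,
                                             8, 0, colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Draw with Core Graphics calls; no UIKit context stack is involved.
    CGContextSetFillColorWithColor(ctx, UIColor.whiteColor.CGColor);
    CGContextFillRect(ctx, CGRectMake(0, 0, 20, 20));

    CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);

    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = [UIImage imageWithCGImage:cgImage]; // cheap wrapper, no deep copy
        CGImageRelease(cgImage);
    });
});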

CGContextDrawImage is EXTREMELY slow after large UIImage drawn into it

It seems that CGContextDrawImage(CGContextRef, CGRect, CGImageRef) performs MUCH WORSE when drawing a CGImage that was created by Core Graphics (i.e. with CGBitmapContextCreateImage) than when drawing the CGImage that backs a UIImage. See this testing method:
- (void)showStrangePerformanceOfCGContextDrawImage
{
    /// Setup: load an image and start a context.
    UIImage *theImage = [UIImage imageNamed:@"reallyBigImage.png"];
    UIGraphicsBeginImageContext(theImage.size);
    CGContextRef ctxt = UIGraphicsGetCurrentContext();
    CGRect imgRec = CGRectMake(0, 0, theImage.size.width, theImage.size.height);

    /// Why is this SO MUCH faster...
    NSDate *startingTimeForUIImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImage.CGImage); // draw the existing image using the UIImage backing
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForUIImageDrawing]);

    /// Create a new image from the context to use this time in CGContextDrawImage:
    CGImageRef theImageConverted = CGBitmapContextCreateImage(ctxt);

    /// This is WAY slower, but why?? Using a pure CGImageRef (as opposed to one behind a UIImage)
    /// seems like it should be faster, but AT LEAST it should be the same speed!?
    NSDate *startingTimeForNakedGImageDrawing = [NSDate date];
    CGContextDrawImage(ctxt, imgRec, theImageConverted);
    NSLog(@"Time was %f", [[NSDate date] timeIntervalSinceDate:startingTimeForNakedGImageDrawing]);
}
So I guess the question is: (1) what may be causing this, and (2) is there a way around it, i.e. other ways to create a CGImageRef which may be faster? I realize I could convert everything to UIImages first, but that is such an ugly solution. I already have the CGContextRef sitting there.
UPDATE: This seems not to hold for small images? That may be a clue: the problem is amplified when large images (i.e. full-size camera pics) are used. 640x480 is pretty similar in execution time with either method.
UPDATE 2: OK, so I've discovered something new. It's actually NOT the backing of the CGImage that changes the performance. I can flip-flop the order of the 2 steps and make the UIImage method behave slowly, whereas the "naked" CGImage will be super fast. It seems whichever you perform second suffers from terrible performance. This seems to be the case UNLESS I free memory by calling CGImageRelease on the image I created with CGBitmapContextCreateImage. Then the UIImage-backed method is fast subsequently. The inverse is not true. What gives? "Crowded" memory shouldn't affect performance like this, should it?
UPDATE 3: Spoke too soon. The previous update holds true for images at size 2048x2048, but stepping up to 1936x2592 (camera size) the naked CGImage method is still way slower, regardless of order of operations or memory situation. Maybe there are some CG internal limits that make a 16MB image efficient whereas a 21MB image can't be handled efficiently. It's literally 20 times slower to draw the camera size than a 2048x2048. Somehow UIImage provides its CGImage data much faster than a pure CGImage object does. o.O
UPDATE 4: I thought this might have to do with some memory caching, but the results are the same whether the UIImage is loaded with the non-caching [UIImage imageWithContentsOfFile:] or with [UIImage imageNamed:].
UPDATE 5 (Day 2): After creating more questions than were answered yesterday, I have something solid today. What I can say for sure is the following:
The CGImages behind a UIImage don't use alpha (kCGImageAlphaNoneSkipLast). I thought that maybe they were faster to draw because my context WAS using alpha, so I changed the context to use kCGImageAlphaNoneSkipLast. This makes the drawing MUCH faster, UNLESS:
Drawing into a CGContextRef with a UIImage FIRST makes ALL subsequent image drawing slow
I proved this by 1) first creating a non-alpha context (1936x2592), 2) filling it with randomly colored 2x2 squares, 3) full-frame drawing a CGImage into that context, which was FAST (.17 seconds), then 4) repeating the experiment but filling the context with a drawn CGImage backing a UIImage. Subsequent full-frame image drawing was 6+ seconds. SLOWWWWW.
Somehow drawing into a context with a (large) UIImage drastically slows all subsequent drawing into that context.
Well, after a TON of experimentation I think I have found the fastest way to handle situations like this. The drawing operation above which was taking 6+ seconds now takes 0.1 seconds. YES. Here's what I discovered:
Homogenize your contexts & images with a pixel format! The root of the question I asked boiled down to the fact that the CGImages inside a UIImage were using THE SAME PIXEL FORMAT as my context, and were therefore fast, while the CGImages created from the bitmap context were a different format and therefore slow. Inspect your images with CGImageGetAlphaInfo to see which pixel format they use. I'm using kCGImageAlphaNoneSkipLast EVERYWHERE now, as I don't need to work with alpha. If you don't use the same pixel format everywhere, Quartz will be forced to perform an expensive conversion for EACH pixel when drawing an image into a context. = SLOW
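For illustration, a short sketch of that check plus a context created with the matching format (the 1936x2592 size is borrowed from the question):

CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(theImage.CGImage);
NSLog(@"source alpha info: %u", (unsigned)alphaInfo); // compare against the context's format below

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, 1936, 2592, 8, 0, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaNoneSkipLast); // same alpha setting everywhere
CGColorSpaceRelease(colorSpace);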
USE CGLayers! These make offscreen drawing performance much better. How this works is basically as follows: 1) create a CGLayer from the context using CGLayerCreateWithContext. 2) Do any drawing (and setting of drawing properties) on THIS LAYER's CONTEXT, obtained with CGLayerGetContext. Read any pixels or information from the ORIGINAL context. 3) When done, "stamp" this CGLayer back onto the original context using CGContextDrawLayerAtPoint. This is FAST as long as you keep in mind:
1) Release any CGImages created from a context (i.e. those created with CGBitmapContextCreateImage) BEFORE "stamping" your layer back into the CGContextRef using CGContextDrawLayerAtPoint. This creates a 3-4x speed increase when drawing that layer. 2) Keep your pixel format the same everywhere!! 3) Clean up CG objects AS SOON as you can. Things hanging around in memory seem to create strange slowdowns, probably because there are callbacks or checks associated with these strong references. Just a guess, but I can say that CLEANING UP MEMORY ASAP helps performance immensely.
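A sketch of the CGLayer workflow in steps 1-3 above, assuming ctx is the original CGContextRef and layerSize is the offscreen drawing area:

CGLayerRef layer = CGLayerCreateWithContext(ctx, layerSize, NULL); // the layer inherits ctx's pixel format
CGContextRef layerCtx = CGLayerGetContext(layer);

// Draw offscreen on the layer's own context.
CGContextSetFillColorWithColor(layerCtx, UIColor.redColor.CGColor);
CGContextFillEllipseInRect(layerCtx, CGRectMake(10, 10, 100, 100));

// Per tip 1 above, release any context-derived CGImages *before* stamping:
// CGImageRelease(scratchImage);

CGContextDrawLayerAtPoint(ctx, CGPointZero, layer); // stamp the layer back onto ctx
CGLayerRelease(layer); // clean up CG objects promptly (tip 3)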
I had a similar problem. My application has to redraw a picture almost as large as the screen size. The problem came down to drawing, as fast as possible, two images of the same resolution, neither rotated nor flipped, but scaled and positioned at different places on the screen each time. In the end, I was able to get ~15-20 FPS on iPad 1 and ~20-25 FPS on iPad 4. So... hope this helps someone:
Exactly as typewriter said, you have to use the same pixel format. Using one with AlphaNone gives a speed boost. But even more important, in my case argb32_image made numerous calls converting pixels from ARGB to BGRA. So the best bitmapInfo value for me was (at the time; there is a chance that Apple will change something here in the future):
const CGBitmapInfo g_bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast;
CGContextDrawImage may work faster if the rectangle argument is made integral (via CGRectIntegral). This seems to have more effect when the image is scaled by a factor close to 1.
Using layers actually slowed things down for me. Probably something has changed in some internal calls since 2011.
Setting the context's interpolation quality lower than the default (via CGContextSetInterpolationQuality) is important. I would recommend using (IS_RETINA_DISPLAY ? kCGInterpolationNone : kCGInterpolationLow). The IS_RETINA_DISPLAY macro is taken from here.
Make sure you get the CGColorSpaceRef from CGColorSpaceCreateDeviceRGB() or the like when creating the context. Some performance issues have been reported for using a fixed color space instead of requesting the device's.
Inheriting the view class from UIImageView and simply setting self.image to the image created from the context proved useful to me. However, read about using UIImageView first if you want to do this, since it requires some changes in code logic (because drawRect: isn't called anymore).
And if you can avoid scaling your image at the time of actual drawing, try to do so. Drawing a non-scaled image is significantly faster; unfortunately, for me that was not an option. A sketch combining several of these tips follows.
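To make the checklist concrete, here is a sketch pulling several of the tips together (the sizes and rects are arbitrary examples, and image is an assumed UIImage):

const CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipLast;

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); // device color space, per the tip above
CGContextRef ctx = CGBitmapContextCreate(NULL, 2048, 2048, 8, 0, colorSpace, bitmapInfo);
CGColorSpaceRelease(colorSpace);

CGContextSetInterpolationQuality(ctx, kCGInterpolationLow); // cheaper scaling
CGRect destRect = CGRectIntegral(CGRectMake(0.5, 0.5, 1024.3, 1024.3)); // snap to integral pixels
CGContextDrawImage(ctx, destRect, image.CGImage);
CGContextRelease(ctx); // clean up promptly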