Memory management for CGImageCreateWithImageInRect used for CAKeyframeAnimation - iOS

I'm using CAKeyframeAnimation to animate a few PNG images, which I cut out of a single PNG using CGImageCreateWithImageInRect.
When I run the Analyze function in Xcode it shows me potential memory leaks, and they are caused by CGImageCreateWithImageInRect. But when I call CGImageRelease after the animation object is created, no images are shown (I understand why), and when I don't release them, there are leaks.
Could someone explain this memory issue to me? And what is the best solution? I was thinking about creating a UIImage from the CGImageRef for each "cut", but since CAKeyframeAnimation takes an array of CGImageRefs, I thought creating those UIImages wasn't necessary.
UIImage *source = [UIImage imageNamed:@"my_anim_actions.png"];

// Cut the individual frames out of the sprite sheet.
cutRect = CGRectMake(0*dimForImg.width, 0*dimForImg.height, dimForImg.width, dimForImg.height);
CGImageRef image1 = CGImageCreateWithImageInRect([source CGImage], cutRect);
cutRect = CGRectMake(1*dimForImg.width, 0*dimForImg.height, dimForImg.width, dimForImg.height);
CGImageRef image2 = CGImageCreateWithImageInRect([source CGImage], cutRect);

NSArray *images = [[NSArray alloc] initWithObjects:(__bridge id)image1, (__bridge id)image2, (__bridge id)image2, (__bridge id)image1, (__bridge id)image2, (__bridge id)image1, nil];

CAKeyframeAnimation *myAnimation = [CAKeyframeAnimation animationWithKeyPath:@"contents"];
myAnimation.calculationMode = kCAAnimationDiscrete;
myAnimation.duration = kMyTime;
myAnimation.values = images; // NSArray of CGImageRefs
[myAnimation setValue:@"ANIMATION_MY" forKey:@"MyAnimation"];
myAnimation.removedOnCompletion = NO;
myAnimation.fillMode = kCAFillModeForwards;

CGImageRelease(image1);
CGImageRelease(image2); // YES or NO?

Two solutions:
First: use UIImageView's animationImages property to animate between multiple images.
Second: store the CGImageRefs as ivars and release them in dealloc of your class.
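A minimal sketch of the second approach, assuming the animation is owned by a view controller (the class and ivar names are illustrative; dimForImg and the sprite sheet come from the question):

@implementation MyAnimatedViewController {
    CGImageRef _frame1; // kept alive as long as the layer may need them
    CGImageRef _frame2;
}

- (void)setUpAnimation
{
    UIImage *source = [UIImage imageNamed:@"my_anim_actions.png"];
    _frame1 = CGImageCreateWithImageInRect(source.CGImage,
              CGRectMake(0 * dimForImg.width, 0, dimForImg.width, dimForImg.height));
    _frame2 = CGImageCreateWithImageInRect(source.CGImage,
              CGRectMake(1 * dimForImg.width, 0, dimForImg.width, dimForImg.height));
    // ... build the CAKeyframeAnimation from _frame1/_frame2 exactly as in the question ...
}

- (void)dealloc
{
    // Balance the Create calls once the animation can no longer reference the images.
    CGImageRelease(_frame1);
    CGImageRelease(_frame2);
}

@end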

Not sure if this helps, but I ran into something similar: using CGImageCreateWithImageInRect to take sections of an image and then showing those sections in image views caused huge memory usage. One thing that improved memory usage a lot was encoding the sub-image into PNG data and decoding it back into an image again.
I know, that sounds ridiculous and completely redundant, right? But it helped a lot. According to the documentation of CGImageCreateWithImageInRect:
The resulting image retains a reference to the original image, which
means you may release the original image after calling this function.
It seems as though when the image is shown, the UI copies the image data, including the original image data that it references; that's why it uses so much memory. But when you round-trip through PNG, you create independent image data that is smaller and does not depend on the original. This is just my guess.
Anyway, try it and see if it helps.
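A minimal sketch of that round-trip, assuming you start from the CGImageRef returned by CGImageCreateWithImageInRect (the helper name is made up):

// Re-encode the sub-image so the result no longer retains the full original bitmap.
static UIImage *IndependentImageFromSubimage(CGImageRef subimage)
{
    UIImage *wrapped = [UIImage imageWithCGImage:subimage];
    NSData *pngData = UIImagePNGRepresentation(wrapped);
    return [UIImage imageWithData:pngData];
}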

Related

AVPlayer poster frame

When an AVPlayer is at the first frame, I would like to show a different frame from the video, like a YouTube poster frame. How would you accomplish something like this?
You would set the contents property of any visible layer accordingly:
UIImage *image = [UIImage imageWithCGImage:...]; // replace ... with the CGImage returned by AVAssetImageGenerator
view.layer.contents = (__bridge id)image.CGImage;
Strangely, you have to create a UIImage and then convert it back to a CGImage to set the layer's contents property. The UIImage can be released immediately afterwards, for example by limiting its lifetime with an @autoreleasepool block or the __weak or __autoreleasing qualifiers.
Don't let anyone tell you that this is memory-consuming or slow. If you relegate it to a thread other than the main thread (via NSThread), it's as fast as any other means; I've displayed 80 UIImages this way on a single screen before.
But, then again, multi-threading and memory management are my strong suits, as you can see here:
https://youtu.be/7QlaO7WxjGg
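Not part of the original answer, but a sketch of the whole flow using AVAssetImageGenerator might look like this (the asset URL and poster time are placeholders):

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil]; // videoURL: whatever the AVPlayer is playing
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;

NSError *error = nil;
CMTime posterTime = CMTimeMakeWithSeconds(5.0, 600); // pick the frame to use as the poster
CGImageRef cgImage = [generator copyCGImageAtTime:posterTime actualTime:NULL error:&error];
if (cgImage) {
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    view.layer.contents = (__bridge id)image.CGImage;
    CGImageRelease(cgImage); // copyCGImageAtTime: follows the Create/Copy rule
}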

Why the UIImage is not released by ARC when I use UIGraphicsGetImageFromCurrentImageContext inside a block

I'm trying to download an image from a server using NSURLSessionDownloadTask (an iOS 7 API). Inside the completion block, I want the original image to be resized and stored locally, so I wrote a helper method that creates a bitmap context, draws the image, and then gets the new image from UIGraphicsGetImageFromCurrentImageContext(). The problem is that the image is never released each time I do this. However, if I skip the context and image drawing, things work fine and there is no memory growth. There is no CGImageCreate/Release function called, so there's really nothing to release manually, and nothing is fixed by adding @autoreleasepool here. Is there any way to fix this? I really want to modify the original image after downloading it and before storing it.
Here is some snippets for the issue:
[self fetchImageByDownloadTaskWithURL:url completion:^(UIImage *image, NSError *error) {
    UIImage *modifiedImage = [image resizedImageScaleAspectFitToSize:imageView.frame.size];
    // save to local disk
    // ...
}];

// This is the resize method in the UIImage category
- (UIImage *)resizedImageScaleAspectFitToSize:(CGSize)size
{
    CGSize imageSize = [self scaledSizeForAspectFitToSize:size];
    UIGraphicsBeginImageContextWithOptions(imageSize, YES, 0.0);
    CGRect imageRect = CGRectMake(0.0, 0.0, imageSize.width, imageSize.height);
    [self drawInRect:imageRect]; // nothing will change if make it weakSelf
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
updates:
When I dig into it with the Allocations instrument, I find that the memory growth is related to "VM: CG raster data". In my storing method I use NSCache as a photo memory cache before storing the image persistently, and the raster data eats a lot of memory when I use the memory cache. It seems that after the rendered image is cached, all of its drawing data also stays alive in memory until I release all the cached images. If I don't memory-cache the image, then none of the raster data coming from my image category method stays alive. I just can't figure out why the drawing data is not released after the image is cached. Shouldn't it be released after drawing?
new updates:
I still haven't figured out why the raster data is not released while the image used for drawing is alive, and there is certainly no Analyze warning about it. So I guess I just have to avoid caching the huge image used for drawing at the large size, and remove cached drawn images when I no longer need them. If I call [UIImage imageNamed:] and draw with the result, it never seems to be released (together with its raster data), since the image is cached by the system; so I call [UIImage imageWithContentsOfFile:] instead. With that change the memory behaves well. The remaining growth shows up as "non-object" allocations in the Allocations instrument, which I can't explain yet. Simulating a memory warning does release the system-cached images created by [UIImage imageNamed:]. I will run some more tests on the raster data tomorrow.
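A minimal sketch of the imageWithContentsOfFile: change described above (the file name is a placeholder; resizedImageScaleAspectFitToSize: is the category method from the question):

// imageWithContentsOfFile: skips the system-wide cache used by imageNamed:,
// so the decoded bitmap can go away as soon as nothing retains the image.
NSString *path = [[NSBundle mainBundle] pathForResource:@"huge_photo" ofType:@"jpg"];
UIImage *uncached = [UIImage imageWithContentsOfFile:path];
UIImage *resized = [uncached resizedImageScaleAspectFitToSize:imageView.frame.size];
// store `resized`, and let `uncached` be deallocated when it goes out of scope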
Try making your category method a class method instead. Perhaps the leak is the original CGImage data which you are overwriting when you call [self drawInRect:imageRect];.

How to layer a distinct alpha channel animation on top of video?

I'm developing an iPhone app and am having issues with the AVFoundation API; I'm used to doing lots of image manipulation and just assumed I would have access to an image buffer, but the video API is quite different.
I want to take a 30 frame/sec animation, generated as PNGs with a transparency channel, and overlay it onto an arbitrary number of video clips composited inside an AVMutableComposition.
I figured that AVMutableVideoComposition would be a good way to go about it; but as it turns out, the animation tool, AVVideoCompositionCoreAnimationTool, requires a special kind of CALayer animation. It supports basics like spatial transforms, scaling, and fading, but my animation already exists as a series of PNGs.
Is this possible with AVFoundation? If so, what is the recommended process?
I think you should work with UIImageView and animationImages:
UIImageView *anImageView = [[UIImageView alloc] initWithFrame:frame];
NSMutableArray *animationImages = [NSMutableArray array];
for (int i = 0; i < 500; i++) {
    [animationImages addObject:[UIImage imageNamed:[NSString stringWithFormat:@"image%d", i]]];
}
anImageView.animationImages = animationImages;
anImageView.animationDuration = 500.0 / 30.0; // 500 frames at 30 fps
I would use the AVVideoCompositing protocol along with AVAsynchronousVideoCompositionRequest. Use [AVAsynchronousVideoCompositionRequest sourceFrameByTrackID:] to get the CVPixelBufferRef for the video frame, then create a CIImage from the appropriate PNG based on the timing you want. Then render the video frame onto a GL_TEXTURE, render the CIImage to a GL_TEXTURE, draw both into your destination CVPixelBufferRef, and you should get the effect you are looking for.
Something like:
CVPixelBufferRef foregroundPixelBuffer;
CIImage *appropriatePNG = [CIImage imageWithContentsOfURL:pngURL];
[someCIContext render:appropriatePNG toCVPixelBuffer:foregroundPixelBuffer];
CVPixelBufferRef backgroundPixelBuffer = [asynchronousVideoRequest sourceFrameByTrackID:theTrackID];
// ... GL code for rendering using CVOpenGLESTextureCacheRef
You will need to composite each set of input images (background and foreground) down to a single pixel buffer and then encode those buffers into an H.264 video one frame at a time. Just be aware that this will not be super fast, since there is a lot of memory writing and encoding time involved in creating the H.264. You can have a look at AVRender to see a working example of the approach described here. If you would like to roll your own implementation, take a look at this tutorial, which includes source code that can help you get started.
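A hedged sketch of the encoding step, assuming you already have composited CVPixelBufferRefs (the output URL, dimensions, frameIndex, and compositedPixelBuffer are placeholders):

NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];
NSDictionary *settings = @{ AVVideoCodecKey: AVVideoCodecH264,
                            AVVideoWidthKey: @1280,
                            AVVideoHeightKey: @720 };
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                               outputSettings:settings];
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                              sourcePixelBufferAttributes:nil];
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];

// For each composited frame, append it at its presentation time (1/30 s apart).
CMTime frameTime = CMTimeMake(frameIndex, 30);
[adaptor appendPixelBuffer:compositedPixelBuffer withPresentationTime:frameTime];

// When all frames have been appended:
[input markAsFinished];
[writer finishWritingWithCompletionHandler:^{ /* done */ }];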

UICollectionView bad performance with UIImageViews with Core Image-manipulated images

I've got a UICollectionView in my app whose cells mainly consist of UIImageViews containing images that have been manipulated with Core Image to reduce their color saturation. Scrolling performance is absolutely horrible. When I profile it, the vast majority of the time (80% or so) is spent not in my code or even in my stack; it all appears to be in Core Animation code. Does anyone know why that could be?
In my UICollectionViewCell subclass I have something like this:
UIImage *img = [self obtainImageForCell];
img = [img applySaturation:0.5];
self.imageView.image = img;
applySaturation looks like this:
CIImage *image = [CIImage imageWithCGImage:self.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:image forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:saturation] forKey:@"inputSaturation"];
return [UIImage imageWithCIImage:filter.outputImage];
My only guess is that Core Animation doesn't play well with Core Image. The Apple docs say this about CIImage:
Although a CIImage object has image data associated with it, it is not an image. You can think of a CIImage object as an image “recipe.” A CIImage object has all the information necessary to produce an image, but Core Image doesn’t actually render an image until it is told to do so. This “lazy evaluation” method allows Core Image to operate as efficiently as possible.
Doing this evaluation at the last minute while animating might be tricky.
I had the exact same problem, and cured it by avoiding triggering Core Image filters during cell updates.
The Apple docs stuff about lazy evaluation / recipes is, I think, more directed at the idea that you can chain core image filters together very efficiently. However, when you want to display the results of a core image filter chain, the thing needs to be evaluated then and there, which is not a good situation if the 'then and there' is during a rapidly-scrolling view and the filter in question requires heavy processing (many of them do).
You can try fiddling with GPU vs CPU processing, but I have found that the cost of moving image data into and out of a CIImage can be the more significant overhead (see my answer here).
My recommendation is to treat this kind of processing the same way you would treat populating a scrolling view with online images, i.e. process asynchronously, use placeholders (e.g. the pre-processed image), and cache results for reuse.
update
in reply to your comment:
Applicable filters are applied when you extract data from the CIImage - for example, with imageWithCIImage: [warning - this is my inference, I have not tested].
But this is not your problem... you need to process your images on a background thread, because the processing takes time that will hold up scrolling. Meanwhile, display something else in the scrolling cell, such as a flat color or, better, the UIImage you are feeding into your CIImage for filtering. Update the cell when the processing is done (check that it still needs updating; it may have scrolled offscreen by then). Save the filtered image in some kind of persistent store so that you don't need to filter it a second time, and check that cache whenever you need to display the image again, before reprocessing from scratch.
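A rough sketch of that pattern, assuming an NSCache property and a cell that remembers which item it is currently showing (applySaturation: is the category from the question; the cell class, cache, and helper names are illustrative):

- (void)configureCell:(MyImageCell *)cell forIdentifier:(NSString *)identifier
{
    UIImage *cached = [self.filteredImageCache objectForKey:identifier];
    if (cached) {
        cell.imageView.image = cached;
        return;
    }

    // Show the unfiltered image as a placeholder while the filter runs off the main thread.
    UIImage *original = [self obtainImageForIdentifier:identifier];
    cell.imageView.image = original;
    cell.currentIdentifier = identifier;

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIImage *filtered = [original applySaturation:0.5];
        dispatch_async(dispatch_get_main_queue(), ^{
            [self.filteredImageCache setObject:filtered forKey:identifier];
            // The cell may have been reused for another item by now.
            if ([cell.currentIdentifier isEqualToString:identifier]) {
                cell.imageView.image = filtered;
            }
        });
    });
}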
I also had this exact problem – right down to wanting to desaturate an image! – and filtering once and caching the result (even as a UIImage) didn't help.
The problem, as others have mentioned, is that a CIImage encapsulates the information required to generate an image, but isn't actually an image itself. So when scrolling, the on-screen image needs to be generated on the fly, which kills performance. The same turns out to be true of a UIImage created using the imageWithCIImage:scale:orientation: method, so creating this once and reusing it also doesn't help.
The solution is to force Core Image to actually render the image before saving it as a UIImage. This gave a huge improvement in scrolling performance for me. In your case, the applySaturation method might look like this:
CIImage *ciImage = [CIImage imageWithCGImage:self.CGImage];
CIFilter *filter = [CIFilter filterWithName:@"CIColorControls"];
[filter setValue:ciImage forKey:kCIInputImageKey];
[filter setValue:[NSNumber numberWithFloat:saturation] forKey:@"inputSaturation"];
// Force the filter chain to render into a real bitmap now, rather than lazily at scroll time.
CGImageRef cgImage = [[CIContext contextWithOptions:nil] createCGImage:filter.outputImage fromRect:filter.outputImage.extent];
UIImage *result = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
return result;
You might also consider caching the CIFilter and/or the CIContext if you're going to be using this method a lot, since these can be expensive to create.
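For example, a shared context could be kept in the category like this (a sketch, not part of the original answer; the method name is made up):

// Creating a CIContext is expensive; reuse one for all saturation work.
+ (CIContext *)sharedFilterContext
{
    static CIContext *context = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        context = [CIContext contextWithOptions:nil];
    });
    return context;
}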

Issue using GLKTextureLoader and imageNamed: when loading the same image multiple times

I get strange behaviour when I use the following code to load an image multiple times:
NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft: @YES};
textureInfo = [GLKTextureLoader textureWithCGImage:[UIImage imageNamed:@"name"].CGImage
                                           options:options
                                             error:nil];
It works as expected when I load the image the first time, but when I try to load the same image again it's drawn upside down.
I think this has to do with the fact that it's actually the same CGImage that gets passed to the texture loader, because of the use of imageNamed:. The flip transformation is therefore applied a second time to the same image.
Is there a way to get around this issue?
I guess you could flip the image yourself and load it once, the first time, when your program starts.
Or don't use imageNamed:. Or keep the texture in memory so you only have to load it once.
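A minimal sketch combining the last two suggestions: load with imageWithContentsOfFile: (so the shared imageNamed: CGImage is never passed through the loader twice) and keep the resulting texture around (the property and file names are assumptions):

// Loads the texture once from disk and caches the GLKTextureInfo.
- (GLKTextureInfo *)nameTexture
{
    if (!_nameTexture) {
        NSString *path = [[NSBundle mainBundle] pathForResource:@"name" ofType:@"png"];
        UIImage *image = [UIImage imageWithContentsOfFile:path];
        NSDictionary *options = @{GLKTextureLoaderOriginBottomLeft: @YES};
        NSError *error = nil;
        _nameTexture = [GLKTextureLoader textureWithCGImage:image.CGImage
                                                    options:options
                                                      error:&error];
    }
    return _nameTexture;
}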
