UIImage, CGImage, backing buffers and memory usage (iOS)

I have a WebP image. Since there is no native iOS support for it, I use libwebp to decode it into a malloc'd RGB buffer:
uint8_t *rgb = malloc(width * height * 3);
WebPDecode(rgb, ...);
CGDataProviderRef provider = CGDataProviderCreateWithData(rgb, size, ...);
CGImageRef imageRef = CGImageCreate(width, height, provider, ...);
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGDataProviderRelease(provider);
CGImageRelease(imageRef);
This works fine. However, the images I'm working with are extremely large, so the above code allocates a lot of memory, and that uint8_t *rgb buffer sticks around for as long as the image is retained.
Now I also have a JPEG version of the exact same image. In its encoded state the JPEG is about 30% larger than the WebP. Since iOS supports JPEG natively, I can create a UIImage like this:
NSData *jpegData = ...;
UIImage *image = [UIImage imageWithData:jpegData];
I'm not sure what imageWithData: does under the hood, but for some reason it only uses about 1/3 of the memory of the WebP image I created above (as reported by Xcode, on both the device and the simulator).
My question: if I understand correctly, before an image is rendered to the screen, at some point it has to be decoded into its RGB components. If I am correct in this assumption, shouldn't these two images take up the same amount of memory?
Update: Xcode does not seem to be reporting the real memory in use, but Instruments does. Profiling the app in Instruments, I see the same amount of memory allocated by malloc and by ImageIO_JPEG_Data. Can anyone explain what Xcode's memory report is actually telling me? It seems wildly off.
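For a rough sense of the numbers involved (the dimensions below are hypothetical, not my actual image), the decoded footprints I would expect look like this:
// Hypothetical 4000 x 3000 image, just to illustrate the expected order of magnitude.
size_t width = 4000, height = 3000;
size_t rgbBytes  = width * height * 3; // my malloc'd tightly packed RGB buffer -> ~34 MB
size_t rgbaBytes = width * height * 4; // a typical decoded 4-byte-per-pixel buffer -> ~46 MB
NSLog(@"RGB: %.1f MB, RGBA: %.1f MB",
      rgbBytes / (1024.0 * 1024.0), rgbaBytes / (1024.0 * 1024.0));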

Related

UIImage takes up much more memory than its NSData

I'm loading a UIImage from NSData with the following code
var image = UIImage(data: data!)
However, there is a weird behavior.
At first I used PNG data, and the NSData was about 80 KB each.
When I set a UIImage with that data, the UIImage took up 128 KB each.
(Checked with the Allocations instrument; it's the size of ImageIO_PNG_Data.)
Then I switched to JPEG instead, and the NSData became about 7 KB each.
But still, the UIImage is 128 KB each, so when displaying the image I get no memory advantage! (The NSData shrank from 80 KB to 7 KB, yet the UIImage takes up the same amount of memory.)
It seems weird: why should the UIImage take up 128 KB when the original data is just 7 KB?
Can I reduce the memory used by the UIImage without shrinking the image itself?
Note that I'm not dealing with high-resolution images, so resizing is not an option (the NSData is already 7 KB!).
Any help will be appreciated.
Thanks!!
When you access the NSData, the data is compressed (with either PNG or JPEG). When you use the UIImage, there is an uncompressed pixel buffer, often 4 bytes per pixel (one byte each for red, green, blue, and alpha). There are other formats, but that illustrates the basic idea: the JPEG or PNG representation can stay compressed, whereas once you start using an image, it is uncompressed.
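As a back-of-the-envelope check (the image dimensions aren't given in the question, so this is only illustrative), 128 KB is about what a small 4-byte-per-pixel bitmap occupies:
size_t bufferBytes = 128 * 1024;      // the ~128 KB reported by Allocations
size_t pixelCount  = bufferBytes / 4; // ~32,768 pixels at 4 bytes per pixel, e.g. roughly 181 x 181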
In your conclusion, you say that resizing is not an option and that the NSData is already 7 KB. I would suggest that resizing should be considered whenever the resolution of the image is greater than the resolution of the UIImageView in which you're using it (the points of the bounds/frame times the scale of the device). Whether to resize is not a function of the size of the NSData, but of the resolution of the view. So, if you have a 1000x1000-pixel image that you're using in a small thumbnail view in a table view, then regardless of how small the JPEG representation is, you should definitely resize the image.
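For example, a minimal downscale to the view's resolution could look like the following (a sketch only; imageView and largeImage are placeholder names, and passing 0.0 as the scale makes the context use the device's screen scale):
CGSize targetSize = imageView.bounds.size; // point size of the destination view
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0); // 0.0 = use the device scale
[largeImage drawInRect:CGRectMake(0.0, 0.0, targetSize.width, targetSize.height)];
UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
imageView.image = resized; // the decoded buffer now matches the view's resolution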
This is normal. When the image is stored as NSData, it is compressed (usually with PNG or JPEG compression). When it's a UIImage, the image is decompressed, which allows it to be drawn quickly to the screen.

How to combine an image with a mask into one single UIImage with Accelerate Framework?

This code combines an image and a grayscale mask image into one UIImage. It works but it is slow.
+ (UIImage *)maskImage:(UIImage *)image withMask:(UIImage *)mask
{
    CGImageRef imageReference = image.CGImage;
    CGImageRef maskReference = mask.CGImage;
    CGImageRef imageMask = CGImageMaskCreate(CGImageGetWidth(maskReference),
                                             CGImageGetHeight(maskReference),
                                             CGImageGetBitsPerComponent(maskReference),
                                             CGImageGetBitsPerPixel(maskReference),
                                             CGImageGetBytesPerRow(maskReference),
                                             CGImageGetDataProvider(maskReference),
                                             NULL, // Decode is null
                                             YES); // Should interpolate
    CGImageRef maskedReference = CGImageCreateWithMask(imageReference, imageMask);
    CGImageRelease(imageMask);
    UIImage *maskedImage = [UIImage imageWithCGImage:maskedReference];
    CGImageRelease(maskedReference);
    return maskedImage;
}
I think the Accelerate framework can help, but I am not sure.
There is vImage, which can do alpha compositing. Or maybe what I'm looking for is called a "vImage transform": not a CATransform3D, but transforming the image data itself.
But what I need is to make a (JPEG) photo transparent based on a mask.
Can Accelerate Framework be used for this? Or is there an alternative?
vImageOverwriteChannels_ARGB8888 is probably the API you want, provided that the source JPEG is opaque to start with. You can use vImageBuffer_InitWithCGImage to extract the source image as 8 bpc, 32 bpp, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little. This will give you a BGRA8888 image with opaque alpha. Get the mask out as an 8 bpc, 8 bpp, kCGImageAlphaNone image. Use vImageOverwriteChannels_ARGB8888 to overwrite the BGRA alpha with the new alpha channel. Then make a new CGImage with vImageCreateCGImageFromBuffer, modifying the format slightly to kCGImageAlphaFirst | kCGBitmapByteOrder32Little.
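A rough, untested sketch of that sequence (assuming the image and mask have identical pixel dimensions, and iOS 7 or later for vImageBuffer_InitWithCGImage; the method name is mine):
#import <Accelerate/Accelerate.h>

+ (UIImage *)maskImageWithVImage:(UIImage *)image mask:(UIImage *)mask
{
    // Pull the source out as BGRA8888 with an opaque (skipped) alpha byte.
    vImage_CGImageFormat bgraFormat = {
        .bitsPerComponent = 8,
        .bitsPerPixel     = 32,
        .colorSpace       = NULL, // NULL defaults to sRGB
        .bitmapInfo       = kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
    };
    vImage_Buffer pixels;
    if (vImageBuffer_InitWithCGImage(&pixels, &bgraFormat, NULL,
                                     image.CGImage, kvImageNoFlags) != kvImageNoError) {
        return nil;
    }

    // Pull the mask out as an 8-bit planar buffer.
    CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
    vImage_CGImageFormat planarFormat = {
        .bitsPerComponent = 8,
        .bitsPerPixel     = 8,
        .colorSpace       = gray,
        .bitmapInfo       = kCGImageAlphaNone,
    };
    vImage_Buffer alpha;
    vImage_Error err = vImageBuffer_InitWithCGImage(&alpha, &planarFormat, NULL,
                                                    mask.CGImage, kvImageNoFlags);
    CGColorSpaceRelease(gray);
    if (err != kvImageNoError) { free(pixels.data); return nil; }

    // In the little-endian layout above, the alpha byte is the last interleaved
    // channel in memory, so copyMask 0x1 selects it.
    vImageOverwriteChannels_ARGB8888(&alpha, &pixels, &pixels, 0x1, kvImageNoFlags);

    // Rebuild a CGImage, now declaring a real (non-premultiplied) alpha channel.
    vImage_CGImageFormat outFormat = bgraFormat;
    outFormat.bitmapInfo = kCGImageAlphaFirst | kCGBitmapByteOrder32Little;
    CGImageRef cgResult = vImageCreateCGImageFromBuffer(&pixels, &outFormat,
                                                        NULL, NULL, kvImageNoFlags, &err);
    free(pixels.data);
    free(alpha.data);

    UIImage *result = cgResult ? [UIImage imageWithCGImage:cgResult] : nil;
    if (cgResult) CGImageRelease(cgResult);
    return result;
}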
You can also try flattening the mask into the image by taking maskedReference above and decoding it directly to BGRA kCGImageAlphaFirst. This only really works well if the image and the mask are the same size. Otherwise some resampling occurs, which is time consuming.
I don't know whether either of these is really going to be faster or not. It would be useful to look at an instruments time profile of where your time is going. vImageOverwriteChannels_ARGB8888 is probably only a tiny bit of the work to be done here. Depending on the format of the original image, quite a lot of work for colorspace conversion and image format conversion can occur behind the scenes in vImageBuffer_InitWithCGImage and vImageCreateCGImageFromBuffer. The key to speed here (and with the competing CG path) is to minimize the workload by making intelligent choices.
Sometimes, trying some things and then filing a bug against Apple if nothing works well can yield an informed response. A trivially reproducible example is usually key.

How to find UIImage Bottleneck

I have an app that uses UIImage objects. Up to this point, I've been using image objects initialized using something like this:
UIImage *image = [UIImage imageNamed:imageName];
using an image in my app bundle. I've been adding functionality to allow users to use imagery from the camera or their library using UIImagePickerController. These images, obviously, can't be in my app bundle, so I initialize the UIImage object a different way:
UIImage *image = [UIImage imageWithContentsOfFile:pathToFile];
This is done after first resizing the image to a size similar to the other files in my app bundle, in both pixel dimensions and total bytes, both using JPEG format (interestingly, PNG was much slower, even at the same file size). In other words, the file pointed to by pathToFile is of similar size to an image in the bundle (the pixel dimensions match, and the compression was chosen so the byte counts were similar).
The app goes through a loop making small pieces from the original image, among other things that are not relevant to this post. My issue is that going through the loop using an image created the second way takes much longer than using an image created the first way.
I realize the first method caches the image, but I don't think that's relevant, unless I'm not understanding how the caching works. If it is the relevant factor, how can I add caching to the second method?
The relevant portion of code that is causing the bottleneck is this:
[image drawInRect:self.imageSquare];
Here, self is a subclass of UIImageView. Its property imageSquare is simply a CGRect defining what gets drawn. This portion is the same for both methods. So why is the second method so much slower with a similarly sized UIImage object?
Is there something I could be doing differently to optimize this process?
EDIT: I changed access to the bundled image to use imageWithContentsOfFile: and the time to run the loop went from about 4 seconds to just over a minute. So it looks like I need to find some way to do caching like imageNamed: does, but with non-bundled files.
UIImage's imageNamed: doesn't simply cache the image; it caches an uncompressed image. The extra time was spent not on reading from local storage into RAM but on decompressing the image.
The solution was to create a new, uncompressed UIImage object and use it for the time-sensitive portion of the code, discarding it when that section is complete. For completeness, here is a copy of the class method that returns an uncompressed UIImage from a compressed one, thanks to another thread. Note that this assumes the data is backed by a CGImage, which is not always true for UIImage objects.
+ (UIImage *)decompressedImage:(UIImage *)compressedImage
{
    CGImageRef originalImage = compressedImage.CGImage;

    // Copying the data provider's bytes forces the image to be decoded.
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(originalImage));
    CGDataProviderRef imageDataProvider = CGDataProviderCreateWithCFData(imageData);
    CFRelease(imageData);

    // Rebuild a CGImage of the same geometry, backed by the decoded bytes.
    CGImageRef image = CGImageCreate(CGImageGetWidth(originalImage),
                                     CGImageGetHeight(originalImage),
                                     CGImageGetBitsPerComponent(originalImage),
                                     CGImageGetBitsPerPixel(originalImage),
                                     CGImageGetBytesPerRow(originalImage),
                                     CGImageGetColorSpace(originalImage),
                                     CGImageGetBitmapInfo(originalImage),
                                     imageDataProvider,
                                     CGImageGetDecode(originalImage),
                                     CGImageGetShouldInterpolate(originalImage),
                                     CGImageGetRenderingIntent(originalImage));
    CGDataProviderRelease(imageDataProvider);

    UIImage *decompressedImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return decompressedImage;
}
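For illustration, a call site could look something like this (the class name ImageUtils is just a placeholder for wherever the method above is declared):
// Decompress once, before the time-sensitive loop.
UIImage *compressed   = [UIImage imageWithContentsOfFile:pathToFile];
UIImage *decompressed = [ImageUtils decompressedImage:compressed];
// ... then inside the loop that slices up the image:
[decompressed drawInRect:self.imageSquare];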

Any way to encode a PNG faster than UIImagePNGRepresentation?

I'm generating a bunch of tiles for CATiledLayer. It takes about 11 seconds to generate 120 tiles at 256 x 256 with 4 levels of detail on an iPhone 4S. The image itself fits within 2048 x 2048.
My bottleneck is UIImagePNGRepresentation. It takes about 0.10-0.15 seconds to generate every 256 x 256 image.
I've tried generating multiple tiles on different background queues, but this only cuts it down to about 9-10 seconds.
I've also tried using the ImageIO framework with code like this:
- (void)writeCGImage:(CGImageRef)image toURL:(NSURL *)url andOptions:(CFDictionaryRef)options
{
    CGImageDestinationRef myImageDest = CGImageDestinationCreateWithURL((__bridge CFURLRef)url,
                                                                        (__bridge CFStringRef)@"public.png", 1, nil);
    CGImageDestinationAddImage(myImageDest, image, options);
    CGImageDestinationFinalize(myImageDest);
    CFRelease(myImageDest);
}
While this produces smaller PNG files (win!), it takes about 13 seconds, 2 seconds more than before.
Is there any way to encode a PNG image from CGImage faster? Perhaps a library that makes use of NEON ARM extension (iPhone 3GS+) like libjpeg-turbo does?
Is there perhaps a better format than PNG for saving tiles that doesn't take up a lot of space?
The only viable option I've been able to come up with is to increase the tile size to 512 x 512. This cuts the encoding time by half. Not sure what that will do to my scroll view though. The app is for iPad 2+, and only supports iOS 6 (using iPhone 4S as a baseline).
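For illustration, if JPEG tiles were acceptable, the writeCGImage: helper above could target a lossy type with a quality hint (the hard-coded public.png UTI inside the helper would need to change; tileImage and tileURL are placeholder names, and I haven't measured this):
#import <MobileCoreServices/MobileCoreServices.h> // for kUTTypeJPEG

// Quality hint for lossy encoders; it has no effect on PNG output.
NSDictionary *options = @{ (__bridge id)kCGImageDestinationLossyCompressionQuality : @0.8 };
[self writeCGImage:tileImage
             toURL:tileURL
        andOptions:(__bridge CFDictionaryRef)options];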
It turns out the reason UIImagePNGRepresentation was performing so poorly is that it was decompressing the original image every time, even though I thought I was creating a new image with CGImageCreateWithImageInRect.
You can see the results in Instruments: notice _cg_jpeg_read_scanlines and decompress_onepass.
I was force-decompressing the image with this:
UIImage *image = [UIImage imageWithContentsOfFile:path];
UIGraphicsBeginImageContext(CGSizeMake(1, 1));
[image drawAtPoint:CGPointZero];
UIGraphicsEndImageContext();
The timing of this was about 0.10 seconds, almost equivalent to the time taken by each UIImagePNGRepresentation call.
There are numerous articles on the internet that recommend drawing as a way to force decompression of an image.
There's an article on Cocoanetics, "Avoiding Image Decompression Sickness", which provides an alternative way of loading the image:
NSDictionary *dict = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
forKey:(id)kCGImageSourceShouldCache];
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)[[NSURL alloc] initFileURLWithPath:path], NULL);
CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)dict);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CFRelease(source);
And now the same process takes about 3 seconds! Using GCD to generate tiles in parallel reduces the time more significantly.
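A sketch of what that parallel pass looks like (tileRects, tilePaths, and image are placeholder names; I'm only illustrating the shape of it):
// Encode tiles concurrently; dispatch_apply blocks until every iteration is done.
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(tileRects.count, queue, ^(size_t i) {
    CGRect rect = [tileRects[i] CGRectValue];
    CGImageRef tile = CGImageCreateWithImageInRect(image.CGImage, rect);
    NSData *png = UIImagePNGRepresentation([UIImage imageWithCGImage:tile]);
    [png writeToFile:tilePaths[i] atomically:YES];
    CGImageRelease(tile);
});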
The writeCGImage function above takes about 5 seconds. Since the file sizes are smaller, I suspect the zlib compression is set to a higher level.

Dynamic image rendering on iOS

I have a programming task for an application I am writing for the iPad and the documentation is not clear about how to go about doing this. I am hoping for some good advice on approaching this problem.
Basically, I have a memory buffer that stores raw RGB for a 256x192-pixel image. This image will be written to regularly, and I want to display it in a 768x576-pixel area of the screen on each update call. I would like this to be relatively quick, and perhaps optimise it by only processing the areas of the image that actually change.
How would I go about doing this? My initial thought is to create a CGBitmapContext to manage the 256x192 image, then create a CGImage from it, then a UIImage from that, and set the image property of a UIImageView instance. This sounds like a rather slow process.
Am I on the right lines, or should I be looking at something different? Another note is that this image must co-exist with other UIKit views on the screen.
Thanks for any help you can provide.
In my experience, obtaining an image from a bitmap context is actually very quick. The real performance hit, if any, will be in the drawing operations themselves. Since you are scaling the resultant image, you might obtain better results by creating the bitmap context at the final size, and drawing everything scaled to begin with.
If you do use a bitmap context, however, you must make sure to add an alpha channel (RGBA or ARGB), as CGBitmapContext does not support just RGB.
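A minimal sketch of that setup (the 768 x 576 size comes from the question; imageView is a placeholder name, and whether you add retina scaling is up to you):
// Create the context once at the final display size, with an alpha channel (RGBA).
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL,      // let CG allocate the backing store
                                             768, 576,  // final size in pixels
                                             8,         // bits per component
                                             768 * 4,   // bytes per row
                                             colorSpace,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);

// On each update: draw the changed region(s), scaled up from 256 x 192, then snapshot.
// ... CGContextDrawImage / CGContextFillRect calls go here ...
CGImageRef snapshot = CGBitmapContextCreateImage(context);
imageView.image = [UIImage imageWithCGImage:snapshot];
CGImageRelease(snapshot);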
OK, I've come up with a solution. Thanks Justin for giving me the confidence to use the bitmap contexts. In the end I used this bit of code:
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CFDataRef data = CFDataCreateWithBytesNoCopy(kCFAllocatorDefault, (UInt8*)screenBitmap, sizeof(UInt32)*256*192, kCFAllocatorNull);
CGDataProviderRef provider = CGDataProviderCreateWithCFData(data);
CGColorSpaceRef colourSpace = CGColorSpaceCreateDeviceRGB();
CGImageRef image = CGImageCreate(256, 192, 8, 32, sizeof(UInt32)*256, colourSpace, bitmapInfo, provider, 0, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(colourSpace);
CGDataProviderRelease(provider);
CFRelease(data);
self.image = [UIImage imageWithCGImage:image];
CGImageRelease(image);
Also note that screenBitmap is my UInt32 array of size 256x192, and self is a UIImageView-derived object. This code works well, but is it the right way of doing it?
