Any way to encode a PNG faster than UIImagePNGRepresentation? - ios

I'm generating a bunch of tiles for CATiledLayer. It takes about 11 seconds to generate 120 tiles at 256 x 256 with 4 levels of detail on an iPhone 4S. The image itself fits within 2048 x 2048.
My bottleneck is UIImagePNGRepresentation. It takes about 0.10-0.15 seconds to generate every 256 x 256 image.
I've tried generating multiple tiles on different background queues, but this only cuts it down to about 9-10 seconds.
I've also tried using the ImageIO framework with code like this:
- (void)writeCGImage:(CGImageRef)image toURL:(NSURL *)url andOptions:(CFDictionaryRef)options
{
    CGImageDestinationRef myImageDest = CGImageDestinationCreateWithURL((__bridge CFURLRef)url, (__bridge CFStringRef)@"public.png", 1, nil);
    CGImageDestinationAddImage(myImageDest, image, options);
    CGImageDestinationFinalize(myImageDest);
    CFRelease(myImageDest);
}
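For context, a call site might look roughly like this (a sketch; fullImage, tileRect, and tilePath are illustrative names, not from the original code):

    CGImageRef tile = CGImageCreateWithImageInRect(fullImage, tileRect);   // crop one 256 x 256 tile
    NSURL *tileURL = [NSURL fileURLWithPath:tilePath];
    [self writeCGImage:tile toURL:tileURL andOptions:NULL];                // options may be NULL
    CGImageRelease(tile);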
While this produces smaller PNG files (win!), it takes about 13 seconds, 2 seconds more than before.
Is there any way to encode a PNG image from CGImage faster? Perhaps a library that makes use of NEON ARM extension (iPhone 3GS+) like libjpeg-turbo does?
Is there perhaps a better format than PNG for saving tiles that doesn't take up a lot of space?
The only viable option I've been able to come up with is to increase the tile size to 512 x 512. This cuts the encoding time by half. Not sure what that will do to my scroll view though. The app is for iPad 2+, and only supports iOS 6 (using iPhone 4S as a baseline).
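For reference, switching to 512 x 512 tiles on the layer side would look something like this (a sketch; tiledView is a hypothetical view whose backing layer is a CATiledLayer):

    CATiledLayer *tiledLayer = (CATiledLayer *)self.tiledView.layer;  // hypothetical CATiledLayer-backed view
    tiledLayer.tileSize = CGSizeMake(512, 512);                       // default is 256 x 256
    tiledLayer.levelsOfDetail = 4;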

It turns out the reason UIImagePNGRepresentation was performing so poorly was that it was decompressing the original image every time, even though I thought I was creating a new image with CGImageCreateWithImageInRect.
You can see the results in Instruments: notice _cg_jpeg_read_scanlines and decompress_onepass.
I was force-decompressing the image with this:
UIImage *image = [UIImage imageWithContentsOfFile:path];
UIGraphicsBeginImageContext(CGSizeMake(1, 1));
[image drawAtPoint:CGPointZero];
UIGraphicsEndImageContext();
The timing of this was about 0.10 seconds, almost equivalent to the time taken by each UIImagePNGRepresentation call.
There are numerous articles on the internet that recommend drawing as a way of force-decompressing an image.
There's an article on Cocoanetics, Avoiding Image Decompression Sickness. The article provides an alternate way of loading the image:
NSDictionary *dict = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
forKey:(id)kCGImageSourceShouldCache];
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)[[NSURL alloc] initFileURLWithPath:path], NULL);
CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)dict);
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);
CFRelease(source);
And now the same process takes about 3 seconds! Using GCD to generate tiles in parallel reduces the time even further.
The writeCGImage function above takes about 5 seconds. Since the file sizes are smaller, I suspect the zlib compression is at a higher level.
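As a rough sketch of the parallel tile generation mentioned above (not the original code; fullImage, tileCount, tileRectForIndex, and tileURLForIndex are assumed helpers):

    // Encode tiles concurrently; dispatch_apply blocks until every iteration has finished.
    dispatch_apply(tileCount, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^(size_t i) {
        CGImageRef tile = CGImageCreateWithImageInRect(fullImage, tileRectForIndex(i));
        [self writeCGImage:tile toURL:tileURLForIndex(i) andOptions:NULL];
        CGImageRelease(tile);
    });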

Related

UIImagePNGRepresentation returns inappropriately large data

We have a UIImage of size 1280*854 and we are trying to save it in PNG format.
NSData *pngData = UIImagePNGRepresentation(img);
The problem is that the size of pngData is 9551944 bytes, which is inappropriately large for the input image size. Even considering 24-bit PNG, at most it should be 1280*854*3 bytes (3 bytes per pixel for 24-bit PNG).
BTW, this only happens with images scaled with UIGraphicsGetImageFromCurrentImageContext. We also noticed that image._scale is set to 2.0 in the image returned by UIGraphicsGetImageFromCurrentImageContext.
Any idea what's wrong?
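One plausible explanation, going by the _scale observation above (an assumption, not a confirmed answer): at scale 2.0 the image is actually 2560*1708 pixels, so the raw data is roughly 2560 * 1708 * 4 bytes (~17 MB), and a ~9.5 MB PNG is less surprising. Pinning the context to scale 1.0 keeps the pixel count at 1280*854, for example:

    // Sketch: render at an explicit 1.0 scale so the bitmap stays 1280 x 854 pixels
    // even on a Retina device. originalImage is a hypothetical name.
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(1280, 854), NO, 1.0);
    [originalImage drawInRect:CGRectMake(0, 0, 1280, 854)];
    UIImage *scaled = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    NSData *pngData = UIImagePNGRepresentation(scaled);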

UIImage takes up much more memory than its NSData

I'm loading a UIImage from NSData with the following code
var image = UIImage(data: data!)
However, there is a weird behavior.
At first, I used PNG data, and the NSData was about 80 kB each.
When I set a UIImage with that data, the UIImage took up 128 kB each.
(Checked with the Allocations instrument: the size of ImageIO_PNG_Data.)
Then I changed to JPEG instead, and the NSData became about 7 kB each.
But still, the UIImage is 128 kB each, so when displaying the image I get no memory advantage! (The NSData shrank from 80 kB to 7 kB, and still the UIImage takes up the same amount of memory.)
It is weird: why should the UIImage take up 128 kB when the original data is just 7 kB?
Can I reduce this memory usage by UIImage without shrinking the size of the UIImage itself?
Note that I'm not dealing with a high-resolution image, so resizing it is not an option (the NSData is already 7 kB!).
Any help will be appreciated.
Thanks!!
When you access the NSData, it is often compressed (with either PNG or JPEG). When you use the UIImage, there is an uncompressed pixel buffer, which is often 4 bytes per pixel (one byte each for red, green, blue, and alpha). There are other formats, but that illustrates the basic idea: the JPEG or PNG representation can be compressed, but once you start using an image, it is uncompressed.
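As a back-of-the-envelope check (a sketch; image stands for the UIImage from the question):

    // Rough decoded footprint: width * height * 4 bytes (RGBA), independent of the NSData size.
    size_t pixelWidth   = CGImageGetWidth(image.CGImage);
    size_t pixelHeight  = CGImageGetHeight(image.CGImage);
    size_t decodedBytes = pixelWidth * pixelHeight * 4;
    // e.g. a 180 x 180 pixel image needs about 180 * 180 * 4 ≈ 127 kB once decoded,
    // whether the file on disk was 80 kB of PNG or 7 kB of JPEG.
    NSLog(@"decoded size: %zu bytes", decodedBytes);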
In your conclusion, you say that resizing is not an option and that the NSData is already 7 kB. I would suggest that resizing should be considered if the resolution of the image is greater than the resolution (the points of the bounds/frame times the scale of the device) of the UIImageView in which you're using it. Whether to resize is not a function of the size of the NSData, but rather of the resolution of the view. So, if you have a 1000x1000 pixel image that you're using in a small thumbnail view in a table view, then regardless of how small the JPEG representation is, you should definitely resize the image.
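A minimal resizing sketch along those lines (imageView and image are illustrative names; it targets the view's point size at the screen's scale):

    // Redraw the image at the view's resolution (points * screen scale) instead of its native size.
    CGSize targetSize = imageView.bounds.size;                       // in points
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, [UIScreen mainScreen].scale);
    [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *resized = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();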
This is normal. When the image is stored as NSData, it is compressed (usually using PNG or JPG compression). When it's a UIImage, the image is decompressed, which allows it to be drawn quickly on the screen.

UIImage, CGImage, backing buffers and memory usage

I have a WebP image. Since there is no native iOS support for it, I use libwebp to decode it into a malloc'd RGB buffer.
uint8_t *rgb = malloc(width * height * 3);
WebPDecode(rgb, ...);
CGDataProviderRef provider = CGDataProviderCreateWithData(rgb, size, ...);
CGImageRef imageRef = CGImageCreate(width, height, provider, ...);
UIImage *image = [UIImage imageWithCGImage:imageRef];
CGDataProviderRelease(provider);
CGImageRelease(imageRef);
This works fine. However, the images I'm working with are extremely large, so the above code allocates a lot of memory, and that uint8_t *rgb buffer sticks around for as long as the image is retained.
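One related detail, not from the original post: CGDataProviderCreateWithData accepts a release callback, which at least ties the malloc'd buffer's lifetime to the CGImage so it is freed automatically when the image is released. A sketch:

    // Free the RGB buffer exactly when Core Graphics is done with it.
    static void FreeRGBBuffer(void *info, const void *data, size_t size)
    {
        free((void *)data);
    }

    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgb, width * height * 3, FreeRGBBuffer);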
Now I also have a JPEG version of the exact same image. In its encoded state the JPEG is about 30% larger than the WebP. Since iOS supports JPEG natively, I can create a UIImage like this:
NSData *jpegData = ...;
UIImage *image = [UIImage imageWithData:jpegData];
I'm not sure what imageWithData: does under the hood, but for some reason it only uses about 1/3 of the memory of the WebP image I created above (as reported by Xcode, on both device and simulator).
My question: if I understand correctly, before an image is rendered to the screen it has to be decoded into its RGB components at some point. If that assumption is correct, shouldn't these two images take up the same amount of memory?
Update: Xcode does not seem to be reporting the real memory being used, but Instruments does. Profiling the app in Instruments, I can see the same amount of memory allocated by malloc and ImageIO_JPEG_Data. Can anyone explain what Xcode's memory report is actually telling me? It seems wildly off.

How to find UIImage Bottleneck

I have an app that uses UIImage objects. Up to this point, I've been using image objects initialized using something like this:
UIImage *image = [UIImage imageNamed:imageName];
using an image in my app bundle. I've been adding functionality to allow users to use imagery from the camera or their library using UIImagePickerController. These images, obviously, can't be in my app bundle, so I initialize the UIImage object a different way:
UIImage *image = [UIImage imageWithContentsOfFile:pathToFile];
This is done after first resizing the image to a size similar to the other files in my app bundle, in both pixel dimensions and total bytes, both using JPEG format (interestingly, PNG was much slower, even for the same file size). In other words, the file pointed to by pathToFile is of similar size to an image in the bundle (pixel dimensions match, and compression was chosen so the byte count was similar).
The app goes through a loop making small pieces from the original image, among other things that are not relevant to this post. My issue is that going through the loop using an image created the second way takes much longer than using an image created the first way.
I realize the first method caches the image, but I don't think that's relevant, unless I'm not understanding how the caching works. If it is the relevant factor, how can I add caching to the second method?
The relevant portion of code that is causing the bottleneck is this:
[image drawInRect:self.imageSquare];
Here, self is a subclass of UIImageView. Its property imageSquare is simply a CGRect defining what gets drawn. This portion is the same for both methods. So why is the second method so much slower with a similar-sized UIImage object?
Is there something I could be doing differently to optimize this process?
EDIT: I changed access to the image in the bundle to imageWithContentsOfFile, and the time to perform the loop changed from about 4 seconds to just over a minute. So it's looking like I need to find some way to do caching like imageNamed does, but with non-bundled files.
UIImage imageNamed doesn't simply cache the image. It caches an uncompressed image. The extra time spent was not caused by reading from local storage to RAM but by decompressing the image.
The solution was to create a new uncompressed UIImage object and use it for the time-sensitive portion of the code. The uncompressed object is discarded when that section of code is complete. For completeness, here is a copy of the class method to return an uncompressed UIImage object from a compressed one, thanks to another thread. Note that this assumes the image data is backed by a CGImage, which is not always true for UIImage objects.
+ (UIImage *)decompressedImage:(UIImage *)compressedImage
{
    CGImageRef originalImage = compressedImage.CGImage;
    CFDataRef imageData = CGDataProviderCopyData(CGImageGetDataProvider(originalImage));
    CGDataProviderRef imageDataProvider = CGDataProviderCreateWithCFData(imageData);
    CFRelease(imageData);
    CGImageRef image = CGImageCreate(CGImageGetWidth(originalImage),
                                     CGImageGetHeight(originalImage),
                                     CGImageGetBitsPerComponent(originalImage),
                                     CGImageGetBitsPerPixel(originalImage),
                                     CGImageGetBytesPerRow(originalImage),
                                     CGImageGetColorSpace(originalImage),
                                     CGImageGetBitmapInfo(originalImage),
                                     imageDataProvider,
                                     CGImageGetDecode(originalImage),
                                     CGImageGetShouldInterpolate(originalImage),
                                     CGImageGetRenderingIntent(originalImage));
    CGDataProviderRelease(imageDataProvider);
    UIImage *decompressedImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return decompressedImage;
}
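For example, the time-sensitive loop might use it like this (a sketch; pieceCount is illustrative, and the receiver is whichever class hosts the method above):

    // Decompress once, draw many times, then let the uncompressed copy be released.
    UIImage *fastImage = [[self class] decompressedImage:image];
    for (NSUInteger i = 0; i < pieceCount; i++) {
        [fastImage drawInRect:self.imageSquare];   // same call as before, now backed by already-decoded pixels
    }
    fastImage = nil;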

NSData length of an Image compressed with UIImageJPEGRepresentation()

We know an image can be compressed with the method UIImageJPEGRepresentation(), as in the following code.
NSData *imgData = UIImageJPEGRepresentation(imageResized, 0.5);
NSLog(@"imgData.length :%d", imgData.length);
imageResized = [UIImage imageWithData:imgData];
NSData *imgData2 = UIImageJPEGRepresentation(imageResized, 1);
NSLog(@"imgData2.length :%d", imgData2.length);
The log is:
2013-02-25 00:33:14.756 MyApp[1119:440b] imgData.length :371155
2013-02-25 00:33:20.988 MyApp[1119:440b] imgData2.length :1308415
What I'm confused about is why the lengths of imgData and imgData2 are different. In my app, the image should be uploaded to a server. Should I upload the NSData to the server to save storage? Is it possible for an Android phone to download the NSData and convert it to an image? Any help will be appreciated!
You start with a UIImage of some size (say 1024x768). This takes 1024x768x4 bytes in memory. Then you compress it with a factor of 0.5 and get 371,155 bytes.
You then create a new UIImage with the compressed data. This is still a 1024x768 (or whatever) UIImage, so it again takes the same amount of memory (1024x768x4 bytes) as the original image. You then convert it to a new JPEG with less compression, giving you 1,308,415 bytes.
Even though you create an uncompressed version of the compressed image, the number of bytes comes from converting the full-sized UIImage. The second, less-compressed JPEG, though bigger, will still have the same lower quality as the first compressed image.
Since your data represents a JPG, anything that downloads the data will be able to treat the data as a JPG, including an Android phone.
The number of bytes is bigger for the second image because you passed a much higher compression quality value to UIImageJPEGRepresentation. Higher quality takes more bytes.
The file once uploaded to a server will be a standard JPEG file, viewable by any device, including Android.
