Save UIImage representation as PNG8, not PNG24?

How can a UIImage be saved as PNG8 (8-bit color mode) instead of PNG24 (24-bit color mode)?
The goal is to save space when dynamically saving many UIWebView screenshots that have less than 255 colors.
Right now I'm using UIImagePNGRepresentation after reducing the physical image size (Ref: here), but the images are still large.
Using UIImageJPEGRepresentation with a quality < 1.f is not desired. Any other ideas?
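For reference, a minimal sketch of the current approach described above (scale the screenshot down, then encode with UIImagePNGRepresentation). The helper name and the opaque/1x context options are illustrative assumptions, and the output is still a 24/32-bit PNG rather than PNG8:

// Hypothetical helper illustrating the current approach: shrink the
// screenshot, then encode it as a (24/32-bit) PNG with UIKit.
- (NSData *)pngDataForScreenshot:(UIImage *)screenshot scaledBy:(CGFloat)factor
{
    CGSize newSize = CGSizeMake(screenshot.size.width * factor,
                                screenshot.size.height * factor);
    UIGraphicsBeginImageContextWithOptions(newSize, YES, 1.0);   // opaque, 1x scale
    [screenshot drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *smaller = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return UIImagePNGRepresentation(smaller);                    // still PNG24/32, not PNG8
}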

Related

UIImage takes up much more memory than its NSData

I'm loading a UIImage from NSData with the following code
var image = UIImage(data: data!)
However, there is a weird behavior.
At first, I used PNG data, and each NSData was about 80 kB.
When I created a UIImage from that data, each UIImage took up 128 kB.
(Checked with the Allocations instrument, as the size of ImageIO_PNG_Data.)
Then I switched to JPEG instead, and each NSData became about 7 kB.
But the UIImage is still 128 kB each, so I get no memory advantage when displaying the image! (The NSData shrank from 80 kB to 7 kB, yet the UIImage takes up the same amount of memory.)
It seems strange: why should the UIImage take up 128 kB when the original data is just 7 kB?
Can I reduce this memory usage by UIImage without shrinking the dimensions of the UIImage itself?
Note that I'm not dealing with a high-resolution image, so resizing it is not an option (the NSData is already only 7 kB!).
Any help will be appreciated.
Thanks!!
When you access the NSData, it is usually compressed (with either PNG or JPEG). When you use the UIImage, there is an uncompressed pixel buffer, which is often 4 bytes per pixel (one byte each for red, green, blue, and alpha). There are other formats, but that illustrates the basic idea: the JPEG or PNG representation can be compressed, but once you start using an image, it is uncompressed in memory.
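As a rough sanity check (a sketch only; the actual in-memory format can vary, and the imageData variable name is illustrative), you can estimate the decompressed footprint from the pixel dimensions:

// Rough estimate, assuming an uncompressed buffer of 4 bytes per pixel.
UIImage *image = [UIImage imageWithData:imageData];
size_t pixelWidth  = (size_t)(image.size.width  * image.scale);
size_t pixelHeight = (size_t)(image.size.height * image.scale);
size_t approxBytes = pixelWidth * pixelHeight * 4;
NSLog(@"compressed: %lu bytes, decompressed: ~%zu bytes",
      (unsigned long)imageData.length, approxBytes);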
In your conclusion, you say that resizing is not an option and that the NSData is already 7 kB. I would suggest that resizing should be considered whenever the resolution of the image is greater than the resolution of the UIImageView in which you're using it (the points of its bounds/frame times the scale of the device). Whether to resize is not a function of the size of the NSData, but rather of the resolution of the view. So, if you have a 1000x1000 pixel image that you're using in a small thumbnail view in a table view, then regardless of how small the JPEG representation is, you should definitely resize the image.
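For example, a small sketch of that check (assuming a UIImageView named imageView and the main screen's scale):

// Compare the image's pixel dimensions with the image view's pixel dimensions
// (points times screen scale) to decide whether downscaling is worthwhile.
CGFloat screenScale = [UIScreen mainScreen].scale;
CGFloat viewPixelWidth  = CGRectGetWidth(imageView.bounds)  * screenScale;
CGFloat viewPixelHeight = CGRectGetHeight(imageView.bounds) * screenScale;
BOOL shouldResize = image.size.width  * image.scale > viewPixelWidth ||
                    image.size.height * image.scale > viewPixelHeight;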
This is normal. When the image is stored as NSData, it is compressed (usually using PNG or JPG compression). When it's a UIImage, the image is decompressed, which allows it to be drawn quickly on the screen.

Bigger size image in smaller image view

This is more of a logical question; everything is working fine.
I have an image view, and for it I download images from the web server. Our web server keeps the biggest size of each image, and I rescale the images down as required for the device. So let's say I have a UIImageView of size 200 x 200 and I download an image of 400 x 400: I rescale the image to 200 x 200 and then put it in the image view. I also tried putting the 400 x 400 image directly into the 200 x 200 image view, and it looks fine to me (no pixelation). The way I implemented the downscaling is
[image drawInRect:CGRectMake(0,0,newSize.width,newSize.height)];
within an image context. Now I suspect Apple might be doing this anyway, because it rescales my image to fit the image view, so is my own downscaling really required? Or should I just put the high-resolution images directly in the image view?
Suggestions required.
You should be fine to just assign a 400x400 UIImage to a 200x200 UIImageView. CoreAnimation will deal with the image scaling underneath.
Image Quality
If you want to experiment with different image scaling qualities, you can set the minificationFilter on the UIImageView's layer. The default is kCAFilterLinear, which I think would be fine for your usage. Multiple pixels from the 400x400 image will be selected and linearly blended together to get the 200x200 image pixel color. kCAFilterNearest will get you better performance at the cost of image quality (a single pixel from the 400x400 image is selected to get the color for the 200x200 image pixel).
You could experiment with kCAFilterTrilinear instead, which should get you better image quality at the cost of some performance. The documentation doesn't make it clear on which devices this actually has an effect, although one developer reported success using it on an iPad 2, which makes me think it may be supported on all devices now. The documentation also notes that your image may need dimensions that are a power of 2 for it to have an effect.
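For example (the filter constants below are the standard CALayer ones; kCAFilterTrilinear may simply be ignored on hardware that doesn't support it):

#import <QuartzCore/QuartzCore.h>

// Default behaviour: several source pixels are blended linearly when scaling down.
imageView.layer.minificationFilter = kCAFilterLinear;

// Faster, lower quality: a single nearest source pixel is used.
// imageView.layer.minificationFilter = kCAFilterNearest;

// Potentially better quality via mipmapping, at some performance cost.
// imageView.layer.minificationFilter = kCAFilterTrilinear;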
Performance
You could scale the image down to 200x200 yourself, perhaps as a performance optimization to save memory and CoreAnimation render time (including the image scaling), but I wouldn't do that unless you have reason to think your app's performance would actually benefit from it.

iOS - Save UIImage as a greyscale JPEG

In my app, I convert and process images:
from colour to greyscale, then operations such as histogram equalisation, filtering, etc.
That part works fine.
My UIImages display correctly, and I also save them to JPEG files, which works.
The only problem is that, although my images are now greyscale, they are still saved as RGB JPEGs. That is, the red, green, and blue values for each pixel are the same, but keeping the duplicated values still wastes space, making the file size larger than it needs to be.
So when I open the image file in Photoshop, it is black and white, but when I check "Photoshop > Image > Mode", it still says "RGB" instead of "Greyscale".
Anyone know how to tell iOS that the UIImageJPEGRepresentation call should create data with one channel per pixel instead of 4?
Thanks in advance.
You should do an explicit conversion of your image into a bitmap context that uses CGColorSpaceCreateDeviceGray() as its color space, i.e. 8 bits per component and a single channel.
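A sketch of such a conversion (assuming an opaque image; error handling omitted, and it's worth verifying that UIImageJPEGRepresentation preserves the single-channel color space for your images):

// Redraw the image into an 8-bit, single-channel grayscale bitmap context,
// then encode that grayscale CGImage as JPEG.
- (NSData *)grayscaleJPEGDataForImage:(UIImage *)image quality:(CGFloat)quality
{
    CGImageRef source = image.CGImage;
    size_t width  = CGImageGetWidth(source);
    size_t height = CGImageGetHeight(source);

    CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray();
    CGContextRef context = CGBitmapContextCreate(NULL, width, height,
                                                 8,    // bits per component
                                                 0,    // let CG pick bytes per row
                                                 graySpace,
                                                 (CGBitmapInfo)kCGImageAlphaNone);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), source);
    CGImageRef grayImage = CGBitmapContextCreateImage(context);

    NSData *jpegData = UIImageJPEGRepresentation([UIImage imageWithCGImage:grayImage], quality);

    CGImageRelease(grayImage);
    CGContextRelease(context);
    CGColorSpaceRelease(graySpace);
    return jpegData;
}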

Poor memory management performance for images on ios devices

I have the following issue:
I have a primary view object (that inherits from UIView) that displays a grid of 16 squares (each is a class I created that inherits from UIImageView), in a 4x4 layout.
Each of these 16 squares is 160x160, and contains an image (a different image for each square) that is no bigger than 30kb. The image, however, is 500x500 (because it is used elsewhere in the program, in its full size), so it gets resized in the "square" class to 160x160, by the setFrame method.
By looking at the memory management feature of Xcode when the app is running, I've noticed a few things:
Each of these squares, when added to the primary view object, increases the memory usage of the app by 1 MB. This doesn't happen at instantiation, but only when they are added by [self addSubview:square] in the primary view object.
If I use the same image for all the squares, the memory increase is minimal. If I initialize the square objects without any images, the increase is basically zero.
The same app, when running in the simulator, uses 1/6 of the memory it does on an actual device.
The whole point here is: why does each of the squares use up 1 MB of memory when loading a 30 kB image? Is there a way to reduce this? I've tried creating the images in a number of different ways: [UIImage imageNamed:img], [UIImage imageWithContentsOfFile:path], [UIImage imageWithData:imgData scale:scale], as well as not resizing the frame.
When you use a 500x500 image in a smaller UIImageView, it still loads the larger image into memory. You can solve this by resizing the UIImage itself (not just adjusting the frame of the UIImageView), making a 160x160 image, and using that image in your view. See this answer for some code to resize the image, which can then be invoked as follows:
UIImage *smallImage = [image scaleImageToSizeAspectFill:CGSizeMake(160, 160)];
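The linked answer isn't reproduced here, but a rough sketch of such a category method on UIImage might look like the following (the method name comes from the call above; this particular implementation is an assumption):

// Draw the receiver into a smaller bitmap context, scaled to fill the
// target size while preserving aspect ratio (overflow is cropped).
- (UIImage *)scaleImageToSizeAspectFill:(CGSize)targetSize
{
    CGFloat scale = MAX(targetSize.width  / self.size.width,
                        targetSize.height / self.size.height);
    CGSize scaledSize = CGSizeMake(self.size.width * scale, self.size.height * scale);
    CGRect drawRect = CGRectMake((targetSize.width  - scaledSize.width)  / 2.0,
                                 (targetSize.height - scaledSize.height) / 2.0,
                                 scaledSize.width, scaledSize.height);

    // 0.0 uses the device's screen scale; pass 1.0 for an exact
    // targetSize-pixel bitmap if minimizing memory is the priority.
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0.0);
    [self drawInRect:drawRect];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}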
You might even want to save the resized image, so you're not constantly encumbering yourself with the computational overhead of creating the smaller images every time, e.g.:
NSData *data = UIImagePNGRepresentation(smallImage);
[data writeToFile:path atomically:YES];
You can then load that PNG file corresponding to your small image in future invocations of the view.
In answer to your question of why it takes up so much memory: while the image is probably stored as a compressed JPEG or PNG in persistent storage, in memory I suspect it's held as an uncompressed bitmap. There are many internal formats, but a common one is a 32-bit format with 8 bits each for red, green, blue, and alpha. Regardless of the specifics, you can quickly see how a 500 x 500 pixel representation, at 4 bytes per pixel, translates to roughly 1 MB of memory. A 160 x 160 image should be roughly one tenth of that.

Does UIImage Cache the File or the Rendered Bitmap?

I generally use Fireworks PNGs (with different layers, some hidden, etc.) in my iOS projects (loaded into NIBs in UIImageView instances). Often, I take the PNG and resave it as PNG-32 to make the file smaller, but I'm questioning this now (because then I have to store the Fireworks PNG separately)....
In my cursory tests, a smaller file size does NOT affect the resultant memory use. Is there a relationship, or is it the final rendered bitmap that matters?
Note: I'm not asking about using an image that is too big in pixels. A valid comparison would be a high-quality JPEG that weighs 1 MB vs. a low-quality JPEG of the same content that weighs 100 kB. Is the memory use the same?
UIImageView does not do any processing, so if you set a large image, the whole thing is loaded into memory when the image view needs it, regardless of the size of the image view. So, yes, it does make a difference: you should store the smallest images that work within the image view.
While your current example uses NIBs, if you were creating an app that displays large images acquired from other sources (e.g. the device camera or an external service), you would scale those to a display size before using them in a UIImageView.
