UIImage takes up much more memory than its NSData - iOS

I'm loading a UIImage from NSData with the following code
var image = UIImage(data: data!)
However, there is a weird behavior.
At first, I used PNG data, and the NSData was about 80 KB each.
When I set a UIImage with that data, the UIImage took up 128 KB each.
(Checked with the Allocations instrument; the size of ImageIO_PNG_Data.)
Then I switched to JPEG instead, and the NSData became about 7 KB each.
But still, the UIImage is 128 KB each, so when displaying the image I get no memory advantage! (The NSData shrank from 80 KB to 7 KB, yet the UIImage takes up the same amount of memory.)
It is weird: why should the UIImage take up 128 KB when the original data is just 7 KB?
Can I reduce this memory usage by UIImage without shrinking the dimensions of the UIImage itself?
Note that I'm not dealing with high-resolution images, so resizing the image is not an option (the NSData is already only 7 KB!).
Any help will be appreciated.
Thanks!!

When you access the NSData, it is generally compressed (with either PNG or JPEG). When you use the UIImage, there is an uncompressed pixel buffer, which is often 4 bytes per pixel (one byte each for red, green, blue, and alpha). There are other formats, but that illustrates the basic idea: the JPEG or PNG representation can be compressed, whereas once you start using an image, it is uncompressed.
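As a back-of-the-envelope check, a minimal Swift sketch of that estimate (assuming the common 4-bytes-per-pixel RGBA backing; the actual internal format can differ):
import UIKit

// Rough estimate of the decompressed footprint of a UIImage,
// assuming a 4-bytes-per-pixel (RGBA) backing store.
func estimatedDecompressedBytes(of image: UIImage) -> Int {
    let pixelWidth = Int(image.size.width * image.scale)
    let pixelHeight = Int(image.size.height * image.scale)
    return pixelWidth * pixelHeight * 4
}
This is why a 7 KB JPEG and an 80 KB PNG of the same image end up costing the same once decoded: the pixel dimensions, not the file size, determine the in-memory footprint.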
In your conclusion, you say that resizing is not an option and that the NSData is already 7 KB. I would suggest that resizing should be considered if the resolution of the image is greater than the resolution of the UIImageView in which you're using it (the points of its bounds/frame times the scale of the device). Whether to resize is not a function of the size of the NSData, but rather of the resolution of the view. So, if you have a 1000x1000 pixel image that you're using in a small thumbnail view in a table view, then regardless of how small the JPEG representation is, you should definitely resize the image.
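For example, a minimal Swift sketch of downscaling to an image view's point size (the 100x100 target here is just an assumption; UIGraphicsImageRenderer applies the screen scale for you by default):
import UIKit

// Downscale an image to a target point size; the renderer handles the device scale.
func resized(_ image: UIImage, to targetSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

// e.g. imageView.image = resized(largeImage, to: CGSize(width: 100, height: 100))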

This is normal. When the image is stored as NSData, it is compressed (usually using PNG or JPG compression). When it's a UIImage, the image is decompressed, which allows it to be drawn quickly on the screen.

Related

UIImagePNGRepresentation returns inappropriately large data

We have a UIImage of size 1280*854 and we are trying to save it in PNG format.
NSData *pngData = UIImagePNGRepresentation(img);
The problem is that the size of pngData is 9551944 bytes, which seems inappropriately large for the input image size. Even for 24-bit PNG, it should be at most about 1280*854*3 (3 bytes per pixel for 24-bit PNG).
BTW, this is only happening with images scaled with UIGraphicsGetImageFromCurrentImageContext. We also noticed that image._scale is set to 2.0 in the image returned by UIGraphicsGetImageFromCurrentImageContext.
Any idea what's wrong?
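For reference, a quick sketch of the arithmetic once that 2.0 scale is factored in (assuming the context produced a 32-bit RGBA bitmap, which is typical for UIGraphicsGetImageFromCurrentImageContext):
import UIKit

// image._scale == 2.0 means the pixel dimensions are doubled in each direction.
let pointSize = CGSize(width: 1280, height: 854)
let scale: CGFloat = 2.0
let pixelWidth = Int(pointSize.width * scale)        // 2560
let pixelHeight = Int(pointSize.height * scale)      // 1708
let uncompressedBytes = pixelWidth * pixelHeight * 4 // about 17.5 MB of raw RGBA
So the naive 1280*854*3 estimate undershoots the real pixel data by more than a factor of five before PNG compression is even applied.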

Save UIImage representation as PNG8, not PNG24?

How can a UIImage be saved as PNG8 (8-bit color mode) instead of PNG24 (24-bit color mode)?
The goal is to save space when dynamically saving many UIWebView screenshots that have less than 255 colors.
Right now I'm using UIImagePNGRepresentation after reducing the physical image size (Ref: here), but the images are still large.
Using UIImageJPEGRepresentation with a quality < 1.f is not desired. Any other ideas?

Poor memory management performance for images on iOS devices

I have the following issue:
I have a primary view object (that inherits from UIView) that displays a grid of 16 squares (each is a class I created that inherits from UIImageView), in a 4x4 layout.
Each of these 16 squares is 160x160 and contains an image (a different image for each square) that is no bigger than 30 KB. The image, however, is 500x500 (because it is used elsewhere in the program at its full size), so it gets resized in the "square" class to 160x160 by the setFrame method.
By looking at the memory management feature of Xcode when the app is running, I've noticed a few things:
- each of these squares, when added to the primary view object, increases the memory usage of the app by 1 MB. This doesn't happen at instantiation, but only when they are added by [self addSubview:square] in the primary view object.
- if I use the same image for all the squares, the memory increase is minimal. If I initialize the square objects without any images, then the increase is basically zero.
- the same app, when running in the simulator, uses 1/6 of the memory it does on an actual device.
The whole point here is: why is each of the squares using up 1 MB of memory when loading a 30 KB image? Is there a way to reduce this? I've tried creating the images in a number of different ways: [UIImage imageNamed:img], [UIImage imageWithContentsOfFile:path], [UIImage imageWithData:imgData scale:scale], as well as not resizing the frame.
When you use a 500x500 image in a smaller UIImageView, it's still loading the full-size image into memory. You can solve this by resizing the UIImage itself (not just adjusting the frame of the UIImageView), making a 160x160 image, and using that image in your view. See this answer for some code to resize the image, which can then be invoked as follows:
UIImage *smallImage = [image scaleImageToSizeAspectFill:CGSizeMake(160, 160)];
You might even want to save the resized image, so you don't incur the computational overhead of creating the smaller image every time, e.g.:
NSData *data = UIImagePNGRepresentation(smallImage);
[data writeToFile:path atomically:YES];
You can then load that PNG file corresponding to your small image in future invocations of the view.
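A minimal Swift sketch of the same resize-and-cache idea (the 160x160 target, the cache file name, and the simple draw(in:) resize are assumptions standing in for the code in the linked answer; aspect-ratio handling is omitted for brevity):
import UIKit

// Create a 160x160 thumbnail and cache it to disk so it only has to be computed once.
func cachedThumbnail(for image: UIImage, named name: String) -> UIImage? {
    let cachesURL = FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
    let fileURL = cachesURL.appendingPathComponent("\(name)-thumb.png")

    if let cached = UIImage(contentsOfFile: fileURL.path) {
        return cached   // already resized on a previous run
    }

    let targetSize = CGSize(width: 160, height: 160)
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    let thumbnail = renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }

    try? thumbnail.pngData()?.write(to: fileURL)
    return thumbnail
}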
In answer to your question of why it takes up so much memory: while the image is probably stored as a compressed JPEG or PNG in persistent storage, I suspect that in memory it's held as an uncompressed bitmap. There are many internal formats, but a common one is a 32-bit format with 8 bits each for red, green, blue, and alpha. Regardless of the specifics, you can quickly see how a 500 x 500 pixel representation, at 4 bytes per pixel, could translate to about 1 MB of memory. A 160 x 160 image should be roughly one tenth the size.

NSData length of an Image compressed with UIImageJPEGRepresentation()

We know an image can be compressed with the method UIImageJPEGRepresentation(), as in the following code.
NSData *imgData = UIImageJPEGRepresentation(imageResized, 0.5);
NSLog(#"imgData.length :%d",imgData.length);
imageResized = [UIImage imageWithData:imgData];
NSData *imgData2 = UIImageJPEGRepresentation(imageResized, 1);
NSLog(#"imgData2.length :%d",imgData2.length);
The log is:
2013-02-25 00:33:14.756 MyApp[1119:440b] imgData.length :371155
2013-02-25 00:33:20.988 MyApp[1119:440b] imgData2.length :1308415
What I'm confused about is why the lengths of imgData and imgData2 are different. In my app, the image should be uploaded to the server. Should I upload the NSData to the server to save storage? Is it possible for an Android phone to download the NSData and convert it to an image? Any help will be appreciated!
You start with a UIImage of some size (say 1024x768). This takes 1024x768x4 bytes in memory. Then you compress it with a factor of 0.5 and get 371,155 bytes.
You then create a new UIImage with the compressed data. This is still a 1024x768 (or whatever) UIImage, so it again takes the same amount of memory (1024x768x4) as the original image. You then convert it to a new JPEG with less compression, giving you 1,308,415 bytes.
Even though you re-encode from the decompressed image, the byte count comes from converting the full-sized UIImage. The second, less compressed JPEG, though bigger, still has only the lower quality of the data it was decoded from.
Since your data represents a JPG, anything that downloads the data will be able to treat the data as a JPG, including an Android phone.
The number of bytes is bigger for the second image because you passed a much higher compression quality value to UIImageJPEGRepresentation. Higher quality takes more bytes.
The file once uploaded to a server will be a standard JPEG file, viewable by any device, including Android.
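If you do decide to upload the JPEG bytes directly, a minimal Swift sketch (the URL is hypothetical and error handling is kept to a bare minimum):
import UIKit

// Upload the JPEG bytes as-is; any client, including Android, can decode them as a normal JPEG.
func upload(imageData: Data) {
    let url = URL(string: "https://example.com/upload")!   // hypothetical endpoint
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("image/jpeg", forHTTPHeaderField: "Content-Type")

    URLSession.shared.uploadTask(with: request, from: imageData) { _, response, error in
        if let error = error {
            print("Upload failed: \(error)")
        } else {
            print("Upload finished: \(String(describing: response))")
        }
    }.resume()
}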

How to determine the number of bytes used by a UIImage?

I would like to be able to calculate the total number of bytes a UIImage uses in memory.
I can make a rough estimate by multiplying the width by the height and then by a multiplier number of bytes, but I'd like to calculate the size exactly if possible.
In general, objects don't have a single meaningful "size", since they can allocate and release any number of other objects privately as needed. sizeof(*myObj) only gives you the size of the top level structure, not a very useful number. If you need the complete memory impact of allocating and using an object, run under Instruments and watch allocations.
For a UIImage, its practical size is the size of whatever is backing it, typically either an NSData containing a PNG, or a CGImageRef, plus the object overhead. (There's also the pixel buffer when it gets rendered to the screen or another context, but that buffer belongs to the view or context in question, not the UIImage. If a UIView is doing the rendering, then that buffer is likely in GL texture memory anyway.)
[UIImage imageWithData:[NSData dataWithContentsOfFile:@"foo.png"]] gives you a UIImage that is the same size as the foo.png file, plus some inconsequential overhead. [UIImage imageNamed:@"foo.png"] does the same thing, except that the class maintains a cache table of one object per filename, and will cause that object to dump its in-memory copy of the PNG in low-memory situations, reducing its "size" to just the overhead.
imageWithCGImage: and its variants give you a UIImage that uses a CGImage reference as its backing store, and CGImages can be any number of things depending on their source. If you've been painting into one, it's probably an uncompressed pixel buffer; calculate its size exactly as you propose above. If you need what its size "would be" if it came from a file, inspect the result of the UIImagePNGRepresentation or UIImageJPEGRepresentation functions.
Width * height * 4 will get you close. I'm not sure there's a way to get the exact size, since width is rounded out to an arbitrary, undocumented boundary (at least 4 pixels or 16 bytes, I gather), and there are several extra internal pieces of the object that you'd need to count. Plus likely there are internal attributes that are hung on the object or not, based on its use.
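If the image is backed by a CGImage, a closer estimate (still a sketch, and still excluding the per-object overhead mentioned above) can use the actual row stride, which accounts for any row padding:
import UIKit

// Estimate the size of a UIImage's pixel buffer from its CGImage backing store.
func backingBufferBytes(of image: UIImage) -> Int? {
    guard let cgImage = image.cgImage else { return nil }
    return cgImage.bytesPerRow * cgImage.height
}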
I had to solve this for a twitter app I was writing. Twitter rejects images larger than 3MB, so I needed to compress the image just enough to get below the 3MB limit. Here is the code snippet I used:
float compression = 1.0f;
NSData* data = UIImageJPEGRepresentation(photo, compression);
while(data.length > 3145728) //3MB
{
compression -= .1f;
NSLog(#"Compressing Image to: %lf", compression);
data = UIImageJPEGRepresentation(photo, compression);
NSLog(#"Image Bytes: %i", data.length);
}
The compression algorithm I used is non-optimized.
So, what is it doing?
Good question! The UIImageJPEGRepresentation method returns a byte array. To get the size, simply check the length of the array!
There is also a UIImagePNGRepresentation method. Keep in mind that these methods have to build byte arrays and, if needed, convert the binary representation of the data, which can take a bit of time. Luckily, in my case, most images taken by the iPhone are already less than 3 MB and only need the compression loop if there is a wide range of colors; but calling the UIImageJPEGRepresentation method repeatedly (which could happen in my posted code) can take some time.
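A rough Swift equivalent of the same loop, for comparison (a sketch; jpegData(compressionQuality:) is the modern counterpart of UIImageJPEGRepresentation, and the 3 MB limit and 0.1 quality floor are assumptions):
import UIKit

// Repeatedly lower the JPEG quality until the encoded data fits under a byte limit.
func jpegData(for photo: UIImage, maxBytes: Int = 3_145_728) -> Data? {
    var compression: CGFloat = 1.0
    var data = photo.jpegData(compressionQuality: compression)
    while let current = data, current.count > maxBytes, compression > 0.1 {
        compression -= 0.1
        data = photo.jpegData(compressionQuality: compression)
    }
    return data
}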
