Should I use UIImage or CGImage in a Swift iOS App?

All the code I can find revolves around loading images directly into visual controls.
However, I have my own cache system (I am converting a project from another language), so, as efficiently as possible, I want to do the following:
Load jpg/png images, probably into a bitmap / CGImage (this can be either from the file system or from images downloaded online)
Possibly save an image back as a compressed/resized png/jpg file
Supply an image reference for a visual control
I am new to Swift and the iOS platform, but as far as I can tell, CGImage is as close as it gets? However, there does not appear to be a way to load an image from the file system when using CGImage... But I have found people discussing ways to do this for e.g. UIImage, so I am now doubting my initial impression that CGImage was the best match for my needs.

It is easy to get confused between UIImage, CGImage and CIImage. The differences are as follows:
UIImage: A UIImage object is a high-level way to display image data. You can create images from files, from Quartz image objects, or from raw image data you receive. UIImage objects are immutable, so you must specify an image's properties at initialization time. This also means that these image objects are safe to use from any thread.
Typically you take an NSData object containing a PNG or JPEG representation of an image and convert it to a UIImage.
CGImage: A CGImage can only represent bitmaps. Core Graphics operations such as blend modes and masking require CGImageRefs. If you need to access and change the actual bitmap data, you can use CGImage. It can also be converted to an NSBitmapImageRep.
CIImage: A CIImage is an immutable object that represents an image, but it is not itself a rendered image: it only has the image data associated with it, i.e. all the information necessary to produce an image.
You typically use CIImage objects in conjunction with other Core Image classes such as CIFilter, CIContext, CIColor, and CIVector. You can create CIImage objects from data supplied by a variety of sources, such as Quartz 2D images, Core Video image buffers, etc.
CIImage is required to use the various GPU-optimized Core Image filters. CIImage objects can also be converted to NSBitmapImageReps, and the processing can run on the CPU or the GPU.
In conclusion, UIImage is what you are looking for. The reasons:
You can load an image from device storage into a UIImage
You can load an image from a URL into a UIImage
You can write a UIImage to device storage in your desired format
You can resize the image assigned to a UIImage
Once you have assigned an image to a UIImage, you can use that instance in controls directly, e.g. setting the background of a button, or setting it as the image of a UIImageView
I would have added code samples, but all of these are basic questions which have already been answered on Stack Overflow, and adding code would make this answer unnecessarily long.
Credit for summarizing differences: Randall Leung
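For readers who do want a starting point, here is a minimal Swift sketch of the operations listed above; `fileURL`, `outputURL`, and `imageView` are placeholder names, not part of the original answer:

```swift
import UIKit

// Load from the file system (UIImage(data:) also accepts downloaded Data):
let image = UIImage(contentsOfFile: fileURL.path)

// Save back out, compressed, as JPEG (use pngData() for PNG):
if let jpeg = image?.jpegData(compressionQuality: 0.8) {
    try? jpeg.write(to: outputURL)
}

// Supply the image to a visual control:
imageView.image = image
```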

You can load your image easily into a UIImage object...
NSData *data = [NSData dataWith...];
UIImage *image = [UIImage imageWithData:data];
If you then want to show it in a view, you can use a UIImageView:
UIImageView *imageView = [[UIImageView alloc] init]; // or whatever
...
imageView.image = image;
See more in the UIImage documentation.

Per the documentation at: https://developer.apple.com/documentation/uikit/uiimage
let uiImage = uiImageView.image    // UIImageView.image is optional
let cgImage = uiImage?.cgImage     // the underlying Quartz image, if one exists

Image size optimization for less data usage

I'm working on an Instagram-like app on iOS, and I'm wondering how to optimize the file size of each picture so that users consume as little data as possible when fetching those pictures from my backend. Downloading each file as-is will drain data, since high-resolution pictures are around 1.5 MB each. Is there a way to shrink a picture's file size while preserving its quality as much as possible?
You can store the compressed image bytes as binary data, e.g. as a binary attribute in Core Data or, more elegantly, in a Swift Realm database.
But do you really want to implement this yourself? There are lots of libraries already available for doing it, and they are effective and powerful.
For example, AFNetworking: it can not only downscale images to a UIImageView's currently available size for the device's resolution, but also gives you image-caching flexibility.
See its CocoaPods entry and its GitHub page.
Just try it; you will love it.
You can compress a UIImage by converting it into NSData:
UIImage *rainyImage = [UIImage imageNamed:@"rainy.jpg"];
// The second parameter is the compression quality (0.0 = smallest file, 1.0 = best quality)
NSData *imgData = UIImageJPEGRepresentation(rainyImage, 0.1);
Now use this NSData object to save the image, or convert it back into a UIImage.
To convert it back into a UIImage:
UIImage *image = [UIImage imageWithData:imgData];
Hope this resolves your issue.
Note that JPEG compression is lossy, so you will lose some image quality.
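In Swift, the equivalent of the snippet above uses UIImage's jpegData(compressionQuality:) method, which replaced UIImageJPEGRepresentation in iOS 11; the asset name mirrors the example:

```swift
import UIKit

let rainyImage = UIImage(named: "rainy.jpg")
// 0.1 = heavy compression, 1.0 = best quality
if let imgData = rainyImage?.jpegData(compressionQuality: 0.1) {
    // Save imgData, or turn it back into an image:
    let image = UIImage(data: imgData)
}
```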

How to get NSData representation of UIGraphicsGetImageFromCurrentImageContext() [duplicate]

This question already has answers here:
convert UIImage to NSData
(7 answers)
Closed 7 years ago.
I'm taking a "snapshot" of the image context created with UIGraphicsBeginImageContextWithOptions(UIScreen.mainScreen().bounds.size, true, 0) and eventually creating a UIImage using
var renderedImage = UIGraphicsGetImageFromCurrentImageContext()
However I need to get the NSData representation of this UIImage without using UIImageJPEGRepresentation or UIImagePNGRepresentation (because these produce files that are way larger than the original UIImage). How can I do this?
Image files contain compressed data, while the bitmap behind a UIImage is raw, i.e. not compressed. Therefore the raw data will in almost all cases be larger when written to a file.
More info at another question: convert UIImage to NSData
I'm not sure what you mean by "way larger" than the original UIImage. The data backing the UIImage object is at least as big as the data you would get by converting it into a JPG, and roughly equivalent to the data you would get by converting it to a PNG.
The rendered image will be twice the screen size in pixels, because you have rendered a retina (2x) screen into the image context.
You can avoid this and render the image as non-retina by making your image context have a scale of 1:
UIGraphicsBeginImageContextWithOptions(UIScreen.mainScreen().bounds.size, true, 1)
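On iOS 10 and later, the same snapshot can also be written with UIGraphicsImageRenderer, which lets you set the 1x scale explicitly and hands back JPEG bytes directly; here `view` and the 0.8 quality are illustrative assumptions:

```swift
import UIKit

let format = UIGraphicsImageRendererFormat()
format.scale = 1  // 1x: one rendered pixel per point, no retina doubling

let renderer = UIGraphicsImageRenderer(bounds: view.bounds, format: format)
let jpegData = renderer.jpegData(withCompressionQuality: 0.8) { context in
    // Draw the view's layer into the renderer's context
    view.layer.render(in: context.cgContext)
}
```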

UIImage takes up much more memory than its NSData

I'm loading a UIImage from NSData with the following code
var image = UIImage(data: data!)
However, there is a weird behavior.
At first, I used PNG data, and the NSData was about 80 KB each.
When I set a UIImage with that data, each UIImage took up 128 KB.
(Checked with the Allocations instrument: the size of ImageIO_PNG_Data.)
Then I switched to JPEG instead, and the NSData became about 7 KB each.
But still, each UIImage is 128 KB, so when displaying the image I get no memory advantage! (The NSData shrank from 80 KB to 7 KB, and still the UIImage takes up the same amount of memory.)
It is weird: why should the UIImage take up 128 KB when the original data is just 7 KB?
Can I reduce this memory usage by UIImage without shrinking the size of the UIImage itself?
Note that I'm not dealing with high-resolution images, so resizing the image is not an option (the NSData is already 7 KB!).
Any help will be appreciated.
Thanks!!
When you access the NSData, it is usually compressed (with either PNG or JPEG). When you use the UIImage, it is backed by an uncompressed pixel buffer, often 4 bytes per pixel (one byte each for red, green, blue, and alpha). There are other formats, but this illustrates the basic idea: the JPEG or PNG representations are compressed, and when you start using an image, it gets uncompressed.
In your conclusion, you say that resizing is not an option because the NSData is already 7 KB. I would suggest that resizing should be considered whenever the resolution of the image is greater than the resolution of the UIImageView in which you're using it (the points of the bounds/frame times the scale of the device). Whether to resize is not a function of the size of the NSData, but rather of the resolution of the view. So if you have a 1000x1000-pixel image that you're using in a small thumbnail view in a table view, then regardless of how small the JPEG representation is, you should definitely resize the image.
This is normal. When the image is stored as NSData, it is compressed (usually using PNG or JPG compression). When it's a UIImage, the image is decompressed, which allows it to be drawn quickly on the screen.
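A back-of-the-envelope check of the numbers above; the 200x160 dimensions are hypothetical, chosen only to show how a ~128 KB buffer can arise:

```swift
let width = 200, height = 160   // hypothetical pixel dimensions
let bytesPerPixel = 4           // RGBA, 8 bits per channel
let decompressedBytes = width * height * bytesPerPixel
// 200 * 160 * 4 = 128,000 bytes -- the same ~128 KB regardless of
// whether the file on disk was an 80 KB PNG or a 7 KB JPEG.
```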

iOS: sharing a transparent UIImage w/ UIActivityViewController?

So I am using UIActivityViewController to let users share or save an image they create with my app. I include a UIImage as one of the share items, it all works fine.
Except: it is possible to create this image with some transparent areas. And it looks to me like the built-in UIActivities all create JPEG representations of the UIImage, thus losing the transparency.
Is there a way to force it to use a PNG representation so as not to lose the alpha channel?
Without seeing some code it may be difficult to fully answer the question. Have you tried converting your UIImage to an NSData object using UIImagePNGRepresentation and sharing the NSData object instead of the image itself? I would have put this in a comment rather than an answer, but I lack the rep.
Reference for UIImagePNGRepresentation:
https://developer.apple.com/library/ios/documentation/uikit/reference/UIKitFunctionReference/Reference/reference.html#//apple_ref/c/func/UIImagePNGRepresentation
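A short Swift sketch of that suggestion, using the modern pngData(); `image` and the presenting view controller are assumptions, and note that not every activity accepts raw data items:

```swift
import UIKit

if let pngData = image.pngData() {
    // Share the PNG bytes, not the UIImage, so the alpha channel survives.
    let activityVC = UIActivityViewController(activityItems: [pngData],
                                              applicationActivities: nil)
    present(activityVC, animated: true)
}
```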

How to find UIImage Bottleneck

I have an app that uses UIImage objects. Up to this point, I've been using image objects initialized using something like this:
UIImage *image = [UIImage imageNamed:imageName];
using an image in my app bundle. I've been adding functionality to allow users to use imagery from the camera or their library using UIImagePickerController. These images, obviously, can't be in my app bundle, so I initialize the UIImage object a different way:
UIImage *image = [UIImage imageWithContentsOfFile:pathToFile];
This is done after first resizing the image to a size similar to the other files in my app bundle, in both pixel dimensions and total bytes, both using JPEG format (interestingly, PNG was much slower, even for the same file size). In other words, the file pointed to by pathToFile is of similar size to an image in the bundle (the pixel dimensions match, and the compression was chosen so the byte count was similar).
The app goes through a loop making small pieces from the original image, among other things that are not relevant to this post. My issue is that going through the loop using an image created the second way takes much longer than using an image created the first way.
I realize the first method caches the image, but I don't think that's relevant, unless I'm not understanding how the caching works. If it is the relevant factor, how can I add caching to the second method?
The relevant portion of code that is causing the bottleneck is this:
[image drawInRect:self.imageSquare];
Here, self is a subclass of UIImageView. Its property imageSquare is simply a CGRect defining what gets drawn. This portion is the same for both methods. So why is the second method so much slower with similar sized UIImage object?
Is there something I could be doing differently to optimize this process?
EDIT: I changed access to the image in the bundle to imageWithContentsOfFile, and the time to perform the loop went from about 4 seconds to just over a minute. So it looks like I need to find some way to do caching like imageNamed does, but for non-bundled files.
UIImage's imageNamed: doesn't simply cache the image; it caches an uncompressed image. The extra time was spent not on reading from local storage into RAM, but on decompressing the image.
The solution was to create a new uncompressed UIImage object, use it for the time-sensitive portion of the code, and discard it when that section of code is complete. For completeness, here is a copy of the class method that returns an uncompressed UIImage from a compressed one, thanks to another thread. Note that it assumes the data is backed by a CGImage, which is not always true for UIImage objects.
+ (UIImage *)decompressedImage:(UIImage *)compressedImage
{
    CGImageRef originalImage = compressedImage.CGImage;
    CFDataRef imageData = CGDataProviderCopyData(
        CGImageGetDataProvider(originalImage));
    CGDataProviderRef imageDataProvider = CGDataProviderCreateWithCFData(imageData);
    CFRelease(imageData);
    CGImageRef image = CGImageCreate(
        CGImageGetWidth(originalImage),
        CGImageGetHeight(originalImage),
        CGImageGetBitsPerComponent(originalImage),
        CGImageGetBitsPerPixel(originalImage),
        CGImageGetBytesPerRow(originalImage),
        CGImageGetColorSpace(originalImage),
        CGImageGetBitmapInfo(originalImage),
        imageDataProvider,
        CGImageGetDecode(originalImage),
        CGImageGetShouldInterpolate(originalImage),
        CGImageGetRenderingIntent(originalImage));
    CGDataProviderRelease(imageDataProvider);
    UIImage *decompressedImage = [UIImage imageWithCGImage:image];
    CGImageRelease(image);
    return decompressedImage;
}
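A Swift sketch of the same idea, using UIGraphicsImageRenderer (iOS 10+) to force the decode by drawing the image once offscreen; on iOS 15 and later, UIImage's preparingForDisplay() does this for you:

```swift
import UIKit

func decompressedImage(_ compressedImage: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: compressedImage.size)
    return renderer.image { _ in
        // Drawing forces the bitmap to be decoded now, not at display time.
        compressedImage.draw(in: CGRect(origin: .zero, size: compressedImage.size))
    }
}
```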
