This question already has answers here:
convert UIImage to NSData
(7 answers)
Closed 7 years ago.
I'm taking a "snapshot" of the image context in UIGraphicsBeginImageContextWithOptions(UIScreen.mainScreen().bounds.size, true, 0) and eventually creating a UIImage using
var renderedImage = UIGraphicsGetImageFromCurrentImageContext()
However I need to get the NSData representation of this UIImage without using UIImageJPEGRepresentation or UIImagePNGRepresentation (because these produce files that are way larger than the original UIImage). How can I do this?
Image files contain compressed data, while a raw bitmap held in NSData is not compressed. Therefore the raw NSData will in almost all cases be larger when written to a file.
More info at another question: convert UIImage to NSData
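To make that comparison concrete, here is a minimal sketch using the same pre-Swift-3 APIs as the question (the helper name logSizes is mine, purely for illustration):
import UIKit

// Compare the size of the uncompressed bitmap backing a UIImage with the
// size of its compressed PNG representation (illustrative helper only).
func logSizes(image: UIImage) {
    if let cgImage = image.CGImage {
        let rawBytes = CGImageGetBytesPerRow(cgImage) * CGImageGetHeight(cgImage)
        let pngBytes = UIImagePNGRepresentation(image)?.length ?? 0
        print("uncompressed bitmap: \(rawBytes) bytes, PNG data: \(pngBytes) bytes")
    }
}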
I'm not sure what you mean by "way larger" than the original UIImage. The data backing the UIImage object is at least as big as the data you would get by converting it into a JPG, and roughly equivalent to the data you would get by converting it to a PNG.
The rendered image will be twice the screen size in pixels, because you have rendered a retina (2x) screen into the image.
You can avoid this and render the image as non-retina by making your image context have a scale of 1:
UIGraphicsBeginImageContextWithOptions(UIScreen.mainScreen().bounds.size, true, 1)
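Putting that together, a rough sketch of the scale-1 snapshot followed by converting the result to data might look like this (same pre-Swift-3 APIs as the question; the 0.8 quality value is just an example):
import UIKit

// Render at scale 1 (non-retina) so the resulting bitmap matches the point size.
UIGraphicsBeginImageContextWithOptions(UIScreen.mainScreen().bounds.size, true, 1)
// ... draw your content into the context here ...
let renderedImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

// Either representation yields NSData; JPEG is usually smaller than PNG.
let jpegData = UIImageJPEGRepresentation(renderedImage, 0.8)
let pngData = UIImagePNGRepresentation(renderedImage)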
Related
We have a UIImage of size 1280*854 and we are trying to save it in PNG format.
NSData *pngData = UIImagePNGRepresentation(img);
The problem is that the size of pngData is 9551944 bytes, which is far larger than expected for the input image size. Even considering 24-bit PNG, at most it should be around 1280*854*3 (3 bytes per pixel for 24-bit PNG).
BTW, this is only happening with images scaled with UIGraphicsGetImageFromCurrentImageContext. We also noticed that image._scale is set to 2.0 in image returned by UIGraphicsGetImageFromCurrentImageContext.
Any idea what's wrong?
All the code I can find revolves around loading images directly into visual controls.
However, I have my own cache system (I'm converting a project from another language), so I want the following to be as efficient as possible:
Load JPG/PNG images - probably into a bitmap / CGImage. (This can either be from the file system or from images downloaded online)
Possibly save the image back as a compressed/resized PNG/JPG file
Supply an image reference for a visual control
I am new to Swift and the iOS platform, but as far as I can tell, CGImage is as close as it gets? However, there does not appear to be a way to load an image from the file system when using CGImage... But I have found people discussing ways to do this for e.g. UIImage, so I am now doubting my initial impression that CGImage was the best match for my needs.
It is easy to get confused between UIImage, CGImage and CIImage. The differences are as follows:
UIImage: A UIImage object is a high-level way to display image data. You can create images from files, from Quartz image objects, or from raw image data you receive. UIImage objects are immutable, so you must specify an image's properties at initialization time. This also means that these image objects are safe to use from any thread.
Typically you take an NSData object containing a PNG or JPEG representation of an image and convert it to a UIImage.
CGImage: A CGImage can only represent bitmaps. Operations in CoreGraphics, such as blend modes and masking require CGImageRefs. If you need to access and change the actual bitmap data, you can use CGImage. It can also be converted to NSBitmapImageReps.
CIImage: A CIImage is an immutable object that represents an image, but it is not rendered image data itself. It holds all the information necessary to produce an image, which Core Image renders only when needed.
You typically use CIImage objects in conjunction with other Core Image classes such as CIFilter, CIContext, CIColor, and CIVector. You can create CIImage objects from data supplied by a variety of sources, such as Quartz 2D images, Core Video image buffers, etc.
A CIImage is required to use the various GPU-optimized Core Image filters. CIImages can also be converted to NSBitmapImageReps. Rendering can be based on either the CPU or the GPU.
In conclusion, UIImage is what you are looking for. Reasons are:
You can get an image from device memory and assign it to a UIImage
You can get an image from a URL and assign it to a UIImage
You can write a UIImage in your desired format to device memory
You can resize an image assigned to a UIImage
Once you have assigned an image to a UIImage, you can use that instance in controls directly, e.g. setting the background of a button, or setting it as the image of a UIImageView
I would have added code samples, but all of these are basic questions which have already been answered on Stack Overflow, so there is no point. Not to mention that adding code would make this unnecessarily large.
Credit for summarizing differences: Randall Leung
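For reference, a minimal Swift sketch of moving between the three types (an illustration added here, not part of the original answer; the imageData value is a placeholder and uses the same pre-Swift-3 API names as the other snippets on this page):
import UIKit
import CoreImage

let imageData = NSData()   // placeholder; in practice, PNG or JPEG bytes from disk or the network
let uiImage = UIImage(data: imageData)                  // NSData -> UIImage
let cgImage = uiImage?.CGImage                          // UIImage -> CGImage (bitmap)
let ciImage = uiImage.flatMap { CIImage(image: $0) }    // UIImage -> CIImage ("recipe" for Core Image)

// And back again: either type can be wrapped in a UIImage for display.
let fromCG = cgImage.map { UIImage(CGImage: $0) }
let fromCI = ciImage.map { UIImage(CIImage: $0) }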
You can load your image easily into a UIImage object...
NSData *data = [NSData dataWith...];
UIImage *image = [UIImage imageWithData:data];
If you then want to show it in a view, you can use a UIImageView:
UIImageView *imageView = [[UIImageView alloc] init]; // or whatever
...
imageView.image = image;
See more in UIImage documentation.
Per the documentation at: https://developer.apple.com/documentation/uikit/uiimage
let uiImage = uiImageView.image
let cgImage = uiImage.cgImage
I'm loading a UIImage from NSData with the following code
var image = UIImage(data: data!)
However, there is a weird behavior.
At first, I used PNG data, and the NSData was about 80 kB each.
When I set a UIImage with the data, the UIImage took up 128 kB each.
(Checked with the Allocations instrument; the size of ImageIO_PNG_Data)
Then I changed to use JPEG instead, and the NSData became about 7 kB each.
But still, the UIImage is 128 kB each, so when displaying the image I get no memory advantage! (The NSData was reduced from 80 kB to 7 kB and still the UIImage takes up the same amount of memory.)
It is weird: why should the UIImage take up 128 kB when the original data is just 7 kB?
Can I reduce this memory usage by UIImage without shrinking the size of the UIImage itself?
Note that I'm not dealing with a high-resolution image, so resizing the image is not an option (the NSData is already 7 kB!).
Any help will be appreciated.
Thanks!!
When you access the NSData, it is often compressed (with either PNG or JPEG). When you use the UIImage, there is an uncompressed pixel buffer which is often 4 bytes per pixel (one byte each for red, green, blue, and alpha). There are other formats, but this illustrates the basic idea: the JPEG or PNG representation is compressed, but once you start using an image, it is uncompressed in memory.
In your conclusion, you say that resizing is not an option and that the NSData is already 7 kB. I would suggest that resizing should be considered if the resolution of the image is greater than the resolution (the points of the bounds/frame times the scale of the device) of the UIImageView in which you're using it. The question of whether to resize is not a function of the size of the NSData, but rather of the resolution of the view. So, if you have a 1000x1000 pixel image that you're using in a small thumbnail view in a table view, then regardless of how small the JPEG representation is, you should definitely resize the image.
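For example, a resizing sketch along those lines (pre-Swift-3 APIs; the function name and the use of the view's bounds are my own assumptions) could be:
import UIKit

// Scale an image down to the resolution the image view actually needs:
// the view's bounds in points, rendered at the screen's scale (the 0 argument).
// Aspect-ratio handling is omitted for brevity.
func imageScaledToFit(image: UIImage, imageView: UIImageView) -> UIImage {
    let targetSize = imageView.bounds.size
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 0)
    image.drawInRect(CGRect(origin: CGPointZero, size: targetSize))
    let resized = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resized
}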
This is normal. When the image is stored as NSData, it is compressed (usually using PNG or JPG compression). When it's a UIImage, the image is decompressed, which allows it to be drawn quickly on the screen.
This question already has answers here:
How to easily resize/optimize an image size with iOS?
(18 answers)
Closed 9 years ago.
I have an image with only red and white colors. In image processing, we can reduce an image from 24-bit to 8-bit color or something like that.
Is it possible to reduce the image size? In my iPad application, I can save the image as PNG or JPEG, but I want to reduce the size further. How should I write the code?
Have you looked into the method UIImageJPEGRepresentation? Once you have your UIImage you just need to do something like:
NSData* imgData = UIImageJPEGRepresentation(myImage, 0.4); // 0.4 is the compression quality (0.0 = smallest file / lowest quality, 1.0 = best quality)
[imgData writeToURL:myFileURL atomically:YES];
This question already has answers here:
Possible Duplicate: What’s the easiest way to resize/optimize an image size with the iPhone SDK?
Closed 10 years ago.
I want to change the resolution of an image taken from the camera to 320x320. Can anyone please tell me how to do it?
I know how to take an image from the camera, so please tell me the rest, i.e. changing the resolution of the image.
Thanks in advance
This is covered in this post: https://stackoverflow.com/a/613380/1648976
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    // Draw the original image into a context of the target size,
    // then read the scaled result back out as a new UIImage.
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
As far as storage of the image goes, the fastest image format to use on the iPhone is PNG, because the platform has optimizations for that format. However, if you want to store these images as JPEGs, you can take your UIImage and do the following:
NSData *dataForJPEGFile = UIImageJPEGRepresentation(theImage, 0.6);
This creates an NSData instance containing the raw bytes for a JPEG image at a 60% quality setting. The contents of that NSData instance can then be written to disk or cached in memory.
This does a conversion, though not exactly to 320*320; you can also tweak the 0.6 quality value lower or higher.
If this is not what you want, please tell me more precisely what you need.
1) Low-pass filter the original image, and then either
2) decimate, or
3) resample.
If the original dimension is 640x320, it's enough to LP filter and then choose every other sample. That's decimation.
If the original dimension is e.g. 480x320, then one still has to LP filter and then interpolate the pixel values for those pixels that do not align exactly with the original pixels.
The LP filtering is crucial, as without it e.g. a very high resolution chessboard pattern will be re-sampled to noise or weird patterns, caused by an effect called 'frequency aliasing'.
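On iOS, one practical way to approximate the filter-then-resample step (a sketch under my own assumptions, not the only approach; the function name is mine) is to redraw through Core Graphics with high interpolation quality, which applies smoothing while scaling:
import UIKit

// Resample an image to a new size; high interpolation quality makes Core Graphics
// filter the pixels while scaling, which helps avoid the aliasing described above.
func resampledImage(image: UIImage, newSize: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(newSize, true, 1)
    if let context = UIGraphicsGetCurrentContext() {
        CGContextSetInterpolationQuality(context, .High)
    }
    image.drawInRect(CGRect(origin: CGPointZero, size: newSize))
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}
For the 320x320 case in the question above, you would pass CGSize(width: 320, height: 320) as newSize.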