Reduce image size while saving UIImage [duplicate]

This question already has answers here:
How to easily resize/optimize an image size with iOS?
(18 answers)
Closed 9 years ago.
I have an image containing only red and white. In image processing we can reduce an image from 24-bit to 8-bit color, or something like that.
Is it possible to reduce the image size that way? In my iPad application I can save the image as PNG or JPEG, but I want to reduce the size further. How should I write the code?

Have you looked into the function UIImageJPEGRepresentation? Once you have your UIImage you just need to do something like:
NSData* imgData = UIImageJPEGRepresentation(myImage, 0.4); // 0.4 is the compression quality (0.0 = most compression, 1.0 = best quality)
[imgData writeToURL:myFileURL atomically:YES];
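
For context, here is a hedged sketch of the full save path; the documents-directory lookup and the file name "photo.jpg" are illustrative assumptions, not part of the original answer:

#import <UIKit/UIKit.h>

// Sketch only: `myImage` is your existing UIImage; the destination URL is hypothetical.
NSURL *docsURL = [[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory
                                                        inDomain:NSUserDomainMask
                                               appropriateForURL:nil
                                                          create:YES
                                                           error:NULL];
NSURL *myFileURL = [docsURL URLByAppendingPathComponent:@"photo.jpg"];
NSData *imgData = UIImageJPEGRepresentation(myImage, 0.4); // lower quality => smaller file
BOOL ok = [imgData writeToURL:myFileURL atomically:YES];
NSLog(@"Saved %lu bytes: %d", (unsigned long)imgData.length, ok);

For a two-color image like the one described, a PNG may well come out smaller than a JPEG, so it is worth comparing both data lengths before choosing a format.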

Related

Number of bytes in UIImage [duplicate]

This question already exists:
Get binary data from UIImage
Closed 5 years ago.
I am confused about getting the number of bytes in a UIImage.
To get the number of bytes I use the NSData length.
I have an image of size 128x160, and NSData reports that it has 400669 bytes.
But by calculation, for a 3-channel image with 8 bits per channel, i.e. 24 bits per pixel, the number of bytes should be 3 x 128 x 160 = 61440 bytes.
Please help me understand why NSData reports a different number of bytes.
Thanks
The underlying data of a UIImage can vary, so for the same "image" one can have varying sizes of data. One thing you can do is use UIImagePNGRepresentation or UIImageJPEGRepresentation to get the equivalent NSData constructs for either, then check the size of that.
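
As an illustration of that suggestion, here is a minimal sketch (assuming image is the UIImage in question) that compares the in-memory bitmap size, derived from the backing CGImage, with the encoded PNG and JPEG lengths:

#import <UIKit/UIKit.h>

// `image` is assumed to be the 128x160 UIImage from the question.
CGImageRef cgImage = image.CGImage;
size_t bitmapBytes = CGImageGetBytesPerRow(cgImage) * CGImageGetHeight(cgImage); // raw, uncompressed size
NSData *pngData  = UIImagePNGRepresentation(image);
NSData *jpegData = UIImageJPEGRepresentation(image, 0.8);
NSLog(@"bitmap: %zu bytes, PNG: %lu bytes, JPEG: %lu bytes",
      bitmapBytes, (unsigned long)pngData.length, (unsigned long)jpegData.length);

The bitmap figure also reflects any row padding and the image's scale factor, which is one reason it rarely matches a simple width x height x channels calculation.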

UIImagePNGRepresentation returns inappropriately large data

We have a UIImage of size 1280x854 and we are trying to save it in PNG format.
NSData *pngData = UIImagePNGRepresentation(img);
The problem is that the size of pngData is 9551944 bytes, which is inappropriately large for the input image size. Even considering a 24-bit PNG, at most it should be 1280 x 854 x 3 bytes (3 bytes per pixel for 24-bit PNG).
BTW, this only happens with images scaled via UIGraphicsGetImageFromCurrentImageContext. We also noticed that image._scale is set to 2.0 in the image returned by UIGraphicsGetImageFromCurrentImageContext.
Any idea what's wrong?

How to get NSData representation of UIGraphicsGetImageFromCurrentImageContext() [duplicate]

This question already has answers here:
convert UIImage to NSData
(7 answers)
Closed 7 years ago.
I'm taking a "snapshot" of the image context in UIGraphicsBeginImageContextWithOptions(UIScreen.mainScreen().bounds.size, true, 0) and eventually creating a UIImage using
var renderedImage = UIGraphicsGetImageFromCurrentImageContext()
However I need to get the NSData representation of this UIImage without using UIImageJPEGRepresentation or UIImagePNGRepresentation (because these produce files that are way larger than the original UIImage). How can I do this?
Image files contain compressed data, while the bitmap data backing a UIImage is raw, i.e. not compressed. Therefore that raw data will in almost all cases be larger than an encoded image file when written to disk.
More info at another question: convert UIImage to NSData
I'm not sure what you mean by "way larger" than the original UIImage. The data backing the UIImage object is at least as big as the data you would get by converting it into a JPG, and roughly equivalent to the data you would get by converting it to a PNG.
The rendered image will be twice the screen size in pixels, because you have rendered a Retina screen (scale factor 2) into the image context.
You can avoid this and render the image at non-Retina resolution by giving your image context a scale of 1:
UIGraphicsBeginImageContextWithOptions(UIScreen.mainScreen().bounds.size, true, 1)

Add Image on another Image in iOS considering alpha parts [duplicate]

This question already has answers here:
Overlay an image over another image in iOS
(4 answers)
Closed 9 years ago.
I have tried compositing filters, both ATop and Over, but did not get the required image as output.
What I want is something like this:
NOTE: both Image1 and the border are PNG images.
What I am getting now is either the border1 image or the image1 image; the filters do not seem to respect the alpha parts of the border1 image. The alpha channel is removed and I am left with a white background under the border image instead of image1 showing through.
Any idea how to proceed? Thanks in advance.
Use the UIImage converted to a CIImage via its PNG data, with the following code:
CIImage *paper1 = [CIImage imageWithData:UIImagePNGRepresentation([UIImage imageNamed:@"yourPNGImage.png"])];
This solves the problem, since previously the UIImage (though in .png format) was getting converted into a JPEG-backed CIImage, which discards the alpha channel.
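
For reference, a hedged sketch of the over-compositing described in the question, assuming border1.png (which contains the transparent areas) should sit on top of image1.png:

#import <UIKit/UIKit.h>
#import <CoreImage/CoreImage.h>

// Both inputs go through PNG data so their alpha channels survive.
CIImage *border = [CIImage imageWithData:UIImagePNGRepresentation([UIImage imageNamed:@"border1.png"])];
CIImage *photo  = [CIImage imageWithData:UIImagePNGRepresentation([UIImage imageNamed:@"image1.png"])];

CIFilter *composite = [CIFilter filterWithName:@"CISourceOverCompositing"];
[composite setValue:border forKey:kCIInputImageKey];           // foreground, with transparency
[composite setValue:photo  forKey:kCIInputBackgroundImageKey]; // background that shows through

CIImage *result = composite.outputImage;
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgResult = [context createCGImage:result fromRect:result.extent];
UIImage *finalImage = [UIImage imageWithCGImage:cgResult];
CGImageRelease(cgResult);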

How to shrink the image taken from camera to 320x320 resolution? [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
What’s the easiest way to resize/optimize an image size with the iPhone SDK?
I want to change the resolution of the image taken from the camera to 320x320. Can anyone please tell me how to do it?
I know how to take an image from the camera, so please tell me the rest, i.e. changing the resolution of the image.
Thanks in advance
This is covered in this post: https://stackoverflow.com/a/613380/1648976
+ (UIImage *)imageWithImage:(UIImage *)image scaledToSize:(CGSize)newSize
{
    UIGraphicsBeginImageContext(newSize);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
As far as storing the image goes, the fastest image format to use on the iPhone is PNG, because iOS has optimizations for that format. However, if you want to store these images as JPEGs, you can take your UIImage and do the following:
NSData *dataForJPEGFile = UIImageJPEGRepresentation(theImage, 0.6);
This creates an NSData instance containing the bytes of a JPEG image at a 60% quality setting. The contents of that NSData instance can then be written to disk or cached in memory.
Note that this conversion changes the file size and format, not the pixel dimensions; it is the scaling method above that resizes to 320x320. You can tweak the 0.6 quality value lower or higher.
If this is not what you want, please tell me more precisely what you need.
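
A usage sketch of the above, where cameraImage is the UIImage from the picker and YourImageUtils is whatever class hosts the scaling method (both names, and the output path, are placeholders):

// `cameraImage`, `YourImageUtils`, and the output path are placeholders.
UIImage *resized = [YourImageUtils imageWithImage:cameraImage scaledToSize:CGSizeMake(320, 320)];
NSData *jpegData = UIImageJPEGRepresentation(resized, 0.6);
NSString *outPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"thumb.jpg"];
[jpegData writeToFile:outPath atomically:YES];

Note that drawing a non-square camera image into a 320x320 rect will stretch it; crop or letterbox first if you need to preserve the aspect ratio.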
1) Low-pass filter the original image and
2) decimate or
3) resample
If the original dimensions are an exact multiple of the target (e.g. 640x640 for a 320x320 result), it's enough to LP filter and then choose every other sample. That's decimation.
If the original dimension is e.g. 480x320, then one still has to LP filter, but must also interpolate pixel values for output pixels that do not align exactly with the original pixel grid. That's resampling.
The LP filtering is crucial: without it, e.g. a very high-resolution chessboard pattern would be re-sampled into noise or weird patterns, caused by an effect called 'frequency aliasing'.
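
On iOS you normally do not implement the low-pass filter yourself: drawing into a bitmap context with a high interpolation quality performs filtered resampling for you. A minimal sketch, assuming sourceImage is the camera image:

#import <UIKit/UIKit.h>

// Filtered downscaling via Core Graphics; `sourceImage` is assumed to be the camera image.
CGSize targetSize = CGSizeMake(320, 320);
UIGraphicsBeginImageContextWithOptions(targetSize, NO, 1.0); // scale 1 => exactly 320x320 pixels
CGContextSetInterpolationQuality(UIGraphicsGetCurrentContext(), kCGInterpolationHigh);
[sourceImage drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
UIImage *resampled = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();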
