Resaved camera roll images are smaller - iOS

When I resave an image from the camera roll, it is smaller than the original:
same number of pixels, same width and height, same color space, same DPI,
but the file size is smaller:
original = 2.5 MB
resaved one = 1.5 MB
I use this method:
UIImageWriteToSavedPhotosAlbum(imageToSave, self, nil, nil);
Is this normal?
Thank you for your help.

I encountered this issue a while ago and found a workaround. Try wrapping the image in a new PNG:
NSData *imageData = UIImagePNGRepresentation(yourOriginalImage); // get PNG representation
UIImage *image2 = [UIImage imageWithData:imageData];             // wrap a UIImage around the PNG representation
UIImageWriteToSavedPhotosAlbum(image2, nil, nil, nil);           // save to the photo album
Please let me know if this workaround still works. If not, I will remove this answer.

Related

How to compress images in iOS?

I have been looking through every Stack Overflow post and video online and I can't find a solution that works correctly. Right now the user selects a photo that I want to compress before uploading it to an AWS S3 bucket. The upload works perfectly, but for some reason the "compressed" image is larger than the original image! For example, if the user selects a 9 KB photo, when I upload to S3 the photo is 28.5 KB. I tried a different photo: 48 KB originally, and after "compression" it is 378.9 KB on S3! (I am using the latest software version of everything and compiling with the simulator.)
I want to compress the original image as much as I can before uploading.
This is what I have so far:
How I "compress" the image:
UIImage *compressedProductImage;
NSData *NSproductImage;
NSUInteger productImageSize;

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary<NSString *,id> *)info {
    self.productImage = info[UIImagePickerControllerOriginalImage];
    [self.productImageImageView setImage:self.productImage];
    [self dismissViewControllerAnimated:YES completion:nil];

    NSproductImage = UIImageJPEGRepresentation(self.productImage, 0.5f);
    productImageSize = [NSproductImage length];
    compressedProductImage = [UIImage imageWithData:NSproductImage];
}
How I upload the photo:
//Convert product UIImage
NSArray *productImagePaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *productImageFilePath = [[productImagePaths objectAtIndex:0] stringByAppendingPathComponent:[NSString stringWithFormat:@".png"]];
[UIImagePNGRepresentation(compressedProductImage) writeToFile:productImageFilePath atomically:YES];
NSURL *productImageFileUrl = [NSURL fileURLWithPath:productImageFilePath];

uploadRequest.body = productImageFileUrl; //Needs to be an NSURL
uploadRequest.bucket = AWS_BUCKET_NAME;
uploadRequest.key = productImageKey;
uploadRequest.contentType = @"image/png";
uploadRequest.ACL = AWSS3BucketCannedACLPublicRead;

[[transferManager upload:uploadRequest] continueWithExecutor:[AWSExecutor mainThreadExecutor] withBlock:^id(AWSTask *task) {
    if (task.error != nil) {
        NSLog(@"%s %@", "Error uploading (product image):", uploadRequest.key);
    } else {
        NSLog(@"Product image upload completed");
    }
    return nil;
}];
As rmaddy points out, you're taking the picked image, converting it to JPEG, converting it back to a UIImage (losing any benefit of the JPEG compression), and then converting it to a PNG, which offers only modest compression, generally far less than the original JPEG from the user's photo library.
You have a few options.
You can retrieve the original imageData for the asset in your photo library as shown in https://stackoverflow.com/a/32845656/1271826, thereby avoiding round-tripping it through a UIImage at all. This way you preserve the quality of the original image, preserve the metadata associated with it, and enjoy the decent compression of the original asset.
You could take the picked image as a UIImage and do a combination of the following (a sketch combining both appears below):
reduce the dimensions of the image before you call UIImageJPEGRepresentation (see https://stackoverflow.com/a/10491692/1271826 for a sample algorithm); and/or
use UIImageJPEGRepresentation with a quality less than 1.0, where the smaller the number, the more compression but the greater the loss of image quality.
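For the second approach, here is a minimal sketch (not from either linked answer; the maxDimension and quality parameters are illustrative assumptions to tune):
// Downscale so the longest side is at most maxDimension points, then JPEG-encode.
// This is a sketch, not the algorithm from the linked answer.
- (NSData *)jpegDataByScalingImage:(UIImage *)image maxDimension:(CGFloat)maxDimension quality:(CGFloat)quality {
    CGFloat longestSide = MAX(image.size.width, image.size.height);
    CGFloat ratio = MIN(1.0, maxDimension / longestSide);   // never upscale
    CGSize newSize = CGSizeMake(image.size.width * ratio, image.size.height * ratio);

    UIGraphicsBeginImageContextWithOptions(newSize, YES, 1.0);   // opaque, 1x scale
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *scaledImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return UIImageJPEGRepresentation(scaledImage, quality);
}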
You don't actually compress anything. You start with a UIImage. This is a full pixel by pixel representation that takes (typically) width x height x 4 bytes.
You then convert that to a JPG with some compression. So NSproductImage is a much smaller representation, in memory, of the JPG version of the image. This is the smaller size you see, and why you think the image is now compressed.
But then you convert that JPG data back into a UIImage as compressedProductImage. This new UIImage still has the same width and height as the original UIImage. As a result, it still takes the same width x height x 4 bytes as the original. It's just of lower quality than the original due to the JPG compression.
Now you convert the updated UIImage into a PNG. Since PNG is lossless, it doesn't compress nearly as much as the JPG attempt. You then send this larger PNG version of the image to Amazon.
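To make this concrete, here is a minimal sketch that logs the byte count at each stage described above (it reuses self.productImage and the 0.5 quality from the question; everything else is illustrative):
UIImage *picked = self.productImage;                        // the image from the picker
NSData *jpegData = UIImageJPEGRepresentation(picked, 0.5f); // lossy, much smaller than the bitmap
UIImage *roundTripped = [UIImage imageWithData:jpegData];   // decodes back to a full-size bitmap
NSData *pngData = UIImagePNGRepresentation(roundTripped);   // lossless re-encode of the lossy result

NSLog(@"bitmap ~%.1f MB, JPEG %lu bytes, PNG of re-decoded JPEG %lu bytes",
      (picked.size.width * picked.scale * picked.size.height * picked.scale * 4.0) / (1024.0 * 1024.0),
      (unsigned long)jpegData.length,
      (unsigned long)pngData.length);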
You should first remove the pointless code that first converts to JPG and then back to UIImage.
At this point you should either live with the size of the PNG or use JPG instead and send the smaller JPG to Amazon.
Another option would be to scale the image before sending it to Amazon.
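If you go the JPEG route, a minimal sketch reusing the question's own names (uploadRequest and self.productImage are assumed to exist as above; the product.jpg file name is an arbitrary choice):
// Write the JPEG data (not a PNG) to disk and upload it with a matching content type.
NSData *jpegData = UIImageJPEGRepresentation(self.productImage, 0.5f);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *filePath = [[paths objectAtIndex:0] stringByAppendingPathComponent:@"product.jpg"];
[jpegData writeToFile:filePath atomically:YES];

uploadRequest.body = [NSURL fileURLWithPath:filePath];
uploadRequest.contentType = @"image/jpeg";   // instead of @"image/png"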

How to compress image size using UIImagePNGRepresentation - iOS?

I'm using UIImagePNGRepresentation to save an image. The resulting image is 30+ KB, and that is too big in my case.
I tried using UIImageJPEGRepresentation, which allows compressing the image so it saves at under 5 KB, which is great, but saving it as JPEG gives it a white background, which I don't want (my image is circular, so I need to save it with a transparent background).
How can I compress image size, using UIImagePNGRepresentation?
PNG uses lossless compression; that's why UIImagePNGRepresentation does not accept a compressionQuality parameter the way UIImageJPEGRepresentation does. You might get a slightly smaller PNG file with different tools, but nothing like with JPEG.
Maybe this will help you out:
- (UIImage *)resizeImage:(UIImage *)image {
    NSData *pngData = UIImagePNGRepresentation(image);
    if (pngData.length > 5000) {
        // if the PNG is larger than 5 KB, halve its width and height (maintaining proportions) and try again
        UIImage *scaledImage = [self imageWithImage:image andWidth:image.size.width / 2 andHeight:image.size.height / 2];
        return [self resizeImage:scaledImage];
    }
    // the scaled image is your final image
    return image;
}
Resizing image
- (UIImage *)imageWithImage:(UIImage *)image andWidth:(CGFloat)width andHeight:(CGFloat)height
{
    UIGraphicsBeginImageContext(CGSizeMake(width, height));
    [image drawInRect:CGRectMake(0, 0, width, height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage;
}
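A hypothetical call site (circularAvatar is an assumed variable name), relying on the resize method above returning the final image:
UIImage *finalImage = [self resizeImage:circularAvatar];   // shrinks until the PNG is roughly 5 KB or less
NSData *finalPNG = UIImagePNGRepresentation(finalImage);   // PNG keeps the transparent background, unlike JPEG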

iOS always adds white background to images

I am writing an app that processes images. When I change the alpha of an image to 0, the image should be transparent. I then save this image to the photo library.
UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil);
But when I open the image again, I see that it has a white background.
Does anybody know why?
Did you save it as a JPEG? JPEG has no alpha channel, so transparency gets flattened to white.
Try this:
NSData *pngData = UIImagePNGRepresentation(image);
UIImage *img = [UIImage imageWithData:pngData];
UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);

Compressing UIImage into JPEG image data multiple times

I am debugging a piece of code where a UIImage may go through UIImageJPEGRepresentation multiple times. I thought that must be a bug and that the image quality would get progressively worse, but surprisingly we can't see a difference visually.
So I did a test: I loaded an image and ran it through UIImageJPEGRepresentation 1000 times. Surprisingly, whether it's 1 or 1000 times makes no visible difference in image quality. Why is that?
This is the testing code:
UIImage *image = [UIImage imageNamed:@"photo.jpeg"];

// Create a data reference here for the for loop later
// First JPEG compression here
// I would imagine the data here already has low image quality
NSData *data = UIImageJPEGRepresentation(image, 0);

for (int i = 0; i < 1000; i++)
{
    // Convert the low-quality data back to a UIImage
    UIImage *image = [UIImage imageWithData:data];
    // Compress the image into low-quality data again
    // at this point I would imagine the image gets even lower quality, like resaving a JPEG twice in Photoshop
    data = UIImageJPEGRepresentation(image, 0);
}

// up to this point I would imagine the "data" has gone through JPEG compression 1000 times
// like resaving a JPEG as a JPEG in Photoshop 1000 times, so it should look awful
UIImage *imageFinal = [UIImage imageWithData:data];
UIImageView *view = [[UIImageView alloc] initWithImage:imageFinal];
[self.view addSubview:view];
// but it didn't - the final image looks like it has only gone through JPEG compression once.
EDIT: my question can be summarised in simpler code. If you do this in Objective-C:
UIImage *image1 = an image..
NSData *data1 = UIImageJPEGRepresentation(image1, 0);
UIImage *image2 = [UIImage imageWithData:data1];
NSData *data2 = UIImageJPEGRepresentation(image2, 0);
UIImage *imageFinal = [UIImage imageWithData:data2];
Has imageFinal gone through JPEG compression twice?
As you know, JPG compression works by altering the image to produce a smaller file size. The reason you don't see progressively worse quality is that you're using the same compression setting each time.
The algorithm alters the source image just enough to fit into the compression profile - in other words, compressing the result of 50% JPG again at 50% will produce the same image, because the image doesn't need to be altered any more.
You can test this in Photoshop - save a photo out at say 30% quality JPG. Reopen the file you just saved, and go to Save for Web - flip between PNG (uncompressed/original) and JPG 30% - there will be no difference.
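As a quick check of this claim in code (a sketch, not from the original answer; photo.jpeg and the 0.5 quality are assumptions), you can log whether the JPEG data stops changing after a few passes:
UIImage *source = [UIImage imageNamed:@"photo.jpeg"];
NSData *previous = UIImageJPEGRepresentation(source, 0.5f);
for (int i = 0; i < 10; i++) {
    UIImage *decoded = [UIImage imageWithData:previous];
    NSData *current = UIImageJPEGRepresentation(decoded, 0.5f);
    NSLog(@"pass %d: %lu bytes, identical to previous pass: %@",
          i, (unsigned long)current.length,
          [current isEqualToData:previous] ? @"YES" : @"NO");
    previous = current;
}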
Hope this helps.
All types of compression aim to reduce the size of an image. There are two types of compression, which differ in how they affect the image:
Lossy Compression:
Lossy compression reduces the size of the image by removing some of its data. This generally affects the quality of the image, which means it reduces your image quality.
Lossless Compression:
Lossless compression reduces the size of the image by changing the way the data is stored, so it makes no change to the image quality.
Please check which compression type you are using.
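A quick way to compare the two on a given image (a sketch; photo.jpeg and the 0.5 quality are illustrative assumptions):
UIImage *img = [UIImage imageNamed:@"photo.jpeg"];
NSUInteger pngSize = UIImagePNGRepresentation(img).length;          // lossless
NSUInteger jpegSize = UIImageJPEGRepresentation(img, 0.5f).length;  // lossy, quality 0.5
NSLog(@"PNG: %lu bytes, JPEG (0.5): %lu bytes", (unsigned long)pngSize, (unsigned long)jpegSize);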
This may help you decrease the image size. Set the loop count yourself, depending on how many passes you want:
UIImage *image = [UIImage imageNamed:@"photo.jpeg"];
NSData *data = UIImageJPEGRepresentation(image, 1.0f);
for (int i = 100; i > 0; i--)
{
    UIImage *decoded = [UIImage imageWithData:data];
    data = UIImageJPEGRepresentation(decoded, 0.01f * i);   // quality steps down from 1.0 to 0.01
    NSLog(@"%lu", (unsigned long)data.length);
}

Can you save PNGs to the photo library in iOS 5?

I've noticed that line drawings from the drawing app I'm making are very low quality when saved using this code:
UIImage *imageToSave = drawImage.image;
UIImageWriteToSavedPhotosAlbum(imageToSave, nil, nil, nil);
As far as I can understand, you can't set the JPG quality when using UIImageWriteToSavedPhotosAlbum.
Is there a way to save the UIImage as a PNG to the photo library directly, or some way to increase the JPEG quality? When taking a screenshot (pressing on/off + home on the iPad), the quality of the grabbed picture is perfect, but I can't expect people to save images that way.
Any help greatly appreciated!
Try this:
NSData *pngData = UIImagePNGRepresentation(drawImage.image); // PNG wrap
UIImage *img = [UIImage imageWithData:pngData];
UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
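If you want to hand the photo library the PNG bytes themselves rather than a re-rendered UIImage, a sketch using ALAssetsLibrary (the asset API current in the iOS 5 era; link AssetsLibrary.framework, and note that whether the library stores the data as a PNG on disk is not guaranteed):
#import <AssetsLibrary/AssetsLibrary.h>

NSData *pngData = UIImagePNGRepresentation(drawImage.image);
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library writeImageDataToSavedPhotosAlbum:pngData
                                 metadata:nil
                          completionBlock:^(NSURL *assetURL, NSError *error) {
    if (error) {
        NSLog(@"Save failed: %@", error);
    } else {
        NSLog(@"Saved image data to %@", assetURL);
    }
}];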
