Determine image MB size from PHAsset - iOS

I want to determine the size of an image accessed through a PHAsset, so that I know how much storage it occupies on the device. Which of these approaches does this?
var imageSize = Float(imageData.length)
var image = UIImage(data: imageData)
var jpegSize = UIImageJPEGRepresentation(image, 1)
var pngSize = UIImagePNGRepresentation(image)
var pixelsMultiplied = asset.pixelHeight * asset.pixelWidth
println("regular data: \(imageSize)\nJPEG Size: \(jpegSize.length)\nPNG Size: \(pngSize.length)\nPixel multiplied: \(pixelsMultiplied)")
Results in:
regular data: 1576960.0
JPEG Size: 4604156
PNG Size: 14005689
Pixel multiplied: 7990272
Which one of these values actually represents the amount it occupies on the device?

After emailing the picture to myself and checking the size on the system, it turns out approach ONE (the raw image data length) is the closest to the actual size.
To get the size of a PHAsset (Image type), I used the following method:
var asset = self.fetchResults[index] as PHAsset
self.imageManager.requestImageDataForAsset(asset, options: nil) { (data: NSData!, dataUTI: String!, orientation: UIImageOrientation, info: [NSObject : AnyObject]!) -> Void in
    // Transform into an image
    var image = UIImage(data: data)
    // Get the byte size of the image data
    var imageSize = Float(data.length)
    // Convert to megabytes
    imageSize = imageSize / (1024 * 1024)
}
Command-I on my MacBook shows the image size as 1,575,062 bytes.
imageSize in my program reports 1,576,960 bytes.
I tested with five other images and the two sizes reported were just as close.
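For reference, the same check in modern Swift (iOS 13+) would use requestImageDataAndOrientation, which replaced the now-deprecated requestImageDataForAsset. A minimal sketch, assuming asset is a PHAsset you have already fetched:

import Photos

let options = PHImageRequestOptions()
options.isNetworkAccessAllowed = true // also fetch originals that live only in iCloud
PHImageManager.default().requestImageDataAndOrientation(for: asset, options: options) { data, _, _, _ in
    guard let data = data else { return }
    let megabytes = Double(data.count) / (1024 * 1024)
    print("Asset size: \(megabytes) MB")
}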

The NSData approach becomes precarious when the data is prohibitively large. For video assets, you can use the following as an alternative:
[[PHImageManager defaultManager] requestAVAssetForVideo:self.phAsset options:nil resultHandler:^(AVAsset *asset, AVAudioMix *audioMix, NSDictionary *info) {
    CGFloat rawSize = 0;
    if ([asset isKindOfClass:[AVURLAsset class]])
    {
        AVURLAsset *URLAsset = (AVURLAsset *)asset;
        NSNumber *size;
        [URLAsset.URL getResourceValue:&size forKey:NSURLFileSizeKey error:nil];
        rawSize = [size floatValue] / (1024.0 * 1024.0);
    }
    else if ([asset isKindOfClass:[AVComposition class]])
    {
        // Asset is an AVComposition (e.g. a slo-mo video), so estimate its size
        // from the data rate and duration of each track
        float estimatedSize = 0.0;
        NSArray *tracks = [asset tracks];
        for (AVAssetTrack *track in tracks)
        {
            float rate = [track estimatedDataRate] / 8.0f; // convert bits per second to bytes per second
            float seconds = CMTimeGetSeconds([track timeRange].duration);
            estimatedSize += seconds * rate;
        }
        rawSize = estimatedSize / (1024.0 * 1024.0); // bytes to MB, to match the branch above
    }
    if (completionBlock)
    {
        NSError *error = info[PHImageErrorKey];
        completionBlock(rawSize, error);
    }
}];
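The AVComposition branch above estimates the file size from the tracks' data rates. Here is the same estimate as a hedged Swift sketch, mirroring the bits-to-bytes conversion and MB division of the Objective-C code:

import AVFoundation

func estimatedFileSizeInMB(of asset: AVAsset) -> Float {
    var bytes: Float = 0
    for track in asset.tracks {
        let bytesPerSecond = track.estimatedDataRate / 8.0 // bits/s -> bytes/s
        let seconds = Float(CMTimeGetSeconds(track.timeRange.duration))
        bytes += seconds * bytesPerSecond
    }
    return bytes / (1024 * 1024)
}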
Or for ALAssets, something like this:
[[[ALAssetsLibrary alloc] init] assetForURL:asset.URL resultBlock:^(ALAsset *asset) {
    long long sizeBytes = [[asset defaultRepresentation] size];
    if (completionBlock)
    {
        completionBlock(sizeBytes, nil);
    }
} failureBlock:^(NSError *error) {
    if (completionBlock)
    {
        completionBlock(0, error);
    }
}];

Related

Get the same NSData of PHAsset after save

Is there any way to get the same NSData for an image (JPG, PNG) after saving it with PHPhotoLibrary?
Of course, iOS will modify some metadata and EXIF data (timestamp, etc.) after saving, but I'm asking about the UIImage data (including the same EXIF data).
I didn't copy the EXIF in my code here, but it doesn't work either way.
So let's walk through the code:
Save the image and get its hash:
UIImage *tmp = [[UIImage alloc] initWithData:tmpData];
tmpData = UIImageJPEGRepresentation(tmp, 1.0);
self.str1 = [tmpData MD5]; // MD5 is a category method on NSData (not shown)
[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
    PHAssetResourceCreationOptions *options = [[PHAssetResourceCreationOptions alloc] init];
    options.originalFilename = @"XXX";
    PHAssetCreationRequest *createReq = [PHAssetCreationRequest creationRequestForAsset];
    [createReq addResourceWithType:PHAssetResourceTypePhoto data:tmpData options:options];
} completionHandler:^(BOOL success, NSError * _Nullable error) {
    NSLog(@"success: %d", success);
}];
Load the same image:
[asset requestContentEditingInputWithOptions:nil completionHandler:^(PHContentEditingInput * _Nullable contentEditingInput, NSDictionary * _Nonnull info) {
    PHImageRequestOptions *option = [[PHImageRequestOptions alloc] init];
    option.synchronous = YES;
    option.version = PHImageRequestOptionsVersionOriginal;
    option.deliveryMode = PHImageRequestOptionsDeliveryModeHighQualityFormat;
    option.resizeMode = PHImageRequestOptionsResizeModeNone;
    [[PHImageManager defaultManager] requestImageDataForAsset:asset options:option resultHandler:^(NSData * _Nullable imageData, NSString * _Nullable dataUTI, UIImageOrientation orientation, NSDictionary * _Nullable info) {
        UIImage *image = [UIImage imageWithData:imageData];
        NSData *tmpDAt = UIImageJPEGRepresentation(image, 1.0);
        NSString *md5 = [tmpDAt MD5];
        if ([md5 isEqualToString:self.str1]) {
            NSLog(@"Hashes match, as expected");
        }
    }];
}];
The interesting thing I found is that if I crop my image to 1×1 as a test, I receive an error (JPEGDecompressSurface : Picture decode failed:) during the save (it seems the OS can't modify the image), and in that case I do get the same hash before and after saving!
I presume the difference is due to your JPEGs having different timestamps (and possibly other differences) in their EXIF metadata.
Have you tried using UIImagePNGRepresentation instead of UIImageJPEGRepresentation? Hopefully PNG representations will match.
JPEG is a lossy form of compression: every time you re-encode to JPEG you lose data, and there is no way around it. Take PHPhotoLibrary out of the equation: if you run the following,
UIImage *tmp = [[UIImage alloc] initWithData:tmpData];
tmpData = UIImageJPEGRepresentation(tmp, 1.0);
str1 = [tmpData MD5];
tmp = [[UIImage alloc] initWithData:tmpData];
tmpData = UIImageJPEGRepresentation(tmp, 1.0);
str2 = [tmpData MD5];
You will find that str1 and str2 are different.
If you want the same data, you will have to either keep the original JPEG data that generated the image or use a lossless compression method like the one used in PNG files.
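To illustrate the point, here is a small Swift sketch (using CryptoKit on iOS 13+ instead of the NSData MD5 category from the question; originalData is a placeholder for your input data). The PNG round trip should produce matching hashes, while the JPEG one should not:

import UIKit
import CryptoKit

func md5Hex(_ data: Data) -> String {
    Insecure.MD5.hash(data: data).map { String(format: "%02x", $0) }.joined()
}

let image = UIImage(data: originalData)! // originalData: hypothetical input JPEG/PNG
if let png1 = image.pngData(), let png2 = UIImage(data: png1)?.pngData() {
    print(md5Hex(png1) == md5Hex(png2)) // expected: true (lossless round trip)
}
if let jpg1 = image.jpegData(compressionQuality: 1.0),
   let jpg2 = UIImage(data: jpg1)?.jpegData(compressionQuality: 1.0) {
    print(md5Hex(jpg1) == md5Hex(jpg2)) // expected: false (every JPEG re-encode loses data)
}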

Convert video to animated GIF on iOS

I want to take a video file recorded with the camera (.mp4) and convert it to an animated GIF image.
I looked up the Apple Docs and there doesn't seem to be any built-in function for this.
How should I approach this task?
You can do it in a few steps:
Step 1: Calculate the required frame count
CMTime vid_length = asset.duration;
float seconds = CMTimeGetSeconds(vid_length);
int required_frames_count = seconds * 12.5; // 12.5 fps; adjust as needed
int64_t step = vid_length.value / required_frames_count;
int value = 0;
Step 2: Set up the GIF file properties
destination = CGImageDestinationCreateWithURL((CFURLRef)[NSURL fileURLWithPath:path],
                                              kUTTypeGIF,
                                              required_frames_count,
                                              NULL);
frameProperties = [NSDictionary dictionaryWithObject:[NSDictionary dictionaryWithObject:[NSNumber numberWithFloat:0.8f] forKey:(NSString *)kCGImagePropertyGIFDelayTime]
                                              forKey:(NSString *)kCGImagePropertyGIFDictionary];
gifProperties = [NSDictionary dictionaryWithObject:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:0] forKey:(NSString *)kCGImagePropertyGIFLoopCount]
                                            forKey:(NSString *)kCGImagePropertyGIFDictionary];
Step 3: Generate frames from the video with AVAssetImageGenerator
for (int i = 0; i < required_frames_count; i++) {
    AVAssetImageGenerator *image_generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    image_generator.requestedTimeToleranceAfter = kCMTimeZero;
    image_generator.requestedTimeToleranceBefore = kCMTimeZero;
    image_generator.appliesPreferredTrackTransform = YES;
    image_generator.maximumSize = CGSizeMake(wd, ht); // bounding box; the aspect ratio is preserved
    CMTime time = CMTimeMake(value, vid_length.timescale);
    CGImageRef image_ref = [image_generator copyCGImageAtTime:time actualTime:NULL error:NULL];
    UIImage *thumb = [UIImage imageWithCGImage:image_ref];
    [self mergeFrameForGif:thumb];
    CGImageRelease(image_ref);
    value += step;
}
CGImageDestinationSetProperties(destination, (CFDictionaryRef)gifProperties);
CGImageDestinationFinalize(destination);
CFRelease(destination);
NSLog(@"animated GIF file created at %@", path);
Step 4: Add each frame to the GIF file
- (void)mergeFrameForGif:(UIImage *)pic1
{
    CGImageDestinationAddImage(destination, pic1.CGImage, (CFDictionaryRef)frameProperties);
    pic1 = nil;
}
There is no built-in API for that. I released a library that converts video files to animated GIF images while giving enough flexibility to tweak settings such as frame rate, frame duration, size, etc.
The library is called NSGIF. You can find it here: http://github.com/NSRare/NSGIF
This is the simplest way to convert a video to a GIF:
[NSGIF optimalGIFfromURL:url loopCount:0 completion:^(NSURL *GifURL) {
    NSLog(@"Finished generating GIF: %@", GifURL);
}];
The optimalGIFfromURL method automatically generates the GIF based on optimal settings. There is also room for much more flexibility; check out the repo for more samples.
Updated for Swift 5.1
import Foundation
import AVFoundation
import PhotosUI
import MobileCoreServices
func makeGIF(asset: AVAsset, destinationURL: URL, width: CGFloat, height: CGFloat) {
    let vid_length: CMTime = asset.duration
    let seconds: Double = CMTimeGetSeconds(vid_length)
    let tracks = asset.tracks(withMediaType: .video)
    let fps = tracks.first?.nominalFrameRate ?? 1.0
    let required_frames_count: Int = Int(seconds * Double(fps)) // adjust as needed
    let step: Int64 = vid_length.value / Int64(required_frames_count)
    var value: CMTimeValue = 0
    guard let destination = CGImageDestinationCreateWithURL(destinationURL as CFURL, kUTTypeGIF, required_frames_count, nil) else { return }
    let gifProperties: CFDictionary = [kCGImagePropertyGIFDictionary: [kCGImagePropertyGIFLoopCount: 0]] as CFDictionary
    for _ in 0 ..< required_frames_count {
        let image_generator = AVAssetImageGenerator(asset: asset)
        image_generator.requestedTimeToleranceAfter = CMTime.zero
        image_generator.requestedTimeToleranceBefore = CMTime.zero
        image_generator.appliesPreferredTrackTransform = true
        // To get an unscaled image, or to define a bounding box; the aspect ratio is preserved
        image_generator.maximumSize = CGSize(width: width, height: height)
        let time = CMTime(value: value, timescale: vid_length.timescale)
        do {
            let image_ref: CGImage = try image_generator.copyCGImage(at: time, actualTime: nil)
            let thumb = UIImage(cgImage: image_ref)
            mergeFrameForGif(frame: thumb, destination: destination)
            value += step
        } catch {
            // Skip frames that could not be generated
        }
    }
    CGImageDestinationSetProperties(destination, gifProperties)
    CGImageDestinationFinalize(destination)
    print("animated GIF file created at \(destinationURL)")
}
func mergeFrameForGif(frame: UIImage, destination: CGImageDestination) {
    let frameProperties: CFDictionary = [kCGImagePropertyGIFDictionary: [kCGImagePropertyGIFDelayTime: 0.8]] as CFDictionary
    CGImageDestinationAddImage(destination, frame.cgImage!, frameProperties)
}
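For example (assuming a local video at some videoURL, a hypothetical constant), the function above could be driven like this:

let asset = AVAsset(url: videoURL)
let gifURL = FileManager.default.temporaryDirectory.appendingPathComponent("out.gif")
makeGIF(asset: asset, destinationURL: gifURL, width: 320, height: 320)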

How to get panoramas from ALAsset objects

We have about 2,000 objects that are instances of ALAsset, and we need to know which of them are panoramic images.
We have tried getting a CGImageRef from the ALAsset instance and checking the width/height ratio.
ALAsset *alasset = ...
CGImageRef thumbnail = alasset.thumbnail; // returns a square thumbnail; not suitable for me
CGImageRef aspectThumbnail = alasset.aspectRatioThumbnail; // returns an aspect-ratio thumbnail, but very slowly
This isn't suitable for us because it is too slow for that many files.
We have also tried getting the metadata from the defaultRepresentation and checking the image EXIF, but that is slow too.
NSDictionary *dictionary = [[alasset defaultRepresentation] metadata]; // very slow too
Is there any way to make it better?
Thanks
Finally, I found this solution for ALAsset:
ALAssetsLibrary *assetsLibrary = ...;
NSOperationQueue *queue = [[NSOperationQueue alloc] init];
static NSString * const kAssetQueueName = ...;
static NSUInteger const kAssetConcurrentOperationCount = ...; // I use 5
queue.maxConcurrentOperationCount = kAssetConcurrentOperationCount;
queue.name = kAssetQueueName;
dispatch_async(dispatch_get_main_queue(), ^{
    [assetsLibrary enumerateGroupsWithTypes:ALAssetsGroupAll usingBlock:^(ALAssetsGroup *group, BOOL *stop) {
        /* You must check that the group is not nil. */
        if (!group)
            return;
        /* Then select the group in which to search for panoramas: it's @"Saved Photos" on the iPhone Simulator and @"Camera Roll" on the iPhone. This applies only to iOS 7 and earlier. */
        static NSString * const kAssetGroupName = ...;
        if ([[group valueForProperty:ALAssetsGroupPropertyName] isEqualToString:kAssetGroupName]) {
            [group enumerateAssetsUsingBlock:^(ALAsset *asset, NSUInteger index, BOOL *stop) {
                if (!asset)
                    return;
                [queue addOperationWithBlock:^{
                    // I use @autoreleasepool for immediate memory release once a panorama's asset URL is found
                    @autoreleasepool {
                        ALAssetRepresentation *defaultRepresentation = asset.defaultRepresentation;
                        if ([defaultRepresentation.UTI isEqualToString:@"public.jpeg"]) {
                            NSDictionary *metadata = defaultRepresentation.metadata;
                            if (!metadata)
                                return;
                            if (metadata[@"PixelWidth"] && metadata[@"PixelHeight"]) {
                                NSInteger pixelWidth = [metadata[@"PixelWidth"] integerValue];
                                NSInteger pixelHeight = [metadata[@"PixelHeight"] integerValue];
                                static NSUInteger const kSidesRelationshipConstant = ...; // I use 2
                                static NSUInteger const kMinimalPanoramaHeight = ...; // I use 600
                                if (pixelHeight >= kMinimalPanoramaHeight && pixelWidth / pixelHeight >= kSidesRelationshipConstant) {
                                    /* So, that is a panorama. */
                                }
                            }
                        }
                    }
                }];
            }];
        }
    } failureBlock:^(NSError *error) {
        // Some failure handling, you know.
    }];
});
That's it. I don't think this is the best solution, but so far I haven't found a better one.
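If you can move off ALAsset, the Photos framework makes this much cheaper: PHAsset exposes pixelWidth/pixelHeight (and even a dedicated panorama media subtype) without loading any image data. A minimal Swift sketch, assuming the app already has photo-library authorization and reusing the same 600-pixel-height and 2:1 heuristic as above:

import Photos

func fetchPanoramas() -> [PHAsset] {
    var panoramas: [PHAsset] = []
    let assets = PHAsset.fetchAssets(with: .image, options: nil)
    assets.enumerateObjects { asset, _, _ in
        // Either the system tagged it as a panorama, or it is wide enough by our heuristic
        let isMarkedPanorama = asset.mediaSubtypes.contains(.photoPanorama)
        let isWideEnough = asset.pixelHeight >= 600 && asset.pixelWidth / asset.pixelHeight >= 2
        if isMarkedPanorama || isWideEnough {
            panoramas.append(asset)
        }
    }
    return panoramas
}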

How to Convert PDF to NSImage and change the DPI?

I have successfully converted PDF pages to NSImage and saved them as JPG files. However, the output is at the default 72 DPI. I want to change the DPI to 300 but have failed so far. Below is the code:
- (IBAction)TestButton:(id)sender {
    NSString *localDocuments = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    NSString *pdfPath = [localDocuments stringByAppendingPathComponent:@"1.pdf"];
    NSData *pdfData = [NSData dataWithContentsOfFile:pdfPath];
    NSPDFImageRep *pdfImg = [NSPDFImageRep imageRepWithData:pdfData];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSInteger pageCount = [pdfImg pageCount];
    for (int i = 0; i < pageCount; i++) {
        [pdfImg setCurrentPage:i];
        NSImage *temp = [[NSImage alloc] init];
        [temp addRepresentation:pdfImg];
        CGFloat factor = 300.0 / 72.0; // Scale from 72 DPI to 300 DPI
        NSSize newSize = NSMakeSize(temp.size.width * factor, temp.size.height * factor);
        NSImage *scaledImg = [[NSImage alloc] initWithSize:newSize];
        [scaledImg lockFocus];
        [[NSColor whiteColor] set];
        [NSBezierPath fillRect:NSMakeRect(0, 0, newSize.width, newSize.height)];
        NSAffineTransform *transform = [NSAffineTransform transform];
        [transform scaleBy:factor];
        [transform concat];
        [temp drawAtPoint:NSZeroPoint fromRect:NSZeroRect operation:NSCompositeSourceOver fraction:1.0];
        [scaledImg unlockFocus];
        NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[temp TIFFRepresentation]];
        NSData *finalData = [rep representationUsingType:NSJPEGFileType properties:nil];
        NSString *pageName = [NSString stringWithFormat:@"Page_%ld.jpg", (long)[pdfImg currentPage]];
        [fileManager createFileAtPath:[NSString stringWithFormat:@"%@%@", pdfPath, pageName] contents:finalData attributes:nil];
    }
}
Since OS X 10.8, NSImage has a block based initialiser to draw vector based content into a bitmap.
The idea is to provide a drawing handler that is called whenever a representation of the image is requested.
The relation between points and pixels is expressed by passing an NSSize (in points) to the initialiser and explicitly setting the pixel dimensions on the representation:
NSString *localDocuments = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *pdfPath = [localDocuments stringByAppendingPathComponent:@"1.pdf"];
NSData *pdfData = [NSData dataWithContentsOfFile:pdfPath];
NSPDFImageRep *pdfImageRep = [NSPDFImageRep imageRepWithData:pdfData];
CGFloat factor = 300.0 / 72.0;
NSInteger pageCount = [pdfImageRep pageCount];
for (int i = 0; i < pageCount; i++)
{
    [pdfImageRep setCurrentPage:i];
    NSImage *scaledImage = [NSImage imageWithSize:pdfImageRep.size flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
        [pdfImageRep drawInRect:dstRect];
        return YES;
    }];
    NSImageRep *scaledImageRep = [[scaledImage representations] firstObject];
    /*
     * The sizes of the PDF image rep and the [NSImage imageWithSize:flipped:drawingHandler:]
     * context are defined in terms of points.
     * By explicitly setting the size of the scaled representation in pixels, you
     * define the relation between points and pixels.
     */
    scaledImageRep.pixelsWide = pdfImageRep.size.width * factor;
    scaledImageRep.pixelsHigh = pdfImageRep.size.height * factor;
    NSBitmapImageRep *pngImageRep = [NSBitmapImageRep imageRepWithData:[scaledImage TIFFRepresentation]];
    NSData *finalData = [pngImageRep representationUsingType:NSJPEGFileType properties:nil];
    NSString *pageName = [NSString stringWithFormat:@"Page_%ld.jpg", (long)[pdfImageRep currentPage]];
    [[NSFileManager defaultManager] createFileAtPath:[NSString stringWithFormat:@"%@%@", pdfPath, pageName] contents:finalData attributes:nil];
}
You can set the resolution saved in an image file's metadata by setting the size of the NSImageRep to something other than the image's pixel size:
[pngImageRep setSize:NSMakeSize(targetWidth, targetHeight)]
where targetWidth and targetHeight are the point dimensions you want.
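To make the point/pixel arithmetic concrete, here is a small Swift sketch of the same idea (imageData is a placeholder for bitmap data you already have): size is in points at 72 per inch, so for a 300 DPI file the rep's point size must be pixels × 72 / 300:

import AppKit

let targetDPI: CGFloat = 300
let pointsPerInch: CGFloat = 72
if let rep = NSBitmapImageRep(data: imageData) { // imageData: hypothetical JPEG/TIFF data
    let widthInPoints = CGFloat(rep.pixelsWide) * pointsPerInch / targetDPI
    let heightInPoints = CGFloat(rep.pixelsHigh) * pointsPerInch / targetDPI
    rep.size = NSSize(width: widthInPoints, height: heightInPoints) // metadata now reads 300 DPI
}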
Edit: and I guess you wanted to write "scaledImg" not "temp"
NSBitmapImageRep *rep = [NSBitmapImageRep imageRepWithData:[scaledImg TIFFRepresentation]];
Edit 2: on second thought this will get you a larger image but only as a stretched out version of the smaller one. The approach in weichsel's answer with the modification below is probably what you really want (but the code above is still valid for setting the metadata)
NSSize newSize = NSMakeSize(pdfImageRep.size.width * factor, pdfImageRep.size.height * factor);
NSImage *scaledImage = [NSImage imageWithSize:newSize flipped:NO drawingHandler:^BOOL(NSRect dstRect) {
    [pdfImageRep drawInRect:dstRect];
    return YES;
}];

Calling imageWithData:UIImageJPEGRepresentation() multiple times only compresses image the first time

In order to prevent lag in my app, I'm trying to compress images larger than 1 MB (mostly pictures taken with the iPhone's normal camera).
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
NSData *imageSize = UIImageJPEGRepresentation(image, 1);
NSLog(@"original size %u", [imageSize length]);
UIImage *image2 = [UIImage imageWithData:UIImageJPEGRepresentation(image, 0)];
NSData *newImageSize = UIImageJPEGRepresentation(image2, 1);
NSLog(@"new size %u", [newImageSize length]);
UIImage *image3 = [UIImage imageWithData:UIImageJPEGRepresentation(image2, 0)];
NSData *newImageSize2 = UIImageJPEGRepresentation(image3, 1);
NSLog(@"new size %u", [newImageSize2 length]);
picView = [[UIImageView alloc] initWithImage:image3];
However, the NSLog I get outputs something along the lines of
original size 3649058
new size 1835251
new size 1834884
The difference between the 1st and 2nd compression is almost negligible. My goal is to get the image size below 1 MB. Did I overlook something/is there an alternative approach to achieve this?
EDIT: I want to avoid scaling the image's height and width, if possible.
A couple of thoughts:
The UIImageJPEGRepresentation function does not return the "original" image. For example, if you employ a compressionQuality of 1.0, it does not, technically, return the "original" image, but rather it returns a JPEG rendition of the image with compressionQuality at its maximum value. This can actually yield an object that is larger than the original asset (at least if the original image is a JPEG). You're also discarding all of the metadata (information about where the image was taken, the camera settings, etc.) in the process.
If you want the original asset, you should use PHImageManager:
NSURL *url = [info objectForKey:UIImagePickerControllerReferenceURL];
PHFetchResult *result = [PHAsset fetchAssetsWithALAssetURLs:@[url] options:nil];
PHAsset *asset = [result firstObject];
PHImageManager *manager = [PHImageManager defaultManager];
[manager requestImageDataForAsset:asset options:nil resultHandler:^(NSData *imageData, NSString *dataUTI, UIImageOrientation orientation, NSDictionary *info) {
    NSString *filename = [(NSURL *)info[@"PHImageFileURLKey"] lastPathComponent];
    // do what you want with the `imageData`
}];
In iOS versions prior to 8, you'd have to use assetForURL of the ALAssetsLibrary class:
NSURL *url = [info objectForKey:UIImagePickerControllerReferenceURL];
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
[library assetForURL:url resultBlock:^(ALAsset *asset) {
    ALAssetRepresentation *representation = [asset defaultRepresentation];
    NSLog(@"size of original asset %llu", [representation size]);
    // I generally would write directly to a `NSOutputStream`, but if you want it in a
    // NSData, it would be something like:
    NSMutableData *data = [NSMutableData data];
    // now loop, reading data into a buffer and appending it to our data object
    NSError *error;
    long long bufferOffset = 0ll;
    NSInteger bufferSize = 10000;
    long long bytesRemaining = [representation size];
    uint8_t buffer[bufferSize];
    NSUInteger bytesRead;
    while (bytesRemaining > 0) {
        bytesRead = [representation getBytes:buffer fromOffset:bufferOffset length:bufferSize error:&error];
        if (bytesRead == 0) {
            NSLog(@"error reading asset representation: %@", error);
            return;
        }
        bytesRemaining -= bytesRead;
        bufferOffset += bytesRead;
        [data appendBytes:buffer length:bytesRead];
    }
    // ok, successfully read original asset;
    // do whatever you want with it here
} failureBlock:^(NSError *error) {
    NSLog(@"error=%@", error);
}];
Please note that this assetForURL runs asynchronously.
If you want a NSData with compression, you can use UIImageJPEGRepresentation with a compressionQuality less than 1.0. Your code actually does this with a compressionQuality of 0.0, which should offer maximum compression. But you don't save that NSData, but rather use it to create a UIImage and you then get a new UIImageJPEGRepresentation with a compressionQuality of 1.0, thus losing much of the compression you originally achieved.
Consider the following code:
// a UIImage of the original asset (discarding metadata)
UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];
// this may well be larger than the original asset
NSData *jpgDataHighestCompressionQuality = UIImageJPEGRepresentation(image, 1.0);
[jpgDataHighestCompressionQuality writeToFile:[docsPath stringByAppendingPathComponent:@"imageDataFromJpeg.jpg"] atomically:YES];
NSLog(@"compressionQuality = 1.0; length = %u", [jpgDataHighestCompressionQuality length]);
// this will be smaller, but with some loss of data
NSData *jpgDataLowestCompressionQuality = UIImageJPEGRepresentation(image, 0.0);
NSLog(@"compressionQuality = 0.0; length = %u", [jpgDataLowestCompressionQuality length]);
UIImage *image2 = [UIImage imageWithData:jpgDataLowestCompressionQuality];
// ironically, this will be larger than jpgDataLowestCompressionQuality
NSData *newImageSize = UIImageJPEGRepresentation(image2, 1.0);
NSLog(@"new size %u", [newImageSize length]);
In addition to the JPEG compression quality outlined in the prior point, you could also just resize the image, and you can marry that with a reduced compressionQuality, as in the sketch below.
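A hedged Swift sketch of that combination: step the JPEG quality down first, then redraw at smaller dimensions, until the encoded data fits a byte budget (the quality steps and the 0.7 scale factor are arbitrary starting points, not values from the answer above):

import UIKit

func jpegData(for image: UIImage, maxByteCount: Int) -> Data? {
    var current = image
    while current.size.width > 1 && current.size.height > 1 {
        // Try decreasing quality first; it is cheaper than redrawing
        for quality: CGFloat in [0.9, 0.7, 0.5, 0.3] {
            if let data = current.jpegData(compressionQuality: quality),
               data.count <= maxByteCount {
                return data
            }
        }
        // Still too big at the lowest quality: redraw at 70% of the current size
        let newSize = CGSize(width: current.size.width * 0.7, height: current.size.height * 0.7)
        let renderer = UIGraphicsImageRenderer(size: newSize)
        let previous = current
        current = renderer.image { _ in
            previous.draw(in: CGRect(origin: .zero, size: newSize))
        }
    }
    return nil
}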
You cannot compress the image again and again; if that worked, everything could be compressed down to nothing. One way to make your image smaller is to change its size, for example from 640×960 to 320×480, but you will lose quality. I would first apply UIImageJPEGRepresentation(image, 0.75) and then change the size, perhaps to two-thirds or half of the image's width and height.
