I am doing a simple image file size reduction task but ran into this issue: when using the lossless option (compression quality 1) for JPEGs, the file size tripled compared to the original NSData for the image (same resolution). Any suggestions?
Here is the simple code:
let data = someImageData
print(data.length)
let image = UIImage(data: data)!
let newImageData = UIImageJPEGRepresentation(image, 1)
print(newImageData.length)
let newImageData2 = UIImageJPEGRepresentation(image, 0.8)
print(newImageData2.length)
and output:
2604768 //original length
8112955 //new length if compression = 1
2588870 //new length if compression = 0.8
It seems I have to accept the 0.8 quality loss just to get roughly the same data length. Did I miss something? Please help.
Edit: I did some more testing, converting the data to a UIImage and then back via UIImageJPEGRepresentation(image, 1); the length of the new data increases with every cycle. If I instead use UIImageJPEGRepresentation(image, 0.8), the length of the new data decreases a little each cycle, but the compounding quality loss would be a concern.
What your code is doing is decompressing (decoding) the image into memory with the line let image = UIImage(data: data)!, and then re-compressing it as a JPEG with let newImageData = UIImageJPEGRepresentation(image, 1), i.e. with the highest quality setting (q = 1.0). That is why the data is suddenly so big.
So the moment you turn your NSData into a UIImage, the decompression and subsequent re-encoding change the size of the data. The new image's file size really will be 8112955 bytes.
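If all you need is to keep the payload small, a minimal sketch (reusing the question's someImageData stand-in) is to hold on to the original bytes and avoid the re-encoding round trip altogether:
// A minimal sketch: keep the original compressed bytes, decode only for display.
// `someImageData` is the same stand-in NSData used in the question.
let originalData = someImageData                 // the original ~2.6 MB JPEG bytes
let displayImage = UIImage(data: originalData)   // decode only when you need to draw it
// Persist or upload `originalData` untouched; call UIImageJPEGRepresentation(_:_:)
// only when you deliberately want to re-encode and accept another lossy pass.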
Related
I am using Swift's Vision framework for deep learning and want to upload the input image to the backend using a REST API, for which I am converting my UIImage to MultipartFormData using the jpegData() and pngData() functions that UIImage natively offers.
I use session.sessionPreset = .vga640x480 to specify the image size in my app for processing.
I was seeing a different size of image in the backend, which I was able to confirm in the app, because the UIImage(imageData) converted back from that image data has a different size.
This is how I convert image to multipartData -
let multipartData = MultipartFormData()
if let imageData = self.image?.jpegData(compressionQuality: 1.0) {
    multipartData.append(imageData, withName: "image", fileName: "image.jpeg", mimeType: "image/jpeg")
}
This is what I see in the Xcode debugger (screenshot not reproduced here).
The following looks intuitive, but manifests the behavior you describe, whereby one ends up with a Data representation of the image with an incorrect scale and pixel size:
let ciImage = CIImage(cvImageBuffer: pixelBuffer) // 640×480
let image = UIImage(ciImage: ciImage) // says it is 640×480 with scale of 1
guard let data = image.pngData() else { ... } // but if you extract `Data` and then recreate image from that, the size will be off by a multiple of your device’s scale
However, if you create it via a CGImage, you will get the right result:
let ciImage = CIImage(cvImageBuffer: pixelBuffer)
let ciContext = CIContext()
guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
let image = UIImage(cgImage: cgImage)
You asked:
If my image is 640×480 points with scale 2, would my deep learning model still take the same time to process it as a 1280×960-point image with scale 1?
There is no difference, as far as the model goes, between 640×480pt @ 2× and 1280×960pt @ 1×.
The question is whether 640×480pt @ 2× is better than 640×480pt @ 1×: in that case the model will undoubtedly generate better results, though possibly more slowly, with the higher-resolution image (at 2× the asset is roughly four times larger and slower to upload; on a 3× device it will be roughly nine times larger).
But if you look at the larger asset generated by the direct CIImage » UIImage process, you can see that it did not really capture a 1280×960 snapshot; it captured 640×480 and upscaled it (with some smoothing), so you do not actually have a more detailed asset to work with, and it is unlikely to generate better results. You pay the penalty of the larger asset, but likely without any benefit.
If you need better results with larger images, I would change the preset to a higher resolution, but still avoid the scale-based adjustment by using the CIContext/CGImage-based snippet shared above.
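For example, a minimal sketch of that combination, assuming an AVCaptureSession named session and a capture delegate that hands you a CVPixelBuffer per frame (.hd1280x720 is just an example preset):
// Sketch: raise the capture resolution instead of relying on screen scale.
session.sessionPreset = .hd1280x720   // example higher-resolution preset

// Convert each frame through CIContext/CGImage, as above, so the UIImage
// is a true full-resolution bitmap at scale 1 (no upsampling by screen scale).
func uiImage(from pixelBuffer: CVPixelBuffer, context: CIContext = CIContext()) -> UIImage? {
    let ciImage = CIImage(cvImageBuffer: pixelBuffer)
    guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}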
I am using UIImagePickerController to read images from the photo library. I use the following code to calculate their size:
if let file = info[UIImagePickerControllerOriginalImage] {
    let imageValue = (file as? UIImage)!
    let data = UIImageJPEGRepresentation(imageValue, 1)
    let imageSize = (data?.count)! / 1024
    print("imsize in MB: ", Double(imageSize) / 1024.0)

    if let imageData = UIImagePNGRepresentation(imageValue) {
        let bytes = imageData.count
        let KB = Double(bytes) / 1024.0
        let MB = Double(KB) / 1024.0
        print("we have image size as MB", MB)
    }
}
To my surprise, the two print different sizes for the same image, and both differ from the actual size of the image file. What is happening here, and which is more accurate?
I'm a bit confused, so help understanding this would be much appreciated.
JPEG and PNG are different formats. Here is the difference between JPEG and PNG, summarized from a quick search:
The main difference between JPG and PNG is the compression algorithms that they use. JPG uses a lossy compression algorithm that discards some of the image information in order to reduce the size of the file. ... With PNG, the quality of the image will not change, but the size of the file will usually be larger.
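If what you actually want is the size of the file as it sits in the photo library, rather than the size of a fresh JPEG or PNG re-encode, one option (a sketch, assuming iOS 11+ where the picker also supplies UIImagePickerControllerImageURL) is to measure the picked file itself:
// Sketch (iOS 11+): measure the picked file on disk instead of re-encoding it.
if let url = info[UIImagePickerControllerImageURL] as? URL,
   let attributes = try? FileManager.default.attributesOfItem(atPath: url.path),
   let bytes = attributes[.size] as? Int {
    print("file on disk in MB: ", Double(bytes) / 1024.0 / 1024.0)
}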
I have an upload limit of 2 MB for my images. So if a user tries to upload an image bigger than 2 MB, I would like to reduce its size without reducing the resolution.
How do I achieve that? I tried something like this, but it didn't work:
var fileSize = UIImageJPEGRepresentation(image, 1)!.length
print("before File size:")
print(fileSize)

while fileSize > MyConstants.MAX_ATTACHMENT_SIZE {
    let mydata = UIImageJPEGRepresentation(image, 0.75)
    fileSize = mydata!.length
    image = UIImage(data: mydata!)!
    print("make smaller \(fileSize)")
}

print("after File size:")
print(UIImageJPEGRepresentation(image, 1)!.length)
output:
before File size:
2298429
make smaller 846683
after File size:
2737491
As @Lion said, you will have to play around with the quality to achieve an agreeable file size. I noticed, however, that:
print(UIImageJPEGRepresentation(image, 1)!.length)
will print the image length at max quality. This is misleading, since inside the while loop you are actually achieving a smaller file size.
while fileSize > MyConstants.MAX_ATTACHMENT_SIZE {
    let mydata = UIImageJPEGRepresentation(image, 0.75)
    fileSize = mydata!.length
    image = UIImage(data: mydata!)!
    print("make smaller \(fileSize)")
}
let mydata = UIImageJPEGRepresentation(image, 0.75)
fileSize = mydata!.length
image = UIImage(data: mydata!)!
This is the right approach. You can pass a compression quality anywhere from 0.1 to 1, depending on how much you want to reduce the quality and size of the image.
print(UIImageJPEGRepresentation(image, 1)!.length) re-encodes the (already decompressed) image at maximum quality again (quality = 1), so the reported size jumps back up.
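One way to put that into practice, keeping the question's Swift 2-era API, is a sketch like the following: keep encoding the same UIImage at progressively lower quality until the data fits, and upload that data rather than re-encoding at quality 1 afterwards (MyConstants.MAX_ATTACHMENT_SIZE is assumed to be the 2 MB byte limit):
// Sketch: lower the JPEG quality on the *same* UIImage until the data fits,
// instead of round-tripping through UIImage(data:) on every iteration.
var quality: CGFloat = 1.0
var mydata = UIImageJPEGRepresentation(image, quality)!
while mydata.length > MyConstants.MAX_ATTACHMENT_SIZE && quality > 0.1 {
    quality -= 0.1
    mydata = UIImageJPEGRepresentation(image, quality)!
}
// Upload `mydata` as-is; re-encoding it at quality 1 would just inflate it again.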
Hi, I am trying to compress a UIImage. My code is like so:
var data = UIImageJPEGRepresentation(image, 1.0)
print("Old image size is \(data?.length) bytes")
data = UIImageJPEGRepresentation(image, 0.7)
print("New image size is \(data?.length) bytes")
let optimizedImage = UIImage(data: data!, scale: 1.0)
The print out is:
Old image size is Optional(5951798) bytes
New image size is Optional(1416792) bytes
Later on in the app I upload the UIImage to a webserver (it's converted to NSData first). When I check the size it's the original 5951798 bytes. What could I be doing wrong here?
A UIImage holds a bitmap (uncompressed) version of the image, which is what is used to render the image to the screen. The JPEG data of an image can be compressed by reducing its quality (in effect, the colour variance), but that compression exists only in the JPEG data. Once the compressed JPEG is unpacked into a UIImage, it again requires the full bitmap size (which depends on the colour format and the image dimensions).
Basically, keep the best quality image you can for display and compress just before upload.
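To make that concrete, a rough sketch of the arithmetic, assuming a typical 4-bytes-per-pixel (RGBA) backing store:
// Rough sketch: in-memory bitmap footprint vs. compressed JPEG payload.
let pixelWidth  = image.size.width  * image.scale
let pixelHeight = image.size.height * image.scale
let bitmapBytes = Int(pixelWidth * pixelHeight * 4)                    // what the UIImage occupies in memory
let jpegBytes   = UIImageJPEGRepresentation(image, 0.7)?.length ?? 0   // what you actually send over the wire
print("bitmap: \(bitmapBytes) bytes, JPEG at 0.7: \(jpegBytes) bytes")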
Send this data to the server (data = UIImageJPEGRepresentation(image, 0.7)) instead of sending optimizedImage.
You have already compressed the image into that data; turning it back into a UIImage and re-encoding it just repeats the process and resets the size to the original.
In my app I have to upload a UIImage to a server in the form of a Base64 string, where the condition is that the string must not exceed 54999 characters.
The code I currently use generally works, but it takes a lot of time and memory, and the length of the uploaded Base64 string usually ends up far from 54999, sometimes off by a factor of 10.
var imageData = UIImagePNGRepresentation(image)
var base64String = imageData.base64EncodedStringWithOptions(.allZeros)
var scaledImage: UIImage = image
var newSize: CGSize = image.size
while (base64String as NSString).length > 54999 {
    newSize.width *= 0.5
    newSize.height *= 0.5
    scaledImage = image.imageScaledToFitSize(newSize)
    imageData = UIImagePNGRepresentation(scaledImage)
    base64String = imageData.base64EncodedStringWithOptions(.allZeros)
}
// proceed to upload Base64 string...
At first I thought I could do it the following way, but that obviously didn't work at all, because the encoded size does not scale linearly with the image dimensions:
let maxLength: NSInteger = 54999
if (base64String as NSString).length > maxLength {
    let downScaleFactor: CGFloat = CGFloat(maxLength) / CGFloat((base64String as NSString).length)
    var size: CGSize = image.size
    size.width *= downScaleFactor
    size.height *= downScaleFactor

    scaledImage = image.imageScaledToFitSize(size)
    imageData = UIImagePNGRepresentation(scaledImage)
    base64String = imageData.base64EncodedStringWithOptions(.allZeros)
}
There must be a better way to do this.
This is a tough problem. The Base64 encoding creates a predictable increase in data size (every 3 bytes become 4, i.e. roughly a 33% increase in size).
You can't be sure of the result size from PNG or JPEG compression since the decrease in byte size depends on the image being compressed. (A solid-colored rectangle compresses EXTREMELY well. A cartoon using solid colors compresses quite well, and a continuous tone photograph with lots of detail does not compress as well.)
You will probably have better luck using JPEG images, since you can adjust the compression level on those, and JPEG offers higher compression ratios.
I would suggest experimenting with the lowest image quality you can tolerate (use quality setting .5, look at the image, and adjust up/down from there until you get an image that looks "good enough".)
Then use that compression setting. You can skip the Base64 encoding until the end, since you can simply multiply your non-Base64 byte size by 4/3 to get the size after Base64 encoding.
I would suggest shrinking your image by 1/sqrt(2) (~0.7) per step instead of 0.5. Halving the dimensions each time causes large jumps in image size.
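Putting those pieces together, a minimal sketch that reuses the question's imageScaledToFitSize(_:) helper, budgets the raw byte size with the 4/3 rule, uses an example "good enough" JPEG quality of 0.5, and shrinks by roughly 1/sqrt(2) per step:
// Sketch: JPEG at a fixed, tolerable quality; shrink gently until the
// base64 budget is met; encode to base64 only once, at the end.
let maxBase64Length = 54999
let maxRawBytes = maxBase64Length / 4 * 3     // base64 turns every 3 raw bytes into 4 characters
let shrinkStep: CGFloat = 0.7071              // ~1/sqrt(2), as suggested above

var scaledImage: UIImage = image
var imageData = UIImageJPEGRepresentation(scaledImage, 0.5)   // example "good enough" quality

while imageData.length > maxRawBytes {
    var newSize = scaledImage.size
    newSize.width  *= shrinkStep
    newSize.height *= shrinkStep
    scaledImage = image.imageScaledToFitSize(newSize)         // the question's own helper
    imageData = UIImageJPEGRepresentation(scaledImage, 0.5)
}

let base64String = imageData.base64EncodedStringWithOptions(.allZeros)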