imageToUpload is 375x500 here. After uploading to Firebase Storage, the width and height were doubled. Is there a way to keep the size unchanged after uploading to Firebase Storage?
if let uploadData = UIImageJPEGRepresentation(imageToUpload, 0.0) {
    uploadImageRef.putData(uploadData, metadata: nil) { (metadata, error) in
        if error != nil {
            print("error")
            completion?(false)
        } else {
            // Your uploaded photo's URL.
            // Metadata contains file metadata such as size and content type.
            let size = metadata?.size
            // You can also access the download URL after upload.
            uploadImageRef.downloadURL { (url, error) in
                guard let downloadURL = url else {
                    // Uh-oh, an error occurred!
                    completion?(false)
                    return
                }
                print("Download url: \(downloadURL)")
                completion?(true)
            }
        }
    }
}
Note that I am using the extension below to change the image size to 375x500 (the size of imageToUpload) before uploading.
extension UIImage {
    func resized(to size: CGSize) -> UIImage {
        return UIGraphicsImageRenderer(size: size).image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}

let imageToUploadResize: UIImage = image.resized(to: CGSize(width: 375.0, height: 500.0))
As I mentioned in my comment, IMO, the function in the extension is really being called like a standalone function rather than extending the functionality. I would suggest making it a function that resizes a passed-in image and returns a new image.
Using this function, your image will be resized to the correct size and uploaded at that size (verified that it works):

func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
    let rect = CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height)
    // A scale of 1.0 forces a 1:1 point-to-pixel mapping, regardless of the device.
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 1.0)
    image.draw(in: rect)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}

and then called like this:

let updatedSize = CGSize(width: 300.0, height: 400.0)
let resizedImage = self.resizeImage(image: origImage!, targetSize: updatedSize)
Now, to address the issue with the extension at a 10,000-foot level.
It all goes back to points versus how they are rendered on an iDevice. Back with the iPhone 2G and 3G, rendering was 1:1, so if you ran your code in the simulator for that device and set the image size to 320x480, a 320x480 image would have been stored in Firebase. However, as screens improved and resolutions went up, so did the rendering scale, which affects the UIImage.
So if you were to set your project to simulate on an iPhone 6, that same image would be 640x960 (2x), and on an iPhone 8 Plus, 960x1440 (3x). (There is upsampling involved as well, which we ignore here.)
The UIImage knows what device it's on, so that should be taken into consideration.
Again, this is generically speaking and there are a lot of other components involved; the key relationship is pixels = logicalPoints * scaleFactor.
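To make that relationship concrete, here is a small sketch (variable names mine) of how one logical size maps to different pixel counts depending on the device:

import UIKit

let points = CGSize(width: 375, height: 500)
let scale = UIScreen.main.scale // 1.0, 2.0, or 3.0 depending on the device

// pixels = logicalPoints * scaleFactor
let pixels = CGSize(width: points.width * scale, height: points.height * scale)
print("\(points) points renders as \(pixels) pixels at \(scale)x")

// A UIImage reports its size in points; multiply by image.scale for pixels.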
Try using the resizing image code provided by the following resource. Your extension creates and returns a new instance of the image rather than updating the actual image itself.
If you want to keep on using your extension, note that UIImage is a class, so a method cannot reassign self; the closest you can get to resizing "in place" is assigning the result back at the call site:

extension UIImage {
    func resized(to size: CGSize) -> UIImage {
        return UIGraphicsImageRenderer(size: size).image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}

// UIImage is immutable, so reassign the variable:
image = image.resized(to: CGSize(width: 375.0, height: 500.0))
Related
This question already has an answer here: How to set the scale when using UIGraphicsImageRenderer (closed as a duplicate).
I’m having a performance issue with images in a tableView (very jerky scrolling) and several view controllers (slow loading). To speed things up and smooth things out, I’m trying to resize incoming images to the exact size of their respective imageViews once, at initial loading, instead of on the fly every time they are loaded from Core Data. The resized images will be stored as separate attributes on their parent entity. All images are square, i.e., a 1:1 aspect ratio.
Here is the resizing code:
extension UIImage {
    func resize(targetSize: CGSize) -> UIImage {
        print("Inside resize function")
        print("targetSize is \(targetSize)")
        return UIGraphicsImageRenderer(size: targetSize).image { _ in
            self.draw(in: CGRect(origin: .zero, size: targetSize))
        }
    }
}
And here is the function from which the resize function is called and to which it returns:
func difAndAssignPics() {
    guard let appDelegate = UIApplication.shared.delegate as? AppDelegate else {
        return
    }
    // This is a copy of the original image, from which the smaller ones will be created
    print("bigToLittlePic size is \(bigToLittlePic?.size as Any)")
    // These are the desired sizes of the three images I'm trying to create to facilitate performance
    let littlePicSize = CGSize(width: 110, height: 110)
    let mediumPicSize = CGSize(width: 160, height: 160)
    let displayPicSize = CGSize(width: 340, height: 340)
    // This is the call to the resize function for each target size
    littlePic = bigToLittlePic?.resize(targetSize: littlePicSize)
    mediumPic = bigToLittlePic?.resize(targetSize: mediumPicSize)
    displayPic = bigToLittlePic?.resize(targetSize: displayPicSize)
    // This code differentiates between the front view and rear view of the item and prints the size of each
    switch switchTag {
    case 1:
        newCoin.obversePic = displayPic!.pngData()! as NSData
        newCoin.obversePicThumb = littlePic!.pngData()! as NSData
        newCoin.obversePicMedium = mediumPic!.pngData()! as NSData
        selectIndicatorLabel.text = "Now select Reverse photo"
        appDelegate.saveContext()
        print("Inside switch case 1, difAndAssignPics")
        if let imageData = newCoin.obversePicThumb {
            let image = UIImage(data: imageData as Data)
            print("obversePicThumb size is \(image?.size as Any)")
        }
        if let imageData = newCoin.obversePicMedium {
            let image = UIImage(data: imageData as Data)
            print("obversePicMedium size is \(image?.size as Any)")
        }
        if let imageData = newCoin.obversePic {
            let image = UIImage(data: imageData as Data)
            print("obversePicBig size is \(image?.size as Any)")
        }
    case 2:
        newCoin.reversePic = displayPic!.pngData()! as NSData
        newCoin.reversePicThumb = littlePic!.pngData()! as NSData
        newCoin.reversePicMedium = mediumPic!.pngData()! as NSData
        selectIndicatorLabel.text = "Now select Obverse photo"
        appDelegate.saveContext()
        print("Inside switch case 2, difAndAssignPics")
        if let imageData = newCoin.reversePicThumb {
            let image = UIImage(data: imageData as Data)
            print("reversePicThumb size is \(image?.size as Any)")
        }
        if let imageData = newCoin.reversePicMedium {
            let image = UIImage(data: imageData as Data)
            print("reversePicMedium size is \(image?.size as Any)")
        }
        if let imageData = newCoin.reversePic {
            let image = UIImage(data: imageData as Data)
            print("reversePicBig size is \(image?.size as Any)")
        }
    default: break
    }
    reactivateDataButtons()
}
Here’s what is being printed in the console at the introduction of each new image:
bigToLittlePic size is Optional((2320.0, 2320.0))
Inside resize function
targetSize is (110.0, 110.0)
Inside resize function
targetSize is (160.0, 160.0)
Inside resize function
targetSize is (340.0, 340.0)
Ok, so far so good. However, when the image gets back to difAndAssignPics, this is the printout:
reversePicThumb size is Optional((330.0, 330.0))
reversePicMedium size is Optional((480.0, 480.0))
reversePicBig size is Optional((1020.0, 1020.0))
I included just the printout for the reverse images for brevity; the obverse gives identical numbers.
As you can see, the size of each resized image has somehow ballooned by a factor of 3. The pictures load and the quality is high, but performance is still noticeably suboptimal. I don’t know how to reconcile these numbers.
Anybody have any suggestions?
Many thanks in advance!
Edit
I printed out the size of the images that are being squeezed into the 110 x 110 imageViews in my custom cell. The numbers confirm that something in the resizing function is snafued. Here are the numbers in cellForRow:
obversePicThumb.size == Optional((330.0, 330.0))
reversePicThumb.size == Optional((330.0, 330.0))
Edit #2
This post answered my question. Many thanks to those who took the time to look!
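For reference, the fix in that linked post comes down to rendering with an explicit scale, so points map 1:1 to pixels instead of being multiplied by the device's screen scale. A minimal sketch of the idea:

extension UIImage {
    func resize(targetSize: CGSize) -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1 // the default is the device's screen scale (2x/3x), which caused the 3x ballooning above
        return UIGraphicsImageRenderer(size: targetSize, format: format).image { _ in
            self.draw(in: CGRect(origin: .zero, size: targetSize))
        }
    }
}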
I don't know if this will make any difference, but here's my extension to accomplish this:
extension UIImage {
    public func resizeToBoundingSquare(_ boundingSquareSideLength: CGFloat) -> UIImage {
        let imgScale = self.size.width > self.size.height
            ? boundingSquareSideLength / self.size.width
            : boundingSquareSideLength / self.size.height
        let newWidth = self.size.width * imgScale
        let newHeight = self.size.height * imgScale
        let newSize = CGSize(width: newWidth, height: newHeight)
        // UIGraphicsBeginImageContext uses a scale of 1.0, so the output is exactly newSize in pixels.
        UIGraphicsBeginImageContext(newSize)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}
My usage isn't with a UITableView, nor does it use Core Data. (I'd recommend doing things asynchronously for something like that; see the sketch below.) But if your issue is with resizing, this should work with acceptable performance.
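On the asynchronous point, here is a minimal sketch (my own, assuming the resizeToBoundingSquare extension above) of moving the drawing work off the main thread:

func resizeAsync(_ image: UIImage, boundingSide: CGFloat, completion: @escaping (UIImage) -> Void) {
    // Do the CPU-heavy drawing off the main thread...
    DispatchQueue.global(qos: .userInitiated).async {
        let resized = image.resizeToBoundingSquare(boundingSide)
        // ...then hand the result back on the main thread for UI use.
        DispatchQueue.main.async {
            completion(resized)
        }
    }
}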
I have a CollectionView whose cells contain a UIImageView. Some images in the list have really high resolutions, 3000x2000 and upwards. I'm using the AlamofireImage library to show and cache images, but it still causes huge memory spikes. I tried doing
let filter = AspectScaledToFillSizeFilter(size: imageView.frame.size)
imageView.af_setImage(withURL: url, filter: filter)
This didn't make much of a difference.
Is there a better way to resize/downgrade the resolution of an image after downloading but before displaying it? iOS memory spikes seem to be driven more by resolution than by the image's file size.
Swift 4.1
This is how I resize my images. You could process them before displaying.
// Resize to ~1.5Kx2K resolution and compress to <200KB (JPEG 0.2)
private func resizePhoto(_ originalPhoto: UIImage) -> UIImage? {
    var size: CGSize
    let scale = UIScreen.main.scale
    if originalPhoto.size.width > originalPhoto.size.height { // Landscape
        size = CGSize(width: 2016/scale, height: 1512/scale)
    } else { // Portrait
        size = CGSize(width: 1512/scale, height: 2016/scale)
    }
    UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
    originalPhoto.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    let resizedPhoto = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    guard let scaledPhotoData = UIImageJPEGRepresentation(resizedPhoto!, 0.2) else { return nil }
    //print(">>> Resized data size: \(scaledPhotoData.count)")
    return UIImage(data: scaledPhotoData)
}
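Not from the answer above, but if memory spikes are the main concern, another common technique is downsampling with ImageIO, which decodes straight to the target size instead of materializing the full-resolution bitmap first. A sketch, assuming you have a local file URL for the downloaded image:

import ImageIO
import UIKit

func downsampledImage(at url: URL, maxPixelSize: CGFloat) -> UIImage? {
    // Don't decode the full-size bitmap just to open the source.
    let sourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
    guard let source = CGImageSourceCreateWithURL(url as CFURL, sourceOptions) else { return nil }
    let thumbnailOptions = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceShouldCacheImmediately: true,
        kCGImageSourceCreateThumbnailWithTransform: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ] as CFDictionary
    // Decodes directly at (at most) maxPixelSize on the longest side.
    guard let cgImage = CGImageSourceCreateThumbnailAtIndex(source, 0, thumbnailOptions) else { return nil }
    return UIImage(cgImage: cgImage)
}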
I am trying to save an image coming from the iPhone camera to a file. I use the following code:
try UIImageJPEGRepresentation(toWrite, 0.8)?.write(to: tempURL, options: NSData.WritingOptions.atomicWrite)
This results in a file with double the resolution of the toWrite UIImage. I confirmed via watch expressions that creating a new UIImage from UIImageJPEGRepresentation doubles its resolution:
-> toWrite.size CGSize (width = 3264, height = 2448)
-> UIImage(data: UIImageJPEGRepresentation(toWrite, 0.8)).size CGSize? (width = 6528, height = 4896)
Any idea why this would happen, and how to avoid it?
Thanks
Your initial image has a scale factor of 2, but when you init an image from data you get an image with a scale factor of 1. The way to solve it is to control the scale and init the image with the scale parameter:
@available(iOS 6.0, *)
public init?(data: Data, scale: CGFloat)

Playground code that shows how you can set the scale:
extension UIImage {
    class func with(color: UIColor, size: CGSize) -> UIImage? {
        let rect = CGRect(origin: .zero, size: size)
        UIGraphicsBeginImageContextWithOptions(size, true, 2.0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.setFillColor(color.cgColor)
        context.fill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}

let image = UIImage.with(color: UIColor.orange, size: CGSize(width: 100, height: 100))
if let image = image {
    let scale = image.scale
    if let data = UIImageJPEGRepresentation(image, 0.8) {
        if let newImage = UIImage(data: data, scale: scale) {
            debugPrint(newImage.size)
        }
    }
}
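Applied to the original problem (hypothetical usage, reusing toWrite from the question):

if let data = UIImageJPEGRepresentation(toWrite, 0.8),
   let restored = UIImage(data: data, scale: toWrite.scale) {
    // restored.size now matches toWrite.size in points.
    print(restored.size)
}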
This question already has answers here: How do I resize the UIImage to reduce upload image size (closed as a duplicate).
I am trying to reduce the file size of my images as much as possible before I upload them. Right now I am doing this:
if let firstImageData = UIImageJPEGRepresentation(pickedImage, 0.1) {
    self.imgArray.append(firstImageData)
}
This takes any image coming from the camera or photo album, makes it a JPEG, and reduces its size.
I have set the quality to 0.1, but when I upload my images the sizes still end up around 300-350 KB. Is there any way to reduce them even more? I'm aiming for 50-70 KB.
You can resize your image to a smaller size first using the extension below, which resizes either by percentage or by width.
extension UIImage {
    func resizeWithPercent(percentage: CGFloat) -> UIImage? {
        let imageView = UIImageView(frame: CGRect(origin: .zero, size: CGSize(width: size.width * percentage, height: size.height * percentage)))
        imageView.contentMode = .scaleAspectFit
        imageView.image = self
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.render(in: context)
        guard let result = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return result
    }

    func resizeWithWidth(width: CGFloat) -> UIImage? {
        let imageView = UIImageView(frame: CGRect(origin: .zero, size: CGSize(width: width, height: CGFloat(ceil(width / size.width * size.height)))))
        imageView.contentMode = .scaleAspectFit
        imageView.image = self
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.render(in: context)
        guard let result = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return result
    }
}
To use it, just call it like this:

myImage = myImage.resizeWithWidth(width: 700)!

Then you can still compress it using a compression ratio of your choice:

let compressData = UIImageJPEGRepresentation(myImage, 0.5) // max value is 1.0 and minimum is 0.0
let compressedImage = UIImage(data: compressData!)
You can only change the size of the image (smaller size = less data) and compress it using an image compression algorithm like JPEG; there is no other way (a better algorithm = a smaller size at the same quality).
I heard Google recently improved the JPEG algorithm using neural networks (Google’s TensorFlow).
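If you are aiming at a specific byte budget like the 50-70 KB in the question, one simple approach (a sketch of my own, not from this answer) is to walk the JPEG quality down until the data fits:

// Hypothetical helper: lower the JPEG quality step by step until the
// encoded data fits under maxBytes, or the quality floor is reached.
func jpegData(for image: UIImage, maxBytes: Int) -> Data? {
    var quality: CGFloat = 0.7
    var data = UIImageJPEGRepresentation(image, quality)
    while let d = data, d.count > maxBytes, quality > 0.1 {
        quality -= 0.1
        data = UIImageJPEGRepresentation(image, quality)
    }
    return data
}

Resizing first (as above) and then compressing gives much better results than compression alone.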
I am resizing and compressing my photos and getting an unusual result.
When I choose an image from the photo album, the image compresses and resizes fine. However, if I do it on an image passed from the camera, the image becomes oddly small (and unwatchable). As a test, I wired some compression and resizing calls to a button that takes an image from either the camera or the photo album. Below are my code and console output.
@IBAction func testBtnPressed(sender: AnyObject) {
    let img = selectedImageView.image!
    print("before resize image \(img.dataLengh_kb)kb size \(img.size)")
    let resizedImg = img.resizeWithWidth(width: 1080)
    print("1080 After resize image \(resizedImg!.dataLengh_kb)kb size \(resizedImg!.size)")
    let compressedImageData = resizedImg!.mediumQualityJPEGNSData
    print("Compress to medium quality = \(compressedImageData.length / 1024)kb")
}
extension UIImage {
    var mediumQualityJPEGNSData: NSData { return UIImageJPEGRepresentation(self, 0.5)! as NSData }

    func resizeWithWidth(width: CGFloat) -> UIImage? {
        let imageView = UIImageView(frame: CGRect(origin: .zero, size: CGSize(width: width, height: CGFloat(ceil(width / size.width * size.height)))))
        imageView.contentMode = .scaleAspectFit
        imageView.image = self
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.render(in: context)
        guard let result = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return result
    }
}
When photo was selected from photo album
before resize image 5004kb size (3024.0, 3024.0)
1080 After resize image 1023kb size (1080.0, 1080.0)
Compress to medium quality = 119kb
When photo was passed by camera
before resize image 4653kb size (24385.536, 24385.536)
1080 After resize image 25kb size (1080.576, 1080.576)
Compress to medium quality = 4kb
I replaced the image resizing function with the following one and it worked a lot better:
extension UIImage {
    func resizeImage(newHeight: CGFloat) -> UIImage {
        let scale = newHeight / self.size.height
        let newWidth = self.size.width * scale
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
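Called like this (origPhoto is a hypothetical name for your camera image):

let fixed = origPhoto.resizeImage(newHeight: 1080)

Like the bounding-square extension earlier, UIGraphicsBeginImageContext renders at a scale of 1.0, which is why the output lands at the exact pixel size requested.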