Losing image quality when converting to Base64 - iOS

I have a PDF, and I am converting the PDF to an image and displaying it in a UIImageView. After that I am converting the image to Base64, but while converting to Base64 I am losing image quality. Is there any way to prevent losing quality when converting to Base64?
Please tell me if there is any solution for that.
Here is my code for the conversion:
let previewImage1 = convertedImageView.getImage()
let btnImg = UIButton()
btnImg.setImage(previewImage1, for: .normal)
let btn1Imggg2 = btnImg.image(for: .normal)
// jpegData(compressionQuality:) is on UIImage, not UIButton
let imageData2 = btn1Imggg2?.jpegData(compressionQuality: 0.0)
let imgString2 = imageData2!.base64EncodedString(options: .init(rawValue: 0))
Even if I set the compression quality to 0.0, it still compresses the image.
I have a UIView; inside that I have a UIScrollView, and inside that a UIImageView. I am converting the whole UIView to an image and then converting that to Base64. That is the scenario; I hope this helps you understand.
Code for converting the UIView to a UIImage:
func getImage(scale: CGFloat? = nil) -> UIImage {
    let newScale = scale ?? UIScreen.main.scale
    self.scale(by: newScale) // scale(by:) is a custom helper elsewhere in the project
    let format = UIGraphicsImageRendererFormat()
    format.scale = newScale
    let renderer = UIGraphicsImageRenderer(size: self.bounds.size, format: format)
    let image = renderer.image { rendererContext in
        self.layer.render(in: rendererContext.cgContext)
    }
    return image
}

By doing this
let imageData2 = btn1Imggg2?.jpegData(compressionQuality: 0.0)
you are compressing the image as much as possible: a compressionQuality of 0.0 means maximum compression and lowest quality. Change this code to
let imageData2 = btn1Imggg2?.jpegData(compressionQuality: 1.0)
and you will be good to go.
Since you are converting a UIView to a UIImage, the function below may be useful to you:
func image(with view: UIView) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    if let context = UIGraphicsGetCurrentContext() {
        view.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
If the JPEG data does not meet your quality requirement, use pngData (which is lossless) like this:
let imageData2 = btn1Imggg2?.pngData()
After converting the image to data, apply the Base64 conversion as you are doing right now. Note that Base64 encoding itself never loses quality; any loss happens in the JPEG step.
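For completeness, here is a minimal end-to-end sketch; myView stands in for whichever view you want to capture (an illustrative name, not from the question):
// Render the view, encode as lossless PNG, then Base64-encode.
// Base64 only re-encodes bytes as text, so neither step degrades quality.
let renderer = UIGraphicsImageRenderer(bounds: myView.bounds)
let snapshot = renderer.image { context in
    myView.layer.render(in: context.cgContext)
}
if let pngData = snapshot.pngData() {
    let imgString = pngData.base64EncodedString()
    // upload imgString
}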

Okay, since you have a nested hierarchy (UIView -> UIScrollView -> UIImageView) and you want to capture that whole hierarchy as an image, it would be better to make an image from the UIView (the topmost in the hierarchy, unless I understood wrong and you want the UIImageView part only):
let parentMostView = ... // the view you want to turn into an image
UIGraphicsBeginImageContextWithOptions(parentMostView.frame.size, false, UIScreen.main.scale)
parentMostView.layer.render(in: UIGraphicsGetCurrentContext()!) // this render step was missing
let image = UIGraphicsGetImageFromCurrentImageContext()!
UIGraphicsEndImageContext()
let imageAsData = image.pngData()
// do your base64 conversion here
EDITED: Upon seeing your image-creation code:
I don't see any issues in your getImage(scale:) method. On what device are you testing? Lower-scale devices will create lower-resolution images, because you depend on the device scale. Try using a high static scale value (e.g. 10) and replace the Data conversion with .jpegData(compressionQuality: 1.0) or .pngData().
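A minimal sketch of that suggestion, reusing the question's getImage(scale:) helper:
// A fixed scale decouples the output resolution from the device's screen scale.
let highResImage = convertedImageView.getImage(scale: 10) // or 3, 4, ...
if let data = highResImage.pngData() { // lossless
    let imgString = data.base64EncodedString()
}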

Related

Compose CIImages such that one is centered and same width above the other

I am trying to achieve this result:
I create the gradient image which is aspect ratio 16:9.
This is my code:
extension UIImage {
    func mergeWithGradient(completion: @escaping (UIImage) -> ()) {
        let width = self.size.width
        let maxWidth = min(width, 1024.0)
        let height = maxWidth * 16.0 / 9.0
        let totalSize = CGSize(width: maxWidth, height: height)
        let colors = self.colors() // custom helper from the project
        guard
            let gradientImage = UIImage(size: totalSize, gradientPoints: [(colors.top, 0), (colors.bottom, 1)].map { GradientPoint(location: $0.1, color: $0.0) }), // custom initializer
            let cgImage = self.rotateToImageOrientationUp().cgImage,
            let cgGradientImage = gradientImage.cgImage
        else {
            return
        }
        let context = CIContext(options: nil)
        let ciImage = CIImage(cgImage: cgImage)
        let ciGradientImage = CIImage(cgImage: cgGradientImage)
        let ciMerged = ciImage.composited(over: ciGradientImage)
        let cgMerged = context.createCGImage(ciMerged, from: ciMerged.extent)!
        let uiMerged = UIImage(cgImage: cgMerged)
        completion(uiMerged)
    }
}
But the attached code actually produces this result:
How can I move the image to the center?
This is really easy with Core Graphics, but I need to do it with Core Image, as later on my project will need more filters and good performance.
If you want to use Core Image to merge the images while centering the overlay, the most straightforward way is to make your overlay image the same size as your gradient image to begin with and just letterbox it with transparent pixels. You can do that with UIGraphicsImageRenderer before you convert to a CIImage.
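A rough sketch of that letterboxing step; letterboxed(_:in:) and canvasSize are illustrative names, not from the question:
// Draw the overlay centered on a transparent canvas the size of the gradient,
// then hand the padded image back to Core Image for compositing as before.
func letterboxed(_ overlay: UIImage, in canvasSize: CGSize) -> CIImage? {
    let format = UIGraphicsImageRendererFormat()
    format.opaque = false // keep the padding transparent
    let renderer = UIGraphicsImageRenderer(size: canvasSize, format: format)
    let padded = renderer.image { _ in
        let origin = CGPoint(x: (canvasSize.width - overlay.size.width) / 2,
                             y: (canvasSize.height - overlay.size.height) / 2)
        overlay.draw(in: CGRect(origin: origin, size: overlay.size))
    }
    return CIImage(image: padded)
}
If the overlay also needs to be scaled to the gradient's width, compute a scaled rect before drawing instead of using the overlay's native size.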

iOS Swift UIImage resizing issue affecting performance; unexpected dimensions upon resizing [duplicate]

This question already has an answer here:
How to set the scale when using UIGraphicsImageRenderer
(1 answer)
Closed 3 years ago.
I’m having a performance issue with images in a tableView (very jerky) and several view controllers (slow loading). In order to speed things up and smooth things out, I’m trying to resize the incoming images to the exact size of their respective imageViews upon initial loading instead of on the fly every time they are loaded from Core Data. The resized images will be stored as separate attributes on their parent Entity. All images are square, i.e., 1:1 aspect ratio.
Here is the resizing code:
extension UIImage {
    func resize(targetSize: CGSize) -> UIImage {
        print("Inside resize function")
        print("targetSize is \(targetSize)")
        return UIGraphicsImageRenderer(size: targetSize).image { _ in
            self.draw(in: CGRect(origin: .zero, size: targetSize))
        }
    }
}
And here is the function from which the resize function is called and to which it returns:
func difAndAssignPics() {
    guard let appDelegate = UIApplication.shared.delegate as? AppDelegate else {
        return
    }
    // This is a copy of the original image, from which the smaller ones will be created
    print("bigToLittlePic size is \(bigToLittlePic?.size as Any)")
    // These are the desired sizes of the three images I'm trying to create to facilitate performance
    let littlePicSize = CGSize(width: 110, height: 110)
    let mediumPicSize = CGSize(width: 160, height: 160)
    let displayPicSize = CGSize(width: 340, height: 340)
    // This is the call to the resize function for each target size
    littlePic = bigToLittlePic?.resize(targetSize: littlePicSize)
    mediumPic = bigToLittlePic?.resize(targetSize: mediumPicSize)
    displayPic = bigToLittlePic?.resize(targetSize: displayPicSize)
    // This code differentiates between front view and rear view of the item and prints out the size of each
    switch switchTag {
    case 1:
        newCoin.obversePic = displayPic!.pngData()! as NSData
        newCoin.obversePicThumb = littlePic!.pngData()! as NSData
        newCoin.obversePicMedium = mediumPic!.pngData()! as NSData
        selectIndicatorLabel.text = "Now select Reverse photo"
        appDelegate.saveContext()
        print("Inside switch case 1, difAndAssignPics")
        if let imageData = newCoin.obversePicThumb {
            let image = UIImage(data: imageData as Data)
            print("obversePicThumb size is \(image?.size as Any)")
        }
        if let imageData = newCoin.obversePicMedium {
            let image = UIImage(data: imageData as Data)
            print("obversePicMedium size is \(image?.size as Any)")
        }
        if let imageData = newCoin.obversePic {
            let image = UIImage(data: imageData as Data)
            print("obversePicBig size is \(image?.size as Any)")
        }
    case 2:
        newCoin.reversePic = displayPic!.pngData()! as NSData
        newCoin.reversePicThumb = littlePic!.pngData()! as NSData
        newCoin.reversePicMedium = mediumPic!.pngData()! as NSData
        selectIndicatorLabel.text = "Now select Obverse photo"
        appDelegate.saveContext()
        print("Inside switch case 2, difAndAssignPics")
        if let imageData = newCoin.reversePicThumb {
            let image = UIImage(data: imageData as Data)
            print("reversePicThumb size is \(image?.size as Any)")
        }
        if let imageData = newCoin.reversePicMedium {
            let image = UIImage(data: imageData as Data)
            print("reversePicMedium size is \(image?.size as Any)")
        }
        if let imageData = newCoin.reversePic {
            let image = UIImage(data: imageData as Data)
            print("reversePicBig size is \(image?.size as Any)")
        }
    default: break
    }
    reactivateDataButtons()
}
Here’s what is being printed in the console at the introduction of each new image:
bigToLittlePic size is Optional((2320.0, 2320.0))
Inside resize function
targetSize is (110.0, 110.0)
Inside resize function
targetSize is (160.0, 160.0)
Inside resize function
targetSize is (340.0, 340.0)
Ok, so far so good. However, when the image gets back to difAndAssignPics, this is the printout:
reversePicThumb size is Optional((330.0, 330.0))
reversePicMedium size is Optional((480.0, 480.0))
reversePicBig size is Optional((1020.0, 1020.0))
I included just the printout for the reverse images for brevity. Obverse gives identical numbers.
As you can see, somehow the size of each resized image has ballooned by a factor of 3. The pictures load, and the quality is high, but the performance is still noticeably suboptimal. I don’t know how to reconcile these numbers.
Anybody have any suggestions?
Many thanks in advance!
Edit
I printed out the size of the images that are being squeezed into the 110 x 110 imageViews in my custom cell. The numbers confirm that something in the resizing function is snafued. Here are the numbers in cellForRow:
obversePicThumb.size == Optional((330.0, 330.0))
reversePicThumb.size == Optional((330.0, 330.0))
Edit #2
This post answered my question. Many thanks to those who took the time to look!
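For reference, the fix from the linked post is to pin the renderer's scale; by default UIGraphicsImageRenderer multiplies the requested size by the device scale (3x on the test device, which is exactly why 110 ballooned to 330). A minimal sketch of the corrected extension:
extension UIImage {
    func resize(targetSize: CGSize) -> UIImage {
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1 // 1 point == 1 pixel, regardless of device scale
        return UIGraphicsImageRenderer(size: targetSize, format: format).image { _ in
            self.draw(in: CGRect(origin: .zero, size: targetSize))
        }
    }
}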
I don't know if this will make any difference, but here's my extension to accomplish this:
extension UIImage {
    public func resizeToBoundingSquare(_ boundingSquareSideLength: CGFloat) -> UIImage {
        let imgScale = self.size.width > self.size.height ? boundingSquareSideLength / self.size.width : boundingSquareSideLength / self.size.height
        let newWidth = self.size.width * imgScale
        let newHeight = self.size.height * imgScale
        let newSize = CGSize(width: newWidth, height: newHeight)
        UIGraphicsBeginImageContext(newSize)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}
My usage isn't with a UITableView, nor does it use CoreData. (I'd recommend doing things asynchronously for something like that.) But if your issue is with resizing, this should work with acceptable performance.
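Hypothetical usage with the thumbnail size from the question:
// UIGraphicsBeginImageContext always uses a scale factor of 1, so this
// really produces a 110x110-pixel bitmap regardless of device scale.
let thumb = bigToLittlePic?.resizeToBoundingSquare(110)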

UITextView lags scrolling with added images

I have a UITextView to which I add images, using UIImagePickerController.
If there is only text in it, scrolling is smooth, but after adding 3 or 4 images it becomes laggy when scrolling through its content.
I compress each image to its lowest quality when adding it, but I must be doing something wrong because it still lags after the third image.
Here's the code:
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String: Any]) {
    guard let _image = info[UIImagePickerControllerOriginalImage] as? UIImage else { return }
    self.dismiss(animated: true, completion: nil)
    // Compress the retrieved image
    let compressedImage = UIImage(data: _image.lowestQualityJPEGNSData!)
    // Create an NSTextAttachment and add your image to it
    let attachment = NSTextAttachment()
    let oldWidth = compressedImage?.size.width
    // Scale the image to fit within the UITextView, with the desired padding
    let scaleFactor = oldWidth! / (bodyEditText.frame.size.width - 10)
    attachment.image = UIImage(cgImage: (compressedImage?.cgImage!)!, scale: scaleFactor, orientation: .up)
    // Put the NSTextAttachment into an attributed string
    let attString = NSAttributedString(attachment: attachment)
    // Add this attributed string at the current position
    bodyEditText.textStorage.insert(attString, at: bodyEditText.selectedRange.location)
}
extension UIImage {
    var uncompressedPNGData: Data? { return UIImagePNGRepresentation(self) }
    var highestQualityJPEGNSData: Data? { return UIImageJPEGRepresentation(self, 1.0) }
    var highQualityJPEGNSData: Data? { return UIImageJPEGRepresentation(self, 0.75) }
    var mediumQualityJPEGNSData: Data? { return UIImageJPEGRepresentation(self, 0.5) }
    var lowQualityJPEGNSData: Data? { return UIImageJPEGRepresentation(self, 0.25) }
    var lowestQualityJPEGNSData: Data? { return UIImageJPEGRepresentation(self, 0.0) }
}
What may be causing this issue, and are there any tips on how I can overcome it?
Thanks for helping.
EDIT
Thanks to Duncan C I've managed to remove the lag entirely and increase image-rendering performance A LOT.
I replaced the body of my imagePickerController with the following:
let size = _image.size.applying(CGAffineTransform(scaleX: 0.3, y: 0.3))
let hasAlpha = false
let scale: CGFloat = 0.0 // Automatically use scale factor of main screen
UIGraphicsBeginImageContextWithOptions(size, !hasAlpha, scale)
_image.draw(in: CGRect(origin: CGPoint.zero, size: size))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
let att = NSTextAttachment()
att.image = scaledImage
// Put the NSTextAttachment into an attributed string
let attString = NSAttributedString(attachment: att)
// Add this attributed string at the current position
bodyEditText.textStorage.insert(attString, at: bodyEditText.selectedRange.location)
You're misusing the scale factor. That's intended for handling the 1x, 2x, and 3x scaling of normal, Retina, and @3x Retina images.
The way you're doing it, the system has to render the full-sized images into your bounds rectangles each time it wants to draw one. If the images are considerably bigger than the size you're displaying them at, you waste a lot of memory and processing time.
First try simply setting your image view's contentMode to .scaleAspectFit and install the original image in image view without mucking around with the scale factor. See how that performs.
If that doesn't give acceptable performance you should render your starting images into an off-screen graphics context that's the target size of your image, and install the re-rendered image into your image views. You can use UIGraphicsBeginImageContextWithOptions() or various other techniques to resize your images for display.
Take a look at this link for information on image scaling and resizing:
http://nshipster.com/image-resizing/
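A minimal sketch of that off-screen re-rendering using the modern UIGraphicsImageRenderer API (iOS 10+); downscaled(_:to:) is an illustrative name:
// Redraw the image into a context that is exactly the display size,
// so scrolling no longer re-renders the full-resolution original.
func downscaled(_ image: UIImage, to size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: size))
    }
}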
It's not best practice to do that. UITextView is meant for text, not for hosting many views.
You should make a custom UIView consisting of: 1. a text view, 2. a UICollectionView for the numerous images.
That way you can also reuse this custom view in numerous places throughout the project.

swift ios reduce image size before upload [duplicate]

This question already has answers here:
How do I resize the UIImage to reduce upload image size
(21 answers)
Closed 5 years ago.
I am trying to reduce the file size of my images as much as possible before I upload them. Right now I am doing this:
if let firstImageData = UIImageJPEGRepresentation(pickedImage, 0.1) {
    self.imgArray.append(firstImageData)
}
This will take any image coming from the camera or photo album, convert it to JPEG and reduce its size.
I have set the quality to 0.1, but when I upload my images the size still ends up around 300-350 kB. Is there any way I could shrink them even more? I am aiming for 50-70 kB.
You can first resize your image to a smaller size using these extensions, resizing by percentage or by width:
extension UIImage {
    func resizeWithPercent(percentage: CGFloat) -> UIImage? {
        let imageView = UIImageView(frame: CGRect(origin: .zero, size: CGSize(width: size.width * percentage, height: size.height * percentage)))
        imageView.contentMode = .ScaleAspectFit
        imageView.image = self
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.renderInContext(context)
        guard let result = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return result
    }

    func resizeWithWidth(width: CGFloat) -> UIImage? {
        let imageView = UIImageView(frame: CGRect(origin: .zero, size: CGSize(width: width, height: CGFloat(ceil(width / size.width * size.height)))))
        imageView.contentMode = .ScaleAspectFit
        imageView.image = self
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.renderInContext(context)
        guard let result = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return result
    }
}
To use it, just call it like this:
myImage = myImage.resizeWithWidth(700)!
Then you can still compress it with a compression ratio of your choice:
let compressData = UIImageJPEGRepresentation(myImage, 0.5) // max value is 1.0 and minimum is 0.0
let compressedImage = UIImage(data: compressData!)
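If you need to hit a specific byte budget (such as the 50-70 kB target above), a common approach is to step the JPEG quality down until the data fits. A minimal sketch, using the modern jpegData(compressionQuality:) spelling of UIImageJPEGRepresentation; jpegDataFitting(_:maxBytes:) is an illustrative name:
// Lower the JPEG quality in steps until the encoded data fits maxBytes,
// falling back to the most compressed attempt if nothing fits.
func jpegDataFitting(_ image: UIImage, maxBytes: Int) -> Data? {
    var quality: CGFloat = 0.7
    while quality >= 0.1 {
        if let data = image.jpegData(compressionQuality: quality), data.count <= maxBytes {
            return data
        }
        quality -= 0.1
    }
    return image.jpegData(compressionQuality: 0.1)
}
// e.g. resize first (as above), then aim for 70 kB:
let uploadData = jpegDataFitting(resizedImage, maxBytes: 70 * 1024)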
You can only change the size of the image (smaller size = less data) and compress it using an image compression algorithm like JPEG; there is no other way (a better algorithm = smaller size at the same quality).
I heard Google recently improved JPEG compression using neural networks (Google's TensorFlow).

Photo resizing and compressing

I am resizing and compressing my photos and getting an unusual result.
When I choose an image from the photo album, it compresses and resizes fine. However, if I do it on an image that came from the camera, the image becomes oddly small (and unwatchable). As a test, I assigned some compression and resizing calls to a button that takes an image from either the camera or the photo album. Below are my code and console output.
@IBAction func testBtnPressed(sender: AnyObject) {
    let img = selectedImageView.image!
    print("before resize image \(img.dataLengh_kb)kb size \(img.size)")
    let resizedImg = img.resizeWithWidth(1080)
    print("1080 After resize image \(resizedImg!.dataLengh_kb)kb size \(resizedImg!.size)")
    let compressedImageData = resizedImg!.mediumQualityJPEGNSData
    print("Compress to medium quality = \(compressedImageData.length / 1024)kb")
}
extension UIImage {
    var mediumQualityJPEGNSData: NSData { return UIImageJPEGRepresentation(self, 0.5)! }

    func resizeWithWidth(width: CGFloat) -> UIImage? {
        let imageView = UIImageView(frame: CGRect(origin: .zero, size: CGSize(width: width, height: CGFloat(ceil(width / size.width * size.height)))))
        imageView.contentMode = .ScaleAspectFit
        imageView.image = self
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.renderInContext(context)
        guard let result = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return result
    }
}
When the photo was selected from the photo album:
before resize image 5004kb size (3024.0, 3024.0)
1080 After resize image 1023kb size (1080.0, 1080.0)
Compress to medium quality = 119kb
When the photo was passed from the camera:
before resize image 4653kb size (24385.536, 24385.536)
1080 After resize image 25kb size (1080.576, 1080.576)
Compress to medium quality = 4kb
I replaced the image-resizing function with the following one and it worked a lot better (presumably because UIGraphicsBeginImageContext always renders at a scale factor of 1, whereas resizeWithWidth passes the source image's own scale to UIGraphicsBeginImageContextWithOptions):
func resizeImage(newHeight: CGFloat) -> UIImage {
    let scale = newHeight / self.size.height
    let newWidth = self.size.width * scale
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    self.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
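Hypothetical usage matching the 1080-width test above (Swift 2-era call syntax, like the rest of this answer):
let resized = img.resizeImage(1080) // scales the image so its height becomes 1080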
