How to reduce GIF resolution with Swift 3 - iOS

I have Data that contains an animated GIF, and I want to know how to reduce its size before uploading it to the server. Ideally I'm looking for a pod that would do this, but I've looked everywhere on the internet and couldn't find anything. For a JPEG I usually create a UIImage from the data and use the function below:
func scaleImage(image: UIImage, newWidth: CGFloat) -> UIImage {
    let scale = newWidth / image.size.width
    let newHeight = image.size.height * scale
    UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
    image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
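I don't know of a pod for this, but the built-in ImageIO framework can do it: decode each frame of the GIF, draw it at a reduced size, and re-encode the frames into a new GIF. Below is a rough sketch, not a drop-in solution - downscaleGIF is a name I made up, and the per-frame properties (delay times, loop count) are copied naively, so a production version should handle them more carefully:

```swift
import UIKit
import ImageIO
import MobileCoreServices

/// Sketch: decode every frame of gifData, draw each at a reduced size,
/// and re-encode the frames into a new GIF.
func downscaleGIF(_ gifData: Data, maxWidth: CGFloat) -> Data? {
    guard let source = CGImageSourceCreateWithData(gifData as CFData, nil) else { return nil }
    let frameCount = CGImageSourceGetCount(source)
    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output, kUTTypeGIF, frameCount, nil) else { return nil }

    for index in 0..<frameCount {
        guard let frame = CGImageSourceCreateImageAtIndex(source, index, nil) else { continue }
        // Never upscale; only shrink frames wider than maxWidth.
        let scale = min(1, maxWidth / CGFloat(frame.width))
        let newSize = CGSize(width: CGFloat(frame.width) * scale,
                             height: CGFloat(frame.height) * scale)
        UIGraphicsBeginImageContext(newSize)
        UIImage(cgImage: frame).draw(in: CGRect(origin: .zero, size: newSize))
        let resized = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        // Copy the original frame's properties (delay time etc.) across naively.
        let properties = CGImageSourceCopyPropertiesAtIndex(source, index, nil)
        if let cgImage = resized?.cgImage {
            CGImageDestinationAddImage(destination, cgImage, properties)
        }
    }
    return CGImageDestinationFinalize(destination) ? output as Data : nil
}
```

The resulting Data can then be uploaded in place of the original.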

Related

How to apply scale when drawing and composing UIImage

I have the following functions.
extension UIImage
{
    var width: CGFloat
    {
        return size.width
    }

    var height: CGFloat
    {
        return size.height
    }

    private static func circularImage(diameter: CGFloat, color: UIColor) -> UIImage
    {
        UIGraphicsBeginImageContextWithOptions(CGSize(width: diameter, height: diameter), false, 0)
        let context = UIGraphicsGetCurrentContext()!
        context.saveGState()
        let rect = CGRect(x: 0, y: 0, width: diameter, height: diameter)
        context.setFillColor(color.cgColor)
        context.fillEllipse(in: rect)
        context.restoreGState()
        let image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return image
    }

    private func addCentered(image: UIImage, tintColor: UIColor) -> UIImage
    {
        let topImage = image.withTintColor(tintColor, renderingMode: .alwaysTemplate)
        let bottomImage = self
        UIGraphicsBeginImageContext(size)
        let bottomRect = CGRect(x: 0, y: 0, width: bottomImage.width, height: bottomImage.height)
        bottomImage.draw(in: bottomRect)
        let topRect = CGRect(x: (bottomImage.width - topImage.width) / 2.0,
                             y: (bottomImage.height - topImage.height) / 2.0,
                             width: topImage.width,
                             height: topImage.height)
        topImage.draw(in: topRect, blendMode: .normal, alpha: 1.0)
        let mergedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return mergedImage
    }
}
They work fine, but how do I properly apply UIScreen.main.scale to support retina screens?
I've looked at what's been done here but can't figure it out yet.
Any ideas?
Accessing UIScreen.main.scale itself is a bit problematic, as you should only access it from the main thread (while you usually want to put heavier image processing on a background thread). So I suggest one of these ways instead.
First of all, you can replace UIGraphicsBeginImageContext(size) with
UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
The last argument (0.0) is the scale, and per the docs, "if you specify a value of 0.0, the scale factor is set to the scale factor of the device's main screen."
If instead you want to retain the original image's scale on the resulting UIImage, you can do this: after topImage.draw, instead of getting the UIImage with UIGraphicsGetImageFromCurrentImageContext, get a CGImage with
let cgImage = context.makeImage()
and then construct UIImage with the scale and orientation of the original image (as opposed to defaults)
let mergedImage = UIImage(
    cgImage: cgImage,
    scale: image.scale,
    orientation: image.imageOrientation)
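As a third option (a sketch, assuming iOS 10+ and the same UIImage extension context as above): UIGraphicsImageRenderer avoids the begin/end context dance entirely, and its format object carries the scale explicitly, so you never have to touch UIScreen.main yourself:

```swift
private func addCentered(image: UIImage, tintColor: UIColor) -> UIImage {
    let topImage = image.withTintColor(tintColor, renderingMode: .alwaysTemplate)
    // Carry the receiver's scale into the renderer instead of reading
    // UIScreen.main.scale on a background thread.
    let format = UIGraphicsImageRendererFormat()
    format.scale = scale
    let renderer = UIGraphicsImageRenderer(size: size, format: format)
    return renderer.image { _ in
        // Draw the receiver (bottom image), then the tinted image centered on top.
        draw(in: CGRect(origin: .zero, size: size))
        let topRect = CGRect(x: (size.width - topImage.size.width) / 2.0,
                             y: (size.height - topImage.size.height) / 2.0,
                             width: topImage.size.width,
                             height: topImage.size.height)
        topImage.draw(in: topRect, blendMode: .normal, alpha: 1.0)
    }
}
```

The renderer also returns a non-optional UIImage, so the force-unwraps go away.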

Resize UIImage before printing to PDF with PDFKit – Swift/Xcode

I'm perplexed. My app allows the user to take photos with the camera or pick them from the photo album, then at the end of the app those photos are printed into a PDF (using PDFKit). I've tried NUMEROUS ways to resize the photos so that the PDF is not so large (8 MB for ~13 photos). I can't get it working, however!
Here's one solution I tried:
func resizeImage(image: UIImage) -> UIImage {
    if image.size.height >= 1024 && image.size.width >= 1024 {
        UIGraphicsBeginImageContext(CGSize(width: 1024, height: 1024))
        image.draw(in: CGRect(x: 0, y: 0, width: 1024, height: 1024))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
    else if image.size.height >= 1024 && image.size.width < 1024 {
        UIGraphicsBeginImageContext(CGSize(width: image.size.width, height: 1024))
        image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: 1024))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
    else if image.size.width >= 1024 && image.size.height < 1024 {
        UIGraphicsBeginImageContext(CGSize(width: 1024, height: image.size.height))
        image.draw(in: CGRect(x: 0, y: 0, width: 1024, height: image.size.height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
    else {
        return image
    }
}
And I call it like this:
let theResizedImage = resizeImage(image: image)
This works... sorta. It resizes the image to about a quarter of what it was before (still not great... but better). Here's the issue though: when I draw "theResizedImage" to the PDF...
theResizedImage.draw(in: photoRect)
...the PDF file ends up being TWICE as large as before!! How?!?!?! I don't understand how in the world an image that is resized is all-of-a-sudden two-times larger than it was in its original form as soon as it is drawn onto the PDF.
Someone please help me out with this. If you've got a better way to do it that allows the image to be resized even further, then fantastic! I'm not sure if you're familiar with a piece of software called pdfFactoryPro (on PC), but it will shrink a HUGE (40-something MB) file into one that's around 3 MB. I need these images resized big time, and I need them to keep that small size when printed to the PDF, so that the PDF is small. The other option would be if you know some way to resize the PDF itself, like pdfFactoryPro does.
PLEASE explain your answers thoroughly and with code, or a link to a tutorial. I'm a beginner... if you couldn't tell.
NOTE: Kindly make sure your code response is set up in a way that passes an image to the function... not a URL. As I stated earlier, I'm resizing photos that the user took with the camera, accessed from within the app (imagePickerController).
Thanks!
EDIT: I found out through a lot of Googling that even though you compress the image using UIImageJPEGRepresentation, when you print the image to the PDF, the raw image is embedded instead of the compressed one.
Someone named "Michael McNabb" posted this solution, but it is quite outdated, being from 2013:
NSData *jpegData = UIImageJPEGRepresentation(sourceImage, 0.75);
CGDataProviderRef dp = CGDataProviderCreateWithCFData((__bridge CFDataRef)jpegData);
CGImageRef cgImage = CGImageCreateWithJPEGDataProvider(dp, NULL, true, kCGRenderingIntentDefault);
[[UIImage imageWithCGImage:cgImage] drawInRect:drawRect];
Can someone please translate this to Swift 5 for me? It may be the solution to my question.
func draw(image: UIImage, in rect: CGRect) {
    guard let jpegData = image.jpegData(compressionQuality: 0.75),
          let dataProvider = CGDataProvider(data: jpegData as CFData),
          let cgImage = CGImage(jpegDataProviderSource: dataProvider, decode: nil, shouldInterpolate: true, intent: .defaultIntent) else {
        return
    }
    UIImage(cgImage: cgImage).draw(in: rect)
}
Turns out it's not just one task to resize images for printing to a PDF. It's two tasks.
The first task is resizing the photo. For example, if you don't want your picture to be larger than 1024x1024 pixels, you resize your photo to fit within 1024x1024. If you'd prefer the pixel dimensions to be even lower, you can change the limit based on your needs.
Resizing example:
func resizeImage(image: UIImage) -> UIImage {
    let maxSize = CGFloat(768)
    if image.size.height >= maxSize && image.size.width >= maxSize {
        UIGraphicsBeginImageContext(CGSize(width: maxSize, height: maxSize))
        image.draw(in: CGRect(x: 0, y: 0, width: maxSize, height: maxSize))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
    else if image.size.height >= maxSize && image.size.width < maxSize {
        UIGraphicsBeginImageContext(CGSize(width: image.size.width, height: maxSize))
        image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: maxSize))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
    else if image.size.width >= maxSize && image.size.height < maxSize {
        UIGraphicsBeginImageContext(CGSize(width: maxSize, height: image.size.height))
        image.draw(in: CGRect(x: 0, y: 0, width: maxSize, height: image.size.height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
    else {
        return image
    }
}
Disclaimer: I learned this code from another person on StackOverflow. I don't remember his name, but credits to him.
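Note that the branch taken when both dimensions exceed maxSize stretches the image to an exact square, which distorts non-square photos. If you want to cap the size while preserving aspect ratio, the target size can be computed first. A sketch - aspectFitSize is a name I'm introducing, not part of the code above:

```swift
import Foundation // CGSize/CGFloat

/// Returns a size no larger than maxSize x maxSize with the original aspect ratio.
func aspectFitSize(for size: CGSize, within maxSize: CGFloat) -> CGSize {
    // Images already inside the limit are returned unchanged.
    guard size.width > maxSize || size.height > maxSize else { return size }
    let scale = min(maxSize / size.width, maxSize / size.height)
    return CGSize(width: size.width * scale, height: size.height * scale)
}
```

You would then pass the result to UIGraphicsBeginImageContext (and to image.draw) instead of hard-coding a square, which collapses the three branches above into one.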
Second, you then compress the resized image. I don't fully understand compression, but I set mine to something really low haha – do as you will.
Compression example:
func draw(image: UIImage, in rect: CGRect) {
    guard let oldImageData = image.jpegData(compressionQuality: 1) else {
        return
    }
    print("The size of the oldImage, before compression, is: \(oldImageData.count)")
    guard let jpegData = image.jpegData(compressionQuality: 0.05),
          let dataProvider = CGDataProvider(data: jpegData as CFData),
          let cgImage = CGImage(jpegDataProviderSource: dataProvider, decode: nil, shouldInterpolate: true, intent: .defaultIntent) else {
        return
    }
    let newImage = UIImage(cgImage: cgImage)
    print("The size of the newImage printed on the pdf is: \(jpegData.count)")
    newImage.draw(in: rect)
}
Disclaimer: I learned this code from someone named Michael McNabb. Credit goes to him.
I went ahead and drew the image to the PDF at the end of that function. These two things combined took my PDF from being about 24MB or so to about 570 KB.

why is it that when I decrease the number of pixels in a picture the size of the file gets larger?

I am trying to resize an image using Swift. I have been trying to set the image's width to 1080 pixels. The original photo I'm using is 1200 pixels wide and is a 760 KB file. After resizing it to 1080 pixels wide with the following function, the result is an 833 KB file.
func resizeImage(image: UIImage, newWidth: CGFloat) -> UIImage {
    let scale = newWidth / image.size.width
    let newHeight = image.size.height * scale
    UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
    image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
Below is the code in its entirety
import UIKit

class ViewController: UIViewController {
    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var compressionLabel: UILabel! // referenced below; outlet added for completeness
    var image: UIImage!

    override func viewDidLoad() {
        super.viewDidLoad()
        image = imageView.image
        let bcf = ByteCountFormatter()
        bcf.allowedUnits = [.useKB] // optional: restricts the units to KB only
        bcf.countStyle = .file
        var string = bcf.string(fromByteCount: Int64(image.jpegData(compressionQuality: 1.0)!.count))
        print("formatted result: \(string)")
        compressionLabel.text = string
        print("Image Pixels: \(CGSize(width: image.size.width * image.scale, height: image.size.height * image.scale))")
        image = resizeImage(image: image, newWidth: 1080)
        string = bcf.string(fromByteCount: Int64(image.jpegData(compressionQuality: 1.0)!.count))
        print("formatted result: \(string)")
        compressionLabel.text = string
        print("Image Pixels: \(CGSize(width: image.size.width * image.scale, height: image.size.height * image.scale))")
    }

    func resizeImage(image: UIImage, newWidth: CGFloat) -> UIImage {
        let scale = newWidth / image.size.width
        let newHeight = image.size.height * scale
        UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
        image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
Impossible to say for sure, but you are specifying compressionQuality: 1.0 on your converted image, so possibly the original was saved with a different quality-to-size trade-off? (i.e., a larger image saved at low quality can easily result in a smaller file size than a smaller image saved at high quality, due to the nature of the compression algorithm.)
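One way to check this is to re-encode both versions at the same compressionQuality, so the comparison isn't skewed by the original file's unknown export quality. A sketch against the question's own image and resizeImage (the 0.8 quality value is arbitrary):

```swift
// Re-encode both images at one fixed quality before comparing byte counts.
// jpegData(compressionQuality: 1.0) means "almost no compression", which is
// why a smaller image can still produce a bigger file than the camera's
// more aggressively compressed original.
let quality: CGFloat = 0.8
let originalBytes = image.jpegData(compressionQuality: quality)?.count ?? 0
let resizedBytes = resizeImage(image: image, newWidth: 1080)
    .jpegData(compressionQuality: quality)?.count ?? 0
print("original: \(originalBytes) bytes, resized: \(resizedBytes) bytes")
```

At matched quality, the smaller image should reliably produce the smaller file.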

Resizing an image but preserving hard edges

I am using a piece of code from this link - Resize UIImage by keeping Aspect ratio and width, and it works perfectly, but I am wondering if it can be altered to preserve hard edges of pixels. I want to double the size of the image and keep the hard edge of the pixels.
class func resizeImage(image: UIImage, newHeight: CGFloat) -> UIImage {
    let scale = newHeight / image.size.height
    let newWidth = image.size.width * scale
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    image.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
What I want it to do
What it does
In Photoshop there is the nearest neighbour interpolation when resizing, is there something like that in iOS?
Inspired by the accepted answer, updated to Swift 5.
Swift 5
let image = UIImage(named: "Foo")!
let scale: CGFloat = 2.0
let newSize = image.size.applying(CGAffineTransform(scaleX: scale, y: scale))
UIGraphicsBeginImageContextWithOptions(newSize, false, UIScreen.main.scale)
let context = UIGraphicsGetCurrentContext()!
context.interpolationQuality = .none
let newRect = CGRect(origin: .zero, size: newSize)
image.draw(in: newRect)
let newImage = UIImage(cgImage: context.makeImage()!)
UIGraphicsEndImageContext()
Did a bit more digging and found the answer -
https://stackoverflow.com/a/25430447/4196903
but where
CGContextSetInterpolationQuality(context, kCGInterpolationHigh)
instead write
CGContextSetInterpolationQuality(context, CGInterpolationQuality.None)
You need to use the CISampler class (which is only available in iOS 9+) and create your own custom image-processing filter for it, I think.
You can find more information here and here too

Resize and Crop 2 Images affected the original image quality

Suppose I have a UIImage object on the UIViewController, and I want to set the image from the controller. Basically what I want to do is merge two images together: the first image is the 5 stars in blue,
and the second image is the 5 stars in grey.
It's intended as a rating image. Since the maximum rating is 5, I multiply it by 20 to get a 100-point scale, which makes the calculation easier. Please see the code below for the detailed logic.
So I have this (BM_RatingHelper.swift) :
static func getRatingImageBasedOnRating(rating: CGFloat, width: CGFloat, height: CGFloat) -> UIImage {
    // available maximum rating is 5.0, so we have to multiply it by 20 to achieve 100.0 points
    let ratingImageWidth = ( width / 100.0 ) * ( rating * 20.0 )

    // get active rating image
    let activeRatingImage = BM_ImageHelper.resize(UIImage(named: "StarRatingFullActive")!, targetSize: CGSize(width: width, height: height))
    let activeRatingImageView = UIImageView(frame: CGRectMake(0, 0, ratingImageWidth, height))
    activeRatingImageView.image = BM_ImageHelper.crop(activeRatingImage, x: 0, y: 0, width: ratingImageWidth, height: height)

    // get inactive rating image
    let inactiveRatingImage = BM_ImageHelper.resize(UIImage(named: "StarRatingFullInactive")!, targetSize: CGSize(width: width, height: height))
    let inactiveRatingImageView = UIImageView(frame: CGRectMake(ratingImageWidth, 0, ( 100.0 - ratingImageWidth ), height))
    inactiveRatingImageView.image = BM_ImageHelper.crop(inactiveRatingImage, x: ratingImageWidth, y: 0, width: ( 100.0 - ratingImageWidth ), height: height)

    // combine the images
    let ratingView = UIView.init(frame: CGRect(x: 0, y: 0, width: width, height: height))
    ratingView.backgroundColor = BM_Color.colorForType(BM_ColorType.ColorWhiteTransparent)
    ratingView.addSubview(activeRatingImageView)
    ratingView.addSubview(inactiveRatingImageView)
    return ratingView.capture()
}
The BM_ImageHelper.swift :
import UIKit

class BM_ImageHelper: NSObject {

    // http://stackoverflow.com/questions/158914/cropping-an-uiimage
    static func crop(image: UIImage, x: CGFloat, y: CGFloat, width: CGFloat, height: CGFloat) -> UIImage {
        let rect = CGRect(x: x, y: y, width: width, height: height)
        let imageRef = CGImageCreateWithImageInRect(image.CGImage, rect)!
        let croppedImage = UIImage(CGImage: imageRef)
        return croppedImage
    }

    // http://iosdevcenters.blogspot.com/2015/12/how-to-resize-image-in-swift-in-ios.html
    static func resize(image: UIImage, targetSize: CGSize) -> UIImage {
        let size = image.size
        let widthRatio = targetSize.width / image.size.width
        let heightRatio = targetSize.height / image.size.height

        // Figure out what our orientation is, and use that to form the rectangle
        var newSize: CGSize
        if widthRatio > heightRatio {
            newSize = CGSizeMake(size.width * heightRatio, size.height * heightRatio)
        } else {
            newSize = CGSizeMake(size.width * widthRatio, size.height * widthRatio)
        }

        // This is the rect that we've calculated out and this is what is actually used below
        let rect = CGRectMake(0, 0, newSize.width, newSize.height)

        // Actually do the resizing to the rect using the ImageContext stuff
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        image.drawInRect(rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}

extension UIView {

    // http://stackoverflow.com/a/34895760/897733
    func capture() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.frame.size, self.opaque, UIScreen.mainScreen().scale)
        self.layer.renderInContext(UIGraphicsGetCurrentContext()!)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}
I call that function like this (supposing the image that needs to be filled is ratingImage):
self.ratingImage.image =
BM_RatingHelper.getRatingImageBasedOnRating(3.7, width: 100.0, height: 20.0)
The code works perfectly, but the merged image comes out very low quality, even though I used high-quality source images. This is the image for a 3.7 rating:
What should I do to merge the images without losing the original quality? Thanks.
In your BM_ImageHelper.resize method, it's passing a scale of 1.0. It should be the device screen's scale.
Change it to
UIGraphicsBeginImageContextWithOptions(newSize, false, UIScreen.mainScreen().scale)
UPDATE
Also change your crop method to address the scale, like
static func crop(image: UIImage, x: CGFloat, y: CGFloat, width: CGFloat, height: CGFloat) -> UIImage {
    let transform = CGAffineTransformMakeScale(image.scale, image.scale)
    let rect = CGRect(x: x, y: y, width: width, height: height)
    let transformedCropRect = CGRectApplyAffineTransform(rect, transform)
    let imageRef = CGImageCreateWithImageInRect(image.CGImage, transformedCropRect)!
    let croppedImage = UIImage(CGImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
    return croppedImage
}