How to decrease PNG size saved with Swift on iPhone?

In my app the user can pick a photo from the gallery, edit it, and then save it to the documents directory.
Image requirements:
Dimensions: 512x512
Size: less than 100 KB
The images are basically WhatsApp stickers.
My files come out at 250 KB+.
I have already tried a lot, and even tried to save the files as PNG-8 like this:
func normalize() -> UIImage {
    let size = CGSize(width: 512, height: 512)
    let genericColorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: nil, width: 512, height: 512, bitsPerComponent: 8, bytesPerRow: 4 * 512, space: genericColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    context?.interpolationQuality = .default
    let destRect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    context?.draw(self.cgImage!, in: destRect)
    let tmpThumbImage = context?.makeImage()
    let result = UIImage(cgImage: tmpThumbImage!)
    return result
}
This method makes the images look bad, and the size even increased to 300 KB.
If anyone knows how to deal with this, please help me.
Please note that I don't need JPEG; I need PNG with a transparent background.
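For reference, a minimal sketch of rendering at exactly 512x512 pixels (scale 1) and writing pngData(); editedImage and stickerURL are placeholder names, not code from the project:
import UIKit

// Sketch only: editedImage and stickerURL are placeholder names.
func savePNG(_ editedImage: UIImage, to stickerURL: URL) throws {
    // Force a 1x context so the output bitmap is exactly 512x512 pixels,
    // not 1024x1024 or 1536x1536 on Retina screens.
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    format.opaque = false   // keep the transparent background

    let renderer = UIGraphicsImageRenderer(size: CGSize(width: 512, height: 512), format: format)
    let rendered = renderer.image { _ in
        editedImage.draw(in: CGRect(x: 0, y: 0, width: 512, height: 512))
    }

    // PNG is lossless, so from here the byte count is driven by the pixel content itself.
    guard let data = rendered.pngData() else { return }
    try data.write(to: stickerURL)
}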
Also, this may be helpful: in my app I have a few edit tools, like a lasso and an eraser. All of them are built with UIGraphicsImageContext like this:
func erase(fromPoint: CGPoint, toPoint: CGPoint) {
    UIGraphicsBeginImageContextWithOptions(lassoImageView.bounds.size, false, 1)
    let context = UIGraphicsGetCurrentContext()
    lassoImageView.layer.render(in: context!)
    context?.move(to: fromPoint)
    context?.addLine(to: toPoint)
    context?.setLineCap(.round)
    context?.setLineWidth(CGFloat(eraserBrushWidth))
    context?.setBlendMode(.clear)
    context?.strokePath()
    lassoImageView.image = UIGraphicsGetImageFromCurrentImageContext()
    croppedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
Maybe something is wrong with my UIGraphicsBeginImageContextWithOptions implementation?
Thanks for any help.
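For completeness, a sketch of the same erase pass written with UIGraphicsImageRenderer and an explicit 1x, non-opaque format (lassoImageView, eraserBrushWidth and croppedImage are assumed to exist as above); this is just the modern equivalent of the context setup, not a confirmed fix:
// Sketch of the same erase pass with UIGraphicsImageRenderer (iOS 10+).
func erase(fromPoint: CGPoint, toPoint: CGPoint) {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1          // keep the bitmap at the view's point size
    format.opaque = false     // preserve transparency for the PNG

    let renderer = UIGraphicsImageRenderer(bounds: lassoImageView.bounds, format: format)
    let result = renderer.image { ctx in
        let context = ctx.cgContext
        lassoImageView.layer.render(in: context)
        context.move(to: fromPoint)
        context.addLine(to: toPoint)
        context.setLineCap(.round)
        context.setLineWidth(CGFloat(eraserBrushWidth))
        context.setBlendMode(.clear)
        context.strokePath()
    }
    lassoImageView.image = result
    croppedImage = result
}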

Related

Mask two UIImages by using cgImage.mask not working, but works for imageview.layer.mask

I have two images that both have an alpha channel. I want to combine them together. It works for a UIImageView, but I want to do it using CGImage, without creating a UIImageView.
I tried it with CGImage like this, but it is not working:
func combine3(_ bg: UIImage, cover: UIImage) -> UIImage? {
    let size = CGSize(width: self.view.frame.width, height: self.view.frame.height)
    UIGraphicsBeginImageContext(size)
    let maskRef = cover.cgImage!
    let mask = CGImage(maskWidth: maskRef.width, height: maskRef.height, bitsPerComponent: maskRef.bitsPerComponent, bitsPerPixel: maskRef.bitsPerPixel, bytesPerRow: maskRef.bytesPerRow, provider: maskRef.dataProvider!, decode: nil, shouldInterpolate: false)
    let masked = bg.cgImage?.masking(mask!)
    let outPutImage = UIImage(cgImage: masked!)
    UIGraphicsEndImageContext()
    return outPutImage
}
But for a UIImageView, it works pretty well:
let bg = creatBGImageFinal()!
bgIV.image = bg
let cover = createCoverImageFinal(progress: 0)!
let layer = CALayer()
layer.contents = cover.cgImage
layer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
maskIV.layer.mask = layer
bg:
cover, which is a picture with a white circle in the middle; the rest is transparent:
result:
First, reverse the mask:
The rule for cgImage masking is Dst = 1 - Src.
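One possible way to get that inverted mask (a sketch only, using Core Image's CIColorInvert filter; invertedCover is a hypothetical helper, not part of the original answer, and how the cover's transparency should be treated may need adjusting):
import CoreImage
import UIKit

// Sketch: invert the cover so that Dst = 1 - Src produces the intended mask.
func invertedCover(from cover: UIImage) -> UIImage? {
    guard let cg = cover.cgImage,
          let filter = CIFilter(name: "CIColorInvert") else { return nil }
    filter.setValue(CIImage(cgImage: cg), forKey: kCIInputImageKey)
    guard let output = filter.outputImage,
          let inverted = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: inverted)
}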
Then draw the masked image onto an image context with opacity:
func combine3(_ bg: UIImage, cover: UIImage, size: CGSize) -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(size, true, 1)
    defer {
        UIGraphicsEndImageContext()
    }
    let context = UIGraphicsGetCurrentContext()!
    let maskRef = cover.cgImage!
    let mask = CGImage(maskWidth: maskRef.width, height: maskRef.height, bitsPerComponent: maskRef.bitsPerComponent, bitsPerPixel: maskRef.bitsPerPixel, bytesPerRow: maskRef.bytesPerRow, provider: maskRef.dataProvider!, decode: nil, shouldInterpolate: false)!
    let masked = bg.cgImage!.masking(mask)!
    // adjust for lower-left-origin CG coordinates
    context.translateBy(x: 0, y: size.height)
    context.scaleBy(x: 1, y: -1)
    context.draw(masked, in: CGRect(origin: .zero, size: size))
    return UIGraphicsGetImageFromCurrentImageContext()
}
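A hypothetical call site tying the two steps together (bg and cover are the images from the question):
// Hypothetical usage: invert the cover first, then composite.
if let inverted = invertedCover(from: cover),
   let combined = combine3(bg, cover: inverted, size: bg.size) {
    // use `combined` here
}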
Result:

Resize UIImage before printing to PDF with PDFKit – Swift/Xcode

I'm perplexed. My app allows the user to take photos with the camera or pick them from the photo album; at the end of the app those photos are printed into a PDF (using PDFKit). I've tried numerous ways to resize the photos so that the PDF is not so large (8 MB for ~13 photos), but I can't get it working!
Here's one solution I tried:
func resizeImage(image: UIImage) -> UIImage {
    if image.size.height >= 1024 && image.size.width >= 1024 {
        UIGraphicsBeginImageContext(CGSize(width: 1024, height: 1024))
        image.draw(in: CGRect(x: 0, y: 0, width: 1024, height: 1024))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    } else if image.size.height >= 1024 && image.size.width < 1024 {
        UIGraphicsBeginImageContext(CGSize(width: image.size.width, height: 1024))
        image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: 1024))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    } else if image.size.width >= 1024 && image.size.height < 1024 {
        UIGraphicsBeginImageContext(CGSize(width: 1024, height: image.size.height))
        image.draw(in: CGRect(x: 0, y: 0, width: 1024, height: image.size.height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    } else {
        return image
    }
}
And I call it like this:
let theResizedImage = resizeImage(image: image)
This works... sorta. It resizes the image to about a quarter of what it was before (still not great... but better). Here's the issue, though: when I draw "theResizedImage" to the PDF...
theResizedImage.draw(in: photoRect)
...the PDF file ends up being TWICE as large as before! How?! I don't understand how an image that has been resized can suddenly be two times larger than its original form as soon as it is drawn onto the PDF.
Someone please help me out with this. If you've got a better way to do it that allows the image to be resized even further, then fantastic! I'm not sure if you're familiar with a piece of software called pdfFactoryPro (on PC), but it will shrink a HUGE (40-something MB) file into one that's around 3 MB. I need these images resized big time, and I need them to keep that small size when printed to the PDF, so that the PDF is small. The other option would be if you know some way to resize the PDF itself, like pdfFactoryPro does.
PLEASE explain your answers thoroughly and with code, or a link to a tutorial. I'm a beginner... if you couldn't tell.
NOTE: Kindly make sure your code response is set up in a way that passes an image to the function... not a URL. As I stated earlier, I'm resizing photos that the user took with the camera, accessed from within the app (imagePickerController).
Thanks!
EDIT: I found out through a lot of Googling that even though you resize the image using UIImageJPEGRepresentation, when you print the image to the PDF, the RAW image is printed instead of the compressed image.
Someone named "Michael McNabb" posted this solution, but it is really out-dated, due to being in 2013:
NSData *jpegData = UIImageJPEGRepresentation(sourceImage, 0.75);
CGDataProviderRef dp = CGDataProviderCreateWithCFData((__bridge CFDataRef)jpegData);
CGImageRef cgImage = CGImageCreateWithJPEGDataProvider(dp, NULL, true, kCGRenderingIntentDefault);
[[UIImage imageWithCGImage:cgImage] drawInRect:drawRect];
Can someone please translate this to Swift 5 for me? It may be the solution to my question.
func draw(image: UIImage, in rect: CGRect) {
    guard let jpegData = image.jpegData(compressionQuality: 0.75),
          let dataProvider = CGDataProvider(data: jpegData as CFData),
          let cgImage = CGImage(jpegDataProviderSource: dataProvider, decode: nil, shouldInterpolate: true, intent: .defaultIntent) else {
        return
    }
    UIImage(cgImage: cgImage).draw(in: rect)
}
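If it helps, a rough sketch of calling this helper from a UIGraphicsPDFRenderer block (pageRect, photoRect and photos are placeholder names, and this may differ from how the PDF is actually built with PDFKit in the question):
// Sketch only: photos is a placeholder array of UIImages.
let pageRect = CGRect(x: 0, y: 0, width: 612, height: 792)   // US Letter at 72 dpi
let renderer = UIGraphicsPDFRenderer(bounds: pageRect)
let pdfData = renderer.pdfData { ctx in
    for photo in photos {
        ctx.beginPage()
        let photoRect = pageRect.insetBy(dx: 36, dy: 36)
        // Draw via the JPEG-backed helper above so the compressed bytes are what lands in the PDF.
        draw(image: photo, in: photoRect)
    }
}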
Turns out it's not just one task to resize images for printing to a PDF. It's two tasks.
The first task is resizing the photo. For example, if you don't want your picture to be more than about 1 MB, you might resize it to 1024x1024. If you'd prefer the pixel dimensions to be even lower, you can change that based on your needs.
Resizing example:
func resizeImage(image: UIImage) -> UIImage {
    let maxSize = CGFloat(768)
    if image.size.height >= maxSize && image.size.width >= maxSize {
        UIGraphicsBeginImageContext(CGSize(width: maxSize, height: maxSize))
        image.draw(in: CGRect(x: 0, y: 0, width: maxSize, height: maxSize))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    } else if image.size.height >= maxSize && image.size.width < maxSize {
        UIGraphicsBeginImageContext(CGSize(width: image.size.width, height: maxSize))
        image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: maxSize))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    } else if image.size.width >= maxSize && image.size.height < maxSize {
        UIGraphicsBeginImageContext(CGSize(width: maxSize, height: image.size.height))
        image.draw(in: CGRect(x: 0, y: 0, width: maxSize, height: image.size.height))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    } else {
        return image
    }
}
Disclaimer: I learned this code from another person on StackOverflow. I don't remember his name, but credit goes to him.
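As a side note, here is a sketch of a variant that keeps the aspect ratio instead of forcing both dimensions to maxSize (this is an addition for illustration, not part of the original answer):
// Sketch: scale the longer side down to maxSize while preserving the aspect ratio.
func resizeImageKeepingAspect(image: UIImage, maxSize: CGFloat = 768) -> UIImage {
    let largestSide = max(image.size.width, image.size.height)
    guard largestSide > maxSize else { return image }

    let scale = maxSize / largestSide
    let newSize = CGSize(width: image.size.width * scale, height: image.size.height * scale)

    UIGraphicsBeginImageContext(newSize)
    image.draw(in: CGRect(origin: .zero, size: newSize))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage ?? image
}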
Second, you then compress the resized image. I don't fully understand compression, but I set mine to something really low haha – do as you will.
Compression example:
func draw(image: UIImage, in rect: CGRect) {
    guard let oldImageData = image.jpegData(compressionQuality: 1) else {
        return
    }
    print("The size of the oldImage, before compression, is: \(oldImageData.count)")
    guard let jpegData = image.jpegData(compressionQuality: 0.05),
          let dataProvider = CGDataProvider(data: jpegData as CFData),
          let cgImage = CGImage(jpegDataProviderSource: dataProvider, decode: nil, shouldInterpolate: true, intent: .defaultIntent) else {
        return
    }
    let newImage = UIImage(cgImage: cgImage)
    print("The size of the newImage printed on the pdf is: \(jpegData.count)")
    newImage.draw(in: rect)
}
Disclaimer: I learned this code from someone named Michael McNabb. Credit goes to him.
I went ahead and drew the image to the PDF at the end of that function. These two things combined took my PDF from about 24 MB down to about 570 KB.

Visible Artifacts Compositing two UIImages with UIGraphicsContext using Hard Light transfer mode

I am compositing two UIImages together using a UIGraphicsContext.
It works for the most part, but using the Hard Light CGBlendMode, I am getting strange artifacts that are not present if I use a Hard Light transfer mode in Photoshop.
I would try to use Core Image, but I need to be able to control the opacity of the top layer being transferred with Hard Light, and as far as I can tell that is not possible.
Here is the image created with a UIGraphicsContext:
Here it is using the same layers and opacity in Photoshop:
Here is the base image:
And the layer being composited with 60% opacity using Hard Light:
I have tried setting the context interpolation quality to high, using a transparency layer, but nothing has made any improvement.
My code is below. Does anyone know how to fix this, or alternative ways of achieving the same results that I am getting in Photoshop?
public extension UIImage {
    public func compImagesWithImage(_ topImg: UIImage, blendMode: CGBlendMode, opacity: CGFloat) -> UIImage? {
        let baseImg = self
        if let cgImage = baseImg.cgImage {
            let baseW = CGFloat(cgImage.width)
            let baseH = CGFloat(cgImage.height)
            UIGraphicsBeginImageContext(CGSize(width: baseW, height: baseH))
            if let context = UIGraphicsGetCurrentContext() {
                context.saveGState()
                context.interpolationQuality = .high
                let flipVertical = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: baseH)
                context.concatenate(flipVertical)
                context.setAlpha(1.0)
                context.setBlendMode(.normal)
                context.draw(cgImage, in: CGRect(origin: .zero, size: CGSize(width: baseW, height: baseH)))
                context.setAlpha(opacity)
                context.setBlendMode(blendMode)
                context.beginTransparencyLayer(auxiliaryInfo: nil)
                context.draw(topImg.cgImage!, in: CGRect(origin: .zero, size: CGSize(width: baseW, height: baseH)))
                context.endTransparencyLayer()
                context.restoreGState()
                let finalImage = UIGraphicsGetImageFromCurrentImageContext()
                UIGraphicsEndImageContext()
                return finalImage
            }
        }
        return nil
    }
}
This fixed it: Highlight Artifacts When Using Hard Light Blend Mode
I just had to set the alpha of the base layer to 0.99.
Here is the updated code:
public func compImagesWithImage(_ topImg: UIImage, blendMode: CGBlendMode, opacity: CGFloat) -> UIImage? {
    let baseImg = self
    if let cgImage = baseImg.cgImage {
        let baseW = CGFloat(cgImage.width)
        let baseH = CGFloat(cgImage.height)
        UIGraphicsBeginImageContext(CGSize(width: baseW, height: baseH))
        if let context = UIGraphicsGetCurrentContext() {
            context.saveGState()
            context.interpolationQuality = .high
            let flipVertical = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: baseH)
            context.concatenate(flipVertical)
            context.setAlpha(0.99)
            context.setBlendMode(.normal)
            context.draw(cgImage, in: CGRect(origin: .zero, size: CGSize(width: baseW, height: baseH)))
            context.setAlpha(opacity)
            context.setBlendMode(blendMode)
            context.beginTransparencyLayer(auxiliaryInfo: nil)
            context.draw(topImg.cgImage!, in: CGRect(origin: .zero, size: CGSize(width: baseW, height: baseH)))
            context.endTransparencyLayer()
            context.restoreGState()
            let finalImage = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return finalImage
        }
    }
    return nil
}
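A hypothetical call site (baseImage and topImage are placeholder names), matching the 60% opacity Hard Light setup described above:
// Hypothetical usage of the extension method above.
let composited = baseImage.compImagesWithImage(topImage, blendMode: .hardLight, opacity: 0.6)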

How to change UIImage color?

I'd like to change the color of every pixel of any UIImage to one specific color (all pixels should get the same color):
... so of course I could just loop through every pixel of the UIImage and set its red, green and blue components to 0 to achieve a black look.
But obviously this is not an effective way to recolor an image, and I'm pretty sure there are more efficient methods of achieving this than looping through EVERY single pixel of the image.
func recolorImage(image: UIImage, color: String) -> UIImage {
    let img: CGImage = image.cgImage!
    let context = CGContext(data: nil, width: img.width, height: img.height, bitsPerComponent: 8, bytesPerRow: 4 * img.width, space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)!
    context.draw(img, in: CGRect(x: 0, y: 0, width: img.width, height: img.height))
    let data = context.data!.assumingMemoryBound(to: UInt8.self)
    for i in 0..<img.height {
        for j in 0..<img.width {
            // set data[pixel] ==> [0,0,0,255]
        }
    }
    let output = context.makeImage()!
    return UIImage(cgImage: output)
}
Any help would be much appreciated!
Since every pixel of the original image will be the same color, the resulting image does not depend on the pixels of the original image at all. Your method really only needs the size of the image; it then creates a new image of that size filled with a single color.
func recolorImage(image: UIImage, color: UIColor) -> UIImage {
    let size = image.size
    UIGraphicsBeginImageContext(size)
    color.setFill()
    UIRectFill(CGRect(origin: .zero, size: size))
    let image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return image
}
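If the original image's transparency should be preserved (so only the non-transparent pixels get tinted), here is a sketch of one common variant using a destinationIn blend pass; this goes beyond the answer above:
// Sketch: fill with the color, then keep only the pixels where the original image is opaque.
func tintImage(image: UIImage, color: UIColor) -> UIImage {
    let rect = CGRect(origin: .zero, size: image.size)
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    color.setFill()
    UIRectFill(rect)
    image.draw(in: rect, blendMode: .destinationIn, alpha: 1)
    let tinted = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return tinted ?? image
}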

How to reduce GIF resolution with Swift 3

I have Data that contains an animated GIF, and I wanted to know how to reduce its size before uploading it to the server. I would prefer a pod that does this, but I have looked everywhere on the internet and couldn't find anything. For a JPEG, I usually use the function below after creating a UIImage from the data:
func scaleImage(image: UIImage, newWidth: CGFloat) -> UIImage {
    let scale = newWidth / image.size.width
    let newHeight = image.size.height * scale
    UIGraphicsBeginImageContext(CGSize(width: newWidth, height: newHeight))
    image.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
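Since the UIImage-based function above only captures a single frame, an animated GIF generally has to be resampled frame by frame. Here is a rough sketch of the ImageIO route (downscale every frame and re-encode), assuming Swift 3-era imports; the per-frame property handling (delay times, loop count) is simply copied from the source and may need tuning:
import ImageIO
import MobileCoreServices   // for kUTTypeGIF

// Sketch: downscale every frame of a GIF and re-encode it with ImageIO.
func resizeGIF(data: Data, maxPixelSize: Int) -> Data? {
    guard let source = CGImageSourceCreateWithData(data as CFData, nil) else { return nil }
    let frameCount = CGImageSourceGetCount(source)

    let output = NSMutableData()
    guard let destination = CGImageDestinationCreateWithData(output as CFMutableData, kUTTypeGIF, frameCount, nil) else { return nil }

    let thumbnailOptions: [CFString: Any] = [
        kCGImageSourceCreateThumbnailFromImageAlways: true,
        kCGImageSourceThumbnailMaxPixelSize: maxPixelSize
    ]

    for index in 0..<frameCount {
        guard let frame = CGImageSourceCreateThumbnailAtIndex(source, index, thumbnailOptions as CFDictionary) else { continue }
        // Carry over the per-frame GIF properties (e.g. delay time) from the source.
        let frameProperties = CGImageSourceCopyPropertiesAtIndex(source, index, nil)
        CGImageDestinationAddImage(destination, frame, frameProperties)
    }

    guard CGImageDestinationFinalize(destination) else { return nil }
    return output as Data
}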
