How to convert a PNG to a WebP in Swift?

I'm trying to convert a PNG image to WebP in Swift. The only framework I've gotten working is OpenCV, through an Objective-C++ wrapper. The problem is that if I resize the image to 512x512 (the resolution I need), it crashes.
If I resize the image (with either OpenCV or Swift) to another resolution (e.g. 510x510), it doesn't crash.
The strange thing is that it never crashes on the simulator, while on an iPhone XS it crashes 90% of the time. How can I convert a PNG to WebP in Swift? Why does OpenCV crash on the imwrite call when the Mat is 512x512?
UPDATE:
OpenCV version: 3.4.2
I found out that this problem happens when the PNG image has previously been processed by the Core Graphics framework. I need Core Graphics because I render a UIView into a UIImage this way:
let renderer = UIGraphicsImageRenderer(bounds: bounds)
return renderer.image { rendererContext in
    layer.render(in: rendererContext.cgContext)
}

Swift 4.0
Add both the 'SDWebImage' pod ( https://github.com/SDWebImage/SDWebImage ) and its WebP subspec 'SDWebImage/WebP' ( https://github.com/SDWebImage/SDWebImageWebPCoder ):
import SDWebImage

if let imageDownload = UIImage(data: data),
   let photo = imageDownload.sd_imageData(as: SDImageFormat.webP) {
    // `photo` now holds the WebP-encoded bytes
}
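In newer SDWebImage versions, the WebP support lives in the separate SDWebImageWebPCoder pod, and you can call the coder directly instead of going through the UIImage category. A minimal sketch, assuming a recent SDWebImageWebPCoder is installed (not from the original answer):

```swift
import SDWebImage
import SDWebImageWebPCoder

// Register the WebP coder once, e.g. at app launch.
SDImageCodersManager.shared.addCoder(SDImageWebPCoder.shared)

func webPData(from image: UIImage, quality: Double = 0.9) -> Data? {
    // encodeCompressionQuality is in 0...1; 1.0 produces lossless WebP.
    return SDImageWebPCoder.shared.encodedData(
        with: image,
        format: .webP,
        options: [.encodeCompressionQuality: quality]
    )
}
```

Registering the coder also lets SDWebImage's normal loading APIs decode WebP transparently.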

I ended up using another framework to convert PNG to WebP: https://github.com/seanooi/iOS-WebP. I had to create a wrapper to use it from Swift, but it works very well 😊
My wrapper is very simple but does what I needed:
#import <Foundation/Foundation.h>
#import "WebPWrapper.h"
#import "UIImage+WebP.h"

@implementation WebPWrapper

- (NSData *)convertUIImageToWebp:(UIImage *)image :(int)quality {
    NSData *webPData = [UIImage imageToWebP:image quality:quality];
    return webPData;
}

@end
In Swift (after exposing WebPWrapper.h in the project's bridging header), I use it this way:
let webPData = WebPWrapper().convertUIImage(toWebp: image, 90)

Related

Magick++: how do I use Magick++ to convert an animated GIF to an animated WebP?

The main logic code is as follows:
std::vector<Magick::Image> images;
Magick::readImages(&images, input_blob);
for (auto &image : images) {
    image.magick("WEBP");
}
output_blob = new Magick::Blob;
Magick::writeImages(images.begin(), images.end(), output_blob);
When I wrote the output_blob data to a file, I got a static WebP image.
Q: How can I get an animated WebP file?
Thanks in advance!

UIImage.pngData(_:) losing Camera photo orientation [duplicate]

This question already has answers here:
Swift PNG Image being saved with incorrect orientation
(3 answers)
Closed 2 years ago.
I'm learning how to use UIImagePickerController and got stuck with a problem using UIImagePickerController.sourceType = .camera.
What my app is supposed to do is:
to allow a user to take a photo using the system view controller mentioned above
then convert this image using UIImage.pngData(_:)
save this data to an appropriate struct field (doesn't matter which one, in that case)
use data saved to the structure to make up an image and set this image as a UIButton foreground image
When I do so (following the App Development with Swift book's example project), the image appears rotated 90 degrees for some reason, which I'd like to understand.
I've tried creating an additional UIImageView and setting its image property before converting the UIImage to pngData, and the image appeared normally (see the 2nd screenshot).
When choosing a photo from the photo library, the problem does not occur in either case (before or after converting).
So I suppose that pngData is somehow losing the photo-orientation information? Or perhaps I've messed up somewhere else.
Here are screenshots from my app, so you can see how the original photo looks and how it looks in-app (above the labels - the UIButton; below - a test UIImageView). Never mind the text :)
If you save the UIImage as a JPEG, the orientation is stored in the file's rotation flag (EXIF metadata):
let jpgData = downloadedImage.jpegData(compressionQuality: 1)
PNG does not support a rotation flag, so if you save a UIImage as a PNG, the pixels are written unrotated with no flag to correct them on load. If you want PNGs, you must rotate the image yourself first.
To get a rotated image you need to redraw it; you can use the following extension:
extension UIImage {
    func rotateImage() -> UIImage? {
        if self.imageOrientation == .up {
            return self
        }
        // Drawing into a context bakes the orientation into the pixels.
        // Pass self.scale so the result keeps the original resolution.
        UIGraphicsBeginImageContextWithOptions(self.size, false, self.scale)
        defer { UIGraphicsEndImageContext() }
        self.draw(in: CGRect(origin: .zero, size: self.size))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}
How to use
let rotatedImage = downloadedImage.rotateImage()
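On iOS 10+, the same normalization can be sketched with UIGraphicsImageRenderer, which manages the context for you and preserves the image's scale (a sketch, not from the original answer):

```swift
import UIKit

extension UIImage {
    /// Returns a copy whose pixels are physically rotated so the
    /// orientation metadata is always `.up`.
    func normalizedImage() -> UIImage {
        guard imageOrientation != .up else { return self }
        let format = UIGraphicsImageRendererFormat.default()
        format.scale = scale
        let renderer = UIGraphicsImageRenderer(size: size, format: format)
        return renderer.image { _ in
            draw(in: CGRect(origin: .zero, size: size))
        }
    }
}
```

Saving `photo.normalizedImage().pngData()` should then produce a correctly oriented PNG.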

Need some help converting cvpixelbuffer data to a jpeg/png in iOS

So I'm trying to get a JPEG/PNG representation of the grayscale depth maps that are typically used in iOS image-depth examples. The depth data is stored in each JPEG as auxiliary data. I've followed some tutorials and have no problem rendering this grayscale data to the screen, but I can find no way to actually save it as a JPEG/PNG file. I'm pretty much using this code: https://www.raywenderlich.com/168312/image-depth-maps-tutorial-ios-getting-started
The depth data is put into a CVPixelBuffer and manipulated accordingly. I believe it's in the format kCVPixelFormatType_DisparityFloat32.
While I'm able to see this data rendered correctly on screen, I'm unable to use UIImagePNGRepresentation or UIImageJPEGRepresentation on it. Sure, I could manually capture a screenshot, but that's not really ideal.
I have a suspicion that the CVPixelBuffer data format is not compatible with these UIImage functions, and that's why I can't get them to spit out an image.
Does anyone have any suggestions?
// CVPixelBuffer -> CIImage -> CGImage -> UIImage
let ciImageDepth = CIImage(cvPixelBuffer: cvPixelBufferDepth)
let contextDepth = CIContext(options: nil)
let cgImageDepth = contextDepth.createCGImage(ciImageDepth, from: ciImageDepth.extent)!
let uiImageDepth = UIImage(cgImage: cgImageDepth, scale: 1, orientation: .up)
// Save the UIImage to the Photos album
UIImageWriteToSavedPhotosAlbum(uiImageDepth, nil, nil, nil)
I figured it out. I had to convert to a CGImage first, going CVPixelBuffer → CIImage → CGImage → UIImage, as in the snippet above. Note that a UIImage created directly from a CIImage has no backing CGImage, which is why the PNG/JPEG representation functions return nil for it:
let ciimage = CIImage(cvPixelBuffer: depthBuffer) // depth CVPixelBuffer
let depthUIImage = UIImage(ciImage: ciimage) // displays fine, but cannot be encoded to PNG/JPEG

GPUImage doubles image size - iOS/Swift

I am trying to convert an image to grayscale using GPUImage, and wrote an extension to do it. The grayscale part works fine, but the output image is doubled in size, and in my case I need the image to keep its exact size. Can someone please help me with this? Any help would be highly appreciated.
This is the extension I wrote
import UIKit
import GPUImage

extension UIImage {
    public func grayscale() -> UIImage? {
        var processedImage = self
        print("1: \(processedImage.size)")
        let image = GPUImagePicture(image: processedImage)
        let grayFilter = GPUImageGrayscaleFilter()
        image?.addTarget(grayFilter)
        grayFilter.useNextFrameForImageCapture()
        image?.processImage()
        processedImage = grayFilter.imageFromCurrentFramebuffer()
        print("2: \(processedImage.size)")
        return processedImage
    }
}
This is the output in console
Edit: I know the image can be resized later on, but I need to know why this is happening and whether there is anything I can do to keep the image at its original size while using GPUImage.
A doubled point size like this usually means the output UIImage lost the original's scale factor (2.0 on Retina devices), so rebuild it with the correct scale afterwards:
if let cgImage = processedImage.cgImage {
    // Replace 2.0 with the original image's scale (e.g. self.scale).
    let scaledImage = UIImage(cgImage: cgImage, scale: 2.0, orientation: processedImage.imageOrientation)
    return scaledImage
}

Write UIImage to PNG; read PNG into UIImage; resulting image has twice the resolution

Following these steps I'm getting an unexpected result:
1) Write UIImage to PNG
2) Read PNG into UIImage
Result: the sizes of the UIImages (before and after PNG) are different. The UIImage created by reading from the PNG has twice the resolution as the original UIImage used to create the PNG.
Swift pseudocode follows:
var initialImage: UIImage
// Obtain the UIImage from UIImagePickerController, Images.xcassets, ...
// As an example, consider a UIImage with size = (420.0, 280.0).
println("Image size = \(initialImage.size)") // Prints "Image size = (420.0, 280.0)"
// Write the image to a file
UIImagePNGRepresentation(initialImage).writeToFile(imagePath, atomically: true)
// Read the image back from the file
let newImage = UIImage(contentsOfFile: imagePath)!
println("Image size = \(newImage.size)") // Prints "Image size = (840.0, 560.0)"!
All is well otherwise. Displaying the newly read image works great.
I'm running this connected to an iPhone 6. Any thoughts would be greatly appreciated.
gary
It's being converted for the Retina display. Each point is represented by 2 pixels on screen, so the saved file has double the pixel resolution: the PNG stores pixels but no UIKit scale metadata, and UIImage(contentsOfFile:) assumes scale 1 unless the filename has an @2x suffix. This is a good article to reference: http://www.paintcodeapp.com/news/iphone-6-screens-demystified
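One way to keep the point size stable across the round trip is to pass the original scale back when recreating the UIImage from the file's data. A sketch using the question's `initialImage` and `imagePath`, in modern Swift (`pngData()` is the current name for `UIImagePNGRepresentation`):

```swift
import UIKit

// Write the PNG as before (pixel data only; the UIKit scale is not stored).
let pngData = initialImage.pngData()!
try? pngData.write(to: URL(fileURLWithPath: imagePath))

// Read it back, re-applying the original scale so the point size matches.
let data = try! Data(contentsOf: URL(fileURLWithPath: imagePath))
let newImage = UIImage(data: data, scale: initialImage.scale)!
// newImage.size now equals initialImage.size again.
```

Alternatively, saving the file with an @2x suffix lets UIImage(contentsOfFile:) infer the scale by itself.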