How to convert UIView to UIImage with high resolution?

There have been several discussions about how to convert a UIView to a UIImage, using either view.drawHierarchy(in:afterScreenUpdates:) or view.layer.render(in:). However, even if I set the scale to the device scale, the result is still at a pretty bad resolution. I wonder if there's a way to turn a UIView into a UIImage at high resolution and quality?

You need to set the correct content scale on each subview.
extension UIView {
    /// Recursively set the content scale factor on this view and all of its subviews.
    func scale(by scale: CGFloat) {
        self.contentScaleFactor = scale
        for subview in self.subviews {
            subview.scale(by: scale)
        }
    }

    /// Render the view into a UIImage at the given scale (defaults to the screen scale).
    func getImage(scale: CGFloat? = nil) -> UIImage {
        let newScale = scale ?? UIScreen.main.scale
        self.scale(by: newScale)

        let format = UIGraphicsImageRendererFormat()
        format.scale = newScale

        let renderer = UIGraphicsImageRenderer(size: self.bounds.size, format: format)
        let image = renderer.image { rendererContext in
            self.layer.render(in: rendererContext.cgContext)
        }
        return image
    }
}
To create your image:
let image = yourView.getImage()
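If render(in:) misses content that is only composited on screen (for example some live blur or video layers), a variant built on drawHierarchy(in:afterScreenUpdates:) can be swapped in. This is a sketch under the same assumptions as the extension above; getSnapshotImage is an illustrative name, not part of the original answer:

extension UIView {
    /// Snapshot what is actually rendered on screen, at the given scale.
    func getSnapshotImage(scale: CGFloat? = nil) -> UIImage {
        let newScale = scale ?? UIScreen.main.scale
        let format = UIGraphicsImageRendererFormat()
        format.scale = newScale
        let renderer = UIGraphicsImageRenderer(size: bounds.size, format: format)
        return renderer.image { _ in
            _ = drawHierarchy(in: bounds, afterScreenUpdates: true)
        }
    }
}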

Related

Achieve same CIFilter effect on different sizes of same image

I'm building a photo editor, and to keep good performance I filter a small version of the image first; when the user wants to export it, I then filter the higher-resolution image.
I'm using the CIGaussianBlur filter, but I can't achieve the same results across different resolutions of the same image.
This is my code:
import UIKit
import MetalKit
import CoreImage.CIFilterBuiltins

class ViewController: UIViewController {
    var originalImage = UIImage()
    var previewImageView = UIImageView()
    var previewCIImage: CIImage!
    var scaleFactor = CGFloat()
    let blurFilter = CIFilter.gaussianBlur()
    var blurSlider = UISlider()
    var blurRadius = Float()
    let context = CIContext()   // Core Image context used when exporting
    var mtkView: MTKView!       // Metal view that displays the filtered preview

    override func viewDidLoad() {
        super.viewDidLoad()
        previewImageView.image = originalImage.scalePreservingAspectRatio(targetSize: previewImageView.frame.size)
        previewCIImage = CIImage(image: previewImageView.image!)
        // Get the scale factor
        scaleFactor = originalImage.getScaleFactor(targetSize: previewImageView.frame.size)
        blurSlider.addTarget(self, action: #selector(blurChanged(slider:)), for: .valueChanged)
    }

    @objc func blurChanged(slider: UISlider) {
        blurRadius = slider.value
        let scaledRadius = blurRadius * Float(scaleFactor)
        blurFilter.radius = scaledRadius
        mtkView.setNeedsDisplay()
    }

    func exportFullSizeImage() -> UIImage {
        let inputImage = CIImage(image: originalImage)!
        blurFilter.inputImage = inputImage.clampedToExtent()
        // Assuming scaleFactor is 1.0 for the unscaled image
        let scaledRadius = blurRadius * 1.0
        blurFilter.radius = scaledRadius
        // Crop back to the original extent; clampedToExtent() made it infinite
        let output = blurFilter.outputImage!.cropped(to: inputImage.extent)
        let outputCGImage = context.createCGImage(output, from: output.extent)
        return UIImage(cgImage: outputCGImage!)
    }
}
extension UIImage {
func scalePreservingAspectRatio(targetSize: CGSize) -> UIImage {
let widthRatio = targetSize.width / size.width
let heightRatio = targetSize.height / size.height
let scaleFactor = min(widthRatio, heightRatio)
let scaledImageSize = CGSize(
width: size.width * scaleFactor,
height: size.height * scaleFactor
)
let renderer = UIGraphicsImageRenderer(
size: scaledImageSize
)
let scaledImage = renderer.image { _ in
self.draw(in: CGRect(
origin: .zero,
size: scaledImageSize
))
}
return scaledImage
}
func getScaleFactor(targetSize: CGSize) -> CGFloat {
let widthRatio = targetSize.width / size.width
let heightRatio = targetSize.height / size.height
let scaleFactor = min(widthRatio, heightRatio)
return scaleFactor
}
}
Here's the output of the small version of the image (preview image):
And here's the output of the full size image (unscaled image):
The results are clearly different; the full-size/unscaled image has more blur. I need to achieve the same blur effect on both images.
I've found two similar questions: Output of CIFilter has different effect for different sizes of same image and How to achieve same CIFilter effect on multiple sizes of same image
I know the scale factor of the resized image; maybe that's useful for an answer.
The parameter scaling from the linked answer should work for all images, regardless of their aspect ratio. The important part is that you apply the scale factor to both images, the preview and the export.
Alternatively, since you have the scale factor of the resized image, you can use that to scale the parameter (instead of using the image size):
// assuming scaleFactor is 1.0 for the unscaled image
let scaledRadius = radius * scaleFactor
filter.setValue(scaledRadius, forKey: "inputRadius")
Please also note that not every parameter of every filter needs scaling to achieve consistency across different image sizes. Usually, only parameters that describe some kind of effect radius or size need scaling.
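To make the "apply the scale factor to both images" point concrete, here is a minimal sketch that assumes the getScaleFactor(targetSize:) helper from the question; blurred(_:radius:scaleFactor:context:) is an illustrative helper, not code from the original post:

import CoreImage.CIFilterBuiltins

/// Apply the same visual blur to any size of the image.
/// Pass scaleFactor = 1.0 for the full-size image and the preview's
/// scale factor (< 1.0) for the preview.
func blurred(_ input: CIImage, radius: Float, scaleFactor: CGFloat, context: CIContext) -> CGImage? {
    let filter = CIFilter.gaussianBlur()
    filter.inputImage = input.clampedToExtent()
    filter.radius = radius * Float(scaleFactor)   // scale the parameter, not the image
    guard let output = filter.outputImage else { return nil }
    // Render only the original extent; clampedToExtent() made the output infinite.
    return context.createCGImage(output, from: input.extent)
}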

Compose CIImages such that one is centered and same width above the other

I am trying to achieve this result:
I create the gradient image, which has a 16:9 aspect ratio.
This is my code:
extension UIImage {
    func mergeWithGradient(completion: @escaping (UIImage) -> ()) {
        let width = self.size.width
        let maxWidth = min(width, 1024.0)
        let height = maxWidth * 16.0 / 9.0
        let totalSize = CGSize(width: maxWidth, height: height)
        let colors = self.colors()
        guard
            let gradientImage = UIImage(size: totalSize, gradientPoints: [(colors.top, 0), (colors.bottom, 1)].map { GradientPoint(location: $0.1, color: $0.0) }),
            let cgImage = self.rotateToImageOrientationUp().cgImage,
            let cgGradientImage = gradientImage.cgImage
        else {
            return
        }
        let context = CIContext(options: nil)
        let ciImage = CIImage(cgImage: cgImage)
        let ciGradientImage = CIImage(cgImage: cgGradientImage)
        let ciMerged = ciImage.composited(over: ciGradientImage)
        let cgMerged = context.createCGImage(ciMerged, from: ciMerged.extent)!
        let uiMerged = UIImage(cgImage: cgMerged)
        completion(uiMerged)
    }
}
But the attached code actually gets this result:
How can I move the image to the center?
This is really easy with Core Graphics, but I need to do it with Core Image, since later on my project will need more filters and good performance.
If you want to use Core Image to merge the images while centering the overlay, the most straightforward way is to make your overlay image the same size as your gradient image to begin with and letterbox it with transparent pixels. You can do that with UIGraphicsImageRenderer before you convert to a CIImage.
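A minimal sketch of that letter-boxing step, assuming you already have the overlay photo and the gradient image's pixel size; letterboxed(_:to:) is an illustrative helper, not code from the original answer:

/// Pad `overlay` with transparent pixels to `gradientSize`, centered vertically.
func letterboxed(_ overlay: UIImage, to gradientSize: CGSize) -> CIImage? {
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1   // work in pixels, to match the CIImage pipeline
    let renderer = UIGraphicsImageRenderer(size: gradientSize, format: format)
    let padded = renderer.image { _ in
        // Scale the overlay to the gradient's width, preserving aspect ratio.
        let factor = gradientSize.width / overlay.size.width
        let drawSize = CGSize(width: gradientSize.width, height: overlay.size.height * factor)
        let origin = CGPoint(x: 0, y: (gradientSize.height - drawSize.height) / 2)
        overlay.draw(in: CGRect(origin: origin, size: drawSize))
    }
    return CIImage(image: padded)
}

With the padded overlay, the existing composited(over:) call can stay as it is, and the photo lands centered over the gradient.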

UIImageJPEGRepresentation doubles image resolution

I am trying to save an image coming from the iPhone camera to a file. I use the following code:
try UIImageJPEGRepresentation(toWrite, 0.8)?.write(to: tempURL, options: NSData.WritingOptions.atomicWrite)
This results in a file with double the resolution of the toWrite UIImage. I confirmed in the watch expressions that creating a new UIImage from UIImageJPEGRepresentation doubles its resolution:
-> toWrite.size CGSize (width = 3264, height = 2448)
-> UIImage(data: UIImageJPEGRepresentation(toWrite, 0.8)).size CGSize? (width = 6528, height = 4896)
Any idea why this would happen, and how to avoid it?
Thanks
Your initial image has a scale factor of 2, but when you init an image from data you get an image with a scale factor of 1. The way to solve it is to control the scale yourself and init the image with the scale property:
@available(iOS 6.0, *)
public init?(data: Data, scale: CGFloat)
Here is playground code that shows how you can set the scale:
extension UIImage {
    class func with(color: UIColor, size: CGSize) -> UIImage? {
        let rect = CGRect(origin: .zero, size: size)
        UIGraphicsBeginImageContextWithOptions(size, true, 2.0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.setFillColor(color.cgColor)
        context.fill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}

let image = UIImage.with(color: UIColor.orange, size: CGSize(width: 100, height: 100))
if let image = image {
    let scale = image.scale
    if let data = UIImageJPEGRepresentation(image, 0.8) {
        if let newImage = UIImage(data: data, scale: scale) {
            debugPrint(newImage.size)
        }
    }
}
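Applied back to the original problem, the image keeps its point size once the scale is passed along when reading the data back. A short sketch using only the names from the question:

// Write the JPEG as before (inside a throwing context, as in the question).
try UIImageJPEGRepresentation(toWrite, 0.8)?.write(to: tempURL, options: NSData.WritingOptions.atomicWrite)
// Read it back with the source image's scale; the pixel data is unchanged.
if let data = UIImageJPEGRepresentation(toWrite, 0.8),
   let reloaded = UIImage(data: data, scale: toWrite.scale) {
    debugPrint(reloaded.size)   // now matches toWrite.size (points)
}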

Photo resizing and compressing

I am resizing and compressing my photos and getting an unusual result.
When I choose an image from the photo album, it compresses and resizes fine. However, if I do it on an image that was passed from the camera, the image becomes oddly small (and unviewable). As a test, I wired a button to run my compression and resizing functions on an image taken either from the camera or the photo album. Below are my code and console output.
@IBAction func testBtnPressed(sender: AnyObject) {
    let img = selectedImageView.image!
    print("before resize image \(img.dataLengh_kb)kb size \(img.size)")
    let resizedImg = img.resizeWithWidth(1080)
    print("1080 After resize image \(resizedImg!.dataLengh_kb)kb size \(resizedImg!.size)")
    let compressedImageData = resizedImg!.mediumQualityJPEGNSData
    print("Compress to medium quality = \(compressedImageData.length / 1024)kb")
}

extension UIImage {
    var mediumQualityJPEGNSData: NSData { return UIImageJPEGRepresentation(self, 0.5)! }

    func resizeWithWidth(width: CGFloat) -> UIImage? {
        let imageView = UIImageView(frame: CGRect(origin: .zero, size: CGSize(width: width, height: CGFloat(ceil(width / size.width * size.height)))))
        imageView.contentMode = .ScaleAspectFit
        imageView.image = self
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.renderInContext(context)
        guard let result = UIGraphicsGetImageFromCurrentImageContext() else { return nil }
        UIGraphicsEndImageContext()
        return result
    }
}
When photo was selected from photo album
before resize image 5004kb size (3024.0, 3024.0)
1080 After resize image 1023kb size (1080.0, 1080.0)
Compress to medium quality = 119kb
When photo was passed by camera
before resize image 4653kb size (24385.536, 24385.536)
1080 After resize image 25kb size (1080.576, 1080.576)
Compress to medium quality = 4kb
I replaced the image resizing function with the following one, and it worked a lot better:
func resizeImage(newHeight: CGFloat) -> UIImage {
    let scale = newHeight / self.size.height
    let newWidth = self.size.width * scale
    UIGraphicsBeginImageContext(CGSizeMake(newWidth, newHeight))
    self.drawInRect(CGRectMake(0, 0, newWidth, newHeight))
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
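One likely reason this works better: UIGraphicsBeginImageContext always creates a context with a scale factor of 1.0, while the original resizeWithWidth passed the image's own scale to UIGraphicsBeginImageContextWithOptions. For completeness, here is a modern sketch (UIGraphicsImageRenderer, iOS 10+) that pins the scale to 1 explicitly; resized(toHeight:) is an illustrative name, not from the original answer:

extension UIImage {
    /// Resize so the output bitmap is exactly newWidth x newHeight pixels (scale 1).
    func resized(toHeight newHeight: CGFloat) -> UIImage {
        let newWidth = size.width * (newHeight / size.height)
        let format = UIGraphicsImageRendererFormat()
        format.scale = 1
        let renderer = UIGraphicsImageRenderer(size: CGSize(width: newWidth, height: newHeight), format: format)
        return renderer.image { _ in
            draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        }
    }
}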

Apply a mask to AVCaptureStillImageOutput

I'm working on a project where I'd like to mask a photo that the user has just taken with their camera. The mask is created at a specific aspect ratio to add letterboxes to a photo.
I can successfully create the image, create the mask, and save both to the camera roll, but I can't apply the mask to the image. Here's the code I have now:
func takePhoto() {
    dispatch_async(self.sessionQueue) { () -> Void in
        if let photoOutput = self.output as? AVCaptureStillImageOutput {
            photoOutput.captureStillImageAsynchronouslyFromConnection(self.outputConnection) { (imageDataSampleBuffer, err) -> Void in
                if err == nil {
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
                    let image = UIImage(data: imageData)
                    if let _ = image {
                        let maskedImage = self.maskImage(image!)
                        print("masked image: \(maskedImage)")
                        self.savePhotoToLibrary(maskedImage)
                    }
                } else {
                    print("Error while capturing the image: \(err)")
                }
            }
        }
    }
}

func maskImage(image: UIImage) -> UIImage {
    let mask = createImageMask(image)
    let maskedImage = CGImageCreateWithMask(image.CGImage, mask!)
    return UIImage(CGImage: maskedImage!)
}

func createImageMask(image: UIImage) -> CGImage? {
    let width = image.size.width
    let height = width / CGFloat(store.state.aspect.rawValue)
    let x = CGFloat(0.0)
    let y = (image.size.height - height) / 2
    let maskRect = CGRectMake(0.0, 0.0, image.size.width, image.size.height)
    let maskContents = CGRectMake(x, y, width, height)
    var color = UIColor(white: 1.0, alpha: 0.0)
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(maskRect.size.width, maskRect.size.height), false, 0.0)
    color.setFill()
    UIRectFill(maskRect)
    color = UIColor(white: 0.0, alpha: 1.0)
    color.setFill()
    UIRectFill(maskContents)
    let maskImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    print("mask: \(maskImage)")
    savePhotoToLibrary(image)
    savePhotoToLibrary(maskImage)
    let mask = CGImageMaskCreate(
        CGImageGetWidth(maskImage.CGImage),
        CGImageGetHeight(maskImage.CGImage),
        CGImageGetBitsPerComponent(maskImage.CGImage),
        CGImageGetBitsPerPixel(maskImage.CGImage),
        CGImageGetBytesPerRow(maskImage.CGImage),
        CGImageGetDataProvider(maskImage.CGImage),
        nil,
        false)
    return mask
}
From what I understand, CGImageCreateWithMask requires that the image being masked has an alpha channel. I've tried everything I've seen here to add an alpha channel to the JPEG representation, but I'm not having any luck. Any help would be super.
This may be a bug, or maybe it's just a bit misleading. CGImageCreateWithMask() doesn't actually modify the image - it just associates the mask data with the image data, and uses the mask when you draw the image to a context (such as in a UIImageView), but not when you save the image to disk.
There are a couple approaches to generating a "rendered" version of the masked image, but if I understand your intent, you don't really want a "mask" ... you want a letter-boxed version of the image.
Here is one option that will effectively draw black bars on the top and bottom of your image (the bars / frame color is an optional parameter, if you don't want black). You can then save the modified image.
In your code above, replace
let maskedImage = self.maskImage(image!)
with
let height = image.size.width / CGFloat(store.state.aspect.rawValue)
let maskedImage = self.doLetterBox(image!, visibleHeight: height)
and add this function:
func doLetterBox(sourceImage: UIImage, visibleHeight: CGFloat, frameColor: UIColor? = UIColor.blackColor()) -> UIImage! {
    // local rect based on sourceImage size
    let imageRect: CGRect = CGRectMake(0.0, 0.0, sourceImage.size.width, sourceImage.size.height)
    // rect for "visible" part of letter-boxed image
    let clipRect: CGRect = CGRectMake(0.0, (imageRect.size.height - visibleHeight) / 2.0, imageRect.size.width, visibleHeight)
    // set up the image context, using sourceImage size
    UIGraphicsBeginImageContextWithOptions(imageRect.size, true, UIScreen.mainScreen().scale)
    let ctx: CGContextRef = UIGraphicsGetCurrentContext()!
    CGContextSaveGState(ctx)
    // fill new empty image with frameColor (defaults to black)
    CGContextSetFillColorWithColor(ctx, frameColor?.CGColor)
    CGContextFillRect(ctx, imageRect)
    // set clipping rectangle to allow drawing only in the desired area
    UIRectClip(clipRect)
    // draw the sourceImage at full image size (the letter-boxed portion will be clipped)
    sourceImage.drawInRect(imageRect)
    // get new letter-boxed image
    let resultImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    // clean up
    CGContextRestoreGState(ctx)
    UIGraphicsEndImageContext()
    return resultImage
}
