Cropping AVCapturePhoto to overlay rectangle displayed on screen - iOS

I am trying to take a picture of a thin piece of metal, cropped to the outline displayed on the screen. I have looked at almost every other post on here, but nothing has worked for me yet. The image will then be analysed by a library. I can get some cropping to happen, but never to the rectangle displayed on screen. I have tried rotating the image before cropping, and calculating the rect based on the rectangle on screen.
Here is my capture code. previewView is the container, and videoLayer is the layer for the AVCapture video.
// Photo capture delegate
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    guard let imgData = photo.fileDataRepresentation(), let uiImg = UIImage(data: imgData), let cgImg = uiImg.cgImage else {
        return
    }
    print("Original image size: ", uiImg.size, "\nCGHeight: ", cgImg.height, " width: ", cgImg.width)
    print("Orientation: ", uiImg.imageOrientation.rawValue)
    guard let img = cropImage(image: uiImg) else {
        return
    }
    showImage(image: img)
}
func cropImage(image: UIImage) -> UIImage? {
    print("Image size before crop: ", image.size)
    // Get the croppedRect from the function below
    let croppedRect = calculateRect(image: image)
    guard let imgRet = image.cgImage?.cropping(to: croppedRect) else {
        return nil
    }
    return UIImage(cgImage: imgRet)
}
func calculateRect(image: UIImage) -> CGRect {
    let originalSize: CGSize
    let visibleLayerFrame = self.rectangleView.bounds
    // Convert the on-screen rectangle into the metadata output coordinate space
    let metaRect = self.videoLayer.metadataOutputRectConverted(fromLayerRect: visibleLayerFrame)
    print("MetaRect: ", metaRect)
    // Check orientation: .left / .right means the underlying pixel data is rotated
    if image.imageOrientation == .left || image.imageOrientation == .right {
        originalSize = CGSize(width: image.size.height, height: image.size.width)
    } else {
        originalSize = image.size
    }
    // Scale the normalized metadata rect up to pixel coordinates
    let cropRect = CGRect(x: metaRect.origin.x * originalSize.width,
                          y: metaRect.origin.y * originalSize.height,
                          width: metaRect.size.width * originalSize.width,
                          height: metaRect.size.height * originalSize.height).integral
    print("Calculated Rect: ", cropRect)
    return cropRect
}
func showImage(image: UIImage) {
    if takenImage != nil {
        takenImage = nil
    }
    takenImage = UIImageView(image: image)
    takenImage.frame = CGRect(x: 10, y: 50, width: 400, height: 1080)
    takenImage.contentMode = .scaleAspectFit
    print("Cropped Image Size: ", image.size)
    self.previewView.addSubview(takenImage)
}
And this is along the lines of what I keep getting.
What am I screwing up?

I managed to solve the issue for my use case.
private func cropToPreviewLayer(from originalImage: UIImage, toSizeOf rect: CGRect) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }
    // previewLayer is the AVCaptureVideoPreviewLayer, configured with
    // resizeAspectFill video gravity and portrait videoOrientation.
    let outputRect = previewLayer.metadataOutputRectConverted(fromLayerRect: rect)
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width,
                          y: outputRect.origin.y * height,
                          width: outputRect.size.width * width,
                          height: outputRect.size.height * height)
    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        // Reapply the source orientation so the cropped image displays upright
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }
    return nil
}
Usage of the code for my case:
let rect = CGRect(x: 25, y: 150, width: 325, height: 230)
let croppedImage = self.cropToPreviewLayer(from: image, toSizeOf: rect)
self.imageView.image = croppedImage
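For context, here is a minimal sketch of the preview-layer setup this answer assumes (the session and view names are illustrative, not from the original answer):
// Minimal sketch, assuming `session` is an already-configured AVCaptureSession.
// The two properties the answer calls out: aspect-fill gravity and portrait orientation.
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.videoGravity = .resizeAspectFill
previewLayer.connection?.videoOrientation = .portrait
previewLayer.frame = view.bounds
view.layer.addSublayer(previewLayer)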

The world of UIKit has the TOP LEFT corner as 0,0.
The 0,0 point in the AVFoundation world is the BOTTOM LEFT corner.
So you have to translate by rotating 90 degrees.
That's why your image is bonkers.
Also remember that because of the origin translation the following rules apply:
X is actually up and down
Y is actually left and right
width and height are swapped
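To make those rules concrete, here is a rough sketch of the swap for a portrait photo whose underlying CGImage is stored landscape. The function and its inputs are illustrative assumptions, and metadataOutputRectConverted(fromLayerRect:) normally performs this translation for you:
// Illustrative sketch only: map a normalized (0...1) screen-space rect onto a
// pixel buffer that is stored rotated. x/y trade places, as do width/height;
// the exact origin math depends on which way the buffer is rotated.
func swappedCropRect(normalizedRect r: CGRect, cgImage: CGImage) -> CGRect {
    let w = CGFloat(cgImage.width)   // landscape buffer width
    let h = CGFloat(cgImage.height)  // landscape buffer height
    return CGRect(x: r.origin.y * w,
                  y: r.origin.x * h,
                  width: r.size.height * w,
                  height: r.size.width * h)
}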
Also be aware that the UIImageView content mode setting WILL impact how your image scales. You might want to use .scaleAspectFill and NOT .scaleAspectFit if you really want to see how your image looks in the UIView.
I used this code snippet to see what was behind the curtain:
// figure out how to cut/crop this
let realImageRect = AVMakeRect(aspectRatio: image.size, insideRect: (self.cameraPreview?.frame)!)
NSLog("real image rectangle = \(realImageRect.debugDescription)")
The 'cameraPreview' reference above is the control you're using for your AV Capture Session.
Good luck!

Related

Swift 5: Better way/approach to add image border on photo editing app?

In case the title doesn't make sense: I'm trying to make a photo editing app where users can add a border to their photo. For now, I'm testing a white border.
Here is a GIF sample of the app (see how slow the slider is; it's meant to be smooth like any other slider):
Gif sample
My approach was to render the white background at the image's size, and then render the image n% smaller on top of it, hence the border.
But I have run into a problem: when testing on my device (iPhone 7 Plus), the slider is laggy and slow, as if the function takes a long time to compute.
Here is the code for the function. It blends the background with the foreground, the background being a plain white colour.
blendImages is a function located in my adjustmentEngine class.
func blendImages(backgroundImg: UIImage, foregroundImg: UIImage) -> Data? {
    // size variables
    let contentSizeH = foregroundImg.size.height
    let contentSizeW = foregroundImg.size.width
    // the magic: how the image will scale in the view
    let topImageH = foregroundImg.size.height - (foregroundImg.size.height * imgSizeMultiplier)
    let topImageW = foregroundImg.size.width - (foregroundImg.size.width * imgSizeMultiplier)
    let bottomImage = backgroundImg
    let topImage = foregroundImg
    let imgView = UIImageView(frame: CGRect(x: 0, y: 0, width: contentSizeW, height: contentSizeH))
    let imgView2 = UIImageView(frame: CGRect(x: 0, y: 0, width: topImageW, height: topImageH))
    // - Set content mode to what you desire
    imgView.contentMode = .scaleAspectFill
    imgView2.contentMode = .scaleAspectFit
    // - Set images
    imgView.image = bottomImage
    imgView2.image = topImage
    imgView2.center = imgView.center
    // - Create UIView
    let contentView = UIView(frame: CGRect(x: 0, y: 0, width: contentSizeW, height: contentSizeH))
    contentView.addSubview(imgView)
    contentView.addSubview(imgView2)
    // - Set size
    let size = CGSize(width: contentSizeW, height: contentSizeH)
    UIGraphicsBeginImageContextWithOptions(size, true, 0)
    contentView.drawHierarchy(in: contentView.bounds, afterScreenUpdates: true)
    guard let i = UIGraphicsGetImageFromCurrentImageContext(),
          let data = i.jpegData(compressionQuality: 1.0)
    else { return nil }
    UIGraphicsEndImageContext()
    return data
}
Below is the code I call to render it into the UIImageView:
guard let image = image else { return }
let borderColor = UIColor.white.image()
self.adjustmentEngine.borderColor = borderColor
self.adjustmentEngine.image = image
guard let combinedImageData: Data = self.adjustmentEngine.blendImages(backgroundImg: borderColor, foregroundImg: image) else {return}
let combinedImage = UIImage(data: combinedImageData)
self.imageView.image = combinedImage
This function takes the image and blends it with a new background colour to form the border.
And finally, below is the code for the slider's didChange function.
@IBAction func sliderDidChange(_ sender: UISlider) {
    print(sender.value)
    let borderColor = adjustmentEngine.borderColor
    let image = adjustmentEngine.image
    adjustmentEngine.imgSizeMultiplier = CGFloat(sender.value)
    guard let combinedImageData: Data = self.adjustmentEngine.blendImages(backgroundImg: borderColor, foregroundImg: image) else { return }
    let combinedImage = UIImage(data: combinedImageData)
    self.imageView.image = combinedImage
}
So the question is: is there a better or more optimised way to do this? Or a better approach?
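One direction worth sketching (an assumption-laden sketch, not tested against this app): draw the composite directly instead of rendering UIImageViews through drawHierarchy(in:), and skip the JPEG round-trip while the slider is moving. Something like:
// Sketch, assuming the same `imgSizeMultiplier` property: draw the white
// background and the inset photo straight into one context. No intermediate
// views, no drawHierarchy(in:), no Data round-trip.
func blendImagesDirectly(foregroundImg: UIImage) -> UIImage {
    let size = foregroundImg.size
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        UIColor.white.setFill()
        ctx.fill(CGRect(origin: .zero, size: size))
        // Inset on all sides by half the multiplier, leaving a white border.
        let inset = CGRect(origin: .zero, size: size)
            .insetBy(dx: size.width * imgSizeMultiplier / 2,
                     dy: size.height * imgSizeMultiplier / 2)
        foregroundImg.draw(in: inset)
    }
}
Assigning the returned UIImage straight to imageView.image on each slider tick should be far cheaper than re-encoding to JPEG every time.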

Memory leak when resizing UIImage

I've read through multiple threads on the topic, but my problem still persists.
When I resize an image with the following code:
extension UIImage {
    func thumbnailWithMaxSize(image: UIImage, maxSize: CGFloat) -> UIImage {
        let width = image.size.width
        let height = image.size.height
        var sizeX: CGFloat = 0
        var sizeY: CGFloat = 0
        if width > height {
            sizeX = maxSize
            sizeY = maxSize * height/width
        } else {
            sizeY = maxSize
            sizeX = maxSize * width/height
        }
        UIGraphicsBeginImageContext(CGSize(width: sizeX, height: sizeY))
        let rect = CGRect(x: 0.0, y: 0.0, width: sizeX, height: sizeY)
        UIGraphicsBeginImageContext(rect.size)
        draw(in: rect)
        let thumbnail = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return thumbnail
    }
}

override func viewDidLoad() {
    super.viewDidLoad()
    let lionImage = UIImage(named: "lion.jpg")!
    var thumb = UIImage()
    autoreleasepool {
        thumb = lionImage.thumbnailWithMaxSize(image: lionImage, maxSize: 2000)
    }
    myImageView.image = thumb
}
...the memory is not released. So when I navigate through multiple ViewControllers (e.g. with a PageViewController) I end up getting memory warnings and the app eventually crashes.
I also tried to load the image via UIImage(contentsOfFile: path) without success.
Any suggestions?
I noticed your code begins two contexts but only ends one.
Here's my extension, which is basically the same as yours. Since I'm not having memory issues, it looks like that may be your problem.
extension UIImage {
    public func resizeToRect(_ size: CGSize) -> UIImage {
        UIGraphicsBeginImageContext(size)
        self.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage!
    }
}
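For comparison, here is what the asker's function might look like with the stray context removed (a sketch inside a UIImage extension, not code from the original answer):
// Sketch: same logic, but only one context is begun, and a defer guarantees
// the matching UIGraphicsEndImageContext() even on early exit.
func thumbnailWithMaxSize(maxSize: CGFloat) -> UIImage? {
    let target: CGSize
    if size.width > size.height {
        target = CGSize(width: maxSize, height: maxSize * size.height / size.width)
    } else {
        target = CGSize(width: maxSize * size.width / size.height, height: maxSize)
    }
    UIGraphicsBeginImageContext(target)
    defer { UIGraphicsEndImageContext() }
    draw(in: CGRect(origin: .zero, size: target))
    return UIGraphicsGetImageFromCurrentImageContext()
}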
The problem is this:
UIGraphicsGetImageFromCurrentImageContext()
returns an autoreleased UIImage. The autorelease pool holds on to this image until your code returns control to the run loop, which you do not do for a long time. To solve this problem, set thumb = nil after using it.
var thumb: UIImage? = nil
var myImage: UIImage? = nil
autoreleasepool {
    thumb = lionImage.thumbnailWithMaxSize(image: lionImage, maxSize: 2000)
    myImage = UIImage(data: UIImagePNGRepresentation(thumb!)!)
    thumb = nil
}
myImageView.image = myImage

UIImageJPEGRepresentation doubles image resolution

I am trying to save an image coming from the iPhone camera to a file. I use the following code:
try UIImageJPEGRepresentation(toWrite, 0.8)?.write(to: tempURL, options: NSData.WritingOptions.atomicWrite)
This results in a file with double the resolution of the toWrite UIImage. I confirmed in watch expressions that creating a new UIImage from UIImageJPEGRepresentation doubles its resolution:
-> toWrite.size CGSize (width = 3264, height = 2448)
-> UIImage(data: UIImageJPEGRepresentation(toWrite, 0.8)).size CGSize? (width = 6528, height = 4896)
Any idea why this would happen, and how to avoid it?
Thanks
Your initial image has scale factor = 2, but when you init your image from data you get an image with scale factor = 1. The way to solve it is to control the scale and init the image with the scale property:
@available(iOS 6.0, *)
public init?(data: Data, scale: CGFloat)
Playground code that demonstrates how you can set the scale:
extension UIImage {
    class func with(color: UIColor, size: CGSize) -> UIImage? {
        let rect = CGRect(origin: .zero, size: size)
        UIGraphicsBeginImageContextWithOptions(size, true, 2.0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.setFillColor(color.cgColor)
        context.fill(rect)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
}

let image = UIImage.with(color: UIColor.orange, size: CGSize(width: 100, height: 100))
if let image = image {
    let scale = image.scale
    if let data = UIImageJPEGRepresentation(image, 0.8) {
        if let newImage = UIImage(data: data, scale: scale) {
            debugPrint(newImage.size)
        }
    }
}
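Applied to the original toWrite image, the fix would look something like this (a sketch reusing the question's variable):
// Sketch: round-trip through JPEG data while preserving the original scale,
// so the reconstructed image reports the same point size as toWrite.
if let data = UIImageJPEGRepresentation(toWrite, 0.8),
   let roundTripped = UIImage(data: data, scale: toWrite.scale) {
    debugPrint(roundTripped.size) // should now match toWrite.size
}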

After cropping images in Swift I'm getting results tilted with 90 degrees - why?

I'm using a nice GitHub library for Swift, https://github.com/budidino/ShittyImageCrop, responsible for cropping the image.
I need aspect ratio 4:3, so I call this controller like this:
let shittyVC = ShittyImageCropVC(frame: (self.navigationController?.view.frame)!, image: image!, aspectWidth: 3, aspectHeight: 4)
self.navigationController?.present(shittyVC, animated: true, completion: nil)
Now, when I provide a horizontal image (wider than tall), the cropped result is fine - I see a photo with a 4:3 aspect ratio as output.
But when I provide a vertical image and try to crop it, I'm seeing tilted output. So for example, when the normal photo is like this:
vertical - the tilted one looks like this:
(sorry for the low res here). Why does it get shifted to one side?
I suspect the problem might be somewhere in the logic of the crop-button:
func tappedCrop() {
    print("tapped crop")
    var imgX: CGFloat = 0
    if scrollView.contentOffset.x > 0 {
        imgX = scrollView.contentOffset.x / scrollView.zoomScale
    }
    let gapToTheHole = view.frame.height/2 - holeRect.height/2
    var imgY: CGFloat = 0
    if scrollView.contentOffset.y + gapToTheHole > 0 {
        imgY = (scrollView.contentOffset.y + gapToTheHole) / scrollView.zoomScale
    }
    let imgW = holeRect.width / scrollView.zoomScale
    let imgH = holeRect.height / scrollView.zoomScale
    print("IMG x: \(imgX) y: \(imgY) w: \(imgW) h: \(imgH)")
    let cropRect = CGRect(x: imgX, y: imgY, width: imgW, height: imgH)
    let imageRef = img.cgImage!.cropping(to: cropRect)
    let croppedImage = UIImage(cgImage: imageRef!)
    let path: String = NSTemporaryDirectory() + "tempFile.jpeg"
    if let data = UIImageJPEGRepresentation(croppedImage, 0.95) { // 0.95 - compression quality
        try? data.write(to: URL(fileURLWithPath: path), options: [.atomic])
    }
    self.dismiss(animated: true, completion: nil)
}
ShittyImageCrop saves cropped images directly to your album, and I couldn't replicate your issue using vertical images.
I see you used UIImageJPEGRepresentation instead of the UIImageWriteToSavedPhotosAlbum call from ShittyImageCrop, and it seems other people also have problems with image rotation after using UIImageJPEGRepresentation.
Look up "iOS UIImagePickerController result image orientation after upload" and "iOS JPEG images rotated 90 degrees".
EDIT
try implementing fixOrientation() from https://stackoverflow.com/a/27775741/611879
add fixOrientation():
func fixOrientation(img: UIImage) -> UIImage {
    if img.imageOrientation == .up {
        return img
    }
    UIGraphicsBeginImageContextWithOptions(img.size, false, img.scale)
    let rect = CGRect(x: 0, y: 0, width: img.size.width, height: img.size.height)
    img.draw(in: rect)
    let normalizedImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return normalizedImage
}
and then call it before using UIImageJPEGRepresentation:
if let data = UIImageJPEGRepresentation(fixOrientation(img: croppedImage), 0.95) {
    try? data.write(to: URL(fileURLWithPath: path), options: [.atomic])
}
EDIT 2
please edit the init method of ShittyImageCrop by replacing img = image with:
if image.imageOrientation != .up {
    UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
    var rect = CGRect.zero
    rect.size = image.size
    image.draw(in: rect)
    img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
} else {
    img = image
}

Apply a mask to AVCaptureStillImageOutput

I'm working on a project where I'd like to mask a photo that the user has just taken with their camera. The mask is created at a specific aspect ratio to add letterboxes to a photo.
I can successfully create the image, create the mask, and save both to the camera roll, but I can't apply the mask to the image. Here's the code I have now:
func takePhoto() {
    dispatch_async(self.sessionQueue) { () -> Void in
        if let photoOutput = self.output as? AVCaptureStillImageOutput {
            photoOutput.captureStillImageAsynchronouslyFromConnection(self.outputConnection) { (imageDataSampleBuffer, err) -> Void in
                if err == nil {
                    let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
                    let image = UIImage(data: imageData)
                    if let _ = image {
                        let maskedImage = self.maskImage(image!)
                        print("masked image: \(maskedImage)")
                        self.savePhotoToLibrary(maskedImage)
                    }
                } else {
                    print("Error while capturing the image: \(err)")
                }
            }
        }
    }
}
func maskImage(image: UIImage) -> UIImage {
    let mask = createImageMask(image)
    let maskedImage = CGImageCreateWithMask(image.CGImage, mask!)
    return UIImage(CGImage: maskedImage!)
}
func createImageMask(image: UIImage) -> CGImage? {
    let width = image.size.width
    let height = width / CGFloat(store.state.aspect.rawValue)
    let x = CGFloat(0.0)
    let y = (image.size.height - height) / 2
    let maskRect = CGRectMake(0.0, 0.0, image.size.width, image.size.height)
    let maskContents = CGRectMake(x, y, width, height)
    var color = UIColor(white: 1.0, alpha: 0.0)
    UIGraphicsBeginImageContextWithOptions(CGSizeMake(maskRect.size.width, maskRect.size.height), false, 0.0)
    color.setFill()
    UIRectFill(maskRect)
    color = UIColor(white: 0.0, alpha: 1.0)
    color.setFill()
    UIRectFill(maskContents)
    let maskImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    print("mask: \(maskImage)")
    savePhotoToLibrary(image)
    savePhotoToLibrary(maskImage)
    let mask = CGImageMaskCreate(
        CGImageGetWidth(maskImage.CGImage),
        CGImageGetHeight(maskImage.CGImage),
        CGImageGetBitsPerComponent(maskImage.CGImage),
        CGImageGetBitsPerPixel(maskImage.CGImage),
        CGImageGetBytesPerRow(maskImage.CGImage),
        CGImageGetDataProvider(maskImage.CGImage),
        nil,
        false)
    return mask
}
From what I understand, CGImageCreateWithMask requires that the image to be masked has an alpha channel. I've tried everything I've seen here to add an alpha channel to the jpeg representation, but I'm not having any luck. Any help would be super.
This may be a bug, or maybe it's just a bit misleading. CGImageCreateWithMask() doesn't actually modify the image - it just associates the mask data with the image data, and uses the mask when you draw the image to a context (such as in a UIImageView), but not when you save the image to disk.
There are a couple approaches to generating a "rendered" version of the masked image, but if I understand your intent, you don't really want a "mask" ... you want a letter-boxed version of the image.
Here is one option that will effectively draw black bars on the top and bottom of your image (the bars / frame color is an optional parameter, if you don't want black). You can then save the modified image.
In your code above, replace
let maskedImage = self.maskImage(image!)
with
let height = image.size.width / CGFloat(store.state.aspect.rawValue)
let maskedImage = self.doLetterBox(image!, visibleHeight: height)
and add this function:
func doLetterBox(sourceImage: UIImage, visibleHeight: CGFloat, frameColor: UIColor? = UIColor.blackColor()) -> UIImage! {
    // local rect based on sourceImage size
    let imageRect: CGRect = CGRectMake(0.0, 0.0, sourceImage.size.width, sourceImage.size.height)
    // rect for "visible" part of letter-boxed image
    let clipRect: CGRect = CGRectMake(0.0, (imageRect.size.height - visibleHeight) / 2.0, imageRect.size.width, visibleHeight)
    // set up the image context, using sourceImage size
    UIGraphicsBeginImageContextWithOptions(imageRect.size, true, UIScreen.mainScreen().scale)
    let ctx: CGContextRef = UIGraphicsGetCurrentContext()!
    CGContextSaveGState(ctx)
    // fill new empty image with frameColor (defaults to black)
    CGContextSetFillColorWithColor(ctx, frameColor?.CGColor)
    CGContextFillRect(ctx, imageRect)
    // set clipping rectangle to allow drawing only in the desired area
    UIRectClip(clipRect)
    // draw the sourceImage at full image size (the letter-boxed portion will be clipped)
    sourceImage.drawInRect(imageRect)
    // get the new letter-boxed image
    let resultImage: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    // clean up
    CGContextRestoreGState(ctx)
    UIGraphicsEndImageContext()
    return resultImage
}
