Converting CIImage coordinates to CGRect from CIFaceFeature - iOS

I am trying to place a rectangle over a user's face, which I am recognizing with CIFaceFeature in real time over a full-screen (self.view.frame) video feed. However, the coordinates I am getting from CIFaceFeature.bounds are in a different coordinate system than the one used by views. I have tried converting these coordinates following this and other examples, but since I am running this atop a live video feed I don't have a still image I can pass into CIImage to ease the conversion of coordinates. Below is an example of my configuration; any idea how I can convert to a usable CGRect?
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let opaqueBuffer = Unmanaged<CVImageBuffer>.passUnretained(imageBuffer!).toOpaque()
    let pixelBuffer = Unmanaged<CVPixelBuffer>.fromOpaque(opaqueBuffer).takeUnretainedValue()
    let sourceImage = CIImage(cvPixelBuffer: pixelBuffer)
    let features = self.faceDetector!.features(in: sourceImage, options: options)
    if (features.count != 0) {
        let faceImage = sourceImage
        let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
        let faces = faceDetector?.features(in: faceImage) as! [CIFaceFeature]
        let transformScale = CGAffineTransform(scaleX: 1, y: -1)
        let transform = transformScale.translatedBy(x: 0, y: -faceImage.extent.height)
        for feature in features as! [CIFaceFeature] {
            faceBounds = feature.bounds
            var fb = faceBounds?.applying(transform)
            // imageViewSize is the screen frame
            let scale = min(imageViewSize.width / fb!.width,
                            imageViewSize.height / fb!.height)
            let dx = (imageViewSize.width - fb!.width * scale) / 2
            let dy = (imageViewSize.height - fb!.height * scale) / 2
            fb?.applying(CGAffineTransform(scaleX: scale, y: scale))
            fb?.origin.x += dx
            fb?.origin.y += dy
            realFaceRect = fb // COMPLETELY WRONG :'(
        }
    }
}

If anyone runs into the same problem, here's an easy solution:
let imgHeight = CGFloat(CVPixelBufferGetHeight(pixelBuffer))
let ratio = self.view.frame.width / imgHeight

func convertFrame(frame: CGRect, ratio: CGFloat) -> CGRect {
    // The pixel buffer is rotated 90° relative to the portrait UI,
    // so x/y and width/height are swapped.
    let x = frame.origin.y * ratio
    let y = frame.origin.x * ratio
    let width = frame.height * ratio
    let height = frame.width * ratio
    return CGRect(x: x, y: y, width: width, height: height)
}

func convertPoint(point: CGPoint, ratio: CGFloat) -> CGPoint {
    let x = point.y * ratio
    let y = point.x * ratio
    return CGPoint(x: x, y: y)
}
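A minimal usage sketch, assuming `feature` is a `CIFaceFeature` from the question's detection loop and `faceBoxView` is a hypothetical overlay view on `self.view`:

let faceRect = convertFrame(frame: feature.bounds, ratio: ratio)
faceBoxView.frame = faceRect // place the hypothetical overlay over the detected face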

Related

Want to change Text color according to Background image in SwiftUI [duplicate]

I'm attempting to overlay a chevron button that will allow the user to dismiss the current view. The colour of the chevron should be light on dark images, and dark on light images. I've attached a screenshot of what I'm describing.
However, there is a significant performance impact when trying to calculate the lightness/darkness of an image, which I'm doing like so (operating on `CGImage`):
var isDark: Bool {
    guard let imageData = dataProvider?.data else { return false }
    guard let ptr = CFDataGetBytePtr(imageData) else { return false }
    let length = CFDataGetLength(imageData)
    let threshold = Int(Double(width * height) * 0.45)
    var darkPixels = 0
    for i in stride(from: 0, to: length, by: 4) {
        let r = ptr[i]
        let g = ptr[i + 1]
        let b = ptr[i + 2]
        let luminance = (0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b))
        if luminance < 150 {
            darkPixels += 1
            if darkPixels > threshold {
                return true
            }
        }
    }
    return false
}
In addition, it doesn't do well when, for example, the area under the chevron is dark but the rest of the image is light.
I'd like to calculate it for just a small subsection of the image, since the chevron is very small. I tried cropping the image using CGImage's cropping(to rect: CGRect), but the challenge is that the image is set to aspect fill, meaning the top of the UIImageView's frame isn't the top of the UIImage (e.g. the image might be zoomed in and centred). Is there a way that I can isolate just the part of the image that appears below the chevron's frame, after the image has been adjusted by the aspect fill?
Edit
I was able to achieve this thanks to the first link in the accepted answer. I created a series of extensions that I think should work for situations other than mine.
extension UIImage {
    var isDark: Bool {
        return cgImage?.isDark ?? false
    }
}

extension CGImage {
    var isDark: Bool {
        guard let imageData = dataProvider?.data else { return false }
        guard let ptr = CFDataGetBytePtr(imageData) else { return false }
        let length = CFDataGetLength(imageData)
        let threshold = Int(Double(width * height) * 0.45)
        var darkPixels = 0
        for i in stride(from: 0, to: length, by: 4) {
            let r = ptr[i]
            let g = ptr[i + 1]
            let b = ptr[i + 2]
            let luminance = (0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b))
            if luminance < 150 {
                darkPixels += 1
                if darkPixels > threshold {
                    return true
                }
            }
        }
        return false
    }

    func cropping(to rect: CGRect, scale: CGFloat) -> CGImage? {
        let scaledRect = CGRect(x: rect.minX * scale, y: rect.minY * scale, width: rect.width * scale, height: rect.height * scale)
        return self.cropping(to: scaledRect)
    }
}

extension UIImageView {
    func hasDarkImage(at subsection: CGRect) -> Bool {
        guard let image = image, let aspectSize = aspectFillSize() else { return false }
        let scale = image.size.width / frame.size.width
        let cropRect = CGRect(x: (aspectSize.width - frame.width) / 2,
                              y: (aspectSize.height - frame.height) / 2,
                              width: aspectSize.width,
                              height: frame.height)
        let croppedImage = image.cgImage?
            .cropping(to: cropRect, scale: scale)?
            .cropping(to: subsection, scale: scale)
        return croppedImage?.isDark ?? false
    }

    private func aspectFillSize() -> CGSize? {
        guard let image = image else { return nil }
        var aspectFillSize = CGSize(width: frame.width, height: frame.height)
        let widthScale = frame.width / image.size.width
        let heightScale = frame.height / image.size.height
        if heightScale > widthScale {
            aspectFillSize.width = heightScale * image.size.width
        } else if widthScale > heightScale {
            aspectFillSize.height = widthScale * image.size.height
        }
        return aspectFillSize
    }
}
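Usage is then a one-liner, as a sketch (the chevron button's name is hypothetical, and its frame may first need converting into the image view's coordinate space, as noted in the answer below):

let needsLightChevron = imageView.hasDarkImage(at: chevronButton.frame)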
There are a couple of options here for finding the size of your image once it's been fitted to the view: How to know the image size after applying aspect fit for the image in an UIImageView
Once you've got that, you can figure out where the chevron lies (you may need to convert its frame first: https://developer.apple.com/documentation/uikit/uiview/1622498-convert)
If the performance is still lacking, I'd look into using Core Image to perform the calculations: https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage
There are a few ways of doing it with Core Image, but getting the average is the simplest.
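For reference, a minimal sketch of the CIAreaAverage approach from that link, reusing the 150 luminance threshold above. The function name is mine, and `region` is assumed to be in Core Image coordinates (origin at the bottom-left):

import CoreImage

func regionIsDark(in cgImage: CGImage, region: CGRect) -> Bool {
    let inputImage = CIImage(cgImage: cgImage)
    // CIAreaAverage reduces the given extent to a single average pixel.
    let extent = CIVector(x: region.origin.x, y: region.origin.y,
                          z: region.size.width, w: region.size.height)
    guard let filter = CIFilter(name: "CIAreaAverage",
                                parameters: [kCIInputImageKey: inputImage,
                                             kCIInputExtentKey: extent]),
          let output = filter.outputImage else { return false }
    // Render the 1x1 averaged result into a 4-byte RGBA buffer.
    var pixel = [UInt8](repeating: 0, count: 4)
    CIContext().render(output, toBitmap: &pixel, rowBytes: 4,
                       bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                       format: .RGBA8, colorSpace: nil)
    let luminance = 0.299 * Double(pixel[0]) + 0.587 * Double(pixel[1]) + 0.114 * Double(pixel[2])
    return luminance < 150
}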

How to get the rect of an image to crop it

I am building an app where you can crop multiple images. I am using this code directly from Apple:
func cropImage(_ inputImage: UIImage, toRect cropRect: CGRect, viewWidth: CGFloat, viewHeight: CGFloat) -> UIImage? {
    let imageViewScale = max(inputImage.size.width / viewWidth,
                             inputImage.size.height / viewHeight)

    // Scale cropRect to handle images larger than shown-on-screen size
    let cropZone = CGRect(x: cropRect.origin.x * imageViewScale,
                          y: cropRect.origin.y * imageViewScale,
                          width: cropRect.size.width * imageViewScale,
                          height: cropRect.size.height * imageViewScale)

    // Perform cropping in Core Graphics
    guard let cutImageRef: CGImage = inputImage.cgImage?.cropping(to: cropZone) else {
        return nil
    }

    // Return image to UIImage
    let croppedImage: UIImage = UIImage(cgImage: cutImageRef)
    return croppedImage
}
To crop the image I need a cropRect. I also found a solution on the Internet that I implemented in my code:
func realImageRect() -> CGRect {
    let imageViewSize = self.frame.size
    let imgSize = self.image?.size
    guard let imageSize = imgSize else {
        return CGRect.zero
    }
    let scaleWidth = imageViewSize.width / imageSize.width
    let scaleHeight = imageViewSize.height / imageSize.height
    let aspect = fmin(scaleWidth, scaleHeight)
    var imageRect = CGRect(x: 0, y: 0, width: imageSize.width * aspect, height: imageSize.height * aspect)
    // Center image
    imageRect.origin.x = (imageViewSize.width - imageRect.size.width) / 2
    imageRect.origin.y = (imageViewSize.height - imageRect.size.height) / 2
    // Add imageView offset
    imageRect.origin.x += self.frame.origin.x
    imageRect.origin.y += self.frame.origin.y
    return imageRect
}
As I already said, the app can crop multiple images. The images are stored in an array. I also have a crop view, which you can drag around the image with a pan gesture:
for i in 0..<imageContentView.count {
    let cropRect = CGRect(x: croppedViewArray[i].frame.origin.x - imageContentView[i].realImageRect().origin.x,
                          y: croppedViewArray[i].frame.origin.y - imageContentView[i].realImageRect().origin.y,
                          width: croppedViewArray[i].frame.width,
                          height: croppedViewArray[i].frame.height)
    print("cropRect", cropRect)
    let croppedImage = ImageCrophandler.sharedInstance.cropImage(imageContentView[i].image!,
                                                                 toRect: cropRect,
                                                                 viewWidth: imageContentView[i].frame.width,
                                                                 viewHeight: imageContentView[i].frame.height)
    print("cheight", croppedImage!.size.height, "cwidth", croppedImage!.size.width)
    arrayOfCropedImages.append(croppedImage!)
}
The problem I have is that every cropped image comes out with a different width and height, even though the images should all be the same size.
I figured out that the resulting size depends on where the crop view is positioned.
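No answer is shown here, but as a hedged workaround sketch: cropImage multiplies the rect by a per-image imageViewScale, so a fixed-size crop box yields a different pixel size for each image; redrawing every cropped result at one common target size evens them out. The helper resized(_:to:) and targetSize are names I'm introducing, not part of the question's code:

func resized(_ image: UIImage, to targetSize: CGSize) -> UIImage {
    // Redraw each crop at one fixed point size so every result matches.
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}

In the loop above, this would replace the direct append, e.g. arrayOfCropedImages.append(resized(croppedImage!, to: targetSize)).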

iOS Swift UIImage crop to given rect size

I have tried to crop a UIImage to a given rect, but it doesn't crop to the expected size. The rect comes from a scroll view. Here is the reference code:
guard let view = imageCrop else { print("Error"); return }
let normalizedX = view.contentOffset.x / view.contentSize.width
let normalizedY = view.contentOffset.y / view.contentSize.height
let normalizedWidth = view.frame.width / view.contentSize.width
let normalizedHeight = view.frame.height / view.contentSize.height
let cropRect = CGRect(x: normalizedX, y: normalizedY,
                      width: normalizedWidth, height: normalizedHeight)
cropImage(imageToCrop: imageCrop.image!, toRect: cropRect)

func cropImage(imageToCrop: UIImage, toRect rect: CGRect) -> UIImage {
    let imageRef: CGImage = imageToCrop.cgImage!.cropping(to: rect)!
    let cropped: UIImage = UIImage(cgImage: imageRef)
    return cropped
}
Solved.
var cropArea: CGRect {
    get {
        let factor = imageCrop.image!.size.width / view.frame.width
        let scale = 1 / imageCrop.zoomScale
        let imageFrame = imageCrop.frame
        let x = (imageCrop.contentOffset.x + imageCrop.frame.origin.x - imageFrame.origin.x) * scale * factor
        let y = (imageCrop.contentOffset.y + imageCrop.frame.origin.y - imageFrame.origin.y) * scale * factor
        let width = imageCrop.frame.size.width * scale * factor
        let height = imageCrop.frame.size.height * scale * factor
        return CGRect(x: x, y: y, width: width, height: height)
    }
}
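With that computed property, the crop call from the question becomes, as a quick sketch:

let croppedImage = cropImage(imageToCrop: imageCrop.image!, toRect: cropArea)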

How to Crop UIImage considering screen scale

I'm trying to crop a UIImage using a crop box UIView that a user can drag around anywhere in the image view to crop. The logic I'm using to compute the crop rect is as follows:
extension UIImageView {
    public func computeCropRect(for sourceFrame: CGRect) -> CGRect {
        let widthScale = bounds.size.width / image!.size.width
        let heightScale = bounds.size.height / image!.size.height
        var x: CGFloat = 0
        var y: CGFloat = 0
        var width: CGFloat = 0
        var height: CGFloat = 0
        var offSet: CGFloat = 0
        if widthScale < heightScale {
            offSet = (bounds.size.height - (image!.size.height * widthScale)) / 2
            x = sourceFrame.origin.x / widthScale
            y = (sourceFrame.origin.y - offSet) / widthScale
            width = sourceFrame.size.width / widthScale
            height = sourceFrame.size.height / widthScale
        } else {
            offSet = (bounds.size.width - (image!.size.width * heightScale)) / 2
            x = (sourceFrame.origin.x - offSet) / heightScale
            y = sourceFrame.origin.y / heightScale
            width = sourceFrame.size.width / heightScale
            height = sourceFrame.size.height / heightScale
        }
        return CGRect(x: x, y: y, width: width, height: height)
    }
}
The crop box can be positioned anywhere within the image view's frame by dragging it.
This crop code works just fine until I combine this with another feature I'm trying to support which is the ability to let the user draw using their finger inside of the UIImageView. The code for that looks like this:
override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let touchPoint = touches.first {
        let currentPoint = touchPoint.location(in: self)
        UIGraphicsBeginImageContextWithOptions(frame.size, false, UIScreen.main.scale)
        if let context = UIGraphicsGetCurrentContext() {
            image?.draw(in: imageEffectsService.computeAspectFitFrameFor(containerSize: frame.size, imageSize: image!.size), blendMode: .normal, alpha: CGFloat(imageOpacity))
            drawLineAt(startPoint: lastTouchPoint, endPoint: currentPoint, currentContext: context, strokeColor: drawColor)
            UIGraphicsEndImageContext()
        }
    }
}

private func drawLineAt(startPoint: CGPoint, endPoint: CGPoint, currentContext: CGContext, strokeColor: UIColor) {
    currentContext.beginPath()
    currentContext.setLineCap(CGLineCap.round)
    currentContext.setLineWidth(brushSize)
    currentContext.setStrokeColor(strokeColor.cgColor)
    currentContext.move(to: startPoint)
    currentContext.addLine(to: endPoint)
    currentContext.strokePath()
    image = UIGraphicsGetImageFromCurrentImageContext()
}
The crop method loses accuracy once I apply a drawing particularly because of this line:
UIGraphicsBeginImageContextWithOptions(frame.size, false, UIScreen.main.scale)
If instead I use:
UIGraphicsBeginImageContext(frame.size)
My crop code will be accurate but the drawing fidelity will look grainy and low quality because I am not accounting for retina screen devices. My question is how would I modify my crop function to account for the UIScreen.main.scale?
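No accepted answer appears here, but a hedged sketch of one approach: UIImage.scale records the point-to-pixel factor that UIGraphicsBeginImageContextWithOptions bakes into the redrawn image, so the rect from computeCropRect can be multiplied by it just before cropping the CGImage. The helper name below is mine, not part of the question's code:

func cropInPixels(_ image: UIImage, to rect: CGRect) -> UIImage? {
    // computeCropRect works in points; CGImage.cropping works in pixels,
    // so scale the rect by the image's point-to-pixel factor.
    let s = image.scale
    let pixelRect = CGRect(x: rect.origin.x * s, y: rect.origin.y * s,
                           width: rect.width * s, height: rect.height * s)
    guard let cg = image.cgImage?.cropping(to: pixelRect) else { return nil }
    return UIImage(cgImage: cg, scale: s, orientation: image.imageOrientation)
}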

Divide image into array of images with Swift

I'm trying to divide an image to create 16 images out of it (in a matrix). I'm using Swift 2.1. Here's the code:
let cellSize = Int(originalImage.size.height) / 4
for i in 0..<4 {
    for p in 0..<4 {
        let tmpImgRef: CGImage = originalImage.CGImage!
        let cell: CGImage = CGImageCreateWithImageInRect(tmpImgRef, CGRectMake(CGFloat(i * cellSize), CGFloat(p * cellSize), CGFloat(cellSize), CGFloat(cellSize)))!
        gameCells.append(cell)
    }
}
This works, but the images it returns are only part of the original image. I've been searching, and I know that's because the CGImage has a different size than the UIImage, but I don't know how to fix it. If I could compute cellSize from the height of the CGImage instead of the UIImage, I suppose that would fix it, but I can't get the CGImage height.
Thanks for the help!
The fundamental issue is the difference between how UIImage and CGImage interpret their size. UIImage uses "points" and CGImage uses pixels. And the conversion factor is the scale.
For example, if a UIImage has a scale of 3, then for every "point" in any given direction in the UIImage, there are three pixels in that direction in the underlying CGImage. So for a UIImage that has a scale of 3 and a size of 100x100 points, the underlying CGImage has a size of 300x300 pixels.
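A quick sketch of that relationship (the asset name is hypothetical; the printed values assume the 3x example above):

let image = UIImage(named: "photo")!
print(image.size)           // e.g. (100.0, 100.0) — points
print(image.scale)          // e.g. 3.0
print(image.cgImage!.width) // e.g. 300 — pixels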
To return a simple array of images sliced by n x n (e.g. if n is three, there will be nine images in the array), you can do something like the following in Swift 3:
/// Slice image into array of tiles
///
/// - Parameters:
///   - image: The original image.
///   - howMany: How many rows/columns to slice the image up into.
///
/// - Returns: An array of images.
///
/// - Note: The order of the images that are returned will correspond
///         to the `imageOrientation` of the image. If the image's
///         `imageOrientation` is not `.up`, take care interpreting
///         the order in which the tiled images are returned.

func slice(image: UIImage, into howMany: Int) -> [UIImage] {
    let width: CGFloat
    let height: CGFloat

    switch image.imageOrientation {
    case .left, .leftMirrored, .right, .rightMirrored:
        width = image.size.height
        height = image.size.width
    default:
        width = image.size.width
        height = image.size.height
    }

    let tileWidth = Int(width / CGFloat(howMany))
    let tileHeight = Int(height / CGFloat(howMany))
    let scale = Int(image.scale)
    var images = [UIImage]()

    let cgImage = image.cgImage!

    var adjustedHeight = tileHeight

    var y = 0
    for row in 0 ..< howMany {
        if row == (howMany - 1) {
            adjustedHeight = Int(height) - y
        }
        var adjustedWidth = tileWidth
        var x = 0
        for column in 0 ..< howMany {
            if column == (howMany - 1) {
                adjustedWidth = Int(width) - x
            }
            let origin = CGPoint(x: x * scale, y: y * scale)
            let size = CGSize(width: adjustedWidth * scale, height: adjustedHeight * scale)
            let tileCgImage = cgImage.cropping(to: CGRect(origin: origin, size: size))!
            images.append(UIImage(cgImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))
            x += tileWidth
        }
        y += tileHeight
    }
    return images
}
Or, in Swift 2.3:
func slice(image image: UIImage, into howMany: Int) -> [UIImage] {
    let width: CGFloat
    let height: CGFloat

    switch image.imageOrientation {
    case .Left, .LeftMirrored, .Right, .RightMirrored:
        width = image.size.height
        height = image.size.width
    default:
        width = image.size.width
        height = image.size.height
    }

    let tileWidth = Int(width / CGFloat(howMany))
    let tileHeight = Int(height / CGFloat(howMany))
    let scale = Int(image.scale)
    var images = [UIImage]()

    let cgImage = image.CGImage!

    var adjustedHeight = tileHeight

    var y = 0
    for row in 0 ..< howMany {
        if row == (howMany - 1) {
            adjustedHeight = Int(height) - y
        }
        var adjustedWidth = tileWidth
        var x = 0
        for column in 0 ..< howMany {
            if column == (howMany - 1) {
                adjustedWidth = Int(width) - x
            }
            let origin = CGPoint(x: x * scale, y: y * scale)
            let size = CGSize(width: adjustedWidth * scale, height: adjustedHeight * scale)
            let tileCgImage = CGImageCreateWithImageInRect(cgImage, CGRect(origin: origin, size: size))!
            images.append(UIImage(CGImage: tileCgImage, scale: image.scale, orientation: image.imageOrientation))
            x += tileWidth
        }
        y += tileHeight
    }
    return images
}
This makes sure that the resulting images are in the correct scale (which is why the above strides through the image in "points" and then multiplies by the scale to get the correct pixels in the CGImage). Also, if the dimensions (measured in points) are not evenly divisible by n, it makes up the difference in the last image of each row or column. E.g., when you make three tiles from an image with a height of 736 points, the first two will be 245 points tall, but the last will be 246.
There is one exception that this does not (entirely) handle gracefully: if the UIImage has an imageOrientation other than .up, the order in which the images are retrieved corresponds to that orientation, not to the upper left corner of the image as you view it.
You can split your image in two parts vertically and horizontally and sub-split the result as needed:
extension UIImage {
    var topHalf: UIImage? {
        guard let cgImage = cgImage, let image = cgImage.cropping(to: CGRect(origin: .zero, size: CGSize(width: size.width, height: size.height/2))) else { return nil }
        return UIImage(cgImage: image, scale: scale, orientation: imageOrientation)
    }
    var bottomHalf: UIImage? {
        guard let cgImage = cgImage, let image = cgImage.cropping(to: CGRect(origin: CGPoint(x: 0, y: CGFloat(Int(size.height) - Int(size.height/2))), size: CGSize(width: size.width, height: CGFloat(Int(size.height) - Int(size.height/2))))) else { return nil }
        return UIImage(cgImage: image, scale: scale, orientation: imageOrientation)
    }
    var leftHalf: UIImage? {
        guard let cgImage = cgImage, let image = cgImage.cropping(to: CGRect(origin: .zero, size: CGSize(width: size.width/2, height: size.height))) else { return nil }
        return UIImage(cgImage: image, scale: scale, orientation: imageOrientation)
    }
    var rightHalf: UIImage? {
        guard let cgImage = cgImage, let image = cgImage.cropping(to: CGRect(origin: CGPoint(x: CGFloat(Int(size.width) - Int(size.width/2)), y: 0), size: CGSize(width: CGFloat(Int(size.width) - Int(size.width/2)), height: size.height))) else { return nil }
        return UIImage(cgImage: image, scale: scale, orientation: imageOrientation)
    }
    var splitedInFourParts: [UIImage] {
        guard case let topHalf = topHalf,
              case let bottomHalf = bottomHalf,
              let topLeft = topHalf?.leftHalf,
              let topRight = topHalf?.rightHalf,
              let bottomLeft = bottomHalf?.leftHalf,
              let bottomRight = bottomHalf?.rightHalf else { return [] }
        return [topLeft, topRight, bottomLeft, bottomRight]
    }
    var splitedInSixteenParts: [UIImage] {
        var array = splitedInFourParts.flatMap { $0.splitedInFourParts }
        // if you need it in reading order you need to swap some image positions
        swap(&array[2], &array[4])
        swap(&array[3], &array[5])
        swap(&array[10], &array[12])
        swap(&array[11], &array[13])
        return array
    }
}
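Usage, as a sketch (assuming image is a UIImage):

let tiles = image.splitedInSixteenParts // 16 tiles, in reading order after the swaps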
Splitting the image by columns and rows:
extension UIImage {
    func matrix(_ rows: Int, _ columns: Int) -> [UIImage] {
        let y = (size.height / CGFloat(rows)).rounded()
        let x = (size.width / CGFloat(columns)).rounded()
        var images: [UIImage] = []
        images.reserveCapacity(rows * columns)
        guard let cgImage = cgImage else { return [] }
        (0..<rows).forEach { row in
            (0..<columns).forEach { column in
                var width = Int(x)
                var height = Int(y)
                if row == rows - 1 && size.height.truncatingRemainder(dividingBy: CGFloat(rows)) != 0 {
                    height = Int(size.height - size.height / CGFloat(rows) * (CGFloat(rows) - 1))
                }
                if column == columns - 1 && size.width.truncatingRemainder(dividingBy: CGFloat(columns)) != 0 {
                    width = Int(size.width - (size.width / CGFloat(columns) * (CGFloat(columns) - 1)))
                }
                // note: the row offset must use the tile height (y), not the tile width (x)
                if let image = cgImage.cropping(to: CGRect(origin: CGPoint(x: column * Int(x), y: row * Int(y)), size: CGSize(width: width, height: height))) {
                    images.append(UIImage(cgImage: image, scale: scale, orientation: imageOrientation))
                }
            }
        }
        return images
    }
}
let myPicture = UIImage(data: try! Data(contentsOf: URL(string:"https://i.stack.imgur.com/Xs4RX.jpg")!))!
let images = myPicture.matrix(4, 4)
images.count // 16
I've used this to slice an image into a matrix. The matrix is represented as a 1D array.
func snapshotImage(image: UIImage, rect: CGRect) -> UIImage {
    var imageRect: CGRect! = rect
    if image.scale > 1.0 {
        // Convert the point-based rect to pixels for CGImage.cropping.
        imageRect = CGRect(origin: CGPoint(x: rect.origin.x * image.scale, y: rect.origin.y * image.scale),
                           size: CGSize(width: rect.size.width * image.scale, height: rect.size.height * image.scale))
    }
    let imageRef: CGImage = image.cgImage!.cropping(to: imageRect)!
    let result: UIImage = UIImage(cgImage: imageRef, scale: image.scale, orientation: image.imageOrientation)
    return result
}

func sliceImage(image: UIImage, size: CGSize) -> [UIImage] {
    var slices: [UIImage] = [UIImage]()
    var rect = CGRect(x: 0.0, y: 0.0, width: size.width, height: size.height)
    var y: Float = 0.0
    // Use ceil so a partial row/column at the edge still yields a (clamped) slice,
    // and a half-open range so an exact division doesn't produce an empty extra slice.
    let columns: Int = Int(ceil(image.size.width / size.width))
    let rows: Int = Int(ceil(image.size.height / size.height))
    for _ in 0..<rows {
        var x: Float = 0.0
        for _ in 0..<columns {
            rect.origin.x = CGFloat(x)
            slices.append(self.snapshotImage(image: image, rect: rect))
            x += Float(size.width)
        }
        y += Float(size.height)
        rect.origin.y = CGFloat(y)
    }
    return slices
}
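A usage sketch, assuming photo is a UIImage available on the same type that declares these methods:

let tiles = sliceImage(image: photo, size: CGSize(width: 100, height: 100))
// slices come back row by row, left to right, clamped at the right/bottom edges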
