Only detect in a section of the camera preview layer, iOS, Swift

I am trying to set up a detection zone in the live preview on my camera preview layer.
Is this possible? Say there is a live feed with face detection on: as you look around, a box should only be drawn around a face inside a certain area, for example a rectangle in the centre of the screen, and all other faces in the preview that fall outside of that rectangle don't get detected.
I'm using Vision, iOS, Swift.

I figured this out by adding a guard before the CALayer is added.
Before viewDidLoad:
@IBOutlet weak var scanAreaImage: UIImageView!
var regionOfInterest: CGRect!
In viewDidLoad (scanAreaImage is an image view that I put in via the storyboard; its frame represents the area I wanted detection limited to):
let someRect: CGRect = scanAreaImage.frame
regionOfInterest = someRect
Then, in the Vision text detection section:
func highlightLetters(box: VNRectangleObservation) {
    let xCord = box.topLeft.x * (cameraPreviewlayer?.frame.size.width)!
    let yCord = (1 - box.topLeft.y) * (cameraPreviewlayer?.frame.size.height)!
    let width = (box.topRight.x - box.bottomLeft.x) * (cameraPreviewlayer?.frame.size.width)!
    let height = (box.topLeft.y - box.bottomLeft.y) * (cameraPreviewlayer?.frame.size.height)!

    // This is the section I added for the region-of-interest detection zone.
    //////////////////////////////////////////////
    let wordRect = CGRect(x: xCord, y: yCord, width: width, height: height)
    // Only draw a box if the origin of the word box is within the regionOfInterest
    // (regionOfInterest being the CGRect you created earlier).
    guard regionOfInterest.contains(wordRect.origin) else { return }
    //////////////////////////////////////////////

    let outline = CALayer()
    outline.frame = CGRect(x: xCord, y: yCord, width: width, height: height)
    outline.borderWidth = 1.0
    if textColour == 1 {
        outline.borderColor = UIColor.blue.cgColor
    } else {
        outline.borderColor = UIColor.clear.cgColor
    }
    cameraPreviewlayer?.addSublayer(outline)
}
This will only show outlines for things inside the rectangle you created in the storyboard (mine being scanAreaImage).
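(A possible variation, my addition rather than part of the original answer: if you want to require the entire word box to fall inside the zone, not just its origin, CGRect's contains(_:) also accepts a rect.)
guard regionOfInterest.contains(wordRect) else { return } // whole box must be inside the zone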
I hope this helps someone.

Related

Swift iOS - How to extract a separate view or image from within its own UIImageView's bounds? [duplicate]

I'm trying to crop a sub-image of an image view using an overlay UIView that can be positioned anywhere in the UIImageView. I'm borrowing a solution from a similar post on how to solve this when the UIImageView content mode is Aspect Fit. That proposed solution is:
func computeCropRect(for sourceFrame: CGRect) -> CGRect {
    let widthScale = bounds.size.width / image!.size.width
    let heightScale = bounds.size.height / image!.size.height
    var x: CGFloat = 0
    var y: CGFloat = 0
    var width: CGFloat = 0
    var height: CGFloat = 0
    var offSet: CGFloat = 0
    if widthScale < heightScale {
        offSet = (bounds.size.height - (image!.size.height * widthScale)) / 2
        x = sourceFrame.origin.x / widthScale
        y = (sourceFrame.origin.y - offSet) / widthScale
        width = sourceFrame.size.width / widthScale
        height = sourceFrame.size.height / widthScale
    } else {
        offSet = (bounds.size.width - (image!.size.width * heightScale)) / 2
        x = (sourceFrame.origin.x - offSet) / heightScale
        y = sourceFrame.origin.y / heightScale
        width = sourceFrame.size.width / heightScale
        height = sourceFrame.size.height / heightScale
    }
    return CGRect(x: x, y: y, width: width, height: height)
}
The problem is that using this solution when the image view is Aspect Fill causes the cropped segment not to line up exactly with where the overlay UIView was positioned. I'm not quite sure how to adapt this code to accommodate Aspect Fill, or how to reposition my overlay UIView so that it lines up 1:1 with the segment I'm trying to crop.
UPDATE: Solved using Matt's answer below.
class ViewController: UIViewController {
    @IBOutlet weak var catImageView: UIImageView!
    private var cropView: CropView!

    override func viewDidLoad() {
        super.viewDidLoad()
        cropView = CropView(frame: CGRect(x: 0, y: 0, width: 45, height: 45))
        catImageView.image = UIImage(named: "cat")
        catImageView.clipsToBounds = true
        catImageView.layer.borderColor = UIColor.purple.cgColor
        catImageView.layer.borderWidth = 2.0
        catImageView.backgroundColor = UIColor.yellow
        catImageView.addSubview(cropView)
        let imageSize = catImageView.image!.size
        let imageViewSize = catImageView.bounds.size
        var scale: CGFloat = imageViewSize.width / imageSize.width
        if imageSize.height * scale < imageViewSize.height {
            scale = imageViewSize.height / imageSize.height
        }
        let croppedImageSize = CGSize(width: imageViewSize.width / scale, height: imageViewSize.height / scale)
        let croppedImrect =
            CGRect(origin: CGPoint(x: (imageSize.width - croppedImageSize.width) / 2.0,
                                   y: (imageSize.height - croppedImageSize.height) / 2.0),
                   size: croppedImageSize)
        let renderer = UIGraphicsImageRenderer(size: croppedImageSize)
        let _ = renderer.image { _ in
            catImageView.image!.draw(at: CGPoint(x: -croppedImrect.origin.x, y: -croppedImrect.origin.y))
        }
    }

    @IBAction func performCrop(_ sender: Any) {
        let cropFrame = catImageView.computeCropRect(for: cropView.frame)
        if let imageRef = catImageView.image?.cgImage?.cropping(to: cropFrame) {
            catImageView.image = UIImage(cgImage: imageRef)
        }
    }

    @IBAction func resetCrop(_ sender: Any) {
        catImageView.image = UIImage(named: "cat")
    }
}
The Final Result
Let's divide the problem into two parts:
Given the size of a UIImageView and the size of its UIImage, if the UIImageView's content mode is Aspect Fill, what is the part of the UIImage that fits into the UIImageView? We need, in effect, to crop the original image to match what the UIImageView is actually displaying.
Given an arbitrary rect within the UIImageView, what part of the cropped image (derived in part 1) does it correspond to?
The first part is the interesting part, so let's try it. (The second part will then turn out to be trivial.)
Here's the original image I'll use:
https://static1.squarespace.com/static/54e8ba93e4b07c3f655b452e/t/56c2a04520c64707756f4267/1455596221531/
That image is 1000x611. Here's what it looks like scaled down (but keep in mind that I'm going to be using the original image throughout):
My image view, however, will be 139x182, and is set to Aspect Fill. When it displays the image, it looks like this:
The problem we want to solve is: what part of the original image is being displayed in my image view, if my image view is set to Aspect Fill?
Here we go. Assume that iv is the image view:
let imsize = iv.image!.size
let ivsize = iv.bounds.size
var scale: CGFloat = ivsize.width / imsize.width
if imsize.height * scale < ivsize.height {
    scale = ivsize.height / imsize.height
}
let croppedImsize = CGSize(width: ivsize.width / scale, height: ivsize.height / scale)
let croppedImrect =
    CGRect(origin: CGPoint(x: (imsize.width - croppedImsize.width) / 2.0,
                           y: (imsize.height - croppedImsize.height) / 2.0),
           size: croppedImsize)
So now we have solved the problem: croppedImrect is the region of the original image that is showing in the image view. Let's proceed to use our knowledge, by actually cropping the image to a new image matching what is shown in the image view:
let r = UIGraphicsImageRenderer(size: croppedImsize)
let croppedIm = r.image { _ in
    iv.image!.draw(at: CGPoint(x: -croppedImrect.origin.x, y: -croppedImrect.origin.y))
}
The result is this image (ignore the gray border):
But lo and behold, that is the correct answer! I have extracted from the original image exactly the region portrayed in the interior of the image view.
So now you have all the information you need. croppedIm is the UIImage actually displayed in the clipped area of the image view. scale is the scale between the image view and that image. Therefore, you can easily solve the problem you originally proposed! Given any rectangle imposed upon the image view, in the image view's bounds coordinates, you simply apply the scale (i.e. divide all four of its attributes by scale) — and now you have the same rectangle as a portion of croppedIm.
(Observe that we didn't really need to crop the original image to get croppedIm; it was sufficient, in reality, to know how to perform that crop. The important information is the scale along with the origin of croppedImrect; given that information, you can take the rectangle imposed upon the image view, scale it, and offset it to get the desired rectangle of the original image.)
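To make part 2 concrete, here is a minimal sketch of that conversion (my own illustration of the arithmetic above; overlayRect stands for a hypothetical rect imposed on the image view, in its bounds coordinates):
let overlayRect = CGRect(x: 20, y: 30, width: 45, height: 45) // hypothetical overlay
// Divide all four attributes by scale to land in croppedIm's coordinates...
let inCroppedIm = CGRect(x: overlayRect.origin.x / scale,
                         y: overlayRect.origin.y / scale,
                         width: overlayRect.size.width / scale,
                         height: overlayRect.size.height / scale)
// ...then offset by croppedImrect's origin to land in the original image's coordinates.
let inOriginal = inCroppedIm.offsetBy(dx: croppedImrect.origin.x,
                                      dy: croppedImrect.origin.y)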
EDIT I added a little screencast just to show that my approach works as a proof of concept:
EDIT Also created a downloadable example project here:
https://github.com/mattneub/Programming-iOS-Book-Examples/blob/39cc800d18aa484d17c26ffcbab8bbe51c614573/bk2ch02p058cropImageView/Cropper/ViewController.swift
But note that I can't guarantee that URL will last forever, so please read the discussion above to understand the approach used.
Matt answered the question perfectly. I was creating a full-screen camera and needed to make the final output match the full-screen preview. Here is a compact Swift 5 extension of Matt's overall answer for easy use by others. I recommend reading Matt's answer, as it explains things very well.
extension UIImage {
    func cropToRect(rect: CGRect) -> UIImage? {
        var scale = rect.width / self.size.width
        scale = self.size.height * scale < rect.height ? rect.height / self.size.height : scale
        let croppedImsize = CGSize(width: rect.width / scale, height: rect.height / scale)
        let croppedImrect = CGRect(origin: CGPoint(x: (self.size.width - croppedImsize.width) / 2.0,
                                                   y: (self.size.height - croppedImsize.height) / 2.0),
                                   size: croppedImsize)
        UIGraphicsBeginImageContextWithOptions(croppedImsize, true, 0)
        self.draw(at: CGPoint(x: -croppedImrect.origin.x, y: -croppedImrect.origin.y))
        let croppedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return croppedImage
    }
}
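A quick usage sketch (my own illustration, with photo and previewView as hypothetical stand-ins for the captured image and the full-screen preview view):
// Crop the captured photo so it matches what the Aspect Fill preview displayed.
if let matched = photo.cropToRect(rect: previewView.bounds) {
    imageView.image = matched // hypothetical destination image view
}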

UIDragInteractionDelegate: How to display transparent parts in the drag preview returned by dragInteraction(_:previewForLifting:session:)

I'm building a drag and drop interaction for an iOS app. I want to enable the user to drag and drop images containing transparent parts.
However, the default preview for the dragged contents is a rectangle with an opaque white background that covers my app's background.
When I create a custom preview by implementing the UIDragInteractionDelegate method
dragInteraction(_:previewForLifting:session:), as in Apple's code sample Adopting Drag and Drop in a Custom View, the transparency of my source image is still not taken into account, meaning my preview image is still displayed in a rectangle with an opaque white background:
func dragInteraction(_ interaction: UIDragInteraction, previewForLifting item: UIDragItem, session: UIDragSession) -> UITargetedDragPreview? {
    guard let image = item.localObject as? UIImage else { return nil }

    // Scale the preview image view frame to the image's size.
    let frame: CGRect
    if image.size.width > image.size.height {
        let multiplier = imageView.frame.width / image.size.width
        frame = CGRect(x: 0, y: 0, width: imageView.frame.width, height: image.size.height * multiplier)
    } else {
        let multiplier = imageView.frame.height / image.size.height
        frame = CGRect(x: 0, y: 0, width: image.size.width * multiplier, height: imageView.frame.height)
    }

    // Create a new view to display the image as a drag preview.
    let previewImageView = UIImageView(image: image)
    previewImageView.contentMode = .scaleAspectFit
    previewImageView.frame = frame

    /*
     Provide a custom targeted drag preview that lifts from the center
     of imageView. The center is calculated because it needs to be in
     the coordinate system of imageView. Using imageView.center returns
     a point that is in the coordinate system of imageView's superview,
     which is not what is needed here.
     */
    let center = CGPoint(x: imageView.bounds.midX, y: imageView.bounds.midY)
    let target = UIDragPreviewTarget(container: imageView, center: center)
    return UITargetedDragPreview(view: previewImageView, parameters: UIDragPreviewParameters(), target: target)
}
I tried to force the preview not to be opaque, but it did not help:
previewImageView.isOpaque = false
How can I get transparent parts in the lift preview?
Override the default backgroundColor in the UIDragPreviewParameters, as it defines the background color of a drag item preview.
Set it to UIColor.clear, which is a color object whose grayscale and alpha values are both 0.0.
let previewParameters = UIDragPreviewParameters()
previewParameters.backgroundColor = UIColor.clear // transparent background
return UITargetedDragPreview(view: previewImageView,
                             parameters: previewParameters,
                             target: target)
You can define a UIBezierPath according to your image and set it as previewParameters.visiblePath.
Example (Swift 4.2):
let previewParameters = UIDragPreviewParameters()
previewParameters.visiblePath = UIBezierPath(roundedRect: CGRect(x: yourX, y: yourY, width: yourWidth, height: yourHeight), cornerRadius: yourRadius)
// ... Use the created previewParameters
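For completeness, the created parameters would then be passed to the targeted preview just as in the previous answer:
return UITargetedDragPreview(view: previewImageView,
                             parameters: previewParameters,
                             target: target)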

How to set half-round UIImage in Swift like this screenshot

https://www.dropbox.com/s/wlizis5zybsvnfz/File%202017-04-04%2C%201%2052%2024%20PM.jpeg?dl=0
Hello all Swifters,
Could anyone tell me how to set up this kind of UI? Is there a half-rounded image that they have set? Or are there two images: one with the mountains in the background, and another half-rounded image on a white background placed on top of it?
Please advise.
1. Draw an ellipse shape using a UIBezierPath.
2. Draw a rectangle path exactly matching the imageView that holds your image.
3. Transform the ellipse path with a CGAffineTransform so that it sits in the center of the rect path.
4. Translate the rect path with a CGAffineTransform by 0.5 to create the intersection between the ellipse and the rect.
5. Mask the image using a CAShapeLayer.
Additional: as Rob Mayoff stated in the comments, you'll probably need to calculate the mask size in viewDidLayoutSubviews; a sketch of that follows the code below. Don't forget to play with it, test different cases (different screen sizes and orientations), and adjust the implementation based on your needs.
Try the following code:
import UIKit

class ViewController: UIViewController {
    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        guard let image = imageView.image else {
            return
        }
        let size = image.size
        imageView.clipsToBounds = true
        imageView.image = image
        let curveRadius = size.width * 0.010 // Adjust curve of the image view here
        let invertedRadius = 1.0 / curveRadius
        let rect = CGRect(x: 0,
                          y: -40,
                          width: imageView.bounds.width + size.width * 2 * invertedRadius,
                          height: imageView.bounds.height)
        let ellipsePath = UIBezierPath(ovalIn: rect)
        let transform = CGAffineTransform(translationX: -size.width * invertedRadius, y: 0)
        ellipsePath.apply(transform)
        let rectanglePath = UIBezierPath(rect: imageView.bounds)
        rectanglePath.apply(CGAffineTransform(translationX: 0, y: -size.height * 0.5))
        ellipsePath.append(rectanglePath)
        let maskShapeLayer = CAShapeLayer()
        maskShapeLayer.frame = imageView.bounds
        maskShapeLayer.path = ellipsePath.cgPath
        imageView.layer.mask = maskShapeLayer
    }
}
Result:
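A sketch of the viewDidLayoutSubviews variant mentioned in the steps above (my own arrangement, assuming the masking code from viewDidLoad is moved into a hypothetical applyEllipseMask() helper):
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()
    // Rebuild the mask once the image view has its final bounds,
    // e.g. after rotation or a size-class change.
    applyEllipseMask() // hypothetical helper wrapping the masking code above
}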
You can find an answer here:
https://stackoverflow.com/a/34983655/5479510
But generally, I wouldn't recommend using a white image overlay as it may appear distorted or pixelated on different devices. Using a masking UIView would do just great.
Why not just create (draw) a rounded transparent image, add a UIImageView() with that UIImage() at the top of the view with the required height, and add the other views below it? I think this is the easiest way. I would have posted this as a comment, but I can't.

How do I capture QR Code data in specific area of AVCaptureVideoPreviewLayer using Swift?

I am creating an iPad app and one of its features is scanning QR codes. I have the QR scanning part working, but the issue is that the iPad screen is very large and I will be scanning small QR codes off of a sheet of paper with many QR codes visible at once. I want to designate a smaller area of the display to be the only area that can actually capture a QR code, so it is easier for the user to scan the specific QR code they want.
I currently have made a temporary UIView with red borders that is centered on the page as an example of where I will want the user to scan the QR codes. It looks like this:
I have looked all over to find an answer to how I can target a specific region of the AVCaptureVideoPreviewLayer to collect the QR code data, and what I have found are suggestions to use rectOfInterest with AVCaptureMetadataOutput. I have attempted that, but when I set rectOfInterest to the same coordinates and size as those I use for my UIView (which shows up correctly), I can no longer scan or recognize any QR codes. Can someone please tell me why the scannable area does not match the location of the UIView, and how I can get the rectOfInterest to sit within the red borders I have added to the screen?
Here is the code for the scan function I am currently using:
func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)
    if (error != nil) {
        // If any error occurs, simply log the description of it and don't continue any more.
        println("\(error?.localizedDescription)")
        return
    }

    // Initialize the captureSession object.
    captureSession = AVCaptureSession()
    // Set the input device on the capture session.
    captureSession?.addInput(input as! AVCaptureInput)

    // Initialize an AVCaptureMetadataOutput object and set it as the output device for the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)

    // Calculate a centered square rectangle with a red border.
    let size = 300
    let screenWidth = self.view.frame.size.width
    let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
    let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)

    // Create a UIView that will serve as a red square to indicate where to place the QR code for scanning.
    scanAreaView = UIView()
    scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
    scanAreaView?.layer.borderWidth = 4
    scanAreaView?.frame = scanRect

    // Set the delegate and use the default dispatch queue to execute the callback.
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]
    captureMetadataOutput.rectOfInterest = scanRect

    // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer)

    // Start video capture.
    captureSession?.startRunning()

    // Initialize the QR code frame view to highlight the QR code.
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)

    // Add a button that will be used to close out of the scan view.
    videoBtn.setTitle("Close", forState: .Normal)
    videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
    videoBtn.backgroundColor = UIColor.grayColor()
    videoBtn.layer.cornerRadius = 5.0
    videoBtn.frame = CGRectMake(10, 30, 70, 45)
    videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
    view.addSubview(videoBtn)

    view.addSubview(scanAreaView!)
}
Update
The reason I do not think this is a duplicate is because the other post referenced is in Objective-C and my code is in Swift. For those of us that are new to iOS it is not as easy to translate the two. Also, the referenced post's answer does not show the actual update made in the code that resolved his issue. He left a good explanation about having to use the metadataOutputRectOfInterestForRect method to convert the rectangle coordinates, but I still cannot seem to get this method to work, as it is unclear to me how this should work without an example.
After fighting with the metadataOutputRectOfInterestForRect method all morning, I got tired of it and decided to write my own conversion.
func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    let screenWidth = screenRect.width
    let screenHeight = screenRect.height
    let newX = 1 / (screenWidth / rect.minX)
    let newY = 1 / (screenHeight / rect.minY)
    let newWidth = 1 / (screenWidth / rect.width)
    let newHeight = 1 / (screenHeight / rect.height)
    return CGRect(x: newX, y: newY, width: newWidth, height: newHeight)
}
Note: I have an image view with a square to show the user where to scan, be sure to use the imageView.frame and not imageView.bounds in order to get the correct location on the screen.
This has been working successfully for me.
let metadataOutput = AVCaptureMetadataOutput()
metadataOutput.rectOfInterest = convertRectOfInterest(rect: scanRect)
After reviewing another source (https://www.jianshu.com/p/8bb3d8cb224e), I found that the convertRectOfInterest function above has a slight mistake: the return statement should be
return CGRect(x: newY, y: newX, width: newHeight, height: newWidth)
where the x and y, and the width and height, inputs are interchanged to get it working.
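Putting that correction back into the function, a corrected sketch would read as follows (the divisions are just the simplified form of 1 / (screen / value)):
func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    let newX = rect.minX / screenRect.width
    let newY = rect.minY / screenRect.height
    let newWidth = rect.width / screenRect.width
    let newHeight = rect.height / screenRect.height
    // rectOfInterest uses a rotated, normalized coordinate space,
    // hence x/y and width/height are swapped relative to the view rect.
    return CGRect(x: newY, y: newX, width: newHeight, height: newWidth)
}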
You need to convert the rect represented in the UIView's coordinates into the coordinate system of the AVCaptureVideoPreviewLayer:
captureMetadataOutput.rectOfInterest = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
For more info: https://stackoverflow.com/a/55778152/6898849
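One caveat worth noting (my addition, not part of the answer): the conversion depends on the preview layer already having its final frame, so it is safest to set rectOfInterest after layout has happened and the session is running, for example:
captureSession?.startRunning()
DispatchQueue.main.async {
    // scanRect is the red square's frame in the view's coordinates.
    captureMetadataOutput.rectOfInterest =
        videoPreviewLayer!.metadataOutputRectConverted(fromLayerRect: scanRect)
}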
let scanView = CGRect(x: centerX, y: centerY, width: width, height: height)
metadataOutput.rectOfInterest = previewLayer.metadataOutputRectConverted(fromLayerRect: scanView)
This works for me.
extension AVCaptureVideoPreviewLayer {
    func rectOfInterestConverted(parentRect: CGRect, fromLayerRect: CGRect) -> CGRect {
        let parentWidth = parentRect.width
        let parentHeight = parentRect.height
        let newX = (parentWidth - fromLayerRect.maxX) / parentWidth
        let newY = 1 - (parentHeight - fromLayerRect.minY) / parentHeight
        let width = 1 - (fromLayerRect.minX / parentWidth + newX)
        let height = (fromLayerRect.maxY / parentHeight) - newY
        return CGRect(x: newX, y: newY, width: width, height: height)
    }
}
Usage:
if let rect = videoPreviewLayer?.rectOfInterestConverted(parentRect: self.view.frame, fromLayerRect: scanAreaView.frame) {
    captureMetadataOutput.rectOfInterest = rect
}
metadataOutput.rectOfInterest = previewLayer.metadataOutputRectConverted(fromLayerRect: yourView.frame)
where previewLayer is an AVCaptureVideoPreviewLayer.
