WebRTC Swift RemoteStream not Rendering - iOS

I have created an RTCMTLVideoView through an outlet:
@IBOutlet weak var otherEndVideoHolderView: RTCMTLVideoView!
and collected the RTCMediaStream from the RTCPeerConnectionDelegate callback:
func peerConnection(_ peerConnection: RTCPeerConnection, didAdd stream: RTCMediaStream) {
    debugPrint("peerConnection did add stream")
    if let video = stream.videoTracks.first {
        self.remoteVideoTrack = video
        self.delegate?.webRTCClient(self, didReceiveRemoteRender: video)
    }
}
After the offer -> answer exchange the peer connection status becomes connected, but only the local video is rendering. The remote video is not rendering.
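Whatever view ultimately displays the remote video has to be registered as a renderer on that track. Below is a minimal sketch of the consuming side, assuming the `otherEndVideoHolderView` outlet above is the intended render target and that the `webRTCClient(_:didReceiveRemoteRender:)` delegate call lands in the owning view controller (`WebRTCClient` is an assumed type name, not confirmed by the post):

// Minimal sketch (not from the original post): attach the delivered track to the Metal view.
func webRTCClient(_ client: WebRTCClient, didReceiveRemoteRender track: RTCVideoTrack) {
    DispatchQueue.main.async {
        // RTCMTLVideoView conforms to RTCVideoRenderer, so it can render the track directly.
        track.add(self.otherEndVideoHolderView)
    }
}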

I have recently worked with WebRTC and added remote streams to collection view cells. You can use my code; if you have any other issues, let me know and I will help.
let stream = VideoCallViewController.arrRemoteStreams[indexPath.row]
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "RmoteVideoCollectionViewCell", for: indexPath) as! RmoteVideoCollectionViewCell
cell.contentView.subviews.forEach({ $0.removeFromSuperview() })

#if arch(arm64)
// Use Metal (arm64 only)
let remoteRenderer = RTCMTLVideoView(frame: CGRect(x: 0, y: 0, width: 150, height: 150))
remoteRenderer.videoContentMode = .scaleAspectFit
#else
// Use OpenGL ES for the rest
let remoteRenderer = RTCEAGLVideoView(frame: CGRect(x: 0, y: 0, width: 150, height: 150))
#endif

// Attach the renderer to the stream's first video track
stream.videoTracks.first?.add(remoteRenderer)
// Add the renderer to the cell's view hierarchy
cell.contentView.addSubview(remoteRenderer)

I think there is an issue with WebRTC on the iPhone X and 11s. I've filed a bug report here. If the bug report describes your issue, make sure to star it so Google can get on it. Thanks!

Related

How can I scan only a specific CGRect with AVCaptureVideoDataOutput / Vision framework in iOS Swift? [duplicate]

I am building a QR code scanner with Swift and everything works in that regard. The issue is that I want only a small area of the visible AVCaptureVideoPreviewLayer to be able to scan QR codes. I have found that, in order to specify what area of the screen can read/capture QR codes, I need to use the AVCaptureMetadataOutput property rectOfInterest. The trouble is that when I assigned a CGRect to it, I couldn't scan anything.

After more research online I found suggestions that I need to use the method metadataOutputRectOfInterestForRect to convert a CGRect into the format the rectOfInterest property actually expects. However, the big issue I have run into now is that when I use metadataOutputRectOfInterestForRect I get an error that states CGAffineTransformInvert: singular matrix. Can anyone tell me why I am getting this error? I believe I am using the method properly according to the Apple developer documentation, and I believe I need it to accomplish my goal based on all the information I have found online. I will include links to the documentation I have found so far, as well as a code sample of the function I am using to scan QR codes.
CODE SAMPLE
func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)
    if (error != nil) {
        // If any error occurs, simply log its description and don't continue any further.
        println("\(error?.localizedDescription)")
        return
    }
    // Initialize the captureSession object.
    captureSession = AVCaptureSession()
    // Set the input device on the capture session.
    captureSession?.addInput(input as! AVCaptureInput)
    // Initialize an AVCaptureMetadataOutput object and set it as the output device of the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)
    // Calculate a centered square rectangle with a red border.
    let size = 300
    let screenWidth = self.view.frame.size.width
    let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
    let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)
    // Create a UIView that will serve as a red square indicating where to place the QR code for scanning.
    scanAreaView = UIView()
    scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
    scanAreaView?.layer.borderWidth = 4
    scanAreaView?.frame = scanRect
    view.addSubview(scanAreaView!)
    // Set the delegate and use the default dispatch queue to execute the callback.
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]
    // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    captureMetadataOutput.rectOfInterest = videoPreviewLayer!.metadataOutputRectOfInterestForRect(scanRect)
    view.layer.addSublayer(videoPreviewLayer)
    // Start video capture.
    captureSession?.startRunning()
    // Initialize the QR code frame used to highlight the QR code.
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)
    // Add a button that will be used to close the scan view.
    videoBtn.setTitle("Close", forState: .Normal)
    videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
    videoBtn.backgroundColor = UIColor.grayColor()
    videoBtn.layer.cornerRadius = 5.0
    videoBtn.frame = CGRectMake(10, 30, 70, 45)
    videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
    view.addSubview(videoBtn)
    view.bringSubviewToFront(scanAreaView!)
}
Please note that the line of interest causing the error is this:
captureMetadataOutput.rectOfInterest = videoPreviewLayer!.metadataOutputRectOfInterestForRect(scanRect)
Other things I have tried include passing a CGRect in directly as a parameter, which caused the same error. I have also passed in scanAreaView!.bounds as a parameter, as that is really the exact size/area I am looking for, and that also causes the same error. I have seen this done in others' code examples online and they do not seem to have the errors I am having. Here are some examples:
AVCaptureSession barcode scan
Xcode AVCapturesession scan Barcode in specific frame (rectOfInterest is not working)
Apple documentation
metadataOutputRectOfInterestForRect
rectOfInterest
Image of the scanAreaView I am using as the designated area that I am trying to make the only scannable area of the video preview layer:
I wasn't really able to clarify the issue with metadataOutputRectOfInterestForRect; however, you can set the property directly as well. You need to have the resolution (width and height) of your video, which you can specify in advance. I quickly used the 640×480 setting. As stated in the documentation, these values have to be
"extending from (0,0) in the top left to (1,1) in the bottom right, relative to the device’s natural orientation".
See https://developer.apple.com/documentation/avfoundation/avcaptureoutput/1616304-metadataoutputrectofinterestforr
Below is the code I tried
var x = scanRect.origin.x/480
var y = scanRect.origin.y/640
var width = scanRect.width/480
var height = scanRect.height/640
var scanRectTransformed = CGRectMake(x, y, width, height)
captureMetadataOutput.rectOfInterest = scanRectTransformed
I just tested it on an iOS device and it seems to work.
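If you would rather not hard-code 640×480, the active format's dimensions can be read from the capture device created earlier (a sketch; note it does not account for the rotation between the sensor and a portrait UI):

// Hedged sketch: read the active format's dimensions instead of hard-coding 640x480.
let dims = CMVideoFormatDescriptionGetDimensions(captureDevice.activeFormat.formatDescription)
let videoWidth = CGFloat(dims.width)    // e.g. 640
let videoHeight = CGFloat(dims.height)  // e.g. 480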
Edit
At least I've solved the metadataOutputRectOfInterestForRect problem. I believe you have to do this after the camera has been properly set up and is running, since before that the camera's resolution is not yet available.
First, add a notification observer method within viewDidLoad()
NSNotificationCenter.defaultCenter().addObserver(self, selector: Selector("avCaptureInputPortFormatDescriptionDidChangeNotification:"), name:AVCaptureInputPortFormatDescriptionDidChangeNotification, object: nil)
Then add the following method
func avCaptureInputPortFormatDescriptionDidChangeNotification(notification: NSNotification) {
    captureMetadataOutput.rectOfInterest = videoPreviewLayer.metadataOutputRectOfInterestForRect(scanRect)
}
Here you can then reset the rectOfInterest property. Then, in your code, you can display the AVMetadataObject within the didOutputMetadataObjects function
var rect = videoPreviewLayer.rectForMetadataOutputRectOfInterest(YourAVMetadataObject.bounds)
dispatch_async(dispatch_get_main_queue(), {
    self.qrCodeFrameView.frame = rect
})
I've tried, and the rectangle was always within the specified area.
In iOS 9.3.2 I was able to make metadataOutputRectOfInterestForRect work by calling it right after the startRunning method of AVCaptureSession:
captureSession.startRunning()
let visibleRect = previewLayer.metadataOutputRectOfInterestForRect(previewLayer.bounds)
captureMetadataOutput.rectOfInterest = visibleRect
Swift 4:
captureSession?.startRunning()
let scanRect = CGRect(x: 0, y: 0, width: 100, height: 100)
let rectOfInterest = layer.metadataOutputRectConverted(fromLayerRect: scanRect)
metaDataOutput.rectOfInterest = rectOfInterest
I managed to create the effect of having a region of interest. I tried all the proposed solutions, but the region was either CGPoint.zero or had an inappropriate size (after converting frames to the 0-1 coordinate space). This is really a hack for those who can't get rectOfInterest to work, and it does not optimize the detection.
In:
func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection)
I have the following code:
if let visualCodeObject = videoPreviewLayer?.transformedMetadataObject(for: metadataObj),
   self.viewfinderView.frame.contains(visualCodeObject.bounds) {
    // The visual code is inside the viewfinder; you can now handle the detection.
}
/// After
captureSession.startRunning()

/// Add this
if let videoPreviewLayer = self.videoPreviewLayer {
    self.captureMetadataOutput.rectOfInterest =
        videoPreviewLayer.metadataOutputRectOfInterest(for: self.getRectOfInterest())
}

fileprivate func getRectOfInterest() -> CGRect {
    // A 200x200 square centered in the view.
    let centerX = (self.frame.width / 2) - 100
    let centerY = (self.frame.height / 2) - 100
    let quadr: CGFloat = 200
    return CGRect(x: centerX, y: centerY, width: quadr, height: quadr)
}
To read a QR code/barcode from a small rect (a specific region) of the full camera view, it is mandatory to set the rect of interest after startRunning (Objective-C):
[_captureSession startRunning];
[captureMetadataOutput setRectOfInterest:[_videoPreviewLayer metadataOutputRectOfInterestForRect:scanRect]];
Note:
captureMetadataOutput --> AVCaptureMetadataOutput
_videoPreviewLayer --> AVCaptureVideoPreviewLayer
scanRect --> Rect where you want the QRCode to be read.
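The same ordering in Swift would look roughly like this (a sketch; the variable names mirror the note above):

// Hedged Swift equivalent of the Objective-C snippet above:
// set rectOfInterest only after the session has started running.
captureSession.startRunning()
captureMetadataOutput.rectOfInterest =
    videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)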
I know there are already solutions present and it's pretty late, but I achieved mine by capturing the complete view image and then cropping it to a specific rect.
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    if let imageData = photo.fileDataRepresentation() {
        print(imageData)
        capturedImage = UIImage(data: imageData)
        let crop = cropToPreviewLayer(originalImage: capturedImage!)
        let sb = UIStoryboard(name: "Main", bundle: nil)
        let s = sb.instantiateViewController(withIdentifier: "KeyFobScanned") as! KeyFobScanned
        s.image = crop
        self.navigationController?.pushViewController(s, animated: true)
    }
}

private func cropToPreviewLayer(originalImage: UIImage) -> UIImage? {
    guard let cgImage = originalImage.cgImage else { return nil }
    let scanRect = CGRect(x: stackView.frame.origin.x, y: stackView.frame.origin.y, width: innerView.frame.size.width, height: innerView.frame.size.height)
    let outputRect = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: outputRect.origin.x * width, y: outputRect.origin.y * height, width: outputRect.size.width * width, height: outputRect.size.height * height)
    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        return UIImage(cgImage: croppedCGImage, scale: 1.0, orientation: originalImage.imageOrientation)
    }
    return nil
}
Potentially unrelated, but the issue for me was screen orientation. On my portrait only app, I wanted to have a barcode scanner that just detects codes in a horizontal line in the middle of the screen. I thought this would work:
CGRect(x: 0, y: 0.4, width: 1, height: 0.2)
Instead, I had to switch x with y and width with height:
CGRect(x: 0.4, y: 0, width: 0.2, height: 1)
I wrote the following:
videoPreviewLayer?.frame = view.layer.bounds
videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
And this worked for me, but I still don't know why.

How can I animate a series of images in SwiftUI?

I am new to SwiftUI and iOS development (I started learning 2 days ago); before that I was doing Android apps. I have learned some of the basics, but I can't figure out how to animate several images. I need to display a series of images in a frame, with a 1.2-second delay before changing them.
I found some code here, but it uses a UIImageView from the older UIKit framework, and the animation can't be stopped.
import SwiftUI

struct SplashScreen: UIViewRepresentable {
    let imageSize: CGSize
    let imageNames: [String]
    let duration: Double

    func makeUIView(context: Self.Context) -> UIView {
        let containerView = UIView(frame: CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
        let animationImageView = UIImageView(frame: CGRect(x: 0, y: 0, width: imageSize.width, height: imageSize.height))
        animationImageView.clipsToBounds = true
        animationImageView.layer.cornerRadius = 5
        animationImageView.autoresizesSubviews = true
        animationImageView.animationRepeatCount = 3
        animationImageView.contentMode = UIView.ContentMode.scaleAspectFill
        var images = [UIImage]()
        imageNames.forEach { imageName in
            if let img = UIImage(named: imageName) {
                images.append(img)
            }
        }
        animationImageView.image = UIImage.animatedImage(with: images, duration: duration)
        animationImageView.image = animationImageView.animationImages?.last!
        containerView.addSubview(animationImageView)
        return containerView
    }

    func updateUIView(_ uiView: UIView, context: UIViewRepresentableContext<SplashScreen>) {
    }
}
This code is from Julio Bailon.
I couldn't find any tutorial which shows how to do this in SwiftUI.
What I want to do:
1. Animate the images about 3 times.
2. On end, launch another ViewController without the possibility of going back (I am thinking in terms of activities on Android, but I don't know how much that helps).
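One possible pure-SwiftUI direction, sketched here under assumptions (the asset names, the three-loop count, and the placeholder "next" view are illustrative, not from the original thread), is to drive the frame index with a timer and swap views via state when the loops finish:

import SwiftUI

// Hypothetical sketch: steps through image assets every 1.2 s, stops after
// three full loops, then swaps in a placeholder "next" view. SwiftUI favors
// swapping views via state over pushing view controllers.
struct SplashAnimationView: View {
    let imageNames: [String]                 // assumed non-empty asset names
    @State private var index = 0
    @State private var loopsCompleted = 0
    @State private var finished = false

    private let timer = Timer.publish(every: 1.2, on: .main, in: .common).autoconnect()

    var body: some View {
        Group {
            if finished {
                Text("Main screen goes here")  // placeholder for the next screen
            } else {
                Image(imageNames[index])
                    .resizable()
                    .scaledToFit()
            }
        }
        .onReceive(timer) { _ in
            guard !finished, !imageNames.isEmpty else { return }
            let next = (index + 1) % imageNames.count
            if next == 0 { loopsCompleted += 1 }
            if loopsCompleted == 3 {
                finished = true                // the guard above stops further frame changes
            } else {
                index = next
            }
        }
    }
}

Switching `finished` to true here plays the role of finishing one Android activity and starting another: the old view simply stops being rendered, so there is nothing to "go back" to.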

Play YouTube video on ARImageAnchor in Swift

I want to play a YouTube video in my image-tracking module: when an image is tracked, the YouTube video should play on top of it. Can anyone help me?
private func makeVideo(with url: URL, size: CGSize) -> SCNNode? {
    // Configure the YouTube player on the main queue
    DispatchQueue.main.async {
        self.player.frame = CGRect(x: 0, y: 0, width: 650, height: 400)
        self.player.autoplay = true
        self.player.loadPlayer()
    }
    // Use the player view as the plane's material
    let avMaterial = SCNMaterial()
    avMaterial.diffuse.contents = player
    // Create a plane sized to the detected image
    let videoPlane = SCNPlane(width: size.width, height: size.height)
    videoPlane.materials = [avMaterial]
    // Wrap the plane in a node and lay it flat on the image anchor
    let videoNode = SCNNode(geometry: videoPlane)
    videoNode.eulerAngles.x = -.pi / 2
    return videoNode
}
import YoutubeKit
let player = YTSwiftyPlayer(playerVars: [.videoID("GJQsT-h0FTU")])
I tried this, but it shows a black screen on the image while the audio plays.
This may not be exactly the solution to your issue, but I hope it can help you in some way.
What you can do, in addition to the image detection, is display a UIWebView:
'UIWebView' was deprecated in iOS 12.0: No longer supported; please adopt WKWebView.
Even though Xcode will warn you with a message like the one above, WKWebView does not display anything in this setup, so you will need to stick with UIWebView until they fix the issue.
So, assuming that you have figured out the image detection part and can locate and place nodes over the detected image, you can implement the following function:
func displayWebSite(on rootNode: SCNNode, horizontalOffset: CGFloat) {
    DispatchQueue.main.async {
        // Open YouTube
        let request = URLRequest(url: URL(string: "https://youtu.be/7ehEPsrw1X8")!)
        // Define the size
        let webView = UIWebView(frame: CGRect(x: 0, y: 0, width: 650, height: 900))
        webView.loadRequest(request)
        // Set the plane size
        let webViewPlane = SCNPlane(width: horizontalOffset, height: horizontalOffset * 1.45)
        webViewPlane.cornerRadius = 0.025
        // Define the geometry
        let webViewNode = SCNNode(geometry: webViewPlane)
        // Set the web view as a material of the plane
        webViewNode.geometry?.firstMaterial?.diffuse.contents = webView
        webViewNode.opacity = 0
        // Put it a little in front to avoid merging with the detected image
        webViewNode.position.z += 0.04
        // Add the node
        rootNode.addChildNode(webViewNode)
    }
}
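The node above is added with opacity 0, presumably so the original project can fade it in afterwards. One way to do that (an addition of mine, not part of the quoted function) is to run a fade action right after rootNode.addChildNode(webViewNode):

// Hypothetical follow-up: fade the web-view plane in once it has been added.
webViewNode.runAction(SCNAction.fadeIn(duration: 0.5))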
For your reference, you can check one of my projects to see how that function works in context.

Swift: how to create FDWaveformView dynamically

I want to show the waveforms of multiple audio files in a scroll view using Swift 4 and Xcode 9. I am using the CocoaPods library FDWaveformView to show the waveform of an audio file. For this I have to create the FDWaveformView dynamically. FDWaveformView works fine when I create it in the storyboard, but it shows an error when created dynamically in a Swift class.
Code:
for index in selectedAudios {
    audioQueue.append(AVPlayerItem(url: index as! URL))
    print("audio url: \(index)")
    let waveForm = FDWaveformView(frame: CGRect(x: 0, y: 0, width: 300, height: 150)) // error
    audio_scroll_view.addSubview(waveForm!)
}
The error: FDWaveformView initializer is inaccessible due to 'internal' protection level
Solution:
let frame = CGRect(x: 0, y: 0, width: 300, height: 100)
let waveform = FDWaveformView()
waveform.frame = frame
audio_scroll_view.addSubview(waveform)
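To lay out one waveform per selected audio in the scroll view, the same idea extends roughly like this (a sketch; it assumes `selectedAudios` holds URLs and relies on FDWaveformView's `audioURL` property to load each file):

// Hedged sketch: stack one FDWaveformView per audio URL inside the scroll view.
var yOffset: CGFloat = 0
for case let url as URL in selectedAudios {
    let waveform = FDWaveformView()
    waveform.frame = CGRect(x: 0, y: yOffset, width: 300, height: 100)
    waveform.audioURL = url              // kicks off waveform rendering
    audio_scroll_view.addSubview(waveform)
    yOffset += 110                       // 10 pt spacing between waveforms
}
audio_scroll_view.contentSize = CGSize(width: 300, height: yOffset)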

How do I capture QR Code data in specific area of AVCaptureVideoPreviewLayer using Swift?

I am creating an iPad app and one of its features is scanning QR codes. I have the QR scanning part working, but the issue is that the iPad screen is very large and I will be scanning small QR codes off of a sheet of paper with many QR codes visible at once. I want to designate a smaller area of the display to be the only area that can actually capture a QR code, so it is easier for the user to scan the specific QR code they want.
I currently have made a temporary UIView with red borders that is centered on the page as an example of where I will want the user to scan the QR codes. It looks like this:
I have looked all over for an answer on how to target a specific region of the AVCaptureVideoPreviewLayer to collect the QR code data, and what I have found are suggestions to use "rectOfInterest" with AVCaptureMetadataOutput. I have attempted to do that, but when I set rectOfInterest to the same coordinates and size as those I use for my UIView (which shows up correctly), I can no longer scan/recognize any QR codes. Can someone please tell me why the scannable area does not match the location of the UIView that is shown, and how I can get the rectOfInterest to lie within the red borders I have added to the screen?
Here is the code for the scan function I am currently using:
func startScan() {
    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)
    if (error != nil) {
        // If any error occurs, simply log its description and don't continue any further.
        println("\(error?.localizedDescription)")
        return
    }
    // Initialize the captureSession object.
    captureSession = AVCaptureSession()
    // Set the input device on the capture session.
    captureSession?.addInput(input as! AVCaptureInput)
    // Initialize an AVCaptureMetadataOutput object and set it as the output device of the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)
    // Calculate a centered square rectangle with a red border.
    let size = 300
    let screenWidth = self.view.frame.size.width
    let xPos = (CGFloat(screenWidth) / CGFloat(2)) - (CGFloat(size) / CGFloat(2))
    let scanRect = CGRect(x: Int(xPos), y: 150, width: size, height: size)
    // Create a UIView that will serve as a red square indicating where to place the QR code for scanning.
    scanAreaView = UIView()
    scanAreaView?.layer.borderColor = UIColor.redColor().CGColor
    scanAreaView?.layer.borderWidth = 4
    scanAreaView?.frame = scanRect
    // Set the delegate and use the default dispatch queue to execute the callback.
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = [AVMetadataObjectTypeQRCode]
    captureMetadataOutput.rectOfInterest = scanRect
    // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer)
    // Start video capture.
    captureSession?.startRunning()
    // Initialize the QR code frame used to highlight the QR code.
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)
    // Add a button that will be used to close the scan view.
    videoBtn.setTitle("Close", forState: .Normal)
    videoBtn.setTitleColor(UIColor.blackColor(), forState: .Normal)
    videoBtn.backgroundColor = UIColor.grayColor()
    videoBtn.layer.cornerRadius = 5.0
    videoBtn.frame = CGRectMake(10, 30, 70, 45)
    videoBtn.addTarget(self, action: "pressClose:", forControlEvents: .TouchUpInside)
    view.addSubview(videoBtn)
    view.addSubview(scanAreaView!)
}
Update
The reason I do not think this is a duplicate is because the other post referenced is in Objective-C and my code is in Swift. For those of us that are new to iOS it is not as easy to translate the two. Also, the referenced post's answer does not show the actual update made in the code that resolved his issue. He left a good explanation about having to use the metadataOutputRectOfInterestForRect method to convert the rectangle coordinates, but I still cannot seem to get this method to work, as it is unclear to me how this should work without an example.
After fighting with the metadataOutputRectOfInterestForRect method all morning, I got tired of it and decided to write my own conversion.
func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    let screenWidth = screenRect.width
    let screenHeight = screenRect.height
    let newX = 1 / (screenWidth / rect.minX)
    let newY = 1 / (screenHeight / rect.minY)
    let newWidth = 1 / (screenWidth / rect.width)
    let newHeight = 1 / (screenHeight / rect.height)
    return CGRect(x: newX, y: newY, width: newWidth, height: newHeight)
}
Note: I have an image view with a square to show the user where to scan, be sure to use the imageView.frame and not imageView.bounds in order to get the correct location on the screen.
This has been working successfully for me.
let metadataOutput = AVCaptureMetadataOutput()
metadataOutput.rectOfInterest = convertRectOfInterest(rect: scanRect)
After reviewing another source (https://www.jianshu.com/p/8bb3d8cb224e), the convertRectOfInterest function has a slight mistake; the return statement should be:
return CGRect(x: newY, y: newX, width: newHeight, height: newWidth)
where the x and y, and the width and height, inputs are interchanged to get it working.
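Putting the original helper and that correction together gives the following (same math, with `1 / (a / b)` written as `b / a`):

func convertRectOfInterest(rect: CGRect) -> CGRect {
    let screenRect = self.view.frame
    // Normalize against the screen size, then swap the axes as noted above.
    let newX = rect.minX / screenRect.width
    let newY = rect.minY / screenRect.height
    let newWidth = rect.width / screenRect.width
    let newHeight = rect.height / screenRect.height
    return CGRect(x: newY, y: newX, width: newHeight, height: newWidth)
}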
You need to convert the rect represented in the UIView's coordinates into the coordinate system of the AVCaptureVideoPreviewLayer:
captureMetadataOutput.rectOfInterest = videoPreviewLayer.metadataOutputRectConverted(fromLayerRect: scanRect)
For more info: https://stackoverflow.com/a/55778152/6898849
let scanView = CGRect(x: centerX, y: centerY, width: width, height: height)
metadataOutput.rectOfInterest = previewLayer.metadataOutputRectConverted(fromLayerRect: scanView)
This works for me.
extension AVCaptureVideoPreviewLayer {
    func rectOfInterestConverted(parentRect: CGRect, fromLayerRect: CGRect) -> CGRect {
        let parentWidth = parentRect.width
        let parentHeight = parentRect.height
        let newX = (parentWidth - fromLayerRect.maxX) / parentWidth
        let newY = 1 - (parentHeight - fromLayerRect.minY) / parentHeight
        let width = 1 - (fromLayerRect.minX / parentWidth + newX)
        let height = (fromLayerRect.maxY / parentHeight) - newY
        return CGRect(x: newX, y: newY, width: width, height: height)
    }
}
Usage:
if let rect = videoPreviewLayer?.rectOfInterestConverted(parentRect: self.view.frame, fromLayerRect: scanAreaView.frame) {
    captureMetadataOutput.rectOfInterest = rect
}
metadataOutput.rectOfInterest = previewLayer.metadataOutputRectConverted(fromLayerRect: yourView.frame)
where previewLayer is an AVCaptureVideoPreviewLayer.
