AVCaptureMetadataOutput Region of Interest for Barcodes - iOS

I am looking to display a UIView subclass within a UIStackView. The subclass, called PLBarcodeScannerView, uses AVCaptureMetadataOutput to detect barcodes within the camera's field of view. Because this view does not fill the entire screen, I need to set the region of interest to match the frame of the PLBarcodeScannerView: the user only sees a portion of the camera feed, and we want to be sure that the barcode visible in that portion is the one being scanned.
Issue
I cannot seem to set metadataOutputRectOfInterest properly, and the "zoom level" of the preview layer on this view also looks wrong, although its aspect ratio is correct. Barcodes are detected successfully, but they are not always visible within the preview window: codes are still scanned even when they lie outside the visible preview.
Screenshot:
The colorful photo is the PLBarcodeScannerView. Only codes which are fully visible inside this view should be considered.
Below is the code that initializes the view:
This is called from the init methods of PLBarcodeScannerView: UIView.
func setupView() {
    session = AVCaptureSession()

    let tap = UITapGestureRecognizer(target: self, action: #selector(self.resume))
    addGestureRecognizer(tap)

    // Set the capture device.
    let videoCaptureDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)

    // Create input object.
    let videoInput: AVCaptureDeviceInput?
    do {
        videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice)
    } catch {
        return
    }

    // Add input to the session.
    if (session!.canAddInput(videoInput)) {
        session!.addInput(videoInput)
    } else {
        scanningNotPossible()
    }

    // Create output object.
    let metadataOutput = AVCaptureMetadataOutput()

    // Add output to the session.
    if (session!.canAddOutput(metadataOutput)) {
        session!.addOutput(metadataOutput)

        // Send captured data to the delegate object via a serial queue.
        metadataOutput.setMetadataObjectsDelegate(self, queue: DispatchQueue.main)

        // Set barcode type for which to scan: Code 128.
        metadataOutput.metadataObjectTypes = [
            AVMetadataObjectTypeCode128Code
        ]
    } else {
        scanningNotPossible()
    }

    // Determine the size of the region of interest.
    let x = self.frame.origin.x/UIScreen.main.bounds.width
    let y = self.frame.origin.y/UIScreen.main.bounds.height
    let width = self.frame.width/UIScreen.main.bounds.height
    let height = self.frame.height/UIScreen.main.bounds.height
    let scanRectTransformed = CGRect(x: x, y: y, width: 1, height: height)
    metadataOutput.metadataOutputRectOfInterest(for: scanRectTransformed)

    // Add previewLayer and have it show the video data.
    previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.frame = self.bounds
    previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    layer.addSublayer(previewLayer)

    // Begin the capture session.
    session!.startRunning()
}
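For reference, a likely fix as a minimal sketch (using the same Swift 3-era AVFoundation names as the code above; not verified against this exact project): metadataOutputRectOfInterest(for:) is a conversion method on AVCaptureVideoPreviewLayer, not a setter on the output, and its return value must be assigned to the output's rectOfInterest property. The conversion also only yields meaningful values once the preview layer is attached and the session is running (or after AVCaptureInputPortFormatDescriptionDidChange fires), and it expects a rect in the layer's own coordinate space rather than fractions of the screen:

// Sketch: attach the preview layer and start the session first...
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer.frame = self.bounds
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
layer.addSublayer(previewLayer)
session!.startRunning()

// ...then let the layer convert the visible rect into the rotated,
// normalized (0-1) space the metadata output expects.
metadataOutput.rectOfInterest = previewLayer.metadataOutputRectOfInterest(for: self.bounds)

This also sidesteps the manual math above, which divides the view's width by the screen's height and then hard-codes width: 1 in scanRectTransformed.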

Related

iOS ARKit4 - how to fix white balance for the ARCamera?

I have an app that generates a point cloud from multiple ARFrames. It appears that the camera used to capture the images has automatic white balance and can change it in the middle of a capture session.
How do I configure ARView, ARSession, or ARCamera to force it to lock white balance for the duration of the session?
I have access to the following parameters, but do not see anything related to white balance.
var arView: ARView!
let session: ARSession = arView.session
var sampleFrame: ARFrame = session.currentFrame!
let camera = sampleFrame.camera

func configureSessionAndRun() {
    arView.automaticallyConfigureSession = false
    let configuration = ARWorldTrackingConfiguration()
    configuration.sceneReconstruction = .meshWithClassification
    configuration.frameSemantics = .smoothedSceneDepth
    configuration.planeDetection = [.horizontal, .vertical]
    configuration.environmentTexturing = .automatic
    arView.session.run(configuration)
}
There are only two camera properties that look related, but they are gettable, not settable:
let frame = arView.session.currentFrame
frame?.camera.exposureDuration // { get }
frame?.camera.exposureOffset // { get }
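ARCamera itself exposes no white-balance control. One possible route, as a hedged sketch (assuming iOS 16+ / ARKit 6, where ARKit exposes the underlying AVCaptureDevice through configurableCaptureDeviceForPrimaryCamera; earlier iOS versions have no public API for this):

// Sketch, assuming iOS 16+: lock white balance on the capture device
// that ARKit uses for the primary camera. Access it after running the session.
if let device = ARWorldTrackingConfiguration.configurableCaptureDeviceForPrimaryCamera {
    do {
        try device.lockForConfiguration()
        if device.isWhiteBalanceModeSupported(.locked) {
            device.whiteBalanceMode = .locked // freeze white balance for the session
        }
        device.unlockForConfiguration()
    } catch {
        print("Could not lock capture device: \(error)")
    }
}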

Camera preview layer does not show up Swift 4

I am creating a view controller in which I want a somewhat small UIView in the corner to display the camera preview, using a helper function. However, when I pass the small UIView into the function, the camera preview does not show up. The odd thing is that if I tell the function to display the preview on self.view, everything works fine and I can see the preview. For this reason I think the problem is with how I insert the layer, or something similar.
Here is the function I am using to display the preview...
func displayPreview(on view: UIView) throws {
    guard let captureSession = self.captureSession, captureSession.isRunning else { throw CameraControllerError.captureSessionIsMissing }

    self.previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    self.previewLayer?.videoGravity = AVLayerVideoGravity.resizeAspectFill
    self.previewLayer?.connection?.videoOrientation = .portrait

    view.layer.insertSublayer(self.previewLayer!, at: 0)
    self.previewLayer?.frame = view.frame
}
I call this function from inside another function which handles setting up the capture session and other similar things.
func configureCameraController() {
    cameraController.prepare { (error) in
        if let error = error {
            print("ERROR")
            print(error)
        }
        print("hello")
        try! self.cameraController.displayPreview(on: self.mirrorView)
    }
}

configureCameraController()
How can I get the camera preview layer to show up on the smaller UIView?
Can you try adding the following:

let rootLayer: CALayer = self.yourSmallerView.layer
rootLayer.masksToBounds = true
self.previewLayer.frame = rootLayer.bounds
rootLayer.addSublayer(self.previewLayer)

in place of:

view.layer.insertSublayer(self.previewLayer!, at: 0)

Also ensure yourSmallerView.contentMode = UIViewContentMode.scaleToFill. The key difference is that rootLayer.bounds is in the smaller view's own coordinate space, while view.frame is in its superview's space, so the original code positioned the layer incorrectly.
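Putting that together, displayPreview(on:) would look something like this (a sketch that keeps the question's own names, CameraControllerError and self.previewLayer):

func displayPreview(on view: UIView) throws {
    guard let captureSession = self.captureSession, captureSession.isRunning else {
        throw CameraControllerError.captureSessionIsMissing
    }
    let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer.videoGravity = .resizeAspectFill
    previewLayer.connection?.videoOrientation = .portrait
    view.layer.masksToBounds = true
    previewLayer.frame = view.bounds // bounds (local space), not frame (superview space)
    view.layer.addSublayer(previewLayer)
    self.previewLayer = previewLayer
}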

Reinitializing barcode reader when viewWillAppear (again)

I have a barcode reader that initializes correctly the first time the view is loaded. However, when I navigate back to the view, the video feed stops working.
The code below shows how it is initialized in viewDidLoad. Any suggestion on how to modify it so I can call part of it when the view appears again?
Code:
override func viewDidLoad() {
    super.viewDidLoad()

    // Get an instance of the AVCaptureDevice class to initialize a device object and provide the video
    // as the media type parameter.
    let captureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)

    // Get an instance of the AVCaptureDeviceInput class using the previous device object.
    var error: NSError?
    let input: AnyObject! = AVCaptureDeviceInput.deviceInputWithDevice(captureDevice, error: &error)

    if (error != nil) {
        // If any error occurs, simply log the description of it and don't continue any more.
        println("\(error?.localizedDescription)")
        return
    }

    // Initialize the captureSession object.
    captureSession = AVCaptureSession()

    // Set the input device on the capture session.
    captureSession?.addInput(input as AVCaptureInput)

    // Initialize a AVCaptureMetadataOutput object and set it as the output device to the capture session.
    let captureMetadataOutput = AVCaptureMetadataOutput()
    captureSession?.addOutput(captureMetadataOutput)

    // Set delegate and use the default dispatch queue to execute the callback.
    captureMetadataOutput.setMetadataObjectsDelegate(self, queue: dispatch_get_main_queue())
    captureMetadataOutput.metadataObjectTypes = supportedBarCodes

    // Initialize the video preview layer and add it as a sublayer to the viewPreview view's layer.
    videoPreviewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    videoPreviewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    let tmpbounds = view.layer.bounds
    videoPreviewLayer?.bounds = tmpbounds
    videoPreviewLayer?.position = CGPointMake(CGRectGetMidX(tmpbounds), CGRectGetMidY(tmpbounds))
    videoPreviewLayer?.frame = view.layer.bounds
    view.layer.addSublayer(videoPreviewLayer)

    // Start video capture.
    captureSession?.startRunning()

    // Move the message label to the top view.
    view.bringSubviewToFront(messageLabel)

    // Initialize QR Code Frame to highlight the QR code.
    qrCodeFrameView = UIView()
    qrCodeFrameView?.layer.borderColor = UIColor.greenColor().CGColor
    qrCodeFrameView?.layer.borderWidth = 2
    view.addSubview(qrCodeFrameView!)
    view.bringSubviewToFront(qrCodeFrameView!)
}
Try to execute in viewDidAppear:
captureSession?.startRunning()
and in viewWillDisappear:
captureSession?.stopRunning()
Also register for the AVCaptureSessionRuntimeErrorNotification notification and log all errors. I think it will help you understand what is going wrong.
You are setting up the captureSession and the videoPreviewLayer in viewDidLoad. That is only run when the ViewController is created. You need to move some code to viewDidAppear, which will run each time this view appears. Most simply, move all the code from this line:
captureSession = AVCaptureSession()
onwards into viewDidAppear.
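Combining both answers, a minimal sketch in modern Swift (the question's code predates Swift 3, so the method names differ):

override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    // Restart the capture each time the view appears.
    captureSession?.startRunning()
}

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)
    // Stop the capture when leaving the view.
    captureSession?.stopRunning()
}

// e.g. in viewDidLoad: log runtime errors to see what is going wrong.
NotificationCenter.default.addObserver(forName: .AVCaptureSessionRuntimeError,
                                       object: captureSession, queue: .main) { note in
    print("Capture session error: \(note.userInfo ?? [:])")
}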

How to draw camera video as background on iOS using Swift SceneKit?

I am trying to develop an augmented reality app using Swift and SceneKit on iOS. Is there a way to draw the video captured by the device camera as the background of the scene?
This worked for me. I used AVFoundation to capture the video input of the device camera:
let captureSession = AVCaptureSession()
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill

if let videoDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo) {
    var err: NSError? = nil
    if let videoIn: AVCaptureDeviceInput = AVCaptureDeviceInput.deviceInputWithDevice(videoDevice, error: &err) as? AVCaptureDeviceInput {
        if (err == nil) {
            if (captureSession.canAddInput(videoIn as AVCaptureInput)) {
                captureSession.addInput(videoIn as AVCaptureDeviceInput)
            } else {
                println("Failed to add video input.")
            }
        } else {
            println("Failed to create video input.")
        }
    } else {
        println("Failed to create video capture device.")
    }
}

captureSession.startRunning()
At this point, based on Apple's documentation of SCNScene's background property, I expected to be able to assign the AVCaptureVideoPreviewLayer instance to the scene's background.contents, with something like:
previewLayer.frame = sceneView.bounds
sceneView.scene.background.contents = previewLayer
This changed the background color of my scene from the default white to black, but offered no video input. This may be an iOS bug.
So, Plan B: I added the AVCaptureVideoPreviewLayer as a sublayer of a UIView's layer instead:
previewLayer.frame = self.view.bounds
self.view.layer.addSublayer(previewLayer)
I then set an SCNView as a subview of that same UIView, setting the SCNView's background color to clear:
let sceneView = SCNView()
sceneView.frame = self.view.bounds
sceneView.backgroundColor = UIColor.clearColor()
self.view.addSubview(sceneView)
The device camera's video is now visible as the background of the scene.
I've created a small demo.
As of iOS 11, you can now use an AVCaptureDevice as a material on an object or the scene's background (see "Using Animated Content" here)
Example:
let captureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back)!
scnScene.background.contents = captureDevice
Yes, you can use an AVCaptureVideoPreviewLayer as the contents of a material property (just make sure that you give the layer a bounds).
The scene view has a material property for the background that you can assign the video preview to (assign the layer to the background's contents):
var background: SCNMaterialProperty! { get }
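For instance, a short sketch of that approach (assuming a configured, running AVCaptureSession named captureSession and an SCNView named sceneView):

// Sketch: give the preview layer an explicit bounds, then assign it
// to the scene background's contents.
let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.bounds = CGRect(x: 0, y: 0, width: 1280, height: 720) // the layer needs a nonzero bounds
previewLayer.videoGravity = .resizeAspectFill
sceneView.scene?.background.contents = previewLayer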

How are barcodes and QR codes recognized without capturing the image?

I wonder how barcodes and QR codes (even characters) are recognized without capturing an image. I have seen in many apps that when you hold the device over a QR code or barcode, the app automatically recognizes it and starts processing. What scanning mechanism is used for this? How can it be achieved?
Thanks in advance.
1) The phone camera is launched by the library; it autofocuses and scans until it finds decodable information in the image shown by the camera.
2) That info is parsed by the library, which gives you the result.
The decoded info is the information encoded in the code itself.
For a QR code, the data is encoded in a square matrix; for a barcode, it is encoded in the vertical lines.
The library contains all the logic for detecting the type of code and decoding it according to its format. Read the QR code/barcode library docs, or implement one yourself and learn.
You can use AVCaptureSession, e.g.:
let session = AVCaptureSession()
var started = false // tracks whether the session has already been configured
var qrPayload: String?

func startSession() {
    guard !started else { return }
    started = true

    let output = AVCaptureMetadataOutput()
    output.setMetadataObjectsDelegate(self, queue: .main)

    let device: AVCaptureDevice?
    if #available(iOS 10.0, *) {
        device = AVCaptureDevice
            .DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
            .devices
            .first
    } else {
        device = AVCaptureDevice.devices().first { $0.position == .back }
    }

    guard
        let camera = device,
        let input = try? AVCaptureDeviceInput(device: camera),
        session.canAddInput(input),
        session.canAddOutput(output)
    else {
        // handle failures here
        return
    }

    session.addInput(input)
    session.addOutput(output)
    output.metadataObjectTypes = [.qr]

    let videoLayer = AVCaptureVideoPreviewLayer(session: session)
    videoLayer.frame = view.bounds
    videoLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(videoLayer)

    session.startRunning()
}
And extend your view controller to conform to AVCaptureMetadataOutputObjectsDelegate:
extension QRViewController: AVCaptureMetadataOutputObjectsDelegate {
    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        guard
            qrPayload == nil,
            let object = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
            let string = object.stringValue
        else { return }

        qrPayload = string
        print(string)

        // perhaps dismiss this view controller now that you’ve succeeded
    }
}
Note: I check that qrPayload is nil because metadataOutput(_:didOutput:from:) can be called a few times before the view controller is dismissed.
