How are barcodes and QR codes recognized without capturing the image? - ios

I wonder how barcodes and QR codes (and even characters) are recognized without capturing an image. I have seen in many apps that when we hold the device over a QR code or barcode, the app automatically recognizes it and starts processing. Is there a scanning mechanism used for this? How can this be achieved? What mechanisms are involved?
Thanks in advance.

1) The phone camera is launched by the library; it autofocuses and keeps scanning until it can decode information from the image shown by the camera.
2) That information is then parsed by the library, which hands you the result.
The "decoded info" is the data that has been encoded into the code itself.
For a QR code, the data is stored in a square grid of modules; for a barcode, the data is stored in the widths and spacing of vertical lines.
The library contains all the logic for detecting the type of code and decoding it according to its format.
Please read the QR code / barcode library docs, or implement it yourself and learn from that.
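To make that idea concrete, here is a minimal sketch of my own (not from the answer above) showing how a single frame can be checked for a QR code with Core Image's CIDetector; a scanning library essentially runs logic like this continuously on live camera frames:

import UIKit
import CoreImage

// A minimal sketch, assuming you already have one frame as a UIImage.
// A scanning library runs a detector like this on every preview frame.
func decodeQRCode(in image: UIImage) -> String? {
    guard let ciImage = CIImage(image: image) else { return nil }
    let detector = CIDetector(ofType: CIDetectorTypeQRCode,
                              context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    let features = detector?.features(in: ciImage) as? [CIQRCodeFeature] ?? []
    return features.first?.messageString   // nil if no QR code was found
}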

You can use AVCaptureSession, e.g.:
let session = AVCaptureSession()
var qrPayload: String?
var started = false

func startSession() {
    guard !started else { return }
    started = true

    let output = AVCaptureMetadataOutput()
    output.setMetadataObjectsDelegate(self, queue: .main)

    let device: AVCaptureDevice?
    if #available(iOS 10.0, *) {
        device = AVCaptureDevice
            .DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
            .devices
            .first
    } else {
        device = AVCaptureDevice.devices().first { $0.position == .back }
    }

    guard
        let camera = device,
        let input = try? AVCaptureDeviceInput(device: camera),
        session.canAddInput(input),
        session.canAddOutput(output)
    else {
        // handle failures here
        return
    }

    session.addInput(input)
    session.addOutput(output)
    output.metadataObjectTypes = [.qr]   // must be set after the output has been added to the session

    let videoLayer = AVCaptureVideoPreviewLayer(session: session)
    videoLayer.frame = view.bounds
    videoLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(videoLayer)

    session.startRunning()
}
And extend your view controller to conform to AVCaptureMetadataOutputObjectsDelegate:
extension QRViewController: AVCaptureMetadataOutputObjectsDelegate {
    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        guard
            qrPayload == nil,
            let object = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
            let string = object.stringValue
        else { return }

        qrPayload = string
        print(string)

        // perhaps dismiss this view controller now that you’ve succeeded
    }
}
Note, I’m testing to make sure that the qrPayload is nil because I find that you can see metadataOutput(_:didOutput:from:) get called a few times before the view controller is dismissed.
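Another option (my own suggestion, not part of the answer above) is to stop the capture session as soon as the first payload arrives, which also prevents the repeated callbacks:

qrPayload = string
session.stopRunning()   // no further metadata callbacks are delivered after this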

Related

captureOutput not being called by AVCaptureAudioDataOutputSampleBufferDelegate

I have an app that records video, but I need it to show pitch levels of the sounds captured on the microphone in real-time to the user. I have been able to successfully record audio and video to MP4 using AVCaptureSession. However, when I add AVCaptureAudioDataOutput to the session and assign the AVCaptureAudioDataOutputSampleBufferDelegate I receive no errors, and yet the captureOutput function is never called once the session starts.
Here is the code:
import UIKit
import AVFoundation
import CoreLocation

class ViewController: UIViewController,
    AVCaptureVideoDataOutputSampleBufferDelegate,
    AVCaptureFileOutputRecordingDelegate, CLLocationManagerDelegate,
    AVCaptureAudioDataOutputSampleBufferDelegate {

    var videoFileOutput: AVCaptureMovieFileOutput!
    let session = AVCaptureSession()
    var outputURL: URL!
    var timer: Timer!
    var locationManager: CLLocationManager!
    var currentMagnitudeValue: CGFloat!
    var defaultMagnitudeValue: CGFloat!
    var visualMagnitudeValue: CGFloat!
    var soundLiveOutput: AVCaptureAudioDataOutput!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setupAVCapture()
    }

    func setupAVCapture() {
        session.beginConfiguration()

        //Add the camera INPUT to the session
        let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                  for: .video, position: .front)
        guard
            let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice!),
            session.canAddInput(videoDeviceInput)
        else { return }
        session.addInput(videoDeviceInput)

        //Add the microphone INPUT to the session
        let microphoneDevice = AVCaptureDevice.default(.builtInMicrophone, for: .audio, position: .unspecified)
        guard
            let audioDeviceInput = try? AVCaptureDeviceInput(device: microphoneDevice!),
            session.canAddInput(audioDeviceInput)
        else { return }
        session.addInput(audioDeviceInput)

        //Add the video file OUTPUT to the session
        videoFileOutput = AVCaptureMovieFileOutput()
        guard session.canAddOutput(videoFileOutput) else { return }
        if session.canAddOutput(videoFileOutput) {
            session.addOutput(videoFileOutput)
        }

        //Add the audio output so we can get PITCH of the sounds
        //AND assign the SampleBufferDelegate
        soundLiveOutput = AVCaptureAudioDataOutput()
        soundLiveOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "test"))
        if session.canAddOutput(soundLiveOutput) {
            session.addOutput(soundLiveOutput)
            print("Live AudioDataOutput added")
        } else {
            print("Could not add AudioDataOutput")
        }

        //Preview Layer
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        let rootLayer: CALayer = self.cameraView.layer
        rootLayer.masksToBounds = true
        previewLayer.frame = rootLayer.bounds
        rootLayer.addSublayer(previewLayer)
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill

        //Finalize the session
        session.commitConfiguration()

        //Begin the session
        session.startRunning()
    }

    func captureOutput(_: AVCaptureOutput, didOutput: CMSampleBuffer, from: AVCaptureConnection) {
        print("Bingo")
    }
}
Expected output:
Bingo
Bingo
Bingo
...
I have read:
StackOverflow: captureOutput not being called - The user was not declaring the captureOutput method correctly.
StackOverflow: AVCaptureVideoDataOutput captureOutput not being called - The user was not declaring the captureOutput method at all.
Apple - AVCaptureAudioDataOutputSampleBufferDelegate - Apple's documentation on the delegate and its method - the method matches the one I have declared.
Other common errors I have encountered online:
Using the declaration for older versions of Swift (I am using v4.1)
According to one article, after Swift 4.0 AVCaptureMetadataOutput replaces AVCaptureAudioDataOutput. Although I couldn't find this in Apple's documentation, I tried it as well, but similarly, the metadataOutput function is never called.
I am fresh out of ideas. Am I missing something obvious?
OK, nobody got back to me, but after playing around with it I worked out that the correct way to declare the captureOutput method for Swift 4 is as follows:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //Do your stuff here
}
Unfortunately, the online documentation for this is very poor. I guess you just need to get it exactly right: no errors are thrown if you misspell or misname the parameters, because it is an optional protocol method.
The method you were using has been replaced by this one, which gets called for both AVCaptureAudioDataOutput and AVCaptureVideoDataOutput. Make sure you check which output produced the sample buffer before writing it to the asset writer.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //Make sure you check the output before using the sample buffer
    if output == audioDataOutput {
        //Use the sample buffer for audio
    }
}
The problem for me turned out to be that the AVAudioSession and AVCaptureSession were declared as local variables, so when I started the session it simply went away. Once I moved them to class-level variables, everything worked great!
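A minimal sketch of that fix (the class and property names here are illustrative, not from the original code): keep the session and its outputs as instance properties so they outlive the method that configures them.

import UIKit
import AVFoundation

class RecorderViewController: UIViewController, AVCaptureAudioDataOutputSampleBufferDelegate {
    // Class-level, so they are not deallocated when setup returns
    private let session = AVCaptureSession()
    private let audioOutput = AVCaptureAudioDataOutput()

    override func viewDidLoad() {
        super.viewDidLoad()
        session.beginConfiguration()
        // ... add the camera and microphone inputs here ...
        if session.canAddOutput(audioOutput) {
            audioOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audio"))
            session.addOutput(audioOutput)
        }
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Called for every audio buffer once the session is running
    }
}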

__availableRawPhotoPixelFormatTypes is empty on iPhone 7+ and iOS11

I'm trying to capture RAW files with AVFoundation. However, I'm getting an empty array in __availableRawPhotoPixelFormatTypes.
Here is my snippet
if self._photoOutput == nil {
    self._photoOutput = AVCapturePhotoOutput()
    print(self._photoOutput!.__availableRawPhotoPixelFormatTypes)
}
And the output is an empty array: []
What may cause this?
Here are three things that will cause the availableRawPhotoPixelFormatTypes array to be empty:
You are reading the availableRawPhotoPixelFormatTypes property before adding your _photoOutput to an AVCaptureSession with a video source.
You are using the dual camera input. If so, you can't capture RAW images.
You are using the front camera. If so, you can't capture RAW images.
Here is some modified sample code from an excellent Apple guide (see link below). I have copied from several places and updated it slightly for brevity, simplicity and better overview:
let session = AVCaptureSession()
let photoOutput = AVCapturePhotoOutput()

private func configureSession() {
    // Get camera device.
    guard let videoCaptureDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .back) else {
        print("Unable to get camera device.")
        return
    }
    // Create a capture input.
    guard let videoInput = try? AVCaptureDeviceInput(device: videoCaptureDevice) else {
        print("Unable to obtain video input for default camera.")
        return
    }
    // Make sure inputs and output can be added to session.
    guard session.canAddInput(videoInput) else { return }
    guard session.canAddOutput(photoOutput) else { return }
    // Configure the session.
    session.beginConfiguration()
    session.sessionPreset = .photo
    session.addInput(videoInput)
    // availableRawPhotoPixelFormatTypes is empty.
    session.addOutput(photoOutput)
    // availableRawPhotoPixelFormatTypes should not be empty.
    session.commitConfiguration()
}

private func capturePhoto() {
    // Photo settings for RAW capture.
    let rawFormatType = kCVPixelFormatType_14Bayer_RGGB
    // At this point the array should not be empty (session has been configured).
    guard photoOutput.availableRawPhotoPixelFormatTypes.contains(NSNumber(value: rawFormatType).uint32Value) else {
        print("No available RAW pixel formats")
        return
    }
    let photoSettings = AVCapturePhotoSettings(rawPixelFormatType: rawFormatType)
    photoOutput.capturePhoto(with: photoSettings, delegate: self)
}

// MARK: - AVCapturePhotoCaptureDelegate methods

func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingRawPhoto rawSampleBuffer: CMSampleBuffer?,
                 previewPhoto previewPhotoSampleBuffer: CMSampleBuffer?,
                 resolvedSettings: AVCaptureResolvedPhotoSettings,
                 bracketSettings: AVCaptureBracketedStillImageSettings?,
                 error: Error?) {
    guard error == nil, let rawSampleBuffer = rawSampleBuffer else {
        print("Error capturing RAW photo:\(error)")
        return
    }
    // Do something with the rawSampleBuffer.
}
Apple's Photo Capture Guide:
https://developer.apple.com/library/content/documentation/AudioVideo/Conceptual/PhotoCaptureGuide/index.html
The availableRawPhotoPixelFormatTypes property:
https://developer.apple.com/documentation/avfoundation/avcapturephotooutput/1778628-availablerawphotopixelformattype
iPhone camera capabilities:
https://developer.apple.com/library/content/documentation/DeviceInformation/Reference/iOSDeviceCompatibility/Cameras/Cameras.html
One supplement to @Thomas's answer:
If you change AVCaptureDevice.activeFormat, AVCaptureSession.sessionPreset will be set to inputPriority automatically. In that situation, availableRawPhotoPixelFormatTypes will be empty too.
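A small sketch of how you might guard against that (an assumption on my part, not from the answer above; `session` and `photoOutput` are the ones configured earlier): if you don't actually need the custom activeFormat, re-asserting the photo preset brings the RAW format list back, since RAW capture requires the photo preset.

session.beginConfiguration()
if session.sessionPreset != .photo {
    session.sessionPreset = .photo   // changing activeFormat flips the preset to .inputPriority
}
session.commitConfiguration()
print(photoOutput.availableRawPhotoPixelFormatTypes)   // should be non-empty again on supported devices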

iOS camera facetracking (Swift 3 Xcode 8)

I am trying to make a simple camera application where the front camera can detect faces.
This should be simple enough:
Create a CameraView class that inherits from UIImageView and place it in the UI. Make sure it implements AVCaptureVideoDataOutputSampleBufferDelegate in order to process frames from the camera in real time.
class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate
Within a function handleCamera, called when the CameraView is instantiated, setup an AVCapture session. Add input from the camera.
override init(frame: CGRect) {
    super.init(frame: frame)
    handleCamera()
}

func handleCamera() {
    camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                           mediaType: AVMediaTypeVideo, position: .front)
    session = AVCaptureSession()

    // Set recovered camera as an input device for the capture session
    do {
        try input = AVCaptureDeviceInput(device: camera)
    } catch _ as NSError {
        print("ERROR: Front camera can't be used as input")
        input = nil
    }

    // Add the input from the camera to the capture session
    if session?.canAddInput(input) == true {
        session?.addInput(input)
    }
Create the output. Create a serial output queue to pass the data to, which will then be processed by the AVCaptureVideoDataOutputSampleBufferDelegate (the class itself in this case). Add the output to the session.
output = AVCaptureVideoDataOutput()
output?.alwaysDiscardsLateVideoFrames = true
outputQueue = DispatchQueue(label: "outputQueue")
output?.setSampleBufferDelegate(self, queue: outputQueue)

// add front camera output to the session for use and modification
if session?.canAddOutput(output) == true {
    session?.addOutput(output)
} else {
    // front camera can't be used as output, not working: handle error
    print("ERROR: Output not viable")
}
Setup the camera preview view and run the session
// Setup camera preview with the session input
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
previewLayer?.frame = self.bounds
self.layer.addSublayer(previewLayer!)

// Process the camera and run it onto the preview
session?.startRunning()
In the captureOutput function run by the delegate, convert the received sample buffer to a CIImage in order to detect faces. Give feedback if a face is found.
func captureOutput(_ captureOutput: AVCaptureOutput!, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)

    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: cameraImage)

    for face in faces as! [CIFaceFeature] {
        print("Found bounds are \(face.bounds)")

        let faceBox = UIView(frame: face.bounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        self.addSubview(faceBox)

        if face.hasLeftEyePosition {
            print("Left eye bounds are \(face.leftEyePosition)")
        }
        if face.hasRightEyePosition {
            print("Right eye bounds are \(face.rightEyePosition)")
        }
    }
}
My problem: I can get the camera running, but despite the multitude of different code samples I have tried from all over the internet, I have never been able to get captureOutput to detect a face. Either the application doesn't enter the function, or it crashes because of a variable that doesn't work, most often because the sampleBuffer variable is nil.
What am I doing wrong?
You need to change your captureOutput function arguments to the following:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
Your captureOutput function is called when a buffer is dropped, not when one is received from the camera.
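Putting the two answers together, a minimal sketch of the corrected Swift 3 callback might look like this (the face-detection body is the one from the question):

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // This variant fires for every delivered frame, not just dropped ones
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)
    // ... run the CIDetector face detection from the question on cameraImage ...
}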

How to record IOS screen programmatically

Is there any way to record the iOS screen programmatically? I mean whatever activity you are doing, like tapping buttons or scrolling table views.
Even if a video is playing, will that be captured too, along with the other activity?
I have tried these:
https://www.raywenderlich.com/30200/avfoundation-tutorial-adding-overlays-and-animations-to-videos
https://github.com/alskipp/ASScreenRecorder
but these libraries don't provide quality video, and I need high-quality video.
The issue is that, with a video playing in the background, when I capture the screen the result is not smooth. It shows one frame of the video, then the next frame after 3-4 seconds, and so on. The quality of the video is also not good; it's blurred.
As of iOS 9, it looks like ReplayKit is available to greatly simplify this.
https://developer.apple.com/reference/replaykit
https://code.tutsplus.com/tutorials/ios-9-an-introduction-to-replaykit--cms-25458
Update: This may be less relevant now that iOS 11 has a built-in screen recorder, but the following Swift 3 code worked for me:
import ReplayKit

@IBAction func toggleRecording(_ sender: UIBarButtonItem) {
    let r = RPScreenRecorder.shared()

    guard r.isAvailable else {
        print("ReplayKit unavailable")
        return
    }

    if r.isRecording {
        self.stopRecording(sender, r)
    } else {
        self.startRecording(sender, r)
    }
}

func startRecording(_ sender: UIBarButtonItem, _ r: RPScreenRecorder) {
    r.startRecording(handler: { (error: Error?) -> Void in
        if error == nil { // Recording has started
            sender.title = "Stop"
        } else {
            // Handle error
            print(error?.localizedDescription ?? "Unknown error")
        }
    })
}

func stopRecording(_ sender: UIBarButtonItem, _ r: RPScreenRecorder) {
    r.stopRecording(handler: { previewViewController, error in
        sender.title = "Record"

        if let pvc = previewViewController {
            if UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiom.pad {
                pvc.modalPresentationStyle = UIModalPresentationStyle.popover
                pvc.popoverPresentationController?.sourceRect = CGRect.zero
                pvc.popoverPresentationController?.sourceView = self.view
            }

            pvc.previewControllerDelegate = self
            self.present(pvc, animated: true, completion: nil)
        } else if let error = error {
            print(error.localizedDescription)
        }
    })
}

// MARK: RPPreviewViewControllerDelegate

func previewControllerDidFinish(_ previewController: RPPreviewViewController) {
    previewController.dismiss(animated: true, completion: nil)
}
ReplayKit is available, although you are not allowed to access the resulting video file directly. The only way I've found so far is to take a number of screenshots (store them in an array of images) and then convert those images to a video. It's not very efficient from a performance standpoint, but it might work when you don't really need 30/60 fps screen recording and 6-20 fps is OK. Here's the full example.
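For reference, a minimal sketch of the screenshot half of that approach (my own illustration, not the linked example): render the view hierarchy into a UIImage on each tick of a timer or display link and collect the results.

import UIKit

// Renders the view hierarchy into a UIImage; call this periodically and store the images.
func snapshot(of view: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: view.bounds)
    return renderer.image { _ in
        view.drawHierarchy(in: view.bounds, afterScreenUpdates: false)
    }
}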
Check out ScreenCaptureView; this has video-recording support built in (see link).
What this does is it saves the contents of a UIView to a UIImage. The author suggests you can save a video of the app in use by passing the frames through AVCaptureSession.
I believe it hasn't been tested with an OpenGL subview, but assuming that it works you might be able to modify it slightly to include audio and then you'd be set.
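To make the "frames into a video" step concrete, here is a hedged sketch of my own (not ScreenCaptureView's actual implementation) that feeds captured UIImage frames to an AVAssetWriter. You would call append(_:at:) with each snapshot and finish(completion:) when recording stops.

import UIKit
import AVFoundation

// A sketch of writing UIImage frames to a .mov file with AVAssetWriter.
final class FrameWriter {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private let size: CGSize

    init(outputURL: URL, size: CGSize) throws {
        self.size = size
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        input = AVAssetWriterInput(mediaType: .video, outputSettings: [
            AVVideoCodecKey: AVVideoCodecH264,
            AVVideoWidthKey: Int(size.width),
            AVVideoHeightKey: Int(size.height)
        ])
        adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input, sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32ARGB,
            kCVPixelBufferWidthKey as String: Int(size.width),
            kCVPixelBufferHeightKey as String: Int(size.height)
        ])
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: kCMTimeZero)
    }

    // Append one snapshot at the given time (seconds since the start of recording).
    func append(_ image: UIImage, at seconds: Double) {
        guard input.isReadyForMoreMediaData,
              let pool = adaptor.pixelBufferPool,
              let cgImage = image.cgImage else { return }

        var buffer: CVPixelBuffer?
        let status = CVPixelBufferPoolCreatePixelBuffer(nil, pool, &buffer)
        guard status == kCVReturnSuccess, let pixelBuffer = buffer else { return }

        // Draw the image into the pixel buffer.
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        let context = CGContext(data: CVPixelBufferGetBaseAddress(pixelBuffer),
                                width: Int(size.width), height: Int(size.height),
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.draw(cgImage, in: CGRect(origin: .zero, size: size))
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])

        adaptor.append(pixelBuffer, withPresentationTime: CMTime(seconds: seconds, preferredTimescale: 600))
    }

    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}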
AVCaptureSession Sample
AVCaptureSession Reference
import UIKit
import AVFoundation

class ViewController: UIViewController {
    let captureSession = AVCaptureSession()
    let stillImageOutput = AVCaptureStillImageOutput()
    var error: NSError?

    override func viewDidLoad() {
        super.viewDidLoad()
        let devices = AVCaptureDevice.devices().filter { $0.hasMediaType(AVMediaTypeVideo) && $0.position == AVCaptureDevicePosition.Back }
        if let captureDevice = devices.first as? AVCaptureDevice {
            captureSession.addInput(AVCaptureDeviceInput(device: captureDevice, error: &error))
            captureSession.sessionPreset = AVCaptureSessionPresetPhoto
            captureSession.startRunning()
            stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
            if captureSession.canAddOutput(stillImageOutput) {
                captureSession.addOutput(stillImageOutput)
            }
            if let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession) {
                previewLayer.bounds = view.bounds
                previewLayer.position = CGPointMake(view.bounds.midX, view.bounds.midY)
                previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
                let cameraPreview = UIView(frame: CGRectMake(0.0, 0.0, view.bounds.size.width, view.bounds.size.height))
                cameraPreview.layer.addSublayer(previewLayer)
                cameraPreview.addGestureRecognizer(UITapGestureRecognizer(target: self, action: "saveToCamera:"))
                view.addSubview(cameraPreview)
            }
        }
    }

    func saveToCamera(sender: UITapGestureRecognizer) {
        if let videoConnection = stillImageOutput.connectionWithMediaType(AVMediaTypeVideo) {
            stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
                (imageDataSampleBuffer, error) -> Void in
                let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
                UIImageWriteToSavedPhotosAlbum(UIImage(data: imageData), nil, nil, nil)
            }
        }
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
    }
}
You can use this library to record a view: screen-cap-view, available on GitHub, written in Objective-C.
And to use it in Swift:
--> Drag and drop the .m and .h files into your Xcode project.
--> Make a bridging header file and import this file in it: #import "IAScreenCaptureView.h"
--> Then assign this class to a view in the Property Inspector and make an IBOutlet for that view, something like this: @IBOutlet weak var contentView: IAScreenCaptureView!
--> Then finally just start and stop the recording of the view wherever and whenever you want:
For starting the recording: contentView.startRecording()
For stopping the recording: contentView.stopRecording()
Hope this helps. Happy coding.

Capturing image with avfoundation

I am using AVFoundation for the camera.
This is my live preview:
It looks good. When the user presses the "Button", I create a snapshot on the same screen (like Snapchat).
I am using the following code for capturing the image and showing it on the screen:
self.stillOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
    (imageSampleBuffer: CMSampleBuffer!, _) in

    let imageDataJpeg = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
    let pickedImage: UIImage = UIImage(data: imageDataJpeg)!

    self.captureSession.stopRunning()
    self.previewImageView.frame = CGRect(x: 0, y: 0, width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height)
    self.previewImageView.image = pickedImage
    self.previewImageView.layer.zPosition = 100
}
After the user captures an image, the screen looks like this:
Please look at the marked area. It wasn't visible on the live preview screen (screenshot 1).
I mean the live preview is not showing everything. But I am sure my live preview works well, because I compared it with other camera apps and everything was the same as my live preview screen. I guess I have a problem with the captured image.
I am creating the live preview with the following code:
override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto
    let devices = AVCaptureDevice.devices()
    for device in devices {
        // Make sure this particular device supports video
        if device.hasMediaType(AVMediaTypeVideo) {
            // Finally check the position and confirm we've got the back camera
            if device.position == AVCaptureDevicePosition.Back {
                captureDevice = device as? AVCaptureDevice
            }
        }
    }
    if captureDevice != nil {
        beginSession()
    }
}

func beginSession() {
    let err: NSError? = nil
    do {
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
    } catch {
    }
    captureSession.addOutput(stillOutput)
    if err != nil {
        print("error: \(err?.localizedDescription)")
    }
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.cameraLayer.layer.addSublayer(previewLayer!)
    previewLayer?.frame = self.cameraLayer.frame
    captureSession.startRunning()
}
My cameraLayer :
How can I resolve this problem?
Presumably you are using an AVCaptureVideoPreviewLayer. So it sounds like this layer is incorrectly placed or incorrectly sized, or it has the wrong AVLayerVideoGravity setting. Part of the image is off the screen or cropped; that's why you don't see that part of what the camera sees while you are previewing.
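If you do want the saved photo to match exactly what the aspect-filled preview shows, one option (a sketch of my own, not from the answers; it assumes the previewLayer and pickedImage from the question, and uses the modern Swift name of the conversion API) is to crop the captured image to the rect the preview layer actually displays:

// metadataOutputRectConverted(fromLayerRect:) returns the visible portion of the frame
// in normalized (0-1) image coordinates; older Swift exposed it as metadataOutputRectOfInterest(for:).
let visibleRect = previewLayer.metadataOutputRectConverted(fromLayerRect: previewLayer.bounds)
if let cgImage = pickedImage.cgImage {
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let cropRect = CGRect(x: visibleRect.origin.x * width,
                          y: visibleRect.origin.y * height,
                          width: visibleRect.size.width * width,
                          height: visibleRect.size.height * height)
    if let croppedCGImage = cgImage.cropping(to: cropRect) {
        // This image contains only what the preview layer was showing
        let croppedImage = UIImage(cgImage: croppedCGImage,
                                   scale: pickedImage.scale,
                                   orientation: pickedImage.imageOrientation)
        self.previewImageView.image = croppedImage
    }
}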
OK, I found the solution.
I used
captureSession.sessionPreset = AVCaptureSessionPresetHigh
instead of
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
and then the problem was fixed.
