iOS camera face tracking (Swift 3, Xcode 8)

I am trying to make a simple camera application where the front camera can detect faces.
This should be simple enough:
Create a CameraView class that inherits from UIImageView and place it in the UI. Make sure it implements AVCaptureVideoDataOutputSampleBufferDelegate in order to process frames from the camera in real time.
class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate
Within a function handleCamera, called when the CameraView is instantiated, setup an AVCapture session. Add input from the camera.
override init(frame: CGRect) {
    super.init(frame: frame)
    handleCamera()
}

func handleCamera() {
    camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                           mediaType: AVMediaTypeVideo, position: .front)
    session = AVCaptureSession()
    // Set recovered camera as an input device for the capture session
    do {
        try input = AVCaptureDeviceInput(device: camera)
    } catch _ as NSError {
        print("ERROR: Front camera can't be used as input")
        input = nil
    }
    // Add the input from the camera to the capture session
    if (session?.canAddInput(input) == true) {
        session?.addInput(input)
    }
Create the output. Create a serial output queue to pass the data to, which will then be processed by the AVCaptureVideoDataOutputSampleBufferDelegate (the class itself in this case). Add the output to the session.
output = AVCaptureVideoDataOutput()
output?.alwaysDiscardsLateVideoFrames = true
outputQueue = DispatchQueue(label: "outputQueue")
output?.setSampleBufferDelegate(self, queue: outputQueue)
// Add front camera output to the session for use and modification
if (session?.canAddOutput(output) == true) {
    session?.addOutput(output)
} else {
    // Front camera can't be used as output: handle error
    print("ERROR: Output not viable")
}
Set up the camera preview layer and run the session.
// Setup camera preview with the session input
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
previewLayer?.frame = self.bounds
self.layer.addSublayer(previewLayer!)
// Process the camera and run it onto the preview
session?.startRunning()
In the captureOutput function called on the delegate, convert the received sample buffer to a CIImage in order to detect faces. Give feedback if a face is found.
func captureOutput(_ captureOutput: AVCaptureOutput!, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: cameraImage)
    for face in faces as! [CIFaceFeature] {
        print("Found bounds are \(face.bounds)")
        let faceBox = UIView(frame: face.bounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        self.addSubview(faceBox)
        if face.hasLeftEyePosition {
            print("Left eye bounds are \(face.leftEyePosition)")
        }
        if face.hasRightEyePosition {
            print("Right eye bounds are \(face.rightEyePosition)")
        }
    }
}
My problem: I can get the camera running, but despite the multitude of different code snippets I have tried from all over the internet, I have never been able to get captureOutput to detect a face. Either the application never enters the function, or it crashes because of a variable that doesn't work, most often because the sampleBuffer variable is nil.
What am I doing wrong?

You need to change your captureOutput function arguments to the following: func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
Your captureOutput function is currently declared with didDrop, so it is only called when a buffer is dropped, not when one arrives from the camera.
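For reference, a minimal sketch of the delegate method with the corrected Swift 3 signature; the detection body below is just a trimmed version of the code from the question.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    // Guard against a nil pixel buffer instead of force-unwrapping it
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)
    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    if let faces = faceDetector?.features(in: cameraImage), !faces.isEmpty {
        print("Found \(faces.count) face(s)")
    }
}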

Related

Swift how to turn on the flash mode for custom camera

When I build the custom camera, I set the flash mode of the current device by the following code:
try currentDevice.lockForConfiguration()
currentDevice.flashMode = .on
currentDevice.unlockForConfiguration()
My capture-picture function is the standard captureOutput function (from AVCaptureVideoDataOutputSampleBufferDelegate), which works well, but the flash mode doesn't work.
func captureOutput(
    _ output: AVCaptureOutput,
    didOutput sampleBuffer: CMSampleBuffer,
    from connection: AVCaptureConnection
) {
    if !takePicture {
        return
    }
    takePicture = false
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    let ciImage = CIImage(cvImageBuffer: cvBuffer)
    let uiImage = UIImage(ciImage: ciImage)
    savePicture(image: uiImage)
}
First of all, var flashMode: AVCaptureDevice.FlashMode { get set } is deprecated (Apple docs).
Maybe you mean this flashMode, which is set on AVCapturePhotoSettings and supersedes the deprecated one.
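For the photo-flash path, a minimal sketch (assuming stills are captured with an AVCapturePhotoOutput stored in a hypothetical photoOutput property, and self conforms to AVCapturePhotoCaptureDelegate):
// photoOutput is an assumed, already-configured AVCapturePhotoOutput
let settings = AVCapturePhotoSettings()
if photoOutput.supportedFlashModes.contains(.on) {
    settings.flashMode = .on   // flash is requested per capture, not on the device
}
photoOutput.capturePhoto(with: settings, delegate: self)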
But if you need the LED light to stay on, you should use torchMode on the AVCaptureDevice object.
In order to do that you need to make sure that the currentDevice has torch available.
guard currentDevice.isTorchAvailable else { ...fallback... }
Then you need to lock the device for configuration.
try currentDevice.lockForConfiguration()
And set the desired torch mode.
currentDevice.torchMode = .on // or .off
Remember to unlock the device after you are done with configuration
currentDevice.unlockForConfiguration()
If you want to turn the torch on with a specific level of intensity, you can use this method (instead of currentDevice.torchMode = .on):
try currentDevice.setTorchModeOn(level: 0.3)
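Putting the torch pieces together, a minimal helper might look like this (a sketch; the function name is mine, and the device is assumed to be a back camera with a torch):
func setTorch(on: Bool, level: Float = 1.0, for device: AVCaptureDevice) {
    guard device.isTorchAvailable else { return } // e.g. front cameras usually have no torch
    do {
        try device.lockForConfiguration()
        if on {
            try device.setTorchModeOn(level: level) // or simply device.torchMode = .on
        } else {
            device.torchMode = .off
        }
        device.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}
Call it as, for example, setTorch(on: true, level: 0.3, for: currentDevice).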

captureOutput not being called by AVCaptureAudioDataOutputSampleBufferDelegate

I have an app that records video, but I need it to show pitch levels of the sounds captured on the microphone in real-time to the user. I have been able to successfully record audio and video to MP4 using AVCaptureSession. However, when I add AVCaptureAudioDataOutput to the session and assign the AVCaptureAudioDataOutputSampleBufferDelegate I receive no errors, and yet the captureOutput function is never called once the session starts.
Here is the code:
import UIKit
import AVFoundation
import CoreLocation
class ViewController: UIViewController,
AVCaptureVideoDataOutputSampleBufferDelegate,
AVCaptureFileOutputRecordingDelegate, CLLocationManagerDelegate ,
AVCaptureAudioDataOutputSampleBufferDelegate {
var videoFileOutput: AVCaptureMovieFileOutput!
let session = AVCaptureSession()
var outputURL: URL!
var timer:Timer!
var locationManager:CLLocationManager!
var currentMagnitudeValue:CGFloat!
var defaultMagnitudeValue:CGFloat!
var visualMagnitudeValue:CGFloat!
var soundLiveOutput: AVCaptureAudioDataOutput!
override func viewDidLoad() {
super.viewDidLoad()
self.setupAVCapture()
}
func setupAVCapture(){
session.beginConfiguration()
//Add the camera INPUT to the session
let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
for: .video, position: .front)
guard
let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice!),
session.canAddInput(videoDeviceInput)
else { return }
session.addInput(videoDeviceInput)
//Add the microphone INPUT to the session
let microphoneDevice = AVCaptureDevice.default(.builtInMicrophone, for: .audio, position: .unspecified)
guard
let audioDeviceInput = try? AVCaptureDeviceInput(device: microphoneDevice!),
session.canAddInput(audioDeviceInput)
else { return }
session.addInput(audioDeviceInput)
//Add the video file OUTPUT to the session
videoFileOutput = AVCaptureMovieFileOutput()
guard session.canAddOutput(videoFileOutput) else {return}
if (session.canAddOutput(videoFileOutput)) {
session.addOutput(videoFileOutput)
}
//Add the audio output so we can get PITCH of the sounds
//AND assign the SampleBufferDelegate
soundLiveOutput = AVCaptureAudioDataOutput()
soundLiveOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "test"))
if (session.canAddOutput(soundLiveOutput)) {
session.addOutput(soundLiveOutput)
print ("Live AudioDataOutput added")
} else
{
print("Could not add AudioDataOutput")
}
//Preview Layer
let previewLayer = AVCaptureVideoPreviewLayer(session: session)
let rootLayer :CALayer = self.cameraView.layer
rootLayer.masksToBounds=true
previewLayer.frame = rootLayer.bounds
rootLayer.addSublayer(previewLayer)
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill;
//Finalize the session
session.commitConfiguration()
//Begin the session
session.startRunning()
}
func captureOutput(_: AVCaptureOutput, didOutput: CMSampleBuffer, from:
AVCaptureConnection) {
print("Bingo")
}
}
Expected output:
Bingo
Bingo
Bingo
...
I have read:
StackOverflow: captureOutput not being called - The user was not declaring the captureOutput method correctly.
StackOverflow: AVCaptureVideoDataOutput captureOutput not being called - The user was not declaring the captureOutput method at all.
Apple - AVCaptureAudioDataOutputSampleBufferDelegate - Apple's documentation on the delegate and its method - the method matches the method I have declared.
Other common errors I have encountered online:
Using the declaration for older versions of Swift (I am using v4.1)
According to one article, after Swift 4.0 AVCaptureMetadataOutput replaces AVCaptureAudioDataOutput. Although I couldn't find this in Apple's documentation, I tried it too, but similarly the metadataOutput function is never called.
I am fresh out of ideas. Am I missing something obvious?
OK, nobody got back to me, but after playing around with it I worked out that the correct way to declare the captureOutput method for Swift 4 is as follows:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Do your stuff here
}
Unfortunately, the documentation for this online is very poor. I guess you just need to get it exactly right - since it is an optional protocol method, no errors are thrown if you misspell or misname the parameters.
The method you were using has been updated to this one, which is called for both AVCaptureAudioDataOutput and AVCaptureVideoDataOutput. Make sure you check which output produced the sample buffer before writing it to the asset writer.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // Make sure you check the output before using the sample buffer
    if output == audioDataOutput {
        // Use sample buffer for audio
    }
}
The problem for me turned out to be that the AVAudioSession and AVCaptureSession were declared as local variables, so when I started the session it just went away. Once I moved them to class-level variables, everything worked great!
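To illustrate that fix, here is a stripped-down sketch (class and property names are mine) in which the session, output, and queue are stored properties, so they outlive the setup call:
import UIKit
import AVFoundation

class AudioLevelViewController: UIViewController, AVCaptureAudioDataOutputSampleBufferDelegate {
    // Stored at class level so they are not deallocated when viewDidLoad returns
    let session = AVCaptureSession()
    let audioOutput = AVCaptureAudioDataOutput()
    let sampleQueue = DispatchQueue(label: "audio.sample.queue")

    override func viewDidLoad() {
        super.viewDidLoad()
        guard
            let mic = AVCaptureDevice.default(.builtInMicrophone, for: .audio, position: .unspecified),
            let micInput = try? AVCaptureDeviceInput(device: mic),
            session.canAddInput(micInput),
            session.canAddOutput(audioOutput)
        else { return }
        session.addInput(micInput)
        audioOutput.setSampleBufferDelegate(self, queue: sampleQueue)
        session.addOutput(audioOutput)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        print("Bingo")
    }
}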

Sampling audio in real time using Aubio without stopping recording audio AND video iPhone/iPad

Swift 2.2
Xcode 7.3
Aubio 0.4.3 (aubio-0.4.3~const.iosuniversal_framework)
iOS 9.3 Target
Test Device - iPad Air
bufferSize: 2048
numSamplesInBuffer: 1024
Sample Rate: 44100
Caveats:
I have intentionally left the AVCaptureVideo code in my code example below so that anyone skimming my question will not forget that I am trying to capture audio AND video with the same AVCaptureSession while sampling the audio in real time.
I have fully tested Aubio -> Onset, specifically with a sample.caf (Core Audio Format) sound file as well as a recording saved to file (also a .caf) using AVAudioRecorder, and it works correctly on a real device (iPad Air). A very important takeaway about why Aubio works in those tests is that I create a URI/file-based sample with new_aubio_source. In my "real" version I am attempting to sample the sound buffer without saving the audio data to file.
Possible alternative approach to using Aubio: if I could start storing AudioBuffers as a valid Core Audio Format (.caf) file, Aubio would work (though I am not sure sampling would be fast enough with a file-based solution). After days of research, however, I have not figured out how to store the CMSampleBufferRefs delivered to func captureOutput(captureOutput: AVCaptureOutput, didOutputSampleBuffer sampleBuffer: CMSampleBufferRef, fromConnection connection: AVCaptureConnection) to file. That includes using NSData, which never stores a valid .caf to file. (See the sketch after this list of caveats.)
Related to the previous caveat, I have not found a way to use AVFoundation's otherwise very helpful objects such as AVAudioRecorder (which will store a nice .caf to file) because they depend on stopping the recording/capture session.
If you remove all video capture code you can run this on the simulator; please comment below and I will prepare a simulator version of the code if you want one (i.e. you do not have an Apple device handy). Camera functionality must be tested on a real device.
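To make the ".caf to file" idea in the caveats concrete, a rough, untested sketch (Swift 2.2-era API; the class name is invented) would append the audio CMSampleBuffers with an AVAssetWriter in passthrough mode instead of relying on AVAudioRecorder, so the capture session never has to stop:
import AVFoundation

class CafSampleWriter {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private var started = false

    init?(outputURL: NSURL) {
        guard let writer = try? AVAssetWriter(URL: outputURL, fileType: AVFileTypeCoreAudioFormat) else { return nil }
        self.writer = writer
        // nil outputSettings means the samples are passed through in their original format
        input = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: nil)
        input.expectsMediaDataInRealTime = true
        guard writer.canAddInput(input) else { return nil }
        writer.addInput(input)
    }

    func append(sampleBuffer: CMSampleBufferRef) {
        if !started {
            writer.startWriting()
            writer.startSessionAtSourceTime(CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            started = true
        }
        if input.readyForMoreMediaData {
            input.appendSampleBuffer(sampleBuffer)
        }
    }

    func finish(completion: () -> Void) {
        input.markAsFinished()
        writer.finishWritingWithCompletionHandler(completion)
    }
}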
The following code successfully starts an audio and video AVCaptureSession, and the delegate method func captureOutput(captureOutput: AVCaptureOutput, didOutputSampleBuffer sampleBuffer: CMSampleBufferRef, fromConnection connection: AVCaptureConnection) is called for both audio and video. When an audio CMSampleBufferRef sample is provided, I try to convert it to an AudioBuffer and pass it to the Aubio method aubio_onset_do. I am using a singleton aubio_onset COpaquePointer.
In this code I attempt to call aubio_onset_do with audio buffer data two different ways.
Method 1 - The current way of the code below is with let useTimerAndNSMutableData = false. This means that in my prepareAudioBuffer function I pass the audioBuffer.mData to sampleAudioForOnsets. This method never fails, but no onsets are ever detected either; I suspect the sample size is not large enough.
Method 2 - If useTimerAndNSMutableData = true, I ultimately call sampleAudioForOnsets every second, allowing time to build up NSMutableData with the AudioBuffer.mDatas. With this method I am attempting to give aubio_onset_do a large enough sample to detect onsets, using a timer and NSMutableData. This method causes aubio_onset_do to crash very quickly:
(EXC_BAD_ACCESS (code=1))
import UIKit
import AVFoundation
class AvRecorderViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate, AVCaptureAudioDataOutputSampleBufferDelegate, AVAudioRecorderDelegate, AVAudioPlayerDelegate {
var captureSession: AVCaptureSession!
var imageView:UIImageView!
var customLayer:CALayer!
var prevLayer:AVCaptureVideoPreviewLayer!
let samplingFrequency = Int32(30)
var aubioOnset:COpaquePointer? = nil
let pathToSoundSample = FileUtility.getPathToAudioSampleFile()
var onsetCount = 0
let testThres:smpl_t = 0.03
let nsMutableData: NSMutableData = NSMutableData()
var sampleRate:UInt32!
var bufferSize:UInt32!
let useTimerAndNSMutableData = false
override func viewDidLoad() {
super.viewDidLoad()
if FileUtility.fileExistsAtPath(pathToSoundSample) {
print("sample file exists")
FileUtility.deleteFileByNsurl(NSURL(fileURLWithPath: pathToSoundSample))
}
setupCapture()
if useTimerAndNSMutableData {
//create timer for sampling audio
NSTimer.scheduledTimerWithTimeInterval(1, target: self, selector: #selector(timerFiredPrepareForAubioOnsetSample), userInfo: nil, repeats: true)
}
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
override func viewWillTransitionToSize(size: CGSize, withTransitionCoordinator coordinator: UIViewControllerTransitionCoordinator) {
super.viewWillTransitionToSize(size, withTransitionCoordinator: coordinator)
coordinator.animateAlongsideTransition({ (context) -> Void in
}, completion: { (context) -> Void in
})
}
override func viewWillLayoutSubviews() {
prevLayer.frame = self.view.bounds
if prevLayer.connection.supportsVideoOrientation {
prevLayer.connection.videoOrientation = MediaUtility.interfaceOrientationToVideoOrientation(UIApplication.sharedApplication().statusBarOrientation)
}
}
func timerFiredPrepareForAubioOnsetSample() {
if nsMutableData.length <= 0 {
return
}
let data = UnsafeMutablePointer<smpl_t>(nsMutableData.bytes)
sampleAudioForOnsets(data, length: UInt32(nsMutableData.length))
}
func setupCapture() {
let captureDeviceVideo: AVCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
let captureDeviceAudio: AVCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)
var captureVideoInput: AVCaptureDeviceInput
var captureAudioInput: AVCaptureDeviceInput
//video setup
if captureDeviceVideo.isTorchModeSupported(.On) {
try! captureDeviceVideo.lockForConfiguration()
/*if captureDeviceVideo.position == AVCaptureDevicePosition.Front {
captureDeviceVideo.position == AVCaptureDevicePosition.Back
}*/
//configure frame rate
/*We specify a minimum duration for each frame (play with this settings to avoid having too many frames waiting
in the queue because it can cause memory issues). It is similar to the inverse of the maximum framerate.
In this example we set a min frame duration of 1/10 seconds so a maximum framerate of 10fps. We say that
we are not able to process more than 10 frames per second.*/
captureDeviceVideo.activeVideoMaxFrameDuration = CMTimeMake(1, samplingFrequency)
captureDeviceVideo.activeVideoMinFrameDuration = CMTimeMake(1, samplingFrequency)
captureDeviceVideo.unlockForConfiguration()
}
//try and create audio and video inputs
do {
try captureVideoInput = AVCaptureDeviceInput(device: captureDeviceVideo)
try captureAudioInput = AVCaptureDeviceInput(device: captureDeviceAudio)
} catch {
print("cannot record")
return
}
/*setup the output*/
let captureVideoDataOutput: AVCaptureVideoDataOutput = AVCaptureVideoDataOutput()
let captureAudioDataOutput: AVCaptureAudioDataOutput = AVCaptureAudioDataOutput()
/*While a frame is being processed in the -captureVideoDataOutput:didOutputSampleBuffer:fromConnection: delegate method, no other frames are added to the queue.
If you don't want this behaviour, set the property to false */
captureVideoDataOutput.alwaysDiscardsLateVideoFrames = true
// Set the video output to store frame in BGRA (It is supposed to be faster)
let videoSettings: [NSObject : AnyObject] = [kCVPixelBufferPixelFormatTypeKey:Int(kCVPixelFormatType_32BGRA)]
captureVideoDataOutput.videoSettings = videoSettings
/*And we create a capture session*/
captureSession = AVCaptureSession()
//and configure session
captureSession.sessionPreset = AVCaptureSessionPresetHigh
/*We add audio/video input and output to session*/
captureSession.addInput(captureVideoInput)
captureSession.addInput(captureAudioInput)
captureSession.addOutput(captureVideoDataOutput)
captureSession.addOutput(captureAudioDataOutput)
//not sure if I need this or not, found on internet
captureSession.commitConfiguration()
/*We create a serial queue to handle the processing of our frames*/
var queue: dispatch_queue_t
queue = dispatch_queue_create("queue", DISPATCH_QUEUE_SERIAL)
//setup delegate
captureVideoDataOutput.setSampleBufferDelegate(self, queue: queue)
captureAudioDataOutput.setSampleBufferDelegate(self, queue: queue)
/*We add the Custom Layer (We need to change the orientation of the layer so that the video is displayed correctly)*/
customLayer = CALayer()
customLayer.frame = self.view.bounds
customLayer.transform = CATransform3DRotate(CATransform3DIdentity, CGFloat(M_PI) / 2.0, 0, 0, 1)
customLayer.contentsGravity = kCAGravityResizeAspectFill
view.layer.addSublayer(self.customLayer)
/*We add the imageView*/
imageView = UIImageView()
imageView.frame = CGRectMake(0, 0, 100, 100)
view!.addSubview(self.imageView)
/*We add the preview layer*/
prevLayer = AVCaptureVideoPreviewLayer()
prevLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
prevLayer.frame = CGRectMake(100, 0, 100, 100)
prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
view.layer.addSublayer(self.prevLayer)
/*We start the capture*/
captureSession.startRunning()
}
// MARK: AVCaptureSession delegates
func captureOutput(captureOutput: AVCaptureOutput, didOutputSampleBuffer sampleBuffer: CMSampleBufferRef, fromConnection connection: AVCaptureConnection) {
if (captureOutput is AVCaptureAudioDataOutput) {
prepareAudioBuffer(sampleBuffer)
}
//not relevant to my Stack Overflow question
/*if (captureOutput is AVCaptureVideoDataOutput) {
displayVideo(sampleBuffer)
}*/
}
func captureOutput(captureOutput: AVCaptureOutput!, didDropSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {
print("frame dropped")
}
private func sampleAudioForOnsets(data: UnsafeMutablePointer<smpl_t>, length: UInt32) {
print("\(#function)")
//let samples = new_fvec(512)
var total_frames : uint_t = 0
let out_onset = new_fvec (1)
var read : uint_t = 0
//singleton of aubio_onset
if aubioOnset == nil {
let method = ("default" as NSString).UTF8String
aubioOnset = new_aubio_onset(UnsafeMutablePointer<Int8>(method), bufferSize, 512, UInt32(sampleRate))
aubio_onset_set_threshold(aubioOnset!, testThres)
}
var sample: fvec_t = fvec_t(length: length, data: data)
//I do not need the while loop, but I have left it in because it will be familiar to people who have used Aubio before
//and may remind them that aubio_source_do is normally used to "seek" through a sample
while true {
//aubio_source_do(COpaquePointer(source), samples, &read)
//new_aubio_onset hop_size is 512, will aubio_onset_do move through a fvec_t sample at a 512 hop without an aubio_source_do call?
aubio_onset_do(aubioOnset!, &sample, out_onset)
if (fvec_get_sample(out_onset, 0) != 0) {
print(String(format: ">>> %.2f", aubio_onset_get_last_s(aubioOnset!)))
onsetCount += 1
}
total_frames += read
//will always break on the first iteration; the only reason for the while loop is to demonstrate the normal use of aubio, using aubio_source_do to read
if (read < 512) {
break
}
}
print("done, total onsetCount: \(onsetCount)")
if onsetCount > 1 {
print("we are getting onsets")
}
}
// MARK: - Private Helpers
private func prepareAudioBuffer(sampleBuffer: CMSampleBufferRef) {
let numSamplesInBuffer = CMSampleBufferGetNumSamples(sampleBuffer)
bufferSize = UInt32(CMSampleBufferGetTotalSampleSize(sampleBuffer))
var blockBuffer:CMBlockBufferRef? = nil
var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
var status:OSStatus
let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer)!
let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription)
sampleRate = UInt32(asbd.memory.mSampleRate)
print("bufferSize: \(bufferSize)")
print("numSamplesInBuffer: \(numSamplesInBuffer)")
print("Sample Rate: \(sampleRate)")
print("assetWriter.status: ")
status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
sampleBuffer,
nil,
&audioBufferList,
sizeof(audioBufferList.dynamicType), // instead of UInt(sizeof(audioBufferList.dynamicType))
nil,
nil,
UInt32(kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment),
&blockBuffer
)
let audioBuffers = UnsafeBufferPointer<AudioBuffer>(start: &audioBufferList.mBuffers, count: Int(audioBufferList.mNumberBuffers))
for audioBuffer in audioBuffers {
if useTimerAndNSMutableData {
//NSDATA APPEND: NSMutableData is building up and will be analyzed at the timer interval
let frame = UnsafePointer<Float32>(audioBuffer.mData)
nsMutableData.appendBytes(frame, length: Int(audioBuffer.mDataByteSize))
}else{
//this never fails but there are never any onsets either, cannot tell if the audio sampling is just not long enough
//or if the data really isn't valid data
//smpl_t is a Float
let data = UnsafeMutablePointer<smpl_t>(audioBuffer.mData)
sampleAudioForOnsets(data, length: audioBuffer.mDataByteSize)
}
}
}
}

Realtime apply filter using AVCaptureVideoDataOutput and write buffer to file

I'm developing a camera application in Swift that edits every frame in real time.
So what I need:
Edit every frame using CIFilter - probably using AVCaptureVideoDataOutput
Write data buffer to AVAssetWriter
Add audio
Add some labels to the video
Real-time view of what the camera "sees", but already edited by the CIFilter
What I have:
I'm able to edit the video output and show it on screen using an imageView. But I think this is not the best way to do it, and on an iPhone 4S I get a really low FPS:
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!)
{
    guard let filter = Filters["Saturation"] else { return }
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(CVPixelBuffer: pixelBuffer!)
    filter.setValue(cameraImage, forKey: kCIInputImageKey)
    let filteredImage = UIImage(CIImage: filter.valueForKey(kCIOutputImageKey) as! CIImage)
    dispatch_async(dispatch_get_main_queue())
    {
        self.imageView.image = filteredImage
    }
}
So how do I change my code to edit frames more efficiently, show them on screen, and, when needed, write them to a file? (I know that I must convert the filtered image back to a CMSampleBuffer, but I don't know how to use NSTime.)

How BarCode,QR Code are recognized without capturing the image?

I wonder how barcodes and QR codes (even characters) are recognized without capturing an image. I have seen in many apps that when we hold our device over a QR code or barcode, the app automatically recognizes it and starts processing. Is there a scanning mechanism used for this? How can this be achieved? What are the mechanisms involved?
Thanks in advance.
1) The phone camera is launched by the library; it autofocuses and scans until it decodes information from the image the camera sees.
2) The info is parsed by the library, which gives you the result.
The decoded info is the information that was encoded in the barcode.
For a QR code, the data is present as a square matrix; for a barcode, the data is present as vertical lines.
The library has all the logic for detecting the type of code and decoding it according to the format.
Please read the QR code / barcode libraries' docs, or implement one and learn from it.
You can use AVCaptureSession, e.g.:
let session = AVCaptureSession()
var qrPayload: String?

func startSession() {
    guard !started else { return }

    let output = AVCaptureMetadataOutput()
    output.setMetadataObjectsDelegate(self, queue: .main)

    let device: AVCaptureDevice?
    if #available(iOS 10.0, *) {
        device = AVCaptureDevice
            .DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .metadataObject, position: .back)
            .devices
            .first
    } else {
        device = AVCaptureDevice.devices().first { $0.position == .back }
    }

    guard
        let camera = device,
        let input = try? AVCaptureDeviceInput(device: camera),
        session.canAddInput(input),
        session.canAddOutput(output)
    else {
        // handle failures here
        return
    }

    session.addInput(input)
    session.addOutput(output)
    output.metadataObjectTypes = [.qr]

    let videoLayer = AVCaptureVideoPreviewLayer(session: session)
    videoLayer.frame = view.bounds
    videoLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(videoLayer)

    session.startRunning()
}
And extend your view controller to conform to AVCaptureMetadataOutputObjectsDelegate:
extension QRViewController: AVCaptureMetadataOutputObjectsDelegate {
    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        guard
            qrPayload == nil,
            let object = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
            let string = object.stringValue
        else { return }

        qrPayload = string
        print(qrPayload)
        // perhaps dismiss this view controller now that you’ve succeeded
    }
}
Note, I’m testing to make sure that the qrPayload is nil because I find that you can see metadataOutput(_:didOutput:from:) get called a few times before the view controller is dismissed.
