Swift how to turn on the flash mode for custom camera - ios

When I build the custom camera, I set the flash mode of the current device by the following code:
try currentDevice.lockForConfiguration()
currentDevice.flashMode = .on
currentDevice.unlockForConfiguration()
My capture-picture function is the standard captureOutput callback (from AVCaptureVideoDataOutputSampleBufferDelegate), which works well; only the flash mode doesn't work.
func captureOutput(
    _ output: AVCaptureOutput,
    didOutput sampleBuffer: CMSampleBuffer,
    from connection: AVCaptureConnection
) {
    if !takePicture {
        return
    }
    takePicture = false
    guard let cvBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    let ciImage = CIImage(cvImageBuffer: cvBuffer)
    let uiImage = UIImage(ciImage: ciImage)
    savePicture(image: uiImage)
}

First of all, var flashMode: AVCaptureDevice.FlashMode { get set } is deprecated (see the Apple docs).
Maybe you mean the flashMode that is set on AVCapturePhotoSettings and supersedes the deprecated one.
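For reference, a minimal sketch of that per-capture approach (assuming a photoOutput: AVCapturePhotoOutput already added to the session and a delegate conforming to AVCapturePhotoCaptureDelegate; neither appears in the question):
let settings = AVCapturePhotoSettings()
if photoOutput.supportedFlashModes.contains(.on) {
    settings.flashMode = .on   // the flash fires only for this capture
}
photoOutput.capturePhoto(with: settings, delegate: self)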
But if you need the LED back light on, you should use torchMode on the AVCaptureDevice object.
In order to do that, you need to make sure that currentDevice has a torch available.
guard currentDevice.isTorchAvailable else { /* fallback */ }
Then you need to lock the device for configuration.
try currentDevice.lockForConfiguration()
And set the desired torch mode.
currentDevice.torchMode = .on // or .off
Remember to unlock the device after you are done with the configuration.
currentDevice.unlockForConfiguration()
If you want to turn the torch on at a specific intensity, you can use this method instead of currentDevice.torchMode = .on:
try currentDevice.setTorchModeOn(level: 0.3)
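Putting the steps above together, a minimal sketch of a torch helper (it reuses the question's currentDevice; the function name and error handling are illustrative):
func setTorch(on: Bool, level: Float = 1.0) {
    guard currentDevice.isTorchAvailable else { return }   // no torch on this device
    do {
        try currentDevice.lockForConfiguration()
        if on {
            try currentDevice.setTorchModeOn(level: level)  // 0.0 < level <= 1.0
        } else {
            currentDevice.torchMode = .off
        }
        currentDevice.unlockForConfiguration()
    } catch {
        print("Torch configuration failed: \(error)")
    }
}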

Related

captureOutput not being called by AVCaptureAudioDataOutputSampleBufferDelegate

I have an app that records video, but I need it to show pitch levels of the sounds captured on the microphone in real-time to the user. I have been able to successfully record audio and video to MP4 using AVCaptureSession. However, when I add AVCaptureAudioDataOutput to the session and assign the AVCaptureAudioDataOutputSampleBufferDelegate I receive no errors, and yet the captureOutput function is never called once the session starts.
Here is the code:
import UIKit
import AVFoundation
import CoreLocation

class ViewController: UIViewController,
    AVCaptureVideoDataOutputSampleBufferDelegate,
    AVCaptureFileOutputRecordingDelegate, CLLocationManagerDelegate,
    AVCaptureAudioDataOutputSampleBufferDelegate {

    var videoFileOutput: AVCaptureMovieFileOutput!
    let session = AVCaptureSession()
    var outputURL: URL!
    var timer: Timer!
    var locationManager: CLLocationManager!
    var currentMagnitudeValue: CGFloat!
    var defaultMagnitudeValue: CGFloat!
    var visualMagnitudeValue: CGFloat!
    var soundLiveOutput: AVCaptureAudioDataOutput!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.setupAVCapture()
    }

    func setupAVCapture() {
        session.beginConfiguration()
        //Add the camera INPUT to the session
        let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                  for: .video, position: .front)
        guard
            let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice!),
            session.canAddInput(videoDeviceInput)
        else { return }
        session.addInput(videoDeviceInput)
        //Add the microphone INPUT to the session
        let microphoneDevice = AVCaptureDevice.default(.builtInMicrophone, for: .audio, position: .unspecified)
        guard
            let audioDeviceInput = try? AVCaptureDeviceInput(device: microphoneDevice!),
            session.canAddInput(audioDeviceInput)
        else { return }
        session.addInput(audioDeviceInput)
        //Add the video file OUTPUT to the session
        videoFileOutput = AVCaptureMovieFileOutput()
        guard session.canAddOutput(videoFileOutput) else { return }
        if (session.canAddOutput(videoFileOutput)) {
            session.addOutput(videoFileOutput)
        }
        //Add the audio output so we can get PITCH of the sounds
        //AND assign the SampleBufferDelegate
        soundLiveOutput = AVCaptureAudioDataOutput()
        soundLiveOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "test"))
        if (session.canAddOutput(soundLiveOutput)) {
            session.addOutput(soundLiveOutput)
            print("Live AudioDataOutput added")
        } else {
            print("Could not add AudioDataOutput")
        }
        //Preview Layer
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        let rootLayer: CALayer = self.cameraView.layer
        rootLayer.masksToBounds = true
        previewLayer.frame = rootLayer.bounds
        rootLayer.addSublayer(previewLayer)
        previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
        //Finalize the session
        session.commitConfiguration()
        //Begin the session
        session.startRunning()
    }

    func captureOutput(_: AVCaptureOutput, didOutput: CMSampleBuffer, from: AVCaptureConnection) {
        print("Bingo")
    }
}
Expected output:
Bingo
Bingo
Bingo
...
I have read:
StackOverflow: captureOutput not being called - The user was not declaring the captureOutput method correctly.
StackOverflow: AVCaptureVideoDataOutput captureOutput not being called - The user was not declaring the captureOutput method at all.
Apple - AVCaptureAudioDataOutputSampleBufferDelegate - Apple's documentation on the delegate and its method; the method matches the method I have declared.
Other common errors I have encountered online:
Using the declaration for older versions of Swift (I am using v4.1)
According to one article, after Swift 4.0 AVCaptureMetadataOutput replaces AVCaptureAudioDataOutput. Although I couldn't find this in Apple's documentation, I tried it as well, but similarly the metadataOutput function is never called.
I am fresh out of ideas. Am I missing something obvious?
OK, nobody got back to me, but after playing around with it I worked out that the correct way to declare the captureOutput method for Swift 4 is as follows:
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //Do your stuff here
}
Unfortunately, the documentation for this online is very poor. I guess you just need to get it exactly right - no errors are thrown if you misspell or misname the method or its parameters, because it is an optional protocol method.
The method you were using has been replaced by the one below, which gets called for both AVCaptureAudioDataOutput and AVCaptureVideoDataOutput. Make sure you check which output produced the sample buffer before writing it to the asset writer.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //Make sure you check the output before using the sample buffer
    if output == audioDataOutput {
        //Use the sample buffer for audio
    }
}
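For example, a sketch of routing buffers from that single callback to an AVAssetWriter (audioDataOutput, videoDataOutput, audioWriterInput and videoWriterInput are assumed properties, not from the answer):
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if output == audioDataOutput, audioWriterInput.isReadyForMoreMediaData {
        audioWriterInput.append(sampleBuffer)   // audio track
    } else if output == videoDataOutput, videoWriterInput.isReadyForMoreMediaData {
        videoWriterInput.append(sampleBuffer)   // video track
    }
}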
The problem for me turned out to be exactly this: the AVAudioSession and AVCaptureSession were declared as local variables, so when I started the session it just went away. Once I moved them to class-level variables, everything worked great!
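A minimal sketch of that fix (class name illustrative): keep the session as a stored property so it is not deallocated when setup returns.
class RecorderViewController: UIViewController {
    let session = AVCaptureSession()        // class-level: lives as long as the controller

    func setupAVCapture() {
        // let session = AVCaptureSession() // local: deallocated when this method returns,
        //                                  // so the delegate callbacks silently stop
        // ... add inputs and outputs, then:
        session.startRunning()
    }
}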

iOS camera facetracking (Swift 3 Xcode 8)

I am trying to make a simple camera application where the front camera can detect faces.
This should be simple enough:
Create a CameraView class that inherits from UIImageView and place it in the UI. Make sure it implements AVCaptureVideoDataOutputSampleBufferDelegate in order to process frames from the camera in real time.
class CameraView: UIImageView, AVCaptureVideoDataOutputSampleBufferDelegate
Within a function handleCamera, called when the CameraView is instantiated, setup an AVCapture session. Add input from the camera.
override init(frame: CGRect) {
    super.init(frame: frame)
    handleCamera()
}

func handleCamera() {
    camera = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera,
                                           mediaType: AVMediaTypeVideo, position: .front)
    session = AVCaptureSession()
    // Set recovered camera as an input device for the capture session
    do {
        try input = AVCaptureDeviceInput(device: camera)
    } catch _ as NSError {
        print("ERROR: Front camera can't be used as input")
        input = nil
    }
    // Add the input from the camera to the capture session
    if (session?.canAddInput(input) == true) {
        session?.addInput(input)
    }
Create the output. Create a serial output queue to pass the data to, which will then be processed by the AVCaptureVideoDataOutputSampleBufferDelegate (the class itself in this case). Add the output to the session.
output = AVCaptureVideoDataOutput()
output?.alwaysDiscardsLateVideoFrames = true
outputQueue = DispatchQueue(label: "outputQueue")
output?.setSampleBufferDelegate(self, queue: outputQueue)
// add front camera output to the session for use and modification
if (session?.canAddOutput(output) == true) {
    session?.addOutput(output)
} else {
    // front camera can't be used as output, not working: handle error
    print("ERROR: Output not viable")
}
Set up the camera preview view and run the session.
// Setup camera preview with the session input
previewLayer = AVCaptureVideoPreviewLayer(session: session)
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer?.connection.videoOrientation = AVCaptureVideoOrientation.portrait
previewLayer?.frame = self.bounds
self.layer.addSublayer(previewLayer!)
// Process the camera and run it onto the preview
session?.startRunning()
In the captureOutput function run by the delegate, convert the received sample buffer to a CIImage in order to detect faces. Give feedback if a face is found.
func captureOutput(_ captureOutput: AVCaptureOutput!, didDrop sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer!)
    let accuracy = [CIDetectorAccuracy: CIDetectorAccuracyHigh]
    let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil, options: accuracy)
    let faces = faceDetector?.features(in: cameraImage)
    for face in faces as! [CIFaceFeature] {
        print("Found bounds are \(face.bounds)")
        let faceBox = UIView(frame: face.bounds)
        faceBox.layer.borderWidth = 3
        faceBox.layer.borderColor = UIColor.red.cgColor
        faceBox.backgroundColor = UIColor.clear
        self.addSubview(faceBox)
        if face.hasLeftEyePosition {
            print("Left eye bounds are \(face.leftEyePosition)")
        }
        if face.hasRightEyePosition {
            print("Right eye bounds are \(face.rightEyePosition)")
        }
    }
}
My problem: I can get the camera running, but with the multitude of different code samples I have tried from all over the internet, I have never been able to get captureOutput to detect a face. Either the application never enters the function, or it crashes because of a variable that doesn't work, most often because the sampleBuffer variable is nil.
What am I doing wrong?
You need to change your captureOutput function arguments to the following: func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!)
Your captureOutput function is called when a buffer is dropped, not when one is received from the camera.
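Under that suggestion, the question's handler would look something like this (Swift 3-era signature; the face-detection body from the question stays the same):
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
    let cameraImage = CIImage(cvPixelBuffer: pixelBuffer)
    // ... run the CIDetector face detection from the question here ...
}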

AVCaptureDevice flashmode for front camera is not bright like iPhone camera app

Here's the code I'm working with:
func toggleflash(On on: Bool) {
    guard let session = captureSession where session.running == true else {
        return
    }
    session.beginConfiguration()
    let input = session.inputs[0] as! AVCaptureDeviceInput
    if input.device.isFlashModeSupported(.On) {
        do {
            try input.device.lockForConfiguration()
            input.device.flashMode = .On
            input.device.unlockForConfiguration()
        } catch {}
    }
    session.commitConfiguration()
}
When I use this function to turn on the flash for my front camera, the iPhone screen does everything you'd expect: it flashes, but it doesn't illuminate the subject the way the iPhone Camera app, Snapchat, or Instagram does. So I wonder whether it isn't bright enough or whether it isn't working correctly.

Capturing image with avfoundation

I am using AVFoundation for the camera.
This is my live preview:
It looks good. When the user presses the "Button" I create a snapshot on the same screen (like Snapchat).
I am using the following code to capture the image and show it on the screen:
self.stillOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
    (imageSampleBuffer: CMSampleBuffer!, _) in
    let imageDataJpeg = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
    let pickedImage: UIImage = UIImage(data: imageDataJpeg)!
    self.captureSession.stopRunning()
    self.previewImageView.frame = CGRect(x: 0, y: 0, width: UIScreen.mainScreen().bounds.width, height: UIScreen.mainScreen().bounds.height)
    self.previewImageView.image = pickedImage
    self.previewImageView.layer.zPosition = 100
}
After the user captures an image, the screen looks like this:
Please look at the marked area: it wasn't visible on the live preview screen (screenshot 1).
I mean the live preview is not showing everything. But I am sure my live preview works well, because I compared it with other camera apps and everything matched my live preview screen. I guess I have a problem with the captured image.
I am creating live preview with following code:
override func viewWillAppear(animated: Bool) {
    super.viewWillAppear(animated)
    captureSession.sessionPreset = AVCaptureSessionPresetPhoto
    let devices = AVCaptureDevice.devices()
    for device in devices {
        // Make sure this particular device supports video
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Finally check the position and confirm we've got the back camera
            if (device.position == AVCaptureDevicePosition.Back) {
                captureDevice = device as? AVCaptureDevice
            }
        }
    }
    if captureDevice != nil {
        beginSession()
    }
}

func beginSession() {
    let err: NSError? = nil
    do {
        try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
    } catch {
    }
    captureSession.addOutput(stillOutput)
    if err != nil {
        print("error: \(err?.localizedDescription)")
    }
    previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
    previewLayer?.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.cameraLayer.layer.addSublayer(previewLayer!)
    previewLayer?.frame = self.cameraLayer.frame
    captureSession.startRunning()
}
My cameraLayer:
How can I resolve this problem?
Presumably you are using an AVCaptureVideoPreviewLayer. So it sounds like this layer is incorrectly placed or incorrectly sized, or it has the wrong AVLayerVideoGravity setting. Part of the image is off the screen or cropped; that's why you don't see that part of what the camera sees while you are previewing.
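As a sketch of that diagnosis (reusing the question's previewLayer and cameraLayer; an illustration, not the accepted fix): an aspect-fill preview crops to the layer's bounds, so the captured photo can contain content the preview never showed, and the layer should be sized to the container's bounds rather than its frame.
previewLayer?.videoGravity = AVLayerVideoGravityResizeAspect   // show the full frame (letterboxed) instead of cropping
previewLayer?.frame = self.cameraLayer.bounds                  // bounds, not frame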
OK, I found the solution.
I used
captureSession.sessionPreset = AVCaptureSessionPresetHigh
instead of
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
and that fixed the problem.

How BarCode,QR Code are recognized without capturing the image?

I wonder how barcodes and QR codes (even characters) are recognized without capturing an image. I have seen in many apps that when we hold the device over a QR code or barcode, the app automatically recognizes it and starts processing. Is there a scanning mechanism used for this? How can it be achieved? What mechanisms are involved?
Thanks in advance.
1) The phone camera is launched by the library; it autofocuses and scans until it finds decodable information in the image coming from the camera.
2) The information is parsed by the library and it gives you the result.
The decoded info is the data that was encoded into the code itself.
For a QR code the data is present as a square matrix; for a barcode the data is present as vertical lines.
The library has all the logic for detecting the type of code and decoding it according to the format.
Please read the QR code/barcode library docs, or implement one and learn.
You can use AVCaptureSession, e.g.:
let session = AVCaptureSession()
var qrPayload: String?
var started = false   // added so the guard below compiles

func startSession() {
    guard !started else { return }
    started = true

    let output = AVCaptureMetadataOutput()
    output.setMetadataObjectsDelegate(self, queue: .main)

    let device: AVCaptureDevice?
    if #available(iOS 10.0, *) {
        device = AVCaptureDevice
            .DiscoverySession(deviceTypes: [.builtInWideAngleCamera], mediaType: .video, position: .back)
            .devices
            .first
    } else {
        device = AVCaptureDevice.devices().first { $0.position == .back }
    }

    guard
        let camera = device,
        let input = try? AVCaptureDeviceInput(device: camera),
        session.canAddInput(input),
        session.canAddOutput(output)
    else {
        // handle failures here
        return
    }

    session.addInput(input)
    session.addOutput(output)
    output.metadataObjectTypes = [.qr]

    let videoLayer = AVCaptureVideoPreviewLayer(session: session)
    videoLayer.frame = view.bounds
    videoLayer.videoGravity = .resizeAspectFill
    view.layer.addSublayer(videoLayer)

    session.startRunning()
}
And extend your view controller to conform to AVCaptureMetadataOutputObjectsDelegate:
extension QRViewController: AVCaptureMetadataOutputObjectsDelegate {
    func metadataOutput(_ output: AVCaptureMetadataOutput, didOutput metadataObjects: [AVMetadataObject], from connection: AVCaptureConnection) {
        guard
            qrPayload == nil,
            let object = metadataObjects.first as? AVMetadataMachineReadableCodeObject,
            let string = object.stringValue
        else { return }

        qrPayload = string
        print(qrPayload)
        // perhaps dismiss this view controller now that you’ve succeeded
    }
}
Note, I’m testing to make sure that the qrPayload is nil because I find that you can see metadataOutput(_:didOutput:from:) get called a few times before the view controller is dismissed.
