I am wondering why my capture session starts up slowly when the app launches. It doesn't happen every single time I start the app, so I'm not sure whether it's down to other conditions on the phone or something else. I'm not a very good concurrency/parallel programmer, so it's more than likely my own sloppy code :(
I would GREATLY appreciate it if someone could identify what is making it slow sometimes. I have read that calls on a capture session can block, so I have tried my best to dispatch those calls onto another queue without creating any race conditions. I was learning how to code this way in Swift from here.
Here is my code where I initialize and start everything up. My queues are serial queues.
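(The serial queues referenced below, sessionQueue and startSessionQueue, are not shown in the snippet. Assuming they are plain serial GCD queues owned by the view controller, they might be declared roughly like this; the label strings here are made up:)
// Assumed declarations for the serial queues used below (not part of the original snippet)
let sessionQueue: dispatch_queue_t = dispatch_queue_create("com.example.sessionQueue", DISPATCH_QUEUE_SERIAL)
let startSessionQueue: dispatch_queue_t = dispatch_queue_create("com.example.startSessionQueue", DISPATCH_QUEUE_SERIAL)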
/**************************************************************************
VIEW DID LOAD
***************************************************************************/
override func viewDidLoad() {
super.viewDidLoad()
println("Initializing the cameraCaptureDevice with MediaTypeVideo")
//------INIT CAMERA CAPTURE DEVICE TO BEGIN WITH------
self.cameraCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeVideo)
println("Done initializing camera")
var error1: NSError? = nil
println("Getting array of available capture devices")
//------GRAB ALL OF THE DEVICES------
let devices = AVCaptureDevice.devices()
//------FIND THE CAMERA MATCHING THE POSITION------
for device in devices {
if device.position == self.cameraCapturePosition {
self.cameraCaptureDevice = device as? AVCaptureDevice
println("Back camera has been added")
self.usingBackCamera = true
}
}
//------ INIT MOVIE FILE OUTPUT ------
self.movieFileOutput = AVCaptureMovieFileOutput()
//------SET UP PREVIEW LAYER-----
self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.session)
if let preview = self.videoPreviewLayer {
println("Video Preview Layer set")
preview.videoGravity = AVLayerVideoGravityResizeAspectFill
}
else {
println("Video Preview Layer is nil!!! Could not set AVLayerVideoGravityResizeAspectFill")
}
println("Camera successfully can display")
//------SET JPEG OUTPUT------
println("Setting JPEG Output")
self.stillImageOutput = AVCaptureStillImageOutput()
let outputSettings = [ AVVideoCodecKey : AVVideoCodecJPEG ]
if let imageOutput = self.stillImageOutput {
imageOutput.outputSettings = outputSettings
}
else {
println("still image output is nil, could not set output settings")
}
println("Successfully configured JPEG Output")
//------SET MOVIE FILE OUTPUT MAX DURATION AND MIN FREE DISK SPACE------
println("Setting Movie File Max Duration")
let maxDuration:CMTime = CMTimeMakeWithSeconds(self.totalTime, self.preferredTimeScale)
if let movieOutput = self.movieFileOutput {
movieOutput.maxRecordedDuration = maxDuration
println("Successfully set movie file max duration")
println("Setting movie file minimum byte space")
movieOutput.minFreeDiskSpaceLimit = self.minFreeSpace
println("Successfully added minimum free space")
}
else {
println("Movie file output is nil, could not set maximum recording duration or minimum free space")
}
//------ GRAB THE DEVICE'S SUPPORTED FRAME RATE RANGES ------
if let device = self.cameraCaptureDevice {
println("Setting frame rates")
let supportedFrameRateRanges = device.activeFormat.videoSupportedFrameRateRanges
for range in supportedFrameRateRanges {
// Workaround until finding a better way
// frame rate should be 1 - 30
if range.minFrameRate <= 30 && range.maxFrameRate >= 1 {
println("Frame rate is supported")
self.frameRateSupported = true
}
else {
println("Frame rate is not supported")
self.frameRateSupported = false
}
}
var error: NSError?
if frameRateSupported && device.lockForConfiguration(&error) {
device.activeVideoMaxFrameDuration = self.frameDuration
device.activeVideoMinFrameDuration = self.frameDuration
device.unlockForConfiguration()
println("SUCCESS")
}
else {
println("frame rate is not supported or there was an error")
if let err = error {
println("There was an error setting framerate: \(err.description)")
}
else {
println("Frame rate is not supported")
}
}
}
else {
println("camera capture device is nil, could not set frame rate")
}
//------ INIT AUDIO CAPTURE DEVICE ------
self.audioCaptureDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)
var error2: NSError? = nil
let audioDeviceInput = AVCaptureDeviceInput(device: self.audioCaptureDevice, error: &error2)
//------ADD CAMERA CAPTURE DEVICE TO CAPTURE SESSION INPUT------
if let captureDevice = self.cameraCaptureDevice {
if error1 == nil {
println("Trying to add video input")
self.videoDeviceInput = AVCaptureDeviceInput(device: captureDevice, error: &error1)
}
else {
println("Could not create video input")
}
}
else {
println("Could not create camera capture device")
}
//------ ADD INPUTS AND OUTPUTS AS WELL AS OTHER SESSION CONFIGURATIONS------
dispatch_async(self.sessionQueue) {
println("Trying to add audio input")
if let input = audioDeviceInput {
self.session.addInput(input)
println("Successfully added audio input")
}
else {
println("Could not create audio input")
}
if self.session.canAddInput(self.videoDeviceInput) {
self.session.addInput(self.videoDeviceInput)
println("Successfully added video input")
}
else {
println("Could not add video input")
}
println("initializing video capture session")
//----- SET THE IMAGE QUALITY / RESOLUTION -----
//Options:
// AVCaptureSessionPresetHigh - Highest recording quality (varies per device)
// AVCaptureSessionPresetMedium - Suitable for WiFi sharing (actual values may change)
// AVCaptureSessionPresetLow - Suitable for 3G sharing (actual values may change)
// AVCaptureSessionPreset640x480 - 640x480 VGA (check that it's supported before setting it)
// AVCaptureSessionPreset1280x720 - 1280x720 720p HD (check that it's supported before setting it)
// AVCaptureSessionPresetPhoto - Full photo resolution (not supported for video output)
if self.session.canSetSessionPreset(AVCaptureSessionPresetHigh) {
println("Capture Session preset is set to High Quality")
self.session.sessionPreset = AVCaptureSessionPresetHigh
}
else {
println("Capture Session preset is set to Medium Quality")
self.session.sessionPreset = AVCaptureSessionPresetMedium
}
//------ADD JPEG OUTPUT AND MOVIE FILE OUTPUT TO SESSION OUTPUT------
println("Adding still image and movie file output")
if self.session.canAddOutput(self.stillImageOutput) && self.session.canAddOutput(self.movieFileOutput) {
self.session.addOutput(self.stillImageOutput)
self.session.addOutput(self.movieFileOutput)
println("Successfully added outputs")
}
else {
//------ IF OUTPUTS COULD NOT BE ADDED, THEN APP SHOULD NOT RUN ON DEVICE!!!!! ------
println("Could Not Add still image and movie file output")
}
//------WE CALL A METHOD AS IT ALSO HAS TO BE DONE AFTER CHANGING CAMERA------
self.setCameraOutputProperties()
//------DISPLAY PREVIEW LAYER------
if let videoLayer = self.videoPreviewLayer {
self.videoPreviewView.layer.addSublayer(videoLayer)
println("Video Preview Layer Added as sublayer")
videoLayer.frame = self.videoPreviewView.layer.frame
println("Video Preview frame set")
}
else {
println("videoPreviewLayer is nil, could not add sublayer or set frame")
}
self.view.sendSubviewToBack(self.videoPreviewView)
}
}
/**************************************************************************
VIEW DID APPEAR
***************************************************************************/
override func viewDidAppear(animated: Bool) {
println("About to start the capture session")
//------INITIALIZE THE CAMERA------
dispatch_async(self.startSessionQueue) {
if self.beenHereBefore == false {
println("Have not seen this view before.... starting the session")
//------ START THE PREVIEW SESSION ------
self.startSession()
/*
CHECK TO MAKE SURE THAT THIS CODE IS REALLY NEEDED FOR AUTHORIZATION
*/
// ----- SET MEDIA TYPE ------
var mediaTypeVideo = AVMediaTypeVideo
AVCaptureDevice.requestAccessForMediaType(mediaTypeVideo, completionHandler: { (granted) -> Void in
//------ GRANTED ACCESS TO MEDIATYPE ------
if granted {
self.deviceAuthorized = AVAuthorizationStatus.Authorized
}
//------ NOT GRANTED ACCESS TO MEDIATYPE ------
else {
dispatch_async(dispatch_get_main_queue()) {
let alert = UIAlertView(title: "CopWatch", message: "CopWatch does not have permission to use the camera, please change your privacy settings.", delegate: self, cancelButtonTitle: "OK")
alert.show()
self.deviceAuthorized = AVAuthorizationStatus.Denied
}
}
})
}
else {
println("Been Here Before")
self.session.startRunning()
}
self.weAreRecording = false
}
}
And here is the method that starts the video preview:
/**************************************************************************
START SESSION
**************************************************************************/
func startSession() {
println("Checking to see if the session is already running before starting the session")
//------ START SESSION IF IT IS NOT ALREADY RUNNING------
if !self.session.running {
//------START CAMERA------
println("Session is not already running, starting the session now")
self.session.startRunning()
self.isSessionRunning = true
println("Capture Session initiated")
}
else {
println("Session is already running, no need to start it again")
}
}
It seems that I have found the answer.
I was adding the videoPreviewLayer as a sublayer and sending the preview view to the back inside the asynchronous dispatch call. Apparently the application did not like this, and it made things very, very slow to start up.
I moved this code
//------DISPLAY PREVIEW LAYER------
if let videoLayer = self.videoPreviewLayer {
self.videoPreviewView.layer.addSublayer(videoLayer)
println("Video Preview Layer Added as sublayer")
videoLayer.frame = self.videoPreviewView.layer.frame
println("Video Preview frame set")
}
else {
println("videoPreviewLayer is nil, could not add sublayer or set frame")
}
self.view.sendSubviewToBack(self.videoPreviewView)
up to here like this:
//------SET UP PREVIEW LAYER-----
self.videoPreviewLayer = AVCaptureVideoPreviewLayer(session: self.session)
if let preview = self.videoPreviewLayer {
println("Video Preview Layer set")
preview.videoGravity = AVLayerVideoGravityResizeAspectFill
}
else {
println("Video Preview Layer is nil!!! Could not set AVLayerVideoGravityResizeAspectFill")
}
println("Camera successfully can display")
//------DISPLAY PREVIEW LAYER------
if let videoLayer = self.videoPreviewLayer {
self.videoPreviewView.layer.addSublayer(videoLayer)
println("Video Preview Layer Added as sublayer")
videoLayer.frame = self.videoPreviewView.layer.frame
println("Video Preview frame set")
self.view.sendSubviewToBack(self.videoPreviewView)
}
else {
println("videoPreviewLayer is nil, could not add sublayer or set frame")
}
I should have been able to spot this issue, but I guess this is what happens when you optimize at the wrong time. Now it is pretty responsive.
Moral of the story: if you are programming with AVFoundation, don't set up and add your video preview layer to the current view controller's view from inside an asynchronously dispatched block.
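Put differently, the heavy session work can stay on the background queue, but the layer and view work belongs on the main queue. A minimal sketch of that split, assuming the same property names as above (not the exact code from the app):
dispatch_async(self.sessionQueue) {
    // Session configuration is allowed to block here, off the main thread
    self.session.beginConfiguration()
    // ... add inputs and outputs ...
    self.session.commitConfiguration()
    // Hop back to the main queue for anything touching UIKit/CALayer
    dispatch_async(dispatch_get_main_queue()) {
        if let previewLayer = self.videoPreviewLayer {
            previewLayer.frame = self.videoPreviewView.bounds
            self.videoPreviewView.layer.addSublayer(previewLayer)
            self.view.sendSubviewToBack(self.videoPreviewView)
        }
    }
}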
All of the camera tweaks I want to use are working except the zoom factor, and I am lost as to why. Any ideas? The custom exposure and focus settings work fine. Did something change in iOS that I'm not aware of?
captureSession = AVCaptureSession()
captureSession?.sessionPreset = AVCaptureSessionPresetPhoto
stillImageOutput = AVCapturePhotoOutput()
let device = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeVideo)
do{
do{
try device?.lockForConfiguration()
device?.setFocusModeLockedWithLensPosition(focusValue, completionHandler: {(time) -> Void in})
device?.setExposureModeCustomWithDuration(CMTimeMake(1, exposureValue), iso: ISOValue, completionHandler: {(time) -> Void in})
let zoomFactor:CGFloat = 16
device?.videoZoomFactor = zoomFactor
device?.unlockForConfiguration()
}catch{
print(error)
}
stillImageOutput.isHighResolutionCaptureEnabled = true
let input = try AVCaptureDeviceInput(device: device)
if(captureSession.canAddInput(input)){
captureSession.addInput(input)
if(captureSession.canAddOutput(stillImageOutput)){
captureSession.addOutput(stillImageOutput)
captureSession.startRunning()
let captureVideoLayer: AVCaptureVideoPreviewLayer = AVCaptureVideoPreviewLayer.init(session: captureSession)
captureVideoLayer.frame = self.previewView.bounds
captureVideoLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
self.previewView.layer.insertSublayer(captureVideoLayer, at: 0)
}
}
}catch{
print(error)
}
Turns out I was just applying my device settings at the wrong point. If the lockForConfiguration try block is moved below the capture session setup, it then works as intended.
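So the rough flow that worked is: create the session, add the input and output, start the session running, and only then lock the device and apply the custom focus/exposure/zoom. A hedged sketch of that reordering, reusing the names from the snippet above:
guard let captureSession = captureSession, let device = device else { return }
do {
    // 1. Build the session first
    let input = try AVCaptureDeviceInput(device: device)
    if captureSession.canAddInput(input) {
        captureSession.addInput(input)
    }
    if captureSession.canAddOutput(stillImageOutput) {
        captureSession.addOutput(stillImageOutput)
    }
    captureSession.startRunning()
    // 2. Only now lock the device and apply the custom settings
    try device.lockForConfiguration()
    device.setFocusModeLockedWithLensPosition(focusValue, completionHandler: { (time) in })
    device.setExposureModeCustomWithDuration(CMTimeMake(1, exposureValue), iso: ISOValue, completionHandler: { (time) in })
    device.videoZoomFactor = 16
    device.unlockForConfiguration()
} catch {
    print(error)
}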
Using Apple's example code for Photo & Video Acquisition (i.e. AVFoundation), I tried to change the device-zoom of my iPhone camera in code.
With the help of user2345335, I realised that the location in the code where you place your zoom manipulation matters, and also that you must call device.lockForConfiguration() before any videoDevice manipulation! Both are important (code location and locking!).
Here is the link to the original Apple example (use the download button on that page):
(AVFoundation Apple Code example: Link)
Here is the code excerpt of the original Apple example with MY CODE THAT MANIPULATES THE ZOOM inserted at the correct spot :)
(Swift-4.2 / Xcode 10.0, iOS 11.0 SDK)
// Call this on the session queue.
private func configureSession() {
if setupResult != .success {
return
}
// ... missing original code (not important for this illustration)...
// Add video input.
do {
var defaultVideoDevice: AVCaptureDevice?
// ... missing original code (not important for this illustration)...
if session.canAddInput(videoDeviceInput) {
session.addInput(videoDeviceInput)
self.videoDeviceInput = videoDeviceInput
// ... missing original code (not important for this illustration)...
} else {
print("Could not add video device input to the session")
setupResult = .configurationFailed
session.commitConfiguration()
return
}
// !!!!!!!!!!!!!!!! MY CODE THAT MANIPULATES THE ZOOM !!!!!!!!!!!!!!!!!!!!!!!!!!!!
// !!!!!!!!!!!!!!!! PLACE IT HERE AND ZOOM WILL WORK !!!!!!!!!!!!!!!!!!!!!!!!!!!!
guard let device = defaultVideoDevice else { return }
do {
try device.lockForConfiguration()
defer { device.unlockForConfiguration() }
device.videoZoomFactor = 10.0
} catch {
debugPrint(error)
}
// !!!!!!!!!!!!!! END OF MY CODE THAT MANIPULATES THE ZOOM !!!!!!!!!!!!!!!!!!!!!!!!!!
} catch {
print("Could not create video device input: \(error)")
setupResult = .configurationFailed
session.commitConfiguration()
return
}
// Add audio input.
do {
let audioDevice = AVCaptureDevice.default(for: .audio)
let audioDeviceInput = try AVCaptureDeviceInput(device: audioDevice!)
if session.canAddInput(audioDeviceInput) {
session.addInput(audioDeviceInput)
} else {
print("Could not add audio device input to the session")
}
} catch {
print("Could not create audio device input: \(error)")
}
// Add photo output.
if session.canAddOutput(photoOutput) {
session.addOutput(photoOutput)
photoOutput.isHighResolutionCaptureEnabled = true
photoOutput.isLivePhotoCaptureEnabled = photoOutput.isLivePhotoCaptureSupported
photoOutput.isDepthDataDeliveryEnabled = photoOutput.isDepthDataDeliverySupported
livePhotoMode = photoOutput.isLivePhotoCaptureSupported ? .on : .off
depthDataDeliveryMode = photoOutput.isDepthDataDeliverySupported ? .on : .off
} else {
print("Could not add photo output to the session")
setupResult = .configurationFailed
session.commitConfiguration()
return
}
session.commitConfiguration()
}
I have faced an issue with the zoom factor not working on some devices (usually the newer ones with a wide-lens camera) when output.isDepthDataDeliveryEnabled was not set and therefore the default value was true. The problem was present only when AVCaptureSession.Preset was set to .photo.
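If you run into that case, a possible workaround (just a sketch; photoOutput and device stand in for your own AVCapturePhotoOutput and AVCaptureDevice) is to leave depth data delivery switched off before applying the zoom factor:
// Sketch: keep depth data delivery off so videoZoomFactor behaves with the .photo preset
if photoOutput.isDepthDataDeliverySupported {
    photoOutput.isDepthDataDeliveryEnabled = false
}
do {
    try device.lockForConfiguration()
    device.videoZoomFactor = 2.0
    device.unlockForConfiguration()
} catch {
    print(error)
}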
I'm currently developing an app which records a video and then shows it to you so you can check whether it was good enough. However, when displaying the recorded video, it shows it in the wrong orientation.
So basically I'm recording in LandscapeRight mode, but on playback it displays in portrait, and apparently it records in portrait as well, even when I set AVCaptureVideoOrientation.LandscapeRight.
Here is the code I'm using to setup the recording:
func setupAVCapture(){
session = AVCaptureSession()
session.sessionPreset = AVCaptureSessionPresetHigh
let devices = AVCaptureDevice.devices()
// Loop through all the capture devices on this phone
for device in devices {
// Make sure this particular device supports video
if (device.hasMediaType(AVMediaTypeVideo)) {
// Finally check the position and confirm we've got the front camera
if(device.position == AVCaptureDevicePosition.Front) {
captureDevice = device as? AVCaptureDevice
if captureDevice != nil {
beginSession()
break
}
}
}
}
}
func beginSession(){
var err : NSError? = nil
var deviceInput:AVCaptureDeviceInput?
do {
deviceInput = try AVCaptureDeviceInput(device: captureDevice)
} catch let error as NSError {
err = error
deviceInput = nil
};
if err != nil {
print("error: \(err?.localizedDescription)")
}
if self.session.canAddInput(deviceInput){
self.session.addInput(deviceInput)
}
self.videoDataOutput = AVCaptureVideoDataOutput()
self.videoDataOutput.alwaysDiscardsLateVideoFrames=true
self.videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL)
self.videoDataOutput.setSampleBufferDelegate(self, queue:self.videoDataOutputQueue)
if session.canAddOutput(self.videoDataOutput){
session.addOutput(self.videoDataOutput)
}
self.videoDataOutput.connectionWithMediaType(AVMediaTypeVideo).enabled = true
self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
self.previewLayer.frame = self.view.bounds
self.previewLayer.masksToBounds = true
self.previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
let rootLayer :CALayer = CameraPreview.layer
rootLayer.masksToBounds=true
rootLayer.addSublayer(self.previewLayer)
session.startRunning()
}
After this the next delegate gets called and I display it in the same view, but for a playback:
func captureOutput(captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!, fromConnections connections: [AnyObject]!, error: NSError!) {
playbackAvailable = true
SaveButton.hidden = false
recording = false
stopCamera()
// let library = PHPhotoLibrary.sharedPhotoLibrary()
// library.performChanges({ PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(outputFileURL)! }, completionHandler: {success, error in debugPrint("Finished saving asset. %#", (success ? "Success." : error!)) })
// Play Video
player = AVPlayer(URL: outputFileURL)
playerLayer = AVPlayerLayer(player: player)
playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
playerLayer.masksToBounds = true
playerLayer.frame = self.view.bounds
CameraPreview.layer.addSublayer(playerLayer)
player.play()
}
Now the playback displays it in the wrong orientation. Does anyone know how to fix this? I've also set the orientation of the view controller to LandscapeRight.
Solution:
// Necessary to record in the correct orientation
// ** MUST BE IMPLEMENTED AFTER SETTING UP THE MOVIE FILE OUTPUT **
var videoConnection: AVCaptureConnection? = nil
if let connections = videoFileOutput.connections{
for x in connections {
if let connection = x as? AVCaptureConnection{
for port in connection.inputPorts{
if(port.mediaType == AVMediaTypeVideo){
videoConnection = connection
}
}
}
}
}
if(videoConnection != nil){
if let vidConnect: AVCaptureConnection = videoConnection!{
if(vidConnect.supportsVideoOrientation){
vidConnect.videoOrientation = AVCaptureVideoOrientation(rawValue: UIApplication.sharedApplication().statusBarOrientation.rawValue)!
}
}
}
Basically, you need to set the AVCaptureVideoOrientation of the connection to the correct orientation before you start recording, or else it might record in the wrong orientation.
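As a side note, the loop above can usually be collapsed, since AVCaptureOutput can hand back the connection for a given media type directly. A shorter equivalent in the same Swift 2 style (assuming the same videoFileOutput) might look like this:
// Grab the video connection straight from the output and set its orientation
if let connection = videoFileOutput.connectionWithMediaType(AVMediaTypeVideo) {
    if connection.supportsVideoOrientation {
        connection.videoOrientation = .LandscapeRight
    }
}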
I configured the rear camera to run at 120 fps. However, when I checked the sample output in captureOutput() by printing the time at which the function is called (see below), the difference between calls is roughly 33 ms (30 fps). No matter what fps I set with activeVideoMinFrameDuration and activeVideoMaxFrameDuration, the resulting fps observed in captureOutput() is always 30 fps.
I've tested this on an iPhone 6, which can handle slow-motion video. I've read the Apple official doc at https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/04_MediaCapture.html. Any clue?
class ViewController: UIViewController, AVCaptureVideoDataOutputSampleBufferDelegate
{
var captureDevice: AVCaptureDevice?
let captureSession = AVCaptureSession()
let videoCaptureOutput = AVCaptureVideoDataOutput()
var startTime = NSDate.timeIntervalSinceReferenceDate()
// press button to start the video session
@IBAction func startPressed() {
if captureSession.inputs.count > 0 && captureSession.outputs.count > 0 {
startTime = NSDate.timeIntervalSinceReferenceDate()
captureSession.startRunning()
}
}
override func viewDidLoad() {
super.viewDidLoad()
// set capture session resolution
captureSession.sessionPreset = AVCaptureSessionPresetLow
let devices = AVCaptureDevice.devices()
var avFormat: AVCaptureDeviceFormat? = nil
for device in devices {
if (device.hasMediaType(AVMediaTypeVideo)) {
if (device.position == AVCaptureDevicePosition.Back) {
for vFormat in device.formats {
let ranges = vFormat.videoSupportedFrameRateRanges as! [AVFrameRateRange]
let filtered: Array<Double> = ranges.map({ $0.maxFrameRate } ).filter( {$0 >= 119.0} )
if !filtered.isEmpty {
// found a good device with good format!
captureDevice = device as? AVCaptureDevice
avFormat = vFormat as? AVCaptureDeviceFormat
}
}
}
}
}
// use the found capture device and format to set things up
if let dv = captureDevice {
// configure
do {
try dv.lockForConfiguration()
} catch _ {
print("failed locking device")
}
dv.activeFormat = avFormat
dv.activeVideoMinFrameDuration = CMTimeMake(1, 120)
dv.activeVideoMaxFrameDuration = CMTimeMake(1, 120)
dv.unlockForConfiguration()
// input -> session
do {
let input = try AVCaptureDeviceInput(device: dv)
if captureSession.canAddInput(input) {
captureSession.addInput(input)
}
} catch _ {
print("failed adding capture device as input to capture session")
}
}
// output -> session
let videoQueue = dispatch_queue_create("videoQueue", DISPATCH_QUEUE_SERIAL)
videoCaptureOutput.setSampleBufferDelegate(self, queue: videoQueue)
videoCaptureOutput.videoSettings = [kCVPixelBufferPixelFormatTypeKey: Int(kCVPixelFormatType_32BGRA)]
videoCaptureOutput.alwaysDiscardsLateVideoFrames = true
if captureSession.canAddOutput(videoCaptureOutput) {
captureSession.addOutput(videoCaptureOutput)
}
}
func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!)
{
print( "\(NSDate.timeIntervalSinceReferenceDate() - startTime)" )
// More pixel/frame processing here
}
}
Answer found: swap the order of the two blocks, "configure" and "input -> session".
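In other words, the input has to be added to the session first, and the 120 fps format and frame durations applied afterwards; configuring the device before it is attached apparently lets the session (and its preset) reset the format back to a 30 fps one. A rough sketch of the swapped blocks, using the same names as the snippet above:
// input -> session (now first)
do {
    let input = try AVCaptureDeviceInput(device: dv)
    if captureSession.canAddInput(input) {
        captureSession.addInput(input)
    }
} catch _ {
    print("failed adding capture device as input to capture session")
}
// configure (now after the input has been added)
do {
    try dv.lockForConfiguration()
} catch _ {
    print("failed locking device")
}
dv.activeFormat = avFormat
dv.activeVideoMinFrameDuration = CMTimeMake(1, 120)
dv.activeVideoMaxFrameDuration = CMTimeMake(1, 120)
dv.unlockForConfiguration()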
I was able to successfully grab the recorded video by following this question
here
Basically
Inherit from the AVCaptureFileOutputRecordingDelegate protocol
Loop through available devices
Creating a session with the camera
Start Recording
Stop Recording
Get the recorded video by implementing the above protocol's method
But the file doesn't come with audio.
According to this question, I have to record audio separately and merge the video and audio using the mentioned classes.
But I have no idea how to implement video and audio recording at the same time.
for device in devices {
// Make sure this particular device supports video
if (device.hasMediaType(AVMediaTypeVideo)) {
// Finally check the position and confirm we've got the back camera
if(device.position == AVCaptureDevicePosition.Back) {
captureDevice = device as? AVCaptureDevice
if captureDevice != nil {
print("Capture device found")
beginSession()
}
}
}
}
In this loop, the only available device types are .Front and .Back.
The following is the way to record video with audio using the AVFoundation framework. The steps are:
1. Prepare the session:
self.captureSession = AVCaptureSession()
2. Prepare available video and audio devices:
let session = AVCaptureDevice.DiscoverySession.init(deviceTypes:[.builtInWideAngleCamera, .builtInMicrophone], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
let cameras = (session.devices.compactMap{$0})
for camera in cameras {
if camera.position == .front {
self.frontCamera = camera
}
if camera.position == .back {
self.rearCamera = camera
try camera.lockForConfiguration()
camera.focusMode = .continuousAutoFocus
camera.unlockForConfiguration()
}
}
3. Prepare session inputs:
guard let captureSession = self.captureSession else {
throw CameraControllerError.captureSessionIsMissing
}
if let rearCamera = self.rearCamera {
self.rearCameraInput = try AVCaptureDeviceInput(device: rearCamera)
if captureSession.canAddInput(self.rearCameraInput!) {
captureSession.addInput(self.rearCameraInput!)
self.currentCameraPosition = .rear
} else {
throw CameraControllerError.inputsAreInvalid
}
} else if let frontCamera = self.frontCamera {
self.frontCameraInput = try AVCaptureDeviceInput(device: frontCamera)
if captureSession.canAddInput(self.frontCameraInput!) {
captureSession.addInput(self.frontCameraInput!)
self.currentCameraPosition = .front
} else {
throw CameraControllerError.inputsAreInvalid
}
} else {
throw CameraControllerError.noCamerasAvailable
}
// Add audio input
if let audioDevice = self.audioDevice {
self.audioInput = try AVCaptureDeviceInput(device: audioDevice)
if captureSession.canAddInput(self.audioInput!) {
captureSession.addInput(self.audioInput!)
} else {
throw CameraControllerError.inputsAreInvalid
}
}
4. Prepare output:
self.videoOutput = AVCaptureMovieFileOutput()
if captureSession.canAddOutput(self.videoOutput!) {
captureSession.addOutput(self.videoOutput!)
}
captureSession.startRunning()
5. Start recording:
func recordVideo(completion: @escaping (URL?, Error?) -> Void) {
guard let captureSession = self.captureSession, captureSession.isRunning else {
completion(nil, CameraControllerError.captureSessionIsMissing)
return
}
let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
let fileUrl = paths[0].appendingPathComponent("output.mp4")
try? FileManager.default.removeItem(at: fileUrl)
videoOutput!.startRecording(to: fileUrl, recordingDelegate: self)
self.videoRecordCompletionBlock = completion
}
6. Stop recording:
func stopRecording(completion: @escaping (Error?) -> Void) {
guard let captureSession = self.captureSession, captureSession.isRunning else {
completion(CameraControllerError.captureSessionIsMissing)
return
}
self.videoOutput?.stopRecording()
}
7. Implement the delegate:
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
if error == nil {
//do something
} else {
//do something
}
}
I took the idea from here: https://www.appcoda.com/avfoundation-swift-guide/
Here is the complete project https://github.com/rubaiyat6370/iOS-Tutorial/
Found the answer. This answer goes with this code.
It can simply be done by:
declare another capture device variable
loop through devices and initialize camera and audio capture device variable
add audio input to session
code
var captureDevice : AVCaptureDevice?
var captureAudio :AVCaptureDevice?
Loop through devices and Initialize capture devices
var captureDeviceVideoFound: Bool = false
var captureDeviceAudioFound:Bool = false
// Loop through all the capture devices on this phone
for device in devices {
// Make sure this particular device supports video
if (device.hasMediaType(AVMediaTypeVideo)) {
// Finally check the position and confirm we've got the front camera
if(device.position == AVCaptureDevicePosition.Front) {
captureDevice = device as? AVCaptureDevice //initialize video
if captureDevice != nil {
print("Capture device found")
captureDeviceVideoFound = true;
}
}
}
if(device.hasMediaType(AVMediaTypeAudio)){
print("Capture device audio init")
captureAudio = device as? AVCaptureDevice //initialize audio
captureDeviceAudioFound = true
}
}
if(captureDeviceAudioFound && captureDeviceVideoFound){
beginSession()
}
Inside Session
try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
try captureSession.addInput(AVCaptureDeviceInput(device: captureAudio))
This will output the video file with audio. no need to merge audio or do anything.
This Apple documentation helps.
I followed the answer from @Mumu, but it didn't work for me because the call to AVCaptureDevice.DiscoverySession.init was returning video devices only.
Here is my version that works on iOS 14, Swift 5:
var captureSession: AVCaptureSession? = nil
var camera: AVCaptureDevice? = nil
var microphone: AVCaptureDevice? = nil
var videoOutput: AVCaptureFileOutput? = nil
var previewLayer: AVCaptureVideoPreviewLayer? = nil
func findDevices() {
camera = nil
microphone = nil
//Search for video media type and we need back camera only
let session = AVCaptureDevice.DiscoverySession.init(deviceTypes:[.builtInWideAngleCamera],
mediaType: AVMediaType.video, position: AVCaptureDevice.Position.back)
var devices = (session.devices.compactMap{$0})
//Search for microphone
let asession = AVCaptureDevice.DiscoverySession.init(deviceTypes:[.builtInMicrophone],
mediaType: AVMediaType.audio, position: AVCaptureDevice.Position.unspecified)
//Combine all devices into one list
devices.append(contentsOf: asession.devices.compactMap{$0})
for device in devices {
if device.position == .back {
do {
try device.lockForConfiguration()
device.focusMode = .continuousAutoFocus
device.flashMode = .off
device.whiteBalanceMode = .continuousAutoWhiteBalance
device.unlockForConfiguration()
camera = device
} catch {
}
}
if device.hasMediaType(.audio) {
microphone = device
}
}
}
func initVideoRecorder()->Bool {
captureSession = AVCaptureSession()
guard let captureSession = captureSession else {return false}
captureSession.sessionPreset = .hd4K3840x2160
findDevices()
guard let camera = camera else { return false}
do {
let cameraInput = try AVCaptureDeviceInput(device: camera)
captureSession.addInput(cameraInput)
} catch {
self.camera = nil
return false
}
if let audio = microphone {
do {
let audioInput = try AVCaptureDeviceInput(device: audio)
captureSession.addInput(audioInput)
} catch {
}
}
videoOutput = AVCaptureMovieFileOutput()
if captureSession.canAddOutput(videoOutput!) {
captureSession.addOutput(videoOutput!)
captureSession.startRunning()
videoOutput?.connection(with: .video)?.videoOrientation = .landscapeRight
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer?.videoGravity = .resizeAspect
previewLayer?.connection?.videoOrientation = .landscapeRight
return true
}
return false
}
func startRecording()->Bool {
guard let captureSession = captureSession, captureSession.isRunning else {return false}
let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
let fileUrl = paths[0].appendingPathComponent(getVideoName())
try? FileManager.default.removeItem(at: fileUrl)
videoOutput?.startRecording(to: fileUrl, recordingDelegate: self)
return true
}
I had this problem also, but when I grouped adding the video input and the audio input together, the audio worked. This is my code for adding the inputs:
if (cameraSession.canAddInput(deviceInput) == true && cameraSession.canAddInput(audioDeviceInput) == true) {//detects if devices can be added
cameraSession.addInput(deviceInput)//adds video
cameraSession.addInput(audioDeviceInput)//adds audio
}
Also, I found you have to add the video input first or else there won't be audio. I originally had them in two if statements, but I found that putting them in one lets video and audio be recorded together. Hope this helps.
Record Video With Audio
//Get Video Device
if let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice] {
for device in devices {
if device.hasMediaType(AVMediaTypeVideo) {
if device.position == AVCaptureDevicePosition.back {
videoCaptureDevice = device
}
}
}
if videoCaptureDevice != nil {
do {
// Add Video Input
try self.captureSession.addInput(AVCaptureDeviceInput(device: videoCaptureDevice))
// Get Audio Device
let audioDevice = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
//Add Audio Input
try self.captureSession.addInput(AVCaptureDeviceInput(device: audioDevice))
self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
self.videoView.layer.addSublayer(self.previewLayer)
//Add File Output
self.captureSession.addOutput(self.movieOutput)
captureSession.startRunning()
} catch {
print(error)
}
}
}
For more details, refer to this link:
https://medium.com/@santhosh3386/ios-avcapturesession-record-video-with-audio-23c8f8c9a8f8
My goal is to use an AVCaptureSession to programmatically lock focus, capture one image, activate the flash, then capture a second image after some delay.
I have managed to get the captures to work using an AVCaptureSession instance and an AVCaptureStillImageOutput. However, the images I get when calling captureStillImageAsynchronouslyFromConnection(_:completionHandler:) are 1920 x 1080, not the full 12 megapixel image my iPhone 6S camera is capable of.
Here is my capture function:
func captureImageFromStream(completion: (result: UIImage) -> Void)
{
if let stillOutput = self.stillImageOutput {
var videoConnection : AVCaptureConnection?
for connection in stillOutput.connections {
for port in connection.inputPorts! {
if port.mediaType == AVMediaTypeVideo {
videoConnection = connection as? AVCaptureConnection
break
}
}
if videoConnection != nil {
break
}
}
if videoConnection != nil {
stillOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) {
(imageDataSampleBuffer, error) -> Void in
if error == nil {
let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageDataSampleBuffer)
if let image = UIImage(data: imageData) {
completion(result: image)
}
}
else {
NSLog("ImageCapture Error: \(error)")
}
}
}
}
}
What modifications should I make to capture the image I'm looking for? I'm new to Swift, so please excuse any beginner mistakes I've made.
Before you addOutput the stillImageOutput and startRunning, you need to set your capture session preset to photo:
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
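A minimal sketch of the ordering, in the same Swift 2 style as the question (stillImageOutput being the question's AVCaptureStillImageOutput):
// Set the photo preset BEFORE adding the still image output and starting the session
captureSession.sessionPreset = AVCaptureSessionPresetPhoto
if captureSession.canAddOutput(stillImageOutput) {
    captureSession.addOutput(stillImageOutput)
}
captureSession.startRunning()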