I was able to successfully grab the recorded video by following this question
here
Basically:
Adopt the AVCaptureFileOutputRecordingDelegate protocol
Loop through the available devices
Create a session with the camera
Start recording
Stop recording
Get the recorded video by implementing the above protocol's method
But the file doesn't come with audio.
According to this question, I have to record the audio separately and merge the video and audio using the classes it mentions.
But I have no idea how to implement video and audio recording at the same time.
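For reference, merging separately recorded audio and video after the fact looks roughly like the sketch below, using AVMutableComposition and AVAssetExportSession (the classes such answers usually mention). The file URLs and function name are hypothetical placeholders; as the answers below show, though, it is simpler to record both into one file with a single session.
func mergeVideo(at videoURL: URL, withAudioAt audioURL: URL, outputURL: URL,
                completion: @escaping (Error?) -> Void) {
    let videoAsset = AVURLAsset(url: videoURL)   // hypothetical recorded video file
    let audioAsset = AVURLAsset(url: audioURL)   // hypothetical recorded audio file
    let composition = AVMutableComposition()
    guard let videoTrack = composition.addMutableTrack(withMediaType: .video,
                                                       preferredTrackID: kCMPersistentTrackID_Invalid),
          let audioTrack = composition.addMutableTrack(withMediaType: .audio,
                                                       preferredTrackID: kCMPersistentTrackID_Invalid),
          let sourceVideo = videoAsset.tracks(withMediaType: .video).first,
          let sourceAudio = audioAsset.tracks(withMediaType: .audio).first else { return }
    do {
        // Lay both tracks down over the same time range.
        let range = CMTimeRange(start: .zero, duration: videoAsset.duration)
        try videoTrack.insertTimeRange(range, of: sourceVideo, at: .zero)
        try audioTrack.insertTimeRange(range, of: sourceAudio, at: .zero)
    } catch {
        completion(error)
        return
    }
    guard let exporter = AVAssetExportSession(asset: composition,
                                              presetName: AVAssetExportPresetHighestQuality) else { return }
    exporter.outputURL = outputURL
    exporter.outputFileType = .mov
    exporter.exportAsynchronously { completion(exporter.error) }
}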
for device in devices {
    // Make sure this particular device supports video
    if (device.hasMediaType(AVMediaTypeVideo)) {
        // Finally check the position and confirm we've got the back camera
        if (device.position == AVCaptureDevicePosition.Back) {
            captureDevice = device as? AVCaptureDevice
            if captureDevice != nil {
                print("Capture device found")
                beginSession()
            }
        }
    }
}
In this loop, the only available device positions are .Front and .Back.
Following is the way to record video with audio using the AVFoundation framework. The steps are:
1. Prepare the session:
self.captureSession = AVCaptureSession()
2. Prepare available video and audio devices:
let session = AVCaptureDevice.DiscoverySession.init(deviceTypes:[.builtInWideAngleCamera, .builtInMicrophone], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.unspecified)
let cameras = (session.devices.compactMap{$0})
for camera in cameras {
    if camera.position == .front {
        self.frontCamera = camera
    }
    if camera.position == .back {
        self.rearCamera = camera
        try camera.lockForConfiguration()
        camera.focusMode = .continuousAutoFocus
        camera.unlockForConfiguration()
    }
}
3. Prepare session inputs:
guard let captureSession = self.captureSession else {
    throw CameraControllerError.captureSessionIsMissing
}
if let rearCamera = self.rearCamera {
    self.rearCameraInput = try AVCaptureDeviceInput(device: rearCamera)
    if captureSession.canAddInput(self.rearCameraInput!) {
        captureSession.addInput(self.rearCameraInput!)
        self.currentCameraPosition = .rear
    } else {
        throw CameraControllerError.inputsAreInvalid
    }
} else if let frontCamera = self.frontCamera {
    self.frontCameraInput = try AVCaptureDeviceInput(device: frontCamera)
    if captureSession.canAddInput(self.frontCameraInput!) {
        captureSession.addInput(self.frontCameraInput!)
        self.currentCameraPosition = .front
    } else {
        throw CameraControllerError.inputsAreInvalid
    }
} else {
    throw CameraControllerError.noCamerasAvailable
}
// Add the audio input
if let audioDevice = self.audioDevice {
    self.audioInput = try AVCaptureDeviceInput(device: audioDevice)
    if captureSession.canAddInput(self.audioInput!) {
        captureSession.addInput(self.audioInput!)
    } else {
        throw CameraControllerError.inputsAreInvalid
    }
}
4. Prepare output:
self.videoOutput = AVCaptureMovieFileOutput()
if captureSession.canAddOutput(self.videoOutput!) {
    captureSession.addOutput(self.videoOutput!)
}
captureSession.startRunning()
5. Start recording:
func recordVideo(completion: @escaping (URL?, Error?) -> Void) {
    guard let captureSession = self.captureSession, captureSession.isRunning else {
        completion(nil, CameraControllerError.captureSessionIsMissing)
        return
    }
    let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let fileUrl = paths[0].appendingPathComponent("output.mp4")
    try? FileManager.default.removeItem(at: fileUrl)
    videoOutput!.startRecording(to: fileUrl, recordingDelegate: self)
    self.videoRecordCompletionBlock = completion
}
6. Stop recording:
func stopRecording(completion: @escaping (Error?) -> Void) {
    guard let captureSession = self.captureSession, captureSession.isRunning else {
        completion(CameraControllerError.captureSessionIsMissing)
        return
    }
    self.videoOutput?.stopRecording()
}
7. Implement the delegate:
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    if error == nil {
        // do something
    } else {
        // do something
    }
}
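Since recordVideo(completion:) in step 5 stores the completion in videoRecordCompletionBlock, one plausible way to fill in the //do something branches is to forward the result to that block. A sketch, assuming the property is declared as var videoRecordCompletionBlock: ((URL?, Error?) -> Void)?:
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
    if let error = error {
        self.videoRecordCompletionBlock?(nil, error)          // recording failed
    } else {
        self.videoRecordCompletionBlock?(outputFileURL, nil)  // file is ready, with audio
    }
}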
I took the idea from here: https://www.appcoda.com/avfoundation-swift-guide/
Here is the complete project: https://github.com/rubaiyat6370/iOS-Tutorial/
Found the answer. This answer goes with this code.
It can simply be done by:
declaring another capture device variable
looping through the devices and initializing the camera and audio capture device variables
adding the audio input to the session
The code:
var captureDevice: AVCaptureDevice?
var captureAudio: AVCaptureDevice?
Loop through the devices and initialize the capture devices:
var captureDeviceVideoFound: Bool = false
var captureDeviceAudioFound: Bool = false
// Loop through all the capture devices on this phone
for device in devices {
    // Make sure this particular device supports video
    if (device.hasMediaType(AVMediaTypeVideo)) {
        // Finally check the position and confirm we've got the front camera
        if (device.position == AVCaptureDevicePosition.Front) {
            captureDevice = device as? AVCaptureDevice // initialize video
            if captureDevice != nil {
                print("Capture device found")
                captureDeviceVideoFound = true
            }
        }
    }
    if (device.hasMediaType(AVMediaTypeAudio)) {
        print("Capture device audio init")
        captureAudio = device as? AVCaptureDevice // initialize audio
        captureDeviceAudioFound = true
    }
}
if (captureDeviceAudioFound && captureDeviceVideoFound) {
    beginSession()
}
Inside the session:
try captureSession.addInput(AVCaptureDeviceInput(device: captureDevice))
try captureSession.addInput(AVCaptureDeviceInput(device: captureAudio))
This will output the video file with audio. There is no need to merge the audio or do anything else.
This Apple documentation helps.
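One caveat: addInput raises an exception if the session cannot accept an input, so a slightly defensive variant of the two lines above (a sketch using the same captureDevice and captureAudio variables) would be:
let videoInput = try AVCaptureDeviceInput(device: captureDevice!)
let audioInput = try AVCaptureDeviceInput(device: captureAudio!)
// Guard with canAddInput(_:) so a busy or misconfigured session does not raise.
if captureSession.canAddInput(videoInput) {
    captureSession.addInput(videoInput)
}
if captureSession.canAddInput(audioInput) {
    captureSession.addInput(audioInput)
}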
I followed the answer from @Mumu, but it didn't work for me because the call to AVCaptureDevice.DiscoverySession.init was returning video devices only (with mediaType: .video it won't return the microphone).
Here is my version that works on iOS 14, Swift 5:
var captureSession: AVCaptureSession? = nil
var camera: AVCaptureDevice? = nil
var microphone: AVCaptureDevice? = nil
var videoOutput: AVCaptureFileOutput? = nil
var previewLayer: AVCaptureVideoPreviewLayer? = nil
func findDevices() {
    camera = nil
    microphone = nil
    // Search for the video media type; we need the back camera only
    let session = AVCaptureDevice.DiscoverySession.init(deviceTypes: [.builtInWideAngleCamera],
                                                        mediaType: AVMediaType.video, position: AVCaptureDevice.Position.back)
    var devices = (session.devices.compactMap{$0})
    // Search for the microphone
    let asession = AVCaptureDevice.DiscoverySession.init(deviceTypes: [.builtInMicrophone],
                                                         mediaType: AVMediaType.audio, position: AVCaptureDevice.Position.unspecified)
    // Combine all devices into one list
    devices.append(contentsOf: asession.devices.compactMap{$0})
    for device in devices {
        if device.position == .back {
            do {
                try device.lockForConfiguration()
                device.focusMode = .continuousAutoFocus
                device.flashMode = .off
                device.whiteBalanceMode = .continuousAutoWhiteBalance
                device.unlockForConfiguration()
                camera = device
            } catch {
            }
        }
        if device.hasMediaType(.audio) {
            microphone = device
        }
    }
}
func initVideoRecorder() -> Bool {
    captureSession = AVCaptureSession()
    guard let captureSession = captureSession else { return false }
    captureSession.sessionPreset = .hd4K3840x2160
    findDevices()
    guard let camera = camera else { return false }
    do {
        let cameraInput = try AVCaptureDeviceInput(device: camera)
        captureSession.addInput(cameraInput)
    } catch {
        self.camera = nil
        return false
    }
    if let audio = microphone {
        do {
            let audioInput = try AVCaptureDeviceInput(device: audio)
            captureSession.addInput(audioInput)
        } catch {
        }
    }
    videoOutput = AVCaptureMovieFileOutput()
    if captureSession.canAddOutput(videoOutput!) {
        captureSession.addOutput(videoOutput!)
        captureSession.startRunning()
        videoOutput?.connection(with: .video)?.videoOrientation = .landscapeRight
        previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer?.videoGravity = .resizeAspect
        previewLayer?.connection?.videoOrientation = .landscapeRight
        return true
    }
    return false
}
func startRecording() -> Bool {
    guard let captureSession = captureSession, captureSession.isRunning else { return false }
    let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let fileUrl = paths[0].appendingPathComponent(getVideoName())
    try? FileManager.default.removeItem(at: fileUrl)
    videoOutput?.startRecording(to: fileUrl, recordingDelegate: self)
    return true
}
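Since startRecording() passes self as the recording delegate, the class is assumed to conform to AVCaptureFileOutputRecordingDelegate; a minimal sketch of the callback that fires when the file (with audio) is finished:
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL,
                from connections: [AVCaptureConnection], error: Error?) {
    if let error = error {
        print("Recording failed: \(error)")
    } else {
        print("Recorded video with audio at \(outputFileURL)")  // hand the URL to your UI here
    }
}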
I had this problem too, but when I grouped adding the video input and the audio input together, the audio worked. This is my code for adding the inputs:
if (cameraSession.canAddInput(deviceInput) == true && cameraSession.canAddInput(audioDeviceInput) == true) { // detects if devices can be added
    cameraSession.addInput(deviceInput)      // adds video
    cameraSession.addInput(audioDeviceInput) // adds audio
}
I also found that you have to add the video input first, or else there won't be audio. I originally had them in two if statements, but I found that putting them in one lets video and audio be recorded together. Hope this helps.
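For context, a sketch of how the two inputs referenced above might be created (the surrounding throwing function and the videoDevice name are assumptions, not the poster's code):
// Inside a throwing setup function, with `videoDevice` found via device discovery:
guard let audioDevice = AVCaptureDevice.default(for: .audio) else { return }
let deviceInput = try AVCaptureDeviceInput(device: videoDevice)
let audioDeviceInput = try AVCaptureDeviceInput(device: audioDevice)
if cameraSession.canAddInput(deviceInput) && cameraSession.canAddInput(audioDeviceInput) {
    cameraSession.addInput(deviceInput)      // video first, per the note above
    cameraSession.addInput(audioDeviceInput) // then audio
}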
Record Video With Audio
// Get the video device
if let devices = AVCaptureDevice.devices(withMediaType: AVMediaTypeVideo) as? [AVCaptureDevice] {
    for device in devices {
        if device.hasMediaType(AVMediaTypeVideo) {
            if device.position == AVCaptureDevicePosition.back {
                videoCaptureDevice = device
            }
        }
    }
    if videoCaptureDevice != nil {
        do {
            // Add the video input
            try self.captureSession.addInput(AVCaptureDeviceInput(device: videoCaptureDevice))
            // Get the audio device
            let audioInput = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio)
            // Add the audio input
            try self.captureSession.addInput(AVCaptureDeviceInput(device: audioInput))
            self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
            previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
            previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.portrait
            self.videoView.layer.addSublayer(self.previewLayer)
            // Add the file output
            self.captureSession.addOutput(self.movieOutput)
            captureSession.startRunning()
        } catch {
            print(error)
        }
    }
}
For more details, refer to this link:
https://medium.com/@santhosh3386/ios-avcapturesession-record-video-with-audio-23c8f8c9a8f8
Related
I'm currently debugging the video camera model that I use to record video and audio. I would like the camera to keep playing background audio if something is playing, and to record over it using the mic. I initially got my AVCaptureSession to work smoothly as intended by adding the microphone input at setup time, but that automatically stops background audio as soon as the camera view is set up.
I have been working on the following solution, where I add the audio input only when I start recording and attempt to remove the audio input once I stop recording. Here is my current code:
import SwiftUI
import AVFoundation
// MARK: Camera View Model
class CameraViewModel: NSObject, ObservableObject, AVCaptureFileOutputRecordingDelegate, AVCapturePhotoCaptureDelegate {
@Published var session = AVCaptureSession()
@Published var alert = false
@Published var output = AVCaptureMovieFileOutput()
@Published var preview: AVCaptureVideoPreviewLayer!
// MARK: Video Recorder Properties
@Published var isRecording: Bool = false
@Published var recordedURLs: [URL] = []
@Published var previewURL: URL?
@Published var showPreview: Bool = false
// Set up is called after necessary permissions are acquired
func setUp(){
do{
self.session.beginConfiguration()
let cameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
if cameraDevice != nil {
let videoInput = try AVCaptureDeviceInput(device: cameraDevice!)
/* old code that added the audio input at setup, which worked as intended:
let audioDevice = AVCaptureDevice.default(for: .audio)
let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
if self.session.canAddInput(videoInput) && self.session.canAddInput(audioInput){ //MARK: Audio Input
self.session.addInput(videoInput)
self.session.addInput(audioInput)
self.videoDeviceInput = videoInput
} */
// new code that only adds the video input
if self.session.canAddInput(videoInput) {
self.session.addInput(videoInput)
self.videoDeviceInput = videoInput
}
if self.session.canAddOutput(self.output){
self.session.addOutput(self.output)
}
if self.session.canAddOutput(self.photoOutput){
self.session.addOutput(self.photoOutput)
}
// for audio mixing, make sure this is left at its default value of true
self.session.automaticallyConfiguresApplicationAudioSession = true
self.session.commitConfiguration()
}
}
catch{
print(error.localizedDescription)
}
}
// startRecording is called on user input and now also attaches the mic input
func startRecording() {
// here is how I'm mixing the background audio and adding the microphone input when the camera starts recording
do
{
try AVAudioSession.sharedInstance().setActive(false)
try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.ambient)
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: AVAudioSession.CategoryOptions.mixWithOthers)
try AVAudioSession.sharedInstance().setMode(AVAudioSession.Mode.videoRecording)
try AVAudioSession.sharedInstance().setActive(true)
let audioDevice = AVCaptureDevice.default(for: .audio)
let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
if self.session.canAddInput(audioInput){
self.session.automaticallyConfiguresApplicationAudioSession = false
self.session.addInput(audioInput)
}
} catch {
print("Can't Set Audio Session Category: \(error)")
}
// MARK: Temporary URL for recording Video
let tempURL = NSTemporaryDirectory() + "\(Date()).mov"
//Need to correct image orientation before moving further
if let videoOutputConnection = output.connection(with: .video) {
//For frontCamera settings to capture mirror image
if self.videoDeviceInput.device.position == .front {
videoOutputConnection.automaticallyAdjustsVideoMirroring = false
videoOutputConnection.isVideoMirrored = true
} else {
videoOutputConnection.automaticallyAdjustsVideoMirroring = true
}
}
output.startRecording(to: URL(fileURLWithPath: tempURL), recordingDelegate: self)
isRecording = true
}
//stop recording removes the audio input
func stopRecording(){
output.stopRecording()
isRecording = false
self.flashOn = false
// stop recording is where I believe I'm doing something wrong when I remove the audio input
do{
try AVAudioSession.sharedInstance().setActive(false)
let audioDevice = AVCaptureDevice.default(for: .audio)
let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
self.session.removeInput(audioInput)
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)
} catch {
print("Error occurred while removing audio device input: \(error)")
}
}
}
I also added the following necessary lines in my AppDelegate launch method:
/*
below is for mixing audio
*/
let audioSession = AVAudioSession.sharedInstance()
do {
    try audioSession.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
} catch {
    print("Failed to set audio session category.")
}
I believe I'm on the right track: the first time the app opens, background audio plays smoothly (with a small camera flash), and once I start recording, it mixes the background audio with the phone's mic input as intended. I was able to verify this in the preview in a new view. However, once I dismiss the preview of the recorded URL and go back to the camera, the mic input stops working completely.
I also receive this error in my console:
AVAudioSession_iOS.mm:1271 Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
When I looked online, the advice was to stop or pause any AVPlayer, but I'm unsure where I'm even using an AVPlayer here. I also noticed people suggesting separate capture sessions for audio and video, but I was struggling to get that working as well, so I went ahead with this option.
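One detail worth checking in stopRecording(): session.removeInput(audioInput) is passed a freshly created AVCaptureDeviceInput, which is not the instance that was added in startRecording(), so the original input likely stays attached. A sketch of the pattern that addresses that, assuming the input added in startRecording() is kept in a property (audioDeviceInput here is an assumed name); this alone may not eliminate the running-I/O warning, since the movie file output can still be finishing asynchronously when setActive(false) runs:
var audioDeviceInput: AVCaptureDeviceInput?   // assumed property, set in startRecording()

func stopRecording() {
    output.stopRecording()
    isRecording = false
    session.beginConfiguration()
    if let audioInput = audioDeviceInput {
        session.removeInput(audioInput)       // remove the same instance that was added
        audioDeviceInput = nil
    }
    session.commitConfiguration()
    do {
        // Deactivate only after capture I/O on the audio session has stopped.
        try AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
        try AVAudioSession.sharedInstance().setCategory(.ambient, mode: .default, options: [.mixWithOthers])
        try AVAudioSession.sharedInstance().setActive(true)
    } catch {
        print("Error resetting audio session: \(error)")
    }
}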
Editing for minimal reproducible example:
Here is the camera view model and I've attached the necessary views in a separate answer:
import SwiftUI
import AVFoundation
// MARK: Camera View Model
class CameraViewModel: NSObject, ObservableObject, AVCaptureFileOutputRecordingDelegate, AVCapturePhotoCaptureDelegate {
@Published var session = AVCaptureSession()
@Published var alert = false
@Published var output = AVCaptureMovieFileOutput()
@Published var preview: AVCaptureVideoPreviewLayer!
// MARK: Video Recorder Properties
@Published var isRecording: Bool = false
@Published var recordedURLs: [URL] = []
@Published var previewURL: URL?
@Published var showPreview: Bool = false
// Top Progress Bar
@Published var recordedDuration: CGFloat = 0
// Maximum 15 seconds
@Published var maxDuration: CGFloat = 15
// for photo capture, since we're going to read pic data....
@Published var photoOutput = AVCapturePhotoOutput()
@Published var isTaken = false
@Published var picData = Data(count: 0)
@Published var thumbnailData = Data(count: 0)
@Published var flashOn = false
@objc dynamic var videoDeviceInput: AVCaptureDeviceInput!
private let sessionQueue = DispatchQueue(label: "session queue")
// MARK: Device Configuration Properties
private let videoDeviceDiscoverySession = AVCaptureDevice.DiscoverySession(deviceTypes: [.builtInWideAngleCamera, .builtInDualCamera, .builtInTrueDepthCamera], mediaType: .video, position: .unspecified)
@AppStorage("camerapermission") var camerapermission = 0
func checkPermission(){
switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized:
self.checkAudioPermission()
return
case .notDetermined:
AVCaptureDevice.requestAccess(for: .video) { (status) in
if status{
self.checkAudioPermission()
}
}
case .denied:
self.camerapermission = 2
self.alert.toggle()
return
default:
return
}
}
func checkAudioPermission() {
switch AVAudioSession.sharedInstance().recordPermission {
case .granted :
print("permission granted")
self.camerapermission = 1
setUp()
case .denied:
print("permission denied")
self.camerapermission = 2
self.alert.toggle()
case .undetermined:
print("request permission here")
AVAudioSession.sharedInstance().requestRecordPermission({ granted in
if granted {
print("permission granted here")
DispatchQueue.main.async {
self.camerapermission = 1
}
self.setUp()
}
})
default:
print("unknown")
}
}
func setUp(){
do{
self.session.beginConfiguration()
let cameraDevice = AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front)
if cameraDevice != nil {
let videoInput = try AVCaptureDeviceInput(device: cameraDevice!)
// let audioDevice = AVCaptureDevice.default(for: .audio)
// let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
// if self.session.canAddInput(videoInput) && self.session.canAddInput(audioInput){ //MARK: Audio Input
// self.session.addInput(videoInput)
// self.session.addInput(audioInput)
// self.videoDeviceInput = videoInput
// }
//
/* mixing code buggy */
if self.session.canAddInput(videoInput) {
self.session.addInput(videoInput)
self.videoDeviceInput = videoInput
}
if self.session.canAddOutput(self.output){
self.session.addOutput(self.output)
}
if self.session.canAddOutput(self.photoOutput){
self.session.addOutput(self.photoOutput)
}
//for audio mixing, make sure this is default set to true
self.session.automaticallyConfiguresApplicationAudioSession = true
self.session.commitConfiguration()
}
}
catch{
print(error.localizedDescription)
}
}
public func set(zoom: CGFloat){
let factor = zoom < 1 ? 1 : zoom
let device = self.videoDeviceInput.device
do {
try device.lockForConfiguration()
device.videoZoomFactor = factor
device.unlockForConfiguration()
}
catch {
print(error.localizedDescription)
}
}
func changeCamera() {
sessionQueue.async {
if self.videoDeviceInput != nil {
let currentVideoDevice = self.videoDeviceInput.device
let currentPosition = currentVideoDevice.position
let preferredPosition: AVCaptureDevice.Position
let preferredDeviceType: AVCaptureDevice.DeviceType
switch currentPosition {
case .unspecified, .front:
preferredPosition = .back
preferredDeviceType = .builtInWideAngleCamera
case .back:
preferredPosition = .front
preferredDeviceType = .builtInWideAngleCamera
@unknown default:
print("Unknown capture position. Defaulting to back, dual-camera.")
preferredPosition = .back
preferredDeviceType = .builtInWideAngleCamera
}
let devices = self.videoDeviceDiscoverySession.devices
var newVideoDevice: AVCaptureDevice? = nil
// First, seek a device with both the preferred position and device type. Otherwise, seek a device with only the preferred position.
if let device = devices.first(where: { $0.position == preferredPosition && $0.deviceType == preferredDeviceType }) {
newVideoDevice = device
} else if let device = devices.first(where: { $0.position == preferredPosition }) {
newVideoDevice = device
}
if let videoDevice = newVideoDevice {
do {
let videoDeviceInput = try AVCaptureDeviceInput(device: videoDevice)
self.session.beginConfiguration()
// Remove the existing device input first, because AVCaptureSession doesn't support
// simultaneous use of the rear and front cameras.
self.session.removeInput(self.videoDeviceInput)
// MARK: Audio Input
if self.session.canAddInput(videoDeviceInput){
self.session.addInput(videoDeviceInput)
self.videoDeviceInput = videoDeviceInput
}
if self.session.canAddOutput(self.output){
self.session.addOutput(self.output)
}
if self.session.canAddOutput(self.photoOutput){
self.session.addOutput(self.photoOutput)
}
self.session.commitConfiguration()
} catch {
print("Error occurred while creating video device input: \(error)")
}
}
}
}
}
// take and retake functions...
func switchFlash() {
self.flashOn.toggle()
}
func takePic(){
let settings = AVCapturePhotoSettings()
if flashOn {
settings.flashMode = .on
} else {
settings.flashMode = .off
}
//Need to correct image orientation before moving further
if let photoOutputConnection = photoOutput.connection(with: .video) {
//For frontCamera settings to capture mirror image
if self.videoDeviceInput.device.position == .front {
photoOutputConnection.automaticallyAdjustsVideoMirroring = false
photoOutputConnection.isVideoMirrored = true
} else {
photoOutputConnection.automaticallyAdjustsVideoMirroring = true
}
}
self.photoOutput.capturePhoto(with: settings, delegate: self)
print("retaking a photo taken...")
DispatchQueue.global(qos: .background).async {
//self.session.stopRunning()
DispatchQueue.main.async {
withAnimation{self.isTaken.toggle()}
}
}
}
func reTake(){
DispatchQueue.global(qos: .background).async {
self.session.startRunning()
DispatchQueue.main.async {
withAnimation{self.isTaken.toggle()}
//clearing ...
self.flashOn = false
self.picData = Data(count: 0)
}
}
}
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
if error != nil{
return
}
print("pic taken...")
guard let imageData = photo.fileDataRepresentation() else{return}
self.picData = imageData
}
func startRecording() {
/* mixing code buggy */
do
{
try AVAudioSession.sharedInstance().setActive(false)
try AVAudioSession.sharedInstance().setCategory(AVAudioSession.Category.ambient)
try AVAudioSession.sharedInstance().setCategory(.playAndRecord, mode: .default, options: AVAudioSession.CategoryOptions.mixWithOthers)
try AVAudioSession.sharedInstance().setMode(AVAudioSession.Mode.videoRecording)
try AVAudioSession.sharedInstance().setActive(true)
let audioDevice = AVCaptureDevice.default(for: .audio)
let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
if self.session.canAddInput(audioInput){
self.session.automaticallyConfiguresApplicationAudioSession = false
self.session.addInput(audioInput)
}
} catch {
print("Can't Set Audio Session Category: \(error)")
}
// MARK: Temporary URL for recording Video
let tempURL = NSTemporaryDirectory() + "\(Date()).mov"
//Need to correct image orientation before moving further
if let videoOutputConnection = output.connection(with: .video) {
//For frontCamera settings to capture mirror image
if self.videoDeviceInput.device.position == .front {
videoOutputConnection.automaticallyAdjustsVideoMirroring = false
videoOutputConnection.isVideoMirrored = true
} else {
videoOutputConnection.automaticallyAdjustsVideoMirroring = true
}
}
output.startRecording(to: URL(fileURLWithPath: tempURL), recordingDelegate: self)
isRecording = true
}
func stopRecording(){
output.stopRecording()
isRecording = false
self.flashOn = false
/* mixing code buggy */
do{
try AVAudioSession.sharedInstance().setActive(false)
let audioDevice = AVCaptureDevice.default(for: .audio)
let audioInput = try AVCaptureDeviceInput(device: audioDevice!)
self.session.removeInput(audioInput)
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.ambient, mode: .default, options: [.mixWithOthers])
try AVAudioSession.sharedInstance().setActive(true)
} catch {
print("Error occurred while removing audio device input: \(error)")
}
}
func generateThumbnail() {
let image = self.imageFromVideo(url: previewURL!, at: 0)
DispatchQueue.main.async {
self.thumbnailData = image?.pngData() ?? Data(count: 0)
}
}
func imageFromVideo(url: URL, at time: TimeInterval) -> UIImage? {
let asset = AVURLAsset(url: url)
let assetIG = AVAssetImageGenerator(asset: asset)
assetIG.appliesPreferredTrackTransform = true
assetIG.apertureMode = AVAssetImageGenerator.ApertureMode.encodedPixels
let cmTime = CMTime(seconds: time, preferredTimescale: 60)
let thumbnailImageRef: CGImage
do {
thumbnailImageRef = try assetIG.copyCGImage(at: cmTime, actualTime: nil)
print("SUCESS: THUMBNAIL")
} catch let error {
print("Error: \(error)")
return nil
}
return UIImage(cgImage: thumbnailImageRef)
}
func restartSession() {
if !self.session.isRunning {
DispatchQueue.global(qos: .background).async {
self.session.startRunning()
}
}
}
func stopSession() {
// DispatchQueue.global(qos: .background).async {
self.session.stopRunning()
// }
}
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
if let error = error {
print(error.localizedDescription)
return
}
// CREATED SUCCESSFULLY
print(outputFileURL)
guard let data = try? Data(contentsOf: outputFileURL) else {
return
}
print("File size before compression: \(Double(data.count / 1048576)) mb")
self.recordedURLs.append(outputFileURL)
if self.recordedURLs.count == 1{
self.previewURL = outputFileURL
self.generateThumbnail()
return
}
/*
The code below can be ignored, because only one URL is recorded
*/
// CONVERTING URLs TO ASSETS
let assets = recordedURLs.compactMap { url -> AVURLAsset in
return AVURLAsset(url: url)
}
self.previewURL = nil
// MERGING VIDEOS
mergeVideos(assets: assets) { exporter in
exporter.exportAsynchronously {
if exporter.status == .failed{
// HANDLE ERROR
print(exporter.error!)
}
else{
if let finalURL = exporter.outputURL{
print(finalURL)
DispatchQueue.main.async {
self.previewURL = finalURL
print("inside final url")
}
}
}
}
}
}
func mergeVideos(assets: [AVURLAsset], completion: @escaping (_ exporter: AVAssetExportSession) -> ()) {
let composition = AVMutableComposition()
var lastTime: CMTime = .zero
guard let videoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
guard let audioTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else { return }
for asset in assets {
// Linking Audio and Video
do{
try videoTrack.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration), of: asset.tracks(withMediaType: .video)[0], at: lastTime)
// Safe Check if Video has Audio
if !asset.tracks(withMediaType: .audio).isEmpty{
try audioTrack.insertTimeRange(CMTimeRange(start: .zero, duration: asset.duration), of: asset.tracks(withMediaType: .audio)[0], at: lastTime)
}
}
catch{
// HANDLE ERROR
print(error.localizedDescription)
}
// Updating Last Time
lastTime = CMTimeAdd(lastTime, asset.duration)
}
// MARK: Temp Output URL
let tempURL = URL(fileURLWithPath: NSTemporaryDirectory() + "Reel-\(Date()).mp4")
// VIDEO IS ROTATED
// BRINGING BACK TO ORIGINAL TRANSFORM
let layerInstructions = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
// MARK: Transform
var transform = CGAffineTransform.identity
transform = transform.rotated(by: 90 * (.pi / 180))
transform = transform.translatedBy(x: 0, y: -videoTrack.naturalSize.height)
layerInstructions.setTransform(transform, at: .zero)
let instructions = AVMutableVideoCompositionInstruction()
instructions.timeRange = CMTimeRange(start: .zero, duration: lastTime)
instructions.layerInstructions = [layerInstructions]
let videoComposition = AVMutableVideoComposition()
videoComposition.renderSize = CGSize(width: videoTrack.naturalSize.height, height: videoTrack.naturalSize.width)
videoComposition.instructions = [instructions]
videoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
guard let exporter = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else { return }
exporter.outputFileType = .mp4
exporter.outputURL = tempURL
exporter.videoComposition = videoComposition
completion(exporter)
}
func compressVideo(inputURL: URL,
outputURL: URL,
handler: @escaping (_ exportSession: AVAssetExportSession?) -> Void) {
let urlAsset = AVURLAsset(url: inputURL, options: nil)
guard let exportSession = AVAssetExportSession(asset: urlAsset,
presetName: AVAssetExportPresetMediumQuality) else {
handler(nil)
return
}
exportSession.outputURL = outputURL
exportSession.outputFileType = .mp4
exportSession.exportAsynchronously {
handler(exportSession)
}
}
}
You can ignore the mergeVideos code, as there is only one recorded URL. Right now, you should be able to run this code if you've added camera and microphone permissions to your Info.plist.
It currently has the buggy mixing code, where background audio works the first time, but after restarting the audio session it no longer works. Any help would be greatly appreciated!
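For completeness, these are the two Info.plist keys in question (the description strings are placeholders):
<key>NSCameraUsageDescription</key>
<string>Camera access is needed to record video.</string>
<key>NSMicrophoneUsageDescription</key>
<string>Microphone access is needed to record audio.</string>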
I've recently started running beta on my camera-based app. Everything is working as expected except on iPhone 6 devices.
The session starts on the back camera, and each time an iPhone 6 user switches to the front camera the app crashes. (And just to be really clear: no one on any other iPhone model is experiencing the issue.) I've gotten my hands on a 6 to test and can consistently reproduce the error, resulting in libc++abi.dylib: terminating with uncaught exception of type NSException.
I've tried starting the session on the front camera and it crashes immediately.
func initializeCamera() {
self.captureSession.sessionPreset = .hd1920x1080
let discovery = AVCaptureDevice.DiscoverySession.init(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera],
mediaType: .video,
position: .unspecified) as AVCaptureDevice.DiscoverySession
for device in discovery.devices as [AVCaptureDevice] {
if device.hasMediaType(.video) {
if device.position == AVCaptureDevice.Position.front {
videoCaptureDevice = device
do {
try currentDeviceInput = AVCaptureDeviceInput(device: device)
} catch {
print("error: \(error.localizedDescription)")
}
}
}
}
if videoCaptureDevice != nil {
do {
let videoInput = try AVCaptureDeviceInput(device: videoCaptureDevice!)
captureSession.addInput(videoInput)
if let audioInput = AVCaptureDevice.default(for: .audio) {
try captureSession.addInput(AVCaptureDeviceInput(device: audioInput))
}
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
guard let previewLayer = previewLayer else { return }
cameraPreviewView.frame = cameraContainer.frame
cameraPreviewView.layer.addSublayer(previewLayer)
previewLayer.frame = cameraPreviewView.frame
previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
setVideoOrientation()
captureSession.addOutput(movieFileOutput)
if let movieFileOutputConnection = movieFileOutput.connection(with: .video) {
if movieFileOutputConnection.isVideoStabilizationSupported {
movieFileOutputConnection.preferredVideoStabilizationMode = .cinematic
}
}
captureSession.startRunning()
sessionIsReady(true)
} catch {
print("error: \(error.localizedDescription)")
}
}
}
func setVideoOrientation() {
if let connection = self.previewLayer?.connection {
if connection.isVideoOrientationSupported {
connection.videoOrientation = .portrait
previewLayer?.frame = cameraContainer.bounds
}
}
}
The crash is triggered at captureSession.addInput(videoInput). videoInput is not nil. The camera's orientation is locked to portrait.
Can anyone offer any insight? Please let me know if any additional code would be helpful. Thanks in advance.
captureSession.addInput(videoInput) is causing the crash.
So you should use canAddInput(_:) first to avoid the crash:
if captureSession.canAddInput(videoInput) {
    captureSession.addInput(videoInput)
}
And in your case, captureSession.canAddInput(videoInput) == false with that iPhone 6.
Now, you are also doing self.captureSession.sessionPreset = .hd1920x1080
But according to Wikipedia, the iPhone 6 front camera hardware supports
1.2 MP stills (1280×960 px max) and 720p video recording (30 fps). That doesn't fit the 1920×1080 ("Full HD") preset.
You could do this check to determine the "best" AVCaptureSession.Preset you can use:
func setSessionPreset(forDevice device: AVCaptureDevice) {
    // Put them in order from "preferred" to "last preferred"
    let videoPresets: [AVCaptureSession.Preset] = [.hd4K3840x2160, .hd1920x1080, .hd1280x720] // etc.
    let preset = videoPresets.first(where: { device.supportsSessionPreset($0) }) ?? .hd1280x720
    captureSession.sessionPreset = preset
}
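A possible call site (a sketch): run it after discovering the device and before adding inputs, in place of the hard-coded preset:
if let device = videoCaptureDevice {
    setSessionPreset(forDevice: device)  // picks the best supported preset from the list, falling back to 720p
}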
I will answer my own question to share my experience, since there is no complete working code on the internet.
iOS devices usually record videos as .mov files in the QuickTime format. Even if the output video has the AVC baseline video codec and the AAC audio codec, the resulting file will be in a QuickTime container, and those videos may not play on Android devices. Apple has AVFoundation classes like AVCaptureSession and AVCaptureMovieFileOutput, but they do not directly support mp4 file output. How can I record an actual mp4 video in an MPEG-4 container with Swift and iOS 8 support?
First things first: This may not be the best solution, but this is a complete solution.
The code below captures video and audio with AVCaptureSession and converts the result into MPEG-4 with AVAssetExportSession. There is also zoom in, zoom out, and switch-camera functionality, plus permission checking. You can record in 480p or 720p. You can also set minimum and maximum frame rates to create smaller videos. Hope this helps as a complete guide.
Note: There are keys to add to Info.plist to ask for camera, photo library, and microphone permission:
<key>NSCameraUsageDescription</key>
<string>Yo, this is a cam app.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>Yo, i need to access your photos.</string>
<key>NSMicrophoneUsageDescription</key>
<string>Yo, i can't hear you</string>
And the code:
import UIKit
import Photos
import AVFoundation
class VideoAct: UIViewController, AVCaptureFileOutputRecordingDelegate
{
let captureSession : AVCaptureSession = AVCaptureSession()
var captureDevice : AVCaptureDevice!
var microphone : AVCaptureDevice!
var previewLayer : AVCaptureVideoPreviewLayer!
let videoFileOutput : AVCaptureMovieFileOutput = AVCaptureMovieFileOutput()
var duration : Int = 30
var v_path : URL = URL(fileURLWithPath: "")
var my_timer : Timer = Timer()
var cameraFront : Bool = false
var cameras_number : Int = 0
var max_zoom : CGFloat = 76
var devices : [AVCaptureDevice] = []
var captureInput : AVCaptureDeviceInput = AVCaptureDeviceInput()
var micInput : AVCaptureDeviceInput = AVCaptureDeviceInput()
@IBOutlet weak var cameraView: UIView!
override func viewDidLoad()
{
super.viewDidLoad()
if (check_permissions())
{
initialize()
}
else
{
AVCaptureDevice.requestAccess(forMediaType: AVMediaTypeVideo, completionHandler: { (granted) in
if (granted)
{
self.initialize()
}
else
{
self.dismiss(animated: true, completion: nil)
}
})
}
}
func check_permissions() -> Bool
{
return AVCaptureDevice.authorizationStatus(forMediaType: AVMediaTypeVideo) == AVAuthorizationStatus.authorized
}
@available(iOS 4.0, *)
func capture(_ captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAt outputFileURL: URL!, fromConnections connections: [Any]!, error: Error!)
{
//you can implement stopvideoaction here if you want
}
func initialize()
{
let directory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
v_path = directory.appendingPathComponent("temp_video.mp4")
// we just set the extension to .mp4, but
// it is actually a mov file in a QuickTime container!! It may not play on Android devices.
// It will be converted below.
self.duration = 30
devices = AVCaptureDevice.devices() as! [AVCaptureDevice]
for device in devices
{
if (device.hasMediaType(AVMediaTypeVideo))
{
if (device.position == AVCaptureDevicePosition.back)
{
captureDevice = device as AVCaptureDevice
}
if (device.position == AVCaptureDevicePosition.front)
{
cameras_number = 2
}
}
if (device.hasMediaType(AVMediaTypeAudio))
{
microphone = device as AVCaptureDevice
}
}
if (cameras_number == 1)
{
//only 1 camera available
btnSwitchCamera.isHidden = true
}
if captureDevice != nil
{
beginSession()
}
max_zoom = captureDevice.activeFormat.videoMaxZoomFactor
}
func beginSession()
{
if (captureSession.isRunning)
{
captureSession.stopRunning()
}
do
{
try captureInput = AVCaptureDeviceInput(device: captureDevice)
try micInput = AVCaptureDeviceInput(device: microphone)
try captureDevice.lockForConfiguration()
}
catch
{
print("errorrrrrrrrrrr \(error)")
}
// beginConfiguration before adding inputs and changing settings
captureSession.beginConfiguration()
captureSession.addInput(captureInput)
captureSession.addInput(micInput)
previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.init(rawValue: UIDevice.current.orientation.rawValue)!
if (previewLayer.connection.isVideoStabilizationSupported)
{
previewLayer.connection.preferredVideoStabilizationMode = AVCaptureVideoStabilizationMode.auto
}
if (captureDevice.isSmoothAutoFocusSupported)
{
captureDevice.isSmoothAutoFocusEnabled = false
}
if (captureDevice.isFocusModeSupported(AVCaptureFocusMode.continuousAutoFocus))
{
captureDevice.focusMode = .continuousAutoFocus
}
set_preview_size_thing()
set_quality_thing()
if (captureDevice.isLowLightBoostSupported)
{
captureDevice.automaticallyEnablesLowLightBoostWhenAvailable = true
}
if (cameraView.layer.sublayers?[0] is AVCaptureVideoPreviewLayer)
{
//to prevent previewlayers stacking on every camera switch
cameraView.layer.sublayers?.remove(at: 0)
}
cameraView.layer.insertSublayer(previewLayer, at: 0)
previewLayer?.frame = cameraView.layer.frame
captureSession.commitConfiguration()
captureSession.startRunning()
}
func duration_thing()
{
// there is a textview to write remaining time left
self.duration = self.duration - 1
timerTextView.text = "remaining seconds: \(self.duration)"
timerTextView.sizeToFit()
if (self.duration == 0)
{
my_timer.invalidate()
stopVideoAction()
}
}
func switch_cam()
{
captureSession.removeInput(captureInput)
captureSession.removeInput(micInput)
cameraFront = !cameraFront
// capturedevice will be locked again
captureDevice.unlockForConfiguration()
for device in devices
{
if (device.hasMediaType(AVMediaTypeVideo))
{
if (device.position == AVCaptureDevicePosition.back && !cameraFront)
{
captureDevice = device as AVCaptureDevice
}
else if (device.position == AVCaptureDevicePosition.front && cameraFront)
{
captureDevice = device as AVCaptureDevice
}
}
}
beginSession()
}
func zoom_in()
{
// 10x zoom would be enough
if (captureDevice.videoZoomFactor * 1.5 < 10)
{
captureDevice.videoZoomFactor = captureDevice.videoZoomFactor * 1.5
}
else
{
captureDevice.videoZoomFactor = 10
}
}
func zoom_out()
{
if (captureDevice.videoZoomFactor * 0.67 > 1)
{
captureDevice.videoZoomFactor = captureDevice.videoZoomFactor * 0.67
}
else
{
captureDevice.videoZoomFactor = 1
}
}
func set_quality_thing()
{
// there is a switch in the screen (30-30 fps high quality or 15-23 fps normal quality)
// you may not have to do this because the export session also has some presets and a property called shouldOptimizeForNetworkUse. But it is better to make sure the output file does not end up huge with an unnecessary 90 fps video
captureDevice.activeVideoMinFrameDuration = CMTimeMake(1, switch_quality.isOn ? 30 : 15)
captureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, switch_quality.isOn ? 30 : 23)
}
func set_preview_size_thing()
{
//there is a switch for resolution (720p or 480p)
captureSession.sessionPreset = switch_res.isOn ? AVCaptureSessionPreset1280x720 : AVCaptureSessionPreset640x480
//this for loop is probably unnecessary and ridiculous but you can make sure you are using the right format
for some_format in captureDevice.formats as! [AVCaptureDeviceFormat]
{
let some_desc : String = String(describing: some_format)
if (switch_res.isOn)
{
if (some_desc.contains("1280x") && some_desc.contains("720") && some_desc.contains("420v") && some_desc.contains("30 fps"))
{
captureDevice.activeFormat = some_format
break
}
}
else
{
if (some_desc.contains("640x") && some_desc.contains("480") && some_desc.contains("420v"))
{
captureDevice.activeFormat = some_format
break
}
}
}
}
func takeVideoAction()
{
// movieFragmentInterval is important !! or you may end up with a video without audio
videoFileOutput.movieFragmentInterval = kCMTimeInvalid
captureSession.addOutput(videoFileOutput)
(videoFileOutput.connections.first as! AVCaptureConnection).videoOrientation = returnedOrientation()
videoFileOutput.maxRecordedDuration = CMTime(seconds: Double(self.duration), preferredTimescale: 1)
videoFileOutput.startRecording(toOutputFileURL: v_path, recordingDelegate: self)
//timer will tell the remaining time
my_timer = Timer.scheduledTimer(timeInterval: 1, target: self, selector: #selector(duration_thing), userInfo: nil, repeats: true)
}
func stopVideoAction()
{
captureDevice.unlockForConfiguration()
videoFileOutput.stopRecording()
captureSession.stopRunning()
// turn temp_video into an .mpeg4 (mp4) video
let directory = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let avAsset = AVURLAsset(url: v_path, options: nil)
// there are other presets than AVAssetExportPresetPassthrough
let exportSession = AVAssetExportSession(asset: avAsset, presetName: AVAssetExportPresetPassthrough)!
exportSession.outputURL = directory.appendingPathComponent("main_video.mp4")
// now it is actually in an mpeg4 container
exportSession.outputFileType = AVFileTypeMPEG4
let start = CMTimeMakeWithSeconds(0.0, 0)
let range = CMTimeRangeMake(start, avAsset.duration)
exportSession.timeRange = range
exportSession.exportAsynchronously(completionHandler: {
if (exportSession.status == AVAssetExportSessionStatus.completed)
{
// you don’t need temp video after exporting main_video
do
{
try FileManager.default.removeItem(atPath: self.v_path.path)
}
catch
{
}
// v_path is now points to mp4 main_video
self.v_path = directory.appendingPathComponent("main_video.mp4")
self.performSegue(withIdentifier: "ShareVideoController", sender: nil)
}
})
}
func btn_capture_click_listener()
{
if (videoFileOutput.isRecording)
{
stopVideoAction()
}
else
{
takeVideoAction()
}
}
func returnedOrientation() -> AVCaptureVideoOrientation
{
var videoOrientation: AVCaptureVideoOrientation!
let orientation = UIDevice.current.orientation
switch orientation
{
case .landscapeLeft:
videoOrientation = .landscapeRight
case .landscapeRight:
videoOrientation = .landscapeLeft
default:
videoOrientation = .landscapeLeft
}
return videoOrientation
}
override func prepare(for segue: UIStoryboardSegue, sender: Any?)
{
if (segue.identifier == "ShareVideoController")
{
//to make it visible in the camera roll (main_video.mp4)
PHPhotoLibrary.shared().performChanges({PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: self.v_path)}) { completed, error in}
let destVC : ShareVideoController = segue.destination as! ShareVideoController
// use the path in other screen to upload it or whatever
destVC.videoFilePath = v_path
// bla bla
}
}
override var supportedInterfaceOrientations: UIInterfaceOrientationMask
{
// screen will always be in landscape (remove this override if you want)
return .landscape
}
}
Up until recently, my custom camera was working fine. However, I have recently been receiving an error (perhaps with the upgrade to Xcode 8.2 and 8.2.1). I have the following code for loading the camera:
func reload() {
    captureSession?.stopRunning()
    previewLayer?.removeFromSuperlayer()
    captureSession = AVCaptureSession()
    captureSession!.sessionPreset = AVCaptureSessionPresetPhoto
    let captureDevice = AVCaptureDevice.defaultDevice(withDeviceType: .builtInWideAngleCamera, mediaType: AVMediaTypeVideo, position: direction)
    var input = AVCaptureDeviceInput()
    do {
        input = try AVCaptureDeviceInput(device: captureDevice)
    } catch {
        print("error")
        return
    }
    DispatchQueue.global(qos: .default).async {
        if self.captureSession!.canAddInput(input) == true {
            self.captureSession!.addInput(input)
            if self.captureSession!.canAddOutput(self.cameraOutput) {
                self.captureSession!.addOutput(self.cameraOutput)
                self.previewLayer = AVCaptureVideoPreviewLayer(session: self.captureSession)
                self.previewLayer!.videoGravity = AVLayerVideoGravityResizeAspectFill
                self.previewLayer!.connection?.videoOrientation = .portrait
                self.previewLayer?.frame = self.bounds
                DispatchQueue.main.async {
                    self.layer.addSublayer(self.previewLayer!)
                    self.captureSession!.startRunning()
                    self.clickedImage = nil
                    self.bringSubview(toFront: self.rotateButton)
                }
            } else {
                print("cannot add output")
            }
        } else {
            print("cannot add input")
        }
        DispatchQueue.main.async {
            self.bringSubview(toFront: self.rotateButton)
        }
    }
}
For some reason, it keeps printing "cannot add output" in the debug logs. I have tried to resolve this using this SO post, but it still does not run properly. Does anyone know what this means and how to fix it? Thanks!
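One possibility to rule out (an assumption, not a confirmed diagnosis): an AVCaptureOutput can be attached to at most one session, and reload() creates a fresh AVCaptureSession while reusing the same cameraOutput. If the previous session still holds the output, the new session will refuse it. A sketch of detaching it at the top of reload(), before the session is replaced:
captureSession?.stopRunning()
previewLayer?.removeFromSuperlayer()
// Detach the reused output from the old session before creating a new one.
if let oldSession = captureSession, oldSession.outputs.contains(cameraOutput) {
    oldSession.removeOutput(cameraOutput)
}
captureSession = AVCaptureSession()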
I'm currently developing an app which records a video and then shows it to you, so you can check whether the video was good enough. However, when displaying the recorded video, it shows in the wrong orientation.
Basically, I'm recording in LandscapeRight mode, but the recording plays back in portrait mode, and apparently it is recorded that way as well, even when I set AVCaptureVideoOrientation.LandscapeRight.
Here is the code I'm using to setup the recording:
func setupAVCapture() {
    session = AVCaptureSession()
    session.sessionPreset = AVCaptureSessionPresetHigh
    let devices = AVCaptureDevice.devices()
    // Loop through all the capture devices on this phone
    for device in devices {
        // Make sure this particular device supports video
        if (device.hasMediaType(AVMediaTypeVideo)) {
            // Finally check the position and confirm we've got the front camera
            if (device.position == AVCaptureDevicePosition.Front) {
                captureDevice = device as? AVCaptureDevice
                if captureDevice != nil {
                    beginSession()
                    break
                }
            }
        }
    }
}
func beginSession() {
    var err: NSError? = nil
    var deviceInput: AVCaptureDeviceInput?
    do {
        deviceInput = try AVCaptureDeviceInput(device: captureDevice)
    } catch let error as NSError {
        err = error
        deviceInput = nil
    }
    if err != nil {
        print("error: \(err?.localizedDescription)")
    }
    if self.session.canAddInput(deviceInput) {
        self.session.addInput(deviceInput)
    }
    self.videoDataOutput = AVCaptureVideoDataOutput()
    self.videoDataOutput.alwaysDiscardsLateVideoFrames = true
    self.videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL)
    self.videoDataOutput.setSampleBufferDelegate(self, queue: self.videoDataOutputQueue)
    if session.canAddOutput(self.videoDataOutput) {
        session.addOutput(self.videoDataOutput)
    }
    self.videoDataOutput.connectionWithMediaType(AVMediaTypeVideo).enabled = true
    self.previewLayer = AVCaptureVideoPreviewLayer(session: self.session)
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    self.previewLayer.frame = self.view.bounds
    self.previewLayer.masksToBounds = true
    self.previewLayer.connection.videoOrientation = AVCaptureVideoOrientation.LandscapeRight
    let rootLayer: CALayer = CameraPreview.layer
    rootLayer.masksToBounds = true
    rootLayer.addSublayer(self.previewLayer)
    session.startRunning()
}
After this, the following delegate gets called, and I display the recording in the same view for playback:
func captureOutput(captureOutput: AVCaptureFileOutput!, didFinishRecordingToOutputFileAtURL outputFileURL: NSURL!, fromConnections connections: [AnyObject]!, error: NSError!) {
    playbackAvailable = true
    SaveButton.hidden = false
    recording = false
    stopCamera()
    // let library = PHPhotoLibrary.sharedPhotoLibrary()
    // library.performChanges({ PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(outputFileURL)! }, completionHandler: {success, error in debugPrint("Finished saving asset. %#", (success ? "Success." : error!)) })
    // Play the video
    player = AVPlayer(URL: outputFileURL)
    playerLayer = AVPlayerLayer(player: player)
    playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill
    playerLayer.masksToBounds = true
    playerLayer.frame = self.view.bounds
    CameraPreview.layer.addSublayer(playerLayer)
    player.play()
}
Now the playback displays in the wrong orientation. Does anyone know how to fix this? I've also set the orientation of the view controller to LandscapeRight.
Solution:
// Necessary to record in the correct orientation
// ** MUST BE IMPLEMENTED AFTER SETTING UP THE MOVIE FILE OUTPUT **
var videoConnection: AVCaptureConnection? = nil
if let connections = videoFileOutput.connections {
    for x in connections {
        if let connection = x as? AVCaptureConnection {
            for port in connection.inputPorts {
                if (port.mediaType == AVMediaTypeVideo) {
                    videoConnection = connection
                }
            }
        }
    }
}
if (videoConnection != nil) {
    if let vidConnect: AVCaptureConnection = videoConnection! {
        if (vidConnect.supportsVideoOrientation) {
            vidConnect.videoOrientation = AVCaptureVideoOrientation(rawValue: UIApplication.sharedApplication().statusBarOrientation.rawValue)!
        }
    }
}
Basically, you need to set the AVCaptureVideoOrientation of the connection to the correct orientation before you start recording, or else it might record in the wrong orientation.
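In later Swift versions the same fix is shorter, since the output exposes its video connection directly. A sketch, assuming videoFileOutput is an AVCaptureMovieFileOutput already added to the session and outputURL is your destination URL:
if let connection = videoFileOutput.connection(with: .video),
   connection.isVideoOrientationSupported {
    connection.videoOrientation = .landscapeRight  // set before startRecording
}
videoFileOutput.startRecording(to: outputURL, recordingDelegate: self)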