Video compression using AVAssetWriter - iOS

I've created a function to compress a video file. It uses AVAssetWriter and adds inputs and outputs for the video and audio tracks. When it starts writing, I get an error as soon as the AVAssetReader for the audio track starts reading (audioReader.startReading()). Here's the error: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVAssetReader startReading] cannot be called again after reading has already started'.
The code: https://gist.github.com/jaumevn/9ba329aaf49c81c57a276fd135f53f20
Can anyone see what the problem is here? Thanks!

On line 77 of your code, you're starting a second AVAssetReader on the same file.
You don't need two readers; instead, hook up the audio AVAssetReaderTrackOutput as an output on the existing AVAssetReader.
Something like this:
let videoReaderSettings: [String: Int] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)]
let videoReaderOutput = AVAssetReaderTrackOutput(track: videoAssetTrack, outputSettings: videoReaderSettings)

var audioReaderSettings = [String: Any]()
audioReaderSettings[AVFormatIDKey] = Int(kAudioFormatLinearPCM)
let audioReaderOutput = AVAssetReaderTrackOutput(track: audioAssetTrack, outputSettings: audioReaderSettings)

// One reader on the asset itself (an AVAsset, not a URL), with one output per track.
let videoReader = try! AVAssetReader(asset: videoAsset)
videoReader.addOutput(videoReaderOutput)
videoReader.addOutput(audioReaderOutput)

videoWriter.startWriting()
videoReader.startReading()
To move the data, pull sample buffers from the reader's outputs and append them to the corresponding AVAssetWriterInputs; the AVCapture* sample-buffer delegates only apply when capturing live from the camera or microphone, not when reading from a file.
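A minimal sketch of that hand-off for the video track, using AVAssetWriterInput's requestMediaDataWhenReady(on:using:). The names videoWriterInput, videoReaderOutput and the queue label are assumptions standing in for the gist's variables, not code from the original question:

```swift
import AVFoundation

// Sketch: pull decoded samples from the reader output and feed the writer input.
// The audio track would get an identical loop with its own output/input pair.
let queue = DispatchQueue(label: "video.compress")
videoWriterInput.requestMediaDataWhenReady(on: queue) {
    while videoWriterInput.isReadyForMoreMediaData {
        if let sample = videoReaderOutput.copyNextSampleBuffer() {
            videoWriterInput.append(sample)
        } else {
            // Reader is exhausted (or failed); close out this input.
            videoWriterInput.markAsFinished()
            break
        }
    }
}
```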

Related

AVAssetWriterInput with more than 2 channels

Does anyone know how to initialize an AVAssetWriterInput with more than 2 channels?
I'm trying to create an audioInput and then add it to an AVAssetWriter like this:
let audioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioOutputSettings)
assetWriter.add(audioInput)
assetWriter.startWriting()
But it crashes when I initialize the audioInput with an audioOutputSettings dictionary whose channel-count key is greater than 2. The error is:
Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVAssetWriterInput initWithMediaType:outputSettings:sourceFormatHint:] 6 is not a valid channel count for Format ID 'aac '. Use kAudioFormatProperty_AvailableEncodeNumberChannels (<AudioToolbox/AudioFormat.h>) to enumerate available channel counts for a given format.'
As you found in the AVAssetWriterInput comment:
If AVNumberOfChannelsKey specifies a channel count greater than 2, the dictionary must also specify a value for AVChannelLayoutKey.
What it fails to mention is that the valid channel counts depend on your format ID, so passing an AudioChannelLayout won't make AAC support anything other than 1 or 2 channels.
Formats that do support 6 channels include Linear PCM (kAudioFormatLinearPCM) and, probably more interestingly, High-Efficiency AAC (kAudioFormatMPEG4AAC_HE), which supports 2, 4, 6 and 8 channel audio.
The following code creates an AVAssetWriterInput that is ready for 6 channel AAC HE sample buffers:
var channelLayout = AudioChannelLayout()
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_MPEG_5_1_D

let audioOutputSettings: [String : Any] = [
    AVNumberOfChannelsKey: 6,
    AVFormatIDKey: kAudioFormatMPEG4AAC_HE,
    AVSampleRateKey: 44100,
    AVChannelLayoutKey: NSData(bytes: &channelLayout, length: MemoryLayout.size(ofValue: channelLayout)),
]

let audioInput = AVAssetWriterInput(mediaType: .audio, outputSettings: audioOutputSettings)
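As the original error message suggests, you can also enumerate the channel counts an encoder actually supports with kAudioFormatProperty_AvailableEncodeNumberChannels. A sketch (my understanding is that the specifier is an AudioStreamBasicDescription with at least mFormatID filled out):

```swift
import AudioToolbox

// Ask AudioToolbox which channel counts the HE-AAC encoder accepts.
var asbd = AudioStreamBasicDescription()
asbd.mFormatID = kAudioFormatMPEG4AAC_HE

var size: UInt32 = 0
AudioFormatGetPropertyInfo(kAudioFormatProperty_AvailableEncodeNumberChannels,
                           UInt32(MemoryLayout.size(ofValue: asbd)), &asbd, &size)

var channelCounts = [UInt32](repeating: 0, count: Int(size) / MemoryLayout<UInt32>.size)
AudioFormatGetProperty(kAudioFormatProperty_AvailableEncodeNumberChannels,
                       UInt32(MemoryLayout.size(ofValue: asbd)), &asbd, &size, &channelCounts)
print(channelCounts) // the channel counts valid for this format ID
```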
Change these two lines:
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_MPEG_2_0
AVNumberOfChannelsKey : 2,
I hope it helps; it worked in my code.

AVAudioPlayer sends "Unsupported file" error when playing a file recorded via AVCaptureSession

I've been scratching my head over this for a full day now and don't seem to be getting any closer, so I hope you guys can guide me down the right path :)
Here's the situation.
I have an AVCaptureSession properly initialized, where I add the audio input as follows:
let audioDevice = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)
let audioIn = try! AVCaptureDeviceInput(device: audioDevice)
if session.canAddInput(audioIn) {
    session.addInput(audioIn)
}
The audio output is added as follows:
self.audioOutput = AVCaptureAudioDataOutput()
if session.canAddOutput(self.audioOutput) {
    session.addOutput(self.audioOutput)
    self.audioConnection = self.audioOutput.connectionWithMediaType(AVMediaTypeAudio)
}
I then set up the recording settings:
if let audioAssetWriterOutput = self.audioOutput.recommendedAudioSettingsForAssetWriterWithOutputFileType(AVFileTypeAppleM4A) {
    return audioAssetWriterOutput as? [String: AnyObject]
}
which I assign to my AVAssetWriterInput that is initialized in audio mode, with those settings and the correct format description.
_audioInput = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings, sourceFormatHint: audioFormatDescription)
_audioInput!.expectsMediaDataInRealTime = true
Then I simply start the AVCaptureSession via startRunning(), which captures the audio data into an .m4a file.
Everything is fine during the audio capture; here are the observations I made:
The recorded file exists on disk as expected.
The file can be played by any player: my Mac, my iPhone (I imported it via iTunes); all seems good.
The file is at the correct location when I set up my AVAudioPlayer.
I tried initializing the AVAudioPlayer with NSData or NSURL; same result.
Now later in my code, I try to read that audio file via an AVAudioPlayer :
self.audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:pathLink] error:&error];
I get this error:
Error Domain=NSOSStatusErrorDomain Code=1954115647 "(null)"
which, after double-checking, means "unsupported type".
What's incorrect in my setup? Does this come from my AVCaptureSession, or could something be wrong with my AVAudioPlayer setup?
Thanks!

AVAudioRecorder settings empty after constructor call

I'm trying to record audio using the microphone and AVAudioRecorder.
It works on iOS 8, but my code no longer works on iOS 9.
The recordSettings dictionary is set properly, and I pass it to the AVAudioRecorder(URL:settings:) constructor.
But just afterwards, recorder.settings is empty and an assertion failure is thrown:
let recordSettings: [String: AnyObject] = [
    AVNumberOfChannelsKey: NSNumber(integer: 2),
    AVFormatIDKey: NSNumber(integer: Int(kAudioFormatMPEG4AAC)),
    AVEncoderBitRateKey: NSNumber(integer: 64)]

var recorder: AVAudioRecorder!
do {
    recorder = try AVAudioRecorder(URL: tempURL, settings: recordSettings) // recordSettings.count == 3
    assert(recorder.settings.count != 0, "Audio Recorder does not provide settings") // assertion failure thrown here
} catch let error as NSError {
    print("error when initializing recorder: \(error)")
    return
}
Can anyone help me? Is this a bug?
EDIT: In my full code I did not check recorder.settings right after. I instantiated the recorder as above, then did this:
recorder.delegate = self
recorder.prepareToRecord()
recorder.meteringEnabled = true
And it crashes on this line:
for i in 1...(recorder.settings[AVNumberOfChannelsKey] as! Int) {
    ...
}
It crashes because recorder.settings[AVNumberOfChannelsKey] is nil.
I'm not sure why you're checking the settings property, but from the AVAudioRecorder header file, on the settings property:
these settings are fully valid only when prepareToRecord has been called
So you must call prepareToRecord() first. BUT it will fail/return false, because your bit rate is far too low: its unit is bits per second, not kilobits per second:
AVEncoderBitRateKey: NSNumber(integer: 64000)
This worked on iOS 8 because the too-low bit rate was simply discarded; it looks like it became an error in iOS 9.
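Putting the two fixes together, a sketch against the question's code (tempURL is assumed to exist):

```swift
import AVFoundation

let recordSettings: [String: AnyObject] = [
    AVNumberOfChannelsKey: NSNumber(integer: 2),
    AVFormatIDKey: NSNumber(integer: Int(kAudioFormatMPEG4AAC)),
    AVEncoderBitRateKey: NSNumber(integer: 64000)] // bits per second, not kbit/s

let recorder = try AVAudioRecorder(URL: tempURL, settings: recordSettings)
recorder.prepareToRecord() // settings are fully valid only after this call
let channels = recorder.settings[AVNumberOfChannelsKey] as? Int // no longer nil
```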

exception com.apple.coreaudio.avfaudio reason: error -50

I get the message below when I try to play audio at a different pitch. I googled the error with no success. If I set breakpoints, it stops here (see the screenshot). I tried printing all the objects to see if anything was nil, but I didn't find anything. The most mysterious thing is that it only happens on my iPhone 6+; on the other phones I tested it doesn't break. I then went back to the project I originally took these sound effects from:
https://github.com/atikur/Pitch-Perfect
And if you run it, it works, until you change...
AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord, error: &error)
To:
AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback, error: &error)
And then boom (ONLY ON A REAL DEVICE ATTACHED TO XCODE; it works in the simulator):
2015-03-21 11:56:13.311 Pitch Perfect[1237:607678] 11:56:13.311 ERROR: [0x10320c000] AVAudioFile.mm:496: -[AVAudioFile readIntoBuffer:frameCount:error:]: error -50
2015-03-21 11:56:13.313 Pitch Perfect[1237:607678] * Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'error -50'
* First throw call stack:
(0x18687a530 0x1978040e4 0x18687a3f0 0x1851ea6c0 0x185232d38 0x1852130f8 0x185212ccc 0x100584fd4 0x100584f94 0x10058fdb8 0x1005882c4 0x1005925d4 0x100592208 0x198037dc8 0x198037d24 0x198034ef8)
libc++abi.dylib: terminating with uncaught exception of type NSException
And the really weird thing is this screenshot: for some reason, after printing audioEngine, audioEngine.outputNode becomes nil.
I had the same error. I had created a "sound.swift" class that my view controller would instantiate. I decided to simplify everything and focus on making the sound work, so I put the following code in the view controller, and it works:
//fetch recorded file
var pitchPlayer = AVAudioPlayerNode()
var timePitch = AVAudioUnitTimePitch()
let dirPath = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0] as! String
var pathArray = [dirPath, "son.wav"]
filePath = NSURL.fileURLWithPathComponents(pathArray)
audioFile = AVAudioFile(forReading: filePath.filePathURL, error: nil)

audioEngine = AVAudioEngine()
audioEngine.attachNode(pitchPlayer)
audioEngine.attachNode(timePitch)

//Create a session
var session = AVAudioSession.sharedInstance()
session.setCategory(AVAudioSessionCategoryPlayAndRecord, error: nil)

//Route output audio to the speaker
session.overrideOutputAudioPort(AVAudioSessionPortOverride.Speaker, error: nil)

audioEngine.connect(pitchPlayer, to: timePitch, format: audioFile.processingFormat)
audioEngine.connect(timePitch, to: audioEngine.outputNode, format: audioFile.processingFormat)
pitchPlayer.scheduleFile(audioFile, atTime: nil, completionHandler: nil)
audioEngine.startAndReturnError(&audioError)
pitchPlayer.play()

Saving AVAudioRecorder to NSUserDefaults

I'm trying to start an audio recording on the Apple Watch and allow it to be stopped on the iPhone.
To share the information from the watch to the phone, I am trying the following:
var recorder: AVAudioRecorder!
recorder = AVAudioRecorder(URL: soundFileURL, settings: recordSettings, error: &error)

if let watchDefaults = NSUserDefaults(suiteName: "group.spywatchkit") {
    let encodedRecorder = NSKeyedArchiver.archivedDataWithRootObject(recorder) as NSData
    watchDefaults.setObject(encodedRecorder, forKey: "test")
}
However, this results in the following error:
'NSInvalidArgumentException', reason: '-[AVAudioSession encodeWithCoder:]: unrecognized selector sent to instance
This appears to fail because the AVAudioRecorder object doesn't conform to the NSCoding protocol. Is there another way to save this object? Can I recreate it later?
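Since AVAudioRecorder (and the AVAudioSession it references) doesn't adopt NSCoding, archiving it will always throw. One workaround, sketched here with the question's names (not from the original thread), is to persist only the ingredients needed to recreate the recorder: the file path and the settings dictionary. Note the watch and phone don't share a file system, so the audio file itself would still have to be transferred separately:

```swift
import AVFoundation

// Store the pieces needed to rebuild the recorder, not the recorder itself.
if let watchDefaults = NSUserDefaults(suiteName: "group.spywatchkit") {
    watchDefaults.setObject(soundFileURL.path, forKey: "recordingPath")
    watchDefaults.setObject(recordSettings, forKey: "recordingSettings")
}

// Later, recreate an equivalent recorder from the saved pieces:
if let watchDefaults = NSUserDefaults(suiteName: "group.spywatchkit") {
    if let path = watchDefaults.stringForKey("recordingPath"),
        settings = watchDefaults.dictionaryForKey("recordingSettings") {
        var error: NSError?
        let recorder = AVAudioRecorder(URL: NSURL(fileURLWithPath: path),
                                       settings: settings, error: &error)
    }
}
```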
