AudioKit Apply Gain to Mono PCM Buffer

I have a mono audio file, which I am opening and trying to apply gain to with the following code:
let inputFile = try! AVAudioFile(forReading: url)
let settings = inputFile.fileFormat.settings
let outputFile = try! AVAudioFile(forWriting: outputURL, settings: settings)
let sourceBuffer = try! AVAudioPCMBuffer(file: inputFile)
let engine = AudioEngine()
let player = AudioPlayer()
let compressor = Compressor(player)
compressor.masterGain = AUValue(gain)
engine.output = compressor
compressor.start()
do {
    try engine.start()
    player.start()
    player.scheduleBuffer(sourceBuffer!, at: nil, options: [], completionHandler: nil)
    try engine.renderToFile(outputFile, duration: inputFile.duration)
} catch {
    completion(.failure(error))
}
player.scheduleBuffer crashes with the following exception:
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _outputFormat.channelCount == buffer.format.channelCount'
terminating with uncaught exception of type NSException
But how do I set the correct number of channels?
I already tried Settings.audioFormat = inputFile.fileFormat or Settings.audioFormat = sourceBuffer!.format before initializing the AudioEngine. Same results.
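One workaround I'm considering is converting the mono buffer to a two-channel buffer that matches the output before scheduling it. A minimal, untested sketch (assuming only the channel count differs and the sample rate stays the same):
if let monoBuffer = sourceBuffer,
   let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: monoBuffer.format.sampleRate, channels: 2),
   let stereoBuffer = AVAudioPCMBuffer(pcmFormat: stereoFormat, frameCapacity: monoBuffer.frameCapacity),
   let converter = AVAudioConverter(from: monoBuffer.format, to: stereoFormat) {
    // untested: simple (non-rate-converting) channel upmix from mono to stereo
    try? converter.convert(to: stereoBuffer, from: monoBuffer)
    player.scheduleBuffer(stereoBuffer, at: nil, options: [], completionHandler: nil)
}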
I appreciate any help. Thanks.

Related

iOS Camera Capture Session code works on iOS but not Catalyst

I have this block of code that's used for setting up input/output for a video session. It works perfectly on iOS.
let videoOutput = AVCaptureVideoDataOutput()
guard let cameraDevice = device.device,
      let captureDeviceInput = try? AVCaptureDeviceInput(device: cameraDevice),
      self.canAddInput(captureDeviceInput),
      self.canAddOutput(videoOutput) else {
    log("FrameExtractor error: could not setup input or output.")
    return
}
self.addInput(captureDeviceInput)
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sample buffer"))
self.addOutput(videoOutput)
guard let connection = videoOutput.connection(with: CAMERA_FEED_MEDIA_TYPE),
      connection.isVideoOrientationSupported,
      connection.isVideoMirroringSupported else {
    log("FrameExtractor: configureSession: Cannot establish connection")
    return
}
if let videoOrientation = getCameraOrientation(device: device) {
    connection.videoOrientation = videoOrientation
}
// Mirroring the front camera normalizes the display output for iPad's front camera
connection.isVideoMirrored = position == .front
When run on Catalyst, however, I get a consistent crash at self.addOutput(videoOutput):
Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid parameter not satisfying: index>=0 && index<[_currentItems count]'
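One thing I'm experimenting with (untested, and not a confirmed Catalyst fix) is wrapping the session changes in beginConfiguration()/commitConfiguration(), roughly like this:
// assumption: batching the input/output changes in one configuration transaction
self.beginConfiguration()
if self.canAddInput(captureDeviceInput) {
    self.addInput(captureDeviceInput)
}
videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "sample buffer"))
if self.canAddOutput(videoOutput) {
    self.addOutput(videoOutput)
}
self.commitConfiguration()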

Tap audio output using AVAudioEngine

I'm trying to install a tap on the output audio that is played by my app. I have no issue catching buffers from the microphone input, but when it comes to catching the sound that goes through the speaker, the earpiece, or whatever the output device is, it does not succeed. Am I missing something?
In my example I'm trying to catch the audio buffers from an audio file that an AVPlayer is playing. But let's pretend I don't have direct access to the AVPlayer instance.
The goal is to perform Speech Recognition on an audio stream.
func catchAudioBuffers() throws {
    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(.playAndRecord, mode: .voiceChat, options: .allowBluetooth)
    try audioSession.setActive(true)
    let outputNode = audioEngine.outputNode
    let recordingFormat = outputNode.outputFormat(forBus: 0)
    outputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        // PROCESS AUDIO BUFFER
    }
    audioEngine.prepare()
    try audioEngine.start()
    // For example I am playing an audio conversation with an AVPlayer and a local file.
    player.playSound()
}
This code results in a:
AVAEInternal.h:76 required condition is false: [AVAudioIONodeImpl.mm:1057:SetOutputFormat: (_isInput)]
*** Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _isInput'
I was facing the same problem, and after two days of brainstorming I found the following.
Apple says that for AVAudioOutputNode, the tap format must be specified as nil. I'm not sure how important that is, but in my case, which finally worked, the format was nil.
You need to start recording, and don't forget to stop it.
Removing the tap is really important; otherwise you will have a file that you can't open.
Try to save the file with the same audio settings that you used in the source file.
Here's my code that finally worked. It was partly taken from this question Saving Audio After Effect in iOS.
func playSound() {
    let rate: Float? = effect.speed
    let pitch: Float? = effect.pitch
    let echo: Bool? = effect.echo
    let reverb: Bool? = effect.reverb

    // initialize audio engine components
    audioEngine = AVAudioEngine()

    // node for playing audio
    audioPlayerNode = AVAudioPlayerNode()
    audioEngine.attach(audioPlayerNode)

    // node for adjusting rate/pitch
    let changeRatePitchNode = AVAudioUnitTimePitch()
    if let pitch = pitch {
        changeRatePitchNode.pitch = pitch
    }
    if let rate = rate {
        changeRatePitchNode.rate = rate
    }
    audioEngine.attach(changeRatePitchNode)

    // node for echo
    let echoNode = AVAudioUnitDistortion()
    echoNode.loadFactoryPreset(.multiEcho1)
    audioEngine.attach(echoNode)

    // node for reverb
    let reverbNode = AVAudioUnitReverb()
    reverbNode.loadFactoryPreset(.cathedral)
    reverbNode.wetDryMix = 50
    audioEngine.attach(reverbNode)

    // connect nodes
    if echo == true && reverb == true {
        connectAudioNodes(audioPlayerNode, changeRatePitchNode, echoNode, reverbNode, audioEngine.mainMixerNode, audioEngine.outputNode)
    } else if echo == true {
        connectAudioNodes(audioPlayerNode, changeRatePitchNode, echoNode, audioEngine.mainMixerNode, audioEngine.outputNode)
    } else if reverb == true {
        connectAudioNodes(audioPlayerNode, changeRatePitchNode, reverbNode, audioEngine.mainMixerNode, audioEngine.outputNode)
    } else {
        connectAudioNodes(audioPlayerNode, changeRatePitchNode, audioEngine.mainMixerNode, audioEngine.outputNode)
    }
    // schedule to play and start the engine!
    audioPlayerNode.stop()
    audioPlayerNode.scheduleFile(audioFile, at: nil) {
        var delayInSeconds: Double = 0
        if let lastRenderTime = self.audioPlayerNode.lastRenderTime, let playerTime = self.audioPlayerNode.playerTime(forNodeTime: lastRenderTime) {
            if let rate = rate {
                delayInSeconds = Double(self.audioFile.length - playerTime.sampleTime) / Double(self.audioFile.processingFormat.sampleRate) / Double(rate)
            } else {
                delayInSeconds = Double(self.audioFile.length - playerTime.sampleTime) / Double(self.audioFile.processingFormat.sampleRate)
            }
        }
        // schedule a stop timer for when audio finishes playing
        self.stopTimer = Timer(timeInterval: delayInSeconds, target: self, selector: #selector(EditViewController.stopAudio), userInfo: nil, repeats: false)
        RunLoop.main.add(self.stopTimer!, forMode: RunLoop.Mode.default)
    }

    do {
        try audioEngine.start()
    } catch {
        showAlert(Alerts.AudioEngineError, message: String(describing: error))
        return
    }
    // Try to save
    let dirPaths: String = (NSSearchPathForDirectoriesInDomains(.libraryDirectory, .userDomainMask, true)[0]) + "/sounds/"
    let tmpFileUrl = URL(fileURLWithPath: dirPaths + "effected.caf")
    // Save tmpFileUrl into a global variable so it is not lost (not important if you want to do something else)
    filteredOutputURL = tmpFileUrl
    do {
        print(dirPaths)
        let settings = [AVSampleRateKey : NSNumber(value: Float(44100.0)),
                        AVFormatIDKey : NSNumber(value: Int32(kAudioFormatMPEG4AAC)),
                        AVNumberOfChannelsKey : NSNumber(value: 1),
                        AVEncoderAudioQualityKey : NSNumber(value: Int32(AVAudioQuality.medium.rawValue))]
        self.newAudio = try! AVAudioFile(forWriting: tmpFileUrl, settings: settings)
        let length = self.audioFile.length
        audioEngine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { (buffer: AVAudioPCMBuffer, time: AVAudioTime) -> Void in
            // Let us know when to stop saving the file, otherwise it saves infinitely
            if (self.newAudio.length) <= length {
                do {
                    try self.newAudio.write(from: buffer)
                } catch {
                    print("Problem Writing Buffer")
                }
            } else {
                // if we don't remove the tap, it will keep on tapping infinitely
                self.audioEngine.mainMixerNode.removeTap(onBus: 0)
            }
        }
    }

    // play the recording!
    audioPlayerNode.play()
}
@objc func stopAudio() {
    if let audioPlayerNode = audioPlayerNode {
        let engine = audioEngine
        audioPlayerNode.stop()
        engine?.mainMixerNode.removeTap(onBus: 0)
    }
    if let stopTimer = stopTimer {
        stopTimer.invalidate()
    }
    configureUI(.notPlaying)
    if let audioEngine = audioEngine {
        audioEngine.stop()
        audioEngine.reset()
    }
    isPlaying = false
}

How to create anAudioSampleBuffer for CMSampleBufferGetFormatDescription in iOS Swift

I have been working on video compression in iOS Swift, following this SO answer. It works fine until I change the file type in this piece of code to .mp4:
let videoWriter = try! AVAssetWriter(outputURL: outputURL as URL, fileType: AVFileType.mov)
There are reasons I need the output in the .mp4 file format, but when I do that the app crashes and gives me this error:
2020-04-27 18:20:52.573614+0500 BrightCaster[7847:1513728] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVAssetWriter addInput:] In order to perform passthrough to file type public.mpeg-4, please provide a format hint in the AVAssetWriterInput initializer'
*** First throw call stack:
(0x1b331d5f0 0x1b303fbcc 0x1bd53b2b0 0x102383c0c 0x102382164 0x1021897cc 0x1b6ca73bc 0x1b6caba7c 0x1b6daec94 0x1b7835080 0x1b7834d30 0x1e9d077b4 0x1b786a764 0x1b783eb68 0x1b783f070 0x1e9d468f4 0x1b783f1c0 0x1e9d468f4 0x1b9e21d9c 0x105173730 0x105181710 0x1b329b748 0x1b329661c 0x1b3295c34 0x1bd3df38c 0x1b73c822c 0x10230f8a0 0x1b311d800)
libc++abi.dylib: terminating with uncaught exception of type NSException
So I searched on SO and found this question relevant to my problem.
But now the issue is that when I try to add its answer to my function, it gives me the error that anAudioSampleBuffer is not defined. As I am totally new to the audio/video domain, I am unable to understand why it is giving me this, or how to resolve it.
The piece of code from the answer that I am adding to my function is below.
//setup audio writer
//let formatDesc = CMSampleBufferGetFormatDescription(anAudioSampleBuffer)
//let audioWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: nil, sourceFormatHint: formatDesc)
let audioWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: nil)
audioWriterInput.expectsMediaDataInRealTime = false
videoWriter.add(audioWriterInput)
The commented part is not working. Any help would be appreciated, thanks.
The whole function for conversion is the following:
func convertVideoToLowQuailtyWithInputURL(inputURL: URL, outputURL: URL, completion: @escaping (Bool, _ url: String) -> Void) {
    let videoAsset = AVURLAsset(url: inputURL as URL, options: nil)
    let videoTrack = videoAsset.tracks(withMediaType: AVMediaType.video)[0]
    let videoSize = videoTrack.naturalSize
    let videoWriterCompressionSettings = [
        AVVideoAverageBitRateKey : Int(125000)
    ]
    let videoWriterSettings: [String : AnyObject] = [
        AVVideoCodecKey : AVVideoCodecH264 as AnyObject,
        AVVideoCompressionPropertiesKey : videoWriterCompressionSettings as AnyObject,
        AVVideoWidthKey : Int(videoSize.width) as AnyObject,
        AVVideoHeightKey : Int(videoSize.height) as AnyObject
    ]
    let videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoWriterSettings)
    videoWriterInput.expectsMediaDataInRealTime = true
    videoWriterInput.transform = videoTrack.preferredTransform
    let videoWriter = try! AVAssetWriter(outputURL: outputURL as URL, fileType: AVFileType.mov) // for now it is converting to .mov, I THINK SO
    videoWriter.add(videoWriterInput)

    // setup video reader
    let videoReaderSettings: [String : AnyObject] = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) as AnyObject
    ]
    let videoReaderOutput = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: videoReaderSettings)
    var videoReader: AVAssetReader!
    do {
        videoReader = try AVAssetReader(asset: videoAsset)
    } catch {
        print("video reader error: \(error)")
        completion(false, "")
    }
    videoReader.add(videoReaderOutput)

    // setup audio writer
    //let formatDesc = CMSampleBufferGetFormatDescription(anAudioSampleBuffer) // this line gives me an error here, because anAudioSampleBuffer is not defined anywhere
    //let audioWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: nil, sourceFormatHint: formatDesc)
    let audioWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: nil)
    audioWriterInput.expectsMediaDataInRealTime = false
    videoWriter.add(audioWriterInput)

    // setup audio reader
    let audioTrack = videoAsset.tracks(withMediaType: AVMediaType.audio)[0]
    let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: nil)
    let audioReader = try! AVAssetReader(asset: videoAsset)
    audioReader.add(audioReaderOutput)

    videoWriter.startWriting()

    // start writing from video reader
    videoReader.startReading()
    videoWriter.startSession(atSourceTime: CMTime.zero)
    let processingQueue = DispatchQueue(label: "processingQueue1")
    videoWriterInput.requestMediaDataWhenReady(on: processingQueue, using: { () -> Void in
        while videoWriterInput.isReadyForMoreMediaData {
            let sampleBuffer: CMSampleBuffer? = videoReaderOutput.copyNextSampleBuffer()
            if videoReader.status == .reading && sampleBuffer != nil {
                videoWriterInput.append(sampleBuffer!)
            } else {
                videoWriterInput.markAsFinished()
                if videoReader.status == .completed {
                    // start writing from audio reader
                    audioReader.startReading()
                    videoWriter.startSession(atSourceTime: CMTime.zero)
                    let processingQueue = DispatchQueue(label: "processingQueue2")
                    audioWriterInput.requestMediaDataWhenReady(on: processingQueue, using: { () -> Void in
                        while audioWriterInput.isReadyForMoreMediaData {
                            let sampleBuffer: CMSampleBuffer? = audioReaderOutput.copyNextSampleBuffer()
                            if audioReader.status == .reading && sampleBuffer != nil {
                                audioWriterInput.append(sampleBuffer!)
                            } else {
                                audioWriterInput.markAsFinished()
                                if audioReader.status == .completed {
                                    videoWriter.finishWriting(completionHandler: { () -> Void in
                                        completion(true, "\(videoWriter.outputURL)")
                                    })
                                }
                            }
                        }
                    })
                }
            }
        }
    })
}
You can output as mp4, passing audio through (no transcode) by providing that format hint like so:
let audioTrack = videoAsset.tracks(withMediaType: AVMediaType.audio)[0]
let audioWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings: nil, sourceFormatHint: audioTrack.formatDescriptions[0] as! CMFormatDescription)
Note the new position of the audioTrack definition.
I imagine both of Apple's .mov and .mp4 implementations need to know the compressed audio format to write the file, but I guess .mov is OK with inferring that information after initialisation, whereas .mp4 is not. Maybe it's another AVFoundation Surprise!
In your case I saw that it would be tiresome to rework the code to get the audio format from the first sample buffer, but then I remembered that the format is available from the input audio track.
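Putting the two changes together, a minimal sketch of the relevant writer setup (assuming the asset actually contains an audio track) looks like this:
// write an .mp4 instead of a .mov
let videoWriter = try! AVAssetWriter(outputURL: outputURL, fileType: AVFileType.mp4)

// pass the compressed audio through, using the input track's format as the hint
let audioTrack = videoAsset.tracks(withMediaType: .audio)[0]
let formatHint = audioTrack.formatDescriptions[0] as! CMFormatDescription
let audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil, sourceFormatHint: formatHint)
audioWriterInput.expectsMediaDataInRealTime = false
videoWriter.add(audioWriterInput)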

AVAudioEngine() Playback Not Working

I'm trying to change the pitch of a sound using the AVAudioEngine() in Swift. This is my code:
func setUpEngine() {
    let fileString = NSBundle.mainBundle().pathForResource("400", ofType: "wav")
    let url = NSURL(fileURLWithPath: fileString!)
    do {
        try audioFile = AVAudioFile(forReading: url)
        print("done")
    } catch {
    }
}
var engine = AVAudioEngine()
var audioFile = AVAudioFile()
var audioPlayerNode = AVAudioPlayerNode()
var changeAudioUnitTime = AVAudioUnitTimePitch()
override func viewDidLoad() {
    setUpEngine()
    let defaults = NSUserDefaults.standardUserDefaults()
    audioPlayerNode.stop()
    engine.stop()
    engine.reset()
    engine.attachNode(audioPlayerNode)
    changeAudioUnitTime.pitch = 800
    engine.attachNode(changeAudioUnitTime)
    engine.connect(audioPlayerNode, to: changeAudioUnitTime, format: nil)
    engine.connect(changeAudioUnitTime, to: engine.outputNode, format: nil)
    audioPlayerNode.scheduleFile(audioFile, atTime: nil, completionHandler: nil)
    engine.startAndReturnError(nil)
    audioPlayerNode.play()
The rest of my code is below (I do close the brackets).
I found most of this code online, and I get an error with the line
engine.startAndReturnError(nil)
'Value of type has no member'.
When I remove this line I get the following error:
'AVAudioPlayerNode.mm:333: Start: required condition is false:
_engine->IsRunning() Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false:
_engine->IsRunning()''
Any help would be greatly appreciated. I am using Swift in Xcode and a single-view application.
The error is that the engine is not running. You need to reorder your operations like this...
setUpEngine()
let defaults = NSUserDefaults.standardUserDefaults()
engine.attachNode(audioPlayerNode)
engine.attachNode(changeAudioUnitTime)
engine.connect(audioPlayerNode, to: changeAudioUnitTime, format: nil)
engine.connect(changeAudioUnitTime, to: engine.outputNode, format: nil)
changeAudioUnitTime.pitch = 800
engine.prepare()
engine.start()
audioPlayerNode.scheduleFile(audioFile, atTime: nil, completionHandler: nil)
audioPlayerNode.play()
Some time later...
engine.stop()
This is because you are running either an outdated version of Xcode or an incompatible version of iOS. I had this issue; in the latest Swift version there is no such method, and instead they provide .start().
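A minimal sketch of the same reordering in current Swift syntax (assuming audioFile has already been loaded by setUpEngine()):
engine.attach(audioPlayerNode)
engine.attach(changeAudioUnitTime)
engine.connect(audioPlayerNode, to: changeAudioUnitTime, format: nil)
engine.connect(changeAudioUnitTime, to: engine.outputNode, format: nil)
changeAudioUnitTime.pitch = 800
engine.prepare()
do {
    try engine.start()
} catch {
    print("Could not start engine: \(error)")
}
audioPlayerNode.scheduleFile(audioFile, at: nil, completionHandler: nil)
audioPlayerNode.play()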

Setting Track ID

I am having trouble setting the track ID when I set up the input parameters for my audioMix. How do I set the trackID? I have also tried params.trackID(track2.trackID), but that gives me the error CMPersistantTrackID -> $T4 is not identical to CMPersistantTrackID. I am trying to translate the line [audioInputParams setTrackID:[track trackID]]; from https://developer.apple.com/library/ios/qa/qa1716/_index.html
Error when I run the code below:
Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[__NSArrayM trackID]: unrecognized selector sent to instance 0x7f98c34cbfc0'
Code:
let type = AVMediaTypeAudio
let asset1 = AVURLAsset(URL: beatLocationURL, options: nil)
let arr2 = asset1.tracksWithMediaType(type)
let track2 = arr2.last as AVAssetTrack
let asset = AVURLAsset(URL: vocalURL, options:nil)
let arr3 = asset.tracksWithMediaType(type)
let track3 = arr3.last as AVAssetTrack
var trackParams = NSMutableArray()
let params = AVMutableAudioMixInputParameters(track:track2)
params.setVolume(0.0, atTime:kCMTimeZero)
params.trackID = track2.trackID <--- this line
trackParams.addObject(params)
let params1 = AVMutableAudioMixInputParameters(track:track3)
params1.setVolume(1.0, atTime: kCMTimeZero)
params1.trackID = track3.trackID <-- this line also
trackParams.addObject(params1)
let mix = AVMutableAudioMix()
mix.inputParameters = [trackParams]
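For reference, here is a minimal sketch using a plain Swift array instead. My assumption is that mix.inputParameters = [trackParams] wraps the NSMutableArray inside another array, so AVFoundation ends up calling trackID on the array itself, which would explain the unrecognized-selector error:
// build a plain Swift array of input parameters and assign it directly
var trackParams = [AVMutableAudioMixInputParameters]()

let params = AVMutableAudioMixInputParameters(track: track2)
params.setVolume(0.0, atTime: kCMTimeZero)
params.trackID = track2.trackID
trackParams.append(params)

let params1 = AVMutableAudioMixInputParameters(track: track3)
params1.setVolume(1.0, atTime: kCMTimeZero)
params1.trackID = track3.trackID
trackParams.append(params1)

let mix = AVMutableAudioMix()
mix.inputParameters = trackParams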
