Swift AudioToolbox not playing back audio file nor filling audio buffer - iOS

I am trying to play back an audio file using AudioToolbox. I wrote this in Swift, based on an old Objective-C example on the Apple website. It compiles and runs, however the callback function never gets triggered once CFRunLoop starts. (It gets called during setup, but I call it manually, so that doesn't count.)
My understanding of how this is supposed to work is that when this line is called:
status = AudioQueueNewOutput(&dataFormat, callback, &aqData, CFRunLoopGetCurrent(), commonModes, 0, &queue)
It is supposed to create an AudioQueue object, schedule it on CFRunLoop, register the callback function called "callback", and give me back a reference to the queue object. I'm not sure whether the callback lives inside the AudioQueue (which lives inside CFRunLoop) or is registered with the run loop directly.
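To spell out my reading of the parameters, here is the same call annotated line by line (the annotations are my own understanding of the AudioQueueNewOutput documentation, not anything verified):
status = AudioQueueNewOutput(
    &dataFormat,                          // playback format read from the audio file
    callback,                             // invoked whenever the queue needs a buffer refilled
    &aqData,                              // user-data pointer handed back to the callback
    CFRunLoopGetCurrent(),                // run loop the callback is scheduled on
    CFRunLoopMode.commonModes.rawValue,   // run loop modes for the callback
    0,                                    // reserved, must be 0
    &queue)                               // on success, receives the new AudioQueueRef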
When I'm done setting up I call:
status = AudioQueueStart(aqData.mQueue!, nil)
which "starts" the queue.
Then I call:
repeat {
CFRunLoopRunInMode(CFRunLoopMode.defaultMode, 0.1, false)
}
My understanding is that this is supposed to drive the audio queue, which in turn calls my callback function. However, from this point on the callback never gets hit. I am thinking there might be a way to inspect the audio queue, or perhaps inspect CFRunLoop. I might have made a mistake on one of the pointers somewhere (see the pointer sketch after the code below).
The result is that the app plays nothing but silence.
let kNumberBuffers = 3
var aqData = AQPlayerState()
var bufferLength: Float64 = 0.1

func playAudioFileWithToolbox() {
    let bundle = Bundle.main
    let permissions: AudioFilePermissions = .readPermission
    let filePath = bundle.path(forResource: "dreams", ofType: "wav")!
    var filePathArray = Array(filePath.utf8)
    let filePathSize = filePath.count
    let audioFileUrl = CFURLCreateFromFileSystemRepresentation(nil, &filePathArray, filePathSize, false)

    var status = AudioFileOpenURL(audioFileUrl!, permissions, kAudioFileWAVEType, &aqData.mAudioFile)
    if status != noErr {
        print("ErrorOpeningAudioFileUrl")
    }

    var dataFormatSize = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
    status = AudioFileGetProperty(aqData.mAudioFile!, kAudioFilePropertyDataFormat, &dataFormatSize, &aqData.mDataFormat)
    if status != noErr {
        print("Error getting AudioStreamBasicDescription")
    }

    var queue: AudioQueueRef? = aqData.mQueue
    var dataFormat = aqData.mDataFormat
    let commonModes = CFRunLoopMode.commonModes.rawValue
    status = AudioQueueNewOutput(&dataFormat, callback, &aqData, CFRunLoopGetCurrent(), commonModes, 0, &queue)
    if status == noErr {
        aqData.mQueue = queue
    } else {
        print("TroubleSettingUpOutputQueue")
    }

    var maxPacketSize: UInt32 = 0
    var propertySize = UInt32(MemoryLayout.size(ofValue: maxPacketSize))
    AudioFileGetProperty(aqData.mAudioFile!, kAudioFilePropertyPacketSizeUpperBound, &propertySize, &maxPacketSize)

    var bufferByteSize = aqData.bufferByteSize
    DeriveBufferSize(ASBDesc: &dataFormat, maxPacketSize: maxPacketSize, seconds: bufferLength, outBufferSize: &bufferByteSize, outNumPacketsToRead: &aqData.mNumPacketsToRead)
    aqData.bufferByteSize = bufferByteSize

    let isFormatVBR = aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0
    if isFormatVBR {
        aqData.mPacketDescs = UnsafeMutablePointer<AudioStreamPacketDescription>.allocate(capacity: Int(aqData.mNumPacketsToRead))
    } else {
        aqData.mPacketDescs = nil
    }

    var cookieSize = UInt32(MemoryLayout.size(ofValue: UInt32.self))
    let couldNotGetProperty = AudioFileGetPropertyInfo(aqData.mAudioFile!, kAudioFilePropertyMagicCookieData, &cookieSize, nil)
    if couldNotGetProperty == 0 && cookieSize > 0 {
        var magicCookie = UnsafeMutableRawPointer.allocate(byteCount: Int(cookieSize), alignment: MemoryLayout<UInt32>.alignment)
        status = AudioFileGetProperty(aqData.mAudioFile!, kAudioFilePropertyMagicCookieData, &cookieSize, &magicCookie)
        if status != noErr {
            print("Error: Failed to get magic cookie.")
        }
        AudioQueueSetProperty(aqData.mQueue!, kAudioQueueProperty_MagicCookie, magicCookie, cookieSize)
        magicCookie.deallocate()
    }

    aqData.mCurrentPacket = 0
    for i in 0..<kNumberBuffers {
        var pointer = aqData.mBuffers?.advanced(by: i)
        status = AudioQueueAllocateBuffer(aqData.mQueue!, aqData.bufferByteSize, &pointer)
        if status != noErr {
            print("Error allocating audio buffer.")
            continue
        }
        var buffer = aqData.mBuffers![i]
        callback(&aqData, aqData.mQueue!, &buffer) // I can't imagine how this does anything when it is not running
    }

    // Set volume
    AudioQueueSetParameter(aqData.mQueue!, kAudioQueueParam_Volume, 0.5) // I have way bigger problems

    // Start playing
    aqData.mIsRunning = true
    status = AudioQueueStart(aqData.mQueue!, nil)
    if status != noErr {
        print("Error: Failed to start audio queue.")
    }

    repeat {
        CFRunLoopRunInMode(CFRunLoopMode.defaultMode, 0.1, false)
    } while aqData.mIsRunning
    CFRunLoopRunInMode(CFRunLoopMode.defaultMode, 1, false)
}

private let callback: AudioQueueOutputCallback = { userData, inAQ, inBuffer in
    var aqData = userData!.load(as: AQPlayerState.self) // 255
    if !aqData.mIsRunning { return } // 2

    var numBytesReadFromFile: UInt32 = 0
    var numPackets = aqData.mNumPacketsToRead
    AudioFileReadPacketData(aqData.mAudioFile!, false, &numBytesReadFromFile, aqData.mPacketDescs, aqData.mCurrentPacket, &numPackets, inBuffer)
    if numPackets > 0 {
        inBuffer.pointee.mAudioDataByteSize = numBytesReadFromFile
        let packetCount = aqData.mPacketDescs!.pointee.mVariableFramesInPacket
        AudioQueueEnqueueBuffer(
            aqData.mQueue!,
            inBuffer,
            packetCount,
            aqData.mPacketDescs
        )
        aqData.mCurrentPacket += Int64(numPackets)
    } else {
        AudioQueueStop(aqData.mQueue!, false)
        aqData.mIsRunning = false
    }
}

func DeriveBufferSize(ASBDesc: inout AudioStreamBasicDescription, maxPacketSize: UInt32, seconds: Float64, outBufferSize: inout UInt32, outNumPacketsToRead: inout UInt32) {
    let maxBufferSize: UInt32 = 0x50000
    let minBufferSize: UInt32 = 0x4000
    if ASBDesc.mFramesPerPacket != 0 {
        let numPacketsForTime = ASBDesc.mSampleRate / Float64(ASBDesc.mFramesPerPacket) * seconds
        outBufferSize = UInt32(numPacketsForTime) * maxPacketSize
    } else { // 9
        outBufferSize = max(maxBufferSize, maxPacketSize)
    }
    if outBufferSize > maxBufferSize && outBufferSize > maxPacketSize { // 10
        outBufferSize = maxBufferSize
    } else if outBufferSize < minBufferSize {
        outBufferSize = minBufferSize
    }
    outNumPacketsToRead = outBufferSize / maxPacketSize // 12
}
} // closes the enclosing class (opening brace not shown in the snippet)

struct AQPlayerState {
    var mDataFormat = AudioStreamBasicDescription()
    var mQueue: AudioQueueRef?
    var mBuffers: AudioQueueBufferRef? = UnsafeMutablePointer<AudioQueueBuffer>.allocate(capacity: 3)
    var mAudioFile: AudioFileID?
    var bufferByteSize: UInt32 = 0
    var mCurrentPacket: Int64 = 0
    var mNumPacketsToRead: UInt32 = 0
    var mPacketDescs: UnsafeMutablePointer<AudioStreamPacketDescription>?
    var mIsRunning = false
}
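For anyone checking the pointer theory: two things I am suspicious of are that &aqData only hands AudioQueueNewOutput a pointer guaranteed for the duration of that one call, and that userData!.load(as: AQPlayerState.self) in the callback copies the struct, so mIsRunning = false set there may never reach the repeat/while loop. A sketch of a reference-based context that would sidestep both (untested):
final class AQPlayerContext {
    var state = AQPlayerState()   // class = stable address, shared mutations
}

let context = AQPlayerContext()   // must stay alive as long as the queue does
let contextPointer = Unmanaged.passUnretained(context).toOpaque()
// AudioQueueNewOutput(&dataFormat, callback, contextPointer, CFRunLoopGetCurrent(),
//                     CFRunLoopMode.commonModes.rawValue, 0, &queue)

// ...and inside the callback:
// let context = Unmanaged<AQPlayerContext>.fromOpaque(userData!).takeUnretainedValue()
// context.state.mIsRunning = false   // now visible to the repeat/while loop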

Related

Swift AVFoundation timing info for audio measurements

I am creating an application that will take an audio measurement by playing some stimulus data and recording the microphone input, and then analysing the data.
I am having trouble accounting for the time taken to initialise and start the audio engine, as this varies with each run and is also dependent on the hardware used, etc.
So, I have an audio engine and have installed a tap on the hardware input, with input 1 being the microphone recording and input 2 being a reference input (also from the hardware). The output is physically Y-split and fed back into input 2.
The app initialises the engine, plays the stimulus audio plus 1 second of silence (to allow propagation time for the microphone to record the whole signal back), and then stops and closes the engine.
I write the two input buffers as a WAV file so that I can import it into an existing DAW to visually examine the signals. I can see that each time I take a measurement, the time difference between the two signals is different (despite the fact that the microphone has not moved and the hardware has stayed the same). I am assuming this is to do with the latency of the hardware, the time taken to initialise the engine, and the way the device distributes tasks.
I have tried to capture the absolute time using mach_absolute_time in the first buffer callback of each installTap block and subtracting the two, and I can see that this does vary quite a lot with each call:
class newAVAudioEngine{
var engine = AVAudioEngine()
var audioBuffer = AVAudioPCMBuffer()
var running = true
var in1Buf:[Float]=Array(repeating:0, count:totalRecordSize)
var in2Buf:[Float]=Array(repeating:0, count:totalRecordSize)
var buf1current:Int = 0
var buf2current:Int = 0
var in1firstRun:Bool = false
var in2firstRun:Bool = false
var in1StartTime = 0
var in2startTime = 0
func measure(inputSweep:SweepFilter) -> measurement {
initializeEngine(inputSweep: inputSweep)
while running == true {
}
let measureResult = measurement.init(meas: meas,ref: ref)
return measureResult
}
func initializeEngine(inputSweep:SweepFilter) {
buf1current = 0
buf2current = 0
in1StartTime = 0
in2startTime = 0
in1firstRun = true
in2firstRun = true
in1Buf = Array(repeating:0, count:totalRecordSize)
in2Buf = Array(repeating:0, count:totalRecordSize)
engine.stop()
engine.reset()
engine = AVAudioEngine()
let srcNode = AVAudioSourceNode { _, _, frameCount, AudioBufferList -> OSStatus in
let ablPointer = UnsafeMutableAudioBufferListPointer(AudioBufferList)
if (Int(frameCount) + time) <= inputSweep.stimulus.count {
for frame in 0..<Int(frameCount) {
let value = inputSweep.stimulus[frame + time]
for buffer in ablPointer {
let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
buf[frame] = value
}
}
time += Int(frameCount)
return noErr
} else {
for frame in 0..<Int(frameCount) {
let value = 0
for buffer in ablPointer {
let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
buf[frame] = Float(value)
}
}
}
return noErr
}
let format = engine.outputNode.inputFormat(forBus: 0)
let stimulusFormat = AVAudioFormat(commonFormat: format.commonFormat,
sampleRate: Double(sampleRate),
channels: 1,
interleaved: format.isInterleaved)
do {
try AVAudioSession.sharedInstance().setCategory(.playAndRecord)
let ioBufferDuration = 128.0 / 44100.0
try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(ioBufferDuration)
} catch {
assertionFailure("AVAudioSession setup failed")
}
let input = engine.inputNode
let inputFormat = input.inputFormat(forBus: 0)
print("InputNode Format is \(inputFormat)")
engine.attach(srcNode)
engine.connect(srcNode, to: engine.mainMixerNode, format: stimulusFormat)
if internalRefLoop == true {
srcNode.installTap(onBus: 0, bufferSize: 1024, format: stimulusFormat, block: {(buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
if self.in2firstRun == true {
var info = mach_timebase_info()
mach_timebase_info(&info)
let currentTime = mach_absolute_time()
let nanos = currentTime * UInt64(info.numer) / UInt64(info.denom)
self.in2startTime = Int(nanos)
self.in2firstRun = false
}
do {
let floatData = buffer.floatChannelData?.pointee
for frame in 0..<buffer.frameLength{
if (self.buf2current + Int(frame)) < totalRecordSize{
self.in2Buf[self.buf2current + Int(frame)] = floatData![Int(frame)]
}
}
self.buf2current += Int(buffer.frameLength)
if (self.numberOfSamples + Int(buffer.frameLength)) <= totalRecordSize{
try self.stimulusFile.write(from: buffer)
self.numberOfSamples += Int(buffer.frameLength) } else {
self.engine.stop()
self.running = false
}
} catch {
print(NSString(string: "write failed"))
}
})
}
let micAudioConverter = AVAudioConverter(from: inputFormat, to: stimulusFormat!)
var micChannelMap:[NSNumber] = [0,-1]
micAudioConverter?.channelMap = micChannelMap
let refAudioConverter = AVAudioConverter(from: inputFormat, to: stimulusFormat!)
var refChannelMap:[NSNumber] = [1,-1]
refAudioConverter?.channelMap = refChannelMap
//Measurement Tap
engine.inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat, block: {(buffer2: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
//print(NSString(string:"writing"))
if self.in1firstRun == true {
var info = mach_timebase_info()
mach_timebase_info(&info)
let currentTime = mach_absolute_time()
let nanos = currentTime * UInt64(info.numer) / UInt64(info.denom)
self.in1StartTime = Int(nanos)
self.in1firstRun = false
}
do {
let micConvertedBuffer = AVAudioPCMBuffer(pcmFormat: stimulusFormat!, frameCapacity: buffer2.frameCapacity)
let micInputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
outStatus.pointee = AVAudioConverterInputStatus.haveData
return buffer2
}
var error: NSError? = nil
//let status = audioConverter.convert(to: convertedBuffer!, error: &error, withInputFrom: inputBlock)
let status = micAudioConverter?.convert(to: micConvertedBuffer!, error: &error, withInputFrom: micInputBlock)
//print(status)
let floatData = micConvertedBuffer?.floatChannelData?.pointee
for frame in 0..<micConvertedBuffer!.frameLength{
if (self.buf1current + Int(frame)) < totalRecordSize{
self.in1Buf[self.buf1current + Int(frame)] = floatData![Int(frame)]
}
if (self.buf1current + Int(frame)) >= totalRecordSize {
self.engine.stop()
self.running = false
}
}
self.buf1current += Int(micConvertedBuffer!.frameLength)
try self.measurementFile.write(from: micConvertedBuffer!)
} catch {
print(NSString(string: "write failed"))
}
if internalRefLoop == false {
if self.in2firstRun == true{
var info = mach_timebase_info()
mach_timebase_info(&info)
let currentTime = mach_absolute_time()
let nanos = currentTime * UInt64(info.numer) / UInt64(info.denom)
self.in2startTime = Int(nanos)
self.in2firstRun = false
}
do {
let refConvertedBuffer = AVAudioPCMBuffer(pcmFormat: stimulusFormat!, frameCapacity: buffer2.frameCapacity)
let refInputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
outStatus.pointee = AVAudioConverterInputStatus.haveData
return buffer2
}
var error: NSError? = nil
let status = refAudioConverter?.convert(to: refConvertedBuffer!, error: &error, withInputFrom: refInputBlock)
//print(status)
let floatData = refConvertedBuffer?.floatChannelData?.pointee
for frame in 0..<refConvertedBuffer!.frameLength{
if (self.buf2current + Int(frame)) < totalRecordSize{
self.in2Buf[self.buf2current + Int(frame)] = floatData![Int(frame)]
}
}
if (self.numberOfSamples + Int(buffer2.frameLength)) <= totalRecordSize{
self.buf2current += Int(refConvertedBuffer!.frameLength)
try self.stimulusFile.write(from: refConvertedBuffer!) } else {
self.engine.stop()
self.running = false
}
} catch {
print(NSString(string: "write failed"))
}
}
}
)
assert(engine.inputNode != nil)
running = true
try! engine.start()
The above method is my entire class. Currently each buffer call on installTap writes the input directly to a WAV file. This is where I can see the two end results differing each time. I have tried adding the startTime variable and subtracting the two, but the results still vary.
Do I need to take into account that my output will have latency too, which may vary with each call? If so, how do I add this time into the equation? What I am looking for is for the two inputs and the output to all have relative time, so that I can compare them. The differing hardware latency will not matter too much, as long as I can identify the end call times.
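The closest thing I have found to a fixed, queryable offset is the audio session's own latency numbers (a sketch; I am not certain these account for everything on all hardware):
let session = AVAudioSession.sharedInstance()
// Constant part of the round trip; engine spin-up time is not included here.
let roundTripLatency = session.inputLatency + session.outputLatency + session.ioBufferDuration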
If you are doing real-time measurements, you might want to use AVAudioSinkNode instead of a tap. AVAudioSinkNode is new, introduced along with the AVAudioSourceNode you are already using. With a tap you won't be able to get precise timing.
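A minimal sketch of swapping the measurement tap for a sink node (iOS 13+; engine and inputFormat are the names from your code, untested):
let sinkNode = AVAudioSinkNode { (timestamp, frameCount, audioBufferList) -> OSStatus in
    // timestamp.pointee.mSampleTime / .mHostTime are sample-accurate here,
    // unlike the buffer timestamps delivered to a tap block.
    let abl = UnsafeMutableAudioBufferListPointer(UnsafeMutablePointer(mutating: audioBufferList))
    // ... copy frames out of abl, as in the existing tap block ...
    return noErr
}
engine.attach(sinkNode)
engine.connect(engine.inputNode, to: sinkNode, format: inputFormat)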

iOS AudioUnit recording at 8kHz sample rate yields in silence

I'm building a VoIP application where we need to record audio from the microphone and send it somewhere at an 8 kHz sample rate.
Right now I'm recording at the default sample rate, which in my case was always 44.1 kHz. This is then manually converted to 8 kHz using this algorithm.
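(The algorithm itself is behind the link; as a stand-in, a naive nearest-sample decimation from 44.1 kHz to 8 kHz looks roughly like this hypothetical helper:)
func naiveDownsample(_ input: [Float], from sourceRate: Double, to targetRate: Double) -> [Float] {
    let ratio = sourceRate / targetRate                // about 5.51 for 44100 -> 8000
    let outputCount = Int(Double(input.count) / ratio)
    var output = [Float](repeating: 0, count: outputCount)
    for i in 0..<outputCount {
        output[i] = input[Int(Double(i) * ratio)]      // nearest-sample pick, no anti-alias filter
    }
    return output
}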
This naive approach results in "ok" quality, but I think it would be much better to use the native downsampling capabilities of the AudioUnit.
But when I change the sample rate property on the recording AudioUnit, it outputs just silent frames (~0.0) and I don't know why.
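One alternative I have seen suggested but have not verified: instead of kAudioUnitProperty_SampleRate, set a full 8 kHz AudioStreamBasicDescription as the stream format on the output scope of the input bus (bus 1), roughly:
var asbd8k = AudioStreamBasicDescription(
    mSampleRate: 8000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved,
    mBytesPerPacket: 4,
    mFramesPerPacket: 1,
    mBytesPerFrame: 4,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 32,
    mReserved: 0)
let formatErr = AudioUnitSetProperty(currentAudioUnit!,
                                     kAudioUnitProperty_StreamFormat,
                                     kAudioUnitScope_Output,
                                     1,   // bus 1 is the microphone side of an I/O unit
                                     &asbd8k,
                                     UInt32(MemoryLayout<AudioStreamBasicDescription>.size))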
I've extracted the part of the app responsible for the sound.
It should record from the microphone -> write to a ring buffer -> play back the data in the buffer:
RecordingUnit:
import Foundation
import AudioToolbox
import AVFoundation
class RecordingUnit : NSObject
{
public static let AudioPacketDataSize = 160
public static var instance: RecordingUnit!
public var micBuffers : AudioBufferList?;
public var OutputBuffer = RingBuffer<Float>(count: 1 * 1000 * AudioPacketDataSize);
public var currentAudioUnit: AudioUnit?
override init()
{
super.init()
RecordingUnit.instance = self
micBuffers = AudioBufferList(
mNumberBuffers: 1,
mBuffers: AudioBuffer(
mNumberChannels: UInt32(1),
mDataByteSize: UInt32(1024),
mData: UnsafeMutableRawPointer.allocate(byteCount: 1024, alignment: 1)))
}
public func start()
{
var acd = AudioComponentDescription(
componentType: OSType(kAudioUnitType_Output),
componentSubType: OSType(kAudioUnitSubType_VoiceProcessingIO),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: 0,
componentFlagsMask: 0)
let comp = AudioComponentFindNext(nil, &acd)
var err : OSStatus
AudioComponentInstanceNew(comp!, &currentAudioUnit)
var true_ui32: UInt32 = 1
guard AudioUnitSetProperty(currentAudioUnit!,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
1,
&true_ui32,
UInt32(MemoryLayout<UInt32>.size)) == 0 else {print ("could not enable IO for input "); return}
err = AudioUnitSetProperty(currentAudioUnit!,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
0,
&true_ui32,
UInt32(MemoryLayout<UInt32>.size))
guard err == 0 else {print ("could not enable IO for output "); return}
var sampleRate : Float64 = 8000
err = AudioUnitSetProperty(currentAudioUnit!,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Input,
0,
&sampleRate,
UInt32(MemoryLayout<Float64>.size))
guard err == 0 else {print ("could not set sample rate (error=\(err))"); return}
var renderCallbackStruct = AURenderCallbackStruct(inputProc: recordingCallback, inputProcRefCon: UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()))
err = AudioUnitSetProperty(currentAudioUnit!,
AudioUnitPropertyID(kAudioOutputUnitProperty_SetInputCallback),
AudioUnitScope(kAudioUnitScope_Global),
1,
&renderCallbackStruct,
UInt32(MemoryLayout<AURenderCallbackStruct>.size))
guard err == 0 else {print("could not set input callback"); return}
guard AudioUnitInitialize(currentAudioUnit!) == 0 else {print("could not initialize recording unit"); return}
guard AudioOutputUnitStart(currentAudioUnit!) == 0 else {print("could not start recording unit"); return}
print("Audio Recording started")
}
let recordingCallback: AURenderCallback = { (inRefCon, ioActionFlags, inTimeStamp, inBusNumber, frameCount, ioData ) -> OSStatus in
let audioObject = RecordingUnit.instance!
var err: OSStatus = noErr
guard let au = audioObject.currentAudioUnit else {print("AudioUnit nil (recording)"); return 0}
err = AudioUnitRender(au, ioActionFlags, inTimeStamp, inBusNumber, frameCount, &audioObject.micBuffers!)
let bufferPointer = UnsafeMutableRawPointer(audioObject.micBuffers!.mBuffers.mData)
let dataArray = bufferPointer!.assumingMemoryBound(to: Float.self)
var frames = 0
var sum = Float(0)
for i in 0..<Int(frameCount) {
if dataArray[i] != Float.nan {
sum += dataArray[i]
audioObject.OutputBuffer.write(Float(dataArray[i]))
frames = frames+1
}
}
let average = sum/Float(frameCount)
print("recorded -> \(frames)/\(frameCount) -> average=\(average)")
return 0
}
public func stop()
{
if currentAudioUnit != nil { AudioUnitUninitialize(currentAudioUnit!) }
}
}
PlaybackUnit.swift:
import Foundation
import AudioToolbox
import AVFoundation
class PlaybackUnit : NSObject {
public static var instance : PlaybackUnit!
public var InputBuffer : RingBuffer<Float>?
public var currentAudioUnit: AudioUnit?
override init()
{
super.init()
PlaybackUnit.instance = self
}
public func start()
{
var acd = AudioComponentDescription(
componentType: OSType(kAudioUnitType_Output),
componentSubType: OSType(kAudioUnitSubType_VoiceProcessingIO),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: 0,
componentFlagsMask: 0)
let comp = AudioComponentFindNext(nil, &acd)
var err = AudioComponentInstanceNew(comp!, &currentAudioUnit)
guard err == 0 else {print ("could not create a new AudioComponent instance"); return};
//set sample rate
var sampleRate : Float64 = 8000
AudioUnitSetProperty(currentAudioUnit!,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Input,
0,
&sampleRate,
UInt32(MemoryLayout<Float64>.size))
guard err == 0 else {print ("could not set sample rate "); return};
//register render callback
var outputCallbackStruct = AURenderCallbackStruct(inputProc: outputCallback, inputProcRefCon: UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()))
err = AudioUnitSetProperty(currentAudioUnit!,
AudioUnitPropertyID(kAudioUnitProperty_SetRenderCallback),
AudioUnitScope(kAudioUnitScope_Input),
0,
&outputCallbackStruct,
UInt32(MemoryLayout<AURenderCallbackStruct>.size))
guard err == 0 else {print("could not set render callback"); return}
guard AudioUnitInitialize(currentAudioUnit!) == 0 else {print("could not initialize output unit"); return}
guard AudioOutputUnitStart(currentAudioUnit!) == 0 else {print("could not start output unit"); return}
print("Audio Output started")
}
let outputCallback: AURenderCallback = { (
inRefCon,
ioActionFlags,
inTimeStamp,
inBusNumber,
frameCount,
ioData ) -> OSStatus in
let ins = PlaybackUnit.instance
let audioObject = ins!
var err: OSStatus = noErr
var frames = 0
var average : Float = 0
if var ringBuffer = audioObject.InputBuffer {
var dataArray = ioData!.pointee.mBuffers.mData!.assumingMemoryBound(to: Float.self)
var i = 0
while i < frameCount {
if let v = ringBuffer.read() {
dataArray[i] = v
average += v
} else {
dataArray[i] = 0
}
i += 1
frames += 1
}
}
average = average / Float(frameCount)
print("played -> \(frames)/\(frameCount) => avarage: \(average)")
return 0
}
public func stop()
{
if currentAudioUnit != nil { AudioUnitUninitialize(currentAudioUnit!) }
}
}
ViewController:
import UIKit
import AVFoundation
class ViewController: UIViewController {
var micPermission = false
private var micPermissionDispatchToken = 0
override func viewDidLoad() {
super.viewDidLoad()
let audioSession = AVAudioSession.sharedInstance()
if (micPermission == false) {
if (micPermissionDispatchToken == 0) {
micPermissionDispatchToken = 1
audioSession.requestRecordPermission({(granted: Bool)-> Void in
if granted {
self.micPermission = true
return
} else {
print("failed to grant microphone access!!")
}
})
}
}
if micPermission == false { return }
try! audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
try! audioSession.setPreferredSampleRate(8000)
try! audioSession.overrideOutputAudioPort(.speaker)
try! audioSession.setPreferredOutputNumberOfChannels(1)
try! audioSession.setMode(AVAudioSessionModeVoiceChat)
try! audioSession.setActive(true)
let microphone = RecordingUnit()
let speakers = PlaybackUnit()
speakers.InputBuffer = microphone.OutputBuffer
microphone.start()
speakers.start()
}
}

How to play raw audio data from socket in Swift

I need to play raw audio data coming over a socket in small chunks. I have read that I'm supposed to use a circular buffer, and I found a few solutions in Objective-C, but I couldn't make any of them work, especially in Swift 3.
Can anyone help me?
First, implement a ring buffer like so:
public struct RingBuffer<T> {
    private var array: [T?]
    private var readIndex = 0
    private var writeIndex = 0

    public init(count: Int) {
        array = [T?](repeating: nil, count: count)
    }

    /* Returns false if out of space. */
    @discardableResult public mutating func write(element: T) -> Bool {
        if !isFull {
            array[writeIndex % array.count] = element
            writeIndex += 1
            return true
        } else {
            return false
        }
    }

    /* Returns nil if the buffer is empty. */
    public mutating func read() -> T? {
        if !isEmpty {
            let element = array[readIndex % array.count]
            readIndex += 1
            return element
        } else {
            return nil
        }
    }

    fileprivate var availableSpaceForReading: Int {
        return writeIndex - readIndex
    }

    public var isEmpty: Bool {
        return availableSpaceForReading == 0
    }

    fileprivate var availableSpaceForWriting: Int {
        return array.count - availableSpaceForReading
    }

    public var isFull: Bool {
        return availableSpaceForWriting == 0
    }
}
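For example, once the raw socket bytes have been decoded into Int16 samples, each chunk can be pushed in like this (a sketch; the enqueue helper and the 16-bit PCM assumption are mine):
var ringBuffer = RingBuffer<Float>(count: 16384)

func enqueue(_ chunk: [Int16]) {
    for sample in chunk {
        ringBuffer.write(element: Float(sample) / Float(Int16.max))   // normalize to -1...1
    }
}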
After that, you implement the Audio Unit like so (modify if necessary):
class ToneGenerator {
fileprivate var toneUnit: AudioUnit? = nil
init() {
setupAudioUnit()
}
deinit {
stop()
}
func setupAudioUnit() {
// Configure the description of the output audio component we want to find:
let componentSubtype: OSType
#if os(OSX)
componentSubtype = kAudioUnitSubType_DefaultOutput
#else
componentSubtype = kAudioUnitSubType_RemoteIO
#endif
var defaultOutputDescription = AudioComponentDescription(componentType: kAudioUnitType_Output,
componentSubType: componentSubtype,
componentManufacturer: kAudioUnitManufacturer_Apple,
componentFlags: 0,
componentFlagsMask: 0)
let defaultOutput = AudioComponentFindNext(nil, &defaultOutputDescription)
var err: OSStatus
// Create a new instance of it in the form of our audio unit:
err = AudioComponentInstanceNew(defaultOutput!, &toneUnit)
assert(err == noErr, "AudioComponentInstanceNew failed")
// Set the render callback as the input for our audio unit:
var renderCallbackStruct = AURenderCallbackStruct(inputProc: renderCallback as? AURenderCallback,
inputProcRefCon: nil)
err = AudioUnitSetProperty(toneUnit!,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&renderCallbackStruct,
UInt32(MemoryLayout<AURenderCallbackStruct>.size))
assert(err == noErr, "AudioUnitSetProperty SetRenderCallback failed")
// Set the stream format for the audio unit. That is, the format of the data that our render callback will provide.
var streamFormat = AudioStreamBasicDescription(mSampleRate: Float64(sampleRate),
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: kAudioFormatFlagsNativeFloatPacked|kAudioFormatFlagIsNonInterleaved,
mBytesPerPacket: 4 /*four bytes per float*/,
mFramesPerPacket: 1,
mBytesPerFrame: 4,
mChannelsPerFrame: 1,
mBitsPerChannel: 4*8,
mReserved: 0)
err = AudioUnitSetProperty(toneUnit!,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&streamFormat,
UInt32(MemoryLayout<AudioStreamBasicDescription>.size))
assert(err == noErr, "AudioUnitSetProperty StreamFormat failed")
}
func start() {
var status: OSStatus
status = AudioUnitInitialize(toneUnit!)
status = AudioOutputUnitStart(toneUnit!)
assert(status == noErr)
}
func stop() {
AudioOutputUnitStop(toneUnit!)
AudioUnitUninitialize(toneUnit!)
}
}
These are fixed values:
private let sampleRate = 16000
private let amplitude: Float = 1.0
private let frequency: Float = 440
/// Theta is changed over time as each sample is provided.
private var theta: Float = 0.0
private func renderCallback(_ inRefCon: UnsafeMutableRawPointer,
ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
inTimeStamp: UnsafePointer<AudioTimeStamp>,
inBusNumber: UInt32,
inNumberFrames: UInt32,
ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus {
let abl = UnsafeMutableAudioBufferListPointer(ioData)
let buffer = abl[0]
let pointer: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(buffer)
for frame in 0..<inNumberFrames {
let pointerIndex = pointer.startIndex.advanced(by: Int(frame))
pointer[pointerIndex] = sin(theta) * amplitude
theta += 2.0 * Float(M_PI) * frequency / Float(sampleRate)
}
return noErr
}
You need to put the data arriving from the socket into the circular buffer and then play the sound from it.
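Concretely, the render callback above would stop synthesizing the tone and drain the ring buffer instead, something like this (a sketch; inputBuffer is a hypothetical shared var inputBuffer = RingBuffer<Float>(count: 16384) filled by the socket code):
private func playbackCallback(_ inRefCon: UnsafeMutableRawPointer,
                              ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
                              inTimeStamp: UnsafePointer<AudioTimeStamp>,
                              inBusNumber: UInt32,
                              inNumberFrames: UInt32,
                              ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus {
    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    let pointer: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(abl[0])
    for frame in 0..<Int(inNumberFrames) {
        pointer[frame] = inputBuffer.read() ?? 0   // on underrun, play silence
    }
    return noErr
}
Note that this RingBuffer is not thread-safe; between a network thread and the render thread, a lock-free FIFO is the safer choice.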

Audio Queue Services Player in Swift isn't calling callback

I've been playing around with Audio Queue Services for about a week, and I've written a Swift version of the player from the Apple Audio Queue Services Guide.
I'm recording in Linear PCM and saving to disk with this method:
AudioFileCreateWithURL(url, kAudioFileWAVEType, &format,
AudioFileFlags.dontPageAlignAudioData.union(.eraseFile), &audioFileID)
My AudioQueueOutputCallback isn't being called, even though I can verify that my bufferSize is seemingly large enough and that it's being passed actual data. I'm not getting any OSStatus errors, and it seems like everything should work. There's very little Audio Queue Services material written in Swift, and should I get this working I'd be happy to open up the rest of my code.
Any and all suggestions welcome!
class SVNPlayer: SVNPlayback {
var state: PlayerState!
private let callback: AudioQueueOutputCallback = { aqData, inAQ, inBuffer in
guard let userData = aqData else { return }
let audioPlayer = Unmanaged<SVNPlayer>.fromOpaque(userData).takeUnretainedValue()
guard audioPlayer.state.isRunning,
let queue = audioPlayer.state.mQueue else { return }
var buffer = inBuffer.pointee // dereference pointers
var numBytesReadFromFile: UInt32 = 0
var numPackets = audioPlayer.state.mNumPacketsToRead
var mPacketDescIsNil = audioPlayer.state.mPacketDesc == nil // determine if the packetDesc
if mPacketDescIsNil {
audioPlayer.state.mPacketDesc = AudioStreamPacketDescription(mStartOffset: 0, mVariableFramesInPacket: 0, mDataByteSize: 0)
}
AudioFileReadPacketData(audioPlayer.state.mAudioFile, false, &numBytesReadFromFile, // read the packet at the saved file
&audioPlayer.state.mPacketDesc!, audioPlayer.state.mCurrentPacket,
&numPackets, buffer.mAudioData)
if numPackets > 0 {
buffer.mAudioDataByteSize = numBytesReadFromFile
AudioQueueEnqueueBuffer(queue, inBuffer, mPacketDescIsNil ? numPackets : 0,
&audioPlayer.state.mPacketDesc!)
audioPlayer.state.mCurrentPacket += Int64(numPackets)
} else {
AudioQueueStop(queue, false)
audioPlayer.state.isRunning = false
}
}
init(inputPath: String, audioFormat: AudioStreamBasicDescription, numberOfBuffers: Int) throws {
super.init()
var format = audioFormat
let pointer = UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()) // get an unmananged reference to self
guard let audioFileUrl = CFURLCreateFromFileSystemRepresentation(nil,
inputPath,
CFIndex(strlen(inputPath)), false) else {
throw MixerError.playerInputPath }
var audioFileID: AudioFileID?
try osStatus { AudioFileOpenURL(audioFileUrl, AudioFilePermissions.readPermission, 0, &audioFileID) }
guard audioFileID != nil else { throw MixerError.playerInputPath }
state = PlayerState(mDataFormat: audioFormat, // setup the player state with mostly initial values
mQueue: nil,
mAudioFile: audioFileID!,
bufferByteSize: 0,
mCurrentPacket: 0,
mNumPacketsToRead: 0,
isRunning: false,
mPacketDesc: nil,
onError: nil)
var dataFormatSize = UInt32(MemoryLayout<AudioStreamBasicDescription>.stride)
try osStatus { AudioFileGetProperty(audioFileID!, kAudioFilePropertyDataFormat, &dataFormatSize, &state.mDataFormat) }
var queue: AudioQueueRef?
try osStatus { AudioQueueNewOutput(&format, callback, pointer, CFRunLoopGetCurrent(), CFRunLoopMode.commonModes.rawValue, 0, &queue) } // setup output queue
guard queue != nil else { throw MixerError.playerOutputQueue }
state.mQueue = queue // add to playerState
var maxPacketSize = UInt32()
var propertySize = UInt32(MemoryLayout<UInt32>.stride)
try osStatus { AudioFileGetProperty(state.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &propertySize, &maxPacketSize) }
deriveBufferSize(maxPacketSize: maxPacketSize, seconds: 0.5, outBufferSize: &state.bufferByteSize, outNumPacketsToRead: &state.mNumPacketsToRead)
let isFormatVBR = state.mDataFormat.mBytesPerPacket == 0 || state.mDataFormat.mFramesPerPacket == 0
if isFormatVBR { //Allocating Memory for a Packet Descriptions Array
let size = UInt32(MemoryLayout<AudioStreamPacketDescription>.stride)
state.mPacketDesc = AudioStreamPacketDescription(mStartOffset: 0,
mVariableFramesInPacket: state.mNumPacketsToRead,
mDataByteSize: size)
} // if CBR it stays set to null
for _ in 0..<numberOfBuffers { // Allocate and Prime Audio Queue Buffers
let bufferRef = UnsafeMutablePointer<AudioQueueBufferRef?>.allocate(capacity: 1)
let foo = state.mDataFormat.mBytesPerPacket * 1024 / UInt32(numberOfBuffers)
try osStatus { AudioQueueAllocateBuffer(state.mQueue!, foo, bufferRef) } // allocate the buffer
if let buffer = bufferRef.pointee {
AudioQueueEnqueueBuffer(state.mQueue!, buffer, 0, nil)
}
}
let gain: Float32 = 1.0 // Set an Audio Queue’s Playback Gain
try osStatus { AudioQueueSetParameter(state.mQueue!, kAudioQueueParam_Volume, gain) }
}
func start() throws {
state.isRunning = true // Start and Run an Audio Queue
try osStatus { AudioQueueStart(state.mQueue!, nil) }
while state.isRunning {
CFRunLoopRunInMode(CFRunLoopMode.defaultMode, 0.25, false)
}
CFRunLoopRunInMode(CFRunLoopMode.defaultMode, 1.0, false)
state.isRunning = false
}
func stop() throws {
guard state.isRunning,
let queue = state.mQueue else { return }
try osStatus { AudioQueueStop(queue, true) }
try osStatus { AudioQueueDispose(queue, true) }
try osStatus { AudioFileClose(state.mAudioFile) }
state.isRunning = false
}
private func deriveBufferSize(maxPacketSize: UInt32, seconds: Float64, outBufferSize: inout UInt32, outNumPacketsToRead: inout UInt32){
let maxBufferSize = UInt32(0x50000)
let minBufferSize = UInt32(0x4000)
if state.mDataFormat.mFramesPerPacket != 0 {
let numPacketsForTime: Float64 = state.mDataFormat.mSampleRate / Float64(state.mDataFormat.mFramesPerPacket) * seconds
outBufferSize = UInt32(numPacketsForTime) * maxPacketSize
} else {
outBufferSize = maxBufferSize > maxPacketSize ? maxBufferSize : maxPacketSize
}
if outBufferSize > maxBufferSize && outBufferSize > maxPacketSize {
outBufferSize = maxBufferSize
} else if outBufferSize < minBufferSize {
outBufferSize = minBufferSize
}
outNumPacketsToRead = outBufferSize / maxPacketSize
}
}
My player state struct is :
struct PlayerState: PlaybackState {
var mDataFormat: AudioStreamBasicDescription
var mQueue: AudioQueueRef?
var mAudioFile: AudioFileID
var bufferByteSize: UInt32
var mCurrentPacket: Int64
var mNumPacketsToRead: UInt32
var isRunning: Bool
var mPacketDesc: AudioStreamPacketDescription?
var onError: ((Error) -> Void)?
}
Instead of enqueuing an empty buffer, try calling your callback so it enqueues a (hopefully) full buffer. I'm unsure about the runloop stuff, but I'm sure you know what you're doing.
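A sketch of that priming change against your init loop (using the derived state.bufferByteSize rather than the ad-hoc foo size; untested):
for _ in 0..<numberOfBuffers {
    let bufferRef = UnsafeMutablePointer<AudioQueueBufferRef?>.allocate(capacity: 1)
    try osStatus { AudioQueueAllocateBuffer(state.mQueue!, state.bufferByteSize, bufferRef) }
    if let buffer = bufferRef.pointee {
        callback(pointer, state.mQueue!, buffer)   // prime: the callback reads packets and enqueues a full buffer
    }
}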

Unable to convert mp3 into PCM using AudioConverterFillComplexBuffer in AudioFileStreamOpen's AudioFileStream_PacketsProc callback

I have an AudioFileStream_PacketsProc callback, set during AudioFileStreamOpen, which handles converting audio packets into PCM using AudioConverterFillComplexBuffer. The issue I am having is that I get a -50 OSStatus (paramErr) after AudioConverterFillComplexBuffer is called. Below is a snippet of the parameters used in AudioConverterFillComplexBuffer and how they were made:
audioConverterRef = AudioConverterRef()
// AudioConvertInfo is a struct that contains information
// for the converter regarding the number of packets and
// which audiobuffer is being allocated
convertInfo? = AudioConvertInfo(done: false, numberOfPackets: numberPackets, audioBuffer: buffer,
packetDescriptions: packetDescriptions)
var framesToDecode: UInt32 = pcmBufferTotalFrameCount! - end
var localPcmAudioBuffer = AudioBuffer()
localPcmAudioBuffer.mData = pcmAudioBuffer!.mData.advancedBy(Int(end * pcmBufferFrameSizeInBytes!))
var localPcmBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
localPcmAudioBuffer = localPcmBufferList.mBuffers
localPcmAudioBuffer.mData = pcmAudioBuffer!.mData.advancedBy(Int(end * pcmBufferFrameSizeInBytes!))
localPcmAudioBuffer.mDataByteSize = framesToDecode * pcmBufferFrameSizeInBytes!;
localPcmAudioBuffer.mNumberChannels = pcmAudioBuffer!.mNumberChannels
var localPcmBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
localPcmAudioBuffer = localPcmBufferList.mBuffers
AudioConverterFillComplexBuffer(audioConverterRef, AudioConverter_Callback, &convertInfo, &framesToDecode, &localPcmBufferList, nil)
What could possibly be causing the param error?
Here is the full method for the callback if needed:
func handleAudioPackets(inputData: UnsafePointer<Void>, numberBytes: UInt32, numberPackets: UInt32, packetDescriptions: UnsafeMutablePointer<AudioStreamPacketDescription>) {
if currentlyReadingEntry == nil {
print("currentlyReadingEntry = nil")
return
}
if currentlyReadingEntry.parsedHeader == false {
print("currentlyReadingEntry.parsedHeader == false")
return
}
if disposedWasRequested == true {
print("disposedWasRequested == true")
return
}
guard let audioConverterRef = audioConverterRef else {
return
}
if seekToTimeWasRequested == true && currentlyReadingEntry.calculatedBitRate() > 0.0 {
wakeupPlaybackThread()
print("seekToTimeWasRequested == true && currentlyReadingEntry.calculatedBitRate() > 0.0")
return
}
discontinuous = false
var buffer = AudioBuffer()
buffer.mNumberChannels = audioConverterAudioStreamBasicDescription.mChannelsPerFrame
buffer.mDataByteSize = numberBytes
buffer.mData = UnsafeMutablePointer<Void>(inputData)
convertInfo? = AudioConvertInfo(done: false, numberOfPackets: numberPackets, audioBuffer: buffer,
packetDescriptions: packetDescriptions)
if packetDescriptions != nil && currentlyReadingEntry.processedPacketsCount < maxCompressedBacketsForBitrateCalculation {
let count: Int = min(Int(numberPackets), Int(maxCompressedBacketsForBitrateCalculation - currentlyReadingEntry.processedPacketsCount!))
for var i = 0;i < count;++i{
let packetSize: Int32 = Int32(packetDescriptions[i].mDataByteSize)
OSAtomicAdd32(packetSize, &currentlyReadingEntry.processedPacketsSizeTotal!)
OSAtomicIncrement32(&currentlyReadingEntry.processedPacketsCount!)
}
}
while true {
OSSpinLockLock(&pcmBufferSpinLock)
var used: UInt32 = pcmBufferUsedFrameCount!
var start: UInt32 = pcmBufferFrameStartIndex!
var end = (pcmBufferFrameStartIndex! + pcmBufferUsedFrameCount!) % pcmBufferTotalFrameCount!
var framesLeftInsideBuffer = pcmBufferTotalFrameCount! - used
OSSpinLockUnlock(&pcmBufferSpinLock)
if framesLeftInsideBuffer == 0 {
pthread_mutex_lock(&playerMutex)
while true {
OSSpinLockLock(&pcmBufferSpinLock)
used = pcmBufferUsedFrameCount!
start = pcmBufferFrameStartIndex!
end = (pcmBufferFrameStartIndex! + pcmBufferUsedFrameCount!) % pcmBufferTotalFrameCount!
framesLeftInsideBuffer = pcmBufferTotalFrameCount! - used
OSSpinLockUnlock(&pcmBufferSpinLock)
if framesLeftInsideBuffer > 0 {
break
}
if (disposedWasRequested == true
|| internalState == SSPlayerInternalState.Disposed) {
pthread_mutex_unlock(&playerMutex)
return
}
if (seekToTimeWasRequested == true && currentlyPlayingEntry.calculatedBitRate() > 0.0)
{
pthread_mutex_unlock(&playerMutex)
wakeupPlaybackThread()
return;
}
waiting = true
pthread_cond_wait(&playerThreadReadyCondition, &playerMutex)
waiting = false
}
pthread_mutex_unlock(&playerMutex)
}
var localPcmAudioBuffer = AudioBuffer()
var localPcmBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: AudioBuffer(mNumberChannels: 0, mDataByteSize: 0, mData: nil))
localPcmAudioBuffer = localPcmBufferList.mBuffers
if end >= start {
var framesAdded: UInt32 = 0
var framesToDecode: UInt32 = pcmBufferTotalFrameCount! - end
localPcmAudioBuffer.mData = pcmAudioBuffer!.mData.advancedBy(Int(end * pcmBufferFrameSizeInBytes!))
localPcmAudioBuffer.mDataByteSize = framesToDecode * pcmBufferFrameSizeInBytes!;
localPcmAudioBuffer.mNumberChannels = pcmAudioBuffer!.mNumberChannels
AudioConverterFillComplexBuffer(audioConverterRef, AudioConverter_Callback, &convertInfo, &framesToDecode, &localPcmBufferList, nil)
framesAdded = framesToDecode
if status == 100 {
OSSpinLockLock(&pcmBufferSpinLock)
let newCount = pcmBufferUsedFrameCount! + framesAdded
pcmBufferUsedFrameCount = newCount
OSSpinLockUnlock(&pcmBufferSpinLock);
OSSpinLockLock(&currentlyReadingEntry!.spinLock!)
let newFramesAddedCount = currentlyReadingEntry.framesQueued! + Int64(framesAdded)
currentlyReadingEntry!.framesQueued! = newFramesAddedCount
OSSpinLockUnlock(&currentlyReadingEntry!.spinLock!)
return
} else if status != 0 {
print("error")
return
}
framesToDecode = start
if framesToDecode == 0 {
OSSpinLockLock(&pcmBufferSpinLock)
let newCount = pcmBufferUsedFrameCount! + framesAdded
pcmBufferUsedFrameCount = newCount
OSSpinLockUnlock(&pcmBufferSpinLock);
OSSpinLockLock(&currentlyReadingEntry!.spinLock!)
let newFramesAddedCount = currentlyReadingEntry.framesQueued! + Int64(framesAdded)
currentlyReadingEntry!.framesQueued! = newFramesAddedCount
OSSpinLockUnlock(&currentlyReadingEntry!.spinLock!)
continue
}
localPcmAudioBuffer.mData = pcmAudioBuffer!.mData
localPcmAudioBuffer.mDataByteSize = framesToDecode * pcmBufferFrameSizeInBytes!
localPcmAudioBuffer.mNumberChannels = pcmAudioBuffer!.mNumberChannels
AudioConverterFillComplexBuffer(audioConverterRef, AudioConverter_Callback, &convertInfo, &framesToDecode, &localPcmBufferList, nil)
let decodedFramesAdded = framesAdded + framesToDecode
framesAdded = decodedFramesAdded
if status == 100 {
OSSpinLockLock(&pcmBufferSpinLock)
let newCount = pcmBufferUsedFrameCount! + framesAdded
pcmBufferUsedFrameCount = newCount
OSSpinLockUnlock(&pcmBufferSpinLock);
OSSpinLockLock(&currentlyReadingEntry!.spinLock!)
let newFramesAddedCount = currentlyReadingEntry.framesQueued! + Int64(framesAdded)
currentlyReadingEntry!.framesQueued! = newFramesAddedCount
OSSpinLockUnlock(&currentlyReadingEntry!.spinLock!)
return
} else if status == 0 {
OSSpinLockLock(&pcmBufferSpinLock)
let newCount = pcmBufferUsedFrameCount! + framesAdded
pcmBufferUsedFrameCount = newCount
OSSpinLockUnlock(&pcmBufferSpinLock);
OSSpinLockLock(&currentlyReadingEntry!.spinLock!)
let newFramesAddedCount = currentlyReadingEntry.framesQueued! + Int64(framesAdded)
currentlyReadingEntry!.framesQueued! = newFramesAddedCount
OSSpinLockUnlock(&currentlyReadingEntry!.spinLock!)
continue
} else if status != 0 {
print("error")
return
} else {
var framesAdded: UInt32 = 0
var framesToDecode: UInt32 = start - end
localPcmAudioBuffer.mData = pcmAudioBuffer!.mData.advancedBy(Int(end * pcmBufferFrameSizeInBytes!))
localPcmAudioBuffer.mDataByteSize = framesToDecode * pcmBufferFrameSizeInBytes!;
localPcmAudioBuffer.mNumberChannels = pcmAudioBuffer!.mNumberChannels
var convertInfoo: UnsafePointer<Void> = unsafeBitCast(convertInfo, UnsafePointer<Void>.self)
status = AudioConverterFillComplexBuffer(audioConverterRef, AudioConverter_Callback, &convertInfoo, &framesToDecode, &localPcmBufferList, nil)
framesAdded = framesToDecode
if status == 100 {
OSSpinLockLock(&pcmBufferSpinLock)
let newCount = pcmBufferUsedFrameCount! + framesAdded
pcmBufferUsedFrameCount = newCount
OSSpinLockUnlock(&pcmBufferSpinLock);
OSSpinLockLock(&currentlyReadingEntry!.spinLock!)
let newFramesAddedCount = currentlyReadingEntry.framesQueued! + Int64(framesAdded)
currentlyReadingEntry!.framesQueued! = newFramesAddedCount
OSSpinLockUnlock(&currentlyReadingEntry!.spinLock!)
return
} else if status == 0 {
OSSpinLockLock(&pcmBufferSpinLock)
let newCount = pcmBufferUsedFrameCount! + framesAdded
pcmBufferUsedFrameCount = newCount
OSSpinLockUnlock(&pcmBufferSpinLock);
OSSpinLockLock(&currentlyReadingEntry!.spinLock!)
let newFramesAddedCount = currentlyReadingEntry.framesQueued! + Int64(framesAdded)
currentlyReadingEntry!.framesQueued! = newFramesAddedCount
OSSpinLockUnlock(&currentlyReadingEntry!.spinLock!)
continue
} else if status != 0 {
print("error")
return
}
}
}
}
}
Hi #3254523, I have some answers with possible solutions for you. I hope to point you in the right direction, even though I am not an expert in this area. The problem is almost certainly the configuration of the:
AudioBufferList
Here are links suggesting that this -50 OSStatus is related to the AudioBufferList:
http://lists.apple.com/archives/coreaudio-api/2012/Apr/msg00041.html
https://forums.developer.apple.com/thread/6313
Now, we have to focus on a solution. Looking through your AudioBufferList, you have not assigned any value except mNumberBuffers, which is 1. Try changing the values in the following way (as shown in the second link):
var localPcmBufferList = AudioBufferList(mNumberBuffers: 2, mBuffers: AudioBuffer(mNumberChannels: 2, mDataByteSize: UInt32(buffer.count), mData: &buffer))
If it is still not working, here you can find solutions to the -50 OSStatus from AudioConverterFillComplexBuffer, although not in Swift:
AudioConverterFillComplexBuffer return -50 (paramErr)
iPhone: AudioBufferList init and release
a nice example taken from AudioKit
Audio File Services (to read the MP3 format and write AIFF or WAV)
Audio File Conversion Services (to convert the MP3 data to PCM, or to encode from PCM to some other codec if you were to write a file)
A given converter can't convert between two encoded formats: you can do MP3-to-PCM or PCM-to-AAC, but MP3-to-AAC needs two converters, roughly as sketched below.
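A minimal sketch of that two-converter chain (the three ASBDs are placeholders to be filled in from the actual stream):
var mp3Format = AudioStreamBasicDescription()   // source format, from the file/stream parser
var pcmFormat = AudioStreamBasicDescription()   // intermediate linear PCM (kAudioFormatLinearPCM)
var aacFormat = AudioStreamBasicDescription()   // target format (kAudioFormatMPEG4AAC)
var decoder: AudioConverterRef?
var encoder: AudioConverterRef?
AudioConverterNew(&mp3Format, &pcmFormat, &decoder)   // converter 1: MP3 -> PCM
AudioConverterNew(&pcmFormat, &aacFormat, &encoder)   // converter 2: PCM -> AAC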
Alternatively, this is easy to do with tanersener/mobile-ffmpeg:
let command = "-i input.mp3 -f s16le -acodec pcm_s16le -ac 1 -ar 44100 output.raw"
let result = MobileFFmpeg.execute(command)
switch result {
case RETURN_CODE_SUCCESS:
    print("command execution completed successfully.\n")
case RETURN_CODE_CANCEL:
    print("command execution cancelled by user.\n")
default:
    print("command execution failed with rc=\(result) and output=\(String(describing: MobileFFmpegConfig.getLastCommandOutput())).\n")
}
