Distorted audio when playing a raw PCM data buffer in iOS

I'm trying to play audio buffers using an Audio Unit in iOS. The buffers come from a C library that receives audio from a PlayStation 4 and decodes it with Opus; the format of the buffers is PCM int16_t.
With an Audio Unit and TPCircularBuffer, I've gotten it to play sound. However, the output is badly distorted and not clean.
Here is the setup of my Audio Unit class:
init() {
_TPCircularBufferInit(&buffer, 960, MemoryLayout<TPCircularBuffer>.size)
setupAudio()
}
func play(data: NSMutableData) {
TPCircularBufferProduceBytes(&buffer, data.bytes, UInt32(data.length))
}
private func setupAudio() {
do {
try AVAudioSession.sharedInstance().setActive(true, options: [])
} catch { }
var audioComponentDesc = AudioComponentDescription(componentType: OSType(kAudioUnitType_Output),
componentSubType: OSType(kAudioUnitSubType_RemoteIO),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: 0, componentFlagsMask: 0)
let inputComponent = AudioComponentFindNext(nil, &audioComponentDesc)
status = AudioComponentInstanceNew(inputComponent!, &audioUnit)
if status != noErr {
print("Audio Component Instance New Error \(status.debugDescription)")
}
var audioDescription = AudioStreamBasicDescription()
audioDescription.mSampleRate = 48000
audioDescription.mFormatID = kAudioFormatLinearPCM
audioDescription.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger
audioDescription.mChannelsPerFrame = 2
audioDescription.mFramesPerPacket = 1
audioDescription.mBitsPerChannel = 16
audioDescription.mBytesPerFrame = (audioDescription.mBitsPerChannel / 8) * audioDescription.mChannelsPerFrame
audioDescription.mBytesPerPacket = audioDescription.mBytesPerFrame * audioDescription.mFramesPerPacket
audioDescription.mReserved = 0
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input, 0, &audioDescription,
UInt32(MemoryLayout<AudioStreamBasicDescription>.size))
if status != noErr {
print("Enable IO for playback error \(status.debugDescription)")
}
var flag: UInt32 = 0
status = AudioUnitSetProperty(audioUnit, kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input, 1, &flag,
UInt32(MemoryLayout.size(ofValue: flag)))
if status != noErr {
print("Enable IO for playback error \(status.debugDescription)")
}
var outputCallbackStruct = AURenderCallbackStruct()
outputCallbackStruct.inputProc = performPlayback
outputCallbackStruct.inputProcRefCon = UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque())
status = AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global, 0, &outputCallbackStruct,
UInt32(MemoryLayout<AURenderCallbackStruct>.size))
status = AudioUnitInitialize(audioUnit)
if status != noErr {
print("Failed to initialize audio unit \(status!)")
}
status = AudioOutputUnitStart(audioUnit)
if status != noErr {
print("Failed to initialize output unit \(status!)")
}
}
The playback function:
private func performPlayback(clientData: UnsafeMutableRawPointer,
                             _ ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
                             inTimeStamp: UnsafePointer<AudioTimeStamp>,
                             inBufNumber: UInt32,
                             inNumberFrames: UInt32,
                             ioData: UnsafeMutablePointer<AudioBufferList>?) -> OSStatus {
let player = Unmanaged<AudioPlayer>.fromOpaque(UnsafeRawPointer(clientData)!).takeUnretainedValue()
let buffer = ioData![0].mBuffers
let bytesToCopy = ioData![0].mBuffers.mDataByteSize
var bufferTail: UnsafeMutableRawPointer?
var availableBytes: UInt32 = 0
bufferTail = TPCircularBufferTail(&player.buffer, &availableBytes)
let bytesToWrite = min(bytesToCopy, availableBytes)
memcpy(buffer.mData, bufferTail, Int(bytesToWrite))
TPCircularBufferConsume(&player.buffer, bytesToWrite)
return noErr
}
Here is where I call the play function:
private func StreamAudioFrameCallback(buffers: UnsafeMutablePointer<Int16>?,
samplesCount: Int, user: UnsafeMutableRawPointer?) {
let decodedData = NSMutableData()
if let buffer = buffers, samplesCount > 0 {
let decodedDataSize = samplesCount * MemoryLayout<opus_int16>.size
decodedData.append(buffer, length: decodedDataSize)
AudioPlayer.shared.play(data: decodedData)
}
}
Is anyone familiar with this? Any help is appreciated.

You might not want to start playing until there's more in the circular buffer than the amount needed to cover the maximum time jitter in the incoming data rate. Try waiting until there's half a second of audio in the circular buffer before starting to play it. Then experiment with the amount of padding.
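A minimal sketch of that priming idea, adapted to the render path above (prefillBytes, isPrimed, and the helper name are hypothetical, and TPCircularBuffer is assumed to be exposed through a bridging header):

import AudioToolbox
import Foundation

// 48,000 frames/s * 4 bytes per stereo int16 frame * 0.5 s = 96,000 bytes.
let prefillBytes: UInt32 = 96_000
var isPrimed = false

func renderFromCircularBuffer(into audioBuffer: AudioBuffer, circular: inout TPCircularBuffer) {
    var availableBytes: UInt32 = 0
    let tail = TPCircularBufferTail(&circular, &availableBytes)
    if !isPrimed && availableBytes < prefillBytes {
        // Not enough padding accumulated yet: emit silence and keep buffering.
        memset(audioBuffer.mData, 0, Int(audioBuffer.mDataByteSize))
        return
    }
    isPrimed = true
    let bytesToCopy = min(audioBuffer.mDataByteSize, availableBytes)
    memcpy(audioBuffer.mData, tail, Int(bytesToCopy))
    if bytesToCopy < audioBuffer.mDataByteSize {
        // Underrun after priming: pad the rest of the buffer with silence.
        memset(audioBuffer.mData!.advanced(by: Int(bytesToCopy)), 0,
               Int(audioBuffer.mDataByteSize - bytesToCopy))
    }
    TPCircularBufferConsume(&circular, bytesToCopy)
}

You'd call this from performPlayback in place of the unconditional memcpy, and reset isPrimed whenever the stream stops.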

Related

iOS AudioUnit recording at 8kHz sample rate yields in silence

I'm building a VoIP application where we need to record audio from the microphone and send it somewhere at an 8 kHz sample rate.
Right now I'm recording at the default sample rate, which in my case was always 44.1 kHz. This is then manually converted to 8 kHz using this algorithm.
This naive approach results in "ok" quality, but I think it would be much better to use the native downsampling capabilities of the AudioUnit.
But when I change the sample rate property on the recording AudioUnit, it outputs just silent frames (~0.0), and I don't know why.
I've extracted the part of the app responsible for the sound.
It should record from the microphone -> write to a ring buffer -> play back the data in the buffer:
RecordingUnit:
import Foundation
import AudioToolbox
import AVFoundation
class RecordingUnit : NSObject
{
public static let AudioPacketDataSize = 160
public static var instance: RecordingUnit!
public var micBuffers : AudioBufferList?;
public var OutputBuffer = RingBuffer<Float>(count: 1 * 1000 * AudioPacketDataSize);
public var currentAudioUnit: AudioUnit?
override init()
{
super.init()
RecordingUnit.instance = self
micBuffers = AudioBufferList(
mNumberBuffers: 1,
mBuffers: AudioBuffer(
mNumberChannels: UInt32(1),
mDataByteSize: UInt32(1024),
mData: UnsafeMutableRawPointer.allocate(byteCount: 1024, alignment: 1)))
}
public func start()
{
var acd = AudioComponentDescription(
componentType: OSType(kAudioUnitType_Output),
componentSubType: OSType(kAudioUnitSubType_VoiceProcessingIO),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: 0,
componentFlagsMask: 0)
let comp = AudioComponentFindNext(nil, &acd)
var err : OSStatus
AudioComponentInstanceNew(comp!, &currentAudioUnit)
var true_ui32: UInt32 = 1
guard AudioUnitSetProperty(currentAudioUnit!,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
1,
&true_ui32,
UInt32(MemoryLayout<UInt32>.size)) == 0 else {print ("could not enable IO for input "); return}
err = AudioUnitSetProperty(currentAudioUnit!,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
0,
&true_ui32,
UInt32(MemoryLayout<UInt32>.size))
guard err == 0 else {print ("could not enable IO for output "); return}
var sampleRate : Float64 = 8000
err = AudioUnitSetProperty(currentAudioUnit!,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Input,
0,
&sampleRate,
UInt32(MemoryLayout<Float64>.size))
guard err == 0 else {print ("could not set sample rate (error=\(err))"); return}
var renderCallbackStruct = AURenderCallbackStruct(inputProc: recordingCallback, inputProcRefCon: UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()))
err = AudioUnitSetProperty(currentAudioUnit!,
AudioUnitPropertyID(kAudioOutputUnitProperty_SetInputCallback),
AudioUnitScope(kAudioUnitScope_Global),
1,
&renderCallbackStruct,
UInt32(MemoryLayout<AURenderCallbackStruct>.size))
guard err == 0 else {print("could not set input callback"); return}
guard AudioUnitInitialize(currentAudioUnit!) == 0 else {print("could not initialize recording unit"); return}
guard AudioOutputUnitStart(currentAudioUnit!) == 0 else {print("could not start recording unit"); return}
print("Audio Recording started")
}
let recordingCallback: AURenderCallback = { (inRefCon, ioActionFlags, inTimeStamp, inBusNumber, frameCount, ioData ) -> OSStatus in
let audioObject = RecordingUnit.instance!
var err: OSStatus = noErr
guard let au = audioObject.currentAudioUnit else {print("AudioUnit nil (recording)"); return 0}
err = AudioUnitRender(au, ioActionFlags, inTimeStamp, inBusNumber, frameCount, &audioObject.micBuffers!)
let bufferPointer = UnsafeMutableRawPointer(audioObject.micBuffers!.mBuffers.mData)
let dataArray = bufferPointer!.assumingMemoryBound(to: Float.self)
var frames = 0
var sum = Float(0)
for i in 0..<Int(frameCount) {
if dataArray[i] != Float.nan {
sum += dataArray[i]
audioObject.OutputBuffer.write(Float(dataArray[i]))
frames = frames+1
}
}
let average = sum/Float(frameCount)
print("recorded -> \(frames)/\(frameCount) -> average=\(average)")
return 0
}
public func stop()
{
if currentAudioUnit != nil { AudioUnitUninitialize(currentAudioUnit!) }
}
}
PlaybackUnit.swift:
import Foundation
import AudioToolbox
import AVFoundation
class PlaybackUnit : NSObject {
public static var instance : PlaybackUnit!
public var InputBuffer : RingBuffer<Float>?
public var currentAudioUnit: AudioUnit?
override init()
{
super.init()
PlaybackUnit.instance = self
}
public func start()
{
var acd = AudioComponentDescription(
componentType: OSType(kAudioUnitType_Output),
componentSubType: OSType(kAudioUnitSubType_VoiceProcessingIO),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: 0,
componentFlagsMask: 0)
let comp = AudioComponentFindNext(nil, &acd)
var err = AudioComponentInstanceNew(comp!, &currentAudioUnit)
guard err == 0 else {print ("could not create a new AudioComponent instance"); return};
//set sample rate
var sampleRate : Float64 = 8000
AudioUnitSetProperty(currentAudioUnit!,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Input,
0,
&sampleRate,
UInt32(MemoryLayout<Float64>.size))
guard err == 0 else {print ("could not set sample rate "); return};
//register render callback
var outputCallbackStruct = AURenderCallbackStruct(inputProc: outputCallback, inputProcRefCon: UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()))
err = AudioUnitSetProperty(currentAudioUnit!,
AudioUnitPropertyID(kAudioUnitProperty_SetRenderCallback),
AudioUnitScope(kAudioUnitScope_Input),
0,
&outputCallbackStruct,
UInt32(MemoryLayout<AURenderCallbackStruct>.size))
guard err == 0 else {print("could not set render callback"); return}
guard AudioUnitInitialize(currentAudioUnit!) == 0 else {print("could not initialize output unit"); return}
guard AudioOutputUnitStart(currentAudioUnit!) == 0 else {print("could not start output unit"); return}
print("Audio Output started")
}
let outputCallback: AURenderCallback = { (
inRefCon,
ioActionFlags,
inTimeStamp,
inBusNumber,
frameCount,
ioData ) -> OSStatus in
let ins = PlaybackUnit.instance
let audioObject = ins!
var err: OSStatus = noErr
var frames = 0
var average : Float = 0
if var ringBuffer = audioObject.InputBuffer {
var dataArray = ioData!.pointee.mBuffers.mData!.assumingMemoryBound(to: Float.self)
var i = 0
while i < frameCount {
if let v = ringBuffer.read() {
dataArray[i] = v
average += v
} else {
dataArray[i] = 0
}
i += 1
frames += 1
}
}
average = average / Float(frameCount)
print("played -> \(frames)/\(frameCount) => avarage: \(average)")
return 0
}
public func stop()
{
if currentAudioUnit != nil { AudioUnitUninitialize(currentAudioUnit!) }
}
}
ViewController:
import UIKit
import AVFoundation
class ViewController: UIViewController {
var micPermission = false
private var micPermissionDispatchToken = 0
override func viewDidLoad() {
super.viewDidLoad()
let audioSession = AVAudioSession.sharedInstance()
if (micPermission == false) {
if (micPermissionDispatchToken == 0) {
micPermissionDispatchToken = 1
audioSession.requestRecordPermission({(granted: Bool)-> Void in
if granted {
self.micPermission = true
return
} else {
print("failed to grant microphone access!!")
}
})
}
}
if micPermission == false { return }
try! audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
try! audioSession.setPreferredSampleRate(8000)
try! audioSession.overrideOutputAudioPort(.speaker)
try! audioSession.setPreferredOutputNumberOfChannels(1)
try! audioSession.setMode(AVAudioSessionModeVoiceChat)
try! audioSession.setActive(true)
let microphone = RecordingUnit()
let speakers = PlaybackUnit()
speakers.InputBuffer = microphone.OutputBuffer
microphone.start()
speakers.start()
}
}

Decode AAC to PCM format using AVAudioConverter Swift

How do I convert AAC to PCM using AVAudioConverter, AVAudioCompressedBuffer and AVAudioPCMBuffer in Swift?
At WWDC 2015, Session 507 said that AVAudioConverter can encode and decode PCM buffers; an encoding example was shown, but no decoding example.
I tried decoding, and something doesn't work, but I don't know what. :(
Calls:
//buffer - it's AVAudioPCMBuffer from AVAudioInputNode(AVAudioEngine)
let aacBuffer = AudioBufferConverter.convertToAAC(from: buffer, error: nil) //has data
let data = Data(bytes: aacBuffer!.data, count: Int(aacBuffer!.byteLength)) //has data
let aacReverseBuffer = AudioBufferConverter.convertToAAC(from: data) //has data
let pcmReverseBuffer = AudioBufferConverter.convertToPCM(from: aacBuffer2!, error: nil) //zeros data. data object exist, but filled by zeros
It's code for converting:
class AudioBufferFormatHelper {
static func PCMFormat() -> AVAudioFormat? {
return AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
}
static func AACFormat() -> AVAudioFormat? {
var outDesc = AudioStreamBasicDescription(
mSampleRate: 44100,
mFormatID: kAudioFormatMPEG4AAC,
mFormatFlags: 0,
mBytesPerPacket: 0,
mFramesPerPacket: 0,
mBytesPerFrame: 0,
mChannelsPerFrame: 1,
mBitsPerChannel: 0,
mReserved: 0)
let outFormat = AVAudioFormat(streamDescription: &outDesc)
return outFormat
}
}
class AudioBufferConverter {
static func convertToAAC(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioCompressedBuffer? {
let outputFormat = AudioBufferFormatHelper.AACFormat()
let outBuffer = AVAudioCompressedBuffer(format: outputFormat!, packetCapacity: 8, maximumPacketSize: 768)
self.convert(from: buffer, to: outBuffer, error: outError)
return outBuffer
}
static func convertToPCM(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioPCMBuffer? {
let outputFormat = AudioBufferFormatHelper.PCMFormat()
guard let outBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat!, frameCapacity: 4410) else {
return nil
}
outBuffer.frameLength = 4410
self.convert(from: buffer, to: outBuffer, error: outError)
return outBuffer
}
static func convertToAAC(from data: Data) -> AVAudioCompressedBuffer? {
let nsData = NSData(data: data)
let inputFormat = AudioBufferFormatHelper.AACFormat()
let buffer = AVAudioCompressedBuffer(format: inputFormat!, packetCapacity: 8, maximumPacketSize: 768)
buffer.byteLength = UInt32(data.count)
buffer.packetCount = 8
buffer.data.copyMemory(from: nsData.bytes, byteCount: nsData.length)
buffer.packetDescriptions!.pointee.mDataByteSize = 4
return buffer
}
private static func convert(from sourceBuffer: AVAudioBuffer, to destinationBuffer: AVAudioBuffer, error outError: NSErrorPointer) {
//init converter
let inputFormat = sourceBuffer.format
let outputFormat = destinationBuffer.format
let converter = AVAudioConverter(from: inputFormat, to: outputFormat)
converter!.bitRate = 32000
let inputBlock : AVAudioConverterInputBlock = { inNumPackets, outStatus in
outStatus.pointee = AVAudioConverterInputStatus.haveData
return sourceBuffer
}
_ = converter!.convert(to: destinationBuffer, error: outError, withInputFrom: inputBlock)
}
}
The resulting AVAudioPCMBuffer has data filled with zeros.
And in the console I see these errors:
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 1: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 3: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 5: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 7: err = -1, packet length: 0
There were a few problems with your attempt:
you're not setting the multiple packet descriptions when you convert Data -> AVAudioCompressedBuffer. You need to create them, as AAC packets are of variable size. You can either copy them from the original AAC buffer, parse them from your data by hand (ouch), or use the AudioFileStream API.
you re-create your AVAudioConverters over and over again, once for each buffer, throwing away their state. E.g. the AAC encoder, for its own personal reasons, needs to add 2112 frames of silence before it can get around to reproducing your audio, so recreating the converter gets you a whole lot of silence.
you present the same buffer over and over to the AVAudioConverter's input block. You should only present each buffer once.
the bit rate of 32000 didn't work (for me)
That's all I can think of right now. Try the following modifications to your code instead, which you'd now call like so:
(p.s. I changed some of the mono to stereo so I could play the round trip buffers on my mac, whose microphone input is strangely stereo - you might need to change it back)
(p.p.s there's obviously some kind of round trip / serialising/deserialising attempt going on here, but what exactly are you trying to do? do you want to stream AAC audio from one device to another? because it might be easier to let another API like AVPlayer play the resulting stream instead of dealing with the packets yourself)
let aacBuffer = AudioBufferConverter.convertToAAC(from: buffer, error: nil)!
let data = Data(bytes: aacBuffer.data, count: Int(aacBuffer.byteLength))
let packetDescriptions = Array(UnsafeBufferPointer(start: aacBuffer.packetDescriptions, count: Int(aacBuffer.packetCount)))
let aacReverseBuffer = AudioBufferConverter.convertToAAC(from: data, packetDescriptions: packetDescriptions)!
// was aacBuffer2
let pcmReverseBuffer = AudioBufferConverter.convertToPCM(from: aacReverseBuffer, error: nil)
class AudioBufferFormatHelper {
static func PCMFormat() -> AVAudioFormat? {
return AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
}
static func AACFormat() -> AVAudioFormat? {
var outDesc = AudioStreamBasicDescription(
mSampleRate: 44100,
mFormatID: kAudioFormatMPEG4AAC,
mFormatFlags: 0,
mBytesPerPacket: 0,
mFramesPerPacket: 0,
mBytesPerFrame: 0,
mChannelsPerFrame: 1,
mBitsPerChannel: 0,
mReserved: 0)
let outFormat = AVAudioFormat(streamDescription: &outDesc)
return outFormat
}
}
class AudioBufferConverter {
static var lpcmToAACConverter: AVAudioConverter! = nil
static func convertToAAC(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioCompressedBuffer? {
let outputFormat = AudioBufferFormatHelper.AACFormat()
let outBuffer = AVAudioCompressedBuffer(format: outputFormat!, packetCapacity: 8, maximumPacketSize: 768)
//init converter once
if lpcmToAACConverter == nil {
let inputFormat = buffer.format
lpcmToAACConverter = AVAudioConverter(from: inputFormat, to: outputFormat!)
// print("available rates \(lpcmToAACConverter.applicableEncodeBitRates)")
// lpcmToAACConverter!.bitRate = 96000
lpcmToAACConverter.bitRate = 32000 // have end of stream problems with this, not sure why
}
self.convert(withConverter:lpcmToAACConverter, from: buffer, to: outBuffer, error: outError)
return outBuffer
}
static var aacToLPCMConverter: AVAudioConverter! = nil
static func convertToPCM(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioPCMBuffer? {
let outputFormat = AudioBufferFormatHelper.PCMFormat()
guard let outBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat!, frameCapacity: 4410) else {
return nil
}
//init converter once
if aacToLPCMConverter == nil {
let inputFormat = buffer.format
aacToLPCMConverter = AVAudioConverter(from: inputFormat, to: outputFormat!)
}
self.convert(withConverter: aacToLPCMConverter, from: buffer, to: outBuffer, error: outError)
return outBuffer
}
static func convertToAAC(from data: Data, packetDescriptions: [AudioStreamPacketDescription]) -> AVAudioCompressedBuffer? {
let nsData = NSData(data: data)
let inputFormat = AudioBufferFormatHelper.AACFormat()
let maximumPacketSize = packetDescriptions.map { $0.mDataByteSize }.max()!
let buffer = AVAudioCompressedBuffer(format: inputFormat!, packetCapacity: AVAudioPacketCount(packetDescriptions.count), maximumPacketSize: Int(maximumPacketSize))
buffer.byteLength = UInt32(data.count)
buffer.packetCount = AVAudioPacketCount(packetDescriptions.count)
buffer.data.copyMemory(from: nsData.bytes, byteCount: nsData.length)
buffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
buffer.packetDescriptions!.initialize(from: packetDescriptions, count: packetDescriptions.count)
return buffer
}
private static func convert(withConverter: AVAudioConverter, from sourceBuffer: AVAudioBuffer, to destinationBuffer: AVAudioBuffer, error outError: NSErrorPointer) {
// input each buffer only once
var newBufferAvailable = true
let inputBlock : AVAudioConverterInputBlock = {
inNumPackets, outStatus in
if newBufferAvailable {
outStatus.pointee = .haveData
newBufferAvailable = false
return sourceBuffer
} else {
outStatus.pointee = .noDataNow
return nil
}
}
let status = withConverter.convert(to: destinationBuffer, error: outError, withInputFrom: inputBlock)
print("status: \(status.rawValue)")
}
}

How to play raw audio data from socket in Swift

I need to play raw audio data coming over a socket in small chunks. I have read that I'm supposed to use a circular buffer, and I found a few solutions in Objective-C, but I couldn't make any of them work, especially in Swift 3.
Can anyone help me?
First, implement a ring buffer like so:
public struct RingBuffer<T> {
private var array: [T?]
private var readIndex = 0
private var writeIndex = 0
public init(count: Int) {
array = [T?](repeating: nil, count: count)
}
/* Returns false if out of space. */
@discardableResult public mutating func write(element: T) -> Bool {
if !isFull {
array[writeIndex % array.count] = element
writeIndex += 1
return true
} else {
return false
}
}
/* Returns nil if the buffer is empty. */
public mutating func read() -> T? {
if !isEmpty {
let element = array[readIndex % array.count]
readIndex += 1
return element
} else {
return nil
}
}
fileprivate var availableSpaceForReading: Int {
return writeIndex - readIndex
}
public var isEmpty: Bool {
return availableSpaceForReading == 0
}
fileprivate var availableSpaceForWriting: Int {
return array.count - availableSpaceForReading
}
public var isFull: Bool {
return availableSpaceForWriting == 0
}
}
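A quick, hypothetical sanity check of the ring buffer on its own (not part of the original answer):

var ring = RingBuffer<Float>(count: 4)
ring.write(element: 0.25)
ring.write(element: 0.5)
print(ring.read()!)        // 0.25
print(ring.read()!)        // 0.5
print(ring.read() == nil)  // true: the buffer is drained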
After that, implement the Audio Unit like so (modify as necessary).
class ToneGenerator {
fileprivate var toneUnit: AudioUnit? = nil
init() {
setupAudioUnit()
}
deinit {
stop()
}
func setupAudioUnit() {
// Configure the description of the output audio component we want to find:
let componentSubtype: OSType
#if os(OSX)
componentSubtype = kAudioUnitSubType_DefaultOutput
#else
componentSubtype = kAudioUnitSubType_RemoteIO
#endif
var defaultOutputDescription = AudioComponentDescription(componentType: kAudioUnitType_Output,
componentSubType: componentSubtype,
componentManufacturer: kAudioUnitManufacturer_Apple,
componentFlags: 0,
componentFlagsMask: 0)
let defaultOutput = AudioComponentFindNext(nil, &defaultOutputDescription)
var err: OSStatus
// Create a new instance of it in the form of our audio unit:
err = AudioComponentInstanceNew(defaultOutput!, &toneUnit)
assert(err == noErr, "AudioComponentInstanceNew failed")
// Set the render callback as the input for our audio unit:
var renderCallbackStruct = AURenderCallbackStruct(inputProc: renderCallback as? AURenderCallback,
inputProcRefCon: nil)
err = AudioUnitSetProperty(toneUnit!,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&renderCallbackStruct,
UInt32(MemoryLayout<AURenderCallbackStruct>.size))
assert(err == noErr, "AudioUnitSetProperty SetRenderCallback failed")
// Set the stream format for the audio unit. That is, the format of the data that our render callback will provide.
var streamFormat = AudioStreamBasicDescription(mSampleRate: Float64(sampleRate),
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: kAudioFormatFlagsNativeFloatPacked|kAudioFormatFlagIsNonInterleaved,
mBytesPerPacket: 4 /*four bytes per float*/,
mFramesPerPacket: 1,
mBytesPerFrame: 4,
mChannelsPerFrame: 1,
mBitsPerChannel: 4*8,
mReserved: 0)
err = AudioUnitSetProperty(toneUnit!,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&streamFormat,
UInt32(MemoryLayout<AudioStreamBasicDescription>.size))
assert(err == noErr, "AudioUnitSetProperty StreamFormat failed")
}
func start() {
var status: OSStatus
status = AudioUnitInitialize(toneUnit!)
status = AudioOutputUnitStart(toneUnit!)
assert(status == noErr)
}
func stop() {
AudioOutputUnitStop(toneUnit!)
AudioUnitUninitialize(toneUnit!)
}
}
These are fixed values:
private let sampleRate = 16000
private let amplitude: Float = 1.0
private let frequency: Float = 440
/// Theta is changed over time as each sample is provided.
private var theta: Float = 0.0
private func renderCallback(_ inRefCon: UnsafeMutableRawPointer,
ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
inTimeStamp: UnsafePointer<AudioTimeStamp>,
inBusNumber: UInt32,
inNumberFrames: UInt32,
ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus {
let abl = UnsafeMutableAudioBufferListPointer(ioData)
let buffer = abl[0]
let pointer: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(buffer)
for frame in 0..<inNumberFrames {
let pointerIndex = pointer.startIndex.advanced(by: Int(frame))
pointer[pointerIndex] = sin(theta) * amplitude
theta += 2.0 * Float(M_PI) * frequency / Float(sampleRate)
}
return noErr
}
You need to put the incoming data into the circular buffer and then play the sound from it, as sketched below.
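As a sketch of that last step, the render callback would read from the ring buffer your socket code fills, instead of synthesizing a sine wave (the shared ringBuffer variable and the callback name are hypothetical; underruns are padded with silence):

// Hypothetical shared buffer, written to from the socket-reading code.
var ringBuffer = RingBuffer<Float>(count: 16384)

private func socketRenderCallback(_ inRefCon: UnsafeMutableRawPointer,
                                  ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
                                  inTimeStamp: UnsafePointer<AudioTimeStamp>,
                                  inBusNumber: UInt32,
                                  inNumberFrames: UInt32,
                                  ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus {
    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    let buffer = abl[0]
    let pointer: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(buffer)
    for frame in 0..<Int(inNumberFrames) {
        // Play whatever the socket has delivered; pad with silence on underrun.
        pointer[frame] = ringBuffer.read() ?? 0.0
    }
    return noErr
}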

Audio Queue Services Player in Swift isn't calling callback

I've been playing around with Audio Queue Services for about a week, and I've written a Swift version of the player from the Apple Audio Queue Services Programming Guide.
I'm recording in Linear PCM and saving to disk with this method:
AudioFileCreateWithURL(url, kAudioFileWAVEType, &format,
AudioFileFlags.dontPageAlignAudioData.union(.eraseFile), &audioFileID)
My AudioQueueOutputCallback isn't being called, even though I can verify that my bufferSize is seemingly large enough and that it's getting passed actual data. I'm not getting any OSStatus errors, and it seems like everything should work. There's very little Swift code written around Audio Queue Services, and should I get this working I'd be happy to open up the rest of my code.
Any and all suggestions welcome!
class SVNPlayer: SVNPlayback {
var state: PlayerState!
private let callback: AudioQueueOutputCallback = { aqData, inAQ, inBuffer in
guard let userData = aqData else { return }
let audioPlayer = Unmanaged<SVNPlayer>.fromOpaque(userData).takeUnretainedValue()
guard audioPlayer.state.isRunning,
let queue = audioPlayer.state.mQueue else { return }
var buffer = inBuffer.pointee // dereference pointers
var numBytesReadFromFile: UInt32 = 0
var numPackets = audioPlayer.state.mNumPacketsToRead
var mPacketDescIsNil = audioPlayer.state.mPacketDesc == nil // determine if the packetDesc
if mPacketDescIsNil {
audioPlayer.state.mPacketDesc = AudioStreamPacketDescription(mStartOffset: 0, mVariableFramesInPacket: 0, mDataByteSize: 0)
}
AudioFileReadPacketData(audioPlayer.state.mAudioFile, false, &numBytesReadFromFile, // read the packet at the saved file
&audioPlayer.state.mPacketDesc!, audioPlayer.state.mCurrentPacket,
&numPackets, buffer.mAudioData)
if numPackets > 0 {
buffer.mAudioDataByteSize = numBytesReadFromFile
AudioQueueEnqueueBuffer(queue, inBuffer, mPacketDescIsNil ? numPackets : 0,
&audioPlayer.state.mPacketDesc!)
audioPlayer.state.mCurrentPacket += Int64(numPackets)
} else {
AudioQueueStop(queue, false)
audioPlayer.state.isRunning = false
}
}
init(inputPath: String, audioFormat: AudioStreamBasicDescription, numberOfBuffers: Int) throws {
super.init()
var format = audioFormat
let pointer = UnsafeMutableRawPointer(Unmanaged.passUnretained(self).toOpaque()) // get an unmananged reference to self
guard let audioFileUrl = CFURLCreateFromFileSystemRepresentation(nil,
inputPath,
CFIndex(strlen(inputPath)), false) else {
throw MixerError.playerInputPath }
var audioFileID: AudioFileID?
try osStatus { AudioFileOpenURL(audioFileUrl, AudioFilePermissions.readPermission, 0, &audioFileID) }
guard audioFileID != nil else { throw MixerError.playerInputPath }
state = PlayerState(mDataFormat: audioFormat, // setup the player state with mostly initial values
mQueue: nil,
mAudioFile: audioFileID!,
bufferByteSize: 0,
mCurrentPacket: 0,
mNumPacketsToRead: 0,
isRunning: false,
mPacketDesc: nil,
onError: nil)
var dataFormatSize = UInt32(MemoryLayout<AudioStreamBasicDescription>.stride)
try osStatus { AudioFileGetProperty(audioFileID!, kAudioFilePropertyDataFormat, &dataFormatSize, &state.mDataFormat) }
var queue: AudioQueueRef?
try osStatus { AudioQueueNewOutput(&format, callback, pointer, CFRunLoopGetCurrent(), CFRunLoopMode.commonModes.rawValue, 0, &queue) } // setup output queue
guard queue != nil else { throw MixerError.playerOutputQueue }
state.mQueue = queue // add to playerState
var maxPacketSize = UInt32()
var propertySize = UInt32(MemoryLayout<UInt32>.stride)
try osStatus { AudioFileGetProperty(state.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &propertySize, &maxPacketSize) }
deriveBufferSize(maxPacketSize: maxPacketSize, seconds: 0.5, outBufferSize: &state.bufferByteSize, outNumPacketsToRead: &state.mNumPacketsToRead)
let isFormatVBR = state.mDataFormat.mBytesPerPacket == 0 || state.mDataFormat.mFramesPerPacket == 0
if isFormatVBR { //Allocating Memory for a Packet Descriptions Array
let size = UInt32(MemoryLayout<AudioStreamPacketDescription>.stride)
state.mPacketDesc = AudioStreamPacketDescription(mStartOffset: 0,
mVariableFramesInPacket: state.mNumPacketsToRead,
mDataByteSize: size)
} // if CBR it stays set to null
for _ in 0..<numberOfBuffers { // Allocate and Prime Audio Queue Buffers
let bufferRef = UnsafeMutablePointer<AudioQueueBufferRef?>.allocate(capacity: 1)
let foo = state.mDataFormat.mBytesPerPacket * 1024 / UInt32(numberOfBuffers)
try osStatus { AudioQueueAllocateBuffer(state.mQueue!, foo, bufferRef) } // allocate the buffer
if let buffer = bufferRef.pointee {
AudioQueueEnqueueBuffer(state.mQueue!, buffer, 0, nil)
}
}
let gain: Float32 = 1.0 // Set an Audio Queue’s Playback Gain
try osStatus { AudioQueueSetParameter(state.mQueue!, kAudioQueueParam_Volume, gain) }
}
func start() throws {
state.isRunning = true // Start and Run an Audio Queue
try osStatus { AudioQueueStart(state.mQueue!, nil) }
while state.isRunning {
CFRunLoopRunInMode(CFRunLoopMode.defaultMode, 0.25, false)
}
CFRunLoopRunInMode(CFRunLoopMode.defaultMode, 1.0, false)
state.isRunning = false
}
func stop() throws {
guard state.isRunning,
let queue = state.mQueue else { return }
try osStatus { AudioQueueStop(queue, true) }
try osStatus { AudioQueueDispose(queue, true) }
try osStatus { AudioFileClose(state.mAudioFile) }
state.isRunning = false
}
private func deriveBufferSize(maxPacketSize: UInt32, seconds: Float64, outBufferSize: inout UInt32, outNumPacketsToRead: inout UInt32){
let maxBufferSize = UInt32(0x50000)
let minBufferSize = UInt32(0x4000)
if state.mDataFormat.mFramesPerPacket != 0 {
let numPacketsForTime: Float64 = state.mDataFormat.mSampleRate / Float64(state.mDataFormat.mFramesPerPacket) * seconds
outBufferSize = UInt32(numPacketsForTime) * maxPacketSize
} else {
outBufferSize = maxBufferSize > maxPacketSize ? maxBufferSize : maxPacketSize
}
if outBufferSize > maxBufferSize && outBufferSize > maxPacketSize {
outBufferSize = maxBufferSize
} else if outBufferSize < minBufferSize {
outBufferSize = minBufferSize
}
outNumPacketsToRead = outBufferSize / maxPacketSize
}
}
My player state struct is :
struct PlayerState: PlaybackState {
var mDataFormat: AudioStreamBasicDescription
var mQueue: AudioQueueRef?
var mAudioFile: AudioFileID
var bufferByteSize: UInt32
var mCurrentPacket: Int64
var mNumPacketsToRead: UInt32
var isRunning: Bool
var mPacketDesc: AudioStreamPacketDescription?
var onError: ((Error) -> Void)?
}
Instead of enqueuing an empty buffer, try calling your callback so it enqueues a (hopefully) full buffer. I'm unsure about the runloop stuff, but I'm sure you know what you're doing.
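A sketch of that suggestion against the init above (state.bufferByteSize stands in for the foo calculation, and since the callback guards on state.isRunning, the flag must be set before priming):

// Prime the queue: let the callback fill and enqueue each fresh buffer.
state.isRunning = true // the callback returns early when this is false
for _ in 0..<numberOfBuffers {
    var bufferRef: AudioQueueBufferRef?
    try osStatus { AudioQueueAllocateBuffer(state.mQueue!, state.bufferByteSize, &bufferRef) }
    if let buffer = bufferRef {
        // The callback reads packets from the file and enqueues the buffer itself.
        callback(pointer, state.mQueue!, buffer)
    }
}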

Translate Objective-C Introduction to AudioUnits into Swift

I already managed to translate this code so that the render callback gets called:
http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html
I'm sure that my render callback method is not implemented correctly, because I either get no sound at all or pretty awful noise from my headphones.
I also don't see a connection between my audioSession in viewDidLoad and the rest of the code.
Is there anyone who can help me out with this?
private func performRender(
inRefCon: UnsafeMutablePointer<Void>,
ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
inTimeStamp: UnsafePointer<AudioTimeStamp>,
inBufNumber: UInt32,
inNumberFrames: UInt32,
ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus
{
// get object
let vc = unsafeBitCast(inRefCon, ViewController.self)
print("callback")
let thetaIncrement = 2.0 * M_PI * vc.kFrequency / vc.kSampleRate
var theta = vc.theta
// var sinValues = [Int32]()
let amplitude : Double = 0.25
let abl = UnsafeMutableAudioBufferListPointer(ioData)
for buffer in abl
{
let val : Int32 = Int32((sin(theta) * amplitude))
// sinValues.append(val)
theta += thetaIncrement
memset(buffer.mData, val, Int(buffer.mDataByteSize))
}
vc.theta = theta
return noErr
}
class ViewController: UIViewController
{
let kSampleRate : Float64 = 44100
let kFrequency : Double = 440
var theta : Double = 0
private var toneUnit = AudioUnit()
private let kInputBus = AudioUnitElement(1)
private let kOutputBus = AudioUnitElement(0)
@IBAction func tooglePlay(sender: UIButton)
{
if(toneUnit != nil)
{
AudioOutputUnitStop(toneUnit)
AudioUnitInitialize(toneUnit)
AudioComponentInstanceDispose(toneUnit)
toneUnit = nil
}
else
{
createToneUnit()
var err = AudioUnitInitialize(toneUnit)
assert(err == noErr, "error initializing audiounit!")
err = AudioOutputUnitStart(toneUnit)
assert(err == noErr, "error starting audiooutput unit!")
}
}
func createToneUnit()
{
var defaultOutputDescription = AudioComponentDescription(
componentType: kAudioUnitType_Output,
componentSubType: kAudioUnitSubType_RemoteIO,
componentManufacturer: kAudioUnitManufacturer_Apple,
componentFlags: 0,
componentFlagsMask: 0)
let defaultOutput = AudioComponentFindNext(nil,&defaultOutputDescription)
let fourBytesPerFloat : UInt32 = 4
let eightBitsPerByte : UInt32 = 8
var err = AudioComponentInstanceNew(defaultOutput, &toneUnit)
assert(err == noErr, "error setting audio component instance!")
var input = AURenderCallbackStruct(inputProc: performRender, inputProcRefCon: UnsafeMutablePointer(unsafeAddressOf(self)))
err = AudioUnitSetProperty(toneUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &input, UInt32(sizeof(AURenderCallbackStruct)))
assert(err == noErr, "error setting render callback!")
var streamFormat = AudioStreamBasicDescription(
mSampleRate: kSampleRate,
mFormatID: kAudioFormatLinearPCM,
mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
mBytesPerPacket: fourBytesPerFloat,
mFramesPerPacket: 1,
mBytesPerFrame: fourBytesPerFloat,
mChannelsPerFrame: 1,
mBitsPerChannel: fourBytesPerFloat*eightBitsPerByte,
mReserved: 0)
err = AudioUnitSetProperty(toneUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &streamFormat, UInt32(sizeof(AudioStreamBasicDescription)))
assert(err == noErr, "error setting audiounit property!")
}
override func viewDidLoad()
{
super.viewDidLoad()
let audioSession = AVAudioSession.sharedInstance()
do
{
try audioSession.setCategory(AVAudioSessionCategoryPlayback)
}
catch
{
print("Audio session setCategory failed")
}
do
{
try audioSession.setPreferredSampleRate(kSampleRate)
}
catch
{
print("Audio session samplerate error")
}
do
{
try audioSession.setPreferredIOBufferDuration(0.005)
}
catch
{
print("Audio session bufferduration error")
}
do
{
try audioSession.setActive(true)
}
catch
{
print("Audio session activate failure")
}
}
vc.theta isn't being incremented
memset only takes a byte's worth of val
the AudioUnit expects floats, but you're storing Int32s
the range of the audio data looks funny too: why not keep it in the range [-1, 1]?
there's no need to constrain theta either; sin can handle that fine.
Are you sure this used to work in Objective-C?
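Putting those points together, a corrected callback might look like this (a sketch in the question's Swift 2-era syntax, assuming the mono packed-Float32 stream format from createToneUnit):

private func performRender(inRefCon: UnsafeMutablePointer<Void>,
                           ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
                           inTimeStamp: UnsafePointer<AudioTimeStamp>,
                           inBufNumber: UInt32,
                           inNumberFrames: UInt32,
                           ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus
{
    let vc = unsafeBitCast(inRefCon, ViewController.self)
    let thetaIncrement = 2.0 * M_PI * vc.kFrequency / vc.kSampleRate
    let amplitude = 0.25
    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    for buffer in abl {
        // Write one Float32 sample per frame, kept within [-1, 1].
        let samples = UnsafeMutablePointer<Float32>(buffer.mData)
        for frame in 0..<Int(inNumberFrames) {
            samples[frame] = Float32(sin(vc.theta) * amplitude)
            vc.theta += thetaIncrement // advance per sample, not per buffer
        }
    }
    return noErr
}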
