I have mic audio captured during an ARSession that I wish to pass to another view controller and play back after the capture has taken place, but while the app is still running (and the audio is still in memory).
The audio currently arrives as CMSampleBuffers via the session(_:didOutputAudioSampleBuffer:) ARSessionDelegate method.
I've worked with audio files and AVAudioPlayer before, but am new to CMSampleBuffer.
Is there a way of taking the raw buffer as is and playing it? If so, which classes enable this? Or does it need to be rendered/converted into some other format or file first?
This is the format description of the data in the buffer:
mediaType: 'soun'
mediaSubType: 'lpcm'
mediaSpecific: {
    ASBD: {
        mSampleRate: 44100.000000
        mFormatID: 'lpcm'
        mFormatFlags: 0xc
        mBytesPerPacket: 2
        mFramesPerPacket: 1
        mBytesPerFrame: 2
        mChannelsPerFrame: 1
        mBitsPerChannel: 16
    }
    cookie: {(null)}
    ACL: {Mono}
    FormatList Array: {
        Index: 0
        ChannelLayoutTag: 0x640001
        ASBD: {
            mSampleRate: 44100.000000
            mFormatID: 'lpcm'
            mFormatFlags: 0xc
            mBytesPerPacket: 2
            mFramesPerPacket: 1
            mBytesPerFrame: 2
            mChannelsPerFrame: 1
            mBitsPerChannel: 16
        }
    }
}
extensions: {(null)}
Any guidance appreciated, as Apple's docs aren't clear on this matter, and related questions on SO deal more with live audio streaming than with capture and subsequent playback.
It seems the answer is no: you can't simply store and play back the raw buffer audio; it needs to be converted into something more persistent first.
The main way to do this is to use AVAssetWriter to save the buffer data as an audio file, for playback later using AVAudioPlayer.
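A minimal sketch of that approach, assuming buffers arrive one at a time from the session delegate; the class name, output URL handling, and AAC settings here are illustrative rather than taken from the original post:
import AVFoundation

// Minimal sketch: write incoming CMSampleBuffers to an .m4a file.
// Names and settings are illustrative; error handling is reduced for brevity.
final class AudioCaptureWriter {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput

    init?(outputURL: URL) {
        guard let writer = try? AVAssetWriter(outputURL: outputURL, fileType: .m4a) else { return nil }
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44100,
            AVNumberOfChannelsKey: 1
        ]
        let input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        guard writer.canAdd(input) else { return nil }
        writer.add(input)
        self.writer = writer
        self.input = input
    }

    // Call from session(_:didOutputAudioSampleBuffer:)
    func append(_ sampleBuffer: CMSampleBuffer) {
        if writer.status == .unknown {
            writer.startWriting()
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
        if input.isReadyForMoreMediaData {
            input.append(sampleBuffer)
        }
    }

    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}
Once finish(completion:) has run, the file at outputURL can be handed to the other view controller and played with AVAudioPlayer(contentsOf:).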
It's also possible to pass the mic input through the audio engine in parallel with the recording, with minimal lag:
let audioEngine = AVAudioEngine()
...
self.audioEngine.connect(self.audioEngine.inputNode,
                         to: self.audioEngine.mainMixerNode, format: nil)
try? self.audioEngine.start()
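Note that on iOS this passthrough only produces sound if the audio session allows simultaneous recording and playback. A minimal sketch of the session setup assumed here (not shown in the original answer):
import AVFoundation

// Assumed session setup: .playAndRecord enables simultaneous mic input and
// output; .defaultToSpeaker routes audio to the speaker instead of the receiver.
let session = AVAudioSession.sharedInstance()
try? session.setCategory(.playAndRecord, options: [.defaultToSpeaker])
try? session.setActive(true)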
If working with the sample buffers themselves is important, it can be done, roughly, by converting each one into an AVAudioPCMBuffer:
import AVFoundation

extension AVAudioPCMBuffer {
    static func create(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
        guard let description: CMFormatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
              let sampleRate: Float64 = description.audioStreamBasicDescription?.mSampleRate,
              let channelsPerFrame: UInt32 = description.audioStreamBasicDescription?.mChannelsPerFrame /*,
              let numberOfChannels = description.audioChannelLayout?.numberOfChannels */
        else { return nil }

        guard let blockBuffer: CMBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
            return nil
        }
        let samplesCount = CMSampleBufferGetNumSamples(sampleBuffer)
        //let length: Int = CMBlockBufferGetDataLength(blockBuffer)

        let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: sampleRate, channels: AVAudioChannelCount(1), interleaved: false)
        let buffer = AVAudioPCMBuffer(pcmFormat: audioFormat!, frameCapacity: AVAudioFrameCount(samplesCount))!
        buffer.frameLength = buffer.frameCapacity

        // Get a pointer to the raw 16-bit samples in the block buffer.
        var dataPointer: UnsafeMutablePointer<Int8>?
        CMBlockBufferGetDataPointer(blockBuffer, atOffset: 0, lengthAtOffsetOut: nil, totalLengthOut: nil, dataPointerOut: &dataPointer)

        guard var channel: UnsafeMutablePointer<Float> = buffer.floatChannelData?[0],
              let data = dataPointer else { return nil }

        // Convert interleaved Int16 samples to non-interleaved Float32,
        // taking only the first channel of each frame.
        var data16 = UnsafeRawPointer(data).assumingMemoryBound(to: Int16.self)
        for _ in 0 ..< samplesCount {
            channel.pointee = Float32(data16.pointee) / Float32(Int16.max)
            channel += 1
            data16 += Int(channelsPerFrame)
        }
        return buffer
    }
}
class BufferPlayer {

    let audioEngine = AVAudioEngine()
    let player = AVAudioPlayerNode()

    deinit {
        self.audioEngine.stop()
    }

    init(withBuffer: CMSampleBuffer) {
        self.audioEngine.attach(self.player)
        self.audioEngine.connect(self.player,
                                 to: self.audioEngine.mainMixerNode,
                                 format: AVAudioPCMBuffer.create(from: withBuffer)!.format)
        _ = try? audioEngine.start()
    }

    func playEnqueue(buffer: CMSampleBuffer) {
        guard let bufferPCM = AVAudioPCMBuffer.create(from: buffer) else { return }
        self.player.scheduleBuffer(bufferPCM, completionHandler: nil)
        if !self.player.isPlaying { self.player.play() }
    }
}
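For completeness, a hypothetical way to wire this up from the session delegate; the bufferPlayer property is an assumption, and as written it echoes buffers live, so for deferred playback you would retain the converted buffers and schedule them later:
// Hypothetical wiring into ARSessionDelegate; `bufferPlayer` is an assumption.
var bufferPlayer: BufferPlayer?

func session(_ session: ARSession, didOutputAudioSampleBuffer audioSampleBuffer: CMSampleBuffer) {
    if bufferPlayer == nil {
        bufferPlayer = BufferPlayer(withBuffer: audioSampleBuffer)
    }
    bufferPlayer?.playEnqueue(buffer: audioSampleBuffer)
}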
Related:
How to convert AAC to PCM using AVAudioConverter, AVAudioCompressedBuffer and AVAudioPCMBuffer in Swift?
In WWDC 2015 Session 507 it was said that AVAudioConverter can encode and decode PCM buffers. An encoding example was shown, but no decoding example.
I tried decoding, and something doesn't work, though I don't know what.
Calls:
//buffer - an AVAudioPCMBuffer from AVAudioInputNode (AVAudioEngine)
let aacBuffer = AudioBufferConverter.convertToAAC(from: buffer, error: nil) //has data
let data = Data(bytes: aacBuffer!.data, count: Int(aacBuffer!.byteLength)) //has data
let aacReverseBuffer = AudioBufferConverter.convertToAAC(from: data) //has data
let pcmReverseBuffer = AudioBufferConverter.convertToPCM(from: aacBuffer2!, error: nil) //data object exists, but is filled with zeros
This is the conversion code:
class AudioBufferFormatHelper {

    static func PCMFormat() -> AVAudioFormat? {
        return AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
    }

    static func AACFormat() -> AVAudioFormat? {
        var outDesc = AudioStreamBasicDescription(
            mSampleRate: 44100,
            mFormatID: kAudioFormatMPEG4AAC,
            mFormatFlags: 0,
            mBytesPerPacket: 0,
            mFramesPerPacket: 0,
            mBytesPerFrame: 0,
            mChannelsPerFrame: 1,
            mBitsPerChannel: 0,
            mReserved: 0)
        let outFormat = AVAudioFormat(streamDescription: &outDesc)
        return outFormat
    }
}
class AudioBufferConverter {

    static func convertToAAC(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioCompressedBuffer? {
        let outputFormat = AudioBufferFormatHelper.AACFormat()
        let outBuffer = AVAudioCompressedBuffer(format: outputFormat!, packetCapacity: 8, maximumPacketSize: 768)
        self.convert(from: buffer, to: outBuffer, error: outError)
        return outBuffer
    }

    static func convertToPCM(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioPCMBuffer? {
        let outputFormat = AudioBufferFormatHelper.PCMFormat()
        guard let outBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat!, frameCapacity: 4410) else {
            return nil
        }
        outBuffer.frameLength = 4410
        self.convert(from: buffer, to: outBuffer, error: outError)
        return outBuffer
    }

    static func convertToAAC(from data: Data) -> AVAudioCompressedBuffer? {
        let nsData = NSData(data: data)
        let inputFormat = AudioBufferFormatHelper.AACFormat()
        let buffer = AVAudioCompressedBuffer(format: inputFormat!, packetCapacity: 8, maximumPacketSize: 768)
        buffer.byteLength = UInt32(data.count)
        buffer.packetCount = 8
        buffer.data.copyMemory(from: nsData.bytes, byteCount: nsData.length)
        buffer.packetDescriptions!.pointee.mDataByteSize = 4
        return buffer
    }

    private static func convert(from sourceBuffer: AVAudioBuffer, to destinationBuffer: AVAudioBuffer, error outError: NSErrorPointer) {
        //init converter
        let inputFormat = sourceBuffer.format
        let outputFormat = destinationBuffer.format
        let converter = AVAudioConverter(from: inputFormat, to: outputFormat)
        converter!.bitRate = 32000

        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            outStatus.pointee = AVAudioConverterInputStatus.haveData
            return sourceBuffer
        }
        _ = converter!.convert(to: destinationBuffer, error: outError, withInputFrom: inputBlock)
    }
}
As a result, the AVAudioPCMBuffer contains data, but it's all zeros, and the console shows these errors:
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 1: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 3: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 5: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 7: err = -1, packet length: 0
There were a few problems with your attempt:
you're not setting the multiple packet descriptions when you convert data -> AVAudioCompressedBuffer. You need to create them, as AAC packets are of variable size. You can either copy them from the original AAC buffer, or parse them from your data by hand (ouch), or by using the AudioFileStream API.
you re-create your AVAudioConverters over and over again - once for each buffer, throwing away their state. e.g. the AAC encoder for its own personal reasons needs to add 2112 frames of silence before it can get around to reproducing your audio, so recreating the converter gets you a whole lot of silence.
you present the same buffer over and over to the AVAudioConverter's input block. You should only present each buffer once.
the bit rate of 32000 didn't work (for me)
That's all I can think of right now. Try the following modifications to your code instead, which you would now call like so:
(p.s. I changed some of the mono to stereo so I could play the round trip buffers on my mac, whose microphone input is strangely stereo - you might need to change it back)
(p.p.s there's obviously some kind of round trip / serialising/deserialising attempt going on here, but what exactly are you trying to do? do you want to stream AAC audio from one device to another? because it might be easier to let another API like AVPlayer play the resulting stream instead of dealing with the packets yourself)
let aacBuffer = AudioBufferConverter.convertToAAC(from: buffer, error: nil)!
let data = Data(bytes: aacBuffer.data, count: Int(aacBuffer.byteLength))
let packetDescriptions = Array(UnsafeBufferPointer(start: aacBuffer.packetDescriptions, count: Int(aacBuffer.packetCount)))
let aacReverseBuffer = AudioBufferConverter.convertToAAC(from: data, packetDescriptions: packetDescriptions)!
// was aacBuffer2
let pcmReverseBuffer = AudioBufferConverter.convertToPCM(from: aacReverseBuffer, error: nil)
class AudioBufferFormatHelper {

    static func PCMFormat() -> AVAudioFormat? {
        return AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
    }

    static func AACFormat() -> AVAudioFormat? {
        var outDesc = AudioStreamBasicDescription(
            mSampleRate: 44100,
            mFormatID: kAudioFormatMPEG4AAC,
            mFormatFlags: 0,
            mBytesPerPacket: 0,
            mFramesPerPacket: 0,
            mBytesPerFrame: 0,
            mChannelsPerFrame: 1,
            mBitsPerChannel: 0,
            mReserved: 0)
        let outFormat = AVAudioFormat(streamDescription: &outDesc)
        return outFormat
    }
}
class AudioBufferConverter {
    static var lpcmToAACConverter: AVAudioConverter! = nil

    static func convertToAAC(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioCompressedBuffer? {
        let outputFormat = AudioBufferFormatHelper.AACFormat()
        let outBuffer = AVAudioCompressedBuffer(format: outputFormat!, packetCapacity: 8, maximumPacketSize: 768)

        //init converter once
        if lpcmToAACConverter == nil {
            let inputFormat = buffer.format
            lpcmToAACConverter = AVAudioConverter(from: inputFormat, to: outputFormat!)
//            print("available rates \(lpcmToAACConverter.applicableEncodeBitRates)")
//            lpcmToAACConverter!.bitRate = 96000
            lpcmToAACConverter.bitRate = 32000 // have end of stream problems with this, not sure why
        }

        self.convert(withConverter: lpcmToAACConverter, from: buffer, to: outBuffer, error: outError)
        return outBuffer
    }

    static var aacToLPCMConverter: AVAudioConverter! = nil

    static func convertToPCM(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioPCMBuffer? {
        let outputFormat = AudioBufferFormatHelper.PCMFormat()
        guard let outBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat!, frameCapacity: 4410) else {
            return nil
        }

        //init converter once
        if aacToLPCMConverter == nil {
            let inputFormat = buffer.format
            aacToLPCMConverter = AVAudioConverter(from: inputFormat, to: outputFormat!)
        }

        self.convert(withConverter: aacToLPCMConverter, from: buffer, to: outBuffer, error: outError)
        return outBuffer
    }

    static func convertToAAC(from data: Data, packetDescriptions: [AudioStreamPacketDescription]) -> AVAudioCompressedBuffer? {
        let nsData = NSData(data: data)
        let inputFormat = AudioBufferFormatHelper.AACFormat()
        let maximumPacketSize = packetDescriptions.map { $0.mDataByteSize }.max()!
        let buffer = AVAudioCompressedBuffer(format: inputFormat!, packetCapacity: AVAudioPacketCount(packetDescriptions.count), maximumPacketSize: Int(maximumPacketSize))
        buffer.byteLength = UInt32(data.count)
        buffer.packetCount = AVAudioPacketCount(packetDescriptions.count)

        buffer.data.copyMemory(from: nsData.bytes, byteCount: nsData.length)
        // copy the variable-size packet descriptions alongside the payload
        buffer.packetDescriptions!.initialize(from: packetDescriptions, count: packetDescriptions.count)

        return buffer
    }

    private static func convert(withConverter: AVAudioConverter, from sourceBuffer: AVAudioBuffer, to destinationBuffer: AVAudioBuffer, error outError: NSErrorPointer) {
        // input each buffer only once
        var newBufferAvailable = true

        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            if newBufferAvailable {
                outStatus.pointee = .haveData
                newBufferAvailable = false
                return sourceBuffer
            } else {
                outStatus.pointee = .noDataNow
                return nil
            }
        }

        let status = withConverter.convert(to: destinationBuffer, error: outError, withInputFrom: inputBlock)
        print("status: \(status.rawValue)")
    }
}
I've implemented the installTap method, which gives me the audio buffer's float samples. I've filtered them with my C++ DSP library, and I want to send this buffer to the headphones/speaker. I've built an AVAudioPCMBuffer again from the samples. Does anyone know how to do this?
Code:
node.installTap(onBus: bus, bufferSize: AVAudioFrameCount(BUFFER_SIZE), format: node.inputFormat(forBus: bus), block: { (buffer : AVAudioPCMBuffer ,time : AVAudioTime) in
let root = buffer.floatChannelData!.pointee
// First pointer defines chanels
// Second pointer defines floats values
for i in 0 ..< BUFFER_SIZE{
self.signalData[i] = Double(root.advanced(by: i).pointee) * self.gainCorrection
}
let signalDataPreEq = self.signalData
let filteredSignal = shared.EQ.filterBuffer(UnsafeMutablePointer<Double>(mutating: self.signalData), with_count: Int32(BUFFER_SIZE))
self.signalData = Array(UnsafeBufferPointer(start : filteredSignal, count : BUFFER_SIZE))
for i in 0 ..< BUFFER_SIZE{
root.advanced(by: i).pointee = Float(self.signalData[i])
}
// HERE I WANT TO LISTEN(PLAYBACK) AUDIO FROM BUFFER
Thanks
You can use an AVAudioPlayerNode to play your AVAudioPCMBuffers:
let player = AVAudioPlayerNode()
engine.attach(player)

let bus = 0
let inputFormat = node.inputFormat(forBus: bus)
engine.connect(player, to: engine.mainMixerNode, format: inputFormat)

node.installTap(...) {
    // other stuff
    player.scheduleBuffer(filteredSignal) // filteredSignal is your AVAudioPCMBuffer?
}

// engine.start()
player.play()
I'm recording audio using the following:
localInput?.installTap(onBus: 0, bufferSize: 4096, format: localInputFormat) {
    (buffer, time) -> Void in
    let audioBuffer = self.audioBufferToBytes(audioBuffer: buffer)
    let output = self.outputStream!.write(audioBuffer, maxLength: Int(buffer.frameLength))
    if output > 0 {
        print("\(#file) > \(#function) > \(output) bytes written from queue \(self.currentQueueName())")
    }
    else if output == -1 {
        let error = self.outputStream!.streamError
        print("\(#file) > \(#function) > Error writing to stream: \(error?.localizedDescription ?? "unknown")")
    }
}
Where my localInputFormat is the following:
self.localInput = self.localAudioEngine.inputNode
self.localAudioEngine.attach(self.localAudioPlayer)
self.localInputFormat = self.localInput?.inputFormat(forBus: 0)
self.localAudioEngine.connect(self.localAudioPlayer, to: self.localAudioEngine.mainMixerNode, format: self.localInputFormat)
The function audioBufferToBytes is as follows:
func audioBufferToBytes(audioBuffer: AVAudioPCMBuffer) -> [UInt8] {
let srcLeft = audioBuffer.floatChannelData![0]
let bytesPerFrame = audioBuffer.format.streamDescription.pointee.mBytesPerFrame
let numBytes = Int(bytesPerFrame * audioBuffer.frameLength)
// initialize bytes by 0
var audioByteArray = [UInt8](repeating: 0, count: numBytes)
srcLeft.withMemoryRebound(to: UInt8.self, capacity: numBytes) { srcByteData in
audioByteArray.withUnsafeMutableBufferPointer {
$0.baseAddress!.initialize(from: srcByteData, count: numBytes)
}
}
return audioByteArray
}
On the other device, when I receive the data, I have to convert it back. As it's received, it runs through the following:
func bytesToAudioBuffer(_ buf: [UInt8]) -> AVAudioPCMBuffer {
    let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)
    let frameLength = UInt32(buf.count) / fmt.streamDescription.pointee.mBytesPerFrame

    let audioBuffer = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: frameLength)
    audioBuffer.frameLength = frameLength

    let dstLeft = audioBuffer.floatChannelData![0]
    buf.withUnsafeBufferPointer {
        let src = UnsafeRawPointer($0.baseAddress!).bindMemory(to: Float.self, capacity: Int(frameLength))
        dstLeft.initialize(from: src, count: Int(frameLength))
    }
    return audioBuffer
}
And lastly, we play this audio data:
self.audioPlayerQueue.async {
    self.peerAudioPlayer.scheduleBuffer(audioBuffer)
    if (!self.peerAudioPlayer.isPlaying && self.localAudioEngine.isRunning) {
        self.peerAudioPlayer.play()
    }
}
However, on either speaker I just hear what sounds like someone tapping the microphone every half-second(ish). Not them actually talking or anything. I imagine this is due to my conversion from an audio buffer to bytes and back, but I'm not sure. Does anyone see any issues with the above?
Thanks.
If anyone is interested in the solution: basically, the issue was that each recorded chunk on the sending device was 17,640 bytes, but the stream breaks it up into smaller pieces, so on the receiving device I had to accumulate the full 17,640 bytes and THEN play the audio, not play every small bit of data as it was received.
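A minimal sketch of that accumulation step, assuming the fixed 17,640-byte message size from above; receivedData is a hypothetical property, while bytesToAudioBuffer and peerAudioPlayer are the ones from the question:
// Sketch: accumulate stream chunks until one full 17,640-byte message
// has arrived, then convert and schedule it. `receivedData` is an assumption.
let expectedMessageSize = 17640
var receivedData = Data()

func stream(didReceive bytes: [UInt8]) {
    receivedData.append(contentsOf: bytes)
    while receivedData.count >= expectedMessageSize {
        let message = Array(receivedData.prefix(expectedMessageSize))
        receivedData.removeFirst(expectedMessageSize)
        let audioBuffer = bytesToAudioBuffer(message)
        peerAudioPlayer.scheduleBuffer(audioBuffer)
        if !peerAudioPlayer.isPlaying { peerAudioPlayer.play() }
    }
}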
I'm trying to make my iPhone play a tune without using prerecorded files. What are my options here? AVAudioEngine, AudioKit? I've looked at them, but the learning curve is relatively steep for something I'm hoping is easy. They also seem like tools for creating sound effects given a PCM buffer window.
I'd like to be able to do something like
pitchCreator.play(["C4", "E4", "G4"], durations: [1, 1, 1])
Preferably sounding like an instrument, or at least not like a pure sine wave.
EDIT: The below code has been replaced by AudioKit
To anyone wondering: I did make it work (kind of) using code similar to what's below.
class PitchCreator {

    var engine: AVAudioEngine
    var player: AVAudioPlayerNode
    var mixer: AVAudioMixerNode
    var buffer: AVAudioPCMBuffer

    init() {
        engine = AVAudioEngine()
        player = AVAudioPlayerNode()
        mixer = engine.mainMixerNode
        // frameCapacity must be large enough to hold the frameLength set below
        // (the original used 100, which can't hold 4096 frames)
        buffer = AVAudioPCMBuffer(pcmFormat: player.outputFormat(forBus: 0), frameCapacity: 4096)!
        buffer.frameLength = 4096
        engine.attach(player)
        engine.connect(player, to: mixer, format: player.outputFormat(forBus: 0))
    }

    func play(frequency: Float) {
        let signal = self.createSignal(frequency: frequency, amplitudes: [1.0, 0.5, 0.3, 0.1], bufferSize: Int(buffer.frameLength), sampleRate: Float(mixer.outputFormat(forBus: 0).sampleRate))
        for i in 0 ..< signal.count {
            buffer.floatChannelData![0][i] = 0.5 * signal[i]
        }
        do {
            try engine.start()
            player.play()
            player.scheduleBuffer(buffer, at: nil, options: .loops, completionHandler: nil)
        } catch {}
    }

    func stop() {
        engine.stop()
        player.stop()
    }

    // Sum a few harmonics of the base frequency into one signal buffer.
    func createSignal(frequency: Float, amplitudes: [Float], bufferSize: Int, sampleRate: Float) -> [Float] {
        let π = Float.pi
        let T = sampleRate / frequency
        var x = [Float](repeating: 0.0, count: bufferSize)
        for k in 0 ..< x.count {
            for h in 0 ..< amplitudes.count {
                x[k] += amplitudes[h] * sin(2.0 * π * Float(h + 1) * Float(k) / T)
            }
        }
        return x
    }
}
But it doesn't sound good enough, so I've gone with sampling the notes I need and just using AVAudioPlayer to play them instead.
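For anyone who still wants the note-name API from the question: the frequencies follow from the equal-tempered MIDI formula f = 440 * 2^((n - 69) / 12), where n is the MIDI note number. A hypothetical helper (the name and parsing are illustrative):
import Foundation

// Hypothetical helper: map note names like "C4" or "F#3" to equal-tempered
// frequencies using the MIDI formula f = 440 * 2^((n - 69) / 12).
func frequency(forNote name: String) -> Float? {
    let semitones: [String: Int] = ["C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                                    "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11]
    guard let octaveChar = name.last,
          let octave = Int(String(octaveChar)),
          let semitone = semitones[String(name.dropLast())] else { return nil }
    let midiNote = (octave + 1) * 12 + semitone   // C4 == MIDI note 60
    return Float(440.0 * pow(2.0, Double(midiNote - 69) / 12.0))
}

// frequency(forNote: "A4") == 440.0; feed the result to PitchCreator.play(frequency:)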
I already managed to translate this code so the render callback gets called:
http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html
I'm sure my render callback isn't implemented correctly, because I either get no sound at all or pretty awful noise from my headphones.
I also don't see a connection between my audioSession in viewDidLoad and the rest of the code.
Can anyone help me out with this?
private func performRender(
    inRefCon: UnsafeMutablePointer<Void>,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus
{
    // get object
    let vc = unsafeBitCast(inRefCon, ViewController.self)
    print("callback")

    let thetaIncrement = 2.0 * M_PI * vc.kFrequency / vc.kSampleRate
    var theta = vc.theta
//    var sinValues = [Int32]()
    let amplitude: Double = 0.25

    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    for buffer in abl
    {
        let val: Int32 = Int32((sin(theta) * amplitude))
//        sinValues.append(val)
        theta += thetaIncrement
        memset(buffer.mData, val, Int(buffer.mDataByteSize))
    }
    vc.theta = theta

    return noErr
}
class ViewController: UIViewController
{
    let kSampleRate: Float64 = 44100
    let kFrequency: Double = 440
    var theta: Double = 0

    private var toneUnit = AudioUnit()

    private let kInputBus = AudioUnitElement(1)
    private let kOutputBus = AudioUnitElement(0)

    @IBAction func tooglePlay(sender: UIButton)
    {
        if(toneUnit != nil)
        {
            AudioOutputUnitStop(toneUnit)
            AudioUnitInitialize(toneUnit)
            AudioComponentInstanceDispose(toneUnit)
            toneUnit = nil
        }
        else
        {
            createToneUnit()
            var err = AudioUnitInitialize(toneUnit)
            assert(err == noErr, "error initializing audiounit!")
            err = AudioOutputUnitStart(toneUnit)
            assert(err == noErr, "error starting audiooutput unit!")
        }
    }

    func createToneUnit()
    {
        var defaultOutputDescription = AudioComponentDescription(
            componentType: kAudioUnitType_Output,
            componentSubType: kAudioUnitSubType_RemoteIO,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0,
            componentFlagsMask: 0)
        let defaultOutput = AudioComponentFindNext(nil, &defaultOutputDescription)

        let fourBytesPerFloat: UInt32 = 4
        let eightBitsPerByte: UInt32 = 8

        var err = AudioComponentInstanceNew(defaultOutput, &toneUnit)
        assert(err == noErr, "error setting audio component instance!")

        var input = AURenderCallbackStruct(inputProc: performRender, inputProcRefCon: UnsafeMutablePointer(unsafeAddressOf(self)))
        err = AudioUnitSetProperty(toneUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Global, kOutputBus, &input, UInt32(sizeof(AURenderCallbackStruct)))
        assert(err == noErr, "error setting render callback!")

        var streamFormat = AudioStreamBasicDescription(
            mSampleRate: kSampleRate,
            mFormatID: kAudioFormatLinearPCM,
            mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
            mBytesPerPacket: fourBytesPerFloat,
            mFramesPerPacket: 1,
            mBytesPerFrame: fourBytesPerFloat,
            mChannelsPerFrame: 1,
            mBitsPerChannel: fourBytesPerFloat * eightBitsPerByte,
            mReserved: 0)
        err = AudioUnitSetProperty(toneUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, kInputBus, &streamFormat, UInt32(sizeof(AudioStreamBasicDescription)))
        assert(err == noErr, "error setting audiounit property!")
    }

    override func viewDidLoad()
    {
        super.viewDidLoad()

        let audioSession = AVAudioSession.sharedInstance()
        do {
            try audioSession.setCategory(AVAudioSessionCategoryPlayback)
        } catch {
            print("Audio session setCategory failed")
        }
        do {
            try audioSession.setPreferredSampleRate(kSampleRate)
        } catch {
            print("Audio session samplerate error")
        }
        do {
            try audioSession.setPreferredIOBufferDuration(0.005)
        } catch {
            print("Audio session bufferduration error")
        }
        do {
            try audioSession.setActive(true)
        } catch {
            print("Audio session activate failure")
        }
    }
}
vc.theta is only advanced once per buffer, not once per frame
memset only takes a byte's worth of val
the AudioUnit expects floats, but you're storing Int32s
the range of the audio data looks funny too - why not keep it in the range [-1, 1]?
there's no need to constrain theta either; sin can handle that fine.
Are you sure this used to work in Objective-C?
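Putting those points together, a rough sketch of what the callback might look like with Float32 samples kept in [-1, 1] and theta advanced once per frame; this uses current Swift pointer APIs and is an illustration, not the asker's fixed code:
// Sketch of the fixes: write Float32 samples in [-1, 1], advance theta
// once per frame, and store each sample explicitly instead of memset.
private func performRender(
    inRefCon: UnsafeMutableRawPointer,
    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
    inTimeStamp: UnsafePointer<AudioTimeStamp>,
    inBufNumber: UInt32,
    inNumberFrames: UInt32,
    ioData: UnsafeMutablePointer<AudioBufferList>?) -> OSStatus
{
    let vc = Unmanaged<ViewController>.fromOpaque(inRefCon).takeUnretainedValue()
    let thetaIncrement = 2.0 * Double.pi * vc.kFrequency / vc.kSampleRate
    let amplitude = 0.25
    var theta = vc.theta

    let abl = UnsafeMutableAudioBufferListPointer(ioData!)
    for buffer in abl {
        let samples = buffer.mData!.assumingMemoryBound(to: Float32.self)
        theta = vc.theta                      // same phase for every channel
        for frame in 0 ..< Int(inNumberFrames) {
            samples[frame] = Float32(sin(theta) * amplitude)
            theta += thetaIncrement           // advance once per frame
        }
    }
    vc.theta = theta
    return noErr
}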