You can view this project on github here: https://github.com/Lkember/MotoIntercom/
The class that is of importance is PhoneViewController.swift
I have an AVAudioPCMBuffer. The buffer is then converted to NSData using this function:
func audioBufferToNSData(PCMBuffer: AVAudioPCMBuffer) -> NSData {
let channelCount = 1
let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: channelCount)
let data = NSData(bytes: channels[0], length:Int(PCMBuffer.frameCapacity * PCMBuffer.format.streamDescription.pointee.mBytesPerFrame))
return data
}
This data needs to be converted to UnsafePointer<UInt8> according to the documentation on OutputStream.write.
https://developer.apple.com/reference/foundation/outputstream/1410720-write
This is what I have so far:
let data = self.audioBufferToNSData(PCMBuffer: buffer)
let output = self.outputStream!.write(UnsafePointer<UInt8>(data.bytes.assumingMemoryBound(to: UInt8.self)), maxLength: data.length)
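For reference, the same write can also be expressed through Data's withUnsafeBytes, which avoids casting the NSData pointer by hand. This is only a sketch, reusing the names from the code above:
let data = self.audioBufferToNSData(PCMBuffer: buffer) as Data
let bytesWritten = data.withUnsafeBytes { (ptr: UnsafePointer<UInt8>) -> Int in
    return self.outputStream!.write(ptr, maxLength: data.count)
}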
When this data is received, it is converted back to an AVAudioPCMBuffer using this method:
func dataToPCMBuffer(data: NSData) -> AVAudioPCMBuffer {
let audioFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat.pcmFormatFloat32, sampleRate: 8000, channels: 1, interleaved: false) // given NSData audio format
let audioBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: UInt32(data.length) / audioFormat.streamDescription.pointee.mBytesPerFrame)
audioBuffer.frameLength = audioBuffer.frameCapacity
let channels = UnsafeBufferPointer(start: audioBuffer.floatChannelData, count: Int(audioBuffer.format.channelCount))
data.getBytes(UnsafeMutableRawPointer(channels[0]) , length: data.length)
return audioBuffer
}
Unfortunately, when I play this audioBuffer, I only hear static. I don't believe that it is an issue with my conversion from AVAudioPCMBuffer to NSData or my conversion from NSData back to AVAudioPCMBuffer. I imagine it is the way that I am writing NSData to the stream.
The reason I don't believe it is my conversion is that I have created a sample project located here (which you can download and try) that records audio to an AVAudioPCMBuffer, converts it to NSData, converts the NSData back to an AVAudioPCMBuffer, and plays the audio. In this case there are no problems playing the audio.
EDIT:
I never showed how I actually get Data from the stream as well. Here is how it's done:
func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
switch (eventCode) {
case Stream.Event.hasBytesAvailable:
DispatchQueue.global().async {
var buffer = [UInt8](repeating: 0, count: 8192)
let length = self.inputStream!.read(&buffer, maxLength: buffer.count)
let data = NSData.init(bytes: buffer, length: buffer.count)
print("\(#file) > \(#function) > \(length) bytes read on queue \(self.currentQueueName()!) buffer.count \(data.length)")
if (length > 0) {
let audioBuffer = self.dataToPCMBuffer(data: data)
self.audioPlayerQueue.async {
self.peerAudioPlayer.scheduleBuffer(audioBuffer)
if (!self.peerAudioPlayer.isPlaying && self.localAudioEngine.isRunning) {
self.peerAudioPlayer.play()
}
}
}
else if (length == 0) {
print("\(#file) > \(#function) > Reached end of stream")
}
}
// other stream events omitted
default:
break
}
}
Once I have this data, I use the dataToPCMBuffer method to convert it to an AVAudioPCMBuffer.
EDIT 1:
Here is the AVAudioFormat's that I use:
self.localInputFormat = AVAudioFormat.init(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
Originally, I was using this:
self.localInputFormat = self.localInput?.inputFormat(forBus: 0)
However, if the channel count did not equal the expected channel count, then I was getting crashes, so I switched to the above.
The actual AVAudioPCMBuffer I'm using is in the installTap method (where localInput is an AVAudioInputNode):
localInput?.installTap(onBus: 0, bufferSize: 4096, format: localInputFormat) {
(buffer, time) -> Void in
Pretty sure you want to replace this:
let length = self.inputStream!.read(&buffer, maxLength: buffer.count)
let data = NSData.init(bytes: buffer, length: buffer.count)
With
let length = self.inputStream!.read(&buffer, maxLength: buffer.count)
let data = NSData.init(bytes: buffer, length: length)
Also, I am not 100% sure that the random blocks of data will always be ok to use to make the audio buffers. You might need to collect up the data first into a bigger block of NSData.
Right now, since you always pass in blocks of 8192 bytes (even when you read less), the buffer creation probably always succeeds. After this change it might not.
I have gone through the Apple sample code Equalizing Audio with vDSP, where an audio file is filtered in an AVAudioSourceNode and played back.
My objective is to do exactly the same, but instead of taking the audio from an audio file, take it in real time from the microphone. Is this possible with AVAudioEngine? A couple of ways to do so are based on installTap or AVAudioSinkNode, as described in the First strategy and Second strategy sections below.
So far, I have gotten a bit closer to my objective with the following two strategies.
First strategy
// Added new class variables
private lazy var sinkNode = AVAudioSinkNode { (timestep, frames, audioBufferList) -> OSStatus in
let ptr = audioBufferList.pointee.mBuffers.mData?.assumingMemoryBound(to: Float.self)
var monoSamples = [Float]()
monoSamples.append(contentsOf: UnsafeBufferPointer(start: ptr, count: Int(frames)))
self.page = monoSamples
for frame in 0..<frames {
print("sink: " + String(monoSamples[Int(frame)]))
}
return noErr
}
// AVAudioEngine connections
engine.attach(sinkNode)
// Audio input is passed to the AVAudioSinkNode and the [Float] array is passed to the AVAudioSourceNode through the page variable
engine.connect(input, to: sinkNode, format: formatt)
engine.attach(srcNode)
engine.connect(srcNode,
to: engine.mainMixerNode,
format: format)
engine.connect(engine.mainMixerNode,
to: engine.outputNode,
format: format)
// The AVAudioSourceNode accesses the self.page array through the getSignalElement() function.
private func getSignalElement() -> Float {
return page.isEmpty ? 0 : page.removeFirst()
}
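The srcNode render block itself is not shown above; here is a minimal sketch of how it might consume self.page through getSignalElement(), following Apple's signal-generator pattern (an assumption, not the poster's actual code):
private lazy var srcNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    let ablPointer = UnsafeMutableAudioBufferListPointer(audioBufferList)
    for frame in 0..<Int(frameCount) {
        // pull one sample out of self.page per output frame
        let value = self.getSignalElement()
        for buffer in ablPointer {
            let buf: UnsafeMutableBufferPointer<Float> = UnsafeMutableBufferPointer(buffer)
            buf[frame] = value
        }
    }
    return noErr
}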
This approach made it possible to play the audio through the AVAudioSourceNode, but the audio stops playing after a few seconds (even though the AVAudioSourceNode still successfully receives the self.page array) and the app eventually crashes.
Second strategy
In a similar approach, I used installTap:
engine.attach(srcNode)
engine.connect(srcNode,
to: engine.mainMixerNode,
format: format)
engine.connect(engine.mainMixerNode,
to: engine.outputNode,
format: format)
input.installTap(onBus: 0, bufferSize:1024, format:formatt, block: { [weak self] buffer, when in
let arraySize = Int(buffer.frameLength)
let samples = Array(UnsafeBufferPointer(start: buffer.floatChannelData![0], count:arraySize))
self!.page = samples
})
// The AVAudioSourceNode accesses the self.page array through the getSignalElement() function.
private func getSignalElement() -> Float {
return page.isEmpty ? 0 : page.removeFirst()
}
The outcome of the Second strategy is the same as with the First strategy. What could be the issues making these approaches fail?
You can use AVAudioEngine().inputNode like the following:
let engine = AVAudioEngine()
private lazy var srcNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
return noErr
}
// Attach First
engine.attach(srcNode)
// Then connect nodes
let input = engine.inputNode
engine.connect(input, to: srcNode, format: input.inputFormat(forBus: 0))
It is important to use input.inputFormat(forBus: 0) as the format.
do{
try audioSession.setCategory(.playAndRecord, mode: .default, options: [.mixWithOthers, .defaultToSpeaker,.allowBluetoothA2DP,.allowAirPlay,.allowBluetooth])
try audioSession.setActive(true)
} catch{
print(error.localizedDescription)
}
engine.attach(player)
// Add this only if you want pitch
let pitch = AVAudioUnitTimePitch()
// pitch.pitch = 1000 //Filtered Voice
//pitch.rate = 1 //Normal rate
// engine.attach(pitch)
engine.attach(srcNode)
engine.connect(srcNode,
to: engine.mainMixerNode,
format: engine.inputNode.inputFormat(forBus: 0))
engine.connect(engine.mainMixerNode,
to: engine.outputNode,
format: engine.inputNode.inputFormat(forBus: 0))
engine.prepare()
engine.inputNode.installTap(onBus: 0, bufferSize: 512, format: engine.inputNode.inputFormat(forBus: 0)) { (buffer, time) -> Void in
// self.player.scheduleBuffer(buffer)
let arraySize = Int(buffer.frameLength)
let samples = Array(UnsafeBufferPointer(start: buffer.floatChannelData![0], count:arraySize))
self.page = samples
print("samples",samples)
}
engine.mainMixerNode.outputVolume = 0.5
If you had a file stored as data which is much larger than your buffer and you wanted to iterate over this data in pieces of a buffer size, what is a good way to do this? If you could provide some context, that would be great.
I was thinking,
let bufferSize: Int = 20000
let myData: Data = Data(..)
var buffer: ??? = ???
var theOffset: ??? = ???
func runWhenReady() {
buffer = &myData
let amount = sendingData(&buffer[bufferOffset], maxLength: bufferSize - bufferOffset)
bufferOffset += amount
}
// pseudocode
// from foundation, but changed a bit (taken from Obj-C foundations just for types)
// writes the bytes from the specified buffer to the stream up to len bytes. Returns the number of bytes actually written.
func sendingData(_ buffer: const uint8_t *, maxLength len: NSUInteger) -> Int {
...
}
If you want to iterate you need a loop.
This is an example of slicing the data into chunks of bufferSize with stride.
let bufferSize = 20000
var buffer = [UInt8]()
let myData = Data(..)
let dataCount = myData.count
for currentIndex in stride(from: 0, to: dataCount, by: bufferSize) {
let length = min(bufferSize, dataCount - currentIndex) // ensures that the last chunk is the remainder of the data
let endIndex = myData.index(currentIndex, offsetBy: length)
buffer = [UInt8](myData[currentIndex..<endIndex])
// do something with buffer
}
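In the streaming scenario from the question, "do something with buffer" could be a write to the stream, retrying until the whole chunk has gone out. A sketch, assuming an already-open OutputStream named outputStream:
var offset = 0
while offset < buffer.count {
    let written = outputStream.write(&buffer[offset], maxLength: buffer.count - offset)
    if written <= 0 { break }   // handle a stream error here
    offset += written
}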
Open the file:
let fileUrl: URL = ...
let fileHandle = try! FileHandle(forReadingFrom: fileUrl)
defer {
fileHandle.closeFile()
}
Create buffer:
let bufferSize = 20_000
let buffer = UnsafeMutablePointer<UInt8>.allocate(capacity: bufferSize)
defer {
buffer.deallocate()
}
Read until end of file is reached:
while true {
let bytesRead = read(fileHandle.fileDescriptor, buffer, bufferSize)
if bytesRead < 0 {
// handle error
break
}
if bytesRead == 0 {
// end of file
break
}
// do something with data in buffer
}
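Alternatively, if you prefer to stay with Foundation and avoid the raw pointer, here is a sketch of the same loop using FileHandle.readData(ofLength:):
while true {
    let chunk = fileHandle.readData(ofLength: bufferSize)
    if chunk.isEmpty {
        // end of file
        break
    }
    // do something with chunk (a Data of at most bufferSize bytes)
}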
How do you convert AAC to PCM using AVAudioConverter, AVAudioCompressedBuffer and AVAudioPCMBuffer in Swift?
In WWDC 2015 Session 507 it was said that AVAudioConverter can encode and decode PCM buffers; an encoding example was shown, but not a decoding one.
I tried to decode, and something doesn't work. I don't know what.
Calls:
// buffer is an AVAudioPCMBuffer from the AVAudioInputNode (AVAudioEngine)
let aacBuffer = AudioBufferConverter.convertToAAC(from: buffer, error: nil) //has data
let data = Data(bytes: aacBuffer!.data, count: Int(aacBuffer!.byteLength)) //has data
let aacReverseBuffer = AudioBufferConverter.convertToAAC(from: data) //has data
let pcmReverseBuffer = AudioBufferConverter.convertToPCM(from: aacBuffer2!, error: nil) // the data object exists, but it is filled with zeros
It's code for converting:
class AudioBufferFormatHelper {
static func PCMFormat() -> AVAudioFormat? {
return AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
}
static func AACFormat() -> AVAudioFormat? {
var outDesc = AudioStreamBasicDescription(
mSampleRate: 44100,
mFormatID: kAudioFormatMPEG4AAC,
mFormatFlags: 0,
mBytesPerPacket: 0,
mFramesPerPacket: 0,
mBytesPerFrame: 0,
mChannelsPerFrame: 1,
mBitsPerChannel: 0,
mReserved: 0)
let outFormat = AVAudioFormat(streamDescription: &outDesc)
return outFormat
}
}
class AudioBufferConverter {
static func convertToAAC(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioCompressedBuffer? {
let outputFormat = AudioBufferFormatHelper.AACFormat()
let outBuffer = AVAudioCompressedBuffer(format: outputFormat!, packetCapacity: 8, maximumPacketSize: 768)
self.convert(from: buffer, to: outBuffer, error: outError)
return outBuffer
}
static func convertToPCM(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioPCMBuffer? {
let outputFormat = AudioBufferFormatHelper.PCMFormat()
guard let outBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat!, frameCapacity: 4410) else {
return nil
}
outBuffer.frameLength = 4410
self.convert(from: buffer, to: outBuffer, error: outError)
return outBuffer
}
static func convertToAAC(from data: Data) -> AVAudioCompressedBuffer? {
let nsData = NSData(data: data)
let inputFormat = AudioBufferFormatHelper.AACFormat()
let buffer = AVAudioCompressedBuffer(format: inputFormat!, packetCapacity: 8, maximumPacketSize: 768)
buffer.byteLength = UInt32(data.count)
buffer.packetCount = 8
buffer.data.copyMemory(from: nsData.bytes, byteCount: nsData.length)
buffer.packetDescriptions!.pointee.mDataByteSize = 4
return buffer
}
private static func convert(from sourceBuffer: AVAudioBuffer, to destinationBuffer: AVAudioBuffer, error outError: NSErrorPointer) {
//init converter
let inputFormat = sourceBuffer.format
let outputFormat = destinationBuffer.format
let converter = AVAudioConverter(from: inputFormat, to: outputFormat)
converter!.bitRate = 32000
let inputBlock : AVAudioConverterInputBlock = { inNumPackets, outStatus in
outStatus.pointee = AVAudioConverterInputStatus.haveData
return sourceBuffer
}
_ = converter!.convert(to: destinationBuffer, error: outError, withInputFrom: inputBlock)
}
}
As a result, the AVAudioPCMBuffer contains only zeros.
And in the console I see these errors:
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 1: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 3: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 5: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 7: err = -1, packet length: 0
There were a few problems with your attempt:
you're not setting the multiple packet descriptions when you convert data -> AVAudioCompressedBuffer. You need to create them, as AAC packets are of variable size. You can either copy them from the original AAC buffer, or parse them from your data by hand (ouch) or by using the AudioFileStream api.
you re-create your AVAudioConverters over and over again - once for each buffer, throwing away their state. e.g. the AAC encoder for its own personal reasons needs to add 2112 frames of silence before it can get around to reproducing your audio, so recreating the converter gets you a whole lot of silence.
you present the same buffer over and over to the AVAudioConverter's input block. You should only present each buffer once.
the bit rate of 32000 didn't work (for me)
That's all I can think of right now. Try the following modifications to your code, which you now call like so:
(p.s. I changed some of the mono to stereo so I could play the round trip buffers on my mac, whose microphone input is strangely stereo - you might need to change it back)
(p.p.s there's obviously some kind of round trip / serialising/deserialising attempt going on here, but what exactly are you trying to do? do you want to stream AAC audio from one device to another? because it might be easier to let another API like AVPlayer play the resulting stream instead of dealing with the packets yourself)
let aacBuffer = AudioBufferConverter.convertToAAC(from: buffer, error: nil)!
let data = Data(bytes: aacBuffer.data, count: Int(aacBuffer.byteLength))
let packetDescriptions = Array(UnsafeBufferPointer(start: aacBuffer.packetDescriptions, count: Int(aacBuffer.packetCount)))
let aacReverseBuffer = AudioBufferConverter.convertToAAC(from: data, packetDescriptions: packetDescriptions)!
// was aacBuffer2
let pcmReverseBuffer = AudioBufferConverter.convertToPCM(from: aacReverseBuffer, error: nil)
class AudioBufferFormatHelper {
static func PCMFormat() -> AVAudioFormat? {
return AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
}
static func AACFormat() -> AVAudioFormat? {
var outDesc = AudioStreamBasicDescription(
mSampleRate: 44100,
mFormatID: kAudioFormatMPEG4AAC,
mFormatFlags: 0,
mBytesPerPacket: 0,
mFramesPerPacket: 0,
mBytesPerFrame: 0,
mChannelsPerFrame: 1,
mBitsPerChannel: 0,
mReserved: 0)
let outFormat = AVAudioFormat(streamDescription: &outDesc)
return outFormat
}
}
class AudioBufferConverter {
static var lpcmToAACConverter: AVAudioConverter! = nil
static func convertToAAC(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioCompressedBuffer? {
let outputFormat = AudioBufferFormatHelper.AACFormat()
let outBuffer = AVAudioCompressedBuffer(format: outputFormat!, packetCapacity: 8, maximumPacketSize: 768)
//init converter once
if lpcmToAACConverter == nil {
let inputFormat = buffer.format
lpcmToAACConverter = AVAudioConverter(from: inputFormat, to: outputFormat!)
// print("available rates \(lpcmToAACConverter.applicableEncodeBitRates)")
// lpcmToAACConverter!.bitRate = 96000
lpcmToAACConverter.bitRate = 32000 // have end of stream problems with this, not sure why
}
self.convert(withConverter:lpcmToAACConverter, from: buffer, to: outBuffer, error: outError)
return outBuffer
}
static var aacToLPCMConverter: AVAudioConverter! = nil
static func convertToPCM(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioPCMBuffer? {
let outputFormat = AudioBufferFormatHelper.PCMFormat()
guard let outBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat!, frameCapacity: 4410) else {
return nil
}
//init converter once
if aacToLPCMConverter == nil {
let inputFormat = buffer.format
aacToLPCMConverter = AVAudioConverter(from: inputFormat, to: outputFormat!)
}
self.convert(withConverter: aacToLPCMConverter, from: buffer, to: outBuffer, error: outError)
return outBuffer
}
static func convertToAAC(from data: Data, packetDescriptions: [AudioStreamPacketDescription]) -> AVAudioCompressedBuffer? {
let nsData = NSData(data: data)
let inputFormat = AudioBufferFormatHelper.AACFormat()
let maximumPacketSize = packetDescriptions.map { $0.mDataByteSize }.max()!
let buffer = AVAudioCompressedBuffer(format: inputFormat!, packetCapacity: AVAudioPacketCount(packetDescriptions.count), maximumPacketSize: Int(maximumPacketSize))
buffer.byteLength = UInt32(data.count)
buffer.packetCount = AVAudioPacketCount(packetDescriptions.count)
buffer.data.copyMemory(from: nsData.bytes, byteCount: nsData.length)
buffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
buffer.packetDescriptions!.initialize(from: packetDescriptions, count: packetDescriptions.count)
return buffer
}
private static func convert(withConverter: AVAudioConverter, from sourceBuffer: AVAudioBuffer, to destinationBuffer: AVAudioBuffer, error outError: NSErrorPointer) {
// input each buffer only once
var newBufferAvailable = true
let inputBlock : AVAudioConverterInputBlock = {
inNumPackets, outStatus in
if newBufferAvailable {
outStatus.pointee = .haveData
newBufferAvailable = false
return sourceBuffer
} else {
outStatus.pointee = .noDataNow
return nil
}
}
let status = withConverter.convert(to: destinationBuffer, error: outError, withInputFrom: inputBlock)
print("status: \(status.rawValue)")
}
}
I'm recording audio using the following:
localInput?.installTap(onBus: 0, bufferSize: 4096, format: localInputFormat) {
(buffer, time) -> Void in
let audioBuffer = self.audioBufferToBytes(audioBuffer: buffer)
let output = self.outputStream!.write(audioBuffer, maxLength: Int(buffer.frameLength))
if output > 0 {
print("\(#file) > \(#function) > \(output) bytes written from queue \(self.currentQueueName())")
}
else if output == -1 {
let error = self.outputStream!.streamError
print("\(#file) > \(#function) > Error writing to stream: \(error?.localizedDescription)")
}
}
Where my localInputFormat is the following:
self.localInput = self.localAudioEngine.inputNode
self.localAudioEngine.attach(self.localAudioPlayer)
self.localInputFormat = self.localInput?.inputFormat(forBus: 0)
self.localAudioEngine.connect(self.localAudioPlayer, to: self.localAudioEngine.mainMixerNode, format: self.localInputFormat)
The function audioBufferToBytes is as follows:
func audioBufferToBytes(audioBuffer: AVAudioPCMBuffer) -> [UInt8] {
let srcLeft = audioBuffer.floatChannelData![0]
let bytesPerFrame = audioBuffer.format.streamDescription.pointee.mBytesPerFrame
let numBytes = Int(bytesPerFrame * audioBuffer.frameLength)
// initialize bytes by 0
var audioByteArray = [UInt8](repeating: 0, count: numBytes)
srcLeft.withMemoryRebound(to: UInt8.self, capacity: numBytes) { srcByteData in
audioByteArray.withUnsafeMutableBufferPointer {
$0.baseAddress!.initialize(from: srcByteData, count: numBytes)
}
}
return audioByteArray
}
On the other device, when I receive the data I have to convert it back. So as it's received it runs through the following:
func bytesToAudioBuffer(_ buf: [UInt8]) -> AVAudioPCMBuffer {
let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)
let frameLength = UInt32(buf.count) / fmt.streamDescription.pointee.mBytesPerFrame
let audioBuffer = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: frameLength)
audioBuffer.frameLength = frameLength
let dstLeft = audioBuffer.floatChannelData![0]
buf.withUnsafeBufferPointer {
let src = UnsafeRawPointer($0.baseAddress!).bindMemory(to: Float.self, capacity: Int(frameLength))
dstLeft.initialize(from: src, count: Int(frameLength))
}
return audioBuffer
}
And lastly, we play this audio data:
self.audioPlayerQueue.async {
self.peerAudioPlayer.scheduleBuffer(audioBuffer)
if (!self.peerAudioPlayer.isPlaying && self.localAudioEngine.isRunning) {
self.peerAudioPlayer.play()
}
}
However, on either speaker I just hear what sounds like someone tapping the microphone every half-second(ish). Not them actually talking or anything. I imagine this is due to my conversion from an audio buffer to bytes and back, but I'm not sure. Does anyone see any issues with the above?
Thanks.
If anyone is interested in the solution: the audio on the recording device was 17640 bytes per buffer, but streaming breaks it up into smaller pieces, so on the receiving device I had to collect the first 17640 bytes and THEN play the audio, rather than playing every small bit of data as it was received.
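A rough sketch of that fix (an assumption, not the project's exact code), reusing bytesToAudioBuffer, audioPlayerQueue and peerAudioPlayer from the question: collect incoming bytes until one full recorded buffer (17640 bytes here) is available, then convert and schedule it.
var pendingData = Data()
let bytesPerBuffer = 17640   // size of one recorded buffer, per the note above

func handleReceived(_ chunk: [UInt8]) {
    pendingData.append(contentsOf: chunk)
    while pendingData.count >= bytesPerBuffer {
        let frameBytes = [UInt8](pendingData.prefix(bytesPerBuffer))
        pendingData.removeFirst(bytesPerBuffer)
        let audioBuffer = bytesToAudioBuffer(frameBytes)   // from the question above
        audioPlayerQueue.async {
            peerAudioPlayer.scheduleBuffer(audioBuffer)
            if !peerAudioPlayer.isPlaying { peerAudioPlayer.play() }
        }
    }
}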
I'd like to cache CAF files before converting them to PCM whenever they play.
For example,
char *mybuffer = malloc(mysoundsize);
FILE *f = fopen("mysound.caf", "rb");
fread(mybuffer, mysoundsize, 1, f);
fclose(f);
char *pcmBuffer = malloc(pcmsoundsize);
// Convert to PCM for playing
AudioFileReadBytes(mybuffer, false, 0, mysoundsize, &numbytes, pcmBuffer);
This way, whenever the sound plays, the compressed CAF file is already loaded into memory, avoiding disk access. How can I open a block of memory with an 'AudioFileID' to make AudioFileReadBytes happy? Is there another method I can use?
I have not done it myself, but from the documentation I would think that you have to use AudioFileOpenWithCallbacks and implement callback functions that read from your memory buffer.
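To illustrate that suggestion, here is a minimal sketch (in Swift, untested against the question's setup) of opening an in-memory buffer with AudioFileOpenWithCallbacks; MemoryFile and openInMemoryCAF are made-up names:
import AudioToolbox
import Foundation

// Holder for the in-memory CAF data; it must stay alive while the AudioFileID is open.
final class MemoryFile {
    let data: NSData
    init(data: NSData) { self.data = data }
}

// Read callback: copy the requested range out of the in-memory data.
let readProc: AudioFile_ReadProc = { clientData, position, requestCount, buffer, actualCount in
    let file = Unmanaged<MemoryFile>.fromOpaque(clientData).takeUnretainedValue()
    let available = file.data.length - Int(position)
    let count = min(Int(requestCount), max(available, 0))
    if count > 0 {
        memcpy(buffer, file.data.bytes.advanced(by: Int(position)), count)
    }
    actualCount.pointee = UInt32(count)
    return noErr
}

// Size callback: report the total length of the in-memory data.
let getSizeProc: AudioFile_GetSizeProc = { clientData in
    let file = Unmanaged<MemoryFile>.fromOpaque(clientData).takeUnretainedValue()
    return Int64(file.data.length)
}

func openInMemoryCAF(_ data: NSData) -> AudioFileID? {
    let holder = MemoryFile(data: data)
    var fileID: AudioFileID?
    let status = AudioFileOpenWithCallbacks(
        Unmanaged.passRetained(holder).toOpaque(),   // retained for simplicity; release when done
        readProc,
        nil,                                          // no write callback (read-only)
        getSizeProc,
        nil,                                          // no set-size callback
        kAudioFileCAFType,
        &fileID)
    return status == noErr ? fileID : nil
}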
You can accomplish it with AudioFileStreamOpen:
fileprivate var streamID: AudioFileStreamID?
public func parse(data: Data) throws {
let streamID = self.streamID!
let count = data.count
_ = try data.withUnsafeBytes { (bytes: UnsafePointer<UInt8>) in
let result = AudioFileStreamParseBytes(streamID, UInt32(count), bytes, [])
guard result == noErr else {
throw ParserError.failedToParseBytes(result)
}
}
}
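For completeness, a sketch (an assumption, not from the repo) of how streamID might be opened before parse(data:) is called; the parser instance travels through the client-data pointer so the C callbacks can reach it:
public func open() throws {
    let context = Unmanaged.passUnretained(self).toOpaque()
    var streamID: AudioFileStreamID?
    let result = AudioFileStreamOpen(
        context,
        { _, _, _, _ in },        // property-listener callback: read the data format, magic cookie, etc. here
        ParserPacketCallback,      // the packets callback shown below
        kAudioFileCAFType,         // file type hint; 0 also works if the type is unknown
        &streamID)
    guard result == noErr else {
        throw ParserError.failedToParseBytes(result)   // reusing the error case from parse(data:)
    }
    self.streamID = streamID
}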
You can store the data in memory within the packets callback:
func ParserPacketCallback(_ context: UnsafeMutableRawPointer, _ byteCount: UInt32, _ packetCount: UInt32, _ data: UnsafeRawPointer, _ packetDescriptions: Optional<UnsafeMutablePointer<AudioStreamPacketDescription>>) {
let parser = Unmanaged<Parser>.fromOpaque(context).takeUnretainedValue()
/// At this point we should definitely have a data format
guard let dataFormat = parser.dataFormatD else {
return
}
let format = dataFormat.streamDescription.pointee
let bytesPerPacket = Int(format.mBytesPerPacket)
for i in 0 ..< Int(packetCount) {
let packetStart = i * bytesPerPacket
let packetSize = bytesPerPacket
let packetData = Data(bytes: data.advanced(by: packetStart), count: packetSize)
parser.packetsX.append(packetData)
}
}
Full code is in the GitHub repo.