Decode AAC to PCM format using AVAudioConverter in Swift (iOS)

How do you convert AAC to PCM using AVAudioConverter, AVAudioCompressedBuffer and AVAudioPCMBuffer in Swift?
WWDC 2015 Session 507 mentioned that AVAudioConverter can encode and decode PCM buffers, and showed an encoding example, but no decoding example was shown.
I tried to decode, but something doesn't work and I can't tell what.
Calls:
// buffer is an AVAudioPCMBuffer from AVAudioInputNode (AVAudioEngine)
let aacBuffer = AudioBufferConverter.convertToAAC(from: buffer, error: nil) // has data
let data = Data(bytes: aacBuffer!.data, count: Int(aacBuffer!.byteLength)) // has data
let aacReverseBuffer = AudioBufferConverter.convertToAAC(from: data) // has data
let pcmReverseBuffer = AudioBufferConverter.convertToPCM(from: aacBuffer2!, error: nil) // zeroed data: the data object exists, but is filled with zeros
Here is the conversion code:
class AudioBufferFormatHelper {
    static func PCMFormat() -> AVAudioFormat? {
        return AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
    }

    static func AACFormat() -> AVAudioFormat? {
        var outDesc = AudioStreamBasicDescription(
            mSampleRate: 44100,
            mFormatID: kAudioFormatMPEG4AAC,
            mFormatFlags: 0,
            mBytesPerPacket: 0,
            mFramesPerPacket: 0,
            mBytesPerFrame: 0,
            mChannelsPerFrame: 1,
            mBitsPerChannel: 0,
            mReserved: 0)
        let outFormat = AVAudioFormat(streamDescription: &outDesc)
        return outFormat
    }
}
class AudioBufferConverter {
    static func convertToAAC(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioCompressedBuffer? {
        let outputFormat = AudioBufferFormatHelper.AACFormat()
        let outBuffer = AVAudioCompressedBuffer(format: outputFormat!, packetCapacity: 8, maximumPacketSize: 768)
        self.convert(from: buffer, to: outBuffer, error: outError)
        return outBuffer
    }

    static func convertToPCM(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioPCMBuffer? {
        let outputFormat = AudioBufferFormatHelper.PCMFormat()
        guard let outBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat!, frameCapacity: 4410) else {
            return nil
        }
        outBuffer.frameLength = 4410
        self.convert(from: buffer, to: outBuffer, error: outError)
        return outBuffer
    }

    static func convertToAAC(from data: Data) -> AVAudioCompressedBuffer? {
        let nsData = NSData(data: data)
        let inputFormat = AudioBufferFormatHelper.AACFormat()
        let buffer = AVAudioCompressedBuffer(format: inputFormat!, packetCapacity: 8, maximumPacketSize: 768)
        buffer.byteLength = UInt32(data.count)
        buffer.packetCount = 8
        buffer.data.copyMemory(from: nsData.bytes, byteCount: nsData.length)
        buffer.packetDescriptions!.pointee.mDataByteSize = 4
        return buffer
    }

    private static func convert(from sourceBuffer: AVAudioBuffer, to destinationBuffer: AVAudioBuffer, error outError: NSErrorPointer) {
        // init converter
        let inputFormat = sourceBuffer.format
        let outputFormat = destinationBuffer.format
        let converter = AVAudioConverter(from: inputFormat, to: outputFormat)
        converter!.bitRate = 32000

        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            outStatus.pointee = AVAudioConverterInputStatus.haveData
            return sourceBuffer
        }

        _ = converter!.convert(to: destinationBuffer, error: outError, withInputFrom: inputBlock)
    }
}
As a result, the AVAudioPCMBuffer contains only zeros, and in the console I see these errors:
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 1: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 3: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 5: err = -1, packet length: 0
AACDecoder.cpp:192:Deserialize: Unmatched number of channel elements in payload
AACDecoder.cpp:220:DecodeFrame: Error deserializing packet
[ac] ACMP4AACBaseDecoder.cpp:1337:ProduceOutputBufferList: (0x14f81b840) Error decoding packet 7: err = -1, packet length: 0

There were a few problems with your attempt:
you're not setting the multiple packet descriptions when you convert data -> AVAudioCompressedBuffer. You need to create them, as AAC packets are of variable size. You can either copy them from the original AAC buffer, parse them from your data by hand (ouch), or use the AudioFileStream API.
you re-create your AVAudioConverters over and over again - once for each buffer, throwing away their state. e.g. the AAC encoder for its own personal reasons needs to add 2112 frames of silence before it can get around to reproducing your audio, so recreating the converter gets you a whole lot of silence.
you present the same buffer over and over to the AVAudioConverter's input block. You should only present each buffer once.
the bit rate of 32000 didn't work (for me)
That's all I can think of right now. Try the following modifications to your code instead, which you now call like so:
(p.s. I changed some of the mono to stereo so I could play the round trip buffers on my mac, whose microphone input is strangely stereo - you might need to change it back)
(p.p.s there's obviously some kind of round trip / serialising/deserialising attempt going on here, but what exactly are you trying to do? do you want to stream AAC audio from one device to another? because it might be easier to let another API like AVPlayer play the resulting stream instead of dealing with the packets yourself)
let aacBuffer = AudioBufferConverter.convertToAAC(from: buffer, error: nil)!
let data = Data(bytes: aacBuffer.data, count: Int(aacBuffer.byteLength))
let packetDescriptions = Array(UnsafeBufferPointer(start: aacBuffer.packetDescriptions, count: Int(aacBuffer.packetCount)))
let aacReverseBuffer = AudioBufferConverter.convertToAAC(from: data, packetDescriptions: packetDescriptions)!
// was aacBuffer2
let pcmReverseBuffer = AudioBufferConverter.convertToPCM(from: aacReverseBuffer, error: nil)
class AudioBufferFormatHelper {
    static func PCMFormat() -> AVAudioFormat? {
        return AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
    }

    static func AACFormat() -> AVAudioFormat? {
        var outDesc = AudioStreamBasicDescription(
            mSampleRate: 44100,
            mFormatID: kAudioFormatMPEG4AAC,
            mFormatFlags: 0,
            mBytesPerPacket: 0,
            mFramesPerPacket: 0,
            mBytesPerFrame: 0,
            mChannelsPerFrame: 1,
            mBitsPerChannel: 0,
            mReserved: 0)
        let outFormat = AVAudioFormat(streamDescription: &outDesc)
        return outFormat
    }
}
class AudioBufferConverter {
    static var lpcmToAACConverter: AVAudioConverter! = nil

    static func convertToAAC(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioCompressedBuffer? {
        let outputFormat = AudioBufferFormatHelper.AACFormat()
        let outBuffer = AVAudioCompressedBuffer(format: outputFormat!, packetCapacity: 8, maximumPacketSize: 768)

        // init converter once
        if lpcmToAACConverter == nil {
            let inputFormat = buffer.format
            lpcmToAACConverter = AVAudioConverter(from: inputFormat, to: outputFormat!)
            // print("available rates \(lpcmToAACConverter.applicableEncodeBitRates)")
            // lpcmToAACConverter!.bitRate = 96000
            lpcmToAACConverter.bitRate = 32000 // have end of stream problems with this, not sure why
        }

        self.convert(withConverter: lpcmToAACConverter, from: buffer, to: outBuffer, error: outError)
        return outBuffer
    }

    static var aacToLPCMConverter: AVAudioConverter! = nil

    static func convertToPCM(from buffer: AVAudioBuffer, error outError: NSErrorPointer) -> AVAudioPCMBuffer? {
        let outputFormat = AudioBufferFormatHelper.PCMFormat()
        guard let outBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat!, frameCapacity: 4410) else {
            return nil
        }

        // init converter once
        if aacToLPCMConverter == nil {
            let inputFormat = buffer.format
            aacToLPCMConverter = AVAudioConverter(from: inputFormat, to: outputFormat!)
        }

        self.convert(withConverter: aacToLPCMConverter, from: buffer, to: outBuffer, error: outError)
        return outBuffer
    }

    static func convertToAAC(from data: Data, packetDescriptions: [AudioStreamPacketDescription]) -> AVAudioCompressedBuffer? {
        let nsData = NSData(data: data)
        let inputFormat = AudioBufferFormatHelper.AACFormat()
        let maximumPacketSize = packetDescriptions.map { $0.mDataByteSize }.max()!
        let buffer = AVAudioCompressedBuffer(format: inputFormat!, packetCapacity: AVAudioPacketCount(packetDescriptions.count), maximumPacketSize: Int(maximumPacketSize))
        buffer.byteLength = UInt32(data.count)
        buffer.packetCount = AVAudioPacketCount(packetDescriptions.count)
        buffer.data.copyMemory(from: nsData.bytes, byteCount: nsData.length)
        buffer.packetDescriptions!.pointee.mDataByteSize = UInt32(data.count)
        buffer.packetDescriptions!.initialize(from: packetDescriptions, count: packetDescriptions.count)
        return buffer
    }

    private static func convert(withConverter: AVAudioConverter, from sourceBuffer: AVAudioBuffer, to destinationBuffer: AVAudioBuffer, error outError: NSErrorPointer) {
        // input each buffer only once
        var newBufferAvailable = true
        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            if newBufferAvailable {
                outStatus.pointee = .haveData
                newBufferAvailable = false
                return sourceBuffer
            } else {
                outStatus.pointee = .noDataNow
                return nil
            }
        }

        let status = withConverter.convert(to: destinationBuffer, error: outError, withInputFrom: inputBlock)
        print("status: \(status.rawValue)")
    }
}

Related

Can you play audio directly from a CMSampleBuffer?

I have mic audio captured during an ARSession that I wish to pass to another VC and play back after the capture has taken place, but whilst the app is still running (and audio in memory).
The audio is currently captured as a single CMSampleBuffer and accessed through the didOutputAudioSampleBuffer ARSessionDelegate method.
I've worked with audio files and AVAudioPlayer before, but am new to CMSampleBuffer.
Is there a way of taking the raw buffer as is and playing it? If so, which classes enable this? Or does it need to be rendered/converted into some other format or file first?
This is the format description of the data in the buffer:
mediaType:'soun'
mediaSubType:'lpcm'
mediaSpecific: {
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'lpcm'
mFormatFlags: 0xc
mBytesPerPacket: 2
mFramesPerPacket: 1
mBytesPerFrame: 2
mChannelsPerFrame: 1
mBitsPerChannel: 16 }
cookie: {(null)}
ACL: {Mono}
FormatList Array: {
Index: 0
ChannelLayoutTag: 0x640001
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'lpcm'
mFormatFlags: 0xc
mBytesPerPacket: 2
mFramesPerPacket: 1
mBytesPerFrame: 2
mChannelsPerFrame: 1
mBitsPerChannel: 16 }}
}
extensions: {(null)}
Any guidance appreciated, as Apple's docs aren't clear on this matter, and related questions on SO deal more with a live-stream of audio than capture and subsequent playback.
It seems that the answer is no: you can't simply save and play back raw buffer audio; it needs to be converted to something more persistent first.
Looks like the main way to do this is to use AVAssetWriter to save the buffer data as an audio file, for playback later using AVAudioPlayer.
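For illustration, here is a rough sketch of that AVAssetWriter route (not from the original answer): AudioSampleWriter and outputURL are hypothetical names, the M4A/AAC output settings are just one reasonable choice, and the sample buffers are assumed to come from session(_:didOutputAudioSampleBuffer:).
import AVFoundation

// A rough sketch: write incoming audio CMSampleBuffers to an M4A file with
// AVAssetWriter, then play the finished file later with AVAudioPlayer.
final class AudioSampleWriter {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private var sessionStarted = false

    init(outputURL: URL) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .m4a)
        // AAC mono at 44.1 kHz; adjust to match the captured format if needed.
        let settings: [String: Any] = [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVSampleRateKey: 44100,
            AVNumberOfChannelsKey: 1
        ]
        input = AVAssetWriterInput(mediaType: .audio, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        writer.add(input)
        if !writer.startWriting() {
            print("Could not start writing: \(String(describing: writer.error))")
        }
    }

    // Call this from session(_:didOutputAudioSampleBuffer:).
    func append(_ sampleBuffer: CMSampleBuffer) {
        if !sessionStarted {
            // Start the session at the first buffer's timestamp.
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            sessionStarted = true
        }
        if input.isReadyForMoreMediaData {
            if !input.append(sampleBuffer) {
                print("Could not append sample buffer: \(String(describing: writer.error))")
            }
        }
    }

    // When capture ends; afterwards the file at outputURL can be handed to AVAudioPlayer.
    func finish(completion: @escaping () -> Void) {
        input.markAsFinished()
        writer.finishWriting(completionHandler: completion)
    }
}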
It's possible to pass the mic input to the audio engine in parallel with recording, with minimal lag:
let audioEngine = AVAudioEngine()
...
self.audioEngine.connect(self.audioEngine.inputNode,
to: self.audioEngine.mainMixerNode, format: nil)
self.audioEngine.start()
If using the sample buffers is important, it can roughly be done by converting them into PCM buffers:
import AVFoundation
extension AVAudioPCMBuffer {
    static func create(from sampleBuffer: CMSampleBuffer) -> AVAudioPCMBuffer? {
        guard let description: CMFormatDescription = CMSampleBufferGetFormatDescription(sampleBuffer),
              let sampleRate: Float64 = description.audioStreamBasicDescription?.mSampleRate,
              let channelsPerFrame: UInt32 = description.audioStreamBasicDescription?.mChannelsPerFrame /*,
              let numberOfChannels = description.audioChannelLayout?.numberOfChannels */
        else { return nil }

        guard let blockBuffer: CMBlockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
            return nil
        }

        let samplesCount = CMSampleBufferGetNumSamples(sampleBuffer)
        //let length: Int = CMBlockBufferGetDataLength(blockBuffer)

        let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: sampleRate, channels: AVAudioChannelCount(1), interleaved: false)
        let buffer = AVAudioPCMBuffer(pcmFormat: audioFormat!, frameCapacity: AVAudioFrameCount(samplesCount))!
        buffer.frameLength = buffer.frameCapacity

        // GET BYTES
        var dataPointer: UnsafeMutablePointer<Int8>?
        CMBlockBufferGetDataPointer(blockBuffer, atOffset: 0, lengthAtOffsetOut: nil, totalLengthOut: nil, dataPointerOut: &dataPointer)

        guard var channel: UnsafeMutablePointer<Float> = buffer.floatChannelData?[0],
              let data = dataPointer else { return nil }

        var data16 = UnsafeRawPointer(data).assumingMemoryBound(to: Int16.self)
        for _ in 0...samplesCount - 1 {
            channel.pointee = Float32(data16.pointee) / Float32(Int16.max)
            channel += 1
            for _ in 0...channelsPerFrame - 1 {
                data16 += 1
            }
        }
        return buffer
    }
}
class BufferPlayer {
    let audioEngine = AVAudioEngine()
    let player = AVAudioPlayerNode()

    deinit {
        self.audioEngine.stop()
    }

    init(withBuffer: CMSampleBuffer) {
        self.audioEngine.attach(self.player)
        self.audioEngine.connect(self.player,
                                 to: self.audioEngine.mainMixerNode,
                                 format: AVAudioPCMBuffer.create(from: withBuffer)!.format)
        _ = try? audioEngine.start()
    }

    func playEnqueue(buffer: CMSampleBuffer) {
        guard let bufferPCM = AVAudioPCMBuffer.create(from: buffer) else { return }
        self.player.scheduleBuffer(bufferPCM, completionHandler: nil)
        if !self.player.isPlaying { self.player.play() }
    }
}

How to play raw audio data from socket in Swift

I need to play raw audio data coming over a socket in small chunks. I have read that I am supposed to use a circular buffer and found a few solutions in Objective-C, but couldn't make any of them work, especially in Swift 3.
Can anyone help me?
First, implement a ring buffer like so.
public struct RingBuffer<T> {
    private var array: [T?]
    private var readIndex = 0
    private var writeIndex = 0

    public init(count: Int) {
        array = [T?](repeating: nil, count: count)
    }

    /* Returns false if out of space. */
    @discardableResult public mutating func write(element: T) -> Bool {
        if !isFull {
            array[writeIndex % array.count] = element
            writeIndex += 1
            return true
        } else {
            return false
        }
    }

    /* Returns nil if the buffer is empty. */
    public mutating func read() -> T? {
        if !isEmpty {
            let element = array[readIndex % array.count]
            readIndex += 1
            return element
        } else {
            return nil
        }
    }

    fileprivate var availableSpaceForReading: Int {
        return writeIndex - readIndex
    }

    public var isEmpty: Bool {
        return availableSpaceForReading == 0
    }

    fileprivate var availableSpaceForWriting: Int {
        return array.count - availableSpaceForReading
    }

    public var isFull: Bool {
        return availableSpaceForWriting == 0
    }
}
After that, implement the Audio Unit like so (modify if necessary).
class ToneGenerator {
    fileprivate var toneUnit: AudioUnit? = nil

    init() {
        setupAudioUnit()
    }

    deinit {
        stop()
    }

    func setupAudioUnit() {
        // Configure the description of the output audio component we want to find:
        let componentSubtype: OSType
        #if os(OSX)
            componentSubtype = kAudioUnitSubType_DefaultOutput
        #else
            componentSubtype = kAudioUnitSubType_RemoteIO
        #endif
        var defaultOutputDescription = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                                                 componentSubType: componentSubtype,
                                                                 componentManufacturer: kAudioUnitManufacturer_Apple,
                                                                 componentFlags: 0,
                                                                 componentFlagsMask: 0)
        let defaultOutput = AudioComponentFindNext(nil, &defaultOutputDescription)

        var err: OSStatus

        // Create a new instance of it in the form of our audio unit:
        err = AudioComponentInstanceNew(defaultOutput!, &toneUnit)
        assert(err == noErr, "AudioComponentInstanceNew failed")

        // Set the render callback as the input for our audio unit:
        var renderCallbackStruct = AURenderCallbackStruct(inputProc: renderCallback as? AURenderCallback,
                                                          inputProcRefCon: nil)
        err = AudioUnitSetProperty(toneUnit!,
                                   kAudioUnitProperty_SetRenderCallback,
                                   kAudioUnitScope_Input,
                                   0,
                                   &renderCallbackStruct,
                                   UInt32(MemoryLayout<AURenderCallbackStruct>.size))
        assert(err == noErr, "AudioUnitSetProperty SetRenderCallback failed")

        // Set the stream format for the audio unit. That is, the format of the data that our render callback will provide.
        var streamFormat = AudioStreamBasicDescription(mSampleRate: Float64(sampleRate),
                                                       mFormatID: kAudioFormatLinearPCM,
                                                       mFormatFlags: kAudioFormatFlagsNativeFloatPacked|kAudioFormatFlagIsNonInterleaved,
                                                       mBytesPerPacket: 4 /* four bytes per float */,
                                                       mFramesPerPacket: 1,
                                                       mBytesPerFrame: 4,
                                                       mChannelsPerFrame: 1,
                                                       mBitsPerChannel: 4 * 8,
                                                       mReserved: 0)
        err = AudioUnitSetProperty(toneUnit!,
                                   kAudioUnitProperty_StreamFormat,
                                   kAudioUnitScope_Input,
                                   0,
                                   &streamFormat,
                                   UInt32(MemoryLayout<AudioStreamBasicDescription>.size))
        assert(err == noErr, "AudioUnitSetProperty StreamFormat failed")
    }

    func start() {
        var status: OSStatus
        status = AudioUnitInitialize(toneUnit!)
        status = AudioOutputUnitStart(toneUnit!)
        assert(status == noErr)
    }

    func stop() {
        AudioOutputUnitStop(toneUnit!)
        AudioUnitUninitialize(toneUnit!)
    }
}
These are fixed values:
private let sampleRate = 16000
private let amplitude: Float = 1.0
private let frequency: Float = 440

/// Theta is changed over time as each sample is provided.
private var theta: Float = 0.0

private func renderCallback(_ inRefCon: UnsafeMutableRawPointer,
                            ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
                            inTimeStamp: UnsafePointer<AudioTimeStamp>,
                            inBusNumber: UInt32,
                            inNumberFrames: UInt32,
                            ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus {
    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    let buffer = abl[0]
    let pointer: UnsafeMutableBufferPointer<Float32> = UnsafeMutableBufferPointer(buffer)
    for frame in 0..<inNumberFrames {
        let pointerIndex = pointer.startIndex.advanced(by: Int(frame))
        pointer[pointerIndex] = sin(theta) * amplitude
        theta += 2.0 * Float(M_PI) * frequency / Float(sampleRate)
    }
    return noErr
}
You need to put the incoming data into the circular buffer and then play the sound from it, as sketched below.
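For illustration, here is a minimal sketch (not part of the original answer) of how the incoming socket bytes and the render callback could be wired together through the ring buffer above. sampleBuffer, enqueue(socketData:) and playbackRenderCallback are hypothetical names, it assumes the socket delivers 32-bit float mono PCM matching the stream format above, and a real implementation would need thread-safe access to the buffer:
import AudioToolbox
import Foundation

// Shared ring buffer holding decoded Float samples (hypothetical name).
// Sized for roughly five seconds of audio at the 16 kHz sample rate above.
var sampleBuffer = RingBuffer<Float>(count: 16000 * 5)

// Called whenever a chunk arrives from the socket. Assumes the peer sends
// 32-bit float mono PCM; convert from Int16 here if that is what you receive.
func enqueue(socketData: Data) {
    socketData.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
        for sample in raw.bindMemory(to: Float.self) {
            sampleBuffer.write(element: sample)
        }
    }
}

// Install this instead of the sine-wave callback via kAudioUnitProperty_SetRenderCallback.
private func playbackRenderCallback(_ inRefCon: UnsafeMutableRawPointer,
                                    ioActionFlags: UnsafeMutablePointer<AudioUnitRenderActionFlags>,
                                    inTimeStamp: UnsafePointer<AudioTimeStamp>,
                                    inBusNumber: UInt32,
                                    inNumberFrames: UInt32,
                                    ioData: UnsafeMutablePointer<AudioBufferList>) -> OSStatus {
    let abl = UnsafeMutableAudioBufferListPointer(ioData)
    guard let mData = abl[0].mData else { return noErr }
    let out = mData.assumingMemoryBound(to: Float32.self)
    for frame in 0..<Int(inNumberFrames) {
        // Play whatever has arrived so far; output silence on underrun.
        out[frame] = sampleBuffer.read() ?? 0.0
    }
    return noErr
}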

What is the best way to write a struct to file?

I have these two structs:
struct pcap_hdr_s {
    UInt32 magic_number;
    UInt16 version_major;
    UInt16 version_minor;
    int32_t thiszone;
    UInt32 sigfigs;
    UInt32 snaplen;
    UInt32 network;
};

//packet header
struct pcaprec_hdr_s {
    UInt32 ts_sec;
    UInt32 ts_usec;
    UInt32 incl_len;
    UInt32 orig_len;
};
which are initialised as follows (for example):
let pcapHeader : pcap_hdr_s = pcap_hdr_s(magic_number: 0xa1b2c3d4,
                                         version_major: 2,
                                         version_minor: 4,
                                         thiszone: 0,
                                         sigfigs: 0,
                                         snaplen: pcap_record_size,
                                         network: LINKTYPE_ETHERNET)

let pcapRecHeader : pcaprec_hdr_s = pcaprec_hdr_s(ts_sec: UInt32(ts.tv_sec),
                                                  ts_usec: UInt32(ts.tv_nsec),
                                                  incl_len: plen,
                                                  orig_len: length)
I tried to create Data/NSData objects of the structs like this:
//write pcap header
let pcapHeaderData : NSData = NSData(bytes: pcapHeader, length: sizeofValue(pcapHeader))
//write pcaprec header
let pcapRecHeaderData : NSData = NSData(bytes: pcapRecHeader, length: sizeofValue(pcapRecHeader))
but I always get this error for each line:
"Connot convert value if type 'pcap_hdr_s' to expected arguemnt type 'UsafeRawPointer?'"
I had a look at the documentation of UnsafeRawPointer in Swift, but I don't yet understand it well enough to create the NSData object from the structs.
Am I on the right track, or is there a better way to accomplish my intent?
If this Data initialisation worked, my next steps would be:
1. Append pcapRecHeaderData to pcapHeaderData
2. Write pcapHeaderData atomically to a file/URL with the provided function of Data/NSData
EDIT:
//packet ethernet header
struct ethernet_hdr_s {
    let dhost : [UInt8]
    let shost : [UInt8]
    let type : UInt16
};
let src_mac : [UInt8] = [0x66, 0x77, 0x88, 0x99, 0xAA, 0xBB]
let dest_mac : [UInt8] = [0x00, 0x11, 0x22, 0x33, 0x44, 0x55]
let ethernetHeader : ethernet_hdr_s = ethernet_hdr_s(dhost: dest_mac, shost: src_mac, type: 0x0800)
EDIT 2:
let payloadSize = packet.payload.count
let plen = (payloadSize < Int(pcap_record_size) ? payloadSize : Int(pcap_record_size));

bytesWritten = withUnsafePointer(to: &(packet.payload)) {
    $0.withMemoryRebound(to: UInt8.self, capacity: Int(plen)) {
        ostream.write($0, maxLength: Int(plen))
    }
}
if bytesWritten != (Int(plen)) {
    // Could not write all bytes, report error ...
    NSLog("error in Writting packet payload, not all Bytes written: bytesWritten: %d|plen: %d", bytesWritten, Int(plen))
}
You can write arbitrary data to an OutputStream without creating a
(NS)Data object first. The "challenge" is how to convert the pointer to
the struct to a UInt8 pointer as expected by the write method:
let ostream = OutputStream(url: url, append: false)! // Add error checking here!
ostream.open()

var pcapHeader = pcap_hdr_s(...)
let headerSize = MemoryLayout.size(ofValue: pcapHeader)

let bytesWritten = withUnsafePointer(to: &pcapHeader) {
    $0.withMemoryRebound(to: UInt8.self, capacity: headerSize) {
        ostream.write($0, maxLength: headerSize)
    }
}
if bytesWritten != headerSize {
    // Could not write all bytes, report error ...
}
In the same way you can read data from an InputStream:
let istream = InputStream(url: url)! // Add error checking here!
istream.open()

let bytesRead = withUnsafeMutablePointer(to: &pcapHeader) {
    $0.withMemoryRebound(to: UInt8.self, capacity: headerSize) {
        istream.read($0, maxLength: headerSize)
    }
}
if bytesRead != headerSize {
    // Could not read all bytes, report error ...
}
If the file was possibly created on a different platform with a
different byte order then you can check the "magic" and swap bytes
if necessary (as described on https://wiki.wireshark.org/Development/LibpcapFileFormat):
switch pcapHeader.magic_number {
case 0xa1b2c3d4:
    break // Already in host byte order
case 0xd4c3b2a1:
    pcapHeader.version_major = pcapHeader.version_major.byteSwapped
    pcapHeader.version_minor = pcapHeader.version_minor.byteSwapped
    // ...
default:
    // Unknown magic, report error ...
    break
}
To simplify the task of writing and reading structs one can define
custom extension methods, e.g.
extension OutputStream {
    enum ValueWriteError: Error {
        case incompleteWrite
        case unknownError
    }

    func write<T>(value: T) throws {
        var value = value
        let size = MemoryLayout.size(ofValue: value)
        let bytesWritten = withUnsafePointer(to: &value) {
            $0.withMemoryRebound(to: UInt8.self, capacity: size) {
                write($0, maxLength: size)
            }
        }
        if bytesWritten == -1 {
            throw streamError ?? ValueWriteError.unknownError
        } else if bytesWritten != size {
            throw ValueWriteError.incompleteWrite
        }
    }
}
extension InputStream {
    enum ValueReadError: Error {
        case incompleteRead
        case unknownError
    }

    func read<T>(value: inout T) throws {
        let size = MemoryLayout.size(ofValue: value)
        let bytesRead = withUnsafeMutablePointer(to: &value) {
            $0.withMemoryRebound(to: UInt8.self, capacity: size) {
                read($0, maxLength: size)
            }
        }
        if bytesRead == -1 {
            throw streamError ?? ValueReadError.unknownError
        } else if bytesRead != size {
            throw ValueReadError.incompleteRead
        }
    }
}
Now you can write and read simply with
try ostream.write(value: pcapHeader)
try istream.read(value: &pcapHeader)
Of course this works only with "self-contained" structs like your
pcap_hdr_s and pcaprec_hdr_s.
You can convert pcap_hdr_s to Data and vice versa in Swift 3 with
pcap_hdr_s -> Data
var pcapHeader : pcap_hdr_s = pcap_hdr_s(magic_number ...
let data = withUnsafePointer(to: &pcapHeader) {
    Data(bytes: UnsafePointer($0), count: MemoryLayout.size(ofValue: pcapHeader))
}
Data -> pcap_hdr_s
let header: pcap_hdr_s = data.withUnsafeBytes { $0.pointee }
Reference: round trip Swift number types to/from Data
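For the question's remaining steps (appending the record header data and writing atomically), a small sketch under the same assumptions could look like this; it assumes pcapHeader and pcapRecHeader are declared as vars as above, and url is a hypothetical destination URL:
// Convert both headers to Data and append the record header to the file header.
var fileData = withUnsafePointer(to: &pcapHeader) {
    Data(bytes: UnsafePointer($0), count: MemoryLayout.size(ofValue: pcapHeader))
}
withUnsafePointer(to: &pcapRecHeader) {
    fileData.append(Data(bytes: UnsafePointer($0), count: MemoryLayout.size(ofValue: pcapRecHeader)))
}

do {
    // Data.write(to:options:) performs the atomic write asked about in the question.
    try fileData.write(to: url, options: .atomic)
} catch {
    print("Could not write pcap data: \(error)")
}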

How to send NSData over an OutputStream

You can view this project on github here: https://github.com/Lkember/MotoIntercom/
The class that is of importance is PhoneViewController.swift
I have an AVAudioPCMBuffer. The buffer is then converted to NSData using this function:
func audioBufferToNSData(PCMBuffer: AVAudioPCMBuffer) -> NSData {
    let channelCount = 1
    let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: channelCount)
    let data = NSData(bytes: channels[0], length: Int(PCMBuffer.frameCapacity * PCMBuffer.format.streamDescription.pointee.mBytesPerFrame))
    return data
}
This data needs to be converted to UnsafePointer<UInt8> according to the documentation on OutputStream.write.
https://developer.apple.com/reference/foundation/outputstream/1410720-write
This is what I have so far:
let data = self.audioBufferToNSData(PCMBuffer: buffer)
let output = self.outputStream!.write(UnsafePointer<UInt8>(data.bytes.assumingMemoryBound(to: UInt8.self)), maxLength: data.length)
When this data is received, it is converted back to an AVAudioPCMBuffer using this method:
func dataToPCMBuffer(data: NSData) -> AVAudioPCMBuffer {
    let audioFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat.pcmFormatFloat32, sampleRate: 8000, channels: 1, interleaved: false) // given NSData audio format
    let audioBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: UInt32(data.length) / audioFormat.streamDescription.pointee.mBytesPerFrame)
    audioBuffer.frameLength = audioBuffer.frameCapacity
    let channels = UnsafeBufferPointer(start: audioBuffer.floatChannelData, count: Int(audioBuffer.format.channelCount))
    data.getBytes(UnsafeMutableRawPointer(channels[0]), length: data.length)
    return audioBuffer
}
Unfortunately, when I play this audioBuffer, I only hear static. I don't believe that it is an issue with my conversion from AVAudioPCMBuffer to NSData or my conversion from NSData back to AVAudioPCMBuffer. I imagine it is the way that I am writing NSData to the stream.
The reason I don't believe that it is my conversion is because I have created a sample project located here (which you can download and try) that records audio to an AVAudioPCMBuffer, converts it to NSData, converts the NSData back to AVAudioPCMBuffer and plays the audio. In this case there are no problems playing the audio.
EDIT:
I never showed how I actually get Data from the stream as well. Here is how it's done:
func stream(_ aStream: Stream, handle eventCode: Stream.Event) {
    switch (eventCode) {
    case Stream.Event.hasBytesAvailable:
        DispatchQueue.global().async {
            var buffer = [UInt8](repeating: 0, count: 8192)
            let length = self.inputStream!.read(&buffer, maxLength: buffer.count)
            let data = NSData.init(bytes: buffer, length: buffer.count)
            print("\(#file) > \(#function) > \(length) bytes read on queue \(self.currentQueueName()!) buffer.count \(data.length)")

            if (length > 0) {
                let audioBuffer = self.dataToPCMBuffer(data: data)
                self.audioPlayerQueue.async {
                    self.peerAudioPlayer.scheduleBuffer(audioBuffer)
                    if (!self.peerAudioPlayer.isPlaying && self.localAudioEngine.isRunning) {
                        self.peerAudioPlayer.play()
                    }
                }
            }
            else if (length == 0) {
                print("\(#file) > \(#function) > Reached end of stream")
            }
        }
Once I have this data, I use the dataToPCMBuffer method to convert it to an AVAudioPCMBuffer.
EDIT 1:
Here is the AVAudioFormat's that I use:
self.localInputFormat = AVAudioFormat.init(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
Originally, I was using this:
self.localInputFormat = self.localInput?.inputFormat(forBus: 0)
However, if the channel count did not equal the expected channel count, I was getting crashes, so I switched to the above.
The actual AVAudioPCMBuffer I'm using is in the installTap method (where localInput is an AVAudioInputNode):
localInput?.installTap(onBus: 0, bufferSize: 4096, format: localInputFormat) {
(buffer, time) -> Void in
Pretty sure you want to replace this:
let length = self.inputStream!.read(&buffer, maxLength: buffer.count)
let data = NSData.init(bytes: buffer, length: buffer.count)
With
let length = self.inputStream!.read(&buffer, maxLength: buffer.count)
let data = NSData.init(bytes: buffer, length: length)
Also, I am not 100% sure that the random blocks of data will always be OK to use to make the audio buffers. You might need to collect the data into a bigger block of NSData first.
Right now, since you always pass in blocks of 8192 bytes (even if you read fewer), the buffer creation probably always succeeds. After the fix, it might not.

Audio being played from microphone is choppy and sounds like air blowing into the microphone

I'm recording audio using the following:
localInput?.installTap(onBus: 0, bufferSize: 4096, format: localInputFormat) {
    (buffer, time) -> Void in
    let audioBuffer = self.audioBufferToBytes(audioBuffer: buffer)
    let output = self.outputStream!.write(audioBuffer, maxLength: Int(buffer.frameLength))
    if output > 0 {
        print("\(#file) > \(#function) > \(output) bytes written from queue \(self.currentQueueName())")
    }
    else if output == -1 {
        let error = self.outputStream!.streamError
        print("\(#file) > \(#function) > Error writing to stream: \(error?.localizedDescription)")
    }
}
Where my localInputFormat is the following:
self.localInput = self.localAudioEngine.inputNode
self.localAudioEngine.attach(self.localAudioPlayer)
self.localInputFormat = self.localInput?.inputFormat(forBus: 0)
self.localAudioEngine.connect(self.localAudioPlayer, to: self.localAudioEngine.mainMixerNode, format: self.localInputFormat)
The function audioBufferToBytes is as follows:
func audioBufferToBytes(audioBuffer: AVAudioPCMBuffer) -> [UInt8] {
    let srcLeft = audioBuffer.floatChannelData![0]
    let bytesPerFrame = audioBuffer.format.streamDescription.pointee.mBytesPerFrame
    let numBytes = Int(bytesPerFrame * audioBuffer.frameLength)

    // initialize bytes to 0
    var audioByteArray = [UInt8](repeating: 0, count: numBytes)

    srcLeft.withMemoryRebound(to: UInt8.self, capacity: numBytes) { srcByteData in
        audioByteArray.withUnsafeMutableBufferPointer {
            $0.baseAddress!.initialize(from: srcByteData, count: numBytes)
        }
    }
    return audioByteArray
}
On the other device, when I receive the data I have to convert it back. So as it's received it runs through the following:
func bytesToAudioBuffer(_ buf: [UInt8]) -> AVAudioPCMBuffer {
    let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)
    let frameLength = UInt32(buf.count) / fmt.streamDescription.pointee.mBytesPerFrame

    let audioBuffer = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: frameLength)
    audioBuffer.frameLength = frameLength

    let dstLeft = audioBuffer.floatChannelData![0]

    buf.withUnsafeBufferPointer {
        let src = UnsafeRawPointer($0.baseAddress!).bindMemory(to: Float.self, capacity: Int(frameLength))
        dstLeft.initialize(from: src, count: Int(frameLength))
    }
    return audioBuffer
}
And lastly, we play this audio data:
self.audioPlayerQueue.async {
    self.peerAudioPlayer.scheduleBuffer(audioBuffer)
    if (!self.peerAudioPlayer.isPlaying && self.localAudioEngine.isRunning) {
        self.peerAudioPlayer.play()
    }
}
However, on either speaker I just hear what sounds like someone tapping the microphone every half-second(ish). Not them actually talking or anything. I imagine this is due to my conversion from an audio buffer to bytes and back, but I'm not sure. Does anyone see any issues with the above?
Thanks.
If anyone is interested in the solution: basically, the issue was that the audio on the recording device came in buffers of 17640 bytes, but streaming breaks that up into smaller pieces, so on the receiving device I had to collect a full 17640 bytes and THEN play the audio, rather than playing every small bit of data as it was received.
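A minimal sketch of that fix (not from the original post): it assumes this lives in the same class as the question's bytesToAudioBuffer, peerAudioPlayer, localAudioEngine and audioPlayerQueue, and 17640 bytes is the buffer size mentioned above.
// Hypothetical instance properties on the receiving view controller.
private var pendingAudioData = Data()
private let bytesPerAudioBuffer = 17640 // one full recorded buffer, as noted above

// Call this from the stream handler with each chunk read from the input stream.
func didReceive(_ bytes: [UInt8], length: Int) {
    pendingAudioData.append(contentsOf: bytes[0..<length])

    // Only build and schedule a PCM buffer once a complete chunk has arrived.
    while pendingAudioData.count >= bytesPerAudioBuffer {
        let chunk = Array(pendingAudioData.prefix(bytesPerAudioBuffer))
        pendingAudioData.removeFirst(bytesPerAudioBuffer)

        let audioBuffer = bytesToAudioBuffer(chunk)
        audioPlayerQueue.async {
            self.peerAudioPlayer.scheduleBuffer(audioBuffer)
            if !self.peerAudioPlayer.isPlaying && self.localAudioEngine.isRunning {
                self.peerAudioPlayer.play()
            }
        }
    }
}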
