How to play audio from an AVAudioPCMBuffer converted from NSData - iOS

I am getting 16-bit mono PCM audio data from UDP packets like this:
- (void)udpSocket:(GCDAsyncUdpSocket *)sock didReceiveData:(NSData *)data
      fromAddress:(NSData *)address
withFilterContext:(id)filterContext
{
    ...
}
I am converting this data into a PCM buffer by calling a Swift function, as below:
func toPCMBuffer(data: NSData) -> AVAudioPCMBuffer {
    let audioFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat.PCMFormatFloat32, sampleRate: 8000, channels: 1, interleaved: false) // given NSData audio format
    var PCMBuffer = AVAudioPCMBuffer(PCMFormat: audioFormat, frameCapacity: 1024 * 10)
    PCMBuffer.frameLength = PCMBuffer.frameCapacity
    let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: Int(PCMBuffer.format.channelCount))
    data.getBytes(UnsafeMutablePointer<Void>(channels[0]), length: data.length)
    return PCMBuffer
}
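For reference, the same conversion in current Swift syntax would look roughly like the sketch below. It assumes the incoming bytes really are non-interleaved 32-bit float mono samples at 8 kHz (which is what the format above declares), and it sets frameLength from the number of bytes actually received instead of from the buffer's capacity:

import AVFoundation

// Sketch only: assumes `data` holds non-interleaved Float32 mono samples at 8 kHz.
func toPCMBuffer(data: Data) -> AVAudioPCMBuffer? {
    guard !data.isEmpty,
          let format = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                     sampleRate: 8000,
                                     channels: 1,
                                     interleaved: false) else { return nil }
    let bytesPerFrame = Int(format.streamDescription.pointee.mBytesPerFrame) // 4 bytes per Float32 frame
    let frameCount = AVAudioFrameCount(data.count / bytesPerFrame)
    guard frameCount > 0,
          let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount) else { return nil }
    buffer.frameLength = frameCount // only the frames this packet actually contains
    data.withUnsafeBytes { (src: UnsafeRawBufferPointer) in
        // Copy the raw bytes into the buffer's single float channel.
        memcpy(buffer.floatChannelData![0], src.baseAddress!, Int(frameCount) * bytesPerFrame)
    }
    return buffer
}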
The data is converted to a PCM buffer and I can see its length in the logs, but when I try to play the buffer I hear no sound.
Here is the code for receiving and playing:
func toPCMBuffer(data: NSData) -> AVAudioPCMBuffer {
    let audioFormat = AVAudioFormat(commonFormat: AVAudioCommonFormat.PCMFormatFloat32, sampleRate: 8000, channels: 1, interleaved: false) // given NSData audio format
    var PCMBuffer = AVAudioPCMBuffer(PCMFormat: audioFormat, frameCapacity: 1024 * 10)
    PCMBuffer.frameLength = PCMBuffer.frameCapacity
    let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: Int(PCMBuffer.format.channelCount))
    data.getBytes(UnsafeMutablePointer<Void>(channels[0]), length: data.length)

    var mainMixer = audioEngine.mainMixerNode
    audioEngine.attachNode(audioFilePlayer)
    audioEngine.connect(audioFilePlayer, to: mainMixer, format: PCMBuffer.format)
    audioEngine.startAndReturnError(nil)
    audioFilePlayer.play()
    audioFilePlayer.scheduleBuffer(PCMBuffer, atTime: nil, options: nil, completionHandler: nil)
    return PCMBuffer
}
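Note that this attaches, connects and starts the engine every time a packet arrives. A more robust pattern is to configure the engine once and only schedule buffers per packet; a minimal sketch of that split, assuming an audioEngine and audioFilePlayer like the ones above:

import AVFoundation

let audioEngine = AVAudioEngine()
let audioFilePlayer = AVAudioPlayerNode()

// Call once at startup: attach the player, connect it to the mixer, start the engine.
func setUpPlayback(format: AVAudioFormat) throws {
    audioEngine.attach(audioFilePlayer)
    audioEngine.connect(audioFilePlayer, to: audioEngine.mainMixerNode, format: format)
    try audioEngine.start()
    audioFilePlayer.play()
}

// Call for every received packet: just hand the converted buffer to the player node.
func enqueue(_ buffer: AVAudioPCMBuffer) {
    audioFilePlayer.scheduleBuffer(buffer, completionHandler: nil)
}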

I ended up using an Objective-C function; the data is getting converted fine:
- (AudioBufferList *)getBufferListFromData:(NSData *)data
{
    if (data.length > 0)
    {
        NSUInteger len = [data length];
        //NSData *d2 = [data subdataWithRange:NSMakeRange(4, 1028)];
        // I guess you can use Byte*, void* or Float32*. I am not sure if that makes any difference.
        Byte *byteData = (Byte *)malloc(len);
        memcpy(byteData, [data bytes], len);
        if (byteData)
        {
            AudioBufferList *theDataBuffer = (AudioBufferList *)malloc(sizeof(AudioBufferList) * 1);
            theDataBuffer->mNumberBuffers = 1;
            theDataBuffer->mBuffers[0].mDataByteSize = (UInt32)len;
            theDataBuffer->mBuffers[0].mNumberChannels = 1;
            theDataBuffer->mBuffers[0].mData = byteData;
            // Read the data into an AudioBufferList
            return theDataBuffer;
        }
    }
    return nil;
}

Related

How to convert PCM to G711 µlaw raw format

I am trying to convert PCM (microphone input) to G.711 µ-law and am having trouble getting it to work.
With my current implementation the conversion itself succeeds, but it is as if no data is stored in the buffer.
I have done a lot of research, but I can't find the cause of the problem, so I would appreciate your advice.
var audioEngine = AVAudioEngine()

var convertFormatBasicDescription = AudioStreamBasicDescription(
    mSampleRate: 8000,
    mFormatID: kAudioFormatULaw,
    mFormatFlags: AudioFormatFlags(kAudioFormatULaw),
    mBytesPerPacket: 1,
    mFramesPerPacket: 1,
    mBytesPerFrame: 1,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 8,
    mReserved: 0
)
func convert() throws {
    let inputNode = audioEngine.inputNode
    let micInputFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: true)!
    let convertFormat = AVAudioFormat(streamDescription: &convertFormatBasicDescription)!
    let converter = AVAudioConverter(from: micInputFormat, to: convertFormat)!

    inputNode.installTap(onBus: 0, bufferSize: 1024, format: micInputFormat) { [weak self] (buffer, time) in
        var newBufferAvailable = true
        let inputCallback: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            if newBufferAvailable {
                outStatus.pointee = .haveData
                newBufferAvailable = false
                return buffer
            } else {
                outStatus.pointee = .noDataNow
                return nil
            }
        }

        let convertedBuffer = AVAudioPCMBuffer(pcmFormat: convertFormat, frameCapacity: AVAudioFrameCount(convertFormat.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate))!
        convertedBuffer.frameLength = convertedBuffer.frameCapacity

        var error: NSError?
        let status = converter.convert(to: convertedBuffer, error: &error, withInputFrom: inputCallback)
        print(convertedBuffer.floatChannelData) // <- nil
    }

    audioEngine.prepare()
    try audioEngine.start()
}
Thank you.
It was audioBufferList.pointee.mBuffers.mData, not floatChannelData, that was needed to get at the converted G.711 data; floatChannelData is nil because the converted buffer's format is not 32-bit float PCM.
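For illustration, a sketch of pulling the converted µ-law bytes out of the buffer via audioBufferList, using the names from the code above:

// Sketch: extract the raw G.711 µ-law bytes from the converted buffer.
let audioBuffer = convertedBuffer.audioBufferList.pointee.mBuffers
if let mData = audioBuffer.mData {
    let byteCount = Int(audioBuffer.mDataByteSize)
    let ulawBytes = Data(bytes: mData, count: byteCount) // one byte per sample at 8 kHz
    print("converted \(byteCount) µ-law bytes")
}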

Stream audio with Swift

I'm developing an application that should record a user's voice and stream it to a custom device via the MQTT protocol.
The audio specification for the custom device is little-endian, unsigned, 16-bit LPCM at an 8 kHz sample rate. Packets should be 1000 bytes each.
I'm not familiar with AVAudioEngine, and I found this sample code, which I believe fits my case:
func startRecord() {
    audioEngine = AVAudioEngine()
    let bus = 0
    let inputNode = audioEngine.inputNode
    let inputFormat = inputNode.outputFormat(forBus: bus)

    var streamDescription = AudioStreamBasicDescription()
    streamDescription.mFormatID = kAudioFormatLinearPCM.littleEndian
    streamDescription.mSampleRate = 8000.0
    streamDescription.mChannelsPerFrame = 1
    streamDescription.mBitsPerChannel = 16
    streamDescription.mBytesPerPacket = 1000

    let outputFormat = AVAudioFormat(streamDescription: &streamDescription)!
    guard let converter: AVAudioConverter = AVAudioConverter(from: inputFormat, to: outputFormat) else {
        print("Can't convert in to this format")
        return
    }

    inputNode.installTap(onBus: 0, bufferSize: 1024, format: inputFormat) { (buffer, time) in
        print("Buffer format: \(buffer.format)")
        var newBufferAvailable = true
        let inputCallback: AVAudioConverterInputBlock = { inNumPackets, outStatus in
            if newBufferAvailable {
                outStatus.pointee = .haveData
                newBufferAvailable = false
                return buffer
            } else {
                outStatus.pointee = .noDataNow
                return nil
            }
        }

        let convertedBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat, frameCapacity: AVAudioFrameCount(outputFormat.sampleRate) * buffer.frameLength / AVAudioFrameCount(buffer.format.sampleRate))!

        var error: NSError?
        let status = converter.convert(to: convertedBuffer, error: &error, withInputFrom: inputCallback)
        assert(status != .error)
        print("Converted buffer format:", convertedBuffer.format)
    }

    audioEngine.prepare()
    do {
        try audioEngine.start()
    } catch {
        print("Can't start the engine: \(error)")
    }
}
But currently the converter can't convert the input format to my output format, and I don't understand why.
If I change my output format to something like this:
let outputFormat = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: 8000.0, channels: 1, interleaved: false)!
Then it works.
Your streamDescription is wrong: you hadn't filled in all of the fields, and mBytesPerPacket was wrong, because an ASBD packet is not the same kind of packet your protocol calls for. For uncompressed audio such as LPCM, one AudioStreamBasicDescription packet is a single frame (mFramesPerPacket = 1), so mBytesPerPacket must equal mBytesPerFrame. If your protocol requires samples to be grouped into 1000-byte packets, you will have to do that grouping yourself.
Try this:
var streamDescription = AudioStreamBasicDescription()
streamDescription.mSampleRate = 8000.0
streamDescription.mFormatID = kAudioFormatLinearPCM
streamDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger // no endian flag means little endian
streamDescription.mBytesPerPacket = 2
streamDescription.mFramesPerPacket = 1
streamDescription.mBytesPerFrame = 2
streamDescription.mChannelsPerFrame = 1
streamDescription.mBitsPerChannel = 16
streamDescription.mReserved = 0
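As noted above, the 1000-byte packetization has to happen outside the ASBD. One possible sketch, assuming the converted buffer exposes int16ChannelData (i.e. its common format is 16-bit integer PCM); the function name is illustrative:

// Sketch: slice the converted Int16 samples into fixed-size packets for the protocol.
func packetize(_ convertedBuffer: AVAudioPCMBuffer, packetSize: Int = 1000) -> [Data] {
    guard let int16Data = convertedBuffer.int16ChannelData else { return [] }
    let byteCount = Int(convertedBuffer.frameLength) * Int(convertedBuffer.format.streamDescription.pointee.mBytesPerFrame)
    let all = Data(bytes: int16Data[0], count: byteCount)

    // Split into 1000-byte chunks; a short final chunk would typically be
    // held back and prepended to the next buffer's data.
    var packets: [Data] = []
    var offset = 0
    while offset + packetSize <= all.count {
        packets.append(all.subdata(in: offset..<(offset + packetSize)))
        offset += packetSize
    }
    return packets
}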

Convert CMSampleBuffer to AVAudioPCMBuffer to get live audio frequencies

I am trying to read the frequencies values from a CMSampleBuffer returned by captureOutput of AVCaptureAudioDataOutputSampleBufferDelegate.
The idea is to create an AVAudioPCMBuffer so that I can then read its floatChannelData, but I am not sure how to pass the sample buffer's data to it.
I guess I could create it with:
public func captureOutput(_ output: AVCaptureOutput,
                          didOutput sampleBuffer: CMSampleBuffer,
                          from connection: AVCaptureConnection) {
    guard let blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer) else {
        return
    }
    let length = CMBlockBufferGetDataLength(blockBuffer)
    let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false)
    let pcmBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat!, frameCapacity: AVAudioFrameCount(length))
    pcmBuffer?.frameLength = pcmBuffer!.frameCapacity
}
But how could I fill its data?
Something along these lines should help:
var asbd = CMSampleBufferGetFormatDescription(sampleBuffer)!.audioStreamBasicDescription!
var audioBufferList = AudioBufferList()
var blockBuffer: CMBlockBuffer?

CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
    sampleBuffer,
    bufferListSizeNeededOut: nil,
    bufferListOut: &audioBufferList,
    bufferListSize: MemoryLayout<AudioBufferList>.size,
    blockBufferAllocator: nil,
    blockBufferMemoryAllocator: nil,
    flags: 0,
    blockBufferOut: &blockBuffer
)

let mBuffers = audioBufferList.mBuffers
let frameLength = AVAudioFrameCount(Int(mBuffers.mDataByteSize) / MemoryLayout<Float>.size)
let pcmBuffer = AVAudioPCMBuffer(pcmFormat: AVAudioFormat(streamDescription: &asbd)!, frameCapacity: frameLength)!
pcmBuffer.frameLength = frameLength
pcmBuffer.mutableAudioBufferList.pointee.mBuffers = mBuffers
pcmBuffer.mutableAudioBufferList.pointee.mNumberBuffers = 1
This seems to create a valid AVAudioPCMBuffer inside a capture session. But it has the wrong frame length for my use case right now, so I need to do some further buffering.
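One caveat with pointing mBuffers at the sample buffer's memory is that the data is only valid while blockBuffer is kept alive. A more defensive sketch copies the samples into the AVAudioPCMBuffer's own storage instead, assuming the capture format really is deinterleaved Float32 (otherwise floatChannelData is nil):

// Sketch: copy the sample bytes into the pcmBuffer's own allocation so its
// lifetime does not depend on the retained CMBlockBuffer.
if let src = audioBufferList.mBuffers.mData,
   let dst = pcmBuffer.floatChannelData?[0] {
    memcpy(dst, src, Int(audioBufferList.mBuffers.mDataByteSize))
}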

Audio being played over a stream is jittering/stuttering

I will briefly go over all the elements of my app:
I have an application that records audio to an AVAudioPCMBuffer. This buffer is then converted to NSData and then to [UInt8]. It is then streamed over an OutputStream. On another device, this data is received using an InputStream. Then it is converted to NSData, and back to an AVAudioPCMBuffer. This buffer is then played.
The issue is that the audio is very jittery and you can't make out voices; you can only tell that the audio gets louder or quieter depending on whether the other person is talking.
When scheduling the buffer:
self.peerAudioPlayer.scheduleBuffer(audioBuffer, completionHandler: nil)
I have tried delaying this audio for a few seconds before playing it, hoping that would make the audio clearer; however, it did not help. My best guess is that the buffer I'm creating is somehow cutting off some of the audio. So I will show you my relevant code:
Here is how I record audio:
localInput?.installTap(onBus: 1, bufferSize: 4096, format: localInputFormat) {
    (buffer, when) -> Void in
    let data = self.audioBufferToNSData(PCMBuffer: buffer)
    let output = self.outputStream!.write(data.bytes.assumingMemoryBound(to: UInt8.self), maxLength: data.length)
}
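As an aside, OutputStream.write(_:maxLength:) can accept fewer bytes than requested, and the returned count is ignored above, so frames can be silently dropped. A sketch of a hypothetical helper that keeps writing until the whole buffer has gone out:

// Sketch: keep writing until every byte of `data` is sent (or the stream errors).
func writeAll(_ data: Data, to stream: OutputStream) -> Bool {
    return data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) -> Bool in
        guard let base = raw.bindMemory(to: UInt8.self).baseAddress else { return data.isEmpty }
        var offset = 0
        while offset < data.count {
            let written = stream.write(base + offset, maxLength: data.count - offset)
            if written <= 0 { return false } // stream error or stream closed
            offset += written
        }
        return true
    }
}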
audioBufferToNSData is just a method that converts an AVAudioPCMBuffer to NSData; here it is:
func audioBufferToNSData(PCMBuffer: AVAudioPCMBuffer) -> NSData {
    let channelCount = 1 // given PCMBuffer channel count is 1
    let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData, count: channelCount)
    let data = NSData(bytes: channels[0], length: Int(PCMBuffer.frameCapacity * PCMBuffer.format.streamDescription.pointee.mBytesPerFrame))
    return data
}
I'm wondering if the issue is in the method above: perhaps when I calculate the length of the NSData object I am cutting off part of the audio.
On the receiving end I have this:
case Stream.Event.hasBytesAvailable:
    DispatchQueue.global().async {
        var tempBuffer: [UInt8] = .init(repeating: 0, count: 17640)
        let length = self.inputStream!.read(&tempBuffer, maxLength: tempBuffer.count)
        self.testBufferCount += length
        self.testBuffer.append(contentsOf: tempBuffer)
        if (self.testBufferCount >= 17640) {
            let data = NSData.init(bytes: &self.testBuffer, length: self.testBufferCount)
            let audioBuffer = self.dataToPCMBuffer(data: data)
            self.peerAudioPlayer.scheduleBuffer(audioBuffer, completionHandler: nil)
            self.testBuffer.removeAll()
            self.testBufferCount = 0
        }
    }
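One detail worth noting in this loop: tempBuffer always has 17640 elements, but read(_:maxLength:) may fill fewer, so append(contentsOf: tempBuffer) can append stale zero bytes and shift later audio. A sketch that appends only the bytes actually read:

// Sketch: append only the bytes that were actually read from the stream.
let length = self.inputStream!.read(&tempBuffer, maxLength: tempBuffer.count)
if length > 0 {
    self.testBufferCount += length
    self.testBuffer.append(contentsOf: tempBuffer[0..<length])
}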
The reason I check for 17640 is that the data being sent is exactly 17640 bytes, so I need all of it before I play it.
Furthermore, the dataToPCMBuffer method just converts the NSData back to an AVAudioPCMBuffer so that it can be played. Here is that method:
func dataToPCMBuffer(data: NSData) -> AVAudioPCMBuffer {
    let audioFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 44100, channels: 1, interleaved: false) // given NSData audio format
    let audioBuffer = AVAudioPCMBuffer(pcmFormat: audioFormat, frameCapacity: UInt32(data.length) / audioFormat.streamDescription.pointee.mBytesPerFrame)
    audioBuffer.frameLength = audioBuffer.frameCapacity
    let channels = UnsafeBufferPointer(start: audioBuffer.floatChannelData, count: Int(audioBuffer.format.channelCount))
    data.getBytes(UnsafeMutableRawPointer(channels[0]), length: data.length)
    return audioBuffer
}
Thank you in advance!
I think that in audioBufferToNSData you should use frameLength instead of frameCapacity:
let data = NSData(bytes: channels[0], length: Int(PCMBuffer.frameLength * PCMBuffer.format.streamDescription.pointee.mBytesPerFrame))
PCMBuffer.frameCapacity -> how much can be stored
PCMBuffer.frameLength -> how much of PCMBuffer.frameCapacity is actual valid data
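Putting that together, the corrected conversion would look like this (same names as in the question):

// Sketch: audioBufferToNSData using frameLength, so only valid frames are sent.
func audioBufferToNSData(PCMBuffer: AVAudioPCMBuffer) -> NSData {
    let channels = UnsafeBufferPointer(start: PCMBuffer.floatChannelData,
                                       count: Int(PCMBuffer.format.channelCount))
    let byteCount = Int(PCMBuffer.frameLength * PCMBuffer.format.streamDescription.pointee.mBytesPerFrame)
    return NSData(bytes: channels[0], length: byteCount)
}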

Swift/iOS - How to compress audio so that it can be streamed to another device, then decompress and play the same audio

I've been using the following methods to convert an AVAudioPCMBuffer to [UInt8] and then [UInt8] back to an AVAudioPCMBuffer. The problem is that every single conversion totals 17640 bytes, which is a lot to stream over MultipeerConnectivity. In fact, I think my stream ends up reading data more slowly than data is coming in, since if I end the stream on one device I can see that the other continues to read data until it realizes that the stream has ended.
Here is my conversion of AVAudioPCMBuffer to [UInt8]. Credit for this code goes to Rhythmic Fistman, from this answer.
func audioBufferToBytes(audioBuffer: AVAudioPCMBuffer) -> [UInt8] {
    let srcLeft = audioBuffer.floatChannelData![0]
    let bytesPerFrame = audioBuffer.format.streamDescription.pointee.mBytesPerFrame
    let numBytes = Int(bytesPerFrame * audioBuffer.frameLength)

    // initialize bytes to 0
    var audioByteArray = [UInt8](repeating: 0, count: numBytes)
    srcLeft.withMemoryRebound(to: UInt8.self, capacity: numBytes) { srcByteData in
        audioByteArray.withUnsafeMutableBufferPointer {
            $0.baseAddress!.initialize(from: srcByteData, count: numBytes)
        }
    }
    return audioByteArray
}
And here is [UInt8] to AVAudioPCMBuffer
func bytesToAudioBuffer(_ buf: [UInt8]) -> AVAudioPCMBuffer {
    let fmt = AVAudioFormat(commonFormat: .pcmFormatFloat32, sampleRate: 8000, channels: 1, interleaved: false)
    let frameLength = UInt32(buf.count) / fmt.streamDescription.pointee.mBytesPerFrame

    let audioBuffer = AVAudioPCMBuffer(pcmFormat: fmt, frameCapacity: frameLength)
    audioBuffer.frameLength = frameLength

    let dstLeft = audioBuffer.floatChannelData![0]
    buf.withUnsafeBufferPointer {
        let src = UnsafeRawPointer($0.baseAddress!).bindMemory(to: Float.self, capacity: Int(frameLength))
        dstLeft.initialize(from: src, count: Int(frameLength))
    }
    return audioBuffer
}
Can anyone help me compress this data so that it is easier to send over a stream, and then decompress it so it can be played?
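No answer is recorded here, but one hedged direction reuses the AVAudioConverter pattern from the G.711 question above: converting the Float32 tap output to 16-bit integer PCM at the same sample rate halves the payload, and converting to kAudioFormatULaw instead would get down to one byte per sample. A minimal sketch, with illustrative names and formats:

import AVFoundation

// Sketch: shrink a Float32 PCM buffer before streaming by converting it to
// 16-bit integer PCM at the same sample rate (half the bytes).
func compress(_ buffer: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    guard let dstFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                        sampleRate: buffer.format.sampleRate,
                                        channels: 1,
                                        interleaved: true),
          let converter = AVAudioConverter(from: buffer.format, to: dstFormat),
          let out = AVAudioPCMBuffer(pcmFormat: dstFormat, frameCapacity: buffer.frameLength)
    else { return nil }

    var fed = false
    let inputBlock: AVAudioConverterInputBlock = { _, outStatus in
        if fed {
            outStatus.pointee = .noDataNow
            return nil
        }
        fed = true
        outStatus.pointee = .haveData
        return buffer
    }

    var error: NSError?
    let status = converter.convert(to: out, error: &error, withInputFrom: inputBlock)
    return (status != .error && error == nil) ? out : nil
}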
