How to use AVAudioConverter convertToBuffer:error:withInputFromBlock:

I would like to use AVAudioConverter to convert an AVAudioPCMFormatFloat32, non-interleaved, 44.1 kHz buffer to AVAudioPCMFormatInt16, interleaved, 48 kHz.
The incoming AVAudioPCMBuffers have 800 samples each, and I need to return an AVAudioPCMBuffer with a capacity of 1920 frames.
AVAudioConverterOutputStatus conversionStatus =
    [_audioConverter convertToBuffer:audioPCMBuffer
                               error:&conversionError
                  withInputFromBlock:^AVAudioBuffer * _Nullable(AVAudioPacketCount inNumberOfPackets,
                                                                AVAudioConverterInputStatus * _Nonnull outStatus) {
    AVAudioPCMBuffer *dequeuedBuffer = [self dequeueIncomingAudioBuffer];
    if (dequeuedBuffer != nil) {
        *outStatus = AVAudioConverterInputStatus_HaveData;
    } else {
        *outStatus = AVAudioConverterInputStatus_NoDataNow;
    }
    return [dequeuedBuffer autorelease];
}];
When I have some audio, but not enough, the audio supplied in the block is converted into the output buffer.
When more incoming audio becomes available, I call convertToBuffer again and reuse the same output buffer.
The problem is that instead of appending the converted audio at the end, the converter overwrites the existing content.
Is there a way to append audio at the end instead of overwriting it?
If not, I'll have to wait until I have the requested number of samples before returning any incoming audio buffer.
Any advice would be greatly appreciated.
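One workaround is to convert into a small scratch buffer each time and copy the produced frames onto the tail of the 1920-frame buffer yourself, tracking frameLength as you go. A minimal sketch in Swift, assuming the interleaved Int16 output format described above and using a hypothetical appendFrames helper (this is not AVAudioConverter API):

import AVFoundation

// Hypothetical helper, not part of AVAudioConverter: copy the frames of a freshly
// converted scratch buffer onto the tail of a larger accumulation buffer.
// Assumes both buffers share the same interleaved Int16 format.
func appendFrames(of source: AVAudioPCMBuffer, to accumulator: AVAudioPCMBuffer) -> Bool {
    guard source.format == accumulator.format,
          accumulator.frameLength + source.frameLength <= accumulator.frameCapacity,
          let src = source.int16ChannelData,
          let dst = accumulator.int16ChannelData else { return false }

    // With an interleaved format, all samples live behind channel pointer 0.
    let samplesPerFrame = Int(source.format.channelCount)
    let writeOffset = Int(accumulator.frameLength) * samplesPerFrame
    let sampleCount = Int(source.frameLength) * samplesPerFrame
    memcpy(dst[0] + writeOffset, src[0], sampleCount * MemoryLayout<Int16>.size)

    accumulator.frameLength += source.frameLength
    return true
}

Each time convertToBuffer returns, append the scratch buffer's frames to the accumulator and only hand the accumulator on once it has reached 1920 frames.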

Related

iOS speaker output format (iPhone)

I want to play back audio data received from the network.
My incoming audio PCM data are in the format Int16, 1 channel, sample rate 8000, 160 bytes per packet.
Now I'm not sure which audio format iOS supports on the speaker side.
IMHO I have to work with Float32 and a sample rate of 44,100 / 48,000; is that right?
So I think I have to convert my Int16 linear PCM data to Float32.
Maybe I also have to resample the data from 8 kHz to 48 kHz; I'm not sure (maybe the hardware does it).
Could someone help me?
Here is my current code, where I build the AVAudioPCMBuffer.
func convertInt16ToFloat32(_ data: [Int16]) -> AVAudioPCMBuffer {
    let audioBuffer = AVAudioPCMBuffer(pcmFormat: outputFormat!, frameCapacity: 160)!
    // Each Int16 sample of the input array is scaled into the interval [-1, 1]
    for i in 0..<data.count {
        // Convert the sample (not the loop index) to a float before resampling
        let div: Float32 = (1.0 / 32768.0)
        let sample = div * Float32(data[i])
        audioBuffer.floatChannelData?.pointee[i] = sample
    }
    audioBuffer.frameLength = audioBuffer.frameCapacity
    return audioBuffer
}
And on the other side I play back the created AVAudioPCMBuffer in my AVAudioEngine.
func playFromNetwork(data: [Int16]) {
    // data: linear PCM Int16, sample rate 8000, 160 bytes
    let audio = convertInt16ToFloat32(data)
    // playback converted data on AVAudioPlayerNode
    self.playerNode!.scheduleBuffer(audio, completionHandler: nil)
    Logger.Audio.log("Play audio data .....")
}
Here is my setup for AVAudioEngine:
func initAudio() {
    try! AVAudioSession.sharedInstance().setActive(true)
    try! AVAudioSession.sharedInstance().setCategory(.playback)
    engine = AVAudioEngine()
    playerNode = AVAudioPlayerNode()
    engine!.attach(playerNode!)
    engine!.connect(playerNode!, to: engine!.mainMixerNode, format: outputFormat)
    engine!.prepare()
    try! engine!.start()
    playerNode!.play()
}
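The mixer node can bridge the sample-rate difference between its inputs and its output, but converting Int16 to Float32 is up to you; an AVAudioConverter between an 8 kHz Int16 format and a 48 kHz Float32 format does both steps at once. A minimal sketch, where the format objects and the resample helper are assumptions rather than code from the question:

import AVFoundation

// A sketch only: an explicit conversion path from 8 kHz / Int16 / mono network
// audio to a 48 kHz Float32 format suitable for the player node connection.
let networkFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                  sampleRate: 8000,
                                  channels: 1,
                                  interleaved: true)!
let playbackFormat = AVAudioFormat(standardFormatWithSampleRate: 48000, channels: 1)!
let converter = AVAudioConverter(from: networkFormat, to: playbackFormat)!

func resample(_ input: AVAudioPCMBuffer) -> AVAudioPCMBuffer? {
    // The output needs roughly sample-rate-ratio times as many frames as the input.
    let ratio = playbackFormat.sampleRate / networkFormat.sampleRate
    let capacity = AVAudioFrameCount(Double(input.frameLength) * ratio)
    guard let output = AVAudioPCMBuffer(pcmFormat: playbackFormat, frameCapacity: capacity) else {
        return nil
    }

    var suppliedInput = false
    var error: NSError?
    let status = converter.convert(to: output, error: &error) { _, outStatus in
        // Hand the single incoming buffer over exactly once per convert call.
        if suppliedInput {
            outStatus.pointee = .noDataNow
            return nil
        }
        suppliedInput = true
        outStatus.pointee = .haveData
        return input
    }
    return (status == .haveData || status == .inputRanDry) ? output : nil
}

The returned buffer can then be scheduled exactly as in playFromNetwork above; keeping one converter instance alive across buffers preserves its resampling state.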

iOS Swift playing audio (aac) from network stream

I'm developing an iOS application and I'm quite new to iOS development. So far I have implemented an H.264 decoder for a network stream using VideoToolbox, which was quite hard.
Now I need to play an audio stream that comes from the network, with no file involved, just a raw AAC stream read directly from the socket. This stream comes from the output of an ffmpeg instance.
The problem is that I don't know how to start with this; there seems to be little information about the topic. I have already tried AVAudioPlayer but got just silence. I think I first need to decompress the packets from the stream, just like with the H.264 decoder.
I have also been trying AVAudioEngine and AVAudioPlayerNode, but no success, same as with AVAudioPlayer. Can someone give me some guidance? Maybe AudioToolbox? AudioQueue?
Thank you very much for the help :)
Edit:
I'm playing around with AVAudioCompressedBuffer and getting no errors using AVAudioEngine and AVAudioNode. But I don't know what this output means:
inBuffer: <AVAudioCompressedBuffer#0x6040004039f0: 0/1024 bytes>
Does this mean that the buffer is empty? I have been trying to feed this buffer in several ways, but it always reports something like 0/1024. I think I'm not doing this right:
compressedBuffer.mutableAudioBufferList.pointee = audioBufferList
Any idea?
Thank you!
Edit 2:
I'm editing to show my code for decompressing the buffer. Maybe someone can point me in the right direction.
Note: the packet that is ingested by this function is actually passed without the ADTS header (9 bytes), but I have also tried passing it with the header.
func decodeCompressedPacket(packet: Data) -> AVAudioPCMBuffer {
    var packetCopy = packet
    var streamDescription = AudioStreamBasicDescription(mSampleRate: 44100,
                                                        mFormatID: kAudioFormatMPEG4AAC,
                                                        mFormatFlags: UInt32(MPEG4ObjectID.AAC_LC.rawValue),
                                                        mBytesPerPacket: 0,
                                                        mFramesPerPacket: 1024,
                                                        mBytesPerFrame: 0,
                                                        mChannelsPerFrame: 1,
                                                        mBitsPerChannel: 0,
                                                        mReserved: 0)
    let audioFormat = AVAudioFormat(streamDescription: &streamDescription)
    let compressedBuffer = AVAudioCompressedBuffer(format: audioFormat!, packetCapacity: 1, maximumPacketSize: 1024)
    print("packetCopy count: \(packetCopy.count)")

    var audioBuffer = AudioBuffer(mNumberChannels: 1, mDataByteSize: UInt32(packetCopy.count), mData: &packetCopy)
    var audioBufferList = AudioBufferList(mNumberBuffers: 1, mBuffers: audioBuffer)
    var mNumberBuffers = 1
    var packetSize = packetCopy.count
    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mBuffers, &audioBuffer, MemoryLayout<AudioBuffer>.size)
    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mBuffers.mDataByteSize, &packetSize, MemoryLayout<Int>.size)
    // memcpy(&compressedBuffer.mutableAudioBufferList[0].mNumberBuffers, &mNumberBuffers, MemoryLayout<UInt32>.size)
    // compressedBuffer.mutableAudioBufferList.pointee = audioBufferList

    var bufferPointer = compressedBuffer.data
    for byte in packetCopy {
        memset(compressedBuffer.mutableAudioBufferList[0].mBuffers.mData, Int32(byte), MemoryLayout<UInt8>.size)
    }

    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mNumberChannels)")
    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mDataByteSize)")
    print("mBuffers: \(compressedBuffer.audioBufferList[0].mBuffers.mData)")

    var uncompressedBuffer = uncompress(inBuffer: compressedBuffer)
    print("uncompressedBuffer: \(uncompressedBuffer)")
    return uncompressedBuffer
}
So you are right in thinking you will (most likely) need to decompress the packets received from the stream. The idea is to get them into raw PCM format so they can be sent directly to the audio output. This way you could also apply any DSP / audio manipulation you want to the audio stream.
As you mentioned, you will probably need to look in the AudioQueue direction, and the Apple docs provide a good example of streaming audio in real time, although it is in Obj-C (in this case I think it may be a good idea to carry this out in Obj-C). That is probably the best place to get started (interfacing the Obj-C with Swift is super simple).
Looking at it again in Swift, there is the class AVAudioCompressedBuffer, which seems to handle AAC for your case (you would not need to decode the AAC if you get this to work); however, there is no direct method for setting the buffer, as it is intended to be just a storage container, I believe. Here's a working example of someone using AVAudioCompressedBuffer along with an AVAudioFile (maybe you could buffer everything into files on background threads? I think it would be too much I/O overhead).
However, if you tackle this in Obj-C, there is a post on how to set the AVAudioPCMBuffer (maybe it works with AVAudioCompressedBuffer?) directly through memset (kind of disgusting, but at the same time lovely, as an embedded programmer myself).
// make a silent stereo buffer
AVAudioChannelLayout *chLayout = [[AVAudioChannelLayout alloc] initWithLayoutTag:kAudioChannelLayoutTag_Stereo];
AVAudioFormat *chFormat = [[AVAudioFormat alloc] initWithCommonFormat:AVAudioPCMFormatFloat32
                                                           sampleRate:44100.0
                                                          interleaved:NO
                                                        channelLayout:chLayout];
AVAudioPCMBuffer *thePCMBuffer = [[AVAudioPCMBuffer alloc] initWithPCMFormat:chFormat frameCapacity:1024];
thePCMBuffer.frameLength = thePCMBuffer.frameCapacity;
for (AVAudioChannelCount ch = 0; ch < chFormat.channelCount; ++ch) {
    memset(thePCMBuffer.floatChannelData[ch], 0, thePCMBuffer.frameLength * chFormat.streamDescription->mBytesPerFrame);
}
I know this is a lot to take in and none of it looks like a simple solution, but I think the Obj-C AudioQueue technique would be my first stop!
Hope this helps!
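For the decoding step itself, here is a minimal Swift sketch of what the question's uncompress(inBuffer:) might look like, under the assumption that the packet bytes (and the buffer's byteLength/packetCount) have already been copied into the AVAudioCompressedBuffer. It uses AVAudioConverter to decode and is not code from the original thread:

import AVFoundation

// A sketch, not the poster's uncompress function: decode one AAC packet held in
// an AVAudioCompressedBuffer into a PCM buffer via AVAudioConverter.
func decode(_ compressedBuffer: AVAudioCompressedBuffer, aacFormat: AVAudioFormat) -> AVAudioPCMBuffer? {
    guard let pcmFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                        sampleRate: aacFormat.sampleRate,
                                        channels: aacFormat.channelCount,
                                        interleaved: false),
          let converter = AVAudioConverter(from: aacFormat, to: pcmFormat),
          // 1024 frames matches mFramesPerPacket in the stream description above.
          let pcmBuffer = AVAudioPCMBuffer(pcmFormat: pcmFormat, frameCapacity: 1024) else {
        return nil
    }

    var suppliedInput = false
    var error: NSError?
    let status = converter.convert(to: pcmBuffer, error: &error) { _, outStatus in
        if suppliedInput {
            outStatus.pointee = .noDataNow
            return nil
        }
        suppliedInput = true
        outStatus.pointee = .haveData
        return compressedBuffer
    }
    return (status == .haveData || status == .inputRanDry) ? pcmBuffer : nil
}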

Spotify iOS SDK FFT with EZAudio returning NaN

I am trying to perform an FFT on Spotify's audio stream using EZAudio.
Following this suggestion, I have subclassed SPTCoreAudioController, overridden attemptToDeliverAudioFrames:ofCount:streamDescription:, and initialized my SPTAudioStreamingController with my new class successfully.
Spotify does not say whether the pointer passed into the overridden function points to floats, doubles, integers, etc. I have interpreted it as many different data types, all of which failed, leaving me unsure whether my FFT is wrong or my audio buffer is wrong. Here is Spotify's documentation on SPTCoreAudioController.
Assuming the audio buffer is a buffer of floats, here is one of my attempts at the FFT:
class GetAudioPCM: SPTCoreAudioController, EZAudioFFTDelegate {
    let ViewControllerFFTWindowSize: vDSP_Length = 128
    var fft: EZAudioFFTRolling?
    //var fft: EZAudioFFT?

    override func attempt(toDeliverAudioFrames audioFrames: UnsafeRawPointer!, ofCount frameCount: Int, streamDescription audioDescription: AudioStreamBasicDescription) -> Int {
        if let fft = fft {
            let newPointer = UnsafeMutableRawPointer(mutating: audioFrames)!.assumingMemoryBound(to: Float.self)
            let resultBuffer: UnsafeMutablePointer<Float> = fft.computeFFT(withBuffer: newPointer, withBufferSize: 128)
            print("results: \(resultBuffer.pointee)")
        } else {
            fft = EZAudioFFTRolling(windowSize: ViewControllerFFTWindowSize, sampleRate: Float(audioDescription.mSampleRate), delegate: self)
            //fft = EZAudioFFT(maximumBufferSize: 128, sampleRate: Float(audioDescription.mSampleRate))
        }
        return super.attempt(toDeliverAudioFrames: audioFrames, ofCount: frameCount, streamDescription: audioDescription)
    }

    func fft(_ fft: EZAudioFFT!, updatedWithFFTData fftData: UnsafeMutablePointer<Float>, bufferSize: vDSP_Length) {
        print("\n \n DATA ---------------------")
        print(bufferSize)
        if (fft?.fftData) != nil {
            print("First: \(fftData.pointee)")
            for i: Int in 0..<Int(bufferSize) {
                print(fftData[i], terminator: " :: ")
            }
        }
    }
}
I make my custom class the EZAudioFFTDelegate, I initialize an EZAudioFFTRolling object (I have also tried plain EZAudioFFT), and I tell it to perform an FFT on the buffer. I chose a small size of 128 just for initial testing.
I have tried different data types for the buffer and different FFT methods. I figured that using a well-known library with FFT support should give me correct results. Yet my output from this and similar FFTs produced 'nan' for almost every single float in the new buffer.
Is the way I access Spotify's audio buffer wrong, or is my FFT process at fault?
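One thing worth checking, sketched below on the assumption that the controller delivers interleaved linear PCM: the AudioStreamBasicDescription you are handed says whether the samples are Float32 or Int16 (via mFormatFlags), so the raw pointer can be interpreted accordingly before anything is handed to the FFT. The helper and its names are illustrative, not part of the Spotify SDK or EZAudio:

import Accelerate
import AudioToolbox

// Illustrative helper: read the stream description to decide how to interpret
// the raw pointer, and produce Float samples in [-1, 1] ready for an FFT.
func floatSamples(from audioFrames: UnsafeRawPointer,
                  frameCount: Int,
                  description asbd: AudioStreamBasicDescription) -> [Float] {
    let sampleCount = frameCount * Int(asbd.mChannelsPerFrame)

    if (asbd.mFormatFlags & kAudioFormatFlagIsFloat) != 0 {
        // Already Float32: copy the samples out directly.
        let p = audioFrames.assumingMemoryBound(to: Float.self)
        return Array(UnsafeBufferPointer(start: p, count: sampleCount))
    } else {
        // Signed Int16: convert to Float and scale before the FFT.
        let p = audioFrames.assumingMemoryBound(to: Int16.self)
        var floats = [Float](repeating: 0, count: sampleCount)
        vDSP_vflt16(p, 1, &floats, 1, vDSP_Length(sampleCount))
        return floats.map { $0 / 32768.0 }
    }
}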

Implementing Queue Services in AVAudioRecorder using Swift

Is it possible to create a buffer concept similar to Audio Queue Services with AVAudioRecorder? In my application, I need to capture the audio buffer and send it over the Internet. The server connection part is done, but I wanted to know if there is a way to record the voice continuously in the foreground and pass the audio, buffer by buffer, to the server in the background using Swift.
Comments are appreciated.
AVAudioRecorder records to a file, so you can't easily use it to stream audio data out of your app. AVAudioEngine on the other hand can call you back as it captures audio buffers:
var engine = AVAudioEngine()

func startCapturingBuffers() {
    let input = engine.inputNode
    let bus = 0
    input.installTap(onBus: bus, bufferSize: 512, format: input.inputFormat(forBus: bus)) { buffer, time in
        // buffer.floatChannelData contains the captured audio data
    }
    try! engine.start()
}
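To ship those buffers to a server you still have to serialize them yourself. A minimal sketch, assuming the tap delivers non-interleaved Float32 buffers; the networking side is whatever the app already has:

import AVFoundation

// Sketch: turn a tapped AVAudioPCMBuffer into raw bytes for a network layer.
// Only the first channel is copied; multi-channel audio would need interleaving.
func payload(from buffer: AVAudioPCMBuffer) -> Data? {
    guard let channelData = buffer.floatChannelData else { return nil }
    let byteCount = Int(buffer.frameLength) * MemoryLayout<Float>.size
    return Data(bytes: channelData[0], count: byteCount)
}

Call it from inside the tap block and hand the resulting Data to the server connection.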

Set timestamp in CMSampleBuffer using AVAssetWriter not working

Hello, I'm working on an app that records video + audio. The video source is the camera, and the audio comes from a stream. My problem happens when the connection to the stream is closed for some reason; in that case I switch the audio source to the built-in mic. The problem is that the audio is then not synchronised at all. I would like to leave a gap in the audio and set the timestamps in real time according to the current video timestamp. It seems AVAssetWriter appends the frames coming from the built-in mic consecutively and ignores the timestamps.
Do you know why AVAssetWriter is ignoring the timestamps?
EDIT:
This is the code that gets the latest video timestamp:
- (void)renderVideoSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVPixelBufferRef renderedPixelBuffer = NULL;
    CMTime timestamp = CMSampleBufferGetPresentationTimeStamp( sampleBuffer );
    self.lastVideoTimestamp = timestamp;
}
And this is the code that I use to synchronise the audio coming from the built-in mic when the stream is disconnected:
// Retime the mic sample buffer before handing it to the writer, then release the original.
CMSampleBufferRef adjustedBuffer = [self adjustTime:sampleBuffer by:self.lastVideoTimestamp];
CFRelease(sampleBuffer);
sampleBuffer = adjustedBuffer;
// Adjust CMSampleBuffer timing
- (CMSampleBufferRef)adjustTime:(CMSampleBufferRef)sample by:(CMTime)offset
{
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
    CMSampleTimingInfo *pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
    for (CMItemCount i = 0; i < count; i++)
    {
        pInfo[i].decodeTimeStamp = kCMTimeInvalid; //CMTimeSubtract(pInfo[i].decodeTimeStamp, offset);
        pInfo[i].presentationTimeStamp = CMTimeSubtract(pInfo[i].presentationTimeStamp, offset);
    }
    CMSampleBufferRef sout;
    CMSampleBufferCreateCopyWithNewTiming(nil, sample, count, pInfo, &sout);
    free(pInfo);
    return sout;
}
This is what I would like to get:
Video
--------------------------------------------------------------------
Audio:   stream            (disconnected)               built-in mic
-----------------------------------                  -----------------
As you can see, there is a gap with no audio, because the stream was disconnected and part of the audio was never received.
And this is what it currently does:
Video
--------------------------------------------------------------------
Audio:   stream            (disconnected)               built-in mic
--------------------------------------------------------------------
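For reference, this is the CMTime arithmetic that adjustTime:by: applies to every mic sample, shown here in Swift with made-up numbers (the values are illustrative only):

import CoreMedia

// Illustrative numbers only: subtracting an offset moves every presentation
// timestamp earlier by that amount, which is how the mic buffers get re-placed
// on the writer's timeline.
let presentationTime = CMTime(value: 12_000, timescale: 600) // 20.0 s on the capture clock
let offset           = CMTime(value: 600, timescale: 600)    //  1.0 s
let shifted          = CMTimeSubtract(presentationTime, offset)
print(CMTimeGetSeconds(shifted)) // 19.0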
