Handle Varying Number of Samples in Audio Unit Rendering Cycle - iOS

This is a problem that's come up in my app after the introduction of the iPhone 6s and 6s+, and I'm almost positive that it is because the new model's built-in mic is stuck recording at 48kHz (you can read more about this here). To clarify, this was never a problem with previous phone models that I've tested. I'll walk through my Audio Engine implementation and the varying results at different points depending on the phone model further below.
So here's what's happening - when my code runs on previous devices I get a consistent number of audio samples in each CMSampleBuffer returned by the AVCaptureDevice, usually 1024 samples. The render callback for my audio unit graph provides an appropriate buffer with space for 1024 frames. Everything works great and sounds great.
Then Apple had to go make this damn iPhone 6s (just kidding, it's great, this bug is just getting to my head) and now I get some very inconsistent and confusing results. The AVCaptureDevice now varies between capturing 940 and 941 samples, and the render callback starts out making a buffer with space for 940 or 941 sample frames on the first call, but then immediately increases the space it reserves on subsequent calls, up to 1010, 1012, or 1024 sample frames, where it stays. The space it ends up reserving varies by session. To be honest, I have no idea how this render callback determines how many frames it prepares for the render, but I'm guessing it has to do with the sample rate of the Audio Unit that the render callback is on.
The format of the CMSampleBuffer comes in at a 44.1kHz sample rate no matter what the device is, so I'm guessing there's some sort of implicit sample rate conversion that happens before I even receive the CMSampleBuffer from the AVCaptureDevice on the 6s. The only difference is that the preferred hardware sample rate of the 6s is 48kHz, as opposed to 44.1kHz on earlier models.
I've read that with the 6s you do have to be ready to make space for a varying number of samples being returned, but is the kind of behavior I described above normal? If it is, how can my render cycle be tailored to handle this?
Below is the code that is processing the audio buffers if you care to look further into this:
The audio sample buffers, which are CMSampleBufferRefs, come in through the mic AVCaptureDevice and are sent to my audio processing function, which does the following to the captured CMSampleBufferRef named audioBuffer:
CMBlockBufferRef buffer = CMSampleBufferGetDataBuffer(audioBuffer);
CMItemCount numSamplesInBuffer = CMSampleBufferGetNumSamples(audioBuffer);
AudioBufferList audioBufferList;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(audioBuffer,
NULL,
&audioBufferList,
sizeof(audioBufferList),
NULL,
NULL,
kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
&buffer
);
self.audioProcessingCallback(&audioBufferList, numSamplesInBuffer, audioBuffer);
CFRelease(buffer);
This puts the audio samples into an AudioBufferList and sends it, along with the number of samples and the retained CMSampleBuffer, to the function below that I use for audio processing. TL;DR: the following code sets up some Audio Units in an Audio Graph, using the CMSampleBuffer's format to set the ASBD for input, runs the audio samples through a converter unit, a NewTimePitch unit, and then another converter unit. I then start a render call on the output converter unit with the number of samples that I received from the CMSampleBufferRef and put the rendered samples back into the AudioBufferList, to subsequently be written out to the movie file. More on the Audio Unit render callback below.
movieWriter.audioProcessingCallback = {(audioBufferList, numSamplesInBuffer, CMSampleBuffer) -> () in
var ASBDSize = UInt32(sizeof(AudioStreamBasicDescription))
self.currentInputAudioBufferList = audioBufferList.memory
let formatDescription = CMSampleBufferGetFormatDescription(CMSampleBuffer)
let sampleBufferASBD = CMAudioFormatDescriptionGetStreamBasicDescription(formatDescription!)
if (sampleBufferASBD.memory.mFormatID != kAudioFormatLinearPCM) {
print("Bad ASBD")
}
if(sampleBufferASBD.memory.mChannelsPerFrame != self.currentInputASBD.mChannelsPerFrame || sampleBufferASBD.memory.mSampleRate != self.currentInputASBD.mSampleRate){
// Set currentInputASBD to format of data coming IN from camera
self.currentInputASBD = sampleBufferASBD.memory
print("New IN ASBD: \(self.currentInputASBD)")
// set the ASBD for converter in's input to currentInputASBD
var err = AudioUnitSetProperty(self.converterInAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&self.currentInputASBD,
UInt32(sizeof(AudioStreamBasicDescription)))
self.checkErr(err, "Set converter in's input stream format")
// Set currentOutputASBD to the in/out format for newTimePitch unit
err = AudioUnitGetProperty(self.newTimePitchAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&self.currentOutputASBD,
&ASBDSize)
self.checkErr(err, "Get NewTimePitch ASBD stream format")
print("New OUT ASBD: \(self.currentOutputASBD)")
//Set the ASBD for the convert out's input to currentOutputASBD
err = AudioUnitSetProperty(self.converterOutAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&self.currentOutputASBD,
ASBDSize)
self.checkErr(err, "Set converter out's input stream format")
//Set the ASBD for the converter out's output to currentInputASBD
err = AudioUnitSetProperty(self.converterOutAudioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
0,
&self.currentInputASBD,
ASBDSize)
self.checkErr(err, "Set converter out's output stream format")
//Initialize the graph
err = AUGraphInitialize(self.auGraph)
self.checkErr(err, "Initialize audio graph")
self.checkAllASBD()
}
self.currentSampleTime += Double(numSamplesInBuffer)
var timeStamp = AudioTimeStamp()
memset(&timeStamp, 0, sizeof(AudioTimeStamp))
timeStamp.mSampleTime = self.currentSampleTime
timeStamp.mFlags = AudioTimeStampFlags.SampleTimeValid
var flags = AudioUnitRenderActionFlags(rawValue: 0)
let err = AudioUnitRender(self.converterOutAudioUnit,
&flags,
&timeStamp,
0,
UInt32(numSamplesInBuffer),
audioBufferList)
self.checkErr(err, "Render Call on converterOutAU")
}
The Audio Unit Render Callback that is called once the AudioUnitRender call reaches the input converter unit is below
func pushCurrentInputBufferIntoAudioUnit(inRefCon : UnsafeMutablePointer<Void>, ioActionFlags : UnsafeMutablePointer<AudioUnitRenderActionFlags>, inTimeStamp : UnsafePointer<AudioTimeStamp>, inBusNumber : UInt32, inNumberFrames : UInt32, ioData : UnsafeMutablePointer<AudioBufferList>) -> OSStatus {
let bufferRef = UnsafeMutablePointer<AudioBufferList>(inRefCon)
ioData.memory = bufferRef.memory
print(inNumberFrames);
return noErr
}
Blah, this is a huge brain dump but I really appreciate ANY help. Please let me know if there's any additional information you need.

Generally, you handle slight variations in buffer size (but a constant sample rate in and out) by putting the incoming samples in a lock-free circular FIFO, and not removing any blocks of samples from that FIFO until you have a full-size block, plus potentially some safety padding to cover future size jitter.
The variation in size probably has to do with the sample rate converter ratio not being a simple multiple, the resampling filter(s) needed, and any buffering needed for the resampling process.
1024 * (44100/48000) = 940.8
So that rate conversion might explain the jitter between 940 and 941 samples. If the hardware is always shipping out blocks of 1024 samples at a fixed rate of 48 kHz, and you need that block resampled to 44100 for your callback ASAP, there's a fraction of a converted sample that eventually needs to be output on only some output callbacks.
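As a rough sketch of that accumulate-then-drain pattern (my own illustration in Swift, not taken from the code above; the bookkeeping is deliberately simple and not actually lock-free, so on a real render thread you would want atomic indices or an existing ring buffer such as TPCircularBuffer):
// Single-producer / single-consumer sample FIFO: the capture side pushes
// whatever arrives (940, 941, 1024... frames), the render side only pops
// once a full block is available and outputs silence otherwise.
final class SampleFIFO {
    private var storage: [Float]
    private var head = 0        // read index
    private var tail = 0        // write index
    private var available = 0   // frames currently buffered
    init(capacity: Int) {
        storage = [Float](repeating: 0, count: capacity)
    }
    // Capture side: append however many frames the CMSampleBuffer delivered.
    func push(_ samples: UnsafePointer<Float>, count n: Int) -> Bool {
        guard available + n <= storage.count else { return false } // overflow: drop or enlarge
        for i in 0..<n {
            storage[tail] = samples[i]
            tail = (tail + 1) % storage.count
        }
        available += n
        return true
    }
    // Render side: only drain once a full block (plus any safety padding) is buffered.
    func pop(into out: UnsafeMutablePointer<Float>, frames: Int) -> Bool {
        guard available >= frames else { return false } // not enough yet: render silence instead
        for i in 0..<frames {
            out[i] = storage[head]
            head = (head + 1) % storage.count
        }
        available -= frames
        return true
    }
}
The render callback then asks for exactly inNumberFrames and falls back to outputting silence when pop returns false, so the occasional 940/941-frame capture never forces a short render.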

Related

STM32 - Reading I2S to record a .WAV file. Audio choppy, what is causing it?

I'm using an STM32 (STM32F446RE) to receive audio from two INMP441 MEMS microphones in a stereo setup via the I2S protocol and record it into a .WAV file on a micro SD card, using the HAL library.
I wrote firmware that records audio into a .WAV with FreeRTOS, but the audio files that I record sound like Darth Vader. Here is a screenshot of the audio in Audacity:
If you zoom in, you can see a constant noise being inserted in between the real audio data:
I don't know what is causing this.
I have tried increasing the MessageQueue, but that doesn't seem to be the problem; the queue is kept at 0 most of the time. I've tried different frame sizes and sampling rates, changing the number of channels, and using only one INMP441. All this without any success.
I'll proceed to explain the firmware.
Here is a block diagram of the architecture for the RTOS that I have implemented:
It consists of three tasks. The first one receives a command via UART (with interrupts) that signals to start or stop recording. The second one is simply a state machine that walks through the steps of writing a .WAV.
Here the code for the WriteWavFileTask:
switch(audio_state)
{
case STATE_START_RECORDING:
sprintf(filename, "%saud_%03d.wav", SDPath, count++);
do
{
res = f_open(&file_ptr, filename, FA_CREATE_ALWAYS|FA_WRITE);
}
while(res != FR_OK);
res = fwrite_wav_header(&file_ptr, I2S_SAMPLE_FREQUENCY, I2S_FRAME, 2);
HAL_I2S_Receive_DMA(&hi2s2, aud_buf, READ_SIZE);
audio_state = STATE_RECORDING;
break;
case STATE_RECORDING:
osDelay(50);
break;
case STATE_STOP:
HAL_I2S_DMAStop(&hi2s2);
while(osMessageQueueGetCount(AudioQueueHandle)) osDelay(1000);
filesize = f_size(&file_ptr);
data_len = filesize - 44;
total_len = filesize - 8;
f_lseek(&file_ptr, 4);
f_write(&file_ptr, (uint8_t*)&total_len, 4, bw);
f_lseek(&file_ptr, 40);
f_write(&file_ptr, (uint8_t*)&data_len, 4, bw);
f_close(&file_ptr);
audio_state = STATE_IDLE;
break;
case STATE_IDLE:
osThreadSuspend(WAVHandle);
audio_state = STATE_START_RECORDING;
break;
default:
osDelay(50);
break;
Here are the macros used in the code for readability:
#define I2S_DATA_WORD_LENGTH (24) // industry-standard 24-bit I2S
#define I2S_FRAME (32) // bits per sample
#define READ_SIZE (128) // samples to read from I2S
#define WRITE_SIZE (READ_SIZE*I2S_FRAME/16) // half words to write
#define WRITE_SIZE_BYTES (WRITE_SIZE*2) // bytes to write
#define I2S_SAMPLE_FREQUENCY (16000) // sample frequency
The last task is responsible for processing the buffer received via I2S. Here is the code:
void convert_endianness(uint32_t *array, uint16_t Size) {
for (int i = 0; i < Size; i++) {
array[i] = __REV(array[i]);
}
}
void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
convert_endianness((uint32_t *)aud_buf, READ_SIZE);
osMessageQueuePut(AudioQueueHandle, aud_buf, 0L, 0);
HAL_I2S_Receive_DMA(hi2s, aud_buf, READ_SIZE);
}
void pvrWriteAudioTask(void *argument)
{
/* USER CODE BEGIN pvrWriteAudioTask */
static UINT *bw;
static uint16_t aud_ptr[WRITE_SIZE];
/* Infinite loop */
for(;;)
{
osMessageQueueGet(AudioQueueHandle, aud_ptr, 0L, osWaitForever);
res = f_write(&file_ptr, aud_ptr, WRITE_SIZE_BYTES, bw);
}
/* USER CODE END pvrWriteAudioTask */
}
This task reads from a queue an array of 256 uint16_t elements containing the raw PCM audio data. f_write takes its Size parameter as a number of bytes to write to the SD card, so 512 bytes. The I2S receives 128 frames (for a 32-bit frame, 128 words).
The following is the configuration for the I2S and clocks:
Any help would be much appreciated!
Solution
As pmacfarlane pointed out, the problem was with the method used for buffering the audio data. The solution consisted of easing the overhead on the ISR and implementing a circular DMA for double buffering. Here is the code:
#define I2S_DATA_WORD_LENGTH (24) // industry-standard 24-bit I2S
#define I2S_FRAME (32) // bits per sample
#define READ_SIZE (128) // samples to read from I2S
#define BUFFER_SIZE (READ_SIZE*I2S_FRAME/16) // number of uint16_t elements expected
#define WRITE_SIZE_BYTES (BUFFER_SIZE*2) // bytes to write
#define I2S_SAMPLE_FREQUENCY (16000) // sample frequency
uint16_t aud_buf[2*BUFFER_SIZE]; // Double buffering
static volatile uint16_t *BufPtr;
void convert_endianness(uint32_t *array, uint16_t Size) {
for (int i = 0; i < Size; i++) {
array[i] = __REV(array[i]);
}
}
void HAL_I2S_RxHalfCpltCallback(I2S_HandleTypeDef *hi2s)
{
BufPtr = aud_buf;
osSemaphoreRelease(RxAudioSemHandle);
}
void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
BufPtr = &aud_buf[BUFFER_SIZE];
osSemaphoreRelease(RxAudioSemHandle);
}
void pvrWriteAudioTask(void *argument)
{
/* USER CODE BEGIN pvrWriteAudioTask */
static UINT bw;   /* bytes actually written, filled in by f_write() */
/* Infinite loop */
for(;;)
{
osSemaphoreAcquire(RxAudioSemHandle, osWaitForever);
convert_endianness((uint32_t *)BufPtr, READ_SIZE);
res = f_write(&file_ptr, (const void *)BufPtr, WRITE_SIZE_BYTES, &bw);
}
/* USER CODE END pvrWriteAudioTask */
}
Problems
I think the problem is your method of buffering the audio data - mainly in this function:
void HAL_I2S_RxCpltCallback(I2S_HandleTypeDef *hi2s)
{
convert_endianness((uint32_t *)aud_buf, READ_SIZE);
osMessageQueuePut(AudioQueueHandle, aud_buf, 0L, 0);
HAL_I2S_Receive_DMA(hi2s, aud_buf, READ_SIZE);
}
The main problem is that you are re-using the same buffer each time. You have queued a message to save aud_buf to the SD-card, but you've also instructed the I2S to start DMAing data into that same buffer, before it has been saved. You'll end up saving some kind of mish-mash of "old" data and "new" data.
@Flexz pointed out that the message queue takes a copy of the data, so there is no issue with the I2S writing over the data that is being written to the SD-card. However, taking the copy (in an ISR) adds overhead and delays the start of the new I2S DMA.
Another problem is that you are doing the endian conversion in this function (that is called from an ISR). This will block any other (lower priority) interrupts from being serviced while this happens, which is a bad thing in an embedded system. You should do the endian conversion in the task that reads from the queue. ISRs should be very short and do the minimum possible work (often just setting a flag, giving a semaphore, or adding something to a queue).
Lastly, while you are doing the endian conversion, what is happening to the incoming audio samples? The previous DMA has completed and you haven't started a new one, so they will just be dropped on the floor.
Possible solution
You probably want to allocate a suitably big buffer, and configure your DMA to work in circular buffer mode. This means that once started, the DMA will continue forever (until you stop it), so you'll never drop any samples. There won't be any gap between one DMA finishing and a new one starting, since you never need to start a new one.
The DMA provides a "half-complete" interrupt, to say when it has filled half the buffer. So start the DMA, and when you get the half-complete interrupt, queue up the first half of the buffer to be saved. When you get the fully-complete interrupt, queue up the second half of the buffer to be saved. Rinse and repeat.
You might want to add some logic to detect if the interrupt happens before the previous save has completed, since the data will be overrun and possibly corrupted. Depending on the speed of the SD-card (and the sample rate), this may or may not be a problem.

Cancel iOS microphone echo cancellation and noise suppression

Is there a way to cancel pre-processing like echo cancellation and noise suppression in the audio recorder on iOS?
I'm using AVAudioRecorder with meteringEnabled=true, and I get the average decibel level using averagePowerForChannel (docs).
I am trying to measure ambient noise near the phone, and iPhone 8 seems to amplify low noises or cancel them out if I start to speak. For example, if background music has an absolute decibel level of 30 - iOS seems to amplify it. When I start to speak even quietly - the dB level drops significantly.
But since I want to measure ambient noise - I don't want this pre-processing.
I tried setInputGain (docs) but isInputGainSettable is always false - therefore, I can't take this approach.
Is there a way to cancel any amplification or pre-processing like echo cancellation and noise suppression?
You can enable and disable AEC and AGC using AudioUnitSetProperty.
https://developer.apple.com/documentation/audiotoolbox/1440371-audiounitsetproperty
Here is a code snippet:
lResult = AudioUnitSetProperty(lAUAudioUnit,
kAUVoiceIOProperty_BypassVoiceProcessing,
kAudioUnitScope_Global,
lInputBus,
&lFalse,
sizeof(lFalse));
lResult = AudioUnitSetProperty(lAUAudioUnit,
kAUVoiceIOProperty_VoiceProcessingEnableAGC,
kAudioUnitScope_Global,
lInputBus,
&lFalse,
sizeof(lFalse));
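A Swift version of the same idea, as a sketch rather than drop-in code: it assumes a Voice-Processing I/O unit, since the kAUVoiceIOProperty_* keys apply to kAudioUnitSubType_VoiceProcessingIO rather than plain RemoteIO, and it mirrors the global scope / input element choices used above.
import AudioToolbox

var desc = AudioComponentDescription(componentType: kAudioUnitType_Output,
                                     componentSubType: kAudioUnitSubType_VoiceProcessingIO,
                                     componentManufacturer: kAudioUnitManufacturer_Apple,
                                     componentFlags: 0,
                                     componentFlagsMask: 0)
var voiceIOUnit: AudioUnit?
if let component = AudioComponentFindNext(nil, &desc) {
    _ = AudioComponentInstanceNew(component, &voiceIOUnit)
}
if let voiceIO = voiceIOUnit {
    var bypass: UInt32 = 1    // 1 = bypass the voice-processing block entirely
    var agc: UInt32 = 0       // 0 = disable automatic gain control
    let size = UInt32(MemoryLayout<UInt32>.size)
    let inputBus: AudioUnitElement = 1
    let bypassErr = AudioUnitSetProperty(voiceIO, kAUVoiceIOProperty_BypassVoiceProcessing,
                                         kAudioUnitScope_Global, inputBus, &bypass, size)
    let agcErr = AudioUnitSetProperty(voiceIO, kAUVoiceIOProperty_VoiceProcessingEnableAGC,
                                      kAudioUnitScope_Global, inputBus, &agc, size)
    print("bypass voice processing: \(bypassErr), disable AGC: \(agcErr)")
}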
What the app needs is access to unprocessed audio after disabling the AGC (Auto Gain Control) filters on the audio channel. To get access to raw and unprocessed audio, turn on Measurement mode in iOS.
As described in the iOS documentation, "Measurement" mode is a mode that indicates that your app is performing measurement of audio input or output.
This mode is intended for apps that need to minimize the amount of system-supplied signal processing to input and output signals. If recording on devices with more than one built-in microphone, the primary microphone is used.
The JavaScript code I used to set this (using NativeScript) before recording is:
// Disable AGC
const avSession = AVAudioSession.sharedInstance();
avSession.setCategoryModeOptionsError(AVAudioSessionCategoryRecord, AVAudioSessionModeMeasurement, null);
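For a native Swift app, the equivalent session setup is a one-liner on AVAudioSession; a minimal sketch, assuming an AVAudioRecorder/AVAudioSession-based recorder like the one in the question:
import AVFoundation

// Request .measurement mode so the system applies as little input
// processing (AGC, noise suppression, etc.) as possible.
let session = AVAudioSession.sharedInstance()
do {
    try session.setCategory(.record, mode: .measurement, options: [])
    try session.setActive(true)
} catch {
    print("Failed to configure audio session: \(error)")
}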
I tried the above solution, but it didn't work for me.
Below is my code:
var componentDesc: AudioComponentDescription
= AudioComponentDescription(
componentType: OSType(kAudioUnitType_Output),
componentSubType: OSType(kAudioUnitSubType_RemoteIO),
componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
componentFlags: UInt32(0),
componentFlagsMask: UInt32(0) )
var osErr: OSStatus = noErr
let component: AudioComponent! = AudioComponentFindNext(nil, &componentDesc)
var tempAudioUnit: AudioUnit?
osErr = AudioComponentInstanceNew(component, &tempAudioUnit)
var outUnit = tempAudioUnit!
var lFalse = UInt32(1)
let lInputBus = AudioUnitElement(1)
let outputBus = AudioUnitElement(0)
let lResult = AudioUnitSetProperty(outUnit,
kAUVoiceIOProperty_BypassVoiceProcessing,
kAudioUnitScope_Global,
lInputBus,
&lFalse,
0);
var flag = Int32(1)
let Result = AudioUnitSetProperty(outUnit,
kAUVoiceIOProperty_VoiceProcessingEnableAGC,
kAudioUnitScope_Global,
outputBus,
&flag,
0);

AVAudioPlayerNode lastRenderTime

I use multiple AVAudioPlayerNode in AVAudioEngine to mix audio files for playback.
Once all the setup is done (engine prepared, started, audio file segments scheduled), I'm calling play() method on each player node to start playback.
Because it takes time to loop through all the player nodes, I take a snapshot of the first node's lastRenderTime value and use it to compute a start time for each node's play(at:) method, to keep playback in sync between nodes:
let delay = 0.0
let startSampleTime = time.sampleTime // time is the snapshot value
let sampleRate = player.outputFormat(forBus: 0).sampleRate
let startTime = AVAudioTime(
sampleTime: startSampleTime + AVAudioFramePosition(delay * sampleRate),
atRate: sampleRate)
player.play(at: startTime)
The problem is with the current playback time.
I use this computation to get the current time, where seekTime is a value I keep track of in case we seek the player. It's 0.0 at start:
private var _currentTime: TimeInterval {
guard player.engine != nil,
let lastRenderTime = player.lastRenderTime,
lastRenderTime.isSampleTimeValid,
lastRenderTime.isHostTimeValid else {
return seekTime
}
let sampleRate = player.outputFormat(forBus: 0).sampleRate
let sampleTime = player.playerTime(forNodeTime: lastRenderTime)?.sampleTime ?? 0
if sampleTime > 0 && sampleRate != 0 {
return seekTime + (Double(sampleTime) / sampleRate)
}
return seekTime
}
While this produces a relatively correct value, I can hear a delay between the time I call play and the first sound I hear, because lastRenderTime immediately starts to advance once I call play(at:), and there must be some kind of processing/buffering time offset.
The noticeable delay is around 100ms, which is very big, and I need a precise current time value to do visual rendering in parallel.
It probably doesn't matter, but every audio file is AAC audio, and I schedule segments of them in player nodes, I don't use buffers directly.
Segments length may vary. I also call prepare(withFrameCount:) on each player node once I have scheduled audio data.
So my question is: is the delay I observe a buffering issue? (Should I schedule shorter segments, for example?) Is there a way to compute this value precisely so I can adjust my current playback time computation?
When I install a tap block on one AVAudioPlayerNode, the block is called with a buffer of length 4410 at a sample rate of 44100 Hz, which means 0.1 s of audio data. Should I rely on this to compute the latency?
I'm wondering if I can trust the length of the buffer I get in the tap block. Alternatively, I'm trying to compute the total latency for my audio graph. Can someone provide insight on how to determine this value precisely?
From a post on Apple's developer forums by theanalogkid:
On the system, latency is measured by:
Audio Device I/O Buffer Frame Size + Output Safety Offset + Output Stream Latency + Output Device Latency
If you're trying to calculate total roundtrip latency you can add:
Input Latency + Input Safety Offset to the above.
The timestamp you see at the render proc accounts for the buffer frame size and the safety offset, but the stream and device latencies are not accounted for.
iOS gives you access to the most important of the above information via AVAudioSession and as mentioned you can also use the "preferred" session settings - setPreferredIOBufferDuration and preferredIOBufferDuration for further control.
/* The current hardware input latency in seconds. */
@property(readonly) NSTimeInterval inputLatency NS_AVAILABLE_IOS(6_0);
/* The current hardware output latency in seconds. */
@property(readonly) NSTimeInterval outputLatency NS_AVAILABLE_IOS(6_0);
/* The current hardware IO buffer duration in seconds. */
@property(readonly) NSTimeInterval IOBufferDuration NS_AVAILABLE_IOS(6_0);
Audio Units also have the kAudioUnitProperty_Latency property you can query.
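As a rough Swift sketch of pulling those session values together (the safety offsets are not exposed on iOS, so treat the sum as an estimate rather than an exact figure):
import AVFoundation

let session = AVAudioSession.sharedInstance()
// Output-side estimate: IO buffer duration + hardware output latency.
let outputSide = session.ioBufferDuration + session.outputLatency
// Add the input side for a rough round-trip figure.
let roundTrip = outputSide + session.inputLatency
print("IO buffer duration: \(session.ioBufferDuration) s")
print("Output latency:     \(session.outputLatency) s")
print("Input latency:      \(session.inputLatency) s")
print("Estimated output-side latency: \(outputSide) s")
print("Estimated round trip:          \(roundTrip) s")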

Encoding PCM (CMSampleBufferRef) to AAC on iOS - How to set frequency and bitrate?

I want to encode PCM (CMSampleBufferRef(s) going live from AVCaptureAudioDataOutputSampleBufferDelegate) into AAC.
When the first CMSampleBufferRef arrives, I set both the (in/out) AudioStreamBasicDescription(s), the "out" one according to the documentation:
AudioStreamBasicDescription inAudioStreamBasicDescription = *CMAudioFormatDescriptionGetStreamBasicDescription((CMAudioFormatDescriptionRef)CMSampleBufferGetFormatDescription(sampleBuffer));
AudioStreamBasicDescription outAudioStreamBasicDescription = {0}; // Always initialize the fields of a new audio stream basic description structure to zero, as shown here: ...
outAudioStreamBasicDescription.mSampleRate = 44100; // The number of frames per second of the data in the stream, when the stream is played at normal speed. For compressed formats, this field indicates the number of frames per second of equivalent decompressed data. The mSampleRate field must be nonzero, except when this structure is used in a listing of supported formats (see “kAudioStreamAnyRate”).
outAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC; // kAudioFormatMPEG4AAC_HE does not work. Can't find `AudioClassDescription`. `mFormatFlags` is set to 0.
outAudioStreamBasicDescription.mFormatFlags = kMPEG4Object_AAC_SSR; // Format-specific flags to specify details of the format. Set to 0 to indicate no format flags. See “Audio Data Format Identifiers” for the flags that apply to each format.
outAudioStreamBasicDescription.mBytesPerPacket = 0; // The number of bytes in a packet of audio data. To indicate variable packet size, set this field to 0. For a format that uses variable packet size, specify the size of each packet using an AudioStreamPacketDescription structure.
outAudioStreamBasicDescription.mFramesPerPacket = 1024; // The number of frames in a packet of audio data. For uncompressed audio, the value is 1. For variable bit-rate formats, the value is a larger fixed number, such as 1024 for AAC. For formats with a variable number of frames per packet, such as Ogg Vorbis, set this field to 0.
outAudioStreamBasicDescription.mBytesPerFrame = 0; // The number of bytes from the start of one frame to the start of the next frame in an audio buffer. Set this field to 0 for compressed formats. ...
outAudioStreamBasicDescription.mChannelsPerFrame = 1; // The number of channels in each frame of audio data. This value must be nonzero.
outAudioStreamBasicDescription.mBitsPerChannel = 0; // ... Set this field to 0 for compressed formats.
outAudioStreamBasicDescription.mReserved = 0; // Pads the structure out to force an even 8-byte alignment. Must be set to 0.
and AudioConverterRef.
AudioClassDescription audioClassDescription;
memset(&audioClassDescription, 0, sizeof(audioClassDescription));
UInt32 size;
NSAssert(AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders, sizeof(outAudioStreamBasicDescription.mFormatID), &outAudioStreamBasicDescription.mFormatID, &size) == noErr, nil);
uint32_t count = size / sizeof(AudioClassDescription);
AudioClassDescription descriptions[count];
NSAssert(AudioFormatGetProperty(kAudioFormatProperty_Encoders, sizeof(outAudioStreamBasicDescription.mFormatID), &outAudioStreamBasicDescription.mFormatID, &size, descriptions) == noErr, nil);
for (uint32_t i = 0; i < count; i++) {
if ((outAudioStreamBasicDescription.mFormatID == descriptions[i].mSubType) && (kAppleSoftwareAudioCodecManufacturer == descriptions[i].mManufacturer)) {
memcpy(&audioClassDescription, &descriptions[i], sizeof(audioClassDescription));
}
}
NSAssert(audioClassDescription.mSubType == outAudioStreamBasicDescription.mFormatID && audioClassDescription.mManufacturer == kAppleSoftwareAudioCodecManufacturer, nil);
AudioConverterRef audioConverter;
memset(&audioConverter, 0, sizeof(audioConverter));
NSAssert(AudioConverterNewSpecific(&inAudioStreamBasicDescription, &outAudioStreamBasicDescription, 1, &audioClassDescription, &audioConverter) == 0, nil);
And then, I convert every CMSampleBufferRef into raw AAC data.
AudioBufferList inAaudioBufferList;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &inAaudioBufferList, sizeof(inAaudioBufferList), NULL, NULL, 0, &blockBuffer);
NSAssert(inAaudioBufferList.mNumberBuffers == 1, nil);
uint32_t bufferSize = inAaudioBufferList.mBuffers[0].mDataByteSize;
uint8_t *buffer = (uint8_t *)malloc(bufferSize);
memset(buffer, 0, bufferSize);
AudioBufferList outAudioBufferList;
outAudioBufferList.mNumberBuffers = 1;
outAudioBufferList.mBuffers[0].mNumberChannels = inAaudioBufferList.mBuffers[0].mNumberChannels;
outAudioBufferList.mBuffers[0].mDataByteSize = bufferSize;
outAudioBufferList.mBuffers[0].mData = buffer;
UInt32 ioOutputDataPacketSize = 1;
NSAssert(AudioConverterFillComplexBuffer(audioConverter, inInputDataProc, &inAaudioBufferList, &ioOutputDataPacketSize, &outAudioBufferList, NULL) == 0, nil);
NSData *data = [NSData dataWithBytes:outAudioBufferList.mBuffers[0].mData length:outAudioBufferList.mBuffers[0].mDataByteSize];
free(buffer);
CFRelease(blockBuffer);
inInputDataProc() implementation:
OSStatus inInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData)
{
AudioBufferList audioBufferList = *(AudioBufferList *)inUserData;
ioData->mBuffers[0].mData = audioBufferList.mBuffers[0].mData;
ioData->mBuffers[0].mDataByteSize = audioBufferList.mBuffers[0].mDataByteSize;
return noErr;
}
Now data holds my raw AAC, which I wrap into an ADTS frame with a proper ADTS header; a sequence of these ADTS frames is a playable AAC document.
But I don't understand this code as much as I'd like to. Generally, I don't understand the audio side... I just wrote it by following blogs, forums, and docs, over quite a lot of time, and now it works, but I don't know why, or how to change some parameters. So here are my questions:
1. I need to use this converter while the HW encoder is occupied (by AVAssetWriter). This is why I make a SW converter via AudioConverterNewSpecific() and not AudioConverterNew(). But now setting outAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC_HE; does not work: it can't find an AudioClassDescription, even if mFormatFlags is set to 0. What am I losing by using kAudioFormatMPEG4AAC (kMPEG4Object_AAC_SSR) over kAudioFormatMPEG4AAC_HE? What should I use for a live stream, kMPEG4Object_AAC_SSR or kMPEG4Object_AAC_Main?
2. How do I change the sample rate properly? If I set outAudioStreamBasicDescription.mSampleRate to 22050 or 8000, for example, the audio playback sounds slowed down. I set the sampling frequency index in the ADTS header to the same frequency as outAudioStreamBasicDescription.mSampleRate.
3. How do I change the bitrate? ffmpeg -i shows this info for the produced AAC:
Stream #0:0: Audio: aac, 44100 Hz, mono, fltp, 64 kb/s.
How do I change it to 16 kbps, for example? The bitrate decreases as I decrease the frequency, but I believe this is not the only way? And playback is damaged by decreasing the frequency anyway, as I mention in 2.
4. How do I calculate the size of the output buffer? For now I set it to uint32_t bufferSize = inAaudioBufferList.mBuffers[0].mDataByteSize; as I believe the compressed format won't be larger than the uncompressed one... But isn't that unnecessarily large?
5. How do I set ioOutputDataPacketSize properly? If I'm reading the documentation right, I should set it to UInt32 ioOutputDataPacketSize = bufferSize / outAudioStreamBasicDescription.mBytesPerPacket; but mBytesPerPacket is 0. If I set it to 0, AudioConverterFillComplexBuffer() returns an error. If I set it to 1, it works, but I don't know why...
6. In inInputDataProc() there are 3 "out" parameters. I set just ioData. Should I also set ioNumberDataPackets and outDataPacketDescription? Why and how?
You may need to change the sample rate of the raw audio data by using a resampling audio unit before feeding the audio to the AAC converter. Otherwise there will be a mismatch between the AAC header and the audio data.
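For the bitrate question (3) specifically, one property worth knowing about, not covered in the answer above, is kAudioConverterEncodeBitRate, set on the converter right after it is created. A minimal Swift sketch, assuming a converter created as in the question (the encoder may clamp the requested value to the nearest rate it supports for the chosen sample rate and channel count):
import AudioToolbox

// Sketch, not from the question's code: ask the AAC encoder for a target
// bitrate (in bits per second) on an already-created AudioConverterRef.
func setEncoderBitRate(_ converter: AudioConverterRef, bitsPerSecond: UInt32) -> OSStatus {
    var bitRate = bitsPerSecond
    return AudioConverterSetProperty(converter,
                                     kAudioConverterEncodeBitRate,
                                     UInt32(MemoryLayout<UInt32>.size),
                                     &bitRate)
}
// Hypothetical usage with the converter from the question:
// let status = setEncoderBitRate(audioConverter, bitsPerSecond: 16_000)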

How to set timestamp of CMSampleBuffer for AVWriter writing

I'm working with AVFoundation for capturing and recording audio. There are some issues I don't quite understand.
Basically I want to capture audio from AVCaptureSession and write it using AVWriter; however, I need some shifting in the timestamp of the CMSampleBuffer I get from AVCaptureSession. Reading the documentation of CMSampleBuffer, I see two different timestamp terms: 'presentation timestamp' and 'output presentation timestamp'. What is the difference between the two?
Let's say I get a CMSampleBuffer (for audio) instance from AVCaptureSession and I want to write it to a file using AVWriter; what function should I use to 'inject' a CMTime into the buffer in order to set its presentation timestamp in the resulting file?
Thanks.
Use CMSampleBufferGetPresentationTimeStamp; that is the time when the buffer was captured and should be "presented" at during playback to be in sync. To quote session 520 at WWDC 2012: "Presentation time is the time at which the first sample in the buffer was picked up by the microphone".
If you start the AVWriter with
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
and then append samples with
if(videoWriterInput.readyForMoreMediaData) [videoWriterInput appendSampleBuffer:sampleBuffer];
the frames in the finished video will be consistent with CMSampleBufferGetPresentationTimeStamp (I have checked). If you want to modify the time when adding samples, you have to use AVAssetWriterInputPixelBufferAdaptor.
Chunk of sample code from here: http://www.gdcl.co.uk/2013/02/20/iPhone-Pause.html
CMSampleBufferRef sample is your sample buffer, CMSampleBufferRef sout is your output, and newTimeStamp is your new timestamp.
CMItemCount count;
CMTime newTimeStamp = CMTimeMake(YOURTIME_GOES_HERE);
CMSampleBufferGetSampleTimingInfoArray(sample, 0, nil, &count);
CMSampleTimingInfo* pInfo = malloc(sizeof(CMSampleTimingInfo) * count);
CMSampleBufferGetSampleTimingInfoArray(sample, count, pInfo, &count);
for (CMItemCount i = 0; i < count; i++)
{
pInfo[i].decodeTimeStamp = newTimeStamp; // kCMTimeInvalid if in sequence
pInfo[i].presentationTimeStamp = newTimeStamp;
}
CMSampleBufferRef sout;
CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, sample, count, pInfo, &sout);
free(pInfo);
