Distorted (robotic) audio recording on iOS using AudioQueue

I'm using AudioQueue on iOS to stream recorded audio to the web.
The problem is that it usually works quite well, but sometimes (~20% of my attempts) the sound is horribly distorted - it sounds robotic.
Edit:
I am able to reproduce it quite easily on the iOS 6 and iOS 6.1 simulators, but I wasn't able to reproduce it on a real phone (iOS 6.1.3).
Attempting to debug it, I save the PCM data to a file. The same distortion appears in the PCM file, so this is not a problem in the encoding or upload code. I've also tried playing with the number of buffers and the size of the buffers - nothing helped.
The problem is that I don't know how to debug it further - the buffer already appears distorted as input to the callback, before any of my code runs (except for the audio queue configuration).
Do you have any ideas what the problem could be,
or how to debug it further?
Queue setup code:
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mSampleRate = SAMPLE_RATE; //16000.0;
audioFormat.mChannelsPerFrame = CHANNELS; //1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mFramesPerPacket = 1;
audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame * sizeof(SInt16);
audioFormat.mBytesPerPacket = audioFormat.mBytesPerFrame * audioFormat.mFramesPerPacket;
audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger
| kLinearPCMFormatFlagIsPacked;
AudioQueueNewInput(
&audioFormat,
recordCallback,
self, // userData
CFRunLoopGetMain(), // run loop
NULL, // run loop mode
0, // flags
&recordQueue);
UInt32 trueValue = true;
AudioQueueSetProperty(recordQueue,kAudioQueueProperty_EnableLevelMetering,&trueValue,sizeof (UInt32));
for (int t = 0; t < NUMBER_AUDIO_DATA_BUFFERS; ++t)
{
AudioQueueAllocateBuffer(
recordQueue,
bufferByteSize,
&recordQueueBuffers[t]);
}
for (int t = 0; t < NUMBER_AUDIO_DATA_BUFFERS; ++t)
{
AudioQueueEnqueueBuffer(
recordQueue,
recordQueueBuffers[t],
0,
NULL);
}
Start recording function:
pcmFile = [[NSOutputStream alloc] initToFileAtPath:pcmFilePath append:YES];
[pcmFile scheduleInRunLoop:[NSRunLoop currentRunLoop]
forMode:NSDefaultRunLoopMode];
[pcmFile open];
setupQueue(); // see above
AudioQueueStart(recordQueue, NULL);
Callback code:
static void recordCallback(
void* inUserData,
AudioQueueRef inAudioQueue,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp* inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription* inPacketDesc)
{
Recorder* recorder = (Recorder*) inUserData;
if (!recorder.recording)
return;
[recorder.pcmFile write:inBuffer->mAudioData maxLength:inBuffer->mAudioDataByteSize];
AudioQueueEnqueueBuffer(inAudioQueue, inBuffer, 0, NULL);
}
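Note that none of the AudioQueue calls above check their OSStatus return values; checking them is usually the first debugging step. A minimal sketch of such a check (CheckStatus is a hypothetical helper, not part of the code above):
static void CheckStatus(OSStatus status, const char* operation)
{
    if (status == noErr) return;
    // Print the error both as a number and as a four-char code (e.g. 'fmt?').
    UInt32 bigEndianCode = CFSwapInt32HostToBig((UInt32)status);
    NSLog(@"%s failed: %d ('%.4s')", operation, (int)status, (char*)&bigEndianCode);
}

// Example usage around the calls above:
// CheckStatus(AudioQueueNewInput(&audioFormat, recordCallback, self,
//                                CFRunLoopGetMain(), NULL, 0, &recordQueue),
//             "AudioQueueNewInput");
// CheckStatus(AudioQueueStart(recordQueue, NULL), "AudioQueueStart");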

We had the exact same issue in SoundJS Web Audio on iOS whenever a video element was present and the cache was empty. Once cached, the problem went away. We were unable to find a fix or a workaround for the issue.
You can read the details in our community forum:
http://community.createjs.com/discussions/soundjs/162-ios-audio-distortion-when-video-element-exists
Hope that helps.

Related

Using CMSampleTimingInfo, CMSampleBuffer and AudioBufferList from raw PCM stream

I'm receiving a raw PCM stream from Google's WebRTC C++ reference implementation (a hook inserted into VoEBaseImpl::GetPlayoutData). The audio appears to be linear PCM, signed int16, but when I record it using an AVAssetWriter, the resulting audio file is highly distorted and higher in pitch.
I am assuming this is an error somewhere in the input parameters, most probably in the conversion of the stereo int16 data to an AudioBufferList and then on to a CMSampleBuffer. Is there any issue with the code below?
void RecorderImpl::RenderAudioFrame(void* audio_data, size_t number_of_frames, int sample_rate, int64_t elapsed_time_ms, int64_t ntp_time_ms) {
OSStatus status;
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = sample_rate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 2;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mChannelsPerFrame * audioFormat.mBitsPerChannel / 8;
audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket / audioFormat.mFramesPerPacket;
CMSampleTimingInfo timing = { CMTimeMake(1, sample_rate), CMTimeMake(elapsed_time_ms, 1000), kCMTimeInvalid };
CMFormatDescriptionRef format = NULL;
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, sizeof(acl), &acl, 0, NULL, NULL, &format);
if(status != 0) {
NSLog(@"Failed to create audio format description");
return;
}
CMSampleBufferRef buffer;
status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, (CMItemCount)number_of_frames, 1, &timing, 0, NULL, &buffer);
if(status != 0) {
NSLog(@"Failed to allocate sample buffer");
return;
}
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame;
bufferList.mBuffers[0].mDataByteSize = (UInt32)(number_of_frames * audioFormat.mBytesPerFrame);
bufferList.mBuffers[0].mData = audio_data;
status = CMSampleBufferSetDataBufferFromAudioBufferList(buffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, &bufferList);
if(status != 0) {
NSLog(@"Failed to convert audio buffer list into sample buffer");
return;
}
[recorder writeAudioFrames:buffer];
CFRelease(buffer);
}
For reference, the sample rate I'm receiving from WebRTC on an iPhone 6S+ / iOS 9.2 is 48kHz with 480 samples per invocation of this hook and I'm receiving data every 10 ms.
First of all, congratulations on having the temerity to create an audio CMSampleBuffer from scratch. For most, they are neither created nor destroyed, but handed down immaculate and mysterious from CoreMedia and AVFoundation.
The presentationTimeStamps in your timing info are in integral milliseconds, which cannot represent your 48kHz samples' positions in time.
Instead of CMTimeMake(elapsed_time_ms, 1000), try CMTimeMake(elapsed_frames, sample_rate), where elapsed_frames are the number of frames that you have previously written.
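A minimal sketch of that bookkeeping, assuming an elapsed_frames_ member that is reset to zero when recording starts (the name is an assumption, not from the original code):
// Assumed member variable, reset to 0 when a new recording starts:
// int64_t elapsed_frames_ = 0;

CMSampleTimingInfo timing = { CMTimeMake(1, sample_rate),
                              CMTimeMake(elapsed_frames_, sample_rate),
                              kCMTimeInvalid };
// ... create the CMSampleBuffer and hand it to the writer as before ...
elapsed_frames_ += number_of_frames;  // advance by the frames just delivered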
That would explain the distortion, but not the pitch, so make sure that the AudioStreamBasicDescription matches your AVAssetWriterInput setup. It's hard to say without seeing your AVAssetWriter code.
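For reference, a matching AVAssetWriterInput for this kind of interleaved 16-bit PCM might be configured along these lines (a sketch only; the exact values must mirror the ASBD above):
NSDictionary *settings = @{ AVFormatIDKey:                  @(kAudioFormatLinearPCM),
                            AVSampleRateKey:                @(48000),
                            AVNumberOfChannelsKey:          @(2),
                            AVLinearPCMBitDepthKey:         @(16),
                            AVLinearPCMIsFloatKey:          @NO,
                            AVLinearPCMIsBigEndianKey:      @NO,
                            AVLinearPCMIsNonInterleavedKey: @NO };
AVAssetWriterInput *audioInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                       outputSettings:settings];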
P.S. Look out for writeAudioFrames - if it's asynchronous, you'll have problems with ownership of the audio_data.
P.P.S. It looks like you're leaking the CMFormatDescriptionRef.
I ended up opening the generated audio file in Audacity and saw that half of every frame had been dropped, producing a rather bizarre-looking waveform.
Changing acl.mChannelLayoutTag to kAudioChannelLayoutTag_Mono and changing audioFormat.mChannelsPerFrame to 1 solved the issue and now the audio quality is perfect. Hooray!

error converting AudioBufferList to CMBlockBufferRef

I am trying to take a video file, read it in using AVAssetReader, and pass the audio off to Core Audio for processing (adding effects and such) before saving it back out to disk using AVAssetWriter. I would like to point out that if I set the componentSubType on the AudioComponentDescription of my output node to RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is properly set up, as I can hear things working. I am setting the subType to GenericOutput, though, so I can do the rendering myself and get back the adjusted audio.
I read in the audio and pass the CMSampleBufferRef off to copyBuffer. This puts the audio into a circular buffer that will be read later.
- (void)copyBuffer:(CMSampleBufferRef)buf {
if (_readyForMoreBytes == NO)
{
return;
}
AudioBufferList abl;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);
BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);
if (!bytesCopied){
// No room in the circular buffer: stop accepting bytes and stash this chunk in a rescue buffer.
_readyForMoreBytes = NO;
if (size > kRescueBufferSize){
NSLog(@"Unable to allocate enough space for rescue buffer, dropping audio frame");
} else {
if (rescueBuffer == nil) {
rescueBuffer = malloc(kRescueBufferSize);
}
rescueBufferSize = size;
memcpy(rescueBuffer, abl.mBuffers[0].mData, size);
}
}
CFRelease(blockBuffer);
if (!self.hasBuffer && bytesCopied > 0)
{
self.hasBuffer = YES;
}
}
Next I call processOutput. This does a manual render on the outputUnit. When AudioUnitRender is called it invokes the playbackCallback below, which is what is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList that was passed in. Like I said before, if the output is set to RemoteIO, this causes the audio to be played correctly on the speakers. When AudioUnitRender finishes, it returns noErr and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).
-(CMSampleBufferRef)processOutput
{
if(self.offline == NO)
{
return NULL;
}
AudioUnitRenderActionFlags flags = 0;
AudioTimeStamp inTimeStamp;
memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
UInt32 busNumber = 0;
UInt32 numberFrames = 512;
inTimeStamp.mSampleTime = 0;
UInt32 channelCount = 2;
AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));
bufferList->mNumberBuffers = channelCount;
for (int j=0; j<channelCount; j++)
{
AudioBuffer buffer = {0};
buffer.mNumberChannels = 1;
buffer.mDataByteSize = numberFrames*sizeof(SInt32);
buffer.mData = calloc(numberFrames,sizeof(SInt32));
bufferList->mBuffers[j] = buffer;
}
CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");
CMSampleBufferRef sampleBufferRef = NULL;
CMFormatDescriptionRef format = NULL;
CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };
AudioStreamBasicDescription audioFormat = self.audioFormat;
CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");
CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");
CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");
return sampleBufferRef;
}
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
int numberOfChannels = ioData->mBuffers[0].mNumberChannels;
SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;
// zero the output buffer in case there is nothing to copy into it
memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);
MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;
if (p.hasBuffer){
int32_t availableBytes;
SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);
int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;
int bytesToRead = MIN(availableBytes, requestedBytesSize);
memcpy(outSample, bufferTail, bytesToRead);
TPCircularBufferConsume([p getBuffer], bytesToRead);
if (availableBytes <= requestedBytesSize*2){
[p setReadyForMoreBytes];
}
if (availableBytes <= requestedBytesSize) {
p.hasBuffer = NO;
}
}
return noErr;
}
The CMSampleBufferRef I pass in looks valid (below is a dump of the object from the debugger)
CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180
invalid = NO
dataReady = NO
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {
mediaType:'soun'
mediaSubType:'lpcm'
mediaSpecific: {
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'lpcm'
mFormatFlags: 0xc2c
mBytesPerPacket: 2
mFramesPerPacket: 1
mBytesPerFrame: 2
mChannelsPerFrame: 1
mBitsPerChannel: 16 }
cookie: {(null)}
ACL: {(null)}
}
extensions: {(null)}
}
sbufToTrackReadiness = 0x0
numSamples = 512
sampleTimingArray[1] = {
{PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},
}
dataBuffer = 0x0
The buffer list looks like this
Printing description of bufferList:
(AudioBufferList *) bufferList = 0x00007f87d280b0a0
Printing description of bufferList->mNumberBuffers:
(UInt32) mNumberBuffers = 2
Printing description of bufferList->mBuffers:
(AudioBuffer [1]) mBuffers = {
[0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)
}
Really at a loss here, hoping someone can help. Thanks.
In case it matters, I am debugging this in the iOS 8.3 simulator, and the audio comes from an MP4 that I shot on my iPhone 6 and then saved to my laptop.
I have read the following questions, but still to no avail; things are not working:
How to convert AudioBufferList to CMSampleBuffer?
Converting an AudioBufferList to a CMSampleBuffer Produces Unexpected Results
CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731
core audio offline rendering GenericOutput
UPDATE
I poked around some more and noticed that my AudioBufferList, right before AudioUnitRender runs, looks like this:
bufferList->mNumberBuffers = 2,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 2048
mDataByteSize is numberFrames*sizeof(SInt32), which is 512 * 4. When I look at the AudioBufferList passed into playbackCallback, the list looks like this:
bufferList->mNumberBuffers = 1,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 1024
I'm not really sure where that other buffer is going, or where the other 1024 bytes went...
If, when I'm finished calling AudioUnitRender, I do something like this:
AudioBufferList newbuff;
newbuff.mNumberBuffers = 1;
newbuff.mBuffers[0] = bufferList->mBuffers[0];
newbuff.mBuffers[0].mDataByteSize = 1024;
and pass newbuff off to CMSampleBufferSetDataBufferFromAudioBufferList, the error goes away.
If I try setting up the bufferList with mNumberBuffers of 1, or with its mDataByteSize set to numberFrames*sizeof(SInt16), I get a -50 when calling AudioUnitRender.
UPDATE 2
I hooked up a render callback so I can inspect the output when I play the sound over the speakers. I noticed that the output going to the speakers also has an AudioBufferList with 2 buffers, and that the mDataByteSize during the input callback is 1024 while in the render callback it's 2048, which is the same as I have been seeing when manually calling AudioUnitRender. When I inspect the data in the rendered AudioBufferList, I notice that the bytes in the 2 buffers are the same, which means I can just ignore the second buffer. But I am not sure how to handle the fact that the data is 2048 bytes in size after being rendered instead of the 1024 it's taken in as. Any ideas on why that could be happening? Is it in more of a raw form after going through the audio graph, and is that why the size is doubling?
It sounds like the issue you're dealing with is caused by a discrepancy in the number of channels. The reason you're seeing data in blocks of 2048 instead of 1024 is that it is feeding you back two channels (stereo). Check to make sure all of your audio units are properly configured to use mono throughout the entire audio graph, including the Pitch Unit and any audio format descriptions.
One thing to especially beware of is that calls to AudioUnitSetProperty can fail - so be sure to wrap those in CheckError() as well.
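A sketch of what configuring one unit for mono might look like (the exact ASBD values are assumptions and must match whatever mono format the rest of the graph uses):
AudioStreamBasicDescription monoFormat = {0};
monoFormat.mSampleRate       = 44100.0;
monoFormat.mFormatID         = kAudioFormatLinearPCM;
monoFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
monoFormat.mChannelsPerFrame = 1;   // mono throughout the graph
monoFormat.mBitsPerChannel   = 16;
monoFormat.mFramesPerPacket  = 1;
monoFormat.mBytesPerFrame    = 2;
monoFormat.mBytesPerPacket   = 2;

// AudioUnitSetProperty can fail, so wrap it in CheckError() too;
// repeat for the other units and scopes in the graph.
CheckError(AudioUnitSetProperty(outputUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0,
                                &monoFormat, sizeof(monoFormat)),
           @"set outputUnit input stream format");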

Playing .caf audio file with ExtAudioFileRead is not sync

My app does heavy data processing on audio coming from the mic input.
In order to get a "demo mode", I want to do the same thing based on a local .caf audio file.
I managed to get the audio file.
Now I am trying to use ExtAudioFileRead to read the .caf file and then do the data processing.
void readFile()
{
OSStatus err = noErr;
// Audio file
NSURL *path = [[NSBundle mainBundle] URLForResource:@"output" withExtension:@"caf"];
ExtAudioFileOpenURL((__bridge CFURLRef)path, &audio->audiofile);
assert(audio->audiofile);
// File's format.
AudioStreamBasicDescription fileFormat;
UInt32 size = sizeof(fileFormat);
err = ExtAudioFileGetProperty(audio->audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);
// tell the ExtAudioFile API what format we want samples back in
//bzero(&audio->clientFormat, sizeof(audio->clientFormat));
audio->clientFormat.mSampleRate = SampleRate;
audio->clientFormat.mFormatID = kAudioFormatLinearPCM;
audio->clientFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audio->clientFormat.mFramesPerPacket = 1;
audio->clientFormat.mChannelsPerFrame = 1;
audio->clientFormat.mBitsPerChannel = 16;//sizeof(AudioSampleType) * 8;
audio->clientFormat.mBytesPerPacket = 2 * audio->clientFormat.mChannelsPerFrame;
audio->clientFormat.mBytesPerFrame = 2 * audio->clientFormat.mChannelsPerFrame;
err = ExtAudioFileSetProperty(audio->audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(audio->clientFormat), &audio->clientFormat);
// find out how many frames we need to read
SInt64 numFrames = 0;
size = sizeof(numFrames);
err = ExtAudioFileGetProperty(audio->audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);
// create the buffers for reading in data
AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (audio->clientFormat.mChannelsPerFrame - 1));
bufferList->mNumberBuffers = audio->clientFormat.mChannelsPerFrame;
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii)
{
bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * (int)numFrames;
bufferList->mBuffers[ii].mNumberChannels = 1;
bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
}
UInt32 maxReadFrames = 1024;
UInt32 rFrames = (UInt32)numFrames;
while(rFrames > 0)
{
UInt32 framesToRead = (maxReadFrames > rFrames) ? rFrames : maxReadFrames;
err = ExtAudioFileRead(audio->audiofile, &framesToRead, bufferList);
[audio processAudio:bufferList];
if (rFrames % SampleRate == 0)
[audio realtimeUpdate:nil];
rFrames = rFrames - maxReadFrames;
}
// Close the file
ExtAudioFileDispose(audio->audiofile);
// destroy the buffers
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii)
{
free(bufferList->mBuffers[ii].mData);
}
free(bufferList);
bufferList = NULL;
}
There is clearly something that I did not understand, or that I am doing wrong, with ExtAudioFileRead, because this code does not work at all. I have two main problems:
The file is processed almost instantaneously; 44,100 samples clearly do not correspond to 1 second of processing time. My 3-minute audio file is processed in a few seconds...
During the processing, I need to update the UI, so I have a few dispatch_sync calls in processAudio and realtimeUpdate. ExtAudioFileRead really doesn't seem to appreciate this, and it freezes.
Thanks for your help.
The code you wrote is just reading samples from the file and then calling processAudio. This is done as fast as possible: as soon as processAudio finishes, the next batch of samples is read and processAudio is called again. You shouldn't assume that reading from an audio file (a low-level, non-blocking OS call) takes the same time that the audio would take to play.
If you want to process the audio in the file according to the sample rate you should probably use an AUFilePlayer audio unit. This can play back the file at the right speed and you can use a callback to process the samples in real audio time instead of "as fast as possible".
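If a full AUFilePlayer setup is more than the demo mode needs, a cruder alternative (just a sketch, assuming approximate pacing is good enough, and not what the answer above recommends) is to sleep the read loop so each chunk takes roughly as long as it would take to play:
// usleep() is declared in <unistd.h>
UInt32 framesToRead = (maxReadFrames > rFrames) ? rFrames : maxReadFrames;
err = ExtAudioFileRead(audio->audiofile, &framesToRead, bufferList);
[audio processAudio:bufferList];

// framesToRead now holds the number of frames actually read;
// sleep for roughly the time that much audio takes to play.
usleep((useconds_t)(1000000.0 * framesToRead / SampleRate));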

Corrupt recording with repeating audio in iOS

My application records streaming audio on the iPhone. My problem is that a small percentage (~2%) of the recordings are corrupted: they appear to have some audio buffers duplicated.
For example, listen to this file.
Edit: A surprising thing is that looking closely at the data in Audacity shows the repeating parts are very, very similar but not identical. Since FLAC (the format I use for encoding the audio) is a lossless compression, I guess this is not a bug in the streaming/encoding but that the problem originates in the data that comes from the microphone!
Below is the code I use to setup the audio recording streaming - is there anything wrong with it?
// see functions implementation below
- (void)startRecording
{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
, ^{
[self setUpRecordQueue];
[self setUpRecordQueueBuffers];
[self primeRecordQueueBuffers];
AudioQueueStart(recordQueue, NULL);
});
}
// this is called only once before any recording takes place
- (void)setUpAudioFormat
{
AudioSessionInitialize(
NULL,
NULL,
nil,
(__bridge void *)(self)
);
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(
kAudioSessionProperty_AudioCategory,
sizeof(sessionCategory),
&sessionCategory
);
AudioSessionSetActive(true);
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mSampleRate = SAMPLE_RATE;//16000.0;
audioFormat.mChannelsPerFrame = CHANNELS;//1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mFramesPerPacket = 1;
audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame * sizeof(SInt16);
audioFormat.mBytesPerPacket = audioFormat.mBytesPerFrame * audioFormat.mFramesPerPacket;
audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
bufferNumPackets = 2048; // must be power of 2 for FFT!
bufferByteSize = [self byteSizeForNumPackets:bufferNumPackets];
}
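The byteSizeForNumPackets: helper isn't shown in the question; for a packed linear PCM format like this one it is presumably just the packet count times mBytesPerPacket, something like:
// Hypothetical reconstruction of the helper (not from the original post).
- (UInt32)byteSizeForNumPackets:(UInt32)numPackets
{
    // For linear PCM one packet is one frame, so this is
    // numPackets * channels * bytes per sample.
    return numPackets * audioFormat.mBytesPerPacket;
}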
// I suspect the duplicate buffers arrive here:
static void recordCallback(
void* inUserData,
AudioQueueRef inAudioQueue,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp* inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription* inPacketDesc)
{
Recorder* recorder = (__bridge Recorder*) inUserData;
if (inNumPackets > 0)
{
// append the buffer to FLAC encoder
[recorder recordedBuffer:inBuffer->mAudioData byteSize:inBuffer->mAudioDataByteSize packetsNum:inNumPackets];
}
AudioQueueEnqueueBuffer(inAudioQueue, inBuffer, 0, NULL);
}
- (void)setUpRecordQueue
{
OSStatus errorStatus = AudioQueueNewInput(
&audioFormat,
recordCallback,
(__bridge void *)(self), // userData
CFRunLoopGetMain(), // run loop
NULL, // run loop mode
0, // flags
&recordQueue);
UInt32 trueValue = true;
AudioQueueSetProperty(recordQueue,kAudioQueueProperty_EnableLevelMetering,&trueValue,sizeof (UInt32));
}
- (void)setUpRecordQueueBuffers
{
for (int t = 0; t < NUMBER_AUDIO_DATA_BUFFERS; ++t)
{
OSStatus errorStatus = AudioQueueAllocateBuffer(
recordQueue,
bufferByteSize,
&recordQueueBuffers[t]);
}
}
- (void)primeRecordQueueBuffers
{
for (int t = 0; t < NUMBER_AUDIO_DATA_BUFFERS; ++t)
{
OSStatus errorStatus = AudioQueueEnqueueBuffer(
recordQueue,
recordQueueBuffers[t],
0,
NULL);
}
}
Turns out there was a rare bug that allowed multiple recordings to start at nearly the same time, so two recordings ran in parallel but sent their audio buffers to the same callback, producing the distorted, repeating buffers in the encoded recordings...
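For anyone hitting something similar, a minimal sketch of the kind of guard that prevents the double start (the _isRecording flag and the serial _recordingQueue are assumptions, not the actual fix from the project):
// Assumed instance variables:
//   BOOL _isRecording;
//   dispatch_queue_t _recordingQueue;  // a serial queue, so the check can't race
- (void)startRecording
{
    dispatch_async(_recordingQueue, ^{
        if (_isRecording) {
            return;  // a recording is already in progress; don't start a second queue
        }
        _isRecording = YES;
        [self setUpRecordQueue];
        [self setUpRecordQueueBuffers];
        [self primeRecordQueueBuffers];
        AudioQueueStart(recordQueue, NULL);
    });
}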

Can I use AVCaptureSession to encode an AAC stream to memory?

I'm writing an iOS app that streams video and audio over the network.
I am using AVCaptureSession to grab raw video frames using AVCaptureVideoDataOutput and encode them in software using x264. This works great.
I wanted to do the same for audio, only I don't need that much control on the audio side, so I wanted to use the built-in hardware encoder to produce an AAC stream. This meant using Audio Converter Services from the Audio Toolbox layer. In order to do so, I put in a handler for AVCaptureAudioDataOutput's audio frames:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
// get the audio samples into a common buffer _pcmBuffer
CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
CMBlockBufferGetDataPointer(blockBuffer, 0, NULL, &_pcmBufferSize, &_pcmBuffer);
// use AudioConverter to encode the PCM samples to AAC
UInt32 ouputPacketsCount = 1;
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = 1;
bufferList.mBuffers[0].mDataByteSize = sizeof(_aacBuffer);
bufferList.mBuffers[0].mData = _aacBuffer;
OSStatus st = AudioConverterFillComplexBuffer(_converter, converter_callback, (__bridge void *) self, &ouputPacketsCount, &bufferList, NULL);
if (0 == st) {
// ... send bufferList.mBuffers[0].mDataByteSize bytes from _aacBuffer...
}
}
In this case the callback function for the audio converter is pretty simple (assuming packet sizes and counts are set up properly):
- (void) putPcmSamplesInBufferList:(AudioBufferList *)bufferList withCount:(UInt32 *)count
{
bufferList->mBuffers[0].mData = _pcmBuffer;
bufferList->mBuffers[0].mDataByteSize = _pcmBufferSize;
}
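The converter_callback function passed to AudioConverterFillComplexBuffer isn't shown; presumably it is an AudioConverterComplexInputDataProc that just forwards to the method above, along these lines (AudioEncoder is a hypothetical class name):
static OSStatus converter_callback(AudioConverterRef inAudioConverter,
                                   UInt32 *ioNumberDataPackets,
                                   AudioBufferList *ioData,
                                   AudioStreamPacketDescription **outDataPacketDescription,
                                   void *inUserData)
{
    // inUserData is the (__bridge void *) self passed into AudioConverterFillComplexBuffer.
    AudioEncoder *encoder = (__bridge AudioEncoder *)inUserData;
    [encoder putPcmSamplesInBufferList:ioData withCount:ioNumberDataPackets];
    return noErr;
}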
And the setup for the audio converter looks like this:
{
// ...
AudioStreamBasicDescription pcmASBD = {0};
pcmASBD.mSampleRate = ((AVAudioSession *) [AVAudioSession sharedInstance]).currentHardwareSampleRate;
pcmASBD.mFormatID = kAudioFormatLinearPCM;
pcmASBD.mFormatFlags = kAudioFormatFlagsCanonical;
pcmASBD.mChannelsPerFrame = 1;
pcmASBD.mBytesPerFrame = sizeof(AudioSampleType);
pcmASBD.mFramesPerPacket = 1;
pcmASBD.mBytesPerPacket = pcmASBD.mBytesPerFrame * pcmASBD.mFramesPerPacket;
pcmASBD.mBitsPerChannel = 8 * pcmASBD.mBytesPerFrame;
AudioStreamBasicDescription aacASBD = {0};
aacASBD.mFormatID = kAudioFormatMPEG4AAC;
aacASBD.mSampleRate = pcmASBD.mSampleRate;
aacASBD.mChannelsPerFrame = pcmASBD.mChannelsPerFrame;
size = sizeof(aacASBD);
AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &aacASBD);
AudioConverterNew(&pcmASBD, &aacASBD, &_converter);
// ...
}
This seems pretty straightforward, only it DOES NOT WORK. Once the AVCaptureSession is running, the audio converter (specifically AudioConverterFillComplexBuffer) returns an 'hwiu' (hardware in use) error. Conversion works fine if the session is stopped, but then I can't capture anything...
I was wondering if there was a way to get an AAC stream out of AVCaptureSession. The options I'm considering are:
Somehow using AVAssetWriterInput to encode audio samples into AAC and then get the encoded packets somehow (not through AVAssetWriter, which would only write to a file).
Reorganizing my app so that it uses AVCaptureSession only on the video side and uses Audio Queues on the audio side. This will make flow control (starting and stopping recording, responding to interruptions) more complicated and I'm afraid that it might cause synching problems between the audio and video. Also, it just doesn't seem like a good design.
Does anyone know if getting the AAC out of AVCaptureSession is possible? Do I have to use Audio Queues here? Could this get me into synching or control problems?
I ended up asking Apple for advice (it turns out you can do that if you have a paid developer account).
It seems that AVCaptureSession grabs hold of the AAC hardware encoder but only lets you use it to write directly to a file.
You can use the software encoder, but you have to ask for it specifically by using AudioConverterNewSpecific instead of AudioConverterNew:
AudioClassDescription *description = [self
getAudioClassDescriptionWithType:kAudioFormatMPEG4AAC
fromManufacturer:kAppleSoftwareAudioCodecManufacturer];
if (!description) {
return false;
}
// see the question for setting up pcmASBD and aacASBD
OSStatus st = AudioConverterNewSpecific(&pcmASBD, &aacASBD, 1, description, &_converter);
if (st) {
NSLog(@"error creating audio converter: %s", OSSTATUS(st));
return false;
}
with
- (AudioClassDescription *)getAudioClassDescriptionWithType:(UInt32)type
fromManufacturer:(UInt32)manufacturer
{
static AudioClassDescription desc;
UInt32 encoderSpecifier = type;
OSStatus st;
UInt32 size;
st = AudioFormatGetPropertyInfo(kAudioFormatProperty_Encoders,
sizeof(encoderSpecifier),
&encoderSpecifier,
&size);
if (st) {
NSLog(@"error getting audio format property info: %s", OSSTATUS(st));
return nil;
}
unsigned int count = size / sizeof(AudioClassDescription);
AudioClassDescription descriptions[count];
st = AudioFormatGetProperty(kAudioFormatProperty_Encoders,
sizeof(encoderSpecifier),
&encoderSpecifier,
&size,
descriptions);
if (st) {
NSLog(@"error getting audio format property: %s", OSSTATUS(st));
return nil;
}
for (unsigned int i = 0; i < count; i++) {
if ((type == descriptions[i].mSubType) &&
(manufacturer == descriptions[i].mManufacturer)) {
memcpy(&desc, &(descriptions[i]), sizeof(desc));
return &desc;
}
}
return nil;
}
The software encoder will take up CPU resources, of course, but will get the job done.
