Error converting AudioBufferList to CMBlockBufferRef - iOS

I am trying to take a video file, read it in using AVAssetReader, and pass the audio off to Core Audio for processing (adding effects and such) before saving it back out to disk using AVAssetWriter. I would like to point out that if I set the componentSubType on the AudioComponentDescription of my output node to RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is set up properly, since I can hear it working. I am setting the subType to GenericOutput, though, so I can do the rendering myself and get back the adjusted audio.
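For reference, the output node is described roughly like this (variable names here are illustrative, not my exact code):

// Sketch of the output node description used for offline rendering.
// `processingGraph` is a placeholder for my AUGraph; swapping the
// componentSubType to kAudioUnitSubType_RemoteIO is what makes it play
// through the speakers instead.
AudioComponentDescription outputDescription = {0};
outputDescription.componentType         = kAudioUnitType_Output;
outputDescription.componentSubType      = kAudioUnitSubType_GenericOutput;
outputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;

AUNode outputNode;
CheckError(AUGraphAddNode(processingGraph, &outputDescription, &outputNode),
           @"AUGraphAddNode GenericOutput");
CheckError(AUGraphNodeInfo(processingGraph, outputNode, NULL, &outputUnit),
           @"AUGraphNodeInfo GenericOutput");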
I read in the audio and pass each CMSampleBufferRef off to copyBuffer, which puts the audio into a circular buffer to be read later.
- (void)copyBuffer:(CMSampleBufferRef)buf {
    if (_readyForMoreBytes == NO)
    {
        return;
    }

    AudioBufferList abl;
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);

    UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);
    BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);

    if (!bytesCopied){
        // The circular buffer is full, so stop accepting data and stash this frame in the rescue buffer.
        _readyForMoreBytes = NO;

        if (size > kRescueBufferSize){
            NSLog(@"Unable to allocate enough space for rescue buffer, dropping audio frame");
        } else {
            if (rescueBuffer == nil) {
                rescueBuffer = malloc(kRescueBufferSize);
            }

            rescueBufferSize = size;
            memcpy(rescueBuffer, abl.mBuffers[0].mData, size);
        }
    }

    CFRelease(blockBuffer);

    if (!self.hasBuffer && bytesCopied > 0)
    {
        self.hasBuffer = YES;
    }
}
Next I call processOutput. This does a manual render on the outputUnit. When AudioUnitRender is called it invokes the playbackCallback below, which is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList that is passed in. Like I said before, if the output is set to RemoteIO this causes the audio to be played correctly on the speakers. When AudioUnitRender finishes, it returns noErr and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).
-(CMSampleBufferRef)processOutput
{
    if(self.offline == NO)
    {
        return NULL;
    }

    AudioUnitRenderActionFlags flags = 0;
    AudioTimeStamp inTimeStamp;
    memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
    inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    UInt32 busNumber = 0;
    UInt32 numberFrames = 512;
    inTimeStamp.mSampleTime = 0;
    UInt32 channelCount = 2;

    AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));
    bufferList->mNumberBuffers = channelCount;
    for (int j=0; j<channelCount; j++)
    {
        AudioBuffer buffer = {0};
        buffer.mNumberChannels = 1;
        buffer.mDataByteSize = numberFrames*sizeof(SInt32);
        buffer.mData = calloc(numberFrames,sizeof(SInt32));
        bufferList->mBuffers[j] = buffer;
    }

    CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");

    CMSampleBufferRef sampleBufferRef = NULL;
    CMFormatDescriptionRef format = NULL;
    CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };
    AudioStreamBasicDescription audioFormat = self.audioFormat;
    CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");
    CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");
    CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");

    return sampleBufferRef;
}
static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    int numberOfChannels = ioData->mBuffers[0].mNumberChannels;
    SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;

    // Zero the output first, then fill it from the circular buffer.
    memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);

    MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;

    if (p.hasBuffer){
        int32_t availableBytes;
        SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);

        int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;
        int bytesToRead = MIN(availableBytes, requestedBytesSize);
        memcpy(outSample, bufferTail, bytesToRead);
        TPCircularBufferConsume([p getBuffer], bytesToRead);

        if (availableBytes <= requestedBytesSize*2){
            [p setReadyForMoreBytes];
        }

        if (availableBytes <= requestedBytesSize) {
            p.hasBuffer = NO;
        }
    }

    return noErr;
}
The CMSampleBufferRef I pass in looks valid (below is a dump of the object from the debugger)
CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180
invalid = NO
dataReady = NO
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {
mediaType:'soun'
mediaSubType:'lpcm'
mediaSpecific: {
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'lpcm'
mFormatFlags: 0xc2c
mBytesPerPacket: 2
mFramesPerPacket: 1
mBytesPerFrame: 2
mChannelsPerFrame: 1
mBitsPerChannel: 16 }
cookie: {(null)}
ACL: {(null)}
}
extensions: {(null)}
}
sbufToTrackReadiness = 0x0
numSamples = 512
sampleTimingArray[1] = {
{PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},
}
dataBuffer = 0x0
The buffer list looks like this
Printing description of bufferList:
(AudioBufferList *) bufferList = 0x00007f87d280b0a0
Printing description of bufferList->mNumberBuffers:
(UInt32) mNumberBuffers = 2
Printing description of bufferList->mBuffers:
(AudioBuffer [1]) mBuffers = {
[0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)
}
I'm really at a loss here, hoping someone can help. Thanks.
In case it matters, I am debugging this in the iOS 8.3 simulator, and the audio comes from an mp4 that I shot on my iPhone 6 and then saved to my laptop.
I have read the following questions, but to no avail; things are still not working:
How to convert AudioBufferList to CMSampleBuffer?
Converting an AudioBufferList to a CMSampleBuffer Produces Unexpected Results
CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731
core audio offline rendering GenericOutput
UPDATE
I poked around some more and noticed that my AudioBufferList, right before AudioUnitRender runs, looks like this:
bufferList->mNumberBuffers = 2,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 2048
mDataByteSize is numberFrames*sizeof(SInt32), which is 512 * 4. When I look at the AudioBufferList passed into playbackCallback, the list looks like this:
bufferList->mNumberBuffers = 1,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 1024
I'm not really sure where that other buffer is going, or the other 1024 bytes...
If, after AudioUnitRender finishes, I do something like this:
AudioBufferList newbuff;
newbuff.mNumberBuffers = 1;
newbuff.mBuffers[0] = bufferList->mBuffers[0];
newbuff.mBuffers[0].mDataByteSize = 1024;
and pass newbuff off to CMSampleBufferSetDataBufferFromAudioBufferList, the error goes away.
If I try setting up the BufferList with mNumberBuffers of 1, or with its mDataByteSize set to numberFrames*sizeof(SInt16), I get a -50 when calling AudioUnitRender.
UPDATE 2
I hooked up a render callback so I can inspect the output when I play the sound over the speakers. I noticed that the output that goes to the speakers also has an AudioBufferList with 2 buffers, and that mDataByteSize during the input callback is 1024 while in the render callback it's 2048, which is the same as I have been seeing when manually calling AudioUnitRender. When I inspect the data in the rendered AudioBufferList, I notice that the bytes in the 2 buffers are the same, which means I can just ignore the second buffer. But I am not sure how to handle the fact that the data is 2048 bytes in size after being rendered instead of the 1024 being taken in. Any ideas on why that could be happening? Is it in more of a raw form after going through the audio graph, and is that why the size is doubling?

It sounds like the issue you're dealing with is a discrepancy in the number of channels. The reason you're seeing data in blocks of 2048 instead of 1024 is that the graph is feeding you back two channels (stereo). Check that all of your audio units are configured to use mono throughout the entire audio graph, including the pitch unit and any audio format descriptions.
One thing to especially watch out for is that calls to AudioUnitSetProperty can fail, so be sure to wrap those in CheckError() as well.
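As a rough sketch (someUnit and bus are placeholders for whichever connection in your graph is still reporting stereo), forcing a mono client format and checking the result might look like this:

// Sketch: force a mono, 16-bit client format on one unit/bus and verify the call
// succeeded. `someUnit` and `bus` are placeholders; repeat for each connection
// in the graph that still reports two channels.
AudioStreamBasicDescription monoFormat = {0};
monoFormat.mSampleRate       = 44100.0;
monoFormat.mFormatID         = kAudioFormatLinearPCM;
monoFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
monoFormat.mFramesPerPacket  = 1;
monoFormat.mChannelsPerFrame = 1;
monoFormat.mBitsPerChannel   = 16;
monoFormat.mBytesPerFrame    = 2;
monoFormat.mBytesPerPacket   = 2;

CheckError(AudioUnitSetProperty(someUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input,
                                bus,
                                &monoFormat,
                                sizeof(monoFormat)),
           @"set mono stream format");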

Related

Audio Recording AudioQueueStart buffer never filled

I am using AudioQueueStart in order to start recording on an iOS device and I want all the recording data streamed to me in buffers so that I can process them and send them to a server.
Basic functionality works great; however, in my BufferFilled function I usually get fewer than 10 bytes of data on every call. This feels very inefficient, especially since I have tried to set the buffer size to 16384 bytes (see the beginning of the startRecording method).
How can I make it fill the buffer more before calling BufferFilled? Or do I need to add a second-phase buffer before sending to the server to achieve what I want?
OSStatus BufferFilled(void *aqData, SInt64 inPosition, UInt32 requestCount, const void *inBuffer, UInt32 *actualCount) {
AQRecorderState *pAqData = (AQRecorderState*)aqData;
NSData *audioData = [NSData dataWithBytes:inBuffer length:requestCount];
*actualCount = inBuffer + requestCount;
//audioData is usually < 10 bytes, sometimes 100 bytes, but never close to 16384 bytes
return 0;
}
void HandleInputBuffer(void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer, const AudioTimeStamp *inStartTime, UInt32 inNumPackets, const AudioStreamPacketDescription *inPacketDesc) {
AQRecorderState *pAqData = (AQRecorderState*)aqData;
if (inNumPackets == 0 && pAqData->mDataFormat.mBytesPerPacket != 0)
inNumPackets = inBuffer->mAudioDataByteSize / pAqData->mDataFormat.mBytesPerPacket;
if(AudioFileWritePackets(pAqData->mAudioFile, false, inBuffer->mAudioDataByteSize, inPacketDesc, pAqData->mCurrentPacket, &inNumPackets, inBuffer->mAudioData) == noErr) {
pAqData->mCurrentPacket += inNumPackets;
}
if (pAqData->mIsRunning == 0)
return;
OSStatus error = AudioQueueEnqueueBuffer(pAqData->mQueue, inBuffer, 0, NULL);
}
void DeriveBufferSize(AudioQueueRef audioQueue, AudioStreamBasicDescription *ASBDescription, Float64 seconds, UInt32 *outBufferSize) {
static const int maxBufferSize = 0x50000;
int maxPacketSize = ASBDescription->mBytesPerPacket;
if (maxPacketSize == 0) {
UInt32 maxVBRPacketSize = sizeof(maxPacketSize);
AudioQueueGetProperty(audioQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize, &maxVBRPacketSize);
}
Float64 numBytesForTime = ASBDescription->mSampleRate * maxPacketSize * seconds;
*outBufferSize = (UInt32)(numBytesForTime < maxBufferSize ? numBytesForTime : maxBufferSize);
}
OSStatus SetMagicCookieForFile (AudioQueueRef inQueue, AudioFileID inFile) {
OSStatus result = noErr;
UInt32 cookieSize;
if (AudioQueueGetPropertySize (inQueue, kAudioQueueProperty_MagicCookie, &cookieSize) == noErr) {
char* magicCookie = (char *)malloc(cookieSize);
if (AudioQueueGetProperty (inQueue, kAudioQueueProperty_MagicCookie, magicCookie, &cookieSize) == noErr)
result = AudioFileSetProperty (inFile, kAudioFilePropertyMagicCookieData, cookieSize, magicCookie);
free(magicCookie);
}
return result;
}
- (void)startRecording {
aqData.mDataFormat.mFormatID = kAudioFormatMPEG4AAC;
aqData.mDataFormat.mSampleRate = 22050.0;
aqData.mDataFormat.mChannelsPerFrame = 1;
aqData.mDataFormat.mBitsPerChannel = 0;
aqData.mDataFormat.mBytesPerPacket = 0;
aqData.mDataFormat.mBytesPerFrame = 0;
aqData.mDataFormat.mFramesPerPacket = 1024;
aqData.mDataFormat.mFormatFlags = kMPEG4Object_AAC_Main;
AudioFileTypeID fileType = kAudioFileAAC_ADTSType;
aqData.bufferByteSize = 16384;
UInt32 defaultToSpeaker = TRUE;
AudioSessionSetProperty(kAudioSessionProperty_OverrideCategoryDefaultToSpeaker, sizeof(defaultToSpeaker), &defaultToSpeaker);
OSStatus status = AudioQueueNewInput(&aqData.mDataFormat, HandleInputBuffer, &aqData, NULL, kCFRunLoopCommonModes, 0, &aqData.mQueue);
UInt32 dataFormatSize = sizeof (aqData.mDataFormat);
status = AudioQueueGetProperty(aqData.mQueue, kAudioQueueProperty_StreamDescription, &aqData.mDataFormat, &dataFormatSize);
status = AudioFileInitializeWithCallbacks(&aqData, nil, BufferFilled, nil, nil, fileType, &aqData.mDataFormat, 0, &aqData.mAudioFile);
for (int i = 0; i < kNumberBuffers; ++i) {
status = AudioQueueAllocateBuffer (aqData.mQueue, aqData.bufferByteSize, &aqData.mBuffers[i]);
status = AudioQueueEnqueueBuffer (aqData.mQueue, aqData.mBuffers[i], 0, NULL);
}
aqData.mCurrentPacket = 0;
aqData.mIsRunning = true;
status = AudioQueueStart(aqData.mQueue, NULL);
}
UPDATE: I have logged the data that I receive, and it is quite interesting; it almost seems as if half of the "packets" are some kind of header and half are sound data. Could I assume this is just how the AAC encoding on iOS works? It writes a header in one buffer, then data in the next one, and so on. And it never wants more than around 170-180 bytes for each data chunk, which is why it ignores my large buffer?
I solved this eventually. It turns out that yes, the AAC encoding on iOS produces alternating small and large chunks of data. I added a second-phase buffer myself using NSMutableData and it worked perfectly.
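Roughly, the second-phase buffer is just an NSMutableData that accumulates the small chunks until a threshold is reached; a minimal sketch, with illustrative names and threshold:

// Sketch of the second-phase buffer: accumulate the small chunks that
// BufferFilled receives and only ship them once enough data has built up.
// `pendingData`, `kUploadChunkSize`, and `sendToServer:` are illustrative.
static const NSUInteger kUploadChunkSize = 16384;

- (void)appendAudioBytes:(const void *)bytes length:(NSUInteger)length
{
    if (self.pendingData == nil) {
        self.pendingData = [NSMutableData data];
    }
    [self.pendingData appendBytes:bytes length:length];

    if (self.pendingData.length >= kUploadChunkSize) {
        [self sendToServer:[self.pendingData copy]]; // hypothetical upload method
        self.pendingData = [NSMutableData data];
    }
}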

Using CMSampleTimingInfo, CMSampleBuffer and AudioBufferList from raw PCM stream

I'm receiving a raw PCM stream from Google's WebRTC C++ reference implementation (a hook inserted into VoEBaseImpl::GetPlayoutData). The audio appears to be linear PCM, signed int16, but when I record it using an AVAssetWriter the saved audio file is highly distorted and higher pitched.
I am assuming this is an error somewhere in the input parameters, most probably in the conversion of the stereo int16 to an AudioBufferList and then on to a CMSampleBuffer. Is there any issue with the code below?
void RecorderImpl::RenderAudioFrame(void* audio_data, size_t number_of_frames, int sample_rate, int64_t elapsed_time_ms, int64_t ntp_time_ms) {
OSStatus status;
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate = sample_rate;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 2;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = audioFormat.mFramesPerPacket * audioFormat.mChannelsPerFrame * audioFormat.mBitsPerChannel / 8;
audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket / audioFormat.mFramesPerPacket;
CMSampleTimingInfo timing = { CMTimeMake(1, sample_rate), CMTimeMake(elapsed_time_ms, 1000), kCMTimeInvalid };
CMFormatDescriptionRef format = NULL;
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, sizeof(acl), &acl, 0, NULL, NULL, &format);
if(status != 0) {
NSLog(@"Failed to create audio format description");
return;
}
CMSampleBufferRef buffer;
status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, (CMItemCount)number_of_frames, 1, &timing, 0, NULL, &buffer);
if(status != 0) {
NSLog(@"Failed to allocate sample buffer");
return;
}
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mNumberChannels = audioFormat.mChannelsPerFrame;
bufferList.mBuffers[0].mDataByteSize = (UInt32)(number_of_frames * audioFormat.mBytesPerFrame);
bufferList.mBuffers[0].mData = audio_data;
status = CMSampleBufferSetDataBufferFromAudioBufferList(buffer, kCFAllocatorDefault, kCFAllocatorDefault, 0, &bufferList);
if(status != 0) {
NSLog(@"Failed to convert audio buffer list into sample buffer");
return;
}
[recorder writeAudioFrames:buffer];
CFRelease(buffer);
}
For reference, the sample rate I'm receiving from WebRTC on an iPhone 6S+ / iOS 9.2 is 48kHz with 480 samples per invocation of this hook and I'm receiving data every 10 ms.
First of all, congratulations on having the temerity to create an audio CMSampleBuffer from scratch. For most, they are neither created nor destroyed, but handed down immaculate and mysterious from CoreMedia and AVFoundation.
The presentationTimeStamps in your timing info are in integral milliseconds, which cannot represent your 48kHz samples' positions in time.
Instead of CMTimeMake(elapsed_time_ms, 1000), try CMTimeMake(elapsed_frames, sample_rate), where elapsed_frames are the number of frames that you have previously written.
That would explain the distortion, but not the pitch, so make sure that the AudioStreamBasicDescription matches your AVAssetWriterInput setup. It's hard to say without seeing your AVAssetWriter code.
P.S. Look out for writeAudioFrames: if it's asynchronous, you'll have problems with ownership of the audio_data.
P.P.S. It looks like you're leaking the CMFormatDescriptionRef.
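A rough sketch of that timing change, with total_frames_written as an illustrative counter you would maintain yourself, plus releasing the format description:

// Sketch: frame-accurate timing instead of millisecond timing, plus releasing
// the format description. `total_frames_written` is an illustrative counter
// kept by the recorder, advanced after each hook invocation.
CMSampleTimingInfo timing = {
    CMTimeMake(1, sample_rate),                    // duration of one frame
    CMTimeMake(total_frames_written, sample_rate), // PTS measured in frames
    kCMTimeInvalid
};
total_frames_written += number_of_frames;

// ... CMAudioFormatDescriptionCreate / CMSampleBufferCreate as before ...
CFRelease(format); // don't leak the CMFormatDescriptionRef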
I ended up opening the generated audio file in Audacity and saw that every frame had half of it dropped, producing a rather bizarre-looking waveform.
Changing acl.mChannelLayoutTag to kAudioChannelLayoutTag_Mono and changing audioFormat.mChannelsPerFrame to 1 solved the issue, and now the audio quality is perfect. Hooray!
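In code, those two changes are:

acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
audioFormat.mChannelsPerFrame = 1;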

AudioUnit + Opus codec = crackle issue

I am creating a VoIP app for iOS in Objective-C. Currently I am trying to create the audio part: recording the audio data from the microphone, encoding it with Opus, decoding it, and then playing it. For recording and playing I use AudioUnit. I also made a buffer implementation which allocates blocks of memory, each with an initially set size. There are three main methods:
- setBufferSize - sets the size of the buffer's sub-allocated spaces.
- writeDataToBuffer - creates a new space (if needed) and fills data into the current writing space.
- readDataFromBuffer - reads data from the current reading space.
I use the buffer to store the audio data. It works well; I've tested it. If I use it without Opus, just reading audio data, storing it in the buffer, reading it back, and then playing it, everything works great. But the problem comes when I include Opus. It does encode and decode the audio data, but the quality is not so good and there is some crackling as well. I was wondering, what am I doing wrong? Here are pieces of my code:
AudioUnit:
OSStatus status;
m_sAudioDescription.componentType = kAudioUnitType_Output;
m_sAudioDescription.componentSubType = kAudioUnitSubType_VoiceProcessingIO/*kAudioUnitSubType_RemoteIO*/;
m_sAudioDescription.componentFlags = 0;
m_sAudioDescription.componentFlagsMask = 0;
m_sAudioDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent inputComponent = AudioComponentFindNext(NULL, &m_sAudioDescription);
status = AudioComponentInstanceNew(inputComponent, &m_audioUnit);
// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(m_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Input,
VOIP_AUDIO_INPUT_ELEMENT,
&flag,
sizeof(flag));
// Enable IO for playback
status = AudioUnitSetProperty(m_audioUnit,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitScope_Output,
VOIP_AUDIO_OUTPUT_ELEMENT,
&flag,
sizeof(flag));
// Describe format
m_sAudioFormat.mSampleRate = 48000.00;//48000.00;/*44100.00*/;
m_sAudioFormat.mFormatID = kAudioFormatLinearPCM;
m_sAudioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked/* | kAudioFormatFlagsCanonical*/;
m_sAudioFormat.mFramesPerPacket = 1;
m_sAudioFormat.mChannelsPerFrame = 1;
m_sAudioFormat.mBitsPerChannel = 16; //8 * bytesPerSample
m_sAudioFormat.mBytesPerFrame = /*(UInt32)bytesPerSample;*/2; //bitsPerChannel / 8 * channelsPerFrame
m_sAudioFormat.mBytesPerPacket = 2; //bytesPerFrame * framesPerPacket
// Apply format
status = AudioUnitSetProperty(m_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
VOIP_AUDIO_INPUT_ELEMENT,
&m_sAudioFormat,
sizeof(m_sAudioFormat));
status = AudioUnitSetProperty(m_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
VOIP_AUDIO_OUTPUT_ELEMENT,
&m_sAudioFormat,
sizeof(m_sAudioFormat));
// Set input callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = inputRenderCallback;
callbackStruct.inputProcRefCon = this;
status = AudioUnitSetProperty(m_audioUnit,
kAudioOutputUnitProperty_SetInputCallback,
kAudioUnitScope_Global,
VOIP_AUDIO_INPUT_ELEMENT,
&callbackStruct,
sizeof(callbackStruct));
// Set output callback
callbackStruct.inputProc = outputRenderCallback;
callbackStruct.inputProcRefCon = this;
status = AudioUnitSetProperty(m_audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Global,
VOIP_AUDIO_OUTPUT_ELEMENT,
&callbackStruct,
sizeof(callbackStruct));
//Enable Echo cancelation:
this->_setEchoCancelation(true);
//Enable Automatic Gain control:
this->_setAGC(false);
// Initialise
status = AudioUnitInitialize(m_audioUnit);
return noErr;
Input buffer allocation and setting the sizes of the storage buffers:
void VoipAudio::_allocBuffer()
{
UInt32 numFramesPerBuffer;
UInt32 size = sizeof(/*VoipUInt32*/VoipInt16);
AudioUnitGetProperty(m_audioUnit,
kAudioUnitProperty_MaximumFramesPerSlice,
kAudioUnitScope_Global,
VOIP_AUDIO_OUTPUT_ELEMENT, &numFramesPerBuffer, &size);
UInt32 inputBufferListSize = offsetof(AudioBufferList, mBuffers[0]) + (sizeof(AudioBuffer) * m_sAudioFormat.mChannelsPerFrame);
inputBuffer = (AudioBufferList *)malloc(inputBufferListSize);
inputBuffer->mNumberBuffers = m_sAudioFormat.mChannelsPerFrame;
//pre-malloc buffers for AudioBufferLists
for(VoipUInt32 tmp_int1 = 0; tmp_int1 < inputBuffer->mNumberBuffers; tmp_int1++)
{
inputBuffer->mBuffers[tmp_int1].mNumberChannels = 1;
inputBuffer->mBuffers[tmp_int1].mDataByteSize = 2048;
inputBuffer->mBuffers[tmp_int1].mData = malloc(2048);
memset(inputBuffer->mBuffers[tmp_int1].mData, 0, 2048);
}
this->m_oAudioBuffer = new VoipBuffer();
this->m_oAudioBuffer->setBufferSize(2048);
this->m_oAudioReadBuffer = new VoipBuffer();
this->m_oAudioReadBuffer->setBufferSize(2880);
}
Record callback:
this->m_oAudioReadBuffer->writeDataToBuffer(samples, samplesSize);
void* tmp_buffer = this->m_oAudioReadBuffer->readDataFromBuffer();
if (tmp_buffer != nullptr)
{
sVoipAudioCodecOpusEncodedResult* encodedSamples = VoipAudioCodecs::Opus_Encode((VoipInt16*)tmp_buffer, 2880);
sVoipAudioCodecOpusDecodedResult* decodedSamples = VoipAudioCodecs::Opus_Decode(encodedSamples->m_data, encodedSamples->m_dataSize);
this->m_oAudioBuffer->writeDataToBuffer(decodedSamples->m_data, decodedSamples->m_dataSize);
free(encodedSamples->m_data);
free(encodedSamples);
free(decodedSamples->m_data);
free(decodedSamples);
}
Playing callback:
void* tmp_buffer = this->m_oAudioBuffer->readDataFromBuffer();
if (tmp_buffer != nullptr)
{
memset(buffer->mBuffers[0].mData, 0, 2048);
memcpy(buffer->mBuffers[0].mData, tmp_buffer, 2048);
buffer->mBuffers[0].mDataByteSize = 2048;
} else {
memset(buffer->mBuffers[0].mData, 0, 2048);
buffer->mBuffers[0].mDataByteSize = 2048;
}
Opus Init Code:
int _error = 0;
VoipAudioCodecs::m_oEncoder = opus_encoder_create(SAMPLE_RATE, CHANNELS, APPLICATION, &_error);
if (_error < 0)
{
fprintf(stderr, "VoipAudioCodecs error: failed to create an encoder: %s\n", opus_strerror(_error));
return;
}
_error = opus_encoder_ctl(VoipAudioCodecs::m_oEncoder, OPUS_SET_BITRATE(BITRATE/*OPUS_BITRATE_MAX*/));
if (_error < 0)
{
fprintf(stderr, "VoipAudioCodecs error: failed to set bitrate: %s\n", opus_strerror(_error));
return;
}
VoipAudioCodecs::m_oDecoder = opus_decoder_create(SAMPLE_RATE, CHANNELS, &_error);
if (_error < 0)
{
fprintf(stderr, "VoipAudioCodecs error: failed to create decoder: %s\n", opus_strerror(_error));
return;
}
Opus encode/decode:
sVoipAudioCodecOpusEncodedResult* VoipAudioCodecs::Opus_Encode(VoipInt16* number, int samplesCount)
{
unsigned char cbits[MAX_PACKET_SIZE];
VoipInt32 nbBytes;
nbBytes = opus_encode(VoipAudioCodecs::m_oEncoder, number, FRAME_SIZE, cbits, MAX_PACKET_SIZE);
if (nbBytes < 0)
{
fprintf(stderr, "VoipAudioCodecs error: encode failed: %s\n", opus_strerror(nbBytes));
return nullptr;
}
sVoipAudioCodecOpusEncodedResult* result = (sVoipAudioCodecOpusEncodedResult* )malloc(sizeof(sVoipAudioCodecOpusEncodedResult));
result->m_data = (unsigned char*)malloc(nbBytes);
memcpy(result->m_data, cbits, nbBytes);
result->m_dataSize = nbBytes;
return result;
}
sVoipAudioCodecOpusDecodedResult* VoipAudioCodecs::Opus_Decode(void* encoded, VoipInt32 nbBytes)
{
VoipInt16 decodedPacket[MAX_FRAME_SIZE];
int frame_size = opus_decode(VoipAudioCodecs::m_oDecoder, (const unsigned char*)encoded, nbBytes, decodedPacket, MAX_FRAME_SIZE, 0);
if (frame_size < 0)
{
fprintf(stderr, "VoipAudioCodecs error: decoder failed: %s\n", opus_strerror(frame_size));
return nullptr;
}
sVoipAudioCodecOpusDecodedResult* result = (sVoipAudioCodecOpusDecodedResult* )malloc(sizeof(sVoipAudioCodecOpusDecodedResult));
result->m_data = (VoipInt16*)malloc(frame_size / sizeof(VoipInt16));
memcpy(result->m_data, decodedPacket, (frame_size / sizeof(VoipInt16)));
result->m_dataSize = frame_size / sizeof(VoipInt16);
return result;
}
Here are some constants i use:
#define FRAME_SIZE 2880 //120, 240, 480, 960, 1920, 2880
#define SAMPLE_RATE 48000
#define CHANNELS 1
#define APPLICATION OPUS_APPLICATION_VOIP//OPUS_APPLICATION_AUDIO
#define BITRATE 64000
#define MAX_FRAME_SIZE 4096
#define MAX_PACKET_SIZE (3*1276)
Can you help me please?
Your audio callback time may need to be increased. Try increasing your session's setPreferredIOBufferDuration time. I have used Opus on iOS and have measured the decoding time: it takes 2 to 3 ms to decode about 240 frames of data. There is a good chance you are missing subsequent callbacks because it is taking too long to decode the audio.
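Something like this, where 0.02 seconds is only an illustrative starting value, not a measured requirement:

// Sketch: request a larger I/O buffer so each render callback covers more time,
// leaving headroom for the 2-3 ms Opus decode.
#import <AVFoundation/AVFoundation.h>

NSError *error = nil;
[[AVAudioSession sharedInstance] setPreferredIOBufferDuration:0.02 error:&error];
if (error != nil) {
    NSLog(@"Could not set preferred IO buffer duration: %@", error);
}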
I had the same problem in my project: iOS was giving me an unstable frame size. I tried both Audio Queue Services and Audio Units, and they gave me the same result (crackled voice).
All you have to do is save some samples into a ring buffer in the audio callback.
Then, in a separate thread, do the audio processing to build a fixed-size frame on each round.
For example:
The audio unit gives you frames or samples like this: [2048 .. 2048 .. 2048]
while the Opus codec needs 2880 frames for each packet, so you need to take 2048 from the first buffer and the remaining 832 frames from the next buffer to get a fixed frame size to send to the Opus encoder.
This is the function I used in my project:
func audioProcessing(){
DispatchQueue.global(qos: .default).async {
// this to save remain data from ring buffer
var remainData:NSMutableData = NSMutableData()
var remainDataSize = 0
while self.room_oppened{
// here we define the fixed frame we want to use in our opus encoder
var packetOffset = 0
let fixedFrameSize:Int = 5760
var dataToGetFullFrame:Int = 5760
let packetData:NSMutableData = NSMutableData(length: fixedFrameSize)!// this need to filled with data
if remainDataSize > 0 {
if remainDataSize < fixedFrameSize{
memcpy(packetData.mutableBytes.advanced(by: packetOffset), remainData.mutableBytes.advanced(by: 0), remainDataSize)// add the remain data
dataToGetFullFrame = dataToGetFullFrame - remainDataSize
packetOffset = packetOffset + remainDataSize// - 1
}else{
memcpy(packetData.mutableBytes.advanced(by: packetOffset), remainData.mutableBytes.advanced(by: 0), fixedFrameSize)// add the remain data
dataToGetFullFrame = 0
}
remainDataSize = 0
}
// if the packet not fill full, we need to get more data from circle buffer
if dataToGetFullFrame > 0 {
while dataToGetFullFrame > 0 {
let bufferData = self.ringBufferEncodedAudio.read()// read chunk of data from bufer
if bufferData != nil{
var chunkOffset = 0
if dataToGetFullFrame > bufferData!.length{
memcpy(packetData.mutableBytes.advanced(by: packetOffset) , bufferData!.mutableBytes , bufferData!.length)
chunkOffset = bufferData!.length// this how much data we read
dataToGetFullFrame = dataToGetFullFrame - bufferData!.length // how much of data we need to fill packet
packetOffset = packetOffset + bufferData!.length// + 1
}else{
memcpy(packetData.mutableBytes.advanced(by: packetOffset) , bufferData!.mutableBytes , dataToGetFullFrame)
chunkOffset = dataToGetFullFrame// this how much data we read
packetOffset = packetOffset + dataToGetFullFrame// + 1
dataToGetFullFrame = dataToGetFullFrame - dataToGetFullFrame // how much of data we need to fill packet
}
if dataToGetFullFrame <= 0 {
var size = bufferData!.length - chunkOffset
remainData = NSMutableData(bytes: bufferData?.mutableBytes.advanced(by: chunkOffset), length: size)
remainDataSize = size
}
}
usleep(useconds_t(8 * 1000))
}
}
// send packet to encoder
if self.enable_streaming {
let dataToEncode:Data = packetData as Data
let packet = OpusSwiftPort.shared.encodeData(dataToEncode)
if packet != nil{
self.sendAudioPacket(packet: packet!)// <--- this to network
}
}
}
}
}
After I did this audio processing, I got very clear audio.
I hope this was helpful for you.

Corrupt recording with repeating audio in IOS

My application records streaming audio on iPhone. My problem is that a small percentage (~2%) of the recordings are corrupted. They appear to have some audio buffers duplicated.
For example listen to this file.
Edit: A surprising thing is that looking closely at the data in Audacity shows the repeating parts are very, very similar but not identical. Since FLAC (the format I use for encoding the audio) is a lossless compression, I guess this is not a bug in the streaming/encoding; the problem originates in the data that comes from the microphone!
Below is the code I use to setup the audio recording streaming - is there anything wrong with it?
// see functions implementation below
- (void)startRecording
{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
, ^{
[self setUpRecordQueue];
[self setUpRecordQueueBuffers];
[self primeRecordQueueBuffers];
AudioQueueStart(recordQueue, NULL);
});
}
// this is called only once before any recording takes place
- (void)setUpAudioFormat
{
AudioSessionInitialize(
NULL,
NULL,
nil,
(__bridge void *)(self)
);
UInt32 sessionCategory = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(
kAudioSessionProperty_AudioCategory,
sizeof(sessionCategory),
&sessionCategory
);
AudioSessionSetActive(true);
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mSampleRate = SAMPLE_RATE;//16000.0;
audioFormat.mChannelsPerFrame = CHANNELS;//1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mFramesPerPacket = 1;
audioFormat.mBytesPerFrame = audioFormat.mChannelsPerFrame * sizeof(SInt16);
audioFormat.mBytesPerPacket = audioFormat.mBytesPerFrame * audioFormat.mFramesPerPacket;
audioFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
bufferNumPackets = 2048; // must be power of 2 for FFT!
bufferByteSize = [self byteSizeForNumPackets:bufferNumPackets];
}
// I suspect the duplicate buffers arrive here:
static void recordCallback(
void* inUserData,
AudioQueueRef inAudioQueue,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp* inStartTime,
UInt32 inNumPackets,
const AudioStreamPacketDescription* inPacketDesc)
{
Recorder* recorder = (__bridge Recorder*) inUserData;
if (inNumPackets > 0)
{
// append the buffer to FLAC encoder
[recorder recordedBuffer:inBuffer->mAudioData byteSize:inBuffer->mAudioDataByteSize packetsNum:inNumPackets];
}
AudioQueueEnqueueBuffer(inAudioQueue, inBuffer, 0, NULL);
}
- (void)setUpRecordQueue
{
OSStatus errorStatus = AudioQueueNewInput(
&audioFormat,
recordCallback,
(__bridge void *)(self), // userData
CFRunLoopGetMain(), // run loop
NULL, // run loop mode
0, // flags
&recordQueue);
UInt32 trueValue = true;
AudioQueueSetProperty(recordQueue,kAudioQueueProperty_EnableLevelMetering,&trueValue,sizeof (UInt32));
}
- (void)setUpRecordQueueBuffers
{
for (int t = 0; t < NUMBER_AUDIO_DATA_BUFFERS; ++t)
{
OSStatus errorStatus = AudioQueueAllocateBuffer(
recordQueue,
bufferByteSize,
&recordQueueBuffers[t]);
}
}
- (void)primeRecordQueueBuffers
{
for (int t = 0; t < NUMBER_AUDIO_DATA_BUFFERS; ++t)
{
OSStatus errorStatus = AudioQueueEnqueueBuffer(
recordQueue,
recordQueueBuffers[t],
0,
NULL);
}
}
It turns out there was a rare bug allowing multiple recordings to start at nearly the same time, so two recordings ran in parallel but sent their audio buffers to the same callback, producing the distorted, repeating buffers in the encoded recordings...
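The fix was to make sure only one recording can ever be in flight. A minimal sketch of such a guard (the _isRecording flag is illustrative, not from my original code):

// Sketch: guard startRecording so only one recording can ever be in flight.
// `_isRecording` is an illustrative ivar added for this example.
- (void)startRecording
{
    @synchronized (self) {
        if (_isRecording) {
            return; // a recording is already running; don't start a second one
        }
        _isRecording = YES;
    }

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [self setUpRecordQueue];
        [self setUpRecordQueueBuffers];
        [self primeRecordQueueBuffers];
        AudioQueueStart(recordQueue, NULL);
    });
}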

Does the Remote I/O audio unit set the number of channels in the buffer?

I'm using kxmovie (an ffmpeg-based video player) as a base for an app, and I'm trying to figure out how the Remote I/O unit works on iOS when the only thing connected to the device is headphones and we're playing a track with more than 2 channels (say, a 6-channel surround track). It seems to go with the output channel setting, and the buffer only has 2 channels. Is this because of Core Audio's pull structure? And if so, what's happening to the other channels in the track? Are they being downmixed or simply ignored?
The code for the render callback connected to the remoteio unit is here:
- (BOOL) renderFrames: (UInt32) numFrames
ioData: (AudioBufferList *) ioData
{
NSLog(@"Number of channels in buffer: %lu", ioData->mNumberBuffers);
for (int iBuffer=0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
memset(ioData->mBuffers[iBuffer].mData, 0, ioData->mBuffers[iBuffer].mDataByteSize);
}
if (_playing && _outputBlock ) {
// Collect data to render from the callbacks
_outputBlock(_outData, numFrames, _numOutputChannels);
// Put the rendered data into the output buffer
if (_numBytesPerSample == 4) // then we've already got floats
{
float zero = 0.0;
for (int iBuffer=0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
int thisNumChannels = ioData->mBuffers[iBuffer].mNumberChannels;
for (int iChannel = 0; iChannel < thisNumChannels; ++iChannel) {
vDSP_vsadd(_outData+iChannel, _numOutputChannels, &zero, (float *)ioData->mBuffers[iBuffer].mData, thisNumChannels, numFrames);
}
}
}
else if (_numBytesPerSample == 2) // then we need to convert SInt16 -> Float (and also scale)
{
float scale = (float)INT16_MAX;
vDSP_vsmul(_outData, 1, &scale, _outData, 1, numFrames*_numOutputChannels);
for (int iBuffer=0; iBuffer < ioData->mNumberBuffers; ++iBuffer) {
int thisNumChannels = ioData->mBuffers[iBuffer].mNumberChannels;
for (int iChannel = 0; iChannel < thisNumChannels; ++iChannel) {
vDSP_vfix16(_outData+iChannel, _numOutputChannels, (SInt16 *)ioData->mBuffers[iBuffer].mData+iChannel, thisNumChannels, numFrames);
}
}
}
}
return noErr;
}
Thanks!
Edit: Here's the code for the ASBD (_outputFormat). It gets its values straight from the RemoteIO unit. You can also check the whole method file here.
if (checkError(AudioUnitGetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&_outputFormat,
&size),
"Couldn't get the hardware output stream format"))
return NO;
_outputFormat.mSampleRate = _samplingRate;
if (checkError(AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&_outputFormat,
size),
"Couldn't set the hardware output stream format")) {
// just warning
}
_numBytesPerSample = _outputFormat.mBitsPerChannel / 8;
_numOutputChannels = _outputFormat.mChannelsPerFrame;
NSLog(@"Current output bytes per sample: %ld", _numBytesPerSample);
NSLog(@"Current output num channels: %ld", _numOutputChannels);
// Slap a render callback on the unit
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = renderCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
if (checkError(AudioUnitSetProperty(_audioUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&callbackStruct,
sizeof(callbackStruct)),
"Couldn't set the render callback on the audio unit"))
return NO;
I finally found the piece of code that was remixing the channels to stereo. It sets a property in KxAudioManager using the ASBD of the RemoteIO unit. Then, in KxMovieDecoder.m, it sets ffmpeg options using that same variable. Here's the code:
id<KxAudioManager> audioManager = [KxAudioManager audioManager];
swrContext = swr_alloc_set_opts(NULL,
av_get_default_channel_layout(audioManager.numOutputChannels),
AV_SAMPLE_FMT_S16,
audioManager.samplingRate,
av_get_default_channel_layout(codecCtx->channels),
codecCtx->sample_fmt,
codecCtx->sample_rate,
0,
NULL);
Now it's off to do some reading on how ffmpeg is doing the decoding. Fun times.
