CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731 - iOS

I am trying to capture app sound and pass it to an AVAssetWriter as input.
I am setting a callback on the audio unit to get an AudioBufferList.
The problem starts when converting the AudioBufferList to a CMSampleBufferRef:
it always returns error -12731, which indicates that a required parameter is missing.
Thanks, Karol
-(OSStatus) recordingCallbackWithRef:(void*)inRefCon
flags:(AudioUnitRenderActionFlags*)flags
timeStamp:(const AudioTimeStamp*)timeStamp
busNumber:(UInt32)busNumber
framesNumber:(UInt32)numberOfFrames
data:(AudioBufferList*)data
{
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0].mData = NULL;
OSStatus status;
status = AudioUnitRender(audioUnit,
flags,
timeStamp,
busNumber,
numberOfFrames,
&bufferList);
[self checkOSStatus:status];
AudioStreamBasicDescription audioFormat;
// Describe format
audioFormat.mSampleRate = 44100.00;
audioFormat.mFormatID = kAudioFormatLinearPCM;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = 2;
audioFormat.mBytesPerFrame = 2;
CMSampleBufferRef buff = NULL;
CMFormatDescriptionRef format = NULL;
CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };
status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format);
[self checkOSStatus:status];
status = CMSampleBufferCreate(kCFAllocatorDefault,NULL,false,NULL,NULL,format,1, 1, &timing, 0, NULL, &buff);
[self checkOSStatus:status];
status = CMSampleBufferSetDataBufferFromAudioBufferList(buff,
kCFAllocatorDefault,
kCFAllocatorDefault,
0,
&bufferList);
[self checkOSStatus:status]; //Status here is -12731
//Do something with the buffer
return noErr;
}
Edit:
I checked bufferList.mBuffers[0].mData and it is not NULL, so that's probably not the problem.

Since there are similar unanswered questions all over the internet, here is what worked for me.
I managed to solve it and the recording now fully works.
My problem was a wrong parameter passed to CMSampleBufferCreate:
numSamples should be equal to numberOfFrames rather than 1.
So the final call is:
CMSampleBufferCreate(kCFAllocatorDefault,NULL,false,NULL,NULL,format,
(CMItemCount)numberOfFrames, 1, &timing, 0, NULL, &buff);
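For completeness, a minimal sketch of how the corrected calls sit together inside the render callback, assuming bufferList was filled by AudioUnitRender and format/timing were created exactly as in the question; the only change from the original code is the numSamples argument:
// numSamples (the argument after "format") must equal the number of frames
// delivered to the callback, i.e. the frames actually held in bufferList.
status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format,
                              (CMItemCount)numberOfFrames, 1, &timing, 0, NULL, &buff);
[self checkOSStatus:status];

// With matching frame counts this call no longer returns -12731.
status = CMSampleBufferSetDataBufferFromAudioBufferList(buff,
                                                        kCFAllocatorDefault,
                                                        kCFAllocatorDefault,
                                                        0,
                                                        &bufferList);
[self checkOSStatus:status];

// buff can now be appended to an AVAssetWriterInput; CFRelease(buff) when done.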

Related

Using CMSampleTimingInfo, CMSampleBuffer and AudioBufferList from raw PCM 16000 sample rate stream

I receive audio data and its size from outside; the audio appears to be linear PCM, signed int16, but when I record it using an AVAssetWriter the saved audio file is highly distorted and higher in pitch.
#define kSamplingRate 16000
#define kNumberChannels 1
UInt32 framesAlreadyWritten = 0;
-(AudioStreamBasicDescription) getAudioFormat {
AudioStreamBasicDescription format;
format.mSampleRate = kSamplingRate;
format.mFormatID = kAudioFormatLinearPCM;
format.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
format.mChannelsPerFrame = 1; // mono
format.mBitsPerChannel = 16;
format.mBytesPerFrame = sizeof(SInt16);
format.mFramesPerPacket = 1;
format.mBytesPerPacket = format.mBytesPerFrame * format.mFramesPerPacket;
format.mReserved = 0;
return format;
}
- (CMSampleBufferRef)createAudioSample:(const void *)audioData frames: (UInt32)len {
AudioStreamBasicDescription asbd = [self getAudioFormat];
CMSampleBufferRef buff = NULL;
static CMFormatDescriptionRef format = NULL;
OSStatus error = 0;
if(format == NULL) {
AudioChannelLayout acl;
bzero(&acl, sizeof(acl));
acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono;
error = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &asbd, sizeof(acl), &acl, 0, NULL, NULL, &format);
}
CMTime duration = CMTimeMake(1, kSamplingRate);
CMTime pts = CMTimeMake(framesAlreadyWritten, kSamplingRate);
NSLog(#"-----------pts");
CMTimeShow(pts);
CMSampleTimingInfo timing = {duration , pts, kCMTimeInvalid };
error = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, len, 1, &timing, 0, NULL, &buff);
framesAlreadyWritten += len;
if (error) {
NSLog(#"CMSampleBufferCreate returned error: %ld", (long)error);
return NULL;
}
AudioBufferList audioBufferList;
audioBufferList.mNumberBuffers = 1;
audioBufferList.mBuffers[0].mNumberChannels = asbd.mChannelsPerFrame;
audioBufferList.mBuffers[0].mDataByteSize = (UInt32)(len * asbd.mBytesPerFrame);
audioBufferList.mBuffers[0].mData = (void *)audioData;
error = CMSampleBufferSetDataBufferFromAudioBufferList(buff, kCFAllocatorDefault, kCFAllocatorDefault, 0, &audioBufferList);
if(error) {
NSLog(#"CMSampleBufferSetDataBufferFromAudioBufferList returned error: %ld", (long)error);
return NULL;
}
return buff;
}
Not sure why you're dividing len by two, but your presentation time should progress instead of staying constant, something like
CMTime time = CMTimeMake(framesAlreadyWritten , kSamplingRate);
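Once createAudioSample:frames: returns a buffer with a progressing presentation time, appending it to the writer might look roughly like this (a sketch only; assetWriter and assetWriterAudioInput are hypothetical names for an already-started AVAssetWriter and its audio AVAssetWriterInput, and audioData/len are whatever the outside source delivered):
CMSampleBufferRef sample = [self createAudioSample:audioData frames:len];
if (sample) {
    // Only append while the input is ready, otherwise the writer drops data or errors out.
    if (assetWriterAudioInput.readyForMoreMediaData) {
        if (![assetWriterAudioInput appendSampleBuffer:sample]) {
            NSLog(@"appendSampleBuffer failed: %@", assetWriter.error);
        }
    }
    CFRelease(sample);
}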

Using AudioQueueNewInput to record stereo

I would like to use AudioQueueNewInput to create a stereo recording. I configured it as follows:
audioFormat.mFormatID = kAudioFormatLinearPCM;
hardwareChannels = 2;
audioFormat.mChannelsPerFrame = hardwareChannels;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagIsBigEndian;
audioFormat.mFramesPerPacket = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = (audioFormat.mBitsPerChannel / 8) * hardwareChannels;
audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket;
OSStatus result = AudioQueueNewInput(
&audioFormat,
recordingCallback,
(__bridge void *)(self), // userData
NULL, // run loop
NULL, // run loop mode
0, // flags
&queueObject
);
AudioQueueStart (
queueObject,
NULL // start time. NULL means as soon as possible.
);
I tested this code on an iPhone 6s Plus with an external stereo microphone. It does not seem to record in stereo: both the left and right channels get identical streams of data. What else do I need to do to record stereo?

Output is not generated AudioConverterFillComplexBuffer to convert from AAC to PCM?

Hi, I am trying to convert an AAC buffer to PCM using AudioConverterFillComplexBuffer. Here is my code:
-(void)initDecoder{
AudioStreamBasicDescription outAudioStreamBasicDescription;
outAudioStreamBasicDescription.mSampleRate = 44100.0;
outAudioStreamBasicDescription.mFormatID = kAudioFormatLinearPCM;
outAudioStreamBasicDescription.mFormatFlags = 0xc;
outAudioStreamBasicDescription.mBytesPerPacket = 2;
outAudioStreamBasicDescription.mFramesPerPacket = 1;
outAudioStreamBasicDescription.mBytesPerFrame = 2;
outAudioStreamBasicDescription.mChannelsPerFrame = 1;
outAudioStreamBasicDescription.mBitsPerChannel = 16;
AudioStreamBasicDescription inAudioStreamBasicDescription;
inAudioStreamBasicDescription.mSampleRate = 44100;
inAudioStreamBasicDescription.mFormatID = kAudioFormatMPEG4AAC;
inAudioStreamBasicDescription.mFormatFlags = kMPEG4Object_AAC_SSR;
inAudioStreamBasicDescription.mBytesPerPacket = 0;
inAudioStreamBasicDescription.mFramesPerPacket = 1024;
inAudioStreamBasicDescription.mBytesPerFrame = 0;
inAudioStreamBasicDescription.mChannelsPerFrame = 1;
inAudioStreamBasicDescription.mBitsPerChannel = 0;
inAudioStreamBasicDescription.mReserved = 0;
AudioClassDescription audioClassDescription;
memset(&audioClassDescription, 0, sizeof(audioClassDescription));
audioClassDescription.mManufacturer = kAppleSoftwareAudioCodecManufacturer;
audioClassDescription.mSubType = outAudioStreamBasicDescription.mFormatID;
audioClassDescription.mType = kAudioFormatLinearPCM;
NSAssert(audioClassDescription.mSubType == outAudioStreamBasicDescription.mFormatID && audioClassDescription.mManufacturer == kAppleSoftwareAudioCodecManufacturer, nil);
NSAssert(AudioConverterNewSpecific(&inAudioStreamBasicDescription, &outAudioStreamBasicDescription, 1, &audioClassDescription, &audioConverterDecode) == 0, nil);
}
OSStatus inInputDataProc(AudioConverterRef inAudioConverter, UInt32 *ioNumberDataPackets, AudioBufferList *ioData, AudioStreamPacketDescription **outDataPacketDescription, void *inUserData)
{
AudioBufferList audioBufferList = *(AudioBufferList *)inUserData;
ioData->mBuffers[0].mData = audioBufferList.mBuffers[0].mData;
ioData->mBuffers[0].mDataByteSize = audioBufferList.mBuffers[0].mDataByteSize;
return noErr;
}
-(void)decodeSample:(AudioBufferList)inAaudioBufferList{
//inAaudioBufferList is the AAC buffer
if (!audioConverterDecode) {
[self initDecoder];
}
NSAssert(inAaudioBufferList.mNumberBuffers == 1, nil);
uint32_t bufferSize = 1024*2;//inAaudioBufferList.mBuffers[0].mDataByteSize;
uint8_t *buffer = (uint8_t *)malloc(1024*2);
memset(buffer, 0, bufferSize);
AudioBufferList outAudioBufferList;
outAudioBufferList.mNumberBuffers = 1;
outAudioBufferList.mBuffers[0].mNumberChannels = 1;
outAudioBufferList.mBuffers[0].mDataByteSize = bufferSize;
outAudioBufferList.mBuffers[0].mData = buffer;
UInt32 ioOutputDataPacketSize = bufferSize;
OSStatus ret = AudioConverterFillComplexBuffer(audioConverterDecode, inInputDataProc, &inAaudioBufferList, &ioOutputDataPacketSize, &outAudioBufferList, NULL) ;//== 0, nil);
NSData *data = [NSData dataWithBytes:outAudioBufferList.mBuffers[0].mData length:outAudioBufferList.mBuffers[0].mDataByteSize];
DLog(#"Rev Size = %d",(unsigned int)outAudioBufferList.mBuffers[0].mDataByteSize);
free(buffer);
}
The decoded output length is zero, and the OSStatus returned by AudioConverterFillComplexBuffer is 561015652.
So what could be wrong?
This is a shot in the dark, and you have probably already found a solution by now, but I believe you need to change this
UInt32 ioOutputDataPacketSize = bufferSize;
to this
UInt32 ioOutputDataPacketSize = bufferSize/2;
Personally, I handle this in the inInputDataProc method: I always pass
UInt32 ioOutputDataPacketSize = 1;
into AudioConverterFillComplexBuffer and then set the packet count within inInputDataProc like this:
UInt32 maxPackets = audioBufferList.mBuffers[0].mDataByteSize / 2;
*ioNumberDataPackets = maxPackets;
Hope this helps.
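For what it's worth, the error code 561015652 appears to be the FourCC '!pkd' (kAudioConverterErr_RequiresPacketDescriptionsError), which suggests the converter also wants packet descriptions for the AAC input. A hedged sketch of an input proc along those lines, assuming one whole AAC packet is handed in per call (the static packet description and single-threaded use are assumptions, not part of the original code):
// Sketch only: supplies one AAC packet per call plus its packet description.
// The converter needs packet descriptions because AAC is a variable-bit-rate format.
OSStatus inInputDataProc(AudioConverterRef inAudioConverter,
                         UInt32 *ioNumberDataPackets,
                         AudioBufferList *ioData,
                         AudioStreamPacketDescription **outDataPacketDescription,
                         void *inUserData)
{
    AudioBufferList *inputBufferList = (AudioBufferList *)inUserData;

    ioData->mBuffers[0].mData = inputBufferList->mBuffers[0].mData;
    ioData->mBuffers[0].mDataByteSize = inputBufferList->mBuffers[0].mDataByteSize;
    ioData->mBuffers[0].mNumberChannels = 1;

    // One whole AAC packet is provided per call.
    *ioNumberDataPackets = 1;

    if (outDataPacketDescription) {
        static AudioStreamPacketDescription packetDescription; // assumption: single-threaded use
        packetDescription.mStartOffset = 0;
        packetDescription.mVariableFramesInPacket = 0;
        packetDescription.mDataByteSize = inputBufferList->mBuffers[0].mDataByteSize;
        *outDataPacketDescription = &packetDescription;
    }
    return noErr;
}
With an input proc like this, the ioOutputDataPacketSize passed to AudioConverterFillComplexBuffer would be the number of output PCM frames you have room for, typically 1024 per AAC packet.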

How do you save generated audio to a file in iOS?

I have successfully generated a tone on iOS with the following code. After that, I want to save the generated tone to an audio file. How can I do this?
- (void)createToneUnit
{
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;
// Get the default playback output unit
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, #"Can't find default output");
// Create a new unit based on this that we'll use for output
OSErr err = AudioComponentInstanceNew(defaultOutput, &toneUnit);
NSAssert1(toneUnit, #"Error creating unit: %ld", err);
// Set our tone rendering function on the unit
AURenderCallbackStruct input;
input.inputProc = RenderTone;
input.inputProcRefCon = (__bridge void *)(self);
err = AudioUnitSetProperty(toneUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&input,
sizeof(input));
NSAssert1(err == noErr, #"Error setting callback: %ld", err);
// Set the format to 32 bit, single channel, floating point, linear PCM
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = sampleRate;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags =
kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
streamFormat.mBytesPerPacket = four_bytes_per_float;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = four_bytes_per_float;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
err = AudioUnitSetProperty (toneUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&streamFormat,
sizeof(AudioStreamBasicDescription));
NSAssert1(err == noErr, #"Error setting stream format: %ld", err);
}
The render code:
OSStatus RenderTone(
void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
// Fixed amplitude is good enough for our purposes
const double amplitude = 0.25;
// Get the tone parameters out of the view controller
ViewController *viewController =
(__bridge ViewController *)inRefCon;
double theta = viewController->theta;
double theta_increment = 2.0 * M_PI * viewController->frequency / viewController->sampleRate;
// This is a mono tone generator so we only need the first buffer
const int channel = 0;
Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
// Generate the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
buffer[frame] = sin(theta) * amplitude;
theta += theta_increment;
if (theta > 2.0 * M_PI)
{
theta -= 2.0 * M_PI;
}
}
// Store the theta back in the view controller
viewController->theta = theta;
return noErr;
}
And to play the generated tone, I just:
OSErr err = AudioUnitInitialize(toneUnit);
err = AudioOutputUnitStart(toneUnit);
The Extended Audio File API (ExtAudioFile) provides an easy way to write audio files to disk.
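A minimal sketch of that approach, assuming the tone has been rendered into a bufferList of inNumberFrames Float32 samples (for example by calling RenderTone directly) and that streamFormat is the same ASBD configured on the tone unit; the file path is a placeholder:
// Sketch: write mono 32-bit float PCM frames to a CAF file with ExtAudioFile.
NSURL *fileURL = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"tone.caf"]]; // placeholder path

// File data format: same as the render format, but interleaved/packed for the file.
AudioStreamBasicDescription fileFormat = streamFormat;
fileFormat.mFormatFlags = kAudioFormatFlagsNativeFloatPacked;

ExtAudioFileRef audioFile = NULL;
OSStatus err = ExtAudioFileCreateWithURL((__bridge CFURLRef)fileURL,
                                         kAudioFileCAFType,
                                         &fileFormat,
                                         NULL,
                                         kAudioFileFlags_EraseFile,
                                         &audioFile);
if (err == noErr) {
    // Tell ExtAudioFile the format of the buffers we will hand it (the render format).
    ExtAudioFileSetProperty(audioFile,
                            kExtAudioFileProperty_ClientDataFormat,
                            sizeof(streamFormat),
                            &streamFormat);

    // bufferList is assumed to hold inNumberFrames rendered Float32 samples.
    err = ExtAudioFileWrite(audioFile, inNumberFrames, &bufferList);
    ExtAudioFileDispose(audioFile);
}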

How can I modify this AudioUnit code so that it has stereo output?

I can't seem to find what I'm looking for in the documentation. This code works great, but I want stereo output.
- (void)createToneUnit
{
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;
// Get the default playback output unit
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, #"Can't find default output");
// Create a new unit based on this that we'll use for output
OSErr err = AudioComponentInstanceNew(defaultOutput, &_toneUnit);
NSAssert1(_toneUnit, #"Error creating unit: %d", err);
// Set our tone rendering function on the unit
AURenderCallbackStruct input;
input.inputProc = RenderTone;
input.inputProcRefCon = (__bridge void*)self;
err = AudioUnitSetProperty(_toneUnit,
kAudioUnitProperty_SetRenderCallback,
kAudioUnitScope_Input,
0,
&input,
sizeof(input));
NSAssert1(err == noErr, #"Error setting callback: %d", err);
// Set the format to 32 bit, single channel, floating point, linear PCM
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = kSampleRate;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags =
kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
streamFormat.mBytesPerPacket = four_bytes_per_float;
streamFormat.mFramesPerPacket = 1;
streamFormat.mBytesPerFrame = four_bytes_per_float;
streamFormat.mChannelsPerFrame = 1;
streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
err = AudioUnitSetProperty (_toneUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Input,
0,
&streamFormat,
sizeof(AudioStreamBasicDescription));
NSAssert1(err == noErr, #"Error setting stream format: %dd", err);
}
And here is the callback:
OSStatus RenderTone( void* inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData){
// Get the tone parameters out of the view controller
VWWSynthesizerC *synth = (__bridge VWWSynthesizerC *)inRefCon;
double theta = synth.theta;
double theta_increment = 2.0 * M_PI * synth.frequency / kSampleRate;
// This is a mono tone generator so we only need the first buffer
const int channel = 0;
Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;
// Generate the samples
for (UInt32 frame = 0; frame < inNumberFrames; frame++)
{
if(synth.muted){
buffer[frame] = 0;
}
else{
switch(synth.waveType){
case VWWWaveTypeSine:{
buffer[frame] = sin(theta) * synth.amplitude;
break;
}
case VWWWaveTypeSquare:{
buffer[frame] = square(theta) * synth.amplitude;
break;
}
case VWWWaveTypeSawtooth:{
buffer[frame] = sawtooth(theta) * synth.amplitude;
break;
}
case VWWWaveTypeTriangle:{
buffer[frame] = triangle(theta) * synth.amplitude;
break;
}
default:
break;
}
}
theta += theta_increment;
if (theta > 2.0 * M_PI)
{
theta -= 2.0 * M_PI;
}
}
synth.theta = theta;
return noErr;
}
If there is a different or better way to render this data, I'm open to suggestions. I'm rendering sine, square, triangle, sawtooth, and other waves.
