AudioConverterFillComplexBuffer '!sst' error - iOS

I have an audio app that from time to time needs to encode audio data from PCM to AAC format. I'm using the software encoder (actually I don't care which encoder is used, but I've checked twice, and it is the software one). I'm using the TheAmazingAudioEngine library (https://github.com/TheAmazingAudioEngine/TheAmazingAudioEngine).
I set the audio session category to kAudioSessionCategory_PlayAndRecord and use these formats:
// Output format
outputFormat.mFormatID = kAudioFormatMPEG4AAC;
outputFormat.mSampleRate = 44100;
outputFormat.mFormatFlags = kMPEG4Object_AAC_Scalable;
outputFormat.mChannelsPerFrame = 2;
outputFormat.mBitsPerChannel = 0;
outputFormat.mBytesPerFrame = 0;
outputFormat.mBytesPerPacket = 0;
outputFormat.mFramesPerPacket = 1024;
// Input format
audioDescription.mFormatID = kAudioFormatLinearPCM;
audioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsNonInterleaved;
audioDescription.mChannelsPerFrame = 2;
audioDescription.mBytesPerPacket = sizeof(SInt16);
audioDescription.mFramesPerPacket = 1;
audioDescription.mBytesPerFrame = sizeof(SInt16);
audioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
audioDescription.mSampleRate = 44100.0;
Everything works perfectly with kAudioSessionProperty_OverrideCategoryMixWithOthers == YES.
But when I set kAudioSessionProperty_OverrideCategoryMixWithOthers to NO:
iOS Simulator - OK
iPod (6.1.3) - OK
iPhone 4S (7.0.3) - Fail ('!sst' error on the `AudioConverterFillComplexBuffer` call)
iPad 3 (7.0.3) - Fail
As I said, everything works fine until I change the audio session property kAudioSessionProperty_OverrideCategoryMixWithOthers to NO.
So the questions are:
What is this error about? (I couldn't find any clues in the headers or documentation about what it means; there's also no '!sst' string in any public framework header.)
How can I fix it?
If you have any other ideas you think I should try, feel free to comment.
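For context on reading codes like this: Core Audio packs many of its error codes as four printable ASCII bytes inside the 32-bit OSStatus, which is where '!sst' comes from. A small debugging helper (my own sketch, not part of any Apple API; `OSStatus` is redefined here so it compiles anywhere) makes such values readable:

```c
#include <ctype.h>
#include <stdint.h>
#include <stdio.h>

typedef int32_t OSStatus; /* stand-in for the Core Audio typedef */

/* Render an OSStatus as a quoted four-character code (e.g. '!sst') when
   all four bytes are printable ASCII; otherwise fall back to decimal. */
static void fourcc_string(OSStatus status, char out[16]) {
    uint32_t code = (uint32_t)status;
    char c[4];
    int printable = 1;
    for (int i = 0; i < 4; i++) {
        c[i] = (char)((code >> (24 - 8 * i)) & 0xFF);
        if (!isprint((unsigned char)c[i]))
            printable = 0;
    }
    if (printable)
        snprintf(out, 16, "'%.4s'", c);
    else
        snprintf(out, 16, "%d", (int)status);
}
```

Printing the result of every Core Audio call this way makes it much easier to search for the code, even when (as here) it is not declared in any public header.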

Related

Using AudioQueueNewInput to record stereo

I would like to use AudioQueueNewInput to create a stereo recording. I configured it as follows:
audioFormat.mFormatID = kAudioFormatLinearPCM;
hardwareChannels = 2;
audioFormat.mChannelsPerFrame = hardwareChannels;
audioFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagIsBigEndian;
audioFormat.mFramesPerPacket = 1;
audioFormat.mBitsPerChannel = 16;
audioFormat.mBytesPerPacket = (audioFormat.mBitsPerChannel / 8) * hardwareChannels;
audioFormat.mBytesPerFrame = audioFormat.mBytesPerPacket;
OSStatus result = AudioQueueNewInput(
&audioFormat,
recordingCallback,
(__bridge void *)(self), // userData
NULL, // run loop
NULL, // run loop mode
0, // flags
&queueObject
);
AudioQueueStart (
queueObject,
NULL // start time. NULL means as soon as possible.
);
I tested this code on an iPhone 6s Plus with an external stereo microphone. It does not seem to record in stereo: the left and right channels get identical streams of data. What else do I need to do to record stereo?
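One thing worth noting about the format above: kAudioFormatFlagIsNonInterleaved is not set, so the callback buffer is interleaved, with each frame holding a left sample followed by a right sample. A quick way to verify whether the two channels really carry identical data is to split the buffer and compare (a sketch of my own, not from the question):

```c
#include <stddef.h>
#include <stdint.h>

/* Split an interleaved 16-bit stereo buffer (L R L R ...) into separate
   left and right channel arrays. frameCount = number of L+R sample pairs. */
static void deinterleave_stereo16(const int16_t *interleaved, size_t frameCount,
                                  int16_t *left, int16_t *right) {
    for (size_t i = 0; i < frameCount; i++) {
        left[i]  = interleaved[2 * i];
        right[i] = interleaved[2 * i + 1];
    }
}
```

If the split halves match sample-for-sample, the hardware route is mono (or the accessory is not being used) rather than the ASBD being wrong.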

How to set AudioStreamBasicDescription properties?

I'm trying to play PCM stream data from a server using an AudioQueue.
PCM data format: sample rate = 48000, number of channels = 2, bits per sample = 16.
The server does not stream a fixed number of bytes to the client; the chunk size varies (e.g. 30848, 128, 2764, ... bytes).
How should I set up the ASBD?
I don't know how to set mFramesPerPacket, mBytesPerFrame, or mBytesPerPacket.
I have read Apple's reference documentation, but there are no detailed descriptions.
Please give me any ideas.
Update: here is the ASBD structure I have set up (language: Swift).
// Create ASBD structure & set properties.
var streamFormat = AudioStreamBasicDescription()
streamFormat.mSampleRate = 48000
streamFormat.mFormatID = kAudioFormatLinearPCM
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
streamFormat.mFramesPerPacket = 1
streamFormat.mChannelsPerFrame = 2
streamFormat.mBitsPerChannel = 16
streamFormat.mBytesPerFrame = (streamFormat.mBitsPerChannel / 8) * streamFormat.mChannelsPerFrame
streamFormat.mBytesPerPacket = streamFormat.mBytesPerFrame
streamFormat.mReserved = 0
// Create AudioQueue for playing PCM streaming data.
var err = AudioQueueNewOutput(&streamFormat, self.queueCallbackProc, nil, nil, nil, 0, &aq)
...
I have set up the ASBD structure as shown above.
The AudioQueue plays the streamed PCM data well for a few seconds, but then playback stops, even though the server is still streaming and I am still enqueueing buffers. What can I do?
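One thing worth checking with variable-size network chunks: each buffer handed to the AudioQueue must contain a whole number of frames. With the packed 16-bit stereo format above, a frame is 4 bytes, so a 30848-byte chunk is exactly 7712 frames, but a chunk whose size is not a multiple of 4 leaves trailing bytes that have to be carried into the next buffer, or the channels drift out of alignment. A sketch of that bookkeeping (my own hypothetical helper, not from the question):

```c
#include <stddef.h>

/* For packed PCM: how many whole frames fit in byteCount, and how many
   trailing bytes must be carried over into the next network chunk. */
static size_t whole_frames(size_t byteCount, size_t bytesPerFrame,
                           size_t *leftoverBytes) {
    size_t frames = byteCount / bytesPerFrame;
    if (leftoverBytes)
        *leftoverBytes = byteCount % bytesPerFrame;
    return frames;
}
```

The carried-over bytes are prepended to the next chunk before it is copied into an AudioQueue buffer.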
Answer: the ASBD is just a structure, defined as follows:
struct AudioStreamBasicDescription
{
Float64 mSampleRate;
AudioFormatID mFormatID;
AudioFormatFlags mFormatFlags;
UInt32 mBytesPerPacket;
UInt32 mFramesPerPacket;
UInt32 mBytesPerFrame;
UInt32 mChannelsPerFrame;
UInt32 mBitsPerChannel;
UInt32 mReserved;
};
typedef struct AudioStreamBasicDescription AudioStreamBasicDescription;
You may set the variables of a struct like this:
AudioStreamBasicDescription streamFormat;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
streamFormat.mSampleRate = sampleRate;
streamFormat.mBitsPerChannel = bitsPerChannel;
streamFormat.mChannelsPerFrame = channelsPerFrame;
streamFormat.mFramesPerPacket = 1;
int bytes = (bitsPerChannel / 8) * channelsPerFrame;
streamFormat.mBytesPerFrame = bytes;
streamFormat.mBytesPerPacket = bytes;
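For packed linear PCM the derived fields follow mechanically from the bit depth and channel count, which is exactly what the last three lines above compute. As a sanity check, the same arithmetic in a stand-alone form (a hypothetical helper struct, not the Core Audio API):

```c
#include <stdint.h>

/* Mirror of the packed-PCM bookkeeping in AudioStreamBasicDescription. */
typedef struct {
    double   sampleRate;
    uint32_t bitsPerChannel;
    uint32_t channelsPerFrame;
    uint32_t framesPerPacket;
    uint32_t bytesPerFrame;
    uint32_t bytesPerPacket;
} PcmLayout;

/* Fill in the derived fields for packed, interleaved linear PCM. */
static PcmLayout pcm_layout(double rate, uint32_t bits, uint32_t channels) {
    PcmLayout l;
    l.sampleRate       = rate;
    l.bitsPerChannel   = bits;
    l.channelsPerFrame = channels;
    l.framesPerPacket  = 1;                   /* PCM: one frame per packet  */
    l.bytesPerFrame    = (bits / 8) * channels; /* all channels of one frame */
    l.bytesPerPacket   = l.bytesPerFrame * l.framesPerPacket;
    return l;
}
```

For the 48 kHz / 16-bit / stereo stream in the question this yields 4 bytes per frame and 4 bytes per packet, matching the Swift code above.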

Audio Queue Converting sample rate iOS

OK so, I'm a noob to iOS. I am using an Audio Queue buffer to record audio. The linear PCM format defaults to 44100 Hz, 1 channel, 16-bit, little-endian. Is there a way I can force a format of 8000 Hz, 1 channel, 32-bit floating point, little-endian?
You can specify the format you want at initialization:
AudioStreamBasicDescription asbd;
asbd.mSampleRate = 8000;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kLinearPCMFormatFlagIsFloat;
asbd.mBytesPerPacket = sizeof(float);
asbd.mFramesPerPacket = 1;
asbd.mBytesPerFrame = sizeof(float);
asbd.mChannelsPerFrame = 1;
asbd.mBitsPerChannel = sizeof(float) * CHAR_BIT;
asbd.mReserved = 0;
OSStatus e = AudioQueueNewInput(&asbd, ...);
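With this format each frame is a single 4-byte float, so queue buffer sizes follow directly from the duration you want per callback: half a second at 8000 Hz mono float is 8000 × 0.5 × 4 = 16000 bytes. A sketch of that calculation (hypothetical helper, not part of the Audio Queue API):

```c
#include <stddef.h>

/* Bytes needed to hold `seconds` of audio, given the sample rate and the
   size of one frame (all channels of one sample instant). */
static size_t buffer_bytes(double sampleRate, double seconds,
                           size_t bytesPerFrame) {
    return (size_t)(sampleRate * seconds) * bytesPerFrame;
}
```

The result would be passed to `AudioQueueAllocateBuffer` as the buffer byte size.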

Recording wav file in compressed format in iOS

Right now I am using AQRecorder to record audio as .wav. My audio format description is as follows:
mRecordFormat.mFormatID = kAudioFormatLinearPCM;
mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
mRecordFormat.mSampleRate = 8000.0;
mRecordFormat.mBitsPerChannel = 16;
mRecordFormat.mChannelsPerFrame = 1;
mRecordFormat.mFramesPerPacket = 1;
mRecordFormat.mBytesPerPacket = 2;
mRecordFormat.mBytesPerFrame = 2;
I want to know whether I can record data to .wav in a compressed format. If yes, please let me know how to do this. (I don't want to record the file in .caf format.)

What stream format should iOS5 Effect Units use

I'm trying to use a Low Pass Filter AU. I keep getting a kAudioUnitErr_FormatNotSupported (-10868) error when setting the stream format to the filter unit, but if I just use the Remote IO unit there's no error.
The stream format I'm using is (Updated):
myASBD.mSampleRate = hardwareSampleRate;
myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mFormatFlags = kAudioFormatFlagIsSignedInteger;
myASBD.mBitsPerChannel = 8 * sizeof(float);
myASBD.mFramesPerPacket = 1;
myASBD.mChannelsPerFrame = 1;
myASBD.mBytesPerPacket = sizeof(float) * myASBD.mFramesPerPacket;
myASBD.mBytesPerFrame = sizeof(float) * myASBD.mChannelsPerFrame;
And I'm setting the filter stream like this:
// Sets input stream type to ASBD
setupErr = AudioUnitSetProperty(filterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &myASBD, sizeof(myASBD));
NSLog(@"Filter in: %i", setupErr);
//NSAssert(setupErr == noErr, @"No ASBD on Finput");
//Sets output stream type to ASBD
setupErr = AudioUnitSetProperty(filterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &myASBD, sizeof(myASBD));
NSLog(@"Filter out: %i", setupErr);
NSAssert(setupErr == noErr, @"No ASBD on Foutput");
The canonical format for iOS filter audio units is 8.24 fixed-point (linear PCM), which is 32 bits per channel, not 16.
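For reference, "8.24" means a 32-bit signed sample with 24 fractional bits, so converting to and from float is just a scale by 2^24. A sketch (assumes in-range input; a real converter would also clip to avoid overflow):

```c
#include <stdint.h>

/* Convert between float and the 8.24 fixed-point sample representation
   (32-bit signed, 24 fractional bits) used by the canonical AU format. */
static int32_t float_to_8_24(float x) {
    return (int32_t)(x * (float)(1 << 24));
}

static float fixed_8_24_to_float(int32_t x) {
    return (float)x / (float)(1 << 24);
}
```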
What format works with the reverb unit? I'm getting weird errors trying to record a buffer... any news on this topic?
Try this for the canonical format.
size_t bytesPerSample = sizeof (AudioUnitSampleType); //Default is 4 bytes
myASBD.mSampleRate = hardwareSampleRate;
myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical; //Canonical AU format
myASBD.mBytesPerPacket = bytesPerSample;
myASBD.mFramesPerPacket = 1;
myASBD.mBytesPerFrame = bytesPerSample;
myASBD.mChannelsPerFrame = 2; //Stereo
myASBD.mBitsPerChannel = 8 * bytesPerSample; //32-bit (8.24 fixed-point)
You will need to make sure all your AudioUnits ASBDs are configured uniformly.
If you are doing heavy audio processing, floats (supported since iOS 5) are not a bad idea either.
size_t bytesPerSample = sizeof (float); //float is 4 bytes
myASBD.mSampleRate = hardwareSampleRate;
myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mFormatFlags = kAudioFormatFlagIsFloat;
myASBD.mBytesPerPacket = bytesPerSample;
myASBD.mFramesPerPacket = 1;
myASBD.mBytesPerFrame = bytesPerSample;
myASBD.mChannelsPerFrame = 2;
myASBD.mBitsPerChannel = 8 * bytesPerSample; //32bit float
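The "configured uniformly" advice above can be enforced mechanically by comparing the formats on each connection before initializing the graph. A sketch using a stand-in struct with the same fields as AudioStreamBasicDescription (hypothetical helper, not an Apple API):

```c
#include <stdint.h>

/* Minimal stand-in for the Core Audio struct, for illustration. */
typedef struct {
    double   mSampleRate;
    uint32_t mFormatID;
    uint32_t mFormatFlags;
    uint32_t mBytesPerPacket;
    uint32_t mFramesPerPacket;
    uint32_t mBytesPerFrame;
    uint32_t mChannelsPerFrame;
    uint32_t mBitsPerChannel;
} ASBD;

/* 1 if two stream formats agree on every field that affects rendering. */
static int asbd_equal(const ASBD *a, const ASBD *b) {
    return a->mSampleRate       == b->mSampleRate
        && a->mFormatID         == b->mFormatID
        && a->mFormatFlags      == b->mFormatFlags
        && a->mBytesPerPacket   == b->mBytesPerPacket
        && a->mFramesPerPacket  == b->mFramesPerPacket
        && a->mBytesPerFrame    == b->mBytesPerFrame
        && a->mChannelsPerFrame == b->mChannelsPerFrame
        && a->mBitsPerChannel   == b->mBitsPerChannel;
}
```

Running this check on the output format of one unit against the input format of the next quickly pinpoints which connection triggers kAudioUnitErr_FormatNotSupported.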
