Swift Error: Struct 'XX' must be completely initialized before a member is stored to - iOS

I am trying to define AudioStreamBasicDescription in Swift.
In Objective-C, I used something like the following code.
AudioStreamBasicDescription ASBD;
ASBD.mSampleRate = 8000;
ASBD.mFormatID = kAudioFormatLinearPCM;
ASBD.mFormatFlags = kAudioFormatFlagsCanonical | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
ASBD.mFramesPerPacket = 1;
ASBD.mChannelsPerFrame = 1;
ASBD.mBitsPerChannel = 16;
ASBD.mBytesPerPacket = 2;
ASBD.mBytesPerFrame = 2;
And my converted Swift code is below:
var ASBD: AudioStreamBasicDescription
ASBD.mSampleRate = 8000 // ERROR here
ASBD.mFormatID = kAudioFormatLinearPCM
ASBD.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
ASBD.mFramesPerPacket = 1
ASBD.mChannelsPerFrame = 1
ASBD.mBitsPerChannel = 16
ASBD.mBytesPerPacket = 2
ASBD.mBytesPerFrame = 2
But the second line of this Swift code throws the error above. I don't know why I am getting it. Can anyone help me with this?

The error means what it says: in Swift, you cannot assign to a member of a struct variable until the struct itself has been initialized. AudioStreamBasicDescription is a struct, so create it with its default initializer first:
var ASBD = AudioStreamBasicDescription()
ASBD.mSampleRate = 8000
ASBD.mFormatID = kAudioFormatLinearPCM
ASBD.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
ASBD.mFramesPerPacket = 1
ASBD.mChannelsPerFrame = 1
ASBD.mBitsPerChannel = 16
ASBD.mBytesPerPacket = 2
ASBD.mBytesPerFrame = 2
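As a side note, Swift also imports a memberwise initializer for C structs, so the whole format can be built in one expression. A minimal sketch mirroring the values above (the argument order follows the C struct's field order):
import AudioToolbox

var asbd = AudioStreamBasicDescription(
    mSampleRate: 8000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
    mBytesPerPacket: 2,
    mFramesPerPacket: 1,
    mBytesPerFrame: 2,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 16,
    mReserved: 0)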

Related

How to set AudioStreamBasicDescription properties?

I'm trying to play PCM stream data from a server using AudioQueue.
The PCM data format is: sample rate = 48000, number of channels = 2, bits per sample = 16.
Also, the server does not stream a fixed number of bytes to the client; the chunk size varies (e.g. 30848, 128, 2764, ... bytes).
How should I set up the ASBD?
I don't know how to set mFramesPerPacket, mBytesPerFrame, or mBytesPerPacket.
I have read the Apple reference documentation, but it gives no detailed description.
Please give me any ideas.
Update: here is the ASBD structure I have set up (language: Swift).
// Create ASBD structure & set properties.
var streamFormat = AudioStreamBasicDescription()
streamFormat.mSampleRate = 48000
streamFormat.mFormatID = kAudioFormatLinearPCM
streamFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
streamFormat.mFramesPerPacket = 1
streamFormat.mChannelsPerFrame = 2
streamFormat.mBitsPerChannel = 16
streamFormat.mBytesPerFrame = (streamFormat.mBitsPerChannel / 8) * streamFormat.mChannelsPerFrame
streamFormat.mBytesPerPacket = streamFormat.mBytesPerFrame
streamFormat.mReserved = 0
// Create AudioQueue for playing PCM streaming data.
var err = AudioQueueNewOutput(&streamFormat, self.queueCallbackProc, nil, nil, nil, 0, &aq)
...
I have set up the ASBD structure as above.
The AudioQueue plays the streamed PCM data well for a few seconds,
but then playback stops, even though the server is still streaming and I am still enqueueing buffers.
What can I do?
Please give me any ideas.
Underneath, an ASBD is just a struct, defined as follows:
struct AudioStreamBasicDescription
{
    Float64          mSampleRate;
    AudioFormatID    mFormatID;
    AudioFormatFlags mFormatFlags;
    UInt32           mBytesPerPacket;
    UInt32           mFramesPerPacket;
    UInt32           mBytesPerFrame;
    UInt32           mChannelsPerFrame;
    UInt32           mBitsPerChannel;
    UInt32           mReserved;
};
typedef struct AudioStreamBasicDescription AudioStreamBasicDescription;
You may set the variables of a struct like this:
AudioStreamBasicDescription streamFormat;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
streamFormat.mSampleRate = sampleRate;
streamFormat.mBitsPerChannel = bitsPerChannel;
streamFormat.mChannelsPerFrame = channelsPerFrame;
streamFormat.mFramesPerPacket = 1;
int bytes = (bitsPerChannel / 8) * channelsPerFrame;
streamFormat.mBytesPerFrame = bytes;
streamFormat.mBytesPerPacket = bytes;
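Regarding the variable-length chunks from the server: with packed linear PCM and mFramesPerPacket = 1, a buffer may contain any whole number of frames, so the only constraint is that each buffer you enqueue holds a multiple of mBytesPerFrame bytes (4 bytes here, for 16-bit stereo). A minimal Swift sketch of that bookkeeping; `pending`, `didReceive`, and `enqueue` are hypothetical stand-ins for your own networking and AudioQueue code:
import Foundation

var pending = Data()  // bytes received from the server, not yet enqueued (hypothetical accumulator)

func didReceive(chunk: Data, bytesPerFrame: Int) {
    pending.append(chunk)
    // Hand only whole frames to the queue; keep the remainder for the next chunk.
    let usable = (pending.count / bytesPerFrame) * bytesPerFrame
    guard usable > 0 else { return }
    enqueue(pending.prefix(usable))
    pending.removeFirst(usable)
}

func enqueue(_ frames: Data) {
    // hypothetical: copy into an AudioQueueBuffer's mAudioData, then call AudioQueueEnqueueBuffer
}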

Format error setting up AudioQueue for recording, but only happens once in a while

So oddly, this error happens only once in a while when we are setting up the audio queue (even though I'm doing everything the same way). Device: iPhone 5, iOS 8.3:
mediaserverd[37] <Error>: 15:14:24.594 ERROR: [0x2883000] >aq> 323: AudioConverterNew from AudioQueueNew returned 'fmt?'
io: 0 ch, 44100 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved
client: 0 ch, 44100 Hz, 'lpcm' (0x0000000C) 16-bit signed integer
Here's the code that triggers it.
SetupAudioFormat(kAudioFormatLinearPCM);
// create the queue
XThrowIfError(AudioQueueNewInput(&mRecordFormat,
                                 MyInputBufferHandler,
                                 this /* userData */,
                                 NULL /* run loop */, NULL /* run loop mode */,
                                 0 /* flags */, &mQueue),
              "AudioQueueNewInput failed");
where mRecordFormat is set up like this:
void AQRecorder::SetupAudioFormat(UInt32 inFormatID)
{
    memset(&mRecordFormat, 0, sizeof(mRecordFormat));
    mRecordFormat.mSampleRate = [AVAudioSession sharedInstance].sampleRate;
    mRecordFormat.mChannelsPerFrame = (UInt32)[AVAudioSession sharedInstance].inputNumberOfChannels;
    mRecordFormat.mFormatID = inFormatID;
    mRecordFormat.mBytesPerFrame = mRecordFormat.mChannelsPerFrame * sizeof(SInt16);
    if (inFormatID == kAudioFormatLinearPCM)
    {
        // if we want pcm, default to signed 16-bit little-endian
        mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        mRecordFormat.mBitsPerChannel = 16;
        mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
        mRecordFormat.mFramesPerPacket = 1;
    } else if (inFormatID == kAudioFileAIFFType) {
        mRecordFormat.mFramesPerPacket = 1;
        mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsBigEndian
                                   | kLinearPCMFormatFlagIsSignedInteger
                                   | kLinearPCMFormatFlagIsPacked;
    }
}
My interpretation of the error is that the phone is recording 32-bit little-endian deinterleaved floats while I'm trying to set up a queue with a 16-bit signed integer format. But why don't I get the error every time? How do I fix it?
AudioStreamBasicDescriptions are really annoying. Here is what I use (I have typedef'd AudioStreamBasicDescription to ASBD):
ASBD asbdWithInfo(Boolean isFloat, int numberOfChannels, Boolean interleavedIfStereo) {
    ASBD asbd = {0};
    int sampleSize = isFloat ? sizeof(float) : sizeof(SInt16);
    asbd.mChannelsPerFrame = (numberOfChannels == 1) ? 1 : 2;
    asbd.mBitsPerChannel = 8 * sampleSize;
    asbd.mFramesPerPacket = 1;
    asbd.mSampleRate = 44100.0;
    asbd.mBytesPerFrame = interleavedIfStereo ? sampleSize * asbd.mChannelsPerFrame : sampleSize;
    asbd.mBytesPerPacket = asbd.mBytesPerFrame;
    asbd.mReserved = 0;
    asbd.mFormatID = kAudioFormatLinearPCM;
    if (isFloat) {
        asbd.mFormatFlags = kAudioFormatFlagIsFloat;
        if (interleavedIfStereo) {
            if (numberOfChannels == 1) {
                asbd.mFormatFlags = asbd.mFormatFlags | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
            }
        } else {
            asbd.mFormatFlags = asbd.mFormatFlags | kAudioFormatFlagIsNonInterleaved | kAudioFormatFlagIsPacked;
        }
    } else {
        asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked;
        if (!interleavedIfStereo && numberOfChannels > 1) {
            asbd.mFormatFlags = asbd.mFormatFlags | kAudioFormatFlagIsNonInterleaved;
        }
    }
    return asbd;
}
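The key design choice here is that mBytesPerFrame depends on the buffer layout: for interleaved stereo a frame spans both channels (sample size × 2), while for non-interleaved audio each channel buffer carries one sample per frame. A hedged Swift restatement of that arithmetic (a sketch, not the author's code):
func bytesPerFrame(sampleBytes: Int, channels: Int, interleaved: Bool) -> Int {
    // Non-interleaved buffers hold one channel each, so a frame is one sample wide.
    return interleaved ? sampleBytes * channels : sampleBytes
}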

AudioConverterFillComplexBuffer '!stt' error

I have an audio app that from time to time needs to encode audio data from PCM to AAC format. I'm using the software encoder (actually I don't care which encoder is used, but I've checked twice, and it is the software one). I'm using the TheAmazingAudioEngine library (https://github.com/TheAmazingAudioEngine/TheAmazingAudioEngine).
I set the audio session category to kAudioSessionCategory_PlayAndRecord.
I have these formats:
// Output format
outputFormat.mFormatID = kAudioFormatMPEG4AAC;
outputFormat.mSampleRate = 44100;
outputFormat.mFormatFlags = kMPEG4Object_AAC_Scalable;
outputFormat.mChannelsPerFrame = 2;
outputFormat.mBitsPerChannel = 0;
outputFormat.mBytesPerFrame = 0;
outputFormat.mBytesPerPacket = 0;
outputFormat.mFramesPerPacket = 1024;
// Input format
audioDescription.mFormatID = kAudioFormatLinearPCM;
audioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsNonInterleaved;
audioDescription.mChannelsPerFrame = 2;
audioDescription.mBytesPerPacket = sizeof(SInt16);
audioDescription.mFramesPerPacket = 1;
audioDescription.mBytesPerFrame = sizeof(SInt16);
audioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
audioDescription.mSampleRate = 44100.0;
Everything works perfectly with kAudioSessionProperty_OverrideCategoryMixWithOthers == YES.
But when I set kAudioSessionProperty_OverrideCategoryMixWithOthers to NO:
iOS Simulator - OK
iPod (6.1.3) - OK
iPhone 4S (7.0.3) - fails with an '!sst' error on the AudioConverterFillComplexBuffer call
iPad 3 (7.0.3) - fails
As I said, everything works fine until I change the audio session property kAudioSessionProperty_OverrideCategoryMixWithOthers to NO.
So the questions are:
What is this error about? (I found no clues in the headers or documentation about what it means; there's also no '!sst' string in any public framework header.)
How can I fix it?
If you have any other ideas you think I should try, feel free to comment.
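No answer here pins down the exact constant, but one debugging aid is worth noting: Core Audio packs many OSStatus codes as four ASCII characters, which is why logs show tags like '!stt'. Decoding a raw status back into characters makes the error searchable. A minimal Swift sketch:
import Foundation

// Render an OSStatus as its four-character code when all four bytes are
// printable ASCII (e.g. 0x21737474 -> "!stt"); otherwise fall back to the number.
func fourCharCode(_ status: OSStatus) -> String {
    let n = UInt32(bitPattern: status)
    let bytes = [UInt8((n >> 24) & 0xFF), UInt8((n >> 16) & 0xFF),
                 UInt8((n >> 8) & 0xFF), UInt8(n & 0xFF)]
    if bytes.allSatisfy({ $0 >= 0x20 && $0 < 0x7F }) {
        return String(bytes: bytes, encoding: .ascii) ?? "\(status)"
    }
    return "\(status)"
}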

Audio Queue Converting sample rate iOS

OK so, noob to iOS here. I am using Audio Queue buffers to record audio. The linear PCM format defaults to 44100 Hz, 1 channel, 16-bit, little-endian. Is there a way I can force a format of 8000 Hz, 1 channel, 32-bit floating point, little-endian?
You can specify the format you want at initialization:
AudioStreamBasicDescription asbd;
asbd.mSampleRate = 8000;
asbd.mFormatID = kAudioFormatLinearPCM;
asbd.mFormatFlags = kLinearPCMFormatFlagIsFloat;
asbd.mBytesPerPacket = sizeof(float);
asbd.mFramesPerPacket = 1;
asbd.mBytesPerFrame = sizeof(float);
asbd.mChannelsPerFrame = 1;
asbd.mBitsPerChannel = sizeof(float) * CHAR_BIT;
asbd.mReserved = 0;
OSStatus e = AudioQueueNewInput(&asbd, ...............
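For reference, a minimal Swift sketch of the same idea (hedged: the callback body is left empty, and kAudioFormatFlagsNativeFloatPacked spells out the float/native-endian/packed combination explicitly; the queue generally converts from the hardware capture format to the format you request):
import AudioToolbox

var asbd = AudioStreamBasicDescription(
    mSampleRate: 8000,
    mFormatID: kAudioFormatLinearPCM,
    mFormatFlags: kAudioFormatFlagsNativeFloatPacked,
    mBytesPerPacket: 4,
    mFramesPerPacket: 1,
    mBytesPerFrame: 4,
    mChannelsPerFrame: 1,
    mBitsPerChannel: 32,
    mReserved: 0)

let callback: AudioQueueInputCallback = { _, _, buffer, _, _, _ in
    // Consume buffer.pointee.mAudioData (32-bit floats) here.
}

var queue: AudioQueueRef?
let status = AudioQueueNewInput(&asbd, callback, nil, nil, nil, 0, &queue)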

What stream format should iOS5 Effect Units use

I'm trying to use a Low Pass Filter AU. I keep getting a kAudioUnitErr_FormatNotSupported (-10868) error when setting the stream format on the filter unit, but if I just use the Remote IO unit there's no error.
The stream format I'm using is (Updated):
myASBD.mSampleRate = hardwareSampleRate;
myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mFormatFlags = kAudioFormatFlagIsSignedInteger;
myASBD.mBitsPerChannel = 8 * sizeof(float);
myASBD.mFramesPerPacket = 1;
myASBD.mChannelsPerFrame = 1;
myASBD.mBytesPerPacket = sizeof(float) * myASBD.mFramesPerPacket;
myASBD.mBytesPerFrame = sizeof(float) * myASBD.mChannelsPerFrame;
And I'm setting the filter stream like this:
// Sets input stream type to ASBD
setupErr = AudioUnitSetProperty(filterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &myASBD, sizeof(myASBD));
NSLog(@"Filter in: %i", setupErr);
//NSAssert(setupErr == noErr, @"No ASBD on Finput");
//Sets output stream type to ASBD
setupErr = AudioUnitSetProperty(filterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &myASBD, sizeof(myASBD));
NSLog(@"Filter out: %i", setupErr);
NSAssert(setupErr == noErr, @"No ASBD on Foutput");
The canonical format for iOS filter audio units is 8.24 fixed-point (linear PCM), which is 32 bits per channel, not 16.
What format works with the reverb unit? I'm getting weird errors trying to record a buffer... any news on this topic?
Try this for the canonical format.
size_t bytesPerSample = sizeof (AudioUnitSampleType); //Default is 4 bytes
myASBD.mSampleRate = hardwareSampleRate;
myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical; //Canonical AU format
myASBD.mBytesPerPacket = bytesPerSample;
myASBD.mFramesPerPacket = 1;
myASBD.mBytesPerFrame = bytesPerSample;
myASBD.mChannelsPerFrame = 2; //Stereo
myASBD.mBitsPerChannel = 8 * bytesPerSample; // 32-bit (8.24 fixed-point samples)
You will need to make sure all your AudioUnits' ASBDs are configured uniformly.
If you are doing heavy audio processing, floats (supported since iOS 5) are not a bad idea either.
size_t bytesPerSample = sizeof (float); //float is 4 bytes
myASBD.mSampleRate = hardwareSampleRate;
myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mFormatFlags = kAudioFormatFlagIsFloat;
myASBD.mBytesPerPacket = bytesPerSample;
myASBD.mFramesPerPacket = 1;
myASBD.mBytesPerFrame = bytesPerSample;
myASBD.mChannelsPerFrame = 2;
myASBD.mBitsPerChannel = 8 * bytesPerSample; //32bit float
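Whichever format you pick, apply it to the matching scopes of every unit in the chain, as the question's code does. A minimal Swift sketch of that call (hedged: `filterUnit` stands for an AudioUnit you have already created, and `asbd` for the stream format built above):
import AudioToolbox

var asbd = AudioStreamBasicDescription()  // fill in as shown above
let size = UInt32(MemoryLayout<AudioStreamBasicDescription>.size)
// Set the same format on the filter's input and output scopes.
var err = AudioUnitSetProperty(filterUnit, kAudioUnitProperty_StreamFormat,
                               kAudioUnitScope_Input, 0, &asbd, size)
err = AudioUnitSetProperty(filterUnit, kAudioUnitProperty_StreamFormat,
                           kAudioUnitScope_Output, 0, &asbd, size)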
