Calculate audio buffer size for AudioQueue in Xamarin.iOS

I need to record audio from the microphone for anywhere between 1 second (minimum) and 15 seconds (maximum); the recording time may vary within that range. How can I calculate the buffer size, or resize the buffer? My AudioStreamBasicDescription settings are:
this.audioStreamDescription.Format = AudioFormatType.LinearPCM;
this.audioStreamDescription.FormatFlags = AudioFormatFlags.LinearPCMIsSignedInteger | AudioFormatFlags.LinearPCMIsPacked;
this.audioStreamDescription.SampleRate = 8000;
this.audioStreamDescription.BitsPerChannel = 16;
this.audioStreamDescription.ChannelsPerFrame = 1;
this.audioStreamDescription.BytesPerFrame = (16 / 8) * 1;
this.audioStreamDescription.FramesPerPacket = 1;
this.audioStreamDescription.BytesPerPacket = audioStreamDescription.BytesPerFrame * audioStreamDescription.FramesPerPacket;
this.audioStreamDescription.Reserved = 0;
Allocating the buffer:
inputQueue.AllocateBuffer(-----, out bufferPointer);
inputQueue.EnqueueBuffer(bufferPointer, -----, null);
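Since the format above is uncompressed linear PCM, the byte count for any duration follows directly from the ASBD: bytes = seconds * SampleRate * BytesPerFrame. Here is a minimal sketch of the arithmetic in plain C (the 0.5-second per-buffer duration is my own assumption; the same numbers apply to the C# fields above):
#include <stdio.h>

int main(void) {
    double sampleRate    = 8000.0; // SampleRate above
    int    bytesPerFrame = 2;      // (16 / 8) * 1, i.e. 16-bit mono
    double seconds       = 0.5;    // assumed duration covered by each queue buffer

    int bufferBytes = (int)(seconds * sampleRate * bytesPerFrame); // 8000 bytes
    printf("buffer size: %d bytes\n", bufferBytes);
    return 0;
}
For the full 15-second maximum this gives 15 * 8000 * 2 = 240,000 bytes, but the usual AudioQueue pattern is to allocate a few buffers of a fixed, smaller size and keep re-enqueueing them from the input callback until recording stops, so the final recording length never needs to be known up front.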

Related

AudioFileWriteBytes performance for stereo file

I'm writing a stereo wave file with AudioFileWriteBytes (CoreAudio / iOS) and the only way I can get it to work is by calling it for each sample on each channel.
The following code works:
// Prepare the format AudioStreamBasicDescription;
AudioStreamBasicDescription asbd = {
.mSampleRate = session.samplerate,
.mFormatID = kAudioFormatLinearPCM,
.mFormatFlags = kAudioFormatFlagIsBigEndian| kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked,
.mChannelsPerFrame = 2,
.mBitsPerChannel = 16,
.mFramesPerPacket = 1, // Always 1 for uncompressed formats
.mBytesPerPacket = 4, // 16 bits for 2 channels = 4 bytes
.mBytesPerFrame = 4 // 16 bits for 2 channels = 4 bytes
};
// Set up the file
AudioFileID audioFile;
OSStatus audioError = noErr;
audioError = AudioFileCreateWithURL((__bridge CFURLRef)fileURL, kAudioFileAIFFType, &asbd, kAudioFileFlags_EraseFile, &audioFile);
if (audioError != noErr) {
NSLog(#"Error creating file");
return;
}
// Write samples
UInt64 currentFrame = 0;
while (currentFrame < totalLengthInFrames) {
UInt64 numberOfFramesToWrite = totalLengthInFrames - currentFrame;
if (numberOfFramesToWrite > 2048) {
numberOfFramesToWrite = 2048;
}
UInt32 sampleByteCount = sizeof(int16_t);
UInt32 bytesToWrite = (UInt32)numberOfFramesToWrite * sampleByteCount;
int16_t *sampleBufferLeft = (int16_t *)malloc(bytesToWrite);
int16_t *sampleBufferRight = (int16_t *)malloc(bytesToWrite);
// Some magic to fill the buffers
for (int j = 0; j < numberOfFramesToWrite; j++) {
int16_t left = CFSwapInt16HostToBig(sampleBufferLeft[j]);
int16_t right = CFSwapInt16HostToBig(sampleBufferRight[j]);
audioError = AudioFileWriteBytes(audioFile, false, (currentFrame + j) * 4, &sampleByteCount, &left);
assert(audioError == noErr);
audioError = AudioFileWriteBytes(audioFile, false, (currentFrame + j) * 4 + 2, &sampleByteCount, &right);
assert(audioError == noErr);
}
free(sampleBufferLeft);
free(sampleBufferRight);
currentFrame += numberOfFramesToWrite;
}
However, it is (obviously) very slow and inefficient.
I can't find anything on how to use it with a big buffer so that I can write more than a single sample while also writing 2 channels.
I tried making a buffer going LRLRLRLR (left / right), and then write that with just one AudioFileWriteBytes call. I expected that to work, but it produced a file filled with noise.
This is the code:
UInt64 currentFrame = 0;
UInt64 bytePos = 0;
while (currentFrame < totalLengthInFrames) {
UInt64 numberOfFramesToWrite = totalLengthInFrames - currentFrame;
if (numberOfFramesToWrite > 2048) {
numberOfFramesToWrite = 2048;
}
UInt32 sampleByteCount = sizeof(int16_t);
UInt32 bytesInBuffer = (UInt32)numberOfFramesToWrite * sampleByteCount;
UInt32 bytesInOutputBuffer = (UInt32)numberOfFramesToWrite * sampleByteCount * 2;
int16_t *sampleBufferLeft = (int16_t *)malloc(bytesInBuffer);
int16_t *sampleBufferRight = (int16_t *)malloc(bytesInBuffer);
int16_t *outputBuffer = (int16_t *)malloc(bytesInOutputBuffer);
// Some magic to fill the buffers
for (int j = 0; j < numberOfFramesToWrite; j++) {
int16_t left = CFSwapInt16HostToBig(sampleBufferLeft[j]);
int16_t right = CFSwapInt16HostToBig(sampleBufferRight[j]);
outputBuffer[(j * 2)] = left;
outputBuffer[(j * 2) + 1] = right;
}
audioError = AudioFileWriteBytes(audioFile, false, bytePos, &bytesInOutputBuffer, &outputBuffer);
assert(audioError == noErr);
free(sampleBufferLeft);
free(sampleBufferRight);
free(outputBuffer);
bytePos += bytesInOutputBuffer;
currentFrame += numberOfFramesToWrite;
}
I also tried to just write the buffers at once (2048*L, 2048*R, etc.) which I did not expect to work, and it didn't.
How do I speed this up AND get a working wave file?
I tried making a buffer going LRLRLRLR (left / right), and then write that with just one AudioFileWriteBytes call.
This is the correct approach if using (the rather difficult) Audio File Services.
If possible, instead of the very low-level Audio File Services, use Extended Audio File Services. It is a wrapper around Audio File Services that has built-in format converters. Or, better yet, use AVAudioFile; it is a wrapper around Extended Audio File Services that covers most common use cases.
If you are set on using Audio File Services, you'll have to interleave the audio manually, as you tried. Maybe show the code where you attempted this.
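For reference, here is a minimal sketch of that interleaved approach in C, assuming the audioFile, totalLengthInFrames, and 16-bit big-endian stereo ASBD from the question. One likely culprit for the noise in the question's version is that &outputBuffer (the address of the pointer variable) was passed to AudioFileWriteBytes instead of outputBuffer itself; the call takes the buffer pointer directly.
UInt64 currentFrame = 0;
UInt64 bytePos = 0;
const UInt32 bytesPerFrame = 4; // 2 channels * 16 bits

while (currentFrame < totalLengthInFrames) {
    UInt32 framesToWrite = 2048;
    if (totalLengthInFrames - currentFrame < framesToWrite)
        framesToWrite = (UInt32)(totalLengthInFrames - currentFrame);
    UInt32 bytesToWrite = framesToWrite * bytesPerFrame;
    int16_t *interleaved = (int16_t *)malloc(bytesToWrite);

    for (UInt32 j = 0; j < framesToWrite; j++) {
        int16_t leftSample  = 0; // fill from your source
        int16_t rightSample = 0; // fill from your source
        interleaved[2 * j]     = (int16_t)CFSwapInt16HostToBig(leftSample);  // frame layout: L R L R ...
        interleaved[2 * j + 1] = (int16_t)CFSwapInt16HostToBig(rightSample);
    }

    // Pass the buffer pointer itself, not its address.
    OSStatus err = AudioFileWriteBytes(audioFile, false, bytePos, &bytesToWrite, interleaved);
    assert(err == noErr);

    free(interleaved);
    bytePos      += bytesToWrite;
    currentFrame += framesToWrite;
}
Writing one 2048-frame buffer per call instead of one sample per call also removes the per-sample overhead that made the first version slow.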

iOS using AudioQueueNewOutput to output sound in both left and right channel

I am developing an app to show a sine wave.
Using AudioQueueNewOutput to output mono sound works OK, but when it comes to stereo output I have no idea how to do it.
I know that mChannelsPerFrame = 2 can generate output in both the left and right channels.
I also want to know the order in which bytes are sent to the left and right channels. Does the first sample go to the left channel and the second to the right?
Code:
_audioFormat = new AudioStreamBasicDescription();
_audioFormat->mSampleRate = SAMPLE_RATE; // 44100
_audioFormat->mFormatID = kAudioFormatLinearPCM;
_audioFormat->mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
_audioFormat->mFramesPerPacket = 1;
_audioFormat->mChannelsPerFrame = NUM_CHANNELS; // 1
_audioFormat->mBitsPerChannel = BITS_PER_CHANNEL; // 16
_audioFormat->mBytesPerPacket = BYTES_PER_FRAME; // 2
_audioFormat->mBytesPerFrame = BYTES_PER_FRAME; // 2
and
_sineTableLength = _audioFormat.mSampleRate / SAMPLE_LIMIT_FACTOR; // 44100/100 = 441
_sineTable = new SInt16[_sineTableLength];
for(int i = 0; i < _sineTableLength; i++)
{
// Transfer values between -1.0 and 1.0 to integer values between -sample max and sample max
_sineTable[i] = (SInt16)(sin(i * 2 * M_PI / _sineTableLength) * 32767);
}
and
AudioQueueNewOutput (&_audioFormat,
playbackCallback,
(__bridge void *)(self),
nil,
nil,
0,
&_queueObject);
static void playbackCallback (void* inUserData,
AudioQueueRef inAudioQueue,
AudioQueueBufferRef bufferReference){
SInt16* sample = (SInt16*)bufferReference->mAudioData;
// bufferSize 1024
for(int i = 0; i < bufferSize; i += _audioFormat.mBytesPerFrame, sample++)
{
// set value for *sample
// 9ms sin wave and 4.5ms 0
...
}
...
AudioQueueEnqueueBuffer(...)
}
Several days later, I found the answer.
First: the AudioStreamBasicDescription can be set just like this;
Then: bufferSize changes from 1024 to 2048;
And: the SInt16 in SInt16* sample = (SInt16*)bufferReference->mAudioData; all change to SInt32, because the channel count doubles, so the bits per frame double;
Last: each 16 bits of a sample contains the data for either the left or the right channel; just feed it whatever you want.
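To illustrate that layout, here is a minimal sketch of a stereo fill loop, assuming the 44.1 kHz / 16-bit format above with mChannelsPerFrame = 2 and mBytesPerFrame = 4; _sineTable and _sineTableLength come from the question, and the same tone is written to both channels:
static void playbackCallback(void *inUserData,
                             AudioQueueRef inAudioQueue,
                             AudioQueueBufferRef bufferReference)
{
    SInt16 *sample = (SInt16 *)bufferReference->mAudioData;
    UInt32 frames  = bufferReference->mAudioDataBytesCapacity / 4; // 4 bytes per stereo frame
    static int phase = 0;

    for (UInt32 i = 0; i < frames; i++) {
        SInt16 value = _sineTable[phase];
        *sample++ = value; // first 16 bits of the frame: left channel
        *sample++ = value; // next 16 bits: right channel
        phase = (phase + 1) % _sineTableLength;
    }

    bufferReference->mAudioDataByteSize = frames * 4;
    AudioQueueEnqueueBuffer(inAudioQueue, bufferReference, 0, NULL);
}
So for interleaved 16-bit stereo, the first sample of each 4-byte frame goes to the left channel and the second to the right.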

AudioStreamBasicDescription setting values for wav

I am trying to play a simple PCM file on iOS but couldn't wrap my head around AudioStreamBasicDescription, and this link does not provide enough information.
I get these values from the terminal:
afinfo BlameItOnTheNight.wav
File: BlameItOnTheNight.wav
File type ID: WAVE
Num Tracks: 1
----
Data format: 2 ch, 44100 Hz, 'lpcm' (0x0000000C) 16-bit little-endian signed integer
no channel layout.
estimated duration: 9.938141 sec
audio bytes: 1753088
audio packets: 438272
bit rate: 1411200 bits per second
packet size upper bound: 4
maximum packet size: 4
audio data file offset: 44
optimized
source bit depth: I16
----
Then I chose these values in code:
- (void)setupAudioFormat:(AudioStreamBasicDescription*)format
{
format->mSampleRate = 44100.0;
format->mFormatID = kAudioFormatLinearPCM;
format->mFramesPerPacket = 1;
format->mChannelsPerFrame = 2;
format->mBytesPerFrame = format->mChannelsPerFrame * sizeof(Float32);
format->mBytesPerPacket = format->mFramesPerPacket * format->mBytesPerFrame;
format->mBitsPerChannel = sizeof(Float32) * 8;
format->mReserved = 0;
format->mFormatFlags = kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kAudioFormatFlagIsPacked;
}
Audio plays really fast.
What's the correct way to calculate these values based on the actual audio file?
When I changed the values I was getting the following error:
error for object 0x7fba72c50db8: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
Then I finally figured out that my AudioStreamBasicDescription mBitsPerChannel value was not correct and that the buffer size was not big enough.
So first I changed the values to:
- (void)setupAudioFormat:(AudioStreamBasicDescription*)format
{
format->mSampleRate = 44100.0;
format->mFormatID = kAudioFormatLinearPCM;
format->mFramesPerPacket = 1; //For uncompressed audio, the value is 1. For variable bit-rate formats, the value is a larger fixed number, such as 1024 for AAC
format->mChannelsPerFrame = 2;
format->mBytesPerFrame = format->mChannelsPerFrame * 2;
format->mBytesPerPacket = format->mFramesPerPacket * format->mBytesPerFrame;
format->mBitsPerChannel = 16;
format->mReserved = 0;
format->mFormatFlags = kAudioFormatFlagIsSignedInteger |
kAudioFormatFlagsNativeEndian |
kLinearPCMFormatFlagIsPacked;
}
Then, when allocating the buffers, I increased the size:
// Allocate and prime playback buffers
playState.playing = true;
for (int i = 0; i < NUM_BUFFERS && playState.playing; i++)
{
AudioQueueAllocateBuffer(playState.queue, 32000, &playState.buffers[i]);
AudioOutputCallback(&playState, playState.queue, playState.buffers[i]);
}
In my original code it was set to 8000; changing it to 32000 solved the problem.
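The values themselves follow from the afinfo output: mBytesPerFrame = 2 channels * 16 bits / 8 = 4, and mBytesPerPacket is the same because mFramesPerPacket is 1 for linear PCM. As an alternative to hand-coding them, you can ask the file for its own AudioStreamBasicDescription via kAudioFilePropertyDataFormat; a sketch (fileURL standing in for the NSURL of the wav file):
AudioFileID fileID = NULL;
AudioStreamBasicDescription fileFormat = {0};
OSStatus err = AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, 0, &fileID);
if (err == noErr) {
    UInt32 size = sizeof(fileFormat);
    err = AudioFileGetProperty(fileID, kAudioFilePropertyDataFormat, &size, &fileFormat);
    // fileFormat now holds what afinfo printed: 2 ch, 44100 Hz,
    // 16-bit little-endian signed-integer linear PCM.
    AudioFileClose(fileID);
}
Using the format reported by the file also avoids mismatches like the incorrect mBitsPerChannel the question ran into.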

AudioConverterFillComplexBuffer work with Internet streamed mp3

I am currently streaming mp3 audio over the Internet. I am using AudioFileStream to parse the mp3 stream that comes in
through a CFReadStreamRef, decoding the mp3 with AudioConverterFillComplexBuffer, copying the converted PCM
data into a ring buffer, and finally playing the PCM with RemoteIO.
The problem I am currently facing is that AudioConverterFillComplexBuffer always returns 0 (no error), but the conversion
result seems incorrect. In detail, I notice:
A. The UInt32 *ioOutputDataPacketSize keeps the same value I sent in.
B. The convertedData.mBuffers[0].mDataByteSize always been set to the size of the outputbuffer (doesn't matter how big the buffer is).
C. I can only hear clicking noise with the output data.
Below are my procedures for rendering the audio.
The same procedure works for my Audio Queue implementation, so I believe
I did not do anything wrong in either the place where I invoke AudioConverterFillComplexBuffer or in its input callback.
I have been stuck on this issue for a long time. Any help will be highly appreciated.
Open an AudioFileStream.
// create an audio file stream parser
AudioFileTypeID fileTypeHint = kAudioFileMP3Type;
AudioFileStreamOpen(self, MyPropertyListenerProc, MyPacketsProc, fileTypeHint, &audioFileStream);
Handle the parsed data in the callback function ("MyPacketsProc").
void MyPacketsProc(void * inClientData,
UInt32 inNumberBytes,
UInt32 inNumberPackets,
const void * inInputData,
AudioStreamPacketDescription *inPacketDescriptions)
{
@synchronized(self)
{
// Init the audio converter.
if (!audioConverter)
AudioConverterNew(&asbd, &asbd_out, &audioConverter);
struct mp3Data mSettings;
memset(&mSettings, 0, sizeof(mSettings));
UInt32 packetsPerBuffer = 0;
UInt32 outputBufferSize = 1024 * 32; // 32 KB is a good starting point.
UInt32 sizePerPacket = asbd.mBytesPerPacket;
// Calculate the size per buffer.
// Variable Bit Rate Data.
if (sizePerPacket == 0)
{
UInt32 size = sizeof(sizePerPacket);
AudioConverterGetProperty(audioConverter, kAudioConverterPropertyMaximumOutputPacketSize, &size, &sizePerPacket);
if (sizePerPacket > outputBufferSize)
outputBufferSize = sizePerPacket;
packetsPerBuffer = outputBufferSize / sizePerPacket;
}
//CBR
else
packetsPerBuffer = outputBufferSize / sizePerPacket;
// Prepare the input data for the callback.
mSettings.inputBuffer.mDataByteSize = inNumberBytes;
mSettings.inputBuffer.mData = (void *)inInputData;
mSettings.inputBuffer.mNumberChannels = 1;
mSettings.numberPackets = inNumberPackets;
mSettings.packetDescription = inPacketDescriptions;
// Set up our output buffers
UInt8 * outputBuffer = (UInt8*)malloc(sizeof(UInt8) * outputBufferSize);
memset(outputBuffer, 0, outputBufferSize);
// describe output data buffers into which we can receive data.
AudioBufferList convertedData;
convertedData.mNumberBuffers = 1;
convertedData.mBuffers[0].mNumberChannels = 1;
convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
convertedData.mBuffers[0].mData = outputBuffer;
// Convert.
UInt32 ioOutputDataPackets = packetsPerBuffer;
OSStatus result = AudioConverterFillComplexBuffer(audioConverter,
converterComplexInputDataProc,
&mSettings,
&ioOutputDataPackets,
&convertedData,
NULL
);
// Enqueue the output PCM data.
TPCircularBufferProduceBytes(&m_pcmBuffer, convertedData.mBuffers[0].mData, convertedData.mBuffers[0].mDataByteSize);
free(outputBuffer);
}
}
Feed the audio converter from its callback function ("converterComplexInputDataProc").
OSStatus converterComplexInputDataProc(AudioConverterRef inAudioConverter,
UInt32* ioNumberDataPackets,
AudioBufferList* ioData,
AudioStreamPacketDescription** ioDataPacketDescription,
void* inUserData)
{
struct mp3Data *THIS = (struct mp3Data *)inUserData;
if (THIS->inputBuffer.mDataByteSize > 0)
{
*ioNumberDataPackets = THIS->numberPackets;
ioData->mNumberBuffers = 1;
ioData->mBuffers[0].mDataByteSize = THIS->inputBuffer.mDataByteSize;
ioData->mBuffers[0].mData = THIS->inputBuffer.mData;
ioData->mBuffers[0].mNumberChannels = 1;
if (ioDataPacketDescription)
*ioDataPacketDescription = THIS->packetDescription;
}
else
*ioDataPacketDescription = 0;
return 0;
}
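For comparison, a common shape for this kind of input callback (a sketch, not the original code) hands the pending chunk over once and then reports zero packets, so the converter returns with whatever it has converted instead of pulling the same bytes again:
static OSStatus converterInputProc(AudioConverterRef inConverter,
                                   UInt32 *ioNumberDataPackets,
                                   AudioBufferList *ioData,
                                   AudioStreamPacketDescription **outDataPacketDescription,
                                   void *inUserData)
{
    struct mp3Data *THIS = (struct mp3Data *)inUserData;

    if (THIS->inputBuffer.mDataByteSize == 0) {
        // Nothing left from this parsed chunk; tell the converter to stop pulling.
        *ioNumberDataPackets = 0;
        return noErr;
    }

    ioData->mNumberBuffers              = 1;
    ioData->mBuffers[0].mData           = THIS->inputBuffer.mData;
    ioData->mBuffers[0].mDataByteSize   = THIS->inputBuffer.mDataByteSize;
    ioData->mBuffers[0].mNumberChannels = 1;
    *ioNumberDataPackets = THIS->numberPackets;

    if (outDataPacketDescription)
        *outDataPacketDescription = THIS->packetDescription;

    // Mark the chunk as consumed so the next call reports zero packets.
    THIS->inputBuffer.mDataByteSize = 0;
    THIS->numberPackets = 0;
    return noErr;
}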
Playback using the RemoteIO component.
The input and output AudioStreamBasicDescription.
Input:
Sample Rate: 16000
Format ID: .mp3
Format Flags: 0
Bytes per Packet: 0
Frames per Packet: 576
Bytes per Frame: 0
Channels per Frame: 1
Bits per Channel: 0
output:
Sample Rate: 44100
Format ID: lpcm
Format Flags: 3116
Bytes per Packet: 4
Frames per Packet: 1
Bytes per Frame: 4
Channels per Frame: 1
Bits per Channel: 32

What stream format should iOS5 Effect Units use

I'm trying to use a Low Pass Filter AU. I keep getting a kAudioUnitErr_FormatNotSupported (-10868) error when setting the stream format to the filter unit, but if I just use the Remote IO unit there's no error.
The stream format I'm using is (Updated):
myASBD.mSampleRate = hardwareSampleRate;
myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mFormatFlags = kAudioFormatFlagIsSignedInteger;
myASBD.mBitsPerChannel = 8 * sizeof(float);
myASBD.mFramesPerPacket = 1;
myASBD.mChannelsPerFrame = 1;
myASBD.mBytesPerPacket = sizeof(float) * myASBD.mFramesPerPacket;
myASBD.mBytesPerFrame = sizeof(float) * myASBD.mChannelsPerFrame;
And I'm setting the filter stream like this:
// Sets input stream type to ASBD
setupErr = AudioUnitSetProperty(filterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &myASBD, sizeof(myASBD));
NSLog(#"Filter in: %i", setupErr);
//NSAssert(setupErr == noErr, #"No ASBD on Finput");
//Sets output stream type to ASBD
setupErr = AudioUnitSetProperty(filterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &myASBD, sizeof(myASBD));
NSLog(#"Filter out: %i", setupErr);
NSAssert(setupErr == noErr, #"No ASBD on Foutput");
The canonical format for iOS filter audio units is 8.24 fixed-point (linear PCM), which is 32 bits per channel, not 16.
What format is working with the reverb unit? I'm getting weird errors trying to record a buffer... any news on this topic?
Try this for the canonical format.
size_t bytesPerSample = sizeof (AudioUnitSampleType); //Default is 4 bytes
myASBD.mSampleRate = hardwareSampleRate;
myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mFormatFlags = kAudioFormatFlagsAudioUnitCanonical; //Canonical AU format
myASBD.mBytesPerPacket = bytesPerSample;
myASBD.mFramesPerPacket = 1;
myASBD.mBytesPerFrame = bytesPerSample;
myASBD.mChannelsPerFrame = 2; //Stereo
myASBD.mBitsPerChannel = 8 * bytesPerSample; // 32-bit (8.24 fixed-point)
You will need to make sure all your AudioUnits ASBDs are configured uniformly.
If you are doing heavy audio processing, floats (supported since iOS 5) are not a bad idea either.
size_t bytesPerSample = sizeof (float); //float is 4 bytes
myASBD.mSampleRate = hardwareSampleRate;
myASBD.mFormatID = kAudioFormatLinearPCM;
myASBD.mFormatFlags = kAudioFormatFlagIsFloat;
myASBD.mBytesPerPacket = bytesPerSample;
myASBD.mFramesPerPacket = 1;
myASBD.mBytesPerFrame = bytesPerSample;
myASBD.mChannelsPerFrame = 2;
myASBD.mBitsPerChannel = 8 * bytesPerSample; //32bit float
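A short sketch of what configuring the ASBDs uniformly can look like: filterUnit is from the question, ioUnit stands in for the RemoteIO unit, and the same myASBD is pushed to every connection point.
OSStatus setupErr = noErr;
setupErr = AudioUnitSetProperty(filterUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0, &myASBD, sizeof(myASBD));
setupErr = AudioUnitSetProperty(filterUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, 0, &myASBD, sizeof(myASBD));
// RemoteIO: the input scope of the output element (bus 0) is the format you feed the speaker path.
setupErr = AudioUnitSetProperty(ioUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0, &myASBD, sizeof(myASBD));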
