Reading audio files on iOS with ExtAudioFileRead, it seems that reaching EOF completely freezes the reader… Example (assumes the _abl AudioBufferList and _eaf ExtAudioFileRef are allocated and correctly configured):
- ( void )testRead
{
UInt32 requestedFrames = 1024;
UInt32 numFrames = requestedFrames;
OSStatus error = 0;
error = ExtAudioFileRead( _eaf, &numFrames, _abl );
if( numFrames < requestedFrames ) //eof, want to read enough frames from the beginning of the file to reach requestedFrames and loop gaplessly
{
requestedFrames = requestedFrames - numFrames;
numFrames = requestedFrames;
// move some pointers in _abl's buffers to write at the correct offset
error = ExtAudioFileSeek( _eaf, 0 );
error = ExtAudioFileRead( _eaf, &numFrames, _abl );
if( numFrames != requestedFrames ) //Now this call always sets numFrames to the same value as the previous read call...
{
NSLog( #"Oh no!" );
}
}
}
No errors, always the same behavior, exactly as if the reader was stuck at the end of the file. ExtAudioFileTell confirms the requested seek, btw. Also tried keeping track of the position in the file to request only the number of frames available at eof, same result: as soon as the last packet is read, seek seems to have no effect.
Happily seeking in other circumstances.
Bug? Feature? Imminent face palm? I'd very much appreciate any help in solving this!
I'm testing this on an iPad 3 ( iOS7.1 ).
Cheers,
Gregzo
Woozah!
Gotcha, evil AudioBufferList tinkerer.
So, in addition to informing the client of the number of frames actually read, ExtAudioFileRead also sets each AudioBuffer's mDataByteSize in the AudioBufferList to the number of bytes it read. Since subsequent reads are clamped to that value, not resetting it after hitting EOF results in perpetually getting fewer frames than requested.
So, once EOF is reached, simply reset the size of the AudioBufferList's buffers:
-( void )resetABLBuffersSize: ( AudioBufferList * )abl size: ( UInt32 )size
{
    AudioBuffer * buffer;
    UInt32 i;
    for( i = 0; i < abl->mNumberBuffers; i++ )
    {
        buffer = &( abl->mBuffers[ i ] );
        buffer->mDataByteSize = size;
    }
}
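For completeness, here is roughly how that reset fits into the original looping read. This is only a sketch against the question's _eaf/_abl setup: the _bytesPerFrame ivar is an assumption standing in for whatever matches your client data format, and the pointer-offsetting step is only hinted at in a comment.

- ( void )readWithGaplessLoop
{
    UInt32 requestedFrames = 1024;
    UInt32 numFrames = requestedFrames;
    OSStatus error = ExtAudioFileRead( _eaf, &numFrames, _abl );

    if( numFrames < requestedFrames ) // EOF reached mid-buffer
    {
        UInt32 remainingFrames = requestedFrames - numFrames;

        // Offset each buffer's mData by the bytes already written here,
        // so the wrapped-around data lands right after the file's tail.

        // Undo ExtAudioFileRead's clamping of mDataByteSize so the next
        // read isn't limited to what the previous one returned.
        [ self resetABLBuffersSize: _abl size: remainingFrames * _bytesPerFrame ];

        error = ExtAudioFileSeek( _eaf, 0 );
        numFrames = remainingFrames;
        error = ExtAudioFileRead( _eaf, &numFrames, _abl );
    }
}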
Shouldn't this be documented? The official docs only describe the AudioBufferList parameter as: "One or more buffers into which the audio data is read."
Cheers,
Gregzo
Related
I am trying to take a video file, read it in using AVAssetReader, and pass the audio off to Core Audio for processing (adding effects and stuff) before saving it back out to disk using AVAssetWriter. I would like to point out that if I set the componentSubType on the AudioComponentDescription of my output node to RemoteIO, things play correctly through the speakers. This makes me confident that my AUGraph is properly set up, as I can hear things working. I am setting the subType to GenericOutput, though, so I can do the rendering myself and get back the adjusted audio.
I am reading in the audio and pass the CMSampleBufferRef off to copyBuffer. This puts the audio into a circular buffer that will be read in later.
- (void)copyBuffer:(CMSampleBufferRef)buf {
if (_readyForMoreBytes == NO)
{
return;
}
AudioBufferList abl;
CMBlockBufferRef blockBuffer;
CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(buf, NULL, &abl, sizeof(abl), NULL, NULL, kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
UInt32 size = (unsigned int)CMSampleBufferGetTotalSampleSize(buf);
BOOL bytesCopied = TPCircularBufferProduceBytes(&circularBuffer, abl.mBuffers[0].mData, size);
if (!bytesCopied){
// The circular buffer is full: stop accepting more bytes and stash this chunk in a rescue buffer.
_readyForMoreBytes = NO;
if (size > kRescueBufferSize){
NSLog(#"Unable to allocate enought space for rescue buffer, dropping audio frame");
} else {
if (rescueBuffer == nil) {
rescueBuffer = malloc(kRescueBufferSize);
}
rescueBufferSize = size;
memcpy(rescueBuffer, abl.mBuffers[0].mData, size);
}
}
CFRelease(blockBuffer);
if (!self.hasBuffer && bytesCopied > 0)
{
self.hasBuffer = YES;
}
}
Next I call processOutput. This does a manual render on the outputUnit. When AudioUnitRender is called it invokes the playbackCallback below, which is what is hooked up as the input callback on my first node. playbackCallback pulls the data off the circular buffer and feeds it into the audioBufferList passed in. Like I said before, if the output is set to RemoteIO this causes the audio to be played correctly through the speakers. When AudioUnitRender finishes, it returns noErr and the bufferList object contains valid data. When I call CMSampleBufferSetDataBufferFromAudioBufferList, though, I get kCMSampleBufferError_RequiredParameterMissing (-12731).
-(CMSampleBufferRef)processOutput
{
if(self.offline == NO)
{
return NULL;
}
AudioUnitRenderActionFlags flags = 0;
AudioTimeStamp inTimeStamp;
memset(&inTimeStamp, 0, sizeof(AudioTimeStamp));
inTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
UInt32 busNumber = 0;
UInt32 numberFrames = 512;
inTimeStamp.mSampleTime = 0;
UInt32 channelCount = 2;
AudioBufferList *bufferList = (AudioBufferList*)malloc(sizeof(AudioBufferList)+sizeof(AudioBuffer)*(channelCount-1));
bufferList->mNumberBuffers = channelCount;
for (int j=0; j<channelCount; j++)
{
AudioBuffer buffer = {0};
buffer.mNumberChannels = 1;
buffer.mDataByteSize = numberFrames*sizeof(SInt32);
buffer.mData = calloc(numberFrames,sizeof(SInt32));
bufferList->mBuffers[j] = buffer;
}
CheckError(AudioUnitRender(outputUnit, &flags, &inTimeStamp, busNumber, numberFrames, bufferList), @"AudioUnitRender outputUnit");
CMSampleBufferRef sampleBufferRef = NULL;
CMFormatDescriptionRef format = NULL;
CMSampleTimingInfo timing = { CMTimeMake(1, 44100), kCMTimeZero, kCMTimeInvalid };
AudioStreamBasicDescription audioFormat = self.audioFormat;
CheckError(CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &audioFormat, 0, NULL, 0, NULL, NULL, &format), @"CMAudioFormatDescriptionCreate");
CheckError(CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numberFrames, 1, &timing, 0, NULL, &sampleBufferRef), @"CMSampleBufferCreate");
CheckError(CMSampleBufferSetDataBufferFromAudioBufferList(sampleBufferRef, kCFAllocatorDefault, kCFAllocatorDefault, 0, bufferList), @"CMSampleBufferSetDataBufferFromAudioBufferList");
return sampleBufferRef;
}
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
int numberOfChannels = ioData->mBuffers[0].mNumberChannels;
SInt16 *outSample = (SInt16 *)ioData->mBuffers[0].mData;
// Zero the output first in case there is nothing available to fill it with.
memset(outSample, 0, ioData->mBuffers[0].mDataByteSize);
MyAudioPlayer *p = (__bridge MyAudioPlayer *)inRefCon;
if (p.hasBuffer){
int32_t availableBytes;
SInt16 *bufferTail = TPCircularBufferTail([p getBuffer], &availableBytes);
int32_t requestedBytesSize = inNumberFrames * kUnitSize * numberOfChannels;
int bytesToRead = MIN(availableBytes, requestedBytesSize);
memcpy(outSample, bufferTail, bytesToRead);
TPCircularBufferConsume([p getBuffer], bytesToRead);
if (availableBytes <= requestedBytesSize*2){
[p setReadyForMoreBytes];
}
if (availableBytes <= requestedBytesSize) {
p.hasBuffer = NO;
}
}
return noErr;
}
The CMSampleBufferRef I pass in looks valid (below is a dump of the object from the debugger)
CMSampleBuffer 0x7f87d2a03120 retainCount: 1 allocator: 0x103333180
invalid = NO
dataReady = NO
makeDataReadyCallback = 0x0
makeDataReadyRefcon = 0x0
formatDescription = <CMAudioFormatDescription 0x7f87d2a02b20 [0x103333180]> {
mediaType:'soun'
mediaSubType:'lpcm'
mediaSpecific: {
ASBD: {
mSampleRate: 44100.000000
mFormatID: 'lpcm'
mFormatFlags: 0xc2c
mBytesPerPacket: 2
mFramesPerPacket: 1
mBytesPerFrame: 2
mChannelsPerFrame: 1
mBitsPerChannel: 16 }
cookie: {(null)}
ACL: {(null)}
}
extensions: {(null)}
}
sbufToTrackReadiness = 0x0
numSamples = 512
sampleTimingArray[1] = {
{PTS = {0/1 = 0.000}, DTS = {INVALID}, duration = {1/44100 = 0.000}},
}
dataBuffer = 0x0
The buffer list looks like this
Printing description of bufferList:
(AudioBufferList *) bufferList = 0x00007f87d280b0a0
Printing description of bufferList->mNumberBuffers:
(UInt32) mNumberBuffers = 2
Printing description of bufferList->mBuffers:
(AudioBuffer [1]) mBuffers = {
[0] = (mNumberChannels = 1, mDataByteSize = 2048, mData = 0x00007f87d3008c00)
}
Really at a loss here, hoping someone can help. Thanks,
In case it matters, I am debugging this in the iOS 8.3 simulator, and the audio is coming from an mp4 that I shot on my iPhone 6 and then saved to my laptop.
I have read the following questions, but still to no avail; things are not working.
How to convert AudioBufferList to CMSampleBuffer?
Converting an AudioBufferList to a CMSampleBuffer Produces Unexpected Results
CMSampleBufferSetDataBufferFromAudioBufferList returning error 12731
core audio offline rendering GenericOutput
UPDATE
I poked around some more and noticed that my AudioBufferList right before AudioUnitRender runs looks like this:
bufferList->mNumberBuffers = 2,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 2048
mDataByteSize is numberFrames*sizeof(SInt32), which is 512 * 4. When I look at the AudioBufferList passed in playbackCallback, the list looks like this:
bufferList->mNumberBuffers = 1,
bufferList->mBuffers[0].mNumberChannels = 1,
bufferList->mBuffers[0].mDataByteSize = 1024
Not really sure where that other buffer is going, or where the other 1024 bytes went...
If, when I am finished calling Render, I do something like this:
AudioBufferList newbuff;
newbuff.mNumberBuffers = 1;
newbuff.mBuffers[0] = bufferList->mBuffers[0];
newbuff.mBuffers[0].mDataByteSize = 1024;
and pass newbuff off to CMSampleBufferSetDataBufferFromAudioBufferList, the error goes away.
If I try setting bufferList to have mNumberBuffers of 1, or setting its mDataByteSize to numberFrames*sizeof(SInt16), I get a -50 when calling AudioUnitRender.
UPDATE 2
I hooked up a render callback so I can inspect the output when I play the sound over the speakers. I noticed that the output that goes to the speakers also has an AudioBufferList with 2 buffers, and that mDataByteSize during the input callback is 1024 while in the render callback it's 2048, which is the same as I have been seeing when manually calling AudioUnitRender. When I inspect the data in the rendered AudioBufferList I notice that the bytes in the 2 buffers are the same, which means I can just ignore the second buffer. But I am not sure how to handle the fact that the data is 2048 in size after being rendered instead of 1024 as it's being taken in. Any ideas on why that could be happening? Is it in more of a raw form after going through the audio graph, and is that why the size is doubling?
Sounds like the issue you're dealing with is because of a discrepancy in the number of channels. The reason you're seeing data in blocks of 2048 instead of 1024 is because it is feeding you back two channels (stereo). Check to make sure all of your audio units are properly configured to use mono throughout the entire audio graph, including the Pitch Unit and any audio format descriptions.
One thing to especially beware of is that calls to AudioUnitSetProperty can fail - so be sure to wrap those in CheckError() as well.
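As a rough illustration (not code from the question: pitchUnit and the format values below are assumptions, and the ASBD must match whatever the rest of your graph actually uses), forcing a mono stream format onto a unit and checking the result could look like this:

// A 1-channel, 16-bit signed integer LPCM format; adjust flags and sizes to your graph.
AudioStreamBasicDescription monoFormat = {0};
monoFormat.mSampleRate       = 44100.0;
monoFormat.mFormatID         = kAudioFormatLinearPCM;
monoFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
monoFormat.mChannelsPerFrame = 1; // mono everywhere in the graph
monoFormat.mBitsPerChannel   = 16;
monoFormat.mFramesPerPacket  = 1;
monoFormat.mBytesPerFrame    = 2;
monoFormat.mBytesPerPacket   = 2;

// pitchUnit is a placeholder for whichever AudioUnit in your graph needs checking.
CheckError(AudioUnitSetProperty(pitchUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input, 0,
                                &monoFormat, sizeof(monoFormat)),
           @"set mono format on pitch unit input");
CheckError(AudioUnitSetProperty(pitchUnit, kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Output, 0,
                                &monoFormat, sizeof(monoFormat)),
           @"set mono format on pitch unit output");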
My app does huge data processing on audio coming from the mic input.
In order to get a "demo mode", I want to do the same thing based on a local .caf audio file.
I managed to get the audio file.
Now I am trying to use ExtAudioFileRead to read the .caf file and then do the data processing.
void readFile()
{
OSStatus err = noErr;
// Audio file
NSURL *path = [[NSBundle mainBundle] URLForResource:@"output" withExtension:@"caf"];
ExtAudioFileOpenURL((__bridge CFURLRef)path, &audio->audiofile);
assert(audio->audiofile);
// File's format.
AudioStreamBasicDescription fileFormat;
UInt32 size = sizeof(fileFormat);
err = ExtAudioFileGetProperty(audio->audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);
// tell the ExtAudioFile API what format we want samples back in
//bzero(&audio->clientFormat, sizeof(audio->clientFormat));
audio->clientFormat.mSampleRate = SampleRate;
audio->clientFormat.mFormatID = kAudioFormatLinearPCM;
audio->clientFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audio->clientFormat.mFramesPerPacket = 1;
audio->clientFormat.mChannelsPerFrame = 1;
audio->clientFormat.mBitsPerChannel = 16;//sizeof(AudioSampleType) * 8;
audio->clientFormat.mBytesPerPacket = 2 * audio->clientFormat.mChannelsPerFrame;
audio->clientFormat.mBytesPerFrame = 2 * audio->clientFormat.mChannelsPerFrame;
err = ExtAudioFileSetProperty(audio->audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(audio->clientFormat), &audio->clientFormat);
// find out how many frames we need to read
SInt64 numFrames = 0;
size = sizeof(numFrames);
err = ExtAudioFileGetProperty(audio->audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);
// create the buffers for reading in data
AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (audio->clientFormat.mChannelsPerFrame - 1));
bufferList->mNumberBuffers = audio->clientFormat.mChannelsPerFrame;
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii)
{
bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * (int)numFrames;
bufferList->mBuffers[ii].mNumberChannels = 1;
bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
}
UInt32 maxReadFrames = 1024;
UInt32 rFrames = (UInt32)numFrames;
while(rFrames > 0)
{
UInt32 framesToRead = (maxReadFrames > rFrames) ? rFrames : maxReadFrames;
err = ExtAudioFileRead(audio->audiofile, &framesToRead, bufferList);
[audio processAudio:bufferList];
if (rFrames % SampleRate == 0)
[audio realtimeUpdate:nil];
rFrames = rFrames - maxReadFrames;
}
// Close the file
ExtAudioFileDispose(audio->audiofile);
// destroy the buffers
for (int ii=0; ii < bufferList->mNumberBuffers; ++ii)
{
free(bufferList->mBuffers[ii].mData);
}
free(bufferList);
bufferList = NULL;
}
There is clearly something that I did not understand, or that I am doing wrong, with ExtAudioFileRead, because this code does not work at all. I have two main problems:
The file is played instantaneously. I mean that 44,100 samples clearly do not take 1 second: the processing of my 3-minute audio file is done in a few seconds...
During the processing, I need to update the UI, so I have a few dispatch_sync calls in processAudio and realtimeUpdate. ExtAudioFileRead really does not seem to appreciate this, and it freezes.
Thanks for your help.
The code you wrote is just reading samples from the file and then calling processAudio. This is done as fast as possible: as soon as processAudio is finished, the next batch of samples is read and processAudio is called again. You shouldn't assume that reading from an audio file (which is a low-level, non-blocking OS call) takes the same amount of time that the audio would take to play.
If you want to process the audio in the file according to the sample rate you should probably use an AUFilePlayer audio unit. This can play back the file at the right speed and you can use a callback to process the samples in real audio time instead of "as fast as possible".
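For reference, a minimal sketch of what scheduling a file on a file-player unit looks like; the unit creation and graph wiring are omitted, and the function and parameter names here are assumptions rather than code from the question:

#import <AudioToolbox/AudioToolbox.h>

// Hedged sketch: schedule an already-opened AudioFileID on a file-player unit
// (kAudioUnitSubType_AudioFilePlayer) so the graph pulls its data in real time.
static OSStatus scheduleFileOnPlayer(AudioUnit filePlayerUnit, AudioFileID audioFile, UInt32 totalFrames)
{
    OSStatus err = AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileIDs,
                                        kAudioUnitScope_Global, 0, &audioFile, sizeof(audioFile));
    if (err) return err;

    // Play the whole file once, starting at frame 0.
    ScheduledAudioFileRegion region = {0};
    region.mTimeStamp.mFlags      = kAudioTimeStampSampleTimeValid;
    region.mTimeStamp.mSampleTime = 0;
    region.mAudioFile             = audioFile;
    region.mLoopCount             = 0;
    region.mStartFrame            = 0;
    region.mFramesToPlay          = totalFrames;
    err = AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileRegion,
                               kAudioUnitScope_Global, 0, &region, sizeof(region));
    if (err) return err;

    // Prime the unit's internal buffers (0 = default amount).
    UInt32 defaultPrime = 0;
    err = AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFilePrime,
                               kAudioUnitScope_Global, 0, &defaultPrime, sizeof(defaultPrime));
    if (err) return err;

    // A sample time of -1 means "start on the next render cycle".
    AudioTimeStamp startTime = {0};
    startTime.mFlags      = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = -1;
    return AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                                kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));
}

Your processAudio logic would then live in a render callback (or render-notify callback) on the graph, which the system invokes in real audio time, and UI updates can be dispatched asynchronously to the main queue from there.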
I have code that needs to write 16 bit PCM samples from memory into an AudioBuffer.
// Completely override the output callback function
- (void)
output:(EZOutput *)output
callbackWithActionFlags:(AudioUnitRenderActionFlags *)ioActionFlags
inTimeStamp:(const AudioTimeStamp *)inTimeStamp
inBusNumber:(UInt32)inBusNumber
inNumberFrames:(UInt32)inNumberFrames
ioData:(AudioBufferList *)ioData {
for (int i = 0; i < ioData->mNumberBuffers; i++) {
AudioBuffer audioBuffer = ioData->mBuffers[i];
for (int j = 0; j < audioBuffer.mDataByteSize; j++) {
audioBuffer.mData[j]; //void* how do I know what this expects??
}
}
// Fill the ioData with your audio data from anywhere
}
I'm using the EZAudio framework from https://github.com/syedhali/EZAudio to help play back some PCM audio data I download using a proprietary mechanism.
How am I supposed to fill a buffer of type void*? Isn't that just a pointer to an arbitrary location?
The format of the data the buffer expects is given by -audioStreamBasicDescription in EZOutput. This returns an AudioStreamBasicDescription struct that you can read to determine the number of channels, bits per channel, etc.
To write to a void *, cast it to the appropriate type before writing if you're handling individual samples, or call memcpy if your audio data is already prepared.
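For example, if -audioStreamBasicDescription reports interleaved 16-bit signed integer samples, the callback body could be filled in along these lines. This is only a sketch: _pcmData, _pcmLength and _readIndex are hypothetical ivars holding your downloaded PCM and the current playback position, not part of EZAudio.

- (void)          output:(EZOutput *)output
 callbackWithActionFlags:(AudioUnitRenderActionFlags *)ioActionFlags
             inTimeStamp:(const AudioTimeStamp *)inTimeStamp
             inBusNumber:(UInt32)inBusNumber
          inNumberFrames:(UInt32)inNumberFrames
                  ioData:(AudioBufferList *)ioData {
    for (int i = 0; i < ioData->mNumberBuffers; i++) {
        AudioBuffer *audioBuffer = &ioData->mBuffers[i];

        // Interpret the untyped mData as the sample type the ASBD promises.
        SInt16 *samples = (SInt16 *)audioBuffer->mData;
        UInt32 sampleCount = audioBuffer->mDataByteSize / sizeof(SInt16);

        for (UInt32 j = 0; j < sampleCount; j++) {
            // Copy one sample at a time, padding with silence when the data runs out.
            samples[j] = (_readIndex < _pcmLength) ? _pcmData[_readIndex++] : 0;
        }

        // Or, if the downloaded PCM already matches the output format exactly,
        // a single memcpy of mDataByteSize bytes into audioBuffer->mData works too.
    }
}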
So, basically I want to play some audio files (mp3 and caf mostly), but the callback never gets called, except when I call it myself to prime the queue.
Here's my data struct:
struct AQPlayerState
{
CAStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[kBufferNum];
AudioFileID mAudioFile;
UInt32 bufferByteSize;
SInt64 mCurrentPacket;
UInt32 mNumPacketsToRead;
AudioStreamPacketDescription *mPacketDescs;
bool mIsRunning;
};
Here's my callback function:
static void HandleOutputBuffer (void *aqData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer)
{
NSLog(#"HandleOutput");
AQPlayerState *pAqData = (AQPlayerState *) aqData;
if (pAqData->mIsRunning == false) return;
UInt32 numBytesReadFromFile;
UInt32 numPackets = pAqData->mNumPacketsToRead;
AudioFileReadPackets (pAqData->mAudioFile,
false,
&numBytesReadFromFile,
pAqData->mPacketDescs,
pAqData->mCurrentPacket,
&numPackets,
inBuffer->mAudioData);
if (numPackets > 0) {
inBuffer->mAudioDataByteSize = numBytesReadFromFile;
AudioQueueEnqueueBuffer (pAqData->mQueue,
inBuffer,
(pAqData->mPacketDescs ? numPackets : 0),
pAqData->mPacketDescs);
pAqData->mCurrentPacket += numPackets;
} else {
// AudioQueueStop(pAqData->mQueue, false);
// AudioQueueDispose(pAqData->mQueue, true);
// AudioFileClose (pAqData->mAudioFile);
// free(pAqData->mPacketDescs);
// free(pAqData->mFloatBuffer);
pAqData->mIsRunning = false;
}
}
And here's my method:
- (void)playFile
{
AQPlayerState aqData;
// get the source file
NSString *p = [[NSBundle mainBundle] pathForResource:@"1_Female" ofType:@"mp3"];
NSURL *url2 = [NSURL fileURLWithPath:p];
CFURLRef srcFile = (__bridge CFURLRef)url2;
OSStatus result = AudioFileOpenURL(srcFile, 0x1/*fsRdPerm*/, 0/*inFileTypeHint*/, &aqData.mAudioFile);
CFRelease (srcFile);
CheckError(result, "Error opinning sound file");
UInt32 size = sizeof(aqData.mDataFormat);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyDataFormat, &size, &aqData.mDataFormat),
"Error getting file's data format");
CheckError(AudioQueueNewOutput(&aqData.mDataFormat, HandleOutputBuffer, &aqData, CFRunLoopGetCurrent(), kCFRunLoopCommonModes, 0, &aqData.mQueue),
"Error AudioQueueNewOutPut");
// we need to calculate how many packets we read at a time and how big a buffer we need
// we base this on the size of the packets in the file and an approximate duration for each buffer
{
bool isFormatVBR = (aqData.mDataFormat.mBytesPerPacket == 0 || aqData.mDataFormat.mFramesPerPacket == 0);
// first check to see what the max size of a packet is - if it is bigger
// than our allocation default size, that needs to become larger
UInt32 maxPacketSize;
size = sizeof(maxPacketSize);
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyPacketSizeUpperBound, &size, &maxPacketSize),
"Error getting max packet size");
// adjust buffer size to represent about a second of audio based on this format
CalculateBytesForTime(aqData.mDataFormat, maxPacketSize, 1.0/*seconds*/, &aqData.bufferByteSize, &aqData.mNumPacketsToRead);
if (isFormatVBR) {
aqData.mPacketDescs = new AudioStreamPacketDescription [aqData.mNumPacketsToRead];
} else {
aqData.mPacketDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
}
printf ("Buffer Byte Size: %d, Num Packets to Read: %d\n", (int)aqData.bufferByteSize, (int)aqData.mNumPacketsToRead);
}
// if the file has a magic cookie, we should get it and set it on the AQ
size = sizeof(UInt32);
result = AudioFileGetPropertyInfo(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, NULL);
if (!result && size) {
char* cookie = new char [size];
CheckError(AudioFileGetProperty(aqData.mAudioFile, kAudioFilePropertyMagicCookieData, &size, cookie),
"Error getting cookie from file");
CheckError(AudioQueueSetProperty(aqData.mQueue, kAudioQueueProperty_MagicCookie, cookie, size),
"Error setting cookie to file");
delete[] cookie;
}
aqData.mCurrentPacket = 0;
for (int i = 0; i < kBufferNum; ++i) {
CheckError(AudioQueueAllocateBuffer (aqData.mQueue,
aqData.bufferByteSize,
&aqData.mBuffers[i]),
"Error AudioQueueAllocateBuffer");
HandleOutputBuffer (&aqData,
aqData.mQueue,
aqData.mBuffers[i]);
}
// set queue's gain
Float32 gain = 1.0;
CheckError(AudioQueueSetParameter (aqData.mQueue,
kAudioQueueParam_Volume,
gain),
"Error AudioQueueSetParameter");
aqData.mIsRunning = true;
CheckError(AudioQueueStart(aqData.mQueue,
NULL),
"Error AudioQueueStart");
}
And the output when I press play:
Buffer Byte Size: 40310, Num Packets to Read: 38
HandleOutput start
HandleOutput start
HandleOutput start
I tried replacing CFRunLoopGetCurrent() with CFRunLoopGetMain(), and kCFRunLoopCommonModes with kCFRunLoopDefaultMode, but nothing changed.
Shouldn't the primed buffers start playing right away when I start the queue?
When I start the queue, no callbacks are fired.
What am I doing wrong? Thanks for any ideas
What you are trying to do here is a basic example of audio playback using Audio Queues. Without looking at your code in detail to see what's missing (that could take a while), I'd rather recommend that you follow the steps in this basic sample code, which does exactly what you're doing without the extras that aren't really relevant (for example, why are you trying to add audio gain?).
Somewhere else you were trying to play audio using audio units. Audio units are more complex than basic audio queue playback, and I wouldn't attempt them before being very comfortable with audio queues. But you can look at this example project for a basic example of audio queues.
In general, when it comes to Core Audio programming on iOS, it's best to take your time with the basic examples and build your way up. The problem with a lot of tutorials online is that they add extra stuff and often mix it with Obj-C code, when Core Audio is purely C code (i.e. the extra stuff won't add anything to the learning process). I strongly recommend you go over the book Learning Core Audio if you haven't already. All the sample code is available online, but you can also clone it from this repo for convenience. That's how I learned Core Audio. It takes time :)
I've successfully recorded audio from the microphone into an audio file using Audio Units with the help of openframeworks and this website http://atastypixel.com/blog/using-remoteio-audio-unit.
I want to be able to stream the file back to audio units and play the audio. According to Play an audio file using RemoteIO and Audio Unit I can use ExtAudioFileOpenURL and ExtAudioFileRead. However, how do I play audio data in my buffer?
This is what I currently have:
static OSStatus setupAudioFileRead() {
//construct the file destination URL
CFURLRef destinationURL = audioSystemFileURL();
OSStatus status = ExtAudioFileOpenURL(destinationURL, &audioFileRef);
CFRelease(destinationURL);
if (checkStatus(status)) { ofLog(OF_LOG_ERROR, "ofxiPhoneSoundStream: Couldn't open file to read"); return status; }
while( TRUE ) {
// Try to fill the buffer to capacity.
UInt32 framesRead = 8000;
status = ExtAudioFileRead( audioFileRef, &framesRead, &inputBufferList );
// error check
if( checkStatus(status) ) { break; }
// 0 frames read means EOF.
if( framesRead == 0 ) { break; }
//play audio???
}
return noErr;
}
From this author: http://atastypixel.com/blog/using-remoteio-audio-unit/, if you scroll down to the PLAYBACK section, try something like this:
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Notes: ioData contains buffers (may be more than one!)
// Fill them up as much as you can. Remember to set the size value in each buffer to match how
// much data is in the buffer.
for (int i=0; i < ioData->mNumberBuffers; i++)
{
AudioBuffer buffer = ioData->mBuffers[i];
// copy from your whatever buffer data to output buffer
UInt32 size = min(buffer.mDataByteSize, your buffer.size);
memcpy(buffer.mData, your buffer, size);
buffer.mDataByteSize = size; // indicate how much data we wrote in the buffer
// To test if your Audio Unit setup is working - comment out the three
// lines above and uncomment the for loop below to hear random noise
/*
UInt16 *frameBuffer = buffer.mData;
for (int j = 0; j < inNumberFrames; j++) {
frameBuffer[j] = rand();
}
*/
}
return noErr;
}
If you are only looking to record from the mic to a file and play it back, Apple's SpeakHere sample is probably much more ready to use.
Basically,
1. Create a RemoteIO unit (see references about how to create RemoteIO);
2. Create a FilePlayer audio unit, a dedicated audio unit that reads an audio file and provides its audio data to output units, for example the RemoteIO unit created in step 1. To actually use the FilePlayer, a lot of settings (which file to play, which part of the file to play, etc.) need to be done on it;
3. Set the kAudioUnitProperty_SetRenderCallback and kAudioUnitProperty_StreamFormat properties of the RemoteIO unit. The first property is essentially a callback function from which the RemoteIO unit pulls audio data to play. The second property must be set in accordance with the stream format supported by the FilePlayer; it can be derived from a get-property call on the FilePlayer;
4. Define the callback set in step 3, where the most important thing is asking the FilePlayer to render into the buffer provided by the callback, for which you will need to invoke AudioUnitRender() on the FilePlayer (a sketch of this follows the outline);
5. Finally, start the RemoteIO unit to play the file.
Above is just a preliminary outline of basic things to do to play files using audio units on iOS. You can refer to Chris Adamson and Kevin Avila's Learning Core Audio for details.
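As a rough illustration of step 4, the RemoteIO render callback can be as small as forwarding the request to the FilePlayer; passing the FilePlayer unit through inRefCon is just one possible arrangement, assumed here rather than prescribed:

static OSStatus fileRenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
{
    // The FilePlayer unit was stashed in inRefCon when the callback was set.
    AudioUnit filePlayerUnit = (AudioUnit)inRefCon;

    // Ask the FilePlayer to render straight into the buffers RemoteIO provides.
    return AudioUnitRender(filePlayerUnit, ioActionFlags, inTimeStamp,
                           0 /* FilePlayer output bus */, inNumberFrames, ioData);
}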
This is a relatively simple approach that uses the audio unit mentioned in the Tasty Pixel blog. In the recording callback, instead of filling the buffer with data from the microphone, you could fill it with data from the file using ExtAudioFileRead. I'll try to paste an example below. Mind you, this will only work for .caf files.
In the start method, call a readAudio or initAudioFile function, something that just gets all the info about the file.
- (void) start {
readAudio();
OSStatus status = AudioOutputUnitStart(audioUnit);
checkStatus(status);
}
Now in the readAudio method you initialize the audio file reference as such.
ExtAudioFileRef fileRef;
void readAudio() {
NSString * name = #"AudioFile";
NSString * source = [[NSBundle mainBundle] pathForResource:name ofType:#"caf"];
const char * cString = [source cStringUsingEncoding:NSASCIIStringEncoding];
CFStringRef str = CFStringCreateWithCString(NULL, cString, kCFStringEncodingMacRoman);
CFURLRef inputFileURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, str, kCFURLPOSIXPathStyle, false);
AudioFileID fileID;
OSStatus err = AudioFileOpenURL(inputFileURL, kAudioFileReadPermission, 0, &fileID);
CheckError(err, "AudioFileOpenURL");
err = ExtAudioFileOpenURL(inputFileURL, &fileRef);
CheckError(err, "ExtAudioFileOpenURL");
err = ExtAudioFileSetProperty(fileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
CheckError(err, "ExtAudioFileSetProperty");
}
Now that you have the audio data at hand, the next step is pretty easy. In the recordingCallback, read the data from the file instead of from the mic.
static OSStatus recordingCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
// Because of the way our audio format (setup below) is chosen:
// we only need 1 buffer, since it is mono
// Samples are 16 bits = 2 bytes.
// 1 frame includes only 1 sample
AudioBuffer buffer;
buffer.mNumberChannels = 1;
buffer.mDataByteSize = inNumberFrames * 2;
buffer.mData = malloc( inNumberFrames * 2 );
// Put buffer in a AudioBufferList
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
// Then:
// Obtain recorded samples
OSStatus err = ExtAudioFileRead(fileRef, &inNumberFrames, &bufferList);
// Now, we have the samples we just read sitting in buffers in bufferList
// Process the new data
[iosAudio processAudio:&bufferList];
// release the malloc'ed data in the buffer we created earlier
free(bufferList.mBuffers[0].mData);
return noErr;
}
This worked for me.