EXC_BAD_ACCESS with AudioFileReadPackets - iOS

I've been trying to serialize a local audio file following the instructions here, only instead of feeding the packets directly to the AudioQueueBuffer, I would like to send them over Bluetooth/Wi-Fi using GKSession. I'm getting an EXC_BAD_ACCESS error:
- (void)broadcastServerMusicWithSession:(GKSession *)session playerName:(NSString *)name clients: (NSArray *)clients
{
self.isServer = YES;
_session = session;
_session.available = NO;
_session.delegate = self;
[_session setDataReceiveHandler:self withContext:nil];
CFURLRef fileURL = (__bridge CFURLRef)[[NSBundle mainBundle] URLForResource:@"mozart" withExtension:@"mp3"]; // file URL
AudioFile *audioFile = [[AudioFile alloc] initWithURL:fileURL];
// now we start sending the data over
#define MAX_PACKET_COUNT 4096
SInt64 currentPacketNumber = 0;
UInt32 numBytes;
UInt32 numPacketsToRead;
AudioStreamPacketDescription packetDescriptions[MAX_PACKET_COUNT];
NSInteger bufferByteSize = 4096;
if (bufferByteSize < audioFile.maxPacketSize) bufferByteSize = audioFile.maxPacketSize;
numPacketsToRead = bufferByteSize/audioFile.maxPacketSize;
UInt32 numPackets = numPacketsToRead;
BOOL isVBR = ([audioFile audioFormatRef]->mBytesPerPacket == 0) ? YES : NO;
// this should be in a do while (numPackets < numPacketsToRead) loop..
// but i took it out just to narrow down the problem
NSMutableData *myBuffer = [[NSMutableData alloc] initWithCapacity:numPackets * audioFile.maxPacketSize];
AudioFileReadPackets (
audioFile.fileID,
NO,
&numBytes,
isVBR ? packetDescriptions : 0,
currentPacketNumber,
&numPackets,
&myBuffer
);
NSError *error;
if (![session sendDataToAllPeers:packetData withDataMode:GKSendDataReliable error:&error])
{
NSLog(#"Error sending data to clients: %#", error);
}
I know for a fact that it's the AudioFileReadPackets call that's breaking the code, since it runs fine if I comment that bit out.
I've tried Zombies profiling in Xcode 4.3, but it just crashes. One useful debugging tip I did pick up was to break when the code crashes and print the exception from the debugger console. This is what I get:
(lldb) po [$eax className] NSException
error: warning: receiver type 'unsigned int' is not 'id' or interface pointer, consider casting it to 'id'
warning: declaration does not declare anything
error: expected ';' after expression
error: 1 errors parsing expression
but I couldn't do much with it. Any help?

Turns out there are two problems with the above code:
[audioFile audioFormatRef]->mBytesPerPacket == 0
the compiler isn't happy with this one. If you refer to the code that defines audioFormatRef, it returns a pointer to a format struct, which has an mBytesPerPacket field. I don't know why the above syntax isn't working, though.
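If the arrow syntax won't compile, one workaround (a sketch on my part, assuming audioFormatRef really does return an AudioStreamBasicDescription pointer) is to copy the struct into a local variable and test the field there:
// copy the ASBD locally before inspecting it
AudioStreamBasicDescription format = *[audioFile audioFormatRef];
BOOL isVBR = (format.mBytesPerPacket == 0) ? YES : NO;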
I shouldn't be supplying myBuffer itself to the AudioFileReadPackets function (type mismatch); rather, I should do something like this:
NSMutableData *myBuffer = [[NSMutableData alloc] initWithCapacity:numPackets * audioFile.maxPacketSize];
const void *buffer = [myBuffer bytes];
AudioFileReadPackets (
audioFile.fileID,
NO,
&numBytes,
isVBR ? packetDescriptions : 0,
currentPacketNumber,
&numPackets,
buffer
);
This works too:
void *buffer = [myBuffer mutableBytes];
Read this post for an explanation of the difference between those two options.
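Putting the two fixes together, here is a minimal sketch of the corrected read-and-send step (hedged: the AudioFile wrapper and its fileID/maxPacketSize properties come from the tutorial linked in the question, not from Apple's API, and the GKSession is assumed to be already configured):
// read one buffer's worth of packets and broadcast it to the connected peers
UInt32 numBytes = 0;
UInt32 numPackets = numPacketsToRead;
NSMutableData *myBuffer = [[NSMutableData alloc] initWithLength:numPackets * audioFile.maxPacketSize];
OSStatus status = AudioFileReadPackets(audioFile.fileID,
                                       false,                             // don't use the file cache
                                       &numBytes,                         // out: bytes actually read
                                       isVBR ? packetDescriptions : NULL, // packet descriptions only for VBR data
                                       currentPacketNumber,               // starting packet index
                                       &numPackets,                       // in/out: packets requested / packets read
                                       [myBuffer mutableBytes]);          // raw buffer pointer, not the NSMutableData object
if (status == noErr && numPackets > 0) {
    NSData *packetData = [myBuffer subdataWithRange:NSMakeRange(0, numBytes)];
    NSError *error = nil;
    if (![_session sendDataToAllPeers:packetData withDataMode:GKSendDataReliable error:&error]) {
        NSLog(@"Error sending data to clients: %@", error);
    }
    currentPacketNumber += numPackets;
}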

Related

Objective-C - BAD ACCESS EXC

I am trying to create a project that uses audio streaming from Unity. For this, I am developing a plugin.
At the time of recording the audio there is no problem, and I send the data through a websocket as a base64 string.
Then from Xcode I catch it and transform it to NSData; that's where the problem is. At first there are no problems, but after a moment Xcode shows me the error EXC_BAD_ACCESS and I can't continue copying the NSData into the buffer.
Here is the code.
#import "AudioProcessor.h"
#pragma mark Playback callback
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
AudioProcessor *audioProcessor = (__bridge AudioProcessor *)inRefCon;
// copy buffer to audio buffer which gets played after function return
if(ioData->mNumberBuffers > 0) {
AudioBuffer buffer = ioData->mBuffers[0];
// get the data from Unity
NSString *inputData = audioProcessor.getInputData;
if(inputData && ![inputData isKindOfClass:[NSNull class]])
{
//here it's the problem.
NSData *data = [[NSData alloc] initWithBase64EncodedString:inputData options:0];
memcpy(buffer.mData, data.bytes, data.length);
buffer.mDataByteSize = (int) data.length;
free(data);
}
return noErr;
}
#pragma mark controll stream
-(void)setInputData:(NSString *)datosValue
{
inputData = datosValue;
}
-(NSString*)getInputData
{
return inputData;
}
If someone knows how it could be done so that the application does not close, I would appreciate it.
First of all, please conform to the naming convention that variable names start with a lowercase letter.
The error occurs because an instance of NSData is an object, i.e. a pointer; you have to add a *
NSData *data = [[NSData alloc] init....
Further, it's highly recommended to access properties with dot notation, for example
data.bytes
data.length
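Putting those points together, a sketch of how the middle of the callback could look (the nil check and the clamp on the memcpy are my additions to avoid overrunning the AudioBuffer, and free() should not be called on an Objective-C object at all):
// NSData is an object, so it must be declared as a pointer
NSData *data = [[NSData alloc] initWithBase64EncodedString:inputData options:0];
if (data != nil) {
    // clamp the copy to the buffer's capacity so memcpy never writes past mData
    UInt32 bytesToCopy = (UInt32)MIN(data.length, buffer.mDataByteSize);
    memcpy(buffer.mData, data.bytes, bytesToCopy);
    buffer.mDataByteSize = bytesToCopy;
}
// no free(data): under ARC the NSData instance is released automatically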

Recording audio and passing the data to a UIWebView (JavascriptCore) on iOS 8/9

We have an app that is mostly a UIWebView for a heavily javascript based web app. The requirement we have come up against is being able to play audio to the user and then record the user, play back that recording for confirmation and then send the audio to a server. This works in Chrome, Android and other platforms because that ability is built into the browser. No native code required.
Sadly, the iOS (iOS 8/9) web view lacks the ability to record audio.
The first workaround we tried was recording the audio with an AudioQueue and passing the data (LinearPCM 16bit) to a JS AudioNode so the web app could process the iOS audio exactly the same way as other platforms. This got to a point where we could pass the audio to JS, but the app would eventually crash with a bad memory access error or the javascript side just could not keep up with the data being sent.
The next idea was to save the audio recording to a file and send partial audio data to JS for visual feedback, a basic audio visualizer displayed during recording only.
The audio records and plays back fine to a WAVE file as Linear PCM signed 16bit. The JS visualizer is where we are stuck. It is expecting Linear PCM unsigned 8bit so I added a conversion step that may be wrong. I've tried several different ways, mostly found online, and have not found one that works which makes me think something else is wrong or missing before we even get to the conversion step.
Since I don't know what or where exactly the problem is I'll dump the code below for the audio recording and playback classes. Any suggestions would be welcome to resolve, or bypass somehow, this issue.
One idea I had was to record in a different format (CAF) using different format flags. Looking at the values that are produced, none of the signed 16-bit ints come even close to the max value; I rarely see anything above +/-1000. Is that because of the kLinearPCMFormatFlagIsPacked flag in the AudioStreamBasicDescription? Removing that flag causes the audio file to not be created because of an invalid format. Maybe switching to CAF would work, but we need to convert to WAVE before sending the audio back to our server.
Or maybe my conversion from signed 16-bit to unsigned 8-bit is wrong? I have also tried bit shifting and casting. The only difference is that with this conversion all the audio values get compressed to between 125 and 130; bit shifting and casting change that to 0-5 and 250-255. That doesn't really solve any problems on the JS side.
The next step would be, instead of passing the data to JS, to run it through an FFT function and produce values to be used directly by JS for the audio visualizer. I'd rather figure out whether I have done something obviously wrong before going in that direction.
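For reference, the usual direct mapping from signed 16-bit to unsigned 8-bit keeps the top byte and offsets it by 128 (this is my own sketch, not code from the question):
// maps -32768..32767 to 0..255 by keeping the most significant byte
static inline uint8_t ConvertSampleTo8Bit(SInt16 sample)
{
    return (uint8_t)((sample >> 8) + 128);
}
With input samples that rarely exceed +/-1000, this mapping lands almost everything between about 124 and 132, which matches the values described above; that points to a low recording level rather than a broken conversion.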
AQRecorder.h - EDIT: updated audio format to LinearPCM 32bit Float.
#ifndef AQRecorder_h
#define AQRecorder_h
#import <AudioToolbox/AudioToolbox.h>
#define NUM_BUFFERS 3
#define AUDIO_DATA_TYPE_FORMAT float
#define JS_AUDIO_DATA_SIZE 32
@interface AQRecorder : NSObject {
AudioStreamBasicDescription mDataFormat;
AudioQueueRef mQueue;
AudioQueueBufferRef mBuffers[ NUM_BUFFERS ];
AudioFileID mAudioFile;
UInt32 bufferByteSize;
SInt64 mCurrentPacket;
bool mIsRunning;
}
- (void)setupAudioFormat;
- (void)startRecording;
- (void)stopRecording;
- (void)processSamplesForJS:(UInt32)audioDataBytesCapacity audioData:(void *)audioData;
- (Boolean)isRunning;
@end
#endif
AQRecorder.m - EDIT: updated audio format to LinearPCM 32bit Float. Added FFT step in processSamplesForJS instead of sending audio data directly.
#import <AVFoundation/AVFoundation.h>
#import "AQRecorder.h"
#import "JSMonitor.h"
@implementation AQRecorder
void AudioQueueCallback(void * inUserData,
AudioQueueRef inAQ,
AudioQueueBufferRef inBuffer,
const AudioTimeStamp * inStartTime,
UInt32 inNumberPacketDescriptions,
const AudioStreamPacketDescription* inPacketDescs)
{
AQRecorder *aqr = (__bridge AQRecorder *)inUserData;
if ( [aqr isRunning] )
{
if ( inNumberPacketDescriptions > 0 )
{
AudioFileWritePackets(aqr->mAudioFile, FALSE, inBuffer->mAudioDataByteSize, inPacketDescs, aqr->mCurrentPacket, &inNumberPacketDescriptions, inBuffer->mAudioData);
aqr->mCurrentPacket += inNumberPacketDescriptions;
[aqr processSamplesForJS:inBuffer->mAudioDataBytesCapacity audioData:inBuffer->mAudioData];
}
AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}
}
- (void)debugDataFormat
{
NSLog(#"format=%i, sampleRate=%f, channels=%i, flags=%i, BPC=%i, BPF=%i", mDataFormat.mFormatID, mDataFormat.mSampleRate, (unsigned int)mDataFormat.mChannelsPerFrame, mDataFormat.mFormatFlags, mDataFormat.mBitsPerChannel, mDataFormat.mBytesPerFrame);
}
- (void)setupAudioFormat
{
memset(&mDataFormat, 0, sizeof(mDataFormat));
mDataFormat.mSampleRate = 44100.;
mDataFormat.mChannelsPerFrame = 1;
mDataFormat.mFormatID = kAudioFormatLinearPCM;
mDataFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kLinearPCMFormatFlagIsPacked;
int sampleSize = sizeof(AUDIO_DATA_TYPE_FORMAT);
mDataFormat.mBitsPerChannel = 32;
mDataFormat.mBytesPerPacket = mDataFormat.mBytesPerFrame = (mDataFormat.mBitsPerChannel / 8) * mDataFormat.mChannelsPerFrame;
mDataFormat.mFramesPerPacket = 1;
mDataFormat.mReserved = 0;
[self debugDataFormat];
}
- (void)startRecording
{
[self setupAudioFormat];
mCurrentPacket = 0;
NSString *recordFile = [NSTemporaryDirectory() stringByAppendingPathComponent: @"AudioFile.wav"];
CFURLRef url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)recordFile, NULL);
OSStatus stat =
AudioFileCreateWithURL(url, kAudioFileWAVEType, &mDataFormat, kAudioFileFlags_EraseFile, &mAudioFile);
NSError *error = [NSError errorWithDomain:NSOSStatusErrorDomain code:stat userInfo:nil];
NSLog(@"AudioFileCreateWithURL OSStatus :: %@", error);
CFRelease(url);
bufferByteSize = 896 * mDataFormat.mBytesPerFrame;
AudioQueueNewInput(&mDataFormat, AudioQueueCallback, (__bridge void *)(self), NULL, NULL, 0, &mQueue);
for ( int i = 0; i < NUM_BUFFERS; i++ )
{
AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]);
AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL);
}
mIsRunning = true;
AudioQueueStart(mQueue, NULL);
}
- (void)stopRecording
{
mIsRunning = false;
AudioQueueStop(mQueue, false);
AudioQueueDispose(mQueue, false);
AudioFileClose(mAudioFile);
}
- (void)processSamplesForJS:(UInt32)audioDataBytesCapacity audioData:(void *)audioData
{
int sampleCount = audioDataBytesCapacity / sizeof(AUDIO_DATA_TYPE_FORMAT);
AUDIO_DATA_TYPE_FORMAT *samples = (AUDIO_DATA_TYPE_FORMAT*)audioData;
NSMutableArray *audioDataBuffer = [[NSMutableArray alloc] initWithCapacity:JS_AUDIO_DATA_SIZE];
// FFT stuff taken mostly from Apples aurioTouch example
const Float32 kAdjust0DB = 1.5849e-13;
int bufferFrames = sampleCount;
int bufferlog2 = round(log2(bufferFrames));
float fftNormFactor = (1.0/(2*bufferFrames));
FFTSetup fftSetup = vDSP_create_fftsetup(bufferlog2, kFFTRadix2);
Float32 *outReal = (Float32*) malloc((bufferFrames / 2)*sizeof(Float32));
Float32 *outImaginary = (Float32*) malloc((bufferFrames / 2)*sizeof(Float32));
COMPLEX_SPLIT mDspSplitComplex = { .realp = outReal, .imagp = outImaginary };
Float32 *outFFTData = (Float32*) malloc((bufferFrames / 2)*sizeof(Float32));
//Generate a split complex vector from the real data
vDSP_ctoz((COMPLEX *)samples, 2, &mDspSplitComplex, 1, bufferFrames / 2);
//Take the fft and scale appropriately
vDSP_fft_zrip(fftSetup, &mDspSplitComplex, 1, bufferlog2, kFFTDirection_Forward);
vDSP_vsmul(mDspSplitComplex.realp, 1, &fftNormFactor, mDspSplitComplex.realp, 1, bufferFrames / 2);
vDSP_vsmul(mDspSplitComplex.imagp, 1, &fftNormFactor, mDspSplitComplex.imagp, 1, bufferFrames / 2);
//Zero out the nyquist value
mDspSplitComplex.imagp[0] = 0.0;
//Convert the fft data to dB
vDSP_zvmags(&mDspSplitComplex, 1, outFFTData, 1, bufferFrames / 2);
//In order to avoid taking log10 of zero, an adjusting factor is added in to make the minimum value equal -128dB
vDSP_vsadd(outFFTData, 1, &kAdjust0DB, outFFTData, 1, bufferFrames / 2);
Float32 one = 1;
vDSP_vdbcon(outFFTData, 1, &one, outFFTData, 1, bufferFrames / 2, 0);
// Average out FFT dB values
int grpSize = (bufferFrames / 2) / 32;
int c = 1;
Float32 avg = 0;
int d = 1;
for ( int i = 1; i < bufferFrames / 2; i++ )
{
if ( outFFTData[ i ] != outFFTData[ i ] || outFFTData[ i ] == INFINITY )
{ // NAN / INFINITE check
c++;
}
else
{
avg += outFFTData[ i ];
d++;
//NSLog(#"db = %f, avg = %f", outFFTData[ i ], avg);
if ( ++c >= grpSize )
{
uint8_t u = (uint8_t)((avg / d) + 128); //dB values seem to range from -128 to 0.
NSLog(#"%i = %i (%f)", i, u, avg);
[audioDataBuffer addObject:[NSNumber numberWithUnsignedInt:u]];
avg = 0;
c = 0;
d = 1;
}
}
}
[[JSMonitor shared] passAudioDataToJavascriptBridge:audioDataBuffer];
}
- (Boolean)isRunning
{
return mIsRunning;
}
@end
Audio playback and recording contrller classes
Audio.h
#ifndef Audio_h
#define Audio_h
#import <AVFoundation/AVFoundation.h>
#import "AQRecorder.h"
@interface Audio : NSObject <AVAudioPlayerDelegate> {
AQRecorder* recorder;
AVAudioPlayer* player;
bool mIsSetup;
bool mIsRecording;
bool mIsPlaying;
}
- (void)setupAudio;
- (void)startRecording;
- (void)stopRecording;
- (void)startPlaying;
- (void)stopPlaying;
- (Boolean)isRecording;
- (Boolean)isPlaying;
- (NSString *) getAudioDataBase64String;
@end
#endif
Audio.m
#import "Audio.h"
#import <AudioToolbox/AudioToolbox.h>
#import "JSMonitor.h"
@implementation Audio
- (void)setupAudio
{
NSLog(@"Audio->setupAudio");
AVAudioSession *session = [AVAudioSession sharedInstance];
NSError * error;
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
[session setActive:YES error:nil];
recorder = [[AQRecorder alloc] init];
mIsSetup = YES;
}
- (void)startRecording
{
NSLog(#"Audio->startRecording");
if ( !mIsSetup )
{
[self setupAudio];
}
if ( mIsRecording ) {
return;
}
if ( [recorder isRunning] == NO )
{
[recorder startRecording];
}
mIsRecording = [recorder isRunning];
}
- (void)stopRecording
{
NSLog(#"Audio->stopRecording");
[recorder stopRecording];
mIsRecording = [recorder isRunning];
[[JSMonitor shared] sendAudioInputStoppedEvent];
}
- (void)startPlaying
{
if ( mIsPlaying )
{
return;
}
mIsPlaying = YES;
NSLog(#"Audio->startPlaying");
NSError* error = nil;
NSString *recordFile = [NSTemporaryDirectory() stringByAppendingPathComponent: #"AudioFile.wav"];
player = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:recordFile] error:&error];
if ( error )
{
NSLog(#"AVAudioPlayer failed :: %#", error);
}
player.delegate = self;
[player play];
}
- (void)stopPlaying
{
NSLog(#"Audio->stopPlaying");
[player stop];
mIsPlaying = NO;
[[JSMonitor shared] sendAudioPlaybackCompleteEvent];
}
- (NSString *) getAudioDataBase64String
{
NSString *recordFile = [NSTemporaryDirectory() stringByAppendingPathComponent: @"AudioFile.wav"];
NSError* error = nil;
NSData *fileData = [NSData dataWithContentsOfFile:recordFile options: 0 error: &error];
if ( fileData == nil )
{
NSLog(#"Failed to read file, error %#", error);
return #"DATAENCODINGFAILED";
}
else
{
return [fileData base64EncodedStringWithOptions:0];
}
}
- (Boolean)isRecording { return mIsRecording; }
- (Boolean)isPlaying { return mIsPlaying; }
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag
{
NSLog(#"Audio->audioPlayerDidFinishPlaying: %i", flag);
mIsPlaying = NO;
[[JSMonitor shared] sendAudioPlaybackCompleteEvent];
}
- (void)audioPlayerDecodeErrorDidOccur:(AVAudioPlayer *)player error:(NSError *)error
{
NSLog(#"Audio->audioPlayerDecodeErrorDidOccur: %#", error.localizedFailureReason);
mIsPlaying = NO;
[[JSMonitor shared] sendAudioPlaybackCompleteEvent];
}
@end
The JSMonitor class is a bridge between the UIWebView's JavaScriptCore and the native code. I'm not including it because it doesn't do anything for audio other than pass data / calls between these classes and JSCore.
EDIT
The format of the audio has been changed to LinearPCM Float 32-bit. Instead of sending the raw audio data, it is run through an FFT function and the averaged dB values are sent.
Core Audio is a pain to work with. Fortunately, AVFoundation provides AVAudioRecorder to record audio, and it also gives you access to the average and peak audio power that you can send back to your JavaScript to update your UI visualizer. From the docs:
An instance of the AVAudioRecorder class, called an audio recorder, provides audio recording capability in your application. Using an audio recorder you can:
Record until the user stops the recording
Record for a specified duration
Pause and resume a recording
Obtain input audio-level data that you can use to provide level metering
This Stack Overflow question has an example of how to use AVAudioRecorder.
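For example, a minimal sketch of recording with metering enabled (the file name, settings, and polling approach here are arbitrary choices for illustration, not something prescribed by AVFoundation):
// record to an LPCM file and poll the meters for the JS visualizer
NSURL *url = [NSURL fileURLWithPath:[NSTemporaryDirectory() stringByAppendingPathComponent:@"AudioFile.wav"]];
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
                          [NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
                          [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                          [NSNumber numberWithInt:1], AVNumberOfChannelsKey,
                          [NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
                          nil];
NSError *error = nil;
AVAudioRecorder *avRecorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];
avRecorder.meteringEnabled = YES;
[avRecorder prepareToRecord];
[avRecorder record];

// call this periodically (e.g. from an NSTimer) while recording
[avRecorder updateMeters];
float avgPower  = [avRecorder averagePowerForChannel:0]; // dB, roughly -160..0
float peakPower = [avRecorder peakPowerForChannel:0];    // dB, roughly -160..0
// pass avgPower / peakPower to the JavaScript visualizer via the bridge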

Multiple MIDI sounds with MusicDeviceMIDIEvent

I am creating MIDI sounds with MusicDeviceMIDIEvent and it works perfectly fine. I use some aupreset files I've created, one for each instrument.
My code is basically the example from Apple: LoadPresetDemo. All I use is:
- (void)loadPreset:(id)sender instrumentName:(const char *)instrumentName {
NSString *presetURL1 = [NSString stringWithCString:instrumentName encoding:NSUTF8StringEncoding];
NSString *path = [[NSBundle mainBundle] pathForResource:presetURL1 ofType:@"aupreset"];
NSURL *presetURL = [[NSURL alloc] initFileURLWithPath: path];
if (presetURL) {
NSLog(@"Attempting to load preset '%@'\n", [presetURL description]);
}
else {
NSLog(@"COULD NOT GET PRESET PATH!");
}
[self loadSynthFromPresetURL: presetURL];
}
To load my aupreset file, then:
- (void) noteOn:(id)sender midiNumber:(int)midiNumber {
UInt32 noteNum = midiNumber;
UInt32 onVelocity = 127;
UInt32 noteCommand = kMIDIMessage_NoteOn << 4 | 0;
OSStatus result = noErr;
require_noerr (result = MusicDeviceMIDIEvent (self.samplerUnit, noteCommand, noteNum, onVelocity, 0), logTheError);
logTheError:
if (result != noErr) NSLog (@"Unable to start playing the low note. Error code: %d '%.4s'\n", (int) result, (const char *)&result);
}
to play the note of the previously loaded aupreset, and:
- (void) noteOff:(id)sender midiNumber:(int)midiNumber
when I want it to stop.
Now I would like to play one note of each instrument simultaneously. What is the easiest way of doing this?
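No answer was posted for this one, but for what it's worth, a common approach is to keep one AUSampler unit per aupreset (each created as its own kAudioUnitType_MusicDevice / kAudioUnitSubType_Sampler node feeding a mixer) and send each unit its own note-on. A rough sketch, where samplerUnits and kInstrumentCount are hypothetical names for an ivar holding the loaded units:
- (void)noteOnAllInstruments:(UInt32)midiNumber
{
    UInt32 noteCommand = kMIDIMessage_NoteOn << 4 | 0; // channel 0 on every sampler
    for (int i = 0; i < kInstrumentCount; i++) {
        // samplerUnits[i] is the AudioUnit loaded with instrument i's aupreset
        MusicDeviceMIDIEvent(samplerUnits[i], noteCommand, midiNumber, 127, 0);
    }
}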

How to write audio file locally recorded from microphone using AudioBuffer in iPhone?

I am new to the audio frameworks. Can anyone help me write to a file the audio that is being played back after capturing it from the microphone?
Below is the code to play mic input through the iPhone speaker; now I would like to save that audio on the iPhone for future use.
I found the code here to record audio using the microphone: http://www.stefanpopp.de/2011/capture-iphone-microphone/
/**
Code start from here for playing the recorded voice
*/
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) {
/**
This is the reference to the object who owns the callback.
*/
AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;
// iterate over incoming stream an copy to output stream
for (int i=0; i < ioData->mNumberBuffers; i++) {
AudioBuffer buffer = ioData->mBuffers[i];
// find minimum size
UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);
// copy buffer to audio buffer which gets played after function return
memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);
// set data size
buffer.mDataByteSize = size;
// get a pointer to the recorder struct variable
Recorder recInfo = audioProcessor.audioRecorder;
// write the bytes
OSStatus audioErr = noErr;
if (recInfo.running) {
audioErr = AudioFileWriteBytes (recInfo.recordFile,
false,
recInfo.inStartingByte,
&size,
&buffer.mData);
assert (audioErr == noErr);
// increment our byte count
recInfo.inStartingByte += (SInt64)size;// size should be number of bytes
audioProcessor.audioRecorder = recInfo;
}
}
return noErr;
}
-(void)prepareAudioFileToRecord{
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,NSUserDomainMask, YES);
NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
NSTimeInterval time = ([[NSDate date] timeIntervalSince1970]); // returned as a double
long digits = (long)time; // this is the first 10 digits
int decimalDigits = (int)(fmod(time, 1) * 1000); // this will get the 3 missing digits
// long timestamp = (digits * 1000) + decimalDigits;
NSString *timeStampValue = [NSString stringWithFormat:@"%ld",digits];
// NSString *timeStampValue = [NSString stringWithFormat:@"%ld.%d",digits ,decimalDigits];
NSString *fileName = [NSString stringWithFormat:@"test%@.caf",timeStampValue];
NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
NSURL *fileURL = [NSURL fileURLWithPath:filePath];
// modify the ASBD (see EDIT: towards the end of this post!)
audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
// set up the file (bridge cast will differ if using ARC)
OSStatus audioErr = noErr;
audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
kAudioFileCAFType,
&audioFormat,
kAudioFileFlags_EraseFile,
&audioRecorder.recordFile);
assert (audioErr == noErr);// simple error checking
audioRecorder.inStartingByte = 0;
audioRecorder.running = true;
self.audioRecorder = audioRecorder;
}
thanks in advance
bala
To write the bytes from an AudioBuffer to a local file we need help from Audio File Services, which is included in the AudioToolbox framework.
Conceptually we will do the following: set up an audio file and maintain a reference to it (we need this reference to be accessible from the render callback that you included in your post). We also need to keep track of the number of bytes that are written each time the callback is called. Finally, we need a flag to check that will let us know to stop writing to the file and close it.
Because the code in the link you provided declares an AudioStreamBasicDescription which is LPCM, and hence constant bit rate, we can use the AudioFileWriteBytes function (writing compressed audio is more involved and would use the AudioFileWritePackets function instead).
Let's start by declaring a custom struct (which contains all the extra data we'll need) and adding an instance variable of this custom struct and also making a property that points to the struct variable. We'll add this to the AudioProcessor custom class, as you already have access to this object from within the callback where you typecast in this line.
AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;
Add this to AudioProcessor.h (above the @interface)
typedef struct Recorder {
AudioFileID recordFile;
SInt64 inStartingByte;
Boolean running;
} Recorder;
Now let's add an instance variable and also make it a pointer property and assign it to the instance variable (so we can access it from within the callback function).
In the @interface add an instance variable named audioRecorder and also make the ASBD available to the class.
Recorder audioRecorder;
AudioStreamBasicDescription recordFormat;// assign this ivar to where the asbd is created in the class
In the method -(void)initializeAudio comment out or delete this line as we have made recordFormat an ivar.
//AudioStreamBasicDescription recordFormat;
Now add the kAudioFormatFlagIsBigEndian format flag to where the ASBD is set up.
// also modify the ASBD in the AudioProcessor classes -(void)initializeAudio method (see EDIT: towards the end of this post!)
recordFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
And finally, add a property that is a pointer to the audioRecorder instance variable, and don't forget to synthesize it in AudioProcessor.m. We will name the pointer property audioRecorderPointer.
@property Recorder *audioRecorderPointer;
// in .m synthesise the property
@synthesize audioRecorderPointer;
Now let's assign the pointer to the ivar (this could be placed in the -(void)initializeAudio method of the AudioProcessor class)
// ASSIGN POINTER PROPERTY TO IVAR
self.audioRecorderPointer = &audioRecorder;
Now in AudioProcessor.m let's add a method to set up the file and open it so we can write to it. This should be called before you start the AUGraph running.
-(void)prepareAudioFileToRecord {
// lets set up a test file in the documents directory
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,NSUserDomainMask, YES);
NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
NSString *fileName = #"test_recording.aif";
NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
NSURL *fileURL = [NSURL fileURLWithPath:filePath];
// set up the file (bridge cast will differ if using ARC)
OSStatus audioErr = noErr;
audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
kAudioFileAIFFType,
&recordFormat,
kAudioFileFlags_EraseFile,
&audioRecorder.recordFile);
assert (audioErr == noErr);// simple error checking
audioRecorder.inStartingByte = 0;
audioRecorder.running = true;
}
Okay, we are nearly there. Now we have a file to write to, and an AudioFileID that can be accessed from the render callback. So inside the callback function you posted add the following right before you return noErr at the end of the method.
// get a pointer to the recorder struct instance variable
Recorder *recInfo = audioProcessor.audioRecorderPointer;
// write the bytes
OSStatus audioErr = noErr;
if (recInfo->running) {
audioErr = AudioFileWriteBytes (recInfo->recordFile,
false,
recInfo->inStartingByte,
&size,
buffer.mData);
assert (audioErr == noErr);
// increment our byte count
recInfo->inStartingByte += (SInt64)size;// size should be number of bytes
}
When we want to stop recording (probably invoked by some user action), simply make the running boolean false and close the file like this somewhere in the AudioProcessor class.
audioRecorder.running = false;
OSStatus audioErr = AudioFileClose(audioRecorder.recordFile);
assert (audioErr == noErr);
EDIT: the endianness of the samples needs to be big endian for the file, so add the kAudioFormatFlagIsBigEndian bit mask flag to the ASBD in the source code found at the link provided in the question.
For extra info about this topic the Apple documents are a great resource and I also recommend reading 'Learning Core Audio' by Chris Adamson and Kevin Avila (of which I own a copy).
Use Audio Queue Services.
There is an example in the Apple documentation that does exactly what you ask:
Audio Queue Services Programming Guide - Recording Audio

why is audio coming up garbled when using AVAssetReader with audio queue

Based on my research, people keep saying that it's caused by mismatched/wrong formatting. But I'm using LPCM formatting for both input and output, so how can you go wrong with that? The result I'm getting is just noise (like white noise).
I've decided to just paste my entire code; perhaps that would help:
#import "AppDelegate.h"
#import "ViewController.h"
@implementation AppDelegate
@synthesize window = _window;
@synthesize viewController = _viewController;
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
// Override point for customization after application launch.
self.viewController = [[ViewController alloc] initWithNibName:@"ViewController" bundle:nil];
self.window.rootViewController = self.viewController;
[self.window makeKeyAndVisible];
// Insert code here to initialize your application
player = [[Player alloc] init];
[self setupReader];
[self setupQueue];
// initialize reader in a new thread
internalThread =[[NSThread alloc]
initWithTarget:self
selector:@selector(readPackets)
object:nil];
[internalThread start];
// start the queue. this function returns immedatly and begins
// invoking the callback, as needed, asynchronously.
//CheckError(AudioQueueStart(queue, NULL), "AudioQueueStart failed");
// and wait
printf("Playing...\n");
do
{
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.25, false);
} while (!player.isDone /*|| gIsRunning*/);
// isDone represents the state of the Audio File enqueuing. This does not mean the
// Audio Queue is actually done playing yet. Since we have 3 half-second buffers in-flight
// run for continue to run for a short additional time so they can be processed
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 2, false);
// end playback
player.isDone = true;
CheckError(AudioQueueStop(queue, TRUE), "AudioQueueStop failed");
cleanup:
AudioQueueDispose(queue, TRUE);
AudioFileClose(player.playbackFile);
return YES;
}
- (void) setupReader
{
NSURL *assetURL = [NSURL URLWithString:@"ipod-library://item/item.m4a?id=1053020204400037178"]; // from ilham's ipod
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
// from AVAssetReader Class Reference:
// AVAssetReader is not intended for use with real-time sources,
// and its performance is not guaranteed for real-time operations.
NSError * error = nil;
AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
AVAssetTrack* track = [songAsset.tracks objectAtIndex:0];
readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
outputSettings:nil];
// AVAssetReaderOutput* readerOutput = [[AVAssetReaderAudioMixOutput alloc] initWithAudioTracks:songAsset.tracks audioSettings:nil];
[reader addOutput:readerOutput];
[reader startReading];
}
- (void) setupQueue
{
// get the audio data format from the file
// we know that it is PCM.. since it's converted
AudioStreamBasicDescription dataFormat;
dataFormat.mSampleRate = 44100.0;
dataFormat.mFormatID = kAudioFormatLinearPCM;
dataFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
dataFormat.mBytesPerPacket = 4;
dataFormat.mFramesPerPacket = 1;
dataFormat.mBytesPerFrame = 4;
dataFormat.mChannelsPerFrame = 2;
dataFormat.mBitsPerChannel = 16;
// create a output (playback) queue
CheckError(AudioQueueNewOutput(&dataFormat, // ASBD
MyAQOutputCallback, // Callback
(__bridge void *)self, // user data
NULL, // run loop
NULL, // run loop mode
0, // flags (always 0)
&queue), // output: reference to AudioQueue object
"AudioQueueNewOutput failed");
// adjust buffer size to represent about a half second (0.5) of audio based on this format
CalculateBytesForTime(dataFormat, 0.5, &bufferByteSize, &player->numPacketsToRead);
// check if we are dealing with a VBR file. ASBDs for VBR files always have
// mBytesPerPacket and mFramesPerPacket as 0 since they can fluctuate at any time.
// If we are dealing with a VBR file, we allocate memory to hold the packet descriptions
bool isFormatVBR = (dataFormat.mBytesPerPacket == 0 || dataFormat.mFramesPerPacket == 0);
if (isFormatVBR)
player.packetDescs = (AudioStreamPacketDescription*)malloc(sizeof(AudioStreamPacketDescription) * player.numPacketsToRead);
else
player.packetDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
// get magic cookie from file and set on queue
MyCopyEncoderCookieToQueue(player.playbackFile, queue);
// allocate the buffers and prime the queue with some data before starting
player.isDone = false;
player.packetPosition = 0;
int i;
for (i = 0; i < kNumberPlaybackBuffers; ++i)
{
CheckError(AudioQueueAllocateBuffer(queue, bufferByteSize, &audioQueueBuffers[i]), "AudioQueueAllocateBuffer failed");
// EOF (the entire file's contents fit in the buffers)
if (player.isDone)
break;
}
}
-(void)readPackets
{
// initialize a mutex and condition so that we can block on buffers in use.
pthread_mutex_init(&queueBuffersMutex, NULL);
pthread_cond_init(&queueBufferReadyCondition, NULL);
state = AS_BUFFERING;
while ((sample = [readerOutput copyNextSampleBuffer])) {
AudioBufferList audioBufferList;
CMBlockBufferRef CMBuffer = CMSampleBufferGetDataBuffer( sample );
CheckError(CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
sample,
NULL,
&audioBufferList,
sizeof(audioBufferList),
NULL,
NULL,
kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
&CMBuffer
),
"could not read samples");
AudioBuffer audioBuffer = audioBufferList.mBuffers[0];
UInt32 inNumberBytes = audioBuffer.mDataByteSize;
size_t incomingDataOffset = 0;
while (inNumberBytes) {
size_t bufSpaceRemaining;
bufSpaceRemaining = bufferByteSize - bytesFilled;
@synchronized(self)
{
bufSpaceRemaining = bufferByteSize - bytesFilled;
size_t copySize;
if (bufSpaceRemaining < inNumberBytes)
{
copySize = bufSpaceRemaining;
}
else
{
copySize = inNumberBytes;
}
// copy data to the audio queue buffer
AudioQueueBufferRef fillBuf = audioQueueBuffers[fillBufferIndex];
memcpy((char*)fillBuf->mAudioData + bytesFilled, (const char*)(audioBuffer.mData + incomingDataOffset), copySize);
// keep track of bytes filled
bytesFilled +=copySize;
incomingDataOffset +=copySize;
inNumberBytes -=copySize;
}
// if the space remaining in the buffer is not enough for this packet, then enqueue the buffer.
if (bufSpaceRemaining < inNumberBytes + bytesFilled)
{
[self enqueueBuffer];
}
}
}
}
-(void)enqueueBuffer
{
@synchronized(self)
{
inuse[fillBufferIndex] = true; // set in use flag
buffersUsed++;
// enqueue buffer
AudioQueueBufferRef fillBuf = audioQueueBuffers[fillBufferIndex];
NSLog(#"we are now enqueing buffer %d",fillBufferIndex);
fillBuf->mAudioDataByteSize = bytesFilled;
err = AudioQueueEnqueueBuffer(queue, fillBuf, 0, NULL);
if (err)
{
NSLog(#"could not enqueue queue with buffer");
return;
}
if (state == AS_BUFFERING)
{
//
// Fill all the buffers before starting. This ensures that the
// AudioFileStream stays a small amount ahead of the AudioQueue to
// avoid an audio glitch playing streaming files on iPhone SDKs < 3.0
//
if (buffersUsed == kNumberPlaybackBuffers - 1)
{
err = AudioQueueStart(queue, NULL);
if (err)
{
NSLog(#"couldn't start queue");
return;
}
state = AS_PLAYING;
}
}
// go to next buffer
if (++fillBufferIndex >= kNumberPlaybackBuffers) fillBufferIndex = 0;
bytesFilled = 0; // reset bytes filled
}
// wait until next buffer is not in use
pthread_mutex_lock(&queueBuffersMutex);
while (inuse[fillBufferIndex])
{
pthread_cond_wait(&queueBufferReadyCondition, &queueBuffersMutex);
}
pthread_mutex_unlock(&queueBuffersMutex);
}
#pragma mark - utility functions -
// generic error handler - if err is nonzero, prints error message and exits program.
static void CheckError(OSStatus error, const char *operation)
{
if (error == noErr) return;
char str[20];
// see if it appears to be a 4-char-code
*(UInt32 *)(str + 1) = CFSwapInt32HostToBig(error);
if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
str[0] = str[5] = '\'';
str[6] = '\0';
} else
// no, format it as an integer
sprintf(str, "%d", (int)error);
fprintf(stderr, "Error: %s (%s)\n", operation, str);
exit(1);
}
// we only use time here as a guideline
// we're really trying to get somewhere between 16K and 64K buffers, but not allocate too much if we don't need it/*
void CalculateBytesForTime(AudioStreamBasicDescription inDesc, Float64 inSeconds, UInt32 *outBufferSize, UInt32 *outNumPackets)
{
// we need to calculate how many packets we read at a time, and how big a buffer we need.
// we base this on the size of the packets in the file and an approximate duration for each buffer.
//
// first check to see what the max size of a packet is, if it is bigger than our default
// allocation size, that needs to become larger
// we don't have access to file packet size, so we just default it to maxBufferSize
UInt32 maxPacketSize = 0x10000;
static const int maxBufferSize = 0x10000; // limit size to 64K
static const int minBufferSize = 0x4000; // limit size to 16K
if (inDesc.mFramesPerPacket) {
Float64 numPacketsForTime = inDesc.mSampleRate / inDesc.mFramesPerPacket * inSeconds;
*outBufferSize = numPacketsForTime * maxPacketSize;
} else {
// if frames per packet is zero, then the codec has no predictable packet == time
// so we can't tailor this (we don't know how many Packets represent a time period
// we'll just return a default buffer size
*outBufferSize = maxBufferSize > maxPacketSize ? maxBufferSize : maxPacketSize;
}
// we're going to limit our size to our default
if (*outBufferSize > maxBufferSize && *outBufferSize > maxPacketSize)
*outBufferSize = maxBufferSize;
else {
// also make sure we're not too small - we don't want to go the disk for too small chunks
if (*outBufferSize < minBufferSize)
*outBufferSize = minBufferSize;
}
*outNumPackets = *outBufferSize / maxPacketSize;
}
// many encoded formats require a 'magic cookie'. if the file has a cookie we get it
// and configure the queue with it
static void MyCopyEncoderCookieToQueue(AudioFileID theFile, AudioQueueRef queue ) {
UInt32 propertySize;
OSStatus result = AudioFileGetPropertyInfo (theFile, kAudioFilePropertyMagicCookieData, &propertySize, NULL);
if (result == noErr && propertySize > 0)
{
Byte* magicCookie = (UInt8*)malloc(sizeof(UInt8) * propertySize);
CheckError(AudioFileGetProperty (theFile, kAudioFilePropertyMagicCookieData, &propertySize, magicCookie), "get cookie from file failed");
CheckError(AudioQueueSetProperty(queue, kAudioQueueProperty_MagicCookie, magicCookie, propertySize), "set cookie on queue failed");
free(magicCookie);
}
}
#pragma mark - audio queue -
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
AppDelegate *appDelegate = (__bridge AppDelegate *) inUserData;
[appDelegate myCallback:inUserData
inAudioQueue:inAQ
audioQueueBufferRef:inCompleteAQBuffer];
}
- (void)myCallback:(void *)userData
inAudioQueue:(AudioQueueRef)inAQ
audioQueueBufferRef:(AudioQueueBufferRef)inCompleteAQBuffer
{
unsigned int bufIndex = -1;
for (unsigned int i = 0; i < kNumberPlaybackBuffers; ++i)
{
if (inCompleteAQBuffer == audioQueueBuffers[i])
{
bufIndex = i;
break;
}
}
if (bufIndex == -1)
{
NSLog(#"something went wrong at queue callback");
return;
}
// signal waiting thread that the buffer is free.
pthread_mutex_lock(&queueBuffersMutex);
NSLog(#"signalling that buffer %d is free",bufIndex);
inuse[bufIndex] = false;
buffersUsed--;
pthread_cond_signal(&queueBufferReadyCondition);
pthread_mutex_unlock(&queueBuffersMutex);
}
@end
Update:
btomw's answer below solved the problem magnificently. But I want to get to the bottom of this (most novice developers like myself, and even btomw when he first started, usually shoot in the dark with parameters, formatting, etc. - see here for an example).
The reason why I provided nil as a parameter for
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:audioReadSettings];
was because, according to the documentation and trial and error, I realized that any formatting I put other than LPCM would be rejected outright. In other words, when you use AVAssetReader, even for conversion, the result is always LPCM. So I thought the default format was LPCM anyway and I left it as nil, but I guess I was wrong.
The weird part (please correct me, anyone, if I'm wrong) is that, as I mentioned, suppose the original file was .mp3 and my intention was to play it back (or send the packets over a network, etc.) as mp3; if I supply an mp3 ASBD, the asset reader will crash! So if I want to send it in its original form, do I just supply nil? The obvious problem with that is there would be no way for me to figure out what ASBD it has once I receive it on the other side... or could I?
Update 2: You can download the code from GitHub.
So here's what I think is happening and also how I think you can fix it.
You're pulling a predefined item out of the iPod (music) library on an iOS device. You are then using an asset reader to collect its buffers and queue those buffers, where possible, in an AudioQueue.
The problem you are having, I think, is that you are setting the audio queue buffer's input format to Linear Pulse Code Modulation (LPCM - hope I got that right, I might be off on the acronym). The output settings you are passing to the asset reader output are nil, which means that you'll get an output that is most likely NOT LPCM, but is instead AIFF or AAC or MP3 or whatever the format of the song is as it exists in iOS's media library. You can, however, remedy this situation by passing in different output settings.
Try changing
readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:nil];
to:
AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
[NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
nil];
readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:outputSettings];
It's my understanding (per the documentation at Apple) that passing nil as the output settings param gives you samples of the same file type as the original audio track. Even if you have a file that is LPCM, some other settings might be off, which might cause your problems. At the very least, this will normalize all the reader output, which should make things a bit easier to troubleshoot.
Hope that helps!
Edit:
the reason why I provided nil as a parameter for AVURLAsset *songAsset
= [AVURLAsset URLAssetWithURL:assetURL options:audioReadSettings];
was because according to the documentation and trial and error, I...
AVAssetReaders do two things: read back an audio file as it exists on disk (i.e. mp3, aac, aiff), or convert the audio into LPCM.
If you pass nil as the output settings, it will read the file back as it exists, and in this you are correct. I apologize for not mentioning that an asset reader will only allow nil or LPCM. I actually ran into that problem myself (it's in the docs somewhere, but requires a bit of diving), but didn't elect to mention it here as it wasn't on my mind at the time. Sooooo... sorry about that?
If you want to know the AudioStreamBasicDescription (ASBD) of the track you are reading before you read it, you can get it by doing this:
AVURLAsset* uasset = [[AVURLAsset URLAssetWithURL:<#assetURL#> options:nil]retain];
AVAssetTrack*track = [uasset.tracks objectAtIndex:0];
CMFormatDescriptionRef formDesc = (CMFormatDescriptionRef)[[track formatDescriptions] objectAtIndex:0];
const AudioStreamBasicDescription* asbdPointer = CMAudioFormatDescriptionGetStreamBasicDescription(formDesc);
//because this is a pointer and not a struct we need to move the data into a struct so we can use it
AudioStreamBasicDescription asbd = {0};
memcpy(&asbd, asbdPointer, sizeof(asbd));
//asbd now contains a basic description for the track
You can then convert asbd to binary data in whatever format you see fit and transfer it over the network. You should then be able to start sending audio buffer data over the network and successfully play it back with your AudioQueue.
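As a rough sketch of that last step (my own, with the caveat that a raw struct copy assumes both ends share the same endianness and struct layout):
// sender: wrap the ASBD bytes in an NSData so it can go over the network
NSData *asbdData = [NSData dataWithBytes:&asbd length:sizeof(AudioStreamBasicDescription)];
// ... send asbdData (via GKSession, a socket, etc.) before the audio buffers ...

// receiver: copy it back into a struct and use it to configure the playback AudioQueue
AudioStreamBasicDescription receivedASBD = {0};
if (asbdData.length == sizeof(AudioStreamBasicDescription)) {
    memcpy(&receivedASBD, asbdData.bytes, sizeof(receivedASBD));
}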
I actually had a system like this working not that long ago, but since I couldn't keep the connection alive when the iOS client device went into the background, I wasn't able to use it for my purpose. Still, if all that work lets me help someone else who can actually use the info, it seems like a win to me.
