AudioFileReadPackets gives strange data - ios

I'm trying to record audio to a file (working) and then sample the data in that file (giving strange results).
FYI, I am roughly following this code...
Extracting Amplitude Data from Linear PCM on the iPhone
I have noticed a few different results. For simplicity, assume the record time is fixed at 1 second.
1. When sampling up to 8,000 samples/sec, the mutable array (see code) lists 8,000 entries, but only the first 4,000 contain real-looking data; the last 4,000 points all hold the same value (the exact value varies from run to run).
2. Somewhat related to issue #1: when sampling above 8,000 samples/second, the first half of the samples (e.g. 5,000 of a 10,000-sample set, from 10,000 samples/sec for 1 second) looks like real data, while the second half of the set is fixed at some value (again, this exact value varies run to run). See the snippet below from my debug window; the first number is packetIndex, the second is the buffer value.
4996:-137
4997:1043
4998:-405
4999:-641
5000:195   <-- notice the switch from random data to a constant value at 5k, for a 10k-sample file
5001:195
5002:195
5003:195
5004:195
3. When the mic listens to a speaker in close proximity playing a 1 kHz sinusoidal tone, and I sample this tone at 40,000 samples per second, the resulting data, when plotted in a spreadsheet, shows the signal at about 2 kHz, or double.
Any ideas what I may be doing wrong here?
Here is my setup work to record the audio from the mic...
-(void) initAudioSession {
// setup av session
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];
[audioSession setActive:YES error: nil];
NSLog(@"audio session initiated");
// settings for the recorded file
NSDictionary *recordSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
[NSNumber numberWithFloat:SAMPLERATE],AVSampleRateKey,
[NSNumber numberWithInt:kAudioFormatLinearPCM],AVFormatIDKey,
[NSNumber numberWithInt:1],AVNumberOfChannelsKey,
[NSNumber numberWithInt:16],AVEncoderBitRateKey,
[NSNumber numberWithInt:AVAudioQualityMax],AVEncoderAudioQualityKey, nil];
// setup file name and location
NSString *docDir = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
fileURL = [NSURL fileURLWithPath:[docDir stringByAppendingPathComponent:@"input.caf"]]; // caf or aif?
// initialize my new audio recorder
newAudioRecorder = [[AVAudioRecorder alloc] initWithURL:fileURL settings:recordSettings error:nil];
// show file location so i can check it with some player
NSLog(@"file path = %@", fileURL);
// check if the recorder exists, if so prepare the recorder, if not tell me in debug window
if (newAudioRecorder) {
[newAudioRecorder setDelegate:self];
[newAudioRecorder prepareToRecord];
[self.setupStatus setText:[NSString stringWithFormat:@"recorder ready"]];
}else{
NSLog(@"error setting up recorder");
}
}
Here is my code for loading the recorded file and grabbing the data...
//loads file and go thru values, converts data to be put into an NSMutableArray
-(void)readingRainbow{
// get audio file and put into a file ID
AudioFileID fileID;
AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, kAudioFileCAFType /*kAudioFileAIFFType*/ , &fileID);
// get number of packets of audio contained in file
// instead of getting packets, i just set them to the duration times the sample rate i set
// not sure if this is a valid approach
UInt64 totalPacketCount = SAMPLERATE*timer;
// get size of each packet, is this valid?
UInt32 maxPacketSizeInBytes = sizeof(SInt32);
// setup to extract audio data
UInt32 totPack32 = SAMPLERATE*timer;
UInt32 ioNumBytes = totPack32*maxPacketSizeInBytes;
SInt16 *outBuffer = malloc(ioNumBytes);
memset(outBuffer, 0, ioNumBytes);
// setup array to put buffer samples in
readArray = [[NSMutableArray alloc] initWithObjects: nil];
NSNumber *arrayData;
SInt16 data;
int data2;
// this may be where i need help as well....
// process every packet
for (SInt64 packetIndex = 0; packetIndex<totalPacketCount; packetIndex++) {
// method description for reference..
// AudioFileReadPackets(<#AudioFileID inAudioFile#>, <#Boolean inUseCache#>, <#UInt32 *outNumBytes#>,
// <#AudioStreamPacketDescription *outPacketDescriptions#>, <#SInt64 inStartingPacket#>,
// <#UInt32 *ioNumPackets#>, <#void *outBuffer#>)
// extract packet data, not sure if i'm setting this up properly
AudioFileReadPackets(fileID, false, &ioNumBytes, NULL, packetIndex, &totPack32, outBuffer);
// get buffer data and pass into mutable array
data = outBuffer[packetIndex];
data2=data;
arrayData = [[NSNumber alloc] initWithInt:data2];
[readArray addObject:arrayData];
// printf("%lld:%d\n",packetIndex,data);
printf("%d,",data);
}
Also, I'm using this method to start the recorder...
[newAudioRecorder recordForDuration:timer];
Thoughts? I'm a noob, so any info is greatly appreciated!

You may be recording 16-bit samples but sizing your reads for 32-bit samples, so your buffer expects twice as much data as the file actually contains. Only the first half gets filled with real samples; the rest is whatever happened to be in memory (garbage), which matches the constant tail you're seeing.
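If it helps, here is a minimal sketch (mine, not the poster's code) of reading the samples with the counts and sizes taken from the file itself instead of assumed; it presumes the recording really is mono 16-bit linear PCM in the CAF at fileURL, as in the question.
// Open the recorded file (same call the question uses).
AudioFileID fileID;
AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, kAudioFileCAFType, &fileID);

// Ask the file how many packets it holds and how big each packet is,
// rather than computing the count from SAMPLERATE * timer.
UInt64 packetCount = 0;
UInt32 propSize = sizeof(packetCount);
AudioFileGetProperty(fileID, kAudioFilePropertyAudioDataPacketCount, &propSize, &packetCount);

UInt32 bytesPerPacket = 0;   // for 16-bit mono LPCM this comes back as 2, not sizeof(SInt32)
propSize = sizeof(bytesPerPacket);
AudioFileGetProperty(fileID, kAudioFilePropertyMaximumPacketSize, &propSize, &bytesPerPacket);

// One buffer sized from the file's own numbers, and one read call for everything.
UInt32 numBytes   = (UInt32)packetCount * bytesPerPacket;
UInt32 numPackets = (UInt32)packetCount;
SInt16 *samples   = malloc(numBytes);
AudioFileReadPackets(fileID, false, &numBytes, NULL, 0, &numPackets, samples);

for (UInt32 i = 0; i < numPackets; i++) {
    printf("%u:%d\n", (unsigned)i, samples[i]);
}

free(samples);
AudioFileClose(fileID);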

Related

How do I add ADTS header when reading m4a raw data from iPod library?

Objective
Read an m4a file bought from the iTunes Store via AVAssetReader.
Stream it via HTTP, to be consumed by MobileVLCKit.
What I've tried
As far as I know, AVAssetReader only produces raw audio data, so I guess I should add an ADTS header in front of every sample.
NSError *error = nil;
AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
if (error != nil) {
NSLog(@"%@", [error localizedDescription]);
return -1;
}
AVAssetTrack* track = [asset.tracks objectAtIndex:0];
AVAssetReaderTrackOutput *readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
outputSettings:nil];
[reader addOutput:readerOutput];
[reader startReading];
while (reader.status == AVAssetReaderStatusReading){
AVAssetReaderTrackOutput * trackOutput = (AVAssetReaderTrackOutput *)[reader.outputs objectAtIndex:0];
CMSampleBufferRef sampleBufferRef;
@synchronized(self) {
sampleBufferRef = [trackOutput copyNextSampleBuffer];
}
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBufferRef);
...
}
So, my question is, how do I loop every sample and add ADTS header?
First, you don't need trackOutput; it's the same as the readerOutput you already have.
UPDATE
My mistake, you're absolutely right. I thought the usual 0xFFF sync words were part of AAC; instead they're ADTS headers. So you must add an ADTS header to each of your AAC packets to stream them as ADTS or "aac". I think you have two choices:
Use AudioFileInitializeWithCallbacks + kAudioFileAAC_ADTSType to get the AudioFile API to add the headers for you. You write AAC packets to the AudioFileID and it will call your write callback from where you can stream AAC in ADTS.
Add the headers to the packets yourself. They're only 7 fiddly bytes (9 with checksums, but who uses them?). Some readable implementations here and here
Either way you need to call either CMSampleBufferGetAudioStreamPacketDescriptions or CMSampleBufferCallBlockForEachSample to get the individual AAC packets from a CMSampleBufferRef.
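To make option 2 concrete, here is a hedged sketch of hand-building the 7-byte header and prepending it to each AAC packet pulled out of a sample buffer. The profile, sampling-frequency index, and channel configuration below are assumptions for AAC-LC at 44.1 kHz stereo; in real code derive them from the track's AudioStreamBasicDescription.
#import <Foundation/Foundation.h>
#import <CoreMedia/CoreMedia.h>

// Fill a 7-byte ADTS header (no CRC) for one AAC packet of aacLength bytes.
// Assumed values: AAC-LC (object type 2), 44.1 kHz (index 4), stereo (2 channels).
static void FillADTSHeader(uint8_t *adts, size_t aacLength) {
    const int profile = 2;                       // AAC-LC
    const int freqIdx = 4;                       // 44100 Hz
    const int chanCfg = 2;                       // stereo
    const size_t frameLength = aacLength + 7;    // header is counted in the frame length

    adts[0] = 0xFF;                                               // syncword
    adts[1] = 0xF1;                                               // syncword, MPEG-4, no CRC
    adts[2] = ((profile - 1) << 6) | (freqIdx << 2) | (chanCfg >> 2);
    adts[3] = ((chanCfg & 0x3) << 6) | (uint8_t)(frameLength >> 11);
    adts[4] = (frameLength >> 3) & 0xFF;
    adts[5] = ((frameLength & 0x7) << 5) | 0x1F;                  // buffer fullness = 0x7FF
    adts[6] = 0xFC;
}

// Walk the AAC packets inside one CMSampleBufferRef and emit header + payload per packet.
static void StreamSampleBufferAsADTS(CMSampleBufferRef sampleBufferRef,
                                     void (^emit)(NSData *adtsPacket)) {
    CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBufferRef);
    size_t totalLength = 0, lengthAtOffset = 0;
    char *data = NULL;
    CMBlockBufferGetDataPointer(blockBuffer, 0, &lengthAtOffset, &totalLength, &data);

    // Ask how much packet-description storage is needed, then fetch the descriptions.
    size_t descSizeNeeded = 0;
    CMSampleBufferGetAudioStreamPacketDescriptions(sampleBufferRef, 0, NULL, &descSizeNeeded);
    size_t count = descSizeNeeded / sizeof(AudioStreamPacketDescription);
    AudioStreamPacketDescription *descs = malloc(descSizeNeeded);
    CMSampleBufferGetAudioStreamPacketDescriptions(sampleBufferRef, descSizeNeeded, descs, NULL);

    for (size_t i = 0; i < count; i++) {
        uint8_t header[7];
        FillADTSHeader(header, descs[i].mDataByteSize);
        NSMutableData *packet = [NSMutableData dataWithBytes:header length:7];
        [packet appendBytes:data + descs[i].mStartOffset length:descs[i].mDataByteSize];
        emit(packet);   // hand the finished ADTS frame to your HTTP streaming layer
    }
    free(descs);
}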

the amazing audio engine how to apply filters to microphone input

I'm trying to make a karaoke app that records the background music from a file together with the microphone.
I also want to add filter effects to the microphone input.
I can do everything stated above using The Amazing Audio Engine SDK, but I can't figure out how to add the microphone input as a channel so I can apply filters to it (and not to the background music).
Any help would be appreciated.
My current recording code:
- (void)beginRecording {
// Init recorder
self.recorder = [[AERecorder alloc] initWithAudioController:_audioController];
NSString *documentsFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)
objectAtIndex:0];
NSString *filePath = [documentsFolder stringByAppendingPathComponent:@"Recording.aiff"];
// Start the recording process
NSError *error = NULL;
if ( ![_recorder beginRecordingToFileAtPath:filePath
fileType:kAudioFileAIFFType
error:&error] ) {
// Report error
return;
}
// Receive both audio input and audio output. Note that if you're using
// AEPlaythroughChannel, mentioned above, you may not need to receive the input again.
[_audioController addInputReceiver:_recorder];
[_audioController addOutputReceiver:_recorder];
}
You can separate your background music and your mic by using different channels, and then apply the filter to the mic channel only.
First, declare a channel group in the header file:
AEChannelGroupRef _group;
Then simply add the player you are using for the recorded file to this group:
[_audioController addChannels:@[_player] toChannelGroup:_group];
And then add the filter to this group only:
[_audioController addFilter:_reverb toChannelGroup:_group];
self.reverb = [[[AEAudioUnitFilter alloc] initWithComponentDescription:AEAudioComponentDescriptionMake(kAudioUnitManufacturer_Apple, kAudioUnitType_Effect, kAudioUnitSubType_Reverb2) audioController:_audioController error:NULL] autorelease];
AudioUnitSetParameter(_reverb.audioUnit, kReverb2Param_DryWetMix, kAudioUnitScope_Global, 0, 100.f, 0);
[_audioController addFilter:_reverb];
You can apply filters at the time of playing the recorded audio.
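Assembled in order, under ARC, that looks roughly like the sketch below. The createChannelGroup call is my assumption about how _group gets created (it isn't shown above); everything else is the answer's own calls. The mic would sit in its own group, or outside this one, so the reverb never touches it.
// In the header: AEChannelGroupRef _group;

// Create a group and route the recorded-file player into it.
_group = [_audioController createChannelGroup];   // assumption: TAAE's group-creation call
[_audioController addChannels:@[_player] toChannelGroup:_group];

// Build a reverb filter (Apple's Reverb2 audio unit)...
self.reverb = [[AEAudioUnitFilter alloc]
    initWithComponentDescription:AEAudioComponentDescriptionMake(kAudioUnitManufacturer_Apple,
                                                                 kAudioUnitType_Effect,
                                                                 kAudioUnitSubType_Reverb2)
                 audioController:_audioController
                           error:NULL];
AudioUnitSetParameter(_reverb.audioUnit, kReverb2Param_DryWetMix, kAudioUnitScope_Global, 0, 100.f, 0);

// ...and attach it to this group only, so channels outside the group (e.g. the mic) stay dry.
[_audioController addFilter:_reverb toChannelGroup:_group];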

Passing AVCaptureAudioDataOutput data into vDSP / Accelerate.framework

I am trying to create an application which runs a FFT on microphone data, so I can examine e.g. the loudest frequency in the input.
I see that there are many methods of getting audio input (the RemoteIO AudioUnit, AudioQueue services, and AVFoundation) but it seems like AVFoundation is the simplest. I have this setup:
// Configure the audio session
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryRecord error:NULL];
[session setMode:AVAudioSessionModeMeasurement error:NULL];
[session setActive:YES error:NULL];
// Optional - default gives 1024 samples at 44.1kHz
//[session setPreferredIOBufferDuration:samplesPerSlice/session.sampleRate error:NULL];
// Configure the capture session (strongly-referenced instance variable, otherwise the capture stops after one slice)
_captureSession = [[AVCaptureSession alloc] init];
// Configure audio device input
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:NULL];
[_captureSession addInput:input];
// Configure audio data output
AVCaptureAudioDataOutput *output = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("My callback", DISPATCH_QUEUE_SERIAL);
[output setSampleBufferDelegate:self queue:queue];
[_captureSession addOutput:output];
// Start the capture session.
[_captureSession startRunning];
(plus error checking, omitted here for readability).
Then I implement the following AVCaptureAudioDataOutputSampleBufferDelegate method:
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
fromConnection:(AVCaptureConnection *)connection
{
NSLog(@"Num samples: %ld", CMSampleBufferGetNumSamples(sampleBuffer));
// Usually gives 1024 (except the first slice)
}
I'm unsure what the next step should be. What exactly does the CMSampleBuffer format describe (and what assumptions can be made about it, if any)? How should I get the raw audio data into vDSP_fft_zrip with the least possible amount of extra preprocessing? (Also, what would you recommend doing to verify that the raw data I see is correct?)
The CMSampleBufferRef is an opaque type that contains 0 or more media samples. There is a bit of blurb in the docs:
http://developer.apple.com/library/ios/#documentation/CoreMedia/Reference/CMSampleBuffer/Reference/reference.html
In this case it will contain an audio buffer, as well as the description of the sample format and timing information and so on. If you are really interested just put a breakpoint in the delegate callback and take a look.
The first step is to get a pointer to the data buffer that has been returned:
// get a pointer to the audio bytes
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
CMBlockBufferRef audioBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t lengthAtOffset;
size_t totalLength;
char *samples;
CMBlockBufferGetDataPointer(audioBuffer, 0, &lengthAtOffset, &totalLength, &samples);
The default sample format for the iPhone mic is linear PCM with 16-bit samples. This may be mono or stereo depending on whether an external mic is attached. To calculate the FFT we need a float vector. Fortunately there is an Accelerate function to do the conversion for us:
// check what sample format we have
// this should always be linear PCM
// but may have 1 or 2 channels
CMAudioFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *desc = CMAudioFormatDescriptionGetStreamBasicDescription(format);
assert(desc->mFormatID == kAudioFormatLinearPCM);
if (desc->mChannelsPerFrame == 1 && desc->mBitsPerChannel == 16) {
float *convertedSamples = malloc(numSamples * sizeof(float));
vDSP_vflt16((short *)samples, 1, convertedSamples, 1, numSamples);
} else {
// handle other cases as required
}
Now you have a float vector of the sample buffer which you can use with vDSP_fft_zrip. It doesn't seem possible to change the input format from the microphone to float samples with AVFoundation, so you are stuck with this last conversion step. I would keep around the buffers in practice, reallocing them if necessary when a larger buffer arrives, so that you are not mallocing and freeing buffers with every delegate callback.
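As a rough illustration of that last step (my sketch, not part of the answer): assuming a 1024-sample mono slice already converted into convertedSamples as above, the FFT itself could look like this.
#import <Accelerate/Accelerate.h>

// One-time setup; keep the FFTSetup around instead of recreating it per callback.
const vDSP_Length n = 1024;            // samples per slice
const vDSP_Length log2n = 10;          // log2(1024)
FFTSetup fftSetup = vDSP_create_fftsetup(log2n, kFFTRadix2);

// Pack the real samples into the split-complex layout vDSP_fft_zrip expects.
float realp[512], imagp[512];
DSPSplitComplex split = { realp, imagp };
vDSP_ctoz((const DSPComplex *)convertedSamples, 2, &split, 1, n / 2);

// In-place forward real FFT.
vDSP_fft_zrip(fftSetup, &split, 1, log2n, kFFTDirection_Forward);

// Squared magnitude per bin; the largest bin times (sampleRate / n) is the loudest frequency.
float magnitudes[512];
vDSP_zvmags(&split, 1, magnitudes, 1, n / 2);

float maxMag = 0;
vDSP_Length maxBin = 0;
vDSP_maxvi(magnitudes, 1, &maxMag, &maxBin, n / 2);
// loudest frequency ≈ maxBin * desc->mSampleRate / n

vDSP_destroy_fftsetup(fftSetup);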
As for your last question, I guess the easiest way to do this would be to inject a known input and check that it gives you the correct response. You could play a sine wave into the mic and check that your FFT had a peak in the correct frequency bin, something like that.
I don't suggest using AVFoundation, for 3 reasons:
1. I used it in some of my apps (morsedec, irtty); it works well in the simulator and on some hardware, but on others it totally failed!
2. You do not have good control over sample rate and format.
3. Latency could be high.
I suggest starting with Apple's sample code aurioTouch.
To do the FFT you can shift to the vDSP framework, using a circular buffer (I LOVE https://github.com/michaeltyson/TPCircularBuffer).
Hope this helps.

RemoteIO recorded audio file is either silent or 4KB

I'm using RemoteIO successfully to perform analysis on the incoming audio stream from the mic. I can't seem to get a file written to disk, though. I've read around a number of questions:
Example of saving audio from RemoteIO?,
AudioBufferList contents in remoteIO audio unit playback callback,
Recording to AAC from RemoteIO: data is getting written but file unplayable
Recording from RemoteIO: resulting .caf is pitch shifted slower + distorted
I tried to implement the suggestions there, but they're not working. Where's the correct place to call ExtAudioFileWriteAsync, and how do I set it up?
Aside from the (fairly arduous but better covered by Apple's example code) setup process of RemoteIO itself, the key points of insight were:
Using the same AudioStreamBasicDescription (*audioFormat) that I used to set up the stream in the first place. I don't know how long I spent trying to set up a new one with slightly different parameters, based on other questions and posts. Just referencing the stream attributes from my ivar was sufficient.
Set an "isRecording" bool so that you can turn on and off write-to-file without having to tear down and re-set-up your RemoteIO session
It is ok to write to a file in the recordingCallback, um, callback, but do it asynchronously. Lots of info talks about doing it in the playbackCallback or setting up some third audioFileWriteCallback. This resulted in silent files or 4KB (i.e. empty) files. Don't do it.
Also, be sure to use a copy of the ioData that got passed into the callback.
In recordingCallback, after AudioUnitRender into bufferList:
AudioDeviceManager* THIS = (__bridge AudioDeviceManager *)inRefCon;
if (THIS->isRecording) {
ExtAudioFileWriteAsync(THIS->extAudioFileRef, inNumberFrames, bufferList);
}
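Pulled together, the render callback might look roughly like this. Treat it as a sketch: extAudioFileRef, isRecording, audioFormat, and AudioDeviceManager are the names used above, while rioUnit (the RemoteIO audio unit ivar) and the buffer sizing are my assumptions.
static OSStatus recordingCallback(void                        *inRefCon,
                                  AudioUnitRenderActionFlags  *ioActionFlags,
                                  const AudioTimeStamp        *inTimeStamp,
                                  UInt32                      inBusNumber,
                                  UInt32                      inNumberFrames,
                                  AudioBufferList             *ioData)
{
    AudioDeviceManager *THIS = (__bridge AudioDeviceManager *)inRefCon;

    // Render the mic input into our own buffer list - a copy, not ioData.
    SInt16 samples[4096 * 2];   // assumes <= 4096 frames of <= 2-channel 16-bit audio per slice
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = THIS->audioFormat.mChannelsPerFrame;
    bufferList.mBuffers[0].mDataByteSize   = inNumberFrames * THIS->audioFormat.mBytesPerFrame;
    bufferList.mBuffers[0].mData           = samples;

    OSStatus status = AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp,
                                      inBusNumber, inNumberFrames, &bufferList);
    if (status) return status;

    // ... run your analysis on bufferList here ...

    if (THIS->isRecording) {
        // Asynchronous write - safe to call from the render thread.
        ExtAudioFileWriteAsync(THIS->extAudioFileRef, inNumberFrames, &bufferList);
    }
    return noErr;
}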
Start and stop recording functions, for reference:
-(void)startRecording {
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [documentsDirectory stringByAppendingPathComponent:kAudioFileName];
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);
OSStatus status;
// create the capture file
status = ExtAudioFileCreateWithURL(destinationURL, kAudioFileWAVEType, &audioFormat, NULL, kAudioFileFlags_EraseFile, &extAudioFileRef);
if (status) NSLog(@"Error creating file with URL: %ld", status);
// use the same "audioFormat" AudioStreamBasicDescription we used to set up RemoteIO in the first place
status = ExtAudioFileSetProperty(extAudioFileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &audioFormat);
ExtAudioFileSeek(extAudioFileRef, 0);
ExtAudioFileWrite(extAudioFileRef, 0, NULL);
isRecording = YES;
}
- (void)stopRecording {
isRecording = NO;
OSStatus status = ExtAudioFileDispose(extAudioFileRef);
if (status) printf("ExtAudioFileDispose %ld \n", status);
}
That's it!

why is audio coming up garbled when using AVAssetReader with audio queue

Based on my research, people keep saying that it's caused by mismatched/wrong formatting, but I'm using LPCM formatting for both input and output. How can you go wrong with that? The result I'm getting is just noise (like white noise).
I've decided to just paste my entire code; perhaps that will help:
#import "AppDelegate.h"
#import "ViewController.h"
@implementation AppDelegate
@synthesize window = _window;
@synthesize viewController = _viewController;
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
self.window = [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
// Override point for customization after application launch.
self.viewController = [[ViewController alloc] initWithNibName:@"ViewController" bundle:nil];
self.window.rootViewController = self.viewController;
[self.window makeKeyAndVisible];
// Insert code here to initialize your application
player = [[Player alloc] init];
[self setupReader];
[self setupQueue];
// initialize reader in a new thread
internalThread =[[NSThread alloc]
initWithTarget:self
selector:@selector(readPackets)
object:nil];
[internalThread start];
// start the queue. this function returns immediately and begins
// invoking the callback, as needed, asynchronously.
//CheckError(AudioQueueStart(queue, NULL), "AudioQueueStart failed");
// and wait
printf("Playing...\n");
do
{
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 0.25, false);
} while (!player.isDone /*|| gIsRunning*/);
// isDone represents the state of the Audio File enqueuing. This does not mean the
// Audio Queue is actually done playing yet. Since we have 3 half-second buffers in-flight
// run for continue to run for a short additional time so they can be processed
CFRunLoopRunInMode(kCFRunLoopDefaultMode, 2, false);
// end playback
player.isDone = true;
CheckError(AudioQueueStop(queue, TRUE), "AudioQueueStop failed");
cleanup:
AudioQueueDispose(queue, TRUE);
AudioFileClose(player.playbackFile);
return YES;
}
- (void) setupReader
{
NSURL *assetURL = [NSURL URLWithString:@"ipod-library://item/item.m4a?id=1053020204400037178"]; // from ilham's ipod
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
// from AVAssetReader Class Reference:
// AVAssetReader is not intended for use with real-time sources,
// and its performance is not guaranteed for real-time operations.
NSError * error = nil;
AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:songAsset error:&error];
AVAssetTrack* track = [songAsset.tracks objectAtIndex:0];
readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
outputSettings:nil];
// AVAssetReaderOutput* readerOutput = [[AVAssetReaderAudioMixOutput alloc] initWithAudioTracks:songAsset.tracks audioSettings:nil];
[reader addOutput:readerOutput];
[reader startReading];
}
- (void) setupQueue
{
// get the audio data format from the file
// we know that it is PCM.. since it's converted
AudioStreamBasicDescription dataFormat;
dataFormat.mSampleRate = 44100.0;
dataFormat.mFormatID = kAudioFormatLinearPCM;
dataFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
dataFormat.mBytesPerPacket = 4;
dataFormat.mFramesPerPacket = 1;
dataFormat.mBytesPerFrame = 4;
dataFormat.mChannelsPerFrame = 2;
dataFormat.mBitsPerChannel = 16;
// create a output (playback) queue
CheckError(AudioQueueNewOutput(&dataFormat, // ASBD
MyAQOutputCallback, // Callback
(__bridge void *)self, // user data
NULL, // run loop
NULL, // run loop mode
0, // flags (always 0)
&queue), // output: reference to AudioQueue object
"AudioQueueNewOutput failed");
// adjust buffer size to represent about a half second (0.5) of audio based on this format
CalculateBytesForTime(dataFormat, 0.5, &bufferByteSize, &player->numPacketsToRead);
// check if we are dealing with a VBR file. ASBDs for VBR files always have
// mBytesPerPacket and mFramesPerPacket as 0 since they can fluctuate at any time.
// If we are dealing with a VBR file, we allocate memory to hold the packet descriptions
bool isFormatVBR = (dataFormat.mBytesPerPacket == 0 || dataFormat.mFramesPerPacket == 0);
if (isFormatVBR)
player.packetDescs = (AudioStreamPacketDescription*)malloc(sizeof(AudioStreamPacketDescription) * player.numPacketsToRead);
else
player.packetDescs = NULL; // we don't provide packet descriptions for constant bit rate formats (like linear PCM)
// get magic cookie from file and set on queue
MyCopyEncoderCookieToQueue(player.playbackFile, queue);
// allocate the buffers and prime the queue with some data before starting
player.isDone = false;
player.packetPosition = 0;
int i;
for (i = 0; i < kNumberPlaybackBuffers; ++i)
{
CheckError(AudioQueueAllocateBuffer(queue, bufferByteSize, &audioQueueBuffers[i]), "AudioQueueAllocateBuffer failed");
// EOF (the entire file's contents fit in the buffers)
if (player.isDone)
break;
}
}
-(void)readPackets
{
// initialize a mutex and condition so that we can block on buffers in use.
pthread_mutex_init(&queueBuffersMutex, NULL);
pthread_cond_init(&queueBufferReadyCondition, NULL);
state = AS_BUFFERING;
while ((sample = [readerOutput copyNextSampleBuffer])) {
AudioBufferList audioBufferList;
CMBlockBufferRef CMBuffer = CMSampleBufferGetDataBuffer( sample );
CheckError(CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
sample,
NULL,
&audioBufferList,
sizeof(audioBufferList),
NULL,
NULL,
kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
&CMBuffer
),
"could not read samples");
AudioBuffer audioBuffer = audioBufferList.mBuffers[0];
UInt32 inNumberBytes = audioBuffer.mDataByteSize;
size_t incomingDataOffset = 0;
while (inNumberBytes) {
size_t bufSpaceRemaining;
bufSpaceRemaining = bufferByteSize - bytesFilled;
@synchronized(self)
{
bufSpaceRemaining = bufferByteSize - bytesFilled;
size_t copySize;
if (bufSpaceRemaining < inNumberBytes)
{
copySize = bufSpaceRemaining;
}
else
{
copySize = inNumberBytes;
}
// copy data to the audio queue buffer
AudioQueueBufferRef fillBuf = audioQueueBuffers[fillBufferIndex];
memcpy((char*)fillBuf->mAudioData + bytesFilled, (const char*)(audioBuffer.mData + incomingDataOffset), copySize);
// keep track of bytes filled
bytesFilled +=copySize;
incomingDataOffset +=copySize;
inNumberBytes -=copySize;
}
// if the space remaining in the buffer is not enough for this packet, then enqueue the buffer.
if (bufSpaceRemaining < inNumberBytes + bytesFilled)
{
[self enqueueBuffer];
}
}
}
}
-(void)enqueueBuffer
{
@synchronized(self)
{
inuse[fillBufferIndex] = true; // set in use flag
buffersUsed++;
// enqueue buffer
AudioQueueBufferRef fillBuf = audioQueueBuffers[fillBufferIndex];
NSLog(@"we are now enqueueing buffer %d", fillBufferIndex);
fillBuf->mAudioDataByteSize = bytesFilled;
err = AudioQueueEnqueueBuffer(queue, fillBuf, 0, NULL);
if (err)
{
NSLog(@"could not enqueue queue with buffer");
return;
}
if (state == AS_BUFFERING)
{
//
// Fill all the buffers before starting. This ensures that the
// AudioFileStream stays a small amount ahead of the AudioQueue to
// avoid an audio glitch playing streaming files on iPhone SDKs < 3.0
//
if (buffersUsed == kNumberPlaybackBuffers - 1)
{
err = AudioQueueStart(queue, NULL);
if (err)
{
NSLog(@"couldn't start queue");
return;
}
state = AS_PLAYING;
}
}
// go to next buffer
if (++fillBufferIndex >= kNumberPlaybackBuffers) fillBufferIndex = 0;
bytesFilled = 0; // reset bytes filled
}
// wait until next buffer is not in use
pthread_mutex_lock(&queueBuffersMutex);
while (inuse[fillBufferIndex])
{
pthread_cond_wait(&queueBufferReadyCondition, &queueBuffersMutex);
}
pthread_mutex_unlock(&queueBuffersMutex);
}
#pragma mark - utility functions -
// generic error handler - if err is nonzero, prints error message and exits program.
static void CheckError(OSStatus error, const char *operation)
{
if (error == noErr) return;
char str[20];
// see if it appears to be a 4-char-code
*(UInt32 *)(str + 1) = CFSwapInt32HostToBig(error);
if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
str[0] = str[5] = '\'';
str[6] = '\0';
} else
// no, format it as an integer
sprintf(str, "%d", (int)error);
fprintf(stderr, "Error: %s (%s)\n", operation, str);
exit(1);
}
// we only use time here as a guideline
// we're really trying to get somewhere between 16K and 64K buffers, but not allocate too much if we don't need it
void CalculateBytesForTime(AudioStreamBasicDescription inDesc, Float64 inSeconds, UInt32 *outBufferSize, UInt32 *outNumPackets)
{
// we need to calculate how many packets we read at a time, and how big a buffer we need.
// we base this on the size of the packets in the file and an approximate duration for each buffer.
//
// first check to see what the max size of a packet is, if it is bigger than our default
// allocation size, that needs to become larger
// we don't have access to file packet size, so we just default it to maxBufferSize
UInt32 maxPacketSize = 0x10000;
static const int maxBufferSize = 0x10000; // limit size to 64K
static const int minBufferSize = 0x4000; // limit size to 16K
if (inDesc.mFramesPerPacket) {
Float64 numPacketsForTime = inDesc.mSampleRate / inDesc.mFramesPerPacket * inSeconds;
*outBufferSize = numPacketsForTime * maxPacketSize;
} else {
// if frames per packet is zero, then the codec has no predictable packet == time
// so we can't tailor this (we don't know how many Packets represent a time period
// we'll just return a default buffer size
*outBufferSize = maxBufferSize > maxPacketSize ? maxBufferSize : maxPacketSize;
}
// we're going to limit our size to our default
if (*outBufferSize > maxBufferSize && *outBufferSize > maxPacketSize)
*outBufferSize = maxBufferSize;
else {
// also make sure we're not too small - we don't want to go the disk for too small chunks
if (*outBufferSize < minBufferSize)
*outBufferSize = minBufferSize;
}
*outNumPackets = *outBufferSize / maxPacketSize;
}
// many encoded formats require a 'magic cookie'. if the file has a cookie we get it
// and configure the queue with it
static void MyCopyEncoderCookieToQueue(AudioFileID theFile, AudioQueueRef queue ) {
UInt32 propertySize;
OSStatus result = AudioFileGetPropertyInfo (theFile, kAudioFilePropertyMagicCookieData, &propertySize, NULL);
if (result == noErr && propertySize > 0)
{
Byte* magicCookie = (UInt8*)malloc(sizeof(UInt8) * propertySize);
CheckError(AudioFileGetProperty (theFile, kAudioFilePropertyMagicCookieData, &propertySize, magicCookie), "get cookie from file failed");
CheckError(AudioQueueSetProperty(queue, kAudioQueueProperty_MagicCookie, magicCookie, propertySize), "set cookie on queue failed");
free(magicCookie);
}
}
#pragma mark - audio queue -
static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inCompleteAQBuffer)
{
AppDelegate *appDelegate = (__bridge AppDelegate *) inUserData;
[appDelegate myCallback:inUserData
inAudioQueue:inAQ
audioQueueBufferRef:inCompleteAQBuffer];
}
- (void)myCallback:(void *)userData
inAudioQueue:(AudioQueueRef)inAQ
audioQueueBufferRef:(AudioQueueBufferRef)inCompleteAQBuffer
{
unsigned int bufIndex = -1;
for (unsigned int i = 0; i < kNumberPlaybackBuffers; ++i)
{
if (inCompleteAQBuffer == audioQueueBuffers[i])
{
bufIndex = i;
break;
}
}
if (bufIndex == -1)
{
NSLog(@"something went wrong at queue callback");
return;
}
// signal waiting thread that the buffer is free.
pthread_mutex_lock(&queueBuffersMutex);
NSLog(@"signalling that buffer %d is free", bufIndex);
inuse[bufIndex] = false;
buffersUsed--;
pthread_cond_signal(&queueBufferReadyCondition);
pthread_mutex_unlock(&queueBuffersMutex);
}
@end
Update:
btomw's answer below solved the problem magnificently. But I want to get to the bottom of this (most novice developers like myself, and even btomw when he first started, usually shoot in the dark with parameters, formatting, etc. - see here for an example).
The reason I provided nil as the parameter for
AVURLAsset *songAsset = [AVURLAsset URLAssetWithURL:assetURL options:audioReadSettings];
was that, according to the documentation and trial and error, I realized that any format I specified other than LPCM would be rejected outright. In other words, when you use AVAssetReader, even for conversion, the result is always LPCM. So I figured the default format was LPCM anyway and left it as nil, but I guess I was wrong.
The weird part in this (please correct me, anyone, if I'm wrong) is that, as I mentioned, suppose the original file was .mp3, my intention was to play it back (or send the packets over a network, etc.) as mp3, and so I provided an mp3 ASBD: the asset reader crashes! So if I wanted to send it in its original form, would I just supply nil? The obvious problem with that is that there would be no way for me to figure out what ASBD it has once I receive it on the other side... or could I?
Update 2: You can download the code from GitHub.
So here's what I think is happening and also how I think you can fix it.
You're pulling a predefined item out of the iPod (music) library on an iOS device. You are then using an asset reader to collect its buffers and queue those buffers, where possible, in an AudioQueue.
The problem you are having, I think, is that you are setting the audio queue buffer's input format to Linear Pulse Code Modulation (LPCM - hope I got that right, I might be off on the acronym). The output settings you are passing to the asset reader output are nil, which means you'll get output that is most likely NOT LPCM, but is instead AIFF or AAC or MP3 or whatever format the song has as it exists in iOS's media library. You can, however, remedy this situation by passing in different output settings.
Try changing
readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:nil];
to:
// channelLayout was referenced but not defined in the original snippet;
// a stereo layout to match AVNumberOfChannelsKey = 2
AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatLinearPCM], AVFormatIDKey,
[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
[NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
[NSNumber numberWithInt:16], AVLinearPCMBitDepthKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsNonInterleaved,
[NSNumber numberWithBool:NO], AVLinearPCMIsFloatKey,
[NSNumber numberWithBool:NO], AVLinearPCMIsBigEndianKey,
nil];
readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:outputSettings];
It's my understanding (per Apple's documentation) that passing nil as the output settings param gives you samples of the same file type as the original audio track. Even if you have a file that is LPCM, some other settings might be off, which might cause your problems. At the very least, this will normalize all the reader output, which should make things a bit easier to troubleshoot.
Hope that helps!
Edit:
The reason I provided nil as the parameter for AVURLAsset *songAsset
= [AVURLAsset URLAssetWithURL:assetURL options:audioReadSettings];
was that, according to the documentation and trial and error, I...
AVAssetReaders do two things: read back an audio file as it exists on disk (i.e. mp3, aac, aiff), or convert the audio into LPCM.
If you pass nil as the output settings, it will read the file back as it exists, and in this you are correct. I apologize for not mentioning that an asset reader will only allow nil or LPCM. I actually ran into that problem myself (it's in the docs somewhere, but requires a bit of diving), but didn't elect to mention it here as it wasn't on my mind at the time. Sooooo... sorry about that?
If you want to know the AudioStreamBasicDescription (ASBD) of the track you are reading before you read it, you can get it by doing this:
AVURLAsset* uasset = [[AVURLAsset URLAssetWithURL:<#assetURL#> options:nil]retain];
AVAssetTrack *track = [uasset.tracks objectAtIndex:0];
CMFormatDescriptionRef formDesc = (CMFormatDescriptionRef)[[track formatDescriptions] objectAtIndex:0];
const AudioStreamBasicDescription* asbdPointer = CMAudioFormatDescriptionGetStreamBasicDescription(formDesc);
//because this is a pointer and not a struct we need to move the data into a struct so we can use it
AudioStreamBasicDescription asbd = {0};
memcpy(&asbd, asbdPointer, sizeof(asbd));
//asbd now contains a basic description for the track
You can then convert asbd to binary data in whatever format you see fit and transfer it over the network. You should then be able to start sending audio buffer data over the network and successfully play it back with your AudioQueue.
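For example (my sketch, assuming both ends share the same endianness; otherwise swap the multi-byte fields explicitly), the struct can travel as a small NSData blob ahead of the audio packets:
// Sender: wrap the struct's raw bytes.
NSData *asbdData = [NSData dataWithBytes:&asbd length:sizeof(asbd)];
// ... send asbdData first over your connection, then start sending packet data ...

// Receiver: copy the bytes back into a struct before creating the AudioQueue.
AudioStreamBasicDescription receivedASBD = {0};
[asbdData getBytes:&receivedASBD length:sizeof(receivedASBD)];
// receivedASBD can now be passed to AudioQueueNewOutput()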
I actually had a system like this working not that long ago, but since I couldn't keep the connection alive when the iOS client device went to the background, I wasn't able to use it for my purpose. Still, if all that work lets me help someone else who can actually use the info, that seems like a win to me.
