Play sound file together with AudioUnit - iOS

I have an app that uses an AudioUnit with subtype kAudioUnitSubType_VoiceProcessingIO. The input comes from a network stream. That works fine, but now I want to play a simple sound from a file (an alert sound after receiving a push notification). That does not work while the AudioUnit is running, only when it has not been started yet or has already been disposed. I can even stop the AudioUnit while the sound file is still playing and hear its final samples, so the playback itself seems to work, but it is silent.
I tried AudioServicesPlaySystemSound and AVAudioPlayer without success on a real device (they might work in the simulator).
Is there a simple solution to this simple task, or do I need to manually mix in the file-based audio content?
Thanks in advance!

You can build an AUGraph with Audio Units and play the audio file through a kAudioUnitSubType_AudioFilePlayer unit.
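For example, the file player node might be described like this (a minimal sketch; the graph wiring, file scheduling via kAudioUnitProperty_ScheduledFileIDs, and error handling are omitted, and _graph is assumed to be an AUGraph you have already created):
AudioComponentDescription filePlayerDesc;
filePlayerDesc.componentType = kAudioUnitType_Generator;
filePlayerDesc.componentSubType = kAudioUnitSubType_AudioFilePlayer;
filePlayerDesc.componentManufacturer = kAudioUnitManufacturer_Apple;
filePlayerDesc.componentFlags = 0;
filePlayerDesc.componentFlagsMask = 0;

AUNode filePlayerNode;
// add the file player as a node of the (assumed) existing graph
AUGraphAddNode(_graph, &filePlayerDesc, &filePlayerNode);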

Try:
OSStatus propertySetError = 0;
UInt32 allowMixing = true;
propertySetError = AudioSessionSetProperty (
    kAudioSessionProperty_OverrideCategoryMixWithOthers, // property to override
    sizeof (allowMixing),                                // size of the value
    &allowMixing                                         // the value itself
);
Or you can use a mixer AudioUnit (kAudioUnitType_Mixer):
AudioComponentDescription MixerUnitDescription;
MixerUnitDescription.componentType = kAudioUnitType_Mixer;
MixerUnitDescription.componentSubType = kAudioUnitSubType_MultiChannelMixer;
MixerUnitDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
MixerUnitDescription.componentFlags = 0;
MixerUnitDescription.componentFlagsMask = 0;
Add them (mixer_unit, voice_processing_unit) to an AUGraph, and set the input bus count to 2 on the mixer unit.
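A rough sketch of that graph setup (assuming IOUnitDescription describes the kAudioUnitSubType_VoiceProcessingIO unit, set up the same way as MixerUnitDescription above; the names match the snippets below):
AUGraph _mixSaveGraph;
AUNode mixerNode, saveNode;
AudioUnit mixer_unit, io_unit;

NewAUGraph(&_mixSaveGraph);
AUGraphAddNode(_mixSaveGraph, &MixerUnitDescription, &mixerNode);
AUGraphAddNode(_mixSaveGraph, &IOUnitDescription, &saveNode);   // voice-processing I/O
AUGraphOpen(_mixSaveGraph);
// fetch the AudioUnit instances so their properties can be set
AUGraphNodeInfo(_mixSaveGraph, mixerNode, NULL, &mixer_unit);
AUGraphNodeInfo(_mixSaveGraph, saveNode, NULL, &io_unit);
Then set the mixer unit's input bus count: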
UInt32 busCount = 2; // bus count for mixer unit input
status = AudioUnitSetProperty (mixer_unit,
kAudioUnitProperty_ElementCount,
kAudioUnitScope_Input,
0,
&busCount,
sizeof(busCount)
);
Add a render callback for each input bus of mixer_unit:
for (UInt16 busNumber = 0; busNumber < busCount; ++busNumber) {
AURenderCallbackStruct inputCallbackStruct;
inputCallbackStruct.inputProc = &VoiceMixRenderCallback;
inputCallbackStruct.inputProcRefCon = self;
status = AUGraphSetNodeInputCallback (_mixSaveGraph,
mixerNode,
busNumber,
&inputCallbackStruct
);
if (status) { printf("AUGraphSetNodeInputCallback failed\n"); return NO; }
}
Connect the mixer_unit's output bus to io_unit's input bus:
status = AUGraphConnectNodeInput (_mixSaveGraph,
mixerNode, // source node
0, // source node output bus number
saveNode, // destination node
0 // destination node input bus number
);
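Then initialize and start the graph (a sketch; error checks omitted):
CAShow(_mixSaveGraph);                     // optional: print the graph layout to the console
status = AUGraphInitialize(_mixSaveGraph);
status = AUGraphStart(_mixSaveGraph);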
Once the graph is running, your render callback will be invoked with different bus numbers (0 and 1), like this:
static OSStatus VoiceMixRenderCallback(void *inRefCon,
                                       AudioUnitRenderActionFlags *ioActionFlags,
                                       const AudioTimeStamp *inTimeStamp,
                                       UInt32 inBusNumber,
                                       UInt32 inNumberFrames,
                                       AudioBufferList *ioData)
{
    OSStatus status;
    AURecorder *recorder = (AURecorder *)inRefCon;
    if (inBusNumber == BUS_ONE && recorder.isMusicPlaying) {
        // Fill ioData with samples from the music file, e.g. with ExtAudioFileRead
        // (see the sketch below).
    } else if (inBusNumber == BUS_TWO) {
        // Pull data from the io unit's input bus (the microphone).
        status = AudioUnitRender(recorder.io_unit, ioActionFlags, inTimeStamp, INPUT_BUS, inNumberFrames, ioData);
        if (status)
        {
            printf("AudioUnitRender for record error\n");
            *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
        }
    } else {
        *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
    }
    return noErr;
}
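For the music bus, one possible approach is ExtAudioFile (a sketch only, assuming you open the file during setup, set its client data format to the mixer's input format, and keep it in an assumed musicFile property on the recorder; fileURL and mixerInputFormat are placeholders):
// Setup (once, outside the callback):
ExtAudioFileRef musicFile;
ExtAudioFileOpenURL((__bridge CFURLRef)fileURL, &musicFile);
ExtAudioFileSetProperty(musicFile, kExtAudioFileProperty_ClientDataFormat,
                        sizeof(mixerInputFormat), &mixerInputFormat);

// Inside the callback, for BUS_ONE:
UInt32 framesRead = inNumberFrames;
OSStatus readStatus = ExtAudioFileRead(recorder.musicFile, &framesRead, ioData);
if (readStatus != noErr || framesRead == 0) {
    // read error or end of file: output silence for this slice
    *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
}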

This might be a system restriction. If your iOS device doesn't play anything when you call AudioServicesPlaySystemSound while using the PlayAndRecord category, try stopping your AudioUnit first and then calling this function.
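In other words, something along these lines (just a sketch; ioUnit and alertSoundID stand in for your own audio unit and system sound ID):
AudioOutputUnitStop(ioUnit);                 // pause the voice-processing I/O unit
AudioServicesPlaySystemSound(alertSoundID);  // the alert is now audible
// ...once the sound has finished playing...
AudioOutputUnitStart(ioUnit);                // resume streaming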

I had a similar problem. In a VoIP application, I had to play tones while the call was ongoing. A tone file played with AVPlayer came out through the receiver and at about 50% volume.
What worked for me was to switch the AVAudioSession category from AVAudioSessionCategoryPlayAndRecord to AVAudioSessionCategoryPlayback and back for the duration of the tone file.
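Roughly like this (a sketch with error handling omitted; tonePlayer is assumed to be an AVAudioPlayer loaded with the tone file):
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryPlayback error:nil];
[self.tonePlayer play];

// later, e.g. in audioPlayerDidFinishPlaying:successfully:, switch back
[session setCategory:AVAudioSessionCategoryPlayAndRecord error:nil];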
There are a few catches here:
You have to switch over for the duration of the audio file you are about to play.
Your AudioUnit or input processing code should be okay with silence for the duration of the switchover.
The category switch-over is not free; keep in mind that older devices may take longer to turn the microphone on and off.
I think what was happening is that the speakerphone is exclusively allocated to the AudioUnit, and any parallel playback using AVAudioPlayer or AVPlayer is played through the receiver. If your audio file's level is low, it will be hardly noticeable. Also, while the AudioUnit is active and the AVAudioSession is in the PlayAndRecord category, other sound files will be 'ducked'.
You can confirm this by using the Mac's Console while your device is attached: search for mediaserverd and grep for 'duck'.
I got something like this:
default 18:36:01.844883-0800 mediaserverd -CMSessionMgr- cmsDuckVolumeBy: ducking 'sid:0x36952, $$$$$$(2854), 'prim'' duckToLevelDB = -40.0000 duckVolume = 0.100 duckFadeDuration = 0.500
default 18:36:02.371953-0800 mediaserverd -CMSystSounds- cmsmSystemSoundShouldPlay_block_invoke: SSID = 4096 with systemSoundCategory = UserAlert returning OutVolume = 1.000000, Audio = 1, Vibration = 2, Synchronized = 16, Interrupt = 0, NeedsDidFinishCall = 0, NeedsUnduckCall = 128, BudgetNotAvailable = 0
default 18:36:02.372685-0800 mediaserverd SSSActionLists.cpp:130:LogFlags: mSSID 4105, mShouldPlayAudio 1, mShouldVibe 1, mAudioVolume 1.000000, mVibeIntensity 1.000000, mNeedsFinishCall 0, mNeedsUnduckCall 1, mSynchronizedSystemSound 1, mInterruptCurrentSystemSounds 0
default 18:36:02.729872-0800 mediaserverd ActiveSoundList.cpp:386:SendUnduckMessageToCM: Notifying CM that sound should unduck now for ssidForCMSession: 4096, token 3569
default 18:36:02.729928-0800 mediaserverd -CMSystSounds- cmsmSystemSoundUnduckMedia: called for ssid 4096
default 18:36:02.729973-0800 mediaserverd -CMSessionMgr- cmsUnduckVolume: 'sid:0x36952, $$$$$$(2854), 'prim'' unduckFadeDuration: 0.500000

Related

How to play a signal with AudioUnit (iOS)?

I need to generate a signal and play it with iPhone's speakers or a headset.
To do so I generate an interleaved signal. Then I need to instantiate an AudioUnit-inherited class object with the following info: 2 channels, a 44100 Hz sample rate, and some buffer size to store a few frames.
Then I need to write a callback method which will take a chunk of my signal and put it into the iPhone's output buffer.
The problem is that I have no idea how to write an AudioUnit-inherited class. I can't understand Apple's documentation regarding it, and all the examples I could find either read from a file and play it with a huge lag or use deprecated constructs.
I'm starting to think I'm stupid or something. Please help...
To play audio to the iPhone's hardware with an AudioUnit, you don't derive from AudioUnit, since Core Audio is a C framework; instead you give the unit a render callback in which you feed it your audio samples. The following code sample shows you how. You need to replace the asserts with real error handling, and you'll probably want to change, or at least inspect, the audio unit's sample format using the kAudioUnitProperty_StreamFormat selector. My format happens to be 48 kHz floating-point interleaved stereo.
static OSStatus
renderCallback(
void* inRefCon,
AudioUnitRenderActionFlags* ioActionFlags,
const AudioTimeStamp* inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList* ioData)
{
// inRefCon contains your cookie
// write inNumberFrames to ioData->mBuffers[i].mData here
return noErr;
}
AudioUnit
createAudioUnit() {
AudioUnit au;
OSStatus err;
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
assert(0 != comp);
err = AudioComponentInstanceNew(comp, &au);
assert(0 == err);
AURenderCallbackStruct input;
input.inputProc = renderCallback;
input.inputProcRefCon = 0; // put your cookie here
err = AudioUnitSetProperty(au, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &input, sizeof(input));
assert(0 == err);
err = AudioUnitInitialize(au);
assert(0 == err);
err = AudioOutputUnitStart(au);
assert(0 == err);
return au;
}
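To inspect or pin down that format, you could add something like this before the AudioUnitInitialize call (a sketch only; the 48 kHz interleaved stereo float layout shown here is what I happen to use and may not match your needs):
AudioStreamBasicDescription fmt = {0};
UInt32 size = sizeof(fmt);
// see what the output unit currently expects on the input scope of bus 0
err = AudioUnitGetProperty(au, kAudioUnitProperty_StreamFormat,
                           kAudioUnitScope_Input, 0, &fmt, &size);
assert(0 == err);

// ...or explicitly request 48 kHz interleaved stereo float
fmt.mSampleRate       = 48000.0;
fmt.mFormatID         = kAudioFormatLinearPCM;
fmt.mFormatFlags      = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
fmt.mChannelsPerFrame = 2;
fmt.mBitsPerChannel   = 32;
fmt.mBytesPerFrame    = fmt.mChannelsPerFrame * sizeof(Float32);
fmt.mFramesPerPacket  = 1;
fmt.mBytesPerPacket   = fmt.mBytesPerFrame;
err = AudioUnitSetProperty(au, kAudioUnitProperty_StreamFormat,
                           kAudioUnitScope_Input, 0, &fmt, sizeof(fmt));
assert(0 == err);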

Core audio: file playback render callback function

I am using RemoteIO Audio Unit for audio playback in my app with kAudioUnitProperty_ScheduledFileIDs.
The audio files are in PCM format. How can I implement a render callback function for this case, so that I can manually modify the buffer samples?
Here is my code:
static AudioComponentInstance audioUnit;
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
CheckError(AudioComponentInstanceNew(comp, &audioUnit), "error AudioComponentInstanceNew");
NSURL *playerFile = [[NSBundle mainBundle] URLForResource:@"short" withExtension:@"wav"];
AudioFileID audioFileID;
CheckError(AudioFileOpenURL((__bridge CFURLRef)playerFile, kAudioFileReadPermission, 0, &audioFileID), "error AudioFileOpenURL");
// Determine file properties
UInt64 packetCount;
UInt32 size = sizeof(packetCount);
CheckError(AudioFileGetProperty(audioFileID, kAudioFilePropertyAudioDataPacketCount, &size, &packetCount),
"AudioFileGetProperty(kAudioFilePropertyAudioDataPacketCount)");
AudioStreamBasicDescription dataFormat;
size = sizeof(dataFormat);
CheckError(AudioFileGetProperty(audioFileID, kAudioFilePropertyDataFormat, &size, &dataFormat),
"AudioFileGetProperty(kAudioFilePropertyDataFormat)");
// Assign the region to play
ScheduledAudioFileRegion region;
memset (&region.mTimeStamp, 0, sizeof(region.mTimeStamp));
region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
region.mTimeStamp.mSampleTime = 0;
region.mCompletionProc = NULL;
region.mCompletionProcUserData = NULL;
region.mAudioFile = audioFileID;
region.mLoopCount = 0;
region.mStartFrame = 0;
region.mFramesToPlay = (UInt32)packetCount * dataFormat.mFramesPerPacket;
CheckError(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_ScheduledFileRegion, kAudioUnitScope_Global, 0, &region, sizeof(region)),
"AudioUnitSetProperty(kAudioUnitProperty_ScheduledFileRegion)");
// Prime the player by reading some frames from disk
UInt32 defaultNumberOfFrames = 0;
CheckError(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_ScheduledFilePrime, kAudioUnitScope_Global, 0, &defaultNumberOfFrames, sizeof(defaultNumberOfFrames)),
"AudioUnitSetProperty(kAudioUnitProperty_ScheduledFilePrime)");
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = MyCallback;
callbackStruct.inputProcRefCon = (__bridge void * _Nullable)(self);
CheckError(AudioUnitSetProperty(audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &callbackStruct, sizeof(callbackStruct)), "error AudioUnitSetProperty[kAudioUnitProperty_setRenderCallback]");
CheckError(AudioUnitInitialize(audioUnit), "error AudioUnitInitialize");
Callback function:
static OSStatus MyCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData){
printf("my callback");
return noErr;
}
Audio Unit start playback on button press:
- (IBAction)playSound:(id)sender {
CheckError(AudioOutputUnitStart(audioUnit), "error AudioOutputUnitStart");
}
This code fails with a kAudioUnitErr_InvalidProperty (-10879) error at runtime. The goal is to modify the buffer samples that have been read from the AudioFileID and send the result to the speakers.
Seeing as you are just getting familiar with Core Audio, I suggest you first get your remoteIO callback working independently of your file player. Just remove all of your file-player-related code and try to get that working first.
Then, once you have that working, move on to incorporating your file player.
As far as what I can see that's wrong, I think you are confusing the Audio File Services API with an audio unit. That API is used to read a file into a buffer which you would manually feed to remoteIO; if you do want to go this route, use the Extended Audio File Services API instead, it's a LOT easier. The kAudioUnitProperty_ScheduledFileRegion property is supposed to be set on a file player audio unit. To get one of those, you create it the same way as your remoteIO, except that the AudioComponentDescription's componentSubType and componentType are kAudioUnitSubType_AudioFilePlayer and kAudioUnitType_Generator respectively. Then, once you have that unit, you need to connect it to the remoteIO using the kAudioUnitProperty_MakeConnection property.
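For reference, a kAudioUnitProperty_MakeConnection hookup looks roughly like this (a sketch assuming filePlayerUnit and remoteIOUnit have already been created as described, and reusing your CheckError helper):
AudioUnitConnection connection;
connection.sourceAudioUnit    = filePlayerUnit;  // the AudioFilePlayer generator unit
connection.sourceOutputNumber = 0;               // its output bus
connection.destInputNumber    = 0;               // remoteIO bus 0 (the output/speaker element)
CheckError(AudioUnitSetProperty(remoteIOUnit,
                                kAudioUnitProperty_MakeConnection,
                                kAudioUnitScope_Input,
                                0,
                                &connection,
                                sizeof(connection)),
           "couldn't connect file player to remoteIO");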
But seriously, start with just getting your remoteIO callback working, then try making a file player audio unit and connecting it (without the callback), then go from there.
Ask very specific questions about each of these steps independently, posting code you have tried that's not working, and you'll get a ton of help.

Using AVCaptureSession and Audio Units Together Causes Problems for AVAssetWriterInput

I'm working on an iOS app that does two things at the same time:
It captures audio and video and relays them to a server to provide video chat functionality.
It captures local audio and video and encodes them into an mp4 file to be saved for posterity.
Unfortunately, when we configure the app with an audio unit to enable echo cancellation, the recording functionality breaks: the AVAssetWriterInput instance we're using to encode audio rejects incoming samples. When we don't set up the audio unit, recording works, but we have terrible echo.
To enable echo cancellation, we configure an audio unit like this (paraphrasing for the sake of brevity):
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_VoiceProcessingIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
AudioComponent comp = AudioComponentFindNext(NULL, &desc);
OSStatus status = AudioComponentInstanceNew(comp, &_audioUnit);
status = AudioUnitInitialize(_audioUnit);
This works fine for video chat, but it breaks the recording functionality, which is set up like this (again, paraphrasing—the actual implementation is spread out over several methods).
_captureSession = [[AVCaptureSession alloc] init];
// Need to use the existing audio session & configuration to ensure we get echo cancellation
_captureSession.usesApplicationAudioSession = YES;
_captureSession.automaticallyConfiguresApplicationAudioSession = NO;
[_captureSession beginConfiguration];
AVCaptureDeviceInput *audioInput = [[AVCaptureDeviceInput alloc] initWithDevice:[self audioCaptureDevice] error:NULL];
[_captureSession addInput:audioInput];
_audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
[_audioDataOutput setSampleBufferDelegate:self queue:_cameraProcessingQueue];
[_captureSession addOutput:_audioDataOutput];
[_captureSession commitConfiguration];
And the relevant portion of captureOutput looks something like this:
NSLog(#"Audio format, channels: %d, sample rate: %f, format id: %d, bits per channel: %d", basicFormat->mChannelsPerFrame, basicFormat->mSampleRate, basicFormat->mFormatID, basicFormat->mBitsPerChannel);
if (_assetWriter.status == AVAssetWriterStatusWriting) {
if (_audioEncoder.readyForMoreMediaData) {
if (![_audioEncoder appendSampleBuffer:sampleBuffer]) {
NSLog(#"Audio encoder couldn't append sample buffer");
}
}
}
What happens is the call to appendSampleBuffer fails, but—and this is the strange part—only if I don't have earphones plugged into my phone. Examining the logs produced when this happens, I found that without earphones connected, the number of channels reported in the log message was 3, whereas with earphones connected, the number of channels was 1. This explains why the encode operation was failing, since the encoder was configured to expect just a single channel.
What I don't understand is why I'm getting three channels here. If I comment out the code that initializes the audio unit, I only get a single channel and recording works fine, but echo cancellation doesn't work. Furthermore, if I remove these lines
// Need to use the existing audio session & configuration to ensure we get echo cancellation
_captureSession.usesApplicationAudioSession = YES;
_captureSession.automaticallyConfiguresApplicationAudioSession = NO;
recording works (I only get a single channel with or without headphones), but again, we lose echo cancellation.
So, the crux of my question is: why am I getting three channels of audio when I configure an audio unit to provide echo cancellation? Furthermore, is there any way to prevent this from happening or to work around this behavior using AVCaptureSession?
I've considered piping the microphone audio directly from the low-level audio unit callback into the encoder, as well as to the chat pipeline, but it seems like conjuring up the necessary Core Media buffers to do so would be a bit of work that I'd like to avoid if possible.
Note that the chat and recording functions were written by different people—neither of them me—which is the reason this code isn't more integrated. If possible, I'd like to avoid having to refactor the whole mess.
Ultimately, I was able to work around this issue by gathering audio samples from the microphone via the I/O audio unit, repackaging these samples into a CMSampleBuffer, and passing the newly constructed CMSampleBuffer into the encoder.
The code to do the conversion looks like this (abbreviated for the sake of brevity).
// Create a CMSampleBufferRef from the list of samples, which we'll own
AudioStreamBasicDescription monoStreamFormat;
memset(&monoStreamFormat, 0, sizeof(monoStreamFormat));
monoStreamFormat.mSampleRate = 48000;
monoStreamFormat.mFormatID = kAudioFormatLinearPCM;
monoStreamFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsNonInterleaved;
monoStreamFormat.mBytesPerPacket = 2;
monoStreamFormat.mFramesPerPacket = 1;
monoStreamFormat.mBytesPerFrame = 2;
monoStreamFormat.mChannelsPerFrame = 1;
monoStreamFormat.mBitsPerChannel = 16;
CMFormatDescriptionRef format = NULL;
OSStatus status = CMAudioFormatDescriptionCreate(kCFAllocatorDefault, &monoStreamFormat, 0, NULL, 0, NULL, NULL, &format);
// Convert the AudioTimestamp to a CMTime and create a CMTimingInfo for this set of samples
uint64_t timeNS = (uint64_t)(hostTime * _hostTimeToNSFactor);
CMTime presentationTime = CMTimeMake(timeNS, 1000000000);
CMSampleTimingInfo timing = { CMTimeMake(1, 48000), presentationTime, kCMTimeInvalid };
CMSampleBufferRef sampleBuffer = NULL;
status = CMSampleBufferCreate(kCFAllocatorDefault, NULL, false, NULL, NULL, format, numSamples, 1, &timing, 0, NULL, &sampleBuffer);
// add the samples to the buffer
status = CMSampleBufferSetDataBufferFromAudioBufferList(sampleBuffer,
kCFAllocatorDefault,
kCFAllocatorDefault,
0,
samples);
// Pass the buffer into the encoder...
Please note that I've removed error handling and cleanup of the allocated objects.

Core Audio - Interapp Audio - How to Retrieve output audio packets from Node app inside Host App?

I am writing a HOST app that uses Core Audio's new iOS 7 Inter-App Audio technology to pull audio from a single NODE "generator" app and route it into my host app. I am using the Audio Component Services and Audio Unit Component Services C frameworks to achieve this.
What I want to achieve is to establish a connection to an external node app that can generate sound. I want that sound to be routed into my host app, and for my host app to be able to directly access the audio packet data as a stream of raw audio data.
I have written the code inside my HOST app that does the following sequentially:
Sets up and activates an audio session with the correct session category.
Refreshes a list of Inter-App Audio compatible apps that are of type kAudioUnitType_RemoteGenerator or kAudioUnitType_RemoteInstrument (I'm not interested in effects apps).
Pulls out the last object from that list and attempts to establish a connection using AudioComponentInstanceNew().
Sets the Audio Stream Basic Description that my host app needs the audio format in.
Sets up audio unit properties and callbacks as well as an audio unit render callback on the output scope (bus).
Initializes the audio unit.
So far so good; I have been able to successfully establish a connection, but my problem is that my render callback is not being called at all. What I am having trouble understanding is how exactly to pull the audio from the node application. I have read that I need to call AudioUnitRender() in order to initiate a rendering cycle on the node app, but how exactly does this need to be set up in my situation? I have seen other examples where AudioUnitRender() is called from inside the render callback, but this isn't going to work for me because my render callback isn't being called currently. Do I need to set up my own audio-processing thread and periodically call AudioUnitRender() on my 'node'?
The following is the code described above inside my HOST app.
static OSStatus MyAURenderCallback (void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
//Do something here with the audio data?
//This method is never being called?
//Do I need to puts AudioUnitRender() in here?
}
- (void)start
{
[self configureAudioSession];
[self refreshAUList];
}
- (void)configureAudioSession
{
NSError *audioSessionError = nil;
AVAudioSession *mySession = [AVAudioSession sharedInstance];
[mySession setPreferredSampleRate: _graphSampleRate error: &audioSessionError];
[mySession setCategory: AVAudioSessionCategoryPlayAndRecord error: &audioSessionError];
[mySession setActive: YES error: &audioSessionError];
self.graphSampleRate = [mySession sampleRate];
}
- (void)refreshAUList
{
_audioUnits = @[].mutableCopy;
AudioComponentDescription searchDesc = { 0, 0, 0, 0, 0 }, foundDesc;
AudioComponent comp = NULL;
while (true) {
comp = AudioComponentFindNext(comp, &searchDesc);
if (comp == NULL) break;
if (AudioComponentGetDescription(comp, &foundDesc) != noErr) continue;
if (foundDesc.componentType == kAudioUnitType_RemoteGenerator || foundDesc.componentType == kAudioUnitType_RemoteInstrument) {
RemoteAU *rau = [[RemoteAU alloc] init];
rau->_desc = foundDesc;
rau->_comp = comp;
AudioComponentCopyName(comp, &rau->_name);
rau->_image = AudioComponentGetIcon(comp, 48);
rau->_lastActiveTime = AudioComponentGetLastActiveTime(comp);
[_audioUnits addObject:rau];
}
}
[self connect];
}
- (void)connect {
if ([_audioUnits count] <= 0) {
return;
}
RemoteAU *rau = [_audioUnits lastObject];
AudioUnit myAudioUnit;
//Node application will get launched in background
Check(AudioComponentInstanceNew(rau->_comp, &myAudioUnit));
AudioStreamBasicDescription format = {0};
format.mChannelsPerFrame = 2;
format.mSampleRate = [[AVAudioSession sharedInstance] sampleRate];
format.mFormatID = kAudioFormatMPEG4AAC;
UInt32 propSize = sizeof(format);
Check(AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &propSize, &format));
//Output format from node to host
Check(AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &format, sizeof(format)));
//Setup a render callback to the output scope of the audio unit representing the node app
AURenderCallbackStruct callbackStruct = {0};
callbackStruct.inputProc = MyAURenderCallback;
callbackStruct.inputProcRefCon = (__bridge void *)(self);
Check(AudioUnitSetProperty(myAudioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Output, 0, &callbackStruct, sizeof(callbackStruct)));
//setup call backs
Check(AudioUnitAddPropertyListener(myAudioUnit, kAudioUnitProperty_IsInterAppConnected, IsInterappConnected, NULL));
Check(AudioUnitAddPropertyListener(myAudioUnit, kAudioOutputUnitProperty_HostTransportState, AudioUnitPropertyChangeDispatcher, NULL));
//intialize the audio unit representing the node application
Check(AudioUnitInitialize(myAudioUnit));
}

OSStatus error -50 (paramErr) on AudioUnitRender call on device

I'm writing an iOS app that captures audio from the microphone, filters it with a high-pass filter, and plays it back through the speakers.
I'm getting a -50 OSStatus error when I call AudioUnitRender in the render callback function when I run it on an iPhone 4S, but it runs fine on the simulator.
I'm using an AUGraph, which has a RemoteIO unit, a HighPassFilter effect unit, and an AUConverter unit to make the ASBDs between the HPF and the output match. The converter AudioUnit instance is called converterUnit.
Here's the code.
static OSStatus renderInput(void *inRefCon, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inBusNumber, UInt32 inNumberFrames, AudioBufferList *ioData)
{
AudioController *THIS = (AudioController*)inRefCon;
AudioBuffer buffer;
AudioStreamBasicDescription converterOutputASBD;
UInt32 converterOutputASBDSize = sizeof(converterOutputASBD);
AudioUnitGetProperty([THIS converterUnit], kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &converterOutputASBD, &converterOutputASBDSize);
buffer.mDataByteSize = inNumberFrames * converterOutputASBD.mBytesPerFrame;
buffer.mNumberChannels = converterOutputASBD.mChannelsPerFrame;
buffer.mData = malloc(buffer.mDataByteSize);
AudioBufferList bufferList;
bufferList.mNumberBuffers = 1;
bufferList.mBuffers[0] = buffer;
OSStatus result = AudioUnitRender([THIS converterUnit], ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
...
}
I think the -50 error means one of the parameters is wrong. The only parameters that can be wrong are [THIS converterUnit] and &bufferList, given that all the rest are handed to me as arguments. I've checked the converterUnit instance and it is correctly allocated and initialized (what's more, if that were the problem, it wouldn't run on the simulator either). The only parameter left to check is the bufferList. What I could make out so far from debugging is that both the RemoteIO's output element's input ASBD and the inNumberFrames are different on the phone and on the simulator. Still, I don't think that should change things, given that I create and allocate memory for the AudioBuffer based on an ASBD resulting from an AudioUnitGetProperty([THIS ioUnit], kAudioUnitProperty_StreamFormat, ...) call.
Any help will be much appreciated; I'm kind of getting desperate here...
You guys rock.
Cheers.
UPDATE:
Here's the audio controller class' definition:
@interface AudioController : NSObject
{
AUGraph mGraph;
AudioUnit mEffects;
AudioUnit ioUnit;
AudioUnit converterUnit;
}
@property (readonly, nonatomic) AudioUnit mEffects;
@property (readonly, nonatomic) AudioUnit ioUnit;
@property (readonly, nonatomic) AudioUnit converterUnit;
@property (nonatomic) float* volumenPromedio;
-(void)initializeAUGraph;
-(void)startAUGraph;
-(void)stopAUGraph;
@end
And here's the initialization code for the AUGraph (defined in AudioController.mm):
- (void)initializeAUGraph
{
NSError *audioSessionError = nil;
AVAudioSession *mySession = [AVAudioSession sharedInstance];
[mySession setPreferredHardwareSampleRate: kGraphSampleRate
error: &audioSessionError];
[mySession setCategory: AVAudioSessionCategoryPlayAndRecord
error: &audioSessionError];
[mySession setActive: YES error: &audioSessionError];
OSStatus result = noErr;
// create a new AUGraph
result = NewAUGraph(&mGraph);
AUNode outputNode;
AUNode effectsNode;
AUNode converterNode;
// effects component
AudioComponentDescription effects_desc;
effects_desc.componentType = kAudioUnitType_Effect;
effects_desc.componentSubType = kAudioUnitSubType_HighPassFilter;
effects_desc.componentFlags = 0;
effects_desc.componentFlagsMask = 0;
effects_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// output component
AudioComponentDescription output_desc;
output_desc.componentType = kAudioUnitType_Output;
output_desc.componentSubType = kAudioUnitSubType_RemoteIO;
output_desc.componentFlags = 0;
output_desc.componentFlagsMask = 0;
output_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// stream format converter component
AudioComponentDescription converter_desc;
converter_desc.componentType = kAudioUnitType_FormatConverter;
converter_desc.componentSubType = kAudioUnitSubType_AUConverter;
converter_desc.componentFlags = 0;
converter_desc.componentFlagsMask = 0;
converter_desc.componentManufacturer = kAudioUnitManufacturer_Apple;
// Add nodes to the graph
result = AUGraphAddNode(mGraph, &output_desc, &outputNode);
[self hasError:result:__FILE__:__LINE__];
result = AUGraphAddNode(mGraph, &effects_desc, &effectsNode);
[self hasError:result:__FILE__:__LINE__];
result = AUGraphAddNode(mGraph, &converter_desc, &converterNode);
// manage connections in the graph
// Connect the io unit node's input element's output to the effectsNode input
result = AUGraphConnectNodeInput(mGraph, outputNode, 1, effectsNode, 0);
// Connect the effects node's output to the converter node's input
result = AUGraphConnectNodeInput(mGraph, effectsNode, 0, converterNode, 0);
// open the graph
result = AUGraphOpen(mGraph);
// Get references to the audio units
result = AUGraphNodeInfo(mGraph, effectsNode, NULL, &mEffects);
result = AUGraphNodeInfo(mGraph, outputNode, NULL, &ioUnit);
result = AUGraphNodeInfo(mGraph, converterNode, NULL, &converterUnit);
// Enable input on remote io unit
UInt32 flag = 1;
result = AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 1, &flag, sizeof(flag));
// Setup render callback struct
AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc = &renderInput;
renderCallbackStruct.inputProcRefCon = self;
result = AUGraphSetNodeInputCallback(mGraph, outputNode, 0, &renderCallbackStruct);
// Get fx unit's input current stream format...
AudioStreamBasicDescription fxInputASBD;
UInt32 sizeOfASBD = sizeof(AudioStreamBasicDescription);
result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxInputASBD, &sizeOfASBD);
// ...and set it on the io unit's input scope's output
result = AudioUnitSetProperty(ioUnit,
kAudioUnitProperty_StreamFormat,
kAudioUnitScope_Output,
1,
&fxInputASBD,
sizeof(fxInputASBD));
// Set fx unit's output sample rate, just in case
Float64 sampleRate = 44100.0;
result = AudioUnitSetProperty(mEffects,
kAudioUnitProperty_SampleRate,
kAudioUnitScope_Output,
0,
&sampleRate,
sizeof(sampleRate));
AudioStreamBasicDescription fxOutputASBD;
// get fx audio unit's output ASBD...
result = AudioUnitGetProperty(mEffects, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &fxOutputASBD, &sizeOfASBD);
// ...and set it to the converter audio unit's input
result = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &fxOutputASBD, sizeof(fxOutputASBD));
AudioStreamBasicDescription ioUnitsOutputElementInputASBD;
// now get io audio unit's output element's input ASBD...
result = AudioUnitGetProperty(ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &ioUnitsOutputElementInputASBD, &sizeOfASBD);
// ...set the sample rate...
ioUnitsOutputElementInputASBD.mSampleRate = 44100.0;
// ...and set it to the converter audio unit's output
result = AudioUnitSetProperty(converterUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 0, &ioUnitsOutputElementInputASBD, sizeof(ioUnitsOutputElementInputASBD));
// initialize graph
result = AUGraphInitialize(mGraph);
}
The reason I make the connection between the converter's output and the remote IO unit's output element's input with a render callback function (rather than with the AUGraphConnectNodeInput method) is that I need to make some calculations on the samples right after they've been processed by the high-pass filter. The render callback gives me the opportunity to look into the sample buffer right after the AudioUnitRender call and do those calculations there.
UPDATE 2:
By debugging, I found differences in the Remote IO output bus' input ASBD on the device and on the simulator. It shouldn't make a difference (I allocate and initialize the AudioBufferList based on data coming from a previous AudioUnitGetProperty([THIS ioUnit], kAudioUnitProperty_StreamFormat, ...) call), but it's the only thing I can see that differs between the device and the simulator.
Here's the Remote IO output bus' input ASBD on the device:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 41
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 32
UInt32 mReserved 0
And here it is on the simulator:
Float64 mSampleRate 44100
UInt32 mFormatID 1819304813
UInt32 mFormatFlags 12
UInt32 mBytesPerPacket 4
UInt32 mFramesPerPacket 1
UInt32 mBytesPerFrame 4
UInt32 mChannelsPerFrame 2
UInt32 mBitsPerChannel 16
UInt32 mReserved 0
