iOS: Bug in the simulator using AudioUnitRender

I have hit yet another iOS Simulator bug. My question is: is there some workaround?
The bug is this: load Apple's aurioTouch sample project and simply print out the number of frames received by the render callback (in aurioTouchAppDelegate.mm):
static OSStatus PerformThru(
void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData)
{
printf( "%u, ", (unsigned int)inNumberFrames );
I get the following output:
471, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, ...
However, if you comment out the call to AudioUnitRender on the next line:
{
printf( "%u, ", (unsigned int)inNumberFrames );
aurioTouchAppDelegate *THIS = (aurioTouchAppDelegate *)inRefCon;
OSStatus err = 0; // AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
It now receives an appropriate number of frames each time.
471, 470, 471, 470, 470, 471, 470, 471, 470, 470, 471, 470, 471, 470, 470, 471, 470,
Another question I have is: why such seemingly arbitrary numbers as 470 and 471? I read somewhere that you specify the buffer length implicitly by specifying its duration, and that the system sets the buffer length to the power of two that best approximates this duration. But the empirical evidence suggests this is not so.
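(As a rough sanity check on those numbers: the slice size appears to track sample rate times IO buffer duration rather than a power of two, so a non-round duration yields callbacks that alternate between the floor and ceiling of that product. A minimal sketch, with the duration value assumed purely for illustration:)

#include <math.h>
#include <stdio.h>

// Sketch: estimate the frames-per-callback implied by a sample rate and an IO buffer
// duration. At 44.1 kHz, a duration of ~0.01067 s gives ~470.5 frames, which would
// explain callbacks alternating between 470 and 471.
static void PrintExpectedFrames(double sampleRate, double ioBufferDuration)
{
    double exactFrames = sampleRate * ioBufferDuration;
    printf("expected frames per callback: %.1f (floor %d, ceil %d)\n",
           exactFrames, (int)floor(exactFrames), (int)ceil(exactFrames));
}

// e.g. PrintExpectedFrames(44100.0, 0.01067);  // assumed values, prints ~470.5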
Anyway, I'm pretty sure this is a bug, and I'm going to go file it. If anyone can shed some light, please do!

Generally the workaround for Simulator bugs is to test the app on a device. The iOS Simulator is just a simulator, not an emulator.
The iOS Simulator has some odd bugs. It may have to do with buffer sizes, according to this post by Christopher Penrose:
The simulator will act wildly differently from setup to setup, as it relies on your host audio gear, which in your case may be a third-party interface. I have seen the simulator refuse a reasonable power-of-two size because of the device. I have not been able to use audio in the simulator reliably.
James is telling me that I am being foolish, but in practice I have been able to rely on the original configured buffer size without having it change on me.
Link with possibly more helpful info: http://osdir.com/ml/coreaudio-api/2010-04/msg00150.html
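One practical mitigation, regardless of where the simulator bug lies, is to write render callbacks that make no assumption about the slice size. A minimal sketch (not from the linked thread) that sizes all work from inNumberFrames and the buffers it is handed:

static OSStatus ToleranceCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    if (inNumberFrames == 0) return noErr;           // nothing to do this slice
    for (UInt32 b = 0; b < ioData->mNumberBuffers; ++b) {
        AudioBuffer *buf = &ioData->mBuffers[b];
        // Derive sizes from the buffer itself rather than a cached constant,
        // so a 1-frame slice on the simulator is handled the same as a 470-frame one.
        UInt32 bytesPerFrame = buf->mDataByteSize / inNumberFrames;
        // ... process inNumberFrames frames of bytesPerFrame bytes each ...
        (void)bytesPerFrame;
    }
    return noErr;
}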

If you want to get audio working in the simulator, you need to make sure your sample rate is set to 44.1 kHz in OS X's Audio MIDI Setup utility. AVAudioSession/Audio Services will report your sample rate as 44.1 kHz no matter what it actually is when running in the simulator.
By setting your Mac's sample rate to 44.1 kHz, you'll get a consistent inNumberFrames per callback (the default is 1024), although this can allegedly still be changed by the system (e.g. when the app goes to the background).
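A quick way to see what the session claims (a sketch assuming the iOS 6+ AVAudioSession properties) so you can compare it against the inNumberFrames actually arriving in the render callback:

#import <AVFoundation/AVFoundation.h>

// Sketch: log the sample rate and IO buffer duration the session reports.
// On the simulator the reported rate tends to be 44100 regardless of the Mac's setting.
static void LogSessionAudioConfig(void)
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSLog(@"sampleRate: %.0f Hz", session.sampleRate);
    NSLog(@"IOBufferDuration: %f s (~%.0f frames at the reported rate)",
          session.IOBufferDuration, session.IOBufferDuration * session.sampleRate);
}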

Related

AudioBufferList passed to ABReceiverPortReceive does not match clientFormat when using The Amazing Audio Engine

Hi, I need multiple input streams from Audiobus and I am using the TAAE framework...
I tried this just to test whether I can manually send audio:
AEBlockChannel *channel = [AEBlockChannel channelWithBlock:^(const AudioTimeStamp *time, UInt32 frames, AudioBufferList *audio) {
ABReceiverPortReceive(_abreceiverPort, nil, audio, frames, time);
}];
and I get "AudioBufferList passed to ABReceiverPortReceive does not match clientFormat "
What should I do ? I try to understand how TAAE works from its source but was not able to understand how I can create correct AudioBufferList, maybe some little example will enlighten me.
I found just this in sources AEAllocateAndInitAudioBufferList(rawAudioDescription, kInputAudioBufferFrames) , how it is created..
Received an answer from Michael Tyson on the Audiobus forum:
Please read http://developer.audiob.us/doc/_receiver-_port.html#Receiving-Separate-Streams
Specifically, see the part about ABReceiverPortEndReceiveTimeInterval.
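For reference, allocating an AudioBufferList whose layout matches a given client format boils down to the following. This is a generic Core Audio sketch with an illustrative helper name, not the TAAE implementation itself; TAAE's AEAllocateAndInitAudioBufferList does essentially this.

#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// Allocate an AudioBufferList sized for `frames` frames of the given format.
static AudioBufferList *AllocateBufferList(AudioStreamBasicDescription asbd, UInt32 frames)
{
    // Non-interleaved formats need one AudioBuffer per channel; interleaved need one.
    UInt32 numBuffers = (asbd.mFormatFlags & kAudioFormatFlagIsNonInterleaved)
                            ? asbd.mChannelsPerFrame : 1;
    UInt32 channelsPerBuffer = (asbd.mFormatFlags & kAudioFormatFlagIsNonInterleaved)
                            ? 1 : asbd.mChannelsPerFrame;
    UInt32 bytesPerBuffer = asbd.mBytesPerFrame * frames;

    AudioBufferList *abl = calloc(1, sizeof(AudioBufferList)
                                     + (numBuffers - 1) * sizeof(AudioBuffer));
    abl->mNumberBuffers = numBuffers;
    for (UInt32 i = 0; i < numBuffers; i++) {
        abl->mBuffers[i].mNumberChannels = channelsPerBuffer;
        abl->mBuffers[i].mDataByteSize   = bytesPerBuffer;
        abl->mBuffers[i].mData           = calloc(1, bytesPerBuffer);
    }
    return abl;
}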

my iOS app using audio units with an 8000 hertz sample rate returns a distorted voice

I really need help with this issue. I'm developing an iOS application with audio units; the recorded audio needs to be 8-bit, 8000 Hz, A-law format. However, I'm getting a distorted voice coming out of the speaker.
I came across this sample online:
http://www.stefanpopp.de/2011/capture-iphone-microphone/comment-page-1/
While trying to debug my app I used my audioFormat in his application, and I am getting the same distorted sound. I'm guessing I either have incorrect settings or I need to do something else to make this work. Given the application in the link and the audioFormat below, can anyone tell me if I'm doing something wrong or missing something? I don't know a lot about this stuff, thanks.
Audio Format:
AudioStreamBasicDescription audioFormat;
audioFormat.mSampleRate       = 8000;              // 8 kHz
audioFormat.mFormatID         = kAudioFormatALaw;  // A-law companding
audioFormat.mFormatFlags      = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mChannelsPerFrame = 1;                 // mono
audioFormat.mBitsPerChannel   = 8;                 // one byte per sample
audioFormat.mBytesPerPacket   = 1;
audioFormat.mBytesPerFrame    = 1;
Eventually I got it to play correctly; I'm posting here to help out anyone else facing similar issues.
The main issue I was facing is that there is a huge difference between the simulator and an actual device. Running the app on the device, the sound quality was better but it kept skipping every second or two. I found a setting that seemed to fix that, and another setting to change the buffer size/duration. (The duration setting does not work on the simulator; some of my issues came from needing it to run at a certain rate to sync with something else, and that was causing the distorted sound.)
status = AudioSessionInitialize(NULL, kCFRunLoopDefaultMode, NULL, audioUnit);
// Use the play-and-record category so input and output run in the same session.
UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
status = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(audioCategory), &audioCategory);
[self hasError:status:__FILE__:__LINE__];
// Request a ~5.8 ms IO buffer; the system treats this as a hint, not a guarantee.
Float32 preferredBufferSize = 0.005805; // in seconds
status = AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, sizeof(preferredBufferSize), &preferredBufferSize);
[self hasError:status:__FILE__:__LINE__];
status = AudioSessionSetActive(true);
The first audio session property (the category) is what stopped the skipping and made playback much smoother. The second adjusts the buffer duration: it sets, in seconds, how often the callbacks fire, and so determines the buffer size you get. It's best effort, meaning the system gets as close as it can to the value you provide, but it appears to pick the nearest entry from a list of supported sizes.
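Because the preferred duration is only a hint, it is worth reading back what the system actually granted. A sketch using the same (now-deprecated) AudioSession C API as the code above:

Float32 actualBufferDuration = 0;
UInt32 propSize = sizeof(actualBufferDuration);
// Read the duration the hardware actually settled on after activation.
status = AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
                                 &propSize, &actualBufferDuration);
[self hasError:status:__FILE__:__LINE__];
NSLog(@"requested %f s, got %f s (~%.0f frames at 8 kHz)",
      preferredBufferSize, actualBufferDuration, actualBufferDuration * 8000.0);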
See the post I link to in my question for a very good tutorial / sample program to get started with this stuff.

Core Audio (iOS 5.1) Reverb2 properties do not exist, error code -10877

I am playing with Apple's sample project "LoadPresetDemo". I have added the reverb audio unit (kAudioUnitSubType_Reverb2), the only reverb available on iOS, to the graph. The Core Audio header "AudioUnitParameters.h" states that Reverb2 should respond to these parameters:
enum {
// Global, CrossFade, 0->100, 100
kReverb2Param_DryWetMix = 0,
// Global, Decibels, -20->20, 0
kReverb2Param_Gain = 1,
// Global, Secs, 0.0001->1.0, 0.008
kReverb2Param_MinDelayTime = 2,
// Global, Secs, 0.0001->1.0, 0.050
kReverb2Param_MaxDelayTime = 3,
// Global, Secs, 0.001->20.0, 1.0
kReverb2Param_DecayTimeAt0Hz = 4,
// Global, Secs, 0.001->20.0, 0.5
kReverb2Param_DecayTimeAtNyquist = 5,
// Global, Integer, 1->1000
kReverb2Param_RandomizeReflections = 6,
};
After the AUGraph has been initialized and started, everything compiles and I hear sound.
Next, I alter the kReverb2Param_DryWetMix parameter (changing to full wet mix):
AudioUnitSetParameter(_reverbUnit, kAudioUnitScope_Global, 0, kReverb2Param_DryWetMix, 100.0f, 0);
All good, I hear sound with full wet mixed reverb.
Now here is where I run into issues. When trying to alter any parameter other than kReverb2Param_DryWetMix, I get error code -10877. It seems as if the other parameters listed in the header file do not actually exist.
For example, calling
AudioUnitSetParameter(_reverbUnit, kAudioUnitScope_Global, 0, kReverb2Param_DecayTimeAtNyquist, 20.0f, 0)
throws the -10877 error.
Is this a bug? Have I omitted any audio frameworks? Have I not imported specific audio headers?
The current audio frameworks included are AVFoundation and AudioToolbox.
The current audio imports are
#import <AudioToolbox/AudioToolbox.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreAudio/CoreAudioTypes.h>
I have scoured Google with no solution. I know I have real problems when the Google route fails. Any help would be greatly appreciated.
Note: I tested with simulator and iPhone 4S device, same problem.
UPDATE: I have tried
AudioUnitGetParameter(_reverbUnit, kReverb2Param_DecayTimeAtNyquist, kAudioUnitScope_Global, 0, &value)
and it returns a value of 0.500000, which means the parameter does exist. So what am I doing wrong in setting the value?
Doh! I realized that I was confusing AudioUnitSetParameter with AudioUnitSetProperty, including the order of their arguments. Man, subtle but evil.
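For anyone hitting the same thing, the argument order is the trap: AudioUnitSetParameter takes the parameter ID before the scope and element, unlike AudioUnitSetProperty. With the ID and scope swapped, the parameter constant lands in the element slot; element 5 doesn't exist on the reverb, hence -10877 (kAudioUnitErr_InvalidElement), and the DryWetMix call only appeared to work because that parameter's ID happens to be 0. A corrected call looks like this:

OSStatus result = AudioUnitSetParameter(_reverbUnit,
                                        kReverb2Param_DecayTimeAtNyquist, // parameter ID first
                                        kAudioUnitScope_Global,           // then scope
                                        0,                                // then element (bus)
                                        20.0f,                            // value, in seconds
                                        0);                               // buffer offset in frames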

iOS CoreAudio - using AUFilePlayer and a render callback

I've just started developing a small test iOS 5.0 app to see what is possible on the platform.
Some resources were invaluable, for example Chris Adamson's blog and David Zicarelli's audioGraph example (based on Apple's MixerHost, with a bunch of new features).
What I'm trying to do is set up something like this, using the new FilePlayer audio unit from the iOS 5.x SDK:
(AUFilePlayer bus0) -> (some custom process) -> (bus0 MultiChannelMixer bus0) -> (bus0 Remote I/O)
I started with audioGraph, removed what I didn't want, and ended up with the above chain.
I could see that AUFilePlayer's preferred output is an 8.24 stream, so the mixer is set up the same way (8.24 on the input scope). My process will handle any conversion it needs.
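For reference, describing the 8.24 fixed-point canonical format and applying it to the mixer's input bus looks roughly like this (a sketch in the spirit of audioGraph/MixerHost; mixerUnit, the channel count and the sample rate are illustrative):

AudioStreamBasicDescription auFormat = {0};
size_t bytesPerSample = sizeof(AudioUnitSampleType);   // 4 bytes: 8.24 fixed point on iOS

auFormat.mFormatID         = kAudioFormatLinearPCM;
auFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical; // signed, packed, 8.24, non-interleaved
auFormat.mBytesPerPacket   = bytesPerSample;
auFormat.mFramesPerPacket  = 1;
auFormat.mBytesPerFrame    = bytesPerSample;
auFormat.mChannelsPerFrame = 2;
auFormat.mBitsPerChannel   = 8 * bytesPerSample;
auFormat.mSampleRate       = 44100.0;

OSStatus result = AudioUnitSetProperty(mixerUnit,
                                       kAudioUnitProperty_StreamFormat,
                                       kAudioUnitScope_Input,
                                       0,                  // mixer input bus 0
                                       &auFormat,
                                       sizeof(auFormat));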
The "custom process" callback is registered for the mixer on bus 0. Once the app is launched, it gets called regularly, which I could verify by logging.
static OSStatus simpleRenderCallback(void                       *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp       *inTimeStamp,
                                     UInt32                      inBusNumber,
                                     UInt32                      inNumberFrames,
                                     AudioBufferList            *ioData)
{
    MixerHostAudio *THIS = (MixerHostAudio *)inRefCon;   // the MixerHostAudio instance
    AudioUnit fpUnit = THIS.auFilePlayerUnit_1;

    OSStatus renderErr = AudioUnitRender(fpUnit, ioActionFlags, inTimeStamp,
                                         0, inNumberFrames, ioData);
    if (renderErr < 0) {
        NSLog(@"error: %ld", (long)renderErr);
        return renderErr;
    }
    return noErr;
}
The issue is that I always get renderErr = -50 every time AudioUnitRender is called in my callback.
I'm running in the simulator for now; the Mac's sound card is set to 44,100 Hz, and I can see that inNumberFrames is always 512.
Where does the problem come from? -50 means "bad param" in Core Audio, but that's not enough to know what's wrong.
Thanks!
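A first diagnostic (only a sketch, not a confirmed fix for this graph) would be to check, inside the callback, that the AudioBufferList the mixer hands you actually matches the file player's output format, since a size or layout mismatch is one of the usual ways to earn a -50 from AudioUnitRender:

// Inside the render callback, using fpUnit, inNumberFrames and ioData from above.
AudioStreamBasicDescription fpFormat = {0};
UInt32 propSize = sizeof(fpFormat);
AudioUnitGetProperty(fpUnit, kAudioUnitProperty_StreamFormat,
                     kAudioUnitScope_Output, 0, &fpFormat, &propSize);

for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
    UInt32 expected = inNumberFrames * fpFormat.mBytesPerFrame;
    if (ioData->mBuffers[i].mDataByteSize != expected) {
        NSLog(@"buffer %u: %u bytes supplied, %u expected for %u frames",
              (unsigned)i, (unsigned)ioData->mBuffers[i].mDataByteSize,
              (unsigned)expected, (unsigned)inNumberFrames);
    }
}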

AudioBufferList contents in remoteIO audio unit playback callback

I'd like to "intercept" audio data on its way to the iOS device's speaker. I believe this can be done using remoteIO audio units and callbacks. In the playbackCallback below does ioData actually contain any audio data?
static OSStatus playbackCallback(void *inRefCon,
AudioUnitRenderActionFlags *ioActionFlags,
const AudioTimeStamp *inTimeStamp,
UInt32 inBusNumber,
UInt32 inNumberFrames,
AudioBufferList *ioData) { ... }
I'm confused because logging info about ioData seems to imply it contains audio data...
// if (ioData->mNumberBuffers > 0)
AudioBuffer buffer = ioData->mBuffers[0];
NSLog(#"buffer.mNumberChannels: %ld", buffer.mNumberChannels); // prints 2
NSLog(#"buffer.mDataByteSize : %ld", buffer.mDataByteSize); // prints 4096
However, creating a CMSampleBufferRef from the contents of ioData and writing it to a Core Audio file using an AVAssetWriter yields a silent file. The length of the output file seems fine (a few seconds), but opening it in Audacity, for example, shows a flat line.
After reading numerous SO posts and experimenting with lots of remoteIO audio unit sample code, I'm starting to wonder whether ioData above contains pre-sized but empty buffers that should be populated in the playbackCallback.
The ioData buffers in a playback callback are where the callback should put the audio samples you want played. The buffers do not contain other audio intercepted on its way to the speaker.
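A minimal sketch of what that means in practice: the callback itself writes whatever should reach the speaker into ioData (here, silence; real code would copy samples from its own source).

#include <string.h>

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++) {
        // Whatever ends up in mData is what the remoteIO unit plays; it arrives empty.
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
    }
    // Zeroed integer or float LPCM samples are silence; flag it so downstream units can skip work.
    *ioActionFlags |= kAudioUnitRenderAction_OutputIsSilence;
    return noErr;
}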
