inNumberFrames of performRender becomes 1 in aurioTouch sample code - iOS

The aurioTouch sample code runs fine on iOS 15 devices, but on iOS 16, inNumberFrames passed to performRender is always 1.
On iOS 15, inNumberFrames is usually 512 or 1024; sometimes smaller or larger, but never 1.
// Render callback function
static OSStatus performRender (void                        *inRefCon,
                               AudioUnitRenderActionFlags  *ioActionFlags,
                               const AudioTimeStamp        *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList             *ioData)
I know aurioTouch is old sample code, but it runs fine on iOS 15, so I suppose there must be some difference between iOS 15 and iOS 16.
Any help on this issue would be appreciated.
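As a starting point for comparing the two OS versions, here is a minimal diagnostic sketch (not a fix, and where you call it from is an assumption) that logs what the audio session actually granted and the number of frames each render call should therefore deliver:

#import <AVFoundation/AVFoundation.h>

// Log the granted sample rate and IO buffer duration; at 44.1 kHz a duration
// of ~0.023 s corresponds to ~1024 frames per render, while inNumberFrames == 1
// would imply a duration of only a few hundredths of a millisecond.
static void LogSessionBufferInfo(void)
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    double sampleRate = session.sampleRate;
    NSTimeInterval ioDuration = session.IOBufferDuration;
    NSLog(@"sampleRate=%.0f Hz, IOBufferDuration=%.6f s, expected frames per render ≈ %.0f",
          sampleRate, ioDuration, sampleRate * ioDuration);
}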

Related

AudioBufferList passed to ABReceiverPortReceive does not match clientFormat when using The Amazing Audio Engine

Hi, I need multiple input streams from Audiobus and I am using the TAAE framework...
I tried this just to test if I can manually send audio:
AEBlockChannel *channel = [AEBlockChannel channelWithBlock:^(const AudioTimeStamp *time, UInt32 frames, AudioBufferList *audio) {
    ABReceiverPortReceive(_abreceiverPort, nil, audio, frames, time);
}];
and I get "AudioBufferList passed to ABReceiverPortReceive does not match clientFormat".
What should I do? I tried to understand how TAAE works from its source, but I could not work out how to create a correct AudioBufferList; maybe a small example will enlighten me.
The only hint I found in the sources is AEAllocateAndInitAudioBufferList(rawAudioDescription, kInputAudioBufferFrames), which shows how one is created.
Received an answer from Michael Tyson on the Audiobus forum:
Please read http://developer.audiob.us/doc/_receiver-_port.html#Receiving-Separate-Streams
Specifically, see the part about ABReceiverPortEndReceiveTimeInterval.
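Regarding how to create a correct AudioBufferList in the first place, here is a minimal sketch in plain Core Audio (not TAAE's helper; the interleaving check and sizes simply follow the fields of the AudioStreamBasicDescription you would obtain as the client format):

#import <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// Allocate an AudioBufferList whose layout matches the given format.
static AudioBufferList *CreateAudioBufferList(AudioStreamBasicDescription asbd, UInt32 frameCount)
{
    // Interleaved formats use a single buffer; non-interleaved use one buffer per channel.
    Boolean nonInterleaved = (asbd.mFormatFlags & kAudioFormatFlagIsNonInterleaved) != 0;
    UInt32 bufferCount     = nonInterleaved ? asbd.mChannelsPerFrame : 1;
    UInt32 bytesPerBuffer  = asbd.mBytesPerFrame * frameCount;

    AudioBufferList *list = malloc(sizeof(AudioBufferList) + (bufferCount - 1) * sizeof(AudioBuffer));
    list->mNumberBuffers = bufferCount;
    for (UInt32 i = 0; i < bufferCount; i++) {
        list->mBuffers[i].mNumberChannels = nonInterleaved ? 1 : asbd.mChannelsPerFrame;
        list->mBuffers[i].mDataByteSize   = bytesPerBuffer;
        list->mBuffers[i].mData           = calloc(1, bytesPerBuffer);
    }
    return list;
}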

iOS API method to ask whether a sound loaded and started by the app is still being played

Is there a method provided by the iOS SDK that lets the application start a loaded sound and know whether it has finished or is still being played?
I'm using this audio library, but it lacks that kind of functionality.
These are the functions I've been using.
To load a sound effect or background music:
OSStatus SoundEngine_LoadBackgroundMusicTrack(const char* inPath, Boolean inAddToQueue, Boolean inLoadAtOnce)
OSStatus SoundEngine_LoadEffect(const char* inPath, UInt32* outEffectID)
OSStatus SoundEngine_LoadLoopingEffect(const char* inLoopFilePath, const char* inAttackFilePath, const char* inDecayFilePath, UInt32* outEffectID)
To play them:
OSStatus SoundEngine_StartBackgroundMusic()
OSStatus SoundEngine_StartEffect(UInt32 inEffectID)
I bet it's not possible due to privacy concerns, but I don't know for sure.
The answers given to this question were very helpful, since Stormyprods' sound engine is a wrapper around OpenAL.
I used the following code:
ALint sourceState;
alGetSourcei(soundID, AL_SOURCE_STATE, &sourceState);   // query the OpenAL source state
if (sourceState != AL_PLAYING)
{
    SoundEngine_StartEffect(soundID);
}
else
{
    printf("already playing sfx %u\n", (unsigned int)soundID);
}
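If OpenAL isn't a requirement, the SDK itself offers completion notification through AVAudioPlayer's delegate. A minimal sketch (the wrapper class name and method are hypothetical):

#import <AVFoundation/AVFoundation.h>

@interface EffectPlayer : NSObject <AVAudioPlayerDelegate>
@property (nonatomic, strong) AVAudioPlayer *player;
@end

@implementation EffectPlayer

- (void)playSoundAtPath:(NSString *)path
{
    NSURL *url = [NSURL fileURLWithPath:path];
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:NULL];
    self.player.delegate = self;                 // get notified when playback ends
    [self.player play];
    NSLog(@"still playing: %d", self.player.playing);  // can also be polled at any time
}

// AVAudioPlayerDelegate callback, invoked when the sound has finished
- (void)audioPlayerDidFinishPlaying:(AVAudioPlayer *)player successfully:(BOOL)flag
{
    NSLog(@"finished playing (success=%d)", flag);
}

@end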

iOS CoreAudio - using AUFilePlayer and a render callback

I just started developing a small test iOS 5.0 app to see what is possible on the platform.
Some resources were invaluable, for example Chris Adamson's blog and David Zicarelli's audioGraph example (based upon Apple's MixerHost with a bunch of new features).
What I'm trying to do is set up something like this, using the new FilePlayer AudioUnit from the iOS 5.x SDK:
(AUFilePlayer bus0) -> (some custom process) -> (bus0 MultiChannelMixer bus0) -> (bus0 Remote I/O)
I started with audioGraph, removed what I didn't want, and ended up with the above chain.
I could see that AUFilePlayer's preferred output is an 8.24 stream, so the mixer is also set up that way (8.24 on the input scope). My process will handle any conversion it needs.
The "custom process" callback is registered for the mixer on bus 0. Once the app is launched, it gets called regularly, which I could verify by logging.
static OSStatus simpleRenderCallback (void                        *inRefCon,
                                      AudioUnitRenderActionFlags  *ioActionFlags,
                                      const AudioTimeStamp        *inTimeStamp,
                                      UInt32                      inBusNumber,
                                      UInt32                      inNumberFrames,
                                      AudioBufferList             *ioData)
{
    MixerHostAudio *THIS = (MixerHostAudio *)inRefCon;   // the MixerHostAudio instance
    AudioUnit fpUnit = THIS.auFilePlayerUnit_1;          // the AUFilePlayer unit
    OSStatus renderErr = noErr;

    // Pull audio from the file player's output bus 0 into ioData
    renderErr = AudioUnitRender(fpUnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);
    if (renderErr < 0) {
        NSLog(@"error:%ld", (long)renderErr);
        return renderErr;
    }
    return noErr;
}
The issue is that I get renderErr = -50 every time AudioUnitRender is called in my callback.
I'm running in the simulator for now, the Mac sound card is set to 44,100 Hz, and I can see that inNumberFrames is always 512.
Where does the problem come from? -50 means "bad param" in CoreAudio, but that's not enough to know what's wrong.
Thanks!
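In a chain like this, -50 usually points at a parameter or format mismatch at the unit being rendered. A hedged debugging sketch (property calls only; the unit variables are assumptions taken from the question) is to dump the file player's output format next to the mixer's input format before starting the graph:

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Hypothetical check: log both formats so a mismatch (sample rate, flags,
// bytes per frame) that would make AudioUnitRender return -50 becomes visible.
static void CompareFormats(AudioUnit filePlayerUnit, AudioUnit mixerUnit)
{
    AudioStreamBasicDescription playerOut = {0};
    AudioStreamBasicDescription mixerIn   = {0};
    UInt32 size = sizeof(AudioStreamBasicDescription);

    AudioUnitGetProperty(filePlayerUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output, 0, &playerOut, &size);
    size = sizeof(AudioStreamBasicDescription);
    AudioUnitGetProperty(mixerUnit, kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Input, 0, &mixerIn, &size);

    NSLog(@"player out: rate=%.0f ch=%u bytes/frame=%u flags=%x",
          playerOut.mSampleRate, (unsigned)playerOut.mChannelsPerFrame,
          (unsigned)playerOut.mBytesPerFrame, (unsigned)playerOut.mFormatFlags);
    NSLog(@"mixer in  : rate=%.0f ch=%u bytes/frame=%u flags=%x",
          mixerIn.mSampleRate, (unsigned)mixerIn.mChannelsPerFrame,
          (unsigned)mixerIn.mBytesPerFrame, (unsigned)mixerIn.mFormatFlags);
}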

AudioBufferList contents in remoteIO audio unit playback callback

I'd like to "intercept" audio data on its way to the iOS device's speaker. I believe this can be done using remoteIO audio units and callbacks. In the playbackCallback below, does ioData actually contain any audio data?
static OSStatus playbackCallback(void                        *inRefCon,
                                 AudioUnitRenderActionFlags  *ioActionFlags,
                                 const AudioTimeStamp        *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList             *ioData) { ... }
I'm confused because logging info about ioData seems to imply it contains audio data...
if (ioData->mNumberBuffers > 0) {
    AudioBuffer buffer = ioData->mBuffers[0];
    NSLog(@"buffer.mNumberChannels: %u", (unsigned)buffer.mNumberChannels); // prints 2
    NSLog(@"buffer.mDataByteSize  : %u", (unsigned)buffer.mDataByteSize);   // prints 4096
}
However, creating a CMSampleBufferRef from the contents of ioData and writing it to a Core Audio file using an AVAssetWriter yields a silent file. The length of the output file seems fine (a few seconds), but opening the file in Audacity, for example, shows a flat line.
After reading numerous SO posts and experimenting with lots of remoteIO audio unit sample code, I'm starting to wonder if ioData above contains pre-sized but empty buffers that should be populated in the playbackCallback.
The ioData buffers in a playback callback are where the callback should put the audio samples you want to play. They do not contain other audio intercepted on its way to the speaker.
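To illustrate that answer, here is a minimal sketch of a playback callback that fills ioData itself with a generated tone (the phase-tracking struct, frequency, and the assumption of interleaved 16-bit stereo at 44.1 kHz are all hypothetical, not part of the question's setup):

#import <AudioToolbox/AudioToolbox.h>
#import <math.h>

// Hypothetical render state passed in via inRefCon.
typedef struct { double phase; } ToneState;

static OSStatus playbackCallback(void                        *inRefCon,
                                 AudioUnitRenderActionFlags  *ioActionFlags,
                                 const AudioTimeStamp        *inTimeStamp,
                                 UInt32                      inBusNumber,
                                 UInt32                      inNumberFrames,
                                 AudioBufferList             *ioData)
{
    ToneState *state = (ToneState *)inRefCon;
    const double frequency = 440.0, sampleRate = 44100.0;
    SInt16 *samples = (SInt16 *)ioData->mBuffers[0].mData;

    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        SInt16 value = (SInt16)(sin(state->phase) * 16384.0);
        samples[2 * frame]     = value;   // left channel (interleaved)
        samples[2 * frame + 1] = value;   // right channel
        state->phase += 2.0 * M_PI * frequency / sampleRate;
        if (state->phase > 2.0 * M_PI) state->phase -= 2.0 * M_PI;
    }
    return noErr;
}

Leaving ioData untouched (or zeroed) in a callback like this is exactly what produces the silent file described above.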

iOS: Bug in the simulator using AudioUnitRender

I have hit yet another iOS simulator bug. My question is, is there some workaround?
The bug is this:
Load Apple's aurioTouch sample project
and simply print out the number of frames received by the render callback (in aurioTouchAppDelegate.mm):
static OSStatus PerformThru(void                        *inRefCon,
                            AudioUnitRenderActionFlags  *ioActionFlags,
                            const AudioTimeStamp        *inTimeStamp,
                            UInt32                      inBusNumber,
                            UInt32                      inNumberFrames,
                            AudioBufferList             *ioData)
{
    printf("%u, ", (unsigned int)inNumberFrames);
I get the following output:
471, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, ...
However, if you comment out the call to AudioUnitRender on the next line:
{
    printf("%u, ", (unsigned int)inNumberFrames);
    aurioTouchAppDelegate *THIS = (aurioTouchAppDelegate *)inRefCon;
    OSStatus err = 0; // AudioUnitRender(THIS->rioUnit, ioActionFlags, inTimeStamp, 1, inNumberFrames, ioData);
It now sends an appropriate number of floats each time.
471, 470, 471, 470, 470, 471, 470, 471, 470, 470, 471, 470, 471, 470, 470, 471, 470,
Another question I have is: why such odd numbers as 470 and 471? I read somewhere that you specify the buffer length implicitly by specifying its time duration, and the system sets the buffer length to the power of two that best approximates this duration. But the empirical evidence suggests this is not so.
Anyway, I'm pretty sure this is a bug. I'm going to go file it. If anyone can shed some light, please do!
Generally the workaround for Simulator bugs is to test the app on a device. The iOS Simulator is just a simulator, not an emulator.
The iOS Simulator has some odd bugs. It may have to do with buffer sizes according to this post by Christopher Penrose:
The simulator will act wildly differently from setup to setup, as it relies on your host audio gear, which in your case may be a third-party interface. I have seen the simulator refuse a reasonable power-of-2 size because of the device. I have not been able to use audio in the simulator reliably.
James is telling me that I am being foolish, but in practice I have been able to rely on the original configured buffer size without having it change on me.
Link with possibly more helpful info: http://osdir.com/ml/coreaudio-api/2010-04/msg00150.html
If you want to get audio working in the simulator, you need to make sure your sample rate is set to 44.1 kHz in OS X's Audio MIDI Setup tool. AVAudioSession/Audio Services will report your sample rate as 44.1 kHz no matter what it actually is when using the simulator.
By setting your Mac's sample rate to 44.1 kHz, you'll get a consistent inNumberFrames (the default is 1024) per callback, although this can still allegedly be changed by the system (e.g. when the app goes to the background).
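On the app side, the buffer duration hint is requested through the audio session. A minimal sketch (the 1024-frame target, category choice, and omitted error handling are assumptions) of asking for a specific sample rate and IO buffer duration and reading back what was actually granted:

#import <AVFoundation/AVFoundation.h>

// Request ~1024 frames per render callback at 44.1 kHz; the system may still
// grant something different, so read the actual values back afterwards.
static void ConfigureSessionForConsistentBuffers(void)
{
    AVAudioSession *session = [AVAudioSession sharedInstance];
    NSError *error = nil;

    [session setCategory:AVAudioSessionCategoryPlayAndRecord error:&error];
    [session setPreferredSampleRate:44100.0 error:&error];
    [session setPreferredIOBufferDuration:(1024.0 / 44100.0) error:&error];  // ~23 ms
    [session setActive:YES error:&error];

    NSLog(@"granted sampleRate=%.0f, IOBufferDuration=%.6f",
          session.sampleRate, session.IOBufferDuration);
}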

Resources