iOS - Accessing generated signal from Audio Unit render callback

I have an iOS/Objective-C program that uses a single audio unit to play a generated signal when a button is pressed. I'd like to add functionality such that:
a) When the button is first pressed, a signal is generated in some kind of numeric array.
b) The audio then begins, and the render callback accesses (and plays) that generated signal.
Given my current code, I feel like these additions will just be a few lines, but I'm having trouble with the syntax, which variable types to use, how to track the current sample, and so on. I've included the related code as it is now:
The button press:
- (IBAction)startPressed:(id)sender {
    [self setupAudioPlayer];
    [self createSignal];
    [self playAudio];
}
A line from setupAudioPlayer:
input.inputProcRefCon = &mySignal; // mySignal is an instance var
The audio creation:
-(void)createSignal {
    int beepLength = 0.020 * Fs; // Fs is sampling frequency
    float beepFrequency = 440;   // Hz

    // Declare some kind of numeric array "mySignal", which is an instance var.
    mySignal = ...?

    // Generate audio signal (pure tone)
    for (int i = 1; i <= beepLength; i++) {
        float t = i / Fs;
        mySignal[i] = sinf(2 * M_PI * beepFrequency * t);
    }
}
The render callback:
OSStatus RenderTone(void *inRefCon,
                    AudioUnitRenderActionFlags *ioActionFlags,
                    const AudioTimeStamp *inTimeStamp,
                    UInt32 inBusNumber,
                    UInt32 inNumberFrames,
                    AudioBufferList *ioData)
{
    const int channel1 = 0;
    Float32 *buffer = (Float32 *)ioData->mBuffers[channel1].mData;

    // This is where things get hazy
    Float32 *mySignal = (Float32 *)inRefCon;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++)
    {
        buffer[frame] = mySignal[?];
    }
    return noErr;
}
So, to summarize my questions: How should mySignal be defined? How do I access this instance variable from RenderTone (my 'hazy' code above is just a guess)? How can I track the current sample in RenderTone? Is there anything else missing/wonky in this approach?
Thanks for reading and for any help, really appreciated!
(I have seen sample code that passes a reference to the view controller's instance into the render callback, and then accesses the instance variables that way. However, perhaps mistakenly, I read elsewhere that this wasn't good form as it may involve too much computational overhead for a callback with such strict timing requirements.)

Since you're generating the frames from an algebraic function, why don't you simply follow Matt Gallagher's example? In brief: just move the function inside the render callback and pass the parameters through the view controller instance.
Generally speaking, your choices are limited for passing data to a callback that has a pre-defined form. I'm probably the last person to counsel on good form in Objective-C, but one of the few options is to use globals.
You could pass the mySignal array (or else the frequency) as a global. Not the most 'elegant' object-oriented solution, but one that will work and avoid all the O.O. frou-frou overhead. It seems only appropriate to use a C-based solution, since the render callback is at base a C function.
As to "tracking," I'm not quite sure what you mean, but in my own work generating tones, I've initialized a remainingCycles global with the tone length (in frames: length in seconds * Fs, or sampleRate, whatever you want to call it) and decremented it on each pass through the frame loop; when the number hits zero, you end the tone. (Of course, you could use an instance variable instead of a global.)
Maybe this violates the Canons of Object-Oriented Coding, but at the end of the day, you just need to get the job done.
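For illustration, here is a minimal sketch of that global-based approach, using a running sample index rather than a countdown; all names (gSignal, gSignalLength, gNextSample) are hypothetical placeholders, not from the question's code:

// Globals shared between the setup code and the render callback.
static Float32 *gSignal = NULL;   // generated tone, filled in createSignal
static UInt32 gSignalLength = 0;  // length of gSignal in frames
static UInt32 gNextSample = 0;    // next frame of gSignal to play

// In createSignal (sketch): allocate, then fill as in the question.
//   gSignalLength = 0.020 * Fs;
//   gSignal = malloc(gSignalLength * sizeof(Float32));

OSStatus RenderTone(void *inRefCon,
                    AudioUnitRenderActionFlags *ioActionFlags,
                    const AudioTimeStamp *inTimeStamp,
                    UInt32 inBusNumber,
                    UInt32 inNumberFrames,
                    AudioBufferList *ioData)
{
    Float32 *buffer = (Float32 *)ioData->mBuffers[0].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++) {
        // Play the next sample of the tone, or silence once it's done.
        buffer[frame] = (gNextSample < gSignalLength) ? gSignal[gNextSample++] : 0.0f;
    }
    return noErr;
}

Reset gNextSample to 0 in the button handler before starting playback, and the callback will keep track of its own position across render cycles.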

Related

Calling MusicDeviceMIDIEvent from the audio unit's render thread

There's one thing I don't understand about MusicDeviceMIDIEvent. In every single example I've ever seen (I searched GitHub and Apple examples) it was always used from the main thread. Now, in order to use the sample offset parameter, the documentation states:
inOffsetSampleFrame:
If you are scheduling the MIDI Event from the audio unit's render thread, then you can supply a sample offset that the audio unit may apply when applying that event in its next audio unit render. This allows you to schedule to the sample, the time when a MIDI command is applied and is particularly important when starting new notes. If you are not scheduling in the audio unit's render thread, then you should set this value to 0.
Still, even in the simplest case, in which you only have a sampler audio unit and an IO unit, how can you schedule MIDI events from the audio unit's render thread? The sampler doesn't allow a render callback, and even if it did (or if you use the IO unit's callback just to tap in), it would feel hackish, since the render callback is not intended for scheduling MIDI events.
How does one correctly call this function from the audio unit's render thread?
A renderNotify callback is a perfect place to do scheduling from the render thread. You can even set the renderNotify on the MusicDevice itself. Here's what it might look like on an AUSampler.
OSStatus status = AudioUnitAddRenderNotify(sampler, renderNotify, sampler);
In this example I passed the sampler in as a reference via the inRefCon argument, and am just sending a note-on (144) to note 64 every 44100 samples, but in a real application you would pass a C struct into inRefCon with a reference to your MIDI device and all the values you need to do your scheduling. Note the check of the render flag for pre-render.
static OSStatus renderNotify(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    AudioUnit sampler = (AudioUnit)inRefCon;
    if (*ioActionFlags & kAudioUnitRenderAction_PreRender) {
        for (UInt32 i = 0; i < inNumberFrames; i++) {
            if (fmod(inTimeStamp->mSampleTime + i, 44100) == 0) {
                // i is the offset from the start of this render cycle,
                // so use it for the inOffsetSampleFrame argument.
                MusicDeviceMIDIEvent(sampler, 144, 64, 127, i);
            }
        }
    }
    return noErr;
}

Pthreads create argument passing

I am trying to pass two arguments to each thread via a struct in the pthread_create call: one for the total number of active threads and one for the id of the thread, which is the number it receives from the loop (i). The problem I have right now is that argument_struct.id has the same value in every thread when I run the for loop.
#include <pthread.h>
#include <stdio.h>

struct argument_struct {
    int total_threads;
    int id;
} argument_struct;

void *body(void *args)
{
    struct argument_struct *arguments = args;
    printf("Hello World! %i, %i \n", arguments->total_threads, arguments->id);
    return NULL;
}

int main()
{
    const int num_threads = 20;
    pthread_t thread[num_threads];
    pthread_attr_t attr;
    int i;

    /* Initiate the thread attributes */
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_JOINABLE);

    argument_struct.total_threads = num_threads;
    for (i = 0; i < num_threads; i++)
    {
        argument_struct.id = i;
        pthread_create(&thread[i], &attr, &body, (void *)&argument_struct);
    }
Output:
Hello World! 20, 19
Hello World! 20, 19
Hello World! 20, 19
etc.
You have a race condition - by the time a thread gets to look at the structure it has been passed, the main thread has already reused that structure to pass to the next created thread.
There are several ways you could solve this:
1) have an array of struct argument_struct items (the array at least large enough for the number of threads), and use a separate element to pass to each newly created thread
2) store the total_threads value in a global that the threads can read. There's no data race here, since the value isn't updated after it is initialized and the threads will only read it. On the other hand you have a shared global which isn't a great idea to use as an interface, but might be OK for small programs. To avoid the problem with sharing the id element, just pass it directly by casting it to a (void*).
3) dynamically allocate a new struct argument_struct to pass to each thread. The thread becomes the owner of the struct and is responsible for freeing it. This is probably the best solution in general (i.e., the technique will work well even if you start passing in a large number or a complex set of arguments); see the sketch after this list.
4) Use something like a semaphore or condition variable to let the thread signal when it has finished using the passed in structure. The main thread should wait on that synchronization object before reusing the structure to create the next thread. This might be overkill for your simple example, but again it'll work in situations of greater complexity (though I still think that passing in a dynamically allocated structure is simpler).
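As a rough sketch of option 3, reusing the question's struct and names (illustrative, not a drop-in fix):

// Heap-allocate a fresh argument block for each thread; the thread frees it.
for (i = 0; i < num_threads; i++)
{
    struct argument_struct *arg = malloc(sizeof *arg);
    arg->total_threads = num_threads;
    arg->id = i;
    pthread_create(&thread[i], &attr, &body, arg);
}

// ...and in the thread body:
void *body(void *args)
{
    struct argument_struct *arguments = args;
    printf("Hello World! %i, %i \n", arguments->total_threads, arguments->id);
    free(arguments); // the thread owns the struct now
    return NULL;
}

This needs #include <stdlib.h> for malloc/free. Each thread now reads from its own private copy, so there is nothing left to race on.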
You are passing each thread the same argument, which is why they all see the same value.

Audio Unit Render Callback - change it on the fly?

I have a multichannel mixer and a remote I/O in a graph, setup to play uncompressed caf files. So far so good.
Next, I am experimenting with doing weird stuff on the render callback (say, generate white noise, or play a sine wave, etc. - procedurally generated sounds).
Instead of adding conditionals to the existing render callback (which is assigned on setup to all the buses of the mixer), I would like to be able to switch the render callback attached to each bus, at runtime.
So far I'm trying this code, but it doesn't work: my alternative render callback does not get called.
- (void)playNoise
{
    if (_noiseBus != -1) {
        // Already playing
        return;
    }
    _noiseBus = [self firstFreeMixerBus];

    AUGraphDisconnectNodeInput(processingGraph, mixerNode, _noiseBus);

    inputCallbackStructArray[_noiseBus].inputProc = &noiseRenderCallback;
    inputCallbackStructArray[_noiseBus].inputProcRefCon = NULL;

    OSStatus result = AUGraphSetNodeInputCallback(processingGraph,
                                                  mixerNode,
                                                  _noiseBus,
                                                  &inputCallbackStructArray[_noiseBus]);
    if (result != noErr) {
        NSLog(@"AUGraphSetNodeInputCallback failed");
        return;
    }

    result = AudioUnitSetParameter(_mixerUnit,
                                   kMultiChannelMixerParam_Enable,
                                   kAudioUnitScope_Input,
                                   _noiseBus,
                                   1,
                                   0);
    if (result != noErr) {
        NSLog(@"Failed to enable bus");
        return;
    }

    result = AudioUnitSetParameter(_mixerUnit,
                                   kMultiChannelMixerParam_Volume,
                                   kAudioUnitScope_Input,
                                   _noiseBus,
                                   0.5,
                                   0);
    if (result != noErr) {
        NSLog(@"AudioUnitSetParameter (set mixer unit input volume) failed");
        return;
    }

    Boolean updated;
    AUGraphUpdate(processingGraph, &updated);
    // updated ends up being zero ('\0')
}
In the code above, none of the error conditions are met (no function call fails), but the boolean 'updated' remains false until the end.
Am I missing a step, or is it not possible to switch render callbacks after setup? Do I need to set aside dedicated buses to these alternative callbacks? I would like to be able to set custom callbacks from the client code (the side calling my sound engine)...
EDIT: Actually it is working, but only after the second time: I must call -playNoise, then -stopNoise, and from then on it will play normally. I couldn't tell, because I was giving up after the first try...
BTW, the updated flag is still 0.
I added lots of audio unit calls out of desperation, but perhaps some are not necessary. I'll see which ones I can trim, then keep looking for the reason it needs two calls to work...
EDIT 2: After poking around, adding/removing calls and fixing bugs, I got to the point where the noise render callback works the first time, but after playing the noise at least once, if I attempt to reuse that bus for playing PCM (caf file), it still uses the noise render callback (despite having disconnected it). I'm going with the solution suggested by @hotpaw2 in the comments: using a 'stub' callback and further function pointers, sketched below...
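For reference, that stub approach might look roughly like this; a sketch only, where MAX_BUSES, busFuncs, and noiseRender are hypothetical names, not from the code above. The idea is that the AUGraph wiring is set up once and never changes; only a function pointer is swapped at runtime.

#define MAX_BUSES 8  // hypothetical upper bound on mixer buses

// Per-bus generator functions, swapped at runtime (ideally with an
// atomic store, since the audio thread reads them concurrently).
typedef OSStatus (*BusRenderFunc)(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData);
static BusRenderFunc busFuncs[MAX_BUSES];

// The one permanent callback registered on every bus at setup time.
static OSStatus stubRenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData)
{
    BusRenderFunc f = busFuncs[inBusNumber];
    if (f) {
        return f(inRefCon, ioActionFlags, inTimeStamp,
                 inBusNumber, inNumberFrames, ioData);
    }
    // No generator installed on this bus: output silence.
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        memset(ioData->mBuffers[b].mData, 0, ioData->mBuffers[b].mDataByteSize);
    }
    return noErr;
}

Switching sounds is then just an assignment like busFuncs[bus] = noiseRender; with no graph reconfiguration or AUGraphUpdate needed.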

Synchronising with Core Audio Thread

I am using the render callback of the ioUnit to store the audio data into a circular buffer:
OSStatus ioUnitRenderCallback(void *inRefCon,
                              AudioUnitRenderActionFlags *ioActionFlags,
                              const AudioTimeStamp *inTimeStamp,
                              UInt32 inBusNumber,
                              UInt32 inNumberFrames,
                              AudioBufferList *ioData)
{
    OSStatus err = noErr;
    AMNAudioController *This = (__bridge AMNAudioController *)inRefCon;

    err = AudioUnitRender(This.encoderMixerNode->unit,
                          ioActionFlags,
                          inTimeStamp,
                          inBusNumber,
                          inNumberFrames,
                          ioData);

    // Copy the audio to the encoder buffer
    TPCircularBufferCopyAudioBufferList(&(This->encoderBuffer), ioData, inTimeStamp, kTPCircularBufferCopyAll, NULL);

    return err;
}
I then want to read the bytes out of the circular buffer, feed them to libLame and then to libShout.
I have tried starting a thread and using NSCondition to make it wait until data is available, but this causes all sorts of issues due to using locks in the Core Audio callback.
What would be the recommended way to do this?
Thanks in advance.
More detail on how I implemented Adam's answer
I ended up taking Adam's advice and implemented it like so.
Producer
I use TPCircularBufferProduceBytes in the Core Audio render callback to add the bytes to the circular buffer. In my case I have non-interleaved audio data, so I ended up using two circular buffers.
Consumer
1) I spawn a new thread using pthread_create.
2) Within the new thread I create a new CFRunLoopTimer and add it to the current CFRunLoop (an interval of 0.005 seconds appears to work well).
3) I tell the current CFRunLoop to run.
4) Within my timer callback I encode the audio and send it to the server (returning quickly if no data is buffered).
I also have a buffer size of 5MB which appears to work well (2MB was giving me overruns). This does seem a bit high :/
Use a repeating timer (NSTimer or CADisplayLink) to poll your lock-free circular buffer or FIFO. Skip doing work if there is not enough data in the buffer, and return (to the run loop). This works because you know the sample rate with high accuracy, and how much data you prefer or need to handle at a time, so you can set the polling rate just slightly faster to be on the safe side, yet still be very close to the same efficiency as using conditional locks.
Using semaphores or locks (or anything else with unpredictable latency) in a real-time audio thread callback is not recommended.
You're on the right track, but you don't need NSCondition. You definitely don't want to block. The circular buffer implementation you're using is lock free and should do the trick. In the audio render callback, put the data into the buffer by calling TPCircularBufferProduceBytes. Then in the reader context (a timer callback is good, as hotpaw suggests), call TPCircularBufferTail to get the tail pointer (read address) and number of available bytes to read, and then call TPCircularBufferConsume to do the actual reading. Now you've done the transfer without taking any locks. Just make sure the buffer you allocate is large enough to handle the worst-case condition where your reader thread gets held off by the os for whatever reason, otherwise you can hit a buffer overrun condition and will lose data.
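A sketch of that consumer side, assuming the TPCircularBuffer API (TPCircularBufferTail/TPCircularBufferConsume); kMinBytesToEncode and encodeAndSend are placeholders for your own threshold and the lame/shout work:

// Run-loop timer callback on the consumer thread: drain whatever is
// available without ever blocking or taking a lock.
static void encoderTimerCallback(CFRunLoopTimerRef timer, void *info)
{
    TPCircularBuffer *buffer = (TPCircularBuffer *)info;

    int32_t availableBytes;
    void *tail = TPCircularBufferTail(buffer, &availableBytes);
    if (tail == NULL || availableBytes < kMinBytesToEncode) {
        return; // not enough data yet; try again on the next tick
    }

    encodeAndSend(tail, availableBytes); // placeholder: lame encode + shout send

    // Mark the bytes as read so the render callback can reuse the space.
    TPCircularBufferConsume(buffer, availableBytes);
}

The buffer pointer is handed to the timer via the info field of a CFRunLoopTimerContext when the timer is created.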

iOS Audio unit get current time stamp

I'm using an Audio Unit (iOS) to play music from a file. How do I get the current timestamp of the music I'm playing?
I found that there is a parameter called "inTimeStamp" of type AudioTimeStamp in the playbackCallback function. Is that the right place to look for the current timestamp?
Here you are:
AudioTimeStamp ts;
UInt32 size = sizeof(ts);
AudioUnitGetProperty(THIS->audioPlayerUnit,
                     kAudioUnitProperty_CurrentPlayTime,
                     kAudioUnitScope_Global,
                     0, &ts, &size);
NSLog(@"TS %f", ts.mSampleTime);
A better way to get the position in seconds is to add:
THIS->currentTime = ts.mSampleTime / THIS.streamFormatDescription.mSampleRate;
loretoparisi's answer is not suited to every scenario. If you're using kAudioUnitSubType_AudioFilePlayer, then try his answer; but if you're using other audio unit types such as a RemoteIO unit, you need to keep a variable that stores the frame count the audio unit has rendered, and update it in every render cycle:
player.progress += inNumberFrames/player.canonicalFormat.mSampleRate;
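In context, that accumulation lives in the render callback; a minimal sketch, where Player and its fields are hypothetical names for your own state object passed in via inRefCon:

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    Player *player = (Player *)inRefCon; // hypothetical state struct

    // ... fill ioData with the next inNumberFrames frames of audio ...

    // Advance the running position by the frames just rendered.
    player->framesRendered += inNumberFrames;
    player->progress = player->framesRendered / player->canonicalFormat.mSampleRate;
    return noErr;
}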
