Are there direct calls to the audio unit so that I don't have to depend on the system callbacks (the input and render callbacks; I can mimic those with a timer)? For example, just as there is AudioUnitRender to pull data from the audio unit, is there another API to push data to it?
While I'm not aware of a specific push-like call in the CoreAudio API, you can easily accomplish this by doing your DSP processing in a separate C function that takes floating-point buffers as arguments. That way the render callback can delegate the hard work to that function, and you can also call it manually whenever you need push-based processing.
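For illustration, a minimal sketch of that pattern in C, assuming a mono float32 stream; ProcessAudio and PushSamples are made-up names, and the callback wiring is just the standard AURenderCallback signature:

```c
#include <AudioUnit/AudioUnit.h>

/* Hypothetical DSP routine: it only sees plain float buffers, so it can be
   driven by the render callback (pull) or called directly from your own
   code, e.g. a timer (push). */
static void ProcessAudio(float *samples, UInt32 frameCount)
{
    for (UInt32 i = 0; i < frameCount; i++)
        samples[i] *= 0.5f;          /* placeholder DSP: simple gain */
}

/* Pull path: standard render callback, assuming ioData already holds the
   audio to be processed (e.g. mic data pulled earlier in the callback). */
static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    ProcessAudio((float *)ioData->mBuffers[0].mData, inNumberFrames);
    return noErr;
}

/* Push path: feed your own buffer through the same DSP whenever you like. */
void PushSamples(float *samples, UInt32 frameCount)
{
    ProcessAudio(samples, frameCount);
}
```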
Audio units follow a pull model: the output requests data to play, and if data is available, it plays it.
The common way to record is to call AudioUnitRender within the output callback. AudioUnitRender pulls the data from the recording (input) side; you can either play that data back, or save it somewhere else and have the player output silence.
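A minimal sketch of that arrangement, assuming a RemoteIO unit with input enabled whose refCon holds the unit itself, and a hypothetical SaveRecordedAudio() that just hands the data off to another thread:

```c
#include <AudioUnit/AudioUnit.h>
#include <string.h>

/* Hypothetical: copies the captured audio somewhere safe
   (e.g. a lock-free ring buffer drained by another thread). */
extern void SaveRecordedAudio(const AudioBufferList *buffers, UInt32 frames);

/* Output (bus 0) render callback on a RemoteIO unit with input enabled. */
static OSStatus OutputCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioUnit remoteIO = (AudioUnit)inRefCon;

    /* Pull the freshly recorded samples from the input side (bus 1). */
    OSStatus err = AudioUnitRender(remoteIO, ioActionFlags, inTimeStamp,
                                   1 /* input bus */, inNumberFrames, ioData);
    if (err == noErr)
        SaveRecordedAudio(ioData, inNumberFrames);

    /* Play silence instead of monitoring the microphone. */
    for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
        memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);

    return noErr;
}
```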
This is a pretty "minutiae"-level question regarding timing...
I'm using iOS's RemoteIO audio unit. I just wonder how exactly the system handles the timing: after calling AudioOutputUnitStart(), the unit should be "on", and render callbacks will then be pulled by downstream units. Allow me to guess:
Possibility 1: the next render callback happens right after the execution of AudioOutputUnitStart(), then it goes on
Possibility 2: the system has its own render-callback rhythm. After calling AudioOutputUnitStart(), the next render callback catches one of the system's "next" ticks and starts from there
Is it 1 or 2? Or is there a 3? Thanks in advance!
The audio latency seems to depend on the specific device model, the audio session and its options, the requested sample rate and buffer size, and whether any other audio (a background app, or a recently closed app) is or has recently been playing or recording on the system. Whether or not the internal audio amplifier circuits (etc.) need to be powered up, or are already turned on, may make the biggest difference. Requesting certain sample rates also seems to add latency, due to the buffering potentially needed by the OS resampling and mixer code.
So likely (2) or (3).
The best way to minimize latency when using RemoteIO is to request very short buffers (1 to 6 ms) in the audio session setup, start the audio session and Audio Unit way ahead of time (at app startup, view load, etc.), then fill the callback buffers with zeros (or discard recorded callback data) until you need sound.
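A rough sketch of that setup, using the old C Audio Session API that matches the era of this answer (AVAudioSession's preferred IO buffer duration is the modern Objective-C equivalent); gSoundNeeded is a made-up flag:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <AudioUnit/AudioUnit.h>
#include <string.h>

/* Ask for ~5 ms hardware buffers as early as possible (app startup, view load). */
void RequestShortBuffers(void)
{
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    Float32 preferredDuration = 0.005f;   /* seconds */
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                            sizeof(preferredDuration), &preferredDuration);
    AudioSessionSetActive(true);
}

/* Until real audio is needed, keep feeding zeros so the unit stays running. */
static volatile int gSoundNeeded = 0;   /* hypothetical flag set elsewhere */

static OSStatus WarmupCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    if (!gSoundNeeded) {
        for (UInt32 i = 0; i < ioData->mNumberBuffers; i++)
            memset(ioData->mBuffers[i].mData, 0, ioData->mBuffers[i].mDataByteSize);
        return noErr;
    }
    /* ... otherwise fill ioData with real samples ... */
    return noErr;
}
```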
I have an audio callback that I use to access a bufferList and analyse the audio.
I need to record this audio too. Firstly, would it be wise to do the recording in the same callback?
e.g. memcpy(dest, ioData->mBuffers[0].mData, byteCount); where dest is a void * destination buffer and byteCount is the number of bytes in the buffer
Or should the recording have its own callback?
Either way, is this memcpy the correct way to do this and how would I write this audio to a file?
Should the totalByteCount be used with pointer arithmetic on the void *dest once the audio input completes, and the data then passed to a file writer?
What is the best way to record audio in a Core Audio render callback?
I think you can have two different callbacks, one each for the input and output audio streams. Normally, when you open a particular stream (input or output), you specify its callback as well. You can do all your audio processing in the callback, provided you can meet the callback deadline; otherwise you may end up dropping audio samples. A better way is to use some kind of circular buffer: in the callback you just fill the buffer, and you do all the other processing (along with recording) in the main thread.
I'm not sure which audio framework you are using. I've used PortAudio in my project and it worked fine. PortAudio also provides a lock-free circular buffer that can be used inside the callback without the need for a thread-locking mechanism (a minimal sketch of that pattern follows the links below).
The following links might help you:
http://portaudio.com/docs/v19-doxydocs/paex__record_8c.html
http://portaudio.com/docs/v19-doxydocs/paex_ocean_shore_8c.html
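A minimal sketch of that fill-in-the-callback, drain-in-the-main-thread pattern using PortAudio's ring buffer (pa_ringbuffer.h ships in the PortAudio source tree, and the element count must be a power of two); the buffer sizes and mono float32 format here are assumptions:

```c
#include <portaudio.h>
#include "pa_ringbuffer.h"   /* ships with the PortAudio sources */
#include <stdlib.h>

#define RING_FRAMES 16384     /* must be a power of two */

static PaUtilRingBuffer gRing;
static float *gRingData;

/* Real-time callback: just copy the input samples into the ring buffer. */
static int RecordCallback(const void *input, void *output,
                          unsigned long frameCount,
                          const PaStreamCallbackTimeInfo *timeInfo,
                          PaStreamCallbackFlags statusFlags, void *userData)
{
    if (input != NULL)
        PaUtil_WriteRingBuffer(&gRing, input, frameCount); /* mono float32 */
    return paContinue;
}

/* Main thread: drain the ring buffer and do the heavy work
   (analysis, recording to a file, etc.). */
void DrainAndProcess(void)
{
    float chunk[1024];
    while (PaUtil_GetRingBufferReadAvailable(&gRing) >= 1024) {
        PaUtil_ReadRingBuffer(&gRing, chunk, 1024);
        /* ... analyse and/or append to a file here ... */
    }
}

void SetupRing(void)
{
    gRingData = malloc(sizeof(float) * RING_FRAMES);
    PaUtil_InitializeRingBuffer(&gRing, sizeof(float), RING_FRAMES, gRingData);
}
```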
I've searched around but haven't found any good examples or tutorials of saving audio out of a RemoteIO Audio Unit.
My setup: Using the MusicPlayer API, I have several AUSamplers -> MixerUnit -> RemoteIO
Audio playback works great. I would like to add functionality to save the audio output to a file. Would I do this in a render callback on the RemoteIO?
Any tips or pointers to example code much appreciated!
Due to the tight latency requirements of Audio Unit callbacks, one should not do any synchronous file access (or make any other calls that could potentially block, involve memory management, or take OS locks) inside the RemoteIO callback. Instead, just copy the audio data out to another buffer (a larger circular buffer, for example), and set some state indicating how much data has been copied. Then, in another thread, when the amount of data is sufficient, write the contents of that buffer out to a file. This could be a raw PCM file, which can later be converted by AVAssetReader/Writer into another audio file type.
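A simplified sketch of that split (not a production-grade lock-free queue; the single-writer/single-reader layout, buffer sizes, and the raw-PCM output file are assumptions):

```c
#include <AudioUnit/AudioUnit.h>
#include <stdio.h>

/* Simplified single-producer/single-consumer byte FIFO (power-of-two size).
   The render callback is the only writer, the file thread the only reader. */
#define FIFO_BYTES (1 << 20)
static unsigned char gFifo[FIFO_BYTES];
static volatile unsigned long gWritePos = 0, gReadPos = 0;

/* RemoteIO render callback: no file I/O, no locks -- just a copy and an index bump. */
static OSStatus RecordTapCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData)
{
    UInt32 bytes = ioData->mBuffers[0].mDataByteSize;
    for (UInt32 i = 0; i < bytes; i++)
        gFifo[(gWritePos + i) & (FIFO_BYTES - 1)] =
            ((unsigned char *)ioData->mBuffers[0].mData)[i];
    gWritePos += bytes;          /* publish after the copy */
    return noErr;
}

/* Background thread: once enough data has accumulated, append raw PCM to an
   already-opened file; convert it to another format later (e.g. AVAssetWriter). */
void DrainToFile(FILE *f)
{
    while (gWritePos - gReadPos >= 32768) {
        unsigned char chunk[32768];
        for (unsigned long i = 0; i < sizeof(chunk); i++)
            chunk[i] = gFifo[(gReadPos + i) & (FIFO_BYTES - 1)];
        gReadPos += sizeof(chunk);
        fwrite(chunk, 1, sizeof(chunk), f);
    }
}
```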
I'm writing an iOS application that will play audio instructions as one of its features.
Every time the application wants to play audio it reads from a non-standard file and puts the resulting PCM data for that audio in a buffer in memory.
Even though I have that buffer with the PCM data, I'm having trouble getting the application to actually play the sound. After searching the iOS documentation, I started implementing an AudioUnit. The problem with this AudioUnit is the use of a render callback (as far as I know, the only way to output sound). From Apple's developer documentation:
… render callbacks have a strict performance requirement that you must adhere to. A render callback lives on a real-time priority thread on which subsequent render calls arrive asynchronously. The work you do in the body of a render callback takes place in this time-constrained environment. If your callback is still producing sample frames in response to the previous render call when the next render call arrives, you get a gap in the sound. For this reason you must not take locks, allocate memory, access the file system or a network connection, or otherwise perform time-consuming tasks in the body of a render callback function.
If I can't take locks inside the render callback, I can't safely read from the buffer while writing to it. There seems to be no opportunity to read the file and write to the buffer, because the render callback will be accessing it constantly.
The only example I found actually generated the PCM data inside the render method, which I can't do.
Is this the only way of using AudioUnits (with an asynchronous render callback)?
Is there an alternative way to play back PCM data from memory?
Using the RemoteIO Audio Unit might require having a separate data queue (a FIFO or circular buffer) outside the audio unit callback, which can pre-buffer enough audio data from the file reads, ahead of the audio unit render callback, to cover worst-case latencies. Then the render callback only needs to do a quick copy of the audio data, followed by the update of a write-only flag that indicates the audio data was consumed.
An alternative built into iOS is the Audio Queue API, which does the pre-buffering for you. It allows your app to fill a number of larger audio buffers in the main run loop ahead of time. You still have to pre-buffer enough data to allow for the maximum of your file, network, lock, or other latencies.
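A rough sketch of that approach; MyFillPCM() is a hypothetical function that copies whatever PCM your app already has in memory, and the buffer count and size are arbitrary:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <string.h>

/* Hypothetical fill function: copies up to maxBytes of PCM from the app's
   in-memory buffer and returns the number of bytes actually written. */
extern UInt32 MyFillPCM(void *dst, UInt32 maxBytes);

static void MyAQOutputCallback(void *inUserData, AudioQueueRef inAQ,
                               AudioQueueBufferRef inBuffer)
{
    UInt32 bytes = MyFillPCM(inBuffer->mAudioData,
                             inBuffer->mAudioDataBytesCapacity);
    if (bytes == 0) {   /* nothing ready yet: enqueue silence instead */
        memset(inBuffer->mAudioData, 0, inBuffer->mAudioDataBytesCapacity);
        bytes = inBuffer->mAudioDataBytesCapacity;
    }
    inBuffer->mAudioDataByteSize = bytes;
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
}

void StartQueue(const AudioStreamBasicDescription *fmt)
{
    AudioQueueRef queue;
    AudioQueueNewOutput(fmt, MyAQOutputCallback, NULL, NULL, NULL, 0, &queue);

    /* Prime a few large buffers before starting, so the main run loop
       has plenty of time to refill each one after it is consumed. */
    for (int i = 0; i < 3; i++) {
        AudioQueueBufferRef buf;
        AudioQueueAllocateBuffer(queue, 32768, &buf);
        MyAQOutputCallback(NULL, queue, buf);
    }
    AudioQueueStart(queue, NULL);
}
```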
Another strategy is to have alternative audio data to feed the real-time render callback if the file or network read didn't keep up, such as quickly creating an audio buffer that tapers to silence (and then un-tapering when real data starts arriving again).
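A minimal sketch of that fallback, assuming mono float32 output and a hypothetical ReadFromFifo() that reports how many of the requested frames it could actually supply:

```c
#include <AudioUnit/AudioUnit.h>
#include <string.h>

/* Hypothetical FIFO reader: returns how many frames it could supply. */
extern UInt32 ReadFromFifo(float *dst, UInt32 frames);

/* Playback callback that tapers to silence when the FIFO underruns,
   instead of emitting a hard click or a gap. */
static OSStatus PlaybackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData)
{
    float *out = (float *)ioData->mBuffers[0].mData;   /* assumes mono float32 */
    UInt32 got = ReadFromFifo(out, inNumberFrames);

    if (got < inNumberFrames) {
        /* Fade the last few real samples down to zero, then fill with silence. */
        UInt32 fade = (got < 32) ? got : 32;
        for (UInt32 i = 0; i < fade; i++)
            out[got - fade + i] *= (float)(fade - i) / (float)fade;
        memset(out + got, 0, (inNumberFrames - got) * sizeof(float));
    }
    return noErr;
}
```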
Could someone explain in terms of Audio Unit connections how to modify the iPhone microphone data stream visible to other processes with gain or EQ? I understand how to use a remote I/O unit to grab mic data and do my processing. I want this new data to replace the original mic data stream, not go to speakers or a file. "Audio Unit Hosting Fundamentals" Figure 1-3 is close.
I have read everything out there on Audio Units and used several of the online examples (Tim B, Play It Loud, Tasty Pixel) but don't see how to do this yet.
Any help?
Thanks
This doesn't seem to be clearly explained or illustrated in the documentation. However, if you look at the aurioTouch sample code, you will see how, within the Remote I/O render callback, it makes a call to retrieve data from the microphone. Then it optionally processes this data and returns it.
This is kind of doubly useful, because the call that retrieves the microphone data returns already-created buffers. That means you don't have to create your own buffers, which is great because that is a bit of a hassle.
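Roughly, that callback has the shape sketched below (not the actual aurioTouch source; the float32 stream format and the gain factor are assumptions, and the RemoteIO unit is assumed to be passed in via inRefCon):

```c
#include <AudioUnit/AudioUnit.h>

/* aurioTouch-style pass-through with processing: pull the mic data into the
   buffers the system already handed us, modify it in place, and return it. */
static OSStatus ThruCallback(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    AudioUnit remoteIO = (AudioUnit)inRefCon;

    /* Fill ioData with the current microphone samples (input bus is 1). */
    OSStatus err = AudioUnitRender(remoteIO, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, ioData);
    if (err != noErr) return err;

    /* Apply gain (or EQ, etc.) in place -- no extra buffers needed. */
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        float *samples = (float *)ioData->mBuffers[b].mData;
        for (UInt32 i = 0; i < inNumberFrames; i++)
            samples[i] *= 2.0f;   /* placeholder: +6 dB gain */
    }
    return noErr;
}
```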