I've searched around but haven't found any good examples or tutorials of saving audio out of a RemoteIO Audio Unit.
My setup: Using the MusicPlayer API, I have several AUSamplers -> MixerUnit -> RemoteIO
Audio playback works great. I would like to add functionality to save the audio output to a file. Would I do this in a render callback on the RemoteIO?
Any tips or pointers to example code much appreciated!
Due to the tight latency requirements of Audio Unit callbacks, you should not do any synchronous file access (or make any other calls that could potentially block, allocate memory, or take OS locks) inside the RemoteIO callback. Instead, just copy the audio data out to another buffer (a larger circular buffer, for example) and set some state indicating how much data has been copied. Then, on another thread, once enough data has accumulated, write the contents of that buffer out to a file. This could be a raw PCM file, which can later be converted by AVAssetReader/Writer into another audio file type.
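As an illustration only, here is a minimal sketch of that pattern: a render-notify callback on the RemoteIO unit (registered with AudioUnitAddRenderNotify) copies each rendered slice into a lock-free ring buffer, and an ordinary thread drains the ring to a raw PCM file. The names gRing, TapRenderNotify and DrainRingToFile, and the buffer size, are placeholders of mine, not Apple API, and the code assumes interleaved PCM in the first buffer.

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdatomic.h>
#include <stdio.h>
#include <string.h>

#define kRingBytes (1 << 20)                    // 1 MB of PCM; must be a power of two

static uint8_t          gRing[kRingBytes];
static _Atomic uint64_t gWritePos = 0;          // bytes produced (audio thread)
static _Atomic uint64_t gReadPos  = 0;          // bytes consumed (writer thread)

// Registered with AudioUnitAddRenderNotify() on the RemoteIO unit, so it sees
// the final mixed output after each render cycle.
static OSStatus TapRenderNotify(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
{
    if (!(*ioActionFlags & kAudioUnitRenderAction_PostRender))
        return noErr;                           // only copy after the unit has rendered

    UInt32   bytes = ioData->mBuffers[0].mDataByteSize;
    uint64_t w     = atomic_load(&gWritePos);

    if (w + bytes - atomic_load(&gReadPos) > kRingBytes)
        return noErr;                           // ring full: drop rather than block

    // Copy into the ring, handling wrap-around; no locks, no allocation.
    uint64_t offset = w % kRingBytes;
    uint64_t first  = bytes;
    if (first > kRingBytes - offset) first = kRingBytes - offset;
    memcpy(gRing + offset, ioData->mBuffers[0].mData, (size_t)first);
    memcpy(gRing, (uint8_t *)ioData->mBuffers[0].mData + first, (size_t)(bytes - first));

    atomic_store(&gWritePos, w + bytes);        // publish the new write position
    return noErr;
}

// Called periodically from an ordinary thread: writes whatever has
// accumulated in the ring out to a raw PCM file.
static void DrainRingToFile(FILE *pcmFile)
{
    uint64_t r = atomic_load(&gReadPos);
    uint64_t w = atomic_load(&gWritePos);
    while (r < w) {
        uint64_t offset = r % kRingBytes;
        uint64_t chunk  = w - r;
        if (chunk > kRingBytes - offset) chunk = kRingBytes - offset;
        fwrite(gRing + offset, 1, (size_t)chunk, pcmFile);
        r += chunk;
    }
    atomic_store(&gReadPos, r);
}
```

You would register the tap with AudioUnitAddRenderNotify(remoteIOUnit, TapRenderNotify, NULL) and call DrainRingToFile() periodically from your writer thread.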
I'm working on an app which has a requirement for running some basic audio filters (such as normalisation and reverb) on a file. The idea is to take an existing audio file, add the filters, and then write the data to a new file. Crucially, this must be done without any playback and should be fast (i.e. on a 60 second audio file I should be able to add reverb in under a second).
I've looked at several solutions such as The Amazing Audio Engine and AudioBox, but these all seem to rely on playing the audio back in real time rather than writing it to a file.
Does anybody have examples, or can anyone point me in the right direction, for simply taking a file and applying a basic audio filter without listening to it? I'm sure I must be missing something simple somewhere, but my searches have turned up nothing.
In general, the steps are:
Set up an AUGraph like this - AudioFilePlayer -> Reverb/Limiter/EQ/etc. -> GenericOutput
Open the input file and schedule it on the AudioFilePlayer.
Create an output file and repeatedly call AudioUnitRender on the GenericOutput unit, writing the rendered buffers to the output file (sketched below).
I'm not sure about the speed of this, but it should be acceptable.
There is a comprehensive example of offline rendering in this thread that covers the setup and rendering process.
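A minimal sketch of that last step, assuming the graph (AudioFilePlayer -> reverb -> GenericOutput) has already been built and initialized, the input file scheduled on the player, and an ExtAudioFileRef created for the output with a matching client format. All names here are placeholders, and the code assumes an interleaved stream format:

```c
#include <AudioToolbox/AudioToolbox.h>
#include <stdlib.h>

// Pull audio through the graph offline and write it to the output file.
static OSStatus RenderOffline(AudioUnit genericOutputUnit,
                              ExtAudioFileRef outputFile,
                              const AudioStreamBasicDescription *streamFormat,
                              UInt64 totalFrames)
{
    const UInt32 framesPerSlice = 512;
    void *sliceData = malloc(framesPerSlice * streamFormat->mBytesPerFrame);

    AudioTimeStamp ts = {0};
    ts.mFlags      = kAudioTimeStampSampleTimeValid;
    ts.mSampleTime = 0;                         // offline clock we advance ourselves

    for (UInt64 rendered = 0; rendered < totalFrames; rendered += framesPerSlice) {
        UInt32 frames = framesPerSlice;
        if (totalFrames - rendered < framesPerSlice)
            frames = (UInt32)(totalFrames - rendered);

        // One interleaved buffer; sizes must be reset for every render call.
        AudioBufferList bufferList;
        bufferList.mNumberBuffers = 1;
        bufferList.mBuffers[0].mNumberChannels = streamFormat->mChannelsPerFrame;
        bufferList.mBuffers[0].mDataByteSize   = frames * streamFormat->mBytesPerFrame;
        bufferList.mBuffers[0].mData           = sliceData;

        // Pull a slice through the whole graph via the GenericOutput unit.
        AudioUnitRenderActionFlags flags = 0;
        OSStatus err = AudioUnitRender(genericOutputUnit, &flags, &ts,
                                       0 /* output bus */, frames, &bufferList);
        if (err != noErr) { free(sliceData); return err; }

        // ExtAudioFile converts to the file's format if a client format is set.
        err = ExtAudioFileWrite(outputFile, frames, &bufferList);
        if (err != noErr) { free(sliceData); return err; }

        ts.mSampleTime += frames;               // advance the offline timeline
    }

    free(sliceData);
    return noErr;
}
```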
I have an audio callback that I use to access a bufferList and analyse the audio.
I need to record this audio too. Firstly would it be wise to do the recording in the same callback?
e.g. memcpy(dest, ioData->mBuffers[0].mData, byteCount); (where dest is a void * and byteCount is the number of bytes in the buffer)
Or should the recording have its own callback?
Either way, is this memcpy the correct way to do this and how would I write this audio to a file?
Should I use the totalByteCount for pointer arithmetic on the void *dest once the audio input completes, and then pass the data to a file writer?
What is the best way to record audio in a core-audio render callback?
I think you can have two different callbacks, one each for the input and output audio streams. Normally, when you open a particular stream (input or output), you specify the callback as well. In the callback you can do all your audio processing, provided you can meet the callback deadline; otherwise there is a chance you will end up missing audio samples. A better way is to use some kind of circular buffer: in the callback you just fill the buffer, and you do all the other processing (along with the recording) on the main thread.
I'm not sure which audio framework you are using. I've used PortAudio in my project and it worked fine. PortAudio also provides a lock-free circular buffer which can be used inside the callback without needing a thread-locking mechanism.
The following links might help you.
http://portaudio.com/docs/v19-doxydocs/paex__record_8c.html
http://portaudio.com/docs/v19-doxydocs/paex_ocean_shore_8c.html
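As a rough sketch of that pattern with PortAudio: the callback only pushes samples into the lock-free ring buffer (the helpers live in pa_ringbuffer.h, which ships with the PortAudio sources and is what the ocean-shore example above uses), and the main thread drains it and does the blocking file writes. The file name, buffer sizes, and recording duration below are arbitrary placeholders.

```c
#include <stdio.h>
#include "portaudio.h"
#include "pa_ringbuffer.h"

#define SAMPLE_RATE 44100
#define RING_FRAMES (1 << 16)                   /* must be a power of two */

static PaUtilRingBuffer gRing;
static float            gRingData[RING_FRAMES]; /* mono float samples */

/* Real-time callback: no locks, no I/O -- just push samples into the ring. */
static int RecordCallback(const void *input, void *output,
                          unsigned long frameCount,
                          const PaStreamCallbackTimeInfo *timeInfo,
                          PaStreamCallbackFlags statusFlags,
                          void *userData)
{
    (void)output; (void)timeInfo; (void)statusFlags; (void)userData;
    PaUtil_WriteRingBuffer(&gRing, input, (ring_buffer_size_t)frameCount);
    return paContinue;
}

int main(void)
{
    PaStream *stream;
    FILE *pcm = fopen("capture.f32", "wb");     /* raw 32-bit float PCM */
    float scratch[1024];

    PaUtil_InitializeRingBuffer(&gRing, sizeof(float), RING_FRAMES, gRingData);

    Pa_Initialize();
    Pa_OpenDefaultStream(&stream, 1 /* in */, 0 /* out */, paFloat32,
                         SAMPLE_RATE, 256, RecordCallback, NULL);
    Pa_StartStream(stream);

    /* Main thread: drain the ring buffer and do the blocking file writes here. */
    for (int i = 0; i < 2000; ++i) {            /* record for roughly 20 seconds */
        ring_buffer_size_t avail = PaUtil_GetRingBufferReadAvailable(&gRing);
        while (avail > 0) {
            ring_buffer_size_t n = avail < 1024 ? avail : 1024;
            PaUtil_ReadRingBuffer(&gRing, scratch, n);
            fwrite(scratch, sizeof(float), (size_t)n, pcm);
            avail -= n;
        }
        Pa_Sleep(10);
    }

    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    fclose(pcm);
    return 0;
}
```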
I am interested in adding effects to songs played from the iTunes library. I have constructed an AUGraph as follows: AUFilePlayer -> Effects Unit -> Mixer -> RemoteIO. There is much emphasis on using data buffers and a render callback when playing large audio files, as part of efficient memory management. I have found in scattered sources that the AUFilePlayer (iOS 5 and later) somewhat reduces the need for a buffer. Given my setup using an AUFilePlayer, should my design still include a ring buffer and render callback?
In short, no. There is no need to add a buffer.
AUFilePlayer internally loads audio into a buffer and pulls from it as the graph requests audio.
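For reference, scheduling a file on the AUFilePlayer looks roughly like the sketch below; after this the unit handles its own disk reads and internal buffering as the graph pulls audio. filePlayerUnit and fileURL are assumed to exist already, and error checking is omitted.

```c
#include <AudioToolbox/AudioToolbox.h>

static void ScheduleFile(AudioUnit filePlayerUnit, CFURLRef fileURL)
{
    AudioFileID fileID;
    AudioFileOpenURL(fileURL, kAudioFileReadPermission, 0, &fileID);

    AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileIDs,
                         kAudioUnitScope_Global, 0, &fileID, sizeof(fileID));

    // Play the whole file once, starting at the beginning.
    ScheduledAudioFileRegion region = {0};
    region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    region.mTimeStamp.mSampleTime = 0;
    region.mAudioFile    = fileID;
    region.mLoopCount    = 0;
    region.mStartFrame   = 0;
    region.mFramesToPlay = (UInt32)-1;          // -1 = play to the end of the file
    AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileRegion,
                         kAudioUnitScope_Global, 0, &region, sizeof(region));

    UInt32 defaultPrime = 0;                    // 0 = let the unit pick its prime size
    AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFilePrime,
                         kAudioUnitScope_Global, 0, &defaultPrime, sizeof(defaultPrime));

    AudioTimeStamp startTime = {0};
    startTime.mFlags = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = -1;                 // -1 = start on the next render cycle
    AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                         kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));
}
```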
I'm writing an iOS application that will play audio instructions as one of its features.
Every time the application wants to play audio it reads from a non-standard file and puts the resulting PCM data for that audio in a buffer in memory.
Even though I have that buffer with the PCM data, I'm having trouble getting the application to actually play the sound. After searching the iOS documentation, I started implementing an AudioUnit. The problem with this AudioUnit is the use of a render callback (as far as I know, the only way to output sound). From Apple's developer documentation:
… render callbacks have a strict performance requirement that you must adhere to. A render callback lives on a real-time priority thread on which subsequent render calls arrive asynchronously. The work you do in the body of a render callback takes place in this time-constrained environment. If your callback is still producing sample frames in response to the previous render call when the next render call arrives, you get a gap in the sound. For this reason you must not take locks, allocate memory, access the file system or a network connection, or otherwise perform time-consuming tasks in the body of a render callback function.
If I can't use locks inside the render callback method, I can't read from the buffer while writing to it. There is no opportunity to read the file and write to the buffer because the render callback will be accessing it constantly.
The only example I found actually generated the PCM data inside the render method, which I can't do.
Is this the only way of using AudioUnits (with an asynchronous render callback)?
Is there an alternative for playing back PCM data from memory?
Using the RemoteIO Audio Unit might require a separate data queue (a FIFO or circular buffer), outside the audio unit callback, which can pre-buffer enough audio data from the file reads, ahead of the render callback, to cover worst-case latencies. Then the render callback only needs to do a quick copy of the audio data, followed by an update of a write-only flag that indicates the audio data was consumed.
An alternative built into iOS is the Audio Queue API, which does the pre-buffering for you. It allows your app to fill a number of larger audio buffers in the main run loop ahead of time. You still have to pre-buffer enough data to cover the maximum file, network, lock, or other latencies.
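A bare-bones sketch of the Audio Queue approach, where the queue hands you buffers to refill on its own thread: CopyPCMFromMyStore is a hypothetical stand-in for however your app produces its decoded PCM, and error handling is omitted.

```c
#include <AudioToolbox/AudioToolbox.h>

#define kNumBuffers   3
#define kBufferBytes  (32 * 1024)

// Hypothetical helper: copies up to `capacity` bytes of decoded PCM into `dst`
// and returns how many bytes it actually produced (0 when the source is done).
extern UInt32 CopyPCMFromMyStore(void *dst, UInt32 capacity);

static void OutputCallback(void *userData, AudioQueueRef queue, AudioQueueBufferRef buffer)
{
    UInt32 bytes = CopyPCMFromMyStore(buffer->mAudioData, buffer->mAudioDataBytesCapacity);
    if (bytes == 0) {                           // nothing left: stop once queued audio drains
        AudioQueueStop(queue, false);
        return;
    }
    buffer->mAudioDataByteSize = bytes;
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
}

static void StartPlayback(const AudioStreamBasicDescription *pcmFormat)
{
    AudioQueueRef queue;
    AudioQueueNewOutput(pcmFormat, OutputCallback, NULL,
                        NULL, NULL, 0, &queue); // NULL run loop: use the queue's own thread

    // Prime a few buffers up front so there is always data in flight.
    for (int i = 0; i < kNumBuffers; i++) {
        AudioQueueBufferRef buffer;
        AudioQueueAllocateBuffer(queue, kBufferBytes, &buffer);
        OutputCallback(NULL, queue, buffer);    // fill and enqueue immediately
    }

    AudioQueueStart(queue, NULL);
}
```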
Another strategy is to have alternative audio data to feed the real-time render callback if the file or network read didn't keep up, such as quickly creating an audio buffer that tapers to silence (and then un-tapering when real data starts arriving again).
Could someone explain in terms of Audio Unit connections how to modify the iPhone microphone data stream visible to other processes with gain or EQ? I understand how to use a remote I/O unit to grab mic data and do my processing. I want this new data to replace the original mic data stream, not go to speakers or a file. "Audio Unit Hosting Fundamentals" Figure 1-3 is close.
I have read everything out there on Audio Units and used several of the online examples (Tim B, Play It Loud, Tasty Pixel) but don't see how to do this yet.
Any help?
Thanks
This doesn't seem to be clearly explained or illustrated in the documentation. However, if you look at the aurioTouch sample code, you will see how, within the remote I/O render callback, it makes a call to retrieve data from the microphone, then optionally processes this data and returns it.
This is doubly useful because that call to retrieve microphone data returns already-created buffers. This means you don't have to create your own buffers, which is great because that is a bit of a hassle.
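Roughly, that aurioTouch-style callback looks like the sketch below: AudioUnitRender() against the RemoteIO unit's input bus (bus 1) fills the buffers the system already handed you with the current microphone samples, you process them in place, and whatever is left in ioData is what continues downstream. rioUnit (passed via inRefCon) and ApplyGain are placeholders, and the processing assumes a non-interleaved 32-bit float stream format.

```c
#include <AudioToolbox/AudioToolbox.h>

// Placeholder processing step: scale every sample in place.
static void ApplyGain(AudioBufferList *ioData, UInt32 frames, float gain)
{
    for (UInt32 b = 0; b < ioData->mNumberBuffers; b++) {
        float *samples = (float *)ioData->mBuffers[b].mData;
        for (UInt32 i = 0; i < frames; i++)
            samples[i] *= gain;
    }
}

static OSStatus RenderCallback(void *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp *inTimeStamp,
                               UInt32 inBusNumber,
                               UInt32 inNumberFrames,
                               AudioBufferList *ioData)
{
    AudioUnit rioUnit = (AudioUnit)inRefCon;

    // Bus 1 is the RemoteIO input (microphone) side: this fills ioData with
    // the current slice of mic samples using the buffers already handed to us.
    OSStatus err = AudioUnitRender(rioUnit, ioActionFlags, inTimeStamp,
                                   1 /* input bus */, inNumberFrames, ioData);
    if (err != noErr) return err;

    // Process in place; whatever is left in ioData is what goes downstream.
    ApplyGain(ioData, inNumberFrames, 0.5f);
    return noErr;
}
```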