Could someone explain, in terms of Audio Unit connections, how to modify the iPhone microphone data stream visible to other processes, applying gain or EQ? I understand how to use a Remote I/O unit to grab mic data and do my processing. I want this new data to replace the original mic data stream, not go to the speakers or a file. Figure 1-3 in "Audio Unit Hosting Fundamentals" is close.
I have read everything out there on Audio Units and used several of the online examples (Tim B, Play It Loud, Tasty Pixel) but don't see how to do this yet.
Any help?
Thanks
This doesn't seem to be clearly explained or illustrated in the documentation. However, if you look at the aurioTouch sample code, you will see that within the Remote I/O render callback it makes a call to retrieve the data from the microphone, optionally processes that data, and then returns it.

This is doubly useful because the call that retrieves the microphone data fills buffers that have already been created for you. That means you don't have to create your own buffers, which is great because doing so is a bit of a hassle.
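For illustration, here is a minimal sketch of that pattern, assuming the callback's refCon is your already-configured RemoteIO unit and that the stream format is 16-bit integer samples (both assumptions are mine, not aurioTouch's actual code):

#include <AudioUnit/AudioUnit.h>

// Render callback attached to the RemoteIO output element (bus 0).
static OSStatus RenderCallback(void                       *inRefCon,
                               AudioUnitRenderActionFlags *ioActionFlags,
                               const AudioTimeStamp       *inTimeStamp,
                               UInt32                      inBusNumber,
                               UInt32                      inNumberFrames,
                               AudioBufferList            *ioData)
{
    AudioUnit remoteIOUnit = (AudioUnit)inRefCon;

    // Pull the microphone samples from the input element (bus 1) straight
    // into the buffers the system handed us; no buffer allocation needed.
    OSStatus err = AudioUnitRender(remoteIOUnit, ioActionFlags, inTimeStamp,
                                   1, inNumberFrames, ioData);
    if (err != noErr) return err;

    // Optionally process the samples in place, e.g. apply a crude gain.
    for (UInt32 buf = 0; buf < ioData->mNumberBuffers; ++buf) {
        SInt16 *samples = (SInt16 *)ioData->mBuffers[buf].mData;
        UInt32 count = ioData->mBuffers[buf].mDataByteSize / sizeof(SInt16);
        for (UInt32 i = 0; i < count; ++i) {
            samples[i] = samples[i] / 2;   // roughly -6 dB
        }
    }

    // Whatever is left in ioData is what the output element plays.
    return noErr;
}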
I'm trying to send music over bluetooth from one iOS device to another. I've been using this to build packets like in Ray Wenderlich's SNAP tutorial, but I've been having trouble reconstructing the packet information on the receiving phone. I have tried using https://github.com/abbood/iphoneAudioSyncer but I think it is too complicated for my needs (since I do not need synced playing). What is the simplest buffer approach that accounts for things like lost/out of order packets? I have read through a lot of CoreAudio stuff but it is very dense, so I would appreciate help from someone who has tackled this type of problem.
When you talk about lost/out-of-order packets, you're talking about Packet Loss Concealment (PLC), which is a very dense topic in its own right (if you think Core Audio is dense, wait till you dive into PLC).

In a nutshell, there are many ways to deal with packet loss, but the simplest (and the one I advise you to use) is to replace lost packets with silence. The same goes for out-of-order packets: if a packet arrives out of order, just discard it.

That being said, you are dealing with audio that is streamed to you (i.e. sent over the Bluetooth/Wi-Fi network), which means that in almost 100% of cases you are getting compressed, variable bit rate (VBR) audio. If you simply try to substitute lost VBR packets with silence, you will run into a problem: you either have to insert silence packets in the same compression format as the VBR audio you're dealing with, or you have to convert the compressed VBR audio to uncompressed audio (lossless PCM) and then insert zeros in place of the missing packets.
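For what it's worth, here is a minimal sketch of the silence-substitution approach, assuming you are already working with uncompressed 16-bit PCM and that each packet carries a sequence number (the packet layout, names, and sizes below are purely illustrative, not from any particular library):

#include <stdint.h>
#include <string.h>

#define FRAMES_PER_PACKET 512   // hypothetical fixed packet size

typedef struct {
    uint32_t sequence;                     // sender-assigned sequence number
    int16_t  samples[FRAMES_PER_PACKET];   // uncompressed PCM payload
} AudioPacket;

static uint32_t gNextExpectedSequence = 0;

// Called for each packet that arrives. Writes PCM into `out` (which must be
// large enough for the received packet plus any concealed gap) and returns
// the number of packets' worth of frames written.
static int HandleIncomingPacket(const AudioPacket *pkt, int16_t *out)
{
    // Late/out-of-order packet: its slot was already filled with silence,
    // so the simplest policy is to just discard it.
    if (pkt->sequence < gNextExpectedSequence)
        return 0;

    int written = 0;

    // Conceal any lost packets before this one by inserting silence (zeros).
    while (gNextExpectedSequence < pkt->sequence) {
        memset(out + written * FRAMES_PER_PACKET, 0,
               FRAMES_PER_PACKET * sizeof(int16_t));
        ++written;
        ++gNextExpectedSequence;
    }

    // Copy the real audio.
    memcpy(out + written * FRAMES_PER_PACKET, pkt->samples,
           FRAMES_PER_PACKET * sizeof(int16_t));
    ++written;
    ++gNextExpectedSequence;

    return written;
}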
Audio queue processing taps have been around in OS X for a little while now and just recently became available in iOS with iOS 6. I am trying to figure out what exactly they let you do. The idea seems to be that you can tap into an audio queue and process the data before sending it on. Does this mean you can now intercept raw audio coming from different applications (such as the iOS music player) and process it before it plays? In other words, is inter-app audio possible? I have read over the AudioQueue.h header but can't quite figure out what to make of it.
Consider it a mid-level entry point for custom processing (e.g. an insert effect) or reading (e.g. for analysis or display purposes) of a queue's sample data: a basic interface for reading or processing an AQ's data.
Does this mean you can now intercept raw audio coming from different applications and process that (such as the iOS music player) before it plays? In other words is inter-app audio possible?
Nope, it's not inter-process; you have no access to other processes' audio queues. These taps are for your own queues' sample data. They can be used to simplify common audio rendering or analysis chains (which, counting by apps, is the common case). My guess is that they were provided because a lot of people wanted an easier way to get at this sample data for processing or analysis, and because the alternative custom-processing entry points on iOS can be more complicated to implement (i.e. Audio Unit availability is restricted).
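To make the shape of the API concrete, here is a minimal sketch of installing a tap on one of your own queues (the wrapper function, the pre-effects flag, and the empty processing step are my choices; error handling is omitted):

#include <AudioToolbox/AudioToolbox.h>

// The tap callback: pull the queue's samples, then read or modify them.
static void MyTapCallback(void                         *inClientData,
                          AudioQueueProcessingTapRef    inAQTap,
                          UInt32                        inNumberFrames,
                          AudioTimeStamp               *ioTimeStamp,
                          AudioQueueProcessingTapFlags *ioFlags,
                          UInt32                       *outNumberFrames,
                          AudioBufferList              *ioData)
{
    // Fetch the queue's source audio into ioData...
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp,
                                          ioFlags, outNumberFrames, ioData);

    // ...then analyse or process it in place before it continues on
    // (this is where an insert effect or a level meter would go).
}

// Install the tap on a queue you created and own elsewhere.
static OSStatus InstallTap(AudioQueueRef queue, AudioQueueProcessingTapRef *tapOut)
{
    UInt32 maxFrames = 0;
    AudioStreamBasicDescription processingFormat = {0};

    // kAudioQueueProcessingTap_PreEffects taps the data before the queue's
    // own effects; PostEffects and Siphon (read-only) are also available.
    return AudioQueueProcessingTapNew(queue, MyTapCallback, NULL,
                                      kAudioQueueProcessingTap_PreEffects,
                                      &maxFrames, &processingFormat, tapOut);
}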
I am building a game that lets users remix songs. I have built a mixer based upon the Apple sample code MixerHost (creating an AUGraph with a mixer audio unit), but expanded it to load 12 tracks. Everything is working fine; however, it takes a really long time for the songs to load when the gamer selects the songs they want to remix. This is because the program has to load 12 separate mp4 files into memory before I can start playing the music.

I think what I need is an AUFilePlayer audio unit that is in charge of loading the file into the mixer. If the AUFilePlayer can handle loading the file on the fly, then the user will not have to wait for the files to load 100% into memory. My two questions are: 1. Can an AUFilePlayer be used this way? 2. The documentation on AUFilePlayer is very thin. Where can I find some example code demonstrating how to implement an AUFilePlayer properly on iOS (not on Mac OS X)?
Thanks
I think you're right - in this case a 'direct-from-disk' buffering approach is probably what you need. I believe the correct AudioUnit subtype is AudioFilePlayer. From the documentation:
The unit reads and converts audio file data into its own internal buffers. It performs disk I/O on a high-priority thread shared among all instances of this unit within a process. Upon completion of a disk read, the unit internally schedules buffers for playback.
A working example of using this unit on Mac OS X is given in Chris Adamson's book Learning Core Audio. The code for iOS isn't much different, and is discussed in this thread on the CoreAudio-API mailing list. Adamson's working code example can be found here. You should be able to adapt this to your requirements.
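As a rough illustration of the setup (not the book's exact code), scheduling a file on an AudioFilePlayer unit looks roughly like this; filePlayerUnit is assumed to be a kAudioUnitSubType_AudioFilePlayer unit already added to your AUGraph and connected to the mixer, and error handling is omitted:

#include <AudioToolbox/AudioToolbox.h>

static OSStatus ScheduleFile(AudioUnit filePlayerUnit, CFURLRef fileURL)
{
    // Open the file; the unit performs its own disk reads from this AudioFileID.
    AudioFileID audioFile;
    AudioFileOpenURL(fileURL, kAudioFileReadPermission, 0, &audioFile);

    // Tell the unit which file it will play.
    AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileIDs,
                         kAudioUnitScope_Global, 0, &audioFile, sizeof(audioFile));

    // Work out the file length in frames.
    AudioStreamBasicDescription fileFormat;
    UInt32 propSize = sizeof(fileFormat);
    AudioFileGetProperty(audioFile, kAudioFilePropertyDataFormat,
                         &propSize, &fileFormat);
    UInt64 packetCount = 0;
    propSize = sizeof(packetCount);
    AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataPacketCount,
                         &propSize, &packetCount);

    // Schedule the whole file as a single region, starting at sample time 0.
    ScheduledAudioFileRegion region = {0};
    region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    region.mTimeStamp.mSampleTime = 0;
    region.mAudioFile = audioFile;
    region.mLoopCount = 0;
    region.mStartFrame = 0;
    region.mFramesToPlay = (UInt32)(packetCount * fileFormat.mFramesPerPacket);
    AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileRegion,
                         kAudioUnitScope_Global, 0, &region, sizeof(region));

    // Prime the unit's internal buffers (0 = default), then start playback
    // at the next render cycle (-1).
    UInt32 primeFrames = 0;
    AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFilePrime,
                         kAudioUnitScope_Global, 0, &primeFrames, sizeof(primeFrames));

    AudioTimeStamp startTime = {0};
    startTime.mFlags = kAudioTimeStampSampleTimeValid;
    startTime.mSampleTime = -1;
    return AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                                kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));
}

You would call something like this for each of the 12 tracks when a song is selected, and the disk reads then happen on the unit's own high-priority I/O thread rather than up front.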
I've searched around but haven't found any good examples or tutorials of saving audio out of a RemoteIO Audio Unit.
My setup: Using the MusicPlayer API, I have several AUSamplers -> MixerUnit -> RemoteIO
Audio playback works great. I would like to add functionality to save the audio output to a file. Would I do this in a render callback on the RemoteIO?
Any tips or pointers to example code much appreciated!
Due to the tight latency requirements of Audio Unit callbacks, you should not do any synchronous file access (or make any other calls that could potentially block, allocate memory, or take OS locks) inside the RemoteIO callback. Instead, just copy the audio data out to another buffer (a larger circular buffer, for example) and set some state indicating how much data has been copied. Then, on another thread, once enough data has accumulated, write the contents of that buffer out to a file. This could be a raw PCM file, which can later be converted by AVAssetReader/AVAssetWriter into another audio file type.
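A simplified sketch of that pattern, using a render-notify callback on the RemoteIO unit; the ring-buffer functions below are stand-ins for whatever lock-free circular buffer you use (something like TPCircularBuffer), not real Core Audio APIs:

#include <AudioToolbox/AudioToolbox.h>

// Placeholders for your own lock-free circular buffer implementation.
extern void  RingBufferWrite(void *ringBuffer, const void *bytes, UInt32 byteCount);
extern void *gRingBuffer;

// Installed with AudioUnitAddRenderNotify() on the RemoteIO unit.
static OSStatus RecordingNotifyCallback(void                       *inRefCon,
                                        AudioUnitRenderActionFlags *ioActionFlags,
                                        const AudioTimeStamp       *inTimeStamp,
                                        UInt32                      inBusNumber,
                                        UInt32                      inNumberFrames,
                                        AudioBufferList            *ioData)
{
    // Only look at the data after RemoteIO has finished rendering it.
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        for (UInt32 i = 0; i < ioData->mNumberBuffers; ++i) {
            // No locks, no allocation, no file I/O here: just a copy.
            RingBufferWrite(gRingBuffer,
                            ioData->mBuffers[i].mData,
                            ioData->mBuffers[i].mDataByteSize);
        }
    }
    return noErr;
}

// On a separate, non-realtime thread, periodically drain the ring buffer and
// append the bytes to a raw PCM file (or hand them to ExtAudioFileWrite).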
What I want to do is take the output samples of an AVAsset corresponding to an audio file (no video involved) and send them to an audio effect class that takes in a block of samples, and I want to be able to do this in real time.

I am currently looking at the AVFoundation class reference and programming guide, but I can't see a way to redirect the output of a player item, send it to my effect class, and from there send the transformed samples to an audio output (using AVAssetReaderAudioMixOutput?) and hear it. I see that the AVAssetReader class gives me a way to get a block of samples using
[myAVAssetReader addOutput:myAVAssetReaderTrackOutput];
[myAVAssetReaderTrackOutput copyNextSampleBuffer];
but the Apple documentation specifies that the AVAssetReader class is not designed for, and should not be used in, real-time situations. Does anybody have a suggestion on where to look, or whether I am taking the right approach?
The MTAudioProcessingTap is perfect for this. By leveraging an AVPlayer, you can avoid having to pull the sample buffers yourself with an AVAssetReaderOutput and then render them yourself in an Audio Queue or with an Audio Unit.

Instead, attach an MTAudioProcessingTap to the inputParameters of your AVPlayerItem's audioMix, and you'll be given samples in blocks that are easy to throw into an effect unit.

Another benefit of this approach is that it works with AVAssets derived from URLs that can't always be opened by other Apple APIs (like Audio File Services), such as items from the user's iPod library. Additionally, you get functionality like tolerance of audio interruptions for free from the AVPlayer, which you would otherwise have to implement by hand if you went with an AVAssetReader solution.

To set up a tap you supply some callbacks that the system invokes as appropriate during playback. Full code for such processing can be found in this tutorial.
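For reference, here is a condensed sketch of that wiring, assuming you already have the AVPlayerItem and the AVAssetTrack you want to tap (the function names, the post-effects flag, and the empty processing step are my choices; error handling is omitted):

#import <AVFoundation/AVFoundation.h>
#import <MediaToolbox/MediaToolbox.h>

static void TapInit(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut)
{
    *tapStorageOut = clientInfo;   // nothing to set up in this sketch
}

static void TapFinalize(MTAudioProcessingTapRef tap) {}

static void TapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags,
                       AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut,
                       MTAudioProcessingTapFlags *flagsOut)
{
    // Pull the source samples, then process them in place (your effect here).
    MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                       flagsOut, NULL, numberFramesOut);
}

static void AttachTap(AVPlayerItem *playerItem, AVAssetTrack *audioTrack)
{
    MTAudioProcessingTapCallbacks callbacks = {
        .version    = kMTAudioProcessingTapCallbacksVersion_0,
        .clientInfo = NULL,
        .init       = TapInit,
        .finalize   = TapFinalize,
        .process    = TapProcess,
        // prepare/unprepare are optional and left NULL here
    };

    MTAudioProcessingTapRef tap = NULL;
    MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                               kMTAudioProcessingTapCreationFlag_PostEffects, &tap);

    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
    params.audioTapProcessor = tap;

    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    audioMix.inputParameters = @[params];
    playerItem.audioMix = audioMix;

    CFRelease(tap);   // the audio mix holds its own reference
}

Once this is attached, TapProcess is called on the fly during playback, so your effect runs in place on each block of samples before it reaches the output.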
There's a new MTAudioProcessingTap object in iOS 6 and Mac OS 10.8. Check out the Session 517 video from WWDC 2012; they demonstrate exactly what you want to do.
WWDC Link
AVAssetReader is not ideal for realtime usage because it handles the decoding for you, and in various cases copyNextSampleBuffer can block for random amounts of time.
That being said, AVAssetReader can be used wonderfully well in a producer thread feeding a circular buffer. It depends on your required usage, but I've had good success using this method to feed a RemoteIO output, and doing my effects/signal processing in the RemoteIO callback.
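A sketch of that producer-thread pattern, assuming the track output has been configured to deliver linear PCM and that the ring-buffer helpers below stand in for your own circular buffer implementation (they are not real APIs):

#import <AVFoundation/AVFoundation.h>
#include <stdlib.h>
#include <unistd.h>

// Placeholders for your own circular buffer.
extern void   RingBufferWrite(void *ringBuffer, const void *bytes, size_t length);
extern size_t RingBufferSpaceAvailable(void *ringBuffer);

static void ProducerLoop(AVAssetReader *reader,
                         AVAssetReaderTrackOutput *trackOutput,
                         void *ringBuffer)
{
    [reader startReading];

    while (reader.status == AVAssetReaderStatusReading) {
        CMSampleBufferRef sampleBuffer = [trackOutput copyNextSampleBuffer];
        if (sampleBuffer == NULL) break;   // finished (or failed)

        CMBlockBufferRef blockBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
        size_t length = CMBlockBufferGetDataLength(blockBuffer);

        // Wait for the consumer to make room; blocking here is fine because
        // this is not the realtime thread (the RemoteIO callback never waits).
        while (RingBufferSpaceAvailable(ringBuffer) < length)
            usleep(5000);

        // Copy the decoded PCM out of the block buffer into the ring buffer.
        void *tmp = malloc(length);
        CMBlockBufferCopyDataBytes(blockBuffer, 0, length, tmp);
        RingBufferWrite(ringBuffer, tmp, length);
        free(tmp);

        CFRelease(sampleBuffer);
    }
}

The RemoteIO render callback then just pulls whatever it needs from the ring buffer each cycle and applies the effects there.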