Redirecting the playback output of an AVPlayerItem - iOS

What I want to do is take the output samples of an AVAsset corresponding to an audio file (no video involved), send them to an audio effect class that takes in a block of samples, and be able to do this in real time.
I am currently looking at the AVFoundation class reference and programming guide, but I can't see a way to redirect the output of a player item and send it to my effect class, and from there send the transformed samples to an audio output (using AVAssetReaderAudioMixOutput?) and hear it. I see that the AVAssetReader class gives me a way to get a block of samples using
[myAVAssetReader addOutput:myAVAssetReaderTrackOutput];
[myAVAssetReaderTrackOutput copyNextSampleBuffer];
but Apple's documentation specifies that the AVAssetReader class is not intended for, and should not be used in, real-time situations. Does anybody have a suggestion on where to look, or on whether I am taking the right approach?

The MTAudioProcessingTap is perfect for this. By leveraging an AVPlayer, you can avoid having to pull the samples yourself with an AVAssetReaderOutput and then render them yourself in an Audio Queue or with an Audio Unit.
Instead, attach an MTAudioProcessingTap to the inputParameters of your AVAsset's audioMix, and you'll be given samples in blocks that are easy to then hand to an effect unit.
Another benefit from this is that it will work with AVAssets derived from URLs that can't always be opened by other Apple APIs (like Audio File Services), such as the user's iPod library. Additionally, you get all of the functionality like tolerance of audio interruptions that the AVPlayer provides for free, which you would otherwise have to implement by hand if you went with an AVAssetReader solution.
To set up a tap you register some callbacks that the system invokes as appropriate during playback. Full code for such processing can be found in this tutorial.
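A minimal sketch of that setup might look like the following (method and variable names are illustrative; the process callback is where you would hand the samples to your effect class):

```objc
#import <AVFoundation/AVFoundation.h>
#import <MediaToolbox/MediaToolbox.h>

// Process callback: invoked repeatedly with blocks of samples during playback.
static void tapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags,
                       AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut,
                       MTAudioProcessingTapFlags *flagsOut)
{
    // Pull the source audio into bufferListInOut...
    OSStatus status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames,
                                                         bufferListInOut, flagsOut,
                                                         NULL, numberFramesOut);
    if (status != noErr) return;
    // ...then run your effect in place on bufferListInOut here.
}

static void tapInit(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut)
{
    *tapStorageOut = clientInfo; // stash any per-tap state you need
}

static void tapFinalize(MTAudioProcessingTapRef tap) {}

- (AVAudioMix *)audioMixForTrack:(AVAssetTrack *)audioTrack
{
    MTAudioProcessingTapCallbacks callbacks = {
        .version = kMTAudioProcessingTapCallbacksVersion_0,
        .clientInfo = NULL,
        .init = tapInit,
        .finalize = tapFinalize,
        .prepare = NULL,
        .unprepare = NULL,
        .process = tapProcess,
    };

    MTAudioProcessingTapRef tap;
    OSStatus err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                              kMTAudioProcessingTapCreationFlag_PostEffects,
                                              &tap);
    if (err != noErr) return nil;

    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
    params.audioTapProcessor = tap;
    CFRelease(tap); // the input parameters retain the tap

    AVMutableAudioMix *mix = [AVMutableAudioMix audioMix];
    mix.inputParameters = @[params];
    return mix;
}
```

Set the returned mix on your AVPlayerItem's audioMix property before (or during) playback.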

There's a new MTAudioProcessingTap object in iOS 6 and OS X 10.8. Check out the Session 517 video from WWDC 2012 - they demonstrate exactly what you want to do.
WWDC Link

AVAssetReader is not ideal for realtime usage because it handles the decoding for you, and in various cases copyNextSampleBuffer can block for unpredictable amounts of time.
That being said, AVAssetReader can be used wonderfully well in a producer thread feeding a circular buffer. It depends on your required usage, but I've had good success using this method to feed a RemoteIO output, and doing my effects/signal processing in the RemoteIO callback.

Related

Audio waveform visualization in swift

This is a two part question:
Using AVAudioRecorder, is it possible to have a waveform respond to incoming audio in real time, similar to what happens when you activate Siri on the iPhone? Perhaps using averagePowerForChannel:?
Also, is there a way to gather the audio samples of a recording to render a waveform?
I know Novocaine exists, but I was hoping not to use a framework.
Does not seem possible using AVAudioRecorder by itself.
An alternative would be to use AVCaptureSession with an AVCaptureAudioDataOutput which provides access to the raw audio buffer, from which the wave form can be read.
Most of the processing would be done in the delegate:
func captureOutput(AVCaptureOutput!, didOutputSampleBuffer: CMSampleBuffer!, from: AVCaptureConnection!)
You would probably need to implement some sort of throttling to only process every Nth sample so that your visualiser code doesn't interfere with the audio.
AVCaptureSession is far more rudimentary compared to AVAudioRecorder - it does not provide any recording facilities by itself for example, and so if you wanted to also record the audio you would need to use an AVAssetWriter to save the samples.
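In Objective-C, a sketch of that delegate method might look like the following; CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer hands you the raw samples to feed your waveform view (the 16-bit assumption depends on the output's configured format):

```objc
#import <AVFoundation/AVFoundation.h>

- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    AudioBufferList bufferList;
    CMBlockBufferRef blockBuffer = NULL;

    // Expose the sample data as an AudioBufferList we can read directly.
    OSStatus status = CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        sampleBuffer, NULL, &bufferList, sizeof(bufferList), NULL, NULL,
        kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
    if (status != noErr) return;

    // Assuming 16-bit integer samples here; check the format description first.
    SInt16 *samples = (SInt16 *)bufferList.mBuffers[0].mData;
    size_t count = bufferList.mBuffers[0].mDataByteSize / sizeof(SInt16);

    SInt16 peak = 0;
    for (size_t i = 0; i < count; i++) {
        SInt16 s = samples[i] < 0 ? -samples[i] : samples[i];
        if (s > peak) peak = s;
    }
    // Hand `peak` (or the raw samples) to your visualiser, throttled as needed.

    CFRelease(blockBuffer);
}
```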
This SO question shows how to access the sample buffers. It uses AVAssetReader to load a file, but the delegate is exactly the same as would be used for realtime processing:
Reading audio samples via AVAssetReader

Why does no AVPlayerItemAudioOutput exist in AVFoundation?

AVPlayerItemVideoOutput is a subclass of AVPlayerItemOutput in AVFoundation; through it I can get the visual data in pixel buffer format (via copyPixelBufferForItemTime:) and do some processing.
However, no corresponding AVPlayerItemAudioOutput exists. How can I process the audio data?
Do I have to use the AVAssetReader class to get this?
This is a great question. -[AVPlayerItem addOutput:] mentions audio, but there is nothing to be found on it in AVPlayerItemOutput.h (unless you're meant to get audio via the AVPlayerItemLegibleOutput class - I'm only half joking: as a class that vends CMSampleBuffers, I think a hypothetical AVPlayerItemAudioOutput would look a lot like it).
So I don't know where AVPlayerItemAudioOutput is, but yes you can use AVAssetReader to get at audio data.
However if you're already using an AVPlayer, your most painless path would be using MTAudioProcessingTap to play the role of the hypothetical AVPlayerItemAudioOutput.
You can add a tap to the inputParameters of your AVPlayer's currentItem's audioMix to receive (and even modify) the audio of your chosen audio tracks.
It's probably easier to read some example code than it is to parse what I just wrote.
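As a rough sketch, assuming you already have an MTAudioProcessingTapRef created with MTAudioProcessingTapCreate, attaching it to a playing item's first audio track looks something like this (names are illustrative):

```objc
#import <AVFoundation/AVFoundation.h>

AVAsset *asset = player.currentItem.asset;
AVAssetTrack *audioTrack =
    [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];

AVMutableAudioMixInputParameters *params =
    [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
params.audioTapProcessor = tap; // the MTAudioProcessingTapRef you created

AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = @[params];
player.currentItem.audioMix = audioMix;
```

From then on, your tap's process callback receives (and may modify) the samples of that track as the AVPlayer plays it.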

What exactly is an audio queue processing tap?

These have been around in OS X for a little while now and just recently became available in iOS with iOS 6. I am trying to figure out what exactly they let you do. So the idea is you can tap into an audio queue and process the data before sending it on. Does this mean you can now intercept raw audio coming from different applications (such as the iOS music player) and process it before it plays? In other words, is inter-app audio possible? I have read over the AudioQueue.h header and can't quite figure out what to make of it.
Consider it a mid-level entry point for custom processing (e.g. an insert effect) or reading (e.g. for analysis or display purposes) of a queue's sample data - a basic interface for reading or processing an AQ's data.
Does this mean you can now intercept raw audio coming from different applications and process that (such as the iOS music player) before it plays? In other words is inter-app audio possible?
Nope - it's not inter-process; you have no access to other processes' audio queues. These taps are for your own queues' sample data. They can be used to simplify general audio render or analysis chains (the common case, by app count). My guess is that they were provided because a lot of people wanted an easier way to access this sample data for processing or analysis, and custom processing entry points on iOS can be more complicated to implement (i.e. AudioUnit availability is restricted).
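Concretely, for a queue you own, the API looks roughly like this (a sketch; `queue` is your existing AudioQueueRef, and error handling is omitted):

```objc
#import <AudioToolbox/AudioToolbox.h>

// Tap callback: pull the queue's samples, then analyse or modify them in place.
static void myTapCallback(void *inClientData, AudioQueueProcessingTapRef inAQTap,
                          UInt32 inNumberFrames, AudioTimeStamp *ioTimeStamp,
                          AudioQueueProcessingTapFlags *ioFlags,
                          UInt32 *outNumberFrames, AudioBufferList *ioData)
{
    AudioQueueProcessingTapGetSourceAudio(inAQTap, inNumberFrames, ioTimeStamp,
                                          ioFlags, outNumberFrames, ioData);
    // ioData now holds outNumberFrames frames of your own queue's audio.
}

// Attaching the tap to the queue:
UInt32 maxFrames = 0;
AudioStreamBasicDescription processingFormat;
AudioQueueProcessingTapRef tap = NULL;
OSStatus err = AudioQueueProcessingTapNew(queue, myTapCallback, NULL,
                                          kAudioQueueProcessingTap_PreEffects,
                                          &maxFrames, &processingFormat, &tap);
```

Note that the tap is created on a queue your process owns - which is exactly why inter-app interception isn't on the table.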

Using Audio Units to play several short audio files with overlap

I have run through an Audio Units tutorial for a sine wave generator and done a bit of reading, and I understand basically how it works. What I would actually like to do in my app is play a short sound file in response to some external event. These sounds would be about 1-2 seconds in duration and occur at a rate of about 1-2 per second.
Basically where I am at right now is trying to figure out how to play an actual audio file using my audio unit, rather than generating a sine wave. So basically my question is, how do I get an audio unit to play an audio file?
Do I simply read bytes from the audio file into the buffer in the render callback?
(if so what class do I need to deal with to open / convert / decompress / read the audio file)
or is there some simpler method where I could maybe just hand off the entire buffer and tell it to play?
Any names of specific classes or APIs I will need to look at to accomplish this would be very helpful.
OK, check this:
http://developer.apple.com/library/ios/samplecode/MixerHost/Introduction/Intro.html
EDIT: That is a sample project. This page has detailed instructions with inline code to setup common configurations: http://developer.apple.com/library/ios/ipad/#DOCUMENTATION/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/ConstructingAudioUnitApps/ConstructingAudioUnitApps.html#//apple_ref/doc/uid/TP40009492-CH16-SW1
If you don't mind being tied to iOS 5+, you should look into AUFilePlayer. It is much easier than using the callbacks, and you don't have to worry about setting up your own ring buffer (something you would need to do if you want to avoid loading all of your audio data into memory at start-up).
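A sketch of the AUFilePlayer route might look like this: build an AUGraph connecting a file player unit to RemoteIO, then schedule the file on the player unit (`fileURL` is the audio file to play; error handling omitted):

```objc
#import <AudioToolbox/AudioToolbox.h>

AUGraph graph;
NewAUGraph(&graph);

AudioComponentDescription playerDesc = {
    .componentType = kAudioUnitType_Generator,
    .componentSubType = kAudioUnitSubType_AudioFilePlayer,
    .componentManufacturer = kAudioUnitManufacturer_Apple,
};
AudioComponentDescription outputDesc = {
    .componentType = kAudioUnitType_Output,
    .componentSubType = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple,
};

AUNode playerNode, outputNode;
AUGraphAddNode(graph, &playerDesc, &playerNode);
AUGraphAddNode(graph, &outputDesc, &outputNode);
AUGraphOpen(graph);
AUGraphConnectNodeInput(graph, playerNode, 0, outputNode, 0);
AUGraphInitialize(graph);

AudioUnit playerUnit;
AUGraphNodeInfo(graph, playerNode, NULL, &playerUnit);

// Tell the player which file to play...
AudioFileID audioFile;
AudioFileOpenURL((__bridge CFURLRef)fileURL, kAudioFileReadPermission, 0, &audioFile);
AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduledFileIDs,
                     kAudioUnitScope_Global, 0, &audioFile, sizeof(audioFile));

// ...and schedule the whole file as one region.
UInt64 packetCount = 0;
UInt32 size = sizeof(packetCount);
AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataPacketCount,
                     &size, &packetCount);
AudioStreamBasicDescription fileFormat;
size = sizeof(fileFormat);
AudioFileGetProperty(audioFile, kAudioFilePropertyDataFormat, &size, &fileFormat);

ScheduledAudioFileRegion region = {0};
region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
region.mTimeStamp.mSampleTime = 0;
region.mAudioFile = audioFile;
region.mFramesToPlay = (UInt32)(packetCount * fileFormat.mFramesPerPacket);
AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduledFileRegion,
                     kAudioUnitScope_Global, 0, &region, sizeof(region));

// Start "now" (-1 means as soon as possible).
AudioTimeStamp startTime = {0};
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;
AudioUnitSetProperty(playerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                     kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));

AUGraphStart(graph);
```

The file player handles opening, decompression, and buffering internally, which is exactly the work you'd otherwise do by hand in a render callback.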

Record audio iOS

How does one record audio on iOS? Not recording input from the microphone - I want to be able to capture/record the audio currently playing within my app.
So, e.g. I start a recording session, and any sound that plays within my app only, I want to record it to a file?
I have done research on this but I am confused with what to use as it looks like mixing audio frameworks can cause problems?
I just want to be able to capture and save the audio playing within my application.
Well, if you're looking to record just the audio that YOUR app produces, then yes, this is very much possible.
What isn't possible, is recording all audio that is output through the speaker.
(EDIT: I just want to clarify that there is no way to record audio output produced by other applications. You can only record the audio samples that YOU produce).
If you want to record your app's audio output, you must use the RemoteIO audio unit (http://atastypixel.com/blog/using-remoteio-audio-unit/).
All you would really need to do is copy the playback buffer after you fill it.
For example:
memcpy(destinationBuffer, ioData->mBuffers[0].mData, ioData->mBuffers[0].mDataByteSize);
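One way to do that, sketched below, is a render-notify callback on the RemoteIO unit that writes each post-render buffer to a file with ExtAudioFileWriteAsync (the `captureFile` global and its setup are assumptions; you would open it with ExtAudioFileCreateWithURL and prime it with ExtAudioFileWriteAsync(captureFile, 0, NULL) before starting the unit):

```objc
#import <AudioToolbox/AudioToolbox.h>

static ExtAudioFileRef captureFile; // opened and primed before the unit starts

static OSStatus renderNotify(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber, UInt32 inNumberFrames,
                             AudioBufferList *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // ioData holds the samples just rendered; write them asynchronously
        // so the audio thread is never blocked on file I/O.
        ExtAudioFileWriteAsync(captureFile, inNumberFrames, ioData);
    }
    return noErr;
}

// Attach with: AudioUnitAddRenderNotify(remoteIOUnit, renderNotify, NULL);
```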
This is also possible by wrapping the Core Audio public utility class CAAudioUnitOutputCapturer:
http://developer.apple.com/library/mac/#samplecode/CoreAudioUtilityClasses/Introduction/Intro.html
See my reply in this question for the wrapper classes (note that you'll need to use Objective-C++ properly).
There is no public API for capturing or recording all generic audio output from an iOS app.
Check out the MixerHostAudio sample application from Apple. It's a great way to start learning about Audio Units. Once you have a grasp of that, there are many tutorials online that talk about adding recording.