Why does no AVPlayerItemAudioOutput exist in AVFoundation? (iOS)

AVPlayerItemVideoOutput is a subclass of AVPlayerItemOutput in AVFoundation; with it I can get the visual data as pixel buffers (through copyPixelBufferForItemTime:itemTimeForDisplay:) and do some processing on them.
However, there is no corresponding AVPlayerItemAudioOutput. How can I process the audio data?
Do I have to use the AVAssetReader class to get it?

This is a great question. -[AVPlayerItem addOutput:] mentions audio, but there is nothing to be found on it in AVPlayerItemOutput.h (unless you're meant to get audio via the AVPlayerItemLegibleOutput class - I'm only half joking: as a class that vends CMSampleBuffers, a hypothetical AVPlayerItemAudioOutput would probably look a lot like it).
So I don't know where AVPlayerItemAudioOutput is, but yes you can use AVAssetReader to get at audio data.
However, if you're already using an AVPlayer, your most painless path is to have MTAudioProcessingTap play the role of the hypothetical AVPlayerItemAudioOutput.
You can add a tap to the input parameters of the audioMix on your AVPlayer's currentItem to receive (and even modify) the audio of your chosen audio tracks.
It's probably easier to read some example code than it is to parse what I just wrote.
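Here is a minimal sketch of that wiring, assuming an AVPlayerItem you are about to play and an asset with at least one audio track; the callback bodies are deliberately bare, and the function names (attachAudioTap, tapProcess, and so on) are just placeholders:

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>
#import <MediaToolbox/MediaToolbox.h>

static void tapInit(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut) {
    *tapStorageOut = clientInfo; // stash any per-tap state here
}

static void tapFinalize(MTAudioProcessingTapRef tap) {}

static void tapPrepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames,
                       const AudioStreamBasicDescription *processingFormat) {
    // Allocate buffers sized for maxFrames and note the processing format here.
}

static void tapUnprepare(MTAudioProcessingTapRef tap) {}

static void tapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) {
    // Pull the decoded source audio; afterwards bufferListInOut holds the samples,
    // which you can inspect or modify in place before they are played.
    MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                       flagsOut, NULL, numberFramesOut);
}

static void attachAudioTap(AVPlayerItem *playerItem, AVAsset *asset) {
    AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    if (!audioTrack) return;

    MTAudioProcessingTapCallbacks callbacks = {
        .version = kMTAudioProcessingTapCallbacksVersion_0,
        .clientInfo = NULL,
        .init = tapInit,
        .finalize = tapFinalize,
        .prepare = tapPrepare,
        .unprepare = tapUnprepare,
        .process = tapProcess,
    };

    MTAudioProcessingTapRef tap = NULL;
    OSStatus err = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                              kMTAudioProcessingTapCreationFlag_PostEffects, &tap);
    if (err != noErr || tap == NULL) return;

    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
    params.audioTapProcessor = tap;

    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    audioMix.inputParameters = @[ params ];
    playerItem.audioMix = audioMix;

    CFRelease(tap); // the audio mix keeps its own reference
}

You would call attachAudioTap(item, asset) before starting (or while) playing the item; the process callback starts receiving buffers once playback begins.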

Related

Recording output audio with Swift

Is it possible to record output audio in an app using Swift? So, for example, say I'm listening to a podcast, and I want to, within a separate app, record a small segment of the podcast's audio. Is there any way to do that?
I've looked around but have only been able to find information on recording from the microphone and the like.
It depends on how you are producing the audio. If the production of the audio is within your control, you can put a tap on the output and record to a file as it plays. The easiest way is with the AVAudioEngine API (there are other ways, but AVAudioEngine is basically an easy front end for them); see the sketch after this answer.
Of course, if the real problem is to take a copy of a podcast, then obviously all you have to do is download the podcast as opposed to listening to it. Similarly, you could buffer and save streaming audio to a file. There are many apps that do this. But this is not because the device's output is being hijacked; it is, again, because we have control of the sound data itself.
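For the "audio produced inside your own app" case, here is a minimal sketch assuming playback already goes through an AVAudioEngine. It taps the main mixer and writes everything that flows through it to a file; the .caf extension and the 4096-frame buffer size are arbitrary choices, not requirements.

#import <AVFoundation/AVFoundation.h>

static AVAudioFile *startRecordingEngineOutput(AVAudioEngine *engine, NSURL *fileURL) {
    // fileURL should point at a writable location, e.g. something ending in .caf
    AVAudioMixerNode *mixer = engine.mainMixerNode;
    AVAudioFormat *format = [mixer outputFormatForBus:0];

    NSError *error = nil;
    AVAudioFile *file = [[AVAudioFile alloc] initForWriting:fileURL
                                                   settings:format.settings
                                                      error:&error];
    if (!file) { NSLog(@"Could not create file: %@", error); return nil; }

    // Every buffer the mixer renders is also handed to this block.
    [mixer installTapOnBus:0
                bufferSize:4096
                    format:format
                     block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        NSError *writeError = nil;
        if (![file writeFromBuffer:buffer error:&writeError]) {
            NSLog(@"Write failed: %@", writeError);
        }
    }];
    return file;
}

// Stop recording with: [engine.mainMixerNode removeTapOnBus:0];

This only captures audio your own engine renders; it does not see other apps' output.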
I believe you'd have to write a kernel extension to do that:
https://developer.apple.com/library/mac/documentation/Darwin/Conceptual/KEXTConcept/KEXTConceptIOKit/iokit_tutorial.html
You'd have to make your own audio driver to record it. It appears that's how Soundflowerbed does it on the Mac (see this Softonic write-up):
http://features.en.softonic.com/how-to-record-internal-sound-on-a-mac

Redirecting the playback output of an AVPlayer item

What I want to do is take the output samples of an AVAsset corresponding to an audio file (no video involved), send them to an audio effect class that takes in a block of samples, and do this in real time.
I am currently looking at the AVFoundation class reference and programming guide, but I can't see a way to redirect the output of a player item and send it to my effect class, and from there send the transformed samples to an audio output (using AVAssetReaderAudioMixOutput?) and hear them. I see that the AVAssetReader class gives me a way to get a block of samples using
[myAVAssetReader addOutput:myAVAssetReaderTrackOutput];
[myAVAssetReaderTrackOutput copyNextSampleBuffer];
but Apple's documentation specifies that the AVAssetReader class is not designed for, and should not be used in, real-time situations. Does anybody have a suggestion on where to look, or whether I'm taking the right approach?
MTAudioProcessingTap is perfect for this. By leveraging an AVPlayer, you can avoid having to pull the sample blocks yourself with an AVAssetReaderOutput and then render them yourself in an Audio Queue or with an Audio Unit.
Instead, attach an MTAudioProcessingTap to the input parameters of your player item's audioMix, and you'll be given samples in blocks that are easy to then throw into an effect unit.
Another benefit of this is that it works with AVAssets derived from URLs that can't always be opened by other Apple APIs (like Audio File Services), such as items from the user's iPod library. Additionally, you get functionality like tolerance of audio interruptions for free from the AVPlayer, which you would otherwise have to implement by hand with an AVAssetReader solution.
To set up a tap you have to set up some callbacks that the system invokes as appropriate during playback. Full code for such processing can be found in the tutorial linked here.
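As a rough sketch of what those callbacks can look like, here is a prepare/process pair with a trivial gain change standing in for a real effect. It assumes the tap's processing format is non-interleaved Float32 (verify that against the AudioStreamBasicDescription handed to prepare); these functions would be registered through the MTAudioProcessingTapCallbacks struct passed to MTAudioProcessingTapCreate.

#import <AudioToolbox/AudioToolbox.h>
#import <MediaToolbox/MediaToolbox.h>

static void TapPrepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames,
                       const AudioStreamBasicDescription *processingFormat) {
    // Check processingFormat here (sample rate, Float32, interleaving) and size any
    // scratch buffers to maxFrames.
}

static void TapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags, AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut, MTAudioProcessingTapFlags *flagsOut) {
    // Pull the decoded source audio into bufferListInOut.
    OSStatus status = MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                                         flagsOut, NULL, numberFramesOut);
    if (status != noErr) return;

    // "Effect": halve the volume of every channel in place, assuming Float32 samples.
    for (UInt32 i = 0; i < bufferListInOut->mNumberBuffers; i++) {
        float *samples = (float *)bufferListInOut->mBuffers[i].mData;
        UInt32 count = bufferListInOut->mBuffers[i].mDataByteSize / sizeof(float);
        for (UInt32 n = 0; n < count; n++) {
            samples[n] *= 0.5f;
        }
    }
}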
There's a new MTAudioProcessingTap object in iOS 6 and OS X 10.8. Check out the Session 517 video from WWDC 2012 - they demonstrate exactly what you want to do.
WWDC Link
AVAssetReader is not ideal for realtime usage because it handles the decoding for you, and in various cases copyNextSampleBuffer can block for random amounts of time.
That being said, AVAssetReader can be used wonderfully well in a producer thread feeding a circular buffer. It depends on your required usage, but I've had good success using this method to feed a RemoteIO output, and doing my effects/signal processing in the RemoteIO callback.
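A rough sketch of that producer side follows, assuming a hypothetical ringBufferWrite() backed by whatever circular buffer implementation you prefer; the RemoteIO render callback would be the consumer that drains it.

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Placeholder: replace with your circular buffer's write call.
static void ringBufferWrite(const void *bytes, size_t length) { /* ... */ }

static void startAudioProducer(AVAsset *asset) {
    NSError *error = nil;
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];
    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    if (!reader || !track) return;

    // Ask for decoded linear PCM so the consumer never has to touch a codec.
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track
                                                   outputSettings:@{ AVFormatIDKey : @(kAudioFormatLinearPCM) }];
    [reader addOutput:output];
    [reader startReading];

    dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
        CMSampleBufferRef sample = NULL;
        while ((sample = [output copyNextSampleBuffer])) {
            CMBlockBufferRef block = CMSampleBufferGetDataBuffer(sample);
            size_t length = CMBlockBufferGetDataLength(block);
            void *bytes = malloc(length);
            if (bytes && CMBlockBufferCopyDataBytes(block, 0, length, bytes) == kCMBlockBufferNoErr) {
                // In real code, block here when the ring buffer is full so the
                // producer stays only slightly ahead of the render callback.
                ringBufferWrite(bytes, length);
            }
            free(bytes);
            CFRelease(sample);
        }
        // Check reader.status here to distinguish completion from failure.
    });
}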

Record audio iOS

How does one record audio on iOS? Not input from the microphone - I want to be able to capture/record the audio currently playing within my app.
So, e.g., I start a recording session, and any sound that plays within my app (and only my app) gets recorded to a file.
I have done research on this but I am confused about what to use, as it looks like mixing audio frameworks can cause problems.
I just want to be able to capture and save the audio playing within my application.
Well, if you're looking to record just the audio that YOUR app produces, then yes, this is very much possible.
What isn't possible, is recording all audio that is output through the speaker.
(EDIT: I just want to clarify that there is no way to record audio output produced by other applications. You can only record the audio samples that YOU produce).
If you want to record your app's audio output, you must use the RemoteIO audio unit (http://atastypixel.com/blog/using-remoteio-audio-unit/).
All you really need to do is copy the playback buffer after you fill it.
For example (where dest is your own destination buffer):
memcpy(dest, ioData->mBuffers[0].mData, ioData->mBuffers[0].mDataByteSize);
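A slightly fuller, hedged sketch of the same idea: register a render-notify callback on the RemoteIO unit and, after each render cycle (kAudioUnitRenderAction_PostRender), hand the freshly filled output buffers to an ExtAudioFile. The gCaptureFile global, and the RemoteIO unit itself, are assumed to be set up elsewhere in your audio code.

#import <AudioToolbox/AudioToolbox.h>
#import <AudioUnit/AudioUnit.h>

static ExtAudioFileRef gCaptureFile; // opened elsewhere with ExtAudioFileCreateWithURL

static OSStatus CaptureRenderNotify(void *inRefCon,
                                    AudioUnitRenderActionFlags *ioActionFlags,
                                    const AudioTimeStamp *inTimeStamp,
                                    UInt32 inBusNumber,
                                    UInt32 inNumberFrames,
                                    AudioBufferList *ioData) {
    // Only copy after RemoteIO has filled the output bus for this cycle.
    if ((*ioActionFlags & kAudioUnitRenderAction_PostRender) && inBusNumber == 0 && ioData != NULL) {
        // The async variant keeps the actual file I/O off the realtime render thread;
        // prime it once at setup with ExtAudioFileWriteAsync(gCaptureFile, 0, NULL).
        ExtAudioFileWriteAsync(gCaptureFile, inNumberFrames, ioData);
    }
    return noErr;
}

// Registration, once the RemoteIO unit (remoteIOUnit) exists:
//   AudioUnitAddRenderNotify(remoteIOUnit, CaptureRenderNotify, NULL);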
This is possible by wrapping the Core Audio public utility class CAAudioUnitOutputCapturer:
http://developer.apple.com/library/mac/#samplecode/CoreAudioUtilityClasses/Introduction/Intro.html
See my reply to the question "Properly use Objective C++" for the wrapper classes.
There is no public API for capturing or recording all generic audio output from an iOS app.
Check out the MixerHostAudio sample application from Apple. It's a great way to start learning about Audio Units. Once you have a grasp of that, there are many tutorials online that talk about adding recording.

Whither AVCaptureAudioFileOutput on iOS?

The Internets don't seem to have an answer to this question.
In this reference page for AVCaptureFileOutput, they state that:
The concrete subclasses of AVCaptureFileOutput are AVCaptureMovieFileOutput, which records media to a QuickTime movie file, and AVCaptureAudioFileOutput, which writes audio media to a variety of audio file formats.
It happens that I have an app that captures video in one feature, and audio only in another. So I am trying to set up an instance of the AVCaptureAudioFileOutput to accomplish that. However, it's not available in iOS! AVCaptureMovieFileOutput is present and accounted for; what am I supposed to do to record audio only?
Forget about AVCaptureFileOutput and its descendants on iOS; instead, use AVCaptureAudioDataOutput to capture audio sample buffers, which you then write to an audio file (e.g. M4A or WAV) using an AVAssetWriter.
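A condensed sketch of that approach, with error handling and teardown trimmed; the class name, queue label, and AAC settings are arbitrary choices, not requirements.

#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

@interface AudioOnlyRecorder : NSObject <AVCaptureAudioDataOutputSampleBufferDelegate>
@property (nonatomic, strong) AVCaptureSession *session;
@property (nonatomic, strong) AVAssetWriter *writer;
@property (nonatomic, strong) AVAssetWriterInput *writerInput;
@property (nonatomic) BOOL sessionStarted;
@end

@implementation AudioOnlyRecorder

- (BOOL)startRecordingToURL:(NSURL *)url {
    NSError *error = nil;

    // Capture side: microphone -> AVCaptureAudioDataOutput.
    self.session = [[AVCaptureSession alloc] init];
    AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:mic error:&error];
    if (!input || ![self.session canAddInput:input]) return NO;
    [self.session addInput:input];

    AVCaptureAudioDataOutput *output = [[AVCaptureAudioDataOutput alloc] init];
    [output setSampleBufferDelegate:self queue:dispatch_queue_create("audio.capture", NULL)];
    if (![self.session canAddOutput:output]) return NO;
    [self.session addOutput:output];

    // Writer side: AAC into an .m4a container.
    self.writer = [AVAssetWriter assetWriterWithURL:url fileType:AVFileTypeAppleM4A error:&error];
    NSDictionary *settings = @{ AVFormatIDKey : @(kAudioFormatMPEG4AAC),
                                AVNumberOfChannelsKey : @1,
                                AVSampleRateKey : @44100.0,
                                AVEncoderBitRateKey : @64000 };
    self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio
                                                          outputSettings:settings];
    self.writerInput.expectsMediaDataInRealTime = YES;
    if (![self.writer canAddInput:self.writerInput]) return NO;
    [self.writer addInput:self.writerInput];

    [self.session startRunning];
    return YES;
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection {
    if (!self.sessionStarted) {
        // Anchor the writer's timeline to the first captured buffer.
        [self.writer startWriting];
        [self.writer startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
        self.sessionStarted = YES;
    }
    if (self.writerInput.readyForMoreMediaData) {
        [self.writerInput appendSampleBuffer:sampleBuffer];
    }
}

- (void)stopWithCompletion:(void (^)(void))completion {
    [self.session stopRunning];
    [self.writerInput markAsFinished];
    [self.writer finishWritingWithCompletionHandler:completion];
}

@end

Start it with startRecordingToURL: and call stopWithCompletion: when you are done; finishWritingWithCompletionHandler: finalizes the M4A file.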

How do you write audio to the first frame with AVAssetWriter while capturing video/audio on iOS?

Long story short, I am trying to implement a naive solution for streaming video from the iOS camera/microphone to a server.
I am using AVCaptureSession with audio and video AVCaptureOutputs, and then using AVAssetWriter/AVAssetWriterInput to capture video and audio in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method and write the resulting video to a file.
To make this a stream, I am using an NSTimer to break the video files into 1 second chunks (by hot-swapping in a different AVAssetWriter that has a different outputURL) and upload these to a server over HTTP.
This is working, but the issue I'm running into is this: the beginning of the .mp4 files appear to always be missing audio in the first frame, so when the video files are concatenated on the server (running ffmpeg) there is a noticeable audio skip at the intersections of these files. The video is just fine - no skipping.
I tried many ways of making sure there were no CMSampleBuffers dropped and checked their timestamps to make sure they were going to the right AVAssetWriter, but to no avail.
I checked the AVCam example (which uses AVCaptureMovieFileOutput) and the AVCaptureLocation example (which uses AVAssetWriter), and it appears the files they generate do the same thing.
Maybe there is something fundamental I am misunderstanding here about the nature of audio/video files, as I'm new to video/audio capture - but I thought I'd check before trying to work around this by learning to use ffmpeg to fragment the stream, as some seem to do (if you have any tips on this, too, let me know!). Thanks in advance!
I had the same problem and solved it by recording the audio with a different API, Audio Queue Services. That seems to fix it; you just need to take care of the timing in order to avoid audio delay.
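For reference, a bare-bones sketch of that Audio Queue input path: each callback delivers a buffer together with a host timestamp (inStartTime), which is what makes it easier to line the audio up with the one-second video chunks. forwardAudioChunk() is a placeholder for however you hand the bytes to your writer, and the 16-bit mono 44.1 kHz format is just an example.

#import <AudioToolbox/AudioToolbox.h>

// Placeholder: hand the timestamped PCM to whatever is writing your chunks.
static void forwardAudioChunk(const void *bytes, UInt32 byteCount, const AudioTimeStamp *ts) { /* ... */ }

static void InputCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer,
                          const AudioTimeStamp *inStartTime, UInt32 inNumPackets,
                          const AudioStreamPacketDescription *inPacketDescs) {
    forwardAudioChunk(inBuffer->mAudioData, inBuffer->mAudioDataByteSize, inStartTime);
    AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL); // recycle the buffer
}

static AudioQueueRef startInputQueue(void) {
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = 44100.0;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mBytesPerPacket   = 2;
    fmt.mFramesPerPacket  = 1;

    AudioQueueRef queue = NULL;
    if (AudioQueueNewInput(&fmt, InputCallback, NULL, NULL, NULL, 0, &queue) != noErr) return NULL;

    for (int i = 0; i < 3; i++) {            // keep a few buffers in flight
        AudioQueueBufferRef buffer = NULL;
        AudioQueueAllocateBuffer(queue, 8192, &buffer);
        AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
    }
    AudioQueueStart(queue, NULL);
    return queue;
}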
