Whither AVCaptureAudioFileOutput on iOS?

The Internets don't seem to have an answer to this question.
In this reference page for AVCaptureFileOutput, they state that:
The concrete subclasses of AVCaptureFileOutput are
AVCaptureMovieFileOutput, which records media to a QuickTime movie
file, and AVCaptureAudioFileOutput, which writes audio media to a
variety of audio file formats.
It happens that I have an app that captures video in one feature, and audio only in another. So I am trying to set up an instance of the AVCaptureAudioFileOutput to accomplish that. However, it's not available in iOS! AVCaptureMovieFileOutput is present and accounted for; what am I supposed to do to record audio only?

Forget about AVCaptureFileOutput and its descendants; instead, use AVCaptureAudioDataOutput to capture audio buffers, which you then write to an audio file (e.g. M4A or WAV) using an AVAssetWriter.
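A minimal sketch of that pipeline might look like the following (the output settings, queue label, and force-unwraps are illustrative shortcuts, not requirements):

```swift
import AVFoundation

// Sketch: capture microphone audio buffers and write them to an M4A file.
final class AudioFileRecorder: NSObject, AVCaptureAudioDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private var writer: AVAssetWriter!
    private var writerInput: AVAssetWriterInput!

    func start(to url: URL) throws {
        let mic = AVCaptureDevice.default(for: .audio)!
        session.addInput(try AVCaptureDeviceInput(device: mic))

        let output = AVCaptureAudioDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "audio.capture"))
        session.addOutput(output)

        writer = try AVAssetWriter(outputURL: url, fileType: .m4a)
        writerInput = AVAssetWriterInput(mediaType: .audio, outputSettings: [
            AVFormatIDKey: kAudioFormatMPEG4AAC,
            AVNumberOfChannelsKey: 1,
            AVSampleRateKey: 44_100,
        ])
        writerInput.expectsMediaDataInRealTime = true
        writer.add(writerInput)
        session.startRunning()
    }

    func stop() {
        session.stopRunning()
        writerInput.markAsFinished()
        writer.finishWriting { /* the file at outputURL is now complete */ }
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Start the writer's timeline at the first buffer's timestamp.
        if writer.status == .unknown {
            writer.startWriting()
            writer.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
        }
        if writerInput.isReadyForMoreMediaData {
            writerInput.append(sampleBuffer)
        }
    }
}
```

This assumes the audio session and microphone permission are already configured; in production code you'd also check `canAddInput`/`canAddOutput` before adding.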

Related

Adding an external audio track to HLS video

I am using AVPlayer to play an HLS video. The video has no sound. I also have an audio track URL in the same m3u8 format. Can I somehow change the AVPlayer item's asset, or do something similar while my soundless video is playing, to add the other audio track so that they are played together?
Disappointingly, you can't create an AVComposition from non-local video and audio tracks and play that.
But HLS is at its heart a sort of textual playlist consisting of media pieces that can be played either serially or concurrently. If you inspect both your video and audio m3u8 streams, you should be able to concoct a new single m3u8 stream that includes both video and audio.
HOWEVER, disappointingly, it seems you can't play this resulting stream as a local file (why!?!), so you'd have to set up an HTTP server to serve it to you, either locally or from afar, or maybe (!?) you could avoid all that with a clever use of AVAssetResourceLoaderDelegate.
Synchronising two AVPlayers also seems to be unsupported, although perhaps that situation has improved.
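For illustration, a hand-concocted master playlist that pairs a video-only stream with a separate audio rendition might look like this (the URIs, bandwidth, and codec strings are placeholders for whatever your actual streams use):

```
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aud",NAME="Main Audio",DEFAULT=YES,URI="https://example.com/audio/index.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2000000,CODECS="avc1.4d401f,mp4a.40.2",AUDIO="aud"
https://example.com/video/index.m3u8
```

The `AUDIO="aud"` attribute on the variant stream ties it to the `EXT-X-MEDIA` audio rendition group, so a single AVPlayer plays both together.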

Can I use AVAudioRecorder for sending a stream of recordings to a server?

I am trying to write a program for live-streaming audio to a server. Does AVAudioRecorder have streaming functionality, or should I use another framework? I would prefer to use Apple's built-in frameworks.
I have used AVCaptureSession for streaming audio, coupled with an AVCaptureDevice as the audio input device and AVCaptureAudioDataOutput as the output, which in turn calls the AVCaptureAudioDataOutputSampleBufferDelegate methods and delivers the data as a buffered stream.
According to this document, you have to initialize an AVAudioRecorder with a file path, which means that if you want to do live streaming, you have to either wait for the current recording to finish or initialize a new AVAudioRecorder with another path.
I would recommend creating multiple AVAudioRecorder instances and running each one based on the size of the audio chunk. (You can also divide the chunks by time, but make sure your buffer is large enough to hold them all.)
Then upload the previous chunks and start a new instance to keep the recording going.
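That alternating-recorder idea might be sketched like this (`ChunkedRecorder`, `upload(_:)`, and the chunk file names are hypothetical; in practice you would drive `startNextChunk()` from a timer or a size check, and the audio session is assumed to be configured for recording):

```swift
import AVFoundation

// Sketch: record audio in short chunks and upload each finished chunk.
final class ChunkedRecorder {
    private var recorder: AVAudioRecorder?
    private var chunkIndex = 0
    private let settings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1,
    ]

    func startNextChunk() throws {
        recorder?.stop()  // finalizes the previous file so it can be uploaded
        if chunkIndex > 0 { upload(url(for: chunkIndex - 1)) }
        recorder = try AVAudioRecorder(url: url(for: chunkIndex), settings: settings)
        _ = recorder?.record()
        chunkIndex += 1
    }

    private func url(for index: Int) -> URL {
        FileManager.default.temporaryDirectory.appendingPathComponent("chunk\(index).m4a")
    }

    private func upload(_ url: URL) { /* POST the finished chunk to your server */ }
}
```

Note that stopping one recorder and starting the next leaves a small gap between chunks, which is one reason the buffer-based AVCaptureAudioDataOutput approach is better suited to gapless streaming.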

Why does no AVPlayerItemAudioOutput exist in AVFoundation?

AVPlayerItemVideoOutput is a subclass of AVPlayerItemOutput in AVFoundation; with it I can get the visual data as pixel buffers (through copyPixelBufferForItemTime:) and process it.
However, no corresponding AVPlayerItemAudioOutput exists. How can I process the audio data?
Do I have to use the AVAssetReader class to get at it?
This is a great question. -[AVPlayerItem addOutput:] mentions audio, but there is nothing to be found on it in AVPlayerItemOutput.h (unless you're meant to get audio via the AVPlayerItemLegibleOutput class; I'm only half joking, because as a class that vends CMSampleBuffers, I think a hypothetical AVPlayerItemAudioOutput would look a lot like it).
So I don't know where AVPlayerItemAudioOutput is, but yes you can use AVAssetReader to get at audio data.
However, if you're already using an AVPlayer, your most painless path would be to use MTAudioProcessingTap to play the role of the hypothetical AVPlayerItemAudioOutput.
You can add a tap to the inputParameters of your AVPlayer's currentItem's audioMix to receive (and even modify) the audio of your chosen audio tracks.
It's probably easier to read some example code than it is to parse what I just wrote.
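In that spirit, here is a rough sketch, assuming an AVPlayerItem whose asset has at least one audio track (this tap just pulls the source audio through unchanged; your processing would go in the process callback):

```swift
import AVFoundation
import MediaToolbox

// Sketch: attach an MTAudioProcessingTap to a player item's audio mix so the
// process callback receives (and can modify) the item's decoded audio buffers.
func installTap(on item: AVPlayerItem) {
    guard let audioTrack = item.asset.tracks(withMediaType: .audio).first else { return }

    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: nil, finalize: nil, prepare: nil, unprepare: nil,
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the source audio into bufferListInOut; inspect or modify it here.
            MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                               flagsOut, nil, numberFramesOut)
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                               kMTAudioProcessingTapCreationFlag_PostEffects, &tap)

    let params = AVMutableAudioMixInputParameters(track: audioTrack)
    params.audioTapProcessor = tap?.takeRetainedValue()

    let mix = AVMutableAudioMix()
    mix.inputParameters = [params]
    item.audioMix = mix
}
```

One tap per input-parameters object, one input-parameters object per audio track you want to hear.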

Is it possible to add an additional audio track to a video file in iOS?

I'm creating an app where I want the ability to record a video (and sound), and then play it back while recording audio. After this, I'd be left with a video file (containing audio) and a separate audio file (completely different from the video's audio track).
Is it possible to use AVMutableCompositionTrack to compose a new video file containing one video track and two separate audio tracks, and then use AVAssetExportSession to export this to one single standalone video file which keeps these audio tracks separated? What I hope to achieve is that the user can later watch this video file and choose whether one or both audio tracks should play. I know of the possibility of using multiple AVAssets to synchronize playback from different audio/video files, but I'm wondering if I can create one file containing separable audio tracks, and then later treat each audio track as an AVAsset to control the syncing.
I know some video formats/codecs have the ability to change the audio language, even within one single file. I also know AVFoundation has great support for handling tracks. What I don't know is whether these are compatible with each other: can AVFoundation handle separate tracks from within one single file? And are there any container formats with support for such tracks on iOS (e.g. .mp4, .mov, ...)?
I imagine "problems" would occur if a standard video player tried to play this resulting movie file (possibly playing the video with only the first audio track), but since I can already assume there are two (or more) audio tracks, could it be done?
Is this possible in any way?
Yes, it is possible to create a video file with multiple audio tracks by using AVAssetWriterInputGroup. The reference says:
Use this class to associate tracks corresponding to multiple AVAssetWriterInput instances as mutually exclusive to each other for playback or other processing.
For example, if you are creating an asset with multiple audio tracks using different spoken languages—and only one track should be played at a time—group the inputs corresponding to those tracks into a single instance of AVAssetWriterInputGroup and add the group to the AVAssetWriter instance using the AVAssetWriter method add(_:).
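Putting that together, a sketch might look like this (the `outputURL`, file type, and settings are placeholder choices; the video input and sample-appending loop are omitted):

```swift
import AVFoundation

// Sketch: write one QuickTime movie with two mutually exclusive audio tracks.
func makeWriter(outputURL: URL) throws -> AVAssetWriter {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

    let audioSettings: [String: Any] = [
        AVFormatIDKey: kAudioFormatMPEG4AAC,
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 2,
    ]
    let originalAudio = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
    let commentaryAudio = AVAssetWriterInput(mediaType: .audio, outputSettings: audioSettings)
    writer.add(originalAudio)
    writer.add(commentaryAudio)

    // Group the two audio inputs as alternatives; players start with defaultInput.
    let group = AVAssetWriterInputGroup(inputs: [originalAudio, commentaryAudio],
                                        defaultInput: originalAudio)
    writer.add(group)
    return writer
}
```

When the exported file is played back, the grouped tracks show up as selectable alternatives (an AVMediaSelectionGroup), rather than being mixed together.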

Record audio iOS

How does one record audio on iOS? Not recording input from the microphone; I want to be able to capture/record the audio currently playing within my app.
So, for example, I start a recording session, and I want any sound that plays within my app only to be recorded to a file.
I have done research on this, but I am confused about what to use, as it looks like mixing audio frameworks can cause problems.
I just want to be able to capture and save the audio playing within my application.
Well, if you're looking to record just the audio that YOUR app produces, then yes, this is very much possible.
What isn't possible, is recording all audio that is output through the speaker.
(EDIT: I just want to clarify that there is no way to record audio output produced by other applications. You can only record the audio samples that YOU produce).
If you want to record your app's audio output, you must use the Remote I/O audio unit (http://atastypixel.com/blog/using-remoteio-audio-unit/).
All you would really need to do is copy the playback buffer after you fill it, e.g. in the render callback (where destBuffer is your own destination buffer):
memcpy(destBuffer, ioData->mBuffers[0].mData, ioData->mBuffers[0].mDataByteSize);
This is also possible by wrapping CAAudioUnitOutputCapturer, one of the Core Audio public utility classes:
http://developer.apple.com/library/mac/#samplecode/CoreAudioUtilityClasses/Introduction/Intro.html
See my reply to this question for the wrapper classes: Properly use Objective-C++.
There is no public API for capturing or recording all generic audio output from an iOS app.
Check out the MixerHostAudio sample application from Apple. It's a great way to start learning about Audio Units. Once you have a grasp of that, there are many tutorials online that talk about adding recording.
