How to record all audio generated from my app (whether from AVPlayer or other sources) in iOS? (not mic)

My app generates audio from several sources: AVPlayer, Audio Units, and so on. I want to record all of that audio (not via the mic, because that would also capture the user's voice) into a single file. Is there any way to get the final mixed audio data before it is sent to the playback hardware?
I've tried Audio Units and The Amazing Audio Engine, but they can only record audio that is played through an Audio Unit.
I've also read about MTAudioProcessingTap, but it requires injecting code into the AVPlayer item, and mixing all the audio sources that way seems complicated.
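One common answer to this: if you can route all of your playback through a single AVAudioEngine, you can install a tap on its main mixer node and write the mixed buffers to a file. This is a minimal sketch, assuming every source in your app is attached to that one engine; audio played outside the engine (e.g. by a separate AVPlayer) is not captured.

```swift
import AVFoundation

// Sketch: record everything mixed by one AVAudioEngine into a file.
// Assumes all of the app's sources (player nodes, etc.) are attached
// to this engine; audio played outside the engine is not captured.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)
engine.connect(player, to: engine.mainMixerNode, format: nil)

let mixer = engine.mainMixerNode
let format = mixer.outputFormat(forBus: 0)
let fileURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("mix.caf")
let outputFile = try AVAudioFile(forWriting: fileURL, settings: format.settings)

// The tap sees the final mix just before it reaches the hardware.
mixer.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
    try? outputFile.write(from: buffer)
}

try engine.start()
// ... schedule files/buffers on `player` and play; when finished:
// mixer.removeTap(onBus: 0); engine.stop()
```

The tap callback runs on a real-time-adjacent thread, so a production version would write to a ring buffer drained elsewhere rather than calling `AVAudioFile.write` directly in the callback.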

Related

AppRTC storing the audio locally

I am using AppRTC, which is an iOS wrapper around WebRTC.
I need to record the audio of the video call so that we can play it back later if needed.
I tried using AVAudioRecorder, but it only records microphone input, not the speaker output.
How can I record the conversation to an audio file?

The Amazing Audio Engine and AVSpeechSynthesizer

I use The Amazing Audio Engine to record my app's output audio, which is played by AVSpeechSynthesizer's speakUtterance method. I used the code provided here: Record all sounds generated by my app in a audio file (not from mic)
I get an output file, but I can't play it: the file size is always 4 KB no matter how long I record, and whether I use the .aiff or .m4a extension, iTunes cannot open it. What could be the problem?
Related question:
I was able to record the app's output using AVAudioRecorder activated with AVAudioSessionCategoryPlayAndRecord, but it also included microphone input. Is there any way to record the app's output only? Perhaps by changing the session category?
ULTIMATE GOAL:
I need to record AVSpeechSynthesizer output to an audio file, and since there is no API for this, the only way is to record the audio output as it's being played. I'm planning to have my users wear headphones while it's being played/recorded (and to warn them that no other sounds should be played while recording is happening). I found that I should use Audio Units, but I couldn't find any tutorials on the subject, and Apple's documentation is sparse.
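As an aside: the "no API for this" part was true when this was asked, but on iOS 13 and later AVSpeechSynthesizer gained a write(_:toBufferCallback:) method that renders speech straight to PCM buffers, with no need to record the output route at all. A minimal sketch:

```swift
import AVFoundation

// Sketch (requires iOS 13+): render speech directly to a file
// instead of recording it from the output route.
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, world")
var outputFile: AVAudioFile?

synthesizer.write(utterance) { buffer in
    guard let pcmBuffer = buffer as? AVAudioPCMBuffer,
          pcmBuffer.frameLength > 0 else { return }  // an empty buffer signals completion
    do {
        if outputFile == nil {
            let url = FileManager.default.temporaryDirectory
                .appendingPathComponent("speech.caf")
            outputFile = try AVAudioFile(forWriting: url,
                                         settings: pcmBuffer.format.settings)
        }
        try outputFile?.write(from: pcmBuffer)
    } catch {
        print("write failed: \(error)")
    }
}
```

Because the buffers are delivered silently, this also sidesteps the "no other sounds while recording" problem entirely.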

Record Voice from microphone and monitor with delay

What I need to do is record my voice using the microphone and simultaneously listen to what I am saying with some latency.
I have tried using AVAudioRecorder and AVAudioPlayer (starting the AVAudioPlayer, say, 1 second later to play the file from the same NSURL I am recording to), but that does not work.
Any ideas?
AVAudioRecorder and AVAudioPlayer both deal with discrete audio files.
You are going to need to deal with streaming audio. You probably want to use AVFoundation and create both an AVAssetReader and an AVAssetWriter, connect the reader to the microphone, and connect the writer to an output stream.
AVFoundation is tricky to figure out. I haven't worked with it in quite a while, and I'm no expert, so I suggest you do some digging using those specific search terms.
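Another route, sketched below under the assumption that you can use AVAudioEngine: connect the input node through an AVAudioUnitDelay to the main mixer, so you hear your own microphone signal about a second late. A tap on the input node could record at the same time. Headphones are essential here to avoid a feedback loop between mic and speaker.

```swift
import AVFoundation

// Sketch: monitor the microphone with roughly 1 s of delay.
// Wear headphones to avoid feedback between the mic and speaker.
let session = AVAudioSession.sharedInstance()
try session.setCategory(.playAndRecord)
try session.setActive(true)

let engine = AVAudioEngine()
let delay = AVAudioUnitDelay()
delay.delayTime = 1.0      // seconds of monitoring latency
delay.feedback = 0         // no echo repeats
delay.wetDryMix = 100      // output only the delayed signal

engine.attach(delay)
let inputFormat = engine.inputNode.outputFormat(forBus: 0)
engine.connect(engine.inputNode, to: delay, format: inputFormat)
engine.connect(delay, to: engine.mainMixerNode, format: inputFormat)

try engine.start()
```

Unlike the AVAudioRecorder/AVAudioPlayer attempt, everything here stays in the streaming domain, so there is no half-written file to fight with.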

Record audio streamed from AVPlayer

I'm trying to record/capture audio that's being streamed via an AVPlayer. AVAudioRecorder only records from the microphone, which might work if the audio is played through the speaker (although quality will suffer), but it definitely won't work if headphones are plugged in.
I've looked everywhere but still haven't found a solution that works for me. Would I need to grab the audio buffers? Is there another way to capture what's being played?
You can grab audio buffers by adding an MTAudioProcessingTap to your AVPlayer.
The process is a little convoluted, but there is some information out there.
The easiest approach nowadays is to play and record using AVAudioEngine.
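For the AVAudioEngine route, the idea is to play the audio through an AVAudioPlayerNode instead of AVPlayer and tap the node while it plays. A sketch, assuming the streamed asset has been downloaded to a local file first (the path below is hypothetical):

```swift
import AVFoundation

// Sketch: play a downloaded audio file through AVAudioEngine and
// capture its buffers with a tap while it plays.
let engine = AVAudioEngine()
let player = AVAudioPlayerNode()
engine.attach(player)

let sourceURL = URL(fileURLWithPath: "/path/to/downloaded-stream.m4a") // hypothetical path
let sourceFile = try AVAudioFile(forReading: sourceURL)
engine.connect(player, to: engine.mainMixerNode, format: sourceFile.processingFormat)

let captureURL = FileManager.default.temporaryDirectory
    .appendingPathComponent("capture.caf")
let captureFile = try AVAudioFile(forWriting: captureURL,
                                  settings: sourceFile.processingFormat.settings)

// The tap receives exactly the buffers this node plays.
player.installTap(onBus: 0, bufferSize: 4096,
                  format: sourceFile.processingFormat) { buffer, _ in
    try? captureFile.write(from: buffer)
}

try engine.start()
player.scheduleFile(sourceFile, at: nil)
player.play()
```

If you must keep AVPlayer (e.g. for HLS streaming), the MTAudioProcessingTap approach mentioned above remains the way to reach the buffers, at the cost of a C callback API.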

Audio Unit: Use sound output as input source

I want to process the stereo output of an iOS device, regardless of which application produces it, and visualize it in real time.
Is it possible to use the generic output device (or anything else) to get at the audio data that is currently being played? Maybe as input to a RemoteIO unit?
In other words: I want to do what aurioTouch2 does (the FFT part) but, instead of using the microphone as the input source, I want to process everything that is coming out of the speakers at a given time.
Kind regards
If your own app plays audio through the RemoteIO Audio Unit, you can capture that content. You cannot capture audio your app plays through many of the other audio APIs. The iOS security sandbox prevents your app from capturing audio that any other app is playing (unless that app explicitly exports its audio via the Inter-App Audio API or an equivalent mechanism).
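For the own-app case, one way to observe what a RemoteIO unit renders is a render-notify callback. This is only a sketch: `remoteIOUnit` is assumed to be an already configured and initialized RemoteIO AudioUnit elsewhere in your app, and the actual copying/FFT of the samples is left as a comment.

```swift
import AudioToolbox

// Sketch: observe the buffers a RemoteIO unit renders by registering
// a render-notify callback. `remoteIOUnit` is assumed to be an
// already configured and initialized RemoteIO AudioUnit.
func addOutputObserver(to remoteIOUnit: AudioUnit) {
    let renderNotify: AURenderCallback = { _, ioActionFlags, _, _, inNumberFrames, ioData in
        // Post-render phase: ioData now holds the samples just rendered.
        if ioActionFlags.pointee.contains(.unitRenderAction_PostRender),
           let bufferList = ioData {
            // Copy `inNumberFrames` frames out of `bufferList` here,
            // e.g. into a ring buffer drained by an FFT/visualizer.
            _ = UnsafeMutableAudioBufferListPointer(bufferList)
        }
        return noErr
    }
    AudioUnitAddRenderNotify(remoteIOUnit, renderNotify, nil)
}
```

The callback fires on the audio render thread, so it must not allocate, lock, or call Objective-C/Swift runtime machinery; copy the samples out and do the FFT elsewhere.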
