I am using AppRTC, an iOS wrapper around WebRTC, and I need to record the audio of a video call so that it can be played back later if needed.
I tried AVAudioRecorder, but it only records the microphone input (not the remote audio coming out of the speaker).
How can I record the whole conversation to an audio file?
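For reference, a minimal sketch of the approach described above (session options, recorder settings, and the file name are arbitrary choices), which shows why it falls short: AVAudioRecorder is wired to the microphone, so the remote party's audio never reaches the file.

    import AVFoundation

    // What the question describes: AVAudioRecorder captures the mic only.
    let session = AVAudioSession.sharedInstance()
    try session.setCategory(.playAndRecord, mode: .voiceChat, options: [.defaultToSpeaker])
    try session.setActive(true)

    let url = FileManager.default.temporaryDirectory.appendingPathComponent("call.m4a")
    let recorder = try AVAudioRecorder(url: url, settings: [
        AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
        AVSampleRateKey: 44_100,
        AVNumberOfChannelsKey: 1
    ])
    recorder.record()  // records mic input only; the remote side is missing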
Related
I need a way to record the local participant's audio to a file. I see there are startAudioRecording and stopAudioRecording methods, but they record the audio of all participants in the call. Is there any way to achieve this without low-level audio handling?
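Not an SDK-level answer, but you can sidestep those methods and tap the microphone yourself with AVAudioEngine, which captures only the local participant. A minimal sketch (the file name and buffer size are arbitrary, and whether this coexists with the call SDK's own audio unit depends on the SDK):

    import AVFoundation

    // Record only the local mic (the local participant's voice) to a CAF file,
    // independently of whatever the call SDK records.
    let engine = AVAudioEngine()
    let input = engine.inputNode
    let format = input.outputFormat(forBus: 0)

    let url = FileManager.default.temporaryDirectory.appendingPathComponent("local.caf")
    let file = try AVAudioFile(forWriting: url, settings: format.settings)

    input.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        try? file.write(from: buffer)  // append each mic buffer to the file
    }
    try engine.start()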
I want to capture my OWN app's audio output (for example, a video currently playing in a UIWebView) and save it to a file. How can I create an AudioKit node for this purpose?
Just because your app produces audio doesn't mean you have control of that audio, for instance when you're just using web views. Another example is the speech synthesizer in iOS: the app utters the speech, but you can't direct that audio anywhere except to the user's speaker.
We're currently using the Linphone library to make VoIP calls, and it has its own solution for audio playback. However, we would like to display a visualizer, from within our own app, for the audio that Linphone is outputting. Is there a way we can intercept this data (maybe through sample buffering) in order to draw audio waves or a volume meter in the user interface?
AVAudioPlayer or AVPlayer is out of the question, since we do not have access to those objects. Is there a solution for this in AVAudioSession or Core Audio?
Only if the app producing the audio exports it via Inter-App Audio or Audiobus. Otherwise the iOS security sandbox will hide that audio output from your app.
My app generates audio from many sources: AVPlayer, Audio Units, and so on. I want to record all of that audio (not from the mic, because that would also capture the user's voice) into a single file. Is there any way to get the final mixed audio data before it is sent to the playback hardware?
I've tried Audio Units and The Amazing Audio Engine, but they could only record audio that was played through an Audio Unit.
I also read about MTAudioProcessingTap, but it has to be injected into each AVPlayer item, and mixing all the audio that way seems complicated.
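For the AVPlayer part specifically, the injection mentioned above looks roughly like this. A hedged sketch of attaching an MTAudioProcessingTap to a single player item (error handling, writing the buffers to a file, and mixing several sources are all left out):

    import AVFoundation
    import MediaToolbox

    // Attach an MTAudioProcessingTap to one AVPlayerItem so each PCM buffer
    // can be inspected (e.g. copied into a recording) before playback.
    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: { _, clientInfo, tapStorageOut in tapStorageOut.pointee = clientInfo },
        finalize: nil,
        prepare: nil,
        unprepare: nil,
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the source audio through the tap; bufferListInOut then
            // holds this item's samples, ready to be consumed.
            _ = MTAudioProcessingTapGetSourceAudio(
                tap, numberFrames, bufferListInOut, flagsOut, nil, numberFramesOut)
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    _ = MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                   kMTAudioProcessingTapCreationFlag_PostEffects, &tap)

    // Inject the tap into a player item through an AVAudioMix.
    func attachTap(_ tap: MTAudioProcessingTap, to item: AVPlayerItem) {
        guard let track = item.asset.tracks(withMediaType: .audio).first else { return }
        let params = AVMutableAudioMixInputParameters(track: track)
        params.audioTapProcessor = tap
        let mix = AVMutableAudioMix()
        mix.inputParameters = [params]
        item.audioMix = mix
    }

Every AVPlayerItem needs its own tap, which is why mixing all sources this way gets complicated; audio from other sources (Audio Units, etc.) has to be collected separately.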
I want to process the stereo output from iOS devices, no matter which application produces it, and visualize it in real time.
Is it possible to use the generic output device (or anything else) to get at the audio data that is currently being played? Maybe as an input to a RemoteIO unit?
In other words: I want to do what aurioTouch2 does (FFT only), but instead of using the microphone as the input source, I want to process everything that is coming out of the speakers at a given time.
If your own app is playing audio through the RemoteIO Audio Unit, you can capture that content. You cannot capture audio your app plays through many of the other audio APIs. And the iOS security sandbox will prevent your app from capturing audio that any other app is playing (unless that app explicitly exports its audio via the Inter-App Audio API or equivalent).
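To illustrate the first point: a full RemoteIO render-callback setup is verbose, but the same "capture what your own app plays" idea can be sketched with AVAudioEngine, assuming your app routes its playback through the engine. A tap on mainMixerNode receives the final mix:

    import AVFoundation

    // Capture your own app's output: a tap on mainMixerNode sees the mixed
    // PCM of everything the engine plays (player nodes, effects, etc.).
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: nil)

    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        // buffer.floatChannelData holds the mixed samples: feed them to an FFT
        // for a visualizer, or write them to an AVAudioFile for a recording.
    }
    try engine.start()

Audio played by other apps (the case asked about above) never reaches this tap; that boundary is exactly the sandbox restriction.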