Hey, I'm a new developer working in Objective-C. I'm trying to record the audio coming out of the iPhone speakers. I can capture audio with the microphone and record it, but I cannot record the audio my iPhone itself is producing. Please help me.
Unfortunately, there is no way to directly capture from the "audio bus". You can capture audio via the internal microphone or a headset microphone, but that's it. If your app is rendering the audio itself, you could also write that audio out to a file at the same time (see the sketch below). That's pretty much your only option.
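If your app does its own playback through AVAudioEngine (iOS 8 and later), a minimal sketch of that write-while-rendering idea looks like the following. The file path and player setup are placeholders; the tap only ever sees audio your own engine renders:

```objc
#import <AVFoundation/AVFoundation.h>

AVAudioEngine *engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode *player = [[AVAudioPlayerNode alloc] init];
[engine attachNode:player];
[engine connect:player to:engine.mainMixerNode format:nil];

// Everything your app plays passes through the main mixer, so tap it there
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"output.caf"];
NSURL *fileURL = [NSURL fileURLWithPath:path];
AVAudioFormat *format = [engine.mainMixerNode outputFormatForBus:0];
NSError *error = nil;
AVAudioFile *outputFile = [[AVAudioFile alloc] initForWriting:fileURL
                                                     settings:format.settings
                                                        error:&error];

[engine.mainMixerNode installTapOnBus:0
                           bufferSize:4096
                               format:format
                                block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
    // Each rendered buffer is appended to the file as it plays
    NSError *writeError = nil;
    [outputFile writeFromBuffer:buffer error:&writeError];
}];

[engine startAndReturnError:&error];
// ...schedule files or buffers on the player and play as usual...
```

Call removeTapOnBus:0 on the mixer when you're done so the file stops growing.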
Yes, you only get a handle on the audio generated by your own process. There is no way to get at the audio generated by the rest of the system.
I'm using The Amazing Audio Engine to record the output audio of my app, which is played via AVSpeechSynthesizer's speakUtterance method. I used the code provided here: Record all sounds generated by my app in a audio file (not from mic)
I get an output file, but I can't play it: the file size is always 4 KB no matter how long I record, and I've tried .aiff and .m4a extensions, but iTunes cannot open either. What could be the problem?
Related question:
I was able to record the app's output using AVAudioRecorder activated with AVAudioSessionCategoryPlayAndRecord, but the recording included input from the microphone. Is there any way to record the app's output only? Perhaps by changing the session?
ULTIMATE GOAL:
I need to record AVSpeechSynthesizer to an audio file, and since there is no API for this, the only way is to record the audio output as it's being played. I plan to have my users wear headphones while it's being played/recorded (and warn them that no other sounds should be played while recording is happening). I found that I should use Audio Units, but I couldn't find any tutorials on the subject, and Apple's manuals are very sparse.
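For what it's worth, iOS 13 eventually added an API for exactly this goal: AVSpeechSynthesizer's writeUtterance:toBufferCallback: delivers the synthesized PCM buffers directly, without playing them through the speaker at all. A minimal sketch, with a placeholder output path:

```objc
#import <AVFoundation/AVFoundation.h>

AVSpeechSynthesizer *synthesizer = [[AVSpeechSynthesizer alloc] init];
AVSpeechUtterance *utterance =
    [AVSpeechUtterance speechUtteranceWithString:@"Hello, world."];

NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"speech.caf"];
NSURL *fileURL = [NSURL fileURLWithPath:path];
__block AVAudioFile *outputFile = nil;

[synthesizer writeUtterance:utterance toBufferCallback:^(AVAudioBuffer *buffer) {
    AVAudioPCMBuffer *pcmBuffer = (AVAudioPCMBuffer *)buffer;
    if (pcmBuffer.frameLength == 0) {
        return; // a zero-length buffer signals the end of the utterance
    }
    NSError *error = nil;
    if (outputFile == nil) {
        // Create the file lazily so its settings match the synthesizer's format
        outputFile = [[AVAudioFile alloc] initForWriting:fileURL
                                                settings:pcmBuffer.format.settings
                                                   error:&error];
    }
    [outputFile writeFromBuffer:pcmBuffer error:&error];
}];
```

On older systems, the record-the-output-while-it-plays approach you describe is the only route.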
I'm trying to record/capture audio that's being streamed via an AVPlayer. AVAudioRecorder only records from the microphone, which might work if the audio is played through the speaker (although quality will suffer), but it definitely won't work if headphones are plugged in.
I've looked everywhere but still haven't found an approach that works for me. Would I need to grab the audio buffers? Is there another way to capture what's being played?
You can grab audio buffers by adding an MTAudioProcessingTap to your AVPlayer.
The process is a little convoluted, but there is some information out there.
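Very roughly, the setup looks like the sketch below, assuming an AVPlayerItem whose asset actually exposes an audio track (HLS streams often expose none, which is one of the rough edges). tapProcess is where you'd copy the PCM out to a file or ring buffer; error handling is trimmed:

```objc
#import <AVFoundation/AVFoundation.h>
#import <MediaToolbox/MediaToolbox.h>

// C callbacks required by MTAudioProcessingTap
static void tapInit(MTAudioProcessingTapRef tap, void *clientInfo, void **tapStorageOut) {
    *tapStorageOut = clientInfo;
}
static void tapFinalize(MTAudioProcessingTapRef tap) {}
static void tapPrepare(MTAudioProcessingTapRef tap, CMItemCount maxFrames,
                       const AudioStreamBasicDescription *format) {}
static void tapUnprepare(MTAudioProcessingTapRef tap) {}

static void tapProcess(MTAudioProcessingTapRef tap, CMItemCount numberFrames,
                       MTAudioProcessingTapFlags flags,
                       AudioBufferList *bufferListInOut,
                       CMItemCount *numberFramesOut,
                       MTAudioProcessingTapFlags *flagsOut) {
    // Pull the source audio through the tap; afterwards bufferListInOut
    // holds the raw PCM that is about to be played
    MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                       flagsOut, NULL, numberFramesOut);
    // ...copy bufferListInOut out to a file or ring buffer here...
}

static void installTap(AVPlayerItem *item) {
    AVAssetTrack *audioTrack =
        [[item.asset tracksWithMediaType:AVMediaTypeAudio] firstObject];

    MTAudioProcessingTapCallbacks callbacks = {
        .version = kMTAudioProcessingTapCallbacksVersion_0,
        .clientInfo = NULL,
        .init = tapInit,
        .finalize = tapFinalize,
        .prepare = tapPrepare,
        .unprepare = tapUnprepare,
        .process = tapProcess,
    };

    MTAudioProcessingTapRef tap = NULL;
    if (MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                   kMTAudioProcessingTapCreationFlag_PostEffects,
                                   &tap) != noErr) {
        return;
    }

    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];
    params.audioTapProcessor = tap;
    CFRelease(tap); // the mix input parameters retain the tap

    AVMutableAudioMix *mix = [AVMutableAudioMix audioMix];
    mix.inputParameters = @[params];
    item.audioMix = mix;
}
```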
That said, the easiest approach nowadays is to play and record using AVAudioEngine.
I'm looking into the feasibility of getting hold of the raw audio stream that is currently playing and doing things with it, such as streaming it over Bluetooth or equalizing it. Is there any way to do this in iOS 8?
For example: apps such as Pandora or Spotify are playing music, and I want to access the audio they are playing.
To process audio from another app, that app needs to participate in Inter-App Audio.
I don't know whether your example apps do that.
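For reference, "participating" on the publishing side looks roughly like the following: the app registers the output unit it already renders through as an Inter-App Audio node. Both four-char codes and the node name here are made up, and the same component must also be declared under AudioComponents in the app's Info.plist:

```objc
#import <AudioToolbox/AudioToolbox.h>

// remoteIOUnit is the RemoteIO AudioUnit the app already renders through.
// 'xmpl' and 'demo' are hypothetical codes; they must match the
// AudioComponents entry in Info.plist.
AudioComponentDescription desc = {
    .componentType = kAudioUnitType_RemoteGenerator,
    .componentSubType = 'xmpl',
    .componentManufacturer = 'demo',
};
OSStatus status = AudioOutputUnitPublish(&desc, CFSTR("Example IAA Node"),
                                         1, remoteIOUnit);
```

Host apps can then discover the published node and pull audio from it.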
I want to process the stereo output of an iOS device, no matter which application produces it, and visualize it in real time.
Is it possible to use the generic output device (or anything else) to get at the audio data that is currently being played? Maybe as an input to a RemoteIO unit?
In other words: I want to do what aurioTouch2 does (the FFT part), but instead of using the microphone as the input source, I want to process everything that is coming out of the speakers at a given time.
Kind regards
If your own app is playing audio using the RemoteIO Audio Unit, you can capture that content. You cannot capture audio your app is playing via many of the other audio APIs. And the iOS security sandbox will prevent your app from capturing audio that any other app is playing (unless that app explicitly exports its audio via the Inter-App Audio API or equivalent).
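In the first case (your own app rendering through RemoteIO), a render-notify callback is one way to grab the output. A minimal sketch, assuming remoteIOUnit is the unit your app already plays through:

```objc
#import <AudioToolbox/AudioToolbox.h>

// Called by Core Audio before and after every render cycle of the unit
static OSStatus renderNotify(void *inRefCon,
                             AudioUnitRenderActionFlags *ioActionFlags,
                             const AudioTimeStamp *inTimeStamp,
                             UInt32 inBusNumber,
                             UInt32 inNumberFrames,
                             AudioBufferList *ioData) {
    if ((*ioActionFlags & kAudioUnitRenderAction_PostRender) && inBusNumber == 0) {
        // ioData holds the samples the app just rendered to the speaker;
        // copy them out here (e.g. with ExtAudioFileWriteAsync)
    }
    return noErr;
}

// Install the observer on the unit your app already renders with
AudioUnitAddRenderNotify(remoteIOUnit, renderNotify, NULL);
```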
Say you want to playback exactly what the iPhone mic is picking up in real-time. Which framework/class would be used?
You'll need the Core Audio framework for this. Specifically, look into audio graphs, audio units, and RemoteIO; there's plenty of sample code for those to get you started.
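A minimal RemoteIO playthrough sketch (mic in, speaker out), with error checks trimmed; the audio session has to be in the play-and-record category first:

```objc
#import <AVFoundation/AVFoundation.h>
#import <AudioToolbox/AudioToolbox.h>

// The session must allow simultaneous input and output
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                       error:NULL];
[[AVAudioSession sharedInstance] setActive:YES error:NULL];

// Create the RemoteIO unit
AudioComponentDescription desc = {
    .componentType = kAudioUnitType_Output,
    .componentSubType = kAudioUnitSubType_RemoteIO,
    .componentManufacturer = kAudioUnitManufacturer_Apple,
};
AudioComponent component = AudioComponentFindNext(NULL, &desc);
AudioUnit ioUnit = NULL;
AudioComponentInstanceNew(component, &ioUnit);

// Enable input on bus 1 (the mic); output on bus 0 is on by default
UInt32 one = 1;
AudioUnitSetProperty(ioUnit, kAudioOutputUnitProperty_EnableIO,
                     kAudioUnitScope_Input, 1, &one, sizeof(one));

// Wire the unit's input element straight to its own output element
AudioUnitConnection connection = {
    .sourceAudioUnit = ioUnit,
    .sourceOutputNumber = 1,
    .destInputNumber = 0,
};
AudioUnitSetProperty(ioUnit, kAudioUnitProperty_MakeConnection,
                     kAudioUnitScope_Input, 0, &connection, sizeof(connection));

AudioUnitInitialize(ioUnit);
AudioOutputUnitStart(ioUnit); // live mic playthrough starts here
```

If you also want to inspect or process the samples rather than just pass them through, replace the direct connection with a render callback that calls AudioUnitRender on input bus 1.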