The Amazing Audio Engine and AVSpeechSynthesizer (iOS)

I use The Amazing Audio Engine to record the output audio of my app, which is played by AVSpeechSynthesizer's speakUtterance method. I used the code provided here: Record all sounds generated by my app in an audio file (not from mic)
I get the output file, but I can't play it (the file size is always 4 KB no matter how long I record; I tried the .aiff and .m4a extensions, but iTunes cannot open either). What could be the problem?
Related question:
I was able to record the app output using AVAudioRecorder activated with AVAudioSessionCategoryPlayAndRecord, but it included input from the microphone. Is there any way to record the app's output only? Perhaps by changing the session category?
ULTIMATE GOAL:
I need to record AVSpeechSynthesizer to an audio file, and since there is no API for this, the only way is to record the audio output as it's being played. I'm planning to have my users wear headphones while it's being played/recorded (and to warn them that no other sounds should be played while recording is happening). I found that I should use Audio Units, but I couldn't find any tutorials on the subject, and Apple's manuals are very sparse.
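Worth noting: since iOS 13, AVSpeechSynthesizer gained write(_:toBufferCallback:), which renders an utterance straight to PCM buffers, so the output can be written to a file without playing it aloud at all. A minimal sketch (the class and the output URL are illustrative):

```swift
import AVFoundation

// Minimal sketch (iOS 13+): render an utterance directly to a file,
// bypassing the speakers and the microphone entirely.
final class UtteranceRecorder {
    private let synthesizer = AVSpeechSynthesizer()  // keep a strong reference while rendering
    private var file: AVAudioFile?

    func render(_ text: String, to url: URL) {
        let utterance = AVSpeechUtterance(string: text)
        synthesizer.write(utterance) { [weak self] buffer in
            guard let self = self,
                  let pcm = buffer as? AVAudioPCMBuffer,
                  pcm.frameLength > 0 else { return }  // an empty buffer signals the end
            do {
                if self.file == nil {
                    // Create the file lazily so its format matches the synthesizer's output.
                    self.file = try AVAudioFile(forWriting: url, settings: pcm.format.settings)
                }
                try self.file?.write(from: pcm)
            } catch {
                print("Failed to write buffer: \(error)")
            }
        }
    }
}
```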

Related

How to record all audio generated by my app (whether from AVPlayer or others) in iOS? (not the mic)

My app generates audio from many sources: AVPlayer, Audio Units, and so on. I want to record all of the audio (not from the mic, because that would capture the user's voice) into a single file. Is there any way to get the final mixed audio data before it is sent to the playback hardware?
I've tried Audio Units and The Amazing Audio Engine. However, they can only record audio played through an Audio Unit.
I've also read about MTAudioProcessingTap, but it has to be injected into each AVPlayer item, and mixing all the audio that way seems complicated.

Record audio streamed from AVPlayer

I'm trying to record/capture audio that's being streamed via an AVPlayer. AVAudioRecorder only records from the microphone, which might work if the audio is played through the speaker (although quality will suffer), but it definitely won't work if headphones are plugged in.
I've looked everywhere but still haven't found a solution that works for me. Would I need to grab the audio buffers? Is there another way to capture what's being played?
You can grab audio buffers by adding an MTAudioProcessingTap to your AVPlayer.
The process is a little convoluted, but there is some information out there.
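For file-based assets, the attachment looks roughly like this (a sketch; the function name is illustrative, and for live HTTP streams the audio track may not be available up front):

```swift
import AVFoundation
import MediaToolbox

// Sketch: attach a process tap to an AVPlayerItem's audio track. Every buffer the
// player renders passes through the process callback, where you can copy it out.
func makeTappedPlayerItem(url: URL) -> AVPlayerItem {
    let asset = AVURLAsset(url: url)
    let item = AVPlayerItem(asset: asset)

    var callbacks = MTAudioProcessingTapCallbacks(
        version: kMTAudioProcessingTapCallbacksVersion_0,
        clientInfo: nil,
        init: nil, finalize: nil, prepare: nil, unprepare: nil,
        process: { tap, numberFrames, _, bufferListInOut, numberFramesOut, flagsOut in
            // Pull the source audio through the tap; the samples land in bufferListInOut.
            MTAudioProcessingTapGetSourceAudio(tap, numberFrames, bufferListInOut,
                                               flagsOut, nil, numberFramesOut)
            // ...copy bufferListInOut to your recorder here (real-time thread!)...
        })

    var tap: Unmanaged<MTAudioProcessingTap>?
    guard MTAudioProcessingTapCreate(kCFAllocatorDefault, &callbacks,
                                     kMTAudioProcessingTapCreationFlag_PostEffects,
                                     &tap) == noErr,
          let audioTrack = asset.tracks(withMediaType: .audio).first else {
        return item  // no tap attached; stream may expose its track only later
    }

    let params = AVMutableAudioMixInputParameters(track: audioTrack)
    params.audioTapProcessor = tap?.takeRetainedValue()
    let mix = AVMutableAudioMix()
    mix.inputParameters = [params]
    item.audioMix = mix
    return item
}
```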
The easiest approach nowadays is to play and record using AVAudioEngine.
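A minimal AVAudioEngine sketch: everything the engine routes through its main mixer can be tapped and written to a file while it plays. Note this only captures audio you play through the engine, not audio from a separate AVPlayer; the file names are illustrative.

```swift
import AVFoundation

// Sketch: play a local file through AVAudioEngine and tap the main mixer,
// so everything the engine sends to the hardware also lands in a .caf file.
func playAndCapture(sourceURL: URL) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: nil)

    let source = try AVAudioFile(forReading: sourceURL)
    let format = engine.mainMixerNode.outputFormat(forBus: 0)
    let captureURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("capture.caf")
    let capture = try AVAudioFile(forWriting: captureURL, settings: format.settings)

    // The tap sees the mixed output of every node feeding the mixer.
    engine.mainMixerNode.installTap(onBus: 0, bufferSize: 4096, format: format) { buffer, _ in
        try? capture.write(from: buffer)
    }

    try engine.start()
    player.scheduleFile(source, at: nil)
    player.play()
    return engine  // caller keeps the engine alive; removeTap(onBus: 0) when done
}
```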

iOS: Can I use MPMoviePlayerController to play an .mp3 URL?

My app needs to play some music files, such as .mp3. I would like to use MPMoviePlayerController because it implements all the UI for me, i.e. I do not want to bother building a progress slider and the like.
I tested it with an .mp3 file and it worked fine, but I do not know whether it is acceptable to use it this way, since its name says "movie player" and it seems to be intended for playing movies. Would Apple reject this? Thank you.
For playing audio from a file or memory, AVAudioPlayer is your best option, but unfortunately it doesn't support network streams, while MPMoviePlayerController does.
From the documentation:
An instance of the AVAudioPlayer class, called an audio player, provides playback of audio data from a file or memory. Apple recommends that you use this class for audio playback unless you are playing audio captured from a network stream or require very low I/O latency.
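For reference, basic AVAudioPlayer playback of a local file is only a few lines (a sketch; the resource name is illustrative):

```swift
import AVFoundation

// Sketch: minimal AVAudioPlayer usage for a bundled file.
// Keep a strong reference to the player, or playback stops when it is deallocated.
func makePlayer() throws -> AVAudioPlayer {
    guard let url = Bundle.main.url(forResource: "song", withExtension: "mp3") else {
        throw NSError(domain: "MissingResource", code: 0)
    }
    let player = try AVAudioPlayer(contentsOf: url)
    player.prepareToPlay()
    return player
}
// let player = try makePlayer(); player.play()
```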
As for Apple's validation, I don't think your application can be rejected for using the Media Player framework to play an audio file. In fact, here they explicitly say you can do just that:
Choose the right technology for your needs:
To play the audio items in a user's iPod library, or to play local or streamed movies, use the Media Player framework. Classes in this framework automatically support sending audio and video to AirPlay devices such as Apple TV.
Not sure about performance and memory issues though!
Best of luck.

Audio Unit: Use sound output as input source

I want to process the stereo output of an iOS device, no matter which application produces it, and visualize it in real time.
Is it possible to use the generic output device (or anything else) to get at the audio data that is currently being played? Maybe as input to a RemoteIO unit?
In other words: I want to do what aurioTouch2 does (FFT only), but instead of using the microphone as the input source, I want to process everything that is coming out of the speakers at a given time.
Kind regards
If your own app is playing using the RemoteIO Audio Unit, you can capture that content. You cannot capture audio your app is playing using many of the other audio APIs. The iOS security sandbox will prevent your app from capturing audio that any other app is playing (unless that app explicitly exports audio via the Inter-App Audio API or equivalent).
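If your app does use RemoteIO, a render-notify callback is one way to observe your own output; a sketch, assuming remoteIOUnit is your app's already-configured RemoteIO AudioUnit:

```swift
import AudioToolbox

// Sketch: observe this app's RemoteIO output with a render-notify callback.
func installOutputObserver(on remoteIOUnit: AudioUnit) {
    let status = AudioUnitAddRenderNotify(remoteIOUnit, { _, ioActionFlags, _, _, inNumberFrames, ioData in
        // The notify fires before and after each render cycle; post-render,
        // ioData holds the frames the unit just produced for the hardware.
        if ioActionFlags.pointee.contains(.unitRenderAction_PostRender),
           let bufferList = ioData {
            // ...feed bufferList / inNumberFrames to an FFT or a file writer here
            //    (this runs on the real-time audio thread: no locks, no allocation)...
            _ = (bufferList, inNumberFrames)
        }
        return noErr
    }, nil)
    assert(status == noErr, "AudioUnitAddRenderNotify failed: \(status)")
}
```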

How to record audio coming out of the iPhone speakers?

Hey, I'm a new Objective-C developer. I'm trying to record the audio coming out of the iPhone speakers. I can capture audio with the microphone and record it, but I cannot record the audio produced by my iPhone itself. Please help me.
Unfortunately, there is no way to directly capture from the "audio bus". You can capture the audio via the internal microphone or a headset microphone, but that's it. If you are rendering the audio yourself, you could of course also write that audio out to a file at the same time. That's pretty much your only option.
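If you do control the rendering, here is a sketch of that write-it-out-yourself approach using ExtAudioFile's non-blocking writer (the .caf file type and the stream format argument are assumptions):

```swift
import AudioToolbox

// Sketch: a non-blocking file writer you can feed from a render callback.
func makeFileWriter(url: URL, format: AudioStreamBasicDescription) -> ExtAudioFileRef? {
    var asbd = format
    var file: ExtAudioFileRef?
    guard ExtAudioFileCreateWithURL(url as CFURL, kAudioFileCAFType, &asbd, nil,
                                    AudioFileFlags.eraseFile.rawValue, &file) == noErr,
          let file = file else { return nil }
    // Priming call: sets up the async write machinery off the real-time thread.
    _ = ExtAudioFileWriteAsync(file, 0, nil)
    return file
}

// Inside the render callback, append the frames you just rendered without blocking:
//     ExtAudioFileWriteAsync(writer, inNumberFrames, ioData)
// and call ExtAudioFileDispose(writer) when recording ends.
```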
Yes, you only get a handle on the audio generated by your own process. There is no way to get the audio generated by the rest of the system.
